# Union Subgraph Neural Networks

Jiaxing Xu, Aihu Zhang, Qingtian Bian, Vijay Prakash Dwivedi, Yiping Ke

Published: 2023-05-25 | Link: http://arxiv.org/abs/2305.15747v3
###### Abstract
Graph Neural Networks (GNNs) are widely used for graph representation learning in many application domains. The expressiveness of vanilla GNNs is upper-bounded by the 1-dimensional Weisfeiler-Leman (1-WL) test as they operate on rooted subtrees through iterative message passing. In this paper, we empower GNNs by injecting neighbor-connectivity information extracted from a new type of substructure. We first investigate different kinds of connectivities existing in a local neighborhood and identify a substructure called union subgraph, which is able to capture the complete picture of the 1-hop neighborhood of an edge. We then design a shortest-path-based substructure descriptor that possesses three nice properties and can effectively encode the high-order connectivities in union subgraphs. By infusing the encoded neighbor connectivities, we propose a novel model, namely Union Subgraph Neural Network (UnionSNN), which is proven to be strictly more powerful than 1-WL in distinguishing non-isomorphic graphs. Additionally, the local encoding from union subgraphs can also be injected into arbitrary message-passing neural networks (MPNNs) and Transformer-based models as a plugin. Extensive experiments on 17 benchmarks of both graph-level and node-level tasks demonstrate that UnionSNN outperforms state-of-the-art baseline models, with competitive computational efficiency. The injection of our local encoding into existing models is able to boost the performance by up to 11.09%.
## 1 Introduction
With the ubiquity of graph-structured data emerging from various modern applications, Graph Neural Networks (GNNs) have gained increasing attention from both researchers and practitioners. GNNs have been applied to many application domains, including quantum chemistry [8; 10; 31], social science [14; 44; 60], transportation [9; 40] and neuroscience [2; 52], and have attained promising results on graph classification [31; 60], node classification [43] and link prediction [45; 63] tasks.
Most GNNs are limited in terms of their expressive power. Xu et al. [57] show that GNNs are at most as powerful as the 1-dimensional Weisfeiler-Leman (1-WL) test [54] in distinguishing non-isomorphic
graph structures. This is because a vanilla GNN essentially operates on a subtree rooted at each node in its message passing, _i.e._, it treats every neighbor of the node equally in its message aggregation. In this regard, it overlooks any discrepancy that may exist in the connectivities between neighbors. To address this limitation, efforts have been devoted to incorporating local substructure information into GNNs. Several studies attempt to encode such local information through induced subgraphs [65], overlap subgraphs [55] and spatial encoding [4] to enhance GNNs' expressiveness. But the local structures they choose are not able to capture the complete picture of the 1-hop neighborhood of an edge. Some others incorporate shortest path information into edges in message passing via distance encoding [26], adaptive breadth/depth functions [29] and an affinity matrix [51] to control the messages from neighbors at different distances. However, the descriptor used to encode the substructure may overlook some connectivities between neighbors. Furthermore, some of the above models also suffer from high computational cost due to the incorporation of certain substructures.
In this paper, we aim to develop a model that overcomes the above drawbacks and yet is able to empower GNNs' expressiveness. (1) We define a new type of substructure named union subgraphs, each capturing the entire closed neighborhood w.r.t. an edge. (2) We design an effective substructure descriptor that encodes high-order connectivities and is easy to incorporate into arbitrary message-passing neural networks (MPNNs) or Transformer-based models. (3) We propose a new model, namely Union Subgraph Neural Network (UnionSNN), which is strictly more expressive than vanilla GNNs (1-WL) in theory and also computationally efficient in practice. Our contributions are summarized as follows:
* We investigate different types of connectivities existing in the local neighborhood and identify the substructure, named "union subgraph", that is able to capture the complete 1-hop neighborhood.
* We abstract three desirable properties for a good substructure descriptor and design a shortest-path-based descriptor that possesses all the properties with high-order connectivities encoded.
* We propose a new model, UnionSNN, which incorporates the information extracted from union subgraphs into the message passing. We also show how our local encoding can be flexibly injected into any arbitrary MPNNs and Transformer-based models. We theoretically prove that UnionSNN is more expressive than 1-WL. We also show that UnionSNN is stronger than 3-WL in some cases.
* We perform extensive experiments on both graph-level and node-level tasks. UnionSNN consistently outperforms baseline models on 17 benchmark datasets, with competitive efficiency. The injection of our local encoding is able to boost the performance of base models by up to 11.09%, which justifies the effectiveness of our proposed union subgraph and substructure descriptor in capturing local information.
## 2 Related Work
### Substructure-Enhanced GNNs
In recent years, several GNN architectures have been designed to enhance their expressiveness by encoding local substructures. GraphSNN [55] brings the information of overlap subgraphs into the message passing scheme as a structural coefficient. However, the overlap subgraph and the substructure descriptor used by GraphSNN are not powerful enough to distinguish all non-isomorphic substructures in the 1-hop neighborhood. Zhao et al. [65] encode the induced subgraph for each node and inject it into node representations. Graph Substructure Network [4] introduces structural biases in the aggregation function to break the symmetry in message passing. For these two methods, the neighborhood under consideration should be pre-defined, and the subgraph matching is extremely expensive (\(O(n^{k})\) for \(k\)-tuple substructure) when the substructure gets large. Similarly, a line of research [3; 17; 47] develops new WL aggregation schemes to take into account substructures like cycles or cliques. Despite these enhancements, performing cycle counting is very time-consuming. Other Transformer-based methods [11; 25; 33; 56] incorporate local structural information via positional encoding [27; 62]. Graphormer [59] combines the node degree and the shortest path information for spatial encoding, while other works [13; 26] employ random walk based encodings that can encode \(k\)-hop neighborhood information of a node. However, these positional encodings
only consider relative distances from the center node and ignore high-order connectivities between the neighbors.
### Path-Related GNNs
A significant amount of work has focused on the application of shortest paths and related techniques to GNNs. Li et al. [26] present a distance encoding module to augment node features and control the receptive field of message passing. GeniePath [29] proposes an adaptive breadth function to learn the importance of different-sized neighborhoods and an adaptive depth function to extract and filter signals from neighbors within different distances. PathGNN [46] imitates how the Bellman-Ford algorithm solves the shortest path problem in generating weights when updating node features. SPN [1] designs a scheme in which the representation of a node is propagated to each node in its shortest path neighborhood. Some recent works adapt the concept of curvature from differential geometry to reflect the connectivity between nodes and possible bottleneck effects. CurvGN [58] reflects how easily information flows between two nodes by graph curvature information, and exploits curvature to reweigh different channels of messages. Topping et al. [48] propose Balanced Forman curvature that better reflects the edges having bottleneck effects, and alleviate the over-squashing problem of GNNs by rewiring graphs. SNALS [51] utilizes an affinity matrix based on shortest paths to encode the structural information of hyperedges. Our method differs from these existing methods by introducing a shortest-path-based substructure descriptor for distinguishing non-isomorphic substructures.
## 3 Local Substructures to Empower MPNNs
In this section, we first introduce MPNNs. We then investigate what kind of local substructures are beneficial to improve the expressiveness of MPNNs.
### Message Passing Neural Networks
We represent a graph as \(G=(V,E,X)\), where \(V=\{v_{1},...,v_{n}\}\) is the set of nodes, \(E\subseteq V\times V\) is the set of edges, and \(X=\{\mathbf{x}_{v}\mid v\in V\}\) is the set of node features. The set of neighbors of node \(v\) is denoted by \(\mathcal{N}(v)=\{u\in V\mid(v,u)\in E\}\). The \(l\)-th layer of an MPNN [57] can be written as:
\[\mathbf{h}_{v}^{(l)}=\mathrm{AGG}^{(l-1)}(\mathbf{h}_{v}^{(l-1)},\mathrm{MSG }^{(l-1)}(\{\mathbf{h}_{u}^{(l-1)},u\in\mathcal{N}(v)\})), \tag{1}\]
where \(\mathbf{h}_{v}^{(l)}\) is the representation of node \(v\) at the \(l\)-th layer, \(\mathbf{h}_{v}^{(0)}=\mathbf{x}_{v}\), \(\mathrm{AGG}(\cdot)\) and \(\mathrm{MSG}(\cdot)\) denote the aggregation and message functions, respectively.
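For concreteness, the sketch below (our own, not tied to any published model) instantiates Eq. (1) with a linear message map, sum pooling and a ReLU update:

```python
import numpy as np

def mpnn_layer(h, neighbors, W_msg, W_agg):
    # h:         (n, d) array of node representations h^{(l-1)}
    # neighbors: dict mapping each node index to the list of its neighbors
    # W_msg, W_agg: (d, d) weight matrices instantiating MSG and AGG
    h_new = np.zeros_like(h)
    for v in range(h.shape[0]):
        # MSG: transform each neighbor's state, then pool by summation
        msgs = [np.maximum(h[u] @ W_msg, 0.0) for u in neighbors[v]]
        pooled = np.sum(msgs, axis=0) if msgs else np.zeros(h.shape[1])
        # AGG: combine the node's own state with the pooled message
        h_new[v] = np.maximum((h[v] + pooled) @ W_agg, 0.0)
    return h_new
```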
### Local Substructures to Improve MPNNs
According to Eq. (1), an MPNN updates the representation of a node isotropically at each layer and ignores the structural connections between the neighbors of the node. Essentially, the local substructure utilized in the message passing of an MPNN is a subtree rooted at the node. Consequently, if two non-isomorphic graphs have the same set of rooted subtrees, they cannot be distinguished by an MPNN (and also 1-WL). Such an example is shown in Figure 1(a). A simple fix to this problem is to encode the local structural information about each neighbor, based on which neighbors are treated unequally in the message passing. One natural question arises: **which substructure shall we choose to characterize the 1-hop local information?**
To answer the above question, we consider two adjacent nodes \(v\) and \(u\), and discuss different types of edges that may exist in their neighbor sets, \(\mathcal{N}(v)\) and \(\mathcal{N}(u)\). We define the closed neighbor set of node \(v\) as \(\tilde{\mathcal{N}}(v)=\mathcal{N}(v)\cup\{v\}\). The induced subgraph of \(\tilde{\mathcal{N}}(v)\) is denoted by \(S_{v}\), which defines the closed neighborhood of \(v\). The common closed neighbor set of \(v\) and \(u\) is \(\mathcal{N}_{vu}=\tilde{\mathcal{N}}(v)\cap\tilde{\mathcal{N}}(u)\) and the exclusive neighbor set of \(v\) w.r.t \(u\) is defined as \(\mathcal{N}_{v}^{-u}=\tilde{\mathcal{N}}(v)-\mathcal{N}_{vu}\). As shown in Figure 1(b), there are four types of edges in the closed neighborhood of \(\{v,u\}\):
* \(E_{1}^{vu}\subseteq\mathcal{N}_{vu}\times\mathcal{N}_{vu}\): edges between the common closed neighbors of \(v\) and \(u\), such as \((a,b)\);
* \(E_{2}^{vu}\subseteq(\mathcal{N}_{vu}\times\mathcal{N}_{v}^{-u})\cup(\mathcal{N}_{vu}\times\mathcal{N}_{u}^{-v})\): edges between a common closed neighbor of \(v\) and \(u\) and an exclusive neighbor of \(v\)/\(u\), such as \((a,d)\);
* \(E_{3}^{vu}\subseteq\mathcal{N}_{v}^{-u}\times\mathcal{N}_{u}^{-v}\): edges between two exclusive neighbors of \(v\) and \(u\) from different sides, such as \((c,d)\);
* \(E_{4}^{vu}\subseteq(\mathcal{N}_{v}^{-u}\times\mathcal{N}_{v}^{-u})\cup(\mathcal{N}_{u}^{-v}\times\mathcal{N}_{u}^{-v})\): edges between two exclusive neighbors of \(v\) or \(u\) from the same side, such as \((d,f)\).
We now discuss three different local substructures, each capturing a different set of edges.
**Overlap Subgraph [55].** The overlap subgraph of two adjacent nodes \(v\) and \(u\) is defined as \(S_{v\cap u}=S_{v}\cap S_{u}\). The overlap subgraph contains only edges in \(E_{1}^{vu}\).
**Union Minus Subgraph.** The union minus subgraph of two adjacent nodes \(v\) and \(u\) is defined as \(S_{v\cup u}^{-}=S_{v}\cup S_{u}\). The union minus subgraph consists of edges in \(E_{1}^{vu}\), \(E_{2}^{vu}\) and \(E_{4}^{vu}\).
**Union Subgraph**. The union subgraph of two adjacent nodes \(v\) and \(u\), denoted as \(S_{v\cup u}\), is defined as the induced subgraph of \(\tilde{\mathcal{N}}(v)\cup\tilde{\mathcal{N}}(u)\). The union subgraph contains all four types of edges mentioned above.
It is obvious that the union subgraph captures the whole picture of the 1-hop neighborhood of two adjacent nodes. This subgraph captures all types of connectivities within the neighborhood, providing an ideal local substructure for enhancing the expressive power of MPNNs. We illustrate how effective different local substructures are in improving MPNNs through an example in Appendix A. Note that we restrict the discussion to the 1-hop neighborhood because we aim to develop a model based on the MPNN scheme, in which a single layer of aggregation is performed on the 1-hop neighbors.
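The definitions above translate directly into code; the following NetworkX sketch (ours, for illustration) extracts the union subgraph of an edge and sorts its edges into the four types. Note that the center edge \((v,u)\) itself lands in \(E_{1}^{vu}\), since \(v\) and \(u\) both belong to \(\mathcal{N}_{vu}\).

```python
import networkx as nx

def union_subgraph(G, v, u):
    # Induced subgraph on the union of the closed neighborhoods of v and u
    closed_v = set(G[v]) | {v}
    closed_u = set(G[u]) | {u}
    return G.subgraph(closed_v | closed_u)

def edge_types(G, v, u):
    # Split the edges of S_{v ∪ u} into the four types E1-E4 of Section 3.2
    closed_v, closed_u = set(G[v]) | {v}, set(G[u]) | {u}
    common = closed_v & closed_u                      # N_vu
    excl_v, excl_u = closed_v - common, closed_u - common
    E1, E2, E3, E4 = set(), set(), set(), set()
    for a, b in union_subgraph(G, v, u).edges():
        if a in common and b in common:
            E1.add((a, b))                            # both endpoints common
        elif (a in common) != (b in common):
            E2.add((a, b))                            # common-to-exclusive
        elif (a in excl_v and b in excl_u) or (a in excl_u and b in excl_v):
            E3.add((a, b))                            # exclusive, different sides
        else:
            E4.add((a, b))                            # exclusive, same side
    return E1, E2, E3, E4
```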
### Union Isomorphism
We now proceed to define the isomorphic relationship between the neighborhoods of two nodes \(i\) and \(j\) based on union subgraphs. The definition follows that of overlap isomorphism in [55].
**Overlap Isomorphism.**\(S_{i}\) and \(S_{j}\) are overlap-isomorphic, denoted as \(S_{i}\simeq_{overlap}S_{j}\), if there exists a bijective mapping \(g\): \(\tilde{\mathcal{N}}(i)\rightarrow\tilde{\mathcal{N}}(j)\) such that \(g(i)=j\), and for any \(v\in\mathcal{N}(i)\) and \(g(v)=u\), \(S_{i\cap v}\) and \(S_{j\cap u}\) are isomorphic (ordinary graph isomorphic).
**Union Isomorphism**. \(S_{i}\) and \(S_{j}\) are union-isomorphic, denoted as \(S_{i}\simeq_{union}S_{j}\), if there exists a bijective mapping \(g\): \(\tilde{\mathcal{N}}(i)\rightarrow\tilde{\mathcal{N}}(j)\) such that \(g(i)=j\), and for any \(v\in\mathcal{N}(i)\) and \(g(v)=u\), \(S_{i\cup v}\) and \(S_{j\cup u}\) are isomorphic (ordinary graph isomorphic).
**Theorem 1**.: _If \(S_{i}\simeq_{union}S_{j}\), then \(S_{i}\simeq_{overlap}S_{j}\); but not vice versa._
Theorem 1 states that union-isomorphism is stronger than overlap-isomorphism. The proofs of all theorems are provided in Appendix B. We provide an example of a pair of non-isomorphic graphs that are distinguishable under union-isomorphism but not overlap-isomorphism or 1-WL (subtree). Please refer to Figure 6 in Appendix C for detailed discussions.
Figure 1: (a) A pair of non-isomorphic graphs not distinguishable by 1-WL; (b) An example of various local substructures for two adjacent nodes \(v\) and \(u\).
UnionSNN
In this section, we first discuss how to design our substructure descriptor so that it well captures the structural information in union subgraphs with several desirable properties. We then present our model UnionSNN, which effectively incorporates the information encoded by the substructure descriptor to MPNNs and Transformer-based models. Finally, we show that UnionSNN has a stronger expressiveness than 1-WL and is superior to GraphSNN in its design.
### Design of Substructure Descriptor Function
Let \(\mathcal{U}=\{S_{v\cup u}|(v,u)\in E\}\) be the set of union subgraphs in \(G\). In order to fuse the information of union subgraphs in message passing, we need to define a function \(f(\cdot)\) to describe the structural information of each \(S_{v\cup u}\in\mathcal{U}\). Ideally, given two union subgraphs centered at node \(v\), \(S_{v\cup u}=(V_{v\cup u},E_{v\cup u})\) and \(S_{v\cup u^{\prime}}=(V_{v\cup u^{\prime}},E_{v\cup u^{\prime}})\), we want \(f(S_{v\cup u})=f(S_{v\cup u^{\prime}})\) iff \(S_{v\cup u}\) and \(S_{v\cup u^{\prime}}\) are isomorphic. We abstract the following properties of a good substructure descriptor function \(f(\cdot)\):
* **Size Awareness**. \(f(S_{v\cup u})\neq f(S_{v\cup u^{\prime}})\) if \(|V_{v\cup u}|\neq|V_{v\cup u^{\prime}}|\) or \(|E_{v\cup u}|\neq|E_{v\cup u^{\prime}}|\);
* **Connectivity Awareness**. \(f(S_{v\cup u})\neq f(S_{v\cup u^{\prime}})\) if \(|V_{v\cup u}|=|V_{v\cup u^{\prime}}|\) and \(|E_{v\cup u}|=|E_{v\cup u^{\prime}}|\) but \(S_{v\cup u}\) and \(S_{v\cup u^{\prime}}\) are not isomorphic;
* **Isomorphic Invariance**. \(f(S_{v\cup u})=f(S_{v\cup u^{\prime}})\) if \(S_{v\cup u}\) and \(S_{v\cup u^{\prime}}\) are isomorphic.
Figure 2 illustrates the properties. Herein, we design \(f(\cdot)\) as a function that transforms \(S_{v\cup u}\) to a path matrix \(\mathbf{P}^{vu}\in\mathbb{R}^{|V_{v\cup u}|\times|V_{v\cup u}|}\) such that each entry:
\[\mathbf{P}^{vu}_{ij}=\mathrm{PathLen}(i,j,S_{v\cup u}),i,j\in V_{v\cup u}, \tag{2}\]
where \(\mathrm{PathLen}(\cdot)\) denotes the length of the shortest path between \(i\) and \(j\) in \(S_{v\cup u}\). We choose the path matrix over the adjacency matrix or the Laplacian matrix as it explicitly encodes high-order connectivities between the neighbors. In addition, with a fixed order of nodes, we can get a unique \(\mathbf{P}^{vu}\) for a given \(S_{v\cup u}\), and vice versa. We formulate it in Theorem 2.
**Theorem 2**.: _With a fixed order of nodes in the path matrix, we can obtain a unique path matrix \(\mathbf{P}^{vu}\) for a given union subgraph \(S_{v\cup u}\), and vice versa._
It is obvious that our proposed \(f(\cdot)\) satisfies the above-mentioned three properties, with a node permutation applied in the isomorphic case.
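As a sketch of how Eq. (2) can be evaluated in practice (function names are ours), the path matrix is an all-pairs shortest-path computation restricted to the union subgraph; union subgraphs are always connected, since every node is adjacent to \(v\) or \(u\) and \((v,u)\in E\):

```python
import numpy as np
import networkx as nx

def path_matrix(S, nodes=None):
    # P^{vu}: shortest-path lengths between all node pairs of the union
    # subgraph S, laid out in a fixed node order (Eq. (2)).
    nodes = list(S.nodes()) if nodes is None else nodes
    idx = {x: i for i, x in enumerate(nodes)}
    P = np.zeros((len(nodes), len(nodes)))
    for src, lengths in nx.all_pairs_shortest_path_length(S):
        for dst, dist in lengths.items():
            P[idx[src], idx[dst]] = dist
    return P
```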
**Discussion on other substructure descriptor functions**. In the literature, some other functions have also been proposed to describe graph substructures. (1) Edge Betweenness [5] is defined by the number of shortest paths between any pair of nodes in a (sub)graph \(G\) that pass through an edge. When applying the edge betweenness to \((v,u)\) in \(S_{v\cup u}\), the metric would remain the same on two different union subgraphs, one with an edge in \(E_{4}^{vu}\) and one without. This shows that edge betweenness does not satisfy Size Awareness; (2) Wijesinghe and Wang [55] put forward a substructure descriptor as a function of the number of nodes and edges. This descriptor fails to distinguish non-isomorphic subgraphs with the same size, and thus does not satisfy Connectivity Awareness; (3) Discrete Graph Curvature, e.g., Ollivier-Ricci curvature [28; 37], has been introduced to MPNNs in recent years [58]. Ricci curvature first computes for each node a probability vector of length \(|V|\) that characterizes a uniform propagation distribution in the neighborhood. It then defines the curvature of two adjacent nodes as the Wasserstein distance of their corresponding probability vectors. Similar to edge betweenness, curvature does not take into account the edges in \(E_{4}^{vu}\) in its computation and thus does not satisfy Size Awareness either. We detail the definitions of these substructure descriptor functions in Appendix D.

Figure 2: Three properties that a good substructure descriptor function \(f(\cdot)\) should exhibit.
### Network Design
For the path matrix of an edge \((v,u)\) to be used in message passing, we need to further encode it as a scalar. We choose to perform Singular Value Decomposition (SVD) [18] on the path matrix and extract the singular values:
\[\mathbf{P}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{*}. \tag{3}\]
The sum of the singular values of \(\mathbf{P}^{vu}\), denoted as \(a^{vu}=\mathrm{sum}(\mathbf{\Sigma}^{vu})\), is used as the local structural coefficient of the edge \((v,u)\in E\). Note that since the local structure never changes in message passing, we can compute the structural coefficients in preprocessing before the training starts. A nice property of this structural coefficient is that, it is **permutation invariant** thanks to the use of SVD and the sum operator. With an arbitrary order of nodes, the computed \(a^{vu}\) remains the same, which removes the condition required by Theorem 2.
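The coefficient is thus the nuclear norm of the path matrix. A short sketch (ours) with a numerical check of the permutation invariance, which holds because reordering the nodes conjugates \(\mathbf{P}^{vu}\) by a permutation matrix and leaves the singular values unchanged:

```python
import numpy as np

def structural_coefficient(P):
    # a^{vu} = sum of the singular values of the path matrix (Eq. (3))
    return np.linalg.svd(P, compute_uv=False).sum()

# Permutation invariance on a random symmetric "distance-like" matrix:
rng = np.random.default_rng(0)
P = rng.integers(1, 4, size=(6, 6))
P = np.triu(P, 1); P = P + P.T                # symmetric, zero diagonal
perm = rng.permutation(6)
assert np.isclose(structural_coefficient(P),
                  structural_coefficient(P[np.ix_(perm, perm)]))
```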
**UnionSNN.** We now present our model, namely Union Subgraph Neural Network (UnionSNN), which utilizes union-subgraph-based structural coefficients to incorporate local substructures in message passing. For each vertex \(v\in V\), the node representation at the \(l\)-th layer is generated by:
\[\mathbf{h}_{v}^{(l)}=\mathrm{MLP}_{1}^{(l-1)}\Big((1+\epsilon^{(l-1)})\mathbf{h}_{v}^{(l-1)}+\sum_{u\in\mathcal{N}(v)}\mathrm{Trans}^{(l-1)}(\tilde{a}^{vu})\mathbf{h}_{u}^{(l-1)}\Big), \tag{4}\]
where \(\epsilon^{(l-1)}\) is a learnable scalar parameter and \(\tilde{a}^{vu}=\frac{a^{vu}}{\sum_{u^{\prime}\in\mathcal{N}(v)}a^{vu^{\prime}}}\). \(\mathrm{MLP}_{1}(\cdot)\) denotes a multilayer perceptron (MLP) with the non-linear function ReLU. To transform the weight \(\tilde{a}^{vu}\) to align with the multi-channel representation \(\mathbf{h}_{u}^{(l-1)}\), we follow [58] and apply a transformation function \(\mathrm{Trans}(\cdot)\) for better expressiveness and easier training:
\[\mathrm{Trans}(a)=\mathrm{softmax}(\mathrm{MLP}_{2}(a)), \tag{5}\]
where \(\mathrm{MLP}_{2}\) denotes an MLP with ReLU and a channel-wise softmax function \(\mathrm{softmax}(\cdot)\) normalizes the outputs of MLP separately on each channel. For better understanding, we provide the pseudo-code of UnionSNN in Appendix E.
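A minimal PyTorch sketch of the layer in Eqs. (4)-(5) is given below. The class and argument names are ours, the MLP depths are arbitrary choices, and taking the channel-wise softmax over the whole edge set is one plausible reading of Eq. (5); the reference pseudo-code is the one in Appendix E.

```python
import torch
import torch.nn as nn

class UnionSNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))                      # epsilon^{(l-1)}
        self.mlp1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim))               # MLP_1
        self.mlp2 = nn.Sequential(nn.Linear(1, dim), nn.ReLU(),
                                  nn.Linear(dim, dim))               # Trans = softmax(MLP_2)

    def forward(self, h, edge_index, a_tilde):
        # h: (n, dim) node states; edge_index: (2, m) directed pairs (v, u);
        # a_tilde: (m,) precomputed normalized structural coefficients.
        v, u = edge_index
        w = torch.softmax(self.mlp2(a_tilde.unsqueeze(-1)), dim=0)   # per-channel weights
        agg = torch.zeros_like(h).index_add_(0, v, w * h[u])         # sum over u in N(v)
        return self.mlp1((1 + self.eps) * h + agg)
```

For an undirected graph, `edge_index` should list both directions of every edge so that each node aggregates over its full neighborhood.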
**As a Plugin to Empower Other GNNs.** In addition to a standalone UnionSNN network, our union-subgraph-based structural coefficients could also be incorporated into other GNNs in a flexible and yet effective manner. For arbitrary MPNNs as in Eq. (1), we can plugin our structural coefficients via an element-wise multiplication:
\[\mathbf{h}_{v}^{(l)}=\mathrm{AGG}^{(l-1)}(\mathbf{h}_{v}^{(l-1)},\mathrm{MSG}^{(l-1)}(\{\mathrm{Trans}^{(l-1)}(\tilde{a}^{vu}) \mathbf{h}_{u}^{(l-1)},u\in\mathcal{N}(v)\})). \tag{6}\]
For transformer-based models, inspired by the spatial encoding in Graphormer [59], we can inject our structural coefficients into the attention matrix as a bias term:
\[A_{vu}=\frac{\left(h_{v}W_{Q}\right)\left(h_{u}W_{K}\right)^{T}}{\sqrt{d}}+ \mathrm{Trans}(\tilde{a}^{vu}), \tag{7}\]
where the definition of \(\mathrm{Trans}(\cdot)\) is the same as Eq. (5) and shared across all layers, \(h_{v},h_{u}\in\mathbb{R}^{1\times d}\) are the node representations of \(v\) and \(u\), \(W_{Q},W_{K}\in\mathbb{R}^{d\times d}\) are the parameter matrices, and \(d\) is the hidden dimension of \(h_{v}\) and \(h_{u}\). The detailed interpretation is presented in Appendix F.
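In code, Eq. (7) is a one-line change to standard attention: the precomputed bias is added to the score matrix before the usual row-wise softmax. A sketch (ours), assuming the bias term is stored as a dense \(n\times n\) matrix with zeros for non-adjacent pairs:

```python
import torch

def biased_attention_scores(H, W_Q, W_K, trans_a):
    # H: (n, d) node states; W_Q, W_K: (d, d); trans_a: (n, n) bias holding
    # Trans(a~^{vu}) for adjacent pairs and 0 elsewhere (our convention).
    d = H.shape[1]
    return (H @ W_Q) @ (H @ W_K).T / d ** 0.5 + trans_a
```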
### Expressive Power of UnionSNN
We formalize the following theorem to show that UnionSNN is more powerful than 1-WL test in terms of expressive power.
**Theorem 3**.: _UnionSNN is more expressive than 1-WL in testing non-isomorphic graphs._
The stronger expressiveness of UnionSNN over 1-WL is credited to its use of union subgraphs, with an effective encoding of local neighborhood connectivities via the shortest-path-based design of structural coefficients. We further provide a special case to show some graphs can be distinguished by UnionSNN but not by 3-WL or GraphSNN in Appendix G.
**Design Comparisons with GraphSNN.** Our UnionSNN is similar to GraphSNN in the sense that both improve the expressiveness of MPNNs (and 1-WL) by injecting the information of local substructures. However, UnionSNN is superior to GraphSNN in the following aspects. (1) Union subgraphs in UnionSNN are stronger than overlap subgraphs in GraphSNN, as ensured by Theorem 1. (2) The shortest-path-based substructure descriptor designed in UnionSNN is more powerful than that in GraphSNN: the latter fails to possess the property of Connectivity Awareness (as elaborated in Section 4.1). An example of two non-isomorphic subgraphs \(S_{v^{\prime}\cap u}\) and \(S_{v^{\prime}\cap u^{\prime}}\) that have the same structural coefficients in GraphSNN is shown in Figure 3. (3) The aggregation function in UnionSNN works on adjacent nodes in the input graph, while that in GraphSNN utilizes the structural coefficients on all pairs of nodes (regardless of their adjacency). Consequently, GraphSNN requires padding the adjacency matrix and feature matrix of each graph to the maximum graph size, which significantly increases the computational complexity. The advantages of UnionSNN over GraphSNN are also evidenced by the experimental results in Section 5.4.
## 5 Experimental Study
In this section, we evaluate the effectiveness of our proposed model under various settings and aim to answer the following research questions: **RQ1.** Can UnionSNN outperform existing MPNNs and transformer-based models? **RQ2.** Can other GNNs benefit from our structural coefficient? **RQ3.** How do different components affect the performance of UnionSNN? **RQ4.** Is our runtime competitive with that of other substructure descriptors? We conduct experiments on three tasks: graph classification, graph regression and node classification. When we plug our structural coefficient into another model, we add the prefix "Union-" to its name, as in UnionGCN.
**Datasets.** For graph classification, we use 10 benchmark datasets. Eight of them were selected from the TUDataset [22], including MUTAG, PROTEINS, ENZYMES, DD, FRANKENSTEIN (denoted as FRANK in our tables), Tox21, NCI1 and NCI109. The other two datasets OGBG-MOLHIV and OGBG-MOLBBBP were selected from Open Graph Benchmark [19]. For graph regression, we conduct experiments on ZINC10k and ZINC-full datasets [12]. For node classification, we test on five datasets, including citation networks (Cora, Citeseer, and PubMed [42]) and Amazon co-purchase networks (Computer and Photo [32]). These datasets cover various graph sizes and densities. The statistics of the datasets are summarized in Appendix H.

Table 1: Graph classification results (average accuracy \(\pm\) standard deviation) over 10-fold CV on the eight TUDatasets. The first and second best results on each dataset are highlighted in **bold** and underlined; the winner between a base model with and without our structural coefficient injected is highlighted in gray background.
**Baseline Models.** We select various GNN models as baselines, including (1) classical MPNNs such as GCN [24], GIN [57], GraphSAGE [16], GAT [49], GatedGCN [6]; (2) WL-based GNNs such as 3WL-GNN [30]; (3) transformer-based methods such as UGformer [35], Graphormer [59] and GPS [41]; (4) state-of-the-art graph pooling methods such as MEWISPool [36]; (5) methods that introduce structural information by shortest paths or curvature, such as GeniePath [29], CurvGN [58], and NestedGIN [64]; (6) GNNs with positional encoding, such as GatedGCN-LSPE [13]; (7) GraphSNN [55]. Model implementation details are provided in Appendix I.
### Performance on Different Graph Tasks
**Graph-Level Tasks**. For graph classification, we report the results on 8 TUDatasets in Table 1 and the results on 2 OGB datasets in Appendix J. Our UnionSNN outperforms all baselines on 7 out of 10 datasets (by comparing UnionSNN with all baselines without "ours"). We further apply our structural coefficient as a plugin component to four MPNNs: GCN, GatedGCN, GraphSAGE and GIN. The results show that our structural coefficient is able to boost the performance of the base model in almost all cases, with an improvement of up to 11.09%. For graph regression, we report the mean absolute error (MAE) on ZINC10k and ZINC-full. As shown in Table 2, MPNNs with our structural coefficient (UnionGCN, UnionGIN and UnionSAGE) dramatically outperform their counterparts. Additionally, when injecting our structural coefficient into Transformer-based models, Unionformer and UnionGPS make further improvements over Graphormer and GPS.
**Node-Level Tasks**. We report the results of node classification in Table 3. UnionSNN outperforms all baselines on all 5 datasets. Again, injecting our structural coefficient into GCN, GIN, and GraphSNN achieves performance improvements over the base models in almost all cases.
\begin{table}
\begin{tabular}{l|c c}
\hline
 & ZINC10k & ZINC-full \\
\hline
GCN & 0.3800 \(\pm\) 0.0171 & 0.1152 \(\pm\) 0.0010 \\
UnionGCN (ours) & 0.2811 \(\pm\) 0.0050 & 0.0877 \(\pm\) 0.0003 \\
\hline
GIN & 0.5090 \(\pm\) 0.0365 & 0.1552 \(\pm\) 0.0079 \\
UnionGIN (ours) & 0.4625 \(\pm\) 0.0222 & 0.1334 \(\pm\) 0.0013 \\
\hline
GraphSAGE & 0.3953 \(\pm\) 0.0290 & 0.1205 \(\pm\) 0.0034 \\
UnionSAGE (ours) & 0.3768 \(\pm\) 0.0011 & 0.1146 \(\pm\) 0.0017 \\
\hline
Graphormer & 0.1269 \(\pm\) 0.0033 & 0.039 \(\pm\) 0.0031 \\
Unionformer (ours) & 0.1241 \(\pm\) 0.0066 & 0.0252 \(\pm\) 0.0026 \\
\hline
GPS & 0.0740 \(\pm\) 0.0022 & 0.0262 \(\pm\) 0.0025 \\
UnionGPS (ours) & 0.0681 \(\pm\) 0.0013 & 0.0236 \(\pm\) 0.0017 \\
\hline
\end{tabular}
\end{table}
Table 2: Graph regression results (average test MAE \(\pm\) standard deviation) on ZINC10k and ZINC-full datasets. The best result is highlighted in **bold**. The winner between a base model with and without our structural coefficient injected is highlighted in **gray background**.
\begin{table}
\begin{tabular}{l|c c c c c} \hline & Cora & Citeseer & PubMed & Computer & Photo \\ \hline GraphSAGE & \(70.60\pm 0.64\) & \(55.02\pm 3.40\) & \(70.36\pm 4.29\) & \(80.30\pm 1.30\) & \(89.16\pm 1.03\) \\ GAT & \(74.82\pm 1.95\) & \(63.82\pm 2.81\) & \(74.02\pm 1.11\) & \(85.94\pm 2.35\) & \(91.86\pm 0.47\) \\ GeniePath & \(72.16\pm 2.69\) & \(57.40\pm 2.16\) & \(70.96\pm 2.06\) & \(82.68\pm 0.45\) & \(89.98\pm 1.14\) \\ CurvGN & \(74.06\pm 1.54\) & \(62.08\pm 0.85\) & \(74.54\pm 1.61\) & \(86.30\pm 0.70\) & \(92.50\pm 0.50\) \\ \hline GCN & \(72.56\pm 4.41\) & \(85.30\pm 3.2\) & \(74.44\pm 0.71\) & \(84.58\pm 3.02\) & \(91.71\pm 0.55\) \\ UnionGCN (ours) & \(74.48\pm 0.42\) & \(59.02\pm 3.64\) & \(74.82\pm 1.10\) & \(88.84\pm 0.27\) & \(92.33\pm 0.53\) \\ \hline GIN & \(75.86\pm 1.09\) & \(63.10\pm 2.24\) & \(76.62\pm 0.64\) & \(86.26\pm 0.56\) & \(92.11\pm 0.32\) \\ UnionGIN (ours) & \(75.90\pm 0.80\) & \(63.66\pm 1.75\) & \(76.78\pm 1.02\) & \(86.81\pm 2.12\) & \(92.28\pm 0.19\) \\ \hline GraphSNN & \(75.44\pm 0.73\) & \(64.68\pm 2.72\) & \(76.76\pm 0.54\) & \(84.11\pm 0.57\) & \(90.82\pm 0.30\) \\ UnionGraphSNN (ours) & \(75.58\pm 0.49\) & \(65.22\pm 1.12\) & \(76.99\pm 0.56\) & \(84.58\pm 0.46\) & \(90.60\pm 0.58\) \\ \hline UnionSNN (ours) & \(76.86\pm 1.58\) & \(65.02\pm 1.02\) & \(77.06\pm 1.07\) & \(87.76\pm 0.36\) & \(92.92\pm 0.38\) \\ \hline \end{tabular}
\end{table}
Table 3: Node classification results (average accuracy \(\pm\) standard deviation) over 10 runs. The first and second best results on each dataset are highlighted in **bold** and underlined. The winner between a base model with and without our structural coefficient injected is highlighted in **gray background**.
### Ablation Study
In this subsection, we validate empirically the design choices made in different components of our model: (1) the local substructure; (2) the substructure descriptor; (3) the encoding method from a path matrix to a scalar. All experiments were conducted on 6 graph classification datasets.
**Local Substructure**. We test three types of local substructures defined in Section 3.2: overlap subgraphs, union minus subgraphs and union subgraphs. They are denoted as "overlap", "minus", and "union" respectively in Table 4. The best results are consistently achieved by using union subgraphs.
**Substructure Descriptor**. We compare our substructure descriptor with four existing ones discussed in Section 4.1. We replace the substructure descriptor in UnionSNN with edge betweenness, node/edge counting, Ricci curvature, and Laplacian matrix (other components unchanged), and obtain four variants, namely BetSNN, CountSNN, CurvNN, and LapSNN. Table 5 shows our UnionSNN is a clear winner: it achieves the best result on 5 out of 6 datasets. This experiment demonstrates that our path matrix better captures structural information.
**Path Matrix Encoding Method**. We test three methods that transform a path matrix to a scalar: (1) sum of all elements in the path matrix (matrix sum); (2) maximum eigenvalue of the path matrix (eigen max); (3) sum of all singular values of the matrix (svd sum) used by UnionSNN in Section 4.2. Table 6 shows that the encoding method "svd sum" performs the best on 5 out of 6 datasets.
### Case Study
In this subsection, we investigate how the proposed structural coefficient \(a^{vu}\) reflects local connectivities. We work on an example union subgraph \(S_{v\cup u}\) in Figure 4 and modify its nodes/edges to study how the coefficient \(a^{vu}\) varies with the local structural change. We have the following observations: (1) with the set of nodes unchanged, deleting an edge increases \(a^{vu}\); (2) deleting a node (and its incident edges) decreases \(a^{vu}\); (3) the four types of edges in the closed neighborhood (Section 3.2) have different effects on \(a^{vu}\): \(E_{1}^{vu}\) < \(E_{2}^{vu}\) < \(E_{3}^{vu}\) < \(E_{4}^{vu}\) (by comparing -ab, -ad, -de, and +df). These observations indicate that a smaller coefficient will be assigned
\begin{table}
\begin{tabular}{l|c c c c c c}
\hline \hline
 & MUTAG & PROTEINS & ENZYMES & DD & NCI1 & NCI109 \\
\hline
overlap & \(85.70\pm 7.40\) & \(71.33\pm 5.35\) & \(65.00\pm 5.63\) & \(73.43\pm 4.07\) & \(73.58\pm 1.73\) & \(72.96\pm 2.01\) \\
minus & \(\mathbf{87.31\pm 5.29}\) & \(68.70\pm 3.61\) & \(65.33\pm 4.58\) & \(74.79\pm 4.63\) & \(80.66\pm 1.90\) & \(78.70\pm 2.48\) \\
union & \(\mathbf{87.31\pm 5.29}\) & \(\mathbf{75.02\pm 2.50}\) & \(\mathbf{68.17\pm 5.70}\) & \(\mathbf{77.00\pm 2.37}\) & \(\mathbf{82.34\pm 1.93}\) & \(\mathbf{81.61\pm 1.78}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Ablation study on local substructure. The best result is highlighted in **bold**.
\begin{table}
\begin{tabular}{l|c c c c c c}
\hline \hline
 & MUTAG & PROTEINS & ENZYMES & DD & NCI1 & NCI109 \\
\hline
BetSNN & \(80.94\pm 6.60\) & \(69.44\pm 6.15\) & \(65.00\pm 5.63\) & \(70.20\pm 5.15\) & \(74.91\pm 2.48\) & \(73.70\pm 1.87\) \\
CountSNN & \(84.65\pm 6.76\) & \(70.79\pm 5.07\) & \(66.50\pm 6.77\) & \(74.36\pm 7.21\) & \(81.74\pm 2.35\) & \(79.80\pm 1.67\) \\
CurvNN & \(85.15\pm 7.35\) & \(72.77\pm 4.42\) & \(67.17\pm 6.54\) & \(75.88\pm 3.24\) & \(81.34\pm 2.27\) & \(80.64\pm 1.85\) \\
LapSNN & \(\mathbf{89.39\pm 5.24}\) & \(68.32\pm 3.49\) & \(66.17\pm 4.15\) & \(76.31\pm 2.85\) & \(81.39\pm 2.08\) & \(81.34\pm 2.93\) \\
UnionSNN & \(87.31\pm 5.29\) & \(\mathbf{75.02\pm 2.50}\) & \(\mathbf{68.17\pm 5.70}\) & \(\mathbf{77.00\pm 2.37}\) & \(\mathbf{82.34\pm 1.93}\) & \(\mathbf{81.61\pm 1.78}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Ablation study on substructure descriptor. The best result is highlighted in **bold**.
Figure 4: Structural coefficient analysis.
to an edge with a denser local substructure. This matches our expectation that the coefficient should be small for an edge in a highly connected neighborhood. The rationale is, such edges are less important in message passing as the information between their two incident nodes can flow through more paths. By using the coefficients that well capture local connectivities, the messages from different neighbors could be properly adjusted when passing to the center node. This also explains the effectiveness of UnionSNN in performance experiments. A quantitative analysis of cycle detection is provided in Appendix J.2 to show the ability of our proposed structural coefficients to capture local substructure information.
### Efficiency Analysis
In this subsection, we conduct experiments on the PROTEINS, DD and FRANKENSTEIN datasets, which cover various numbers of graphs and graph sizes.
**Preprocessing computational cost.** UnionSNN computes structural coefficients in preprocessing. We compare its preprocessing time with the time needed in baseline models for pre-computing their substructure descriptors, including edge betweenness (Betweenness) in BetSNN, node/edge counting (Count_ne) in GraphSNN, Ricci curvature (Curvature) in CurvGN, and counting cycles (Count_cycle) in [3]. As shown in Table 7, the preprocessing time of UnionSNN is comparable to that of other models. This demonstrates that our proposed structural coefficient is able to improve classification performance without significantly sacrificing efficiency. Theoretical time complexity analysis of the structure descriptors is provided in Appendix K.
**Runtime computational cost.** We conduct an experiment to compare the total runtime cost of UnionSNN with that of other MPNNs. The results are reported in Table 8. Although UnionSNN runs slightly slower than GCN and GIN, it runs over 4.56 times faster than the WL-based GNN (3WL-GNN) and is comparable to an MPNN with positional encoding (GatedGCN-LSPE). Compared with GraphSNN, UnionSNN runs significantly faster: the efficiency improvement approaches an order of magnitude on datasets with large graphs, e.g., DD. This is because UnionSNN does not need to pad the adjacency matrix and the feature matrix of each graph to the maximum graph size in the dataset, as GraphSNN does.
## 6 Conclusions
We present UnionSNN, a model that outperforms 1-WL in distinguishing non-isomorphic graphs. UnionSNN utilizes an effective shortest-path-based substructure descriptor applied to union subgraphs, making it more powerful than previous models. Our experiments demonstrate that UnionSNN surpasses state-of-the-art baselines in both graph-level and node-level tasks while maintaining computational efficiency. The use of union subgraphs enhances the model's ability to capture neighbor connectivities and facilitates message passing. Additionally, when applied to existing MPNNs and Transformer-based models, our local encoding improves their performance by up to 11.09%.
# Numerical solution of Poisson partial differential equation in high dimension using two-layer neural networks

Mathias Dus, Virginie Ehrlacher

Published: 2023-05-16 | Link: http://arxiv.org/abs/2305.09408v2
###### Abstract
The aim of this article is to analyze numerical schemes using two-layer neural networks with infinite width for the resolution of the high-dimensional Poisson partial differential equation (PDE) with Neumann boundary condition. Using Barron's representation of the solution [1] with a probability measure defined on the set of parameter values, the energy is minimized thanks to a gradient curve dynamic on the 2-Wasserstein space of the set of parameter values defining the neural network. Inspired by the work from Bach and Chizat [2, 3], we prove that if the gradient curve converges, then the represented function is the solution of the elliptic equation considered. In contrast to the works [2, 3], the activation function we use here is not assumed to be homogeneous to obtain global convergence of the flow. Numerical experiments are given to show the potential of the method.
## 1 Introduction
### Literature review
The motivation of our work is to bring some contributions to the mathematical understanding of neural-network based numerical schemes, typically Physics-Informed Neural Network (PINN) [4, 5, 6, 7, 8, 9] approaches, for the resolution of some high-dimensional Partial Differential Equations (PDEs). In this context, it is of tremendous importance to understand why neural networks work so well in some contexts, in order to improve their efficiency and gain insight into why a particular neural network should be relevant to a specific task.
The first step towards a mathematical analysis theory of neural network-based numerical methods is the identification of functional spaces suited for neural network approximation. The first important result in this direction is the celebrated approximation theorem due to Cybenko [10], proving that two-layer neural networks can approximate an arbitrary smooth function on a compact subset of \(\mathbb{R}^{d}\). However, this work does not give an estimation of the number of neurons needed, even though this is of utmost importance to hope for tractable numerical methods. To answer this question, Yarotsky [11] gave bounds on the number of neurons necessary to represent smooth functions. This theory mainly relies on classical techniques of Taylor expansions and does not give computable architectures in the high-dimensional regime. Another original point of view was given by Barron [1], who used Monte Carlo techniques from Maurey-Jones-Barron to prove that functions belonging to a certain metric space, _i.e._ the Barron space, can be approximated by a two-layer NN with precision \(O\left(\dfrac{1}{\sqrt{m}}\right)\), \(m\) being the width of the first layer. Initially, Barron's norm was characterized using Fourier analysis, restricting the theory to domains where a Fourier decomposition is available. Other Barron-type norms which do not suppose the existence of a harmonic decomposition are now also available [12].
In order to give a global idea of how this works, one can say that a Barron function \(f_{\mu}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) can be represented by a measure \(\mu\) with second order moments :

\[f_{\mu}(x):=\int a\sigma(w\cdot x+b)d\mu(a,w,b)\]

where \(\sigma\) is an activation function and the Barron norm \(\|f_{\mu}\|_{\mathcal{B}}\) is, roughly speaking, a mix of the second order moments of \(\mu\). Intuitively, the law of large numbers says that the function \(f_{\mu}\) can be represented
by a sum of Dirac masses, corresponding to a two-layer neural network whose width equals the number of Dirac masses. The architecture of a two-layer neural network is recalled in Figure 1. Having said that, some important questions arise :
* What is the size of the Barron space and the influence of the activation function on this size? Some work has been done in this direction for the ReLU activation function. In [13], it is proven that \(H^{s}\) functions are Barron if \(s\geq\dfrac{d}{2}+2\) and that \(f_{\mu}\) can be decomposed as an infinite sum of \(f_{\mu_{i}}\) whose singularities are located on \(k\)-dimensional (\(k<d\)) affine subspaces of \(\mathbb{R}^{d}\). For the moment, no similar result seems to hold for more regular activation functions.
* One can add more and more layers and observe the influence on the corresponding space. In [14], tree-like spaces \(\mathcal{W}_{L}\) (where \(L\) is the number of hidden layers) are introduced using an iterative scheme starting from the Barron space. Of course, multi-layer neural networks naturally belong to these spaces. Nevertheless, for a function belonging to \(\mathcal{W}_{L}\), it is not clear that a multilayer neural network is more efficient than its two-layer counterpart for its approximation.
* Does solutions of classical PDEs belong to a Barron space? In this case, there is a potential to solve PDEs without suffering from the curse of dimension. Some important advances have been made in this direction in [15] where authors considered the Poisson problem with Neumann boundary condition on the \(d\) dimensional cube. If the source term is Barron, then it is proved that the solution is also Barron and there is hope for an approximation with a two-layer NN.
Using conclusions from [15], the object of this paper is to propose and analyze a neural-network based numerical approach for the resolution of the Poisson equation in the high-dimensional regime with a Barron source. Inspired by [2], we immerse the problem into the space of probability measures with finite second order moments defined on the parametric domain. This corresponds to finding a solution to the PDE thanks to an infinitely wide two-layer neural network. Then we interpret the learning phase of the network as a gradient curve in the space of probability measures. Finally, under some hypotheses on the initial support, we prove that if the curve converges, then it necessarily does so towards a measure corresponding to the solution of the PDE considered. Note that our argumentation is different from [2, 3] since the convergence proof is based neither on the topological degree nor on the topological properties of the sphere. We rather use a homology argument taken from algebraic topology and a clever choice of activation function to prove that the dynamic of the support of the gradient curve of measures behaves nicely. Numerical experiments are conducted to confirm the potential of the proposed method.
In Section 2, the problem is presented in a more precise way and the link between probability and Barron functions is made clearly. In Section 3, the gradient curve is introduced and our main theorems on its well-posedness and convergence are presented and proved. Finally, numerical experiments are exposed in Section 4.
**Notation** : For \(1\leq p\leq\infty\), the notation \(|\cdot|_{p}\) designates the \(\ell^{p}\) norm of a vector of arbitrary finite dimension with particular attention to \(p=2\) (euclidean norm) for which the notation \(|\cdot|\) is preferred.
## 2 Preliminaries
This section introduces the mathematical framework we consider in this paper to relate two-layer neural networks and high-dimensional Poisson equations.
### Problem setting
The following Poisson equation is considered on \(\Omega:=\left[0,1\right]^{d}\) (\(d\in\mathbb{N}\)) with Neumann boundary condition : find \(u^{*}\in H^{1}(\Omega)\) with \(\int_{\Omega}u^{*}=0\) solution to :
\[\begin{cases}-\Delta u^{*}=f\text{ on }\Omega,\\ \partial_{n}u^{*}=0\text{ on }\partial\Omega,\end{cases} \tag{1}\]
where \(f\in L^{2}(\Omega)\) with \(\int_{\Omega}f=0\). Here (1) has to be understood in the variational sense, in the sense that \(u^{*}\) is equivalently the unique minimizer to :
\[u^{*}=\operatorname*{argmin}_{u\in H^{1}(\Omega)}\mathcal{E}(u), \tag{2}\]
Figure 1: A two-layer neural network of width \(m\)
where
\[\forall u\in H^{1}(\Omega),\ \ \ \mathcal{E}(u):=\int_{\Omega}\Big{(}\frac{|\nabla u |^{2}}{2}-fu\Big{)}\,dx+\frac{1}{2}\Big{(}\int_{\Omega}udx\Big{)}^{2}.\]
This can indeed be easily checked by classic Lax-Milgram arguments. The functional \(\mathcal{E}\) is strongly convex and differentiable with derivative given by Lemma 1.
**Lemma 1**.: _The functional \(\mathcal{E}:H^{1}(\Omega)\to\mathbb{R}\) is continuous, differentiable and for all \(u\in H^{1}(\Omega)\), it holds that_
\[\forall v\in H^{1}(\Omega),\ d\,\mathcal{E}\,|_{u}(v)=\int_{\Omega}\left( \nabla u\cdot\nabla v-fv\right)dx+\int_{\Omega}udx\int_{\Omega}vdx.\]
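For completeness, the formula can be checked by a direct expansion of the quadratic terms: for any \(u,v\in H^{1}(\Omega)\) and \(t\in\mathbb{R}\),

\[\mathcal{E}(u+tv)=\mathcal{E}(u)+t\Big(\int_{\Omega}\left(\nabla u\cdot\nabla v-fv\right)dx+\int_{\Omega}udx\int_{\Omega}vdx\Big)+\frac{t^{2}}{2}\Big(\int_{\Omega}|\nabla v|^{2}dx+\Big(\int_{\Omega}vdx\Big)^{2}\Big),\]

so that \(d\,\mathcal{E}\,|_{u}(v)\) is exactly the coefficient of the first-order term in \(t\).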
It can be easily seen that points \(u\) where the differential is identically zero are solutions to equation (1).
**Remark 1**.: _The coercive symmetric bilinear form \(\bar{a}\) involved in the definition of the energy writes :_
\[\bar{a}(u,v):=\int_{\Omega}\nabla u\cdot\nabla vdx+\int_{\Omega}udx\int_{ \Omega}vdx.\]
_The energy \(\mathcal{E}\) can then be equivalently rewritten thanks to the bilinear form \(\bar{a}\) :_
\[\mathcal{E}(u)=\frac{1}{2}\bar{a}(u-u^{\star},u-u^{\star})-\frac{1}{2}\int_{ \Omega}|\nabla u^{\star}|^{2}dx.\]
The aim of the present work is to analyze a numerical method based on the use of infinite-width two-layer neural networks for the resolution of (1) with a specific focus on the case when \(d\) is large.
### Activation function
We introduce here the particular choice of activation function we consider in this work.
Let \(\sigma:\mathbb{R}\to\mathbb{R}\) be the classical Rectified Linear Unit (ReLU) function defined by :
\[\forall y\in\mathbb{R},\ \sigma(y):=\max(y,0). \tag{3}\]
Let \(\rho:\mathbb{R}\to\mathbb{R}\) be defined by

\[\forall y\in\mathbb{R},\ \rho(y):=\left\{\begin{array}{cl}Z\exp\left(-\frac{\tan\left(\frac{\pi}{2}y\right)^{2}}{2}\right)&\mbox{if }|y|<1,\\ 0&\mbox{otherwise,}\end{array}\right. \tag{4}\]
where the constant \(Z\in\mathbb{R}\) is defined such that the integral of \(\rho\) is equal to one. For all \(\tau>0\), we then define \(\rho_{\tau}:=\tau\rho(\tau\cdot)\) and \(\sigma_{\tau}:\mathbb{R}\to\mathbb{R}\) the function defined by
\[\forall y\in\mathbb{R},\ \sigma_{\tau}(y):=(\rho_{\tau}\star\sigma)(y). \tag{5}\]
We then have the following lemma.
**Lemma 2**.: _For any \(\tau>0\), it holds that_
1. \(\sigma_{\tau}\in\mathcal{C}^{\infty}(\mathbb{R})\) is uniformly bounded and so is \(\sigma_{\tau}^{\prime}\),
2. _for all_ \(y<-1/\tau\)_,_ \(\sigma_{\tau}(y)=0\)_,_
3. _for all_ \(y>1/\tau\)_,_ \(\sigma_{\tau}(y)=y\)_,_
4. _there exists_ \(C>0\) _such that for all_ \(\tau>0\)_,_ \[\|\sigma-\sigma_{\tau}\|_{H^{1}(\mathbb{R})}\leq\frac{C}{\sqrt{\tau}}.\]
Proof.: The first item \((i)\) is classic and left to the reader. For \((ii)\), we have :

\[\sigma_{\tau}(x)=\int_{-1/\tau}^{1/\tau}\rho_{\tau}(y)\sigma(x-y)dy \tag{6}\]

and if \(x<-1/\tau\) then \(x-y<0\) for \(-1/\tau<y<1/\tau\) and \(\sigma(x-y)=0\). This naturally gives \(\sigma_{\tau}(x)=0\).

For \((iii)\), using again (6), if \(x>1/\tau\), then \(x-y>0\) for \(-1/\tau<y<1/\tau\) and \(\sigma(x-y)=x-y\). As a consequence,

\[\sigma_{\tau}(x)=\int_{-1/\tau}^{1/\tau}\rho_{\tau}(y)(x-y)dy=x,\]
where we have used the fact that \(\int_{\mathbb{R}}\rho_{\tau}(y)dy=1\) and \(\int_{\mathbb{R}}y\rho_{\tau}(y)dy=0\) by symmetry of \(\rho\).
For \((iv)\), we have by \((ii)-(iii)\):
\[\|\sigma-\sigma_{\tau}\|_{L^{2}(\mathbb{R})}^{2}=\int_{-1/\tau}^{1/\tau}( \sigma(x)-\sigma_{\tau}(x))^{2}dx\leq\frac{8}{\tau^{2}},\]
where we used the fact that \(|\sigma(x)|,|\sigma_{\tau}(x)|\leq 1/\tau\) on \([-1/\tau,1/\tau]\). In a similar way,
\[\|\sigma^{\prime}-\sigma_{\tau}^{\prime}\|_{L^{2}(\mathbb{R})}^{2}=\int_{-1/ \tau}^{1/\tau}(\sigma^{\prime}(x)-\sigma_{\tau}^{\prime}(x))^{2}dx\leq\frac{4} {\tau}.\]
The two last inequalities give \((iv)\).
In this work, we will rather use a hat version of the regularized ReLU activation function. More precisely, we define:
\[\forall y\in\mathbb{R},\ \sigma_{H,\tau}(y):=\sigma_{\tau}(y+1)-\sigma_{\tau}(2y )+\sigma_{\tau}(y-1). \tag{7}\]
We call hereafter this activation function the regularized HReLU (Hat ReLU) activation. When \(\tau=+\infty\), the following notation is proposed :
\[\forall y\in\mathbb{R},\ \sigma_{H}(y):=\sigma(y+1)-\sigma(2y)+\sigma(y-1). \tag{8}\]
The reason why we use this activation is that it has compact support and can be used to generate an arbitrary piecewise constant function on \([0,1]\). Note however that neither \(\sigma_{H,\tau}\) nor \(\sigma_{H}\) is homogeneous (in contrast to the activation functions considered in [2, 3]). Notice also that a direct corollary of Lemma 2 is that there exists a constant \(C>0\) such that for all \(\tau>0\),
\[\|\sigma_{H}-\sigma_{H,\tau}\|_{H^{1}(\mathbb{R})}\leq\frac{C}{\sqrt{\tau}}. \tag{9}\]
We will also use the fact that there exists a constant \(C>0\) such that for all \(\tau>0\),
\[\|\sigma_{H,\tau}\|_{L^{\infty}(\mathbb{R})}\leq C,\ \|\sigma_{H,\tau}^{ \prime}\|_{L^{\infty}(\mathbb{R})}\leq C,\ \|\sigma_{H,\tau}^{\prime\prime}\|_{L^{\infty}(\mathbb{R})}\leq C\tau\ \ \text{and}\ \|\sigma_{H,\tau}^{\prime\prime\prime}\|_{L^{\infty}(\mathbb{R})}\leq C \tau^{2}. \tag{10}\]
Figure 2: The hat activation function and its regularization (\(\tau=4\))
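As a sanity check on (7)-(8) (again a sketch of ours, with an arbitrary grid), one can verify numerically that \(\sigma_{H}\) coincides with the tent function \(y\mapsto\max(1-|y|,0)\), thanks to the identity \(\sigma(2y)=2\sigma(y)\):

```python
import numpy as np

def sigma(y):
    return np.maximum(y, 0.0)

def sigma_H(y):
    # Hat ReLU (8); since sigma(2y) = 2 sigma(y), this is 1 - |y| on [-1, 1].
    return sigma(y + 1.0) - sigma(2.0 * y) + sigma(y - 1.0)

y = np.linspace(-2.0, 2.0, 401)
tent = np.where(np.abs(y) <= 1.0, 1.0 - np.abs(y), 0.0)
print(np.max(np.abs(sigma_H(y) - tent)))   # 0.0 up to rounding
```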
### Spectral Barron space
We introduce the orthonormal basis in \(L^{2}(\Omega)\) composed of the eigenfunctions \(\{\phi_{k}\}_{k\in\mathbb{N}^{d}}\) of the Laplacian operator with Neumann boundary conditions, where
\[\forall k=(k_{1},\ldots,k_{d})\in\mathbb{N}^{d},\ \forall x:=(x_{1},\cdots,x_{d}) \in\Omega,\quad\phi_{k}(x_{1},\ldots,x_{d}):=\prod_{i=1}^{d}\cos(\pi k_{i}x_{i}). \tag{11}\]
Notice that \(\{\phi_{k}\}_{k\in\mathbb{N}^{d}}\) is also an orthogonal basis of \(H^{1}(\Omega)\). Using this basis, we have the Fourier representation formula for any function \(u\in L^{2}(\Omega):\)
\[u=\sum_{k\in\mathbb{N}^{d}}\hat{u}(k)\phi_{k},\]
where for all \(k\in\mathbb{N}^{d}\), \(\hat{u}(k):=\langle\phi_{k},u\rangle_{L^{2}(\Omega)}\). This allows us to define the (spectral) Barron space [15] as follows :
**Definition 1**.: _For all \(s>0\), the Barron space \(\mathcal{B}^{s}(\Omega)\) is defined as :_
\[\mathcal{B}^{s}(\Omega):=\Big{\{}u\in L^{1}(\Omega):\sum_{k\in\mathbb{N}^{d}}(1+\pi^{s}|k|_{1}^{s})|\hat{u}(k)|<+\infty\Big{\}} \tag{12}\]
_and the space \(\mathcal{B}^{2}(\Omega)\) is denoted \(\mathcal{B}(\Omega)\). Moreover, the space \(\mathcal{B}^{s}(\Omega)\) is endowed with the norm :_
\[\|u\|_{\mathcal{B}^{s}(\Omega)}:=\sum_{k\in\mathbb{N}^{d}}(1+\pi^{s}|k|_{1}^{s} )|\hat{u}(k)|. \tag{13}\]
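For a concrete illustration of (13) (a sketch; the test function, the truncation level \(K\) and the quadrature grid are arbitrary choices of ours, and \(\hat{u}(k)\) is computed as the expansion coefficient of \(u\) in the normalized cosine basis), the Barron norm can be approximated by truncating the cosine expansion:

```python
import itertools
import numpy as np

d, K, s, n = 2, 8, 2, 128            # dimension, truncation, exponent s, grid per axis

x1 = (np.arange(n) + 0.5) / n        # midpoint grid on [0, 1]
X1, X2 = np.meshgrid(x1, x1, indexing="ij")

u = np.cos(np.pi * X1) * np.cos(2 * np.pi * X2)   # test function: u = phi_(1,2)

norm = 0.0
for k in itertools.product(range(K), repeat=d):
    phi = np.cos(np.pi * k[0] * X1) * np.cos(np.pi * k[1] * X2)
    scale = np.prod([1.0 if ki == 0 else 2.0 for ki in k])  # 1 / ||phi_k||^2
    uhat = scale * np.mean(u * phi)                          # expansion coefficient
    norm += (1.0 + np.pi ** s * sum(k) ** s) * abs(uhat)
print(norm, 1.0 + 9.0 * np.pi ** 2)  # only nonzero coefficient has |k|_1 = 3
```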
By [15, Lemma 4.3], it is possible to relate the Barron space to traditional Sobolev spaces :
**Lemma 3**.: _The following continuous injections hold :_
* \(\mathcal{B}(\Omega)\hookrightarrow H^{1}(\Omega)\)_,_
* \(\mathcal{B}^{0}(\Omega)\hookrightarrow L^{\infty}(\Omega)\)_._
The space \(\mathcal{B}(\Omega)\) has interesting approximation properties related to neural networks schemes. We introduce the following approximation space:
**Definition 2**.: _Let \(\chi:\mathbb{R}\rightarrow\mathbb{R}\) be measurable, \(m\in\mathbb{N}^{*}\) and \(B>0\). The space \(\mathcal{F}_{\chi,m}(B)\) is defined as:_
\[\mathcal{F}_{\chi,m}(B):=\left\{c+\sum_{i=1}^{m}a_{i}\chi(w_{i}\cdot x+b_{i}) :c,a_{i},b_{i}\in\mathbb{R},\,w_{i}\in\mathbb{R}^{d},\ |c|\leq 2B,|w_{i}|=1,|b_{i}| \leq 1,\sum_{i=1}^{m}|a_{i}|\leq 4B\right\} \tag{14}\]
Now, we are able to state the main approximation theorem.
**Theorem 1**.: _For any \(u\in\mathcal{B}(\Omega)\), \(m\in\mathbb{N}^{*}\) :_
1. _there exists_ \(u_{m}\in\mathcal{F}_{\sigma_{H},m}(\|u\|_{\mathcal{B}(\Omega)})\) _such that :_ \[\|u-u_{m}\|_{H^{1}(\Omega)}\leq\frac{C\|u\|_{\mathcal{B}(\Omega)}}{\sqrt{m}},\]
2. _there exists_ \(\tilde{u}_{m}\in\mathcal{F}_{\sigma_{H,m},m}(\|u\|_{\mathcal{B}(\Omega)})\) _such that :_ \[\|u-\tilde{u}_{m}\|_{H^{1}(\Omega)}\leq\frac{C\|u\|_{\mathcal{B}(\Omega)}}{ \sqrt{m}}.\] (15)
_where for both items, \(C\) is a universal constant which depends neither on \(d\) nor on \(u\)._
Proof.: Let \(B:=\|u\|_{\mathcal{B}(\Omega)}\). We just give a sketch of the proof of (ii), (i) being derived from similar arguments as in [15, Theorem 2.1].
By (i), there exists \(u_{m}\in\mathcal{F}_{\sigma_{H},m}(B)\) such that
\[\|u-u_{m}\|_{H^{1}(\Omega)}\leq\frac{CB}{\sqrt{m}}.\]
The function \(u_{m}\) can be written as :
\[u_{m}(x)=c+\sum_{i=1}^{m}a_{i}\sigma_{H}(w_{i}\cdot x+b_{i})\]
for some \(c,a_{i},b_{i}\in\mathbb{R}\), \(w_{i}\in\mathbb{R}^{d}\) for \(i=1,\ldots,m\) with \(|c|\leq 2B,|w_{i}|=1,|b_{i}|\leq 1,\sum_{i=1}^{m}|a_{i}|\leq 4B\).
By Lemma 2 \((iv)\) applied with \(\tau=m\), there exists \(C>0\) such that \(\|\sigma_{H}-\sigma_{H,m}\|_{H^{1}(\mathbb{R})}\leq\frac{C}{\sqrt{m}}\). Since \(\sum_{i=1}^{m}|a_{i}|\leq 4B\), it is easy to see that
\[\|\tilde{u}_{m}-u_{m}\|_{H^{1}(\Omega)}\leq\frac{CB}{\sqrt{m}}\]
where :
\[\tilde{u}_{m}(x)=c+\sum_{i=1}^{m}a_{i}\sigma_{H,m}(w_{i}\cdot x+b_{i}).\]
Consequently,
\[\|u-\tilde{u}_{m}\|_{H^{1}(\Omega)}\leq\frac{CB}{\sqrt{m}}\]
which yields the desired result.
**Remark 2**.: _In other words, a Barron function can be approximated in \(H^{1}(\Omega)\) by a two-layer neural network of width \(m\) with precision \(O\left(\frac{1}{\sqrt{m}}\right)\) when the activation function is the HReLU one._
In the sequel, we assume that any parameter vector \(\theta=(c,a,w,b)\) takes values in the neural network parameter set
\[\Theta:=\mathbb{R}\times\mathbb{R}\times S_{\mathbb{R}^{d}}(1)\times[-\sqrt{ d}-2,\sqrt{d}+2], \tag{16}\]
with \(S_{\mathbb{R}^{d}}(1)\) the unit sphere of \(\mathbb{R}^{d}\). In addition, for all \(r>0\), we denote by
\[K_{r}:=[-2r,2r]\times[-4r,4r]\times S_{\mathbb{R}^{d}}(1)\times[-\sqrt{d}-2, \sqrt{d}+2]. \tag{17}\]
The particular choice of the range of the parameter \(b\), namely \([-\sqrt{d}-2,\sqrt{d}+2]\), will be made clear in the following. Moreover, let \(\mathcal{P}_{2}(\Theta)\) (respectively \(\mathcal{P}_{2}(K_{r})\)) denote the set of probability measures on \(\Theta\) (respectively on \(K_{r}\)) with finite second-order moments.
Let us make the following remark.
**Remark 3**.: _Let \(m\in\mathbb{N}^{*}\), \(u_{m}\in\mathcal{F}_{\chi,m}(B)\) with \(B>0\) and \(\chi:\mathbb{R}\rightarrow\mathbb{R}\). Then, there exists \(c,a_{i},b_{i}\in\mathbb{R}\), \(w_{i}\in\mathbb{R}^{d}\) for \(i=1,\ldots,m\) with \(|c|\leq 2B,|w_{i}|=1,|b_{i}|\leq 1,\sum_{i=1}^{m}|a_{i}|\leq 4B\) such that for all \(x\in\Omega\),_
\[u_{m}(x) =c+\sum_{i=1}^{m}a_{i}\chi(w_{i}\cdot x+b_{i})\] \[=\sum_{i=1}^{m}\left(c+\sum_{j=1}^{m}|a_{j}|sign(a_{i})\chi(w_{i }\cdot x+b_{i})\right)\frac{|a_{i}|}{\sum_{j=1}^{m}|a_{j}|}\] \[=\int_{\Theta}[c+a\chi(w\cdot x+b)]d\mu_{m}(c,a,w,b),\]
_where the measure \(\mu_{m}\) is a probability measure on \(\Theta\) given by :_
\[\mu_{m}:=\sum_{i=1}^{m}\frac{|a_{i}|}{\sum_{j=1}^{m}|a_{j}|}\delta_{(c,\sum_{j=1} ^{m}|a_{j}|sign(a_{i}),w_{i},b_{i})}.\]
_Remark that \(\mu_{m}\) has support in \(K_{B}\). In addition, the sequence \((\mu_{m})_{m\in\mathbb{N}^{*}}\) is uniformly (with respect to \(m\)) bounded in \(\mathcal{P}_{2}(\Theta)\)._
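This rewriting is straightforward to implement; the sketch below (the function name is ours) maps the parameters of a finite network to the weights and atoms of \(\mu_{m}\):

```python
import numpy as np

def network_to_measure(c, a, w, b):
    """Map network parameters to the atoms/weights of the probability
    measure mu_m of Remark 3 (assumes not all a_i vanish)."""
    s = np.sum(np.abs(a))
    weights = np.abs(a) / s                      # |a_i| / sum_j |a_j|
    atoms = [(c, s * np.sign(ai), wi, bi)        # (c, sum_j |a_j| sign(a_i), w_i, b_i)
             for ai, wi, bi in zip(a, w, b)]
    return weights, atoms

# Tiny example: m = 3 neurons in dimension d = 2.
rng = np.random.default_rng(0)
a = rng.normal(size=3)
w = rng.normal(size=(3, 2))
w /= np.linalg.norm(w, axis=1, keepdims=True)    # enforce |w_i| = 1
b = rng.uniform(-1, 1, size=3)
weights, atoms = network_to_measure(0.1, a, w, b)
print(weights.sum())                             # 1.0 : a probability measure
```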
For a general domain \(\Omega\) which is not of the form \(\Omega=[0,1]^{d}\), the solution to equation (1) does not necessarily belong to the Barron space even if the source term has finite Barron norm. Nevertheless, for our case \(\left(\Omega=[0,1]^{d}\right)\), there is an explicit bound of the Barron norm of the solution in terms of the Barron norm of the source term. This gives hope for a neural network approximation of the solution.
**Theorem 2**.: _[_15_]_ _Let \(u^{*}\) be the solution of the equation (1) with \(f\in\mathcal{B}^{0}(\Omega)\), then \(u^{*}\in\mathcal{B}(\Omega)\). Moreover, the following estimate holds :_
\[\|u^{*}\|_{\mathcal{B}(\Omega)}\leq d\|f\|_{\mathcal{B}^{0}(\Omega)}.\]
### Infinite width two-layer neural networks
In order to ease the notation for future computations, for all \(\tau>0\), we introduce the function \(\Phi_{\tau}:\Theta\times\Omega\to\mathbb{R}\) defined by
\[\forall\theta:=(c,a,w,b)\in\Theta,\;\forall x\in\Omega,\quad\Phi_{\tau}( \theta;x):=c+a\sigma_{H,\tau}(w\cdot x+b) \tag{18}\]
and \(\Phi_{\infty}:\Theta\times\Omega\to\mathbb{R}\) defined by:
\[\forall\theta:=(c,a,w,b)\in\Theta,\;\forall x\in\Omega,\quad\Phi_{\infty}( \theta;x):=c+a\sigma_{H}(w\cdot x+b). \tag{19}\]
The space \(\mathcal{P}_{2}(\Theta)\) is endowed with the 2-Wasserstein distance :
\[\forall\mu,\nu\in\mathcal{P}_{2}(\Theta),\quad W_{2}^{2}(\mu,\nu):=\inf_{\gamma\in\Gamma(\mu,\nu)}\int_{\Theta^{2}}d(\theta,\tilde{\theta})^{2}d\gamma(\theta,\tilde{\theta}),\]
where \(\Gamma(\mu,\nu)\) is the set of probability measures on \(\Theta^{2}\) with marginals given respectively by \(\mu\) and \(\nu\) and where \(d\) is the geodesic distance in \(\Theta\). For the interested reader, the geodesic distance between \(\theta,\tilde{\theta}\in\Theta\) can be computed as :
\[d(\theta,\tilde{\theta})=\sqrt{(c-\tilde{c})^{2}+(a-\tilde{a})^{2}+d_{S_{\mathbb{R}^{d}}(1)}(w,\tilde{w})^{2}+(b-\tilde{b})^{2}}.\]
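In coordinates, this distance combines the Euclidean differences in \(c\), \(a\), \(b\) with the great-circle distance \(\arccos(w\cdot\tilde{w})\) on the unit sphere; a minimal sketch:

```python
import numpy as np

def geodesic_distance(theta, theta_t):
    """Geodesic distance on Theta = R x R x S^{d-1} x R between
    theta = (c, a, w, b) and theta_t = (c~, a~, w~, b~)."""
    c, a, w, b = theta
    ct, at, wt, bt = theta_t
    # Great-circle distance on the unit sphere; clip guards rounding errors.
    ds = np.arccos(np.clip(np.dot(w, wt), -1.0, 1.0))
    return np.sqrt((c - ct) ** 2 + (a - at) ** 2 + ds ** 2 + (b - bt) ** 2)

w = np.array([1.0, 0.0]); wt = np.array([0.0, 1.0])
print(geodesic_distance((0.0, 1.0, w, 0.5), (0.0, 1.0, wt, 0.5)))  # pi/2
```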
For all \(\tau,r>0\), we introduce the operator \(P_{\tau}\) and the functional \(\mathcal{E}_{\tau,r}\) defined as follows :
**Definition 3**.: _The operator \(P_{\tau}:\mathcal{P}_{2}(\Theta)\to H^{1}(\Omega)\) is defined for all \(\mu\in\mathcal{P}_{2}(\Theta)\) as :_
\[P_{\tau}(\mu):=\int_{\Theta}\Phi_{\tau}(\theta;x)d\mu(\theta).\]
_Additionally, we define the functional \(\mathcal{E}_{\tau,r}(\mu):\mathcal{P}_{2}(\Theta)\to\mathbb{R}\) as :_
\[\mathcal{E}_{\tau,r}(\mu):=\begin{cases}\mathcal{E}(P_{\tau}(\mu))\text{ if }\mu(K_{r})=1\\ \qquad+\infty\text{ otherwise}.\end{cases}\]
**Proposition 1**.: _For all \(0<\tau,r<\infty\), the functional \(\mathcal{E}_{\tau,r}\) is weakly lower semicontinuous._
Proof.: Let \((\mu_{n})_{n\in\mathbb{N}^{*}}\) be a sequence of elements of \(\mathcal{P}_{2}(\Theta)\) which narrowly converges towards some \(\mu\in\mathcal{P}_{2}(\Theta)\). Without loss of generality, we can assume that \(\mu_{n}\) is supported in \(K_{r}\) for all \(n\in\mathbb{N}^{*}\). Then, it holds that :
* the limit \(\mu\) has support in \(K_{r}\) (by Portmanteau theorem);
* moreover, let \(u_{n}:\Omega\to\mathbb{R}\) be defined such that for all \(x\in\Omega\), \[u_{n}(x):=\int_{\Theta}\Phi_{\tau}(\theta;x)d\mu_{n}(\theta)=\int_{K_{r}}\Phi_{ \tau}(\theta;x)\,d\mu_{n}(\theta).\] Since for all \(x\in\Omega\), the function \(K_{r}\ni\theta\mapsto\Phi_{\tau}(\theta;x)\) is continuous and bounded, it then holds that, for all \(x\in\Omega\), \[u_{n}(x)\underset{n\to\infty}{\longrightarrow}u(x):=\int_{K_{r}}\Phi_{\tau}( \theta;x)d\mu(\theta)=\int_{\Theta}\Phi_{\tau}(\theta;x)d\mu(\theta),\] where the last equality comes from the fact that \(\mu\) is supported in \(K_{r}\).
* It actually holds that the sequence \((u_{n})_{n\in\mathbb{N}^{*}}\) is uniformly bounded in \(\mathcal{C}(\Omega)\). Indeed, there exists \(C>0\) such that for all \(x\in\Omega\) and \(n\in\mathbb{N}^{*}\), we have \[u_{n}(x)^{2} =\left(\int_{K_{r}}\Phi_{\tau}(\theta;x)d\mu_{n}(\theta)\right)^{2}\] \[\leq\int_{K_{r}}\Phi_{\tau}^{2}(\theta;x)d\mu_{n}(\theta)\] \[\leq Cr^{2},\] where the last inequality comes from (10).
As a consequence of the Lebesgue dominated convergence theorem, the sequence \((u_{n})_{n\in\mathbb{N}^{*}}\) strongly converges towards \(u\) in \(L^{2}(\Omega)\). Reproducing the same argument as above for the sequence \((\nabla u_{n})_{n\in\mathbb{N}^{*}}\), one easily proves that this strong convergence holds in fact in \(H^{1}(\Omega)\). The fact that the functional \(\mathcal{E}:H^{1}(\Omega)\to\mathbb{R}\) is continuous allows us to conclude.
**Remark 4**.: _In \(\mathcal{P}_{2}(K_{r})\), the weak convergence is metrized by the Wasserstein distance. Hence, \(\mathcal{E}_{\tau,r}\) is lower semicontinuous as a functional from \((\mathcal{P}_{2}(\Theta),W_{2})\) to \((\mathbb{R},|\cdot|)\)._
Finally, the lower semicontinuity of \(\mathcal{E}_{\tau,r}\) and the compactness of \(\mathcal{P}_{2}(K_{r})\) (as \(K_{r}\) is compact) allow us to prove the existence of at least one solution to the following minimization problem :
**Problem 1**.: _For \(0<\tau<\infty\) and \(0<r<+\infty\), let \(\mu_{\tau,r}^{\star}\in\mathcal{P}_{2}(\Theta)\) be a solution to_
\[\mu_{\tau,r}^{\star}\in\operatorname*{argmin}_{\mu\in\mathcal{P}_{2}(\Theta) }\mathcal{E}_{\tau,r}(\mu). \tag{20}\]
For large values of \(\tau\) and \(r=d\|f\|_{\mathcal{B}^{0}(\Omega)}\), solutions of (20) provide accurate approximations of the solution of (1). This result is stated in Theorem 3.
**Theorem 3**.: _There exists \(C>0\) such that for all \(m\in\mathbb{N}^{*}\) and any solution \(\mu_{m,d\|f\|_{\mathcal{B}^{0}(\Omega)}}^{\star}\) to (20) with \(\tau=m\) and \(r=d\|f\|_{\mathcal{B}^{0}(\Omega)}\), it holds that:_
\[\left\|u^{\star}-\int_{\Theta}\Phi_{m}(\theta;\cdot)d\mu_{m,d\|f\|_{\mathcal{B }^{0}(\Omega)}}^{\star}(\theta)\right\|_{H^{1}(\Omega)}\leq Cd\frac{\|f\|_{ \mathcal{B}^{0}(\Omega)}}{\sqrt{m}}\]
_where \(u^{\star}\) is the solution of the equation (1)._
Proof.: For all \(m\in\mathbb{N}^{*}\), let \(\tilde{u}_{m}\in\mathcal{F}_{\sigma_{H,m},m}(\|u^{*}\|_{\mathcal{B}})\) satisfying (15) for \(u=u^{*}\) (using Theorem 1). Since \(\|u^{*}\|_{\mathcal{B}(\Omega)}\leq d\|f\|_{\mathcal{B}^{0}(\Omega)}\) thanks to Theorem 2 and by Remark 3, \(\tilde{u}_{m}\) can be rewritten using a probability measure \(\mu_{m}\) with support in \(K_{d\|f\|_{\mathcal{B}^{0}(\Omega)}}\) as :
\[\forall x\in\Omega,\quad\tilde{u}_{m}(x)=\int_{\Theta}\Phi_{m}(\theta;x)\,d \mu_{m}(\theta).\]
Let \(\mu_{m,d\|f\|_{\mathcal{B}^{0}(\Omega)}}^{\star}\) be a minimizer of (20) with \(\tau=m\) and \(r=d\|f\|_{\mathcal{B}^{0}(\Omega)}\). Then, it holds that:
\[\mathcal{E}_{m,d\|f\|_{\mathcal{B}^{0}(\Omega)}}\left(\mu_{m,d\|f\|_{\mathcal{ B}^{0}(\Omega)}}^{\star}\right)\leq\mathcal{E}_{m,d\|f\|_{\mathcal{B}^{0}( \Omega)}}(\mu_{m}),\]
which by Remark 1, is equivalent to :
\[\bar{a}(u^{\star}_{m}-u^{\star},u^{\star}_{m}-u^{\star})\leq\bar{a}(\tilde{u}_{m}- u^{\star},\tilde{u}_{m}-u^{\star}).\]
where for all \(x\in\Omega\),
\[u^{\star}_{m}(x):=\int_{\Theta}\Phi_{m}(\theta;x)\,d\mu^{\star}_{m,d\|f\|_{ \mathcal{B}^{0}(\Omega)}}(\theta).\]
Denoting by \(\alpha\) and \(L\) respectively the coercivity and continuity constants of \(\bar{a}\), we obtain that
\[\|u^{\star}_{m}-u^{\star}\|_{H^{1}(\Omega)}\leq\frac{L}{\alpha}\|\tilde{u}_{m} -u^{\star}\|_{H^{1}(\Omega)}\leq Cd\frac{\|f\|_{\mathcal{B}^{0}(\Omega)}}{ \sqrt{m}}.\]
### Main results
In this section, we find a solution to Problem 1 using gradient curve techniques. More precisely, we define and prove the existence of a gradient descent curve such that, if it converges, its limit is necessarily a global minimizer. In all the sequel, we fix an a priori chosen value of \(\tau>0\).
#### 2.5.1 Well-posedness
First, we introduce the concept of gradient curve which formally writes for \(r>0\):
\[\forall t\geq 0,\,\,\,\frac{d}{dt}\mu^{r}(t)=-\nabla\,\mathcal{E}_{\tau,r}( \mu^{r}(t)). \tag{21}\]
Equation (21) has no rigorous mathematical meaning since the space \(\mathcal{P}_{2}(\Theta)\) is not a Hilbert space and, consequently, the gradient of \(\mathcal{E}_{\tau,r}\) is not available in a classical sense. Nevertheless, \(\mathcal{P}_{2}(\Theta)\) being an Alexandrov space, it carries a differential structure which makes it possible to define gradients properly. The careful reader wishing to understand this structure can find a complete recap of all useful definitions and properties of Alexandrov spaces in Appendix A.
Before stating our main well-posedness results, we recall the basic definition of the local slope [16]. In the sequel, we denote by \(\mathcal{P}_{2}(K_{r})\) the set of probability measures on \(\Theta\) with support included in \(K_{r}\).
**Definition 4**.: _At every \(\mu\in\mathcal{P}_{2}(K_{r})\), the local slope writes :_
\[|\nabla^{-}\,\mathcal{E}_{\tau,r}\,|(\mu):=\limsup_{\nu\to\mu}\frac{( \mathcal{E}_{\tau,r}(\mu)-\mathcal{E}_{\tau,r}(\nu))_{+}}{W_{2}(\mu,\nu)}\]
_which may be infinite._
In Section 3.1, we prove two theorems; the first one states the existence and the uniqueness of the gradient curve with respect to \(\mathcal{E}_{\tau,r}\) when \(r<\infty\).
**Theorem 4**.: _For all \(\mu_{0}\in\mathcal{P}_{2}(K_{r})\), there exists a unique locally Lipschitz gradient curve \(\mu^{r}:\mathbb{R}_{+}\to\mathcal{P}_{2}(K_{r})\) which is also a curve of maximal slope with respect to the upper gradient \(|\nabla^{-}\,\mathcal{E}_{\tau,r}\,|\). Moreover, for almost all \(t\geq 0\), there exists a vector field \(v^{r}_{t}\in L^{2}(\Theta;\,d\mu^{r}(t))^{d+3}\) such that_
\[\int_{\Theta}\|v^{r}_{t}\|^{2}\,d\mu^{r}(t)=\|v^{r}_{t}\|^{2}_{L^{2}(\Theta;\, d\mu^{r}(t))}<+\infty \tag{22}\]
_and :_
\[\left\{\begin{array}{rl}\partial_{t}\mu^{r}(t)+\operatorname{div}(v^{r}_{t} \mu^{r}(t))=&0\\ \mu^{r}(0)=&\mu_{0}\\ \mu^{r}(t)\in&\mathcal{P}_{2}(K_{r}).\end{array}\right. \tag{23}\]
In the second theorem, we focus on the case when \(r=+\infty\) for which we formally take the limit of gradient curves \((\mu^{r})_{r>0}\) as \(r\) goes to infinity. Introducing the following quantities, the definition of which will be made precise below :
\[\left\{\begin{array}{rl}\phi_{\mu}(\theta):=&d\,\mathcal{E}\,|_{P_{\tau}(\mu)}(\Phi_{\tau}(\theta;\cdot)),\\ v_{\mu}(\theta):=&\nabla_{\theta}\phi_{\mu}(\theta),\end{array}\right.\]
and \(\mathbf{P}\) the projection on the tangent bundle of \(\Theta\) the precise definition of which is given in Definition 5, the following theorem is proved.
**Theorem 5**.: _For all \(\mu_{0}\) compactly supported, there exists a curve \(\mu:\mathbb{R}_{+}\to\mathcal{P}_{2}(\Theta)\) such that :_
\[\begin{cases}\partial_{t}\mu(t)+\operatorname{div}((-\mathbf{P}v_{\mu(t)})\mu( t))=0\\ \mu(0)=\mu_{0}\end{cases} \tag{24}\]
_and for almost all \(t\geq 0\) :_
\[\int_{\Theta}|\mathbf{P}v_{\mu(t)}|^{2}\;d\mu(t)=\|\mathbf{P}v_{\mu(t)}\|_{L^ {2}(\Theta;d\mu(t))}^{2}<+\infty.\]
_Moreover, the solution satisfies :_
\[\forall t\geq 0,\mu(t)=\chi(t)\#\mu_{0}\]
_with \(\chi:\mathbb{R}_{+}\times\Theta\to\Theta\) solution to_
\[\begin{cases}\partial_{t}\chi(t;\theta)=-\mathbf{P}v_{\mu(t)}(\theta)\\ \chi(0;\theta)=\theta.\end{cases}\]
In Remark 6, we argue why proving the existence and uniqueness of a gradient curve for \(\mathcal{E}_{\tau,\infty}\) is out of reach. This is why \(\mu:\mathbb{R}_{+}\to\mathcal{P}_{2}(\Theta)\) is described as a limiting gradient curve and not a gradient curve itself in Theorem 5.
#### 2.5.2 Link with neural network
Our motivation for considering the analysis presented in the previous section is that we can link the learning phase of a neural network with the optimization procedure given by gradient curves defined above. Indeed, let \(m>0\) be an integer. A two-layer neural network \(u\) with \(\sigma_{H,\tau}\) as activation function can always be written as :
\[u=\frac{1}{m}\sum_{i=1}^{m}\Phi_{\tau}(\theta_{i},.) \tag{25}\]
with \(\theta_{i}\in\Theta\). Then, we differentiate the functional \(\mathcal{F}:(\theta_{1},\cdots,\theta_{m})\to\mathcal{E}\left(\frac{1}{m}\sum _{i=1}^{m}\Phi_{\tau}(\theta_{i},\cdot)\right)\) :
\[d\,\mathcal{F}\,|_{\theta_{1},\cdots,\theta_{m}}(d\theta_{1},\cdots,d\theta_{ m})=d\,\mathcal{E}\,|_{u}\left(\frac{1}{m}\sum_{i=1}^{m}\nabla_{\theta}\Phi_{ \tau}(\theta_{i},\cdot)\cdot d\theta_{i}\right).\]
Thus, the gradient of \(\mathcal{F}\) is given by :
\[\nabla_{\theta_{i}}\,\mathcal{F}(\theta_{1},\cdots,\theta_{m})=\frac{1}{m} \nabla_{\theta}\phi_{\mu}(\theta_{i})\]
where :
\[\mu:=\frac{1}{m}\sum_{i=1}^{m}\delta_{\theta_{i}}\in\mathcal{P}_{2}(\Theta). \tag{26}\]
As a consequence, a gradient descent of \(\mathcal{F}\), in the sense that, for all \(1\leq i\leq m\),
\[\begin{cases}\frac{d}{dt}\theta_{i}(t)=-m\mathbf{P}\nabla_{\theta_{i}}\, \mathcal{F}(\theta_{1}(t),\cdots,\theta_{m}(t))\\ \theta_{i}(0)=\theta_{i,0},\end{cases}\]
is equivalent to the gradient curve of \(\mathcal{E}_{\tau,+\infty}\) with initial condition given by
\[\mu_{0,m}:=\frac{1}{m}\sum_{i=1}^{m}\delta_{\theta_{i,0}}. \tag{27}\]
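For illustration, the following sketch implements this particle discretization with PyTorch. It is our own simplification, not the scheme analyzed in this paper: we use the unregularized hat \(\sigma_{H}\) instead of \(\sigma_{H,\tau}\), a Monte Carlo quadrature of the energy \(u\mapsto\frac{1}{2}\int_{\Omega}|\nabla u|^{2}dx-\int_{\Omega}fu\,dx+\frac{1}{2}\big(\int_{\Omega}u\,dx\big)^{2}\) (consistent with the first variation (30)), an explicit Euler step in place of the continuous-time flow, and a renormalization of \(w\) standing in for the projection \(\mathbf{P}\); the source term \(f\) and all sizes are arbitrary. The initialization follows the spirit of Hypothesis 1 stated in Section 2.5.3.

```python
import torch

torch.manual_seed(0)
d, m, N = 2, 256, 2048                       # dimension, particles, Monte Carlo points
f = lambda x: torch.cos(torch.pi * x[:, 0])  # example source term (our choice)

def hat(z):        # sigma_H of (8)
    return torch.relu(z + 1) - torch.relu(2 * z) + torch.relu(z - 1)

def hat_prime(z):  # a.e. derivative of the hat: +1 on (-1, 0), -1 on (0, 1)
    return ((z > -1) & (z < 0)).float() - ((z > 0) & (z < 1)).float()

# Initialization in the spirit of Hypothesis 1: c = a = 0, w uniform on the
# unit sphere, b uniform on [-sqrt(d) - 2, sqrt(d) + 2].
c = torch.zeros(m, requires_grad=True)
a = torch.zeros(m, requires_grad=True)
w = torch.nn.functional.normalize(torch.randn(m, d), dim=1).requires_grad_()
b = ((2 * d ** 0.5 + 4) * torch.rand(m) - d ** 0.5 - 2).requires_grad_()

lr = 0.05  # the factor m of the text is absorbed into the learning rate
for step in range(1001):
    x = torch.rand(N, d)                     # uniform samples in Omega = [0,1]^d
    z = x @ w.T + b                          # (N, m)
    u = (c + a * hat(z)).mean(dim=1)         # u = (1/m) sum_i Phi(theta_i; x)
    grad_u = (hat_prime(z) * a) @ w / m      # nabla_x u, assembled analytically
    energy = (0.5 * (grad_u ** 2).sum(dim=1).mean()
              - (f(x) * u).mean()
              + 0.5 * u.mean() ** 2)
    for p in (c, a, w, b):
        p.grad = None
    energy.backward()
    with torch.no_grad():
        for p in (c, a, w, b):
            p -= lr * p.grad
        w /= w.norm(dim=1, keepdim=True)     # retraction onto the sphere (mimics P)
    if step % 250 == 0:
        print(step, float(energy))
```

On this toy run the energy decreases up to Monte Carlo noise; Theorem 6 below quantifies how such particle curves track the limiting gradient curve when the initial particles sample \(\mu_{0}\) well.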
**Theorem 6**.: _Let \(\mu_{0}\in\mathcal{P}_{2}(\Theta)\) compactly supported, \((\mu_{0,m})_{m\in\mathbb{N}^{*}}\) be such that for all \(m\in\mathbb{N}^{*}\), \(\mu_{0,m}\) is of the form (27) for some \((\theta_{i,0})_{1\leq i\leq m}\subset\mathrm{Supp}(\mu_{0})\) and \(\lim\limits_{m\to+\infty}W_{2}(\mu_{0,m},\mu_{0})=0\)._
_Let \(\mu:\mathbb{R}_{+}\to\mathcal{P}_{2}(\Theta)\) and \(\mu_{m}:\mathbb{R}_{+}\to\mathcal{P}_{2}(\Theta)\) be the gradient curves constructed in Theorem 5 associated respectively to the initial conditions \(\mu(0)=\mu_{0}\) and \(\mu_{m}(0)=\mu_{0,m}\). Then for all \(T>0\), there exists a constant \(C_{T}>0\) such that_
\[\sup\limits_{0\leq t\leq T}W_{2}(\mu(t),\mu_{m}(t))\leq C_{T}W_{2}(\mu_{0},\mu _{0,m}).\]
This theorem is proved in Section 3.2.
#### 2.5.3 Convergence
Our convergence result towards a global optimum is based on the following hypothesis on the initial measure \(\mu_{0}\) :
**Hypothesis 1**.: _The support of the measure \(\mu_{0}\) verifies :_
\[\{0\}\times\{0\}\times S_{\mathbb{R}^{d}}(1)\times[-\sqrt{d}-2,\sqrt{d}+2] \subset\mathrm{Supp}(\mu_{0})\]
Under such a hypothesis, one gets a result of convergence in the spirit of a previous work by Bach and Chizat [2] :
**Theorem 7**.: _If \(\mu_{0}\) satisfies Hypothesis 1 and \(\mu(t)\) converges towards \(\mu^{\star}\in\mathcal{P}_{2}(\Theta)\) as \(t\) goes to infinity, then \(\mu^{\star}\) is optimal for Problem 1._
This theorem is proved in Section 3.3.
## 3 Gradient curve
This section is dedicated to the proof of the two theorems stated in Section 2.5.
### Well-posedness
#### 3.1.1 Proof of Theorem 4
Let us fix some value of \(r>0\) in this section. In the following, \(C>0\) will denote an arbitrary constant which depends neither on \(\tau\) nor on \(r\). Let \(\mathfrak{P}\) be the set of geodesics of \(\Theta\), i.e. the set of absolutely continuous curves \(\pi:[0,1]\to\Theta\) such that for all \(t_{1},t_{2}\in[0,1]\), \(d(\pi(t_{1}),\pi(t_{2}))=d(\pi(0),\pi(1))|t_{1}-t_{2}|\). Besides, it holds that for all \(0\leq t\leq 1\), we have \(|\dot{\pi}(t)|=d(\pi(0),\pi(1))\).
For all \(s\in[0,1]\), we define the evaluation map \(e_{s}:\mathfrak{P}\to\Theta\) such that \(e_{s}(\pi):=\pi(s)\). Owing to this, McCann interpolation gives the fundamental characterization of constant speed geodesics in \(\mathcal{P}_{2}(\Theta):\)
**Proposition 2**.: _[_17_, Proposition 2.10]_ _For all \(\mu,\nu\in\mathcal{P}_{2}(\Theta)\) and any geodesic \(\kappa:[0,1]\to\mathcal{P}_{2}(\Theta)\) between them (i.e. such that \(\kappa(0)=\mu\) and \(\kappa(1)=\nu\)) in the \(W_{2}\) sense, there exists \(\Pi\in\mathcal{P}_{2}(\mathfrak{P})\) such that :_
\[\forall t\in[0,1],\ \kappa(t)=e_{t}\#\Pi.\]
**Remark 5**.: _As \(e_{0}\#\Pi=\mu\) and \(e_{1}\#\Pi=\nu\), the support of \(\Pi\) is included in the set of geodesics \(\pi:[0,1]\to\Theta\) such that \(\pi(0)\) belongs to the support of \(\mu\) and \(\pi(1)\) belongs to the support of \(\nu\). In addition, it holds that \(\gamma:=(e_{0},e_{1})\#\Pi\) is then an optimal transport plan between \(\mu\) and \(\nu\) for the quadratic cost, i.e. \(W_{2}(\mu,\nu)^{2}=\int_{\Theta\times\Theta}|\theta-\widetilde{\theta}|^{2}\,d\gamma(\theta,\widetilde{\theta})\)._
The next result states smoothness properties of geodesics on \(\Theta\) which are direct consequences of the smoothness properties of geodesics on the unit sphere of \(\mathbb{R}^{d}\). It is a classical result and its proof is left to the reader.
**Lemma 4**.: _There exists \(C>0\) such that for all \((\theta,\tilde{\theta})\) in \(\Theta^{2}\), all geodesic \(\pi:[0,1]\to\Theta\) such that \(\pi(0)=\theta\) and \(\pi(1)=\widetilde{\theta}\) and all \(0\leq s\leq t\leq 1\),_
\[|\pi(t)-\pi(s)|\leq d(\pi(t),\pi(s))=(t-s)d(\theta,\tilde{\theta})\leq C(t-s)| \tilde{\theta}-\theta|\]
_and_
\[\left|\frac{d}{dt}\pi(t)\right|\leq d(\theta,\tilde{\theta})\leq C|\tilde{ \theta}-\theta|.\]
In order to prove the well-posedness, it is necessary to get information about the smoothness of \(\mathcal{E}_{\tau,r}\).
**Proposition 3**.: _The functional \(\mathcal{E}_{\tau,r}\) is proper, coercive, differentiable on \(\mathcal{P}_{2}(K_{r})\). Moreover, there exists a constant \(C_{r,\tau}>0\) such that for all \(\mu,\nu\in\mathcal{P}_{2}(K_{r})\), \(\gamma\in\Gamma(\mu,\nu)\) with support included in \(K_{r}\times K_{r}\):_
\[\left|\mathcal{E}_{\tau,r}(\nu)-\mathcal{E}_{\tau,r}(\mu)-\int_{\Theta^{2}}v_{\mu}(\theta)\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta})\right|\leq C_{r,\tau}c_{2}(\gamma) \tag{28}\]
_with_
\[c_{2}(\gamma):=\int_{\Theta^{2}}(\theta-\tilde{\theta})^{2}\,d\gamma(\theta, \tilde{\theta}),\]
_and_
\[v_{\mu}(\theta):=\nabla_{\theta}\phi_{\mu}(\theta) \tag{29}\]
_where for all \(\theta\in K_{r}\),_
\[\begin{split}\phi_{\mu}(\theta)&:=\langle\nabla_{x}P_{\tau}(\mu),\nabla_{x}\Phi_{\tau}(\theta;\cdot)\rangle_{L^{2}(\Omega)}-\langle f,\Phi_{\tau}(\theta;\cdot)\rangle_{L^{2}(\Omega)}+\int_{\Omega}P_{\tau}(\mu)(x)dx\times\int_{\Omega}\Phi_{\tau}(\theta;x)dx\\ &=d\,\mathcal{E}\,|_{P_{\tau}(\mu)}(\Phi_{\tau}(\theta;\cdot)).\end{split} \tag{30}\]
The properness and coercivity are easy to prove and left to the reader. Before proving the differentiability property of \(\mathcal{E}_{\tau,r}\), we will need the following auxiliary lemma.
**Lemma 5**.: _There exists a constant \(C>0\) such that for all \(\tau>0\) and all \(\theta\in\Theta\), we have_
\[\begin{split}\|\Phi_{\tau}(\theta;\cdot)\|_{L^{\infty}(\Omega)} &\leq C|\theta|,\\ \|\nabla_{x}\Phi_{\tau}(\theta;\cdot)\|_{L^{\infty}(\Omega)}& \leq C|\theta|,\\ \|\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\|_{L^{\infty}(\Omega)} &\leq C|\theta|,\\ \|H_{\theta}\Phi_{\tau}(\theta;\cdot)\|_{L^{\infty}(\Omega)}& \leq C|\theta|\tau,\\ \|\nabla_{x}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\|_{L^{\infty}( \Omega)}&\leq C|\theta|\tau,\\ \|\nabla_{x}H_{\theta}\Phi_{\tau}(\theta;\cdot)\|_{L^{\infty}( \Omega)}&\leq C|\theta|\tau^{2},\end{split}\]
_where for all \(\theta\in\Theta\) and \(x\in\Omega\), \(H_{\theta}\Phi_{\tau}(\theta;x)\) denotes the Hessian of \(\Phi_{\tau}\) with respect to the variable \(\theta\) at the point \((\theta,x)\in\Theta\times\Omega\)._
Proof.: Let \(\theta=(c,a,w,b)\in\Theta\). It then holds that, for all \(x\in\Omega\),
\[\begin{split}\begin{cases}\frac{\partial\Phi_{\tau}(\theta;x)}{ \partial c}&=1\\ \frac{\partial\Phi_{\tau}(\theta;x)}{\partial a}&=\sigma_{H,\tau}(w \cdot x+b)\\ \frac{\partial\Phi_{\tau}(\theta;x)}{\partial w}&=ax\sigma^{ \prime}_{H,\tau}(w\cdot x+b)\\ \frac{\partial\Phi_{\tau}(\theta;x)}{\partial b}&=a\sigma^{ \prime}_{H,\tau}(w\cdot x+b).\end{cases}\end{split} \tag{31}\]
This expression, together with (10), yields the third desired inequality; the first one follows directly from the definition (18) of \(\Phi_{\tau}\). In addition, the nonzero terms of the Hessian matrix read as:
\[\left\{\begin{aligned} \frac{\partial^{2}\Phi_{\tau}(\theta;x)}{ \partial a\partial w}&=\sigma^{\prime}_{H,\tau}(w\cdot x+b)x\\ \frac{\partial^{2}\Phi_{\tau}(\theta;x)}{\partial a\partial b}& =\sigma^{\prime}_{H,\tau}(w\cdot x+b)\\ \frac{\partial^{2}\Phi_{\tau}(\theta;x)}{\partial^{2}w}& =a\sigma^{\prime\prime}_{H,\tau}(w\cdot x+b)xx^{T}\\ \frac{\partial^{2}\Phi_{\tau}(\theta;x)}{\partial w\partial b}& =a\sigma^{\prime\prime}_{H,\tau}(w\cdot x+b)x\\ \frac{\partial^{2}\Phi_{\tau}(\theta;x)}{\partial^{2}b}& =a\sigma^{\prime\prime}_{H,\tau}(w\cdot x+b).\end{aligned}\right. \tag{32}\]
From these expressions, together with (10), we easily get that, for all \(\theta\in K_{r}\),
\[\left\|H_{\theta}\Phi_{\tau}(\theta;\cdot)\right\|_{L^{\infty}(\Omega)}\leq Cr\tau,\]
for some constant \(C>0\) independent of \(\theta\), \(r\) and \(\tau\). Moreover, for all \(x\in\Omega\),
\[\nabla_{x}\Phi_{\tau}(\theta;x)=aw\sigma^{\prime}_{H,\tau}(w\cdot x+b), \tag{33}\]
which implies that
\[\left\{\begin{aligned} \frac{\partial\nabla_{x}\Phi_{\tau}( \theta;x)}{\partial c}&=0\\ \frac{\partial\nabla_{x}\Phi_{\tau}(\theta;x)}{\partial a}& =w\sigma^{\prime}_{H,\tau}(w\cdot x+b)\\ \frac{\partial\nabla_{x}\Phi_{\tau}(\theta;x)}{\partial w}& =a\sigma^{\prime}_{H,\tau}(w\cdot x+b)I_{d}+axw^{T}\sigma^{\prime \prime}_{H,\tau}(w\cdot x+b)\\ \frac{\partial\nabla_{x}\Phi_{\tau}(\theta;x)}{\partial b}& =aw\sigma^{\prime\prime}_{H,\tau}(w\cdot x+b).\end{aligned}\right. \tag{34}\]
This then implies that
\[\left\|\nabla_{\theta}\nabla_{x}\Phi_{\tau}(\theta;\cdot)\right\|_{L^{\infty} (\Omega)}\leq Cr\tau.\]
Moreover, it then holds, using again (10), that for all \(\theta\in K_{r}\),
\[\left\|H_{\theta}\nabla_{x}\Phi_{\tau}(\theta;\cdot)\right\|_{L^{\infty}( \Omega)}\leq Cr\tau^{2},\]
for some constant \(C>0\) independent of \(\theta\), \(r\) and \(\tau\).
The following corollary is also a prerequisite for the proof of Proposition 3.
**Corollary 1**.: _There exists a constant \(C_{\tau}>0\) and a constant \(C_{r,\tau}>0\) such that for all \(\mu,\nu\in\mathcal{P}_{2}(K_{r})\) :_
\[\left\|P_{\tau}(\mu)\right\|_{H^{1}(\Omega)}^{2}\leq C_{\tau}\int_{\Theta}| \theta|^{2}d\mu(\theta), \tag{35}\]
_and_
\[\left\|P_{\tau}(\mu)-P_{\tau}(\nu)\right\|_{H^{1}(\Omega)}^{2}\leq C_{r,\tau} W_{2}^{2}(\mu,\nu).\]
Proof.: From Lemma 5 we immediately obtain that, for all \(\tau,r>0\), there exists a constant \(C_{\tau,r}>0\) such that for all \(\theta_{1},\theta_{2}\in K_{r}\),
\[\left\{\begin{aligned}\left\|\nabla_{\theta}\Phi_{\tau}(\theta_{1};\cdot)\right\|_{H^{1}(\Omega)}&\leq C_{r,\tau}|\theta_{1}|,\\ \left\|\nabla_{\theta}\Phi_{\tau}(\theta_{1};\cdot)-\nabla_{\theta}\Phi_{\tau}(\theta_{2};\cdot)\right\|_{H^{1}(\Omega)}&\leq C_{r,\tau}|\theta_{1}-\theta_{2}|.\end{aligned}\right.\]
The corollary immediately follows from that fact.
Now we are able to prove Proposition 3.
Proof.: First, we focus on the proof of (28)-(30). As \(\Phi_{\tau}\) and \(\mathcal{E}\) are smooth, it holds that for all \(x\in\Omega\), \(\theta,\widetilde{\theta}\in\Theta\), \(u,\tilde{u}\in H^{1}(\Omega)\),
\[\begin{cases}\Phi_{\tau}(\tilde{\theta};x)=\Phi_{\tau}(\theta;x)+\nabla_{ \theta}\Phi_{\tau}(\theta;x)\cdot(\tilde{\theta}-\theta)+M_{\tau}(\theta, \tilde{\theta};x)\\ \mathcal{E}(\tilde{u})=\mathcal{E}(u)+d\,\mathcal{E}\,|_{u}(\tilde{u}-u)+N( \tilde{u}-u),\end{cases}\]
where \(N(u):=\dfrac{1}{2}\bar{a}(u,u)\) for all \(u\in H^{1}(\Omega)\) and \(M_{\tau}(\theta,\tilde{\theta};x):=\int_{0}^{1}(\tilde{\theta}-\theta)^{T}H_{\theta}\Phi_{\tau}(\theta+t(\tilde{\theta}-\theta);x)(\tilde{\theta}-\theta)(1-t)dt.\) Using Lemma 5, there exists a constant \(C>0\) independent of \(r\) and \(\tau\) such that:
* \(\forall x\in\Omega,\ \forall\theta,\tilde{\theta}\in K_{r},\ |M_{\tau}(\theta,\tilde{\theta};x)|\leq Cr\tau|\theta-\tilde{\theta}|^{2},\)
* \(\forall x\in\Omega,\ \forall\theta,\tilde{\theta}\in K_{r},\ |\nabla_{x}M_{\tau}(\theta,\tilde{\theta};x)|\leq Cr\tau^{2}|\theta-\tilde{\theta}|^{2}.\)
Moreover, there exists a constant \(C>0\) such that for all \(u\in H^{1}(\Omega)\),
\[0\leq N(u)\leq C\|u\|_{H^{1}(\Omega)}^{2}. \tag{36}\]
Thus, for \(\mu,\nu\in\mathcal{P}_{2}(K_{r})\) and \(\gamma\in\Gamma(\mu,\nu)\) supported in \(K_{r}^{2}\), it holds that:
\[\mathcal{E}_{\tau,r}(\nu) =\mathcal{E}\,\Big{(}\int_{K_{r}}\Phi_{\tau}(\tilde{\theta}; \cdot)d\nu(\tilde{\theta})\Big{)}\] \[=\mathcal{E}\,\Big{(}\int_{K_{r}^{2}}\Phi_{\tau}(\tilde{\theta}; \cdot)d\gamma(\theta,\tilde{\theta})\Big{)}\] \[=\mathcal{E}\,\Big{(}\int_{K_{r}^{2}}\Big{[}\Phi_{\tau}(\theta; \cdot)+\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)+ M_{\tau}(\theta,\tilde{\theta};\cdot)\Big{]}\,d\gamma(\theta,\tilde{\theta})\Big{)}\] \[=\mathcal{E}_{\tau,r}(\mu)+d\,\mathcal{E}\,|_{P_{\tau}(\mu)} \Big{(}\int_{K_{r}^{2}}\Big{[}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot( \tilde{\theta}-\theta)+M_{\tau}(\theta,\tilde{\theta};\cdot)\Big{]}\,d\gamma (\theta,\tilde{\theta})\Big{)}\] \[+N\Big{(}\int_{K_{r}^{2}}M_{\tau}(\theta,\tilde{\theta};\cdot)d \gamma(\theta,\tilde{\theta})+\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}( \theta;\cdot)\cdot(\tilde{\theta}-\theta)\,d\gamma(\theta,\tilde{\theta}) \Big{)},\]
Using standard differentiation under the integral sign, a bound on \(M_{\tau}\) is available :
\[\Big{\|}\int_{K_{r}^{2}}M_{\tau}(\theta,\tilde{\theta};\cdot)d \gamma(\theta,\tilde{\theta})\Big{\|}_{H^{1}(\Omega)}^{2} =\Big{\|}\int_{K_{r}^{2}}M_{\tau}(\theta,\tilde{\theta};\cdot)d \gamma(\theta,\tilde{\theta})\Big{\|}_{L^{2}(\Omega)}^{2}+\Big{\|}\int_{K_{r}^ {2}}\nabla_{x}M_{\tau}(\theta,\tilde{\theta};\cdot)d\gamma(\theta,\tilde{ \theta})\Big{\|}_{L^{2}(\Omega)}^{2}\] \[\leq\int_{K_{r}^{2}}\|M_{\tau}(\theta,\tilde{\theta};\cdot)\|_{L^ {2}(\Omega)}^{2}d\gamma(\theta,\tilde{\theta})+\int_{K_{r}^{2}}\|\nabla_{x}M_ {\tau}(\theta,\tilde{\theta};\cdot)\|_{L^{2}(\Omega)}^{2}d\gamma(\theta, \tilde{\theta})\] \[\leq C(r^{2}\tau^{2}+r^{2}\tau^{4})\int_{\Theta^{2}}|\tilde{ \theta}-\theta|^{4}d\gamma(\theta,\tilde{\theta})\] \[\leq C(r^{4}\tau^{2}+r^{4}\tau^{4})\int_{\Theta^{2}}|\tilde{ \theta}-\theta|^{2}d\gamma(\theta,\tilde{\theta})\] \[=C(r^{4}\tau^{2}+r^{4}\tau^{4})c_{2}(\gamma),\]
where we used Jensen inequality to get the first inequality and the boundedness of \(K_{r}\) to get the last inequality. Using Corollary 1 and the uniform continuity of \(d\,\mathcal{E}\), it holds :
\[\left|d\,\mathcal{E}\,|_{P_{\tau}(\mu)}\left(\int_{K_{r}^{2}}M_{\tau}(\theta,\tilde{\theta};\cdot)d\gamma(\theta,\tilde{\theta})\right)\right|\leq C_{\tau,r}c_{2}(\gamma).\]
Moreover, using similar calculations, it holds that
\[\left\|\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\,d\gamma(\theta,\tilde{\theta})\right\|_{H^{1}(\Omega)}^{2} =\left\|\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\,d\gamma(\theta,\tilde{\theta})\right\|_{L^{2}(\Omega)}^{2}\] \[+\left\|\int_{K_{r}^{2}}\nabla_{x}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\,d\gamma(\theta,\tilde{\theta})\right\|_{L^{2}(\Omega)}^{2},\] \[\leq\int_{K_{r}^{2}}\left\|\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\right\|_{L^{2}(\Omega)}^{2}d\gamma(\theta,\tilde{\theta})\] \[+\int_{K_{r}^{2}}\left\|\nabla_{x}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\right\|_{L^{2}(\Omega)}^{2}\,d\gamma(\theta,\tilde{\theta}),\] \[\leq C(r^{2}+r^{2}\tau^{2})\int_{\Theta^{2}}|\tilde{\theta}-\theta|^{2}d\gamma(\theta,\tilde{\theta})\] \[\leq C(r^{2}+r^{2}\tau^{2})c_{2}(\gamma).\]
Hence, together with the previous bounds and (36), we easily obtain that there exists a constant \(C_{r,\tau}>0\) such that for all \(\mu,\nu\in\mathcal{P}_{2}(K_{r})\), it holds that
\[\left|\mathcal{E}_{\tau,r}(\nu)-\mathcal{E}_{\tau,r}(\mu)-d\,\mathcal{E}\left|{}_{P_{\tau}(\mu)}\left(\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta})\right)\right|\leq C_{r,\tau}c_{2}(\gamma). \tag{37}\]
Now we focus on the first-order term; by Fubini's theorem and differentiation under the integral sign, we obtain that:
\[d\,\mathcal{E}\,|_{P_{\tau}(\mu)}\left(\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)d\gamma\right) =\left\langle\nabla_{x}P_{\tau}(\mu),\nabla_{x}\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta})\right\rangle_{L^{2}(\Omega)}\] \[-\left\langle f,\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta})\right\rangle_{L^{2}(\Omega)}\] \[+\int_{\Omega}P_{\tau}(\mu)(x)\,dx\times\int_{\Omega}\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;x)\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta})dx\] \[=\int_{K_{r}^{2}}\langle\nabla_{x}P_{\tau}(\mu),\nabla_{x}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\rangle_{L^{2}(\Omega)}d\gamma(\theta,\tilde{\theta})\] \[-\int_{K_{r}^{2}}\nabla_{\theta}\langle f,\Phi_{\tau}(\theta;\cdot)\rangle_{L^{2}(\Omega)}\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta})\] \[+\int_{K_{r}^{2}}\int_{\Omega}P_{\tau}(\mu)(x)dx\times\int_{\Omega}\nabla_{\theta}\Phi_{\tau}(\theta;x)\cdot(\tilde{\theta}-\theta)dxd\gamma(\theta,\tilde{\theta})\] \[=\int_{K_{r}^{2}}v_{\mu}(\theta)\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta}),\]
where
\[v_{\mu}(\theta):=\nabla_{\theta}\phi_{\mu}(\theta)\quad\gamma-\text{almost everywhere}, \tag{38}\]
with
\[\phi_{\mu}(\theta):=\langle\nabla_{x}P_{\tau}(\mu),\nabla_{x}\Phi_{\tau}( \theta;\cdot)\rangle_{L^{2}(\Omega)}-\langle f,\Phi_{\tau}(\theta;\cdot) \rangle_{L^{2}(\Omega)}+\int_{\Omega}P_{\tau}(\mu)(x)dx\times\int_{\Omega} \Phi_{\tau}(\theta;x)dx.\]
Note that (38) is equivalent to
\[v_{\mu}(\theta):=\nabla_{\theta}\phi_{\mu}(\theta)\quad\mu-\text{almost everywhere},\]
as \(v_{\mu}\) only depends on \(\theta\).
To prove a well-posedness result, some convexity is needed. More precisely, one should check that \(\mathcal{E}_{\tau,r}\) is semiconvex along geodesics.
**Proposition 4**.: _For all \(\tau,r>0\), there exists \(\lambda_{\tau,r}>0\) such that for all \(\mu,\nu\in\mathcal{P}_{2}(K_{r})\) with associated geodesic \(\kappa(t):=e_{t}\#\Pi\) given by Proposition 2, the map \([0,1]\ni t\mapsto\mathcal{E}_{\tau,r}(\kappa(t))\) is differentiable and its derivative is \(\lambda_{\tau,r}\)-Lipschitz._
Proof.: First of all, one has to check that for all \(t\in[0,1]\), \(\kappa(t)\in\mathcal{P}_{2}(K_{r})\). This is a direct consequence of the fact that \(\mu,\nu\) are supported in \(K_{r}\), Remark 5 and that \(K_{r}\) is convex (in the geodesic sense).
Let \(t,s\in[0,1]\) and define \(\alpha(t,s):=(e_{t},e_{s})\#\Pi\in\Gamma(\kappa(t),\kappa(s))\). By (37), it holds that
\[\bigg{|}\mathcal{E}_{\tau,r}(\kappa(s))-\mathcal{E}_{\tau,r}(\kappa(t))-\int_{\Theta^{2}}d\,\mathcal{E}\,|_{P_{\tau}(\kappa(t))}\Big{(}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\Big{)}d\alpha(t,s)(\theta,\tilde{\theta})\bigg{|}\leq C_{r,\tau}c_{2}(\alpha(t,s)),\]
which reads equivalently as
\[\bigg{|}\frac{\mathcal{E}_{\tau,r}(\kappa(s))-\mathcal{E}_{\tau,r}(\kappa(t))}{s-t}-\int_{\mathfrak{P}}d\,\mathcal{E}\,|_{P_{\tau}(\kappa(t))}\Big{(}\nabla_{\theta}\Phi_{\tau}(\pi(t);\cdot)\cdot\Big{(}\frac{\pi(s)-\pi(t)}{s-t}\Big{)}\Big{)}d\Pi(\pi)\bigg{|}\] \[\leq C_{r,\tau}\frac{1}{|s-t|}\int_{\Theta^{2}}|\theta-\tilde{\theta}|^{2}\,d\alpha(t,s)(\theta,\tilde{\theta})\] \[=C_{r,\tau}\frac{1}{|s-t|}\int_{\mathfrak{P}}|\pi(t)-\pi(s)|^{2}\,d\Pi(\pi)\] \[\leq C_{r,\tau}|s-t|\int_{\mathfrak{P}}|\pi(1)-\pi(0)|^{2}\,d\Pi(\pi)\] \[\leq C_{r,\tau}|s-t|,\]
where the value of the constant \(C_{r,\tau}\) only depends on \(r\) and \(\tau\). Letting \(s\) go to \(t\) and using the dominated convergence theorem, one concludes that \([0,1]\ni t\mapsto\mathcal{E}_{\tau,r}(\kappa(t))\) is differentiable with derivative equal to :
\[h(t):=\dfrac{d}{dt}\,(\mathcal{E}_{\tau,r}(\kappa(t)))=\int_{\mathfrak{P}}d\,\mathcal{E}\,|_{P_{\tau}(\kappa(t))}\Big{(}\nabla_{\theta}\Phi_{\tau}(\pi(t);\cdot)\cdot\Big{(}\dfrac{d}{dt}\pi(t)\Big{)}\Big{)}d\Pi(\pi).\]
To conclude, one has the decomposition :
\[|h(t)-h(s)| \leq\Big{|}\int_{\mathfrak{P}}d\,\mathcal{E}\,|_{P_{\tau}(\kappa(t))}\Big{(}(\nabla_{\theta}\Phi_{\tau}(\pi(t);\cdot)-\nabla_{\theta}\Phi_{\tau}(\pi(s);\cdot))\cdot\Big{(}\dfrac{d}{dt}\pi(t)\Big{)}\Big{)}d\Pi(\pi)\Big{|} \tag{39}\] \[+\Big{|}\int_{\mathfrak{P}}d\,\mathcal{E}\,|_{P_{\tau}(\kappa(t))}\Big{(}\nabla_{\theta}\Phi_{\tau}(\pi(s);\cdot)\cdot\Big{(}\dfrac{d}{dt}\pi(t)-\dfrac{d}{dt}\pi(s)\Big{)}\Big{)}d\Pi(\pi)\Big{|}.\]
Recalling (39), denoting \(\alpha:=(e_{0},e_{1})\#\Pi\) and using the previous estimates, we obtain that, for all
\(t,s\in[0,1]\),
\[|h(t)-h(s)| \leq C_{r,\tau}\Big{(}\|P_{\tau}(\kappa(t))\|_{H^{1}(\Omega)}\int_{\mathfrak{P}}|\pi(t)-\pi(s)|\Big{|}\frac{d}{dt}\pi(t)\Big{|}d\Pi(\pi)\] \[+\|P_{\tau}(\kappa(t))-P_{\tau}(\kappa(s))\|_{H^{1}(\Omega)}\int_{\mathfrak{P}}|\pi(s)|\Big{|}\frac{d}{dt}\pi(t)\Big{|}d\Pi(\pi)\] \[+\|P_{\tau}(\kappa(s))\|_{H^{1}(\Omega)}\int_{\mathfrak{P}}|\pi(s)|\Big{|}\frac{d}{dt}\pi(t)-\frac{d}{dt}\pi(s)\Big{|}d\Pi(\pi)\Big{)}\] \[\leq C_{r,\tau}\Big{(}|t-s|\|P_{\tau}(\kappa(t))\|_{H^{1}(\Omega)}\int_{\mathfrak{P}}|\pi(1)-\pi(0)|^{2}d\Pi(\pi)\] \[+\|P_{\tau}(\kappa(t))-P_{\tau}(\kappa(s))\|_{H^{1}(\Omega)}\int_{\mathfrak{P}}\sup_{u\in[0,1]}|\pi(u)||\pi(1)-\pi(0)|d\Pi(\pi)\] \[+|t-s|\|P_{\tau}(\kappa(s))\|_{H^{1}(\Omega)}\int_{\mathfrak{P}}\sup_{u\in[0,1]}|\pi(u)|\sup_{u\in[0,1]}\Big{|}\frac{d^{2}\pi(u)}{dt^{2}}\Big{|}d\Pi(\pi)\Big{)}\] \[\leq C_{r,\tau}\left(|t-s|\left(\sqrt{\int_{\Theta}|\theta|^{2}d\kappa(t)(\theta)}+\sqrt{\int_{\Theta}|\theta|^{2}d\kappa(s)(\theta)}\right)(1+c_{2}(\alpha))+W_{2}(\kappa(t),\kappa(s))c_{2}(\alpha)\right)\]
where we have used Lemma 4 to get the second inequality and the fact that \(\sup_{u\in[0,1]}\bigg{|}\frac{d^{2}\pi(u)}{dt^{2}}\bigg{|}\) is uniformly bounded (since the curvature of \(\Theta\) is bounded) to get the last one. We also have the following estimates:
* By Remark 5 and the convexity of \(K_{r}\) (in the geodesic sense), for all \(0\leq t\leq 1\) : \[\int_{\Theta}|\theta|^{2}d\kappa(t)(\theta)\leq C(1+r^{2}).\]
* Moreover, \[W_{2}^{2}(\kappa(t),\kappa(s)) \leq\int_{\Theta^{2}}d(\theta,\tilde{\theta})^{2}d\alpha(t,s)(\theta,\tilde{\theta})\] \[\leq\int_{\mathfrak{P}}d(\pi(t),\pi(s))^{2}d\Pi(\pi)\] \[=|t-s|^{2}\int_{\mathfrak{P}}d(\pi(1),\pi(0))^{2}d\Pi(\pi)\] \[=|t-s|^{2}\int_{\Theta^{2}}d(\theta,\tilde{\theta})^{2}d\alpha(\theta,\tilde{\theta}).\]
This allows us to conclude that :
\[|h(t)-h(s)|\leq C_{r,\tau}(1+c_{2}(\alpha))|t-s|.\]
As the measure \(\alpha\) is supported in \(K_{r}^{2}\), we get :
\[|h(t)-h(s)|\leq\lambda_{\tau,r}|t-s|.\]
for some \(\lambda_{\tau,r}>0\), which yields the desired result.
The characterization of the velocity field allows us to get a bound on its amplitude. This is given by the next corollary, which will be useful later in the paper.
**Corollary 2**.: _There exists a constant \(C_{\tau}>0\) such that for all \(r>0\), all \(\mu\in\mathcal{P}_{2}(K_{r})\) and \(\theta\in\Theta\):_
\[|v_{\mu}(\theta)|\leq C_{\tau}r|\theta|.\]
Proof.: This can be proved combining (35), (31) and (34). The rest is just basic computations and left to the reader.
An important consequence of Proposition 4 is that \(\mathcal{E}_{\tau,r}\) is \((-\lambda_{\tau,r})\)-convex along geodesics. Now we are able to prove Theorem 4.
Proof of Theorem 4.: The functional \(\mathcal{E}_{\tau,r}\) is lower semicontinuous by Remark 4 and it is \((-\lambda_{\tau,r})\)-convex along generalized geodesics. Moreover, the space \(\Theta\) has a curvature bounded from below which ensures that it is an Alexandrov space of curvature bounded from below. We apply [18, Theorem 5.9, 5.11] to get the existence and the uniqueness of a gradient curve \(\mu^{r}:\mathbb{R}_{+}\to\mathcal{P}_{2}(K_{r})\) in the sense of [18, Definition 5.8]. Being a gradient curve, it is also a curve of maximal slope in the sense of [16, Definition 1.3.2]. Note that in [18], the space on which the probability measures are defined (here this is \(\Theta\)) is supposed to be compact. This is not a problem here since the domain of the functional \(\mathcal{E}_{\tau,r}\) is reduced to probability measures whose support is included in \(K_{r}\) which is compact and geodesically convex.
The existence of the vector field \(v_{t}^{r}\) for almost all \(t\geq 0\) is given by the absolute continuity of the curve \([0,1]\ni t\mapsto\mu^{r}(t)\) (because it is a gradient curve) and by [19, Proposition 2.5].
The work is not finished here since we do not have any knowledge about the velocity field \(v_{t}^{r}\) and the well-posedness result is proved only for \(\mathcal{E}_{\tau,r}\) with \(r<\infty\). In the following sections, we prove that this velocity field can be related to \(v_{\mu^{r}(t)}\) and use a bootstrap argument to prove an existence result for the gradient curve of \(\mathcal{E}_{\tau,+\infty}\).
#### 3.1.2 Identification of the vector field \(v_{t}^{r}\)
In the following, we denote by \(T\Theta\) the tangent bundle of \(\Theta\), i.e.
\[T\Theta:=\bigcup_{\theta\in\Theta}\{\theta\}\times T_{\theta}\Theta,\]
where \(T_{\theta}\Theta\) is the tangent space to \(\Theta\) at \(\theta\). It is easy to check that for all \(\theta=(c,a,w,b)\in\Theta\), it holds that \(T_{\theta}\Theta=\mathbb{R}\times\mathbb{R}\times\mathrm{Span}\{w\}^{\perp} \times\mathbb{R}\), where \(\mathrm{Span}\{w\}^{\perp}\) is the subspace of \(\mathbb{R}^{d}\) containing all \(d\)-dimensional vectors orthogonal to \(w\).
We also introduce the operators \(G\) and \(S_{h}\) for \(0<h\leq 1\) as follows :
\[G:=\left\{\begin{array}{rcl}\mathfrak{P}&\to&T\Theta\\ \pi&\mapsto&(\pi(0),\dot{\pi}(0))\end{array}\right.\]
and
\[S_{h}:=\left\{\begin{array}{rcl}T\Theta&\to&T\Theta\\ (\theta,v)&\mapsto&\left(\theta,\frac{v}{h}\right).\end{array}\right.\]
The next lemma concerns the local behaviour of couplings along a curve of maximal slope \(\mu^{r}:\mathbb{R}_{+}\to\mathcal{P}_{2}(K_{r})\). In the following, for any \(\mu,\nu\in\mathcal{P}_{2}(\Theta)\), we denote by \(\Gamma_{o}(\mu,\nu)\) the set of optimal transport plans between \(\mu\) and \(\nu\) in the sense of the quadratic cost. In other words, for all \(\gamma\in\Gamma_{o}(\mu,\nu)\), it holds that \(W_{2}^{2}(\mu,\nu)=\int_{\Theta\times\Theta}|\theta-\widetilde{\theta}|^{2}\, d\gamma(\theta,\widetilde{\theta})\).
**Lemma 6**.: _Let \(\mu^{r}:\mathbb{R}_{+}\to\mathcal{P}_{2}(K_{r})\) be a solution to (23) and for all \(0<h\leq 1\), let \(\Pi_{h}\in\mathcal{P}_{2}(\mathfrak{P})\) such that \(\gamma_{h}:=(e_{0},e_{1})\#\Pi_{h}\in\Gamma_{o}(\mu^{r}(t),\mu^{r}(t+h))\) (i.e. satisfying the condition of Proposition 2 with \(\mu=\mu^{r}(t)\) and \(\nu=\mu^{r}(t+h)\)). Then, for almost all \(t\geq 0\), it holds that_
\[\lim_{h\to 0}(S_{h}\circ G)\#\Pi_{h}=(i\times v_{t}^{r})\#\mu^{r}(t)\text{ in }\,\mathcal{P}_{2}(T\Theta),\]
_where \((v_{t}^{r})_{t\geq 0}\) is given by Theorem 4, and \(i:\Theta\to\Theta\) is the identity map._
_Moreover,_
\[\lim_{h\to 0}\frac{W_{2}^{2}(\mu^{r}(t+h),\exp(hv_{t}^{r})\#\mu^{r}(t))}{h^{2}}=0,\]
_where \(\exp(hv_{t}^{r}):\Theta\ni\theta\mapsto\exp_{\theta}(hv_{t}^{r}(\theta))\)._
Proof.: Let \(\phi\) be in \(C_{c}^{\infty}(\Theta)\). The continuity equation gives :
\[\int_{\mathbb{R}_{+}}\eta^{\prime}(t)\int_{\Theta}\phi\,d\mu^{r}(t)dt=-\int_{\mathbb{R}_{+}}\eta(t)\int_{\Theta}\nabla_{\theta}\phi\cdot v_{t}^{r}\,d\mu^{r}(t)dt\]
for \(\eta\) smooth compactly supported in \(\mathbb{R}_{+}\). Taking \(\eta\) as an approximation of the characteristic function of \([t,t+h]\), owing to the fact that \(\mu^{r}\) is locally Lipschitz and passing to the limit, one gets :
\[\int_{\Theta}\phi\,d\mu^{r}(t)-\int_{\Theta}\phi\,d\mu^{r}(t+h)=-\int_{t}^{t+h}\int_{\Theta}\nabla_{\theta}\phi\cdot v_{s}^{r}\,d\mu^{r}(s)ds.\]
Passing to the limit as \(h\) goes to \(0\), one gets the differentiability almost everywhere of \(\mathbb{R}_{+}\ni t\mapsto\int_{\Theta}\phi\,d\mu^{r}(t)\) and :
\[\lim_{h\to 0}\frac{\int_{\Theta}\phi\,d\mu^{r}(t+h)-\int_{\Theta}\phi\,d\mu^{r} (t)}{h}=\int_{\Theta}\nabla_{\theta}\phi\cdot v_{t}^{r}\,d\mu^{r}(t).\]
For all \(0<h\leq 1\), let us introduce \(\nu_{h}:=(S_{h}\circ G)\#\Pi_{h}\) and let \(\nu_{0}\) be an accumulation point of \((\nu_{h})_{0<h\leq 1}\) with respect to the narrow convergence on \(\mathcal{P}_{2}(T\Theta)\).
Then, it holds that
\[\frac{\int_{\Theta}\phi\,d\mu^{r}(t+h)-\int_{\Theta}\phi\,d\mu^{r}(t)}{h} =\frac{1}{h}\int_{\Theta^{2}}(\phi(\tilde{\theta})-\phi(\theta))\,d\gamma_{h}(\theta,\tilde{\theta})\] \[=\frac{1}{h}\int_{\mathfrak{P}}(\phi(\pi(1))-\phi(\pi(0)))\,d\Pi_{h}(\pi)\] \[=\frac{1}{h}\int_{T\Theta}(\phi(\exp_{\theta}(v))-\phi(\theta))\,dG\#\Pi_{h}(\theta,v)\] \[=\frac{1}{h}\int_{T\Theta}(\phi(\exp_{\theta}(hv))-\phi(\theta))\,d(S_{h}\circ G)\#\Pi_{h}(\theta,v)\] \[=\int_{T\Theta}\nabla_{\theta}\phi(\theta)\cdot v\,d\nu_{h}(\theta,v)\] \[+\int_{T\Theta}R_{h}(\theta,v)\,d\nu_{h}(\theta,v)\] \[\underset{h\to 0}{\longrightarrow}\int_{T\Theta}\nabla_{\theta}\phi(\theta)\cdot v\,d\nu_{0}(\theta,v),\]
where \(R_{h}(\theta,v):=\frac{\phi(\exp_{\theta}(hv))-\phi(\theta)}{h}-\nabla_{ \theta}\phi(\theta)\cdot v\) is bounded by \(C(\phi)|v|^{2}h\) (\(\phi\in C_{c}^{\infty}(\Theta)\) and the euclidean curvature in \(\Theta\) is uniformly bounded; see [20, Chapter 8] for the definition of euclidean curvature). Actually, to get the last limit, we need the following arguments detailed below :
* For the first term, \(\nabla\phi(\theta)\cdot v\) is quadratic in \((\theta,v)\) and consequently the passage to the limit is allowed.
* For the second one, \[\int_{T\Theta}|R_{h}(\theta,v)|d\nu_{h}(\theta,v) \leq C(\phi)h\int_{T\Theta}|v|^{2}d\nu_{h}(\theta,v)\] \[= C(\phi)h\frac{W_{2}^{2}(\mu^{r}(t),\mu^{r}(t+h))}{h^{2}}\] and using again the local Lipschitz property, we can pass to the limit which is zero.
As a consequence,
\[\int_{T\Theta}\nabla_{\theta}\phi(\theta)\cdot vd\nu_{0}(\theta,v)=\int_{ \Theta}\nabla_{\theta}\phi(\theta)\cdot v_{t}^{r}(\theta)\,d\mu^{r}(t)(\theta)\]
which rewrites (by disintegration) as :
\[\int_{\Theta}\nabla_{\theta}\phi(\theta)\cdot\int_{T_{\theta}\Theta}v\,d\nu_{0,\theta}(v)\,d\mu^{r}(t)(\theta)=\int_{\Theta}\nabla_{\theta}\phi(\theta)\cdot v_{t}^{r}(\theta)\,d\mu^{r}(t)(\theta).\]
Denoting \(\tilde{v}_{t}(\theta):=\int_{T_{\theta}\Theta}v\,d\nu_{0,\theta}(v)\), the last equation is equivalent to :
\[\operatorname{div}((\tilde{v_{t}}-v_{t}^{r})\mu^{r}(t))=0.\]
In addition, as \(T\Theta\ni(\theta,v)\mapsto|v|^{2}\) is positive and lower semicontinuous and as for almost all \(t\geq 0\) we have that \(\lim_{h\to 0}\dfrac{W_{2}(\mu^{r}(t),\mu^{r}(t+h))}{h}=|(\mu^{r})^{ \prime}|(t)\) (as \(\mu^{r}\) is locally Lipschitz):
\[\begin{split}\int_{\Theta}\int_{T_{\theta}\Theta}|v|^{2}\,d\nu_{0,\theta}(v)\,d\mu^{r}(t)(\theta)&\leq\liminf_{h\to 0}\,\int_{T\Theta}|v|^{2}\,d\nu_{h}(\theta,v)\\ &=\liminf_{h\to 0}\,\frac{1}{h^{2}}\int_{T\Theta}|v|^{2}\,dG\#\Pi_{h}(\theta,v)\\ &=\liminf_{h\to 0}\,\frac{1}{h^{2}}\int_{\mathfrak{P}}|\dot{\pi}(0)|^{2}\,d\Pi_{h}(\pi)\\ &=\liminf_{h\to 0}\,\frac{1}{h^{2}}\int_{\Theta^{2}}d(\theta,\tilde{\theta})^{2}\,d\gamma_{h}(\theta,\tilde{\theta})\\ &=\liminf_{h\to 0}\,\frac{W_{2}^{2}(\mu^{r}(t),\mu^{r}(t+h))}{h^{2}}\\ &=|(\mu^{r})^{\prime}|^{2}(t).\end{split} \tag{40}\]
As a consequence and by Jensen inequality,
\[\|\tilde{v}_{t}\|_{L^{2}(\Theta;d\mu^{r}(t))}^{2}\leq\int_{\Theta}\int_{T_{\theta}\Theta}|v|^{2}\,d\nu_{0,\theta}(v)\,d\mu^{r}(t)(\theta)\leq|(\mu^{r})^{\prime}|^{2}(t)=\|v_{t}^{r}\|_{L^{2}(\Theta;d\mu^{r}(t))}^{2}. \tag{41}\]
By [19, Lemma 2.4], one gets \(\tilde{v}_{t}=v_{t}^{r}\). Reconsidering (41), one gets the equality case in Jensen inequality, i.e. :
\[\int_{\Theta}|\tilde{v}_{t}(\theta)|^{2}\,d\mu^{r}(t)(\theta)=\int_{\Theta}\int_{T_{\theta}\Theta}|v|^{2}d\nu_{0,\theta}(v)\,d\mu^{r}(t)(\theta),\]
and as a consequence \(\nu_{0,\theta}=\delta_{v_{t}^{r}(\theta)}\), \(\mu^{r}(t)\)-almost everywhere in \(\Theta\). In addition,
\[\lim_{h\to 0}(S_{h}\circ G)\#\Pi_{h}=(i\times v_{t}^{r})\#\mu^{r}(t),\]
in the sense of the narrow convergence. The convergence of the \(v\) moment is given by (40)-(41) where inequalities can be replaced by equalities (as \(\tilde{v}_{t}=v_{t}^{r}\)) and the \(\liminf\) can be replaced by a lim as \(\lim_{h\to 0}\dfrac{W_{2}(\mu^{r}(t),\mu^{r}(t+h))}{h}=|(\mu^{r})^{ \prime}|(t)\) exists :
\[\int_{\Theta}\int_{T_{\theta}\Theta}|v|^{2}d\nu_{0,\theta}(v)\,d\mu^{r}(t)(\theta)=\lim_{h\to 0}\int_{T\Theta}|v|^{2}d\nu_{h}(\theta,v). \tag{42}\]
For the \(\theta\) moment, it is more obvious as for all \(0<h\leq 1\) :
\[\int_{T\Theta}|\theta|^{2}\,d\nu_{h}(\theta,v)=\int_{\Theta}|\theta|^{2}\,d \mu^{r}(t)(\theta)\]
and
\[\int_{T\Theta}|\theta|^{2}\,d\nu_{0}(\theta,v)=\int_{T\Theta}|\theta|^{2}d(i \times v_{t}^{r})\#\mu^{r}(t)(\theta)=\int_{\Theta}|\theta|^{2}\,d\mu^{r}(t)( \theta).\]
Consequently,
\[\int_{T\Theta}|\theta|^{2}d\nu_{0}(\theta,v)=\lim_{h\to 0}\int_{T\Theta}| \theta|^{2}d\nu_{h}(\theta,v). \tag{43}\]
With (42)-(43), the convergence of moments is asserted. The narrow convergence combined with the convergence of moments gives the convergence in \(\mathcal{P}_{2}(T\Theta)\), and the proof of the first part of the lemma is finished.
For the second part, it holds that \((\exp(hv_{t}^{r})\times i)\#\gamma_{h}\) belongs to \(\Gamma(\exp(hv_{t}^{r})\#\mu^{r}(t),\mu^{r}(t+h))\). Hence,
\[\frac{W_{2}^{2}(\mu^{r}(t+h),\exp(hv_{t}^{r})\#\mu^{r}(t))}{h^{2}} \leq\frac{1}{h^{2}}\int_{\Theta^{2}}\,d(\theta,\tilde{\theta})^{2 }d(\exp(hv_{t}^{r})\times i)\#\gamma_{h}(\theta,\tilde{\theta})\] \[\leq\frac{1}{h^{2}}\int_{\Theta^{2}}d(\exp_{\theta}(hv_{t}^{r}( \theta)),\tilde{\theta})^{2}\,d\gamma_{h}(\theta,\tilde{\theta})\] \[\leq\frac{1}{h^{2}}\int_{T\Theta}d(\exp_{\theta}(hv_{t}^{r}( \theta)),\exp_{\theta}(hv))^{2}\,d\nu_{h}(\theta,v)\] \[\leq C\int_{T\Theta}|v_{t}^{r}(\theta)-v|^{2}\,d\nu_{h}(\theta,v)\] \[\underset{h\to 0}{\longrightarrow}0,\]
where we have used the boundedness of the euclidean curvature of the manifold \(\Theta\) in the last inequality and the fact that \(\nu_{h}\to(i\times v_{t}^{r})\#\mu^{r}(t)\), which was proved earlier. Hence the desired result.
We now introduce the projection operator on the manifold \(\Theta\) :
**Definition 5**.: _For all \(\theta\) in \(\Theta\), the orthogonal projection on the tangent space of \(\Theta\) at \(\theta\) is given by the operator \(\mathbf{P}_{\theta}:\mathbb{R}^{d+3}\to T_{\theta}\Theta\). The operator \(\mathbf{P}:L^{1}_{\mathrm{loc}}(\Theta;\mathbb{R}^{d+3})\to L^{1}_{\mathrm{loc}}(\Theta;\mathbb{R}^{d+3})\) denotes the corresponding projection on vector fields, i.e. for all \(X\in L^{1}_{\mathrm{loc}}(\Theta;\mathbb{R}^{d+3})\), \((\mathbf{P}X)(\theta):=\mathbf{P}_{\theta}X(\theta)\) for almost all \(\theta\in\Theta\)._
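Concretely, since \(T_{\theta}\Theta=\mathbb{R}\times\mathbb{R}\times\mathrm{Span}\{w\}^{\perp}\times\mathbb{R}\), the projection \(\mathbf{P}_{\theta}\) leaves the \(c\), \(a\) and \(b\) components unchanged and removes from the \(w\)-component its part along \(w\); a minimal sketch:

```python
import numpy as np

def project_tangent(theta, X):
    """P_theta: project X = (Xc, Xa, Xw, Xb) in R^{d+3} onto
    T_theta Theta = R x R x span{w}^perp x R."""
    c, a, w, b = theta
    Xc, Xa, Xw, Xb = X
    Xw_t = Xw - np.dot(Xw, w) * w       # remove the radial component
    return Xc, Xa, Xw_t, Xb

w = np.array([0.0, 1.0])
_, _, Xw_t, _ = project_tangent((0.0, 1.0, w, 0.0),
                                (1.0, 1.0, np.array([0.3, 0.7]), 1.0))
print(Xw_t, np.dot(Xw_t, w))            # [0.3, 0.], 0.0
```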
Now we are able to identify the velocity field given in Theorem 4 under a support hypothesis.
**Proposition 5**.: _Let \(t\geq 0\). If there exists \(\delta>0\) such that \(\mathrm{Supp}(\mu^{r}(t))\subset K_{r-\delta}\), then the velocity field \(v_{t}^{r}\) in (23) is equal to \(-\mathbf{P}v_{\mu^{r}(t)}\), \(\mu^{r}(t)\)-almost everywhere._
Proof.: On the one hand, for \(\gamma_{h}:=(e_{0},e_{1})\#\Pi_{h}\in\Gamma_{o}(\mu^{r}(t),\mu^{r}(t+h))\), by Proposition 3 and the fact that for all \(t\geq 0\), \(\mu^{r}(t)\in\mathcal{P}_{2}(K_{r})\) :
\[\left|\mathcal{E}_{\tau,r}(\mu^{r}(t+h))-\mathcal{E}_{\tau,r}(\mu^{r}(t))-\int _{\Theta^{2}}v_{\mu^{r}(t)}(\theta)\cdot(\tilde{\theta}-\theta)\,d\gamma_{h}( \theta,\tilde{\theta})\right|\leq C_{r,\tau}W_{2}(\mu^{r}(t),\mu^{r}(t+h))^{2},\]
which is equivalent to
\[\left|\frac{\mathcal{E}_{\tau,r}(\mu^{r}(t+h))-\mathcal{E}_{\tau,r}(\mu^{r}(t))}{h}-\int_{T\Theta}v_{\mu^{r}(t)}(\theta)\cdot\frac{\exp_{\theta}(hv)-\theta}{h}\,d(S_{h}\circ G)\#\Pi_{h}(\theta,v)\right|\leq C_{r,\tau}\frac{1}{h}W_{2}(\mu^{r}(t),\mu^{r}(t+h))^{2}.\]
Then, one can use the decomposition :
\[\int_{T\Theta}v_{\mu^{r}(t)}(\theta)\cdot\frac{\exp_{\theta}(hv)- \theta}{h}\,d(S_{h}\circ G)\#\Pi_{h}(\theta,v) =\int_{T\Theta}v_{\mu^{r}(t)}(\theta)\cdot v\,d(S_{h}\circ G)\# \Pi_{h}(\theta,v)\] \[+\int_{T\Theta}v_{\mu^{r}(t)}(\theta)\cdot R_{h}(\theta,v)\,d(S_{ h}\circ G)\#\Pi_{h}(\theta,v),\]
where \(R_{h}(\theta,v):=\frac{\exp_{\theta}(hv)-\theta}{h}-v\) is bounded by \(Ch|v|^{2}\) due to the uniform boundedness of euclidean curvature in \(\Theta\). Passing to the limit as \(h\) goes to zero and using Lemma 6, one gets the differentiability of \(\mathbb{R}_{+}\ni t\to\mathcal{E}_{\tau,r}(\mu^{r}(t))\) almost everywhere and for almost all \(t\geq 0\) :
\[\frac{d}{dt}\left[\mathcal{E}_{\tau,r}(\mu^{r}(t))\right]=\int_{\Theta}v_{\mu^{ r}(t)}(\theta)\cdot v_{t}^{r}(\theta)\,d\mu^{r}(t)(\theta).\]
Note that to pass to the limit to obtain the last equation, we need the two following points :
* First, \(v\cdot v_{\mu^{r}(t)}(\theta)\) is at most quadratic in \((\theta,v)\) which is given by Corollary 2.
* Second, it holds that \(|v_{\mu^{r}(t)}(\theta)\cdot R_{h}(\theta,v)|\leq Cr|\theta|h|v|^{2}\) by Corollary 2 and consequently : \[\left|\int_{T\Theta}v_{\mu^{r}(t)}(\theta)\cdot R_{h}(\theta,v)\,d(S_{h}\circ G)\#\Pi_{h}(\theta,v)\right|\leq C_{r}h\int_{T\Theta}|\theta||v|^{2}\,d(S_{h}\circ G)\#\Pi_{h}(\theta,v)\] \[\leq C_{r}h\int_{T\Theta}|v|^{2}\,d(S_{h}\circ G)\#\Pi_{h}(\theta,v)\] \[\leq C_{r}h\frac{W_{2}(\mu^{r}(t),\mu^{r}(t+h))^{2}}{h^{2}}\] where we used the fact that \(\Pi_{h}\) is supported in \(K_{r}\) in its first variable to get the second inequality. The last term converges to zero since \((\mu^{r}(t))_{t\geq 0}\) is locally Lipschitz.
Next as \(\mathbf{P}v_{t}^{r}=v_{t}^{r}\), it holds that:
\[\frac{d}{dt}\left[\mathcal{E}_{\tau,r}(\mu^{r}(t))\right]=\int_{\Theta}\mathbf{P}v_{\mu^{r}(t)}(\theta)\cdot v_{t}^{r}(\theta)\,d\mu^{r}(t)(\theta). \tag{44}\]
On the other hand, consider the curve \(\tilde{\mu}_{h}:\mathbb{R}_{+}\rightarrow\mathcal{P}_{2}(\Theta)\) satisfying :
\[\forall t\geq 0,\quad\tilde{\mu}_{h}(t):=\exp(-h\mathbf{P}v_{\mu^{r}(t)}) \#\mu^{r}(t).\]
As \(\text{Supp}(\mu^{r}(t))\subset K_{r-\delta}\), the measure \(\tilde{\mu}_{h}(t)\) belongs to \(\mathcal{P}_{2}(K_{r})\) for \(h>0\) small enough. So, with \(\gamma_{h}:=(i\times\exp(-h\mathbf{P}v_{\mu^{r}(t)}))\#\mu^{r}(t)\in\Gamma(\mu^{r}(t),\tilde{\mu}_{h}(t))\),
\[\left|\mathcal{E}_{\tau,r}(\tilde{\mu}_{h}(t))-\mathcal{E}_{\tau,r}(\mu^{r}(t ))-\int_{\Theta^{2}}\mathbf{P}v_{\mu^{r}(t)}(\theta)\cdot(\tilde{\theta}- \theta)\,d\gamma_{h}(\theta,\tilde{\theta})\right|\leq C_{r,\tau}W_{2}^{2}( \mu^{r}(t),\tilde{\mu}_{h}(t))\]
and it holds that
\[\int_{\Theta^{2}}\mathbf{P}v_{\mu^{r}(t)}(\theta)\cdot(\tilde{\theta}-\theta) \,d\gamma_{h}(\theta,\tilde{\theta})=h\int_{\Theta^{2}}\mathbf{P}v_{\mu^{r}( t)}(\theta)\cdot\frac{\exp_{\theta}(-h\mathbf{P}v_{\mu^{r}(t)}(\theta))-\theta}{h} \,d\mu^{r}(t)(\theta).\]
Hence,
\[\frac{\mathcal{E}_{\tau,r}(\tilde{\mu}_{h}(t))-\mathcal{E}_{\tau,r}(\mu^{r}(t))}{W_{2}(\tilde{\mu}_{h}(t),\mu^{r}(t))}=\frac{h}{W_{2}(\tilde{\mu}_{h}(t),\mu^{r}(t))}\int_{\Theta}\mathbf{P}v_{\mu^{r}(t)}(\theta)\cdot\frac{\exp_{\theta}(-h\mathbf{P}v_{\mu^{r}(t)}(\theta))-\theta}{h}\,d\mu^{r}(t)(\theta)+o_{h}(1)\]
and, taking the limsup as \(h\) goes to zero (proceeding in a similar way as above to get the limit of the first term on the right-hand side) and owing to the fact that \(\limsup_{h\to 0}\frac{W_{2}(\tilde{\mu}_{h}(t),\mu^{r}(t))}{h}\leq\|\mathbf{P}v_{\mu^{r}(t)}\|_{L^{2}(\Theta;d\mu^{r}(t))}\), we obtain that
\[|\nabla^{-}\,\mathcal{E}_{\tau,r}\,|(\mu^{r}(t))\geq\|\mathbf{P}v_{\mu^{r}(t) }\|_{L^{2}(\Theta;d\mu^{r}(t))}. \tag{45}\]
As \(\mu^{r}\) is a curve of maximal slope with respect to the upper gradient \(|\nabla^{-}\,\mathcal{E}_{\tau,r}\,|\) of \(\mathcal{E}_{\tau,r}\), one has :
\[\frac{d}{dt}\left[\mathcal{E}_{\tau,r}(\mu^{r}(t))\right]=\int_{\Theta}\mathbf{P}v_{\mu^{r}(t)}(\theta)\cdot v_{t}^{r}(\theta)\,d\mu^{r}(t)(\theta)\leq-\frac{1}{2}\|v_{t}^{r}\|_{L^{2}(\Theta;d\mu^{r}(t))}^{2}-\frac{1}{2}|\nabla^{-}\,\mathcal{E}_{\tau,r}\,|^{2}(\mu^{r}(t))\leq-\frac{1}{2}\|v_{t}^{r}\|_{L^{2}(\Theta;d\mu^{r}(t))}^{2}-\frac{1}{2}\|\mathbf{P}v_{\mu^{r}(t)}\|_{L^{2}(\Theta;d\mu^{r}(t))}^{2}\]
where we have used (45). As a consequence,
\[\int_{\Theta}\left(\frac{1}{2}(\mathbf{P}v_{\mu^{r}(t)})^{2}(\theta)+\frac{1}{2}|v_{t}^{r}(\theta)|^{2}+\mathbf{P}v_{\mu^{r}(t)}(\theta)\cdot v_{t}^{r}(\theta)\right)\,d\mu^{r}(t)(\theta)\leq 0,\]

that is, \(\frac{1}{2}\int_{\Theta}\left|\mathbf{P}v_{\mu^{r}(t)}(\theta)+v_{t}^{r}(\theta)\right|^{2}d\mu^{r}(t)(\theta)\leq 0\), and therefore
\[v_{t}^{r}=-\mathbf{P}v_{\mu^{r}(t)}\quad\mu^{r}(t)\text{-a.e.}\]
The identification of the velocity field when the support condition is satisfied allows to give an explicit formula for the gradient curve. It is given by the characteristics :
**Proposition 6**.: _Let \(\chi^{r}:\mathbb{R}_{+}\times\Theta\to\Theta\) be the flow associated to the velocity field \(-\mathbf{P}v_{\mu^{r}(t)}\) :_
\[\begin{cases}\partial_{t}\chi^{r}(t;\theta)=-\mathbf{P}v_{\mu^{r}(t)}(\chi^{r}(t;\theta))\\ \chi^{r}(0;\theta)=\theta.\end{cases}\]
_Then \(\chi^{r}\) is uniquely defined, continuous, and for all \(t\geq 0\), \(\chi^{r}(t)\) is Lipschitz on \(K_{r}\). Moreover, as long as \(\operatorname{Supp}(\mu^{r}(t))\subset K_{r-\delta}\) for some \(\delta>0\) :_
\[\mu^{r}(t)=\chi^{r}(t)\#\mu_{0}.\]
Proof.: This is a direct consequence of the fact that \(v_{t}^{r}=-\mathbf{P}v_{\mu^{r}(t)}=-\mathbf{P}\nabla_{\theta}\phi_{\mu^{r}(t)}\) is \(C^{\infty}\).
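For intuition, this pushforward formula lends itself to a simple particle discretization: sample particles from \(\mu_{0}\) and transport them with explicit Euler steps along the velocity field. A minimal sketch, where the `velocity` callback is a hypothetical stand-in for \(-\mathbf{P}v_{\mu^{r}(t)}\) evaluated on the empirical measure of the current particles:

```python
import numpy as np

def euler_flow(theta0, velocity, dt=1e-3, steps=1000):
    # theta0: (N, dim) particles sampled from mu_0
    # velocity(thetas): (N, dim) field -P v_{mu(t)} at each particle,
    # with mu(t) the empirical measure of the current particles
    thetas = theta0.copy()
    for _ in range(steps):
        thetas = thetas + dt * velocity(thetas)  # chi(t+dt) ~ chi(t) + dt * field
    return thetas  # approximate sample from mu(t) = chi(t) # mu_0
```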
The next lemma relates the curve \([0,1]\ni h\mapsto\exp(hv_{t}^{r})\#\mu^{r}(t)\) to \(\nabla_{-}\operatorname{\mathcal{E}}_{\tau,r}(\mu^{r}(t))\). This will be useful later to prove that the velocity field characterizes the gradient curve.
**Lemma 7**.: _For all \(\mu\in\mathcal{P}_{2}(\Theta)\) with \(\operatorname{Supp}(\mu)\subset K_{r-\delta}\) for some \(\delta>0\), the map \(\nu:[0,1]\ni h\mapsto\exp(-h\mathbf{P}v_{\mu}/\|\mathbf{P}v_{\mu}\|_{L^{2}( \Theta;d\mu)})\#\mu\) is differentiable at \(h=0\). Moreover, it holds that_
\[\nu^{\prime}(0)=\nabla_{-}\operatorname{\mathcal{E}}_{\tau,r}(\mu)/|\nabla_{ -}\operatorname{\mathcal{E}}_{\tau,r}|(\mu).\]
Proof.: First, we claim that \(|\nabla_{-}\operatorname{\mathcal{E}}_{\tau,r}|(\mu)=\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}\). In order to prove it, take an arbitrary unit speed geodesic \([0,1]\ni s\mapsto(e_{s})\#\Pi\) starting at \(\mu\) for which there exists a time interval around zero such that \((e_{s})\#\Pi\) belongs to \(\mathcal{P}_{2}(K_{r})\). As a consequence, one can write for all \(s>0\) sufficiently small :
\[\left|\operatorname{\mathcal{E}}_{\tau,r}((e_{s})\#\Pi)-\operatorname{\mathcal{E}}_{\tau,r}(\mu)-\int_{\Theta^{2}}v_{\mu}(\theta)\cdot(\tilde{\theta}-\theta)\,d(e_{0},e_{s})\#\Pi(\theta,\tilde{\theta})\right|\leq C_{r,\tau}W_{2}^{2}(\mu,(e_{s})\#\Pi).\]
with
\[\int_{\Theta^{2}}v_{\mu}(\theta)\cdot(\tilde{\theta}-\theta)\,d(e_{0},e_{s}) \#\Pi(\theta,\tilde{\theta})=\int_{T\Theta}v_{\mu}(\theta)\cdot(\exp_{\theta} (sv)-\theta)\,dG\#\Pi(\theta,v).\]
Dividing by \(s\) and passing to the limit as \(s\) goes to zero, one obtains :
\[\frac{d}{ds}\left[\operatorname{\mathcal{E}}_{\tau,r}((e_{s})\#\Pi)\right]= \int_{T\Theta}v_{\mu}(\theta)\cdot v\,dG\#\Pi(\theta,v).\]
Note that, to get the last equation, we need to prove that for all \(s\) sufficiently small the function \(\eta(s):T\Theta\ni(\theta,v)\mapsto v_{\mu}(\theta)\cdot\frac{\exp_{\theta}( sv)-\theta}{s}\) is uniformly integrable with respect to \(G\#\Pi\). In fact, this is given by Corollary 2 and the uniform curvature bound on \(\Theta\) giving \(|\eta(s)|(\theta,v)\leq Csr|\theta||v|^{2}\). As the term \(Cr|\theta||v|^{2}\) is integrable with respect to the measure \(G\#\Pi\) (recall that it has finite second-order moments and is supported in \(K_{r}\) in the \(\theta\) variable), we have the desired uniform integrability property.
Moreover, by the Cauchy–Schwarz inequality:
\[\frac{d}{ds}\left[\operatorname{\mathcal{E}}_{\tau,r}((e_{s})\#\Pi)\right] \geq-\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}\sqrt{\int_{T \Theta}v^{2}dG\#\Pi(\theta,v)}\] \[=-\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)},\]
where the last equality comes from :
\[\int_{T\Theta}v^{2}\,dG\#\Pi(\theta,v) =\int_{\mathfrak{P}}\dot{\pi}(0)^{2}\,d\Pi(\pi)\] \[=\int_{\mathfrak{P}}d(\pi(0),\pi(1))^{2}\,d\Pi(\pi)\] \[=W_{2}^{2}((e_{0})\#\Pi,(e_{1})\#\Pi)\] \[=1.\]
The last equality is derived from the fact that \([0,1]\ni s\mapsto(e_{s})\#\Pi\) is a unit speed geodesic. To conclude, we have proved that for every unit speed geodesic \((\alpha,1)\in C_{\mu}(\mathcal{P}_{2}(K_{r}))\),
\[D_{\mu}\,\mathcal{E}_{\tau,r}((\alpha,1))\geq-\|\mathbf{P}v_{\mu}\|_{L^{2}( \Theta;d\mu)}\]
which by [18, Lemma 4.3], asserts that :
\[|\nabla_{-}\,\mathcal{E}_{\tau,r}\,|(\mu)\leq\|\mathbf{P}v_{\mu}\|_{L^{2}( \Theta;d\mu)}. \tag{46}\]
Aside that, let \(h>0\) :
\[W_{2}^{2}(\nu(h),\nu(0))\leq\int_{\Theta}d^{2}\left(\exp_{\theta}\left(-h\mathbf{P}v_{\mu}(\theta)/\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}\right),\theta\right)\,d\mu(\theta)\leq h^{2}\int_{\Theta}d^{2}\left(\exp_{\theta}\left(-\mathbf{P}v_{\mu}(\theta)/\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}\right),\theta\right)\,d\mu(\theta)=h^{2},\]
and
\[\limsup_{h\to 0}\frac{W_{2}(\nu(h),\nu(0))}{h}\leq 1. \tag{47}\]
Moreover as \(\mathrm{Supp}(\mu)\subset K_{r-\delta}\), \(v_{\mu}\) is bounded in \(L^{\infty}(K_{r})\) by Corollary 2 and for a small time interval around zero \(\nu(h)\in\mathcal{P}_{2}(K_{r})\). Consequently, as \(h\) goes to \(0\),
\[\mathcal{E}_{\tau,r}(\nu(h))-\mathcal{E}_{\tau,r}(\mu) =\int_{\Theta^{2}}v_{\mu}(\theta)\cdot(\tilde{\theta}-\theta)\,d( i\times\exp(-h\mathbf{P}v_{\mu}/\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}))\#\mu(\theta)\] \[+o\left(h\right)\] \[=\int_{\Theta}v_{\mu}(\theta)\cdot\left(\exp\left(-h\mathbf{P}v_ {\mu}(\theta)/\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}\right)-\theta\right) \,d\mu(\theta)+o(h).\]
Dividing by \(h\) and passing to the limit as \(h\) goes to zero (justifying the passage to the limit as above), it holds that:
\[\lim_{h\to 0}\frac{\mathcal{E}_{\tau,r}(\nu(h))-\mathcal{E}_{\tau,r}(\mu)}{h}=-\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}. \tag{48}\]
Additionally, with (47) :
\[\limsup_{h\to 0}\frac{\mathcal{E}_{\tau,r}(\nu(h))-\mathcal{E}_{\tau,r}(\mu)}{W _{2}(\nu(h),\nu(0))}\leq-\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}. \tag{49}\]
To conclude :
* With (49) and (46), the claim is proved : \[|\nabla_{-}\,\mathcal{E}_{\tau,r}\,|(\mu)=\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta ;d\mu)}.\]
* Owing to this, (47) and (49) the curve \([0,1]\ni h\mapsto\nu(h)\) is differentiable at \(h=0\) by [18, Proof of (ii) Lemma 5.4] and : \[\nu^{\prime}(0)=\nabla_{-}\,\mathcal{E}_{\tau,r}(\mu)/|\nabla_{-}\, \mathcal{E}_{\tau,r}\,|(\mu).\]
This finishes the proof of the lemma.
#### 3.1.3 Existence without support limitation
Note that, for the moment, the domain of definition of \(\mathcal{E}_{\tau,r}\) is restricted to measures supported in \(K_{r}\). Using a bootstrapping argument, we will prove that the existence result of Theorem 5 extends to the energy \(\mathcal{E}_{\tau,+\infty}\).
Proof of Theorem 5.: Let :
* \(r_{0}>0\) be such that \(\operatorname{Supp}(\mu_{0})\subset K_{r_{0}}\),
* \(\mu^{r}:\mathbb{R}_{+}\ni t\mapsto\mu^{r}(t)\) the gradient curve associated to \(\mathcal{E}_{\tau,r}\) for \(r>r_{0}\).
By Corollary 2, it holds that \(|v_{\mu^{r}(t)}(\theta)|\leq Cr|\theta|\) for all \(t\geq 0\). Hence, for all \(\theta\in K_{r_{0}}\), \(|\chi^{r}(t;\theta)|\leq r_{0}e^{Crt}\) for all time \(t\in\left[0,T_{r}:=\frac{1}{Cr}\log\left(\frac{r+r_{0}}{2r_{0}}\right)\right]\) and \(\operatorname{Supp}(\mu^{r}(t))\subset K_{(r+r_{0})/2}\subset K_{r}\). By the definition of the gradient curve :
\[\forall t\in[0,T_{r}],\ (\mu^{r})^{\prime}(t)=\nabla_{-}\,\mathcal{E}_{\tau,r} (\mu^{r}(t))=g^{\prime}(0) \tag{50}\]
with \(g:[0,1]\ni h\mapsto\exp\left(-h\mathbf{P}v_{\mu^{r}(t)}\right)\#\mu^{r}(t)\), by Lemma 7. Note that the right-hand side of the last equation does not depend explicitly on \(r\), but only on \(\mu^{r}(t)\).
We construct the curve \(\mu:[0,T_{r}]\to\mathcal{P}_{2}(\Theta)\) as follows:
\[\forall t\in[0,T_{r}],\ \forall r>r_{0},\quad\mu(t):=\mu^{r}(t).\]
This is well-defined since by uniqueness of the gradient curve with respect to \(\mathcal{E}_{\tau,r}\), \(\mu^{r_{1}}(t)=\mu^{r_{2}}(t)\) on \([0,\min(T_{r_{1}},T_{r_{2}})]\) for \(r_{0}<r_{1}\leq r_{2}\). Defining for all \(n\in\mathbb{N}^{*}\)
\[r_{n}:=(n+1)r_{0},\]
we can build inductively a gradient curve on \(\left[0,\frac{1}{Cr_{0}}\sum_{i=1}^{n}\frac{1}{i+1}\log\left(\frac{2i+3}{2(i+1)}\right)\right]\). As the width of this interval is diverging, it is possible to construct a gradient curve on \(\mathbb{R}^{+}\).
All the properties given by the theorem come from the properties of \(\mu^{r}\) derived in Theorem 4 and Proposition 6.
**Remark 6**.: _We make here two important remarks:_
* _We did not prove the existence of a gradient curve with respect to_ \(\mathcal{E}_{\tau,\infty}\) _because this functional is not proved to be convex along geodesics and it is impossible to define gradients without such an assumption._
* _The uniqueness of a solution to (_24_) is out of the scope of this article. To prove it, one should link (_24_) and the support condition to prove that locally in time, a solution to (_24_) coincides with the unique gradient curve of_ \(\mathcal{E}_{\tau,r}\) _for some_ \(r>0\) _large enough._
### Link with backpropagation in neural network
Here, we give a proof of Theorem 6.
Proof of Theorem 6.: Returning to the proof of Theorem 5, for any time \(T>0\) one can find \(r>0\) large enough such that \(\mu\) and \(\mu_{m}\) coincide on \([0,T]\) with gradient curves with respect to \(\mathcal{E}_{\tau,r}\) starting from \(\mu_{0}\) and \(\mu_{0,m}\) respectively. As gradient curves with respect to \(\mathcal{E}_{\tau,r}\) verify the following semigroup property [18, Theorem 5.11]
\[\forall t\in[0,T],\ W_{2}(\mu(t),\mu_{m}(t))\leq e^{\lambda_{\tau,r}t}W_{2}( \mu_{0},\mu_{0,m}),\]
the expected convergence in \(C([0,T],\mathcal{P}_{2}(\Theta))\) holds by the convergence of the initial measures.
### Convergence of the measure towards the optimum
In the following, a LaSalle's principle argument is invoked in order to prove Theorem 7. For simplicity, we note \(\mathcal{E}_{\tau}:=\mathcal{E}_{\tau,\infty}\) for \(0<\tau<+\infty\).
#### 3.3.1 Characterization of optima
In this part, we focus on a characterization of global optima. For convenience, we extend the functional \(\mathcal{E}_{\tau}\) to the set of signed finite measures on \(\Theta\), denoted by \(\mathcal{M}(\Theta)\).
**Lemma 8**.: _For all \(\mu\in\mathcal{M}(\Theta)\), there exists a probability measure \(\mu_{p}\) such that \(\mathcal{E}_{\tau}(\mu)=\mathcal{E}_{\tau}(\mu_{p})\)._
Proof.: Let us first consider a non-negative measure \(\mu\in\mathcal{M}^{+}(\Theta)\). If \(\mu(\Theta)=0\), \(\Phi(\theta,.)=0\)\(\mu\)-almost everywhere and \(\mathcal{E}_{\tau}(\mu)=0\); taking \(\mu_{p}:=\delta_{(0,0,w,b)}\) with \(w,b\) taken arbitrarily is sufficient to prove the desired result. Now, if \(\mu(\Theta)\neq 0\), consider \(\mu_{p}:=T\#\left(\frac{\mu}{\mu(\Theta)}\right)\) where \(T:(c,a,w,b)\mapsto(c\mu(\Theta),a\mu(\Theta),w,b)\). In this case :
\[\int_{\Theta}\Phi(\theta;\cdot)d\mu =\int_{\Theta}\mu(\Theta)\Phi(\theta;\cdot)\frac{d\mu(\theta)}{ \mu(\Theta)}\] \[=\int_{\Theta}\Phi(T\theta;\cdot)\frac{d\mu(\theta)}{\mu(\Theta)}\] \[=\int_{\Theta}\Phi(\theta;\cdot)d\mu_{p}(\theta)\]
where we have used the form of \(\Phi\) (18)-(19) to get the last equality.
Now take an arbitrary signed measure \(\mu\in\mathcal{M}(\Theta)\). By the Hahn-Jordan decomposition theorem, there exist \(\mu\)-measurable sets \(P,N\) such that \(P\cup N=\Theta\) and \(\mu\) is non-negative (respectively non-positive) on \(P\) (respectively \(N\)). The signed measure \(\mu\) can be written as :
\[\mu=\mu_{P}-\mu_{N}\]
where \(\mu_{P},\mu_{N}\in\mathcal{M}^{+}(\Theta)\). Consider the following map :
\[G(c,a,w,b):=\left\{\begin{array}{rl}(-c,-a,w,b)&\text{if }(c,a,w,b)\in N\\ (c,a,w,b)&\text{if }(c,a,w,b)\in P\end{array}\right.\]
and the measure :
\[\mu_{G}:=G\#(\mu_{P}+\mu_{N})\in\mathcal{M}^{+}(\Theta).\]
By construction, we have \(P_{\tau}\left(T\#\left(\frac{\mu_{G}}{\mu_{G}(\Theta)}\right)\right)=P_{\tau} (\mu)\) and consequently, \(\mathcal{E}_{\tau}(\mu)=\mathcal{E}_{\tau}\left(T\#\left(\frac{\mu_{G}}{\mu_ {G}(\Theta)}\right)\right)\).
**Lemma 9**.: _The measure \(\mu\in\mathcal{P}_{2}(\Theta)\) is optimal for Problem 1 if and only if \(\phi_{\mu}(\theta)=0\) for all \(\theta\in\Theta\)._
Proof.: Suppose \(\mu\in\mathcal{P}_{2}(\Theta)\) is optimal and let \(\zeta\in L^{1}(\Theta;\mu)\). Then, for all \(\nu:=\zeta\mu+\nu^{\perp}\in\mathcal{M}(\Theta)\) (Lebesgue decomposition of \(\nu\) with respect to \(\mu\) with \(\zeta\in L^{1}(\Theta;\mu)\)) and owing to Lemma 8, as \(t\) goes to \(0\),
\[\mathcal{E}_{\tau}(\mu+t\nu) =\mathcal{E}(P_{\tau}(\mu)+tP_{\tau}(\nu))\] \[=\mathcal{E}_{\tau}(\mu)+td\mathcal{E}\,|_{P_{\tau}(\mu)}(P_{\tau} (\nu))+o(t).\]
Hence as \(\mu\) is optimal
\[0=\frac{d}{dt}\left[\mathcal{E}_{\tau}(\mu+t\nu)\right]|_{t=0} =d\,\mathcal{E}\,|_{P_{\tau}(\mu)}(P_{\tau}(\nu))\] \[=\int_{\Theta}\,d\,\mathcal{E}\,|_{P_{\tau}(\mu)}(\Phi_{\tau}( \theta;\cdot))d\nu(\theta)\] \[=\int_{\Theta}\phi_{\mu}(\theta)d\nu(\theta)\] \[=\int_{\Theta}\phi_{\mu}(\theta)\zeta(\theta)d\mu(\theta)+\int_{ \Theta}\phi_{\mu}(\theta)d\nu^{\perp}(\theta).\]
As this is true for all \(\zeta\in L^{1}(\Theta,\mu)\), one gets:
\[\phi_{\mu}=0\ \mu\text{-almost everywhere},\quad\phi_{\mu}=0\ \nu^{\perp}\text{- almost everywhere} \tag{51}\]
for all \(\nu^{\perp}\perp\mu\). As \(\phi_{\mu}\) is continuous, this is equivalent to \(\phi_{\mu}=0\) everywhere in \(\Theta\). Indeed, let \(\theta\in\Theta\). If \(\theta\) belongs to \(\operatorname{Supp}(\mu)\), then by definition of the support, \(\mu(B(\theta,\varepsilon))>0\) for all \(\varepsilon>0\). Thus, one can take \(\theta_{\varepsilon}\in B(\theta,\varepsilon)\) with \(\phi_{\mu}(\theta_{\varepsilon})=0\). As \(\theta_{\varepsilon}\mathop{\longrightarrow}\limits_{\varepsilon\to 0}\theta\), using the continuity of \(\phi_{\mu}\), we obtain \(\phi_{\mu}(\theta)=0\). If \(\theta\not\in\operatorname{Supp}(\mu)\), then \(\delta_{\theta}\perp\mu\) and necessarily, \(\phi_{\mu}(\theta)=0\). The reverse implication is trivial.
Conversely suppose now \(\phi_{\mu}=0\) everywhere in \(\Theta\) and take \(\nu\in\mathcal{P}_{2}(\Theta)\), then by previous computations and the convexity of \(\mathcal{E}\) (slopes are increasing)
\[0=\frac{d}{dt}\left[\mathcal{E}(\mu+t(\nu-\mu))\right]\Big{|}_{t=0}=\frac{d}{dt}\left[\mathcal{E}(P_{\tau}(\mu)+tP_{\tau}(\nu-\mu))\right]\Big{|}_{t=0}\leq\mathcal{E}(P_{\tau}(\nu))-\mathcal{E}(P_{\tau}(\mu))\]
which implies that
\[\mathcal{E}_{\tau}(\mu)\leq\mathcal{E}_{\tau}(\nu)\]
and \(\mu\) is optimal.
#### 3.3.2 Escape from critical points
In this section, we use the notation :
\[\theta=(a,c,w,b)=:(a,c,\omega)\]
to distinguish between "linear" variables and "nonlinear" ones.
**Lemma 10**.: _For all \(\mu,\nu\) in \(\mathcal{P}_{2}(\Theta)\), it holds that_
\[\forall\theta\in\Theta,\ |\phi_{\mu}(\theta)-\phi_{\nu}(\theta)|\leq C \left(\int_{\Theta}|\theta_{1}|^{2}d\mu(\theta_{1})+\int_{\Theta}|\theta_{2}|^ {2}d\nu(\theta_{2})\right)W_{2}^{2}(\mu,\nu)(1+|\theta|^{2})\]
\[\forall\theta\in\Theta,\ |v_{\mu}(\theta)-v_{\nu}(\theta)|\leq C\left(\int_{ \Theta}|\theta_{1}|^{2}d\mu(\theta_{1})+\int_{\Theta}|\theta_{2}|^{2}d\nu( \theta_{2})\right)W_{2}^{2}(\mu,\nu)(1+|\theta|^{2})\]
Proof.: Here we focus on \(v_{\mu}\), the proof for \(\phi_{\mu}\) being very similar. Considering (29)-(30), one can decompose \(v_{\mu}\) as
\[v_{\mu}=:v_{\mu,1}+v_{2}+v_{\mu,3}, \tag{52}\]
with
\[v_{\mu,1} :=\nabla_{\theta}\left[\langle\nabla_{x}P_{\tau}(\mu),\nabla_{x} \Phi_{\tau}(\theta;\cdot)\rangle_{L^{2}(\Omega)}\right],\] \[v_{2} :=\nabla_{\theta}\left[-\langle f,\Phi_{\tau}(\theta;\cdot) \rangle_{L^{2}(\Omega)}\right],\] \[v_{\mu,3} :=\nabla_{\theta}\left[\int_{\Omega}P_{\tau}(\mu)(x)dx\times\int _{\Omega}\Phi_{\tau}(\theta;x)dx\right].\]
Using standard integral derivation and Fubini theorems, it holds that for all \(\gamma\in\Gamma_{o}(\mu,\nu)\),
\[v_{\mu,1}(\theta)-v_{\nu,1}(\theta)=\int_{\Theta^{2}}\int_{\Omega}\nabla_{ \theta}\nabla_{x}\Phi_{\tau}(\theta;x)(\nabla_{x}\Phi_{\tau}(\theta_{1};x)- \nabla_{x}\Phi_{\tau}(\theta_{2};x))dxd\gamma(\theta_{1},\theta_{2}).\]
Owing to (33)-(34), one gets
\[|v_{\mu,1}(\theta)-v_{\nu,1}(\theta)| \leq C(\tau)\int_{\Theta^{2}}\max(|\theta_{1}|,|\theta_{2}|)|\theta_{1}-\theta_{2}||\theta|^{2}\,d\gamma(\theta_{1},\theta_{2})\leq C(\tau)\left(\int_{\Theta}|\theta_{1}|^{2}d\mu+\int_{\Theta}|\theta_{2}|^{2}d\nu\right)W_{2}^{2}(\mu,\nu)|\theta|^{2},\]
where \(C(\tau)\) is a positive constant which only depends on \(\tau\), and where we used the Cauchy–Schwarz inequality. For the third term in the decomposition (52), one has :
\[v_{\mu,3}-v_{\nu,3}=\int_{\Theta^{2}}\int_{\Omega}\Phi_{\tau}(\theta_{1};\cdot)- \Phi_{\tau}(\theta_{2};\cdot)dxd\gamma(\theta_{1},\theta_{2})\times\int_{\Omega }\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)dx.\]
Owing to (31), one gets :
\[|v_{\mu,3}(\theta)-v_{\nu,3}(\theta)| \leq C(\tau)\int_{\Theta^{2}}\int_{\Omega}\max(|\theta_{1}|,|\theta_{2}|)|\theta_{1}-\theta_{2}|dxd\gamma(\theta_{1},\theta_{2})|\theta|\leq C(\tau)\left(\int_{\Theta}|\theta_{1}|^{2}d\mu+\int_{\Theta}|\theta_{2}|^{2}d\nu\right)W_{2}^{2}(\mu,\nu)|\theta|\]
where we used again the Cauchy–Schwarz inequality. Since the middle term \(v_{2}\) does not depend on the measure, it cancels in the difference; hence the desired result.
**Proposition 7**.: _Let \(\mu\in\mathcal{P}_{2}(\Theta)\) be such that there exists \(\theta\in\Theta\) with \(\phi_{\mu}(\theta)\neq 0\). Then there exist a set \(A\subset\Theta\) and \(\varepsilon>0\) such that, if there exists \(t_{0}>0\) with \(W_{2}(\mu(t_{0}),\mu)\leq\varepsilon\) and \(\mu(t_{0})(A)>0\), then there exists a time \(t_{1}\) with \(t_{0}<t_{1}<+\infty\) such that \(W_{2}(\mu(t_{1}),\mu)>\varepsilon\)._
Proof.: As \(\phi_{\mu}\) is linear in \(a\) and \(c\), it can be written in the form
\[\phi_{\mu}(\theta)=:a\psi_{\mu}(\omega)+cr_{\mu}.\]
By hypothesis, the set
\[A_{0}:=\{\theta\in\Theta\ |\ \phi_{\mu}(\theta)\neq 0\}\]
is a non-empty open set. This is equivalent to saying that either there exists \(\omega\) such that \(\psi_{\mu}(\omega)\neq 0\), or \(r_{\mu}\neq 0\). Suppose that \(\psi_{\mu}\) is nonzero somewhere, the case of \(r_{\mu}\) being similar. For all \(\alpha\in\mathbb{R}\), we denote by
\[\begin{cases}A_{\alpha}^{+}=\psi_{\mu}^{-1}(]\alpha,+\infty[),\\ A_{\alpha}^{-}=\psi_{\mu}^{-1}(]-\infty,\alpha[).\end{cases}\]
Now we focus on \(A_{0}^{-}\) and suppose that this set is non-empty. The case where \(A_{0}^{+}\) is non-empty is handled similarly and left to the reader.
By Lemma 11 and the regular value theorem, there exists \(\eta>0\) such that \(\partial A_{-\eta}^{-}=\psi_{\mu}^{-1}(\{-\eta\})\) is a \((d+1)\)-dimensional orientable manifold on which \(\nabla_{\omega}\psi_{\mu}\) is nonzero. With our choice of activation function \(\sigma_{H,\tau}\), it is easy to prove that \(A_{-\eta}^{-}\) is a bounded set. Indeed, if \(b\) is large enough, then \(\Omega\ni x\mapsto\sigma_{H,\tau}(w\cdot x+b)\) is zero and \(\psi_{\mu}(w,b)\) is zero.
On \(A_{-\eta}^{-}\), the gradient \(\nabla_{\omega}\psi_{\mu}\) is pointing outward of \(A_{-\eta}^{-}\) and, denoting by \(n_{\text{out}}\) the outward unit vector to \(A_{-\eta}^{-}\), there exists \(\beta>0\) such that \(|\nabla_{\omega}\psi_{\mu}\cdot n_{\text{out}}|>\beta\) on \(\partial A_{-\eta}^{-}\), since this continuous function is nonzero on a compact set. Hence, defining :
\[A:=\{(a,c,\omega)\in\Theta\ |\ \omega\in A_{-\eta}^{-},\ a\geq 0\}\]
and owing to the fact that \(v_{\mu}=(v_{\mu,a},v_{\mu,c},v_{\mu,\omega})\) with \(v_{\mu,a}=\psi_{\mu}(\omega)\), \(v_{\mu,c}=r_{\mu}\), \(v_{\mu,\omega}=a\nabla_{\omega}\psi_{\mu}(\omega)\), it holds :
\[\begin{cases}\qquad v_{\mu,a}<-\eta\ \text{on}\ A\\ v_{\mu,\omega}\cdot n_{out}>\beta a\ \text{on}\ \mathbb{R}_{+}\times\mathbb{R}\times\partial A_{-\eta}^{-}.\end{cases} \tag{53}\]
By contradiction, suppose that \(\mu(t_{0})\) has nonzero mass on \(A\) and that \(W_{2}(\mu,\mu(t))\leq\varepsilon\) (with \(\varepsilon\) fixed later) for all time \(t\geq t_{0}\). Then, using Lemma 10, one has :
\[|v_{\mu(t)}(\theta)-v_{\mu}(\theta)|\leq C(\tau,\mu)(1+|\theta|^{2})\varepsilon^{2} \tag{54}\]
and
\[|\phi_{\mu(t)}(\theta)-\phi_{\mu}(\theta)|\leq C(\tau,\mu)(1+|\theta|^{2})\varepsilon^{2}.\]
One takes \(\varepsilon:=\sqrt{\frac{\eta}{2C(\tau,\mu)R}}\) where \(R>0\) satisfies :
\[(R-1)\mu(t_{0})(A)>\int|\theta|^{2}d\mu+\frac{\eta}{2C(\tau,\mu)R} \tag{55}\]
which exists since \(\mu(t_{0})(A)>0\) by hypothesis. On the set \(\{\theta\in A\ |\ 1+|\theta|^{2}\leq R\}\) and by (54), we have :
\[|v_{\mu(t)}(\theta)-v_{\mu}(\theta)|\leq\frac{\eta}{2}\]
and so by (53) and the fact that \(v_{t}=-v_{\mu(t)}\):
\[\left\{\begin{aligned} v_{t,a}&>\eta/2\ \text{on}\ A\\ v_{t,\omega}\cdot n_{out}&<-\beta/2\times a\ \text{on}\ \partial A^{-}_{-\eta}.\end{aligned}\right.\]
The general picture is given by Figure 3. As a consequence, there exists a time \(t_{1}\) such that the set \(\{\theta\in A\ |\ 1+|\theta|^{2}\leq R\}\) has no mass and
\[\int|\theta|^{2}d\mu(t)(\theta)\geq(R-1)\mu(t)(A)\geq(R-1)\mu(t_{0})(A).\]
At the same time, as \(W_{2}(\mu,\mu(t))\leq\varepsilon\) :
\[\int|\theta|^{2}d\mu(t)(\theta)\leq\int|\theta|^{2}d\mu(\theta)+\varepsilon^{2}=\int|\theta|^{2}d\mu(\theta)+\frac{\eta}{2C(\tau,\mu)R}\]
and this is a contradiction with the condition (55) on \(R\).
**Remark 7**.: _The set \(A\) constructed in the proof of previous lemma is of the form :_
\[A:=\{(a,c,\omega)\in\Theta\ |\ \omega\in A^{-}_{-\eta_{1}}\}\cup\{(a,c,\omega)\ |\ \omega\in A^{+}_{\eta_{2}}\} \tag{56}\]
_where \(\eta_{1},\eta_{2}\) are strictly positive._
**Lemma 11**.: _For all \(\mu\in\mathcal{P}_{2}(\Theta)\), if \(\psi_{\mu}<0\) somewhere, there exists a strictly negative regular value \(-\eta\) (\(\eta>0\)) of \(\psi_{\mu}\)._
Proof.: As \(\psi_{\mu}<0\) somewhere and by continuity, there exists a non-empty open set \(O\subset]-\infty,0[\) such that \(O\subset range(\psi_{\mu})\). Next, we use the Sard-Morse theorem recalled below :
**Theorem 8** (Sard-Morse).: _Let \(\mathcal{M}\) be a differentiable manifold and \(f:\mathcal{M}\to\mathbb{R}\) of class \(\mathcal{C}^{n}\), then the image of the critical points of \(f\) (where the gradient is zero) is Lebesgue negligible in \(\mathbb{R}\)._
This result applies to \(\psi_{\mu}\), so the image of the critical points of \(\psi_{\mu}\) is Lebesgue negligible. As a consequence, there exists a point \(o\in O\) which is a regular value of \(\psi_{\mu}\). As \(o\in O\), it is strictly negative, and this finishes the proof of the lemma.
#### 3.3.3 Convergence
This preliminary lemma gives an insight into why Hypothesis 1 is useful :
**Lemma 12**.: _For all \(\mu\in\mathcal{P}_{2}(\Theta)\), \(\theta\notin\mathbb{R}^{2}\times S_{\mathbb{R}^{d}}(1)\times]-\sqrt{d}-2,\sqrt{d }+2[,\tau>1\), the potential writes :_
\[\phi_{\mu}(\theta)=cr_{\mu}\]
_where \(r_{\mu}\) is a constant that depends on \(\mu\). In particular, \(\phi_{\mu}(\theta)\) does not depend on \(a,w,b\)._
Proof.: For all \(x\in\Omega,|b|>\sqrt{d}+2,\tau>1\) :
\[|w\cdot x+b|\geq|b|-|x|_{\infty}|w|_{1}\geq|b|-\sqrt{d}>2\]
and
\[\sigma_{H,\tau}(w\cdot x+b)=0.\]
This implies that for \(|b|\geq\sqrt{d}+2,\mu\in\mathcal{P}_{2}(\Theta)\), the potential \(\phi_{\mu}\) writes \(\phi_{\mu}=cr_{\mu}\) where \(r_{\mu}\) is a constant.
In fact Hypothesis 1 is verified by the gradient curve \((\mu(t))_{t\geq 0}\) for all time. This is proved in the next lemma.
**Lemma 13**.: _If \(\mu_{0}\) satisfies Hypothesis 1 then for all \(t\geq 0\) and all open set \(O\subset S_{\mathbb{R}^{d}}(1)\times[-\sqrt{d}-2,\sqrt{d}+2]\),_
\[\mu(t)(\mathbb{R}^{2}\!\times\!O)>0\]
The arguments of the proof of the last lemma are based on fine tools of algebraic topology. One can find a nice introduction to the topic in the reference book [21]. In simple words, we exploit homotopy properties of the sphere to prove that the measure \(\mu(t)\) keeps a large enough support.
Proof.: For all \(t\geq 0\), as \(\mu(t)=(\chi(t))\#\mu_{0}\), we have [2, Lemma C.8] :
\[\operatorname{Supp}(\mu(t))=\overline{\chi(t)\left(\operatorname{Supp}(\mu_ {0})\right)}. \tag{57}\]
Now let \(\xi_{t}(w,b):=(P_{S_{\mathbb{R}^{d}}(1)\times\mathbb{R}}\circ\chi(t))((0,0,w,b))\) where \(P_{S_{\mathbb{R}^{d}}(1)\times\mathbb{R}}\) is the projection on \(S_{\mathbb{R}^{d}}(1)\times\mathbb{R}\) (the \(w,b\) variables). We claim that the choice of activation function leaves the extremal spheres invariant, i.e., \(\xi_{t}(w,\pm(\sqrt{d}+2))=(w,\pm(\sqrt{d}+2))\). Indeed, by Lemma 12, for \(\theta=(c,a,w,\pm(\sqrt{d}+2))\), \(\phi_{\mu}(\theta)=cr_{\mu}\), giving :
\[\left\{\begin{array}{rcl}v_{\mu,w}(\theta)=&0,\\ v_{\mu,b}(\theta)=&0\end{array}\right.\]
and the claim is proven. Consequently by Lemma 14, the continuous map \(\xi_{t}\) is surjective.
Now let \(O\subset S_{\mathbb{R}^{d}}(1)\times[-\sqrt{d}-2,\sqrt{d}+2]\) be an open set. By what precedes, there exists a point \(\omega\in S_{\mathbb{R}^{d}}(1)\times[-\sqrt{d}-2,\sqrt{d}+2]\) such that \(\xi_{t}(\omega)\in O\) and \(\chi(t)((0,0,\omega))\in\mathbb{R}^{2}\times O\). As \((0,0,\omega)\) belongs to the support of \(\mu_{0}\) by hypothesis then \(\chi(t)((0,0,\omega))\) belongs to the support of \(\mu(t)\) by (57) and :
\[\mu(t)(\mathbb{R}^{2}\!\times\!O)>0\]
which finishes the proof of the lemma.
Lemma 14 gives conditions for the surjectivity of a continuous map on a cylinder.
Figure 3: The escape of mass towards large values of \(a\)
**Lemma 14**.: _Let \(f\) be a continuous map \(f:S_{\mathbb{R}^{d}}(1)\times[0,1]\to S_{\mathbb{R}^{d}}(1)\times[0,1]=:C\), homotopic to the identity such that :_
\[\forall w\in S_{\mathbb{R}^{d}}(1),\ \begin{cases}f(w,0)=&(w,0),\\ f(w,1)=&(w,1).\end{cases}\]
_Then \(f\) is surjective._
Proof.: Suppose that \(f\) misses a point \(p\), then necessarily \(p=(w,t)\) with \(0<t<1\). We can write :
\[g:C\to C\setminus\{p\}\]
the restriction of \(f\) to its image. The induced homomorphism on homology groups reads :
\[g_{\star}:H_{d-1}(C)\to H_{d-1}(C\setminus\{p\}).\]
Aside from that, we have the classical information on the homology groups of \(C\) and \(C\setminus\{p\}\) :
\[\begin{cases}H_{d-1}(C)=H_{d-1}(S_{\mathbb{R}^{d}}(1))&\simeq\mathbb{Z},\\ H_{d-1}(C\setminus\{p\})=H_{d-1}(S_{\mathbb{R}^{d}}(1)\lor S_{\mathbb{R}^{d}}( 1))&\simeq\mathbb{Z}^{2}\end{cases}\]
where \(\vee\) designates the wedge sum. Thus, the homomorphism \(g_{\star}\) can be written as :
\[g_{\star}:\mathbb{Z}\to\mathbb{Z}^{2}.\]
As \(g\) leaves the two spheres \(w\mapsto(w,0)\), \(w\mapsto(w,1)\) invariant, we have :
\[g_{\star}(1)=(1,1).\]
Now we denote by \(i:C\setminus\{p\}\to C\) the canonical inclusion map. For all \((a,b)\in\mathbb{Z}^{2}\),
\[i_{\star}(a,b)=a+b.\]
By hypothesis, \(f\) is homotopic to the identity so \(f_{\star}=I_{\star}\) and \(f_{\star}(1)=1\) but at the same time :
\[f_{\star}(1)=i_{\star}g_{\star}(1)=i_{\star}((1,1))=2\]
which gives a contradiction.
This allows us to conclude on the convergence and prove Theorem 7.
Proof of Theorem 7.: By contradiction, suppose \(\mu^{\star}\) is not optimal. Then, by Lemma 9, \(\phi_{\mu^{\star}}\neq 0\) somewhere. Reusing the separation of variables (see the proof of Proposition 7), \(\phi_{\mu^{\star}}\) writes :

\[\phi_{\mu^{\star}}(\theta)=a\psi_{\mu^{\star}}(w,b)+cr_{\mu^{\star}}.\]
Hence either :
* \(r_{\mu^{\star}}\) is not zero, hence \(v_{\mu^{\star},c}\neq 0\), and one can prove that some mass escapes at \(c=\infty\) as in the proof of Proposition 7.
* \(\psi_{\mu^{\star}}\) is not identically zero and the set \(A\) defined in (56) is not empty and verifies : \[A\subset\mathbb{R}^{2}\times S_{\mathbb{R}^{d}}(1)\times[-\sqrt{d}-2,\sqrt{d}+2]\] (58) by Lemma 12.
We focus on the last item. By Proposition 7, there exists \(\varepsilon>0\) such that if \(W_{2}(\mu(t_{0}),\mu^{\star})\leq\varepsilon\) for some \(t_{0}\) and \(\mu(t_{0})(A)>0\), then there exists a further time \(t_{1}\) with \(W_{2}(\mu(t_{1}),\mu^{\star})>\varepsilon\). As \((\mu(t))_{t\geq 0}\) converges towards \(\mu^{\star}\), there exists \(t_{0}\) such that :
\[\forall t\geq t_{0},\ W_{2}(\mu(t),\mu^{\star})\leq\varepsilon.\]
But by Lemma 13 and (58), \(\mu(t)(A)>0\) for all time, and consequently there exists a time \(t_{1}>t_{0}\) with :

\[W_{2}(\mu(t_{1}),\mu^{\star})>\varepsilon\]
which gives the contradiction.
## 4 Numerical experiments
In this section, we conduct numerical experiments to evaluate the potential of the proposed method.
### The effect of frequency
First, the influence of the frequency on the approximation is investigated. To do so, we consider \(d=1\) and the following source term, for which the solution is a cosine mode :
\[f_{k}(x):=\pi^{2}|k|^{2}\cos(\pi k\cdot x).\]
In higher dimension, we use the corresponding source term, which is a tensor product of its one-dimensional counterpart :
\[f_{k}(x_{1},\cdots,x_{d}):=\pi^{2}|k|_{l^{2}}^{2}\cos(\pi k_{1}\cdot x_{1}) \cdots\cos(\pi k_{d}\cdot x_{d}).\]
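For reference, these two formulas translate directly into code; a small sketch with hypothetical helper names, where the identity \(-\Delta u^{\star}=\pi^{2}|k|_{l^{2}}^{2}u^{\star}\) follows from differentiating each cosine factor twice:

```python
import numpy as np

def u_star(x, k):
    # exact solution u*(x) = cos(pi k_1 x_1) ... cos(pi k_d x_d); x: (n, d)
    return np.prod(np.cos(np.pi * k * x), axis=1)

def f_k(x, k):
    # source term f_k = -Laplacian(u*) = pi^2 |k|^2 u*
    return np.pi**2 * np.sum(k**2) * u_star(x, k)
```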
The code is written in Python using the Keras/TensorFlow framework. One should keep in mind the following implementation facts :
* The neural network represents the numerical approximation, taking values of \(x\in\Omega\) as input and returning a real number as output.
* The loss function is approximated with Monte Carlo sampling for the integrals, where the sampling measure is uniform on \(\Omega\). For each training phase, we use batches of size \(10^{2}\) obtained from a dataset of \(10^{5}\) samples; the number of epochs is chosen so that the total optimization time equals \(2\) (learning rate \(\times\) number of steps \(=2\)). Note that the dataset is shuffled at each epoch.
* The derivative involved in the loss is computed thanks to automatic differentiation.
* The training routine is given by the backpropagation algorithm coupled with a gradient descent optimizer whose learning rate is \(\zeta:=\dfrac{1}{2nm}\), where \(n\) is the batch size and \(m\) is the width of the neural network involved. This choice will be explained later in the analysis (see also the sketch after this list).
* In all the plots, the reader will see the mean curve and a shaded zone representing the interval whose width is two times the standard deviation. Each simulation is run \(4\) times to compute these statistics.
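To make these points concrete, here is a minimal sketch of the training loop under the conventions above, assuming the Monte Carlo loss \(\frac{1}{n}\sum_{i}\left(\frac{1}{2}|\nabla u(x_{i})|^{2}-f(x_{i})u(x_{i})\right)+\frac{1}{2}\left(\frac{1}{n}\sum_{i}u(x_{i})\right)^{2}\) (the normalization of the zero-mean penalty is our assumption) and a \(\tanh\) activation standing in for \(\sigma_{H,\tau}\), with \(d=1\):

```python
import numpy as np
import tensorflow as tf

d, m, n = 1, 1000, 100             # dimension, network width, batch size
lr = 1.0 / (2 * n * m)             # the learning rate scaling zeta = 1/(2nm)

# Two-layer network; tanh is a stand-in for the activation sigma_{H,tau}.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(m, activation="tanh", input_shape=(d,)),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.SGD(learning_rate=lr)

k = 1.0
def source(x):                     # f_k(x) = pi^2 k^2 cos(pi k x) for d = 1
    return np.pi**2 * k**2 * tf.cos(np.pi * k * x)

@tf.function
def train_step(x):
    with tf.GradientTape() as outer:
        with tf.GradientTape() as inner:
            inner.watch(x)
            u = model(x)
        grad_u = inner.gradient(u, x)          # nabla u via automatic differentiation
        # Monte Carlo estimate of the energy on a uniform batch
        loss = (tf.reduce_mean(0.5 * tf.reduce_sum(grad_u**2, axis=1)
                               - source(x)[:, 0] * u[:, 0])
                + 0.5 * tf.reduce_mean(u) ** 2)
    grads = outer.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step in range(2000):
    batch = tf.random.uniform((n, d))          # uniform samples on [0,1]^d
    train_step(batch)
```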
For \(d=1\) and a width \(m=1000\), the simulations are reported in Figure 4, where very satisfactory results are observed for \(k=1,3\); the same conclusions hold for \(d=2\).
Figure 4: The effect of frequency on the approximation when \(d=1\) and \(m=1000\)
Figure 5: The numerical solutions when \(d=1\) and \(m=1000\)
**Remark 8**.: _In this remark, we expose some heuristic arguments for the present choice of scaling related to the learning rate :_
\[\zeta:=\frac{1}{2nm}.\]
_It is possible to write the learning scheme as follows :_
\[\frac{\theta_{t+1}-\theta_{t}}{dt}=-\nabla_{\theta}\phi_{\mu_{t}^{m}}^{n}( \theta_{t}) \tag{59}\]
_where :_
\[\phi_{\mu_{t}^{m}}^{n}(\theta):=\frac{1}{nm}\sum_{i,j}\nabla\Phi( \theta_{j},x_{i})\cdot\nabla\Phi(\theta,x_{i})-f(x_{i})\Phi(\theta,x_{i})+ \left(\frac{1}{nm}\sum_{i,j}\Phi(\theta,x_{i})\right)^{2} \tag{60}\]
_with \((x_{i})_{i}\) are \(n\) samples taken uniformly on the \(d\) dimensional cube._
_By analogy, equations (59)-(60) can be interpreted as an explicit finite element scheme for the heat equation where the space discretization parameter is \(h:=\frac{1}{\sqrt{nm}}.\) This gives the CFL condition :_
\[2dt\leq h^{2}\]
_which is equivalent to :_
\[dt\leq\frac{1}{2nm}.\]
_In practice, one can observe that if one takes \(dt>O\left(\frac{1}{nm}\right)\) then the scheme diverges in the same way as a classic finite element scheme._

_The CFL condition is bad news since it prevents the use of the large batch sizes necessary to get a good precision. In practice, the maximum one can do with a standard personal computer is \(n,m=10^{2}\)._

Figure 6: The effect of frequency on the approximation when \(d=2\)
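As a quick numerical check of the bound in Remark 8 with the values just mentioned:

```python
n, m = 100, 100                  # batch size and network width
h = 1.0 / (n * m) ** 0.5         # effective mesh size h = 1/sqrt(nm)
dt_max = 0.5 * h ** 2            # CFL bound dt <= h^2 / 2 = 1/(2nm)
print(dt_max)                    # 5e-05
```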
### The effect of dimension
To evaluate the effect of dimension on performance, we consider frequencies of the form \(k=(\bar{k},0,\cdots,0)\), where \(\bar{k}\) is an integer, and plot the \(L^{2}\) error as a function of the dimension for different \(\bar{k}\). This is done in Figure 7, where several observations can be made :
* For low frequency, the precision is not affected by dimension.
* At high frequency, performance deteriorates as dimension increases.
* A larger neural network captures high-frequency modes better, up to a certain dimension.
* Variance increases with frequency but not with dimension.
For completeness, we plot in Figure 8 a high-dimensional example where \(d=10\) and \(k=(1,1,0,\cdots,0)\), to show that the proposed method works well in the high-dimensional/low-frequency regime. The contour plot shows the function's values on the slice \((x_{1},x_{2},0.5,\cdots,0.5)\).
Figure 7: The effect of dimension for different frequencies and width
Finally, we show an example where many low frequencies are involved in the high-dimensional regime :
\[f(x)=2\pi^{2}\sum_{k=1}^{d-1}\cos(\pi\cdot x_{k})\cos(\pi\cdot x_{k+1})\]
whose solution is :
\[u^{\star}(x)=\sum_{k=1}^{d-1}\cos(\pi\cdot x_{k})\cos(\pi\cdot x_{k+1}).\]
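Indeed, for each summand one checks directly that

\[-\Delta\left[\cos(\pi x_{k})\cos(\pi x_{k+1})\right]=2\pi^{2}\cos(\pi x_{k})\cos(\pi x_{k+1}),\]

since differentiating twice in \(x_{k}\) and in \(x_{k+1}\) each contributes a factor \(\pi^{2}\); summing over \(k\) gives \(-\Delta u^{\star}=f\).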
For \(d=6\), \(m=1000\), and all other parameters identical to the previous cases, one gets convergence of the solution in Figure 9, where the contour plot again shows the function's values on the slice \((x_{1},x_{2},0.5,\cdots,0.5)\).
Figure 8: The case \(d=10\), \(k=(1,1,0,\cdots,0)\) and \(m=1000\)

Figure 9: The mixed mode solution

## 5 Conclusion

In this article, the ability of two-layer neural networks to solve the Poisson equation is investigated. First, the PDE problem, commonly understood in the Sobolev sense, is reinterpreted from the perspective of probability measures by writing the energy functional as a function over probabilities. Then, we propose to solve the obtained minimization problem by means of gradient curves, for which an existence result is shown. To justify this choice of method, convergence towards an optimal measure is proved, assuming the convergence of the gradient curve. Finally, numerical illustrations with a detailed analysis of the effects of dimension and frequency are presented. With this work, it becomes clear that neural networks are a viable method to solve the Poisson equation even in the high-dimensional regime, something out of reach for classical methods. Nonetheless, some questions and extensions deserve more detailed developments. First, the convergence of the gradient curve is not proved theoretically, even though it is observed in practice. Additionally, the domain considered is very particular, \(\Omega=[0,1]^{d}\), and it is not obvious that such a theory generalizes to domains where a sine/cosine decomposition is not available. In the numerical illustrations, the integrals involved in the cost were not computed exactly but approximated by uniform sampling; it would be interesting to study the convergence of gradient curves with respect to the number of samples.
## Appendix A The differential structure of Wasserstein spaces over compact Alexandrov spaces
The aim of this section is to get acquainted with the differential structure of \(\mathcal{P}_{2}(\Theta)\). The results presented here are not rigorously proved; we rather give a didactic introduction to the topic, the main reference being [18].
### The differential structure of Alexandrov spaces
An Alexandrov space \((A,d)\) is a geodesic space equipped with a distance \(d\) having a nice concavity property on triangles. Roughly, Alexandrov spaces are spaces where the curvature is bounded from below by a uniform constant. Before going further, we need to introduce some notation :
**Definition 6**.: _Let \(\alpha\) be a unit speed geodesic with \(\alpha(0)=a\in A\) and \(s\geq 0\), then we introduce the notation :_
\[(\alpha,s):\mathbb{R}_{+}\ni t\mapsto\alpha(st)\]
_the associated geodesic of velocity \(s\). We then make the identification "\((\alpha,1)=\alpha\)" for a unit speed geodesic \(\alpha\)._
It is not so important to focus on a rigorous definition of such spaces, but one should remember the following fundamental property, namely the existence of a tangential cone structure :
**Theorem 9**.: _Let \(\alpha,\beta\) be two unit speed geodesics with \(\alpha(0)=\beta(0)=:a\in A\) and \(s,t\geq 0\). Then the limit :_
\[\sigma_{a}((\alpha,s),(\beta,t)):=\lim_{\varepsilon\to 0}\frac{1}{ \varepsilon}d(\alpha(s\varepsilon),\beta(t\varepsilon))\]
_exists. Moreover,_
\[\frac{1}{2st}\left(s^{2}+t^{2}-\sigma_{a}((\alpha,s),(\beta,t))^{2}\right) \tag{61}\]
_depends neither on \(s\) nor on \(t\)._
The previous theorem is very important as it makes it possible to introduce a notion of angle and scalar product :
**Corollary 3**.: _One can define the local angle \(\angle_{a}((\alpha,s),(\beta,t))\) between \((\alpha,s)\) and \((\beta,t)\) by :_
\[\cos(\angle_{a}((\alpha,s),(\beta,t))):=\frac{1}{2st}\left(s^{2}+t^{2}-\sigma_{a}((\alpha,s),(\beta,t))^{2}\right)\]
_and a local scalar product :_
\[\langle(\alpha,s),(\beta,t)\rangle_{a}:=st\cos(\angle_{a}((\alpha,s),(\beta,t) )).\]
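As a sanity check, in the Euclidean case \(A=\mathbb{R}^{n}\), geodesics from \(a\) are straight lines \(\alpha(t)=a+tu\) and \(\beta(t)=a+tv\) with \(|u|=|v|=1\), so that \(\sigma_{a}((\alpha,s),(\beta,t))=|su-tv|\) and

\[\cos(\angle_{a}((\alpha,s),(\beta,t)))=\frac{s^{2}+t^{2}-|su-tv|^{2}}{2st}=u\cdot v,\]

recovering the usual Euclidean angle and scalar product.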
We then have the following definitions.
**Definition 7**.: _The space of directions \(\Sigma_{a}(A)\) is the completion of_
\[\{(\alpha,1)\ |\ \alpha\text{ unit speed geodesic departing from }a\}\]

_quotiented by the relation \(\sigma_{a}=0\), with respect to the distance \(\sigma_{a}\)._
_The tangent cone, i.e. the set of geodesics departing from \(a\) at speed \(s\), of the form \((\alpha,s)\) for some \((\alpha,1)\in\Sigma_{a}(A)\), is denoted by \(C_{a}(A)\)._
A major result from [18] is that if the underlying space \(A\) is Alexandrov and compact, then the space of probability measures \(\mathcal{P}_{2}(A)\) is also an Alexandrov space and all the differential structure presented above is available. The proof of this result is based on McCann interpolation, which allows one to link probability geodesics with geodesics of the underlying space.
Moreover, it is possible to define a notion of differentiation.
**Definition 8**.: _A curve \((a_{t})_{t\in\mathbb{R}}\) of \(A\) is said to be differentiable at \(t=0\) if there exists \((\alpha,\tau)\in C_{a_{0}}(A)\) such that for all sequences \(t_{i}\geq 0\) with \(\lim\limits_{i\to\infty}t_{i}=0\) and all unit speed geodesics \((\alpha_{i},1)\in\Sigma_{a_{0}}(A)\) linking \(a_{0}\) and \(a_{t_{i}}\), it holds that :_
\[\lim\limits_{i\to\infty}(\alpha_{i},d(a_{0},a_{t_{i}})/t_{i})=(\alpha,\tau)\]
_where the convergence has to be understood in the sense of the distance \(\sigma_{a}\). Moreover, the derivative of the curve at \(t=0\) writes :_
\[a_{0}^{\prime}:=(\alpha,\tau).\]
### The notion of gradient
Now let us consider an energy \(\mathcal{E}:A\to\mathbb{R}\) with the following property of convexity.
**Definition 9**.: _We say that \(\mathcal{E}\) is convex along geodesics if there exists \(K\in\mathbb{R}\) such that for all rescaled geodesics \(\alpha:[0,1]\to A\) :_
\[\mathcal{E}(\alpha(\lambda))\leq(1-\lambda)\,\mathcal{E}(\alpha(0))+\lambda\,\mathcal{E}(\alpha(1))-\frac{K}{2}\lambda(1-\lambda)d^{2}(\alpha(0),\alpha(1)).\]
Assuming such convexity, it is possible to define the direction of the gradient of \(\mathcal{E}\) using the differential structure of \(A\) (see [18, Lemma 4.3]). Before doing this, it is necessary to introduce the directional derivative :
**Definition 10**.: _For \(a\in A\) and \((\alpha,s)\in C_{a}(A)\), one defines :_
\[D_{a}\,\mathcal{E}((\alpha,s)):=\lim\limits_{\varepsilon\to 0}\frac{ \mathcal{E}(\alpha(s\varepsilon))-\mathcal{E}(\alpha(0))}{\varepsilon}.\]
One can prove that the limit above exists using the convexity assumption on \(\mathcal{E}\). Owing to this, there exists a direction for which the local slope (see Definition 4) is attained, in the sense defined below.
**Theorem 10**.: _For all \(a\in A\) such that \(|\nabla_{-}\,\mathcal{E}\,|(a)<\infty\), there exists a unique direction \((\alpha,1)\in\Sigma_{a}(A)\) such that :_
\[D_{a}\,\mathcal{E}((\alpha,1))=-|\nabla_{-}\,\mathcal{E}\,|(a).\]
_This direction \(\alpha\) is denoted by \(\frac{\nabla_{-}\,\mathcal{E}(a)}{|\nabla_{-}\,\mathcal{E}\,|(a)}\), which means that :_
\[D_{a}\,\mathcal{E}((\alpha,|\nabla_{-}\,\mathcal{E}\,|(a)))=-|\nabla_{-}\,\mathcal{E}\,|^{2}(a).\]
With this, it is straightforward to define the notion of gradient curve.
**Definition 11**.: _A Lipschitz curve \((a_{t})_{t\geq 0}\) is said to be a gradient curve with respect to \(\mathcal{E}\) if it is differentiable for all \(t\geq 0\) and :_
\[\forall t\geq 0,\ a_{t}^{\prime}=\left(\frac{\nabla_{-}\,\mathcal{E}(a_{t})}{| \nabla_{-}\,\mathcal{E}\,|(a_{t})},|\nabla_{-}\,\mathcal{E}\,|(a_{t})\right) \in C_{a_{t}}(A).\]
In [18], results about existence and uniqueness of gradient curves on \(\mathcal{P}_{2}(A)\) are given.
## Acknowledgements
The authors acknowledge funding from the Tremplin-ERC Starting ANR grant HighLEAP (ANR-22-ERCS-0012). |
2307.00527 | Graph Neural Networks based Log Anomaly Detection and Explanation | Event logs are widely used to record the status of high-tech systems, making log anomaly detection important for monitoring those systems. Most existing log anomaly detection methods take a log event count matrix or log event sequences as input, exploiting quantitative and/or sequential relationships between log events to detect anomalies. Unfortunately, only considering quantitative or sequential relationships may result in low detection accuracy. To alleviate this problem, we propose a graph-based method for unsupervised log anomaly detection, dubbed Logs2Graphs, which first converts event logs into attributed, directed, and weighted graphs, and then leverages graph neural networks to perform graph-level anomaly detection. Specifically, we introduce One-Class Digraph Inception Convolutional Networks, abbreviated as OCDiGCN, a novel graph neural network model for detecting graph-level anomalies in a collection of attributed, directed, and weighted graphs. By coupling the graph representation and anomaly detection steps, OCDiGCN can learn a representation that is especially suited for anomaly detection, resulting in a high detection accuracy. Importantly, for each identified anomaly, we additionally provide a small subset of nodes that play a crucial role in OCDiGCN's prediction as explanations, which can offer valuable cues for subsequent root cause diagnosis. Experiments on five benchmark datasets show that Logs2Graphs performs at least on par with state-of-the-art log anomaly detection methods on simple datasets while largely outperforming state-of-the-art log anomaly detection methods on complicated datasets. | Zhong Li, Jiayang Shi, Matthijs van Leeuwen | 2023-07-02T09:38:43Z | http://arxiv.org/abs/2307.00527v3 | # Graph Neural Network based Log Anomaly Detection and Explanation
###### Abstract.
Event logs are widely used to record the status of high-tech systems, making log anomaly detection important for monitoring those systems. Most existing log anomaly detection methods take a log event count matrix or log event sequences as input, exploiting quantitative and/or sequential relationships between log events to detect anomalies. Unfortunately, only considering quantitative or sequential relationships may result in many false positives and/or false negatives. To alleviate this problem, we propose a graph-based method for unsupervised log anomaly detection, dubbed _Logs2Graphs_, which first converts event logs into attributed, directed, and weighted graphs, and then leverages graph neural networks to perform graph-level anomaly detection. Specifically, we introduce One-Class Digraph Inception Convolutional Networks, abbreviated as OCDiGCN, a novel graph neural network model for detecting graph-level anomalies in a collection of attributed, directed, and weighted graphs. By coupling the graph representation and anomaly detection steps, OCDiGCN can learn a representation that is especially suited for anomaly detection, resulting in a high detection accuracy. Importantly, for each identified anomaly, we additionally provide a small subset of nodes that play a crucial role in OCDiGCN's prediction as explanations, which can offer valuable cues for subsequent root cause diagnosis. Experiments on five benchmark datasets show that _Logs2Graphs_ performs at least on par with state-of-the-art log anomaly detection methods on simple datasets while largely outperforming state-of-the-art log anomaly detection methods on complicated datasets.
Log Analysis, Log Anomaly Detection, Graph Neural Networks
Moreover, most existing log anomaly detection methods focus exclusively on detection performance without giving any explanations.
To overcome these limitations, we propose _Logs2Graphs_, a graph-based unsupervised log anomaly detection approach by designing a novel one-class graph neural network. Specifically, _Logs2Graphs_ first utilises off-the-shelf methods to learn a semantic embedding for each log event, and then assigns log messages to different groups. Second, _Logs2Graphs_ converts each group of log messages into an attributed, directed, and weighted graph, with each node representing a log event, the node attributes containing its semantic embedding, a directed edge representing how an event is followed by another event, and the corresponding edge weight indicating the number of times the events follow each other. Third, by coupling the graph representation learning and anomaly detection objectives, we introduce One-Class Digraph Inception Convolutional Networks (OCDiGCN) as a novel method to detect anomalous graphs from a set of graphs. As a result, _Logs2Graphs_ leverages the rich and expressive power of attributed, directed and edge-weighted graphs to represent logs, followed by using graph neural networks to effectively detect graph-level anomalies, taking into account both semantic information of log events and structure information (including sequential information as a special case) among log events. Importantly, by decomposing the anomaly score of a graph into individual nodes and visualizing these nodes based on their contributions, we provide straightforward and understandable explanations for identified anomalies.
Overall, our contributions can be summarised as follows: (1) We introduce _Logs2Graphs_, which formalises log anomaly detection as a graph-level anomaly detection problem and represents log sequences as directed graphs to capture more structure information than previous approaches; (2) We introduce OCDiGCN, the first end-to-end unsupervised graph-level anomaly detection method for attributed, directed and edge-weighted graphs. By coupling the graph representation and anomaly detection objectives, we improve the potential for accurate anomaly detection over existing approaches; (3) For each detected anomaly, we identify important nodes as explanations, offering valuable cues for subsequent root cause diagnosis; (4) We empirically compare our approach to eight state-of-the-art log anomaly detection methods on five benchmark datasets, showing that _Logs2Graphs_ performs at least on par and often better than its competitors.
The remainder of this paper is organised as follows. Section 2 revisits related work, after which Section 3 formalises the problem. Section 4 describes Digraph Inception Convolutional Networks (Dosovitskiy et al., 2016), which are used for _Logs2Graphs_ in Section 5. We then evaluate _Logs2Graphs_ in Section 6 and conclude in Section 7.
## 2. Related Work
Graph-based log anomaly detection methods usually comprise five steps: log parsing, log grouping, graph construction, graph representation learning, and anomaly detection. In this paper we focus on graph representation learning, log anomaly detection and explanation, thus only revisiting related work in these fields.
### Graph Representation Learning
Graph-level representation learning methods, such as GIN (Shi et al., 2017) and Graph2Vec (Kipf and Welling, 2017), are able to learn a mapping from graphs to vectors. Further, graph kernel methods, including Weisfeiler-Lehman (WL) (Welker and Hinton, 2010) and Propagation Kernels (PK) (Kipf and Welling, 2017), can directly provide pairwise distances between graphs. Both types of methods can be combined with off-the-shelf anomaly detectors, such as OCSVM (Kipf and Welling, 2017) and iForest (Kipf and Welling, 2017), to perform graph-level anomaly detection.
To improve on these naive approaches, efforts have been made to develop graph representation learning methods especially for anomaly detection. For instance, OCGIN (Wang et al., 2017) and GLAM (Wang et al., 2017) combine the GIN (Shi et al., 2017) representation learning objective with the SVDD objective (Dosovitskiy et al., 2016) to perform graph-level representation learning and anomaly detection in an end-to-end manner. GLocalKD (Kipf and Welling, 2017) performs random distillation of graph and node representations to learn 'normal' graph patterns. Further, OCGTL (Shi et al., 2017) combines neural transformation learning and one-class classification to learn graph representations for anomaly detection. Although these methods are unsupervised or semi-supervised, they can only deal with attributed, undirected, and unweighted graphs.
iGAD (Wang et al., 2017) considers graph-level anomaly detection as a graph classification problem and combines attribute-aware graph convolution and substructure-aware deep random walks to learn graph representations. However, iGAD is a supervised method, and can only handle attributed, undirected, and unweighted graphs. CODEtect (Kipf and Welling, 2017) takes a pattern-based modelling approach using the minimum description length (MDL) principle and identifies anomalous graphs based on _motifs_. CODEtect can (only) deal with labelled, directed, and edge-weighted graphs, but is computationally very expensive. To our knowledge, we introduce the first unsupervised method for graph-level anomaly detection that can handle attributed, directed and edge-weighted graphs.
### Log Anomaly Detection and Explanation
Log anomaly detection methods can be roughly subdivided into three categories: 1) traditional, 'shallow' methods, such as principal component analysis (PCA) (Shen et al., 2016), one-class SVM (OCSVM) (Kipf and Welling, 2017), isolation forest (iForest) (Kipf and Welling, 2017) and histogram-based outlier score (HOS) (Kipf and Welling, 2017), which take a log event count matrix as input and analyse quantitative relationships; 2) deep learning based methods, such as DeepLog (Chen et al., 2017), LogAnomaly (Kipf and Welling, 2017), and AutoEncoder (Chen et al., 2017), which employ sequences of log events (and sometimes their semantic embeddings) as input, analysing sequential information and possibly semantic information of log events to identify anomalies; and 3) graph-based methods, such as TCFG (Kipf and Welling, 2017) and GLAD-PAW (Wang et al., 2017), which first convert logs into graphs and then perform graph-level anomaly detection.
To our knowledge, only a few works (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) have capitalised on the powerful learning capabilities of graph neural networks for log anomaly detection. GLAD-PAW (Wang et al., 2017) first transforms logs into attributed and undirected graphs and then uses a Position Aware Weighted Graph Attention Network to identify anomalies. However, converting logs into undirected graphs may result in loss of important sequential information. Further, DeepTraLog (Wang et al., 2017) combines traces and logs to generate a so-called Trace Event Graph, which is an attributed and directed graph. On this basis, they train
a Gated Graph Neural Networks based Deep Support Vector Data Description model to identify anomalies. However, their approach requires the availability of both traces and logs, and is unable to handle edge weights. In contrast, like LogGD (Wang et al., 2017), our proposed _Logs2Graphs_ approach is applicable to generic logs by converting logs into attributed, directed, and edge-weighted graphs. However, LogGD is a supervised method that requires fully labelled training data, which is usually impractical and even impossible. In contrast, our proposed algorithm OCDiGCN is the first _unsupervised_ graph-level anomaly detection method for attributed, directed, and edge-weighted graphs.
Although anomaly explanation has received much attention in traditional anomaly detection (Kumar et al., 2017), only a few studies (Wang et al., 2017) considered log anomaly explanation. Specifically, PLELog (Wang et al., 2017) offers explanations by quantifying the significance of individual log events within an anomalous log sequence, thereby facilitating improved identification of relevant log events by operators. Similarly, our method provides straightforward explanations for anomalous log groups by identifying and visualising a small subset of important nodes.
## 3. Problem Statement
Before we state the log anomaly detection problem, we first introduce the necessary notations and definitions regarding event logs and graphs.
**Event logs**. _Logs_ are used to record system status and important events, and are usually collected and stored centrally as log files. A _log file_ typically consists of many _log messages_. Each _log message_ is composed of three components: a timestamp, an event type (_log event_ or _log template_), and additional information (_log parameter_). _Log parsers_ are used to extract log events from log messages.
Further, log messages can be grouped into _log groups_ (a.k.a. _log sequences_) using certain criteria. Specifically, if a _log identifier_ is available for each log message, one can group log messages based on such identifiers. Otherwise, one can use a _fixed_ or _sliding window_ to group log messages. The _window size_ can be determined according to timestamp or the number of observations. Besides, counting the occurrences of each log event within a log group results in an _event count vector_. Consequently, for a log file consisting of many log groups, one can obtain an _event count matrix_. The process of generating an _event count matrix_ (or other feature matrix) is known as _feature extraction_. Extracted features are often used as input to an anomaly detection algorithm to identify _log anomalies_, i.e., log messages or log groups that deviate from what is considered 'normal'.
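As a minimal illustration of the feature-extraction step just described, the following sketch (hypothetical helper name; event ids are assumed hashable) builds an event count matrix from a list of log groups:

```python
from collections import Counter

def event_count_matrix(groups, events):
    # groups: list of log groups, each an ordered list of log-event ids
    # events: the unique log events L, in a fixed column order
    counts = [Counter(group) for group in groups]
    return [[c[e] for e in events] for c in counts]  # one row per log group
```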
**Graphs**. We consider an attributed, directed, and edge-weighted graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X},\mathbf{Y})\), where \(\mathcal{V}=\{v_{1},...,v_{|\mathcal{V}|}\}\) denotes the set of _nodes_ and \(\mathcal{E}=\{e_{1},...,e_{|\mathcal{E}|}\}\subseteq\mathcal{V}\times\mathcal{V}\) represents the set of edges between nodes. If \((v_{i},v_{j})\in\mathcal{E}\), then there is an edge from node \(v_{i}\) to node \(v_{j}\). Moreover, \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d}\) is the node attribute matrix, with the \(i\)-th row representing the attributes of node \(v_{i}\), and \(d\) is the number of attributes. Besides, \(\mathbf{Y}\in\mathbb{N}^{|\mathcal{V}|\times|\mathcal{V}|}\) is the edge-weight matrix, where \(\mathbf{Y}_{ij}\) represents the weight of the edge from node \(v_{i}\) to \(v_{j}\).
Equivalently, \(\mathcal{G}\) can be described as \((\mathbf{A},\mathbf{X},\mathbf{Y})\), with adjacency matrix \(\mathbf{A}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\), where \(\mathbf{A}_{ij}=\mathbb{I}[(v_{i},v_{j})\in\mathcal{E}]\) indicates whether there is an edge from node \(v_{i}\) to node \(v_{j}\), for \(i,j\in\{1,...,|\mathcal{V}|\}\).
### Graph-based Log Anomaly Detection
Given a set of log files, we let \(\mathcal{L}=\{L_{1},...,L_{|\mathcal{L}|}\}\) denote the set of unique log events. We divide the log messages into \(M\) log groups \(\mathbf{Q}=\{\mathbf{q}_{1},...,\mathbf{q}_{m},...,\mathbf{q}_{M}\}\), where \(\mathbf{q}_{m}=\{\mathbf{q}_{m1},...,\mathbf{q}_{mn},...,\mathbf{q}_{mN}\}\) is a log group and \(\mathbf{q}_{mn}\) a log message.
For each log group \(\mathbf{q}_{m}\), we construct an attributed, directed, and edge-weighted graph \(\mathcal{G}_{m}=(\mathcal{V}_{m},\mathcal{E}_{m},\mathbf{X}_{m},\mathbf{Y}_{m})\) to represent the log messages and their relationships. Specifically, each node \(v_{i}\in\mathcal{V}_{m}\) corresponds to exactly one log event \(L\in\mathcal{L}\) (and vice versa). Further, an edge \(e_{ij}\in\mathcal{E}_{m}\) indicates that log event \(i\) is at least once immediately followed by log event \(j\) in \(\mathbf{q}_{m}\). Attributes \(\mathbf{x}_{i}\in\mathbf{X}_{m}\) represent the semantic embedding of log event \(i\), and \(y_{ij}\in\mathbf{Y}_{m}\) is the weight of edge \(e_{ij}\), representing the number of times event \(i\) was immediately followed by event \(j\). In this manner, we construct a set of log graphs \(\{\mathcal{G}_{1},...,\mathcal{G}_{m},...,\mathcal{G}_{M}\}\).
We can use these definitions to define graph-level anomaly detection:
**Problem 1 (Graph-based Log Anomaly Detection)**. _Given a set of attributed, directed, and weighted graphs that represent logs, find those graphs that are notably different from the majority of graphs._
What we mean by 'notably different' will have to be made more specific when we define our method, but we can already discuss what types of anomalies can potentially be detected. Most methods aim to detect two types of anomalies:
* A log group (namely a graph) is considered a _quantitative anomaly_ if the occurrence frequencies of some events in the group are higher or lower than expected from what is commonly observed. For example, if a file is opened (event \(A\)) twice, it should normally also be closed (event \(B\)) twice. In other words, the number of event occurrences satisfies \(\#A=\#B\) in a normal pattern, and an anomaly is detected if \(\#A\neq\#B\).
* A log group (namely a graph) is considered to contain _sequential anomalies_ if the order of certain events violates the normal order pattern. For instance, a file can be closed only after it has been opened in a normal workflow. In other words, the order of event occurrences \(A\to B\) is considered normal while \(B\to A\) is considered anomalous. A minimal sketch of rule-based checks for both anomaly types is given below.
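The sketch below gives minimal rule-based checks for both anomaly types on a single toy log group; the event names, the paired events, and the forbidden order are illustrative assumptions rather than rules mined from a real system:

```python
def quantitative_anomaly(events, paired=("open", "close")):
    """Flag a group in which two paired events occur a different number of times."""
    a, b = paired
    return events.count(a) != events.count(b)

def sequential_anomaly(events, forbidden=("close", "open")):
    """Flag a group in which a forbidden bigram (an order violation) occurs."""
    return any(pair == forbidden for pair in zip(events, events[1:]))

group = ["open", "write", "close", "close"]
print(quantitative_anomaly(group))  # True: one 'open' but two 'close'
print(sequential_anomaly(group))    # False: 'close' is never followed by 'open'
```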
An advantage of graph-based anomaly detection is that it can detect these two types of anomalies, but also anomalies reflected in the structure of the graphs. Moreover, to the best of our knowledge, no existing _unsupervised_ log anomaly detection approach represents event logs as attributed, directed, and weighted graphs, which allow for higher expressiveness than undirected graphs (and thus limit the information loss resulting from the representation of the log files as graphs).
## 4. Preliminaries: Digraph Inception Convolutional Nets
To learn node representations for attributed, directed, and edge-weighted graphs, (Wang et al., 2017) proposed Digraph Inception Convolutional Networks (DiGCN).
Specifically, given a graph \(\mathcal{G}\) described by an adjacency matrix \(\mathbf{A}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\), a node attribute matrix \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d}\), and an edge-weight matrix \(\mathbf{Y}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\), DiGCN defines the \(k\)-th order digraph convolution as
\[\mathbf{Z}^{(k)}=\begin{cases}\mathbf{X}\Theta^{(0)}&k=0\\ \Psi\mathbf{X}\Theta^{(1)}&k=1\\ \Phi\mathbf{X}\Theta^{(k)}&k\geq 2,\end{cases} \tag{1}\]
where \(\Psi=\frac{1}{2}\left(\Pi^{(1)\frac{1}{2}}\mathbf{P}^{(1)}\Pi^{(1)\frac{-1}{2}}+\Pi^{(1)\frac{-1}{2}}\mathbf{P}^{(1)T}\Pi^{(1)\frac{1}{2}}\right)\) and \(\Phi=\mathbf{W}^{(k)\frac{-1}{2}}\mathbf{P}^{(k)}\mathbf{W}^{(k)\frac{-1}{2}}\). Particularly, \(\mathbf{Z}^{(k)}\in\mathbb{R}^{|\mathcal{V}|\times f}\) denotes the convolved output with output dimension \(f\), and \(\Theta^{(0)},\Theta^{(1)},\ldots,\Theta^{(k)}\) represent the trainable parameter matrices.
Moreover, \(\mathbf{P}^{(k)}\) is the \(k\)-th order proximity matrix defined as
\[\mathbf{P}^{(k)}=\begin{cases}\mathbf{I}&k=0\\ \tilde{\mathbf{D}}^{-1}\tilde{\mathbf{A}}&k=1\\ Ins\left((\mathbf{P}^{(1)})^{(k-1)}(\mathbf{P}^{(1)T})^{(k-1)}\right)&k\geq 2,\end{cases} \tag{2}\]
where \(\mathbf{I}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\) is an identity matrix, \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\), and \(\tilde{\mathbf{D}}\) denotes the diagonal degree matrix with \(\tilde{\mathbf{D}}_{ii}=\sum_{j}\tilde{\mathbf{A}}_{ij}\). Besides, \(Ins\left((\mathbf{P}^{(1)})^{(k-1)}(\mathbf{P}^{(1)T})^{(k-1)}\right)\) is defined as
\[\frac{1}{2}Intersect\left((\mathbf{P}^{(1)})^{(k-1)}(\mathbf{P}^{(1)T})^{(k-1) },(\mathbf{P}^{(1)T})^{(k-1)}(\mathbf{P}^{(1)})^{(k-1)}\right) \tag{3}\]
where \(Intersect(\cdot)\) denotes the element-wise intersection of two matrices (see [33] for computation details). In addition, \(\mathbf{W}^{(k)}\) is the diagonalized weight matrix of \(\mathbf{P}^{(k)}\), and \(\Pi^{(1)}\) is the approximate diagonalized eigenvector of \(\mathbf{P}^{(1)}\). Particularly, the approximate diagonalized eigenvector is calculated based on personalised PageRank [2], with a parameter \(\alpha\) that controls the degree of conversion from a digraph to an undirected graph. We omit the details to conserve space and refer to [33] for more details.
After obtaining the multi-scale features \(\{\mathbf{Z}^{(0)},\mathbf{Z}^{(1)},...,\mathbf{Z}^{(k)}\}\), DiGCN defines an Inception block as
\[\mathbf{Z}=\sigma\left(\Gamma\left(\mathbf{Z}^{(0)},\mathbf{Z}^{(1)},..., \mathbf{Z}^{(k)}\right)\right), \tag{4}\]
where \(\sigma\) represents an activation function, and \(\Gamma(\cdot)\) denotes a fusion operation, which can be summation, normalisation, or concatenation. In practice, we often adopt a fusion operation that keeps the output dimension unchanged, namely \(\mathbf{Z}\in\mathbb{R}^{|\mathcal{V}|\times f}\). As a result, the \(i\)-th row of \(\mathbf{Z}\) (namely \(\mathbf{Z}_{i}\)) denotes the learned vector representation for node \(v_{i}\) in a certain layer.
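As a rough illustration, the following sketch computes the first-order proximity matrix of Equation (2) and a sum-based Inception fusion as in Equation (4); the PageRank-based symmetrisation via \(\Pi^{(1)}\) and all higher-order terms are omitted, so this is a simplified sketch rather than the full DiGCN layer:

```python
import torch

def first_order_proximity(A):
    """P(1) = D^{-1} A~ with A~ = A + I, cf. Equation (2)."""
    A_tilde = A + torch.eye(A.shape[0])
    return A_tilde / A_tilde.sum(dim=1, keepdim=True)

def inception_block(Z_list):
    """Fuse multi-scale features with a dimension-preserving fusion (here: sum),
    followed by an activation, cf. Equation (4)."""
    return torch.relu(torch.stack(Z_list).sum(dim=0))

# Toy usage: Z(0) = X @ theta0 and Z(1) ~ P(1) @ X @ theta1 (the symmetrised
# propagation via the approximate eigenvector Pi(1) is skipped for brevity).
A = torch.tensor([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])
X = torch.randn(3, 8)
theta0, theta1 = torch.randn(8, 4), torch.randn(8, 4)
Z = inception_block([X @ theta0, first_order_proximity(A) @ X @ theta1])
print(Z.shape)  # torch.Size([3, 4])
```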
## 5. Graph-Based Anomaly Detection for Event Logs
We propose _Logs2Graphs_, a graph-based log anomaly detection method tailored to event logs. The overall pipeline consists of the usual main steps, i.e., log parsing, log grouping, graph construction, graph representation learning, and anomaly detection, and is illustrated in Figure 1. Note that we couple the graph representation learning and anomaly detection steps to accomplish end-to-end learning once the graphs have been constructed.
First, after collecting logs from a system, the _log parsing_ step extracts log events and log parameters from raw log messages. Since log parsing is not the primary focus of this article, we use Drain [11] for this task. Drain is a log parsing technique based on a fixed-depth tree, and has been shown to generally outperform its competitors [47]. We make the following assumptions on the log files:
* Log files are written in English;
* Each log message contains at least the following information: date, time, operation detail, and log identifier;
* The logs contain enough events to make the mined relationships (quantitative, sequential, structural) statistically meaningful, i.e., it must be possible to learn from the logs what the 'normal' behaviour of the system is.
Second, the _log grouping_ step uses the log identifiers to divide the parsed log messages into log groups. Third, for each resulting group of log messages, the _graph construction_ step builds an attributed, directed, and edge-weighted graph, as described in more detail in Subsection 5.1. Fourth and last, in an integrated step for _graph representation learning and anomaly detection_, we learn a One-Class Digraph Inception Convolutional Network (OCDiGCN) based on the obtained set of log graphs. The resulting model can be used for graph-level anomaly detection. This model couples the graph representation learning objective and anomaly detection objective, and is thus trained in an end-to-end manner. The model, its training, and its use for graph-level anomaly detection are explained in detail in Subsection 5.2.
### Graph Construction
We next explain how to construct an attributed, directed, and edge-weighted graph given a group of parsed log messages, and illustrate this in Figure 2. The motivation behind this graph construction is to retain all relevant information present in the log data.
First, we construct nodes to represent the different log events. That is, the number of nodes depends on the number of unique log events that occur in the log group. Second, starting from the first line of log messages in chronological order, we add a directed edge from log event \(L_{i}\) to \(L_{j}\) and set its edge-weight to \(1\) if the next event after \(L_{i}\) is \(L_{j}\). If the corresponding edge already exists, we increase its edge-weight by \(1\). In this manner, we obtain a labelled, directed, and edge-weighted graph.
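A minimal sketch of this construction (the event names and the example sequence are illustrative):

```python
from collections import defaultdict

def build_log_graph(event_sequence):
    """Build a directed, edge-weighted graph from a chronological sequence of
    log events: nodes are the unique events, and the weight of edge (i, j)
    counts how often event i is immediately followed by event j."""
    nodes = sorted(set(event_sequence))
    weights = defaultdict(int)
    for src, dst in zip(event_sequence, event_sequence[1:]):
        weights[(src, dst)] += 1
    return nodes, dict(weights)

nodes, edges = build_log_graph(["open", "write", "write", "close"])
print(nodes)  # ['close', 'open', 'write']
print(edges)  # {('open', 'write'): 1, ('write', 'write'): 1, ('write', 'close'): 1}
```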
However, using only the labels (e.g., _open_ or _write_) of log events for graph construction may lead to missing important information. That is, we can improve on this by explicitly taking the semantic information of log events into account, by which we mean that we should look at the text of the log event in its entirety. Specifically, we generate a vector representation for each log event as follows:
1. _Preprocessing:_ for each log event, we first remove non-character words and stop words, and split compound words into separate words;
2. _Word embedding:_ we use Glove [25], a pre-trained word embedding model with \(200\) embedding dimensions, to generate a vector representation for each English word in a log event;
3. _Sentence embedding:_ we generate a vector representation for each log event. Considering that the words in a sentence are usually not of equal importance, we use Term Frequency-Inverse Document frequency (TF-IDF) [27] to measure the importance of words. As a result, the weighted sum of word embedding vectors composes the vector representation of a log event.
By augmenting the nodes with the vector representations of the log events as attributes, we obtain an attributed, directed, and edge-weighted graph.
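The following sketch illustrates the three steps; the tiny embedding dictionary stands in for the pre-trained GloVe vectors, which would normally be loaded from a GloVe file, and scikit-learn's TfidfVectorizer is one possible way to compute the TF-IDF weights:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical stand-in for pre-trained GloVe vectors (200-dimensional in the
# paper; 4-dimensional toy vectors here).
glove = {"failed": np.array([0.1, 0.4, 0.0, 0.2]),
         "write":  np.array([0.7, 0.1, 0.3, 0.0]),
         "block":  np.array([0.2, 0.2, 0.5, 0.1])}

templates = ["write block", "failed write block"]  # toy log event templates
tfidf = TfidfVectorizer().fit(templates)
vocab = tfidf.vocabulary_

def template_embedding(template):
    """TF-IDF-weighted sum of word vectors as the log event embedding."""
    weights = tfidf.transform([template]).toarray()[0]
    words = [w for w in template.split() if w in glove and w in vocab]
    return sum(weights[vocab[w]] * glove[w] for w in words)

print(template_embedding("failed write block"))
```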
### OCDiGCN: One-Class Digraph Inception Convolutional Nets
We next describe One-Class Digraph Inception Convolutional Networks, abbreviated as OCDiGCN, a novel method for end-to-end graph-level anomaly detection. We chose to build on Digraph Inception Convolutional Networks (DiGCN) (Shen et al., 2017) for their capability to handle directed graphs, which we argued previously is an advantage in graph-based log anomaly detection.
Considering that DiGCN was designed for node representation learning, we repurpose it for graph representation learning as follows:
\[\mathbf{z}=\text{Readout}(\mathbf{Z}_{i}\mid i\in\{1,2,...,|\mathcal{V}|\}). \tag{5}\]
That is, at the final iteration layer, we utilise a so-called Readout(\(\cdot\)) function to aggregate node vector representations to obtain a graph vector representation. Importantly, Readout(\(\cdot\)) can be a simple permutation-invariant function such as maximum, sum or mean, or a more advanced graph-level pooling function (Zhu et al., 2017).
Next, note that the DiGCN work did not explicitly enable learning edge features (i.e., \(\mathbf{Y}\)). However, as DiGCN follows the Message Passing Neural Network (MPNN) framework (Dong et al., 2017), incorporating \(\mathbf{Y}\) into Equation (1) and conducting the computations in Equations (2-4) analogously enables learning edge features.
Now, given a set of graphs \(\{\mathcal{G}_{1},...,\mathcal{G}_{m},...,\mathcal{G}_{M}\}\), we can use Equation (5) to obtain an explicit vector representation for each graph, respectively. We denote the vector representation of \(\mathcal{G}_{m}\) learned by the DiGCN model as DiGCN(\(\mathcal{G}_{m}\);\(\mathcal{H}\)).
In graph anomaly detection, anomalies are typically identified based on a reconstruction or distance loss (Krizhevsky et al., 2014). In particular, the One-Class Deep SVDD objective (Wang et al., 2017) is commonly used for two reasons: it can be easily combined with other neural networks, and more importantly, it generally achieves a state-of-the-art performance (Wang et al., 2017). Therefore, to detect anomalies, we train a one-class classifier by optimising the following One-Class Deep SVDD objective:
\[\min_{\mathcal{H}}\frac{1}{M}\sum_{m=1}^{M}\lVert\text{DiGCN}(\mathcal{G}_{m};\mathcal{H})-\mathbf{o}\rVert_{2}^{2}+\frac{\lambda}{2}\sum_{l=1}^{L}\lVert\mathbf{H}^{(l)}\rVert_{\text{F}}^{2}, \tag{6}\]
where \(\mathbf{H}^{(l)}\) represents the trainable parameters of DiGCN at the \(l\)-th layer, namely \((\Theta^{(0)(l)},\Theta^{(1)(l)},...,\Theta^{(k)(l)})^{T}\), and \(\mathcal{H}\) denotes \(\{\mathbf{H}^{(1)},...,\mathbf{H}^{(L)}\}\). Moreover, \(\lambda>0\) represents the weight-decay hyperparameter, \(\lVert\cdot\rVert_{2}\) is the Euclidean norm, and \(\lVert\cdot\rVert_{F}\) denotes the Frobenius norm. Further, \(\mathbf{o}\) is the center of the hypersphere in the learned representation space. Ruff et al. (Ruff et al., 2017) found empirically that setting \(\mathbf{o}\) to the average of the
Figure 1. The Logs2Graphs pipeline. We use attributed, directed, and weighted graphs for representing the log files with high expressiveness, and integrate representation learning and anomaly detection for accurate anomaly detection. We use off-the-shelf methods for log parsing, log grouping, and graph construction.
Figure 2. The construction of an attributed, directed, and edge-weighted graph from a group of log messages.
network representations (i.e., graph representations in our case) obtained by performing an initial forward pass is a good strategy.
Ruff et al. (Ruff et al., 2017) also pointed out, however, that One-Class Deep SVDD classification may suffer from a hypersphere collapse, which will yield trivial solutions, namely mapping all graphs to a fixed center in the representation space. To avoid a hypersphere collapse, the hypersphere center \(\mathbf{o}\) is set to the average of the network representations, the bias terms in the neural networks are removed, and unbounded activation functions such as ReLU are preferred.
After training the model on a set of non-anomalous graphs (or with a very low proportion of anomalies), given a test graph \(\mathcal{G}_{m}\), we define its distance to the center in the representation space as its anomaly score, namely
\[score(\mathcal{G}_{m})=\|\mathrm{DiGCN}(\mathcal{G}_{m};\mathcal{H})-\mathbf{o}\|_{2}. \tag{7}\]
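As an illustration of the training and scoring procedure, the sketch below optimises the One-Class Deep SVDD objective (6) with a bias-free placeholder encoder standing in for DiGCN with a mean readout; the encoder, data, and dimensions are toy assumptions, the center is the average of an initial forward pass, and the Frobenius-norm term is realised via weight decay:

```python
import torch

class ToyGraphEncoder(torch.nn.Module):
    """Bias-free placeholder for DiGCN + mean readout: it maps a node-feature
    matrix to a single graph-level vector. Bias terms are removed to avoid
    a hypersphere collapse."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = torch.nn.Linear(d_in, d_out, bias=False)

    def forward(self, X):
        return self.lin(X).mean(dim=0)  # mean readout over the nodes

graphs = [torch.randn(5, 16) for _ in range(32)]  # toy node-feature matrices
model = ToyGraphEncoder(16, 8)

with torch.no_grad():  # center o: average of an initial forward pass
    center = torch.stack([model(X) for X in graphs]).mean(dim=0)

# weight_decay plays the role of the Frobenius-norm regulariser in Equation (6)
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
for epoch in range(50):
    loss = torch.stack([(model(X) - center).pow(2).sum() for X in graphs]).mean()
    opt.zero_grad(); loss.backward(); opt.step()

def anomaly_score(X):
    """Distance to the center in representation space, cf. Equation (7)."""
    return (model(X) - center).norm().item()

print(anomaly_score(torch.randn(5, 16)))
```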
**Training and hyperparameters:** In summary, OCDiGCN is composed of an \(L\)-layer DiGCN architecture to learn node representations, plus a Readout(\(\cdot\)) function to obtain the graph representation. It is trained in an end-to-end manner by minimising the SVDD objective, using stochastic optimisation techniques such as Adam (Kingma and Ba, 2014). Overall, OCDiGCN takes a collection of non-anomalous graphs and a set of hyperparameters, which are outlined in Table 2, as inputs. Importantly, the pseudo-code for Logs2Graphs is given in Algorithm 1.
```
0: Training dataset \(D_{tr}\), testing dataset \(D_{ts}\), model \(\theta\)
0: Predicted labels and explanations for \(D_{ts}\)
1: Parse \(D_{tr}\) and \(D_{ts}\) using Drain [11] \(\rightarrow\) Obtain parsed datasets \(\hat{D}_{tr}\) and \(\hat{D}_{ts}\)
2: Group \(\hat{D}_{tr}\) and \(\hat{D}_{ts}\) based on log identifiers \(\rightarrow\) Obtain grouped datasets \(\bar{D}_{tr}\) and \(\bar{D}_{ts}\)
3: Construct graphs using \(\bar{D}_{tr}\) and \(\bar{D}_{ts}\)\(\rightarrow\) Obtain graph sets \(\mathbf{Q}_{tr}\) and \(\mathbf{Q}_{ts}\)
4: Train the OCDiGCN model using Equation (6) with \(\mathbf{Q}_{tr}\)\(\rightarrow\) Obtain trained model \(\hat{\theta}\)
5: Use \(\hat{\theta}\) to predict anomalies in \(\mathbf{Q}_{ts}\)\(\rightarrow\) Obtain a set of anomalies \(\{Q_{1},...,Q_{n}\}\)
6: Generate explanations for each \(Q_{i}\in\{Q_{1},...,Q_{n}\}\)
```
**Algorithm 1** Pseudo-code of Logs2Graphs
### Anomaly Explanation
Our anomaly explanation method can be regarded as a decomposition method (Kingmaa and Ba, 2014), that is, we build a score decomposition rule to distribute the prediction anomaly score to the input space. Concretely, a graph \(\mathcal{G}_{m}\) is identified as anomalous if and only if its graph-level representation has a large distance to the hyper-sphere center (Equation (7)). Further, the graph-level representation is obtained via a Readout(\(\cdot\)) function applied on the node-level representations (Equation (5)). Therefore, if the Readout(\(\cdot\)) function is attributable (such as sum or mean), we can easily obtain a small subset of important nodes (in the penultimate layer) whose node embeddings contribute the most to the distance. Specifically, the importance score of node \(v_{j}\) (in the penultimate layer) in a graph \(\mathcal{G}_{m}\) is defined as:
\[\frac{|score(\mathcal{G}_{m})-score(\mathcal{G}_{m}/\{\mathbf{Z}_{j}\})|}{score(\mathcal{G}_{m})} \tag{8}\]
where \(score(\mathcal{G}_{m})\) is defined in Equation (7) and \(score(\mathcal{G}_{m}/\{\mathbf{Z}_{j}\})\) is the anomaly score obtained by removing the embedding vector of \(v_{j}\) (namely \(\mathbf{Z}_{j}\)) when applying the Readout function to obtain the graph-level representation.
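Under a mean readout, this importance score can be computed by a simple leave-one-out pass over the node embeddings; the following sketch assumes the penultimate-layer embeddings and the center are already available (toy tensors here):

```python
import torch

def node_importances(Z, center):
    """Relative change of the anomaly score when node j's embedding is left
    out of the mean readout, cf. Equation (8). Z: node embeddings (n, d)."""
    full = (Z.mean(dim=0) - center).norm()
    scores = []
    for j in range(Z.shape[0]):
        keep = [i for i in range(Z.shape[0]) if i != j]
        loo = (Z[keep].mean(dim=0) - center).norm()
        scores.append(((full - loo).abs() / full).item())
    return scores  # a larger value means the node contributes more to the anomaly

Z = torch.randn(6, 8)    # toy penultimate-layer node embeddings
center = torch.zeros(8)  # toy hypersphere center
print(node_importances(Z, center))
```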
Next, for each important node in the penultimate layer, we extend the LRP (Layerwise Relevance Propagation) algorithm (Bogorst and Welling, 2014) to obtain a small set of important nodes in the input layer (this is not the contribution of our paper and we simply follow the practice in (Bogorst and Welling, 2014; Lees and Vanhoucke, 2015)). If some of these nodes are connected by edges, the resulting subgraphs can provide more meaningful explanations. As the LRP method generates explanations utilizing the hidden features and model weights directly, its explanation outcomes are deemed reliable and trustworthy (Kingmaa and Ba, 2014).
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Name & \#Events & \#Graphs & \#Anomalies & \#Nodes & \#Edges \\ \hline HDFS & 48 & 575,061 & 16,838 & 7 & 20 \\ Hadoop & 683 & 978 & 811 & 34 & 120 \\ BGL & 1848 & 69,251 & 31,374 & 10 & 30 \\ Spirit & 834 & 10,155 & 4,432 & 6 & 24 \\ Thunderbird & 1013 & 52,160 & 6,814 & 16 & 52 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Summary of datasets. #Events refers to the number of log event templates obtained using the log parser Drain [11]. #Graphs denotes the number of generated graphs. #Anomalies represents the number of anomalous graphs. #Nodes denotes the average number of nodes in the generated graphs. #Edges indicates the average number of edges in the generated graphs.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Symbol** & **Meaning** & **Range** \\ \hline _Bs_ & Batch size & \{16, 32, 64, **128**, 256, 512, 1024, 1536, 2048, 2560\} \\ \hline _Op_ & optimisation method & Adam, **SGD** \\ \hline \(L\) & number of layers & \{\(1,2,3,4,5\)\} \\ \hline \(\lambda\) & weight decay parameter & \{**0.0001**, 0.001, 0.01, 0.1\} \\ \hline \(\eta\) & learning rate & \{0.0001, 0.001, **0.01**\} \\ \hline \(k\) & proximity parameter & \{\(1,2\)\} \\ \hline \(\alpha\) & teleport probability & \{0.05, **0.1**, 0.2\} \\ \hline \(\Gamma\) & fusion operation if \(k\geq 2\) & sum, concatenation \\ \hline _Re_ & readout function & **mean**, sum, max \\ \hline \(d\) & embedding dimension & \{32, 64, **128**, 256, 300\} \\ \hline _Ep_ & Epochs for training & range(100, 1000, 50) \\ \hline \hline \end{tabular}
\end{table}
Table 2. Description of hyperparameters involved in OCDiGCN. Range indicates the values that we have tested, and boldfaced values represent the values suggested to use in experiments. Particularly, for the embedding dimensions: 300 is suggested for BGL and 128 for others. For the batch sizes: 32 is suggested for HDFS and 128 for others. For the training epochs: 100 for BGL and Thunderbird, 200 for HDFS, 300 for Hadoop and 500 for Spirit are suggested.
## 6. Experiments
We perform extensive experiments to answer the following questions:
1. **Detection accuracy:** How effective is _Logs2Graphs_ at identifying log anomalies when compared to state-of-the-art methods?
2. **Directed vs. undirected graphs:** Is the directed log graph representation better than the undirected version for detecting log anomalies?
3. **Node Labels vs. Node Attributes**: How important is it to use semantic embedding of log event template as node attributes?
4. **Robustness analysis:** To what extent is Logs2Graphs robust to contamination in training data?
5. **Ability to detect structural anomalies:** Can Logs2Graphs better capture structural anomalies and identify structurally equivalent normal instances than other contenders?
6. **Explainability Analysis:** How understandable are the anomaly detection results delivered by Logs2Graphs?
7. **Sensitivity analysis:** How do the values of the hyperparameters influence the detection accuracy?
8. **Runtime analysis:** What are the runtimes of the different methods?
### Experiment Setup
#### 6.1.1. Datasets
The five datasets that we use, summarised in Table 1, were chosen for three reasons: 1) they are commonly used for the evaluation of log anomaly detection methods; 2) they contain ground truth labels that can be used to calculate evaluation metrics; and 3) they include log identifiers that can be used for partitioning log messages into groups. For each group of log messages in a dataset, we label the group as anomalous if it contains at least one anomaly. More details are given as follows:
* HDFS (Han et al., 2017) consists of Hadoop Distributed File System logs obtained by running 200 Amazon EC2 nodes. These logs contain _block_id_, which can be used to group log events into different groups. Moreover, these logs are manually labeled by Hadoop experts.
* Hadoop (Han et al., 2017) was collected from a Hadoop cluster consisting of 46 cores over 5 machines. The _ContainerID_ variable is used to divide log messages into different groups.
* BGL, Spirit and Thunderbird contain system logs collected from the BlueGene/L (BGL) supercomputing system, the Spirit supercomputing system, and the Thunderbird supercomputing system located at Sandia National Labs, respectively. For these datasets, each log message was manually inspected by engineers and labelled as normal or anomalous. For BGL, we use all log messages, and group log messages based on the _Node_ variable. For Spirit and Thunderbird, we only use the first 1 million and first 5 million log messages for evaluation, respectively. Furthermore, for these two datasets, the _User_ is used as the log identifier to group log messages. However, considering that an ordinary user may generate hundreds of thousands of logs, we regard every 100 consecutive logs of each user as a group; if fewer than 100 logs remain, we also consider them as a group (a minimal grouping sketch follows this list).
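A minimal sketch of this grouping logic, assuming the parsed messages are available as (identifier, message) pairs (a toy input format):

```python
def group_logs(messages, window=100):
    """Group (identifier, message) pairs by identifier, then chunk each
    identifier's stream into windows of at most `window` consecutive messages."""
    by_id = {}
    for ident, msg in messages:
        by_id.setdefault(ident, []).append(msg)
    groups = []
    for msgs in by_id.values():
        groups += [msgs[i:i + window] for i in range(0, len(msgs), window)]
    return groups

toy = [("user1", f"msg{i}") for i in range(250)]
print([len(g) for g in group_logs(toy)])  # [100, 100, 50]
```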
#### 6.1.2. Baselines
To investigate the performance of _Logs2Graphs_, we compare it with the following seven state-of-the-art log anomaly detection methods: Principal Component Analysis (PCA) (Han et al., 2017), OneClass SVM (OCSVM) (Krishnan et al., 2016), Isolation Forest (iForest) (Han et al., 2017), HBOS (Han et al., 2017), DeepLog (Chen et al., 2017), LogAnomaly (Han et al., 2017) and AutoEncoder (Chen et al., 2017), and one state-of-the-art graph level anomaly detection method: GLAM (Zhou et al., 2017).
We choose these methods as baselines because they are often regarded as representatives of traditional machine learning-based (PCA, OCSVM, IForest, HBOS) and deep learning-based approaches (DeepLog, LogAnomaly and AutoEncoder), respectively. All methods are unsupervised or semi-supervised methods that do not require labeled anomalous samples for training the models.
#### 6.1.3. Evaluation Metrics
The Area Under Receiver Operating Characteristics Curve (AUC ROC) and the Area Under the Precision-Recall Curve (AUC PRC) are widely used to quantify the detection accuracy of anomaly detection. Therefore, we employ both to evaluate and compare the different log anomaly detection methods. AUC PRC is also known as Average Precision (AP). For both AUC ROC and AUC PRC, values closer to 1 indicate better performance.
### Model Implementation and Configuration
Traditional machine learning based approaches--such as PCA, OCSVM, iForest, and HBOS--usually first transform logs into log event count vectors, and then apply traditional anomaly detection techniques to identify anomalies. For these methods, we utilise the open-source implementations provided in PyOD (Zhou et al., 2017). Meanwhile, for the deep learning methods DeepLog, LogAnomaly, and AutoEncoder, we use their open-source implementations in Deep-Loglizer (Chen et al., 2017). For these competing methods, we use their default hyperparameter values.
For all deep learning based methods, the experimental design adopted in this study follows a train/validation/test strategy with a distribution of 70% : 5% : 25% for normal instances. Specifically, the model was trained using 70% of normal instances, while 5% of normal instances and an equal number of abnormal instances were employed for validation (i.e., hyperparameter tuning). The remaining 25% of normal instances and the remaining abnormal instances were used for testing. Specifically, Table 2 summarises the hyperparameters involved in OCDiGCN as well as their recommended values.
We implemented and ran all algorithms in Python 3.8 (using PyTorch (Paszke et al., 2017) and PyTorch Geometric (Chen et al., 2017) libraries when applicable), on a computer with an Apple M1 chip with an 8-core CPU and 16GB unified memory. For reproducibility, all code and datasets will be released on GitHub.
### Comparison to the state of the art
We first compare _Logs2Graphs_ to the state of the art. The results are shown in Table 3, based on which we make the following main observations:
* In terms of AUC ROC, _Logs2Graphs_ achieves the best performance against its competitors on four out of five datasets. Particularly, _Logs2Graphs_ outperforms the closest competitor on BGL by 9.6% and delivers remarkable results (i.e., an AUC ROC larger than 0.99) on Spirit and Thunderbird. Similar observations can be made for Average Precision.
* Deep learning based methods generally outperform the traditional machine learning based methods. One possible reason is that traditional machine learning based methods only leverage log event count vectors as input, which makes them unable to capture and exploit sequential relationships between log events and the semantics of the log templates.
* The performance of (not-graph-based) deep learning methods is often inferior to that of _Logs2Graphs_ on the more complex datasets, i.e., Hadoop, BGL, Spirit, and Thunderbird, which all contain hundreds or even thousands of log templates. This suggests that LSTM-based models may not be well suited for logs with a large number of log templates. One possible reason is that the test dataset contains many previously unseen log templates, namely log templates that are not present in the training dataset.
* In terms of ROC AUC score, all methods except for OCSVM and AutoEncoder achieve impressive results (with \(RC>0.91\)) on HDFS. One possible reason is that HDFS is a relatively simple log dataset that contains only 48 log templates. Concerning AP, PCA and LSTM-based DeepLog achieve impressive results (with \(AP>0.89\)) on HDFS. Meanwhile, _Logs2Graphs_ obtains a competitive performance (with \(AP=0.87\)) on HDFS.
### Directed vs. undirected graphs
To investigate the practical added value of using _directed_ log graphs as opposed to _undirected_ log graphs, we convert the logs to attributed, undirected, and edge-weighted graphs, and apply GLAM (Wang et al., 2019), a graph-level anomaly detection method for undirected graphs. We use the same graph construction method as for _Logs2Graphs_, except that we use undirected edges. Similar to our method, GLAM also couples the graph representation learning and anomaly detection objectives by optimising a single SVDD objective. The key difference with OCDiGCN is that GLAM leverages GIN (Wang et al., 2019), which can only tackle undirected graphs, while OCDiGCN utilises DiGCN (Wang et al., 2019) that is especially designed for directed graphs.
The results in Table 3 indicate that GLAM's detection performance is comparable to that of most competitors. However, it consistently underperforms on all datasets, except for Hadoop, when compared to _Logs2Graphs_. Given that the directed vs undirected representation of the log graphs is the key difference between the methods, a plausible explanation is that directed graphs have the capability to retain the temporal sequencing of log events, whereas undirected graphs lack this ability. Consequently, GLAM may encounter difficulties in detecting sequential anomalies and is outperformed by _Logs2Graphs_.
### Node Labels vs. Node Attributes
To investigate the importance of using semantic embedding of log event template as node attributes, we replace the node semantic attributes with one-hot-encoding of node labels (i.e., using an integer to represent a log event template). The performance comparisons in terms of ROC AUC for Logs2Graphs are depicted in Figure 3, which shows that using semantic embedding is always superior to using node labels. Particularly, it can lead to a substantial performance improvement on Hadoop, Spirit and HDFS datasets. The PR AUC results show a similar behaviour and thus are omitted.
### Robustness to Contamination
To investigate the robustness of Logs2Graphs when the training dataset is contaminated, we report its performance in terms of ROC AUC under a wide range of contamination levels. Figure 4 shows that the performance of Logs2Graphs decreases with an increase of contamination in the training data. The PR AUC results show a similar behaviour and thus are omitted. Hence, it is important to ensure that the training data contains only normal graphs (or with a very low proportion of anomalies).
### Ability to Detect Structural Anomalies and Recognise Unseen Normal Instances
To showcase the effectiveness of different neural networks in detecting structural anomalies, we synthetically generate normal and anomalous directed graphs as shown in Figure 5. As DeepLog, LogAnomaly and AutoEncoder require log sequences as inputs, we convert directed graphs into sequences by sequentially presenting the endpoint pairs of each edge. Moreover, for GLAM we convert directed graphs into undirected graphs by turning each directed edge into an undirected edge.
Figure 4. ROC results of Logs2Graphs w.r.t. a wide range of contamination levels. Results are averaged over 10 runs. Particularly, HDFS contains only 3% anomalies and thus results at 5% and 10% are not available.
Figure 3. The comparative performance analysis of Logs2Graphs, measured by ROC AUC, demonstrating the distinction between utilizing node semantic attributes and node labels.
Moreover, to investigate their capability of recognising unseen but structurally equivalent normal instances, we generate the following normal log sequences based on the synthetic normal graph as training data: \(A\to B\to C\to D\to A\) (1000), \(B\to C\to D\to A\to B\) (1000) and \(C\to D\to A\to B\) (1000), and the following as test dataset: \(D\to A\to B\to C\to D\) (1000).
Specifically, the results in Table 4 indicate that Logs2Graphs, DeepLog and LogAnomaly can effectively detect structural anomalies, while AutoEncoder and GLAM fail in some cases. However, log-sequence based methods, namely DeepLog, LogAnomaly and AutoEncoder, can lead to high false positive rates due to their inability to recognise unseen but structurally equivalent normal instances.
### Anomaly Explanation
Figure 6 provides an example of log anomaly explanation with the HDFS dataset. For each detected anomalous log graph (namely a group of logs), we first quantify the importance of nodes as described in Section 5.3. Next, we visualise the anomalous graph by assigning darker shades of red to more important nodes. In this example, the node "WriteBlock(WithException)" contributes the most to the anomaly score of an anomalous log group and thus is highlighted in red.
### Sensitivity Analysis
We examine the effects of three hyperparameters in OCDiGCN on the detection performance.
**The Number of Convolutional Layers:**\(L\) is a potentially important parameter as it determines how many convolutional layers to use in OCDiGCN. Figure 7 (top row) depicts PR AUC and ROC AUC for the five benchmark datasets when \(L\) is varied from 1 to 5. We found that \(L=1\) yields consistently good performance. As the value of \(L\) is increased, there is only a slight enhancement in the resulting performance or even degradation, while the associated
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Case** & DeepLog & LogAnomaly & AutoEncoder & GLAM & Ours \\ \hline S1 (ROC) & 1.0 & 1.0 & 0.0 & 0.0 & 1.0 \\ S2 (ROC) & 1.0 & 1.0 & 0.50 & 1.0 & 1.0 \\ S3 (ROC) & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ S4 (ROC) & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ \hline N1 (FPR) & 100\% & 100\% & 100\% & 0\% & 0\% \\ \hline \hline \end{tabular}
\end{table}
Table 4. ROC AUC Results (higher is better) of detecting structural anomalies and False Positive Rate (lower is better) of recognising unseen normal instances. S1: Reverse Edge Direction; S2: Change Edge Endpoint; S3: Delete Edge; S4: Add Edge; N1: Unseen normal instances.
Figure 5. Synthetic generation of normal (10000) and structurally anomalous (200 each) graphs.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**HDFS**} & \multicolumn{2}{c}{**Hadoop**} & \multicolumn{2}{c}{**BGL**} & \multicolumn{2}{c}{**Spirit**} & \multicolumn{2}{c}{**Thunderbird**} \\ \cline{2-11} Method & AP & RC & AP & RC & AP & RC & AP & RC & AP & RC \\ \hline PCA & 0.91\(\pm\)0.03 & **1.0**\(\pm\)0.00 & 0.84\(\pm\)0.00 & 0.52\(\pm\)0.00 & 0.73\(\pm\)0.01 & 0.82\(\pm\)0.00 & 0.31\(\pm\)0.00 & 0.19\(\pm\)0.00 & 0.11\(\pm\)0.00 & 0.34\(\pm\)0.01 \\ OCSVM & 0.18\(\pm\)0.01 & 0.88\(\pm\)0.01 & 0.83\(\pm\)0.00 & 0.45\(\pm\)0.00 & 0.47\(\pm\)0.00 & 0.47\(\pm\)0.01 & 0.34\(\pm\)0.00 & 0.29\(\pm\)0.00 & 0.12\(\pm\)0.00 & 0.45\(\pm\)0.01 \\ IForest & 0.73\(\pm\)0.04 & 0.97\(\pm\)0.01 & 0.85\(\pm\)0.01 & 0.55\(\pm\)0.01 & 0.79\(\pm\)0.01 & 0.83\(\pm\)0.01 & 0.32\(\pm\)0.03 & 0.23\(\pm\)0.02 & 0.11\(\pm\)0.01 & 0.24\(\pm\)0.10 \\ HBOS & 0.74\(\pm\)0.04 & 0.99\(\pm\)0.00 & 0.84\(\pm\)0.00 & 0.50\(\pm\)0.00 & 0.84\(\pm\)0.02 & 0.87\(\pm\)0.03 & 0.35\(\pm\)0.00 & 0.22\(\pm\)0.00 & 0.15\(\pm\)0.01 & 0.29\(\pm\)0.05 \\ \hline DeepLog & **0.92\(\pm\)**0.07 & 0.97\(\pm\)0.04 & **0.96\(\pm\)**0.00 & 0.47\(\pm\)0.00 & 0.89\(\pm\)0.00 & 0.72\(\pm\)0.00 & 0.99\(\pm\)0.00 & 0.97\(\pm\)0.00 & 0.91\(\pm\)0.01 & 0.96\(\pm\)0.00 \\ LogAnomaly & 0.89\(\pm\)0.09 & 0.95\(\pm\)0.05 & **0.96\(\pm\)**0.00 & 0.47\(\pm\)0.00 & 0.89\(\pm\)0.00 & 0.72\(\pm\)0.00 & 0.99\(\pm\)0.00 & 0.97\(\pm\)0.00 & 0.90\(\pm\)0.01 & 0.96\(\pm\)0.00 \\ AutoEncoder & 0.71\(\pm\)0.03 & 0.84\(\pm\)0.01 & **0.96\(\pm\)**0.00 & 0.52\(\pm\)0.00 & 0.91\(\pm\)0.01 & 0.79\(\pm\)0.02 & 0.96\(\pm\)0.00 & 0.92\(\pm\)0.01 & 0.44\(\pm\)0.02 & 0.46\(\pm\)0.05 \\ \hline GLAM & 0.78\(\pm\)0.08 & 0.89\(\pm\)0.04 & 0.95\(\pm\)0.00 & **0.61\(\pm\)**0.00 & 0.94\(\pm\)0.02 & 0.90\(\pm\)0.03 & 0.93\(\pm\)0.00 & 0.91\(\pm\)0.00 & 0.75\(\pm\)0.02 & 0.85\(\pm\)0.01 \\ Logs2Graphs & 0.87\(\pm\)0.04 & 0.91\(\pm\)0.02 & 0.95\(\pm\)0.00 & 0.59\(\pm\)0.00 & **0.96\(\pm\)**0.01 & **0.93\(\pm\)**0.01 & **1.0\(\pm\)**0.00 & **1.0\(\pm\)**0.00 & **0.99\(\pm\)**0.00 & **1.0\(\pm\)**0.00 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Anomaly detection accuracy on five benchmark datasets for _Logs2Graphs_ and its eight competitors. AP and RC denote Average Precision and AUC ROC, respectively. HDFS, BGL, and Thunderbird have been downsampled to 10,000 graphs each while maintaining the original anomaly rates. For each method on each dataset, to mitigate potential biases arising from randomness, we conducted ten experimental runs with varying random seeds and report the average values along with standard deviations of AP and RC. Moreover, we highlight the best results with bold and the runner-up with underline.
computational burden increases substantially. We thus recommend and use \(L=1\).
**The Embedding Dimension \(d\):** From Figure 7 (middle row), one can see that \(d=128\) yields good performance on Spirit, Hadoop, HDFS and Thunderbird, while further increasing \(d\) yields negligible performance improvement or even degradation. However, an increase of \(d\) on BGL leads to significantly better performance. One possible reason is that BGL is a complex dataset wherein anomalies and normal instances are not easily separable in lower dimensions.
**The Proximity Parameter \(k\):** As this parameter increases, a node can gain more information from its more distant neighbours. Figure 7 (bottom row) contrasts the detection performance when \(k\) is set to \(1\) and \(2\), respectively. Particularly, we construct one Inception Block when \(k=2\), using concatenation to fuse the results.
We observe that there is no significant improvement in performance when using a value of \(k=2\) in comparison to \(k=1\). It is important to recognize that a node exhibits 0th-order proximity with itself and 1st-order proximity with its immediately connected neighbors. If \(k=2\), a node can directly aggregate information from its \(2\)nd-order neighbours. As described in Table 1, graphs generated from logs usually contain a limited number of nodes, varying from \(6\) to \(34\). Therefore, there is no need to utilise the Inception Block, which was originally designed to handle large graphs in (Zhu et al., 2018).
### Runtime Analysis
Note that traditional machine learning methods, including PCA, OCSVM, IForest and HBOS, usually perform log anomaly detection in a transductive way. In other words, they require the complete dataset beforehand and do not follow a train-and-test strategy. In contrast, neural network based methods, such as DeepLog, LogAnomaly, AutoEncoder, and Logs2Graphs, perform log anomaly detection in an inductive manner, namely following a train-and-test strategy.
Figure 8 shows that most computational time demanded by _Logs2Graphs_ is allocated towards the graph generation phase. In contrast, the training and testing phases require a minimal time budget. The graph generation phase can be amenable to parallelisation though, thereby potentially reducing the overall processing time. As a result, _Logs2Graphs_ shows great promise in performing online log anomaly detection. Meanwhile, other neural networks based models--such as DeepLog, LogAnomaly, and AutoEncoder--demand considerably more time for the training and testing phases.
## 7. Threats to Validity
We have discerned several factors that may pose a threat to the validity of our findings.
**Limited Datasets.** Our experimental protocol entails utilizing five publicly available log datasets, which have been commonly employed in prior research on log-based anomaly detection. However, it is important to acknowledge that these datasets may not fully encapsulate the entirety of log data characteristics. To address this limitation, our future work will involve conducting experiments on additional datasets, particularly those derived from industrial settings, in order to encompass a broader range of real-world scenarios.
**Limited Competitors.** This study focuses solely on the experimental evaluation of eight competing models, which are considered representative and possess publicly accessible source code. However, it is worth noting that certain models such as GLAD-PAW did not disclose their source code and it requires non-trivial efforts
Figure 8. Runtime for all eight methods on all datasets, wherein HDFS, BGL, and Thunderbird have been downsampled to 10,000 graphs. Runtimes are averaged over 10 repetitions. We report the training time per epoch for neural network based methods.
Figure 7. The effects of the number of layers (top row), the embedding dimensions (middle row) and the proximity parameter (bottom row) on AP (left column) and AUC ROC (right column).
to re-implement these models. Moreover, certain models such as CODEtect require several months to conduct the experiments on our limited computing resources. For these reasons, we exclude them from our present evaluation. In subsequent endeavours, we intend to re-implement certain models and obtain more computing resources to test more models.
**Purity of Training Data.** The purity of training data is usually hard to guarantee in practical scenarios. Although Logs2Graphs is shown to be robust to very small contamination in the training data, it is critical to improve the model robustness by using techniques such as adversarial training (Bahmani et al., 2010) in the future.
**Graph Construction.** The graph construction process, especially regarding the establishment of edges and assigning edge weights, adheres to a rule based on connecting consecutive log events. However, this rule may be considered overly simplistic in certain scenarios. Therefore, more advanced techniques will be explored to construct graphs in the future.
## 8. Conclusions
We introduced _Logs2Graphs_, a new approach for unsupervised log anomaly detection. It first converts log files to attributed, directed, and edge-weighted graphs, translating the problem to an instance of graph-level anomaly detection. Next, this problem is solved by OCDiGCN, a novel method based on graph neural networks that performs graph representation learning and graph-level anomaly detection in an end-to-end manner. Important properties of OCDiGCN include that it can deal with directed graphs and perform unsupervised learning.
Extensive results on five benchmark datasets reveal that _Logs2Graphs_ is at least comparable to and often outperforms state-of-the-art log anomaly detection methods such as DeepLog and LogAnomaly. Furthermore, a comparison to a similar method for graph-level anomaly detection on _undirected_ graphs demonstrates that directed log graphs lead to better detection accuracy in practice.
## Acknowledgments
**Zhong Li and Matthijs van Leeuwen:** this publication is part of the project Digital Twin with project number P18-03 of the research programme TTV Perspective, which is (partly) financed by the Dutch Research Council (NWO). **Jiayang Shi:** This research is co-financed by the European Union H2020-MSCA-ITN-2020 under grant agreement no. 956172 (xCTing).
|
2305.05540 | Direct Poisson neural networks: Learning non-symplectic mechanical
systems | In this paper, we present neural networks learning mechanical systems that
are both symplectic (for instance particle mechanics) and non-symplectic (for
instance rotating rigid body). Mechanical systems have Hamiltonian evolution,
which consists of two building blocks: a Poisson bracket and an energy
functional. We feed a set of snapshots of a Hamiltonian system to our neural
network models which then find both the two building blocks. In particular, the
models distinguish between symplectic systems (with non-degenerate Poisson
brackets) and non-symplectic systems (degenerate brackets). In contrast with
earlier works, our approach does not assume any further a priori information
about the dynamics except its Hamiltonianity, and it returns Poisson brackets
that satisfy Jacobi identity. Finally, the models indicate whether a system of
equations is Hamiltonian or not. | Martin Šípka, Michal Pavelka, Oğul Esen, Miroslav Grmela | 2023-05-07T15:24:41Z | http://arxiv.org/abs/2305.05540v1 | # Direct Poisson neural networks: Learning non-symplectic mechanical systems
###### Abstract
In this paper, we present neural networks learning mechanical systems that are both symplectic (for instance particle mechanics) and non-symplectic (for instance rotating rigid body). Mechanical systems have Hamiltonian evolution, which consists of two building blocks: a Poisson bracket and an energy functional. We feed a set of snapshots of a Hamiltonian system to our neural network models which then find both the two building blocks. In particular, the models distinguish between symplectic systems (with non-degenerate Poisson brackets) and non-symplectic systems (degenerate brackets). In contrast with earlier works, our approach does not assume any further a priori information about the dynamics except its Hamiltonianity, and it returns Poisson brackets that satisfy Jacobi identity. Finally, the models indicate whether a system of equations is Hamiltonian or not.
###### Contents
* 1 Introduction
* 2 Hamiltonian Dynamics on Poisson Geometry
* 2.1 General Formulation
* 2.2 \(3D\) Hamiltonian Dynamics
* 2.3 \(4D\) Hamiltonian Dynamics
* 2.4 Semi-direct Extension to a \(6D\) system
* 3 Learning Hamiltonian systems
* 3.1 Rigid body
* 3.2 Particle in 2D
* 3.3 Shivamoggi equations
* 3.4 Particle in 3D
* 3.5 Heavy top
* 4 Learning non-Hamiltonian systems
* 5 Conclusion
## 1 Introduction
The estimation of unknown parameters in physics and engineering is a standard step in many well-established methods and workflows. One usually starts with a model - a set of assumptions and equations that are considered given and then, based on the available data, estimates the exact form of the evolution equations for the system of interest. As an example, we can consider a situation where we need to estimate the mass of a star far away based on its interaction with light [1], or when the moments of inertia of an asteroid are inferred from its rotations [2]. The assumptions can be of varying complexity and the method for parameter estimation should be therefore adequately chosen.
Techniques for machine learning of dynamical systems have sparked significant interest in recent years. With the rise of neural network-related advances, several methods have been developed for capturing the behavior of dynamical systems, each with its advantages and drawbacks. A symbolic approach (for instance [3]) allows us to learn the precise symbolic form of equations from a predefined set of allowed operations. This can often be the most efficient approach and frequently leads to an exact match between the learned and target system, but the class of captured equations is by definition limited by the algebraic operations we consider as candidates.
Alternatively, one can learn directly the equations of motion
\[\dot{\mathbf{x}}=f(\mathbf{x},\theta) \tag{1}\]
by learning \(f\) parameterized by weights \(\theta\). The function can be represented by any function approximator, in many cases by a neural network. Although this approach is very general, it does not incorporate any known physics into our procedure. There is no concept of energy of the system, no quantities are implicitly conserved, and the method thus might produce unphysical predictions. A remedy is the concept of physics-informed machine learning that constrains the neural network models so that they obey some required laws of physics [4]. In particular, models of mechanical systems, which can be described by Hamiltonian mechanics, preserve several physical quantities like energy or angular momentum, as well as geometric quantities (for instance the symplectic two-form) that ensure the self-consistency of the systems. A neural-network model learning a Hamiltonian system from its trajectories that is compatible with the underlying geometry without any a priori knowledge about the system has been missing, to the best of our knowledge,
and it is the main purpose of the current manuscript to introduce it. Moreover, we present several models that vary in how strictly they reproduce the underlying geometry, and the degree to which these models can learn a given system can be used to estimate whether the system is Hamiltonian or not.
**Geometrization of a dynamical system.** A dynamical system is described by a differential equation; in particular, a mechanical system obeys Hamiltonian evolution equations. These equations are of geometric origin that is invariant with respect to changes of coordinates and which is preserved during the motion of the system. The geometry of Hamiltonian systems goes back to Sophus Lie and Henri Poincaré [5, 6]. Modern approaches extend to infinite-dimensional systems and provide foundations for many parts of nowadays physics [7, 8, 9, 10]. Formulating a mechanical system geometrically typically means finding a bracket algebra (such as symplectic, Poisson, Jacobi, Leibniz, etc.) and a generating function (such as Hamiltonian, energy, or entropy). The bracket is generally required to satisfy some algebraic conditions (Jacobi identity, Leibniz identity, etc.). However, there is no general algorithmic way to obtain the Hamiltonian formulation (even if it exists) of a given system by means of analytical calculations. Such an analysis is therefore well suited for machine learning.
Apart from Hamiltonian mechanics, one can also include dissipation [11, 12, 13] or extend the learning of Hamiltonian systems to control problems [14]. Such an approach then, with the suitable choice of integrator ensures the conservation of physically important quantities, such as energy, momentum or angular momentum.
A reversible system is a candidate for being a Hamiltonian system. For a reversible system, the beginning point could be to search for a symplectic manifold and a Hamiltonian function. Learning the symplectic character (if it exists) of a physical system (including particles in potential fields, pendulums of various complexities) can be done utilizing neural networks, see, for example, [15, 16]. The symplectic geometry exists only in even-dimensional models and due to the nondegeneracy criteria, it is very rigid. A generalization of symplectic geometry is Poisson geometry where the non-degeneracy requirement is relaxed [17, 18]. In Poisson geometry, there exists a Poisson bracket (defining an algebra on the space of functions) satisfying the Leibniz and the Jacobi identities. This generalization permits (Hamiltonian) analysis in odd dimensions. The degenerate character of Poisson geometry brings some advantages for the investigation of (total, or even super) integrability of the dynamics.
The point of this paper is to better learn Hamiltonian mechanics. We build neural networks that encode the building blocks of Hamiltonian dynamics admitting Poisson geometry. According to the Darboux-Weinstein theorem [19] for a Poisson manifold, there are canonical coordinates which make the Poisson bivector (determining the bracket) constant. This formulation also has geometric implications, since it determines the symplectic foliation of the Poisson manifold (see more details in the main body of this paper). Recently, Poisson neural networks (abbreviated as PNN) were proposed to learn Hamiltonian systems [20] by transforming the system into the Darboux-Weinstein coordinates. But, for many physical systems, the Poisson bracket is far from being in the canonical coordinates, and the dimension of the symplectic part of the Darboux-Weinstein Poisson bivector may be a priori unknown. For \(n\)-dimensional physical models, the Jacobi identity, which ensures consistency of the underlying Poisson bivector, is a system of PDEs. To
determine the Poisson structure, one needs to solve this system analytically, which is usually difficult, or enforce its validity while learning the Poisson bivector.
**The goal of this work.** The novel result of this paper is what we call a Direct Poisson Neural Network (abbreviated as DPNN) which learns the Poisson structure without assuming any particular form of the Poisson structure such as Darboux-Weinstein coordinates. Instead, DPNN learns directly in the coordinate system in which the data are provided. There are several advantages of DPNN: **(i)** We do not need to know a priori the degeneracy level of the Poisson structure (or in other terms the dimensions of the symplectic foliations of the Poisson manifold) **(ii)** it is easier to learn the Hamiltonian (energy), and **(iii)** Jacobi identity is satisfied on the data, not only in a representation accessible only through another neural network (an Invertible Neural Network in [20]). DPNN learns Poisson systems by identifying directly the Poisson bivector and the Hamiltonian as functions of the state variables.
We actually provide three flavors of DPNNs. The least-informed flavor directly learns the Hamiltonian function and the Poisson bivector, assuming its skew-symmetry but not the Jacobi identity. Another flavor adds squares of Jacobi identity to the loss function and thus softly imposes its validity. The most geometry-informed flavor automatically satisfies Jacobi identity by building the Poisson bivector as a general solution to Jacobi identity in three dimensions. While the most geometry-informed version is typically most successful in learning Hamiltonian systems, it is restricted to three-dimensional systems, where the general solution of Jacobi identity is available. The second flavor is typically a bit less precise, and the least-informed flavor is usually the least precise, albeit still being able to learn Hamiltonian systems to a good degree of precision.
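As a conceptual sketch of the flavor that softly imposes Jacobi identity (this is our illustration, not the authors' implementation; network sizes, the penalty weight, and the toy data are assumptions), one can parameterise a skew-symmetric bivector and a Hamiltonian by small networks and penalise the squared Jacobiator:

```python
import torch

class DirectPoissonNet(torch.nn.Module):
    """Sketch of a DPNN: learn H(x) and a skew-symmetric L(x) directly in the
    coordinates of the data (architecture details are illustrative)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.dim = dim
        self.L_net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim * (dim - 1) // 2))
        self.H_net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, 1))

    def bivector(self, x):
        """Skew-symmetry holds by construction (upper triangle minus transpose)."""
        i, j = torch.triu_indices(self.dim, self.dim, offset=1)
        L = x.new_zeros(*x.shape[:-1], self.dim, self.dim)
        L[..., i, j] = self.L_net(x)
        return L - L.transpose(-1, -2)

    def forward(self, x):
        """Hamiltonian vector field: xdot^k = L^{kl} dH/dx^l."""
        x = x.clone().requires_grad_(True)
        dH = torch.autograd.grad(self.H_net(x).sum(), x, create_graph=True)[0]
        return torch.einsum('...kl,...l->...k', self.bivector(x), dH)

def jacobi_penalty(model, x):
    """Squared Jacobiator at a single point x, used as a soft penalty."""
    J = torch.autograd.functional.jacobian(
        lambda y: model.bivector(y), x, create_graph=True)  # J[i,j,k] = dL^{ij}/dx^k
    L = model.bivector(x)
    jac = (torch.einsum('kl,ijk->ijl', L, J)
           + torch.einsum('ki,jlk->ijl', L, J)
           + torch.einsum('kj,lik->ijl', L, J))
    return jac.pow(2).sum()

model = DirectPoissonNet(dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 3)     # snapshots of the state ...
xdot = torch.randn(128, 3)  # ... and their time derivatives (toy data here)
for step in range(10):
    loss = (model(x) - xdot).pow(2).mean() \
        + 1e-2 * torch.stack([jacobi_penalty(model, p) for p in x[:8]]).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```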
Interestingly, when we try to learn a non-Hamiltonian model by these three flavors of DPNNs, the order of precision is reversed and the least-informed flavor becomes most precise. The order of precision of the DPNNs flavors thus indicates whether a system of equations is Hamiltonian or not.
Section 2 recalls Poisson dynamics, in particular symplectic dynamics, rigid body mechanics, Shivamoggi equations, and evolution of heavy top. Section 3 introduces DPNNs and illustrates their use on learning Hamiltonian systems. Finally, Section 4 shows DPPNs applied on a non-Hamiltonian system (dissipative rigid body).
## 2 Hamiltonian Dynamics on Poisson Geometry
### General Formulation
A Poisson bracket on a manifold \(\mathcal{M}\) (physically corresponding to the state space, for instance position and momentum of the body) is a skew-symmetric bilinear algebra on the space \(\mathcal{F}(\mathcal{M})\) of smooth functions on \(\mathcal{M}\) given by
\[\{\bullet,\bullet\}:\mathcal{F}(\mathcal{M})\times\mathcal{F}(\mathcal{M}) \rightarrow\mathcal{F}(\mathcal{M}). \tag{1}\]
Poisson brackets satisfy the Leibniz rule
\[\{F,HG\}=\{F,H\}G+H\{F,G\}, \tag{2}\]
and the Jacobi identity,
\[\{F,\{H,G\}\}+\{H,\{G,F\}\}+\{G,\{F,H\}\}=0, \tag{3}\]
for arbitrary functions \(F\), \(H\) and \(G\), [10, 17, 18]. A manifold equipped with a Poisson bracket is called a Poisson manifold and is denoted by a two-tuple \((\mathcal{M},\{\bullet,\bullet\})\). A function \(C\) is called a Casimir function if it commutes with all other functions that is \(\{F,C\}=0\) for all \(F\). For instance, the magnitude of the angular momentum of a rigid body is a Casimir function.
**Hamiltonian Dynamics.** Hamiltonian dynamics can be seen as evolution on a Poisson manifold. For a Hamiltonian function (physically energy) \(H\) on \(\mathcal{M}\), the Hamiltonian vector field and Hamilton's equation are
\[X_{H}(F):=\{F,H\},\qquad\dot{\mathbf{x}}=X_{H}(\mathbf{x})=\{\mathbf{x},H\}, \tag{4}\]
respectively, where \(\mathbf{x}\in\mathcal{M}\) is a parametrization of manifold \(\mathcal{M}\). The algebraic properties of the Poisson bracket have some physical consequences. Skew-symmetry implies energy conservation,
\[\dot{H}=\{H,H\}=0, \tag{5}\]
while the Leibniz rule ensures that the dynamics does not depend on biasing the energy by a constant. Referring to a Poisson bracket, one may determine the Poisson bivector field according to
\[L(dF,dH):=\{F,H\}. \tag{6}\]
This identification makes it possible to define a Poisson manifold by a tuple \((\mathcal{M},L)\) consisting of a manifold and a Poisson bivector. In this notation, the Jacobi identity can be rewritten as \(\mathcal{L}_{\mathbf{X}_{H}}L=0\), that is the Lie derivative of the Poisson bivector with respect to the Hamiltonian vector field is zero [21]. In other words, the Jacobi identity expresses the self-consistency of the Hamiltonian dynamics in the sense that both the building blocks (Hamiltonian function and the bivector field) are constant along the evolution.
Assuming a local coordinate system \(\mathbf{x}=(x^{i})\) on \(\mathcal{M}\), Poisson bivector determines Poisson matrix \(L=[L^{kl}]\) which enables us to write [22]
\[L=L^{kl}\frac{\partial}{\partial x^{k}}\wedge\frac{\partial}{\partial x^{l}}. \tag{7}\]
In this realization, the Poisson bracket and Hamilton's equations are written as
\[\{F,H\}=L^{kl}\frac{\partial F}{\partial x^{k}}\frac{\partial H}{\partial x^{ l}},\qquad\dot{x}^{k}=L^{kl}\frac{\partial H}{\partial x^{l}}, \tag{8}\]
respectively. Here, we have assumed summation over the repeated indices. Further, the Jacobi identity (3) turns out to be the following system of PDEs
\[L^{kl}\frac{\partial L^{ij}}{\partial x^{k}}+L^{ki}\frac{\partial L^{jl}}{ \partial x^{k}}+L^{kj}\frac{\partial L^{li}}{\partial x^{k}}=0. \tag{9}\]
The left-hand side of this equation is called the Jacobiator, and in the case of Hamiltonian systems it is equal to zero. The Jacobi identity (9) is thus a system of PDEs. In \(3D\), it reduces to a single PDE whose general solution is known [23]. In \(4D\), it consists of four PDEs, and there are some partial results, but for an arbitrary \(n\) there is, to our knowledge, no general solution yet. We shall focus on the \(3D\), \(4D\), and \(6D\) cases in the upcoming subsections.
**Symplectic Manifolds.** If there is no non-constant Casimir function for a Poisson manifold, then it is also a symplectic manifold. Although we can see symplectic manifolds as examples of Poisson manifolds, it is possible to define a symplectic manifold in a direct way without referring to a Poisson manifold. A manifold \(\mathcal{M}\) is called symplectic if it is equipped with a closed non-degenerate two-form (called a symplectic two-form) \(\Omega\). A two-form is called non-degenerate when
\[\Omega(X,Y)=0,\qquad\forall X\in\mathfrak{X}(\mathcal{M}) \tag{10}\]
implies \(Y=0\). A two-form is closed when it lies in the kernel of the de Rham exterior derivative, \(d\Omega=0\). A Hamiltonian vector field \(X_{H}\) on a symplectic manifold \((\mathcal{M},\Omega)\) is defined as
\[\iota_{X_{H}}\Omega=dH, \tag{11}\]
where \(\iota\) is the contraction operator (more precisely the interior derivative) [21]. Referring to a symplectic manifold one can define a Poisson bracket
\[\{F,H\}:=\Omega(X_{F},X_{H}), \tag{12}\]
where the Hamiltonian vector fields are defined through Equation (11). The closedness of the symplectic two-form \(\Omega\) guarantees the Jacobi identity (3). The non-degeneracy of \(\Omega\) places an extra condition on the bracket (12): the only Casimir functions are the constants, in contrast with general Poisson manifolds, which may also have non-constant Casimirs. The Darboux-Weinstein coordinates show more explicitly the relationship between Poisson and symplectic manifolds in a local picture.
**Darboux-Weinstein Coordinates.** We start with \(n=(2m+k)\)-dimensional Poisson manifold \(\mathcal{M}\) equipped with Poisson bivector \(L\). Near every point of the Poisson manifold, the Darboux-Weinstein coordinates \((x^{i})=(q^{a},p_{b},u^{\alpha})\) (here \(a\) runs from \(1\) to \(m\), and \(\alpha\) runs from \(1\) to \(k\)) give a local form of the Poisson bivector
\[L=\frac{\partial}{\partial q^{a}}\wedge\frac{\partial}{\partial p_{a}}+\frac {1}{2}\lambda^{\alpha\beta}\frac{\partial}{\partial u^{\alpha}}\wedge\frac{ \partial}{\partial u^{\beta}} \tag{13}\]
with the coefficient functions \(\lambda^{\alpha\beta}\) vanishing at the origin. If \(k=0\) in the local formula (13), then only the first term on the right-hand side remains and the Poisson manifold turns out to be a \(2m\)-dimensional symplectic manifold.
Newtonian mechanics, for instance, fits this kind of realization. On the other hand, if \(m=0\) in (13), then only the second term remains, which is a fully degenerate Poisson bivector. A large class of Poisson manifolds is of this form, namely the Lie-Poisson structure on the dual of a Lie algebra, including rigid body dynamics, Vlasov dynamics, etc. In general, Poisson bivectors have both a symplectic part and a fully degenerate part, as for instance in the heavy top dynamics of Section 2.4.
When the Poisson bivector is non-degenerate, it generates a symplectic Poisson bracket, and it commutes with the canonical Poisson bivector
\[\mathbf{L}_{can}=\begin{pmatrix}\mathbf{0}&\mathbf{I}\\ -\mathbf{I}&\mathbf{0}\end{pmatrix} \tag{14}\]
in the sense that
\[\mathbf{L}\cdot\mathbf{L}_{can}-\mathbf{L}_{can}\cdot\mathbf{L}=\mathbf{0}. \tag{15}\]
This compatibility condition is employed later in Section 3 to measure the error of DPNNs when learning symplectic Poisson bivectors.
### \(3D\) Hamiltonian Dynamics
In this subsection, we focus on three-dimensional Poisson manifolds, following [24, 25, 26]. One of the important observations in \(3D\) is the isomorphism between the space of vectors and the space of skew-symmetric matrices given by
\[\mathbf{L}=\begin{pmatrix}0&-J_{z}&J_{y}\\ J_{z}&0&-J_{x}\\ -J_{y}&J_{x}&0\end{pmatrix}\leftrightarrow\mathbf{J}=(J_{x},J_{y},J_{z}). \tag{16}\]
This isomorphism lets us write Jacobi identity (9) as a single scalar equation
\[\mathbf{J}\cdot(\nabla\times\mathbf{J})=0, \tag{17}\]
see, for example, [25, 27, 28, 29]. The general solution of Jacobi identity (17) is
\[\mathbf{J}=\frac{1}{\phi}\nabla C \tag{18}\]
for arbitrary functions \(\phi\) and \(C\), where \(C\) is a Casimir function. Hamilton's equation then takes the particular form
\[\mathbf{\dot{x}}=\mathbf{J}\times\nabla H=\frac{1}{\phi}\nabla C\times\nabla H. \tag{19}\]
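This structure is easy to verify symbolically: for any choice of \(C\) and \(\phi\), the vector \(\mathbf{J}=(1/\phi)\nabla C\) satisfies (17). Below is a minimal SymPy sketch, where the particular \(C\) and \(\phi\) are illustrative assumptions:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
C = x**2 + y**2 + z**2          # illustrative Casimir candidate (assumption)
phi = 1 + x**2                  # arbitrary nonzero function (assumption)
J = sp.Matrix([sp.diff(C, v) for v in (x, y, z)]) / phi  # ansatz (18)

curlJ = sp.Matrix([
    sp.diff(J[2], y) - sp.diff(J[1], z),
    sp.diff(J[0], z) - sp.diff(J[2], x),
    sp.diff(J[1], x) - sp.diff(J[0], y),
])
print(sp.simplify(J.dot(curlJ)))  # prints 0: the Jacobi identity (17) holds
```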
Note that by changing the roles of the Hamiltonian function \(H\) and the Casimir \(C\), one arrives at another Hamiltonian structure for the same system. In this case, the Poisson vector is defined as \(\mathbf{J}=-(1/\phi)\nabla H\) and the Hamiltonian function is \(C\). This is an example of a bi-Hamiltonian system, which manifests integrability [23, 30, 31, 32]. A bi-Hamiltonian system admits two different but compatible Hamiltonian formulations. In \(3D\), two Poisson vectors, say \(\mathbf{J}_{1}\) and \(\mathbf{J}_{2}\), are compatible if
\[\mathbf{J}_{1}\cdot(\nabla\times\mathbf{J}_{2})=\mathbf{J}_{2}\cdot(\nabla\times \mathbf{J}_{1}). \tag{20}\]
This compatibility condition will be used later in Section 3 to measure the error of learning Poisson bivectors in 3D by DPNNs.
**Rigid Body Dynamics.** Let us consider an example of a 3D Hamiltonian system, a freely rotating rigid body. The state variable \(\mathbf{M}\in\mathcal{M}\) is the angular momentum in the frame of reference co-rotating with the rigid body. The Poisson structure is
\[\{F,H\}^{(RB)}(\mathbf{M})=-\mathbf{M}\cdot\frac{\partial F}{\partial\mathbf{ M}}\times\frac{\partial H}{\partial\mathbf{M}}, \tag{21}\]
see [33]. The Poisson bracket (21) is degenerate because it preserves any function of the magnitude of \(\mathbf{M}\). The Hamiltonian function is the energy
\[H=\frac{1}{2}\left(\frac{M_{x}^{2}}{I_{x}}+\frac{M_{y}^{2}}{I_{y}}+\frac{M_{z} ^{2}}{I_{z}}\right), \tag{22}\]
where \(I_{x}\), \(I_{y}\) and \(I_{z}\) are moments of inertia of the body. In this case, Hamilton's equation
\[\dot{\mathbf{M}}=\mathbf{M}\times\frac{\partial H}{\partial\mathbf{M}} \tag{23}\]
gives Euler's rigid body equation [34].
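For concreteness, a minimal NumPy sketch of Euler's equation (23), integrated with the implicit midpoint rule used throughout this paper, follows; the moments of inertia, time step, and initial condition are illustrative assumptions, and the implicit step is solved by simple fixed-point iteration:

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])        # assumed principal moments of inertia

def rhs(M):
    return np.cross(M, M / I)        # eq. (23), using dH/dM = M/I for energy (22)

def imr_step(M, dt=0.01, iters=10):
    # implicit midpoint rule, solved by fixed-point iteration
    M_new = M.copy()
    for _ in range(iters):
        M_new = M + dt * rhs(0.5 * (M + M_new))
    return M_new

M = np.array([0.3, 0.5, 0.7])        # assumed initial angular momentum
for _ in range(1000):
    M = imr_step(M)
print(M @ M)  # the Casimir |M|^2 stays constant up to the fixed-point tolerance
```

Since both the Casimir \(\mathbf{M}^{2}\) and the energy (22) are quadratic, the implicit midpoint rule preserves them up to the tolerance of the fixed-point solver.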
### \(4D\) Hamiltonian Dynamics
In four dimensions, we consider the following local coordinates \((u,x,y,z)=(u,\mathbf{x})\). A skew-symmetric matrix \(L\) can be identified with a couple of vectors \(\mathbf{U}=(U^{1},U^{2},U^{3})\) and \(\mathbf{V}=(V^{1},V^{2},V^{3})\) as
\[L=\begin{pmatrix}0&-U^{1}&-U^{2}&-U^{3}\\ U^{1}&0&-V^{3}&V^{2}\\ U^{2}&V^{3}&0&-V^{1}\\ U^{3}&-V^{2}&V^{1}&0\end{pmatrix}. \tag{24}\]
After this identification, Jacobi identity (9) turns out to be a system of PDEs consisting of four equations [35]
\[\partial_{u}(\mathbf{U}\cdot\mathbf{V}) =\mathbf{V}\cdot\left(\partial_{u}\mathbf{U}-\nabla\times \mathbf{V}\right), \tag{25a}\] \[\nabla(\mathbf{U}\cdot\mathbf{V}) =\mathbf{V}(\nabla\cdot\mathbf{U})-\mathbf{U}\times\left(\partial _{u}\mathbf{U}-\nabla\times\mathbf{V}\right). \tag{25b}\]
Note that \(L\) is degenerate (its determinant being zero) if and only if \(\mathbf{U}\cdot\mathbf{V}=0\). So, for degenerate Poisson matrices, the Jacobi identity is satisfied if
\[\nabla\cdot\mathbf{U}=0,\qquad\partial_{u}\mathbf{U}-\nabla\times\mathbf{V}= \mathbf{0}. \tag{26}\]
**Superintegrability.** Assume that a given dynamics admits two time-independent first integrals, say \(H_{1}\) and \(H_{2}\). Then, when vectors \(\mathbf{U}\) and \(\mathbf{V}\) have the form
\[\mathbf{U}=\nabla H_{1}\times\nabla H_{2},\qquad\mathbf{V}=\partial_{u}H_{1} \nabla H_{2}-\partial_{u}H_{2}\nabla H_{1}, \tag{27}\]
they constitute a Poisson bivector, and in particular Jacobi identity (26) is satisfied. Functions \(H_{1}\) and \(H_{2}\) are Casimir functions. For a Hamiltonian function \(H_{3}\), the Hamiltonian dynamics is
\[\dot{u} =-\left(\nabla H_{1}\times\nabla H_{2}\right)\cdot\nabla H_{3}, \tag{28a}\] \[\mathbf{\dot{x}} =(\nabla H_{1}\times\nabla H_{2})\partial_{u}H_{3}+(\nabla H_{2} \times\nabla H_{3})\partial_{u}H_{1}+(\nabla H_{3}\times\nabla H_{1})\partial _{u}H_{2}. \tag{28b}\]
By permuting the roles of the functions \(H_{1}\), \(H_{2}\), and \(H_{3}\) (two of them being Casimirs and one the Hamiltonian), one arrives at two additional Poisson realizations of the dynamics. This is the case of a superintegrable (tri-Hamiltonian) formulation [36].
Two Poisson bivectors of a superintegrable 4D Hamiltonian system must satisfy the compatibility condition
\[\mathbf{U}_{1}\cdot\mathbf{V}_{2}+\mathbf{V}_{1}\cdot\mathbf{U}_{2}=0 \tag{29}\]
where \(\mathbf{U}_{1,2}\) and \(\mathbf{V}_{1,2}\) are the vectors identified from formula (24). We shall use this compatibility condition to measure the error of DPNNs when learning Poisson bivectors in 4D.
**Shivamoggi Equations.** An example of a 4D Poisson (and non-symplectic) system is given by the Shivamoggi equations, which arise in the context of magnetohydrodynamics,
\[\dot{u}=-uy,\qquad\dot{x}=zy,\qquad\dot{y}=zx-u^{2},\qquad\dot{z}=xy, \tag{30}\]
see [37, 38]. The first integrals of this system of equations are
\[H_{1}=x^{2}-z^{2},\qquad H_{2}=z^{2}+u^{2}-y^{2},\qquad H_{3}=u(z+x). \tag{31}\]
Vectors \(\mathbf{U}_{i}\) and \(\mathbf{V}_{i}\) of Poisson matrices \(N^{(i)}\) (\(i=1,2,3\)) for the Hamiltonian functions \(H_{1}\), \(H_{2}\), and \(H_{3}\) are
\[\mathbf{U}_{1} =2u\left(-y,z,y\right),\qquad\mathbf{V}_{1}=2\left(u^{2},y(x+z), u^{2}-z(x+z)\right),\] \[\mathbf{U}_{2} =2\left(x+z\right)\left(0,u,0\right),\qquad\mathbf{V}_{2}=2\left( x+z\right)\left(x,0,-z\right),\] \[\mathbf{U}_{3} =-4\left(yz,zx,xy\right),\qquad\mathbf{V}_{3}=4u\left(-x,0,z \right), \tag{32}\]
respectively. Note that all three of these Poisson matrices are degenerate, since \(\mathbf{U}_{i}\cdot\mathbf{V}_{i}=0\) holds for all \(i=1,2,3\). The equations of motion can be written as
\[X=\vartheta N^{(1)}\bar{\nabla}H_{1}=\vartheta N^{(2)}\bar{\nabla}H_{2}= \vartheta N^{(3)}\bar{\nabla}H_{3},\quad\vartheta=-\frac{1}{4(x+z)}\]
up to multiplication with a conformal factor \(\vartheta\) in all three cases. Note that the 4D gradient is denoted by \(\bar{\nabla}H=(\partial_{u}H,\partial_{x}H,\partial_{y}H,\partial_{z}H)\).
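That \(H_{1}\), \(H_{2}\), and \(H_{3}\) are indeed first integrals of (30) can be checked numerically: \(\nabla H_{i}\cdot\dot{\mathbf{x}}\) should vanish at any state. A short NumPy sketch using central finite differences:

```python
import numpy as np

def rhs(s):
    u, x, y, z = s
    return np.array([-u * y, z * y, z * x - u**2, x * y])            # eq. (30)

def integrals(s):
    u, x, y, z = s
    return np.array([x**2 - z**2, z**2 + u**2 - y**2, u * (z + x)])  # eq. (31)

rng = np.random.default_rng(0)
s = rng.uniform(-0.5, 0.5, size=4)
eps = 1e-6
# rows of `grads` are the finite-difference gradients of H1, H2, H3 at s
grads = np.array([(integrals(s + eps * e) - integrals(s - eps * e)) / (2 * eps)
                  for e in np.eye(4)]).T
print(grads @ rhs(s))  # ~ [0, 0, 0]: all three integrals are conserved
```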
### Semi-direct Extension to a \(6D\) System
Six-dimensional Hamiltonian systems can again be symplectic or non-symplectic (degenerate). The former case is represented by a particle in three dimensions, while the latter is represented, for instance, by the heavy top dynamics. Since the evolution of a particle in 3D is canonical and thus analogous to the 2D dynamics, we shall recall only the heavy top dynamics.
A supported rotating rigid body in a uniform gravitational field is called a heavy top [39]. The mechanical state of the body is described by the position of the center of mass \(\mathbf{r}\) and the angular momentum \(\mathbf{M}\). In this case, the Poisson bracket is
\[\{F,G\}^{(\mathrm{HT})}(\mathbf{r},\mathbf{M})=-\mathbf{M}\cdot\left(\frac{ \partial F}{\partial\mathbf{M}}\times\frac{\partial G}{\partial\mathbf{M}} \right)-\mathbf{r}\cdot\left(\frac{\partial F}{\partial\mathbf{M}}\times \frac{\partial G}{\partial\mathbf{r}}-\frac{\partial G}{\partial\mathbf{M}} \times\frac{\partial F}{\partial\mathbf{r}}\right), \tag{33}\]
Even though the model is even-dimensional, it is not symplectic. Two non-constant Casimir functions are \(\mathbf{r}^{2}\) and \(\mathbf{M}\cdot\mathbf{r}\). In this case, we take the Hamiltonian function to be
\[H=\frac{1}{2}\left(\frac{M_{x}^{2}}{I_{x}}+\frac{M_{y}^{2}}{I_{y}}+\frac{M_{ z}^{2}}{I_{z}}\right)+Mgl\mathbf{r}\cdot\mathbf{\chi}, \tag{34}\]
where \(-g\mathbf{\chi}\) is the vector of gravitational acceleration. Hamilton's equation is then
\[\dot{\mathbf{M}}= \mathbf{M}\times\frac{\partial H}{\partial\mathbf{M}}+\mathbf{r} \times\frac{\partial H}{\partial\mathbf{r}} \tag{35a}\] \[\dot{\mathbf{r}}= -\mathbf{r}\times\frac{\partial H}{\partial\mathbf{M}}. \tag{35b}\]
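A direct transcription of (35) into code might look as follows; the moments of inertia, the factor \(Mgl\), and the direction \(\mathbf{\chi}\) are illustrative assumptions:

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])       # assumed moments of inertia
Mgl = 1.0                           # assumed mass * gravity * length factor
chi = np.array([0.0, 0.0, 1.0])     # assumed unit vector in Hamiltonian (34)

def heavy_top_rhs(M, r):
    H_M = M / I                     # dH/dM for Hamiltonian (34)
    H_r = Mgl * chi                 # dH/dr
    dM = np.cross(M, H_M) + np.cross(r, H_r)   # eq. (35a)
    dr = -np.cross(r, H_M)                     # eq. (35b)
    return dM, dr
```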
In the following sections, we apply DPNNs to the models recalled here and show that DPNNs are capable of extracting the Poisson bivector and the Hamiltonian from simulated trajectories of the models.
## 3 Learning Hamiltonian systems
When we have a collection of snapshots of a trajectory of a Hamiltonian system, how do we identify the underlying mechanics? In other words, how do we learn the Poisson bivector and energy from the snapshots? Machine learning provides a robust method for such a task. It has been previously shown that machine learning can reconstruct GENERIC models [40, 11, 12], but the Poisson bivector there is typically known and symplectic. Poisson Neural Networks [20] provide a method for learning also non-symplectic mechanics, which, however, relies on identifying the dimension of the symplectic subdynamics in the Darboux-Weinstein coordinates and on a transformation to those coordinates. Here, we show a robust method that does not need to know the dimension of the symplectic subsystem and that satisfies the Jacobi identity also in the coordinates in which the data are prescribed. Therefore, we refer to the method as Direct Poisson Neural Networks (DPNNs).
DPNNs learn Hamiltonian mechanics directly by simultaneously training a model for the \(\mathbf{L}(\mathbf{x})\) matrix and a model for the Hamiltonian \(H(\mathbf{x})\). The neural network that encodes \(\mathbf{L}(\mathbf{x})\) learns only the upper triangular part of \(\mathbf{L}\), so that skew-symmetry is automatically satisfied. The network has one hidden fully connected layer equipped with the softplus activation. The network that learns \(H(\mathbf{x})\) has the same structure. The actual learning was implemented within the standard framework PyTorch [41], using the Adam optimizer [42]. The loss function contains squares of the deviation between the training data and predicted trajectories, which are obtained by the implicit midpoint rule (IMR) numerically solving either the exact equations (for the training data) or Hamilton's equations with the trained models for \(\mathbf{L}(\mathbf{x})\) and \(H(\mathbf{x})\) (for the predicted data). Although such a model leads to a good match between the validation trajectories and predicted trajectories, it does not need to satisfy the Jacobi identity. Therefore, we also use an alternative model where squares of the Jacobiator (9) are added to the loss function, which enforces the Jacobi identity in a soft way, see Figure 1. Finally, in 3D we know the form of the Poisson bivector, since we have the general solution of the Jacobi identity (19). In such a case, the neural network encoding \(\mathbf{L}\) can be simplified to a network learning \(C(\mathbf{x})\), and the Jacobi identity is automatically satisfied, see Figure 2.
In summary, we use three methods:
* **(WJ)** Training \(\mathbf{L}(\mathbf{x})\) and \(H(\mathbf{x})\)_without_ the Jacobi identity.
* **(SJ)** Training \(\mathbf{L}(\mathbf{x})\) and \(H(\mathbf{x})\) with _soft_ Jacobi identity, where the \(L_{2}\)-norm of the Jacobiator (9) is a part of the loss function (see the code sketch after this list).
* **(IJ)** Training \(C(\mathbf{x})\) and \(H(\mathbf{x})\) with _implicitly_ valid Jacobi identity, based on the general solution of Jacobi identity in 3D (19).
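A minimal PyTorch sketch of the two building blocks shared by WJ and SJ follows. The layer width and module layout are assumptions, but the two essential ideas match the text: skew-symmetry of \(\mathbf{L}\) holds by construction because only the upper triangle is learned, and the Jacobiator (9) is assembled by automatic differentiation for the soft penalty.

```python
import torch
import torch.nn as nn

class PoissonBivector(nn.Module):
    """Learns the upper-triangular part of L(x); skew-symmetry holds by construction."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.dim = dim
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Softplus(),
                                 nn.Linear(hidden, dim * (dim - 1) // 2))
        self.register_buffer('idx', torch.triu_indices(dim, dim, offset=1))

    def forward(self, x):                 # x: (dim,)
        L = x.new_zeros(self.dim, self.dim)
        L[self.idx[0], self.idx[1]] = self.net(x)
        return L - L.T                    # skew-symmetric Poisson matrix

def jacobiator(Lnet, x):
    """Left-hand side of the Jacobi identity (9), via automatic differentiation."""
    L = Lnet(x)                                                          # (d, d)
    dL = torch.autograd.functional.jacobian(Lnet, x, create_graph=True)  # dL[i,j,k] = dL^ij/dx^k
    return (torch.einsum('kl,ijk->ijl', L, dL)
            + torch.einsum('ki,jlk->ijl', L, dL)
            + torch.einsum('kj,lik->ijl', L, dL))
```

For SJ, the squared entries of `jacobiator(Lnet, x)` are added to the loss; WJ omits this term.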
The training itself then proceeds in the following steps:
Figure 1: Scheme SJ (Soft Jacobi) of the methods that learn both the energy and Poisson bivector.
Figure 2: Scheme IJ (Implicit Jacobi) of the learning method implicitly enforcing Jacobi identity.
1. Simulation of the training and validation data. For a randomly generated set of initial conditions, we simulate a set of trajectories by IMR. These trajectories are then split into steps and the collection of steps is split into a training set and a validation set.
2. Parameters of the neural networks WJ, SJ, and IJ are trained by back-propagation on the training data, minimizing the loss function (one realization of this step is sketched after this list). Then, the loss function is evaluated on the validation data to report the errors.
3. A new set of initial conditions is randomly chosen and new trajectories are generated using IMR, which gives the ground truth (GT).
4. Trajectories with the GT initial conditions are simulated using the trained models for \(\mathbf{L}\) and \(H\) and compared with GT.
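One way to realize the training step for the SJ flavor is sketched below. It reuses `PoissonBivector` and `jacobiator` from the previous sketch, models \(H\) with a small MLP (an assumption), and uses the fact that the IMR residual on a snapshot pair is explicit, because both endpoints, and hence the midpoint, come from data:

```python
import torch
import torch.nn as nn

dim, dt = 3, 0.01                          # assumed state dimension and time step
Lnet = PoissonBivector(dim)                # from the previous sketch
Hnet = nn.Sequential(nn.Linear(dim, 64), nn.Softplus(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(Lnet.parameters()) + list(Hnet.parameters()), lr=1e-3)

def train_step(x0, x1):
    # IMR residual x1 - x0 - dt * L(xm) grad H(xm), with known midpoint xm
    xm = (0.5 * (x0 + x1)).detach().requires_grad_(True)
    gradH = torch.autograd.grad(Hnet(xm).sum(), xm, create_graph=True)[0]
    residual = x1 - x0 - dt * Lnet(xm) @ gradH
    loss = residual.pow(2).sum() + jacobiator(Lnet, xm).pow(2).sum()  # SJ penalty
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```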
In the following sections, we illustrate this procedure for learning rigid body mechanics, a particle in 2D, the Shivamoggi equations, a particle in 3D, and heavy top dynamics.
### Rigid body
Figure 3(a) shows a sample trajectory of rigid body dynamics (23) from the GT set, as well as trajectories with the same initial conditions generated by the DPNNs. The training was carried out on 200 trajectories, while GT consisted of 400 trajectories. Errors of the three learning methods (WJ, SJ, and IJ) are shown in Table 6. All three methods were capable of learning the dynamics well. Figure 3(b) shows the norm of the Jacobiator evaluated on the validation set. The Jacobiator is zero for IJ and small for SJ, while it does not go to zero for WJ. Therefore, IJ and SJ satisfy the Jacobi identity while WJ does not.
Figure 4 shows the compatibility error of learning the Poisson bivector (20). All three methods learn the Poisson bivector well, but IJ is the most precise, followed by SJ and WJ. Finally, Figure 5 shows errors in learning the trajectories
Figure 3: Rigid body: comparison of learned models (WJ, SJ, and IJ) with GT.
\(\mathbf{M}(t)\). All three methods learn the trajectories well, but in this case, the SJ method works slightly better.
Table 6 shows the medians of the errors for all the models and applied learning methods. N/A indicates quantities that are not expected to be conserved. The error \(\Delta\mathbf{M}\) is calculated as the median of the squared deviation of \(\mathbf{M}^{2}\) over all time steps. The errors \(\Delta\mathbf{r}\), \(\Delta\mathbf{M}\cdot\mathbf{r}\), \(\Delta\mathbf{M}^{2}\), and \(\Delta\mathbf{r}^{2}\) are calculated analogously. The error \(\Delta\mathbf{L}\) in the RB case is calculated as \(\log_{10}\) of the \(L^{2}\) norm of the compatibility condition (20), calculated for the learned \(\mathbf{J}\) divided by its trace and multiplied by a factor of 1000 (using the exact \(\mathbf{J}\) when generating the GT). In the P2D and P3D cases, where the Poisson bivector is symplectic, the error is calculated as \(\log_{10}\) of squares of the symplecticity condition (15). In the case of the Shivamoggi equations, the \(\Delta\mathbf{L}\) error is \(\log_{10}\) of the squared superintegrable compatibility condition (29). The \(\Delta\det\mathbf{L}\) errors are medians of squares of the learned \(\det\mathbf{L}\); in the Shivamoggi and heavy top cases, the values are logarithmic, since the determinants are supposed to be zero in those cases.
Figure 4: Rigid body: compatibility errors for RB evaluated as \(\log_{10}\) of squares of Equation (20). The distribution of errors is approximately log-normal. The compatibility error of the IJ method is the lowest, followed by SJ and WJ.
Figure 5: Rigid body: Distribution of \(\log_{10}\) of squares of errors in \(\mathbf{M}\).
Table 6: Summary of the learning errors for a rigid body (RB), particle in two dimensions (P2D), Shivamoggi equations (Sh), particle in three dimensions (P3D), and heavy top (HT).
### Particle in 2D
A particle moving in a 2D potential field represents a four-dimensional symplectic system. The simulated trajectories were learned by the WJ and SJ methods. The implicit IJ method was not used because no general solution of the Jacobi identity in 4D is available that would work for both degenerate and symplectic Poisson bivectors. Results of the learning are in Table 6; both WJ and SJ learn the dynamics comparably well. Figure 7 shows a sample trajectory, and Figure 8 shows the distribution of the learned \(\det(\mathbf{L})\). The median determinant (after a normalization such that the determinant equals \(1.0\) in GT) was close to this value for both SJ and WJ, indicating a symplectic system.
### Shivamoggi equations
The Shivamoggi equations (30) represent a 4D Hamiltonian system that is not symplectic and thus has a degenerate Poisson bivector. The equations were solved within the range of parameters \(u\in[-0.5,0.5]\), \(x\in[-0.5,0.5]\), \(y\in[-0.1,0.1]\), \(z\in[-0.5,0.5]\). It was necessary to constrain the range of \(\mathbf{r}=(u,x,y,z)\) because, for instance, when \(u=0\), the solutions explode [37]. Figure 9 shows the \(u\)-component over a sample trajectory. Figure 10 shows the distribution of \(\log_{10}(\det(\mathbf{L}))\), indicating that the system is indeed degenerate.
In comparison with the determinants of \(\mathbf{L}\) learned in the symplectic case of a two-dimensional particle (P2D), see Table 6, the learned determinants are quite low in the Shivamoggi case (after the same normalization as in the P2D case). Therefore, DPNNs are able to distinguish between symplectic and non-symplectic Hamiltonian systems.
Figure 7: P2D: A sample trajectory. Both SJ and WJ learn the dynamics of a particle in two dimensions well.
Figure 8: P2D: Learned \(\det(\mathbf{L})\).
Figure 10: Shivamoggi: Learned \(\log_{10}(\det(\mathbf{L}))\).
Figure 9: Shivamoggi: A sample trajectory, component \(u\).
### Particle in 3D
Figure 11 shows the momentum during a sample trajectory of a particle in 3D space taken from the GT set, as well as trajectories with the same initial conditions obtained by DPNNs (WJ and SJ flavors). The training and validation were done on two sets of trajectories (with 200 and 400 trajectories, respectively).
Table 6 contains numerical values of the learning errors. Both WJ and SJ learn the Poisson bivector as well as the trajectories (and thus also the energy) well. The median determinant is close to unity, which indicates a symplectic system.
### Heavy top
Figures 12a and 12b show a sample trajectory of a heavy top from the GT set and trajectories with the same initial conditions obtained by DPNNs. The training and validation were done on two sets of trajectories (with 300 and 400 trajectories, respectively). Numerical values of the learning errors can be found in Table 6. In particular, the learned \(\mathbf{L}\) matrix is close to singular, indicating a non-symplectic system, and SJ learns slightly better than WJ. Similarly to the four-dimensional case, DPNNs distinguish between the symplectic (P3D) and non-symplectic (HT) cases.
Figure 11: P3D: Comparison of momentum \(\mathbf{M}(t)\) on an exact trajectory (GT) and trajectories obtained by integrating the learned models (without Jacobi and with soft Jacobi) in the case of a 3D harmonic oscillator.
## 4 Learning non-Hamiltonian systems
Let us now try to apply the WJ, SJ, and IJ methods, which were developed for learning purely Hamiltonian systems, to a non-Hamiltonian system, specifically a dissipative rigid body. One way to formulate the dissipative evolution of a rigid body is the energetic Ehrenfest regularization [43], where the Hamiltonian evolution of a rigid body is supplemented with dissipative terms that keep the magnitude of the angular momentum constant while dissipating the energy. The evolution equations are
\[\dot{\mathbf{M}}=\mathbf{M}\times E_{\mathbf{M}}-\frac{\tau}{2}\mathbf{\Xi} \cdot E_{\mathbf{M}} \tag{1}\]
where \(\tau\) is a positive dissipation parameter and \(\mathbf{\Xi}=\mathbf{L}^{T}d^{2}E\,\mathbf{L}\) is a symmetric positive definite matrix (assuming that the energy is positive definite) constructed from the Poisson bivector of the rigid body, \(L^{ij}=-\epsilon^{ijk}M_{k}\), and the energy \(E(\mathbf{M})\). These equations ensure that \(\frac{d}{dt}\mathbf{M}^{2}=0\) while \(\dot{E}\leq 0\), and their solutions converge to pure rotations around the principal axis of the rigid body (the axis with the highest moment of inertia), which is the physically relevant solution.
Results of learning the trajectories generated by solving Equations (1) are shown in Figure 13. All the methods (WJ, SJ, and IJ) are capable of learning the trajectories to some extent, but WJ is the most successful, followed by SJ and IJ. As SJ, and especially IJ, use deeper properties of Hamiltonian systems (soft and exact validity of the Jacobi identity, respectively), they are less robust in the case of non-Hamiltonian systems.
Figure 13 can actually be seen as an indication of the non-Hamiltonian character of Equations (1). Systems where IJ learns best, followed by SJ and WJ, are much more likely to be Hamiltonian, in contrast with non-Hamiltonian systems, where WJ learns best, followed by SJ and IJ. In other words, DPNNs can distinguish between Hamiltonian and non-Hamiltonian systems by the order in which the flavors of DPNNs perform.
Figure 12: Heavy top: comparison of an exact trajectory (GT) and trajectories obtained by integrating the learned models (without Jacobi and with soft Jacobi) in the case of the heavy top.
Figure 13: Distribution of errors in the angular momentum \(\mathbf{M}\) when learning the dissipative rigid-body dynamics (1) by methods assuming purely Hamiltonian systems (WJ, SJ, and IJ). The WJ method is the most robust, capable of learning also the dissipative system relatively well (although worse than in the purely Hamiltonian case). The SJ method, which moreover softly imposes the Jacobi identity, is less capable of learning the dissipative system. The IJ method, which has the best performance on purely Hamiltonian systems, see Table 6, has the worst learning capability in the dissipative case.
## 5 Conclusion
This paper proposes a machine learning method for learning Hamiltonian systems from data. Direct Poisson Neural Networks (DPNNs) directly learn the Poisson bivector and the Hamiltonian of mechanical systems with no further assumptions about the structure of the systems. In particular, DPNNs can distinguish between symplectic and non-symplectic systems by measuring the determinant of the learned Poisson bivector.
DPNNs come in three flavors: (i) without Jacobi identity (WJ), (ii) with softly imposed Jacobi identity (SJ), and (iii) with implicitly valid Jacobi identity (IJ). Although all the methods are capable of learning the dynamics, only SJ and IJ also satisfy the Jacobi identity. The typical behavior is that IJ learns Hamiltonian models most precisely, see Table 6, followed by SJ and WJ.
When the three flavors of DPNNs are applied to learn a non-Hamiltonian system, it is expected that the order of precision gets reversed, making WJ the most precise, followed by SJ and IJ. This reversed order of precision can be used as an indicator that distinguishes between Hamiltonian and non-Hamiltonian systems.
In the future, we would like to extend DPNNs to systems with dissipation prescribed by gradient dynamics.
## Acknowledgment
We are grateful to E. Cueto, F. Chinesta, and B. Moya for inspiring discussions about the purpose of Jacobi identity in the learning of physical systems. MP and MS were supported by the Czech Grant Agency, grant number 23-05736S.
|
2302.12716 | Supervised Hierarchical Clustering using Graph Neural Networks for
Speaker Diarization | Conventional methods for speaker diarization involve windowing an audio file
into short segments to extract speaker embeddings, followed by an unsupervised
clustering of the embeddings. This multi-step approach generates speaker
assignments for each segment. In this paper, we propose a novel Supervised
HierArchical gRaph Clustering algorithm (SHARC) for speaker diarization where
we introduce a hierarchical structure using Graph Neural Network (GNN) to
perform supervised clustering. The supervision allows the model to update the
representations and directly improve the clustering performance, thus enabling
a single-step approach for diarization. In the proposed work, the input segment
embeddings are treated as nodes of a graph with the edge weights corresponding
to the similarity scores between the nodes. We also propose an approach to
jointly update the embedding extractor and the GNN model to perform end-to-end
speaker diarization (E2E-SHARC). During inference, the hierarchical clustering
is performed using node densities and edge existence probabilities to merge the
segments until convergence. In the diarization experiments, we illustrate that
the proposed E2E-SHARC approach achieves 53% and 44% relative improvements over
the baseline systems on benchmark datasets like AMI and Voxconverse,
respectively. | Prachi Singh, Amrit Kaul, Sriram Ganapathy | 2023-02-24T16:16:41Z | http://arxiv.org/abs/2302.12716v1 | # Supervised Hierarchical Clustering Using Graph Neural Networks for Speaker Diarization
###### Abstract
Conventional methods for speaker diarization involve windowing an audio file into short segments to extract speaker embeddings, followed by an unsupervised clustering of the embeddings. This multi-step approach generates speaker assignments for each segment. In this paper, we propose a novel Supervised HierArchical gRaph Clustering algorithm (SHARC) for speaker diarization where we introduce a hierarchical structure using a Graph Neural Network (GNN) to perform supervised clustering. The supervision allows the model to update the representations and directly improve the clustering performance, thus enabling a single-step approach for diarization. In the proposed work, the input segment embeddings are treated as nodes of a graph with the edge weights corresponding to the similarity scores between the nodes. We also propose an approach to jointly update the embedding extractor and the GNN model to perform end-to-end speaker diarization (E2E-SHARC). During inference, the hierarchical clustering is performed using node densities and edge existence probabilities to merge the segments until convergence. In the diarization experiments, we illustrate that the proposed E2E-SHARC approach achieves \(53\%\) and \(44\%\) relative improvements over the baseline systems on benchmark datasets like AMI and Voxconverse, respectively.
Prachi Singh, Amrit Kaul, Sriram Ganapathy†
LEAP Lab, Electrical Engineering, Indian Institute of Science, Bangalore.
prachisingh@iisc.ac.in
Supervised Hierarchical Clustering, Graph Neural Networks, Speaker Diarization.
Footnote †: This work was supported by the grants from the British Telecom Research Center.
## 1 Introduction
Speaker Diarization (SD) is the task of segmenting an audio file based on speaker identity. The task has important applications in rich speech transcription for multi-speaker conversational audio like customer call center data, doctor-patient conversations and meeting data.
The conventional approach for the task of SD involves multiple steps. In the first step, the audio is windowed into short segments (1-2 s) and fed to a speaker embedding extractor. The speaker embedding extractors are deep neural networks trained for the speaker classification task. The output of the penultimate layer, called the embedding, provides a good initial speaker representation (for example, x-vectors) [1, 2]. In a subsequent step, these speaker embeddings are clustered based on similarity scores computed using methods like Probabilistic Linear Discriminant Analysis (PLDA) scoring [3, 4]. The most common clustering approach is agglomerative hierarchical clustering (AHC) [5], which merges two clusters at each time step based on similarity scores until the required number of clusters/speakers is attained. Other approaches involve spectral clustering (SC) [6], k-means clustering [7] and graph based clustering [8, 9].
Recently, end-to-end neural diarization (EEND) approaches [10, 11] involving transformers have proved effective in handling overlaps. However, due to the difficulty in handling more than 4 speakers, a pairwise metric learning loss was recently proposed [12]. There have been recent efforts on clustering algorithms to improve the diarization performance over the conventional approach. A graph-based agglomerative clustering called path integral clustering, proposed by Zhang et al. [8], is shown to outperform other clustering approaches on the CALLHOME and AMI datasets [9]. Similarly, metric learning approaches are introduced in [6, 13] to improve the speaker similarity scores. In a recent work, Singh et al. [9, 14] introduced self-supervised metric learning using the clustering output labels as pseudo-labels for model training.
Most of the previous approaches for diarization are trained to improve the similarity scores. However, they still use an unsupervised clustering algorithm to obtain the final labels. We hypothesize that this limits their performance, as they are not trained with clustering objectives. On the other hand, EEND models require a large amount of data and hundreds of hours of training. We propose a simple approach to SD which is not data intensive and can handle a large number of speakers (more than 7) during training and evaluation. The approach is called the Supervised HierArchical gRaph Clustering algorithm (SHARC). Our work is inspired by Xing et al. [15], where a supervised learning approach to image clustering was proposed. We perform supervised representation learning and clustering jointly without requiring an external clustering algorithm. The major contributions are:
1. Introducing supervised hierarchical clustering using Graph Neural Networks (GNN) for diarization.
2. Developing the framework for joint representation learning and clustering using supervision.
3. Achieving state-of-the-art performance on two benchmark datasets.
## 2 Related Work and Background
This section highlights previous works on SD that represent a multi-speaker audio file in the form of a graph. We first introduce GNNs and their use in metric learning and supervised clustering in other domains. Then, we describe a variant of GNN, called GraphSAGE [16], which is used in our approach.
Wang et al. [17] proposed GNNs for metric learning. The inputs to the model are x-vectors/d-vectors and the PLDA similarity scores. The output of the model is a probability score of whether two nodes are connected or not. The Graph Convolution Network (GCN) [18], the most common variant of GNNs, is used in [19] for semi-supervised training using clustering outputs as "pseudo-labels".
**GraphSAGE:** The GCN is inherently transductive and does not generalize to unseen nodes. GraphSAGE [16], another variant of GNN, is a representation learning technique suitable for dynamic graphs. It can predict the embedding of a new node without requiring a re-training procedure. GraphSAGE learns aggregator functions that can induce the embedding of a new node given its features and neighborhood. First, a graph is constructed using the embeddings as the nodes. The edges are connected using the similarity scores between the embeddings. Instead of training individual embeddings for each node, a function is learnt that generates embeddings by sampling and aggregating features from a node's local neighborhood. The aggregate function outputs a single neighborhood embedding by taking a weighted average of each neighbor's embedding.
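To make the aggregation concrete, here is a minimal GraphSAGE-style layer with mean aggregation in plain PyTorch; the dense adjacency matrix and the mean aggregator are simplifying assumptions (GraphSAGE [16] supports several aggregators and neighbor sampling):

```python
import torch
import torch.nn as nn

class SAGELayer(nn.Module):
    """Minimal GraphSAGE layer with mean aggregation (a sketch, not the exact model)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, adj):
        # h: (n, in_dim) node features; adj: (n, n) 0/1 adjacency from k-NN edges
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h_neigh = adj @ h / deg                   # mean of each node's neighbors
        return torch.relu(self.lin(torch.cat([h, h_neigh], dim=1)))
```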
## 3 Proposed Approach
The Supervised HierArchical gRaph Clustering (SHARC) model is shown in Figure 1. It introduces a hierarchical structure in GNN-based clustering. Figure 1(a) shows the training procedure using \(R\) audio recordings \(r\in\{1,2,..,R\}\), where \(r\) is the recording-id assigned to each recording in the dataset. It involves extracting short segment embeddings such as x-vectors \(\boldsymbol{\mathcal{X}}=\{\boldsymbol{X}_{1},\boldsymbol{X}_{2},...,\boldsymbol{X}_{R}\}\) from an Extended Time Delay Neural Network (ETDNN) [1] for all recordings, where \(\boldsymbol{X}_{r}\in\mathcal{R}^{N_{r}\times F}\), \(N_{r}\) is the number of x-vectors for recording \(r\) and \(F\) is the dimension of the x-vectors. These are used to form graphs at different levels of hierarchy denoted as \(G=\{G_{1}^{0},G_{2}^{0},...,G_{1}^{1},...,G_{R}^{M_{r}}\}\), where \(G_{r}^{m}\) is the graph of recording \(r\) at level \(m\) and \(M_{r}\) is the maximum number of levels created for \(r\). The nodes of the graphs are obtained from \(\boldsymbol{\mathcal{X}}\), and edges are connected using \(k\)-nearest neighbors with weights coming from the similarity matrices \(\boldsymbol{\mathcal{S}}^{m}=\{\boldsymbol{S}_{1}^{m},...,\boldsymbol{S}_{R}^{m}\}\) for level \(m\), where \(\boldsymbol{S}_{r}^{m}\in\mathcal{R}^{N_{r}^{m}\times N_{r}^{m}}\) and \(N_{r}^{m}\) is the number of nodes at level \(m\) for recording \(r\). The graphs are constructed at different clustering levels by merging the node features of each cluster and recomputing the similarity matrix, as discussed in Section 3.1. For training, the set of graphs \(G\) is fed to the GNN module in batches. The module comprises a GNN along with a feed-forward network to predict the edge weights \(\hat{E}_{m}\in\mathcal{R}^{N_{r}^{m}\times k}\) of all nodes with their k-nearest neighbors. The loss is computed using \(E^{q}\) (true edge weights) and \(\hat{E}_{m}\) (predicted edge weights). The details of the GNN scoring and loss computation are given in Section 3.4.
Figure 1(b) shows the inference block diagram. For a test recording \(t\), x-vectors \(\boldsymbol{X}_{t}\) and \(\boldsymbol{S}_{t}\) are extracted and a graph \(G_{t}^{0}\) is constructed at level 0. It is then passed to the clustering module, which iteratively performs clustering using edge predictions from the GNN module, followed by merging nodes of the same cluster and then reconstructing the graph for the next level \(m\). This process stops if the graph has no edges (\(G^{m}=\{\phi\}\)) or the maximum allowed level \(M\) is attained. The algorithm outputs the cluster labels predicted for the nodes at the top level, propagated down to the original embeddings. The process is summarized in Algorithm 1.
### Graph generation
In this step, a hierarchy of graphs, \(G_{r}^{m}=(V^{m},E_{m})\), is created using \(\boldsymbol{X}_{r}\) and \(\boldsymbol{S}_{r}^{m}\), where \(V^{m}=\{v_{1},v_{2},...,v_{n}\}\) is the set of the nodes and \(E_{m}\) is the set of the edges. Each graph consists of node representations \(H_{r}^{m}=\{h_{1}^{(m)},h_{2}^{(m)},...,h_{n}^{(m)}\}\in\mathcal{R}^{F^{\prime}\times n}\), where \(n=N_{r}^{m}\) is the number of nodes at level \(m\). \(E_{m}\) is obtained using \(\boldsymbol{S}_{r}^{m}\in[0,1]\), considering the \(k\)-nearest neighbors of each node in \(V^{m}\). At level \(m=0\), we consider each embedding as an individual cluster. Therefore, the node representations are given as \(H_{r}^{0}=[\boldsymbol{X}_{r};\boldsymbol{X}_{r}]\). For any level \(m>0\), the node representation is obtained by concatenating the identity feature and the average feature of the current cluster, as described in Section 3.3.
Figure 1: Block diagram of proposed SHARC method. The ETDNN model and GNN are the extended time delay network model for x-vector extraction and the graph neural network for score prediction. FFN stands for feed forward network. The left side (a) shows the training steps and the right side (b) shows the inference steps.
### GNN scoring and clustering
The node representations \(H^{m}\) at each level \(m\) are passed to the GNN scoring function \(\Phi\). It predicts the edge linkage probability (\(p_{ij}\)), which indicates the presence of an edge between \(v_{i},v_{j}\in V^{m}\), along with the node density (\(\hat{d}_{i}\)), which measures how densely the node is connected with its neighbors. After GNN scoring, the clustering is performed. At each level of the hierarchy \(m\), a candidate edge set \(\varepsilon(i)\) is created for the node \(v_{i}\), with edge connection threshold \(p_{\tau}\), as given below.
\[\varepsilon(i)=\{j|(v_{i},v_{j})\in E_{m},\quad\hat{d}_{i}\leq\hat{d}_{j} \quad\text{and}\quad p_{ij}\geq p_{\tau}\} \tag{1}\]
For any \(i\), if \(\varepsilon(i)\) is not empty, we pick \(j=\text{argmax}_{j\in\varepsilon(i)}\hat{e}_{ij}\) and add \((v_{i},v_{j})\) to \(E^{\prime}_{m}\), where \(\hat{e}_{ij}\) is the predicted edge weight, given as,
\[\hat{e}_{ij}=2p_{ij}-1\in[-1,1]\forall j\in N_{i}^{k} \tag{2}\]
Here, \(N_{i}^{k}\) are the k-nearest neighbors of node \(v_{i}\). After a full pass over every node, \(E^{\prime}_{m}\) forms a set of connected components \(C^{\prime}_{m}\), which serve as the designated clusters for the next level (\(m+1\)). The clustering stops when there are no connected components present in the graph.
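The decision rule in (1)-(2) amounts to letting each node link to its best admissible neighbor and then reading off connected components. A sketch in Python with a simple union-find, assuming the GNN outputs have already been gathered into arrays (all names are illustrative):

```python
import numpy as np

def cluster_level(nbrs, e_hat, d_hat, p_tau):
    """One SHARC clustering pass over a level (eqs. 1-2).

    nbrs:  (n, k) indices of the k-nearest neighbors of each node
    e_hat: (n, k) predicted edge weights, e_hat = 2*p - 1
    d_hat: (n,)   predicted node densities
    """
    n, k = nbrs.shape
    parent = list(range(n))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        best, best_e = -1, -np.inf
        for a in range(k):            # build eps(i) and pick the argmax edge weight
            j = int(nbrs[i, a])
            p_ij = (e_hat[i, a] + 1) / 2
            if d_hat[i] <= d_hat[j] and p_ij >= p_tau and e_hat[i, a] > best_e:
                best, best_e = j, e_hat[i, a]
        if best >= 0:
            parent[find(i)] = find(best)

    return np.array([find(i) for i in range(n)])  # component ids = next-level clusters
```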
### Feature aggregation
To obtain the node representations \(H^{m+1}\) for the next level, the connected components \(C^{\prime}_{m}\) obtained from the clustering, along with \(H^{m}\), are passed to an aggregation function \(\Psi\). The function \(\Psi\) concatenates the identity feature \(\tilde{h}_{i}^{(m+1)}\) and the average feature \(\bar{h}_{i}^{(m+1)}\) of each cluster \(i\) to obtain \(h_{i}^{(m+1)}=[\tilde{h}_{i}^{(m+1)};\bar{h}_{i}^{(m+1)}]\). The identity feature of node \(i\) at level \(m+1\) is the feature of the node which has the highest node density at level \(m\) in cluster \(i\). The average feature is computed by taking the average of all the identity features of that cluster from the previous level, given as,
\[\tilde{h}_{i}^{(m+1)}=\tilde{h}_{z_{i}}^{(m)};\qquad\bar{h}_{i}^{(m+1)}=\frac{1}{|c_{i}^{(m)}|}\sum_{j\in c_{i}^{(m)}}\tilde{h}_{j}^{(m)} \tag{3}\]
where \(z_{i}=\text{argmax}_{j\in c_{i}^{(m)}}\hat{d}_{j}^{(m)}\).
### GNN module architecture and training
The GNN scoring function \(\Phi\) is implemented as a learnable GNN module designed for supervised clustering. The module consists of one GraphSAGE [16] layer with \(F^{\prime}=2048\) neurons. Each graph \(G_{r}^{m}\), containing source and destination node pairs, is fed to the GNN module. It takes the node representations \(H^{m}\) and their edge connections as input and generates latent representations denoted as \(\hat{H}^{(m)}\in\mathcal{R}^{F^{\prime}\times n}\), with \(n\) being the number of embeddings at level \(m\). Each pair of latent embeddings is concatenated as \([\hat{h}_{i};\hat{h}_{j}]\) and passed to a three-layer fully connected feed-forward network of size \(\{1024,1024,2\}\), followed by a softmax activation, to generate the linkage probability \(p_{ij}\). The predicted node density is computed as:
\[\hat{d}_{i}=\frac{1}{k}\sum_{j\in N_{i}^{k}}\hat{e}_{ij}\mathbf{S}_{r}(i,j) \tag{4}\]
The ground truth density \(d_{i}\) is obtained using the ground truth edge weights \(e_{ij}^{q}=2q_{ij}-1\), with \(E^{q}\in\{-1,1\}^{N_{r}\times k}\), where \(q_{ij}=1\) if nodes \(v_{i}\) and \(v_{j}\) belong to the same cluster, and \(q_{ij}=0\) otherwise. A node with higher density is a better representative of the cluster than a node with lower density. Each node \(v_{i}\) has a cluster (speaker) label \(y_{i}\) in the training set, allowing the function to learn the clustering criterion from the data. The loss function for training is given as follows:
\[L=L_{conn}+L_{den} \tag{5}\]
where \(L_{conn}\) is the pairwise binary cross-entropy loss based on the linkage probabilities across all the possible edges in \(E\), accumulated across all levels and recordings in a batch, and \(L_{den}\) is the mean squared error (MSE) loss between the ground truth node density \(d_{i}\) and the predicted node density \(\hat{d}_{i}\) \(\forall i\in\{1,...,|V|\}\), where \(|V|\) is the cardinality of \(V\).
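In code, the combined objective (5) is a two-term sum over the model outputs; this sketch assumes the linkage probabilities, edge labels, and densities have already been flattened into tensors:

```python
import torch.nn.functional as F

def sharc_loss(p, q, d_hat, d):
    # p: predicted linkage probabilities; q: 0/1 same-cluster edge labels
    # d_hat / d: predicted and ground-truth node densities
    l_conn = F.binary_cross_entropy(p, q.float())  # pairwise BCE over candidate edges
    l_den = F.mse_loss(d_hat, d)                   # density regression term
    return l_conn + l_den                          # eq. (5)
```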
### E2E-SHARC
The SHARC model described in the previous section also allows the computation of gradients of the loss function w.r.t. the input x-vector embeddings. The computation of these gradients enables the fine-tuning of the embedding extractor. We remove the classification layer from the 13-layer ETDNN model [20] and connect the output of the \(11^{th}\) affine layer with the SHARC model input. This model is trained using \(40\)-D mel-spectrogram feature vectors and similarity matrices as input. The details of the ETDNN embedding extractor are described in Section 4.2. The training loss is the same as for the SHARC model (Equation 5). This approach is referred to as End-to-End Supervised HierArchical gRaph Clustering (E2E-SHARC).
## 4 Experiments
### Datasets
#### 4.1.1 AMI
* **Train, dev and eval sets**: The AMI dataset [21] contains meeting recordings from four different sites (Edinburgh, Idiap, TNO, Brno). It comprises training, development (dev) and evaluation (eval) sets consisting of \(136\), \(18\) and \(16\) recordings sampled at \(16\) kHz, respectively. The number of speakers and the duration of each recording range from 3-5 and \(20\)-\(60\) mins, respectively.
#### 4.1.2 Voxconverse
* **Train set**: The dataset used for training the Voxconverse model is simulated from VoxCeleb 1 and 2 [22, 23] and LibriSpeech [24] using the recipe from [10]. We simulated \(5000\) mixtures containing \(2\)-\(5\) speakers with durations ranging from \(150\)-\(440\) s. This generates \(1000\) hrs of data with \(6,023\) speakers.
* **Voxconverse dev and eval sets**: This is an audio-visual diarization dataset [25] consisting of multi-speaker human speech recordings extracted from YouTube videos. It is divided into a development (dev) set and an evaluation (eval) set consisting of \(216\) and \(232\) recordings, respectively. The duration of a recording ranges from \(22\)-\(1200\) s. The number of speakers per recording varies from \(1\)-\(21\).
### Baseline system
The baseline method is an x-vector-clustering based approach following [14, 26]. First, the recording is divided into 1.5 s short segments with a 0.75 s shift. The 40-D mel-spectrogram features are computed from each segment and passed to the ETDNN model [1] to extract 512-D x-vectors. The ETDNN model follows the DNN architecture described in [20] and is trained on the VoxCeleb1 [22] and VoxCeleb2 [23] datasets for the speaker identification task, to discriminate among the \(7,146\) speakers. A whitening transform, length normalization and recording-level PCA are applied to the x-vectors as pre-processing steps to compute the PLDA similarity score matrix and perform clustering to generate speaker labels for each segment. For comparison, we have used the two most popular clustering approaches - AHC [5] and SC [28]. To perform AHC, the PLDA scores are used directly. For SC, we convert the scores to the \([0,1]\) range by applying a sigmoid with temperature parameter \(\tau=10\) (the best value obtained from experimentation).
### Training configuration
For training the SHARC model, we extract x-vectors with a window of duration 1.5 s and a 0.75 s shift from single-speaker regions of the training set. The similarity score matrices, \(\mathbf{S}^{m}\), are obtained using the baseline PLDA models and fed to the GNN module described in Section 3.4. The possible levels of each recording depend on the number of x-vectors (\(N_{r}\)) and the choice of \(k\).
To train the end-to-end SHARC model, the weights of the x-vector model are initialized with the pre-trained ETDNN model, while the SHARC model weights are initialized with those obtained from SHARC training. The input to the model is a 40-D mel-spectrogram computed over 1.5 s windows with a 0.75 s shift. To prevent overfitting of the embedding extractor, the pre-trained x-vectors are added to the embedding extractor output before being fed to the GNN.
### Choice of hyper-parameters
The SHARC model is trained with a Stochastic Gradient Descent (SGD) optimizer with a learning rate \(lr\)=\(0.01\) (for Voxconverse) and \(lr\)=\(0.001\) (for AMI) for \(500\) epochs. Similarly, the E2E-SHARC is also trained with an SGD optimizer. In this case, the learning rate is \(1e\)-\(06\) for the ETDNN model and \(1e\)-\(03\) for the SHARC model, trained for 20 epochs. The hyperparameters \(k\) and \(p_{\tau}\) are selected based on the best performance on the dev set for the eval set and vice versa. The maximum number of levels \(M\) is initially set to \(15\) to avoid infinite loops, but the algorithm converges at \(M\leq 3\). Table 1 shows the values of the hyperparameters obtained for the AMI and Voxconverse datasets.
## 5 Results
The proposed approaches are evaluated using the diarization error rate (DER) metric [26]. In our work, we use ground truth speech regions for performance comparison. The DERs are computed for two cases. The first case considers overlaps without a collar region, and the second case ignores overlaps and incorporates a tolerance collar of 0.25 s. Table 2 shows that the proposed SHARC model improves over the baseline systems, and the performance further improves with the E2E-SHARC model for both datasets. To incorporate temporal continuity, we applied a re-segmentation approach using Variational Bayes inference (VBx) [27] with the E2E-SHARC clustering labels as initialization, which further boosted the performance. As shown in Table 2, for the AMI SDM dataset, we obtain \(15.6\%\) and \(52.6\%\) relative improvements for the dev and eval sets, respectively, over the best baseline (PLDA-SC). Similarly, we achieve \(39.6\%\) and \(44.4\%\) relative improvements over the Voxconverse baseline (PLDA-SC) for the dev and eval sets, respectively.
Table 3 compares the performance of the proposed approach with state-of-the-art systems. The widely reported beamformed AMI multi-distant microphone (MDM) dataset, without TNO recordings, is used for benchmarking. The beamformed recordings are obtained using [33]. The proposed SHARC model has the lowest DER on the eval set compared to all previous SOTA approaches. For the Voxconverse dataset, we compare with the challenge baseline and other published results. Here, E2E-SHARC with VBx shows the best results compared to previously published numbers.
## 6 Summary
We have proposed a supervised hierarchical clustering algorithm using graph neural networks for speaker diarization. The GNN module learns the edge linkages and node densities across all levels of the hierarchy. The proposed approach enables the learnt GNN module to perform clustering hierarchically, based on a merging criterion, and can handle a large number of speakers. The method is further extended to perform end-to-end diarization by jointly learning the embedding extractor and the GNN module. On challenging diarization datasets, we have illustrated the performance improvements obtained using the proposed approach.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Parameters} & \multicolumn{3}{c|}{AMI} & \multicolumn{3}{c|}{Voxconverse} \\ \cline{2-7} & Train & Dev & Eval & Train & Dev & Eval \\ \hline \(k\) & 60 & 60 & 60 & 60 & 30 & 30 \\ \(p_{\tau}\) & - & 0.0 & 0.0 & - & 0.5 & 0.8 \\ \(k^{*}\) & 30 & 50 & 50 & 60 & 30 & 30 \\ \(p_{\tau}^{*}\) & - & 0.0 & 0.0 & - & 0.9 & 0.8 \\ \hline \end{tabular}
\end{table}
Table 1: Choice of hyper-parameters for train, dev, eval split of AMI and Voxconverse datasets. The parameters \(k^{*}\) and \(p_{\tau}^{*}\) are used in E2E-SHARC training.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
**AMI SDM System** & \multicolumn{2}{c|}{**with OVP + no COL**} & \multicolumn{2}{c|}{**w/out OVP + COL**} \\ \cline{2-5} & Dev. & Eval. & Dev. & Eval. \\ \hline x-vec + PLDA + AHC [26] & 24.50 & 29.51 & 7.61 & 14.59 \\ x-vec + PLDA + SC & 19.8 & 22.29 & 4.1 & 5.76 \\ x-vec + PLDA + SHARC & **19.71** & 21.44 & **3.91** & 4.88 \\ E2E-SHARC & 20.59 & **19.83** & 5.15 & **2.89** \\ — + VBx [27] & **19.35** & **19.82** & **3.46** & **2.73** \\ \hline \multicolumn{5}{|l|}{**Voxconverse System**} \\ \hline x-vec + PLDA + AHC [26] & 12.68 & 13.41 & 7.82 & 9.28 \\ x-vec + PLDA + SC & 10.78 & 14.02 & 6.52 & 9.92 \\ x-vec + PLDA + SHARC & 10.25 & 13.29 & 6.06 & 9.40 \\ E2E-SHARC & **9.90** & **11.68** & **5.68** & **7.65** \\ — + VBx [27] & **8.29** & **9.67** & **3.94** & **5.51** \\ \hline \end{tabular}
\end{table}
Table 2: DER (%) comparison on the AMI SDM and Voxconverse datasets with the baseline methods. OVP: overlap, COL: collar.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**AMI MDM System** & Dev. & Eval. \\ \hline x-vec(ResNet101)+AHC+VBx [29] & 2.78 & 3.09 \\ ECAPA-TDNN [30] & 3.66 & **3.01** \\ SelfSup-PLDA-PIC (+VBx) [14] & 5.38 (**2.18**) & 4.63 (3.27) \\ SHARC (+VBx) & 3.58 (3.72) & **2.29 (2.11)** \\ \hline **Voxconverse System** & Dev. & Eval. \\ \hline Voxconverse challenge [25] & 24.57 & \(-\) \\ VBx BUT system [31] & 4.36 & \(-\) \\ Wang et al. [32] & 4.41 & 5.82 \\ E2E-SHARC + VBx & **3.94** & **5.51** \\ \hline \end{tabular}
\end{table}
Table 3: DER (%, w/out overlap \(+\) with collar) comparison with state-of-the-art on AMI MDM (without TNO sets) and Voxconverse datasets.
## 7 Acknowledgements
The authors would like to thank Michael Free, Rohit Singh, Shakti Srivastava of British Telecom Research for their valuable inputs.
|
2302.04174 | The Hardware Impact of Quantization and Pruning for Weights in Spiking
Neural Networks | Energy efficient implementations and deployments of Spiking neural networks
(SNNs) have been of great interest due to the possibility of developing
artificial systems that can achieve the computational powers and energy
efficiency of the biological brain. Efficient implementations of SNNs on modern
digital hardware are also inspired by advances in machine learning and deep
neural networks (DNNs). Two techniques widely employed in the efficient
deployment of DNNs -- the quantization and pruning of parameters, can both
compress the model size, reduce memory footprints, and facilitate low-latency
execution. The interaction between quantization and pruning and how they might
impact model performance on SNN accelerators is currently unknown. We study
various combinations of pruning and quantization in isolation, cumulatively,
and simultaneously (jointly) to a state-of-the-art SNN targeting gesture
recognition for dynamic vision sensor cameras (DVS). We show that this
state-of-the-art model is amenable to aggressive parameter quantization, not
suffering from any loss in accuracy down to ternary weights. However, pruning
only maintains iso-accuracy up to 80% sparsity, which results in 45% more
energy than the best quantization on our architectural model. Applying both
pruning and quantization can result in an accuracy loss to offer a favourable
trade-off on the energy-accuracy Pareto-frontier for the given hardware
configuration. | Clemens JS Schaefer, Pooria Taheri, Mark Horeni, Siddharth Joshi | 2023-02-08T16:25:20Z | http://arxiv.org/abs/2302.04174v1 | # The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks
###### Abstract
Energy efficient implementations and deployments of Spiking neural networks (SNNs) have been of great interest due to the possibility of developing artificial systems that can achieve the computational powers and energy efficiency of the biological brain. Efficient implementations of SNNs on modern digital hardware are also inspired by advances in machine learning and deep neural networks (DNNs). Two techniques widely employed in the efficient deployment of DNNs, the quantization and pruning of parameters, can both compress the model size, reduce memory footprints, and facilitate low-latency execution. The interaction between quantization and pruning and how they might impact model performance on SNN accelerators is currently unknown. We study various combinations of pruning and quantization, in isolation, cumulatively, and simultaneously (jointly), applied to a state-of-the-art SNN targeting gesture recognition for dynamic vision sensor cameras (DVS). We show that this state-of-the-art model is amenable to aggressive parameter quantization, not suffering from any loss in accuracy down to ternary weights. However, pruning only maintains iso-accuracy up to 80% sparsity, which results in 45% more energy than the best quantization on our architectural model. Applying both pruning and quantization can result in an accuracy loss but offers a favourable trade-off on the energy-accuracy Pareto-frontier for the given hardware configuration.1
Footnote 1: Code [https://github.com/Intelligent-Microsystems-Lab/SNNQunantPrune](https://github.com/Intelligent-Microsystems-Lab/SNNQunantPrune) © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Spiking Neural Networks (SNNs), Quantization, Pruning, Sparsity, Model Compression, Efficient Inference
## I Introduction
Spiking neural networks (SNNs) offer a promising way of delivering low-latency, energy-efficient, yet highly accurate machine intelligence in resource-constrained environments. By leveraging bio-inspired temporal spike communication between neurons, SNNs promise to drastically reduce communication and compute overheads compared to their rate-coded artificial neural network (ANN) counterparts. In particular, SNNs are attractive for temporal tasks such as event-driven audio and vision classification problems [1, 2]. Because spike storage and communication are often low-overhead (compared to DNN activations), the primary energy expenditure for SNN accelerators can be attributed to the cost of accessing and computing with high-precision model parameters.
Parameter quantization and parameter pruning are widely used to reduce model footprint [3, 4]. Quantization, especially fixed-point integer quantization, can enable the use of cheaper lower precision arithmetic hardware and simultaneously reduce the required memory bandwidth for accessing all the parameters in a layer. On the other hand, pruning compresses a model by identifying unimportant weights and zeroing them out. Judicious use of both is critical in practical deployments of machine intelligence when the system might be constrained by size, weight, area and power (SWAP). Developing a better understanding of the interaction between quantization and pruning is critical to developing the next generation of SNN accelerators.
Efforts to quantize SNN parameters have recently gained traction. Authors in [5] developed a second-order method to allocate bit-widths per layer and reduce the model size by 58% while only incurring an accuracy loss of 0.3% on neuromorphic MNIST. Other work [6] demonstrated up to a \(2\times\) model size reduction using quantization with minimal (2%) loss in accuracy on the DVS gesture data set. Separately, the authors in [7] demonstrated quantization in both inference and training, allowing for a memory footprint reduction of 73.78% with approx. 1% loss in accuracy on the DVS gesture data set. Similarly, authors in [8] showed that SNNs with binarized weights can still achieve high accuracy at tasks where temporal features are not task-critical (e.g. DVS Gesture, \(0.69\%\) accuracy loss). They note that on tasks highly dependent on temporal features, such binarization incurs a greater degradation in accuracy (e.g. Heidelberg digits, \(16.16\%\) accuracy loss). Authors in [9] directly optimize the energy efficiency of SNNs obtained by conversion from ANNs by adding an activation penalty and quantization-aware training to the ANN training. They demonstrated a 10-fold drop in synaptic operations while only facing a minimal (1-2%) loss in accuracy on the DVS-MNIST task.
The authors in [10] demonstrated a 75% reduction in weights through pruning while achieving a 90% accuracy on the MNIST data set. More recently, researchers combined weight and connectivity training of SNNs [11] to reduce the number of parameters in a spiking CNN to \(\approx 1\%\) of the original while incurring an accuracy loss of \(\approx 3.5\%\) on the CIFAR-10 data set.
There has been limited work on the interaction between pruning and quantization with recent work in [12] showing \(5\)-bit parameter quantization and \(10-14\times\) model compression through pruning. However, most work in this domain has not focused on temporal tasks, where spike times and neuron states are critical to SNN functionality. This paper focuses comprehensively on the energy efficiency derived from quantizing and pruning a state-of-the-art SNN [13].
## II Methods
### _Spiking Neural Networks_
Spiking neural networks are motivated by biological neural networks, where both communication and computation use action potentials. Similar to the work in [13], we use a standard leaky integrate-and-fire neuron, written as:
\[u(t)_{i}^{l} =\tau u(t-1)_{i}^{l}+\sum_{j}s(t)_{ij}^{l-1}w_{ij}^{l},\] \[s(t)_{i}^{l} =H(u(t)_{i}^{l}-V_{th}). \tag{1}\]
where \(u(t)_{i}^{l}\) is the membrane potential of the \(i^{th}\) neuron in layer \(l\) at time step \(t\); it decays by the factor \(\tau\) while accumulating spiking inputs \(s(t)_{ij}^{l-1}\) from the previous layer, weighted by \(w_{ij}\). We do not constrain the connectivity of these layers. When the membrane potential exceeds the spike threshold \(V_{th}\), the neuron issues a spike (see eq. (1)) through the Heaviside step function \(H\) and resets its membrane potential to a resting potential (set to \(0\) in this work).
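For concreteness, a single LIF update step can be sketched in PyTorch as follows; the decay and threshold values are illustrative placeholders, and the surrogate-gradient machinery used for training is omitted.

```python
import torch

def lif_step(u_prev, spikes_in, w, tau=0.9, v_th=1.0):
    """One LIF time step following Eq. (1); hard reset to 0 after a spike."""
    u = tau * u_prev + w @ spikes_in        # leak + weighted spike accumulation
    s = (u >= v_th).float()                 # Heaviside spike generation
    u = u * (1.0 - s)                       # reset fired neurons to resting potential
    return u, s
```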
We illustrate our exemplar SNN model in Fig. 1, with max pooling and batch norm layers (in the convolutional layers) omitted for clarity. The model is trained using backpropagation-through-time with arc-tangent surrogate gradients [14, 15].
### _Quantization_
In this work, we only quantify the effects of quantizing synaptic weights, employing standard quantization techniques like clipping and rounding as described in [16]:
\[Q_{u}(x)=\mathrm{clip}\left(\mathrm{round}\left(\frac{x}{s_{\text{in}}}\right),-2^{b-1},2^{b-1}-1\right)\cdot s_{\text{out}}.\]
Here, \(Q_{u}\) is the quantizer function, \(b\) is the number of bits allocated to the quantizer, and \(x\) is the value to be quantized. Additionally, \(s_{\text{in}}\) and \(s_{\text{out}}\) are learnable scale parameters. The scale parameters are initially set to \(3\times\) the standard deviation of the underlying weight distribution added to its mean. In this work, we implement quantization-aware training through gradient scaling [17] to minimize accuracy degradation.
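A minimal sketch of this quantizer is given below; the gradient-scaling estimator used during quantization-aware training is omitted, and the commented scale initialization mirrors the description above.

```python
import torch

def quantize(x, s_in, s_out, b):
    """Uniform b-bit quantizer Q_u with learnable input/output scales."""
    q = torch.clamp(torch.round(x / s_in), -2 ** (b - 1), 2 ** (b - 1) - 1)
    return q * s_out

# Illustrative scale initialization, as described in the text:
# s_init = 3.0 * w.std() + w.mean()
```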
### _Pruning_
We implement global unstructured pruning by enforcing a model-wide sparsity target (\(\omega\)) across all the SNN model parameters (weights). During pruning, we rank the weights by magnitude and prune the \(\omega\%\) lowest.
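A sketch of this global magnitude criterion is shown below; tie-breaking at the threshold is ignored, so the realized sparsity may deviate marginally from \(\omega\).

```python
import torch

def global_magnitude_masks(weights, sparsity):
    """Binary masks zeroing the `sparsity` fraction of smallest-magnitude weights model-wide."""
    flat = torch.cat([w.abs().flatten() for w in weights])
    k = int(sparsity * flat.numel())
    if k == 0:
        return [torch.ones_like(w) for w in weights]
    threshold = flat.kthvalue(k).values     # k-th smallest magnitude
    return [(w.abs() > threshold).float() for w in weights]
```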
### _Energy Evaluation_
We use an Eyeriss-Like architecture [18] illustrated in Figure 2. This accelerator implements a grid of digital processing elements (PEs) with input spikes and membrane partial sums supplied from a shared activation buffer. Each digital core has scratchpad memories for input spikes, weights, neuron state, and spiking threshold. The PEs can leverage multiple sparse representation schemes, incorporating features like clock gating and input read skipping. These sparsity-aware techniques are applied for input spikes, weights, and output spikes. Across the PE grid, inputs can be shared diagonally, weights can be multicasted horizontally, and partial sums can be accumulated vertically. We removed multipliers from the PE and modified the input scratch pads to be only 1-bit wide. This architecture is evaluated for our SNN workload of choice using high-level component-based energy estimation tools [19, 20] calibrated to a commercially available 28 nm CMOS node. We determine spike activity statistics through the training-time sparsity. Tensor-sparsity statistics and calibrated energy costs are used for loop analysis, where loop interchange analysis is employed to search for efficient dataflows [20]. All results enforce causality, preventing temporal loop reordering.
Fig. 1: On the left the schematic SNN architecture proposed by [13] using stacked spiking convolution blocks with \(3\times 3\) weight kernels, 128 output channels and striding of 1. Followed by temporal-channel joint attention (TCJA) blocks and spiking dense blocks for classification. On the right is a simplified flow within a spiking layer: weights are first pruned and then quantized (preprocessing), inputs are multiplied and accumulated with preprocessed weights before being fed into the logic of the leaky integrate and fire neurons, which additionally relies on a stored state (membrane potential).
Fig. 2: Eyeriss-like spiking architecture consisting of a shared activation buffer connected to an array of digital cores. A single digital core has dedicated memory for inputs (IF-Spad), weights (W-Spad), state (Vmem-Spad) and spiking threshold (\(V_{th}\)). The digital cores possess arithmetic components, e.g. an accumulator or a comparator (CMP) for spike generation.
## III Results
We evaluate the pruning and quantization performance of the SNN model [13] on the DVS gesture data set [1] using spike-count for classification. Our baseline model has weights quantized to 8 bits while delivering 97.57% accuracy on DVS gestures. Figure 1 shows the SNN architecture and the location in the computational graph where pruning and quantization operations were applied to the weights. Finetuning experiments started with pre-trained floating point weights and lasted 50 epochs with a linear warm-up and cosine decay learning rate schedule with a peak learning rate of 0.001.
Figure 3 illustrates how different rates of sparsity achieved through pruning and quantization interact with the different compression schemes studied. These include uncompressed bitmask (UBM), uncompressed offset pair (UOP), coordinate payload (CP), and run length encoding (RLE). We see that for highly quantized SNNs, such as those resulting from ternarization, the accelerator incurs lower energy and latency in computing the SNN operations when compared to pruned SNNs that deliver the same accuracy. In part, when ternarizing the weights, there are only three representation levels available, and most weights are quantized to \(0\). This allows ternarization to simultaneously benefit from both low-precision computing and high weight sparsity and model compression. At higher levels of precision, there is insufficient quantization-induced sparsity for the network to benefit from sparse representation (see upper left of Fig. 3). At higher sparsity rates, RLE outperforms other representation formats. RLE and CP formats incur similar overheads and deliver similar performance in energy and energy-delay-product (EDP), with RLE being, on average, 0.3% better on both metrics. The 8b and 6b models for quantization incur similar energy/energy-delay costs, as seen by their clustering in the upper right corner of both plots in Fig. 3. In contrast, 4b, 3b, and ternary models benefit from both sparsity and reduced numerical precision, with ternarization gaining the most.
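As a toy illustration of why high sparsity favours run-length encoding, the sketch below encodes a quantized weight stream as (zero-run, value) pairs; real on-chip formats also carry bit-width and alignment metadata, which we omit.

```python
def rle_encode(values):
    """Run-length encode a mostly-zero weight stream as (zero_run, value) pairs."""
    encoded, zero_run = [], 0
    for v in values:
        if v == 0:
            zero_run += 1
        else:
            encoded.append((zero_run, v))
            zero_run = 0
    if zero_run:                            # trailing zeros
        encoded.append((zero_run, 0))
    return encoded

print(rle_encode([0, 0, 1, 0, 0, 0, -1, 0]))  # [(2, 1), (3, -1), (1, 0)]
```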
To better disentangle the benefits of quantization and quantization-induced sparsity, we examine their variation across multiple quantization levels in Fig. 4. For 8 and 6 bit weights, the model incurs substantial overheads due to
Fig. 4: Highlighting energy overheads and gains for using sparse representation. Quantization-induced sparsity decreases as model precision increases, making sparse representations of the weights unsuitable at higher precisions. We still employ sparse representation schemes for input and output spikes to better leverage activity sparsity in SNNs. We also annotate the quantization-induced weight sparsity levels above the data points.
Fig. 3: Effect of sparsity on energy (top) and energy-delay product (EDP) (bottom). We compute the sparsity (compared to a fully dense model) of our SNN model after quantization-aware and pruning-aware fine-tuning. For pruning-aware training, we ensure that although the weight updates can set more weights to zero, the pruning mask prevents the training from regrowing connections. For this guided pruning, we set the sparsity targets to 75%, 80%, 85%, 90%, 92.5%, 95% and 97.5%. We evaluated weight quantization across 8b, 6b, 4b, 3b, and ternary precision. We measure quantization-induced weight sparsity for these models by detecting how many values are set to 0 after quantization and finetuning. We explore different compression schemes: uncompressed bitmask (UBM), uncompressed offset pair (UOP), coordinate payload (CP), and run length encoding (RLE). For models with little sparsity (e.g. less than 25%) the UOP scheme performs best. However, models with considerable amounts of sparsity achieve better energy and EDP performance with RLE. Note that although RLE and CP nearly overlap each other, RLE is strictly better, albeit by a small margin. The highlighted models are ternarized models and pruned models with an 80% sparsity target, all delivering iso-accuracy, demonstrating a clear advantage for ternary quantization.
the additional metadata associated with the sparse-storage format. We still employ sparse-storage formats for spike activity, with RLE storage performing better due to the extreme sparsity in spike activity. At the higher precision levels, with a crossover point at 4-bit weights, employing RLE for spikes while ignoring any sparsity in weights delivers higher performance. However, for 3-bit and ternary weights, there is a significant improvement to be derived from employing sparse storage formats for the weights too. We additionally show the energy breakdown for data movement across the accelerator, normalized to computing energy, in Fig. 5. The larger energy cost of transferring metadata from DRAM for 8-bit sparse weights results in greater energy incurred for the entire model. Remarkably, ternarization makes significantly greater use of the intermediate memories, which in turn leads to improved energy efficiency.
Although both pruning and quantization try to leverage model robustness to improve energy, they operate along different principles. Quantization leverages model overparameterization to facilitate computations at a lower precision while pruning reduces the redundancy in the model parameters to compress it. Consequently, pruning and quantization can often be at odds, requiring a careful study of their interaction. We study two strategies for pruning and quantizing SNNs: i) _cumulatively_, where the model is first finetuned with pruning, and after half the finetuning epochs, we commence quantization aware training, and ii) _jointly_, where the model is simultaneously pruned and quantized (as shown in Fig. 1).
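The two mixed schedules can be summarized by per-epoch flags, as in the sketch below; the function name and flag representation are ours.

```python
def compression_schedule(epochs, strategy="joint"):
    """Per-epoch (prune, quantize) flags for the two mixed compression schemes."""
    schedule = []
    for epoch in range(epochs):
        if strategy == "joint":             # prune and quantize from the start
            schedule.append((True, True))
        elif strategy == "cumulative":      # quantize after half the finetuning epochs
            schedule.append((True, epoch >= epochs // 2))
        else:
            raise ValueError(strategy)
    return schedule

flags = compression_schedule(50, strategy="cumulative")  # 50 finetuning epochs, as in Sec. III
```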
Figure 6 demonstrates how the different strategies for quantizing and pruning impact the model accuracy and energy-efficiency (top) / energy-delay product (bottom) for our architecture of choice. Although 80% pruning does not yet suffer from accuracy loss, it is 45% costlier than ternary quantization, which can maintain accuracy for this model. If constrained to iso-accuracy, a pure quantization approach outperforms all other alternatives, with the 3-bit quantization with 80% pruning occupying a larger footprint than the ternary network. However, more fine-grained trade-offs between the model accuracy and energy can be achieved across various combined strategies, as shown in the inset of Fig. 6. The mixed pruning and quantization schemes enable operation along the Pareto curve when some loss in accuracy can be tolerated, delivering SNN models suitable for multiple SWAP constraints. Cumulative quantization and pruning maintains accuracy in low-energy regimes; however, the overhead of encoding these models obviates this advantage in terms of energy-delay product.
Fig. 5: A breakdown of normalized energy for different data-movement and operations for three different configurations. First, we see that ternarization incurs minimal DRAM access, which is where significant efficiency is gained. Next, for the configuration with 8-bit sparse models, storing and accessing the additional metadata incurs significant costs. This metadata overhead does not exist for the dense 8-bit model. In each case, we employ RLE sparse storage. Data movement to and from the shared activation buffer incurs the second highest energy costs in our analysis.
Fig. 6: Illustration of the accuracy vs. energy trade-off (top) and the accuracy vs. energy-delay product trade-off (bottom) for the studied model compression techniques. We study the following interactions between pruning and quantization: quantization only, pruning only, cumulative quantized & pruned, and jointly quantized & pruned models. The inset of the figure highlights the points where the models start to degrade from iso-accuracy. Note that all quantized models down to ternary quantization maintain iso-accuracy, while pruned models degrade at higher energy cost or EDP. Mixed models using both quantization and pruning form the Pareto curve once model accuracy degrades below iso-accuracy.
We examine how the layer sparsity statistics change across the best iso-accuracy models resulting from quantization, pruning, cumulative quantization & pruning, and joint quantization & pruning in Figure 7. We observe that the pruning and mixed schemes all have a relatively similar sparsity distribution among layers, with the only difference being that the joint and cumulative approaches can deliver higher sparsity on some layers (e.g., the first TCJA-t). On the other hand, the ternary quantization scheme presents a significantly different sparsity distribution across layers. We propose that the significant energy advantages of ternarization can be attributed to the combination of ternary encoding and the consistently higher sparsity levels achieved for the first layer, the last layer, and the two TCJA layers. The different sparsity levels across the layers of the iso-accuracy models also hint at the commonly observed non-uniform impact of different layer types on energy consumption, e.g., different tensor sizes affecting compiler mappings.
## IV Conclusion
We analyze the energy-accuracy trade-off between quantization and pruning in state-of-the-art spiking neural networks. We also provide a realistic analysis of how quantization and pruning might interact with a baseline digital SNN accelerator. Our results showed that exploiting quantization-induced sparsity, which is particularly beneficial for weight ternarization, can lead to remarkable performance benefits. By carefully employing such aggressive quantization, SNN model accuracy can be maintained while still profiting from both cheaper arithmetic and quantization-induced sparsity, thereby outperforming alternative model compression techniques. Additional fine-grained control over the energy-accuracy trade-off can be achieved by employing hybrid pruning and quantization schemes to deliver multiple models that occupy the accuracy-efficiency frontier.
|
2301.10451 | Knowledge-augmented Graph Neural Networks with Concept-aware Attention
for Adverse Drug Event Detection | Adverse drug events (ADEs) are an important aspect of drug safety. Various
texts such as biomedical literature, drug reviews, and user posts on social
media and medical forums contain a wealth of information about ADEs. Recent
studies have applied word embedding and deep learning -based natural language
processing to automate ADE detection from text. However, they did not explore
incorporating explicit medical knowledge about drugs and adverse reactions or
the corresponding feature learning. This paper adopts the heterogenous text
graph which describes relationships between documents, words and concepts,
augments it with medical knowledge from the Unified Medical Language System,
and proposes a concept-aware attention mechanism which learns features
differently for the different types of nodes in the graph. We further utilize
contextualized embeddings from pretrained language models and convolutional
graph neural networks for effective feature representation and relational
learning. Experiments on four public datasets show that our model achieves
performance competitive to the recent advances and the concept-aware attention
consistently outperforms other attention mechanisms. | Shaoxiong Ji, Ya Gao, Pekka Marttinen | 2023-01-25T08:01:45Z | http://arxiv.org/abs/2301.10451v3 | Knowledge-augmented Graph Neural Networks with Concept-aware Attention for Adverse Drug Event Detection
###### Abstract
Adverse drug events (ADEs) are an important aspect of drug safety. Various texts such as biomedical literature, drug reviews, and user posts on social media and medical forums contain a wealth of information about ADEs. Recent studies have applied word embedding and deep learning -based natural language processing to automate ADE detection from text. However, they did not explore incorporating explicit medical knowledge about drugs and adverse reactions or the corresponding feature learning. This paper adopts the heterogeneous text graph which describes relationships between documents, words and concepts, augments it with medical knowledge from the Unified Medical Language System, and proposes a concept-aware attention mechanism which learns features differently for the different types of nodes in the graph. We further utilize contextualized embeddings from pretrained language models and convolutional graph neural networks for effective feature representation and relational learning. Experiments on four public datasets show that our model achieves performance competitive to the recent advances and the concept-aware attention consistently outperforms other attention mechanisms.
_Keywords--_ Adverse Drug Event Detection, Graph Neural Networks, Knowledge Augmentation, Attention Mechanism
## 1 Introduction
Pharmacovigilance, i.e., drug safety monitoring, is a critical step in drug development (Wise et al., 2009). It detects adverse events and safety issues and promotes drug safety through post-market assessment; therefore, it promotes safe drug development and shows significant promise in better healthcare service delivery. A drug-related negative health outcome is referred to as an Adverse Drug Event (ADE) (Donaldson et al., 2000). Given the significant harm caused by ADEs, it is essential to detect them for pharmacovigilance purposes.
Clinical trials are the common way to detect ADEs. However, some ADEs are hard to investigate through clinical trials due to their long latency (Sultana et al., 2013). Additionally, regular trials cannot cover all aspects of drug use. Through the voluntary Post-marketing Drug Safety Surveillance System (Li et al., 2014), users report their experiences on drug usage and related safety issues. Nevertheless, the system suffers from several limitations, such as incomplete reporting, under-reporting, and delayed reporting.
Recent advances in automated pharmacovigilance are based on collecting large amounts of text about adverse drug events from various platforms, such as medical forums (e.g., AskaPatient), biomedical publications, and social media, and training Natural Language Processing (NLP) models to automatically detect whether a given textual record contains information about adverse drug reactions, which is usually framed as a binary classification task. Text mentions of adverse drug events include a plethora of drug names and adverse reactions. Figure 1 shows an example annotated with concepts from the Unified Medical Language System (UMLS). To understand the drug information and corresponding adverse reactions, the NLP model needs to capture abundant medical knowledge and be able to do relational reasoning.
Early studies used rule-based methods (Xu et al., 2010; Sohn et al., 2014) with manually built rules or applied machine learning algorithms such as conditional random fields (Nikfarjam et al., 2015; Wang et al., 2022), support vector machine (Bollegala et al., 2018), and neural networks (Cocos et al., 2017; Huynh et al., 2016). These approaches can process text with manual feature engineering or enable automated feature learning with
deep learning methods, allowing for automated ADE detection. However, they are limited in capturing rich contextual information and relational reasoning.
Graphs are expressive and can represent various data. For example, nodes in a graph for a collection of texts can represent various entities, such as words, phrases, and documents, while edges represent relationships between them. Such text graphs together with graph neural networks are widely used in NLP applications such as sentiment classification and review rating (Yao et al., 2019; Lin et al., 2021; Zhang et al., 2020). Recently, graphs have been used for text representation with graph boosting (Shen et al., 2020) or contextualized graph embeddings (Gao et al., 2022) for ADE detection. Other works have applied knowledge graph embeddings and link prediction to ADE prediction in drug-effect knowledge-graphs (Kwak et al., 2020; Joshi et al., 2022). However, medical knowledge plays an important role in ADE detection from text, and so far there are no studies that incorporate medical knowledge in a text graph and learn concept-aware representations that inject medical concepts (e.g., the UMLS concepts as illustrated in Figure 1) into the text embeddings.
Our previous model CGEM (Gao et al., 2022) applied a heterogeneous text graph, embodying word, concept, and document relations for ADE corpus, to learn contextualized graph embeddings for ADE detection. Here, we extend this work by showing how the graph can be augmented with medical knowledge from the UMLS metathesaurus (Bodenreider, 2004). In addition, we deploy concept-aware self-attention that applies different feature learning for various types of nodes. We name our model as KnowCAGE (Knowledge-augmented Concept-Aware Graph Embeddings). Our contributions are thus summarized as follows:
* We introduce medical knowledge, i.e., the UMLS metathesaurus, to augment the contextualized graph embedding model for representation learning on drug adverse events.
* A concept-aware self-attention is devised to learn discriminable features for concept (from the medical knowledge), word and document nodes.
* Experimental results evaluated in four public datasets from medical forums, biomedical publications and social media show our approach outperforms recent advanced ADE detection models in most cases.
## 2 Related Work
Recent advances on adverse drug event detection use word embeddings and neural network models to extract text features and capture the drug-effect interaction. Many studies deploy recurrent neural networks to capture the sequential dependency in text. For example, Cocos et al. (2017) utilized a Bidirectional Long Short-Term Memory (BiLSTM) network and Luo (2017) proposed to learn sentence- and segment-level representations based on LSTM. To process entity mentions and relations for ADE detection and extraction, pipeline-based systems (Dandala et al., 2019) and jointly learning methods (Wei et al., 2020) are two typical approaches.
Several recent publications studied graph neural networks for ADE detection. Kwak et al. (2020) built a drug-disease graph to represent clinical data for adverse drug reaction detection. GAR (Shen et al., 2021) uses graph embedding-based methods and adversarial learning. CGEM (Gao et al., 2022) combines contextualized embeddings from pretrained language models with graph convolutional neural networks.
Some other studies also adopted other neural network architectures such as capsule networks and self-attention mechanism. Zhang et al. (2020) proposed the gated iterative capsule network (GICN) using CNN and a capsule network to extract the complete phrase information and deep semantic information. The gated iterative unit in the capsule network enables the clustering of features and captures contextual information. The attention mechanism prioritizes representation learning for the critical parts of a document by assigning
Figure 1: An example of a text mentioning an adverse drug event from Karimi et al. (2015). The recognition of drugs and adverse reactions requires medical knowledge and relational reasoning.
them higher weight scores. Ge et al. (2019) employed multi-head self-attention and Wunnava et al. (2020) developed a dual-attention mechanism with BiLSTM to capture semantic information in the sentence.
Another direction of related work is knowledge augmentation for deep learning models. Many publications adopt knowledge graphs to guide the representation learning in various applications (Ji et al., 2022). For example, Ma et al. (2018) injected commonsense knowledge into a long short-term memory network and Liang et al. (2022) enhanced the graph convolutional network with affective knowledge to improve aspect-based sentiment analysis. Knowledge injection is also used for other applications such as hate speech detection (Pamungkas et al., 2021), mental healthcare (Yang et al., 2022), and personality detection (Poria et al., 2013).
## 3 Methods
This section introduces the proposed graph-based model with knowledge augmentation, i.e., Knowledge-augmented Concept-Aware Graph Embeddings (KnowCAGE), as illustrated in Figure 2. The model consists of four components: 1) Knowledge-augmented Graph Construction, 2) Heterogeneous Graph Convolution, 3) Concept-aware Attention, and 4) Ensemble-based ADE classification layers. Following TextGCN (Yao et al., 2019), we construct a heterogeneous text graph, which contains three types of nodes: words, documents and concepts, and we augment it with medical knowledge from the Unified Medical Language System metathesaurus. A heterogeneous graph convolutional network is then used to encode the text graph and learn rich representations. We use the contextualized embeddings from pretrained BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) to represent the node features in the heterogeneous text graph. The adjacency matrix and feature matrix obtained from the embedding layers are inputs to graph neural network encoders which take into account the relationships and information between and within the nodes. Considering the different types of nodes, we use a concept-aware self-attention, inspired by entity-aware representation learning (Yamada et al., 2020), which treats the different types of nodes differently, allowing the most significant content to have the largest contribution to the final prediction. To boost the prediction of ADE even further, we follow the BertGCN model (Lin et al., 2021) and apply an ensemble classifier with contextualized embeddings on one hand and the graph networks on the other, and learn a weight coefficient to balance these two prediction branches.
### Knowledge-augmented Graph Construction
We first build the heterogeneous text graph for the whole document collection by using the external knowledge source - UMLS - to augment the word/document graph with concept information. Representing text in a heterogeneous graph can provide different perspectives for text encoding and improve ADE detection. In the UMLS metathesaurus, different words or phrases are assigned different Concept Unique Identifiers (CUIs), where each CUI represents one concept class. Every concept class has an attribute "preferred name", which is a short description or a synonym of this concept. Our model uses UMLS to obtain the "preferred name" of each word in the dataset and adds the words in the "preferred name" to the graph as the concept nodes. Therefore, the augmented graph also contains concept nodes in addition to the word and document nodes. The number
Figure 2: An illustration of the model architecture with knowledge-augmented graph embeddings and concept-aware representations
of total nodes \(n=n_{d}+n_{w}+n_{c}\), where \(n_{d}\), \(n_{w}\) and \(n_{c}\) are the numbers of documents, words and concepts, respectively. There are five types of edges, i.e., word-word, word-concept, document-concept, concept-concept and document-word edges. The weights of document-word edges and document-concept edges are calculated as the term frequency-inverse document frequency (TF-IDF), while the weights of the other edges are defined as the positive point-wise mutual information (PMI) of the respective words or concepts. Specifically, the weight between the node \(i\) and the node \(j\) is computed as:
\[\mathbf{A}_{ij}=\begin{cases}\text{PMI}(i,j),&\text{PMI}(i,j)>0;\ i,j\text{: word/concept}\\ \text{TF-IDF}_{ij},&i\text{: document},\ j\text{: word/concept}\\ 0,&\text{otherwise}\end{cases}\]
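A rough sketch of how these weights could be computed is given below; the sliding-window size used for the PMI co-occurrence counts is our assumption (the text does not specify it), following TextGCN-style practice.

```python
import math
from collections import Counter
from itertools import combinations

def edge_weights(docs, window=10):
    """TF-IDF for document-word/concept edges; positive PMI for the remaining edges."""
    n_docs = len(docs)
    df = Counter(w for d in docs for w in set(d))             # document frequency
    tf_idf = {}
    for i, d in enumerate(docs):
        for w, c in Counter(d).items():
            tf_idf[(i, w)] = (c / len(d)) * math.log(n_docs / df[w])

    n_win, cnt1, cnt2 = 0, Counter(), Counter()               # sliding-window counts
    for d in docs:
        for s in range(max(1, len(d) - window + 1)):
            win = set(d[s:s + window])
            n_win += 1
            cnt1.update(win)
            cnt2.update(combinations(sorted(win), 2))
    ppmi = {}
    for (w1, w2), c in cnt2.items():
        val = math.log(c * n_win / (cnt1[w1] * cnt1[w2]))     # PMI(w1, w2)
        if val > 0:                                           # keep positive PMI only
            ppmi[(w1, w2)] = val
    return tf_idf, ppmi
```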
We use pretrained contextualized embeddings from language models. Given the dimension of embeddings denoted as \(d\), the pooled output of the contextualized document encoding is denoted as \(\mathbf{H}_{doc}\in\mathbb{R}^{n_{d}\times d}\). We initialize word and concept nodes with a zero matrix to get the initial feature matrix, which is used as input to the graph neural network:
\[\mathbf{H}^{[0]}=\left(\begin{array}{c}\mathbf{H}_{doc}\\ \mathbf{0}\end{array}\right), \tag{1}\]
where \(\mathbf{H}^{[0]}\in\mathbb{R}^{(n_{d}+n_{w}+n_{c})\times d}\) and \([0]\) denotes the initial layer.
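A sketch of this assembly (the helper name is ours):

```python
import torch

def initial_features(h_doc, n_w, n_c):
    """Stack BERT pooled document embeddings over zero word/concept rows (Eq. 1)."""
    n_d, d = h_doc.shape
    return torch.cat([h_doc, torch.zeros(n_w + n_c, d)], dim=0)  # (n_d+n_w+n_c, d)
```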
### Heterogeneous Graph Convolution
We adopt graph neural networks over the heterogeneous text graph to learn complex relations between words, concepts, and documents. Specifically, given the initial input features \(\mathbf{H}^{[0]}\) obtained from pretrained language models and the adjacency matrix \(\mathbf{A}\), we update the representations via graph convolution. A forward pass of the \(i\)-th layer of a convolutional graph network can be denoted as:
\[\mathbf{H}^{[i+1]}=f\left(\mathbf{\hat{A}}\mathbf{H}^{[i]}\mathbf{W}^{[i]} \right), \tag{2}\]
where \(\mathbf{\hat{A}}\) is the normalized adjacency matrix, \(\mathbf{H}^{[i]}\) are the hidden representations of \(i\)-th layer, \(\mathbf{W}^{[i]}\) is the weight matrix, and \(f(\cdot)\) is an activation function. The KnowCAGE framework can adopt various types of convolutional graph neural networks. Our experimental study chooses three representative models, i.e., Graph Convolutional Network (GCN) (Kipf and Welling, 2017), Graph Attention Network (GAT) (Velickovic et al., 2018), and Deep Graph Convolutional Neural Network (DGCNN) (Zhang et al., 2018). GCN is a spectral-based model with a fixed number of layers where different weights are assigned to layers and the update of node features incorporates information from the node's neighbors. It employs convolutional architectures to get a localized first-order representation. Graph attention layers in GAT assign different attention scores to one node's distant neighbors and prioritize the importance of different types of nodes. DGCNN concatenates hidden representations of each layers to capture rich substructure information and adopt a SortPooling layer to sort the node features.
### Concept-aware Attention Mechanism
Different types of nodes have various impacts on the prediction of adverse drug events. Inspired by the contextualized entity representation learning from the knowledge supervision of knowledge bases (Yamada et al., 2020), we propose to use a concept-aware attention mechanism that distinguishes the types of nodes, especially the concept nodes, and better captures important information related to the positive or negative ADE classes.
Two types of nodes may not have the same impact on each other. Thus, we use different transformations for different types of nodes in the concept-aware attention mechanism in order to learn concept-aware attentive representations. We obtain key and value matrices \(\mathbf{K}\in\mathbb{R}^{l\times d_{h}}\) and \(\mathbf{V}\in\mathbb{R}^{l\times d_{h}}\) similarly to the key and value in the self-attention of transformer network (Vaswani et al., 2017). Concept-aware attention has nine different query matrices \(\mathbf{Q}\) for concept nodes \(c\), word nodes \(w\) and document nodes \(d\), i.e., \(\mathbf{Q}_{ww}\), \(\mathbf{Q}_{cc}\), \(\mathbf{Q}_{dd}\), \(\mathbf{Q}_{cw}\), \(\mathbf{Q}_{wc}\), \(\mathbf{Q}_{wd}\), \(\mathbf{Q}_{dw}\), \(\mathbf{Q}_{dc}\), and \(\mathbf{Q}_{cd}\in\mathbb{R}^{l\times d_{h}}\). Then, we use \(\mathbf{Q}\), \(\mathbf{K}\) and \(\mathbf{V}\) to compute the attention scores. For example, for \(i\)-th document and \(j\)-th concept nodes, i.e., \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\in\mathbb{R}^{d_{h}}\), we calculate the attention score as:
\[\alpha_{ij}=\text{Softmax}\left(\frac{(\mathbf{K}\mathbf{x}_{j})^{\top}\mathbf{ Q}_{cd}\mathbf{x}_{i}}{\sqrt{l}}\right) \tag{3}\]
The concept-aware representation \(\mathbf{h}_{i}\in\mathbb{R}^{l}\) for the \(i\)-th document is obtained as:
\[\mathbf{h}_{i}=\sum_{j=1}^{n_{c}}\alpha_{ij}\mathbf{V}\mathbf{x}_{j} \tag{4}\]
We can obtain the representations of word and concept nodes in the same way. These concept-aware representations are fed to the graph network as the node features in the next iteration of model updating.
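A vectorized sketch of Eqs. (3)-(4) for the document-concept pair is shown below; normalizing the softmax over the concept index is our reading of Eq. (3), and \(\mathbf{Q}_{cd}\) is one of the nine type-pair-specific query matrices.

```python
import torch

def concept_aware_attention(x_doc, x_con, q_cd, k, v):
    """x_doc: (n_d, d_h), x_con: (n_c, d_h); q_cd, k, v: (l, d_h)."""
    l = k.size(0)
    logits = (x_doc @ q_cd.T) @ (x_con @ k.T).T / l ** 0.5  # (n_d, n_c)
    alpha = torch.softmax(logits, dim=1)                    # Eq. (3), over concepts
    return alpha @ (x_con @ v.T)                            # Eq. (4): (n_d, l)
```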
### Classification Layers and Model Training
We apply two linear layers and a softmax function over the concept-aware document embeddings \(\mathbf{h}_{i}\) to compute the class probabilities \(\mathbf{p}_{g}\), representing the presence or absence of mentions of ADEs in the document. Besides, the interpolation of the prediction probabilities of two classifiers is adopted to combine the prediction of graph-based modules and pretrained language model-based predictions (Lin et al., 2021). We use a similar classification module to process the contextualized embeddings from the pretrained language model (the upper branch in Fig. 2) and denote the corresponding classification probabilities by \(\mathbf{p}_{c}\). A weight coefficient \(\lambda\in[0,1)\) is introduced to balance the results from the graph-based encoding and the contextualized model:
\[\mathbf{p}=\lambda\mathbf{p}_{g}+(1-\lambda)\mathbf{p}_{c}. \tag{5}\]
This interpolation strategy can also be viewed as a weighted ensemble of two classifiers.
ADE detection is a binary classification task, and the classes are highly imbalanced in most datasets. To complicate the matter further, most datasets contain only a small number of samples, making downsampling to balance the classes inappropriate. This study applies a weighted binary cross-entropy loss function to alleviate this problem. The weighted loss function is denoted as:
\[\mathcal{L}=\sum_{i=1}^{N}[-w_{+}y_{i}\log(p_{i})-w_{-}(1-y_{i})\log(1-p_{i})], \tag{6}\]
where \(w_{+}=\frac{N_{1}}{N_{0}+N_{1}}\) and \(w_{-}=\frac{N_{0}}{N_{0}+N_{1}}\) are weights of documents predicted as positive or negative samples respectively, \(N_{0}\) and \(N_{1}\) are the numbers of negative/positive samples in the training set, and \(y_{i}\) is the ground-truth label of a document. The Adam optimizer (Kingma and Ba, 2015) is used for model optimization.
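Eqs. (5)-(6) combine into a few lines; below is a sketch with per-sample positive-class probabilities, where the epsilon for numerical stability is our addition.

```python
import torch

def ensemble_weighted_bce(p_g, p_c, y, n_pos, n_neg, lam=0.5, eps=1e-8):
    """Interpolate the two classifiers (Eq. 5), then apply weighted BCE (Eq. 6)."""
    p = lam * p_g + (1 - lam) * p_c
    w_pos = n_pos / (n_pos + n_neg)         # class weights as defined in the text
    w_neg = n_neg / (n_pos + n_neg)
    return (-w_pos * y * torch.log(p + eps)
            - w_neg * (1 - y) * torch.log(1 - p + eps)).sum()
```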
## 4 Experimental Setup
Our goal is to conduct experiments on four ADE datasets and answer the following research questions.
**RQ1:**: How does the proposed model perform in ADE detection on texts from various sources, compared to other methods?
**RQ2:**: How does the heterogeneous graph convolution with knowledge augmentation improve the accuracy of ADE detection?
**RQ3:**: Does the concept-aware attention improve the accuracy of the heterogeneous graph convolution to detect ADE?
**RQ4:**: What is the impact of pretraining domains and contextualized language representation on the performance of the method in ADE election?
In this section we will describe the setup of the experiments and in the next section we will present the results of the experiments.
### Data and Preprocessing
We used four datasets from medical forums, biomedical publications and social media, as summarized in Table 1, for evaluation. We preprocess the data by removing stop words, punctuation, and numbers. For the data collected from Twitter, we use the tweet-preprocessor Python package 1 to remove URLs, emojis, and some reserved words for tweets.
Footnote 1: [https://pypi.org/project/tweet-preprocessor/](https://pypi.org/project/tweet-preprocessor/)
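For instance, a minimal cleaning step with this package might look like the following; the exact option set is our assumption of the operations described above.

```python
import preprocessor as p  # pip install tweet-preprocessor

p.set_options(p.OPT.URL, p.OPT.EMOJI, p.OPT.RESERVED)  # strip URLs, emojis, RT/FAV
print(p.clean("RT Felt dizzy after the new med http://t.co/xyz"))
```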
**TwiMed (TwiMed-Twitter and TwiMed-Pub) 2** The TwiMed dataset (Alvaro et al., 2017) includes two sets collected from different domains, i.e., TwiMed-Twitter from social media and TwiMed-Pub for biomedical publications. In each document, people from various backgrounds annotate diseases, symptoms, drugs, and their
\begin{table}
\begin{tabular}{l c c c} \hline \hline Dataset & Documents & ADE & non-ADE \\ \hline SMM4H & 2,418 & 1,209 & 1,209 \\ TwiMed-Pub & 1,000 & 191 & 809 \\ TwiMed-Twitter & 625 & 232 & 393 \\ CADEC & 7,474 & 2,478 & 4,996 \\ \hline \hline \end{tabular}
\end{table}
Table 1: A statistical summary of datasets
relationships. A document annotated as outcome-negative is regarded as an adverse drug event. Models are tested using 10-fold cross-validation.
**SMM4H**3 This dataset from Social Media Mining for Health Applications (#SMM4H) shared tasks (Magge et al., 2021) is collected from Twitter with a description of drugs and diseases. We use the official validation set to evaluate the model performance for a fair comparison with baseline models developed in the SMM4H shared task.
Footnote 3: [https://healthlanguageprocessing.org/smm4h-2021/task-1/](https://healthlanguageprocessing.org/smm4h-2021/task-1/)
**CADEC**4 The CSIRO Adverse Drug Event Corpus contains patient-reported posts from a medical forum called AskaPatient (Karimi et al., 2015). It includes extensive annotations on drugs, side effects, symptoms, and diseases. We use 10-fold cross-validation to evaluate the model's performance.
Footnote 4: [https://data.csiro.au/collection/csiro:10948](https://data.csiro.au/collection/csiro:10948)
### Baselines and Evaluation
We compare the performance of our method with two sets of baseline models: 1) models designed for ADE detection and 2) pretrained contextualized models, and report Precision (P), Recall (R), and F1-score.
Customized models for ADE detection are as follows. **CNN-Transfer** (Li et al., 2020) (CNN-T for short) used a convolutional neural network (CNN) for feature extraction and exploited adversarial transfer learning to boost the performance. **HTR-MSA** (Wu et al., 2018) adopted CNN and Bidirectional Long Short-Term Memory (BiLSTM) networks to learn text representations, and learned hierarchical text representation for tweets. It also employed multi-head self-attention. **ATL** (Li et al., 2020) applied adversarial transfer learning to ADE detection with corpus-shared features exploited. **MSAM** (Zhang et al., 2019) used the BiLSTM network to learn semantic representations of sentences and the multi-hop self-attention mechanism to boost the classification performance. **IAN** (Alimova and Solovyev, 2018) interactively learned attention representations through separate modeling on targets and context. **ANNSA** (Zhang et al., 2021) proposed a sentiment-aware attention mechanism to obtain word-level sentiment features and used adversarial training to improve the generalization ability. **CGEM** (Gao et al., 2022), a predecessor of our work, developed a contextualized graph-based model that utilizes contextualized language models and graph neural networks, and also devised an attention classifier to improve the performance.
The previously mentioned ADE detection baselines did not use the SMM4H dataset in their experiments. Therefore, we compare our model with pretrained language models. We use the base version of pretrained models for a fair comparison. Yaseen and Langer (2021) combined the LSTM network with the BERT text encoder (Devlin et al., 2019) for ADE detection. We denote it as BERT-LSTM. Pimpalkhute et al. (2021) introduced a data augmentation method and adopted the RoBERTa text encoder with additional classification layers (Liu et al., 2019) for ADE detection, denoted as RoBERTa-aug. Kayastha et al. (2021) utilized the domain-specific BERTweet (Nguyen et al., 2020) that is pretrained with English Tweets using the same architecture as BERT-base and classified ADE with a single-layer BiLSTM network, denoted as BERTweet-LSTM.
### Hyper-parameters
Table 2 shows the hyper-parameters we tuned in our experiments, where LR is the learning rate. When the number of iterations exceeds a certain threshold, the learning rate scheduler decays the learning rate by the parameter \(\gamma\). In our experiment, we set \(\gamma\) and the iteration milestone to 0.1 and 30, respectively.
## 5 Results
### Comparison with Baselines in Different Domains (RQ1)
We compare our model's predictive performance with baseline models on the TwiMed (Table 3), SMM4H (Table 4) and CADEC (Table 5) datasets. The results of the baselines are taken directly from the original papers.
\begin{table}
\begin{tabular}{l c} \hline \hline Hyper-parameters & Choices \\ \hline LR for text encoder & \(2e^{-5}\), \(3e^{-5}\), \(1e^{-4}\) \\ LR for classifier & \(1e^{-4}\), \(5e^{-4}\), \(1e^{-3}\) \\ LR for graph-based models & \(1e^{-3}\), \(3e^{-3}\), \(5e^{-3}\) \\ Hidden dimension for GNN & 200, 300, 400 \\ Weight coefficient \(\lambda\) & 0, 0.1 0.3, 0.5, 0.7, 0.9 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Choices of hyper-parameters
However, some of the baselines did not conduct experiments on all four datasets. Our proposed model outperforms the baseline models in most cases, demonstrating its effectiveness for ADE detection on texts from various domains (RQ1). Our model better balances precision and recall, leading to higher F1 scores. Table 5 shows that our model consistently outperforms the baselines, indicating that it captures rich features for identifying documents that contain ADEs.
### Usefulness of the Knowledge Augmented Graph Convolution (RQ2)
Here we investigate in more detail how the heterogeneous graph convolution with knowledge augmentation helps with ADE detection (RQ2). In Table 3, most models, such as HTR-MSA, IAN, CNN-T and ATL, perform worse on the TwiMed-Twitter dataset, showing that it is difficult to process informal tweets with colloquial language. However, the graph-based encoder in our model effectively encodes information from the informal text, better capturing the relationships between different entities and improving performance in most cases. Table 4 compares our model with several pretrained BERT-based models. Our model
\begin{table}
\begin{tabular}{l c c c} \hline \hline Models & P (\%) & R (\%) & F1 (\%) \\ \hline BERTweet-LSTM (Kayastha et al., 2021) & 81.2 & 86.2 & 83.6 \\ RoBERTa-aug (Pimpalkhute et al., 2021) & 82.1 & 85.7 & 84.3 \\ BERT-LSTM (Yaseen and Langer, 2021) & 77.0 & 72.0 & 74.0 \\ CGEM (Gao et al., 2022) & **86.7** & 93.4 & 89.9 \\ \hline KnowCAGE (GCN) & 86.6 & 93.1 & 89.7 \\ KnowCAGE (GAT) & 85.2 & **96.8** & 90.6 \\ KnowCAGE (DGCNN) & 86.6 & 95.9 & **91.0** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of SMM4H dataset. Scores are reported for the best performing results, which follows the setup of baselines. The results of baselines are from the corresponding publications. **Bold** text indicates the best performance.
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{3}{c}{TwiMed-Pub} & \multicolumn{3}{c}{TwiMed-Twitter} \\ & P (\%) & R (\%) & F1 (\%) & P (\%) & R (\%) & F1 (\%) \\ \hline HTR-MSA (Wu et al., 2018) & 75.0 & 66.0 & 70.2 & 60.7 & 61.7 & 61.2 \\ IAN (Alimova and Solovyev, 2018) & 87.8 & 73.8 & 79.2 & 83.6 & 81.3 & 82.4 \\ CNN-T (Li et al., 2020) & 81.3 & 63.9 & 71.6 & 61.8 & 60.0 & 60.9 \\ MSAM (Zhang et al., 2019) & 85.8 & 85.2 & 85.3 & 74.8 & **85.6** & 79.9 \\ ATL (Li et al., 2020) & 81.5 & 67.0 & 73.4 & 63.7 & 63.4 & 63.5 \\ CGEM (Gao et al., 2022) & 88.4 & 85.0 & 86.7 & 84.2 & 83.7 & 83.9 \\ \hline KnowCAGE (GCN) & 88.8 & **85.8** & **87.3** & 84.1 & 84.0 & 84.0 \\ KnowCAGE (GAT) & **89.6** & 83.4 & 86.4 & **84.8** & 84.1 & **84.4** \\ KnowCAGE (DGCNN) & 88.7 & 83.7 & 86.1 & 83.5 & 84.1 & 83.8 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results for two TwiMed datasets, i.e., TwiMed-Pub and TwiMed-Twitter. Scores are reported with the mean of 10-fold cross validation following the setup of baselines. The results of baselines are from the corresponding publications. **Bold** text indicates the best performance.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Models & P (\%) & R (\%) & F1 (\%) \\ \hline HTR-MSA (Wu et al., 2018) & 81.8 & 77.6 & 79.7 \\ CNN-T (Li et al., 2020) & 84.8 & 79.4 & 82.0 \\ ATL (Li et al., 2020) & 84.3 & 81.3 & 82.8 \\ ANNSA (Zhang et al., 2021) & 82.7 & 83.5 & 83.1 \\ \hline KnowCAGE (GCN) & 86.2 & 90.2 & 88.2 \\ KnowCAGE (GAT) & 83.9 & 92.0 & 87.8 \\ KnowCAGE (DGCNN) & **86.1** & **92.9** & **89.4** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results for CADEC dataset. Scores are reported with the mean of 10-fold cross validation following the setup of baselines. The results of baselines are from the corresponding publications. **Bold** text indicates the best performance.
differs from the pretrained models by additionally employing GNN architectures on top of the pretrained embeddings, and the results suggest the GNN can further improve model performance on this task. Compared with another graph-based model, CGEM, our method applies knowledge augmentation to incorporate concept information into the graph learning, which also improves the performance of ADE detection in most cases.
Finally, we examine three graph architectures to study which one is most suitable for the ADE detection task. For datasets containing more training samples (i.e., the SMM4H and CADEC datasets), DGCNN performs better. When the number of training samples is small, GCN and GAT achieve better performance. Hence, we conclude that the graph-based encoding method improves the performance. However, we also notice that none of the examined graph architectures consistently outperforms the others on all three datasets from different domains.
### Effectiveness of the Concept-Aware Attention (RQ3)
We examine the effectiveness of the concept-aware attention by comparing it with two other attention mechanisms, i.e., a simple dot-product attention (Gao et al., 2022) and structured self-attention (Lin et al., 2017). Table 6 shows that the concept-aware attention consistently achieves the best F1 score on the four datasets. The concept-aware attention distinguishes the different types of nodes in the heterogeneous graph and makes the overall model better utilize the knowledge augmentation from the UMLS, which answers the third research question (RQ3).
### Effect of Pretraining Domains (RQ4)
We use four pretrained contextualized language models to obtain node embeddings. They use the BERT base model architecture but are pretrained with different strategies or corpora collected from different domains. The pretrained language models include: (1) RoBERTa (Liu et al., 2019), a BERT model optimized with more robust approaches; (2) BioBERT (Lee et al., 2020), a domain-specific model trained with biomedical corpora including PubMed abstracts and PubMed Central full-text articles; (3) ClinicalBERT (Alsentzer et al., 2019), a domain-specific model trained on clinical notes from the MIMIC-III database (Johnson et al., 2016); and (4) PubMedBERT (Gu et al., 2021), a domain-specific model trained from scratch using abstracts from the PubMed biomedical literature. Figure 3 shows that RoBERTa performs slightly worse than the other models on the TwiMed-Pub dataset. For the other three datasets, RoBERTa performs better than its counterparts. One explanation is the discrepancy between the different subdomains: SMM4H, TwiMed-Twitter and CADEC, collected from social media and forums, contain more informal social text and non-medical terms, while ClinicalBERT, BioBERT and PubMedBERT are pretrained on clinical notes or biomedical articles. Hence, we conclude that the choice of a specific pretrained model is critical for accuracy in the ADE detection task. Our findings show that domain-specific pretraining can improve the performance to some extent, and that RoBERTa can be the first choice when processing informal text, which is the first answer to RQ4.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Datasets} & \multicolumn{3}{c}{dot-product attention} & \multicolumn{3}{c}{structured attention} & \multicolumn{3}{c}{concept-aware attention} \\ & P (\%) & R (\%) & F1 (\%) & P (\%) & R (\%) & F1 (\%) & P (\%) & R (\%) & F1 (\%) \\ \hline SMM4H & 83.4 & 97.8 & 90.0 & 84.4 & 94.9 & 89.3 & 86.6 & 95.9 & **91.0** \\ TwiMed-Pub & 87.9 & 84.5 & 86.2 & 88.9 & 82.9 & 85.8 & 88.8 & 85.8 & **87.3** \\ TwiMed-Twitter & 84.5 & 82.2 & 83.4 & 83.0 & 81.8 & 82.4 & 84.8 & 84.1 & **84.4** \\ CADEC & 83.8 & 91.8 & 87.6 & 83.5 & 89.3 & 86.3 & 86.1 & 92.9 & **89.4** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison on the choices of attention mechanisms
Figure 3: The effect of contextualized text embeddings pretrained from different domains and with different pretraining strategies.
### Effect of Weight Coefficient (RQ4)
This section studies the weight coefficient \(\lambda\) that balances the pretrained embedding-based and GNN-based classifiers and answers how the contextualized language representation affects the performance of ADE detection (RQ4). When \(\lambda\) is zero, only the pretrained embedding-based classifier is used. Figure 4 shows that the F1 score first increases and then drops to some extent as \(\lambda\) grows. In terms of F1 score, the best choices of \(\lambda\) for the four datasets are 0.5 (SMM4H), 0.7 (TwiMed-Pub), 0.7 (TwiMed-Twitter) and 0.3 (CADEC). This study reveals that combining pretrained embeddings with graph learning on the heterogeneous graph boosts the performance, which is the second answer to RQ4.
## 6 Conclusion
The automated detection of adverse drug events from social media content or biomedical literature requires a model that can efficiently encode text and capture the relation between a drug and its adverse effect. This paper utilizes knowledge-augmented contextualized graph embeddings to learn contextual information and capture relations for ADE detection. We equip different graph convolutional networks with pretrained language representations over the knowledge-augmented heterogeneous text graph and develop a concept-aware attention to optimally process the different types of nodes in the graph. Experimental comparisons with baseline methods show that graph-based embeddings incorporating concept information from the UMLS can inject medical knowledge into the model and that the concept-aware attention can learn richer concept-aware representations, leading to better detection performance.
## Acknowledgements
We acknowledge the computational resources provided by the Aalto Science-IT project and CSC - IT Center for Science, Finland. This work was supported by the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence FCAI, and grants 315896, 336033) and EU H2020 (grant 101016775). We thank Volker Tresp, Zhen Han, Ruotong Liao and Zhiliang Wu for valuable discussions.
|
2302.01018 | Graph Neural Networks for temporal graphs: State of the art, open
challenges, and opportunities | Graph Neural Networks (GNNs) have become the leading paradigm for learning on
(static) graph-structured data. However, many real-world systems are dynamic in
nature, since the graph and node/edge attributes change over time. In recent
years, GNN-based models for temporal graphs have emerged as a promising area of
research to extend the capabilities of GNNs. In this work, we provide the first
comprehensive overview of the current state-of-the-art of temporal GNN,
introducing a rigorous formalization of learning settings and tasks and a novel
taxonomy categorizing existing approaches in terms of how the temporal aspect
is represented and processed. We conclude the survey with a discussion of the
most relevant open challenges for the field, from both research and application
perspectives. | Antonio Longa, Veronica Lachi, Gabriele Santin, Monica Bianchini, Bruno Lepri, Pietro Lio, Franco Scarselli, Andrea Passerini | 2023-02-02T11:12:51Z | http://arxiv.org/abs/2302.01018v4 | # Graph Neural Networks for temporal graphs: State of the art, open challenges, and opportunities
###### Abstract
Graph Neural Networks (GNNs) have become the leading paradigm for learning on (static) graph-structured data. However, many real-world systems are dynamic in nature, since the graph and node/edge attributes change over time. In recent years, GNN-based models for temporal graphs have emerged as a promising area of research to extend the capabilities of GNNs. In this work, we provide the first comprehensive overview of the current state-of-the-art of temporal GNN, introducing a rigorous formalization of learning settings and tasks and a novel taxonomy categorizing existing approaches in terms of how the temporal aspect is represented and processed. We conclude the survey with a discussion of the most relevant open challenges for the field, from both research and application perspectives.
## 1 Introduction
The ability to process temporal graphs is becoming increasingly important in a variety of fields such as recommendation systems [48], social network analysis [8], transportation systems [56], face-to-face interactions [25], epidemic modeling and contact tracing [5], and many others. Traditional graph-based models are not well suited for analyzing temporal graphs as they assume a fixed structure and are unable to capture its temporal evolution. Therefore, in the last few years, several models able to directly encode temporal graphs have been developed, such as random walk-based methods [46], temporal motif-based methods [24], matrix factorization-based approaches [1] and deep learning models [30].
In the realm of static graph processing, Graph Neural Networks (GNNs) [40] have progressively emerged as the leading paradigm, thanks to their ability to efficiently propagate information along the graph and learn rich node and graph representations. GNNs have achieved state-of-the-art performance on various tasks such as node classification [17], link prediction [57], and graph classification [51], making them the go-to method. Overall, the success of GNNs highlights the importance of developing deep learning techniques to handle non-Euclidean data, and in particular the potential of these architectures to revolutionize the way we analyze and understand complex graph-structured systems.
Recently, GNNs have been successfully applied also
to temporal graphs, achieving state-of-the-art results on tasks such as temporal link prediction [38], node classification [36] and edge classification [45], with approaches ranging from attention-based methods [50] to Variational Graph-Autoencoder (VGAE) [16]. However, despite the potential of GNN-based models for temporal graph processing and the variety of different approaches that emerged, a systematization of the literature is still missing. Existing surveys either discuss general techniques for learning over temporal graphs, only briefly mentioning temporal extensions of GNNs [20, 2, 52, 49], or focus on specific topics, like temporal link prediction [33, 41] or temporal graph generation [15].
This work aims to fill this gap by providing a systematization of existing GNN-based methods for temporal graphs, or Temporal GNNs (TGNNs), and a formalization of the tasks being addressed.
Our main contributions are the following:
* We propose a coherent formalization of the different learning settings and of the tasks that can be performed on temporal graphs, unifying existing formalism and informal definitions that are scattered in the literature, and highlighting substantial gaps in what is currently being tackled;
* We organize existing TGNN works into a comprehensive taxonomy that groups methods according to the way in which time is represented and the mechanism with which it is taken into account;
* We highlight the limitations of current TGNN methods, discuss open challenges that deserve further investigation and present critical real-world applications where TGNNs could provide substantial gains.
## 2 Temporal Graphs
We provide a formal definition of the different types of graphs analyzed in this work and we structure different existing notions in a common framework.
**Definition 1** (Static Graph - SG): A Static Graph is a tuple \(G=(V,E,X^{V},X^{E})\), where \(V\) is the set of nodes, \(E\subseteq V\times V\) is the set of edges, and \(X^{V},X^{E}\) are \(d_{V}\)-dimensional node features and \(d_{E}\)-dimensional edge features.
Node and edge features may be empty. Moreover, in the following, we assume that all graphs are directed, i.e., \((u,v)\in E\) does not imply that \((v,u)\in E\).
Extending [33], we define Temporal Graphs as follows.
**Definition 2** (Temporal Graph - TG): A Temporal Graph is a tuple \(G_{T}=(V,E,V_{T},E_{T})\), where \(V\) and \(E\) are, respectively, the set of all possible nodes and edges appearing in a graph at any time, while
\[V_{T} :=\{(v,x^{v},t_{s},t_{e}):v\in V,x^{v}\in\mathbb{R}^{d_{V}},t_{s} \leq t_{e}\},\] \[E_{T} :=\{(e,x^{e},t_{s},t_{e}):e\in E,x^{e}\in\mathbb{R}^{d_{E}},t_{s} \leq t_{e}\},\]
are the temporal nodes and edges, with time-dependent features and initial and final timestamps. A set of temporal graphs is denoted as \(\mathcal{G_{T}}\).
Observe that we implicitly assume that the existence of a temporal edge in \(E_{T}\) requires the simultaneous existence of the corresponding temporal nodes in \(V_{T}\). Moreover, the definition implies that node and edge features are constant inside each interval \([t_{s},t_{e}]\), but may otherwise change over time. Since the same node or edge may be listed multiple times, with different timestamps, we denote as \(\bar{t}_{s}(v)=\min\{t_{s}:(v,x^{v},t_{s},t_{e})\in V_{T}\}\) and \(\bar{t}_{e}(v)=\max\{t_{e}:(v,x^{v},t_{s},t_{e})\in V_{T}\}\) the time of first and last appearance of a node, and similarly for \(\bar{t}_{s}(e)\), \(\bar{t}_{e}(e)\), \(e\in E\). Moreover, we set \(T_{s}(G_{T}):=\min\{\bar{t}_{s}(v):v\in V\}\), \(T_{e}(G_{T}):=\max\{\bar{t}_{e}(v):v\in V\}\) as the initial and final timestamps in a TG \(G_{T}\). For two TGs \(G_{T}^{i}:=(V^{i},E^{i},V_{T}^{i},E_{T}^{i})\), \(i=1,2\), we write \(G_{T}^{1}\subseteq_{V}G_{T}^{2}\) to indicate the topological inclusion \(V^{1}\subseteq V^{2}\), while no relation between the corresponding timestamps is required.
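To make the formalism concrete, the following is a minimal Python sketch of a TG container following Definition 2, together with the derived quantities \(\bar{t}_{s}\), \(\bar{t}_{e}\), \(T_{s}\) and \(T_{e}\); the class and data layout are our own illustration, not taken from any surveyed work.

```python
from dataclasses import dataclass, field

@dataclass
class TemporalGraph:
    """Definition 2: temporal nodes/edges are tuples (item, features, t_s, t_e)."""
    V_T: list = field(default_factory=list)  # (node, x_v, t_s, t_e)
    E_T: list = field(default_factory=list)  # ((u, v), x_e, t_s, t_e)

    def first_appearance(self, v):
        # \bar{t}_s(v): earliest t_s over all temporal copies of node v
        return min(ts for (n, _, ts, _) in self.V_T if n == v)

    def last_appearance(self, v):
        # \bar{t}_e(v): latest t_e over all temporal copies of node v
        return max(te for (n, _, _, te) in self.V_T if n == v)

    def time_span(self):
        # (T_s(G_T), T_e(G_T)): initial and final timestamps of the TG
        return (min(ts for (_, _, ts, _) in self.V_T),
                max(te for (_, _, _, te) in self.V_T))

# Toy usage: node "a" appears twice, with different features in each interval.
G = TemporalGraph(
    V_T=[("a", [1.0], 0, 2), ("a", [0.5], 3, 5), ("b", [2.0], 1, 4)],
    E_T=[(("a", "b"), [1.0], 1, 2)],
)
print(G.first_appearance("a"), G.last_appearance("a"))  # 0 5
print(G.time_span())                                    # (0, 5)
```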
General TGs have no restriction on their timestamps, which can take any value (for simplicity, we just assume that they are non-negative). However, in some applications, it makes sense to force these values to be multiples of a fixed timestep. This leads to the notion of Discrete Time Temporal Graphs, which are defined as follows.
**Definition 3** (Discrete Time Temporal Graph - DTTG): Let \(\Delta t>0\) be a fixed timestep and let \(t_{1}<t_{2}<\cdots<t_{n}\) be timestamps with \(t_{k+1}=t_{k}+\Delta t\). A Discrete Time Temporal Graph \(G_{DT}\) is a TG where for each \((v,x^{v},t_{s},t_{e})\in V_{T}\) or \((e,x^{e},t_{s},t_{e})\in E_{T}\), the \(t_{s},t_{e}\) are taken from the set of fixed timestamps (i.e., \(t_{s},t_{e}\in\{t_{1},t_{2},\ldots,t_{n}\}\), with \(t_{s}<t_{e}\)).
### Representation of temporal graphs
Two main strategies can be found in the literature for the description of time-varying graphs, based on snapshots or on events. These different representations lead to different algorithmic approaches. Extending [15], we give a formal definition of these representation strategies.
The snapshot-based strategy focuses on the temporal evolution of the whole graph. Snapshot-based Temporal Graphs can be defined as follows.
**Definition 4** (Snapshot-based Temporal Graph - STG): Let \(t_{1}<t_{2}<\cdots<t_{n}\) be the ordered set of all timestamps \(t_{s},t_{e}\) occurring in a TG \(G_{T}\). Set
\[V_{i} :=\{(v,x^{v}):(v,x^{v},t_{s},t_{e})\in V_{T},t_{s}\leq t_{i}\leq t _{e}\},\] \[E_{i} :=\{(e,x^{e}):(e,x^{e},t_{s},t_{e})\in E_{T},t_{s}\leq t_{i}\leq t _{e}\},\]
and define the snapshots \(G_{i}:=(V_{i},E_{i})\), \(i=1,\ldots,n\). Then a Snapshot-based Temporal Graph representation of \(G_{T}\) is the sequence
\[G_{T}^{S}:=\{(G_{i},t_{i}):i=1,\ldots,n\}\]
of time-stamped static graphs.
This representation is mostly used to describe DTTGs, where the snapshots represent the TG captured at periodic intervals (e.g., hours, days, etc.).
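As an illustration of Definition 4, the following sketch derives the snapshot sequence of a toy TG from its temporal node and edge sets; it is our own (deliberately naive) illustration, not an efficient or official implementation.

```python
def snapshots(V_T, E_T):
    """Build the STG of Definition 4: one static snapshot per timestamp."""
    stamps = sorted({t for (_, _, ts, te) in V_T for t in (ts, te)} |
                    {t for (_, _, ts, te) in E_T for t in (ts, te)})
    stg = []
    for t in stamps:
        V_i = [(v, x) for (v, x, ts, te) in V_T if ts <= t <= te]
        E_i = [(e, x) for (e, x, ts, te) in E_T if ts <= t <= te]
        stg.append(((V_i, E_i), t))
    return stg

V_T = [("a", None, 0, 2), ("b", None, 1, 3)]
E_T = [(("a", "b"), None, 1, 2)]
for (V_i, E_i), t in snapshots(V_T, E_T):
    print(t, [v for v, _ in V_i], [e for e, _ in E_i])
```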
The event-based strategy is more appropriate when the focus is on the temporal evolution of individual nodes or edges. This leads to the following definition.
**Definition 5** (Event-based Temporal Graph - ETG): Let \(G_{T}\) be a TG, and let \(\varepsilon\) denote one of the following events:
* Node insertion \(\varepsilon_{V}^{+}:=(v,t)\): the node \(v\) is added to \(G_{T}\) at time \(t\), i.e., there exists \((v,x^{v},t_{s},t_{e})\in V_{T}\) with \(t_{s}=t\).
* Node deletion \(\varepsilon_{V}^{-}:=(v,t)\): the node \(v\) is removed from \(G_{T}\) at time \(t\), i.e., there exists \((v,x^{v},t_{s},t_{e})\in V_{T}\) with \(t_{e}=t\).
* Edge insertion \(\varepsilon_{E}^{+}:=(e,t)\): the edge \(e\) is added to \(G_{T}\) at time \(t\), i.e., there exists \((e,x^{e},t_{s},t_{e})\in E_{T}\) with \(t_{s}=t\).
* Edge deletion \(\varepsilon_{E}^{-}:=(e,t)\): the edge \(e\) is removed from \(G_{T}\) at time \(t\), i.e., there exists \((e,x^{e},t_{s},t_{e})\in E_{T}\) with \(t_{e}=t\).
An Event-based Temporal Graph representation of a TG is a sequence of events

\[G_{T}^{E}:=\{\varepsilon:\varepsilon\in\{\varepsilon_{V}^{+},\varepsilon_{V}^{-},\varepsilon_{E}^{+},\varepsilon_{E}^{-}\}\}.\]
Here it is implicitly assumed that node and edge events are consistent (e.g., a node deletion event implies the existence of an edge deletion event for each incident edge). In the case of an ETG, the TG structure can be recovered by coupling an insertion and deletion event for each temporal edge and node. ETGs are better suited than STGs to represent TGs with arbitrary timestamps.
We will use the general notion of TG, which comprises both STG and ETG, in formalizing learning tasks in the next section. On the other hand, we will revert to the STG and ETG notions when introducing the taxonomy of TGNN methods in Section 4, since TGNNs use one or the other representation strategy in their algorithmic approaches.
## 3 Learning tasks on temporal graphs
Thanks to their learning capabilities, TGNNs are extremely flexible and can be adapted to a wide range of tasks on TGs. Some of these tasks are straightforward temporal extensions of their static counterparts. However, the temporal dimension has some non-trivial consequences in the definition of learning settings and tasks, some of which are often only
loosely formalized in the literature. We start by formalizing the notions of transductive and inductive learning for TGNNs, and then describe the different tasks that can be addressed.
### Learning settings
The machine learning literature distinguishes between inductive learning, in which a model is learned on training data and later applied to unseen test instances, and transductive learning, in which the input data of both training and test instances are assumed to be available, and learning is equivalent to leveraging the training inputs and labels to infer the labels of test instances given their inputs. This distinction becomes extremely relevant for graph-structured data, where the topological structure gives rise to a natural connection between nodes, and thus to a way to propagate the information in a transductive fashion. Roughly speaking, transductive learning is used in the graph learning literature when the node to be predicted and its neighborhood are known at training time (typical of node classification tasks), while inductive learning indicates that this information is not available (most often associated with graph classification tasks).
However, when talking about GNNs with their representation learning capabilities, this distinction is not so sharp. For example, a GNN trained for node classification in transductive mode could still be applied to an unseen graph, thus effectively performing inductive learning. The temporal dimension makes this classification even more elusive, since the graph structure is changing over time and nodes are naturally appearing and disappearing. Defining node membership in a temporal graph is thus a challenging task in itself.
Below, we provide a formal definition of transductive and inductive learning for TGNNs which is purely topological, i.e., based on whether the instance to be predicted is known at training time, and we complete it with a temporal dimension, which distinguishes between past and future prediction tasks.
**Definition 6** (Learning settings): Assume that a model is trained on a set of \(n\geq 1\) temporal graphs \(\mathcal{G}_{\mathcal{T}}\ :=\ \{G_{T}^{i}\ :=\ (V_{i},E_{i},X_{i}^{V},X_{i}^{E}),\ i =1,\ldots,n\}\). Moreover, let
\[T_{e}^{all}:=\max_{i=1,\ldots,n}T_{e}(G_{T}^{i}),\,V^{all}:=\cup_{i=1}^{n}V_{i},\ E^{all}:=\cup_{i=1}^{n}E_{i},\]
be the final timestamp and the set of all nodes and edges in the training set. Then, we have the following settings:
* Transductive learning: inference can only be performed on \(v\in V^{all}\), \(e\in E^{all}\), or \(G_{T}\subseteq_{V}G_{T}^{i}\) with \(G_{T}^{i}\in\mathcal{G}_{\mathcal{T}}\).
* Inductive learning: inference can be performed also on \(v\notin V^{all}\), \(e\notin E^{all}\), or \(G_{T}\not\subseteq_{V}G_{T}^{i}\), for all \(i=1,\ldots,n\).
* Past prediction: inference is performed for \(t\leq T_{e}^{all}\).
* Future prediction: inference is performed for \(t>T_{e}^{all}\).
We remark that all combinations of topological and temporal settings are meaningful, except for the case of inductive graph-based tasks. Indeed, the measure of time used in TGs is relative to each single graph. Moving to an unobserved graph would thus make the distinction between past and future pointless. Moreover, let us observe that, in all other cases, the two temporal settings are defined based on the final time of the entire training set, and not of the specific instances (nodes or edges), since their embedding may change also as an effect of the change of their neighbors in the training set.
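For concreteness, a small hypothetical helper classifying a node-level query according to Definition 6 might look as follows; the function name and signature are our own.

```python
def classify_setting(query_nodes, t_query, V_all, T_e_all):
    """Topological axis: transductive iff every queried node was seen in
    training; temporal axis: past iff t_query <= T_e^all (Definition 6)."""
    topo = "transductive" if all(v in V_all for v in query_nodes) else "inductive"
    when = "past" if t_query <= T_e_all else "future"
    return f"{when}-{topo}"

print(classify_setting({"u1"}, 4.0, V_all={"u1", "u2"}, T_e_all=5.0))  # past-transductive
print(classify_setting({"u3"}, 7.0, V_all={"u1", "u2"}, T_e_all=5.0))  # future-inductive
```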
We will use this categorization to describe supervised and unsupervised learning tasks in Section 3.2-3.3, and to present existing models in Section 4.
### Supervised learning tasks
Supervised learning tasks are based on a dataset where each object is annotated with its label (or class), from a finite set of possible choices \(\mathcal{C}:=\{C_{1},C_{2},\ldots,C_{k}\}\).
#### 3.2.1 Classification
**Definition 7** (Temporal Node Classification): Given a TG \(G_{T}=(V,E,V_{T},E_{T})\), the node classification task consists in learning the function
\[f_{\text{NC}}:V\times\mathbb{R}^{+}\rightarrow\mathcal{C}\]
which maps each node to a class \(C\in\mathcal{C}\), at a time \(t\in\mathbb{R}^{+}\).
This is one of the most common tasks in the TGNN literature. For instance, [31, 50, 45, 58, 36] focus on a future-transductive (FT) setting, i.e., predicting the label of a node in future timestamps. TGAT [50] performs future-inductive (FI) learning, i.e. it predicts the label of an unseen node in the future. Finally, DGNN [27] is the only method that has been tested on a past-inductive (PI) setting, i.e., predicting labels of past nodes that are unavailable (or masked) during training, while no approach has been applied to the past-transductive (PT) one. A significant application may be in epidemic surveillance, where contact tracing is used to produce a TG of past human interactions, and sample testing reveals the labels (infection status) of a set of individuals. Identifying the past infection status of the untested nodes is a PT task.
**Definition 8** (Temporal Edge Classification): Given a TG \(G_{T}=(V,E,V_{T},E_{T})\), the temporal edge classification task consists in learning a function
\[f_{\text{EC}}:E\times\mathbb{R}^{+}\rightarrow\mathcal{C}\]
which assigns each edge to a class at a given time \(t\in\mathbb{R}^{+}\).
Temporal edge classification has been less explored in the literature. Existing methods have focused on FT learning [31, 45], while FI, PI and PT have not been tackled so far. An example of PT learning consists in predicting the unknown past relationship between two acquaintances in a social network given their subsequent behaviour. For FI, one may predict whether a future transaction between new users is fraudulent or not.
In the next definition we use the set of real and positive intervals \(I^{+}:=\{[t_{s},t_{e}]\subset\mathbb{R}^{+}\}\).
**Definition 9** (Temporal Graph Classification): Let \(\mathcal{G}_{\mathcal{T}}\) be a domain of TGs. The graph classification task requires to learn a function
\[f_{\text{GC}}:\mathcal{G}_{\mathcal{T}}\times I^{+}\rightarrow\mathcal{C}\]
that maps a temporal graph, restricted to a time interval \([t_{s},t_{e}]\in I^{+}\), into a class.
The definition includes the classification of a single snapshot (i.e., \(t_{s}=t_{e}\)). As mentioned above, in the inductive setting the distinction between past and future predictions is pointless. In the transductive setting, instead, a graph \(G_{T}\in\mathcal{G}_{\mathcal{T}}\) may be classified in a past mode if \([T_{s}(G_{T}),T_{e}(G_{T})]\subseteq[t_{s},t_{e}]\), or in the future mode, otherwise.
None of the existing methods tackles temporal graph classification tasks, possibly due to the lack of suitable datasets (see also Section 5). The task, however, has numerous relevant applications. An example of inductive temporal graph classification is predicting mental disorders from the analysis of the brain connectome [18]. On the other hand, detecting critical stages during disease progression from gene expression profiles [13] can be framed as a past transductive graph classification task.
#### 3.2.2 Regression
The tasks introduced for classification can all be turned into corresponding regression tasks, simply by replacing the categorical target \(\mathcal{C}\) with the set \(\mathbb{R}\). We omit the formal definitions for the sake of brevity. To the best of our knowledge, no existing TGNN work addresses this kind of problem, even if the application of static GNNs alone has already shown outstanding results in this setting, e.g. in weather forecasting [21] and earthquake location and estimation [28].
#### 3.2.3 Link prediction
Link prediction requires the model to predict the relation between two given nodes, and can be formulated by taking as input any possible pair of nodes. Thus, we consider the setting to be transductive when both node instances are known at training time, and inductive otherwise. Instead, [33] adopt a different approach and identify _Level-1_ (the set of nodes is fixed)
and _Level-2_ (nodes may be added and removed over time) temporal link prediction tasks.
**Definition 10** (Temporal Link Prediction): Let \(G_{T}=(V,E,V_{T},E_{T})\) be a TG. The temporal link prediction task consists in learning a function
\[f_{\text{LP}}:V\times V\times\mathbb{R}^{+}\rightarrow[0,1]\]
which predicts the probability that, at a certain time, there exists an edge between two given nodes.
The domain of the function \(f_{\text{LP}}\) is the set of all feasible pairs of nodes, since it is possible to predict the probability of future interactions between nodes that have been connected in the past or not, as well as the probability of missing edges in a past time. Most TGNN approaches for temporal link prediction focus on future predictions, forecasting the existence of an edge in a future timestamp between existing nodes (FT is the most common setting) [31, 38, 16, 55, 50, 26, 45, 27, 36, 58], or unseen nodes (FI) [16, 50, 36]. The only model that investigates past temporal link prediction is [26], which devises a PI setting by masking some nodes and predicting the existence of a past edge between them. Note that predicting past temporal links can be extremely useful for predicting, e.g., missing interactions in contact tracing for epidemiological studies.
**Definition 11** (Event Time Prediction): Let \(G_{T}=(V,E,V_{T},E_{T})\) be a TG. The aim of the event time prediction task is to learn a function
\[f_{\text{EP}}:V\times V\rightarrow\mathbb{R}^{+}\]
that predicts the time of the first appearance of an edge.
None of the existing methods address this task. Potential FT applications of event time prediction include predicting when a customer will pay an invoice to its supplier, or how long it takes to connect two similar users in a social network.
### Unsupervised learning tasks
In this section, we formalize unsupervised learning tasks on temporal graphs, an area that has received little to no attention in the TGNN literature so far.
#### 3.3.1 Clustering
Temporal graphs can be clustered at the node or graph level, with edge-level clustering being a minor variation of the node-level one. Some relevant applications can be defined in terms of temporal clustering.
**Definition 12** (Temporal Node Clustering): Given a TG \(G_{T}=(V,E,V_{T},E_{T})\), the temporal node clustering task consists in learning a time-dependent cluster assignment map
\[f_{\text{NCl}}:V\times\mathbb{R}^{+}\rightarrow\mathcal{P}(V),\]
where \(\mathcal{P}(V):=\{p_{1},p_{2},\ldots,p_{k}\}\) is a partition of the node set \(V\), i.e., \(p_{i}\subseteq V\), \(p_{i}\cap p_{j}=\emptyset\) if \(i\neq j\), and \(\cup_{i=1}^{k}p_{i}=V\).
While node clustering in SGs is a very common task, its temporal counterpart has not been explored yet for TGNNs, despite its potential relevance in application domains like epidemic modelling (identifying groups of exposed individuals, in both inductive and transductive settings), or trend detection in customer profiling (mostly transductive).
**Definition 13** (Temporal Graph Clustering): Given a set of temporal graphs \(\mathcal{G_{T}}\), the temporal graph clustering task consists in learning a cluster-assignment function
\[f_{\text{GCl}}:\mathcal{G_{T}}\times I^{+}\rightarrow\mathcal{P}(\mathcal{G_{T }}),\]
where \(\mathcal{P}(\mathcal{G_{T}}):=\{p_{1},\ldots,p_{k}\}\) is a partition of the set of temporal graphs in the given time interval.
Relevant examples of tasks of inductive temporal graph clustering are grouping social interaction networks (e.g., hospitals, workplaces, schools) according to their interaction patterns, or grouping diseases in terms of similarity between their spreading processes [10].
#### 3.3.2 Low-dimensional embedding (LDE)
LDEs are especially useful in the temporal setting, e.g. to visually inspect temporal dynamics of individual nodes or entire graphs, and identify relevant
trends and patterns. No GNN-based model has been applied to these tasks, either at the node or at the graph level. We formally define the tasks of temporal node and graph LDE as follows.
**Definition 14** (Low-dimensional temporal node embedding): Given a TG \(G_{T}=(V,E,V_{T},E_{T})\), the low-dimensional temporal node embedding task consists in learning a map
\[f_{NEm}:V\times\mathbb{R}^{+}\rightarrow\mathbb{R}^{d}\]
to map a node, at a given time, into a low dimensional space.
**Definition 15** (Low-dimensional temporal graph embedding): Given a domain of TGs \(\mathcal{G_{T}}\), the low-dimensional temporal graph embedding task aims to learn a map
\[f_{GEm}:\mathcal{G_{T}}\times I^{+}\rightarrow\mathbb{R}^{d},\]
which represents each graph as a low dimensional vector in a given time interval.
## 4 A taxonomy of TGNNs
This section describes the taxonomy with which we categorize existing TGNN approaches (see Figure 1). Following the representation strategies outlined in Section 2.1, the first level groups methods into _Snapshot-based_ and _Event-based_. The second level of the taxonomy further divides these two macro-categories based on the techniques used to manage the temporal dependencies. The leaves of the taxonomy in Figure 1 correspond to the individual models, with a colored symbol indicating their main underlying technology.
### Snapshot-based models
_Snaphot-based_ models are specifically tailored for STGs (see Def. 4) and thus, consistently with the definition, they are equipped with a suitable method to process the entire graph at each point in time, and with a mechanism that learns the temporal dependencies across timesteps. Based on the mechanism used, we can further distinguish between _Model Evolution_ and _Embedding Evolution_ methods.
#### 4.1.1 Model Evolution methods
We call _Model Evolution_ the evolution of the parameters of a static GNN model over time. This mechanism is appropriate for modelling STG, as the evolution of the model is performed at the snapshot level.
To the best of our knowledge, the only existing method belonging to this category is **EvolveGCN**[31]. This model utilizes a Recurrent Neural Network (RNN) to update the Graph Convolutional Network (GCN) [23] parameters at each timestep, allowing for model adaptation that is not constrained by the presence or absence of nodes. The method can effectively handle new nodes without prior historical information. A key advantage of this approach is that the GCN parameters are no longer trained directly, but rather they are computed from the trained RNN, resulting in a more manageable model size that does not increase with the number of timesteps.
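As a rough illustration of the model-evolution mechanism, the PyTorch sketch below evolves a GCN weight matrix across snapshots with a GRU cell, in the spirit of EvolveGCN [31]; it is a simplified approximation of the idea, not the official implementation.

```python
import torch
import torch.nn as nn

class EvolvingGCNLayer(nn.Module):
    """GRU-evolved GCN layer (simplified EvolveGCN-style sketch)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d_in, d_out) * 0.1)  # W_0
        self.gru = nn.GRUCell(input_size=d_out, hidden_size=d_out)

    def forward(self, A_hat_list, X_list):
        W_t, out = self.W, []
        for A_hat, X in zip(A_hat_list, X_list):
            # Rows of W form the batch; the GRU evolves W_{t-1} into W_t.
            W_t = self.gru(W_t, W_t)
            out.append(torch.relu(A_hat @ X @ W_t))  # per-snapshot GCN step
        return out

# Toy run: 3 snapshots of a 4-node graph with 8-dimensional features.
layer = EvolvingGCNLayer(8, 16)
A = [torch.eye(4) for _ in range(3)]   # placeholder normalized adjacency
X = [torch.randn(4, 8) for _ in range(3)]
print([e.shape for e in layer(A, X)])  # 3 x torch.Size([4, 16])
```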
#### 4.1.2 Embedding Evolution methods
Rather than evolving the parameters of a static GNN model, _Embedding Evolution_ methods focus on evolving the embeddings produced by a static model.
There are several different TGNN models that fall under this category. These networks differ from one another in the techniques used for processing both the structural information and the temporal dynamics of the STGs. **DySAT**[38] introduced a generalization of Graph Attention Network (GAT) [44] for STGs. First, it uses a self-attention mechanism to generate static node embeddings at each timestamp. Then, it uses a second self-attention block to process past temporal embeddings for a node to generate its novel embedding. The **VGRNN** model [16] uses VGAE [22] coupled with Semi-Implicit Variational Inference (SIVI) [54] to handle the variation of the graph over time. The learned latent representation is then evolved through an LSTM conditioned
on the previous time's latent representation, allowing the model to predict the future evolution of the graph. Finally, **ROLAND**[55] is a general framework for extending state-of-the-art GNN techniques to STGs. The key insight is that node embeddings at different GNN layers can be viewed as hierarchical node states. To generalize a static GNN for dynamic settings, hierarchical node states are updated based on newly observed nodes and edges through a Gated Recurrent Unit (GRU) update module [6].
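The embedding-evolution mechanism can be sketched as follows: a static encoder (here a plain linear message layer standing in for an arbitrary GNN) embeds each snapshot, and a GRU updates the node states, loosely following the ROLAND-style update [55]. This is our own illustrative code.

```python
import torch
import torch.nn as nn

class EmbeddingEvolution(nn.Module):
    """Per-snapshot encoding followed by a GRU update of node states."""
    def __init__(self, d):
        super().__init__()
        self.msg = nn.Linear(d, d)      # stand-in for any static GNN layer
        self.update = nn.GRUCell(d, d)  # evolves node states over snapshots

    def forward(self, A_list, X_list):
        H = torch.zeros_like(X_list[0])        # initial node states
        for A, X in zip(A_list, X_list):
            Z = torch.relu(A @ self.msg(X))    # snapshot embedding
            H = self.update(Z, H)              # evolve states
        return H

model = EmbeddingEvolution(d=8)
A_list = [torch.eye(6) for _ in range(4)]  # placeholder adjacency per snapshot
X_list = [torch.randn(6, 8) for _ in range(4)]
print(model(A_list, X_list).shape)         # torch.Size([6, 8])
```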
### Event-based models
Models belonging to the _Event-based_ macro category are designed to process ETGs (see Def. 5). These models are able to process streams of events by incorporating techniques that update the representation of a node whenever an event involving that node occurs.
The models that lie in this macro category can be further classified in _Temporal Embedding_ and _Temporal Neighborhood_ methods, based on the technology used to learn the time dependencies. In particular, the _Temporal Embedding_ models use recurrent or self-attention mechanisms to model sequential information from streams of events, while also incorporating a time encoding. This allows for temporal signals to be modeled by the interaction between time embedding, node features and the topology of the graph. _Temporal Neighborhood_ models, instead, use a module that stores functions of events involving a specific node at a given time. These values are then aggregated and used to update the node representation as time progresses.
Figure 1: **The proposed TGNN taxonomy and an analysis of the surveyed methods.** The top panel shows the new categories introduced in this work with the corresponding model instances (Section 4), where the colored bullets additionally indicate the main technology that they employ. The bottom table maps these methods to the task (Section 3) to which they have been applied in the respective original paper, with an additional indication of their use in the future (F), past (P), inductive (I), or transductive (T) settings (Section 3.1). Notice that no method has been applied yet to clustering and visualization, for neither graphs nor nodes. Moreover, only four out of ten models have been tested in the past mode (three in PT, one in PI).
#### 4.2.1 Temporal Embedding methods
_Temporal embedding_ methods model TGs by combining time embedding, node features, and graph topology. These models use an explicit functional time encoding, i.e., a translation-invariant vectorial embedding of time based on Random Fourier Features (RFF) [34].
**TGAT**[50], for example, introduces a graph-temporal attention mechanism which works on the embeddings of the temporal neighbours of a node, where the positional encoding is replaced by a temporal encoding based on RFFs. On the other hand, **NAT**[26] collects the temporal neighbours of each node into dictionaries, and then it learns the node representation with a recurrent mechanism, using the historical neighbourhood of the current node and an RFF-based time embedding.
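A minimal sketch of such a functional time encoding follows; in TGAT the frequencies and phases are learned, whereas here they are drawn at random, so this is only an illustration of the RFF idea.

```python
import numpy as np

def time_encoding(t, omega, phi):
    """Phi(t) = sqrt(1/d) * cos(omega * t + phi): an RFF-style map whose
    inner products, in expectation over phi, depend only on time differences."""
    return np.sqrt(1.0 / len(omega)) * np.cos(omega * t + phi)

rng = np.random.default_rng(0)
d = 64
omega = rng.normal(size=d)            # learned in TGAT; random here
phi = rng.uniform(0, 2 * np.pi, d)
# Inner products approximate a kernel of the time gap t1 - t2:
print(time_encoding(3.0, omega, phi) @ time_encoding(5.0, omega, phi))
print(time_encoding(10.0, omega, phi) @ time_encoding(12.0, omega, phi))
```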
#### 4.2.2 Temporal Neighborhood methods
The _Temporal Neighborhood_ class includes all TGNN models that make use of a special _mailbox_ module to update node embeddings based on events. When an event occurs, a function is evaluated on the details of the event to compute a _mail_ or a _message_. For example, when a new edge appears between two nodes, a message is produced, taking into account the time of occurrence of the event, the node features, and the features of the new edge. The node representation is then updated at each time by aggregating all the generated messages.
Several existing TGNN methods belong to this category. **APAN**[45] introduces the concept of asynchronous algorithm, which decouples graph query and model inference. An attention-based encoder maps the content of the mailbox to a latent representation of each node, which is decoded by an MLP adapted to the downstream task. After each node update following an event, mails containing the current node embedding are sent to the mailboxes of its neighbors using a propagator. **DGNN**[27] combines an _interact_ module -- which generates an encoding of each event based on the current embedding of the interacting nodes and its history of past interactions -- and a _propagate_ module -- which transmits the updated encoding to each neighbor of the interacting nodes. The aggregation of the current node encoding with those of its temporal neighbors uses a modified LSTM, which permits working with non-constant timesteps, and implements a discount factor to downweight the importance of remote interactions. **TGN**[36] provides a generic framework for representation learning in ETGs, and it makes an effort to integrate the concepts put forward in earlier techniques. This inductive framework is made up of separate and interchangeable modules. Each node the model has seen so far is characterized by a memory vector, which is a compressed representation of all its past interactions. Given a new event, a mailbox module computes a mail for every node involved. Mails will then be used to update the memory vector. To overcome the so-called staleness problem [20], an embedding module computes, at each timestamp, the node embeddings using their neighbourhood and their memory states. Finally, **TGL**[58] is a general framework for training TGNNs on graphs with billions of nodes and edges by using a distributed training approach. In TGL, a mailbox module is used to store a limited number of the most recent interactions, called mails. When a new event occurs, the node memory of the relevant nodes is updated using the cached messages in the mailbox. The mailbox is then updated after the node embeddings are calculated. This process is also used during inference to ensure consistency in the node memory, even though updating the memory is not required during this phase.
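The mailbox mechanism shared by these models can be sketched as follows; this is our own simplified illustration of a TGN-style memory update [36], not code from any of the cited papers.

```python
import torch
import torch.nn as nn

class NodeMemory(nn.Module):
    """Each node keeps a memory vector; an interaction event produces a
    'mail' that a GRU cell uses to update the memory of both endpoints."""
    def __init__(self, num_nodes, d_mem, d_edge):
        super().__init__()
        self.memory = torch.zeros(num_nodes, d_mem)
        self.last_seen = torch.zeros(num_nodes)
        # mail = [own memory, neighbor memory, time delta, edge features]
        self.update = nn.GRUCell(2 * d_mem + 1 + d_edge, d_mem)

    def on_event(self, u, v, t, edge_feat):
        with torch.no_grad():
            dt_u = torch.tensor([t - self.last_seen[u].item()])
            dt_v = torch.tensor([t - self.last_seen[v].item()])
            mail_u = torch.cat([self.memory[u], self.memory[v], dt_u, edge_feat])
            mail_v = torch.cat([self.memory[v], self.memory[u], dt_v, edge_feat])
            self.memory[u] = self.update(mail_u[None], self.memory[u][None])[0]
            self.memory[v] = self.update(mail_v[None], self.memory[v][None])[0]
            self.last_seen[u] = self.last_seen[v] = t

mem = NodeMemory(num_nodes=5, d_mem=4, d_edge=2)
mem.on_event(0, 3, t=1.5, edge_feat=torch.ones(2))
print(mem.memory[0], mem.memory[3])
```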
## 5 Open challenges
Building on existing libraries of GNN methods, two major TGNN libraries have been developed, namely PyTorch Geometric Temporal (PyGT) [37], based on PyTorch Geometric, and DynaGraph [14], based on the Deep Graph Library. While these are substantial contributions to the development and practical application of TGNN models, several open challenges still need to be faced to fully exploit the potential of this technology. We discuss the ones we believe are
the most relevant in the following.
**Evaluation**: The evaluation of GNN models has been greatly enhanced by the Open Graph Benchmark (OGB) [19], which provides a standardized evaluation protocol and a collection of graph datasets enabling a fair and consistent comparison between GNN models. An equally well-founded standardized benchmark for evaluating TGNNs does not currently exist. As a result, each model has been tested on its own selection of datasets, making it challenging to compare and rank different TGNNs on a fair basis. For instance, Zhou _et al._[58] introduced two real-world datasets with 0.2 billion and 1.3 billion temporal edges which allow one to evaluate the scalability of TGNNs to large-scale real-world scenarios, but they only tested the TGL model [58]. The variety and the complexity of learning settings and tasks described in Section 3 make a standardization of tasks, datasets and processing pipelines especially crucial to allow a fair assessment of the different approaches and foster innovation in the field.
**Expressiveness**: Driven by the popularity of (static) GNNs, the study of their expressive power has received a lot of attention in the last few years [39]. For instance, appropriate formulations of message-passing GNNs have been shown to be as powerful as the Weisfeiler-Lehman isomorphism test (WL test) in distinguishing graphs or nodes [51], and higher-order generalizations of message-passing GNNs have been proposed that can match the expressivity of the \(k\)-WL test [29]. Finally, it has been proven that GNNs are a sort of universal approximator on graphs modulo the node equivalence induced by the WL test [9].
Conversely, the expressive power of TGNNs is still far from being fully explored, and the design of new WL tests, suitable for TGNNs, is a crucial step towards this aim. This is a challenging task since the definition of a node neighbourhood in temporal graphs is not as trivial as for static graphs, due to the appearance/disappearance of nodes and edges. In [3], a new version of the WL test for temporal graphs has been proposed, applicable only to DTTGs. Instead, [42] proposed a novel WL test for ETGs, and the TGN model [36] has been proven to be as powerful as this test. Finally, [3] proved a universal approximation theorem, but the result holds only for a specific TGNN model for STGs, composed of standard GNNs stacked with an RNN.
To the best of our knowledge, these are the only results achieved so far on the expressive power of TGNNs. A complete theory of the WL test for the different TG representations, as well as universal approximation theorems for event-based models, is still lacking. Moreover, no efforts have been made to incorporate higher-order graph structures to enhance the expressiveness of TGNNs. This task is particularly demanding, since it requires not only the definition of the temporal counterpart of the \(k\)-WL test but also some techniques to scale to large datasets. Indeed, a drawback of considering higher-order structures is that of high memory consumption, which can only get worse in the case of TGs, as they usually have a greater number of nodes than static graphs.
**Learnability**: Training standard GNNs over large and complex graph data is highly non-trivial, often resulting in problems such as over-smoothing and over-squashing. A theoretical explanation for this difficulty has been given using algebraic topology and sheaf theory [43, 4]. More intuitively, we do not yet know how to reproduce, when training deep GNNs, the breakthrough obtained in training very deep architectures over vector data. Such a difficulty is even more challenging with TGNNs, because the typical long-term dependency of TGs poses additional problems beyond those due to over-smoothing and over-squashing.
Modern static GNN models face the problems arising from the complexity of the data using techniques such as dropout, virtual nodes, and neighbor sampling, but a general solution is far from being reached. The extension of the above-mentioned techniques to TGNNs, and the corresponding theoretical studies, are open challenges, and we are aware of only one work towards this goal [53]. On the other hand, the goal of proposing general very deep TGNNs is even more challenging due to the difficulty in designing the graph dynamics in a hierarchical fashion.
**Real-world applications**: The analysis of the tasks in Section 3 revealed several opportunities for the use of TGNNs far beyond their current scope of application. We would like to outline here some promising directions of application.
A challenging and potentially disruptive direction for the application of TGNNs is the learning of dynamical systems through the combination of machine learning and physical knowledge [47]. Physics-Informed Neural Networks (PINNs) [35] are already revolutionizing the field of scientific computing [7], and static GNNs have been employed in this framework with great success [32, 12]. Adapting TGNNs to this field may enable carrying over these results to the treatment of time-dependent problems. Climate science [11] is a particularly attractive field of application, both for its critical impact on our societies and for the promising results achieved by GNNs in climate modelling tasks [21]. We believe that TGNNs may rise to be a prominent technology in this field, thanks to their unique capability to capture spatio-temporal correlations at multiple scales. Epidemic studies are another topic of enormous everyday impact that may be explored through the lens of TGNNs, since a proper modelling of the spreading dynamics needs to be tightly coupled to the underlying TG structure [10]. Both fields require further development of TGNNs for regression problems, a task that is still underdeveloped (see Section 3).
## 6 Conclusion
GNN-based models for temporal graphs have become a promising research area. However, we believe that the potential of GNNs in this field has only been partially explored. In this work, we propose a systematic formalization of tasks and learning settings for TGNNs, which was lacking in the literature, and a comprehensive taxonomy categorizing existing methods and highlighting unaddressed tasks. Building on this systematization of the current state-of-the-art, we discuss open challenges that need to be addressed to unleash the full potential of TGNNs. We conclude by stressing the fact that the issues open to date are very challenging, since they presuppose considering both the temporal and relational dimensions of data, suggesting that forthcoming new computational models must go beyond the GNN framework to provide substantially better solutions.
|
2301.03439 | Generalized adaptive smoothing based neural network architecture for
traffic state estimation | The adaptive smoothing method (ASM) is a standard data-driven technique used
in traffic state estimation. The ASM has free parameters which, in practice,
are chosen to be some generally acceptable values based on intuition. However,
we note that the heuristically chosen values often result in un-physical
predictions by the ASM. In this work, we propose a neural network based on the
ASM which tunes those parameters automatically by learning from sparse data
from road sensors. We refer to it as the adaptive smoothing neural network
(ASNN). We also propose a modified ASNN (MASNN), which makes it a strong
learner by using ensemble averaging. The ASNN and MASNN are trained and tested on
two real-world datasets. Our experiments reveal that the ASNN and the MASNN
outperform the conventional ASM. | Chuhan Yang, Sai Venkata Ramana Ambadipudi, Saif Eddin Jabari | 2023-01-09T15:40:45Z | http://arxiv.org/abs/2301.03439v1 | # Generalized adaptive smoothing based neural network architecture for traffic state estimation
###### Abstract
The adaptive smoothing method (ASM) is a standard data-driven technique used in traffic state estimation. The ASM has free parameters which, in practice, are chosen to be some generally acceptable values based on intuition. However, we note that the heuristically chosen values often result in un-physical predictions by the ASM. In this work, we propose a neural network based on the ASM which tunes those parameters automatically by learning from sparse data from road sensors. We refer to it as the adaptive smoothing neural network (ASNN). We also propose a modified ASNN (MASNN), which makes it a strong learner by using ensemble averaging. The ASNN and MASNN are trained and tested on two real-world datasets. Our experiments reveal that the ASNN and the MASNN outperform the conventional ASM.
## 1 Introduction
Macroscopic traffic state variables such as flow rate and average vehicle speed, as key measurements of traffic conditions on road segments in a traffic network, have been instrumental in transportation planning and management. The accurate estimation of traffic state variables has received considerable attention because traffic state variables cannot be directly measured everywhere due to technological and financial limitations, and need to be estimated from noisy and incomplete traffic data. As a result, the process of the inference of traffic state variables from partially observed traffic data, which is referred to as _Traffic state estimation_ (TSE), has been the subject of much systematic investigation. Commonly, traffic data comes from many heterogeneous sources including stationary detector data (SDD) collected by sensors fixed in the infrastructure or floating-car data (FCD) collected by GPS devices and cellphones.
The approaches of interest are often grouped into model-driven and data-driven [15]. Model-driven approaches like Lighthill-Whitham-Richards (LWR) model [11], [13], Aw-Rascle-Zhang (ARZ) model [1] use hyperbolic conservation laws coupled with dynamical measurement models to estimate traffic states. Data-driven approaches use supervised learning algorithms to predict the traffic states. A widely used data-driven approach to estimate the traffic state is the adaptive smoothing method (ASM) [19]. Other data-driven approaches used in TSE include deep neural networks [8], [2], [18], support vector regression [22] and matrix factorization [10].
The ASM is widely used in traffic state estimation because of its simplicity of implementation and low computational cost. It is basically an interpolation method that reconstructs the spatio-temporal traffic state by applying a nonlinear low-pass filter to discrete traffic data to produce missing traffic state variables as smooth functions of space and time. Specifically, the ASM accounts for the fact that perturbations in traffic flow propagate downstream in free-flow and upstream in congestion with a characteristic constant velocity, and estimates the traffic states as superpositions of two linear anisotropic low-pass filters with an adaptive weight factor.
A major disadvantage of the ASM is its free parameters. Ideally, one would determine the free parameters by calibrating them to match real data. However, in most cases the available real data are so sparse and scattered that conventional fitting methods are not efficient. This leads to non-physical traffic state predictions by the method.
To address this shortcoming, we develop a neural network architecture based on ASM which we call ASNN. We
believe the proposed ASNN allows for better parameter estimation from sparse data. We first develop a vanilla version of the ASNN and then build a more sophisticated version, MASNN, by adding multiple smoothing nodes with different initializations and incorporating multiple a priori estimates. The MASNN allows for finding suitable parameters for the ASM when no prior information about the parameters is easily accessible. We compare the performance of the ASM and of our proposed methods on data from the Next Generation Simulation Program (NGSIM) and the German Highway Drone (HighD) dataset with different input settings.
The rest of the paper is organized as follows: Section 2 reviews the relevant literature. In Section 3, we present the ASM and our proposed method, the ASNN; we also discuss the ASNN's workflow and its generalization, the MASNN. The experimental details are given in Section 4, followed by a brief discussion of the results in Section 5. Finally, we draw our conclusion and discuss future research in Section 6.
## 2 Related Work
The adaptive smoothing method (ASM) belongs to the group of data-driven methods known as structured-learning methods, which honor physics constraints such as conservation laws in order to improve performance or interpretability; see [18], [17], [16], [4], [14], [3]. The physical constraints are incorporated in the model fitting stage (e.g., [16], [4]) or infused in the model architecture (e.g., [6], [5], [7], [18]). Structured-learning methods have shown robust estimation performance and require limited data in the function fitting process. The ASM is a structured-learning method because its interpolation process considers free-flow and congested traffic wave propagation characteristics.
Different attempts exist in the literature regarding improvement and modification of the conventional ASM. For instance, [14] discretized the ASM and applied Fast Fourier Transform techniques to further reduce the computation time for real-time applications. In [3]'s work, the smoothing width parameters of the ASM dynamically change in a rolling-horizon framework to capture complicated traffic states. However, that work ignored the other ASM parameters (free-flow and shock wave speeds, transition width, critical speed), whose values were kept fixed. The method proposed in our paper overcomes this limitation by systematically searching for optimal values of all the parameters considered in the ASM. Anisotropic kernels of the ASM are also applied in designing efficient convolutional neural networks for traffic speed estimation in [18]. A recent work of [23] investigated the ASM from a matrix completion perspective and proposed a systematic procedure to calculate the optimal weight factors instead of using the conventional heuristic weight factors.
## 3 Methodology
### Notation
We denote matrices using uppercase bold Roman letters (\(\mathbf{W},\mathbf{Z},\mathbf{V}\), etc.). We specify \(\mathbf{J}\) as the all-ones matrix, in which every element is equal to one. The symbol \(\odot\) represents the Hadamard product. \(\|\cdot\|_{\mathrm{F}}\) is the Frobenius norm of a matrix. The symbol \(*\) represents the convolution operation. Other notations are described in Table 1.
### Conventional Adaptive Smoothing Method (ASM)
The adaptive smoothing method takes any macroscopic traffic quantity \(\mathbf{Z}\) as input. The elements of \(\mathbf{Z}\), \(\mathbf{Z}(x,t)\), can represent traffic flux at space-time indices \((x,t)\), denoted \(q(x,t)\), speeds \((v(x,t))\), or traffic densities \((\rho(x,t))\). The ASM calculates the estimated field based on the assumption that traffic is in one of two regimes: in free-flow traffic, perturbations essentially move downstream with the flow, whereas in congested traffic they move upstream. According to kinematic wave theory, the perturbations travel with a constant wave speed along the characteristic curves over space and time in each of these regimes. The ASM first calculates two a priori fields as follows:
\[\mathbf{Z}^{\mathrm{free}}(x,t)=\frac{1}{N(x,t)}\sum_{n}\phi\left(x-x_{n},t-t_{n}-\frac{x-x_{n}}{c_{\mathrm{free}}}\right)z_{n},\quad\mathbf{Z}^{\mathrm{cong}}(x,t)=\frac{1}{N(x,t)}\sum_{n}\phi\left(x-x_{n},t-t_{n}-\frac{x-x_{n}}{c_{\mathrm{cong}}}\right)z_{n}, \tag{1}\]
where \(c_{\mathrm{free}}\) is the characteristic wave speed in free-flow regime and \(c_{\mathrm{cong}}\) is the characteristic wave speed in the congested regime and \(N(x,t)\) is a normalization constant, \(\phi(\cdot,\cdot)\) is the smoothing kernel function:
\[\phi(x,t)=\exp\Big(-\frac{|x|}{\sigma}-\frac{|t|}{\tau}\Big). \tag{2}\]
\((x_{n},t_{n})\) are discrete position and time pairs at which the data \((z_{n})\) are measured. \(\sigma\) and \(\tau\) represent the range of spatial smoothing in \(x\) and temporal smoothing in \(t\). We can also apply other localized functions such as a two-dimensional Gaussian kernel. The output is a superposition of these two a priori estimates with an adaptive weight field \(\mathbf{W}^{\mathrm{cong}}\), written in compact form as
\[\mathbf{Z}=\mathbf{W}^{\mathrm{cong}}\odot\mathbf{Z}^{\mathrm{cong}}+(\mathbf{J}-\mathbf{W}^{\mathrm{cong}})\odot\mathbf{Z}^{\mathrm{free}}. \tag{3}\]
The weight field \(\mathbf{W}^{\text{cong}}\), with entries in \([0,1]\), depends nonlinearly on the two a priori estimates and is given by:
\[\mathbf{W}^{\text{cong}}=\frac{1}{2}\left[1+\tanh\left(\frac{Z_{\text{thr}}-\min\{\mathbf{Z}^{\text{free}},\mathbf{Z}^{\text{cong}}\}}{\Delta Z}\right)\right]. \tag{4}\]
Note the operators \(\min\{\cdot,\cdot\}\) and \(\tanh(\cdot)\) in Eq.(4) are applied elementwise. \(Z_{\text{thr}}\) is the critical threshold, \(\Delta Z\) is the transition range between congested and free traffic.
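For concreteness, a direct NumPy sketch of Eqs. (1)-(4) is given below, with toy units of km, hours, and km/h; this is our own illustration, not the authors' implementation.

```python
import numpy as np

def phi(dx, dt, sigma, tau):
    # Eq. (2): exponential smoothing kernel
    return np.exp(-np.abs(dx) / sigma - np.abs(dt) / tau)

def a_priori(xg, tg, x_n, t_n, z_n, c, sigma, tau):
    # Eq. (1): kernel-weighted average along characteristics of speed c
    dx = xg[:, :, None] - x_n[None, None, :]
    dt = tg[:, :, None] - t_n[None, None, :] - dx / c
    w = phi(dx, dt, sigma, tau)
    return (w * z_n).sum(-1) / w.sum(-1)     # N(x,t) normalization

def asm(xg, tg, x_n, t_n, z_n, c_free=80.0, c_cong=-15.0,
        sigma=0.1, tau=5 / 3600, V_thr=60.0, dV=20.0):
    Z_free = a_priori(xg, tg, x_n, t_n, z_n, c_free, sigma, tau)
    Z_cong = a_priori(xg, tg, x_n, t_n, z_n, c_cong, sigma, tau)
    W = 0.5 * (1 + np.tanh((V_thr - np.minimum(Z_free, Z_cong)) / dV))  # Eq. (4)
    return W * Z_cong + (1 - W) * Z_free                                # Eq. (3)

# Toy demo: two detectors on a 0.6 km section, speeds in km/h.
x_n = np.array([0.1, 0.5]); t_n = np.array([0.01, 0.02]); z_n = np.array([90.0, 25.0])
xg, tg = np.meshgrid(np.linspace(0, 0.6, 4), np.linspace(0, 0.05, 3), indexing="ij")
print(asm(xg, tg, x_n, t_n, z_n).round(1))
```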
There are six parameters involved in ASM estimation. Their typical values and interpretations are summarized in Table 1. [20] suggested that suitable values for \(\sigma\) and \(\tau\) are half of the inter-detector spacing and half of the sampling time, respectively. Further, they performed a sensitivity analysis by visual inspection and noted that the ASM is less sensitive to the remaining parameters. However, as we observed in our experiments (and as can also be seen from Figure 3 in [20]), the ASM is in fact sensitive to the parameter values, and we expect that this sensitivity is due to \(Z_{\text{thr}}\) and \(\Delta Z\). This calls for a robust algorithm to optimize the ASM parameters. Moreover, there are cases where we have no prior knowledge of the parameter values to start with, so a method for tuning the ASM parameters is essential. For example, we do not know a suitable smoothing width when the input is a mixture of signals recorded by heterogeneous sources. To this end, we formulate the ASM as a neural network (ASNN) to efficiently handle the tuning of the field parameters. Our architecture can also accommodate multiple wave speeds, which allows one to determine an optimal parameter setting when typical values such as those given in Table 1 do not work.
### Adaptive smoothing neural network (ASNN)
Our adaptive smoothing neural network (ASNN) is built on the ASM. It is thus designed to respect kinematic wave theory inherently in its hidden layers, as explained below.
ASNN is illustrated graphically in Fig. 1. The ASNN consists of the following layers: (i) An input layer where stationary detector or heterogeneous source data are fed into the neural network. (ii) An output layer where an estimated field is reconstructed and given as output. (iii) Hidden layers, where information is processed between the input and output layers while respecting kinematic wave theory. From left to right in the hidden layers in Fig. 1, the first hidden layer is the smoothing layer. In the smoothing layer, the input is fed into the smooth spatio-temporal kernel functions of Eq.(2) to generate estimates of \(\mathbf{Z}^{\text{free}}\) and \(\mathbf{Z}^{\text{cong}}\) as given in Eq.(1). The second hidden layer is the weighting layer. \(\mathbf{Z}^{\text{free}}\) and \(\mathbf{Z}^{\text{cong}}\) from the smoothing layer are used to calculate the weight matrix \(\mathbf{W}^{\text{cong}}\) according to (4). \(\mathbf{W}^{\text{cong}}\) is then used in the convex combination (3) of \(\mathbf{Z}^{\text{free}}\) and \(\mathbf{Z}^{\text{cong}}\) to predict the output in the output layer. This architecture strictly preserves the workflow of the ASM; thus, when we use the same parameter values from Table 1 as initialization, one feedforward pass of the ASNN reproduces the results of the conventional ASM.
The cost function measures the average estimation accuracy for the neural network. In each training iteration, weights in neurons are adjusted to minimize the cost. In this work, we propose two cost functions for ASNN:
1. Convolution cost function: We apply the \(\ell_{1}\) regularized root mean square error based on the heuristic that the state of every cell in the estimated matrix should be close to those of its neighbors. The cost function is
\[\frac{\|\mathsf{P}_{\Omega}(\mathbf{Z}^{\text{est}}-\mathbf{Z}^{\text{true}} )\|_{F}}{\|\mathsf{P}_{\Omega}(\mathbf{J})\|_{F}}+\lambda\frac{\sum_{(x,t)}| \omega*\mathbf{Z}^{\text{est}}(x,t)|}{\|\mathbf{J}\|_{F}} \tag{5}\]
The binary mask operator \(\mathsf{P}_{\Omega}\) evaluates the objective function only at the observed indices \(\Omega\). \(\lambda\) is the regularization parameter (or penalty parameter). The penalty term is a kernel convolution operation on the estimated matrix \(\mathbf{Z}^{\text{est}}\). \(\omega\) is a filter kernel that takes two inputs that represent distance, \(-a\leq dx\leq a\) and \(-b\leq dy\leq b\), and produces a weight. For \(|dx|>a\) or \(|dy|>b\), \(\omega(dx,dy)=0\). The expression of the convolution is:
\[\omega*\mathbf{Z}^{\text{est}}(x,t)=\sum_{dx=-a}^{a}\sum_{dy=-b}^{b}\omega(dx,dy)\,\mathbf{Z}^{\text{est}}(x-dx,t-dy) \tag{6}\]
It measures the magnitude of the difference between each cell's estimated value and those of its neighbors (a code sketch of both cost functions is given after this list).
\begin{table}
\begin{tabular}{|c|c|l|} \hline Parameter & Value & Description \\ \hline \(c_{\text{cong}}\) & -15 km/h & Congested wave speed \\ \(c_{\text{free}}\) & 80 km/h & Free-flow wave speed \\ \(v_{\text{thr}}\) & 60 km/h & Critical traffic speed \\ \(\Delta v\) & 20 km/h & Transition width \\ \(\sigma\) & \(\Delta x/2\) & Space coordinate smoothing width \\ \(\tau\) & \(\Delta t/2\) & Time coordinate smoothing width \\ \hline \end{tabular}
\end{table} TABLE 1: Parameter Settings

2. Physics-informed cost function: We can encode traffic flow models as a regularization term in the cost function, which encourages the estimated result to follow some physics-based constraint. In this work, we consider a simple conservation principle based on the Lighthill-Whitham-Richards (LWR) traffic flow model [11], [13]:
\[\partial_{t}\rho(x,t)=-\partial_{x}Q\big{(}\rho(x,t)\big{)}, \tag{7}\]
where \(Q(\cdot)\) is a concave flux function that describes the local rate of flow (\(q(x,t)\)) as a function of traffic density (\(\rho(x,t)\)). See Fig. 2(a) for an illustration.
LWR-type models are non-linear hyperbolic partial differential equations (PDEs), written in a conservative form [9]. A Godunov scheme developed to numerically solve the hyperbolic PDE results in the following discrete dynamical system:
\[\rho(x,t+\Delta t)=\rho(x,t)+\frac{\Delta t}{\Delta x}[q_{j-1\to j}-q_{j\to j+1}], \tag{8}\]
where \(q_{j-1\to j}\) and \(q_{j\to j+1}\) denote the numerical traffic flux (the number of vehicles moving from cell \(j-1\) to cell \(j\) and from cell \(j\) to cell \(j+1\), respectively). Numerical traffic flux is calculated using the minimum supply-demand principle [21]:
\[q_{j-1\to j}=\min\big\{Q_{\text{dem}}(x-\Delta x,t),Q_{\text{sup}}(x,t)\big\} \tag{9}\]
\[q_{j\to j+1}=\min\big\{Q_{\text{dem}}(x,t),Q_{\text{sup}}(x+\Delta x,t)\big\} \tag{10}\]
The demand and supply functions in cell \((x,t)\) are defined as
\[Q_{\text{dem}}(x,t)=\begin{cases}Q(\rho(x,t))&\text{if }\rho(x,t)\leq\rho_{ \text{cr}}\\ q_{\text{max}}&\text{otherwise}\end{cases} \tag{11}\]
and
\[Q_{\text{sup}}(x,t)=\begin{cases}Q(\rho(x,t))&\text{if }\rho(x,t)>\rho_{ \text{cr}}\\ q_{\text{max}}&\text{otherwise}\end{cases}, \tag{12}\]
respectively. \(\rho_{\text{cr}}\) is the critical traffic density and \(q_{\text{max}}=Q(\rho_{\text{cr}})\) is the maximal local flux. These local demand and supply relations are illustrated in Fig. 2(b) and Fig. 2(c), respectively.
We use Eq.(8) as a regularization term
\[l_{\text{P}}=\sum_{(x,t)}\left\|\rho(x,t+\Delta t)-\rho(x,t)-\frac{\Delta t}{\Delta x}[q_{j-1\to j}-q_{j\to j+1}]\right\|_{2} \tag{13}\]
and obtain the physics-informed cost function as follows:
\[\frac{\|\mathbb{P}_{\Omega}(\mathbf{Z}^{\text{est}}-\mathbf{Z}^{\text{true} })\|_{F}}{\|\mathbb{P}_{\Omega}(\mathbf{J})\|_{F}}+\lambda\frac{l_{\text{P}}} {\|\mathbf{J}\|_{F}}. \tag{14}\]
It promotes solutions that satisfy the discrete conservation equation (8).
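As referenced in the list above, the convolution cost of Eqs. (5)-(6) with the kernel of Eq. (15) can be sketched as follows; this is our own illustrative code, with SciPy's `convolve2d` standing in for the kernel convolution.

```python
import numpy as np
from scipy.signal import convolve2d

def convolution_cost(Z_est, Z_true, mask, kernel, lam=0.01):
    """Eq. (5): masked relative error plus an l1 penalty on Eq. (6)."""
    data_term = (np.linalg.norm(mask * (Z_est - Z_true), "fro")
                 / np.linalg.norm(mask, "fro"))
    response = convolve2d(Z_est, kernel, mode="same")        # Eq. (6)
    penalty = np.abs(response).sum() / np.sqrt(Z_est.size)   # ||J||_F
    return data_term + lam * penalty

omega = np.array([[-1, 0, 0],
                  [-1, 3, 0],
                  [-1, 0, 0]])                               # Eq. (15)
Z_true = np.random.rand(6, 8)
mask = (np.random.rand(6, 8) < 0.3).astype(float)
print(convolution_cost(Z_true + 0.1, Z_true, mask, omega))
```

The physics-informed regularizer of Eqs. (8)-(13) can likewise be sketched; the triangular flux function below is a hypothetical stand-in for the concave \(Q(\cdot)\) (the paper uses the Newell-Franklin relation), chosen only to keep the example self-contained.

```python
import numpy as np

RHO_CR, RHO_JAM, V_F = 30.0, 120.0, 100.0     # veh/km/lane, km/h
Q_MAX = RHO_CR * V_F                          # flux at critical density

def Q(rho):
    """Hypothetical triangular flux; any concave Q fits Eqs. (11)-(12)."""
    w = Q_MAX / (RHO_JAM - RHO_CR)            # congested wave speed
    return np.where(rho <= RHO_CR, V_F * rho, w * (RHO_JAM - rho))

def demand(rho):                              # Eq. (11)
    return np.where(rho <= RHO_CR, Q(rho), Q_MAX)

def supply(rho):                              # Eq. (12)
    return np.where(rho > RHO_CR, Q(rho), Q_MAX)

def physics_residual(rho, dx, dt):
    """l_P of Eq. (13): deviation from the Godunov update of Eq. (8);
    rho has shape (space, time)."""
    q_in = np.minimum(demand(rho[:-2, :-1]), supply(rho[1:-1, :-1]))   # Eq. (9)
    q_out = np.minimum(demand(rho[1:-1, :-1]), supply(rho[2:, :-1]))   # Eq. (10)
    pred = rho[1:-1, :-1] + (dt / dx) * (q_in - q_out)
    return np.abs(rho[1:-1, 1:] - pred).sum()

rho = np.full((5, 4), 25.0)   # uniform free flow: zero residual
print(physics_residual(rho, dx=0.1, dt=0.001))
```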
Figure 1: Adaptive Smoothing Neural Network for Traffic State estimation
### _ASNN workflow and MASNN_
To evaluate the model, we split the ground-truth traffic state dataset into a training set and a testing set, where the training set consists of the partially observed traffic data (SDD or FCD). The training set is fed into the neural network as input. The cost function averages the loss over the entire training set, and its gradients with respect to the parameters are computed using backpropagation. The gradients are used to update each parameter. The training process takes multiple epochs to converge; training stops when the total cost no longer changes, or after a maximum number of iterations.
The ASNN inherits its architecture from the ASM; it has the same six parameters as weights. We should point out that, similar to other gradient-based learning methods, the ASNN architecture also faces the challenge of vanishing gradients: the weights in the smoothing layers (\(c,\sigma,\tau\)) receive only minor changes in each update during training. This phenomenon indicates that the parameters of the smoothing layers require reasonable initialization for overall efficiency. Fortunately, these parameters are shared by the ASNN and the ASM and have the same interpretation, so we can use their typical values as initialization and freeze the smoothing layers for fast convergence.
We noted from our experiments that the ASNN is a weak learner. For instance, when the data is hybrid, i.e., when it comes from heterogeneous sources including stationary detectors and floating cars, the ASNN estimations are sensitive to the initialization. To alleviate this problem and to make the ASNN a robust learner, we develop the multiple a priori adaptive smoothing neural network (MASNN). MASNN is graphically demonstrated in Fig. 3.
The MASNN uses ensemble averaging to make the predictions independent of the initial guess. To that end, we consider an ensemble of ASNNs with different possible initializations, and a convex combination layer is added to combine all the ASNN estimations they produce. The weights of the ASNNs and of the convex combination layer are determined together by optimizing the cost function.
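A sketch of the MASNN's final combination stage is given below; the softmax parameterization that keeps the combination convex is one plausible choice of ours, as the paper does not spell out this detail.

```python
import torch
import torch.nn as nn

class ConvexCombination(nn.Module):
    """Convex weights over the fields produced by K differently initialized
    ASNNs (ensemble averaging with learnable weights)."""
    def __init__(self, K):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(K))

    def forward(self, fields):                  # fields: (K, X, T)
        w = torch.softmax(self.logits, dim=0)   # nonnegative, sums to one
        return torch.einsum("k,kxt->xt", w, fields)

combine = ConvexCombination(K=3)
fields = torch.stack([torch.full((4, 5), v) for v in (20.0, 60.0, 90.0)])
print(combine(fields).mean())  # equals the plain average before training
```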
## 4 Experimental Setup
We test the performance of our method using the NGSIM US 101 highway data and German highway data. The NGSIM US 101 data consists of vehicle trajectories from a road section which is 670 meters in length and German highway data consists of vehicle trajectories from a 400 meter road section. They are converted to a ground-truth macroscopic speed field \(\mathbf{V}(\mathbf{x},\mathbf{t})\). Thus in this work, we deal with estimating the speed field. Heatmaps of the speed
Figure 2: Flux, demand, and supply relations. \(\rho_{\text{jam}}\) is the ‘jam density’, which is a maximum traffic density at which vehicles cease to move, i.e., their speeds are zero, which results in zero flux.
Figure 3: Multiple A Priori Adaptive Smoothing Neural Network for Traffic State estimation
fields are shown in Fig. 4 and Fig. 5 for the NGSIM US 101 and German highway data, respectively, both over an interval of 600 seconds. The US101 data come from 4 evenly spaced stationary traffic sensors (inductance loop detectors). The German highway input data come from a combination of 5% floating car trajectories and 2 stationary detectors. Both cases involve complex build-up and dissipation of congestion, with mixtures of free flow, congested traffic, and shockwaves traveling in the opposite direction of traffic (the red bands in the figures).
We conduct two kinds of experiments in this study. In the first experiment, we compare the estimation error of the proposed ASNN algorithm with that of the conventional ASM. We use the typical values from Table 1 as the optimal settings for ASM and as the initialization of ASNN. The ASNN is trained using both the convolution cost function and the physics-informed cost function. In the second experiment, we evaluate the benefits of the proposed MASNN estimation algorithm when the data come from heterogeneous sources and a good initialization of the ASNN parameters is not known.
We optimize \(\lambda\) using a grid search on candidate values \([10,5,1,0.5,0.1,0.05,0.01]\) and set \(\lambda\) to 0.01 in both experiments. The kernel \(\omega\) in the convolution cost function is chosen as
\[\omega=\begin{bmatrix}-1&0&0\\ -1&3&0\\ -1&0&0\end{bmatrix} \tag{15}\]
The heuristic behind this kernel is that current traffic states are influenced by past states.
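As an illustration, the convolution cost can be evaluated by correlating the estimated speed field with \(\omega\); the axis orientation (columns indexing time) and the boundary handling below are assumptions.

```python
import numpy as np
from scipy.ndimage import correlate

omega = np.array([[-1, 0, 0],
                  [-1, 3, 0],
                  [-1, 0, 0]])

def convolution_cost(v_field):
    """Penalty comparing each cell of the speed field v_field[x, t]
    against its three spatial neighbors at the previous time step,
    per the kernel of Eq. (15)."""
    residual = correlate(v_field.astype(float), omega, mode="nearest")
    return np.linalg.norm(residual)
```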
The traffic density \(\rho\) introduced in the physics-informed cost function is calculated by inverting the Newell-Franklin relation [12], \(v=V(\rho)=\frac{1}{\rho}Q(\rho)\), to transform speed value in each cell to a density as follows:
\[\rho(x,t)=V^{-1}\big{(}v(x,t)\big{)}=\frac{\rho_{\rm jam}}{1-\frac{v_{\rm f }}{|c_{\rm cong}|}\ln{(1-\frac{v(x,t)}{v_{\rm f}})}}, \tag{16}\]
where \(\rho_{\rm jam}\) is the jam density (roughly 120 veh/km/lane), \(v_{\rm f}\) is the desired (free-flow) speed, which is comparable to the speed limit of the road, and \(c_{\rm cong}\) is the congested wave speed. We set \(v_{\rm f}=100\) km/h, which is greater than the maximum observed speed in the data. With this set of parameters, \(\rho_{\rm cr}\) is 30 veh/km/lane.
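The inversion in Eq. (16) is straightforward to implement; the sketch below uses the stated parameter values and adds a clipping safeguard (not part of the paper) to keep the logarithm finite.

```python
import numpy as np

def speed_to_density(v, v_f=100.0, c_cong=-15.0, rho_jam=120.0):
    """Invert the Newell-Franklin relation per Eq. (16).
    Units follow the text: km/h for speeds, veh/km/lane for density."""
    v = np.clip(v, 0.0, v_f - 1e-6)   # numerical safeguard
    return rho_jam / (1.0 - (v_f / abs(c_cong)) * np.log(1.0 - v / v_f))
```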
We investigate the algorithm's accuracy both numerically and visually, by reconstructing the resulting heatmaps. The estimation quality of the speed field \(\mathbf{\hat{V}}\) is measured by the relative error in all experiments:
\[m_{\rm r}=\frac{\|\mathbf{\hat{Z}}-\mathbf{Z}\|_{\rm F}}{\|\mathbf{Z}\|_{\rm F}} \tag{17}\]
## 5 Result Analysis and Discussion
_Expt 1: Comparison of ASM and ASNN:_
The estimation errors \(m_{\rm r}\) for conventional ASM and ASNN with both convolution and physics losses are summarized in Table 2, with best performance highlighted in bold.
We observe that the ASNN estimates outperform ASM in both cases. While the error estimates of ASM and ASNN do not differ significantly, we note that the conventional ASM predictions are non-physical for the US 101 data when the typical parameter values in Table
\begin{table}
\begin{tabular}{|c|c||c|c|} \hline US101 & \(m_{\rm r}\) & Germany & \(m_{\rm r}\) \\ \hline ASM & 0.12417 & ASM & 0.08179 \\ ASNN (Conv) & 0.11868 & ASNN (Conv) & **0.08123** \\ ASNN (Phys) & **0.11866** & ASNN (Phys) & **0.08123** \\ \hline \end{tabular}
\end{table} TABLE 2: Estimation error results
Figure 4: US101 data with stationary detector inputs
Figure 5: German highway data with heterogeneous inputs
1 are used. Specifically, we observe a cross-hatch pattern in the ASM estimates in free-flow conditions, which violates causality in traffic (free-flowing traffic propagates with the direction of travel of the vehicles). The ASNN corrects the non-physical predictions produced by ASM (see Fig. 6).
We note that as the data available for interpolation increase, the ASM gives more accurate predictions and its accuracy approaches that of ASNN. For instance, we used trajectory data in addition to detector data to estimate the traffic state for the German highway and noted that the ASM results are closer to the ASNN results; see Fig. 7.
ASNNs trained using the convolution cost and the physics cost perform equally well in this case. This indicates that the temporal causality considered in the convolution cost function is effective in reconstructing traffic patterns. It also suggests that it is reasonable to adopt traffic flow models as regularization terms when we have prior information on the traffic patterns to be reconstructed. We noted in our experiments that the ASNN converges well if the initialization parameter values are chosen within the range between 0 and their typical values in Table 1.
_Expt 2: MASNN with multiple a priori estimates_
Here, we discuss the results obtained using 5 sets of possible initializations for the German highway data with heterogeneous inputs. We use 5 different initialization values for \(\tau\): \(\{2.5,5,7.5,10,12.5\}\). The congested and free-flow wave speeds are fixed at -15 km/h and 80 km/h, respectively, for each initialization set. \(V_{\mathrm{thr}}\) and \(\Delta v\) are initialized to 1 km/h in all the sets. The \(\sigma\) is still taken to be \(\Delta x/2\), i.e., half the distance between the detectors. The five different initializations for \(\tau\) were chosen because of uneven time gaps in the floating car data. We note that adding more initializations does not change the results significantly.
The estimation error \(m_{\mathrm{r}}\) of MASNN is 0.08021. From Table 2, and visually from Fig. 7 and Fig. 8, we note MASNN's ability to combine multiple a priori estimates to improve its performance over that of ASNN.
The trained weights in the convex combination layer are recorded in Table 3. Each initialization set is then tested with conventional ASM estimation, and its corresponding estimation error is recorded. The initialization set with the highest trained weight is highlighted in bold. A comparison of the corresponding ASM errors reveals that MASNN preferred weights that produce better estimation results for ASM, which indicates that MASNN is also capable of choosing suitable parameters for conventional ASM. Fig. 8
Figure 6: Estimation results for US101 data with stationary detector inputs: In the pane showing the conventional ASM case, non-physical patterns are highlighted with dark arrows and a red circle.
Figure 7: Estimation results for German highway data with heterogeneous inputs
displays heatmaps of the speed field produced by MASNN and by ASM with different parameter settings.
We observe that MASNN produced shockwave patterns closer to the ground truth than those estimated by ASM and ASNN in Fig. 7. Even with the parameter set(s) determined by MASNN, the ASM smooths out the sharp changes more than the MASNN. The smoothing effect in ASM increases with \(\tau\).
## 6 Summary and Conclusion
In this study, we developed a neural network (ASNN) based on the adaptive smoothing method (ASM). The ASNN tunes the parameters of the ASM using sparse data from detectors and reconstructs the traffic state along the entire road. We tested the ASNN with two cost functions: one minimizes the data loss with a regularizer that penalizes sharp differences in speed between neighboring cells; the other minimizes the data loss with a physics-informed regularizer based on the principle of conservation of traffic density. We also proposed a way to enhance the robustness of ASNN learning by using an ensemble averaging technique, which we refer to as MASNN. We applied the ASNN and the MASNN to two real-world data sets: the NGSIM US 101 highway data set, which comes from stationary detectors, and a German highway data set from heterogeneous sources including stationary detectors and floating vehicles. Our experiments reveal that the ASNN initialized with the typical parameter values used for ASM outperforms the ASM in estimating the traffic state in all cases. In the case of heterogeneous detectors, where a good initialization set for the parameters is not known, the MASNN has shown excellent performance in estimating traffic. We also note that the initialization parameters with the highest weights as determined by MASNN can serve as good parameter values for conventional ASM. Overall, we find that the MASNN and the ASNN are promising tools for traffic state estimation.
We find scope for future improvements. The ASNN architecture suffers from the vanishing gradient effect, which affects the parameters in the initial layers. Moreover, the ASM, being a smoothing method, smears out sharp variations in the patterns. A feed-forward network in lieu of the smoothing layer might improve the performance in both of the above-mentioned aspects. The Frobenius-norm-based costs focus on large values, which results in inaccurate estimates when speeds are low. One can circumvent this by using density as the variable to be estimated or by adding alternate penalty terms to emphasize low-speed traffic.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Germany: \(\sigma,\tau\) & weight & ASM \(m_{\rm r}\) \\ \hline
**50,2.5** & **1** & **0.08240** \\
**50,5** & **1** & **0.08674** \\
50,7.5 & 0 & 0.09367 \\
50,10 & 0 & 0.10154 \\
50,12.5 & 0 & 0.10963 \\ \hline \end{tabular}
\end{table} TABLE 3: MASNN trained weights in convex combination layer
Figure 8: MASNN estimation results and its optimal temporal smoothing widths applied in ASM
## Acknowledgment
This work was supported by the NYUAD Center for Interacting Urban Networks (CITIES), funded by Tamkeen under the NYUAD Research Institute Award CG001. The opinions expressed in this article are those of the authors alone and do not represent the opinions of CITIES.
|
2301.11773 | Automatic Modulation Classification with Deep Neural Networks | Automatic modulation classification is a desired feature in many modern
software-defined radios. In recent years, a number of convolutional deep
learning architectures have been proposed for automatically classifying the
modulation used on observed signal bursts. However, a comprehensive analysis of
these differing architectures and importance of each design element has not
been carried out. Thus it is unclear what tradeoffs the differing designs of
these convolutional neural networks might have. In this research, we
investigate numerous architectures for automatic modulation classification and
perform a comprehensive ablation study to investigate the impacts of varying
hyperparameters and design elements on automatic modulation classification
performance. We show that a new state of the art in performance can be achieved
using a subset of the studied design elements. In particular, we show that a
combination of dilated convolutions, statistics pooling, and
squeeze-and-excitation units results in the strongest performing classifier. We
further investigate this best performer according to various other criteria,
including short signal bursts, common misclassifications, and performance
across differing modulation categories and modes. | Clayton Harper, Mitchell Thornton, Eric Larson | 2023-01-27T15:16:06Z | http://arxiv.org/abs/2301.11773v1 | # Automatic Modulation Classification with Deep Neural Networks
###### Abstract
Automatic modulation classification is a desired feature in many modern software-defined radios. In recent years, a number of convolutional deep learning architectures have been proposed for automatically classifying the modulation used on observed signal bursts. However, a comprehensive analysis of these differing architectures and importance of each design element has not been carried out. Thus it is unclear what tradeoffs the differing designs of these convolutional neural networks might have. In this research, we investigate numerous architectures for automatic modulation classification and perform a comprehensive ablation study to investigate the impacts of varying hyperparameters and design elements on automatic modulation classification performance. We show that a new state of the art in performance can be achieved using a subset of the studied design elements. In particular, we show that a combination of dilated convolutions, statistics pooling, and squeeze-and-excitation units results in the strongest performing classifier. We further investigate this best performer according to various other criteria, including short signal bursts, common misclassifications, and performance across differing modulation categories and modes.
Automatic modulation classification, deep learning, convolutional neural network.
## I Introduction
Automatic modulation classification (AMC) is of particular interest for radio frequency (RF) analysis and in modern software-defined radios to perform numerous tasks including "spectrum interference monitoring, radio fault detection, dynamic spectrum access, opportunistic mesh networking, and numerous regulatory and defense applications" [1]. Upon detection of an RF signal with unknown characteristics, AMC is a crucial initial procedure in order to demodulate the signal. Efficient AMC allows for maximal usage of transmission mediums and can provide resilience in modern cognitive radios. Systems capable of adaptive modulation schemes can monitor current channel conditions with AMC and adjust exercised modulation schemes to maximize usage across the transmission medium.
Moreover, for receivers that have a versatile demodulation capability, AMC is a requisite task. The correct demodulation scheme must be applied to recover the modulated message within a detected signal. In systems where the modulation scheme is not known _a priori_, AMC allows for efficient prediction of the employed modulation scheme. Higher performing AMC can increase the throughput and accuracy of these systems; therefore, AMC is currently an important research topic in the fields of machine learning and communication systems, specifically for software-defined radios.
Typical benchmarks are constructed on the premise that the AMC model must classify not only the mode of modulation (_e.g._, QAM), but the exact variant of that mode of modulation (_e.g._, 32QAM). While many architectures have proven to be effective at high signal to noise ratios (SNRs), performance degrades significantly at lower SNRs that often occur in real-world applications. Other works have investigated increasing classification performance at lower SNR levels through the use of SNR-specific modulation classifiers [2] and clustering based on SNR ranges [3]. To perform classification, a variety of signal features have been investigated. Historically, AMC has relied upon statistical moments and higher order cumulants [4, 5, 6] derived from the received signal. Recent approaches [1, 7, 8, 9] use raw time-domain in-phase (I) and quadrature (Q) components as features to predict the modulation variant of a signal. Further works have investigated additional features including I/Q constellation plots [10, 11, 12].
After selecting the signal input features, machine learning models are used to determine statistical patterns in the data for the classification task. Support vector machines, decision trees, and neural networks are commonly used classifiers for this application [13, 14, 1, 3, 7, 8, 9, 1]. Residual neural networks (ResNets), along with convolutional neural networks (CNNs), have been shown to achieve high classification performance for AMC [1, 3, 7, 8, 9, 10]. Thus, deep learning based methods in AMC have become more prevalent due to their promising performance and their ability to generalize to large, complex datasets.
While other works have contributed to increased AMC performance, the importance of many design elements for AMC remains unclear and a number of architectural elements have yet to be investigated. Therefore, in this work, we aim to formalize the impact of a variety of architectural changes and model design decisions on AMC performance. Numerous modifications to architectures from previous works, including our own [7], and novel combinations of elements applied to AMC are considered. After an initial investigation, we provide a comprehensive ablation study in this work to investigate the performance impact of various architectural modifications. Additionally, we achieve new state-of-the-art classification performance on the RadioML 2018.01A dataset [15]. Using the best performing model, we provide additional analyses that characterize its performance across modulation modes and
signal burst duration.
## II Related Work
The area of AMC has been investigated by several research groups. We provide a summary of results in AMC to provide context and motivation for our contributions to AMC and the corresponding ablation study described in this paper.
Corgan _et al._[8] illustrate that deep convolutional neural networks are able to achieve high classification performance, particularly at low SNRs, on a dataset comprising 11 different types of modulation. It was found that CNNs exceeded the performance of expertly crafted features. Comparing results with the architectures in [8] and [1], the authors of [16] improved AMC performance utilizing self-supervised contrastive learning. First, an encoder is pre-trained in a self-supervised manner by creating contrastive pairs with data augmentation. By creating different views of the input data through augmentation, a contrastive loss is used to maximize the cosine similarity between positive pairs (augmented views of the same input). Once converged, the encoder is frozen (_i.e._, the weights are set to fixed values) and two fully-connected layers are added following the encoder to form the classifier. The classifier is trained using supervised learning to predict the 11 different modulation schemes. Chen _et al._ applied a novel architecture to the same dataset where the input signal is sliced and transformed into a square matrix, and a residual network is applied to predict the modulation schemes [17]. Other work has investigated empirical and variational mode decomposition to improve few-shot learning for AMC [18]. In our work, we utilize a larger, more complex dataset consisting of 24 modulation schemes, as well as modeling improvements.
Spectrograms and I/Q constellation plots were found in [19] to be effective input features to a traditional CNN, achieving nearly equivalent performance to the baseline CNN in [1], which used raw I/Q signals.
Further, [10, 11, 12] also used I/Q constellations as an input feature in their machine learning models on a smaller scale of four or eight modulation types. Other features have been used in AMC--[20, 21] utilized statistical features and support vector machines while [22, 23] used fusion methods in CNN classifiers. Mao _et al._ utilized various constellation diagrams at varying symbol timings alleviating symbol timing synchronization concerns [24]. A squeeze-and-excitation [25] inspired architecture was used as an attention mechanism to focus on the most important diagrams.
Although spectrograms and constellation plots have shown promise, they require additional processing overhead and have had comparable performance to raw I/Q signals. In addition, models that use raw I/Q signals could be more adept at handling varying-length signals than constellation plots because they are not limited by periodicity constraints for short duration signals (_i.e._, burst transmissions). Consequently, we utilize raw I/Q signals in our work.
Tridgell, in his dissertation [26], builds upon these works by investigating these architectures when deployed on resource-limited Field Programmable Gate Arrays (FPGAs). His work stresses the importance of reducing the number of parameters for modulation classifiers because they are typically deployed in resource-constrained embedded systems.
Fig. 1: ResNet architecture used in [1]. Each block represents a unit in the network, which may be comprised of several layers and connections as shown on the right of the figure. Dimensions of the tensors on the output of each block are also shown where appropriate.
Fig. 2: X-Vector architecture overview. The convolutional activations immediately before pooling are shown. These activations are fed into two statistical pooling layers that collapse the activations over time, creating a fixed-length tensor that can be further processed by fully connected dense layers.
In [1], O'Shea _et al._ created a dataset with 24 different types of modulation, known as RadioML 2018.01A, and achieved high classification performance using convolutional neural networks--specifically, using residual connections (see Figure 1) within the network (ResNet). A total of 6 residual stacks were used in the architecture. A residual stack is defined as a series of convolutional layers, residual units, and a max pooling operation, as shown in Figure 1. The ResNet employed by [1] attained approximately 95% classification accuracy at high SNR values.
Harper _et al._ proposed the use of X-Vectors [27] to increase classification performance using CNNs [7]. X-Vectors are traditionally used in speaker recognition and verification systems making use of aggregate statistics. X-Vectors employ statistical moments, specifically mean and variance, across convolutional filter outputs. It can be theorized that taking the mean and variance of the embedding layer helps to eliminate signal-specific information, leaving global, modulation-specific characteristics. Figure 2 illustrates the X-Vector architecture where statistics are computed over the activations from a convolutional layer producing a fixed-length vector.
Additionally, this architecture maintains a fully-convolutional structure enabling variable size inputs into the network. Using statistical aggregations allows for this property to be exploited. When using statistical aggregations, the input to the first dense layer is dependent upon the number of filters in the final convolutional layer. The number of filters is a hyperparameter, independent of the length in time of the input signal into the neural network.
Without the statistical aggregations, the input signals into a traditional CNN or ResNet would need to be resampled, cropped, or padded to a fixed length in time so that there is no size mismatch between the final convolutional output and the first dense layer. While the dataset used in this work has uniformly sized signals (\(1024\times 2\)), this is an architectural advantage in deployment, as received signals may vary in duration. Instead of modifying the inputs to the network via sampling, cropping, padding, etc., the X-Vector architecture can directly operate on variable-length inputs without modifications to the network or input signal.
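For concreteness, a minimal PyTorch sketch of the statistics pooling step, assuming a (batch, channels, time) activation layout, is:

```python
import torch

def statistics_pooling(h):
    """X-Vector style pooling: collapse a (batch, channels, time)
    tensor of convolutional activations into a fixed-length
    (batch, 2 * channels) vector of per-channel means and variances,
    independent of the signal duration."""
    mean = h.mean(dim=-1)
    var = h.var(dim=-1, unbiased=False)
    return torch.cat([mean, var], dim=1)
```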
Figure 3 outlines the X-Vector architecture employed in [7], where \(F=[f_{1},f_{2},...,f_{7}]\) with each \(f_{i}=64\) and \(K=[k_{1},k_{2},...,k_{7}]\) with each \(k_{i}=3\). Mean and variance pooling are performed on the final convolutional outputs, concatenated, and fed through a series of dense layers creating the fixed-length X-Vector. A maximum of 98% accuracy was achieved at high SNR levels.
The work of [7] replicated the ResNet architecture from [1] and compared the results with the X-Vector architectures as seen in Figure 4. Harper _et al._[7] were able to reproduce this architecture achieving a maximum of 93.7% accuracy. The authors attribute the difference in performance to differences in the train and test set separation they used since these parameters were unavailable. As expected, the classifiers perform with a higher accuracy as the SNR value increases. In signals with a low SNR value, noise becomes more dominant and the signal is harder to distinguish. In modern
Fig. 4: Accuracy comparison of the reproduced ResNet in [1] and the X-Vector inspired model from [7] over varying SNRs. This accuracy comparison shows the superior performance of the X-Vector architecture, especially at higher SNRs, and supports using this architecture as a baseline for the improvements investigated in this paper.
Fig. 3: Proposed CNN Architecture in [7]. This is the first work to employ an X-Vector inspired architecture for AMC showing strong performance. This architecture is used as a baseline for the modifications investigated in this paper. The \(f\) and \(k\) variables shown designate the number of kernels and size of each kernel, respectively, in each layer. These parameters are investigated for optimal sizing in our initial investigation.
applications, a high SNR value is not always a given. However, there is still significant improvement compared to random chance, even at low SNR values. Moreover, in systems where the modulation type must be classified quickly, this could become crucially important as fewer demodulation schemes would need to be applied in a trial and error manner to discover the correct scheme.
One challenge of AMC is that good performance is desired across a large range of SNRs. For instance, Figure 4 illustrates that modulation classification performance plateaued in peak performance beyond \(+8\)dB SNR and approached chance classification performance below \(-8\)dB SNR on the RadioML 2018.01A dataset. This range is denoted by the shaded region. Harper _et al._ investigated methods to improve classification performance in this range by employing an SNR regression model to aid separate modulation classifiers (MCs). While other works have trained models to be as resilient as possible under varying SNR conditions, Harper _et al._ employed SNR-specific MCs [2].
Six MCs were created by discretizing the SNR range to ameliorate performance between \(-8\)dB to \(+8\)dB SNR (see Figure 5). These groupings were chosen in order to provide sufficient training data to avoid overfitting the MCs and provide enough resolution so that combining MCs provided more value than a single classifier.
By first predicting the SNR of the received signal with a regression model, an SNR-specific MC that was trained on signals with the predicted SNR is applied to make the final prediction. Although the SNR values in the dataset are discrete, SNR is measured on a continuous scale in a deployment scenario and can vary over time. As a result, regression is used over classification to model SNR. Using this approach, different classifiers can tune their feature processing for differing SNR ranges. Each MC in this approach uses the same architecture as that proposed in [7]; however, each MC is trained with signals within each MC's SNR training range (see Table I).
Highlighting improvements across varying SNR values, Figure 6 shows the overall performance improvement (in percentage accuracy) using the SNR-assisted architecture compared to the baseline classification architecture described in [7]. While a slight decrease in performance was observed for \(-8\)dB and a larger decrease for \(-2\)dB, improvement is shown under most SNR conditions--particularly in the target range of \(-8\)dB to \(+8\)dB. A possible explanation for the decrease in performance at particular SNRs is that the optimization for a particular MC helped overall performance for a grouping at the expense of a single value in the group. That is, the MC for \([-4,0)\) boosted the overall performance by performing well at \(-4\) and \(0\)dB at the expense of \(-2\)dB. Due to the large size of the testing set, these small percentage gains are impactful because thousands more classifications are correct. All results are statistically significant based on a McNemar's test [28], therefore achieving new state-of-the-art performance at the time.
Soltani _et al._[3] found that the SNR regions \([-10,-2]\)dB, \([0,8]\)dB, and \([10,30]\)dB have similar classification patterns. Instead of predicting exact modulation variants, the authors group commonly confused variants into more generic, coarse-grained labels. This grouping increases AMC performance by combining modulation variants that are commonly confused; however, it also decreases the sensitivity of the model to the numerous possible variants.
Cai _et al._ utilized a transformer-based architecture to aid performance at low SNR levels with relatively few training parameters (approximately 265,000) [29]. A multi-scale network along with center loss [30] was used in [31]. It was found that larger kernel sizes improved AMC performance. We further explore the performance impact of kernel size in this work. Zhang _et al._ proposed a high-order attention mechanism using the covariance matrix, achieving a maximum accuracy of 95.49% [32].
Although many discussed works use the same RadioML 2018.01A dataset, there is a lack of a uniform dataset split to establish a benchmark for papers to report performance. In an effort to make AMC work more reproducible and comparable across publications, we have made our dataset split and accompanying code available on GitHub.1
Footnote 1: [https://github.com/charper/Automatic-Modulation-Classification-with-Deep-Neural-Networks](https://github.com/charper/Automatic-Modulation-Classification-with-Deep-Neural-Networks)
While numerous works have investigated architectural improvements, we aim to improve upon these works by introducing additional modifications as well as a comprehensive ablation study that illustrates the improvement of each modification. With the new modifications, we achieve new state-of-the-art AMC performance.
## III Dataset
To evaluate different machine learning architectures, we use the RadioML 2018.01A dataset, which comprises 24
Fig. 5: The architecture using SNR regression and SNR-specific classifiers from [2]. Each MC block shown employs the same architecture as the baseline from [7], but specifically trained to perform AMC within a more narrow range of SNRs (denoted as dB ranges in each block).
different modulation types [1, 15]. Due to the complexity and variety of modulation schemes in the dataset, it is fairly representative of typically encountered modulation schemes. Moreover, this variety increases the likelihood that AMC models will generalize to more exotic or non-existing modulation schemes in the training data that are derived from these traditional variants.
There are a total of 2.56 million labeled signals, \(S(T)\), each consisting of 1024 time domain digitized intermediate frequency (IF) samples of in-phase (\(I\)) and quadrature (\(Q\)) signal components where \(S(T)=I(T)+jQ(T)\). The data was collected at a 900MHz IF with an assumed sampling rate of 1MS/sec such that each 1024 time domain digitized I/Q sample is 1.024 ms [33]. The 24 modulation types and the representative groups that we chose for each are listed as follows:
* **Amplitude**: OOK, 4ASK, 8ASK, AM-SSB-SC, AM-SSB-WC, AM-DSB-WC, and AM-DSB-SC
* **Phase**: BPSK, QPSK, 8PSK, 16PSK, 32PSK, and OQPSK
* **Amplitude and Phase**: 16APSK, 32APSK, 64APSK, 128APSK, 16QAM, 32QAM, 64QAM, 128QAM, and 256QAM
* **Frequency**: FM and GMSK
Each modulation type includes a total of \(106,496\) observations ranging from \(-20\)dB to \(+30\)dB SNR in \(2\)dB steps for a total of 26 different SNR values. SNR is assumed to be consistent over the same window length as the I/Q sample window. For evaluation, we divided the dataset into 1 million different training observations and 1.5 million testing observations under a random shuffle split, stratified across modulation type and SNR. Because of this balance, the expected performance for a random chance classifier is 1/24 or 4.2%. With varying SNR levels across the dataset, it is expected that the classifier would perform with a higher degree of accuracy as the SNR value is increased. For consistency, each model investigated in this work was trained and evaluated on the same train and test set splits.
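A sketch of such a stratified shuffle split with scikit-learn follows; `signals`, `labels`, and `snrs` are placeholders for the loaded arrays, and the remainder after the 1 million training signals forms the (roughly 1.5 million signal) test set.

```python
from sklearn.model_selection import train_test_split

# Stratify jointly on modulation type and SNR so both remain
# balanced across the two splits.
strata = [f"{mod}_{snr}" for mod, snr in zip(labels, snrs)]
X_train, X_test, y_train, y_test = train_test_split(
    signals, labels, train_size=1_000_000, stratify=strata,
    random_state=0)
```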
## IV Initial Investigation
In this work, we use the architecture described in [7] as the baseline architecture. We note that [2] improved upon the baseline; however, each individual MC used the baseline architecture except trained on specific SNR ranges. Therefore, the base architectural elements were similar to [7], but separated for different SNRs. In this work, our focus is to improve upon the employed CNN architecture for an individual MC rather than the use of several MCs. Therefore, we use the architecture from [7] as our baseline.
Before exploring an ablation study, we make a few notable changes from the baseline architecture in an effort to increase AMC performance. This initial exploration is presented separately for clarity, as it spares the ablation study that follows from requiring an inordinate number of models. It also introduces the general training procedures that assist and orient the reader in following the ablation study--the ablation study mirrors these procedures. We first provide an initial investigation exploring these notable changes.
We train each model using the Adam optimizer [34] with an initial learning rate _lr_ = 0.0001, a decay factor of 0.1 if the validation loss does not decrease for 12 epochs, and a minimum learning rate of 1e-7. If the validation loss does not decrease after 20 epochs, training is terminated and the models are deemed converged. For all experiments, mini-batches of size 32 are used. As has been established in most programming packages for neural networks, we refer to fully connected neural network layers as _dense_ layers, which are typically followed by an activation function.
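This procedure maps directly onto standard tooling; the following PyTorch sketch is illustrative (loader names, the epoch cap, and the cross-entropy objective are assumptions).

```python
import torch

def train(model, train_loader, val_loader, max_epochs=1000):
    """Adam with lr = 1e-4; lr decays by 10x after 12 epochs without
    validation improvement (floor 1e-7); training stops after 20
    epochs without improvement."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
        opt, factor=0.1, patience=12, min_lr=1e-7)
    loss_fn = torch.nn.CrossEntropyLoss()
    best, stale = float("inf"), 0
    for _ in range(max_epochs):
        model.train()
        for x, y in train_loader:          # mini-batches of size 32
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item()
                      for x, y in val_loader) / len(val_loader)
        sched.step(val)                    # decay lr on plateau
        best, stale = (val, 0) if val < best else (best, stale + 1)
        if stale >= 20:                    # deemed converged
            break
```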
### _Architectural Changes_
A common design pattern in neural networks is to use fewer but larger kernels in the early layers of the network and a greater number of smaller kernels in the later layers; we adopt this pattern relative to the baseline architecture. This is commonly referred to as the information distillation pipeline [35]. By utilizing a smaller number of large kernels in early layers, we are able to increase the temporal context of the convolutional features without dramatically increasing the number of trainable parameters. Numerous but smaller kernels are used in later convolutional layers to create more abstract features. Configuring the network in this manner is especially popular in image classification, where later layers represent more abstract, class-specific features.
We investigate this modification in three stages, using the baseline architecture described in Figure 3[7]. We denote number of filters in the network and the filter sizes as \(F=[f_{1},f_{2},...,f_{7}]\) and \(K=[k_{1},k_{2},...k_{7}]\) in Figure 3. The baseline architecture used \(f=64\) (for all layers) and \(k=3\) (consistent kernel size for all layers). Our first modification to the baseline architecture is \(F=[32,48,64,72,84,96,108]\), but keeping \(k=3\) for all layers. Second, we use the baseline architecture, but change the size of filters in the network where \(f=64\) (same as baseline) and \(K=[7,5,7,5,3,3,3]\). Third, we make both modifications and compare the result to the baseline model where \(F=[32,48,64,72,84,96,108]\) and \(K=[7,5,7,5,3,3,3]\). These modifications are not exhaustive
Fig. 6: Summary of residual improvement in accuracy over [7] that was first published in [2]. This work showed how the baseline architecture could be tuned to specific SNR ranges. Positive improvement is observed for most SNR ranges.
searches; rather, these modifications are meant to guide future changes to the network by understanding the influence of filter quantity and filter size in a limited context.
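The three configurations can be expressed as a single parameterized stack; the sketch below builds the seven convolutional layers from \(F\) and \(K\) (pooling and the remaining layers of Figure 3 are omitted for brevity).

```python
import torch.nn as nn

def conv_stack(F=(32, 48, 64, 72, 84, 96, 108),
               K=(7, 5, 7, 5, 3, 3, 3)):
    """Seven-layer 1-D convolutional front end; the baseline uses
    F = (64,) * 7 and K = (3,) * 7. Input has 2 channels (I and Q)."""
    layers, c_in = [], 2
    for f, k in zip(F, K):
        layers += [nn.Conv1d(c_in, f, kernel_size=k, padding=k // 2),
                   nn.ReLU()]
        c_in = f
    return nn.Sequential(*layers)
```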
### _Initial Investigation Results_
As shown in Table II, increasing the size of the filters in earlier layers increases both average and maximum test accuracy over [7], but at the cost of additional parameters. A possible explanation for the increase in performance is the increase in temporal context due to the larger kernel sizes. Increasing the number of filters without increasing temporal context decreases performance. This is possibly because it increases the complexity of the model without adding additional signal context.
Figure 7 illustrates the change in accuracy with varying SNR. The combined model, utilizing various kernel sizes and numbers of filters, consistently outperforms the other architectures across changing SNR conditions.
Although increasing the number of filters decreases performance alone, combining the approach with larger kernel sizes yields the best performance in our initial investigation. Increasing the temporal context may have allowed additional filters to better characterize the input signal.
Because increased temporal context improves AMC performance, we are inspired to investigate additional methods such as squeeze-and-excitation blocks and dilated convolutions that can increase global and local context [25, 36].
## V Ablation Study Architecture Background
Building upon our findings from our initial investigation, we make additional modifications to the baseline architecture. For the MCs, we introduce dilated convolutions, squeeze-and-excitation blocks, self-attention, and other architectural changes. We also investigate various kernel sizes and the quantity of kernels employed from the initial investigation. Our goal is to improve upon existing architectures while investigating the impact of each modification on classification accuracy through an ablation study. In this section, we describe each modification performed.
### _Squeeze-and-Excitation Networks_
Squeeze-and-Excitation (SE) blocks introduce a channel-wise attention mechanism first proposed in [25]. Due to the limited receptive field of each convolutional filter, SE blocks propose a recalibration step based on global statistics across channels (average pooling) to provide global context. Although initially utilized for image classification tasks [25, 37, 38], we argue the use of SE blocks can provide meaningful global context to the convolutional network used for AMC over the time domain.
Figure 8 depicts an SE block. The squeeze operation is defined as temporal global average pooling across convolutional filters. For an individual channel, \(c\), the squeeze operation is defined as:
\[z_{c}=F_{sq}(x_{c})=\frac{1}{T}\sum_{i=1}^{T}x_{i,c} \tag{1}\]
where \(X\in\mathbb{R}^{T\times C}=[x_{1},x_{2},...,x_{C}]\), \(Z\in\mathbb{R}^{1\times C}=[z_{1},z_{2},...,z_{C}]\), \(T\) is the number of samples in time, and \(C\) is the total number of channels. To model nonlinear interactions between channel-wise statistics, \(Z\) is fed into a series of dense layers followed by nonlinear activation functions:
\[s=F_{ex}(z,W)=\sigma(g(z,W))=\sigma(W_{2}\delta(W_{1}z)) \tag{2}\]
where \(\delta\) is the rectified linear (ReLU) activation function, \(W_{1}\in\mathbb{R}^{\frac{C}{r}\times C}\), \(W_{2}\in\mathbb{R}^{C\times\frac{C}{r}}\), \(r\) is a dimensionality reduction ratio, and \(\sigma\) is the sigmoid activation function. The sigmoid function is chosen as opposed to the softmax function so that multiple channels can be accentuated and are not mutually exclusive. That is, the normalization term in the softmax can cause dependencies among channels, so the sigmoid activation is preferred. \(W_{1}\) imposes a bottleneck to improve generalization performance and reduce parameter counts, while \(W_{2}\) increases the dimensionality back to the original number of channels for the recalibration operation. In our work, we
Fig. 8: Squeeze-and-Excitation block proposed in [25]. One SE block is shown applied to a single layer convolutional output activation. Two paths are shown, a scaling path and an identity path. The scaling vector is applied across channels to the identity path of the activations.
Fig. 7: SNR vs. accuracy comparison of the initial investigation using the baseline architecture. Noticeable improvements can be observed across all SNRs.
use \(r=2\) for all SE blocks to ensure a reasonable number of trainable parameters without over-squashing the embedding size.
The final operation in the SE block, scaling or recalibration, is obtained by scaling the input \(X\) by \(s\):
\[\hat{x_{c}}=F_{scale}(x_{c},s_{c})=s_{c}x_{c} \tag{3}\]
where \(\hat{X}\in\mathbb{R}^{T\times C}=[\hat{x_{1}},\hat{x_{2}},...,\hat{x_{C}}]\).
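Equations (1)-(3) translate directly into a few lines of PyTorch; the following sketch implements the block for 1-D convolutional maps.

```python
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Squeeze-and-excitation for 1-D convolutional activations,
    following Eqs. (1)-(3) with reduction ratio r (r = 2 here)."""
    def __init__(self, channels, r=2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(),
            nn.Linear(channels // r, channels), nn.Sigmoid())

    def forward(self, x):              # x: (batch, channels, time)
        z = x.mean(dim=-1)             # squeeze: global average pool
        s = self.fc(z)                 # excitation: channel weights
        return x * s.unsqueeze(-1)     # recalibration: scale channels
```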
### _Dilated Convolutions_
Dilated convolutions, proposed in [36], are depicted in Figure 10, where the convolutional kernels are denoted by the colored components. In a traditional convolution, the dilation rate is equal to 1. Dilated convolutions build temporal context by increasing the receptive field of the convolutional kernels without increasing parameter counts, as the number of entries in the kernel remains the same.
Dilated convolutions also do not downsample the signals like strided convolutions. Instead, the output of a dilated convolution can be the exact size of the input after properly handling edge effects at the beginning and end of the signal.
### _Final Convolutional Activation_
We also investigate the impact of using an activation function (ReLU) after the last convolutional layer, just before statistics pooling. Because ReLU transforms the input sequence to be non-negative, the distribution characterized by the pooling statistics may become skewed. In [7] and [2], no activation was applied after the final convolutional layer as shown in Figure 3. We investigate if this transformation impacts classification performance.
### _Self-Attention_
Self-attention allows the convolutional outputs to interact with one another, enabling the network to learn to focus on important outputs. Self-attention before statistics pooling essentially creates a weighted summation over the convolutional outputs, weighting their importance, similar to [39, 40, 41].
We use the attention mechanism described by Vaswani _et al._ in [42], where each output element is a weighted sum of the linearly transformed inputs and the dimensionality of \(K\) is \(d_{k}\), as seen in Equation (4).
\[Attention(Q,K,V)=softmax\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{4}\]
In the case of self-attention, \(Q\), \(K\), and \(V\) are equal. A scaling factor of \(\frac{1}{\sqrt{d_{k}}}\) is applied to counteract vanishing gradients in the softmax output when \(d_{k}\) is large.
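A minimal sketch of this operation follows; the linear projections that would produce \(Q\), \(K\), and \(V\) are omitted for brevity.

```python
import torch

def self_attention(x):
    """Scaled dot-product self-attention of Eq. (4) with Q = K = V = x;
    x has shape (batch, time, d_k)."""
    d_k = x.size(-1)
    weights = torch.softmax(x @ x.transpose(1, 2) / d_k ** 0.5, dim=-1)
    return weights @ x
```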
## VI Ablation Study Architecture
Applying the specified modifications to the architecture in [7], Figure 9 illustrates the proposed architecture with every modification included in the graphic. Each colored block represents an optional change to the architecture that is investigated in the ablation study. That is, each combination of network modifications is analyzed to aid understanding of each modification's impact on the network.
Each convolutional layer has the following parameters: number of filters, kernel size, and dilation rate. The asterisk next to each dilation rate represents the changing of dilation rates in the ablation study. If dilated convolutions are used,
Fig. 10: Dilated convolutions diagram. The top shows a traditional kernel applied to sequential time series points. The middle and bottom diagram illustrate dilation rates of two and three, respectively. These dilations serve to increase the receptive field of the filter without increasing the number of trainable variables in the kernel.
Fig. 9: Proposed architecture with modifications including SENets, dilated convolutions, optional ReLU activation before statistics pooling, and self-attention. The output tensor sizes are also shown for each unit in the diagram. An * denotes where the sizes differ from the baseline architecture.
then the dilation rate value in the graphic is used. If dilated convolutions are not used, each dilation rate is set to 1. That is, a traditional convolution is applied. All convolutions use a stride of 1, and the same training procedure from the initial investigation is used.
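Putting the pieces together, the following sketch assembles dilated convolutions, SE blocks, an optional final ReLU, and statistics pooling into a classifier, reusing the `SEBlock1d` and `statistics_pooling` helpers sketched earlier; the dilation schedule, filter count, and dense sizes are placeholders rather than the exact values of Figure 9.

```python
import torch
import torch.nn as nn

class AblationNet(nn.Module):
    """Illustrative "1110"-style model: dilated convolutions + SE
    blocks + final ReLU + statistics pooling (no self-attention)."""
    def __init__(self, f=64, n_classes=24, dilations=(1, 2, 3, 2, 1)):
        super().__init__()
        blocks, c_in = [], 2
        for d in dilations:                 # stride 1 throughout
            blocks += [nn.Conv1d(c_in, f, 3, dilation=d, padding=d),
                       nn.ReLU(), SEBlock1d(f)]
            c_in = f
        self.body = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.Linear(2 * f, 128), nn.ReLU(),
                                  nn.Linear(128, n_classes))

    def forward(self, x):                   # x: (batch, 2, time)
        h = torch.relu(self.body(x))        # optional ReLU before pooling
        return self.head(statistics_pooling(h))
```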
## VII Evaluation Metrics
We present several evaluation metrics to compare the different architectures considered in the ablation study. In this section, we will discuss each evaluation technique used in the results section.
Due to the varying levels of SNRs in the employed dataset, we plot classification accuracy over each true SNR value. This allows for a visualization of the tradeoff in performance as noise becomes more or less dominant in the received signals. Additionally, we report average accuracy and maximum accuracy across the entire test set for each model. While we note that average accuracy is not indicative of the model's performance, as accuracy is highly correlated to the SNR of the input signal, we share this result to give other researchers the ability to reproduce and compare works.
As discussed in [26], AMC is often implemented on resource-constrained devices. In these systems, using larger models in terms of parameter counts may not be feasible. We report the number of parameters for each model in the ablation study to examine the tradeoff in AMC performance and model size.
Additional analyses are also carried out. However, due to the large number of models investigated in this study, we will select the best performing model from the ablation study for brevity and analyze the performance of this model in greater detail. For example, confusion matrices for the best performing model from the ablation study are provided to show common misclassifications for each modulation type. Additionally, there exist several use-cases where relatively short signal bursts are received. For example, a wide-band scanning receiver may only detect a short signal burst. Therefore, signal duration in the time domain versus AMC performance is investigated to determine the robustness of the best performing model when short signal bursts are received.
## VIII Ablation Results
### _Overall Performance_
Table III lists the maximum and average accuracy performance for each model in the ablation study. A binary naming convention is used to indicate the various methods used for each architecture. Similarly to the result found in Section IV, increasing the temporal context typically results in increased performance. Models that incorporate dilated convolutions tended to have higher average accuracies than models without dilated convolutions.
The best performing model, in terms of average accuracy across all SNR conditions included SE blocks, dilated convolutions, and a ReLU activation prior to statistics pooling (model 1110) with an average accuracy of approximately 63.7%. This model also achieved the highest maximum accuracy of about 98.9% at a 22dB level.
SE blocks did not increase performance compared to model 0000 with the exception of models 1110 and 1111. However, SE blocks were incorporated in the best performing model, 1110. Self-attention was not found to aid classification performance in general with the proposed architecture. Self-attention introduces a large number of trainable parameters possibly forming a complex loss space.
Table IV lists the performances of single modification (from baseline) architectures. Each component of the ablation study, with the exception of dilated convolutions, decreased performance when applied individually. When combined, however, the best performing model was found. Therefore, we conclude that each component could possibly aid the optimization of
each other--and, in general, dilated convolutions tend to have the most dramatic performance increases.
### _Accuracy Over Varying SNR_
Figure 11 summarizes the ablation study in terms of classification accuracy over varying SNR levels. We add this figure for completeness and reproducibility for other researchers. The accuracy within each SNR band is shown along with the modifications used, similar to Table III. The coloring in the figure denotes the accuracy in each SNR band. Performance follows a trend similar to that of a sigmoid function, where the rate at which peak classification accuracy is achieved is the most distinguishing feature between the different models. With the improved architectures, a maximum of 99% accuracy is achieved at high SNR levels (starting around 12dB SNR).
While the proposed changes to the architectures generally improve performance at higher SNR levels, the largest improvements occur between \(-12\)dB and \(12\)dB compared to the baseline model in [7]. For example, at \(4\)dB, the performance increases from 75% up to 82%. Incorporating these modifications to the network may prove to be critical in real-world situations where noisy signals are likely to be obtained. Improving AMC performance at lower SNR ranges (\(<-12\)dB) is still an open research topic, with accuracies near chance level.
One observation is that the best performing model can vary with SNR. In systems that have available memory and processing power, an approach similar to [2] may be used to utilize several models and intelligently choose predictions based on estimated SNR conditions. That is, if the SNR of the signal of interest is known, a model can be tuned to increase performance slightly, as shown in [2]. Using the results presented here, researchers could also choose the architecture differences that perform best for a given SNR range (although performance differences are subtle).
### _Parameter Count Tradeoff_
An overview of each model's complexity and overall performance across the entire testing set is shown in Table III. This information is also shown graphically in Figure 12 for the maximum accuracy over SNR and the average accuracy across all SNRs. Whether looking at the maximum or the average measures of performance, the conclusions are similar. The previously described binary model name also appears in the figure. We found a slight correlation between the number of model parameters and overall model performance; however, with the architectures explored, there was a general parameter count where performance peaked. Models with parameter counts between approximately 170k and 205k generally performed better than smaller and larger models. We note that the
Fig. 11: Ablation study results in terms of classification accuracy across SNR ranges. The best performing model is in the second to last row and displays strong performance across SNR values.
Fig. 12: Ablation study parameter count tradeoff. The x-axis shows the number of trainable variables in each model and the y-axis shows max or average accuracy. The callout for each point denotes the model name as shown in Table III.
models with more than 205k parameters included self-attention, which was found to decrease model performance with the proposed architectures. This implies that one possible reason self-attention did not perform as well as other modifications is the increase in parameters, resulting in a more difficult loss space from which to optimize.
## IX Best Performing Model Investigation
Due to the large volume of models, we focus on the best performing model (model 1110) for the remainder of this work. As previously mentioned, this model employs all modifications except self-attention.
### _Top-K Accuracy_
As discussed, in systems where the modulation scheme must be classified quickly, it is advantageous to apply fewer demodulation schemes in a trial and error fashion. This is particularly significant at lower SNR values where accuracy is mediocre. Top-k accuracy gives an in-depth view of the expected number of trials before finding the correct modulation scheme. Although traditional accuracy (top-1 accuracy) characterizes the performance of the model in terms of classifying the exact variant, top-k accuracy characterizes the percentage of signals for which the correct variant appears among the top-k predictions (sorted by descending class probability). We plot the top-1, top-2, and top-5 classification accuracy over varying SNR conditions for each modulation grouping defined in Section III in Figure 13.
Although performance decays to approximately random chance for the overall (all modulation schemes) performance curves for each top-k accuracy, it is notable that some modulation group performances drop below random chance. The models are trained to maximize the overall model performance. This could explain why certain modulation groups dip below random chance but the overall performance and other modulation groups remain at or above random chance.
Using the proposed method greatly reduces the correct modulation scheme search space. While high performance in top-1 accuracy is increasingly difficult to achieve with low SNR signals, top-2 and top-5 accuracy converge to higher values at a much faster rate. This indicates our proposed method greatly reduces the search space from 24 modulation candidates to fewer candidate types when employing trial and error methods to determine the correct modulation scheme. Further, if the group of modulation is known (_e.g._, FM), one can view a more specific tradeoff curve in terms of SNR and top-k accuracy given in Figure 13.
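Top-k accuracy is straightforward to compute from the predicted class probabilities, as in the following sketch.

```python
import numpy as np

def top_k_accuracy(probs, labels, k=5):
    """Fraction of signals whose true class is among the k most
    probable predictions; probs: (n, 24), labels: (n,)."""
    topk = np.argsort(probs, axis=1)[:, -k:]
    return float(np.mean([y in row for row, y in zip(topk, labels)]))
```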
### _Short Duration Signal Bursts_
Due to the rapid scanning characteristic of some modern software-defined radios, we investigate the trade-off between signal duration and AMC performance. This analysis is meant to emulate the situation wherein a receiver only detects a short RF signal burst. We investigate signal burst durations of 1.024 ms (full-length signal from the original dataset), 512 \(\mu\)s, 256 \(\mu\)s, 128 \(\mu\)s, 64 \(\mu\)s, 32 \(\mu\)s, and 16 \(\mu\)s. We assume the same 1MS/sec sampling rate as in the previous analyses such that a 16 \(\mu\)s burst is captured in 16 I/Q samples.
In this section, we use the same test set as our other investigations; however, a uniformly random starting point is
Fig. 14: Tradeoff in accuracy for various signal lengths across SNR, grouped by modulation category for the best performing model 1110. The top plot shows the baseline performance using the full sequence. Subsequent plots show the same information using increasingly smaller signal lengths for classification.
Fig. 13: Accuracy over varying SNR conditions for model 1110 with (a), (b), and (c) showing the top-1, top-2, and top-5 accuracy respectively. Random chance for each is defined as 1/24, 2/24, and 5/24.
determined for each signal such that a contiguous sample of the desired duration, starting at the random point, is chosen. Thus, the chosen segment from a test set sample is randomly assigned.
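The burst selection reduces to a random crop, sketched below for signals stored as \(1024\times 2\) arrays.

```python
import numpy as np

def random_burst(signal, n):
    """Return a uniformly random contiguous segment of n I/Q samples
    from a (1024, 2) signal, emulating a short detected burst."""
    start = np.random.randint(0, signal.shape[0] - n + 1)
    return signal[start:start + n]
```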
We also note that, although the sample length for the evaluation is changed, the best performing model is the same architecture with the exact same trained weights because this model uses statistics pooling from the X-Vector inspired modification. A significant benefit to the X-Vector inspired architecture is its ability to handle variable-length inputs without the need of padding, retraining, or other network modifications. This is achieved by taking global statistics across convolutional channels producing a fixed-length vector, regardless of signal duration. Due to this flexibility, the same model (model 1110) weights are used for each duration experiment. This fact also emphasizes the desirability of using X-vector inspired AMC architectures for receivers that are deployed in an environment where short-burst and variable duration signals are anticipated to be present.
For each signal duration in the time domain, we plot the overall classification accuracy over varying SNR conditions as well as the accuracy for each modulation grouping defined in Section III. Figure 14 demonstrates the tradeoff for various signal durations, where \(n\) is the number of samples from the time domain I/Q signal. The first observation is, as we would expect, that classification performance degrades with decreased signal duration. For example, the maximum accuracy begins to degrade at 256 \(\mu\)s, and the degradation is more noticeable at 128 \(\mu\)s. This is likely a result of using sample statistics that yield unstable or biased estimates for short signal lengths, since the number of received signal data points is insufficient to characterize the sample statistics used during training. Random classification accuracy is approximately 4% and is shown in the black dotted line in Figure 14. Although classification performance decreases with decreased duration, we are still able to achieve significantly higher classification accuracy than random chance down to 16 \(\mu\)s of signal capture.
FM (frequency modulation) signals were typically more resilient to noise interference than AM (amplitude modulation) and AM-PM (amplitude and phase modulation) signals in our AMC. This was observed across all signal burst durations and our top-k accuracy analysis. This behavior indicates that the performance of our AMC for short bursts, in the presence of increasing amounts of noise, is more robust for signals modulated by changes in the carrier frequency and more sensitive to signals modulated by varying the carrier amplitude. We attribute this behavior to our AMC architecture, the architecture of the receiver, or a combination of the two.
### _Confusion Matrices_
While classification accuracy provides a holistic view of model performance, it lacks the granularity to investigate where misclassifications are occurring. Confusion matrices are used to analyze the distribution of classifications for each given class. For each true label, the proportion of correctly classified samples is calculated along with the proportion of incorrect predictions for each opposing class. In this way, we can see which classes the model struggles to distinguish from one another. A perfect classifier would yield the identity matrix, where each diagonal value indicates that the true class matches the predicted class. Each matrix value represents the percentage of classifications for the true label, and each row sums to 1 (100%).
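For clarity, the row normalization described above can be sketched in a few lines of NumPy; this is an illustrative helper, not the evaluation code used to produce Figure 15.

```python
import numpy as np

def row_normalized_confusion(y_true, y_pred, n_classes=24):
    """Confusion matrix where entry (i, j) is the fraction of samples with
    true class i predicted as class j; each row sums to 1 (100%)."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    row_sums = cm.sum(axis=1, keepdims=True)
    return cm / np.maximum(row_sums, 1)  # guard against empty rows
```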
Figure 15 illustrates the class confusion matrices at SNR levels greater than or equal to 0dB for model 1110, the reproduced ResNet architecture from [1], and the baseline X-Vector architecture from [7], respectively. As shown in [7], the X-Vector architecture was able to distinguish PSK and AMSSB variants to a higher degree and performed better overall than [1]. Both architectures struggled to differentiate QAM variants.
Model 1110 improved upon these prior results for QAM signals and in general has higher diagonal components than the other architectures. This again supports a conclusion that model 1110 achieves a new state-of-the-art in AMC performance.
## X Conclusion
A comprehensive ablation study was carried out with regard to AMC architectural features using the extensive RadioML 2018.01A dataset. This ablation study built upon the strong performance of a new baseline model introduced in the initial investigation of this work. That initial investigation informed the design of a number of AMC architecture modifications: specifically, the use of X-Vectors, dilated convolutions, and SE blocks. With the combined modifications, we achieved a new state-of-the-art in AMC performance. Among these modifications, dilated convolutions were found to be the most critical architectural feature for model performance. Self-attention was also investigated but was not found to increase performance, although increased temporal context improved upon prior works.
Fig. 15: Confusion matrices for (a) model 1110 (best performing model from this work), (b) the reproduced ResNet model from [1], and (c) the X-Vector inspired model from [19] with SNR \(\geq\) 0dB.
|
2303.01682 | Neural-BO: A Black-box Optimization Algorithm using Deep Neural Networks | Bayesian Optimization (BO) is an effective approach for global optimization
of black-box functions when function evaluations are expensive. Most prior
works use Gaussian processes to model the black-box function, however, the use
of kernels in Gaussian processes leads to two problems: first, the kernel-based
methods scale poorly with the number of data points and second, kernel methods
are usually not effective on complex structured high dimensional data due to
curse of dimensionality. Therefore, we propose a novel black-box optimization
algorithm where the black-box function is modeled using a neural network. Our
algorithm does not need a Bayesian neural network to estimate predictive
uncertainty and is therefore computationally favorable. We analyze the
theoretical behavior of our algorithm in terms of regret bound using advances
in NTK theory showing its efficient convergence. We perform experiments with
both synthetic and real-world optimization tasks and show that our algorithm is
more sample efficient compared to existing methods. | Dat Phan-Trong, Hung Tran-The, Sunil Gupta | 2023-03-03T02:53:56Z | http://arxiv.org/abs/2303.01682v3 | # Neural-BO: A Black-box Optimization Algorithm using Deep Neural Networks
###### Abstract
Bayesian Optimization (BO) is an effective approach for global optimization of black-box functions when function evaluations are expensive. Most prior works use Gaussian processes to model the black-box function, however, the use of kernels in Gaussian processes leads to two problems: first, the kernel-based methods scale poorly with the number of data points and second, kernel methods are usually not effective on complex structured high dimensional data due to curse of dimensionality. Therefore, we propose a novel black-box optimization algorithm where the black-box function is modeled using a neural network. Our algorithm does not need a _Bayesian_ neural network to estimate predictive uncertainty and is therefore computationally favorable. We analyze the theoretical behavior of our algorithm in terms of regret bound using advances in NTK theory showing its efficient convergence. We perform experiments with both synthetic and real-world optimization tasks and show that our algorithm is more sample efficient compared to existing methods.
keywords: Black-box Optimization, Neural Tangent Kernel +
Footnote †: journal: Neurocomputing
## 1 Introduction
Optimizing a black-box function is crucial in many real-world application domains. Some well-known examples include hyper-parameter optimization
of machine learning algorithms [1; 2], synthesis of short polymer fiber materials, alloy design, 3D bio-printing, molecule design, etc [3; 4]. Bayesian Optimization (BO) has emerged as an effective approach to optimize such expensive-to-evaluate black-box functions. Most BO algorithms work as follows. First, a probabilistic regression model e.g. a Gaussian Process (GP) is trained on available function observations. This model is then used to construct an acquisition function that trades off between two potentially conflicting objectives: exploration and exploitation. The acquisition function is then optimized to suggest a point where the black-box functions should be next evaluated. There are several choices for acquisition functions, e.g., improvement-based schemes (Probability of Improvement [5], Expected improvement [6]), Upper Confidence Bound scheme [7], entropy search (ES [8], PES [9]), Thompson Sampling [10] and Knowledge Gradient [11].
While GPs are a natural and usually the most prominent probabilistic model choice for BO, unfortunately they are not suitable for optimizing functions in complex structured high dimensional spaces such as functions involving optimization of images and text documents. This is because GPs use kernels to model the functions and the popular kernels e.g. Square Exponential kernel, Matern kernel cannot accurately capture the similarity between complex structured data. This limitation prevents BO with GP from being applied to computer vision and natural language processing applications. Further, GPs do not scale well computationally due to kernel inversion requiring cubic complexity.
Recently, Deep Neural Networks (DNNs) have emerged as state-of-the-art models for a wide range of tasks such as speech recognition, computer vision, and natural language processing. The emergence of DNNs proves their power to scale and generalize to most kinds of data, including high dimensional and structured data, as they can extract powerful features and, unlike GPs, scale linearly with the number of data points. Leveraging these advantages, some efforts have been made to utilize DNNs for optimization, e.g., black-box function optimization in continuous search spaces [12; 13; 14] and contextual bandit optimization in discrete search spaces [15; 16; 17; 18]. However, these works have several drawbacks: (a) some of these approaches are computationally expensive because they rely on Bayesian Neural Networks (BNNs), making them as unscalable as GPs; (b) other works in the continuous search space setting do not provide any theoretical analysis of the respective optimization algorithms. Therefore, the problem of developing a DNN-based black-box optimization algorithm that is computationally efficient and comes with a mathematical guarantee of its convergence and sample efficiency remains open.
Addressing the above challenge, we propose a novel black-box optimization algorithm where the unknown function is modeled using an over-parameterized deep neural network. Our algorithm does not need a _Bayesian Neural Network_ to model predictive uncertainty and is therefore computationally favorable. We utilize the recent advances in neural network theory using Neural Tangent Kernels (NTK) to estimate a confidence interval around the neural network model and then use it for Thompson Sampling to draw a random sample of the function, which is maximized to recommend the next function evaluation point. We analyze the theoretical behavior of our algorithm in terms of its regret bound and show that our algorithm is convergent and more sample efficient compared to existing methods. In summary, our contributions are:
* A deep neural network based black-box optimization (Neural-BO) that can perform efficient optimization by accurately modeling the complex structured data (e.g. image, text, etc) and also being computationally favorable, i.e. scaling linearly with the number of data points.
* A theoretical analysis of our proposed Neural-BO algorithm showing that our algorithm is convergent with a sub-linear regret bound. To the best of our knowledge, our work is the first to achieve sub-linear bound in any neural network based bandit or black-box optimization algorithms.
* Experiments with both synthetic benchmark optimization functions and real-world optimization tasks showing that our algorithm outperforms current state-of-the-art black-box optimization methods.
## 2 Related Works
### Bayesian optimization
Bayesian optimization is a well-established approach for performing the global optimization of noisy, expensive black-box functions [6]. Traditionally, Bayesian optimization relies on the construction of a probabilistic surrogate model that places a prior on objective functions and updates the model using the existing function evaluations. To handle the promise and uncertainty of a new point, an acquisition function is used to capture the trade-off between
exploration and exploitation through the posterior distribution of the functions. By performing a proxy optimization over this acquisition function, the next evaluation point is determined. The Expected Improvement (EI) [6] and Upper Confidence Bound (UCB) [7] criteria are typically used due to their analytical tractability in both value and gradient. Besides, the information-theoretic acquisition functions (e.g., ES [8], PES [19], MES [9]) guide our evaluations to locations where we can maximize our learning about the unknown optimum, rather than to locations where we expect to obtain higher function values. For a review on Bayesian optimization, see [3; 4; 20; 21].
### Bayesian Optimization using GPs
Because of their flexibility, well-calibrated uncertainty, and analytical properties, GPs have been a popular choice to model the distribution over functions for Bayesian optimization. Formally, a function \(f\) is assumed to be drawn from a GP, i.e., \(f\sim\text{GP}(m(\mathbf{x}),k(\mathbf{x},\mathbf{x}^{\prime}))\), where \(m(\mathbf{x})\) is the mean function and \(k(\mathbf{x},\mathbf{x}^{\prime})\) is the covariance function. Common covariance functions include the linear kernel, the squared exponential (SE) kernel and the Matern kernel. Thus, given a set of \(t\) observed points \(\mathcal{D}_{1:t}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{t}\), the posterior mean and variance at step \(t+1\) are computed as \(\mu_{t+1}(\mathbf{x})=\mathbf{k}^{\top}\mathbf{K}^{-1}\mathbf{y}\) and \(\sigma_{t+1}^{2}(\mathbf{x})=k(\mathbf{x},\mathbf{x})-\mathbf{k}^{\top}\mathbf{K}^{-1}\mathbf{k}\), where \(\mathbf{K}=\left[k(\mathbf{x}_{i},\mathbf{x}_{j})\right]_{1\leq i,j\leq t}\), \(\mathbf{k}=\left[k(\mathbf{x},\mathbf{x}_{1})\ \ k(\mathbf{x},\mathbf{x}_{2})\ \ldots\ \ k(\mathbf{x},\mathbf{x}_{t})\right]\) and \(\mathbf{y}=[y_{1},y_{2},\cdots,y_{t}]\). Nonetheless, a major drawback of GP-based Bayesian optimization is that the computational complexity grows cubically in the number of observations, as it necessitates the inversion of the covariance matrix. Further, the covariance functions are usually too simplistic to model the function effectively when the inputs are high dimensional and complex structured data such as images and text.
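As a concrete illustration of these closed-form expressions, the following NumPy sketch computes the posterior mean and variance at a query point under the SE kernel; we add a small noise term on the diagonal of \(\mathbf{K}\) for numerical stability, which the formulas above omit.

```python
import numpy as np

def se_kernel(X1, X2, lengthscale=1.0):
    """Squared Exponential kernel matrix between row-stacked inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X_train, y_train, x_query, noise_var=1e-2):
    """Posterior mean and variance at one query point (zero prior mean)."""
    K = se_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    k = se_kernel(X_train, x_query[None, :])[:, 0]
    mu = k @ np.linalg.solve(K, y_train)                      # k^T K^{-1} y
    var = se_kernel(x_query[None, :], x_query[None, :])[0, 0] \
          - k @ np.linalg.solve(K, k)                         # k(x,x) - k^T K^{-1} k
    return mu, var
```

The `np.linalg.solve` calls make the cubic scaling with the number of observations explicit, which is precisely the drawback noted above.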
### Black-box Optimization using models other than GPs
There are relatively few works in black-box optimization that have used models other than GPs. For example, random forest (RF) and neural networks (NNs) have been used to estimate black-box functions. The first work that utilized RF as surrogate model was proposed in SMAC [22] and later refined in [23]. Although random forest performs well by making good predictions in the neighbourhood of training data, it faces trouble with extrapolation [4]. Over the past decade, neural networks have emerged as a potential candidate to play the role of surrogate model in black-box optimization. The first work on black-box optimization using neural networks [12] utilizes
a deep neural network as a mapping to transform high-dimensional inputs to lower-dimensional feature vectors before fitting a Bayesian linear regressor surrogate model. Another attempt has been made in [13] where the objective function has been modeled using a Bayesian neural network, which has high computational complexity due to the need to maintain an ensemble of models to get the posterior distribution over the parameters. Although these early works show the promise of neural networks for black-box optimization tasks through empirical results, they are still computationally unscalable (due to using Bayesian models) and no theoretical guarantees have been provided to ensure the convergence of their algorithms.
Very recently, [14] also explores the use of deep neural networks as an alternative to GPs to model black-box functions. However, this work only provides theoretical guarantees for their algorithm in the _infinite width_ DNN setting, which is unrealistic in practice, while we provide theoretical guarantees for over-parameterized DNNs, which is more general than their setting. Furthermore, simply adding noise to the targets, as done in [14], does not correctly account for epistemic uncertainty, and thus this method performs exploration randomly like \(\epsilon\)-greedy, which is known to be sub-optimal. Moreover, their experimental results are limited to synthetic functions, which is not enough to demonstrate the effectiveness of an algorithm in the real world. In contrast, we demonstrate the effectiveness of our algorithm on various real applications with high dimensional structured and complex inputs.
### Contextual Bandits with Neural Networks
There is a research line of contextual bandits which also uses deep neural networks to model reward functions. This problem is defined as follows: at each round \(t\), the agent is presented with a set of \(K\) actions associated with \(K\) feature vectors called "contexts". After choosing an arm, the agent receives a scalar reward generated from some unknown function with respect to the corresponding chosen feature vector. The goal of the agent is to maximize the expected cumulative reward over \(T\) rounds. For theoretical guarantees, most of works in this line (e.g., [15; 17; 16; 18]) utilize a _over-parametrized_ DNN to predict rewards. While NeuralUCB [15] and NeuralTS [17] use UCB and Thompson Sampling acquisition functions to control the trade-off between exploration and exploitation by utilizing gradient of the network in algorithms, [16] performs LinearUCB on top of the representation of the last layer of the neural network, motivated by [12].
Similar to the above contextual bandit algorithms, we attempt to develop an effective algorithm with theoretical analysis for black-box optimization using over-parametrized neural networks with _finite_ width. The key challenge we need to address when extending the contextual bandit algorithms to black-box optimization is that of dealing with a continuous action space instead of a discrete space. Having a continuous action space requires bounding the function modeling error at arbitrary locations in the action space. More specifically, from the neural networks' viewpoint, this requirement is equivalent to characterizing the predictive output on unseen data, which requires neural network generalization theory beyond the problem setting of contextual bandits.
## 3 Preliminaries
### Problem Setting
We consider a global optimization problem to maximize \(f(\mathbf{x})\) subject to \(\mathbf{x}\in\mathcal{D}\subset\mathbb{R}^{d}\), where \(\mathbf{x}\) is the input with \(d\) dimensions. The function \(f\) is a noisy black-box function that can only be evaluated via noisy evaluations of the form \(y=f(\mathbf{x})+\epsilon\), where \(\epsilon\) is a sub-Gaussian noise which we discuss in Assumption 2 in our regret analysis section. In our work, we consider an input space \(\mathcal{D}\) of the form \(a\leq\left\|\mathbf{x}\right\|_{2}\leq b\), where \(a,b\) are absolute constants and \(a\leq b\). We note that this assumption is mild compared to current works in neural contextual bandits [15; 17; 16; 18], which require that \(\left\|\mathbf{x}\right\|_{2}=1\).
To imply the smoothness of the function \(f\), we assume that the objective function \(f\) is from an RKHS corresponding to a positive definite Neural Tangent Kernel \(k_{\mathrm{NTK}}\). In particular, \(\left\|f\right\|_{\mathcal{H}_{k_{\mathrm{NTK}}}}\leq B\), for a finite \(B>0\). These assumptions are standard and have typically been used in many previous works [10; 7; 24].
### RKHS with Neural Tangent Kernel
Given a training set \(\mathcal{X}_{t}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{t}\subset\mathcal{D}\times\mathbb{R}\), that is, a set of \(t\) arbitrary points picked from \(\mathcal{D}\) with its corresponding observations, consider training a neural network \(h(\mathbf{x};\boldsymbol{\theta})\) with gradient descent using an infinitesimally small learning rate. The Neural Tangent Kernel (NTK) \(k_{\mathrm{NTK}}(\mathbf{x},\mathbf{x}^{\prime})\) is defined as \(k_{\mathrm{NTK}}(\mathbf{x},\mathbf{x}^{\prime})=\langle\mathbf{g}(\mathbf{x};\boldsymbol{\theta}),\mathbf{g}(\mathbf{x}^{\prime};\boldsymbol{\theta})\rangle/m\), where \(\mathbf{g}(\mathbf{x};\boldsymbol{\theta})\) is the gradient of \(h(\mathbf{x};\boldsymbol{\theta})\) defined in Section 4. When the network width \(m\) increases, \(\langle\mathbf{g}(\mathbf{x};\boldsymbol{\theta}),\mathbf{g}(\mathbf{x}^{\prime};\boldsymbol{\theta})\rangle\) gradually converges to \(\langle\mathbf{g}(\mathbf{x};\boldsymbol{\theta}_{0}),\mathbf{g}(\mathbf{x}^{\prime};\boldsymbol{\theta}_{0})\rangle\) and the corresponding NTK matrix is defined in Definition 1.
**Definition 1** ([25]): _Consider the same set \(\mathcal{X}_{t}\) as in Section 3.2. Define_

\[\widetilde{\mathbf{H}}_{i,j}^{(1)}=\mathbf{\Sigma}_{i,j}^{(1)}=\left\langle\mathbf{x}_{i},\mathbf{x}_{j}\right\rangle,\quad\mathbf{A}_{i,j}^{(l)}=\begin{pmatrix}\mathbf{\Sigma}_{i,i}^{(l)}&\mathbf{\Sigma}_{i,j}^{(l)}\\ \mathbf{\Sigma}_{i,j}^{(l)}&\mathbf{\Sigma}_{j,j}^{(l)}\end{pmatrix}\]

\[\mathbf{\Sigma}_{i,j}^{(l+1)}=2\mathbb{E}_{(u,v)\sim\mathcal{N}(\mathbf{0},\mathbf{A}_{i,j}^{(l)})}[\phi(u)\phi(v)]\]

\[\widetilde{\mathbf{H}}_{i,j}^{(l+1)}=2\widetilde{\mathbf{H}}_{i,j}^{(l)}\mathbb{E}_{(u,v)\sim\mathcal{N}(\mathbf{0},\mathbf{A}_{i,j}^{(l)})}[\phi^{\prime}(u)\phi^{\prime}(v)]+\mathbf{\Sigma}_{i,j}^{(l+1)},\]

where \(\phi(\cdot)\) is the activation function of the neural network \(h(\mathbf{x};\boldsymbol{\theta})\). Then \(\mathbf{H}_{t}=\frac{\widetilde{\mathbf{H}}^{(L)}+\mathbf{\Sigma}^{(L)}}{2}\) is called the NTK matrix on the set \(\mathcal{X}_{t}\).
We assume \(f\) to be a member of the Reproducing Kernel Hilbert Space (RKHS) of real-valued functions on \(\mathcal{D}\) with Neural Tangent Kernel (NTK, see Section 3.2), and a bounded norm, \(\|f\|_{\mathcal{H}_{k_{\text{NTK}}}}\leq B\). This RKHS, denoted by \(\mathcal{H}_{k_{\text{NTK}}}(\mathcal{D})\), is completely specified by its kernel function \(k_{\text{NTK}}(\cdot,\cdot)\) and vice-versa, with an inner product \(\left\langle\cdot,\cdot\right\rangle\) obeying the reproducing property: \(f(\mathbf{x})=\left\langle f,k_{\text{NTK}}(\cdot,\mathbf{x})\right\rangle\) for all \(f\in\mathcal{H}_{k_{\text{NTK}}}(\mathcal{D})\). Equivalently, the RKHS associated with \(k_{\text{NTK}}\) is then given by:
\[\mathcal{H}_{k_{\text{NTK}}}=\{f:f=\sum_{i}\alpha_{i}k_{\text{NTK}}(\mathbf{x} _{i},\cdot),\alpha_{i}\in\mathbb{R}\}\]
The induced RKHS norm \(\|f\|_{\mathcal{H}_{k_{\text{NTK}}}}=\sqrt{\left\langle f,f\right\rangle_{\mathcal{H}_{k_{\text{NTK}}}}}\) measures the smoothness of \(f\) with respect to the kernel function \(k_{\text{NTK}}\), and satisfies: \(f\in\mathcal{H}_{k_{\text{NTK}}}(\mathcal{D})\) if and only if \(\|f\|_{\mathcal{H}_{k_{\text{NTK}}}}<\infty\).
### Maximum Information Gain
Assume after \(t\) steps, the model receives an input sequence \(\mathcal{X}_{t}=(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{t})\) and observes noisy rewards \(\mathbf{y}_{t}=(y_{1},y_{2},\ldots,y_{t})\). The _information gain_ at step \(t\) quantifies the reduction in uncertainty about \(f\) after observing \(\mathbf{y}_{t}\), defined as the mutual information between \(\mathbf{y}_{t}\) and \(f\):
\[I(\mathbf{y}_{t};f):=H(\mathbf{y}_{t})-H(\mathbf{y}_{t}|f),\]
where \(H\) denotes the entropy. To obtain a closed-form expression of the information gain, one needs to introduce a GP model where \(f\) is assumed to be a zero-mean GP indexed on \(\mathcal{X}\) with kernel \(k\). Let \(\mathbf{f}_{t}=(f(\mathbf{x}_{1}),f(\mathbf{x}_{2}),\ldots,f(\mathbf{x}_{t}))\)
be the corresponding true function values. From [26], the mutual information between two multivariate Gaussian distributions is:
\[I(\mathbf{y}_{t};f)=I(\mathbf{y}_{t};\mathbf{f}_{t})=\frac{1}{2}\log\det\bigl{(} \mathbf{I}+\lambda^{-1}\mathbf{H}_{t}\bigr{)},\]
where \(\lambda>0\) is a regularization parameter and \(\mathbf{H}_{t}\) is the kernel matrix. In our case, \(\mathbf{H}_{t}\) can be referred to as the NTK matrix associated with Neural Tangent Kernel.
Assuming a GP model with the NTK kernel to calculate a closed-form expression of the maximum information gain is common in NTK-related works [18; 27].
To further achieve input-independent and kernel-specific bounds, we define the _maximum information gain_ \(\gamma_{t}\) after \(t\) steps as:

\[\gamma_{t}:=\max_{\mathcal{A}\subset\mathcal{X},|\mathcal{A}|=t}I(\mathbf{y}_{\mathcal{A}};\mathbf{f}_{\mathcal{A}})\]
As stated in Lemma 3 of [10], the maximum information gain \(\gamma_{t}\) can be lower bounded as:
\[\gamma_{t}\geq\frac{1}{2}\log\det\bigl{(}\mathbf{I}+\lambda^{-1}\mathbf{H}_{t }\bigr{)}\]
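The quantity on the right-hand side is straightforward to evaluate numerically for a given kernel matrix; the following NumPy sketch (with an illustrative function name) computes it via a log-determinant, where `H_t` would come from the recursion in Definition 1.

```python
import numpy as np

def information_gain(H_t, lam=1.0):
    """0.5 * logdet(I + H_t / lam) for a t x t kernel (NTK) matrix H_t."""
    t = H_t.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(t) + H_t / lam)  # stable log-determinant
    return 0.5 * logdet
```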
## 4 Proposed Neural-BO Algorithm
```
1: Set \(\mathbf{U}_{0}=\lambda\mathbf{I}\)
2: for \(t=1\) to \(T\) do
3:    \(\sigma_{t}^{2}(\mathbf{x})=\lambda\mathbf{g}(\mathbf{x};\boldsymbol{\theta}_{0})^{\top}\mathbf{U}_{t-1}^{-1}\mathbf{g}(\mathbf{x};\boldsymbol{\theta}_{0})/m\)
4:    \(\widetilde{f}_{t}(\mathbf{x})\sim\mathcal{N}(h(\mathbf{x};\boldsymbol{\theta}_{t-1}),\nu_{t}^{2}\sigma_{t}^{2}(\mathbf{x}))\)
5:    Choose \(\mathbf{x}_{t}=\operatorname*{argmax}_{\mathbf{x}\in\mathcal{D}}\widetilde{f}_{t}(\mathbf{x})\) and receive observation \(y_{t}=f(\mathbf{x}_{t})+\epsilon_{t}\)
6:    Update \(\boldsymbol{\theta}_{t}\leftarrow\operatorname{TrainNN}(\{\mathbf{x}_{i}\}_{i=1}^{t},\{y_{i}\}_{i=1}^{t},\boldsymbol{\theta}_{0})\)
7:    \(\mathbf{U}_{t}\leftarrow\mathbf{U}_{t-1}+\frac{\mathbf{g}(\mathbf{x}_{t};\boldsymbol{\theta}_{0})\mathbf{g}(\mathbf{x}_{t};\boldsymbol{\theta}_{0})^{\top}}{m}\)
8: end for
```
**Algorithm 1** Neural Black-box Optimization (Neural-BO)
In this section, we present our proposed neural-network based black-box algorithm: Neural-BO. Following the principle of BO, our algorithm consists of two main steps: (1) Build a model of the black-box objective function, and (2) Use the black-box model to choose the next function evaluation point at each iteration. For the first step, unlike traditional BO algorithms which use Gaussian process to model the objective function, our idea is to use a fully connected neural network \(h(\mathbf{x};\mathbf{\theta})\) to learn the function \(f\) as follows:
\[h(\mathbf{x};\boldsymbol{\theta})=\sqrt{m}\,\mathbf{W}_{L}\phi(\mathbf{W}_{L-1}\phi(\cdots\phi(\mathbf{W}_{1}\mathbf{x})\cdots)),\]
where \(\phi(x)=\max(x,0)\) is the rectified linear unit (ReLU) activation function, \(\mathbf{W}_{1}\in\mathbb{R}^{m\times d},\mathbf{W}_{i}\in\mathbb{R}^{m\times m },2\leq i\leq L-1,\mathbf{W}_{L}\in\mathbb{R}^{1\times m}\), and \(\mathbf{\theta}=(vec(\mathbf{W}_{1}),\cdots,vec(\mathbf{W}_{L}))\in\mathbb{R}^{p}\) is the collection of parameters of the neural network, \(p=md+m^{2}(L-2)+m\). To keep the analysis convenient, we assume that the width \(m\) is the same for all hidden layers. We also denote the gradient of the neural network by \(\mathbf{g}(\mathbf{x};\mathbf{\theta})=\nabla_{\mathbf{\theta}}h(\mathbf{x};\mathbf{ \theta})\in\mathbb{R}^{p}\).
For the second step, we choose the next sample point \(\mathbf{x}_{t}\) by using a Thompson Sampling strategy. In particular, given each \(\mathbf{x}\), our algorithm maintains a Gaussian distribution for \(f(\mathbf{x})\). To choose the next point, it samples a random function \(\widetilde{f}_{t}\) from the posterior distribution \(\mathcal{N}(h(\mathbf{x};\boldsymbol{\theta}_{t-1}),\nu_{t}^{2}\sigma_{t}^{2}(\mathbf{x}))\), where \(\sigma_{t}^{2}(\mathbf{x})=\lambda\mathbf{g}(\mathbf{x};\boldsymbol{\theta}_{0})^{\top}\mathbf{U}_{t-1}^{-1}\mathbf{g}(\mathbf{x};\boldsymbol{\theta}_{0})/m\) and \(\nu_{t}=\sqrt{2}B+\frac{R}{\sqrt{\lambda}}\sqrt{2\log(1/\alpha)}\). Here, \(\sigma_{t}(\mathbf{x})\) and \(\nu_{t}\) construct a confidence interval such that, for all \(\mathbf{x}\in\mathcal{D}\), \(|f(\mathbf{x})-h(\mathbf{x};\boldsymbol{\theta}_{t-1})|\leq\nu_{t}\sigma_{t}(\mathbf{x})\) holds with probability \(1-\alpha\). We also note a difference from [17], which uses \(\mathbf{g}(\mathbf{x};\boldsymbol{\theta}_{t})/\sqrt{m}\) as a dynamic feature map: we here use a fixed feature map \(\mathbf{g}(\mathbf{x};\boldsymbol{\theta}_{0})/\sqrt{m}\), which can be viewed as a finite approximation of the feature map \(\phi(\mathbf{x})\) of \(k_{\text{NTK}}\). This allows us to eliminate a \(\sqrt{m}\) factor from the regret bound of our algorithm. Our algorithm then chooses \(\mathbf{x}_{t}=\operatorname*{argmax}_{\mathbf{x}\in\mathcal{D}}\widetilde{f}_{t}(\mathbf{x})\).
Besides, to initialize the network \(h(\mathbf{x};\boldsymbol{\theta}_{0})\), we randomly generate each element of \(\boldsymbol{\theta}_{0}\) from an appropriate Gaussian distribution: for each \(1\leq l\leq L-1\), each entry of \(\mathbf{W}_{l}\) is generated independently from \(\mathcal{N}(0,2/m)\), while each entry of the last layer \(\mathbf{W}_{L}\) is set to zero to make \(h(\mathbf{x};\boldsymbol{\theta}_{0})=0\). Here we adopt the so-called _He initialization_[28], which ensures the stability of the expected length of the output vector at each hidden layer.
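For concreteness, the following PyTorch sketch shows this surrogate network, its initialization, and one Thompson-sampling draw corresponding to lines 3-5 of Algorithm 1. The names `WideNet`, `grad_feature`, and `thompson_draw` are ours, and explicitly storing the \(p\times p\) matrix \(\mathbf{U}_{t}^{-1}\) is only feasible for modest network sizes; this is a simplified illustration, not the full implementation.

```python
import torch
import torch.nn as nn

class WideNet(nn.Module):
    """h(x; theta) = sqrt(m) W_L phi(W_{L-1} phi(... phi(W_1 x))), with
    N(0, 2/m) hidden weights and a zero last layer so h(x; theta_0) = 0."""

    def __init__(self, d, m=500, L=2):
        super().__init__()
        dims = [d] + [m] * (L - 1)
        self.hidden = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1], bias=False) for i in range(L - 1)
        )
        self.out = nn.Linear(m, 1, bias=False)
        for layer in self.hidden:
            nn.init.normal_(layer.weight, std=(2.0 / m) ** 0.5)
        nn.init.zeros_(self.out.weight)
        self.m = m

    def forward(self, x):
        for layer in self.hidden:
            x = torch.relu(layer(x))
        return self.m ** 0.5 * self.out(x)

def grad_feature(net, x):
    """Fixed feature map g(x; theta_0) / sqrt(m) used to form sigma_t(x)."""
    net.zero_grad()
    net(x).sum().backward()
    g = torch.cat([p.grad.flatten() for p in net.parameters()])
    return g / net.m ** 0.5

def thompson_draw(net0, net_t, x, U_inv, nu, lam=0.01):
    """One draw f_tilde(x) ~ N(h(x; theta_{t-1}), nu^2 sigma_t^2(x))."""
    phi = grad_feature(net0, x)
    sigma2 = lam * phi @ U_inv @ phi      # sigma_t^2(x)
    with torch.no_grad():
        mean = net_t(x).item()
    return mean + nu * sigma2.sqrt().item() * torch.randn(()).item()
```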
The proposed Neural-BO is summarized in Algorithm 1 and Algorithm 2. In Section 5, we will demonstrate that our algorithm can achieve a sub-linear regret bound, and it also works well in practice (see Section 7).
DiscussionA simple way to obtain a neural network based black-box algorithm is to extend the existing works of [15; 17] or [18] to our setting where the search space is continuous. However, this is difficult because these algorithms are designed for a finite set of actions. A simple adaptation can yield non-vanishing bounds on the cumulative regret. For example, the regret bound in [18] depends on the size of the finite action space \(|\mathcal{A}|\); if \(|\mathcal{A}|\to\infty\) then the regret bound goes to infinity. In [15; 17], the regret bounds are built on a Gram matrix \(\mathbf{H}\) which is defined through the set of actions. However, such a matrix cannot even be defined in our setting where the search space (action space) is infinite. We solve this challenge by using a discretization of the search space at each iteration, as mentioned in Section 5. In addition, for our confidence bound calculation, using \(\mathbf{g}(\mathbf{x};\boldsymbol{\theta}_{0})/\sqrt{m}\) instead of \(\mathbf{g}(\mathbf{x};\boldsymbol{\theta}_{t})/\sqrt{m}\) as in [15; 17] allows us to reduce a factor of \(\sqrt{m}\) from our regret bound.
Our algorithm is also different from [10] in traditional Bayesian optimization, which uses a Gaussian process to model the function with a posterior computed in closed form. In contrast, our algorithm computes the posterior mean by gradient descent.
## 5 Regret Analysis
In this section, we provide a regret bound for the proposed NeuralBO algorithm. Our regret analysis is built upon the recent advances in NTK theory [29; 30] and the proof techniques of GP-TS [10]. Unlike most previous neural-based contextual bandits works [17; 15] that necessarily restrict the search space to the set \(\mathbb{S}^{d-1}=\{\mathbf{x}\in\mathbb{R}^{d}\colon\left\|\mathbf{x}\right\|_{2}=1\}\), we here derive our results under a more flexible condition, where the norms of inputs are bounded: \(a\leq\left\|\mathbf{x}\right\|_{2}\leq b\), where \(0<a\leq b\).
For theoretical guarantees, we need to use the following assumptions on the observations \(\{\mathbf{x}_{i}\}_{i=1}^{T}\).
**Assumption 1**.: There exists \(\lambda_{0}>0\) such that \(\mathbf{H}_{T}\succeq\lambda_{0}\mathbf{I}_{T}\).
This is a common assumption in the literature [15, 16, 17], and is satisfied as long as we sufficiently explore the input space such that no two inputs \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) are identical.
**Assumption 2**.: We assume the noises \(\{\epsilon_{i}\}_{i=1}^{T}\) where \(\epsilon_{i}=y_{i}-f(\mathbf{x}_{i})\), are i.i.d and sub-Gaussian with parameter \(R>0\).
This assumption is mild and similar to the work of [31], where the assumption is used to prove an optimal order of the regret bound for Gaussian process bandits. We use it here to provide a regret bound with a sub-linear order for our proposed algorithm.
Now we are ready to bound the regret of our proposed NeuralBO algorithm. To measure the regret of the algorithm, we use the cumulative regret which is defined as follows: \(R_{T}=\sum_{t=1}^{T}r_{t}\) after \(T\) iterations, where \(\mathbf{x}^{*}=\operatorname*{argmax}_{\mathbf{x}\in\mathcal{D}}f(\mathbf{x})\) is the maximum point of unknown function \(f\) and \(r_{t}=f(\mathbf{x}^{*})-f(\mathbf{x}_{t})\) is instantaneous regret incurred at time \(t\). We present our main theoretical result.
**Theorem 1**.: _Let \(\alpha\in(0,1)\). Assume that the true underlying \(f\) lies in the RKHS \(\mathcal{H}_{k_{\mathrm{NTK}}}(\mathcal{D})\) corresponding to the NTK kernel \(k_{\mathrm{NTK}}(\mathbf{x},\mathbf{x}^{\prime})\) with RKHS norm bounded by \(B\). Set the parameters in Algorithm 1 as \(\lambda=1+\frac{1}{T}\), \(\nu_{t}=\sqrt{2}B+\frac{R}{\sqrt{\lambda}}\sqrt{2\log(1/\alpha)}\), where \(R\) is the noise sub-Gaussianity parameter. Set \(\eta=(m\lambda+mLT)^{-1}\) and \(J=(1+LT/\lambda)\left(1+\log(T^{3}L\lambda^{-1}\log\left(1/\alpha\right))\right)\). If the network width m satisfies:_
\[m\geq\operatorname{poly}\left(\lambda,T,\log(1/\alpha),\lambda_{0}^{-1}\right),\]
_then with probability at least \(1-\alpha\), the regret of Algorithm 1 is bounded as_
\[R_{T} \leq C_{1}(1+c_{T})\nu_{T}\sqrt{L}\sqrt{\frac{\lambda BT}{\log(B+ 1)}(2\gamma_{T}+1)}\] \[+(4+C_{2}(1+c_{T})\nu_{T}L)\sqrt{2\log(1/\alpha)T}+\frac{(2B+1) \pi^{2}}{6}+\frac{b(a^{2}+b^{2})}{a^{3}}\]
_where \(C_{1},C_{2}\) are absolute constants, \(a\) and \(b\) are lower and upper norm bounds of \(\mathbf{x}\in\mathcal{D}\) and \(c_{T}=\sqrt{4\log T+2\log\ |\mathcal{D}_{t}|}\)._
**Remark 1**.: There exists a component \(\frac{b(a^{2}+b^{2})}{a^{3}}\) in the regret bound \(R_{T}\). This component comes from the condition of the search space. If we remove the influence of constants \(a,b,B\) and \(R\), Theorem 1 implies that our regret bound follows an order \(\widetilde{O}(\sqrt{T\gamma_{T}})\). This regret bound has the same order as the one of the MVR algorithm in [31], but is significantly better than traditional BO algorithms (e.g., [7; 10]) and existing neural contextual bandits (e.g., [15; 17]) that have the order \(\widetilde{O}(\gamma_{T}\sqrt{T})\). We note that the regret bounds in [15; 17] are expressed through the effective dimension \(\widetilde{d}=\frac{\log\det(\mathbf{I}+\mathbf{H}/\lambda)}{\log(1+TK/\lambda)}\). However, \(\gamma_{T}\) and the effective dimension are of the same order up to a ratio of \(1/(2\log(1+TK/\lambda))\). The improvement of factor \(\sqrt{\gamma_{T}}\) is significant because for NTK kernel as shown by [18], \(\gamma_{T}\) follows order \(\mathcal{O}(T^{1-1/d})\), where \(d\) is the input dimension. For \(d\geq 2\), existing bounds become vacuous and are no longer sub-linear. In contrast, our regret bound remains sub-linear for any \(d>0\).
## 6 Proof of the Main Theorem
To perform analysis for continuous actions as in our setting, we use a discretization technique. At each time \(t\), we use a discretization \(\mathcal{D}_{t}\subset\mathcal{D}\) with the property that \(|f(\mathbf{x})-f([\mathbf{x}]_{t})|\leq\frac{1}{t^{2}}\), where \([\mathbf{x}]_{t}\in\mathcal{D}_{t}\) is the closest point to \(\mathbf{x}\in\mathcal{D}\). Then we choose \(\mathcal{D}_{t}\) with size \(|\mathcal{D}_{t}|=\left((b-a)BC_{\text{lip}}t^{2}\right)^{d}\) that satisfies \(\|\mathbf{x}-[\mathbf{x}]_{t}\|_{1}\leq\frac{b-a}{(b-a)BC_{\text{lip}}t^{2}}=\frac{1}{BC_{\text{lip}}t^{2}}\) for all \(\mathbf{x}\in\mathcal{D}\), where \(C_{\text{lip}}=\sup_{\mathbf{x}\in\mathcal{D}}\sup_{j\in[d]}\left(\frac{\partial^{2}k_{\text{NTK}}(\mathbf{p},\mathbf{q})}{\partial\mathbf{p}_{j}\partial\mathbf{q}_{j}}\Big|_{\mathbf{p}=\mathbf{q}=\mathbf{x}}\right)\). This implies, for all \(\mathbf{x}\in\mathcal{D}\):
\[|f(\mathbf{x})-f([\mathbf{x}]_{t})|\leq\|f\|_{\mathcal{H}_{k_{\text{NTK}}}}C_{ \text{lip}}\|\mathbf{x}-[\mathbf{x}]_{t}\|_{1}\leq BC_{\text{lip}}\frac{1}{BC _{\text{lip}}t^{2}}=1/t^{2}, \tag{1}\]
which follows from the Lipschitz continuity of any \(f\in\mathcal{H}_{k_{\text{NTK}}}(\mathcal{D})\) with Lipschitz constant \(BC_{\text{lip}}\), where we have used the inequality \(\|f\|_{\mathcal{H}_{k_{\text{NTK}}}}\leq B\), which is our assumption about the function \(f\). We bound the regret of the proposed algorithm by noting that the instantaneous regret \(r_{t}=f(\mathbf{x}^{*})-f(\mathbf{x}_{t})\) can be decomposed as \(r_{t}=[f(\mathbf{x}^{*})-f([\mathbf{x}^{*}]_{t})]+[f([\mathbf{x}^{*}]_{t})-f(\mathbf{x}_{t})]\). While the first term is bounded by Eq. (1), we bound the second term in Lemma 7 provided in our Appendix. The proof of Lemma 7 requires a few steps: we introduce a saturated set \(\mathcal{S}_{t}\) (see Definition A.2), and we then combine it with the results of Lemma 2 (deriving a bound on \(|\widetilde{f}_{t}(\mathbf{x})-h(\mathbf{x};\boldsymbol{\theta}_{t-1})|\)), necessitated by the Thompson sampling, and Lemma 3 (deriving a bound on \(|f(\mathbf{x})-h(\mathbf{x};\boldsymbol{\theta}_{t-1})|\)), utilizing
the generalization result of the over-parameterized neural networks. It is noted that in our proof, we consider the effect of our general search space setting (where the input \(\mathbf{x}\in\mathcal{D}\) is norm-bounded, i.e., \(0<a\leq\left\|\mathbf{x}\right\|_{2}\leq b\)) on the output of the over-parameterized deep neural network at each layer in Lemma 1 and on the gradient of this network utilized in Lemma 3.2. These results contribute to the appearance of the lower and upper norm bounds \(a\) and \(b\): the former appears in the generalization property of over-parameterized neural networks in Lemma 3, and the latter affects the cumulative regret bound in Theorem 1. Our proof follows the proof style of Lemma 4.1 of [30], Lemma B.3 of [30] and Theorem 5 of [29].
On the other hand, Lemma 3.3, based on Assumption 2, provides a tighter bound on the confidence interval that eliminates the factor \(\sqrt{\gamma_{T}}\) in comparison with previous relevant works [10, 15], leading to the sub-linear regret bound in Theorem 1. By these arguments, we achieve a cumulative regret bound for our proposed algorithm in terms of the maximum information gain \(\gamma_{T}\) (Lemmas 6, 7, 8). **Our detailed proof is provided in Appendix A**.
Proof Sketch for Theorem 1.: With probability at least \(1-\alpha\), we have
\[R_{T} =\sum_{t=1}^{T}f(\mathbf{x}^{*})-f(\mathbf{x}_{t})\] \[=\sum_{t=1}^{T}\left[f(\mathbf{x}^{*})-f([\mathbf{x}^{*}]_{t}) \right]+\left[f([\mathbf{x}^{*}]_{t})-f(\mathbf{x}_{t})\right]\] \[\leq 4T\epsilon(m)+\frac{(2B+1)\pi^{2}}{6}+\bar{C_{1}}(1+c_{T}) \nu_{T}\sqrt{L}\sum_{i=1}^{T}\min(\sigma_{t}(\mathbf{x}_{t}),B)\] \[\quad+(4+\bar{C_{2}}(1+c_{T})\nu_{T}L+4\epsilon(m))\sqrt{2\log(1 /\alpha)T}\] \[\leq\bar{C_{1}}(1+c_{T})\nu_{T}\sqrt{L}\sqrt{\frac{\lambda BT}{ \log(B+1)}(2\gamma_{T}+1)}+\frac{(2B+1)\pi^{2}}{6}+4T\epsilon(m)\] \[\quad+4\epsilon(m)\sqrt{2\log(1/\alpha)T}+\left(4+\bar{C_{2}}(1+ c_{T})\nu_{T}L\right)\sqrt{2\log(1/\alpha)T}\] \[=\bar{C_{1}}(1+c_{T})\nu_{t}\sqrt{L}\sqrt{\frac{\lambda BT}{\log( B+1)}(2\gamma_{T}+1)}+\frac{(2B+1)\pi^{2}}{6}\] \[\quad+\epsilon(m)(4T+\sqrt{2\log(1/\alpha)T})+(4+\bar{C_{2}}(1+c_ {T})\nu_{t}L)\sqrt{2\log(1/\alpha)T}\]
The first inequality is due to Lemma 7, which provides the bound for cumulative regret \(R_{T}\) in terms of \(\sum_{t=1}^{T}\min(\sigma_{t}(\mathbf{x}_{t}),B)\). The second inequality further provides the bound of term \(\sum_{t=1}^{T}\min(\sigma_{t}(\mathbf{x}_{t}),B)\) due to Lemma 8, while the last equality rearranges addition. Picking \(\eta=(m\lambda+mLT)^{-1}\) and \(J=(1+LT/\lambda)\left(\log(C_{\epsilon,2})+\log(T^{3}L\lambda^{-1}\log(1/ \alpha))\right)\), we have
\[\frac{b}{a}C_{\epsilon,2}(1-\eta m\lambda)^{J}\sqrt{TL/\lambda} \left(4T+\sqrt{2\log(1/\alpha)T}\right)\] \[= \frac{b}{a}C_{\epsilon,2}\left(1-\frac{1}{1+LT/\lambda}\right)^{ J}\left(4T+\sqrt{2\log(1/\alpha)T}\right)\] \[= \frac{b}{a}C_{\epsilon,2}e^{-\left(\log(C_{\epsilon,2})+\log \left(T^{3}L\lambda^{-1}\log(1/\alpha)\right)\right)}\left(4T+\sqrt{2\log(1/ \alpha)T}\right)\] \[= \frac{b}{a}\frac{1}{C_{\epsilon,2}}.T^{-3}L^{-1}\lambda\log^{-1} (1/\alpha)\left(4T+\sqrt{2\log(1/\alpha)T}\right)\leq\frac{b}{a}\]
Then choosing \(m\) that satisfies:
\[\left(\frac{b}{a}C_{\epsilon,1}m^{-1/6}\lambda^{-2/3}L^{3}\sqrt{ \log m}+\left(\frac{b}{a}\right)^{3}C_{\epsilon,3}m^{-1/6}\sqrt{\log m}L^{4}T ^{5/3}\lambda^{-5/3}(1+\sqrt{T/\lambda})\right)\] \[\left(4T+\sqrt{2\log(1/\alpha)T}\right)\leq\left(\frac{b}{a} \right)^{3}\]
We finally achieve the bound of \(R_{T}\) as:
\[R_{T} \leq\bar{C}_{1}(1+c_{T})\nu_{T}\sqrt{L}\sqrt{\frac{\lambda BT}{ \log(B+1)}(2\gamma_{T}+1)}\] \[\quad+(4+\bar{C}_{2}(1+c_{T})\nu_{T}L)\sqrt{2\log(1/\alpha)T}+ \frac{(2B+1)\pi^{2}}{6}+\frac{b(a^{2}+b^{2})}{a^{3}}\]
## 7 Experiments
In this section, we demonstrate the effectiveness of our proposed NeuralBO algorithm compared to traditional BO algorithms and several other BO algorithms on various synthetic benchmark optimization functions and various real-world optimization problems which have complex and structured inputs. The real-world optimization problems consist of (1) generating sensitive samples to detect tampered models trained on the MNIST [32] dataset; (2) optimizing control parameters for robot pushing considered in [9]; and
(3) optimizing the number of evaluations needed to find a document that resembles an intended (unknown) target document.
For all experiments, we compared our algorithm with common classes of surrogate models used in black-box optimization, including Gaussian Processes (GPs), Random Forests and Deep Neural Networks (DNNs). For GPs, we have three baselines: GP-UCB [7], GP-TS [10] and GP-EI [33] with the common Squared Exponential (SE) kernel. Our implementations for GP-based Bayesian Optimization baselines utilize the public libraries GPyTorch 1 and BOTorch 2. Next, we consider SMAC [22] as a baseline using RF as the surrogate model. Lastly, we also include two recent DNN-based works for black-box optimization: DNGO [12] and NeuralGreedy [14]. We further describe the implementations for the RF and DNN-based baselines as follows:
Footnote 1: [https://gpytorch.ai/](https://gpytorch.ai/)
Footnote 2: [https://botorch.org/](https://botorch.org/)
* DNGO [12] models black-box function by a Bayesian linear regressor, using low-dimensional features (extracted by DNN) as inputs. We run DNGO algorithm with the implementation of AutoML 3 with default settings. Footnote 3: [https://github.com/automl/pybnn](https://github.com/automl/pybnn)
* NeuralGreedy [14] fits a neural network to the current set of observations, where the function values are randomly perturbed before learning the neural network. The learnt neural network is then used as the acquisition function to determine the next query point. Our re-implementation of NeuralGreedy follows the setting described in Appendix F.2 of [14].
* Random forest (RF) based surrogate model is implemented using public library GPyOpt 4 and optimization is performed with EI acquisition function. Footnote 4: [https://github.com/SheffieldML/GPyOpt](https://github.com/SheffieldML/GPyOpt)
For our proposed NeuralBO algorithm, we model the functions using two-layer neural networks \(f(\mathbf{x};\boldsymbol{\theta})=\sqrt{m}\mathbf{W}_{2}\sigma(\mathbf{W}_{1} \mathbf{x})\) with network width \(m=500\). The weights of the network are initialized with independent samples from a normal distribution \(\mathcal{N}(0,1/m)\). To train the surrogate neural
network models, we use the SGD optimizer with batch size \(50\), \(50\) epochs, learning rate \(\eta=0.001\) and regularizer \(\lambda=0.01\). We set the exploration parameter in Algorithm 1 as \(\nu_{t}=\nu\) and perform a grid search over \(\{0.1,1,10\}\).
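A minimal sketch of the `TrainNN` step with these hyperparameters is given below. We use standard weight decay as the \(\ell_2\) regularizer for illustration; the exact regularization in our implementation (e.g., toward the initialization \(\boldsymbol{\theta}_{0}\)) may differ, and the function name is ours.

```python
import torch

def train_nn(net, X, y, lr=1e-3, weight_decay=1e-2, epochs=50, batch_size=50):
    """Fit the surrogate network to observations (X, y) with mini-batch SGD."""
    opt = torch.optim.SGD(net.parameters(), lr=lr, weight_decay=weight_decay)
    data = torch.utils.data.TensorDataset(X, y)
    loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = ((net(xb).squeeze(-1) - yb) ** 2).mean()  # squared loss
            loss.backward()
            opt.step()
    return net
```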
### Synthetic Benchmark Functions
We optimized several commonly used synthetic benchmark functions: Ackley, Levy, and Michalewicz. The exact expression of these functions can be found at: [http://www.sfu.ca/~ssurjano/optimization.html](http://www.sfu.ca/~ssurjano/optimization.html). For each function, we perform optimization in two different dimensions, the latter twice as high as the former, so as to evaluate the methods across a range of dimensions. More precisely, we choose dimensions \(15\) and \(30\) for Michalewicz, \(20\) and \(40\) for Levy, and \(50\) and \(100\) for Ackley. The function evaluation noise follows a normal distribution with zero mean, with a variance that depends on the objective function.
All reported experiments are averaged over ten runs, where each run uses a random initialization. All the methods start with the same set of initial points. As seen from Figure 1, Neural-BO outperforms all other baselines, including the GP-based BO algorithms, both NN-based BO algorithms (DNGO and NeuralGreedy) and the random-forest-based algorithm. Moreover, our algorithm is also promising in high dimensions.

Figure 1: The plots show the minimum true value observed after optimizing several synthetic functions over 2000 iterations of our proposed algorithm and 6 baselines. The dimension of each function is shown in the parenthesis.
### Real Applications
#### 7.2.1 Designing Sensitive Samples for Detection of Model Tampering
We consider an attack scenario where a company offering Machine Learning as a Service (MLaaS) hosts its model on a cloud. However, an adversary with backdoor access may tamper with the model to change its weights. This requires the detection of model tampering. To deal with this problem, [34] suggests generating a set of test vectors named _Sensitive-Samples_ \(\{v_{i}\}_{i=1}^{n}\), whose outputs predicted by the compromised model will be different from the outputs predicted by the original model. As formalized in [34], suppose we suspect a pre-trained model \(f_{\theta}(x)\) of having been modified by an attacker after it was sent to the cloud service provider. Finding sensitive samples for verifying the model's integrity is equivalent to the optimization task: \(v=\operatorname*{argmax}_{x}\left\|\frac{\partial f_{\theta}(x)}{\partial\theta}\right\|_{F}\), where \(\left\|\cdot\right\|_{F}\) is the Frobenius norm of a matrix. A _successful detection_ is defined as "given \(N_{S}\) sensitive samples, there is at least one sample whose top-1 label predicted by the compromised model is different from the top-1 label predicted by the correct model". Clearly, optimizing this expensive function requires a BO algorithm that is able to work with high-dimensional structured images, unlike usual inputs that take values in hyper-rectangles.

Figure 2: The plot shows the **detection rates** corresponding to the number of samples on the MNIST dataset. The larger the number of sensitive samples, the higher the detection rate. As shown in the figure, NeuralBO can generate sensitive samples that achieve nearly 90% of the detection rate with at least 8 samples.
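The objective above can be evaluated with automatic differentiation; a possible PyTorch sketch (with illustrative names, assuming `model` maps a single batched image to class logits) is:

```python
import torch

def sensitivity(model, x):
    """||d f_theta(x) / d theta||_F, the quantity maximized over x to
    obtain a sensitive sample. Expects x of shape (1, C, H, W)."""
    out = model(x)                          # shape: (1, num_classes)
    sq = 0.0
    for k in range(out.shape[-1]):          # accumulate per-logit gradients
        grads = torch.autograd.grad(out[0, k], list(model.parameters()),
                                    retain_graph=True)
        sq = sq + sum((g ** 2).sum() for g in grads)
    return sq.sqrt()
```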
We used a hand-written digit classification model (pre-trained on MNIST data) as the original model and tampered with it by randomly adding noise to each of its weights. We repeat this procedure 1000 times to generate 1000 different tampered models. The top-1 accuracy of the original MNIST model is 93% and is reduced to \(87.73\%\pm 0.08\%\) after modifications.

To reduce the computations, we reduce the dimension of the images from \(28\times 28\) to \(7\times 7\) and perform the optimization in a 49-dimensional space. After finding the optimal points, we recover these points to the original dimension by applying an upscale operator to obtain the sensitive samples. We compare our method with all other baselines by the average detection rate with respect to the number of sensitive samples. From Figure 2, it can be seen that our method can generate samples with better detection ability than the other baselines. This demonstrates the ability of our method to deal with complex structured data such as images.
#### 7.2.2 Unknown target document retrieval
Next, we evaluate our proposed method on a retrieval problem where our goal is to retrieve a document from a corpus that matches the user's preference. The optimization algorithm works as follows: it retrieves a document, receives the user's feedback score, and then updates its belief and attempts again until it reaches a high score or its budget is depleted. The objective target function is defined as the user's evaluation of each document, which usually has a complex structure. It is clear that evaluations are expensive since the user must read each suggestion. Searching for the most relevant document is considered as finding the document \(d\) in the dataset \(S_{text}\) that maximizes the match to the target document \(d_{t}\): \(d=\operatorname*{argmax}_{d\in S_{text}}\operatorname{Match}(d,d_{t})\), where \(\operatorname{Match}(d,d_{t})\) is a matching score between documents \(d\) and \(d_{t}\). We represent each document by a word frequency vector \(x_{n}=(x_{n1},\cdots,x_{nJ})\), where \(x_{nj}\) is the number of occurrences of the \(j\)-th vocabulary term in the \(n\)-th document, and \(J\) is the vocabulary size.

Figure 3: We search for the most related document for a specified target document in the **Amazon product reviews** dataset and report the maximum **hierarchical F1 score** found by all baselines. All methods show similar behaviour, and NeuralBO performs comparably and much better than the GP-based baselines.
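Such word-frequency representations can be built, for example, with scikit-learn; the snippet below is an illustrative sketch with a toy corpus (the hierarchical F1 scoring from [35] is not reproduced here).

```python
from sklearn.feature_extraction.text import CountVectorizer

# Build J = 500 dimensional word-frequency vectors x_n for a corpus.
corpus = ["great battery life", "battery drains fast", "excellent camera"]
vectorizer = CountVectorizer(max_features=500)
X = vectorizer.fit_transform(corpus)   # sparse matrix, shape (n_documents, J)
```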
Our experiment uses the Amazon product reviews dataset 5, which is composed of users' product reviews from Amazon's online selling platform. This dataset has hierarchical categories, where the category classes are sorted from general to specific. The dataset is structured as: 6 classes in "level 1", 64 classes in "level 2" and 464 classes in "level 3". The number of users' reviews was originally 40K, which was reduced to 37738 after ignoring reviews with the "unknown" category. We choose the size of the vocabulary to be 500 and use the hierarchical F1-score introduced in [35] as a scoring metric for the target and retrieved documents. We report the mean and standard deviation of the hierarchical F1-score between target and retrieved documents over ten runs for all methods in Figure 3. Figure 3 indicates that our method shows better performance on the Amazon product review dataset in comparison with the other approaches.
Footnote 5: [https://www.kaggle.com/datasets/kashnitsky/hierarchical-text-classification](https://www.kaggle.com/datasets/kashnitsky/hierarchical-text-classification)
#### 7.2.3 Optimizing control parameters for robot pushing
Finally, we evaluate our method on tuning control parameters for the robot pushing problem considered in [9]. We run each method for a total of 6000 evaluations and repeat ten times to average the optimization results. Neural-BO and all other methods are initialized with 15 points. Figure 4 summarizes the median of the best rewards achieved by all methods. It can be seen that Neural-BO achieves the highest reward after 5K optimization steps.
We also use this task to report, in Figure 5, the GPU memory used by three practical baselines: GP-UCB, Neural-BO, and NeuralGreedy. The GPU memory requirements of NeuralGreedy and Neural-BO remain almost constant over the iterations, unlike GP. For GP, the GPU memory requirement grows fast with the iterations due to the need to compute the inverse of larger and larger covariance matrices. We report here GP-UCB, but similar behavior is also noted for GP-EI and GP-TS. NeuralGreedy consumes slightly less memory because it only needs memory to store the neural network model and for the training process. Neural-BO uses a little more memory due to the calculation of the matrix \(\mathbf{U}\) described in Algorithm 1. However, this matrix has a fixed size; therefore, the memory consumption does not increase over time. We do not report the memory used by RF and DNGO because their implementations are CPU-based, and their running speeds are too slow compared to all the above baselines.

Figure 4: Optimization results for control parameters of the 14D robot pushing problem. The x-axis shows iterations, and the y-axis shows the median of the best reward obtained.
## 8 Conclusion
We proposed a new algorithm for Bayesian optimization using deep neural networks. A key advantage of our algorithm is that it is computationally efficient and performs better than traditional Gaussian process based methods, especially for complex structured design problems. We provided rigorous theoretical analysis for our proposed algorithm and showed that its cumulative regret converges with a sub-linear regret bound. Using both synthetic benchmark optimization functions and a few real-world optimization tasks, we showed the effectiveness of our proposed algorithm.

Figure 5: The GPU memory used by three practical baselines for the 14D robot pushing control parameter optimization task.
Our proposed method currently employs over-parametric neural networks, which can be analyzed using neural tangent kernels. As a future work, it would be interesting to investigate if it is possible to extend this work beyond over-parametric neural networks to allow a broader class of neural networks that may have smaller widths.
## 9 Acknowledgement
This research was partially funded by the Australian Government through the Australian Research Council (ARC). Prof Venkatesh is the recipient of an ARC Australian Laureate Fellowship (FL170100006).
|
2307.13765 | A real-time material breakage detection for offshore wind turbines based
on improved neural network algorithm | The integrity of offshore wind turbines, pivotal for sustainable energy
generation, is often compromised by surface material defects. Despite the
availability of various detection techniques, limitations persist regarding
cost-effectiveness, efficiency, and applicability. Addressing these
shortcomings, this study introduces a novel approach leveraging an advanced
version of the YOLOv8 object detection model, supplemented with a Convolutional
Block Attention Module (CBAM) for improved feature recognition. The optimized
loss function further refines the learning process. Employing a dataset of
5,432 images from the Saemangeum offshore wind farm and a publicly available
dataset, our method underwent rigorous testing. The findings reveal a
substantial enhancement in defect detection stability, marking a significant
stride towards efficient turbine maintenance. This study's contributions
illuminate the path for future research, potentially revolutionizing
sustainable energy practices. | Yantong Liu | 2023-07-25T18:50:05Z | http://arxiv.org/abs/2307.13765v1 | # Fast Recognition of birds in offshore wind farms based on an improved deep learning model
###### Abstract
Offshore wind turbines are crucial for sustainable energy generation. However, their effectiveness and longevity are significantly affected by surface material defects. These defects can result from various factors such as corrosion, erosion, and mechanical damage. Early and accurate detection of these defects is essential to maintain the performance of the turbines and prevent catastrophic failures.
Bird detection; CBAM algorithm; computer science; deep learning; offshore wind farm; recognition; timely detection; YOLOv5 algorithm.
## 1 Introduction
Offshore wind turbines are crucial for sustainable energy generation. However, their effectiveness and longevity are significantly affected by surface material defects [1]. These defects can result from various factors such as corrosion, erosion, and mechanical damage. Early and accurate detection of these defects is essential to maintain the performance of the turbines and prevent catastrophic failures [2].
Traditionally, defect detection has been carried out through manual inspections and conventional automated methods [3]. However, these methods have significant limitations such as high cost, time consumption, and a requirement for expert knowledge [4]. Furthermore, conventional methods often struggle with the variable environmental conditions and the complex structures of offshore wind turbines [5].
With the advancement of technology, machine learning, especially deep learning, has shown great potential in defect detection [6]. Deep learning-based methods, unlike traditional methods, can learn to recognize complex patterns and adapt to different conditions, making them highly effective for defect detection [7]. However, current deep learning methods still have some limitations, such as difficulty handling large and high-resolution images and sensitivity to the variation in defect appearance [8].
In this study, we propose an improved deep learning method for real-time surface material defect detection in offshore wind turbines. Our method is based on the YOLOv8 algorithm, a state-of-the-art object detection algorithm, with an
added Convolutional Block Attention Module (CBAM) for better feature learning [9]. We also propose an improved loss function to optimize the learning process. Our method was tested on a publicly available dataset and a dataset obtained from Xinwanjin offshore power plant.
The surface material of offshore wind turbines is exposed to a harsh and variable environment, which can cause various types of defects [10]. The most common types of defects include corrosion, erosion, and mechanical damage, each of which can significantly reduce the effectiveness and lifespan of the turbines [11]. Early and accurate detection of these defects is a crucial aspect of maintaining the performance and safety of offshore wind turbines [12].
Current methods for surface material defect detection in offshore wind turbines can be broadly categorized into manual inspections and automated methods [13]. Manual inspections involve trained professionals visually inspecting the turbines for defects. While this method can be highly accurate, it is costly, time-consuming, and requires expert knowledge. It is also subject to human error and can be hazardous due to the often dangerous conditions of offshore wind farms [14].
Automated methods, on the other hand, use technology to automate the defect detection process. These methods usually involve the use of sensors and imaging technologies to capture images or data from the surface of the turbines, which are then analyzed to identify defects [15]. While automated methods can overcome some of the limitations of manual inspections, they still have significant drawbacks. For instance, they often struggle with the variable environmental conditions and the complex structures of offshore wind turbines [16]. Moreover, they usually require a significant amount of data preprocessing and feature engineering, which can be complex and time-consuming [17].
Recent years have seen the emergence of deep learning as a promising solution to these challenges [18]. Deep learning algorithms, unlike traditional automated methods, can learn complex patterns from data and adapt to different conditions, making them highly effective for defect detection [19]. In particular, convolutional neural networks (CNNs), a type of deep learning algorithm, have shown great potential in image-based defect detection due to their ability to learn hierarchical features from images [20].
However, despite their potential, current deep learning methods for defect detection still have some limitations. One major issue is that they often struggle with large and high-resolution images, which are common in offshore wind turbine inspections [21]. Another issue is that they can be sensitive to the variation in defect appearance due to different environmental conditions and lighting [22].
In the next chapter, we will describe our proposed method, which aims to address these issues by improving the YOLOv8 algorithm with the addition of a Convolutional Block Attention Module (CBAM) and an improved loss function.
### Experimental Environment
All experiments in this study were conducted on a Linux system with an Intel(R) Xeon(R) E5-2680 v4 CPU @ 2.40GHz, an NVIDIA GeForce RTX 3090 (24G) graphics card, and 32GB RAM. We used the PyTorch 1.0.1 deep learning framework and Python 3.6 for training and testing the defect detection network.
### Experimental Data
The images used in this research were collected from the Saemangeum offshore wind farm in South Korea and a publicly available dataset of offshore wind turbines. A total of 5,432 images were collected, from which 5,000 images of offshore wind turbines were randomly selected. These images were then resized to 500x500 pixels. The images were manually labeled and annotated using labeling software to highlight areas of surface material defects. The dataset was divided into 4,000 images for training, 800 for validation, and 200 for testing.
### Background and Literature Review
The efficacy of defect detection in offshore wind turbines is a crucial factor in enhancing their efficiency and longevity. Traditional methods, such as manual inspections or conventional automated methods, often fall short due to their cost, time consumption, and need for expert knowledge, especially when dealing with the complex structures and variable environmental conditions of offshore wind turbines [23]. With the advent of technology, machine learning and particularly deep learning have shown substantial potential in defect detection. These algorithms can adapt to different conditions and recognize complex patterns, thereby rendering them effective for defect detection [24]. However, even these methods face challenges, such as difficulties in handling large, high-resolution images, and sensitivity to variations in defect appearances [25]. This study endeavors to address these limitations by proposing an improved deep learning
\begin{table}
\begin{tabular}{c c}
\hline
Parameters & Value \\
\hline
Initial learning rate & 0.0032 \\
Abort learning rate & 0.12 \\
Feed batch size & 16 \\
Number of warm-up learning rounds & 2.0 \\
Initial bias learning rate of warm-up learning & 0.05 \\
Number of training sessions & 100 \\
\hline
\end{tabular}
\end{table}
Table 1: Training hyperparameter settings
Figure 6: Dataset sample illustration
algorithm, based on the YOLOv8 model, for real-time surface material defect detection. Our approach adds a Convolutional Block Attention Module (CBAM) for enhanced feature learning, and an improved loss function to optimize the learning process. The methodology has been tested on a dataset obtained from the Saemangeum offshore wind farm in South Korea and a publicly available dataset of offshore wind turbines, showing promising results in terms of stability and complementing other detection methods [26].
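For concreteness, the sketch below shows how the hyperparameters of Table 1 could be passed to the Ultralytics YOLOv8 training API. The dataset configuration file and model variant are hypothetical, and mapping the table's "Abort learning rate" to the final learning-rate fraction `lrf` is an assumption; the CBAM modification and improved loss described later are not part of stock Ultralytics training.

```python
# Minimal sketch of YOLOv8 training with the Table 1 settings (assumptions noted below).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained checkpoint; the variant is an assumption

model.train(
    data="wind_turbine_defects.yaml",  # hypothetical YOLO-format dataset config
    epochs=100,            # "Number of training sessions"
    batch=16,              # "Feed batch size"
    imgsz=512,             # nearest stride multiple to the 500x500 images used here
    lr0=0.0032,            # initial learning rate
    lrf=0.12,              # final LR fraction; assumed to match "Abort learning rate"
    warmup_epochs=2.0,     # "Number of warm-up learning rounds"
    warmup_bias_lr=0.05,   # "Initial bias learning rate of warm-up learning"
)
```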
## 3 Methods
### YOLOv8
YOLO (You Only Look Once), a popular object detection and image segmentation model, was developed by Joseph Redmon and Ali Farhadi at the University of Washington. The Ultralytics' YOLOv8 is a state-of-the-art (SOTA) model that builds on the success of previous versions, introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLOv8 supports a full range of vision AI tasks, including detection, segmentation, pose estimation, tracking, and classification, rendering it versatile across diverse applications and domains [27].
YOLOv8 was developed as a part of the YOLO series, each version of which introduced significant advancements. The original YOLO model was known for its high speed and accuracy. YOLOv2 improved the original model by incorporating batch normalization, anchor boxes, and dimension clusters. YOLOv3 further enhanced the model's performance using a more efficient backbone network, multiple anchors, and spatial pyramid pooling. YOLOv4 introduced innovations like Mosaic data augmentation, a new anchor-free detection head, and a new loss function. YOLOv5 further improved the model's performance and added new features such as hyperparameter optimization, integrated experiment tracking, and automatic export to popular export formats. YOLOv6, open-sourced by Meituan, is used in many of the company's autonomous delivery robots. YOLOv7 added additional tasks such as pose estimation on the COCO keypoints dataset [27].
As the latest version of YOLO by Ultralytics, YOLOv8 is built on cutting-edge advancements in deep learning and computer vision, offering unparalleled performance in terms of speed and accuracy. Its streamlined design makes it suitable for various applications and easily adaptable to different hardware platforms, from edge devices to cloud APIs [27].
### Cbam
To improve the accuracy of object detection, we added the Convolutional Block Attention Module (CBAM) to the YOLOv8 network model [27].
The inspiration for CBAM comes mainly from the way the human brain processes visual information [28]. CBAM is a simple, lightweight, and effective attention module for feedforward convolutional neural networks. It improves on a limitation of SENet, whose channel-wise attention can only focus on feedback from certain layers [29].
CBAM infers attention along both the channel and spatial dimensions and multiplies the generated attention maps with the input feature map for adaptive feature refinement [30]. With only a negligible increase in computational
Figure 1: YOLOv5s network structure
complexity, CBAM significantly enhances the image feature extraction capabilities of the network model [31].
CBAM can be integrated into most current mainstream networks and trained end-to-end with basic convolutional neural networks. Therefore, we chose to integrate this module into the YOLOv8 network to highlight essential features, reduce unnecessary feature extraction, and effectively improve detection accuracy. The structure of the CBAM module is shown in Figure 2.
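For reference, below is a minimal PyTorch sketch of the CBAM module described above, following the standard channel-then-spatial attention design; the reduction ratio and the 7x7 spatial kernel are conventional defaults from the original CBAM paper, not values stated in this text.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Shared MLP over global average- and max-pooled channel descriptors."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    """7x7 conv over channel-wise average and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Sequential channel then spatial attention refinement of a feature map."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)      # channel refinement
        return x * self.sa(x)   # spatial refinement
```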
|
2310.11595 | WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks
Against Deep Neural Networks | Due to the popularity of Artificial Intelligence (AI) technology, numerous
backdoor attacks are designed by adversaries to mislead deep neural network
predictions by manipulating training samples and training processes. Although
backdoor attacks are effective in various real scenarios, they still suffer
from the problems of both low fidelity of poisoned samples and non-negligible
transfer in latent space, which make them easily detectable by existing
backdoor detection algorithms. To overcome the weakness, this paper proposes a
novel frequency-based backdoor attack method named WaveAttack, which obtains
image high-frequency features through Discrete Wavelet Transform (DWT) to
generate backdoor triggers. Furthermore, we introduce an asymmetric frequency
obfuscation method, which can add an adaptive residual in the training and
inference stage to improve the impact of triggers and further enhance the
effectiveness of WaveAttack. Comprehensive experimental results show that
WaveAttack not only achieves higher stealthiness and effectiveness, but also
outperforms state-of-the-art (SOTA) backdoor attack methods in the fidelity of
images by up to 28.27\% improvement in PSNR, 1.61\% improvement in SSIM, and
70.59\% reduction in IS. | Jun Xia, Zhihao Yue, Yingbo Zhou, Zhiwei Ling, Xian Wei, Mingsong Chen | 2023-10-17T21:43:42Z | http://arxiv.org/abs/2310.11595v2 | # WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks
###### Abstract
Due to the increasing popularity of Artificial Intelligence (AI), more and more backdoor attacks are designed to mislead Deep Neural Network (DNN) predictions by manipulating training samples and training processes. Although backdoor attacks have been investigated in various real scenarios, they still suffer from the problems of both low fidelity of poisoned samples and non-negligible transfer in latent space, which make them easily detectable by existing backdoor detection algorithms. To overcome this weakness, we propose a novel frequency-based backdoor attack method named WaveAttack, which obtains image high-frequency features through Discrete Wavelet Transform (DWT) to generate backdoor triggers. Furthermore, we introduce an asymmetric frequency obfuscation method, which can add an adaptive residual in the training and inference stage to improve the impact of triggers and further enhance the effectiveness of WaveAttack. Comprehensive experimental results show that WaveAttack not only achieves higher stealthiness and effectiveness, but also outperforms state-of-the-art (SOTA) backdoor attack methods in the fidelity of images by up to 28.27% improvement in PSNR, 1.61% improvement in SSIM, and 70.59% reduction in IS.
## Introduction
Along with the prosperity of Artificial Intelligence (AI), Deep Neural Networks (DNNs) have become increasingly prevalent in numerous safety-critical domains for precise perception and real-time control, such as autonomous vehicles [22], medical diagnosis [23], and industrial automation [14]. However, the trustworthiness of DNNs faces significant threats due to various notorious adversarial and backdoor attacks. Typically, adversarial attacks [1] manipulate input data during the inference stage to induce incorrect predictions by a trained DNN, whereas backdoor attacks [15] tamper with training samples or training processes to embed concealed triggers during the training stage, which can be exploited to generate malicious outputs. Although adversarial attacks on neural networks frequently appear in various scenarios, backdoor attacks have attracted more attention due to their stealthiness and effectiveness. Generally, the performance of backdoor attacks can be evaluated by the following three objectives of an adversary: i) _efficacy_ that refers to the effectiveness of an attack in causing the target model to produce incorrect outputs or exhibit unintended behavior; ii) _specificity_ that denotes the precision of the attack in targeting a specific class; and iii) _fidelity_ that represents the degree to which adversarial examples or poisoned training samples are indistinguishable from their benign counterparts [13]. Note that efficacy and specificity represent the effectiveness of backdoor attacks, while fidelity denotes the stealthiness of backdoor attacks.
Aiming at higher stealthiness and effectiveness, existing backdoor attack methods (e.g., IAD [24], WaNet [24], BppAttack [25], and FTrojan [26]) are built based on various optimizations, which can be mainly classified into two categories. The first one is the _sample minimal impact_ methods that can optimize the size of the trigger and minimize its pixel value, making the backdoor trigger hard to detect in training samples for the purpose of achieving a high stealthiness of a backdoor attacker. Although these methods are promising in backdoor attacks, due to the explicit trigger influence on training samples, they cannot fully evade the existing backdoor detection methods based on training samples. The second one is the _latent space obfuscation-based_ methods, which can be integrated into any existing backdoor attack methods. By employing asymmetric samples, these methods can obfuscate the latent space between benign samples and poisoned samples [23, 24]. Although these methods can bypass latent space detection techniques, they greatly suffer from low image quality, making them extremely difficult to apply in practice. Therefore, _how to improve both the effectiveness and stealthiness of backdoor attacks while minimally impacting the quality of training samples is becoming a significant challenge in the development of backdoor attacks, especially when facing various state-of-the-art backdoor detection methods_.
This paper draws inspiration from the work [23] where Wang et al. find that high-frequency features can enhance the generalization ability of DNNs and remain imperceptible to humans. To acquire high-frequency components (i.e., high-frequency features), wavelet transform has been widely investigated in various image-processing tasks [11, 22, 20]. This paper introduces a novel frequency-based backdoor attack method named WaveAttack, which utilizes Discrete Wavelet Transform (DWT) to extract the high-frequency component for
backdoor trigger generation. Furthermore, to improve the impact of triggers and further enhance the effectiveness of our approach, we employ _asymmetric frequency obfuscation_ that utilizes an asymmetric coefficient of the trigger in the high-frequency domain during the training and inference stages. This paper makes the following three contributions:
* We introduce a frequency-based backdoor trigger generation method named WaveAttack, which can effectively generate the backdoor residuals for the high-frequency component based on DWT, thus ensuring the high fidelity of poisoned samples.
* We propose a novel asymmetric frequency-based obfuscation backdoor attack method to enhance its stealthiness and effectiveness, which can not only increase stealthiness in the latent space but also improve the Attack Success Rate (ASR) in training samples.
* We conduct comprehensive experiments to demonstrate that WaveAttack outperforms SOTA backdoor attack methods regarding both stealthiness and effectiveness.
## Related Work
**Backdoor Attack.** Typically, backdoor attacks try to embed backdoors into DNNs by manipulating their input samples and training processes. In this way, adversaries can control DNN outputs through concealed triggers, thus resulting in manipulated predictions [10]. Based on whether the training process is manipulated, existing backdoor attacks can be categorized into two types, i.e., _training-unmanipulated_ and _training-manipulated_ attacks. Specifically, the training-unmanipulated attacks only inject a visible or invisible trigger into the training samples of some DNN, leading to its recognition errors [2]. For instance, Chen et al. [3] introduced a Blend attack that generates poisoned data by merging benign training samples with specific key visible triggers. Moreover, there exist a large number of invisible trigger-based backdoor attack methods, such as natural reflection [11], human imperceptible noise [23], and image perturbation [24], which exploit the changes induced by real-world physical environments. Although these training-unmanipulated attacks are promising, due to their substantial impacts on training sample quality, most of them can still be identified relatively easily. As an alternative, the training-manipulated attacks [22, 23] assume that adversaries from some malicious third party can control the key steps of the training process, thus achieving a stealthier attack. Although the above two categories of backdoor attacks are promising, most of them struggle with the coarse-grained optimization of effectiveness and stealthiness, complicating the acquisition of superior backdoor triggers. Due to the significant difference in latent space and low poisoned sample fidelity, they cannot evade the latest backdoor detection methods.
**Backdoor Defense.** There are two major types of backdoor defense methods, i.e., the _detection-based defense_ and _erasure-based defense_. The detection-based defenses can be further classified into two categories, i.e., sample-based and latent space-based detection methods. Specifically, sample-based detection methods can identify the distribution differences between poisoned samples and benign samples [1, 3, 2], while latent space-based detection methods aim to find the disparity between the latent spaces of poisoned samples and benign samples [12, 1]. Unlike the above detection strategies that aim to prevent the injection of backdoors into DNNs by identifying poisoned samples during the training stages, the erasure-based defenses can eradicate the backdoors from DNNs. So far, the erasure-based defenses can be classified into three categories, i.e., poison suppression-based, model reconstruction-based, and trigger generation-based defenses. The poison suppression-based methods [10] utilize the differential learning speed between poisoned and benign samples during training to mitigate the influence of backdoor triggers on DNNs. The model reconstruction-based methods [11, 12] leverage a selected set of benign data to rebuild DNN models, aiming to mitigate the impact of backdoor triggers. The trigger generation-based methods [22] reverse-engineer backdoor triggers by capitalizing on the effects of backdoor attacks on training samples.
To the best of our knowledge, WaveAttack is the first attempt to generate triggers for the high-frequency component obtained through DWT. Unlike existing backdoor attack methods, WaveAttack considers both the fidelity of poisoned samples and latent space obfuscation. By using asymmetric frequency obfuscation, WaveAttack can not only acquire backdoor attack effectiveness but also achieve high stealthiness regarding both image quality and latent space.
## Our Method
In this section, we first present the preliminaries and the threat model. Then, we show our motivations for adding triggers to the high-frequency components. Finally, we detail the attack process of our method WaveAttack.
### Preliminaries
**Notations.** We follow the training scheme of Adapt-Blend [20]. Let \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\) be a clean training dataset, where \(\mathbf{x}_{i}\in\mathbb{X}=\{0,1,...,255\}^{C\times W\times H}\) is an image, and \(y_{i}\in\mathbb{Y}=\{1,2,...,K\}\) is its corresponding label. Note that \(K\) represents the number of labels. For a given training dataset, we select a subset of \(\mathcal{D}\) with a poisoning rate \(p_{a}\) as the _payload samples_ \(\mathcal{D}_{a}=\{(\mathbf{x}^{\prime}_{i},y_{t})\,|\,\mathbf{x}^{\prime}_{i}=T(\mathbf{x}_{i}),\mathbf{x}_{i}\in\mathbb{X}\}\), where \(T(\cdot)\) is a backdoor transformation function, and \(y_{t}\) is an adversary-specified target label. We use a subset of \(\mathcal{D}\) with poisoning rate \(p_{r}\) as the _regularization samples_ \(\mathcal{D}_{r}=\{(\mathbf{x}^{\prime}_{i},y_{i})\,|\,\mathbf{x}^{\prime}_{i}=T(\mathbf{x}_{i}),\mathbf{x}_{i}\in\mathbb{X}\}\). For a given dataset, a backdoor attack adversary tries to train a backdoored model \(f\) that predicts \(\mathbf{x}\) as its corresponding label, where \(\mathbf{x}\in\mathcal{D}\cup\mathcal{D}_{a}\cup\mathcal{D}_{r}\).
**Threat Model.** Similar to existing backdoor attack methods [22, 23, 24], we assume that adversaries have complete control over the training datasets, the training process, and model implementation. They can embed backdoors into the DNNs by poi
soning the given training dataset. Moreover, in the inference stage, we assume that adversaries can only query back-doored models using any samples.
**Adversarial Goal.** Throughout the attack process, adversaries strive to meet two core goals, i.e., effectiveness and stealthiness. Effectiveness indicates that adversaries try to train backdoored models with a high ASR while ensuring that the decrease in Benign Accuracy (BA) remains imperceptible. Stealthiness denotes that samples with triggers have high fidelity, and there is no latent separation between poisoned and clean samples in the latent space.
### Motivation
Unlike humans, who are not sensitive to high-frequency features, DNNs can effectively learn high-frequency features of images Wang et al. (2020), which can be exploited for backdoor trigger generation. In other words, poisoned samples generated from high-frequency features can easily evade human inspection. Based on this observation, if we design backdoor triggers on top of high-frequency features, the stealthiness of the corresponding backdoor attacks can be ensured. To obtain high-frequency components from training samples, we resort to the Discrete Wavelet Transform (DWT) to capture characteristics from both the time and frequency domains Shensa (1992), which enables the extraction of multiple frequency components from training samples. The reason why we adopt DWT rather than the Discrete Cosine Transform (DCT) is that DWT better captures high-frequency features of training samples (i.e., edges and textures) and allows superior inverse operations during both the encoding and decoding phases, thus minimizing the impact on the fidelity of poisoned samples. In our approach, we adopt a classic and effective biorthogonal wavelet transform method (i.e., the Haar wavelet Daubechies (1990)), which mainly involves four kernel operations, i.e., \(LL^{T}\), \(LH^{T}\), \(HL^{T}\), and \(HH^{T}\). Here \(L\) and \(H\) denote the low-pass and high-pass filters, respectively, with \(L^{T}=\frac{1}{\sqrt{2}}\left[1\;\;1\right]\) and \(H^{T}=\frac{1}{\sqrt{2}}\left[-1\;\;1\right]\). Note that, based on the four operations, the Haar wavelet can decompose an image into four frequency components (i.e., \(LL\), \(LH\), \(HL\), \(HH\)) using DWT, where \(HH\) contains only the high-frequency information of a sample. Meanwhile, the Haar wavelet can reconstruct the image from the four frequency components via the Inverse Discrete Wavelet Transform (IDWT). To verify the motivation of our approach, Figure 1 illustrates the impact of adding the same noise to different frequency components of an image, i.e., Figure 1(a). We can find that, compared with the other three poisoned images, i.e., Figures 1(b) to 1(d), it is much more difficult to spot the difference between the original image and the poisoned counterpart on HH, i.e., Figure 1(e). Therefore, the high-frequency component (i.e., HH) is the most suitable place to inject triggers for the backdoor attack purpose.
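As a minimal illustration of this decomposition, the NumPy sketch below implements a one-level 2-D Haar DWT and its inverse using the filters given above; the sign and ordering conventions of the subbands vary across libraries, so this is only one consistent choice.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT of an (H, W) array with even H and W."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    LL = (a + b + c + d) / 2.0    # low-pass in both directions
    HL = (-a + b - c + d) / 2.0   # high-pass horizontally
    LH = (-a - b + c + d) / 2.0   # high-pass vertically
    HH = (a - b - c + d) / 2.0    # high-pass in both directions
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = LL.shape
    img = np.empty((2 * h, 2 * w), dtype=LL.dtype)
    img[0::2, 0::2] = (LL - HL - LH + HH) / 2.0
    img[0::2, 1::2] = (LL + HL - LH - HH) / 2.0
    img[1::2, 0::2] = (LL - HL + LH - HH) / 2.0
    img[1::2, 1::2] = (LL + HL + LH + HH) / 2.0
    return img

x = np.random.rand(32, 32)
subbands = haar_dwt2(x)
assert np.allclose(haar_idwt2(*subbands), x)  # IDWT recovers the image
```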
### Implementation of WaveAttack
In this subsection, we introduce the design of our WaveAttack approach. Figure 2 shows the overview of WaveAttack. We first apply our trigger design to construct the poisoned samples, which comprise payload samples and regularization samples. Then, we use benign samples, payload samples, and regularization samples to train a classifier to achieve the core goals of WaveAttack.
**Trigger Design.** As aforementioned, our WaveAttack approach aims to achieve a stealthier backdoor attack, introducing triggers into the \(HH\) frequency component. Figure 2 contains the process of generating triggers by WaveAttack. First, we obtain the \(HH\) component of samples through DWT. Then, to generate imperceptible and sample-specific triggers, we employ an encoder-decoder network as a generator \(g\). These generated triggers are imperceptible additive residuals. Moreover, to achieve our asymmetric frequency obfuscation, we multiply the residuals by a coefficient \(\alpha\). We can generate the poisoned \(HH^{\prime}\) component with the triggers as follows:
\[\mathbf{HH^{\prime}}=\mathbf{HH}+\alpha\cdot g(\mathbf{HH};\mathbf{\omega}_{g}), \tag{1}\]
where \(\mathbf{\omega}_{g}\) is the generator parameters. Finally, we can utilize IDWT to reconstruct four frequency components of poisoned samples. Specifically, we use a U-Net-like Ronneberger et al. (2015) generator to obtain residuals, though other methods, such as VAE Kingma et al. (2014), can also be used by the adversary. This is because the skip connections of U-Net can effectively preserve the features of inputs with minimal impacts Ronneberger et al. (2015).
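The snippet below sketches the trigger injection of Eq. (1) in PyTorch; the two-layer convolutional generator and the value of \(\alpha\) are placeholders standing in for the trained U-Net-like generator and the tuned obfuscation coefficient.

```python
import torch
import torch.nn as nn

# Stand-in generator g: the paper uses a U-Net-like encoder-decoder; this
# two-layer convolutional stack is only a placeholder with the same interface.
g = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)

alpha = 0.1                        # obfuscation coefficient (placeholder value)
HH = torch.randn(1, 3, 16, 16)     # HH subband of a 32x32 RGB image, from DWT

residual = g(HH)                   # imperceptible additive residual
HH_poisoned = HH + alpha * residual    # Eq. (1)
# HH_poisoned then replaces HH in the IDWT to reconstruct the poisoned image.
```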
Figure 1: A motivating example for the backdoor trigger design on high-frequency components.
Figure 2: Overview of our attack method WaveAttack.
**Optimization Objective.** Our attack method WaveAttack has two networks to optimize. We aim to optimize a generator \(g\) to generate small residuals with a minimal impact on samples. Furthermore, we aim to optimize a backdoored classifier \(f\), which can enable the effectiveness and stealthiness of WaveAttack. For the first optimization objective, we use the \(L_{\infty}\) norm to optimize small residuals. The optimization objective is defined as follows:
\[\mathcal{L}_{r}=||g(HH;\mathbf{\omega}_{g})||_{\infty}. \tag{2}\]
As for the second optimization objective, we train the classifier with the cross-entropy loss function on the \(\mathcal{D}\), \(\mathcal{D}_{a}\), and \(\mathcal{D}_{r}\) datasets. The optimization objective is defined as follows:
\[\mathcal{L}_{c}=\mathcal{L}(\mathbf{x}_{p},y_{t};\mathbf{\omega}_{c})+\mathcal{L}(\mathbf{x}_{r},\mathbf{y};\mathbf{\omega}_{c})+\mathcal{L}(\mathbf{x}_{b},\mathbf{y};\mathbf{\omega}_{c}), \tag{3}\]
where \(\mathcal{L}(\cdot)\) is the cross-entropy loss function, \(\mathbf{\omega}_{c}\) is the classifier parameters, \(\mathbf{x}_{b}\in\mathcal{D},\mathbf{x}_{p}\in\mathcal{D}_{a}\), and \(\mathbf{x}_{r}\in\mathcal{D}_{r}\). The total loss function is as follows:
\[\mathcal{L}_{total}=\mathcal{L}_{c}+\mathcal{L}_{r}. \tag{4}\]
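A minimal PyTorch sketch of this combined objective is shown below, assuming a classifier `f`, batches of benign (`x_b, y_b`), regularization (`x_r, y_r`), and payload (`x_p`) samples, and the generator residual.

```python
import torch
import torch.nn.functional as F

def waveattack_loss(f, x_b, y_b, x_r, y_r, x_p, y_t, residual):
    """Total objective of Eq. (4): the classification terms of Eq. (3) plus
    the L_inf residual penalty of Eq. (2)."""
    y_p = torch.full((x_p.size(0),), y_t, dtype=torch.long, device=x_p.device)
    loss_c = (F.cross_entropy(f(x_p), y_p)     # payload samples -> target label
              + F.cross_entropy(f(x_r), y_r)   # regularization samples -> true labels
              + F.cross_entropy(f(x_b), y_b))  # benign samples -> true labels
    loss_r = residual.abs().max()              # ||g(HH)||_inf, Eq. (2)
    return loss_c + loss_r
```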
```
1: **Input:** i) \(\mathcal{D}\), benign training dataset; ii) \(\mathbf{\omega}_{g}\), randomly initialized generator parameters; iii) \(\mathbf{\omega}_{c}\), randomly initialized classifier parameters; iv) \(p_{a}\), rate of payload samples; v) \(p_{r}\), rate of regularization samples; vi) \(y_{t}\), target label; vii) \(E\), # of epochs in training process.
2: **Output:** i) \(\mathbf{\omega}_{g}\), well-trained generator parameters; ii) \(\mathbf{\omega}_{c}\), well-trained classifier parameters.
3: **WaveAttack Training:**
4: for \(e=1,\dots,E\) do
5:   for \((\mathbf{x},\mathbf{y})\) in \(\mathcal{D}\) do
6:     \(b\leftarrow\mathbf{x}.\mathrm{shape}[0]\)
7:     \(n_{m}\leftarrow(p_{a}+p_{r})\times b\)
8:     \(n_{a}\leftarrow p_{a}\times b\)
9:     \(n_{r}\leftarrow p_{r}\times b\)
10:    \(\mathbf{x}_{m}\leftarrow\mathbf{x}[:n_{m}]\)
11:    \(\mathbf{LL},\mathbf{LH},\mathbf{HL},\mathbf{HH}\leftarrow DWT(\mathbf{x}_{m})\)
12:    \(\mathbf{residual}\leftarrow\alpha\cdot g(\mathbf{HH};\mathbf{\omega}_{g})\)
13:    \(\mathbf{HH}^{\prime}\leftarrow\mathbf{HH}+\mathbf{residual}\)
14:    \(\mathbf{x}_{m}\leftarrow IDWT(\mathbf{LL},\mathbf{LH},\mathbf{HL},\mathbf{HH}^{\prime})\)
15:    \(\mathcal{L}_{1}\leftarrow\mathcal{L}(\mathbf{x}_{m}[:n_{a}],y_{t};\mathbf{\omega}_{c})\)   // payload samples \(\rightarrow\) target label
16:    \(\mathcal{L}_{2}\leftarrow\mathcal{L}(\mathbf{x}_{m}[n_{a}:n_{m}],\mathbf{y}[n_{a}:n_{m}];\mathbf{\omega}_{c})\)   // regularization samples \(\rightarrow\) true labels
17:    \(\mathcal{L}_{3}\leftarrow\mathcal{L}(\mathbf{x}[n_{m}:],\mathbf{y}[n_{m}:];\mathbf{\omega}_{c})\)   // benign samples \(\rightarrow\) true labels
18:    \(\mathcal{L}\leftarrow\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{3}+||\mathbf{residual}||_{\infty}\)
19:    \(\mathcal{L}\).backward()
20:    update(\(\mathbf{\omega}_{g}\), \(\mathbf{\omega}_{c}\))
21:   end for
22: end for
23: Return \(\mathbf{\omega}_{g}\), \(\mathbf{\omega}_{c}\)
```
**Algorithm 1** Training of WaveAttack
**Datasets and DNNs.** We evaluated all the attack methods on four classical benchmark datasets, i.e., CIFAR-10 (Krizhevsky, Hinton et al., 2009), CIFAR-100 (Krizhevsky, Hinton et al., 2009), GTSRB (Stallkamp et al., 2012), and a subset of ImageNet (with the first 20 categories) (Deng et al., 2009). The statistics of the datasets adopted in the experiments are presented in Table 4 (see Appendix). We used ResNet18 (He et al., 2016) as the DNN for the experiments. Moreover, we used VGG16 (Simonyan et al., 2015), SENet18 (Hu et al., 2018), ResNeXt29 (Xie et al., 2017), and DenseNet121 (Huang et al., 2017) to evaluate the generalizability of WaveAttack.
**Attack Configurations.** To compare the performance of WaveAttack with SOTA attack methods, we considered seven SOTA backdoor attacks, i.e., BadNets Gu et al. (2019), Blend Chen et al. (2017), IAD Nguyen et al. (2020), WaNet Nguyen et al. (2021), BppAttack Wang et al. (2022), Adapt-Blend Qi et al. (2023), and FTrojan Wang et al. (2022). Note that, similar to our work, Adapt-Blend has asymmetric triggers, and FTrojan is also a frequency-based attack. We performed the attack methods using the default hyperparameters described in their original papers. Specifically, the poisoning rate is set to 10% with a target label of 0 to ensure a fair comparison. See Appendix A1 for more details.
**Evaluation Metrics.** Similar to the existing work in Wang et al. (2022), we evaluated the effectiveness of all attack methods using two metrics, i.e., Attack Success Rate (ASR) and Benign Accuracy (BA). To evaluate the stealthiness of all attack methods, we used three metrics, i.e., Peak Signal-to-Noise Ratio (PSNR) Huynh-Thu et al. (2008), Structure Similarity Index Measure (SSIM) Wang et al. (2004), and Inception Score (IS) Salimans et al. (2016).
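As a reference for the fidelity metrics, the sketch below computes PSNR between a benign image and its poisoned counterpart; SSIM can be computed analogously with scikit-image's `structural_similarity`, and `max_val` assumes 8-bit images.

```python
import numpy as np

def psnr(clean, poisoned, max_val=255.0):
    """Peak Signal-to-Noise Ratio between a benign image and its poisoned
    counterpart; a higher value indicates a less visible trigger."""
    mse = np.mean((clean.astype(np.float64) - poisoned.astype(np.float64)) ** 2)
    if mse == 0.0:
        return np.inf  # identical images (the "INF" rows of Table 3)
    return 10.0 * np.log10(max_val ** 2 / mse)
```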
### Effectiveness Evaluation (RQ1)
**Effectiveness Comparison with SOTA Attack Methods.** To evaluate the effectiveness of WaveAttack, we compared the ASR and BA of WaveAttack with seven SOTA attack methods. Since IAD Nguyen et al. (2020) cannot attack the ImageNet dataset based on its open-source code, we do not provide its comparison result. Table 1 shows the attack performance of different attack methods. From this table, we can find that WaveAttack attains a high ASR without obviously degrading BA. In particular, on the CIFAR-10 and GTSRB datasets, WaveAttack achieves better ASR and BA than the other SOTA attack methods. Compared to FTrojan, a frequency-based attack method, WaveAttack achieves higher BA on the CIFAR-10, GTSRB, and ImageNet datasets. Note that compared to the asymmetric-based method Adapt-Blend, WaveAttack obtains superior ASR and BA on all datasets.
**Effectiveness on Different Networks.** To evaluate the effectiveness of WaveAttack on various networks, we conducted experiments on CIFAR-10 using different networks (i.e., VGG16 Simonyan et al. (2015), SENet18 Hu et al. (2018), ResNeXt29 Xie et al. (2017), and DenseNet121 Huang et al. (2017)). Table 2 shows the attack performance of WaveAttack on these networks. From this table, we can find that our approach successfully embeds the backdoor into different networks. WaveAttack not only mounts an effective backdoor attack but also maintains a high BA on benign samples, demonstrating its generalizability.
### Stealthiness Evaluation (RQ2)
To evaluate the stealthiness of WaveAttack, we compared the images with triggers generated by WaveAttack against those of SOTA attack methods. Moreover, we used t-SNE van der Maaten and Hinton (2008) to visualize the latent spaces of poisoned samples and benign samples from the target label.
**Stealthiness Results from The Perspective of Images.** To show the stealthiness of triggers generated by WaveAtt
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{CIFAR-10} & \multicolumn{2}{c|}{CIFAR-100} & \multicolumn{2}{c|}{GTSRB} & \multicolumn{2}{c}{ImageNet} \\
\cline{2-9}
 & BA & ASR & BA & ASR & BA & ASR & BA & ASR \\
\hline
No attack & 94.59 & - & 75.55 & - & 99.00 & - & 87.00 & - \\
\hline
BadNets Gu et al. (2019) & 94.36 & **100** & 74.90 & **100** & 98.97 & **100** & 85.80 & **100** \\
Blend Chen et al. (2017) & 94.51 & 99.91 & 75.10 & 99.84 & 98.26 & **100** & 86.40 & **100** \\
IAD Nguyen et al. (2020) & 94.32 & 99.12 & 75.14 & 99.28 & 99.26 & 98.37 & - & - \\
WaNet Nguyen et al. (2021) & 94.23 & 99.57 & 73.18 & 98.52 & 99.21 & 99.58 & 86.60 & 89.20 \\
BppAttack Wang et al. (2022) & 94.10 & **100** & 74.68 & **100** & 98.93 & 99.91 & 85.90 & 99.50 \\
Adapt-Blend Qi et al. (2023) & 94.31 & 71.57 & 74.53 & 81.66 & 98.76 & 60.25 & 86.40 & 90.10 \\
FTrojan Wang et al. (2022) & 94.29 & **100** & 75.37 & **100** & 98.83 & **100** & 85.10 & **100** \\
\hline
WaveAttack & **94.55** & **100** & **75.17** & 99.16 & **99.30** & **100** & **86.60** & 97.60 \\
\hline
\end{tabular}
\end{table}
Table 1: Attack performance comparison between WaveAttack and seven SOTA attack methods.
\begin{table}
\begin{tabular}{c|c|c c}
\hline
\multirow{2}{*}{Network} & No Attack & \multicolumn{2}{c}{WaveAttack} \\
\cline{2-4}
 & BA & BA & ASR \\
\hline
VGG16 & 93.62 & 93.70 & 99.76 \\
SENet18 & 94.51 & 94.63 & 100 \\
ResNeXt29 & 94.79 & 95.08 & 100 \\
DenseNet121 & 95.29 & 95.10 & 99.78 \\
\hline
\end{tabular}
\end{table}
Table 2: Effectiveness on different DNNs.
Figure 3: Comparison of examples generated by seven backdoor attacks. For each attack, we show the poisoned sample (top) and the magnified (\(\times\)5) residual (bottom).
tack, Figure 3 compares WaveAttack and SOTA attack methods using poisoned samples and their magnified residuals (\(\times\)5) counterparts. From this figure, we can find that the residual generated by WaveAttack is the smallest and leaves only a few subtle artifacts. The injected trigger by WaveAttack is nearly invisible to humans.
We used three metrics (i.e., PSNR, SSIM, and IS) to evaluate the stealthiness of triggers generated by WaveAttack. Table 3 shows the stealthiness comparison results between WaveAttack and seven SOTA attack methods. From this table, we can find that for datasets CIFAR-10, CIFAR-100, and ImageNet, WaveAttack achieves the best stealthiness. Note that WaveAttack can achieve the second-best SSIM for dataset GTSRB, but it outperforms BadNets by up to 60.56% in PSNR and 67.5% in IS.
**Stealthiness Results from The Perspective of Latent Space.** Many backdoor defense methods [14, 13] are based on the assumption that there is a latent separation between poisoned and benign samples in the latent space. Therefore, ensuring the stealthiness of an attack method from the perspective of the latent space becomes necessary. We obtained feature vectors of the test results from the feature extractor (the DNN without the last classifier layer) and used t-SNE [15] for visualization. Figure 4 visualizes the distributions of feature representations of the poisoned samples and the benign samples from the target label under the six attacks. From Figures 4(a) to 4(c) and 4(e), we can observe two distinct clusters, which can be utilized to detect poisoned samples or backdoored models [11]. However, as shown in Figures 4(d) and 4(f), the feature representations of poisoned samples are intermingled with those of benign samples for Adapt-Blend and WaveAttack, i.e., there is only one cluster. Adapt-Blend and WaveAttack thus achieve the best stealthiness from the perspective of the latent space and break the latent separation assumption to evade backdoor defenses. Although Adapt-Blend exhibits a degree of stealthiness, Table 3 reveals that WaveAttack surpasses Adapt-Blend in terms of image quality, suggesting that WaveAttack achieves superior stealthiness overall.
### Resistance to Existing Defenses (RQ3)
To evaluate the robustness of WaveAttack against existing backdoor defenses, we implemented representative backdoor defenses (i.e., GradCAM [20], STRIP [10], Fine-Pruning [12], and Neural Cleanse [13]) and evaluated the resistance to them. We also show the robustness of WaveAttack against Spectral Signature [14] in the appendix.
**GradCAM.** As an effective visualizing mechanism, GradCAM [20] has been used to visualize intermediate feature maps of DNN, interpreting the DNN
Figure 4: The t-SNE of feature vectors in the latent space under different attacks on CIFAR-10. We use red and blue points to denote poisoned and benign samples, respectively, where each point in the plots corresponds to a training sample from the target label.
Figure 5: GradCAM visualization results for both clean and backdoored models.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c|c c c}
\hline
\multirow{2}{*}{Attack Method} & \multicolumn{3}{c|}{CIFAR-10} & \multicolumn{3}{c|}{CIFAR-100} & \multicolumn{3}{c|}{GTSRB} & \multicolumn{3}{c}{ImageNet} \\
\cline{2-13}
 & PSNR & SSIM & IS & PSNR & SSIM & IS & PSNR & SSIM & IS & PSNR & SSIM & IS \\
\hline
No Attack & INF & 1.0000 & 0.000 & INF & 1.0000 & 0.000 & INF & 1.0000 & 0.000 & INF & 1.0000 & 0.000 \\
\hline
BadNets [11] & 25.77 & 0.9942 & 0.136 & 25.48 & 0.9943 & 0.137 & 25.33 & **0.9935** & 0.180 & 21.88 & 0.9678 & 0.025 \\
Blend [1] & 20.40 & 0.8181 & 1.823 & 20.37 & 0.8031 & 1.60 & 18.58 & 0.6840 & 2.118 & 13.72 & 0.1871 & 2.252 \\
IAD [12] & 24.35 & 0.9180 & 0.472 & 23.98 & 0.9138 & 0.490 & 23.84 & 0.9404 & 0.309 & - & - & - \\
WaNet [12] & 30.91 & 0.9724 & 0.326 & 31.62 & 0.9762 & 0.237 & 33.26 & 0.9659 & 0.170 & 35.18 & 0.9756 & 0.029 \\
BppAttack [13] & 27.79 & 0.9285 & 0.895 & 27.93 & 0.9207 & 0.779 & 27.79 & 0.8462 & 0.714 & 27.34 & 0.8009 & 0.273 \\
Adapt-Blend [11] & 25.97 & 0.9231 & 0.519 & 26.00 & 0.9133 & 0.495 & 24.14 & 0.8103 & 1.136 & 18.96 & 0.6065 & 1.150 \\
FTrojan [13] & 44.07 & 0.9976 & 0.019 & 44.09 & 0.9972 & 0.017 & 40.23 & 0.9813 & 0.065 & 35.55 & 0.9440 & 0.013 \\
\hline
WaveAttack & **47.49** & **0.9979** & **0.011** & **50.12** & **0.9992** & **0.005** & **40.67** & 0.9877 & **0.058** & **45.60** & **0.9913** & **0.007** \\
\hline
\end{tabular}
\end{table}
Table 3: Stealthiness comparison with existing attacks. Larger PSNR, SSIM, and smaller IS indicate better performance. The best and the second-best results are **highlighted** and **underlined**, respectively.
predictions. Existing defense methods [10, 14] exploit GradCAM to analyze the heatmap of input samples. Specifically, a clean model correctly predicts the class label, whereas a backdoored model predicts the target label. Based on this phenomenon, the backdoored model can induce an abnormal GradCAM heatmap compared with the clean model. If the heatmaps of poisoned samples are similar to those of benign sample counterparts, the attack method is robust and can resist defense methods based on GradCAM. Figure 5 shows the visualization heatmaps of a clean model and a backdoored model attacked by WaveAttack. Please note that here "clean" denotes a clean model trained by using benign training datasets. From this figure, we can find that the heatmaps of these models are similar, and WaveAttack can resist defense methods based on GradCAM.
**STRIP**. STRIP [10] is a representative sample-based defense method. When a potentially poisoned sample is input to a model, STRIP perturbs it with a random set of clean samples and monitors the entropy of the prediction output. If the entropy of an input sample is low, STRIP considers it poisoned. Figure 6 shows the entropies of the benign and poisoned samples. From this figure, we can see that the entropies of the poisoned samples are larger than those of the benign samples, so STRIP fails to detect the poisoned samples generated by WaveAttack.
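For reference, a minimal sketch of the STRIP test is given below: an input is blended with randomly drawn clean images and the mean prediction entropy is measured. The 50/50 blend weight is an assumption; the original STRIP superimposes images with its own weighting.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def strip_entropy(model, x, clean_pool, n_overlays=32, w=0.5):
    """Average prediction entropy of `x` superimposed with random clean
    images; STRIP flags an input as poisoned when this entropy is low."""
    idx = torch.randperm(clean_pool.size(0))[:n_overlays]
    blended = w * x.unsqueeze(0) + (1 - w) * clean_pool[idx]  # (n_overlays, C, H, W)
    probs = F.softmax(model(blended), dim=1)
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return ent.mean().item()
```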
**Fine-Pruning**. Fine-Pruning (FP) [13] is a representative model reconstruction defense, which is based on the assumption that the backdoor can activate a few dormant neurons in DNNs. Therefore, pruning these dormant neurons can eliminate the backdoors in DNNs. To evaluate the resistance to FP, we gradually pruned the neurons of the last convolutional and fully-connected layers. Figure 7 shows the performance comparison between WaveAttack and seven SOTA attack methods on CIFAR-10 when resisting FP. From this figure, we can find that, as more neurons are pruned, WaveAttack achieves superior performance to the other SOTA attack methods in terms of both ASR and BA. In other words, Fine-Pruning is not able to eliminate the backdoor generated by WaveAttack. Note that, though the ASR and BA of WaveAttack are similar to those of Adapt-Blend at the final stage of pruning, the initial ASR of Adapt-Blend (i.e., 71.57%) is much lower than that of WaveAttack (i.e., 100%).
**Neural Cleanse**. As a representative trigger generation defense method, Neural Cleanse (NC) [11] assumes that the trigger designed by the adversary is small. Initially, NC optimizes a trigger pattern for each class label via an optimization process. Then, NC uses the Anomaly Index (i.e., based on the Median Absolute Deviation [1]) to detect whether a DNN is backdoored. Similar to the work [11], we consider a DNN backdoored if its anomaly index is larger than 2. To evaluate the resistance of WaveAttack to NC, we conducted corresponding experiments. Figure 8 shows the defense results against NC. Please note that here "clean" denotes clean models trained using benign training datasets, and "backdoored" denotes models backdoored by WaveAttack from the preceding subsections. From this figure, we can find that the anomaly index of WaveAttack is smaller than 2 for all datasets, so WaveAttack can bypass NC detection.
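For reference, the MAD-based anomaly index used by NC can be sketched as follows, given the norms of the reverse-engineered triggers for all class labels; the constant 1.4826 is the usual consistency factor for Gaussian data.

```python
import numpy as np

def anomaly_index(trigger_norms):
    """MAD-based anomaly index as used by NC: deviation of the smallest
    reverse-engineered trigger norm from the median over all class labels."""
    norms = np.asarray(trigger_norms, dtype=np.float64)
    med = np.median(norms)
    mad = 1.4826 * np.median(np.abs(norms - med))  # consistency constant for Gaussian data
    return np.abs(norms.min() - med) / mad         # an index > 2 is treated as backdoored
```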
## Conclusion
Although backdoor attacks on DNNs have attracted increasing attention from adversaries, few existing methods consider both the fidelity of poisoned samples and the latent space simultaneously to enhance their stealthiness. To establish an effective and stealthy backdoor attack against various backdoor detection techniques, this paper proposed a novel frequency-based method named WaveAttack, which employs DWT to extract high-frequency features from samples for backdoor trigger generation. Based on our proposed frequency obfuscation method, WaveAttack maintains high effectiveness and stealthiness, thus remaining undetectable by both human inspection and backdoor detection mechanisms. Furthermore, we introduced an asymmetric frequency obfuscation method to improve the impact of triggers and further enhance the effectiveness of WaveAttack. Comprehensive experimental results show that, compared with various SOTA backdoor attack methods, WaveAttack not only achieves higher stealthiness and effectiveness but also minimizes the impact on image quality across four well-known datasets.
Figure 8: Defense performance against NC.
Figure 6: STRIP normalized entropy of WaveAttack.
Figure 7: ASR comparison against Fine-Pruning. |
2305.17346 | Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient
In-Memory Computing | Spiking Neural Networks (SNNs) have recently attracted widespread research
interest as an efficient alternative to traditional Artificial Neural Networks
(ANNs) because of their capability to process sparse and binary spike
information and avoid expensive multiplication operations. Although the
efficiency of SNNs can be realized on the In-Memory Computing (IMC)
architecture, we show that the energy cost and latency of SNNs scale linearly
with the number of timesteps used on IMC hardware. Therefore, in order to
maximize the efficiency of SNNs, we propose input-aware Dynamic Timestep SNN
(DT-SNN), a novel algorithmic solution to dynamically determine the number of
timesteps during inference on an input-dependent basis. By calculating the
entropy of the accumulated output after each timestep, we can compare it to a
predefined threshold and decide if the information processed at the current
timestep is sufficient for a confident prediction. We deploy DT-SNN on an IMC
architecture and show that it incurs negligible computational overhead. We
demonstrate that our method only uses 1.46 average timesteps to achieve the
accuracy of a 4-timestep static SNN while reducing the energy-delay-product by
80%. | Yuhang Li, Abhishek Moitra, Tamar Geller, Priyadarshini Panda | 2023-05-27T03:01:27Z | http://arxiv.org/abs/2305.17346v1 | # Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient In-Memory Computing
###### Abstract
Spiking Neural Networks (SNNs) have recently attracted widespread research interest as an efficient alternative to traditional Artificial Neural Networks (ANNs) because of their capability to process sparse and binary spike information and avoid expensive multiplication operations. Although the efficiency of SNNs can be realized on the In-Memory Computing (IMC) architecture, we show that the energy cost and latency of SNNs scale linearly with the number of timesteps used on IMC hardware. Therefore, in order to maximize the efficiency of SNNs, we propose input-aware Dynamic Timestep SNN (DT-SNN), a novel algorithmic solution to dynamically determine the number of timesteps during inference on an input-dependent basis. By calculating the entropy of the accumulated output after each timestep, we can compare it to a predefined threshold and decide if the information processed at the current timestep is sufficient for a confident prediction. We deploy DT-SNN on an IMC architecture and show that it incurs negligible computational overhead. We demonstrate that our method only uses 1.46 average timesteps to achieve the accuracy of a 4-timestep static SNN while reducing the energy-delay-product by 80%.
Spiking neural networks, in-memory computing, dynamic inference
## I Introduction
Deep learning has revolutionized many challenging computational tasks such as computer vision and natural language processing [10] using Artificial Neural Networks (ANNs). These successes, however, have come at the cost of tremendous computing resources and high latency [6]. Over the past decade, Spiking Neural Networks (SNNs) have gained popularity as an energy-efficient alternative to ANNs [14, 17]. SNNs are different from ANNs in that they process inputs over a series of timesteps, whereas ANNs infer over what can be considered a single timestep. The biologically-plausible neurons in SNNs maintain a variable called membrane potential, which controls the behavior of the SNN over a series of timesteps. When the membrane potential exceeds a certain threshold, the neuron fires, creating a spike, and otherwise, the neuron remains inactive (neuron outputs a 0 or 1). Such spike-based computing creates sparsity in the computations and replaces multiplications with additions.
Although the binary spike nature of SNNs eliminates the need for multiplications, compared to ANNs, SNNs require significantly more memory access due to multi-timestep computations on traditional von-Neumann architectures (called the "memory wall problem") [21]. To alleviate this problem, In-Memory Computing (IMC) hardware is used to perform analog dot-product operations to achieve high memory bandwidth and compute parallelism [15]. In this work, we mainly focus on achieving lower energy and latency in the case of IMC-implemented SNNs while maintaining iso-accuracy.
Fig. 1(A) shows a component-wise energy distribution for the CIFAR10-trained VGG-16 network on 64\(\times\)64 4-bit RRAM IMC architecture. Among these, the digital peripherals (containing crossbar input switching circuits, buffers, and accumulators) entail the highest energy cost (45%). The IMC-crossbar and analog-to-digital converter (ADC) consumes the second highest energy (25%). Efforts to lower energy consumption have been made by previous works. As an example, prior IMC-aware algorithm-hardware co-design techniques [13, 22], have used pruning, quantization to reduce the ADC and crossbar energy, and area cost. However, in the case of SNNs, the improvement is rather limited because the crossbar and ADC only occupy 25% of the overall energy cost.
Unlike ANNs, the number of timesteps in SNNs plays an important role in hardware performance, which is orthogonal to data precision or sparsity. In Fig. 1(B) we investigate how timesteps affect the energy consumption and latency of an SNN. Note that both metrics are normalized to the performance of a 1-timestep SNN. We find that both energy consumption and latency scale linearly with the number of timesteps, up to \(4.9\times\) more energy and \(8\times\) more latency when changing the number of timesteps from 1 to 8. More importantly, if one can reduce the number of timesteps in SNNs, then all parts in Fig. 1(A) can benefit from the energy and latency savings. These findings highlight the tremendous potential to optimize SNNs' performance on IMC hardware.
In fact, [3, 8, 12] have explored ways to reduce the number of timesteps from an algorithmic perspective. They
Fig. 1: Energy estimation on our IMC architecture using VGG-16 on CIFAR-10 dataset. (A) energy ratio of each unit, (B) energy/latency vs. timesteps.
all train an SNN with a high number of timesteps first and then finetune the model with fewer timesteps later. However, their method decreases the number of timesteps for all input samples, thereby inevitably leading to an accuracy-timestep trade-off. In this paper, we tackle this problem with another solution. _We view the number of timesteps during inference as a variable conditioned on each input sample._ We call our method Dynamic Timestep Spiking Neural Network (DT-SNN) as it varies the number of timesteps based on each input sample. In particular, we use entropy thresholding to determine the appropriate number of timesteps for each sample. To further optimize our algorithm in practice, we design a new training loss function and implement our algorithm on an IMC architecture.
The main contributions of our work are summarized below:
1. To the best of our knowledge, this is the first work that changes the number of timesteps in SNNs based on the input, reducing computational overhead and increasing inference efficiency without compromising task performance.
2. To achieve that goal, we propose using entropy thresholding to distinguish the number of timesteps required. Meanwhile, we also provide a new training loss function and an IMC implementation of our method.
3. Extensive experiments are carried out to demonstrate the efficacy and efficiency of DT-SNN. For example, the DT-SNN ResNet-19 achieves the same accuracy as the 4-timestep SNN ResNet-19 with only an average of 1.27-timestep on the CIFAR-10 dataset, reducing 84% energy-delay-product.
## II Preliminaries
We start by introducing the basic background of SNNs. We denote the overall spiking neural network as a function \(f_{T}(\mathbf{x})\) (\(\mathbf{x}\) is the input image), its forward propagation can be formulated as
\[\mathbf{y}=f_{T}(\mathbf{x})=\frac{1}{T}\sum_{t=1}^{T}h\circ g^{L}\circ g^{L-1}\circ g ^{L-2}\circ\cdots g^{1}(\mathbf{x}), \tag{1}\]
where \(g^{\ell}(\mathbf{x})=\mathrm{LIF}(\mathbf{W}^{\ell}\mathbf{x})\) denotes the \(\ell\)-th block. A block contains a convolutional layer, a leaky integrate-and-fire (LIF) layer, and an optional normalization layer placed in between the former two layers [23]. \(L\) represents the total number of blocks in the network and \(h(\cdot)\) denotes the final linear classifier. In this work, we use the direct encoding method, i.e., using \(g^{1}(\mathbf{x})\) to encode the input tensor into spike trains, as done in recent SNN works [20]. To get the final prediction, we repeat the inference process \(T\) times and average the output from the classifier.
SNNs emulate the biological neurons using LIF layers. For each timestep, the input current charges the membrane potential \(\mathbf{u}\) in the LIF neurons. When the membrane potential exceeds a certain threshold, a spike \(\mathbf{s}\) will be fired to the next layer, given by
\[\mathbf{u}^{\ell}[t+1]=\tau\mathbf{u}^{\ell}[t]+\mathbf{W}^{\ell}\mathbf{s}^{\ell}[t], \tag{2}\]
\[\mathbf{s}^{\ell+1}[t+1]=\begin{cases}1&\text{if }\mathbf{u}^{\ell}[t+1]>V_{th}\\ 0&\text{otherwise}\end{cases}, \tag{3}\]
where \(\tau\in(0,1]\) is the leaky factor, mimicking the potential decay. If a spike is fired, the membrane potential will be reset to 0, _i.e._\((\mathbf{u}[t+1]=\mathbf{u}[t+1]*(1-\mathbf{s}[t+1]))\).
In the spiking neurons, all functions except the spike firing function (Eq. (3)) can be normally differentiated. The firing function generates a 0 gradient if \(\mathbf{u}^{\ell}[t+1]\neq V_{th}\), otherwise, it generates a gradient with infinity. This impedes the gradient-based optimization in SNNs. To overcome this problem, we leverage the surrogate gradient training method [19]. Specifically, in the forward propagation, we keep the original LIF neuron dynamics, while in the backward propagation, we use another function:
\[\frac{\partial\mathbf{s}^{\ell}[t]}{\partial\mathbf{u}^{\ell}[t]}=\max(0,V_{th}-|\mathbf{u }^{\ell}[t]-V_{th}|) \tag{4}\]
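A minimal PyTorch sketch of Eqs. (2)-(4) is given below: the forward pass keeps the hard threshold of Eq. (3), while the backward pass substitutes the surrogate of Eq. (4). The leaky factor and threshold values are placeholders, not the paper's tuned settings.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; Eq. (4) surrogate in the backward."""
    @staticmethod
    def forward(ctx, u, v_th):
        ctx.save_for_backward(u)
        ctx.v_th = v_th
        return (u > v_th).float()

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        # Triangular surrogate: max(0, V_th - |u - V_th|), Eq. (4)
        surrogate = torch.clamp(ctx.v_th - (u - ctx.v_th).abs(), min=0.0)
        return grad_out * surrogate, None

def lif_step(u, x, w, tau=0.5, v_th=1.0):
    """One timestep of Eqs. (2)-(3): charge, fire, and hard reset."""
    u = tau * u + x @ w                  # membrane potential update, Eq. (2)
    s = SurrogateSpike.apply(u, v_th)    # spike if u > V_th, Eq. (3)
    u = u * (1.0 - s)                    # reset membrane potential after firing
    return u, s
```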
## III Methodology
In this section, we first introduce the algorithm for our work. Then we demonstrate the hardware implementation of DT-SNN.
### _Dynamic Timestep Spiking Neural Network_
Because spikes in an SNN are sparse and binary, the number of timesteps controls the density of information inside the SNN. Generally, more timesteps help SNNs explore more temporal information and thus achieve higher task performance. Fig. 2 demonstrates that when the number of timesteps of an SNN VGG-16 is increased from 1 to 4 during inference, the accuracy increases as well. Together with the hardware performance shown in Fig. 1(B), the number of timesteps \(T\) controls a trade-off between hardware performance and task performance for an SNN model \(f_{T}(\cdot)\).
Unlike the conventional approach where \(T\) is selected and fixed for all images, we propose a conditional probability of selecting \(T\) for different input \(\mathbf{x}\). We call our method the Dynamic Timestep Spiking Neural Network (DT-SNN). More concretely, denote \(\mathbb{P}(T|\mathbf{x})\) as the conditional probability of \(T\) with respect to \(\mathbf{x}\), DT-SNN is given by
\[f_{\widehat{T}\sim\mathbb{P}(T|\mathbf{x})}=\frac{1}{\widehat{T}}\sum_{t=1}^{ \widehat{T}}h\circ g^{L}\circ g^{L-1}\circ g^{L-2}\circ\cdots g^{1}(\mathbf{x}). \tag{5}\]
DT-SNN allows allocating a different number of timesteps for each input sample. As seen in Fig. 2, we find that the majority
Fig. 2: The impact of the number of timesteps on the accuracy. We test the spiking VGG-16 on three datasets (CIFAR10, CIFAR100, TinyImageNet).
of samples can be correctly classified with fewer timesteps. For example, on the CIFAR-100 dataset, 69.39% of overall test data can be correctly predicted using only 2 timesteps. Yet only 2.9% of test data needs full timesteps (\(T=4\)) to get the right prediction. If we compare the hardware performance, the 4-timestep model brings 86% more energy consumption and 100% more latency than the 2-timestep model. This observation is also applicable to other datasets like CIFAR10 and TinyImageNet.
Choosing the Right \(T\). Our objective in DT-SNN is to reduce the unnecessary timesteps as much as possible while not compromising accuracy. However, finding the appropriate timestep for each input data is non-trivial. In this work, we use entropy to determine \(T\). Formally, given a dataset that has \(K\) classes, the prediction probability \(\pi(\mathbf{y}|\mathbf{x})\) is calculated by the Softmax function \((\sigma(\cdot))\), given by
\[\pi(\mathbf{y}_{i}|\mathbf{x})=\sigma_{\mathrm{i}}(f(\mathbf{x}))=\frac{\exp(f(\mathbf{x})_{i} )}{\sum_{j=1}^{K}\exp(f(\mathbf{x})_{j})}, \tag{6}\]
where \(\pi(\mathbf{y}_{i}|\mathbf{x})\) is the probability of predicting \(i\)-th class. The entropy can be further calculated by
\[E_{f}(\mathbf{x})=-\frac{1}{\log K}\sum_{i=1}^{K}\pi(\mathbf{y}_{i}|\mathbf{x})\log\pi(\bm {y}_{i}|\mathbf{x}). \tag{7}\]
Here, \(\log K\) ensures the final entropy is normalized to \((0,1]\). The entropy measures the state of uncertainty. For instance, if all classes have an equal probability of \(\frac{1}{K}\), the entropy will become 1, meaning the current state is completely random and uncertain. Instead, if one class's probability is approaching 1 while others are approaching 0, the entropy moves towards 0, indicating the state is becoming certain.
Generally, the prediction accuracy is highly correlated with entropy. If the model is certain about an input (low entropy), the prediction is highly likely to be correct, and vice versa [5]. Therefore, we select \(T\) once the entropy is lower than a pre-defined threshold \(\theta\), given by
\[\hat{T}(\mathbf{x})=\operatorname*{arg\,min}_{\hat{T}}\{E_{f_{\hat{T}}}(\mathbf{x})<\theta\,|\,1\leq\hat{T}<T\}\cup\{T\}. \tag{8}\]
Here, \(\hat{T}\) is selected as the lowest timestep whose entropy falls below \(\theta\). If no timestep produces a sufficiently confident output, the SNN uses the maximum number of timesteps, _i.e._, \(T\).
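The following sketch shows this entropy-thresholded inference loop (Eqs. (6)-(8)) for a single input, assuming a hypothetical `step_fn(x, t)` that returns the classifier logits produced at timestep \(t\).

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def dt_snn_infer(step_fn, x, T=4, theta=0.1, num_classes=10):
    """Dynamic-timestep inference for one input: accumulate per-timestep
    outputs, normalize the entropy of the running average (Eqs. (6)-(7)),
    and exit at the first timestep whose entropy is below theta (Eq. (8))."""
    acc = 0.0
    for t in range(1, T + 1):
        acc = acc + step_fn(x, t)            # accumulate timestep outputs
        probs = F.softmax(acc / t, dim=-1)   # Eq. (6) on the averaged output
        ent = -(probs * probs.clamp_min(1e-12).log()).sum() / math.log(num_classes)
        if ent.item() < theta or t == T:     # exit early once confident
            return probs.argmax(dim=-1), t
```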
Training DT-SNN. Originally, the loss function for training an SNN is the cross-entropy function, given by:
\[\mathcal{L}(\mathbf{x},\mathbf{z})=-\frac{1}{B}\sum_{i=1}^{K}\mathbf{z}_{i}\log\pi(f_{T}( \mathbf{x})_{i}|\mathbf{x}), \tag{9}\]
where \(\mathbf{z}\) is the label vector and \(B\) is the batch size. Although the output from lower timesteps implicitly contributes to \(f_{T}(\mathbf{x})\), there lacks some explicit guidance to them. As shown in Fig. 2, the accuracy in the first timestep is always low. Here, we propose to explicitly add a loss function to each timestep output. The new loss function is defined as:
\[\mathcal{L}(\mathbf{x},\mathbf{z})=-\frac{1}{TB}\sum_{t=1}^{T}\sum_{i=1}^{K}\mathbf{z}_{i}\log\pi(f_{t}(\mathbf{x})_{i}|\mathbf{x}). \tag{10}\]
In practice, we find this loss function does not noticeably change the training time on GPUs. We will also demonstrate that adding this loss function benefits the accuracy of the outputs at all timesteps, further improving our DT-SNN.
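A sketch of the timestep-wise loss of Eq. (10) is given below, under the same hypothetical `snn` interface as above; note that `F.cross_entropy` already averages over the batch size \(B\).

```python
import torch.nn.functional as F


def dt_snn_loss(snn, x, z, T=4):
    """Cross-entropy applied to the output of every timestep, Eq. (10)."""
    snn.reset()
    acc, loss = 0.0, 0.0
    for t in range(1, T + 1):
        acc = acc + snn(x)
        f_t = acc / t                           # t-timestep output f_t(x)
        loss = loss + F.cross_entropy(f_t, z)   # averages over the batch B
    return loss / T                             # average over all T timesteps
```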
**Relation to Early Exit in ANN:** Conceptually, our DT-SNN is similar to the early-exit technique in ANNs [18, 1], which adds multiple exits to different layers. Here, we clarify the relation between DT-SNN and early exit: (1) DT-SNN operates in the time dimension; it naturally fits SNNs and does not require any additional layers, while early exit has to add classifier layers to each branch; (2) DT-SNN has higher potential than early exit: in the experiments section, we will show that the majority of examples can use only the first timestep, whereas the first exit in ANNs classifies only a marginal fraction of examples. Furthermore, DT-SNN is fully complementary to early exit; that is, one can further add the early-exit technique to an SNN to achieve even higher efficiency.
### _Hardware Implementation_
We implement DT-SNN on a tiled-monolithic chip architecture [2] as shown in Fig. 3a. First, the individual layers of an SNN are mapped onto tiles. The number of tiles occupied by an SNN layer depends on factors such as the crossbar size, the number of input and output channels in a layer, the kernel size, and the number of crossbars per tile. To implement DT-SNN-specific functionality, we incorporate the following modifications into conventionally used architectures: 1) a digital \(\sigma-E\) module to jointly compute the softmax (\(\sigma(\cdot)\)) and entropy (\(E\)), followed by threshold (\(\theta\)) comparison to decide whether to exit; 2) timesteps are processed sequentially without pipelining, which eliminates the delay and hardware overhead (energy and area cost) required to empty the pipeline in case of dynamic timestep inference. The tiles additionally contain global accumulators (GA) and global buffers (GB) for accumulating partial sums and storing the intermediate outputs, respectively, from different processing elements (PE). At the tile level, all modules are connected via a Network-on-Chip (NoC) interconnect. Each tile consists of several PEs, accumulators, and buffers. Each PE contains several crossbars, accumulators, and buffers. The PEs and crossbars are connected by an H-Tree interconnect. We use a standard 2D-IMC crossbar connected to peripherals such as a switch matrix, multiplexers, analog-to-digital converters (ADCs), and Shift-\&-Add circuits. The switch matrix provides input voltages at the source lines (SL) while simultaneously activating the word lines (WL). The voltages accumulate over the bit lines (BL). The analog MAC value (partial sum output) is converted to a digital value using the ADC. Multiplexers enable sharing of ADCs and Shift-\&-Add circuits among multiple crossbar columns to reduce the area overheads. The digital partial sum outputs from different crossbars, PEs, and tiles are accumulated using the PE, tile, and global accumulators, respectively. For all layers except the last, the final MAC outputs from the GA are transferred to the LIF module for the non-linear activation functionality. The spike outputs are relayed to the tiles mapping the subsequent layers.
Fig. 3: Figure showing (a) monolithic-tiled IMC architecture implementation of an SNN and (b) architecture of the \(\sigma-E\) module for softmax and entropy value computation.
For the last layer (a fully connected layer) the GA accumulated MAC output is directed to the \(\sigma-E\) module as shown in Fig. 3a (using red arrows). Inside the \(\sigma-E\) module MAC outputs are stored in the \(y\)-FIFO buffer. The depth of \(y\)-FIFO depends on the dataset. For example, in CIFAR10, the FIFO depth is 10. Data from the \(y\)-FIFO is passed to the address lines of the \(\sigma\)-LUT to compute \(\sigma\) which are pushed into the \(\sigma\)-FIFO. The \(\sigma\)-FIFO outputs are sent as inputs to the Entropy Module that contains LUT for \(\log(\sigma)\) computation. The Entropy Module additionally contains a multiplier and accumulator circuit (comprised of an adder and register) to implement the entropy computation using Eq. (7). If the computed entropy is less than the threshold \(\theta\), the inference is terminated and new input data is loaded into the GB. DT-SNN is implemented on the IMC architecture using parameters shown in Table I.
**Energy Consumption of the \(\sigma-E\) module:** Based on 32nm CMOS implementations, we find that the energy consumed by the \(\sigma-E\) module for one timestep is merely \(2\times 10^{-5}\times\) the 1-timestep inference energy consumed by the IMC architecture, which is negligible.
## IV Experimental Results
In this section, we present the evaluation of both the task performance and the hardware performance of DT-SNN, highlighting its extreme efficacy and efficiency.
### _Comparison with Static SNN_
We select 4 popular visual benchmarks for evaluation: the CIFAR-10 [9], CIFAR-100 [9], TinyImageNet [4], and CIFAR10-DVS [11] datasets. For the architecture, we choose VGG-16 [16] and ResNet-19 [7]. We compare our DT-SNN with the static SNN, _i.e.,_ an SNN that uses a fixed number of timesteps for all inputs. The training methods for the static SNN and the DT-SNN are kept the same, except that the static SNN uses Eq. (9) as the training loss while DT-SNN uses Eq. (10). We train them with a batch size of 256 and a learning rate of 0.1 followed by a cosine decay. The L2 regularization is set to 0.0005. The number of timesteps is set to 4 as in existing work [23]. For task metrics, we report the top-1 accuracy on the visual benchmarks. As for hardware metrics, we measure them based on the parameters shown in Table I and further normalize them w.r.t. static SNNs. We report the number of timesteps (\(T\)), energy, and energy-delay-product (EDP). Note that the cost of DT-SNN varies across input data; we therefore average the hardware metrics over the test dataset.
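For concreteness, this optimization setup could be configured in PyTorch as below; the choice of SGD with momentum, the placeholder model, and the epoch count are our assumptions, since the text specifies only the batch size, learning rate, cosine decay, and L2 regularization.

```python
import torch

model = torch.nn.Linear(3 * 32 * 32, 10)  # stand-in for spiking VGG-16/ResNet-19
num_epochs = 300                           # assumption: not stated in the text

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)
```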
#### IV-A1 Comparison of Accuracy, Energy Cost, and \(T\)
We summarize the results on the 4 datasets in Table II. Here, we test the static SNN with the full number of timesteps, _i.e.,_\(T=4\), and compare its hardware performance with DT-SNN at a similar accuracy level. We find that DT-SNN needs only 1.46 timesteps on average on the CIFAR-10 dataset, bringing more than 50% energy savings. For the other three datasets, DT-SNN requires roughly half the number of timesteps used in a static SNN model. Overall, DT-SNN reduces the energy cost by at least 40% compared to the static SNN.
Fig. 4: Comparison between static SNN and DT-SNN in terms of Energy-Delay-Product (EDP) (normalized to the static SNN).
#### IV-A2 Comparison of EDP
We next compare the EDP of static SNNs and DT-SNNs. EDP is better suited for measuring holistic hardware performance because it considers both time and energy efficiency. Fig. 4 shows the EDP comparison normalized by the EDP of the static SNN. We find that DT-SNN is extremely efficient, reducing the EDP of static SNNs by 61.2%\(\sim\)80.9%. These results highlight the efficiency brought by our method, which reduces both energy cost and latency.
#### IV-A3 Accuracy vs. EDP curve
The static SNN can adjust the number of timesteps for all inputs to balance accuracy and efficiency. Our DT-SNN can likewise adjust the threshold \(\theta\) to balance this trade-off. Here, we draw an accuracy-EDP curve in Fig. 5. _Note that here the EDP is normalized to the EDP of the 1-timestep static SNN._ We evaluate the static SNN at 1, 2, 3, and 4 timesteps, and evaluate DT-SNN using three different thresholds. It can be seen that DT-SNN sits in the top-left corner, indicating a better accuracy-EDP trade-off than the static SNN. Remarkably, DT-SNN brings significant improvement in low-timestep scenarios. For instance, DT-SNN VGG-16 increases the accuracy by 17% compared to the 1-timestep static counterpart on the CIFAR-10 dataset, while incurring only \(\sim\)10% higher EDP.
To further visualize the dynamic timesteps in our method, Fig. 5 provides three pie charts for each case, showing the percentage of input examples inferred with 1, 2, 3, or 4 timesteps. Notably, \(T=1\) is usually the most selected case, owing to the fact that most input examples can be correctly predicted using only 1 timestep. As the threshold decreases, more images start to use higher timesteps. Overall, we find that \(T=3\) and \(T=4\) are rarely used in DT-SNN, demonstrating the effectiveness of our method at reducing redundant timesteps.
#### IV-A4 Comparison with Prior Work
Here, we also compare our method with prior work on SNNs. We compare tdBN [23] and Dspike [12] with our static SNN and DT-SNN trained with Eq. (10). Fig. 6(A) shows the accuracy under different \(T\). Our DT-SNN reaches a new state of the art.
#### IV-A5 Non-Ideal Accuracy with Device Variations
So far, all experiments have been performed without considering device conductance variation. In Fig. 6(B), we compare DT-SNN and the static SNN under 20% device conductance variation, which we simulate by adding noise to the weights post-training. We see that DT-SNN still maintains higher accuracy while eliminating redundant timesteps compared to the static SNN.
### _Acceleration in General Processors_
In the previous section, we demonstrated that our DT-SNN can accelerate inference on an IMC architecture. Here, we show that, apart from the IMC architecture we simulated, our method is also applicable to other types of hardware such as digital processors. To this end, we measure the inference throughput (_i.e.,_ the number of images inferred per second) on an RTX 2080Ti GPU in PyTorch using batch size 1. Table III lists the accuracy, (averaged) timesteps, and throughput. As can be seen from the table, the throughput on the GPU drops significantly as the number of timesteps increases.
Fig. 5: Accuracy vs. EDP curve for both static SNN (drawn by pink line) and DT-SNN (drawn by blue line). For the static SNN, we report the accuracy and EDP when \(T=\{1,2,3,4\}\). For the DT-SNN, we draw the pie charts illustrating the distribution of \(\hat{T}(\mathbf{x})\).
Fig. 6: Accuracy vs. the number of timesteps. (A) Comparison with prior works, (B) Comparison under non-ideal (NI) device variation in the IMC.
Compared to static SNNs, our DT-SNN substantially improves the throughput without sacrificing accuracy. For example, DT-SNN ResNet-19 with 1.07 averaged timesteps can infer 169.3 images per second, quite close to the 1-timestep static SNN (185.3 images per second), while still bringing a 3.4% accuracy improvement.
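Such a throughput measurement can be reproduced with a simple timing loop; the `snn` interface is the same hypothetical one used in the earlier sketches, and CUDA is assumed to be available.

```python
import time

import torch


@torch.no_grad()
def throughput(snn, images, T=4):
    """Images inferred per second at batch size 1 (static-SNN timing)."""
    torch.cuda.synchronize()
    start = time.perf_counter()
    for x in images:           # each x: one image batch of shape (1, C, H, W)
        snn.reset()
        out = 0.0
        for _ in range(T):     # for DT-SNN, break early as in dt_snn_infer
            out = out + snn(x)
    torch.cuda.synchronize()
    return len(images) / (time.perf_counter() - start)
```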
### _Ablation Study_
In this section, we ablate the choice of training loss function. To this end, we train a static SNN VGG-16 on CIFAR-10 either with Eq. (9) or with Eq. (10) and test the corresponding accuracies of both the static SNN and DT-SNN. Fig. 7 shows the comparison between these two loss functions. We find that our training loss boosts the accuracy at all timesteps in the static SNN. In particular, the first-timestep accuracy of VGG-16 rises from 76.3% to 91.5%. This result demonstrates that explicit guidance from label annotations should be added to the lower timesteps. Meanwhile, it also increases the full-timestep performance, resulting in a 0.6% accuracy increase on VGG-16.
This improvement is extremely beneficial to DT-SNN. According to the pie charts describing the distribution of \(\hat{T}\), our training loss function allows the test data to be classified with a smaller number of timesteps, thus reducing the EDP significantly.
### _Visualization_
In this section, we visualize the input images that are differentiated by our DT-SNN. Ideally, we anticipate that DT-SNN can identify whether an image is easy or hard to infer (corresponding to 1 or 4 timesteps, respectively). To maximize the differentiation, we use a low threshold to filter out the high timesteps, so that only the easiest images are classified in the first timestep and vice versa. Fig. 8 presents the results on the TinyImageNet dataset. Generally, we find that the images inferred with 1 timestep exhibit a simple structure: a clear object placed in the center of a clean background. In contrast, hard images require 4 timesteps and usually mix the background and the object together, making the object hard to discern.
## V Conclusion
In this work, we introduce the Dynamic Timestep Spiking Neural Network, a simple yet significantly effective method that selects different timesteps for different input samples. DT-SNN determines the suitable timestep based on the confidence of the model output, seamlessly fitting the sequential processing over time that is natural to SNNs. Moreover, DT-SNN is practical: it can be deployed on IMC architectures and even on general digital processors. Extensive experiments prove that DT-SNN establishes a new state-of-the-art trade-off between hardware efficiency and task performance.
|
2305.09178 | Empirical Analysis of the Inductive Bias of Recurrent Neural Networks by
Discrete Fourier Transform of Output Sequences | A unique feature of Recurrent Neural Networks (RNNs) is that it incrementally
processes input sequences. In this research, we aim to uncover the inherent
generalization properties, i.e., inductive bias, of RNNs with respect to how
frequently RNNs switch the outputs through time steps in the sequence
classification task, which we call output sequence frequency. Previous work
analyzed inductive bias by training models with a few synthetic data and
comparing the model's generalization with candidate generalization patterns.
However, when examining the output sequence frequency, previous methods cannot
be directly applied since enumerating candidate patterns is computationally
difficult for longer sequences. To this end, we propose to directly calculate
the output sequence frequency for each model by regarding the outputs of the
model as discrete-time signals and applying frequency domain analysis.
Experimental results showed that Long Short-Term Memory (LSTM) and Gated
Recurrent Unit (GRU) have an inductive bias towards lower-frequency patterns,
while Elman RNN tends to learn patterns in which the output changes at high
frequencies. We also found that the inductive bias of LSTM and GRU varies with
the number of layers and the size of hidden layers. | Taiga Ishii, Ryo Ueda, Yusuke Miyao | 2023-05-16T05:30:13Z | http://arxiv.org/abs/2305.09178v1 | Empirical Analysis of the Inductive Bias of Recurrent Neural Networks by Discrete Fourier Transform of Output Sequences
###### Abstract
A unique feature of Recurrent Neural Networks (RNNs) is that it incrementally processes input sequences. In this research, we aim to uncover the inherent generalization properties, i.e., inductive bias, of RNNs with respect to how frequently RNNs switch the outputs through time steps in the sequence classification task, which we call output sequence frequency. Previous work analyzed inductive bias by training models with a few synthetic data and comparing the model's generalization with candidate generalization patterns. However, when examining the output sequence frequency, previous methods cannot be directly applied since enumerating candidate patterns is computationally difficult for longer sequences. To this end, we propose to directly calculate the output sequence frequency for each model by regarding the outputs of the model as discrete-time signals and applying frequency domain analysis. Experimental results showed that Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) have an inductive bias towards lower-frequency patterns, while Elman RNN tends to learn patterns in which the output changes at high frequencies. We also found that the inductive bias of LSTM and GRU varies with the number of layers and the size of hidden layers.
## 1 Introduction
In this research, we aim to uncover the inherent generalization properties, i.e., inductive bias, of RNNs with respect to how frequently RNNs switch the outputs through time steps in the sequence classification task, which we call _output sequence frequency_ (Figure 1).
In supervised learning settings, a model is trained with finite input-output examples \(\{(x_{0},\,y_{0}),\dots,(x_{n},\,y_{n})\}\) and then tested with unseen input-output pairs. The models that achieve high accuracy on test data are often said to "generalize well". However, the important point is that function \(f\) that satisfies \(f(x_{i})=y_{i}\) cannot be uniquely determined by finite train examples. This entails that if a model generalizes well to a certain function \(f\), then the model hardly generalizes to another function \(f^{\prime}\) that has different outputs for the same unseen inputs, i.e., \(f(x_{\mathrm{test}})\neq f^{\prime}(x_{\mathrm{test}})\) but is consistent with the same train examples; \(f^{\prime}(x_{i})=y_{i}\). Therefore, it is crucial to understand what kind of functions a model inherently prefers to learn, which is referred to as **inductive bias**(White and Cotterell, 2021; Kharitonov and Chaabouni, 2020; Deletang et al., 2022; Lovering et al., 2020).
Our target is the Recurrent Neural Network (RNN): a well-known deep learning architecture. A key feature of RNN is that it processes the input incrementally and predicts the output at each time step, producing a sequence of outputs. This is different from other deep learning architectures, e.g., Feed Forward Network (FFN), Convolutional Neural Network (CNN), and Transformers (Vaswani et al., 2017). Due to the incremental processing feature of RNNs, the inputs can be of variable length; RNNs have been used for various tasks in natural language processing, such as sentence classification and text generation. It has also been used as a subcomponent of more complex architectures (Dyer et al., 2016) and to simulate human sequential processing (Steinert-Threlkeld and Szymanik, 2019). Variants of RNN architectures have been proposed so far. The most basic one is the Elman RNN (Elman, 1990). Later, more complex architectures, such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Cho et al., 2014), were proposed to improve the modeling of long-term dependencies.
Figure 1: An example showing a train dataset and two candidate generalization patterns, each showing a different output sequence frequency. Here, "aababba" is the input sequence, and there are four binary train labels \(0,1,1,0\), each corresponding to the prefix of length \(2,3,5,6\).
Although deep learning models, including RNNs, are said to be high-performance models, they are essentially black boxes, and it is not clear what inductive bias they may have. In this research, in order to analyze the inductive bias of RNNs, we propose to calculate the output sequence frequency by regarding the outputs of RNNs as discrete-time signals and applying frequency domain analysis. Specifically, we apply discrete Fourier transform (DFT) to the output signals and compute the dominant frequencies to grasp the overall output patterns.
Inductive bias is not straightforward to analyze since it can be affected by various factors such as the task, dataset, and training method; theoretical analysis has been limited to simple architecture such as FFN (Rahaman et al., 2019; Valle-Perez et al., 2019). Therefore, empirical studies have been conducted to clarify the inductive bias in various tasks and settings, such as language modeling (White and Cotterell, 2021), sequence classification (Lovering et al., 2020), and sequence-to-sequence (Kharitonov and Chaabouni, 2020). These works approached the problems by designing synthetic datasets and testing several generalization patterns. However, when examining the output sequence frequency, we cannot directly apply these previous methods since enumerating exponentially many output sequence patterns in longer sequences is computationally difficult. To this end, our method makes use of frequency domain analysis to directly calculate the output sequence frequencies and avoid enumerating the candidate generalization patterns.
In the experiment, we randomly generated \(500\) synthetic datasets and trained models on a few data points (Figure 1). As a result, we found:
* LSTM and GRU have an inductive bias such that the output changes at lower frequencies compared to Elman RNN, which can easily learn higher frequency patterns,
* The inductive bias of LSTM and GRU varies with the number of layers and the size of hidden layers.
## 2 Background
### Inductive Bias Analysis
Inductive bias analysis is usually performed by constructing synthetic datasets. This is because data from real tasks are complex and intertwined with various factors, making it difficult to determine what properties of the dataset affect the behavior of the model. For example, White and Cotterell (2021) targeted LSTM and Transformer and investigated whether easy-to-learn languages differ depending on their typological features in language modeling. White and Cotterell (2021) used Context Free Grammar (CFG) to construct parallel synthetic language corpora with controlled typological features. They trained models on each language and computed their perplexities to find that LSTM performs well regardless of word order while the transformer is affected. Another more synthetic example is Kharitonov and Chaabouni (2020). Kharitonov and Chaabouni (2020) targeted LSTM, CNN, and Transformer. They designed four synthetic tasks in the sequence-to-sequence framework and trained models on very small datasets (containing 1\(\sim\)4 data points). To examine the inductive biases of the models, they prepared a pair of candidate generalization patterns, such as COUNT and MEMORIZATION, for each task and compared the models' preference over the candidate patterns by calculating the Minimum Description Length (Rissanen, 1978). Using extremely small train datasets makes it possible to restrict the information models can obtain during training and analyze the models' inherent inductive bias in a more controlled setup.
In this research, we take a similar approach to Kharitonov and Chaabouni (2020), restricting the train data to extremely small numbers. However, we cannot directly apply their methods, because the approach of comparing with candidate generalization patterns can be impractical in our case. Specifically, when examining the output sequence frequency, it is necessary to feed the models with longer sequences in order to analyze a wide range of frequencies from low to high; there are exponentially many patterns with the same number of output changes in longer sequences, which makes it difficult to exhaustively enumerate the candidate generalization patterns. Therefore, instead of preparing candidate generalization patterns, we directly calculate the output sequence frequency for each model by regarding the outputs of the model as discrete-time signals and applying frequency domain analysis.
### Frequency Domain Analysis
Discrete Fourier Transform (DFT) is a fundamental analysis technique in digital signal processing. Intuitively, DFT decomposes a signal into a sum of finite sine waves of different frequencies, allowing one to analyze what frequency components the original signal consists of. The DFT for a length \(N\) discrete-time signal \(f[0],\ldots,f[N-1]\) is defined by the following equation:
\[F[k]\;=\;\sum_{n=0}^{N-1}f[n]\exp\left(-\sqrt{-1}\frac{2\pi}{N}kn\right). \tag{1}\]
When \(f[n]\) is a real-valued signal, it is sufficient to consider only \(k\in\{1,\ldots,\frac{N}{2}\}\).1 Here, \(k=1\) corresponds to the lowest frequency component and \(k=\frac{N}{2}\) to the highest.
Footnote 1: This is due to the periodicity of \(\exp(-\sqrt{-1}\frac{2\pi}{N}kn)\). Furthermore, we do not take into account the \(k=0\) term since it is called a DC term and works as an offset.
One useful measure for analyzing the properties of the signal \(f[n]\) is the dominant frequency (Ng and Goldberger, 2007). In short, the dominant frequency is the frequency component of maximum amplitude and is expected to represent the overall periodic pattern of the original signal \(f[n]\). The dominant frequency is \(\omega_{\mathrm{dom}}=\frac{2\pi}{N}k_{\mathrm{max}}\), where \(k_{\mathrm{max}}=\operatorname*{arg\,max}_{k}|F[k]|\).
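As a concrete illustration, the dominant frequency of a real-valued signal can be computed with NumPy as follows; this is our own sketch of Eq. (1) and the definition above, excluding the DC term (\(k=0\)).

```python
import numpy as np


def dominant_frequency(f: np.ndarray) -> float:
    """omega_dom = (2*pi/N) * argmax_k |F[k]| over k = 1, ..., N//2."""
    N = len(f)
    F = np.fft.rfft(f)                    # F[k] for k = 0, ..., N//2 (real input)
    k_max = 1 + np.argmax(np.abs(F[1:]))  # skip the DC term at k = 0
    return 2 * np.pi * k_max / N
```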
## 3 Methods
### Task
To analyze the output sequence frequency, i.e., how frequently the output changes through time steps, we focus on a simple case of binary sequence classification task: the inputs are the prefixes of a binary sequence \(s\in\{\mathrm{a},\mathrm{b}\}^{*}\). Specifically, given a binary sequence \(s\in\{\mathrm{a},\mathrm{b}\}^{*}\), the input space \(\mathcal{I}\) and the output space \(\mathcal{O}\) are defined as follows:
\[\mathcal{I} \;=\;\{s_{0:i}\:|\:i=0,\ldots|s|-1\}, \tag{2}\] \[\mathcal{O} \;=\;\{(1-p,p)\:|\:p\in[0,1]\}, \tag{3}\]
where \(\mathcal{O}\) is a set of categorical distributions over the binary labels \(\{0,1\}\), and \(p\) denotes the probability of predicting label \(1\).
Without loss of generality, we can only consider the model's output probability of predicting label \(1\) for the sequence \(s_{0:i}\), which we denote by \(\mathcal{M}(s_{0:i})\). In this way, we can regard the model's output sequence \(\mathcal{M}(s_{0:0}),\ldots,\mathcal{M}(s_{0:|s|-1})\) as a discrete-time signal taking values in \([0,1]\).
### Train Dataset
Figure 2 shows an intuitive illustration of our dataset construction. Given a sequence \(s\), we randomly generate the binary labels \(y_{0:|s|-1}\), where each \(y_{i}\) is the label assigned to the prefix \(s_{0:i}\). When two successive labels \(y_{i}\) and \(y_{i+1}\) differ, we say there is a _label change_ (e.g., \(y_{9}\) and \(y_{10}\) in Figure 2).2 We then make a train dataset \(\mathcal{D}\) by taking instances where the labels change: \(\{(s_{0:i},y_{i}),(s_{0:i+1},y_{i+1})\:|\:y_{i}\neq y_{i+1}\}\). For example, in Figure 2, the train data \(\mathcal{D}\) contains \(\{(\mathrm{aa},0)\:(\mathrm{aab},1)\:(\mathrm{aababba},1)\:(\mathrm{aababba},0 ),\ldots\}\). Note that the original labels \(y_{0:|s|-1}\) can be uniquely recovered from \(\mathcal{D}\) simply by _interpolating_ or _extending_ the labels for other prefixes.
Footnote 2: Similarly, we use _output change_ for output sequences.
The procedure is formalized as follows:
1. Sample a sequence \(s\in\{\mathrm{a},\mathrm{b}\}^{N}\), where \(N\) is the length of the sequence,
2. Sample the number of label changes \(m\in\{1,\ldots,M\}\), where \(M\) is the maximum number of label changes,
3. Sample the labels \(y_{0:|s|-1}\) so that all the \(m\) label changes do not overlap3, i.e., \(\forall i,j.\ i<j\wedge y_{i}\neq y_{i+1}\wedge y_{j}\neq y_{j+1}\Rightarrow i+1<j\),
4. Create a dataset as \[\mathcal{D}\ =\ \{(s_{0:i},y_{i}),(s_{0:i+1},y_{i+1})\,|\,y_{i}\neq y_{i+1}\}.\]
Footnote 3: This condition ensures that the labels in the train dataset are balanced.
Figure 2: Illustration of train dataset construction. The train dataset contains only the instances corresponding to the label changes.
By training models on random input sequences \(s\), we expect the model predictions to represent the inherent generalization property of the model.
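A minimal sketch of this four-step procedure is shown below; the function name and the rejection-sampling strategy for enforcing non-overlapping label changes are our own illustrative choices.

```python
import random


def make_dataset(N=100, M=5, seed=0):
    """Steps 1-4 of the dataset construction described above."""
    rng = random.Random(seed)
    s = "".join(rng.choice("ab") for _ in range(N))            # step 1
    m = rng.randint(1, M)                                      # step 2
    while True:                                                # step 3 (rejection)
        changes = sorted(rng.sample(range(N - 1), m))          # i with y_i != y_{i+1}
        if all(b - a > 1 for a, b in zip(changes, changes[1:])):
            break
    y = [rng.randint(0, 1)]
    for i in range(1, N):
        y.append(1 - y[-1] if (i - 1) in changes else y[-1])
    # step 4: keep only the prefixes adjacent to a label change
    D = [(s[: i + 1], y[i]) for i in range(N - 1) if y[i] != y[i + 1]]
    D += [(s[: i + 2], y[i + 1]) for i in range(N - 1) if y[i] != y[i + 1]]
    return s, y, D
```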
### Evaluation Metrics
For the analysis, we apply two evaluation metrics.
#### 3.3.1 Test Cross-entropy Loss
First, we compare the model's output sequence \(\mathcal{M}(s_{0:0}),\ldots,\mathcal{M}(s_{0:|s|-1})\) with the original labels \(y_{0:|s|-1}\) by calculating test cross-entropy loss \(\mathcal{L}_{\mathrm{CE}}\). Intuitively, near-zero \(\mathcal{L}_{\mathrm{CE}}\) indicates that the model generalizes to simply _interpolate_ or _extend_ the training labels since we constructed the train datasets so that the original labels can be recovered by interpolation, as described in section 3.2.
The loss is formalized as:
\[\mathcal{L}_{\mathrm{CE}}=-\frac{1}{|\mathcal{T}|}\sum_{i\in\mathcal{T}}\Big(y_{i}\ln\big(\mathcal{M}(s_{0:i})\big)+(1-y_{i})\ln\big(1-\mathcal{M}(s_{0:i})\big)\Big), \tag{4}\]
where \(\mathcal{T}=\{i\,|\,(s_{0:i},\_)\notin\mathcal{D}\}\) is the set of test data indices.
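Concretely, Eq. (4) is an average binary cross-entropy over the held-out prefixes; the small epsilon in the sketch below is a numerical guard of our own and is not part of Eq. (4).

```python
import numpy as np


def test_ce_loss(p, y, train_idx):
    """Eq. (4): p[i] = M(s_{0:i}), y[i] the true label, train_idx = indices in D."""
    test = [i for i in range(len(y)) if i not in train_idx]
    eps = 1e-12
    return -float(np.mean([y[i] * np.log(p[i] + eps)
                           + (1 - y[i]) * np.log(1 - p[i] + eps) for i in test]))
```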
#### 3.3.2 Dominant Frequency
In case \(\mathcal{L}_{\mathrm{CE}}\) is high, we consider the model's output sequence \(\mathcal{M}(s_{0:0}),\ldots,\mathcal{M}(s_{0:|s|-1})\) as a discrete-time signal and apply frequency domain analysis to look into the model's behavior. More specifically, we apply DFT to the output signal and obtain the dominant frequency \(\omega_{\mathrm{dom}}\). The dominant frequency \(\omega_{\mathrm{dom}}\) is calculated by simply replacing \(f[n]\) in Equation 1 with \(\mathcal{M}(s_{0:n})\).
### Experiment Settings
Here, we describe the basic settings of our experiment. We use well-known basic RNN architectures: LSTM [16], GRU [10], and Elman RNN [1]. For the decoding, we use a linear decoder without bias followed by a softmax function. We try 4 combinations of hyperparameters: \((\textit{num\_layers},\ hidden\_size)\in\{(1,200),\ (2,200),\ (3,200),\ (2,2000)\}\), where \(\textit{num\_layers}\) denotes the number of layers, and \(\textit{hidden\_size}\) denotes the size of hidden layers.4
Footnote 4: For other hyperparameters and parameter initialization, we used the default settings of PyTorch [https://pytorch.org/](https://pytorch.org/).
For optimization, we train models to minimize the average cross-entropy loss by gradient descent using Adam [11] with a learning rate of \(1.0\times 10^{-4}\) for \(1000\) epochs.5
Footnote 5: Since the maximum size of train data is 10 in our settings, all the data are put in a batch during the training.
Finally, we randomly generate \(500\) train datasets with \(N\,=\,100,M\,=\,5\) and train \(10\) models with different random seeds for each dataset, architecture, and parameter setting. Note that this sparse setting (\(10:90\) train-test data ratio at maximum) keeps the hypothesis space large and thus enables us to analyze the inductive bias of the models as described in section 2.1.
Training all the models took around 30 hours using 8 NVIDIA A100 GPUs.
## 4 Findings
### Models Do Not Learn to Interpolate
In order to see if the models generalize simply to interpolate the given labels, we calculate the median test cross-entropy loss of the multiple models trained for each dataset (Figure 3). The dotted vertical line shows the random baseline loss of \(-\ln(\frac{1}{2})\approx 0.7\). As can be seen in Figure 3, the median test cross-entropy loss is higher than the random baseline for most datasets for all of LSTM, GRU, and Elman RNN. This indicates that, in most cases, none of LSTM, GRU, or Elman RNN learns to interpolate in this extremely simple setup, where only the label-changing part is given as training data. We also observe a similar trend in other hyperparameter settings; the test cross-entropy losses for other settings are shown in Appendix A.
Figure 3: The median test cross-entropy loss counts for LSTM, GRU, and Elman RNN with \((\textit{num\_layers},\ hidden\_size)=(2,200)\). The dotted vertical line shows the random baseline loss of \(-\ln(\frac{1}{2})\).
### Architectural Difference
Now that the test cross-entropy loss has revealed that the patterns learned by the models contain more output changes than the original pattern in the train data, the next step is to see if there are any architecture-specific trends in the output sequence patterns. We calculate the dominant frequency for each model and take the median over the models trained on the same dataset. Figure 4 shows the distribution of median dominant frequencies for LSTM, GRU, and Elman RNN with different hyperparameters. It is clear that, in all settings, LSTM and GRU tend to learn lower-frequency patterns, while the dominant frequencies of Elman RNN tend to be higher. Comparing LSTM and GRU, LSTM has slightly lower-frequency patterns for \(hidden\_size=200\) (Figure 4 (a, b, c)), though the difference is not as clear for \(hidden\_size=2000\) (Figure 4 (d)).
An example of the sequential outputs of LSTM and Elman RNN is shown in Figure 5. The top rows show the actual model outputs for a specific sequence, and the bottom rows show the DFT of the model outputs. In this example, only 4 labels \(0,1,1,0\) are given to the prefixes of length \(60,61,84,85\). It is clear that both LSTM and Elman RNN learn periodic patterns but do not learn to interpolate the given train labels. It is also notable that LSTMs indeed learn lower-frequency patterns compared to Elman RNNs.
Figure 4: The median dominant frequency counts for LSTM, GRU, and Elman RNN with different hyperparameters.
### Effect of Hyperparameters
Here, we describe how hyperparameters affect the observed inductive biases.
#### 4.3.1 Number of Layers
Figure 6 shows the median dominant frequencies of \(num\_layers=1,2,3\) for LSTM, GRU, and Elman RNN. As for LSTM, it can be seen that the proportion of patterns in the lower-frequency domain tends to increase as the number of layers increases. In other words, despite the increased complexity of the models, LSTMs tend to learn simpler patterns (in the sense that the output changes less). A similar trend is observed for GRU, although not as clear as for LSTM. On the other hand, Elman RNN does not show such apparent differences.
#### 4.3.2 Hidden Layer Size
Figure 7 shows the median dominant frequencies of \(hidden\_size=200,2000\) for LSTM, GRU, and Elman RNN. Although the trend is not so clear, for LSTM and GRU, the counts are slightly larger for \(\omega_{\mathrm{dom}}=0.5\sim 1.0\) when \(hidden\_size=2000\), while the counts are larger for \(\omega_{\mathrm{dom}}=0.0\sim 0.5\) when \(hidden\_size=200\). This is rather the opposite trend from that of \(num\_layers\). However, the above trend does not seem to appear in Elman RNN.
## 5 Discussion and Limitation
### Expressive Capacity and Output Sequence Frequency
Our results do not align with the expressive capacity of RNNs reported in previous work (Merrill et al., 2020; Weiss et al., 2018). Merrill et al. (2020) and Weiss et al. (2018) formally showed that LSTM is strictly more expressive than GRU and Elman RNN. On the other hand, in our experiments, LSTM and GRU show a bias toward lower frequencies, while Elman RNN, which has the same expressive capacity as GRU according to Merrill et al. (2020), shows an opposite bias toward higher frequencies. Note that the expressive capacity and the inductive bias of a model are fundamentally different concepts. This is because expressive capacity is the theoretical upper bound on the functions a model can represent with all possible combinations of its parameters, regardless of the training procedure. In contrast, inductive bias is the preference over functions that a model learns from finite train data, possibly depending on training settings. However, they are not entirely unrelated, because a function that is impossible to learn in terms of expressive capacity will never be learned, which can emerge as inductive bias. We conjecture that the difference between the expressive capacity and the observed inductive bias is due to the simplicity of our experimental setting. This difference is not a negative result: it indicates that inductive bias analysis in such a simple setting is effective in observing detailed differences that cannot be captured by expressive capacity alone.
Figure 5: An example of LSTM and Elman RNN with \((num\_layers,\ hidden\_size)=(2,200)\). The top rows show the actual model outputs for a specific sequence, and the bottom rows show the DFT of model outputs. In this example, 4 labels \(0,1,1,0\) are assigned to the prefixes of length \(60,61,84,85\). The red and blue vertical lines correspond to the labels \(0,1\), respectively. The results of 10 models with different random seeds are shown.
Figure 6: The median dominant frequencies of \(num\_layers=1,2,3\) for LSTM, GRU, and Elman RNN with \(hidden\_size=200\).
### Randomness of Outputs
A previous study showed that FFNs hardly learn random functions since they are inherently biased toward simple, structured functions (Valle-Perez et al., 2019). We can find a similar trend for RNNs in our experimental results. In other words, by regarding the outputs of RNNs as discrete-time signals, we can confirm that the signals are not random, i.e., not white noise. If the output signals of the RNNs were random, the dominant frequency would be uniformly distributed from the low- to the high-frequency region. Therefore, the biased distribution in Figure 4 indicates that the outputs of the RNNs are not random signals. This is also clear from the example outputs in Figure 5, where the models show periodic patterns.
### Practical Implication
For LSTM and GRU, we observed different inductive biases between increasing the number of layers and increasing the hidden layer size. A previous study that investigated whether RNNs can learn parentheses also reported that LSTM and GRU behaved differently when the number of layers and the hidden layer size were increased (Bernardy, 2018). Although the tasks are different, our findings align with this previous work. From a practical point of view, these findings suggest that it may be more effective to increase the number of layers than to increase the hidden layer size, depending on the target task.
Besides, the fact that LSTM and GRU, which are known to be "more practical" than Elman RNN, tend to learn lower frequency patterns may support the idea that output sequence frequency aligns with "practical usefulness." Furthermore, a concept similar to output sequence frequency has been proposed as a complexity measure in sequence classification: sensitivity Hahn et al. (2021). While output sequence frequency focuses on the change in output over string length, sensitivity focuses on the change in output when a string is partially replaced, keeping its length. It would be an interesting future direction to examine the validity of inductive biases in output sequence frequency as an indicator of complexity and practical usefulness.
### Limitation
There are some dissimilarities between our experimental setup and practical sequence classification tasks:
* The task is limited to the binary classification of binary sequences,
* Models are trained only on prefixes of a sequence,
* The number of train data is extremely small.
Therefore, in order to accurately estimate the impact of our findings on actual tasks, it is necessary to expand from sequences to languages in a multi-label setting with a larger vocabulary.
Due to the computational complexity, we only tried 4 combinations of hyperparameters. However, it is still necessary to exhaustively try combinations of hyperparameters for a more detailed analysis.
## 6 Conclusion
This study focuses on inductive bias regarding the output sequence frequency of RNNs, i.e., how often RNNs tend to change the outputs through time steps. To this end, we constructed synthetic datasets and applied frequency domain analysis by regarding the model outputs as discrete-time signals.
Experimental results showed that LSTM and GRU have inductive biases towards having low output sequence frequency, whereas Elman RNN tends to learn higher-frequency patterns. Such differences in inductive bias could not be captured by the expressive capacity of each architecture alone. This indicates that inductive bias analysis on synthetic datasets is an effective method for studying model behaviors.
By testing different hyperparameters, we found that the inductive biases of LSTM and GRU vary with the number of layers and the hidden layer size in different ways. This confirms that when increasing the total number of parameters in a model, it would be effective not only to increase the hidden layer size but also to try various hyperparameters, such as the number of layers.
Although the experimental setting was limited to simple cases, we believe this research shed some light on the inherent generalization properties of RNNs and built the basis for architecture selection and design. |
2309.01829 | A Post-Training Approach for Mitigating Overfitting in Quantum
Convolutional Neural Networks | Quantum convolutional neural network (QCNN), an early application for quantum
computers in the NISQ era, has been consistently proven successful as a machine
learning (ML) algorithm for several tasks with significant accuracy. Derived
from its classical counterpart, QCNN is prone to overfitting. Overfitting is a
typical shortcoming of ML models that are trained too closely to the availed
training dataset and perform relatively poorly on unseen datasets for a similar
problem. In this work we study post-training approaches for mitigating
overfitting in QCNNs. We find that a straightforward adaptation of a classical
post-training method, known as neuron dropout, to the quantum setting leads to
a significant and undesirable consequence: a substantial decrease in success
probability of the QCNN. We argue that this effect exposes the crucial role of
entanglement in QCNNs and the vulnerability of QCNNs to entanglement loss.
Hence, we propose a parameter adaptation method as an alternative method. Our
method is computationally efficient and is found to successfully handle
overfitting in the test cases. | Aakash Ravindra Shinde, Charu Jain, Amir Kalev | 2023-09-04T21:46:24Z | http://arxiv.org/abs/2309.01829v2 | Soft-Dropout: A Practical Approach for Mitigating Overfitting in Quantum Convolutional Neural Networks
###### Abstract
Quantum convolutional neural network (QCNN), an early application for quantum computers in the NISQ era, has been consistently proven successful as a machine learning (ML) algorithm for several tasks with significant accuracy. Derived from its classical counterpart, QCNN is prone to overfitting. Overfitting is a typical shortcoming of ML models that are trained too closely to the availed training dataset and perform relatively poorly on unseen datasets for a similar problem. In this work we study the adaptation of one of the most successful overfitting mitigation method, knows as the (post-training) dropout method, to the quantum setting. We find that a straightforward implementation of this method in the quantum setting leads to a significant and undesirable consequence: a substantial decrease in success probability of the QCNN. We argue that this effect exposes the crucial role of entanglement in QCNNs and the vulnerability of QCNNs to entanglement loss. To handle overfitting, we proposed a softer version of the dropout method. We find that the proposed method allows us to handle successfully overfitting in the test cases.
## I Introduction
Quantum machine learning (QML) has shown great promise on several prototypes of NISQ devices, attaining significant accuracy on several different datasets, see e.g., [1; 2] and references therein. It has proven effective even with a limited number of available qubits, not only on simulated devices but also when tested on noise-prone quantum information processing devices, a setting seemingly proven difficult in the case presented in [3]. As most QML models have been derived from classical machine learning (ML) models, with adaptations to make them operable on quantum devices, these algorithms pose challenges similar to those found in their classical counterparts. One of the setbacks of ML models, classical and quantum included, is overfitting, causing models to underperform on average when presented with external data outside the training set for the same problem. Due to its importance, the problem of overfitting has been studied extensively in the classical ML literature, and several methods have been proposed and implemented to mitigate it, see e.g., [4] and Sec. II for a brief overview. Very generally, there are two ways one can approach the problem of overfitting. In the first approach, we combat overfitting before or during the training process. This can be done, for example, by data augmentation or regularization techniques [4]. Complementary to this approach, overfitting can be treated post-training by changing the trained parameters to compensate for it. One such method for handling overfitting in neural network (NN) architectures is neuron dropout [5]. In this method, a new NN is created from the trained NN by removing (dropping out) a few neurons from the network. Due to its simplicity and proven applicability to a wide range of NNs, dropout became one of the most popular techniques for handling overfitting in the classical ML paradigm. While pre-training methods can be effective to a certain extent, they may not fully eliminate the problem of overfitting, since during training the model may learn patterns that are specific to the training data and do not generalize well. In contrast, post-training methods generally allow for a more comprehensive analysis of the trained model's behavior and performance, resulting in fine-tuned models and improved generalization.
In contrast to the classical setting, there has been much less investigation as to how to address the problem of overfitting in QML models. While pre-training methods such as data augmentation or early stopping may have been implemented in the QML setting, to the best of our knowledge there has been no systematic study of the problem of overfitting in QML models. In this paper, we study the problem of overfitting in QML models and propose a post-training method to mitigate it. Specifically, we focus on the dropout method as a deterrent for overfitting and for concreteness, and since it is one of the widely-used QML architectures, we concentrate on quantum convolutional neural networks (QCNNs) [6]. As we discuss in more detail in the following sections, we find that a straightforward generalization of the dropout method to the quantum setting can cause QCNN models to lose their prediction capabilities on the _trained_ data, let alone improving overfitting. As we shall argue, this result can be traced back to the way QCNNs are designed and to the crucial role of entanglement in their performance and success. Therefore, we propose a new method that is based on a "softer" version of the post-training dropout method termed soft-dropout, an overfitting deterrent. This method, as we will show, provides excellent performance for suppressing overfitting in QCNN for the tested cases.
The paper is organized as follows: In Sec. II, we discuss
the problem of overfitting and the classical techniques for mitigating it. In Sec. III we present the various techniques we tested and developed to mitigate overfitting in QCNNs. In Sec. IV we present our numerical experimental results using these techniques. We offer conclusions and outlook in Sec. V.
## II Overfitting and its mitigation in classical neural networks
Before considering the problem of overfitting in the quantum setting, in the following section, we take a closer look at this problem as it is manifested in the classical setting and the current methods for mitigating it [4].
Overfitting is one of the common problems noticed in ML and statistical modeling, where a model performs exceptionally well on the training data but needs to be generalized to new, unseen data. It occurs when the model learns the noise and random fluctuations in the training data in addition to the underlying patterns and relationships. When a model overfits the training data, it becomes too complex and captures the idiosyncrasies of the training data, leading to poor performance on any new data. In practice, overfitting is manifested as a relatively poor performance of a model, in terms of its prediction accuracy, on validation data. Therefore, model overfitting is a problem that undermines the very essence of the learning task, i.e., generalizability. For this reason, a lot of efforts have been devoted to developing methods and techniques to handle overfitting and make sure that learning models are flexible enough and not overfitted to the training data. We briefly review some of the most popular techniques in what follows, see also [4] and references therein.
Cross-validation is a pre-training technique used to assess the performance and robustness of a model. It involves splitting the data into multiple subsets or folds and training the model on different combinations of these subsets. By evaluating the model's performance on different folds, cross-validation helps estimate the model's ability to generalize to new data. It can be easily implemented in classical as well as quantum ML models (we have implemented it in our numerical experiments). Increasing the training data size by augmenting it can also help alleviate overfitting. Data augmentation techniques involve generating additional training samples by applying transformations, such as rotations, translations, or distortions, to the existing data. This introduces more variation and diversity, helping the model to generalize better. This method of avoiding overfitting is highly dependent on the availability of data and is easily transferable to QML since it is not related to the ML aspect but the data prepossessing part and is considered during the experimental process.
Regularization is another popular pre-training overfitting prevention technique. In general terms, it avoids overfitting by adding a penalty term to the loss function during training. In this way, regularization introduces a bias into the model, discouraging overly complex solutions. \(L1\) and \(L2\) regularization are commonly used methods. \(L1\) regularization (Lasso) adds the absolute value of the coefficients to the loss function, promoting sparsity. \(L2\) regularization (Ridge) adds the squared value of the coefficients, which tends to distribute the impact across all features. Overfitting can occur when the model has access to many irrelevant or redundant features. Feature selection techniques aim to identify and retain only the most informative and relevant features for model training. This can be done through statistical methods, such as univariate feature selection or recursive feature elimination, or using domain knowledge and expert intuition. Feature selection has been implemented in the QML setting and has proven to be useful [7].
In addition to the aforementioned methods, the complexity of a model plays a crucial role in determining the risk of overfitting. Simplifying the model architecture or reducing the number of parameters can mitigate overfitting. Techniques such as reducing the number of hidden layers, decreasing the number of neurons, or using simpler model architectures, like linear models or decision trees, can help control the complexity and prevent overfitting.
Finally, we mention dropout. Dropout is a regularization technique specific to NNs. It is one of the most popular methods for mitigating overfitting in classical NNs due to its simplicity and performance [4; 5]. Dropout can be implemented in two ways: one is during the training process, and another method would be after getting a trained model. During training, dropout randomly disables a fraction of the neurons in each layer, forcing the network to learn redundant representations and reducing the reliance on specific neurons. For post-training methods of dropout, typically a few percent of neurons are dropped at random from a trained NN. This process is repeated, with different realization, until the least overfitted model is found. The dropout method is known to help to prevent overfitting by making the network more globally robust and less sensitive to individual neurons [5].
In this work, we adjust the dropout method to the QCNNs' setting and experimentally test it. We find that due to quantum entanglement, the dropout method does not carry over in its simple form to the quantum setting. Rather, we propose a softer version of dropout, as we describe in the next section. We found that the soft-dropout method performs very well in mitigating overfitting in several QCNN numerical experiments. The reason for proposing a post-training method for a QCNN model is as a prevention technique to tackle overfitting even after all previous measures are considered prior to training the model and overfitting is still observed.
## III Methods and techniques
Before presenting the results from our numerical experiments, we devote this section to providing an overview of the main tools and techniques we used and developed in this work.
### The QCNN architecture
QCNNs are essentially variational quantum algorithms [6; 8]. Similar to their classical counterparts, QCNN is designed and used for solving classification problems (supervised and unsupervised paradigms have been studied in this context) [9]. They were proposed to be well-fitted for NISQ computing due to their intrinsically shallow circuit depth. It was shown that due to unique quantum phenomena, such as superposition and entanglement, QCNN can provide better prediction statistics using less training data than classical ones in certain circumstances [10].
Due to the noise and technical challenges of building quantum hardware, the size of quantum circuits that can be reliably executed on NISQ devices is limited. Thus, the encoding schemes for high dimensional data usually require a number of qubits that are beyond the current capabilities of quantum devices. Therefore, classical dimensionality reduction techniques are particularly useful in the near-term application of QML techniques. In this work, the classical data was pre-processed using two dimensionality reduction techniques, namely Principal Component Analysis (PCA) [11] and Autoencoding (AutoEnc) [12]. Autoencoders are capable of modeling complex non-linear functions, whereas PCA is a simpler linear transformation that helps in cheaper and faster computation.
A generic QCNN is composed of a few key components [6; 8], as illustrated in Fig. 1. The first component is data encoding, also known as a quantum feature map. In classical ML, feature maps are used to transform input data into higher-dimensional spaces, where the data can be more easily separated or classified. Similarly, a quantum feature map transforms classical data into a quantum state representation. The main idea is to encode the classical data as an entangled state with the possibility of capturing richer and more complex patterns within the data. The quantum feature map is implemented in practice by applying a unitary transformation to the initial state (typically the all-zero state). In this work, we implemented two of the main feature encoding schemes, amplitude encoding and qubit encoding [6; 8]. In the former, classical data \((x_{1},\ldots,x_{k})\in\mathbb{R}^{k}\) is represented as, generally, an entangled input quantum state \(\ket{\psi_{\mathrm{in}}}\sim\sum_{i=1}^{k}x_{i}\ket{i}\) (up to normalization), where \(\ket{i}\) is a computational basis ket. Amplitude encoding uses a circuit of depth \(\mathcal{O}(\log N)\) and \(N\) qubits [13]. To evaluate the robustness of our dropout method with respect to the feature map, we also used qubit encoding. In this method the input state is a separable state \(\ket{\psi_{\mathrm{in}}}=\bigotimes_{i=1}^{k}(\cos\frac{x_{i}}{2}|0\rangle+\sin\frac{x_{i}}{2}|1\rangle)\). As such, it uses a constant-depth circuit given by a product of single-qubit rotations.
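Both encodings can be illustrated by constructing the corresponding statevectors directly in NumPy; this is an idealized sketch of the encoded states (an actual QCNN prepares them with a quantum circuit), and padding amplitude-encoded data to a power-of-two dimension is left implicit.

```python
import numpy as np


def qubit_encoding(x):
    """Separable state: tensor product of cos(x_i/2)|0> + sin(x_i/2)|1>."""
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, np.array([np.cos(xi / 2), np.sin(xi / 2)]))
    return state  # statevector of length 2**k


def amplitude_encoding(x):
    """Classical vector encoded as the amplitudes of a normalized state."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)
```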
The second key component of a QCNN is a parameterized quantum circuit (PQC) [14; 15]. PQCs are composed of quantum gates whose action is determined by the values of a set of parameters. Using a variational algorithm (classical or quantum), the PQC is trained by optimizing the parameters of the gates to yield the highest accuracy in solving the ML task (e.g., classification) on the input data. Typically, in QCNN architectures, the PQC is composed of a repeated sequence of a (parametric) convolution circuit followed by a (parametric) pooling circuit. The convolution layer is used as the PQC for training a tree tensor network (TTN) [16]. In this work, we used a specific form of the convolution layer, proposed and implemented by Hur _et al._[8], that is constructed out of a concatenation of two-qubit gates (building blocks). In Fig. 2(a)-(b) we sketch two of the building blocks that we used for the convolution layer in our architecture.
The convolution layer is followed by a pooling layer, which reduces the dimensionality of the input data while preserving important features; the pooling layer applies parameterized controlled quantum gates to pairs of qubits. To reduce the dimensionality, the control qubits are traced out (assuming they maintain coherence through the computation) while the target qubits continue to the next convolution layer, see Fig. 1. For the implementation of the pooling layer, we used a parameterized two-qubit circuit consisting of two controlled rotations \(R_{z}(\theta_{1})\) and \(R_{x}(\theta_{2})\), activated when the control qubit is 1 or 0, respectively (filled and open circle in Fig. 2(c)). The PQC is followed by a measurement in the computational basis on the last layer of qubits.
The QCNN is trained by successively optimizing the PQC over the input data and their labels, minimizing an appropriate cost function. Here we use the mean squared error (MSE) between predictions and class labels. Given a set of training data \(\{\ket{\psi_{i}},y_{i}\}\) of size \(K\), where \(\ket{\psi_{i}}\) denotes an initial state and \(y_{i}\in\{0,1\}\) denotes its label, the MSE cost function is given by
\[C(\mathbf{\theta})=\frac{1}{K}\sum_{i=1}^{K}\Big{(}y_{i}-f_{\mathbf{\theta}}(\ket{\psi _{i}})\Big{)}^{2}. \tag{1}\]
Here, \(f_{\mathbf{\theta}}(\ket{\psi_{i}})\) is the output of the QCNN (\(f\in\{0,1\}\)), which depends on the set of parameters \(\mathbf{\theta}\) that define the gates of the PQC.
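A schematic training loop minimizing Eq. (1) might look as follows; `qcnn_circuit`, the data arrays, and the parameter count are placeholders we introduce for illustration, and the optimizer choice anticipates Sec. IV.

```python
import pennylane as qml
from pennylane import numpy as pnp

# qcnn_circuit(x, theta), X_train, and y_train are assumed to be defined
# elsewhere; n_params is a placeholder for the PQC parameter count.
n_params = 30
theta = pnp.random.uniform(0, 2 * pnp.pi, size=n_params, requires_grad=True)

def mse_cost(theta, inputs, labels):
    # Eq. (1): mean squared error between QCNN outputs and binary labels.
    preds = pnp.stack([qcnn_circuit(x, theta) for x in inputs])
    return pnp.mean((labels - preds) ** 2)

opt = qml.NesterovMomentumOptimizer(stepsize=0.01)
for step in range(200):
    theta = opt.step(lambda t: mse_cost(t, X_train, y_train), theta)
```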
### Dropout
Once the models had been trained, we tested two dropout approaches to mitigate overfitting: a straightforward generalization of the classical dropout method and a 'softer' approach.
In the classical setting, post-training dropout is usually implemented by removing a certain percentage of neurons from the network. In a similar vein, in our first approach, we dropped a certain percentage of single-qubit gates from the trained network (equivalently, replacing those gates with the identity gate). None of the CNOT gates in the convolution layers or the controlled two-qubit gates in the pooling layers were dropped out. As discussed at length in Sec. IV, we found that this dropout method fails catastrophically. Not only did it not help with mitigating overfitting, it also substantially reduced the success rates on the _trained_ data.
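A minimal sketch of this hard-dropout procedure, assuming the trained parameters are stored with one row of rotation angles per parameterized single-qubit gate (the storage layout and function name are our assumptions):

```python
import numpy as np

def hard_dropout(params, drop_frac, seed=None):
    # params: one row of rotation angles per parameterized single-qubit gate.
    # Setting all angles of a gate to zero turns the gate into the identity,
    # which emulates removing it from the trained circuit.
    rng = np.random.default_rng(seed)
    out = np.array(params, dtype=float, copy=True)
    n_drop = max(1, int(drop_frac * len(out)))
    out[rng.choice(len(out), size=n_drop, replace=False)] = 0.0
    return out
```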
The second method we implemented to mitigate overfitting in QCNNs we term soft-dropout. Since setting (random, single-qubit) gates to the identity had such a crucial effect on the network's performance, we hypothesized that tinkering with the trained parameters to a limited degree might provide enough flexibility to the model without hampering its accuracy and predictive capability. In the soft-dropout method, rather than dropping out gates completely, some of the trained parameters are slightly modified. The performance of the slightly modified model is then tested using the testing and validation data. The trained parameters were changed manually in order to study the effect of the soft-dropout method and the threshold at which changing the parameters starts to degrade performance. We envision soft-dropout not as a single technique to deter overfitting but as a collection of techniques that can be utilized individually or in combination, such as rounding the trained parameters to a certain number of decimal places, snapping values within a threshold to a common whole number, and setting values sufficiently close to zero to exactly zero. For the latter technique, all trained parameters whose absolute value fell below a threshold (generally below 0.09) were set to 0; taking absolute values covers both the positive and negative parts of the trained-parameter spectrum. A similar technique was utilized for the whole-number conversion, with the threshold in place of zero. In both cases, the threshold was chosen so as to mitigate overfitting without a drop in accuracy, and was found over several manual iterations of searching for
Figure 1: **General QCNN architecture**. The QCNN includes three key components: A feature map, a parametric quantum circuit that includes concatenated convolution and pooling layers, and a measurement followed by an optimization unit. In this work the convolution and the pooling layers are constructed from two-qubit gates (building blocks). Examples of the building blocks we used are given in Fig. 2.
Figure 2: **Two-qubit building blocks of the implemented QCNN.** We used the architecture proposed and implemented in [8]. The building blocks for the convolution layers are given in subfigures (a) and (b), where \(U_{3}(\theta,\varphi,\lambda)=R_{z}(\varphi)R_{x}(-\frac{\pi}{2})R_{z}(\theta)R_{x}(\frac{\pi}{2})R_{z}(\lambda)\), while the building block for the pooling layer is shown in subfigure (c).
the smallest gap between testing and validation accuracy while not significantly degrading the testing accuracy. For the round-off method, a built-in Python function for rounding values to a given number of decimal places was used; not all values were rounded to the same precision, as beyond a certain threshold a severe drop in accuracy was observed. The threshold for the round-off method was determined by iterating the method until finding the parameters that yield the highest validation accuracy compared to the unmitigated circuit. Results and observations for the soft-dropout method are discussed for all the datasets in Sec. IV.
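The sketch below illustrates two of these soft-dropout variants, rounding and zero-thresholding; the function and its arguments are illustrative, with the 0.09 threshold taken from the description above.

```python
import numpy as np

def soft_dropout(params, n_decimals=None, zero_threshold=None):
    # Rounding and zero-thresholding variants of soft-dropout. E.g.,
    # soft_dropout(theta, n_decimals=2) or soft_dropout(theta, zero_threshold=0.09).
    out = np.asarray(params, dtype=float).copy()
    if n_decimals is not None:
        out = np.round(out, n_decimals)
    if zero_threshold is not None:
        out[np.abs(out) < zero_threshold] = 0.0  # covers +/- via absolute value
    return out
```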
## IV Numerical experiments and results
### Datasets
Using multiple datasets in ML, including QML, is very useful for increasing generalization, robustness, and reliability. It also helps overcome data limitations, introduces variability and heterogeneity, and allows the exploration of different perspectives. With these considerations in mind, we chose to work with three datasets, two of them being image-based medical datasets, Medical MNIST [17] and BraTS [18], while the third was a Stellar dataset [19] consisting of numerical values.
Each dataset was split into three parts: one for training, another for testing, and the last portion for validation. The validation set is used to test the performance of the (trained) QCNN on an unseen dataset and provides a proxy for determining whether the model exhibits overfitting and how well we mitigate this problem. Several train-test-validation split percentages were tried to find the one that best induced overfitting conditions; this was done for all three datasets. After the split, the testing data was added to the training data to develop the overfitting conditions more prominently. Subsequently, the data were processed using PCA for dimensionality reduction, fitting them to the limited number of qubits (we used 8, 12, and 16 qubits). The classically-processed data was then sent for training on a (simulated) QCNN.
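As a sketch of this preprocessing pipeline (split, then reduce), the snippet below uses scikit-learn; the split percentages, helper name, and the choice of \(2^{n}\) PCA components (sized for amplitude encoding) are assumptions for illustration, not the exact configuration used here.

```python
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

def preprocess(X, y, n_qubits, seed=0):
    # Split into train/test/validation (percentages illustrative), then fit
    # PCA on the training portion and reduce every sample to 2**n_qubits
    # features, matching an amplitude-encoded register of n_qubits qubits.
    X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.3, random_state=seed)
    X_te, X_va, y_te, y_va = train_test_split(X_rest, y_rest, test_size=0.5, random_state=seed)
    pca = PCA(n_components=2 ** n_qubits).fit(X_tr.reshape(len(X_tr), -1))
    reduce = lambda Z: pca.transform(Z.reshape(len(Z), -1))
    return reduce(X_tr), reduce(X_te), reduce(X_va), y_tr, y_te, y_va
```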
_Medical MNIST._-- The Medical MNIST dataset consists of 6 classes: Abdominal CT, Breast MRI, Chest CT, Chest X-ray, Hand X-ray, and Head CT. Each class contains around 10,000 images of size \(64\times 64\) pixels. As we implement binary classification, all pairwise combinations of classes were tested, and the pair of Chest CT and Abdominal CT was used most of the time due to their similarity. Several QCNNs were created to differentiate between Chest CT and Abdominal CT images. These two classes are very much alike and hence challenging to differentiate, making the models prone to overfitting, see Fig. 3.
_BraTS 2019._-- To validate the uniformity of the proposed dropout approach across different datasets and models, we used another medical dataset. The BraTS 2019 dataset was chosen for classification between High-Grade Gliomas (HGG) and Low-Grade Gliomas (LGG) patient MRI images. The BraTS 2019 training dataset has 259 HGG and 76 LGG images. The BraTS multimodal scans are provided in NIfTI file format (.nii.gz) and include the following components: a) native (T1) scans, b) post-contrast T1-weighted (T1Gd) scans, c) T2-weighted (T2) scans, and d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) scans. These scans were obtained using diverse clinical protocols and a variety of scanners from a total of 19 different institutions. Due to resource limitations, only one modality, specifically the T2-FLAIR, was considered for the classification of HGG versus LGG. The images were resized to \(64\times 64\) pixels. As depicted in Fig. 4, the resulting images appeared unclear and pixelated, which was expected given the constraints.
Figure 4: **Example of BraTS 2019 dataset brain images.** The high-resolution brain images in the top row were resized to \(64\times 64\) pixels (bottom row). The resulting images appear unclear and pixelated, which poses a challenge for classification.
Figure 3: **Example of Medical MNIST dataset images.** Top row: Abdominal CT; bottom row: Chest CT. The similarity between Chest CT and Abdominal CT images implies that they are hard to classify and, in addition, that the learning model may be prone to overfitting.
_Stellar classification dataset (SDSS17)._-- As both of the prior datasets were image-based, we used a dataset in a different format to verify our conclusions and ascertain the claims derived from the results. The stellar classification dataset SDSS17 proved to be a reliable candidate. It consists of 100,000 observations of space taken from the Sloan Digital Sky Survey (SDSS). Every recorded observation is registered in 17 columns defining different values, such as the ultraviolet filter in the photometric system, the green filter in the photometric system, and the redshift value based on the increase in wavelength, plus one class column identifying the category as a star, quasar, or galaxy. Out of the 17 available columns, only 8 were used for training the model after the initial data pre-processing (columns containing the ID of the equipment and date parameters were removed in order to generalize the object regardless of its position in the sky and the equipment detecting it). Considering the close proximity of the star and quasar data and the difficulty of classifying them based on the available features, we chose these two classes for QCNN classification. As the data consists of only 8 columns, PCA did not need to be applied to reduce the dimensionality of the data; hence, all the experiments conducted for the classification of this dataset were limited to 8 qubits. The same data-splitting process as for the previous datasets was used, with a few exceptions in the train-test-validation split percentages in order to induce the overfitting scenario.
### Results from numerical experiments
All the experimental data for this manuscript were generated using the PennyLane software (v0.31) and simulated on a local classical device, given the large number of iterations needed to train the QML models and the queuing overhead that completing the process on any of the cloud-accessible quantum computers would incur. For optimization, we used PennyLane's NesterovMomentumOptimizer, based on its merits observed during the initial training of the QML models [20]. The total number of qubits used in these experiments varied from 8 to 16, depending on the dataset and the complexity of the circuit. PCA, as mentioned in Sec. III, was used to reduce higher-dimensional data to fit the number of qubits defined for the QML model.
We assess success in mitigating overfitting by two metrics: (1) an increase in validation accuracy, and (2) a reduction in the difference between testing and validation accuracy after implementing the dropout method.
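In code, the two criteria amount to a simple check over (test, validation) accuracy pairs; the helper below is illustrative, with example numbers taken from Table 3.

```python
def dropout_helped(before, after):
    # before/after: (test_acc, val_acc) pairs measured without and with a
    # dropout method; both success criteria above must hold.
    gap = lambda pair: abs(pair[0] - pair[1])
    return after[1] > before[1] and gap(after) < gap(before)

# e.g. dropout_helped((0.9154, 0.7175), (0.9229, 0.8629)) -> True  (Table 3)
```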
#### iv.2.1 Mitigating overfitting using dropout
The first method we implemented to mitigate overfitting in QCNNs is a direct adaptation of (post-training) dropout to the quantum setting, as discussed above. We applied this method to mitigate overfitting on the Medical MNIST dataset and found that it has devastating effects on the tested QCNNs. For example, when we implemented it on an 8-qubit model with a testing accuracy of 95% and a validation accuracy of 90%, dropping out only 5% of the single-qubit gates reduced the accuracy of the QCNN on the testing data significantly, with 77% accuracy for one of the best-performing models and about 2% for the worst. Not only was this method unable to mitigate overfitting by increasing validation accuracy or reducing the gap between testing and validation accuracy, it also dramatically hampered the performance of the network on the _trained_ data. Similar behavior was observed to be robust with respect to the number of single-qubit gates that were dropped out; in particular, we tested dropping out 1% to 10% of the single-qubit gates and observed a similarly drastic drop in accuracy. This was contrary to our naive intuition that a network with many gates, more than the minimum required to accomplish the learning task, should be minimally affected, if at all, by dropping out a few single-qubit gates. To test the effect of the method at its limit, we dropped a single (randomly chosen) single-qubit gate out of a model with 78 gates. This experiment yielded striking results: the accuracy plunged to a range of about 46% to 53%, almost a 50-50 chance in the particular model we tested. This experimentation led to the conclusion that deleting even a single gate from the circuit of a trained QCNN causes a loss of the information gained during the training process.
We hypothesize that entanglement plays a crucial role in this behavior. In a classical CNN, where the dropout method is used very successfully, each neuron holds a few bits of information regarding the data it is trained on, and therefore losing a small fraction of neurons does not affect the overall performance of the entire network. In stark contrast, the QCNN in our implementation is designed to harness entanglement between qubits through the concatenation of parameterized single-qubit gates and CNOT gates. This means that the information learned about a certain dataset is stored and distributed in the QCNN in a "non-local" way, loosely speaking. Our experiments show that entanglement can be a double-edged sword in quantum NNs: On one hand, it may promote speedup, e.g., in terms of learning rates, but on the other hand, it can lead to a fragile NN, in the sense that removing even a single gate from a _trained_ network may have devastating consequences with respect to its performance. This experiment, therefore, exposes an intrinsic vulnerability of QML, and QCNNs in particular, in comparison to their classical counterparts.
To ascertain this conclusion, we conducted a set of experiments, schematically shown in Fig. 5. We constructed a QCNN with 8 qubits and an additional ancillary qubit that does not pass through the feature map
but rather is initialized in a computational basis state (say, \(|0\rangle\)) and appended to the QCNN circuit without encoding any features. Thus, this qubit does not hold any information about the input data. The ancillary qubit is then passed through a parameterized single-qubit gate (our experiments were done with an \(R_{x}\) gate and an \(R_{y}\) gate) whose parameters are consistently updated in every iteration of the training cycle along with the rest of the training parameters in the first convolutional layer. The qubit is then entangled with one of the qubits from the circuit with a CNOT gate after the first convolutional layer, and it is then traced out in the following pooling layer. Training this QCNN resulted in 93%-95% testing accuracy (depending on the network building blocks we used). However, upon dropping out the parameterized gate of the ancillary qubit, the testing accuracy plunged to the order of a few percent. This set of experiments clearly indicates that even though the ancillary qubit was not encoding information about the input data, the mere fact that it was trained and entangled with the rest of the qubits meant that dropping it after training caused an information loss that resulted in a sharp accuracy drop. These results suggested that while dropping out gates in a QCNN may not be a viable method for mitigating overfitting, tinkering with the trained values of the gate parameters may have a more subtle effect and thus can be used for this purpose.
#### iv.2.2 Mitigating overfitting using soft-dropout
As we discussed above, applying the classically derived method of post-training dropout resulted in the loss of learned information due to the dropping of gates. In contrast, encouraging positive results were observed when the soft-dropout method was applied. In these experiments, we implemented the method through variations of rounding the learned parameters and thresholding their values, as mentioned in Sec. III.
We summarize our results in Tables 1-3, organized by the datasets they are associated with. The results clearly indicate that when a model suffers from overfitting (as captured by a lower validation accuracy and an appreciable difference between test and validation accuracy), the soft-dropout method not only succeeded in reducing the gap between testing and validation accuracy in several test cases, but also increased the model's validation accuracy across all of our experiments.
We attempted to devise a systematic way of determining, once the trained parameters are obtained, the rounding or absolute-value threshold to use for mitigating overfitting. Applying this method to several trained and overfitted models, we observed that every model had a different threshold, which could only be determined through repeated testing to find the best fit for tackling the overfitting issue. In addition, a closer inspection revealed that the parameters which were used to successfully mitigate overfitting were those which fluctuated around a mean value and did not change much during training. This observation will be explored in more detail in future work.
## V Conclusion and Outlook
In this study, we focused on addressing the challenge of overfitting in the QML setting, specifically in QCNNs. Overfitting, a common issue in ML models, poses significant obstacles to the generalization performance of QCNNs. To overcome this challenge, we introduced and explored the potential of soft-dropout and compared it to a straightforward application of the dropout method commonly utilized in classical CNNs.
Surprisingly, we found that dropping out even a single parameterized gate from a trained QCNN can result in a dramatic decrease in its performance. This result highlights a vulnerability of QCNNs compared to their classical counterparts.
Figure 5: **Ancillary qubit dropout experimental setup.** The figure depicts the experimental setup used to exemplify the vulnerability of QML models. (a) The QCNN is trained along with an ancillary qubit, which is not part of the feature map. The ancillary qubit takes part in the training via a parameterized rotation (\(R_{x}\) or \(R_{y}\)). We found that this setup results in a testing accuracy of about 95%. (b) After training is completed and the single-qubit gate is removed from the ancillary qubit, the model experiences a significant loss in prediction accuracy.
On the other hand, the soft-dropout approach yielded encouraging results.
Extensive experimentation was conducted on diverse datasets, including Medical MNIST, BraTS, and Stellar Classification, to evaluate the effectiveness of soft-dropout in mitigating overfitting in QCNNs. Our findings highlight the promising performance of soft-dropout in reducing overfitting and enhancing the generalization capabilities of QCNN models. By fine-tuning the trained parameters through various techniques, notable improvements in accuracy were observed while preserving the integrity of the quantum circuit. Hence, soft-dropout can be considered one of the most viable options for mitigating overfitting in a post-training setting.
We close this section with a few directions for future work. The first direction is developing a systematic approach for determining which parameters should be adjusted, and by how much, to handle overfitting. Following our initial observation, we believe that identifying the parameters that fluctuate around a mean value during training plays an important role in mitigating overfitting.
Another important direction for future work is to investigate the performance of soft-dropout in the presence of experimental noise. Quantum systems are inherently susceptible to noise, which can impact the reliability and effectiveness of quantum operations. Understanding how soft-dropout performs under noisy conditions will contribute to the development of robust QCNN models that can operate in realistic quantum computing environments.
Another aspect that requires further exploration is
| Qubits | Setting | Test Acc. | Validation Acc. | Gap |
| --- | --- | --- | --- | --- |
| 8 | no dropout | 0.8728 | 0.8548 | 0.018 |
| 8 | soft-dropout | 0.8765 | 0.8829 | -0.0064 |
| 8 | no dropout | 0.8543 | 0.8499 | 0.0044 |
| 8 | soft-dropout | 0.8655 | 0.8757 | 0.0102 |
| 12 | no dropout | 0.8958 | 0.8859 | 0.01 |
| 12 | soft-dropout | 0.8996 | 0.9082 | -0.0086 |
| 12 | no dropout | 0.8666 | 0.8645 | 0.0021 |
| 12 | soft-dropout | 0.8731 | 0.8793 | 0.0062 |
| 16 | no dropout | 0.9422 | 0.9257 | 0.0165 |
| 16 | soft-dropout | 0.9518 | 0.9586 | -0.0068 |
| 16 | no dropout | 0.8972 | 0.8895 | 0.0077 |
| 16 | soft-dropout | 0.9127 | 0.9233 | 0.0106 |

Table 2: **Results based on the BraTS dataset.** The format of the results is similar to Table 1, for different models trained with 8, 12, and 16 qubits; each pair of rows compares a model without dropout to the same model after soft-dropout. For all three qubit counts, the validation accuracy after soft-dropout was implemented is higher than without dropout. This suggests that the soft-dropout regularization technique helps improve the model's generalization performance and reduces overfitting.
| Qubits | Setting | Test Acc. | Validation Acc. | Gap |
| --- | --- | --- | --- | --- |
| 8 | no dropout | 0.9154 | 0.7175 | 0.1979 |
| 8 | soft-dropout | 0.9229 | 0.8629 | 0.06 |
| 8 | no dropout | 0.9721 | 0.9136 | 0.0585 |
| 8 | soft-dropout | 0.9794 | 0.9447 | 0.0347 |
| 8 | no dropout | 0.9225 | 0.8770 | 0.0455 |
| 8 | soft-dropout | 0.9339 | 0.9039 | 0.03 |
| 8 | no dropout | 0.9675 | 0.9298 | 0.0377 |
| 8 | soft-dropout | 0.9464 | 0.9374 | 0.009 |

Table 3: **Results based on the Stellar dataset.** Results have the same format as in Table 1; all models use 8 qubits. The results clearly indicate that soft-dropout was successful in mitigating overfitting, as indicated by higher validation accuracy and a smaller gap compared to the results with no dropout, across all tested models.
the scalability and performance of soft-dropout in larger QCNN models. As quantum hardware continues to advance, larger and more complex QCNN architectures become feasible. Evaluating the behavior and effectiveness of soft-dropout in handling larger quantum circuits will provide insights into its scalability and potential challenges in maintaining regularization benefits.
By pursuing these research directions, we can advance the field of QML and enhance the practical deployment of QCNN models. Overcoming overfitting challenges is crucial for ensuring the reliability and effectiveness of QCNNs in real-world applications, unlocking their potential to make significant contributions in various domains.
###### Acknowledgements.
This project was supported in part by NSF award #2210374.
|
2307.14988 | Incrementally-Computable Neural Networks: Efficient Inference for
Dynamic Inputs | Deep learning often faces the challenge of efficiently processing dynamic
inputs, such as sensor data or user inputs. For example, an AI writing
assistant is required to update its suggestions in real time as a document is
edited. Re-running the model each time is expensive, even with compression
techniques like knowledge distillation, pruning, or quantization. Instead, we
take an incremental computing approach, looking to reuse calculations as the
inputs change. However, the dense connectivity of conventional architectures
poses a major obstacle to incremental computation, as even minor input changes
cascade through the network and restrict information reuse. To address this, we
use vector quantization to discretize intermediate values in the network, which
filters out noisy and unnecessary modifications to hidden neurons, facilitating
the reuse of their values. We apply this approach to the transformers
architecture, creating an efficient incremental inference algorithm with
complexity proportional to the fraction of the modified inputs. Our experiments
with adapting the OPT-125M pre-trained language model demonstrate comparable
accuracy on document classification while requiring 12.1X (median) fewer
operations for processing sequences of atomic edits. | Or Sharir, Anima Anandkumar | 2023-07-27T16:30:27Z | http://arxiv.org/abs/2307.14988v1 | # Incrementally-Computable Neural Networks:
###### Abstract
Deep learning often faces the challenge of efficiently processing dynamic inputs, such as sensor data or user inputs. For example, an AI writing assistant is required to update its suggestions in real time as a document is edited. Re-running the model each time is expensive, even with compression techniques like knowledge distillation, pruning, or quantization. Instead, we take an _incremental computing_ approach, looking to reuse calculations as the inputs change. However, the dense connectivity of conventional architectures poses a major obstacle to incremental computation, as even minor input changes cascade through the network and restrict information reuse. To address this, we use _vector quantization_ to discretize intermediate values in the network, which filters out noisy and unnecessary modifications to hidden neurons, facilitating the reuse of their values. We apply this approach to the transformers architecture, creating an efficient incremental inference algorithm with complexity proportional to the fraction of the modified inputs. Our experiments with adapting the OPT-125M pre-trained language model demonstrate comparable accuracy on document classification while requiring 12.1X (median) fewer operations for processing sequences of atomic edits.
## 1 Introduction
Large language models (LLMs) based on the transformers architecture (Vaswani et al., 2017; Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020) have revolutionized the entire natural language processing field, achieving remarkable results on a wide range of tasks, including text classification, question answering, document retrieval and many others. Increasingly, LLMs aid in the writing process itself, analyzing and proposing suggestions as documents are edited (Shi et al., 2022; Ippolito et al., 2022), both in online and offline settings. In the online case, LLMs must react in real time as a document is edited, word-by-word. In the offline case, there is a preexisting history of revisions waiting in the queue for processing. Even though the difference between consecutive revisions is often small - just a single word in the online setting - today's LLMs process each revision from scratch, wasting a lot of computational resources.
An ideal deep-learning model should be able to leverage the similarity between two revisions and reuse information as much as possible. Such an approach is known as _incremental computing_. Usually, incremental computing is solved by examining the dependency graph of the computational operations and tracking which nodes are affected by modified inputs. The prototypical example is a software build system, tracking which source files have changed and recompiling just the files that depend on them. However, most deep learning architectures, and specifically transformers, are structured such that nearly all neurons depend on nearly all inputs, as illustrated in fig. 0(a). Having a _densely connected_ architecture was proven to be a crucial aspect for producing highly expressive models (Sharir and Shashua, 2018; Levine et al., 2017; Cohen et al., 2018). But this same property also poses a challenge for reusing calculations.
We propose to augment transformers with discrete operations, namely, vector quantization (VQ) (van den Oord et al., 2017), to inhibit superfluous effects on the intermediate values in the network. By only allowing for meaningful effects to propagate, VQ paves the path to incrementally computable neural networks, as illustrated in fig. 1b. In sec. 3, we describe our complete algorithm for incremental computation of Vector-Quantized Transformers, or VQT for short, with a runtime complexity proportional to the edit distance of the documents.
We demonstrate our approach in sec. 4 by adapting the OPT-125M pre-trained language model (Zhang et al., 2022) and distilling it to its analogous VQ-OPT model, followed by fine-tuning on a document classification task (IMDB (Maas et al., 2011)), on which it obtains comparable accuracy to the original OPT-125M model. To measure the runtime performance on processing real-world edits, we scraped Wikipedia edit histories. Unlike the plain OPT, VQ-OPT requires 12.1X (median) fewer arithmetic operations for online processing of a sequence of atomic edits (replace / insert / delete a single token). For offline processing of two complete revisions, we observe a more modest 4.7X (median) reduction, as expected when modifying larger chunks of the document as opposed to atomic edits.
## 2 Related Works
**Model compression** is the primary approach for efficient inference. Knowledge distillation (Buciluundefined et al., 2006; Hinton et al., 2015; Sanh et al., 2020) trains a smaller "student" network to imitate the outputs of a larger "teacher" network. Pruning (Han et al., 2015, 2016) and tensor factorizations (Yu et al., 2017; Hsu et al., 2022) can be used to find sparse and low-rank representation, respectively, of the network's weights. Weight and activation quantization (Gholami et al., 2021) can be used to reduce the memory footprint and communication costs, which can be crucial for running extreme LLMs (\(>\)1B parameters) on a single GPU or even edge devices. Combined, they can be very effective, but they do not leverage the vast redundancy found in the editing process.
**Sparse computation** is a form of selective computation where computation is applied according to some sparse activation mask (Ren et al., 2018; Pan et al., 2018; Cavigelli et al., 2017; Parger et al., 2022; Chen et al., 2023). Most related to this paper are works that utilize temporal redundancy in video frames. The general approach is to compute the delta between consecutive activation maps, truncate small values, and then only operate on the remaining sparse values while also trying to account for the loss in numerical precision. A similar approach was also used for accelerating generative image editing (Li et al., 2022). While sharing some similarities to our approach, the cited methods focus on a different domain (vision), with different assumptions (gradual pixel change, locality), using a different underlying architecture (convolutional), and resulting in approximate
Figure 1: In conventional architectures **(a)**, reusing calculations is challenging as modifying even a single input coordinate affects most of the network. We propose vector-quantized networks **(b)** to filter insignificant output perturbations and enable activation reusability.
(as opposed to exact) inference, which makes direct comparison nontrivial. Swapping words can cause abrupt and non-local (though often sparse) effects on the document, very different from the gradual and local nature of video consecutive frames. Furthermore, our work handles the insertion and deletion of words, akin to shifting pixels around in a non-uniform pattern, while the cited works mostly assume the camera is static. Nevertheless, future work could investigate the transfer of VQT to the vision domain and allow for a more direct comparison.
**Vector Quantization**(Gray, 1984) has been widely used in machine learning for various means, but never for incremental computing. Denote with \(C\in\mathbb{R}^{q\times d}\) a codebook of \(q\) \(d\)-dimensional vectors; then vector quantization is typically defined as \(VQ(\mathbf{x})=C_{\operatorname*{argmin}_{i}d(\mathbf{x},C_{i,:})\,,\,:}\) for some distance metric \(d(\cdot,\cdot)\). VQ is a discrete operation, both with respect to its parameters, i.e., the codebook, and with respect to its input, meaning its derivatives are zero almost everywhere. However, several methods have been proposed for defining a "pseudo-derivative", e.g., via the straight-through estimator (van den Oord et al., 2017), which enables embedding vector quantization into deep learning models and training them with any gradient descent optimizer. Such methods were employed to define variational auto-encoders with discrete hidden variables (van den Oord et al., 2017), learn a shared latent space for images and text (Gu et al., 2022; Yu et al., 2022), compress audio (Zeghidour et al., 2021), as well as to improve generalization in transformers (Liu et al., 2022, 2021). To the best of our knowledge, our work is the first usage of VQ for improving efficiency at inference time.
## 3 Method
To support the incremental computation of transformers, we propose two simple modifications to the self-attention layer in the transformers architecture: (i) append a VQ layer to the self-attention outputs, and (ii) replace the softmax normalization of the attention matrix with an element-wise non-linearity. This amounts to the following vector-quantized self-attention operation:
\[O=\operatorname{VQ}(\sigma(QK^{T})V), \tag{1}\]
where \(Q,K,\) and \(V\) are the query, key, and value matrices, respectively, \(\sigma\) is the non-linear element-wise activation (e.g., a GELU), and VQ is the vector-quantization module applied to each row separately. For multi-head self-attention, the VQ layer takes as input the concatenation of all attention heads, followed by an additional linear layer to mix the heads. It was recently shown (Hua et al., 2022; Ma et al., 2022) that using such an element-wise non-linearity has comparable performance to softmax while being easier to optimize on the hardware level. Here, we employ it for numerical stability, because though reusing calculations with softmax is mathematically possible, numerically it can lead to a significant loss of precision.
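A minimal PyTorch sketch of Eq. (1) is given below; it omits the multi-head VQ, head-mixing linear layer, and attention scaling used in the actual model, and the Gumbel-Softmax pseudo-gradient of Sec. 4 is replaced by a plain straight-through estimator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantize(nn.Module):
    def __init__(self, codebook_size, dim):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(codebook_size, dim))

    def forward(self, x):                               # x: (..., dim)
        dist = (x.unsqueeze(-2) - self.codebook).pow(2).sum(-1)
        quantized = self.codebook[dist.argmin(dim=-1)]  # nearest codebook row
        return x + (quantized - x).detach()             # straight-through grad

class VQSelfAttention(nn.Module):
    def __init__(self, dim, codebook_size=64):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.vq = VectorQuantize(codebook_size, dim)

    def forward(self, x):                               # x: (batch, seq, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = F.gelu(q @ k.transpose(-2, -1))          # element-wise, no softmax
        return self.vq(attn @ v)                        # Eq. (1)
```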
Using the above module, we can now present how we can reuse calculations. As discussed in the introductions, there are two settings of interest, online editing, operating on one small change at a time, and offline batch processing, operating on a batch of revisions. To simplify the presentation, we will present our algorithm for the offline setting, where the online setting is equivalent to a batch of size 2 and keeping a cache for the first input. Furthermore, we first assume the only valid edit operations are token replacements, and will handle the token insertion and deletion in subsection 3.3.
We reuse calculations in two steps. First, we describe a compressed format for storing the quantized activations, leveraging the redundancy in the quantized intermediate values of similar revisions. Second, we show how this compressed format can be directly manipulated to efficiently implement any operation in the VQ Transformer.
### Compressing Vector-Quantized Activations
The intermediate values in the transformer can typically be described as a 3D tensor \(X\in\mathbb{R}^{b\times n\times d}\), where \(b,n,\) and \(d\) denote the batch size, sequence length, and hidden dimension, respectively. Since we assume \(X\) is vector-quantized, each hidden vector in any batch and sequence location is matched against some codebook matrix \(C\in\mathbb{R}^{q\times d}\) representing the unique vectors found in \(X\). This means we can represent it more compactly with an index matrix \(P\in\{1,\dots,q\}^{b\times n}\) pointing to the codebook vectors, i.e., \(X_{ijk}=C_{P_{ij}k}\). We emphasize that \(C\) is not the same as the codebook in the VQ layer
itself, but rather the representation of the unique set of vectors in \(X\). It could have either fewer or more vectors than the VQ codebook, depending on the sequence of operations up to this point.
If we also assume the edit distance between revisions is small, i.e., few tokens were replaced per revision, then the indices in \(P\) should agree on most sequence locations across the batch dimension. This has two implications. First, \(q=O(n+b)\) since the number of unique indices cannot be more than the length of any row plus the few indices that are different per row. Second, at any sequence location across the batch dimension, most indices should be the same, meaning we can further compress \(P\) through a sparse representation. Namely, at any sequence location pick the most frequent index, resulting in a base set of indices per location, and then store only the coordinates in \(P\) which differ from the base. This sparse representation has an \(O(n+b)\) storage complexity. In total, we get an \(O((n+b)d)\) storage complexity, and since typically \(b=O(n)\) as well, it reduces to just \(O(nd)\) down from \(O(bnd)\) for the dense representation of \(X\). The entire compression scheme is illustrated in fig. 2.
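A sketch of this compression step, with illustrative helper names: the codebook is recovered from the unique rows of the quantized activations, and the per-location base index plus sparse corrections realize the \(O(n+b)\) representation.

```python
import torch

def compress(X):
    # X: vector-quantized activations of shape (b, n, d). Returns a codebook
    # C of unique rows and an index matrix P with X[i, j] == C[P[i, j]].
    b, n, d = X.shape
    C, inverse = torch.unique(X.reshape(-1, d), dim=0, return_inverse=True)
    P = inverse.reshape(b, n)
    # Sparse representation of P: a per-location base index (the most
    # frequent index across the batch) plus the few deviating coordinates.
    base = P.mode(dim=0).values             # shape (n,)
    deltas = (P != base).nonzero()          # O(n + b) corrections in practice
    return C, base, deltas
```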
### Efficient Operations Over the Compressed Format
Having a compressed representation only helps if we can efficiently operate over it. The most common type of operation in transformers can be broadly described as an _identical per-location vector operation_, i.e., any vector-valued operation that is applied at each location separately. Formally, such operations can be written as \(Y=F(X)\) such that \(Y_{ij:}=f(X_{ij:})\) for some \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), with an \(O(bn\operatorname{cost}(f))\) complexity. This includes normalization layers, linear layers, scaling by a constant, activation functions, and others. In most common configurations, they together constitute more than 70% of the total floating-point operations in the forward pass, and for extreme models like GPT3 (Brown et al., 2020) it is over 97%. Fortunately, we can map this operation very efficiently to our compressed format. If \(X\) is represented by \((P,C)\), following the same notations as before, then \(Y\) is equivalent to \((P,F(C))\), i.e., operating only on the codebook matrix while keeping the indices unchanged. This relation holds because
\[Y_{ij:}=f(X_{ij:})=f(C_{P_{ij}:})=f(C)_{P_{ij}:}. \tag{2}\]
Thus, the complexity of applying per-location operations on our compressed format is a mere \(O(q\operatorname{cost}(f))=O(n\operatorname{cost}(f))\), i.e., removing the dependency on the batch size.
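On the compressed format, eq. (2) turns any per-location operator into a codebook-only computation; the helper name below is illustrative.

```python
def apply_per_location(P, C, f):
    # Eq. (2): applying an identical per-location operator on the compressed
    # format touches only the q codebook rows; the index matrix P is reused.
    return P, f(C)

# e.g. P, C = apply_per_location(P, C, layer_norm)   # names illustrative
```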
For other operations not covered by the above, e.g., self-attention and residual connections, we can show similar improved efficiency, but we defer their description to app. A. In the same appendix, we also explain how the cost of the VQ layer itself could be reduced during inference. In total, we can show that any batch operation in the network can be computed with an equivalent complexity to processing a single document (up to constants). Of course, this analysis depends on the assumption that the VQ layer can indeed converge to a solution during training that maximizes the redundancy found in the input. We demonstrate in our experiments in sec.4 that indeed this method can attain significant computational savings in practice.
Figure 2: Illustration of the compression scheme of the vector-quantized activations through our VQ-Transformer network.
### Supporting Token Insertion and Deletion
Previously, we assumed for simplicity that the only valid edit operations are token replacements, keeping the total number of tokens in a document fixed. The main issue with inserting or deleting tokens is the effect it has on the positional embeddings, effectively shifting them around the edit locations. If we were to use conventional positional embeddings, then inserting a token would lead to nearly all token representations being different between two revisions, and so little opportunity for reusability.
Our solution is training with a _sampled_ absolute positional embedding. During training, instead of using a contiguous range of positional indices (Vaswani et al., 2017), we sample per document a random ordered subset of the maximum number of positional embeddings. So instead of always assigning positions \(1,2,3,4\) to the first 4 tokens, we sample \(p_{1}<p_{2}<p_{3}<p_{4}\), and use those values to fetch their respective vectors from the positional embedding matrix.
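A sketch of the sampling step, with illustrative names; the maximum number of positions is an assumption.

```python
import torch

def sample_positions(seq_len, max_positions=4096, generator=None):
    # Draw a random *ordered* subset of position indices so the network can
    # only rely on the relative order of positions, not their absolute values.
    perm = torch.randperm(max_positions, generator=generator)[:seq_len]
    return perm.sort().values

pos = sample_positions(seq_len=4)       # e.g. tensor([ 57, 803, 1219, 3020])
# pos_emb = embedding_table[pos]        # fetch the sampled positional vectors
```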
In essence, we are training the absolute positional embedding to be relational, forcing the network to only leverage the order between positions and not their absolute value. The benefit of such a scheme is that we can now spread the initial positions, allowing for gaps in between tokens, so new tokens could be inserted with suitable positions. We can further use the attention mask matrix to ensure other tokens do not attend to the pad locations, thus ensuring that each prediction is self-consistent and independent of the other revision in the batch.
For the online setting, we may every so often need to reindex the positional values after the initial gaps are filled, akin to defragmentation. As long as this does not happen too often, the expected speedup will remain effective. To ensure this is the case, we can use a very large pool of positional embedding vectors, e.g., on the order of 100X more than the maximal sequence length. It is reasonable to train with such a large number of positional vectors due to the well-known coupon collector problem. See app. B for further discussion.
For the offline batch setting, we do not need as much overhead. We can simply align all revisions by introducing pad tokens for missing or misaligned tokens, and then assign a contiguous range of positions according to the longest document including padding.
## 4 Experiments
We demonstrate our method by training a decoder transformer model following the OPT (Zhang et al., 2022) family of models. Since training large language models from scratch is expensive, we opt instead to use knowledge distillation to adapt the base 125M-parameter OPT model to fit our architecture. We follow the procedure of Sanh et al. (2020). We keep all the parameters of the original model while adding the VQ layers in each self-attention module and changing softmax to GELU (Hendrycks and Gimpel, 2020).
We specifically use multi-head VQ, i.e., each vector is split into smaller chunks and each is matched against a codebook of 64 vectors, thus the effective codebook is of size \(64^{\text{heads}}\). For the pseudo-gradient, we use a variant of the Gumbel-Softmax estimator (Jang et al., 2017).
During training, we use the sampled positional embedding scheme of sec. 3.3, where we initialize the larger positional embedding matrix from the original by simply repeating each positional embedding to match the new size.
For distillation, we use the Pile (Gao et al., 2020), which constitutes a large subset of the corpus used to train OPT. Unlike Zhang et al. (2022), we used the raw corpus without extra filtering and deduplication steps. We trained for 100K iterations, using 16384 tokens per iteration with a max sequence length of 2048, 5K iterations of linear warm-up to a learning rate of 5e-4, followed by a cosine decay to 5e-5.
As an alternative to our incremental computing approach, we also distilled OPT to a smaller OPT model with 6 instead of 12 layers, following the same initialization as Sanh et al. (2020) and training using the same method as above (though Sanh et al. (2020) trained their distilled models for longer).
After the initial distillation phase, we fine-tuned our VQ-OPT model, the original OPT, and our DistilOPT model on sentiment analysis using the IMDB reviews dataset. It was chosen as it operates over relatively long documents, and as it could represent tasks relevant to a hypothetical writing assistant. We present the results in table 1, and we included the reported results of RoBERTa (Liu
et al., 2019; Ding et al., 2021) as a reference. As can be seen, both VQ-OPT and DistilOPT attain comparable results to the original OPT, demonstrating that our proposed approach can retain most of the capacity of the original OPT, with VQ-OPT retaining 95-97% of the accuracy of OPT-125M, depending on the number of VQ heads (i.e., codebook size).
Finally, we measure the runtime complexity for processing edits in a document. We examine the offline case of processing an entirely new revision of a document and the online case of atomic edits. We also examine the case of atomic edits happening in the first 5% of tokens for a more fair comparison to OPT-125M and DistilOPT which do not explicitly leverage the redundancy found in edited documents. For every case, we have collected 500 pairs of consecutive revisions from the edit history of Wikipedia articles. The articles were selected from the list of featured articles that also had a long history of revisions, and the specific revisions were randomly sampled from the set of revisions having at least 1536 tokens and no more than 2048. We chose revisions of specific lengths to ensure they will fit within all architectures as well as reduce the effect of variable sequence length on the complexity estimation. Revisions with purely metadata changes, as well as revisions that were later reverted (often due to malicious editing), were pruned from the histories.
We focus on the VQ-OPT with 2 heads, as it retains as much as 95% of the accuracy of OPT-125M while providing the most computational benefits. We measured the theoretical arithmetic operations required for the forward pass, assuming the previous revision was already processed. We present the ratio of arithmetic operations of the original OPT to VQ-OPT as the theoretical speedup under an ideal implementation. For the offline case, where we process two complete revisions, we observe a median theoretical speedup of 4.7X. A closer examination reveals that the theoretical speedup is inversely related to the fraction of modified tokens, i.e., changing fewer words results in more opportunity to reuse calculations, as presented in figure 3.
To simulate the online case where a user edits a document word-by-word, we pick a random modified location in a given pair of revisions and keep all the changes up to that point in the document and ignore all that came after it. Here we see that the median theoretical speedup is 12.1X, and that there is a correlation between the relative location of the edit and the theoretical speedup. Our results for the online case are presented in figure 4.
In both the online and offline cases, it is important to keep in mind that DistilOPT is limited to just a 2X improvement over OPT-125M.
| **Model** | **Accuracy** | **F1** |
| --- | --- | --- |
| RoBERTa | 95.3 | 95.0 |
| OPT-125M | 94.4 | 94.5 |
| DistilOPT | 92.4 | 92.3 |
| VQ-OPT (h=2) | 90.3 | 90.4 |
| VQ-OPT (h=4) | 91.6 | 91.6 |

Table 1: Accuracy and F1 on the IMDB document classification dataset. \(h\) denotes the number of heads in each VQ layer.
| **Model** | **Atomic** | **Entire Revision** | **First 5%** |
| --- | --- | --- | --- |
| OPT-125M | 1X | 1X | 1X |
| DistilOPT | 2X | 2X | 2X |
| VQ-OPT (h=2) | 12.1X | 4.7X | 4.8X |
| VQ-OPT (h=4) | 5.2X | 2.5X | 2.2X |

Table 2: Theoretical speedups for processing sequences of edits to documents, based on measuring the relative reduction in arithmetic operations on a subset of 500 random edits (either atomic or whole revisions) scraped from English Wikipedia.
## 5 Discussion
We proposed the first incrementally computable transformer model, attaining significant computational savings while performing comparably to its target pre-trained model.
Nevertheless, we wish to emphasize some of the limitations of these early findings. It is well known that as language models grow, their behavior can change in a non-linear way (Brown et al., 2020). Namely, it is well-documented that post-training quantization of very large language models (\(\geq\)6B) is more difficult due to the presence of activation outliers (Xiao et al., 2022). Testing our method at such scales could reveal similar challenges, though some of the proposed solutions could be applicable to our case. Furthermore, while the fundamental assumption of our approach is that minor input modifications should not typically result in massive modifications to the computational graph, it is trivial to imagine pathological edge cases where that assumption would not hold. It is thus crucial to further experiment on a wide variety of NLP tasks and assess the runtime complexity per task.
Figure 4: Relative reduction in arithmetic operations for online processing of atomic edits (logarithmic scale). Each point corresponds to the processing of a single atomic edit at a given normalized location. 12.1X is the median reduction in cost.
Figure 3: Relative reduction in arithmetic operations for offline processing of two complete revisions. Each point corresponds to 2 consecutive revisions in a Wikipedia edit history with respect to the fraction of modified tokens, i.e., edit distance, demonstrating complexity is proportional to edit distance. 4.7X is the median reduction.
Regardless of the above limitations, incremental computation sets a completely new path to efficient inference, which has been under-explored in the machine learning community. While this paper has focused on its application to natural language, the same method is highly relevant to many other domains, including different modalities (video, genes), as well as settings (e.g., the agent-environment interaction loop). We plan to explore these avenues in future works.
## Acknowledgments
The authors would like to thank Devin Chotzen-Hartzell for his contributions to preliminary experiments and initial code for scraping Wikipedia edit histories.
|
2308.03330 | Expediting Neural Network Verification via Network Reduction | A wide range of verification methods have been proposed to verify the safety
properties of deep neural networks ensuring that the networks function
correctly in critical applications. However, many well-known verification tools
still struggle with complicated network architectures and large network sizes.
In this work, we propose a network reduction technique as a pre-processing
method prior to verification. The proposed method reduces neural networks via
eliminating stable ReLU neurons, and transforming them into a sequential neural
network consisting of ReLU and Affine layers which can be handled by the most
verification tools. We instantiate the reduction technique on the
state-of-the-art complete and incomplete verification tools, including
alpha-beta-crown, VeriNet and PRIMA. Our experiments on a large set of
benchmarks indicate that the proposed technique can significantly reduce neural
networks and speed up existing verification tools. Furthermore, the experiment
results also show that network reduction can improve the availability of
existing verification tools on many networks by reducing them into sequential
neural networks. | Yuyi Zhong, Ruiwei Wang, Siau-Cheng Khoo | 2023-08-07T06:23:24Z | http://arxiv.org/abs/2308.03330v2 | # Expediting Neural Network Verification via Network Reduction
###### Abstract
A wide range of verification methods have been proposed to verify the safety properties of deep neural networks ensuring that the networks function correctly in critical applications. However, many well-known verification tools still struggle with complicated network architectures and large network sizes. In this work, we propose a network reduction technique as a pre-processing method prior to verification. The proposed method reduces neural networks via eliminating stable ReLU neurons, and transforming them into a sequential neural network consisting of ReLU and Affine layers which can be handled by the most verification tools. We instantiate the reduction technique on the state-of-the-art complete and incomplete verification tools, including \(\alpha\beta\)-crown, VeriNet and PRIMA. Our experiments on a large set of benchmarks indicate that the proposed technique can significantly reduce neural networks and speed up existing verification tools. Furthermore, the experiment results also show that network reduction can improve the availability of existing verification tools on many networks by reducing them into sequential neural networks.
Neural Network Verification, Network Reduction, Pre-processing
## I Introduction
Deep neural networks have been widely applied in real-world applications. At the same time, it is indispensable to guarantee the safety properties of neural networks in those critical scenarios. As neural networks are trained to be larger and deeper, researchers have deployed various techniques to speed up the verification process. For example, to over-approximate the whole network behavior [1, 2, 3, 4]; to deploy GPU implementation [5, 6, 7]; or to merge neurons in the same layer in an over-approximate manner so as to reduce the number of neurons [8, 9].
This work aims to further accelerate the verification process by "pre-processing" the tested neural network with ReLU activation functions and constructing a _reduced_ network with _fewer_ neurons and connections. We propose the _network reduction_ technique, which returns a reduced network (named _REDNet_) that captures the _exact_ behavior of the original network rather than over-approximating it. Therefore, verification over the reduced network is equivalent to the original verification problem yet requires less execution cost.
The REDNet can be instantiated on different verification techniques and is beneficial for complex verification processes such as branch-and-bound-based (BaB) or abstract-refinement-based methods. For example, branch-and-bound-based methods [6, 7, 10, 11] generate a set of sub-problems to verify the original problem. Once REDNet is deployed before the branch-and-bound phase, all the sub-problems are built on the reduced network, thus achieving an overall speed gain. Abstract-refinement-based methods, like [12, 13, 4], collect and encode the network constraints and refine individual neuron bounds via LP (linear program) or MILP (mixed-integer linear program) solving. This refinement process can be applied to a large set of neurons, and the number of constraints in the network encoding can be significantly reduced with the deployment of REDNet.
We have implemented our proposed network reduction technique in a prototypical system named REDNet (the reduced network), which is available at [https://github.com/REDNet-verifier/IDNN](https://github.com/REDNet-verifier/IDNN). The experiments show that the number of ReLU neurons in the reduced network can be up to 95 times smaller than in the original network in the best case, and 10.6 times smaller on average. We instantiated REDNet on the state-of-the-art complete verifiers \(\alpha,\beta\)-CROWN [14] and VeriNet [15] and the incomplete verifier PRIMA [12], and assessed the effectiveness of REDNet over a wide range of benchmarks. The results show that, with the deployment of REDNet, \(\alpha,\beta\)-CROWN verifies more properties given the same timeout and gains an average \(1.5\times\) speedup. Also, VeriNet with REDNet verifies 25.9% more properties than the original and can be \(1.6\times\) faster. REDNet also helps PRIMA gain a \(1.99\times\) speedup and verify 60.6% more images on average. Lastly, REDNet is constructed with a simple network architecture, making it amenable for existing tools to handle network architectures that they could not support previously.
We summarize the contributions of our work as follows:
* We define stable ReLU neurons and deploy a state-of-the-art bounding method to detect such stable neurons.
* We propose a _network reduction_ process to _remove_ stable neurons and generate a REDNet that contains a smaller number of neurons and connections, thereby boosting the efficiency of existing verification methods.
* We prove that the generated REDNet preserves the input-output equivalence of the original network.
* We instantiate the REDNet on several state-of-the-art
verification methods. The experiment results indicate that the same verification processes execute faster on the REDNet than on the original network; they can also answer tougher queries whose verification timed out on the original network.
* Since REDNet is constructed with a simple network architecture, it can assist various tools in handling networks that they previously failed to support.
* Lastly, we export REDNet as a fully-connected network in ONNX, which is an open format that is widely accepted by most verification tools.
## II Preliminaries
A _feedforward neural network_ is a function \(\mathcal{F}\) defined as a directed acyclic graph \((\mathcal{V},\mathcal{E})\) where every node \(L_{i}\) in \(\mathcal{V}\) represents a layer of \(|L_{i}|\) neurons and each arc \((L_{i},L_{j})\) in \(\mathcal{E}\) denotes that the outputs of the neurons in \(L_{i}\) are inputs of the neurons in \(L_{j}\). For each layer \(L_{i}\in\mathcal{V}\), \(in(L_{i})=\{L_{j}|(L_{j},L_{i})\in\mathcal{E}\}\) is the set of _preceding layers_ of \(L_{i}\) and \(out(L_{i})=\{L_{j}|(L_{i},L_{j})\in\mathcal{E}\}\) denotes the set of _succeeding layers_ of \(L_{i}\). If the set \(in(L_{i})\) of preceding layers is non-empty, then the layer \(L_{i}\) represents a computation operator, e.g. the ReLU and GEMM operators; otherwise \(L_{i}\) is an input layer of the neural network. In addition, \(L_{i}\) is an _output layer_ of the neural network if \(out(L_{i})\) is empty.
In this paper, we consider ReLU-based neural networks with one input layer and one output layer. Note that multiple input layers (and output layers) can be concatenated into one input layer (and one output layer). A neural network is then considered a _sequential neural network_ if \(|in(L_{i})|=1\) and \(|out(L_{i})|=1\) for all layers \(L_{i}\) in the neural network except the input and output layers.
We use a vector \(\vec{a}\) to denote the input of a neural network \(\mathcal{F}\), and the _input space_\(I\) of \(\mathcal{F}\) includes all possible inputs of \(\mathcal{F}\). For any input \(\vec{a}\in I\), \(L_{i}(\vec{a})\) is a vector denoting the outputs of neurons in a layer \(L_{i}\) given this input \(\vec{a}\). The output \(\mathcal{F}(\vec{a})\) of the neural network is the output of its output layer.
The _neural network verification problem_ is to verify that for all possible inputs \(\vec{a}\) from a given input space \(I\), the outputs \(\mathcal{F}(\vec{a})\) of a neural network \(\mathcal{F}\) must satisfy a specific condition. For example, the robustness verification problem is to verify that for all inputs within a specified input space, the neural network's output must satisfy that the value of the neuron corresponding to the ground truth label is greater than the values of the other neurons in the output layer.
## III Overview
In this section, we present a simple network \(\mathcal{F}\) and illustrate how to construct the reduced network via the deletion of stable neurons, where stable neurons refer to those ReLU neurons whose inputs are completely non-negative or non-positive.
The example is a fully-connected network with ReLU activation function \(y=\text{max}(0,x)\) as shown in Figure 1, where the connections are recorded in the linear constraints near each affine neuron. We set the input space \(I\) to be \([-1,1]\times[-1,1]\), and apply one of the state-of-the-art bound propagators CROWN [16] to propagate the input intervals to other neurons of the network. The deployment of CROWN returns us the concrete bounds for intermediate neurons, which are displayed next to the corresponding neurons in Figure 1.
From this initial computation, we observe that four ReLU neurons are _stable_: \(x_{8}\) is stably deactivated as its input \(x_{3}\leq 0\) yields \(x_{8}=0\); \(x_{9},x_{10},x_{11}\) are stably activated as their inputs are always greater or equal to zero, yielding \(x_{9}=x_{4},x_{10}=x_{5},x_{11}=x_{6}\). Given the observation, we could _remove_ those stable ReLU neurons together with their input neurons: we could directly eliminate neurons \(x_{3},x_{8}\); and delete \(x_{4},x_{5},x_{6},x_{9},x_{10},x_{11}\) while _connecting_\(y_{1},y_{2}\) directly to the preceding neurons of \(x_{4},x_{5},x_{6}\), which are \(x_{1},x_{2}\).
After removal, the connections are updated as in Figure 2. The new affine constraint of \(y_{1}\) is computed as follows:
\[y_{1} = x_{8}-x_{9}+x_{10}+x_{11}-x_{12}\] \[= 0-x_{4}+x_{5}+x_{6}-x_{12}\] \[= x_{1}-x_{2}+1-x_{12}\]
Similarly, the computation of \(y_{2}\) is updated as follows:

\[y_{2} = x_{8}+x_{9}+x_{10}+x_{11}+x_{12}\] \[= 0+x_{4}+x_{5}+x_{6}+x_{12}\] \[= 3x_{1}+x_{2}+7+x_{12}\]

Figure 1: The example network with initial concrete bounds

Figure 2: The network connections after neuron removal
The above computation only involves equality replacement; therefore, the two networks in Figure 2 and Figure 1 function _identically_ given the specified input space \(I\). However, the network architecture has been modified, and the output neurons are now defined over their preceding layer together with the input layer. To preserve the network architecture without introducing new connections between layers, we _merge_ the stably activated neurons into _a smaller set_ of neurons instead of directly deleting them.
The final reduced network is shown below in Figure 3, where we transform the connection between \(y_{1},y_{2}\) and \(x_{1},x_{2}\) in Figure 2 into two merged neurons \(m_{1}=x_{1}-x_{2}+1;m_{2}=3x_{1}+x_{2}+7\). Since \(m_{2}\) is stably activated given the input space, we have \(m_{4}=m_{2}\), thus \(y_{2}=m_{4}+x_{12}\), which equals the definition of \(y_{2}\) in Figure 2. To further enforce \(m_{1}\) to remain stably activated, we increase the bias of \(m_{1}\) to 2, which leads to \(m_{3}=m_{1}\), thus \(y_{1}=m_{3}-x_{12}-1\). Therefore, the final reduced network in Figure 3 remains a fully-connected network, but the stably deactivated neuron \(x_{3}\) has been removed, and the original set of stably activated neurons \(x_{4},x_{5},x_{6}\) is merged into a smaller set of stably activated neurons \(m_{1},m_{2}\). Note that the connection between \(y_{1},y_{2}\) and \(m_{1},m_{2}\) is actually an identity matrix:
\[\begin{bmatrix}y_{1}\\ y_{2}\end{bmatrix}=\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\cdot\begin{bmatrix}m_{1}\\ m_{2}\end{bmatrix}+\begin{bmatrix}-1\\ 1\end{bmatrix}\cdot x_{12}+\begin{bmatrix}-1\\ 0\end{bmatrix} \tag{3}\]
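The equality replacement above can be sanity-checked numerically. The following minimal NumPy sketch (treating \(x_{12}\) as a free value that stands in for the remaining unstable ReLU output) compares the closed forms of \(y_{1},y_{2}\) derived from Figure 2 with the reduced formulation of Figure 3 over random points of \(I\):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    x1, x2 = rng.uniform(-1.0, 1.0, size=2)  # input space I = [-1,1] x [-1,1]
    x12 = rng.uniform(-5.0, 5.0)             # free value for the unstable ReLU output

    # Closed forms obtained by equality replacement (Figure 2):
    y1_ref = x1 - x2 + 1 - x12
    y2_ref = 3 * x1 + x2 + 7 + x12

    # Reduced network (Figure 3): the bias of m1 is raised to 2 so that m1 is
    # non-negative on I; the surplus +1 is cancelled by the -1 output offset.
    m1 = x1 - x2 + 2
    m2 = 3 * x1 + x2 + 7
    y1_red = max(m1, 0.0) - x12 - 1
    y2_red = max(m2, 0.0) + x12

    assert np.isclose(y1_ref, y1_red) and np.isclose(y2_ref, y2_red)
```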
Therefore, the number of merged neurons depends on the number of neurons in the succeeding layer (e.g., the output layer in this example). Generally speaking, the number of output neurons is significantly smaller than the number of intermediate neurons. Therefore we conduct the reduction in a backward manner from the last hidden layer to the first hidden layer, and the experiments in section VI show that a major proportion of the neurons can be deleted, which boosts verification efficiency and therefore improves precision within a given execution timeout.
## IV Stable ReLU Neurons Reduction
### _Stable ReLU neurons_
Given a ReLU layer \(X\) in a neural network and an input \(\vec{a}\), the layer \(X\) has exactly one preceding layer \(Y\) (i.e. \(in(X)=\{Y\}\)) and the _pre-activation_ of the \(k^{th}\) ReLU neuron \(x\) in \(X\) is the output \(y(\vec{a})\) of the \(k^{th}\) neuron \(y\) in \(Y\). For simplicity, we use \(\hat{x}(\vec{a})=y(\vec{a})\) to denote the pre-activation of \(x\).
**Definition 1**.: A ReLU neuron \(x\) in a neural network is _deactivated_ w.r.t. (with respect to) an input space \(I\) if \(\hat{x}(\vec{a})\leq 0\) for all inputs \(\vec{a}\in I\), and \(x\) is _activated_ w.r.t. the input space \(I\) if \(\hat{x}(\vec{a})\geq 0\) for all \(\vec{a}\in I\). Then \(x\) is _stable_ w.r.t. \(I\) if \(x\) is deactivated or activated w.r.t. \(I\).
It is NP-complete to check whether the output of a neural network is greater than or equal to 0 [17]. Moreover, we can append an additional ReLU layer behind the output layer of the neural network so that the output of the original network becomes the pre-activation of ReLU neurons; therefore, it is straightforward that checking the stability of ReLU neurons w.r.t. an input space is NP-hard.
**Theorem 1**.: It is NP-hard to check whether a ReLU neuron is stable w.r.t. an input space \(I\).
Many methods have been proposed to compute the lower and upper bounds of the pre-activation of ReLU neurons. Usually, these methods use different constraints to tighten the pre-activation bounds, such as linear relaxations, split constraints, global cuts and output constraints. For detecting the stability of ReLU neurons w.r.t. an input space, we only consider the methods which employ the linear relaxations of ReLU neurons alone, as the other constraints may filter out inputs from the input space; e.g. a ReLU neuron is forced to be stably deactivated regardless of the input space if the propagator uses a split constraint to enforce that the pre-activation value is always non-positive. We enumerate various bound propagation methods in Table I. The methods using other constraints are not suitable for detecting the stability of intermediate neurons; for example, \(\beta\)-CROWN employs split constraints.
In our experiments, we use the GPU-based bound propagation method CROWN to detect stable ReLU neurons, which leads to a reasonably significant reduction with a small time cost, as displayed in Table III.
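For illustration, the sketch below classifies ReLU neurons as stable from concrete pre-activation bounds. To keep it self-contained it uses plain interval propagation instead of CROWN, which would produce tighter bounds, but the stable/unstable classification logic is the same:

```python
import numpy as np

def interval_affine(W, b, lb, ub):
    """Propagate an input box through x -> Wx + b with interval arithmetic."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b

def stable_relu_masks(layers, lb, ub):
    """Mark, for each ReLU layer, the stably deactivated/activated neurons.

    `layers` is a list of (W, b) pairs of a sequential network in which a
    ReLU follows every affine layer except the last one."""
    masks = []
    for W, b in layers[:-1]:
        lb, ub = interval_affine(W, b, lb, ub)          # pre-activation bounds
        masks.append({"deactivated": ub <= 0, "activated": lb >= 0})
        lb, ub = np.maximum(lb, 0), np.maximum(ub, 0)   # apply the ReLU
    return masks
```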
### _ReLU layer reduction_
As illustrated in the example given in section III, after computation of concrete neuron bounds, we detect and handle those stable ReLU neurons in the following ways:
Table I: Bound propagation methods

| Methods for Stability Detection | Other Methods |
| --- | --- |
| Interval [18], DeepZ/Symbolic [2, 19, 20] | \(\beta\)-CROWN [21] |
| CROWN [16], FCrown [22], \(\alpha\)-CROWN [23] | GCP-CROWN [24] |
| DeepPoly [25], KPoly [26], PRIMA [12] | ARENA [13] |
| RefineZono [27], OptC2V [28] | DeepSRGR [4] |
| SDP-Relaxation [29, 30, 31] | |
| LP/Lagrangian Dual [32, 33, 34] | |
Figure 3: The final network after reduction (REDNet), where a dashed connection means the coefficient equals 0
* For a stably deactivated ReLU neuron whose input value is always non-positive, it is always evaluated as \(0\) and thereby will be directly deleted from the network as it has no effect on the actual computation;
* For stably activated ReLU neurons (the input values of which are always non-negative) in the same layer, we reconstruct this set of stably activated neurons into _a smaller_ set of stably activated neurons, as we reduced \(x_{4},x_{5},x_{6}\) into \(m_{1},m_{2}\) in section III.
**Reconstruction of Stably Activated Neurons.** Figure 2 illustrates that the deletion of stably activated neurons requires creating new connections between the preceding and succeeding neurons of the deleted neurons. We follow the convention that every intermediate ReLU layer directly connects to only one preceding layer and one succeeding layer, both of which conduct linear computation (we defer the details of how to simplify a complicated network with multiple preceding/succeeding connections into such a simpler architecture to section V).
An example of a ReLU layer pending reduction is shown in Figure 4, where \(M_{1}\) indicates the linear connection between layer \(V\) and \(X\); \(M_{2}\) indicates the connection between layer \(Y\) and \(Z\). Biases are recorded in \(B_{1}\) and \(B_{2}\) respectively. Suppose that the uppermost \(k\) neurons in layer \(Y\) are stably activated, and we delete them together with their inputs in layer \(X\) from Figure 4. After deletion, we need to generate a new connection between layers \(Z\) and \(V\). As stably activated ReLU neurons behave as identity functions, the new connection matrix between layer \(Z\) and \(V\) can be computed from the existing connection matrices \(M_{1}\) (of size \(m\times q\)) and \(M_{2}\) (of size \(n\times m\)). Let \(M[0:k,:]\) denote the slice of a matrix containing only its first \(k\) rows, and \(M[:,0:k]\) the slice containing only its leftmost \(k\) columns; we define a matrix \(M^{\prime}_{VZ}\) of size \(n\times q\) that is computed as:
\[M^{\prime}_{VZ}=M_{2}[:,0:k]\cdot M_{1}[0:k,:] \tag{4}\]
We consider this new connection \(M^{\prime}_{VZ}\) with size \(n\times q\) as:
\[M^{\prime}_{VZ}=M_{I}\cdot M^{\prime}_{VX}, \tag{5}\]
where \(M^{\prime}_{VX}\) _equals \(M^{\prime}_{VZ}\)_ and functions as the affine connection between layer \(V\) and the _newly constructed_ neurons in layer \(X\); \(M_{I}\) denotes an \(n\times n\) identity matrix and is the affine connection between layer \(Z\) and the _newly constructed_ neurons in layer \(Y\), as shown in Figure 5. Here, the weight matrix between layers \(V\) and \(Z\) is actually realized as \(M^{\prime}_{VZ}=M_{I}\cdot ReLU(\cdot)\cdot M^{\prime}_{VX}\). For Equation 5 to hold, we need to make sure that the ReLU function between \(M^{\prime}_{VX}\) and \(M_{I}\) acts as an identity, which means the pre-activations of the newly constructed neurons must be non-negative, i.e., these neurons are stably activated. So we compute the concrete bounds of these pre-activations and add an additional bias \(B\) to enforce non-negativity, as we did for neuron \(m_{1}\) in section III. This additional bias is canceled out at layer \(Z\) with a \(-B\) offset.
Note that we conduct this reduction in a backward manner from the last hidden layer (whose succeeding layer is the output layer that usually consists of a very small number of neurons, e.g. 10) to the first hidden layer. Therefore, upon reduction of layers \(X\) and \(Y\), layer \(Z\) has already been reduced and contains a small number \(n\) of neurons. In the end, the \(k\) stably activated neurons will be reduced into \(n\) stably activated neurons and we obtain a smaller-sized affine layer with \(m-k+n\) neurons, where \(k\) is usually much bigger than \(n\). Therefore, we are able to observe a significant size reduction as shown in Table III.
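A minimal NumPy sketch of this single-layer reduction is given below; the function name and the interval-based choice of the enforcing bias \(B\) are ours, and `lb_V`, `ub_V` are assumed to be concrete bounds on the output of layer \(V\) (obtained, e.g., with CROWN):

```python
import numpy as np

def reduce_relu_layer(M1, B1, M2, B2, act_idx, deact_idx, lb_V, ub_V):
    """Reduce one Linear-ReLU-Linear block (Figures 4 and 5).

    act_idx / deact_idx index the stably activated / deactivated rows of M1;
    lb_V, ub_V bound the output of the preceding layer V."""
    keep = [i for i in range(M1.shape[0]) if i not in set(act_idx) | set(deact_idx)]

    # Merged connection bypassing the stably activated neurons, Eq. (4):
    M_VX = M2[:, act_idx] @ M1[act_idx, :]
    B_prime = M2[:, act_idx] @ B1[act_idx]

    # Bias B that keeps the merged pre-activations non-negative on [lb_V, ub_V]:
    lb = np.maximum(M_VX, 0) @ lb_V + np.minimum(M_VX, 0) @ ub_V
    B = np.maximum(-lb, 0)

    # New layer X': the unstable rows stacked on top of the merged rows.
    M1_new = np.vstack([M1[keep, :], M_VX])
    B1_new = np.concatenate([B1[keep], B])

    # New layer Z': identity connection to the merged neurons, offsets cancelled.
    M2_new = np.hstack([M2[:, keep], np.eye(M2.shape[0])])
    B2_new = B2 + B_prime - B
    return M1_new, B1_new, M2_new, B2_new
```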
**Lemma 1**.: The reduction process preserves the input-output equivalence of the original network. That is, for any input \(\vec{a}\in I\), \(\mathcal{F}(\vec{a})\equiv\mathcal{F}^{\prime}(\vec{a})\) where \(\mathcal{F}\) is the original network and \(\mathcal{F}^{\prime}\) is the reduced one.
Proof.: The reduction process operates on ReLU neurons that are stable w.r.t. the input space \(I\). Specifically, (i) stably _deactivated_ ReLU neurons are always evaluated as \(0\) and can be deleted directly as they have no effect on the subsequent computation; (ii) stably _activated_ ReLU neurons are reconstructed in a way that their functionality is preserved before (Figure 4) and after (Figure 5) reconstruction.
For any \(\vec{a}\in I\), \(V(\vec{a})\) is the output of layer \(V\) and the output of layer \(Z\) is computed as \(Z(\vec{a})=M_{2}\cdot ReLU(M_{1}\cdot V(\vec{a})+B_{1})+B_{2}\) in Figure 4. We decompose \(Z(\vec{a})-B_{2}\) as
\[M_{2}[:,0:k]\cdot ReLU(M_{1}[0:k,:]\cdot V(\vec{a})+B_{1}[0:k]) \tag{6}\] \[+M_{2}[:,k:m]\cdot ReLU(M_{1}[k:m,:]\cdot V(\vec{a})+B_{1}[k:m]) \tag{7}\]
Without loss of generality, we assume the uppermost \(k\) neurons in layer \(Y\) are stably activated. Formula 6 thus simplifies to \(M_{2}[:,0:k]\cdot M_{1}[0:k,:]\cdot V(\vec{a})+M_{2}[:,0:k]\cdot B_{1}[0:k]=M_ {I}\cdot M^{\prime}_{VX}\cdot V(\vec{a})+B^{\prime}\), where \(M^{\prime}_{VX}=M_{2}[:,0:k]\cdot M_{1}[0:k,:]\), \(B^{\prime}=M_{2}[:,0:k]\cdot B_{1}[0:k]\) and \(M_{I}\) is an identity matrix.
Figure 4: Layer \(X\) and \(Y\) with pending reduction, together with its preceding layer \(V\) and succeeding affine layer \(Z\).
Figure 5: The block after reduction of stably activated neurons. \(M_{1}[k:m,:]\) contains the last \(m-k\) rows of \(M_{1}\), while \(M_{2}[:,k:m]\) takes the rightmost \(m-k\) columns of \(M_{2}\). \(B^{\prime}\) is computed as \(M_{2}[:,0:k]\cdot B_{1}[0:k]\). The _newly constructed_ neurons are dashed and colored in blue.
Furthermore, we compute an additional bias \(B\) to ensure that \(M^{\prime}_{VX}\cdot V(\vec{a})+B\geq 0\) for all \(\vec{a}\in I\). Thus Formula 6 finally simplifies to:
\[M_{I}\cdot ReLU(M^{\prime}_{VX}\cdot V(\vec{a})+B)-B+B^{\prime} \tag{8}\]
Based on Formula 8, we obtain \(Z(\vec{a})=M_{I}\cdot ReLU(M^{\prime}_{VX}\cdot V(\vec{a})+B)+M_{2}[:,k:m]\cdot ReLU (M_{1}[k:m,:]\cdot V(\vec{a})+B_{1}[k:m])+B^{\prime}+B_{2}-B\), which equals the computation conducted in Figure 5. Thus, the network preserves input-output equivalence after reduction.
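Although Lemma 1 establishes equivalence formally, an implementation of the reduction is easy to sanity-check empirically by sampling the input box; the following sketch (ours) does exactly that for sequential [affine, ReLU, ..., affine] networks:

```python
import numpy as np

def forward(layers, x):
    """Evaluate a sequential network given as a list of (W, b) affine layers,
    with a ReLU after every layer except the last."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0)
    return x

def check_equivalence(orig, reduced, lb, ub, n_samples=10_000, tol=1e-6):
    """Compare both networks on random points of I = [lb, ub]; a passing check
    does not prove equivalence, it only helps catch implementation bugs."""
    rng = np.random.default_rng(0)
    for _ in range(n_samples):
        x = rng.uniform(lb, ub)
        if not np.allclose(forward(orig, x), forward(reduced, x), atol=tol):
            return False
    return True
```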
## V Neural Network Simplification
In section IV, we described how reduction is conducted on a sequential neural network, where each intermediate layer connects to only one preceding Linear layer and one succeeding Linear layer. In this paper, a Linear layer refers to a layer whose output is computed via linear computation. Nonetheless, many complicated network architectures (e.g., residual networks) are not sequential. In order to handle a wider range of neural networks, we propose a neural network simplification process that transforms complex network architectures into simplified sequential neural networks, and then conduct reduction on the simplified network.
We now introduce how to transform a complex ReLU-based neural network into a sequential neural network consisting of Linear and ReLU layers so that stable ReLU neurons can be reduced. Note that we only consider the neural network layers that can be encoded as Linear and ReLU layers; further discussion about this can be found in section VII.
The network simplification process involves two main steps (shown in Figure 6): (i) encode the various layers as SumLinear blocks and ReLU layers (we defer the definition of SumLinear block to subsection V-A); (ii) transform the SumLinear blocks into Linear layers.
### _Encode various layers into SumLinear blocks_
A _SumLinear block_ is a combination of a set of Linear layers and a Sum layer such that the Linear layers are preceding layers of the Sum layer, where the output of the Sum layer is the element-wise sum of its inputs. The output of the SumLinear block is equal to the element-wise sum of the outputs of the Linear layers, and the preceding layers of the SumLinear block include the preceding layers of all the Linear layers. Any Linear layer can be encoded as a SumLinear block by adding a Sum layer behind the Linear layer. A main difference between them is that the SumLinear block can have more than 1 preceding layer.
Many neural network layers can be directly transformed into SumLinear blocks, such as the Conv, Gemm, Add, Sub, Concat, Reshape, Split, Squeeze, Unsqueeze, Flatten, MatMul, and BatchNormalization layers used in ONNX models.1 Note that a Linear layer has only one preceding layer, while the Add and Concat layers can have more than one preceding layer; hence, they cannot be directly encoded as a Linear layer (this motivates the introduction of SumLinear blocks).
Footnote 1: In general, Maxpooling can be encoded as Conv and ReLU layers with existing tools such as DNNV [35]. Note that \(max(x,y)=ReLU(x-y)+y\).
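The max identity used in this MaxPool encoding can be checked numerically; the following small NumPy sketch is ours:

```python
import numpy as np

x = np.random.default_rng(0).normal(size=1000)
y = np.random.default_rng(1).normal(size=1000)
relu = lambda t: np.maximum(t, 0)
assert np.allclose(np.maximum(x, y), relu(x - y) + y)  # max(x,y) = ReLU(x-y) + y
```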
Figure 7 shows a SumLinear block encoding a Concat layer with 2 predecessors \(X\) and \(Y\). The biases of the two Linear layers are zero and the concatenation of their weights is an identity matrix. Thus, each neuron of the layers \(X,Y\) is mapped to a neuron of the Sum layer. Assume \(|X|=|Y|=1\). Their weights are represented by the matrices \(\begin{bmatrix}1\\ 0\end{bmatrix}\) and \(\begin{bmatrix}0\\ 1\end{bmatrix}\), which can be concatenated into an identity matrix \(\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\).
In the same spirit, the Add layer could also be encoded as a SumLinear block as shown in Figure 8. Assume that \(|X|\) and \(|Y|\) are each equal to 2, the weights of the two Linear layers are represented by identity matrices \(\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\).
Figure 8: Encode an Add layer into a SumLinear block.
Figure 6: The procedure of neural network reduction. The encoding session is described in subsection V-A; the transformation is discussed in subsection V-B; and the reduction part is explained in section IV.
Figure 7: Encode a Concat layer into a SumLinear block.
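A sketch of how the Linear-layer weights of such SumLinear blocks can be constructed is given below; the function names are ours:

```python
import numpy as np

def concat_as_sumlinear(sizes):
    """Weights of the Linear layers encoding Concat(X_1, ..., X_p) (Figure 7);
    stacked column-wise they form an identity matrix, so summing the Linear
    outputs reproduces the concatenation. All biases are zero."""
    total, offset, weights = sum(sizes), 0, []
    for s in sizes:
        W = np.zeros((total, s))
        W[offset:offset + s, :] = np.eye(s)
        weights.append(W)
        offset += s
    return weights

def add_as_sumlinear(size, n_inputs=2):
    """Weights encoding an element-wise Add layer (Figure 8): one identity
    matrix per input, summed by the Sum layer."""
    return [np.eye(size) for _ in range(n_inputs)]
```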
### _Transform SumLinear blocks into Linear Layers_
In this subsection, we show how to encode SumLinear blocks as Linear layers. Firstly, we need to transform SumLinear blocks into normalized SumLinear blocks: a SumLinear block \(L\) is _normalized_ if it does not have any Linear layer whose preceding layer is a SumLinear block (i.e. \(in(L)\) contains no SumLinear blocks), and all of the Linear layers in \(L\) have different preceding layers. For example, the SumLinear block given in Figure 7 is normalized if its preceding layers \(X\) and \(Y\) are not SumLinear blocks.
**SumLinear Block Normalization.** If a SumLinear block \(L^{\prime}\) includes a Linear layer \(L^{\prime}_{j}\) with a weight \(M^{\prime}_{j}\) and a bias \(B^{\prime}_{j}\) such that the preceding layer of \(L^{\prime}_{j}\) is another SumLinear block \(L^{\prime\prime}\) including \(k\) Linear layers \(L^{\prime\prime}_{1},\cdots L^{\prime\prime}_{k}\) with weights \(M^{\prime\prime}_{1},\cdots,M^{\prime\prime}_{k}\) and biases \(B^{\prime\prime}_{1},\cdots,B^{\prime\prime}_{k}\), then we can normalize \(L^{\prime}\) by replacing \(L^{\prime}_{j}\) with \(k\) new Linear layers \(L_{1},\cdots L_{k}\) where for any \(1\leq i\leq k\), the layer \(L_{i}\) has the same preceding layer as that of \(L^{\prime\prime}_{i}\), and the weight and bias of \(L_{i}\) are computed as:
\[M_{i} =M^{\prime}_{j}\cdot M^{\prime\prime}_{i} \tag{9}\] \[B_{i} =\begin{cases}B^{\prime}_{j}+M^{\prime}_{j}\cdot B^{\prime\prime }_{i}&\text{if }i=1\\ M^{\prime}_{j}\cdot B^{\prime\prime}_{i}&\text{otherwise}\end{cases} \tag{10}\]
During the normalization, if the succeeding layers of the block \(L^{\prime\prime}\) become empty, then \(L^{\prime\prime}\) is directly removed.
In addition, if two Linear layers \(L_{a},L_{b}\) in a SumLinear block have the same preceding layer, then in normalization, we can replace them by one new Linear layer \(L_{c}\) such that \(L_{c}\) has the same preceding layer as them and the weight (and bias) of \(L_{c}\) is the sum of the weights (and biases) of \(L_{a},L_{b}\).
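Equations (9) and (10), together with the merging of Linear layers that share a predecessor, can be sketched as follows (the function names are ours):

```python
import numpy as np

def pull_through(Mj, Bj, inner):
    """Normalize one Linear layer (Mj, Bj) whose predecessor is a SumLinear
    block with Linear layers `inner` = [(M''_1, B''_1), ...]; Eqs. (9)-(10)."""
    out = []
    for i, (Mi, Bi) in enumerate(inner):
        W = Mj @ Mi
        b = Mj @ Bi + (Bj if i == 0 else 0)  # Bj is added exactly once
        out.append((W, b))
    return out

def merge_same_predecessor(La, Lb):
    """Two Linear layers with the same preceding layer collapse into one."""
    (Wa, ba), (Wb, bb) = La, Lb
    return Wa + Wb, ba + bb
```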
**Lemma 2**.: SumLinear block normalization does not change the functionality of a neural network.
Proof.: Let \(\vec{a}^{\prime\prime}_{i}\) be any input of a Linear layer \(L^{\prime\prime}_{i}\) in the block \(L^{\prime\prime}\) where \(L^{\prime\prime}\) is the preceding layer of \(L^{\prime}_{j}\). Thus, the input of \(L^{\prime}_{j}\) (called \(\vec{a}^{\prime}_{j}\)) equals \(\sum_{i=1}^{k}(M^{\prime\prime}_{i}\cdot\vec{a}^{\prime\prime}_{i}+B^{\prime\prime}_{i})\). Then the output of \(L^{\prime}_{j}\) is \(B^{\prime}_{j}+\sum_{i=1}^{k}(M^{\prime}_{j}\cdot M^{\prime\prime}_{i}\cdot\vec{a}^{\prime\prime}_{i}+M^{\prime}_{j}\cdot B^{\prime\prime}_{i})\), which is equal to the sum of the outputs of the layers \(L_{1},\cdots L_{k}\). Therefore, replacing \(L^{\prime}_{j}\) with \(L_{1},\cdots L_{k}\) does not change the output of the SumLinear block \(L^{\prime}\).
If the succeeding layers of \(L^{\prime\prime}\) become empty, then removing \(L^{\prime\prime}\) does not affect the outputs of other layers and the network.
In addition, the sum of the outputs of two Linear layers \(L_{a},L_{b}\) in a SumLinear block with the same preceding layer is equal to the output of the new layer \(L_{c}\); thus, the output of the block does not change after replacing \(L_{a},L_{b}\) with \(L_{c}\).
So SumLinear block normalization does not change the functionality of a neural network.
We next show how to encode normalized SumLinear blocks as Linear layers.
**Linear Layer Construction**. First, we say that a ReLU layer \(L_{i}\) is _blocked_ by a SumLinear block \(L\) if \(L\) is the only succeeding layer of \(L_{i}\). Then, we use \(\mathcal{R}_{L}\) to denote the set of ReLU layers blocked by the SumLinear block \(L\). Let \(\mathcal{P}_{L}\) include other preceding layers of \(L\) which are not in \(\mathcal{R}_{L}\). If \(L\) is normalized, then \(L\) and the set of ReLU layers in \(\mathcal{R}_{L}\) can be replaced by a Linear layer \(L^{l}\), a ReLU layer \(L^{r}\) and a new SumLinear block \(L^{s}\) such that
* the weight \(M^{l}\) (the bias \(B^{l}\)) of the linear layer \(L^{l}\) is a concatenation (the sum) of the weights (the bias) of the Linear layers in \(L\) and the preceding layer of \(L^{l}\) is \(L^{r}\) and \(L^{l}\) has the same succeeding layers as \(L\);
* the SumLinear block \(L^{s}\) encodes a concatenation of layers in \(\mathcal{P}_{L}\) and the preceding layers of layers in \(\mathcal{R}_{L}\);
* \(L^{s}\) is the preceding layer of \(L^{r}\).
Additionally, in order to make sure that the outputs of the layers in \(\mathcal{P}_{L}\) can pass through the ReLU layer \(L^{r}\) unchanged, the neurons in \(L^{r}\) which connect to the layers in \(\mathcal{P}_{L}\) are forced to be activated by adding an additional bias \(B\) to a Linear layer in \(L^{s}\) and subtracting \(M^{l}\cdot B\) from the bias of \(L^{l}\).
**Lemma 3**.: Linear layer construction does not change the functionality of a neural network.
Proof.: The pre-activation of \(L^{r}\) is the output of \(L^{s}\), which equals \(B\) plus the concatenation of the outputs of layers in \(\mathcal{P}_{L}\) and the pre-activations of layers in \(\mathcal{R}_{L}\). This ensures that the output of \(L^{r}\) equals \(B\) plus the concatenation (call it \(\vec{a}\)) of the outputs of layers in \(\mathcal{P}_{L}\) and \(\mathcal{R}_{L}\). Next, the output of \(L^{l}\) equals \(M^{l}\cdot(B+\vec{a})+B^{l}-M^{l}\cdot B=M^{l}\cdot\vec{a}+B^{l}\), which is equal to the output of the original layer \(L\).
In addition, \(L\) is the only succeeding layer of layers in \(\mathcal{R}_{L}\), so replacing \(\mathcal{R}_{L}\), \(L\) with the layers \(L^{s}\), \(L^{r}\), \(L^{l}\) does not change the functionality of the neural network.
**Network simplification.** We use algorithm 1 to transform a ReLU-based neural network \((\mathcal{V},\mathcal{E})\) into a sequential neural network consisting of Linear and ReLU layers. At line 1, the function \(Initialization(\mathcal{V},\mathcal{E})\) encodes all layers in \(\mathcal{V}\) as SumLinear blocks and ReLU layers. Between line 3 and line 8, the algorithm repeatedly selects the last SumLinear block \(L\) in \(\mathcal{V}\) and reconstructs \(L\) into Linear layers, where a SumLinear block is _the last_ block if there is no path from it to another SumLinear block. \((\mathcal{V},\mathcal{E})\) has only one output layer, and the Linear and ReLU layers have only one preceding layer; thus, there is exactly one last SumLinear block.
```
Input: A neural network (V, E)
Output: A sequential neural network

1  V, E ← Initialization(V, E);
2  while (V, E) has SumLinear blocks do
3      Let L be the last SumLinear block in (V, E);
4      V, E, L ← Normalization(V, E, L);
5      if |in(L)| > 1 then
6          V, E ← LinearLayerConstruction(V, E, L);
7      else
8          V, E ← Linearization(V, E, L);
9  return (V, E);
```
**Algorithm 1** Neural Network Simplification
At line 4, the function \(Normalization(\mathcal{V},\mathcal{E},L)\) is used to normalize the last SumLinear block \(L\). If the normalized \(L\) has more than one preceding layer (i.e. \(|in(L)|>1\)), then the function \(LinearLayerConstruction(\mathcal{V},\mathcal{E},L)\) is used to replace \(L\) with the layers \(L^{l},L^{r},L^{s}\) introduced in the Linear layer construction (at line 6); otherwise the function \(Linearization(\mathcal{V},\mathcal{E},L)\) is used to directly replace \(L\) with the only Linear layer included in \(L\) (at line 8).
In the rest of this subsection, we show that line 6 in algorithm 1 can be visited at most \(|\mathcal{V}|\) times; thus, the algorithm terminates and generates an equivalent sequential neural network consisting of Linear and ReLU layers.
**Lemma 4**.: Assume \((\mathcal{V},\mathcal{E})\) is a neural network consisting of Linear, ReLU layers and SumLinear blocks and \(L\) is the last SumLinear block in \(\mathcal{V}\). If \(|in(L)|>1\) and \(in(L)\) does not have SumLinear blocks and Linear layers, then \(\mathcal{R}_{L}\) is not empty.
Proof.: \((\mathcal{V},\mathcal{E})\) only has one output layer and all layers behind \(L\) have at most one preceding layer; thus, a path from any layer before \(L\) to the output layer must pass through \(L\).
Let \(L_{i}\) be the last ReLU layer in \(in(L)\). If \(L_{i}\) has a succeeding layer \(L_{j}\) such that \(L_{j}\neq L\), then \(L\) must be in all paths from \(L_{j}\) to the output layer, and there would be a ReLU layer in \(in(L)\) included by a path from \(L_{j}\) to \(L\), which meant that \(L_{i}\) was not the last layer in \(in(L)\), a contradiction. Hence, \(L_{i}\) is in \(\mathcal{R}_{L}\) and \(\mathcal{R}_{L}\neq\emptyset\).
Based on lemma 4, if \(|in(L)|>1\), then \(|\mathcal{R}_{L}|\geq 1\); thus, the number of ReLU layers before the last SumLinear block in \(\mathcal{V}\) decreases after replacing \(L\) and the layers in \(\mathcal{R}_{L}\) with the layers \(L^{l},L^{r},L^{s}\) introduced in the Linear layer construction, where \(L^{s}\) becomes the last SumLinear block in \(\mathcal{V}\). It follows that algorithm 1 terminates and generates a neural network consisting of Linear and ReLU layers.
**Theorem 2**.: Algorithm 1 can terminate and generate a neural network consisting of Linear and ReLU layers.
Proof.: From lemma 4, we know that line 6 in algorithm 1 reduces the number of ReLU layers before the last SumLinear block in \(\mathcal{V}\); therefore, it can only be visited at most \(|\mathcal{V}|\) times. Note that SumLinear block normalization (at line 4) does not affect ReLU layers or the layers behind \(L\).
Then line 8 can directly replace all SumLinear blocks having one preceding layer in \(\mathcal{V}\) with Linear layers. Therefore, algorithm 1 can terminate and return a neural network consisting of Linear and ReLU layers.
**Theorem 3**.: Our constructed REDNet is input-output equivalent to the original network given the input space \(I\).
Proof.: (Sketch.) Our reduction technique contains two steps: (i) network simplification presented in section V; (ii) stable ReLU neuron reduction described in section IV. Each step is designed deliberately to preserve input-output equivalence.
_Simplification equivalence._ Function \(Initialization(\mathcal{V},\mathcal{E})\) at line 1 in algorithm 1 encodes ONNX layers into a uniform network representation; such encoding preserves input-output equivalence. Then lemma 2 and lemma 3 show that line 4 and line 6 do not change network functionality. In addition, line 8, replacing a SumLinear Block with the only Linear layer in the block, also does not change network output. Therefore, algorithm 1 can construct a sequential neural network that has the same functionality as the original neural network.
_Reduction equivalence._ The proof is given in lemma 1.
### _Illustrative example of network simplification_
In this subsection, we use a simple network block to illustrate how to perform algorithm 1 on a non-sequential structure (Figure 9(a)) to get a sequential neural network consisting of Linear and ReLU layers (Figure 9(b)). In Figure 9, each rectangular node (including \(n_{1},n_{2},n_{3},n_{4}\)) represents a set of neurons whose values are derived from the preceding connected node(s) and the connections between them. Note that red-colored rectangular nodes are ReLU nodes that represent the output neurons of the ReLU layer; blue nodes are convolutional nodes; the black node is an Add layer. The connections between nodes are represented with directed edges, and the connected functions are displayed near the edges (e.g. conv1, ReLU). Symbol \(\oplus\) represents concatenation.
Firstly, we apply function \(Initialization(\mathcal{V},\mathcal{E})\) at line 1 to encode Figure 9(a) as SumLinear blocks and ReLU layers, where the weights and biases of each Linear layer are displayed above the layer. We name the two ReLU nodes \(n_{1},n_{3}\) as ReLU1, ReLU2 respectively.
Figure 10: Network in Figure 9(a) encoded with SumLinear blocks and ReLU layers
Figure 9: The simplification of a non-sequential block
Then we take the last SumLinear block from Figure 10 and normalize this block (Figure 11(a)) and obtain the normalized block as in Figure 11(b). The whole network is now updated as Figure 12.
At this step, we notice that ReLU layer ReLU2 is _blocked_ by the last SumLinear block, and ReLU1 is _not blocked_ as it has another path to a subsequent ReLU layer. Therefore, we perform the Linear layer construction at line 6 and obtain the network in Figure 13.
Lastly, we take out the last SumLinear block in Figure 13 and perform normalization to obtain Figure 14. In Figure 14, the last SumLinear block includes two Linear layers having the same preceding layer ReLU1, which requires further normalization.
The final architecture of the network is given in Figure 15, where we have \(\begin{bmatrix}\text{conv1-w}\\ \text{identity}\end{bmatrix}=\text{conv1-w}\oplus\text{identity}\). Now the original network has been simplified into a sequential one.
## VI Experiments
In this section, we present our experimental results of instantiating the network reduction technique on \(\alpha,\beta\)-CROWN [14], VeriNet [15] and PRIMA [12] to show evidence that, given the _same_ verification problem, the _same_ verification algorithm runs _faster_ on the reduced network than on the original network, which gives us confidence in the ability of our method to enhance the efficiency of existing verification methods. Furthermore, the simple architecture of REDNet allows existing verification tools that only support limited network benchmarks to handle more networks.
### _Experiment Setup_
The evaluation machine has two 2.40GHz Intel(R) Xeon(R) Silver 4210R CPUs with 384 GB of main memory and an NVIDIA RTX A5000 GPU.
**Evaluation Benchmarks.** The evaluation datasets include MNIST [36] and CIFAR10/CIFAR100 [37]. The MNIST dataset contains hand-written digits with 784 pixels, while CIFAR10/CIFAR100 include colorful images with 3072 pixels. We chose fully-connected, convolutional and residual networks of various sizes from two well-known benchmarks: the academic ERAN system [38] and VNNCOMP2021/2022 (International Verification of Neural Networks Competition) [39, 40]. The number of activation layers (#Layers), the number of ReLU neurons (#Neurons), and the trained defense of each network are listed in Table II, where a trained defense refers to a defense method against adversarial samples to improve robustness. Please note that "Mixed" means mixed training, which combines adversarial training and certified defense training loss. This can lead to an excellent balance between model clean accuracy and robustness, and is beneficial for obtaining higher verified accuracy [41].
Figure 11: Normalization of the last SumLinear block

Figure 12: The network after the first normalization

Figure 13: The network after the Linear layer construction. For simplicity in showing the weight matrices, we assume that ReLU2 and ReLU1 each have one neuron; and "-biases/-weights" are abbreviated as "-b/-w" respectively.

Figure 14: The network after the second normalization

Figure 15: The sequential network after the third normalization

**Verification Properties.** We conduct robustness analysis, where we determine if the classification result of a neural network, given a set of slightly perturbed images derived from the original image (input specification), remains the same as the ground truth label obtained from the original unperturbed image (output specification). The set of images is defined by a user-specified parameter \(\epsilon\), which perturbs each pixel \(p_{i}\) to take an intensity interval \([p_{i}-\epsilon,p_{i}+\epsilon]\). Therefore, the input space \(I\) is \(\bigtimes_{i=1}^{n}[p_{i}-\epsilon,p_{i}+\epsilon]\). In our experiment, we acquire the verification properties from the provided vnnlib files [44] that record the input and output specifications, or via a self-specified \(\epsilon\). We aim to speed up the analysis process for those _properties that are tough to verify_; hence we _filter out_ the falsified properties. We obtain around 30 properties for each tested network, as enumerated in Table II.
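A small sketch of how such a property can be represented is given below; the clipping of the box to \([0,1]\) is our assumption about the valid pixel range, and the output check is the standard sufficient condition based on concrete output bounds:

```python
import numpy as np

def input_box(image, eps, clip=(0.0, 1.0)):
    """Per-pixel box [p_i - eps, p_i + eps]; clipping to the valid pixel
    range (assumed here to be [0, 1]) keeps the box inside the data domain."""
    return np.clip(image - eps, *clip), np.clip(image + eps, *clip)

def robust(out_lb, out_ub, label):
    """Robustness holds if the lower bound of the ground-truth logit exceeds
    the upper bound of every other logit (a sufficient check)."""
    return out_lb[label] > np.delete(out_ub, label).max()
```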
### _Network reduction results_
Table III shows the size of the reduced networks, where the bound propagation methods CROWN and \(\alpha\)-CROWN are used to compute concrete bounds and detect stable neurons. Here we present the number of neurons in the original network, and the average size after reduction (under column "AvgN") and the reduction time (under column "AvgT") for the two methods. We encountered an out-of-memory problem when running \(\alpha\)-CROWN on network C_100_Large; thus we mark the result as "-".
The table shows that a significant number of neurons could be reduced within a reasonable time budget by leveraging the concrete bounds returned by CROWN. Therefore, we use CROWN as our bound propagator for the rest of the experiments. On average, the reduced networks are \(10.6\times\) smaller than the original networks.
Figure 16(a) shows the reduction ratio distribution where each dot \((\alpha,\beta)\) in the figure means that the reduction ratio is greater than \(\beta\) on \(\alpha\) percent properties. The reduction ratio can be up to 95 times at the best case and greater than 20 times on 10% properties. Figure 16(b) gives the size distribution of reduced networks. Each dot \((\alpha,\beta)\) in the figure means the reduced networks have at most \(\beta\) ReLU neurons on \(\alpha\) percent properties. We can see that on more than 94% properties, there are at most 8000 ReLU neurons in the reduced networks.
### _Instantiation on \(\alpha,\beta\)-Crown_
\(\alpha,\beta\)-CROWN is GPU-based and was the winning verifier in VNNCOMP 2021 [39] and VNNCOMP 2022 [40]; its methodology is based on a linear bound propagation framework and branch-and-bound. We first instantiate our technique on \(\alpha,\beta\)-CROWN, and we name the new system \(\alpha,\beta\)-CROWN-R. We set the verification timeout to 300 seconds; if the tool fails to terminate within the timeout, we deem the result inconclusive. The results are listed in Table IV, where we explicitly enumerate the number of timeout properties, the number of verified properties, and the average execution time of \(\alpha,\beta\)-CROWN (column \(\alpha,\beta\)-**CROWN-O**) and our instantiated system (column \(\alpha,\beta\)-**CROWN-R**) on the properties where both methods terminate within the timeout.2
Footnote 2: When the original method is timeout or fails to execute for all properties, e.g. C_100_Med in Table V and M_ConvBig in Table VI, the average time is computed on the properties where our method can terminate within timeout.
From the results, we observe that \(\alpha,\beta\)-CROWN-R can verify tough properties that failed to be verified within 300 seconds by \(\alpha,\beta\)-CROWN-O. This indicates that our reduction pre-processing not only benefits easy verification problems but also helps verify more difficult properties within a decent time. In general, \(\alpha,\beta\)-CROWN-R verifies 11 more properties and boosts the efficiency of \(\alpha,\beta\)-CROWN-O with an average \(1.52\times\) speedup on all 12 networks.
Table II: Detailed information of the experimental networks

| Network | Type | #Layers | #Neurons | Defense | #Property |
| --- | --- | --- | --- | --- | --- |
| M_256x6 | fully-connected | 6 | 1,536 | None | 30 |
| M_ConvMed | convolutional | 3 | 5,704 | None | 31 |
| M_ConvBig | convolutional | 6 | 48,064 | DiffAI [42] | 29 |
| M_SkipNet | residual | 6 | 71,650 | DiffAI | 31 |
| C_8_255Simp | convolutional | 3 | 16,634 | None | 30 |
| C_WideKW | convolutional | 3 | 6,244 | None | 32 |
| C_ConvBig | convolutional | 6 | 62,464 | PGD [43] | 37 |
| C_Resnet4b | residual | 10 | 14,436 | None | 30 |
| C_ResnetA | residual | 8 | 11,364 | None | 32 |
| C_ResnetB | residual | 8 | 11,364 | None | 29 |
| C_100_Med | residual | 10 | 55,460 | Mixed | 24 |
| C_100_Large | residual | 10 | 286,820 | Mixed | 24 |
Figure 16: Visualized results of the reduction with CROWN
The average is computed across the networks, where the speedup for each network is calculated relative to the reported time on the original network.
In addition, the performance of REDNet is affected by the network reduction ratio. For example, \(\alpha,\beta\)-CROWN-R only achieves an average \(1.12\times\) speedup on M_256x6, whose reduction ratio is only 1.55, while \(\alpha,\beta\)-CROWN-R achieves an average \(2.52\times\) speedup on C_100_Large, whose mean reduction ratio is 39.80.
### _Instantiation on PRIMA_
PRIMA [12] is one of the state-of-the-art incomplete verification tools. It introduces a new convex relaxation method that considers multiple ReLUs jointly in order to capture the correlation between ReLU neurons in the same layer. Furthermore, PRIMA leverages LP-solving or MILP-solving to refine individual neuron bounds within a user-configured timeout. Note that PRIMA stores the connections between neurons in two ways: 1. a dense expression, which encodes the fully-connected computation in a fully-connected layer; 2. a sparse expression, which only keeps the non-zero coefficients and the indexes of the preceding neurons whose corresponding coefficients are non-zero (e.g. in a convolutional layer). As some affine connections between layers in our reduced network contain many zero elements (since we introduce the identity matrix in the newly constructed connection), we elect to record them as sparse expressions in the instantiated PRIMA (abbreviated as PRIMA-R).
The comparison results are given in Table V, where we set a 2000-second timeout for each verification query, as PRIMA runs on the CPU and usually takes a long execution time for deep networks. Note that PRIMA returns a segmentation fault for M_SkipNet; thus the results are marked as "-". PRIMA times out on all properties of C_100_Med and C_100_Large, hence marked as "TO". Note that there are some cases where PRIMA-R runs slower than PRIMA-O, e.g., for network C_ConvBig. This happens because PRIMA conducts refined verification by pruning the potential adversarial labels one by one within a certain timeout. Once an adversarial label fails to be pruned within the timeout, PRIMA returns unknown immediately without checking the rest of the adversarial labels. In PRIMA-R, we can prune those failed labels that previously timed out in PRIMA-O, thus continuing the verification process, which may take more overall time. But accordingly, we gain significant precision improvement, e.g. PRIMA-R can verify 9 more properties on C_ConvBig.
On average, PRIMA-R gains \(1.99\times\) speedup than PRIMA-O and verifies 60.6% more images, which indicates the strength of REDNet to improve both efficiency and precision.
The execution of a verification tool on an unsupported network is regarded as timeout. Figure 17(b) then shows the execution time distribution of \(\alpha,\beta\)-CROWN-R and \(\alpha,\beta\)-CROWN-O, where each position \((\alpha,\beta)\) denotes that the tool can verify \(\beta\) percent properties in \(\alpha\) seconds. For example, by setting the time limit to 10 seconds, \(\alpha,\beta\)-CROWN-R can verify 32.9% properties, while \(\alpha,\beta\)-CROWN-O only verifies 18.3% properties.
On most properties, the verification tools on REDNet are faster than the tools on the original network. Despite its generality, REDNet may achieve marginal effectiveness on certain tools or benchmarks due to the following factors:
* The reduction ratio affects the subsequent verification acceleration. A less significant reduction ratio plus the reduction cost can cause a marginal overall speedup. Figure 17(e) and Figure 17(f) depict the effect of the reduction ratio on the speedup gained. Despite other factors affecting the final speedup, there is a general trend that a significant reduction ratio leads to better speedup, which explains the pronounced effectiveness on networks C_100_Med and C_100_Large.
* Different tools may use distinct bound propagation methods, which have different degrees of dependency on the network size. PRIMA deploys DeepPoly, whose time complexity depends on \(N^{3}\) where each layer has at most \(N\) neurons [45]; as such, a reduction in network size can lead to better performance. \(\alpha,\beta\)-CROWN, on the other hand, uses \(\beta\)-CROWN, which generates constraints of output neurons defined over preceding layers back to the input layer. Thus, the number of constraints does not vary, and the number of intermediate neurons only affects the number of variables that appear in each constraint; as such, the deployment of REDNet may reap a marginal speedup on \(\alpha,\beta\)-CROWN compared to PRIMA. VeriNet uses symbolic interval propagation to generate constraints of intermediate and output neurons defined over the input neurons; thereby the intermediate neuron count only affects the number of constraints, while the number of defined variables in each constraint is fixed by the input dimension. Hence, REDNet could be less effective on VeriNet than on PRIMA in general.
* Some layer types (e.g. Conv) may compute faster than fully-connected layers; since our method transforms these layers into fully-connected layers before performing network reduction, its efficiency gain may be less significant compared to the original layers. On the other hand, it is worth noting that the use of fully-connected layers improves the availability of existing tools.
* \(\alpha,\beta\)-CROWN and VeriNet are branch-and-bound based and they generate sub-problems from their respective branching heuristics, which are dependent on the original network structures. The REDNet changes the network structure, and hence the heuristic can generate different sub-problems. This may affect the performance.
We conclude empirically that REDNet performs better (significant speedup or many more properties verified) on large networks, i.e. networks with more than 40k ReLU neurons.
Table VI: The experiment results of VeriNet. The time is the average execution time over the properties where both methods terminate before timeout. When VeriNet-O fails to execute, e.g. M_ConvBig, the average time is computed over the properties where our method terminates within the timeout.

| Neural Net | VeriNet-O #Timeout | VeriNet-O #Verified | VeriNet-O Time(s) | VeriNet-R #Timeout | VeriNet-R #Verified | VeriNet-R Time(s) |
| --- | --- | --- | --- | --- | --- | --- |
| M_256x6 | 27 | 3 | 52.94 | 27 | 3 | **47.78** |
| M_ConvMed | - | - | - | 19 | **12** | **23.96** |
| M_ConvBig | - | - | - | 23 | **6** | **48.81** |
| C_WideKW | 3 | 29 | 27.12 | 2 | **30** | **22.74** |
| C_8_255Simp | 10 | 20 | 63.26 | 10 | 20 | **52.56** |
| C_ResnetA | 25 | 7 | 129.49 | 25 | 7 | **83.35** |
| C_ResnetB | 21 | 8 | 116.80 | 20 | **9** | **74.98** |
| C_100_Med | 10 | 14 | 69.71 | 9 | **15** | **21.15** |
Figure 17: Visualized comparison results.
On the large networks, the average speedup of \(\alpha,\beta\)-CROWN-R is 1.94\(\times\), and the average speedup of VeriNet-R is 3.29\(\times\); PRIMA-R verifies 42 properties while PRIMA-O only verifies 11 properties.
### _Support of other verifiers for the benchmarks_
As can be seen from subsection VI-D, PRIMA fails to analyze M_SkipNet because it does not support its network architecture. However, with the introduction of REDNet, which is constructed as a fully-connected neural network, PRIMA is now able to verify M_SkipNet. A similar improvement happens to VeriNet. Therefore, our REDNet not only speeds up the verification process but also allows existing tools to handle network architectures that are not supported originally.
To further demonstrate that the reduced network adds support to existing verification tools, we select four tools (Debona [46], Venus [47], Nnenum [48], PeregriNN [49]) from VNNCOMP2021/2022 that only support limited network architectures. We select one representative verification property for each of our tested networks to check whether the four designated tools can support the networks.
We present the results in Table VII, where we color the left half of the circle black to indicate that the original network is supported by the tool (and white otherwise); we also color the right half of the circle black if the reduced network is supported by the tool (and white otherwise). In general, black indicates the network is _supported_ and white indicates the network is _not supported_. Note that Venus does not support networks whose output layer is a ReLU layer; therefore, it cannot be executed on either the original or the reduced network for M_SkipNet. These results boost our confidence that our constructed REDNet not only accelerates the verification but also _produces a simple neural network architecture that significantly expands the scope of neural networks which various tools can handle._
## VII Discussion
We now discuss the limitations of our work.
_Supported layer types._ As described in section V, our reduced neural network contains only affine layers (e.g. GEMM layers) and ReLU layers; therefore we can only represent non-activation layers that conduct linear computation. For example, an _Add_ layer that takes layers \(\alpha\) and \(\beta\) conducts linear computation, as the output is computed as \(\alpha+\beta\). A _Convolutional_ layer conducts linear computation as well, as it only takes one input layer and the other operands are constant weights and biases. However, we cannot support a _Multiplication_ layer that takes layers \(\alpha\) and \(\beta\) and computes \(\alpha\times\beta\) as the output, since this computation is non-linear in its inputs. For future work, we will explore the possibility of handling more non-linear computations.
_Floating-point error._ As presented in Theorem 3, our reduction process preserves the input-output equivalence of the original network in the real-number domain. However, like many existing verification algorithms [12, 14, 24, 26] that use floating-point numbers when executed on physical machines, our implementation involves floating-point computation, thus inevitably introducing floating-point error. The error can be mitigated by deploying a float data type with higher precision during implementation.
## VIII Related Work
Theoretically, verifying deep neural networks with ReLU functions is an NP-hard problem [17]. In particular, the complexity of the problem grows with a larger number of _nodes_ in the network. Therefore, out of concern for scalability, many works over-approximate the behavior of the network. This over-approximation can be conducted by abstract interpretation techniques [1, 25, 50], or by soundly approximating the network with _fewer nodes_ [8, 9, 51, 52].
In detail, abstract interpretation-based methods over-approximate the functionality of each neuron with an abstract domain, such as box/interval [50], zonotope [1] or polyhedra [25]. These methods reason over the original neural networks without changing the number of neurons in the test network.
On the contrary, the reduction methods in [8, 9, 51, 52] reduce the number of neurons in a way that over-approximates the original network's behavior. However, such over-approximation jeopardizes completeness when instantiated on complete methods. In contrast, our reduction method captures the _exact_ behavior of the network without approximation error. Therefore REDNet can be instantiated on complete tools and even verify more properties given the same timeout. Furthermore, REDNet can handle various large networks, whereas the previous work [51] only evaluated one large-scale network (the C_ConvBig in our benchmark), which was reduced to 25% of the original size with a very small perturbation \(\epsilon=0.001\); we could reduce it to just 10% with \(\epsilon\approx 0.0078\) (properties from VNN competition 2022). We remark that the smaller the perturbation, the more reduction we can gain. Other related tools in [9, 52] were only evaluated on ACAS Xu networks with very small input dimensions and network sizes, making it challenging for us to make any meaningful comparison.
Table VII: The networks supported by existing verification tools. A fully black circle (●) indicates both the original and the reduced networks are supported. A right-half black circle (◑) indicates that the tool supports only the reduced network.

| Networks (12) | Venus | Debona | Nnenum | PeregriNN |
| --- | --- | --- | --- | --- |
| M_256x6 | ● | ● | ● | ● |
| M_ConvMed | ◑ | ◑ | ◑ | ◑ |
| M_ConvBig | ◑ | ◑ | ◑ | ◑ |
| M_SkipNet | ReLU-error | ◑ | ◑ | ◑ |
| C_WideKW | ● | ◑ | ◑ | ◑ |
| C_8_255Simp | ◑ | ◑ | ◑ | ◑ |
| C_ConvBig | ◑ | ◑ | ◑ | ◑ |
| C_ResnetA | ◑ | ◑ | ◑ | ◑ |
| C_ResnetB | ◑ | ◑ | ◑ | ◑ |
| C_100_Med | ◑ | ◑ | ◑ | ◑ |
| C_100_Large | ◑ | ◑ | ◑ | ◑ |
Last but not least, the reduced networks designed in [8, 9] use intervals or values in an abstract domain to represent connection weights. Such specialized connections require implementation support if instantiated on existing verification methods. In contrast, we export REDNet as a fully-connected network in ONNX, which is an open format that is widely accepted. This makes our REDNet a versatile tool that can also be combined with existing reduction methods to lessen the network size even further, as they all apply to fully-connected networks.
REDNet can benefit various verification techniques, such as branch-and-bound based methods [53, 7, 10, 5]. The key insight of the branch-and-bound method is to divide the original verification problem \(P\) into subdomains/subproblems by splitting neuron intervals. For example, one can bisect an input neuron interval so that the input space is curtailed in each subdomain, or split an unstable ReLU neuron \(y\) (whose input value can be both negative and positive) at the point 0, so that \(y\) becomes stably activated or stably deactivated in the subdomains. Our _network reduction_ technique, once applied at the beginning of branch-and-bound based methods, helps generate easier subproblems based on a small-sized network, thus accelerating the whole analysis process without sacrificing verification precision.
Furthermore, the reduced network could accelerate abstract refinement based processes like PRIMA [12], where it encodes the network constraints and resolves individual neuron bounds. As REDNet contains fewer neurons and connections, the solving process involves a smaller set of constraints, which leads to overall speedup.
## IX Conclusion
In this work, we propose the _neural network reduction_ technique, which constructs a reduced neural network with fewer neurons and connections while capturing the _same_ behavior as the original tested neural network. In particular, we provide formal definitions of stable ReLU neurons and deploy the state-of-the-art bound propagation method to detect such stable neurons and _remove_ them from the neural network in a way that preserves the network functionality. We conduct extensive experiments over various benchmarks and state-of-the-art verification tools. The results on a large set of neural networks indicate that our method can be instantiated on different verification methods, including \(\alpha,\beta\)-CROWN, VeriNet and PRIMA, to expedite the analysis process further.
We believe that our method is an efficient pre-processing technique that returns a functionally-equivalent reduced network on which the same verification algorithm runs faster, correspondingly enhancing the efficiency of existing verification methods and enabling them to answer tough verification queries within a decent time budget. Moreover, the simplified network architectures in REDNets empower existing tools to handle a wider range of networks that they could not support previously.
## Acknowledgment
This research is supported by a Singapore Ministry of Education Academic Research Fund Tier 1 T1-251RES2103. The second author is supported by NUS grant T1 251RES2219.
|
2301.02243 | Machine Fault Classification using Hamiltonian Neural Networks | A new approach is introduced to classify faults in rotating machinery based
on the total energy signature estimated from sensor measurements. The overall
goal is to go beyond using black-box models and incorporate additional physical
constraints that govern the behavior of mechanical systems. Observational data
is used to train Hamiltonian neural networks that describe the conserved energy
of the system for normal and various abnormal regimes. The estimated total
energy function, in the form of the weights of the Hamiltonian neural network,
serves as the new feature vector to discriminate between the faults using
off-the-shelf classification models. The experimental results are obtained
using the MaFaulDa database, where the proposed model yields a promising area
under the curve (AUC) of $0.78$ for the binary classification (normal vs
abnormal) and $0.84$ for the multi-class problem (normal, and $5$ different
abnormal regimes). | Jeremy Shen, Jawad Chowdhury, Sourav Banerjee, Gabriel Terejanu | 2023-01-04T19:07:01Z | http://arxiv.org/abs/2301.02243v1 | # Machine Fault Classification using Hamiltonian Neural Networks
###### Abstract
A new approach is introduced to classify faults in rotating machinery based on the total energy signature estimated from sensor measurements. The overall goal is to go beyond using black-box models and incorporate additional physical constraints that govern the behavior of mechanical systems. Observational data is used to train Hamiltonian neural networks that describe the conserved energy of the system for normal and various abnormal regimes. The estimated total energy function, in the form of the weights of the Hamiltonian neural network, serves as the new feature vector to discriminate between the faults using off-the-shelf classification models. The experimental results are obtained using the MaFaulDa database, where the proposed model yields a promising area under the curve (AUC) of 0.78 for the binary classification (normal vs abnormal) and 0.84 for the multi-class problem (normal, and 5 different abnormal regimes).
Physics-Informed Neural Networks, Supervised Learning, Energy Conservation, Dynamical Systems
## 1 Introduction
A cost-effective method for ensuring component reliability is to enhance the current schedule-based maintenance approach with deterministic component health and usage data to inform selective and targeted maintenance activities. Condition monitoring and fault diagnosis systems are required to guard against unexpected failures in safety-critical and production applications. Early fault detection can reduce unplanned failures, which will in turn reduce life cycle costs and increase readiness and mission assurance. Machinery of all kinds, from manufacturing tools like CNC machines to heavy equipment, aircraft, helicopters, space vehicles, and car engines, generates vibrations. The analysis of these vibration data is the key to detecting machinery degradation before the equipment or the structure fails. Machine faults usually leave key indications of their internal signatures through changes in modal parameters. For example, they may change the natural frequency of the system, generate unique damping characteristics, degrade stiffness, or generate acoustic frequencies. The defects and faults in the system may also give rise to a different form of energy transduction from mechanical to electrical or electromagnetic energy, which leaves unique signatures. The statistical features of vibration signals in the time, frequency, and time-frequency domains each have different strengths for detecting fault patterns, which has been thoroughly studied [21, 22, 23]. Various approaches have been proposed to extract features from these vibration signals using time-domain and frequency-domain analysis [18], Fourier and wavelet transforms [11, 10], and manifold learning [15]. It has also been shown that the integration and hybridization of feature extraction algorithms can yield synergies that combine strengths and eliminate weaknesses [21, 16]. Most of the work done in this area is based on data collected from vibration sensors [21, 22, 20], which are cheap and enable non-intrusive deployments. However, they generate huge datasets, and finding the above-mentioned unique features and their respective contributions in such large datasets is extremely challenging. Hence, several feature-extraction-driven machine learning algorithms have recently been deployed to address this challenge [1, 23, 24, 25].
One of the challenges with building and deploying machine learning models to support decision-making is achieving a level of generalization that allows us
to learn on one part of the data distribution and predict on another. This challenge is amplified when learning from data generated by physical systems, as machine learning models such as neural networks (NN) capture only an approximation of the underlying physical laws. Recently, new approaches have emerged under the umbrella of physics-informed neural networks (PINN) [10] to train NNs that not only fit the observational data but also respect the underlying physics. This work leverages the Hamiltonian neural network (HNN) [11] to learn the Hamiltonian equations of energy-conserving dynamical systems from noisy data.
HNNs are used to characterize the total energy of rotating machinery, which appears in a wide range of applications such as power turbines, helicopters, and CNC machines, to name a few. In Ref. [12], the authors proposed a similarity-based model to calculate the similarity score of a signal with a set of prototype signals that characterize a target operating condition. These similarity score features are used in conjunction with time and spectral domain features to classify the behavior of the system using off-the-shelf classification models, such as random forests.
The main contribution of this work is to be intentional with respect to the underlying physics of the rotating machinery when generating discriminatory features. Namely, the conservation of energy is used as an inductive bias in the development and training of the HNN. While these mechanical systems are dissipative in nature, we assume that for short periods of time, the energy of the system is conserved due to the energy injected by the motor. The features derived by our approach are in the form of the weights of the HNN, which characterize the total energy of the system. In other words, we attempt to identify the operating regime based on the energy function. As with the previous approaches, these physics-informed features are then used to train off-the-shelf classifiers, such as logistic regressions and random forests, to predict the condition of the mechanical system. The experimental results are performed on the Machinery Fault Database (MaFaulDa)1 from the Federal University of Rio de Janeiro. The proposed system yields a promising area under the curve (AUC) of 0.78 for the binary classification (normal vs abnormal) and 0.84 for the multi-class problem (normal, and 5 different abnormal regimes).
Footnote 1: [http://www02.smt.ufrj.br/~offshore/mfs/page_01.html](http://www02.smt.ufrj.br/~offshore/mfs/page_01.html)
This paper is structured as follows: Section 2 introduces the background on the HNN and the MaFaulDa dataset. Section 3 presents our proposed approach to derive physics-informed features to classify operating conditions. Section 4 shows the empirical evaluations and Section 5 summarizes our findings.
## 2 Background
### Hamiltonian Neural Networks
The Hamiltonian equations of motion, Eq. 1, describe the mechanical system in terms of canonical coordinates, position \(\mathbf{q}\) and momentum \(\mathbf{p}\), and the Hamiltonian of the system \(\mathcal{H}\).
\[\frac{d\mathbf{q}}{dt}=\frac{\partial\mathcal{H}}{\partial\mathbf{p}},\quad \frac{d\mathbf{p}}{dt}=-\frac{\partial\mathcal{H}}{\partial\mathbf{q}} \tag{1}\]
Instead of using neural networks to directly learn the Hamiltonian vector field \(\left(\frac{\partial\mathcal{H}}{\partial\mathbf{p}},-\frac{\partial\mathcal{ H}}{\partial\mathbf{q}}\right)\), the approach used by Hamiltonian neural networks is to learn a parametric function in the form of a neural network for the Hamiltonian itself [11]. This distinction accounts for learning the exact quantity of interest and it allows us to also easily obtain the vector field by taking the derivative with respect to the canonical coordinates via automatic differentiation. Given the training data, the parameters of the HNN are learned by minimizing the following loss function, Eq. 2.
\[\mathcal{L}=\left\|\frac{\partial\mathcal{H}}{\partial\mathbf{p}}-\frac{d \mathbf{q}}{dt}\right\|_{2}+\left\|\frac{\partial\mathcal{H}}{\partial\mathbf{ q}}+\frac{d\mathbf{p}}{dt}\right\|_{2} \tag{2}\]
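To make this concrete, below is a minimal PyTorch sketch (our own; the paper does not publish code) that parameterizes \(\mathcal{H}\) with a small network, recovers the vector field of Eq. 1 by automatic differentiation, and trains with a squared-error version of the loss in Eq. 2:

```python
import torch
import torch.nn as nn

class HNN(nn.Module):
    """Learns a scalar Hamiltonian H(q, p); the vector field follows by autograd."""
    def __init__(self, dim: int = 1, hidden: int = 200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def vector_field(self, x):
        # x concatenates (q, p); returns (dH/dp, -dH/dq) = predicted (dq/dt, dp/dt)
        x = x.requires_grad_(True)
        H = self.net(x).sum()
        dH = torch.autograd.grad(H, x, create_graph=True)[0]
        dHdq, dHdp = dH.chunk(2, dim=-1)
        return torch.cat([dHdp, -dHdq], dim=-1)

def hnn_loss(model, x, dxdt):
    """Penalize the mismatch between predicted and observed time derivatives (Eq. 2)."""
    return (model.vector_field(x) - dxdt).pow(2).sum(-1).mean()

# One training step on a (placeholder) batch of coordinates and their derivatives.
model = HNN(dim=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, dxdt = torch.randn(128, 2), torch.randn(128, 2)
opt.zero_grad()
loss = hnn_loss(model, x, dxdt)
loss.backward()
opt.step()
```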
### Machinery Fault Database (MaFaulDa)
A comprehensive set of machine faults and vibration data was needed for the development and testing of the Hamiltonian-based feature extraction and classification of different operating states with damage/defects. The Machinery Fault Database (MaFaulDa) consists of a comprehensive set of vibration data from a SpectraQuest Alignment-Balance-Vibration System, which includes multiple types of faults, see Fig. 1. The equipment has two shaft-supporting bearings, a rotor, and a motor. Accelerometers are attached to the bearings to measure the vibration in the radial, axial, and tangential directions of each bearing. In addition, measurements from a tachometer (for measuring system rotation frequency) and a microphone (for capturing sound during system operation) are also included in the database. The database includes 10 different operating states and a total of 1951 sets of vibration data: (1) normal operation, (2) rotor imbalance, (3) underhang bearing fault: outer track, (4) underhang bearing fault: rolling
elements, (5) underhang bearing fault: inner track, (6) overhang bearing fault: outer track, (7) overhang bearing fault: rolling elements, (8) overhang bearing fault: inner track, (9) horizontal shaft misalignment, (10) vertical shaft misalignment.
**Normal Operation**. There are 49 sets of data from the system operating under normal conditions without any fault, each with a fixed rotating speed within the range from 737 rpm to 3686 rpm with steps of approximately 60 rpm.
**Rotor Imbalance**. To simulate different degrees of imbalanced operation, distinct loads of (6, 10, 15, 20, 25, 30, 35) g were coupled to the rotor. The database includes a total of 333 different imbalance-operation scenarios with combinations of loads and rotation frequencies.
**Bearing Faults**. As one of the most complex elements of the machine, the rolling bearings are the most susceptible elements to fault occurrence. Three defective bearings, each one with a distinct defective element (outer track, rolling elements, and inner track), were placed one at a time in each of the bearings. The three masses of (6, 10, 20) g were also added to the rotor to induce a combination of rotor imbalance and bearing faults with various rotation frequencies. There is a total of 558 underhang bearing fault scenarios and 513 overhang bearing fault scenarios.
**Horizontal Shaft Misalignment**. Horizontal shaft misalignment faults were induced by shifting the motor shaft horizontally of (0.5, 1.0, 1.5, 2.0) mm. The database includes a total of 197 different scenarios with combinations of horizontal shaft misalignment and rotation frequencies.
**Vertical Shaft Misalignment**. Vertical shaft misalignment faults were induced by shifting the motor shaft vertically of (0.51, 0.63, 1.27, 1.4, 1.78, 1.9) mm. The database includes a total of 301 different scenarios with combinations of vertical shaft misalignment and rotation frequencies.
## 3 Methodology
The approach proposed to identify the operating state of the rotating machinery is to learn the total energy of the system from vibration data using HNN and use the parameters of the Hamiltonian as discriminating features. The intuition is that the total energy signature is different under various faults. The main assumption that we make is that the energy of the system is conserved for short periods of time thanks to the energy injected by the motor, which allows us to use the HNN [1]. The overall model architecture is shown in Fig. 2.
The first step is to develop a set of generalized coordinates from the raw vibration data using an autoencoder trained only on the data from normal conditions. The encoder NN is then used to generate a low-dimensional representation (2D in our case) from the 8 vibration measurements taken in any operating regime. This approach to developing arbitrary coordinates was proposed in the original HNN paper [1].
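A minimal sketch of this coordinate-construction step is given below; the 8 input channels and the 2D latent follow the text, while the hidden width and training details are our assumptions:

```python
import torch
import torch.nn as nn

class CoordAutoencoder(nn.Module):
    """Maps 8 vibration channels to 2 generalized coordinates and back."""
    def __init__(self, n_sensors: int = 8, latent: int = 2, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_sensors, hidden), nn.ReLU(), nn.Linear(hidden, latent))
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(), nn.Linear(hidden, n_sensors))

    def forward(self, x):
        z = self.encoder(x)  # 2D latent used as the canonical coordinates
        return self.decoder(z), z

# Train on normal-condition data only, with a plain reconstruction loss.
ae = CoordAutoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
x = torch.randn(256, 8)  # placeholder batch of sensor readings
recon, z = ae(x)
loss = nn.functional.mse_loss(recon, x)
opt.zero_grad(); loss.backward(); opt.step()
```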
Using the newly developed coordinates, the second step is to train an HNN for each sequence of data generated at a 50 kHz sampling rate over 5 s. The parameters \(\theta\) of the Hamiltonian \(\mathcal{H}_{\theta}\) fully characterize the energy function of the operating state and can be used to train a classifier.
The parameterization of the Hamiltonian is high-dimensional (\(41,200\) weights in our case) as it depends on the number of layers and hidden neurons per layer chosen in the HNN. As a result, we have chosen to reduce its dimension using principal component analysis (PCA) before training a classifier, such as a random forest, in the last modeling step.
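Putting these steps together, the last modeling stage might look as follows with scikit-learn (a hypothetical sketch: the 41,200-dimensional weight vector follows the text, but the number of principal components and trees are our assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def hnn_feature(model) -> np.ndarray:
    """Flatten all HNN weights into one feature vector (41,200-D in the paper)."""
    return np.concatenate([p.detach().cpu().numpy().ravel()
                           for p in model.parameters()])

# X: one flattened-weight vector per recorded sequence; y: operating condition.
X = np.random.randn(100, 41200)  # placeholder feature matrix
y = np.random.randint(0, 6, size=100)

pca = PCA(n_components=20)  # reduced dimension: an assumption
X_low = pca.fit_transform(X)
clf = RandomForestClassifier(n_estimators=200).fit(X_low, y)
print(clf.predict(pca.transform(X[:3])))
```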
## 4 Numerical Results
The proposed fault classification system has been used with the MaFaulDa dataset and a \(70:30\) split into train and test data. Given the imbalance of collected data, namely 49 datasets recorded for normally operating motors vs. 1800+ datasets recorded for all faulty operating motors, we have used the synthetic minority over-sampling technique (SMOTE) [1] to create synthetic data points for the minority class. The PyCaret2 framework was used to develop the classifiers and preprocess the HNN features.
Footnote 2: [https://pycaret.org](https://pycaret.org)
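This preprocessing step might be sketched as follows with the imbalanced-learn implementation of SMOTE, applied to the training split only (the class counts below are placeholders mimicking the normal-vs-abnormal imbalance):

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

X = np.random.randn(500, 20)        # placeholder PCA features
y = np.array([0] * 25 + [1] * 475)  # heavy class imbalance, as in MaFaulDa

# 70:30 split, then oversample the minority class on the training side only.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
print(np.bincount(y_tr), "->", np.bincount(y_bal))
```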
Two different tasks are considered. The first is the binary classification where we are discriminating between normal and abnormal conditions using a random forest, and the second is the multi-class problem where we are discriminating using a logistic regression between the classes listed in Table 1, where class 0 is the normal regime.
The receiver operating characteristic (ROC) curves are provided for both tasks in Figs. 3 and 4, respectively. The macro-averaged AUC calculated on the test data is 0.78 for the binary classification and 0.84 for the multi-class problem, and the F1 scores are 0.96 and 0.51, respectively, which demonstrates the viability of physics-informed features from the HNN to
capture the state of the system. These classification problems are imbalanced due to the skewed distribution of examples across the classes, and as a result, we have chosen not to report the accuracy as was done in prior work [10, 11]. We note however that Ref. [10] reports an F1-score of 0.99 on a 10-fold cross-validation exercise, which is higher than the 0.96 on our binary classification, and that our multi-class F1-score is lower due to the aggregation of bearing fault classes.
Table 1 shows the AUC for pairwise classification between each unique defective operating condition and the normal condition. Fig. 6 shows the phase spaces of 10 different operating conditions (1 normal and 9 faulty). Interestingly, among all the pairwise comparisons, the model finds the discrimination between normal and horizontal-misalignment regimes rather challenging, which we plan to further explore in future studies. We do expect that the faults introduced generate slight changes in the phase portraits of various regimes, see Fig. 6. However, we find qualitatively that the phase portrait of overhang/ball-fault is significantly different from the rest, which suggests that the sub-classes of overhang, namely ball-fault, cage-fault, and outer-race, should be treated as classes of their own.
**Discussion on the effect of rotation frequency on the Hamiltonian**. Fig. 5 shows the Hamiltonian of normally operating motors operating at different speeds. Interestingly, even though this has not been enforced, the general structure of the Hamiltonian vector field remains largely the same across various speeds, while the magnitude of the Hamiltonian increases at higher speeds as expected. It can be concluded in this case that the vector field is dependent on the operating condition, and the magnitude is dependent on the operating speed.
**Discussion on HNN on Dissipative Systems**. The HNN has the ability to learn the total energy of a number of systems [1], including an ideal mass-spring system. Although the HNN is designed to conserve energy, it is interesting to consider what the HNN learns from dissipative systems. We believe that the methodology is more broadly applicable and that it applies even when the conservation assumption does not hold.
We have used a mass-spring-damper system to experiment with the behavior of the HNN for dissipative systems. This is a non-conservative system, and the Hamiltonian formulated by the HNN is not a conventional solution to the mass-spring-damper system: the conserved quantity is not the total energy, and the generalized coordinates are not the position and momentum defined by classical mechanics. Nevertheless, a qualitative analysis of the trajectories shows that the HNN creates unique solutions for each value of the damping ratio, see Fig. 7. While we are unable to use conventional physics to understand the results of the HNN on the mass-spring-damper system, it is evident that the results can be used to discriminate between the different systems.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Class & **normal vs X** & **AUC** & **F1-score** \\ \hline
1 & horizontal-misalign. & 0.59 & 0.80 \\ \hline
2 & imbalance & 0.92 & 0.95 \\ \hline
3 & overhang & 0.85 & 0.85 \\ \hline
4 & underhang & 0.80 & 0.88 \\ \hline
5 & vertical-misalign. & 0.91 & 0.92 \\ \hline \end{tabular}
\end{table}
Table 1: Pairwise classification - results on testing set
Figure 1: SpectraQuest System: Alignment-Balance-Vibration
## 5 Conclusions
A novel predictive model is introduced to discriminate between normal and abnormal operating regimes of rotating machinery. The model is based on the total energy signature of the system learned using a Hamiltonian Neural Network. The performance measures obtained from the experimental data suggest that the proposed physics-informed features are an excellent candidate for machine fault classification.
Figure 4: Results on test set - multi-class problem
Figure 3: Results on test set - binary classification
Figure 2: The proposed fault classification model
## Acknowledgement
Research was sponsored by the National Institute of Food and Agriculture under Grant Number 2017-67017-26167 and by the Army Research Office under Grant Number W911NF-22-1-0035. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office, the National Institute of Food and Agriculture, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Figure 5: The effect of rotation frequency on the Hamiltonian
Figure 6: Phase portraits of various operating conditions
Figure 7: HNN results on variable damping ratios |
2302.02713 | Flat Seeking Bayesian Neural Networks | Bayesian Neural Networks (BNNs) provide a probabilistic interpretation for
deep learning models by imposing a prior distribution over model parameters and
inferring a posterior distribution based on observed data. The model sampled
from the posterior distribution can be used for providing ensemble predictions
and quantifying prediction uncertainty. It is well-known that deep learning
models with lower sharpness have better generalization ability. However,
existing posterior inferences are not aware of sharpness/flatness in terms of
formulation, possibly leading to high sharpness for the models sampled from
them. In this paper, we develop theories, the Bayesian setting, and the
variational inference approach for the sharpness-aware posterior. Specifically,
the models sampled from our sharpness-aware posterior, and the optimal
approximate posterior estimating this sharpness-aware posterior, have better
flatness, hence possibly possessing higher generalization ability. We conduct
experiments by leveraging the sharpness-aware posterior with state-of-the-art
Bayesian Neural Networks, showing that the flat-seeking counterparts outperform
their baselines in all metrics of interest. | Van-Anh Nguyen, Tung-Long Vuong, Hoang Phan, Thanh-Toan Do, Dinh Phung, Trung Le | 2023-02-06T11:40:44Z | http://arxiv.org/abs/2302.02713v5 | # Flat Seeking Bayesian Neural Networks
###### Abstract
Bayesian Neural Networks (BNNs) offer a probabilistic interpretation for deep learning models by imposing a prior distribution over model parameters and inferencing a posterior distribution based on observed data. The model sampled from the posterior distribution can be used for providing ensemble predictions and quantifying prediction uncertainty. It is well-known that deep learning models with a lower sharpness have a better generalization ability. Nonetheless, existing posterior inferences are not aware of sharpness/flatness, hence possibly leading to high-sharpness for the models sampled from it. In this paper, we develop theories, the Bayesian setting, and the variational inference approach for the sharpness-aware posterior. Specifically, the models sampled from our sharpness-aware posterior and the optimal approximate posterior estimating this sharpness-aware posterior have a better flatness, hence possibly possessing a higher generalization ability. We conduct experiments by leveraging the sharpness-aware posterior with the state-of-the-art Bayesian Neural Networks, showing that the flat-seeking counterparts outperform their baselines in all metrics of interest.
Machine Learning, Bayesian Neural Networks, Bayesian Neural Networks
## 1 Introduction
Bayesian Neural Networks (BNNs) (Neal, 2012) offer a probabilistic interpretation for deep learning models by imposing a prior distribution over model parameters and then making inference posterior distribution over model parameters from observed data. The models sampled from the posterior distribution allow us to not only make predictions but also quantify prediction uncertainty which is valuable for many real-world applications.
To sample deep learning models from complex posterior distributions, we can use Hamiltonian Monte Carlo (HMC) (Neal, 1996) or other advanced particle-sampling approaches, notably Stochastic Gradient HMC (SGHMC) (Chen et al., 2014), Stochastic Gradient Langevin Dynamics (SGLD) (Welling and Teh, 2011), and Stein Variational Gradient Descent (SVGD) (Liu and Wang, 2016). Although showing the ability to relieve the computational burden of HMC, the latter particle-sampling approaches are still computationally expensive, especially when many models need to be sampled for a better ensemble, because it is computationally prohibitive to maintain and manipulate many deep learning models in memory at the same time.
To make it more economical to sample multiple deep learning models from posterior distributions and alleviate the computational burden of BNN inferences, variational inference approaches employ approximate posteriors to estimate a true posterior. The essence is to use sufficiently rich approximate families whose elements are economical and convenient to sample from. Pioneering works (Graves, 2011; Blundell et al., 2015; Kingma et al., 2015) in variational inference assume approximate posteriors to be fully factorized distributions, also called mean-field variational inference, hence ignoring the strong statistical dependencies among random weights of neural nets and leading to the inability to capture the complicated structure of the true posterior and to estimate the true model uncertainty. To overcome this issue, later works (Zhang et al., 2018; Ritter et al., 2018; Rossi et al., 2020; Swiatkowski et al., 2020; Ghosh et al., 2018; Ong et al., 2018; Tomczak et al., 2020; Khan et al., 2018) attempt to provide posterior approximations with richer expressiveness.
For the standard training, flat minimizers have been found to improve the generalization ability of deep learning models because they enable models to find wider local minima, which are more robust against shifts between train and test sets (Jiang et al., 2020; Petzka et al., 2021; Dziugaite & Roy, 2017). Although seeking flat minimizers is a well-known principle to improve the generalization ability of deep learning models, the posteriors used in existing BNNs cannot be aware of the sharpness/flatness of the models drawn from them. Consequently, the models sampled from these posteriors can fall into regions with high sharpness and low flatness, hence possibly possessing an insufficient generalization ability. Moreover, in variational inference approaches, when dedicating approximate posteriors to estimate these non-
sharpness-aware posteriors, the models sampled from the corresponding optimal approximate posterior cannot be aware of the sharpness/flatness either, hence again possibly possessing an insufficient generalization ability.
In this paper, in the context of learning BNNs, we aim to develop a sharpness-aware posterior, by which the models sampled from this posterior have high flatness for a better generalization ability. Moreover, we also devise the Bayesian setting and the variational inference approach for this sharpness-aware posterior. Accordingly, the optimal approximate posteriors estimating this sharpness-aware posterior can generate flatter models for a better ensemble and further improve the generalization ability.
Specifically, our development pathway is as follows. In Theorem 3.1, we point out that the standard posterior is the optimal solution of an optimization problem that balances the empirical loss induced by the models sampled from an approximate posterior for fitting a training set against a Kullback-Leibler (KL) divergence for encouraging a simple approximate posterior. This motivates us to replace the empirical loss induced by the approximate posterior with the corresponding general loss over the entire data-label distribution for boosting the generalization ability. Inspired by sharpness-aware minimization (Foret et al., 2021), we develop an upper bound of the general loss in Theorem 3.2, which hints at the formulation of the sharpness-aware posterior in Theorem 3.3. Finally, we develop the Bayesian setting and variational approach for the sharpness-aware posterior.
Overall, our contributions in this paper can be summarized as follows:
* We propose and develop theories, the Bayesian setting, and the variational inference approach for the sharpness-aware posterior. This posterior enables us to sample a set of flat models that improve the model generalization ability.
* We conduct extensive experiments by leveraging our sharpness-aware posterior with the state-of-the-art and well-known BNNs, including SWAG (Maddox et al., 2019), MC-Dropout (Gal and Ghahramani, 2016), and Bayesian deep ensemble (Lakshminarayanan et al., 2017), to demonstrate that the flat-seeking counterparts consistently outperform the corresponding approaches in all metrics of interest, including the ensemble accuracy, expected calibration error (ECE), and negative log-likelihood (NLL).
Our paper is organized as follows. In Section 2, we introduce the literature review in Bayesian Neural Networks and flat minima. Section 3 is dedicated to our proposed sharpness-aware posterior. Finally, Section 4 presents the experiments and ablation studies, followed by the conclusion.
## 2 Related Work
### Bayesian Neural Networks
**Markov chain Monte Carlo (MCMC):** This approach allows us to sample multiple models from the posterior distribution and was well-known for inference with neural networks through the Hamiltonian Monte Carlo (HMC) (Neal, 1996). However, HMC requires the estimation of full gradients, which is computationally expensive for neural networks. To make the HMC framework practical, Stochastic Gradient HMC (SGHMC) (Chen et al., 2014) enables stochastic gradients to be used in Bayesian inference, crucial for both scalability and exploring a space of solutions. Alternatively, stochastic gradient Langevin dynamics (SGLD) (Welling and Teh, 2011) employs first-order Langevin dynamics in the stochastic gradient setting. Additionally, Stein Variational Gradient Descent (SVGD) (Liu and Wang, 2016) maintains a set of particles to gradually approach a posterior distribution. Theoretically, all SGHMC, SGLD, and SVGD asymptotically sample from the posterior in the limit of infinitely small step sizes.
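For concreteness, the SGLD update mentioned above takes the well-known form (Welling & Teh, 2011)

\[\theta_{t+1}=\theta_{t}-\frac{\epsilon_{t}}{2}\nabla_{\theta}\widetilde{U}\left(\theta_{t}\right)+\eta_{t},\qquad\eta_{t}\sim\mathcal{N}\left(0,\epsilon_{t}I\right),\]

where \(\widetilde{U}\) is a stochastic estimate of the negative log-posterior computed on a mini-batch and \(\epsilon_{t}\) is the step size.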
**Variational Inference**: This approach uses an approximate posterior distribution in a family to estimate the true posterior distribution by maximizing a variational lower bound. (Graves, 2011) suggests fitting a Gaussian variational posterior approximation over the weights of neural networks, which was generalized in (Kingma and Welling, 2013; Kingma et al., 2015; Blundell et al., 2015), using the reparameterization trick for training deep latent variable models. To provide posterior approximations with richer expressiveness, many extensive studies have been proposed. Notably, (Louizos and Welling, 2017) treats the weight matrix as a whole via a matrix variate Gaussian (Gupta and Nagar, 2018) and approximates the posterior based on this parametrization.
Several later works have inspected this distribution to examine different structured representations for the variational Gaussian posterior, such as Kronecker-factored (Zhang et al., 2018; Ritter et al., 2018; Rossi et al., 2020), k-tied distribution (Swiatkowski et al., 2020), non-centered or rank-1 parameterization (Ghosh et al., 2018; Dusenberry et al., 2020). Another recipe to represent the true covariance matrix of Gaussian posterior is through the low-rank approximation (Ong et al., 2018; Tomczak et al., 2020; Khan et al., 2018; Maddox et al., 2019).
**Dropout Variational Inference:** This approach utilizes dropout to characterize approximate posteriors. Typically, (Gal & Ghahramani, 2016) and (Kingma et al., 2015) use this principle to propose Bayesian Dropout inference methods such as MC Dropout and Variational Dropout. Concrete dropout (Gal et al., 2017) extends this idea to optimize the dropout probabilities. Variational Structured Dropout (Nguyen et al., 2021) employs Householder transformation to learn a structured representation for multiplicative Gaussian noise in the Variational Dropout method.
### Flat Minima
Flat minimizers have been found to improve the generalization ability of neural networks because they enable models to find wider local minima, by which the models will be more robust against shifts between train and test sets (Jiang et al., 2020; Petzka et al., 2021; Dziugaite & Roy, 2017). The relationship between generalization ability and the width of minima is theoretically and empirically investigated in many studies, notably (Hochreiter & Schmidhuber, 1994; Neyshabur et al., 2017; Dinh et al., 2017; Fort & Ganguli, 2019). Moreover, a variety of methods seeking flat minima have been proposed in (Pereyra et al., 2017; Chaudhari et al., 2017; Keskar et al., 2017; Izmailov et al., 2018; Foret et al., 2021).
Typically, (Keskar et al., 2017; Jastrzebski et al., 2017; Wei et al., 2020) investigate the impacts of different training factors, such as batch-size, learning rate, the covariance of gradient, dropout, on the flatness of found minima. Additionally, several approaches pursue wide local minima by adding regularization terms to the loss function (Pereyra et al., 2017; Zhang et al., 2018; Zhu et al., 2019; Chaudhari et al., 2017), e.g., softmax output's low entropy penalty, (Pereyra et al., 2017), distillation losses (Zhang et al., 2018; Zhu et al., 2019).
Recently, SAM (Foret et al., 2021), a method that seeks flat regions by explicitly minimizing the worst-case loss around the current model, has received significant attention due to its effectiveness and scalability compared to previous methods. Particularly, it has been exploited in a variety of tasks and domains (Cha et al., 2021; Abbas et al., 2022; Qu et al., 2022; Caldarola et al., 2022; Bahri et al., 2022; Chen et al., 2021). A notable example is the improvement that SAM brings to meta-learning bi-level optimization in (Abbas et al., 2022). Another application of SAM is in federated learning (FL) (Qu et al., 2022) in which the authors achieved tighter convergence rates than existing FL works, and proposed a generalization bound for the global model.
In addition, SAM shows its generalization ability in vision models (Chen et al., 2021), language models (Bahri et al., 2022), domain generalization (Cha et al., 2021), and multi-task learning (Phan et al., 2022). Some other works attempt to improve by exploiting geometry SAM (Kwon et al., 2021), additionally minimizing surrogate gap (Zhuang et al., 2022), and speeding up SAM training time (Du et al., 2022; Liu et al., 2022).
## 3 Proposed Framework
In what follows, we present the technical details of our proposed sharpness-aware posterior. Particularly, Section 3.1 introduces the problem setting and motivation for our sharpness-aware posterior. Section 3.2 is dedicated to our theory development, while Section 3.3 is used to describe the Bayesian setting and variational inference approach for our sharpness-aware posterior.
### Problem Setting and Motivation
We aim to develop Sharpness-Aware Bayesian Neural Networks (SA-BNN). Consider a family of neural networks \(f_{\theta}(x)\) with \(\theta\in\Theta\) and a training set \(\mathcal{S}=\{(x_{1},y_{1}),...,(x_{n},y_{n})\}\) where \((x_{i},y_{i})\sim\mathcal{D}\). We wish to learn a posterior distribution \(\mathbb{Q}_{\mathcal{S}}^{SA}\) with the density function \(q^{SA}(\theta|\mathcal{S})\) such that any model \(\theta\sim\mathbb{Q}_{\mathcal{S}}^{SA}\) is aware of the sharpness when predicting over the training set \(\mathcal{S}\).
We depart from the standard posterior
\[q(\theta\mid\mathcal{S})\propto\prod_{i=1}^{n}p(y_{i}\mid x_{i},\mathcal{S}, \theta)p(\theta),\]
where the prior distribution \(\mathbb{P}\) has the density function \(p(\theta)\) and the likelihood has the form
\[p\left(y\mid x,\mathcal{S},\theta\right) \propto\exp\left\{-\frac{\lambda}{|\mathcal{S}|}\ell\left(f_{ \theta}(x),y\right)\right\}\] \[=\exp\left\{-\frac{\lambda}{n}\ell\left(f_{\theta}(x),y\right)\right\}\]
with the loss function \(\ell\). The standard posterior \(\mathbb{Q}_{\mathcal{S}}\) has the density function defined as
\[q(\theta\mid\mathcal{S})\propto\exp\left\{-\frac{\lambda}{n}\sum_{i=1}^{n} \ell\left(f_{\theta}\left(x_{i}\right),y_{i}\right)\right\}p(\theta), \tag{1}\]
where \(\lambda\geq 0\) is a regularization parameter.
We define the general and empirical losses as follows:
\[\mathcal{L}_{\mathcal{D}}\left(\theta\right)=\mathbb{E}_{(x,y)\sim\mathcal{D}} \left[\ell\left(f_{\theta}\left(x\right),y\right)\right].\]
\[\mathcal{L}_{\mathcal{S}}\left(\theta\right)=\mathbb{E}_{(x,y)\sim\mathcal{S}} \left[\ell\left(f_{\theta}\left(x\right),y\right)\right]=\frac{1}{n}\sum_{i=1}^ {n}\ell\left(f_{\theta}\left(x_{i}\right),y_{i}\right).\]
Basically, the general loss is defined as the expected loss over the entire data-label distribution \(\mathcal{D}\), while the empirical loss is defined as the empirical loss over a specific training set \(\mathcal{S}\).
The standard posterior in Eq. (1) can be rewritten as
\[q(\theta\mid\mathcal{S})\propto\exp\left\{-\lambda\mathcal{L}_{\mathcal{S}} \left(\theta\right)\right\}p(\theta). \tag{2}\]
Given a distribution \(\mathbb{Q}\) with the density function \(q\left(\theta\right)\) over the model parameters \(\theta\in\Theta\), we define the empirical and general losses over this model distribution \(\mathbb{Q}\) as
\[\mathcal{L}_{\mathcal{S}}\left(\mathbb{Q}\right) =\int_{\Theta}\mathcal{L}_{\mathcal{S}}\left(\theta\right)d \mathbb{Q}\left(\theta\right)=\int_{\Theta}\mathcal{L}_{\mathcal{S}}\left( \theta\right)q\left(\theta\right)d\theta.\] \[\mathcal{L}_{\mathcal{D}}\left(\mathbb{Q}\right) =\int_{\Theta}\mathcal{L}_{\mathcal{D}}\left(\theta\right)d \mathbb{Q}\left(\theta\right)=\int_{\Theta}\mathcal{L}_{\mathcal{D}}\left( \theta\right)q\left(\theta\right)d\theta.\]
Specifically, the general loss over the model distribution \(\mathbb{Q}\) is defined as the expectation of the general losses incurred by the models sampled from this distribution, while the empirical loss over the model distribution \(\mathbb{Q}\) is defined as the expectation of the empirical losses incurred by the models sampled from this distribution.
### Our Theory Development
We now present the theory development for the sharpness-aware posterior. Inspired by the Gibbs form of the standard posterior \(\mathbb{Q}_{\mathcal{S}}\) in Eq. (2), we establish the following theorem to connect the standard posterior \(\mathbb{Q}_{\mathcal{S}}\) with the density \(q(\theta\mid\mathcal{S})\) and the empirical loss \(\mathcal{L}_{\mathcal{S}}\left(\mathbb{Q}\right)\)(Catoni, 2007; Alquier et al., 2016).
**Theorem 3.1**.: _Consider the following optimization problem_
\[\min_{\mathbb{Q}<<\mathbb{P}}\left\{\lambda\mathcal{L}_{\mathcal{S}}\left( \mathbb{Q}\right)+KL\left(\mathbb{Q},\mathbb{P}\right)\right\}, \tag{3}\]
_where we search over \(\mathbb{Q}\) absolutely continuous w.r.t. \(\mathbb{P}\) and \(KL\left(\cdot,\cdot\right)\) is the Kullback-Leibler divergence. This optimization has a closed-form optimal solution \(\mathbb{Q}^{*}\) with the density_
\[q^{*}\left(\theta\right)\propto\exp\left\{-\lambda\mathcal{L}_{\mathcal{S}} \left(\theta\right)\right\}p(\theta),\]
_which is exactly the standard posterior \(\mathbb{Q}_{\mathcal{S}}\) with the density \(q(\theta\mid\mathcal{S})\)._
Theorem 3.1 whose proof can be found in Appendix A.1 reveals that we need to find the posterior \(\mathbb{Q}_{\mathcal{S}}\) that balances between optimizing its empirical loss \(\mathcal{L}_{\mathcal{S}}\left(\mathbb{Q}\right)\) and simplicity via \(KL\left(\mathbb{Q},\mathbb{P}\right)\). However, minimizing the empirical loss \(\mathcal{L}_{\mathcal{S}}\left(\mathbb{Q}\right)\) only ensures the correct predictions for the training examples in \(\mathcal{S}\) and might encounter overfitting. Hence, it is natural to replace the empirical loss by the general loss to combat overfitting.
To mitigate overfitting, in (3) we replace the empirical loss by the general loss and aim to solve the following optimization problem (OP):
\[\min_{\mathbb{Q}<<\mathbb{P}}\left\{\lambda\mathcal{L}_{\mathcal{D}}\left( \mathbb{Q}\right)+KL\left(\mathbb{Q},\mathbb{P}\right)\right\}. \tag{4}\]
However, solving the optimization problem (OP) in (4) is generally intractable. To make it tractable, we derive an upper bound that is relevant to the sharpness, as shown in the following theorem.
**Theorem 3.2**.: _Assume that \(\Theta\) is a compact set. Under some mild conditions, given any \(\delta\in[0;1]\), with the probability at least \(1-\delta\) over the choice of \(\mathcal{S}\sim\mathcal{D}^{n}\), for any distribution \(\mathbb{Q}\), we have_
\[\mathcal{L}_{\mathcal{D}}\left(\mathbb{Q}\right)\leq\mathbb{E}_{\theta\sim\mathbb{Q}}\left[\max_{\theta^{\prime}:\|\theta^{\prime}-\theta\|\leq\rho}\mathcal{L}_{\mathcal{S}}\left(\theta^{\prime}\right)\right]+f\left(\max_{\theta\in\Theta}\|\theta\|^{2},n\right),\]
_where \(f\) is a non-decreasing function w.r.t. the first variable and approaches \(0\) when the training size \(n\) approaches \(\infty\)._
The proof of Theorem 3.2 can be found in Appendix A.1. Here we note that this proof is not a trivial extension of sharpness-aware minimization because we need to tackle the general and empirical losses over a distribution \(\mathbb{Q}\). Moreover, inspired by Theorem 3.2, we propose solving the following OP which forms an upper-bound of the desirable OP in (4)
\[\min_{\mathbb{Q}<<\mathbb{P}}\left\{\lambda\mathbb{E}_{\theta\sim\mathbb{Q}}\left[\max_{\theta^{\prime}:\|\theta^{\prime}-\theta\|\leq\rho}\mathcal{L}_{\mathcal{S}}\left(\theta^{\prime}\right)\right]+KL\left(\mathbb{Q},\mathbb{P}\right)\right\}. \tag{5}\]
The following theorem characterizes the optimal solution of the OP in (5).
**Theorem 3.3**.: _The optimal solution the OP in (5) is the sharpness-aware posterior distribution \(\mathbb{Q}_{S}^{SA}\) with the density function \(q^{SA}(\theta|\mathcal{S})\):_
\[q^{SA}(\theta|\mathcal{S}) \propto\exp\left\{-\lambda\max_{\theta^{\prime}:\|\theta^{\prime} -\theta\|\leq\rho}\mathcal{L}_{\mathcal{S}}\left(\theta^{\prime}\right) \right\}p\left(\theta\right)\] \[=\exp\left\{-\lambda\mathcal{L}_{\mathcal{S}}\left(s\left(\theta \right)\right)\right\}p\left(\theta\right),\]
_where we have defined \(s\left(\theta\right)=\underset{\theta^{\prime}:\|\theta^{\prime}-\theta\|\leq\rho}{\text{argmax}}\mathcal{L}_{\mathcal{S}}\left(\theta^{\prime}\right)\)._
Theorem 3.3, whose proof can be found in Appendix A.1, describes the closed form of the sharpness-aware posterior distribution \(\mathbb{Q}_{\mathcal{S}}^{SA}\) with the density function \(q^{SA}(\theta|\mathcal{S})\). Based on this characterization, in what follows, we introduce the sharpness-aware Bayesian setting that sheds light on its variational approach.
### Sharpness-Aware Bayesian Setting and Its Variational Approach
Bayesian Setting: To promote the Bayesian setting for the sharpness-aware posterior distribution \(\mathbb{Q}_{\mathcal{S}}^{SA}\), we examine the sharpness-aware likelihood
\[p^{SA}\left(y\mid x,\mathcal{S},\theta\right) \propto\exp\left\{-\frac{\lambda}{|\mathcal{S}|}\ell\left(f_{s \left(\theta\right)}(x),y\right)\right\}\] \[=\exp\left\{-\frac{\lambda}{n}\ell\left(f_{s\left(\theta\right)}( x),y\right)\right\},\]
where \(s\left(\theta\right)=\underset{\theta^{\prime}:\|\theta^{\prime}-\theta\|\leq \rho}{\text{argmax}}\mathcal{L}_{\mathcal{S}}\left(\theta^{\prime}\right)\).
With this predefined sharpness-aware likelihood, we can recover the sharpness-aware posterior distribution \(\mathbb{Q}_{\mathcal{S}}^{SA}\) with the density function \(q^{SA}(\theta|\mathcal{S})\):
\[q^{SA}(\theta|\mathcal{S})\propto\prod_{i=1}^{n}p^{SA}\left(y_{i}\mid x_{i}, \mathcal{S},\theta\right)p\left(\theta\right).\]
Variational inference for the sharpness-aware posterior distribution: We now develop the variational inference for the sharpness-aware posterior distribution. Let us denote \(X=\left[x_{1},...,x_{n}\right]\) and \(Y=\left[y_{1},...,y_{n}\right]\). Considering an approximate posterior family \(\left\{q_{\phi}\left(\theta\right):\phi\in\Phi\right\}\), we have
\[\log p^{SA}\left(Y\mid X,\mathcal{S}\right)=\int_{\Theta}q_{\phi}\left(\theta\right)\log p^{SA}\left(Y\mid X,\mathcal{S}\right)d\theta\] \[=\int_{\Theta}q_{\phi}\left(\theta\right)\log\frac{p^{SA}\left(Y\mid\theta,X,\mathcal{S}\right)p\left(\theta\right)}{q^{SA}(\theta|\mathcal{S})}d\theta\] \[=\int_{\Theta}q_{\phi}\left(\theta\right)\log\left[\frac{p^{SA}\left(Y\mid\theta,X,\mathcal{S}\right)p\left(\theta\right)}{q_{\phi}\left(\theta\right)}\cdot\frac{q_{\phi}\left(\theta\right)}{q^{SA}(\theta|\mathcal{S})}\right]d\theta\] \[=\mathbb{E}_{q_{\phi}\left(\theta\right)}\left[\sum_{i=1}^{n}\log p^{SA}\left(y_{i}\mid x_{i},\mathcal{S},\theta\right)\right]-KL\left(q_{\phi},p\right)+KL\left(q_{\phi},q^{SA}\right).\]
It is obvious that we need to maximize the following lower bound for maximally reducing the gap \(KL\left(q_{\phi},q^{SA}\right)\):
\[\max_{q_{\phi}}\left\{\mathbb{E}_{q_{\phi}\left(\theta\right)} \left[\sum_{i=1}^{n}\log p^{SA}\left(y_{i}\mid x_{i},\mathcal{S},\theta\right) \right]-KL\left(q_{\phi},p\right)\right\},\]
which can be equivalently rewritten as
\[\min_{q_{\phi}}\left\{\lambda\mathbb{E}_{q_{\phi}\left(\theta\right)}\left[\mathcal{L}_{\mathcal{S}}\left(s\left(\theta\right)\right)\right]+KL\left(q_{\phi},p\right)\right\}\text{ or}\] \[\min_{q_{\phi}}\left\{\lambda\mathbb{E}_{q_{\phi}\left(\theta\right)}\left[\max_{\theta^{\prime}:\left\|\theta^{\prime}-\theta\right\|\leq\rho}\mathcal{L}_{\mathcal{S}}\left(\theta^{\prime}\right)\right]+KL\left(q_{\phi},p\right)\right\}. \tag{6}\]
Furthermore, in the OP in (6), the first term implies that we seek an approximate posterior distribution such that any model sampled from it is aware of the sharpness. Meanwhile, the second term prefers simpler approximate posterior distributions and can be estimated depending on how the approximate posterior distribution is parameterized.
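Concretely, the inner maximization in (6) is intractable and is commonly handled with SAM's single first-order ascent step (Foret et al., 2021), \(s(\theta)\approx\theta+\rho\nabla_{\theta}\mathcal{L}_{\mathcal{S}}(\theta)/\left\|\nabla_{\theta}\mathcal{L}_{\mathcal{S}}(\theta)\right\|\). The sketch below is our own reading of one resulting training step, assuming a reparameterizable diagonal Gaussian approximate posterior and leaving the data loss and KL term as abstract callables:

```python
import torch

def sam_perturbation(grads, rho=0.05):
    """First-order solution of max_{||e|| <= rho} L(theta + e): e = rho * g / ||g||."""
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    return [rho * g / norm for g in grads]

def flat_vi_step(mu, log_sigma, data_loss, kl_term, opt, rho=0.05):
    # Sample theta ~ q_phi via the reparameterization trick.
    theta = [m + ls.exp() * torch.randn_like(m) for m, ls in zip(mu, log_sigma)]

    # Ascent: gradient of the empirical loss at the sampled theta.
    grads = torch.autograd.grad(data_loss(theta), theta)
    with torch.no_grad():
        delta = sam_perturbation(grads, rho)

    # Descent: perturbed loss plus KL(q_phi, p), backpropagated into (mu, log_sigma).
    theta_adv = [t + d for t, d in zip(theta, delta)]
    total = data_loss(theta_adv) + kl_term(mu, log_sigma)
    opt.zero_grad(); total.backward(); opt.step()
    return total.item()
```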
With the Bayesian setting and variational inference formulation, our proposed sharpness-aware posterior is ready to be incorporated into the MCMC-based, variational inference-based Bayesian Neural Networks.
## 4 Experiments
In this section, we conduct various experiments to demonstrate the effectiveness of the sharpness-aware approach on Bayesian Neural Networks, including SWAG (Maddox et al., 2019) (with its variations SWA, SWAG-Diagonal, and SWAG), MC-Dropout (Gal & Ghahramani, 2016), and Deep Ensemble (Lakshminarayanan et al., 2017).
The experiments are conducted on three benchmark datasets: CIFAR-10, CIFAR-100, and ImageNet ILSVRC-2012, and report accuracy, negative log-likelihood (NLL), and Expected Calibration Error (ECE) to estimate the calibration capability and uncertainty of our method against baselines. Additionally, we visualize the loss-landscape and sharpness score of models to examine the loss geometry awareness of our method and its effectiveness in the Bayesian setting.
We note that the purpose of the experiments is not to seek state-of-the-art performance. Rather, we wish to demonstrate the usefulness of the sharpness-aware posterior when incorporated with specific Bayesian Neural Networks. We first conduct ablation studies to illustrate that the models sampled from our sharpness-aware posterior are flatter than those sampled from the standard posterior (cf. Section 4.4) and that the ensemble models produced by our sharpness-aware posterior have lower sharpness compared to the standard posterior (cf. Section 4.4.2). Resultantly, Bayesian Neural Networks incorporated with our sharpness-aware posterior achieve better predictive performance and calibration of uncertainty estimates than their counterpart baselines.
### Dataset and implementation detail
**CIFAR:** We run experiments with PreResNet-164 and WideResNet28x10 on both CIFAR-10 and CIFAR-100. These datasets contain 60,000 images in total, with 50,000 instances for training and 10,000 for testing. For each network and dataset pair, we apply the Sharpness-Aware Bayesian approach to various settings: F-SWA, F-SWAG-Diag, F-SWAG, F-SWAG-Dropout, F-MC-Dropout, and F-Deep-Ensemble. Note that we train all models for 300 epochs and start to collect models after epoch 161 for the F-SWA and F-SWAG settings, the same as in (Maddox et al., 2019). We choose \(\rho=0.05\) for CIFAR10 and \(\rho=0.1\) for CIFAR100 in all experiments. For inference, we evaluate a single model for F-SWA and ensemble 30 sampled models for the other settings. For the Deep-Ensemble method, we reproduce results by training three independent models and ensembling them for inference. All hyper-parameters and processes are the same for F-Deep-Ensemble. To get stable results, we run each setting three times and report the mean and standard deviation.
**ImageNet:** This is a large and challenging dataset with 1000 labels. We run experiments with Densenet-161 and ResNet-152 on F-SWA, F-SWAG-Diag, and F-SWAG. The results are shown in Table 3. For each setting, we use the pre-trained weights on ImageNet collected from the _torchvision_ package and fine-tune for 10 epochs with \(\rho=0.05\). We start
collecting models 4 times per epoch from the beginning of the fine-tuning process for all settings and evaluate the models in the same way as in the experiments on the CIFAR datasets.
The results of SWA, SWAG-Diag, SWAG, and MC-Dropout are collected from the original paper (Maddox et al., 2019). There are no available reported results of Deep-Ensemble on the examined networks, so we report the reproduced results, marked with *. 1
Footnote 1: The implementation is provided in [https://anonymous.4open.science/r/flat_bnn-B46B](https://anonymous.4open.science/r/flat_bnn-B46B)
### Metrics
**Ensemble accuracy \(\uparrow\) (ACC)** is computed by sampling several models from the Bayesian Neural Networks and then ensembling the prediction probabilities.
**Negative log-likelihood \(\downarrow\) (NLL)** is computed by sampling several models from the Bayesian Neural Networks and evaluating the negative log-likelihood of the prediction probabilities.
**Expected calibration error \(\downarrow\) (ECE)** compares the predicted probability (or confidence) of a model to its accuracy (Naeini et al., 2015; Guo et al., 2017). To compute this error, we first bin the confidence interval \([0,1]\) into \(M\) equal bins, then categorize data samples into these bins according to their confidence scores.
We finally compute the absolute value of the difference between the average confidence and the average accuracy within each bin, and report the average value over all bins as the ECE. Specifically, let \(B_{m}\) denote the set of indices of samples having their confidence scores belonging to the \(m^{th}\) bin. The average accuracy and the average confidence within this bin are:
\[acc(B_{m}) =\frac{1}{|B_{m}|}\sum_{i\in B_{m}}\mathbf{1}[\hat{y_{i}}=y_{i}],\] \[conf(B_{m}) =\frac{1}{|B_{m}|}\sum_{i\in B_{m}}\hat{p}(x_{i}).\]
Then the ECE of the model is defined as:
\[ECE=\sum_{m=1}^{M}\frac{|B_{m}|}{N}|acc(B_{m})-conf(B_{m})|.\]
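For reference, this quantity can be computed in a few lines; the sketch below (ours) follows the uniform binning described above, with \(M=20\) bins as used in Figure 2:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=20):
    """probs: (N, C) predicted probabilities; labels: (N,) true class indices."""
    conf = probs.max(axis=1)     # confidence of each prediction
    pred = probs.argmax(axis=1)  # predicted class
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(labels)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)  # samples falling into B_m
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            ece += in_bin.sum() / n * gap    # |B_m|/N * |acc(B_m) - conf(B_m)|
    return ece
```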
### Experimental results
#### 4.3.1 Predictive performance
The experimental results are shown in Table 1 for CIFAR-100, Table 2 for CIFAR-10, and Table 3 for the ImageNet dataset. There is a clear improvement in all experiments. Especially on ImageNet and CIFAR-10, our method consistently surpasses the baselines on all three calibration metrics, which demonstrates the stability and reliability of the model in uncertainty estimation. On CIFAR-100, we gain from 0.7% to 1.03% on PreResNet-164 and at least 1.2% on the WideResNet28x10 model. We observe that there is a trade-off between accuracy, negative log-likelihood, and expected calibration error. However, the trade-off is minor compared to the overall improvement.
The Deep-Ensemble method trains several models of the
Figure 1: Comparing loss landscape of PreResNet-164 on CIFAR-100 dataset training with SWAG and F-SWAG method. For visualization purposes, we sample two models for each SWAG and F-SWAG and then plot the loss landscapes. It can be observed that the loss landscapes of our F-SWAG are flatter, supporting our argument for the flatter sampled models.
same network independently and then ensembles them for inference. We ensemble three models and obtain remarkable results for both the baseline and sharpness-aware settings. Additionally, this simple yet effective method takes much longer to train but is much faster at inference. Again, our approach surpasses the baseline in all metrics of interest.
#### 4.3.2 Calibration of uncertainty estimates
We evaluate the ECE of each setting and compare it to the baselines in Tables 1, 2, and 3. This score measures the maximum discrepancy between the accuracy and confidence of the model. To further clarify it, we display the reliability diagrams of PreResNet-164 on CIFAR-100 to understand how well the model predicts according to the confidence threshold in Figure 2. To produce this plot, the predicted confidence score is split into 20 bins uniformly. For each bin, we calculate the mean confidence and the accuracy of test samples within the bin. The difference between these two scores is displayed. A plot closer to the zero line (black dashed line in Figure 2) is better. Also, a plot above this line represents over-confident predictions, while one below this line represents under-confident predictions. As can be seen, our methods F-SWA, F-SWAG-Diag, and F-SWAG are closer to the zero line in most bins, while the corresponding baselines tend to be over-confident. A flatter model can generate smoother outputs, thereby mitigating the over- or under-confidence problems. This demonstrates the sharpness-aware effect on reliability
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline & \multicolumn{3}{c}{**PreResNet-164**} & \multicolumn{3}{c}{**WideResNet28x10**} \\ Method & ACC \(\uparrow\) & NLL \(\downarrow\) & ECE \(\downarrow\) & ACC \(\uparrow\) & NLL \(\downarrow\) & ECE \(\downarrow\) \\ \hline \hline SWA & 80.19 \(\pm\) 0.52 & 0.7370 \(\pm\) 0.0265 & 0.0700 \(\pm\) 0.0056 & 82.40 \(\pm\) 0.16 & 0.6684 \(\pm\) 0.0034 & 0.0684 \(\pm\) 0.0022 \\ F-SWA & **80.94 \(\pm\) 0.23** & **0.6839 \(\pm\) 0.0047** & **0.0482 \(\pm\) 0.0025** & **83.61 \(\pm\) 0.29** & **0.5801 \(\pm\) 0.0174** & **0.0283 \(\pm\) 0.0042** \\ SWAG-Diag & 80.18 \(\pm\) 0.50 & 0.6837 \(\pm\) 0.0186 & **0.0239 \(\pm\) 0.0047** & 82.40 \(\pm\) 0.09 & 0.6150 \(\pm\) 0.0029 & 0.0322 \(\pm\) 0.0018 \\ F-SWAG-Diag & **81.01 \(\pm\) 0.29** & **0.6645 \(\pm\) 0.0050** & 0.0242 \(\pm\) 0.0039 & **83.50 \(\pm\) 0.29** & **0.5763 \(\pm\) 0.0120** & **0.0151 \(\pm\) 0.0020** \\ SWAG & 79.90 \(\pm\) 0.50 & **0.6595 \(\pm\) 0.019** & 0.0587 \(\pm\) 0.0048 & 82.23 \(\pm\) 0.19 & 0.6078 \(\pm\) 0.0066 & **0.0113 \(\pm\) 0.0026** \\ F-SWAG & **80.93 \(\pm\) 0.27** & 0.6704 \(\pm\) 0.0049 & **0.0350 \(\pm\) 0.0025** & **83.57 \(\pm\) 0.26** & **0.5757 \(\pm\) 0.0136** & 0.0196 \(\pm\) 0.0015 \\ \hline MC-Dropout & - & - & - & 82.30 \(\pm\) 0.19 & 0.6500 \(\pm\) 0.0049 & 0.0574 \(\pm\) 0.0028 \\ F-MC-Dropout & 81.06 \(\pm\) 0.44 & 0.7027 \(\pm\) 0.0049 & 0.0514 \(\pm\) 0.0047 & **83.24 \(\pm\) 0.11** & **0.6144 \(\pm\) 0.0068** & **0.0250 \(\pm\) 0.0027** \\ \hline Deep-ens* & 82.08 \(\pm\) 0.42 & 0.7189 \(\pm\) 0.0108 & 0.0334 \(\pm\) 0.0064 & 83.04 \(\pm\) 0.15 & 0.6958 \(\pm\) 0.0335 & 0.0483 \(\pm\) 0.0017 \\ F-Deep-ens & **82.54 \(\pm\) 0.10** & **0.6286 \(\pm\) 0.0022** & **0.0143 \(\pm\) 0.0041** & **84.52 \(\pm\) 0.03** & **0.5644 \(\pm\) 0.0106** & **0.0191 \(\pm\) 0.0039** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification score on CIFAR100 dataset. * Experiments are reproduced with the same hyper-parameters as our corresponding method for a fair comparison
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline & \multicolumn{3}{c}{**Densenet-161**} & \multicolumn{3}{c}{**ResNet-152**} \\ Model & ACC \(\uparrow\) & NLL \(\downarrow\) & ECE \(\downarrow\) & ACC \(\uparrow\) & NLL \(\downarrow\) & ECE \(\downarrow\) \\ \hline \hline SWA & 78.60 & 0.8655 & 0.0509 & 78.92 & 0.8682 & 0.0605 \\ F-SWA & **78.72** & **0.8269** & **0.0201** & **79.15** & **0.8080** & **0.0211** \\ SWAG-Diag & 78.59 & 0.8559 & 0.0459 & 78.96 & 0.8584 & 0.0566 \\ F-SWAG-Diag & **78.71** & **0.8267** & **0.0194** & **79.20** & **0.8065** & **0.0199** \\ SWAG & 78.59 & 0.8303 & 0.0204 & 79.08 & 0.8205 & 0.0279 \\ F-SWAG & **78.70** & **0.8262** & **0.0185** & **79.17** & **0.8078** & **0.0208** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Classification score on ImageNet dataset
\begin{table}
\end{table}
Table 2: Classification score on CIFAR10 dataset. * Experiments are reproduced with the same hyper-parameters as our corresponding method for a fair comparison
for BNNs, reflected by the better ECE score overall.
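For reference, the ECE values reported in the tables can be computed with the standard binning estimator; the sketch below is a minimal, assumed NumPy implementation (array names are placeholders), using the 20 equal-width confidence bins that also underlie the reliability diagrams in Figure 2.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=20):
    """ECE over equal-width confidence bins (20 bins, as in Figure 2)."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = np.mean(predictions[mask] == labels[mask])
            conf = np.mean(confidences[mask])
            ece += mask.mean() * abs(acc - conf)  # weight by bin frequency
    return ece
```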
### Ablation studies
#### 4.4.1 Loss landscape
In Figure 1, we plot the loss landscape of models sampled from our proposed sharpness-aware posterior against that of the non-sharpness-aware one. In particular, we compare the two methods F-SWAG and SWAG by selecting four random models sampled from the posterior distribution of each method under the same hyper-parameter settings. As observed, our method not only improves the generalization of ensemble inference, as demonstrated by the classification results in Section 4 and the sharpness analysis in Section 4.4.2, but also yields individual sampled models that are themselves flatter.
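For concreteness, a 2D loss slice of this kind can be produced by evaluating the loss on a grid spanned by two random directions around the sampled weights. The sketch below is a minimal, assumed implementation (`model`, `loss_fn`, and `loader` are placeholders), using per-tensor norm rescaling of the random directions, a common practice in loss-landscape visualization rather than our exact plotting code.

```python
import torch

def rand_direction(params):
    # Random direction, rescaled per tensor to match the weight norm so that
    # layers of very different scale are perturbed comparably.
    ds = [torch.randn_like(p) for p in params]
    return [d * p.norm() / (d.norm() + 1e-12) for d, p in zip(ds, params)]

@torch.no_grad()
def loss_grid(model, loss_fn, loader, steps=21, radius=1.0):
    params = list(model.parameters())
    backup = [p.clone() for p in params]
    d1, d2 = rand_direction(params), rand_direction(params)
    axis = torch.linspace(-radius, radius, steps)
    grid = torch.zeros(steps, steps)
    for i, a in enumerate(axis):
        for j, b in enumerate(axis):
            for p, w, u, v in zip(params, backup, d1, d2):
                p.copy_(w + a * u + b * v)          # theta + a*d1 + b*d2
            losses = [loss_fn(model(x), y) for x, y in loader]
            grid[i, j] = torch.stack(losses).mean()
    for p, w in zip(params, backup):
        p.copy_(w)                                   # restore the weights
    return grid
```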
#### 4.4.2 Sharpness evaluation
We measure and visualize the sharpness of the models. To this end, we sample five models from the approximate posteriors and then take the average of the sharpness of these models. For a model \(\theta\), the sharpness is evaluated as \(\max\limits_{||\epsilon||_{2}\leq\rho}\mathcal{L}_{\mathcal{S}}(\theta+ \epsilon)-\mathcal{L}_{\mathcal{S}}(\theta)\) to measure the change of loss value around \(\theta\).
We calculate the sharpness score of the PreResNet-164 network for SWAG-Diag and SWAG, and for the flat settings F-SWAG-Diag and F-SWAG, trained on the CIFAR100 dataset, and visualize them in Figure 3. The sharpness-aware settings yield smaller _sharpness_ scores than the corresponding baselines, indicating that our models reach flatter regions.
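A minimal PyTorch sketch of this sharpness measure is given below; the inner maximization over \(\|\epsilon\|_{2}\leq\rho\) is approximated by a single normalized gradient-ascent step, in the spirit of SAM, and a single batch `(x, y)` stands in for the full training loss \(\mathcal{L}_{\mathcal{S}}\). The function names and the one-step approximation are illustrative assumptions; in practice one would average the score over several batches and sampled models.

```python
import torch

def sharpness(model, loss_fn, x, y, rho=0.05):
    params = [p for p in model.parameters() if p.requires_grad]
    base = loss_fn(model(x), y)
    grads = torch.autograd.grad(base, params)
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    eps = [rho * g / (norm + 1e-12) for g in grads]   # ascent step, ||eps||_2 = rho
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)                                 # evaluate at theta + eps
        perturbed = loss_fn(model(x), y)
        for p, e in zip(params, eps):
            p.sub_(e)                                 # restore theta
    return (perturbed - base).item()
```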
## 5 Conclusion
In this paper, we devise the theory, the Bayesian setting, and the variational inference for the sharpness-aware posterior in the context of Bayesian Neural Networks (BNNs). The sharpness-aware posterior enables both the models sampled from it and the optimal approximate posterior estimating it to have higher flatness, hence possibly possessing a better generalization ability. We conduct extensive experiments leveraging the sharpness-aware posterior with state-of-the-art Bayesian Neural Networks, showing that the flat-seeking counterparts outperform their baselines in all metrics of interest, including ensemble accuracy, ECE, and NLL.
Figure 3: Sharpness evaluated over training epochs. The average sharpness of the models sampled from our sharpness-aware posterior is smaller than that of the corresponding baseline.
Figure 2: Reliability diagrams for PreResNet164 on CIFAR-100. Confidence is split into 20 bins and the gap between confidence and accuracy is plotted in each bin. The best case is the black dashed line, where this gap is zero. The plots of F-SWAG lie closer to the zero line, implying that F-SWAG calibrates the uncertainty better.
2308.02760 | Neural Collapse in the Intermediate Hidden Layers of Classification
Neural Networks | Neural Collapse (NC) gives a precise description of the representations of
classes in the final hidden layer of classification neural networks. This
description provides insights into how these networks learn features and
generalize well when trained past zero training error. However, to date, (NC)
has only been studied in the final layer of these networks. In the present
paper, we provide the first comprehensive empirical analysis of the emergence
of (NC) in the intermediate hidden layers of these classifiers. We examine a
variety of network architectures, activations, and datasets, and demonstrate
that some degree of (NC) emerges in most of the intermediate hidden layers of
the network, where the degree of collapse in any given layer is typically
positively correlated with the depth of that layer in the neural network.
Moreover, we remark that: (1) almost all of the reduction in intra-class
variance in the samples occurs in the shallower layers of the networks, (2) the
angular separation between class means increases consistently with hidden layer
depth, and (3) simple datasets require only the shallower layers of the
networks to fully learn them, whereas more difficult ones require the entire
network. Ultimately, these results provide granular insights into the
structural propagation of features through classification neural networks. | Liam Parker, Emre Onal, Anton Stengel, Jake Intrater | 2023-08-05T01:19:38Z | http://arxiv.org/abs/2308.02760v1 | # Neural Collapse in the Intermediate Hidden Layers of Classification Neural Networks
###### Abstract
_Neural Collapse_ (\(\mathcal{NC}\)) gives a precise description of the representations of classes in the final hidden layer of classification neural networks. This description provides insights into how these networks learn features and generalize well when trained past zero training error. However, to date, \(\mathcal{NC}\) has only been studied in the final layer of these networks. In the present paper, we provide the first comprehensive empirical analysis of the emergence of \(\mathcal{NC}\) in the intermediate hidden layers of these classifiers. We examine a variety of network architectures, activations, and datasets, and demonstrate that some degree of \(\mathcal{NC}\) emerges in most of the intermediate hidden layers of the network, where the degree of collapse in any given layer is typically positively correlated with the depth of that layer in the neural network. Moreover, we remark that: (1) almost all of the reduction in intra-class variance in the samples occurs in the shallower layers of the networks, (2) the angular separation between class means increases consistently with hidden layer depth, and (3) simple datasets require only the shallower layers of the networks to fully learn them, whereas more difficult ones require the entire network. Ultimately, these results provide granular insights into the structural propagation of features through classification neural networks.
## 1 Introduction
Modern, highly-overparameterized deep neural networks have exceeded human performance on a variety of computer vision tasks [1, 2, 3]. However, despite their many successes, it remains unclear how these overparameterized networks converge to solutions which generalize well. In a bid to demystify neural networks' performance, a recent line of inquiry has explored the internally represented 'features' of these networks during the Terminal Phase of Training (TPT), i.e. when networks are trained past the point of zero error on the training data [4, 5, 6].
The phenomenon of _Neural Collapse_ (\(\mathcal{NC}\)), introduced by Papyan, Han, and Donoho [7, 8], represents one such avenue of inquiry. \(\mathcal{NC}\) refers to the emergence of a simple geometry present in neural network classifiers that appears during TPT. Specifically, \(\mathcal{NC}\) describes the phenomena by which neural network classifiers converge to learning maximally separated, negligible-variance representations of classes in their last layer activation maps. However, despite the extensive documentation of \(\mathcal{NC}\) in the last-layer representations of classification neural networks [9, 10, 11, 12],
there has been no exploration of its presence throughout the intermediate hidden layers of these networks.
In the present study, we provide a detailed account of the emergence of \(\mathcal{NC}\) in the intermediate hidden layers of neural networks across a range of different settings. Specifically, we investigate the impact of varying architectures, datasets, and activation functions on the degree of \(\mathcal{NC}\) present in the intermediate layers of classification networks. Our results show that some level of \(\mathcal{NC}\) typically occurs in these intermediate layers in all explored settings, where the strength of \(\mathcal{NC}\) in a given layer increases as the depth of that layer within the network increases. By examining the presence of \(\mathcal{NC}\) not only in the final hidden layer but also the intermediate hidden layers of classification networks, we gain a more nuanced understanding of the mechanisms that drive the behavior of these networks.
## 2 Methodology
### Network Architecture and Training
We examine the degree of \(\mathcal{NC}\) present in the hidden layers of three different neural network classifiers. Two of these models are popular in the computer vision community, and have been extensively studied and widely adopted: VGG11 [13] and ResNet18 [14]. We also train a fully-connected (FC) classification network, MLP6, with network depth of \(\ell=6\) and layer width of \(d=4096\) for each of its hidden layers. MLP6 serves as a toy model in which to more easily explore \(\mathcal{NC}\). In addition to varying network architecture, we also vary the activation functions. Specifically, we explore the effects on \(\mathcal{NC}\) of the ReLU, Tanh, and LeakyReLU activation functions. We train these classification neural networks on five popular computer vision classification datasets: MNIST [15], CIFAR10 and CIFAR100 [16], SVHN [17], and FashionMNIST [18]. To rebalance the datasets, MNIST and SVHN were subsampled to \(N=5,000\), \(N=4,600\), and \(N=600\) images per class, respectively. We normalize all datasets but do not perform any data augmentation. We use stochastic gradient descent with 0.9 momentum, Mean Square Error Loss (MSE) 1, \(\lambda=10^{-5}\) weight decay, and the one-cycle learning rate scheduler for all training [20].
Footnote 1: We use MSE loss rather than Cross Entropy loss as it has been shown to exhibit a greater degree of \(\mathcal{NC}\) in classification neural networks while maintaining a greater degree of mathematical clarity [7, 19].
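A minimal sketch of this training configuration is given below, assuming a PyTorch setup; `model`, `loader`, `epochs`, `num_classes`, and the peak learning rate are placeholders rather than the exact values used in our runs.

```python
import torch
import torch.nn.functional as F

def train(model, loader, epochs, num_classes, max_lr=0.1, device="cpu"):
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=max_lr,
                          momentum=0.9, weight_decay=1e-5)
    sched = torch.optim.lr_scheduler.OneCycleLR(
        opt, max_lr=max_lr, epochs=epochs, steps_per_epoch=len(loader))
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            target = F.one_hot(y, num_classes).float()  # MSE needs one-hot labels
            loss = F.mse_loss(model(x), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
            sched.step()
    return model
```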
### Intermediate Layer Analysis
To assess the extent of \(\mathcal{NC}\) in the hidden layers of our classifiers, we perform a step of "\(\mathcal{NC}\) analysis" at various points during training. This analysis involves freezing the network and passing all the training samples through it. We then collect the network's post-activation representation of each sample after every hidden layer of interest. Specifically, we collect post-activations after each FC layer in MLP6, each convolutional layer in VGG11, and each convolutional layer in ResNet18. We then flatten these post-activation representations into vectors \(\mathbf{h}_{i,c}^{j}\), where \(j\) is the hidden layer, \(i\) is the training sample, and \(c\) is the class. We then compute four quantities in each of these hidden layer post-activation vectors to track \(\mathcal{NC}\): intra-class variance collapse \((\mathcal{NC}1)\), intra-class norm equality \((\mathcal{NC}2)\), inter-class maximal angular separation \((\mathcal{NC}2)\), and simplification to nearest-class center classifier \((\mathcal{NC}4)\) following the general outline provided in [7]. The specifics of these quantities are provided in Appendix A.
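As an illustration, the \(\mathcal{NC}1\) statistic can be computed from the collected post-activations as the within-class covariance measured relative to the between-class covariance, following the formulation of Papyan et al. [7]. The sketch below is a minimal, assumed NumPy implementation in which `feats` maps each class label to an \((n_{c},d)\) array of flattened representations \(\mathbf{h}_{i,c}^{j}\) for one layer.

```python
import numpy as np

def nc1(feats):
    """feats: dict mapping class label -> (n_c, d) array of layer activations."""
    classes = sorted(feats)
    means = {c: feats[c].mean(axis=0) for c in classes}
    mu_g = np.mean([means[c] for c in classes], axis=0)   # global mean
    d = mu_g.shape[0]
    sigma_w = np.zeros((d, d))                            # within-class covariance
    sigma_b = np.zeros((d, d))                            # between-class covariance
    for c in classes:
        centered = feats[c] - means[c]
        sigma_w += centered.T @ centered / len(feats[c])
        diff = (means[c] - mu_g)[:, None]
        sigma_b += diff @ diff.T
    sigma_w /= len(classes)
    sigma_b /= len(classes)
    # NC1 statistic: trace(Sigma_W Sigma_B^+) / C
    return np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / len(classes)
```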
## 3 Results
We present the results for each of the \(\mathcal{NC}\) conditions for MLP6, VGG11, and ResNet18 with ReLU activation functions trained to TPT on FashionMNIST, MNIST, SVHN, CIFAR10, and CIFAR100 in Figure 1 to Figure 4.
\(\mathcal{NC}1\): The within-class variability decreases linearly with layer depth in the shallower layers of the classification networks, indicating that the earlier fully-connected/convolutional layers are equally effective in clustering samples of the same class. However, in the deeper layers of the networks, the within-class variability plateaus, suggesting that the network has already maximally reduced the variance between same-class samples even when they have only partially propagated through the network. This behavior is observed in most tested network architectures and datasets, except for CIFAR100. Ultimately, the earlier layers of the classifiers primarily group same-class samples together and contribute to the generation of \(\mathcal{NC}1\).
\(\mathcal{NC}2\): The two phenomena related to the emergence of the simplex ETF structure in the class means, i.e. the convergence of class means to equal norms and to maximal equal angles, also exhibit a somewhat linear relationship between the degree of collapse in any given hidden layer and that layer's depth in the network. However, unlike \(\mathcal{NC}1\), the collapse continues to strengthen even in the deeper layers of the network, rather than plateauing after the first few layers; this is more prevalent for the angular separation between different class means than for the similarity between their norms. This phenomenon also persists across most architectures and datasets, and suggests that the network continues to separate classes as it feeds samples forward through its full depth. This makes sense, as features extracted in the shallower layers of the network can be used to learn more and more effective ways of separating different-class samples in deeper layers, leading to an increase in the recorded strength of \(\mathcal{NC}2\) over layer depth.
\(\mathcal{NC}4\): The degree of \(\mathcal{NC}4\) in any given layer during training seems to be influenced by the presence of \(\mathcal{NC}1\) and \(\mathcal{NC}2\) in that layer. For the nearest class mean to accurately predict net
Figure 1: **Intra-class Variance Collapse (\(\mathcal{NC}1\)) in the ReLU classifiers’ intermediate hidden layers.** The results are generated at various points in training, where the blue dotted line indicates \(\mathcal{NC}1\) at initialization and the red solid line indicates \(\mathcal{NC}1\) after TPT.
work output, samples need to be close to their class mean (\(\mathcal{NC}1\)) and class means should be well-separated (\(\mathcal{NC}2\)). In most experiments, \(\mathcal{NC}4\) decreases linearly in the shallower layers and plateaus in the deeper layers. However, the plateauing occurs later than \(\mathcal{NC}1\) due to the additional angular separation between class means in these deeper layers, which contributes valuable information for classification. The degree of \(\mathcal{NC}4\) in the \(j\)-th hidden layer indicates how much of the network's classification ability is captured by its \(\leq j\) hidden layers. If the mismatch between the nearest neighbor classification in the \(j\)-th layer and the total network classification is zero, then all of the
Figure 3: **Maximal Angles (\(\mathcal{NC}2\)) in the ReLU classifiers’ intermediate hidden layers.** The results are generated at various points in training, where the blue dotted line indicates \(\mathcal{NC}2\) at initialization and the red solid line indicates \(\mathcal{NC}2\) after TPT.
Figure 2: **Equal Norms (\(\mathcal{NC}2\)) in the ReLU classifiers’ intermediate hidden layers.** The results are generated at various points in training, where the blue dotted line indicates \(\mathcal{NC}2\) at initialization and the red solid line indicates \(\mathcal{NC}2\) after TPT.
network's classification ability is already present by the \(j\)-th layer. For simpler datasets like MNIST and SVHN, the NCC mismatch between shallower layers and the network output reaches zero, while for more complex datasets like CIFAR100, the NCC mismatch only reaches zero in the final layers. This observation aligns with the notion that complex datasets require the deeper layers of the network for complete classification, while simpler datasets can achieve it with the shallower layers; however, it will be important for future studies to observe how this generalizes to test/validation datasets.
However, despite these general trends, we note that the models trained on CIFAR100 exhibit a number of unique behaviors not consistent with our broader observations on \(\mathcal{NC}\), the most striking of which is the absence of a significant decrease in within-class variability over training. Instead, the data seems largely noisy for this \(\mathcal{NC}\) condition. This merits future investigation across other challenging datasets such as TinyImageNet, as well as more exploratory analysis.
In addition to the experiments performed on the classification networks with ReLU activation functions above, we also perform the same set of experiments on Tanh and LeakyReLU classifiers in Appendix B and Appendix C respectively. These experiments largely demonstrate the same characteristics as the ReLU experiments.
## 4 Conclusions
Our work demonstrates that the _Neural Collapse_ (\(\mathcal{NC}\)) phenomenon occurs throughout most of the hidden layers of classification neural networks trained through the terminal phase of training (TPT). We demonstrate these findings across a variety of settings, including varying network architectures, classification datasets, and network activations functions, and find that the degree of \(\mathcal{NC}\) present in any hidden layer is typically correlated with the depth of that layer in the network. In particular, we make the following specific observations: (1) almost all of the reduction in intra-class variance in the samples occurs in the shallower layers of the classification networks, (2) the angular separation between class means is increased consistently as samples propagate through the entire
Figure 4: **Simplification to NCC (\(\mathcal{NC}4\)) in the ReLU classifiers’ intermediate hidden layers. The results are generated at various points in training, where the blue dotted line indicates \(\mathcal{NC}4\) at initialization and the red solid line indicates \(\mathcal{NC}4\) after TPT.**
network, and (3) simpler datasets require only the shallower layers of the networks to fully learn them, whereas more difficult ones require the entire network. Ultimately, these results provide a granular view of the structural propagation of features through classification networks. In future work, it will be important to analyze how these results generalize to held-out validation data. For example, does our observation that \(\mathcal{NC}4\) serves as a proxy for the classification ability of the first \(j\) layers of the network extend to validation data? Moreover, is there a broader relationship between \(\mathcal{NC}\) and network generalization/over-training?
## 5 Contributions
**Parker**: Initiated problem concept; led experiments and analysis design; collaborated on computational work; collaborated on results analysis; wrote the paper. **Onal**: Collaborated on experiments and analysis design; collaborated on computational work; collaborated on results analysis. **Stengel**: Collaborated on background research and initial experiment design; collaborated on computational work. **Intrater**: Collaborated on problem concept and background research.
## 6 Acknowledgements
We would like to thank Professor Boris Hanin for his generous guidance and support in the completion of this paper as well as for his insightful and exciting class, Deep Learning Theory, which ultimately led to the creation of this project.
|
2301.08034 | Cooperative Artificial Neural Networks for Rate-Maximization in Optical
Wireless Networks | Recently, optical wireless communication (OWC) has been considered a key
element in the next generation of wireless communications due to its potential
in supporting unprecedented communication speeds. In this paper, infrared
lasers referred to as vertical-cavity surface-emitting lasers (VCSELs) are used
as transmitters sending information to multiple users. In OWC,
rate-maximization optimization problems are usually complex due to the high
number of optical access points (APs) needed to ensure coverage. Therefore,
practical solutions with low computational time are essential to cope with
frequent updates in user-requirements that might occur. In this context, we
formulate an optimization problem to determine the optimal user association and
resource allocation in the network, while the serving time is partitioned into
a series of time periods. Therefore, cooperative ANN models are designed to
estimate and predict the association and resource allocation variables for each
user such that sub-optimal solutions can be obtained within a certain period of
time prior to its actual starting, which makes the solutions valid and in
accordance with the demands of the users at a given time. The results show the
effectiveness of the proposed model in maximizing the sum rate of the network
compared with counterpart models. Moreover, ANN-based solutions are close to
the optimal ones with low computational time. | Ahmad Adnan Qidan, Taisir El-Gorashi, Jaafar M. H. Elmirghani | 2023-01-19T12:25:37Z | http://arxiv.org/abs/2301.08034v1 | # Cooperative Artificial Neural Networks for Rate-Maximization in Optical Wireless Networks
###### Abstract
Recently, optical wireless communication (OWC) has been considered a key element in the next generation of wireless communications due to its potential in supporting unprecedented communication speeds. In this paper, infrared lasers referred to as vertical-cavity surface-emitting lasers (VCSELs) are used as transmitters sending information to multiple users. In OWC, rate-maximization optimization problems are usually complex due to the high number of optical access points (APs) needed to ensure coverage. Therefore, practical solutions with low computational time are essential to cope with frequent updates in user requirements that might occur. In this context, we formulate an optimization problem to determine the optimal user association and resource allocation in the network, while the serving time is partitioned into a series of time periods. Cooperative ANN models are then designed to estimate and predict the association and resource allocation variables for each user, such that sub-optimal solutions can be obtained within a certain period of time prior to its actual starting, which makes the solutions valid and in accordance with the demands of the users at a given time. The results show the effectiveness of the proposed model in maximizing the sum rate of the network compared with counterpart models. Moreover, the ANN-based solutions are close to the optimal ones with low computational time.
Optical wireless networks, machine learning, interference management, optimization
## I Introduction
The rapid evolution of Internet-based technologies has led to challenges in terms of traffic congestion, resource scarcity, and secrecy that current wireless networks have failed to address. Therefore, optical wireless communication (OWC) has attracted massive interest from researchers as a means to provide unprecedented communication speeds. Basically, OWC sends information modulated onto the optical band, which offers huge license-free bandwidth and high spectral and energy efficiency. In [1], light-emitting diodes (LEDs) were used as transmitters providing data rates at gigabit-per-second (Gbps) communication speeds. Despite these attractive characteristics, the modulation speed of LEDs is limited, and they are usually deployed for providing illumination, and therefore increasing the number of transmitters must be in compliance with the recommended illumination levels in indoor environments. Alternatively, infrared lasers such as vertical-cavity surface-emitting lasers (VCSELs) were used in [2] to serve users at terabit-per-second (Tbps) aggregate data rates, which makes OWC a strong candidate for the next generation of wireless communications. However, the transmit power of the VCSEL can be harmful to human eyes if it operates at high power levels without considering eye safety regulations.
Optimization problems for rate-maximization were formulated in [3, 4] to enhance the spectral efficiency of OWC networks. In particular, a resource allocation approach was designed in [3] to guarantee high quality of service for users with different demands. In [4], centralized and decentralized algorithms were proposed to maximize the sum rate of the network under the capacity constraint of the optical AP. It is worth pointing out that optimization problems in the context of rate-maximization are usually complex and time-consuming to solve. Recently, machine learning (ML) techniques have been considered to provide practical solutions for such NP-hard optimization problems. In [5], a deep learning algorithm was used for power allocation in massive multiple-input multiple-output (MIMO) to achieve relatively high spectral efficiency at low loss. In [6], an artificial neural network (ANN) model was trained for resource-allocation-based rate maximization in an OWC network. It is shown that a solution close to the optimum solution of exhaustive search can be achieved at low complexity. However, the use of ML techniques in optical or RF wireless networks is still under investigation, especially in complex scenarios where decisions, for example in rate-maximization, must be made promptly.
In contrast to the work in the literature, in this paper we design two ANN models working in cooperation to maximize the sum rate of a discrete-time OWC network in which the serving time is partitioned into consecutive periods of time. First, a multi-user OWC system model is defined where a transmission scheme referred to as blind interference alignment (BIA) is applied for multiple access services. Then, an optimization problem is formulated to find the optimum user association and resource allocation during a certain period of time. The computational time of solving such complex optimization problems exceeds the time during which the optimum solution must be determined. Therefore, two ANN models are designed and trained to maximize the sum rate of the network during the intended period of time prior to its starting, by exploiting the records of the network in the previous period of time and performing prediction. The results show the ability of the trained ANN models to provide accurate solutions close to the optimum ones.
## II System Model
We consider a discrete-time downlink OWC network as shown in Fig. 1, where multiple optical APs given by \(L\), \(l=\{1,\ldots,L\}\), are deployed on the ceiling to serve multiple users given by \(K\), \(k=\{1,\ldots,K\}\), distributed on the communication floor. Note that the VCSEL is used as a transmitter, and therefore each optical AP consists of \(L_{v}\) VCSELs to extend its coverage area. On the user side, a reconfigurable optical detector with \(M\) photodiodes providing a wide field of view (FoV) [7] is used to ensure that each user has more than one optical link available at a given time. In this work, the serving time in the network is partitioned into a set of consecutive time periods \(t\in\{1,\ldots,\mathcal{T}\}\), where the duration of each time period is \(\tau\). In this context, the signal received by a generic user \(k\), \(k\in K\), connected to AP \(l\) during the period of time \(t+1\) can be expressed as
\[y^{[l,k]}(t+1)=\mathbf{h}_{t+1}^{[l,k]}(m^{[l,k]})^{T}\mathbf{x}(t+1)+z^{[l,k] }(t+1), \tag{1}\]
where \(m\in M\) is a photodiode of user \(k\), \(\mathbf{h}_{t+1}^{[l,k]}(m^{[l,k]})^{T}\in\mathbb{R}_{+}^{L_{v}\times 1}\) is the channel vector, \(\mathbf{x}(t+1)\) is the transmitted signal, and \(z^{[l,k]}(t+1)\) is real-valued additive white Gaussian noise with zero mean and variance given by the sum of shot noise, thermal noise and the intensity noise of the laser. In this work, all the optical APs are connected through a central unit (CU) to exchange the information essential for solving optimization problems. It is worth mentioning that the distribution of the users is known at the central unit, while the channel state information (CSI) at the transmitters is limited to the channel coherence time due to the fact that BIA is implemented for interference management [7, 8].
### _Transmitter_
The VCSEL transmitter has Gaussian beam profile with multiple modes. For lasers, the power distribution is determined based on the beam waist \(W_{0}\), the wavelength \(\lambda\) and the distance \(d\) between the transmitter and user. Basically, the beam radius of the VCSEL at photodiode \(m\) of user \(k\) located on the communication floor at distance \(d\) is given by
\[W_{d}=W_{0}\left(1+\left(\frac{d}{d_{Ra}}\right)^{2}\right)^{1/2}, \tag{2}\]
where \(d_{Ra}\) is the Rayleigh range. Moreover, the spatial distribution of the intensity of VCSEL transmitter \(l\) over the transverse plane at distance \(d\) is given by
\[I_{l}(r,d)=\frac{2P_{t,l}}{\pi W_{d}^{2}}\exp\left(-\frac{2r^{2}}{W_{d}^{2}} \right). \tag{3}\]
Finally, the power received by photodiode \(m\) of user \(k\) from transmitter \(l\) is given by
\[\begin{split}& P_{m,l}=\\ &\int_{0}^{r_{m}}I(r,d)2\pi rdr=P_{t,l}\left[1-\exp\left(\frac{-2 r_{m}^{2}}{W_{d}^{2}}\right)\right],\end{split} \tag{4}\]
where \(r_{m}\) is the radius of photodiode \(m\). Note that, \(A_{m}=\frac{A_{rec}}{M}\), \(m\in M\), is the detection area of photodiode \(m\), assuming \(A_{rec}\) is the whole detection area of the receiver. In (4), the location of user \(k\) is considered right under transmitter \(l\), more details on the power calculations of the laser are in [2].
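A minimal numerical sketch of Eqs. (2)-(4) is given below: it computes the beam radius at distance \(d\) and the fraction of transmit power collected by a photodiode of radius \(r_{m}\) on the beam axis. The Rayleigh-range expression \(d_{Ra}=\pi W_{0}^{2}/\lambda\) is the standard Gaussian-beam formula (not stated explicitly above), and all parameter values in the example are illustrative assumptions.

```python
import numpy as np

def beam_radius(w0, wavelength, d):
    d_ra = np.pi * w0**2 / wavelength            # Rayleigh range (standard formula)
    return w0 * np.sqrt(1.0 + (d / d_ra)**2)     # Eq. (2)

def received_power(p_t, w0, wavelength, d, r_m):
    w_d = beam_radius(w0, wavelength, d)
    return p_t * (1.0 - np.exp(-2.0 * r_m**2 / w_d**2))   # Eq. (4)

# Example: 850 nm VCSEL, 10 um beam waist, user 2 m below the AP.
print(received_power(p_t=1e-3, w0=10e-6, wavelength=850e-9, d=2.0, r_m=2.5e-3))
```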
### _Blind interference alignment_
BIA is a transmission scheme proposed for RF and optical networks to manage multi-user interference with no CSI at the transmitters [7, 8], showing superiority over other transmit precoding schemes that require CSI, such as zero-forcing (ZF). Basically, the transmission block of BIA allocates multiple alignment blocks to each user following a unique methodology. For instance, for an AP with \(L_{v}=2\) transmitters serving \(K=3\) users, one alignment block is allocated to each user, as shown in Fig. 2. In the general case where an optical AP composed of \(L_{v}\) transmitters serves \(K\) users, BIA allocates \(\ell=\left\{1,\ldots,(L_{v}-1)^{K-1}\right\}\) alignment blocks to each user over a transmission block consisting of \((L_{v}-1)^{K}+K(L_{v}-1)^{K-1}\) time slots. In this context, user \(k\) receives the symbol \(\mathbf{u}_{\ell}^{[l,k]}\) from AP \(l\) during the \(\ell\)-th alignment block as follows
\[\mathbf{y}^{[l,k]}=\mathbf{H}^{[l,k]}\mathbf{u}_{\ell}^{[l,k]}+\sum_{l^{ \prime}=1,l^{\prime}\neq l}^{L}\sqrt{\alpha_{l^{\prime}}^{[l,k]}}\mathbf{H}^{[ l^{\prime},k]}\mathbf{u}_{\ell}^{[l^{\prime},k]}+\mathbf{z}^{[l,k]}, \tag{5}\]
where \(\mathbf{H}^{[l,k]}\) is the channel matrix of user \(k\). It is worth mentioning that user \(k\) is equipped with a reconfigurable detector that has the ability to provide \(L_{v}\) linearly independent channel responses, i.e.,
\[\mathbf{H}^{[l,k]}=\left[\mathbf{h}^{[l,k]}(1)\quad\mathbf{h}^{[l,k]}(2)\quad\ldots\quad\mathbf{h}^{[l,k]}(L_{v})\right]\in\mathbb{R}_{+}^{L_{v}\times L_{v}}. \tag{6}\]
In (5), \(\alpha_{l^{\prime}}^{[l,k]}\) is the signal-to-interference ratio (SIR) received at user \(k\) due to other APs \(l\neq l^{\prime}\), and \(\mathbf{u}_{\ell}^{[l^{\prime},k]}\) represents the interfering symbols received from the adjacent APs during the alignment block \(\ell\) over which the desired symbol \(\mathbf{u}_{\ell}^{[k,\ell]}\) is received. It is worth pointing out that frequency reuse is usually applied to avoid inter-cell interference so that the
Fig. 1: An OWC system with \(L\) optical APs serving \(K\) users.
interfering symbol \(\mathbf{u}_{\ell}^{[l^{\prime},k]}\) can be treated as noise. Finally, \(\mathbf{z}^{[l,k]}\) is defined as the noise resulting from interference subtraction, and its covariance matrix is given by
\[\mathbf{R_{z_{p}}}=\begin{bmatrix}(K)\mathbf{I}_{L_{v}-1}&\mathbf{0}\\ \mathbf{0}&1\end{bmatrix}. \tag{7}\]
According to [7], the BIA-based data rate received by user \(k\) from its corresponding APs during the period of time \((t+1)\) is expressed as
\[\begin{split}& r^{[l,k]}(t+1)=\\ & B_{t+1}^{[l,k]}\mathbb{E}\left[\log\det\left(\mathbf{I}_{L_{v}}+P_{\mathrm{str}}\mathbf{H}_{t+1}^{[l,k]}{\mathbf{H}_{t+1}^{[l,k]}}^{T}\mathbf{R_{z}}^{-1}(t+1)\right)\right],\end{split} \tag{8}\]
where \(B_{t+1}^{[l,k]}=\dfrac{(L_{v}-1)^{K-1}}{(L_{v}-1)^{K}+K(L_{v}-1)^{K-1}}=\dfrac{ 1}{K+L_{v}-1}\) is the ratio of the alignment blocks allocated to each user connected to AP \(l\) over the entire transmission block, \(P_{\mathrm{str}}\) is the power allocated to each stream and
\[\mathbf{R_{z}}(t+1)=\mathbf{R_{z_{p}}}+P_{\mathrm{str}}\sum_{l^{\prime}=1,l^{\prime}\neq l}^{L}\alpha_{l^{\prime}}^{[l,k]}\mathbf{H}_{t+1}^{[l^{\prime},k]}{\mathbf{H}_{t+1}^{[l^{\prime},k]}}^{T}, \tag{9}\]
is the covariance matrix of the noise plus interference received from other APs \(l\neq l^{\prime}\).
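For illustration, the per-user BIA rate in Eqs. (8)-(9) can be evaluated numerically as in the sketch below. The expectation in (8) is replaced by a single channel realization, rates are expressed in bits via \(\log_2\), and `H_interf`/`alphas` summarize the residual interference from the other APs; these simplifications, together with taking \(\mathbf{H}\) as the \(L_v\times L_v\) matrix of channel responses, are illustrative assumptions rather than an exact implementation.

```python
import numpy as np

def bia_rate(H, H_interf, alphas, p_str, K, Lv):
    """Single-realization BIA rate (bits per channel use) for one user."""
    r_zp = np.eye(Lv)
    r_zp[:Lv - 1, :Lv - 1] *= K            # Eq. (7): noise after interference subtraction
    r_z = r_zp + p_str * sum(a * Hi @ Hi.T for a, Hi in zip(alphas, H_interf))
    b = 1.0 / (K + Lv - 1)                 # fraction of alignment blocks in Eq. (8)
    m = np.eye(Lv) + p_str * H @ H.T @ np.linalg.inv(r_z)
    _, logdet = np.linalg.slogdet(m)       # numerically stable log-determinant
    return b * logdet / np.log(2.0)
```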
## III Problem Formulation
We formulate an optimization problem in a discrete-time OWC system aiming to maximize the sum rate of the users by determining the optimum user assignment and resource allocation simultaneously. It is worth mentioning that data rate maximization in the network must be achieved within each period of time \(t\); otherwise, the solution cannot be considered valid, since user conditions are subject to change in the next period of time. Focusing on the period of time \(t+1\), the utility function of sum rate maximization is given by
\[U(x,e)=\sum_{k\in K}\varphi\left(\sum_{l\in L}x_{t+1}^{[l,k]}R^{[l,k]}(t+1) \right), \tag{10}\]
where \(x_{t+1}^{[l,k]}\) is an assignment variable that determines the connectivity of user \(k\) to optical AP \(l\), where \(x_{t+1}^{[l,k]}=1\) if user \(k\) is assigned to AP \(l\) during the period of time \(t+1\), otherwise, it equals 0. Moreover, the actual data rate of user \(k\) during \(t+1\) is \(R^{[l,k]}(t+1)=e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\), where \(e_{t+1}^{[l,k]}\), \(0\leq e_{t+1}^{[l,k]}\leq 1\), determines the resources devoted from AP \(l\) to serve user \(k\), and \(r^{[l,k]}(t+1)\) is the user rate given by equation (8). The sum rate maximization during the period of time \(t+1\) can be obtained by solving the optimization problem as follows
\[\begin{split}\mathbf{P1}:&\max_{x,e}\quad\sum_{k\in K}\varphi\left(\sum_{l\in L}x_{t+1}^{[l,k]}R^{[l,k]}(t+1)\right)\\ &\text{s.t.}\quad\sum_{l\in L}x_{t+1}^{[l,k]}=1,\qquad\forall k\in K\\ &\qquad\ \ \sum_{k\in K}x_{t+1}^{[l,k]}R^{[l,k]}(t+1)\leq\rho_{l},\qquad\forall l\in L\\ &\qquad\ \ R_{min}\leq x_{t+1}^{[l,k]}R^{[l,k]}(t+1)\leq R_{max},\qquad\forall l\in L,\,k\in K\\ &\qquad\ \ x_{t+1}^{[l,k]}\in\{0,1\},\qquad\forall l\in L,\,k\in K,\end{split} \tag{11}\]
where \(\varphi(.)=\log(.)\) is a logarithmic function that achieves proportional fairness among users [9], and \(\rho_{l}\) is the capacity limitation of AP \(l\). The first constraint in (11) guarantees that each user is assigned to only one AP, while the second constraint ensures that each AP is not overloaded. Moreover, the achievable user rate must be within a certain range as in the third constraint where \(R_{min}\) is the minimum data rate required by a given user and \(R_{max}\) is the maximum data rate that user \(k\) can receive. It is worth mentioning that imposing the third constraint helps in minimizing the waste of the resources and guarantees high quality of service. Finally, the last constraint defines the feasible region of the optimization problem.
The optimization problem in (11) is defined as a mixed integer non-linear programming (MINLP) problem in which the two variables \(x_{t+1}^{[l,k]}\) and \(e_{t+1}^{[l,k]}\) are coupled. Interestingly, some deterministic algorithms can be used to solve such complex MINLP problems, at high computational time. However, the application of these algorithms in real scenarios is not practical for optimization problems like (11), where the optimal solutions must be determined within a certain period of time. One way to relax the main optimization problem in (11) is to assume that each user can connect to more than one AP, which means that the association variable \(x_{t+1}^{[l,k]}\) equals 1 for all \(l\). In this context, the optimization problem can be rewritten as
\[\begin{split}\mathbf{P2}:&\max_{e}\quad\sum_{k\in K}\varphi\left(\sum_{l\in L}e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\right)\\ &\text{s.t.}\quad\sum_{k\in K}e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\leq\rho_{l},\qquad\forall l\in L\\ &\qquad\ \ R_{min}\leq e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\leq R_{max},\qquad\forall l\in L,\,k\in K\\ &\qquad\ \ 0\leq e_{t+1}^{[l,k]}\leq 1,\qquad\forall l\in L,\,k\in K.\end{split} \tag{12}\]
Fig. 2: Transmission block of BIA for a use case.
Note that, considering our assumption of full connectivity, the variable \(x_{t+1}^{[l,k]}\) is eliminated. Interestingly, the optimization problem in (12) can be solved in a distributed manner on the AP and user sides using the full decomposition method via Lagrangian multipliers [10]. Thus, the Lagrangian function is
\[f\left(e,\mu,\xi_{\max},\lambda_{\min}\right)=\sum_{k\in K}\sum_{l\in L}\varphi\left(e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\right)+\\ \sum_{l\in L}\mu_{t+1}^{[l]}\left(\rho_{l}-\sum_{k\in K}R^{[l,k]}(t+1)\right)+\sum_{k\in K}\xi_{t+1,\max}^{[k]}\\ \left(R_{max}-\sum_{l\in L}R^{[l,k]}(t+1)\right)+\sum_{k\in K}\lambda_{t+1,\min}^{[k]}\\ \left(\sum_{l\in L}R^{[l,k]}(t+1)-R_{min}\right), \tag{13}\]
where \(\mu\), \(\xi_{\max}\) and \(\lambda_{\min}\) are the Lagrange multipliers associated with the capacity, maximum-rate and minimum-rate constraints in (12), respectively. However, the assumption that users can be assigned to more than one AP is unrealistic in real-time scenarios, where users might not see more than one AP at a given time due to blockage. Therefore, focusing on resource allocation rather than user association as in (12) can cause a relatively high waste of resources, since an AP might allocate resources to users blocked from receiving its LoS link. In the following, an alternative solution is proposed using ANN models.
### _Artificial neural network_
Our aim in (11) is to provide optimal solutions during the period of time \(t+1\). Therefore, our ANN model must have the ability to exploit the solutions of the optimization problem in the previous period of time \(t\). Given that, the problem at hand can be defined as a time-series prediction problem. Focusing on the optimization problem in (11), calculating the network assignment vector \(\mathbf{X}_{t+1}\) involves high complexity. Therefore, an ANN model that is able to predict the network assignment vector can considerably reduce the computational time, while valid sub-optimum solutions are obtained within a certain period of time. As in [6], we design a convolutional neural network (CNN) to estimate the network assignment vector, denoted by \(\widehat{\mathbf{X}}_{t}\), during a given period of time \(t\) based on user requirements sent to the whole set of APs through uplink transmission. It is worth mentioning that the CNN model must be trained over a dataset generated by solving the original optimization problem, as described in the following subsection. For prediction, we consider the use of a long short-term memory (LSTM) model, a class of recurrent neural network (RNN) [11] known to solve complex sequence problems through time. Once the network assignment vector is estimated during the period of time \(t\), it is fed into the input layer of the LSTM model trained to predict the network assignment vector \(\widetilde{\mathbf{X}}_{t+1}\) during the next period of time \(t+1\), prior to its starting. Note that resource allocation can then be performed in accordance with the predicted network assignment vector to achieve data rate maximization during the intended period of time.
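A minimal PyTorch sketch of the prediction stage is given below; the layer sizes, the flattening of the \(L\times K\) assignment matrix, and the use of the last LSTM output are illustrative assumptions rather than the exact architecture used in our experiments.

```python
import torch
import torch.nn as nn

class AssignmentLSTM(nn.Module):
    """Predicts the next network assignment vector from the last tau CNN estimates."""
    def __init__(self, n_aps, n_users, hidden=128):
        super().__init__()
        self.n_aps, self.n_users = n_aps, n_users
        self.lstm = nn.LSTM(n_aps * n_users, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_aps * n_users)

    def forward(self, x_seq):                 # x_seq: (batch, tau, L*K)
        out, _ = self.lstm(x_seq)
        logits = self.head(out[:, -1])        # last time step -> estimate of X_{t+1}
        return logits.view(-1, self.n_users, self.n_aps)

# Usage sketch: scores = model(torch.stack(cnn_estimates, dim=1))
# x_hat = scores.softmax(dim=-1).argmax(dim=-1)   # one AP per user
```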
### _Offline phase_
We first train the CNN model over a dataset of size \(N\) within each period of time to determine an accurate set of weight terms that can perfectly map the information fed to the input layer of the ANN model to its output layer. Note that the CNN model aims to estimate the network assignment vector at a given time. For instance, during the period of time \(t\), the CNN model provides estimated network assignment vectors within the interval \([\widehat{\mathbf{X}}_{t-\tau+1},\widehat{\mathbf{X}}_{t-\tau+2},\ldots,\widehat{\mathbf{X}}_{t}]\), which can then be fed into the input layer of the LSTM model to predict the network assignment vector \(\widetilde{\mathbf{X}}_{t+1}\). In this context, the CNN model must be trained during the period of time \(t\) over data points generated from solving the following problem
\[\mathbf{P3}: \max_{x} \sum_{k\in K}\varphi\left(\sum_{l\in L}x_{t}^{[l,k]}\frac{r^{[l,k ]}(t)}{\sum_{k\in K}x_{t}^{[l,k]}}\right)\] (14) s.t. \[\sum_{l\in L}x_{t}^{[l,k]}=1,\qquad\qquad\forall k\in K\] \[x_{t}^{[l,k]}\in\left\{0,1\right\},l\in L,k\in K.\]
This optimization problem is a rewritten form of the problem in (11) under the assumption of uniform resource allocation, i.e., \(e_{t}^{[l,k]}=\frac{1}{K_{l}}\), where \(K_{l}=\sum_{k\in K}x_{t}^{[l,k]}\). It is worth pointing out that this assumption is made because, once the estimation and prediction processes for the network assignment vector are done using the CNN and LSTM models, respectively, resource allocation is performed at each optical AP to satisfy the requirements of the users, as in subsection III-C. The optimization problem in (14) can be solved through brute-force search with a complexity that increases exponentially with the size of the network. Note that, since the dataset is generated in an offline phase, complexity is not an issue.
For the LSTM model, the dataset is generated over \(\mathcal{T}\) consecutive periods of time. Then, it is processed to train the LSTM model to determine a set of weight terms that can accurately predict the network assignment vector during a certain period of time. Interestingly, the training of the LSTM model for predicting \(\widetilde{\mathbf{X}}_{t+1}\) during \(t+1\) is carried out over the data points of the previous time duration \(\tau\), i.e., \([\widehat{\mathbf{X}}_{t-\tau+1},\widehat{\mathbf{X}}_{t-\tau+2},\ldots,\widehat{\mathbf{X}}_{t}]\).
### _Online application_
After generating the dataset and training the ANN models in an offline phase, they are applied at the optical APs to perform instantaneous rate maximization during a certain period of time \(t+1\) by finding the optimum user association and resource allocation. Basically, the users send their requirements to the optical APs at the beginning of the period of time \(t\) through uplink transmission. Subsequently, this information is injected into the trained CNN model to estimate the network assignment vector \(\widehat{\mathbf{X}}_{t}\) during the interval \([t-\tau+1,t-\tau+2,\ldots,t]\), which can then be used as
information for the input layer of the LSTM trained to predict the network assignment vector \(\mathbf{\widetilde{X}}_{t+1}\) during the next period of time prior to its actual starting. Once the network assignment variable \(x_{t+1}^{[l,k]}\) is predicted for each user \(k\) during \(t+1\), resource allocation is determined at each AP according to equation (13) as follows
\[\mathcal{L}(e,\mu,\xi_{\max},\lambda_{\min})=\] \[\sum_{k\in K_{l}}\varphi\left(e_{t+1}^{[l,k]}r^{[l,k]}(t+1) \right)-\mu_{t+1}^{[l]}\sum_{k\in K_{l}}e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\] \[\qquad-\sum_{k\in K_{l}}(\xi_{t+1,\max}^{[k]}-\lambda_{t+1,\min}^ {[k]})\ e_{t+1}^{[l,k]}r^{[l,k]}(t+1). \tag{15}\]
The optimum resources allocated to user \(k\) associated with AP \(l\) during \(t+1\) are determined by setting the partial derivative of \(\mathcal{L}(e,\mu,\xi_{\max},\lambda_{\min})\) with respect to \(e_{t+1}^{[l,k]}\) to zero, which yields the stationarity condition
\[\left(\frac{\partial\varphi\left(e_{t+1}^{[l,k]}r^{[l,k]}(t+1)\right)}{\partial e_{t+1}^{[l,k]}}\right)=\\ r^{[l,k]}(t+1)\left(\mu_{t+1}^{[l]}+\xi_{t+1,\max}^{[k]}-\lambda_{t+1,\min}^{[k]}\right), \tag{16}\]
whose solution, when it falls within \([0,1]\), gives the optimum \(e_{t+1}^{*[l,k]}\). Otherwise, since \(\dfrac{\partial\mathcal{L}\left(e\right)}{\partial e}\) is a monotonically decreasing function of \(e_{t+1}^{[l,k]}\), the condition \(\dfrac{\partial\mathcal{L}\left(e\right)}{\partial e}|_{e_{t+1}^{[l,k]}=0}\leq 0\) means that the optimum value \(e_{t+1}^{*[l,k]}\) equals zero, while \(\dfrac{\partial\mathcal{L}\left(e\right)}{\partial e}|_{e_{t+1}^{[l,k]}=1}\geq 0\) means that the optimum value \(e_{t+1}^{*[l,k]}\) equals one. At this point, the gradient projection method is applied to solve the dual problem, and the Lagrangian multipliers in (15) are updated as follows
\[\mu_{t+1}^{[l]}(i)=\left[\mu_{t+1}^{[l]}(i-1)-\Omega_{\varepsilon}\left(\rho_{l}-\sum_{k\in K_{l}}R^{[l,k]}(t+1)\right)\right]^{+}, \tag{17}\]
\[\xi_{t+1,\max}^{[k]}(i)=\left[\xi_{t+1}^{[k]}(i-1)-\Omega_{\nu}\Bigg{(}R_{ \max}-R^{[l,k]}(t+1)\Bigg{)}\right]^{+}, \tag{18}\]
\[\lambda_{t+1,\min}^{[k]}(i)=\left[\lambda_{t+1}^{[k]}(i-1)-\Omega_{\lambda} \Bigg{(}R^{[l,k]}(t+1)-R_{\min}\Bigg{)}\right]^{+}, \tag{19}\]
where \(i\) denotes the iteration index of the gradient algorithm and \([\cdot]^{+}\) denotes projection onto the positive orthant. The Lagrangian variables work as indicators exchanged between the users and APs to maximize the sum rate of the network, while ensuring that each AP is not overloaded and the users receive their demands. Note that the resources are determined based on the predicted network assignment vector \(\widetilde{\mathbf{X}}_{t+1}\). Therefore, at the beginning of the period of time \(t+1\), each AP sets its link price according to (17), and the users update and broadcast their demands as in (18) and (19). These values remain fixed during the whole time interval, while the trained CNN estimates a new assignment vector to feed the LSTM model in order to predict \(\widetilde{\mathbf{X}}_{t+2}\) for the next period of time \(t+2\).
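A minimal sketch of this allocation step for a single AP is given below, combining the stationarity condition (16) for \(\varphi=\log\) with the dual updates (17)-(19); the step size, iteration count, and initialization are illustrative assumptions.

```python
import numpy as np

def allocate(rates, rho, r_min, r_max, steps=500, lr=1e-3):
    """rates: array of r^{[l,k]}(t+1) for the users assigned to this AP."""
    mu, xi, lam = 0.0, np.zeros_like(rates), np.zeros_like(rates)
    for _ in range(steps):
        # Stationarity of (15) with phi = log: e_k = 1 / (r_k (mu + xi_k - lam_k)),
        # clipped to the feasible interval [0, 1].
        denom = np.maximum(rates * (mu + xi - lam), 1e-9)
        e = np.clip(1.0 / denom, 0.0, 1.0)
        r = e * rates
        mu = max(0.0, mu - lr * (rho - r.sum()))        # Eq. (17)
        xi = np.maximum(0.0, xi - lr * (r_max - r))      # Eq. (18)
        lam = np.maximum(0.0, lam - lr * (r - r_min))    # Eq. (19)
    return e
```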
## IV Performance evaluations
We consider an indoor environment with 5m \(\times\) 5m \(\times\) 3m dimensions, where \(L=8\) APs are deployed on the ceiling, each with \(L_{v}\) transmitters. On the communication floor, located 2 m below the ceiling, \(K\) users are distributed randomly with different activities. Note that each user \(k\) is equipped with a reconfigurable detector that gives it the opportunity to connect to most of the APs; more details are in [4]. All the other simulation parameters are listed in Table 1.
The accuracy of the trained ANN model, i.e., the LSTM, in performing prediction is depicted in Fig. 3 in terms of mean square error (MSE) versus the number of epochs. It can be seen that the training and validation losses decrease with the number of epochs regardless of the dataset size considered, since the optimal weights needed to perform the underlying calculations are learned over time. Moreover, increasing the size of the dataset from 5000 to \(10^{4}\) decreases the error of the validation and training processes. It is worth noticing that the MSE increases if more periods of time, \(\mathcal{T}=5\), are assumed for the same dataset size, which is due to an increase in the prediction
Fig. 3: The performance of the ANN model trained for prediction.
error. This issue can be avoided by training the ANN model over a larger dataset with data points \(>10^{4}\). The figure also shows that our ANN model is not overfitting and can predict accurate solutions in the online application where unexpected scenarios are more likely to occur.
In Fig. 4, the sum rate of the network is shown against different values of the beam waist \(W_{0}\), which is known as a vital parameter in laser-based OWC that influences the power received at the user end. It is shown that the sum rate of the users increases with the beam waist, since more transmit power is directed towards the users and less interference is received from the neighboring APs. Interestingly, our cooperative ANNs provide accurate solutions close to the optimal ones, which involve high computational time. Note that solving the optimization problem in (12) results in lower sum rates compared to our ANN-based solutions, which is expected due to the assumption of full connectivity, i.e., \(x_{t+1}^{[l,k]}=1\), which in turn leads to wasted resources. Moreover, the proposed model shows superiority over the conventional scheme proposed in [7], in which each AP serves users located within a distance that determines whether the received signal is useful or noise, so that users are served regardless of their demands, the available resources and the capacity limitations of the APs.
Fig. 5 shows the sum rate of the network versus a range of SNR values using the trained ANN models. It can be seen that determining the optimal user assignment and resource allocation using the ANNs results in higher sum rates compared to the scenarios of full connectivity and distance-based user association. This is because, in our model, each user is assigned to an AP that has enough resources to satisfy its demands, which positively impacts the sum rate of the network. Interestingly, as in [7], BIA achieves a higher sum rate than ZF due to its ability to serve multiple users simultaneously with no CSI, while the performance of ZF is dictated by the need for CSI.
## V Conclusions
In this paper, sum rate maximization is addressed in a discrete-time laser-based OWC network. We first define the system model, consisting of multiple APs serving users distributed on the receiving plane. Then, the user rate is derived considering the application of BIA, which manages multi-user interference without the need for CSI at the transmitters. Moreover, an optimization problem is formulated to maximize the sum rate of the network during a certain period of time. Finally, CNN and LSTM models are designed and trained to provide instantaneous solutions within the validity of each period of time. The results show that the proposed model achieves higher sum rates compared with benchmark models, and that the trained ANN models are able to obtain accurate and valid solutions close to the optimal ones.
|
2303.11283 | Resource Saving via Ensemble Techniques for Quantum Neural Networks | Quantum neural networks hold significant promise for numerous applications,
particularly as they can be executed on the current generation of quantum
hardware. However, due to limited qubits or hardware noise, conducting
large-scale experiments often requires significant resources. Moreover, the
output of the model is susceptible to corruption by quantum hardware noise. To
address this issue, we propose the use of ensemble techniques, which involve
constructing a single machine learning model based on multiple instances of
quantum neural networks. In particular, we implement bagging and AdaBoost
techniques, with different data loading configurations, and evaluate their
performance on both synthetic and real-world classification and regression
tasks. To assess the potential performance improvement under different
environments, we conduct experiments on both simulated, noiseless software and
IBM superconducting-based QPUs, suggesting these techniques can mitigate the
quantum hardware noise. Additionally, we quantify the amount of resources saved
using these ensemble techniques. Our findings indicate that these methods
enable the construction of large, powerful models even on relatively small
quantum devices. | Massimiliano Incudini, Michele Grossi, Andrea Ceschini, Antonio Mandarino, Massimo Panella, Sofia Vallecorsa, David Windridge | 2023-03-20T17:19:45Z | http://arxiv.org/abs/2303.11283v2 | # Resource Saving via Ensemble Techniques for Quantum Neural Networks
###### Abstract
Quantum neural networks hold significant promise for numerous applications, particularly as they can be executed on the current generation of quantum hardware. However, due to limited qubits or hardware noise, conducting large-scale experiments often requires significant resources. Moreover, the output of the model is susceptible to corruption by quantum hardware noise. To address this issue, we propose the use of ensemble techniques, which involve constructing a single machine learning model based on multiple instances of quantum neural networks. In particular, we implement bagging and AdaBoost techniques, with different data loading configurations, and evaluate their performance on both synthetic and real-world classification and regression tasks. To assess the potential performance improvement under different environments, we conduct experiments on both simulated, noiseless software and IBM superconducting-based QPUs, suggesting these techniques can mitigate the quantum hardware noise. Additionally, we quantify the amount of resources saved using these ensemble techniques. Our findings indicate that these methods enable the
construction of large, powerful models even on relatively small quantum devices.
## 1 Introduction
The emerging field of quantum machine learning [1] holds promise for enhancing the accuracy and speed of machine learning algorithms by utilizing quantum computing techniques. Although the potential of quantum machine learning is expected to be advantageous for certain classes of problems in chemistry, physics, material science, and pharmacology [2], its applicability to more conventional use cases remains uncertain [3]. Notably, usable quantum machine learning algorithms generally need to be adapted to run on NISQ devices [4], the current generation of noisy quantum computers, which lack error correction and offer only a modest number of qubits and limited circuit depth. In the quantum machine learning scenario, the quantum counterparts of classical neural networks, quantum neural networks [5], have emerged as the de facto standard model for solving supervised and unsupervised learning tasks in the quantum domain.
While quantum neural networks have generated much interest, they presently face some issues. The first is the _barren plateau_ [6], characterised by the exponentially fast decay of the loss gradient's variance with increasing system size. This problem may be exacerbated by various factors, such as having overly expressive quantum circuits [7]. To address this issue, quantum neural networks need to be carefully designed [8] and to incorporate expressibility control techniques such as projection [9] and bandwidth control [10]. The second problem, which is the one addressed in this work, concerns the amount of resources required to run quantum neural networks: the limited number of qubits (currently up to a little over a hundred) and the low fidelity of operations on current quantum devices severely restrict the size of a quantum neural network in terms of input dimension and layers.
In order to address the latter issue, we propose employing a NISQ-appropriate implementation of ensemble learning [11], a widely used technique in classical machine learning for tuning the bias and variance of a given machine learning mechanism via the construction of a stronger classifier from multiple weak components, such that the ensemble as a whole outperforms the best individual classifier. The effectiveness of ensemble systems has been extensively demonstrated both empirically and theoretically [12], although there does not currently exist any overarching theoretical framework capable of, for example, specifying the diversity requirements on ensemble components that would guarantee such out-performance. We here seek to provide and quantify a motivation for employing classical ensemble techniques in relation to NISQ-based quantum neural networks, which we address via the following three arguments.
The first argument concerns the potential for the superior performance of an ensemble system composed of small quantum neural networks compared to a single larger quantum neural network. This notion is based on the rationale that while quantum neural networks are inherently powerful machine learning
models, they exhibit intrinsic variance due to the highly non-convex nature of their loss landscape, implying that different predictors will result from randomly initialised stochastic gradient descent training, in common with classical neural networks. (Modern deep learning practice often deliberately overparameterises the network in order to render the loss more convex [13], with the asymptotic case of infinitely wide neural networks exhibiting a fully convex loss landscape, making the network effectively a linear model [14].) Although overparameterization in quantum neural networks has been studied theoretically [15, 16, 17] and has been shown to be beneficial to generalization performance within certain settings, the increase in resource requirements makes this approach almost completely impractical on NISQ devices. In the classical literature, however, it has been demonstrated that ensemble techniques can perform comparably to the largest (generally overparameterized) models with significantly fewer resources, especially in relation to overall model parameterization; cf., for example, [18, Figure 2].
The second argument pertains to the resource savings achievable by ensemble systems, particularly in terms of the number of qubits, gates, and training samples required. For example, the boosting ensemble technique involves progressively dividing the training dataset into multiple, partially overlapping subsets on the basis of their respective impact on the performance of the cumulative ensemble classifier, created by summing the partial weak classifiers trained on previously selected data subsets. This enables the ensemble quantum neural network to be constructed in parallel, with individual quantum neural networks operating on datasets of reduced size. The random subspace technique, by contrast, trains each base predictor on a random subset of features, which also provides an advantage in terms of the overall number of qubits and gates required. Employing the random subspace technique in a quantum machine learning setting would parallel the various quantum circuit splitting techniques (cf., for example, [19]) and divide-and-conquer approaches that have been utilized in the fields of quantum chemistry [20] and quantum optimization [21].
Our third argument, which is specific to quantum computing, examines the potential noise-canceling ability of ensembles. Previous works have demonstrated that ensembles can enhance performance in several noisy machine-learning tasks (see [22]). Our investigation aims to determine whether, and to what extent, these techniques can reduce the impact of noise during execution on a NISQ device _at the applicative level_. This approach differs from most current approaches, which aim to reduce noise at a lower level, as described in [23].
We here examine the impact of ensemble techniques based on bagging (bootstrap aggregation) and boosting in a quantum neural network setting, across seven variant data loading schemes. Bagging techniques are selected for their applicability to high-variance settings, i.e. those exhibiting significant fluctuations across differing initialisations and differing sample subselections; conversely, boosting techniques are effective for high-bias models, i.e. those which are relatively insensitive to data subsampling.
Our first objective is to quantify the amount of resources (in particular, the number of qubits, gates, parameters, and training samples) saved by the respective approaches. Secondly, we evaluate the performance using quantum neural networks as base predictors to solve a number of representative synthetic and real-world regression and classification tasks. Critically, the accuracy and loss performance of these approaches are assessed with respect to the number of layers of the quantum neural networks in a simulated environment. We thus obtain a layer-wise quantification of performance that addresses one of the fundamental questions in architecting deep neural systems, namely, how many layers of abstraction to incorporate. Note that this question is fundamentally different in a quantum setting compared to classical neural systems; in the latter, the possibility of multi-level feature learning exists, and thus the potential for indefinite performance improvement with neural layer depth [17]. This contrasts with quantum neural networks, in which an increase in the number of layers affects the expressibility of the ansatz and may thus introduce a barren plateau [7].
Finally, the noise-canceling capabilities of ensembles will be investigated by testing a synthetic linear regression task on IBM's superconductor-based quantum processing unit (QPU) Lagos.
**Contributions.** Our contributions are the following:
* We evaluate various ensemble schemes that incorporate bagging and boosting techniques into quantum neural networks, and quantify the benefits in terms of resource savings, including the number of qubits, gates, and training samples required for these approaches.
* We apply our approach to the IBM Lagos superconductor-based quantum processing unit to investigate the potential advantages of bagging techniques in mitigating the effects of noise during the execution of quantum circuits on NISQ devices.
* We conduct a layer-wise analysis of quantum neural network performance in the ensemble setting with a view to determining the implicit trade-off between ensemble advantage and layer-wise depth.
## 2 Related Works
The quest for quantum algorithms able to be executed on noisy small-scale quantum systems led to the concept of Variational Quantum Circuits (VQCs), i.e. quantum circuits based on a hybrid quantum-classical optimization framework [24, 25]. VQCs are currently believed to be promising candidates to harness the potential of QC and achieve a quantum advantage [26, 27, 28]. VQCs rely on a hybrid quantum-classical scheme, where a parameterized quantum circuit is iteratively optimized with the help of a classical co-processor. This way, low-depth quantum circuits can be efficiently designed and implemented on the available NISQ devices; the noisy components of the quantum process are mitigated by the low number of quantum gates present in the VQCs. The basic structure of a VQC includes a data encoding stage, where classical data are embedded into a complex Hilbert space as quantum states; a processing of such quantum states via an ansatz made of parameterized rotation gates and entangling gates; and finally a measurement of the circuit to retrieve the expected outcome. Many different circuit architectures and ansatzes have been proposed for VQCs [29, 30, 31, 32], depending on the structure of the problem or on the underlying quantum hardware. VQCs have demonstrated remarkable performance and good resilience to noise in several optimization tasks and real-world applications. For example, the researchers in [33] introduced a circuit-centric quantum classifier based on a VQC that could effectively be implemented on a near-term quantum device; it correctly classified quantum-encoded data and was shown to be robust against noise. The authors in [25] proposed a VQC that successfully approximated high-dimensional regression and classification functions with a limited number of qubits.
VQCs are well-suited for the realization of quantum neural networks with a constraint on the number of qubits [34]. A quantum neural network is usually composed of a layered architecture able to encode input data into quantum states and perform heavy manipulations in a high-dimensional feature space. The encoding strategy and the choice of the circuit ansatz are critical for achieving superior performance over classical NNs: more complex data encodings with hard-to-simulate feature maps could lead to a concrete quantum advantage [35], but overly expressive quantum circuits may exhibit flatter cost landscapes and result in untrainable models [7]. An example of a quantum neural network was given in [36], where a shallow NN was employed to perform classification and regression tasks using both simulators and real quantum devices. In [37], the authors proposed a multi-layer Quantum Deep Neural Network (QDNN) with three variational layers for an image classification task, and managed to show that QDNNs have greater representation capacity than classical deep NNs. A hybrid quantum-classical Recurrent Neural Network (QRNN) was presented in [38] to solve a time series prediction problem. The QRNN, composed of a quantum layer as well as two classical recurrent layers, demonstrated superior performance over its classical counterpart in terms of prediction error.
However, quantum neural networks suffer from some non-negligible problems, which deeply affect their performance and limit their impact in the quantum ecosystem. Firstly, they are still subject to quantum noise, which worsens as the number of layers (i.e., the depth of the quantum circuit) increases [39, 40]. Secondly, barren plateau phenomena may occur depending on the ansatz and the number of qubits chosen, reducing the trainability of such models [7, 41, 6]. Finally, data encoding on NISQ devices continues to represent an obstacle when the number of features is considerable [34], making such models hard to implement and train [38].
In classical ML, ensemble learning has been investigated for years as a means to improve generalization and robustness over a single estimator [42, 11]. Ensembling is based on the so-called "wisdom of the crowd" principle: it combines the predictions of several base estimators with the same learning algorithm to build a single, stronger model. Although many different ensemble methods exist, they can be broadly grouped into two categories: bagging methods, which build and train several estimators independently and then compute an average of their predictions [43], and boosting methods, which instead train the estimators sequentially, so that each one corrects the predictions of the prior models, and output a weighted average of such predictions [44]. Ensemble methods for NNs have also been extensively studied, yielding remarkable performances in both classification and regression tasks [45, 46, 47].
In the quantum setting, the adoption of an ensemble strategy has received little consideration in the past few years, with very few approaches focusing on near-term quantum devices and VQC ensembles. In [48, 49], the authors exploit the superposition principle to obtain an exponentially large ensemble wherein each instance is weighted according to its accuracy on the training dataset. However, they make use of a fault-tolerant approach rather than considering limited quantum resources. A similar approach is explored in [50], where the authors create an ensemble of Quantum Binary Neural Networks (QBNNs) with reduced computational training cost, without taking into consideration the amount of quantum resources necessary to build the circuit. An efficient strategy for bagging with quantum circuits is instead proposed in [51]. Very recently, [52] has proposed a distributed framework for ensemble learning on a variety of NISQ quantum devices, although it requires many NISQ devices to be actually implemented. A quantum ECOC multiclass ensemble approach was proposed in [53]. In [54], the authors investigated the performance enhancement of a majority-voting-based ensemble system in the quantum regime. The authors in [55] studied the role of ensemble techniques in the context of quantum reservoir computing. Finally, an analysis of robustness to hardware error as applied to quantum reinforcement learning, presenting compatible results, is given in [56].
In this paper, we propose applying classical ensemble learning to the outputs of several quantum neural networks, in order to reduce the quantum resources required for a given quantum model and to provide superior performance, in terms of error rate, over single quantum neural network instances. To the best of our knowledge, no one has previously proposed such an ensemble framework for VQCs. We also compare bagging and boosting strategies to provide an analysis of the most appropriate ensemble methods for quantum neural networks in a noiseless setting. An error analysis with respect to the number of layers of the quantum neural networks reveals that bagging models greatly outperform the baseline model at a low number of layers, and retain remarkable performance as the number of layers increases. Finally, we apply our approach to the IBM Lagos superconductor-based QPU to investigate the potential advantages of bagging techniques in mitigating the effects of noise during the execution of quantum circuits on NISQ devices.
## 3 Background and Notation
We provide a brief introduction to the notation and concepts used in this work. The sets \(\mathcal{X}\) and \(\mathcal{Y}\) represent the set of features and targets, respectively. Typically,
\(\mathcal{X}\) is equal to \(\mathbb{R}^{d}\), with \(d\) the input dimensionality, whereas \(\mathcal{Y}\) is equal to \(\mathbb{R}\) for regression tasks and to \(\{c_{1},...,c_{k}\}\) for \(k\)-ary classification tasks. Sequences of elements are indexed with a superscript, as in \(x^{(j)}\), while the \(i\)-th component of a vector is denoted \(x_{i}\). The notation \(\epsilon\sim\mathcal{N}(\mu,\sigma^{2})\) indicates that the value of \(\epsilon\) is randomly sampled from a univariate normal distribution with mean \(\mu\) and variance \(\sigma^{2}\). We use the Iverson bracket \(\llbracket P\rrbracket\) to denote one when the predicate \(P\) is true and zero otherwise.
### Models in quantum machine learning
We define the state of a quantum system as the density matrix \(\rho\), having unit trace and belonging to the Hilbert space \(\mathcal{H}\equiv\mathbb{C}^{2^{n}\times 2^{n}}\), where \(n\) is the number of qubits. The system starts in the state \(\rho_{0}=|0\rangle\!\langle 0|\). The evolution of a closed quantum system is described by a unitary transformation \(U=\exp(-itH)\), with \(t\in\mathbb{R}\) and \(H\) a Hermitian operator, and acts as \(\rho\mapsto U^{\dagger}\rho U\). The measurement of the system in its computational basis \(\{\Pi_{i}=|i\rangle\!\langle i|\}_{i=0}^{2^{n}-1}\), applied to the system in the state \(\rho\), gives outcome \(i\in\{0,1,...,2^{n}-1\}\) with probability \(\mathrm{Tr}[\Pi_{i}\rho\Pi_{i}]\), after which the state collapses to \(\rho^{\prime}=\Pi_{i}\rho\Pi_{i}/\,\mathrm{Tr}[\Pi_{i}\rho\Pi_{i}]\). A different measurement operation is given by the expectation value of an observable \(O=\sum_{i}\lambda_{i}\Pi_{i}\) acting on the system in state \(\rho\), whose value is \(\langle O\rangle=\mathrm{Tr}[\rho O]\).
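As a minimal numerical illustration of these definitions (in plain NumPy, with the rotation applied in the common \(\rho\mapsto U\rho U^{\dagger}\) form; all values are arbitrary):

```python
import numpy as np

# One qubit: prepare rho_0 = |0><0|, evolve with Ry(theta), measure <Z>.
ket0 = np.array([[1.0], [0.0]])
rho0 = ket0 @ ket0.conj().T                      # density matrix |0><0|

theta = 0.3
Ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])
rho = Ry @ rho0 @ Ry.conj().T                    # unitary evolution

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
print(np.trace(rho @ sigma_z).real)              # Tr[rho O] = cos(theta)
```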
Quantum computation can be described using a quantum circuit, a sequence of gates (i.e. elementary operations) acting on one or more qubits of the system and terminating with the measurement operation over some or all of its qubits. The output of the measurement can be post-processed using a classical function. The set of gates available must be _universal_, i.e. the composition of such elementary operations must allow the expression of any unitary transformation with arbitrary precision. An exemplar universal gate set is composed of the parametric operators \(R_{x}^{(i)}(\theta)=\exp(-i\frac{\theta}{2}\sigma_{x}^{(i)})\), \(R_{y}^{(i)}(\theta)=\exp(-i\frac{\theta}{2}\sigma_{y}^{(i)})\), \(R_{z}^{(i)}(\theta)=\exp(-i\frac{\theta}{2}\sigma_{z}^{(i)})\), and the operator \(\mathrm{CNOT}^{(i,j)}=\exp(-i\frac{\pi}{4}(I-\sigma_{z}^{(i)})(I-\sigma_{x}^{(j)}))\). The gate \(I\) is the identity. The matrices \(\sigma_{x}=\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right)\), \(\sigma_{y}=\left(\begin{smallmatrix}0&-i\\ i&0\end{smallmatrix}\right)\), \(\sigma_{z}=\left(\begin{smallmatrix}1&0\\ 0&-1\end{smallmatrix}\right)\) are the Pauli matrices. The superscript denotes explicitly the qubits on which the transformation acts.
Quantum machine learning forms a broad family of algorithms, some of which require fault-tolerant quantum computation while others are ready to execute on current generation 'NISQ' (noisy) quantum devices. The family of NISQ-ready techniques of interest in this document is denoted _variational quantum algorithms_[24]. These algorithms are based on the tuning of a cost function \(C(\theta)\) dependent on a set of parameters \(\theta\in[0,2\pi]^{P}\) and optimized classically (possibly via gradient descent-based techniques) to obtain the value \(\theta^{*}=\arg\min_{\theta}C(\theta)\). Optimization through gradient-descent thus involves computation of the gradient of \(C\). This can be done using finite difference methods or else the parameter-shift rule [57]. The parameter-shift rule is particularly well-suited for NISQ devices as it can utilise a large step size relative to finite difference methods, making it less sensitive to noise in calculations.
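For concreteness, the following minimal PennyLane sketch (circuit and values are illustrative only) checks the parameter-shift formula against the library's built-in differentiation on a single-qubit expectation:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, diff_method="parameter-shift")
def f(theta):
    qml.RY(theta, wires=0)
    return qml.expval(qml.PauliZ(0))   # f(theta) = cos(theta)

theta = np.array(0.7, requires_grad=True)
# For gates generated by Pauli operators the rule is exact:
# df/dtheta = (f(theta + pi/2) - f(theta - pi/2)) / 2
manual = (f(theta + np.pi / 2) - f(theta - np.pi / 2)) / 2
print(manual, qml.grad(f)(theta))      # both equal -sin(0.7)
```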
In general, \(C(\theta)\) is a function corresponding to a parametric quantum transformation \(U(\theta)\) of a length polynomial in the number of qubits, the set of input
states \(\{\rho_{i}\}\), and the set of observables \(\{O_{k}\}\). Specifically, a _quantum neural network_ is a function in the form
\[f(x;\theta)=\mathrm{Tr}[U^{\dagger}(\theta)V^{\dagger}(x)\rho_{0}V(x)U(\theta)O] \tag{1}\]
where \(\rho_{0}\) is the initial state of the system, \(V(x)\) is a parametric quantum circuit depending on the input parameters \(x\in\mathcal{X}\), \(U(\theta)\) is a parametric quantum circuit named an _ansatz_ that depends on the trainable parameters \(\theta\in[0,2\pi)^{P}\), and \(O\) is an observable. Given the training dataset \(\{(x^{(i)},y^{(i)})\}_{i=1}^{M}\in(\mathcal{X}\times\mathcal{Y})^{M}\), the cost function of a quantum neural network, being a supervised learning problem, is the empirical risk
\[C(\theta)=\sum_{i=1}^{M}\ell(f(x^{(i)};\theta),y^{(i)}) \tag{2}\]
where \(\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}\) is any convex loss function, e.g. the mean square error.
The quantum neural network constitutes a linear model in the Hilbert space of the quantum system, as a consequence of the linearity of quantum dynamics. It behaves, in particular, as a _kernel machine_ that employs the unitary \(V(x)\) as the feature map \(\rho\mapsto\rho_{x}=V(x)\rho\), while the variational ansatz \(\rho\mapsto\rho_{\theta}=U(\theta)\rho\) adjusts the model weights. Note that although the model is linear in the Hilbert space of the quantum system, the measurement projection makes it nonlinear in the parameter space, enabling a set of rich dynamics. Quantum neural networks can have a layer-wise structure, i.e., \(U(\theta)=\prod_{i=1}^{\ell}U_{i}(\theta_{i})\), which provides further degrees of freedom for optimization (however, due to the lack of nonlinearity between the layers, the model does not possess the hierarchical feature learning capabilities of classical neural networks).
The selection of the ansatz is thus a crucial aspect in defining the quantum neural network, and it must adhere to certain classifier-friendly principles. Expressibility is one such principle, governing the extent of the search space that can be explored by the optimization method. Although there are various ways to formalize expressibility, one of the most widely used definitions is based on the generation of state ensembles \(\{\rho_{\theta}=U(\theta)\rho_{0}\mid\theta\in\Theta\}\) that are similar to Haar-random (i.e. uniform) distributions of states; expressible unitaries are those for which the operator norm of the difference between the moments of the generated ensemble and those of the Haar distribution is small. However, expressible circuits are susceptible to the barren plateau problem, where the variance of the gradient decreases exponentially with the number of qubits, making parameter training infeasible. The varieties of ansatz and their expressibilities are presented in [58]. Expressibility is tightly connected to the concept of controllability in quantum optimal control, and the authors in [8] show that, in the asymptotic limit of the number of layers \(\ell\rightarrow\infty\), the expressible circuits are the controllable ones, i.e. those whose ansatz is underlied by a Lie algebra matching the space of skew-Hermitian matrices \(\mathfrak{u}(2^{n})\).
### Ensemble techniques
The purpose of using ensemble systems is to improve the generalization performance through reducing the bias or variance of a decision system. Such a result is obtained by training several models and combining the outcomes according to a combination rule. A large body of literature on ensemble techniques exists; the reader is referred to [11] for a general overview.
The idea behind the ensemble system may be motivated by Condorcet's jury theorem [12]: a jury of \(m\) peers, each having probability \(p=\frac{1}{2}+\epsilon\) (with \(0<\epsilon\ll 1\)) of giving the correct answer, returns a majority-voting verdict that is correct with probability

\[p_{\text{jury}}=\sum_{k=\lfloor m/2\rfloor+1}^{m}\binom{m}{k}p^{k}(1-p)^{m-k} \tag{3}\]
and quickly approaches 1 as \(m\rightarrow\infty\). The theorem, broadly interpreted, suggests that a collection of small, individually ineffective machine learning models \(h_{1},...,h_{m}\) (_weak learners_) can be combined to constitute a more powerful one, \(h_{\text{ens}}\) (_strong learner_), with arbitrarily good performance depending on the nature of the data manifold and the base classifiers. According to [11], three aspects characterize an ensemble system: a data selection strategy, the composition and training strategies of the single model instances, and the combination rule for their outputs. Some of the possible choices are summarized in Figure 1.
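A quick numerical check of Eq. (3) (a short Python sketch; the values of \(m\) and \(p\) are arbitrary) makes the convergence tangible:

```python
from math import comb

def p_jury(m, p):
    # Probability that a majority of m independent voters, each correct
    # with probability p, returns the correct verdict (Eq. 3).
    return sum(comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(m // 2 + 1, m + 1))

for m in (1, 11, 101, 1001):
    print(m, p_jury(m, 0.55))   # climbs toward 1 as m grows, for any p > 1/2
```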
The data selection strategy determines how the data should be distributed to the individual instances. If all instances are trained on the same dataset, their predictions will be highly correlated, resulting in similar outputs. The _bootstrapping_ technique creates smaller, overlapping subsets by sampling with
Figure 1: Taxonomy of the three aspects characterizing an ensemble system.
replacement from the dataset, which are then assigned to different instances. Alternatively, the _pasting_ technique can be used for processing larger datasets by subsampling without replacement. Another approach is to divide the dataset by randomly assigning different subsets of the features to the instances, known as the _random subspace_ technique (when the bootstrapping and random subspace techniques are combined, the result is the _random patch_ technique).
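A minimal sketch of the two sampling schemes most relevant to this work (bootstrapping over samples, random subspace over features); the helper name and the use of NumPy's generator are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_subspace(X, y, sample_ratio, feature_ratio):
    # Rows drawn with replacement (bootstrap), feature columns drawn
    # without replacement (random subspace).
    n, d = X.shape
    rows = rng.choice(n, size=int(sample_ratio * n), replace=True)
    cols = rng.choice(d, size=max(1, int(feature_ratio * d)), replace=False)
    return X[np.ix_(rows, cols)], y[rows], cols   # cols needed at predict time

X, y = rng.uniform(-1, 1, (250, 5)), rng.uniform(-1, 1, 250)   # toy data
Xb, yb, cols = bootstrap_subspace(X, y, sample_ratio=0.2, feature_ratio=0.8)
print(Xb.shape)   # (50, 4): fewer samples and fewer features (hence qubits)
```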
There are numerous schemes for combining predictors, with _bagging_ being the most straightforward and commonly used. Bagging, short for bootstrap aggregation, involves the creation of multiple homogeneous model instances trained on bootstrapped datasets. An instance of a bagging scheme is the random forest, which involves bagging decision trees trained on differing sample subsets (in some cases, random forests may favor a random patch data selection strategy over bagging). Another predictor combination scheme is _boosting_, which involves training a sequence of predictors via subsampling data according to the following strategy: an initial predictor is trained on a uniformly drawn subset of samples, while the \(i\)-th instance of the predictor is trained on a subset of elements that the previous ensemble classifier incorrectly predicted. The ensemble itself is the convex cumulative sum over the predictors. Numerous variations of boosting exist, one of the most notable being AdaBoost [59]. Contrary to vanilla boosting, AdaBoost employs an exponential loss, such that the ensemble error function accounts for the fact that only the sign of the outcome is significant. These two schemes are illustrated in Figure 2. The other major ensemble scheme is _stacking_, in which a collection of heterogeneous classifiers trained on the same dataset is combined via an optimised meta-classifier.
The combination rule merges the output of individual models \(h_{1},...,h_{m}\). In classification tasks i.e. where the label output is discrete \(y\in C=\{c_{1},...,c_{k}\}\), the most commonly used rule is majority voting. This is calculated as \(y_{\text{ens}}=\arg\max_{c\in C}\sum_{i=1}^{m}\llbracket h_{i}(x)=c\rrbracket\). Where there exists prior knowledge regarding the performance of individual predictors, positive weights \(w_{i}\) can be assigned, such that the output is a weighted majority vote. The ensemble prediction in this case will be \(y_{\text{ens}}=\arg\max_{c\in C}\sum_{i=1}^{m}w_{i}\llbracket h_{i}(x)=c\rrbracket\). Alternatively, the
Figure 2: Comparison between bagging (left) and ‘vanilla’ boosting (right) techniques. The bagging ensemble trains the models in parallel over a subset of the dataset drawn uniformly; each prediction is then merged via an average function. The boosting ensemble trains the models sequentially, the first predictor draws the samples uniformly, and the subsequent models draw the elements from a probability distribution biased toward previously misclassified items.
_Borda count_ method sorts labels in descending order by likelihood, with the ensemble prediction being the label with the highest rank sum. Nevertheless, averaging functions can also be utilised for ensemble classifiers. For regression tasks, where \(y\in\mathbb{R}\), common combination rules are the (possibly weighted) mean, minimum, and maximum.
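The combination rules above reduce to a few lines of code; a sketch (the function names are ours):

```python
import numpy as np

def majority_vote(preds, weights=None):
    # preds: length-m array of class labels; weights: optional positive w_i.
    labels = np.unique(preds)
    w = np.ones(len(preds)) if weights is None else np.asarray(weights)
    scores = [w[preds == c].sum() for c in labels]
    return labels[int(np.argmax(scores))]

def average_rule(preds, weights=None):
    # preds: length-m array of real-valued regression outputs.
    return np.average(preds, weights=weights)

print(majority_vote(np.array([0, 1, 1, 2, 1])))            # -> 1
print(average_rule(np.array([0.2, 0.4]), weights=[1, 3]))  # -> 0.35
```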
## 4 Discussion
Ensemble techniques, while well-established in the classical realm, have been largely overlooked in the quantum literature, leaving a number of open questions in this setting, such as whether bagging techniques, which reduce variance, can be deployed as effectively as boosting techniques, which reduce bias (both being also data-manifold and base-model dependent). It is also unclear what relative resource savings, in terms of circuit size (number of qubits), depth (number of gates), and samples required for training, can be obtained by using an ensemble of quantum neural networks instead of a single, large quantum network. Furthermore, the extent to which an ensemble system can mitigate hardware noise is not currently well understood. Our experiments are designed to explore these questions.
To investigate the first two aspects, we conduct a suite of experiments within a simulation environment, employing seven distinct ensemble schemes with varying strategies for data selection, model training and decision combination applied to four synthetic and real-world datasets, encompassing both regression and classification tasks. Specifically, we analyze: a synthetic linear regression dataset, the Concrete Compressive Strength regression dataset, the Diabetes regression dataset, and the Wine classification dataset, which are widely used benchmarks for evaluating machine learning models.
Six of the proposed techniques are classified as bagging methods, employing bootstrapped data to generate the ensemble, while the seventh is a sequential boosting technique, namely AdaBoost. In particular, we implemented the AdaBoost.R2 version [60] for the regression tasks and the AdaBoost SAMME.R version [61] for the classification problem. The bagging ensembles are characterized by two parameters: the sample ratio \(r_{n}\in[0,1]\), which determines the percentage of training samples used for each base predictor (with replacement), and the feature ratio \(r_{f}\in[0,1]\), which indicates the percentage of features used for each predictor (without replacement). We test six bagging schemes by varying \((r_{n},r_{f})\in\{0.2,1.0\}\times\{0.3,0.5,0.8\}\). For both the classification and regression tasks, the outputs of the base predictors are combined via averaging. In the case of the AdaBoost ensemble, the training set for each base predictor has the same size and dimensionality as the original training set. However, the samples are not uniformly drawn but are selected and weighted based on the probability of misclassification by previous classifiers composing the cumulative ensemble; single predictors are hence combined using a weighted average. Each ensemble system comprises 10 base predictors. The characteristics of these ensemble schemes are summarized in Table 1, where FM identifies the baseline quantum neural network
model, whereas Bag_\(r_{f}\)_\(r_{n}\) represents a bagging model with \(r_{f}\) percentage of the features and \(r_{n}\) percentage of the samples. Our experiments aim to evaluate the performance of each of the ensemble frameworks in comparison to the baseline model, as well as to assess the overall resource saving, including the number of qubits and overall parametric requirements.
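To make the bagging configurations of Table 1 concrete, the following skeleton sketches how one such ensemble could be assembled; `train_qnn` and `qnn_predict` are hypothetical stand-ins for the circuit training and evaluation described in the experimental setup below, and `bootstrap_subspace` is the data selection helper sketched in the ensemble-techniques background above.

```python
import numpy as np

def fit_bagging(X, y, r_f, r_n, n_estimators=10):
    # One small QNN per bootstrapped, feature-subsampled dataset.
    ensemble = []
    for _ in range(n_estimators):
        Xb, yb, cols = bootstrap_subspace(X, y, sample_ratio=r_n,
                                          feature_ratio=r_f)
        theta = train_qnn(Xb, yb)        # hypothetical training helper
        ensemble.append((theta, cols))   # remember which features it saw
    return ensemble

def predict_bagging(ensemble, x):
    # Combination rule: plain average of the base predictors' outputs.
    return np.mean([qnn_predict(theta, x[cols]) for theta, cols in ensemble])
```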
To investigate the impact of quantum hardware noise, we conduct additional experiments on the IBM Lagos QPU. Such a device is a 7-qubit superconducting-based quantum computer. The topology of Lagos is depicted in Figure 3. Specifically, we compare the performance of the baseline model FM with that of the Bag_0.8_0.2 configuration on the linear regression dataset. Our goal is to determine whether ensemble techniques can effectively mitigate quantum noise, and whether the difference in performance between single predictors and ensemble systems is more pronounced within a simulated environment in comparison with real-world execution on quantum hardware.
\begin{table}
\begin{tabular}{l c c l c l} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Data Loading} & \multirow{2}{*}{Ensemble} & \multirow{2}{*}{\#BP} & \multirow{2}{*}{Rule} \\ \cline{2-3} & RSBS (\(r_{f}\)) & BST (\(r_{n}\)) & & & \\ \hline FM & - & - & - & - & - \\ Bag\_0.3\_0.2 & 0.3 & 0.2 & Bagging & 10 & Avg \\ Bag\_0.3\_1.0 & 0.3 & 1.0 & Bagging & 10 & Avg \\ Bag\_0.5\_0.2 & 0.5 & 0.2 & Bagging & 10 & Avg \\ Bag\_0.5\_1.0 & 0.5 & 1.0 & Bagging & 10 & Avg \\ Bag\_0.8\_0.2 & 0.8 & 0.2 & Bagging & 10 & Avg \\ Bag\_0.8\_1.0 & 0.8 & 1.0 & Bagging & 10 & Avg \\ AdaBoost & 1.0 & 1.0 & AdaBoost & 10 & W.Avg \\ \hline \hline \end{tabular}
\end{table}
Table 1: Characteristics of the baseline benchmark model (FM) and the seven ensemble systems. Each system is identified by its data loading method (RSBS for Random Subspace, with feature ratio \(r_{f}\), and BST for Bootstrap sampling, with sample ratio \(r_{n}\)), predictor composition & training type (Ensemble), number of base predictors (#BP), and combination rule (Rule, with Avg representing the average function and W.Avg representing the weighted average).
Figure 3: Topology of IBM Lagos quantum processing unit
### Experimental setup
This section outlines experimental protocols used to evaluate the performance of the various ensemble approaches in terms of both the experimental structure and specific parameters/settings used to configure the algorithm and hardware.
**Choice of quantum neural networks.** We utilize a quantum neural network of the form \(f(x;\theta)=\mathrm{Tr}[U^{\dagger}(\theta)V^{\dagger}(x)\rho_{0}V(x)U(\theta)O]\), which operates on \(n\) qubits, with \(n\) corresponding to the number of features in the classification/regression problem. For the feature map, we opted for the simple parametric transformation \(V(x)=\bigotimes_{i=1}^{n}R_{y}^{(i)}(x_{i})\). This choice was motivated by the findings in [62], suggesting that more complex feature maps can lead to unfavorable generalization properties, the incorporation of which may thus unnecessarily bias our findings. (In [63], various feature maps are compared.)
The ansatz is implemented with the parametric transformations structured layer-wise with, for \(\ell\) the number of layers, a total of \(3\ell n\) parameters. It is thus defined as:

\[U_{\ell}(\theta)= \prod_{k=1}^{\ell}\Bigg[\left(\bigotimes_{i=1}^{n}R_{x}^{(i)}(\theta_{3(k-1)n+2n+i})\right)\left(\prod_{i=1}^{n-1}\mathrm{CX}^{(i,i+1)}\right)\left(\bigotimes_{i=1}^{n}R_{z}^{(i)}(\theta_{3(k-1)n+n+i})\right)\left(\prod_{i=1}^{n-1}\mathrm{CX}^{(i,i+1)}\right)\left(\bigotimes_{i=1}^{n}R_{x}^{(i)}(\theta_{3(k-1)n+i})\right)\Bigg] \tag{4}\]
The role of the CNOT gates is to introduce entanglement into the system, which would otherwise be efficiently classically simulable. We select as the observable \(O=\sigma_{z}^{(0)}\), which operates on a single qubit. Local observables like this one are less susceptible to the barren plateau problem than global ones such as \(O=\otimes_{i=1}^{n}\sigma_{z}^{(i)}\) (as noted in [41]). The quantum neural network used in our investigation is pictured in Figure 4.
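A minimal PennyLane sketch of this circuit (Eq. (1) with the \(R_y\) feature map and the ansatz of Eq. (4)); for readability we reshape the flat parameter vector into shape \((\ell,3,n)\), an illustrative bookkeeping choice rather than the paper's exact code:

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 5, 1
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(x, theta):
    for i in range(n_qubits):                 # feature map V(x)
        qml.RY(x[i], wires=i)
    for k in range(n_layers):                 # ansatz U(theta), Eq. (4)
        for i in range(n_qubits):
            qml.RX(theta[k, 0, i], wires=i)
        for i in range(n_qubits - 1):
            qml.CNOT(wires=[i, i + 1])
        for i in range(n_qubits):
            qml.RZ(theta[k, 1, i], wires=i)
        for i in range(n_qubits - 1):
            qml.CNOT(wires=[i, i + 1])
        for i in range(n_qubits):
            qml.RX(theta[k, 2, i], wires=i)
    return qml.expval(qml.PauliZ(0))          # local observable O = sigma_z^(0)

theta = np.random.uniform(0, 2 * np.pi, size=(n_layers, 3, n_qubits))
x = np.random.uniform(-1, 1, size=n_qubits)
print(qnn(x, theta))
```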
**Training of the model.** To train the models, we utilize a standard state-of-the-art gradient descent-based algorithm, ADAM. The Mean Squared Error (MSE) was selected as the loss function and error metric to evaluate the performance of the models in the regression tasks, as it is a standard error metric in supervised learning that is more sensitive to larger errors. Categorical Cross-Entropy (CCE) was used as the loss function for the classification task, with the accuracy score employed as the metric to assess the quality of the classification. Given the output \(f\) of the model, the computation of its gradient \(\nabla f\), which is required to calculate the gradient of the loss function, is accomplished using the parameter-shift rule [57], since the commonly-used finite difference method \(\nabla f(x;\theta)\approx(f(x;\theta+\epsilon)-f(x;\theta))/\epsilon\) is highly susceptible to hardware noise. The optimization hyper-parameters are the learning rate, set to 0.1, and the number of training epochs, which was selected through empirical investigation (specifically, we carry out 150 training epochs to obtain the simulated results, while for QPU-based results we perform just 10 epochs due to technological constraints on current hardware).
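A rough sketch of the corresponding training loop, reusing the `qnn` and `theta` from the previous listing (`X_train` and `Y_train` are hypothetical arrays holding the scaled training data):

```python
import pennylane as qml

opt = qml.AdamOptimizer(stepsize=0.1)   # learning rate 0.1, as above

def cost(t):
    # Empirical risk of Eq. (2) with the mean-square-error loss.
    return sum((qnn(x, t) - y) ** 2
               for x, y in zip(X_train, Y_train)) / len(X_train)

for epoch in range(150):                # 150 epochs in simulation
    theta = opt.step(cost, theta)
    if epoch % 25 == 0:
        print(epoch, cost(theta))
```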
**Datasets.** We assess the performance of our approach using both synthetic and real-world datasets, across both regression and classification problems. The linear regression dataset is artificially generated with parametric control over the number of samples \(n\), the dimensionality \(d\), and the noise variance \(\sigma\). It is procedurally generated by randomly sampling a weight vector \(w\) uniformly over \([-1,1]^{d}\), such that the training set \(\{(x^{(i)},y^{(i)})\}_{i=1}^{n}\) is constructed with \(x^{(i)}\) uniformly sampled from \([-1,1]^{d}\), \(y^{(i)}=w\cdot x^{(i)}+\epsilon^{(i)}\), and \(\epsilon^{(i)}\) sampled from a normal distribution with zero mean and variance \(\sigma\). In our case we have \(n=250\) (jointly the training and testing datasets), \(d=5\) and \(\sigma=0.1\). The other datasets involved in the experiments are the _Concrete Compressive Strength_ dataset, the _Diabetes_ dataset, and the _Wine_ dataset. The first of these is a multivariate regression problem estimating the strength of the material based on its age and ingredients. The second is a multivariate regression problem correlating the biological and lifestyle characteristics of patients with their insulin levels. The third is a multivariate, three-class classification problem inferring the geographic origin of wine samples from their chemical characteristics. All are freely available and open source. Table 2 summarizes the characteristics of these datasets. Every dataset is divided into 80% training samples and 20% test samples. Moreover, in a data preprocessing phase, raw data were scaled to the range \([-1,1]\) to best suit the output of the quantum neural networks; the scaler was fitted using training data only. No other preprocessing technique, e.g. PCA, has been applied.
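The synthetic dataset can be reproduced in a few lines (the seed, split point, and variable names are our own choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, d, sigma = 250, 5, 0.1

w = rng.uniform(-1, 1, d)                         # hidden weight vector
X = rng.uniform(-1, 1, (n_samples, d))
y = X @ w + rng.normal(0.0, sigma, n_samples)     # noisy linear targets

split = int(0.8 * n_samples)                      # 80/20 train/test split
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```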
**Implementation details.** Our implementation is written in Python3, and utilizes Pennylane as a framework to define and simulate quantum circuits, with the Pennylane-Qiskit plugin used to execute circuits on IBM Quantum devices
Figure 4: Quantum Neural Network used for the linear regression dataset, having 5 qubits and \(\ell=1\) layer. The rotational gates parameterized by the features \(x_{i}\) form the feature map, while those parameterized via the \(\theta\)s form the ansatz.
via the Qiskit software stack. To improve simulation times, we employed the JAX linear algebra framework as the simulation backend. By using JAX, the quantum circuit can be just-in-time compiled to an intermediate representation called XLA, which can significantly speed up simulation times (by up to a factor of 10). Our simulations were run on a commercial computer with an AMD Ryzen 7 5800X (8-core CPU with a frequency of 3.80 GHz) and 64 GB of RAM. The experiments on the noise-canceling properties of ensemble systems were conducted on the ibm_lagos quantum processing unit, which consists of 7 qubits arranged in the topology \(\{(0,1);(1,2);(1,3);(3,4);(4,5);(4,6)\}\). The single-qubit gate error and CNOT error rates of this QPU did not exceed \(2.89\times 10^{-4}\) and \(8.63\times 10^{-3}\), respectively (according to the latest calibration available).
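A sketch of how such a JIT-compiled QNode can be set up (assuming recent PennyLane and JAX versions; the circuit shown is a placeholder rather than the full model):

```python
import jax
import jax.numpy as jnp
import pennylane as qml

dev = qml.device("default.qubit", wires=5)

@jax.jit                                 # traced once, compiled to XLA
@qml.qnode(dev, interface="jax")
def circuit(x, theta):
    for i in range(5):
        qml.RY(x[i], wires=i)
    for i in range(5):
        qml.RX(theta[i], wires=i)
    return qml.expval(qml.PauliZ(0))

x, theta = jnp.zeros(5), jnp.ones(5)
print(circuit(x, theta))   # first call compiles; later calls run fast
```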
### Resource efficiency of quantum neural network ensembles
Besides performance, resource efficiency is a key argument for the utilization of quantum neural network ensembles. Efficiency can be measured by various metrics: for example, number of qubits, gates, parameters, and training samples required to achieve comparable performance.
To determine the potential savings in the number of qubits we here deploy the random subspace technique (also known as _attribute bagging_ or attribute bootstrap aggregation). Our experiments (cf. Figure 5) suggest a potential saving of 20% to 80% of the total qubit budget via this approach. However, such a saving comes at the cost of the ensemble as a whole having potentially less rich class-discrimination behaviour, dependent on both the sampling required to achieve full feature coverage and the nature of the underlying data manifold. A positive consequence of reducing the number of qubits, though, is that each quantum circuit will have fewer gates and parameters, resulting in improved noise robustness on real hardware (i.e. less decoherence and higher overall fidelity), as well as faster gradient calculation (the number of circuit evaluations per gradient grows linearly with the number of parameters \(P\)). This allows for a saving of the parameter budget of up to 75% in the indicated experimental regime, while the saving on gates scales proportionately (cf. Figure 4). Savings for each dataset and ensemble technique are depicted in Figure 5.
\begin{table}
\begin{tabular}{l l l r r l} \hline \hline Dataset & Source & Nature & \# Features & \# Samples & Task \\ \hline Linear & - & Synthetic & 5 & 250 & Regression \\ Concrete & UCI & Real-world & 8 & 1030 & Regression \\ Diabetes & Scikit-Learn & Real-world & 10 & 442 & Regression \\ Wine & UCI & Real-world & 13 & 178 & Classification \\ \hline \hline \end{tabular}
\end{table}
Table 2: Characteristics of the datasets analyzed. UCI stands for the open source _UCI Repository_. _Scikit-Learn_ is an open-source software library for Python3. The number of features does not include the target.
### Simulated Domain Experiments
Initially, we evaluate our method in a simulated, noise-free environment, in which the output estimation is infinitely precise. This differs significantly from execution on a NISQ quantum processing unit, which introduces various types of hardware error (such as decoherence and infidelity of operations) as well as the sampling error caused by the measurement operation. We examine the performance of both the baseline models and the ensemble systems in a scenario where the number of layers (i.e. quantum neural network depth) is gradually increased. To establish robustness to the random initialization of parameters (that is, susceptibility to local minima effects), each simulation is repeated ten times.
#### 4.3.1 Experiment I
The first experiment seeks to perform linear regression on a synthetic noisy 5-dimensional dataset. The function generating the targets is as follows: \(y=w\cdot x+\epsilon\), where \(x\in(-1,1)^{5}\subseteq\mathbb{R}^{5}\), \(w\in\mathbb{R}^{5}\) is randomly generated from a uniform distribution with support \([-1,1]^{5}\), and \(\epsilon\) is Gaussian noise with mean zero and standard deviation \(0.1\). The total number of samples composing this synthetic dataset is 250. Each experimental data point instantiates a layer number, a number of bagged features, and a percentage of training data points available to the ensemble.
The results of the first experiment are shown in Figure 6. Both FM and AdaBoost achieve the lowest MSE generalisation error of about 0.021 at 10 layers, reaching a performance plateau at 5 layers. The bagging models utilising 80% of the features are able to reach satisfactory results with 10 layers, only 0.03 to 0.05 points higher than the error obtained by the best-performing models. In general, it appears that quantum bagging models with a high number of features are able to generalize well on unseen data in this setting, even with only 20% of the training samples (unsurprisingly, the performance of bagging models with only 20% of the training samples is worse than that of the counterparts using 100% of the training samples). Nevertheless, they still achieve remarkable results and show impressive generalization capabilities, confirming the effectiveness of
Figure 5: Number of qubits & parameters employed in individual experiments.
bagged quantum models in generalizing well with relatively little training data [64].
It is also notable that all of the bagging models have a lower MSE generalisation error than FM and AdaBoost when the number of layers is low. In particular, with just 1 layer, all of the bagging models outperform FM and AdaBoost. However, as the number of layers increases, the performances of the bagging models begin to plateau more rapidly than those of FM and AdaBoost, which, in contrast, continue their trend of decreasing error with increasing circuit depth. This is consistent with the notion that as base classifiers become more expressive, their risk of overfitting increases (i.e. they develop an intrinsically low bias). AdaBoost, in particular, is known to be most effective in relation to weak, under-fitting base classifiers.
Finally, the decreasing error trend seen in the more complex bagging models, as well as in the FM and AdaBoost models, is not visible in relation to bagging with 30% of the features. We conjecture that since this bagging configuration utilises only 1 qubit, it cannot appropriately model the evolution of the quantum state with respect to the input. Hence, despite leveraging 10 different submodels of 1 qubit (i.e., one feature) each, the performance of bagging models with 30% of the features cannot improve as the number of layers increases (adding more layers in this case translates into performing rotations on the single qubit only, without the possibility of further CNOTs or other entangling gate operations). This result hence highlights the importance of entanglement in quantum neural network models as a means of improving performance.
#### 4.3.2 Experiment II
The second experiment seeks to assess the performance of the respective ensemble techniques on the Concrete Compressive Strength dataset, which consists of 1030 samples of 8 features. The target value to predict in this regression case is the concrete compressive strength, measured in Megapascals (MPa), a highly nonlinear function of the age and composition of the material.
The results of the regression experiment are in line with the findings of Experiment I, and are reported in Figure 7. FM, AdaBoost and the two bagging models using 80% of the features achieve comparable results at 10 layers, with the Bag_0.8_1.0 configuration obtaining the lowest MSE error, followed by Bag_0.8_0.2, FM and finally AdaBoost. Also in this case, the differential between bagging models with 20% of the samples and those with 100% of the samples is marginal, confirming the effectiveness of bagging quantum models under reduced training dataset size. In contrast with Experiment I, bagging models having 30% of the available features now have 2 qubits, and therefore demonstrate a relative improvement in test error when \(l=2\). However, their expressive power soon saturates and their error curves plateau.
In general, the generalization error of the bagging models decreases monotonically with the number of layers, in contrast to that of FM and AdaBoost. In fact, the latter exhibit episodes of overfitting when utilising 5 (and up to 7) layers, while bagging appears able to evade this outcome. This is again not surprising,
Figure 6: Evolution of MSE error with respect to the number of quantum neural network layers in Experiment I. Each experimental data point instantiates a layer number, a number of bagged features and a percentage of training data points available to the ensemble.
since AdaBoost is designed to reduce bias, while bagging ensembles are designed to reduce variance.
All of the bagging models analyzed still outperform FM and AdaBoost at a low number of layers, suggesting that they may be the right choice for implementation on NISQ devices, or whenever low-depth quantum circuits must be implemented. As in the first experiment, it is also of interest to note that all the bagging models with \(l=1\) have very similar MSE values, while their performances diverge as the number of layers increases. This may indicate that the MSE value reached at \(l=1\) is optimal for that family of bagging models, given their expressibility. Moreover, a sharp decrease in MSE beyond the first layers appears to be a common pattern, for both the ensembles and the FM model. For example, at \(l\geq 3\), the MSE errors of FM and AdaBoost decrease dramatically, while bagging models with 50% of the features exhibit this trend between \(l=1\) and \(l=2\). (A future analysis might seek to exploit this characteristic in order to predict _a priori_ how many layers are needed to attain an error level within a given bound.)
#### 4.3.3 Experiment III
The dataset used in Experiment III is the reference Diabetes dataset from Scikit-learn, consisting of 10 numerical features, including age, sex, body mass index, and blood serum measurements, together with a target variable, a quantitative measure
Figure 7: Evolution of MSE error with respect to the number of quantum neural network layers in Experiment II.
of disease progression one year after baseline. The dataset is composed of 442 instances and is often used for non-trivial regression analysis in ML.
Figure 8 illustrates the results of this experiment. The performance of the quantum models is notably different from that of the previous two experiments. It may be seen that the best-performing models are the bagging models containing 80% of the features, while FM and AdaBoost achieve satisfactory results up to 6 layers, at which point their MSE begins to increase. At \(l=10\), every model has stabilized; Bag_0.8_1.0 and Bag_0.8_0.2 have an MSE respectively 8.8% and 6.1% lower than that of FM, while AdaBoost has an MSE comparable to the error of Bag_0.3_1.0, being only 0.9% higher than FM's. Bagging models with 50% of the features achieve surprisingly good results, better than those of FM and very close to those of the bagging models with 80% of the features.
As in Experiments I and II, a very sharp MSE reduction between \(l=1\) and \(l=3\) is evident for all of the models. Less complex models, like bagging with 30% and 50% of the features, immediately reach a plateau, while the error curves for bagging with 80% of the features, FM and AdaBoost continue to evolve as the number of parameters increases. Considering layer numbers between \(l=6\) and \(l=8\), it is clear that FM and AdaBoost overfit as the number of model parameters increases, and thus perform poorly on test data. In particular, they overfit to such an extent that they almost reach the performance level of the simplest bagging models with 30% of the features. The latter show no indication of overfitting, however, in common with bagging models having 50% of the features. Bagging with 80% of the features shows light overfitting when \(l>6\), but still achieves the best results among all of the tested algorithms.
The robustness of bagging models to overfitting, relative to AdaBoost and FM, arises from their ability to reduce variance via the averaging of decorrelated errors across the predictions of each submodel. By contrast, when the number of layers is high, AdaBoost and FM utilise a model that is too complex and expressive for the underlying task, leading to overfitting. In concordance with Experiment II, this result suggests that attribute bagging is an effective solution to overfitting in the NISQ setting, in common with the classical domain.
In addition, this experiment also highlights more markedly the discrepancy between the error levels of bagging models with the same number of features but a distinct number of training samples. The difference between the MSE of the bagging models with 20% of the samples and that of those with 100% of the samples is now far more apparent, suggesting that when the variance of the dataset is very high, even bagging models require a sufficient number of training samples to perform well in the NISQ setting.
#### 4.3.4 Experiment IV
For the classification task in Experiment IV, we used the reference UCI Wine dataset. It is a multi-class classification dataset corresponding to the results of a chemical analysis of wines grown within a specific region of Italy. It consists of 13 numerical features representing various chemical properties, such as alcohol, malic acid, and ash content, and a target variable indicating the class of the
wine. The dataset has 178 samples and is a common baseline ML benchmark for classifiers of low parametric complexity.
Results from Experiment IV are reported in Figure 9. Although they cannot be directly compared to the previous results due to the intrinsically different nature of the problem, a few comparative insights can be gained from the respective plot of accuracy curves. First, all the models except bagging with 30% of the features achieve the same accuracy score of 97.2% using 10 layers. The performances of Bag_0.3_0.2 and Bag_0.3_1.0 are still relatively strong, however, with accuracy scores of 94.2% and 96.9% respectively. Given the very low complexity of these two models, this is a striking result.
A further notable aspect of the accuracy curves is that all ensemble models converge with far fewer layers than FM. In particular, they require on average 3 layers to reach a performance plateau, after which the accuracy score saturates. By contrast, FM struggles to achieve a comparable accuracy score, only exceeding 90% accuracy when \(l\geq 7\). This means that the ensemble models are able to learn and capture the complex relationships between the input features far more efficiently than FM, which requires a much deeper architecture to attain comparable results. This observation is particularly relevant when considering the implementation of these models on NISQ devices, where the number of qubits and the coherence time are severely limited.
Figure 8: Evolution of MSE error with respect to the number of quantum neural network layers in Experiment III.
Moreover, as expected, bagging models with 100% of the samples obtain a higher accuracy score than their counterparts with 20% of the samples given the same number of layers. This suggests that using more training samples can improve the performance of ensemble models when the number of layers is low, as it allows them to better capture the underlying patterns of class discriminability in the data.
### Experiments executed on superconducting-based QPU
For the real-hardware evaluation, we compare the performance of the baseline quantum neural network with the Bag_0.8_0.2 ensemble on the same synthetic linear regression dataset used in Experiment I. We selected the Bag_0.8_0.2 model as the representative ensemble technique for its outstanding performance in the simulated experiments despite the low number of training samples. To ensure statistical validity, we repeat each experiment 10 times. However, due to technological constraints on real quantum hardware, we analyze only the linear dataset with a quantum neural network having a single layer.
Figure 10 presents the real-world experimental findings, which indicate that the bagging ensemble reduces the expected mean square error by one-third and the expected variance by half when executed on quantum hardware, compared to the baseline model. Such results demonstrate that the noise-canceling capabilities
Figure 9: Evolution of Accuracy score with respect to quantum neural network depth in Experiment IV.
of ensemble techniques can be effectively exploited on NISQ devices in realistic settings. Additionally, the performance of the ten bagging models varied significantly, underlining the need to reinitialise the ensemble multiple times and validate it against a suitable validation dataset to ensure that the best model is selected.
## 5 Conclusion
We propose the use of ensemble techniques for the practical implementation of quantum machine learning models on NISQ hardware. In particular, we justify the application of these techniques based on their capacity for significant reductions in resource usage, including in the overall qubit, parameter, and gate budget, achieved via the random subspace (attribute bagging) technique. This resource saving is especially crucial for noisy hardware, which is typically limited to a small number of qubits and vulnerable to decoherence, noise, and operational errors. Consequently, the contribution of ensemble techniques may be seen as a form of quantum noise reduction.
To establish this, we evaluated and compared various configurations of bagging and boosting ensemble techniques on synthetic and real-world datasets, tested in both a simulated, noise-free environment and on a superconducting-based QPU by IBM, spanning a range of layer depths.
Our experimental findings showed that bagging ensembles can effectively train quantum neural network instances using fewer features and qubits, which leads to ensemble models with superior performance compared to the baseline model. Reducing the number of features in bagging models of quantum neural networks directly translates into a reduction in the number of qubits, which is a desirable characteristic for practical quantum applications. Ensembles of
Figure 10: Comparison of the average performance of the baseline model and the Bag_0.8_0.2 ensemble technique on IBM quantum hardware. (10a) shows the difference in terms of MSE over 10 executions. (10b) shows the performance of the bagging model with respect to its estimators.
quantum neural networks can also help address some of the toughest challenges associated with noise and decoherence in NISQ devices, as well as mitigate barren plateau effects. These are key considerations in the development of quantum machine learning models, particularly when working with the limited resources of modern quantum systems.
Moreover, bagging models were found to be extremely robust to overfitting, being able to effectively capture the underlying patterns in the data with high generalization ability. This makes them better suited for tasks where generalization is important, such as in real-world applications. However, it is important to notice that the effectiveness of bagging quantum models diminishes with a decrement in the number of features, which suggests that complex bagging models are still needed to obtain satisfactory results. Using only a subset of the features can reduce the computational complexity of the model and prevent overfitting, but it may also result in a loss of information and a decrease in performance. On the contrary, the number of training samples do not seem to have a deep impact on bagging quantum models, hence this bagging strategy may be used when executing quantum neural network instances on real hardware in order to deal with long waiting queues and job scheduling issues. In this regard, having a low number of training data leads to faster training procedures and quantum resource savings. The training of ensembles can also be done in parallel on multiple QPUs in a distributed learning fashion. Therefore, it is important to strike a balance between model complexity and performance to achieve the best possible outcomes.
Additionally, the fact that the bagging models outperform FM and AdaBoost at a low number of layers suggests that the former are better suited for low-depth quantum circuits, which have limited capacity and are prone to noise and errors. For quantum machine learning tasks on NISQ devices, using bagging models with a low number of layers may be a good strategy to achieve good generalization performance while minimizing the impact of noise and errors in the circuit.
Overall, our results suggest that ensembles of quantum neural network models can be a promising avenue for the development of practical quantum machine learning applications on NISQ devices, both from a performance and resource usage perspective. A careful evaluation of the trade-offs between model complexity, performance, quantum resources available and explainability may be necessary to make an informed decision.
In future work, we plan to further investigate the relationship between ensembles and quantum noise, which is a key consideration when developing quantum neural network models. Our findings could potentially contribute to the development of more efficient and accurate quantum machine learning algorithms, which could have significant implications for real-world applications.
## Acknowledgements
The contribution of M. Panella in this work was supported by the "NATIONAL CENTRE FOR HPC, BIG DATA AND QUANTUM COMPUTING" (CN1, Spoke 10) within the Italian "Piano Nazionale di Ripresa e Resilienza (PNRR)", Mission 4 Component 2 Investment 1.4 funded by the European Union - NextGenerationEU - CN00000013 - CUP B83C22002940006. MG and SV are supported by CERN through the CERN Quantum Technology Initiative. Access to the IBM Quantum Services was obtained through the IBM Quantum Hub at CERN. The views expressed are those of the authors and do not reflect the official policy or position of IBM and the IBM Q team. MI is part of the Gruppo Nazionale Calcolo Scientifico of "Istituto Nazionale di Alta Matematica Francesco Severi". AM is supported by the Foundation for Polish Science (FNP), IRAP project ICTQT, contract no. 2018/MAB/5, co-financed by the EU Smart Growth Operational Programme.
## Declaration
### Authors' contributions
MI, MG, and AC had the initial idea, implemented the interface for executing experiments on the IBM QPUs, performed the experiments, and analyzed the data. MG, SV, DW, AM, and MP supervised the project. All authors contributed to the manuscript.
### Availability of data and materials
The data and source code utilized in our study are freely accessible at [https://github.com/incud/Classical-ensemble-of-Quantum-Neural-Networks](https://github.com/incud/Classical-ensemble-of-Quantum-Neural-Networks). The procedural generation code for the Linear Regression dataset is also accessible at the same URL. In addition, the UCI Repository provides open access to the Concrete and Wine datasets, which can be found at [https://archive.ics.uci.edu/ml/index.php](https://archive.ics.uci.edu/ml/index.php). The Diabetes dataset provided by Scikit-Learn is also freely available and included with the Python3 package.
|
2305.09756 | Clinical Note Owns its Hierarchy: Multi-Level Hypergraph Neural Networks
for Patient-Level Representation Learning | Leveraging knowledge from electronic health records (EHRs) to predict a
patient's condition is essential to the effective delivery of appropriate care.
Clinical notes of patient EHRs contain valuable information from healthcare
professionals, but have been underused due to their difficult contents and
complex hierarchies. Recently, hypergraph-based methods have been proposed for
document classifications. Directly adopting existing hypergraph methods on
clinical notes cannot sufficiently utilize the hierarchy information of the
patient, which can degrade clinical semantic information by (1) frequent
neutral words and (2) hierarchies with imbalanced distribution. Thus, we
propose a taxonomy-aware multi-level hypergraph neural network (TM-HGNN), where
multi-level hypergraphs assemble useful neutral words with rare keywords via
note and taxonomy level hyperedges to retain the clinical semantic information.
The constructed patient hypergraphs are fed into hierarchical message passing
layers for learning more balanced multi-level knowledge at the note and
taxonomy levels. We validate the effectiveness of TM-HGNN by conducting
extensive experiments with MIMIC-III dataset on benchmark in-hospital-mortality
prediction. | Nayeon Kim, Yinhua Piao, Sun Kim | 2023-05-16T19:08:18Z | http://arxiv.org/abs/2305.09756v1 | # Clinical Note Owns its Hierarchy: Multi-Level Hypergraph
###### Abstract
Leveraging knowledge from electronic health records (EHRs) to predict a patient's condition is essential to the effective delivery of appropriate care. Clinical notes of patient EHRs contain valuable information from healthcare professionals, but have been underused due to their difficult contents and complex hierarchies. Recently, hypergraph-based methods have been proposed for document classifications. Directly adopting existing hypergraph methods on clinical notes cannot sufficiently utilize the hierarchy information of the patient, which can degrade clinical semantic information by (1) _frequent neutral words_ and (2) _hierarchies with imbalanced distribution_. Thus, we propose a taxonomy-aware multi-level hypergraph neural network (TM-HGNN), where multi-level hypergraphs assemble useful neutral words with rare keywords via note and taxonomy level hyperedges to retain the clinical semantic information. The constructed patient hypergraphs are fed into hierarchical message passing layers for learning more balanced multi-level knowledge at the note and taxonomy levels. We validate the effectiveness of TM-HGNN by conducting extensive experiments with MIMIC-III dataset on benchmark in-hospital-mortality prediction.1
Footnote 1: Our codes and models are publicly available at: [https://github.com/ny1031/TM-HGNN](https://github.com/ny1031/TM-HGNN)
## 1 Introduction
With improvement in healthcare technologies, electronic health records (EHRs) are being used to monitor intensive care units (ICUs) in hospitals. Since it is crucial to schedule appropriate treatments for patients in ICUs, many prognostic models use EHRs to address related tasks, such as in-hospital mortality prediction. EHRs consist of three types of data: structured, semi-structured, and unstructured. Clinical notes, which are unstructured data, contain valuable comments or summaries of the patient's condition written by medical professionals (doctors, nurses, etc.). However, compared to structured data, clinical notes have been underutilized in previous studies due to their difficult-to-understand contents and complex hierarchies (Figure 1(a)). Transformer-based (Vaswani et al., 2017) methods like ClinicalBERT (Alsentzer et al., 2019; Huang et al., 2019, 2020) have been proposed to pre-train on large-scale corpora from similar domains and fine-tune on clinical notes through transfer learning. While Transformer-based methods can effectively detect distant words compared to other sequence-based methods like convolutional neural networks (Kim, 2014; Zhang et al., 2015) and recurrent neural networks (Mikolov et al., 2010; Tai et al., 2015; Liu et al., 2016), their computational complexity still grows steeply for long clinical notes (Figure 2).
Figure 1: (a) Examples of patient clinical notes with difficult contents (e.g. jargons and abbreviations) and complex structures. Patient \(p_{1}\) owns notes of radiology taxonomy (pink) and nursing taxonomy (blue). (b) Differences between existing hypergraphs and our proposed multi-level hypergraphs.

Recently, with the remarkable success of graph neural networks (GNNs) (Kipf and Welling, 2017; Velickovic et al., 2018; Brody et al., 2021), graph-based document classification methods have been proposed (Yao et al., 2019; Huang et al., 2019) that can capture long-range word dependencies and can be adapted to documents with different and irregular lengths. Some methods build word co-occurrence graphs by sliding fixed-size windows to model pairwise interactions between words (Zhang et al., 2020; Piao et al., 2022; Wang et al., 2022). However, the density of the graph increases as the document becomes longer. Besides, there are also some methods that apply hypergraphs to document classification (Ding et al., 2020; Zhang et al., 2022), which can alleviate the high density of document graphs and extract high-order structural information of the documents.
Adopting hypergraphs can reduce the burden of managing long documents with irregular lengths, but additional issues remain when dealing with clinical notes: _(1) Neutral words deteriorate clinical semantic information_. In long clinical notes, there are many frequently written neutral words (e.g. "_rhythm_") that do not directly represent the patient's condition. Most previous methods treat all words equally at the learning stage, which may result in the dominance of frequent neutral words and the negligence of rare keywords that are directly related to the patient's condition. Meanwhile, a neutral word can occasionally augment the information of rare keywords, depending on the intra-taxonomy context. A taxonomy represents the category of a clinical note, within which the implicit semantic meaning of words can differ. For example, "_rhythm_" occurring with "_fibrillation_" in the _ECG_ taxonomy can represent a serious cardiac disorder of a patient, but when "_rhythm_" is written with "_benadryl_" in the _Nursing_ taxonomy, it hardly represents a serious condition. Therefore, assembling intra-taxonomy related words can leverage "_useful_" neutral words together with rare keywords to jointly augment the clinical semantic information, which implies the necessity of introducing taxonomy-level hyperedges. _(2) Imbalanced distribution of multi-level hyperedges_. There is a small number of taxonomies compared to notes for each patient. As a result, when taxonomy-level and note-level information are learned simultaneously, note-level information can obscure taxonomy-level information. To learn more balanced multi-level information from the clinical notes, an effective way of learning multi-level hypergraphs with imbalanced hyperedge distributions is required.
To address the above issues, we propose TM-HGNN (Taxonomy-aware Multi-level HyperGraph Neural Networks), which can effectively and efficiently utilize multi-level high-order semantic information for patient representation learning. Specifically, we adopt patient-level hypergraphs to manage highly unstructured and long clinical notes and define multi-level hyperedges, i.e., note-level and taxonomy-level hyperedges. Moreover, we conduct hierarchical message passing from note-level to taxonomy-level hyperedges using edge-masking. To hierarchically learn word embeddings without mixing note- and taxonomy-level information, note and taxonomy hyperedges are disconnected: note-level word embeddings are learned only with intra-note local information, and the subsequent taxonomy-level propagation introduces clinical semantic information by assembling intra-taxonomy words and separating inter-taxonomy words for better patient-level representation learning. The contributions of this article can be summarized as follows (Figure 2):
* To address issue 1, we construct multi-level hypergraphs for patient-level representation learning, which can assemble "_useful_" neutral words with rare keywords via note- and taxonomy-level hyperedges to retain the clinical semantic information.
* To address issue 2, we propose hierarchical message passing layers for the constructed graphs with imbalanced hyperedges, which can learn more balanced multi-level knowledge for patient-level representation learning.
* We conduct experiments with MIMIC-III clinical notes on the benchmark in-hospital-mortality task. The experimental results demonstrate the effectiveness of our approach.

Figure 2: Advantages of the proposed model, compared to sequence, graph and hypergraph based models. \(N\) and \(E\) denote the number of nodes and edges respectively. We address issues of complexity and different lengths by adopting the hypergraph to represent each patient. Our model retains semantic information by constructing multi-level hypergraph (Section 3.2), and hierarchical message passing layers (Section 3.3) are proposed for balancing multi-level knowledge for patient representation learning.
## 2 Related Work
### Models for Clinical Data
Given the promising potential of managing medical data, four benchmark tasks were proposed by Harutyunyan et al. (2019) for the MIMIC-III (Medical Information Mart for Intensive Care-III) (Johnson et al., 2016) clinical dataset. Most previous works with the MIMIC-III dataset focus on structured data (e.g. time-series vital signals) for prognostic prediction tasks (Choi et al., 2016; Shang et al., 2019) or utilize clinical notes combined with time-series data (Khadanga et al., 2019; Deznabi et al., 2021). Recently, approaches have focused on clinical notes, adopting pre-trained models such as BERT-based (Alsentzer et al., 2019; Huang et al., 2019; Golmaei and Luo, 2021; Naik et al., 2022) and XLNet-based (Huang et al., 2020) ones, or utilizing contextualized phenotypic features extracted from clinical notes (Zhang et al., 2022).
### Graph Neural Networks for Document Classification
Graph neural networks (Kipf and Welling, 2017; Velickovic et al., 2018; Brody et al., 2021) have achieved remarkable success in various deep learning tasks, including text classification. Initially, transductive graphs were applied to documents, as in TextGCN (Yao et al., 2019). Transductive models have to be retrained for every renewal of the data, which is inefficient and hard to generalize (Yao et al., 2019; Huang et al., 2019). For inductive document graph learning, word co-occurrence graphs initialize nodes with word embeddings and exploit pairwise interactions between words. TextING (Zhang et al., 2020) employs gated graph neural networks for document-level graph learning. Following TextGCN (Yao et al., 2019), which applies graph convolutional networks (GCNs) (Kipf and Welling, 2017) to a transductive corpus-level graph, InducT-GCN (Wang et al., 2022) applies GCNs at the inductive level, where unseen documents are allowed. TextSSL (Piao et al., 2022) captures both local and global structural information within graphs.
However, the density of a word co-occurrence graph increases as the document becomes longer, since fixed-size sliding windows are used to capture local pairwise edges. In the case of hypergraph neural networks, hyperedges connect multiple nodes instead of connecting words pairwise by edges, which alleviates the high density of text graphs. HyperGAT (Ding et al., 2020) proposes document-level hypergraphs with hyperedges containing sequential and semantic information. HEGEL (Zhang et al., 2022) applies Transformer-like (Vaswani et al., 2017) multi-head attention to capture high-order cross-sentence relations for effective summarization of long documents. Given the reduced computational complexity for long documents (Figure 2), we adopt hypergraphs to represent patient-level EHRs with clinical notes. Considering the issues of existing hypergraph-based methods (Figure 2), we construct multi-level hypergraphs at the note level and the taxonomy level for each patient. The constructed graphs are fed into hierarchical message passing layers to capture the rich hierarchical information of the clinical notes, which can augment semantic information for patient representation learning.
## 3 Method
### Problem Definition
Our task is to predict in-hospital-mortality for each patient using a set of clinical notes. Given a patient \(p\in\mathcal{P}\) with in-hospital-mortality label \(y\in\mathcal{Y}\), patient \(p\) owns a list of clinical notes \(\mathcal{N}_{p}=[n_{1}^{t_{1}},...,n_{j}^{t_{k}},...]\), and each clinical note \(n^{t}\in\mathcal{N}_{p}\) with taxonomy \(t\in\mathcal{T}_{p}\) contains a sequence of words \(\mathcal{W}_{n^{t}}=[w_{1}^{n^{t}},...,w_{i}^{n^{t}},...]\), where \(j\), \(k\) and \(i\) denote the index of clinical note \(n\), taxonomy \(t\) and word \(w\) of patient \(p\). The set of taxonomies can be represented by \(\mathcal{T}=\{t_{1},t_{2},...,t_{k},...\}\).
Our goal is to construct individual multi-level hypergraphs \(\mathcal{G}_{p}\) for each patient \(p\) and learn the patient-level representation of \(\mathcal{G}_{p}\) with multi-level knowledge via hierarchical message passing layers for the in-hospital-mortality prediction task. Since our model is trained by inductive learning, the subscript \(p\) is omitted throughout the paper.
### Multi-Level Hypergraph Construction
We construct multi-level hypergraphs for patient-level representation learning, which can address the issues mentioned in Section 1. A hypergraph \(\mathcal{G}^{*}=(\mathcal{V},\mathcal{E})\) consists of a set of nodes \(\mathcal{V}\) and hyperedges \(\mathcal{E}\), where multiple nodes can be connected to a single hyperedge \(e\in\mathcal{E}\). A multi-level hypergraph \(\mathcal{G}=\{\mathcal{V},\{\mathcal{E}_{\mathcal{N}}\cup\mathcal{E}_{\mathcal{T}}\}\}\) is constructed from the patient's clinical notes, where \(\mathcal{E}_{\mathcal{N}}\) and \(\mathcal{E}_{\mathcal{T}}\) denote note-level and taxonomy-level hyperedges, respectively. A word node \(v\) existing in note \(n\) with taxonomy \(t\) can be represented by \(\{v\in n,n\in t\}\). A note-level hyperedge is denoted as \(e_{n}\), and a taxonomy-level hyperedge is denoted as \(e_{t}\).
Multi-level Positional EncodingThere are three types of entries in the multi-level hypergraph \(\mathcal{G}\): word nodes \(\mathcal{V}\), note-level hyperedges \(\mathcal{E}_{\mathcal{N}}\), and taxonomy-level hyperedges \(\mathcal{E}_{\mathcal{T}}\). To distinguish these entries, we propose a multi-level positional encoding to introduce more domain-specific meta-information into the hypergraph \(\mathcal{G}\). The multi-level positional encoding function \(\texttt{MPE}(\cdot)\) can be defined as:
\[\texttt{MPE}(x)=[\tau(x),\mathcal{I}_{\mathcal{W}}(x),\mathcal{I}_{\mathcal{N }}(x),\mathcal{I}_{\mathcal{T}}(x)] \tag{1}\]
where entry \(x\in\{\mathcal{V},\mathcal{E}_{\mathcal{N}},\mathcal{E}_{\mathcal{T}}\}\), and the function \(\tau:x\mapsto\{0,1,2\}\) maps entry \(x\) to a single type among word nodes, note-level hyperedges, and taxonomy-level hyperedges. The functions \(\mathcal{I}_{\mathcal{W}}(\cdot),\mathcal{I}_{\mathcal{N}}(\cdot)\), and \(\mathcal{I}_{\mathcal{T}}(\cdot)\) map entry \(x\) to its position at the word, note, and taxonomy levels, respectively. To initialize the embedding of node \(v\), we concatenate the embedding \(\texttt{MPE}(v)\) from multi-level positional encoding and the word2vec (Mikolov et al., 2013) pre-trained embedding \(\mathbf{z}_{v}\). Since shallow word embeddings are widely used to initialize node embeddings in graph-based document representation (Grohe, 2020), we use word2vec (Mikolov et al., 2013) embeddings. A word node embedding \(\mathbf{h}_{v}^{(0)}\) is constructed as follows:
\[\mathbf{h}_{v}^{(0)}=\texttt{MPE}(v)\oplus\mathbf{z}_{v}, \tag{2}\]
where \(\oplus\) denotes concatenation function.
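To make the encoding concrete, the following minimal Python sketch builds the initial embedding of a word node according to Eqs. (1)-(2); the constant names, the index layout, and the toy 8-dimensional word2vec vector are illustrative assumptions of this sketch, not the authors' released implementation.

```python
import numpy as np

TYPE_WORD, TYPE_NOTE, TYPE_TAXONOMY = 0, 1, 2  # the type map tau(x)

def mpe(entry_type, word_idx, note_idx, tax_idx):
    """Multi-level positional encoding MPE(x) = [tau(x), I_W(x), I_N(x), I_T(x)].
    Indices above the entry's own level are set to -1 (Section 3.2.1)."""
    return np.array([entry_type, word_idx, note_idx, tax_idx], dtype=np.float32)

def init_word_node(word_idx, note_idx, tax_idx, z_v):
    """Eq. (2): h_v^(0) = MPE(v) concatenated with a pre-trained word2vec vector z_v."""
    return np.concatenate([mpe(TYPE_WORD, word_idx, note_idx, tax_idx), z_v])

# e.g. the word "rhythm" at word position 4 of note 1 in taxonomy 0 (ECG),
# with a toy 8-dimensional word2vec embedding:
z_rhythm = np.random.randn(8).astype(np.float32)
h0 = init_word_node(4, 1, 0, z_rhythm)
print(h0.shape)  # (12,): 4 positional entries + 8 embedding dimensions
```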
#### 3.2.1 Hyperedge Construction
To extract multi-level information of patient-level representation using clinical notes, we construct patient hypergraphs with two types of hyperedges, one at the note-level hyperedge \(\mathcal{E}_{\mathcal{N}}\) and the other at the taxonomy-level hyperedge \(\mathcal{E}_{\mathcal{T}}\). A word node \(v\) in note \(n\) with taxonomy \(t\) is assigned to one note-level hyperedge \(e_{n}\) and one taxonomy-level hyperedge \(e_{t}\), which can be defined as:
\[\mathcal{E}(v)=\{e_{n},e_{t}|v\in n,n\in t\} \tag{3}\]
Figure 3: Overview of the proposed TM-HGNN. Taxonomy-aware multi-level hypergraphs are fed into the model for hierarchical message passing. \(\hat{y}\) denotes the patient-level prediction.

Note-level HyperedgesWe adopt a linear embedding function \(f_{n}\) and obtain the index embedding using \(\mathcal{I_{N}}(n)\). To preserve the time-dependent sequential information of clinical note \(n\), we simply add time information \(\mathbf{t}(n)\) to the embedding. Then the initial embedding of note-level hyperedge \(h_{e_{n}}^{(0)}\) with \(\texttt{MPE}(\cdot)\) can be defined as:
\[\mathbf{h}_{e_{n}}^{(0)}=\texttt{MPE}(n)\oplus f_{n}^{\theta}(\mathcal{I_{N}}(n ),\mathbf{t}(n)), \tag{4}\]
where \(\theta\in\mathbb{R}^{d\times d}\) denotes the parameter matrix of function \(f_{n}\). Notably, we set the value of word index \(\mathcal{I_{W}}(n)\) as -1 since the note \(n\) represents higher level information than word \(v\).
Taxonomy-level HyperedgesTaxonomy-level hyperedges \(e_{t}\) are constructed from the taxonomy index \(\mathcal{I_{T}}(t)\) through linear layers \(f_{t}\), concatenated with the \(\texttt{MPE}(\cdot)\) function, which can be defined as:
\[\mathbf{h}_{e_{t}}^{(0)}=\texttt{MPE}(t)\oplus f_{t}^{\theta}(\mathcal{I_{T}}( t)), \tag{5}\]
where \(\theta\in\mathbb{R}^{d\times d}\) denotes the parameter matrix of function \(f_{t}\). Like note-level hyperedge, we set \(\mathcal{I_{W}}(t)\) and \(\mathcal{I_{N}}(t)\) as -1 since the level of taxonomy \(t\) is higher than the levels of note and word.
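The hyperedge initialization of Eqs. (4)-(5) can be sketched in the same spirit. Modeling \(f_{n}\) and \(f_{t}\) as single linear layers over the raw index (plus the timestamp for notes) and using a hidden size of 64 are assumptions of this sketch, since the paper leaves their exact parameterization open.

```python
import torch
import torch.nn as nn

class HyperedgeInit(nn.Module):
    """Initial embeddings for note-level (Eq. 4) and taxonomy-level (Eq. 5)
    hyperedges; f_n and f_t are modeled here as small linear maps."""
    def __init__(self, d=64):
        super().__init__()
        self.f_n = nn.Linear(2, d)  # input: [note index I_N(n), timestamp t(n)]
        self.f_t = nn.Linear(1, d)  # input: [taxonomy index I_T(t)]

    def note_edge(self, note_idx, tax_idx, time):
        pe = torch.tensor([1.0, -1.0, float(note_idx), float(tax_idx)])  # MPE(n), I_W = -1
        return torch.cat([pe, self.f_n(torch.tensor([float(note_idx), float(time)]))])

    def tax_edge(self, tax_idx):
        pe = torch.tensor([2.0, -1.0, -1.0, float(tax_idx)])  # MPE(t), I_W = I_N = -1
        return torch.cat([pe, self.f_t(torch.tensor([float(tax_idx)]))])

init = HyperedgeInit(d=64)
print(init.note_edge(note_idx=1, tax_idx=0, time=3.5).shape)  # torch.Size([68])
```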
### Hierarchical Message Passing
To leverage the characteristics of two types of hyperedges, we propose a hierarchical hypergraph convolutional networks, composed of three layers that allow message passing from different types of hyperedges. In general, we define message passing functions for nodes and hyperedges as follows:
\[\mathcal{F_{W}}(\mathrm{h},\mathcal{E},\theta)=\sigma\bigg{(}\theta\bigg{(} \sum_{u\in\mathcal{E}(v)}\frac{1}{\sqrt{d_{v}}\sqrt{d_{u}}}\mathrm{h}_{u} \bigg{)}\bigg{)}, \tag{6}\]
\[\mathcal{F_{\tau}}(\mathrm{h},\mathcal{V}^{\tau},\theta)=\sigma\bigg{(}\theta \bigg{(}\sum_{z\in\mathcal{V}^{\tau}(e)}\frac{1}{\sqrt{d_{e}}\sqrt{d_{z}}} \mathrm{h}_{z}\bigg{)}\bigg{)}, \tag{7}\]
where \(\mathcal{F_{W}}\) denotes the message passing function for word nodes and \(\mathcal{F_{\tau}}\) denotes the message passing function for hyperedges of type \(\tau\in\{1,2\}\), i.e., note-level and taxonomy-level hyperedges, respectively. Function \(\mathcal{F_{W}}\) updates the word node embedding \(\mathrm{h}_{v}\) by aggregating the embeddings of its connected hyperedges \(\mathcal{E}(v)\). Function \(\mathcal{F_{\tau}}\) updates the hyperedge embedding \(\mathrm{h}_{e}\) by aggregating the embeddings of its connected word nodes \(\mathcal{V}^{\tau}(e)\). \(\sigma\) is a non-linear activation function such as \(\mathrm{ReLU}\), and \(\theta\in\mathbb{R}^{d\times d}\) is the weight matrix with dimension \(d\), which can be differently assigned and learned at each level.
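A dense-matrix sketch of Eqs. (6)-(7) is given below, with the hypergraph stored as a binary incidence matrix \(H\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{E}|}\); a practical implementation would rely on sparse operations, and the function names are illustrative.

```python
import torch

def node_update(H, h_edges, theta, act=torch.relu):
    """F_W in Eq. (6): each word node aggregates its incident hyperedges,
    with the symmetric 1/sqrt(d_v * d_e) degree normalization."""
    d_v = H.sum(dim=1).clamp(min=1)  # node degrees
    d_e = H.sum(dim=0).clamp(min=1)  # hyperedge degrees
    norm = H / torch.sqrt(d_v[:, None] * d_e[None, :])
    return act((norm @ h_edges) @ theta)

def edge_update(H, h_nodes, theta, act=torch.relu):
    """F_tau in Eq. (7): each hyperedge aggregates its member word nodes."""
    d_v = H.sum(dim=1).clamp(min=1)
    d_e = H.sum(dim=0).clamp(min=1)
    norm = (H / torch.sqrt(d_v[:, None] * d_e[None, :])).t()
    return act((norm @ h_nodes) @ theta)

# toy check: 5 word nodes, 2 hyperedges, hidden size 8
H = torch.tensor([[1., 0.], [1., 0.], [1., 1.], [0., 1.], [0., 1.]])
theta, h_v, h_e = torch.randn(8, 8), torch.randn(5, 8), torch.randn(2, 8)
print(node_update(H, h_e, theta).shape, edge_update(H, h_v, theta).shape)
```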
Then we can leverage these defined functions to conduct hierarchical message passing learning at the note level and at the taxonomy level.
Initialization LayerDue to the complex structure of the clinical notes, the initial multi-level hypergraph constructed for each patient has a large variance. To prevent falling into local optima in advance, we first use an initialization layer to pre-train the entries of hypergraphs by learning the entire patient graph structure. In this layer, message passing functions are applied to all word nodes \(v\in\mathcal{V}\) and hyperedges \(e\in\mathcal{E_{I}}=\{\mathcal{E_{N}}\cup\mathcal{E_{T}}\}\). Thus, embeddings of node \(v\), hyperedges \(e_{n}\) and \(e_{t}\) at both levels can be defined as:
\[h_{I}(v)=\mathcal{F_{W}}(h_{v}^{(0)},\mathcal{E_{I}}(v),\theta_{I}), \tag{8}\]
\[h_{I}(e_{n})=\mathcal{F_{\tau}}(h_{e_{n}}^{(0)},\mathcal{V}^{\tau}(e_{n}), \theta_{I}),\tau=1 \tag{9}\]
\[h_{I}(e_{t})=\mathcal{F_{\tau}}(h_{e_{t}}^{(0)},\mathcal{V}^{\tau}(e_{t}), \theta_{I}),\tau=2 \tag{10}\]
Note-level Message Passing LayerThen we apply note-level message passing layer on hypergraphs with only word nodes \(v\in\mathcal{V}\) and note-level hyperedges \(e_{n}\in\mathcal{E_{N}}\), and the taxonomy-level hyperedges are masked during message passing. In this layer, the word nodes can only interact with note-level hyperedges, which can learn the intra-note local information.
\[h_{N}(v)=\mathcal{F_{W}}\big{(}h_{I}(v),\mathcal{E_{N}}(v),\theta_{N}\big{)}, \tag{11}\]
\[h_{N}(e_{n})=\mathcal{F_{\tau}}\big{(}h_{I}(e_{n}),\mathcal{V}^{\tau}(e_{n}), \theta_{N}\big{)},\tau=1, \tag{12}\]
\[h_{N}(e_{t})=h_{I}(e_{t}) \tag{13}\]
\begin{table}
\begin{tabular}{l l} \hline \hline & Statistics \\ \hline \# of patients & 17,927 \\ \# of ICU stays & 21,013 \\ \hline \# of in-hospital survival & 18,231 \\ \# of in-hospital mortality & 2,679 \\ \hline \# of notes per ICU stay & 13.29 (7.84) \\ \# of words per ICU stay & 1,385.62 (1,079.57) \\ \# of words per note & 104.25 (66.82) \\ \# of words per taxonomy & 474.75 (531.42) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of the MIMIC-III clinical notes. Averaged numbers are reported with standard deviation.

Taxonomy-level Message Passing LayerThe last layer is the taxonomy-level message passing layer, where all word nodes \(v\in\mathcal{V}\) and taxonomy-level hyperedges \(e_{t}\in\mathcal{E_{T}}\) can be updated. In this layer, we block the hyperedges at the note level. The node representations with note-level information are fused with taxonomy information via taxonomy-level hyperedges, which can assemble the intra-taxonomy related words to augment semantic information.

\[h_{T}(v)=\mathcal{F_{W}}\big(h_{N}(v),\mathcal{E_{T}}(v),\theta_{T}\big), \tag{14}\]

\[h_{T}(e_{n})=h_{N}(e_{n}), \tag{15}\]

\[h_{T}(e_{t})=\mathcal{F}_{\tau}\big(h_{N}(e_{t}),\mathcal{V}^{\tau}(e_{t}),\theta_{T}\big),\tau=2 \tag{16}\]
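Putting the three layers together, the edge-masking scheme of Eqs. (8)-(16) can be sketched as follows; sharing a single weight matrix per stage between the node and hyperedge updates, and the fixed update order, are simplifications of this sketch.

```python
import torch

def prop(H, feats, theta):
    """Symmetric 1/sqrt(d_v * d_e)-normalized propagation used by Eqs. (6)-(7)."""
    d_v = H.sum(1).clamp(min=1)
    d_e = H.sum(0).clamp(min=1)
    return torch.relu((H / torch.sqrt(d_v[:, None] * d_e[None, :])) @ feats @ theta)

def hierarchical_pass(H, h_v, h_e, is_note_edge, thetas):
    """Three TM-HGNN stages: initialization over all hyperedges (Eqs. 8-10),
    a note-level pass with taxonomy hyperedges masked (Eqs. 11-13), and a
    taxonomy-level pass with note hyperedges masked (Eqs. 14-16). Masked
    hyperedges keep their embeddings unchanged (Eqs. 13 and 15)."""
    stages = [torch.ones_like(is_note_edge), is_note_edge, ~is_note_edge]
    for mask, theta in zip(stages, thetas):
        H_m = H[:, mask]                       # drop the masked hyperedge columns
        h_e = h_e.clone()
        h_e[mask] = prop(H_m.t(), h_v, theta)  # hyperedge update, Eq. (7)
        h_v = prop(H_m, h_e[mask], theta)      # word node update, Eq. (6)
    return h_v, h_e

# toy run: 5 word nodes, one note-level and one taxonomy-level hyperedge, d = 8
H = torch.tensor([[1., 0.], [1., 0.], [1., 1.], [0., 1.], [0., 1.]])
out_v, out_e = hierarchical_pass(H, torch.randn(5, 8), torch.randn(2, 8),
                                 torch.tensor([True, False]),
                                 [torch.randn(8, 8) for _ in range(3)])
print(out_v.shape, out_e.shape)  # torch.Size([5, 8]) torch.Size([2, 8])
```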
#### 3.3.1 Patient-Level Hypergraph Classification
After all the aforementioned hierarchical message passing layers, the node and hyperedge embeddings \(h_{T}(v),h_{T}(e_{n}),h_{T}(e_{t})\in\mathbf{H}_{T}\) are aggregated by a mean-pooling operation, which produces the patient-level embedding \(z\); this embedding is finally fed into a \(\mathrm{sigmoid}\) operation as follows:
\[\hat{y}=\mathrm{sigmoid}(z) \tag{17}\]
where \(\hat{y}\) denotes the probability of the predicted label for in-hospital-mortality of the patient. The loss function for patient-level classification is defined as the binary cross-entropy loss:
\[\mathcal{L}=-\left(y\times\log\hat{y}+(1-y)\times\log(1-\hat{y})\right) \tag{18}\]
where \(y\) denotes the true label for in-hospital-mortality. The proposed network, TM-HGNN, can be trained by minimizing the loss function.
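A minimal sketch of the readout in Eqs. (17)-(18) follows. The scalar linear projection `w` applied to the pooled embedding \(z\) is an assumption, as the text does not detail how \(z\) is reduced to a single logit.

```python
import torch

def predict_and_loss(h_v, h_e, w, y):
    """Eqs. (17)-(18): mean-pool all node and hyperedge embeddings into the
    patient embedding z, score it, and compute the binary cross-entropy."""
    z = torch.cat([h_v, h_e], dim=0).mean(dim=0)                      # patient-level embedding
    y_hat = torch.sigmoid(z @ w)                                      # Eq. (17)
    return -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat))   # Eq. (18)

loss = predict_and_loss(torch.randn(5, 8), torch.randn(2, 8),
                        torch.randn(8), torch.tensor(1.0))
print(loss)
```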
## 4 Experimental Settings
### Dataset
We use clinical notes from the Medical Information Mart for Intensive Care III (MIMIC-III) (Johnson et al., 2016) dataset, which are written within 48 hours of ICU admission. For quantitative evaluation, we follow Harutyunyan et al.'s (2019) benchmark setup for data pre-processing and train/test splits, and randomly hold out 20% of the train set as a validation set. All patients without any notes are dropped during data preparation. To prevent overfitting to exceptionally long clinical notes of a single patient, we cap the number of notes per patient at 30, counted from the first admission. Table 1 shows the statistics of the pre-processed MIMIC-III clinical note dataset used in our experiments. We select the top six taxonomies for the experiments, since the number of notes assigned to each taxonomy varies widely (Appendix B, Table 3). In addition, we select two chronic diseases, hypertension and diabetes, to compare prediction results for patients with each disease.
### Compared Methods
In our experiments, the compared baseline methods for end-to-end training are as follows:
* Word-based methods: word2vec (Mikolov et al., 2013) with multi-layer perceptron classifier, and FastText (Joulin et al., 2017).
* Sequence-based methods: TextCNN (Kim, 2014), Bi-LSTM (Hochreiter and Schmidhuber, 1997), and Bi-LSTM with additional attention layer (Zhou et al., 2016).
* Graph-based methods: TextING (Zhang et al., 2020), InducT-GCN (Wang et al., 2022), and HyperGAT (Ding et al., 2020). In particular, HyperGAT represents the hypergraph-based methods, while the other graph-based methods employ word co-occurrence graphs.
### Implementation Details
TM-HGNN is implemented in PyTorch (Paszke et al., 2019) and optimized with the Adam (Kingma and Ba, 2015) optimizer, with a learning rate of 0.001 and a dropout rate of 0.3. We set the hidden dimension \(d\) of each layer to 64 and the batch size to 32 via parameter search. We train models for 100 epochs with an early-stopping strategy, where epoch 30 gives the best results. All experiments are run on a single NVIDIA GeForce RTX 3080 GPU.
## 5 Results
Since the dataset has imbalanced class labels for in-hospital mortality as shown in Table 1, we use AUPRC (Area Under the Precision-Recall Curve) and AUROC (Area Under the Receiver Operating Characteristic Curve) for precise evaluation. It is suggested by Davis and Goadrich (2006) to use AUPRC for imbalanced class problems.
### Classification Performance
Table 2 shows performance comparisons of TM-HGNN and baseline methods. Sequence-based methods outperform word-based methods, which indicates that capturing local dependencies between neighboring words benefits patient document classification. Moreover, all graph-based methods outperform sequence-based and word-based methods. This demonstrates that ignoring the sequential information of words is not detrimental for clinical notes. Furthermore, hypergraphs are more effective than previous word co-occurrence graphs, indicating that it is crucial to extract high-order relations within clinical notes. In particular, as TM-HGNN outperforms HyperGAT (Ding et al., 2020), exploiting taxonomy-level semantic information, which represents the medical context of the notes, aids precise prediction at the patient level. Another advantage of our model, capturing multi-level high-order relations from the note level and the taxonomy level with hierarchy, can be verified by the results in Table 2, where TM-HGNN outperforms T-HGNN. T-HGNN denotes the variant of TM-HGNN which considers note-level and taxonomy-level hyperedges homogeneous. Likewise, results from the hypertension and diabetes patient groups show similar tendencies overall.
### Robustness to Lengths
To evaluate the performance dependency on lengths, we divide the patient-level clinical notes into three groups by length: short, medium, and long (Appendix B, Figure 8). For the test set, the number of patients is 645, 1,707, and 856 for the short, medium, and long groups, respectively, and the percentage of mortality is 6.98%, 10.72%, and 15.89% for each group, which implies that patients in critical condition during ICU stays are more likely to have long clinical notes. Figure 4 shows performance comparisons for the three groups with TextING (Zhang et al., 2020), which utilizes a word co-occurrence graph, HyperGAT (Ding et al., 2020), an ordinary hypergraph-based approach, and our multi-level hypergraph approach (TM-HGNN). All three models are more effective on longer clinical notes, which demonstrates that graph-based models are robust to long documents in general. Among the three models, our proposed TM-HGNN mostly performs best, followed by HyperGAT (Ding et al., 2020) and then TextING (Zhang et al., 2020). The results demonstrate that our TM-HGNN, which exploits taxonomy-level semantic information, is the most effective for clinical notes regardless of length, compared to other graph-based approaches.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{**Categories**} & \multirow{2}{*}{**Models**} & \multicolumn{2}{c}{**Whole**} & \multicolumn{2}{c}{**Hypertension**} & \multicolumn{2}{c}{**Diabetes**} \\ \cline{3-8} & & AUPRC & AUROC & AUPRC & AUROC & AUPRC & AUROC \\ \hline \multirow{2}{*}{Word-based} & Word2vec + MLP & 13.49 \(\pm\) 1.68 & 56.65 \(\pm\) 5.12 & 16.82 \(\pm\) 1.78 & 53.56 \(\pm\) 4.20 & 18.15 \(\pm\) 1.42 & 51.94 \(\pm\) 3.40 \\ & FastText & 17.06 \(\pm\) 0.08 & 62.37 \(\pm\) 0.11 & 25.56 \(\pm\) 0.28 & 62.39 \(\pm\) 0.18 & 31.33 \(\pm\) 0.33 & 67.59 \(\pm\) 0.20 \\ \hline \multirow{2}{*}{Sequence-based} & Bi-LSTM & 17.67 \(\pm\) 4.19 & 58.75 \(\pm\) 5.78 & 21.75 \(\pm\) 5.25 & 57.39 \(\pm\) 6.11 & 27.52 \(\pm\) 7.57 & 61.86 \(\pm\) 8.38 \\ & Bi-LSTM w/ Att. & 17.96 \(\pm\) 0.61 & 62.63 \(\pm\) 1.31 & 26.05 \(\pm\) 1.80 & 63.24 \(\pm\) 1.57 & 33.01 \(\pm\) 3.53 & 68.89 \(\pm\) 1.58 \\ & TextCNN & 20.34 \(\pm\) 0.67 & 68.25 \(\pm\) 0.54 & 27.10 \(\pm\) 1.82 & 66.10 \(\pm\) 1.20 & 36.99 \(\pm\) 2.54 & 71.83 \(\pm\) 1.69 \\ \hline \multirow{2}{*}{Graph-based} & TextING & 34.50 \(\pm\) 7.79 & 78.20 \(\pm\) 4.27 & 36.63 \(\pm\) 8.30 & 80.12 \(\pm\) 4.05 & 36.13 \(\pm\) 8.66 & 80.28 \(\pm\) 3.84 \\ & InducT-GCN & 43.03 \(\pm\) 1.96 & 82.23 \(\pm\) 0.72 & 41.06 \(\pm\) 2.95 & 85.56 \(\pm\) 1.24 & 40.59 \(\pm\) 3.67 & 84.42 \(\pm\) 1.45 \\ \hline \multirow{2}{*}{HyperGraph-based} & HyperGAT & 44.42 \(\pm\) 1.96 & 84.00 \(\pm\) 0.84 & 42.32 \(\pm\) 1.78 & 86.41 \(\pm\) 1.01 & 40.08 \(\pm\) 2.45 & 85.03 \(\pm\) 1.20 \\ & T-HGNN (Ours) & 45.85 \(\pm\) 1.91 & 84.29 \(\pm\) 0.31 & 43.53 \(\pm\) 2.01 & 87.07 \(\pm\) 0.64 & 40.47 \(\pm\) 2.29 & 85.48 \(\pm\) 0.92 \\ & TM-HGNN (Ours) & **48.74 \(\pm\) 0.60** & **84.89 \(\pm\) 0.42** & **47.27 \(\pm\) 1.21** & **87.75 \(\pm\) 0.54** & **42.22 \(\pm\) 1.25** & **85.86 \(\pm\) 0.73** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classification performance comparison on patient-level clinical tasks, evaluated with AUPRC and AUROC in percentages. We report averaged results with standard deviation over 10 random seeds. Values in boldface denote the best results.
Figure 4: Prediction results of TextING, HyperGAT, and TM-HGNN for three patient-level clinical note groups divided by length (short, medium, and long). AUPRC and AUROC are used for evaluation.
Figure 5: Performance results of ablation studies. The effectiveness of the multi-level hypergraph and hierarchical message passing in the proposed model TM-HGNN are validated respectively.
### Ablation Study
Effect of Multi-level HypergraphIn order to validate the effect of multi-level hypergraphs, we ignore taxonomy-level and note-level hyperedges, respectively. _w/o taxonomy_, which ignores taxonomy-level hyperedges, deteriorates the performance most significantly. _w/o note_ shows degraded performance as well. Thus, the effectiveness of multi-level hypergraph construction for patient representation learning is verified (Figure 5).
Effect of Hierarchical Message PassingFigure 5 demonstrates that hierarchical message passing (note-level to taxonomy-level) for multi-level hypergraphs is more effective than learning without hierarchies, since _w/o hierarchy_ shows inferior performance compared to TM-HGNN. _w/o hierarchy_ corresponds to T-HGNN from Table 2, which considers every hyperedge as homogeneous. The degraded performance of _w/o initialization_ shows the effectiveness of the initialization layer before hierarchical message passing, indicating that pre-training on the entire multi-level hypergraph first benefits patient-level representation learning.
### Case Study
Hierarchical Message PassingWe visualize the learned node representations based on principal component analysis (PCA) results, as hierarchical message passing proceeds in TM-HGNN. In Figure 6(a), "_rhythm_" from the ECG and Nursing/other taxonomies are mapped closely in the initial word embeddings, since they are literally the same word. As the patient-level hypergraphs are fed into the global-level, note-level, and taxonomy-level layers in order, words in the same taxonomies assemble, as can be seen in Figure 6(b), (c), and (d). As a result, "_rhythm_" of ECG represents a different semantic meaning from "_rhythm_" of Nursing/other, as it is learned considerably closer to "_fibrillation_" from the same taxonomy.
Importance of Taxonomy-level Semantic InformationTo investigate the importance of taxonomy-level semantic information extraction, we visualize PCA results of the learned node embeddings from the baseline method and the proposed TM-HGNN. We select the patient with hospital admission id (HADM_ID) 147702 for the case study, since TM-HGNN successfully predicts the true label for in-hospital-mortality, which is positive, while the other baseline methods make false negative predictions. As in Figure 7, HyperGAT learns "_rhythm_" without taxonomy-level semantic information, since it is not assembled with other words in the same taxonomy. But TM-HGNN separately learns "_rhythm_" from ECG and "_rhythm_" from Nursing/other based on different contexts, which results in same-taxonomy words being aligned adjacently, such as "_fibrillation_" of ECG and "_benadryl_" of Nursing/other. Therefore, in the case of TM-HGNN, the frequently used neutral word "_rhythm_" from ECG together with the word "_fibrillation_" means an irregular "_rhythm_" of the heart and is closely related to the mortality of the patient, whereas "_rhythm_" from Nursing/other with another nursing term remains more neutral. This phenomenon demonstrates that contextualizing taxonomy for frequent neutral words enables differentiation and reduces the ambiguity of frequent neutral words (e.g. "_rhythm_"), which is crucial to avoid false negative predictions in patient-level representation learning.

Figure 6: PCA results of learned node representations from each layer of TM-HGNN, for patient case HADM_ID=147702. "_Rhythm_" and "_fibrillation_" from ECG, "_rhythm_" and "_benadryl_" from Nursing/other taxonomy are highlighted. (a) Input word node embeddings. (b) Initialized node embeddings from the first layer. (c) After second layer, note-level message passing. (d) Final node embeddings from TM-HGNN, after taxonomy-level message passing. Word node embeddings are aligned with the same taxonomy words.

Figure 7: PCA results of learned node representations from HyperGAT (a) and TM-HGNN (b). "_Rhythm_" and "_fibrillation_" from ECG, "_rhythm_" and "_benadryl_" from Nursing/other taxonomy are highlighted.
## 6 Conclusion
In this paper, we propose TM-HGNN, a taxonomy-aware multi-level hypergraph neural network and a novel approach for patient-level clinical note representation learning. We employ a hypergraph-based approach and introduce multi-level hyperedges (note- and taxonomy-level) to address the long and complex information of clinical notes. TM-HGNN extracts high-order semantic information from the multi-level patient hypergraphs in hierarchical order, first at the note level and then at the taxonomy level. Clinical note representations can be effectively learned in an end-to-end manner with TM-HGNN, which is validated by extensive experiments.
## Limitations
Since our approach, TM-HGNN, aggregates every note written during an ICU stay for patient representation learning, it is not suited to time-series prediction tasks (e.g. forecasting vital signs). We leave adapting and applying our approach to time-series prediction tasks for future study.
## Ethics Statement
In MIMIC-III dataset Johnson et al. (2016), every patient is deidentified, according to Health Insurance Portability and Accountability Act (HIPAA) standards. The fields of data which can identify the patient, such as patient name and address, are completely removed based on the identifying data list provided in HIPAA. In addition, the dates for ICU stays are shifted for randomly selected patients, preserving the intervals within data collected from each patient. Therefore, the personal information for the patients used in this study is strictly kept private. More detailed information about deidentification of MIMIC-III can be found in Johnson et al. (2016).
## Acknowledgements
This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) [NO.2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)] and the Bio & Medical Technology Development Program of the National Research Foundation (NRF) funded by the Ministry of Science & ICT (RS-2023-00257479), and the ICT at Seoul National University provides research facilities for this study.
|
2305.16886 | Understanding Sparse Neural Networks from their Topology via
Multipartite Graph Representations | Pruning-at-Initialization (PaI) algorithms provide Sparse Neural Networks
(SNNs) which are computationally more efficient than their dense counterparts,
and try to avoid performance degradation. While much emphasis has been directed
towards \emph{how} to prune, we still do not know \emph{what topological
metrics} of the SNNs characterize \emph{good performance}. From prior work, we
have layer-wise topological metrics by which SNN performance can be predicted:
the Ramanujan-based metrics. To exploit these metrics, proper ways to represent
network layers via Graph Encodings (GEs) are needed, with Bipartite Graph
Encodings (BGEs) being the \emph{de-facto} standard at the current stage.
Nevertheless, existing BGEs neglect the impact of the inputs, and do not
characterize the SNN in an end-to-end manner. Additionally, thanks to a
thorough study of the Ramanujan-based metrics, we discover that they are only
as good as the \emph{layer-wise density} as performance predictors, when paired
with BGEs. To close both gaps, we design a comprehensive topological analysis
for SNNs with both linear and convolutional layers, via (i) a new input-aware
Multipartite Graph Encoding (MGE) for SNNs and (ii) the design of new
end-to-end topological metrics over the MGE. With these novelties, we show the
following: (a) The proposed MGE allows to extract topological metrics that are
much better predictors of the accuracy drop than metrics computed from current
input-agnostic BGEs; (b) Which metrics are important at different sparsity
levels and for different architectures; (c) A mixture of our topological
metrics can rank PaI algorithms more effectively than Ramanujan-based metrics.
The codebase is publicly available at https://github.com/eliacunegatti/mge-snn. | Elia Cunegatti, Matteo Farina, Doina Bucur, Giovanni Iacca | 2023-05-26T12:45:58Z | http://arxiv.org/abs/2305.16886v2 | # Peeking inside Sparse Neural Networks using Multi-Partite Graph Representations
###### Abstract
Modern Deep Neural Networks (DNNs) have achieved very high performance at the expense of computational resources. To decrease the computational burden, several techniques have been proposed to extract, from a given DNN, efficient subnetworks which are able to preserve performance while reducing the number of network parameters. The literature provides a broad set of techniques to discover such subnetworks, but few works have studied the peculiar topologies of such pruned architectures. In this paper, we propose a novel _unrolled input-aware_ bipartite Graph Encoding (GE) that is able to generate, for each layer in either a sparse or a dense neural network, its corresponding graph representation based on its relation with the input data. We also extend it into a multipartite GE, to capture the relation between layers. Then, we leverage topological properties to study the differences between the existing pruning algorithms and algorithm categories, as well as the relation between topologies and performance.
## 1 Introduction
Pruning Dense Neural Networks (DNNs) has recently become one of the most promising research areas in machine learning. Network pruning consists in removing a portion of the network parameters (i.e., weights), aiming at reducing the computational resources and the inference time while avoiding performance degradation. One of the most important findings in network pruning [26] proved that, inside a randomly initialized DNN, there are subnetworks (the so-called _"winning tickets"_) that, once trained in isolation, can reach the performance of the overall dense network.
The algorithms proposed to find such subnetworks can be roughly split into four categories, which differ in _how_ and _when_ they uncover the sparse architecture. The earliest works focus on _Post-Training Pruning_, i.e., methods that, once the dense network has been fully trained, are able to uncover the sparse structures using simple heuristics to remove the lower-magnitude weights from the trained network [33, 50, 3, 19]. To decrease the computational cost of training the whole DNN, [54] and [46] propose a dense-to-sparse pruning technique employing Gradual Magnitude Pruning (GMP) to reduce the number of parameters during the training phase. The Lottery Ticket Hypothesis (LTH) [26] discovers the final sparse structure using an iterative process of training-pruning. A second group of algorithms focuses instead on _Pruning at Initialization (PaI)_, where the subnetwork is retrieved prior to training, based on a scoring criterion calculated on randomly initialized weights [35, 8, 37, 44, 1]. A third family of algorithms, called _Dynamic Sparse Training (DST)_ techniques, aims at modifying the structure of the sparse network during the training process [18, 12, 34, 45, 14, 30]. The last category of pruning algorithms, based on the so-called _Strong Lottery Ticket Hypothesis (SLTH)_, differs from the previous categories since the weights are not trained at all [20, 49, 27].
Applying such pruning algorithms to a DNN \(f(x,\theta)\) provides a binary mask \(m\in\{0,1\}^{|\theta|}\) that generates a Sparse Neural Network (SNN) \(f(x,m\odot\theta)\) characterized by a certain topology. While many papers have investigated _how_ to find sparse architectures, only a few have looked at _why_ (from a topological viewpoint) SNNs perform well, and _how_ different pruning algorithms actually produce different SNN topologies. An SNN has indeed a unique topology, whose graph representation can be constructed and analyzed [31, 36]. The state-of-the-art in graph construction for SNNs with convolutional layers is based on _rolled_ representations [17, 36, 13], where each node represents a layer parameter, i.e., either a filter/channel or a kernel value. However, such _rolled_ representations do not capture the complete structure of the network. In fact, in these representations, the kernel parameters are used as static nodes, whereas convolutional operations use the kernel _dynamically_ over the input data. The latter can be represented as nodes only in _unrolled_ representations, see [4].
To overcome this limitation, we propose a novel _unrolled input-aware_ Graph Encoding (GE) which fully represents the relation between the layer parameters and the layer input data. This encoding is radically different from all the _rolled_ encodings previously proposed in the literature for the complete graph representation of convolutional layers. In particular, unlike the weighted _unrolled_ GE proposed in [4], which is only designed to associate the weights of a DNN to graph edges, our proposed GE works with both dense and sparse architectures and focuses on the presence/absence of edges, rather than their weights. Furthermore, our GE generates one bipartite graph _for each layer_, rather than one bipartite graph _for each combination of input feature maps and filters_, as in [4]. Our GE works as follows: it takes as input a neural network (dense or sparse), and the input data size, and generates one bipartite graph for each layer. Nodes correspond to that layer's inputs (e.g., in computer vision tasks, these are pixels in feature maps) while edges correspond to the masked/unmasked relation between those inputs and the output features, which in turn is based on the pruned layer parameters (in the case of SNNs; in the case of DNNs, all parameters are considered). We then extend this GE into a multipartite graph, to describe relations between layers.
Finally, we use the proposed GE to thoroughly study different categories of state-of-the-art unstructured pruning algorithms, to "peek inside" the SNNs they generate, and to understand how the _topological features_ of those SNNs are related to their _performance drop_ w.r.t. their corresponding DNNs, especially at extreme sparsity levels. We base our analysis on a large pool of SNNs obtained by combining eleven pruning algorithms, five sparsity ratios, three datasets, and four Convolutional Neural Networks (CNNs) benchmarked in the literature on SNNs, such as Conv-6 [26], as well as on state-of-the-art CNNs, such as ResNet [22], Wide-Resnet [53], and MobileNet [23; 38].
To summarize, the main contributions of this paper can be outlined as follows:
1. a novel _unrolled input-aware_ GE which correctly reflects the convolutional operations, and links consecutive layers into a single multipartite graph representing the whole SNN;
2. an extensive study about how each pruning algorithm (and algorithm category) generates sparse topologies with similar patterns;
3. an analysis of which topological features can predict the performance drop of SNNs.
## 2 Related Work
We summarize the literature on pruning algorithms and graph representations of neural networks.
### Pruning Algorithms
**Pruning at Initialization.** This set of algorithms aims at discovering the best subnetwork (prior to training) and selecting the weights based on some predefined criteria. The earliest approach of this kind is SNIP [35], which aims at selecting the weights based on their influence on the loss function. Moreover, an iterative version of SNIP is presented in [37]. GraSP [8] applies a gradient signal preservation mechanism based on the Hessian-gradient product, while SynFlow [44] maximizes the synaptic strengths. More recently, ProsPR [1] has been devised to maximize trainability through meta-gradients over the first steps of the optimization process. Lastly, NTK-SAP [51] uses neural tangent kernel theory to remove the less informative connections. These approaches are relatively cheap in terms of computational cost, since the mask is found before training. However, as the sparsity ratio increases, performance deteriorates faster than with other categories of pruning algorithms, due to the difficulty of training SNNs from scratch [16, 15] and to the fact that the architecture is statically determined and cannot change during the training phase.
**Dynamic Sparse Training.** This category of algorithms has been proposed to tackle the aforementioned limitations of PaI, thus allowing higher performance at the cost of a slight increase in computational cost. Using gradient information retrieved during training, sparse architectures can change their topology via magnitude pruning followed by a pre-defined growth criterion, e.g., based on random growth [12, 34], momentum [45], or absolute gradients [14]. To overcome the limitation of a fixed layer-wise sparsity ratio, a few reparametrization techniques have been proposed [34, 45] to better allocate parameters over layers.
**Sanity Checks.** Recent works [43; 25] have questioned the ability of PaI algorithms to uncover the most effective architectures, stating that they rather focus on discovering the best _layer-wise sparsity ratios_. Their ability to improve performance over random pruning algorithms has been discussed in [39], which shows how even small perturbations of random pruning (Layer-wise Random Pruning such as ER [12] and ERK [14]) are able to outperform well-engineered PaI algorithms.
### Graph Representation of Sparse and Dense Neural Networks
In order to gain insights from DNN topologies, several works have tried to devise weighted graph representations of the networks. Based on such representations, early-stopping criteria [4], customized initialization techniques [29], and performance predictors [47; 48] have been introduced.
What sets an SNN apart from a DNN is its unique topology, which can provide insight into its ability to solve a given task even when removing a portion of parameters. Grounded in graph theory, the random generation of small sparse structures for both Multi-Layer Perceptrons (MLPs) [6; 42] and CNNs [52] has been studied, showing that performance is associated with the _clustering coefficient_ (which measures the capacity of nodes to cluster together) and the _average path length_ of its graph representation (i.e., the average number of connections across all shortest paths).
To investigate the topological properties of SNNs, different metrics have been proposed. In [31], Graph-Edit-Distance has been used as a similarity measure, computed only on MLPs, to show how the graph topology evolves using a _Dynamic Sparse Training_ algorithm such as SET [12]. By using Singular Vector Canonical Correlation Analysis, in [5, 2] it has been shown that different topologies are able to achieve similar performances. Clusterability has been taken into account in [17], showing that MLPs are more clusterable than CNNs. Finally, the performance of SNNs has been investigated via basic graph properties [41] and by means of Ramanujan graphs for PaI algorithms, indicating that performance correlates with graph connectivity [36] and can be predicted, e.g., using the Iterative Mean Difference of Bound (IMDB) [13].
In all the works mentioned above, the graph representation of the convolutional layers is modelled either with a relational graph [52], or with a _rolled_ encoding based only on the kernel parameters [36, 13], rather than on the relationship between the latter and the input data. To the best of our knowledge, so far only [4] proposed an _unrolled_ GE but, besides the fact that it considers the network weights, this method generates several bipartite graphs for each convolutional layer, while our approach generates only one bipartite graph per layer. From a topological point of view, the approach from [4] has in fact two main limitations. Firstly, it does not make it possible to directly calculate topological measures for a layer, since each graph contains only partial information about it. Secondly, since in convolutional operations each \(j\)-th filter is convolved with the \(a\)-th input feature map one by one, separating these operations does not allow computing the correct final contribution over the input data.
## 3 Methodology
In this section, we first introduce the novel _unrolled input-aware_ graph encoding and its formulation in the _bipartite_ version. We then extend it to the _multipartite_ version, which links consecutive layers. Finally, we propose topological metrics that can be extracted from such GEs.
### Bipartite Graph Encoding (BGE)
The proposed BGE encodes a neural network as a list of unweighted directed acyclic bipartite graphs \(G=(G_{1},\ldots,G_{N})\), with \(N\) the number of layers in the neural network. The individual graphs are not linked into a single graph. Our notation is summarized in Table 1.
Due to its design, the bipartite graph construction differs for linear and convolutional layers. For linear layers, we use the encoding proposed in [4, 17, 47, 13]: denoting with \(L\) and \(R\) respectively the left and right layer of the \(i\)-th bipartite graph, and given a binary mask \(M_{i}\in\{0,1\}^{|L_{i}|\times|R_{i}|}\), its corresponding GE is \(G_{i}=(L_{i}\cup R_{i},E_{i})\), where \(E_{i}\) is the set of edges present in \(M_{i}\), i.e., \((a,b)\in E_{i}\iff M_{i}^{a,b}\neq 0\).

\begin{table}
\begin{tabular}{l l} \hline \hline
**Symbol** & **Definition** \\ \hline \(G=(L\cup R,E)\) & bipartite graph with left node set \(L\), right node set \(R\) (for a total of \(|L|+|R|\) nodes), and edge set \(E\) \\ \hline \(N\) & number of layers \\ \(h,w\) & height and width of the input feature map \\ \(M\) & binary mask of pruned/unpruned weights \\ \(W\) & layer parameters \\ \(h_{ker},w_{ker}\) & height and width of the kernel \\ \(c_{in},c_{out}\) & number of input and output channels \\ \(P,S\) & padding, stride \\ \hline \hline \end{tabular}
\end{table}
Table 1: Notation used in the paper. We consider the case of vision tasks.
For convolutional layers, our approach is substantially different from all the previous ones proposed in the literature. Specifically, we devise our encoding based on the _unrolled_ input size: given as input, for each \(i\)-th layer, a set of feature maps \(I_{i}\in\mathbb{R}^{h_{i}\times w_{i}\times c_{in}}\), we construct the corresponding bipartite graph as \(G_{i}=(L_{i}\cup R_{i},E_{i})\), where again \(L_{i}\) and \(R_{i}\) are the two layers of the bipartite graph, and \(L_{i}\) corresponds to the flattened representation of the inputs. The size of the layer \(R_{i}\), i.e., the output feature map, is calculated based on the input size \(I_{i}\) and the layer parameters \(W_{i}\in\mathbb{R}^{c_{out}\times c_{in}\times h_{ker}\times w_{ker}}\):
\[|L_{i}|=h_{i}\times w_{i}\times c_{in}\hskip 28.452756pt|R_{i}|=\left(\frac{h_{ i}-h_{\textit{ker}}}{S}+1\right)\times\left(\frac{w_{i}-w_{\textit{ker}}}{S}+1 \right)\times c_{out}. \tag{1}\]
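For concreteness, the node counts of Eq. (1) can be computed as in the following sketch; the function name is illustrative, and the padding argument is included for generality (Eq. (1) is stated for the unpadded case).

```python
def bipartite_sizes(h, w, c_in, c_out, h_ker, w_ker, stride, padding=0):
    """Node counts of the unrolled bipartite encoding of one convolutional
    layer (Eq. 1): |L| is the flattened input feature map, |R| the flattened
    output feature map."""
    L = h * w * c_in
    h_out = (h - h_ker + 2 * padding) // stride + 1
    w_out = (w - w_ker + 2 * padding) // stride + 1
    return L, h_out * w_out * c_out

# e.g. a 3x3 convolution, stride 1, no padding, on a 32x32x3 input with 16 filters
print(bipartite_sizes(32, 32, 3, 16, 3, 3, 1))  # (3072, 14400)
```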
Differently from the linear layer case, the set of edges \(E\) cannot be directly computed from the convolutional mask \(M_{i}\in\{0,1\}^{c_{out}\times c_{in}\times h_{ker}\times w_{ker}}\), since the latter is dynamically applied over the input data1:
Footnote 1: The formula uses cross-correlation.
\[x_{i,j}^{out}=\sum_{in=0}^{c_{in}-1}\sum_{u=0}^{h_{ker}-1}\sum_{v=0}^{w_{ker}-1}I_{i+u,j+v}^{in}\times M_{u,v}^{out,in}\qquad\forall\ out\in[0,c_{out}). \tag{2}\]

From Eq. (1), we know that \(I_{i+u,j+v}^{in}\) and \(x_{i,j}^{out}\) respectively correspond to a node \(a_{(i+u,j+v),in}\in L_{i}\) and a node \(b_{(i,j),out}\in R_{i}\), so in this case the edges of the bipartite graph are constructed during the convolutional operation such that:

\[E_{i}=\{(a_{(i+u,j+v),in},b_{(i,j),out})\mid M_{u,v}^{out,in}\neq 0\ \ \forall\ out,in,u,v\} \tag{3}\]

where the ranges of \(out,in,u,v\) are defined according to Eqs. (1) and (2), \(in\) and \(out\) are respectively the IDs of the input and the output channel taken into consideration for that convolutional step2, and \(u,v\) index one kernel entry. Intuitively, given a layer \(l_{i}\), each input element (e.g., in computer vision tasks, each pixel) represents a node in the graph, and the connection between an element of the input (denoted as \(a\)) and an element of the output feature map (denoted as \(b\)) is present if and only if, during the convolutional operation, the contribution of \(a\) for generating \(b\) is not set to zero by the mask \(M_{i}\) in the kernel cell used to convolute the two pixels. An illustration of such encoding, which highlights the construction of the graph throughout the convolutional steps for both dense and sparse networks, is shown in Figure 1.
Footnote 2: In case of depth-wise separable convolution [23], the steps are only computed if \(in=out\).
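The edge construction of Eqs. (2)-(3) can be sketched as below; the (row, col, channel) flattening order of the node ids, the omission of padding, and the nested-loop (rather than vectorized) form are assumptions made for clarity.

```python
import numpy as np

def unrolled_conv_edges(mask, h, w, stride=1):
    """Edge set of Eq. (3) for one convolutional layer: input pixel
    (i+u, j+v) of channel `inp` is linked to output pixel (i, j) of channel
    `out` whenever the kernel cell (u, v) that convolutes them is unpruned.
    `mask` has shape (c_out, c_in, h_ker, w_ker)."""
    c_out, c_in, h_ker, w_ker = mask.shape
    h_out = (h - h_ker) // stride + 1
    w_out = (w - w_ker) // stride + 1
    edges = set()
    for out in range(c_out):
        for inp in range(c_in):
            for i in range(h_out):
                for j in range(w_out):
                    for u in range(h_ker):
                        for v in range(w_ker):
                            if mask[out, inp, u, v] != 0:
                                a = ((i * stride + u) * w + (j * stride + v)) * c_in + inp
                                b = (i * w_out + j) * c_out + out
                                edges.add((a, b))
    return edges

# toy layer: one 3x3 filter with a single surviving weight, on a 4x4x1 input
m = np.zeros((1, 1, 3, 3)); m[0, 0, 1, 1] = 1
print(len(unrolled_conv_edges(m, 4, 4)))  # 4 output pixels -> 4 edges
```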
### Multipartite Graph Encoding (MGE)
The bipartite GE described above has been devised to encode, independently, each single layer (either convolutional or linear) in a network. However, the limitation of this BGE lies in the lack of connections between consecutive (and, indirectly, non-consecutive) layers. As mentioned earlier, this limitation is however common to all the other GEs proposed in the literature [4; 47; 36; 13], that analyze the layers one by one, without connecting consecutive layers \(l_{i}\) and \(l_{i+1}\). On the other hand, differently from the existing encodings, our bipartite GE can be straightforwardly extended into a multipartite GE, in order to encode the whole network as an _unweighted directed acyclic multipartite graph_\(G=(G_{1},\ldots,G_{N})\), where each pair of consecutive graphs \(G_{i}\) and \(G_{i+1}\) is linked such that \(R_{G_{i}}=L_{G_{i+1}}\)3. The set of edges for each partition \(G_{i}\) is computed as described in Section 3.1. However, an extension of the previous encoding is needed for connecting consecutive layers when a pooling operation is employed between them, as explained in Appendix C.
Footnote 3: The graph representation of residual connections is not taken into consideration since the number of parameters is much smaller compared to classical convolutional layers.
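A possible way to realize this chaining in code is to give each node-layer boundary a single running ID offset, so that \(R_{G_{i}}\) and \(L_{G_{i+1}}\) literally share node IDs; a sketch, assuming per-layer edge lists with layer-local IDs:

```python
import networkx as nx

def build_mge(edge_lists, boundary_sizes):
    """Chain N per-layer bipartite edge lists (with layer-local node IDs)
    into one unweighted directed acyclic multipartite graph.

    `boundary_sizes` holds the N+1 node-layer sizes |L_1|, ..., |L_N|, |R_N|;
    identifying R_{G_i} with L_{G_{i+1}} then reduces to giving each
    boundary a single global ID offset.
    """
    offsets, total = [], 0
    for size in boundary_sizes:
        offsets.append(total)
        total += size
    G = nx.DiGraph()
    for i, edges in enumerate(edge_lists):
        G.add_edges_from((offsets[i] + a, offsets[i + 1] + b) for a, b in edges)
    return G
```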
### Topological Metrics
The unrolled GE proposed allows us to study the SNNs from a topological perspective, including a first theoretical analysis of the network connectivity between consecutive layers. We compute a number of _topological metrics_ (in the following, referred to for brevity as _topometrics_) over SNN topologies. These topometrics can be broken down into three categories: two structural (that we call **local** and **regional** graph metrics), and one related to the **stability** of pruning.
The **local** graph metrics are those computable over individual nodes or edges. These metrics (1) are computationally inexpensive, and (2) are able to capture some features of the graph connectivity between consecutive layers. Node-based topometrics include the fraction of _sink_, _source_, and _disconnected nodes_ over the MGE. The sink and source4 nodes are, respectively, those with outdegree and indegree of zero. The disconnected nodes are those with neither incoming nor outgoing connections. Considering the sink and source nodes, it is possible to compute the fraction of _removable connections_, which are edge-based topometrics. The out-connections of the set of source nodes (denoted here \(\alpha\)) are \(\text{r-out}=\frac{1}{|E|}\cdot\sum_{n\in\alpha}\text{outdegree}(n)\). The in-connections of the set of sink nodes (denoted here \(\beta\)) are \(\text{r-in}=\frac{1}{|E|}\cdot\sum_{n\in\beta}\text{indegree}(n)\). In fact, both these types of connections are useless for the final SNN performance, since they are ignored at inference.
Footnote 4: Padding nodes are already removed from the source set, since they have zero in-connections by design.
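As a sketch of how these local topometrics could be computed over an MGE built with `networkx` (following the definitions above; which nodes to exclude, e.g. padding nodes or the true input layer, is left to the caller):

```python
def local_topometrics(G, excluded=()):
    """Local topometrics over the MGE: fractions of source, sink, and
    disconnected nodes, plus the removable connections r-out and r-in.
    `excluded` should contain padding nodes (cf. the footnote above) and
    any other nodes that are dead-ended by design rather than by pruning.
    """
    skip = set(excluded)
    nodes = [v for v in G if v not in skip]
    n, m = len(nodes), G.number_of_edges()
    sources = [v for v in nodes if G.in_degree(v) == 0]       # indegree 0
    sinks = [v for v in nodes if G.out_degree(v) == 0]        # outdegree 0
    disconnected = [v for v in nodes if G.degree(v) == 0]     # both 0
    return {
        "source": len(sources) / n,
        "sink": len(sinks) / n,
        "disconnected": len(disconnected) / n,
        "r-out": sum(G.out_degree(v) for v in sources) / m,   # useless out-conns
        "r-in": sum(G.in_degree(v) for v in sinks) / m,       # useless in-conns
    }
```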
The **regional** metrics are calculated over linked subgraphs of the MGE, such as \(G=(G_{i},G_{i+1})\) (hence any pair of consecutive BGEs, i.e., each tripartite slice). They (1) are computationally more expensive, but (2) can better capture the connectivity of the networks. These topometrics are: the number of _motifs_ (of size \(3\)), the number of _connected components_ (also known as clusters), and the _edge connectivity_ (i.e., the number of "bridges", or edges to cut in order to disconnect the graph). Each topometric has been normalized by the number of edges present in the graph representation, to prevent the graph size from being a confounding variable for the topological study conducted in Section 4.
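A corresponding sketch for the regional metrics over one tripartite slice is given below; note that `triadic_census` and `bridges` can be expensive on large graphs, and the paper's exact motif and edge-connectivity definitions may differ slightly:

```python
def regional_topometrics(slice_graph):
    """Regional topometrics over one tripartite slice (G_i, G_{i+1}),
    each normalized by the number of edges. `triadic_census` counts all
    3-node subgraph types; the three disconnected triad types are
    excluded so that only proper size-3 motifs remain."""
    m = slice_graph.number_of_edges()
    census = nx.triadic_census(slice_graph)
    motifs = sum(v for k, v in census.items()
                 if k not in ("003", "012", "102"))
    clusters = nx.number_weakly_connected_components(slice_graph)
    bridges = sum(1 for _ in nx.bridges(slice_graph.to_undirected()))
    return {"motifs": motifs / m,
            "clusters": clusters / m,
            "edge-conn": bridges / m}
```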
The **stability** metrics are calculated in order to gain insights about how relevant (and stable) the graph edges are for a given task. These metrics, which we call \(SJ\) (Stability-Jaccard) and \(SO\) (Stability-Overlap), can be computed between any two graph representations of SNNs in two settings: 1) an _init_ setting, where the pruning algorithm, sparsity ratio, and dataset are fixed, while the initialization seed is changed, and 2) a _data_ setting, where the pruning algorithm, sparsity ratio, and initialization seed are fixed, while the input dataset is changed. These two metrics are computed over the graph edges using, respectively, the Jaccard index (Intersection over Union) and the overlap coefficient (Szymkiewicz–Simpson):
\[SJ=\frac{\sum_{i=0}^{N}\frac{|E_{i}^{1}\cap E_{i}^{2}|}{|E_{i}^{1}\cup E_{i}^{ 2}|}\times e_{i}}{\sum_{i=0}^{N}e_{i}}\qquad SO=\frac{\sum_{i=0}^{N}\frac{|E_{ i}^{1}\cap E_{i}^{2}|}{\min(|E_{i}^{1}|,|E_{i}^{2}|)}\times e_{i}}{\sum_{i=0}^{N}e_{i}} \tag{4}\]
where \(e_{i}=\frac{|E_{i}^{1}|+|E_{i}^{2}|}{2}\). A value of either \(SO\) or \(SJ\) close to \(1\) has a different meaning per setting. In the _init_ setting, it means that the pruning algorithm finds a topological structure which is not related to the values of the initialized weights. On the other hand, for the _data_ setting a value close to \(1\) means that the algorithm finds the exact same topological structure independently from the input dataset.
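Eq. (4) translates almost directly into code; a sketch, where each argument is a list of per-partition edge sets from the two graph representations being compared:

```python
def stability_metrics(edges_a, edges_b):
    """SJ and SO of Eq. (4). `edges_a` and `edges_b` are lists of
    per-partition edge sets from the two SNN graph representations."""
    sj = so = norm = 0.0
    for E1, E2 in zip(edges_a, edges_b):
        E1, E2 = set(E1), set(E2)
        e = (len(E1) + len(E2)) / 2                # layer weight e_i
        inter = len(E1 & E2)
        sj += inter / len(E1 | E2) * e             # Jaccard index (IoU)
        so += inter / min(len(E1), len(E2)) * e    # overlap coefficient
        norm += e
    return sj / norm, so / norm
```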
Figure 1: Illustration of the proposed unrolled input-aware BGE with \(I=3\times 3\times 3\) and convolutional parameters \((c_{out}=2,\ c_{in}=3,\ w_{ker}=2,\ h_{ker}=2,\ P=0,\ S=1)\). (a) and (b) show, respectively, the first and second convolutional steps and how the graph edges are generated, assuming that all the kernel parameters are unmasked. (c) shows the complete graph representation before pruning the kernel parameters. (d) shows the final graph representation after pruning the kernel parameters.
## 4 Experiments
In this section, we show how our proposed _input-aware unrolled GE_ provides meaningful topometrics from graph representations of SNNs. We then use the topometrics for two purposes: 1) to classify both pruning algorithms and their categories, and 2) to develop a regression analysis to capture which topometrics can predict the accuracy drop of SNNs for different sparsity ratios. The first setting allows us to understand what makes the SNNs produced by PaI, DST, and Layer-wise Random Pruning algorithms topologically different. The second setting allows us to understand how a certain sparse topology affects the architecture performance (from now on, we use "architecture" to refer to each CNN considered in our experimentation) and what may make some pruning algorithms perform better than others. It is worth mentioning that such analyses are based only on the unweighted graph representation of SNNs, and hence do not take into consideration the weight values, which could be highly dependent on the hyperparameters used in the training processes.
**Experimental Setup.** We generated a large pool of SNNs for our purposes. We use eleven different pruning algorithms: four Pruning at Initialization methods (Table 2), four Dynamic Sparse Training algorithms (Table 4), and three instances of Layer-wise Random Pruning (Table 3).
Since the graph size of the proposed GE is based on the size of the input data, we selected three datasets with the same data sizes, namely CIFAR-10, CIFAR-100 [28], and the downscaled Tiny-ImageNet (of size \(32\times 32\) pixels) [11]. We then used four different architectures designed for such input size, namely Conv-6 [26], Resnet-20 [22], Wide-Resnet-28-2 [53], and MobileNet-V2 [38]. We considered five sparsity values to cover a broad spectrum, namely \(s\in\{0.6,0.8,0.9,0.95,0.98\}\) (as in [13]). We trained each combination of \(\langle\)pruning algorithm, dataset, architecture, sparsity\(\rangle\) for \(3\) runs, obtaining a pool of \(1{,}980\) sparse architectures. More information on architectures, datasets, and hyperparameters is in Appendix A; the numerical results in terms of training accuracy (which correctly reproduce those reported in the literature) are in Appendix B.
The topometrics taken into consideration in the following experiments are the ones described in Section 3.3, namely: 1) _local_ metrics, which consist of graph properties over nodes, such as the fraction of source, sink, and disconnected nodes, plus metrics over edges, such as the fraction of removable connections (both in and out); 2) _regional_ metrics, which consist of the number of motifs of size \(3\) over our directed acyclic multipartite graphs, the edge connectivity (i.e., the percentage of bridge connections), and the number of clusters; 3) _stability_ metrics, which are the \(SJ\) and \(SO\) metrics for both the _init_ and the _data_ settings; and 4) _combination_, which considers all these metrics together. For the classification and regression analysis, we use XGBoost [9].
### Topological Classification
The first step towards understanding whether different pruning algorithms produce similar or diverse topologies is to check whether the graph representations can be correctly classified based on their topological features. This analysis has been conducted for classifying both pruning algorithms and their categories (PaI, DST, Layer-wise Random Pruning). To do that, we average the topological properties of the SNNs obtained over different runs for the same combination \(\langle\)architecture, sparsity, dataset\(\rangle\), in order to avoid overfitting, and then remove duplicate entries. For each type of topometrics, we tested the classification accuracy over two different data subsets: 1) **Sparsity-wise**, i.e., we conduct the classification separately for each sparsity ratio, and 2) **Architecture-wise**, i.e., we conduct the classification separately for each architecture.
| **Algorithm** | **Drop** | **Growth** | **Redistribution** | **Sparse Init.** |
| --- | --- | --- | --- | --- |
| DSR [34] | \(\lvert\theta\rvert\) | random | random (zero layers) | Uniform |
| SNFS [45] | \(\lvert\theta\rvert\) | momentum | momentum | Uniform |
| RigL [14] | \(\lvert\theta\rvert\) | gradient | ✗ | ERK |
| SET-ITOP [32] | \(\lvert\theta\rvert\) | random | ✗ | ERK |

Table 4: Dynamic Sparse Training Pruning Algorithms.

| **Pruning Method** | **Drop** | **Sanity Check** | **Training Data** | **Iterative** |
| --- | --- | --- | --- | --- |
| SNIP [35] | \(\lvert\theta\odot\nabla_{\theta}L(\theta)\rvert\) | ✗ | ✓ | ✗ |
| GraSP [8] | \(-\theta\odot H\nabla_{\theta}L(\theta)\) | ✗ | ✓ | ✗ |
| Synflow [44] | \(\frac{\partial\mathcal{R}}{\partial\theta}\odot\theta,\ \mathcal{R}=\mathbb{1}^{\top}\big(\prod_{l=1}^{L}\lvert\theta^{l}\rvert\big)\mathbb{1}\) | ✗ | ✗ | ✓ |
| ProsPr [1] | \(\lvert\nabla_{\theta_{0}}L(\theta_{k})\rvert\) | ✗ | ✓ | ✓ |

Table 2: Pruning at Initialization Algorithms.

| **Algorithm** | **Layer-wise Sparsity** \(s^{l}\ \forall\ l\in[0,N)\) |
| --- | --- |
| ER [12] | \(1-\frac{n^{l-1}+n^{l}}{n^{l-1}\times n^{l}}\) |
| ERK [14] | \(1-\frac{n^{l-1}+n^{l}+w^{l}+h^{l}}{n^{l-1}\times n^{l}\times w^{l}\times h^{l}}\) |

Table 3: Layer-wise Random Pruning Algorithms.
We also consider the case where all data are taken together (this case is referred to as "General"). We report the results in Table 5.
The results show that different pruning algorithm categories generate SNNs with _different topological features_, which can be effectively classified with an average cross-validation balanced accuracy of \(\sim 0.9\). It is also clear that the network topologies become more _separable_ (i.e., the classification accuracy increases) as the sparsity ratio increases. For the classification of the pruning algorithms, the accuracy is computed over \(11\) classes, which means fewer samples per class are available for training (compared to the classification by algorithm categories). However, on average, it is still possible to reach an accuracy of \(\sim 0.7\). For both Sparsity-wise and Architecture-wise classification, the best results are achieved when all the topometrics are used together. In Appendix D, we report the feature importance scores for both classification tasks in the "General" case.
### Performance Prediction
The previous analysis showed that, from a classification perspective, different pruning algorithms (and categories thereof) generate network topologies with distinguishable topological features. However, this analysis does not reveal how those features are associated with the **performance drop** of SNNs. It is already known that: 1) the performance drop is not always proportional to the sparsity ratio, and 2) the performance of SNNs for some \(\langle\)sparsity, architecture, dataset\(\rangle\) combinations can even exceed that of their dense counterparts [26; 25; 39]. Aiming at discovering associations between the topometrics and the performance drop, we conduct the following analysis. Starting from our pool of SNNs, we train a regression model and analyze its coefficient of determination \(R^{2}\) (computed as \(1-\frac{\sum_{i}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i}(y_{i}-\bar{y})^{2}}\)), which has been proved to be the most informative measure for discovering associations between input features and predicted variables [10]. For the regression, we use as inputs the same topometrics introduced before, and compute the performance drop as \(1-\frac{accuracy_{s}}{accuracy_{d}}\), where \(s\) and \(d\) respectively correspond to the sparse and dense version of any given architecture. We conduct this analysis separately for each dataset, because the performance drop is highly related to the degree of over-parametrization of the architecture relative to the dataset.
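A minimal sketch of this regression pipeline, assuming the topometrics have already been collected into a feature matrix (the XGBoost hyperparameters are illustrative, not those used in the paper):

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import cross_val_score

def topometric_r2(X_topo, acc_sparse, acc_dense, folds=5):
    """Cross-validated R^2 for predicting the performance drop
    1 - acc_s / acc_d from a matrix of topometric features."""
    y = 1.0 - np.asarray(acc_sparse) / np.asarray(acc_dense)
    model = XGBRegressor(n_estimators=300, max_depth=4)
    scores = cross_val_score(model, X_topo, y, scoring="r2", cv=folds)
    return scores.mean(), scores.std()
```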
Then, for each dataset, we study the association between the topometrics and the performance drop for both the Sparsity-wise and Architecture-wise cases. These two analyses allow us to investigate from a topological perspective: 1) what makes certain SNNs, given the same fraction of parameters w.r.t. the dense version, perform better than others, and 2) what topological properties, for a given architecture, make its sparse version perform worse.
The \(R^{2}\) coefficient values obtained using the _combination of all the proposed topometrics_ are shown in Table 6. The results for each single category of topometrics (_local_, _regional_ and _stability_) are available in Appendix D.2. Also for this study, the reported results are based on stratified cross-validation over \(100\) runs. To further assess the validity of our results, we also conducted an _ablation_ study. For the Sparsity-wise case, we calculated the \(R^{2}\) coefficient between architectures and the corresponding performance drops separately for each value of the sparsity ratio. For the Architecture-wise case, we calculated the \(R^{2}\) coefficient between sparsity ratios and performance drops separately for each architecture. It can be clearly noticed that our topological approach reaches an \(R^{2}\) coefficient much higher than that of the ablation studies, meaning that the proposed topometrics: 1) have a much higher predictive power than the sparsity ratio and architecture alone, and 2) particularly for the Architecture-wise case, add valuable information that is not captured when considering only the sparsity.
In addition, we analyzed the feature importance scores (using permutation importance) obtained during the regression analysis to find the most discriminative topometrics. Figure 2 (top row) shows the feature importance for the Sparsity-wise case. The results have been averaged over \(100\) runs and then averaged over the three datasets (the results for each dataset are reported in Appendix D.2). For the Sparsity-wise case, the feature importance follows a clear pattern as the sparsity ratio increases. Overall, the most discriminative feature turns out to be the number of motifs, i.e., the number of significant recurrent subgraphs of size \(3\) in the graph representation.
| **Classes** | **Topometrics** | **General** | **0.6** | **0.8** | **0.9** | **0.95** | **0.98** | **Conv-6** | **Resnet-20** | **Wide-Resnet-28-2** | **MobileNet-V2** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Category | Local | .75±.01 | .66±.02 | .71±.02 | .73±.03 | .81±.03 | .85±.03 | .76±.02 | .67±.03 | .79±.02 | .93±.02 |
| Category | Regional | .83±.02 | .74±.03 | .85±.03 | .86±.03 | .88±.03 | .91±.02 | .87±.02 | .73±.03 | .84±.03 | .93±.02 |
| Category | Stability | .76±.01 | .79±.03 | .83±.03 | .76±.04 | .80±.03 | .79±.04 | .76±.03 | .74±.03 | .72±.03 | .77±.04 |
| Category | **Combination** | **.95±.01** | **.89±.03** | **.93±.03** | **.92±.03** | **.93±.02** | **.95±.02** | **.94±.02** | **.89±.02** | **.91±.03** | **.96±.02** |
| Algorithm | Local | .39±.01 | .31±.02 | .39±.02 | .40±.03 | .40±.03 | .50±.04 | .36±.03 | .33±.03 | .34±.03 | .64±.03 |
| Algorithm | Regional | .49±.02 | .57±.04 | .57±.04 | .53±.03 | .55±.04 | .58±.04 | .60±.04 | .44±.03 | .52±.04 | .60±.04 |
| Algorithm | Stability | .57±.02 | .67±.03 | .70±.03 | .61±.04 | .63±.03 | .60±.04 | .58±.03 | .55±.03 | .55±.03 | .58±.03 |
| Algorithm | **Combination** | **.72±.02** | **.74±.03** | **.80±.03** | **.73±.03** | **.72±.03** | **.77±.03** | **.73±.03** | **.65±.03** | **.66±.04** | **.71±.03** |

Table 5: Cross-validation balanced accuracy, using stratified k-fold with \(k=5\), for both pruning algorithm and algorithm category classification. The values have been averaged over \(100\) runs. Columns 0.6–0.98 report the Sparsity-wise results; the architecture columns report the Architecture-wise results.
Another clear trend regards the different categories of topometrics: as the sparsity ratio increases, the _local_ metrics become more discriminative, while the _regional_ and _stability_ ones follow the inverse trend. Different sparsity ratios remove a different portion of parameters (and the corresponding edges in the graph representation) and disconnect the networks in different ways: for this reason, the feature importance scores "evolve" with the sparsity ratio. For instance, the importance of _clusters_ and _removable connections_ increases when the networks are sparser, so such metrics become effective in the regression analysis.
The same analysis was done for the Architecture-wise case, see Figure 2 (bottom row), which confirms the previous results. Also in this case, the number of motifs is the most discriminative feature, followed by edge connectivity (no. of bridges). It is also interesting to observe how in a smaller network such as Resnet-20 (whose number of parameters is roughly \(0.1\)–\(0.2\times\) that of the other considered networks), the most discriminative feature turns out to be the number of removable connections. Overall, the metrics that are most strongly associated with the performance drop are the ones related to network connectivity.
| **Dataset** | **Input features** | **0.6** | **0.8** | **0.9** | **0.95** | **0.98** | **Conv-6** | **Resnet-20** | **Wide-Resnet-28-2** | **MobileNet-V2** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | **Topometrics** | **.73±.04** | **.76±.07** | **.81±.05** | **.80±.09** | **.77±.07** | **.89±.03** | **.92±.02** | **.89±.05** | – |
| CIFAR-10 | Architectures | .02±.02 | .03±.02 | .11±.06 | .15±.07 | .49±.07 | – | – | – | – |
| CIFAR-10 | Sparsity ratios | – | – | – | – | – | – | – | – | – |
| CIFAR-100 | **Topometrics** | **.49±.07** | **.69±.07** | **.73±.05** | **.72±.08** | **.80±.05** | **.95±.02** | **.95±.01** | **.91±.02** | **.92±.04** |
| CIFAR-100 | Architectures | .01±.01 | .03±.02 | .40±.06 | .46±.07 | .40±.06 | – | – | – | – |
| CIFAR-100 | Sparsity ratios | – | – | – | – | – | .58±.03 | .86±.01 | .78±.03 | .64±.05 |
| Tiny-ImageNet | **Topometrics** | **.74±.06** | **.58±.04** | **.86±.03** | **.85±.05** | **.81±.05** | **.93±.02** | **.91±.02** | **.92±.03** | **.89±.05** |
| Tiny-ImageNet | Architectures | .23±.07 | .51±.08 | .51±.06 | .57±.04 | .55±.03 | – | – | – | – |
| Tiny-ImageNet | Sparsity ratios | – | – | – | – | – | .52±.06 | .78±.01 | .63±.04 | .69±.05 |

Table 6: \(R^{2}\) computed using stratified k-fold with \(k=5\) over \(100\) runs. Columns 0.6–0.98 report the Sparsity-wise results; the architecture columns report the Architecture-wise results.
Figure 2: Feature importance scores for the regression analysis in the **Sparsity-wise (top row)** and **Architecture-wise (bottom row)** case. Colors highlight the contribution of different sets of metrics: _local_ (red), _regional_ (green), _stability_ (blue).
## 5 Conclusions
In this paper, we presented an in-depth analysis of the topologies of SNNs and their association with the performance drop. To do that, we studied the SNNs from a graph theory perspective, relying on a novel _unrolled input-aware_ graph encoding which correctly reflects the convolutional steps and the links across layers. The main limitation of our proposed GE is the time and space complexity of the encoding creation: for each layer of the MGE, i.e., for each BGE, the time complexity is \(\mathcal{O}(c_{in}\times c_{out}\times step)\), while the space complexity is \(\mathcal{O}(|L|+|R|+|E|)\), where \(step=\left(\frac{h_{in}-h_{ker}}{S}+1\right)^{2}\) and \(|E|=c_{in}\times|W\setminus\{0\}|\times step\), assuming square feature maps and kernels.
On the other hand, we showed the practical applicability of our proposed GE by analyzing the most recent pruning algorithms from the literature, both in terms of classification and of prediction of the SNN performance based on topological metrics.
Our findings are in line with the No-Free-Lunch Theorem (NFLT) for pruning algorithms, i.e., _"No single method is SOTA at initialization. Depending on the network, dataset, and sparsity, there is a setting where each early pruning method reaches the highest accuracy."_ [25]. We have in fact shown how, for the sake of accuracy prediction, the importance of most topometrics changes depending on the sparsity ratio and architecture. Even though association does not imply causation, our results offer new insights into the NFLT, namely: 1) as shown in our classification analysis, different pruning algorithms are designed differently and, by construction, generate SNNs with different topological features; 2) as shown in our performance prediction analysis, the topological features that are positively associated with the performance depend on the specific sparsity ratio and architecture, hence there is no guarantee that subnetworks found by any given pruning algorithm will perform well regardless of the sparsity ratio and architecture. Taken together, our analysis therefore sheds new light on the reason why the NFLT may hold true. Moreover, while previous studies focused on the effects of the weights in SNNs, here we were able to investigate the properties and performance of SNNs by looking only at their topology, regardless of the weight values.
|
2305.11125 | Skin Lesion Diagnosis Using Convolutional Neural Networks | Cancerous skin lesions are one of the most common malignancies detected in
humans, and if not detected at an early stage, they can lead to death.
Therefore, it is crucial to have access to accurate results early on to
optimize the chances of survival. Unfortunately, accurate results are typically
obtained by highly trained dermatologists, who may not be accessible to many
people, particularly in low-income and middle-income countries. Artificial
Intelligence (AI) appears to be a potential solution to this problem, as it has
proven to provide equal or even better diagnoses than healthcare professionals.
This project aims to address the issue by collecting state-of-the-art
techniques for image classification from various fields and implementing them.
Some of these techniques include mixup, presizing, and test-time augmentation,
among others. Three architectures were used for the implementation:
DenseNet121, VGG16 with batch normalization, and ResNet50. The models were
designed with two main purposes. First, to classify images into seven
categories, including melanocytic nevus, melanoma, benign keratosis-like
lesions, basal cell carcinoma, actinic keratoses and intraepithelial carcinoma,
vascular lesions, and dermatofibroma. Second, to classify images into benign or
malignant. The models were trained using a dataset of 8012 images, and their
performance was evaluated using 2003 images. It's worth noting that this model
is trained end-to-end, directly from the image to the labels, without the need
for handcrafted feature extraction. | Daniel Alonso Villanueva Nunez, Yongmin Li | 2023-05-18T17:15:08Z | http://arxiv.org/abs/2305.11125v1 | # Skin Lesion Diagnosis Using Convolutional Neural Networks
###### Abstract
Cancerous skin lesions are the most common malignancy detected in humans and, if not detected at an early stage, may lead to death. Therefore, it is extremely crucial to have access to accurate results at early stages to optimize the chances of survival. Unfortunately, accurate results are only obtained by highly trained dermatologists, who are not accessible to most people, especially in low-income and middle-income countries. Artificial Intelligence (AI) seems to tackle this problem, because it has proven to provide diagnoses equal to or better than those of healthcare professionals.
This project collects state-of-the-art techniques for the task of image classification from other fields and implements them. Some of these techniques include mixup, presizing, and test-time augmentation, among others. These techniques were implemented in three architectures: DenseNet121, VGG16 with batch normalization, and ResNet50. Models were built with two purposes: first, to classify images into seven categories - melanocytic nevus, melanoma, benign keratosis-like lesions, basal cell carcinoma, actinic keratoses and intraepithelial carcinoma, vascular lesions, and dermatofibroma; second, to classify images into benign or malignant. Models were trained using 8012 images, and their performance was evaluated using 2003 images. The models are trained end-to-end from image to labels and do not require the extraction of hand-crafted features.
keywords: Skin cancer, skin lesion, medical imaging, deep learning, Convolutional Neural Networks, DenseNet, ResNet, VGG
## 1 Introduction
Skin cancer is the most common malignancy detected in humans. Some categories of cancerous skin lesions are dangerous because they can potentially spread all over the human skin and lead to death if not cured in time. One example is melanoma, which is the most dangerous type of skin cancer; it is expected to appear in 97,920 people in the United States by the end of
2022, according to the U.S. Cancer Statistics Centre [42]. The 5-year survival rate is 98%, but it drops to 18% once the cancer reaches other organs. For all these reasons, it is important to diagnose malignant skin lesions at an early stage, to permit treatment before metastasis occurs. Malignant skin lesions are first diagnosed by visual inspection, potentially followed by dermoscopic analysis, a biopsy, and histopathological examination [19]. Dermoscopy is a non-invasive skin imaging approach which permits visualizing magnified views of the subdermal structures and surface of the skin. Non-invasive procedures are always preferred because they do not destroy the lesion and increase the possibility of monitoring its evolution [31]. Examination of these images is time-consuming and tedious, and requires domain knowledge. Furthermore, the precision and reproducibility of the diagnosis are highly dependent on the dermatologist's experience. Some studies show that diagnostic precision on dermoscopic imaging decreases when it is used by inexperienced physicians [3]. To help doctors make more accurate decisions, computer-aided diagnosis (CAD) systems for skin lesions are used [36]. These CAD systems help doctors make better decisions by detecting lesion borders, removing noise, among other functions. However, the process of making an appointment to see the doctor for the skin lesion examination, then proceeding with the dermoscopic procedure, and waiting for the CAD system to extract features for assisting the dermatologists in examining the type of lesion, is long. What if there existed a way to access highly experienced dermatologists from the comfort of your home that could provide highly accurate results by just using your phone?
Artificial intelligence has proven able to surpass human capabilities. For instance, studies from University Hospitals Birmingham showed that AI systems are able to outperform healthcare professionals when it comes to diagnosing diseases [30]. Furthermore, it is expected that 4.3 billion people will own a smartphone by 2023 (Alsop, n.d.). Therefore, being able to place these AI systems on people's phones would immensely benefit millions of people who do not have access to highly trained physicians. In 2017, Esteva et al. reported the first work on the usage of deep learning convolutional neural networks (CNNs) to classify malignant lesions on the skin that could outperform 21 board-certified dermatologists [19]. Esteva et al.'s work was innovative because it did not need extensive pre-processing of images such as segmentation or extraction of visual features. This study presents a system, built using state-of-the-art techniques in deep learning, that does not require the extraction of hand-crafted features and is able to classify seven types of skin lesions as well as classify lesions into benign and malignant.
The aim of this project is to build a skin lesion image classifier that is able to identify seven types of skin lesions and whether a lesion is benign or malignant, using state-of-the-art techniques and without requiring hand-crafted features. The rest of the paper is organised as follows: A literature review is provided in Section 2. The methods are described in Section 3. Experiments and results
analysis are presented in Section 4. Conclusions are drawn in Section 5.
## 2 Background
Artificial intelligence has revolutionized the world of healthcare and medical imaging applications, where various methods such as probabilistic modelling [29, 27, 28], graph cut [41, 38, 37, 40, 39], and level-set [45, 44, 46, 48, 47, 13, 8, 9, 10, 12, 14, 6, 11, 7] have been investigated. Recently, convolutional neural networks (CNNs), a technique used in deep learning, have been found to bring the most progress in the automation of medical imaging diagnostics. A plethora of studies have shown that CNN models match or exceed doctors' diagnostic performance [33, 34, 35].
For example, Esteva et al. built a model whose performance was on par with 21 board-certified dermatologists. All skin images were biopsy-proven and tested in two binary classification cases: keratinocyte carcinoma versus benign seborrheic keratoses, and melanoma versus melanocytic nevi. The model obtained an AUC of 0.96 for carcinoma and an AUC of 0.94 for melanoma. To build this model, Esteva et al. collected images from three sources: the Edinburgh Dermofit Library, the Stanford Hospital, and the ISIC Dermoscopic Archive [19]. In the test and validation datasets, blurry and long-distance images were deleted, yet they were kept during training. The InceptionV3 architecture with pre-trained weights from ImageNet was used. During training, a global learning rate of 0.001 and a decay factor of 16 every 30 epochs were applied. RMSProp was the optimizer used for the model. Images were randomly rotated between 0° and 359°. Random cropping and vertical flipping with a probability of 0.5 were also used during data augmentation. After augmentation, the number of images increased by a factor of 720.
Haenssle et al. [22] trained a model whose performance was compared with 58 international dermatologists. The model was built to classify images into melanocytic nevi or melanoma. The study showed that the CNN model obtained better performance than most physicians. The deep learning model obtained a specificity of 82.5%, which is 6.8% higher than the average obtained by the dermatologists. Google's InceptionV4 was the architecture used to train this model. Han et al. [23] used the ResNet-152 architecture to build a model to detect 12 different skin diseases: dermatofibroma, pyogenic granuloma, melanocytic nevus, seborrheic keratosis, intraepithelial carcinoma, basal cell carcinoma, wart, hemangioma, lentigo, malignant melanoma, actinic keratosis, and squamous cell carcinoma. The model was compared with 16 dermatologists and obtained an AUC of 0.96 for melanoma.
Brinker et al. [4] utilized a ResNet50 architecture pre-trained on ImageNet to
train on images from the ISIC dataset for a binary classification task to predict melanoma versus nevus. A sensitivity of 92.8% and a specificity of 61.1% were achieved, compared to the 89.4% mean sensitivity and 64.4% mean specificity obtained by 145 board-certified dermatologists from 12 German university hospitals. To train the model, two datasets were used: the HAM10000 dataset and the public ISIC image archive. High learning rates were used at the end of the architecture while small learning rates were used at the beginning; this technique is called differential learning rates. The scheduler used to train the model was the cosine annealing method. The experiment consisted of 13 epochs.
Fujisawa et al. [20] built a model that outperformed 13 board-certified dermatologists. The model was built to classify skin lesions into malignant and benign. The accuracy of the model was 92.4%, the sensitivity was 96.3%, and the specificity was 89.9%. Meanwhile, the dermatologists obtained an overall accuracy of 85.3%. The architecture chosen was GoogLeNet with pre-trained weights. The dataset utilized belongs to the University of Tsukuba Hospital. Images were trimmed to 1000x1000 pixels from the centre. Each image was randomly rotated by 15 degrees and saved as a different figure, which led to an increase in the number of images by a factor of 24. When the images were passed through the CNN, blur filters and changes in brightness were applied.
In other studies, Eltayef et al. [18, 15, 16, 17] reported various methods combining particle swarm optimization and Markov random fields for lesion segmentation in dermoscopy images.
As can be seen, a plethora of studies show that deep learning can match or outperform physicians when it comes to the detection of skin lesions. However, not all studies on the usage of CNNs for skin disease detection compared their results with physicians. Nonetheless, they can still provide useful information regarding the techniques used to train the models.
Majtner et al. [32] achieved 0.801 accuracy using VGG16, 0.797 accuracy using GoogleNet, and 0.815 accuracy by ensembling both of them. The models were built to classify the same seven types of skin lesions that this project addresses. The ISIC 2018 dataset was used. For both architectures, only the last layers were trained. To deal with the unbalanced data, each image was horizontally flipped, which led the dataset to increase by a factor of two. Then, a rotation factor was assigned to each image to balance the data.
Xie et al. [49] obtained 94.17% accuracy using the ISIC 2018 dataset to classify images into benign or malignant, which is a binary classification task. The best AUC reported was 0.970. The data to train this model was obtained from the General Hospital of the Air Force of the Chinese People's Liberation Army. Different methods to extract features from the images were used. For instance, the authors built a self-generating neural network (SGNN)
segmentation model to remove hairs from the image. Shape features were not used since many images are incomplete. Five colour features were used: RGB features, LUV histogram distances, colour diversity, and centroidal distances. Five statistical texture descriptors were computed from a grey-level co-occurrence matrix: regional energy, entropy, contrast, correlation, and inverse difference moment. Finally, 6 concavity features were calculated using the lesion convex hull, which describes degrees of concavity. To train the model, three artificial neural networks were ensembled. The experiment ran for a maximum of 1000 epochs, sigmoid activation functions were used, and a learning rate of 0.7 was set. Bisla et al. obtained 81.6% accuracy when using the ISIC 2017 dataset to predict three classes of skin lesion: melanoma, nevus, and seborrheic keratosis [5]. A hair removal algorithm was implemented for cleaning the images. The U-Net algorithm was utilized to segment the lesion area from the skin. To tackle the class imbalance problem, generative adversarial networks (GANs) were utilized to generate more images for the classes with fewer instances. Horizontal flipping, vertical flipping, and random cropping were the methods used for data augmentation. The ResNet50 architecture pre-trained on ImageNet was used.
Almaraz-Damian et al. [2] used transfer learning with different architectures for a binary classification model (benign vs malignant) on the HAM10000 dataset, with the architectures VGG16, VGG19, MobileNet v1, MobileNet v2, ResNet50, DenseNet-201, Inception V3, and Xception. A mix of handcrafted and deep learning features was employed. To balance the data, a SMOTE oversampling technique was implemented.
Jojoa et al. [26] trained a pre-trained ResNet152 on the ISIC 2017 dataset to classify skin cancer into benign or malignant. Images were pre-processed by extracting the lesion from the image using Mask R-CNN. Among all the experiments tweaking the learning rates and the number of epochs, the best model obtained an accuracy of 90.4%.
Aljohani et al. [1] used transfer learning to predict melanoma or no melanoma on the ISIC 2019 dataset, experimenting with different architectures including DenseNet201, MobileNetV2, ResNet50V2, ResNet152V2, Xception, VGG16, VGG19, and GoogleNet. All experiments were run with 50 epochs, but early stopping was used. To overcome the imbalanced dataset, the ImageDataGenerator from TensorFlow was used. The best architecture was GoogleNet, reaching 76% accuracy after 29 epochs.
## 3 Methods
In this project, we have selected three different types of network architectures to perform the task of skin lesion recognition: the VGG, ResNet and DenseNet.
### VGG
VGG refers to a group at Oxford University named the Visual Geometry Group [43]. This architecture was built with the aim of understanding the relationship between the depth of an architecture and its accuracy. In order to allow a deep structure while avoiding a large quantity of parameters, small 3x3 convolutional kernels were used. At the end of the architecture, three fully connected layers are placed. There are two VGGs normally used: VGG16, which has 16 layers, and VGG19, which has 19 layers. VGG16 has 138 million trainable parameters. This project uses a variant of VGG16, namely VGG16 with batch normalization, because it allows faster training and makes the layers' inputs more stable by re-centring and re-scaling them.
### ResNet
When deep networks are able to commence converging, a degradation problem appears: with the architecture depth expanding, accuracy saturates and then degrades rapidly. Surprisingly, overfitting is not the root of this degradation, and attaching extra layers to an appropriately deep model results in a higher training error [24]. Nevertheless, this problem is tackled by the introduction of the deep residual learning framework, which is the building block of the ResNet architecture. Instead of passing an input x through a block F(x) and then through a ReLU activation function, the output of F(x) is added to the input before it is passed to the next ReLU activation function. There are many types of ResNet. Some of its variants are ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152. The number refers to the depth (number of layers) of the network.
This project will be using both ResNet18 and ResNet50. ResNet18 has around 11 million trainable parameters whereas ResNet50 has over 23 million.
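As an illustration of the residual connection just described, here is a minimal PyTorch sketch of a basic residual block (illustrative; not the exact torchvision implementation used by these models):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Basic residual block: the input x is added to F(x) *before*
    the final ReLU activation."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # skip connection, then ReLU
```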
### DenseNet
Improving a model architecture is not as easy as just stacking new layers, because deeper stacks normally lead to the vanishing or exploding gradient problem [21]. Nonetheless, DenseNet addresses this problem by simplifying the connectivity between layers, which avoids learning redundant feature maps. Furthermore, feature reuse is achieved by taking full advantage of this connectivity; this is known as collective knowledge [25]. The difference between DenseNet and ResNet is that, rather than adding the input to the output feature maps, they are concatenated. Just like ResNet is composed of residual blocks, DenseNet is composed of dense blocks.
There exist four variants of DenseNet: DenseNet121, DenseNet161, DenseNet169, and DenseNet201. For this project, DenseNet121 is used, which is composed of 120 convolutions, four average pooling layers, and 8 million trainable parameters.
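To make the contrast with the residual block explicit, the following is a minimal sketch of one layer of a dense block, where `growth_rate` (the number of new feature maps per layer) is an illustrative parameter:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseLayer(nn.Module):
    """One layer of a dense block: new feature maps are *concatenated*
    with the input instead of added, so every layer sees the feature
    maps of all preceding layers."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, 3,
                              padding=1, bias=False)

    def forward(self, x):
        new = self.conv(F.relu(self.bn(x)))
        return torch.cat([x, new], dim=1)  # concatenation, not addition
```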
## 4 Experiments
### Dataset
For this project, the HAM10000 dataset is used ([https://www.kaggle.com/datasets/kmader/skin-cancer-mnist-ham10000](https://www.kaggle.com/datasets/kmader/skin-cancer-mnist-ham10000)). The dataset consists of 10,015 dermoscopic images belonging to seven categories which can be seen in Table 1.
The downloaded file from Kaggle contains all the images in two folders named "HAM10000_images_part_1" and "HAM10000_images_part_2". It also contains a csv file named "HAM10000_metadata.csv" which maps the name of each image file to its label. The csv file has metadata for each image, such as the region of the lesion (e.g., ear, head), the age of the patient, and sex. An empty folder named "classes" was created containing subfolders with the name of each label. Then, the images from "HAM10000_images_part_1" and "HAM10000_images_part_2" were mapped to the subfolders of the folder "classes", with the help of the csv file "HAM10000_metadata.csv". This was done because that is the format used by FastAI, the framework chosen for this project.
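A sketch of this folder mapping, assuming the standard `image_id` and `dx` columns of the HAM10000 metadata file and the local paths named above:

```python
import shutil
from pathlib import Path
import pandas as pd

meta = pd.read_csv("HAM10000_metadata.csv")
src_dirs = [Path("HAM10000_images_part_1"), Path("HAM10000_images_part_2")]
classes_dir = Path("classes")

for _, row in meta.iterrows():
    label_dir = classes_dir / row["dx"]            # e.g. classes/nv
    label_dir.mkdir(parents=True, exist_ok=True)
    fname = row["image_id"] + ".jpg"
    for d in src_dirs:                             # image may be in either part
        if (d / fname).exists():
            shutil.copy(d / fname, label_dir / fname)
```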
The data was split into an 80% training and a 20% validation set, which means 8012 images for training and 2003 images for validation. A seed equal to 101096 was used to make this work reproducible. The dataset was only split into
training and validation sets because some classes have too few images to support a third split for a test set.
### Data Augmentation
Many deep learning practitioners apply data augmentation operations after resizing images down to a standard size. However, this practice can lead to spurious empty zones, degraded data, or both. One example is when an image is rotated 30 degrees and the corners are filled with emptiness, which brings no added information to the model. To tackle this issue, presizing adopts two steps. In the first step, presizing resizes the images to dimensions much larger than the training target dimensions. This gives images a spare margin that permits further augmentation transforms on the inner regions without generating empty zones. In the second step, it joins all common augmentation operations (including a resize to the final target dimensions) and computes them on the GPU.
For this work, presizing is used for data augmentation. First, all images are expanded to a size of 460x460 pixels in RAM. Then, on the GPU, for each batch, different augmentations are first applied to the images; next, the images are resized to 224x224 pixels and then normalized. This means that for every batch, the model sees a different combination of transformations.
| **Label** | **Description** | **Amount** |
| --- | --- | --- |
| nv | melanocytic nevi | 6,705 |
| mel | melanoma | 1,113 |
| bkl | benign keratosis-like lesions (solar lentigines / seborrheic keratoses and lichen-planus like keratoses) | 1,099 |
| bcc | basal cell carcinoma | 514 |
| akiec | actinic keratoses and intraepithelial carcinoma / Bowen's disease | 327 |
| vasc | vascular lesions (angiomas, angiokeratomas, pyogenic granulomas and haemorrhage) | 142 |
| df | dermatofibroma | 115 |

Table 1: Skin Lesion Labels and Amounts
These augmented images disappear at the end of each batch; in the next batch, a different combination of transformations is applied. This process is repeated until training ends. Therefore, if a batch size of 64 is used to run 8012 images for 100 epochs, the model will see approximately 12,500 differently augmented batches; however, no new images are stored in RAM or on disk.
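In FastAI, this two-step presizing pipeline can be expressed with a `DataBlock`; a minimal sketch under the settings described above (the exact augmentation list used in this project may differ, and the ImageNet normalization statistics are an assumption):

```python
from fastai.vision.all import *

dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    get_y=parent_label,                               # label = subfolder name
    splitter=RandomSplitter(valid_pct=0.2, seed=101096),
    item_tfms=Resize(460),                            # step 1: enlarge in RAM
    batch_tfms=aug_transforms(size=224)               # step 2: augment + resize
               + [Normalize.from_stats(*imagenet_stats)],  # on the GPU
).dataloaders(Path("classes"), bs=64)
```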
### Results and Analysis
The classification results, measured by precision, recall, and F1-score, for the three network architectures DenseNet, VGG, and ResNet are shown in Figure 1. In addition, the confusion matrices over all the skin lesion labels for each individual network model are presented in Figure 2.
## 5 Conclusions
In this project, we have built skin lesion image classifiers to identify seven types of skin lesions and whether a lesion is benign or malignant. This task was accomplished by using state-of-the-art techniques in convolutional neural networks for image classification. The best model is the one that obtains the best recalls for the labels akiec, bcc, and mel, and the best sensitivity when the model is tweaked for binary classification. The best model found was ResNet50, which achieved a recall for label akiec of 0.67, a recall for label bcc of 0.91, and a recall for label mel of 0.74. When the model was tweaked for binary classification, ResNet50 at a threshold of 0.06 obtained the best results, achieving a sensitivity of 0.9235. In addition, all models' performances were boosted by using test-time augmentation. Here, ResNet50 again obtained the best results with a recall of 0.69 for label akiec, a recall of 0.93 for label bcc, and a recall of 0.76 for label mel. However, when the model was tweaked for binary classification, the model that achieved the best results was VGG16 at threshold 0.05, with a sensitivity of 0.9540.
Figure 1: Classification results measured by precision, recall and F1-score for the three network architectures of DenseNet, VGG and ResNet. |
2306.00578 | Does Black-box Attribute Inference Attacks on Graph Neural Networks
Constitute Privacy Risk? | Graph neural networks (GNNs) have shown promising results on real-life
datasets and applications, including healthcare, finance, and education.
However, recent studies have shown that GNNs are highly vulnerable to attacks
such as membership inference attack and link reconstruction attack.
Surprisingly, attribute inference attacks has received little attention. In
this paper, we initiate the first investigation into attribute inference attack
where an attacker aims to infer the sensitive user attributes based on her
public or non-sensitive attributes. We ask the question whether black-box
attribute inference attack constitutes a significant privacy risk for
graph-structured data and their corresponding GNN model. We take a systematic
approach to launch the attacks by varying the adversarial knowledge and
assumptions. Our findings reveal that when an attacker has black-box access to
the target model, GNNs generally do not reveal significantly more information
compared to missing value estimation techniques. Code is available. | Iyiola E. Olatunji, Anmar Hizber, Oliver Sihlovec, Megha Khosla | 2023-06-01T11:49:43Z | http://arxiv.org/abs/2306.00578v1 | # Does Black-box Attribute Inference Attacks on Graph Neural Networks Constitute Privacy Risk?
###### Abstract
Graph neural networks (GNNs) have shown promising results on real-life datasets and applications, including healthcare, finance, and education. However, recent studies have shown that GNNs are highly vulnerable to attacks such as the membership inference attack and the link reconstruction attack. Surprisingly, attribute inference attacks have received little attention. In this paper, we initiate the first investigation into attribute inference attacks, where an attacker aims to infer sensitive user attributes based on her public or non-sensitive attributes. We ask whether black-box attribute inference attacks constitute a significant privacy risk for graph-structured data and their corresponding GNN models. We take a systematic approach to launch the attacks by varying the adversarial knowledge and assumptions. Our findings reveal that when an attacker has black-box access to the target model, GNNs generally do not reveal significantly more information compared to missing value estimation techniques. Code is available.
Keywords: Attribute Inference Attack, Privacy Risk, Graph Neural Network.
## 1 Introduction
Several kinds of real-world data can be modeled as graphs. For instance, graphs are widely used to represent interactions between biological entities [16, 3, 15], to model the interaction between users in social networks [14, 9], and to design recommendation systems [5, 23]. Graph neural networks (GNNs) are a type of machine learning model specifically designed to handle graph-structured data. They have demonstrated effectiveness in diverse graph-based learning tasks, including node classification, link prediction, and community detection. GNNs leverage recursive aggregation of node information from neighboring nodes to generate informative graph representations [9]. However, despite their usefulness, GNNs can pose privacy threats to the data they are trained on. Multiple studies have
shown that GNNs are more vulnerable to privacy attacks than traditional machine learning methods. These attacks include membership inference attacks [13, 4], link stealing attacks [7, 25, 14], backdoor attacks [26], and adversarial attacks [22, 30]. One main reason for the high vulnerability of GNNs to attacks is their use of graph topology during training, which can lead to the leakage of sensitive information [13, 12]. However, attribute inference attacks (AIA) have been under-explored for GNNs. In AIA, the attacker's goal is to infer the sensitive attribute value of a node via access to the target model. This poses a severe privacy risk. For instance, if a health insurance company knows the disease status of a potential client, they may discriminate against them and increase their insurance premium. We take the first step of systematically investigating the privacy risks posed by AIA on GNNs under the practical black-box access assumptions.
As machine learning as a service (MLaaS) becomes more prevalent and GNNs are used in privacy-sensitive domains, it is essential to consider the privacy implications of black-box attribute inference attacks on GNNs. In this scenario, a user sends data to the trained model via an API and receives predictions. The user does not have access to the internal workings of the model. Motivated by this, we ask the question: _what is the privacy implication of black-box attribute inference attacks on GNNs?_ To investigate this issue, we construct several attacks in a practical scenario where an attacker has black-box access to the trained model.
We develop two attribute inference attack (AIA) methods, namely the _attribute inference attack via repeated query of the target model_ (Fp-ma) and the _feature propagation-only attribute inference attack_ (Fp). In the Fp attack, we reconstruct the missing sensitive attributes by updating each attribute with the attribute values of neighboring nodes via a feature propagation algorithm [17]. The Fp-ma attack, on the other hand, employs the _feature propagation_ algorithm iteratively for each candidate node (a node with sensitive attributes). It queries the target model with the estimated attribute, obtains a model confidence, and compares it to a threshold to determine whether the inferred attribute is the true attribute. Additionally, we propose a _shadow-based attribute inference attack_ (Sa) that assumes an attacker has access to a shadow dataset and a shadow model similar to the target model.
The contributions of this paper can be summarized as follows: (i) we develop two black-box attribute inference attacks on GNNs and a relaxed shadow attack. (ii) While most AIAs focus on inferring a single binary attribute, our attacks go beyond this limitation: our approach enables the inference of single and multiple binary attributes, as well as continuous attribute values. (iii) Through experimentation and extensive discussion, we show that having black-box access to a trained model may not result in a greater privacy risk than using missing value estimation techniques in a practical scenario.
## 2 Related Works
Several recent studies have demonstrated the vulnerabilities of GNNs to adversarial attacks [2, 20, 22, 26, 30]. These attacks encompass various techniques aimed
at deceiving the GNN model, such as altering node features or structural information. Additionally, researchers have explored attacks which aim to steal links from the trained model [7, 28] and extracting private graph information through feature explanations [14]. Property inference attacks have also been launched on GNNs [29], where an adversary can infer significant graph statistics, reconstruct the structure of a target graph, or identify sub-graphs within a larger graph. Another type of attack, membership inference, distinguishes between inputs the target model encountered during training and those it did not [13, 4]. Model inversion attacks aim to infer edges of a graph using auxiliary knowledge about graph attributes or labels [27]. The vulnerabilities of GNNs extend beyond these attack techniques, with model extraction attacks [21] and stealing attacks [19] posing additional risks.
Collectively, these studies provide valuable insights into the various vulnerabilities that GNNs are prone to. To the best of our knowledge, no attribute inference attack has been proposed to infer sensitive attributes (node features) from queries to a target model (black-box access). In addition, previous AIAs proposed on tabular data are not applicable to graphs [6, 11, 24, 8]. This is because, first, all these attacks assume that the data points (i.e., the nodes) are independent and, second, they only consider binary attributes. This is not the case for graphs, where nodes can be linked to other nodes by some graph properties, node attributes can be highly correlated, and attributes may have continuous values. Moreover, all other attacks can only infer one sensitive attribute at a time. Here, we propose multi-attribute inference attacks for GNNs. However, it should be noted that [4] performed a preliminary white-box AIA, and their method assumes that the attacker has access to node embeddings and sensitive attributes from an auxiliary graph (shadow data). The attacker trains a supervised attack to map embeddings to sensitive attributes using the shadow data on the shadow model. The learnt mapping is then used to infer the attributes from the publicly released target embeddings. It is important to mention that their approach focuses on the white-box setting, where the attacker has access to the internal workings (node embeddings) of the target model, and also requires the strong assumption of shadow data and a shadow model from the same distribution.
## 3 Overview
**Attribute Inference Attack (AIA).** The task in AIA is to infer sensitive attributes (or features) for a number of nodes. These attributes could hold any possible kind of value. In this paper, we consider datasets having two categories of attribute values: binary and continuous. We note that AIA in its traditional form has only been used for inferring binary attribute values; we take the first step of also evaluating on continuous attribute values. We define the task of AIA as follows:
Definition 1 (AIA): _Let some GNN model \(\Phi\) be trained on a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with node feature matrix \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d}\), and let \(\mathbf{X}^{*}\in\mathbb{R}^{|\mathcal{V}|\times d}\) be the partial feature matrix_
_such that the sensitive attributes of a subset of nodes \(\mathcal{V}^{\prime}\subset\mathcal{V}\) are masked/hidden. Given access to \(\mathbf{X}^{*}\) and black-box access to the GNN model \(\Phi\), the goal of the attacker is to reconstruct the hidden entries of \(\mathbf{X}^{*}\)._
#### Black-box vs. white-box access.
Here, we distinguish between two types of model access: black-box and white-box. In a black-box access scenario, an adversary can query the model and receive outputs for the query nodes, but does not have knowledge of the model's internal workings. On the other hand, in a white-box scenario, the attacker possesses knowledge of the target model's internal mechanisms, including its weights and embeddings. However, acquiring such white-box access can be challenging in practice due to intellectual property concerns. Hereafter, we focus on the practical black-box access scenario which is commonly encountered in MLaaS.
### Threat Model
#### Adversary's background knowledge.
In AIA, the adversary has access to the trained (target) model, knows the non-sensitive attribute values, and knows the data distribution (an assumption we later relax). We make no assumption about the adversary's access to the graph structure but rather vary such access experimentally. Also, the attacker might be interested in inferring multiple attributes. Concisely, we characterize the adversary's background knowledge along two dimensions:
_Available nodes._ This quantifies the number of nodes with known sensitive attribute values that are available to the attacker. Note that these are different from the sensitive attributes she wants to infer. This corresponds to Setting-1 and Setting-2 in our experiments. In **Setting-1**, the attacker only knows the non-sensitive attributes, while in **Setting-2**, she knows 50% of the sensitive values and all non-sensitive attributes.
_Graph structure._ Whether the attacker has access to the graph structure used in training the target model or not. In the case where she has no access to the graph structure, she artificially creates a KNN graph from the candidate nodes.
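Such a fallback graph can be built with an off-the-shelf nearest-neighbor routine. The sketch below is a minimal illustration, assuming feature-space similarity; the value of \(k\) and the connectivity mode are our assumptions rather than values reported in this paper.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def build_knn_graph(X_candidate, k=5):
    # Connect each candidate node to its k most feature-similar neighbors;
    # k=5 is an illustrative assumption, not a value reported in the paper.
    A = kneighbors_graph(X_candidate, n_neighbors=k, mode="connectivity")
    A = A.maximum(A.T)  # symmetrize so that edges are undirected
    return np.asarray(A.todense())
```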
## 4 Proposed Attacks
Towards this, we develop two different attacks. The first attack, which we call the _attribute inference attack via repeated query of the target model_ (Fp-ma), involves querying the target model multiple times with attributes produced by a feature propagation algorithm. The attacker iteratively performs feature propagation, queries the target model with the estimated attributes, and evaluates the result using a confidence score. We develop a variant of this attack, Ri-ma, that initializes the sensitive attribute with a random initialization mechanism and also iteratively queries the target model.
The second attack, which we call the _feature propagation-only attribute inference attack_ (Fp), relies on a single execution of a feature propagation algorithm. The result obtained from this feature propagation is subsequently considered the final outcome of the attack. As a baseline, we replace the missing attributes with random values; we refer to this as the Ri attack. Lastly, we also propose a _shadow-based attribute inference attack_ (Sa) that assumes the attacker has access to a shadow dataset and a corresponding shadow model similar to the target model. We note that this attack has a major limitation in that such a shadow dataset and model may be difficult to obtain in practice, but we include it as a relaxed version of our black-box AIA attacks.
Note that all attack methods except Sa do not utilize any prior information about the labels.
### Data Partitioning
For all attacks except Sa, the dataset is partitioned into train/test sets, which are utilized to train and evaluate the target model. The candidate set is chosen from the training set and includes the nodes whose missing sensitive attributes the attacker wants to infer.
In the case of the Sa attack, the dataset is initially divided into two parts, namely the target and shadow datasets. The train and test sets are then derived from each dataset to train and evaluate the respective target and shadow models. The candidate set for evaluating the attacks is selected from the training set of the target dataset. All attacks are evaluated using the candidate set \(\mathbf{X}^{*}\).
### Attack Modules
In the following, the major components of the attacks are introduced: feature propagation and the confidence score. The feature propagation algorithm is used by the attacks to estimate the sensitive attributes, while the confidence score acts as a threshold for measuring the correctness of the inferred attributes.
**Feature Propagation** The feature propagation algorithm is a method for reconstructing missing features by propagating the known features over the graph. As shown in Algorithm 1, the algorithm takes as input a set of nodes \(\mathbf{X}^{*}\) with missing attributes, the adjacency matrix (either given or constructed via KNN), and the number of iterations, which is the stopping criterion for the algorithm.

Feature propagation starts by computing a normalized Laplacian of the graph \(\mathbf{L}\) (line 1), then initializes only the missing attributes in \(\mathbf{X}^{*}\) with random values. This randomly initialized \(\mathbf{X}^{*}\) is denoted as \(\mathbf{X}^{*}_{\mathbf{R}}\) (line 3). \(\mathbf{X}^{*}_{\mathbf{R}}\) is propagated over the graph by multiplying it with the computed graph Laplacian \(\mathbf{L}\) (line 4). This step is repeated for multiple iterations until convergence [17]. Since convergence might be difficult to attain on a real-world dataset, we fix the number of iterations experimentally. If the attribute values are binary, the updated values of \(\mathbf{X}^{*}_{\mathbf{R}}\) are rounded such that any value above 0.5 is assigned the value 1 and 0 otherwise (line 7). For a dataset with continuous values, this step is omitted. Then, the values of the known attributes in \(\mathbf{X}^{*}\) (attributes that were not considered missing at the start of feature propagation) are reassigned to the corresponding nodes in \(\mathbf{X}_{\mathbf{R}}^{*}\) (line 9). Since feature propagation reconstructs missing values by diffusion from the known neighbors, the non-missing attributes are always reset to their true values after each iteration.
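For concreteness, a minimal NumPy sketch of this module is given below. The symmetrically normalized propagation operator, the iteration count, and the placement of the rounding step inside the loop are our assumptions (the propagation operator stands in for the normalized Laplacian \(\mathbf{L}\), and the paper fixes the number of iterations experimentally).

```python
import numpy as np

def feature_propagation(X, A, missing_mask, num_iters=40, binary=True):
    # X: (n, d) feature matrix; A: (n, n) adjacency (given or built via KNN);
    # missing_mask: boolean (n, d) array, True where an attribute is missing.
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    L = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # normalized operator

    X_R = X.copy()
    X_R[missing_mask] = np.random.rand(missing_mask.sum())  # random init (line 3)
    known = ~missing_mask
    for _ in range(num_iters):
        X_R = L @ X_R                 # diffuse known features over the graph
        if binary:                    # round binary attributes (line 7)
            X_R[missing_mask] = (X_R[missing_mask] > 0.5).astype(X.dtype)
        X_R[known] = X[known]         # reset known attributes (line 9)
    return X_R
```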
**Confidence Score** In our attacks, since the attacker has no information about the ground truth label, she utilizes the confidence of the model in its prediction. The underlying idea is that a model that is confident in its prediction will assign a higher probability to the corresponding class label. Thus, the attacker leverages this as the basis of the confidence score. One approach is to take the value of the class with the highest probability as the confidence score. However, a problem arises when the target model generates an output where either all classes have similar probabilities or there are multiple classes with comparable probabilities.

To address this issue, we propose a solution by applying a "tax" on the highest class probability. First, we compute the average class probability of the remaining classes, and then take the difference between the highest probability in \(\mathbf{y}\) and this average. This difference serves as the confidence score. Intuitively, if the class probabilities in \(\mathbf{y}\) are similar, the final score will be low, indicating a lower level of confidence. Conversely, if there is a substantial difference between the highest class probability and the others, the taxation is reduced, resulting in a higher final confidence score. It is important to note that the output vector \(\mathbf{y}\) is normalized, ensuring that the maximum confidence score is 1 and the minimum is 0. Additionally, the confidence score is computed on a per-node basis.
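A minimal sketch of this scoring rule, assuming the target model returns a row-normalized posterior matrix:

```python
import numpy as np

def confidence_scores(Y):
    # Y: (n, c) row-normalized class posteriors from the target model.
    # Score = highest class probability minus the average of the rest,
    # so near-uniform posteriors score close to 0 and one-hot ones score 1.
    top = Y.max(axis=1)
    rest_avg = (Y.sum(axis=1) - top) / (Y.shape[1] - 1)
    return np.clip(top - rest_avg, 0.0, 1.0)
```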
### Attribute Inference Attack via Repeated Query of the Target Model (FP-MA)
Our first attack, referred to as Fp-ma, involves multiple queries to the target model in order to infer the sensitive attribute. It relies on the feature propagation
algorithm (Algorithm 1) to initialize the sensitive attribute. The attack process is depicted in Figure 1. Furthermore, we introduce another variation of the attack, known as Ri-ma, which initializes the sensitive attribute through a random initialization mechanism while accessing the target model. The overall procedure of the attack is outlined in Algorithm 2.
```
input : Incomplete node feature matrix with missing sensitive attributes X*,
        edges E, confidence threshold cs_threshold
output: X*_R with inferred attributes, mean confidence score cs_mean
 1  X*_R <- X*
 2  while X*_R has missing values do
 3      X*_R <- InitAlgorithm(X*_R, E)        // feature propagation or random init
 4      Y <- TargetGCN(X*_R, E)               // query target model
 5      CS <- CalculateConfidenceScores(Y)
 6      if CS has a value greater than cs_threshold then
 7          i <- IndexOfMax(CS)               // index of maximum confidence score (cs)
 8          Fix(X*_R[i])                      // fix the node with maximum cs
 9          Reset cs_threshold
10      else
11          cs_threshold <- LowerBy5Percent(cs_threshold)
12      end if
13  end while
14  Y <- TargetGCN(X*_R, E)
15  CS <- CalculateConfidenceScores(Y)
16  cs_mean <- Mean(CS)
17  return X*_R, cs_mean
```
**Algorithm 2** Attack algorithm. InitAlgorithm refers to either feature propagation (Fp-ma) or random initialization (Ri-ma).
First, the attacker has \(\mathbf{n}\) candidate nodes with \(\mathbf{m}\) missing sensitive attributes, the corresponding edges of these nodes (or she computes the edges via KNN if she has no access), and a confidence score threshold \(\mathbf{cs_{threshold}}\). These nodes form the matrix \(\mathbf{X}^{*}\). In the first phase, the attack procedure runs the initialization algorithm (feature propagation (Section 4.2) or random initialization) to obtain an estimated value for the missing attribute (line 3). The attacker then queries the target model with the attributes obtained from the initialization algorithm \(\mathbf{X}^{*}_{\mathbf{R}}\) and the edges \(\mathcal{E}\) (line 4). As shown in lines 5-7, the attacker computes the confidence scores as described in Section 4.2 and then chooses the node that produces the highest confidence score if it passes the confidence score threshold. Any node whose confidence score is higher than the threshold is "fixed" (line 8). That is, the estimated values for the missing attributes of that node do not change in the next iteration of the attack.
To incentivize the attack to infer nodes with high confidence scores, a threshold for the confidence score \(\mathbf{cs_{threshold}}\) is selected by the attacker at the start. A node with inferred attributes whose confidence score is lower than the threshold will not be fixed (line 11). The threshold ensures that the algorithm runs multiple iterations to maximize the confidence score and, in turn, predict the right attribute value. But a problem arises when the threshold is too high and no node can attain such a confidence score. To tackle this problem, we lower the confidence threshold by 5% after each iteration in which no node is fixed (line 11). The lowered threshold is set back to its original value when a node is finally fixed (line 9). We reset the threshold to ensure that the attacker is re-incentivized when the feature propagation algorithm produces new randomized values for the rest of the nodes. This iterative process is repeated until no nodes are left with missing values.
When the attacker has inferred values for all nodes with missing sensitive attributes, she queries the target model with these nodes and edges, computes the confidence scores, and then takes the mean of the confidence scores of all inferred nodes (lines 14-16). The mean confidence score is returned for experimental purposes, to compare the behavior of the confidence score across attack methods. The attacker finally returns the candidate nodes with their inferred attributes \(\mathbf{X}_{\mathbf{R}}^{*}\) and the mean of the confidence scores \(\mathbf{cs_{mean}}\) (line 17).
### Feature Propagation-only Attribute Inference Attack (FP)
The feature propagation-only attribute inference attack (Fp) executes a feature propagation algorithm only once to infer the sensitive attribute.
One advantage of Fp is that it is simpler and faster in runtime than the other methods. This is because the Fp attack does not utilize the information obtained by querying the model in the attack process. The attack procedure is as follows. First, Fp takes the missing attributes in \(\mathbf{X}^{*}\) and their edges \(\mathcal{E}\) as input. The
Figure 1: The black-box attack FP-ma. For the Ri-ma attack, we replace the feature propagation module with a random initializer.
attacker then runs the feature propagation algorithm as described in Section 4.2. The output of the feature propagation algorithm is considered the final set of inferred attributes. Similar to Fp-ma, the target model is queried with the final inferred nodes, not to fine-tune the inferred estimates but only to compute the mean of the confidence scores as a measure of comparison with other methods. Finally, Fp returns the candidate nodes with their inferred attributes \(\mathbf{X}_{\mathbf{R}}^{*}\) and the mean confidence \(\mathbf{cs_{mean}}\).
### Shadow-based attribute inference attack (SA)
We adapt (with several modifications) the white-box attack proposed by [4] to the black-box setting. In this attack, the model's output (posteriors) is used to determine whether the attribute has been correctly inferred or not. The purpose of the attack is to study the behavior of the model on nodes with sensitive attribute values that it has already seen. To achieve this, a pseudo-model called a shadow model is trained, which the attacker can use to observe the model's behavior on her shadow dataset. The attacker chooses a candidate set from the train set of her shadow model, which includes nodes with complete attributes and some with missing sensitive attribute values. During the query, she assigns random values to the missing sensitive attributes, observes the posteriors, and compares them with the posteriors of the node-set with the true attributes. After obtaining the labeled posteriors from the candidate set of the shadow dataset, she proceeds to train the attack model, which is a 3-layer MLP. In this step, the attack model is trained using the labeled posteriors, where the posteriors with the original sensitive attribute value are labeled as 1, while those with the assigned random attribute value are labeled as 0. To infer the attribute of the candidate set (nodes of interest), she queries the target model, obtains posteriors, and inputs them into her trained attack model to infer the attribute values. We note that this attack is only applicable to sensitive binary attribute values.
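A minimal sketch of the attack-model stage is shown below, assuming labeled shadow posteriors have already been collected; the MLP layer sizes and iteration budget are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_sa_attack(shadow_posteriors, labels):
    # labels: 1 for posteriors produced with the true sensitive value,
    #         0 for posteriors produced with a randomly assigned value.
    attack = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500)
    attack.fit(shadow_posteriors, labels)
    return attack

def infer_binary_attribute(attack, target_posterior_with_guess):
    # Predict whether the guessed sensitive value was the correct one.
    return attack.predict(np.atleast_2d(target_posterior_with_guess))[0]
```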
## 5 Datasets and Implementation details
We utilize three private datasets, namely the credit defaulter graph, the Facebook, and the LastFM social network datasets, along with two benchmark datasets, Cora and PubMed. The details of all datasets are provided in Table 1 and in Appendix 0.B. The target GNN model \(\Phi\) is a 2-layer graph convolutional network (GCN). All our experiments were conducted with 10 different instantiations. We report the mean values across the runs in the main paper. The full results with standard deviations are available on GitHub.
#### Attack Evaluation
For all experiments, we choose 100 candidate nodes (\(\mathbf{X}^{*}\)) at random from the training set, with the objective of inferring their sensitive attribute values. Note that these candidate nodes are fixed across all experiments unless otherwise stated. To assess the success of the attack, we compare the inferred attributes with the values of the original node-set, which refers to the nodes that were not modified. We employ two metrics: the percentage of correctly inferred attributes for binary values and the mean-squared error for continuous values. Details of the evaluation methods are in Appendix 0.C.
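In code, the two metrics reduce to the following straightforward sketch:

```python
import numpy as np

def binary_inference_rate(x_pred, x_true):
    # Percentage of correctly inferred binary attribute values.
    return 100.0 * np.mean(x_pred == x_true)

def continuous_inference_error(x_pred, x_true):
    # Mean-squared error between inferred and true continuous values.
    return np.mean((x_pred - x_true) ** 2)
```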
## 6 Results
**Inference of sensitive binary attributes.** In the case where the missing sensitive attributes are binary, we observe a similar trend among all datasets: the attack performance is worse when the attacker has black-box access to the model than when she has no access, as shown in Figure 2. In Setting-1, Fp and Ri, which do not require access to the target model, exhibit slightly better performance (an improvement of at most 4%) compared to the black-box attack models Fp-ma and Ri-ma, which rely on such access. It is important to note that in Setting-1, the attacker only has information on all non-sensitive attributes and no additional knowledge about any of the sensitive attributes, unlike
| | Credit | Cora | Pubmed | Facebook | LastFM | Texas |
| --- | --- | --- | --- | --- | --- | --- |
| # attributes | 13 | 1,433 | 500 | 1,406 | 128 | 10 |
| \(\lvert\mathcal{V}\rvert\) | 30,000 | 2,708 | 19,717 | 4,167 | 7,624 | 925,128 |
| \(\lvert\mathcal{E}\rvert\) | 1,436,858 | 5,278 | 44,324 | 178,124 | 55,612 | – |
| \(deg\) | 95.79 | 3.90 | 4.50 | 42.7 | 7.3 | – |
| # classes | 2 | 7 | 3 | 2 | 18 | 100 |
| Train Acc. | 0.78 | 0.95 | 0.88 | 0.98 | 0.80 | 0.53 |
| Test Acc. | 0.77 | 0.78 | 0.86 | 0.98 | 0.85 | 0.45 |

Table 1: Statistics of the datasets used in all experiments
Figure 2: Performance of the binary attribute inference attack at different settings. Setting-1 is represented by a solid line with triangle marker, while Setting-2 is represented by a dashed line with a circle marker. We fix the size of the training dataset to 1000 for all datasets.
in Setting-2. When Fp and Ri outperform Fp-ma and Ri-ma, respectively, it suggests that even with access to the trained model, no further sensitive information is leaked. One reason for this is that the available information from the non-sensitive attributes already captures the majority of patterns and correlations in the dataset. Therefore, the incremental advantage gained from accessing the model is minimal, resulting in comparable or slightly inferior performance of the black-box attack methods. In Setting-2, the performance of Ri is similar to Setting-1, where Ri performs slightly better than Ri-ma in inferring the sensitive attribute. However, the opposite is observed for Fp. Specifically, Fp-ma achieves a higher inferred-attribute rate. For instance, on the Cora dataset, this difference amounts to an improvement of 21%.
On all datasets except Facebook, we observe that access to the graph structure does not provide additional advantages to the attack models (Fp-ma and Ri-ma). Computing the graph with a K-nearest-neighbor strategy to query the target model leads to better inference than access to the true graph structure. One reason for this is that the candidate nodes are disjoint from the training nodes, and the connections among the candidate nodes are relatively sparse. Therefore, when using the original sparse neighborhood, the feature representation of the query node can be less informative than when using a neighborhood constructed based on feature closeness.
For the Sa attack, the attack performance is a random guess (\(\leq 50\%\)) on all datasets. Therefore, in addition to the difficulty of obtaining a shadow dataset, such attacks are also not successful given only black-box access to the model. We omit the results due to space limitations.
**Inference of sensitive continuous attributes.** As shown in Figure 3, on the LastFM dataset, Fp-ma performs the worst, while Ri-ma achieves the best results. The performance of Fp varies depending on whether the full graph structure is available or the KNN approach is used. Throughout the experiments, using the full graph structure consistently yields better results for Fp-ma. The Ri and Ri-ma methods, on the other hand, are not sensitive to the graph structure. The Ri-ma method consistently outperforms the other methods in Setting-1.
In Setting-2, the performance pattern follows a similar trend as in Setting-1, but with improved results. This can be attributed to the attacker having knowledge of some sensitive attributes. For example, at a training dataset size of 100, the error rate drops by 60% for Fp, 45% for Fp-ma, 51% for Ri, and 50% for Ri-ma. This highlights the impact of having knowledge of nodes with sensitive attributes. An attacker with such information can launch a stronger attack compared to an attacker with no knowledge, as the error rates significantly decrease when partial knowledge is available.
Similarly, on the PubMed dataset (Figure 3), we observe that an attacker in Setting-2 can infer the sensitive attributes better compared to an attacker in Setting-1, with a significant decrease in error rates of up to 52%. Among the inference methods used, Fp-ma consistently achieves the best performance (lowest error) in inferring the sensitive attribute across all settings in the PubMed dataset. The reason behind this superiority can be attributed to the utilization
of feature propagation and the availability of the graph structure. By leveraging feature propagation algorithms, Fp-ma can effectively propagate information and exploit the relationships between nodes to make more accurate inferences.
In both settings, having access to the graph structure consistently yields the highest inference capability (lowest error rates) for all methods except Ri and Ri-ma. Unlike the case of inferring binary attributes, this observation is not surprising. Continuous attributes typically exhibit a smooth and gradual variation, whereas binary inference focuses on identifying distinct decision boundaries within the data. Hence, the connectivity patterns of the nodes in the graph structure play a crucial role in propagating information and inferring missing values. The inference methods leverage these patterns to make accurate estimations.
**Summary.** For inferring binary attributes, the underlying algorithms used by the Fp and Ri methods allow them to leverage the dataset's inherent characteristics effectively. On the other hand, Fp-ma and Ri-ma, which rely on access to the target model, may exhibit slightly lower performance in the absence of specific knowledge about the sensitive attributes, as in Setting-1. Additionally, successful attacks can be carried out without full access to the graph structure.

For inferring continuous attributes, the results emphasize the importance of utilizing the graph structure and additional information in improving the accuracy of inferring the sensitive attributes.
**Effect of the Training Data Size on Inferring Sensitive Attributes.** Here, we investigate the influence of the amount of data used to train the target model on the inference of the sensitive attribute. On the Cora and Facebook datasets (Figure 4), we observe that it is easier to infer attributes when less data is used to train the target model. One reason for this is that when less training data is available, the target model may not have learned all the relevant features, and the attack can leverage this to infer the sensitive attribute more easily. However, as the amount of available training data increases, the
Figure 3: Performance of inferring continuous attributes on the PubMed and LastFM dataset. Setting-1 is represented by a solid line with triangle marker, while Setting-2 is represented by a dashed line with a circle marker. We fix the size of the training dataset to 1000.
target model becomes better at learning the underlying patterns in the data, which in turn makes it more difficult for an attacker to infer sensitive attributes from the model's output. Additionally, the increased training data may also reduce overfitting and improve generalization performance, which can make the model less vulnerable to attacks. We also observe that the attack achieves greater success in Setting-2 compared to Setting-1. This outcome is expected, as it demonstrates the attack's ability to leverage the information from the known 50% sensitive attribute values to enhance the accuracy of inferring the remaining sensitive attribute values.
On the Credit dataset in Figure 4, the effect of the target's training data size on inferring the sensitive attribute is minimal. For instance, the performance of Fp-ma is similar across different training sizes. Furthermore, the additional knowledge of certain sensitive attributes (Setting-2) does not have any noticeable effect. On the PubMed and LastFM datasets (Figure 5), Ri-ma consistently achieves lower error rates as the training data size increases. This indicates that Ri-ma benefits from larger training datasets by capturing more diverse patterns, leading to improved performance and lower error rates. For Fp-ma, however, we observe a convex-shaped effect as the training data size increases. Although this seems strange, we believe that as the training size initially increases, the model may encounter more outliers or noise that hinder its performance, resulting in higher error rates. As the training size increases further, the model starts to learn the underlying patterns better, leading to improved performance and lower error rates.
**Inferring Multiple Attributes (m\(>1\))** In the multiple-attribute inference experiment, the attacker is interested in inferring more than one attribute. For example, on the Credit dataset, the attacker may want to infer both the age and the education level of the victims. In this experiment, we set \(m=2\). As shown in Figure 6, the results of multiple-attribute inference closely follow the trends observed for single-attribute inference. However, one notable difference is that on the Credit, Cora, and Facebook datasets, the performance of inferring multiple attributes is lower compared to single-attribute inference (solid lines).
Figure 4: Performance of varying target model’s training size on black-box attacks (Ri-ma and Fp-ma) on binary AIA.
This is expected because of the increased complexity of the inference task. The presence of multiple attributes introduces additional dependencies and interactions among the attributes, making it more challenging to accurately infer all attributes simultaneously. Moreover, some attributes may have conflicting patterns or dependencies, making it difficult for the inference algorithm to reconcile the competing information.
For inferring sensitive continuous values, the opposite is observed, especially for Fp-ma (Figure 7). Specifically, inferring multiple attributes achieves lower error rates than inferring a single attribute, with up to a 99% decrease in error on both the PubMed and LastFM datasets (dashed lines). This interesting phenomenon can be attributed to the unique characteristics of the feature propagation algorithm used in Fp-ma. The feature propagation algorithm utilizes the relationships and dependencies among the attributes to propagate information and refine the inferred values. When inferring multiple attributes simultaneously, the propagated information from one attribute can provide valuable insights and constraints for the inference of other related attributes. Specifically, if certain attributes have missing or noisy data, the presence of other attributes with similar patterns may compensate for the errors and improve the robustness of the inference process.
**Additional Experiment** We perform an additional experiment in which we vary the distribution assumption in Appendix 0.A. The results on the Texas dataset demonstrate that access to skewed distributions (where the candidate and the training nodes are from different distributions) leaks more information than access to the same distribution when the training dataset size is small.
## 7 Conclusion
In this paper, we develop several black-box attribute inference attacks to quantify the privacy leakage of GNNs. Our findings are as follows:
(i) For a stronger attacker with additional access to some sensitive attribute (Setting-2), the performance of black-box attacks can improve by up to 21%
Figure 5: Performance of varying target model's training size on black-box attacks (Ri-ma and Fp-ma) on continuous AIA.
compared to an attacker without such access (Setting-1).
(ii) The graph structure plays a significant role in inferring sensitive continuous values, leading to a remarkable reduction in error rate of up to 99%. However, when it comes to inferring sensitive binary values, except for the Facebook dataset, the graph structure has no noticeable impact.
(iii) Despite a stronger attacker (Setting-2) and access to the graph structure, our black-box attribute inference attacks generally do not leak any additional information compared to missing-value estimation algorithms, regardless of whether the sensitive values are binary or continuous.
#### Acknowledgment.
This work is, in part, funded by the Lower Saxony Ministry of Science and Culture under grant no. ZN3491 within the Lower Saxony "Vorab" of the Volkswagen Foundation and supported by the Center for Digital Innovations (ZDIN), and the Federal Ministry of Education and Research (BMBF), Germany, under the project LeibnizKILabor (grant no. 01DD20003).
|
2305.05150 | Physics-informed neural network for seismic wave inversion in layered
semi-infinite domain | Estimating the material distribution of Earth's subsurface is a challenging
task in seismology and earthquake engineering. The recent development of
physics-informed neural network (PINN) has shed new light on seismic inversion.
In this paper, we present a PINN framework for seismic wave inversion in
layered (1D) semi-infinite domain. The absorbing boundary condition is
incorporated into the network as a soft regularizer for avoiding excessive
computation. In specific, we design a lightweight network to learn the unknown
material distribution and a deep neural network to approximate solution
variables. The entire network is end-to-end and constrained by both sparse
measurement data and the underlying physical laws (i.e., governing equations
and initial/boundary conditions). Various experiments have been conducted to
validate the effectiveness of our proposed approach for inverse modeling of
seismic wave propagation in 1D semi-infinite domain. | Pu Ren, Chengping Rao, Hao Sun, Yang Liu | 2023-05-09T03:30:06Z | http://arxiv.org/abs/2305.05150v1 | # Physics-informed neural network for seismic wave inversion in layered semi-infinite domain
###### Abstract
Estimating the material distribution of Earth's subsurface is a challenging task in seismology and earthquake engineering. The recent development of physics-informed neural network (PINN) has shed new light on seismic inversion. In this paper, we present a PINN framework for seismic wave inversion in layered (1D) semi-infinite domain. The absorbing boundary condition is incorporated into the network as a soft regularizer for avoiding excessive computation. In specific, we design a lightweight network to learn the unknown material distribution and a deep neural network to approximate solution variables. The entire network is end-to-end and constrained by both sparse measurement data and the underlying physical laws (i.e., governing equations and initial/boundary conditions). Various experiments have been conducted to validate the effectiveness of our proposed approach for inverse modeling of seismic wave propagation in 1D semi-infinite domain.
keywords: Physics-informed neural network, Seismic wave propagation, Absorbing boundary conditions, Full waveform inversion, Material distribution
## 1 Introduction
Seismic imaging is essential to a wide range of scientific and engineering tasks. Thanks to the development of a theoretical foundation for wave propagation, researchers have been dedicated to tackling seismic inversion problems by combining physical simulation and field measurement data. One of the most important applications is full waveform inversion (FWI), which determines a high-resolution and high-fidelity velocity model (or material distribution) of the subsurface from the seismic waveforms collected on the field surface. This task is usually solved as an inverse problem of minimizing the misfit error between the observed seismic signals and the synthetic solution simulated from an estimated velocity model. However, traditional numerical methods suffer from three significant limitations for inverse analysis (e.g., FWI). First, calculating the Fréchet derivatives in the optimization procedure places harsh demands on computational resources and memory. Second, the resolution of the inverted velocity model is limited by the smoothing regularization approaches used to alleviate the ill-posedness of the inverse problem. Third, the final solution depends heavily on the initial guess of the model.
Due to the great advancements in machine learning (ML) and artificial intelligence (AI), scientists have embraced a new research direction for the forward and inverse analysis of complex systems in the domain of seismology. For instance, deep neural networks (DNNs) and automatic differentiation
are utilized for solving the Eikonal equation [1] and seismic inversion [2; 3]. [4] also explore the potential of Fourier Neural Operator [5] for acoustic wave propagation and inversion. In addition to inferring velocity structures, [6] develop a generalized Expectation-Maximization (DeepGEM) algorithm for estimating the wave sources simultaneously. Nevertheless, these methods rely on large amounts of high-fidelity data, which are usually inaccessible in real-world applications. Therefore, there is an urgent need to develop novel ML approaches to address the sparsity and noise issues of measurement data. It is worthwhile to mention that Graph Network (GN) [7] has been proven to be an effective tool for estimating the underlying velocity model and the corresponding initial condition of the specific wave propagation based on sparse measurement data [8].
Moreover, the recent physics-informed neural networks (PINNs) [9; 10] have also succeeded in forward and inverse analysis of various complex systems with limited sensor measurements [11; 12; 13; 14; 15] or even no labeled data [16; 17; 18; 19; 20]. The general principle of PINNs is to combine the known physical laws and the measured data to optimize the DNNs simultaneously. In particular, we observe a growing interest in applying PINNs for seismic wave modeling [21; 22; 23; 24], which can overcome the obstacles of traditional methods. Note that the current PINN implementations for seismic wave propagation primarily focus on studying acoustic wave propagation.
In this paper, we aim to design PINN frameworks for elastic wave modeling and inversion in the 1D domain, inspired by previous work [25]. To be more concrete, we first design a specific PINN architecture to approximate the solution variables of interest. Next, for the sake of seismic inversion, we introduce an additional shallow NN to learn the material distribution (i.e., Young's modulus). The combined networks work jointly to model the specific seismic wave propagation. Furthermore, the absorbing boundary condition is considered here to truncate the domain and avoid unnecessary computation, acting as a soft regularizer in the loss function that constrains the entire network. Finally, we evaluate the performance of our proposed PINN frameworks on various numerical experiments, including solving wave equations and FWI. More specifically, we consider different material configurations when validating the method.
The rest of the paper is organized as follows. The Methods Section formulates the seismic inversion task (i.e., FWI) and introduces the proposed PINN framework. The Experiments Section presents various numerical experiments for validating the performance of our proposed method on seismic wave inversion. Moreover, we conclude the entire paper in the Conclusions Section.
## 2 Method
### Problem setup
In this paper, we focus on the inverse analysis of 1D seismic wave propagation in elastic media. First of all, the momentum and constitutive equations of elasticity in 1D are defined as
\[\rho\frac{\partial^{2}u}{\partial t^{2}} =\frac{\partial\sigma}{\partial x}, \tag{1}\] \[E\frac{\partial u}{\partial x} =\sigma,\]
where \(t\) and \(x\) are the time and spatial coordinates; \(\sigma(t,x)\) and \(u(t,x)\) are the stress and displacement fields, respectively. \(E(x)\) denotes Young's modulus as a function of \(x\), and \(\rho\) represents the density. The initial condition of the displacement field is set as \(u(0,x)=0\). In addition, we apply the absorbing boundary condition at the truncated boundary, which is given by
\[\frac{\partial u}{\partial t}+\sqrt{\frac{E}{\rho}}\cdot\frac{\partial u}{ \partial x}=0. \tag{2}\]
Our specific goal is to design PINN frameworks to take the spatiotemporal coordinate \((x,t)\) as inputs and approximate the displacement and stress variables \(\{u,\sigma\}\) based on Eq. (1) and Eq. (2). For the FWI problem, we aim to reconstruct the velocity model (in the form of Young's modulus) of the subsurface given the displacement waveform collected on the surface (i.e., \(x=0\)).
### PINN framework
In this part, we introduce the strategy of combining DNNs and physical laws (i.e., governing equations and boundary conditions) for learning seismic wave propagation. Various network architectures have been explored in the area of scientific ML, such as fully-connected NNs [9], convolutional NNs [19], graph NNs [26; 27], and transformers [28]. Considering the complexity of our task, we employ fully-connected feed-forward NNs to represent the seismic dynamics in this paper. A typical fully-connected NN consists of an input layer, multiple hidden layers, and an output layer. In specific, the connection between the \((i-1)\)-th layer and the \(i\)-th layer is given by
\[\mathbf{X}^{i}=\sigma\left(\mathbf{W}^{i}\mathbf{X}^{i-1}+\mathbf{b}^{i} \right),\;\;i\in[1,n+1], \tag{3}\]
where \(\mathbf{X}^{i-1}\) and \(\mathbf{X}^{i}\) are the input and output features of the \(i\)-th layer; \(\mathbf{W}^{i}\) and \(\mathbf{b}^{i}\) denote the weight tensor and bias vector of the \(i\)-th layer; \(n\) refers to the number of hidden layers; and \(\sigma(\cdot)\) represents the nonlinear activation function (e.g., the hyperbolic tangent function), not to be confused with the stress \(\sigma(t,x)\).
The major difference between pure data-driven ML and physics-informed learning lies in the introduction of physical constraints apart from the traditional data loss. The physics constraints are achieved by constructing the residual loss between the predicted dynamics and the underlying true physics. In PINNs, automatic differentiation [29] is applied to obtain the derivative terms of interest. More details about the specific design of PINN architecture for seismic inversion can be found in Section 2.3.
### Seismic inversion
The goal of seismic inversion is to infer the high-fidelity velocity model (i.e., material distribution) of the subsurface based on the observed seismic signals (e.g., seismic waveforms), which are measured from the field surface. The challenge here is that the distribution of the seismic wavespeed in the medium is unknown. To this end, we introduce another shallow NN to approximate the material variable \(\widetilde{E}\) independently, apart from the PINN architecture designed for approximating the solution variables \(\widetilde{u}\) and \(\widetilde{\sigma}\). The overall framework of our proposed method is presented in Fig. 1. Note that the shallow NN part, which models the material distribution, has a smaller network size than the deep NN module, since learning the material distribution involves less spatiotemporal complexity [14; 24] than learning the solution variables. This synergetic PINN framework thus serves as a model pipeline for FWI.
In addition, to avoid unnecessary computation, we consider applying the absorbing boundary condition for handling the semi-infinite domain in seismic wave propagation. The truncated boundary condition is defined as Eq. (2). Finally, the loss function for inferring the velocity model is defined as
\[J=J_{g}+\alpha\cdot J_{d}+\beta\cdot J_{ibc} \tag{4}\]
where \(\alpha\) and \(\beta\) denote weighting coefficients, and \(J_{g}\), \(J_{ibc}\), and \(J_{d}\) denote the governing equation loss, the initial/boundary condition loss, and the data loss, respectively. The loss function enforces both the misfit between the predicted and measured waveforms and the physical consistency described by Eq. (1). Moreover, as shown in Fig. 1, Young's modulus \(\widetilde{E}\) is approximated by the shallow NN part, which is constrained by \(J_{g}\) and \(J_{ibc}\).
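Since the experiments in this paper are coded with TensorFlow, a minimal sketch of this composite loss is given below. The network interfaces (`sol_net` mapping \((x,t)\) to \((u,\sigma)\) and `mat_net` mapping \(x\) to \(E\)) and the default \(\rho=1\) are our assumptions, not the paper's exact implementation.

```python
import tensorflow as tf

def pinn_loss(sol_net, mat_net, x_f, t_f, x_b, t_b, x_ic, x_d, t_d, u_obs,
              rho=1.0, alpha=1.0, beta=1.0):
    # Governing-equation residuals (Eq. (1)) at collocation points (x_f, t_f).
    with tf.GradientTape() as t2:
        t2.watch(t_f)
        with tf.GradientTape(persistent=True) as t1:
            t1.watch([x_f, t_f])
            out = sol_net(tf.concat([x_f, t_f], axis=1))
            u, sigma = out[:, 0:1], out[:, 1:2]
        u_t = t1.gradient(u, t_f)
        u_x = t1.gradient(u, x_f)
        sigma_x = t1.gradient(sigma, x_f)
    u_tt = t2.gradient(u_t, t_f)
    E_f = mat_net(x_f)
    J_g = tf.reduce_mean((rho * u_tt - sigma_x) ** 2) \
        + tf.reduce_mean((E_f * u_x - sigma) ** 2)

    # Absorbing-boundary residual (Eq. (2)) at the truncated boundary.
    with tf.GradientTape(persistent=True) as tb:
        tb.watch([x_b, t_b])
        u_b = sol_net(tf.concat([x_b, t_b], axis=1))[:, 0:1]
    ub_t = tb.gradient(u_b, t_b)
    ub_x = tb.gradient(u_b, x_b)
    c = tf.sqrt(mat_net(x_b) / rho)
    J_bc = tf.reduce_mean((ub_t + c * ub_x) ** 2)

    # Initial condition u(0, x) = 0 and surface-waveform data misfit.
    u_ic = sol_net(tf.concat([x_ic, tf.zeros_like(x_ic)], axis=1))[:, 0:1]
    J_ibc = J_bc + tf.reduce_mean(u_ic ** 2)
    u_pred = sol_net(tf.concat([x_d, t_d], axis=1))[:, 0:1]
    J_d = tf.reduce_mean((u_pred - u_obs) ** 2)

    return J_g + alpha * J_d + beta * J_ibc  # Eq. (4)
```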
## 3 Experiments
To validate the effectiveness of our proposed method, we conduct two numerical experiments evaluating the performance of FWI in the 1D domain. Specifically, two velocity models are designed: (1) a mixed velocity model with both linear and Gaussian components; and (2) a four-layer material distribution. All experiments are coded with TensorFlow [30] and run on an NVIDIA Tesla V100 GPU card (32G) in a standard workstation.
### Setup
In these numerical examples, we consider a 1D subsurface under the elasticity assumption, as shown in Fig. 2. To ensure the computational efficiency of the PINN, we truncate the original semi-infinite computational domain at a finite depth, i.e., \(x=2.0\). As a result, the absorbing boundary condition is applied at the truncated boundary to avoid wave reflection. Note that the waveform used for inversion is synthesized through finite element simulation in an enlarged domain \(x\in[0,10]\), even though the truncated domain \(x\in[0,2]\) is considered in the PINN.
Furthermore, we also define an evaluation metric, the mean absolute relative error (MARE), to measure the prediction discrepancy of material distribution in the form of Young's modulus \(E\). MARE is given by
\[\text{MARE}=\frac{1}{N}\sum_{i=1}^{N}\frac{|\widetilde{E}_{i}-E_{i}|}{|E_{i}| }\times 100\%, \tag{5}\]
where \(N\) is the number of collocation points.
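Equivalently, in code (a straightforward sketch):

```python
import numpy as np

def mare(E_pred, E_true):
    # Mean absolute relative error (in %) over N collocation points, Eq. (5).
    return 100.0 * np.mean(np.abs(E_pred - E_true) / np.abs(E_true))
```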
Figure 1: Architecture of the PINN for 1D full waveform inversion. A shallow neural network is used to approximate the unknown distribution of \(E\).
### Linear and Gaussian material distribution
The first case considers a velocity model of the subsurface with both linear and Gaussian functions, which is defined in the form of Young's modulus
\[E=2.0+0.9x+4.0\exp\left[-\frac{(x-1.1)^{2}}{0.2}\right]. \tag{6}\]
Moreover, in the simulation, we prescribe a time-varying force on the surface, which is given by
\[\sigma(t)=\sigma_{0}\cdot\exp\left[-\frac{(t-t_{s})^{2}}{0.03}\right], \tag{7}\]
where the amplitude \(\sigma_{0}\) is set as 1.0 and the offset \(t_{s}\) is 0.5. The entire simulation duration is 3.
We design the deep NNs of \(50\times 3\) (i.e., depth of 3 and width of 50) for approximating the solution variables \(u\) and \(\sigma\). The shallow NN has a network size of \(4\times 2\) (i.e., depth of 2 and width of 4) for learning the material distribution \(E\). In addition, the Hyperbolic Tangent activation function (i.e., Tanh) is utilized in the framework. To train the composite PINN jointly, we adopt a routine with 2,000 iterations of Adam optimizer [31] followed by 40,000 iterations of L-BFGS-B optimizer [32].
In addition, 30,000 collocation points are randomly sampled within the spatiotemporal domain for constructing \(J_{g}\). The synthetic wave signal includes 301 data points collected in \(t\in[0,3]\). When the optimization converges, we can directly infer the material distribution \(E\) and predict the waveform on the surface. The comparison between the predicted velocity model and the ground truth is shown in Fig. 3. The learned velocity model agrees well with the ground truth despite a minor discrepancy near the truncated boundary. In specific, the MARE metric shows that the prediction error of the material distribution is only 0.47%. Furthermore, we present the comparison between the predicted surface waveform (i.e., displacement responses) and the synthetic measurements in Fig. 4. The result also demonstrates the excellent performance of our proposed PINN framework in approximating the solution variables.
Figure 2: Computational domain of the 1D full waveform inversion problem. \(F(t)\) denotes the time-varying load excited on the surface. The truncated depth is \(L=2.0\).
### Four-layer material distribution
The second numerical example is a four-layer velocity model, which is a more complicated subsurface compared with the first case. The mathematical formulation is given by
\[E=\begin{cases}1.5,&0\leq x\leq 0.5\\ 6.0,&0.5\leq x\leq 1.0\\ 4.0,&1.0\leq x\leq 1.5\\ 6.5,&\text{others}\end{cases}. \tag{8}\]
To avoid the discontinuity of Young's modulus \(E\), we smooth the transition between two adjacent layers with a sine function. Note that the derivatives of \(E\) are still discontinuous. The transition length is set to \(0.1\). This problem is more complicated since the layered distribution of \(E\) dissipates the seismic wave during propagation and leads to a faint waveform. Therefore, the surface load excitation applied here features a higher frequency, which is described by
\[\sigma(t)=\sigma_{0}\cdot\exp\left[-\frac{(t-t_{s})^{2}}{0.008}\right], \tag{9}\]
Figure 4: Comparison between the learned surface waveform (i.e., displacement) and the ground truth.
Figure 3: Comparison between the predicted material distribution in the form of Young’s modulus \(E\) and the true velocity model.
where \(\sigma_{0}\), \(t_{s}\), and the time duration are still defined as 1.0, 0.5, and 3, respectively.
NNs with greater expressiveness are employed to approximate the solutions, i.e., a deep NN with a network size of \(80\times 4\) and a shallow NN with a size of \(10\times 2\) for learning \(E\). Additionally, 100,000 collocation points are sampled within the spatiotemporal domain. 5,000 iterations of the Adam optimizer and 400,000 iterations of the L-BFGS-B optimizer are used to train the composite PINN. The inverted distribution of \(E\) is obtained from the trained model and compared with the ground truth, as shown in Fig. 5. It can be seen that the PINN is capable of identifying the four-layer subsurface. The prediction of the four-layer material distribution has a MARE of 1.70%, which again demonstrates strong performance on a complex velocity model. Nevertheless, we also observe that the discrepancy increases with depth. This is owing to the inherent nature of the problem: as the wave propagates deeper, the reflected wave observable on the surface carries fewer features. This reflects the poor identifiability of the deeper subsurface and the ill-posedness of such tasks. Moreover, the discontinuity (i.e., in the derivatives of \(E\)) near the interfaces of adjacent layers also makes it challenging to accurately reconstruct the subsurface material, as the PINN is a continuous approximator. Besides, the comparison of the surface waveform between the PINN prediction and the ground truth further validates the effectiveness of our proposed method. To summarize, our designed PINN framework shows promising results on seismic inversion problems.
## 4 Conclusions
This paper introduces a novel PINN framework for learning seismic wave propagation in elastic media. In specific, a deep NN is designed to approximate the solution variables constrained by the physical laws (governing equations and the initial/boundary conditions), and a shallow NN is employed to learn the underlying velocity structure. Moreover, the absorbing boundary condition is incorporated into the network to truncate the semi-infinite domain and avoid excessive computation. We validate the effectiveness of our proposed approach on two numerical cases in the context of seismic inversion (i.e., FWI). In the future, we would like to explore the potential of physics-informed learning for high-dimensional seismic inversion problems (e.g., in 2D/3D domains), not limited to the fully-connected NNs typically used in PINNs. This is motivated by the observation that physics-informed discrete learning schemes [15; 18; 19] are promising for estimating more complex velocity models.
Figure 5: Comparison between the predicted material distribution in the form of Young’s modulus \(E\) and the true velocity model.
## Acknowledgement
P. Ren would like to gratefully acknowledge the Vilas Mujumdar Fellowship at Northeastern University. Y. Liu would like to thank the support from the Fundamental Research Funds for the Central Universities. The work is also supported by the National Natural Science Foundation of China (No. 62276269).
|
2304.04167 | Neural network assisted quantum state and process tomography using
limited data sets | In this study we employ a feed-forward artificial neural network (FFNN)
architecture to perform tomography of quantum states and processes obtained
from noisy experimental data. To evaluate the performance of the FFNN, we use a
heavily reduced data set and show that the density and process matrices of
unknown quantum states and processes can be reconstructed with high fidelity.
We use the FFNN model to tomograph 100 two-qubit and 128 three-qubit states
which were experimentally generated on a nuclear magnetic resonance (NMR)
quantum processor. The FFNN model is further used to characterize different
quantum processes including two-qubit entangling gates, a shaped pulsed field
gradient, intrinsic decoherence processes present in an NMR system, and various
two-qubit noise channels (correlated bit flip, correlated phase flip and a
combined bit and phase flip). The results obtained via the FFNN model are
compared with standard quantum state and process tomography methods and the
computed fidelities demonstrates that for all cases, the FFNN model outperforms
the standard methods for tomography. | Akshay Gaikwad, Omkar Bihani, Arvind, Kavita Dorai | 2023-04-09T05:51:16Z | http://arxiv.org/abs/2304.04167v1 | # Neural network assisted quantum state and process tomography using limited data sets
###### Abstract
In this study we employ a feed-forward artificial neural network (FFNN) architecture to perform tomography of quantum states and processes obtained from noisy experimental data. To evaluate the performance of the FFNN, we use a heavily reduced data set and show that the density and process matrices of unknown quantum states and processes can be reconstructed with high fidelity. We use the FFNN model to tomograph 100 two-qubit and 128 three-qubit states which were experimentally generated on a nuclear magnetic resonance (NMR) quantum processor. The FFNN model is further used to characterize different quantum processes including two-qubit entangling gates, a shaped pulsed field gradient, intrinsic decoherence processes present in an NMR system, and various two-qubit noise channels (correlated bit flip, correlated phase flip and a combined bit and phase flip). The results obtained via the FFNN model are compared with standard quantum state and process tomography methods and the computed fidelities demonstrates that for all cases, the FFNN model outperforms the standard methods for tomography.
Footnote †: These authors contributed equally to this work
## I Introduction
Quantum state tomography (QST) and quantum process tomography (QPT) are essential techniques to characterize unknown quantum states and processes respectively, and to evaluate the quality of quantum devices [1; 2; 3]. Numerous computationally and experimentally efficient QST and QPT algorithms have been designed such as self-guided tomography[4], adaptive tomography [5], compressed sensing based QST and QPT protocols which use heavily reduced data sets [6; 7], selective QPT [8; 9; 10], and direct QST/QPT using weak measurements [11].
Recently, machine learning (ML) techniques have been used to improve the efficiency of tomography protocols [12; 13; 14]. QST was performed on entangled quantum states using a restricted Boltzmann machine based artificial neural network (ANN) model [15] and was experimentally implemented on an optical system [16]. ML based adaptive QST was performed which adapts to experiments and suggests suitable further measurements [17]. QST using an attention based generative network was realized experimentally on an IBMQ quantum computer [18]. ANN enhanced QST was carried out after minimizing state preparation and measurement errors when reconstructing the state on a photonic quantum dataset [19]. A convolutional ANN model was employed to reconstruct quantum states from tomography measurements in the presence of simulated noise [20]. Local measurement-based QST via ANN was experimentally demonstrated on NMR [21]. ML was used to detect the experimental multipartite entanglement structure of NMR entangled states [22]. ANNs were used to perform QST while taking into account measurement imperfections [23] and were trained to uniquely reconstruct a quantum state without requiring any prior information about the state [24]. ANNs were used to reconstruct quantum states encoded in the spatial degrees of freedom of photons with high fidelity [25]. ML methods were used to directly estimate the fidelity of prepared quantum states [26]. ANNs were used to reconstruct quantum states in the presence of various types of noise [27]. Quantum state tomography in intermediate-scale quantum devices was performed using conditional generative adversarial networks [28].
In this study, we employed a Feed Forward Neural Network (FFNN) architecture to perform quantum state as well as process tomography. We trained and tested the model on states/processes generated computationally and then validated it on noisy experimental data generated on an NMR quantum processor. Furthermore, we tested the efficacy of the FFNN model on a heavily reduced data set, where a random fraction of the total data set was used. The FFNN model was able to reconstruct the true quantum states and quantum processes with high fidelity even with this heavily reduced data set.
This paper is organized as follows: Section II briefly describes the basic framework of the FFNN model in the context of QST and QPT; Section II.1 describes the FFNN architecture while Section II.2 details how to construct the FFNN training data set to perform QST and QPT. Sections III and IV contain the results of implementing the FFNN to perform QST and QPT of experimental NMR data, respectively. Section V contains a few concluding remarks.
## II FFNN Based QST and QPT
### The Basic FFNN Architecture
First, we describe the multilayer perceptron model, also referred to as a feed-forward neural network (FFNN), which we employ for the task of characterizing quantum states and processes. An ANN is a mathematical computing model, motivated by the biological nervous system, which consists of adaptive units called neurons that are connected to other neurons via weights. A neuron is activated when its value is greater than a 'threshold value' termed the bias. Figure 1 depicts a schematic of an ANN with \(n\) inputs \(x_{1},x_{2},\cdots,x_{n}\) which are connected to a neuron with weights \(w_{1},w_{2},\cdots,w_{n}\); the weighted sum of these inputs is compared with the bias \(b\) and acted upon by the activation function \(f\), giving the output \(\tilde{y}=f(\sum_{i=1}^{n}w_{i}x_{i}-b)\).
A multilayer FFNN architecture consists of three layers: the input layer, the hidden layer and the output layer. Data is fed into the input layer, which is passed on to the hidden layers and finally from the last hidden layer, it arrives at the output layer. Figure 2 depicts a schematic of a prototypical FFNN model with one input layer, two hidden layers and one output layer, which has been employed (as an illustration) to perform QST of an experimental two-qubit NMR quantum state, using a heavily reduced data set.
The data is divided into two parts: a training dataset, which is used to train the model, a process in which the network parameters (weights and biases) are updated based on the outcomes, and a test dataset, which is used to evaluate the network performance. Consider \(p\) training elements \(\{(\vec{x}^{(1)},\vec{y}^{(1)}),(\vec{x}^{(2)},\vec{y}^{(2)}),\cdots,(\vec{x}^{(p)},\vec{y}^{(p)})\}\) where \(\vec{x}^{(i)}\) is the \(i^{th}\) input and \(\vec{y}^{(i)}\) is the corresponding output. Feeding these inputs to the network produces the outputs \([\vec{\tilde{y}}^{(1)},\vec{\tilde{y}}^{(2)},...,\vec{\tilde{y}}^{(p)}]\). Since the network parameters are initialized randomly, the predicted output is initially not equal to the expected output. Training of the network is achieved by minimizing a mean-squared-error cost function with respect to the network parameters, using a stochastic gradient descent method and the backpropagation algorithm [29]:
\[w_{ij}\to w^{\prime}_{ij}= w_{ij}-\frac{\eta}{p^{\prime}}\sum_{k=1}^{p^{\prime}}\frac{ \partial}{\partial w_{ij}}\mathcal{L}(\vec{x}^{(k)}) \tag{1}\] \[b_{i}\to b^{\prime}_{i}= b_{i}-\frac{\eta}{p^{\prime}}\sum_{k=1}^{p^{\prime}}\frac{ \partial}{\partial b_{i}}\mathcal{L}(\vec{x}^{(k)}) \tag{2}\]
where \(\mathcal{L}(\vec{x}^{(k)})=||\vec{y}^{(k)}-\vec{\tilde{y}}^{(k)}||^{2}\) is the cost function, the sum runs over a randomly chosen mini-batch of \(p^{\prime}\) training inputs, \(\eta\) is the learning rate, and \(w^{\prime}_{ij}\) and \(b^{\prime}_{i}\) are the updated weights and biases, respectively.
### FFNN Training Dataset for QST and QPT
An \(n\)-qubit density operator \(\rho\) can be expressed as a matrix in the product basis by:
\[\rho=\sum_{i_{1}=0}^{3}\sum_{i_{2}=0}^{3}\cdots\sum_{i_{n}=0}^{3}a_{i_{1}i_{2}\ldots i_{n}}\,\sigma_{i_{1}}\otimes\sigma_{i_{2}}\otimes\cdots\otimes\sigma_{i_{n}} \tag{3}\]
where \(a_{00\ldots 0}=1/2^{n}\), \(\sigma_{0}\) denotes the \(2\times 2\) identity matrix and \(\sigma_{1},\sigma_{2},\sigma_{3}\) are the single-qubit Pauli matrices.
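To make the expansion concrete, the sketch below recovers the coefficients of a given density matrix by projecting onto the tensor-product Pauli basis, using the orthogonality relation \(\mathrm{Tr}(\sigma_{i}\sigma_{j})=2\delta_{ij}\) on each qubit; the function name and the single-qubit test state are illustrative choices of ours.

```python
import numpy as np
from itertools import product

# Single-qubit basis {sigma_0 = I, sigma_1 = X, sigma_2 = Y, sigma_3 = Z}
paulis = [np.eye(2),
          np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]])]

def pauli_coefficients(rho, n):
    """Coefficients of Eq. (3), a = Tr(rho P) / 2^n, for an n-qubit
    density matrix rho in the tensor-product Pauli basis."""
    coeffs = {}
    for idx in product(range(4), repeat=n):
        P = paulis[idx[0]]
        for i in idx[1:]:
            P = np.kron(P, paulis[i])
        coeffs[idx] = np.trace(rho @ P).real / 2**n
    return coeffs

rho = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0| = (I + Z)/2
print(pauli_coefficients(rho, n=1))        # a_0 = a_3 = 0.5
```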
The aim of QST is to reconstruct \(\rho\) from a set of tomographic measurements. The standard procedure for QST involves solving linear system of equations of the form [30]:
\[\mathcal{A}\mathcal{X}=\mathcal{B} \tag{4}\]
where \(\mathcal{A}\) is a fixed coefficient matrix and only depends on the chosen measurement settings, \(\mathcal{X}\) is a column matrix which contains elements of the density matrix which needs to be reconstructed, and the input vector \(\mathcal{B}\) contains the actual experimental data.
The FFNN model is trained on a dataset containing randomly generated pure and mixed states. To generate these ensembles, consider a normal distribution \(\mathcal{N}(\mu=0,\sigma^{2}=1)\) with zero mean and unit variance. An \(n\)-qubit random pure state in the computational basis is represented by a \(2^{n}\)-dimensional column vector \(C\) whose \(i\)th entry \(c_{i}\) is generated from the distribution as follows:
\[c_{i}=\frac{1}{N}(\mathfrak{d}_{i}+i\,\mathfrak{e}_{i}) \tag{5}\]
where \(\mathfrak{d}_{i},\mathfrak{e}_{i}\) are randomly chosen from the distribution \(\mathcal{N}\) and \(N\) is a normalization factor ensuring that \(C\) represents a unit vector.
For mixed states:
\[R=\mathcal{D}+i\,\mathcal{E} \tag{6}\]
where \(R\) is a \(2^{n}\times 2^{n}\) matrix whose elements \(\mathcal{D}_{ij}\) and \(\mathcal{E}_{ij}\) are randomly sampled from the normal distribution \(\mathcal{N}\). Using the \(R\) matrix, the corresponding mixed state density matrix \(\rho_{\text{mix}}\) is constructed as \(\rho_{\text{mix}}=\frac{RR^{\dagger}}{\text{Tr}(RR^{\dagger})}\).
Figure 1: (Color online) Basic unit of an ANN model, where the \(x_{i}\) are the inputs, \(w_{i}\) are the weights, \(b\) is the bias, \(\sum\) is the summation function, \(f\) is an activation function and \(\tilde{y}\) is the output of the ANN.
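As a concrete illustration, the following NumPy sketch generates random pure and mixed states following Eqs. (5) and (6); the function names and the fixed seed are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pure_state(n):
    """Random n-qubit pure state, Eq. (5): real and imaginary parts of
    each amplitude drawn from N(0, 1), then normalized."""
    c = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
    return c / np.linalg.norm(c)

def random_mixed_state(n):
    """Random n-qubit mixed state, Eq. (6): rho = R R^dag / Tr(R R^dag)."""
    R = rng.normal(size=(2**n, 2**n)) + 1j * rng.normal(size=(2**n, 2**n))
    rho = R @ R.conj().T
    return rho / np.trace(rho)

rho = random_mixed_state(2)
print(np.trace(rho).real)                          # unit trace
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))   # positive semi-definite
```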
The FFNN is trained on both pure and mixed states generated in this manner. After generating the density matrices \(\mathcal{X}_{i}\), the corresponding \(\mathcal{B}_{i}\) are computed using Eq. (4). The training elements \(\{\mathcal{B}_{i},\mathcal{X}_{i}\}\) are then used to train the FFNN model shown in Figure 2, where \(\mathcal{B}_{i}\) are the inputs to the FFNN and \(\mathcal{X}_{i}\) are the corresponding labeled outputs.
QPT of a given quantum process is typically performed using the Kraus operator representation, wherein for a fixed operator basis set \(\{E_{i}\}\), a quantum map \(\Lambda\) acting on an input state \(\rho_{\text{in}}\) can be written as [31]:
\[\Lambda(\rho_{in})=\sum_{m,n}\chi_{mn}E_{m}\rho_{in}E_{n}^{\dagger} \tag{7}\]
where \(\chi_{mn}\) are the elements of the process matrix \(\chi\) characterizing the quantum map \(\Lambda\). The \(\chi\) matrix can be experimentally determined by preparing a complete set of linearly independent input states, estimating the output states after the action of the map, and finally computing the elements \(\chi_{mn}\) from these experimentally estimated output states via linear equations of the form [32]:
\[\beta\vec{\chi}=\vec{\lambda} \tag{8}\]
where \(\beta\) is a coefficient matrix, \(\vec{\chi}\) contains the elements \(\{\chi_{mn}\}\) which are to be determined and \(\vec{\lambda}\) is a vector representing the experimental data.
The training data set used by the FFNN model to perform QPT is constructed by randomly generating a set of unitary operators. The generated unitary operators act upon the input states \(\rho_{in}=\{|0\rangle,|1\rangle,\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle),\frac{ 1}{\sqrt{2}}(|0\rangle+i|1\rangle)\}^{\otimes n}\) to give \(\rho_{out}=U\rho_{in}U^{\dagger}\). All the output states \(\rho_{out}\) are stacked to form \(\vec{\lambda}\). Finally, \(\vec{\chi}\) is computed using Eq. (8). The training elements \(\{\vec{\lambda}_{i},\vec{\chi}_{i}\}\) are then used to train the FFNN model, where \(\vec{\lambda}_{i}\) acts as the input to the FFNN and \(\vec{\chi}_{i}\) is the corresponding labeled output.
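A minimal sketch of how one such training pair can be started is shown below for a single qubit; the QR-based recipe for drawing a random unitary is a standard choice that we adopt for illustration, since the paper does not specify how its unitaries are sampled. Computing \(\vec{\chi}\) from the stacked outputs via Eq. (8) would additionally require the coefficient matrix \(\beta\), which we omit here.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(d):
    """Random unitary from the QR decomposition of a complex Gaussian
    matrix (a standard recipe, assumed here for illustration)."""
    Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, R = np.linalg.qr(Z)
    phases = np.diagonal(R) / np.abs(np.diagonal(R))
    return Q * phases           # multiply column j of Q by phases[j]

# Linearly independent single-qubit input states |0>, |1>, |+>, |+i>
kets = [np.array([1, 0]), np.array([0, 1]),
        np.array([1, 1]) / np.sqrt(2), np.array([1, 1j]) / np.sqrt(2)]
rho_in = [np.outer(k, k.conj()) for k in kets]

U = random_unitary(2)
rho_out = [U @ r @ U.conj().T for r in rho_in]  # stacked to form lambda
```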
To perform tomography of a given state or process, one has to perform a series of experiments; the input vector is then constructed with entries given by the outputs/readouts of these experiments. In the case of standard QST or QPT, a tomographically complete set of experiments needs to be performed, and the input vector corresponding to this tomographically complete set of experiments is referred to as the full data set.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Dataset Size & \multicolumn{3}{c}{Fidelity} \\ \hline & Epoch(50) & Epoch(100) & Epoch(150) \\ \hline
500 & 0.6944 & 0.8507 & 0.8716 \\
1000 & 0.8793 & 0.8994 & 0.9025 \\
5000 & 0.9231 & 0.9262 & 0.9285 \\
10000 & 0.9278 & 0.9321 & 0.9332 \\
20000 & 0.9333 & 0.9362 & 0.9393 \\
80000 & 0.9413 & 0.9433 & 0.9432 \\ \hline \end{tabular}
\end{table}
Table 2: Average state fidelities obtained after training the FFNN model to perform QST on 3000 test three-qubit states (\(M_{\text{data}}=120\)) for training datasets of different sizes, with the number of epochs varying from 50 to 150 for each dataset (an epoch refers to one iteration of the complete training dataset).
\begin{table}
\begin{tabular}{c|c|c|c} \hline Dataset Size & \multicolumn{3}{c}{Fidelity} \\ \hline & Epoch(50) & Epoch(100) & Epoch(150) \\ \hline
500 & 0.8290 & 0.9176 & 0.9224 \\
1000 & 0.9244 & 0.9287 & 0.9298 \\
5000 & 0.9344 & 0.9379 & 0.9389 \\
10000 & 0.9378 & 0.9390 & 0.9400 \\
20000 & 0.9394 & 0.9414 & 0.9409 \\
80000 & 0.9426 & 0.9429 & 0.9422 \\ \hline \end{tabular}
\end{table}
Table 1: Average state fidelities obtained after training the FFNN model to perform QST on 3000 test two-qubit states (\(M_{\text{data}}=20\)) for training datasets of different sizes, with the number of epochs varying from 50 to 150 for each dataset (an epoch refers to one iteration of the complete training dataset).
Figure 2: (Color online) Flowchart illustrating the FFNN model used to perform QST on two-qubit quantum states generated on an NMR quantum; on the left, \(\rho_{in}\) represents the state which is to be tomographed; IY denotes a tomographic operation, which is followed by signal detection, the set of depicted NMR spectra are those obtained after the tomographic measurement. The FFNN with two hidden layers is represented next, which then uses a reduced data set to reconstruct the final experimental tomographs represented on the right.
To perform QST and QPT via FFNN on a heavily reduced data set of size \(m\), a reduced-size input vector \(\vec{b}_{m}\) with fewer elements (and a correspondingly reduced \(\vec{\lambda}_{m}\)) is constructed by randomly selecting \(m\) elements from the input vector while the remaining elements are set to 0 (zero padding); these reduced input vectors, together with the corresponding labeled output vectors, are used to train the FFNN.
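In code, constructing such a reduced input vector amounts to a random mask with zero padding; a minimal sketch (with hypothetical variable names) is:

```python
import numpy as np

rng = np.random.default_rng(2)

def reduce_input(b_full, m):
    """Keep m randomly chosen entries of the full input vector and
    zero-pad the rest."""
    b_reduced = np.zeros_like(b_full)
    keep = rng.choice(len(b_full), size=m, replace=False)
    b_reduced[keep] = b_full[keep]
    return b_reduced

b = rng.normal(size=33)        # e.g. the 33-element two-qubit QST vector
print(reduce_input(b, m=8))
```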
The FFNN was trained and implemented using the Keras Python library [33] with the TensorFlow backend, on an Intel Xeon processor with 48 GB RAM and a CPU base speed of 3.90 GHz. To perform QST and QPT, the LeakyReLU (\(\alpha=0.5\)) activation function was used for both the input and the hidden layers of the FFNN:
\[\text{LeakyReLU}(x)=\begin{cases}x,&x\geq 0\\ \alpha x,&x<0\end{cases} \tag{9}\]
A linear activation function was used for the output layer. A cosine similarity loss function, \(\mathcal{L}=\arccos\left(\frac{\vec{y}\cdot\vec{\tilde{y}}}{||\vec{y}||\,||\vec{\tilde{y}}||}\right)\), was used for validation, and the _adagrad_ optimizer with learning rate \(\eta=0.5\) was used to train the network. The _adagrad_ optimizer adapts the learning rate relative to how frequently a parameter gets updated during training.
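A sketch of this setup in Keras is shown below, using the 100-100-50 hidden layers employed later (Section III) for two-qubit QST; the 33-element input matches the two-qubit QST vector \(\vec{b}\), while the output dimension (here 32, for the real and imaginary parts of the density-matrix elements) is our assumption, since the text does not specify the output encoding. Keras' built-in CosineSimilarity loss is used as a close stand-in for the arccos-of-cosine-similarity loss described above.

```python
import tensorflow as tf

# Sketch of the two-qubit QST network (hidden-layer sizes from Sec. III;
# the output encoding of the density matrix is an assumption).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(33,)),
    tf.keras.layers.Dense(100),
    tf.keras.layers.LeakyReLU(alpha=0.5),
    tf.keras.layers.Dense(100),
    tf.keras.layers.LeakyReLU(alpha=0.5),
    tf.keras.layers.Dense(50),
    tf.keras.layers.LeakyReLU(alpha=0.5),
    tf.keras.layers.Dense(32, activation="linear"),
])

model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.5),
              loss=tf.keras.losses.CosineSimilarity())
```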
The FFNN was used to perform QST on 3000 two-qubit and three-qubit test quantum states and to perform QPT on 3000 two-qubit test quantum processes, for training datasets of different sizes. The number of epochs was varied from 50 to 150 for each dataset, where an epoch refers to one iteration over the training dataset during the FFNN training process. The computed average fidelities for the 3000 two-qubit and three-qubit test quantum states and the 3000 two-qubit test quantum processes are shown in Tables 1, 2 and 3, respectively; \(M_{\text{data}}\) refers to the reduced size of the data set. After comparing the effects of the training data size, the value of \(M_{\text{data}}\) and the number of epochs, the maximum size of the training data set was chosen to be 80000 and the maximum number of epochs was set to 150 for performing QST and QPT. After 150 training epochs the validation loss function remained constant.
## III FFNN based QST on experimental data
We used an NMR quantum processor as the experimental testbed to generate data for the FFNN model, and applied the FFNN to perform QST of two-qubit and three-qubit quantum states using a heavily reduced set of noisy experimental data. The performance of the FFNN was evaluated by computing the average state or average process fidelity. The state fidelity is given by [34]:
\[\mathcal{F}=\frac{\left|\operatorname{Tr}\left[\rho_{\text{FFNN}}\rho_{\text {STD}}^{\dagger}\right]\right|}{\sqrt{\operatorname{Tr}\left[\rho_{\text{FFNN }}^{\dagger}\rho_{\text{FFNN}}\right]\operatorname{Tr}\left[\rho_{\text{STD }}^{\dagger}\rho_{\text{STD}}\right]}} \tag{10}\]
where \(\rho_{\text{FFNN}}\) and \(\rho_{\text{STD}}\) are the density matrices obtained via the FFNN and the standard linear inversion method, respectively. The process fidelity can be computed by replacing the \(\rho\) in Eq. (10) by \(\chi\), where \(\chi_{\text{FFNN}}\) and \(\chi_{\text{STD}}\) are process matrices obtained via the FFNN and standard linear inversion method, respectively.
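A direct NumPy implementation of the overlap fidelity in Eq. (10), applicable to both density and process matrices, could read as follows; the function name is ours.

```python
import numpy as np

def fidelity(a, b):
    """Overlap fidelity of Eq. (10); a and b may be density matrices
    (rho_FFNN, rho_STD) or process matrices (chi_FFNN, chi_STD)."""
    num = np.abs(np.trace(a @ b.conj().T))
    den = np.sqrt(np.trace(a.conj().T @ a).real *
                  np.trace(b.conj().T @ b).real)
    return num / den

rho = np.diag([0.5, 0.5])
print(fidelity(rho, rho))      # identical states give fidelity 1
```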
QST of a two-qubit NMR system is typically performed using a set of four unitary rotations: \(\{II,IX,IY,XX\}\), where \(I\) denotes the identity operation and \(X\) (\(Y\)) denotes a \(90^{\circ}\)\(x\) (\(y\)) rotation on the specified qubit. The input vector \(\vec{b}\) (Eq. (4)) is constructed by applying the tomographic pulses followed by measurement, wherein the signal recorded in the time domain is Fourier transformed to obtain the NMR spectrum. For two qubits, there are four peaks in the NMR spectrum and each measurement yields eight elements of the vector \(\vec{b}\); the dimension of the input vector \(\vec{b}\) is \(33\times 1\) (32 from the tomographic pulses and 1 from the unit trace condition).
Figure 3: (Color online) Fidelity (\(\mathcal{\bar{F}}\)) between the FFNN model and the standard linear inversion method vs size of the heavily reduced dataset (\(M_{data}\)), for QST performed on (a) 100 two-qubit states and (b) 128 three-qubit states respectively. The states are numbered on the \(x\)-axis and the color coded bar on the right represents the value of the fidelity.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Dataset Size & \multicolumn{3}{c}{Fidelity} \\ \hline & Epoch(50) & Epoch(100) & Epoch(150) \\ \hline
500 & 0.4904 & 0.5421 & 0.5512 \\
2000 & 0.6872 & 0.7090 & 0.7202 \\
15000 & 0.7947 & 0.8128 & 0.8218 \\
20000 & 0.8047 & 0.8203 & 0.8295 \\
50000 & 0.8305 & 0.8482 & 0.8598 \\
80000 & 0.8441 & 0.8617 & 0.8691 \\ \hline \end{tabular}
\end{table}
Table 3: Average process fidelities obtained after training the FFNN model to perform QPT on 3000 test two-qubit processes (\(M_{\text{data}}=200\)) for training datasets of different sizes, with the number of epochs varying from 50 to 150 for each dataset (an epoch refers to one iteration of the complete training dataset).
Similarly, QST of a three-qubit NMR system is typically performed using a set of seven unitary rotations: \(\{III,IIY,IYY,YII,XYX,XXY,XXX\}\). Each measurement produces 12 resonance peaks in the NMR spectrum (4 per qubit); the dimension of the input vector \(\vec{b}\) is \(169\times 1\). To evaluate the performance of the FFNN model in achieving full QST of two-qubit and three-qubit states, we experimentally prepared 100 two-qubit states and 128 three-qubit states using different preparation settings and calculated the average fidelity between the density matrix predicted via the FFNN model and that obtained using the standard linear inversion method for QST. We also performed FFNN-based QST of maximally entangled two-qubit Bell states and of three-qubit GHZ and biseparable states using a heavily reduced data set.
The FFNN model was trained on 80,000 states to perform QST. To perform FFNN-based QST on two- and three-qubit states, we used three hidden layers containing 100, 100 and 50 neurons and 300, 200 and 100 neurons, respectively. The performance of the trained FFNN is shown in Figure 3. The fidelities between the density matrices obtained via the FFNN and via the standard linear inversion method, for 100 experimentally generated two-qubit states and for 128 experimentally generated three-qubit states, are shown in Figures 3(a) and (b), respectively. The reduced input vector size \(M_{\text{data}}\) is plotted on the \(y\)-axis and the quantum states are numbered along the \(x\)-axis.
The performance of the FFNN for QST is evaluated in Figure 4 by computing the average state fidelity \(\bar{\mathcal{F}}\) over a set of test/experimental states. The reduced size \(M_{data}\) of the input vector fed into the FFNN is plotted along the \(x\)-axis. The average fidelity \(\bar{\mathcal{F}}\) and the standard deviation \(\sigma\) in the average state fidelity are plotted along the \(y\)-axis in (a) and (c) for two-qubit states and in (b) and (d) for three-qubit states, respectively. For a given value of \(M_{data}\), the average fidelity \(\bar{\mathcal{F}}_{i}=\frac{1}{50}\sum_{n=1}^{50}\mathcal{F}_{n}\) of a given quantum state \(\rho_{i}\) predicted via the FFNN is calculated by randomly selecting \(M_{data}\) elements from the corresponding full input vector \(\vec{b}\) 50 times. For the test data sets (blue circles), the performance of the FFNN is evaluated by computing the average fidelity \(\bar{\mathcal{F}}=\frac{1}{3000}\sum_{n=1}^{3000}\bar{\mathcal{F}}_{n}\) over 3000 two-qubit and three-qubit states. For the experimental data sets (red triangles), the performance of the FFNN is evaluated by computing the average fidelity \(\bar{\mathcal{F}}\) over 100 two-qubit and 128 three-qubit states, respectively. The standard deviation \(\sigma\) in the average state fidelity \(\bar{\mathcal{F}}\) is:
\[\sigma=\sqrt{\frac{\sum_{i=1}^{N}(\bar{\mathcal{F}}_{i}-\bar{\mathcal{F}})^{2 }}{N-1}} \tag{11}\]
As inferred from Figure 4, the FFNN model is able to predict an unknown two-qubit test state with average fidelity \(\bar{\mathcal{F}}\geq 0.8392\pm 0.084\) for a reduced data set of size \(M_{data}\geq 8\), and is able to predict an unknown three-qubit test state with average fidelity \(\bar{\mathcal{F}}\geq 0.8630\pm 0.0407\) for a reduced data set of size \(M_{data}\geq 60\). Similarly, for experimental quantum states, the FFNN model is able to predict two-qubit states with an average fidelity \(\bar{\mathcal{F}}\geq 0.8466\pm 0.1450\) for a reduced data set of size \(M_{data}\geq 12\), while for three-qubit experimental states, the FFNN is able to predict the unknown quantum state with average fidelity \(\bar{\mathcal{F}}\geq 0.8327\pm 0.0716\) for a reduced data set of size \(M_{data}\geq 60\).
Figure 4: (Color online) Average fidelity (\(\bar{\mathcal{F}}\)) and standard deviation \(\Delta\bar{\mathcal{F}}\) plotted as a function of the size of the heavily reduced dataset (\(M_{data}\)) computed for FFNN based QST on two qubits ((a) and (c)) and on three qubits ((b) and (d)), respectively. The average fidelity for the test dataset (blue dots) is calculated for 3000 states while for experimental data-set the average fidelity is calculated by randomly choosing the reduced dataset \(M_{data}\) elements from the full set for 100 2-qubit states and 128 3-qubit states, and then repeating the procedure 50 times.
Figure 5: (Color online) Fidelity (\(\bar{\mathcal{F}}\)) versus size of the heavily reduced dataset (\(M_{\text{data}}\)) computed for FFNN based QST of (a) two-qubit Bell states, where the different bars correspond to four different Bell states, and (b) three-qubit GHZ (black and red cross-hatched bars) and Biseparable states (gray and horizontal blue bars).
When the full input vector \(\vec{b}\) is considered, the average fidelity calculated over 3000 two- and three-qubit test states turns out to be \(\bar{\mathcal{F}}=0.9993\) and \(\bar{\mathcal{F}}=0.9989\), respectively. The average fidelity calculated over 100 two-qubit and 128 three-qubit experimental states turns out to be \(\bar{\mathcal{F}}=0.9983\) and \(\bar{\mathcal{F}}=0.9833\), respectively, for the full input data set.
The FFNN model was applied to perform QST of two-qubit maximally entangled Bell states and three-qubit GHZ and biseparable states. Figure 5 depicts the experimental fidelities \(\mathcal{F}(\rho_{\text{FFNN}},\rho_{\text{STD}})\) of two-qubit Bell states and three-qubit GHZ and biseparable states calculated between the density matrices predicted via FFNN and those obtained via standard linear inversion QST for a reduced data set of size \(M_{\text{data}}\). The black, crosshatched red, gray and horizontal blue bars in Figure 5(a) correspond to the Bell states \(|B_{1}\rangle=(|00\rangle+|11\rangle)/\sqrt{2}\), \(|B_{2}\rangle=(|01\rangle-|10\rangle)/\sqrt{2}\), \(|B_{3}\rangle=(|00\rangle-|11\rangle)/\sqrt{2}\) and \(|B_{4}\rangle=(|01\rangle+|10\rangle)/\sqrt{2}\), respectively. The black and red cross-hatched bars in Figure 5(b) correspond to three-qubit GHZ states \(|\psi_{1}\rangle=(|000\rangle+|111\rangle)/\sqrt{2}\) and \(|\psi_{2}\rangle=(|010\rangle+|101\rangle)/\sqrt{2}\) respectively, while the gray and horizontal blue bars correspond to three-qubit biseparable states \(|\psi_{3}\rangle=(|000\rangle+|001\rangle+|110\rangle+|111\rangle)/2\) and \(|\psi_{4}\rangle=(|000\rangle+|010\rangle+|101\rangle+|111\rangle)/2\), respectively. The bar plots in Figure 5 clearly demonstrate that the FFNN model is able to predict the two- and three-qubit entangled states with very high fidelity for a reduced data set.
We note here in passing that the size of the heavily reduced dataset \(M_{\text{data}}\) is equivalent to the number of experimental readouts which are used to perform QST (QPT), while the standard QST (QPT) methods based on linear inversion always use the full dataset. Hence, the highest value of \(M_{\text{data}}\) is the same as the size of the full dataset which is 32, 168 and 256 for two-qubit and three-qubit QST and for two-qubit QPT, respectively.
## IV FFNN based QPT on experimental data
We used the FFNN model to perform two-qubit QPT for three different experimental NMR data sets: (i) unitary quantum gates (ii) non-unitary processes such as natural NMR decoherence processes and pulsed field gradient and (iii) experimentally simulated correlated bit flip, correlated phase flip and correlated bit+phase flip noise channels using the duality algorithm on an NMR quantum processor.
### FFNN Reconstruction of Two-Qubit Unitary and Non-Unitary Processes
The FFNN model was trained on 80,000 synthesized two-qubit quantum processes using a heavily reduced data set, with three hidden layers containing 600, 400 and 300 neurons, respectively. The performance of the trained FFNN was evaluated using 3000 test and 10 experimentally implemented quantum processes on NMR.
The FFNN results for QPT of various two-qubit experimental quantum processes are shown in Figure 6. The quality of the FFNN is evaluated by means of the average process fidelity \(\bar{\mathcal{F}}\), between the process matrix predicted by the FFNN (\(\chi_{\text{FFNN}}\)) using a reduced data set of size \(M_{data}\) and the process matrix obtained via the standard QPT method (\(\chi_{\text{STD}}\)) using a full data set.
Figure 6(a) depicts the performance of the FFNN evaluated on 3000 two-qubit test quantum processes (blue circles), where the \(y\)-axis denotes the average fidelity \(\bar{\mathcal{F}}=\frac{1}{3000}\sum_{n=1}^{3000}\bar{\mathcal{F}}_{n}\), with \(\bar{\mathcal{F}}_{n}=\frac{1}{300}\sum_{i=1}^{300}\mathcal{F}_{i}\) being the average fidelity of the \(n\)th test quantum process, calculated by randomly constructing an input vector of the given size and repeating the process 300 times. Similarly, the red triangles and pink stars correspond to four unitary and six non-unitary quantum processes respectively, obtained from experimental data. The plots given in Figure 6(a) clearly show that the FFNN model is able to predict unitary as well as non-unitary quantum processes from a noisy experimental reduced data set with good accuracy. For instance, for \(M_{\text{data}}=160\), the FFNN is able to predict the test process with \(\bar{\mathcal{F}}=0.8411\pm 0.0284\), whereas the experimental unitary and non-unitary processes are obtained with \(\bar{\mathcal{F}}=0.8447\pm 0.038\) and \(0.8187\pm 0.0493\), respectively. Hence, the value of \(M_{data}\) can be chosen accordingly, depending on the desired accuracy and precision. The standard deviation in the average fidelity \(\bar{\mathcal{F}}\) is calculated using Eq. (11) over 3000 quantum processes and is depicted in Figure 6(b).
Figure 6: (Color online) (a) Average process fidelity (\(\bar{\mathcal{F}}\)) and (b) Standard deviation \(\Delta\bar{\mathcal{F}}\) obtained for QPT of two-qubit processes using the FFNN model versus size of the dataset (\(M_{data}\)). For the test dataset (blue dots) the average fidelity is calculated for 3000 processes while for experimental unitary processes (red triangles) and non-unitary processes (magenta stars) the average fidelity is calculated by randomly choosing the reduced dataset \(M_{data}\) elements from the full set for four unitary quantum gates and six non-unitary processes, and then repeating the procedure 300 times.
From Figure 6(b), it can be observed that the FFNN model performs better for the QPT of unitary processes as compared to non-unitary processes, since the corresponding process matrices are more sparse.
The experimental fidelities obtained via the FFNN for individual quantum processes are given in Figure 8, where the average fidelity is calculated for a set of quantum processes for a given value of the reduced dataset size \(M_{data}\). For the test dataset, \(\bar{\mathcal{F}}\) is calculated over 3000 test processes, whereas for the experimental data set, \(\bar{\mathcal{F}}\) is computed over four unitary and six non-unitary processes. For the unitary quantum gates Identity, CX180, CNOT and CY90 (corresponding to a 'no operation' gate, a bit flip gate, a controlled rotation about the \(x\)-axis by \(180^{\circ}\) and a controlled rotation about the \(y\)-axis by \(90^{\circ}\), respectively), the FFNN is able to predict the corresponding process matrix with average fidelities of \(\bar{\mathcal{F}}=0.8767\pm 0.0356\), \(0.8216\pm 0.0463\), \(0.8314\pm 0.0387\) and \(0.8489\pm 0.0315\) respectively, using a reduced data set of size 160. The six non-unitary processes to be tomographed include free evolution for two different times, \(D1=0.05\) s and \(D2=0.5\) s, a magnetic field gradient pulse (MFGP), and three error channels, namely a correlated bit flip (CBF) channel, a correlated phase flip (CPF) channel, and a correlated bit-phase flip (CBPF) channel. Several noise channels act simultaneously on all the qubits during the free evolution times \(D1\) and \(D2\), such as the phase damping channel (corresponding to the T\({}_{2}\) NMR relaxation process) and the amplitude damping channel (corresponding to the T\({}_{1}\) NMR relaxation process). The MFGP is typically implemented using gradient coils in the NMR hardware, with the magnetic field gradient along the \(z\)-axis. The MFGP to be tomographed is a sine-shaped pulse of duration \(1000\,\mu\)s, divided into 100 time intervals, with an applied gradient strength of \(15\%\). For the intrinsic non-unitary quantum processes \(D1\), \(D2\) and the MFGP, the FFNN is able to predict the corresponding process matrix with average fidelities of \(\bar{\mathcal{F}}=0.8373\pm 0.0381\), \(0.7607\pm 0.0690\) and \(0.7858\pm 0.0703\) respectively, using a reduced data set of size 160. It is evident from the computed fidelity values that the FFNN performs better when the process matrix is sparse.
Although our main goal is to prove that the FFNN is able to reconstruct quantum states and processes with high fidelity even for heavily reduced datasets, we also wanted to verify the efficacy of the network when applied to a complete data set. The values of the process fidelity obtained via the FFNN for the full data set are shown in Table 4, where it is clearly evident that the FFNN is able to predict the underlying quantum process with very high fidelity, and works accurately even for non-unitary quantum processes. The somewhat lower fidelity of the D2 process as compared to the other quantum processes can be attributed to the corresponding process matrix being less sparse.
### FFNN Reconstruction of Correlated Noise Channels
The duality simulation algorithm (DSA) can be used to simulate fully correlated two-qubit noise channels, namely the CBF, CPF and CBPF channels [35]. The FFNN model is then employed to fully characterize these channels. DSA allows us to simulate the arbitrary dynamics of an open quantum system in a single experiment where the ancilla system has a dimension equal to the total number of Kraus operators characterizing the given quantum channel. An arbitrary quantum channel having \(d\) Kraus operators can be simulated via DSA using unitary operations \(V\), \(W\), and the control operation \(U_{c}=\sum_{i=0}^{d-1}|i\rangle\langle i|\otimes U_{i}\) such that the following condition is satisfied:
\[E_{k}=\sum_{i=0}^{d-1}W_{ki}V_{i0}U_{i}\quad(k=0,1,2,...,d-1) \tag{12}\]
where \(E_{k}\) is the Kraus operator, and \(V_{i0}\) and \(W_{ki}\) are the elements of \(V\) and \(W\), respectively. The quantum circuit for DSA is given in Reference [35], where the initial state of the system is encoded as \(|0\rangle_{a}\otimes|\psi\rangle_{s}\) which is then acted upon by \(V\otimes I\) followed by \(U_{c}\) and \(W\otimes I\), and finally a measurement is performed on the system qubits.
For this study, the two-qubit CBF, CPF and CBPF channels are characterized using two Kraus operators as:
\[\text{CBF}:E_{0} =\sqrt{1-p}I^{\otimes 2},\quad E_{1}=\sqrt{p}\sigma_{x}^{\otimes 2}\] \[\text{CPF}:E_{0} =\sqrt{1-p}I^{\otimes 2},\quad E_{1}=\sqrt{p}\sigma_{z}^{\otimes 2}\] \[\text{CBPF}:E_{0} =\sqrt{1-p}I^{\otimes 2},\quad E_{1}=\sqrt{p}\sigma_{y}^{\otimes 2} \tag{13}\]
where \(p\) is the noise strength, which can also be interpreted as the probability with which the state of the system is affected by the given noise channel. For \(p=0\) the state of the system is unaffected, and for \(p=1\) the state of the system is maximally affected by the given noise channel. Since all three noise channels considered in this study have only two Kraus operators, they can be simulated using a single ancilla qubit. Hence, for all three noise channels, one can set \(V=\left(\begin{array}{cc}\sqrt{1-p}&-\sqrt{p}\\ \sqrt{p}&\sqrt{1-p}\end{array}\right)\), \(W=I\), and \(U_{0}=I\otimes I\). The different \(U_{1}\) for the CBF, CPF and CBPF channels are set to \(\sigma_{x}\otimes\sigma_{x}\), \(\sigma_{z}\otimes\sigma_{z}\) and \(\sigma_{y}\otimes\sigma_{y}\) respectively, such that the condition given in Eq. (12) is satisfied. Note that \(V\) can be interpreted as a rotation about the \(y\)-axis by an angle \(\theta\) such that \(p=\sin^{2}{(\frac{\theta}{2})}\).
\begin{table}
\begin{tabular}{|l|l||l|l|} \hline \hline Unitary & \(\mathcal{F}\) & Non-Unitary & \(\mathcal{F}\) \\ Process & & Process & \\ \hline Test & 0.9997 & D1 & 0.9987 \\ Identity & 0.9943 & D2 & 0.9635 \\ CNOT & 0.9996 & Grad & 0.9917 \\ CX180 & 0.9996 & CBF & 0.9943 \\ CY90 & 0.9996 & CPF & 0.9996 \\ & & CBPF & 0.9996 \\ \hline \end{tabular}
\end{table}
Table 4: Experimental fidelities \(\mathcal{F}\) computed between \(\chi_{\text{FFNN}}\), the process matrix predicted via FFNN using a full data set, and \(\chi_{\text{STD}}\), the process matrix obtained via the standard QPT method.
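As an illustration, the following sketch applies the two Kraus operators of Eq. (13) directly to a two-qubit state; the Bell-state input and the noise strength \(p=0.3\) are arbitrary test values of ours.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

def correlated_channel(rho, p, sigma):
    """Fully correlated two-qubit channel of Eq. (13):
    E0 = sqrt(1-p) I x I, E1 = sqrt(p) sigma x sigma."""
    E0 = np.sqrt(1 - p) * np.kron(I2, I2)
    E1 = np.sqrt(p) * np.kron(sigma, sigma)
    return E0 @ rho @ E0.conj().T + E1 @ rho @ E1.conj().T

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell)
# sigma = sx gives the CBF channel; sz gives CPF; sy gives CBPF
print(np.trace(correlated_channel(rho, p=0.3, sigma=sx)).real)  # 1.0
```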
The generalized quantum circuit using DSA to simulate all three error channels is given in Figure 7. For the CBF channel, \(U_{c}\) turns out to be a Control-NOT-NOT gate, where the value of \(\theta\) (Figure 7) is zero. For the CPF and the CBPF channels, the values of \(\theta,\phi\) (the angle and axis of rotation) are \((\frac{\pi}{2},y)\) and \((\frac{\pi}{2},z)\), respectively. The output from the tomographic measurements on the system qubits forms the column vector \(\overrightarrow{\lambda}\). For a given value of \(p\), the full vector \(\overrightarrow{\lambda}\) can be constructed by preparing the system qubits in a complete set of linearly independent input states.
As can be seen from Figure 8, the average fidelity \(\bar{\mathcal{F}}=0.8738\pm 0.0366,0.8272\pm 0.0403\) and \(0.8273\pm 0.0416\) for the experimentally simulated noise channels CBF, CPF and CBPF respectively, using a reduced data set of size 160. Since all three correlated noise channels are characterized by only two Kraus operators, the corresponding process matrices turn out to be sufficiently sparse (with only two non-zero elements in the process matrix). The FFNN can hence be used to accurately tomograph such noise channels with arbitrary noise strength using a heavily reduced dataset.
## V Conclusions
Much recent research has focused on training artificial neural networks to perform various quantum information processing tasks, including tomography, entanglement characterization and quantum gate optimization. We designed and applied an FFNN to perform QST and QPT on experimental NMR data, in order to reconstruct the density and process matrices which characterize the true quantum state and process, respectively. The FFNN is able to predict the true quantum state and process with very high fidelity, and performs in an exemplary fashion even when the experimental data set is heavily reduced. Compressed sensing is another method which uses reduced data sets to perform tomography of quantum states and processes. However, this method requires prior knowledge, such as the system noise, and also requires that the desired state (process) be sufficiently sparse in the basis in which it is to be tomographed. The FFNN, on the other hand, does not need any such prior knowledge and works well for all types of quantum states and processes. Moreover, working with a heavily reduced data set has the benefit of substantially reducing experimental complexity, since the number of experiments required for tomographic completeness grows exponentially with system size. One can perform very few experiments and feed this minimal experimental dataset as input to the FFNN, which can then reconstruct the true density or process matrix. Our results hence demonstrate that FFNN architectures are a promising method for performing QST and QPT of large qubit registers and an attractive alternative to standard methods, since they require substantially fewer resources.
Figure 7: Quantum circuit to simulate the action of a correlated bit flip, a correlated phase flip and a correlated bit+phase flip noise channel. \(|\psi\rangle_{s}\) are a set of linearly independent two-qubit input states, \(|0\rangle_{a}\) denotes the state of the ancilla, \(V\) is a single-qubit rotation gate and \(U_{c}\) denotes a set of control operations with varying values of \((\theta,\phi)\), depending on the noise channel being simulated.
Figure 8: (Color online) Process fidelity (\(\bar{\mathcal{F}}\)) between FFNN model and the standard linear inversion method vs size of the heavily reduced dataset (\(M_{data}\)), for different unitary and non-unitary quantum processes. The various quantum processes are labeled on the \(x\)-axis and the color coded bar on the right represents the value of the fidelity.
###### Acknowledgements.
All experiments were performed on a Bruker Avance-III 600 MHz FT-NMR spectrometer at the NMR Research Facility at IISER Mohali. Arvind acknowledges funding from the Department of Science and Technology (DST), India, under Grant No DST/ICPS/QuST/Theme-1/2019/Q-68. K.D. acknowledges funding from the Department of Science and Technology (DST), India, under Grant No DST/ICPS/QuST/Theme-2/2019/Q-74.
2304.14374 | Pseudo-Hamiltonian neural networks for learning partial differential equations | Pseudo-Hamiltonian neural networks (PHNN) were recently introduced for learning dynamical systems that can be modelled by ordinary differential equations. In this paper, we extend the method to partial differential equations. The resulting model is comprised of up to three neural networks, modelling terms representing conservation, dissipation and external forces, and discrete convolution operators that can either be learned or be given as input. We demonstrate numerically the superior performance of PHNN compared to a baseline model that models the full dynamics by a single neural network. Moreover, since the PHNN model consists of three parts with different physical interpretations, these can be studied separately to gain insight into the system, and the learned model is applicable also if external forces are removed or changed. | Sølve Eidnes, Kjetil Olsen Lye | 2023-04-27T17:46:00Z | http://arxiv.org/abs/2304.14374v3

# Pseudo-Hamiltonian neural networks for learning partial differential equations
###### Abstract
Pseudo-Hamiltonian neural networks (PHNN) were recently introduced for learning dynamical systems that can be modelled by ordinary differential equations. In this paper, we extend the method to partial differential equations. The resulting model is comprised of up to three neural networks, modelling terms representing conservation, dissipation and external forces, and discrete convolution operators that can either be learned or be given as input. We demonstrate numerically the superior performance of PHNN compared to a baseline model that models the full dynamics by a single neural network. Moreover, since the PHNN model consists of three parts with different physical interpretations, these can be studied separately to gain insight into the system, and the learned model is applicable also if external forces are removed or changed.
## 1 Introduction
The field called physics-informed machine learning combines the strengths of physics-based models and data-driven techniques to achieve a deeper understanding and improved predictive capabilities for complex physical systems [35, 57]. The rapidly growing interest in this interdisciplinary approach is largely motivated by the increasing capabilities of computers to store and process large quantities of data, along with the decreasing costs of sensors and computers that capture and handle data from physical systems. Machine learning for differential equations can broadly be divided into two categories: the forward problem, which involves predicting future states from an initial state, and the inverse problem, which entails learning a system or parts of it from data. A wealth of recent literature exists on machine learning for the forward problem in the context of partial differential equations (PDEs). The proposed methods include neural-network-based substitutes for numerical solvers [20, 21, 55, 50], but also methods that can aid the solution process, e.g. by optimizing the discretization to be used in a solver [2]. The focus of this paper is however on the inverse problem, and much of the foundation for our proposed model can be found in recent advances in learning neural network models for ordinary differential equations (ODEs). Specifically, we build on recent works on models that incorporate Hamiltonian mechanics and related structures that underlie the physical systems we seek to model.
Greydanus et al. introduced Hamiltonian neural networks (HNN) in [26], for learning finite-dimensional Hamiltonian systems from data. They assume that the data \(q\in\mathbb{R}^{n}\), \(p\in\mathbb{R}^{n}\) is obtained from a canonical Hamiltonian system
\[\begin{pmatrix}\dot{q}\\ \dot{p}\end{pmatrix}=\begin{pmatrix}0&I_{n}\\ -I_{n}&0\end{pmatrix}\begin{pmatrix}\frac{\partial H}{\partial q}\\ \frac{\partial H}{\partial p}\end{pmatrix},\]
and aim to learn a neural network model \(H_{\theta}\) of the Hamiltonian \(H:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\). This approach has since been further explored and expanded in a number of directions, which includes considering control input [61], dissipative systems [60, 25], constrained systems [24, 10], port-Hamiltonian systems [16, 18, 19], and metriplectic system formulations [36, 29]. A similar approach considering a Lagrangian formulation instead of a Hamiltonian is presented in [13]. Modifications of the models that focus on improved training from sparse and noisy data have been proposed in [12, 33, 15, 11].
In [23], we proposed pseudo-Hamiltonian neural networks (PHNN), which are models for learning a formulation that allows for damping of the Hamiltonian and external forces acting on the system. In addition, we allow for the Hamiltonian system to be non-canonical, and thus consider the formulation
\[\dot{x}=(S(x)-R(x))\nabla H(x)+f(x,t),\qquad x\in\mathbb{R}^{d}, \tag{1}\]
where \(S(x)=-S(x)^{T}\), and \(y^{T}R(x)y\geq 0\) for all \(y\). That is, \(S(x)\in\mathbb{R}^{d\times d}\) can be any skew-symmetric matrix and \(R(x)\in\mathbb{R}^{d\times d}\) can be any positive semi-definite matrix. Since we put no restrictions on the external forces, a pseudo-Hamiltonian formulation can in principle be obtained for any first-order ODE, which in turn can be obtained from any arbitrary-order ODE by a variable transformation. The formulation makes it possible to learn models that can be separated into internal dynamics and the external forces, i.e. \((\hat{S}_{\theta}(x)-\hat{R}_{\theta}(x))\nabla\hat{H}_{\theta}(x)\) and \(\hat{f}_{\theta}(x,t)\). This requires some sense of uniqueness in this separation, so certain restrictions need be put on the model to consider systems less general than (1). A major advantage that comes with this feature of the PHNN approach is that it makes it possible to learn a model of the system as if under ideal conditions even if data is sampled from a system affected by disturbances, if one assumes that an undisturbed system is closed and thus given only by the internal dynamics.
The motivating idea behind the present paper is to extend the framework of [23] to PDEs. In principle, one could always treat the spatially discretized PDE as a system of ODEs and apply the PHNN models of [23] to that. However, that would disregard certain structures we know to be present in the discretized PDE and would lead to inefficient models. Thus, compared to the ODE case, we consider different neural network architectures. Moreover, we will in the PDE setting impose some restrictions on the form of the external forces, in that we will not allow them to depend on spatial derivatives of the solution. On the other hand, we will consider a more general form of the internal dynamics, where dissipation can result from a separate term and not just damping of the Hamiltonian.
Although HNN and extensions of this have attracted considerable attention in recent years, there have been very few studies on extending the methodology to PDEs. To our knowledge, the only prior works that consider HNN for PDEs are those of Matsubara et al. in [44] and Jin et al. in [32]. The latter includes a numerical example on the nonlinear Schrödinger equation. The former considers both Hamiltonian PDEs, exemplified by the Korteweg-de Vries equation, and an extension to dissipative PDEs, demonstrated on the Cahn-Hilliard equation. This paper has been a major inspiration for our work, especially regarding the neural network architecture we use to model the integrals in our PDE formulation. By generalizing to a wider class of PDEs that can
have conservative and dissipative terms at once, and also allowing for external forces, we largely expand the utility of this learning approach.
The extension of PHNN to PDEs we propose here should naturally also be put in context with other recent advances in learning of PDEs from data. Long et al. introduced PDE-Net in [40], and together with [44] this may be the work that is most comparable to what we present here. Their model is similar to the baseline model we will compare PHNN to in this paper, albeit less general. Their approach has two components: learning neural network models for the nonlinear terms in the PDE and identifying convolution operators that correspond to appropriate finite difference approximations of the spatial derivative operators present. They do however make considerable simplifying assumptions in their numerical experiments, e.g. only considering linear terms and a forward Euler discretization in time. Other works that have received significant attention are those that have focused on identifying coefficients of terms of the PDE, both in the setting where one assumes that the terms are known and approximate them by neural network models [50] and in the setting where one also identifies the terms present from a search space of candidates using sparse regression [51, 53, 34]. There has also been considerable recent research on learning operators associated with the underlying PDEs, where two prominent methods are Fourier neural operators (FNO) [1, 38] and deep operator network (DeepONet) [41]. These operators can e.g. map from a source term to solution states or from an initial state to future states; in the latter case, learning the operator equates to solving the forward problem of the PDE. The review paper [5] summarizes the literature on operator learning and system identification of PDEs, as well as recent developments on learning order reductions.
As will be demonstrated theoretically and experimentally in this paper, assuming a pseudo-Hamiltonian structure when solving the inverse problem for PDEs has both qualitative and quantitative advantages. The latter is shown by numerical comparisons to a baseline model on four test cases. The main qualitative feature of PHNN is that it is composed of up to six trainable submodels, which after training can each be studied for an increased understanding of the system we are modelling. Moreover, we can train a model on a system affected by external forces and remove these from the model after training, so that we are left with a model unaffected by these disturbances. The code for this paper is built on the code for PHNN for ODEs developed for [23], and we have updated the GitHub repository [https://github.com/SINTEF/pseudo-hamiltonian-neural-networks](https://github.com/SINTEF/pseudo-hamiltonian-neural-networks) and the Python package phlearn with this extension to the PDE case.
The rest of this paper is organised as follows. In the next section, we explore the theoretical foundations upon which our method is based. Then the pseudo-Hamiltonian formulation and the class of PDEs we will learn are presented and discussed in Section 3. Section 4 is the centerpiece of the paper, as it is here we present the PHNN method for PDEs. We then dedicate a substantial portion of the paper to presenting and evaluating numerical results for various PDEs, in Section 5. The penultimate section is devoted to analysis of the results and our model, and a discussion of open questions to address in future research. We summarize the main results and draw some conclusions in the last section.
## 2 Background: Derivatives, discretizations and neural networks
Before delving into the pseudo-Hamiltonian formulation and the model we propose based on this, we will review and discuss some requisites for making efficient neural network models of systems
governed by PDEs.
### Learning dynamical systems
Consider the PDE
\[u_{t}=g(u^{P},x,t),\quad u\in H^{p}(\Omega),x\in\Omega\subseteq\mathbb{R}^{d},t\in \mathbb{R}, \tag{2}\]
first-order in time, where \(u^{P}\) means \(u\) itself and its partial derivatives up to the arbitrary order \(p\) with respect to the spatial variables \(x_{1},\ldots,x_{d}\). We seek to train a model \(\hat{g}_{\theta}\) of \(g\) so that solving \(u_{t}=\hat{g}_{\theta}(u^{P},x,t)\) leads to accurate predictions of the future states of the system. The universal approximation theorem [31, 14] states that \(g\) can be approximated with an arbitrarily small error by a neural network. In practice, we have to assume an abundance of observations of \(u_{t}\) and \(u^{P}\) at \(t\) and \(x\) to actually find a precise neural network approximation of \(g\). This brings us straight to one of the fundamental challenges of machine learning of differential equations: in a typical real-world setting, we can not expect to have data on the derivatives, neither temporal nor spatial. Thus we will have to depend on approximations obtained from discrete data. In this paper we will use sub- and superscript to denote discrete solution points in space resp. time. That is, \(u^{j}(x)=u(x,t^{j})\) and \(u_{i}(t)=u(x_{i},t)\), and we will suppress the arguments when they are not necessary. Let us consider the issue of time-discretization first, an issue shared by ODEs and PDEs alike, and defer the second issue to the next subsection.
In several of the papers introducing the most prominent recent methods for learning finite-dimensional dynamical systems, e.g. the original HNN paper [26] and the first paper by Brunton et al. on system identification [5], the derivatives of the solution are assumed to be known or approximated by finite differences. Approximating the time-derivative by the forward finite difference operator is equivalent to training using the forward Euler integrator, which is also what is done in the PDE-Net papers [40, 39]. There have however been several recent papers proposing more efficient training methods that incorporate other numerical integration schemes, see e.g. [12, 33, 15]. We follow [23, 45] and set up the training in such a way that we can use any mono-implicit integrator; that is, any integrator that relies explicitly on the solution at the times it integrates from and to. For the majority of the experiments in this paper, we use the implicit midpoint method, which is second-order, symplectic and symmetric. That is, we train the model \(\hat{g}_{\theta}\) by identifying the parameters \(\theta\) that minimize the loss function
\[\mathcal{L}_{g_{\theta}}=\bigg{\|}\frac{u^{j+1}-u^{j}}{\Delta t}-\hat{g}_{ \theta}\Big{(}\frac{(u^{P})^{j}+(u^{P})^{j+1}}{2},x,\frac{t^{j}+t^{j+1}}{2} \Big{)}\bigg{\|}_{2}^{2},\]
given for one training point \(u^{j}\) and barring regularization for now. This yields a considerable improvement over the forward Euler method at next to no additional computational cost, since the model in both cases is evaluated at only one point at each iteration of training. The option to use other integrators, including symmetric methods of order four and six, is readily implemented in the phlearn package, and we do demonstrate the need and utility of a fourth-order integrator in Section 5.3. For a thorough study of integrators especially suited for training neural network models of dynamical systems, we refer the reader to [45, 46].
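As an illustration, a sketch of this implicit-midpoint loss for a single pair of consecutive snapshots, assuming a PyTorch-style model that maps \((u,x,t)\) to an approximate time derivative, could look as follows; it is not the implementation used in the phlearn package.

```python
import torch

def midpoint_loss(g_theta, u0, u1, x, t0, t1, dt):
    """Implicit-midpoint training loss for one pair of consecutive
    snapshots u0 = u^j and u1 = u^{j+1}; g_theta is any model mapping
    (u, x, t) to the approximate time derivative."""
    u_mid = 0.5 * (u0 + u1)
    t_mid = 0.5 * (t0 + t1)
    residual = (u1 - u0) / dt - g_theta(u_mid, x, t_mid)
    return torch.mean(residual ** 2)
```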
### Spatial derivatives and convolution operators
Moving from finite-dimensional systems to infinite-dimensional systems introduces the issue of how to approximate spatial derivatives by the neural network models. Thankfully, a proposed solution
to this issue can be found in recent literature, as several works have noted the connection between finite difference schemes for differential equations and the convolutional neural network models originally developed for image analysis [8, 17, 52, 2, 9].
Given a function \(u\) and a kernel or filter \(w\), a discrete convolution is defined by
\[(u*w)(x_{i})=\sum_{j=-r}^{s}w_{j}u(x_{i-j}),\quad r,s\geq 0. \tag{3}\]
Here \(*\) is called the convolution operator, and the kernel \(w\) is a tensor containing trainable weights: \(w=\left[w_{-r},w_{-r+1},\ldots,w_{0},\ldots,w_{s-1},w_{s}\right]\). If the function \(u\) is periodic, so that \(u_{0}=u_{M}\), we obtain a circular convolution, which can be expressed by a circulant matrix applied on the vector \(u=\left[u_{0},\ldots,u_{M-1}\right]^{T}\).
A convolutional layer in a neural network can be represented as
\[y_{k}(u_{i})=\phi\big{(}(u*w_{k})(x_{i})+b_{k}\big{)}=\phi\bigg{(}\sum_{j=-r}^{ s}w_{kj}u_{i-j}+b_{k}\bigg{)},\]
where \(y_{k}(u_{i})\) is the output of the \(k\)-th feature map at point \(u_{i}\), \(w_{kj}\) are the weights of the kernel \(w_{k}\), \(b_{k}\) is the bias term, and \(\phi(\cdot)\) is an activation function. The width of the layer is \(r+s+1\), and this is usually referred to as the size of the convolution kernel, or filter. For our purpose it makes sense to have either \(r=0\) or \(r=s\), and the latter is the standard when convolutional neural networks are used in image analysis. Training the convolutional layer of a neural network consists of optimizing the weights and biases, which we collectively denoted by \(\theta\) in the previous subsection.
Similarly, a finite difference approximation of the \(n\)-th order derivative of \(u\) at a point \(x_{i}\) can also be expressed as applying a discrete convolution:
\[\frac{\mathrm{d}^{n}u(x_{i})}{\mathrm{d}x^{n}}\approx\sum_{j=-r}^{s}a_{j}u(x_ {i-j}), \tag{4}\]
where the finite difference weights \(a_{j}\) depend on the spatial grid. If we assume the spatial points to be equidistributed and let \(h:=x_{i+1}-x_{i}\), we have e.g.
\[\frac{\mathrm{d}u(x_{i})}{\mathrm{d}x} =\frac{u(x_{i+1})-u(x_{i})}{h}+\mathcal{O}(h),\] \[\frac{\mathrm{d}u(x_{i})}{\mathrm{d}x} =\frac{u(x_{i+1})-u(x_{i-1})}{2h}+\mathcal{O}(h^{2}),\] \[\frac{\mathrm{d}^{2}u(x_{i})}{\mathrm{d}x^{2}} =\frac{u(x_{i+1})-2u(x_{i})+u(x_{i-1})}{h^{2}}+\mathcal{O}(h^{2}),\] \[\frac{\mathrm{d}^{3}u(x_{i})}{\mathrm{d}x^{3}} =\frac{u(x_{i+2})-2u(x_{i+1})+2u(x_{i-1})-u(x_{i-2})}{2h^{3}}+\mathcal{O}(h^{2}).\]
Hence, a kernel size of two, with e.g. \(r=0\) and \(s=1\), is sufficient to obtain a first-order approximation of the first derivative, while a kernel size of three is sufficient and necessary to obtain second order approximations of first and second derivatives. Further, kernel size five is needed to approximate the third derivative. As noted by [52], higher order derivatives can be approximated either by increasing the kernel size or applying multiple convolution operations. In our models, we have designed neural networks where only the first layer is convolutional, and thus the kernel size restricts the order of the derivative we can expect to learn, while it also restricts the order of the approximations of these derivatives.
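As a quick numerical check of this correspondence, the sketch below applies the second-order central-difference kernel as a circular convolution on a periodic grid and compares with the exact derivative; the grid size is an arbitrary choice.

```python
import numpy as np

M = 64
h = 2 * np.pi / M
x = np.arange(M) * h                      # periodic grid on [0, 2*pi)
u = np.sin(x)

# Circular convolution with the kernel [1/(2h), 0, -1/(2h)], i.e. the
# central difference (u_{i+1} - u_{i-1}) / (2h) with periodic boundary
ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)
print(np.max(np.abs(ux - np.cos(x))))     # second-order accurate: ~h^2
```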
### Variational derivative
Given the function \(H\) depending on \(u\), \(x\) and the first derivative \(u_{x}\), let \(\mathcal{H}\) be the integral of \(H\) over the spatial domain:
\[\mathcal{H}[u]=\int_{\Omega}H(x,u,u_{x})\,\mathrm{d}x. \tag{5}\]
The variational derivative, or functional derivative, \(\frac{\delta\mathcal{H}}{\delta u}[u]\) of \(\mathcal{H}\) is defined by the property
\[\left\langle\frac{\delta\mathcal{H}}{\delta u}[u],v\right\rangle_{L_{2}}= \frac{\mathrm{d}}{\mathrm{d}\epsilon}\bigg{|}_{\epsilon=0}\mathcal{H}[u+ \epsilon v]\quad\forall v\,\in H^{p}(\Omega). \tag{6}\]
When \(\mathcal{H}\) as here only depends on first derivatives, the variational derivative can be calculated explicitly by the relation
\[\frac{\delta\mathcal{H}}{\delta u}[u]=\frac{\partial H}{\partial u}-\frac{ \mathrm{d}}{\mathrm{d}x}\frac{\partial H}{\partial u_{x}},\]
assuming enough regularity in \(H\).
## 3 Pseudo-Hamiltonian formulation of PDEs
In this paper we consider the class of PDEs that can be written on the form
\[u_{t}=S(u^{P},x)\frac{\delta\mathcal{H}}{\delta u}[u]-R(u^{P},x)\frac{\delta \mathcal{V}}{\delta u}[u]+f(u^{P},x,t), \tag{7}\]
where \(S(u^{P},x)\) and \(R(u^{P},x)\) are operators that are skew-symmetric resp. positive semi-definite with respect to the \(L^{2}\) inner product, \(\mathcal{H}\) and \(\mathcal{V}\) are integrals of the form (5) and \(f:\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}\rightarrow\mathbb{R}\). The naming of this class is a challenge, since similar but not identical classes have been called by a myriad of names in the literature. Ignoring the term \(f\), the class could be referred to as metriplectic PDEs, where the name is a portmanteau of metric and symplectic [28, 4]. The formulation is also similar to an infinite-dimensional variant of the General Equation for Non-Equilibrium Reversible-Irreversible Coupling (GENERIC) formalism from thermodynamics [27, 47], except for \(f\) and the fact that \(R(u^{P},x)\) is positive instead of negative semi-definite. To be consistent with our previous work [23] and to make clear the connection to the vast recent literature on Hamiltonian neural networks, we say that (7) is the class of pseudo-Hamiltonian PDEs. This marks a generalization of the definition used in [23], in addition to the extension to infinite-dimensional systems, in that we here allow for \(\mathcal{H}\) and \(\mathcal{V}\) to be two different integrals. In the case \(\mathcal{V}=0\) and \(f(u^{P},x,t)=0\), we have the class of integral-preserving PDEs, which encompasses all (non-canonical) Hamiltonian PDEs [37]. That is, given the appropriate boundary conditions, e.g. periodic, the PDE will preserve the integral \(\mathcal{H}\), usually labeled as a Hamiltonian, or integral of motion, of the system. This follows from the skew-symmetry of \(S\):
\[\frac{\mathrm{d}\mathcal{H}}{\mathrm{d}t}=\left\langle\frac{\delta\mathcal{H} }{\delta u}[u],\frac{\partial u}{\partial t}\right\rangle_{L^{2}}=\left\langle \frac{\delta\mathcal{H}}{\delta u}[u],S(u^{P},x)\frac{\delta\mathcal{H}}{ \delta u}[u]\right\rangle_{L^{2}}=0.\]
Similarly, if \(\mathcal{H}=0\) and \(f(u^{P},x,t)=0\) but \(\mathcal{V}\geq 0\), the PDE (7) will dissipate the integral \(\mathcal{V}\), and \(\mathcal{V}\) may be called a Lyapunov function.
**Example 1**.: Consider the KdV-Burgers (or viscous KdV) equation [56]
\[u_{t}+\eta uu_{x}-\nu u_{xx}-\gamma^{2}u_{xxx}=0. \tag{8}\]
This can be written on the form (7) with \(R\) being the identity operator \(I\), \(S=\frac{\partial}{\partial x}\) and \(f(u,x,t)=0\), and
\[\mathcal{H}=-\int_{\Omega}\left(\frac{\eta}{6}u^{3}+\frac{\gamma^{2}}{2}u_{x} ^{2}\right)\,dx \tag{9}\]
and
\[\mathcal{V}=\frac{\nu}{2}\int_{\Omega}u_{x}^{2}\,dx. \tag{10}\]
We see this connection by deriving the variational derivatives
\[\frac{\delta\mathcal{H}}{\delta u}[u]=-(\frac{\eta}{2}u^{2}-\gamma^{2}u_{xx}) \tag{11}\]
and
\[\frac{\delta\mathcal{V}}{\delta u}[u]=-\nu u_{xx}. \tag{12}\]
We have that (8) reduces to the inviscid Burgers' equation for \(\eta=-1\) and \(\nu=\gamma=0\), the viscous Burgers' equation for \(\eta=-1\), \(\nu\neq 0\) and \(\gamma=0\), and the KdV equation for \(\nu=0\), \(\eta\neq 0\) and \(\gamma\neq 0\).
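To make the example concrete, the following sketch assembles the right-hand side of (8) from the variational derivatives (11) and (12) on a periodic grid, using simple central differences; this discretization is one possible choice for illustration, not the one analysed in the remainder of the paper.

```python
import numpy as np

def kdv_burgers_rhs(u, h, eta, nu, gamma):
    """u_t = S dH/du - R dV/du for the KdV-Burgers equation (8), with
    S = d/dx, R = I and the variational derivatives (11) and (12),
    discretized here by periodic central differences."""
    dx  = lambda v: (np.roll(v, -1) - np.roll(v, 1)) / (2 * h)
    dxx = lambda v: (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / h**2
    dH = -(eta / 2 * u**2 - gamma**2 * dxx(u))  # delta H / delta u, Eq. (11)
    dV = -nu * dxx(u)                           # delta V / delta u, Eq. (12)
    return dx(dH) - dV
```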
### Spatial discretization
In this section and this section only we will use boldface notation for vectors, to distinguish continuous functions and parameters from their spatial discretizations. Assume that values of \(u\) are obtained at grid points \(\mathbf{x}=[x_{0},\ldots,x_{M}]^{T}\). Following [22], we interpret these as quadrature points with non-zero quadrature weights \(\kappa=[\kappa_{0},\ldots,\kappa_{M}]^{T}\), and approximate the \(L_{2}\) inner product by a weighted discrete inner product:
\[\langle u,v\rangle=\int_{\Omega}u(x)v(x)\,\mathrm{d}x\approx\sum_{i=0}^{M} \kappa_{i}u(x_{i})v(x_{i})=u^{T}\mathrm{diag}(\kappa)v=:\langle u,v\rangle_{ \kappa}.\]
Let \(\mathbf{p}\) denote the discretization parameters that consist of \(\mathbf{x}\) and the associated \(\kappa\). Then, assuming that there exists a consistent approximation \(\mathcal{H}_{\mathbf{p}}(\mathbf{u})\) to \(\mathcal{H}[u]\) that depends on \(u\) evaluated at \(\mathbf{x}\), we define the discretized variational derivative by the analogue to (6)
\[\left\langle\frac{\delta\mathcal{H}_{\mathbf{p}}}{\delta\mathbf{u}}(\mathbf{u }),\mathbf{v}\right\rangle_{\kappa}=\frac{\mathrm{d}}{\mathrm{d}\epsilon} \bigg{|}_{\epsilon=0}\mathcal{H}_{\mathbf{p}}(\mathbf{u}+\epsilon\mathbf{v}) \quad\forall\mathbf{v}\,\in\mathbb{R}^{M+1}.\]
Thus, as shown in [22], we have a relationship between the discretized variational derivative and the gradient:
\[\frac{\delta\mathcal{H}_{\mathbf{p}}}{\delta\mathbf{u}}(\mathbf{u})=\mathrm{ diag}(\kappa)^{-1}\nabla_{\mathbf{u}}\mathcal{H}_{\mathbf{p}}(\mathbf{u}).\]
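To make this relation concrete, here is a minimal PyTorch sketch (our illustration, not code from the phlearn package) that recovers the discretized variational derivative from the automatic-differentiation gradient; the toy functional `H_p` and all variable names are our own.

```python
import torch

def variational_derivative(H_p, u, kappa):
    """delta H_p / delta u = diag(kappa)^{-1} * grad_u H_p(u)."""
    u = u.detach().requires_grad_(True)
    grad = torch.autograd.grad(H_p(u), u)[0]
    return grad / kappa

# Example with equidistributed points, where kappa_i = h:
M, P = 100, 20.0
h = P / M
kappa = torch.full((M,), h)
H_p = lambda u: -h * torch.sum(u**3) / 6.0  # toy integral: the eta-term of (14) with eta = 1
dH = variational_derivative(H_p, torch.randn(M), kappa)  # equals -u**2 / 2
```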
Furthermore, we approximate \(S(u^{P},x)\) and \(R(u^{P},x)\) by matrices \(S_{d}(\mathbf{u})\) and \(R_{d}(\mathbf{u})\) that are skew-symmetric resp. positive semi-definite with respect to \(\langle\cdot,\cdot\rangle_{\kappa}\). Then a spatial discretization of (7) is given by
\[\mathbf{u}_{t}=S_{d}(\mathbf{u})\frac{\delta\mathcal{H}_{\mathbf{p}}}{\delta \mathbf{u}}(\mathbf{u})-R_{d}(\mathbf{u})\frac{\delta\mathcal{V}_{\mathbf{p}} }{\delta\mathbf{u}}(\mathbf{u})+\mathbf{f}(\mathbf{u},\mathbf{x},t),\]
which may equivalently be written as
\[\mathbf{u}_{t}=S_{\mathbf{p}}(\mathbf{u})\nabla_{\mathbf{u}}\mathcal{H}_{\mathbf{ p}}(\mathbf{u})-R_{\mathbf{p}}(\mathbf{u})\nabla_{\mathbf{u}}\mathcal{V}_{\mathbf{p}}( \mathbf{u})+\mathbf{f}(\mathbf{u},\mathbf{x},t), \tag{13}\]
where \(S_{\mathbf{p}}(\mathbf{u}):=S_{d}(\mathbf{u})\operatorname{diag}(\kappa)^{-1}\) and \(R_{\mathbf{p}}(\mathbf{u}):=R_{d}(\mathbf{u})\operatorname{diag}(\kappa)^{-1}\) are skew-symmetric resp. positive semi-definite by the standard definitions for matrices.
Thus, upon discretizing in space, we obtain a system of ODEs (13) that is of a form quite similar to the generalized pseudo-Hamiltonian formulation considered in [23]. In fact, if \(\mathcal{V}=\mathcal{H}\), we obtain the system
\[\mathbf{u}_{t}=(S_{\mathbf{p}}(\mathbf{u})-R_{\mathbf{p}}(\mathbf{u}))\nabla_ {\mathbf{u}}\mathcal{H}_{\mathbf{p}}(\mathbf{u})+\mathbf{f}(\mathbf{u}, \mathbf{x},t).\]
Still, we do not recommend applying the PHNN method of [23] to this directly, without taking into consideration what we know about \(\mathcal{H}_{\mathbf{p}}\). Specifically, we want to exploit that it is a discrete approximation of the integral (5), and can thus be expected to be given by a sum of \(M\) terms that each depend in the same way on \(u_{i}\) and the neighbouring points \(u_{i-1}\) and \(u_{i+1}\). Hence, as discussed in the next section, we will employ convolutional neural networks with weight sharing across the spatial discretization points.
**Example 2**.: Consider again the KdV-Burgers equation (8), on the domain \(\Omega=[0,P]\) with periodic boundary conditions \(u(0,t)=u(P,t)\). We assume that the \(M+1\) grid points are equidistributed and define \(h:=x_{i+1}-x_{i}=P/M\). We approximate the integrals (9) and (10) by
\[\mathcal{H}_{\mathbf{p}}=-h\sum_{i=0}^{M-1}\left(\frac{\eta}{6}u_{i}^{3}+\frac{\gamma^{2}}{2}\left(\delta_{f}u_{i}\right)^{2}\right) \tag{14}\]
and
\[\mathcal{V}_{\mathbf{p}}=\frac{\nu}{2}h\sum_{i=0}^{M-1}\left(\delta_{f}u_{i}\right)^{2}, \tag{15}\]
where the operator \(\delta_{f}\) denotes forward difference, i.e. \(\delta_{f}u_{i}=\frac{u_{i+1}-u_{i}}{h}\). Furthermore, we approximate \(\partial_{x}\) by the matrix corresponding to the central difference approximation \(\delta_{c}\) defined by \(\delta_{c}u_{i}=\frac{u_{i+1}-u_{i-1}}{2h}\), i.e.
\[S_{d}=\frac{1}{2h}\begin{pmatrix}0&1&0&\cdots&0&-1\\ -1&0&1&0&\cdots\\ 0&-1&0&1&0&\cdots\\ &&\ddots&\ddots&\ddots&\\ &\cdots&0&-1&0&1\\ 1&0&\cdots&0&-1&0\end{pmatrix}\in\mathbb{R}^{M\times M},\]
where the first and last rows are adjusted according to the periodic boundary conditions.
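For illustration, a NumPy sketch (ours) that assembles \(S_{d}\) with the periodic adjustment and verifies its skew-symmetry:

```python
import numpy as np

def central_difference_matrix(M, h):
    """Circulant matrix representing the periodic central difference delta_c."""
    S = np.zeros((M, M))
    for i in range(M):
        S[i, (i + 1) % M] = 1.0   # coefficient of u_{i+1}
        S[i, (i - 1) % M] = -1.0  # coefficient of u_{i-1}; wraps at the boundary
    return S / (2.0 * h)

S_d = central_difference_matrix(M=100, h=0.2)
assert np.allclose(S_d, -S_d.T)  # skew-symmetric, as required
```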
To obtain (13), we have that \(S_{\mathbf{p}}=\frac{1}{h}S_{d}\) and \(R_{\mathbf{p}}=\frac{1}{h}I\), with \(I\) being the identity matrix, and take the gradients of the approximated integrals to find
\[\nabla_{\mathbf{u}}\mathcal{H}_{\mathbf{p}} =-h\left(\frac{\eta}{2}\mathbf{u}^{2}-\gamma^{2}\delta_{c}^{2}\mathbf{u}\right),\] \[\nabla_{\mathbf{u}}\mathcal{V}_{\mathbf{p}} =-h\,\nu\,\delta_{c}^{2}\mathbf{u},\]
where \(\mathbf{u}^{2}\) denotes the element-wise square of \(\mathbf{u}\), and \(\delta_{c}^{2}:=\delta_{f}\delta_{b}\) denotes the second-order difference operator approximating the second derivative by \(\delta_{c}^{2}u_{i}=\frac{u_{i+1}-2u_{i}+u_{i-1}}{h^{2}}\). Observe that \(\frac{\delta\mathcal{H}_{\mathbf{p}}}{\delta\mathbf{u}}(\mathbf{u})=\frac{1}{h}\nabla_{\mathbf{u}}\mathcal{H}_{\mathbf{p}}(\mathbf{u})\) and \(\frac{\delta\mathcal{V}_{\mathbf{p}}}{\delta\mathbf{u}}(\mathbf{u})=\frac{1}{h}\nabla_{\mathbf{u}}\mathcal{V}_{\mathbf{p}}(\mathbf{u})\) are consistent discrete approximations of (11) and (12). Moreover, they are second-order approximations of these variational derivatives, even though (14) and (15) are only first-order approximations of the integrals (9) and (10).
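Putting the pieces of this example together, the semi-discrete right-hand side (13) for the unforced KdV–Burgers equation can be sketched as follows (our code; `S_d` is the matrix above, and the default parameter values are those used in Section 5.1):

```python
import numpy as np

def kdv_burgers_rhs(u, S_d, h, eta=6.0, nu=0.3, gamma=1.0):
    """u_t = S_p grad H_p - R_p grad V_p, with S_p = S_d / h and R_p = I / h."""
    d2u = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / h**2  # delta_c^2 u, periodic
    grad_H = -h * (0.5 * eta * u**2 - gamma**2 * d2u)
    grad_V = -h * nu * d2u
    return (S_d @ grad_H) / h - grad_V / h
```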
### Restricting the class by imposing assumptions
Without imposing any further restrictions, the formulation (7) can be applied to any PDE that is first-order in time, and it will not be unique for any system. Any contribution from the first two terms on the right-hand side could also be expressed through \(f\), and even if we restrict this term, the operators \(S\) and \(R\) are generally not uniquely defined for a corresponding integral. In the remainder of this paper, we will consider the cases where the operators \(S\) and \(R\) are linear and independent of \(x\) and \(u\), we assume \(R\) to be symmetric, and we will not let \(f\) depend on derivatives of \(u\). Furthermore, we apply the symmetric positive semi-definite operator \(A\) to the equation and require that this commutes with \(R\) and \(S\). We redefine \(S:=AS\), \(R:=AR\) and \(f(u,x,t):=Af(u,x,t)\), and thus get
\[Au_{t}=S\frac{\delta\mathcal{H}}{\delta u}[u]-R\frac{\delta\mathcal{V}}{\delta u }[u]+f(u,x,t), \tag{16}\]
where the new \(S\) is still skew-symmetric and the new \(R\) is still symmetric and positive semi-definite.
In the following we will denote the identity operator by \(I\) and the zero operator by \(0\), so that \(Iv=v\) and \(0v=0\) for any \(v\in L^{2}\). We note that the zero operator is positive semi-definite, symmetric and skew-symmetric, while the identity operator is symmetric and positive semi-definite, but not skew-symmetric.
## 4 The PHNN model for PDEs
Since we assume that the operators \(A\), \(S\) and \(R\) are independent of \(x\) and \(u\), the discretization of these operators will necessarily result in circulant matrices, given that \(u\) is periodic. That is, they can be viewed as discrete convolution operators. We thus set \(\hat{A}_{\theta}^{[k_{1}]}\), \(\hat{S}_{\theta}^{[k_{2}]}\) and \(\hat{R}_{\theta}^{[k_{3}]}\) to be trainable convolution operators, where \(k_{1}\), \(k_{2}\) and \(k_{3}\) denote the kernel sizes, and we impose symmetry on \(\hat{A}_{\theta}^{[k_{1}]}\) and \(\hat{R}_{\theta}^{[k_{3}]}\) and skew-symmetry on \(\hat{S}_{\theta}^{[k_{2}]}\). Furthermore, we let \(\hat{\mathcal{H}}_{\theta}\) and \(\hat{\mathcal{V}}_{\theta}\) be two separate neural networks that take input vectors of length \(M\), the number of spatial discretization points, and output a scalar. The neural network \(\hat{f}_{\theta}\) can take input vectors representing \(u\), \(x\) and \(t\), and outputs a vector of length \(M\).
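One possible way to impose these constraints is to parametrize only half of each kernel. The following sketch (ours, not the phlearn implementation) does this for the skew-symmetric operator, using circular padding for the periodic boundary conditions; a symmetric operator is obtained analogously with kernel \([w_{r},\ldots,w_{0},\ldots,w_{r}]\).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkewSymmetricConv(nn.Module):
    """Circular 1D convolution with kernel [-w_r, ..., -w_1, 0, w_1, ..., w_r]."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.r = kernel_size // 2
        self.w = nn.Parameter(torch.randn(self.r))  # only half the kernel is free

    def forward(self, u):  # u: (batch, 1, M)
        kernel = torch.cat([-self.w.flip(0), self.w.new_zeros(1), self.w])
        u = F.pad(u, (self.r, self.r), mode="circular")
        return F.conv1d(u, kernel.view(1, 1, -1))
```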
The full pseudo-Hamiltonian neural network model for PDEs is then given by
\[\hat{g}_{\theta}(u,x,t)=(\hat{A}_{\theta}^{[k_{1}]})^{-1}\Big{(}\hat{S}_{ \theta}^{[k_{2}]}\nabla\hat{\mathcal{H}}_{\theta}(u)-\hat{R}_{\theta}^{[k_{3}] }\nabla\hat{\mathcal{V}}_{\theta}(u)+k_{4}\hat{f}_{\theta}(u,x,t)\Big{)}, \tag{17}\]
where we also have introduced \(k_{4}\), which should be \(1\) or \(0\) depending on whether or not we want to learn a force term. Given a set of \(N\) training points \(\{(u^{j_{n}},u^{j_{n}+1},t^{j_{n}})\}_{n=1}^{N}\) varying across time and different stochastic realizations of initial conditions, we let the loss function be defined as
\[\mathcal{L}_{g_{\theta}}(\{(u^{j_{n}},u^{j_{n}+1},t^{j_{n}})\}_{n=1}^{N})= \frac{1}{N}\sum_{n=1}^{N}\bigg{|}\frac{u^{j_{n}+1}-u^{j_{n}}}{\Delta t}-\hat{g} _{\theta}\Big{(}\frac{u^{j_{n}}+u^{j_{n}+1}}{2},x,\frac{t^{j_{n}}+t^{j_{n}+1}}{2 }\Big{)}\bigg{|}^{2}, \tag{18}\]
if the implicit midpoint integrator is used. For the experiments in Section 5.3 we use the fourth-order symmetric integrator introduced in [23], and the loss function is amended accordingly.
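For reference, a minimal sketch (ours) of the loss (18) for one batch with the implicit midpoint discretization; `g_theta` stands for the full model (17), and the argument conventions are assumptions.

```python
import torch

def phnn_loss(g_theta, u0, u1, t0, t1, x, dt):
    """Squared residual of the implicit midpoint rule, cf. (18)."""
    u_mid = 0.5 * (u0 + u1)
    t_mid = 0.5 * (t0 + t1)
    residual = (u1 - u0) / dt - g_theta(u_mid, x, t_mid)
    return torch.mean(residual**2)
```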
### Implementation
PHNN comprises up to six trainable models; the framework is very flexible, and assumptions may be imposed so that one or more of the parts do not have to be learned. Moreover, careful consideration should be given to how best to model the different parts. In the following, we explain how we have set up the models in our code.
#### 4.1.1 Modelling \(\mathcal{H}\) and \(\mathcal{V}\)
The networks \(\hat{\mathcal{H}}_{\theta}\) and \(\hat{\mathcal{V}}_{\theta}\) consist of one convolutional layer with kernel size two, followed by linear layers corresponding to convolutional layers with kernel size one; the last layer then performs a summation of the \(M\) inputs to one scalar. Following each of the first two layers, we apply the \(\tanh\) activation function. To impose the periodic boundary conditions, we pad the input to the convolutional layer by adding \(u(P)=u(0)\) at the end of the array of the discretized \(u\). A similar technique was suggested in [44], although they use a kernel of size three on the first convolutional layer. We opt for a smaller filter, since kernel size two is sufficient to learn the forward difference approximation of the first derivative in the integrals, which in turn is sufficient to obtain a second-order approximation of the resulting variational derivative; this is shown for the KdV-Burgers equation in Example 2. If we want to be able to learn derivatives of order two in the integral, we would need kernel size three, and to pad the input with one element on each side. If we want to learn derivatives of order three or four, or if we want to learn third- or fourth-order approximations of the derivatives, we would need the kernel size of the convolutional layer to be five, and to pad the input by two elements on each side. This adjustment can easily be made in our code. For the examples in this paper, we would not gain anything by increasing the kernel size, because we only have up to first derivatives in the integrals and because the training data is generated using second-order spatial discretizations. On the other hand, having a kernel of size two simplifies the learning and may facilitate superior performance over a model that does not rely on a pseudo-Hamiltonian structure and has to approximate up to third derivatives by convolutional neural networks.
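A minimal sketch (ours) of such a network follows; the hidden width is an assumption, since the text does not fix it.

```python
import torch
import torch.nn as nn

class IntegralNet(nn.Module):
    """Network for H or V: kernel-2 convolution -> pointwise layers -> summation."""
    def __init__(self, hidden=100):  # hidden width assumed
        super().__init__()
        self.conv = nn.Conv1d(1, hidden, kernel_size=2)
        self.lin1 = nn.Conv1d(hidden, hidden, kernel_size=1)
        self.lin2 = nn.Conv1d(hidden, 1, kernel_size=1)

    def forward(self, u):  # u: (batch, 1, M)
        u = torch.cat([u, u[..., :1]], dim=-1)  # periodic padding: append u(0)
        h = torch.tanh(self.conv(u))
        h = torch.tanh(self.lin1(h))
        return self.lin2(h).sum(dim=-1)  # sum the M outputs to one scalar
```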
#### 4.1.2 Modelling \(A\), \(S\), \(R\) and \(f\)
In (17), \(k=(k_{1},k_{2},k_{3},k_{4})\) are hyperparameters that determine the expressiveness of the model. Setting \(k_{1}=k_{2}=k_{3}=M\) and \(k_{4}=1\) means that we can approximate the general system (16), while setting \(k_{1}=k_{3}=1\), \(k_{2}=3\) and \(k_{4}=0\) would be sufficient to learn a model for the discretized KdV-Burgers system (8). In fact, if we set \(k_{2}=3\), the skew-symmetric operator \(\hat{S}_{\theta}^{[k_{2}]}\) is uniquely defined up to a constant, since we require \(w_{0}=0\) and \(w_{1}=-w_{-1}\). Moreover, since this constant would only amount to a scaling between the operator and the discrete variational derivative it is applied to, \(\hat{S}_{\theta}^{[k_{2}]}\) does not have to be trained in this case; determining \(w_{1}\) would just lead to a scaling of the second-order approximation of the first derivative in space that could be compensated by a scaling of \(\mathcal{H}\). Similarly, if the kernel size of \(A\) or \(R\) is 3, we could set \(w_{0}=1\) and learn a single parameter \(w_{1}=w_{-1}\) for each of these when training the model. This corresponds to learning a linear combination of the identity and the second-order approximation of the second derivative in space.
We model \(f\) by the neural network \(\hat{f}_{\theta}\), which may take any of the variables \(u\), \(x\) and \(t\) as input. It has three linear layers, i.e. convolutional layers with kernel size one, with the \(\tanh\) activation function after each of the first two. If \(\hat{f}_{\theta}\) depends on \(x\), periodicity on the domain \([0,P]\) is imposed in a similar fashion as suggested in [58, 43, 42] for hard-constraining periodic boundary conditions in physics-informed neural networks and DeepONet. That is, we replace the input \(x\) by the first two Fourier basis functions, \(\sin\left(\frac{2\pi}{P}x\right)\) and \(\cos\left(\frac{2\pi}{P}x\right)\), which is sufficient for expressing any \(x\)-dependent periodic function.
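A sketch (ours) of \(\hat{f}_{\theta}\) for the case where it depends on all of \(u\), \(x\) and \(t\); the layer widths are assumptions.

```python
import torch
import torch.nn as nn

class ForceNet(nn.Module):
    """f-hat(u, x, t), with x replaced by its first two Fourier basis functions."""
    def __init__(self, P, hidden=100):
        super().__init__()
        self.P = P
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.Tanh(),      # inputs: u, sin, cos, t
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, u, x, t):  # u, x: shape (M,); t: scalar tensor
        omega = 2.0 * torch.pi / self.P
        feats = torch.stack(
            [u, torch.sin(omega * x), torch.cos(omega * x), t.expand_as(u)],
            dim=-1,
        )
        return self.net(feats).squeeze(-1)
```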
In the numerical experiments of the next section, we do not consider systems where \(A\) and \(R\) are anything other than linear combinations of the identity and the spatial second derivative, or \(S\) is anything other than the first derivative in space. Thus we set \(k=[3,3,3,1]\) in our most general model, which is expressive enough to learn all the systems we consider. We also consider what we call _informed_ PHNN models where we assume to have prior knowledge of the operators, affecting \(k\), and also what variables \(f\) depends on.
#### 4.1.3 Leakage of constant
If \(R\) is the identity, or a linear combination of the identity with differential operators, the separation between the dissipation term and the external force term in (16) is at best unique up to a constant, which means that there may be leakage of a constant between the last two terms of the PHNN model. Hence, we must make some assumptions about these terms to separate them as desired. If we want the external force term to be small, we may use regularization and penalize large values of \(\|\hat{f}_{\theta}\|\) during training. The option to do this is implemented in the phlearn package. However, for the numerical experiments in the next section, we have instead opted to assume that the dissipative term should be zero for the zero solution, and thus correct the two terms in question after training is done, so that the model adheres to this without changing the full model. That is, if we have the model
\[\hat{g}_{\theta}^{\text{pre}}(u,x,t)=(\hat{A}_{\theta}^{[k_{1}]})^{-1}\Big{(} \hat{S}_{\theta}^{[k_{2}]}\nabla\hat{\mathcal{H}}_{\theta}(u)-\hat{R}_{\theta} ^{[k_{3}]}\nabla\hat{\mathcal{V}}_{\theta}^{\text{pre}}(u)+k_{4}\hat{f}_{ \theta}^{\text{pre}}(u,x,t)\Big{)}, \tag{19}\]
when the last training step is performed, we set
\[\nabla\hat{\mathcal{V}}_{\theta}(u) :=\nabla\hat{\mathcal{V}}_{\theta}^{\text{pre}}(u)-k_{4}\nabla \hat{\mathcal{V}}_{\theta}^{\text{pre}}(0),\] \[\hat{f}_{\theta}(u,x,t) :=\hat{f}_{\theta}^{\text{pre}}(u,x,t)-\hat{R}_{\theta}^{[k_{3}]} \nabla\hat{\mathcal{V}}_{\theta}^{\text{pre}}(0)\]
to get our final model (17), which is equivalent to (19). Then we may remove the dissipation or external forces from the model simply by setting \(k_{3}=0\) or \(k_{4}=0\). Note, however, that this correction may not work as expected if the zero solution is far outside the domain of the training data, since the neural network \(\hat{\mathcal{V}}_{\theta}^{\text{pre}}\), like most neural networks, generally extrapolates poorly. In that case, regularization is to be preferred.
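In code, the correction amounts to evaluating the trained dissipation gradient at the zero state and shifting the two terms accordingly; a sketch (ours), where `grad_V_pre`, `f_pre` and `R` stand for the trained components:

```python
import torch

def correct_leakage(grad_V_pre, f_pre, R, M, k4=1):
    """Shift a constant between the dissipation and force terms, cf. Section 4.1.3."""
    offset = grad_V_pre(torch.zeros(M))              # grad of V_pre at the zero solution
    grad_V = lambda u: grad_V_pre(u) - k4 * offset   # corrected dissipation gradient
    f = lambda u, x, t: f_pre(u, x, t) - R(offset)   # compensating shift of the force
    return grad_V, f
```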
#### 4.1.4 Algorithms
We refer to Algorithm 1 and Algorithm 2 for the training of the PHNN and baseline models, respectively.
## 5 Numerical experiments
In this section, we test how PHNN models perform on a variety of problems with different properties. In the experiments, we compare PHNNs to a baseline model that does not have the pseudo-Hamiltonian
**Data:** Observations \(D=\{(t_{1},\vec{x}^{1},\vec{u}^{1}),\ldots,(t_{N},\vec{x}^{N},\vec{u}^{N})\}\)
**Data:** Number of epochs \(K\)
**Data:** Batch size \(M_{b}\)
**Data:** Initial CNNs \(\hat{\mathcal{H}}_{\theta},\hat{\mathcal{V}}_{\theta}\)
**Data:** Initial DNN \(\hat{f}_{\theta}\)
**Data:** Matrices \(\hat{A}_{\theta}^{[k_{1}]}\), \(\hat{S}_{\theta}^{[k_{2}]}\) and \(\hat{R}_{\theta}^{[k_{3}]}\)
**Data:** \(\hat{g}_{\theta}\) defined in (17)
**Data:** Loss function \(\mathcal{L}_{g_{\theta}}\) defined in (18)
**Result:** Parameters \(\theta\) for \(g_{\theta}\)
**for** \(k\) _in_ \(1\ldots K\) **do**
**for** _batch in Batches_ **do**
\(B:=\{(u^{j_{m}},u^{j_{m}+1},t^{j_{m}})\}_{m=1}^{M_{b}}\leftarrow\) DrawRandomBatch\((D,M_{b})\);
Step using \(\mathcal{L}_{g_{\theta}}(B)\) and \(\nabla_{\theta}\mathcal{L}_{g_{\theta}}(B)\)
**end**
**end**
**Algorithm 1:** The training phase of the PHNN algorithm
**Data:** Observations \(D=\{(t_{1},\vec{x}^{1},\vec{u}^{1}),\ldots,(t_{N},\vec{x}^{N},\vec{u}^{N})\}\)
**Data:** Number of epochs \(K\)
**Data:** Batch size \(M_{b}\)
**Data:** Initial CNN \(g_{\theta}\)
**Data:** Loss function \(\mathcal{L}_{g_{\theta}}\) defined in (18)
**Result:** Parameters \(\theta\) for \(g_{\theta}\)
**for** \(k\) _in_ \(1\ldots K\) **do**
**for** _batch in Batches_ **do**
\(B:=\{(u^{j_{m}},u^{j_{m}+1},t^{j_{m}})\}_{m=1}^{M_{b}}\leftarrow\) DrawRandomBatch\((D,M_{b})\);
Step using \(\mathcal{L}_{g_{\theta}}(B)\) and \(\nabla_{\theta}\mathcal{L}_{g_{\theta}}(B)\)
**end**
**end**
**Algorithm 2:** The training phase of the baseline algorithm
structure but is otherwise as similar as possible to the PHNNs and is trained in the same way. We test either two or three PHNNs for each problem, in addition to a baseline model. The models we test on all problems are:
* PHNN (general): A PHNN model with kernel sizes \(k=[3,3,3,1]\) and an \(\hat{f}_{\theta}\) that depends on \(u\), \(x\) and \(t\);
* PHNN (informed): A PHNN model where the operators \(A\), \(S\) and \(R\) are known a priori and \(\hat{f}_{\theta}\) depends only on the variable(s) which \(f\) depend on;
* Baseline: A model consisting of one neural network that takes \(u\), \(x\) and \(t\) as input, where the output is of the same dimension as \(u\). The network consists of two parts: first a five-layer deep neural network with a hidden dimension of 20 and the tanh activation function; then a convolutional layer with kernel size five and the tanh activation function, followed by two additional layers with hidden dimension 100 and the tanh activation function. The first five layers are meant to approximate any non-linear function (say \(u\mapsto\frac{1}{2}u^{2}\) in the case of Burgers' equation), while the convolutional part is supposed to represent a finite-difference approximation of the spatial derivatives; a minimal sketch is given after this list. The baseline model needs a kernel size of five to approximate the third and fourth derivatives present in the first resp. last example we consider.
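As an illustration, a minimal PyTorch sketch (ours) of the baseline network as we read the description above; we restrict the input to \(u\) for brevity, and the exact layer layout is an assumption.

```python
import torch.nn as nn

baseline = nn.Sequential(
    # Part one: five pointwise layers (kernel-size-1 convolutions) of width 20.
    nn.Conv1d(1, 20, 1), nn.Tanh(),
    nn.Conv1d(20, 20, 1), nn.Tanh(),
    nn.Conv1d(20, 20, 1), nn.Tanh(),
    nn.Conv1d(20, 20, 1), nn.Tanh(),
    nn.Conv1d(20, 20, 1), nn.Tanh(),
    # Part two: kernel-size-5 convolution mimicking a finite-difference stencil,
    # followed by two pointwise layers of width 100.
    nn.Conv1d(20, 100, 5, padding=2, padding_mode="circular"), nn.Tanh(),
    nn.Conv1d(100, 100, 1), nn.Tanh(),
    nn.Conv1d(100, 1, 1),
)
```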
### The KdV-Burgers equation
Consider again the KdV-Burgers equation from examples 1 and 2, but with external forces:
\[u_{t}+\eta(\frac{1}{2}u^{2})_{x}-\nu u_{xx}-\gamma^{2}u_{xxx}=f(x,t). \tag{20}\]
We consider the domain \([0,P]\) and assume periodic solutions \(u(0,t)=u(P,t)\), with \(P=20\). We let \(\eta=6\), \(\nu=0.3\), \(\gamma=1\) and \(f(x,t)=\sin\left(\frac{2\pi x}{P}\right)\). We generate training data from initial conditions
\[u(x,0)=2\sum_{l=1}^{2}c_{l}^{2}\,\mathrm{sech}^{2}\left(c_{l}\Big{(}\big{(}x+ \frac{P}{2}-d_{l}P\big{)}\bmod P-\frac{P}{2}\Big{)}\right), \tag{21}\]
where \(c_{1},c_{2}\) and \(d_{1},d_{2}\) are randomly drawn from the uniform distributions \(\mathcal{U}(\frac{1}{2},2)\) and \(\mathcal{U}(0,1)\) respectively. That is, the initial states are two waves of height \(2c_{1}^{2}\) and \(2c_{2}^{2}\) centered at \(d_{1}P\) and \(d_{2}P\), with periodicity imposed.
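A sketch (ours) of how such initial states can be sampled:

```python
import numpy as np

def two_soliton_state(x, P, rng):
    """Sample an initial state of the form (21)."""
    c = rng.uniform(0.5, 2.0, size=2)
    d = rng.uniform(0.0, 1.0, size=2)
    u0 = np.zeros_like(x)
    for c_l, d_l in zip(c, d):
        arg = (x + P / 2 - d_l * P) % P - P / 2
        u0 += 2.0 * c_l**2 / np.cosh(c_l * arg) ** 2  # sech^2 = 1 / cosh^2
    return u0

rng = np.random.default_rng(42)
x = np.linspace(0.0, 20.0, 101)
u0 = two_soliton_state(x, P=20.0, rng=rng)
```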
We compare the general and informed PHNN models to the baseline model on (20) with
\[f(x,t)=\frac{3}{5}\sin\big{(}\frac{4\pi}{P}x-t\big{)}. \tag{22}\]
Ten models of each type were trained using training sets consisting of 410 states, obtained by integrating 10 random initial states of the form (21) and evaluating the solution at every time step \(\Delta t=0.05\) until \(t=2\). The PHNN models were trained for 20 000 epochs, while the baseline models were trained for 50 000 epochs. At every epoch, the models were validated by integrating them to time \(t=2\) starting at three random initial states and calculating the average mean squared error (MSE) from these. The model with the lowest validation score after the last epoch was saved as the final model.
The PHNN models are quite sensitive to variations in the initialization of the learnable parameters of the model, an issue we discuss further in Section 6.1. Hence we get a large average MSE from these models when applying them to 10 different initial states, as is evident in Table 1. However, for certain initial conditions the PHNN models consistently outperform the baseline model; see figures 1 and 2. Moreover, the best PHNN models perform well on all the test cases: of the 30 models trained, 10 of each type, the seven models with the lowest average MSE on 10 test sets are all PHNN models. Thus it would be advisable to train several PHNN models with different initializations of the neural networks and disregard those models that behave vastly differently from the others.
Figure 2 gives a demonstration of one of the main qualitative features of the PHNN models: we can remove the force and dissipation from the model and still get an accurate solution of the system without these. In this figure, we have also extracted the external force part from the baseline model by
\[\hat{f}_{\theta}(x,t):=\hat{g}_{\theta}(u,x,t)-\hat{g}_{\theta}(u,0,0). \tag{23}\]
This works here, since the external force is independent of the solution states and the integrals are zero when \(u=0\), but it would not be an option in general. Moreover, we note that there is no way to separate the conservation and dissipation terms of the baseline model. This can however be done with the PHNN models, so that we also have a model for the energy-preserving KdV equation.
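Implemented literally, (23) is a one-liner; a sketch (ours), assuming `g_theta` takes the arguments \((u,x,t)\):

```python
import torch

def baseline_force(g_theta, u, x, t):
    """Force estimate (23): subtract the model evaluated at x = 0, t = 0."""
    return g_theta(u, x, t) - g_theta(u, torch.zeros_like(x), torch.zeros_like(t))
```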
\begin{table}
\begin{tabular}{l r r} \hline \hline Model type & mean & std \\ \hline PHNN (general) & 9.60e+00 & 1.42e+01 \\ PHNN (informed) & 3.37e+01 & 3.45e+01 \\ Baseline & 1.90e+00 & 1.62e+00 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean and standard deviation of the MSE at \(t=2\), for 10 models of each type trained on the KdV–Burgers equation with the external force (22).
Figure 1: Predictions of the forced KdV–Burgers system (20) with force (22), obtained from the best of 10 models of each model type, as evaluated by the mean MSE at \(t=2\) on predictions from 10 random initial states.
When the external force is explicitly dependent on time, we are generally not able to learn a model that is accurate beyond the temporal domain of the training data. For autonomous systems, we can make do with less training data. Consider thus, instead of (22), the external force
\[f(x)=\frac{3}{5}\sin\big{(}\frac{4\pi}{P}x\big{)}. \tag{24}\]
We train models for (20) with this \(f\), where now we have training sets consisting of \(60\) states, from solutions obtained at every time step \(\Delta t=0.1\) from \(t=0\) to \(t=0.5\). As can be seen in figures 3 and 4, the PHNN models perform better than the baseline model, and especially so with increasing time. The worst-performing baseline models become unstable before the final test time \(t=4\). We also see that the most general PHNN model struggles to correctly separate the external force from the viscosity term with this amount of training data. This is however not an issue when the model is informed that the force is purely dependent on the spatial variable.
Figure 2: Solutions of the various learned models of the forced KdV–Burgers system (20) with \(f\) given by (22) at time \(t=2\). The line and the shaded area, barely visible in these plots, are the mean resp. standard deviation of \(10\) models of each type. The dashed black line is the ground truth. _Upper row:_ The original system (20) that the models are trained on. _Second row:_ The learned force approximating \(f\) in (20). _Third row:_ Predictions with the force \(f\) removed from the models. _Lower row:_ Predictions with the external force and the dissipation term removed from the models.
### The forced BBM equation
The Benjamin-Bona-Mahony (BBM) equation was introduced as an improvement on the KdV equation for modelling waves on a shallow surface [48, 3]. We consider this equation with a time- and state-dependent source term:
\[u_{t}-u_{xxt}+u_{x}+uu_{x}=f(u,t), \tag{25}\]
which can be written in the form (16) with \(A=1-\frac{\partial^{2}}{\partial x^{2}}\), \(S=\frac{\partial}{\partial x}\) and \(R=0\). This requires
\[\mathcal{H}=-\frac{1}{2}\int_{\Omega}\Big{(}u^{2}+\frac{1}{3}u^{3}\Big{)}\,\mathrm{d}x.\]
As for the KdV-Burgers equation, we train the model on a forced system starting with a two-soliton initial condition. In this case, the initial states are given by
\[u(x,0)=3\sum_{l=1}^{2}(c_{l}-1)\operatorname{sech}^{2}\bigg{(}\frac{1}{2}\sqrt {1-\frac{1}{c_{l}}}\Big{(}\big{(}x+\frac{P}{2}-d_{l}P\big{)}\bmod P-\frac{P}{2 }\Big{)}\bigg{)}, \tag{26}\]
i.e. two waves of amplitude \(3(c_{1}-1)\) and \(3(c_{2}-1)\) centered at \(d_{1}P\) and \(d_{2}P\), where \(c_{1},c_{2}\) and \(d_{1},d_{2}\) are randomly drawn from the uniform distributions \(\mathcal{U}(1,4)\) and \(\mathcal{U}(0,1)\) respectively, and with periodicity imposed on \(\Omega=[0,P]\). We set \(P=50\) in the numerical experiments. Furthermore, we let
\[f(u,t)=\frac{1}{10}\sin(t)u. \tag{27}\]
In addition to the three models described in the introduction of this section, we also test a model that is identical to the most general PHNN model except that it does not include a dissipation term. We do this because it is not clearly defined whether or how (27) should be separated into a term that is constantly dissipative and one that is not. The most general model does learn that the system has a non-zero dissipative term; however, this term added to the learned force term is close
Figure 3: Predictions of the forced KdV–Burgers system (20) with \(f\) given by (24) obtained from the best of 10 models of each model type, as evaluated by the mean MSE at \(t=0.5\) on predictions from 10 random initial states.
to the ground truth force term. This is due to a leakage of a term \(\alpha u\) for some random constant \(\alpha\) between the terms, similar to the constant leakage described in Section 4.1.3, so that we learn an approximated integral \(\hat{\mathcal{V}}_{\theta}=\alpha\frac{\Delta x}{2}\sum_{i=0}^{M}u_{i}^{2}\) with \(\hat{R}_{\theta}^{[3]}=I\) and a corresponding external force \(\hat{f}_{\theta}=(\frac{1}{10}\sin{(t)}+\alpha)u\). This leakage could be combated by regularization, i.e. by penalizing the mean absolute value of the dissipation term. We have not done that in the numerical experiments, but have instead opted to also learn a model without the dissipation term and compare to this.
For every type of model, we trained 10 distinct models with random initializations, for a total of 20 000 epochs for the PHNN models and 50 000 epochs for the baseline model. We used training data comprising 260 states, obtained from integrating 10 randomly drawn initial states with time step \(\Delta t=0.4\) from time \(t=0\) to time \(t=10\). A validation score at each epoch was generated by integrating the models to time \(t=1\) starting at three initial states and calculating the mean MSE, and the model with the lowest validation score was kept. After training was done, we integrated the models starting from 10 new arbitrary initial conditions to determine the average MSE at \(t=10\).
Figure 4: Solutions of the various learned models of the forced KdV–Burgers system with \(f\) given by (24). The line and the shaded area is the mean resp. standard deviation of predictions at \(t=4\) of 10 models of each type. The dashed black line is the ground truth. _Upper row:_ The original system. _Second row:_ The learned force approximating \(f\) in (24). _Third row:_ Predictions with the external force \(f\) removed from the models. _Lower row:_ Predictions with the force and the dissipation term removed.
By the errors of the models as reported in Table 2, we see that all PHNN models perform better than the baseline model. Interestingly, the average MSE is lowest for the PHNN model where \(A\) and \(S\) have to be learned but \(R\) is known to be zero. However, the best PHNN model of all those that we trained is one of those informed of these operators, as seen in Figure 5. Note that the baseline model cannot be expected to learn a perfect model for this example, since the discrete approximation of \(A^{-1}S=(1-\frac{\partial^{2}}{\partial x^{2}})^{-1}\frac{\partial}{\partial x}\) used when generating the training data is a discrete convolution operator with kernel size larger than five. The PHNN models are more stable and behave especially well with increasing time compared to the baseline model. Beyond \(t=10\), the accuracy of all models quickly deteriorates. We attribute this to the poor extrapolation abilities of neural networks; the models are not able to learn how the time-dependent \(f\) behaves beyond the temporal domain \(t\in[0,10]\) of the training data. Figure 6 shows the external forces learned by the PHNNs, and how well the models predict the system with these forces removed. For the general PHNN model, we have in this case also removed the dissipation term.
### The Perona-Malik equation
In addition to modelling physical systems, PDEs can be used for image restoration and denoising. For instance, if the heat equation is applied to a greyscale digital image, where the state \(u\) gives the intensity of each pixel, it will smooth out the image with increasing time. The Perona-Malik equation for so-called anisotropic diffusion is designed to smooth out noise but not the edges of an image [49]. Several variations of the equation exist. We consider the one-dimensional case, with a space-dependent force term, given by
\[u_{t}-\left(\frac{u_{x}}{1+u_{x}^{2}}\right)_{x}=f(x). \tag{28}\]
This is a PDE of the type (16) with \(A=I\), \(S=0\) and \(R=I\), and
\[\mathcal{V}[u]=\frac{1}{2}\int_{\Omega}\ln(1+u_{x}^{2})\,\mathrm{d}x.\]
Note that the equation can be written in the form \(u_{t}=\frac{\partial}{\partial x}\phi[u]+f(x)\) for \(\phi[u]=\frac{u_{x}}{1+u_{x}^{2}}\), but this \(\phi[u]\) is not the variational derivative of any integral. We consider (28) on the domain \([0,P]\) with \(P=6\), and set
\[f(x)=10\sin\left(\frac{4\pi}{P}x\right) \tag{29}\]
\begin{table}
\begin{tabular}{l r r} \hline \hline Model type & mean & std \\ \hline PHNN (general) & 9.23e-01 & 1.04e+00 \\ PHNN (no diss. term) & 1.43e-01 & 9.91e-02 \\ PHNN (informed) & 2.22e-01 & 5.33e-01 \\ Baseline & 2.72e+00 & 3.67e+00 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean and standard deviation of the MSE at \(t=10\), for 10 models of each type trained on the BBM equation with an external force.
for the following experiments. The initial conditions are given by
\[u(x,0)=a-\sum_{l=1}^{2}\Big{(}h_{l}\Big{(}\tanh\big{(}b(x-d_{l})\big{)}-\tanh \big{(}b(x-P+d_{l})\big{)}\Big{)}\Big{)}+c\sin^{2}\left(r\pi x\right)\sin\left(s \pi x\right) \tag{30}\]
where \(a\in\mathcal{U}(-5,5)\), \(b\in\mathcal{U}(20,40)\), \(c\in\mathcal{U}(0.05,0.15)\), \(d_{l}\in\mathcal{U}(0.3,3)\), \(h_{l}\in\mathcal{U}(0.5,1.5)\), \(r\in\mathcal{U}(0.5,3)\) and \(s\in\mathcal{U}(10,20)\).
We trained 10 models of each type for 10 000 epochs, on 20 pairs of data at times \(t=0\) and \(t=0.02\). This corresponds to the original noisy image and an image where the noise is almost completely removed, as judged by visual inspection. Because the step between these states is quite large, a high-order integrator is required to get an accurate approximation of the time derivative. Indeed, models trained with the second-order implicit midpoint method fail to remove the noise as fast or as accurately as the ground truth (28). Thus we instead use the fourth-order symmetric Runge-Kutta method (SRK4) introduced in [23]. This requires roughly four times the computational cost per epoch of the midpoint method, but gives considerably improved performance, as demonstrated in Figure 7.
Table 3 and Figure 8 report the results of applying the learned models to an original noisy state (30) with \(a=1\), \(b=30\), \(c=0.15\), \(d_{1}=1\), \(d_{2}=2\), \(h_{1}=h_{2}=1\), \(r=2\) and \(s=15\). Interestingly, the general PHNN model performs better than the informed one. Moreover, the PHNN models perform better when the kernel size of the first convolutional layer of \(\hat{\mathcal{V}}_{\theta}\) is three instead of two. This indicates that the model does not learn the Perona-Malik equation but rather a different PDE
Figure 5: Predictions of the forced BBM system (25) obtained from the best of 10 models of each model type, as evaluated by the mean MSE at \(t=10\) on predictions from 10 random initial states.
that denoises the image. This may be as expected when we only train on initial states and end states. An odd-numbered filter size is the norm when convolutional neural networks are used for imaging tasks, since this helps to maintain spatial symmetry, and the improved performance obtained with
Figure 6: Solutions of the learned BBM system obtained from the different models. The line and the shaded area is the mean resp. standard deviation of predictions at \(t=9\) of 10 models of each type. The dashed black line is the ground truth. _Upper row:_ The original system (25) that the models are trained on. _Middle row:_ The learned force approximating \(f\) in (25). _Lower row:_ Predictions with the force \(f\) removed from the models.
Figure 7: The result at different times from integrating the mean of five general PHNN models trained on the Perona–Malik system (28) using two different integration schemes in the training. _Upper row:_ The second-order midpoint method. _Lower row:_ The fourth-order symmetric method SRK4.
a kernel of size three in \(\hat{\mathcal{V}}_{\theta}\) can perhaps be related to this.
### The Cahn-Hilliard equation
The Cahn-Hilliard equation was originally developed for describing phase separation [7], but it also has applications in image analysis, specifically image inpainting [6, 54]. Machine learning of pattern-forming PDEs, which include the Cahn-Hilliard and Allen-Cahn equations, has been studied in [59]. Results on applying PHNN to the Allen-Cahn equation are included in our GitHub repository. However, here we only consider the Cahn-Hilliard equation, with an external force, given by
\[u_{t}-(\nu u+\alpha u^{3}+\mu u_{xx})_{xx}=f(u,x). \tag{31}\]
\begin{table}
\begin{tabular}{l r r} \hline \hline Model type & mean & std \\ \hline PHNN (general) & 5.36e-04 & 1.58e-04 \\ PHNN (informed) & 4.14e-03 & 1.03e-03 \\ Baseline & 3.09e-03 & 3.08e-04 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mean and standard deviation of the MSE at end time \(t=0.02\), for 10 models of each type trained on the Perona–Malik equation with an external force, and evaluated on predictions from 10 random initial states.
Figure 8: Perona–Malik at time \(t=0.02\), models and exact. The line plot is the average of 10 models of each type, while the shaded region indicates the standard deviation. _Upper row:_ The original system (28) that the models are trained on. _Middle row:_ The learned force approximating (29). _Lower row:_ Predictions with the force \(f\) removed from the models.
This is a dissipative PDE if the external force is zero, and it can be written in the form (16) with \(A=I\), \(S=0\) and \(R=-\frac{\partial^{2}}{\partial x^{2}}\), and
\[\mathcal{V}[u]=\frac{1}{2}\int_{\Omega}\left(\nu u^{2}+\frac{1}{2}\alpha u^{4} -\mu u_{x}^{2}\right)\,dx.\]
In the experiments, we have used \(\nu=-1\), \(\alpha=1\) and \(\mu=-\frac{1}{1000}\), and
\[f(u,x)=\begin{cases}30u&\text{if }0.3<x<0.7,\\ 0&\text{otherwise.}\end{cases}\]
The initial conditions of the training data are
\[u(x,0)=\sum_{l=1}^{2}\left(a_{l}\sin\left(c_{l}\frac{2\pi}{P}x\right)+b_{l} \cos\left(d_{l}\frac{2\pi}{P}x\right)\right) \tag{32}\]
on the domain \([0,P]\) with \(P=1\), where \(a_{l}\), \(b_{l}\), \(c_{l}\) and \(d_{l}\) are random parameters from the uniform distributions \(\mathcal{U}(0,\frac{1}{5})\), \(\mathcal{U}(0,\frac{1}{20})\), \(\mathcal{U}(1,6)\) and \(\mathcal{U}(1,6)\), respectively.
In addition to the models described in the introduction of this section, we also train a "lean" model, with \(k=[1,0,3,1]\) but no prior knowledge of the form of \(R\). For each model type, 10 randomly initialized models were trained for 20 000 (for the PHNN models) or 50 000 (for the baseline models) epochs on different randomly drawn data sets consisting of a total of 300 states, at times \(t=0\), \(t=0.004\) and \(t=0.008\). At each epoch, the model was evaluated by comparing to the ground truth solution of three states at \(t=0.008\), and the model with the lowest MSE on this validation set was kept. The resulting 10 models were then evaluated on 10 random initial conditions, and the mean MSE at the last time step was calculated from this. The mean and standard deviation over the 10 models of each type are given in Table 4, and the prediction of the model of each type with the lowest mean MSE is shown in Figure 9. In Figure 10 we give the results of the average of all models, including the standard deviation. Here we also show the learned external force and the prediction when this is removed from the model. The initial state of the plots in figures 9 and 10 is (32) with \(a_{1}=0.1\), \(a_{2}=0.06\), \(b_{1}=0.01\), \(b_{2}=0.02\), \(c_{1}=2\), \(c_{2}=5\), \(d_{1}=1\), \(d_{2}=2\).
We see from Figure 9 that the most general PHNN model may model the system moderately well, but it is highly sensitive to variations in the training data and the initialization of the neural networks in the model; from Figure 10 and Table 4 we see that this model may produce unstable predictions. In any case, the PHNN models struggle to learn the external force of this problem accurately without knowing \(R\), which we see by comparing the predictions of _PHNN (lean)_ and _PHNN (informed)_ in Figure 10, where the difference between the models is that the former has to learn an approximation \(\hat{R}_{\theta}^{[3]}\) of \(R\) and is not informed that \(f\) is not explicitly time-dependent.
## 6 Analysis of the models and further work
Here we provide some preliminary analysis of the PHNN models, which lays the groundwork for further analysis to be performed in the future.
### Stability with respect to initial neural network
The training of neural networks is often observed to be quite sensitive to the initial guesses for the weights and biases of the network. Here we test this sensitivity for both the PHNN model and the baseline model on the KdV-Burgers experiment in Section 5.1. We keep the training data fixed, re-generate the initial weights for the neural networks, and rerun the training procedure described in algorithms 1 and 2. In Figure 11 we plot the solution at the final time together with the standard deviation and the pointwise maximum and minimum values for both the baseline model and the PHNN approach, where the standard deviation and maximum/minimum is computed across an ensemble of different initial weights for the deep neural network. In Figure 12 we plot the \(L^{2}\) error at the final time step against the exact solution for varying number of epochs, where the shaded areas represent the maximum and minimum values of an ensemble of varying initial weights of the neural network.
### Spatial discretization and training data
We will strive to develop PHNN further to make the models discretization invariant. For now, we settle for noting that this is already a property of our models in certain cases: a sufficiently well-trained informed PHNN model will be discretization invariant if the involved integrals do not depend on derivatives. Of the examples considered in this paper, this applies to the BBM equation, the inviscid Burgers' equation, and the Cahn-Hilliard equation if \(\mu=0\) in (31). Figure 13 shows how the learned BBM system can be discretized and integrated on spatial grids different from where
\begin{table}
\begin{tabular}{l r r} \hline \hline Model type & mean & std \\ \hline PHNN (general) & 1.18e+00 & 9.49e-01 \\ PHNN (lean) & 2.51e-01 & 1.42e-01 \\ PHNN (informed) & 7.01e-05 & 5.69e-05 \\ Baseline & 2.85e-01 & 7.09e-02 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Mean and standard deviation of the MSE at \(t=0.02\), for 10 models of each type trained on the forced Cahn–Hilliard system (31) and evaluated on predictions from 10 random initial states.
Figure 9: Predictions of the forced Cahn–Hilliard system obtained from the best of 10 models of each model type, as evaluated by the mean MSE at \(t=0.02\) on predictions from 10 random initial states.
there was training data.
For the experiments in Section 5, we generated training data using first- and second-order finite difference operators to approximate the spatial derivatives. We then trained our models on the same spatial grid, thus making it possible to learn convolution operators of kernel size two or three that perfectly captured the operators in the data. In a real-world scenario this is unrealistic, as we would have to deal with discretization of a continuous system in space as well as in time. We chose to disregard this issue in the experiments, to give a clearer comparison between PHNN and the baseline model, not clouded by the error from the spatial discretization that would affect both. However, we have also tested our models on data generated on a spatial grid with four times as many discretization points, so that the data is generated from a more accurate approximation of the differential operators than what is possible to capture by the convolution operators. As we see in Figure 14, the PHNN models do appear to work well even with the introduction of approximation error in the spatial discretization. A more thorough study of this issue is required to gain a good understanding of how to best handle the spatial discretization.
### Proof of convergence in the idealized case
In this section we show a simplified error estimate for learning the right-hand side of an ODE. Consider thus a model ODE of the form
Figure 10: Mean and standard deviation of predictions at \(t=0.02\) obtained from integrating 10 models of each type, for the Cahn–Hilliard problem (31). The dashed black line is the ground truth. _Upper row:_ The original system (31) that the models are trained on. _Middle row:_ The learned force approximating \(f\) in (31). _Lower row:_ Predictions with the force \(f\) removed from the models.
\[\begin{cases}\dot{u}(t)&=g(u(t),t)\\ u(0)&=u_{0},\end{cases} \tag{33}\]
where \(u:[0,T)\rightarrow\mathbb{R}^{M}\) and \(g:\mathbb{R}^{M}\times[0,T)\rightarrow\mathbb{R}^{M}\). Note that the spatially discretized equation (13) can be cast into this form. In a certain sense, both the baseline model and the PHNN model try to identify \(g\) by minimizing the \(L^{p}\)-norm of the difference between the observations \(u(t^{j})\) and the predictions \(u_{\theta}(t^{j})\). On a high level, this gives us a sequence \(u_{\theta}\to u\) in \(L^{p}\), but as is well known, this would not be enough to conclude anything about the convergence of \(g_{\theta}\to g\), since \(L^{p}\) convergence in general does not imply convergence of the derivatives. However, by utilizing the fact that we have a certain control over the discretized temporal derivatives in the learning phase, we can show that \(g_{\theta}\to g\) in
Figure 11: Stability comparison of the baseline model and the general PHNN model. We retrain the models 20 times and compute the pointwise mean (plotted as a solid line) together with the standard deviation (plotted as error bars) and the pointwise maximum and minimum value (plotted as the shaded area).
the same \(L^{p}\) norm, _provided the training loss is small enough_. The following theorem makes this precise.
**Theorem 1**.: _Let \(\Delta t>0\), and \(g,\tilde{g}:\mathbb{R}^{M}\times[0,T)\to\mathbb{R}^{M}\). Assume that \(u:[0,T)\to\mathbb{R}^{M}\) solves (33) and that
Figure 12: Convergence of the solution with respect to the number of training epochs. Here, the shaded area represents the maximum and minimum errors obtained, and the solid middle line represents the mean value across different initial weights.
Figure 13: The solution at \(t=10\) obtained from models of the BBM equation (25) with \(f=0\), learned from data discretized on \(M+1=101\) equidistributed points on the domain \([0,50]\). The \(M\) above the plots indicates the number of equidistributed discretization points used in the integration. The PHNN model was trained for 10 000 epochs, while the baseline model was trained for 100 000 epochs, on 20 pairs of states at time \(t=0\) and \(t=0.4\), with initial states (26).
\(\tilde{u}^{1},\ldots,\tilde{u}^{N}\in\mathbb{R}^{M}\) obey1
Footnote 1: This essentially says that they are the predictions obtained during training with the loss function.
\[\frac{\tilde{u}^{j+1}-u^{j}}{\Delta t}=\tilde{g}(u^{j},t^{j})\qquad\text{ for }j=0,\ldots,N-1. \tag{34}\]
_Then,_
\[\left(\sum_{j=0}^{N-1}\Delta t\left|g(u(t^{j}),t^{j})-\tilde{g}(u^{j},t^{j})\right|^{p}\right)^{1/p}\leq\frac{1}{\Delta t}\left(\sum_{j=1}^{N}\Delta t\left|u^{j}-\tilde{u}^{j}\right|^{p}\right)^{1/p}+C_{g}\Delta t.\]
Proof.: Define \(u^{0},\ldots,u^{N}\in\mathbb{R}^{M}\) as
\[u^{j}:=u(t^{j})\qquad j=0,\ldots,N. \tag{35}\]
By a Taylor expansion, we have
\[\frac{u^{j+1}-u^{j}}{\Delta t}=g(u^{j},t^{j})+\left(\left[\frac{\partial g}{\partial u}(u(\xi),\xi)\right]g(u(\xi),\xi)+\frac{\partial g}{\partial t}(u(\xi),\xi)\right)\frac{\Delta t}{2}\qquad\xi\in[t^{j},t^{j+1}],\;j=0,\ldots,N-1.\]
Figure 14: Solution obtained from models of the KdV–Burgers equation (8), i.e. without a force term, learned from 410 training states, with 10 different initial conditions and points equidistributed in time between \(t=0\) and \(t=2\). _Upper row:_ Models trained on data generated on a spatial grid with \(M=100\), same as used for training. _Lower row:_ Models trained on data generated on a spatial grid with \(M=400\) and then projected down on a grid with \(M=100\).
Hence, we get
\[\left(\sum_{j=0}^{N-1}\Delta t\left|g(u(t^{j}),t^{j})-\tilde{g}(u^{j},t^{j}) \right|^{p}\right)^{1/p}\leq\left(\sum_{j=1}^{N}\Delta t\left|\frac{u^{j}-u^{j- 1}}{\Delta t}-\frac{\tilde{u}^{j}-u^{j-1}}{\Delta t}\right|^{p}\right)^{1/p}+C_ {g}\Delta t.\]
We furthermore have
\[\left(\sum_{j=1}^{N}\Delta t\left|\frac{u^{j}-u^{j-1}}{\Delta t}- \frac{\tilde{u}^{j}-u^{j-1}}{\Delta t}\right|^{p}\right)^{1/p} =\left(\sum_{j=1}^{N}\Delta t\left|\frac{u^{j}-\tilde{u}^{j}}{ \Delta t}-\frac{u^{j-1}-u^{j-1}}{\Delta t}\right|^{p}\right)^{1/p}\] \[=\left(\sum_{j=1}^{N}\Delta t\left|\frac{u^{j}-\tilde{u}^{j}}{ \Delta t}\right|^{p}\right)^{1/p}\] \[=\frac{1}{\Delta t}\left(\sum_{j=1}^{N}\Delta t\left|u^{j}- \tilde{u}^{j}\right|^{p}\right)^{1/p}.\qed\]
**Remark 1**.: We note that (34) means that the sequence \(\{\tilde{u}^{j}\}_{j}\) consists of the one-step predictions obtained during training by either the baseline or the PHNN algorithm when the forward Euler integrator is used.
**Remark 2**.: In the above theorem, \(\left(\sum_{j=1}^{N}\Delta t\left|u^{j}-\tilde{u}^{j}\right|^{p}\right)^{1/p}\) is proportional to the training loss. In other words, the error in the approximation of \(g\) we get is bounded by \(1/\Delta t\cdot(\text{Training loss})\). Hence, to achieve an accuracy \(\epsilon\) in \(g\), we need to train to a training loss of \(\epsilon\Delta t\).
## 7 Conclusions
One of the advantages of PHNN is that it facilitates incorporating prior knowledge and assumptions into the models. The advantage of this is evident from the experiments in Section 5; the informed PHNN model performs consistently very well. We envision that our models can be used in an iterative process, where one starts with the most general model and imposes priors as one learns more about the system.
As discussed in Section 6.1, the PHNN models are highly sensitive to variations in the initialized parameters of the neural networks. By the numerical results, the general PHNN model in particular seems to be more sensitive to this than the baseline model. However, the best trained PHNN models outperform the baseline models across the board. In a practical setting, where the ground truth is missing, we could train a number of models with different initialization of the neural networks and disregard those that deviate greatly from the others.
The aim of this paper has been to introduce a new method, demonstrate some of its advantages and share the code for the interested reader to study and develop further. It is the intent of the authors to also continue the work on these models. For one, we will be doing further analysis to address the issues raised in the previous section and improve the training of the models under various conditions. Secondly, we would like to extend the code to also work on higher-dimensional PDEs, and consider more advanced problems. One of the most promising uses of the methodology may be on image denoising and inpainting, motivated by the results of sections 5.3 and 5.4. Lastly,
the pseudo-Hamiltonian formulation could be used with other machine learning models than neural networks. Building on [30], we will develop methods for identifying analytic terms for one, several or all of the parts of the pseudo-Hamiltonian model (17), and compare the performance to existing system identification methods like [51, 53, 34]. We are especially intrigued by the possibility of identifying the integrals of (16), while the external forces might be best modelled by a neural network.
### Acknowledgement
This work was supported by the research project PRAI (Prediction of Riser-response by Artificial Intelligence) financed by the Research Council of Norway, project no. 308832, with Equinor, BP, Subsea7, Kongsberg Maritime and Aker Solutions. The authors thank Katarzyna Michałowska and Signe Riemer-Sørensen for helpful comments on the manuscript, Eivind Bøhn for help with coding issues, and Benjamin Tapley for both.
# Neural networks for insurance pricing with frequency and severity data: a benchmark study from data preprocessing to technical tariff
###### Abstract
Insurers usually turn to generalized linear models for modelling claim frequency and severity data. Due to their success in other fields, machine learning techniques are gaining popularity within the actuarial toolbox. Our paper contributes to the literature on frequency-severity insurance pricing with machine learning via deep learning structures. We present a benchmark study on four insurance data sets with frequency and severity targets in the presence of multiple types of input features. We compare in detail the performance of: a generalized linear model on binned input data, a gradient-boosted tree model, a feed-forward neural network (FFNN), and the combined actuarial neural network (CANN). Our CANNs combine a baseline prediction established with a GLM and GBM, respectively, with a neural network correction. We explain the data preprocessing steps with specific focus on the multiple types of input features typically present in tabular insurance data sets, such as postal codes, numeric and categorical covariates. Autoencoders are used to embed the categorical variables into the neural network and we explore their potential advantages in a frequency-severity setting. Finally, we construct global surrogate models for the neural nets' frequency and severity models. These surrogates enable the translation of the essential insights captured by the FFNNs or CANNs to GLMs. As such, a technical tariff table results that can easily be deployed in practice.
**Practical applications summary:** This paper explores how insights captured with deep learning models can enhance the insurance pricing practice. Hereto we discuss the required data preprocessing and calibration steps, and we present a work flow to construct GLMs for frequency and severity data by leveraging the insights obtained with a carefully designed neural network.
**JEL classification:** G22
**Key words:** property and casualty insurance, pricing, neural networks, embeddings, interpretable machine learning
## 1 Introduction
One of the central problems in actuarial science is the technical pricing of insurance contracts. Premiums are determined at the time of underwriting, while the actual cost of the contract
is only known when claims are processed. The technical premium is defined as the expected loss on a contract. In property and casualty (P&C) insurance, expected losses are often estimated by independently modelling the frequency and severity of claims as a function of policy and policyholder information. Hence, the modelling of historical data sets, with policyholder characteristics and the observed claim frequency and severity, is key in the design of predictive models. These historical data sets are of tabular structure, containing numerical, categorical and spatial variables.
The industry standard is the use of generalized linear models (GLMs), introduced by Nelder and Wedderburn (1972), as a predictive modelling tool for claim frequency and severity. Haberman and Renshaw (1996), De Jong and Heller (2008), Ohlsson and Johansson (2010) and Denuit et al. (2019) apply GLMs for non-life insurance pricing. Frees and Valdez (2008) and Antonio et al. (2010) convert the numerical inputs to categorical format for use in a frequency GLM. Henckaerts et al. (2018) present a data-driven method for constructing both a frequency and severity GLM on categorized input data, by combining evolutionary trees and generalized additive models to convert the numerical inputs to categorical variables.
In recent years, machine learning techniques for actuarial purposes have been rising in popularity because of their strong predictive powers. Both Wuthrich and Buser (2021) and Denuit et al. (2020) detail the use of tree-based models in an actuarial context. Liu et al. (2014) use Adaboost for claim frequency modelling. Henckaerts et al. (2021) compare the performance of decision trees, random forests, and gradient boosted trees for modelling claim frequency and severity. Moreover, their paper studies a range of interpretational tools to look under the hood of these predictive models and compares the resulting technical tariffs with managerial tools. Instead of modelling the claim frequency and severity independently, the total loss random variable can be modelled directly via a gradient boosting model with Tweedie distributional assumption, see Yang et al. (2018) and Hainaut et al. (2022). Henckaerts and Antonio (2022) combine tabular contract and policyholder specific information with telematics data in a gradient boosting model for usage-based pricing. Henckaerts et al. (2022) construct a surrogate model on top of a gradient boosting model (GBM) to translate the insights captured by a GBM into a tariff table. A benchmark study on six data sets then examines the robustness of the proposed strategy.
Deep learning methods have been popular in the field of machine learning for many years. An early study of deep learning in an actuarial context is Dugas et al. (2003), comparing the performance of a GLM, decision tree, neural network and a support vector machine for the construction of a technical insurance tariff. Ferrario et al. (2020) use neural networks for frequency modelling and discuss various preprocessing steps. Wuthrich (2019) compares the performance of neural networks and GLMs on a frequency case study. Both Wuthrich (2019) and Schelldorfer and Wuthrich (2019) propose a combined actuarial neural network (CANN) for claim frequency modelling. The CANN starts with a GLM and builds a neural network adjustment on top of the GLM predictions, via a skip connection between input and output layer.
Categorical or factor data must be transformed into numerical representations in order to be utilized by neural networks (Guo and Berkhahn, 2016). This transformation is known in the literature as embedding, which maps categorical variables into numerical vectors. The choice of embedding technique can significantly impact the neural network's performance; see, for example, the claim severity study by Kuo and Richman (2021) where embedding layers are used in both a feed-forward neural network and a transformer network. Embedding layers allow a neural network to learn meaningful representations from the categorical inputs during the training of the neural network. Delong and Kozak (2023) suggest using autoencoders as an alternative method for categorical embedding. An autoencoder is a type of neural network that
learns to compress and to reconstruct data in an unsupervised manner. Using an autoencoder, a compact, numerical representation of the factor input data results that can then be used in both frequency as well as severity modelling. Delong and Kozak (2023) compare different setups of the autoencoder for claim frequency modelling and highlight the importance of normalization of the resulting numerical representation before using it in a feed-forward neural network. Meng et al. (2022) use the same technique in a claim frequency case study with telematic input data and extend the autoencoder with convolutional layers to process input data in image format.
Table 1 gives an overview of the discussed literature on deep learning for insurance pricing. We list the treatment techniques applied to categorical input data, the model architectures used and the extent of the case studies covered by these papers. Lastly, we summarize the interpretation tools used by the authors to extract insights from the model architectures.
Historical claim data sets are often of tabular structure, meaning they can be represented in matrix notation, with each column representing an input variable and each row representing a
vector of policyholder information. Several papers recently questioned the performance of neural networks on tabular data. Borisov et al. (2022) compare 23 deep learning models on five tabular data sets and show how different tree-based ensemble methods outperform them. They highlight the predictive powers of techniques that combine gradient boosting models with neural network models, such as DeepGBM (Ke et al., 2019), which combines a GBM and a neural network for, respectively, numerical and categorical input features, and TabNN (Ke et al., 2018), which bins the input features based on a GBM and uses the resulting bins in a neural network. Shwartz-Ziv and Armon (2022) analyze eight tabular data sets and compare five ensemble methods with four deep learning methods, concluding that the best performer combines gradient boosted trees and a neural network. Grinsztajn et al. (2022) compare the performance of gradient boosted trees, a random forest and different neural network structures on 45 different tabular data sets, highlighting the importance of data normalization and categorical treatment for deep learning models.
In light of these recent papers questioning the performance of deep learning architectures on tabular data, this paper aims to explore the added value of deep learning for non-life insurance pricing using tabular frequency and severity data. For this, we extend the analyses performed in Henckaerts et al. (2021) to deep learning models. Our study is an extension of the existing literature in five directions. First, we extend the CANN model architecture from Schelldorfer and Wuthrich (2019) by combining a GBM baseline with neural network adjustments. Moreover, we study both trainable and non-trainable adjustments. Second, we compare a neural network, the proposed CANN structures and two benchmark models, a GLM and a GBM, by considering predictive accuracy and using interpretation tools. The GLM is constructed on categorized input data, following the approach outlined in Henckaerts et al. (2018); the GBM follows the setup from Henckaerts et al. (2021). Third, we study the autoencoder embedding technique from Delong and Kozak (2023) and highlight its importance in frequency-severity modelling. Because the autoencoder is trained in an unsupervised setting, the embedding can be learned on the frequency data and transferred to the severity setting, where we typically have fewer data points. Fourth, our case study is not limited to frequency modelling only but studies both frequency and severity modelling. We use four different insurance data sets to study the impact of sample size and the composition of the input data. Lastly, we use a set of interpretation techniques to capture insights from the constructed frequency and severity models and extend the surrogate GLM technique from Henckaerts et al. (2022) to deep learning architectures. We compare the resulting technical tariffs based on their Lorenz curves and look at the balance achieved by each model at portfolio level. This allows us to get a robust look at the possibilities of neural networks for frequency-severity pricing, from preprocessing steps to technical tariff.
## 2 Technical insurance pricing: notation and set-up
This paper assumes access to an insurance data set with tabular structure, meaning the data can be written in matrix notation, with each column representing a variable and each row representing a data point. We denote a data set as \(\mathcal{D}=\left(\mathbf{x}_{i},y_{i}\right)_{i=1}^{n}\), where each \(\mathbf{x}_{i}\) is a \(p\)-dimensional data point with response \(y_{i}\). Each data point \(\mathbf{x}_{i}\) can be written as a vector \(\left(x_{i,1},\ldots,x_{i,p}\right)\), where each entry \(x_{i,j}\) represents the value of input variable \(j\) for data point \(i\). When not referencing a specific observation \(i\), we often omit the subscript \(i\) and write \(\left(\mathbf{x},y\right)\), with \(\mathbf{x}=\left(x_{1},\ldots,x_{p}\right)\), each \(x_{j}\) representing a variable in our data set \(\mathcal{D}\).
The variables in our data sets can be either numerical or categorical. Assuming \(c\) categorical
variables, we order the variables in \(\mathcal{D}\) as follows:
\[\mathcal{D}=\big{(}\underbrace{x_{1},\ldots,x_{p-c}}_{\text{numerical variables}},\underbrace{x_{p-c+1},\ldots,x_{p}}_{\text{categorical variables}},\underbrace{y}_{\text{response variable}}\big{)}.\]
Insurance data sets can also contain spatial information. A spatial variable is either numerical, i.e., latitude and longitude coordinates, or categorical, i.e., postal code of residence. We do not denote spatial variables separately, but count them as a numerical or categorical variable. When introducing a data set, we specify how the spatial information is encoded.
For frequency-severity modelling, we work with a frequency data set \(\mathcal{D}^{\text{freq}}\) and a severity data set \(\mathcal{D}^{\text{sev}}\), where a data point \(\mathbf{x}_{i}\) represents information about policyholder \(i\). In \(\mathcal{D}^{\text{freq}}\), the response \(y_{i}\) is the number of claims reported by policyholder \(i\). The severity data set \(\mathcal{D}^{\text{sev}}\) consists of the policyholders from \(\mathcal{D}^{\text{freq}}\) who had at least one claim. In \(\mathcal{D}^{\text{sev}}\) we use the average claim size over all reported claims as the response \(y\). Since \(\mathcal{D}^{\text{sev}}\subseteq\mathcal{D}^{\text{freq}}\), albeit with a different response, we often omit the superscript freq and simply write \(\mathcal{D}\) and \(\mathcal{D}^{\text{sev}}\). We denote the number of observations as \(n_{f}\) for \(\mathcal{D}\) and \(n_{s}\) for \(\mathcal{D}^{\text{sev}}\), with \(n_{s}\leq n_{f}\). Note that for both \(\mathcal{D}\) and \(\mathcal{D}^{\text{sev}}\), we have the same variables \(x_{1},\ldots,x_{p}\), except that we add the extra variable _exposure-to-risk_\(e\) to the frequency data set. Exposure is the fraction of the year during which the insurance covered the policyholder. Exposure is only relevant for frequency modelling, hence we do not add this variable to the severity data set. In \(\mathcal{D}^{\text{sev}}\) we do take into account the observed number of claims for each data point, to be used as a weight in the loss function.
For a regression model \(f(\cdot)\) with input covariates \((x_{1},\ldots,x_{p})\) and target response \(y\), we write the model prediction for data point \(i\) as \(f\left(x_{i,1},\ldots,x_{i,p}\right)=\hat{y}_{i}\). We train \(f\) on a training set, denoted as \(\mathcal{D}^{\text{train}}\subset\mathcal{D}\), by choosing the model-specific parameters that minimize a chosen loss function \(\sum_{\mathbf{x}_{i}\in\mathcal{D}^{\text{train}}}\mathscr{L}\left(\hat{y}_{i},y_ {i}\right)\). The out-of-sample performance of a trained model is calculated on the test set \(\mathcal{D}^{\text{test}}=\mathcal{D}\backslash\mathcal{D}^{\text{train}}\) as \(\sum_{\mathbf{x}_{i}\in\mathcal{D}^{\text{test}}}\mathscr{L}\left(\hat{y}_{i},y_ {i}\right)\). We follow the loss functions proposed by Wuthrich and Buser (2021) and Henckaerts et al. (2021) for modelling claim frequency and severity. For claim frequency modelling, where the claim count is typically assumed to be Poisson distributed, we use the Poisson deviance:
\[D_{\text{Poisson}}(f(\mathbf{x}),\mathbf{y})=\frac{2}{n_{f}}\sum_{i=1}^{n_{f}}\left(y_ {i}\ln\frac{y_{i}}{f(\mathbf{x}_{i})}-(y_{i}-f(\mathbf{x}_{i}))\right). \tag{1}\]
Note that when using the exposure-to-risk \(e\) in the frequency model, we replace each prediction \(f(\mathbf{x}_{i})\) with \(e_{i}\cdot f(\mathbf{x}_{i})\) in the Poisson loss function above. Claim severity data are often assumed to be long-tailed and right-skewed, so we use the gamma deviance given by
\[D_{\text{gamma}}(f(\mathbf{x}),\mathbf{y})=\frac{2}{n_{s}}\sum_{i=1}^{n_{s}}\alpha_{i }\left(\frac{y_{i}-f(\mathbf{x}_{i})}{f(\mathbf{x}_{i})}-\ln\frac{y_{i}}{f(\mathbf{x}_{i} )}\right), \tag{2}\]
where the weight \(\alpha_{i}\) is the observed number of claims for data point \(i\).
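For concreteness, the deviance measures in Equations (1) and (2) translate directly into code. The sketch below assumes a Python/numpy workflow and uses function names of our own choosing; `scipy`'s `xlogy` handles observations with \(y_i=0\) in the Poisson deviance.

```python
import numpy as np
from scipy.special import xlogy  # xlogy(0, .) = 0, so policies without claims are handled cleanly

def poisson_deviance(y, y_hat, exposure=None):
    """Average Poisson deviance of Equation (1); y holds claim counts."""
    if exposure is not None:
        y_hat = exposure * y_hat  # replace f(x_i) by e_i * f(x_i)
    return 2.0 * np.mean(xlogy(y, y / y_hat) - (y - y_hat))

def gamma_deviance(y, y_hat, n_claims):
    """Average gamma deviance of Equation (2); n_claims are the weights alpha_i."""
    return 2.0 * np.mean(n_claims * ((y - y_hat) / y_hat - np.log(y / y_hat)))
```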
## 3 Deep learning architectures and preprocessing steps
### Neural network architectures
**Feed-forward neural network** A feed-forward neural network (FFNN) is a type of machine learning model that utilizes interconnected layers, represented by \(\mathbf{z}^{(m)}\) with \(m=0,\ldots,M+1\).
The input layer, represented by \(\mathbf{z}^{(0)}\), provides the network with input data, while the output layer, represented by \(\mathbf{z}^{(M+1)}\), gives the network's prediction. Between the input and output layers, there can be one or more hidden layers, represented by \(\mathbf{z}^{(1)},\dots,\mathbf{z}^{(M)}\). When there are two or more hidden layers, we call the neural network a deep learning model. Each layer \(\mathbf{z}^{(m)}\) consists of \(q_{m}\) nodes, so it can be expressed as a vector \(\mathbf{z}^{(m)}=\left(z_{1}^{(m)},\dots,z_{q_{m}}^{(m)}\right)\).
Each node in a layer, excluding the input layer, is connected to all nodes in the previous layer through weights, represented by \(W_{m}\in\mathbb{R}^{q_{m}\times q_{m-1}}\), and a bias term, represented by \(\mathbf{b}_{m}\in\mathbb{R}^{q_{m}}\). An activation function \(\sigma^{(m)}\), \(m=1,\dots,M+1\), adds non-linearity to the network and allows it to learn complex relationships between inputs and outputs. The activation function is applied to the weighted sum of inputs to a node, along with its bias. Each layer \(\mathbf{z}^{(m)}\) can be written in function of the previous layer as follows:
\[\mathbf{z}^{(m)}=\sigma^{(m)}\left(W_{m}\cdot\mathbf{z}^{(m-1)}+\mathbf{b}_{m}\right). \tag{3}\]
Calculating the output of the FFNN in function of the input consists of performing a matrix multiplication for each layer and applying the activation functions. The value of a layer \(\mathbf{z}^{(m)}\) for input \(\mathbf{x}_{i}\) is denoted as \(\mathbf{z}^{(m)}_{i}\) and the value of a specific node \(j\) as \(z^{(m)}_{ij}\). When referencing a node without a specific input, we omit the subscript \(i\) and write \(z^{(m)}_{j}\).
The inputs of the neural network are the data points in a data set \(\mathcal{D}\); the dimension \(q_{0}\) of the input layer equals the number of variables \(p\) in the data set1. We write the input layer as \((x_{1},\dots,x_{p})\) to indicate that each node in the input layer represents an input variable from the data set. The target variable \(y\) in our insurance data sets is one-dimensional, so the output layer \(z^{(M+1)}\) has only one node and \(q_{M+1}=1\). We write the output node as \(\hat{y}\). Figure 1 gives a schematic overview of a feed-forward neural network.
Footnote 1: The dimension of the input layer can be larger than \(p\) when using an encoding technique, such as one-hot encoding.
Figure 1: Structure of a feed-forward neural network with \(p\)-dimensional input layer, hidden layers \(\mathbf{z}^{(1)},\dots,\mathbf{z}^{(M)}\), with \(q_{1},\dots,q_{M}\) nodes, respectively. The network has a single output node \(\hat{y}\).
When modelling claim frequency and severity data with GLMs, an actuary typically relies on a Poisson GLM with a log-link function for frequency and a gamma GLM with a log-link function for severity modelling. To mimic this log-link relationship between covariates and output in our FFNN, we use an exponential activation function for the output layer in both the frequency and the severity model. As such, we obtain strictly positive predictions for claim counts and claim amounts.
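As an illustration, the FFNN of Figure 1 with an exponential output activation can be set up in Keras as sketched below; the layer sizes, optimizer and the Poisson loss are placeholders rather than the tuned configuration of Section 4.

```python
import tensorflow as tf

def build_ffnn(p, hidden_layers=(25, 25), activation="relu"):
    inputs = tf.keras.Input(shape=(p,))          # z^(0): one node per input variable
    z = inputs
    for q in hidden_layers:                      # hidden layers z^(1), ..., z^(M)
        z = tf.keras.layers.Dense(q, activation=activation)(z)
    # The exponential output activation mimics the log-link of a Poisson or
    # gamma GLM and guarantees strictly positive predictions.
    y_hat = tf.keras.layers.Dense(1, activation="exponential")(z)
    return tf.keras.Model(inputs, y_hat)

model = build_ffnn(p=10)
# Keras' Poisson loss equals the Poisson deviance up to terms that do not
# depend on the model, so minimizing it minimizes Equation (1).
model.compile(optimizer="adam", loss=tf.keras.losses.Poisson())
```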
**Combined actuarial neural networks** Wuthrich (2019) and Schelldorfer and Wuthrich (2019) propose a combination of a GLM with an FFNN, called the Combined Actuarial Neural Network (CANN). A CANN model calibrates a neural network adjustment on top of the GLM prediction. We refer to the GLM prediction as the _initial model prediction_, denoted as \(\hat{y}^{\text{IN}}\). We use \(\hat{y}^{\text{IN}}\) as an input node in an FFNN but do not connect this node to the hidden layers. Instead, \(\hat{y}^{\text{IN}}\) directly connects to the output node of the FFNN via a so-called skip connection. The adjustment made by the neural network on the initial model prediction is called the _adjustment model prediction_ and denoted as \(\hat{y}^{\text{NN}}\). The combination of the initial model prediction and the adjustment calibrated by the neural net is the resulting CANN model prediction, denoted as \(\hat{y}\). Figure 2 shows the structure of the CANN model.
The output node of the CANN model, \(\hat{y}\), is only connected to the initial model input \(\hat{y}^{\text{IN}}\) and the neural network adjustment \(\hat{y}^{\text{NN}}\). We use the exponential activation function in the output layer to ensure the log-link relationship between inputs and the predicted output. Because \(\hat{y}^{\text{IN}}\) is a prediction at the level of the response, we apply a log transform on the initial model predictions. The output of the CANN model is then calculated as:
\[\hat{y}=\exp\left(w_{\text{NN}}\cdot\hat{y}^{\text{NN}}+w_{\text{IN}}\cdot \ln\left(\hat{y}^{\text{IN}}\right)+b\right). \tag{4}\]
Figure 2: Structure of a Combined Actuarial Neural Network (CANN). The initial model prediction \(\hat{y}^{\text{IN}}\) is connected via a skip connection to the output node of the FFNN.
The case study in Schelldorfer and Wuthrich (2019) fixes the weights and bias in the output of the CANN as follows
\[w_{\text{NN}}=1,\,w_{\text{IN}}=1\,\text{and}\,b=0.\]
Following Gielis (2020), we call this the _fixed_ CANN, as the output weights are fixed and not trainable. In our case study, we also run experiments with trainable weights in the output layer and refer to this model as the _flexible_ CANN. This flexibility allows the training of the neural network to put more, or less, weight on the initial model prediction, which can improve the predictive accuracy of the flexible CANN compared to the fixed CANN. Moreover, the initial model input is not restricted to GLM predictions; in Section 4.4 we also run experiments with an input prediction established with a carefully trained GBM. According to Henckaerts et al. (2021), GBMs achieve a higher predictive accuracy than GLMs. Using the GBM predictions as initial model input can therefore increase the performance of the CANN model, compared to a CANN using the GLM predictions.
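A minimal Keras sketch of Equation (4) follows. The skip connection enters as a second input holding \(\ln(\hat{y}^{\text{IN}})\), and the `trainable` flag of the output layer switches between the fixed CANN (weights frozen at \(w_{\text{NN}}=w_{\text{IN}}=1\), \(b=0\)) and the flexible CANN; layer sizes are again illustrative.

```python
import tensorflow as tf

def build_cann(p, hidden_layers=(25, 25), flexible=True):
    x = tf.keras.Input(shape=(p,), name="covariates")
    log_y_in = tf.keras.Input(shape=(1,), name="log_initial_prediction")  # ln(y^IN)
    z = x
    for q in hidden_layers:
        z = tf.keras.layers.Dense(q, activation="relu")(z)
    y_nn = tf.keras.layers.Dense(1, activation="linear")(z)  # adjustment y^NN
    combined = tf.keras.layers.Concatenate()([y_nn, log_y_in])
    # With trainable=False the output stays exp(y^NN + ln(y^IN)), the fixed CANN;
    # with trainable=True the weights w_NN, w_IN and the bias b are learned.
    out = tf.keras.layers.Dense(1, activation="exponential",
                                trainable=flexible,
                                kernel_initializer="ones",
                                bias_initializer="zeros")(combined)
    return tf.keras.Model([x, log_y_in], out)
```

At training time, the second input is fed with the logarithm of the out-of-sample GLM or GBM predictions.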
### Preprocessing steps
**Continuous variables** We normalize the continuous input variables to ensure that each variable in the input data has a similar scale. This is important because most neural network training algorithms use gradient-based optimization, which can be sensitive to the scale of the input data (Sola and Sevilla, 1997). For a continuous variable \(x_{j}\) in the input data \(\mathcal{D}\), we use normalization around zero as a scaling technique. Hereto, we replace each value \(x_{i,j}\) as follows:
\[x_{i,j}\mapsto\tilde{x}_{i,j}=\frac{x_{i,j}-\mu_{x_{j}}}{\sigma_{x_{j}}}, \tag{5}\]
where \(\mu_{x_{j}}\) and \(\sigma_{x_{j}}\) are the mean and standard deviation of the variable \(x_{j}\) in the data set \(\mathcal{D}\). When using a subset \(\mathcal{D}^{\text{train}}\subset\mathcal{D}\) to train the model, we calculate the \(\mu_{x_{j}}\) and \(\sigma_{x_{j}}\) only on the data set \(\mathcal{D}^{\text{train}}\) to avoid data leakage.
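In code, this amounts to computing the statistics on the training rows only and applying them to all rows; a small sketch:

```python
import numpy as np

def normalize_continuous(x_train, x_test):
    """Equation (5) with mean/std from the training data only (no leakage)."""
    mu, sigma = x_train.mean(axis=0), x_train.std(axis=0)
    return (x_train - mu) / sigma, (x_test - mu) / sigma
```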
**Categorical variables** The FFNN and CANN models generate output by performing matrix multiplications and applying activation functions. Therefore, all inputs must be in numerical format. So-called embedding techniques convert categorical input variables to a numerical format. In this study, we utilize the _autoencoder embedding_ proposed by Delong and Kozak (2023). Autoencoders are neural networks commonly used for dimensionality reduction (Goodfellow et al., 2016). They consist of two components: an encoder and a decoder. The encoder maps a numerical input vector to a lower-dimensional representation, while the decoder reconstructs the original input from this representation. During training, the autoencoder minimizes the difference between the original and reconstructed inputs, resulting in an encoder that captures the most important characteristics of the data.
Figure 3(a) shows the general structure of such an autoencoder. It consists of an input layer of dimension \(c\), one hidden layer \(\mathbf{z}^{\text{enc}}\) of dimension \(d\) and an output layer of the same dimension as the input layer. The encoding layer is defined by the activation function \(\sigma^{\text{(enc)}}\), weights matrix \(W_{\text{enc}}\in\mathbb{R}^{d\times c}\) and bias vector \(\mathbf{b}_{\text{enc}}\in\mathbb{R}^{d}\). Similarly, the output layer is defined by activation function \(\sigma^{\text{(dec)}}\), weight matrix \(W_{\text{dec}}\in\mathbb{R}^{c\times d}\) and bias vector \(\mathbf{b}_{\text{dec}}\in\mathbb{R}^{c}\). For input \(\mathbf{x}_{i}\in\mathbb{R}^{c}\), the encoded and decoded representations are calculated as
\[\mathbf{z}_{i}^{\text{enc}} =\sigma^{\text{(enc)}}\left(W_{\text{enc}}\cdot\mathbf{x}_{i}+\mathbf{b}_{ \text{enc}}\right), \tag{6}\] \[\mathbf{x}_{i}^{\text{dec}} =\sigma^{\text{(dec)}}\left(W_{\text{dec}}\cdot\mathbf{z}_{i}^{\text {enc}}+\mathbf{b}_{\text{dec}}\right).\]
The autoencoder is trained on all data points \(\mathbf{x}_{i}\) in a data set \(\mathcal{D}\) by adjusting the weight matrices and bias vectors in order to minimize a chosen loss function \(\sum_{\mathbf{x}_{i}\in\mathcal{D}}\mathscr{L}\left(\mathbf{x}_{i}^{\text{dec}},\mathbf{x}_ {i}\right)\).
Our study employs an autoencoder to construct an embedding for multiple categorical input variables. First, we construct the one-hot encoded representation of each categorical variable (Ferrario et al., 2020). One-hot encoding maps a categorical variable \(x_{j}\) with \(L_{j}\) levels to a binary vector \(x_{j}^{\text{OH}}=\left(x_{1}^{(j)},\ldots,x_{L_{j}}^{(j)}\right)\) in the space \(\{0,1\}^{L_{j}}\). If we have \(c\) categorical variables, the dimension of all one-hot representations together equals \(\sum_{j=1}^{c}L_{j}\).
Second, we train an autoencoder using the combined one-hot representations of the categorical variables as input nodes. As such, the input layer has a dimension of \(\sum_{j=1}^{c}L_{j}\). The input layer is connected to an encoded layer of dimension \(d\), which is then connected back to the output layer of dimension \(\sum_{j=1}^{c}L_{j}\). We use the identity function as activation function for both \(\sigma^{\text{(enc)}}\) and \(\sigma^{\text{(dec)}}\) in Equation (6).
Following the construction in Delong and Kozak (2023), we apply a softmax transformation on the output layer of the autoencoder after the activation function \(\sigma^{\text{(dec)}}\). For each categorical variable \(x_{j}\), exactly one value in the input nodes \(\left(x_{1}^{(j)},\ldots,x_{L_{j}}^{(j)}\right)\) is one and the rest of the input nodes takes the value zero. Therefore, we apply the softmax activation function to the output layer of the autoencoder for each group of nodes corresponding to the one-hot encoding of a categorical variable. For each categorical variable \(x_{j}^{\text{OH}}=\left(x_{1}^{(j)},\ldots,x_{L_{j}}^{(j)}\right)\) and for each
Figure 3: Our proposed network structure combines the autoencoder embedding technique from Delong and Kozak (2023) and the CANN structure from Schelldorfer and Wüthrich (2019).
\(h\in\{1,\ldots,L_{j}\}\), the softmax transformation of the output node \(x_{h}^{(j,\text{dec})}\) is defined as:
\[x_{h}^{(j,\text{dec})}\mapsto\tilde{x}_{h}^{(j,\text{dec})}=\frac{\exp\left(x_{h}^{(j,\text{dec})}\right)}{\exp\left(x_{1}^{(j,\text{dec})}\right)+\ldots+\exp\left(x_{L_{j}}^{(j,\text{dec})}\right)},\qquad h=1,\ldots,L_{j}. \tag{7}\]
The use of the softmax activation function ensures that the values of the decoded vectors \(\left(x_{1}^{(j,\text{dec})},\ldots,x_{L_{j}}^{(j,\text{dec})}\right)\) sum up to one for each variable \(x_{j}\).
To train the autoencoder, we use the cross-entropy loss function, which is suitable because of the 0/1 values in the input data. With \(\mathbf{x}_{i}^{\text{OH}}\) the one-hot encoding of all categorical variables for policyholder \(i\) and \(\tilde{\mathbf{x}}_{i}^{\text{dec}}\) the values of the autoencoder's output layer for policyholder \(i\), the cross-entropy loss function is defined as:
\[\mathscr{L}^{\text{CE}}\left(\tilde{\mathbf{x}}_{i}^{\text{dec}},\mathbf{x}_{i}^{\text {OH}}\right)=-\sum_{j=1}^{c}\sum_{h=1}^{L_{j}}x_{ih}^{(j)}\cdot\log\left(\tilde {x}_{ih}^{(j,\text{dec})}\right). \tag{8}\]
After training the autoencoder and applying the trained autoencoder on each policyholder \(\mathbf{x}_{i}\in\mathcal{D}\), the vector of categorical inputs \((x_{i,p-c+1},\ldots,x_{i,p})\) is represented in an accurate, compact and numerical way by the vector \((z_{i1}^{\text{enc}},\ldots,z_{id}^{\text{enc}})\) as calculated by Equation (6). We call the vector \(\mathbf{z}_{i}^{\text{enc}}\) the embedding of the categorical inputs of \(\mathbf{x}_{i}\). To use the embedding together with the numerical features of \(\mathbf{x}_{i}\), we normalize the values in the nodes \(z_{1}^{\text{enc}},\ldots,z_{d}^{\text{enc}}\) by scaling the weight matrix \(W_{\text{enc}}\) and bias vector \(\mathbf{b}_{\text{enc}}\) of the trained encoder. With \(\mu_{1},\ldots,\mu_{d}\) the means, and \(\sigma_{1},\ldots,\sigma_{d}\) the standard deviations, of the values \(z_{i1}^{\text{enc}},\ldots,z_{id}^{\text{enc}}\) for all \(\mathbf{x}_{i}\in\mathcal{D}^{\text{train}}\), we scale the weight matrix \(W_{\text{enc}}\) and bias vector \(\mathbf{b}_{\text{enc}}\) of the pre-trained encoder as follows:
\[W_{\text{enc}}\mapsto\tilde{W}_{\text{enc}}=\left(\begin{array}{cccc}\frac{w_{11}}{\sigma_{1}}&\frac{w_{12}}{\sigma_{1}}&\ldots&\frac{w_{1c}}{\sigma_{1}}\\ \frac{w_{21}}{\sigma_{2}}&\frac{w_{22}}{\sigma_{2}}&\ldots&\frac{w_{2c}}{\sigma_{2}}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{w_{d1}}{\sigma_{d}}&\frac{w_{d2}}{\sigma_{d}}&\ldots&\frac{w_{dc}}{\sigma_{d}}\end{array}\right),\qquad\mathbf{b}_{\text{enc}}\mapsto\tilde{\mathbf{b}}_{\text{enc}}=\left(\begin{array}{c}\frac{b_{1}-\mu_{1}}{\sigma_{1}}\\ \frac{b_{2}-\mu_{2}}{\sigma_{2}}\\ \vdots\\ \frac{b_{d}-\mu_{d}}{\sigma_{d}}\end{array}\right). \tag{9}\]
Having access to the trained and scaled autoencoder, we now add the encoder part to the FFNN and the CANN structures by replacing the input nodes of the categorical variables in Figures 1 and 2 with the encoding part of the trained autoencoder, as shown in Figure 3(b) for the CANN. For clarity, we omit the one-hot encoding notation of each variable in Figure 3. We say the autoencoder is _pre-trained_ because we perform a first training and scaling of the autoencoder before training the neural network architectures with the added encoder. Adding the encoder to the network allows the network to finetune the weights and biases of the pre-trained encoder with respect to the considered regression task and its applicable loss function as in Equation (1) or Equation (2).
Autoencoders used to embed categorical variables provide several advantages over one-hot encoding (Delong and Kozak, 2023). Firstly, they allow for a significantly smaller dimension of the encoding compared to the dimension resulting from one-hot encoding. Secondly, autoencoders enable the encoding of all categorical variables together, capturing interactions between variables more effectively than variable-specific encoding does. Lastly, autoencoders prove advantageous in multi-task scenarios such as frequency-severity modelling. Learning to encode
categorical variables solely on the severity dataset can be problematic due to its smaller size. Since autoencoders are unsupervised learning methods, we can train the autoencoder using all data available, and add the resulting pre-trained encoder to both frequency and severity models.
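The sketch below assembles these ingredients, assuming a Keras workflow: a linear encoder of dimension \(d\), a linear decoder followed by one softmax per categorical variable as in Equation (7), the cross-entropy loss of Equation (8), and the scaling of Equation (9) applied to the trained encoder weights. Names such as `levels` (the list of the \(L_j\)) are ours.

```python
import tensorflow as tf

def build_autoencoder(levels, d):
    n_in = sum(levels)                    # total one-hot dimension, sum of the L_j
    x = tf.keras.Input(shape=(n_in,))
    z_enc = tf.keras.layers.Dense(d, activation="linear", name="encoder")(x)
    dec = tf.keras.layers.Dense(n_in, activation="linear")(z_enc)
    # Apply a softmax to each block of L_j output nodes (Equation (7)).
    blocks, start = [], 0
    for L in levels:
        blocks.append(tf.keras.layers.Softmax()(dec[:, start:start + L]))
        start += L
    ae = tf.keras.Model(x, tf.keras.layers.Concatenate()(blocks))
    # Categorical cross-entropy over the concatenated blocks reproduces Equation (8).
    ae.compile(optimizer="nadam", loss=tf.keras.losses.CategoricalCrossentropy())
    return ae

def scaled_encoder_weights(ae, x_onehot):
    """Scale the encoder as in Equation (9), using training embeddings only."""
    enc = tf.keras.Model(ae.input, ae.get_layer("encoder").output)
    z = enc.predict(x_onehot)
    mu, sigma = z.mean(axis=0), z.std(axis=0)
    W, b = ae.get_layer("encoder").get_weights()  # Keras stores W transposed, shape (n_in, d)
    return W / sigma, (b - mu) / sigma            # encoder output j scaled by sigma_j
```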
### Training and tuning neural networks
We train the FFNN and CANN models using the Adam optimization algorithm. Adam, introduced by Kingma and Ba (2014), is a stochastic gradient descent algorithm with an adaptive learning rate. Iteratively, the Adam algorithm changes the weights and biases in the network to minimize the loss between predictions \(\hat{y}\) and the observed responses \(y\). We use batches of training data for each training iteration to speed up optimization; see Keskar et al. (2016). The size of the batches is a parameter that needs to be tuned. The network size is also tuned; the number of hidden layers \(M\), and the number of nodes in each layer \(q_{1},\ldots,q_{M}\) are tuning parameters. We use a drop-out rate (Srivastava et al., 2014) to avoid overfitting, and consider this rate to be a tuning parameter as well. The drop-out rate is the percentage of nodes in each layer that are disconnected from the next and previous layer during each iteration of the Adam algorithm. The last tuning parameter is the choice of activation functions \(\sigma^{(1)},\ldots,\sigma^{(M)}\). To simplify the tuning process, we use layers of equal sizes, \(q_{1}=\ldots=q_{M}=q\), and apply the same activation function for all hidden layers, \(\sigma^{(1)}=\ldots=\sigma^{(M)}=\sigma\). Hence, only the value for \(q\) and the activation function \(\sigma\) are tuned and applied to each hidden layer.
We deploy a random grid search, introduced by Bergstra and Bengio (2012), to determine the optimal value for each tuning parameter. For each tuning parameter \(t_{k}\), with \(k=1,\ldots,K\), we define a range of possible values \([t_{k,\min},t_{k,\max}]\). The search space \(\mathcal{S}\) is the space consisting of all possible values for all tuning parameters:
\[\mathcal{S}=[t_{1,\min},t_{1,\max}]\times\ldots\times[t_{K,\min},t_{K,\max}]\,.\]
The _random grid_\(\mathcal{R}\subset\mathcal{S}\) consists of randomly drawn points in the search space \(\mathcal{S}\). Each point \(s\in\mathcal{R}\) represents a set of candidate tuning parameter values. Out of the random grid \(\mathcal{R}\), we select the optimal point \(s^{*}\) with a cross-validation scheme. In Figure 4, we give an example of a search space defined by two tuning parameters and a random grid of size nine sampled in the search space.
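The sampling step can be sketched as follows; `cv_error` stands in for the cross-validation routine described below, and the ranges correspond to the frequency settings of Table 3.

```python
import random

search_space = {
    "batch_size": (10_000, 50_000),
    "n_hidden_layers": (1, 4),
    "nodes_per_layer": (10, 50),
    "dropout": (0.0, 0.1),
    "activation": ["relu", "sigmoid", "softmax"],
}

def sample_point(space, rng):
    point = {}
    for name, spec in space.items():
        if isinstance(spec, list):                   # categorical parameter
            point[name] = rng.choice(spec)
        elif all(isinstance(v, int) for v in spec):  # integer range
            point[name] = rng.randint(*spec)
        else:                                        # continuous range
            point[name] = rng.uniform(*spec)
    return point

rng = random.Random(42)
grid = [sample_point(search_space, rng) for _ in range(40)]  # |R| = 40
# s_star = min(grid, key=cv_error)  # cv_error: hypothetical CV evaluation routine
```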
We use the extensive cross-validation scheme proposed by Henckaerts et al. (2021), as sketched in Figure 5. We divide the data set \(\mathcal{D}\) into six disjoint and stratified subsets \(\mathcal{D}_{1},\ldots,\mathcal{D}_{6}\). We define six data folds; in data fold \(\ell\), for \(\ell=1,\ldots,6\), we select a hold-out test set \(\mathcal{D}_{\ell}\) and use five-fold cross-validation (Hastie et al., 2009) on the data set \(\mathcal{D}\backslash\mathcal{D}_{\ell}\). Each cross-validation loop uses four out of the five data subsets in \(\mathcal{D}\backslash\mathcal{D}_{\ell}\) to train the neural network. The fifth subset is used both for early stopping and to calculate the validation error. The cross-validation error is the average validation error over the five validation sets. We then determine the optimal point \(s^{*}_{\ell}\in\mathcal{R}\) which minimizes the cross-validation error for data fold \(\ell\). We use the six optimal tuning parameter sets \(s^{*}_{1},\ldots,s^{*}_{6}\) to determine the out-of-sample performance on the test sets \(\mathcal{D}_{1},\ldots,\mathcal{D}_{6}\) of the data folds \(\ell=1,\ldots,6\). As such, we obtain an out-of-sample prediction for every point in the data set.
## 4 Performance comparison between benchmark models and deep learning architectures
Section 4.1 introduces the four data sets that are used in our benchmark study. In this study we compare the performance of the deep learning architectures against two benchmark models introduced in Section 4.2. Section 4.3 covers the tuning parameter grid used for both the autoencoder and the deep learning architectures. We compare the statistical out-of-sample performance of the models under study in Section 4.4. Lastly, Section 4.5 compares the autoencoder embedding against one-hot encoding in the deep learning models under consideration.
Figure 4: Example of random grid search with two tuning parameters \(t_{1}\) and \(t_{2}\). The search space \(\mathcal{S}=[t_{1,\min},t_{1,\max}]\times[t_{2,\min},t_{2,\max}]\) is shown in the figure by the dotted square. The random grid \(\mathcal{R}\) consists of nine randomly drawn points \(s_{1},\ldots,s_{9}\) from \(\mathcal{S}\). The optimal point \(s^{*}\in\mathcal{R}\) is then selected via a cross-validation scheme.
Figure 5: Representation of the 6 times 5-fold cross-validation scheme, figure from Henckaerts et al. (2021).
### Data sets
We use an Australian, Belgian, French2 and Norwegian MTPL data set, available through the R packages CASdatasets (Dutang and Charpentier, 2019) and maidrr (Henckaerts and Antonio, 2022; Henckaerts, 2021). Table 2 gives an overview of the number of records in each data set and the number of continuous, categorical and spatial variables.
Footnote 2: The French data set in the CASdatasets package contains 35 560 claims, but only 24 000 claims have a claim amount. We exclude the policies with claims but without claim amount from our study.
The spatial variables are listed separately in Table 2. The Belgian spatial variable is the postal code, which is converted to two continuous variables, the latitude and longitude coordinates of the center of that postal code. The French data includes two spatial variables: the French district, which is categorical, and the logarithm of the population density of the place of residence, which is a continuous variable. The Norwegian data has one spatial variable denoting the population density of the region of residence as a categorical variable.
### Benchmark models
To enable an assessment of the predictive performance of the neural network and CANN structures, we construct two benchmark models: a generalized linear model (GLM) and a gradient boosting model (GBM), for both frequency and severity. Predictions from these benchmark models are then also used as the initial model inputs in the CANN models. For the Belgian data set we use the GLM constructed in Henckaerts et al. (2018) and the GBM from Henckaerts et al. (2021). For the other data sets, we follow the construction methods outlined in the mentioned papers.
For the construction of the GLM, we follow the strategy proposed in Henckaerts et al. (2018) and start from a generalized additive model (GAM), including interaction effects between continuous variables. Based on the insights from the GAM, we bin the continuous variables using a regression tree. On the binned input data, we construct a GLM. We repeat the construction of the GLM six times, each time withholding a subset \(\mathcal{D}_{\ell}\), \(\ell=1,\ldots,6\). This way we obtain GLM-based out-of-sample predictions for all observations in the data set \(\mathcal{D}\).
A GBM is an ensemble method combining multiple decision trees (Friedman, 2001). A GBM has two tuning parameters: the number of trees and the depth of each tree. We use a tuning grid
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
 & Australian MTPL & Belgian MTPL & French MTPL & Norwegian MTPL \\ \hline \hline
\multicolumn{5}{c}{**Number of observations**} \\ \hline
Frequency & 67 856 & 163 212 & 668 897 & 183 999 \\
Severity & 4 624 & 18 276 & 24 944 & 8 444 \\ \hline
\multicolumn{5}{c}{**Covariates: number and type**} \\ \hline
Continuous & 1 & 4 & 2 & 0 \\
Categorical & 4 & 5 & 5 & 3 \\
Spatial & 0 & 1 & 2 & 1 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Overview of the structure of the data sets used in the benchmark study. The number of records for both frequency and severity modelling is given, as well as the number of input variables per type.
with the following values:
\[\text{Number of trees:}\,\{100,300,500,\ldots,5\,000\},\] \[\text{Depth of each tree:}\,\{1,2,3,4,5,6,7,8,9,10\}.\]
We use three hyperparameters whose values are not tuned: shrinkage = 0.01, bagging fraction = 0.75 and a minimum number of observations per node equal to 0.75% of the number of records in the training data. The loss functions in Equation (1) and Equation (2) are used for, respectively, frequency and severity modelling. We follow the repeated 5-fold cross-validation scheme as described in Section 3.3. With the optimal tuning parameters, we fit a GBM for each data fold and look at the predictions for the observations in the corresponding test set. As such, we obtain an out-of-sample prediction for every data point in the portfolio.
### Neural network models
The pre-training of the autoencoder uses the Nadam optimizer, a batch size of 1 000 and a randomly selected validation set of 20% of the frequency data set \(\mathcal{D}\) for early stopping. The number of nodes \(d\) in the encoding layer is tuned by testing across the values \(\{5,10,15\}\) and selecting the lowest value of \(d\) for which the loss \(\mathscr{L}^{\text{CE}}(\cdot,\cdot)<0.001\), as calculated with Equation (8). After the autoencoder is trained and scaled, the encoder is used in each FFNN and CANN structure, for both frequency and severity modelling.
For both the FFNN and the CANN models, a random grid \(\mathcal{R}\) of size 40 is sampled from the search space \(\mathcal{S}\) defined by the tuning parameters and their respective ranges as shown in Table 3.
The cross-validation scheme is shown in Algorithm 1, starting with the pre-processing steps and resulting in out-of-sample performances for each holdout test set \(\mathcal{D}_{1},\ldots,\mathcal{D}_{6}\). For \(\ell=1,\ldots,6\), we train a network on the data \(\mathcal{D}\backslash\mathcal{D}_{\ell}\), choosing a random validation set consisting of 20% of the training data for early stopping. With this model, we construct out-of-sample predictions on the test set \(\mathcal{D}_{\ell}\) and calculate the out-of-sample loss using the loss functions in Equation (1) and (2). Because the optimization of a neural network depends on the random initialization of the weights, we train each model three times and use the average out-of-sample loss over the three trainings. This ensures an objective out-of-sample loss evaluation, without the risk that results are driven by an accidentally good or bad weight initialization.
\begin{table}
\begin{tabular}{l l c} \hline \hline
**Tuning parameter** & **Range** & \\ \hline Activation function for hidden layers & ReLU, sigmoid, softmax\({}^{3}\) & \\ Batch size & \([10\,000,50\,000]\) & Frequency \\ & \([200,10\,000]\) & Severity \\ Number of hidden layers & \([1,4]\) & \\ Nodes per hidden layer & \([10,50]\) & \\ Dropout rate & \([0,0.1]\) & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Collection of tuning parameters and their respective ranges for the random grid search tuning strategy. This range is used for both the FFNN and CANN structures.
```
Input: model class (mclass) and corresponding tuning grid \(\mathcal{R}\);
       data \(\mathcal{D}\) with 6 disjoint stratified subsets \(\mathcal{D}_{1},\ldots,\mathcal{D}_{6}\);
for \(\ell=1,\ldots,6\) do
    leave out \(\mathcal{D}_{\ell}\) as test set;
    foreach continuous variable \(x_{j}\in\mathcal{D}\) do
        calculate mean \(\mu_{x_{j}}\) and standard deviation \(\sigma_{x_{j}}\) on the data \(\mathcal{D}\setminus\mathcal{D}_{\ell}\);
        normalize the variable \(x_{j}\) in the data set \(\mathcal{D}\) with Equation (5);
    foreach categorical variable \(x_{j}\in\mathcal{D}\) do
        construct one-hot encoding \(\left(x_{1}^{(j)},\ldots,x_{L_{j}}^{(j)}\right)\) of variable \(x_{j}\);
    for \(d\in\{5,10,15\}\) do
        train an autoencoder \(f_{d}^{\mathrm{AE}}\) using the one-hot encoding of all categorical variables in \(\mathcal{D}\setminus\mathcal{D}_{\ell}\) as input and \(d\) nodes in the encoding layer;
        evaluate the model performance on \(\mathcal{D}\setminus\mathcal{D}_{\ell}\) using loss function \(\mathscr{L}^{\mathrm{CE}}(\cdot,\cdot)\);
    select the \(f_{d}^{\mathrm{AE}}\) with the lowest \(d\) for which \(\mathscr{L}^{\mathrm{CE}}(\cdot,\cdot)<0.001\);
    calculate the scaled \(\tilde{W}_{\mathrm{enc}}\) and bias vector \(\tilde{\mathbf{b}}_{\mathrm{enc}}\) for the encoder part of \(f_{d}^{\mathrm{AE}}\);
    add the encoder part of \(f_{d}^{\mathrm{AE}}\), with \(\tilde{W}_{\mathrm{enc}}\) and \(\tilde{\mathbf{b}}_{\mathrm{enc}}\), to the model class mclass;
    foreach tuning parameter point \(s\in\mathcal{R}\) do
        for \(k\in\{1,\ldots,6\}\setminus\ell\) do
            train a model \(f_{\ell k}\) of mclass on \(\mathcal{D}\setminus\{\mathcal{D}_{\ell},\mathcal{D}_{k}\}\);
            evaluate the model performance on \(\mathcal{D}_{k}\) using loss function \(\mathscr{L}(\cdot,\cdot)\);
            valid_error\(_{\ell k}\leftarrow\frac{1}{|\mathcal{D}_{k}|}\sum_{\mathbf{x}_{i}\in\mathcal{D}_{k}}\mathscr{L}\{y_{i},f_{\ell k}(\mathbf{x}_{i})\}\);
        valid_error\(_{\ell}\leftarrow\frac{1}{5}\sum_{k\in\{1,\ldots,6\}\setminus\ell}\) valid_error\(_{\ell k}\);
    optimal parameter point \(s_{\ell}^{*}\in\mathcal{R}\) minimizes valid_error\(_{\ell}\);
    for rep \(\in\{1,2,3\}\) do
        select a validation set \(\mathcal{D}_{\mathrm{val}}\) containing 20% of the records in \(\mathcal{D}\setminus\mathcal{D}_{\ell}\);
        train a model \(f_{\ell,\mathrm{rep}}\) of mclass on \(\mathcal{D}\setminus\{\mathcal{D}_{\ell},\mathcal{D}_{\mathrm{val}}\}\) using the optimal parameter point \(s_{\ell}^{*}\) and using \(\mathcal{D}_{\mathrm{val}}\) for early stopping;
        evaluate the model performance on \(\mathcal{D}_{\ell}\) using loss function \(\mathscr{L}(\cdot,\cdot)\);
        test_error\(_{\ell,\mathrm{rep}}\leftarrow\frac{1}{|\mathcal{D}_{\ell}|}\sum_{\mathbf{x}_{i}\in\mathcal{D}_{\ell}}\mathscr{L}\{y_{i},f_{\ell,\mathrm{rep}}(\mathbf{x}_{i})\}\);
    test_error\(_{\ell}\leftarrow\frac{1}{3}\sum_{\mathrm{rep}}\) test_error\(_{\ell,\mathrm{rep}}\);
Output: optimal tuning parameters + performance measure for each of the six folds.
```
**Algorithm 1:** Pseudocode for the pipeline to calculate out-of-sample performances with the neural network structures from Section 3.1, including the data pre-processing steps, the cross-validation scheme outlined in Henckaerts et al. (2021) with the random grid search methodology, and the repeated out-of-sample loss calculation to avoid solutions driven by local minima.
### Out-of-sample performances
The two benchmark models enable a comparison between their performance and the performance obtained with the proposed neural network architectures. Moreover, they serve as initial model input for the CANN models. We investigate the predictive performance of seven models for each data set: the GLM, the GBM, the FFNN, the CANN with GLM input (with fixed and flexible output layer), and the CANN with GBM input (with fixed and flexible output layer). Figure 6 compares the out-of-sample deviances of the benchmark models and the neural network structures for each withheld test set, measured in Poisson deviance (1) for frequency (left-hand side) and gamma deviance (2) for severity (right-hand side).
Among the four data sets analyzed, the combination of a neural network and a gradient boosting model (CANN GBM flex) consistently yields the lowest deviance when modelling claim frequency. This aligns with recent research highlighting the predictive performance of combining a gradient boosting model with a neural network (Borisov et al., 2022; Ke et al., 2019; Shwartz-Ziv and Armon, 2022). However, for the Norwegian data set, which has few input variables, the effect is less pronounced, with similar performance observed across all models except for the feed-forward neural network, which exhibits slightly higher deviance. Regarding claim severity modelling, no single model consistently achieves the lowest deviance across all data sets and test sets. For the Australian and Norwegian data sets, all models perform comparably in terms of deviance. The CANN models with GBM input demonstrate the lowest deviance for the Belgian data set, while for the French data set, the CANN model with GLM input achieves the best results. Notably, the CANN models with a flexible output layer structure outperform those with a fixed output layer in most cases, for both frequency and severity modelling. This suggests that the more adaptable combination of the initial model input and the neural network adjustment leads to reduced deviance.
### Comparison of categorical embedding methods
We investigate the impact of the autoencoder embedding compared to directly utilizing one-hot encoded categorical variables. For each data set, we train a FFNN and a CANN model using the one-hot encoding of each categorical variable and a FFNN and a CANN model using the autoencoder embedding. We do not tune the models but choose a set of tuning parameters that we apply to all shown models. This means the deviance of each model is not relevant here, only the difference in deviance between the model with one-hot encoding and the model with autoencoder embedding. This approach allows us to isolate the embedding technique's effect on a model's predictive performance. Each model is trained on \(\mathcal{D}\setminus\mathcal{D}_{1}\), and the out-of-sample performance is calculated on the out-of-sample test set \(\mathcal{D}_{1}\).
Figure 7 displays the predictive accuracy of each model under consideration, with the frequency models in the top and the severity models in the bottom row. For frequency modelling, the autoencoder embedding has the most pronounced effect on the performance of the FFNNs, leading to a lower deviance compared to models utilizing one-hot encoding. However, the impact on CANN models appears to be negligible. In the case of severity modelling, both FFNNs and CANNs demonstrate an improved predictive performance when using the autoencoder embedding. Only the FFNN on the Australian data set and the CANN model on the Belgian data set perform similarly when comparing the one-hot encoding and the autoencoder embedding for claim severity modelling. The reduced deviance in most severity models highlights the benefits of unsupervised learning through the autoencoder approach.
Figure 6: Out-of-sample performance comparison between the different models for each data set. The left-hand side shows the performance of the frequency models and the right-hand side for the severity models. From top to bottom, we show the results on the Australian, Belgian, French and Norwegian data sets. The deviances for the GLM and GBM on the Belgian data correspond to the results reported in, respectively, Henckaerts et al. (2018) and Henckaerts et al. (2021).
## 5 Looking under the hood: interpretation tools and surrogate models
First, we use two model interpretation tools to look under the hood of the constructed models. Second, we translate the model insights into a tariff structure by constructing GLM surrogates along the work flow presented in Henckaerts et al. (2022). All results shown in this section are calculated using data fold one, meaning the models are trained using data subsets \(\mathcal{D}_{2},\ldots,\mathcal{D}_{6}\) and the shown results are calculated on the test set \(\mathcal{D}_{1}\).
### Variable importance
We measure variable importance using the permutation method from Olden et al. (2004). Hereby, we consider the average change in predictions when a variable is randomly permuted. For a trained model \(f\), we measure the importance of a variable \(x_{j}\) by calculating
\[\mathrm{VIP}_{x_{j}}=\sum_{\mathbf{x}_{i}\in\mathcal{D}}\mathrm{abs}\big{(}f\left( x_{i,1},\ldots,x_{i,j},\ldots,x_{i,p}\right)-f\left(x_{i,1},\ldots,\tilde{x}_{i,j}, \ldots,x_{i,p}\right)\big{)}, \tag{10}\]
where \(\tilde{x}_{i,j}\) is a random permutation of the values observed for \(x_{j}\) in the data set \(\mathcal{D}\). A large value for \(\mathrm{VIP}_{x_{j}}\) indicates that the variable significantly influences the model output and is therefore considered important. Figure 8 shows the variable importance of each variable in the four data sets for both frequency and severity modelling. For clarity, we show the relative VIP of each variable, calculated as
\[\overline{\mathrm{VIP}}_{x_{j}}=\frac{\mathrm{VIP}_{x_{j}}}{\sum_{x_{j}\in\mathcal{D}}\mathrm{VIP}_{x_{j}}},\qquad\text{where the sum runs over all variables $x_{j}$.} \tag{11}\]
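A compact sketch of Equations (10) and (11), assuming a trained model exposing a `predict` method and a numeric feature matrix:

```python
import numpy as np

def relative_variable_importance(model, X, seed=0):
    """Permutation importance: Equation (10) per column, normalized as in (11)."""
    base = model.predict(X)
    rng = np.random.default_rng(seed)
    vips = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])  # random permutation of x_j
        vips[j] = np.abs(model.predict(X_perm) - base).sum()
    return vips / vips.sum()
```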
Figure 7: Comparison of one-hot encoding and autoencoder embedding on the out-of-sample performance of both the FFNN and the CANN model. Top row shows the effect on frequency modelling and bottom row on severity modelling.
By comparing the variable importance of the GBM with the CANN model, we can evaluate the impact of the neural network adjustment component within the CANN. In general, most variables show similar importance in both the GBM and the CANN GBM flexible models, indicating that the adjustment calibrated by the neural network does not substantially alter the importance of the relationships between input variables and the response variable. However, notable changes are observed for certain variables, such as the postal code in the frequency model for the Belgian data set and the vehicle age and brand in the frequency model for the French data set. When we compare the variable importance of the GBM and CANN GBM with the FFNN, we observe more substantial changes, particularly in claim severity modelling. This shows that the FFNN models a significantly different relationship between the input variables and the output variable, compared to the GBM and CANN GBM flexible model.
### Partial dependence effects
We consider partial dependence effects (Hastie et al., 2009; Henckaerts et al., 2021) to explore the relationship between an input variable and the model output. Let the variable space \(X_{j}\) be the vector containing all possible values for variable \(x_{j}\). For a trained model \(f\), the partial dependency effect of a variable \(x_{j}\) is the vector calculated as
\[\mathrm{PD}_{x_{j}}=\left\{\frac{1}{|\mathcal{D}|}\sum_{\mathbf{x}_{i}\in\mathcal{ D}}f\left(x_{i,1},\ldots,X_{o,j},\ldots,x_{i,p}\right)\text{ ; }\forall X_{o,j}\in X_{j}\right\}, \tag{12}\]
where \((x_{i,1},\ldots,X_{o,j},\ldots,x_{i,p})\) is the data point \(\mathbf{x}_{i}\in\mathcal{D}\) with element \(x_{i,j}\) replaced by the value \(X_{o,j}\in X_{j}\). The vector \(\mathrm{PD}_{x_{j}}\) can be seen as the average prediction on \(\mathcal{D}\), while letting the
Figure 8: Relative variable importance in the GBM, the FFNN and the CANN GBM flexible. Top row shows the effects for the frequency models, the bottom row for the severity models.
variable \(x_{j}\) range over all possible values in the variable space \(X_{j}\). A partial dependence plot is the plotted effect between \(X_{j}\) and \(\text{PD}_{x_{j}}\). Equation (12) can be extended to a two-way interaction partial dependence effect by letting two variables range over their respective variable spaces.
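In code, the one-way effect of Equation (12) reduces to averaging predictions over the data while clamping one column to each value in the variable space; a minimal sketch:

```python
import numpy as np

def partial_dependence(model, X, j, grid_values):
    """Average prediction while variable j ranges over grid_values."""
    effect = []
    for v in grid_values:            # X_{o,j} in the variable space X_j
        X_mod = X.copy()
        X_mod[:, j] = v              # replace x_{i,j} by v for every observation
        effect.append(model.predict(X_mod).mean())
    return np.array(effect)
```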
Figure 9 shows the partial dependence effect between the policyholder's age and the predicted claim frequency across the four data sets in the benchmark study. We compare the effects of the benchmark GBM, the FFNN and the CANN GBM flexible. The effect in all three models is similar for the Australian, French and Norwegian data. However, for the Belgian data set, the GBM and CANN GBM flexible show a similar partial dependence effect, while the FFNN shows a very different pattern. The partial dependence effect of this FFNN shows a less complex, less nuanced relationship between age of the policyholder and claim frequency. Across the four data sets, the average predicted claim frequency decreases with age, which is an expected relationship between age and claim frequency. For the Belgian and French data sets, we observe an increasing effect for the older ages.
Figure 10 displays the partial dependence effect of the policyholder's age when calibrated on the claim severity data. Similar to the effects portrayed in Figure 9, the three models applied to the Australian and Norwegian data sets exhibit a comparable effect. For the Belgian and French data sets, the FFNN showcases a notably distinct partial dependence effect. Specifically
Figure 10: Partial dependence effect of the policyholder’s age across the four data sets, claim severity models. We compare the benchmark GBM, the FFNN and the CANN GBM flexible.
Figure 9: Partial dependence effect of the policyholder’s age across the four data sets, claim frequency models. We compare the benchmark GBM, the FFNN and the CANN GBM flexible.
for the French data, the FFNN model reveals an almost flat effect across all age groups.
Figure 11 shows the partial dependence effect of the bonus-malus score for the Belgian and French frequency data sets. For both data sets, the three models show an increasing relation between the level occupied in the bonus-malus scale and the expected claim frequency. The partial dependence effect in the FFNN is distinctly smoother than the effects calibrated by the GBM and the CANN GBM flexible, again showing the less complex, less nuanced relationships captured by the FFNN.
We consider the partial dependence effect of the postal code in the Belgian frequency data set in Figure 12. We compare the partial dependence effect with the empirical claim frequency in the Belgian data, calculated as the number of claims per postal code divided by the sum of the exposure for that postal code. The effect in the GBM and CANN GBM flexible is very similar, with a higher expected claim frequency around the capital of Belgium. The effect in the FFNN also shows a higher expected number of claims in the capital but the calibrated spatial effect is much smoother. This aligns with the smoother partial dependence effects for the policyholder age and bonus-malus in the Belgian frequency FFNN model. Empirically, we see a higher concentration of claims per unit of exposure in and around the capital and for some postal codes in the west and east of Belgium. This effect is visible for the GBM and CANN model but not for the FFNN.
### Surrogate models for practical applications
**Surrogate model construction** Henckaerts et al. (2022) present a workflow for constructing a surrogate GLM by leveraging insights obtained with a black box model. In our study, we apply this technique to the CANN model with GBM input, as discussed in Section 3 and calibrated in Section 4. To create the surrogates, we first calculate the partial dependence effect for each individual variable and for interactions between any two variables, as discussed in Section 5.2. Next, we use the dynamic programming algorithm introduced by Wang and Song (2011) to segment the input data into homogeneous groups based on these partial dependence effects. On the resulting binned data set, we fit a generalized linear model. Constructing the surrogate
Figure 11: Partial dependence effect of the bonus-malus score for the Belgian data set, claim frequency model (left) and claim severity model (right). We compare the benchmark GBM, the FFNN and the CANN GBM flexible.
GLM on the segmented frequency and severity data leads to a tabular premium structure incorporating the insights captured by the CANN architectures.
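A simplified sketch of this workflow is given below, with two explicit assumptions: one-dimensional k-means serves as a stand-in for the dynamic programming segmentation of Wang and Song (2011), and the GLM fit relies on statsmodels; all names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.cluster import KMeans

def bin_by_pd(x_col, grid_values, pd_effect, n_bins):
    """Group the levels of one variable into bins with homogeneous PD values."""
    labels = KMeans(n_clusters=n_bins, n_init=10).fit_predict(
        np.asarray(pd_effect).reshape(-1, 1))
    level_to_bin = dict(zip(grid_values, labels))
    return np.array([level_to_bin[v] for v in x_col])

# Fit the surrogate Poisson GLM on dummy-encoded bins (illustrative names):
# binned = pd.DataFrame({"age_bin": ..., "bm_bin": ...}).astype("category")
# X = pd.get_dummies(binned, drop_first=True).astype(float)
# surrogate = sm.GLM(y, sm.add_constant(X),
#                    family=sm.families.Poisson(), exposure=e).fit()
```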
Figure 13 shows the partial dependence effects of the CANN GBM flexible for the bonus-malus
Figure 12: Partial dependence relationship of the spatial variable and the expected number of claims in the GBM, FFNN and CANN GBM flexible for the Belgian data. We compare the modelled effects with the empirical claim frequency in the Belgian data set.
Figure 13: Partial dependence plots for three variables in the French data set; left to right: bonus-malus scale, policyholder age and region. In color, we show the binning of the input data with the frequency surrogate GLM. Each color represents one bin of the input variable.
score, policyholder age and the region variable from the French data set, with respect to frequency modelling. In color, we show the obtained data segmentation. The so-called surrogate GLM is fitted on the segmented input data. The benchmark GLM, constructed via the approach in Henckaerts et al. (2018) (hereafter referred to as the binned GLM), is also fitted on binned data; it is therefore insightful to compare both the predictive accuracy and the variables selected by the two techniques. To avoid data leakage in the comparison between the two models, we compare the predictive accuracy on a withheld test set.
Table 4 shows the variables included in the binned GLMs and the surrogate GLMs for the Australian, Belgian and French data sets. The out-of-sample performance of these models is evaluated on the withheld data set \(\mathcal{D}_{1}\). We exclude the Norwegian data set from the surrogate fitting, as it consists only of categorical variables. The surrogate technique selects more variables and performs better on the out-of-sample test set than the binned GLM. This finding is consistent across all three data sets. Hence, the surrogate GLM benefits from the insights learned from the neural network adjustments in the CANN compared to the direct construction of the binned GLM.
**Identification of risk profiles** We estimate the number of claims and the claim severities using the surrogate models constructed for the frequency and severity CANN GBM flexible, respectively.
We construct a low, medium, and high-risk profile based on the frequency surrogate GLM for the French data set. Table 5 compares these profiles via their expected claim frequency according to the surrogate GLM and the CANN GBM flexible model. We compare the influence of each variable on the assessed risk using two local interpretation tools in Figure 14. For the GLM, we
Table 4: Comparison between the benchmark GLM and the surrogate GLM for frequency modelling on the Australian, Belgian and French data sets. The surrogate GLM is constructed from the CANN with GBM input and flexible output layer. The last row shows the Poisson deviance of both GLMs on the out-of-sample data set \(\mathcal{D}_{1}\).
show the fitted coefficients on the response scale. A value lower (higher) than one means the feature's value leads to a lower (higher) prediction than the baseline prediction obtained with the intercept of the GLM. The uncertainty of each contribution is shown with the 95% confidence interval. Shapley values (Shapley et al., 1953) are used to compare the feature contributions of the GLM to the influences in the CANN model. A positive (negative) Shapley value indicates that this feature's value leads to a higher (lower) than average prediction. The effects in the GLM and the CANN model mostly align. We see a strong impact of the variables region, driver age and bonus-malus score on the predicted number of claims. The variable area was not selected in the surrogate GLM construction, and its Shapley value is negligible in all three risk profiles.
## 6 Managerial insights: a comparison between technical tariff structures
We now combine the predictions for claim frequency and severity to a technical tariff. For each data set and each fold, we make predictions for all observations in the test set \(\mathcal{D}_{\ell}\), using the model trained on the data subsets \(\mathcal{D}\setminus\mathcal{D}_{\ell}\). As such, we obtain out-of-sample predictions for both the expected number of claims and the expected claim severities for each policyholder in the data set. The predicted loss, or technical tariff, for each policyholder is the expected number of claims times the expected claim severity.
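As a minimal numerical sketch of this combination step (array names are hypothetical; in practice the predictions come from the out-of-sample folds described above):

```python
import numpy as np

def technical_tariff(freq_pred: np.ndarray, sev_pred: np.ndarray) -> np.ndarray:
    """Predicted loss per policyholder: expected claim count times severity."""
    return freq_pred * sev_pred

def balance_ratio(tariff: np.ndarray, observed_losses: np.ndarray) -> float:
    """Total predicted over total observed losses; 1.0 means perfect balance."""
    return float(tariff.sum() / observed_losses.sum())

# Toy usage with synthetic predictions and losses.
rng = np.random.default_rng(0)
freq = rng.uniform(0.02, 0.3, size=1000)   # expected number of claims
sev = rng.uniform(500, 3000, size=1000)    # expected claim severity
tariff = technical_tariff(freq, sev)
print(balance_ratio(tariff, observed_losses=rng.gamma(2.0, 100.0, size=1000)))
```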
Table 6 shows the total predicted loss next to the total loss observed in each data set. We compare the results from the benchmark GLM and GBM, the CANN GBM flexible and the surrogate GLM. We also show the ratio of predicted losses over the observed losses. A ratio of one means the model has perfect balance at portfolio level. For the Norwegian data set, the predicted losses are very close to the observed losses for all models. For the Australian and Belgian data, both GLM models are close to balance, meaning the predicted losses are close to the observed losses. Although a canonical link GLM satisfies the balance property (Nelder and Wedderburn, 1972), our severity models use a gamma distribution with non-canonical log-link,
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Variables** & **Low risk** & **Medium risk** & **High risk** \\ \hline Vehicle power & 4 & 6 & 9 \\ Vehicle age & 3 & 2 & 1 \\ Policyholder age & \([21,26[\) & \([30,40[\) & \(\geq 70\) \\ Bonus-malus scale & 50 & 70 & 190 \\ Vehicle brand & B12 & B5 & B11 \\ Fuel type & Regular & Regular & Diesel \\ Population density of area & 2.71 & 665.14 & \(22\,026.47\) \\ District of residence & Midi-Pyrenees & Basse-Normandie & Corse \\ \hline
**Predicted number of claims** & & & \\ \hline Surrogate GLM & 0.020 & 0.106 & 0.361 \\ CANN GBM flexible & 0.021 & 0.101 & 0.519 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Example of a low, medium and high risk profile for the French data set, using the surrogate based on the CANN model with GBM input and flexible output layer, withholding test set one. We compare the predicted number of claims for each profile.
and the tariff structures shown here are based on out-of-sample predictions. The GBM and the CANN model deviate slightly from perfect balance.
To compare tariff structures, we follow the methodology from Henckaerts and Antonio (2022) using risk scores. For a model \(f\), let \(F_{n}\) be the empirical cumulative distribution function of the predictions made by the model \(f\). For each policyholder \(i\), the risk score \(r_{i}^{f}\) is the evaluation of \(F_{n}\) in \(f(\mathbf{x}_{i})\). For frequency-severity modelling, with a frequency model \(f^{\text{freq}}\) and a severity model \(f^{\text{sev}}\), the risk score is calculated as
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & **Observed** & **GLM** & **GBM** & **CANN** & **Surrogate GLM** \\ \hline \multicolumn{6}{l}{**Observed and predicted losses**} \\ \hline Australia (AU\$) & 9 314 604 & 9 345 113 & 9 136 324 & 9 154 467 & 9 355 718 \\ Belgium (€) & 26 464 970 & 26 399 027 & 26 079 709 & 25 720 143 & 26 345 969 \\ France (€) & 58 872 147 & 56 053 341 & 56 207 993 & 58 629 584 & 57 048 375 \\ Norway (NOK) & 206 649 080 & 206 634 401 & 206 475 980 & 206 494 683 & - \\ \hline \multicolumn{6}{l}{**Ratio of predicted losses over observed losses**} \\ \hline Australia & - & 1.00 & 0.98 & 0.98 & 1.00 \\ Belgium & - & 1.00 & 0.99 & 0.97 & 1.00 \\ France & - & 0.95 & 0.95 & 1.00 & 0.97 \\ Norway & - & 1.00 & 1.00 & 1.00 & - \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison between the total observed losses for each data set and the total predicted losses for the GLM, GBM, CANN GBM flex and the surrogate GLM. We also show the ratio of total predicted losses over observed losses.
Figure 14: Comparison between the low, medium and high risk profiles in the frequency models on the French data set according to the Shapley values of the CANN GBM flexible model (top row) and the fitted coefficients in the surrogate GLM (bottom row).
\[r_{i}^{f}=F_{n}\left\{f^{\text{freq}}(\mathbf{x}_{i})\times f^{\text{sev}}(\mathbf{x}_{i}) \right\}. \tag{13}\]
We compare the risk scores of multiple models using Lorenz curves (Lorenz, 1905). For a model \(f\), the Lorenz curve evaluated in \(s\in[0,1]\) is
\[LC^{f}(s)=\frac{\sum_{i=1}^{n}L_{i}\,\mathbbm{1}\{r_{i}^{f}\leq s\}}{\sum_{i=1} ^{n}L_{i}}, \tag{14}\]
with \(L_{i}\) the observed loss for policyholder \(i\). We visualize the Lorenz curve by plotting the pairs \(\left(s,LC^{f}(s)\right)\), for \(s\in[0,1]\). The Lorenz curve shows the accumulation of losses, ordered by the risk score \(r_{i}^{f}\) obtained with model \(f\). A model with a better risk classification accumulates losses more slowly for low risk scores and faster for high risk scores. A Lorenz curve further away from the 45° line of equality represents a better risk classification.
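A small sketch of equations (13) and (14), assuming arrays of out-of-sample predictions and observed losses (names are placeholders):

```python
import numpy as np

def risk_scores(freq_pred: np.ndarray, sev_pred: np.ndarray) -> np.ndarray:
    """Eq. (13): empirical CDF F_n evaluated at f_freq(x_i) * f_sev(x_i)."""
    pred = freq_pred * sev_pred
    # rank of each prediction divided by n gives the empirical CDF value
    return np.searchsorted(np.sort(pred), pred, side="right") / len(pred)

def lorenz_curve(scores: np.ndarray, losses: np.ndarray, grid: np.ndarray):
    """Eq. (14): share of total losses with risk score at most s."""
    total = losses.sum()
    return np.array([losses[scores <= s].sum() / total for s in grid])

# Usage: lc = lorenz_curve(risk_scores(f_freq, f_sev), observed_losses,
#                          np.linspace(0, 1, 101))
```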
Figure 15 shows the Lorenz curves for the four data sets in the benchmark study where the top row compares the benchmark GBM with the CANN GBM flexible model and the bottom row the benchmark GLM with the surrogate GLM. For the Australian and French data sets, the tariff structure from the CANN model is (slightly) preferred to that of the benchmark GBM according to the Lorenz curve. For the Belgian and Norwegian data sets, the CANN model is also preferred, but the two curves are very similar. For the Australian, Belgian and French data sets, the Lorenz curves of the benchmark GLM and the surrogate GLM show a very similar pattern, with the surrogate model having a preferable curve over the benchmark GLM, showing that the higher predictive accuracy of the surrogate model also results in a slightly better risk classification in the tariff structure.
Figure 15: Lorenz curve comparison between the GBM benchmark model and the CANN GBM flexible model in the top row. The bottom row compares the GLM benchmark with the surrogate GLM. A dashed line is added to show the line of equality.
## 7 Conclusion
This paper explores the potential of deep learning models for the analysis of tabular frequency and severity data in non-life insurance pricing. We detail a benchmark study using extensive cross-validation on multiple model architectures. Categorical input features are embedded by use of an autoencoder of which the encoder part is integrated into the FFNN and CANN structures. Our results demonstrate the performance gains achieved using the autoencoder embedding technique for categorical input variables, especially in modelling claim severities. Interpretation techniques are applied to both frequency and severity modelling.
The literature often questions the value created when analyzing tabular data with deep learning models. Indeed, our feed-forward neural network does not improve upon a carefully designed GBM. Combining gradient-boosted trees and neural networks leads to higher accuracy for frequency modelling. This aligns with what we see in other fields, where GBM and neural network combinations outperform the corresponding stand-alone models on tabular data. In modelling the severity data, out-of-sample deviances are relatively similar across benchmark models and deep learning architectures. This suggests that the added value created by using the deep learning approach is limited when applied to these datasets. Data sets with a high dimensional set of input variables and/or complex input features might benefit more from using deep learning models to model claim frequency and severity data. Gao et al. (2022), for instance, use deep learning models to analyze high-frequency time series of telematics data.
The end result of our study is a technical tariff structure on categorized input data, using a GLM that is carefully designed as a global surrogate for the deep learning model. Hence, this surrogate GLM leverages the insights from the deep learning models. The surrogate GLMs lead to a lower out-of-sample deviance than the benchmark GLMs and have a better risk differentiation. The workflow to construct a GLM as a global surrogate for a deep learning model can potentially be of interest to insurance companies aiming to harvest refined insights from their available data while aiming for an interpretable, explainable technical tariff. The latter consideration is of much interest in light of the GDPR algorithmic accountability clause. Further research could look into data sets with high dimensional feature sets, including features with images, text or times series, and explore the value of deep learning architectures via a carefully designed benchmark study, as outlined in this paper.
## Acknowledgements
Katrien Antonio gratefully acknowledges funding from the FWO and Fonds De La Recherche Scientifique - FNRS (F.R.S.-FNRS) under the Excellence of Science (EOS) program, project ASTeRISK Research Foundation Flanders [grant number 40007517]. The authors gratefully acknowledge support from the Ageas research chair on insurance analytics at KU Leuven, from the Chaire DIALog sponsored by CNP Assurances and the FWO network W001021N. The authors thank Simon Gielis for his contributions (as MSc student) in the early stage of the research.
## Declaration of interest statement
The authors declare no potential conflict of interests.
## Supplemental material
The results in this paper were obtained using R. All code is available through github: [https://github.com/freekholvoet/NNforFreqSevPricing](https://github.com/freekholvoet/NNforFreqSevPricing). An R Markdown demonstration is available on that github page, called NNforFreqSevPricing.nb.html.
|
2308.13057 | Data-Side Efficiencies for Lightweight Convolutional Neural Networks | We examine how the choice of data-side attributes for two important visual
tasks of image classification and object detection can aid in the choice or
design of lightweight convolutional neural networks. We show by experimentation
how four data attributes - number of classes, object color, image resolution,
and object scale affect neural network model size and efficiency. Intra- and
inter-class similarity metrics, based on metric learning, are defined to guide
the evaluation of these attributes toward achieving lightweight models.
Evaluations made using these metrics are shown to require 30x less computation
than running full inference tests. We provide, as an example, applying the
metrics and methods to choose a lightweight model for a robot path planning
application and achieve computation reduction of 66% and accuracy gain of 3.5%
over the pre-method model. | Bryan Bo Cao, Lawrence O'Gorman, Michael Coss, Shubham Jain | 2023-08-24T19:50:25Z | http://arxiv.org/abs/2308.13057v1 | # Data-Side Efficiencies for Lightweight Convolutional Neural Networks
###### Abstract
We examine how the choice of data-side attributes for two important visual tasks of image classification and object detection can aid in the choice or design of lightweight convolutional neural networks. We show by experimentation how four data attributes - number of classes, object color, image resolution, and object scale affect neural network model size and efficiency. Intra- and inter-class similarity metrics, based on metric learning, are defined to guide the evaluation of these attributes toward achieving lightweight models. Evaluations made using these metrics are shown to require \(30\times\) less computation than running full inference tests. We provide, as an example, applying the metrics and methods to choose a lightweight model for a robot path planning application and achieve computation reduction of \(66\%\) and accuracy gain of \(3.5\%\) over the pre-method model.
Efficient Neural Network, Convolutional Neural Network, Image Classification, Object Detection
## 1 Introduction
Traditionally for computer vision applications, an algorithm designer with domain expertise would begin by identifying handcrafted features to help recognize objects of interest. More recently, end-to-end learning (E2E) has supplanted that expert by training a deep neural network to learn important features on its own. Besides the little forethought required about data features, there is usually only basic preprocessing done on the input data; an image is often downsampled and converted to a vector, and an audio signal is often transformed to a spectrogram. In this paper, we use the term "data-side" to include operations that are performed on the data before input to the neural network. Our proposal is that a one-time analysis of data-side attributes can aid the design of more efficient convolutional neural networks (CNNs) for the many times they are used to perform inferences.
On the data side of the neural network, we examine four independent image attributes and two dependent attributes, the latter which we use as metrics. The independent attributes are **number of classes, object color, image resolution** and **object scale**. The metrics are **intra-** and **inter-class similarity**. Our goal is to optimize the metrics by choice of the independent variables - specifically to maximize intra-class similarity and minimize inter-class similarity - to obtain the most computationally efficient model.
Unlike benchmark competitions such as ImageNet [1], practical applications involve a design stage that can include adjustment of input specifications. In Section 2, we tabulate a selection of applications. The "wildlife" application reduced the number of animal and bird classes from 18 to 6 in the Wildlife Spotter dataset [2]. In the "driving" application [3], the 10 classes of the BDD dataset [4] were reduced to 7 by eliminating the "train" class due to few labeled instances and combining the similar classes of rider, motor, and bike into rider.
The main contributions of this paper are:
1. Four data-side attributes are identified, and experiments are run to show their effects on the computational efficiency of lightweight CNNs.
2. Intra- and inter-class similarity metrics are defined to aid evaluation of the four independent attribute values. Use of these metrics is shown to be about \(30\times\) faster than evaluation by full inference testing.
3. Procedures are described using the similarity metrics to evaluate how changing attribute values can reduce model computation while maintaining accuracy.
4. Starting with the EfficientNet-B0 model, we show how our methods can guide the application designer to smaller "sub-EfficientNets" with greater efficiency and similar or higher accuracy.
We describe related work in Section 2. Each of the attributes is defined in Section 3, procedures are described to apply these toward more efficient models in Section 4, and experimental evidence is shown in Section 5. We conclude in Section 6.
## 2 Related Work
The post-AlexNet [5] era (2012-) of convolutional neural networks brought larger and larger networks with the understanding that a larger model yielded higher accuracy (e.g., VGGNet-16 [6] in 2014 with 144M parameters). But the need for more efficiency, especially for embedded systems [7], encouraged the design of lightweight neural networks [8], such as SqueezeNet [9] in 2017 with 1.2M parameters. Models were reduced in size by such architectures as depthwise separable convolution filters [10]. More efficient handling of data was incorporated by using quantization [11, 12, 13], pruning [14, 15, 16, 17], and data saliency [18]. This model-side efficiency quest is a common research trend where new models are evaluated for general purpose classification and object detection on public benchmarks such as ImageNet [1], COCO [19], and VOC [20]. Orthogonal and complementary to these model-side efficiencies [21, 22, 23], we examine efficiencies that can be gained before the model by understanding and adjusting the data attributes within the confines of the application specifications. Early work [24] optimizes models specialized to the target video only for binary-classification. Our work extends to multi-class classification and object detection.
In Table 1, we list a selection of 9 applications whose data attributes are far less complex than common benchmarks. For these applications, class number is often just 2. The largest number of classes, 7, is for the "driving" [4] application. Compare these with 80 classes for the COCO dataset and 1000 for ImageNet. For 2 applications, color is not used. For the "crowd" application, it is not deemed useful and for the "ship, SAR" application, the input data is inherently not color. The resolution range is not broad in this sampling, likely due to matching image size to model input width. Many papers did not describe the scale range; for these, we approximated from the given information or images in the paper. The broadest scale range (as a fraction of image size) is the "driving" application (\(1/32\) to \(1/2\)), and the narrowest is for the "mammals" application, using aerial image capture, with scale from \(1/30\) to \(1/20\).
We use a measure of class similarity to efficiently examine data attributes, based on neural network metric learning. This term describes the use of learned, low-dimensional representations of discrete variables (images in our case). The distance between two instances in the latent space can be measured by L1 [32], L2, or cosine similarity [33]. Previous studies [34, 35, 36] focus on learning a single similarity latent space. Differences between classification [37] and ranking based losses [38] have been studied in [39]. PAN incorporates attributes to visual similarity learning [40]. In Sections 3.6 and 3.7 we extend this line of research by adapting the metric from [33] to measure intra- and inter-class similarity to serve efficiency purposes.
_In contrast to research to improve model performance on public benchmarks, our goal is to develop an empirical understanding of the effects of these attributes on common CNNs, and from this to provide practical guidelines to obtain lightweight CNNs in real-world scenarios._ Our use of intra- and inter-class similarity metrics enables an efficient
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Application & \(N_{Cl}\) & \(N_{Co}\) & \(R_{E}\) & \(S_{C}\) \\ \hline crowd [25] & 2 & no & \(320\times 240\) & \(1/8\), \(1/2\) \\ ship, SAR [26] & 2 & no & \(416\times 416\) & \(1/20\), \(1/6\) \\ cattle [27] & 2 & yes & \(224\times 224\) & \(1/16\), \(1/2\) \\ hardhat [28] & 2 & yes & \(300\times 300\) & \(1/25\), \(1/2\) \\ wildlife [2] & 6 & yes & \(224\times 224\) & \(1/6\), \(1/2\) \\ PCB defect [29] & 6 & yes & \(640\times 640\) & \(1/30\), \(1/15\) \\ ship [30] & 6 & yes & \(416\times 416\) & \(1/20\), \(1/2\) \\ mammals [31] & 6 & yes & \(2k\times 2k\) & \(1/30\), \(1/20\) \\ driving [3] & 7 & yes & \(608\times 608\) & \(1/32\), \(1/2\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of data attributes for object detection applications with attributes: number of classes (\(N_{Cl}\)), color (\(N_{Co}\)), input resolution (\(R_{E}\)), and scale range (\(S_{C}\)) as a fraction of image size.
methodology toward this goal. Practical, low-complexity applications as in Table 1 can benefit from our investigation and method.
## 3 Data Attributes and Metrics
This work is centered on the hypothesis that, the easier the input data is to classify, the more computationally efficient the model can be at a fixed or chosen lower accuracy threshold. In this section, we describe each of the data attributes and their relationships to the hypothesis. We define metrics to obtain the dependent variables. And we describe procedures to adjust the independent variables to obtain metrics that guide the design of an efficient model.
We first introduce the term, Ease of Classification \(EoC\) and hypothesize the following relationship exists with the data-side attributes,
\[\mathrm{EoC}\leftarrow(S_{1},\frac{1}{S_{2}})\leftarrow(\frac{1}{N_{Cl}}, \frac{1}{N_{Co}},\frac{1}{R_{E}},\frac{1}{S_{C}}). \tag{1}\]
The symbol (\(\leftarrow\)) is used to describe the direct relationships in which the left expressions are related to the right. \(EoC\) increases with intra-class similarity \(S_{1}\) and decreases with inter-class similarity \(S_{2}\). The dependent variable \(S_{1}\) is related to the reciprocals of the independent variables, number of classes \(N_{Cl}\), number of color channels \(N_{Co}\), image resolution \(R_{E}\), and object scale \(S_{C}\). The dependent variable \(S_{2}\) is directly related to these independent variables. The model designer follows an approach to adjust these independent variables to obtain similarity measurements that achieve high \(EoC\). Note that we will sometimes simplify \(N_{Cl}\) to \(CL\) for readability in figures.
In Section 5 we perform experiments on a range of values for each attribute to understand how they affect model size and accuracy. However, these experiments cannot be done independently for each attribute because some have dependencies on others. We call these _interdependencies_ because they are 2-way. We discuss interdependencies below for two groups, {\(S_{C}\), \(R_{E}\)} and {\(S_{1}\), \(S_{2}\), \(N_{Cl}\)}.
### Number of Classes, \(N_{Cl}\)
The \(N_{Cl}\) attribute is the number of classes being classified in the dataset. Experimental results with different numbers of classes are shown in Section 5.2. In Section 5.6 we present results of changing the number of classes for a robot path planning application.
### Object Colors, \(N_{Co}\)
The \(N_{Co}\) attribute is the number of color axes, either 1 for grayscale or 3 for tristimulus color models such as RGB (red, green, blue). When the data has a color format, the first layer of the neural model has 3 channels. For grayscale data input, the model has 1 input channel. In Section 5.3, we show the efficiency gain for 1 versus 3 input channels.
### Image Resolution, \(R_{e}\)
Image resolution, measured in pixels, has the most direct relationship to model size and computation. Increasing the image size by a multiple \(m\) (rows \(I_{r}\) to \(m\times I_{r}\) and columns \(I_{c}\) to \(m\times I_{c}\)) increases the computation by at least one-to-one. In Figure 1, MobileNet computation increases proportionally to image size. The lower sized EfficientNet-B0 and -B1 models also increase proportionally, but rise faster with larger models B2, B3, and B4.
It is not the case that higher resolution always yields better accuracy. It usually plateaus, and that plateau is different for different classes involved. The objective is to choose the minimum image resolution before which accuracy drops unacceptably. Note that resolution and scale are dependent attributes, and neither can be adjusted without considering the effect on the other. Experimental results with different resolutions are shown in Section 5.5.
### Object Scale, \(S_{c}\)
For a CNN, the image area, or receptive field, of a \(3\times 3\) filter kernel expands as the following sequence for layers \(L=1,2,3,...,L\),
\[3\times 3,7\times 7,15\times 15,\ldots,(2^{L+1}-1)\times(2^{L+1}-1). \tag{2}\]
For object detection, this means that, if the maximum object size in an image is, for example, \(15\times 15\), the network needs at least 3 layers to yield features that convolve with (or overlap) the full size of these objects. In practice, from the
sequence in equation 2 the minimum number of layers can be found from the maximum sidelength \(b_{max}\) of bounding boxes of objects over all image instances,
\[L\geq\lceil\log_{2}(b_{max}+1)\rceil-1. \tag{3}\]
where \(\lceil x\rceil\) is a ceiling operator, which rounds the \(x\) value to the higher integer. For example, if \(b_{max}\) is 250, the minimum number of layers needed is \(\lceil 7.966\rceil-1=7\).
In practice, the maximum object size is approximated as the longer sidelength of the bounding box of the object. In terms of model size and minimizing computation, by measuring the bounding box size of objects in a dataset, one can discover the maximum number of layers needed for full-object feature extraction. Furthermore, using filter kernels that are larger than the maximum object size increases extractions of inter-object features. When generalized to a sequence of rectangular filters, these tend toward Gaussian filters with increasing convolution layers, so the filter edge versus middle magnitude lowers as well.
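A minimal sketch of equation (3), reproducing the worked example above:

```python
import math

def min_layers(b_max: int) -> int:
    """Lower bound on the number of 3x3 conv layers whose receptive field
    covers an object with maximum bounding-box side length b_max (eq. 3)."""
    return math.ceil(math.log2(b_max + 1)) - 1

print(min_layers(250))  # ceil(log2(251)) - 1 = 8 - 1 = 7
print(min_layers(15))   # a 15x15 object is covered after 3 layers, as in eq. (2)
```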
To decouple scale from resolution, we define the scale of an object as a fraction of the longer image dimension. With this definition, one can measure the scale of an object in an image without knowing the resolution. The _size_ of the object is measured in pixels; it is the product of scale times the longer dimension of the resolution. Figure 2 shows the range of scales for objects in the COCO dataset. Although 50% of instances account for \(\leq 10\%\) of image size, still the sizes range from very small to full image size. Conversely, in many applications, scale range is known and much more contained as for some applications in Table 1. Experimental results with different scales are shown in Section 5.5.
### Resolution and Scale Interdependency
The interdependence between the attributes {\(S_{C}\), \(R_{E}\) } is common to any image processing application, and can be described by two cases. For case 1, if the scale of objects within a class in an image is large enough to reduce the image size and still recognize the objects with high accuracy, then there is a resulting benefit of lower computation. However, reducing the image size also reduces resolution, and if there is another class that depends upon a high spatial frequency
Figure 1: As the input image size is increased, the computation for the smaller MobileNet models increase at a similar rate. For the larger EfficientNet models (only up to EfficientNet-B4 are shown for clarity of display), the computation increases at a higher rate.
Figure 2: Histogram of bounding box sizes in COCO training dataset highly skewed at small sizes \(<0.2\).
feature such as texture, then reducing the image would reduce overall accuracy. For case 2, if resolution of the highest frequency features of a class is more than adequate to support image reduction, then computational efficiency can be gained by reducing the image size. However, because that reduction also reduces scale, classes that are differentiated by scale will become more similar, and this situation may cause lower accuracy. We can write these relationships as follows,
\[(S_{C}\;\propto\;R_{E})\;\rightarrow\;\frac{1}{\mathrm{EoC}}, \tag{4}\]
where the expression within parentheses is commutative (a change in \(S_{C}\) will cause a proportional change in \(R_{E}\), and vice versa), and these are inversely related to \(EoC\).
### Intra-Class Similarity, \(S_{1}\)
Intra-class similarity is a measure of visual similarity between members of the same class as measured with vectors in the embedding space of a deep neural network. It is described by the average and variance of pairwise similarities of instances in the class,
\[S_{1}(C_{1})=\frac{1}{N}\sum_{i,j\in C_{1}}\cos(\mathbf{Z}_{i},\mathbf{Z}_{j}), \tag{5}\]
\[\sigma_{S1}^{2}(C_{1})=\frac{1}{N}\sum_{i,j\in C_{1}}(S_{ij}-S_{1})^{2}, \tag{6}\]
where \(C_{1}\) is a class containing a set of instances, \(i\) and \(j\) are indices in set \(C_{1}\), \(i\neq j\), and \(N\) is the total number of similarity pairs \(S_{ij}\) of two different instances in the class. \(\mathbf{Z}\) is the latent vector in the embedding space from a neural network trained by metric learning on instances of the class as well as other classes. (That is, it is the same model trained on the same instances as for inter-class similarity.) This metric is adapted from [33]. We show the use of \(S_{1}\) and \(\sigma_{S1}^{2}\) in Section 4.1.
### Inter-Class Similarity, \(S_{2}\)
Inter-class similarity is a measure of visual similarity between classes as determined by the closeness of instances of two classes in the embedding space of a deep neural network. For 2 classes, it is defined as the average of pairwise similarities of instances between classes,
\[S_{2}(C_{1},C_{2})=\frac{1}{N}\sum_{i\in C_{1},j\in C_{2}}\cos(\mathbf{Z}_{i},\mathbf{Z}_{j}), \tag{7}\]
where \(C_{1}\) and \(C_{2}\) are instance sets of two different classes, \(i\) and \(j\) are indices in sets \(C_{1}\) and \(C_{2}\) respectively, \(N\) is the total number of pairs of two instances in two different classes, and \(\mathbf{Z}\) is the latent vector in the embedding space from a neural network trained by metric learning on instances that include both classes as well as other classes if there are more than 2.
For an application involving more than two classes, we choose the inter-class similarity measure to be the maximum of inter-class similarity measures over all class pairs,
\[\hat{S_{2}}(\{C_{K}\})=\max\{S_{2}(C_{m},C_{n})\}, \tag{8}\]
where \(\{C_{K}\}\) is the set of all classes for \(0\leq k<K\), and \(\{(C_{m},C_{n})\}\) is the set of all class pairs for \(0\leq m,n<K\), \(m\neq n\). We choose the maximum of similarity results of class pairs because maximum similarity represents the worst case - most difficult to distinguish - between two classes. Alternatively, one could calculate the average of inter-class similarities for each pair; however, the operation of averaging will hide the effect of one pair of very similar classes among other dissimilar classes, and this effect is worse for larger \(N_{Cl}\). We show the use of \(\hat{S_{2}}\) in Section 4.2.
We also use a measure that is the normalized difference between the maximum and the average,
\[\Delta S_{2}(\{C_{K}\})=\frac{\hat{S_{2}}-S_{2}}{S_{2}}. \tag{9}\]
A larger \(\Delta S_{2}\) indicates a higher value of worst case \(\hat{S_{2}}\), so we seek low \(\Delta S_{2}\) in the methods described in Sections 4.1 and 4.3.
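The similarity metrics of equations (5)-(9) can be sketched directly from the cached latent vectors, for instance with NumPy (the arrays below are assumed to hold one embedding \(\mathbf{Z}_{i}\) per row):

```python
import numpy as np

def intra_class_similarity(Z: np.ndarray):
    """S1 and its variance (eqs. 5-6): mean/variance of pairwise cosine
    similarities over distinct instance pairs of one class."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sims = (Zn @ Zn.T)[np.triu_indices(len(Z), k=1)]  # each pair i != j once
    return sims.mean(), sims.var()

def inter_class_similarity(Z1: np.ndarray, Z2: np.ndarray) -> float:
    """S2 (eq. 7): mean cosine similarity over all cross-class pairs."""
    Zn1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Zn2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    return float((Zn1 @ Zn2.T).mean())

def s2_hat_and_delta(class_embeddings: list):
    """S2_hat (eq. 8) and Delta S2 (eq. 9) over all class pairs."""
    K = len(class_embeddings)
    sims = [inter_class_similarity(class_embeddings[m], class_embeddings[n])
            for m in range(K) for n in range(m + 1, K)]
    s2_hat, s2_avg = max(sims), float(np.mean(sims))
    return s2_hat, (s2_hat - s2_avg) / s2_avg
```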
### Intra- and Inter-Class Interdependency
There is a strong interdependence between \(S_{1}\) and \(S_{2}\) with a secondary dependence upon \(N_{Cl}\), as we will describe. With other factors fixed, smaller intra-class similarity increases inter-class similarity because more heterogeneous classes with wider attribute ranges are being compared, thus reducing \(EoC\). As \(N_{Cl}\) is increased, this effect is exacerbated because there is higher probability of low \(S_{1}\) and high \(S_{2}\) pairs, and (similarly) because we use worst-case maximum in equation 8. We can write these dependent relationships as follows,
\[(S_{1}\propto 1/S_{2})\rightarrow\mathrm{EoC},\ \mathrm{for}\ N_{Cl}\geq 2, \tag{10}\]
where the expression within brackets is commutative (either \(S_{1}\) or \(S_{2}\) can cause an inverse change in the other), and the relationship becomes stronger as \(N_{Cl}\) increases.
## 4 Method
The general approach toward finding the most efficient model is to select ranges of the independent attribute values [\(N_{Cl}\), \(N_{Co}\), \(R_{E}\), \(S_{C}\)], calculate the similarity measures {\(S_{1}\), \(S_{2}\)} for selected values in the range, and choose those independent attribute values that minimize \(S_{2}\) and maximize \(S_{1}\). Step-by-step procedures for selecting each of the independent attributes are described below. These procedures should be followed in the order of the sections below.
### Selection of \(N_{Cl}\)
If the application permits adjustment of the number of classes, then the following procedure can help finding class groupings and an associated \(N_{Cl}\) that supports a more efficient model. Initialize \(\Delta S_{2}=\infty\), and follow these steps,
1. Choose class groupings.
2. Calculate \(\Delta S_{2}\) from equation 9. If this is less than the current smallest \(\Delta S_{2}\) then this grouping is the most efficient so far.
3. If \(\Delta S_{2}\) is low, one can choose to exit, or continue.
4. If one decides to continue, calculate \(S_{1}\) from equation 5 and \(\sigma_{S1}^{2}\) from equation 6 for each class. The value of \(S_{1}\) is for guidance, to help understand why the current \(S_{2}\) is good or bad (low or high) and to guide the choice of the next groupings. In general, class groupings with high \(S_{1}\) and low \(\sigma_{S1}^{2}\) will yield higher accuracy. Repeat these steps.
We want to stop when inter-class similarity is low, indicated by a low value of \(\Delta S_{2}\). However there is subjectivity in this step due to the manual choice of groupings and because different applications will have different levels of intra-class homogeneity and inter-class distinguishability. In practice, the procedure is usually run for a few iterations to understand relative \(\Delta S_{2}\) values for different groupings, and the lowest is chosen.
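The loop below sketches this procedure, reusing s2_hat_and_delta from the earlier sketch; candidate_groupings, images_of, and embed (the metric-learning encoder) are hypothetical placeholders:

```python
# Sketch of the N_Cl selection loop in Section 4.1 (assumed helpers).
best_delta, best_grouping = float("inf"), None
for grouping in candidate_groupings:               # step 1: pick a grouping
    class_embs = [embed(images_of(cls)) for cls in grouping]
    _, delta_s2 = s2_hat_and_delta(class_embs)     # step 2: evaluate eq. (9)
    if delta_s2 < best_delta:                      # keep the lowest Delta S2
        best_delta, best_grouping = delta_s2, grouping
    # steps 3-4: optionally inspect S1 per class before the next grouping
```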
### Selection of \(N_{Co}\)
The application can gain computational advantage if _all_ classes can be reduced to grayscale; if all classes cannot be reduced, then the model must handle color and there is no computational advantage. Following are the steps to choose grayscale or color,
1. For all classes in color and grayscale, calculate \(\hat{S_{2}}\).
2. If \(\hat{S_{2}}\) for grayscale is less than or equal to \(\hat{S_{2}}\) for color, choose a grayscale model and grayscale for all instances of all classes.
If the procedure above does not result in the choice of grayscale, there is a second option. This requires more testing and does not directly yield computation reduction, but may improve accuracy.
1. For each class, calculate \(\hat{S_{2}}\) against every other class for these four combinations of the \((C_{1},C_{2})\) pairs: (grayscale, grayscale), (grayscale, color), (color, grayscale), and (color, color).
2. For each class \(C_{1}\) whose \(\hat{S_{2}}\) for (grayscale, grayscale) and (grayscale, color) is smaller than for (color, grayscale) and (color, color), choose to use grayscale for the \(C_{1}\) class.
### Selection of \(R_{e}\) and \(S_{c}\)
We first adjust the attribute \(R_{E}\) to find the lower bound of resolution with respect to acceptable accuracy. Initialize \(\Delta S_{2}=\infty\), and follow these steps,
1. Calculate \(\Delta S_{2}\) from equation 9. If this is less than the current smallest \(\Delta S_{2}\), then this resolution is the most efficient so far.
2. If \(\Delta S_{2}\) is low, one can choose to exit, or continue.
3. Reduce the resolution by half and repeat these steps.
In practice, a few iterations is run to see when \(\Delta S_{2}\) rises, then choose the resolution where it is lowest.
After the resolution has been chosen, maximum object scale is multiplied by resolution to find the maximum object size in pixels. Equation 3 is used to find an estimate of the lower bound of number of layers needed in the model.
## 5 Experiments
In this section, we perform experiments on the data attributes to show how their values relate to model efficiency and accuracy. Note that our level of experimentation is not even across all attributes. We believe that the experiments upon number of classes, intra- and inter-class similarities, and resolution are sufficient to support the Ease of Classification relationship of equation 1. For color, we give a quantitative comparison of color versus grayscale computation, and then because the difference is small, leave further investigation of accuracy and efficiency to cited literature. For scale, one aspect of this attribute is covered by its interdependency in the resolution experiments. However another aspect, the relationship of scale to model levels (equation 2), we leave to future work. This is because scale is less easily separated from particular applications - specifically the size range of objects in an application - than for other attributes. Our plan is to investigate this in a more application-oriented paper in the future.
### Similarity Metric Efficiency
We could discover the effects of data attributes by training and inference testing all combinations of attribute values. For \(N_{CL}\) classes, binary classification of pairs would require \(\binom{N_{CL}}{2}\) training and inference operations. In comparison, for similarity metrics, we need to train once for all combinations of binary classifications. During testing, the similarity model (SM) caches the latent space, thus only indices of each instance need to be paired with other instances to obtain their cosine similarities.
For the CIFAR10 dataset for example, there is a one-time task of feeding all test images into SM, which takes 0.72 seconds, and then caching, which takes 0.63 seconds. In contrast, we feed each image into the CNN to obtain its prediction in order to calculate its accuracy in the conventional pipeline. We show the runtime in Table 2.
### Number of Classes, \(N_{Cl}\)
It is well known that accuracy is reduced when more classes are involved since more visual features are needed for a CNN to learn in the feature extractor backbone, as well as more complex decision boundary to learn in the fully connected layers for classification. However, we perform experiments here, first to confirm this relationship with test data, but secondly to gain any extra insight into the relationship between number of classes and accuracy.
We performed three sets of experiments. The first was for object detection using the YOLOv5-nano [22] backbone upon increasing-size class groupings of the COCO dataset [19]. Ten groups with \(Cl\) of \(\{1,2,3,4,5,10,20,40,60,80\}\) were prepared. For each group, we trained a separate YOLOv5-nano model from scratch. As seen in Figure 3 (left), accuracy decreases with number of classes. An added insight is that the accuracy decrease is steep for very few classes, say 5-10 or fewer, and flattens beyond 10.
The second set of experiments was for image classification on the CIFAR-10 dataset. With many fewer classes in CIFAR-10 [41] than COCO (10 versus 80), we expect to see how the number of classes and accuracy relate for this smaller range. We extracted subsets of classes - which we call groups - from CIFAR-10 with \(N_{Cl}\) ranging from 2 to 9.
\begin{table}
\begin{tabular}{l l l} \hline \hline Model & \(t_{train}\) & \(t_{test}\) \\ & (s/epoch) & (s/pair) \\ \hline VGG19 & 0.69 & 4.49 \\ EfficientNet-B0 & 3.13 & 21.82 \\ MobileNetV2 & 2.19 & 15.05 \\
**Similarity Metrics** & 3.31 & **0.76** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of runtime comparison of Similarity Metrics and existing models. s: second; pair: all pairs of instances in two classes.
For example, a group with \(N_{Cl}=4\) might contain airplane, cat, automobile, and ship classes. We trained classifiers from scratch for each group.
Results of the image classification experiments are shown in Figure 3 (middle). The three classifiers used for testing, EfficientNet-B0, VGG19 [6], and MobileNetV2 [42] showed the expected trend of accuracy reduction as \(N_{Cl}\) per group increased. However, the trend was not as monotonic as might be expected. We hypothesized that this might be due to the composition of each group, specifically if the group classes were largely similar or dissimilar. This insight led to the experiments on class similarity in the next section.
The third set of experiments involved reducing model size for different numbers of classes. We prepared 90 class groupings extracted from the COCO minitrain dataset [43]. There are 80 datasets for \(N_{Cl}=1\), each containing a single class from the 80 classes. There are 8 datasets for \(N_{Cl}=10\), each formed by combining 10 of the single-class datasets. The final dataset is the original COCO minitrain with \(N_{Cl}=80\).
We scale YOLOv5 layers and channels with the depth and width multiples already used for scaling the family between nano and x-large. Starting with depth and width multiples of 0.33 and 0.25 for YOLOv5-nano, we reduce these in step sizes of 0.04 for depth and 0.03 for width. In this way, we design a monotonically decreasing sequence of sub-YOLO models denoted as SY1 to SY8. We train each model separately for each of the six datasets.
Results of sub-YOLO detection are shown in Figure 3 (right). There are three lines, where each point of \(mAP@.5\) is averaged across all models in all datasets for a specific \(N_{Cl}\). An overall trend is observed that fewer-class models (upper-left blue star) achieve higher efficiency than many-class models. Another finding of interest here is that, whereas the accuracies for 80 classes drop steadily from the YOLOv5-nano size, accuracy for 10 classes is fairly flat down to SY2, which corresponds to a 36% computation reduction, and for 1 class down to SY4, which corresponds to a 72% computation reduction.
### Color, \(N_{Co}\)
Because reduction from color to grayscale only affects the number of multiplies in the first layer, the efficiency gain depends upon the model size. For a large model such as VGG-19, percentage efficiency gain will be much smaller than for a small model such as EfficientNet-B0, as shown in Table 3. However, even for small networks, the effect of reducing from color to grayscale processing is small relative to effects of other attributes, so we perform experiments on these more impactful attributes. For further investigation of color computation, refer to previous work on this topic [44].
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline Model & \multicolumn{2}{c|}{Layer-1} & \multicolumn{2}{c|}{All Layers} & Ratio \\ \cline{2-5} & color & gray & color & gray & [\%] \\ \hline VGG-19 & 1835.01 & 655.36 & 399474 & 398295 & 99.7 \\ EN-B0 & 884.7 & 294.9 & 31431 & 30841 & 98.1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Difference between color and grayscale computation [kFLOPS] for the large VGG-19 classifier and much smaller EfficientNet-B0 classifier. Ratio is grayscale-to-color computation for all layers.
Figure 3: Overall relationship of performance and \(N_{CL}\). (Left) Object detection accuracy and recall (R) decrease for the YOLOv5-nano when the number of classes is increased from 1 to 80. (Middle) CIFAR-10 Image classification accuracy decreases for the classifiers tested when the number of classes per group is increased from 2 to 10. (Right) Accuracy plot for increasingly smaller models from YOLOv5-nano through 8 sub-YOLO models (SY1-8) and class groupings of 1 (\(N_{CL}\):1), 10 (\(N_{CL}\):10), and 80 (\(N_{CL}\):80).
### Intra- and Inter-Class Similarity, \(S_{1}\), \(S_{2}\)
In equation 1, we hypothesized that accuracy is lower if inter-class similarity is higher. Table 4 shows accuracy and inter-class similarity results for groups of 2 and 4 classes from the CIFAR-10 dataset that we have subjectively designated similar (S) or dissimilar (D). "S4" indicates a similar group of 4 classes, and "D2" indicates a dissimilar group of 2 classes. The results indicate that our subjective labels correspond to our hypothesis, and that our objective measure of inter-class similarity in equation 7 both corresponds to the subjective, and is consistent with the hypothesis.
### Resolution and Scale, \(R_{e}\) and \(S_{c}\)
In Table 1, the applications often downsize images to much smaller than the originals. Designers of these applications likely determined that accuracy didn't suffer when images were reduced to the degree that they chose. This is also the case for our experiment of YOLOv5-nano on the COCO dataset in Figure 5 (left). One can see that accuracies drop slowly for image reduction from \(640^{2}\) to \(448^{2}\) - and further depending upon the degree of accuracy reduction that can be tolerated. Because scale drops with resolution, this plot includes the effects on accuracy reduction for both. In Figure 5 (right) we see the object detection performance almost flattens when reducing \(R_{E}\) from 4k down to 1080p, and SY1-3 models' accuracies are very close to nano but with about half the required computation (4.2 GFLOPs for nano versus 2.2 GFLOPs for SY3).
We also verify by experiment that \(R_{E}\) has a direct impact on inference runtime per image [45, 46]. Experiments on YOLOv5 nano in the COCO validation set are conducted, and results are shown in Table 6. One observation is that runtime almost doubles from 7.4ms for \(1280^{2}\) to 13.4ms for \(2560^{2}\) on a GPU, while it rises dramatically from 64.1 for \(640^{2}\) to 4396.7 for \(1280^{2}\) on a CPU.
### Robot Path Planning Application
We briefly mention results of methods from this paper used for efficient model design for our own application. The application is to detect times, locations, and densities of human activity on a factory floor for robot path planning. Initial labelling comprised 5 object classes: 2 of humans engaged in different activities, and 3 of different types of robots. Similarity metrics guided a reduction to 2 classes enabling a smaller model with computation reduction of 66% and accuracy gain of 3.5%. See [48] for a more complete description of this application.
## 6 Conclusion
We conclude that, for applications requiring lightweight CNNs, data attributes can be examined and adjusted to obtain more efficient models. We examined four independent data-side variables, and results from our experiments indicate the following ranking by effect on computation reduction. Resolution has the greatest effect on computation. Most practitioners already perform resolution reduction, but many do so simply to fit the model of choice. We show that, for small (few-class) applications, the model size can be reduced (to sub-YOLO models) to achieve more efficiency with similar accuracy, and this can be done efficiently using similarity metrics. Number of classes is second in rank. This is dependent on the application, but our methods using similarity metrics enable the application designer to compare different class groupings efficiently. We showed that the choice of color or grayscale had a relatively small (1-2%) effect on computation for small models. We do not rank scale because scale was only tested with interdependency to resolution.
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline \(R_{E}\) & \(80^{2}\) & \(160^{2}\) & \(320^{2}\) & \(640^{2}\) & \(1280^{2}\) & \(2560^{2}\) & \(5120^{2}\) \\ \hline GPU & 6.4 & 6.2 & 6.3 & 6.4 & 7.4 & 13.4 & 48.7 \\ CPU & 14.0 & 24.7 & 52.8 & 64.1 & 4396.7 & 4728.3 & 5456.9 \\ \hline \hline \end{tabular}
\end{table}
Table 6: YOLOv5 runtimes [ms/img] for resolutions of the COCO validation set. Each number is averaged for 5 runs.
Figure 5: (Left) Effect of resolution on accuracy for YOLOv5-nano object detection on the COCO dataset, and (Right) effect on accuracy of YOLOv5 and sub-YOLO models for 80 classes of the PANDA 4k dataset [47]. |
2302.10505 | Higher-order Sparse Convolutions in Graph Neural Networks | Graph Neural Networks (GNNs) have been applied to many problems in computer
sciences. Capturing higher-order relationships between nodes is crucial to
increase the expressive power of GNNs. However, existing methods to capture
these relationships could be infeasible for large-scale graphs. In this work,
we introduce a new higher-order sparse convolution based on the Sobolev norm of
graph signals. Our Sparse Sobolev GNN (S-SobGNN) computes a cascade of filters
on each layer with increasing Hadamard powers to get a more diverse set of
functions, and then a linear combination layer weights the embeddings of each
filter. We evaluate S-SobGNN in several applications of semi-supervised
learning. S-SobGNN shows competitive performance in all applications as
compared to several state-of-the-art methods. | Jhony H. Giraldo, Sajid Javed, Arif Mahmood, Fragkiskos D. Malliaros, Thierry Bouwmans | 2023-02-21T08:08:18Z | http://arxiv.org/abs/2302.10505v1 | # Higher-order Sparse Convolutions in Graph Neural Networks
###### Abstract
Graph Neural Networks (GNNs) have been applied to many problems in computer sciences. Capturing higher-order relationships between nodes is crucial to increase the expressive power of GNNs. However, existing methods to capture these relationships could be infeasible for large-scale graphs. In this work, we introduce a new higher-order sparse convolution based on the Sobolev norm of graph signals. Our Sparse Sobolev GNN (S-SobGNN) computes a cascade of filters on each layer with increasing Hadamard powers to get a more diverse set of functions, and then a linear combination layer weights the embeddings of each filter. We evaluate S-SobGNN in several applications of semi-supervised learning. S-SobGNN shows competitive performance in all applications as compared to several state-of-the-art methods.
Jhony H. Giraldo\({}^{\star}\), Sajid Javed\({}^{\ddagger}\), Arif Mahmood\({}^{\diamond}\), Fragkiskos D. Malliaros\({}^{\dagger}\), Thierry Bouwmans\({}^{\S}\)
\({}^{\star}\)LTCI, Telecom Paris - Institut Polytechnique de Paris, France;
\({}^{\ddagger}\)Khalifa University, United Arab Emirates; \({}^{\diamond}\)Information Technology University, Pakistan;
\({}^{\dagger}\)Universite Paris-Saclay, CentraleSupelec, Inria, Centre for Visual Computing (CVN), France;
\({}^{\S}\)Laboratoire MIA, La Rochelle Universite, France

Graph neural networks, sparse convolutions, Sobolev norm
## 1 Introduction
Graph representation learning and its applications have gained significant attention in recent years. Notably, Graph Neural Networks (GNNs) have been extensively studied [1, 2, 3, 4, 5, 6]. GNNs extend the concepts of Convolutional Neural Networks (CNNs) [7] to non-Euclidean data modeled as graphs. GNNs have numerous applications like semi-supervised learning [2], graph clustering [8], point cloud semantic segmentation [9], misinformation detection [10], and protein modeling [11]. Similarly, other graph learning techniques have been recently applied to image and video processing applications [12, 13].
Most GNNs update their node embeddings by computing specific operations in the neighborhood of each node. This updating is limited when we want to capture higher-order vertex relationships between nodes. Previous methods in GNNs have tried to capture these higher-order connections by taking powers of the sparse adjacency matrix [14], quickly converting this sparse representation into a dense matrix. The densification of the adjacency matrix results in memory and scalability problems in GNNs. Therefore, the use of these higher-order methods is limited for large-scale graphs.
In this work, we propose a new sparse GNN model that computes a cascade of higher-order filtering operations. Our model is inspired by the Sobolev norm in Graph Signal Processing (GSP) [15, 16]. We modify the Sobolev norm using concepts of the Hadamard product between matrices to maintain the sparsity of the adjacency matrix. We rely on spectral graph theory [17] and the Schur product theorem [18] to explain some mathematical properties of our filtering operation. Our Sparse Sobolev GNN (S-SobGNN) employs a linear combination layer at the end of each cascade of filters to select the best power functions. Thus, we improve expressiveness by computing a more diverse set of sparse graph-convolutional operations. We evaluate S-SobGNN in semi-supervised learning tasks in several domains like tissue phenotyping in colon cancer histology images [19], text classification of news [20], activity recognition with sensors [21], and recognition of spoken letters [22].
The main contributions of the current work are summarized as follows: 1) we propose a new GNN architecture that computes a cascade of higher-order filters inspired by the Sobolev norm in GSP, 2) some mathematical insights of S-SobGNN are introduced based on spectral graph theory [17] and the Schur product theorem [18], and 3) we perform experimental evaluations on four publicly available benchmark datasets and compare S-SobGNN to seven GNN architectures. Our algorithm shows the best performance against previous methods. The rest of the paper is organized as follows. Section 2 introduces the proposed GNN model. Section 3 presents the experimental framework and results. Finally, Section 4 shows the conclusions.
## 2 Sparse Sobolev Graph Neural Networks
### Preliminaries
A graph is represented as \(G=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{1,\ldots,N\}\) is the set of \(N\) nodes and \(\mathcal{E}=\{(i,j)\}\) is the set of edges
between nodes \(i\) and \(j\). \(\mathbf{A}\in\mathbb{R}^{N\times N}\) is the weighted adjacency matrix of the graph such that \(\mathbf{A}(i,j)=a_{i,j}\in\mathbb{R}_{+}\) is the weight of the edge \((i,j)\), and \(\mathbf{A}(i,j)=0\;\forall\;(i,j)\notin\mathcal{E}\). As a result, \(\mathbf{A}\) is symmetric for undirected graphs. A graph signal is a function \(x:\mathcal{V}\rightarrow\mathbb{R}\) and is represented as \(\mathbf{x}\in\mathbb{R}^{N}\). The degree matrix of \(G\) is a diagonal matrix given by \(\mathbf{D}=\mathrm{diag}(\mathbf{A}\mathbf{1})\). \(\mathbf{L}=\mathbf{D}-\mathbf{A}\) is the combinatorial Laplacian matrix, and \(\mathbf{\Delta}=\mathbf{I}-\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\) is the symmetric normalized Laplacian [23]. The Laplacian matrix is a positive semi-definite matrix for undirected graphs with eigenvalues\({}^{1}\) \(0=\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{N}\) and corresponding eigenvectors \(\{\mathbf{u}_{1},\mathbf{u}_{2},\ldots,\mathbf{u}_{N}\}\). In GSP, the Graph Fourier Transform (GFT) of \(\mathbf{x}\) is given by \(\mathbf{\hat{x}}=\mathbf{U}^{\mathsf{T}}\mathbf{x}\), and the inverse GFT is \(\mathbf{x}=\mathbf{U}\mathbf{\hat{x}}\)[23]. In this work, we use the spectral definitions of graphs to analyze our filtering operation. However, the spectrum is not required for the implementation of S-SobGNN.
Footnote 1: \(\lambda_{N}\leq 2\) in the case of the symmetric normalized Laplacian \(\mathbf{\Delta}\).
### Sobolev Norm
The Sobolev norm in GSP has been used as a regularization term to solve problems in 1) video processing [24, 13], 2) modeling of infectious diseases [25], and 3) interpolation of graph signals [15, 16].
**Definition 1**.: _For fixed parameters \(\epsilon\geq 0\), \(\rho\in\mathbb{R}\), the Sobolev norm is given by \(\|\mathbf{x}\|_{\rho,\epsilon}\triangleq\|(\mathbf{L}+\epsilon\mathbf{I})^{ \rho/2}\mathbf{x}\|\)[15]._
When \(\mathbf{L}\) is symmetric, we have that \(\|\mathbf{x}\|_{\rho,\epsilon}^{2}\) is given by:
\[\|\mathbf{x}\|_{\rho,\epsilon}^{2}=\mathbf{x}^{\mathsf{T}}(\mathbf{L}+ \epsilon\mathbf{I})^{\rho}\mathbf{x}. \tag{1}\]
We divide the analysis of (1) into two parts: 1) when \(\epsilon=0\), and 2) when \(\rho=1\). For \(\epsilon=0\) in (1) we have:
\[\mathbf{x}^{\mathsf{T}}\mathbf{L}^{\rho}\mathbf{x}=\mathbf{x}^{\mathsf{T}} \mathbf{U}\mathbf{\Lambda}^{\rho}\mathbf{U}^{\mathsf{T}}\mathbf{x}=\mathbf{ \hat{x}}^{\mathsf{T}}\mathbf{\Lambda}^{\rho}\mathbf{\hat{x}}=\sum_{i=1}^{N} \mathbf{\hat{x}}^{2}(i)\lambda_{i}^{\rho}. \tag{2}\]
Notice that the spectral components \(\mathbf{\hat{x}}(i)\) are penalized with powers of the eigenvalues \(\lambda_{i}^{\rho}\) of \(\mathbf{L}\). Since the eigenvalues are ordered in increasing order, the higher frequencies of \(\mathbf{\hat{x}}\) are penalized more than the lower frequencies when \(\rho=1\), leading to a smooth function in \(G\). For \(\rho>1\), the GFT \(\mathbf{\hat{x}}\) is penalized with a more diverse set of eigenvalues. We can have a similar analysis for the adjacency matrix \(\mathbf{A}\) using the eigenvalue decomposition \(\mathbf{A}^{\rho}=(\mathbf{V}\mathbf{\Sigma}\mathbf{V}^{\mathsf{H}})^{\rho}= \mathbf{V}\mathbf{\Sigma}^{\rho}\mathbf{V}^{\mathsf{H}}\), where \(\mathbf{V}\) is the matrix of eigenvectors, and \(\mathbf{\Sigma}\) is the matrix of eigenvalues of \(\mathbf{A}\). In the case of \(\mathbf{A}\), the GFT \(\mathbf{\hat{x}}=\mathbf{V}^{\mathsf{H}}\mathbf{x}\).
For \(\rho=1\) in (1) we have \(\|\mathbf{x}\|_{\rho,\epsilon}^{2}=\mathbf{x}^{\mathsf{T}}(\mathbf{L}+\epsilon\mathbf{I})\mathbf{x}\). The term \((\mathbf{L}+\epsilon\mathbf{I})\) is associated with a better condition number\({}^{2}\) than using \(\mathbf{L}\) alone. For example, better condition numbers are associated with faster convergence rates in gradient descent methods, as shown in [16]. For the Laplacian matrix \(\mathbf{L}\), we know that \(\kappa(\mathbf{L})=\frac{|\lambda_{\text{max}}(\mathbf{L})|}{|\lambda_{\text{min}}(\mathbf{L})|}\approx\frac{\lambda_{\text{max}}(\mathbf{L})}{0}\rightarrow\infty\), where \(\kappa(\mathbf{L})\) is the condition number of \(\mathbf{L}\), \(\lambda_{\text{max}}(\mathbf{L})\) is the maximum eigenvalue, and \(\lambda_{\text{min}}(\mathbf{L})\) is the minimum eigenvalue of \(\mathbf{L}\). Since \(\kappa(\mathbf{L})\rightarrow\infty\), we have an ill-conditioned problem when relying on the Laplacian matrix alone. On the other hand, for the Sobolev term, we have that \(\mathbf{L}+\epsilon\mathbf{I}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\mathsf{T}}+\epsilon\mathbf{I}=\mathbf{U}(\mathbf{\Lambda}+\epsilon\mathbf{I})\mathbf{U}^{\mathsf{T}}\). Therefore, \(\lambda_{\text{min}}(\mathbf{L}+\epsilon\mathbf{I})=\epsilon\), _i.e._, \(\mathbf{L}+\epsilon\mathbf{I}\) is positive definite (\(\mathbf{L}+\epsilon\mathbf{I}\succ 0\)) for \(\epsilon>0\), and:
Footnote 2: The condition number \(\kappa(\mathbf{L})\) associated with the square matrix \(\mathbf{L}\) is a measure of how well or ill-conditioned is the inversion of \(\mathbf{L}\).
\[\kappa(\mathbf{L}+\epsilon\mathbf{I})=\frac{|\lambda_{\text{max}}(\mathbf{L}+ \epsilon\mathbf{I})|}{|\lambda_{\text{min}}(\mathbf{L}+\epsilon\mathbf{I})|}= \frac{\lambda_{\text{max}}(\mathbf{L})+\epsilon}{\epsilon}<\kappa(\mathbf{L})\; \forall\;\epsilon>0. \tag{3}\]
Namely, \(\mathbf{L}+\epsilon\mathbf{I}\) has a better condition number than \(\mathbf{L}\). It might not be evident why a better condition number could help in GNNs, where the inverses of the Laplacian or adjacency matrices are not required to perform the propagation rules. However, some studies have indicated the adverse effects of ill-conditioned matrices. For example, Kipf and Welling [2] used a renormalization trick (\(\mathbf{A}+\mathbf{I}\)) in their filtering operation to avoid exploding/vanishing gradients. Similarly, Wu _et al_. [26] showed that adding the identity matrix to \(\mathbf{A}\) shrinks the graph spectral domain, resulting in a low-pass-type filter.
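The inequality (3) is easy to illustrate numerically; in the sketch below, the graph and the values of \(\epsilon\) are arbitrary choices for illustration:

```python
import numpy as np

A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy path graph
L = np.diag(A.sum(axis=1)) - A
lam = np.linalg.eigvalsh(L)           # lam_min = 0, so kappa(L) diverges
for eps in [0.1, 1.0, 10.0]:
    lam_eps = lam + eps               # eigenvalues of L + eps*I
    print(eps, lam_eps.max() / lam_eps.min())  # kappa = (lam_max + eps) / eps
```

As \(\epsilon\) grows, the printed condition number shrinks toward 1, in line with (3).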
The previous theoretical analysis shows two benefits of the Sobolev norm: 1) the more diverse frequency penalization in (2), and 2) the better condition number in (3).
### Sparse Sobolev Norm
The use of \(\mathbf{L}\) or \(\mathbf{A}\) in GNNs is computationally efficient because these matrices are usually sparse. Therefore, we can perform a small number of sparse matrix operations. For the Sobolev norm, the term \((\mathbf{L}+\epsilon\mathbf{I})^{\rho}\) can quickly become a dense matrix for large values of \(\rho\), leading to scalability and memory problems. To mitigate this limitation, we use a sparse Sobolev norm to keep the same sparsity level.
**Definition 2**.: _Let \(\mathbf{L}\in\mathbb{R}^{N\times N}\) be the Laplacian matrix of \(G\). For fixed parameters \(\epsilon\geq 0\) and \(\rho\in\mathbb{N}\), the sparse Sobolev term for GNNs is introduced as the \(\rho\) Hadamard multiplications of \((\mathbf{L}+\epsilon\mathbf{I})\) (the Hadamard power) such that:_
\[(\mathbf{L}+\epsilon\mathbf{I})^{(\rho)}=(\mathbf{L}+\epsilon\mathbf{I})\circ( \mathbf{L}+\epsilon\mathbf{I})\circ\cdots\circ(\mathbf{L}+\epsilon\mathbf{I}). \tag{4}\]
_For example, \((\mathbf{L}+\epsilon\mathbf{I})^{(2)}=(\mathbf{L}+\epsilon\mathbf{I})\circ( \mathbf{L}+\epsilon\mathbf{I})\). Thus, the sparse Sobolev norm is given by:_
\[\|\mathbf{x}\|_{(\rho),\epsilon}\triangleq\|(\mathbf{L}+\epsilon\mathbf{I})^{( \rho/2)}\mathbf{x}\|. \tag{5}\]
Let \((\mathbf{x},\mathbf{y})_{(\rho),\epsilon}=\mathbf{x}^{\mathsf{T}}(\mathbf{L}+ \epsilon\mathbf{I})^{(\rho)}\mathbf{y}\) be the inner product between two graph signals \(\mathbf{x}\) and \(\mathbf{y}\) that induces the associated sparse Sobolev norm. We can easily prove that the sparse Sobolev norm \(\|\mathbf{x}\|_{(\rho),\epsilon}\triangleq\|(\mathbf{L}+\epsilon\mathbf{I})^{( \rho/2)}\mathbf{x}\|\) satisfies the basic properties of vector norms\({}^{3}\) for \(\epsilon>0\) (for \(\epsilon=0\) we have a semi-norm). For the positive definiteness property, we need the Schur product theorem [18].
The sparse Sobolev term in (4) has the property of keeping the same sparsity level for any \(\rho\). Notice that \((\mathbf{L}+\epsilon\mathbf{I})^{\rho}\) is equal to the sparse Sobolev term if 1) we restrict \(\rho\) to be in \(\mathbb{N}\), and 2) we replace the matrix multiplication by the Hadamard product. The theoretical properties of the Sobolev norm in (2) and (3) do not extend trivially to its sparse counterpart. However, we can develop some theoretical insights using concepts of Kronecker products and the Schur product theorem [18].
**Theorem 1**.: _Let \(\mathbf{L}\) be any Laplacian matrix of a graph with eigenvalue decomposition \(\mathbf{L}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\mathsf{T}}\), we have that:_
\[\mathbf{L}\circ\mathbf{L}=\mathbf{L}^{(2)}=\mathbf{P}_{N}^{\mathsf{T}}( \mathbf{U}\otimes\mathbf{U})(\mathbf{\Lambda}\otimes\mathbf{\Lambda})( \mathbf{U}^{\mathsf{T}}\otimes\mathbf{U}^{\mathsf{T}})\mathbf{P}_{N}, \tag{6}\]
_where \(\mathbf{P}_{N}\in\{0,1\}^{N^{2}\times N}\) is a partial permutation matrix._
Proof.: For the spectral decomposition, we have that:
\[\mathbf{L}\otimes\mathbf{L}=(\mathbf{U}\otimes\mathbf{U})(\mathbf{\Lambda} \otimes\mathbf{\Lambda})(\mathbf{U}^{\mathsf{T}}\otimes\mathbf{U}^{\mathsf{T}}), \tag{7}\]
where we used the property of Kronecker products \((\mathbf{A}\otimes\mathbf{B})(\mathbf{C}\otimes\mathbf{D})=\mathbf{A}\mathbf{C }\otimes\mathbf{B}\mathbf{D}\)[18]. Similarly, we know that \(\mathbf{S}\circ\mathbf{T}=\mathbf{P}_{n}^{\mathsf{T}}(\mathbf{S}\otimes \mathbf{T})\mathbf{P}_{m}\), where \(\mathbf{S},\mathbf{T}\in\mathbb{R}^{n\times m}\), and \(\mathbf{P}_{n}\in\{0,1\}^{n^{2}\times n},\mathbf{P}_{m}\in\{0,1\}^{m^{2} \times m}\) are partial permutation matrices. If \(\mathbf{S},\mathbf{T}\in\mathbb{R}^{n\times n}\) are square matrices, we have that \(\mathbf{S}\circ\mathbf{T}=\mathbf{P}_{n}^{\mathsf{T}}(\mathbf{S}\otimes \mathbf{T})\mathbf{P}_{n}\) (Theorem 1 in [27]). We can then get a general form of the spectrum of the Hadamard product for \(\rho=2\) using (7) and Theorem 1 in [27] as follows: \(\mathbf{L}\circ\mathbf{L}=\mathbf{L}^{(2)}=\mathbf{P}_{N}^{\mathsf{T}}( \mathbf{U}\otimes\mathbf{U})(\mathbf{\Lambda}\otimes\mathbf{\Lambda})( \mathbf{U}^{\mathsf{T}}\otimes\mathbf{U}^{\mathsf{T}})\mathbf{P}_{N}\).
Eq. (6) is a closed-form solution regarding the spectrum of the Hadamard power for \(\rho=2\). Thus, the spectrum of the Hadamard multiplication is a compressed form of the Kronecker product of its spectral components. The sparse Sobolev term we use in our S-SobGNN is given by \((\mathbf{L}+\epsilon\mathbf{I})^{(\rho)}\) so that the spectral components of the graph are changing for each value of \(\rho\) as shown in (6).
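Theorem 1 can be checked numerically. In the sketch below, the partial permutation matrix \(\mathbf{P}_{N}\) is built as a selection matrix (our construction, consistent with the proof above); the assertion holds for random symmetric inputs:

```python
import numpy as np

N = 4
rng = np.random.default_rng(0)
A = rng.random((N, N)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(axis=1)) - A
P = np.zeros((N * N, N))
for j in range(N):
    P[j * N + j, j] = 1.0             # selection (partial permutation) matrix
assert np.allclose(L * L, P.T @ np.kron(L, L) @ P)   # L^(2) as in (6)
```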
For the condition number of the Hadamard powers, we can use the Schur product theorem [18]. We know that \((\mathbf{L}+\epsilon\mathbf{I})^{(\rho)}\succ 0\)\(\forall\)\(\epsilon>0\) since \((\mathbf{L}+\epsilon\mathbf{I})\succ 0\)\(\forall\)\(\epsilon>0\), and therefore \(\kappa((\mathbf{L}+\epsilon\mathbf{I})^{(\rho)})<\infty\). For the adjacency matrix, the eigenvalues of \(\mathbf{A}\) lie in \([-d,d]\), where \(d\) is the maximal degree of \(G\) [28]. Therefore, we can bound the eigenvalues of \(\mathbf{A}\) to \([-1,1]\) by normalizing \(\mathbf{A}\) such that \(\mathbf{A}_{N}=\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\). As a result, we know that \(\mathbf{A}_{N}+\epsilon\mathbf{I}\succ 0\)\(\forall\)\(\epsilon>1\), and \((\mathbf{A}_{N}+\epsilon\mathbf{I})^{(\rho)}\succ 0\)\(\forall\)\(\epsilon>1\). We can say that the theoretical developments of the sparse Sobolev norm carry over, to some extent, from Section 2.2, _i.e._, a more diverse set of frequencies and a better condition number. Figure 1 shows five normalized eigenvalue penalizations for \(\mathbf{L}^{\rho}\) (non-sparse) and \(\mathbf{L}^{(\rho)}\) (sparse). We notice that the normalized spectra of \(\mathbf{L}^{\rho}\) and \(\mathbf{L}^{(\rho)}\) are very similar. Finally, we should work with weighted graphs when using the adjacency matrix since \(\mathbf{A}^{(\rho)}=\mathbf{A}\)\(\forall\)\(\rho\in\mathbb{N}\) for unweighted graphs.
### Graph Neural Network Architecture
Kipf and Welling [2] proposed one of the most successful yet simple GNN, called Graph Convolutional Networks (GCNs):
\[\mathbf{H}^{(l+1)}=\sigma(\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{H}^{(l)}\mathbf{W}^{(l)}), \tag{8}\]
where \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\), \(\tilde{\mathbf{D}}\) is the degree matrix of \(\tilde{\mathbf{A}}\), \(\mathbf{H}^{(l)}\) is the matrix of activations in layer \(l\) such that \(\mathbf{H}^{(1)}=\mathbf{X}\) (data matrix), \(\mathbf{W}^{(l)}\) is the matrix of trainable weights in layer \(l\), and \(\sigma(\cdot)\) is an activation function. The motivation of the propagation rule in (8) comes from the first-order approximation of localized spectral filters on graphs [1]. Kipf and Welling [2] used (8) to propose the vanilla GCN, which is composed of two graph convolutional layers as in (8). The first activation function is a Rectified Linear Unit (\(\mathrm{ReLU}(\cdot)=\max(0,\cdot)\)), and the final activation function is a softmax applied row-wise such that \(\mathrm{softmax}(x_{i})=\frac{1}{Q}\exp{(x_{i})}\) where \(Q=\sum_{i}\exp{(x_{i})}\). Finally, the vanilla GCN uses cross-entropy as a loss function.
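As an illustration, a minimal NumPy sketch of the propagation rule (8) is given below; the adjacency matrix, features, and weights are placeholder values, not data from the paper:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step as in (8), with a ReLU activation."""
    A_tilde = A + np.eye(A.shape[0])                  # renormalization trick
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    return np.maximum(0.0, D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W)

A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy graph
H = np.random.rand(3, 4)                                  # H^(1) = X
W = np.random.rand(4, 2)                                  # trainable weights
print(gcn_layer(A, H, W).shape)                           # (3, 2)
```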
We introduce a new filtering operation based on the sparse Sobolev term where our propagation rule is such that:
\[\mathbf{B}_{\rho}^{(l+1)}=\sigma(\tilde{\mathbf{D}}_{\rho}^{-\frac{1}{2}} \tilde{\mathbf{A}}_{\rho}\tilde{\mathbf{D}}_{\rho}^{-\frac{1}{2}}\tilde{ \mathbf{H}}^{(l)}\mathbf{W}_{\rho}^{(l)}), \tag{9}\]
where \(\tilde{\mathbf{A}}_{\rho}=(\mathbf{A}+\epsilon\mathbf{I})^{(\rho)}\) is the \(\rho\)th sparse Sobolev term of \(\mathbf{A}\), \(\tilde{\mathbf{D}}_{\rho}\) is the degree matrix of \(\tilde{\mathbf{A}}_{\rho}\), and \(\tilde{\mathbf{H}}^{(1)}=\mathbf{X}\). Notice that \(\tilde{\mathbf{A}}_{\rho}=\tilde{\mathbf{A}}\) when \(\epsilon=1\) and \(\rho=1\), _i.e._, our propagation rule is a generalization of the GCN model. S-SobGNN computes a cascade of propagation rules as in (9) with several values of \(\rho\) in the set \(\{1,2,\ldots,\alpha\}\), and therefore a linear combination layer weights the outputs of each filter. Figure 2 shows the basic configuration of S-SobGNN. Notice that our graph convolution is efficiently computed since the term \(\tilde{\mathbf{D}}_{\rho}^{-\frac{1}{2}}\tilde{\mathbf{A}}_{\rho}\tilde{\mathbf{D}}_{\rho}^{-\frac{1}{2}}\)\(\forall\)\(\rho\in\{1,2,\ldots,\alpha\}\) is the same in all layers (so we can compute it offline), and also, these terms are sparse for any value of \(\rho\) (given that \(\mathbf{A}\) is also sparse). S-SobGNN uses \(\mathrm{ReLU}\) as the activation function for each filter, \(\mathrm{softmax}\) at the end of the network, and the cross-entropy loss function. The basic configuration of S-SobGNN is defined by the number of filters \(\alpha\) in each layer, the parameter \(\epsilon\), the number of hidden units of each \(\mathbf{W}_{\rho}^{(l)}\), and the number of layers \(n\). When we construct weighted graphs with Gaussian kernels, the weights of the edges are in the interval \([0,1]\). As a consequence, large values of \(\rho\) could make \(\tilde{\mathbf{A}}_{\rho}=\mathbf{0}\), and the diagonal elements of \(\tilde{\mathbf{D}}_{\rho}^{-\frac{1}{2}}\) could become \(\infty\). Similarly, large values of \(\alpha\) make very wide architectures with a high parameter budget, so it is desirable to maintain a reasonable value for \(\alpha\). The computational complexity of S-SobGNN is \(\mathcal{O}(n\alpha|\mathcal{E}|+n\alpha)\). For comparison, the computational complexity of an \(n\)-layer GCN is \(\mathcal{O}(n|\mathcal{E}|)\). The exact complexity of both methods also depends on the feature dimension, the hidden units, and the number of nodes in the graph, which we omit for simplicity.
Figure 1: Eigenvalue penalization for the non-sparse and sparse matrix multiplications of the combinatorial Laplacian matrix.
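The following sketch (illustrative, with placeholder weights; not the authors' released code) implements one S-SobGNN layer as in (9), computing \(\alpha\) filter outputs and combining them linearly. In practice, the normalized Hadamard powers could be precomputed once and reused across layers, as noted above:

```python
import numpy as np

def s_sob_layer(A, H, Ws, w_comb, eps=1.0, alpha=3):
    """One S-SobGNN layer as in (9): alpha parallel filters, linearly combined.
    Ws is a list of alpha weight matrices; w_comb holds the combination weights."""
    N = A.shape[0]
    outputs = []
    for rho in range(1, alpha + 1):
        A_rho = (A + eps * np.eye(N)) ** rho          # Hadamard (elementwise) power
        D_inv_sqrt = np.diag(1.0 / np.sqrt(A_rho.sum(axis=1)))
        outputs.append(np.maximum(0.0, D_inv_sqrt @ A_rho @ D_inv_sqrt @ H @ Ws[rho - 1]))
    return sum(w * out for w, out in zip(w_comb, outputs))

A = np.array([[0., .5, 0.], [.5, 0., .8], [0., .8, 0.]])  # weighted toy graph
H = np.random.rand(3, 4)
Ws = [np.random.rand(4, 2) for _ in range(3)]
print(s_sob_layer(A, H, Ws, w_comb=[0.5, 0.3, 0.2]).shape)  # (3, 2)
```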
## 3 Experiments and Results
S-SobGNN is compared to eight GNN architectures: Chebyshev filters (Cheby) [1], GCN [2], GAT [3], SIGN [14], SGC [26], ClusterGCN [29], SuperGAT [30], and Transformers [31]. We test S-SobGNN in several semi-supervised learning tasks, including cancer detection in images [20], text classification of news (20News) [20], Human Activity Recognition using sensors (HAR) [21], and recognition of isolated spoken letters (Isolet) [22]. We frame the semi-supervised learning problem as a node classification task in graphs, where we construct the graphs with a \(k\)-Nearest Neighbors (\(k\)-NN) method and a Gaussian kernel with \(k=30\). We split the data into train/validation/test sets with \(10\%\)/\(45\%\)/\(45\%\), where we first divide the data into a development set and a test set. This is done once to avoid using the test set in the hyperparameter optimization. We tune the hyperparameters of each GNN with a random search with \(100\) repetitions and five different seeds for the validation set. We report average accuracies on the test set using \(50\) different seeds with \(95\%\) confidence intervals calculated by bootstrapping with \(1,000\) samples. Table 1 shows the experimental results. S-SobGNN attains the best accuracy on Isolet and the second-best accuracy on the remaining datasets, performing competitively with state-of-the-art methods.
## 4 Conclusions
In this work, we extended the concept of Sobolev norms using the Hadamard product between matrices to keep the sparsity level of the graph representations. We introduced a new sparse GNN architecture using the proposed sparse Sobolev norm. We also provided theoretical insights into our filtering operation in Sections 2.2 and 2.3. Finally, S-SobGNN outperformed several methods from the literature in four semi-supervised learning tasks.
**Acknowledgments:** This work was supported by the DATAIA Institute as part of the "Programme d'Investissement d'Avenir", (ANR-17-CONV-0003) operated by CentraleSupelec, and by ANR (French National Research Agency) under the JCJC project GraphIA (ANR-20-CE23-0009-01).
\begin{table}
\begin{tabular}{r|c c c c}
\hline \hline
**Model** & **Cancer** & **20News** & **HAR** & **Isolet** \\
\hline
Cheby [1] & \(87.55\pm 3.91\) & \(70.36\pm 1.14\) & \(73.14\pm 7.01\) & \(69.70\pm 1.47\) \\
GCN [2] & \(76.71\pm 4.47\) & \(51.76\pm 2.11\) & \(66.26\pm 4.91\) & \(55.55\pm 2.72\) \\
GAT [3] & \(73.51\pm 4.87\) & \(48.72\pm 2.21\) & \(59.13\pm 6.30\) & \(60.00\pm 2.21\) \\
SIGN [14] & \(\mathbf{95.55\pm 0.38}\) & \(\mathbf{72.79\pm 0.25}\) & \(\mathbf{99.98\pm 0.25}\) & \(\mathbf{64.02\pm 0.30}\) \\
SGC [26] & \(72.80\pm 4.71\) & \(54.68\pm 1.99\) & \(42.19\pm 3.44\) & \(41.55\pm 1.91\) \\
ClusterGCN [29] & \(74.46\pm 5.37\) & \(60.56\pm 2.18\) & \(57.07\pm 5.48\) & \(63.99\pm 2.24\) \\
SuperGAT [30] & \(70.56\pm 5.14\) & \(57.52\pm 1.93\) & \(56.04\pm 5.32\) & \(58.49\pm 2.27\) \\
Transformer [31] & \(71.10\pm 5.45\) & \(57.48\pm 2.29\) & \(66.01\pm 6.10\) & \(66.24\pm 2.39\) \\
\hline
S-SobGNN (ours) & \(\mathbf{93.11\pm 0.45}\) & \(\mathbf{72.18\pm 0.32}\) & \(\mathbf{92.85\pm 0.58}\) & \(\mathbf{86.17\pm 0.34}\) \\
\hline \hline
\end{tabular}
* The best and second-best performing methods on each dataset are shown in red and blue, respectively.
\end{table}
Table 1: Accuracy (in %) for the baseline methods and our S-SobGNN algorithm in four datasets for semi-supervised learning, inferring the graphs with a \(k\)-NN method. |
2307.00686 | Neural network execution using nicked DNA and microfluidics | DNA has been discussed as a potential medium for data storage. Potentially it
could be denser, could consume less energy, and could be more durable than
conventional storage media such as hard drives, solid-state storage, and
optical media. However, computing on data stored in DNA is a largely unexplored
challenge. This paper proposes an integrated circuit (IC) based on
microfluidics that can perform complex operations such as artificial neural
network (ANN) computation on data stored in DNA. It computes entirely in the
molecular domain without converting data to electrical form, making it a form
of in-memory computing on DNA. The computation is achieved by topologically
modifying DNA strands through the use of enzymes called nickases. A novel
scheme is proposed for representing data stochastically through the
concentration of the DNA molecules that are nicked at specific sites. The paper
provides details of the biochemical design, as well as the design, layout, and
operation of the microfluidics device. Benchmarks are reported on the
performance of neural network computation. | Arnav Solanki, Zak Griffin, Purab Ranjan Sutradhar, Amlan Ganguly, Marc D. Riedel | 2023-07-02T23:42:27Z | http://arxiv.org/abs/2307.00686v1 | Neural network execution using nicked DNA and microfluidics Arnav Solanki\({}^{*1}\), Zak Griffin\({}^{*2}\), Purab Ranjan Sutradhar\({}^{2}\), Amlan Ganguly\({}^{2}\), Marc Riedel\({}^{1\dagger}\)
## 1 Abstract
DNA has been discussed as a potential medium for data storage. Potentially it could be denser, could consume less energy, and could be more durable than conventional storage media such as hard drives, solid-state storage, and optical media. However, computing on data stored in DNA is a largely unexplored challenge. This paper proposes an integrated circuit (IC) based on microfluidics that can perform complex operations such as artificial neural network (ANN) computation on data stored in DNA. It computes entirely in the molecular domain without converting data to electrical form, making it a form of _in-memory_ computing on DNA. The computation is achieved by topologically modifying DNA strands through the use of enzymes called nickases. A novel scheme is proposed for representing data stochastically through the concentration of the DNA molecules that are nicked at specific sites. The paper provides details of the biochemical design, as well as the design, layout, and operation of the microfluidics device. Benchmarks are reported on the performance of neural network computation.
## 2 Introduction
This paper presents a novel method for implementing mathematical operations in general, and artificial neural networks (ANNs) in particular, with molecular reactions on DNA in a microfluidic device. In what follows, we discuss the impetus to store data and perform computation with DNA. Then we outline the microfluidic technology that we will use for these tasks.
### Background
The fields of _molecular computing_ and _molecular storage_ are based on the quixotic idea of creating molecular systems that perform computation or store data directly in molecular form. Everything consists of molecules, of course, so the terms generally mean computing and storage in _aqueous_ environments, based on chemical or biochemical mechanisms. This is in contrast to conventional computers, in which computing is effected _electrically_ and data is either stored _electrically_, in terms of voltage, in solid-state storage devices; or _magnetically_, in hard drives; or _optically_ on CDs and DVDs. Given the maturity of these conventional forms of computing and storage, why consider chemical or biochemical means?
The motivation comes from distinct angles:
1. Molecules are very, very **small**, even compared to the remarkable densities in our modern electronic systems. For instance, DNA has the potential to store approximately 1,000 times more data per unit volume compared to solid-state drives. Small size also means that molecular computing can be _localized_, so it can be performed in confined spaces, such as inside cells or on tiny sensors.
2. In principle, molecular computing could offer unprecedented **parallelism**, with billions of operations occurring simultaneously.
3. In principle, molecular computing could consume much less **energy** than our silicon systems, which always need a bulky battery or wired power source.
4. The use of naturally occurring molecules with enzymes results in a more **sustainable** computer design without the use of toxic and unethically sourced raw materials.
5. Finally, molecular computing could be deployed **in situ** in our bodies or our environment. Here the goal is to perform sensing, computing, and actuating at a molecular level, with no interfacing at all with external electronics. The inherent biocompatibility of molecular computing components offers the possibility of seamless integration into biological systems.
#### DNA Storage
The leading contender for a molecular storage medium is DNA. Ever since Watson and Crick first described the molecular structure of DNA, its information-bearing potential has been apparent to computer scientists. With each nucleotide in the sequence drawn from the four-valued alphabet of \(\{A,T,C,G\}\), a molecule of DNA with \(n\) nucleotides can take \(4^{n}\) distinct values, that is, it stores \(2n\) bits of data. Indeed, this information storage underpins life as we know it: all the instructions on how to build and operate a life form are stored in its DNA, honed over eons of evolutionary time.
In a highly influential Science paper in 2012, the renowned Harvard genomicist George Church made the case that we will eventually turn to DNA for information storage, based on the ultimate physical limits of materials [1]. He delineated the theoretical storage **capacity** of DNA: 200 petabytes per gram; the read-write **speed**: less than 100 microseconds per bit; and, most importantly, the **energy**: as little as \(10^{-19}\) joules per bit, which is orders of magnitude below the femtojoules/bit (\(10^{-15}\) J/bit) barrier touted for other emerging technologies. Moreover, DNA is stable for decades, perhaps even millennia, as DNA extracted from the carcasses of woolly mammoths can attest. In principle, DNA could outperform all other types of media that have been studied or proposed.
Of course, no one has yet built a DNA storage system that comes close to beating existing media (magnetic, optical, or solid-state storage). The practical challenges are formidable. Fortunately, DNA technology is not exotic. Spurred by the biotech and pharma industries, the technology for both sequencing (_reading_) and synthesizing (_writing_) DNA has followed a Moore's law-like trajectory for the past 20 years. Sequencing 3 billion nucleotides in a human genome can be done for less than $1,000. Synthesizing a megabyte of DNA data can be done in less than a day. Inspired no doubt by Church's first-principles thinking, but also motivated by the trajectory of sequencing and synthesis technology, there has been a groundswell of interest in DNA storage. The leading approach is the synthesis of DNA based on phosphoramidite chemistry [2]. However, many other creative ideas and novel technologies, ranging from nanopores [3] to DNA origami [4], are being deployed.
#### DNA Computing
Beginning with the seminal work of Adleman a quarter-century ago [5], DNA computing has promised the benefits of massive parallelism in operations. Operations are typically performed on the _concentration_ of DNA strands in solution. For instance, with DNA strand displacement cascades, single strands displace parts of double strands, releasing single strands that can then participate in further operations [6, 7, 8]. The inputs and outputs are the concentration values of specific strands.
It is fair to say that in the three decades since Adleman first proposed the idea, the practical impact of research on this topic has been modest. A practical DNA storage system, particularly one that is inherently programmable, changes this. Such storage opens up the possibility of "in-memory" computing, that is, computing directly on the data stored in DNA [9, 10, 11]. One performs such computation not on data stored in the sequence of nucleotides, but rather by making topological modifications to the strands: breaks in the phosphodiester backbone of DNA that we call "nicks" and gaps in the backbone that we call "toeholds." The nicking can be performed enzymatically with a system such as CRISPR/Cas9 [12, 13].
Note that the data that we operate on with this form of DNA computing is encoded in a different dimension than the data encoded in the nucleotide sequence of the DNA. The **underlying data** - perhaps terabytes' worth of it - is stored as the sequence of \(A\)'s, \(C\)'s, \(G\)'s, and \(T\)'s in synthesized strands. Superimposed on this, we store **metadata** via topological modifications. This is illustrated in Fig. 1. This metadata is rewritable. Accordingly, it fits the paradigm of "in-memory" computing [14]. The computation is of SIMD form\({}^{1}\). SIMD provides a means to transform stored data, perhaps large amounts of it, with a single parallel instruction.
Footnote 1: SIMD is a computer engineering acronym for Single Instruction, Multiple Data [15], a form of computation in which multiple processing elements perform the same operation on multiple data points simultaneously. It contrasts with the more general class of parallel computation called MIMD (Multiple Instructions, Multiple Data). Much of the modern progress in electronic computing power has come by scaling up SIMD computation with platforms such as graphical processing units (GPUs).
### Stochastic Logic
The form of molecular computing that we present in this paper is predicated on a novel encoding of data. A link is made between the representation of random variables with a paradigm called _stochastic logic_ on the one hand, and the representation of variables in molecular systems as the concentration of molecular species, on the other.
Stochastic logic is an active topic of research in digital design, with applications to emerging technologies [16, 17, 18]. Computation is performed with familiar digital constructs,
Figure 1: Data is stored in multiple dimensions. The sequence of nucleotides stores data in the form of the \(A\)’s, \(C\)’s, \(T\)’s, and \(G\), with 2 bits per letter. Superimposed on this, we store data via topological modifications to the DNA, in the form of nicks and exposed toeholds. This data is **rewritable**, with techniques developed for DNA computation.
such as AND, OR, and NOT gates. However, instead of having specific Boolean values of 0 and 1, the inputs are random bitstreams. A number \(x\) (\(0\leq x\leq 1\)) corresponds to a sequence of random bits. Each bit has _probability_\(x\) of being one and probability \(1-x\) of being zero, as illustrated in Figure 2. Computation is recast in terms of the probabilities observed in these streams. Research in stochastic logic has demonstrated that many mathematical functions of interest can be computed with simple circuits built with logic gates [17, 19].
Consider basic logic gates. Given a stochastic input \(x\), a NOT gate implements the function
\[\text{NOT}(x)=1-x. \tag{1}\]
This means that while an individual input of 1 results in an output of 0 for the NOT gate (and vice versa), statistically, for a random bitstream that encodes the stochastic value \(x\), the NOT gate output is a new bitstream that encodes \(1-x\). The output of an AND gate is 1 only if all the inputs are simultaneously 1. The probability of the output being 1 is thus the probability of all the inputs being 1. Therefore, an AND gate implements the stochastic function:
\[\text{AND}(x,y)=xy, \tag{2}\]
that is to say, multiplication. Probabilities, of course, are values between 0 and 1, inclusive. If we express them as rational numbers, given some positive integer \(n\) as the denominator, we have fractions
\[x=\frac{a}{n},\,y=\frac{b}{n}\]
where \(0\leq a\leq n\) and \(0\leq b\leq n\). So an AND gate computes _a fraction of a fraction._
We can implement other logic functions. The output of an OR gate is 0 only if all the inputs are 0. Therefore, an OR gate implements the stochastic function:
\[\text{OR}(x,y)=1-(1-x)(1-y)=x+y-xy. \tag{3}\]
The output of an XOR gate is 1 only if the two inputs \(x,y\) are different. Therefore, an XOR gate implements the stochastic function:
\[\text{XOR}(x,y)=(1-x)y+x(1-y)=x+y-2xy. \tag{4}\]
Fig 2: Stochastic representation: A random bitstream. A value \(x\in[0,1]\), in this case \(3/8\), is represented as a bitstream. The probability that a randomly sampled bit in the stream is one is \(x=3/8\); the probability that it is zero is \(1-x=5/8\).
The NAND, NOR, and XNOR gates can be derived by composing the AND, OR, and XOR gates each with a NOT gate, respectively. Please refer to Table 1 for a full list of the algebraic expressions of these gates. It is well known that any Boolean function can be expressed in terms of AND and NOT operations (or entirely in terms of NAND operations). Accordingly, any function can be expressed as a nested sequence of multiplications and \(1-x\) type operations.
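The gate identities above are easy to check by simulation. In this NumPy sketch (our own illustration, with arbitrary values of \(x\) and \(y\)), random bitstreams encode the operands, and the observed output frequencies approach the listed functions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x, y = 0.375, 0.6
bits_x = rng.random(n) < x          # bitstream encoding x
bits_y = rng.random(n) < y          # independent bitstream encoding y
print((bits_x & bits_y).mean())     # AND ~ x*y = 0.225
print((~bits_x).mean())             # NOT ~ 1 - x = 0.625
print((bits_x ^ bits_y).mean())     # XOR ~ x + y - 2xy = 0.525
```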
There is a large body of literature on the topic of stochastic logic. We point to some of our prior work in this field. In [20] we proved that any multivariate polynomial function with its domain and codomain in the unit interval \([0,1]\) can be implemented using stochastic logic. In [17], we provide an efficient and general synthesis procedure for stochastic logic, the first in the field. In [21], we provided a method for transforming probabilities values with digital logic. Finally, in [22, 23] we demonstrated how stochastic computation can be performed deterministically.
### DNA Strand Displacement
DNA is generally present in double-stranded form (**dsDNA**), in a double helix, with A's pairing with T's, and C's with G's. Without qualification, when we refer to "DNA" we mean double-stranded. However, for the operation we describe here, DNA in single-stranded form (**ssDNA**) plays a role.
The molecular operation that we exploit in our system is called DNA strand displacement [24, 6]. It has been widely studied and deployed. Indeed, prior work has shown that such a system can emulate _any_ abstract set of chemical reactions. The reader is referred to Soloveichik et al. and Zhang et al. for further details [25, 7]. Here we illustrate a simple, generic example. In Section 5, we discuss how to map our models to such DNA strand-displacement systems.
We begin by first defining a few basic concepts. DNA strands are linear sequences of four different nucleotides \(\{A,T,C,G\}\). A nucleotide can bind to another following _Watson-Crick_ base-pairing: A binds to T, C binds to G. A pair of single DNA strands will bind to each other, a process called _hybridization_, if their sequences are complementary according to the base-pairing rule, that is to say, wherever there is an \(A\) in one, there is a \(T\) in the other, and vice versa; and whenever there is a \(C\) in one, there is a \(G\) in the other and vice-versa. The binding strength depends on the length of the complementary regions. Longer regions will bind strongly, smaller ones weakly. Reaction rates match binding strength: hybridization completes quickly if the complementary regions are long and slowly if they are short. If the complementary regions are very short, hybridization might not occur at all. (We acknowledge that, in this brief discussion, we are omitting many relevant details such as temperature, concentration, and the distribution of nucleotide types, i.e., the fraction of paired bases that are A-T versus C-G. All of these parameters must be accounted for in realistic simulation runs.)
Figure 3 illustrates strand displacement with a set of reversible reactions. The entire reaction occurs as reactant molecules \(A\) and \(B\) form products \(E\) and \(F\), with each intermediate stage operating on molecules \(C\) and \(D\). In the figure, \(A\) and \(F\) are single strands of DNA, while \(B\), \(C\), \(D\), and \(E\) are double-stranded complexes. Each single-strand DNA molecule is divided, conceptually, into subsequences that we call **domains**, denoted as 1, 2, and 3 in the figure. The complementary sequences for these domains are \(1^{*},2^{*}\) and \(3^{*}\). (We will use this notation for complementarity throughout.) All distinct domains are assumed to be _orthogonal_ to each other, meaning that these domains do not hybridize.
\begin{table}
\begin{tabular}{c|c|c}
gate & inputs & function \\ \hline
NOT & \(x\) & \(1-x\) \\ \hline
AND & \(x,y\) & \(xy\) \\ \hline
OR & \(x,y\) & \(x+y-xy\) \\ \hline
NAND & \(x,y\) & \(1-xy\) \\ \hline
NOR & \(x,y\) & \(1-x-y+xy\) \\ \hline
XOR & \(x,y\) & \(x+y-2xy\) \\ \hline
XNOR & \(x,y\) & \(1-x-y+2xy\) \\
\end{tabular}
\end{table}
Table 1: Stochastic Function Implemented by Basic Logic Gates
**Toeholds** are a specific kind of domain in a double-stranded DNA complex where a single strand is exposed. For instance, the molecule \(B\) contains a toehold domain at \(1^{*}\) in Figure 3. Toeholds are usually 6 to 10 nucleotides long, while the lengths of regular domains are typically 20 nucleotides. The exposed strand of a toehold domain can bind to the complementary domain from a longer ssDNA, and thus toeholds can trigger the binding and displacement of DNA strands. The small length of the toehold makes this hybridization reversible.
In the first reaction in Figure 3, the open toehold \(1^{*}\) in molecule \(B\) binds with domain 1 from strand \(A\). This forms the molecule \(C\) where the duplicate 2 domain section from molecule \(A\) forms an overhanging flap. This reaction shows how a toehold triggers the binding of DNA strands. In molecule \(C\), the overhanging flap can stick onto the complementary domain \(2^{*}\), thus displacing the previously bound strand. This type of branch migration is shown in the second reaction, where the displacement of one flap to the other forms the molecule \(D\). This reaction is reversible, and the molecules \(C\) and \(D\) exist in a dynamic equilibrium. The process of branch migration of the flap is essentially a random walk: at any time when part of the strand from molecule \(A\) hybridizes with strand \(B\), more of \(A\) might bind and displace a part of \(F\), or more of \(F\) might bind and displace a part of \(A\). Therefore, this reaction is reversible. The third reaction is the exact opposite of reaction 1 - the new flap in molecule \(D\) can peel off from the complex and thus create the single-strand molecule \(F\) and leave a new double-stranded complex \(E\). Molecule \(E\) is similar to molecule \(B\), but the toehold has migrated from \(1^{*}\) to \(3^{*}\). The reaction rate of this reaction depends on the length of the toehold \(3^{*}\). If we reduce the length of the toehold, the rate of reaction 3 becomes so small that the reaction can be treated as a forward-only reaction. This bias in the direction of the reaction means that we can model the entire set of reactions as a single DNA strand displacement event, where reactants \(A\) and \(B\) react to produce \(E\) and \(F\). Note that the strand \(F\) can now participate in further toehold-mediated reactions, allowing for cascading of such these DNA strand displacement systems.
### Chemical Model
Recent research has shown how data can be encoded via _nicks_ on DNA using gene-editing enzymes like CRISPR-Cas9 and PfAgo [26]. _Probabilistic switching_ of concentration values has been demonstrated by the DNA computing community [27]. In previous work, we demonstrated how a concept from computer engineering called _stochastic logic_ can be adapted to DNA computing [28]. In this paper, we bring these disparate threads together: we demonstrate how to perform stochastic computation on _fractionally-encoded_ data stored on nicked DNA.
The conventional approach to storing data in DNA is to use a single species of strand to represent a value. It is either encoded as a binary value, where the presence of the specific strand represents a 1 and its absence a 0 [29]; or as a non-integer value, encoded according to its concentration, called a _direct representation_[30]. In recent research, we have shown how a _fractional representation_ can be used [31, 11, 28]. The idea is to use the concentration of two species of strand \(X_{0},X_{1}\) to represent a value \(x\) with
\[x=\frac{X_{1}}{X_{0}+X_{1}}\]
where \(x\in[0,1]\). This encoding is related to the concept of _stochastic logic_ in which computation is performed on randomized bit streams, with values represented by the fraction of 1's versus 0's in the stream [32], [33], [17].
In this work, we store values according to nicking sites on double DNA strands. For a given site, we will have some strands nicked there, but others not. Let the overall concentration of the double strand equal \(C_{0}\), and the concentration of strands nicked at the site equal \(C_{1}\). The ratio of the concentration of strands nicked versus the overall concentration is
\[x=\frac{C_{1}}{C_{0}}\]
So this ratio is the relative concentration of the nicked strand at this site. We use it to represent a variable \(x\in[0,1]\).
Setting this ratio can be achieved by two possible methods. One is that we nick a site using a gene-editing guide that is not fully complementary to the nicking site. The degree of complementarity would control the rate of nicking and so set the relative concentration of strands that are nicked. A simpler method is to split the initial solution containing the strand into two samples; nick all the strands in one sample; and then mix the two samples with the desired ratio \(x\).
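The second method amounts to simple volume bookkeeping. A small sketch (with a hypothetical helper function and illustrative units, not a protocol from the paper):

```python
def encode_by_mixing(total_volume, x):
    """Split a solution so that remixing encodes x as the nicked fraction."""
    nicked_part = total_volume * x          # every strand in this part is nicked
    untouched_part = total_volume * (1 - x) # no strands here are nicked
    return nicked_part, untouched_part

nicked, untouched = encode_by_mixing(total_volume=1.0, x=0.375)
print(nicked / (nicked + untouched))        # relative concentration C1/C0 = 0.375
```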
### Microfluidics and Lab-on-Chip
Microfluidics is a rapidly developing discipline where small volumes of fluids are manipulated and transferred over channels whose dimensions range from one to hundreds of microns [34]. Typically, such channels leverage principles of fluid dynamics, enabling the modeling and design of systems where small volumes of fluids are moved to achieve a variety of purposes such as information and energy transfer. Due to their small form factors and need for very small amounts of fluids, this discipline is finding application in a variety of domains such as cell sorting, DNA analysis, chemical synthesis, and medical applications.
Fig 3: A set of DNA strand displacement reactions. Each DNA single strand is drawn as a continuous arrow, consisting of different colored domains numbered 1 through 3. DNA domains that are complementary to each other due to A–T, C–G binding are paired as 1 and \(1^{*}\). The first reaction shows reactants A and B hybridizing together via the toehold at domain \(1^{*}\) on molecule \(B\). The second reaction depicts branch migration of the overhanging flap of DNA in molecule \(C\), thereby resulting in the nick migrating from after domain 1 to 2. The third reaction shows how an overhanging strand of DNA can be peeled off of molecule \(D\), thereby exposing a toehold at domain \(3^{*}\) on molecule \(E\) and releasing a freely floating strand \(F\). All reactions are reversible. The only domains that are toeholds are \(1^{*}\) and \(3^{*}\).
Utilizing the advances in microfluidics, a practical device concept was envisioned: the Lab-on-Chip (LoC) [35]. A LoC is a device consisting of a network of microfluidic channels and microcells capable of transferring fluids to perform several functions such as chemical analysis, reactions, and sorting. Typical applications were in the area of medical sciences, where small amounts of samples were needed to perform tests and diagnoses [35]. While the dominant application area of LoCs remains efficient medical diagnosis, advances in manufacturing capability using Integrated Circuit (IC) fabrication methodologies or 3D printing are expanding their applicability into sensing and processing more widely. In this paper, we envision an LoC device enabled by microfluidics to perform neural network computations using DNA molecules as the medium.
### Organization
The rest of this paper is organized as follows. Section 3 describes how we implement our core operation, namely multiplication. We do so by computing a _fraction_ of a _fraction_ of concentration values. Section 4 presents the architecture of the microfluidic system that we use to implement computation on data stored in DNA. Section 5 discusses the implementation of an artificial neural network (ANN) using our microfluidic neural engine. Section 6 presents simulation results of the ANN computation. Finally, Section 7 presents conclusions and discusses future work.
Fig 4: Microcell operation sequence. The microfluidic channels are painted blue, with arrows showing flow direction induced by pressure differentiation. The gray and red boxes respectively represent Quake valves open and closed.
## 3 Multiplication
The core component of our design is the multiplication operation, computed as a fraction of a fraction of a concentration value of nicked DNA.
### Encoding Scheme
Nicking enzymes such as CRISPR-Cas9 can be used to effectively "nick" dsDNA at a particular site [12, 13]. Since DNA is double-stranded, with strong base pairing between the A's and T's and the C's and G's, the molecule does not fall apart. Indeed, the nicking can be performed at multiple sites, and this process can be conducted independently.
Suppose a DNA molecule with a particular nicking site labeled \(A\) is in a solution. We separate the solution into two parts with a volume ratio \(a\) to \(1-a\) for some fraction \(a\). Now site \(A\) is nicked on all DNA molecules in the first solution, while the second solution is left untouched. These two solutions are mixed back to obtain a single solution. Some molecules in this solution are nicked, while others are not. The relative concentration of DNA molecules with a nick at site \(A\) is \(a\), while that of the molecules that are not nicked is \(1-a\). Thus, any arbitrary fraction \(a\) can be encoded in a solution of DNA molecules with a nicking site. In our framework, the stochastic value encoded at a particular site in DNA is the relative concentration (between 0 and 1) of DNA molecules with a nick at that site.
### Multiplying Two Values
Consider a DNA molecule with two unique nicking sites, \(A\) and \(B\). First, a stochastic value \(a\) is encoded at site \(A\), as was discussed in Section 3.1. Now the single solution is again split into two parts, of volume ratio \(b\) to \(1-b\). All molecules are nicked at site \(B\) in the first solution, while the second solution is again left untouched. Mixing these two solutions yields a solution containing DNA molecules that are either nicked at site \(B\) or not. Thus, site \(B\) now encodes the stochastic value \(b\). Now both sites \(A\) and \(B\) are being used to independently store stochastic values \(a\) and \(b\). Since either site could be nicked or not nicked, there are 4 different possible molecules, as shown in Fig. 5. Most significantly, the molecule containing two nicks, both at site \(A\) and \(B\), has a relative concentration of \(a\times b\). That is the product of the two fractional values - a fraction of a fraction. The concentrations of all other molecules are also listed in Fig. 5. Note that these values only hold if both sites are nicked independently.
Fig 5: Multiplying two values, \(a\) and \(b\), through nicking DNA. We start with a solution containing the DNA molecule shown on the top row. Fractional amount \(a\) of these molecules are nicked at site \(A\), and \(b\) amount of all DNA molecules are nicked at site \(B\). This results in a solution of 4 different possible DNA molecule types (as shown on each row). Assuming independent nicking on both sites, the concentration of each of these molecules is shown on the right. The molecule with nicks on both sites \(A\) and \(B\) has a concentration of \(a\times b\), that is, the product of the two fractions.
Thus, our encoding approach allows us not only to store data but also to compute on it. This is ideal for computing a scalar multiplication in a neural network - input data is initialized at site \(A\) in a given solution, and then the scalar weight it is to be multiplied with is stored at site \(B\). In this approach, it is necessary for sites \(A\) and \(B\) to neighbor each other (i.e., no other nicking sites lie between them) to allow for readout.
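A Monte Carlo sketch of Fig. 5 (our illustration, with arbitrary values of \(a\) and \(b\)) shows that independent nicking yields the product in the doubly nicked fraction:

```python
import numpy as np

rng = np.random.default_rng(1)
n_molecules = 1_000_000
a, b = 0.4, 0.7
nick_A = rng.random(n_molecules) < a     # site A nicked on fraction a
nick_B = rng.random(n_molecules) < b     # site B nicked independently
print((nick_A & nick_B).mean())          # doubly nicked ~ a*b = 0.28
print((nick_A & ~nick_B).mean())         # ~ a*(1-b) = 0.12
print((~nick_A & ~nick_B).mean())        # ~ (1-a)*(1-b) = 0.18
```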
### Reading Out
Having covered storing two stochastic values in a single solution, we now discuss multiplying these values.
Assume a solution storing two stochastic values \(a\) and \(b\), as detailed in Section 3.2. This solution is gently heated to initiate denaturing of DNA. That is, the DNA starts to break apart into two strands. By restricting the temperature, only short regions with low G-C content will fully denature, while longer strands remain bound. For our starting molecule, the short region between the nicking sites \(A\) and \(B\) will fully break apart into a single-stranded region. That is, a toehold will be formed between these two sites [36]. This toehold will only be formed on DNA molecules with nicks on both sites, so only an \(a\times b\) fraction of the molecules will have a toehold. Now a probe strand is supplied that will bind to the newly exposed toehold. This probe strand is used to displace the DNA strand adjacent to the toehold. The amount of single-stranded DNA (ssDNA) that is displaced through this process is again \(a\times b\) times the amount of the starting dsDNA. Thus, the product of two stochastic variables can be read out _in vitro_. This procedure is shown in Fig. 6.
Fig 6: Reading out the multiplication results. (a) The DNA solution storing stochastic values \(a\) and \(b\) on sites \(A\) and \(B\) is gently heated. This creates a toehold only on the molecules with nicks on both sites, i.e., the \(a\times b\) molecules. (b) A probe strand (the first reactant) can then bind with the newly exposed toehold and displace ssDNA (the first product). The concentration of this ssDNA stores the product \(a\times b\).
It is important to cleanly separate the dsDNA molecules from the ssDNA extracted above. To achieve this, the dsDNA molecules and probe strands can have magnetic beads attached to them. When a magnetic field is applied to the solution, the dsDNA molecules and any excess probe strands can be pulled down from the solution, allowing the displaced ssDNA to be separated. These magnetic beads are shown in Fig. 9.
## 4 DNA-based Neural Engine
ANN computational workload consists primarily of matrix operations and activation functions. Among the matrix operations, matrix-matrix multiplication (GEMM) and matrix-vector multiplication (GEMV) make up almost the entirety of the workload which can be performed via repeated multiplications and accumulations (MAC). In the proposed DNA Neural Engine the process of performing a multiplication will take advantage of the stochastic representation of the operands. The input to a single neuron can be stochastically represented by the proportion of DNA strands nicked at a consistent site, compared to the total number of DNA strands in a solution (_i.e.,_ the concentration of specifically nicked DNA strands). In this paper, molecules with 2 nicks as shown in Fig. 7 represent value 1, while all other molecule types correspond to 0. The relative concentration of doubly-nicked DNA molecules is the stochastic value stored in the solution.
The neuron weights, on the other hand, are represented by the concentration of enzymes in a droplet intended to create a second nick on the already-nicked DNA molecules. To perform the stochastic multiplication for each neuron's input-weight pair, the droplet with a concentration of enzymes, representing the weight value, is mixed with the droplet of the nicked DNA strands to create a second nick in the DNA strands. The second nicking site is required to be within around 18 base pairs of the first nick to allow a small fragment between the two nicked sites to be detached upon the introduction of probe strands. The product of the input and weight for this particular neuron is represented by the relative concentration of double-nicked strands compared to the total concentration of DNA strands.
Fig 7: Storing data on DNA molecules using nicks. (a) The DNA template molecule consists of domains 0 to 4 (in color), with an additional unnamed domain (black) preceding them and a magnetic bead attached (on the left). 0*-4* denote the complementary top strand sequence for these domains. (b) The DNA molecule with a nick at nicking site \(A\) between the black domain and 0*. (c) The DNA molecule with a nick at nicking site \(B\) between the 0* and 1*. (d) The DNA with nicks on both nicking sites. Only this DNA molecule with two nicks represents data value 1; the other three configurations (a)-(c) correspond to 0.
It may be noted that, at the beginning of processing, the inputs to the neural engine may also be set by this multiplication process: a solution of un-nicked DNA strands is nicked at a single site by nickase enzymes whose concentrations are set to represent the input values. This creates an array of solutions in which the concentration of singly-nicked DNA strands mirrors the concentration of the nickase and therefore the value of each input. Next, we describe the DNA-based neural engine hardware proposed in this work, followed by the execution of the basic operations for an ANN.
### Neural Engine Architecture
For the implementation of this process, we adopt a lab-on-chip (LoC) architecture. LoC emulates the electric signals in a digital chip with a set of controlled fluid channels, valves, and similar components. In our implementation, we will be using microfluidics where components are on the scale of 1-100 \(\mu\)m. Our system will operate using droplet-based microfluidics, meaning the fluid that holds data such as DNA or enzymes will move in small packages called droplets. The movement of droplets through the system will be controlled by creating pressure differentials. One critical component for controlling the flow of the microfluidic channels is the Quake valve, which operates by running a pneumatic channel perpendicularly over a microfluidic channel. When the pneumatic channel is pressurized, it expands, closing the flow across the two sides of the microfluidic channel. To contain each stochastically nicked DNA droplet and merge these with weight enzymes, a small droplet storage container, which we will call a microcell, will be used as seen in Figure 8(a).
Fig 8: (a) Microcell operation sequence, and (b) Microcell assembly for Matrix Multiplications. The microfluidic channels are painted blue, with arrows showing flow direction induced by pressure differentiation. The gray and red boxes respectively represent Quake valves open and closed.
### Microcell Function
Figure 8(a) shows the sequence used to load and mix the two droplets holding the stochastically nicked DNA and weight enzymes. Throughout the loading, mixing, and release processes, there will be a constant pressure difference between the bottom and the top of the microcells shown in the figure, creating the upward flow into the next microcell. The steps, as demonstrated in Figure 8(a), are described below (a compact valve-state sketch follows the list):
1. The right valve R is closed, and the left valve L is kept open. This has the effect of routing the fluid through the left side of the microcell, leaving the fluid on the right side static.
2. The droplet of stochastically nicked DNA enters the microcell and continues until it is known to be at a predefined, timed distance along the left channel.
3. The left valve is closed, and the right valve is opened, rerouting the fluid to flow along the right channel.
4. The weight enzyme droplet is inserted into the microcell and continues until it is known to be approximately the same distance along the right channel. It can be observed that the DNA droplet does not move since it is in static fluid.
5. Both valves are opened, pushing both droplets simultaneously.
6. The two droplets exit the microcell together, mixing them as the channels merge.
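For reference, the six steps can be summarized as a valve-state table. The sketch below is an illustrative control encoding of the sequence above, not actual device firmware:

```python
# Valve states per step: True = Quake valve pressurized (closed).
VALVE_SEQUENCE = [
    {"step": 1, "L": False, "R": True},   # route flow through the left channel
    {"step": 2, "L": False, "R": True},   # load the nicked-DNA droplet
    {"step": 3, "L": True,  "R": False},  # reroute flow to the right channel
    {"step": 4, "L": True,  "R": False},  # load the weight-enzyme droplet
    {"step": 5, "L": False, "R": False},  # open both valves, push both droplets
    {"step": 6, "L": False, "R": False},  # droplets exit and mix as channels merge
]
for state in VALVE_SEQUENCE:
    print(state)
```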
### Microcell Assembly
The microcells will be arranged in a \(k\times k\) formation, each capable of holding and mixing two droplets. These \(k^{2}\) microcells are interconnected with a mesh of microfluidic channels, as shown in Figure 8(b). In this figure, M, S, and P respectively represent the microcells, the merge modules, and the closing reaction pipelines. When delivering the nicked DNA droplets, all right valves are closed, and all left valves are open. The droplets are arranged at fixed distances, so they will travel across the microcells until each contains a single droplet. The weight enzyme droplets will similarly be inserted as in steps 3 and 4 of the microcell operation, with the exception that the left and right valve states are swapped this time. All left and right valves are then opened to perform steps 5 and 6 of the microcell operation shown previously in Figure 8(a) and described in Section 4.2.
## 5 Implementation of ANN Operation in the Neural Engine
Using the principles of stochastic computing with DNA nicking, we implement the operations involved in an ANN on the microfluidic neural engine described above.
### Execution of a Multiplication in a Neuron
We demonstrate the execution of a single multiplication within a microcell by mixing two droplets containing our operands. The multiplicand is a droplet of \(t\) DNA strands in total, of which a fraction \(a\) is nicked at a known site \(A\). The multiplier \(b\) is represented by the concentration of nicking enzymes. The nicking enzymes weaken the bonds holding the strands together so that, after mixing and reacting, the proportion of strands nicked at both sites represents our product, \(a\times b\). The multiplier is a droplet of the weight enzyme with a concentration:
\[E=b\times t\times(1/k). \tag{5}\]
Here, \(k\) represents the number of neurons present in the ANN layer, processed across \(k\) microcells, and the factor \(1/k\) is a consequence of distributing the nicking enzymes over \(k\) microcells. To compensate for this \(1/k\) factor, each of these nicking enzymes will be given enough time to react with \(k\) DNA strands. This new nick will be at a second known site, \(B\), near the first site \(A\), as shown in Fig 7d. This will result in \(a\times t\) of the strands nicked at site \(A\) and \(b\times t\) of the strands nicked at site \(B\). This means that the proportion of strands nicked at both sites will be the product of the two operands. A concentration of _probe strands_ is then introduced to displace the small ssDNA fragment from each of the aforementioned DNA product strands, as shown in Fig. 9. The resulting proportion of free-floating ssDNA fragments with respect to the total DNA (\(t\)) strands represents the product, \(ab\).
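A small sketch of (5), with a hypothetical helper function and arbitrary parameter values:

```python
def weight_enzyme_concentration(b, t, k):
    """Weight-enzyme droplet concentration per (5): E = b * t * (1/k)."""
    return b * t * (1.0 / k)

# Each enzyme then reacts with up to k strands, compensating the 1/k factor.
print(weight_enzyme_concentration(b=0.5, t=1_000_000, k=8))  # 62500.0
```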
### Execution of Dot Product
The above method for scalar multiplication can be used to compute the dot product for \(k\) microcells, where each microcell contains the corresponding element of both input and weight vectors. Each of these \(k\) microcells will undergo the multiplication as described, with the multiplier, \(b\), being a unique weight enzyme concentration representing the weight values for each input pair. The products in each row of the microcell array as shown in Figure 8(b) are then aggregated by mixing the droplets row-wise into one large combined droplet. This large combined droplet contains the sum of the number of fragments from each microcell which represents the dot product. Since the multiplicand in subsequent multiplications must be in the form of nicked DNA strands, this concentration of fragments must be transformed. Each fragment within the large droplet is mapped one-to-one to a nicking enzyme. This nicking enzyme is designed to nick at the primary site along a fresh, un-nicked DNA strand using a method known as strand displacement. The aforementioned method for dot product is implemented in the proposed microcell architecture using the following steps.
Fig 9: Extracting ssDNA from dsDNA molecules using probe strands. (a) The DNA solution storing stochastic values \(a\) and \(b\) on sites \(A\) and \(B\) is gently heated. This creates a toehold only on the molecules with nicks on both sites, i.e., the \(a\times b\) molecules. (b) A probe strand (the first reactant) can then bind with the newly exposed toehold and displace ssDNA (the first product). The concentration of this ssDNA stores the product \(a\times b\).
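The end-to-end dot product can again be checked by simulation. The sketch below (our illustration, assuming independent nicking in each microcell and arbitrary input and weight vectors) pools the displaced fragments row-wise:

```python
import numpy as np

rng = np.random.default_rng(2)
t = 500_000                                  # DNA strands per microcell
inputs = np.array([0.2, 0.9, 0.5, 0.4])      # fractions nicked at site A
weights = np.array([0.7, 0.1, 0.6, 0.3])     # fractions nicked at site B
fragments = [
    ((rng.random(t) < a) & (rng.random(t) < w)).sum()  # displaced ssDNA count
    for a, w in zip(inputs, weights)
]
print(sum(fragments) / t)                    # ~ inputs @ weights = 0.65
```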
#### 5.2.1 Droplet Merging
The droplet merging module, S, shown in Figure 8(b), adds the individual products of the elements of the two vectors to create the dot product. To compute the dot products as described, the mixed droplets from each microcell must be merged row-wise. Each droplet will exit the microcell, then take an immediate right turn, and remain on this horizontal path until entering the merging module, S. The two-step process is outlined as follows. Please refer to Figure 10.
1. All droplets are merged into a single large droplet with the Y valves kept open (shown in green) and the Z valves closed (shown in red). This ensures a rightward flow and no vertical pressure difference. This is shown in Figure 10(a)
2. Next, the Y valves are closed (red), and the Z valves are opened (green), causing a pressure difference that forces each droplet upward through the merge channels. The construction of the merge channels is such that each droplet reaches the final merge point at the same time. This is shown in Figure 10(b)
Once each row of droplets has been mixed, they will go through the three-step closing reaction pipeline to apply the necessary transformations as discussed below.
#### 5.2.2 Reaction Pipeline
The Reaction Pipeline module applies an activation function of the DNA Neural Engine to the previously computed dot products. In addition to implementing the activation, it also transforms the nicked fragments into singly-nicked DNA molecules so that the process can be repeated iteratively to implement multiple ANN layers, using the following steps.
Fig 10: The merging module, S.

After merging all the droplets, the fraction of doubly nicked DNA molecules to all DNA molecules represents the dot product stored in the merged droplet, as shown in Section 3. By applying gentle heat to this droplet, toeholds are created on DNA molecules with two nicks due to partial denaturing. The ssDNA next to this toehold can be displaced using probe strands as shown in Fig. 9. Assuming complete displacement of these ssDNA molecules, the relative concentration (or, to be more precise, the relative number of molecules) of the ssDNA still represents the same fraction as the doubly nicked DNA. Following this, we must apply an activation function to this ssDNA value to incorporate the non-linear computations necessary in neural networks.
Our approach utilizes a sharp sigmoid function with a user-defined transition point - i.e., the activation function is a step function with the domain and range \([0,1]\), and the transition point can be set in the range \((0,1)\). This is achieved with the DNA seesaw gates presented by Qian and Winfree [37]. This approach involves utilizing a basic DNA gate motif, which relies on a reversible strand-displacement reaction utilizing the concept of toehold exchange. The seesawing process allows for the exchange of DNA signals, with a pair of seesawing steps completing a catalytic cycle. The reader is referred to [37] for further details.
We use different DNA strands for thresholding and replenishing the output. The threshold molecule binds with the input ssDNA to generate waste (Fig 11a), so the input ssDNA concentration must be larger than the threshold molecule concentration to preserve some residual amount of input ssDNA for the next stage. In the next stage, the gate reaction, the input ssDNA is used to generate output ssDNA (Fig 11b). The replenishment strand (Fig 11c) drives the gate reaction since it frees up more input ssDNA. That is, increasing the replenishment strand concentration maximizes the concentration of the output ssDNA [37].
With these DNA reactions, a gate can be designed that applies a threshold (in detail, the input ssDNA must be greater than the threshold DNA concentration) on the input ssDNA value, and then generates an output ssDNA value of 1 due to excess replenishment molecules. This allows us to implement a sigmoid activation function. If desired, the concentration of the replenishment molecules (Fig 11c) can be limited to also apply an upper bound to the output ssDNA concentration.
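Numerically, the gate behaves like a step nonlinearity on a normalized concentration. The sketch below is our idealized model (names are hypothetical, and the real seesaw kinetics are only approximated):

```python
def seesaw_activation(x, threshold, output_cap=1.0):
    """Sharp sigmoid on a normalized ssDNA concentration x in [0, 1]."""
    residual = x - threshold      # threshold strands consume input as waste
    # excess replenishment drives any residual input to the capped output
    return output_cap if residual > 0 else 0.0

for x in (0.2, 0.45, 0.8):
    print(x, seesaw_activation(x, threshold=0.4))  # 0.0, 1.0, 1.0
```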
With an activation function applied to the ssDNA concentration, we must now transform this value of DNA molecules to a value of nicking enzymes that can be used to trigger the next level of computation in the network. To achieve this, we will use a DNA strand displacement-based protein switch. First, we will conjugate the nicking enzyme with a DNA tag. This DNA tag will have one strand (called the _major strand_) attached to the protein and contain a toehold, while the other strand (the _minor strand_) will have a magnetic bead attached but will not connect with the protein directly. This is shown in Fig. 11d. The DNA tag sequence will be constructed such that the toehold on the major strand will recruit the displaced DNA strands from the previous step, and the resulting strand-displacement reaction will entirely release the minor strand. The design of the protein-DNA tag allows individual displaced DNA strands to "untag" nicking enzyme molecules. The remaining nicking enzymes (those that did not get to react with the DNA strands) will still be "tagged" with magnetic beads and can be pulled out from the solution through the application of a magnetic field. After the pull-down process, the solution contains only untagged nicking enzymes at a specific concentration (this is discussed in detail below). This solution of nicking enzyme can now be used to nick site \(A\) on a new droplet of DNA in the neuron downstream in the network.
Fig 11: The set of reactions used to apply the activation function on ssDNA and generate an equivalent concentration of nicking enzyme. (a) The threshold reaction: the threshold molecule reacts with the input ssDNA to generate products that do not participate in any further reactions. (b) The gate reaction: the input ssDNA reacts with the seesaw gate molecule to create the output ssDNA and an intermediate molecule. (c) The replenishment reaction: the replenishment strand reacts with the intermediate molecule to release more input ssDNA. This replenishes the concentration of input ssDNA and drives the production of more output ssDNA. (d) The translation reaction: the output ssDNA (domain 3* is not shown for clarity) reacts with the “tagged” nicking enzyme (provided in excess) to produce an “untagged” nicking enzyme. The concentration of untagged nicking enzyme is proportional to the concentration of the output ssDNA.

1. Gentle heat is applied to the large, merged droplet. This allows denaturing of short DNA molecules and creates toeholds.
2. A droplet containing excess probe strands is mixed in to release the input ssDNA fragments. The input ssDNA is separated from the remaining molecules through the application of a magnetic field.
3. A droplet containing the DNA seesaw gate, the threshold DNA (this amount is controlled by the user-defined sigmoid function), and the replenishment DNA (in excess) molecules is mixed with the ssDNA fragments. This applies a sigmoidal activation function on the ssDNA concentration.
4. The ssDNA strands are now mapped to a specific nicking enzyme concentration. For this, a drop containing an excess of the DNA-tagged nicking enzyme is mixed with the ssDNA. After completion of the reaction, the drop is subjected to a magnetic field to pull down the surplus nicking enzyme molecules. The resulting solution contains the nicking enzyme at a concentration proportional to the concentration of the ssDNA strands after the activation function.
5. The droplet containing the nicking enzyme is now mixed with un-nicked DNA strands to prepare the inputs to the next layer of neurons in the ANN.
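Chaining the five steps for one merged droplet gives the following end-to-end sketch (ours; all quantities are idealized, normalized concentrations, and `reaction_pipeline` is a hypothetical name):

```python
def reaction_pipeline(double_nicked_fraction, threshold):
    # steps 1-2: heat plus probe strands release one ssDNA fragment per
    # doubly nicked strand, so the ssDNA fraction equals the dot product
    ssdna = double_nicked_fraction
    # step 3: the seesaw gate applies the sharp sigmoid activation
    activated = 1.0 if ssdna > threshold else 0.0
    # step 4: the translation reaction untags nicking enzyme one-to-one
    nicking_enzyme = activated
    # step 5: this enzyme concentration nicks site A on fresh DNA strands,
    # producing the multiplicand encoding for the next layer
    return nicking_enzyme

print(reaction_pipeline(0.62, threshold=0.5))  # -> 1.0
```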
After each stage of the reaction pipeline is completed, the merged droplets from each row now must be broken down into a collection of \(k\) smaller droplets to be entered column-wise into the microcell array. This is accomplished using a droplet separator which functions by applying a pinching pressure at some regular interval to the channels carrying merged droplets [38]. This results in a series of equally spaced droplets, which can then be placed back into the microcells column-wise.
### Layer-wise Execution of an ANN
Using the \(k\times k\) array of microcells and the S and P modules, an entire layer of an ANN with \(k\) neurons can be implemented. In this array, each column implements a single neuron of the layer, and all the columns collectively form a single layer of the ANN. All microcells in the same column contain an equal concentration of nicked double-stranded DNA molecules, \(A_{1}-A_{k}\). The large droplet resulting from the output of each row's activation function is now divided back into \(k\) originally sized droplets, which are then entered back into the microcell array column-wise, to repeat the computations for the next layer of the ANN, with the new inputs to each neuron held within the microcells.
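Putting the pieces together, one layer can be summarized in software as below (our sketch; the physical row/column orientation is abstracted away, and the activation threshold is a free parameter):

```python
def ann_layer(x, W, threshold=0.5):
    """x: k inputs in [0, 1]; W[i][j]: weight from input i to neuron j."""
    k = len(x)
    out = []
    for j in range(k):                                  # one neuron per column
        products = [x[i] * W[i][j] for i in range(k)]   # microcell multiplies
        merged = sum(products) / k                      # droplet merge (1/k)
        out.append(1.0 if merged > threshold else 0.0)  # reaction pipeline
    return out

W = [[0.9, 0.1], [0.8, 0.2]]
print(ann_layer([1.0, 1.0], W, threshold=0.4))  # -> [1.0, 0.0]
```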
## 6 Results
In this work, we evaluate the proposed DNA Neural Engine while processing a simple ANN using the microfluidics-based DNA computing architecture in terms of latency of processing and area footprint of the device.
The time for execution of a single layer, \(t_{\text{layer}}\) can be modelled as follows:
\[t_{\text{layer}}=t_{\text{transport}}+t_{\text{mult}}+t_{\text{merge}}+t_{ \text{activation}}. \tag{6}\]
And,
\[t_{\text{activation}}=t_{\text{displacement}}+t_{\text{threshold}}+t_{\text{ gate}}+t_{\text{translation}}+t_{\text{nick}}. \tag{7}\]
Here:
1. \(t_{\text{transport}}\) is the time it takes for all droplets to travel throughout the microfluidic channels for all stages in the process. It is assumed that the time taken just for transportation is not the dominant bottleneck, and so it has been estimated to be around 2 minutes.
2. \(t_{\rm mult}\) is the time taken to perform a multiplication. This is the time taken for the second nicking of the strands, the second factor in the multiplication.
3. \(t_{\rm merge}\) is the time taken to merge each of the small droplets per row into a single large droplet, the major step of the dot product summation.
4. \(t_{\rm activation}\) can be broken up into several parts, per Eq. (7): displacement, threshold, gate, translation, and nicking.
5. \(t_{\rm displacement}\) is the time it takes to displace each of the ssDNA fragments from the doubly nicked strands.
6. \(t_{\rm threshold}\) is the time it takes for some input ssDNA strands to react with the threshold DNA.
7. \(t_{\rm gate}\) is the time it takes for the displacement of the output ssDNA alongside the replenishment reaction being used to drive the gate reaction.
8. \(t_{\rm translation}\) is the time it takes for "untagging" the right concentration of nicking enzyme and separating it.
9. \(t_{\rm nick}\) is the time it takes for the untagged nicking enzyme to react with the fresh DNA strands for the resultant node value.
The size of the proposed microfluidic device will scale quadratically with the number of neurons in a layer of the ANN, \(k\), to support parallel execution of all neurons. This is because any layer with \(k\) neurons requires an array of \(k\times k\) microcells. As a pessimistic estimate, we assume each microcell will occupy an area equivalent to 6 channel widths of space in both length and breadth, given their structure with 2 microfluidic channel tracks in both horizontal and vertical directions as well as an empty track for separation between the channels. Each track is assumed to be twice the width of a channel to allow for manufacturability of the system.
The following expression gives the area \(W\) of a microcell array with \(k\times k\) microcells, where \(c\) represents the microfluidic channel width:

\[W=(6kc)^{2}. \tag{8}\]
A pessimistic channel width of \(200\mathrm{\SIUnitSymbolMicro m}\) yields a resulting area of \((6\times 0.2\times k)^{2}=1.44k^{2}\,\mathrm{mm}^{2}\) for the array [39]. For an optimistic estimate, assuming a channel width of \(35\mathrm{\SIUnitSymbolMicro m}\), and a condensed microchamber estimate of \(3\times 3\) channel widths per cell, we get an area estimate of \(0.01k^{2}\,\mathrm{mm}^{2}\) for the microcell array [39]. So depending on the manufacturing technology and fabrication node adopted, the parallelism of the device can be scaled up significantly to accommodate large hidden layers.
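A quick numeric check of Eq. (8) under both fabrication assumptions (our sketch; the function and argument names are ours):

```python
def array_area_mm2(k, channel_width_um, cell_span_channels=6):
    """Area of a k x k microcell array per Eq. (8), in mm^2."""
    c_mm = channel_width_um / 1000.0
    return (cell_span_channels * k * c_mm) ** 2

print(array_area_mm2(28, 200))     # pessimistic: 1.44 * 28^2 = 1128.96 mm^2
print(array_area_mm2(28, 35, 3))   # optimistic: ~0.01 * 28^2 mm^2
```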
Table 1 shows the size and timing parameters of the microfluidic architecture [39]. Here we assume that all neurons of a single layer of the ANN can be accommodated in the device simultaneously. Using these parameters we estimate the area requirements and delay for implementation of a simple ANN capable of classifying MNIST digits [refs]. In Table 2 we show the area and delay of the ANN for various device dimensions. The area estimate considers both a pessimistic and an optimistic dimension of the microfluidic channels and chambers from a fabrication perspective. We have considered multiple configurations (Config-1 to Config-4) corresponding to different device dimensions capable of accommodating varying numbers of microcells. These configurations offer a trade-off between device size and delay in ANN processing. In Config-1, we consider the number of microcells in the microfluidic system as 196 \(\times\) 196 which is capable of accommodating an ANN layer with 196 neurons. Therefore, to accommodate the input layer for the ANN that receives the 28 \(\times\) 28 MNIST frames the computations are serialized by a factor of 4 to compute the whole frame. Similarly, the other configurations require serialization by
factors of 16, 49 and 196, respectively. Besides the input layer, the designed ANN has a single hidden layer of 784 neurons and an output layer with 10 neurons. The hidden layer is serialized with the same factor as the input layer, while the output layer does not need any serialization as it has only 10 neurons, except for Config-4 where it is serialized by a factor of 3. Based on the required serialization factor, and due to the limited number of microcells in a die, the delay of executing a single layer is modified as follows:
\[t_{\text{layer}}=\left(\frac{k_{\text{layer}}}{k_{\text{physical}}}\right)\left(t_{\text{transport}}+t_{\text{mult}}\right)+t_{\text{merge}}+t_{\text{activation}},\]
where \(k_{layer}\) and \(k_{physical}\) are the number of neurons in an ANN layer and the number of neurons that can be computed simultaneously on the microfluidic die respectively. The Python model of the ANN was constrained to consider only positive inputs and weights and yielded an accuracy of 96% in all the configurations as the computation model was not altered in any of them.
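The serialized delay model can be checked with the sketch below; the component times passed in are placeholders of ours, not the paper's estimated parameters:

```python
import math

def serialized_layer_delay_hrs(k_layer, k_physical, t_transport, t_mult,
                               t_merge, t_activation):
    """Delay of one ANN layer (hours) when k_layer neurons are
    time-multiplexed onto k_physical physically available neurons."""
    factor = math.ceil(k_layer / k_physical)
    return factor * (t_transport + t_mult) + t_merge + t_activation

# Config-1-style example: a 784-neuron layer on 196 physical neurons
# is serialized by a factor of 784 / 196 = 4.
print(serialized_layer_delay_hrs(784, 196, t_transport=2 / 60,
                                 t_mult=1.0, t_merge=0.5, t_activation=4.5))
```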
We use a sigmoid activation function in all the layers, implemented with "seesaw" gates [37], as discussed above. This enables signal amplification in the form of a sigmoid function - precisely what we need. Again, the reader is referred to [37] for further details.
We assume that the partial results of the serialized computation can be stored in the DNA solution medium in an external reservoir array [40] that communicates with the microfluidic ANN system through a microfluidic bus interface, where the reservoirs are indexed and routed to the appropriate micro-chamber (corresponding to the appropriate neuron) using the valve system of the microfluidic device.
Note that a configuration that minimizes the computational delay of the ANN for MNIST classification evaluated here would need a system with an array of \(784\times 784\) microcells to accommodate the entire input layer simultaneously. However, that would make the die size unrealistic. Therefore, such a system could consist of multiple smaller microfluidic dies integrated on a microfluidic interposer substrate capable of communicating between the dies enabling a scalable solution [41]. This system with \(784\times 784\) microcells would reduce the delay per layer of the ANN to 8.07 hours.
A distinct advantage of using the DNA-based approach is that the variability of DNA as a computing medium adds an interesting new factor to ANN training. Slight variations in any reaction in the process could be used as a natural source of drift in training. Iterative feedback from executing the model could be used to correct the errors and further train the model indefinitely. This is not something reflected in traditional digital implementations without the artificial introduction of variation or noise between the models.
## 7 Conclusions
Conventional silicon computing systems generally have centralized control with a CPU that can aggregate sensory data, execute arbitrarily complex analysis, and then actuate. For molecular applications, the actions of sensing, processing, and actuating must all be performed _in situ_, in a decentralized way. Our goal in this paper was to devise molecular computing in which data processing occurs in the storage system itself using the natural properties of the molecules, with no need for readout and external electronic processing.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Attribute** & **Value** \\ \hline
Delay of single ANN layer (t\({}_{layer}\)) & 8.07 hrs \\ \hline
Channel Width (Optimistic) & 35\(\upmu\)m \\ \hline
Channel Width (Pessimistic) & 200\(\upmu\)m \\ \hline
Microcell Area (Optimistic) (\(W_{min}\)) & \(0.01mm^{2}\) \\ \hline
Microcell Area (Pessimistic) (\(W_{max}\)) & \(1.44mm^{2}\) \\ \hline
\end{tabular}
\end{table}
Table 2: Summary of the estimated system performance
_In situ_ molecular processing of data is critical from the standpoint of I/O: reading and writing data will always be a bottleneck for molecular systems. Computing "in-memory" is, therefore, a prerequisite.
We are collaborating with an industrial partner, Seagate, on the development of microfluidics technology for DNA storage. This technology will take many years to mature; however, when it does, techniques for computing _on_ the data that is stored in DNA will be needed. While conceptual in nature, this paper demonstrates how such computation could be performed.
In this paper we presented a methodology for implementing complex operations, including ANN computation, on data stored in DNA. The paper weaves together two distinct strands: a conceptual representation of data, on the one hand, and the technology to compute with this representation, on the other. The representation is a fractional encoding on the concentration of nicked DNA strands. With this representation, we can compute a _fraction_ of a _fraction_ - that is, perform multiplication - borrowing ideas from stochastic logic. The "read-out" process is effected by releasing single strands via DNA toehold-mediated strand displacement. The technology is microfluidics. We described the microcell layout used in a pneumatic lab-on-a-chip (LOC) to control mixing. Mixing allows us to compute a fraction of a fraction of a concentration value. Based on this core operation, we presented a full architecture to implement neural computation.
There are a number of practical challenges. One of the concerns, ubiquitous with DNA strand displacement operations, is "leakage", that is to say errors in transforming concentrations. This occurs because we never have 100% of DNA strands participating in designated reactions. Based upon the actual experimental results, we might have to mitigate leakage with error correction methods or adopt so-called "leakless" designs [42].
In future work, we will investigate ambitious applications of small molecule storage and computing. Our goal is to devise _in situ_ computing capabilities, where sensing, computing, and actuating occur at the molecular level, with no interfacing at all with external electronics. The applications include:
* **Image processing and classification**: We will implement a full-scale molecular image classifier using neural network algorithms. Performing the requisite image processing _in situ_, in molecular form, eliminates data transfer bottlenecks. We will quantify the accuracy of image processing in terms of the _signal-to-noise_ ratio and the _structural similarity index_.
* **Machine learning**: We will explore a common data representation for integrating sensing, computing, and actuation _in situ_: hyperdimensional random vectors. Data is represented by long random vectors of integer or Boolean values. We will deploy this paradigm for machine learning, exploiting the randomness of molecular mixtures for encoding, which can naturally map to large vector representations.
|
2305.17583 | On Neural Networks as Infinite Tree-Structured Probabilistic Graphical
Models | Deep neural networks (DNNs) lack the precise semantics and definitive
probabilistic interpretation of probabilistic graphical models (PGMs). In this
paper, we propose an innovative solution by constructing infinite
tree-structured PGMs that correspond exactly to neural networks. Our research
reveals that DNNs, during forward propagation, indeed perform approximations of
PGM inference that are precise in this alternative PGM structure. Not only does
our research complement existing studies that describe neural networks as
kernel machines or infinite-sized Gaussian processes, it also elucidates a more
direct approximation that DNNs make to exact inference in PGMs. Potential
benefits include improved pedagogy and interpretation of DNNs, and algorithms
that can merge the strengths of PGMs and DNNs. | Boyao Li, Alexandar J. Thomson, Matthew M. Engelhard, David Page | 2023-05-27T21:32:28Z | http://arxiv.org/abs/2305.17583v3 | # On Neural Networks as Infinite Tree-Structured Probabilistic Graphical Models
###### Abstract
Deep neural networks (DNNs) lack the precise semantics and definitive probabilistic interpretation of probabilistic graphical models (PGMs). In this paper, we propose an innovative solution by constructing infinite tree-structured PGMs that correspond exactly to neural networks. Our research reveals that DNNs, during forward propagation, indeed perform approximations of PGM inference that are precise in this alternative PGM structure. Not only does our research complement existing studies that describe neural networks as kernel machines or infinite-sized Gaussian processes, it also elucidates a more direct approximation that DNNs make to exact inference in PGMs. Potential benefits include improved pedagogy and interpretation of DNNs, and algorithms that can merge the strengths of PGMs and DNNs.
## 1 Introduction
Deep neural networks (DNNs), including large language models, offer state-of-the-art prediction performance, but they are difficult to interpret due to their complex multilayer structure, large number of latent variables, and the presence of nonlinear activation functions (Buhrmester et al., 2021). To gain a precise statistical interpretation for DNNs, much progress has been made in linking them to probabilistic graphical models (PGMs). Variational autoencoders (VAEs) (Kingma and Welling, 2014) are an early example; more recent examples relate recurrent neural networks (RNNs) with hidden Markov models (HMMs) (Choe et al., 2017) and convolutional neural networks (CNNs) with Gaussian processes (GPs) (Garriga-Alonso et al., 2018). When such a connection is possible, benefits include:
* Clear statistical semantics for a trained DNN model beyond providing the conditional distribution over output variables given input variables. Instead, PGMs provide a joint distribution over all variables including latent variables.
* Ability to make inferences about how evidence of some nodes influences probabilities at others, including how later nodes influence earlier ones, as in Bayes nets or Markov nets.
* Ability to understand weight initializations of DNNs as representing prior distributions and trained DNNs as representing posterior distributions, or ensembles of models.
* Proposal of new algorithms by importing algorithmic approaches from PGMs into DNNs.
In this paper, we establish a correspondence between DNNs and PGMs. Given an arbitrary DNN, we first construct an infinite-width tree-structured PGM. We then demonstrate that during training, the DNN executes approximations of precise inference in the PGM during the forward propagation. We prove our result in the case of sigmoid activations and then indicate how the proof can be expanded to other activation functions, provided that some form of normalization is employed. These findings provide immediate benefits such as those listed above.
This work stands apart from most theoretical analyses of DNNs, which typically view DNNs purely as _function approximators_ and prove theorems about the quality of function approximation. Here we instead show that DNNs may be viewed as statistical models, specifically PGMs. This work is also different from the field of _Bayesian neural networks_, where the goal is to seek and model a probability distribution over neural network parameters. In our work, the neural network itself defines a joint probability distribution over its variables (nodes). Our work therefore is synergistic with Bayesian neural networks but more closely related to older work to learn stochastic neural networks via expectation maximization (EM) (Amari, 1995) or approximate EM (Song et al., 2016).
Although the approach is different, our motivation is similar to that of Dutordoir et al. (2021) and Sun et al. (2020) in their work to link DNNs to deep Gaussian processes (GPs) (Damianou and Lawrence, 2013). By identifying the forward pass of a DNN with the mean of a deep GP layer, they aim to augment DNNs with advantages of GPs, notably the ability to quantify uncertainty over both output and latent nodes. What distinguishes our work is that we make the DNN-PGM approximation explicit and include _all_ sigmoid DNNs, not just unsupervised belief networks or other specific cases.
## 2 Background: Comparison to Bayesian Networks and Markov Networks
Syntactically a Bayesian network (BN) is a directed acyclic graph, like a neural network, whose nodes are random variables. Semantically, a BN represents a full joint probability distribution over its variables as \(P(\vec{v})=\prod_{i}P(v_{i}|pa(v_{i}))\), where \(\vec{v}\) is a complete setting of the variables, and \(pa(v_{i})\) denotes the parents of variable \(v_{i}\). If the conditional probability distributions (CPDs) \(P(v_{i}|pa(v_{i}))\) are all logistic regression models, we refer to the network as a sigmoid BN.
It is well known that given sigmoid activation and a cross-entropy error, training a single neuron by gradient descent is identical to training a logistic regression model. Hence, a neural network under such conditions can be viewed as a "stacked logistic regression model", and also as a Bayesian network with logistic regression CPDs at the nodes. Technically, the sigmoid BN has a distribution over the input variables (variables without parents), whereas the neural network does not, and all nodes are treated as random variables. These distributions are easily added, and distributions of the input variables can be viewed as represented by the joint sample over them in our training set.
A Markov network (MN) syntactically is an undirected graph with potentials \(\phi_{i}\) on its cliques, where each potential gives the relative probabilities of the various settings for its variables (the variables in the clique). Semantically, it defines the full joint distribution on the variables as \(P(\vec{v})=\frac{1}{Z}\prod_{i}\phi_{i}(\vec{v})\) where the partition function \(Z\) is defined as \(\sum_{\vec{v}}\prod_{i}\phi_{i}(\vec{v})\). It is common to use a loglinear form of the same MN, which can be obtained by treating a setting of the variables in a clique as a binary feature \(f_{i}\), and the natural log of the corresponding entry for that setting in the potential for that clique as a weight \(w_{i}\) on that feature; the equivalent definition of the full joint is then \(P(\vec{v})=\frac{1}{Z}e^{\sum_{i}w_{i}f_{i}(\vec{v})}\). For training and prediction at this point the original graph itself is superfluous.
The potentials of an MN may be on subsets of cliques; in that case we simply multiply all potentials on subsets of a clique to derive the potential on the clique itself. If the MN can be expressed entirely as potentials on edges or individual nodes, we call it a "pairwise" MN. An MN whose variables are all binary is a binary MN.
A DNN of any architecture is, like a Bayesian network, a directed acyclic graph. A sigmoid activation can be understood as a logistic model, thus giving a conditional probability distribution for a binary variable given its parents. Thus, there is a natural interpretation of a DNN with sigmoid activations as a Bayesian network (e.g., Bayesian belief network). As reviewed in theorem 1, this Bayes net in turn is equivalent to (represents the same probability distribution as) a Markov network where every edge of weight \(w\) from variable \(A\) to variable \(B\) has a potential of the following form:
\begin{tabular}{|c|c|c|} \hline & \(B\) & \(\neg B\) \\ \hline \(A\) & \(e^{w}\) & \(1\) \\ \hline \(\neg A\) & \(1\) & \(1\) \\ \hline \end{tabular}
**Theorem 1**.: _Let \(N\) be a Bayesian belief network whose underlying undirected graph has treewidth 1, and let \(w_{AB}\) denote the coefficient of variable \(A\) in the logistic CPD for its child \(B\). Let \(M\) be a binary pairwise Markov random field with the same nodes and edges (now undirected) as \(N\). Let \(M\)'s potentials all have the value \(e^{w_{AB}}\) if the nodes \(A\) and \(B\) on either side of edge \(AB\) are true, and the value \(1\) otherwise. \(M\) and \(N\) represent the same joint probability distribution over their nodes._
We don't claim theorem 1 is new, but we provide a proof in Appendix A because it captures several components of common knowledge to which we couldn't find a single reference.
For space reasons, we assume the reader is already familiar with the Variable Elimination (VE) algorithm for computing the probability distribution over any query variable(s) given evidence (known values) at other variables in the network. This algorithm is identical for Bayes nets and Markov nets. It repeatedly multiplies together all the potentials (in a Bayes net, conditional probability distributions) involving the variable to be eliminated, and then sums that variable out of the resulting table, until only the query variable(s) remain. Normalization of the resulting table yields the final answer. VE is an exact inference algorithm, meaning its answers are exactly correct.
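As a toy illustration (ours) of VE's exactness, the sketch below eliminates the variables of a three-node chain Markov network one at a time and matches the result against brute-force enumeration; the edge weights are arbitrary:

```python
import itertools
import numpy as np

w01, w12 = 0.8, -1.2
phi01 = np.array([[1.0, 1.0], [1.0, np.exp(w01)]])  # rows: X0, cols: X1
phi12 = np.array([[1.0, 1.0], [1.0, np.exp(w12)]])  # rows: X1, cols: X2

# VE: sum out X0 into a message over X1, then sum out X1 into one over X2
m1 = phi01.sum(axis=0)                   # message passed to X1
m2 = (m1[:, None] * phi12).sum(axis=0)   # message passed to X2
p_ve = m2 / m2.sum()

# brute force over all 2^3 assignments of (X0, X1, X2)
scores = np.zeros(2)
for x0, x1, x2 in itertools.product([0, 1], repeat=3):
    scores[x2] += phi01[x0, x1] * phi12[x1, x2]
p_bf = scores / scores.sum()
print(p_ve, p_bf)  # the two agree: VE is exact
```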
## 3 The Construction of Tree-structured PGMs
Although both a binary pairwise Markov network (MN) and a Bayesian network (BN) share the same sigmoid functional structure as a DNN with sigmoid activations, it can be shown that the DNN does not in general define the same probability for the output variables given the input variables: forward propagation in the DNN is very fast but yields a different result than VE in the MN or BN, which can be much slower because the inference task is NP-complete. Therefore, if we take the distribution \(\mathcal{D}\) defined by the BN or MN to be the correct meaning of the DNN, the DNN must be using an approximation \(\mathcal{D}^{\prime}\) to \(\mathcal{D}\). Procedurally, the approximation can be shown to be exactly the following: the DNN repeatedly treats the _expectation_ of a variable \(V\), given the values of \(V\)'s parents, as if it were the actual _value_ of \(V\). Thus previously binary variables in the Bayesian network view and binary features in the Markov network view become continuous. While this procedural characterization of the approximation of \(\mathcal{D}^{\prime}\) to \(\mathcal{D}\) is precise, we prefer in the PGM literature to characterize approximate distributions such as \(\mathcal{D}^{\prime}\) with an alternative PGM that precisely corresponds to \(\mathcal{D}^{\prime}\); for example, in some variational methods we may remove edges from a PGM to obtain a simpler PGM in which inference is more efficient. Treewidth-1 (tree-structured or forest-structured) PGMs are among the most desirable because in those exact inference by
VE or other algorithms becomes efficient. We seek to so characterize the DNN approximation here.
To begin, we consider the Bayesian network view of the DNN. Our first step in this construction is to copy the shared parents in the network into separate nodes whose values are not tied. The algorithm for this step is as follows:
1. Consider the observed nodes in the Bayesian network that correspond to the input of the neural network and their outgoing edges.
2. At each node, for each outgoing edge, create a copy of the current node that is only connected to one of the original node's children with that edge. Since these nodes are observed at this step, these copies do all share the same values. The weights on these edges remain the same.
3. Consider then the children of these nodes. Again, for each outgoing edge, make a copy of this node that is only connected to one child with that edge. In this step, for each copied node, we then also copy the entire subgraph formed by all ancestor nodes of the current node. Note that while weights across copies are tied, the values of the copies of any node are not tied. However, since we also copy the subtree of all input and intermediary hidden nodes relevant in the calculation of this node for each copy, the probability of any of these copied nodes being true remains the same across copies.
4. We repeat this process until we have separate trees for each output node in the original deep neural network graph.
This process ultimately creates a graph whose undirected structure is a tree or forest. In the directed structure, trees converge at the output nodes. The probability of any copy of a latent node given the observed input is the same across all the copies, but when sampling, their values may not be the same.
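The following sketch (ours; the DAG shown is a made-up example loosely following Figure 1) unrolls a DAG into this tree view by copying each node, together with its entire ancestor subgraph, once per outgoing edge traversed:

```python
def unroll(node, parents, counter=None):
    """parents: dict mapping node -> list of parents (a DAG).
    Returns (fresh_copy_name, [unrolled ancestor subtrees])."""
    if counter is None:
        counter = {}
    counter[node] = counter.get(node, 0) + 1
    copy_name = f"{node}#{counter[node]}"   # values of copies are not tied
    subtrees = [unroll(p, parents, counter) for p in parents.get(node, [])]
    return (copy_name, subtrees)

dag = {"H1": ["x1", "x2"], "H2": ["x1", "x2"], "y": ["H1", "H2"]}
print(unroll("y", dag))
# ('y#1', [('H1#1', [('x1#1', []), ('x2#1', [])]),
#          ('H2#1', [('x1#2', []), ('x2#2', [])])])
```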
The preceding step alone is still not sufficient to accurately express the deep neural network as a PGM. Recall that in the Markov network view, we have seen that the neural network makes a mean-field approximation where it uses the expected value of a node in place of its actual value. The following additional step in the construction yields this same behavior. This next step of the construction creates \(L\) copies of every non-output node in the network while also copying the entire subtrees of each of these nodes, as was done in step 1. The weight of each copied edge is then set to its original value divided by \(L\). As \(L\) approaches infinity, we show that the gradient in this PGM construction matches the gradient in the neural network exactly.
This second step in the construction can be thought of intuitively by considering the behavior of sampling in the Bayesian network view. Since we make \(L\) copies of each node while also copying the subgraph of its ancestors, these copied nodes all share the same probabilities. As \(L\) grows large, even if we sampled every copied node only once, we would
expect the average value across these \(L\) copies to match the probability of an individual copied node being true. Given that we set the new weights between these copies and their parents as the original weights divided by \(L\), the sum of products (new weights times parent values) yields the average parent value multiplied by the original weight. As \(L\) goes to infinity, we remove sampling bias and the result exactly matches the value of the sigmoid activation function of the neural network, where this expectation in the PGM view is passed repeatedly to the subsequent neurons. The formal proof of this result, based on variable elimination, is found below. There, we show the following:
**Theorem 2**.: _In the PGM construction, as \(L\rightarrow\infty\), \(P(H=1|\vec{x})\rightarrow\sigma(\sum_{j=1}^{M}w_{j}g_{j}+\sum_{i}^{N}\theta_ {i}\sigma(p_{i}))\), for an arbitrary latent node \(H\) in the DNN that has observed parents \(g_{1},...,g_{M}\) and latent parents \(h_{1},...,h_{N}\) that are true with probabilities \(\sigma(p_{1}),...,\sigma(p_{N})\). \(w_{1},...,w_{M}\) and \(\theta_{1},...,\theta_{N}\) are the weights on edges between these nodes and \(H\)._
In order to prove that as \(L\) goes to infinity, this PGM construction does indeed match the neural network's forward propagation, we consider an arbitrary latent node \(H\) with \(N\) unobserved parents \(h_{1},...,h_{N}\) and \(M\) observed parents \(g_{1},...,g_{M}\). The edges between these parents and \(H\) then each have a weight \(\theta_{i}\), \(1\leq i\leq N\), for the unobserved nodes, and \(w_{j}\), \(1\leq j\leq M\), for the observed nodes. The network as a whole has observed evidence \(\vec{x}\). For the rest of this problem we use a Markov network view of the neural network. The potentials for these nodes in the network are as follows:
\begin{tabular}{|c|c|} \hline \(h_{i}\) & \(\neg h_{i}\) \\ \hline \(e^{p_{i}}\) & 1 \\ \hline \end{tabular} Since \(g_{j}\) are observed, their values are found in \(\vec{x}\).

Figure 1: The first step of the PGM construction where shared latent parents are separated into copies along with the subtree of their ancestors. Copies of nodes H1 and H2 are made in this example.
\begin{tabular}{|c|c|c|} \hline & \(H\) & \(\neg H\) \\ \hline \(h_{i}\) & \(e^{\theta_{i}}\) & 1 \\ \hline \(\neg h_{i}\) & 1 & 1 \\ \hline \end{tabular} &
\begin{tabular}{|c|c|c|} \hline & \(H\) & \(\neg H\) \\ \hline \(g_{j}\) & \(e^{w_{j}}\) & 1 \\ \hline \(\neg g_{j}\) & 1 & 1 \\ \hline \end{tabular} Suppose, then, using the second step of our construction, we make \(L\) copies of all the nodes that were parents of \(H\) in the Bayesian network view of the DNN, \(h_{1}^{1},...,h_{1}^{L},...,h_{N}^{1},...,h_{N}^{L}\) and \(g_{1}^{1},...,g_{1}^{L},...,g_{M}^{1},...,g_{M}^{L}\) with weights \(\theta_{1}/L,...,\theta_{N}/L\) and \(w_{1}/L,...,w_{M}/L\) respectively. The potential on \(H\) and these copied nodes is then:
\begin{tabular}{|c|c|c|} \hline & \(H\) & \(\neg H\) \\ \hline \(h_{i}^{k}\) & \(e^{\theta_{i}/L}\) & 1 \\ \hline \(\neg h_{i}^{k}\) & 1 & 1 \\ \hline \end{tabular} &
\begin{tabular}{|c|c|c|} \hline & \(H\) & \(\neg H\) \\ \hline \(g_{j}^{k}\) & \(e^{w_{j}/L}\) & 1 \\ \hline \(\neg g_{j}^{k}\) & 1 & 1 \\ \hline \end{tabular} where \(1\leq i\leq N\), \(1\leq j\leq M\), and \(1\leq k\leq L\). The potentials for each of the copied nodes are the same as the nodes they were originally copied from. We then have that,
\[P(H,h_{1}^{1},...,h_{1}^{L},...,h_{N}^{1},...,h_{N}^{L},g_{1}^{1},...,g_{1}^{L},...,g_{M}^{1},...,g_{M}^{L}|\vec{x})\] \[=\frac{1}{Z}\times\prod_{j=1}^{M}\prod_{k=1}^{L}e^{(w_{j}/L)H\times g_{j}^{k}}\times\prod_{i=1}^{N}\prod_{k=1}^{L}e^{(\theta_{i}/L)H\times h_{i}^{k}}\] \[=\frac{1}{Z}\times e^{\sum_{j=1}^{M}w_{j}g_{j}\times H}\times e^{H(\frac{\theta_{1}}{L}\sum_{k=1}^{L}h_{1}^{k}+...+\frac{\theta_{N}}{L}\sum_{k=1}^{L}h_{N}^{k})}\.\]
Summing out an arbitrary, copied latent node, \(h_{\alpha}^{\beta}\):
\[\sum_{h_{\alpha}^{\beta},\neg h_{\alpha}^{\beta}}P(H,h_{1}^{1},...,h_{1}^{L},...,h_{N}^{1},...,h_{N}^{L}|\vec{x})\] \[=\frac{1}{Z}\times e^{\sum_{j=1}^{M}w_{j}g_{j}\times H}\times\sum_ {h_{\alpha}^{\beta},\neg h_{\alpha}^{\beta}}\prod_{i=1}^{N}\prod_{k=1}^{L}e^{( \theta_{i}/L)H\times h_{i}^{k}}\] \[=\left(\frac{1}{Z}\times e^{\sum_{j=1}^{M}w_{j}g_{j}\times H} \times e^{p_{\alpha}}e^{(\theta_{\alpha}/L)H}\prod_{\begin{subarray}{c}i=1,.., N\\ (i,k)\neq(\alpha,\beta)\end{subarray}}\prod_{k=1,...L}e^{(\theta_{i}/L)H\times h_{i}^{k}}\right.\] \[\left.+\frac{1}{Z}\times e^{\sum_{j=1}^{M}w_{j}g_{j}\times H} \times\prod_{\begin{subarray}{c}i=1,..,N\\ (i,k)\neq(\alpha,\beta)\end{subarray}}\prod_{k=1,...L}e^{(\theta_{i}/L)H \times h_{i}^{k}}\right)\] \[=\left(\frac{1}{Z}\times e^{\sum_{j=1}^{M}w_{j}g_{j}\times H} \times(e^{p_{\alpha}}e^{(\theta_{\alpha}/L)H}+1)\times\prod_{\begin{subarray} {c}i=1,..,N\\ (i,k)\neq(\alpha,\beta)\end{subarray}}\prod_{k=1,...L}e^{(\theta_{i}/L)H\times h _{i}^{k}}\right)\.\]
Summing out all \(L\) copies of \(h_{\alpha}\):
\[\left(\frac{1}{Z}\times e^{\sum_{j=1}^{M}w_{j}g_{j}\times H}\times(e^{p_{ \alpha}}e^{(\theta_{\alpha}/L)H}+1)^{L}\times\prod_{\begin{subarray}{c}i=1,.., N\\ i\neq\alpha\end{subarray}}\prod_{k=1,...L}e^{(\theta_{i}/L)H\times h_{i}^{k}} \right)\.\]
Summing out the \(L\) copies of each latent parent would then yield:
\[\frac{1}{Z}e^{\sum_{j=1}^{M}w_{j}g_{j}\times H}\times\prod_{i}^{N}(e^{p_{i}}e^ {(\theta_{i}/L)H}+1)^{L}\,\]
which, in turn, gives us:
\[P(H=1|\vec{x}) =\frac{e^{\sum_{j=1}^{M}w_{j}g_{j}\times 1}\times\prod_{i}^{N}(e^{p_ {i}}e^{(\theta_{i}/L)\times 1}+1)^{L}}{e^{\sum_{j=1}^{M}w_{j}g_{j}\times 1}\times \prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)\times 1}+1)^{L}+e^{\sum_{j=1}^{M}w_{j}g_{j} \times 0}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)\times 0}+1)^{L}}\] \[=\frac{e^{\sum_{j=1}^{M}w_{j}g_{j}}\times\prod_{i}^{N}(e^{p_{i}}e^ {(\theta_{i}/L)}+1)^{L}}{e^{\sum_{j=1}^{M}w_{j}g_{j}}\times\prod_{i}^{N}(e^{p_ {i}}e^{(\theta_{i}/L)}+1)^{L}+\prod_{i}^{N}(e^{p_{i}}+1)^{L}}\] \[=\left(\frac{1}{1+\frac{\prod_{i}^{N}(e^{p_{i}}+1)^{L}}{e^{\sum_ {j=1}^{M}w_{j}g_{j}}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)}+1)^{L}}} \right)\;.\]
We then consider:
\[\lim_{L\rightarrow\infty}\frac{\prod_{i}^{N}(e^{p_{i}}+1)^{L}}{e ^{\sum_{j=1}^{M}w_{j}g_{j}}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)}+1) ^{L}}\] \[=\lim_{L\rightarrow\infty}e^{-\sum_{j=1}^{M}w_{j}g_{j}+\sum_{i=1 }^{N}L\times\log(e^{p_{i}}+1)-\sum_{i=1}^{N}L\times\log(e^{p_{i}}e^{(\theta_{ i}/L)}+1)}\;,\]
\[\lim_{L\rightarrow\infty}-\sum_{j=1}^{M}w_{j}g_{j}+\sum_{i=1}^{N} L\times\log(e^{p_{i}}+1)-\sum_{i=1}^{N}L\times\log(e^{p_{i}}e^{(\theta_{i}/L)}+1)\] \[=-\sum_{j=1}^{M}w_{j}g_{j}+\lim_{L\rightarrow\infty}\frac{\sum_ {i=1}^{N}\left(\log(e^{p_{i}}+1)-\log(e^{p_{i}}e^{(\theta_{i}/L)}+1)\right)}{ 1/L}\]
This limit clearly has the indeterminate form of \(\frac{0}{0}\). Consider, then the following change of variables, \(S=1/L\), and subsequent use of l'Hospital's rule.
\[-\sum_{j=1}^{M}w_{j}g_{j}+\lim_{S\to 0^{+}}\frac{\sum_{i=1}^{N} \left(\log(e^{p_{i}}+1)-\log(e^{p_{i}}e^{(\theta_{i}S)}+1)\right)}{S}\] \[=-\sum_{j=1}^{M}w_{j}g_{j}+\lim_{S\to 0^{+}}\frac{\frac{\partial}{ \partial S}\sum_{i=1}^{N}\left(\log(e^{p_{i}}+1)-\log(e^{p_{i}}e^{(\theta_{i}S) }+1)\right)}{\frac{\partial}{\partial S}S}\] \[=-\sum_{j=1}^{M}w_{j}g_{j}+\lim_{S\to 0^{+}}\frac{\sum_{i=1}^{N} \frac{-1}{e^{p_{i}}e^{(\theta_{i}S)}+1}\times e^{p_{i}}e^{\theta_{i}S}\times \theta_{i}}{1}\] \[=-\sum_{j=1}^{M}w_{j}g_{j}+\lim_{S\to 0^{+}}\sum_{i=1}^{N} \frac{-e^{p_{i}}e^{\theta_{i}S}\times\theta_{i}}{e^{p_{i}}e^{\theta_{i}S}+1}\] \[=-\sum_{j=1}^{M}w_{j}g_{j}-\sum_{i}^{N}\frac{e^{p_{i}}}{e^{p_{i} }+1}\times\theta_{i}=-\sum_{j=1}^{M}w_{j}g_{j}-\sum_{i}^{N}\sigma(p_{i})\theta _{i}\;.\]
Therefore,
\[\lim_{L\rightarrow\infty}\frac{\prod_{i}^{N}(e^{p_{i}}+1)^{L}}{e^{\sum_{j=1}^ {M}w_{j}g_{j}}\times\prod_{i}^{N}(e^{p_{i}}e^{(\theta_{i}/L)}+1)^{L}}=e^{- \sum_{j=1}^{M}w_{j}g_{j}-\sum_{i}^{N}\sigma(p_{i})\theta_{i}}\;,\]
and,
\[\lim_{L\rightarrow\infty}P(H=1|\vec{x})=\frac{1}{1+e^{-\sum_{j=1}^{M}w_{j}g_{ j}-\sum_{i}^{N}\sigma(p_{i})\theta_{i}}}=\sigma(\sum_{j=1}^{M}w_{j}g_{j}+ \sum_{i}^{N}\sigma(p_{i})\theta_{i})\;.\]
This is exactly the result of the deep neural network. Suppose then that \(z\) is a hidden node whose parents in the Bayesian network view are all observed. By our PGM construction, we have that the potential for \(z\) true is \(e^{\sum_{x\in\vec{x}}w_{zx}x}\) where \(w_{zx}\) is the weight between nodes \(z\) and \(x\), and, for \(z\) false, 1. This clearly matches the deep neural network's sigmoid activation in this 'first layer'. Consider, then, the nodes whose parents in the Bayesian network view are either one of these first layer hidden nodes, or an observed node. By our PGM construction, we have shown that so long as nodes in the previous layer are either observed or have sigmoid conditional probabilities, as is the case here, the conditional probability of any nodes that immediately follow will also have a sigmoid conditional probability. Repeating this argument up to the output nodes gives us that the conditional probability in this PGM construction and the activation values of the DNN match for any layer in the DNN.
Consider then the view of the DNN where each layer is defined such that the values of the activation functions for all neurons in a given layer can be calculated using neurons
of the preceding layers. The DNN is structured then such that the first layer depends only on observed evidence and all layers can be calculated sequentially from that starting point. We have already established that nodes with only observed parents have this sigmoid conditional probability. Given the structure of the DNN and Theorem 2, we then have that the corresponding layers in our PGM construction of the DNN can be similarly computed sequentially from that first layer and have conditional probabilities that exactly match the DNN's activations.
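A numeric sanity check (ours; the function names are hypothetical) of Theorem 2: evaluating the finite-\(L\) conditional from the derivation above shows it approaching the DNN's sigmoid value as \(L\) grows; the parent weights and probabilities are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_h_given_x(L, w, g, theta, p):
    """Finite-L value of P(H=1|x) from the variable-elimination derivation."""
    log_num = np.dot(w, g) + L * np.log(np.exp(p) * np.exp(theta / L) + 1).sum()
    log_rest = L * np.log(np.exp(p) + 1).sum()
    return 1.0 / (1.0 + np.exp(log_rest - log_num))

w, g = np.array([0.7, -1.1]), np.array([1.0, 1.0])       # observed parents
theta, p = np.array([2.0, -0.5]), np.array([0.3, 1.2])   # latent parents
target = sigmoid(np.dot(w, g) + np.dot(sigmoid(p), theta))
for L in (1, 10, 100, 10_000):
    print(L, p_h_given_x(L, w, g, theta, p), target)  # converges to target
```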
## 4 Implications and Extensions
We are not claiming that one should actually carry out the PGM construction used in the preceding section, since that PGM is infinite. Rather, its contribution is to let us understand precisely the approximation that SGD in a DNN is making; although a DNN itself can be understood as a BN or MN, SGD is not using that BN or MN but rather the infinite tree-structured one. While that PGM is infinite, it is built using the original as a template in a straightforward fashion, and hence is easy to understand. Beyond this contribution to comprehensibility and pedagogy, are there other applications?
One application is an ability to use standard PGM algorithms such as Markov chain Monte Carlo (MCMC) to sample latent variables given observed values of input and output variables, such as for producing confidence intervals or understanding relationships among variables. One could already do so using Gibbs sampling in the BN or MN directly represented by the DNN itself (which we will call the "direct PGM"), but then one wouldn't be using the BN or MN that SGD in the DNN actually used during training. For that, our result has shown that one instead needs to use Gibbs sampling in the infinite tree-structured PGM, which is impractical. Nevertheless, for any variable \(V\) in the original DNN, on each iteration a Gibbs sampler takes infinitely many samples of \(V\) given infinitely many samples of each of the members of \(V\)'s Markov blanket in the original DNN. By treating the variables of the original DNN as continuous, with their values approximating their sampled probabilities in the Gibbs sampler, we can instead apply Hamiltonian Monte Carlo or other MCMC methods for continuous variables in the much smaller DNN structure. We explore this approach empirically rather than theoretically in the next section. Another, related application of our result is that one could further fine-tune the trained DNN using other PGM algorithms, such as contrastive divergence. We also explore this use in the next section.
One might object that most results in this paper use sigmoid activation functions. Nair and Hinton showed that rectified linear units (ReLU) can be thought of as a combination of infinitely many sigmoid units with varying biases (Nair and Hinton, 2010). Hence our result in the previous section can be extended to ReLU activations by the same argument. More generally, with any non-negative activation function that can yield values greater than one, while our BN argument no longer holds, the MN version of the argument can be extended. An MN already requires normalization to represent a probability distribution. While Batch
Normalization and Layer Normalization typically are motivated procedurally, to keep nodes from "saturating," and consequently to keep gradients from "exploding" or "vanishing," as the names suggest, they also can be used to bring variables into the range \([0,1]\) and hence to being considered as probabilities. Consider an idealized variant of these that begins by normalizing all the values coming from a node \(h\) of a neural network, over a given minibatch, to sum to \(1.0\); the argument can be extended to a set of \(h\) and all its siblings in a layer (or other portion of the network structure) assumed to share their properties. It is easily shown that if the parents of any node \(h\) in the neural network provide to \(h\) approximate probabilities that those parent variables are true in the distribution defined by the Markov network given the inputs, then \(h\) in turn provides to its children an approximate probability that \(h\) is true in the distribution defined by the Markov network given the inputs. Use of Batch or Layer Normalization is only approximate and hence adds an additional source of approximation to the result of the preceding section. Detailed consideration of other activation functions is left for further work; in the next section we return to the sigmoid case.
## 5 Alternative Training Algorithms: The Sigmoid Case
To illustrate the potential utility of the infinite tree-structured PGM view of a DNN, in this section we pursue one of its potential implications in depth. We have already noted we can view forward propagation in an all-sigmoid DNN as exact inference in a tree-structured BN, such that the CPD of each hidden variable is a logistic regression. In other words, each hidden node is a Bernoulli random variable, with parameter \(\lambda\) being a sigmoid activation (i.e. logistic function) applied to a linear function of the parent nodes. This view suggests alternative learning algorithms such as contrastive divergence (CD) that use sampling methods for high-dimensional binary random variables such as Gibbs sampling. Doing so has a natural advantage over SGD, which samples the values of the hidden variables using only the evidence at the input values. Instead, Gibbs sampling uses _all_ the available evidence, both at input and output variables. MCMC has now advanced beyond Gibbs sampling with methods such as Hamiltonian Monte Carlo (HMC); however, HMC requires continuous variables, whereas sampling the binary hidden variables yields values in \(\{0,1\}\) rather than in \([0,1]\).
To use HMC as proposed, we define hidden variables using the recently-developed _continuous_ Bernoulli distribution (Loaiza-Ganem and Cunningham, 2019), where the single parameter \(\lambda\) of the distribution is defined as in the Bernoulli case. Whereas the Bernoulli density is unnormalized when viewed (improperly) as a density over \((0,1)\), the continuous Bernoulli distribution is a proper density. Somewhat counter-intuitively, the expected value of this distribution is _not_ equal to the parameter \(\lambda\), which has important implications for inference. This option leads to learning algorithms that are able to take advantage of sampling methods effective for high-dimensional continuous random variables, including HMC.
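For reference, a sketch (ours) of the continuous Bernoulli log-density; the normalizing constant follows Loaiza-Ganem and Cunningham (2019), with a numerical guard of our own near \(\lambda=0.5\):

```python
import numpy as np

def cb_log_norm_const(lam, eps=1e-6):
    """log C(lam) for the continuous Bernoulli; C(0.5) = 2 by continuity."""
    lam = np.clip(lam, eps, 1 - eps)
    near_half = np.abs(lam - 0.5) < 1e-4
    safe = np.where(near_half, 0.4, lam)  # dummy value, result discarded
    c = 2.0 * np.arctanh(1.0 - 2.0 * safe) / (1.0 - 2.0 * safe)
    return np.where(near_half, np.log(2.0), np.log(c))

def cb_log_pdf(x, lam):
    return (cb_log_norm_const(lam)
            + x * np.log(lam) + (1.0 - x) * np.log(1.0 - lam))

print(cb_log_pdf(0.7, 0.9))  # log-density of CB(0.9) at x = 0.7
```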
With respect to this BN, whose variables correspond exactly with those of the DNN, we can see that our previous CD-1 algorithm, which is standard SGD for DNNs, samples
settings of the input variables, computes expectations on all the remaining variables (latent and output variables), and adjusts the weights (CPDs) toward maximizing the conditional log likelihood, or minimizing cross-entropy. If instead of one gradient step we continued to convergence, the resulting algorithm would _almost_ be the well-known Expectation Maximization (EM) algorithm for training any BN's parameters from a data set with missing or hidden variables, given the BN structure. We say _almost_ because this analysis reveals one important "error" or shortcoming in the SGD algorithm: in the E step where we compute the expected values of the latent variables, it _entirely ignores_ the values of the output variables. In other words, we can view SGD as an approximation to EM in which evidence from output variables is ignored during the E step, and therefore gradients must be backpropagated across _all_ layers to account for this evidence when updating the weights in the M step. This strategy is effective in later layers, which are closer to the output evidence, but highly ineffective in earlier layers. This limitation has been recognized as the vanishing gradient problem and addressed through ad-hoc initialization and pre-training strategies as well as Nesterov momentum. Nesterov momentum can be seen as "looking ahead" one step of SGD when computing the gradient, which partially incorporates evidence at output variables into the expectations at the hidden nodes.
More precisely and formally correcting this shortcoming is not easy: computing the correct expectation is NP-complete, and the most obvious algorithm always requires time exponential in the number of latent variables. In such situations in PGMs, researchers have replaced the E step with MCMC sampling (e.g., Gibbs) (Hinton et al., 2006; Song et al., 2016). However, running these MCMC chains to convergence is impractical, therefore it is common in practice to take a small number \(k\) of steps in the chain between gradient steps, which again gives rise to the CD-1 or CD-\(k\) algorithm. However, a different training algorithm for DNNs, motivated by the natural correspondence between DNNs and BNs described above - and which correctly accounts for evidence from the output variables through proper EM updates - converges to the correct answer in fewer training epochs compared to SGD.
The Continuous Bernoulli Bayes net (CBBN) is similar to the sigmoid BN (_i.e._, "stacked logistic regression"), but with logistic regression CPDs replaced by their continuous Bernoulli analogues. Equivalently, it is a feedforward, stochastic neural network in which hidden variables are continuous Bernoulli distributed. Consider a Bayesian network composed of input variables \(\mathbf{x}=\mathbf{h}_{0}\), a sequence of layers of hidden variables \(\mathbf{h}_{1},...,\mathbf{h}_{L}\), and output variables \(\mathbf{y}\). Each pair of consecutive layers forms a bipartite subgraph of the network as a whole, and the variables \(\mathbf{h}_{i}=(h_{i1},...,h_{iM_{i}})\) follow a multivariate continuous Bernoulli distribution with parameters \(\mathbf{\lambda}_{i}=(\lambda_{i1},...,\lambda_{iM_{i}})\) that depend on variables in the previous layer \(\mathbf{h}_{i-1}\) as follows:
\[h_{ij}\sim\mathcal{CB}(\lambda_{ij}),\,\text{where}\,\,\mathbf{\lambda}_{i}= \sigma(\mathbf{W}_{i-1}\mathbf{h}_{i-1}+\mathbf{b}_{i-1}). \tag{1}\]
\(\sigma:\mathbb{R}\rightarrow(0,1)\) is a non-linearity - here the logistic function - that is applied element-wise, and \(\mathbf{\theta}_{i}=(\mathbf{W}_{i},\mathbf{b}_{i})\) are parameters to be learned. For a complete setting of the variables
\(\{\mathbf{x},\mathbf{h},\mathbf{y}\}\), where \(\mathbf{h}=\{\mathbf{h}_{1},...,\mathbf{h}_{L}\}\), and parameters \(\mathbf{\theta}=\{\mathbf{\theta}_{i}\}_{i=0}^{L}\), the likelihood \(p(\mathbf{y},\mathbf{h}|\mathbf{x};\mathbf{\theta})\) may be decomposed as:
\[p(\mathbf{y},\mathbf{h}|\mathbf{x};\mathbf{\theta})=p(\mathbf{y}|\mathbf{h}_{L};\mathbf{ \theta}_{L})\cdot\prod_{i=1}^{L}\prod_{j=1}^{M_{i}}p_{\mathcal{CB}}(h_{ij}| \lambda_{ij}(\mathbf{h}_{i-1};\mathbf{\theta}_{i-1})), \tag{2}\]
where \(p_{\mathcal{CB}}(\cdot|\cdot)\) denotes the continuous Bernoulli density, and a specific form for \(p(\mathbf{y}|\mathbf{h}_{L},\mathbf{\theta}_{L})\) has been omitted to allow variability in the output variables. In our experiments, \(\mathbf{y}\) is a Bernoulli or categorical random variable parameterized via the logistic or softmax function, respectively.
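A sketch (ours) of Eq. (1): a stochastic forward pass sampling each hidden layer from a continuous Bernoulli via its inverse CDF (valid for \(\lambda\neq 0.5\)); the weights below are random placeholders, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_cb(lam):
    """Inverse-CDF sample of CB(lam), assuming lam != 0.5 elementwise."""
    u = rng.uniform(size=lam.shape)
    return np.log(u * (2 * lam - 1) / (1 - lam) + 1) / np.log(lam / (1 - lam))

def cbbn_forward(x, weights, biases):
    """Sample h_1, ..., h_L given inputs x, per Eq. (1)."""
    h, hs = x, []
    for W, b in zip(weights, biases):
        h = sample_cb(sigmoid(W @ h + b))
        hs.append(h)
    return hs

W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
print(cbbn_forward(np.array([0.2, 0.8, 0.5]), [W1, W2], [b1, b2]))
```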
### Learning via Contrastive Divergence with Hamiltonian Monte Carlo Sampling
Let \(\mathbf{h}^{(0)},\mathbf{h}^{(1)},\mathbf{h}^{(2)},...\) denote a chain of MCMC samples of the complete setting of hidden variables in our CBBN. As previously noted, we allow hidden variables \(h_{ij}\in(0,1)\) for \(i\in\{1,...,L\}\) and \(j\in\{1,...,M_{i}\}\), and use Hamiltonian Monte Carlo (HMC) to generate the next state due to its fast convergence. Since HMC samples are unbounded, we sample the _logit_ associated with \(h_{ij}\in(0,1)\), i.e. \(\sigma^{-1}(h_{ij})\in(-\infty,\infty)\), rather than sampling the \(h_{ij}\) directly.
The HMC trajectories are defined by Hamilton's Equations:
\[\frac{d\rho_{i}}{dt}=\frac{\partial H}{\partial\mu_{i}} \frac{d\mu_{i}}{dt}=-\frac{\partial H}{\partial\rho_{i}} \tag{3}\]
where \(\rho_{i},\mu_{i}\) are the \(i\)th component of the position and momentum vector. The Hamiltonian \(H\) is
\[H=H(\mathbf{\rho},\mathbf{\mu})=U(\mathbf{\rho})+\frac{1}{2}\mathbf{\mu}^{T}M^{- 1}\mathbf{\mu} \tag{4}\]
where \(M\) is a positive definite and symmetric mass matrix, and \(M^{-1}\) could represent a diagonal estimate of the covariance. Defining the position \(\mathbf{\rho}=\mathbf{h}\), the complete set of hidden variables of our CBBN, we have that the potential energy \(U\) is the negative log-likelihood associated with equation (2):
\[U(\mathbf{h})=-\log p(\mathbf{y},\mathbf{h}|\mathbf{x};\mathbf{\theta})=-\log p(\mathbf{ y}|\mathbf{h}_{L};\mathbf{\theta}_{L})-\sum_{i=1}^{L}\sum_{j=1}^{M_{i}}\log p_{ \mathcal{CB}}(h_{ij}|\lambda_{ij}(\mathbf{h}_{i-1};\mathbf{\theta}_{i-1})). \tag{5}\]
We set the leapfrog step count \(L>0\) and step size \(\Delta t>0\). A description of the HMC trajectories (_i.e._, evolution of \(\mathbf{h}\)) is provided in the supplementary material.
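For concreteness, a generic leapfrog integrator consistent with Eqs. (3)-(4) under an identity mass matrix (our simplification; the paper's exact trajectory settings are in its supplementary material):

```python
import numpy as np

def leapfrog(rho, mu, grad_U, n_steps, dt):
    """Integrate Hamilton's equations with M = I; grad_U is dU/d(rho)."""
    mu = mu - 0.5 * dt * grad_U(rho)           # initial half step (momentum)
    for i in range(n_steps):
        rho = rho + dt * mu                    # full step (position)
        step = dt if i < n_steps - 1 else 0.5 * dt
        mu = mu - step * grad_U(rho)           # momentum (half step at end)
    return rho, mu

# toy check: U(rho) = 0.5 * rho^2, so grad_U(rho) = rho (standard normal)
rho, mu = np.array([1.0]), np.array([0.5])
print(leapfrog(rho, mu, lambda r: r, n_steps=10, dt=0.1))
```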
The initial state of the chain \(\mathbf{h}^{(0)}\) is drawn with a simple forward pass through the network, ignoring the output variables; in other words, we have \(h_{ij}^{(0)}\sim\mathcal{CB}(\sigma(\mathbf{W}_{i-1}^{(0)}\mathbf{h}_{i-1}^{(0)}+\mathbf{ b}_{i-1}^{(0)})_{j})\) for
\(i\in\{1,...L\}\), where \(\mathbf{h}_{0}=\mathbf{x}\) are the input variables, and the values of \(\mathbf{W}_{i}^{(0)}\) and \(\mathbf{b}_{i}^{(0)}\) are manually set or drawn from a standard normal or uniform distribution. We update \(\mathbf{h}\) through a number of burn-in steps before beginning to update our parameters to ensure that \(\mathbf{h}\) is first consistent with evidence from the output variables. After \(k\) steps, corresponding to CD-\(k\), we define the loss based on equation (2):
\[\mathcal{L}(\mathbf{\theta}^{(n)})=-\log p(\mathbf{y},\mathbf{h}|\mathbf{x};\mathbf{ \theta}^{(n)}). \tag{6}\]
We then apply the following gradients to update the parameters \(\{\mathbf{W}_{i}^{(n)}\}_{i=0}^{L}\) and \(\{\mathbf{b}_{i}^{(n)}\}_{i=0}^{L}\):
\[\mathbf{W}_{i}^{(n+1)}=\mathbf{W}_{i}^{(n)}-\eta\frac{\partial\mathcal{L}}{\partial\mathbf{W}_{i}^{(n)}},\qquad\mathbf{b}_{i}^{(n+1)}=\mathbf{b}_{i}^{(n)}-\eta\frac{\partial\mathcal{L}}{\partial\mathbf{b}_{i}^{(n)}}, \tag{7}\]
where \(\eta\) is the learning rate. Algorithm 1 (see supplementary material) summarizes this procedure.
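Algorithm 1 itself is in the supplementary material; as a rough sketch of the shape of one CD-\(k\) update, with all problem-specific pieces passed in as callables (the function and argument names here are ours, not the paper's):

```python
import torch

def cd_hmc_step(params, x, y, forward_sample, hmc_transition,
                neg_log_joint, k=1, eta=0.01):
    """One CD-k parameter update, equations (6)-(7), with HMC over hidden logits.

    forward_sample : draws h^(0) by a forward pass through the CB layers, ignoring y
    hmc_transition : applies one HMC move to the hidden logits given (params, x, y)
    neg_log_joint  : computes -log p(y, h | x; theta), equation (6)
    """
    h = forward_sample(params, x).detach()     # initial state h^(0) of the chain
    for _ in range(k):                         # k MCMC transitions -> CD-k
        h = hmc_transition(h, params, x, y).detach()
    loss = neg_log_joint(params, x, h, y)
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= eta * g                       # gradient step, equation (7)
    return params
```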
### Experimental Results
The preceding algorithm shows the potential of the DNN-as-PGM view to generate new algorithms and approaches, but does it work? Taking the view of the neural net as a BN, we start with the hard problem of learning the exclusive-or function, but with the "right" prior (up to magnitude) on the BN parameters. This algorithm is also CD-k - here we use CD-1 - but under this alternative correspondence between PGM and neural network. We therefore call it CD-HMC to distinguish it from the earlier CD-k algorithm that is identical to SGD. As shown in Table 1, CD-HMC converges in far fewer training epochs than SGD using the same minibatch size, and in less total training time, even though each epoch takes somewhat longer. But what if we did not know the correct prior?
Using a hidden layer of thirty variables makes it highly probable that there exists a pair of latent variables that, together with the inputs and output, have their weights randomly initialized to the correct prior. The empirical results bear this out: results are similar to experiments using the correct prior (Table 1). If more than one combination of two such latent variables exists, the resulting trained model becomes an ensemble of accurate posteriors. The argument scales to more complex target functions by using more hidden layers in the neural network. These empirical results support the following simple view of why overparameterization works: more parameters, from more latent units and layers, provide a higher probability of having the correct prior embedded somewhere. And if it is embedded more than once, all the better, since the final logistic regression layer simply learns a weighted vote among the various possible ensemble components. Further empirical support for this view can be found elsewhere Frankle and Carbin (2019), but for the first time here we view initialization as encoding multiple priors. This in turn suggests viewing the trained DNN as encoding a posterior distribution, from which we can make inferences about uncertainty.
In addition to the experiments on learning the exclusive-or function, we also evaluate our method on two other datasets. One is a synthetic dataset generated using the _Make Moons_ generator from _sklearn_ (1k data points, 30% noise). The other is MNIST, where we randomly choose 2k images of the digits 0 and 1. Using networks with one hidden layer of 32 or 128 hidden variables and sigmoid activation, we test CD-HMC on both datasets and compare it to SGD and CD-Gibbs, in which we return to logistic regression CPDs and use Gibbs sampling. The training and test datasets are split in an 80:20 ratio, and each model is trained for 400 epochs with weights updated by gradient descent (learning rate = 0.01). For CD-HMC and CD-Gibbs, we draw 500 "burn-in" samples for each data point before the first weight update. The results illustrate that CD-HMC has accuracy similar to SGD and CD-Gibbs on the test set and converges in fewer epochs on the _Make Moons_ dataset (Figure 2). Table 2 also shows that CD-HMC has a higher test loss than SGD under all settings of networks and datasets. This is likely due to variability in sampling, in contrast to SGD, which is deterministic.
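For concreteness, the synthetic data preparation can be reproduced along the following lines (the random seeds are our assumption; the paper does not report them):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

# 1k points with 30% noise, split 80:20 for training and testing
X, y = make_moons(n_samples=1000, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
```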
As the size of the dataset increases, the training time of CD-HMC and CD-Gibbs becomes considerably longer than that of SGD, due to the high cost of drawing high-dimensional samples. However, HMC samples all hidden nodes together, whereas CD-Gibbs samples them one by one; as a result, CD-HMC is much faster.
As these results suggest, and consistent with prior work on learning in sigmoid BNs, it is difficult (or perhaps impossible) to match the computational efficiency of SGD while maintaining the view of DNNs as PGMs. However, a hybrid approach - for example, using SGD (CD-1) as pre-training and then applying CD-HMC - preserves the benefits of understanding the DNN as a PGM, or even as a distribution (ensemble) of PGMs, while also allowing exact inference and quantifying uncertainty.

| Network | Algorithm | Accuracy | Training Steps (Epochs) | Training time (s) |
| --- | --- | --- | --- | --- |
| A | SGD (BP) | 100% | 1693 | 19.8 |
| A | CD-HMC | 100% | 263 | 14.0 |
| B | SGD (BP) | 100% | 4000 | 40 |
| B | CD-HMC | 100% | 445 | 22.8 |

Table 1: Comparing SGD (CD-1) and the new algorithm CD-HMC on learning exclusive-or using 30 hidden variables, given the correct initialization (A) or a random initialization covering all possible priors (B).

| Network | Algorithm | _Make Moons_ Test Accuracy | _Make Moons_ Test Loss | MNIST Test Accuracy | MNIST Test Loss |
| --- | --- | --- | --- | --- | --- |
| 32 nodes | SGD (BP) | 80% | 0.3753 | 100% | 0.0021 |
| 32 nodes | CD-HMC | 80.5% | 0.5131 | 99.25% | 0.2606 |
| 32 nodes | CD-Gibbs | 80.5% | 0.6425 | 100% | 0.4663 |
| 128 nodes | SGD (BP) | 80.5% | 0.3751 | 100% | 0.0017 |
| 128 nodes | CD-HMC | 80.5% | 0.4822 | 99.75% | 0.1815 |
| 128 nodes | CD-Gibbs | 81% | 0.6408 | 99.75% | 0.4733 |

Table 2: SGD, CD-HMC and CD-Gibbs performance on synthetic data (_Make Moons_, 1k samples, noise=0.3) and MNIST (2k images, \(\{0,1\}\) only) with 32 or 128 hidden units and random initialization.
## 6 Limitations and Future Work
Limitations of the present work and directions for future work include establishing formal results about how closely batch- and layer-normalization approximate Markov network normalization when using non-sigmoid activations, establishing theoretical results relating HMC in the neural network to Gibbs sampling in the large treewidth-1 Markov network, and obtaining empirical results for HMC with non-sigmoid activations. Also of great interest is comparing HMC and other PGM algorithms to Shapley values, Integrated Gradients, and other approaches for assessing the relationship of some latent variables to each other or to inputs and/or outputs in a neural network. Finally, the large treewidth-1 PGM is a substantial approximation to the direct PGM of a DNN. After training the DNN and hence the large treewidth-1 model, can we fine-tune with a less-approximate approach, perhaps based on loopy belief propagation or other approximate algorithms often used in PGMs?

Figure 2: Test accuracy of SGD, CD-HMC and CD-Gibbs on synthetic data (_Make Moons_, 1k, noise=0.3) and MNIST (2k images, \(\{0,1\}\) only).
## Acknowledgements
The authors would like to thank Sayan Mukherjee, Samuel I. Berchuck, Youngsoo Baek, Andrew Allen, William H. Majoros, David Carlson, Juan Restrepo and Houssam Nassif for their helpful discussion about the theoretical work. We are also grateful to Mengyue Han and Jinyi Zhou for their technical support.
This project is in part supported by Impact of Genomic Variation on Function (IGVF) Consortium of the National Institutes of Health via grant U01HG011967.
|
2310.17053 | Invariant Physics-Informed Neural Networks for Ordinary Differential
Equations | Physics-informed neural networks have emerged as a prominent new method for
solving differential equations. While conceptually straightforward, they often
suffer training difficulties that lead to relatively large discretization
errors or the failure to obtain correct solutions. In this paper we introduce
invariant physics-informed neural networks for ordinary differential equations
that admit a finite-dimensional group of Lie point symmetries. Using the method
of equivariant moving frames, a differential equation is invariantized to
obtain a, generally, simpler equation in the space of differential invariants.
A solution to the invariantized equation is then mapped back to a solution of
the original differential equation by solving the reconstruction equations for
the left moving frame. The invariantized differential equation together with
the reconstruction equations are solved using a physics-informed neural
network, and form what we call an invariant physics-informed neural network. We
illustrate the method with several examples, all of which considerably
outperform standard non-invariant physics-informed neural networks. | Shivam Arora, Alex Bihlo, Francis Valiquette | 2023-10-25T23:26:51Z | http://arxiv.org/abs/2310.17053v2 | # Invariant Physics-Informed Neural Networks for Ordinary Differential Equations
###### Abstract
Physics-informed neural networks have emerged as a prominent new method for solving differential equations. While conceptually straightforward, they often suffer training difficulties that lead to relatively large discretization errors or the failure to obtain correct solutions. In this paper we introduce _invariant physics-informed neural networks_ for ordinary differential equations that admit a finite-dimensional group of Lie point symmetries. Using the method of equivariant moving frames, a differential equation is invariantized to obtain a, generally, simpler equation in the space of differential invariants. A solution to the invariantized equation is then mapped back to a solution of the original differential equation by solving the reconstruction equations for the left moving frame. The invariantized differential equation together with the reconstruction equations are solved using a physics-informed neural network, and form what we call an invariant physics-informed neural network. We illustrate the method with several examples, all of which considerably outperform standard non-invariant physics-informed neural networks.
## 1 Introduction
Physics-informed neural networks (PINN) are an emerging method for solving differential equations using deep learning, [23, 30]. The main idea behind this method is to train a neural network as an approximate solution interpolant for a system of differential equations. This is done by minimizing a loss function that incorporates both the differential equations and any associated initial and/or
boundary conditions. The method has a particular elegance as the derivatives in the differential equations can be computed using automatic differentiation rather than numerical discretization, which greatly simplifies the solution procedure, especially when solving differential equations on arbitrary surfaces, [35].
The ease of the discretization procedure in physics-informed neural networks, however, comes at the price of numerous training difficulties, and numerical solutions that are either not particularly accurate, or fail to converge at all to the true solution of the given differential equation. Since training a physics-informed neural network constitutes a non-convex optimization problem, an analysis of failure modes when physics-informed neural networks fail to train accurately is a non-trivial endeavour. This is why several modified training methodologies have been proposed, which include domain decomposition strategies, [18], modified loss functions, [38], and custom optimization, [3]. While all of these strategies, sometimes substantially, improve upon vanilla physics-informed neural networks, none of these modified approaches completely overcome all the inherent training difficulties.
Here we propose a new approach for training physics-informed neural networks, which relies on using Lie point symmetries of differential equations and the method of equivariant moving frames to simplify the form of the differential equations that have to be solved. This is accomplished by first projecting the differential equation onto the space of differential invariants to produce an _invariantized differential equation_. The solution to the invariantized equation is then mapped back to the solution of the original equation by solving a system of first order differential equations for the left moving frame, called _reconstruction equations_. The invariant physics-informed neural network architecture proposed in this paper consists of simultaneously solving the system of equations consisting of the invariantized differential equation and the reconstruction equations using a physics-informed neural network. The method proposed is entirely algorithmic, and can be implemented for any system of differential equations that is strongly invariant under the action of a group of Lie point symmetries. Since almost all equations of physical relevance admit a non-trivial group of Lie point symmetries, the proposed method is potentially a viable path for improving physics-informed neural networks for many real-world applications. The idea of projecting a differential equation into the space of invariants and then reconstructing its solution is reminiscent of the recent work [37], where the authors consider Hamiltonian systems with symmetries, although the tools used in our paper and in [37] to achieve the desired goals are very different. Moreover, in our approach we do not assume that our equations have an underlying symplectic structure.
To simplify the theoretical exposition, we focus on the case of ordinary differential equations in this paper. We show using several examples that the proposed approach substantially improves upon the numerical results achievable with vanilla physics-informed neural networks. Applications to partial differential equations will be considered elsewhere.
The paper is organized as follows. We first review relevant work on physics-informed neural networks and symmetry preserving numerical methods in Section 2. In Section 3 we introduce the methods of equivariant moving frames and review how it can be used to solve ordinary differential equations that admit a group of Lie point symmetries. Building on Section 3 we introduce a version of invariant physics-informed neural network in Section 4. We illustrate our method with several examples in Section 5. The examples show that our proposed invariant physics-informed neural
network formulation can yield better numerical results than its non-invariant version. A short summary and discussion about potential future research avenues concludes the paper in Section 6.
## 2 Previous work
Physics-informed neural networks were first proposed in [23], and later popularized through the work of Raissi et al., [30]. The main idea behind physics-informed neural networks is to train a deep neural network to directly approximate the solution to a system of differential equations. This is done by defining a loss function that incorporates the given system of equations, along with any relevant initial and/or boundary conditions. Crucially, this turns training physics-informed neural networks into a multi-task, non-convex optimization problem that can be challenging to minimize, [22]. There have been several solutions proposed to overcome the training difficulties and improve the generalization capabilities of physics-informed neural networks. These include modified loss functions, [26; 38], meta-learned optimization, [3], domain decomposition methods, [6; 18], and the use of operator-based methods, [11; 24].
The concepts of symmetries and transformation groups have also received considerable attention in the machine learning community. Notably, the equivariance of convolutional operations with respect to spatial translations has been identified as a crucial ingredient for the success of convolutional neural networks, [13]. The generalization of this observation for other types of layers of neural networks and other transformation groups has become a prolific subfield of deep learning since. For example, see [16] for some recent results.
Here we do not consider the problem of endowing a neural network with equivariance properties but rather investigate the question whether a better formulation of a given differential equation can help physics-informed neural networks better learn a solution. As we will be using symmetries of differential equations for this re-formulation, our approach falls within the framework of geometric numerical integration. The problem of symmetry-preserving numerical schemes, in other words the problem of designing discretization methods for differential equations that preserve the symmetries of a given differential equation, has been studied extensively over the past several decades, see [14; 34; 39] for some early work on the topic. Invariant discretization schemes have since been proposed for finite difference, finite volume, finite elements and meshless methods, [2; 4; 5; 7; 8; 12; 19; 28; 31; 32].
## 3 Method
In this section we introduce the theoretical foundations on which the invariant physics-informed neural network framework is based. In order to fix some notation, we begin by recalling certain well-known results pertaining to symmetries of differential equations, and refer the reader to [9; 10; 17; 27] for a more thorough exposition. Within this field, the use of moving frames to solve differential equations admitting symmetries is not as well-known. Therefore, the main purpose of this section is to introduce this solution procedure. In contrast to the approach proposed in [25], we avoid the introduction of computational variables. All computations are based on the differential invariants of the prolonged group action, which results in fewer differential equations. Our approach is a simplified version of the algorithm presented in [36], which deals with partial differential equations admitting infinite-dimensional symmetry Lie pseudo-groups.
As mentioned in the introduction, in this paper we limit ourselves to the case of ordinary differential equations.
### Invariant differential equations
Let \(M\) be a \((q+1)\)-dimensional manifold with \(q\geq 1\). Given a one-dimensional curve \(C\subset M\), we introduce the local coordinates \(z=(t,u)=(t,u^{1},\ldots,u^{q})\) on \(M\) so that the curve \(C\) is locally specified by the graph of a function \(C=\{(t,f(t))\}\). Accordingly, the \(n\)-th order jet space \(\mathrm{J}^{(n)}\) is locally parametrized by \(z^{(n)}=(t,u^{(n)})\), where \(u^{(n)}\) denotes all the derivatives \(u^{\alpha}_{j}=u^{\alpha}_{t^{j}}\) of order \(0\leq j\leq n\), with \(\alpha=1,\ldots,q\).
Let \(G\) be an \(r\)-dimensional Lie group (locally) acting on \(M\):
\[(T,U)=Z=g\cdot z=g\cdot(t,u),\qquad\text{where}\qquad g\in G. \tag{1}\]
The group transformation (1) induces an action on curves \(C\subset M\), which prolongs to the jet space \(\mathrm{J}^{(n)}\):
\[Z^{(n)}=g\cdot z^{(n)}. \tag{2}\]
Coordinate expressions for the prolonged action (2) are obtained by applying the implicit total derivative operator
\[\mathrm{D}_{T}=\frac{1}{\mathrm{D}_{t}(T)}\,\mathrm{D}_{t},\qquad\text{where} \qquad\mathrm{D}_{t}=\frac{\partial}{\partial t}+\sum_{j=0}^{\infty}\sum_{ \alpha=1}^{q}\,u^{\alpha}_{j+1}\frac{\partial}{\partial u^{\alpha}_{j}}\]
denotes the standard total derivative operator, to the transformed dependent variables \(U^{\alpha}\):
\[U^{\alpha}_{j}=U^{\alpha}_{T^{j}}=\mathrm{D}^{j}_{T}(U^{\alpha}),\qquad\alpha= 1,\ldots,q,\qquad j\geq 0. \tag{3}\]
In the following we use the notation \(\Delta(z^{(n)})=\Delta(t,u^{(n)})=0\) to denote a system of differential equations, and use the index notation \(\Delta_{i}(z^{(n)})=0\), \(i=1,\ldots,l\), to label each equation in \(\Delta(z^{(n)})\). Also, a differential equation \(\Delta(z^{(n)})=0\), can either be a single equation or represent a system of differential equations.
**Definition 1**.: A nondegenerate1 ordinary differential equation \(\Delta(z^{(n)})=0\) is said to be _strongly invariant_ under the prolonged action of a connected local Lie group of transformations \(G\) if and only if

\[\Delta(g\cdot z^{(n)})=0\qquad\text{for all}\qquad g\in G\]

near the identity element.

Footnote 1: A differential equation is nondegenerate if at every point in its solution space it is both locally solvable and of maximal rank, [27].
**Remark 2**.: Strong invariance is more restrictive than the usual notion of symmetry, where invariance is only required to hold on the solution space. In the following, we require strong invariance to guarantee that our differential equation is an invariant function.
Invariance is usually stated in terms of the infinitesimal generators of the group action. To this end, let
\[\mathbf{v}_{\kappa}=\xi_{\kappa}(t,u)\frac{\partial}{\partial t}+\sum_{\alpha= 1}^{q}\phi^{\alpha}_{\kappa}(t,u)\frac{\partial}{\partial u^{\alpha}},\qquad \kappa=1,\ldots,r, \tag{4}\]
be a basis of infinitesimal generators for the group action \(G\). The prolongation of the vector fields (4), induced from the prolonged action (3), is given by
\[\mathbf{v}_{\kappa}^{(n)}=\xi_{\kappa}(t,u)\frac{\partial}{\partial t}+\sum_{j=0}^{n}\sum_{\alpha=1}^{q}\phi_{\kappa}^{\alpha,j}(t,u^{(j)})\frac{\partial}{\partial u_{j}^{\alpha}},\qquad\kappa=1,\ldots,r,\]
where the prolonged coefficients are computed using the standard prolongation formula
\[\phi_{\kappa}^{\alpha,j}=\mathrm{D}_{t}^{j}(\phi_{\kappa}^{\alpha}-\xi_{ \kappa}u_{1}^{\alpha})+\xi_{\kappa}u_{j+1}^{\alpha},\qquad\kappa=1,\ldots,r, \qquad\alpha=1,\ldots,q,\qquad 0\leq j\leq n.\]
**Proposition 3**.: A nondegenerate ordinary differential equation \(\Delta(z^{(n)})=0\) is strongly invariant under the prolonged action of a connected local Lie group of transformations \(G\) if and only if
\[\mathbf{v}_{\kappa}^{(n)}[\Delta_{i}(z^{(n)})]=0,\qquad\kappa=1,\ldots,r, \qquad i=1,\ldots,l,\]
where \(\mathbf{v}_{1},\ldots,\mathbf{v}_{r}\) is a basis of infinitesimal generators for the group of transformations \(G\).
**Remark 4**.: As one may observe, we do not include the initial conditions
\[u^{(n-1)}(t_{0})=u_{0}^{(n-1)} \tag{5}\]
when discussing the symmetry of the differential equation \(\Delta(z^{(n)})=\Delta(t,u^{(n)})=0\). This is customary when studying symmetries of differential equations. Of course, the initial conditions are necessary to select a particular solution and when implementing numerical simulations.
### Invariantization
Given a nondegenerate differential equation \(\Delta(z^{(n)})=0\) strongly invariant under the prolonged action of an \(r\)-dimensional Lie group \(G\) acting regularly on \(\mathrm{J}^{(n)}\), we now explain how to use the method of equivariant moving frames to "project" the differential equation onto the space of differential invariants. For the theoretical foundations of the method of equivariant moving frames, we refer the reader to the foundational papers [15, 21] and the textbook [25].
**Definition 5**.: A Lie group \(G\) acting smoothly on a \(\mathrm{J}^{(n)}\) is said to act _freely_ if the isotropy group
\[G_{z^{(n)}}=\{g\in G\,|\,g\cdot z^{(n)}=z^{(n)}\}=\{e\}\]
at the point \(z^{(n)}\) is trivial for all \(z^{(n)}\in\mathrm{J}^{(n)}\). The Lie group \(G\) is said to act _locally freely_ if \(G_{z^{(n)}}\) is a discrete subgroup of \(G\) for all \(z^{(n)}\in\mathrm{J}^{(n)}\).
**Remark 6**.: More generally we can restrict Definition 5, and the subsequent considerations, to a \(G\)-invariant submanifold \(\mathcal{V}^{(n)}\subset\mathrm{J}^{(n)}\). To simplify the discussion, we assume \(\mathcal{V}^{(n)}=\mathrm{J}^{(n)}\).
**Definition 7**.: A _right moving frame_ is a \(G\)-equivariant map \(\rho\colon\mathrm{J}^{(n)}\to G\) such that
\[\rho(g\cdot z^{(n)})=g\cdot\rho(z^{(n)})\]
for all \(g\in G\) where the prolonged action is defined. Taking the group inverse of a right moving frame yields the left moving frame
\[\overline{\rho}(z^{(n)})=\rho(z^{(n)})^{-1}\]
satisfying the equivariance condition
\[\overline{\rho}(g\cdot z^{(n)})=g\cdot\overline{\rho}(z^{(n)}).\]
**Theorem 8**.: A moving frame exists in the neighborhood of a point \(z^{(n)}\in\mathrm{J}^{(n)}\) provided the prolonged action of \(G\) on \(\mathrm{J}^{(n)}\) is (locally) free and regular.
A moving frame is obtained by selecting a cross-section \(\mathcal{K}\subset\mathrm{J}^{(n)}\) to the orbits of the prolonged action. Keeping with most applications, and to simplify the exposition, assume \(\mathcal{K}\) is a coordinate cross-section obtained by setting \(r\) coordinates of the jet \(z^{(n)}\) to constant values:
\[z^{a_{\kappa}}=c^{\kappa},\qquad\kappa=1,\dots,r. \tag{6}\]
Solving the normalization equations
\[Z^{a_{\kappa}}=c^{\kappa},\qquad\kappa=1,\dots,r,\]
for the group parameters, yields a right moving frame \(\rho\). Given a moving frame, there is a systemic procedure for constructing differential invariant functions.
**Definition 9**.: Let \(\rho\colon\mathrm{J}^{(n)}\to G\) be a right moving frame. The invariantization of the differential function \(F\colon\mathrm{J}^{(n)}\to\mathbb{R}\) is the differential invariant function
\[\iota(F)(z^{(n)})=F(\rho(z^{(n)})\cdot z^{(n)}).\]
In particular, invariantization of the coordinate jet functions
\[\iota(z^{(n)})=\rho(z^{(n)})\cdot z^{(n)}\]
yields differential invariants that can be used as coordinates on the cross-section \(\mathcal{K}\). The invariantizations of the coordinates used to define the cross-section in (6) are constant,
\[\iota(z^{a_{\kappa}})=c^{\kappa},\qquad\kappa=1,\dots,r,\]
and are called _phantom invariants_. The remaining invariantized coordinates are called _normalized invariants_. In light of Theorem 5.32 in [29], assume there are \(q+1\) normalized invariants
\[H,I^{1},\,\dots,I^{q}, \tag{7}\]
such that locally the invariants
\[I^{\alpha}=I^{\alpha}(H),\qquad\alpha=1,\dots,q,\]
are independent functions of the invariant \(H\), and generate the algebra of differential invariants. This means that any differential invariant can be expressed in terms of (7) and their invariant derivatives with respect to \(\mathrm{D}_{H}\). In the following we let \(I^{(n)}\) denote the derivatives of \(I=(I^{1},\dots,I^{q})\) with respect to \(H\), up to order \(n\).
Assuming the differential equation \(\Delta(z^{(n)})=0\) is strongly invariant and its solutions are transverse to the prolonged action, this equation, once invariantized, will yield a differential equation in the space of invariants
\[\iota[\Delta(t,u^{(n)})]=\Delta_{\text{Inv}}(H,I^{(k)})=0,\qquad\text{where} \qquad k\leq n. \tag{8}\]
Initial conditions for (8) are obtained by invariantizing (5) to obtain
\[I^{(k-1)}(H_{0})=I_{0}^{(k-1)}. \tag{9}\]
**Example 10**.: To illustrate the concepts introduced thus far, we use the Schwarz equation
\[\frac{u_{ttt}}{u_{t}}-\frac{3}{2}\bigg{(}\frac{u_{tt}}{u_{t}}\bigg{)}^{2}=F(t) \tag{10}\]
as our running example. This equation admits a three-dimensional Lie group of point transformations given by
\[T=t,\qquad U=\frac{\alpha u+\beta}{\gamma u+\delta},\qquad\text{where}\qquad g =\begin{bmatrix}\alpha&\beta\\ \gamma&\delta\end{bmatrix}\in\text{SL}(2,\mathbb{R}), \tag{11}\]
so that \(\alpha\delta-\beta\gamma=1\). A cross-section to the prolonged action
\[U_{T} =\text{D}_{t}(U)=\frac{u_{t}}{(\gamma u+\delta)^{2}},\] \[U_{TT} =\text{D}_{t}(U_{T})=\frac{u_{tt}}{(\gamma u+\delta)^{2}}-\frac{2 \gamma u_{t}^{2}}{(\gamma u+\delta)^{3}}, \tag{12}\] \[U_{TTT} =\text{D}_{t}(U_{TT})=\frac{u_{ttt}}{(\gamma u+\delta)^{2}}- \frac{6\gamma u_{t}u_{tt}}{(\gamma u+\delta)^{3}}+\frac{6\gamma^{2}u_{t}^{3}}{ (\gamma u+\delta)^{4}},\]
is given by
\[\mathcal{K}=\{u=0,\,u_{t}=\sigma,\,u_{tt}=0\}\subset\mathcal{V}^{(n)}\subset \text{J}^{(n)}, \tag{13}\]
where \(\sigma=\text{sign}(u_{t})\), and \(\mathcal{V}^{(n)}=\{z\in\text{J}^{(n)}\,|\,u_{t}\neq 0\}\) with \(n\geq 2\). Solving the normalization equations
\[U=0,\qquad U_{T}=\sigma,\qquad U_{TT}=0, \tag{14}\]
together with the unitary constraint \(\alpha\delta-\beta\gamma=1\), we obtain the right moving frame
\[\alpha=\pm\frac{1}{\sqrt{|u_{t}|}},\qquad\beta=\mp\frac{u}{\sqrt{|u_{t}|}}, \qquad\gamma=\pm\frac{u_{tt}}{2|u_{t}|^{3/2}},\qquad\delta=\pm\frac{2u_{t}^{2 }-uu_{tt}}{2|u_{t}|^{3/2}}, \tag{15}\]
where the sign ambiguity comes from solving the normalization \(U_{T}=\sigma\), which involves the quadratic term \((\gamma u+\delta)^{2}\). Invariantizing the third order derivative \(u_{ttt}\) produces the differential invariant
\[\iota(u_{ttt})=\frac{u_{ttt}}{(\gamma u+\delta)^{2}}-\frac{6\gamma u_{t}u_{tt }}{(\gamma u+\delta)^{3}}+\frac{6\gamma^{2}u_{t}^{3}}{(\gamma u+\delta)^{4}} \bigg{|}_{(15)}=\sigma\bigg{(}\frac{u_{ttt}}{u_{t}}-\frac{3}{2}\bigg{(}\frac{ u_{tt}}{u_{t}}\bigg{)}^{2}\bigg{)}. \tag{16}\]
In terms of the general theory previously introduced, we have the invariants
\[H=t,\qquad I=\frac{u_{ttt}}{u_{t}}-\frac{3}{2}\bigg{(}\frac{u_{tt}}{u_{t}} \bigg{)}^{2}. \tag{17}\]
Since the independent variable \(t\) is an invariant, instead of using \(H\), we use \(t\) in the following computations. The invariantization of Schwarz equation (10) then yields the algebraic equation
\[I=F(t). \tag{18}\]
Since the prolonged action is transitive on the fibers of each component \(\{(t,u,u_{t},u_{tt})\,|\,u_{t}>0\}\cup\{(t,u,u_{t},u_{tt})\,|\,u_{t}<0\}= \mathcal{V}^{(2)}\), any initial conditions
\[u(t_{0})=u_{0},\qquad u_{t}(t_{0})=u_{t}^{0},\qquad u_{tt}(t_{0})=u_{tt}^{0},\]
are mapped, under invariantization, to the identities
\[0=0,\qquad\sigma=\sigma,\qquad 0=0.\]
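As a sanity check of (16), the computation can be verified symbolically; the following is our own verification script (not part of the paper), assuming \(u_{t}>0\) so that \(\sigma=1\):

```python
import sympy as sp

u, utt, uttt = sp.symbols('u u_tt u_ttt')
ut = sp.symbols('u_t', positive=True)            # assume u_t > 0, i.e. sigma = 1

# right moving frame (15) with the upper signs
gamma = utt / (2 * ut**sp.Rational(3, 2))
delta = (2 * ut**2 - u * utt) / (2 * ut**sp.Rational(3, 2))
den = gamma * u + delta                          # simplifies to sqrt(u_t)

# prolonged action (12) on u_ttt, evaluated on the moving frame
Uttt = uttt/den**2 - 6*gamma*ut*utt/den**3 + 6*gamma**2*ut**3/den**4
schwarzian = uttt/ut - sp.Rational(3, 2) * (utt/ut)**2
assert sp.simplify(Uttt - schwarzian) == 0       # iota(u_ttt) is the Schwarzian
```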
### Recurrence relations
Using the recurrence relations we now explain how the invariantized equation (8) can be derived symbolically, without requiring the coordinate expressions for the moving frame \(\rho\) or the invariants \((H,I)\). The key observation is that the invariantization map \(\iota\) and the exterior differential do not, in general, commute
\[\iota\circ\mathrm{d}\neq\mathrm{d}\circ\iota.\]
The extent to which these two operations fail to commute is encapsulated in the recurrence relations. To state these equations we need to introduce the (contact) invariant one-form
\[\varpi=\iota(\mathrm{d}t)=\rho^{*}(\mathrm{D}_{t}(T))\,\mathrm{d}t,\]
which comes from invariantizing the horizontal one-form \(\mathrm{d}t\), see [21] for more details.
Given a Lie group \(G\), let \(g\in G\) be represented by a faithful matrix. Then the _right Maurer-Cartan form_ is given by
\[\mu=\mathrm{d}g\cdot g^{-1}. \tag{19}\]
The pull-back of the Maurer-Cartan form (19) by a right moving frame \(\rho\) yields the invariant matrix
\[\nu=\mathrm{d}\rho\cdot\rho^{-1}=\begin{bmatrix}I_{ij}\end{bmatrix}\varpi, \tag{20}\]
where the invariants \(I_{ij}\) are called Maurer-Cartan invariants.
**Proposition 11**.: Let \(F\colon\mathrm{J}^{(n)}\to\mathbb{R}\) be a differential function. The recurrence relation for the invariantization map \(\iota\) is
\[\mathrm{d}[\iota(F)]=\iota[\mathrm{d}F]+\sum_{\kappa=1}^{r}\iota[\mathbf{v}_{ \kappa}^{(n)}(F)]\,\nu^{\kappa}, \tag{21}\]
where \(\nu^{1},\ldots,\nu^{r}\) is a basis of normalized Maurer-Cartan forms extracted from (20).
Substituting for \(F\) in (21) the jet coordinates (6) specifying the coordinate cross-section \(\mathcal{K}\) leads to \(r\) linear equations for the normalized Maurer-Cartan forms \(\nu^{1},\ldots,\nu^{r}\). Solving those equations and substituting the result back in (21) yields a symbolic expression for the differential of any invariantized differential function \(F\), without requiring the coordinate expressions for the moving frame \(\rho\).
**Example 12**.: Continuing Example 10, a basis of infinitesimal generators for the group action (11) is provided by
\[\mathbf{v}_{1}=\frac{\partial}{\partial u},\qquad\mathbf{v}_{2}=u\frac{ \partial}{\partial u},\qquad\mathbf{v}_{3}=u^{2}\frac{\partial}{\partial u}.\]
The prolongation of those vector fields, up to order 2, is given by
\[\mathbf{v}_{1}^{(2)} =\frac{\partial}{\partial u},\] \[\mathbf{v}_{2}^{(2)} =u\frac{\partial}{\partial u}+u_{t}\frac{\partial}{\partial u_{t} }+u_{tt}\frac{\partial}{\partial u_{tt}},\] \[\mathbf{v}_{3}^{(2)} =u^{2}\frac{\partial}{\partial u}+2uu_{t}\frac{\partial}{\partial u _{t}}+2(u_{t}^{2}+uu_{tt})\frac{\partial}{\partial u_{tt}}.\]
Applying the recurrence relation (21) to \(t\), \(u\), \(u_{t}\), and \(u_{tt}\) yields
\[\begin{split}\mathrm{d}[\iota(t)]&=\varpi,\\ \mathrm{d}[\iota(u)]&=\iota(u_{t})\varpi+\nu^{1},\\ \mathrm{d}[\iota(u_{t})]&=\iota(u_{tt})\varpi+\iota(u_{t})\nu^{2}+2\iota(u)\iota(u_{t})\nu^{3},\\ \mathrm{d}[\iota(u_{tt})]&=\iota(u_{ttt})\varpi+\iota(u_{tt})\nu^{2}+2[\iota(u_{t})^{2}+\iota(u)\iota(u_{tt})]\nu^{3}.\end{split} \tag{22}\]
Recalling the cross-section (13) and the invariants (16), (17), we make the substitutions \(H=\iota(t)=t\), \(\iota(u)=0\), \(\iota(u_{t})=\sigma\), \(\iota(u_{tt})=0\), \(\iota(u_{ttt})=\sigma I\) into (22) and obtain
\[\mathrm{d}t=\varpi,\qquad 0=\varpi+\nu^{1},\qquad 0=\nu^{2},\qquad 0=I\sigma \varpi+2\nu^{3}.\]
Solving for the normalized Maurer-Cartan forms yields
\[\nu^{1}=-\sigma\,\varpi,\qquad\nu^{2}=0,\qquad\nu^{3}=-\frac{I\sigma}{2}\varpi.\]
In matrix form we have that
\[\nu=\begin{bmatrix}0&-\sigma\\ \frac{1}{2}\sigma I&0\end{bmatrix}\varpi=\begin{bmatrix}0&-\sigma\\ \frac{1}{2}\sigma F(t)&0\end{bmatrix}\varpi,\]
where we used the algebraic relationship (18) originating from the invariance of Schwarz' equation (10).
### Reconstruction
Let \(I(H)\) be a solution to the invariantized differential equation (8) with initial conditions (9). In this section we explain how to reconstruct the solution to the original equation \(\Delta(t,u^{(n)})=0\) with initial conditions (5). To do so, we introduce the reconstruction equations for the left moving frame \(\overline{\rho}=\rho^{-1}\):
\[\mathrm{d}\overline{\rho}=-\overline{\rho}\cdot\mathrm{d}\rho\cdot\overline{ \rho}=-\overline{\rho}\,\nu, \tag{23}\]
where \(\nu\) is the normalized Maurer-Cartan form introduced in (20). As we have seen in Section 3.3, the invariantized Maurer-Cartan matrix \(\nu\) can be obtained symbolically using the recurrence relations for the phantom invariants. Since \(\nu\) is invariant, it can be expressed in terms of \(H\), the solution \(I(H)\), and its derivatives. Thus, equation (23) yields a first order system of differential equations for the group parameters expressed in the independent variable \(H\). Integrating (23), we obtain the left moving frame that sends the invariant curve \((H,I(H))\) to the original solution
\[(t(H),u(H))=\overline{\rho}(H)\cdot\iota(t,u)(H). \tag{24}\]
Assuming \(t_{H}>0\), the initial conditions to the reconstruction equations (23) are given by
\[\overline{\rho}(H_{0})=\overline{\rho}_{0}\qquad\text{such that}\qquad \overline{\rho}_{0}\cdot\iota(t_{0},u_{0}^{(n-1)})=(t_{0},u_{0}^{(n-1)}). \tag{25}\]
If \(t_{H}<0\), one can always reparametrize the solution so that the derivative becomes positive.
The solution (24) is a parametric curve with the invariant \(H\) serving as the parameter. From a numerical perspective, this is sufficient to graph the solution. Though we note that by inverting \(t=t(H)\) to express the invariant \(H=H(t)\) in terms of \(t\), we can recover the solution as a function of \(t\):
\[u=u(H(t)).\]
**Example 13**.: The left moving frame
\[\overline{\rho}=\begin{bmatrix}\alpha&\beta\\ \gamma&\delta\end{bmatrix}\quad\in\quad\text{SL}(2,\mathbb{R})\]
that will send the solution (18) to the original function \(u(t)\) is a solution to the reconstruction equations
\[\begin{bmatrix}\alpha_{t}&\beta_{t}\\ \gamma_{t}&\delta_{t}\end{bmatrix}=-\begin{bmatrix}\alpha&\beta\\ \gamma&\delta\end{bmatrix}\begin{bmatrix}0&-\sigma\\ \frac{1}{2}\sigma F(t)&0\end{bmatrix}=\begin{bmatrix}\alpha&\beta\\ \gamma&\delta\end{bmatrix}\begin{bmatrix}0&\sigma\\ -\frac{1}{2}\sigma F(t)&0\end{bmatrix}, \tag{26a}\]
with initial conditions
\[\delta_{0}=\pm\frac{1}{\sqrt{|u_{t}^{0}|}},\qquad\beta_{0}=\pm\frac{u_{0}}{\sqrt{|u_{t}^{0}|}},\qquad\gamma_{0}=\mp\frac{u_{tt}^{0}}{2(u_{t}^{0})^{3/2}},\qquad\alpha_{0}=\pm\sqrt{|u_{t}^{0}|}\mp\frac{u_{0}u_{tt}^{0}}{2(|u_{t}^{0}|)^{3/2}}. \tag{26b}\]
Then, the solution to Schwarz' equation (10) is
\[u(t)=\overline{\rho}\cdot 0=\frac{\beta}{\delta}. \tag{27}\]
### Summary
Let us summarize the algorithm for solving an ordinary differential equation \(\Delta(t,u^{(n)})=0\) admitting a group of Lie point transformations \(G\) using the method of moving frames.
1. Select a cross-section \(\mathcal{K}\) to the prolonged action.
2. Choose \(q+1\) invariants \(H\), \(I^{1},\ldots,I^{q}\) from \(\iota(t,u^{(n)})\), that generate the algebra of differential invariants, and assume \(I^{1}(H),\ldots,I^{q}(H)\) are functions of \(H\).
3. Invariantize the differential equation \(\Delta(t,u^{(n)})=0\) and use the recurrence relation (21) to write the result in terms of \(H\) and \(I^{(k)}\) to obtain the equation \(\Delta_{\text{Inv}}(H,I^{(k)})=0\).
4. Solve the equation \(\Delta_{\text{Inv}}(H,I^{(k)})=0\) subject to the initial conditions (9).
5. A parametric solution to the original equation \(\Delta(t,u^{(n)})=0\) is given by \(\overline{\rho}(H)\cdot\iota(t,u)(H)\), where the left moving frame \(\overline{\rho}(H)\) is a solution of the reconstruction equation (23) subject to the initial conditions (25).
## 4 Invariant physics-informed neural networks
Before introducing our invariant physics-informed neural network, we recall the definition of the standard physics-informed loss function that needs to be minimized when solving ordinary differential equations. To this end, assume we want to solve the ordinary differential equation \(\Delta(t,u^{(n)})=0\) subject to the initial conditions \(u^{(n-1)}(t_{0})=u_{0}^{(n-1)}\) on the interval \([t_{0},t_{f}]\).
First we introduce the collocation points \(\{t_{i}\}_{i=0}^{\ell}\) sampled randomly over the interval \([t_{0},t_{f}]\) with \(t_{0}<t_{1}<\cdots<t_{\ell}=t_{f}\). Then, a neural network of the form \(u_{\boldsymbol{\theta}}(t)=\mathcal{N}_{\boldsymbol{\theta}}(t)\), parameterized
by the parameter vector \(\mathbf{\theta}\), is trained to approximate the solution of the differential equation, i.e. \(u_{\mathbf{\theta}}(t)\approx u(t)\), by minimizing with respect to \(\mathbf{\theta}\) the physics-informed loss function
\[\mathcal{L}(\mathbf{\theta})=\mathcal{L}_{\Delta}(\mathbf{\theta})+\alpha\,\mathcal{L}_{\text{I.C.}}(\mathbf{\theta}), \tag{28a}\]
where
\[\mathcal{L}_{\Delta}(\mathbf{\theta})=\sum_{i=0}^{\ell}\left[\Delta(t_{i},u_{\mathbf{\theta}}^{(n)}(t_{i}))\right]^{2} \tag{28b}\]
is the _differential equation loss_ and
\[\mathcal{L}_{\text{I.C.}}(\mathbf{\theta})=\left[u_{\mathbf{\theta}}^{(n-1)}(t_{0})-u_{0}^{(n-1)}\right]^{2} \tag{28c}\]
is the _initial condition loss_, and \(\alpha\) is a hyper-parameter to re-scale the importance of both loss functions. We note that the differential equation loss is the sum of squared residuals of the differential equation evaluated at the collocation points \(\left\{t_{i}\right\}_{i=0}^{\ell}\subset[t_{0},t_{f}]\) over which the numerical solution is sought. The initial condition loss is likewise the squared error between the true initial conditions and the initial conditions approximated by the neural network. We note in passing that the initial conditions could alternatively be enforced as a hard constraint in the neural network, [11, 23], in which case the physics-informed loss function would reduce to \(\mathcal{L}_{\Delta}(\mathbf{\theta})\) only.
The physics-informed loss function (28) is minimized using gradient descent, usually using the Adam optimizer, [20], but also more elaborate optimizers can be employed, [3]. The particular elegance of the method of physics-informed neural networks lies in the fact that the derivatives \(u_{\mathbf{\theta}}^{(n)}\) of the neural network solution approximation are computed using _automatic differentiation_, [1], which is built into all modern deep learning frameworks such as JAX, TensorFlow, or PyTorch.
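To fix ideas, here is a minimal standard-PINN sketch in PyTorch for a first-order ODE, using the logistic equation of Section 5 as the target; the network size, learning rate, \(\alpha=1\) and the equispaced (rather than random) collocation points are illustrative assumptions:

```python
import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 40), nn.Tanh(),
                    nn.Linear(40, 40), nn.Tanh(), nn.Linear(40, 1))
t = torch.linspace(0.0, math.pi, 128).reshape(-1, 1).requires_grad_(True)
t0, u0 = torch.zeros(1, 1), torch.tensor([[0.5]])
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(5000):
    opt.zero_grad()
    u = net(t)
    u_t = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                              create_graph=True)[0]
    loss_eq = ((u_t - u * (1 - u)) ** 2).sum()   # residual of u_t = u(1 - u)
    loss_ic = ((net(t0) - u0) ** 2).sum()        # initial condition u(0) = 0.5
    loss = loss_eq + loss_ic                     # alpha = 1
    loss.backward()
    opt.step()
```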
Similar to the above standard physics-informed neural network, an invariant physics-informed neural network is a feed-forward neural network approximating the solution of the invariantized differential equation and the reconstruction equations for the left moving frame. In other words, the loss function to be minimized is defined using symmetries of the given differential equation.

Figure 1: Solving a differential equation using moving frames.
In light of the five step process given in Section 3.5, assume the invariantized equation \(\Delta_{\text{Inv}}(H,I^{(k)})=0\) and the reconstruction equation \(\mathrm{d}\overline{\rho}=-\overline{\rho}\nu\) have been derived. Introduce an interval of integration \([H_{0},H_{f}]\) over which the numerical solution is sought, and consider the collocation points \(\{H_{i}\}_{i=0}^{\ell}\subset[H_{0},H_{f}]\), such that \(H_{\ell}=H_{f}\). The neural network has to learn a mapping between \(H\) and the functions
\[I_{\boldsymbol{\theta}}(H)\qquad\text{and}\qquad\overline{\rho}_{\boldsymbol{ \theta}}(H),\]
where \(I_{\boldsymbol{\theta}}(H)\) denotes the neural network approximation of the differential invariants \(I(H)\) solving (8), and \(\overline{\rho}_{\boldsymbol{\theta}}(H)\) is the approximation of the left moving frame \(\overline{\rho}(H)\) solving the reconstruction equations (23). We note that the output size of the network depends on the number of invariants \(I(H)\) and the size of the symmetry group via \(\overline{\rho}(H)\).
The network is trained by minimizing the invariant physics-informed loss function consisting of the invariantized differential equation loss and the reconstruction equations loss, defined as the sum of squared residuals

\[\mathcal{L}_{\Delta_{\text{Inv}},\overline{\rho}}(\boldsymbol{\theta})=\sum_{i=0}^{\ell}\big{(}\big{[}\Delta_{\text{Inv}}(H_{i},I_{\boldsymbol{\theta}}^{(k)}(H_{i}))\big{]}^{2}+\big{[}\mathrm{d}\overline{\rho}_{\boldsymbol{\theta}}(H_{i})+\overline{\rho}_{\boldsymbol{\theta}}(H_{i})\,\nu(H_{i},I_{\boldsymbol{\theta}}^{(k)}(H_{i}))\big{]}^{2}\big{)}. \tag{29}\]
We supplement the loss function (29) with the initial conditions (9) and (25) by considering the invariant initial conditions loss function
\[\mathcal{L}_{\text{I.C.}}(\boldsymbol{\theta})=\big{[}I_{\boldsymbol{\theta}} ^{(k-1)}(H_{0})-I_{0}^{(k-1)}\big{]}^{2}+\big{[}\overline{\rho}_{\boldsymbol{ \theta}}(H_{0})-\overline{\rho}_{0}\big{]}^{2}.\]
The final invariant physics-informed loss function is thus given by
\[\mathcal{L}_{\text{Inv}}(\boldsymbol{\theta})=\mathcal{L}_{\Delta_{\text{Inv} },\overline{\rho}}(\boldsymbol{\theta})+\alpha\,\mathcal{L}_{\text{I.C.}}( \boldsymbol{\theta}),\]
where \(\alpha\) is again a hyper-parameter rescaling the importance of the equation and initial condition losses.
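Before turning to the examples, the following sketch shows how this loss is assembled in practice for the Schwarz equation with \(F(t)=2\) (anticipating Example 14): since the invariantized equation there is purely algebraic, the network only outputs the four left-frame entries and the loss reduces to the reconstruction residuals plus initial conditions. Network size, optimizer settings and collocation points are again illustrative assumptions rather than the paper's exact configuration:

```python
import math
import torch
import torch.nn as nn

# the network maps t to the left-frame entries (alpha, beta, gamma, delta)
net = nn.Sequential(nn.Linear(1, 40), nn.Tanh(),
                    nn.Linear(40, 40), nn.Tanh(), nn.Linear(40, 4))
t = torch.linspace(0.0, math.pi, 200).reshape(-1, 1).requires_grad_(True)
t0 = torch.zeros(1, 1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def d_dt(f, t):
    return torch.autograd.grad(f, t, grad_outputs=torch.ones_like(f),
                               create_graph=True)[0]

for epoch in range(5000):
    opt.zero_grad()
    a, b, c, d = net(t).split(1, dim=1)
    # residuals of alpha_t + beta = beta_t - alpha = gamma_t + delta = delta_t - gamma = 0
    res = ((d_dt(a, t) + b)**2 + (d_dt(b, t) - a)**2 +
           (d_dt(c, t) + d)**2 + (d_dt(d, t) - c)**2).sum()
    a0, b0, c0, d0 = net(t0).split(1, dim=1)
    ic = ((a0 - 1)**2 + b0**2 + c0**2 + (d0 - 1)**2).sum()
    loss = res + ic
    loss.backward()
    opt.step()

u = (b / d).detach()        # reconstructed solution; compare with tan(t)
```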
## 5 Examples
We now implement the invariant neural network architecture introduced in Section 4 for several examples, and also train a standard physics-informed neural network to compare the solutions obtained. For both models we use feed-forward neural networks minimizing the invariant loss function and the standard PINN loss function, respectively. For the sake of consistency, all networks used throughout this section have 5 layers with 40 nodes per layer and use the hyperbolic tangent as the activation function. For most examples the loss stabilizes in fewer than 5,000 epochs, but for uniformity we trained all models for 5,000 epochs. The numerical errors of the two neural network solutions are obtained by comparing the numerical solutions to the exact solution, if available, or to the numerical solution obtained using odeint from scipy.integrate. We also compute the mean squared error over the entire interval of integration for all examples, together with the standard deviation over \(5\) runs. These results are summarized in Table 1. Finally, point-wise squared error plots for each example show how the error varies over the interval of integration.
**Example 14**.: As our first example, we consider the Schwarz equation (10), with \(F(t)=2\). For the numerical simulations, we used the initial conditions
\[u_{0}=u_{tt}^{0}=0,\qquad u_{t}^{0}=1. \tag{30}\]
According to (18) the invariantization of Schwarz' equation yields the algebraic constraint \(I=2\). Thus, the loss function (29) will only contain the reconstruction equations (26a). Namely,
\[\alpha_{t}+\beta=\beta_{t}-\alpha=\gamma_{t}+\delta=\delta_{t}-\gamma=0,\]
where we used the fact that \(\sigma=1\). Substituting (30) into (26b), yields the initial conditions
\[\delta_{0}=\alpha_{0}=\pm 1,\qquad\beta_{0}=\gamma_{0}=0\]
for the reconstruction equations. In our numerical simulations we worked with the positive sign. Once the reconstruction equations have been solved, the solution to the Schwarz equation is given by the ratio (27). The solution is integrated on the interval \(t\in[0,\pi]\). Error plots for the solutions obtained via the invariant PINN and the standard PINN implementations are given in Figure 2. These errors are obtained by comparing the numerical solutions to the exact solution \(u(t)=\tan(t)\). Clearly, the invariant implementation is substantially more precise near the vertical asymptote at \(t=\pi/2\).
**Example 15**.: As our second example, we consider the logistic equation
\[u_{t}=u(1-u) \tag{31}\]
occurring in population growth modeling. Equation (31) admits the one-parameter symmetry group
\[T=t,\qquad U=\frac{u}{1+\epsilon\,ue^{-t}},\qquad\text{where}\qquad\epsilon \in\mathbb{R}.\]
Implementing the algorithm outlined in Section 3.5, we choose the cross-section \(\mathcal{K}=\{u=1\}\). This yields the invariantized equation
\[I=\iota(u_{t})=0.\]
Figure 2: Time series of the squared error for the Schwarz equation (10).
The reconstruction equation is
\[\epsilon_{t}=I=0,\]
subject to the initial condition
\[\epsilon(t_{0})=\bigg{(}\frac{1-u_{0}}{u_{0}}\bigg{)}e^{t_{0}},\]
where \(u_{0}=0.5\) and our interval of integration is \([0,\pi]\). The solution to the logistic equation is then given by
\[u(t)=\frac{1}{1+\epsilon\,e^{-t}}.\]
As Figure 3 illustrates, the error incurred by the invariant PINN model is significantly smaller, by about a factor of more than 100, than the standard PINN implementation when compared to the exact solution \(u(t)=1/(1+e^{-t})\).
**Example 16**.: We now consider the driven harmonic oscillator
\[u_{tt}+u=\sin(t^{a}), \tag{32}\]
which appears in inductor-capacitor circuits, [33]. In the following we set \(a=0.99\), which yields bounded solutions close to resonance occurring when \(a=1\). The differential equation (32) admits the two-dimensional symmetry group of transformations
\[T=t,\qquad U=u+\alpha\sin(t)+\beta\cos(t),\qquad\text{where}\qquad\alpha, \beta\in\mathbb{R}.\]
A cross-section to the prolonged action is given by \(\mathcal{K}=\{u=u_{t}=0\}\). The invariantization of (32) yields
\[I=\iota(u_{tt})=\sin(t^{a}).\]
Figure 3: Time series of the squared error for the logistic equation (31).

The reconstruction equations are
\[\alpha_{t}=\sin(t^{a})\cos(t),\qquad\beta_{t}=-\sin(t^{a})\sin(t), \tag{33}\]
with initial conditions
\[\alpha(t_{0})=u_{0}\sin(t_{0})+u_{t}^{0}\cos(t_{0}),\qquad\beta(t_{0})=u_{0}\cos( t_{0})-u_{t}^{0}\sin(t_{0}),\]
where, in our numerical simulations, we set \(u_{0}=u_{t}^{0}=1\) and integrate over the interval \([0,10]\). Given a solution to the reconstruction equations (33), the solution to the driven harmonic oscillator (32) is
\[u(t)=\alpha(t)\,\sin(t)+\beta(t)\cos(t).\]
Figure 4 shows the error for the invariant PINN implementation and the standard PINN approach compared to the solution obtained using odeint from scipy.integrate. As in the previous two examples, the invariant version yields substantially better numerical results than the standard PINN method.
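Because (33) decouples from \(u\), the reconstruction can also be checked directly by quadrature; the following short script (our own verification, not the PINN itself) uses scipy:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

a, t0, u0, ut0 = 0.99, 0.0, 1.0, 1.0
t = np.linspace(t0, 10.0, 2000)
alpha0 = u0 * np.sin(t0) + ut0 * np.cos(t0)
beta0 = u0 * np.cos(t0) - ut0 * np.sin(t0)
alpha = alpha0 + cumulative_trapezoid(np.sin(t**a) * np.cos(t), t, initial=0)
beta = beta0 - cumulative_trapezoid(np.sin(t**a) * np.sin(t), t, initial=0)
u = alpha * np.sin(t) + beta * np.cos(t)   # solution of the driven oscillator
```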
**Example 17**.: We now consider the second order ordinary differential equation
\[u_{tt}=\exp{[-u_{t}]} \tag{34}\]
with an exponential term. Equation (34) admits a three-dimensional symmetry group action given by
\[T=e^{\epsilon}t+a,\qquad U=e^{\epsilon}u+\epsilon\,e^{\epsilon}t+b,\]
where \(a,b,\epsilon\in\mathbb{R}\). In the following, we only consider the one-dimensional group
\[T=e^{\epsilon}t,\qquad U=e^{\epsilon}u+\epsilon\,e^{\epsilon}t.\]
We note that in this example the independent variable is not invariant, unlike in the previous examples. A cross-section to the prolonged action is given by \(\mathcal{K}=\{u_{t}=0\}\). Introducing the invariants
\[H=\ln{\left[\frac{1}{1-\iota(t)}\right]}=\ln{\left[\frac{1}{1-te^{-u_{t}}}\right]},\qquad I=\iota(u)=\exp[-u_{t}](u-tu_{t}),\]
the invariantization of the differential equation (34) reduces to the first order linear equation

\[I_{H}+I=e^{-H}-1.\]

Figure 4: Time series of the squared error for the driven harmonic oscillator (32).
The reconstruction equation for the left moving frame is simply
\[\epsilon_{H}=1.\]
In terms of \(\epsilon\) and \(I\), the parametric solution to the original differential equation (34) is
\[t=e^{\epsilon}(1-e^{-H}),\qquad u=e^{\epsilon}(I+\epsilon(1-e^{-H})). \tag{35}\]
The solution to (34) is known and is given by
\[u(t)=(t+c_{1})\ln(t+c_{1})-t+c_{2}, \tag{36}\]
where \(c_{1}\), \(c_{2}\) are two integration constants. For the numerical simulations, we use the initial conditions
\[I_{0}=\exp[-u_{t}^{0}](u_{0}-t_{0}u_{t}^{0}),\qquad\epsilon_{0}=u_{t}^{0},\]
where \(u_{0}=u(t_{0})\), \(u_{t}^{0}=u_{t}(t_{0})\) with \(t_{0}=0\), and \(c_{1}=\exp(-5)\), \(c_{2}=0\) in (36). The interval of integration \([H_{0},H_{f}]\) is given by
\[H_{0}=\ln\bigg{[}\frac{1}{1-t_{0}e^{-u_{t}^{0}}}\bigg{]},\qquad H_{f}=\ln\bigg{[}\frac{1}{1-t_{f}e^{-u_{t}^{f}}}\bigg{]}, \tag{37}\]
where \(u_{t}^{f}=u_{t}(t_{f})\) and \(t_{f}=2\). We choose the interval of integration given by (37), so that when \(t\) is given by (35) it lies in the interval \([0,2]\).
Figure 5 shows the error obtained for the invariant PINN model when compared to the exact solution (36), and similarly for the non-invariant PINN model. As in all previous examples, the invariant version drastically outperforms the standard PINN approach.
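The invariantized equation in this example is linear and admits a closed-form general solution, which can be confirmed quickly with sympy (our own check, not part of the paper):

```python
import sympy as sp

H = sp.symbols('H')
I = sp.Function('I')
sol = sp.dsolve(sp.Eq(I(H).diff(H) + I(H), sp.exp(-H) - 1), I(H))
print(sol)   # I(H) = (C1 + H)*exp(-H) - 1, up to how sympy arranges the terms
```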
**Example 18**.: As our final example, we consider a system of first order ODEs
\[u_{t}=-u+(t+1)v,\qquad v_{t}=u-tv. \tag{38}\]
This system admits a two-dimensional symmetry group of transformations given by
\[T=t,\qquad U=\alpha u+\beta t,\qquad V=\alpha v+\beta,\]
where \(\alpha>0\) and \(\beta\in\mathbb{R}\). Working with the cross-section \(\mathcal{K}=\{u=1,v=0\}\), the invariantization of (38) yields
\[I=\iota(u_{t})=-1,\qquad J=\iota(v_{t})=1.\]
The reconstruction equations are
\[\alpha_{t}=\alpha(1+t),\qquad\beta_{t}=\alpha\]
subject to the initial conditions \(\alpha_{0}=1\), \(\beta_{0}=1\), corresponding to the initial conditions \(u_{0}=v_{0}=1\), when \(t_{0}=0\). In our numerical simulations we integrated over the interval \([0,2]\). The solution to (38) is then given by
\[u(t)=\alpha(t)+t\,\beta(t),\qquad v(t)=\beta(t).\]
As in all previous examples, comparing the numerical solutions to the exact solution
\[u(t)=\sqrt{\frac{2}{\pi}}\,c\,e^{-(t+1)^{2}/2}+c\,t\,\text{erf}\!\left(\frac{t+1} {\sqrt{2}}\right)+kt,\qquad v(t)=c\,\text{erf}\!\left(\frac{t+1}{\sqrt{2}} \right)+k,\]
with \(c=\left(\sqrt{2/\pi}\,\exp(-1/2)\right)^{-1}\) and \(k=1-c\,\text{erf}(1/\sqrt{2})\), where \(\text{erf}(t)=2/\sqrt{\pi}\int_{0}^{t}e^{-x^{2}}\mathrm{d}x\) is the standard error function, we observe in Figure 6 that the invariant version of the PINN model considerably outperforms its non-invariant counterpart.
## 6 Summary and conclusions
In this paper we have introduced the notion of invariant physics-informed neural networks. These combine physics-informed neural networks with methods from the group analysis of differential equations to simplify the form of the differential equations that have to be solved. In turn, this simplifies the loss function that has to be minimized, and our numerical tests show that the solutions obtained with the invariant model outperform their non-invariant counterparts, typically considerably so. Table 1 summarizes the examples considered in the paper and shows that the invariant PINN outperforms the vanilla PINN for all the examples considered.

Figure 5: Time series of the squared error for the exponential equation (34).

Figure 6: Time series of the squared error for the system of equations (38).
The proposed method is fully algorithmic and as such can be applied to any system of differential equations that is strongly invariant under the prolonged action of a group of Lie point symmetries. It is worth noting that the work proposed here parallels some of the work on invariant discretization schemes which, for ordinary differential equations, also routinely outperform their non-invariant counterparts. We have observed this to also be the case for physics-informed neural networks.
Lastly, while we have restricted ourselves here to the case of ordinary differential equations, our method extends to partial differential equations as well. However, when considering partial differential equations, it is not sufficient to project the equations onto the space of differential invariants as done in this paper. As explained in [36], integrability conditions among the differential invariants must also be added to the invariantized differential equations. In the multivariate case, the reconstruction equations (23) will then form a system of first order partial differential equations for the left moving frame. Apart from these modifications, invariant physics-informed neural networks can also be constructed for partial differential equations, which will be investigated elsewhere.
#### Acknowledgments and Disclosure of Funding
This research was undertaken, in part, thanks to funding from the Canada Research Chairs program and the NSERC Discovery Grant program. The authors also acknowledge support from the Atlantic Association for Research in the Mathematical Sciences (AARMS) Collaborative Research Group on _Mathematical Foundations for Scientific Machine Learning_.
|
2306.04905 | ViG-UNet: Vision Graph Neural Networks for Medical Image Segmentation | Deep neural networks have been widely used in medical image analysis and
medical image segmentation is one of the most important tasks. U-shaped neural
networks with encoder-decoder are prevailing and have succeeded greatly in
various segmentation tasks. While CNNs treat an image as a grid of pixels in
Euclidean space and Transformers recognize an image as a sequence of patches,
graph-based representation is more generalized and can construct connections
for each part of an image. In this paper, we propose a novel ViG-UNet, a graph
neural network-based U-shaped architecture with the encoder, the decoder, the
bottleneck, and skip connections. The downsampling and upsampling modules are
also carefully designed. The experimental results on ISIC 2016, ISIC 2017 and
Kvasir-SEG datasets demonstrate that our proposed architecture outperforms most
existing classic and state-of-the-art U-shaped networks. | Juntao Jiang, Xiyu Chen, Guanzhong Tian, Yong Liu | 2023-06-08T03:17:00Z | http://arxiv.org/abs/2306.04905v1 | # ViG-UNet: Vision Graph Neural Networks for Medical Image Segmentation
###### Abstract
Deep neural networks have been widely used in medical image analysis, and medical image segmentation is one of the most important tasks. U-shaped neural networks with encoder-decoder are prevailing and have succeeded greatly in various segmentation tasks. While CNNs treat an image as a grid of pixels in Euclidean space and Transformers recognize an image as a sequence of patches, graph-based representation is more generalized and can construct connections for each part of an image. In this paper, we propose a novel ViG-UNet, a graph neural network-based U-shaped architecture with the encoder, the decoder, the bottleneck, and skip connections. The downsampling and upsampling modules are also carefully designed. The experimental results on ISIC 2016, ISIC 2017 and Kvasir-SEG datasets demonstrate that our proposed architecture outperforms most existing classic and state-of-the-art U-shaped networks.
Juntao Jiang\({}^{1}\), Xiyu Chen\({}^{2}\), Guanzhong Tian\({}^{3,*}\) and Yong Liu\({}^{1,*}\)

1. College of Control Science and Engineering, Zhejiang University, Hangzhou, China
2. Polytechnic Institute, Zhejiang University, Hangzhou, China
3. Ningbo Innovation Center, Zhejiang University, Ningbo, China

\({}^{*}\) Corresponding authors

Keywords: Medical image segmentation, ViG-UNet, Graph neural networks, Encoder-decoder
## 1 Introduction
Recent years have witnessed the rise of deep learning and its broadening applications in computer vision tasks. As one of the most actively studied topics of computer vision applied in medical scenarios, image segmentation, which identifies the pixels of organs or lesions from the background, plays a crucial role in computer-aided diagnosis and treatment, improving both efficiency and accuracy.
Currently, medical image segmentation methods based on deep learning mainly use fully convolutional neural networks (FCNs) with a U-shaped encoder-decoder architecture, such as U-Net [1] and its variants. Composed of a symmetric encoder-decoder with skip connections, U-Net uses convolutional layers and downsampling modules for feature extraction, and convolutional layers and upsampling modules for pixel-level semantic classification. The skip connections preserve spatial information from high-resolution features that may otherwise be lost during downsampling. Following this work and based on a fully convolutional structure, many U-Net variants such as Attention-UNet [2] and UNet++ [3] have been proposed and achieved great success. Recently, as Transformer-based methods like ViT [4] achieved good results in image recognition thanks to their ability to capture global context and the interrelations among inputs, Transformer-based medical image segmentation models such as Trans-UNet [5] and Swin-UNet [6] have also been proposed and shown competitive performance.
While CNNs treat an image as a grid of pixels in Euclidean space and Transformers recognize an image as a sequence of patches, graph-based representation can be more generalized and reflect the relationship of each part in an image. Since the graph neural network (GNN) [7] was first proposed, techniques for processing graphs have been studied extensively, and a series of spatial-based GCNs [8, 9] and spectral-based GCNs [10, 11, 12, 13] have been proposed and widely applied. In recent work, Han et al. [14] proposed Vision GNN (ViG), which splits an image into many blocks regarded as nodes, constructs a graph representation by connecting the nearest neighbors, and then processes it with GNNs. It contains Grapher modules with graph convolution to aggregate and update graph information, and Feed-forward Network (FFN) modules with fully connected layers for node feature transformation. ViG performed well in image recognition tasks: ViG-S achieved 0.804 Top-1 accuracy and ViG-B achieved 0.823 on ImageNet [15].
Motivated by the success of the ViG model, we propose ViG-UNet, which leverages ViG for 2D medical image segmentation; the graph-based representation can also be effective in segmentation tasks. ViG-UNet is a GNN-based U-shaped architecture consisting of the encoder, the bottleneck and the decoder, with skip connections. We conduct comparison experiments on the ISIC 2016 [16], ISIC 2017 [17] and Kvasir-SEG [18] datasets. The results show that the proposed model outperforms most existing classic and state-of-the-art methods. The code will be released at _[https://github.com/juntaoJianggavin/ViG-UNet_](https://github.com/juntaoJianggavin/ViG-UNet_).
## 2 Methods
### Architecture Overview
ViG-UNet is a U-shaped model with a symmetric architecture, shown in Figure 1. It consists of the encoder, the bottleneck, the decoder and skip connections. The basic modules of ViG-UNet are the stem block, Grapher modules, Feed-forward Networks (FFNs), and downsampling and upsampling modules. The detailed settings of ViG-UNet are given in Table 1, where \(D\) is the feature dimension, \(E\) the number of convolutional layers in the FFNs, \(K\) the number of neighbors in the GCNs, and \(H\times W\) the output image size.
### Stem Block, Upsampling, Downsampling and SkipConnections
In the stem block, two convolutional layers are used, with stride 1 and stride 2, respectively. The output features have height and width equal to \(\frac{H}{2}\) and \(\frac{W}{2}\), where \(H\) and \(W\) are the original height and width of the input image, and a position embedding is added. We use a convolutional layer with stride 2 for the downsampling operation, and bilinear interpolation with scale factor 2 followed by a convolutional layer for the upsampling operation. The output of each FFN in the encoder is added to the output of the corresponding FFN in the decoder.
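To make these module descriptions concrete, the following is a minimal PyTorch sketch of the stem, downsampling and upsampling modules. The kernel sizes, the use of batch normalization and GELU, and the shape of the learned position embedding are our assumptions; the text only specifies the stride-1 and stride-2 convolutions in the stem, the stride-2 downsampling convolution, and bilinear upsampling followed by a convolution.

```python
import torch
import torch.nn as nn

class Stem(nn.Module):
    """Two convolutions (stride 1, then stride 2) plus a position embedding;
    kernel size 3, BatchNorm and GELU are assumptions, not given in the text."""
    def __init__(self, in_ch=3, dim=32, img_hw=(512, 512)):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, stride=1, padding=1), nn.BatchNorm2d(dim), nn.GELU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.BatchNorm2d(dim), nn.GELU())
        # learned position embedding at half the input resolution
        self.pos = nn.Parameter(torch.zeros(1, dim, img_hw[0] // 2, img_hw[1] // 2))

    def forward(self, x):
        return self.convs(x) + self.pos

def downsample(c_in, c_out):
    # stride-2 convolution halves H and W
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                         nn.BatchNorm2d(c_out))

def upsample(c_in, c_out):
    # bilinear interpolation (scale factor 2) followed by a convolution
    return nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                         nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out))
```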
### Grapher Module
Vision GNN first builds an image's graph structure by dividing it into \(N\) patches, converting them into feature vectors, and then recognizing them as a set of nodes \(\mathcal{V}=\{v_{1},v_{2},\cdots,v_{N}\}\). A \(K\) nearest neighbors method is used to find the \(K\) nearest neighbors \(\mathcal{N}\left(v_{i}\right)\) for each node \(v_{i}\). An edge \(e_{ji}\) is added from \(v_{j}\) to \(v_{i}\) for all \(v_{j}\in\mathcal{N}\left(v_{i}\right)\). In this way, a graph representation of an image \(\mathcal{G}=\left(\mathcal{V},\mathcal{E}\right)\) is obtained, where \(e_{ji}\in\mathcal{E}\). For the constructed graph representation \(\mathcal{G}=G(X)\) and the input feature \(x_{i}\), the aggregation operation calculates the representation of a node by aggregating the features of neighboring nodes, and the update operation then merges the aggregated features. The updated feature \(\mathbf{x}_{i}^{\prime}\) can be represented as:
\[\mathbf{x}_{i}^{\prime}=h\left(\mathbf{x}_{i},g\left(\mathbf{x}_{i},\mathcal{ N}\left(\mathbf{x}_{i}\right);W_{aggregate}\right);W_{\text{update}}\right), \tag{1}\]
where \(W_{aggregate}\) and \(W_{update}\) are the learnable weights of the aggregation and update operations.
\[g(\cdot)=\mathbf{x}_{i}^{\prime\prime}=\left[\mathbf{x}_{i},\max\left(\left\{\mathbf{x}_{j}-\mathbf{x}_{i}\mid j\in\mathcal{N}\left(\mathbf{x}_{i}\right)\right\}\right)\right], \tag{2}\]
and
\[h(\cdot)=\mathbf{x}_{i}^{\prime}=\mathbf{x}_{i}^{\prime\prime}W_{\text{ update}}+b_{h}, \tag{3}\]
where \(b_{h}\) is the bias. Following the design of the original ViG networks, the \(g(\cdot)\) operation uses the max-relative graph convolution [19].
| Module | Output size | Architecture |
| --- | --- | --- |
| Stem | \(\frac{H}{2}\times\frac{W}{2}\) | Conv \(\times\)2 |
| Grapher + FFN | \(\frac{H}{2}\times\frac{W}{2}\) | \(D=32\), \(E=2\), \(K=9\) |
| Downsampling | \(\frac{H}{4}\times\frac{W}{4}\) | Conv |
| Grapher + FFN | \(\frac{H}{4}\times\frac{W}{4}\) | \(D=64\), \(E=2\), \(K=9\) |
| Downsampling | \(\frac{H}{8}\times\frac{W}{8}\) | Conv |
| Grapher + FFN | \(\frac{H}{8}\times\frac{W}{8}\) | \(D=128\), \(E=2\), \(K=9\) |
| Downsampling | \(\frac{H}{16}\times\frac{W}{16}\) | Conv |
| Grapher + FFN | \(\frac{H}{16}\times\frac{W}{16}\) | \(D=256\), \(E=2\), \(K=9\) |
| Downsampling | \(\frac{H}{32}\times\frac{W}{32}\) | Conv |
| Grapher \(\times\)2 | \(\frac{H}{32}\times\frac{W}{32}\) | \(D=512\), \(K=9\) |
| Upsampling | \(\frac{H}{16}\times\frac{W}{16}\) | bilinear + Conv |
| Grapher + FFN | \(\frac{H}{16}\times\frac{W}{16}\) | \(D=256\), \(E=2\), \(K=9\) |
| Upsampling | \(\frac{H}{8}\times\frac{W}{8}\) | bilinear + Conv |
| Grapher + FFN | \(\frac{H}{8}\times\frac{W}{8}\) | \(D=128\), \(E=2\), \(K=9\) |
| Upsampling | \(\frac{H}{4}\times\frac{W}{4}\) | bilinear + Conv |
| Grapher + FFN | \(\frac{H}{4}\times\frac{W}{4}\) | \(D=64\), \(E=2\), \(K=9\) |
| Upsampling | \(\frac{H}{2}\times\frac{W}{2}\) | bilinear + Conv |
| Grapher + FFN | \(\frac{H}{2}\times\frac{W}{2}\) | \(D=32\), \(E=2\), \(K=9\) |
| Final Layer | \(H\times W\) | bilinear + Conv |

Table 1: Detailed settings of ViG-UNet
The aggregated feature \(\mathbf{x}_{i}^{\prime\prime}\) is split into \(h\) heads, then each head is updated with different weights. Then the updated feature \(\mathbf{x}_{i}^{\prime}\) can be obtained by concatenating all the heads:
\[\mathbf{x}_{i}^{\prime}=[\mathbf{x}_{\text{head}1}^{\prime\prime}W_{\text{update}}^{1}+b_{h1},\ \mathbf{x}_{\text{head}2}^{\prime\prime}W_{\text{update}}^{2}+b_{h2},\ \mathbf{x}_{\text{head}3}^{\prime\prime}W_{\text{update}}^{3}+b_{h3},\ \cdots,\ \mathbf{x}_{\text{head}h}^{\prime\prime}W_{\text{update}}^{h}+b_{hh}] \tag{4}\]
where \(\mathbf{x}_{\text{head}1}^{\prime\prime},\mathbf{x}_{\text{head}2}^{\prime\prime},\cdots,\mathbf{x}_{\text{head}h}^{\prime\prime}\) are the heads split from \(\mathbf{x}_{i}^{\prime\prime}\), \(W_{\text{update}}^{1},W_{\text{update}}^{2},\cdots,W_{\text{update}}^{h}\) are the corresponding weights, and \(b_{h1},b_{h2},\cdots,b_{hh}\) are the corresponding biases.
For the input feature \(X\), the output feature \(Y\) after a Grapher module can be represented as:
\[X_{1}=XW_{in}+b_{in}, \tag{5}\]
\[Y=\mathrm{DropPath}(\mathrm{GELU}(\mathrm{GraphConv}(X_{1}))W_{\text{out}}+b_{out})+X, \tag{6}\]
where \(Y\) has the same size as \(X\), and \(W_{\text{in}}\) and \(W_{\text{out}}\) are weights. The activation function used is \(\mathrm{GELU}\) [20], and \(b_{in}\) and \(b_{out}\) are biases. In the implementation, each affine map \(XW+b\) is realized by a convolutional layer followed by a batch normalization operation. \(\mathrm{GraphConv}\) denotes the graph-level aggregation and update discussed above. The Grapher module has a shortcut structure, and the \(\mathrm{DropPath}\) [21] operation is used.
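To make the Grapher concrete, here is a minimal single-head PyTorch sketch of the max-relative graph convolution (Eqs. 1-3) and the Grapher module (Eqs. 5-6), operating on patch features of shape (B, N, D). It is an illustrative sketch rather than the authors' implementation: it uses `nn.Linear` in place of the convolution + batch normalization used in practice, keeps each node among its own nearest neighbors, and omits \(\mathrm{DropPath}\).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def knn_indices(x, k):
    # x: (B, N, D); indices (B, N, k) of the k nearest nodes (self included)
    return torch.cdist(x, x).topk(k, largest=False).indices

class MaxRelativeGraphConv(nn.Module):
    """Single-head max-relative graph convolution, Eqs. (1)-(3)."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)  # W_update and bias b_h

    def forward(self, x, k=9):
        b = torch.arange(x.size(0), device=x.device).view(-1, 1, 1)
        neigh = x[b, knn_indices(x, k)]                  # (B, N, k, D)
        rel = (neigh - x.unsqueeze(2)).amax(dim=2)       # max(x_j - x_i), Eq. (2)
        return self.update(torch.cat([x, rel], dim=-1))  # Eq. (3)

class Grapher(nn.Module):
    """Grapher module, Eqs. (5)-(6); DropPath omitted for brevity."""
    def __init__(self, dim, k=9):
        super().__init__()
        self.fc_in = nn.Linear(dim, dim)    # W_in, b_in
        self.fc_out = nn.Linear(dim, dim)   # W_out, b_out
        self.gconv = MaxRelativeGraphConv(dim)
        self.k = k

    def forward(self, x):                   # x: (B, N, D) patch features
        y = self.fc_in(x)                   # X1 = X W_in + b_in, Eq. (5)
        y = self.fc_out(F.gelu(self.gconv(y, self.k)))
        return y + x                        # shortcut structure, Eq. (6)
```

For example, `Grapher(32)(torch.randn(2, 196, 32))` processes a batch of two images split into 196 patches of dimension 32.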
### Feed-forward Network
Feed-forward Networks (FFNs) are used to improve the feature transformation capacity and to relieve the over-smoothing phenomenon after the Grapher module. The FFN can be represented as
\[Z=\mathrm{Droppath}(\mathrm{GELU}\left(YW_{1}+b_{1}\right)W_{2}+b_{2})+Y, \tag{7}\]
where \(W_{1}\) and \(W_{2}\) are weights, and \(b_{1}\) and \(b_{2}\) are biases. In the implementation, each feed-forward operation \(XW+b\) is realized by a convolutional layer followed by a batch normalization operation, and the \(\mathrm{DropPath}\) operation is used. The workflow of the Grapher and FFN modules is shown in Figure 2.
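A corresponding sketch of the FFN of Eq. (7), with the same simplifications as above (`nn.Linear` standing in for convolution + batch normalization, \(\mathrm{DropPath}\) omitted); the hidden expansion factor is our assumption, since Eq. (7) does not fix the hidden width.

```python
import torch.nn as nn

class FFN(nn.Module):
    """Feed-forward network of Eq. (7) with a residual shortcut."""
    def __init__(self, dim, expansion=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, expansion * dim),  # W1, b1
                                 nn.GELU(),
                                 nn.Linear(expansion * dim, dim))  # W2, b2

    def forward(self, y):
        return self.net(y) + y  # Z = GELU(Y W1 + b1) W2 + b2 + Y
```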
## 3 Experiments
### Datasets
**ISIC 2016** is a dataset of dermoscopic images of skin lesions; we use its lesion segmentation subset, which contains 900 image-mask pairs in the training set and 379 pairs in the testing set. **ISIC 2017** is likewise a dataset of dermoscopic images of skin lesions; its lesion segmentation task contains 2000 pairs in the training set, 150 in the validation set, and 600 in the testing set. The **Kvasir-SEG** dataset contains 1000 pairs of polyp images and masks, which we split into training and testing sets with a test ratio of 0.2 (random state 41).
### Implementation Details
The experiments are all done on a PG500-216 (V100) GPU with 32 GB memory. The training and validation sets of ISIC 2016 and
Figure 1: The architecture of ViG-UNet: the basic modules are the stem block for visual embedding, Grapher Modules, Feed-forward Networks (FFNs) modules, downsampling modules in the encoder and upsampling modules in the decoder.
Figure 2: The workflow of Grapher and FFN modules: graph processing and feature transformation are applied
Kvasir-SEG are split with a ratio of 0.2 (random state 41). The total number of training epochs is 200 and the batch size is 4. The input images are all resized to 512\(\times\)512. The optimizer used is Adam [10], with an initial learning rate of 0.0001 and a CosineAnnealingLR [11] scheduler with a minimum learning rate of 0.00001. Only rotation by 90 degrees clockwise a random number of times, flipping and normalization are used for augmentation. The validation metric is the \(IoU\) of the lesions. A mixed loss combining binary cross-entropy (BCE) loss and Dice loss [1] is used in the training process:
\[\mathcal{L}=0.5BCE(\hat{y},y)+Dice(\hat{y},y)\]
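A minimal sketch of this mixed loss, assuming sigmoid outputs and binary masks of shape (B, 1, H, W); the smoothing constant `eps` is our addition for numerical stability.

```python
import torch
import torch.nn.functional as F

def dice_loss(probs, target, eps=1e-6):
    # probs, target: (B, 1, H, W); soft Dice averaged over the batch
    inter = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (union + eps)).mean()

def mixed_loss(logits, target):
    # L = 0.5 * BCE + Dice, as in the training objective above
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return 0.5 * bce + dice_loss(torch.sigmoid(logits), target)
```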
We implemented ViG-UNet and six other U-Net variants for comparison experiments. The pre-trained Swin-T model of \(224\times 224\) input size on ImageNet 1k is used for Swin-UNet, and the pre-trained ViT-B/16 on ImageNet 21k is used for Trans-UNet.
### Results
The performance of the different methods is reported in Table 2 and Table 3 using the \(IoU\) and \(Dice\) metrics, and example segmentation results are displayed in Figure 3. These experiments show that our approach performs best on all three datasets, and we expect that using ViG models pre-trained on ImageNet could improve performance further. The low performance of Swin-UNet is surprising, but [1] also reports its low performance on small datasets. In conclusion, our method shows competitive performance compared to classical and state-of-the-art techniques.
We also count the parameters using the fvcore Python package with a (1, 3, 512, 512) input. Admittedly, our model is larger than the others and needs more computational resources.
## 4 Conclusion
In this work, we propose ViG-UNet for 2D medical image segmentation, a GNN-based U-shaped architecture consisting of the encoder, the bottleneck, and the decoder with skip connections. Experiments on the ISIC 2016, ISIC 2017 and Kvasir-SEG datasets show
| Methods | Parameters |
| --- | --- |
| UNet | 7.8M |
| Attention-UNet | 8.7M |
| UNet++ | 9.2M |
| Trans-UNet | 92.3M |
| Swin-UNet | 27.3M |
| UNext | 1.5M |
| ViG-UNet | 0.7G |

Table 4: Comparison of parameters of different models
| Methods | ISIC 2016 | ISIC 2017 | Kvasir-SEG |
| --- | --- | --- | --- |
| UNet | 0.8984 | 0.7708 | 0.8023 |
| Attention-UNet | 0.9058 | 0.7739 | 0.8065 |
| UNet++ | 0.9070 | 0.7768 | 0.8033 |
| Trans-UNet | 0.9158 | 0.8244 | 0.6439 |
| Swin-UNet | 0.8568 | 0.7914 | 0.4974 |
| UNext | 0.9103 | 0.8241 | 0.8122 |
| ViG-UNet | **0.9206** | **0.8292** | **0.8188** |

Table 3: Comparison of experimental results on ISIC 2016, ISIC 2017 and Kvasir-SEG (using the \(Dice\) metric)
Figure 3: Example segmentation results of different methods on the ISIC 2016, ISIC 2017 and Kvasir-SEG datasets
that our method is effective.
## 5 Acknowledgements
This work was supported by a Grant from The National Natural Science Foundation of China (No. U21A20484).
|
2301.12118 | Physics-informed Neural Network: The Effect of Reparameterization in
Solving Differential Equations | Differential equations are used to model and predict the behaviour of complex
systems in a wide range of fields, and the ability to solve them is an
important asset for understanding and predicting the behaviour of these
systems. Complicated physics mostly involves difficult differential equations,
which are hard to solve analytically. In recent years, physics-informed neural
networks have been shown to perform very well in solving systems with various
differential equations. The main ways to approximate differential equations are
through penalty function and reparameterization. Most researchers use penalty
functions rather than reparameterization due to the complexity of implementing
reparameterization. In this study, we quantitatively compare physics-informed
neural network models with and without reparameterization using the
approximation error. The performance of reparameterization is demonstrated
based on two benchmark mechanical engineering problems, a one-dimensional bar
problem and a two-dimensional bending beam problem. Our results show that when
dealing with complex differential equations, applying reparameterization
results in a lower approximation error. | Siddharth Nand, Yuecheng Cai | 2023-01-28T07:53:26Z | http://arxiv.org/abs/2301.12118v1 | # Physics-informed Neural Network: The Effect of Reparameterization in Solving Differential Equations
###### Abstract
Differential equations are used to model and predict the behaviour of complex systems in a wide range of fields, and the ability to solve them is an important asset for understanding and predicting the behaviour of these systems. Complicated physics mostly involves difficult differential equations, which are hard to solve analytically. In recent years, physics-informed neural networks have been shown to perform very well in solving systems with various differential equations. The main ways to approximate differential equations are through penalty function and reparameterization. Most researchers use penalty functions rather than reparameterization due to the complexity of implementing reparameterization. In this study, we quantitatively compare physics-informed neural network models with and without reparameterization using the approximation error. The performance of reparameterization is demonstrated based on two benchmark mechanical engineering problems, a one-dimensional bar problem and a two-dimensional bending beam problem. Our results show that when dealing with complex differential equations, applying reparameterization results in a lower approximation error.
## 1 Introduction
Differential equations are important because they can be used to model a wide variety of phenomena that occur in the physical world, such as the movement of fluids, the behaviour of financial markets, the growth of populations, and the electrical current in a circuit. By solving a differential equation, we can find out how a system will behave over time and make predictions about its future behaviour. This can be useful in a wide range of fields, including engineering, physics, economics, and biology.
There exist two ways of solving differential equations: analytically and numerically. Analytical methods give an exact solution formula, but many differential equations have exact solutions that cannot be found or are too hard to find. Numerical methods allow us to approximate a solution but cannot give an exact formula. Recent developments in machine learning have led to neural networks (NNs), which are universal approximators: a feedforward neural network with one hidden layer can approximate any continuous function arbitrarily well if it has enough hidden units [1]. Therefore, we can use NNs to approximate the solution of a differential equation.
### Origins
The first paper outlining a method to approximate exact solutions for both ordinary differential equations (ODEs) and partial differential equations (PDEs) was published in 1997. In it, the authors converted a second-order differential equation (DE) into an optimization problem where the original function is approximated by adjusting the parameters of a neural network and using automatic differentiation to find the derivatives of the function [2].
### Physics-informed Neural Network
After about two decades, Raissi et al. [3] introduced the term "physics-informed neural network" (PINN) and described a method for using deep neural networks to solve DEs. Since most natural phenomena and physics are governed by DEs, applying the PINN can significantly improve the accuracy of the model. By incorporating the laws of physics into the unsupervised learning process, PINN can decrease or even eliminate the need for training data. In many fields of research, obtaining training data is difficult due to expensive experimental or high-fidelity simulations; PINN can solve this problem. Benefiting from this, many researchers have proposed various techniques based on PINN in solving different problems, such as in fluid dynamics and electricity transmission [4].
### Current Research
In the field of mechanical engineering, PINNs show great performance in predicting the nonlinear behaviour of various structures. All of these works applied PINNs by adding penalty terms instead of reparameterization, for instance by minimizing the potential energy of a structural system. Without any supervised learning phase, the trained unsupervised NN framework shows a high level of consistency with analytical results on problems such as bending a beam in 2D or 3D geometry [5] and truss structures [6]. In addition to minimizing potential energy, the integrated physics knowledge can be engineered for constraints and boundary conditions, as demonstrated in modelling a bending plate [7] and a 3D beam with different materials [8]. One can also incorporate this technique into other types of NNs such as recurrent neural networks (RNNs), where multiple long short-term memory networks (LSTMs) are utilized to sequentially learn different time-dependent features [9].
Other works have investigated the effect of reparameterization. For instance, Zhu et al. [10] compare the performance of different reparameterization schemes in PINNs and discuss the trade-offs between accuracy and computational efficiency; they present a series of numerical examples to demonstrate the effectiveness of different schemes and provide guidelines for choosing the best one for a given problem. Others present detailed analyses of the effect of reparameterization on the accuracy and stability of PINN solutions, such as Tran et al. [11] and Zhang et al. [12], who discuss the benefits and limitations of different reparameterization techniques and demonstrate their effectiveness through different examples.
### Problem and Contribution
Comparing reparameterized and non-reparameterized versions of a neural network for solving DEs is an important problem because physics (i.e. constraints) is embedded through either reparameterization or penalty functions. However, most works of literature use only one method or the other, as discussed in Section 1.3, and never apply both methods and compare the results on the same DEs. The problem we will be solving is to determine whether reparameterization of a neural network can lower the approximation error compared to a benchmark case of not reparameterizing the network (using only penalty functions). Our contribution is therefore to show whether a reparameterized version of a neural network reduces the approximation error when solving mechanical engineering DEs using neural networks. This contribution gives greater insight into where reparameterized models perform better than non-reparameterized models and vice versa, which will help researchers make smarter decisions as to which method can maximize their approximation accuracy.
## 2 Methods and Background Information
In this section, we introduce some background information and our methods for solving DEs to the readers. We start by explaining what are the penalty function and reparameterization methods. Then we explain the finite differences method (FDM) for computing a numerical solution to DEs. Finally, we talk about the PINN we will be using for our tests.
### Penalty Function and Reparameterization
We will use a second-order differential equation as an example.
Given a differential equation of the form
\[G(\vec{x},\Psi(\vec{x}),\nabla\Psi(\vec{x}),\nabla^{2}\Psi(\vec{x}))=0,\vec{x}\in D \tag{1}\]
with certain boundary conditions (BC), where \(\vec{x}=(x_{1},x_{2},...,x_{n})\in\mathbb{R}^{n}\), \(D\subset\mathbb{R}^{n}\) and \(\Psi(\vec{x})\) is the function to be approximated [2].
Assume \(F(\vec{x})\) is the output of a neural network with \(\vec{x}\) as an input. The prediction \(F_{i}(x_{i})\) for input \(x_{i}\) of a neural network can be written as a function of the form
\[F_{i}(x_{i})=v^{T}(h_{n}\circ h_{n-1}\circ\cdots\circ h(Wx_{i})) \tag{2}\]
Then we can easily find the derivatives using automatic differentiation. Next, we let \(A(\vec{x})\) encode the boundary conditions. If \(\vec{F}(b_{1})=\vec{c}_{1},...,\vec{F}(b_{n})=\vec{c}_{n}\) are the boundary conditions, with the values at points \(\vec{b}_{i}\) set to constants \(\vec{c_{i}}\), then \(A(\vec{x})\) is of the form
\[A(\vec{x})=\left\|F(\vec{b}_{1})-\vec{c}_{1}\right\|+\left\|F(\vec{b}_{2})-\vec{c}_{2}\right\|+...+\left\|F(\vec{b}_{n})-\vec{c}_{n}\right\| \tag{3}\]
Then we can approximate \(\Psi(\vec{x})\) by minimizing the following loss function
\[Loss=A(\vec{x})+\left\|G(\vec{x},F(\vec{x}),\nabla F(\vec{x}),\nabla^{2}F(\vec{x}))\right\| \tag{4}\]
This is called the penalty function method, where the BCs are handled by adding penalty terms with penalty coefficient \(\lambda\). In short, the boundary conditions are not enforced exactly with this method; rather, the NN must minimize \(A(\vec{x})\) such that \(F(\vec{b}_{i})\) is as close to \(\vec{c_{i}}\) as possible.
An alternative way to handle BCs is reparameterization, which explicitly forces the NN to satisfy the BCs by modifying the output representation of the NN. Instead of having \(F(\vec{x})\) be the approximator of \(\Psi(\vec{x})\), the reparameterized output becomes:
\[K(\vec{x})=B(x)F(\vec{x}) \tag{5}\]
where \(B(x)\) is a function of \(x\) so that the reparameterized function \(K(\vec{x})\) can naturally satisfy the BCs:
\[\left\|K(\vec{b}_{1})-\vec{c}_{1}\right\|=\left\|K(\vec{b}_{2})-\vec{c}_{2} \right\|=...=\left\|K(\vec{b}_{n})-\vec{c}_{n}\right\|=0 \tag{6}\]
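As a minimal sketch of the two formulations, consider a toy second-order BVP \(u^{\prime\prime}=-1\) with \(u(0)=0\) and \(u^{\prime}(1)=0\); the small network, the grid, the penalty coefficient 100 and the reparameterizing factor \(B(x)=xe^{-x}\) (the one used for the bar problem in Section 3.1) are illustrative assumptions, and derivatives are obtained by automatic differentiation as described above.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Sigmoid(), nn.Linear(32, 1))
x = torch.linspace(0., 1., 100).unsqueeze(1).requires_grad_(True)

def grads(y, x):
    # first and second derivatives of y w.r.t. x via autograd
    dy = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]
    d2y = torch.autograd.grad(dy, x, torch.ones_like(dy), create_graph=True)[0]
    return dy, d2y

# Penalty method, Eq. (4): BC residuals added with a penalty coefficient
F_net = net(x)
dF, d2F = grads(F_net, x)
loss_penalty = ((d2F + 1.) ** 2).mean() \
               + 100. * (F_net[0, 0] ** 2 + dF[-1, 0] ** 2)

# Reparameterization, Eq. (5): K(x) = B(x) F(x), with B(x) = x * exp(-x)
K = x * torch.exp(-x) * net(x)
_, d2K = grads(K, x)
loss_reparam = ((d2K + 1.) ** 2).mean()  # no BC penalty terms needed
```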
### Finite Differences (FDM)
To obtain the higher-order derivatives of the numerical output, we use finite differences in our NN learning process. Since the problems considered (see Section 3) are static problems, only the spatial domain is discretized. Solving a set of equations based on the Taylor expansion and ignoring the higher-order terms, the formulas for calculating the 1st, 2nd and 4th-order derivatives are:
\[\frac{dF}{dx}=\frac{F(x+h)-F(x-h)}{2h} \tag{7}\]
\[\frac{d^{2}F}{dx^{2}}=\frac{F(x+h)-2F(x)+F(x-h)}{h^{2}} \tag{8}\]
\[\frac{d^{4}F}{dx^{4}}=\frac{F(x+2h)-4F(x+h)+6F(x)-4F(x-h)+F(x-2h)}{h^{4}} \tag{9}\]
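A vectorized sketch of these stencils on a uniform grid of values; the central stencils cannot reach the boundary points, which are left to one-sided formulas or to the BC terms of the loss.

```python
import torch

def fd_derivatives(F, h):
    """Central-difference estimates of Eqs. (7)-(9) for grid values F of shape (n, 1)."""
    d1 = (F[2:] - F[:-2]) / (2 * h)                                           # Eq. (7)
    d2 = (F[2:] - 2 * F[1:-1] + F[:-2]) / h ** 2                              # Eq. (8)
    d4 = (F[4:] - 4 * F[3:-1] + 6 * F[2:-2] - 4 * F[1:-3] + F[:-4]) / h ** 4  # Eq. (9)
    return d1, d2, d4
```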
### Physics-informed Neural Network
Fig. 1 exhibits the general schematic of the PINN that we used to solve our DEs. The artificial neural network (ANN) is applied to approximate the mapping between \(x\) and \(u\) or \(w\), depending on the purpose of the problem. A differential operator is employed to numerically calculate the derivatives of the output. The BCs are handled through penalty functions, reparameterization, or both.
In this project, the NN is composed of two hidden layers with 128 neurons in each layer. The training is performed using Adam optimization with an initial learning rate of 0.001. Sigmoid and ReLU activation functions are applied to the hidden and output layers, respectively. For the accuracy of the comparison, the same hyperparameters of the neural network are applied to all test cases.
## 3 Problem Formulation
In this section, we present two benchmark mechanical problems to demonstrate the effect of reparameterized and penalty functions. The implementation detail for both approaches of the two case studies will be exhibited.
### 1D Bar problem
The first case study is a 1D bar problem, as described in Fig. 2. The left side of the bar is fixed to the wall, while the right side is free. A non-uniformly distributed load \(f(x)=x\) is applied along the central axis, and an external concentrated force \(P\) is applied at the right side of the bar. Assume the length of the bar is \(L\), the cross-sectional area is \(A\) and the elastic Young's modulus is \(E\) (a constant representing how easily a material can bend or stretch). The goal is to calculate the displacement of the bar along the x-axis. Based on beam theory, the governing equation for this problem can be represented as:
\[-\frac{d}{dx}(EA\frac{du}{dx})=f(x) \tag{10}\]
where the boundary conditions are \(u(x=0)=0\) and \(u^{\prime}(x=L)=0\), respectively. The first boundary condition states that the displacement at the fixed left end is zero, while the second states that the displacement gradient (strain) at the free right end is zero.
Figure 1: The general form of the PINN
Figure 2: 1D Bar
**Case 1: 1D Bar Reparameterized**
To naturally satisfy both boundary conditions, the reparameterized NN estimator of the first problem can be represented as:
\[U(x)=xe^{-x}NN(x)\text{; where NN(x) is the neural network} \tag{11}\]
Instead of finding the mapping between \(x\) and \(u\) directly, an additional factor \(xe^{-x}\) is applied so that the reparameterized function \(U(x)\) has the following properties: \(U(x=0)=0\) and \(U^{\prime}(x=L)=0\). According to this formulation, the NN loss function can be defined as:
\[Loss(x)=\left\|\frac{d^{2}U(x)}{dx^{2}}+\frac{f(x)}{EA}\right\|_{2} \tag{12}\]
where \(x\) is a vector of grid points uniformly sampled along the bar from \(0\) to \(L\).
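A runnable sketch of Case 1 under the assumptions \(E=A=L=1\), using the network and optimizer settings of Section 2.3 and the second-order stencil of Eq. (8); the number of grid points and training steps are illustrative.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 128), nn.Sigmoid(),
                    nn.Linear(128, 128), nn.Sigmoid(),
                    nn.Linear(128, 1), nn.ReLU())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

n = 100
h = 1.0 / (n - 1)
x = torch.linspace(0., 1., n).unsqueeze(1)  # grid points along the bar

for step in range(2000):
    U = x * torch.exp(-x) * net(x)                 # Eq. (11)
    d2U = (U[2:] - 2 * U[1:-1] + U[:-2]) / h ** 2  # Eq. (8)
    loss = ((d2U + x[1:-1]) ** 2).mean()           # Eq. (12), f(x) = x, EA = 1
    opt.zero_grad()
    loss.backward()
    opt.step()
```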
**Case 2: 1D Bar Penalty Function**
As stated in section 2, a penalty function is the most widely applied technique to handle boundary conditions of differential equations. Since the first case study involves two BCs, two penalty terms will be included in the loss function. Note that the reparameterized term is not considered in this scenario, thus the output \(U\) is calculated based on \(NN(x)\):
\[U(x)=NN(x) \tag{13}\]
The loss function based on the penalty function technique can be represented as:
\[Loss(x)=\left\|\frac{d^{2}NN(x)}{dx^{2}}+\frac{f(x)}{EA}\right\|_{2}+\lambda_{1}\left\|NN(x=0)\right\|_{2}+\lambda_{2}\left\|\left.\frac{dNN(x)}{dx}\right|_{x=L}-P\right\|_{2} \tag{14}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) represent the penalty coefficients. They have been set to 100 for this project based on the preliminary investigations.
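The corresponding sketch of the Case 2 loss, with \(\lambda_{1}=\lambda_{2}=100\) as stated; the value of \(P\) is an assumption (the text does not specify it), and a one-sided difference stands in for \(U^{\prime}(L)\).

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 128), nn.Sigmoid(),
                    nn.Linear(128, 128), nn.Sigmoid(),
                    nn.Linear(128, 1), nn.ReLU())
n = 100
h = 1.0 / (n - 1)
x = torch.linspace(0., 1., n).unsqueeze(1)

U = net(x)                                     # Eq. (13): no reparameterization
d2U = (U[2:] - 2 * U[1:-1] + U[:-2]) / h ** 2  # Eq. (8)
dU_L = (U[-1, 0] - U[-2, 0]) / h               # one-sided estimate of U'(L)
P, lam1, lam2 = 1.0, 100.0, 100.0              # P is an assumed value
loss = ((d2U + x[1:-1]) ** 2).mean() \
       + lam1 * U[0, 0] ** 2 + lam2 * (dU_L - P) ** 2  # Eq. (14)
```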
### 2D Bending Beam Problem
The second case study is to estimate the vertical deflection (the transverse displacement of the beam under load) of a beam under non-uniformly distributed loading, as seen in Fig. 3. Similar to the 1D bar case study, we assume the length of the beam is \(L\), the Young's modulus is \(E\), the second moment of inertia of the beam cross-section is \(I\) (a measure of how resistant a cross-section is to bending), and the non-uniformly distributed loading is \(f(x)=sin(x)\). Both sides of the beam are simply supported, so the beam is free to rotate and the bending moment at both ends is zero. Based on classical beam theory, the governing equation and the boundary conditions of this problem are:
\[EI\frac{d^{4}w(x)}{dx^{4}}+f(x)=0 \tag{15}\]
Subject to:
\[w(x=0)=0;w(x=L)=0;w^{\prime\prime}(x=0)=0;w^{\prime\prime}(x=L)=0 \tag{16}\]
Figure 3: 2D Bending Beam
**Case 3: 2D Beam Reparameterization**
In this 2D bending beam case, it is difficult to fully satisfy all the BCs using reparameterization, because zeroth-order and second-order boundary conditions must hold simultaneously. Thus, part of the BCs will be handled through penalty functions. The partially reparameterized NN approximator can be represented as:
\[W(x)=sin(\pi\frac{x}{L})NN(x) \tag{17}\]
The corresponding loss function in this case is:
\[Loss(x)=\left\|\frac{d^{4}W(x)}{dx^{4}}+\frac{f(x)}{EI}\right\|_{2}+\lambda_{1}\left\|\left.\frac{d^{2}W(x)}{dx^{2}}\right|_{x=0}\right\|_{2}+\lambda_{2}\left\|\left.\frac{d^{2}W(x)}{dx^{2}}\right|_{x=L}\right\|_{2} \tag{18}\]
where \(W(x)\) is the vertical deflection estimation. The first term in Eq. 18 shows the residual of the governing Eq. 15, and the last two terms represent the residual of the second-order BCs in Eq.16.
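A sketch of the Case 3 loss under the assumptions \(E=I=L=1\) and \(\lambda_{1}=\lambda_{2}=100\); the second-derivative estimates nearest the two ends stand in for the boundary values \(W^{\prime\prime}(0)\) and \(W^{\prime\prime}(L)\), since the central stencil of Eq. (8) does not reach the end points.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 128), nn.Sigmoid(),
                    nn.Linear(128, 128), nn.Sigmoid(),
                    nn.Linear(128, 1), nn.ReLU())
n = 100
h = 1.0 / (n - 1)
x = torch.linspace(0., 1., n).unsqueeze(1)

W = torch.sin(torch.pi * x) * net(x)           # Eq. (17) with L = 1
d2W = (W[2:] - 2 * W[1:-1] + W[:-2]) / h ** 2  # Eq. (8)
d4W = (W[4:] - 4 * W[3:-1] + 6 * W[2:-2]
       - 4 * W[1:-3] + W[:-4]) / h ** 4        # Eq. (9)
loss = ((d4W + torch.sin(x[2:-2])) ** 2).mean() \
       + 100. * d2W[0, 0] ** 2 + 100. * d2W[-1, 0] ** 2  # Eq. (18)
```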
**Case 4: 2D Beam Penalty Function**
Similar to case 2, no reparameterization is considered in this case, and the loss function is defined based on the residuals of the governing equation and BCs. Therefore, the loss function of this 2D beam problem without reparameterization can be represented as:
\[Loss(x)=\left\|\frac{d^{4}NN(x)}{dx^{4}}+\frac{f(x)}{EI}\right\|_{2}+\lambda_{1}\left\|\left.NN(x)\right|_{x=0,x=L}\right\|_{2}+\lambda_{2}\left\|\left.\frac{d^{2}NN(x)}{dx^{2}}\right|_{x=0,x=L}\right\|_{2} \tag{19}\]
where \(NN(x)\) is the vertical deflection estimation and the last two terms in Eq. 19 include all the BCs in Eq. 16.
## 4 Results
According to the loss functions defined in Section 3, the NNs have been trained and the results are shown below. Fig. 4(a) and Fig. 4(b) show the approximated displacement versus the analytical result at each node point for case 1 and case 2, respectively. It can be observed that the PINN shows good performance in both cases: the approximation errors in cases 1 and 2 are 0.8063% and 0.8136%, respectively. This indicates that the difference between the reparameterization and the penalty function method is not significant in the 1D bar case. Since only the first and second-order derivatives are included in the governing equation and BCs, it is easy for the neural network to estimate the target equations.
However, when we remove the reparameterization in the 2D beam problem, the approximation noticeably deviates from the exact solution. Similar to Fig. 4(a) and Fig. 4(b), Fig. 5(a) and Fig. 5(b) show
Figure 4: 1D Bar Test Results
that the approximated vertical deflection points are close to the exact solution. It can easily be noticed that the result in case 3 shows zero deviation at both ends of the beam (\(x=0\) and \(x=L\)). On the other hand, the penalty function does not manage to set the BCs to exactly zero and therefore shows small deviations at both ends. This also affects the intermediate approximations: the first and last 30% of the approximated curve in Fig. 5(b) show a large error. This is extremely important in real-life physics and engineering, where errors in initial conditions may be amplified by further steps. The approximation accuracy can be significantly improved even though the output is only partially reparameterized. The importance of reparameterization can also be observed from the approximation errors in both cases: 2.7813% and 12.9187% for cases 3 and 4, respectively. In the case of the beam problem, the errors may come from the second-order BCs and the fourth-order governing equation. Here, FDM may not be suitable for dealing with higher-order derivatives, which can be investigated in future studies.
## 5 Conclusion
In this paper, a physics-informed neural network is applied to solve two mechanical engineering benchmark problems involving different differential equations. The effect of reparameterization has been discussed based on the definition of the loss functions for both cases. The hyperparameters of the PINN are the same for all test cases. Based on the results in Section 4, we can conclude that:
1. Reparameterization shows a similar level of accuracy to the penalty function when the underlying DE is simple, evident in cases 1 and 2;
2. When dealing with difficult DEs, reparameterization dominates the penalty function in terms of approximation accuracy, evident in comparing cases 1 and 2 with cases 3 and 4;
3. Even partial reparameterization can significantly improve the PINN's overall performance, as evident in case 3.
A strength of our contribution is that we filled a research gap; no researchers had investigated the effect of reparameterization on the approximated result. A weakness of our findings is the use of only ODEs and the lack of variety in our case studies. Our case studies only use ODEs, so our findings do not encompass PDEs. In addition, we only used two DEs, which is not a very large sample size; with so few DEs, our results are biased towards a small subset of DEs. Given more time, we would like to address these weaknesses by testing PDEs and using many more DEs, which would provide more evidence for or against our findings and make them more reputable. Finally, future work can be placed on how to incorporate more accurate differential operators into PINNs. In addition, more research needs to go into creating efficient ways to perform reparameterization according to different BCs, because each NN needs to be reparameterized using a
Figure 5: 2D Beam Test Results
different function based on its BCs, and there is no systematic way of finding a function that will ensure the BCs are satisfied.
|
2306.04252 | Adversarial Sample Detection Through Neural Network Transport Dynamics | We propose a detector of adversarial samples that is based on the view of
neural networks as discrete dynamic systems. The detector tells clean inputs
from abnormal ones by comparing the discrete vector fields they follow through
the layers. We also show that regularizing this vector field during training
makes the network more regular on the data distribution's support, thus making
the activations of clean inputs more distinguishable from those of abnormal
ones. Experimentally, we compare our detector favorably to other detectors on
seen and unseen attacks, and show that the regularization of the network's
dynamics improves the performance of adversarial detectors that use the
internal embeddings as inputs, while also improving test accuracy. | Skander Karkar, Patrick Gallinari, Alain Rakotomamonjy | 2023-06-07T08:47:41Z | http://arxiv.org/abs/2306.04252v2 | # Adversarial Sample Detection Through Neural Network Transport Dynamics
###### Abstract
We propose a detector of adversarial samples that is based on the view of neural networks as discrete dynamic systems. The detector tells clean inputs from abnormal ones by comparing the discrete vector fields they follow through the layers. We also show that regularizing this vector field during training makes the network more regular on the data distribution's support, thus making the activations of clean inputs more distinguishable from those of abnormal ones. Experimentally, we compare our detector favorably to other detectors on seen and unseen attacks, and show that the regularization of the network's dynamics improves the performance of adversarial detectors that use the internal embeddings as inputs, while also improving test accuracy.
Keywords:Deep learning Adversarial detection Optimal transport
## 1 Introduction
Neural networks have improved performances on many tasks, including image classification. They are however vulnerable to adversarial attacks which modify an image in a way that is imperceptible to a human but that fools the network into wrongly classifying the image [50]. These adversarial images transfer between networks [39], can be carried out physically (e.g. causing autonomous cars to misclassify road signs [15]), and can be generated without access to the network [34]. Developing networks that are robust to adversarial samples or accompanied by detectors that can detect them is indispensable to deploying them safely [3].
We focus on detecting adversarial samples. Networks trained with a softmax classifier produce overconfident predictions even for out-of-distribution inputs [42]. This makes it difficult to detect such inputs via the softmax outputs. A detector is a system capable of predicting if an input at test time has been adversarially modified. Detectors are trained on a dataset made up of clean and adversarial inputs, after the network training. While simply training the detector on the inputs has been tried, using their intermediate embeddings works better [9]. Detectors vary by which activations to use and how to process them to extract the features that the classifier uses to tell clean samples from adversarial ones.
We make two contributions. First, we propose an adversarial detector that is based on the view of neural networks as dynamical systems that move inputs
in space, time represented by depth, to separate them before applying a linear classifier [55]. Our detector follows the trajectory of samples in space, through time, to differentiate clean and adversarial images. The statistics that we extract are the positions of the internal embeddings in space approximated by their norms and cosines to a fixed vector. Given their resemblance to the Euler scheme for differential equations, residual networks [21, 22, 55] are particularly amenable to this analysis. Skip connections and residuals are basic building blocks in many architectures such as EfficientNet [51] and MobileNetV2 [47], and ResNets and their variants such as WideResNet [60] and ResNeXt [59] remain competitive [57]. Vision Transformers [35, 14] are also mainly made up of residual stages. Besides, [58] show an increased vulnerability of residual-type architectures to transferable attacks, precisely because of the skip connections. This motivates the need for a detector that is well adapted to residual-type architectures. But the analysis and implementation can extend immediately to any network where most layers have the same input and output dimensions.
Our second contribution is to use the transport regularization proposed in [26] during training to make the activations of adversarial samples more distinguishable from those of clean samples, thus making adversarial detectors perform better, while also improving generalization. We prove that the regularization achieves this by making the network more regular on the support of the data distribution. This does not necessarily make it more robust, but it will make the activations of the clean samples closer to each other and further from those of out-of-distribution samples, thus making adversarial detection easier. This is illustrated on a 2-dimensional example in Figure 1.
## 2 Related Work
Given a classifier \(f\) in a classification task and \(\epsilon\)\(>\)0, an adversarial sample \(y\) constructed from a clean sample \(x\) is \(y=x+\delta\), such that \(f(y)\neq f(x)\) and \(\|\delta\|_{p}\leq\epsilon\) for a certain \(L_{p}\) norm. The maximal perturbation size \(\epsilon\) has to be so small as to be almost imperceptible to a human. Adversarial attacks are algorithms that find such adversarial samples, and they have been particularly successful against neural networks [50, 8]. We present the adversarial attacks we use in our experiments in Appendix 0.D.1. The main defense mechanisms are robustness, i.e. training a network that is not easily fooled by adversarial samples, and having a detector of these samples.
An early idea for detection was to use a second network [38]. However, this network can also be adversarially attacked. More recent statistical approaches include LID [36], which trains the detector on the local intrinsic dimensionality of activations approximated over a batch, and the Mahalanobis detector [33], which trains the detector on the Mahalanobis distances between the activations and a Gaussian fitted to them during training, assuming they are normally distributed. Our detector is not a statistical approach and does not need batch-level statistics, nor statistics from the training data. Detectors trained in the Fourier domain of activations have also been proposed in [20]. See [1] for a review.
Our second contribution is to regularize the network in a way that makes it Holder-continuous, but only on the data distribution's support. Estimations of the Lipschitz constant of a network have been used as estimates of its robustness to adversarial samples in [56; 50; 54; 23], and making the network more Lipschitz (e.g. by penalizing an upper bound on its Lipschitz constant) has been used to make it more robust (i.e. less likely to be fooled) in [23; 11]. These regularizations often work directly on the weights of the network, therefore making it more regular on all the input space. The difference with our method is that we only endue the network with regularity on the support of the clean data. This won't make it more robust to adversarial samples, but it makes its behavior on them more distinguishable, since they tend to lie outside the data manifold.
That adversarial samples lie outside the data manifold, particularly in its co-dimensions, is a common observation and explanation for why adversarial samples are easy to find in high dimensions [18; 52; 49; 36; 46; 29; 2; 16]. To the best of our knowledge, [44] is the only other method that attempts to improve detection
Figure 1: Transformed circles test set from scikit-learn (red and blue) and out-of-distribution points (green) after blocks 6 and 9 of a small ResNet with 9 blocks. In the second row, we add our proposed regularization during training, which makes the movements of the clean points (red and blue) more similar to each other and more different from the movements of the green out-of-distribution points than when using the vanilla network in the first row. In particular, without the regularization, the green points are closer to the clean red points after blocks 6 and 9 which is undesirable.
by encouraging the network during training to learn representations that are more different between clean and adversarial samples. They do this by replacing cross-entropy by a reverse cross-entropy that encourages uniform softmax outputs among the non-predicted classes. We find that our regularization leads to better classification accuracy and adversarial detection than this method.
## 3 Background
Our detector is based on the dynamic viewpoint of neural networks that followed from the analogy between ResNets and the Euler scheme made in [55]. We present this analogy in Section 3.2. The regularization we use was proposed in [26] to improve generalization and we also present it in Section 3.2. The regularity results that follow from this regularization require the use of optimal transport theory, which we present in Section 3.1.
### Optimal Transport
Let \(\alpha\) and \(\beta\) be absolutely continuous densities on a compact set \(\Omega\subset\mathbb{R}^{d}\). The Monge problem is to look for \(T\):\(\mathbb{R}^{d}\)\(\rightarrow\)\(\mathbb{R}^{d}\) moving \(\alpha\) to \(\beta\), i.e. \(T_{\sharp}\alpha\)=\(\beta\), with minimal transport cost:
\[\min_{T\text{ s.t. }T_{\sharp}\alpha=\beta}\int_{\Omega}\|T(x)-x\|_{2}^{2}\, \mathrm{d}\alpha(x) \tag{1}\]
and this problem has a unique solution \(T^{\star}\). An equivalent formulation of the Monge problem in this setting is the dynamical formulation. Here, instead of directly pushing points from \(\alpha\) to \(\beta\) through \(T\), we continuously displace mass from time 0 to 1 according to velocity field \(v_{t}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\). We denote \(\phi_{t}^{x}\) the position at time \(t\) of the particle that was at \(x\sim\alpha\) at time 0. This position evolves according to \(\partial_{t}\phi_{t}^{x}=v_{t}(\phi_{t}^{x})\). Rewriting the constraint, Problem (1) is equivalent to the dynamical formulation:
\[\min_{v}\int_{0}^{1}\|v_{t}\|_{L^{2}((\phi_{t})_{\sharp}\alpha)}^{2}\,\mathrm{d}t \tag{2}\] \[\text{s.t. }\partial_{t}\phi_{t}^{x}=v_{t}(\phi_{t}^{x})\text{ for }x\in\text{support}(\alpha)\text{ and }t\in[0,1[\] \[\phi_{0}=\mathrm{id},\ (\phi_{1})_{\sharp}\alpha=\beta\]
### Least Action Principle Residual Networks
A residual stage made up of \(M\) residual blocks applies \(x_{m+1}=x_{m}+hr_{m}(x_{m})\) for \(0\leq m<M\), with \(x_{0}\) being the input and \(h\)=1 in practice. The final point \(x_{M}\) is then classified by a linear layer \(F\). The dynamic view considers a residual network as an Euler discretization of a differential equation:
\[x_{m+1}=x_{m}+hr_{m}(x_{m})\;\longleftrightarrow\;\partial_{t}x_{t}=v_{t}( x_{t}) \tag{3}\]
where \(r_{m}\) approximates the vector field \(v_{t}\) at time \(t=m/M\). The dynamic view allows to consider that ResNets are transporting their inputs in space by following a vector field to separate them, the depth representing time, before classification by a linear layer. [26] look for a network \(F\circ T\) that solves the task while having minimal transport cost:
\[\begin{split}\inf_{T,F}&\quad\int_{\Omega}\|T(x)-x \|_{2}^{2}\,\mathrm{d}\alpha(x)\\ \mathrm{s.t.}&\quad\mathcal{L}(F,T_{\sharp}\alpha)=0 \end{split} \tag{4}\]
where \(T\) is made up of the \(M\) residual blocks, \(\alpha\) is the data distribution, \(F\) is the classification head and \(\mathcal{L}(F,T_{\sharp}\alpha)\) is the (cross-entropy) loss obtained from classifying the transformed data distribution \(T_{\sharp}\alpha\) through \(F\). Given Section 3.1, the corresponding dynamical version of (4) is
\[\inf_{v,F}\quad\int_{0}^{1}\|v_{t}\|_{L^{2}((\phi_{t})_{\sharp}\alpha)}^{2}\;\mathrm{d}t \tag{5}\] \[\mathrm{s.t.}\quad\partial_{t}\phi_{t}^{x}=v_{t}(\phi_{t}^{x})\text{ for }x\in\mathrm{support}(\alpha)\text{ and }t\in[0,1[\] \[\phi_{0}=\mathrm{id},\;\;\mathcal{L}(F,(\phi_{1})_{\sharp}\alpha)=0\]
[26] show that (4) and (5) are equivalent and have a solution such that \(T\) is an optimal transport map. In practice, (5) is discretized using a sample \(\mathcal{D}\) from \(\alpha\) and an Euler scheme, which gives a residual architecture with residual blocks \(r_{m}\) (parametrized along with the classifier by \(\theta\)) that approximate \(v\). This gives the following problem
\[\min_{\theta}\quad\mathcal{C}(\theta)=\sum_{x\in\mathcal{D}}\;\sum_{m=0}^{M-1}\|r_{m}(\varphi_{m}^{x})\|_{2}^{2} \tag{6}\] \[\mathrm{s.t.}\quad\varphi_{m+1}^{x}=\varphi_{m}^{x}+hr_{m}(\varphi_{m}^{x}),\;\varphi_{0}^{x}=x\;\;\forall\;x\in\mathcal{D}\] \[\mathcal{L}(\theta)=0\]
In practice, we solve Problem (6) using a method of multipliers (see Section 4.2). Our contribution is to show, theoretically and experimentally, that this makes adversarial examples easier to detect.
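In code, the discretized transport cost \(\mathcal{C}(\theta)\) of Problem (6) can be accumulated during the forward pass. The following is a minimal PyTorch sketch in which the residual blocks and the classification head are arbitrary modules and per-sample costs are averaged over the batch; it illustrates the objective under these assumptions, not the authors' implementation.

```python
import torch.nn as nn

class TransportCostResNet(nn.Module):
    """Residual network that also returns the transport cost of Problem (6)."""
    def __init__(self, blocks, head, h=1.0):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        self.head = head
        self.h = h

    def forward(self, x):
        cost = 0.0
        for r in self.blocks:
            res = r(x)                                         # r_m(x_m)
            cost = cost + res.pow(2).flatten(1).sum(1).mean()  # sum_m ||r_m(x_m)||^2
            x = x + self.h * res                               # Euler step
        return self.head(x), cost
```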
## 4 Method
We take the view that a ResNet moves its inputs through a discrete vector field to separate them, points in the same class having similar trajectories. Heuristically, for a successful adversarial sample that is close to clean samples, the vector field it follows has to be different at some step from that of the clean samples, so that it joins the trajectory of the points in another class. In Section 4.1, we present how we detect adversarial samples by considering these trajectories. In Section 4.2, we apply the transport regularization by solving (6) to improve detectability of adversarial samples.
### Detection
Given a network that applies \(x_{m+1}=x_{m}+hr_{m}(x_{m})\) to an input \(x_{0}\) for \(0\leq m{<}M\), we consider the embeddings \(x_{m}\) for \(0{<}m\leq M\), or the residues \(r_{m}(x_{m})\) for \(0\leq m{<}M\). To describe their positions in space, we take their norms and their cosine similarities with a fixed vector as features to train our adversarial detector on. Using only the norms already gave good detection accuracy. Cosines to other orthogonal vectors can be added to better locate the points at the price of increasing the number of features. We found that using only one vector already gives state-of-the-art detection, so we only use the norms and cosines to a fixed vector of ones. We train the detector (a random forest in practice, see Section 5.2) on these features. The embeddings \(x_{m}\) and the residues \(r_{m}(x_{m})\) can equivalently describe the trajectory of \(x_{0}\) in space through the blocks. In practice, we use the residues \(r_{m}(x_{m})\), with their norms squared and averaged. So the feature vector given to the random forest for each input \(x_{0}\) that goes through a network that applies \(x_{m+1}=x_{m}+h\ r_{m}(x_{m})\) is
\[\left(\frac{1}{d_{m}}\|r_{m}(x_{m})\|_{2}^{2},\ \cos\Bigl{(}r_{m}(x_{m}), \mathbf{1}_{m}\Bigr{)}\right)_{0\leq m<M} \tag{7}\]
and the label is 0 if \(x_{0}\) is clean and 1 if it is adversarial. Here \(\cos\) is the cosine similarity between two vectors and \(\mathbf{1}_{m}\) is a vector of ones of size \(d_{m}\) where \(d_{m}\) is the size of \(r_{m}(x_{m})\). For any non-residual architecture \(x_{m+1}=g_{m}(x_{m})\), the vector \(x_{m+1}{-}x_{m}\) can be used instead of \(r_{m}(x_{m})\) on layers that have the same input and output dimension, allowing to apply the method to any network with many such layers. And we do test the detector on a ResNeXt, which does not fully satisfy the dynamic view, as the activation is applied after the skip-connection, i.e. \(x_{m+1}=\text{ReLU}(x_{m}+h\ r_{m}(x_{m}))\).
The number of features is twice that of residual blocks (a norm and a cosine per block). This is of the same order as for other popular detectors such as Mahalanobis [33] and LID [36] that extract one feature per residual stage (a residual stage is a group of blocks that keep the same dimension). Even for common large architectures, twice the number of residual blocks is still a small number of features for training a binary classifier (ResNet152 has 50 blocks). More importantly, the features we extract (norms and cosines) are quick to calculate, whereas those of other methods require involved statistical computations on the activations. We include in Appendix 0.D.10 a favorable time comparison of our detector to the Mahalanobis detector. Another advantage is that our detector does not have a hyper-parameter to tune unlike the Mahalanobis and LID detectors.
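A sketch of the feature extraction of Eq. (7) for one input; the residues \(r_{m}(x_{m})\) are assumed to have been collected already, e.g. with forward hooks on the residual blocks. The detector itself can then be, for example, a scikit-learn random forest fitted on the stacked feature vectors with labels 0 (clean) and 1 (adversarial).

```python
import torch

def trajectory_features(residues):
    """Eq. (7): per-block mean squared residue norm and cosine to a ones vector."""
    feats = []
    for r in residues:                      # r: residue r_m(x_m) of one input
        r = r.flatten().float()
        feats.append(r.pow(2).mean())       # ||r_m(x_m)||_2^2 / d_m
        feats.append(torch.nn.functional.cosine_similarity(
            r, torch.ones_like(r), dim=0))  # cos(r_m(x_m), 1_m)
    return torch.stack(feats)

# The detector is then trained on these vectors, e.g. with
# sklearn.ensemble.RandomForestClassifier().fit(X, y).
```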
### Regularization
Regularity of neural networks (typically Lipschitz continuity) has been used as a measure of their robustness to adversarial samples [56, 50, 54, 23, 11]. Indeed, the smaller the Lipschitz constant \(L\) of a function \(f\) satisfying \(\|f(x)-f(y)\|\leq L\|x-y\|\), the less \(f\) changes its output \(f(y)\) for a perturbation (adversarial or
not) \(y\) of \(x\). Regularizing a network to make it more Lipschitz and more robust has therefore been tried in [23] and [11]. For this to work, the regularization has to apply to adversarial points, i.e. outside the support of the clean data distribution. Indeed, the Lipschitz continuity obtained through most of these methods and analyses applies on the entire input space \(\mathbb{R}^{d}\), as they penalize the network's weights directly. Likewise, a small step size \(h\) as in [62] will have the same effect on all inputs, clean or not.
We propose here an alternative approach where we regularize the network only on the support of the input distribution, making it \(\eta\)-Holder on this support (a function \(f\) is \(\eta\)-Holder on \(X\) if \(\forall\ a,b\in X\), we have \(\|f(a)-f(b)\|\leq C\|a-b\|^{\eta}\) for some constants \(C{>}0\) and \(0{<}\eta{\leq}1\), and we denote this \(f\in\mathcal{C}^{0,\eta}(X))\). Since this result does not apply outside the input distribution's support, particularly in the adversarial spaces, then this regularity that only applies to clean samples can serve to make adversarial samples more distinguishable from clean ones, and therefore easier to detect. We show experimentally that the behavior of the network will be more distinguishable between clean and adversarial samples in practice in Section 5.1. We discuss the implementation of the regularization in Section 4.2 and prove the regularity it endues the network with in Section 4.2.
#### 4.2.1 Implementation.
We regularize the trajectory of the samples by solving Problem (6). This means finding, among the networks that solve the task (condition \(\mathcal{L}(\theta)=0\) in (6)), the network that moves the points the least, that is the one with minimal kinetic energy \(\mathcal{C}\). The residual functions \(r_{m}\) we find are then our approximation of the vector field \(v\) that solves the continuous version (5) of Problem (6).
We solve Problem (6) via a method of multipliers: since \(\mathcal{L}\geq 0\), Problem (6) is equivalent to the min-max problem \(\min_{\theta}\max_{\lambda>0}\ \mathcal{C}(\theta)+\lambda\ \mathcal{L}(\theta)\), which we solve, given growth factor \(\tau>0\), and starting from initial weight given to the loss \(\lambda_{0}\) and initial parameters \(\theta_{0}\), through
\[\begin{cases}\theta_{i+1}=\arg\min_{\theta}\ \mathcal{C}(\theta)+\lambda_{i}\ \mathcal{L}(\theta)\\ \lambda_{i+1}=\lambda_{i}+\tau\ \mathcal{L}(\theta_{i+1})\end{cases} \tag{8}\]
We use SGD for \(s{>}0\) steps (i.e. batches) for the minimization in the first line of (8), starting from the previous \(\theta_{i}\). When using a ResNeXt, where a residual block applies \(x_{m+1}=\text{ReLU}(x_{m}+r_{m}(x_{m}))\), we regularize the norms of the true residues \(x_{m+1}{-}x_{m}\) instead of \(r_{m}(x_{m})\).
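A sketch of scheme (8); the interface in which the model returns both the logits and the accumulated transport cost (as in the sketch after Problem (6)) is our assumption, and the hyperparameter values are placeholders.

```python
import itertools
import torch
import torch.nn.functional as F

def train_with_multipliers(model, loader, lam0=1.0, tau=0.1, s=100, outer=50, lr=0.1):
    """Method of multipliers, scheme (8): s SGD steps on C + lam * L in theta,
    then a dual update of lam with growth factor tau."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    lam = lam0
    batches = itertools.cycle(loader)
    for _ in range(outer):
        for _ in range(s):                     # approximate argmin over theta
            x, y = next(batches)
            logits, cost = model(x)            # cost: sum_m ||r_m(x_m)||^2
            task_loss = F.cross_entropy(logits, y)
            opt.zero_grad()
            (cost + lam * task_loss).backward()
            opt.step()
        lam += tau * task_loss.item()          # lam_{i+1} = lam_i + tau * L(theta_{i+1})
    return model
```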
#### 4.2.2 Theoretical Analysis.
We take \(\Omega{\subset}\mathbb{R}^{d}\) convex and compact and the data distribution \(\alpha{\in}\mathcal{P}(\Omega)\) absolutely continuous such that \(\delta\Omega\) is \(\alpha\)-negligible. We suppose that there exists an open bounded convex set \(X{\subset}\Omega\) such that \(\alpha\) is bounded away from zero and infinity on \(X\) and is zero on \(X^{\complement}\). From [26], Problems (4) and (5) are equivalent and have solutions \((T,F)\) and \((v,F)\) such that \(T\) is an optimal transport map between \(\alpha\) and \(\beta{:=}T_{\sharp}\alpha\). We suppose that \(\beta\) is absolutely continuous and that there exists an open bounded convex set \(Y{\subset}\Omega\) such that \(\beta\)
is bounded away from zero and infinity on \(Y\) and is zero on \(Y^{\complement}\). In the rest of this section, \(v\) solves (5) and we suppose that we find a solution to the discretized problem (6) that is an \(\varepsilon/2\)-approximation of \(v\), i.e. \(\|r_{m}-v_{t_{m}}\|_{\infty}{\leq}\varepsilon/2\) for all \(0{\leq}m{<}M\), with \(t_{m}{=}m/M\).
Definition 1: A function \(f\) is \(\eta\)-Holder on \(X\) if \(\forall\ a,b\in X\), we have \(\|f(a)-f(b)\|\leq C\|a-b\|^{\eta}\) for some constants \(C{>}0\) and \(0{<}\eta{\leq}1\). We denote this \(f\in\mathcal{C}^{0,\eta}(X)\).
In Theorem 3.1, we show that the regularization makes the residual blocks of the network \(\eta\)-Holder (with an error of \(\varepsilon\)) on the support of the input distribution as it moves according to the theoretical vector field solution \(v\). The results hold for all norms on \(\mathbb{R}^{d}\).
Theorem 3.1: _For \(a,b\in\text{support}(\alpha_{t_{m}})\), \(\alpha_{t}{:=}(\phi_{t})_{\sharp}\alpha\) where \(\phi\) solves (5) along with \(v\), we have_
\[\|r_{m}(a)-r_{m}(b)\|\leq\varepsilon+K\|a-b\|^{\zeta_{1}}\text{ if }\|a-b\|\leq 1\] \[\|r_{m}(a)-r_{m}(b)\|\leq\varepsilon+K\|a-b\|^{\zeta_{2}}\text{ if }\|a-b\|>1\]
_for constants \(K>0\) and \(0<\zeta_{1}\leq\zeta_{2}\leq 1\)._
Proof: The detailed proof is in Appendix C.1. First, we have that \(v_{t}=(T-\texttt{id})\circ T_{t}^{-1}\) where \(T_{t}:=(1-t)\texttt{id}+tT\) and \(T\) solves (4). Being an optimal transport map, \(T\) is \(\eta\)-Holder. So for all \(a,b\in\text{support}(\alpha_{t})\) and \(t\in[0,1[\), where \(\alpha_{t}=(\phi_{t})_{\sharp}\alpha=(T_{t})_{\sharp}\alpha\) with \(\phi\) solving (5) with \(v\), we have
\[\|v_{t}(a)-v_{t}(b)\|\leq\|T_{t}^{-1}(a)-T_{t}^{-1}(b)\|+C\|T_{t}^{-1}(a)-T_{t} ^{-1}(b)\|^{\eta} \tag{9}\]
We then show that \(T_{t}^{-1}\) is an optimal transport map and so is \(\eta_{t}\)-Holder with \(0{<}\eta_{t}{\leq}1\). Using the hypothesis on \(r\) and the triangle inequality, we get, for all \(a,b\in\text{support}(\alpha_{t_{m}})\)
\[\|r_{m}(a)-r_{m}(b)\|\leq\ \varepsilon+C_{t_{m}}\|a-b\|^{\eta_{t_{m}}}+CC_{t_{m}} ^{\eta}\|a-b\|^{\eta\eta_{t_{m}}} \tag{10}\]
The constants \(K\), \(\zeta_{1}\) and \(\zeta_{2}\) are then set accordingly, which concludes the proof.
We now use Theorem 3.1 to bound the distance between the residues at depth \(m\) as a function of the distance between the network's inputs. For inputs \(a_{0}\) and \(b_{0}\) to the network, the intermediate embeddings are \(a_{m+1}=a_{m}+hr_{m}(a_{m})\) and \(b_{m+1}=b_{m}+hr_{m}(b_{m})\), and the residues used to compute features for adversarial detection are \(r_{m}(a_{m})\) and \(r_{m}(b_{m})\). So we want to bound \(\|r_{m}(a_{m})-r_{m}(b_{m})\|\) as a function of \(\|a_{0}-b_{0}\|\). This is usually done by multiplying the Lipschitz constants of each block up to depth \(m\), which leads to an overestimation [24], or through more complex estimation algorithms [54, 32, 6]. Bound (9) allows, through \(T_{t}^{-1}\), to avoid multiplying the Holder constants of the blocks. If \(a_{0}\) and \(b_{0}\) are on the clean data support \(X\), we get Theorem 4.1 below, with proof in Appendix C.2.
Theorem 4.1: _For \(a_{0},b_{0}\in X\) and constants \(C,L{>}0\),_
\[\|r_{m}(a_{m})-r_{m}(b_{m})\|\leq\varepsilon+\|a_{0}-b_{0}\|+C\|a_{0}-b_{0}\|^{\eta}+L\left(\|a_{m}-\phi_{t_{m}}^{a_{0}}\|+\|b_{m}-\phi_{t_{m}}^{b_{0}}\|\right)\]
Term \(\mu(a_{0}){:=}\|a_{m}{-}\phi_{t_{m}}^{a_{0}}\|\) (and \(\mu(b_{0}){:=}\|b_{m}{-}\phi_{t_{m}}^{b_{0}}\|\)) is the distance between the point \(a_{m}\) after \(m\) residual blocks and the point \(\phi_{t_{m}}^{a_{0}}\) obtained by following the theoretical solution vector field \(v\) up to time \(t_{m}\) starting from \(a_{0}\). If \(a_{0}\) and \(b_{0}\) are not on the data support \(X\), an extra term has to be introduced to use bound (9). Bounding the terms \(\mu(a_{0})\) and \(\mu(b_{0})\) is possible under more regularity assumptions on \(v\). We then assume that \(v\) is \(\mathcal{C}^{1}\) and Lipschitz in \(x\), which is not stronger than the regularity we obtain on \(v\) through our regularization, as the latter does not give a result similar to bound (9). For all inputs \(a_{0}\) and \(b_{0}\), whether clean or not, we then have Theorem 4.2 below, with proof in Appendix C.2.
Theorem 4.2: _For \(a_{0},b_{0}\in\mathbb{R}^{d}\) and constants \(R,S>0\),_
\[\|r_{m}(a_{m})-r_{m}(b_{m})\|\leq\varepsilon+LS\varepsilon+LSRh+\|a_{0}-b_{0}\|+C\|a_{0}-b_{0}\|^{\eta}+LS\left(\mathrm{dist}(a_{0},X)+\mathrm{dist}(b_{0},X)\right)\]
Terms \(\mathrm{dist}(a_{0},X)\) and \(\mathrm{dist}(b_{0},X)\) show that the regularity guarantee is increased for inputs in \(X\). The trajectories of clean points are then closer to each other and more different from those of abnormal samples outside \(X\).
## 5 Experiments
We evaluate our method on adversarial samples found by 8 attacks. The threat model is as follows. We use 6 white-box attacks that can access the network, its weights and its architecture, but not its training data: FGM [19], BIM [31], DF [40], CW [8], AutoAttack (AA) [13] and the Auto-PGD-CE (APGD) variant of PGD [37], and 2 black-box attacks that only query the network: HSJ [10] and BA [7]. We assume the attacker has no knowledge of the detector and use the untargeted (i.e. not trying to direct the mistake towards a particular class) versions of the attacks. We use a maximal perturbation of \(\epsilon{=}0.03\) for FGM, APGD, BIM and AA. We use the \(L_{2}\) norm for CW and HSJ, and \(L_{\infty}\) for the other attacks. We compare our detector (which we call the Transport detector, or TR) to the Mahalanobis detector (MH in the tables below) of [33] and to the detector of [28, 27] that uses natural scene statistics (NS in the tables below), and our regularization to the reverse cross-entropy training of [44], which is also meant to improve detection of adversarial samples. We use ART [43] and its default hyper-parameter values (except those specified) to generate the adversarial samples, except for AA, for which we use the authors' original code. The code is available at github.com/skander-karkar/adv. See Appendix D.1 for more details.
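As an illustration of how such samples can be produced, the sketch below uses ART to generate untargeted FGM samples. The `model` and `x_test` objects are assumed to be a trained PyTorch network and a float32 image batch in \([0,1]\); the hyper-parameters follow the text, and this is only a sketch, not the exact generation script.

```
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Wrap the (assumed) trained PyTorch model as an ART classifier.
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Untargeted FGM with L-infinity perturbation epsilon = 0.03, as in the text.
attack = FastGradientMethod(estimator=classifier, norm=np.inf, eps=0.03)
x_adv = attack.generate(x=x_test)  # adversarial batch, same shape as x_test
```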
We use 3 networks and datasets: ResNeXt50 on CIFAR100, ResNet110 on CIFAR10 and WideResNet on TinyImageNet. Each network is trained normally
with cross entropy, with the transport regularization added to cross entropy (called a LAP-network, for Least Action Principle), and with reverse cross entropy instead of cross entropy (called an RCE-network). For LAP training, we use (8) with \(\tau\)=1, \(s\)=1 and \(\lambda_{0}\)=1 for all networks. These hyper-parameters are chosen to improve validation accuracy during training, not adversarial detection. Training details are in Appendix D.2.
In Section 5.1, we conduct preliminary experiments to show that LAP training improves generalization and stability, and increases the difference between the transport costs of clean and adversarial samples. In Section 5.2, we test our detector when it is trained and tested on samples generated by the same attack. In Section 5.3, we test our detector when it is trained on samples generated by FGM and tested on samples from the other attacks. We then consider OOD detection and adaptive attacks on the detector.
### Preliminary Experiments
Our results confirm those in [26] showing that LAP training improves test accuracy. Vanilla ResNeXt50 has an accuracy of 74.38% on CIFAR100, while LAP-ResNeXt50 reaches 77.2%. Vanilla ResNet110 has an accuracy of 92.52% on CIFAR10, while LAP-ResNet110 reaches 93.52% and RCE-ResNet110 93.1%. Vanilla WideResNet has an accuracy of 65.14% on TinyImageNet, while LAP-WideResNet reaches 65.34%. LAP training is also more stable: it allows training deep networks without batch-normalization, as shown in Figure 4 in Appendix D.4.
We see in Figure 2 that LAP training makes the transport cost \(\mathcal{C}\) more different between clean and adversarial points. Using its empirical quantiles on clean points then allows detecting samples from some attacks with high recall at a fixed false positive rate, without ever seeing adversarial samples.
Figure 2: Histogram of transport cost \(\mathcal{C}\) for clean and FGM-attacked test samples with different values of \(\epsilon\) on CIFAR100. The vertical lines represent the 0.02 and 0.98 empirical quantiles of the transport cost of the clean samples. Left: ResNeXt50. Right: LAP-ResNeXt50.
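A minimal sketch of this quantile-based flagging, assuming precomputed arrays `cost_clean` and `cost_test` holding the transport cost \(\mathcal{C}\) of clean and incoming samples (hypothetical names):

```
import numpy as np

# Flag a sample as adversarial when its transport cost falls outside the
# empirical quantile band estimated on clean samples (cf. Fig. 2).
def flag_by_quantiles(cost_clean, cost_test, fpr=0.04):
    lo = np.quantile(cost_clean, fpr / 2)       # 0.02 quantile
    hi = np.quantile(cost_clean, 1 - fpr / 2)   # 0.98 quantile
    return (cost_test < lo) | (cost_test > hi)  # True = flagged
```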
### Detection of Seen Attacks
For detection training, the test set is split in 0.9/0.1 proportions into two datasets, B1 and B2. For each image in B1 (respectively B2), an adversarial sample is generated and a balanced detection training set (respectively a detection test set) is created. Since adversarial samples are created for a specific network, this is done for the vanilla version of the network and for its LAP and RCE versions. We tried augmenting the detection training dataset with a randomly perturbed version of each image, to be considered clean during detection training, as in [33], but found that this does not improve detection accuracy. This dataset creation protocol is standard and is depicted in Figure 3 in Appendix D.3. We did not limit the datasets to successfully attacked images only as in [33], as we consider the setting of detecting all adversarial samples, whether or not they fool the network, to be more challenging (this is visible in the results). It also allows detecting any attempted interference with the network, even one that fails to fool it.
Samples in the detection training set are fed through the network and the features for each detector are extracted. We tried three classifiers (logistic regression, random forest and SVM) trained on these features for all detectors, and kept the random forest as it always performs best. We tried two methods to improve the accuracy of all detectors: class-conditioning and ensembling. In class-conditioning, the features are grouped by the class predicted by the network, and a detector is trained for every class. At test time, the detector trained on the features of the predicted class is used. A detector is also trained on all samples regardless of the predicted class and is used in case a certain class is never targeted by the attack. We also tried ensembling the class-conditional detector with the general all-class detector: an input is considered an attack if at least one detector says so. This ensemble of the class-conditional detector and the general detector performs best for all detectors, and is the one we use.
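The ensembling just described can be sketched as follows; `feats`, `is_adv` and `pred_class` are hypothetical arrays holding the extracted detection features, the binary labels and the classes predicted by the network.

```
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One random forest per predicted class plus a general all-class forest;
# an input is flagged as adversarial if at least one applicable forest says so.
class EnsembleDetector:
    def __init__(self, n_classes, n_trees=100):
        self.general = RandomForestClassifier(n_estimators=n_trees)
        self.per_class = [RandomForestClassifier(n_estimators=n_trees)
                          for _ in range(n_classes)]

    def fit(self, feats, is_adv, pred_class):
        self.general.fit(feats, is_adv)
        for c, clf in enumerate(self.per_class):
            mask = pred_class == c
            # Train a class-conditional forest only if both labels are present.
            if mask.any() and np.unique(is_adv[mask]).size > 1:
                clf.fit(feats[mask], is_adv[mask])
            else:
                self.per_class[c] = None  # fall back to the general forest

    def predict(self, feats, pred_class):
        flags = self.general.predict(feats).astype(bool)
        for c, clf in enumerate(self.per_class):
            if clf is None:
                continue
            mask = pred_class == c
            if mask.any():
                flags[mask] |= clf.predict(feats[mask]).astype(bool)
        return flags
```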
We report the accuracy of each detector on the detection test set for both the vanilla and the LAP network in Table 1. In each cell, the first number corresponds to the vanilla network and the second to the regularized LAP-network. Since the NS detector takes the image and not its embeddings as input, the impact of LAP and RCE training on its performance is minimal and we report its performance on the vanilla network only. These results are averaged over 5 runs and the standard deviations (which are tight) are in Tables 3 to 7 in Appendix D.5, along with results on RCE-networks. Since some attacks are slow, we don't test them on all network-dataset pairs in this experiment. Results in Table 1 show two things. First, our detector performs better than both other detectors, with or without the regularization. Second, both the TR and MH detectors work better on the LAP-networks most times. The MH detector benefits more from the regularization, but on all attacks, the best detector is always the Transport detector. In the tables in Appendix D.5, RCE often improves detection accuracy in this experiment, but clearly less than LAP training. On CIFAR10, our detector outperforms the MH detector by 9 to 16 percentage points on the vanilla ResNet110, and the NS detector by up to 5 points. LAP training improves the accuracy of our detector by an average 1.5 points and that of the MH detector by a substantial
8.3 points on average. On CIFAR100, our detector outperforms the MH detector by 1 to 5 points on the vanilla ResNeXt50, and the NS detector by up to 3 points. LAP training improves the accuracy of both detectors by an average 1 point. On TinyImageNet, our detector greatly outperforms the MH detector by 3 to 15 points on the vanilla WideResNet, and the NS detector slightly. LAP training does not change the accuracy of our detector and improves that of the MH detector by 0.85 points on average. Detection rates of successful adversarial samples (i.e. those that fool the network) are in Table 14 in Appendix D.7 and are higher than 95% on our detector. False positive rates (positive meaning adversarial) are in Table 16 in Appendix D.8 and are always less than 5% on our detector. The AUROC is in Table 18 in Appendix D.9. On all these metrics, our detector outperforms the other detectors largely, and LAP-training greatly improves the performance of the Mahalanobis detector.
| Attack | Detector | ResNet110 (CIFAR10) | ResNeXt50 (CIFAR100) | WideResNet (TinyImageNet) |
| :--- | :--- | :--- | :--- | :--- |
| FGM | TR | 97.14/**98.70** | 97.26/**98.32** | **95.36**/95.14 |
| | MH | 87.78/95.64 | 95.82/96.82 | 81.06/85.26 |
| | NS | 94.56 | 94.70 | 94.90 |
| APGD | TR | 94.10/**97.50** | 96.04/**97.84** | **95.22**/95.20 |
| | MH | 82.08/90.70 | 93.94/94.60 | 79.66/85.10 |
| | NS | 94.28 | 94.18 | 94.86 |
| BIM | TR | 97.54/**99.28** | 98.02/**98.92** | **95.26**/95.12 |
| | MH | 86.78/95.38 | 96.06/97.76 | 81.20/82.46 |
| | NS | 95.04 | 94.72 | 95.00 |
| AA | TR | 88.88/**94.08** | 84.90/**87.56** | **81.38**/81.24 |
| | MH | 80.46/89.96 | 83.90/86.58 | 78.40/78.40 |
| | NS | 88.78 | 84.82 | 81.32 |
| DF | TR | **99.98**/99.84 | **99.80**/99.58 | — |
| | MH | 91.50/96.70 | 97.30/97.12 | — |
| | NS | 99.78 | 99.60 | — |
| CW | TR | **98.04**/97.96 | 97.04/**97.80** | — |
| | MH | 85.58/93.36 | 95.38/96.42 | — |
| | NS | 93.86 | 90.70 | — |
| HSJ | TR | **99.94**/99.92 | — | — |
| | MH | 85.50/94.56 | — | — |
| | NS | 99.68 | — | — |
| BA | TR | 96.56/**97.02** | — | — |
| | MH | 80.20/89.62 | — | — |
| | NS | 92.10 | — | — |

Table 1: Average accuracy of detectors on adversarial samples from seen attacks on Network/LAP-Network over 5 runs.
### Detection of Unseen Attacks
An important setting is when we don't know which attack might be used or only have time to train detectors on samples from one attack. We still want our detector to generalize well to unseen attacks. To test this, we use the same vanilla networks as above but the detectors are now trained on the detection training set created by the simplest and quickest attack (FGM) and tested on the detection test sets created by the other attacks. Results are in Table 2. We see that our detector has very good generalization to unseen attacks, even those very different from FGM, comfortably better than the MH detector, by up to 19 percentage points, while the NS detector only generalizes to variants of FGM (APGD and BIM), and fails on the other attacks. These results are averaged over 5 runs and the standard deviations are in Tables 8 to 13 in Appendix D.6. On our detector, the detection rate of successful adversarial samples remains higher than 90% in most cases (Table 15 in Appendix D.7) and the FPR is always lower than 10% (Table 17 in Appendix D.8). The AUROC is in Table 19 in Appendix D.9. Our detector almost always outperforms the other detectors on all these metrics.
| Attack | Detector | ResNet110 (CIFAR10) | ResNeXt50 (CIFAR100) | WideResNet (TinyImageNet) |
| :--- | :--- | :--- | :--- | :--- |
| APGD | TR | 89.32 | 91.94 | 93.26 |
| | MH | 77.34 | 90.86 | 76.96 |
| | NS | **92.08** | **92.16** | **94.06** |
| BIM | TR | **96.02** | **95.02** | **94.66** |
| | MH | 77.24 | 93.16 | 77.02 |
| | NS | 93.88 | 93.88 | 94.62 |
| AA | TR | **85.10** | **73.32** | **77.04** |
| | MH | 72.12 | 73.08 | 60.36 |
| | NS | 51.82 | 51.32 | 65.60 |
| DF | TR | **91.02** | **85.16** | **90.62** |
| | MH | 80.12 | 82.72 | 73.18 |
| | NS | 51.40 | 51.62 | 72.82 |
| CW | TR | **93.18** | **78.18** | **91.42** |
| | MH | 79.92 | 76.44 | 75.52 |
| | NS | 50.84 | 51.02 | 71.96 |
| HSJ | TR | **93.00** | **85.04** | — |
| | MH | 79.70 | 82.82 | — |
| | NS | 52.12 | 52.04 | — |
| BA | TR | **90.92** | **92.14** | — |
| | MH | 79.32 | 84.46 | — |
| | NS | 59.88 | 57.90 | — |

Table 2: Average accuracy of detectors on samples from unseen attacks after training on FGM over 5 runs.
However, this experiment shows that our regularization has some limitations. We see in Tables 8 to 13 in Appendix D.6 that LAP training does not improve detection accuracy as much here, and sometimes reduces it. It still improves it for the MH detector on all attacks on ResNet110 and WideResNet, by up to 10 points, and LAP training still always does better than RCE training. We claim this is because these methods reduce the variance of the features extracted on the seen attack, which harms generalization to unseen attacks. This also explains why detection of APGD and BIM, which are variants of FGM, improves.
### Detection of Out-Of-Distribution Samples
Since our analysis applies to all out-of-distribution (OOD) samples, we test detection of OOD samples in a setting similar to [33]. We train a model on a first dataset (ResNet110 on CIFAR10 and ResNeXt50 on CIFAR100), then train detectors to tell this first dataset from a second dataset (which can be an adversarially attacked version of the first), and finally test their ability to tell the first dataset from a third, unseen dataset (SVHN). Our detector performs very well, and better than the MH detector, in both experiments; detection accuracy on samples from the unseen distribution is higher than 90% when the CW attack is used to create the second dataset. Details are in Appendix D.11.
### Attacking the Detector
We consider the case where the detector is also attacked (adaptive attacks). We try 2 attacks on the TR and MH detectors. Both are white-box with respect to the network. The first is black-box with respect to the detector and only knows if a sample has been detected or not. The second has some knowledge about the detector. It knows what features it uses and can attack it directly to find adversarial features. We test these attacks by looking at the percentage of detected successful adversarial samples that they turn into undetected successful adversarial samples. For the first attack, this is 6.8% for our detector and 12.9% for the MH detector on the LAP-ResNet110, and is lowered by LAP training. For the second attack it is 14% on our detector. Given that detection rates of successful adversarial samples are almost 100% (see Appendix D.7), this shows that an adaptive attack does not circumvent the detector, as detection rates drop to 85% at worst. Details are in Appendix D.12.
## 6 Conclusion
We proposed a method for detecting adversarial samples, based on the dynamical view of neural networks. The method examines the discrete vector field moving the inputs in order to distinguish clean from abnormal samples. The detector requires minimal computation to extract the features it uses for detection and achieves state-of-the-art detection accuracy on seen and unseen attacks. We also use a transport regularization that improves both test classification accuracy and the accuracy of adversarial detectors.
## Ethical Statement
Adversarial detection and robustness are essential to safely deploy neural networks that attackers might target for nefarious purposes. But adversarial attacks can be used to evade neural networks that are deployed for nefarious purposes.
|
2305.00216 | Physics-Guided Graph Neural Networks for Real-time AC/DC Power Flow
Analysis | The increasing scale of alternating current and direct current (AC/DC) hybrid
systems necessitates a faster power flow analysis tool than ever. This letter
thus proposes a specific physics-guided graph neural network (PG-GNN). The
tailored graph modelling of AC and DC grids is firstly advanced to enhance the
topology adaptability of the PG-GNN. To eschew unreliable experience emulation
from data, AC/DC physics are embedded in the PG-GNN using duality. Augmented
Lagrangian method-based learning scheme is then presented to help the PG-GNN
better learn nonconvex patterns in an unsupervised label-free manner.
Multi-PG-GNN is finally conducted to master varied DC control modes. Case study
shows that, relative to the other 7 data-driven rivals, only the proposed
method matches the performance of the model-based benchmark, also beats it in
computational efficiency beyond 10 times. | Mei Yang, Gao Qiu, Yong Wu, Junyong Liu, Nina Dai, Yue Shui, Kai Liu, Lijie Ding | 2023-04-29T09:58:15Z | http://arxiv.org/abs/2305.00216v1 | # Physics-Guided Graph Neural Networks for Real-time AC/DC Power Flow Analysis
###### Abstract
The increasing scale of alternating current and direct current (AC/DC) hybrid systems necessitates a faster power flow analysis tool than ever. This letter thus proposes a specific physics-guided graph neural network (PG-GNN). The tailored graph modelling of AC and DC grids is firstly advanced to enhance the topology adaptability of the PG-GNN. To eschew unreliable experience emulation from data, AC/DC physics are embedded in the PG-GNN using duality. An augmented Lagrangian method-based learning scheme is then presented to help the PG-GNN better learn nonconvex patterns in an unsupervised, label-free manner. Multi-PG-GNN is finally developed to master varied DC control modes. A case study shows that, relative to 7 data-driven rivals, only the proposed method matches the performance of the model-based benchmark, and beats it in computational efficiency by more than 10 times.
AC/DC hybrid systems, power flow analysis, physics-guided graph neural networks, Lagrangian method
## I Introduction
Higher power transmission efficiency of AC/DC hybrid systems has widely benefited renewable energy consumption [1]. Faster AC/DC power flow analysis is thus more necessary than ever, due to significant uncertainties from renewable energy, the massive scale of interconnected grids, and the nonlinearity of DC lines and converters [2]. It is key to real-time dispatch and control in AC/DC hybrid systems.
Rich model-based methods are available for AC/DC power flow, spanning from the basic unified methods [3] and sequential methods [4] to their enhanced parallel versions [2, 5]. However, issues such as slow convergence rates and diverse control modes can still impose a heavy computing burden on power flow analysis in large-scale AC/DC hybrid systems. Emerging data-driven methods provide faster solutions [6, 7, 8], but their generalizability can be limited by the absence of physics embedding and topology patterns.
Graph neural networks (GNNs) may be the prospective fix for the above concerns, but only when carefully trained to follow physics. Two alternatives exist towards this end. One is to emulate the systematic process of power flow solvers, such as the Newton-Raphson (NR) solver [9], the Hades2 solver [10], etc. However, this method may be slow-paced when the DC control mode switches, since the solving workflow must be revisited to accommodate the new control mode. Treating power flow equations directly as the loss function is the other candidate [11]. This idea is better, since a one-time feedforward computation is all we need to obtain a power flow solution, provided that the GNN has been well trained. The remaining issue is how to embed all AC/DC physical principles into the loss function. Penalizing them is pragmatic but imprudent, as handpicking penalty weights from scratch is daunting. Besides, DC control mode switching has not been studied by the above methods.
To fill the above gaps, an augmented Lagrangian method (ALM)-based physics-guided GNN (PG-GNN) is proposed. On top of the tailored graph modelling of AC and DC grids, AC/DC operational physics are analytically embedded into the GNN learning process. Then, the learning task is recast as a parameterized duality problem of the AC/DC power flow model. To yield tolerable performance of the PG-GNN against nonconvex physics, an ALM-based gradient descent algorithm is proposed to cope with the above problem. Finally, a multi-PG-GNN decision framework is presented, enabling DC control mode switching.
## II Power Flow Model for AC/DC Hybrid Systems
In this letter, we apply the quasi-steady-state model of the AC/DC hybrid system. Let \(\mathcal{N}_{PV}\), \(\mathcal{N}_{ac}\), \(\mathcal{N}_{dc}\), and \(\mathcal{N}_{pcc}\) stand for the PV bus set, the AC bus set, the DC bus set, and the set of points of common coupling, respectively. \(\mathcal{N}_{pcc}\) denotes the buses connected to both DC links and AC branches. Following these definitions, the AC/DC power flow is modelled as follows (\(\forall i\in\mathcal{N}_{ac},\ j\in\mathcal{N}_{ac}\cup\mathcal{N}_{pcc},\ k\in\mathcal{N}_{dc},\ p\in\mathcal{N}_{PV},\ c\in\mathcal{N}_{pcc}\)):
\[P_{i}^{inj}-\sum_{j\in i}P_{ij}(V_{i},V_{j},\delta_{ij})=0 \tag{1a}\]
\[Q_{i}^{inj}-\sum_{j\in i}Q_{ij}(V_{i},V_{j},\delta_{ij})=0 \tag{1b}\]
\[P_{c}^{inj}-\sum_{j\in c}P_{cj}(V_{c},V_{j},\delta_{cj})\pm V_{d,k}^{re/iv}I_{d,k}=0 \tag{2a}\]
\[Q_{c}^{inj}-\sum_{j\in c}Q_{cj}(V_{c},V_{j},\delta_{cj})-V_{d,k}^{re/iv}I_{d,k}\tan\varphi_{k}^{re/iv}=0 \tag{2b}\]
\[V_{p}=V_{p}^{ref} \tag{3}\]
\[V_{d,k}^{re}=3\sqrt{2}K_{ck}^{re}V_{c}\cos\alpha_{k}/\pi-3X_{ck}^{re}I_{d,k}/\pi \tag{4a}\]
\[V_{d,k}^{iv}=3\sqrt{2}K_{ck}^{iv}V_{c}\cos\gamma_{k}/\pi-3X_{ck}^{iv}I_{d,k}/\pi \tag{4b}\]
\[R_{dc}I_{d,k}=V_{d,k}^{re}-V_{d,k}^{iv} \tag{5}\]
\[\cos\varphi_{k}^{re/iv}=V_{d,k}^{re/iv}/(3\sqrt{2}K_{ck}^{re/iv}V_{c}/\pi) \tag{6}\]
Control mode 1: \(I_{d,k}^{re}-I_{d,k}^{re,ref}=0,\ V_{d,k}^{iv}-V_{d,k}^{iv,ref}=0\) (7a)
Control mode 2: \(\alpha_{k}-\alpha_{k}^{min}=0,\ I_{d,k}^{iv}-I_{d,k}^{iv,ref}=0\) (7b)
where \(K_{ck}^{re}\) and \(K_{ck}^{iv}\) stand for the ratios of the converter transformers at the rectifier side and inverter side of DC bus \(k\), respectively. Note that the superscripts \(re/iv\) denote the rectifier and inverter, respectively. \(X_{ck}^{re}\) and \(X_{ck}^{iv}\) represent the reactances of the converters connected at the rectifier and inverter DC bus \(k\). \(\alpha_{k}\) is the firing angle of the \(k\)-th rectifier, and \(\gamma_{k}\) is the extinction angle of the \(k\)-th inverter. \(V_{d,k}^{re/iv}\) represents the rectifier/inverter voltage of the \(k\)-th DC line. \(R_{dc}\) and \(I_{d,k}\) represent the resistance and current of the \(k\)-th DC line, respectively. \(V\) indicates the nodal voltage amplitude, and \(j\in i\) implies that the \(j\)-th bus is adjacent to the \(i\)-th bus through branch \(ij\). \(P_{ij}(V_{i},V_{j},\delta_{ij})\) and \(Q_{ij}(V_{i},V_{j},\delta_{ij})\) are the functions computing the active and reactive transmission power of line \(ij\), respectively, where \(\delta_{ij}\) is the voltage phase angle difference between bus \(i\) and bus \(j\). The sign of the third term in (2a) is negative if it specifies the rectifier side, and positive otherwise. Eqs. (7) are the governing equations of the DC system; here we only provide two common control modes. The subscript \(ref\) indicates the control reference.
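For illustration, the DC-side equations (4)-(6) can be evaluated as in the sketch below, assuming per-unit quantities; the function and variable names are ours, not part of the letter.

```
import numpy as np

# Evaluate the DC-link relations (4a)-(6) for one DC line; all quantities
# are assumed to be in per-unit, and names are illustrative.
def dc_link_quantities(Vc_re, Vc_iv, K_re, K_iv, X_re, X_iv,
                       alpha, gamma, R_dc, I_d):
    Vd_re = 3*np.sqrt(2)/np.pi * K_re * Vc_re * np.cos(alpha) \
            - 3/np.pi * X_re * I_d                       # Eq. (4a)
    Vd_iv = 3*np.sqrt(2)/np.pi * K_iv * Vc_iv * np.cos(gamma) \
            - 3/np.pi * X_iv * I_d                       # Eq. (4b)
    line_residual = R_dc * I_d - (Vd_re - Vd_iv)         # Eq. (5)
    cos_phi_re = Vd_re / (3*np.sqrt(2)/np.pi * K_re * Vc_re)  # Eq. (6)
    cos_phi_iv = Vd_iv / (3*np.sqrt(2)/np.pi * K_iv * Vc_iv)
    return Vd_re, Vd_iv, line_residual, cos_phi_re, cos_phi_iv
```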
## III The Proposed Methodology
### _Graph modeling for AC/DC power flow analysis_
A power system can be represented as a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{A})\), where \(\mathcal{V}\), \(\mathcal{E}\) and \(\mathbf{A}\) are settled upon the set of buses, the set of branches, and the adjacency matrix, respectively. GNNs naturally exploit these properties, granting good topology adaptability. For an AC/DC hybrid system, however, a GNN is not easy to create, since the features necessary to characterize AC grids, DC grids and their bridges do not always match each other. Thus, this letter proposes an alternative to unify these features. As shown in Fig. 1, by introducing converters as nodes, and DC transformers and lines as edges, a DC grid can be unified into a common graph with the AC grid. In this sense, for an AC/DC system with \(N\) buses, \(b\) branches and \(a\) DC lines, its corresponding GNN has \(N+2a\) nodes and \(b+3a\) edges:
\[\boldsymbol{H}(\cdot)^{(0)}=\boldsymbol{x}\triangleq\left[\boldsymbol{x}^{ \mathcal{V}},\boldsymbol{Z}^{\mathcal{V}\leftarrow\mathcal{E}}\boldsymbol{x}^ {\mathcal{E}}\right] \tag{8}\]
where \(\boldsymbol{x}^{\mathcal{V}}=[\boldsymbol{x}_{1}^{\mathcal{V}},\ldots,\boldsymbol{x}_{N+2a}^{\mathcal{V}}]^{T}\in\mathbb{R}^{(N+2a)\times 2}\) and, in particular, \(\boldsymbol{x}_{i}^{\mathcal{V}}=[P_{i}^{g}-P_{i}^{L},V_{i}]^{T},\ i\in\mathcal{N}_{PV}\). For rectifiers, \(\boldsymbol{x}_{k}^{\mathcal{V}}=[\boldsymbol{\alpha}^{min},X_{ck}^{re}I_{d,k}^{re,ref}]\) (mode 1) or \(\boldsymbol{x}_{k}^{\mathcal{V}}=[\boldsymbol{\alpha}^{min},X_{ck}^{re}I_{d,k}^{iv,ref}]\) (mode 2). For inverters, \(\boldsymbol{x}_{k}^{\mathcal{V}}=[\boldsymbol{\gamma}^{min},V_{d,k}^{iv,ref}]\) (mode 1) or \(\boldsymbol{x}_{k}^{\mathcal{V}}=[\boldsymbol{\gamma}^{min},X_{ck}^{iv}I_{d,k}^{iv,ref}]\) (mode 2), \(k\in\mathcal{N}_{dc}\); \(\boldsymbol{x}_{i}^{\mathcal{V}}=[P_{i}^{g}-P_{i}^{L},Q_{i}^{g}-Q_{i}^{L}]^{T},\ i\in\mathcal{N}_{PQ}\cup\mathcal{N}_{pcc}\); \(\boldsymbol{x}_{i}^{\mathcal{V}}=[P_{i}^{g}-P_{i}^{L},V_{i}]^{T},\ i\in\mathcal{N}_{V\delta}\), where \(\mathcal{N}_{V\delta}\) denotes the set of swing buses. The edge features of the input are structured by the branches, denoted \(\boldsymbol{x}^{\mathcal{E}}\in\mathbb{R}^{(b+3a)\times 2}\). Specifically, for AC branches, \(\boldsymbol{x}^{\mathcal{E}}=[G_{\mathcal{E}},B_{\mathcal{E}}]\), where \(G_{\mathcal{E}}\) and \(B_{\mathcal{E}}\) are the admittance parameters of line \(\mathcal{E}\). For rectifiers and inverters, \(\boldsymbol{x}^{\mathcal{E}}=[K_{ck}^{re,min},K_{ck}^{re,max}]\). For DC branches, \(\boldsymbol{x}^{\mathcal{E}}=[R_{dc},I_{d,k}^{re,ref}]\) (mode 1) or \(\boldsymbol{x}^{\mathcal{E}}=[R_{dc},I_{d,k}^{iv,ref}]\) (mode 2). \(\boldsymbol{Z}^{\mathcal{V}\leftarrow\mathcal{E}}\in\mathbb{R}^{(N+2a)\times(b+3a)}\) is a trainable transformation matrix that embeds edge features into nodes.
We then consider ChebNet to enhance the GNN, as ChebNet can aggregate high-order neighbor information and results in better performance [12]. Before describing ChebNet, we first compute the graph Laplacian matrix \(\boldsymbol{L}\) by subtracting \(\boldsymbol{A}\) from the degree matrix \(\boldsymbol{D}\), i.e., \(\boldsymbol{L}=\boldsymbol{D}-\boldsymbol{A}\). The feedforward of ChebNet is then given by (9):
\[\boldsymbol{H}(\cdot)^{(l+1)}=\sigma\left(\sum_{j=0}^{F}\boldsymbol{T}_{j}(\hat{\boldsymbol{L}})\boldsymbol{H}(\cdot)^{(l)}\boldsymbol{\theta}_{j}\right),\quad\boldsymbol{H}(\cdot)^{(0)}=\boldsymbol{x} \tag{9a}\]
\[\boldsymbol{T}_{j}(\hat{\boldsymbol{L}})=2\hat{\boldsymbol{L}}\boldsymbol{T}_{j-1}(\hat{\boldsymbol{L}})-\boldsymbol{T}_{j-2}(\hat{\boldsymbol{L}}),\quad\boldsymbol{T}_{0}(\hat{\boldsymbol{L}})=\boldsymbol{I},\quad\boldsymbol{T}_{1}(\hat{\boldsymbol{L}})=\hat{\boldsymbol{L}},\quad\hat{\boldsymbol{L}}=2\boldsymbol{L}/\lambda_{max}-\boldsymbol{I} \tag{9b}\]
where \(\sigma\) is the activation function, \(F\) is the Chebyshev filter size, \(\boldsymbol{\theta}_{j}\) are trainable weights, and \(\lambda_{max}\) is the largest eigenvalue of \(\boldsymbol{L}\).
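A minimal sketch of one ChebNet propagation step (9a), assuming the standard Chebyshev recursion (9b) for \(\boldsymbol{T}_{j}\); `H` is the node-feature matrix and `thetas` the list of trainable weight matrices (illustrative names):

```
import numpy as np

# One ChebNet layer: H^{l+1} = sigma(sum_j T_j(L_hat) H^l theta_j), Eq. (9a),
# with T_j built from the Chebyshev recursion and L_hat = 2L/lambda_max - I.
def cheb_layer(H, L, thetas, activation=np.tanh):
    n = L.shape[0]
    lam_max = np.linalg.eigvalsh(L).max()
    L_hat = 2.0 * L / lam_max - np.eye(n)
    T_prev, T_curr = np.eye(n), L_hat        # T_0 and T_1
    out = T_prev @ H @ thetas[0]             # j = 0 term
    for j in range(1, len(thetas)):
        out += T_curr @ H @ thetas[j]        # j-th term of the sum
        T_prev, T_curr = T_curr, 2.0 * L_hat @ T_curr - T_prev  # T_{j+1}
    return activation(out)
```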
outperforms other competitive GNNs. Specifically, ALM gives rise to the following learning scheme (\(i\in\{PQU,DC\}\)).
\[\mathcal{L}_{\rho}^{\boldsymbol{\theta}}\left(\boldsymbol{\lambda},\phi^{\boldsymbol{\theta}}(\boldsymbol{x})\right)=f_{DC}^{Ang}\left(\phi^{\boldsymbol{\theta}}(\boldsymbol{x})\right)+\sum_{i}\lambda_{i}f_{i}\left(\phi^{\boldsymbol{\theta}}(\boldsymbol{x})\right)+\frac{\rho}{2}\sum_{i}\left\|f_{i}\left(\phi^{\boldsymbol{\theta}}(\boldsymbol{x})\right)\right\|_{2}^{2} \tag{12}\]
\[\boldsymbol{\theta}^{k+1}:=\operatorname*{argmin}_{\boldsymbol{\theta}}\ \mathcal{L}_{\rho}^{\boldsymbol{\theta}}\left(\boldsymbol{\lambda}^{k},\phi^{\boldsymbol{\theta}}(\boldsymbol{x})\right) \tag{13a}\]
\[\lambda_{i}^{k+1}:=\lambda_{i}^{k}+\rho\left|f_{i}\left(\phi^{\boldsymbol{\theta}^{k+1}}(\boldsymbol{x})\right)\right| \tag{13b}\]
where \(\rho>0\). Step (13a) updates \(\boldsymbol{\theta}\) by minimizing (12). With \(\boldsymbol{\theta}\) fixed, (13b) then updates the dual variables, which are fed back into (13a). Iterations of (13) continue until the prescribed number of steps is reached or (12) stabilizes. Notably, (13a) can be realized via gradient descent:
\[\mathbf{\theta}\leftarrow\mathbf{\theta}+\frac{\partial\mathcal{L}^{\mathbf{\theta}}(\mathbf{ \lambda},\phi^{\mathbf{\theta}}(\mathbf{x}))}{\partial\mathbf{\theta}}=\mathbf{\theta}+\frac{ \partial\mathcal{L}^{\mathbf{\theta}}(\mathbf{\lambda},\phi^{\mathbf{\theta}}(\mathbf{x}))}{ \partial\phi^{\mathbf{\theta}}(\mathbf{x})}\frac{\partial\phi^{\mathbf{\theta}}(\mathbf{x})}{ \partial\mathbf{\theta}} \tag{14}\]
## V Conclusion
A physics-guided graph neural network (PG-GNN) is proposed for AC/DC power flow solution. A built-in graph model of AC and DC components is firstly designed. A parameterized Lagrangian duality of the AC/DC power flow problem is then provided to embed physics in the PG-GNN. An augmented Lagrangian method (ALM)-based training strategy is proposed to enable the PG-GNN to better capture nonconvex patterns. A decision framework is finally built upon multiple PG-GNNs so that DC control mode switching can be handled. A case study on the IEEE 30-bus system shows that the proposed method achieves the best trade-off between decision efficiency and accuracy compared to a classical model-based method and 7 advanced data-driven methods. This work demonstrates the necessity of physics embedding for reliable AC/DC power flow solutions, and shows the flexibility of multiple PG-GNNs regarding model switching.
|
2302.08835 | h-analysis and data-parallel physics-informed neural networks | We explore the data-parallel acceleration of physics-informed machine
learning (PIML) schemes, with a focus on physics-informed neural networks
(PINNs) for multiple graphics processing units (GPUs) architectures. In order
to develop scale-robust and high-throughput PIML models for sophisticated
applications which may require a large number of training points (e.g.,
involving complex and high-dimensional domains, non-linear operators or
multi-physics), we detail a novel protocol based on $h$-analysis and
data-parallel acceleration through the Horovod training framework. The protocol
is backed by new convergence bounds for the generalization error and the
train-test gap. We show that the acceleration is straightforward to implement,
does not compromise training, and proves to be highly efficient and
controllable, paving the way towards generic scale-robust PIML. Extensive
numerical experiments with increasing complexity illustrate its robustness and
consistency, offering a wide range of possibilities for real-world simulations. | Paul Escapil-Inchauspé, Gonzalo A. Ruz | 2023-02-17T12:15:18Z | http://arxiv.org/abs/2302.08835v3 | # \(h\)-analysis and data-parallel physics-informed neural networks
###### Abstract
We explore the data-parallel acceleration of physics-informed machine learning (PIML) schemes, with a focus on physics-informed neural networks (PINNs) for multiple graphics processing units (GPUs) architectures. In order to develop scale-robust PIML models for sophisticated applications (e.g., involving complex and high-dimensional domains, non-linear operators or multi-physics), which may require a large number of training points, we detail a protocol based on the Horovod training framework. This protocol is backed by \(h\)-analysis, including a new convergence bound for the generalization error. We show that the acceleration is straightforward to implement, does not compromise training, and proves to be highly efficient, paving the way towards generic scale-robust PIML. Extensive numerical experiments with increasing complexity illustrate its robustness and consistency, offering a wide range of possibilities for real-world simulations.
## Introduction
Simulating physics through accurate surrogates is a hard task for engineers and computer scientists. Numerical methods such as finite element methods, finite difference methods and spectral methods can be used to approximate the solution of partial differential equations (PDEs) by representing it in a finite-dimensional function space, delivering an approximation to the desired solution or mapping [1].
Real-world applications often incorporate partial information of physics and observations, which can be noisy. This hints at using data-driven solutions through machine learning (ML) techniques. In particular, deep learning (DL) [2, 3] principles have been praised for their good performance, granted by the capability of deep neural networks (DNNs) to approximate high-dimensional and non-linear mappings and to offer great generalization with large datasets. Furthermore, the exponential growth of GPU capabilities has made it possible to implement ever larger DL models.
Recently, a novel paradigm called physics-informed machine learning (PIML) [4] was introduced to bridge the gap between data-driven [5] and physics-based [6] frameworks. PIML enhances the capability and generalization power of ML by adding prior information on physical laws to the scheme by restricting the output space (e.g., via additional constraints or a regularization term). This simple yet general approach was applied successfully to a wide range complex real-world applications, including structural mechanics [7, 8] and biological, biomedical and behavioral sciences [9].
In particular, physics-informed neural networks (PINNs) [10] consist in applying PIML by means of DNNs. They encode the physics in the loss function and rely on automatic differentiation (AD) [11]. PINNs have been used to solve inverse problems [12], stochastic PDEs [13, 14], complex applications such as the Boltzmann transport equation [15] and large-eddy simulations [16], and to perform uncertainty quantification [17, 18].
Concerning the challenges faced by the PINNs community, efficient training [19], proper hyper-parameters setting [20], and scaling PINNs [21] are of particular interest. Regarding the latter, two research areas are gaining attention.
First, it is important to understand how PINNs behave for an increasing number of training points \(N\) (or equivalently, a decreasing maximum distance \(h\) between points). Throughout this work, we refer to this study as \(h\)-analysis. In their pioneering works [22, 23], Mishra and Molinaro provided a bound for the generalization error with respect to \(N\) for data-free and unique continuation problems, respectively. More precise bounds have been obtained using characterizations of the DNN [24].
Second, PINNs are typically trained over graphics processing units (GPUs), which have limited memory capabilities. To ensure models scale well with increasingly complex settings, two paradigms emerge: data-parallel and model-parallel acceleration. The former splits the training data over different workers, while the latter distributes the model weights. However, general DL backends do not readily support multiple GPU acceleration. To address this issue, Horovod [25] is a distributed framework specifically designed for DL, featuring a ring-allreduce algorithm [26] and implementations for TensorFlow, Keras and PyTorch.
As model size becomes prohibitive, domain decomposition-based approaches allow for distributing the computational domain. Examples of such approaches include conservative PINNs (cPINNs) [27], extended PINNs (XPINNs) [28, 29], and distributed PINNs (DPINNs) [30]. cPINNs and XPINNs were compared in [31]. These approaches are compatible with data-parallel acceleration within each subdomain. Additionally, a recent review concerning distributed PIML [21] is also available. Regarding existing data-parallel implementations, TensorFlow MirroredStrategy in TensorDiffEq [32] and NVIDIA Modulus [33], which supports Horovod acceleration, should be mentioned. However, to the authors' knowledge, there is no systematic study of the background of data-parallel PINNs and their implementation.
In this work, we present a procedure to attain data-parallel efficient PINNs. It relies on \(h\)-analysis and consists of a Horovod-based acceleration. Concerning \(h\)-analysis, we observe PINNs exhibiting three phases of behavior as a function of the number of training points \(N\):
1. A pre-asymptotic regime, where the model does not learn the solution due to missing information;
2. A transition regime, where the error decreases with \(N\);
3. A permanent regime, where the error remains stable.
To illustrate this, Fig. 1 presents the relative \(L^{2}\) error distribution with respect to \(N_{f}\) (number of domain collocation points) for the forward "1D Laplace" case. The experiment was conducted over 8 independent runs with a learning rate of \(10^{-4}\) and 20000 iterations of ADAM [34] algorithm. The transition regime--where variability in the results is high and some models converge while others do not--is between \(N_{f}=32\) and \(N_{f}=256\). For more information on the experimental setting and the definition of precision \(\rho\), please refer to "1D Laplace".
Building on the empirical observations, we use the setting in [22, 23] to supply a rigorous theoretical background to \(h\)-analysis. One of the main contributions of this manuscript is the bound on the "Generalization error for generic PINNs", which allows for a simple analysis of the \(h\)-dependence.
Any practitioner strives to reach the permanent regime for their PIML scheme, and we provide the necessary details for an easy implementation of Horovod-based data acceleration for PINNs, with direct application to any PIML model. Fig. 2 further illustrates the scope of data-parallel PIML. Next, we apply the procedure to increasingly complex problems and demonstrate that Horovod acceleration is straightforward, using the pioneering PINNs code of Raissi as an example. Our main practical findings concerning data-parallel PINNs for up to 8 GPUs are the following:
* They do not require to modify their hyper-parameters;
* They show similar training convergence to the 1 GPU-case;
* They lead to high efficiency for both weak and strong scaling (e.g., \(E_{\text{ff}}\geq 81\%\) for the Navier-Stokes problem with 8 GPUs).
This work is organized as follows: In "Problem formulation", we introduce the PDEs under consideration, PINNs and convergence estimates for the generalization error. We then move to "Data-parallel PINNs" and present "Numerical experiments". Finally, we close this manuscript in "Conclusion".
Figure 1: Error v/s the number of domain collocation points \(N_{f}\) for the “1D Laplace” case. A pre-asymptotic regime (pink) is followed by a rapid transition regime (blue), and eventually leading to a permanent regime (green). This transition occurs over a few extra training points.
### Problem formulation
#### General notation
Throughout, vectors and matrices are expressed using bold symbols. For a natural number \(k\), we set \(\mathbb{N}_{k}:=\{k,k+1,\cdots\}\). For \(p\in\mathbb{N}_{0}=\{0,1,\cdots\}\), and an open set \(D\subseteq\mathbb{R}^{d}\) with \(d\in\mathbb{N}_{1}\), let \(L^{p}(D)\) be the standard class of functions with bounded \(L^{p}\)-norm over \(D\). Given \(s\in\mathbb{R}^{+}\), we refer to [1, Section 2] for the definitions of the Sobolev function spaces \(H^{s}(D)\). Norms are denoted by \(\|\cdot\|\), with subscripts indicating the associated functional spaces. For a finite set \(\mathcal{T}\), we introduce the notation \(|\mathcal{T}|:=\text{card}(\mathcal{T})\), closed subspaces are denoted by the \(\underset{\text{cl}}{\subset}\)-symbol, and \(\imath^{2}=-1\).
### Abstract PDE
In this work, we consider a domain \(D\subset\mathbb{R}^{d}\), \(d\in\mathbb{N}_{1}\), with boundary \(\Gamma=\partial D\). For any \(T>0\), \(\mathbb{D}:=D\times[0,T]\), we solve a general non-linear PDE of the form:
\[\begin{cases}\mathcal{N}[u(\mathbf{x},t);\lambda]&=f(\mathbf{x},t),\quad(\mathbf{x},t)\in D_{f}:=D\times[0,T],\\ \mathcal{B}[u(\mathbf{x},t);\lambda]&=g(\mathbf{x},t),\quad(\mathbf{x},t)\in D_{g}:=\Gamma\times[0,T],\\ u(\mathbf{x},0)&=\hbar(\mathbf{x}),\quad\mathbf{x}\in D_{\hbar}:=D,\end{cases} \tag{1}\]
with \(\mathcal{N}\) a spatio-temporal differential operator, \(\mathcal{B}\) the boundary conditions (BCs) operator, \(\lambda\) the material parameters--the latter being unknown for inverse problems--and \(u(\mathbf{x},t)\in\mathbb{R}^{m}\) for any \(m\in\mathbb{N}_{1}\). Accordingly, for any function \(\tilde{u}\) defined over \(\mathbb{D}\), we introduce \(\Lambda=\{f,g,\hbar,u\}\) and define the residuals \(\xi_{v}\) for each \(v\in\Lambda\) and any observation function \(u_{\text{obs}}\):
\[\begin{cases}\xi_{f}(\mathbf{x},t;\lambda)&:=\mathcal{N}[\tilde{u}(\mathbf{x},t);\lambda]-f(\mathbf{x},t)\quad\text{in}\quad D_{f},\\ \xi_{g}(\mathbf{x},t;\lambda)&:=\mathcal{B}[\tilde{u}(\mathbf{x},t);\lambda]-g(\mathbf{x},t)\quad\text{in}\quad D_{g},\\ \xi_{\hbar}(\mathbf{x})&:=\tilde{u}(\mathbf{x},0)-\hbar(\mathbf{x})\quad\text{in}\quad D_{\hbar},\\ \xi_{u}(\mathbf{x},t)&:=\tilde{u}(\mathbf{x},t)-u_{\text{obs}}(\mathbf{x},t)\quad\text{in}\quad D_{u}:=\mathbb{D}.\end{cases} \tag{2}\]
#### PINNs
Following [20, 35], let \(\sigma\) be a smooth activation function. Given an input \((\mathbf{x},t)\in\mathbb{R}^{d+1}\), we define \(\mathcal{NN}_{\theta}\) as an \(L\)-layer feed-forward neural network with \(W_{0}=d+1\), \(W_{L}=m\) and \(W_{l}\) neurons in the \(l\)-th layer for \(1\leq l\leq L-1\). For constant-width DNNs, we set \(W=W_{1}=\cdots=W_{L-1}\). For \(1\leq l\leq L\), we denote the weight matrix and bias vector in the \(l\)-th layer by \(\mathbf{W}^{l}\in\mathbb{R}^{W_{l}\times W_{l-1}}\) and \(\mathbf{b}^{l}\in\mathbb{R}^{W_{l}}\), respectively, resulting in:
\[\begin{array}{ll}\text{input layer:}&\mathbf{z}^{0}(\mathbf{x})=(\mathbf{x},t)\in\mathbb{R}^{d+1},\\ \text{hidden layers:}&\mathbf{z}^{l}(\mathbf{x})=\sigma(\mathbf{W}^{l}\mathbf{z}^{l-1}(\mathbf{x})+\mathbf{b}^{l})\in\mathbb{R}^{W_{l}}\qquad\text{for}\quad 1\leq l\leq L-1,\\ \text{output layer:}&\mathbf{z}^{L}(\mathbf{x})=\mathbf{W}^{L}\mathbf{z}^{L-1}(\mathbf{x})+\mathbf{b}^{L}\in\mathbb{R}^{m}.\end{array} \tag{3}\]
Figure 2: Scope of data-parallel PIML.
This results in the representation \(\mathbf{z}^{L}(\mathbf{x},t)\), with
\[\theta:=\left\{(\mathbf{W}^{1},\mathbf{b}^{1}),\cdots,(\mathbf{W}^{L},\mathbf{b} ^{L})\right\}, \tag{4}\]
the (trainable) parameters--or weights--of the network. We set \(\Theta=\mathbb{R}^{|\theta|}\). Application of PINNs to Eq. (1) yields the approximation \(u_{\theta}(\mathbf{x},t)=\mathbf{z}^{L}(\mathbf{x},t)\).
We introduce the training dataset \(\mathcal{T}_{v}:=\{\tau^{i}_{v}\}_{i=1}^{N_{v}},\ \tau^{i}_{v}\in D_{v}\), \(N_{v}\in\mathbb{N}\) for \(i=1,\cdots,N_{v},v\in\Lambda\) and observations \(u_{\text{obs}}(\tau^{i}_{u})\), \(i=1,\cdots,N_{u}\). Furthermore, to each training point \(\tau^{i}_{v}\) we associate a quadrature weight \(w^{i}_{v}>0\). All throughout this manuscript, we set:
\[M:=N_{u},\quad\hat{N}:=N_{f}+N_{g}+N_{h}\quad\text{and}\quad N:=\hat{N}+M. \tag{5}\]
Note that \(M\) (resp. \(\hat{N}\)) represents the amount of information for the data-driven (resp. physics) part, by virtue of the PIML paradigm (refer to Fig. 2). The network weights \(\theta\) in Eq. (4) are trained (e.g., via ADAM optimizer [34]) by minimizing the weighted loss:
\[\mathcal{L}_{\theta}:=\sum_{v\in\Lambda}\omega_{v}\mathcal{L}^{v}_{\theta}, \quad\text{wherein}\quad\mathcal{L}^{v}_{\theta}:=\sum_{i=1}^{N_{v}}w^{i}_{v} |\xi_{v}(\tau^{i}_{v})|^{2}\quad\text{and}\quad\omega_{v}>0\quad\text{for} \quad v\in\Lambda. \tag{6}\]
We seek to obtain:
\[\theta^{\star}:=\operatorname*{argmin}_{\theta\in\Theta}(\mathcal{L}_{\theta}). \tag{7}\]
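For illustration, the weighted loss (6) can be assembled as in the following sketch, assuming dictionaries of residual callables, quadrature weights, training points and loss weights indexed by \(v\in\Lambda\) (illustrative names):

```
import numpy as np

# Assemble the weighted PINN loss of Eq. (6): for each v in Lambda, sum the
# squared residuals over the training points with their quadrature weights.
def pinn_loss(xi, w, pts, omega):
    total = 0.0
    for v in xi:                                   # v ranges over Lambda
        residual = xi[v](pts[v])                   # xi_v evaluated on T_v
        total += omega[v] * np.sum(w[v] * np.abs(residual) ** 2)
    return total
```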
The formulation for PINNs addresses the cases with no data (i.e. \(M=0\)) or physics (i.e. \(\hat{N}=0\)), thus exemplifying the PIML paradigm. Furthermore, it is able to handle time-independent operators with only minor changes; a schematic representation of a forward time-independent PINN is shown in Fig. 3.
Our setting assumes that the material parameters \(\lambda\) are known. If \(M>0\), one can solve the inverse problem by seeking:
\[(\theta^{\star}_{\text{inverse}},\lambda^{\star}_{\text{inverse}}):= \operatorname*{argmin}_{\theta\in\Theta,\lambda}\mathcal{L}_{\theta}[\lambda]. \tag{8}\]
Similarly, unique continuation problems [23], which assume incomplete information for \(f\), \(g\) and \(\hbar\), are solved through PINNs without changes. Indeed, "2D Navier-Stokes" combines a unique continuation problem with unknown parameters \(\lambda_{1},\lambda_{2}\).
### Automatic Differentiation
We aim at giving further details about back-propagation algorithms and their dual role in the context of PINNs:
1. Training the DNN by calculating \(\frac{\partial\mathcal{L}_{\theta}}{\partial\theta}\);
2. Evaluating the partial derivatives in \(\mathcal{N}[u_{\theta}(\mathbf{x},t);\lambda]\) and \(\mathcal{B}[u_{\theta}(\mathbf{x},t);\lambda]\) so as to compute the loss \(\mathcal{L}_{\theta}\).
Figure 3: Schematic representation of a PINN. A DNN with \(L=3\) (i.e. \(L-1=2\) hidden layers) and \(W=5\) learns the mapping \(\mathbf{x}\mapsto u(\mathbf{x})\). The PDE is taken into account through the residual \(\mathcal{L}_{\theta}\), and the trainable weights are optimized, leading to optimal \(\theta^{\star}\).
They consist of a forward pass to evaluate the output \(u_{\theta}\) (and \(\mathcal{L}_{\theta}\)), and a backward pass to assess the derivatives. To further elucidate back-propagation, we reproduce the informative diagram from [11] in Fig. 4. TensorFlow includes reverse-mode AD by default. Its cost is bounded with respect to \(|\theta|\) for scalar-output NNs (i.e., for \(m=1\)). The application of back-propagation (and reverse-mode AD in particular) to any training point is independent of other information, such as neighboring points or the volume of training data. This allows for data-parallel PINNs. Before detailing its implementation, we justify the \(h\)-analysis through an abstract theoretical background.
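The dual role of AD can be sketched as follows in TensorFlow 1.x, for a 1D Poisson-type residual; `neural_net` and `f` are hypothetical stand-ins for the forward pass (3) and the source term.

```
import tensorflow as tf

# Role 2: evaluate PDE derivatives via reverse-mode AD to build the loss.
x = tf.placeholder(tf.float32, shape=[None, 1])
u = neural_net(x)                       # hypothetical forward pass, Eq. (3)
u_x = tf.gradients(u, x)[0]             # first derivative via AD
u_xx = tf.gradients(u_x, x)[0]          # second derivative via AD
residual = u_xx - f(x)                  # PDE residual entering L_theta
loss = tf.reduce_mean(tf.square(residual))

# Role 1: the same backward machinery yields dL/dtheta for training.
train = tf.train.AdamOptimizer().minimize(loss)
```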
### Convergence estimates
To better understand how PINNs scale with \(N\), we follow the method in [22, 23] under a simple setting, which allows controlling the data and physics counterparts in PIML. Set \(s\geq 0\) and define the spaces:
\[\hat{Y}\underset{\mathrm{cl}}{\subset}Y^{*}\underset{\mathrm{cl}}{ \subset}Y=L^{2}(\mathbb{D},\mathbb{R}^{m})\quad\text{and}\quad\hat{X} \underset{\mathrm{cl}}{\subset}X^{*}\underset{\mathrm{cl}}{\subset}X=H^{s}( \mathbb{D},\mathbb{R}^{m}) \tag{9}\]
We assume that Eq. (1) can be recast as:
\[\mathsf{A}u =b\quad\text{with}\quad\mathsf{A}:X^{*}\to Y^{*}\quad\text{and} \quad b\in Y^{*}, \tag{10}\] \[u =u_{\text{obs}}\quad\text{in}\quad X^{*}. \tag{11}\]
We suppose that Eq. (10) is well-posed and that for any \(u,v\in\hat{X}\), there holds that:
\[\|u-v\|_{X}\leq C_{\text{pde}}(\|u\|_{\hat{X}},\|v\|_{\hat{X}})\,\|\mathsf{A}u-\mathsf{A}v\|_{Y}. \tag{12}\]
Eq. (12) is a stability estimate, which allows the total error to be controlled through a bound on the PINN residual. The residuals in Eq. (2) become:
\[\begin{cases}\xi_{D}&:=\mathsf{A}u-b\quad\text{in}\quad Y^{*},\\ \xi_{u}&=\hat{u}-u_{\text{obs}}\quad\text{in}\quad X^{*}.\end{cases} \tag{13}\]
From the expression of the residuals, we are interested in approximating the integrals:
\[\overline{g}=\int_{\mathbb{D}}g(y)dy\quad\text{and}\quad\overline{l}=\int_{\mathbb{D}}l(z)dz\quad\text{for}\quad g\in\hat{Y},\ l\in\hat{X}. \tag{14}\]
We assume that we are provided quadratures:
\[\overline{g}_{\hat{N}}=\sum_{i=1}^{\hat{N}}w_{i}g(\tau_{i}^{D})\quad\text{and}\quad\overline{l}_{M}=\sum_{i=1}^{M}q_{i}l(\tau_{i}^{u}) \tag{15}\]
for weights \(w_{i},q_{i}\) and quadrature points \(\tau_{i}^{D},\tau_{i}^{u}\in\mathbb{D}\) such that, for \(\alpha,\beta>0\):
\[|\overline{g}-\overline{g}_{\hat{N}}|\leq C_{\mathrm{quad},Y}\hat{N}^{-\alpha}\quad\text{and}\quad|\overline{l}-\overline{l}_{M}|\leq C_{\mathrm{quad},X}M^{-\beta}. \tag{16}\]
Figure 4: Overview of back-propagation. A forward pass generates activations \(y_{i}\) and computes the error \(\mathcal{L}_{\theta}(y_{3},u)\). This is followed by a backward pass, through which the error adjoint is propagated to obtain the gradient with respect to weights \(\nabla\mathcal{L}_{\theta}\) where \(\theta=(w_{1},\cdots,w_{6})\). Additionally, the gradient \(\nabla_{x}\mathcal{L}_{\theta}\) can also be computed in the same backward pass.
For any \(\omega_{u}>0\), the loss is defined as follows:
\[\mathcal{L}_{\theta}=\sum_{i=1}^{\hat{N}}w_{i}|\xi_{D,\theta}(\tau_{i}^{D})|^{2}+\omega_{u}\sum_{i=1}^{M}q_{i}|\xi_{u,\theta}(\tau_{i}^{u})|^{2}=:\varepsilon_{T,D}^{2}+\omega_{u}\varepsilon_{T,u}^{2}\approx\|\xi_{D,\theta}\|_{Y}^{2}+\omega_{u}\|\xi_{u,\theta}\|_{X}^{2},\]
with \(\varepsilon_{T,D}\) and \(\varepsilon_{T,u}\) the training errors for collocation points and observations, respectively. Notice that application of Eq. (16) to \(\xi_{D,\theta}\) and \(\xi_{u,\theta}\) yields:
\[\left|\|\xi_{D,\theta}\|_{Y}^{2}-\varepsilon_{T,D}^{2}\right|\leq C_{\mathrm{quad},Y}\hat{N}^{-\alpha}\quad\text{and}\quad\left|\|\xi_{u,\theta}\|_{X}^{2}-\varepsilon_{T,u}^{2}\right|\leq C_{\mathrm{quad},X}M^{-\beta}. \tag{17}\]
We seek to quantify the _generalization error_:
\[\varepsilon_{G}=\varepsilon_{G}(\theta^{\star}):=\|u-u^{\star}\|_{X}\quad \text{with }u^{\star}:=u_{\theta^{\star}}\quad\text{and}\quad\theta^{\star}:=\arg\min_{ \theta}\mathcal{L}_{\theta}. \tag{18}\]
We detail a new result concerning the generalization error for PINNs.
**Theorem 1** (Generalization error for generic PINNs): _Under the presented setting, there holds that:_
\[\varepsilon_{G}\leq\frac{C_{\mathrm{pde}}}{1+\omega_{u}}\left( \varepsilon_{T,D}+C_{\mathrm{quad},Y}^{1/2}\hat{N}^{-\alpha/2}\right)+\frac{ \omega_{u}}{1+\omega_{u}}\left(\varepsilon_{T,u}+C_{\mathrm{quad},X}^{1/2}M^ {-\beta/2}+\hat{\mu}\right) \tag{19}\]
_with \(\hat{\mu}:=\|u-u_{\mathrm{obs}}\|_{X}\)._
**Proof 1**: _Consider the setting of Theorem 1. There holds that:_
\[\begin{split}(1+\omega_{u})\varepsilon_{G}&=\|u-u^{\star}\|_{X}+\omega_{u}\|u-u^{\star}\|_{X},\quad\text{by Eq. \eqref{eq:pde}}\\ &\leq C_{\mathrm{pde}}\|\mathsf{A}u-\mathsf{A}u^{\star}\|_{Y}+\omega_{u}\|u-u^{\star}\|_{X},\quad\text{by Eq. (12)}\\ &\leq C_{\mathrm{pde}}\|\xi_{D,\theta^{\star}}\|_{Y}+\omega_{u}\|u-u_{\mathrm{obs}}\|_{X}+\omega_{u}\|u_{\mathrm{obs}}-u^{\star}\|_{X},\quad\text{by Eq. (13) and the triangle inequality}\\ &=C_{\mathrm{pde}}\|\xi_{D,\theta^{\star}}\|_{Y}+\omega_{u}\|\xi_{u,\theta^{\star}}\|_{X}+\omega_{u}\hat{\mu},\quad\text{by definition}\\ &\leq C_{\mathrm{pde}}\varepsilon_{T,D}+\omega_{u}\varepsilon_{T,u}+C_{\mathrm{pde}}C_{\mathrm{quad},Y}^{1/2}\hat{N}^{-\alpha/2}+\omega_{u}C_{\mathrm{quad},X}^{1/2}M^{-\beta/2}+\omega_{u}\hat{\mu},\quad\text{by Eq. (17).}\end{split}\]
The novelty of Theorem 1 is that it describes the generalization error for a simple case involving collocation points and observations. To make the result more intuitive, we rewrite Eq. (19), with \(\sim\) expressing the terms up to positive constants:
\[\varepsilon_{G}\sim(\hat{N}^{-\alpha/2}+M^{-\beta/2})+\varepsilon_{T,D}+ \varepsilon_{T,u}+\hat{\mu}. \tag{20}\]
The generalization error depends on the training errors (which are tractable during training), parameters \(\hat{N}\) and \(M\) and bias \(\hat{\mu}\).
To return to \(h\)-analysis, we now have a theoretical justification for the three regimes presented in the introduction. Let us assume that \(\hat{\mu}=0\). For small values of \(\hat{N}\) or \(M\), the bound in Theorem 1 is too loose to yield a meaningful estimate. Subsequently, the error converges as \(\max(\hat{N}^{-\alpha/2},M^{-\beta/2})\), marking the transition regime. It is paramount for practitioners to reach the permanent regime when training PINNs, giving ground to data-parallel PINNs.
## Data-parallel PINNs
### Data-distribution and Horovod
In this section, we present the data-parallel distribution for PINNs. Let us set size\(\in\mathbb{N}_{1}\) and define ranks (or workers):
\[\text{rank}=0,\cdots,\text{size}-1,\]
each rank corresponding generally to a GPU. Data-parallel distribution requires the appropriate partitioning of the training points across ranks.
We introduce \(\hat{N}_{1},M_{1}\in\mathbb{N}_{1}\) collocation points and observations, respectively, for each rank (e.g., a GPU) yielding:
\[\mathcal{T}_{\nu}=\bigcup_{\text{rank}=0}^{\text{size}-1}\mathcal{T}_{\nu}^{ \text{rank}}\quad\text{for}\quad v\in\{D,u\},\]
with
\[\hat{N}=\texttt{size}\times\hat{N}_{1},\quad M=\texttt{size}\times M_{1}\quad\text{and}\quad\mathcal{T}=\mathcal{T}_{D}\cup\mathcal{T}_{u}\quad\text{with}\quad N=\hat{N}+M. \tag{21}\]
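As a minimal illustration of the partition in Eq. (21) (array names and sizes are assumptions), each rank simply holds one shard of the global training set:

```
import numpy as np

size, N_hat_1 = 4, 32          # number of ranks and collocation points per rank
N_hat = size * N_hat_1         # total collocation points, Eq. (21)

X = np.random.rand(N_hat, 1)   # global collocation set T_D
shards = np.split(X, size)     # T_D^rank for rank = 0, ..., size - 1
assert all(shard.shape[0] == N_hat_1 for shard in shards)
```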
The data-parallel approach is as follows: We send the same synchronized copy of the DNN \(\mathcal{N}\mathcal{N}_{\theta}\) defined in Eq. (3) to each rank. Each rank evaluates the loss \(\mathcal{L}_{\theta}^{\text{rank}}\) and the gradient \(\nabla_{\theta}\mathcal{L}_{\theta}^{\text{rank}}\). The gradients are then averaged using an all-reduce operation, such as the ring all-reduce implemented in Horovod [26], which is known to be optimal with respect to the number of ranks. The process is illustrated in Figure 5 for \(\texttt{size}=4\). The ring-allreduce algorithm involves each of the size nodes communicating with two of its peers \(2\times(\texttt{size}-1)\) times [26].
It is noteworthy that data generation for data-free PINNs (i.e. with \(M=0\)) requires no modification to existing codes, provided that each rank has a different seed for random or pseudo-random sampling. Horovod allows one to apply data-parallel acceleration with minimal changes to existing code. Moreover, our approach and Horovod can easily be extended to multiple computing nodes. As pointed out in the introduction, Horovod supports popular DL backends such as TensorFlow, PyTorch and Keras. In Listing 1, we demonstrate how to integrate data-parallel distribution using Horovod with a generic PINNs implementation in TensorFlow 1.x. The highlighted changes show the steps for incorporating Horovod, which include: (i) initializing Horovod; (ii) pinning available GPUs to specific workers; (iii) wrapping the Horovod distributed optimizer and (iv) broadcasting the initial variables from the master rank (rank = 0) to the other workers.
Figure 5: Data-parallel framework. Horovod supports ring-allreduce algorithm.
```
# Initialize Horovod
import tensorflow.compat.v1 as tf
import horovod.tensorflow as hvd
hvd.init()

# Pin the GPU to be used to process the local rank (one GPU per process)
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# Build the PINN
loss = ...
opt = tf.train.AdamOptimizer()
# Add the Horovod Distributed Optimizer
opt = hvd.DistributedOptimizer(opt)

train = opt.minimize(loss)

# Initialize variables
sess = tf.Session(config=config)
init = tf.global_variables_initializer()
sess.run(init)

# Broadcast variables from rank 0 to the other workers
bcast = hvd.broadcast_global_variables(0)
sess.run(bcast)

# Train the model
n = 0
while n <= max_iter:
    sess.run(train)
    n += 1
```
Listing 1: Horovod for PINNs with TensorFlow 1.x. Data-parallel Horovod PINNs require minor changes to existing code.
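With a script instrumented as in Listing 1, training can then be launched on, e.g., 4 GPUs with `horovodrun -np 4 python pinn.py` (script name chosen for illustration); Horovod spawns one process per GPU and performs the ring all-reduce transparently.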
### Weak and strong scaling
Two key concepts in data distribution paradigms are weak and strong scaling, which can be explained as follows: Weak scaling involves increasing the problem size proportionally with the number of processors, while strong scaling involves keeping the problem size fixed and increasing the number of processors. To reformulate:
* Weak scaling: Each worker has \((\hat{N}_{1},M_{1})\) training points, and we increase the number of workers size.
* Strong scaling: We set a fixed total number of \((\hat{N}_{1},M_{1})\) training points, and we split the data over increasing size workers.
We portray weak and strong scaling in Fig. 6 for a data-free PINN with \(\hat{N}_{1}=16\). Each box represents a GPU, with the number of collocation points indicated by color.
On the left of each scaling option, we present the unaccelerated case. We introduce the training time \(t_{\texttt{size}}\) for size workers. This allows us to define the efficiency and speed-up as:
\[E_{\texttt{ff}}:=\frac{t_{1}}{t_{\texttt{size}}}\quad\text{and}\quad S_{\text{ up}}:=\texttt{size}\frac{t_{1}}{t_{\texttt{size}}}.\]
Figure 6: Weak and strong scaling for \(\hat{N}_{1}=16\) and size\(=8\).
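Both metrics are straightforward to evaluate from measured timings. A minimal sketch, using for illustration the 1D Laplace weak-scaling timings reported later in Table 2:

```
def efficiency(t_1, t_size):
    # E_ff = t_1 / t_size; the ideal value is 1
    return t_1 / t_size

def speedup(t_1, t_size, size):
    # S_up = size * t_1 / t_size; the ideal value is size
    return size * t_1 / t_size

print(efficiency(5.65, 9.40))   # ~0.60 for size = 8 (weak scaling)
print(speedup(5.65, 9.40, 8))   # ~4.8
```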
## Numerical experiments
Throughout, we apply our procedure to three cases of interest:
* "1D Laplace" equation (forward problem);
* "1D Schrodinger" equation (forward problem);
* "2D Navier-Stokes" equation (inverse problem).
For each case, we perform an \(h\)-analysis followed by Horovod data-parallel acceleration, which is applied to the domain training points (and observations for the Navier-Stokes case). Boundary loss terms are negligible due to the sufficient number of boundary data points.
### Methodology
We perform simulations in single float precision on an AMAX DL-E48A AMD Rome EPYC server with 8 Quadro RTX 8000 Nvidia GPUs, each with 48 GB of memory. We use a Docker image of Horovod 0.26.1 with Python 3.6.9 and TensorFlow 2.6.2. Throughout, we use tensorflow.compat.v1 as a backend without eager execution.
All results are available in the HorovodPINNs GitHub repository and are fully reproducible. We run each experiment 8 times with seeds defined as:
\[\mathsf{seed}+1000\times\mathsf{rank},\]
in order to obtain rank-varying training points. For domain points, Latin Hypercube Sampling is performed with pyDOE 0.3.8. Boundary points are defined over uniform grids.
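A minimal sketch of this rank-varying sampling (the base seed, domain bounds and sizes are assumptions, here chosen for the 1D Laplace case):

```
import numpy as np
from pyDOE import lhs
import horovod.tensorflow as hvd

hvd.init()
seed = 1234 + 1000 * hvd.rank()    # rank-varying seed, assumed base 1234
np.random.seed(seed)

d, N_1 = 1, 32                     # dimension and domain points per rank
lb, ub = -1.0, 7.0                 # assumed domain bounds
X_f = lb + (ub - lb) * lhs(d, samples=N_1)   # Latin Hypercube domain points
```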
We use Glorot uniform initialization [3, Chapter 8]. "Error" refers to the \(L^{2}\)-relative error taken over \(\mathcal{T}^{\mathrm{test}}\), and "Time" stands for the training time in seconds. For each case, the loss in Eq. (6) uses unit weights \(\omega_{v}=1\) and the Monte-Carlo quadrature rule \(w_{v}^{i}=\frac{1}{N_{v}}\) for \(v\in\Lambda\). Also, with \(\mathrm{vol}(\mathbb{D})\) the volume of the domain \(\mathbb{D}\), we define the density
\[\rho:=\frac{N_{f}^{1/(d+1)}}{\mathrm{vol}(\mathbb{D})^{1/(d+1)}}.\]
We introduce \(t^{k}\), the time to perform \(k\) iterations. The number of training points processed per second is then:
\[\mathrm{pointsec}:=\frac{kN_{f}}{t^{k}}.\]
For the sake of simplicity, we summarize the parameters and hyper-parameters for each case in Table 1.
### 1D Laplace
We first consider the 1D Laplace equation in \(D=[-1,7]\):
\[-\Delta u=f\quad\text{in}\quad D\quad\text{with}\quad f=\pi^{2}\sin(\pi x) \quad\text{and}\quad u(-1)=u(7)=0. \tag{22}\]
Note that the exact solution is \(u(x)=\sin(\pi x)\). We solve the problem for:
\[N_{f}=2^{i}\quad i=3,\cdots,14\quad\text{and}\quad N_{f}=80,100,105,110,115,120.\]
We set \(N_{g}=2\) and \(N_{u}=0\). Points in \(\mathcal{T}_{D}\) are generated randomly over \(D\), and \(\mathcal{T}_{b}=\{-1,7\}\). The residual in Eq. (2):
\[\xi^{f}=-\Delta u-f\]
yields the loss:
\[\mathcal{L}_{\theta}=\mathcal{L}_{\theta}^{f}+\mathcal{L}_{\theta}^{b}\quad\text{with}\quad\mathcal{L}_{\theta}^{f}:=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}|\xi^{f}(x^{i})|^{2}\quad\text{and}\quad\mathcal{L}_{\theta}^{b}:=\frac{1}{2}\left(|u(-1)|^{2}+|u(7)|^{2}\right).\]
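A minimal TensorFlow 1.x-style sketch of this residual and loss (the network builder and placeholder names are assumptions; the 4x50 tanh architecture follows Table 1):

```
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x_f = tf.placeholder(tf.float32, shape=(None, 1))   # collocation points T_D
x_b = tf.constant([[-1.0], [7.0]], tf.float32)      # boundary points T_b

def make_net(x):
    h = x
    for i in range(4):                               # depth 4, width 50
        h = tf.layers.dense(h, 50, tf.nn.tanh, name=f"hidden_{i}")
    return tf.layers.dense(h, 1, name="out")

with tf.variable_scope("pinn", reuse=tf.AUTO_REUSE):
    u_f = make_net(x_f)                              # shared weights for both
    u_b = make_net(x_b)                              # collocation and boundary

u_x = tf.gradients(u_f, x_f)[0]
u_xx = tf.gradients(u_x, x_f)[0]
f = np.pi ** 2 * tf.sin(np.pi * x_f)
xi_f = -u_xx - f                                     # residual of Eq. (22)
loss = tf.reduce_mean(xi_f ** 2) + 0.5 * tf.reduce_sum(u_b ** 2)
```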
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Case & learning rate \(l_{r}\) & width & depth & iterations & \(|\mathcal{T}^{\mathrm{test}}|\) & batch size & activation \\ \hline \hline
1D Laplace & \(10^{-4}\) & 50 & 4 & 20000 & 7801 & \(N_{f}\) & \(\tanh\) \\ \hline
1D Schrödinger & \(10^{-4}\) & 50 & 4 & 30000 & 30802 & \(N_{f}\) & \(\tanh\) \\ \hline
2D Navier-Stokes & \(10^{-4}\) & 20 & 8 & 30000 & 3604 & \(N_{f}\) & \(\tanh\) \\ \hline \end{tabular}
\end{table}
Table 1: Overview of the parameters and hyper-parameters for each case.
#### \(h\)-analysis
We perform the \(h\)-analysis for the error as portrayed before in Figure 1. The asymptotic regime occurs between \(N_{f}=32\) and \(N_{f}=256\), with a precision of \(\rho=8\), in accordance with general results for the \(h\)-analysis of traditional solvers. The permanent regime shows a slight improvement in accuracy, with the Error dropping from \(10^{-3}\) to approximately \(10^{-4}\). To complete the \(h\)-analysis, Figure 7 shows the convergence results of the ADAM optimizer for all values of \(N_{f}\). This plot reveals that each regime exhibits similar patterns. The high variability in convergence during the transition regime is particularly interesting, with some runs converging and others not. In the permanent regime, the convergence shows almost identical and stable patterns irrespective of \(N_{f}\).
#### Data-parallel implementation
We set \(N_{f,1}\equiv N_{1}=32\) and compare both weak and strong scaling to the original implementation, referred to as "no scaling".
We provide a detailed description of Fig. 8, as it will serve as the basis for future cases:
* Left-hand side: Error for \(\texttt{size}^{*}\in\{1,2,4,8\}\) corresponding to \(N_{f}\in\{16,32,64,128\}\).
* Middle: Error for weak scaling with \(N_{1}=16\) and for \(\texttt{size}\in\{1,2,4,8\}\);
* Right-hand side: Error for strong scaling with \(N_{1}=128\) and for \(\texttt{size}\in\{1,2,4,8\}\).
Figure 8: Error v/s \(\texttt{size}^{*}\) for the different scaling options.
Figure 7: 1D Laplace: Convergence for ADAM v/s \(N_{f}\) for \(l_{r}=10^{-4}\) and 20000 iterations.
To reduce ambiguity, we use the \(*\)-superscript for no scaling, since the size\({}^{*}\) runs are performed on a single rank. The color of each violin box in the figure corresponds to the number of domain training points used for each GPU.
Fig. 8 demonstrates that both weak and strong scaling yield convergence results similar to their unaccelerated counterpart. This result is one of the main findings of this work: PINNs scale properly with respect to accuracy, validating the intuition behind \(h\)-analysis and justifying the data-parallel approach. This allows one to move from the pre-asymptotic to the permanent regime by using weak scaling, or to amortize the cost of a permanent-regime application by dispatching the training points over different workers. Furthermore, the hyper-parameters, including the learning rate, remained unchanged.
Next, we summarize the data-parallelization results in Table 2 with respect to size.
In the first column, we present the time required to run 500 iterations of ADAM, referred to as \(t^{500}\). This value is averaged over one run of 30000 iterations with a warm-up of 1500 iterations (i.e. we discard the values corresponding to iterations \(0\), \(500\) and \(1000\)). We present the resulting mean value \(\pm\) standard deviation. The second column displays the efficiency of the run, evaluated with respect to \(t^{500}\).
The table reveals that data-parallel acceleration adds overhead to the training time, as anticipated. Weak scaling efficiency varies from \(70.8\%\) for \(\texttt{size}=2\) to \(60.1\%\) for \(\texttt{size}=8\), resulting in a speed-up of \(4.80\) when using 8 GPUs. Strong scaling shows a similar behavior. Furthermore, it can be observed that \(\texttt{size}=1\) yields almost equal \(t^{500}\) for \(N_{f}=16\) (\(5.65\) s) and \(N_{f}=128\) (\(5.76\) s).
To conclude, the Laplace case does not reach its full potential with Horovod due to the small batch size \(N_{1}\). However, increasing the value of \(N_{1}\) in the following cases will lead to a noticeable increase in efficiency.
### 1D Schrödinger
We solve the non-linear Schrödinger equation with periodic BCs (refer to [10, Section 3.1.1]), given over \(D\times(0,T)\) with \(D:=(-5,5)\) and \(T:=\pi/2\):
\[iu_{t}+0.5u_{xx}+|u|^{2}u =0\quad\text{in}\quad D\times(0,T),\] \[u(-5,t) =u(5,t),\] \[\partial_{x}u(-5,t) =\partial_{x}u(5,t),\] \[u(x,0) =2\,\text{sech}(x),\]
where \(u(x,t)=u^{0}(x,t)+iu^{1}(x,t)\). We apply PINNs to the system with \(m=2\) outputs \((u_{\theta}^{0},u_{\theta}^{1})\in\mathbb{R}^{m}=\mathbb{R}^{2}\). We set \(N_{g}=N_{h}=200\).
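A minimal sketch of the corresponding residual with a two-output network (builder and placeholder names are assumptions; the architecture follows Table 1):

```
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, (None, 1))
t = tf.placeholder(tf.float32, (None, 1))

def make_net(x, t):
    h = tf.concat([x, t], axis=1)
    for i in range(4):                               # depth 4, width 50
        h = tf.layers.dense(h, 50, tf.nn.tanh, name=f"hidden_{i}")
    return tf.layers.dense(h, 2, name="out")         # outputs (u0, u1)

with tf.variable_scope("pinn", reuse=tf.AUTO_REUSE):
    out = make_net(x, t)
u0, u1 = out[:, 0:1], out[:, 1:2]                    # u = u0 + i*u1
sq = u0 ** 2 + u1 ** 2                               # |u|^2

u0_t, u1_t = tf.gradients(u0, t)[0], tf.gradients(u1, t)[0]
u0_xx = tf.gradients(tf.gradients(u0, x)[0], x)[0]
u1_xx = tf.gradients(tf.gradients(u1, x)[0], x)[0]

# real and imaginary parts of i*u_t + 0.5*u_xx + |u|^2 * u
xi_re = -u1_t + 0.5 * u0_xx + sq * u0
xi_im = u0_t + 0.5 * u1_xx + sq * u1
loss_f = tf.reduce_mean(xi_re ** 2 + xi_im ** 2)
```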
#### \(h\)-analysis
To begin with, we perform the \(h\)-analysis for the parameters in Fig. 9. Again, the transition regime begins at a density of \(\rho=6\) and spans between \(N_{f}=350\) and \(N_{f}=4000\). At higher magnitudes of \(N_{f}\), the error remains approximately the same. To further illustrate this more complex case, we present the total execution time in Fig. 10. We note that the training times remain stable for \(N_{f}\leq 4000\). This observation is of great importance and should be emphasized, as it adds additional parameters to the analysis. Our work primarily focuses on the error in the \(h\)-analysis; however, execution time and memory requirements are also important considerations. We see that in this case, weak scaling is not necessary (the optimal option is to use \(\texttt{size}=1\) and \(N_{f}=4000\)). Alternatively, strong scaling can be done with \(N_{f,1}=4000\).
To gain further insight into the variability of the transition regime, we focus on \(N_{f}=1000\) and compare the solutions for \(\texttt{seed}=1234\) and \(\texttt{seed}=1236\) in Fig. 11. The upper figures depict \(|u(t,x)|\) predicted by the PINN. The lower figures compare the exact and predicted solutions at \(t\in\{0.59,0.79,0.98\}\). It is evident that the solution for \(\texttt{seed}=1234\) closely resembles the exact solution, whereas the solution for \(\texttt{seed}=1236\) fails to accurately represent the solution near \(x=0\), thereby illustrating the importance of achieving the permanent regime.
\begin{table}
\begin{tabular}{|c||c|c||c|c|} \hline & \multicolumn{2}{c||}{\(t^{500}\)} & \multicolumn{2}{c|}{\(E_{\text{ff}}\)} \\ \hline size & Weak scaling & Strong scaling & Weak scaling & Strong scaling \\ \hline \hline
1 & \(5.65\pm 0.15\) & \(5.76\pm 0.25\) & – & – \\ \hline
2 & \(7.98\pm 0.29\) & \(7.94\pm 0.27\) & \(70.8\%\) & \(72.5\%\) \\ \hline
4 & \(8.72\pm 0.14\) & \(8.72\pm 0.11\) & \(64.8\%\) & \(66.0\%\) \\ \hline
8 & \(9.40\pm 0.18\) & \(9.40\pm 0.18\) & \(60.1\%\) & \(61.2\%\) \\ \hline \end{tabular}
\end{table}
Table 2: 1D Laplace: \(t^{500}\) in seconds and efficiency \(E_{\text{ff}}\) for the weak and strong scaling.
#### Data-parallel implementation
We compare the error for the unaccelerated and data-parallel implementations for \(N_{f,1}\equiv N_{1}=500\) in Fig. 12, analogous to the analysis in Fig. 8. Again, the error is stable with size: no scaling and weak scaling behave similarly, and strong scaling is unaffected by size. We plot the training time in Fig. 13 and observe that both weak and strong scaling increase the time linearly and only slightly with size, with similar behavior for both scaling options. Fig. 14 shows the efficiency with respect to size, with white bars representing ideal scaling. The efficiency \(E_{\text{ff}}\) decreases gradually with size, with results surpassing those of the previous section: for \(\texttt{size}=8\) it reaches 77% and 76% for weak and strong scaling respectively, representing speed-ups of 6.2 and 6.1.
Figure 11: Schrödinger: Solution for \(N_{f}=1000\) for two different seeds.
Figure 10: Schrödinger: Time (in seconds) v/s \(N_{f}\).
Figure 9: Schrödinger: Error v/s \(N_{f}\).
### Inverse problem: Navier-Stokes equation
We consider the Navier-Stokes problem with \(D:=[-1,8]\times[-2,2]\), \(T=20\) and unknown parameters \(\lambda_{1},\lambda_{2}\in\mathbb{R}\). The divergence-free Navier-Stokes system is expressed as follows:
\[\begin{cases}u_{t}+\lambda_{1}(uu_{x}+vu_{y})&=-p_{x}+\lambda_{2}(u_{xx}+u_{yy}),\\ v_{t}+\lambda_{1}(uv_{x}+vv_{y})&=-p_{y}+\lambda_{2}(v_{xx}+v_{yy}),\\ u_{x}+v_{y}&=0,\end{cases} \tag{23}\]
Figure 14: Schrödinger: Efficiency v/s size.
Figure 12: Schrödinger: Error v/s size\({}^{*}\) for the different scaling options.
Figure 13: Schrödinger: Time (in seconds) v/s size\({}^{*}\) for the different scaling options.
wherein \(u(x,y,t)\) and \(v(x,y,t)\) are the \(x\) and \(y\) components of the velocity field, and \(p(x,y,t)\) the pressure. We assume that there exists \(\varphi(x,y,t)\) such that:
\[u=\varphi_{y}\quad\text{and}\quad v=-\varphi_{x}. \tag{24}\]
Under Eq. (24), the last row in Eq. (23) is automatically satisfied. This leads to the definition of the residuals:
\[\xi_{1}:=u_{t}+\lambda_{1}(uu_{x}+vu_{y})+p_{x}-\lambda_{2}(u_{xx} +u_{yy})\] \[\xi_{2}:=v_{t}+\lambda_{1}(uv_{x}+vv_{y})+p_{y}-\lambda_{2}(v_{xx} +v_{yy}),\]
hence the loss:
\[\mathcal{L}_{\theta}:=\mathcal{L}_{D,\theta}+\mathcal{L}_{u,\theta}\]
with
\[\mathcal{L}_{u,\theta}=\frac{1}{N}\sum_{i=1}^{N}\omega_{i}\left(|u(\mathbf{x}^{i})-u^{i}|^{2}+|v(\mathbf{x}^{i})-v^{i}|^{2}\right)\quad\text{and}\quad\mathcal{L}_{D,\theta}=\frac{1}{N}\sum_{i=1}^{N}\omega_{i}\left(\xi_{1}^{2}(\mathbf{x}^{i})+\xi_{2}^{2}(\mathbf{x}^{i})\right).\]
Throughout this case, we have \(N_{f}=M\), and we plot the results with respect to \(N_{f}\). Note that \(N=2N_{f}\), and that both \(\lambda_{1},\lambda_{2}\) and the BCs are unknown.
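A minimal sketch of the stream-function formulation and residuals (names are assumptions; \(\lambda_{1},\lambda_{2}\) are trainable variables, and the 8x20 architecture follows Table 1):

```
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, (None, 1))
y = tf.placeholder(tf.float32, (None, 1))
t = tf.placeholder(tf.float32, (None, 1))

lambda_1 = tf.Variable(0.0, dtype=tf.float32)        # unknown parameters,
lambda_2 = tf.Variable(0.0, dtype=tf.float32)        # learned with the network

def make_net(x, y, t):
    h = tf.concat([x, y, t], axis=1)
    for i in range(8):                               # depth 8, width 20
        h = tf.layers.dense(h, 20, tf.nn.tanh, name=f"hidden_{i}")
    return tf.layers.dense(h, 2, name="out")         # outputs (phi, p)

with tf.variable_scope("pinn", reuse=tf.AUTO_REUSE):
    out = make_net(x, y, t)
phi, p = out[:, 0:1], out[:, 1:2]

u = tf.gradients(phi, y)[0]                          # u = phi_y, Eq. (24)
v = -tf.gradients(phi, x)[0]                         # v = -phi_x

u_x, u_y, u_t = (tf.gradients(u, z)[0] for z in (x, y, t))
v_x, v_y, v_t = (tf.gradients(v, z)[0] for z in (x, y, t))
p_x, p_y = tf.gradients(p, x)[0], tf.gradients(p, y)[0]
u_xx, u_yy = tf.gradients(u_x, x)[0], tf.gradients(u_y, y)[0]
v_xx, v_yy = tf.gradients(v_x, x)[0], tf.gradients(v_y, y)[0]

xi_1 = u_t + lambda_1 * (u * u_x + v * u_y) + p_x - lambda_2 * (u_xx + u_yy)
xi_2 = v_t + lambda_1 * (u * v_x + v * v_y) + p_y - lambda_2 * (v_xx + v_yy)
```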
#### \(h\)-analysis
We conduct the \(h\)-analysis and show the error in Fig. 15, observing no differences with the previous cases.
Surprisingly, the permanent regime is reached already for \(N_{f}=1000\), despite the problem being a 3-dimensional, non-linear, inverse one. This corresponds to low values of \(\rho\), indicating that PINNs seem to mitigate the curse of dimensionality. In fact, it was achieved with only 1.2 points per unit per dimension. The total training time is presented in Fig. 16, where it can be seen to remain stable up to \(N_{f}=5000\), after which it increases linearly with \(N_{f}\).
Figure 16: Navier-Stokes: Time (in seconds) v/s \(N_{f}\).
Figure 15: Navier-Stokes: Error v/s \(N_{f}\).
#### Data-parallel implementation
We run the data-parallel simulations, setting \(N_{f,1}\equiv N_{1}=M_{1}=500\). As shown in Fig. 17, the simulations exhibit stable accuracy with size. The execution time increases moderately with size, as illustrated in Fig. 18. The training time decreases with \(N_{f}\) for the no scaling case.
However, this behavior is temporary (refer to Fig. 16 above). We conclude our analysis by plotting the efficiency with respect to size in Fig. 19. Efficiency decreases with increasing size but shows the best results so far, surpassing 80%. This encouraging result sets the stage for further exploration of more intricate applications.
Figure 19: Navier-Stokes: Efficiency v/s size.
Figure 17: Navier-Stokes: Error v/s size\({}^{*}\) for the different scaling options.
Figure 18: Navier-Stokes: Time (in seconds) v/s size\({}^{*}\) for the different scaling options.
## Conclusion
In this work, we proposed a novel data-parallelization approach for PIML with a focus on PINNs. We provided a thorough \(h\)-analysis and associated theoretical results to support our approach, as well as implementation considerations to facilitate its realization with Horovod data acceleration. Additionally, we ran reproducible numerical experiments to demonstrate the scalability of our approach. Further work includes implementing Horovod acceleration in the DeepXDE [35] library, coupling localized PINNs with domain decomposition methods, and applications on larger GPU servers (e.g., with more than 100 GPUs).
## Data availability
The code required to reproduce these findings is available to download from [https://github.com/pescap/HorovodPINNs](https://github.com/pescap/HorovodPINNs).
|
2307.01017 | Scalable quantum neural networks by few quantum resources | This paper focuses on the construction of a general parametric model that can
be implemented executing multiple swap tests over few qubits and applying a
suitable measurement protocol. The model turns out to be equivalent to a
two-layer feedforward neural network which can be realized combining small
quantum modules. The advantages and the perspectives of the proposed quantum
method are discussed. | Davide Pastorello, Enrico Blanzieri | 2023-07-03T13:47:14Z | http://arxiv.org/abs/2307.01017v1 | # Scalable quantum neural networks by few quantum resources
###### Abstract
This paper focuses on the construction of a general parametric model that can be implemented executing multiple swap tests over few qubits and applying a suitable measurement protocol. The model turns out to be equivalent to a two-layer feedforward neural network which can be realized combining small quantum modules. The advantages and the perspectives of the proposed quantum method are discussed.
**Keywords**: Neural networks; quantum machine learning; scalable quantum computing.
## 1 Introduction
The application of quantum computation to machine learning tasks offers some interesting solutions characterized by a quantum advantage with respect to their classical counterparts in terms of time and space complexity, expressive power, and generalization capability, at least on a theoretical level [1, 2, 3]. Moreover, quantum machine learning (QML) seems to be a promising path to explore in depth the possibilities offered by quantum machines and to exploit existing and near-term quantum computers for tackling real-world problems. In this respect, the research activity aimed at the development of novel QML algorithms must face the lack of large-scale, fault-tolerant, universal quantum computers. In order to promote QML as an operative technology, we should provide solutions that are not too demanding in terms of quantum resources and, in the meanwhile, supply quantum advantages w.r.t. classical information processing. The present paper focuses on a solution that represents a trade-off between the definition of a quantum model, specifically a quantum neural network, and the availability of few quantum resources as a realistic assumption.
In this work, we assume the availability of very specific quantum hardware that is able to perform the swap test1 over two \(k\)-qubit registers (\(k\geq 1\)). This assumption is motivated by several proposals for a physical implementation of the swap test, so it turns out to be rather realistic for few qubits [5, 6]. Then, we assume to combine many copies of the quantum hardware, as modules, in order to obtain a larger structure for data processing. In particular, we argue that a combined execution of many independent swap tests, performing only local measurements, allows one to create a two-layer feedforward neural network presenting some quantum advantages in terms of space complexity, with the possibility of inputting quantum data. The main result is a general construction of a meaningful quantum model based on very few quantum resources, whose scalability is allowed by the requirement of quantum coherence over only a small number of qubits, because a major role is played by measurement processes.
Footnote 1: The _swap test_ is a simple but useful technique to evaluate the similarity between two unknown pure states [4]. It is briefly illustrated in the next section.
The swap test is a well-known tool for developing QML techniques, so there are some proposals of quantum neural networks based on it. For instance, there are quantum neurons where the swap test is coupled to the quantum phase estimation algorithm for realizing quantum neural networks with quantum input and quantum output [7, 8]. Our approach is different, we assume only the capability of performing the swap test over few qubits for constructing neural networks where the input is a quantum state and the output is always classical. Our goal is the definition of a quantum model which can be realized assuming the availability of few quantum resources.
The paper is structured as follows: in section 2 we recall some basic notions about two-layer feedforward neural networks and the definition of swap test as the building block for the proposed modular architecture. In section 3, we define the parametric model obtained combining small modules acting on few qubits, showing its equivalence to a neural network after a suitable measurement protocol is performed. In section 4, we discuss the impact of the obtained results. In the final section, we draw some concluding remarks highlighting the perspectives of the proposed modular approach.
## 2 Background
In this section we introduce some basic notions that are crucial for the definition of the model proposed and discussed in the paper. In particular, we define the structure of the neural network we consider in the main part of the work and the necessary quantum computing fundamentals.
_Definition 1_: _Let \(X\) and \(Y\) be non-empty sets respectively called **input domain** and **output domain**. A (deterministic) **predictive model** is a function_
\[y=f(x;\theta)\qquad x\in X,\ y\in Y,\]
_where \(\theta=(\theta_{1},...,\theta_{d})\) is a set of real parameters._
In supervised learning, given a training set \(\{(x_{1},y_{1}),\cdots,(x_{m},y_{m})\}\) of pairs input-output the general task is finding the parameters \(\theta\) such that the model assigns the right output to any unlabelled input. The parameters are typically determined optimizing a _loss function_ like \(\mathcal{L}(\theta)=\sum_{i=1}^{m}d(y_{i},f(x_{i},\theta))\) where \(d\) is a metric defined over \(Y\). A remarkable example of predictive model,
where \(X=\mathbb{R}^{n}\) and \(Y=\mathbb{R}\), is the following _two-layer neural network_ with \(h\) hidden nodes:
\[f(\mathbf{x};\mathbf{p},W)=\sum_{i=1}^{h}p_{i}\varphi(\mathbf{x}\cdot\mathbf{w}_ {i}), \tag{1}\]
where \(\mathbf{w}_{i}\in\mathbb{R}^{n}\) is the \(i\)th row of the \(h\times n\) weight matrix \(W\) of the first layer, \(p_{i}\) is the \(i\)th coefficient of the second layer, and \(\varphi:\mathbb{R}\rightarrow\mathbb{R}\) is a continuous non-linear function. Despite its simplicity, this model is relevant because it is a _universal approximator_ in the sense that it can approximate, up to finite precision, any Borel function \(\mathbb{R}^{n}\rightarrow\mathbb{R}\), provided a sufficient number of hidden nodes is available [9].
Let us focus on a quadratic activation function for a two-layer network. It is not widely used in practice; however, it has been extensively studied [10, 11, 12, 13], proving, in particular, that networks with quadratic activation are as expressive as networks with threshold activation and can be learned in polynomial time [10]. Moreover, the quadratic activation presents the same convexity as the rectified linear unit (ReLU) but with a wider activating zone, so the loss curves of networks using these two activations converge very similarly [13]. In addition to the choice of a quadratic activation, let us consider the _cosine similarity_ instead of the dot product as pre-activation:
\[\cos(\mathbf{x},\mathbf{w}):=\frac{\mathbf{x}\cdot\mathbf{w}}{\parallel \mathbf{x}\parallel\parallel\mathbf{w}\parallel}. \tag{2}\]
The _cosine pre-activation_ is a normalization procedure adopted to decrease the variance of the neuron output, which can be large since the dot product is unbounded. A large variance can make the model sensitive to changes of the input distribution, thus resulting in poor generalization. Experiments show that cosine pre-activation achieves better performances, in terms of test error in classification, compared to batch, weight, and layer normalization [14]. The aim of the paper is to show that a simple quantum processing, combined in multiple copies, can be used to implement a varying-size two-layer network with cosine pre-activation and quadratic activation.
The main assumption of this work is that few quantum resources are available, in the sense that we consider quantum hardware characterized by a small number of qubits on which we can act with few quantum gates, being far from the availability of a large-scale, fault-tolerant, universal quantum computer. The central object is a well-known tool of QML, the _swap test_, which we assume to be implemented in a specific-purpose quantum device, characterized by an arbitrarily small number of qubits, that we call a _module_. The questions we address are: _What can we usefully do if we are able to build such a module with a small number of qubits? Can we construct a neural network stacking such modules with some interesting properties?_
As a postulate of quantum mechanics, any quantum system is described in a Hilbert space. We consider only the finite-dimensional case where a Hilbert space \(\mathsf{H}\) is nothing but a complex vector space equipped with an _inner product_\(\langle\ \mid\ \rangle:\mathsf{H}\times\mathsf{H}\rightarrow\mathbb{C}\). A quantum system is described in the Hilbert space \(\mathsf{H}\) in the sense that its physical states are in bijective correspondence with the projective rays in \(\mathsf{H}\). Then a (pure) quantum state can be identified to a normalized vector \(|\psi\rangle\in\mathsf{H}\) up to a multiplicative phase factor. In other words, quantum states are described by one-dimensional orthogonal projectors of the form \(\rho_{\psi}=|\psi\rangle\langle\psi|\) where \(\langle\psi|\) is an element of the dual of \(\mathsf{H}\).
Definition 2Let \(\mathsf{H}\) be a (finite-dimensional) Hilbert space and \(I\) a finite set. A **measurement process** is a collection of linear operators \(\mathcal{M}=\{E_{\alpha}\}_{\alpha\in I}\) such that \(E_{\alpha}\geq 0\) for each \(\alpha\in I\) and \(\sum_{\alpha}E_{\alpha}=\mathbb{I}\), where \(\mathbb{I}\) is the identity operator on \(\mathsf{H}\).
Given an orthonormal basis \(\{|i\rangle\}_{i=1,\ldots,n}\) of \(\mathsf{H}\), the **measurement in the basis \(\{|i\rangle\}_{i=1,\ldots,n}\)** is the measurement process defined by \(\mathcal{M}=\{|i\rangle\langle i|\}_{i=1,\ldots,n}\).
The probability of obtaining the outcome \(\alpha\in I\) by the measurement \(\mathcal{M}=\{E_{\alpha}\}_{\alpha\in I}\) on the system in the state \(|\psi\rangle\) is \(\mathbb{P}_{\psi}(\alpha)=\langle\psi|E_{\alpha}\psi\rangle\). If \(\mathcal{M}\) is a measurement in the basis \(\{|i\rangle\}_{i=1,\ldots,n}\) then the probability reduces to \(\mathbb{P}_{\psi}(i)=|\langle\psi|i\rangle|^{2}\). A _qubit_ is any quantum system described in the Hilbert space \(\mathbb{C}^{2}\), denoting the standard basis of \(\mathbb{C}^{2}\) by \(\{|0\rangle,|1\rangle\}\) we have the _computational measurement_ on a single qubit \(\mathcal{M}=\{|0\rangle\langle 0|,|1\rangle\langle 1|\}\).
Another postulate of quantum mechanics reads that, given two quantum systems \(S_{1}\) and \(S_{2}\) described in the Hilbert spaces \(\mathsf{H}_{1}\) and \(\mathsf{H}_{2}\) respectively, the Hilbert space of the composite system \(S_{1}+S_{2}\) is given by the tensor product Hilbert space \(\mathsf{H}_{1}\otimes\mathsf{H}_{2}\). Therefore there are states of the product form \(|\psi_{1}\rangle\otimes|\psi_{2}\rangle\in\mathsf{H}_{1}\otimes\mathsf{H}_{2}\) (in compact notation: \(|\psi_{1}\psi_{2}\rangle\)) called _separable states_ and states that cannot be factorized called _entangled states_. The latter describe a kind of non-local correlation without a counterpart in classical physics, which is the key point of the most relevant advantages in quantum information processing.
Encoding classical data into quantum states is a crucial issue in quantum computing and specifically quantum machine learning. One of the most popular quantum encoding is the so-called _amplitude encoding_. Assume we have a data instance represented by a complex vector \(\mathbf{x}\in\mathbb{C}^{n}\) then it can be encoded into the amplitudes of a quantum state:
\[|\mathbf{x}\rangle=\frac{1}{\parallel\mathbf{x}\parallel}\sum_{i=1}^{n}x_{i}| i\rangle, \tag{3}\]
where \(\{|i\rangle\}_{i=1,\ldots,n}\) is a fixed orthonormal basis, called _computational basis_, of the Hilbert space of the considered quantum physical system. If the quantum system is made up of \(k\) qubits, say a \(k\)_-qubit register_, with Hilbert space \(\mathsf{H}=(\mathbb{C}^{2})^{\otimes k}\), then for storing \(\mathbf{x}\in\mathbb{C}^{n}\) we need \(k=\lceil\log n\rceil\) qubits, and the commonly used computational basis is \(\{|d\rangle:d\in\{0,1\}^{k}\}\). Therefore, amplitude encoding exploits the exponential storing capacity of a quantum memory, but it does not allow the direct retrieval of the stored data. Indeed, the amplitudes cannot be observed, and only the probabilities \(|x_{i}|^{2}/\parallel\mathbf{x}\parallel^{2}\) can be estimated by repeated measurements in the computational basis.
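A classical sketch of the amplitude-encoding map of Eq. (3) (function name assumed; only the amplitude array is built, zero-padded to a power of two):

```
import numpy as np

def amplitude_encode(x):
    # normalized amplitudes of Eq. (3), padded to 2^ceil(log2(n)) entries
    x = np.asarray(x, dtype=complex)
    dim = 1 << max(1, int(np.ceil(np.log2(len(x)))))
    amp = np.zeros(dim, dtype=complex)
    amp[:len(x)] = x / np.linalg.norm(x)
    return amp

probs = np.abs(amplitude_encode([3.0, 4.0])) ** 2    # [0.36, 0.64]
```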
Quantum gates are unitary operators acting on qubit registers. A basic example is the _CNOT gate_, the 2-qubit gate defined on the computational basis by \(\mathsf{CNOT}|ab\rangle=|a\,(a\oplus b)\rangle\) for \(a,b\in\{0,1\}\), where \(\oplus\) is the sum modulo 2. Its circuital symbol is:
The _swap gate_ is a 2-qubit gate which exchanges the input qubits; its circuital definition in terms of CNOT gates is:
The controlled swap gate is called the _Fredkin gate_; its action can be expressed in the following form in the computational basis of three qubits, where the first qubit is the control:
\[\mathsf{F}|abc\rangle=(1-a)|abc\rangle+a|acb\rangle\qquad a,b,c\in\{0,1\}. \tag{7}\]
The swap test is a very simple procedure to estimate the probability \(|\langle\psi|\varphi\rangle|^{2}\), given two unknown pure states \(|\psi\rangle\) and \(|\varphi\rangle\)[4]. The procedure requires two applications of the Hadamard gate and a controlled swap operation between two multiple qubit registers generalizing the Fredkin gate. It is straightforward to show that the controlled swap of two \(k\)-qubit registers can be implemented with \(k\) Fredkin gates controlled by the same qubit. Consider two \(k\)-qubit registers initialized in \(|\psi\rangle\) and \(|\varphi\rangle\) and a control qubit prepared in \(|0\rangle\). Assume to act with the following circuit on the \(2k+1\) qubits:
[Circuit and Eqs. (8)–(9): the swap test applies a Hadamard gate to the control qubit, a controlled swap of the two \(k\)-qubit registers (\(k\) Fredkin gates), a second Hadamard gate, and finally a computational measurement of the control qubit, whose outcome probability \(\mathbb{P}(0)\) is given by Eq. (9).]
where \(|\psi\rangle\) and \(|\varphi\rangle\) are \(k\)-qubit states and \(\mathbb{P}(0)\) is given by (9). In the following, we argue that independent swap tests between \(k\)-qubit registers and a suitable measurement protocol over the control qubits can be used for implementing a two-layer feedforward neural network.
## 3 The proposed model
Let us consider \(m\) copies of the process diagram (10), which we call _modules_; each module presents two \(k\)-qubit registers and one auxiliary qubit to perform the swap test. Then the total number of qubits, including the controls, is \(m(2k+1)\). Let \(\mathbf{x}\in\mathbb{R}^{N}\) be an input vector and \(\mathbf{w}\in\mathbb{R}^{N}\) a vector of parameters. If \(k\geq\log N\), then a single module is sufficient to store \(\mathbf{x}\) and \(\mathbf{w}\) in its registers, and the action of the swap test, which returns the probability \(\frac{1}{2}(1-\cos^{2}(\mathbf{x},\mathbf{w}))\) as the value of a non-linear function of \(\cos(\mathbf{x},\mathbf{w})\), realizes an elementary perceptron with quadratic activation and cosine pre-activation. However, we are not interested in implementing a single perceptron; in accordance with the general assumption of few available quantum resources, we consider the case \(k<\log N\), so in each module we can encode the input and weight vectors only partially.
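Classically, the value computed by one such module is easy to simulate (function name assumed; the sign convention follows the probability \(\frac{1}{2}(1-\cos^{2}(\mathbf{x},\mathbf{w}))\) above):

```
import numpy as np

def module_output(x, w):
    # swap-test value of one module: (1 - cos^2(x, w)) / 2
    c = np.dot(x, w) / (np.linalg.norm(x) * np.linalg.norm(w))
    return 0.5 * (1.0 - c ** 2)

x = np.array([1.0, 0.0, 1.0, 0.0])
w = np.array([1.0, 1.0, 0.0, 0.0])
print(module_output(x, w))   # cos = 0.5, output 0.375
```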
**Proposition 3**: _Let \(\mathcal{R}\) be a \(k\)-qubit register. Let \(N>2^{k}\), the number \(m\) of non-interacting2 copies of \(\mathcal{R}\) needed for storing \(\textbf{x}\in\mathbb{C}^{N}\) into a state \(\bigotimes_{i=1}^{m}|\psi_{i}\rangle\) by amplitude encoding is:_
Footnote 2: In this context, _non-interacting registers_ means that any operation on the registers, including those required for state preparation, must be performed locally on each register. Therefore, no entanglement can be produced among different \(k\)-qubit registers.
\[m=\mathcal{O}(N2^{-k}). \tag{11}\]
Proof.: In each register, we can encode at most \(2^{k}\) entries of \(\mathbf{x}\) into the amplitudes of a \(k\)-qubit state. Therefore the minimum number \(m\) of registers for storing \(\mathbf{x}\) into a pure state, where the registers are non-entangled with each other, satisfies \((m-1)2^{k}<N\leq m2^{k}\).
Considering \(m\) independent modules, we can encode at most \(m2^{k}\) input entries and \(m2^{k}\) weights into the available qubits (divided into the _input register_ and the _weight register_ of each module). Therefore, by proposition 3, the total number of qubits required to encode and process \(\mathbf{x}\in\mathbb{R}^{N}\) is:
\[n=\mathcal{O}\left(N(2k+1)2^{-k}\right). \tag{12}\]
Since a single module does not have an input register large enough to encode all the entries of the input \(\mathbf{x}\) (i.e. \(k<\log N\)), we have to settle for a piecewise amplitude encoding parallelizing
the modules; as a consequence, the total number of qubits scales linearly in \(N\). However, there is an exponential decay in the size of a single module, which can be relevant in specific cases. For instance, if \(N=128\) and we have a single module with a 5-qubit input register and a 5-qubit weight register, then 4 modules and 44 qubits are required to store an input vector and a weight vector. Note that in this context 44 qubits is not considered a _large number_ because the coherence must be maintained only inside a single module (that is, 11 qubits: two 5-qubit registers and 1 control qubit). Thus, the parallelization of modules with few qubits cannot, of course, reach an exponential storing capacity, but there are cases with realistic values of \(k\) and \(N\) that can be addressed efficiently by modularity.
In order to introduce the parametric model that we can implement executing swap tests in parallel, let us partition the input vector \(\mathbf{x}\in\mathbb{R}^{N}\) and the weight vector \(\mathbf{w}\in\mathbb{R}^{N}\) into pieces to feed the \(m\) modules (each one with \(2k+1\) qubits) assuming that \(N=m2^{k}\) without loss of generality:
\[\mathbf{x}=(\mathbf{x}^{(1)},...,\mathbf{x}^{(m)})\quad\text{where}\quad \mathbf{x}^{(l)}=(x_{(l-1)2^{k}+1},...,x_{l2^{k}}),\qquad l=1,...,m \tag{13}\]
\[\mathbf{w}=(\mathbf{w}^{(1)},...,\mathbf{w}^{(m)})\quad\text{where}\quad \mathbf{w}^{(l)}=(w_{(l-1)2^{k}+1},...,w_{l2^{k}}),\qquad l=1,...,m\]
\(\mathbf{x}^{(l)}\) can be stored into the \(k\)-qubit input register of the \(l\)th module and \(\mathbf{w}^{(l)}\) can be stored into the \(k\)-qubit weight register of the \(l\)th module, then the computation of the norms \(\|\,\mathbf{x}^{(l)}\,\|\) and \(\|\,\mathbf{w}^{(l)}\,\|\), for any \(l=1,...,m\), are required for the encoding. The parallelization of the modules to perform \(m\) swap tests can be easily visualized as follows:
(14)
where \(\mathbb{P}_{l}(0)\) is the probability of measuring \(0\) over the \(l\)th control qubit which can be estimated within an error \(\epsilon\) with \(\mathcal{O}(\epsilon^{-2})\) runs. The multiple swap tests depicted in (14) can be seen as a layer of perceptrons with connectivity limited by the mutual independence of the modules. On this first elementary structure, we construct the second layer applying a specific _local measurement protocol_ to the \(m\) control qubits.
By definition 2, a measurement process provides a classical probability distribution over the set of possible outcomes. In operational terms, a measurement process is repeated \(\mathcal{N}\) times and a string of outcomes \(x\in\mathbb{R}^{\mathcal{N}}\) is stored, where the probability distribution is estimated in terms of relative frequencies. In the following definition, let us assume that only \(\widetilde{\mathcal{N}}<\mathcal{N}\) outcomes are stored after \(\mathcal{N}\) repetitions of a considered measurement process \(\mathcal{M}\).
_Definition 4_: _Let \(\mathcal{M}\) be a measurement process. Assume to repeat the measurement \(\mathcal{N}\) times storing \(\widetilde{\mathcal{N}}<\mathcal{N}\) outcomes, then \(\mathcal{M}\) is called_ **lossy measurement** _with_ **efficiency**_\(p=\widetilde{\mathcal{N}}/\mathcal{N}\)._
The measurement protocol that we need for constructing the proposed model is characterized by local measurements over single qubits characterized by given efficiencies.
_Definition 5_: _Given a system composed by \(m\) qubits, a_ **local measurement protocol** _is a collection \(\{(\mathcal{M}_{l},p_{l})\}_{l=1,...,m}\) where \(\mathcal{M}_{l}\) is a lossy measurement process on the \(l\)th qubit and \(p_{l}\) its efficiency. If \(\mathcal{M}_{l}=\{|0\rangle\langle 0|,|1\rangle\langle 1|\}\) for any \(l=1,...,m\) then \(\{(\mathcal{M}_{l},p_{l})\}_{l=1,...,m}\) is called_ **computational measurement protocol**_._
In the case of the computational measurement protocol over \(m\) qubits, the outcomes provided by a single shot of the protocol form a string \(b=\{\emptyset,0,1\}^{m}\), where the symbol \(\emptyset\) denotes a missing outcome. In the case the measurement processes \(\mathcal{M}_{l}\) are lossless, the efficiency \(p_{l}\) can be controlled selecting the number \(\widetilde{\mathcal{N}}\) of times a measurement \(\mathcal{M}_{l}\) is actually performed when the overall protocol is repeated \(\mathcal{N}\) times. From the physical viewpoint, one can also select the efficiency of the measurements controlling the losses of employed channels.
Given the \(m\) modules executing the multiple (independent) swap tests depicted in diagram (14), let us define a model with parameters given by the weights \(\mathbf{w}\in\mathbb{R}^{N}\) and the efficiencies \(\mathbf{p}=(p_{1},...,p_{m})\) such that the prediction associated to the input \(\mathbf{x}\in\mathbb{R}^{N}\) is given by:
\[f(\mathbf{x};\mathbf{w},\mathbf{p})=\frac{\mathcal{N}_{0}}{\mathcal{N}}, \tag{15}\]
where \(\mathcal{N}\) is the number of repetitions of the computational measurement protocol, with efficiencies \(\mathbf{p}=(p_{1},...,p_{m})\), on the control qubits and \(\mathcal{N}_{0}\) is the total number of obtained zeros.
_Proposition 6_: _The model (15) is equivalent, up to an error \(\epsilon=\mathcal{O}(1/\sqrt{\mathcal{N}})\), to the following two-layer feedforward neural network with cosine pre-activation and activation \(\varphi(z)=\frac{1}{2}(1-z^{2})\) in the hidden neurons:_
Proof.: By the application of the computational measurement protocol with efficiencies \(\{p_{l}\}_{l=1,\ldots,m}\) to the scheme (14), the overall probability of obtaining the outcome \(0\) on the \(l\)th control qubit is:
\[\widehat{\mathbb{P}}_{l}(0)=p_{l}\,\mathbb{P}_{l}(0)=\frac{p_{l}}{2}\left(1- \cos^{2}(\mathbf{x}^{(l)},\mathbf{w}^{(l)})\right). \tag{17}\]
The probability can be estimated, up to an error \(\epsilon\), by the relative frequency \(\mathcal{N}_{0}^{(l)}/\mathcal{N}\) where \(\mathcal{N}_{0}^{(l)}\) is the number of zeros obtained on the \(l\)th qubit by \(\mathcal{N}=\mathcal{O}(\epsilon^{-2})\) repetitions of the computational measurement protocol. By construction, the output of the neural network (16) corresponds to the value \(\sum_{l}\widehat{\mathbb{P}}_{l}(0)\) that can be estimated by:
\[\sum_{l=1}^{m}\frac{\mathcal{N}_{0}^{(l)}}{\mathcal{N}}=\frac{\mathcal{N}_{0} }{\mathcal{N}}.\]
Therefore, for any input \(\mathbf{x}\) and any choice of the parameters \(\mathbf{w}\) and \(\mathbf{p}\) the output \(\mathcal{N}_{0}/\mathcal{N}\) of the model (15) corresponds to the output of the neural network (16) up to an error \(\epsilon=\mathcal{O}(1/\sqrt{\mathcal{N}})\).
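A Monte-Carlo sketch of the model in Eq. (15) (names assumed; each control qubit is sampled with its efficiency folded into the outcome probability of Eq. (17)):

```
import numpy as np

rng = np.random.default_rng(0)

def model_output(x, w, p, n_shots=100_000):
    # estimate f(x; w, p) = N_0 / N by sampling the m control qubits
    m = len(p)
    xs, ws = np.split(x, m), np.split(w, m)   # pieces of length 2^k
    zeros = 0
    for x_l, w_l, p_l in zip(xs, ws, p):
        c = x_l @ w_l / (np.linalg.norm(x_l) * np.linalg.norm(w_l))
        prob0 = p_l * 0.5 * (1.0 - c ** 2)    # Eq. (17)
        zeros += rng.binomial(n_shots, prob0)
    return zeros / n_shots

x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([4.0, 3.0, 2.0, 1.0])
print(model_output(x, w, p=[0.5, 1.0]))       # m = 2 modules of size k = 1
```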
Proposition 6 states that we can construct feedforward neural networks by combining swap tests as modules of fixed size, i.e. defined over qubit registers with a given number of qubits. The neural network (16) fails to be universal because of the sparse connections in the first layer; however, if the length of the input is smaller than \(2^{k}\), we can recover a universal approximator, since we can construct a network with an arbitrary number of hidden nodes and complete topology. More precisely, let us note that the number of hidden and output neurons can be arbitrarily increased. In fact, with additional runs of the swap tests, by permutation of the input entries and re-initialization of the weights \(\mathbf{w}\), one can realize the connections between the input neurons and an arbitrary number of hidden neurons under the topological constraint given by the size of the modules. Collecting the outcomes obtained by the computational measurement protocol with varying efficiencies, we can define the connections from the hidden neurons to an arbitrary number of output nodes. In this respect, the following proposition enables the implementation of a complete feedforward neural network by a suitable measurement protocol.
Proposition 7: _Given a single module of size \(k\) and the positive integers \(R\) and \(Q\), let be \(\mathbf{w}_{1},...,\mathbf{w}_{R}\in\mathbb{R}^{2^{k}}\), and \(p_{rq}\in[0,1]\) for any \(r=1,...,R\) and \(q=1,...,Q\). Let us consider the following algorithm:_
```
Input: Input vector \(\mathbf{x}\in\mathbb{R}^{2^{k}}\)
Result: Output vector \(\mathbf{y}\in\mathbb{R}^{Q}\)
foreach \(r\gets 1,...,R\) do
  foreach \(q\gets 1,...,Q\) do
    run the swap test \(\mathcal{N}\) times on \(|\mathbf{x}\rangle\) and \(|\mathbf{w}_{r}\rangle\) and measure the control qubit with efficiency \(p_{rq}\);
    collect the outcomes \(b_{rq}\in\{\emptyset,0,1\}^{\mathcal{N}}\);
  end
end
foreach \(q\gets 1,...,Q\) do
  construct the concatenation \(b_{q}=b_{1q}b_{2q}\cdots b_{Rq}\in\{\emptyset,0,1\}^{R\times\mathcal{N}}\);
  \(y_{q}\leftarrow\frac{\mathcal{N}_{0,q}}{R\mathcal{N}}\)   [\(\mathcal{N}_{0,q}\) is the number of zeros in the string \(b_{q}\)]
end
return \(\mathbf{y}=(y_{1},...,y_{Q})\)
```
_The output \(\mathbf{y}\in\mathbb{R}^{Q}\) corresponds, up to an error \(\mathcal{O}(1/\sqrt{R\mathcal{N}})\), to the output of a fully connected feedforward neural network, with cosine pre-activation and activation \(\varphi(z)=\frac{1}{2}(1-z^{2})\), with \(2^{k}\) input nodes, \(R\) hidden nodes, and \(Q\) output nodes._
Proof.: Let us prove the claim simply for \(k=R=Q=2\), since the generalization is straightforward. Consider a module with \(k=2\) and the execution of the following steps (\(Q=2\)):
1. run the swap test \(\mathcal{N}\) times with weights \(\mathbf{w}_{1}\in\mathbb{R}^{4}\) and measure the control qubit with efficiency \(p_{11}\). Collect the outcomes \(b_{11}\in\{\emptyset,0,1\}^{\mathcal{N}}\);
2. run the swap test \(\mathcal{N}\) times, with the same weights and measure the control qubit with efficiency \(p_{12}\). Collect the outcomes \(b_{12}\in\{\emptyset,0,1\}^{\mathcal{N}}\) ;
By proposition 6, computing the relative frequency of \(0\) over \(b_{11}\) and \(b_{12}\) respectively, we obtain the estimations of the probabilities \(\frac{p_{11}}{2}(1-\cos^{2}(\mathbf{w}_{1},\mathbf{x}))\) and \(\frac{p_{12}}{2}(1-\cos^{2}(\mathbf{w}_{1},\mathbf{x}))\), implementing the following network up to an error \(\mathcal{O}(1/\sqrt{\mathcal{N}})\):
Repeating the steps 1 and 2 with weights \(\mathbf{w}_{2}\in\mathbb{R}^{4}\) and efficiencies \(p_{21}\) and \(p_{22}\), we can collect also the outcomes \(b_{21},b_{22}\in\{\emptyset,0,1\}^{\mathcal{N}}\). Computing the relative frequency of \(0\) in the strings \(b_{21}\) and \(b_{22}\), we can implement another network as well:
In order to obtain a complete network, we can combine the two networks above (\(R=2\)), providing the outputs:
\[y_{1} =\frac{p_{11}}{2}(1-\cos^{2}(\mathbf{w}_{1},\mathbf{x}))+\frac{p_{21 }}{2}(1-\cos^{2}(\mathbf{w}_{2},\mathbf{x})),\] \[y_{2} =\frac{p_{12}}{2}(1-\cos^{2}(\mathbf{w}_{1},\mathbf{x}))+\frac{p_{ 22}}{2}(1-\cos^{2}(\mathbf{w}_{2},\mathbf{x})).\]
For estimating the sum \(y_{1}\) of the two probabilities we take the sum of the relative frequencies of the outcome \(0\) in the string \(b_{11}\) and in the string \(b_{21}\) which obviously corresponds to the relative frequency of the outcome zero in the concatenation \(b_{11}b_{21}\). In the same way, we estimate \(y_{2}\) as the relative frequency of \(0\) in the string \(b_{12}b_{22}\). In other words, given an input \(\mathbf{x}\in\mathbb{R}^{4}\) and the parameters \(\mathbf{w}_{1},\mathbf{w}_{2},p_{11},p_{12},p_{21},p_{22}\), we have a model that outputs:
\[y_{1}=\frac{\mathcal{N}_{0,1}}{2\mathcal{N}}\qquad\text{and}\qquad y_{2}= \frac{\mathcal{N}_{0,2}}{2\mathcal{N}},\]
where \(\mathcal{N}_{0,i}\) is the number of zeros in the concatenation \(b_{i}=b_{1i}b_{2i}\). Therefore, \(y_{1}\) and \(y_{2}\) correspond, up to an error \(\mathcal{O}(1/\sqrt{2\mathcal{N}})\), to the outputs of the fully connected network:
where the \(8\) connections in the first layer are weighted by the components of \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\) and the \(4\) connections in the second layer are weighted by \(p_{11},p_{12},p_{21},p_{22}\).
By direct generalization of the construction above we have that the number of steps \(Q\) establishes the number of output neurons and the number of repetitions \(R\) corresponds to the number of hidden neurons.
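A classical Monte-Carlo sketch of this construction (names assumed; a lossy measurement is simulated by recording each of the \(\mathcal{N}\) shots with probability \(p_{rq}\)):

```
import numpy as np

rng = np.random.default_rng(0)

def two_layer_network(x, W, P, n_shots=100_000):
    # algorithm of Proposition 7: y_q = N_{0,q} / (R * N)
    R, Q = P.shape
    y = np.zeros(Q)
    for r in range(R):
        c = W[r] @ x / (np.linalg.norm(W[r]) * np.linalg.norm(x))
        prob0 = 0.5 * (1.0 - c ** 2)          # swap-test outcome probability
        for q in range(Q):
            recorded = rng.binomial(n_shots, P[r, q])   # lossy measurement
            y[q] += rng.binomial(recorded, prob0) / (R * n_shots)
    return y

x = np.array([1.0, 0.0, 1.0, 0.0])            # input in R^{2^k}, k = 2
W = np.array([[1.0, 1.0, 0.0, 0.0],           # R = 2 hidden nodes
              [0.0, 1.0, 1.0, 0.0]])
P = np.array([[0.9, 0.1],                     # R x Q efficiencies, Q = 2
              [0.2, 0.8]])
print(two_layer_network(x, W, P))
```

In expectation, \(y_{q}=\frac{1}{R}\sum_{r}\frac{p_{rq}}{2}(1-\cos^{2}(\mathbf{w}_{r},\mathbf{x}))\), so the efficiencies play the role of the second-layer weights, up to the normalization by \(R\mathcal{N}\) in the algorithm.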
Let us remark that the complete topology in the first layer of the network provided by Proposition 7 is implied by the fact that we consider a single module for the computation of the swap test. Otherwise, the connectivity in the first layer depends on the number of considered modules and their size \(k\). On the contrary, the local measurement protocol described in the algorithm of Proposition 7 always enables all-to-all connections in the second layer.
## 4 Discussion
The statements of Propositions 6 and 7 provide a general procedure for constructing measurement protocols that implement feedforward neural networks using the outcomes obtained by running multiple swap tests between quantum states encoding the input vectors and the weight vectors. In particular, Proposition 6 implies that if we are able to perform independent swap tests over \(k\)-qubit registers, then a suitable sampling is sufficient for obtaining a neural structure that processes data encoded into the amplitudes of input states. Moreover, Proposition 7 establishes that, by suitable repetitions of the swap test, one can choose the number of hidden and output neurons. In particular, if the number of features in the considered dataset is sufficiently low, complete neural networks can be implemented as a direct consequence of Propositions 6 and 7.
The modular structure of the model naturally enables the possibility of ensemble learning. In fact, an input vector can be trivially partitioned as described in section 3 but also random partitions can be actuated, in this case a random sampling is performed over the training set as done in bootstrap aggregating for instance. Moreover, different \(k\)-sized modules initialized with the same weights \(\mathbf{w}\in\mathbb{R}^{2^{k}}\) can be interpreted as the application of a convolutional layer followed by a local pooling as done in convolutional neural networks.
From the viewpoint of time complexity, the performances of the proposed model are bounded by the state preparation for encoding the input \(\mathbf{x}\in\mathbb{R}^{N}\) and the weights \(\mathbf{w}\in\mathbb{R}^{N}\) into quantum states, since the time taken by the measurement protocols is independent of the input size and only related to accuracy and hyperparameters. The general issue of efficient state preparation is a common bottleneck of current quantum machine learning algorithms, in particular quantum neural networks. Indeed, the encoding of a data vector \(\mathbf{x}\in\mathbb{R}^{N}\) into the amplitudes of a \(\log N\)-qubit state \(|\mathbf{x}\rangle\) can be performed in time \(\mathcal{O}(\log N)\) only if a quantum RAM (QRAM) is available; otherwise the retrieval of \(|\mathbf{x}\rangle\) and the initialization of \(|\mathbf{w}\rangle\) take time \(\mathcal{O}(N)\). From the viewpoint of space complexity, there is not the expected exponential storage capacity, because we assume the availability of a collection of small quantum registers where the input can be encoded piecewise. However, the number of required qubits is tamed by an exponential decay in the size \(k\) of a single module; then, for suitable values of \(N\) and \(k\), data can be represented efficiently. Moreover, even if the number of qubits is high, we do not require coherence over all the qubits but only within any module where the swap test is performed. In addition to the advantage in storing capacity (bounded by \(k\)), the quantum implementation of the proposed scheme offers the possibility of submitting superpositions of data to the network. Moreover, even if the modules are _a priori_ uncorrelated, an input can be represented by quantum data with entanglement distributed over a high number of qubits; in this case, the quantum processing of a widely entangled input creates quantum correlations among the modules. In general, the input of the model can be a quantum state encoding a feature vector, as discussed, but also non-artificial quantum data entangling the modules, giving rise to a non-trivial execution of the neural network.
Let us remark that the global structure of the neural network is also preserved when modules are of minimum size (\(k=1\)); in this case we have the lowest storing capacity and the first layer turns out to be sparse. Nevertheless, the model is still quantum and thus capable of processing quantum data as a two-layer feedforward network.
## Conclusions
In this paper we have presented a scheme for implementing a two-layer feedforward neural network using repeated swap tests over quantum registers and suitable measurement protocols. The main goal of the work is to show how to create a non-trivial quantum machine learning model when few quantum resources are available. In particular, we assume the availability of a special-purpose quantum hardware, which we call a _module_, capable of implementing the swap test (this assumption is motivated by several recent proposals of photonic platforms for this task). By combining an arbitrary number of these modules, a scalable neural network can be constructed in which the first layer is implemented by the quantum processing itself and the second one is obtained in terms of a measurement protocol characterized by varying efficiencies in registering the outcomes. Even if the composition of independent modules does not create entanglement, an entangled input state introduces quantum correlations into the network.
The empirical evaluation of the proposed model is not addressed in the present paper, where only a theoretical construction is provided. From the experimental viewpoint, in the case of amplitude encoding of classical data, a simulation of the multiple swap tests is not particularly useful, because it mathematically corresponds to the execution of a two-layer network with quadratic activation, a well-known object that has already been investigated in depth. Experiments on real quantum hardware would therefore be far more significant for revealing actual quantum advantages of the discussed proposal. On the contrary, in the case of quantum data, where the input can be represented by highly entangled quantum states, a simulation could be interesting, but it would be restricted to a small total number of qubits for computational reasons. Indeed, in our opinion, the crucial experimental point will be the combination of real photonic hardware, as small modules implementing the swap test over few qubits, in order to quantify the performance of the scalable quantum neural networks constructed by modularity. In this perspective, a realization with photonic platforms can lead to an on-chip embedding of the modules, increasing the technological impact of the proposed QML architecture.
## Acknowledgements
This work was partially supported by project SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU.
|
2303.05786 | Vertical Federated Graph Neural Network for Recommender System | Conventional recommender systems are required to train the recommendation
model using a centralized database. However, due to data privacy concerns, this
is often impractical when multi-parties are involved in recommender system
training. Federated learning appears as an excellent solution to the data
isolation and privacy problem. Recently, Graph neural network (GNN) is becoming
a promising approach for federated recommender systems. However, a key
challenge is to conduct embedding propagation while preserving the privacy of
the graph structure. Few studies have been conducted on the federated GNN-based
recommender system. Our study proposes the first vertical federated GNN-based
recommender system, called VerFedGNN. We design a framework to transmit: (i)
the summation of neighbor embeddings using random projection, and (ii)
gradients of public parameter perturbed by ternary quantization mechanism.
Empirical studies show that VerFedGNN has competitive prediction accuracy with
existing privacy preserving GNN frameworks while enhanced privacy protection
for users' interaction information. | Peihua Mai, Yan Pang | 2023-03-10T08:39:26Z | http://arxiv.org/abs/2303.05786v3 | # Vertical Federated Graph Neural Network for Recommender System
###### Abstract
Conventional recommender systems are required to train the recommendation model using a centralized database. However, due to data privacy concerns, this is often impractical when multiple parties are involved in recommender system training. Federated learning appears as an excellent solution to the data isolation and privacy problem. Recently, the graph neural network (GNN) is becoming a promising approach for federated recommender systems. However, a key challenge is to conduct embedding propagation while preserving the privacy of the graph structure. Few studies have been conducted on the federated GNN-based recommender system. Our study proposes the first vertical federated GNN-based recommender system, called VerFedGNN. We design a framework to transmit: (i) the summation of neighbor embeddings using random projection, and (ii) gradients of public parameters perturbed by a ternary quantization mechanism. Empirical studies show that VerFedGNN has competitive prediction accuracy with existing privacy preserving GNN frameworks while providing enhanced privacy protection for users' interaction information.
## 1 Introduction
The graph neural network (GNN) has become a new state-of-the-art approach for recommender systems. The core idea behind GNN is an information propagation mechanism, i.e., to iteratively aggregate feature information from neighbors in graphs. The neighborhood aggregation mechanism enables GNN to model the correlation among users, items, and related features. Compared with traditional supervised learning algorithms, GNN can model high-order connectivity through multiple layers of embedding propagation and thus capture the similarity of interacted users and items (Gao et al., 2022). Besides, the GNN-based model can effectively alleviate the problem of data sparsity by encoding semi-supervised signals over the graph (Jin et al., 2020). Benefiting from the above features, GNN-based models have shown promising results and outperformed previous methods on public benchmark datasets (Berg et al., 2017; Wang et al., 2019a;b).
Instead of training the model based on their own graphs, organizations could significantly improve the performance of their recommendation by sharing their user-item interaction data. However, such sharing might be restricted by commercial competition or privacy concerns, leading to the problem of data isolation. For example, the data protection agreement might prohibit the vendors from transferring users' personal data, including their purchase and clicking records, to a third party.
Federated learning (FL), a machine learning setting with data distributed in multiple clients, is a potential solution to the data isolation problem. It enables multiple parties to collaboratively train a model without sharing their local data (Peyre et al., 2019). A challenge in designing the federated GNN-based recommender system is how to perform neighborhood aggregation while keeping the graph topology information private. Indeed, each client should obtain the items their users have interacted with in other organizations to conduct embedding propagation. However, the user interaction data are supposed to keep confidential in each party, adding to the difficulty of federated implementation.
Few studies have been conducted on the federated implementation of GNN-based recommender systems. To our best knowledge, FedPerGNN is the first federated GNN-based recommender system on user-item graphs (Wu et al., 2022). However, their work considers a horizontal federated setting where each client shares the same items but with different users. This paper studies the vertical federated setting in which multiple collaborating recommenders offer different items to the same set of users. In addition, FedPerGNN expands the local user-item graphs with anonymous neighbouring user nodes to perform embedding propagation, which could leak users' interaction information (see Section 4.1).
To fill the gap, this paper proposes a vertical federated learning framework for the GNN-based recommender system
(VerFedGNN)1. Our method transmits (i) the neighbor embedding aggregation reduced by random projection, and (ii) gradients of public parameters perturbed by a ternary quantization mechanism. The privacy analysis suggests that our approach could protect users' interaction data while leveraging the relation information from different parties. The empirical analysis demonstrates that VerFedGNN significantly enhances privacy protection for cross-graph interactions compared with existing privacy preserving GNN frameworks while maintaining competitive prediction accuracy.
Footnote 1: Source code: https://github.com/maiph123/VerticalGNN
Our main contributions involves the following:
* To the best of our knowledge, we are the first to study the GNN-based recommender system in vertical federated learning. We also provide a rigorous theoretical analysis of the privacy protection and communication cost. The experiment results show that the performance of our proposed federated algorithm is comparable to that in a centralized setting.
* We design a method to communicate projected neighborhood aggregation and quantization-based gradients for embedding propagation across subgraphs. Our method outperforms existing framework in terms of accuracy and privacy. The proposed framework could be generalized to the horizontal federated setting.
* We propose a de-anonymization attack against existing federated GNN framework that infers cross-graph interaction data. The attack is simulated to evaluate its performance against different privacy preserving GNN approaches.
## 2 Literature Review
**Graph Neural Network (GNN) for Recommendation:** There has been a surge of studies on designing GNN-based recommender systems in recent years. Graph convolutional matrix completion (GC-MC) employs a graph auto-encoder to model the direct interaction between users and items (Berg et al., 2017). To model high-order user-item connectivities, neural graph collaborative filtering (NGCF) stacks multiple embedding propagation layers that aggregate embeddings from neighbouring nodes (Wang et al., 2019). LightGCN simplifies the design of NGCF by demonstrating that two operations, feature transformation and nonlinear activation, are redundant in the GCN model (He et al., 2020). PinSage develops an industrial application of GCN-based recommender systems for Pinterest image recommendation (Ying et al., 2018). To achieve high scalability, it samples a neighborhood of nodes based on a random walk and performs localized aggregation for the node.
**Federated Graph Neural Network:** Recent research has made progress in federated graph neural network. Most of the work either performs neighborhood aggregation individually (Zhou et al., 2020; Liu et al., 2021), or assumes that the graph topology could be shared to other parties (Chen et al., 2021). It's a non-trivial task to incorporate the cross-client connection while preserving the relation information between nodes. To protect the graph topology information, a direct method is to apply differential privacy on the adjacency matrix (Zhang et al., 2021), which requires a tradeoff between privacy protection and model performance. In FedSage+, a missing neighbour generator is trained to mend the local subgraph with generated cross-subgraph neighbors (Zhang et al., 2021). To our best knowledge, FedPerGNN is the first work that develops horizontal federated GNN-based recommender system with user-item graphs (Wu et al., 2022). Each client expands the local user-item graphs with anonymous neighbouring user node to learn high-order interaction information.
This paper considers the unexplored area, i.e., graph learning for recommender systems in a vertical federated setting. We show that applying local graph expansion could leak user interaction information from other parties, and develop a GNN-based recommender system that leverages the neighbourhood information across different subgraphs in a privacy-preserving way.
## 3 Problem Formulation and Background
### Problem Statement
We assume that \(P\) parties collaboratively train a recommender system, where each party holds a set of common users but non-overlapping items. Denote \(\mathcal{U}=\{u_{1},u_{2},...,u_{N}\}\) as the set of common users and \(\mathcal{V}_{p}=\{v_{1},v_{2},...,v_{M_{p}}\}\) as the set of items for party \(p\). The total item size \(M_{p}\), \(p\in[1,P]\) is shared across parties. Assume that user \(u\) has \(N_{u}\) neighboring items \(\mathcal{N}(u)=\{v_{1},v_{2},...,v_{N_{u}}\}\), and item \(v\) has \(N_{v}\) neighboring users \(\mathcal{N}(v)=\{u_{1},u_{2},...,u_{N_{v}}\}\). Denote \(\mathcal{N}_{p}(u)=\{v_{1},v_{2},...,v_{N_{u}^{p}}\}\) as the set of neighboring items for user \(u\) in party \(p\). The related items and users form a local subgraph \(\mathcal{G}_{p}\) in party \(p\). Denote \(r_{uv}\) as the rating user \(u\) gives to item \(v\). Our objective is to generate a rating prediction that minimizes the squared discrepancy between actual ratings and estimates.
### Graph Neural Network (GNN)
**General Framework:** We adopt a general GNN framework (Wu et al., 2022) as the underlying model for our recommender system. In the initial step, each user and item is assigned an ID embedding of size \(D\), denoted by \(e_{u}^{0},e_{v}^{0}\in\mathbb{R}^{D}\) respectively. The embeddings are passed through \(K\) message propagation layers:
\[\begin{split} n_{u}^{k}&=Agg_{k}(\{e_{v}^{k},\forall v \in\mathcal{N}(u)\})\\ e_{u}^{k+1}&=Update_{k}(e_{u}^{k},n_{u}^{k})\\ n_{v}^{k}&=Agg_{k}(\{e_{u}^{k},\forall u\in\mathcal{ N}(v)\})\\ e_{v}^{k+1}&=Update_{k}(e_{v}^{k},n_{v}^{k})\end{split} \tag{1}\]
where \(e_{u}^{k}\) and \(e_{v}^{k}\) denote the embeddings at the \(k^{th}\) layer for user \(u\) and item \(v\), and \(Agg_{k}\) and \(Update_{k}\) represent the aggregation and update operations, respectively. The final representation for users (items) are given by the combination of embeddings at each layer:
\[\begin{split} h_{u}&=\sum_{k=0}^{K}a_{k}e_{u}^{k}; \ h_{v}=\sum_{k=0}^{K}a_{k}e_{v}^{k}\end{split} \tag{3}\]
where \(h_{u}\) and \(h_{v}\) denote the final representation for user \(u\) and item \(v\) respectively, \(K\) denotes the number of embedding propagation layers, and \(a_{k}\) denotes the trainable parameter implying the weight of each layer.
The rating prediction is defined as the inner product of user and item representation
\[\begin{split}\hat{r}_{uv}&=h_{u}^{T}h_{v}\end{split} \tag{4}\]
The loss function is computed as:
\[\begin{split}\mathcal{L}&=\sum_{(u,v)}(\hat{r}_{uv }-r_{uv})^{2}+\frac{1}{N}\sum_{u}\|e_{u}^{0}\|_{2}^{2}+\frac{1}{M}\sum_{v}\|e_ {v}^{0}\|_{2}^{2}\end{split} \tag{5}\]
where \((u,v)\) denotes pair of interacted user and item, and \(N\) and \(M\) denote the number of users and items respectively.
**Aggregation and Update Operations:** This paper discusses three typical GNN frameworks: Graph convolutional network (GCN) (Kipf and Welling, 2016), graph attention networks (GAT) (Velickovic et al., 2017), and gated graph neural network (GGNN) (Li et al., 2015). We illustrate their aggregation and update operations for user embedding as an example.
* **GCN** approximates the eigendecomposition of graph Laplacian with layer-wise propagation rule: \[\begin{split} Agg_{k}:&\ n_{u}^{k}=\sum_{v\in \mathcal{N}(u)}\frac{1}{\sqrt{N_{u}N_{v}}}e_{v}^{k}\\ Update_{k}:&\ e_{u}^{k+1}=\sigma(W^{k}(e_{u}^{k}+n_{u}^{k})) \end{split}\] (6)
* **GAT** leverages self-attention layers to assign different importances to neighboring nodes: \[\begin{split} Agg_{k}:&\ n_{u}^{k}=\sum_{v\in \mathcal{N}(u)}b_{uv}^{k}e_{v}^{k}\\ Update_{k}:&\ e_{u}^{k+1}=\sigma(W^{k}(b_{uu}^{k}e_{u}^{k} +n_{u}^{k}))\end{split}\] (7) where \(b_{uu}^{k}\) and \(b_{uv}^{k}\) are the importance coefficients computed by the attention mechanism: \[\begin{split} b_{uv}^{k}&=\frac{exp(\tilde{b}_{uv}^{k })}{\sum_{v^{\prime}\in\mathcal{N}(u)\cup u}exp(\tilde{b}_{uv^{\prime}}^{k})} \end{split}\] (8) where \(\tilde{b}_{uv}^{k}=Att(e_{u}^{k},e_{v}^{k})\) is computed by an attention function.
* **GGNN** updates the embeddings with a gated recurrent unit (GRU): \[\begin{split} Agg_{k}:&\ n_{u}^{k}=\sum_{v\in \mathcal{N}(u)}\frac{1}{N_{u}}e_{v}^{k}\\ Update_{k}:&\ e_{u}^{k+1}=GRU(e_{u}^{k},n_{u}^{k}) \end{split}\] (9)
### Differential Privacy
We adopt the standard definition of \((\epsilon,\delta)\)-differential privacy (Dwork et al., 2014) for our analysis.
**Definition 3.1**.: A randomized function \(M(x)\) is \((\epsilon,\delta)\)-differentially private if for all \(x\), \(y\) such that \(\|x-y\|_{1}\leq 1\) and any measurable subset \(S\subseteq\text{Range}(M)\),
\[\begin{split} P(M(x)\in S)\leq e^{\epsilon}P(M(y)\in S)+\delta \end{split} \tag{10}\]
This paper assumes an untrusted server and requires that the local gradients from each party satisfy \((\epsilon,\delta)\)-local differential privacy (Duchi et al., 2013).
## 4 Proposed Method
### De-anonymization Attack
A straightforward cross-graph propagation solution is to use anonymous neighborhood embeddings from other graphs (Wu et al., 2022). Adapted to the vertical setting, each party sends the encrypted ids of each item's related users to the server, and the server provides each party with the embeddings of their neighboring items via user matching.
One privacy concern is that this approach suffers from leakage of interaction information, as shown by the de-anonymization attack below. We assume that each organization has access to the set of available items from other parties.
Suppose that party A wants to obtain their users' interaction with party B. Party A could create a set of adversarial users that have registered on other platforms. Each fake user rates only one item in party B. The interaction information could be recovered by matching the embeddings for adversarial and honest users. Denote \(v_{i}\in\mathcal{N}(u)\) as the \(i^{th}\) item in user \(u\)'s neighborhood from party B, \(\mathcal{N}_{adv}\) as the set of items given by all fake users, and \(v_{j}\in\mathcal{N}_{adv}\) as the item rated by \(j^{th}\) adversarial user. The inferred \(i^{th}\) item for user \(u\) is:
\[\begin{split}\hat{v}_{i}&=\arg\min_{v^{\prime}\in \mathcal{N}_{adv}}\|e_{v^{\prime}}^{0}-e_{v_{i}}^{0}\|_{1}\end{split} \tag{11}\]
where \(\|\cdot\|_{1}\) denotes the \(l_{1}\)-norm.
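For illustration, the matching rule of equation 11 takes only a few lines; the following NumPy sketch uses hypothetical names and assumes the embeddings are available as arrays.

```python
import numpy as np

def deanonymize(neighbor_embs, adv_embs, adv_items):
    """Match each anonymous neighbor embedding to the adversarial item
    whose embedding is closest in l1 distance (equation 11).

    neighbor_embs: (n, D) embeddings received for user u's neighbors
    adv_embs:      (m, D) embeddings of the items rated by fake users
    adv_items:     list of m item ids aligned with adv_embs
    """
    # pairwise l1 distances, shape (n, m)
    dists = np.abs(neighbor_embs[:, None, :] - adv_embs[None, :, :]).sum(-1)
    return [adv_items[j] for j in dists.argmin(axis=1)]
```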
The above attack suggests that revealing the individual embedding of each item is susceptible to privacy leakage. In the following, we introduce a new method that obtains embeddings at an aggregated level.
### Federated Graph Neural Network
In the proposed framework, the collaborating parties jointly conduct model training with a central server. Each client \(cl_{p}\) is associated with a party \(p\). Item embeddings \(e_{v}^{0}\) should be maintained privately on each client, while other public parameters are initialized and stored at the server. At the initial step, Private Set Intersection (PSI) is adopted to conduct secure user alignment (Pinkas et al., 2014). Algorithm 1 outlines the process to perform VerFedGNN. Figure 1 gives the overall framework of our VerFedGNN, and we will illustrate the key components in the following.
### Neighborhood Aggregation
Instead of sending the individual embeddings of missing neighbors, each party performs embedding aggregation locally for each common user before the transmission. Each party outputs a list of \(N\times D\) aggregation matrices \([X_{p}^{0},X_{p}^{1},...,X_{p}^{K-1}]\), with each row of \(X_{p}^{k}\) given by \(E_{p}(n_{u}^{k})\). The following details the local neighborhood aggregation for the three GNN frameworks:
**GCN** requires \(N_{u}\) to perform local aggregation, while sharing \(N_{u}\) could reveal how many items user \(u\) has interacted with in other parties. To preserve the privacy of \(N_{u}^{p}\), we develop an estimator from party \(p\)'s view in replacement of \(N_{u}\):
\[E_{p}(N_{u})=\frac{\sum_{i}M_{i}}{M_{p}}\cdot N_{u}^{p} \tag{12}\]
where \(M_{i}\) denotes the number of items in party \(i\). The estimator is utilized to perform embedding aggregation:
\[E_{p}(n_{u}^{k})=\sum_{v\in\mathcal{N}_{p}(u)}\frac{1}{\sqrt{E_{p}(N_{u})N_{v} }}e_{v}^{k} \tag{13}\]
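A minimal NumPy sketch of this party-local GCN aggregation (equations 12-13); the function and argument names are ours:

```python
import numpy as np

def gcn_local_aggregation(E_items, nbr_idx, N_v, M_p, M_total):
    """Party-p GCN neighborhood aggregation for a single user.

    E_items: (M_p, D) local item embeddings at layer k
    nbr_idx: indices of the items this user rated in party p
    N_v:     (M_p,) number of users who rated each local item
    """
    N_u_p = len(nbr_idx)
    est_N_u = (M_total / M_p) * N_u_p              # equation (12)
    coeff = 1.0 / np.sqrt(est_N_u * N_v[nbr_idx])
    return (coeff[:, None] * E_items[nbr_idx]).sum(axis=0)  # equation (13)
```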
**GAT** calculates the importance coefficient \(b_{uv}^{k}\) using all item embeddings for \(v\in\mathcal{N}(u)\), incurring further privacy concerns and communication cost. Therefore, we adapt equation 8 to obtain \(E_{p}(b_{uv}^{k})\):
\[E_{p}(b_{uv}^{k})=\frac{exp(\tilde{b}_{uv}^{k})}{exp(\tilde{b}_{uu}^{k})+\sum_{v^{\prime}\in\mathcal{N}_{p}(u)}exp(\tilde{b}_{uv^{\prime}}^{k})\cdot\sum_{i}M_{i}/M_{p}} \tag{14}\]
The neighbor items are aggregated locally using:
\[E_{p}(n_{u}^{k})=\sum_{v\in\mathcal{N}_{p}(u)}b_{uv}^{k}e_{v}^{k} \tag{15}\]
Figure 1: Overall framework of VerFedGNN
**GGNN** slightly adapts \(Agg_{k}\) in equation 9 to perform the local aggregation:
\[E_{p}(n_{u}^{k})=\sum_{v\in\mathcal{N}_{p}(u)}\frac{1}{N_{u}^{p}}e_{v}^{k} \tag{16}\]
Refer to Appendix A for embedding update with the aggregated neighborhood.
### Random Projection
Though neighborhood aggregation reduces the information leakage, users might still be susceptible to the de-anonymization attack when they rated few items in other parties. We adopt random projection (Lindenstrauss, 1984) to perform multiplicative data perturbation for two reasons: (1) random projection allows us to reduce dimensionality and reconstruct the matrix without prior knowledge of the data; (2) random projection preserves the pairwise distances between points with small error (Ghojogh et al., 2021). Below we define a Gaussian random projection matrix.
**Definition 4.1**.: For \(q\ll N_{u}\), a Gaussian random projection matrix \(\Phi\in\mathbb{R}^{q\times N_{u}}\) has elements drawn independently from Gaussian distribution with mean 0 and variance \(1/q\).
Each active party sends a list of \(q\times D\) projected matrices to other participants:
\[Y_{p}^{k}=\Phi X_{p}^{k} \tag{17}\]
for \(k\in[0,K-1]\). The recipient recovers the perturbed aggregation matrices \(\hat{X}_{p}^{k},\ k\in[0,K-1]\):
\[\hat{X}_{p}^{k}=\Phi^{T}Y_{p}^{k} \tag{18}\]
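The two steps amount to a pair of matrix products with a Gaussian matrix; a minimal NumPy sketch (names are ours):

```python
import numpy as np

def project_and_recover(X, q, rng):
    """Perturb an aggregation matrix X (N x D) with a Gaussian random
    projection (Definition 4.1) and reconstruct it on the receiving side."""
    N = X.shape[0]
    Phi = rng.normal(0.0, 1.0 / np.sqrt(q), size=(q, N))  # entry variance 1/q
    Y = Phi @ X        # equation (17): the transmitted q x D matrix
    X_hat = Phi.T @ Y  # equation (18): the recovered N x D matrix
    return Y, X_hat

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 16))
Y, X_hat = project_and_recover(X, q=200, rng=rng)
rel_err = np.mean((X_hat - X) ** 2) / np.mean(X ** 2)  # bounded per Theorem 5.6
```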
### Privacy-preserving Parameter Update
The gradients of the public parameters could leak sensitive information about the users. For example, if two users rated the same items, the gradients for their embeddings would be similar. Therefore, a participant could infer subscribers' interaction history by comparing their embeddings with those of adversarial users. We introduce a ternary quantization scheme (Wang and Basar, 2022) to address this issue.
**Definition 4.2**.: The ternary quantization scheme quantizes a vector \(x=[x_{1},x_{2},...,x_{d}]^{T}\in\mathbb{R}^{d}\) as follows:
\[Q(x)=[q_{1},q_{2},...,q_{d}],\ \ q_{i}=r\text{sign}(x_{i})b_{i},\ \ \forall 1\leq i\leq d \tag{19}\]
where \(r\) is a parameter such that \(\|x\|_{\infty}\leq r\), sign represents the sign of a value, and \(b_{i}\) are independent variables following the distribution
\[\left\{\begin{array}{l}P(b_{i}=1|x)=|x_{i}|/r\\ P(b_{i}=0|x)=1-|x_{i}|/r\end{array}\right. \tag{20}\]
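A direct NumPy sketch of this scheme (names are ours); note that, by construction, \(\mathbb{E}[Q(x)]=x\), so the quantized gradient is unbiased:

```python
import numpy as np

def ternary_quantize(x, r, rng):
    """Ternary quantization (Definition 4.2): each coordinate becomes
    r * sign(x_i) with probability |x_i| / r and 0 otherwise."""
    assert np.max(np.abs(x)) <= r  # r must bound the sup norm of x
    b = rng.random(x.shape) < np.abs(x) / r  # P(b_i = 1 | x) = |x_i| / r
    return r * np.sign(x) * b
```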
The ternary quantization scheme is adopted in place of Gaussian or Laplace noise for two reasons: (1) The scale of Gaussian or Laplace noise is determined by the sensitivity, which can be significant for high-dimensional data. For a GNN model of large size, directly adding the calibrated noise would greatly distort the direction of the gradients. On the other hand, the quantization scheme ensures that the sign of each element in the gradient is not reversed by the stochastic mechanism. (2) For user embeddings, the gradient is an \(N_{u}\times D\) matrix with communication cost proportional to the user size. Under the quantization scheme, parties can send a much smaller sparse matrix indexed by the non-zero binary elements.
### Partial Participants
We consider the case where in each iteration, only a portion of clients participates in model training. The challenge is that both the embedding update and gradient computation contain components summed over all clients to capture the complete graph structure. To address this issue, we develop an estimator of the total summation based on the subsets of components received.
Denote \(c_{i}\) as the component sent by party \(i\), and \(\mathcal{A}_{t}\) as the set of participating clients in iteration \(t\). The estimate \(E(C)\) is given by:

\[E(C)=\frac{\sum_{i}M_{i}}{\sum_{i\in\mathcal{A}_{t}}M_{i}}\sum_{i\in\mathcal{A}_{t}}c_{i} \tag{21}\]
Specifically, the component \(c_{i}\) is \(E_{p}(n_{u}^{k}),\ k\in[0,K-1]\) for embedding update, and local gradients for gradient computation, respectively.
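A minimal sketch of this estimator (equation 21); names are ours:

```python
def estimate_total(components, M, active):
    """Estimate the all-party sum from the participating subset A_t by
    scaling the received sum with the inverse of the covered item share.

    components: dict party id -> component c_i (array or scalar)
    M:          dict party id -> item count M_i
    active:     iterable of participating party ids
    """
    scale = sum(M.values()) / sum(M[i] for i in active)
    return scale * sum(components[i] for i in active)
```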
## 5 Theoretical Performance Analysis
### Privacy Analysis
The privacy of the algorithm is analyzed for two communication stages: (i) neighborhood exchange, and (ii) gradient transmission. We assume honest-but-curious (Yang et al., 2019) participants for the analysis, i.e., the participant will not deviate from the defined training protocol but attempt to learn information from legitimately received messages.
**Neighborhood exchange:** Suppose that an attacker would like to infer the original aggregation matrix \(X_{p}^{k}\) given \(Y_{p}^{k}\). The model can be analyzed as an underdetermined system of linear equations \(y=\Phi x\) with more unknowns than equations, where \(x\) is a column vector in \(X_{p}^{k}\) and \(y\) is the corresponding column in \(Y_{p}^{k}\). We start with the definition of \(l\)-security (Du et al., 2004).
**Definition 5.1**.: A matrix \(\Phi\) is \(l\)-secure if a submatrix \(\Phi_{k}\) formed by removing any \(l\) columns from \(\Phi\) has full row rank.
**Lemma 5.2**.: _Let \(\Psi\) be an \(l\times N\) matrix, where each row is a nonzero linear combination of the row vectors of \(\Phi\). If \(\Phi\) is \(l\)-secure, the linear equation system \(y=\Psi x\) involves at least \(2l\) variables if these \(l\) vectors are linearly independent._
**Theorem 5.3**.: _For \(2q\leq m+1\), let \(\Phi\) be a \(q\times m\) matrix with entries independently chosen from a Gaussian distribution. For a linear system of equations \(y=\Phi x\), it is impossible to solve for the exact value of any element of \(x\)._
The proof is given in Appendix B and C. As long as we select \(q\leq(m+1)/2\), the privacy of aggregation matrix is protected in the sense that the attacker cannot identify the exact value of any elements in the original data.
Next, we consider the possibility of inferring users' interaction history from the reconstructed aggregation matrix. Appendix E demonstrates the NP-hardness of finding a subset of items that matches the aggregated embeddings.
**Gradient transmission:** The gradient transmitted to server is perturbed by the ternary quantization scheme. The following theorem shows that the ternary quantization can achieve \((0,\frac{1}{r})\) differential privacy.
**Theorem 5.4**.: _The ternary quantization scheme given by Definition 4.2 achieves \((0,\frac{1}{r})\)-differential privacy for individual party's gradients in every iteration._
Proof.: The privacy guarantee has been proved by (Wang and Basar, 2022) in Theorem 3.
_Remark 5.5_.: The ternary quantization still achieves \((0,\frac{1}{r})\)-differential privacy when the \(l_{1}\) norm in Definition 3.1 is replaced with any \(l_{p}\) norm with \(p\geq 1\) (see Appendix D).
### Utility Analysis
The federated algorithm involves two sources of error: (i) random projection and reconstruction of aggregation matrix, and (ii) stochastic ternary quantization mechanism. We will discuss the concerns one at a time.
**Random projection and matrix reconstruction:** The reconstructed matrices \(\hat{X}_{p}^{k},\ k\in[0,K-1],\ p\in[1,P-1]\) do not deviate much from the original matrices, with the following bounded MSE.
**Theorem 5.6**.: _Let \(\Phi\) be a random matrix defined in Definition 4.1. For any \(X\in\mathbb{R}^{N_{u}\times D}\),_
\[\mathbb{E}_{\Phi}[\|\Phi^{T}\Phi X-X\|_{F}^{2}]=\frac{(m+1)}{p}\|X\|_{F}^{2} \tag{22}\]
_where \(\|\cdot\|_{F}\) denotes the Frobenius norm._
Refer to Appendix F for the proof of Theorem 5.6.
**Ternary quantization mechanism:** In Appendix G we provide a convergence analysis for the gradient perturbation mechanism.
### Communication Analysis
The communication cost is analyzed in terms of the total message size transferred between parties. We assume that the participation rate \(\alpha\), the number of participating clients divided by that of total clients in an iteration, remains unchanged throughout the training. Suppose that each number in embedding matrix and public parameters requires \(s_{1}\) bits, and that in quantized gradients requires \(s_{2}\) bits, respectively.
Downloading public parameters requires transferring \(\mathcal{O}(\alpha pKD(D+N_{u})Ts_{1})\) bits. It takes \(\mathcal{O}(\alpha pqDKs_{1})\) bits for each party to communicate the neighborhood aggregation matrices per iteration, which adds up to \(\mathcal{O}(\alpha^{2}p^{2}qDKTs_{1})\) in total.
For gradient transmission, each party is expected to have \(\mathcal{O}(|\xi|_{1}/r)\) nonzero entries in its upload matrix, where \(|\xi|_{1}\) denotes the \(l_{1}\) norm of the public parameters. It takes \(\mathcal{O}(\alpha p|\xi|_{1}Ts_{2}/r)\) bits to upload the quantized gradients from clients to the server.
Summing up the above processes, the algorithm involves \(\mathcal{O}(\alpha pT(KD(D+N_{u})s_{1}+\alpha pqDKs_{1}+|\xi|_{1}s_{2}/r))\) bits of communication cost.
## 6 Experiment
### Dataset and Experiment Settings
**Dataset:** We use two benchmark datasets for recommendation, MovieLens-1M2 (ML-1M) and BookCrossing3. For BookCrossing we randomly select \(6000\) users and \(3000\) items. The items are divided into non-overlapping groups to simulate the vertical federated setting.
Footnote 2: https://grouplens.org/datasets/movielens/1m/

Footnote 3: http://www2.informatik.uni-freiburg.de/cziegler/BX/
**Implementation and Hyper-parameter Setting:** Appendix H details the implementation and hyperparameters.
### Experiment Result
#### 6.2.1 Comparison with Different Methods
We compare our proposed method with several centralized and federated recommender systems, including: matrix factorization (MF) (Koren et al., 2009), a central implementation of GNN (CentralGNN), federated GNN with graph expansion (FedPerGNN) (Wu et al., 2022), and adaptations of FedSage and FedSage+ to the GNN-based recommender system (Zhang et al., 2021). We implement the GNN-related methods using the three GNN frameworks introduced in section 3.2. We compare the methods along four dimensions in table 1: (i) high-order interaction: modeling of high-order connectivity with graph propagation; (ii) gradient protection: sending perturbed or encrypted gradients instead of raw gradients; (iii) cross graph neighborhood: usage of missing links across parties or subgraphs; (iv) data storage: whether the data is kept centrally or locally at each party.
Table 2 summarizes the performance in terms of RMSE for the different methods. For VerFedGNN, we use privacy budget \(\frac{1}{r}=\frac{1}{3}\), reduced dimension \(q=N_{u}/5\), and participation rate \(\alpha=1\). It can be observed that our proposed method achieves a lower RMSE than the other federated GNN algorithms in most scenarios, and clearly outperforms MF by an average of \(4.7\%\) and \(18.7\%\) for ML-1M and BookCrossing, respectively. The RMSE of VerFedGNN increases only slightly over the central implementation, with an average percentage difference \(\leq 1.8\%\).
#### 6.2.2 Hyper-parameter Studies
We use GCN as an example to study the impact of hyper-parameters on the performace of VerFedGNN.
**Participation rate:** The participation rate is varied from \(0.2\) to \(1\), with results presented in figure 2. Using the GCN model, the percentage differences relative to the full-participation case are within \(0.15\%\) for ML-1M and \(0.7\%\) for BookCrossing once \(\alpha\) reaches \(0.5\). The other two models give RMSE \(\leq 0.92\) for ML-1M and \(\leq 1.2\) for BookCrossing when \(\alpha>0.5\).
**Privacy budget:** A smaller privacy budget \(\frac{1}{r}\) implies that the transmitted gradients leak less user information. Figure 3 presents the tradeoff between the privacy budget \(\frac{1}{r}\) and model performance. The GGNN model is most sensitive to changes in the privacy budget, while the GAT model remains effective as \(\frac{1}{r}\) increases.
**Dimension Reduction:** We further analyze the effectiveness of our model with a varying dimension reduction ratio \(q/N_{u}\). As shown in figure 4, GCN and GAT are more robust to changes in the neighborhood dimension \(q\), with the error increasing by \(0.5\%\) for ML-1M and \(1.5\%\) for BookCrossing when \(N_{u}/q\) increases to \(100\).
#### 6.2.3 De-anonymization Attack
To verify the effectiveness of our model against the de-anonymization attack, we simulate the attack to compare the inference accuracy of VerFedGNN with the other two methods that use cross-graph neighborhoods: FedPerGNN and FedSage+. For FedSage+, we match the generated embeddings of honest and adversarial users using equation 11. The attack on VerFedGNN utilizes the recovered aggregation embeddings. Specifically, we find the subset of adversarial item embeddings leading to the smallest \(l_{1}\) distance to each user's neighborhood aggregation. Refer to appendix I for more illustrations.
Table 3 reports the attack accuracy for the three federated algorithms using the GCN model. The experiment is conducted under three cases regarding the proportion of items rated by the adversarial users, \(p_{ad}\). One important observation is that our algorithm greatly reduces the attack accuracy compared with the two baseline methods. FedPerGNN yields the highest F1 and a precision of \(100\%\), as the attacker can match the embeddings exactly with adversarial users.
#### 6.2.4 Communication Cost
Figure 5 presents the communication cost measured as the number of bits transferred in each iteration. We find that the communication cost is nearly proportional to the user size \(N_{u}\) and the participation rate \(\alpha\). Besides, randomly projecting the neighborhood aggregation matrix with \(q=\frac{1}{5}N_{u}\) saves the communication bits by \(50.6\%\) with gradient quantization, and applying the quantization scheme reduces the communication cost by over \(30\%\) when \(N_{u}/q\geq 4\).

Figure 2: RMSE with varying participation rate \(\alpha\).

Figure 3: RMSE with varying privacy budget \(\frac{1}{r}\).

Figure 4: RMSE by inverse dimension reduction ratio \(N_{u}/q\).
#### 6.2.5 Other Studies
For other studies, we simulate the de-anonymization attack against VerFedGNN in the cases with and without dimension reduction, and evaluate the model performance when Laplace noise is employed in place of the ternary quantization scheme (see Appendix J).
## 7 Conclusion
This paper proposes VerFedGNN, a framework for GNN-based recommender systems in a vertical federated setting. The cross-graph interactions are transferred in the form of neighborhood aggregation matrices perturbed by random projection. We adopt a ternary quantization scheme to protect the privacy of public gradients. Our approach can learn the relation information across different graphs while preserving users' interaction data. Empirical studies on two benchmark datasets show that: (1) VerFedGNN achieves prediction performance comparable to SOTA privacy-preserving GNN models. (2) The neighborhood aggregation combined with random projection significantly reduces the attack accuracy compared with existing cross-graph propagation methods. (3) Optimizing the dimension reduction ratio \(N_{u}/q\) and the participation rate \(\alpha\) can lower the communication cost while maintaining accuracy.
This work opens up new possibilities for federated GNN-based recommendation. Firstly, it would be interesting to develop a scalable federated framework with up to millions of users. Secondly, the framework could be extended to other federated scenarios, such as transfer federated recommender systems with few overlapping nodes (Yang et al., 2020).
| | MF | CentralGNN | FedPerGNN | FedSage | FedSage+ | VerFedGNN |
| --- | --- | --- | --- | --- | --- | --- |
| High-order interaction | \(\times\) | \(\surd\) | \(\surd\) | \(\surd\) | \(\surd\) | \(\surd\) |
| Gradient protection | \(\times\) | \(\times\) | \(\surd\) | \(\times\) | \(\times\) | \(\surd\) |
| Cross graph neighborhood | \(\times\) | \(\times\) | \(\surd\) | \(\times\) | \(\surd\) | \(\surd\) |
| Data storage | Central | Central | Local | Local | Local | Local |

Table 1: Comparison of different approaches
| Method | Model | ML-1M | BookCrossing |
| --- | --- | --- | --- |
| MF | MF | 0.9578 ± 0.0016 | 1.9972 ± 0.0063 |
| CentralGNN | GCN | 0.9108 ± 0.0007 | 1.5820 ± 0.0050 |
| | GAT | 0.9062 ± 0.0029 | 1.5478 ± 0.0071 |
| | GGNN | 0.9046 ± 0.0045 | 1.6562 ± 0.0040 |
| FedPerGNN | GCN | 0.9282 ± 0.0012 | 1.6892 ± 0.0068 |
| | GAT | 0.9282 ± 0.0017 | 1.6256 ± 0.0048 |
| | GGNN | 0.9236 ± 0.0023 | 1.6962 ± 0.0050 |
| FedSage | GCN | 0.9268 ± 0.0012 | 1.6916 ± 0.0118 |
| | GAT | 0.9242 ± 0.0041 | 1.6256 ± 0.0048 |
| | GGNN | 0.9268 ± 0.0008 | 2.6596 ± 0.0133 |
| FedSage+ | GCN | 0.9194 ± 0.0041 | 1.6335 ± 0.0065 |
| | GAT | 0.9146 ± 0.0033 | 1.6078 ± 0.0039 |
| | GGNN | 0.9180 ± 0.0002 | 1.8788 ± 0.0401 |
| VerFedGNN | GCN | **0.9152 ± 0.0013** | **1.5906 ± 0.0030** |
| | GAT | **0.9146 ± 0.0010** | **1.5830 ± 0.0131** |
| | GGNN | **0.9076 ± 0.0024** | **1.6962 ± 0.0050** |

Table 2: Performance of different methods. The values denote the \(mean\pm standard\ deviation\) of the performance.
Figure 5: Communication cost by user size and dimension for GCN
| \(p_{ad}\) | Methods | Precision | Recall | F1 |
| --- | --- | --- | --- | --- |
| 0.2 | FedPerGNN | 1.00 ± 0.00 | 0.21 ± 0.01 | 0.34 ± 0.01 |
| | FedSage+ | 0.14 ± 0.00 | 0.03 ± 0.01 | 0.05 ± 0.01 |
| | VerFedGNN | 0.01 ± 0.00 | 0.01 ± 0.00 | 0.01 ± 0.00 |
| 0.5 | FedPerGNN | 1.00 ± 0.00 | 0.49 ± 0.03 | 0.66 ± 0.02 |
| | FedSage+ | 0.22 ± 0.03 | 0.08 ± 0.00 | 0.11 ± 0.00 |
| | VerFedGNN | 0.02 ± 0.01 | 0.01 ± 0.00 | 0.01 ± 0.00 |
| 0.8 | FedPerGNN | 1.00 ± 0.00 | 0.81 ± 0.01 | 0.90 ± 0.01 |
| | FedSage+ | 0.26 ± 0.02 | 0.10 ± 0.01 | 0.14 ± 0.01 |
| | VerFedGNN | 0.02 ± 0.00 | 0.01 ± 0.00 | 0.02 ± 0.00 |

Table 3: Attack accuracy for three federated algorithms using GCN model on ML-1M. |
2301.05869 | Functional Neural Networks: Shift invariant models for functional data
with applications to EEG classification | It is desirable for statistical models to detect signals of interest
independently of their position. If the data is generated by some smooth
process, this additional structure should be taken into account. We introduce a
new class of neural networks that are shift invariant and preserve smoothness
of the data: functional neural networks (FNNs). For this, we use methods from
functional data analysis (FDA) to extend multi-layer perceptrons and
convolutional neural networks to functional data. We propose different model
architectures, show that the models outperform a benchmark model from FDA in
terms of accuracy and successfully use FNNs to classify electroencephalography
(EEG) data. | Florian Heinrichs, Mavin Heim, Corinna Weber | 2023-01-14T09:41:21Z | http://arxiv.org/abs/2301.05869v2 | Functional Neural Networks: Shift invariant models for functional data with applications to EEG classification
###### Abstract
It is desirable for statistical models to detect signals of interest independently of their position. If the data is generated by some smooth process, this additional structure should be taken into account. We introduce a new class of neural networks that are shift invariant and preserve smoothness of the data: functional neural networks (FNNs). For this, we use methods from functional data analysis (FDA) to extend multi-layer perceptrons and convolutional neural networks to functional data. We propose different model architectures, show that the models outperform a benchmark model from FDA in terms of accuracy and successfully use FNNs to classify electroencephalography (EEG) data.
## 1 Introduction
When autonomous machines act in their environment or humans interact with computers, the necessary input data is often streamed continuously. In some settings, the input streams can be easily transformed into control signals based on simple physical models. However, in more advanced scenarios, it is necessary to develop a more complex, data-driven model and use its predictions to control the machine. An important example of the latter scenario are brain-computer interfaces (BCIs), which record the user's brain activity and decode control signals based on the measured data.
Most statistical models require an input of fixed dimension, and a common approach is to extract windows of a fixed size with a fixed step size from the continuous data stream. These _sliding windows_ are then used to predict the desired control signal, with one classification per window. The advantage of this approach is that only the most recent data is used for predictions, but it comes at a cost: the signal of interest might occur at any time point in the extracted window. Thus, any classifier of the sliding windows needs to be "shift invariant" in the sense that it detects the desired signal independently of its position in the window.
In the context of BCIs, more specifically for the analysis of data measured via electroencephalography (EEG), traditional methods are based on carefully selected features that are calculated from the data. Commonly applied techniques include the principal component analysis of EEG data in the time domain, or features based on the power spectrum in the frequency domain (Azlan and Low, 2014; Boubchir et al., 2017; Boonyakitanont et al., 2020). Due to recent advances in the field of deep learning, different architectures of neural networks were proposed that avoid a manual feature extraction and seem to outperform more traditional methods. For example the neural network _EEGNet_ was proposed to support multiple BCI paradigms and is often referred to as benchmark model in the field (Lawhern et al., 2018). In a clinical setting, some variant of the VGG16 neural network was used to detect signals associated with epilepsy (da Silva Lourenco et al., 2021). In general, deep learning has been applied successfully to a variety of tasks related to EEG data (Craik et al., 2019; Roy et al., 2019).
Inspired by their successes in computer vision and natural language processing, common neural networks used for the classification of EEG data are based on convolutions. Convolutional neural networks are often considered to be shift invariant and are therefore a good choice in the given context. However, they do not take the specific structure of EEG data into account. Similar to most physical processes, the electrical activity that is recorded on the user's scalp by the EEG can be considered smooth (Ramsay and Silverman, 2005). To reflect this additional structure, it is more appropriate to model the data as a (discretized) sample of an underlying smooth function.
The latter paradigm is the basis of _functional data analysis_ (FDA), a branch of statistics that has received more and more attention throughout the last decades and remains an active area of research. Most concepts from multivariate statistics have been extended to functional data (Ramsay and Silverman, 2005; Kokoszka and Reimherr, 2017). For example, many functional versions of the principal component analysis have been proposed in the literature (Shang, 2014). Different generalizations of the linear model to functional covariates and/or functional responses have been introduced (Cardot et al., 1999; Cuevas et al., 2002). Finally, Portmanteau-type tests for detecting serial correlation have been proposed for functional time series (Gabrys & Kokoszka, 2007; Bucher et al., 2020). This functional approach allows extracting previously unavailable information from the data in the form of derivatives of the continuous signal.
As methods from FDA take the functional structure of physical processes into account, they would be suitable classifiers. However, classic methods from FDA are in general not shift invariant and require the signal of interest to occur at a fixed point in time. In some applications it is possible to "register" the functions through a suitable transformation of time (Sakoe & Chiba, 1978; Kneip & Gasser, 1992; Gasser & Kneip, 1995; Ramsay & Li, 1998). However, in the context of sliding windows, curve registration is often not feasible, and methodology that requires prior registration cannot be applied reliably.
In the present work, we propose a framework that combines the advantages of neural networks (particularly CNNs) and FDA: functional neural networks (FNNs). On the one hand, these networks are shift invariant; on the other hand, they are able to model the functional structure of their input. FNNs have several advantages over scalar-valued neural networks. They are independent of the sampling frequency of the input data, as long as the input can be rescaled to a certain interval. Further, they allow the prediction of smooth outputs. And finally, they are more transparent to some extent due to smoothness constraints.
We summarize our contribution as follows:
* We propose extensions of fully-connected and convolutional layers to functional data.
* We present architectures of functional neural networks based on these extensions.
* We show that the proposed methodology works through a simulation study and real data experiments.
Whereas multi-layer perceptrons (MLPs) are not shift invariant, the introduced functional convolutional layers allow the construction of shift invariant functional convolutional neural networks. This makes FNNs helpful in any scenario where sliding windows based on a (possibly multivariate) continuous data stream are classified, and they can be employed in a variety of applications.
## 2 Related Work
To the best of our knowledge, the combination of functional data and (convolutional) neural networks is only discussed in a handful of papers, and the proposed methodology extends previous results. In early works, MLPs with functional inputs and neurons that transform the functional data to scalar values in the first layer were introduced (Rossi et al., 2002; Rossi & Conan-Guez, 2005; Rossi et al., 2005). (Zhao, 2012) proposed an algorithm to train similar MLPs with inputs from a real Hilbert space. Subsequently, (Wang et al., 2019) proposed to use functional principal components for the transformation of the functional inputs to scalar values in the first layer. (Wang et al., 2020) added another layer based on functional principal components to transform the scalar-valued output of the MLP back to functional data in the last layer. More recently, fully functional neurons were proposed (Rao & Reimherr, 2021a;b).
## 3 Mathematical Preliminaries
Let us assume, we observe \(d\) quantities at \(T\) time instants for \(N\in\mathbb{N}\) individuals, providing us with matrices of observations
\[\mathbf{X}^{(n)}=\left(\begin{array}{ccc}X^{(n)}_{1,1}&\cdots&X^{(n)}_{1,T} \\ \vdots&\ddots&\vdots\\ X^{(n)}_{d,1}&\cdots&X^{(n)}_{d,T}\end{array}\right),\]
for \(n=1,\ldots,N\), and jointly with \(\mathbf{X}^{(n)}\) their corresponding "labels" \(\mathbf{Y}^{(n)}\) which might be vectors in \(\mathbb{R}^{c}\) or matrices
\[\mathbf{Y}^{(n)}=\left(\begin{array}{ccc}Y^{(n)}_{1,1}&\cdots&Y^{(n)}_{1,T} \\ \vdots&\ddots&\vdots\\ Y^{(n)}_{c,1}&\cdots&Y^{(n)}_{c,T}\end{array}\right),\]
where \(c\) denotes the number of quantities that we observe for \(\mathbf{Y}^{(n)}\). Further assume the observed quantities to be noisy versions of an underlying smooth signal, i. e.,
\[X^{(n)}_{i,t}=f^{(n)}_{i}\big{(}\tfrac{t}{T}\big{)}+\varepsilon^{(n)}_{i,t}, \tag{1}\]
for smooth functions \(f^{(n)}_{i}\) and centered errors \(\varepsilon^{(n)}_{i,t}\), for \(i=1,\ldots,d,t=1,\ldots,T\) and \(n=1,\ldots,N\). Note that the degree of smoothness might vary for different applications, which leads to slight modifications in the model. This representation suggests the use of methods from functional data analysis, which consider the intrinsic structure of the data.
Throughout this work, we only require the functions \(f^{(n)}_{i}\) to be square-integrable, i. e., \(f^{(n)}_{i}\in L^{2}([0,1])=\{f:[0,1]\rightarrow\mathbb{R}:\int_{0}^{1}f(x)^ {2}\,\mathrm{d}x<\infty\}\). Similarly, in case of matrix-valued labels \(\mathbf{Y}^{(n)}\), we assume their entries to be discretized versions of some underlying functions \(g^{(n)}_{i}\in L^{2}([0,1])\), more specifically \(Y^{(n)}_{i,t}=g^{(n)}_{i}(t/T)\).
Our aim is to approximate the functional \(F:\big{(}L^{2}([0,1])\big{)}^{d}\rightarrow\mathcal{Y}\), which maps an observation \(\mathbf{X}\) to its
corresponding label \(\mathbf{Y}\), where \(\mathcal{Y}=\mathbb{R}^{c}\) for vector-valued labels \(\mathbf{Y}^{(n)}\) and \(\mathcal{Y}=\big{(}L^{2}([0,1])\big{)}^{c}\) for matrix-valued labels. Formally, the functional \(F\) corresponds to the conditional expectation \(\mathbb{E}[\mathbf{Y}|\mathbf{X}]\).
In case of classification problems, each coordinate \(F_{i}(\mathbf{X})\) of the functional \(F\) can be interpreted as the probability of \(\mathbf{X}\) belonging to class \(i\in\{1,\ldots,c\}\).
### Preprocessing
Before putting observations into a neural network, it is often helpful to preprocess them by applying certain filters or normalization. In our case, we work with noisy functional data, observed at discrete time points. In functional data analysis, a common first step for this kind of data is _smoothing_, which helps to reduce the errors and extends the observations from discrete time points to a continuous interval. Another preprocessing step frequently used for neural networks is some form of normalization to ensure that the data is of a similar magnitude. We employ _local linear estimation_ for smoothing the data and _standardization_ for its normalization, as described below.
#### 3.1.1 Smoothing
In the literature, there exists a variety of smoothing procedures from Fourier series to expansions based on B-splines or wavelets (Ramsay & Silverman, 2005). We use local polynomial regression to estimate the functions \(f_{i}^{(n)}\) and their first derivative(s) (Fan & Gijbels, 1996).
For the sake of clarity, we omit some indices and rewrite (1) as \(X_{t}=f\big{(}\frac{t}{T}\big{)}+\varepsilon_{t}\) for a moment. Then, if \(f\) is \(p+1\) times differentiable with bounded derivatives, we can define the local polynomial estimator as
\[(\hat{f}(x),\widehat{f}^{\prime}(x),\ldots,\widehat{f^{(p)}}(x))\] \[=\operatorname*{arg\,min}_{\beta_{0},\ldots,\beta_{p}}\sum_{t=1}^ {T}\bigg{(}X_{t}-\sum_{j=0}^{p}\beta_{j}\Big{(}\frac{t}{T}-x\Big{)}^{j}\bigg{)} ^{2}K_{h}\big{(}\frac{t}{T}-x\big{)}\]
to estimate \(f\) and its first \(p\) derivatives. Here \(K\) denotes a kernel function, \(h\) the bandwidth of the estimator and \(K_{h}(\cdot)=K(\cdot/h)\). In the following, we assume \(K:\mathbb{R}\to\mathbb{R}\) to be a symmetric, twice differentiable function, supported on the interval \([-1,1]\) and satisfying \(\int_{[-1,1]}K(x)\,\mathrm{d}x=1\).
From the above definition, explicit formulas can be derived for the estimators by calculating the derivatives of the right-hand side with respect to \(\beta_{j}\) (\(j=1,\ldots,p\)) and a Taylor expansion. The result can be rewritten as a convolution of the signal and a filter depending on the kernel \(K\) and the bandwidth \(h\).
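For illustration, the estimator can also be computed directly as a weighted least squares fit at each evaluation point. The sketch below (names are ours) uses the triweight kernel \(K(u)=\frac{35}{32}(1-u^{2})^{3}\), one kernel satisfying the assumptions above; since \(\beta_{j}\) estimates \(f^{(j)}(x)/j!\) in the Taylor expansion, the derivative estimates are recovered by multiplying with \(j!\).

```python
import numpy as np
from math import factorial

def local_poly(X, x0, h, p=1):
    """Estimate f(x0), f'(x0), ..., f^(p)(x0) from noisy samples
    X_t = f(t/T) + eps_t via local polynomial regression."""
    T = len(X)
    t = np.arange(1, T + 1) / T
    u = (t - x0) / h
    w = np.where(np.abs(u) <= 1, (35 / 32) * (1 - u ** 2) ** 3, 0.0)
    B = np.vander(t - x0, N=p + 1, increasing=True)  # columns (t/T - x0)^j
    sw = np.sqrt(w)                                  # weighted least squares
    beta, *_ = np.linalg.lstsq(sw[:, None] * B, sw * X, rcond=None)
    return np.array([factorial(j) * beta[j] for j in range(p + 1)])
```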
To simplify the notation, we will refer to the estimators of the functions \(f_{i}^{(n)}\) and their derivatives as \(h_{i,1}^{(n)}\), thus, we obtain estimators \((h_{i,1}^{(n)},(h_{i,1}^{(n)})^{\prime},\ldots,(h_{i,1}^{(n)})^{(p)})\) for each \(f_{i}^{(n)}\), \(i=1,\ldots,d,n=1,\ldots,N\).
The choice of the bandwidth is crucial in order to obtain a good estimate of the underlying functions. If the bandwidth is chosen too small, the estimator will overfit the data, whereas a large bandwidth leads to over-smoothing (Silverman, 2018). Oftentimes it is a good idea to use cross validation to select a bandwidth that minimizes a certain error measure, such as the mean squared error. Generally, the estimation of higher derivatives requires larger bandwidths than the estimation of the function itself.
#### 3.1.2 Normalization
When neural networks are trained via some form of gradient descent, it is crucial to ensure that the input data is of a similar size, which is done through prior normalization. There are many different normalization methods and the most useful choice depends on the specific application. In the following, we will standardize the data by subtracting the mean and dividing by the standard deviation across a suitable range of the data. As we did not make any assumptions about the relation between the signals \(f_{i}^{(n)}\) and \(f_{j}^{(n)}\), we standardize each smoothed signal \(h_{i,1}^{(n)}\) (and its derivatives) separately, i. e., we calculate
\[h_{i,2}^{(n)}=\frac{h_{i,1}^{(n)}-\int_{0}^{1}h_{i,1}^{(n)}(x)\,\mathrm{d}x}{ \bigg{(}\int_{0}^{1}\Big{(}h_{i,1}^{(n)}(x)-\int_{0}^{1}h_{i,1}^{(n)}(y)\, \mathrm{d}y\Big{)}^{2}\,\mathrm{d}x\bigg{)}^{1/2}}.\]
After this transformation, the signals are of a similar magnitude, for each observation \(\mathbf{X}^{(n)}\).
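On a sampled curve, this standardization reduces to two numerical integrals; a short sketch (names are ours) using the trapezoidal rule:

```python
import numpy as np

def standardize_curve(h, grid):
    """Standardize a smoothed signal h, sampled on `grid` in [0, 1], by
    its integral mean and L2 standard deviation."""
    mean = np.trapz(h, grid)
    var = np.trapz((h - mean) ** 2, grid)
    return (h - mean) / np.sqrt(var)
```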
## 4 Functional Layers
### Functional Multilayer Perceptrons
Once the data is smoothed and prepared to be analyzed as functional data, it is not clear how to design neural networks that take this additional structure into account. The simplest form of an artificial neural network with scalar input \((h_{1},\ldots,h_{d})\) is the multilayer perceptron, which consists of \(L\) layers with \(J_{1},J_{2},\ldots,J_{L}\) neurons each. The value at neuron \(k\) in the \(\ell\)-th layer is then calculated as
\[H_{(k)}^{(\ell)}=\sigma\bigg{(}b_{(k)}^{(\ell)}+\sum_{j=1}^{J_{\ell-1}}w_{(j, k)}^{(\ell)}H_{(j)}^{(\ell-1)}\bigg{)},\]
where \(H_{(k)}^{(0)}=h_{k}\) denotes the network's input, \(H_{(k)}^{(L)}\) its output, \(b_{(k)}^{(\ell)}\) the \(k\)th neuron's bias and \(w_{(j,k)}^{(\ell)}\) the weight between the \(k\)th neuron in layer \(\ell\) and the \(j\)th neuron in layer \(\ell-1\). The function \(\sigma:\mathbb{R}\to\mathbb{R}\) is referred to as
the _activation function_ and enables the network to reflect nonlinear dependencies.
When the input data is not scalar, but functional, (Rao and Reimherr, 2021) propose to replace the scalar biases by functional biases and the weights between neurons by integral kernels, finally defining the neurons' values as
\[H_{(k)}^{(\ell)}(s)=\sigma\bigg{(}b_{(k)}^{(\ell)}(s)+\sum_{j=1}^{J_{\ell-1}} \int w_{(j,k)}^{(\ell)}(s,t)H_{(j)}^{(\ell-1)}(t)\,\mathrm{d}t\bigg{)}. \tag{2}\]
While this extension allows the modeling of rather general relations between the model's input and its desired output, this flexibility makes the network's training difficult, because we need to find an optimal weight function for every connection between two neurons.
Starting from the fully-connected multilayer perceptron, many advances in deep learning are due to more specific architectures, which reduce the number of model parameters to mitigate the curse of dimensionality. For instance, convolutional neural networks can be interpreted as multi-layer perceptrons, where most weights vanish and the remaining connections between neurons share a smaller set of weights.
In a similar fashion, we propose to simplify the neuron model in (2) by using simple weight functions \(w_{(j,k)}^{(\ell)}:[0,1]\to\mathbb{R}\) rather than integral kernels in \(L^{2}([0,1]^{2})\). This adaptation leads to neurons defined via
\[H_{(k)}^{(\ell)}(t)=\sigma\bigg{(}b_{(k)}^{(\ell)}(t)+\sum_{j=1}^{J_{\ell-1}}w _{(j,k)}^{(\ell)}(t)H_{(j)}^{(\ell-1)}(t)\bigg{)}. \tag{3}\]
The neurons defined above are fully functional in the sense that both their input and output are functions. If we aim to predict scalar-valued labels in \(\mathbb{R}^{c}\), we need to summarize the information contained in the functions. We propose to calculate the scalar product of the weights and their corresponding inputs, leading to
\[H_{(k)}^{(\ell)}=\sigma\bigg{(}b_{(k)}^{(\ell)}+\sum_{j=1}^{J_{\ell-1}}\int w _{(j,k)}^{(\ell)}(t)H_{(j)}^{(\ell-1)}(t)\,\mathrm{d}t\bigg{)}. \tag{4}\]
With this definition of a functional multilayer perceptron (F-MLP), the training simplifies to optimizing functional weights of a single variable. The theoretical framework to train the model through backpropagation based on Fréchet derivatives is provided by (Rossi et al., 2002; Olver, 2016; Rao and Reimherr, 2021).
The computation of Fréchet derivatives is tedious and computationally expensive. An efficient approach to simplify computations and simultaneously reduce the dimension of the weights' space is to replace the weights \(w_{(j,k)}^{(\ell)}(t)\) by linear combinations of a finite set of base functions. Therefore, let \(\{\varphi_{i}\}_{i=0}^{q}\) be a set of suitable functions, such as Legendre polynomials, wavelets or the first \(q/2\) sine-cosine pairs of the Fourier basis, and consider the linear combination
\[w_{(j,k)}^{(\ell)}(t)=\sum_{i=0}^{q}w_{(j,k)}^{(\ell,i)}\varphi_{i}(t), \tag{5}\]
for some scalar weights \(w_{(j,k)}^{(\ell,i)}\). With this representation, the fully functional neural network can be described through scalar weights and we are able to use the standard scalar backpropagation.
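To make this concrete, the following PyTorch sketch implements a functional dense layer with scalar outputs, Equation (4), whose weight functions are Legendre expansions as in Equation (5). The discretization grid, the initialization, the ELU activation, and the mean approximation of the integral over \([0,1]\) are illustrative assumptions, not a prescription of the exact implementation.

```
import numpy as np
import torch
import torch.nn as nn
from numpy.polynomial import legendre

class FunctionalDense(nn.Module):
    # functional fully connected layer with scalar outputs (Equation (4));
    # weight functions are Legendre expansions (Equation (5)) evaluated on
    # a uniform grid over [0, 1] rescaled to the Legendre domain [-1, 1]
    def __init__(self, in_functions, out_neurons, grid_size, n_basis=5):
        super().__init__()
        t = np.linspace(-1.0, 1.0, grid_size)
        basis = np.stack([legendre.Legendre.basis(i)(t) for i in range(n_basis)])
        self.register_buffer("basis", torch.tensor(basis, dtype=torch.float32))
        self.coeff = nn.Parameter(torch.randn(out_neurons, in_functions, n_basis) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_neurons))

    def forward(self, x):  # x: (batch, in_functions, grid_size)
        w = torch.einsum("oiq,qt->oit", self.coeff, self.basis)  # weight functions
        out = torch.einsum("bit,oit->bo", x, w) / x.shape[-1]    # ~ integral of w(t) x(t) dt
        return torch.nn.functional.elu(out + self.bias)

layer = FunctionalDense(in_functions=2, out_neurons=3, grid_size=250)
print(layer(torch.randn(8, 2, 250)).shape)  # torch.Size([8, 3])
```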
### Functional Convolutional Neural Networks
The functional MLP is particularly useful if the input functions are aligned (or can be aligned via a suitable transformation of time) and the signals of interest happen at the same time instants for all measurements. However, under the sliding window paradigm, for high-noise data such as speech or EEG signals, it is not possible (or at least not useful) to register the curves in advance, as the signal of interest may occur at any arbitrary time instant. In this case, MLPs are impractical as they would require many parameters to model complex patterns.
For scalar input, alternative network architectures have been developed that are shift invariant and therefore capable of detecting certain signals independently of their position. One type of neural network that is considered "translation invariant" is the CNN, which we can extend to functional data as well.
Similarly to (2), we can define a functional convolutional layer by setting \(w_{(j,k)}^{(\ell)}(s,t)=u_{(j,k)}^{(\ell)}(s-t)\) for some filter (or kernel) function \(u_{(j,k)}^{(\ell)}:\mathbb{R}\to\mathbb{R}\) with support on \([-b,b]\) and bandwidth \(b\in(0,1)\), ultimately leading to
\[H_{(k)}^{(\ell)}(s)=\sigma\bigg{(}b_{(k)}^{(\ell)}(s)+\sum_{j=1}^{J_{\ell-1}} \int u_{(j,k)}^{(\ell)}(s-t)H_{(j)}^{(\ell-1)}(t)\,\mathrm{d}t\bigg{)},\]
where the functions \(H_{(k)}^{(\ell)}\) are extended to the interval \([-b,1+b]\) by defining them as zero outside of the interval \([0,1]\).
These functional convolutional layers are shift invariant in the sense that a filter, which is capable of detecting a certain signal, would detect it independently of its position in the interval \([0,1]\). Once again, we reduce the dimension of the optimization problem by representing the filters as linear combinations of a set of base functions as in (5).
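A corresponding sketch of a functional convolutional layer discretizes the integral as a standard 1-D convolution whose kernels are Legendre combinations; zero padding realizes the extension of the signals to \([-b,1+b]\). The scalar per-channel bias (in place of a functional bias) and the initialization are simplifying assumptions.

```
import numpy as np
import torch
import torch.nn as nn
from numpy.polynomial import legendre

class FunctionalConv(nn.Module):
    # functional convolutional layer: filters u(s - t) parameterized by
    # Legendre coefficients and applied as a discrete 1-D convolution;
    # the scalar per-channel bias simplifies the functional bias b(s)
    def __init__(self, in_channels, out_channels, filter_size=25, n_basis=5):
        super().__init__()
        t = np.linspace(-1.0, 1.0, filter_size)  # rescaled filter support [-b, b]
        basis = np.stack([legendre.Legendre.basis(i)(t) for i in range(n_basis)])
        self.register_buffer("basis", torch.tensor(basis, dtype=torch.float32))
        self.coeff = nn.Parameter(torch.randn(out_channels, in_channels, n_basis) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_channels))
        self.pad = filter_size // 2

    def forward(self, x):  # x: (batch, in_channels, grid_size)
        filt = torch.einsum("oiq,qt->oit", self.coeff, self.basis)
        out = torch.nn.functional.conv1d(x, filt, padding=self.pad)  # zero padding
        return torch.nn.functional.elu(out + self.bias[None, :, None])

layer = FunctionalConv(in_channels=2, out_channels=20, filter_size=25)
print(layer(torch.randn(8, 2, 250)).shape)  # torch.Size([8, 20, 250])
```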
### Architecture
With functional versions of fully connected and convolutional layers at hand, we can define arbitrary architectures of functional neural networks (FNNs). Figure 1 displays FNNs with scalar and functional outputs, respectively. In both architectures, the first layer uses a local linear estimator to smooth the input and estimate derivatives of the smoothed signals, while the second layer standardizes the input across each signal. Following are two functional convolutional layers. For the FNN with scalar output, the last layer is a functional fully connected layer, while for the FNN with functional output, the last layer is a third functional convolutional layer.
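Putting the pieces together, a sketch of the scalar-output architecture (left of Figure 1) might look as follows, reusing the FunctionalConv and FunctionalDense sketches above. Smoothing and standardization are assumed to happen during preprocessing, and for classification a softmax over the final outputs would precede the categorical cross-entropy loss; the class and parameter names are our own.

```
import torch
import torch.nn as nn

class FNNScalarOutput(nn.Module):
    # two functional convolutional layers followed by one functional
    # dense layer, as in the scalar-output architecture of Figure 1
    def __init__(self, in_channels=2, grid_size=250, n_classes=3):
        super().__init__()
        self.conv1 = FunctionalConv(in_channels, 20, filter_size=25)
        self.conv2 = FunctionalConv(20, 10, filter_size=25)
        self.dense = FunctionalDense(10, n_classes, grid_size)

    def forward(self, x):  # x: (batch, channels, grid_size)
        return self.dense(self.conv2(self.conv1(x)))

model = FNNScalarOutput()
print(model(torch.randn(8, 2, 250)).shape)  # torch.Size([8, 3])
```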
## 5 Empirical Results
We now show that the proposed methodology works with simulated and real data and compare it to benchmark models.
### Simulation Study
#### Setup
Inspired by brain activity measured through electroencephalography (EEG), we generate two data sets for the simulation study. In both cases, we simulate two-dimensional samples that belong to one of three classes.
For data set (I), we draw independently frequencies \(\alpha_{n}\sim\mathcal{U}_{[8,12]},\beta_{n}\sim\mathcal{U}_{[13,30]}\), time shifts \(t_{1}^{(n)},t_{2}^{(n)}\sim\mathcal{U}_{[0,1]}\), class labels \(c_{n}\sim\mathcal{U}_{\{1,2,3\}}\) and standard normally distributed errors \(\varepsilon_{i,t}^{(n)}\sim\mathcal{N}(0,1)\). Based on these random quantities, we construct the continuous signals
\[f_{i}^{(n)}(x)=(1-\gamma_{i}(c_{n}))\cdot\sin(2\pi\alpha_{n}(x+t_{i}^{(n)}))+\gamma_{i}(c_{n})\cdot\sin(2\pi\beta_{n}(x+t_{i}^{(n)}))\]
with class dependent coefficients \(\gamma(1)=(0,0),\gamma(2)=(0.8,0.4)\) and \(\gamma(3)=(0.4,0.8)\) and finally define the discretized, noisy samples \(X_{i,t}^{(n)}=f_{i}^{(n)}(t/T)+\varepsilon_{i,t}^{(n)}\). Examples of each class are displayed in Figure 6 of Appendix A.
For data set (II), we draw independently scaling factors \(w_{n}\sim\mathcal{U}_{[0.05,0.1]}\), time points \(t_{n}\sim\mathcal{U}_{[0,1]}\), class labels \(c_{n}\sim\mathcal{U}_{\{1,2,3\}}\) and standard normally distributed errors \(\varepsilon_{i,t}^{(n)}\sim\mathcal{N}(0,1)\). Based on the scaling factors \(w_{n}\) and time points \(t_{n}\), we construct continuous signals
\[f^{(n)}(x)=\max\Big{\{}-\tfrac{4}{w_{n}^{2}}(x-t_{n})^{2}+3,0\Big{\}},\]
which resemble spikes, as displayed in Figure 7 of Appendix A. Again, we define discretized, noisy samples \(X_{i,t}^{(n)}=\gamma_{i}(c_{n})\cdot f^{(n)}(t/T)+\varepsilon_{i,t}^{(n)}\), where the class dependent coefficients are \(\gamma(1)=(0,0),\gamma(2)=(1,0)\) and \(\gamma(3)=(0,1)\).
For both data sets, we vary the sample size \(N\in\{1000,2000,3000,4000,5000\}\), while keeping \(T=250\) fixed.
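For reference, a minimal generator for data set (II) is sketched below (the generator for data set (I) is analogous); the function and variable names are our own.

```
import numpy as np

def simulate_dataset_2(N=1000, T=250, seed=0):
    # spikes f(x) = max(-4/w^2 (x - t0)^2 + 3, 0), scaled by the
    # class-dependent coefficients and overlaid with standard normal noise
    rng = np.random.default_rng(seed)
    gamma = {1: (0, 0), 2: (1, 0), 3: (0, 1)}
    x = np.arange(1, T + 1) / T
    X = np.empty((N, 2, T))
    labels = rng.integers(1, 4, size=N)
    for n, c in enumerate(labels):
        w, t0 = rng.uniform(0.05, 0.1), rng.uniform(0, 1)
        f = np.maximum(-4 / w ** 2 * (x - t0) ** 2 + 3, 0)
        for i in range(2):
            X[n, i] = gamma[int(c)][i] * f + rng.standard_normal(T)
    return X, labels

X, y = simulate_dataset_2()
print(X.shape)  # (1000, 2, 250)
```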
As baseline models, we use \(k\)-nearest neighbors (KNN) for a varying number of neighbors \(k\in\{1,2,...,19\}\). Before feeding the data into the model, we smooth it by applying a local linear estimator as described in Section 3.1.1.
To show that functional neural networks work, and indeed surpass the performance of the baseline models, we used an FNN as described in Section 4.3 with two functional convolutional and one functional dense layer. For the local linear estimation, we used the quartic kernel \(K(x)=\frac{15}{16}(1-x^{2})^{2}\) with support \([-1,1]\) and the bandwidth \(h=5\) for the estimation of the smooth function and \(h=10\) for the estimation of its derivative. For each functional layer, we used the first 5 Legendre polynomials as base functions, i.e., \(\varphi_{0}(x)=1,\varphi_{1}(x)=x,\varphi_{2}(x)=\frac{1}{2}(3x^{2}-1),\varphi_{3}(x)=\frac{1}{2}(5x^{3}-3x)\) and \(\varphi_{4}(x)=\frac{1}{8}(35x^{4}-30x^{2}+3)\). Further, we used 20 and 10 filters of size 25 for the two convolutional layers. As activation function, we chose the _exponential linear unit_ (ELU), which is defined as \(\sigma(x)=x\cdot\mathds{1}(x\geq 0)+(\exp(x)-1)\cdot\mathds{1}(x<0)\). As loss function, we used the categorical cross-entropy.
We trained each model 100 times, generating a new data set for each trial. The FNNs were trained for 5 epochs.
Figure 1: Left: Neural network architecture with functional input and scalar output. Right: Neural network architecture with functional input and output.
#### Results
The results of the benchmark model for both data sets with a varying number of samples \(N\) and neighbors are displayed in Figures 2 and 3. In Table 1, the results of the benchmark model with the best choice of neighbors \(k\) are compared with the results of the FNN. In all cases, the classifications of the functional neural network are more reliable than those of the \(k\)-nearest neighbors classifier. The FNN achieved an accuracy above 99.6% in all cases, whereas the KNN classifier achieved between 93.0% and 99.3% for data set (I) and between 76.9% and 86.4% for data set (II). As expected, the shift invariance of the FNN makes it particularly helpful for data set (II), where the signal of interest may occur at any point in the observed interval. More specifically, for data set (II) the FNN's accuracy is at least 13.5% higher than the corresponding accuracy of the KNN classifier.
### Real Data Experiments
#### Setup
The BCI Competition IV Dataset 2A (Tangermann et al., 2012) is a common benchmark for evaluating the performance of a new method for the analysis of EEG data. To test our method, we used the 9 openly available, labeled recordings of approximately 45 minutes each.
According to the documentation, participants were asked to imagine movements of their left hand (class 1), right hand (class 2), both feet (class 3) and tongue (class 4). During each session, every imaginary movement was repeated 72 times, yielding a total of 288 trials. Each trial took approximately 8 seconds. At the beginning of each trial (\(t=0\,s\)), a short acoustic signal and a fixation cross on a black screen appeared. Two seconds later (\(t=2\,s\)), a visual cue appeared to indicate the movement, which should be imagined. The imaginary movement can be assumed to start approximately half a second after the cue (\(t=2.5\,s\)) and end when the fixation cross disappeared (\(t=6\,s\)). Each trial was followed by a short break to separate it from subsequent trials. The participants' brain activity was measured through a 22-channel EEG with 3 additional EOG (_Electrooculography_) channels at a sampling rate of 250 Hz.
For this data set, the _classic approach_ to benchmark a new method is to cut windows from each trial, e. g., between 2.5 s and 4.5 s after trial onset, which is feasible since the trial and cue onsets are known. However, if we move beyond externally triggered actions, we need another approach. This is particularly important in the case of brain-computer interfaces where devices should be controlled continuously. In this case, a common approach is to use _sliding windows_, i. e., to use overlapping windows of a fixed length with a fixed step size.
We tested the proposed functional neural network, as described in Section 4.3 with the same specifications as in the simulation study, and compared it to the EEGNet with its default choice of hyperparameters as suggested by (Lawhern et al., 2018). However, to account for the different degrees of complexity of the EEG data, we chose to use different numbers of filters in the convolutional layers.
| \(N\) | Data set (I): KNN | Data set (I): FNN | Data set (II): KNN | Data set (II): FNN |
| --- | --- | --- | --- | --- |
| 1000 | 93.0% | **99.6%** | 76.9% | **99.6%** |
| 2000 | 97.3% | **99.8%** | 81.4% | **99.8%** |
| 3000 | 98.5% | **99.8%** | 83.6% | **99.8%** |
| 4000 | 99.1% | **99.7%** | 85.5% | **99.9%** |
| 5000 | 99.3% | **99.8%** | 86.4% | **99.9%** |

Table 1: Mean accuracy of the classifiers for the respective data sets.
Figure 3: Mean accuracy in percent (y-axis) of the \(k\)-nearest neighbors classifiers for data set (II) and a varying number of neighbors (x-axis).
Figure 2: Mean accuracy in percent (y-axis) of the \(k\)-nearest neighbors classifiers for data set (I) and a varying number of neighbors (x-axis).
For the classic approach with 4 classes (corresponding to the 4 different imaginary movements), we tested two models with 5 + 10 and 3 + 12 filters, respectively, and denote these models by FNN(5, 10) and FNN(3, 12). These models have 2,344 and 1,564 trainable weights, which is slightly less than the 2,526 trainable parameters of the EEGNet. For the sliding windows approach with 7 classes, we increased the number of filters to 40 + 20 and 5 + 10. In this setting, we get models with 19,767 and 2,497 trainable weights compared to 2,783 for the EEGNet (with 7 classes).
For the classic approach, we used windows between the cue onset (\(t=2\,s\)) and the disappearance of the fixation cross (\(t=6\,s\)). We split each recording into 80% train and 20% test data, which corresponds to 230 and 58 windows each. We trained the proposed FNN and the EEGNet with 250 batches of size 32 to distinguish between the four classes (left hand, right hand, feet, tongue). In total 8000 samples were used, which means that each of the 230 training windows was used approximately 35 times.
For the sliding windows approach, we split each recording into 80% train and 20% test data, which corresponds to approximately 36 and 9 minutes respectively. We used windows of 1 s and a step size of 0.016 s which led to more than 125,000 sliding windows. These windows might coincide with a break between trials (class 1), the time between trial and cue onset (class 3), the time between cue onset and imagined movement (class 2) or one of the four imagined movements (classes 4 - 7). The windows at the transition between two classes were labeled with the most frequent class. This problem is substantially more complex than the classic approach and we have more varied data. Thus, we trained both models with 4000 batches of size 32, which is close to the number of sliding windows.
We trained each model 10 times for each of the 9 recordings.
#### Results
The results for both approaches are displayed in Tables 2 and 3. As before, the accuracy represents the ratio of correctly classified windows to all windows. To account for the class imbalance, the mean recall and precision over all categories were added: the recall of a binary classifier is defined as the ratio of correctly classified positive samples to all truly positive samples, whereas the precision of a binary classifier is defined as the ratio of correctly classified positive samples to all positively classified samples. These quantities for binary classifiers were extended to the multiclass problem by first calculating the respective quantity per class and then averaging the calculated quantities over all classes.
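For clarity, the macro-averaged recall and precision used here can be computed from a confusion matrix as in the following sketch (rows are assumed to hold true classes and columns predicted classes):

```
import numpy as np

def macro_scores(confusion):
    # rows: true classes; columns: predicted classes
    tp = np.diag(confusion).astype(float)
    recall = tp / np.maximum(confusion.sum(axis=1), 1)      # per-class recall
    precision = tp / np.maximum(confusion.sum(axis=0), 1)   # per-class precision
    return recall.mean(), precision.mean()

cm = np.array([[50, 5, 5], [10, 40, 10], [5, 5, 50]])
print(macro_scores(cm))
```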
In both cases, the proposed FNNs outperformed the benchmark model. For the classic approach, the smaller FNN(3, 12) with 1,564 parameters achieved a 2.4% higher accuracy and the larger FNN(5, 10) with 2,344 trainable parameters a 3.47% higher accuracy than the benchmark model with 2,526 parameters. The respective confusion matrices are displayed in Figure 8 of Appendix B.
For the sliding window approach, the FNN(5, 10) with a comparable number of parameters had a 1.97% higher accuracy compared to the benchmark model, whereas the larger FNN(40, 20) outperformed the benchmark model by 4.69% accuracy. Note that classes 4 - 7 are generally more difficult to detect. Yet, it can be seen from the confusion matrices in Figure 9 of Appendix B that the classifications of the FNN(40, 20) are notably better for those classes, which is also reflected in the recall and precision of the classifiers.
Both the FNNs and the (default) EEGNet are relatively simple models. It can be expected that the accuracies improve for both types of models if the hyperparameters are tuned carefully. Further improvements might be possible by changing the FNN's architecture or simply using more layers.
### Fully Functional Predictions
With the proposed methodology it is not only possible to predict scalar-valued labels and use the model for classification, but it is also possible to predict functional labels. With the sliding windows as before, we can try to predict the class label for each time point rather than one label for the whole window. This is particularly useful at the transition from one state to another because these transitions cannot be represented by a simple classification.
We trained a functional neural network with three functional convolutional layers as depicted in the bottom of Figure 1 to predict labels for each time instant of the sliding windows.
| Model | Accuracy | Recall | Precision |
| --- | --- | --- | --- |
| EEGNet | 58.13 | 58.30 | 58.59 |
| FNN(5, 10) | **61.60** | **61.81** | **61.49** |
| FNN(3, 12) | 60.53 | 60.80 | 60.56 |

Table 2: Comparison of the models' quality under the classic approach.
| Model | Accuracy | Recall | Precision |
| --- | --- | --- | --- |
| EEGNet | 46.00 | 35.46 | 40.52 |
| FNN(40, 20) | **50.69** | **42.47** | **45.50** |
| FNN(5, 10) | 47.97 | 37.42 | 42.60 |

Table 3: Comparison of the models' quality under the sliding windows approach.
The overall accuracy was similar to the accuracy reported for the classifier above. Figures 4 and 5 show the true and predicted labels of two windows at the transition from an inter-trial break (class 1) to the interval after a trial onset (class 3), and from the interval after a trial onset (class 3) to the interval after a cue onset (class 2). Both figures show that the predictions do not match the true labels perfectly and that the confidences at the border regions are generally lower, but overall the predictions match the true labels.
## 6 Conclusions
In this work, we presented a new framework to analyze functional data with scalar- or functional-valued targets. We combined advantages of convolutional neural networks with those of functional data analysis. More specifically, we proposed a neural network that can be considered as shift invariant while taking the intrinsic functional structure of the data into account.
We showed that even shallow models with only two convolutional and one dense layer are more powerful than the functional k-nearest neighbors algorithm for the simulated data. Further, the results of our case study suggest that FNNs with a similar number of trainable weights outperform EEGNet, the de facto standard model for the classification of EEG data.
The results of this paper suggest that functional neural networks are a relevant area for future research. First, it could be tested if FNNs work well with other types of time series and functional data, such as stock prices or temperature curves. It might be interesting to investigate if the methodology can be expanded to other types of data like images (considered as functions of the two variables height and width) or videos (considered as functions of the three variables time, height and width). Further, the choice of base is crucial for the performance of the network. Prior simulation studies showed that the Fourier base and Legendre polynomials lead to good results, but other bases might further improve the predictions. In FDA it is common to find roughness penalties as regularizers. Although a preliminary simulation study suggested that a base representation of the weight functions leads to better results, it would be interesting to study if the weight functions in the neural network can be learned directly while their smoothness would be ensured via corresponding roughness penalties. Finally, the proposed framework could be extended to other types of neural networks, such as recurrent neural networks or transformers.
## Acknowledgment
This work is supported by the Ministry of Economics, Innovation, Digitization and Energy of the State of North Rhine-Westphalia and the European Union, grant IT-2-2-023 (VAFES).
|
2301.00911 | Detecting Information Relays in Deep Neural Networks | Deep learning of artificial neural networks (ANNs) is creating highly
functional processes that are, unfortunately, nearly as hard to interpret as
their biological counterparts. Identification of functional modules in natural
brains plays an important role in cognitive and neuroscience alike, and can be
carried out using a wide range of technologies such as fMRI, EEG/ERP, MEG, or
calcium imaging. However, we do not have such robust methods at our disposal
when it comes to understanding functional modules in artificial neural
networks. Ideally, understanding which parts of an artificial neural network
perform what function might help us to address a number of vexing problems in
ANN research, such as catastrophic forgetting and overfitting. Furthermore,
revealing a network's modularity could improve our trust in them by making
these black boxes more transparent. Here, we introduce a new
information-theoretic concept that proves useful in understanding and analyzing
a network's functional modularity: the relay information $I_R$. The relay
information measures how much information groups of neurons that participate in
a particular function (modules) relay from inputs to outputs. Combined with a
greedy search algorithm, relay information can be used to identify
computational modules in neural networks. We also show that the functionality
of modules correlates with the amount of relay information they carry. | Arend Hintze, Christoph Adami | 2023-01-03T01:02:51Z | http://arxiv.org/abs/2301.00911v2 | # Detecting Information Relays in Deep Neural Networks
###### Abstract
Deep learning of artificial neural networks (ANNs) is creating highly functional processes that are, unfortunately, nearly as hard to interpret as their biological counterparts. Identification of functional modules in natural brains plays an important role in cognitive and neuroscience alike, and can be carried out using a wide range of technologies such as fMRI, EEG/ERP, MEG, or calcium imaging. However, we do not have such robust methods at our disposal when it comes to understanding functional modules in artificial neural networks. Ideally, understanding which parts of an artificial neural network perform what function might help us to address a number of vexing problems in ANN research, such as catastrophic forgetting and overfitting. Furthermore, revealing a network's modularity could improve our trust in them by making these black boxes more transparent. Here, we introduce a new information-theoretic concept that proves useful in understanding and analyzing a network's functional modularity: the relay information \(I_{R}\). The relay information measures how much information groups of neurons that participate in a particular function (modules) relay from inputs to outputs. Combined with a greedy search algorithm, relay information can be used to _identify_ computational modules in neural networks. We also show that the functionality of modules correlates with the amount of relay information they carry.
## 1 Introduction
Neural networks, be they natural or artificial deep-learned ones, notoriously are black boxes (Castelvecchi, 2016; Adadi and Berrada, 2018). To understand how groups of neurons perform computations, to obtain insight into the algorithms of the human mind, or to be able to trust artificial systems, we need to make the network's processing more transparent. To this end, various information-theoretic and other methods have been developed to shed light on the inner workings of neural networks. Transfer entropy (Schreiber, 2000) seeks to identify how much information is transferred from one node (or neuron) to the next, which in principle can detect causal links in a network (Amblard and Michel, 2011) or be used to understand general properties about how information is distributed among nodes (Tehrani-Saleh and Adami, 2020; Hintze and Adami, 2020). In general, information theory can be used to make inferences in cognitive science and neuroscience (McDonnell _et al._, 2011; Dimitrov _et al._, 2011; Timme and Lapish, 2018). Predictive information (Bialek _et al._, 2001; Ay _et al._, 2008) determines how much the outputs of a neural network depend on the inputs to the system or on hidden states. Integrated information (Tononi, 2015) quantifies how much a system combines inputs into a single experience and identifies the central component(s) in which that happens. Information theory is also used to determine cognitive control (Fan, 2014) and neural coding (Borst and Theunissen, 1999) in natural systems. Finally, information theory is used to characterize _representations_ (Marstaller _et al._, 2013) that quantify how much (and where) information is stored about the environment.
Despite the diverse range of applications of information theory to neuronal networks, the question of which module or subset of nodes in an artificial neural network performs which function remains an open one. In network neuroscience (Sporns, 2022; Hagmann _et al._, 2008; Sporns and Betzel, 2016), this question is investigated using functional magnetic resonance imaging (fMRI) (Logothetis, 2008; He _et al._, 2009), electroencephalography (EEG) (Thatcher, 2011), magnetoencephalography (MEG) (Thatcher, 2011), and other physiological methods. fMRI specifically identifies functional cortical areas by the increase in oxygen consumption required to perform a particular task. However, an fMRI detects two things at the same time: the activity of neurons involved in a specific task, and the fact that they often form spatially associated clusters. As a consequence, in an fMRI analysis, functional and structural modularity coincide. In deep convolutional neural networks, detecting modules based on their biological activity is obviously impossible. The essential computation of the dot product of the state vector and weight matrix does not differ depending on how involved nodes and weights are in function. Furthermore, artificial neural networks do not display any structure beyond the order of layers. The order of nodes in a layer is interchangeable, as long as the associated weights change with them.
One approach to determine functional modularity in the context of ANNs is to determine the degree of modularity from the weights that connect nodes (Shine _et al._, 2021), by determining how compartmentalized information is (Hintze _et al._, 2018; Kirkpatrick and Hintze, 2019), or by performing a knockout analysis that allows tasks to be associated with the processing nodes of the neural network (C G _et al._, 2018). However, results from such a knockout analysis are often not conclusive.
Functional modularity in ANNs is interesting for another reason: it appears to affect a phenomenon known as _catastrophic forgetting_ (McCloskey and Cohen, 1989; French, 1999), where a network trained on one task can achieve high performance, but catastrophically loses this performance when the network is sequentially trained on a new task. The degree of modularity appears to be related to the method by which these networks are trained. Using a genetic algorithm to modify weights (neuroevolution, see (Stanley _et al._, 2019)) seems to produce modular structures automatically, as was also observed in the evolution of metabolic networks (Hintze and Adami, 2008). This modularity appears to protect ANNs from catastrophic forgetting (Ellefsen _et al._, 2015). Neural networks trained via backpropagation are unlikely to be modular since this training method recruits all weights into the task trained (Hintze, 2021). Similarly, dropout regularization (Hinton _et al._, 2012) is believed to cause all weights to be involved in solving a task (making the neural network more robust), which in turn prevents overfitting.
While many methods seek to prevent catastrophic forgetting (Parisi _et al._, 2019), such as Elastic Weight Consolidation (EWC) (Kirkpatrick _et al._, 2017), algorithms such as LIME (Ribeiro _et al._, 2016), and even replay during sleep (Golden _et al._, 2022), it is still argued that catastrophic forgetting has not been solved (Kemker _et al._, 2018). If catastrophic forgetting is due to a lack of modularization of information, it becomes crucial to accurately measure this modularization to identify learning schemes that promote modules. The problem of identifying modules responsible for different functions is further aggravated when information theory and perturbation analysis (via node knockout) disagree (Bohm _et al._, 2022; Sella, 2022).
When identifying candidate neurons in hidden layers that might contain information about the inputs that are used in decision-making, perturbing those neurons by noise or knockout should disrupt function. Similarly, hidden nodes _not_ containing information about inputs should, when perturbed in this manner, not alter the outputs encoding decisions. However, if information is stored _redundantly_, perturbing only part of the redundant nodes will not necessarily disrupt function, even though they carry information. At the same time, nodes without function or information can still accidentally perturb outputs when experiencing noise (Bohm _et al._, 2022; Sella, 2022).
Here, we develop a new information-theoretic measure that quantifies how much information a set of nodes _relays_ between inputs and outputs (relay information \(I_{R}\)). This measure can be applied to all combinations of neurons (sets) to identify which set of a given size contains the most information. While the number of sets of neurons is exponential in size, the number of tests required to find the set with the largest amount of information can be significantly reduced by taking advantage of the fact that a smaller subset cannot have more information than its superset. Thus, this measure can be combined with a greedy search algorithm that identifies the relevant computational modules connecting the inputs to the outputs. We will demonstrate on a wide range of examples the function and applicability of this new method. Specifically, using a positive control in which the nodes relaying the information from inputs to outputs are known, we demonstrate that relay information indeed allows us to recover the relevant functional nodes. We compare this control to a regularly-trained neural network, and show that perturbations on nodes carrying relay information cause failures in their predicted functionality.
## 2 Methods
### Training Artificial Neural Networks
The neural networks used here are implemented using PyTorch (Paszke _et al._, 2019) and trained on the MNIST handwritten numerals dataset (LeCun _et al._, 1998). The MNIST dataset consists of 60,000 training images and 10,000 test images of the ten numerals 0-9. Each grey-scale image has \(28\times 28\) pixels with values normalized between \(-1.0\) and \(1.0\). Here, we use two different networks. The _full_ network has 784 input nodes, followed by a layer of 20 hidden nodes with a standard summation aggregation function, and a tanh threshold function. The output layer needs 10 nodes to account for the ten possible numeral classes and uses the same aggregation and threshold function as the hidden layer. The _composite_ network is an aggregate of ten sub-networks each trained to recognize only a single number. In each of the sub-networks, the hidden layer has two nodes, with a single node in the output layer.
Networks are trained using the Adam optimizer (Kingma and Ba, 2015) until they either reach a recognition accuracy of 95% or else reach a given fixed number of training epochs. The node in the output layer with the highest activation is used to indicate the network's prediction of the numeral depicted in the image (argmax function).
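A minimal PyTorch sketch of the full network's architecture is given below; the layer names and the synthetic input are illustrative assumptions, and the training loop with the Adam optimizer is omitted.

```
import torch
import torch.nn as nn

class FullModel(nn.Module):
    # 784 inputs -> 20 hidden nodes (tanh) -> 10 output nodes (tanh);
    # the predicted numeral is the argmax over the output nodes
    def __init__(self, n_hidden=20):
        super().__init__()
        self.hidden = nn.Linear(28 * 28, n_hidden)
        self.out = nn.Linear(n_hidden, 10)

    def forward(self, x):
        h = torch.tanh(self.hidden(x.flatten(1)))
        return torch.tanh(self.out(h)), h  # outputs and hidden states

model = FullModel()
outputs, hidden = model(torch.randn(32, 1, 28, 28))
predictions = outputs.argmax(dim=1)
print(predictions.shape, hidden.shape)  # torch.Size([32]) torch.Size([32, 20])
```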
### Composing an Artificial Neural Network from Specialized Networks
In a typical ANN performing the MNIST classification task, all nodes of the hidden layer are involved in relaying the information from the input to the output layer: a phenomenon we previously termed _informational smearing_ (Hintze _et al._, 2018), as the information is "smeared" over many neurons (as opposed to being localized to one or a few neurons). Our control network is constructed in such a manner that functionality is strictly distributed over very specific nodes. Specifically, we construct a network with 20 hidden nodes by aggregating ten sub-networks with two hidden nodes each. Each of the sub-networks is only trained to recognize a single numeral amongst the background of the other nine, using only two hidden nodes. By combining these 10 sub-networks into the _composite model_, we can create a control in which the relay neurons (the two hidden neurons in each of the sub-networks) are guaranteed to only relay information about a very specific function (see Figure 1). Note that those composite networks do not undergo further training.
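The assembly of the composite network can be sketched as follows, assuming each sub-model exposes its two linear layers as .hidden and .out (as in the sketch above); the resulting second-layer matrix is sparse by construction, matching Figure 1.

```
import torch

def compose(sub_models):
    # stack the ten 784 -> 2 hidden layers into one 784 -> 20 layer and
    # place each sub-model's 2 -> 1 output weights on the block diagonal
    with torch.no_grad():
        W1 = torch.cat([m.hidden.weight for m in sub_models], dim=0)  # (20, 784)
        b1 = torch.cat([m.hidden.bias for m in sub_models], dim=0)    # (20,)
        W2 = torch.zeros(10, 20)
        b2 = torch.zeros(10)
        for i, m in enumerate(sub_models):
            W2[i, 2 * i: 2 * i + 2] = m.out.weight[0]  # (2,) block per numeral
            b2[i] = m.out.bias[0]
    return W1, b1, W2, b2
```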
### Information-Theoretic Measure of Computational Modules
An artificial neural network can be viewed as an information-theoretic channel (Shannon, 1948) that relays the information received at the input layer to the output layer while performing some computations along the way. To measure the throughput of information, define the random variable \(X_{\text{in}}\) with ten states (one for each numeral) and Shannon entropy \(H(X_{\text{in}})\), while the outputs form a random variable \(X_{\text{out}}\) with entropy \(H(X_{\text{out}})\). The mutual information between the two, \(I(X_{\text{in}};X_{\text{out}})\) (see Equation (1)), consequently measures how much the output symbol distribution is determined by the inputs (and vice versa, as this is an undirected measurement):
\[I(X_{\text{in}};X_{\text{out}})=H(X_{\text{in}})+H(X_{\text{out}})-H(X_{\text {out}},X_{\text{in}})\;. \tag{1}\]
Here, \(H(X_{\text{out}},X_{\text{in}})\) stands for the joint entropy of the input and output variables.
At the initialization of the network, weights are randomly seeded, giving rise to a network that randomly classifies images. In this case, the confusion matrix is relatively uniform and the conditional entropy \(H(X_{\text{out}}|X_{\text{in}})=H(X_{\text{out}},X_{\text{in}})-H(X_{\text{in}} )\approx H(X_{\text{out}})\), leading to low information \(I(X_{\text{in}};X_{\text{out}})\). However, over the course of training, the prediction accuracy increases, leading ultimately to a strictly diagonal confusion matrix and a vanishing conditional entropy \(H(X_{\text{out}}|X_{\text{in}})\), implying that every numeral is properly classified. In this case, the information channel has maximal information (information equals capacity) when measured over the training or test set. Note that, when we calculate the entropy of the inputs \(H(X_{\text{in}})\), we use only image labels (not all possible images).
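Estimated from the empirical joint distribution of true labels and predicted labels (i.e., the normalized confusion matrix), this mutual information can be computed as in the following sketch:

```
import numpy as np

def mutual_information(labels, predictions, k=10):
    # I(X_in; X_out) in bits from the empirical joint distribution of
    # true labels and predicted labels (the confusion matrix)
    joint = np.zeros((k, k))
    for a, b in zip(labels, predictions):
        joint[a, b] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, 10000)
print(mutual_information(labels, labels))                   # ~ log2(10)
print(mutual_information(labels, rng.permutation(labels)))  # ~ 0
```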
We can view this joint channel as being composed of two sequential channels: one from the inputs to the hidden states, and one from the hidden states to the outputs. The information that the outputs receive is still determined by the inputs, but now via the hidden variable \(Y\). A perfect channel can only exist if the hidden layer has sufficient bandwidth to transmit all of the entropy present at the inputs, that is, \(H(Y)\geq H(X_{\rm in})\).
We can now write the information that flows from the inputs via the hidden states to the outputs in terms of the shared information between all three random variables
\[I(X_{\rm in};X_{\rm out};Y)=H(X_{\rm in})+H(X_{\rm out})+H(Y)-H(X_{\rm in},X_{\rm out})-H(X_{\rm in},Y)-H(X_{\rm out},Y)+H(X_{\rm in},X_{\rm out},Y)\;. \tag{2}\]
Because information _must_ pass through the hidden layer, this "triplet information" must be equal to the information \(I(X_{\rm in};X_{\rm out})\) (see Figure 2).
Figure 1: Illustration of the composite network. For each of the ten numerals, an independent neural network (sub-network) is trained to recognize a single numeral among the others. Each of those ten networks has 784 input nodes to receive data from the \(28\times 28\) pixel-wide MNIST images. Each hidden layer has two nodes followed by a single node at the output layer (top panel). The composite network (bottom panel) is assembled from these ten subnetworks. Colors represent which weights in the combined weight matrix come from which corresponding sub-network. Weights shown as white remain 0.0. Consequently, the weight matrix connecting the hidden layer to the output layer is de facto sparse.
However, in general, not all of the nodes that comprise \(Y\) carry information. Let us imagine, for example, that the set of hidden nodes \(Y\) is composed of a set \(Y_{R}\) that shares information with \(X_{\text{in}}\) and \(X_{\text{out}}\), and a set \(Y_{0}\) that does not share this information, that is, \(I(X_{\text{in}};X_{\text{out}};Y_{0})=0\), with \(Y=Y_{R}\otimes Y_{0}\). We will discuss the algorithm to determine which neurons belong in the set of relay neurons \(Y_{R}\) further below.
The nodes that comprise \(Y_{0}\) could, for example, have zero-weight connections to the inputs, the outputs, or both. They are defined in such a way that none of the information \(I(X_{\text{in}};X_{\text{out}})\) (area outlined in yellow in Figure 3B) is shared with them.
We call the information that is relayed through the "critical" nodes that carry the information (the nodes in the set \(Y_{R}\)) the _relay information_. While we could define this information simply as the information shared between \(X_{\text{in}}\) and \(X_{\text{out}}\) that is also shared with the neurons identified to be in the set \(\mathbb{Y}_{R}\) (see Section 2.4), it is important to deal with cases where neurons that are informationally inert (they do not read information from \(X_{\text{in}}\) nor write into \(X_{\text{out}}\)) could nevertheless copy the state of a neuron that does relay information. In the current construction, this does not appear to be very likely (or is at most a small effect). However, in other architectures (such as recurrent neural networks, networks with multiple layers, probabilistic transfer functions, or natural brains), such a phenomenon might be more common.
Figure 3: (**A**) Input/output structure of an ANN with inputs \(X_{\text{in}}\), outputs \(X_{\text{out}}\), and a hidden layer \(Y=Y_{R}\otimes Y_{0}\). The relay information passes from the inputs via the relay neurons \(Y_{R}\) to the output (green arrow); (**B**) the entropic Venn diagram for the four variables \(X_{\text{in}}\), \(X_{\text{out}}\), \(Y_{R}\), and \(Y_{0}\), with ellipses quantifying the entropy of each of the variables colored according to (**A**). The information shared between \(X_{\text{in}}\) and \(X_{\text{out}}\) is outlined in yellow. The relay information Equation (3) is indicated by the green area.
Figure 2: Entropy Venn diagram for the random variables \(X_{\text{in}}\), \(X_{\text{out}}\), and \(Y\). The shared information between all three variables equals the information \(I(X_{\text{in}};X_{\text{out}})\) because no information can flow from \(X_{\text{in}}\) to \(X_{\text{out}}\) without passing through \(Y\).
As discussed in Appendix A, inert neurons that copy the state of relay neurons may be classified as belonging to the set \(\mathbb{Y}_{0}\) (because removing them does not reduce the information carried by the set) yet show a nonvanishing \(I(X_{\text{in}};X_{\text{out}};Y_{0})\). In order to eliminate such contributions, we measure the relay information _conditional_ on the state of neurons in \(Y_{0}\), that is
\[I_{R}=H(X_{\text{in}};X_{\text{out}};Y_{R}|Y_{0})\;, \tag{3}\]
which is indicated in the entropic Venn diagram in Figure 3 as the area colored in green. An explicit expression for \(I_{R}\) can be obtained simply by writing Equation (2) for \(Y_{R}\) instead of \(Y\), and conditioning every term on \(Y_{0}\).
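Written out, every term conditioned on \(Y_{0}\) expands as \(H(\cdot|Y_{0})=H(\cdot,Y_{0})-H(Y_{0})\), so that \(I_{R}\) reduces to a signed sum of joint entropies. A plug-in estimator from discrete samples could be sketched as follows; the hidden states are passed as one tuple per sample, and the toy example at the end is an illustrative assumption.

```
import numpy as np
from collections import Counter

def entropy(*cols):
    # joint Shannon entropy (in bits) of discrete columns; multi-neuron
    # states are passed as one column of tuples, one tuple per sample
    counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def relay_information(x_in, x_out, y_r, y_0):
    # I_R = I(X_in; X_out; Y_R | Y_0): Equation (2) written for Y_R with
    # every term conditioned on Y_0 via H(.|Y_0) = H(., Y_0) - H(Y_0)
    return (entropy(x_in, y_0) + entropy(x_out, y_0) + entropy(y_r, y_0)
            - entropy(x_in, x_out, y_0) - entropy(x_in, y_r, y_0)
            - entropy(x_out, y_r, y_0)
            + entropy(x_in, x_out, y_r, y_0) - entropy(y_0))

rng = np.random.default_rng(0)
x = rng.integers(0, 10, 5000)
y_r = [tuple(s) for s in np.stack([x % 2, x // 5]).T]  # two binarized neurons
y_0 = [(0,)] * len(x)                                  # no removed neurons yet
print(relay_information(x, x, y_r, y_0))               # partial information about x
```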
We can also define a _particular relay information_ (a relay information that pertains to any particular numeral class) by introducing the input-class random variable
\[Z=Z_{1}\otimes Z_{2}\otimes\dots\otimes Z_{10}\;. \tag{4}\]
Because we can decompose \(X_{\text{out}}\) in a similar manner
\[X_{\text{out}}=X_{\text{out}}^{(1)}\otimes X_{\text{out}}^{(2)}\otimes\dots \otimes X_{\text{out}}^{(10)}\;, \tag{5}\]
the relay information about numeral \(i\) can then be written as
\[I_{R}(i)=H(Z_{i};X_{\text{out}}^{(i)};Y_{R}|Y_{0})\;. \tag{6}\]
This is the information that the critical relay nodes \(Y_{R}\) are providing about numeral \(i\).
The removal of hidden neurons that do not contribute to information transfer suggests a simple algorithm that identifies such neurons: start with the full set and remove neurons one by one, keeping only those neurons whose removal would reduce the information being relayed. However, this search is in reality more complex because neurons can carry redundant information. We discuss this algorithm in the following section.
### Shrinking Subset Aggregation Algorithm
In order to find the minimal subset of nodes \(\mathbb{Y}_{R}\) that carry all of the information flowing from \(X_{\text{in}}\) to \(X_{\text{out}}\), we should in principle test all possible bi-partitions of neurons in \(Y\). Unfortunately, the number of bi-partitions of a set is still exponential in the set size, so a complete enumeration can only be performed efficiently for small sets. However, it turns out that in most cases a greedy algorithm that removes nodes one by one will find the minimal set \(\mathbb{Y}_{R}\) (see Appendix A).
We start with the largest partition in which all nodes belong to the set \(\mathbb{Y}_{R}\), and none to \(\mathbb{Y}_{0}\). Now, all possible subsets in which a single node is moved from \(\mathbb{Y}_{R}\) to \(\mathbb{Y}_{0}\) can be tested. The subset with the highest information (Equation (3)) is retained, and the node with the lowest information contribution is permanently moved into subset \(\mathbb{Y}_{0}\). This process is repeated until only one node is left in \(\mathbb{Y}_{R}\). Over the course of this procedure (assuming perfect estimation of entropy from sample data), the set with the highest information for each set size should be identified (see Algorithm 1).
As discussed in Appendix A, this algorithm can sometimes fail to identify the correct minimal subset. First, estimates of entropies from finite ensembles can be inaccurate: these estimates are both noisy and biased (see, for example, (Paninski, 2003)), leading to the removal of the wrong node from the set \(\mathbb{Y}_{R}\). Second, information can be stored redundantly. Imagine a network of ten nodes, with three nodes forming the relay between inputs and outputs, while another set of two nodes is _redundant_ with those other three nodes. The greedy algorithm will work until all those five nodes are in the set \(\mathbb{Y}_{R}\). Removing any of those nodes will not drop the information content of the larger set, since the information is fully and redundantly contained in both the set of three and the set of two. Thus, all five nodes appear equally _unimportant_ to the algorithm, which can now not decide anymore which node to remove. It might remove one of the nodes in the set of three, leading to the set of two becoming the crucial computational module. Alternatively, removing a node from the smaller set promotes the larger set to become the crucial computational set. Either way, the algorithm has a chance to fail to find a unique set because there could be several.
One way to amend the algorithm would be to allow the process to dynamically branch. In case multiple nodes upon removal do not reduce the information retained in the remaining set \(\mathbb{Y}_{R}\), all possible branches can be pursued. Such a fix will significantly increase the computational time. However, as we do not expect the occurrence of redundant sets to be a prominent feature of many networks, we have not explored this alternative algorithm further.
```
Require: Y = {0, ..., n}
Y_0 ← ∅
Y_R ← Y
while Y_R ≠ ∅ do
    for all a ∈ Y_R do
        Y'_R ← Y_R − a
        Y'_0 ← Y_0 + a
        I_a ← I_R(X_in; X_out; Y'_R | Y'_0)    (see Equation (3))
    end for
    a ← arg min_{a ∈ Y_R} I_a
    Y_R ← Y_R − a
    Y_0 ← Y_0 + a
end while
```
**Algorithm 1** Shrinking Subset Aggregation Algorithm.
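A direct Python translation of Algorithm 1, reusing the relay_information() estimator sketched in Section 2.3, could look as follows; it returns the removal order together with the relay information retained after each removal. The function name and data layout are our own.

```
def shrinking_subset_aggregation(x_in, x_out, hidden_states):
    # hidden_states: (n_samples, n_nodes) array of binarized neuron states
    n_nodes = hidden_states.shape[1]
    y_r, y_0, trace = list(range(n_nodes)), [], []
    while y_r:
        scores = {}
        for a in y_r:                          # try moving each node to Y_0
            remaining = [n for n in y_r if n != a]
            scores[a] = relay_information(
                x_in, x_out,
                [tuple(s) for s in hidden_states[:, remaining]],
                [tuple(s) for s in hidden_states[:, y_0 + [a]]])
        a = min(scores, key=scores.get)        # least informative node
        y_r.remove(a)
        y_0.append(a)
        trace.append((a, scores[a]))           # removal order, retained I_R
    return trace
```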
### Knockout Analysis
To test the informational relevance of individual nodes of the hidden layer, we can perform "knockout" experiments. While a knockout in a biological context is defined as the disabling of a component, it is less obvious how to perform such an operation in the context of a neural network. One option would be to replace a neuron's activation level by a random number, which still leaves the freedom to choose a distribution and a range. Furthermore, these random values still propagate through the network, which implies that such a knocked-out neuron is not disabled. Keeping an activation level constant (independent of the inputs) can also have undesirable effects. Imagine that a neuron's activation level is constant, say \(1.0\) or \(-1.0\), independently of the input images. This value would be included in all subsequent aggregation functions affecting the final output of the network. Attempting to knock out this node by forcing it to \(-1.0\) or \(1.0\) can now have two different effects. If the node is already a constant \(1.0\), knocking it out by forcing it to be a constant \(1.0\) would suggest that this node has no function, since such a knockout would not change any output. Setting it to \(-1.0\) might have significant effects, but would on the other hand leave a node that should be at \(-1.0\) unaffected. Here, to "knock out" a node (to render it non-functional) in the hidden layer, we force it to take on a value of \(0.0\) during the forward pass. At the same time, all weights of the first layer leading to such a node are set to \(0.0\), as are all weights of the second layer that are affected by that node. Alternatively, the values of the nodes to be knocked out in the hidden layer could have been forced to \(0.0\) when the forward pass reaches the hidden layer. These methods are equivalent. While this form of knockout can also have undesirable consequences, the effect is likely closest to an actual removal of the node by eliminating it from the network, and shrinking the weight matrices accordingly.
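In code, this knockout amounts to zeroing the corresponding rows and columns of the two weight matrices (and the bias), as sketched below for the FullModel layout assumed earlier; since tanh(0) = 0, the knocked-out node then contributes a constant 0.0 to the forward pass.

```
import torch

def knockout(model, nodes):
    # zero the incoming weights and bias of each node (tanh(0) = 0, so the
    # node's activation becomes a constant 0.0) and its outgoing weights
    with torch.no_grad():
        for n in nodes:
            model.hidden.weight[n, :] = 0.0
            model.hidden.bias[n] = 0.0
            model.out.weight[:, n] = 0.0
    return model
```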
### Coarse-Graining Continuous Variables
The computations performed by the neural network use continuous inputs, and due to the tanh-like threshold function, the activation levels of neurons in the hidden layer are confined to the interval \([-1,1]\). While entropies can be computed on continuous variables (so-called differential entropies, see (Shannon, 1948)), we use discrete entropies here, which require a discretization of the continuous values. In particular, we are coarse-graining those entropies by mapping all continuous values to the binary categories \(0\) and \(1\). We previously used the median value of a neuron's excitation level as the threshold for the bin (Bohm _et al._, 2022). Instead, here the hidden-state values are clustered using a \(k\)-means clustering algorithm with \(k=2\). Using the median for coarse-graining ensures that the resulting distribution has maximal entropy because each bin is guaranteed to receive half of the values. However, we found a maximum-entropy assumption for a neuron to be inappropriate in most cases. Using a \(k\)-means clustering algorithm to distribute values into bins gives a better approximation of the relative entropy between states.
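A self-contained sketch of this binarization via 1-D \(k\)-means with \(k=2\) (Lloyd's algorithm, written here without a library dependency; the synthetic activations are an illustrative assumption):

```
import numpy as np

def binarize_kmeans(values, iters=50):
    # 1-D k-means with k = 2: assign each activation to the nearer of
    # two centers, then update the centers as cluster means
    centers = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = (np.abs(values - centers[0]) > np.abs(values - centers[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    return labels

rng = np.random.default_rng(0)
acts = np.concatenate([rng.normal(-0.8, 0.1, 500), rng.normal(0.7, 0.2, 500)])
print(np.bincount(binarize_kmeans(acts)))  # two clusters of roughly 500 each
```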
Coarse-graining also reduces the size of the state space that is being sampled. Using \(k=2\) to coarse-grain the hidden states implies that there are at most \(k^{N}\) possible states, which (with \(N=20\)) is a state space that is in danger of being significantly undersampled with a limited number of images (MNIST has at most 70,000 if test and training data are combined). As the entropy of this hidden space is \(N\log_{2}k\) bits, an input sample with \(\log(60,000)\approx 15.87\) bits would be insufficient to adequately sample the random variables \(X_{\text{in}}\), \(X_{\text{out}}\), \(Y_{R}\), or \(Y_{0}\) even for \(k=2\). However, as discussed in Appendix B, because the input entropy is much smaller (\(\log 10\approx 3.32\) bits), estimation errors are small, and the likelihood that nodes are accidentally removed from \(Y_{R}\) due to poor sampling is small.
### Aggregated Relay Information
The greedy algorithm identifies a sequence of sets of nodes that continuously shrink because it is always the node contributing the least to \(I_{R}\) that is removed next. Consequently, every time a node is removed, we can also quantify the loss of information for that particular node \(n\) as the difference in \(I_{R}\) between the larger set containing the node (\(\mathbb{Y}_{R}\cup n\)) and smaller set without it (\(\mathbb{Y}_{R}\)):
\[\Delta I(n)=I_{R}(\mathbb{Y}_{R}\cup n)-I_{R}(\mathbb{Y}_{R})\;. \tag{7}\]
Interestingly, when this process arrives at a set of nodes that taken together is essential in relaying the information, it can happen that the removal of _any_ of the nodes of the set causes the remaining neurons to have \(I_{R}=0\). Information in such an essential set can be seen to be _encrypted_, to the point where no node can be removed without losing all of the information (Bohm _et al._, 2022). However, this creates a situation in which the last nodes, when removed, appear to not contribute any information, even though they are essential. Thus, we quantify the amount that each node contributes to the relay information in terms of the sum of all \(\Delta I(n)\) over all previously removed nodes as
\[I_{A}(n)=\sum_{i=1}^{n}\Delta I(i)\;. \tag{8}\]
Using the information loss due to the removal of a node from the essential set, we can also quantify the _essentiality_ of a neuron in terms of the loss of information the removal of node \(n\) causes when it is removed from the remaining set of nodes. The essentiality of a single node can be computed using Equation (7) where \(n\) is the node being removed from the full set of nodes. Thus, if a neuron is meaningless or redundant, its essentiality \(\Delta I(n)\) will vanish.
## 3 Results
### Identification of Information Relays
To determine if the proposed metric and optimization method correctly identify the nodes that relay information from the inputs to the outputs, we trained two kinds of networks. A standard ANN with 20 hidden nodes was trained to correctly identify all ten numerals. As a control, ten sub-networks with two hidden nodes were trained on a single numeral each. From the ten smaller networks, a full network was composed (see Figure 1) that can perform the same task as the network trained on all numerals at the same time.
Figure 4 shows the mean accuracy of recognizing each of the different digits as a function of training epoch, for the full as well as the composite network. Note that the full network only needed 43 epochs to reach 96% accuracy, while the training of the smaller models took significantly longer. The full model was trained until it reached an accuracy of 0.96; the smaller models were trained until they reached an accuracy of 0.98. Smaller networks could easily be trained to achieve this high 98% accuracy, while training the full network is usually limited to 96%. In order to observe networks performing as optimally as possible, and to maximize the information between inputs and outputs, networks were trained until they reached those practical limits (Chapman _et al._, 2013).
Because in the composite network the two hidden neurons of each sub-network are guaranteed to serve as relays for the relevant information, we can use this network as a positive control to test whether our algorithm correctly detects relay information, and whether neurons carrying non-overlapping information (each of the hidden neuron sets only carries the information about one specific numeral) are either more or less vulnerable to knockout. This does not imply that the hidden neurons that pertain to a particular numeral cannot relay information about another numeral. After all, hidden nodes trained to recognize the numeral 1, for example, might still correlate with nodes trained to recognize numeral 7 due to the similarity between those images.
In order to test whether the greedy algorithm finds the correct minimal informative subset in the full model, we performed an exhaustive search of all \(2^{N}-1\) (with \(N=20\)) bi-partitions of the hidden nodes \(Y\) to find the minimal set \(Y_{R}\). We then compared the result of the exhaustive search with the candidate set resulting from the shrinking subset aggregation algorithm. This un-branched version of the algorithm only needs \(\frac{N(N+1)}{2}\) computations, reducing the computational complexity from exponential to quadratic.
Figure 5 shows that different partitions relay very different amounts of information about the particular output. In general, the larger the set \(\mathbb{Y}_{R}\), the more information it represents, but we also see that the highest information found within sets of a particular size is always higher than the maximal information found amongst all sets that are smaller (as proved in Appendix A, with the caveat of redundant sets). The shrinking subset aggregation algorithm exploits this observation that a smaller set never has more information than its superset, and should thus be capable of identifying the subsets \(\mathbb{Y}_{R}\) (and consequently also \(\mathbb{Y}_{0}\)) with the highest information content for all sets of the same size, but without the complete enumeration of all possible sets.
Figure 4: Training accuracy as a function of training epoch. (**A**) full model (top panel). The accuracy to predict each numeral is indicated with lines of different colors (see legend). Accuracy on the training set is shown as solid lines while accuracy on the test is indicated by dotted lines. The average performance classifying all numbers is shown in black; (**B**) accuracy of each of the ten sub-network models used to create the composite model as a function of training epoch. Colors indicate the accuracy for detecting an individual numeral. The endpoint of the training is highlighted with a dot; the same time point but using test data is indicated by an x. Training other networks had marginally different outcomes (data not shown).
We find that fewer than \(0.9\%\) of all sets have equal or more information than the set identified by the greedy algorithm. As discussed earlier, the failure of the greedy algorithm to correctly identify the most informative set can be attributed to noise in the entropy estimate due to the finite sample size, as well as to the presence of redundant sets with identical information content.
We now investigate whether the greedy algorithm properly identifies the relevant subsets that are critical in relaying the information from inputs to outputs, that is, whether the information they carry is indeed used to predict the depicted numeral. We define the _importance_ of a node as the sum of all information loss that this node conveyed before it was removed (aggregated relay information, see Methods). We also define the _essentiality_ of node \(n\) as the amount of relay information lost when moving that node from the minimal set \(Y_{R}\) to \(Y_{0}\) (see Equation (7)). Because this measure of essentiality only considers the effect of removing single nodes, it can be inaccurate if a node is essential only when another node (for example a redundant one) is also removed. However, since the relays in the composite network are so small (two nodes), removing any one of them causes a large drop of information. This can be seen in Figure 6B, where nodes identified as relays are also highly essential.
Figure 6A shows that both the importance analysis (via the aggregated particular relay information) and the essentiality analysis correctly identify the nodes that relay the information from inputs to outputs in the composite model. Aside from the sampling noise, each pair of hidden nodes that were trained to be the relays are correctly identified as highly informative (see Figure 6).
Figure 5: Particular relay information about each numeral for all possible bi-partitions (black dots) as a function of the set sizes \(|\mathbb{Y}_{R}|\). The top ten panels show particular relay information for the full model, while the bottom ten panels show the same for the composite model. Each panel shows the relay information about a different numeral in the MNIST task, indicated by the index of the panel. The red line corresponds to the set identified by the shrinking subset aggregation algorithm. Fewer than \(0.9\%\) of all subsets have a higher information content than the one identified by the algorithm.
Training the full network via backpropagation is not expected to create modules of hidden nodes that each only relay information about one specific numeral. Indeed, we find information to be relayed in an unstructured fashion in this network (see Figure 7A). Interestingly, nodes that are positively identified as relays are not necessarily essential, suggesting that many nodes contain redundant information (see Figure 7B). This further supports our previous findings that backpropagation smears or distributes function across all nodes, rather than isolating functions into structured modules (Hintze _et al._, 2018; Kirkpatrick and Hintze, 2019). The results from Figure 7B also suggest that using the essentiality of single nodes does not properly identify the informational structure of the network.
### Information Relays Are Critical for the Function of the Neural Network
To verify that the sets \(\mathbb{Y}_{R}\) with high information are indeed relaying information from the inputs to the outputs, we can study the effect of knockouts on those nodes. Because we expect a correlation between knockout effect size (the sensitivity of the node to perturbation) and the size of the informative set, care must be taken when interpreting the correlation between relay information and knockout effect size (sensitivity). Smaller sets can relay less information and have a potentially smaller effect when knocked out compared to larger sets. Thus, set size confounds the correlation between knockout effect and the amount of information relayed by the same set. We performed a multiple linear regression to test how much the knockout effect (treated as the dependent variable) is explained either by the set size or the amount of information relayed (independent variable). Figure 8 shows the regression coefficients of that analysis.
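A sketch of such a regression, assuming hypothetical per-set arrays `knockout_effect`, `set_size` and `relay_info`, could look as follows (z-scoring the predictors so the two coefficients are comparable):

```python
import numpy as np

def regression_coefficients(knockout_effect, set_size, relay_info):
    """Multiple linear regression of knockout effect K on set size |Y_R|
    and particular relay information I_R. Returns the two standardized
    regression coefficients (size, relay information)."""
    X = np.column_stack([set_size, relay_info]).astype(float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)       # z-score the predictors
    X = np.column_stack([np.ones(len(X)), X])      # add an intercept column
    beta, *_ = np.linalg.lstsq(X, np.asarray(knockout_effect, float), rcond=None)
    return beta[1], beta[2]
```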
Figure 6: Aggregated relay information and essentiality in the composite model. (**A**) aggregated particular information loss \(\Delta I_{R}(n)\) (Equation (8)) for all 20 nodes in the hidden layer (\(x\)-axis) and the ten different numeral classes (\(y\)-axis) shown in grayscale (brighter shades indicate higher loss of information); (**B**) node essentiality (Equation (7)) for each hidden neuron and numeral. Bright squares indicate essential nodes, while black squares would indicate redundant or meaningless nodes. The red dot (node 16, numeral 1) points to a neuron that appears to relay information (**A**) but is entirely redundant and non-essential (red dot in (**B**)).
Relay information explains at least 75% (\(r^{2}>0.75\)) of the variance of the knockout effect for the composite model and at least 45% (\(r^{2}>0.45\)) of the variance of the knockout effect for the full model. We can thus conclude that, when assuming a linear relationship between either set size or relay information and knockout effect, the influence of relay information on knockout effect is significantly stronger than the influence of set size (\(F>1.5\times 10^{5}\) in an F-test).
Figure 8 shows that the knockout effect is better explained by the amount of particular relay information about that node than the set size \(|\mathbb{Y}_{R}|\). This shows also that, as expected, set size is indeed confounding this relation. We further find that in the composite network the relationship between particular relay information and knockout effect is stronger compared to the full network. The weaker relation between knockout effect and relay information is most likely due to the information being distributed more broadly over many nodes, compared to the composite model where the information is forced to reside in only two relay nodes.
## 4 Discussion
We introduced a new information-theoretic concept that we believe will prove to be useful in the analysis of information flow in natural and artificial brains: the "relay information". Relay information quantifies the amount of information within a set of nodes inside a communication channel that passes through this set, and not through other nodes within the same channel. The particular relay information can be used to identify which nodes in a hidden layer of a neural network are responsible for which particular classification function. We constructed a greedy algorithm that identifies the minimal informative set of nodes that carry the particular relay information, and tested it on the MNIST hand-written numeral classification task using a regular neural network, as well as a control network in which we know, by construction, the function of each hidden node (see Figure 1). We further showed via a knockout analysis that the sets of neurons identified as carrying the relay information are indeed functional, because knocking out those nodes abrogates the classification accuracy for that particular numeral.

Figure 8: Regression coefficients of the multiple linear regression analysis between knockout effect \(K\) and set size \(|\mathbb{Y}_{R}|\) (red crosses), and knockout effect \(K\) and particular relay information \(I_{R}(i)\) (black crosses), as a function of numeral \(i\). Lines are meant to guide the eye. (**A**) full model; (**B**) composite model.

Figure 7: Aggregated relay information and essentiality in the full model. (**A**) aggregated relay information for each node and every numeral class for the full network; (**B**) essentiality. Methods, axes, and grayscales as in Figure 6.
The identification of information relays, and thus discovering the computational modules that relay information, can only be a first step in a more comprehensive analysis of brain function. Here, we focused on testing the method, and showed using a positive control (the composite network) that the identified relay sets are indeed correlated to function. We also found that the full network, trained on all image classes at the same time, does not display a well-differentiated modular structure. Instead, information is distributed haphazardly across the network, and if we were to identify functional modules, they would be highly overlapping. In other words, the ANNs that we trained here do not seem to have a modular structure in information space.
Because a defined, modular, informational structure appears to be key to understanding a number of important properties of neural networks (such as catastrophic forgetting (McCloskey and Cohen, 1989; Kirkpatrick _et al._, 2017; Kemker _et al._, 2018) or learning (Ellefsen _et al._, 2015)), understanding what design decisions give rise to more (or less) modular networks is an important first step. We are now better equipped to study the role of information smearing and modularity in its effect on fooling, generalization, catastrophic forgetting, or latent space variables, and look forward to exploring these topics in the future.
The concepts and methods we introduced are general and can be applied to any task where a network (be it neural or genetic) performs its function by taking inputs, computing outputs, and then using those outputs for prediction. In the case of a natural neural network, however, because it is continuously processing, an additional temporal binning has to be performed. This, and measuring the states of _all_ neurons in the first place, will make applying the concept of relay information challenging, to say the least. In the future, it would be interesting to study if this method also applies to, for example, time series classification, recurrent neural networks, convolutional layers, or even generative tasks.
Another concern is the scaling of the computational complexity of the algorithm to detect information relays with the number of nodes in the hidden layer. Currently, using the greedy algorithm and all 60,000 training images from the MNIST data set, and applying it to a full network with 20 hidden nodes, takes about 30 s on a 3.5 GHz desktop computer (for all 10 numerals together). Performing the same analysis but computing the exact critical set (testing all \(2^{N}\) sets) takes about 24 h on the same hardware. Because the greedy algorithm has a computational complexity of \(O(N(N-1))\) while the full enumeration has a computational complexity of \(O(2^{N})\), we can conjecture that a network of 1000 nodes can be analyzed within the same 24 h needed for a network of size \(N=20\).
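For concreteness, the greedy removal loop could be sketched as follows, reusing the `mutual_information` helper from above; the fixed stopping threshold is our own simplification of the stopping criterion described in Methods.

```python
def greedy_minimal_set(hidden, labels, threshold=1e-3):
    """Greedy shrinking: repeatedly drop the node whose removal costs the
    least information about `labels`, stopping once any further removal
    would lose more than `threshold` bits. Requires O(N^2)
    mutual-information evaluations for N nodes."""
    current = list(range(hidden.shape[1]))
    while len(current) > 1:
        base = mutual_information(hidden[:, current], labels)
        loss, node = min(
            (base - mutual_information(hidden[:, [k for k in current if k != n]], labels), n)
            for n in current)
        if loss > threshold:
            break
        current.remove(node)
    return current
```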
In this work, we only studied one particular optimizer to train the neural network (Adam), one loss function (mean squared error), and the threshold functions hyperbolic tangent and argmax. We conjecture that our method applies to all other variants of deep learning. However, we also conjecture that the way in which information is distributed across the network will depend on the method and parameters of the optimization procedure, and we will test this dependence in future work. Finally, by testing different coarse-grainings of neuronal firing levels, the method should be able to identify relay neurons and thus functional modules in biological brains, and thus help in studying information flow in functioning brains.
In this work, we found that the greedy algorithm correctly identifies the minimal informative set in almost all cases. However, we expect that the failure rate depends on the task being studied, the data set size, as well as the amount of redundancy among neurons. In networks with significant redundancy, we can imagine that the algorithm fails significantly more often, in which case a branching algorithm may have to be designed, which would carry a significant complexity cost.
**Author contributions**
A.H. implemented all computational analysis and methods, A.H. and C.A. designed the experiments and devised the new methods, and A.H. and C.A. wrote the manuscript. All authors have read and agreed to the published version of the manuscript.
**Funding information**
This research was supported by the Uppsala Multidisciplinary Center for Advanced Computational Science SNIC 2020-15-48, and the National Science Foundation No. DBI-0939454 BEACON Center for the Study of Evolution in Action.
**Data availability**
Code for the computational experiments and the data analysis can be found at
[https://github.com/Hintzelab/Entropy-Detecting-Information-Relays-in-Deep-Neural-Networks](https://github.com/Hintzelab/Entropy-Detecting-Information-Relays-in-Deep-Neural-Networks)
DOI:10.5281/zenodo.7660142
**Acknowledgements**
We thank Clifford Bohm for extensive discussions. This research was supported by Uppsala Multidisciplinary Center for Advanced Computational Science SNIC 2020-15-48, and the National Science Foundation No. DBI-0939454 BEACON Center for the Study of Evolution in Action.
## Appendix A Proof of non-deceptive removal of nodes
Here, we show that, as long as information is not redundantly encoded, it is possible to remove nodes one by one in a greedy fashion so that the minimal information reduction by single-node removal is not deceptive. In a deceptive removal, removing a pair of nodes reduces the information by a smaller amount than each of the individuals would have removed.
Say that the information to predict feature \(X_{\rm out}\) is stored in \(n\) variables \(Y_{1}\cdots Y_{n}\). This information is
\[I(X_{\rm out};Y_{1}\cdots Y_{n})\;. \tag{9}\]
The general rule to study node removal is the identity
\[I(X_{\rm out};Y_{1}\cdots Y_{n})=I(X_{\rm out};Y_{1}\cdots Y_{n-1})+H(X_{ \rm out};Y_{n}|Y_{1}\cdots Y_{n-1})\;. \tag{10}\]
We can easily convince ourselves of this, by imagining \(Y=Y_{1}\cdots Y_{n-1}\) and \(Y_{n}=Z\); then, this equation is just
\[I(X;YZ)=I(X;Y)+H(X;Z|Y)\;, \tag{11}\]
which is easily seen by writing the Venn diagram between \(X\), \(Y\), and \(Z\).
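As a quick numerical sanity check of identity (11) (recall that \(H(X;Z|Y)\) here denotes the conditional mutual information), one can estimate both sides from samples of a toy distribution of our own choosing, reusing the plug-in `entropy` estimator sketched earlier:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
y = rng.integers(0, 2, n)
z = rng.integers(0, 2, n)
x = (y ^ z) ^ (rng.random(n) < 0.1).astype(int)   # X depends on Y and Z, 10% noise

def I(a, b):
    """Mutual information via plug-in entropies."""
    return entropy(a) + entropy(b) - entropy(np.column_stack([a, b]))

lhs = I(x, np.column_stack([y, z]))               # I(X;YZ)
cond = (entropy(np.column_stack([x, y])) + entropy(np.column_stack([z, y]))
        - entropy(y) - entropy(np.column_stack([x, y, z])))  # H(X;Z|Y)
print(lhs, I(x, y) + cond)                        # both sides agree (~0.53 bits)
```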
Let us study the simplest case of three nodes. The three possible information reductions are
\[\Delta I_{1} = H(X_{\rm out};Y_{1}|Y_{2}Y_{3})\;, \tag{12}\]
\[\Delta I_{2} = H(X_{\rm out};Y_{2}|Y_{1}Y_{3})\;, \tag{13}\]
\[\Delta I_{3} = H(X_{\rm out};Y_{3}|Y_{1}Y_{2})\;. \tag{14}\]
We will now prove that, if \(\Delta I_{3}<\Delta I_{1}\) and _at the same time_ \(\Delta I_{3}<\Delta I_{2}\) (implying that node 3 should be removed first), then it is _not_ possible that the information reduction due to the removal of nodes 1 and 2 at the same time is smaller than the information loss coming from the removal of node 3 (i.e., that \(\Delta I_{12}<\Delta I_{3}\) is impossible). If the latter were possible, we should have removed nodes 1 and 2 at the same time instead of node 3 (making the removal of node 3 deceptive).
Let us first write down \(\Delta I_{12}\). Since
\[I(X_{\rm out};Y_{1}Y_{2}Y_{3})=I(X_{\rm out};Y_{3})+H(X_{\rm out };Y_{1}Y_{2}|Y_{3})\;, \tag{15}\]
we know that
\[\Delta I_{12}=H(X_{\rm out};Y_{1}Y_{2}|Y_{3})\;. \tag{16}\]
We can rewrite this as
\[H(X_{\rm out};Y_{1}Y_{2}|Y_{3})=H(X_{\rm out};Y_{2}|Y_{3})+H(X _{\rm out};Y_{1}|Y_{2}Y_{3})\;. \tag{17}\]
This is the same rule as (11), just conditioned on a variable (all information-theoretic equalities remain true if the left-hand side and the right-hand side are conditioned on the same variable). Equation (17) implies that
\[\Delta I_{12}=H(X_{\text{out}};Y_{2}|Y_{3})+\Delta I_{1}\;. \tag{18}\]
Now since \(H(X_{\text{out}};Y_{2}|Y_{3})\geq 0\), we know that \(\Delta I_{12}\geq\Delta I_{1}\). However, since \(\Delta I_{1}>\Delta I_{3}\) by assumption, it follows immediately that
\[\Delta I_{12}>\Delta I_{3} \tag{19}\]
contradicting the claim that it is possible that \(\Delta I_{12}<\Delta I_{3}\).
Clearly, the same argument will apply if we ask whether larger groups are removed first: they can never remove less information than the smallest information removed by a single node in that group.
If information is redundantly encoded, the greedy algorithm can fail. Suppose two nodes are copies of each other, \(Y_{1}=Y_{2}\), making them perfectly correlated: they carry exactly the same information about \(X_{\text{out}}\). In that case, we can remove either of the two nodes, and it will not change the information, that is, \(\Delta I_{1}=\Delta I_{2}=0\):
\[I(X_{\text{out}};Y_{1}Y_{2}Y_{3})=I(X_{\text{out}};Y_{1}Y_{3})=I(X_{\text{out} };Y_{2}Y_{3})\;. \tag{20}\]
However, once we have removed one (say, \(Y_{1}\)), removing \(Y_{2}\) potentially removes information, as
\[I(X_{\text{out}};Y_{2}Y_{3})=I(X_{\text{out}};Y_{3})+H(X_{\text{ out}};Y_{2}|Y_{3})\;. \tag{21}\]
Now that \(Y_{1}\) is removed, the redundancy is gone, and \(H(X_{\text{out}};Y_{2}|Y_{3})\) could be large even though \(\Delta I_{1}=H(X_{\text{out}};Y_{1}|Y_{2}Y_{3})=0\). This failure of the greedy algorithm is at the origin of the discrepancies between the true minimal set of nodes (obtained by exhaustive enumeration) and the set identified by the greedy algorithm, but the failure is clearly rare.
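This failure mode is easy to reproduce numerically with the estimators sketched earlier (the toy variables are again our own illustrative choice):

```python
rng = np.random.default_rng(1)
x  = rng.integers(0, 2, 50_000)
y1 = x.copy()                            # Y1 and Y2 are perfect copies of X
y2 = x.copy()
y3 = rng.integers(0, 2, 50_000)          # Y3 is pure noise
print(I(x, np.column_stack([y2, y3])))   # ~1 bit: removing Y1 alone costs nothing
print(I(x, np.column_stack([y1, y3])))   # ~1 bit: removing Y2 alone costs nothing
print(I(x, y3))                          # ~0 bits: removing both loses everything
```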
## Appendix B Sampling Large State Spaces
In this work, we identify relay information by calculating shared conditional entropies such as those shown in Equation (1). Calculating those entropies, in turn, relies on estimating the entropy of the hidden layer neurons \(H(Y)\), which can become cumbersome if the number of neurons in the hidden layer is large. To calculate a quantity such as \(I(X_{\text{in}};X_{\text{out}};Y)\) (the center of the diagram in Figure 2), we must estimate probabilities such as \(p(y)\), the probability to find the hidden layer in any of its \(2^{20}\) states, assuming a binary state for each node after binning. To obtain an accurate maximum-likelihood estimate of \(p(y)\), the sample size has to be large. In order to estimate entropies, for example, a good rule of thumb is that the finite sample size bias (Basharin, 1959)
\[\Delta H=\frac{M-1}{2N\ln k}\ll 1\;, \tag{22}\]
where \(M\) is the number of states in \(Y\), \(N\) is the sample size, and \(k\) is the dimension of the alphabet (\(k=2\) for a binary random variable). Evidently, for \(M=2^{20}\) and \(N=60,000\), this condition is not fulfilled. However, because the output entropy \(H(X_{\text{out}})\) is constrained to be \(\log_{2}(10)\approx 3.32\) bits, a trained ANN should never end up with a hidden layer entropy near \(\log M\), but rather have an entropy comparable to that of the output state. In this way, the channel loss \(H(Y|X_{\text{out}})\) is minimized.
To test whether hidden layer entropy estimates are adequately sampled, we measured the entropy of the hidden layer when sampling over the entire image space. If \(Y\) were uniformly distributed across the sampled states, we would expect \(H(Y)\approx\log_{2}(N)\approx 15.86\). Instead, we found the actual entropy \(H_{\text{act}}(Y)\approx 3.319\), indicating that the hidden layer probability distribution is highly skewed. Accordingly, the effective number of states \(M_{\text{eff}}=2^{H_{\text{act}}(Y)}\approx 9.98\), which easily satisfies condition (22).
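The numbers quoted in this appendix can be reproduced in a few lines (a sketch; the values plugged in are those stated in the text):

```python
import numpy as np

def finite_sample_bias(M, N, k=2):
    """Finite sample size bias from Equation (22): (M - 1) / (2 N ln k)."""
    return (M - 1) / (2 * N * np.log(k))

print(finite_sample_bias(M=2**20, N=60_000))    # ~12.6 bits: badly undersampled
M_eff = 2 ** 3.319                               # effective states from H_act(Y)
print(M_eff)                                     # ~9.98
print(finite_sample_bias(M=M_eff, N=60_000))    # ~1e-4: condition (22) holds
```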
We further took the original (continuous-value) hidden states derived from the 60,000 training images and clustered them using \(k\)-means clustering. We find the optimal number of clusters, identified using the elbow method, to be 10 or close to 10 (see Figure 9A), which of course coincides with the number of image classes.
We also performed a principal component analysis (PCA) on the original (not binned) hidden states, and plot the results for the first two components, color-coding each hidden state by its affiliated class. Again, for both full and composite networks, we find the hidden states to cluster according to their assigned class (see Figure 9B,C). All of these measurements confirm that the effective space of the hidden states is significantly smaller than \(2^{20}\), because training causes the hidden states to converge to the attractors needed to classify the images into the ten classes, giving rise to \(H_{\text{act}}(Y)\approx\log_{2}(10)\).
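A sketch of these two checks with scikit-learn, assuming `hidden` holds the \(60{,}000\times 20\) array of continuous hidden activations and `labels` the associated numeral classes:

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Elbow method: inspect where the marginal drop in inertia flattens (~10 here).
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(hidden).inertia_
            for k in range(1, 21)]

# First two principal components of the raw hidden states; scatter-plotting
# coords[:, 0] vs coords[:, 1] color-coded by `labels` reproduces the
# class-wise clustering of Figure 9B,C.
coords = PCA(n_components=2).fit_transform(hidden)
```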
|
2305.15141 | From Tempered to Benign Overfitting in ReLU Neural Networks | Overparameterized neural networks (NNs) are observed to generalize well even when trained to perfectly fit noisy data. This phenomenon motivated a large body of work on "benign overfitting", where interpolating predictors achieve near-optimal performance. Recently, it was conjectured and empirically observed that the behavior of NNs is often better described as "tempered overfitting", where the performance is non-optimal yet also non-trivial, and degrades as a function of the noise level. However, a theoretical justification of this claim for non-linear NNs has been lacking so far. In this work, we provide several results that aim at bridging these complementing views. We study a simple classification setting with 2-layer ReLU NNs, and prove that under various assumptions, the type of overfitting transitions from tempered in the extreme case of one-dimensional data, to benign in high dimensions. Thus, we show that the input dimension has a crucial role on the type of overfitting in this setting, which we also validate empirically for intermediate dimensions. Overall, our results shed light on the intricate connections between the dimension, sample size, architecture and training algorithm on the one hand, and the type of resulting overfitting on the other hand. | Guy Kornowski, Gilad Yehudai, Ohad Shamir | 2023-05-24T13:36:06Z | http://arxiv.org/abs/2305.15141v3 |

# From Tempered to Benign Overfitting in ReLU Neural Networks
###### Abstract
Overparameterized neural networks (NNs) are observed to generalize well even when trained to perfectly fit noisy data. This phenomenon motivated a large body of work on "benign overfitting", where interpolating predictors achieve near-optimal performance. Recently, it was conjectured and empirically observed that the behavior of NNs is often better described as "tempered overfitting", where the performance is non-optimal yet also non-trivial, and degrades as a function of the noise level. However, a theoretical justification of this claim for non-linear NNs has been lacking so far. In this work, we provide several results that aim at bridging these complementing views. We study a simple classification setting with 2-layer ReLU NNs, and prove that under various assumptions, the type of overfitting transitions from tempered in the extreme case of one-dimensional data, to benign in high dimensions. Thus, we show that the input dimension has a crucial role on the type of overfitting in this setting, which we also validate empirically for intermediate dimensions. Overall, our results shed light on the intricate connections between the dimension, sample size, architecture and training algorithm on the one hand, and the type of resulting overfitting on the other hand.
## 1 Introduction
Overparameterized neural networks (NNs) are observed to generalize well even when trained to perfectly fit noisy data. Although quite standard in deep learning, the so-called interpolation learning regime challenges classical statistical wisdom regarding overfitting, and has attracted much attention in recent years.
In particular, the phenomenon commonly referred to as "benign overfitting" (Bartlett et al., 2020) describes a situation in which a learning method overfits, in the sense that it achieves zero training error over inherently noisy samples, yet it is able to achieve near-optimal accuracy with respect to the underlying distribution. So far, much of the theoretical work studying this phenomenon focused on linear (or kernel) regression problems using the squared loss, with some works extending this to classification problems. However, there is naturally much interest in gradually pushing this research towards neural networks. For example, Frei et al. (2023) recently showed that the implicit bias of gradient-based training algorithms towards margin maximization, as established in a series of works throughout the past few years, leads to benign overfitting of leaky ReLU networks in high dimensions (see discussion of related work below).
Recently, Mallinar et al. (2022) suggested a more nuanced view, coining the notion of "tempered" overfitting. An interpolating learning method is said to overfit in a tempered manner if its error (with respect to the underlying distribution, and as the training sample size increases) is bounded away from the optimal possible error yet is also not "catastrophic", e.g., better than a random guess. To be more concrete, consider a binary classification setting (with labels in \(\{\pm 1\}\)), and a data distribution \(\mathcal{D}\), where the output labels correspond to some ground truth function \(f^{*}\) belonging to the predictor class of interest, corrupted with independent label noise at level \(p\in(0,\frac{1}{2})\) (i.e., given a sampled point \(\mathbf{x}\), its associated label \(y\) equals \(\text{sign}(f^{*}(\mathbf{x}))\) with probability \(1-p\), and \(-\text{sign}(f^{*}(\mathbf{x}))\) with probability \(p\)). Suppose furthermore that the training method is such that the learned predictor achieves zero error on the (inherently noisy) training data. Finally, let \(L_{\mathrm{cl}}\) denote the "clean" test error, namely \(L_{\mathrm{cl}}(N)=\Pr_{\mathbf{x}\sim\mathcal{D}}(N(\mathbf{x})\cdot f^{*}( \mathbf{x})\leq 0)\) for some predictor \(N\) (omitting the contribution of the label noise). In this setting, benign, catastrophic or tempered overfitting corresponds to a situation where the clean test error \(L_{\mathrm{cl}}\) of the learned predictor approaches \(0\), \(\frac{1}{2}\), or some value in \((0,\frac{1}{2})\) respectively. In the latter case, the clean test error typically scales with the amount of noise \(p\). For example, for the one-nearest-neighbor algorithm, it is well-known that the test error asymptotically converges to \(2p(1-p)\) in our setting (Cover and Hart, 1967), which translates to a clean test error of \(p\). Mallinar et al. (2022) show how a similar behavior (with the error scaling linearly with the noise level) occurs for kernel regression, and provided empirical evidence that the same holds for neural networks.
The starting point of our paper is an intriguing experiment from that paper (Figure 5(b)), where they considered the following extremely simple data distribution: The inputs are drawn uniformly at random from the unit sphere, and \(f^{*}\) is the constant \(+1\) function. This is perhaps the simplest possible setting where benign or tempered overfitting in binary classification may be studied. In this setting, the authors show empirically that a three-layer vanilla neural network exhibits tempered overfitting, with the clean test error almost linear in the label noise level \(p\). Notably, the experiment used an input dimension of \(10\), which is rather small. This naturally leads to the question of whether we can rigorously understand the overfitting behavior of neural networks in this simple setup, and how problem parameters such as the input dimension, number of samples and the architecture affect the type of overfitting.
Our contributions: In this work we take a step towards rigorously understanding the regimes under which simple, 2-layer ReLU neural networks exhibit different types of overfitting (i.e. benign, catastrophic or tempered), depending on the problem parameters, for the simple and natural data distribution described in the previous paragraph. We focus on interpolating neural networks trained with exponentially-tailed losses, which are well-known to converge to KKT points of a max-margin problem (see Section 2 for more details). At a high level, our main conclusion is that for such networks, the input dimension plays a crucial role, with the type of overfitting gradually transforming from tempered to benign as the dimension increases. In contrast, most of our results are not sensitive to the network's width, as long as it is sufficient to interpolate the training data. In a bit more detail, our contributions can be summarized as follows:
* **Tempered overfitting in one dimension (Theorem 3.1 and Theorem 3.2).** We prove that in one dimension (when the inputs are uniform over the unit interval), the resulting overfitting is provably tempered, with a clean test error scaling as \(\Theta(\mathrm{poly}(p))\). Moreover, under the stronger assumption that the algorithm converges to a local minimum of the max-margin problem, we show that the clean test error provably scales linearly with \(p\). As far as we know, this is the first result provably establishing tempered overfitting for non-linear neural networks.
* **Benign overfitting in high dimensions (Theorem 4.1 and Theorem 4.3).** We prove that when the inputs are sampled uniformly from the unit sphere (and with the dimension scaling polynomially with the sample
size), the resulting overfitting will generally be benign. In particular, we show that convergence to a max-margin predictor (up to some constant factor) implies that the clean test error decays exponentially fast to 0 with respect to the dimension. Furthermore, under an assumption on the scaling of the noise level and the network width, we obtain the same result for any KKT point of the max-margin problem.
* **The role of bias terms and catastrophic overfitting (Section 4.2).** The proof of the results mentioned earlier crucially relied on the existence of bias parameters in the network architecture. Thus, we further study the case of a bias-less network, and prove that without such biases, neural networks may exhibit catastrophic overfitting, and that in general they do not overfit benignly without further assumptions. We consider this an interesting illustration of how catastrophic overfitting is possible in neural networks, even in our simple setup.
* **Empirical study for intermediate dimensions (Section 5).** Following our theoretical results, we attempt at empirically bridging the gap between the one-dimensional and high-dimensional settings. In particular, it appears that the tempered and benign overfitting behaviors extend to a wider regime than what our theoretical results formally cover, and that the overfitting profile gradually shifts from tempered to benign as the dimension increases. This substantially extends the prior empirical observation due to Mallinar et al. (2022, Figure 6b), which exhibited tempered overfitting in input dimension \(10\).
The full proofs of our results are provided in appendices A and B.
### Related work
Following the empirical observation that modern deep learning methods can perfectly fit the training data while still performing well on test data (Zhang et al., 2017), many works have tried to provide theoretical explanations of this phenomenon. By now the literature on this topic is quite large, and we will only address here the works most relevant to this paper (for a broader overview, see for example the surveys Belkin 2021, Bartlett et al. 2021, Vardi 2022).
There are many works studying regression settings in which interpolating methods, especially linear and kernel methods, succeed at obtaining optimal or near-optimal performance (Belkin et al., 2018, 2019, 2020, Mei and Montanari, 2022, Hastie et al., 2022). In particular, it is interesting to compare our results to those of Rakhlin and Zhai (2019), who proved that the minimal norm interpolating (i.e. "ridgeless") Laplace kernel method is not consistent in any fixed dimension, while it is consistent whenever the dimension scales with the sample size (under suitable assumptions) (Liang and Rakhlin, 2020). Our results highlight that the same phenomenon occurs for ReLU networks.
There is by now also a large body of work on settings under which benign overfitting (and analogous definitions thereof) occurs (Nagarajan and Kolter, 2019, Bartlett et al., 2020, Negrea et al., 2020, Nakkiran and Bansal, 2020, Yang et al., 2021, Koehler et al., 2021, Bartlett and Long, 2021, Muthukumar et al., 2021, Bachmann et al., 2021, Zhou et al., 2023, Shamir, 2023). In classification settings, as argued by Muthukumar et al. (2021), many existing works on benign overfitting analyze settings in which max-margin predictors can be computed (or approximated), and suffice to ensure benign behavior (Poggio and Liao, 2019, Montanari et al., 2019, Thrampoulidis et al., 2020, Wang and Thrampoulidis, 2021, Wang et al., 2021, Cao et al., 2021, Hu et al., 2022, McRae et al., 2022, Liang and Recht, 2023). It is insightful to compare this to our main proof strategy, in which we analyze the max-margin problem in the case of NNs (which no longer admits a closed form). Frei et al. (2022) showed that NNs with smoothed leaky ReLU activations overfit benignly when the data comes from a well-separated mixture distribution, a result which was recently generalized to ReLU activations (Xu and Gu, 2023). Similarly, Cao et al. (2022) proved that convolutional NNs with
smoothed activations overfit benignly when the data is distributed according to a high dimensional Gaussian with noisy labels, which was subsequently generalized to ReLU CNNs (Kou et al., 2023). We note that these distributions are reminiscent of the setting we study in Section 4 for (non-convolutional) ReLU NNs. Chatterji and Long (2023) studied benign overfitting for deep linear networks.
As mentioned in the introduction, Mallinar et al. (2022) formally introduced the notion of tempered overfitting and suggested studying it in the context of NNs. Subsequently, Manoj and Srebro (2023) proved that the minimum description length learning rule exhibits tempered overfitting, though notably this learning rule does not explicitly relate to NNs. As previously mentioned, we are not aware of existing works that prove tempered overfitting in the context of NNs.
In a parallel line of work, the implicit bias of gradient-based training algorithms has received much attention (Soudry et al., 2018; Lyu and Li, 2020; Ji and Telgarsky, 2020). Roughly speaking, these results drew the connection between NN training to margin maximization - see Section 2 for a formal reminder. For our first result (Theorem 3.1) we build upon the analysis of Safran et al. (2022) who studied this implicit bias for univariate inputs, though for the sake of bounding the number of linear regions. In our context, Frei et al. (2023) utilized this bias to prove benign overfitting for leaky ReLU NNs in a high dimensional regime.
## 2 Preliminaries
Notation. We use bold-faced font to denote vectors, e.g. \(\mathbf{x}\in\mathbb{R}^{d}\), and denote by \(\|\mathbf{x}\|\) the Euclidean norm. We use \(c,c^{\prime},\widetilde{c},C>0\) etc. to denote absolute constants whose exact value can change throughout the proofs. We denote by \([n]:=\{1,\ldots,n\}\), by \(\mathbbm{1}\left\{\cdot\right\}\) the indicator function, and by \(\mathbb{S}^{d-1}\subset\mathbb{R}^{d}\) the unit sphere. Given sets \(A\subset B\), we denote by \(\mathrm{Unif}(A)\) the uniform measure over a set \(A\), by \(A^{c}\) the complementary set, and given a function \(f:B\to\mathbb{R}\) we denote its restriction \(f|_{A}:A\to\mathbb{R}\). We let \(I\) be the identity matrix whenever the dimension is clear from context, and for a matrix \(A\) we denote by \(\|A\|\) its spectral norm. We use standard big-O notation, with \(O(\cdot),\Omega(\cdot)\) and \(\Theta(\cdot)\) hiding absolute constants, write \(f\lesssim g\) if \(f=O(g)\), and denote by \(\mathrm{poly}(\cdot)\) polynomial factors.
Setting. We consider a classification task based on _noisy_ training data \(S=(\mathbf{x}_{i},y_{i})_{i=1}^{m}\subset\mathbb{R}^{d}\times\{\pm 1\}\) drawn i.i.d. from an underlying distribution \(\mathcal{D}:=\mathcal{D}_{\mathbf{x}}\times\mathcal{D}_{y}\). Throughout, we assume that the output values \(y\) are constant \(+1\), corrupted with independent label noise at level \(p\) (namely, each \(y_{i}\) is independent of \(\mathbf{x}_{i}\) and satisfies \(\Pr[y_{i}=1]=1-p,\ \Pr[y_{i}=-1]=p\)). Note that the Bayes-optimal predictor in this setting is \(f^{*}\equiv 1\). Given a dataset \(S\), we denote by \(I_{+}:=\{i\in[m]:y_{i}=1\}\) and \(I_{-}:=\{i\in[m]:y_{i}=-1\}\). We study 2-layer ReLU networks:
\[N_{\boldsymbol{\theta}}(\mathbf{x})=\sum_{j=1}^{n}v_{j}\sigma(\mathbf{w}_{j} \cdot\mathbf{x}+b_{j})\;,\]
where \(n\) is the number of neurons or network width, \(\sigma(z):=\max\{0,z\}\) is the ReLU function, \(v_{j},b_{j}\in\mathbb{R},\ \mathbf{w}_{j}\in\mathbb{R}^{d}\) and \(\boldsymbol{\theta}=(v_{j},\mathbf{w}_{j},b_{j})_{j=1}^{n}\in\mathbb{R}^{(d+2)n}\) is a vectorized form of all the parameters. Throughout this work we assume that the trained network classifies the entire dataset correctly, namely \(y_{i}N_{\boldsymbol{\theta}}(\mathbf{x}_{i})>0\) for all \(i\in[m]\). Following Mallinar et al. (2022, Section 2.3) we consider the _clean_ test error \(L_{\mathrm{cl}}(N_{\boldsymbol{\theta}}):=\Pr_{\mathbf{x}\sim\mathcal{D}_{ \mathbf{x}}}[N_{\boldsymbol{\theta}}(\mathbf{x})\leq 0]\), corresponding to the misclassification error with respect to the clean underlying distribution. It is said that \(N_{\boldsymbol{\theta}}\) exhibits benign overfitting if \(L_{\mathrm{cl}}\to 0\) with respect to large enough problem parameters (e.g. \(m,d\to\infty\)), while the overfitting is said to be _catastrophic_ if \(L_{\mathrm{cl}}\to\frac{1}{2}\). Formally, Mallinar et al. (2022) have coined tempered overfitting to describe any case in which \(L_{\mathrm{cl}}\) converges to a value in \((0,\frac{1}{2})\), though a special attention has been given in the literature to cases where \(L_{\mathrm{cl}}\) scales monotonically
or even linearly with the noise level \(p\) (Belkin et al., 2018; Chatterji and Long, 2021; Manoj and Srebro, 2023).
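To make the setting concrete, the following is a minimal numpy sketch (ours, for illustration) of the data distribution, the 2-layer ReLU network, and a Monte Carlo estimate of the clean test error:

```python
import numpy as np

def sample_dataset(m, d, p, rng):
    """Inputs uniform on S^{d-1}; labels +1, flipped to -1 w.p. p."""
    X = rng.standard_normal((m, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # normalize onto the sphere
    y = np.where(rng.random(m) < p, -1.0, 1.0)
    return X, y

def network(V, W, b, X):
    """N_theta(x) = sum_j v_j * relu(w_j . x + b_j), evaluated on a batch X;
    shapes: V (n,), W (n, d), b (n,)."""
    return np.maximum(X @ W.T + b, 0.0) @ V

def clean_test_error(V, W, b, d, rng, n_test=100_000):
    """Monte Carlo estimate of L_cl = Pr_{x ~ Unif(S^{d-1})}[N_theta(x) <= 0]."""
    X, _ = sample_dataset(n_test, d, p=0.0, rng=rng)
    return float(np.mean(network(V, W, b, X) <= 0.0))
```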
Implicit bias. We will now briefly describe the results of Lyu and Li (2020) and Ji and Telgarsky (2020), which show that under our setting, training the network with respect to the logistic or exponential loss converges towards a KKT point of the _margin-maximization problem_. To that end, suppose \(\ell(z)=\log(1+e^{-z})\) or \(\ell(z)=e^{-z}\), and consider the empirical loss \(\hat{\mathcal{L}}(\mathbf{\theta})=\sum_{i=1}^{m}\ell(y_{i}N_{\mathbf{\theta}}(\mathbf{ x}_{i}))\). Suppose that \(\mathbf{\theta}(t)\) evolves according to the gradient flow of the empirical loss, namely \(\frac{d\mathbf{\theta}(t)}{dt}=-\nabla\hat{\mathcal{L}}(\mathbf{\theta}(t))\).1 Note that this corresponds to performing gradient descent over the data with an infinitesimally small step size.
Footnote 1: Note that the ReLU function is not differentiable at \(0\). This can be addressed either by setting \(\sigma^{\prime}(0)\in[0,1]\) as done in practical implementations, or by considering the Clarke subdifferential (Clarke, 1990) (see Lyu and Li, 2020; Dutta et al., 2013 for a discussion on non-smooth optimization). We note that this issue has no effect on our results.
**Theorem 2.1** (Rephrased from Lyu and Li, 2020; Ji and Telgarsky, 2020).: _Under the setting above, if there exists some \(t_{0}\) such that \(\mathbf{\theta}(t_{0})\) satisfies \(\min_{i\in[m]}y_{i}N_{\mathbf{\theta}(t_{0})}(\mathbf{x}_{i})>0\), then \(\frac{\mathbf{\theta}(t)}{\|\mathbf{\theta}(t)\|}\stackrel{{ t\to\infty}}{{\longrightarrow}} \frac{\mathbf{\theta}}{\|\mathbf{\theta}\|}\) for \(\mathbf{\theta}\) which is a KKT point of the margin maximization problem_
\[\min\quad\|\mathbf{\theta}\|^{2}\quad\mathrm{s.t.}\qquad y_{i}N_{\mathbf{\theta}}( \mathbf{x}_{i})\geq 1\quad\forall i\in[m]\enspace. \tag{1}\]
The result above shows that although there are many possible parameters \(\mathbf{\theta}\) that result in a network correctly classifying the training data, the training method has an _implicit bias_ in the sense that it yields a network whose parameters are a KKT point of Problem (1). Recall that \(\mathbf{\theta}\) is called a KKT point of Problem (1) if there exist \(\lambda_{1},\ldots,\lambda_{m}\in\mathbb{R}\) such that the following conditions hold:
\[\mathbf{\theta}=\sum_{i=1}^{m}\lambda_{i}y_{i}\nabla_{\mathbf{\theta}}N_{\mathbf{\theta}}(\mathbf{x}_{i})\qquad\text{(stationarity)} \tag{2}\]
\[\forall i\in[m]:\enspace y_{i}N_{\mathbf{\theta}}(\mathbf{x}_{i})\geq 1\qquad\text{(primal feasibility)} \tag{3}\]
\[\lambda_{1},\ldots,\lambda_{m}\geq 0\qquad\text{(dual feasibility)} \tag{4}\]
\[\forall i\in[m]:\enspace\lambda_{i}(y_{i}N_{\mathbf{\theta}}(\mathbf{x}_{i})-1)=0\qquad\text{(complementary slackness)} \tag{5}\]
It is well-known that global or local optima of Problem (1) are KKT points, but in general, the reverse direction may not be true, even in our context of 2-layer ReLU networks (see Vardi et al., 2022). In this work, our results rely either on convergence to a KKT point (Theorem 3.1 and Theorem 4.3), or on stronger assumptions such as convergence to a local optimum (Theorem 3.2) or near-global optimum (Theorem 4.1) of Problem (1) in order to obtain stronger results.
## 3 Tempered overfitting in one dimension
Throughout this section we study the one dimensional case \(d=1\) under the uniform distribution \(\mathcal{D}_{x}=\mathrm{Unif}([0,1])\), so that \(L_{\mathrm{cl}}\left(N_{\mathbf{\theta}}\right)=\mathrm{Pr}_{x\sim\mathrm{Unif}([ 0,1])}[N_{\mathbf{\theta}}(x)\leq 0]\). We note that although we focus here on the uniform distribution for simplicity, our proof techniques are applicable in principle to other distributions with bounded density. Further note that both results to follow are independent of the number of neurons, so that in our setting, tempered overfitting occurs as soon as the network is wide enough to interpolate the data. We first show that any KKT point of the margin maximization problem (and thus any network we may converge to) gives rise to tempered overfitting.
**Theorem 3.1**.: _Let \(d=1\), \(p\in[0,\frac{1}{2})\). Then with probability at least \(1-\delta\) over the sample \(S\sim\mathcal{D}^{m}\), for any KKT point \(\boldsymbol{\theta}=\left(v_{j},w_{j},b_{j}\right)_{j=1}^{n}\) of Problem (1), it holds that_
\[c\left(p^{7}-\sqrt{\frac{\log(m/\delta)}{m}}\right)\leq L_{\rm cl}\left(N_{ \boldsymbol{\theta}}\right)\leq C\left(\sqrt{p}+\sqrt{\frac{\log(m/\delta)}{m }}\right)\,\]
_where \(c,C>0\) are absolute constants._
The theorem above shows that for a large enough sample size, \(L_{\rm cl}(N_{\boldsymbol{\theta}})=\Theta({\rm poly}(p))\), proving that the clean test error must scale roughly monotonically with the noise level \(p\), leading to tempered overfitting.2
Footnote 2: We remark that the additional summands are merely a matter of concentration in order to get high probability bounds. As for the expected clean test error, our proof readily shows that \(cp^{7}\leq\mathbb{E}_{S\sim\mathcal{D}^{m}}\left[L_{\rm cl}(N_{\boldsymbol{ \theta}})\right]\leq C\sqrt{p}\).
We will now provide intuition for the proof of Theorem 3.1, which appears in Appendix A.1. The proofs of both the lower and the upper bounds rely on the analysis of Safran et al. (2022) for univariate networks that satisfy the KKT conditions. For the lower bound, we note that with high probability over the sample \((x_{i})_{i=1}^{m}\sim\mathcal{D}_{x}^{m}\), approximately \(p^{7}m\) of the sampled points will be part of a sequence of 7 consecutive points \(x_{i}<\cdots<x_{i+6}\) all labeled \(-1\). A lemma due to Safran et al. (2022) implies that any such sequence must contain a segment \([x_{j},x_{j+1}],\ i\leq j\leq i+5\) for which \(N_{\boldsymbol{\theta}}|_{[x_{j},x_{j+1}]}<0\), contributing to the clean test error proportionally to the length of \([x_{j},x_{j+1}]\). Under the probable event that most samples are not too close to one another, namely of distance \(\Omega(1/m)\), we get that the total length of all such segments (and hence the error) is at least of order \(\Omega(\frac{mp^{7}}{m})=\Omega(p^{7})\).
As to the upper bound, we observe that the number of neighboring samples with opposing labels is likely to be on the order of \(pm\), which implies that the data can be interpolated using a network of width of order \(n^{*}=O(pm)\). We then invoke a result of Safran et al. (2022) that states that the class of interpolating networks that satisfy the KKT conditions has VC dimension \(d_{\rm VC}=O(n^{*})=O(pm)\). Thus, a standard uniform convergence result for VC classes asserts that the test error would be of order \(\frac{1}{m}\sum_{i\in[m]}\mathbbm{1}\left\{\operatorname{sign}(N_{\boldsymbol {\theta}})(x_{i})\neq y_{i}\right\}+O(\sqrt{d_{\rm VC}/m})=0+O(\sqrt{pm/m})=O (\sqrt{p})\).
While the result above establishes the overfitting being tempered with an error scaling as \({\rm poly}(p)\), we note that the range \([cp^{7},C\sqrt{p}]\) is rather large. Moreover, previous works suggest that in many cases we should expect the clean test error to scale _linearly_ with \(p\) (Belkin et al., 2018, Chatterji and Long, 2021, Manoj and Srebro, 2023). Therefore it is natural to conjecture that this is also true here, at least for trained NNs that we are likely to converge to. We now turn to show such a result under a stronger assumption, where instead of any KKT point of the margin maximization problem, we consider specifically local minima. We note that several prior works have studied networks corresponding to global minima of the max-margin problem, which is of course an even stronger assumption than local minimality (e.g. Savarese et al., 2019, Glasgow et al., 2022, Boursier and Flammarion, 2023). We say that \(\boldsymbol{\theta}\) is a local minimum of Problem (1) whenever there exists \(r>0\) such that if \(\|\boldsymbol{\theta}^{\prime}-\boldsymbol{\theta}\|<r\) and \(\forall i\in[m]:y_{i}N_{\boldsymbol{\theta}^{\prime}}(x_{i})\geq 1\) then \(\|\boldsymbol{\theta}\|^{2}\leq\|\boldsymbol{\theta}^{\prime}\|^{2}\).
**Theorem 3.2**.: _Let \(d=1\), \(p\in[0,\frac{1}{2})\). Then with probability at least \(1-\delta\) over the sample \(S\sim\mathcal{D}^{m}\), for any local minimum \(\boldsymbol{\theta}=(v_{j},w_{j},b_{j})_{j=1}^{n}\) of Problem (1), it holds that_
\[c\left(p-\sqrt{\frac{\log(m/\delta)}{m}}\right)\leq L_{\rm cl}\left(N_{ \boldsymbol{\theta}}\right)\leq C\left(p+\sqrt{\frac{\log(m/\delta)}{m}} \right)\,\]
_where \(c,C>0\) are absolute constants._
The theorem above shows that whenever the sample size is large enough, the clean test error indeed scales linearly with \(p\).3 The proof of Theorem 3.2, which appears in Appendix A.2, is based on a different analysis than that of Theorem 3.1. Broadly speaking, the proof relies on analyzing the structure of the learned prediction function, assuming it is a local minimum of the max-margin problem. More specifically, in order to obtain the lower bound we consider segments in between samples \(x_{i-1}<x_{i}<x_{i+1}\) that are labeled \(y_{i-1}=1,y_{i}=-1,y_{i+1}=1\). Note that with high probability over the sample, approximately \(p(1-p)^{2}=\Omega(p)\) of the probability mass lies in such segments. Our key proposition is that for a KKT point which is a local minimum, \(N_{\boldsymbol{\theta}}(\cdot)\) must be linear on the segment \([x_{i},x_{i+1}]\), and furthermore that \(N_{\boldsymbol{\theta}}(x_{i})=-1,N_{\boldsymbol{\theta}}(x_{i+1})=1\). Thus, assuming the points are not too unevenly spaced, it follows that a constant fraction of any such segment contributes to the clean test error, resulting in an overall error of \(\Omega(p)\). In order to prove this proposition, we provide an exhaustive list of cases in which the network does _not_ satisfy the assertion, and show that in each case the norm can be locally reduced; see Figure 1 for an illustration of such a case.
Footnote 3: Similarly to footnote 2, our proof also characterizes the expected clean test error for any sample size as \(\mathbb{E}_{S\sim\mathcal{D}^{m}}\left[L_{\mathrm{cl}}(N_{\boldsymbol{\theta} })\right]=\Theta(p)\).
For the upper bound, a similar analysis is provided for segments \([x_{i},x_{i+1}]\) for which \(y_{i}=y_{i+1}=1\). Noting that with high probability over the sample an order of \((1-p)^{2}=1-O(p)\) of the probability mass lies in such segments, it suffices to show that along any such segment the network is positive - implying an upper bound of \(O(p)\) on the possible clean test error. Indeed, we show that along such segments, by locally minimizing parameter norm the network is incentivized to stay positive.
## 4 Benign overfitting in high dimensions
In this section we focus on the high dimensional setting. Throughout this section we assume that the dataset is sampled according to \(\mathcal{D}_{\mathbf{x}}=\mathrm{Unif}(\mathbb{S}^{d-1})\) and \(d\gg m\), where \(m\) is the number of samples, so that \(L_{\mathrm{cl}}\left(N_{\boldsymbol{\theta}}\right)=\Pr_{\mathbf{x}\sim \mathrm{Unif}(\mathbb{S}^{d-1})}[N_{\boldsymbol{\theta}}(\mathbf{x})\leq 0]\). We note that our results hold for other commonly studied distributions (see Remark 4.2). We begin by showing that if the network converges to the maximum margin solution up to some multiplicative factor, then benign overfitting occurs:
**Theorem 4.1**.: _Let \(\epsilon,\delta>0\). Assume that \(p\leq c_{1}\), \(m\geq c_{2}\frac{\log(1/\delta)}{p}\) and \(d\geq c_{3}m^{2}\log\left(\frac{m}{\epsilon}\right)\log\left(\frac{m}{\delta}\right)\) for some universal constants \(c_{1},c_{2},c_{3}>0\). Given a sample \((\mathbf{x}_{i},y_{i})_{i=1}^{m}\sim\mathcal{D}^{m}\), suppose \(\boldsymbol{\theta}=(\mathbf{w}_{j},v_{j},b_{j})_{j=1}^{n}\) is a KKT point of Problem (1) such that \(\|\boldsymbol{\theta}\|^{2}\leq\frac{c_{4}}{\sqrt{p}}\|\boldsymbol{\theta}^{*}\|^{2}\), where \(\boldsymbol{\theta}^{*}\) is a max-margin solution of Eq. (1) and \(c_{4}>0\) is some universal constant. Then, with probability at least \(1-\delta\) over \(S\sim\mathcal{D}^{m}\) we have that \(L_{\mathrm{cl}}\left(N_{\boldsymbol{\theta}}\right)\leq\epsilon\)._

Figure 1: Illustration of the proof idea of Theorem 3.2. The dashed green perturbation reduces the parameter norm while still classifying correctly, by slightly altering exactly two neurons.
The theorem above shows that under the specified assumptions, benign overfitting occurs for any noise level \(p\) which is smaller than some universal constant. Note that this result is independent of the number of neurons \(n\), and requires \(d=\tilde{\Omega}(m^{2})\). The mild lower bound on \(m\) is merely to ensure that the negative samples concentrate around a \(\Theta(p)\) fraction of the entire dataset. Also note that the dependence on \(\epsilon\) is only logarithmic, which means that in the setting we study, the clean test error decays exponentially fast with \(d\).
We now turn to provide a short proof intuition, while the full proof can be found in Appendix B.1. The main crux of the proof is showing that \(\sum_{j=1}^{n}v_{j}\sigma(b_{j})=\Omega(1)\), i.e. the bias terms are the dominant factor in the output of the network, and they tend towards being positive. In order to see why this suffices to show benign overfitting, first note that \(N_{\mathbf{\theta}}(\mathbf{x})=\sum_{j=1}^{n}v_{j}\sigma(\mathbf{w}_{j}^{\top} \mathbf{x}+b_{j})\) equals
\[\sum_{j=1}^{n}v_{j}\sigma(\mathbf{w}_{j}^{\top}\mathbf{x}+b_{j})+\sum_{j=1}^{n }v_{j}\sigma(b_{j})-\sum_{j=1}^{n}v_{j}\sigma(b_{j})\geq\sum_{j=1}^{n}v_{j} \sigma(b_{j})-\sum_{j=1}^{n}\left|v_{j}\sigma(\mathbf{w}_{j}^{\top}\mathbf{x} )\right|\;.\]
All the samples (including the test sample \(\mathbf{x}\)) are drawn independently from \(\mathrm{Unif}(\mathbb{S}^{d-1})\), thus with high probability \(|\mathbf{x}^{\top}\mathbf{x}_{i}|=o_{d}(1)\) for all \(i\in[m]\). Using our assumption regarding convergence to the max-margin solution (up to some multiplicative constant), we show that the norms \(\|\mathbf{w}_{j}\|\) are bounded by some term independent of \(d\). Thus, we can bound \(|\mathbf{x}^{\top}\mathbf{w}_{j}|=o_{d}(1)\) for every \(j\in[n]\). Additionally, by the same assumption we show that both \(\sum_{j=1}^{n}\|\mathbf{w}_{j}\|\) and \(\sum_{j=1}^{n}|v_{j}|\) are bounded by the value of the max-margin solution (up to a multiplicative factor), which we bound by \(O(\sqrt{m})\). Plugging this into the displayed equation above, and using the assumption that \(d\) is sufficiently larger than \(m\), we get that the \(\sum_{j=1}^{n}|v_{j}\sigma(\mathbf{w}_{j}^{\top}\mathbf{x})|\) term is negligible, and hence the bias terms \(\sum_{j=1}^{n}v_{j}\sigma(b_{j})\) are the predominant factor in determining the output of \(N_{\mathbf{\theta}}\).
Showing that \(\sum_{j=1}^{n}v_{j}\sigma(b_{j})=\Omega(1)\) is done in two phases. Recall the definitions of \(I_{+}:=\{i\in[m]:y_{i}=1\}\), \(I_{-}=\{i\in[m]:y_{i}=-1\}\), and that by our label distribution and assumption on \(p\) we have \(|I_{+}|\approx(1-p)m\gg pm\approx|I_{-}|\). We first show that if the bias terms are too small, and the predictor correctly classifies the training data, then its parameter vector \(\mathbf{\theta}\) satisfies \(\|\mathbf{\theta}\|^{2}=\Omega(|I_{+}|)\). Intuitively, this is because the data is nearly mutually orthogonal, so if the biases are small, the vectors \(\mathbf{w}_{j}\) must have a large enough correlation with each different point in \(I_{+}\) to classify it correctly. On the other hand, we explicitly construct a solution that has large bias terms, which satisfies \(\|\mathbf{\theta}\|^{2}=O(|I_{-}|)\). By setting \(p\) small enough, we conclude that the biases cannot be too small.
**Remark 4.2** (Data distribution).: _Theorem 4.1 is phrased for data sampled uniformly from a unit sphere, but can be easily extended to other isotropic distributions such as standard Gaussian and uniform over the unit ball. In fact, the only properties of the distribution we use are that \(|\mathbf{x}_{i}^{\top}\mathbf{x}_{j}|=o_{d}(1)\) for every \(i\neq j\in[m]\), and that \(\|XX^{\top}\|=O(1)\) (independent of \(m\)) where \(X\) is an \(m\times d\) matrix whose rows are \(\mathbf{x}_{i}\)._
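Both properties mentioned in the remark are easy to verify numerically for the uniform sphere distribution; the following illustrative snippet checks the near-orthogonality of the samples and the boundedness of \(\|XX^{\top}\|\):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 100, 100_000
X = rng.standard_normal((m, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # rows uniform on S^{d-1}
G = X @ X.T                                      # Gram matrix of the sample
print(np.abs(G - np.eye(m)).max())               # ~0.01: |x_i . x_j| = o_d(1)
print(np.linalg.norm(G, 2))                      # ~1:    ||X X^T|| = O(1)
```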
### Benign overfitting under KKT assumptions
In Theorem 4.1 we showed that benign overfitting occurs in high dimensional data, but under the strong assumption of convergence to the max-margin solution up to a multiplicative factor (whose value is allowed to be larger for smaller values of \(p\)). On the other hand, Theorem 2.1 only guarantees convergence to a KKT point of the max-margin problem (1). We note that Vardi et al. (2022) provided several examples of ReLU neural networks where there are in fact KKT points which are not a global or even local optimum of the max-margin problem.
Consequently, in this section we aim at showing a benign overfitting result by only assuming convergence to a KKT point of the max-margin problem. For such a result we use two additional assumptions, namely: (1) The output weights \(v_{j}\) are all fixed to be \(\pm 1\), while only \((\mathbf{w}_{j},b_{j})_{j\in[n]}\) are trained;4 and (2) Both the noise level \(p\) and the input dimension \(d\) depend on \(n\), the number of neurons. Our main result is the following:
Footnote 4: This assumption appears in prior works such as Cao et al. (2022); Frei et al. (2023); Xu and Gu (2023).
**Theorem 4.3**.: _Let \(\epsilon,\delta>0\). Assume that \(p\leq\frac{c_{1}}{n^{2}}\), \(m\geq c_{2}n\log\left(\frac{1}{\delta}\right)\) and \(d\geq c_{3}m^{4}n^{4}\log\left(\frac{m}{\epsilon}\right)\log\left(\frac{m^{2}} {\delta}\right)\) for some universal constants \(c_{1},c_{2},c_{3}>0\). Assume that the output weights are fixed so that \(|\{j:v_{j}=1\}|=|\{j:v_{j}=-1\}|\) while \((\mathbf{w}_{j},b_{j})_{j\in[n]}\) are trained, and that \(N_{\boldsymbol{\theta}}\) converges to a KKT point of Problem (1). Then, with probability at least \(1-\delta\) over \(S\sim\mathcal{D}^{m}\) we have \(L_{\mathrm{cl}}\left(N_{\boldsymbol{\theta}}\right)\leq\epsilon\)._
Note that contrary to Theorem 4.1 and to the results from Section 3, here there is a dependence on \(n\), the number of neurons. For \(n=O(1)\) we get benign overfitting for any \(p\) smaller than a universal constant. This dependence is due to a technical limitation of the proof (which we will soon explain), and it would be interesting to remove it in future work. The full proof can be found in Appendix B.2, and here we provide a short proof intuition.
The proof strategy is similar to that of Theorem 4.1, by showing that \(\sum_{j=1}^{n}v_{j}\sigma(b_{j})=\Omega(1)\), i.e. the bias terms are dominant and tend towards being positive. The reason that this suffices is proven using similar arguments to those of Theorem 4.1. The main difficulty of the proof is showing that \(\sum_{j=1}^{n}v_{j}\sigma(b_{j})=\Omega(1)\). We can write each bias term as \(b_{j}=v_{j}\sum_{i\in[m]}\lambda_{i}y_{i}\sigma^{\prime}_{i,j}\), where \(\sigma^{\prime}_{i,j}=\mathbbm{1}\left(\mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_ {j}>0\right)\) is the derivative of the \(j\)-th ReLU neuron at the point \(\mathbf{x}_{i}\). We first show that all the bias terms \(b_{j}\) are positive (Lemma B.5) and that for each \(i\in I_{+}\), there is \(j\in\{1,\ldots,n/2\}\) with \(\sigma^{\prime}_{i,j}=1\). This means that it suffices to show that \(\lambda_{i}\) is not too small for all \(i\in I_{+}\), while for every \(i\in I_{-}\), \(\lambda_{i}\) is bounded from above.
We assume towards contradiction that the sum of the biases is small, and show it implies that \(\lambda_{i}=\Omega\left(\frac{1}{n}\right)\) for \(i\in I_{+}\) and \(\lambda_{i}=O(1)\) for \(i\in I_{-}\). Using the stationarity condition we can write for \(j\in\{1,\ldots,n/2\}\) that \(\mathbf{w}_{j}=\sum_{i=1}^{m}\lambda_{i}y_{i}\sigma^{\prime}_{i,j}\mathbf{x}_ {i}\). Taking some \(r\in I_{+}\), we have that \(\mathbf{w}_{j}^{\top}\mathbf{x}_{r}\approx\lambda_{r}\sigma^{\prime}_{r,j}\) since \(\mathbf{x}_{r}\) is almost orthogonal to all other samples in the dataset. By the primal feasibility condition (Eq. (3)) \(N_{\boldsymbol{\theta}}(\mathbf{x}_{r})\geq 1\), hence there must be some \(j\in\{1,\ldots,n/2\}\) with \(\sigma^{\prime}_{r,j}=1\) and with \(\lambda_{r}\) larger than some constant. The other option is that the bias terms are large, which we assumed does not happen. Note that we do not know how many neurons \(\mathbf{w}_{j}\) there are with \(\sigma^{\prime}_{r,j}=1\), which means that we can only lower bound \(\lambda_{r}\) by a term that depends on \(n\).
To show that \(\lambda_{s}\) are not too big for \(s\in I_{-}\), we use the complementary slackness condition (Eq. (5)) which states that if \(\lambda_{s}\neq 0\) then \(N_{\boldsymbol{\theta}}(\mathbf{x}_{s})=-1\). Since the sum of the biases is not large, then if there exists some \(s\in I_{-}\) with \(\lambda_{s}\) that is too large we would get that \(N_{\boldsymbol{\theta}}(\mathbf{x}_{s})<-1\) which contradicts the complementary slackness. Combining those bounds and picking \(p\) to be small enough shows that \(\sum_{j=1}^{n}v_{j}\sigma(b_{j})\) cannot be too small.
### The role of the bias and catastrophic overfitting in neural networks
In both our benign overfitting results (Theorem 4.1 and Theorem 4.3), our main proof strategy was showing that if \(p\) is small enough, then the bias terms tend to be positive and dominate over the other components of the network. In this section we further study the importance of the bias terms for obtaining benign overfitting, by examining bias-less ReLU networks of the form \(N_{\boldsymbol{\theta}}(\mathbf{x})=\sum_{j=1}^{n}v_{j}\sigma(\mathbf{w}_{j}^ {\top}\mathbf{x})\) where \(\boldsymbol{\theta}=(v_{j},\mathbf{w}_{j})_{j\in[n]}\). We note that if the input dimension is sufficiently large and \(n\geq 2\), such bias-less networks can still fit any training data. The proofs for this section can be found in Appendix B.3.
We begin by showing that without a bias term, and with no further assumption on the network width, any solution can exhibit catastrophic behavior:
**Proposition 4.4** (Catastrophic overfitting without bias).: _Consider a bias-less network with \(n=2\) which classifies a dataset correctly. If the dataset contains at least one sample with a negative label, then \(L_{\mathrm{cl}}\left(N_{\boldsymbol{\theta}}\right)\geq\frac{1}{2}\)._
While the result above is restricted to networks of width two, it holds for any such network (in particular, under any additional assumption such as convergence to a KKT point). This already precludes the possibility of achieving either benign or tempered overfitting without further assumptions on the width of the network, in contrast to our previous results. Furthermore, we next show that in the bias-less case, the clean test error is lower bounded by a term which depends only on \(n\) (which again holds for any network, and in particular under further assumptions such as convergence to a KKT point).
**Proposition 4.5** (Not benign without bias).: _For any bias-less network of width \(n\), \(L_{\mathrm{cl}}\left(N_{\boldsymbol{\theta}}\right)\geq\frac{1}{2^{n}}\)._
Note that the result above depends on neither \(m\) nor \(d\), thus we cannot hope to prove benign overfitting for the bias-less case unless \(n\) is large, or depends somehow on both \(m\) and \(d\). Furthermore, we next show that even for arbitrarily large \(n\) (possibly scaling with other problem parameters such as \(m,d\)), there are KKT points which do not exhibit benign overfitting:
**Proposition 4.6** (Benign overfitting does not follow from KKT without bias).: _Consider a bias-less network, and suppose that \(\mathbf{x}_{j}^{\top}\mathbf{x}_{i}=0\) for every \(i\neq j\). Denote \(I_{-}:=\{i\in[m]:y_{i}=-1\}\). Then there exists a KKT point \(\boldsymbol{\theta}\) of Problem (1) with \(L_{\mathrm{cl}}\left(N_{\boldsymbol{\theta}}\right)\geq\frac{1}{2}-\frac{1}{2^{|I_{-}|}}\)._
The result above shows that even if we are willing to consider arbitrarily large networks, there still exist KKT points which do not exhibit benign overfitting (at least without further assumptions on the data). This implies that a positive benign overfitting result in the bias-less case, if at all possible, cannot apply to all KKT points; one must therefore make further assumptions on which KKT points are considered, or resort to an alternative analysis. The assumption that \(\mathbf{x}_{j}^{\top}\mathbf{x}_{i}=0\) for every \(i\neq j\) is made to simplify the construction of the KKT point, though we hypothesize that the proposition can be extended to the case where \(\mathbf{x}_{i}\sim\mathrm{Unif}(\mathbb{S}^{d-1})\) (or other isotropic distributions) for large enough \(d\).
## 5 Between tempered and benign overfitting for intermediate dimensions
In Section 3, we showed that tempered overfitting occurs for one-dimensional data, while in Section 4 we showed that benign overfitting occurs for high-dimensional data. In this section we complement these results by empirically studying intermediate dimensions and their relation to the number of samples. Our experiments extend an experiment from [15, Figure 6b] which considered the same distribution as we do, for \(d=10\) and \(m\) varying from \(300\) to \(3.6\times 10^{5}\). Here we study larger values of \(d\), and how the ratio between \(d\) and \(m\) affects the type of overfitting. We also focus on \(2\)-layer networks (although we do show our results may extend to depth \(3\)).
In our experiments, we trained a fully connected neural network with \(2\) or \(3\) layers, width \(n\) (specified below) and ReLU activations. We sampled a dataset \((\mathbf{x}_{i},y_{i})_{i=1}^{m}\sim\mathrm{Unif}(\mathbb{S}^{d-1})\times\mathcal{D}_{y}\) for noise level \(p\in\{0.05,0.1,\ldots,0.5\}\). We trained the network using SGD with a constant learning rate of \(0.1\) and with the logistic (cross-entropy) loss. Each experiment ran for a total of \(20k\) epochs and was repeated \(10\) times with different random seeds; the plots are averaged over the runs. In all of our experiments, the trained network overfitted in the sense that it correctly classified all of the (inherently noisy) training data.
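For concreteness, the following is a minimal PyTorch sketch of this setup. The function names, the default \(d\), and details such as initialization are illustrative assumptions of ours rather than the exact experimental code; the sample sizes, width, learning rate, epoch count, and loss match the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_data(m, d, p):
    x = torch.randn(m, d)
    x = x / x.norm(dim=1, keepdim=True)        # Unif(S^{d-1}) via normalized Gaussians
    flip = torch.rand(m) < p                   # label is -1 with probability p, else +1
    y = torch.where(flip, -torch.ones(m), torch.ones(m))
    return x, y

def clean_test_error(m=500, d=2000, p=0.2, n=1000, epochs=20000, lr=0.1):
    x, y = sample_data(m, d, p)
    net = nn.Sequential(nn.Linear(d, n), nn.ReLU(), nn.Linear(n, 1))
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(epochs):                    # full-batch SGD, i.e. gradient descent
        opt.zero_grad()
        margin = y * net(x).squeeze(-1)
        F.softplus(-margin).mean().backward()  # logistic loss: log(1 + exp(-margin))
        opt.step()
    xt, _ = sample_data(10_000, d, 0.0)        # fresh points; the clean label is always +1
    with torch.no_grad():
        return (net(xt).squeeze(-1) <= 0).float().mean().item()
```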
In Figure 2 we trained a \(2\)-layer network with \(1000\) neurons over different input dimensions, with \(m=500\) samples (left) and \(m=2000\) samples (right). There appears to be a gradual transition from tempered overfitting (with clean test error linear in \(p\)) to benign overfitting (clean test error close to \(0\) for any \(p\)), as \(d\) increases compared to \(m\). Our results indicate that the overfitting behavior is close to tempered/benign in wider parameter regimes than what our theoretical results formally cover (\(d=1\) vs. \(d>m^{2}\)). Additionally, benign overfitting seems to occur for lower values of \(p\), whereas for \(p\) closer to \(0.5\) the clean test error is significantly above \(0\) even for the highest dimensions we examined.
We further experimented on the effects of the width and depth on the type of overfitting. In Figure 3 (left) we trained a \(2\)-layer network on \(m=500\) samples with input dimension \(d\in\{250,2000,10000\}\) and \(n\) neurons for \(n\in\{50,3000\}\). It can be seen that even when the number of neurons vary significantly, the plots remain almost the same. It may indicate that the network width \(n\) has little effect on the type of overfitting (similar to most of our theoretical results), and that the dependence on \(n\) in Theorem 4.3 is an artifact of the proof technique (in accordance with our other results). In Figure 3 (right) we trained a
Figure 3: Left: \(2\)-layer network with \(n=50\) and \(n=3000\) neurons and varying input dimension. Right: \(3\)-layer network with \(n=1000\) neurons and varying input dimension. Both plots correspond to \(m=500\) samples.
Figure 2: Training a \(2\)-layer network with \(1000\) neurons on \(m\) samples drawn uniformly from \(\mathbb{S}^{d-1}\) for varying input dimensions \(d\). Each label is equal to \(-1\) with probability \(p\) and \(+1\) with probability \((1-p)\). Left: \(m=500\), Right: \(m=2000\). The line corresponding to the identity function \(y=x\) was added for reference. Best viewed in color.
\(3\)-layer neural networks on \(m=500\) samples, as opposed to the \(2\)-layer networks used in all our other experiments. Noticeably, this plot looks very similar to Figure 2 (left), which may indicate that our observations generalize to deeper networks.
## 6 Discussion
In this work, we studied different types of overfitting that emerge when training 2-layer ReLU neural networks to interpolate a noisy dataset in a simple classification setting. We proved that univariate data gives rise to tempered overfitting, with a clean (noiseless) test error scaling with the amount of label noise in the training data. For high-dimensional data, the type of overfitting is provably benign, and the clean test error decays exponentially fast to \(0\) with respect to the input dimension. While both results depend on the network having bias terms, we showed that without bias terms, networks can exhibit catastrophic overfitting, and that in general they will not overfit benignly without further assumptions. Finally, we conducted an empirical study for intermediate dimensions and observed a gradual shift from tempered to benign overfitting as the input dimension grows.
There are several interesting future directions which are left open by this work. First, on a more technical level, it would be interesting to address some of the limitations of our proof techniques. For example, proving either benign overfitting or tempered overfitting scaling linearly with the label noise merely under the assumption of convergence to a KKT point (instead of the stronger assumptions we impose for those results), or establishing the impossibility of such results. Our work also leaves open the theoretical understanding of other regimes of the input dimension, such as any constant dimension larger than \(1\), or high-dimensional data where the dimension scales sub-quadratically (e.g., linearly) with the number of training samples. Moreover, a compelling direction would be to extend the studied setting to other ground truth functions \(f^{*}\) which are more complex, and to more complex network architectures. Finally, it would be interesting to further study the bias-less case and assess its corresponding overfitting behavior under suitable assumptions.
### Acknowledgements
We thank Gal Vardi for insightful discussions regarding the implicit bias of neural networks. This research is supported in part by European Research Council (ERC) grant 754705.
|
2304.08239 | RF-GNN: Random Forest Boosted Graph Neural Network for Social Bot
Detection | The presence of a large number of bots on social media leads to adverse
effects. Although Random forest algorithm is widely used in bot detection and
can significantly enhance the performance of weak classifiers, it cannot
utilize the interaction between accounts. This paper proposes a Random Forest
boosted Graph Neural Network for social bot detection, called RF-GNN, which
employs graph neural networks (GNNs) as the base classifiers to construct a
random forest, effectively combining the advantages of ensemble learning and
GNNs to improve the accuracy and robustness of the model. Specifically,
different subgraphs are constructed as different training sets through node
sampling, feature selection, and edge dropout. Then, GNN base classifiers are
trained using various subgraphs, and the remaining features are used for
training Fully Connected Neural Network (FCN). The outputs of GNN and FCN are
aligned in each branch. Finally, the outputs of all branches are aggregated to
produce the final result. Moreover, RF-GNN is compatible with various
widely-used GNNs for node classification. Extensive experimental results
demonstrate that the proposed method obtains better performance than other
state-of-the-art methods. | Shuhao Shi, Kai Qiao, Jie Yang, Baojie Song, Jian Chen, Bin Yan | 2023-04-14T00:57:44Z | http://arxiv.org/abs/2304.08239v1 | # RF-GNN: Random Forest Boosted Graph Neural Network for Social Bot Detection
###### Abstract
The presence of a large number of bots on social media leads to adverse effects. Although Random forest algorithm is widely used in bot detection and can significantly enhance the performance of weak classifiers, it cannot utilize the interaction between accounts. This paper proposes a Random Forest boosted Graph Neural Network for social bot detection, called RF-GNN, which employs graph neural networks (GNNs) as the base classifiers to construct a random forest, effectively combining the advantages of ensemble learning and GNNs to improve the accuracy and robustness of the model. Specifically, different subgraphs are constructed as different training sets through node sampling, feature selection, and edge dropout. Then, GNN base classifiers are trained using various subgraphs, and the remaining features are used for training Fully Connected Neural Network (FCN). The outputs of GNN and FCN are aligned in each branch. Finally, the outputs of all branches are aggregated to produce the final result. Moreover, RF-GNN is compatible with various widely-used GNNs for node classification. Extensive experimental results demonstrate that the proposed method obtains better performance than other state-of-the-art methods.
keywords: Graph Neural Network, Random forest, Social bot detection, Ensemble learning
## 1 Introduction
Social media have become an indispensable part of people's daily lives. However, the existence of automated accounts, also known as social bots, has brought many problems to social media. These bots have been employed to disseminate false information, manipulate elections, and deceive users, resulting in negative societal consequences [1; 2; 3]. Effectively detecting bots on social media plays an important role in protecting user interests and ensuring stable platform operation. Therefore, the accurate detection of bots on social media platforms is becoming increasingly crucial.

Random Forest (RF) [4] is a classical ensemble learning algorithm that can significantly improve the performance of its base classifier, the Decision Tree (DT) [5]. Specifically, \(S\) sub-training sets are generated by randomly selecting \(n\) samples with replacement from the original training set of \(N\) samples \(S\) times. Then, \(m\) features are selected from the \(M\)-dimensional features of each sub-training set, and \(S\) base classifiers are trained using the different sub-training sets. The final classification result is determined by the voting of the base classifiers. Due to its excellent performance, RF has been widely applied in various competitions, such as data mining and financial risk detection, and is also frequently used in social bot detection. [6] made use of an RF classifier to evaluate and detect social bots by creating a system called BotOrNot. [7] conducted a study analyzing the impact of political bots on the 2018 Swedish general elections. The study evaluated several algorithms, including AdaBoost [8], Support Vector Machine (SVM) [8], and RF, and found that RF outperformed the other algorithms, achieving an accuracy of 0.957. [9] conducted an evaluation of several classifiers, including RF, AdaBoost, SVM, and K-Nearest Neighbors (KNN) [10], for bot detection. After analyzing the performance of these classifiers, it was determined that blending using RF produced the best results.

Although the RF algorithm performs well in social bot detection, new camouflage and adversarial techniques evolve to maintain threats and escape detection [11]. Social bots can simulate genuine users through complex strategies to evade feature-based detection methods [1], and feature-based methods rarely consider the connections between users [11], making it challenging to ensure detection accuracy. Graph neural networks (GNNs) [12; 13; 14] are emerging deep learning algorithms that effectively utilize the relationships between nodes in graph structures. They have been widely applied in fields such as social networks and recommendation systems. Recently, GNNs have been applied to social bot detection, utilizing user relationships effectively compared to traditional machine learning algorithms. GNN-based approaches [15; 16; 17] formulate the detection process as a node classification problem. [15] was the first
to attempt to use graph convolutional neural networks [12] in detecting bots, effectively utilizing the graph structure and relationships of Twitter accounts. Recent work has focused on studying multiple relationships in social graphs: [16] introduced Relational Graph Convolutional Networks (RGCN) [29] into Twitter social bot detection, which can utilize multiple social relationships between accounts, and [17] proposes a graph learning data augmentation method to alleviate the negative impact of class imbalance on models in bot detection. To effectively leverage the advantages of ensemble learning and GNNs, we design a Random Forest boosted Graph Neural Network for social bot detection, called RF-GNN. Specifically, RF-GNN uses GNNs as the base classifiers of the RF algorithm. To construct diverse sub-training sets, we build subgraphs using node sampling, feature selection, and edge dropout as the training sets for the different GNN base classifiers. We use the features remaining after feature selection as the input for a Fully Connected Neural Network (FCN), aligning its outputs with those of the GNN base classifiers. As a result, each base classifier becomes less sensitive to its particular sub-training set, increasing the robustness of RF-GNN. Furthermore, the aligning mechanism can effectively utilize the features discarded by the GNN base classifiers. Our proposed RF-GNN framework can be applied to various GNN models and significantly improves their accuracy and robustness. The main contributions of this work are summarized as follows.
* We propose a novel framework that combines the random forest algorithm with GNNs, which is the first of its kind. The framework effectively utilizes the ability of GNNs to leverage relationships and the advantages of ensemble learning to improve the performance and robustness of the model.
* We propose an aligning mechanism that further enhances the performance and robustness of the GNN ensemble model by effectively utilizing the features remaining after feature selection.
* Our proposed framework is flexible and can be utilized with various widely-used backbones. Extensive experimental results demonstrate that our proposed framework significantly improves the performance of GNNs on different social bot detection benchmark datasets.
## 2 Preliminaries
In this section, we define some notations used throughout this paper. Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) represent the graph, where \(\mathcal{V}=\{v_{1},\cdots,v_{N}\}\) is the set of vertices with \(|\mathcal{V}|=N\) and \(\mathcal{E}\) is the set of edges. In this paper, we consider multi-relationship social network graphs. The adjacency matrices are defined as \(\mathbf{A}^{k}\in\{0,1\}^{N\times N}\), \(1\leq k\leq K\), where \(K\) represents the total number of edge types, and \(\mathbf{A}_{i,j}^{k}=1\) if and only if \((v_{i},v_{j})\in\mathcal{E}^{k}\), \(\mathcal{E}^{k}\subseteq\mathcal{E}\). Let \(\mathcal{N}_{i}\) denote the neighborhood of node \(v_{i}\). The feature matrix is denoted as \(\mathbf{X}\in\mathbb{R}^{N\times M}\), where each node \(v\) is associated with an \(M\)-dimensional feature vector \(\mathbf{X}_{v}\).
**Graph Neural Networks** GNNs [12; 13; 14] operate directly on non-Euclidean graph data and generate node-level feature representations through a message passing mechanism. Specifically, the feature representation of a node at layer \(l\) is obtained by aggregating the representations of its 1-hop neighborhood at layer \(l-1\). The \(l\)-th layer of the GNN message-passing scheme is:
\[\mathbf{h}_{v}^{(l)}=\mathrm{COM}\left(\mathbf{h}_{v}^{(l-1)},\mathrm{AGG}\left(\mathbf{h}_{u}^{(l-1)},u\in\mathcal{N}_{v}\right)\right) \tag{1}\]
where \(\mathrm{COM}(\cdot)\) and \(\mathrm{AGG}(\cdot)\) denote the COMBINE and AGGREGATE functions respectively, and \(\mathbf{h}_{v}^{(l)}\) is the representation vector of node \(v\) in the \(l\)-th layer. Specifically, \(\mathbf{h}_{v}^{(0)}=\mathbf{X}_{v}\).
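To make Eq. (1) concrete, here is a minimal sketch of one message-passing layer that uses a neighborhood mean as AGGREGATE and a linear layer with ReLU as COMBINE. This is one common instantiation, assumed for illustration; it is not the specific layer of any particular backbone used in this paper.

```python
import torch
import torch.nn as nn

class MeanMPLayer(nn.Module):
    """One message-passing layer: AGG = neighborhood mean, COM = linear + ReLU."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, adj):
        # adj: dense {0,1} adjacency matrix of shape (N, N)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = (adj @ h) / deg                                     # AGGREGATE over N(v)
        return torch.relu(self.lin(torch.cat([h, agg], dim=1)))  # COMBINE with h_v
```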
## 3 The Proposed Method
### Motivation
The base classifier of RF is the DT, which has several disadvantages compared to neural networks. DTs are prone to overfitting the training data, especially when the tree depth is large. They have weak modeling ability for complex nonlinear relationships and may require more feature engineering to extract informative features. DTs are also sensitive to noise in the data, leading to weak model robustness. Using graph neural networks as the base classifiers of a random forest can effectively address these problems while making effective use of the relationships between accounts. Compared to traditional ensemble learning, introducing relationships between accounts improves the performance of the model; compared to previous GNNs, using bagging improves the performance and robustness of the model.
Figure 1 shows the overall structure of RF-GNN, which consists of the subgraph construction module, the aligning mechanism, and the model ensemble module.
### Subgraph construction
The purpose of subgraph construction is to obtain \(S\) subgraphs, which are used as training data for \(S\) GNN base classifiers. In order to ensure the diversity of the
sub-training sets, three methods are used to construct the subgraphs, including node sampling, feature selection, and edge dropping.
**Node sampling** Due to the unique characteristics of graph data, samples are not drawn by randomly selecting nodes with replacement, as is the case in the random forest algorithm. Instead, a portion of the vertices, along with their connections, is randomly selected without replacement: each node is kept independently (i.i.d.) with probability \(\alpha\).
**Feature selection** Following the RF idea of selecting a subset of feature dimensions as new features, a proportion \(\beta\) of feature dimensions is randomly selected from the \(M\)-dimensional feature vectors of subgraph \(\mathcal{G}_{subi}\) to form \(\mathbf{X}_{subi}\), which is used as the input feature matrix of the \(i\)-th GNN base classifier.
**Edge dropping** To further increase the differences between subgraphs, we randomly remove a portion of the edges in subgraph \(\mathcal{G}_{subi}\): each edge is dropped independently with probability \(1-\gamma\). In other words, a proportion \(\gamma\) of the edges is retained. A sketch of the full subgraph construction is given below.
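The following sketch illustrates how one subgraph could be constructed from these three operations. The function name and the choice to also return the complementary feature columns (which the aligning mechanism of the next subsection consumes) are our assumptions.

```python
import torch

def build_subgraph(x, edge_index, alpha=0.8, beta=0.8, gamma=0.9):
    """Construct one subgraph via node sampling (keep prob. alpha), feature
    selection (keep prob. beta), and edge dropping (keep prob. gamma).
    Also returns the complementary feature columns for the aligning FCN."""
    n, m = x.shape
    keep = torch.rand(n) < alpha                # keep each node i.i.d. w.p. alpha
    node_ids = keep.nonzero(as_tuple=True)[0]
    remap = torch.full((n,), -1, dtype=torch.long)
    remap[node_ids] = torch.arange(node_ids.numel())
    src, dst = edge_index                       # edge_index: (2, E) tensor
    mask = keep[src] & keep[dst]                # drop edges touching removed nodes
    mask &= torch.rand(src.numel()) < gamma     # then keep each edge w.p. gamma
    sub_edges = torch.stack([remap[src[mask]], remap[dst[mask]]])
    perm = torch.randperm(m)
    k = int(beta * m)
    x_sub = x[node_ids][:, perm[:k]]            # features for the GNN base classifier
    x_rest = x[node_ids][:, perm[k:]]           # remaining features for the FCN
    return x_sub, x_rest, sub_edges
```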
### Aligning mechanism
\(S\) subgraphs are constructed for training \(S\) different branches. In the \(i\)-th branch, as shown in Figure 2, only the selected feature dimensions \(\mathbf{X}_{subi}\) are used for
Figure 1: The proposed RF-GNN framework.
the training of the base classifier \(G_{\theta_{i}}\). The remaining features \(\tilde{\mathbf{X}}_{subi}\) left over after feature selection are used to train a fully connected neural network (FCN) \(F_{\theta_{i}}\). The output of the GNN base classifier and the output of the FCN are aligned through a Hadamard product. This alignment encourages agreement between the GNN and the FCN, enhancing the performance and stability of the model:
\[\mathbf{Z}_{i}=G_{\theta_{i}}\left(\mathcal{E}_{i},\mathbf{X}_{subi}\right) \odot F_{\theta_{i}}\left(\tilde{\mathbf{X}}_{subi}\right). \tag{2}\]
We use the output embedding \(\mathbf{Z}_{i}\) in Eq. 2 for semi-supervised classification with a linear transformation and a softmax function. Denoting the class predictions of the \(i\)-th branch as \(\hat{\mathbf{Y}}_{i}\in\mathbb{R}^{N\times C}\), \(\hat{\mathbf{Y}}_{i}\) can be calculated in the following way:
\[\hat{\mathbf{Y}}_{i}=\mathrm{softmax}\left(\mathbf{W}\cdot\mathbf{Z}_{i}+ \mathbf{b}\right), \tag{3}\]
where \(\mathbf{W}\) and \(\mathbf{b}\) are learnable parameters, and softmax is a normalizer across all classes. Suppose the training set is \(V_{L}\); for each \(v_{n}\in V_{L}\), the real label is \(\mathbf{y}_{in}\) and the predicted label is \(\hat{\mathbf{y}}_{in}\). We employ the cross-entropy loss to measure the discrepancy between the real and predicted labels. The loss function of the \(i\)-th branch is as follows:
\[\mathcal{L}_{i}=-\sum_{v_{n}\in V_{L}}\mathbf{y}_{in}^{\top}\log\hat{\mathbf{y}}_{in}. \tag{4}\]
Figure 2: Aligning mechanism in the \(i\)-th branch of RF-GNN.
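A minimal sketch of one branch implementing Eqs. (2)-(4) follows. The backbone `gnn` is assumed to return hidden-dimension node embeddings, and all module and parameter names are ours.

```python
import torch
import torch.nn as nn

class AlignedBranch(nn.Module):
    """One RF-GNN branch: a GNN on the selected features and an FCN on the
    remaining features, fused by a Hadamard product as in Eq. (2)."""
    def __init__(self, gnn, rest_dim, hid_dim, n_classes):
        super().__init__()
        self.gnn = gnn  # assumed to map (x_sub, sub_edges) -> (N, hid_dim) embeddings
        self.fcn = nn.Sequential(nn.Linear(rest_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))
        self.cls = nn.Linear(hid_dim, n_classes)

    def forward(self, x_sub, x_rest, sub_edges):
        z = self.gnn(x_sub, sub_edges) * self.fcn(x_rest)  # elementwise alignment, Eq. (2)
        return torch.log_softmax(self.cls(z), dim=1)       # Eq. (3) in log space
```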
### Model ensemble
The base classifiers of the \(S\) branches can be trained in parallel. After training the \(S\) base classifiers, their outputs are aggregated to obtain the final classification result:
\[\hat{\mathbf{Y}}=\sum_{i=1}^{S}\hat{\mathbf{Y}}_{i}. \tag{5}\]
The learning algorithm is summarized in Algorithm 1.
```
1:Input: Graph \(\mathcal{G}\), node representations \(\mathbf{X}\in\mathbb{R}^{N\times M}\), node sampling probability \(\alpha\), feature selecting probability \(\beta\), edge keeping probability \(\gamma\), GNN base classifiers \(G_{\theta_{1}},G_{\theta_{2}},\ldots,G_{\theta_{S}}\), FCN models \(F_{\theta_{1}},F_{\theta_{2}},\ldots,F_{\theta_{S}}\)
2:Output: Predicted node classes \(\hat{\mathbf{Y}}\)
3: Initialize the classifier parameters \(\theta\)
4:for\(i=1,2,\ldots,S\)do
5: Obtain \(\mathcal{G}_{i}\left(\mathcal{V}_{i},\mathcal{E}_{i}\right)\) by node sampling on \(\mathcal{G}\) with probability \(\alpha\)
6: Obtain \(\mathbf{X}_{subi}\) by feature selection on \(\mathbf{X}\) with probability \(\beta\); denote the remaining features as \(\tilde{\mathbf{X}}_{subi}\)
7: Drop edges in \(\mathcal{E}_{i}\) with probability \(1-\gamma\)
8: Train the GNN base classifier \(G_{\theta_{i}}\) using \(\mathcal{E}_{i}\) and \(\mathbf{X}_{subi}\)
9: Train the FCN model \(F_{\theta_{i}}\) using \(\tilde{\mathbf{X}}_{subi}\)
10: Align the outputs of \(G_{\theta_{i}}\) and \(F_{\theta_{i}}\) via Eq. 2
11: Update the parameters of \(G_{\theta_{i}}\) and \(F_{\theta_{i}}\) by applying gradient descent to minimize Eq. 4
12:endfor
13: Ensemble the output predictions into \(\hat{\mathbf{Y}}\) via Eq. 5
```
**Algorithm 1** RF-GNN Training Algorithm.
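The following sketch mirrors Algorithm 1 in PyTorch. For readability it assumes \(\alpha=1\) (every branch keeps the full node set); with node sampling, the training indices would need to be remapped per branch. The function name and hyperparameter defaults are ours.

```python
import torch
import torch.nn.functional as F

def train_rf_gnn(branches, branch_data, train_idx, y, epochs=200, lr=0.01, wd=5e-4):
    """Train each branch independently (steps 4-11 of Algorithm 1), then sum the
    branch outputs as in Eq. (5). Assumes alpha = 1 so node indices coincide
    across branches; otherwise train_idx must be remapped per branch."""
    for branch, (x_sub, x_rest, edges) in zip(branches, branch_data):
        opt = torch.optim.AdamW(branch.parameters(), lr=lr, weight_decay=wd)
        for _ in range(epochs):
            opt.zero_grad()
            out = branch(x_sub, x_rest, edges)           # log-probabilities per node
            F.nll_loss(out[train_idx], y[train_idx]).backward()
            opt.step()
    with torch.no_grad():
        logits = sum(branch(x_s, x_r, e)
                     for branch, (x_s, x_r, e) in zip(branches, branch_data))
    return logits.argmax(dim=1)                          # final ensemble prediction
```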
## 4 Experiment setup
### Dataset
We evaluate Twitter bot detection models on three datasets that have graph structures: Cresci-15 [3], Twibot-20 [18], and MGTAB [11]. The detailed description of these datasets is as follows:
* Cresci-15 is a dataset consisting of 5,301 users who have been labeled as either genuine or automated accounts. The dataset provides information on the follower and friend relationships between these users.
* Twibot-20 is a dataset containing 229,580 users and 227,979 edges, of which 11,826 accounts have been labeled as genuine or automated. The dataset provides information on the follower and friend relationships between these users.
* MGTAB is a dataset built upon the largest raw data in the field of machine account detection, containing more than 1.5 million users and 130 million tweets. The dataset provides information on 7 types of relationships between these users and labels 10,199 accounts as either genuine or bots.
We construct user social graphs using all labeled users for the above datasets. Specifically, for MGTAB, we use the 20 user attribute features with the highest information gain and the 768-dimensional user tweet features extracted by BERT as user features. For Twibot-20, following the processing in [17], we use 16 user attribute features, the 768-dimensional user description features extracted by BERT, and the user tweet features. For Cresci-15, following the processing approach in [17], we use 6 user attribute features, the 768-dimensional user description features extracted by BERT, and the user tweet features. The statistics of these datasets are summarized in Table 1. We conduct a 1:1:8 random partition into training, validation, and test sets for all datasets.
### Baseline Methods
To verify the effectiveness of our proposed RF-GNN, we compare it with various semi-supervised learning baselines. The details of these baselines are as follows:
* **DT**[5] is a classification rule that infers a tree-shaped structure from a set of training data using a recursive top-down approach. It compares attribute values of the nodes inside the tree and determines the branching direction based on different attribute values.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Dataset & Nodes & Edges & Features & \(\alpha\) & \(\beta\) & \(\gamma\) \\ \hline Cresci-15 & 5,301 & 14,220 & 1,542 & 0.95 & 0.95 & 0.95 \\ Twibot-20 & 11,826 & 15,434 & 1,553 & 0.8 & 0.8 & 0.9 \\ MGTAB & 10,199 & 1,700,108 & 788 & 0.6 & 0.9 & 0.8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of datasets used in the paper.
* **RF**[4] uses DTs as the models in bagging. First, different training sets are generated using the bootstrap method. Then, for each training set, a DT is constructed; finally, the trees are integrated into a forest and used to predict the final results.
* **Node2Vec**[19] is a weighted random walk algorithm that allows the trained vectors to simultaneously satisfy the homophily and structural similarity assumptions between nodes.
* **APPNP**[20] combines GCN with PageRank [21] to construct a simple model that utilizes the propagation of PageRank. It uses a large, adjustable neighborhood to better propagate information from neighboring nodes.
* **GCN**[12] is a representative of the spectral graph convolution methods. By simplifying Chebyshev polynomials to first-order neighborhoods, it obtains node embedding vectors.
* **SGC**[13] is a simplified version of the GCN. It aims to decrease the excessive complexity of GCNs by iteratively removing non-linearities between GCN layers and collapsing the resulting function into a single linear transformation. Utilizing this technique ensures that the performance is comparable to that of GCNs, while significantly reducing the parameter size.
* **GAT**[14] is a semi-supervised homogeneous graph model that applies the attention mechanism to determine the weights of node neighborhoods. By adaptively assigning weights to different neighbors, the performance of graph neural networks is improved.
* **Boosting-GNN**[22] is a graph ensemble learning method that combines GNNs with AdaBoost, and can improve the performance of GNNs under class imbalance conditions.
* **JK-Nets**[23] is a kind of GNN that employs jump knowledge to obtain a more effective structure-aware representation by flexibly utilizing the distinct neighborhood ranges of each node.
* **GraphSAINT**[24] is an inductive learning approach that relies on graph sampling. It addresses the issue of neighbor explosion by sampling subgraphs and applying GCN on them, while ensuring minimal variance and unbiasedness.
* **LA-GCN**[25] improves the expressiveness of GNN by generating neighborhood features based on a conditional generative model, which takes into account the local structure and node features.
### Variants
To gain a more comprehensive understanding of how each module operates within the overall learning framework and to better evaluate their individual contributions to performance improvement, we generated several variants of the full RF-GNN model. The RF-GNN model consists of three primary modules: the subgraph construction module, the aligning mechanism, and the model ensemble module. To conduct an ablation study, we selectively enabled or disabled certain components of these modules. The following is a detailed description of these variants:
* **RF-GNN-\(E\):** This variant solely employs ensembling, to determine its effect without utilizing any other modules. Each base classifier is trained using the entire training set. The final classification result is obtained by aggregating the outputs of the base classifiers using bagging. In this model variant, the number of base classifiers is fixed to \(S\).
* **RF-GNN-\(ES\):** This variant utilizes the bagging method together with subgraph construction. To increase the diversity of the base classifiers, \(S\) different subgraphs are obtained through subgraph construction before training the base classifiers. Each subgraph is used to train one of the \(S\) base classifiers.
* **RF-GNN** contains all modules of the graph learning framework. The aligning mechanism is additionally supplemented upon RF-GNN-\(ES\).
### Parameter Settings
We train all models using the AdamW optimizer for 200 epochs. The learning rate is set to 0.01 for all models, with the exception of Node2Vec, for which it is 0.005. The L2 weight decay factor is set to 5e-4 on all datasets. The dropout rate is set from 0.3 to 0.5. For all models, the input and output dimensions of the GNN layers are consistent, either 128 or 256. The numbers of attention heads for GAT and RGAT [26] are set to 4. We implement RF-GNN with PyTorch 1.8.0, Python 3.7.10, and PyTorch Geometric [27] with sparse matrix multiplication. All experiments are executed on a server with 9 Titan RTX GPUs and a 2.20GHz Intel Xeon Silver 4210 CPU with 512GB RAM. The operating system is Linux bcm 3.10.0.
### Evaluation Metrics
Since the numbers of humans and bots are not roughly equal on social media, we utilize Accuracy and F1-score to indicate the overall performance of the classifiers:
\[\text{Accuracy}=\frac{TP+TN}{TP+FP+FN+TN}, \tag{6}\]
\[\text{Precision}=\frac{TP}{TP+FP}, \tag{7}\]
\[\text{Recall}=\frac{TP}{TP+FN}, \tag{8}\]
\[\text{F1}=\frac{2\times\text{Precision}\times\text{Recall}}{\text{Precision}+ \text{Recall}}, \tag{9}\]
where \(TP\) is True Positive, \(TN\) is True Negative, \(FP\) is False Positive, \(FN\) is False Negative.
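These metrics follow directly from the confusion-matrix counts; a minimal helper is sketched below. In practice one would typically use library implementations (e.g., scikit-learn), so this function and its name are purely illustrative.

```python
def accuracy_f1(tp, tn, fp, fn):
    """Accuracy and F1 from confusion-matrix counts, per Eqs. (6)-(9)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1
```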
## 5 Experiment results
In this section, we conduct several experiments to evaluate RF-GNN. We mainly answer the following questions:
* **Q1:** How different algorithms perform in different scenarios, i.e., algorithm effectiveness (Section 5.1).
* **Q2:** How each individual module of RF-GNN contributes to the overall effectiveness (Section 5.2).
* **Q3:** How the number of base classifiers affects the performance of RF-GNN (Section 5.3).
* **Q4:** How RF-GNN performs under different parameter settings, i.e., parameter sensitivity (Section 5.4).
* **Q5:** How RF-GNN performs when applied to heterogeneous GNNs, i.e., extensibility (Section 5.5).
* **Q6:** How different algorithms perform when dealing with noisy data, i.e., robustness (Section 5.6).
### Overall performance
In this section, we perform experiments on public available social bot detection datasets to evaluate the effectiveness of our proposed method. We randomly conduct a 1:1:8 partition as training, validation, and test set. To reduce randomness and ensure the stability of the results, each method was evaluated five times with different seeds. We report the average test results of baselines, RF-GNN, and the variants. As shown in Table 2, RF-GNN outperforms other baselines and different variants across all scenarios.
The performance of the RF is significantly improved compared to the base classifier DT on all datasets. However, due to the inability of RF to utilize the relationship features among users, it performs less effectively than the GCN classifier on the MGTAB and Cresci-15 datasets.
Node2Vec consistently performs poorly compared to other baseline methods across all datasets. This can be attributed to Node2Vec's approach of controlling the random walk process within the step size. It struggles to capture the similarity
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{MGTAB} & \multicolumn{2}{c}{Twibot-20} & \multicolumn{2}{c}{Cresci-15} \\ \cline{2-7} & Acc & F1 & Acc & F1 & Acc & F1 \\ \hline Decision Tree & 78.80 \(\pm\)1.46 & 73.22 \(\pm\)1.89 & 65.00 \(\pm\)6.27 & 64.67 \(\pm\)6.32 & 88.42 \(\pm\)1.61 & 87.01 \(\pm\)2.01 \\ Random Forest & 84.46 \(\pm\)1.33 & 78.45 \(\pm\)1.95 & 73.42 \(\pm\)1.21 & 73.29 \(\pm\)1.25 & 94.36 \(\pm\)0.52 & 93.92 \(\pm\)0.53 \\ Node2Vec & 73.35 \(\pm\)0.19 & 60.20 \(\pm\)0.72 & 51.85 \(\pm\)0.20 & 48.98 \(\pm\)0.39 & 73.22 \(\pm\)0.60 & 70.83 \(\pm\)0.56 \\ APPNP & 75.08 \(\pm\)1.73 & 61.66 \(\pm\)1.25 & 53.13 \(\pm\)3.80 & 50.82 \(\pm\)3.38 & 95.33 \(\pm\)0.48 & 94.97 \(\pm\)0.51 \\ GCN & 84.98 \(\pm\)0.70 & 79.63 \(\pm\)1.02 & 67.76 \(\pm\)1.24 & 67.34 \(\pm\)1.16 & 95.19 \(\pm\)0.99 & 94.88 \(\pm\)1.02 \\ SGC & 85.14 \(\pm\)0.72 & 80.60 \(\pm\)1.66 & 68.01 \(\pm\)0.40 & 67.60 \(\pm\)0.24 & 95.69 \(\pm\)0.84 & 95.39 \(\pm\)0.85 \\ GAT & 84.94 \(\pm\)0.29 & 80.22 \(\pm\)0.44 & 71.71 \(\pm\)1.36 & 71.18 \(\pm\)1.38 & 96.10 \(\pm\)0.46 & 95.79 \(\pm\)0.49 \\ Boosting-GNN & 85.14 \(\pm\)0.72 & 79.84 \(\pm\)1.09 & 68.10 \(\pm\)0.77 & 67.77 \(\pm\)0.79 & 95.69 \(\pm\)0.47 & 95.40 \(\pm\)0.49 \\ JK-Nets & 84.58 \(\pm\)0.28 & 80.60 \(\pm\)0.78 & 71.01 \(\pm\)0.54 & 70.77 \(\pm\)0.37 & 96.04 \(\pm\)0.42 & 95.76 \(\pm\)0.43 \\ GraphSAINT & 84.38 \(\pm\)0.11 & 79.06 \(\pm\)0.19 & **76.49**\(\pm\)0.06 & **76.04**\(\pm\)0.14 & **96.18**\(\pm\)0.07 & **95.91**\(\pm\)0.07 \\ LA-GCN & **85.50**\(\pm\)0.28 & **81.12**\(\pm\)0.42 & 74.36 \(\pm\)0.67 & 73.49 \(\pm\)0.67 & 96.02 \(\pm\)0.39 & 95.70 \(\pm\)0.43 \\ \hline RF-GCN-\(E\) & 84.75 \(\pm\)1.02 & 79.47 \(\pm\)1.89 & 72.81 \(\pm\)1.11 & 72.17 \(\pm\)1.07 & 95.23 \(\pm\)1.07 & 94.92 \(\pm\)1.10 \\ RF-GCN-\(ES\) & 84.73 \(\pm\)0.12 & 79.30 \(\pm\)0.71 & 73.12 \(\pm\)0.67 & 72.74 \(\pm\)0.74 & 96.02 \(\pm\)0.31 & 95.72 \(\pm\)0.32 \\ RF-GCN & **86.99**\(\pm\)0.20 & **82.94**\(\pm\)0.45 & **82.21**\(\pm\)0.51 & **81.85**\(\pm\)0.53 & **96.40**\(\pm\)0.12 & **96.10**\(\pm\)0.13 \\ RF-SGC-\(E\) & 85.23 \(\pm\)0.64 & 80.23 \(\pm\)1.32 & 70.76 \(\pm\)0.63 & 70.12 \(\pm\)0.80 & 95.94 \(\pm\)0.72 & 95.64 \(\pm\)0.74 \\ RF-SGC-\(ES\) & 85.68 \(\pm\)0.19 & 80.82 \(\pm\)0.34 & 70.36 \(\pm\)0.56 & 69.87 \(\pm\)0.70 & 96.26 \(\pm\)0.18 & 95.96 \(\pm\)0.20 \\ RF-SGC & **86.96**\(\pm\)0.58 & **83.36**\(\pm\)0.58 & **81.98**\(\pm\)0.45 & **81.65**\(\pm\)0.46 & **96.43**\(\pm\)0.16 & **96.13**\(\pm\)0.18 \\ RF-GAT-\(E\) & 85.76 \(\pm\)0.15 & 81.07 \(\pm\)0.54 & 72.63 \(\pm\)1.06 & 72.11 \(\pm\)1.05 & 96.56 \(\pm\)0.05 & 96.28 \(\pm\)0.06 \\ RF-GAT-\(ES\) & 84.90 \(\pm\)0.03 & 79.30 \(\pm\)0.07 & 74.07 \(\pm\)0.60 & 73.76 \(\pm\)0.64 & 96.45 \(\pm\)0.05 & 96.16 \(\pm\)0.05 \\ RF-GAT & **86.56**\(\pm\)0.12 & **83.14**\(\pm\)0.23 & **81.69**\(\pm\)0.32 & **81.34**\(\pm\)0.37 & **96.62**\(\pm\)0.05 & **96.33**\(\pm\)0.05 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of the average performance of different methods for social bot detection. For RF-GNN, \(S\) is set to 10 for all datasets. The best results of the baseline methods and of our complete RF-GNN are highlighted in bold.
of adjacent nodes in large-scale graphs with highly complex structures and does not effectively utilize node features. The APPNP model incorporates PageRank computation, injecting some global information into the learning process and expanding the neighborhood size. This leads to an improvement in performance compared to Node2Vec. The GCN approach involves multiplying the normalized adjacency matrix with the feature matrix and then multiplying it with a trainable parameter matrix to perform convolution operations on the entire graph data. However, relying on full-graph convolution operations to obtain the global representation vector for a node can significantly impair the model's generalization performance. The SGC model removes the non-linear activation function from GCN. Despite a slightly reduced accuracy across all datasets, SGC can achieve similar performance compared to GCN. In contrast, GAT introduces an attention mechanism to GCN, allowing for adaptive model parameter adjustment during the convolution and feature fusion process by assigning a learnable coefficient to each edge. This improves the model's performance by making it more adaptive to the specific task.
### Ablation Study
The second half of Table 2 presents the performance of the various variants, highlighting the roles of the different modules in our proposed learning framework. RF-GNN-\(E\), which only employs multi-model ensembling, exhibits a minor improvement over the original GNN model. This is because the different base classifiers use the same training set, and the differences between the models are relatively small, which limits the potential improvement from ensembling. RF-GNN-\(ES\), on the other hand, improves model performance in most cases by incorporating subgraph construction and training the base classifiers on different subgraphs. However, since the constructed subgraphs only use a subset of nodes and
Figure 3: 2D t-SNE visualization. 2000 nodes in dataset were randomly selected for plotting. Red and blue represent bot and human, respectively.
features, information from unsampled nodes and features may be missing, leading to performance degradation in certain cases. For instance, on the MGTAB dataset, RF-GCN-_ES_ underperforms RF-GCN-_E_. As illustrated by the results of RF-GNN, combining the aligning mechanism with the RF-GNN-_ES_ backbone can further enhance performance. Most noticeably, the accuracy is improved by over 13% for GCN and SGC on the Twibot-20 dataset.
We separately calculated the average cosine similarity of the output of the base classifiers of RF-GCN-_E_, RF-GCN-_ES_, and RF-GCN on Twibot-20, as shown in Figure 5. Higher similarity in the output indicates higher consistency among the base classifiers, resulting in more accurate results for the ensemble model. Although the output similarity of the base classifiers of RF-GCN-_E_ is high, the base classifiers of RF-GCN-_E_ are trained on the same training set, resulting in limited diversity and a small gain in integrating the output results of the base classifiers. For the RF-GCN-_ES_, the subgraph construction ensures the differences between the base classifiers, but the low output similarity suggests that the accuracy of each base classifier is not high. Using the same subgraph training as RF-GCN
Figure 4: RF-GNN performance with different scale of training data.
Figure 5: Average cosine similarity of the output of the base classifiers on Twibot-20.
_ES_, RF-GCN significantly improves the output similarity of different branches by introducing the alignment mechanism.
### The effect of the number of base classifiers
The number of base classifiers \(S\) is a crucial parameter that impacts the performance of RF-GNN. With a small number of base classifiers, the classification error of RF-GNN is high, resulting in relatively poor performance. Although increasing \(S\) can ensure the diversity of the ensemble classifier, the construction time of RF-GNN is directly proportional to \(S\); setting \(S\) too large can thus lead to a decrease in model efficiency.
As shown in Figure 6, for all datasets without exception, all model instances experience a rise in accuracy as the number of base classifiers grows. On the MGTAB dataset, the accuracy increases significantly as \(S\) increases from 2 to 10. On the Twibot-20 and Cresci-15 datasets, the accuracy shows an increasing trend while \(S\) is less than 6. Across all datasets, the growth in classification accuracy of RF-GNN slows down once \(S\) reaches 10.
### Parameters Sensitivity Analysis
We investigate hyperparameter sensitivity using the GCN backbone. The hyperparameters include \(\alpha\), which adjusts the proportion of nodes selected when constructing subgraphs; \(\beta\), which adjusts the proportion of features selected; and \(\gamma\), which adjusts the proportion of edges retained between nodes. We test the above hyperparameters, varying each from 0.1 to 0.9; the results are shown in Figure 7.
Figure 6: RF-GNN performance changing trend w.r.t. \(S\).
**Analysis of node sampling probability \(\alpha\)** On both the MGTAB and Twibot-20 datasets, increasing the node sampling probability \(\alpha\) leads to an initial improvement in performance followed by a gradual decline. The optimal value of \(\alpha\) for RF-GCN is 0.5, yielding the best performance. On the Cresci-15 dataset, the performance of RF-GCN improves gradually with the increase of \(\alpha\). Overall, RF-GCN exhibits stability when \(\alpha\) is within the range of 0.3 to 0.7 across all datasets.
**Analysis of feature selecting probability \(\beta\)** The feature selection ratio is the parameter that has the greatest impact on the performance of RF-GCN. Across all datasets, when \(\beta\) is less than 0.3, the performance of RF-GCN is poor. This is likely due to the insufficient number of features, which hinders the model's ability to effectively learn and detect bots. On the Twibot-20 dataset, RF-GCN performs best when \(\beta\) is set to 0.7. On the MGTAB and Cresci-15 datasets, the optimal value of \(\beta\) is 0.9 for achieving the best performance.
**Analysis of edge keeping probability \(\gamma\)** RF-GCN is stable when \(\gamma\) is within the range of 0.1 to 0.9 on all datasets. Reducing the value of \(\gamma\) increases the strength of data augmentation on the graph data, and the model performance is slightly improved.
### Extend to heterogeneous GNNs
The Cresci-15, Twibot-20, and MGTAB datasets all contain at least two types of relationships, including followers and friends. Although both types of relationships were used in Section 5.1, the homogeneous graph models employed did not distinguish between different types of relationships, i.e., edge types, during neighborhood aggregation. Heterogeneous GNNs can effectively utilize multiple relationships to achieve better detection results. In this section, we extend the RF-GNN framework to heterogeneous GNNs, specifically using RGCN [29] and RGAT [26] as base classifiers, with results shown in Table 3.
Figure 7: The performance of RF-GCN with varying different hyperparameters in terms of accuracy.
When using heterogeneous GNNs, \(\mathcal{E}=\{\mathcal{E}^{1},\mathcal{E}^{2}\}\), where the adjacency matrices \(\mathbf{A}^{1}\) and \(\mathbf{A}^{2}\) represent the follower and friend relationships, respectively. By leveraging more information from user relationships, the heterogeneous GNNs perform better than the homogeneous GNNs. The results in Table 3 demonstrate the effectiveness of the three modules in RF-GNN. On all datasets, our proposed RF-GNN applied to heterogeneous GNNs consistently enhances the model's performance.
### Robustness of the model
In this section, we evaluate the performance of various models under noise to verify their robustness. In social bot detection, bots may change their registration information to evade detection. We simulate this scenario by adding a certain proportion of random noise to the feature vectors of the accounts. Specifically, we randomly add Gaussian noise with a mean of 0 and a variance of 1 to 10%, 20%, and 30% of the features. The dataset partition and parameter settings are the same as in Section 5.1, and the experimental results are shown in Figure 8.
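A sketch of this noise-injection step is given below. Whether the percentage refers to feature dimensions or individual feature entries is ambiguous in the text, so we assume feature dimensions here; the function name is ours.

```python
import torch

def add_feature_noise(x, frac=0.1):
    """Add N(0, 1) noise to a random fraction of the feature dimensions."""
    m = x.shape[1]
    cols = torch.randperm(m)[: int(frac * m)]   # pick frac of the feature columns
    x = x.clone()
    x[:, cols] += torch.randn(x.shape[0], cols.numel())
    return x
```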
Compared to the full RF-GCN model, RF-GCN-_ES_ removes the alignment mechanism. The results in Figure 8 show that RF-GCN performs better than RF-GCN-_ES_ and GCN on all datasets under different noise ratios, demonstrating that the alignment mechanism proposed in our framework improves the accuracy of the model. As the noise ratio increases, the decrease in accuracy for RF-GCN
\begin{table}
\begin{tabular}{l c c|c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{MGTAB} & \multicolumn{2}{c}{Twibot-20} & \multicolumn{2}{c}{Cresci-15} \\ \cline{2-7} & Acc & F1 & Acc & F1 & Acc & F1 \\ \hline RGCN & 86.50 \(\pm\)0.49 & 82.46 \(\pm\)0.68 & 80.10 \(\pm\)0.49 & 79.76 \(\pm\)0.46 & 96.42 \(\pm\)0.14 & 96.14 \(\pm\)0.15 \\ RF-RGCN-\(\mathit{E}\) & 86.69 \(\pm\)0.36 & 82.74 \(\pm\)0.49 & 80.29 \(\pm\)0.58 & 79.96 \(\pm\)0.54 & 96.46 \(\pm\)0.31 & 96.17 \(\pm\)0.33 \\ RF-RGCN-\(\mathit{ES}\) & 87.25 \(\pm\)0.39 & 83.20 \(\pm\)0.54 & 82.32 \(\pm\)0.48 & 82.01 \(\pm\)0.47 & 96.51 \(\pm\)0.13 & 96.21 \(\pm\)0.15 \\ RF-RGCN & **87.62 \(\pm\)0.30** & **83.83 \(\pm\)0.52** & **83.92 \(\pm\)0.21** & **83.37 \(\pm\)0.27 & **96.74 \(\pm\)0.09** & **96.47 \(\pm\)0.10** \\ \hline \hline RGCN & 86.80 \(\pm\)0.27 & 82.72 \(\pm\)0.27 & 80.42 \(\pm\)0.73 & 80.10 \(\pm\)0.25 & 96.52 \(\pm\)0.32 & 96.24 \(\pm\)0.36 \\ RGAT & 86.93 \(\pm\)0.35 & 82.83 \(\pm\)0.32 & 80.44 \(\pm\)0.82 & 80.12 \(\pm\)0.33 & 96.52 \(\pm\)0.28 & 96.24 \(\pm\)0.25 \\ RF-RGAT-\(\mathit{E}\) & 87.32 \(\pm\)0.31 & 83.24 \(\pm\)0.38 & 82.45 \(\pm\)0.72 & 82.08 \(\pm\)0.65 & 96.76 \(\pm\)0.13 & 96.54 \(\pm\)0.12 \\ RF-RGAT & **87.86 \(\pm\)0.24** & **83.99 \(\pm\)0.10** & **83.96 \(\pm\)0.32** & **83.45 \(\pm\)0.28** & **96.76 \(\pm\)0.13** & **96.54 \(\pm\)0.12** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of the average performance of heterogeneous GNNs for social bot detection. Each method was evaluated five times with different seeds. The best results of the baseline methods and of our complete RF-GNN are highlighted in bold.
is less than that of RF-GCN-_ES_ and GCN, demonstrating that our proposed alignment mechanism increases the robustness of the model.
## 6 Related work
### Social bot detection
Social bot detection methods can be broadly divided into two categories: feature-based and graph-based approaches. Feature-based methods rely on feature engineering to design or extract effective detection features, and then use machine learning classifiers for classification. In earlier studies [3; 30], features such as the number of followers and friends, the number of tweets, and the creation date were used. Other studies have incorporated features derived from user tweets [30; 31]. To identify social bots using extracted user features, machine learning algorithms have been employed in numerous studies [1; 2; 3]. In particular, ensemble learning techniques such as Random Forest [4], AdaBoost [8], and XGBoost [32] have been applied in bot detection. Although feature-based methods are simple and efficient, they fail to utilize the interaction relationships between users. Graph-based machine account detection methods [15; 16; 17] convert account detection into a graph node classification problem by constructing a social relationship graph of users. Compared to feature-based methods, graph-based methods have increasingly gained attention for their effective use of user interaction features such as follow and friend relationships [11].
### Semi-supervised Learning on Graphs
DeepWalk [33] is the first work for graph embedding. As an unsupervised method for learning latent representations of nodes, DeepWalk can easily be turned
Figure 8: The performance of RF-GCN with different proportion of random noise.
into a semi-supervised baseline model when combined with an SVM classifier. Compared to DeepWalk's random walk strategy for graph embedding, Node2Vec [19] balances homophily and structural equivalence by adjusting the probabilities of the random walks. Graph neural network (GNN) models have achieved enormous success. Originally inspired by graph spectral theory, [34] first designed a learnable graph convolution operation in the Fourier domain. The model proposed by [12] simplifies the convolution operation by using a linear filter and has become the most prevailing one. GAT [14] proposed using an attention mechanism to weight the feature sum of neighboring nodes on the basis of GCN. Many algorithms [23; 24; 25] have improved GCN and boosted the performance of graph neural networks. Recently, some methods have combined ensemble learning with graph neural networks and achieved good results. In graph classification tasks, XGraphBoost [35] first formats the original molecular data into a graph structure, then uses a graph neural network to learn the graph representation of molecular features, and finally loads the graph representation as sample features into the supervised learning model XGBoost. XGraphBoost facilitates efficient and accurate prediction of various molecular properties. In node classification tasks, Boosting-GNN [22] combines GNNs with the AdaBoost algorithm, proposing a graph representation learning framework that improves node classification performance under imbalanced conditions.
## 7 Conclusion
In this paper, we proposed a novel technique for detecting social bots, the Random Forest boosted Graph Neural Network (RF-GNN), which employs GNNs as the base classifiers of a random forest. Through subgraph construction, a set of subgraphs is generated, and different subgraphs are used to train the GNN base classifiers. The outputs of the different branches are then integrated. Additionally, we propose the aligning mechanism, which leverages the outputs of the GNNs and FCNs to further enhance performance and robustness. Our proposed method effectively harnesses the advantages of GNNs and ensemble learning, resulting in a significant improvement in the detection performance and robustness of GNN models. Our experiments demonstrate that RF-GNN consistently outperforms state-of-the-art GNN baselines on bot detection benchmark datasets.
**CRediT authorship contribution statement**
**Shuhao Shi:** Conceptualization, Investigation, Methodology, Software, Validation, Analysis, Writing, Visualization. **Kai Qiao:** Conceptualization, Investigation, Methodology, Software, Validation, Analysis, Writing, Visualization. **Jie Yang:** Methodology, Validation, Writing, Supervision. **Baojie Song:** Methodology, Writing, Visualization. **Jian Chen:** Conceptualization, Methodology, Supervision. **Bin Yan:** Conceptualization, Methodology, Supervision.
## Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Data availability
The data used in this paper are all public datasets. The source code, data, and other artifacts have been made available at [https://github.com/GraphDetec/RF-GNN](https://github.com/GraphDetec/RF-GNN).
## Acknowledgment
This work was supported by the National Key Research and Development Project of China (Grant No. 2020YFC1522002).
|
2305.10987 | SPENSER: Towards a NeuroEvolutionary Approach for Convolutional Spiking
Neural Networks | Spiking Neural Networks (SNNs) have attracted recent interest due to their
energy efficiency and biological plausibility. However, the performance of SNNs
still lags behind traditional Artificial Neural Networks (ANNs), as there is no
consensus on the best learning algorithm for SNNs. Best-performing SNNs are
based on ANN to SNN conversion or learning with spike-based backpropagation
through surrogate gradients. The focus of recent research has been on
developing and testing different learning strategies, with hand-tailored
architectures and parameter tuning. Neuroevolution (NE), has proven successful
as a way to automatically design ANNs and tune parameters, but its applications
to SNNs are still at an early stage. DENSER is a NE framework for the automatic
design and parametrization of ANNs, based on the principles of Genetic
Algorithms (GA) and Structured Grammatical Evolution (SGE). In this paper, we
propose SPENSER, a NE framework for SNN generation based on DENSER, for image
classification on the MNIST and Fashion-MNIST datasets. SPENSER generates
competitive performing networks with a test accuracy of 99.42% and 91.65%
respectively. | Henrique Branquinho, Nuno Lourenço, Ernesto Costa | 2023-05-18T14:06:37Z | http://arxiv.org/abs/2305.10987v1 | # SPENSER: Towards a NeuroEvolutionary Approach for Convolutional Spiking Neural Networks
###### Abstract.
Spiking Neural Networks (SNNs) have attracted recent interest due to their energy efficiency and biological plausibility. However, the performance of SNNs still lags behind traditional Artificial Neural Networks (ANNs), as there is no consensus on the best learning algorithm for SNNs. Best-performing SNNs are based on ANN to SNN conversion or learning with spike-based backpropagation through surrogate gradients. The focus of recent research has been on developing and testing different learning strategies, with hand-tailored architectures and parameter tuning. Neuroevolution (NE), has proven successful as a way to automatically design ANNs and tune parameters, but its applications to SNNs are still at an early stage. DENSER is a NE framework for the automatic design and parametrization of ANNs, based on the principles of Genetic Algorithms (GA) and Structured Grammatical Evolution (SGE). In this paper, we propose SPENSER, a NE framework for SNN generation based on DENSER, for image classification on the MNIST and Fashion-MNIST datasets. SPENSER generates competitive performing networks with a test accuracy of 99.42% and 91.65% respectively.
spiking neural networks, neuroevolution, DENSER, computer vision
To the best of our knowledge, this is the first work focusing on evolving SNNs trained with BPTT for image classification, including not only different architectures but also different neuronal dynamics and optimizers in the search space. The main contribution of this paper is the preliminary validation of neuroevolution through SPENSER in the automatic generation of competitively performing CSNNs. The main focus of the paper is on the performance of the generated networks in terms of accuracy.
The remainder of this paper is structured as follows: Section 2 provides a review of important concepts regarding SNNs; Section 3 covers related work regarding evolutionary approaches for SNNs; Section 4 describes SPENSER; Section 5 describes the experimental setup; Section 6 analyses the experimental results, covering the evolutionary search and the testing performance of the generated models; Section 7 provides some final remarks and suggested guidelines for future research.
## 2. Spiking Neural Networks
Spiking Neural Networks (SNNs) are a class of neural network models built with spiking neurons where information is encoded in the timing and frequency of discrete events called spikes (or action potentials) over time (Srivastava et al., 2017). Spiking neurons can be characterized by a membrane potential \(V(t)\) and activation threshold \(V_{thresh}\). The weighted sum of inputs of the neuron increases the membrane potential over time. When the membrane potential reaches its activation threshold, a spike is generated (fired) and propagated to subsequent connections. In a feed-forward network, inputs are presented to the network in the form of spike trains (timed sequences of spikes) over \(T\) time steps, during which time spikes are accumulated and propagated throughout the network up to the output neurons.
There are a number of spiking neuron models that vary in biological plausibility and computational cost, ranging from the more realistic and computationally expensive Hodgkin-Huxley (Hodgkin and Huxley, 1983) to more simplistic and computationally lighter models such as the Izhikevich (Izhikevich, 1983), Integrate-and-Fire (IF) (Ling and Fang, 2017) and Leaky Integrate-and-Fire (LIF) (Ling and Fang, 2017). We refer to Long and Fang (Long and Fang, 2017) for an in-depth review of existing spiking neuron models and their behaviour.
The LIF neuron is the most commonly used in the literature due to its simplicity and low computational cost. The LIF neuron can be modeled as a simple parallel Resistor-Capacitor (RC) circuit with a "leaky" resistor:
\[C\frac{dV}{dt}=-g_{L}(V(t)-E_{L})+I(t) \tag{1}\]
In Eq. 1, \(C\) is a capacitor, \(g_{L}\) is the "leaky" conductance, \(E_{L}\) is the resting potential and \(I(t)\) is the current source (synaptic input) that charges up the capacitor to increase the membrane potential \(V(t)\). Solving this differential equation through the Euler method (demonstration in (Kirkpatrick, 1983)), we can calculate a neuron's membrane potential at a given timestep \(t\) as:
\[V[t]=\beta V[t-1]+WX[t]-Act[t-1]V_{thresh} \tag{2}\]
In Eq. 2, \(\beta\) is the decay rate of the membrane potential, \(X[t]\) is the input vector (corresponding to \(I(t)\)), \(W\) is the vector of input weights, and \(Act[t]\) is the activation function. The activation function can be defined as follows:
\[Act[t]=\left\{\begin{array}{ll}1,&\text{if }V[t]>V_{thresh}\\ 0,&otherwise\end{array}\right\} \tag{3}\]
A LIF neuron's membrane potential naturally decays to its resting state over time if no input is received (\(\beta V[t-1]\)). The potential increases when a spike is received from incoming connections, proportionally to the connection's weight (\(WX[t]\)). When the membrane potential \(V(t)\) surpasses the activation threshold \(V_{thresh}\) a spike is emitted and propagated to outgoing connections and the membrane's potential resets (\(-Act[t-1]V_{thresh}\)). Resetting the membrane's potential can be done either by subtraction, as is done in the presented example, where \(V_{thresh}\) is subtracted at the onset of a spike; or to zero, where the membrane potential is set to \(0\) after a spike. A refractory period is usually taken into account where a neuron's potential remains at rest after spiking in spite of incoming spikes. The decay rate and threshold can be static or trainable.
Existing frameworks such as _snntorch_(Kirkpatrick, 1983) allow for the development of SNNs by integration of spiking neuron layers in standard ANN architectures such as Convolutional Neural Networks, by simply replacing the activation layer with a spiking neuron layer.
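As a minimal sketch of that integration (assuming snntorch's `Leaky` layer interface; the layer sizes, decay rate, and number of time steps are hypothetical):

```python
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import utils

net = nn.Sequential(
    nn.Conv2d(1, 8, 3),
    snn.Leaky(beta=0.9, init_hidden=True),  # spiking activation in place of e.g. ReLU
    nn.Flatten(),
    nn.Linear(8 * 26 * 26, 10),
    snn.Leaky(beta=0.9, init_hidden=True, output=True),
)

utils.reset(net)  # reset hidden membrane potentials before a new sample
img = torch.rand(1, 1, 28, 28)
spk_rec = []
for t in range(25):  # accumulate output spikes over T time steps
    spk_out, mem_out = net(torch.bernoulli(img))  # rate-coded input per step
    spk_rec.append(spk_out)
```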
### Information Coding
Spiking systems rely on discrete events to propagate information, so the question arises as to how this information is encoded. We focus on two encoding strategies: rate coding and temporal coding. In **rate coding**, information is encoded in the frequency of firing rates. This is the case in the communication between photoreceptor cells and the visual cortex, where brighter inputs generate higher frequency firing rates as opposed to darker inputs and respectively lower frequency firing rates (Kirkpatrick, 1983). ANNs rely on rate coding of information, as each neuron's output is meant to represent an average firing rate. In **temporal coding**, information is encoded in the precise timing of spikes. A photoreceptor system with temporal coding would encode a bright input as an early spike and a dark input as a late spike. When considering the output of an SNN for a classification task, the predicted class would either be: the one with the highest firing frequency, using rate coding; or the one that fires first, using temporal coding.
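For instance, given recorded output spikes of shape `[T, batch, classes]`, the two readouts can be sketched as follows (the tensors are hypothetical):

```python
import torch

T, B, C = 25, 1, 10
spk_out = torch.randint(0, 2, (T, B, C)).float()  # recorded output spikes

# Rate decoding: the class with the highest spike count wins.
rate_pred = spk_out.sum(dim=0).argmax(dim=-1)

# Temporal decoding: the class whose first spike occurs earliest wins.
t_idx = torch.arange(T).view(-1, 1, 1).expand(T, B, C)
first_spike = torch.where(spk_out > 0, t_idx, torch.full_like(t_idx, T)).amin(dim=0)
temp_pred = first_spike.argmin(dim=-1)
```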
Temporal coding is advantageous in terms of speed and power consumption, as fewer spikes are needed to convey information, resulting in sparser events, which translate to fewer memory accesses and less computation. On the other hand, rate coding is advantageous in terms of error tolerance, as the timing constraint is relaxed to the overall firing rate, and in promoting learning, as the absence of spikes can lead to the "dead neuron" problem, where no learning takes place because there is no spike in the forward pass. Increased spiking activity prevents the "dead neuron" problem.
### Learning
Learning in SNNs remains one of the biggest challenges in the community due to the non-differentiability of the activation function of spiking neurons (Eq. 3), which does not allow for the direct transposition of the error backpropagation algorithm.
Commonly used learning strategies include unsupervised learning through Spike-Timing-Dependent Plasticity (STDP) (Ling and Fang, 2017), offline conversion from trained ANNs to SNNs (also known as shadow
training) (Krizhevsky et al., 2014; Krizhevsky et al., 2014), and supervised learning through backpropagation either using spike times (Beng et al., 2015) or adaptations of the activation function to a continuous-valued function (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014). In this work, we focus on the latter, by training SNNs using backpropagation through time (BPTT) and surrogate gradients.
BPTT is an application of the backpropagation algorithm to the unrolled computational graph over time, usually applied to Recurrent Neural Networks (RNNs) (Krizhevsky et al., 2014). In order to bypass the non-differentiability of the spiking neuron's activation function, one can use surrogate gradients, by approximating the activation function with continuous functions centered at the activation threshold during the backward pass of backpropagation (Srivastava et al., 2014).
In this experimental study, we considered two surrogate gradient functions available in _snntorch_ (Krizhevsky et al., 2014), sketched in code after the list:
* **Fast-Sigmoid** \[Act\approx\frac{V}{1+k|V|}\] (4)
* **Shifted Arc-Tan** \[Act\approx\frac{1}{\pi}\arctan\left(\pi V\frac{\alpha}{2}\right)\] (5)
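One common way to realize such surrogates is a custom autograd function: the forward pass keeps the Heaviside step, while the backward pass substitutes the derivative of the fast-sigmoid approximation, \(1/(1+k|V|)^{2}\). A minimal PyTorch sketch (the slope \(k\) is an illustrative choice, not a value used in our experiments):

```python
import torch

class FastSigmoidSpike(torch.autograd.Function):
    k = 25.0  # surrogate slope (illustrative)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # Heaviside step on the threshold-shifted potential

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # derivative of the fast-sigmoid surrogate: 1 / (1 + k|v|)^2
        return grad_output / (1.0 + FastSigmoidSpike.k * v.abs()) ** 2
```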
Regarding the loss function, there are a number of choices available depending on the output encoding of the network (rate vs temporal), that calculate the loss based on spikes or on membrane potential. For this experimental study, we considered rate encoding for inputs and outputs, and as such, chose the **Mean Square Error Spike Count Loss** (adapted from (Srivastava et al., 2014)). The spike counts of both correct and incorrect classes are specified as targets as a proportion of the total number of time steps (for example, the correct class should fire 80% of the time and the incorrect classes should only fire 10%). The target firing rates are not required to sum to 100%. After a complete forward pass, the mean square error between the actual (\(\sum_{t=0}^{T}Act[t]\)) and target (\(\hat{Act}\)) spike counts of each class \(C\) is calculated and summed together (Eq.6).
\[\mathcal{L}=\frac{1}{T}\sum_{j=0}^{C-1}(\sum_{t=0}^{T}Act_{j}[t]-\hat{Act}_{j })^{2} \tag{6}\]
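A sketch of this loss in PyTorch (tensor shapes and the target-rate arguments are assumptions following the description above; a batch mean is added on top of Eq. 6):

```python
import torch

def mse_spike_count_loss(spk_out, targets, correct_rate=0.8, incorrect_rate=0.1):
    """Eq. 6: squared error between actual and target spike counts, summed over classes.

    spk_out: [T, batch, C] binary output spikes; targets: [batch] class indices.
    """
    T = spk_out.shape[0]
    counts = spk_out.sum(dim=0)                                  # actual counts per class
    target_counts = torch.full_like(counts, incorrect_rate * T)  # incorrect classes
    target_counts.scatter_(1, targets.unsqueeze(1), correct_rate * T)
    return (((counts - target_counts) ** 2).sum(dim=1) / T).mean()
```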
## 3. Related Work
Recent works blending EC and SNNs are mostly focused on evolving a network's weights, using evolutionary approaches as a learning strategy (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014).
Schuman et al. (Schuman et al., 2015) proposed Evolutionary Optimization for Neuromorphic Systems, aiming to train spiking neural networks for classification and control tasks, to train under hardware constraints, to evolve a reservoir for a liquid state machine, and to evolve smaller networks using multi-objective optimization. However, they focus on simple machine learning classification tasks and scalability is unclear. Elbrecht and Schuman (Elbrecht and Schuman, 2015) used HyperNeat (Krizhevsky et al., 2014) to evolve SNNs focusing on the same classification tasks. Grammatical Evolution (GE) has also been used previously by Lopez-Vazquez et al. (Lopez-Vazquez et al., 2015) to evolve SNNs for simple classification tasks.
The current state of the art in the automatic design of CSNN architectures are the works of Kim et al. (Kim et al., 2017) and AutoSNN by Na et al. (Na et al., 2019). Both works focus on Neural Architecture Search (NAS), with an evolutionary search component implemented in AutoSNN, and attain state-of-the-art performances on the CIFAR-10, CIFAR-100 (Krizhevsky et al., 2014), and TinyImageNet datasets. However, both works fix the generated networks' hyperparameters such as LIF neuron parameters and the learning optimizer. Our work differs from these works by incorporating these properties in the search space.
## 4. Spenser
SPENSER (SPiking Evolutionary Network StructurEd Representation) is a general-purpose evolutionary-based framework for the automatic design of SNNs, based on DENSER (Beng et al., 2015; Beng et al., 2015), combining the principles of Genetic Algorithms (GA) (Krizhevsky et al., 2014) and Dynamical Structured Grammatical Evolution (DSGE) (Krizhevsky et al., 2014; Krizhevsky et al., 2014). SPENSER works on a two-level basis, separating the GA and the DSGE level, which allows for the modeling of the overall network structure at the GA level while leaving the network layers' specifications for the DSGE (Figure 1). The use of a grammar is what makes SPENSER a general-purpose framework, as one solely needs to change the grammar to handle different network and layer types, problems, and parameter ranges.
The GA level encodes the macrostructure representing the sequence of evolutionary units that form the network. Each unit corresponds to a nonterminal from the grammar that is later expanded through DSGE. With this representation, we can encode not only the network's layers as evolutionary units but also the optimizer and data augmentation. Furthermore, by assigning each evolutionary unit to a grammar nonterminal, we can encode prior knowledge and bound the overall network architecture.
The DSGE level is responsible for the specification of each layer's type and parameters, working independently from the GA level. DSGE represents an individual's genotype as a set of expansion choices for each expansion rule in the grammar. Starting from a nonterminal unit from the GA level, DSGE follows the expansions set in the individual's genotype until all symbols in the phenotype are terminals. Rules for the layer types and parameters are represented as a Context-Free Grammar (CFG), making it easier to adapt the framework to different types of networks, layers and problem domains.
An example encoding to build CSNNs could be defined by Grammar 1 and the following GA macro structure:
\[[(features,1,10),(classification,1,3),\] \[(output,1,1),(learning,1,1)]\]
The numbers in each macro unit represent the minimum and maximum number of units that can be incorporated into the network. With this example, the \(features\) block encodes layers for feature extraction, and therefore we can generate networks with convolutional and pooling layers, followed by 1 to 3 fully connected layers from the \(classification\) units. The activation layers are restricted to LIF nodes with different surrogate gradient options. The \(learning\) unit represents the optimizer used for learning and its parameters. The \(output\) unit encodes the network's output layer. Numeric parameters are defined by their type, the number of parameters to generate, and the range of possible values.
Regarding variation operators, SPENSER relies on mutations on both levels. At the GA level, individuals can be mutated by adding, replicating, or removing genes i.e. layers. At the DSGE level, mutation changes the layers' parameters by grammatical mutation (replacing grammatical expansions), integer mutation (replacing an integer parameter with a uniformly generated random one), and float mutation (modifying a float parameter through Gaussian perturbation). SPENSER follows a \((1+\lambda)\) evolutionary strategy where the parent individual for the next generation is chosen by highest fitness and mutated to generate the offspring. This evolutionary strategy was chosen due to the computational demands of the network training process, which limits the population size in regard to execution time.
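A minimal sketch of the \((1+\lambda)\) loop (the `mutate` and `fitness` callables stand in for SPENSER's grammar-based mutation and fitness evaluation):

```python
import copy

def one_plus_lambda(parent, mutate, fitness, lam=4, generations=20):
    """(1+lambda) ES: each generation keeps the fittest of the parent and its offspring."""
    best, best_fit = parent, fitness(parent)
    for _ in range(generations):
        offspring = [mutate(copy.deepcopy(best)) for _ in range(lam)]
        for child in offspring:
            f = fitness(child)
            if f >= best_fit:  # ties favor the offspring to encourage drift
                best, best_fit = child, f
    return best, best_fit
```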
## 5. Experimental Setup
For this experimental study, we evolved and tested networks on the MNIST (Krizhevsky et al., 2017) and Fashion-MNIST (Zhu et al., 2017) datasets, available through the Torchvision library of PyTorch. All images were converted to grayscale and their original size was kept (28x28). In order to apply SNNs to these datasets, the images were converted to spike trains using rate coding. The pixel values are normalized between 0 and 1 and each pixel value is used as a probability in a Binomial distribution, which is then sampled from to generate spike trains of length \(T\) time steps. No data augmentation was used. We considered different time steps for each dataset according to their complexity.
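That conversion amounts to an independent draw per pixel per timestep, with the normalized intensity as the firing probability; a sketch (shapes follow the 28x28 grayscale setting):

```python
import torch

T = 25                       # time steps (the Fashion-MNIST setting)
img = torch.rand(1, 28, 28)  # normalized grayscale image, values in [0, 1]
# one Bernoulli draw per pixel per timestep -> spike train of shape [T, 1, 28, 28]
spikes = torch.bernoulli(img.unsqueeze(0).expand(T, -1, -1, -1))
```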
Datasets were split into three subsets: EvoTrain, Fitness and Test. The Test split is the one provided by Torchvision. The EvoTrain and Fitness splits are a 70/30 split of the original Train split. Each independent run generates different EvoTrain and Fitness splits. Table 1 summarises the chosen time steps and the number of samples per split for each dataset.
As this is a preliminary study to validate SPENSER, we settled on one-pass training of individuals as a trade-off between speed and accuracy. During the evolutionary search, individuals are trained on the EvoTrain split for 1 epoch and tested against the Fitness split for fitness assignment. After the evolutionary search is complete, the best individual is further trained for 50 epochs on the entire Train set, and tested against the Test set for accuracy assessment.
We used _snntorch_ (Krizhevsky et al., 2017) to assemble, train and evaluate SNNs based on rate coding. Individuals are trained using BPTT and the chosen loss function was the Mean Square Error Spike Count described in Section 2.2, with a target spiking proportion of 100% for the correct class and 0% for the incorrect classes. The predicted class for a given instance is calculated based on the highest spike count of the output neurons. Accuracy is used as the fitness metric during the evolutionary search and as the final performance assessment of the best found individuals.
The macro structure of individuals for the GA level was set as:
\[[(features,1,6),(classification,1,4),\] \[(output,1,1),(learning,1,1)]\]
Because we are dealing with an image recognition problem, we defined a grammar that contains primitives allowing for the construction of CSNNs, as shown in Grammar 2. Following is a brief description of the grammar.
\(features\) units can be expanded to either Convolutional + Activation, Convolutional + Pooling + Activation, or Dropout layers. Convolutional layers are defined by the number of filters, filter shape, stride, padding and bias. Pooling layers are defined by the pooling type (max or average) and the kernel size. \(classification\) units can be expanded to either Fully-Connected + Activation or Dropout layers. Fully-Connected layers are defined by the number of units. Dropout layers are defined by the dropout rate. The \(output\) unit is set as a Fully-Connected + Activation where the number of units is fixed to the number of classes. Activation layers are currently limited to LIF neurons. LIF neurons are defined by the decay rate \(\beta\), the activation threshold \(V_{\textit{thresh}}\), and the reset mechanism
| | **Time Steps (T)** | **EvoTrain** | **Fitness** | **Test** |
| --- | --- | --- | --- | --- |
| **MNIST** | 10 | 42000 | 18000 | 10000 |
| **F-MNIST** | 25 | 42000 | 18000 | 10000 |

Table 1. Time steps and number of samples per split for each dataset (MNIST and Fashion-MNIST); EvoTrain and Fitness together form the Train split.
Figure 1. Individual generation by SPENSER. The first line represents the GA level where the macrostructure of the network is defined (this individual has 2 _features_ units and 2 _classification_ units). The second line represents the specification of a _classification_ unit through DSGE. Each number in the DSGE level represents the index of the chosen expansion rule for the current non-terminal. The last line is the resulting phenotype of the layer in question (Branquinho et al., 2017).
(subtraction or zero). Furthermore, they are also defined by the surrogate gradient function, which in this case can be either the ATan or the Fast-Sigmoid functions described in Section 2.2. The _learning_ unit encodes the optimizer and can be expanded to either Stochastic Gradient Descent, Adam, or RMSProp. We increased the probability of choosing feature extraction layers over dropout for \(features\) units (Grammar 2, line 1).
Regarding SPENSER's main hyper-parameters, we followed the recommendations of (B
The best networks found in different runs have small variations, showcasing SPENSER's robustness in generating high-performing networks.
We compared the best attained test accuracy with other works that also trained hand-tailored networks through spike-based backpropagation. A comparison of test results is presented in Tab. 4. Albeit not surpassing the state of the art, networks generated by SPENSER are head-to-head with the best-performing networks in the literature.
In order to validate our choice of one-epoch training for fitness assessment, we also trained the best networks found in the first generation of each run for another 50 epochs and tested their performance on the Test set. Fig. 5 displays violin plots for the test accuracy of the best individuals from generation 1 and generation 200. It is clear that the networks' performance is dependent on the architecture rather than training epochs and that the networks evolved by SPENSER perform better than random initialization.
We hypothesize that a big limitation in this experimental study was the choice of the loss function's parameters, as it does not follow the literature's recommendations (Kim et al., 2019). By setting the target firing rate of incorrect classes to 0%, we might be suppressing output activity which is important to distinguish between closely distanced inputs. Furthermore, this experimental setup is sluggish, as training with BPTT is slower than in traditional ANNs and highly memory intensive. Kim et al. (Kim et al., 2019) have achieved impressive results without training the generated networks during the search phase, by estimating their future performance based on spike activation patterns across different data samples, and we believe this might be an important improvement to our framework. With faster experiments, we can focus on increasing diversity and coverage of the search space, so that SPENSER can yield better individuals.
## 7. Final Remarks
In this paper we propose SPENSER, a NE framework to automatically design CSNNs. SPENSER is able to generate competitively performing networks for image classification at the level of the state of the art, without human parametrization of the network's architecture and parameters. SPENSER generated networks with competitive results, attaining 99.42% accuracy on the MNIST (Kim et al., 2019) and 91.65% accuracy on the Fashion-MNIST (Kim et al., 2019) datasets. Current limitations lie in the execution time, due to the computationally intensive BPTT learning algorithm and the memory requirements. Furthermore, we believe the configuration of the loss function played a role in suppressing output activity and potentially decreasing accuracy.
### Future Work
In the future, we plan on:
* Experiment with different loss functions / encode the loss function as an evolvable macro parameter;
* Perform a more in-depth study of the preferred choices during evolution and observable patterns in the best-performing individuals. This could be relevant in uncovering novel optimal architectures and parameters;
* Experiment with different learning algorithms;
* Implement skip connections and back connections;
* Apply regularisation methods to prevent vanishing and exploding gradients.
###### Acknowledgements.
This research was supported by the Portuguese Recovery and Resilience Plan (PRR) through project C645008882-00000055, Center for Responsible AI, by the FCT - Foundation for Science and Technology, IP./MCTES through national funds (PIDDAC), within the scope of CISUC R&D Unit - UIDB/00326/2020 or project code UIDP/00326/2020. The first author is partially funded by FCT - Foundation for Science and Technology, Portugal, under the grant 2022.11314.BD.
|
2305.12148 | Probabilistic Modeling: Proving the Lottery Ticket Hypothesis in Spiking
Neural Network | The Lottery Ticket Hypothesis (LTH) states that a randomly-initialized large
neural network contains a small sub-network (i.e., winning tickets) which, when
trained in isolation, can achieve comparable performance to the large network.
LTH opens up a new path for network pruning. Existing proofs of LTH in
Artificial Neural Networks (ANNs) are based on continuous activation functions,
such as ReLU, which satisfy the Lipschitz condition. However, these
theoretical methods are not applicable in Spiking Neural Networks (SNNs) due to
the discontinuity of the spiking function. We argue that it is possible to extend
the scope of LTH by eliminating the Lipschitz condition. Specifically, we propose a
novel probabilistic modeling approach for spiking neurons with complicated
spatio-temporal dynamics. Then we theoretically and experimentally prove that
LTH holds in SNNs. According to our theorem, we conclude that pruning directly
in accordance with the weight size in existing SNNs is clearly not optimal. We
further design a new criterion for pruning based on our theory, which achieves
better pruning results than baseline. | Man Yao, Yuhong Chou, Guangshe Zhao, Xiawu Zheng, Yonghong Tian, Bo Xu, Guoqi Li | 2023-05-20T09:27:34Z | http://arxiv.org/abs/2305.12148v1 | # Probabilistic Modeling: Proving the Lottery Ticket Hypothesis in Spiking Neural Network
###### Abstract
The Lottery Ticket Hypothesis (LTH) states that a randomly-initialized large neural network contains a small sub-network (i.e., winning tickets) which, when trained in isolation, can achieve comparable performance to the large network. LTH opens up a new path for network pruning. Existing proofs of LTH in Artificial Neural Networks (ANNs) are based on continuous activation functions, such as ReLU, which satisfy the Lipschitz condition. However, these theoretical methods are not applicable in Spiking Neural Networks (SNNs) due to the discontinuity of the spiking function. We argue that it is possible to extend the scope of LTH by eliminating the Lipschitz condition. Specifically, we propose a novel probabilistic modeling approach for spiking neurons with complicated spatio-temporal dynamics. Then we theoretically and experimentally prove that LTH holds in SNNs. According to our theorem, we conclude that pruning directly in accordance with the weight size in existing SNNs is clearly not optimal. We further design a new criterion for pruning based on our theory, which achieves better pruning results than the baseline.
## 1 Introduction
By mimicking the spatio-temporal dynamics behaviors of biological neural circuits, Spiking Neural Networks (SNNs) [1; 2; 3; 4] provide a low-power alternative to traditional Artificial Neural Networks (ANNs). The binary spiking communication enables SNNs to be deployed on neuromorphic chips [5; 6; 7] to perform sparse synaptic accumulation for low energy consumption. Given the memory storage limitations of such devices, neural pruning methods are well recognized as one of the crucial methods for implementing SNNs in real-world applications. Pruning redundant weights from an over-parameterized model is a mature and efficient way of obtaining significant compression [8; 9].
Recently, the Lottery Ticket Hypothesis (LTH), a milestone in the literature of network pruning, was proposed; it asserts that an over-parameterized neural network contains sub-networks that can achieve a similar or even better accuracy than the fully-trained original dense network by _training only once_ [10]. A stronger version of the LTH (SLTH) was then proposed: with high probability, a network with random weights contains sub-networks that can approximate any given sufficiently-smaller neural network, _without any training_ [11]. SLTH claims to find the target sub-network without training; thus it is a kind of complement to the original LTH, which requires training [11]. Meanwhile, SLTH is considered to be "stronger" than LTH because it claims to require no training [12].
The effectiveness of LTH in ANNs has been verified by a series of experiments [10; 13; 14; 11]. Furthermore, it has been theoretically proved by several works under various assumptions [12; 15; 16; 17], due to its attractive properties in statement. A line of work is dedicated to designing efficient pruning algorithms based on LTH [18; 19; 20]. But the role of LTH in SNNs is almost blank. There is only one work that experimentally verifies that LTH can be applied to SNNs [21]; whether it can also be theoretically established remains unknown.
In this work, we theoretically and experimentally prove that LTH (strictly speaking, SLTH3) holds in SNNs. There are two main hurdles. First and foremost, binary spike signals are fired when the membrane potentials of the spiking neurons exceed the firing threshold, thus the activation function of a spiking neuron is discrete. But all current works [12; 15; 16; 17] proving LTH in ANNs rely on the Lipschitz condition. Only when the Lipschitz condition is satisfied is the error between two approximated network layers bounded. Second, brain-inspired SNNs have complex dynamics in the temporal dimension, which is not considered in the existing proofs and increases the difficulty of proving LTH in SNNs.
Footnote 3: In theoretical proofs, SLTH is considered a stronger version of LTH[12; 17]
To bypass the Lipschitz condition, we design a novel probabilistic modeling approach. The approach can theoretically provide the probability that two SNNs behave identically, which is not limited by the complex spatio-temporal dynamics of SNNs. In a nutshell, we establish the following result:
**Informal version of Theorem 4.3** For any given target SNN \(\hat{G}\), there is a sufficiently large SNN \(G\) with a sub-network (equivalent SNN) \(\tilde{G}\) that, with a high probability, can achieve the same output as \(\hat{G}\) for the same input,
\[\sup_{\mathbf{S}\in\mathcal{S}}\frac{1}{T}\sum_{t=1}^{T}\left\|\tilde{G}^{t}(\mathbf{ S})-\hat{G}^{t}(\mathbf{S})\right\|_{2}=0,\]
where \(t=1,2,\cdots,T\) is the timestep, \(\mathbf{S}\) is the input spiking tensor containing only 0 or 1.
As a complement to the theoretical proof part, we show experimentally that there are also sub-networks in SNNs that can work without training. Subsequently, it is clear that ANNs and SNNs operate on fundamentally distinct principles regarding how weights influence the output of the activation layer. Thus, simply using the weight magnitude as the pruning criterion, as in ANNs, cannot be the optimal strategy. Based on this understanding, we design a new weight pruning criterion for SNNs. We evaluate how likely weights are to affect the firing of spiking neurons, and prune according to the estimated probabilities. In summary, our contributions are as follows:
* We propose a novel probabilistic modeling method for SNNs that for the first time theoretically establishes the link between spiking firing and pruning (Section 3). The probabilistic modeling method is a general theoretical analysis tool for SNNs, and it can also be used to analyze the robustness and other compression methods of SNNs (Section 6).
* With the probabilistic modeling method, we theoretically prove that LTH also holds in SNNs with binary spiking activations and complex spatio-temporal dynamics (Section 4).
* We experimentally find the good sub-networks without weight training in random initialized SNNs, which is consistent with our theoretical results. (Section 5)
* We apply LTH for pruning in SNNs and design a new probability-based pruning criterion for it (Section 6). The proposed pruning criterion can achieve better performance in LTH-based pruning methods and can also be exploited in non-LTH traditional pruning methods.
## 2 Related Work
**Spiking neural networks.** The spike-based communication paradigm and event-driven nature of SNNs are key to their energy-efficiency advantages [3; 22; 23; 4]. Spike-based communication makes cheap synaptic Accumulation (AC) the basic operation unit, and the event-driven nature means that only a small portion of the entire network is active at any given time while the rest is idle. In contrast, neurons in ANNs communicate information using continuous values, so Multiply-and-Accumulate (MAC) is the major operation, and generally all MAC must be performed even if all inputs or activations are zeros. SNNs can be deployed on neuromorphic chips for low energy consumption
[6; 5; 24]. Thus, spike-based neuromorphic computing has broad application prospects in battery constrained edge computing platforms [25], e.g., internet of things, smart phones, etc.
**Pruning in spiking neural networks.** Recent studies on SNN pruning have mostly taken two approaches: 1) gaining knowledge from the successful experience of ANN pruning; 2) incorporating the unique biological properties of SNNs. The former technical route is popular and effective. Some typical methods include pruning according to a predefined threshold value [26; 27; 28], soft-pruning that trains weights and pruning thresholds concurrently [29], etc. The temporal dynamics of SNNs are often also taken into consideration in the design of pruning algorithms [30; 21]. Meanwhile, there have been attempts to develop pruning algorithms based on the similarities between SNNs and neural systems, e.g., the regrowth process [31], spine motility [32; 33], gradient rewiring [34], state transition of dendritic spines [35], etc. None of these studies, however, took into account the crucial factor that affects network performance: the link between weights and spiking firing. In this work, we use probabilistic modeling to analyze the impact of pruning on spiking firing, and accordingly design a pruning criterion for SNNs.
## 3 Probabilistic Modeling for Spiking Neurons
No matter how intricate a spiking neuron's internal spatio-temporal dynamics are, ultimately the neuron decides whether to fire a spike or not based on its membrane potential at a specific moment. Thus, the essential question of the proposed probabilistic modeling is how likely the firing of a spiking neuron is to change after changing a weight.
**Leaky Integrate-and-Fire (LIF)** is one of the most commonly used spiking neuron models [1; 2], because it is a trade-off between the complex spatio-temporal dynamic characteristics of biological neurons and the simplified mathematical form. It can be described by a differential function
\[\tau\frac{du(t)}{dt}=-u(t)+I(t), \tag{1}\]
where \(\tau\) is a time constant, and \(u(t)\) and \(I(t)\) are the membrane potential of the postsynaptic neuron and the input collected from presynaptic neurons, respectively. Solving Eq. (1), a simple iterative
Figure 1: **Overview of this work.****(a)**, The goal of spiking neuron approximate. **(b)**, The firing behavior in the approximation of spiking neurons can be described by the proposed probabilistic modeling approach (Section 3). Spikes are fired when the membrane potential \(u\) exceeds the firing threshold \(u_{th}\). As long as a weight change induces \(u\) to fall into the crisis neighborhood, there is a certain probability that the spiking firing will be changed (0 to 1, or 1 to 0). We hope that spiking firing will not change after the redundant weights are pruned. **(c)**, Equivalent structure modeling. The error between the target and equivalent SNN is related to the network width (Section 4). **(d)**, Proof of LTH in SNN (Section 4). **(e)**, We provide a pruning technique for SNNs (Section 6). New pruning criterion: compute the probability that the firing of a spiking neuron changes when weights are pruned, and pruning according to the rank of the probability.
representation of the LIF neuron [36, 37] for easy inference and training can be obtained as follow
\[u_{i}^{t,l} =h_{i}^{t-1,l}+x_{i}^{t,l}, \tag{2}\] \[s_{i}^{t,l} =\mathrm{Hea}(u_{i}^{t,l}-u_{th}),\] (3) \[h_{i}^{t,l} =V_{reset}s_{i}^{t,l}+\beta u_{i}^{t,l}(1-s_{i}^{t,l}),\] (4) \[x_{i}^{t,l} =\sum_{j=1}^{N}w_{ij}^{l}s_{j}^{t,l-1}, \tag{5}\]
where \(u_{i}^{t,l}\) means the membrane potential of the \(i\)-th neuron in the \(l\)-th layer at timestep \(t\), which is produced by coupling the spatial input feature \(x_{i}^{t,l}\) and the temporal input \(h_{i}^{t-1,l}\), \(u_{th}\) is the threshold to determine whether the output spike \(s_{i}^{t,l}\in\{0,1\}\) should be given or stay as zero, \(\mathrm{Hea}(\cdot)\) is a Heaviside step function that satisfies \(\mathrm{Hea}(x)=1\) when \(x\geq 0\), otherwise \(\mathrm{Hea}(x)=0\), \(V_{reset}\) denotes the reset potential which is set after activating the output spiking, and \(\beta=e^{-\frac{1}{\tau}}<1\) reflects the decay factor. In Eq. (2), the spatial feature \(x_{i}^{t,l}\) can be extracted from the spikes \(s_{j}^{t,l-1}\) of the previous layer through a linear or convolution operation (i.e., Eq. (5)), where the latter can also be regarded as a linear operation [38]. \(w_{ij}^{l}\) denotes the weight connection from the \(j\)-th neuron in the \((l-1)\)-th layer to the \(i\)-th neuron in the \(l\)-th layer, and \(N\) indicates the width of the \((l-1)\)-th layer.
The spatio-temporal dynamics of LIF can be described as follows: the LIF neuron integrates the spatial input feature \(x_{i}^{t,l}\) and the temporal input \(h_{i}^{t-1,l}\) into the membrane potential \(u_{i}^{t,l}\); then, the fire-and-leak mechanism is exploited to generate the spatial output \(s_{i}^{t,l}\) and the new neuron state \(h_{i}^{t,l}\) for the next timestep. Specifically, when \(u_{i}^{t,l}\) is greater than the threshold \(u_{th}\), a spike is fired and the neuron state \(h_{i}^{t,l}\) is reset to \(V_{reset}\). Otherwise, no spike is fired and the neuron state decays to \(\beta u_{i}^{t,l}\). Richer neuronal dynamics [2] can be obtained by adjusting information fusion [39], threshold [40], decay [41], or reset [42] mechanisms, etc. Note, notations used in this work are summarized in Appendix A.
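A direct transcription of Eqs. (2)-(4) for one neuron, as a minimal sketch (the parameter values are illustrative):

```python
def lif_step(h_prev, x, beta=0.9, u_th=1.0, v_reset=0.0):
    u = h_prev + x                        # Eq. (2): couple temporal and spatial inputs
    s = 1.0 if u >= u_th else 0.0         # Eq. (3): Heaviside firing condition
    h = v_reset * s + beta * u * (1 - s)  # Eq. (4): reset on spike, otherwise decay
    return s, h
```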
**Probabilistic modeling of spiking neurons.** Our method is applicable to various spiking neuron models as long as Eq. (3) is satisfied. In this section, superscript \(l\) and \(i\) will be omitted when location is not discussed, and superscript \(t\) will be omitted when the specific timestep is not important. The crux of probabilistic modeling is how errors introduced by pruning alter the firing of spiking neurons.
We start by discussing only spatial input features. For a spiking neuron, assuming that the temporal input of a certain timestep is fixed, for two different spatial input features \(x\) and \(x^{\prime}\in[x-\epsilon,x+\epsilon]\), the corresponding membrane potentials are also different, i.e., \(u\) and \(u^{\prime}\in[u-\epsilon,u+\epsilon]\). Once \(u\) and \(u^{\prime}\) are located in different sides of the threshold \(u_{th}\), the output will be different (see Fig. 1**a**). This situation can only happen when \(u\) is located in _crisis neighborhood_\([u_{th}-\epsilon,u_{th}+\epsilon]\) (see Fig. 1**b**). That is, suppose membrane potential \(u\notin[u_{th}-\epsilon,u_{th}+\epsilon]\), if it changes from \(u\) to \(u^{\prime}\), and \(u^{\prime}\in[u-\epsilon,u+\epsilon]\), the output of the spiking neuron must not change. Consequently, the probability upperbound of a spiking neuron output change (_crisis probability_) is:
\[\int_{u_{th}-\epsilon}^{u_{th}+\epsilon}p(u)\mathrm{d}u, \tag{6}\]
where \(p(\cdot)\) is the probability density function of the membrane potential distribution. For the case of two independent spiking neurons, if the input is the same, then \(\epsilon\) is controlled by the weights of the two neurons (Fig. 1**a**).
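As a worked instance of Eq. (6), if one assumes for illustration that the membrane potential is Gaussian (an assumption some prior works adopt, as noted below), the crisis probability has a closed form; all numbers here are hypothetical:

```python
from scipy.stats import norm

mu, sigma = 0.5, 0.3   # hypothetical membrane-potential distribution
u_th, eps = 1.0, 0.05  # firing threshold and input-error bound
# Eq. (6): probability mass inside the crisis neighborhood [u_th - eps, u_th + eps]
crisis_prob = norm.cdf(u_th + eps, mu, sigma) - norm.cdf(u_th - eps, mu, sigma)
print(f"upper bound on a flipped output: {crisis_prob:.4f}")
```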
It is reasonable to consider that the membrane potential follows a certain probability distribution. The membrane potential is accumulated by temporal input and spatial input feature, the former can be regarded as a random variable, and the latter is determined by the input spikes and weights. The input spike is a binary random variable according to a firing rate, and usually the weights are also assumed to satisfy a certain distribution. Moreover, some existing works directly assume that the membrane potential satisfies the Gaussian distribution [43, 44]. In this work, our assumptions about the membrane potential distribution are rather relaxed. Specifically, we give out a class of distributions for membrane potential, and we suppose all membrane potential should satisfy:
**Definition 3.1**.: **m-Neighborhood-Finite Distribution.** For a probability density function \(p(\cdot)\) and a value \(m\), if there exists \(\epsilon>0\) for the neighborhood \([m-\epsilon,m+\epsilon]\), in this interval, the max value
of the function \(p(\cdot)\) is finite, we call the distribution function \(p(\cdot)\) an m-Neighborhood-Finite Distribution. The symbolic language is as follows
\[\exists\,\epsilon>0:\ \sup_{x\in[m-\epsilon,m+\epsilon]}p(x)<+\infty. \tag{7}\]
The upperbound of probability is controllable by controlling the input error \(\epsilon\):
**Lemma 3.2**.: _For the probability density function \(p(\cdot)\), if it is m-Neighborhood-Finite Distribution, for any \(\delta{>0}\), there exists a constant \(\epsilon{>0}\) so that \(\int_{m-\epsilon}^{m+\epsilon}p(x)\mathrm{d}x\leq\delta\)._
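The reasoning behind Lemma 3.2 is a one-line bound; a sketch: let \(\epsilon_{0}>0\) and \(M=\sup_{x\in[m-\epsilon_{0},m+\epsilon_{0}]}p(x)<+\infty\) be the radius and finite supremum guaranteed by Definition 3.1. Then for any \(\epsilon\leq\min\left(\epsilon_{0},\frac{\delta}{2M}\right)\),

\[\int_{m-\epsilon}^{m+\epsilon}p(x)\,\mathrm{d}x\leq 2\epsilon M\leq\delta.\]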
Now, we add consideration of the temporal dynamics of spiking neurons. As shown in Eq. (2) and Eq. (4), spiking neurons have a memory function in that the membrane potential retains the neuronal state information from all previous timesteps. The spatial input feature \(x\) is only related to the current input, and the temporal input \(h\) is related to the spatial input features at all previous timesteps. For convenience of mathematical expression, when the temporal dynamics inside the neuron need not be made explicit, we directly write the spiking neuron as \(\sigma\). But it should be noted that \(\sigma\) actually has complex internal dynamics. In its entirety, a spiking neuron in Eq. (3) can be denoted as \(\sigma_{x^{1:t-1}}^{t}(x^{t})\), where the temporal input \(h^{t-1}\) in the membrane potential \(u^{t}\) depends on \(x^{1:t-1}\), i.e., the spatial input features from 1 to \(t-1\).
Using Definition 3.1, Lemma 3.2, and the math above, we can determine the probability of differing outputs from the two neurons due to errors in their spatial input features, under varying constraints. Specifically, in Lemma 3.3, it is assumed that the timestep is fixed and the temporal inputs to the two neurons are the same; in Lemma 3.4, we loosen the constraint on the timestep of the two neurons; finally, in Theorem 3.5, we generalize our results to arbitrary timesteps for two spiking layers (\(N\) neurons each layer).
**Lemma 3.3**.: _At a certain timestep \(T\), if the spiking neurons are \(u_{th}\)-Neighborhood-Finite Distribution and the spatial input features of two neurons got an error upperbound \(\epsilon\), and they got the same temporal input \(h^{T-1}\), the probability upperbound of different outputs is proportional to \(\epsilon\). Formally:_
_For two spiking neurons \(\hat{\sigma}^{T}\) and \(\tilde{\sigma}^{T}\), when \(\tilde{h}^{T-1}=\hat{h}^{T-1}\) and \(\hat{u}^{T}=\hat{h}^{T-1}+\hat{x}^{T}\) is a random variable that follows the \(u_{th}\)-Neighborhood-Finite Distribution, if \(\|\tilde{x}^{T}-\hat{x}^{T}\|\leq\epsilon\), then: \(P\left[\hat{\sigma}^{T}(\hat{x}^{T})\neq\tilde{\sigma}^{T}(\tilde{x}^{T})\right]\propto\epsilon\)_
**Lemma 3.4**.: _Suppose the spiking neurons are \(u_{th}\)-Neighborhood-Finite Distribution at timestep \(T\) and the spatial input features of two corresponding spiking neurons got an error upperbound \(\epsilon\) at any timestep, and they got the same temporal input \(h^{0}\), if both spiking neurons have the same output at the first \(T-1\) timesteps, then the probability upperbound is proportional to \(\frac{\epsilon}{1-\beta}\). Formally:_
_For two spiking neurons \(\hat{\sigma}^{T}\) and \(\tilde{\sigma}^{T}\), when \(\tilde{h}^{0}=\hat{h}^{0}\) and \(\hat{u}^{T}=\hat{h}^{T-1}+\hat{x}^{T}\) is a random variable that follows the \(u_{th}\)-Neighborhood-Finite Distribution, if \(\|\tilde{x}^{t}-\hat{x}^{t}\|\leq\epsilon\) and \(\hat{\sigma}^{t}(\hat{x}^{t})=\tilde{\sigma}^{t}(\tilde{x}^{t})\) for \(t=1,2,\cdots,T-1\), then: \(P\left[\hat{\sigma}^{T}(\hat{x}^{T})\neq\tilde{\sigma}^{T}(\tilde{x}^{T})\right]\propto\frac{\epsilon}{1-\beta}\)._
**Theorem 3.5**.: _Suppose the spiking layers satisfy the \(u_{th}\)-Neighborhood-Finite Distribution condition at timestep \(t\), the inputs of the two corresponding spiking layers of width \(N\) have an error upper bound \(\epsilon\) for each element of the spatial input feature vector at every timestep, and both receive the same initial temporal input vector \(\mathbf{h^{0}}\). If there is no differing spiking output at the first \(T-1\) timesteps, then the probability upper bound of different outputs at timestep \(T\) is proportional to \(N\frac{\epsilon}{1-\beta}\). Formally:_
_For two spiking layers \(\hat{\sigma}^{T}\) and \(\tilde{\sigma}^{T}\), when \(\tilde{\mathbf{h}}^{0}=\hat{\mathbf{h}}^{0}\) and \(\hat{\mathbf{u}}^{\mathbf{t}}=\hat{\mathbf{h}}^{\mathbf{t}-1}+\hat{\mathbf{x}}^{\mathbf{t}}\) is a random variable following the \(u_{th}\)-Neighborhood-Finite Distribution, if \(\|\tilde{x}^{t}_{k}-\hat{x}^{t}_{k}\|\leq\epsilon\left(k=1,2,\cdots,N;\,t=1,2,\cdots,T\right)\) and \(\hat{\sigma}^{t}(\hat{\mathbf{x}}^{\mathbf{t}})=\tilde{\sigma}^{t}(\tilde{\mathbf{x}}^{\mathbf{t}})\) for \(t=1,2,\cdots,T-1\), then: \(P\left[\hat{\sigma}^{T}(\hat{\mathbf{x}}^{\mathbf{T}})\neq\tilde{\sigma}^{T}(\tilde{\mathbf{x}}^{\mathbf{T}})\right]\propto N\frac{\epsilon}{1-\beta}\)._
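The proportionality claim can be sanity-checked empirically. The following is a minimal Monte-Carlo sketch for the single-neuron case of Lemma 3.3; the Gaussian spatial-input law and the worst-case perturbation \(\tilde{x}=\hat{x}+\epsilon\) are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
u_th, h, n = 1.0, 0.3, 1_000_000          # shared temporal input h^{T-1}

for eps in (0.2, 0.1, 0.05):
    x_hat = rng.normal(0.5, 1.0, n)        # assumed spatial-input distribution
    fire_hat = h + x_hat >= u_th           # output of the unperturbed neuron
    fire_tld = h + x_hat + eps >= u_th     # worst-case perturbed neuron
    flip = (fire_hat != fire_tld).mean()
    print(eps, flip, flip / eps)           # flip/eps stays roughly constant
```

An output flip occurs exactly when the membrane potential falls in \([u_{th}-\epsilon,u_{th})\), so the flip frequency scales with \(\epsilon\) times the density near the threshold, as the theorem predicts.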
## 4 Proving Lottery Ticket Hypothesis for Spiking Neural Networks
SLTH states that a large randomly initialized neural network contains a sub-network that is equivalent to any well-trained network. In this work, the large initialized network, the sub-network, and the well-trained network are named the _large network_ \(G\), the _equivalent (pruned) network_ \(\tilde{G}\), and the _target network_ \(\hat{G}\), respectively. We use the same accents to distinguish the weights and other notation of these networks.
We generally follow the theoretical proof approach in [12]. **Note, the only additional assumption we make (considering the characteristics of SNNs) is that the membrane potentials follow the class of distributions defined in Definition 3.1, which is easy to satisfy.** Specifically, the whole proof in this work is divided into two parts: approximation of the linear transformation (Fig. 1**c**) and approximation of the spatio-temporal nonlinear transformation (Fig. 1**d**), each of which contains three steps. The first part is roughly similar to the proof of LTH in existing ANN work [12]. The second part requires our spatio-temporal probabilistic modeling approach of Section 3, i.e., Theorem 3.5.
**Approximation of linear transformation.** Existing methods for proving LTH in ANNs all first approximate a single weight between two neurons by adding a virtual layer (introducing redundant weights), i.e., Step 1. Different virtual-layer modeling methods [12, 15, 16] affect the width of the network in the final conclusion. All previous proofs introduce two neurons in the virtual layer for the approximation. In this work, we exploit only one spiking neuron in the virtual layer, given the binary firing nature of spiking neurons. We then approximate the weights between one neuron and one layer of neurons (Step 2), and the weights between two layers of neurons (Step 3). Steps 2 and 3 are the same as in previous works.
_Step 1: Approximate a single weight by a spiking neuron._ As shown in Fig. 2, a connection weight in SNNs can be approximated via an additional spiking neuron (i.e., a new virtual layer with only one neuron) with two weights, one connecting in and one connecting out. We set the two weights of the virtual neuron as \(v\) and \(\tilde{w}\), and the target connection weight is \(\hat{w}\). Consequently, the equivalent function for the target weight can be written as \(g(s)=\tilde{w}\tilde{\sigma}(vs)\), where the temporal input of the virtual neuron does not need to be considered. Once \(v\geq u_{th}\), the virtual neuron fires regardless of the temporal input if the spatial input is \(s=1\), and does not fire if \(s=0\). Thus, the output of the virtual neuron is independent of the temporal input, which is \(V_{reset}\) or the decayed membrane potential (neither of which can affect the firing of the virtual neuron). We have: 1) target weight connection: \(\hat{g}(s)=\hat{w}s\); 2) equivalent structure: \(\tilde{g}(s)=\tilde{w}\tilde{\sigma}(vs)\). If the weight \(v\) satisfies \(v\geq u_{th}\), the error between the target weight connection and the equivalent structure is
\[\left\|\tilde{g}(s)-\hat{w}s\right\|=\left\|\tilde{w}s-\hat{w}s\right\|\leq \left\|\tilde{w}-\hat{w}\right\|. \tag{8}\]
Once the number of initialized weights in the large network \(G\) is large enough, the weights \(\tilde{w}\) and \(v\) can be chosen to make the error converge to 0 in probability. Thus, only one spiking neuron is needed for the virtual layer. Formally, we have Lemma C.1.
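A minimal sketch of Step 1 follows (the numeric values are hypothetical; \(\tilde{\sigma}\) is a Heaviside firing with threshold \(u_{th}\), so any \(v\geq u_{th}\) reproduces the binary input exactly):

```python
import numpy as np

u_th, w_hat = 1.0, 0.37                       # target weight (example value)
# choose v >= u_th and pick w_tilde close to w_hat from random candidates
cands = np.random.default_rng(1).uniform(-1, 1, 10_000)
v = 1.5                                        # any v >= u_th works
w_tilde = cands[np.argmin(np.abs(cands - w_hat))]

sigma = lambda u: float(u >= u_th)             # spiking nonlinearity
g = lambda s: w_tilde * sigma(v * s)           # equivalent structure, Eq. (8)
for s in (0.0, 1.0):                           # binary spiking input
    print(s, g(s), w_hat * s, abs(g(s) - w_hat * s))  # error <= |w~ - w^|
```

With more random candidates the best \(\tilde{w}\) gets arbitrarily close to \(\hat{w}\), which is exactly the probability-convergence argument behind Lemma C.1.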
_Step 2: Layer weights approximation._ Based on Lemma C.1, we can extend the conclusion to the case where there is one neuron in layer \(l\) and several neurons in layer \(l-1\). The lemma is detailed in Lemma C.2.
_Step 3: Layer to layer approximation._ Finally, we get an approximation between the two layers of spiking neurons. See Lemma C.3 for proof.
**Lemma 4.1**.: _Layer to Layer Approximation. Fix the weight matrix \(\hat{W}\in[-\frac{1}{\sqrt{N}},\frac{1}{\sqrt{N}}]^{N\times N}\) which is the connection in the target network between a layer of spiking inputs and the next layer of neurons. The equivalent structure is \(k\) spiking neurons with \(k\times N\) weights \(\mathbf{V}\in\mathbf{R}^{k\times N}\) connecting in and \(\tilde{\mathbf{W}}\in\mathbf{R}^{N\times k}\) connecting out. All the weights \(\tilde{w}_{ij}\) and \(v_{ij}\) are randomly initialized with the uniform distribution \(\mathrm{U}[-1,1]\) and i.i.d. \(\mathbf{B}\in\{0,1\}^{N\times k}\) is the mask for matrix \(\tilde{\mathbf{W}}\), \(\sum_{i,j}\left\|B_{ij}\right\|_{0}\leq N^{2},\sum_{j}\left\|B_{ij}\right\|_{0}\leq N\). Then, let the function of the equivalent structure be \(\tilde{g}(\mathbf{s})=(\tilde{\mathbf{W}}\odot\mathbf{B})\tilde{\sigma}(\mathbf{V}\mathbf{s})\), where the input spiking \(\mathbf{s}\) is a vector with \(\mathbf{s}\in\{0,1\}^{N}\). Then, \(\forall\,0<\delta\leq 1,\ \exists\,\epsilon>0\): when \(k\geq N\lceil\frac{N}{C_{th}\epsilon}\log\frac{N}{\delta}\rceil\) and \(C_{th}=\frac{1-u_{th}}{2}\), there exists a mask matrix \(\mathbf{B}\) such that \(\left\|[\tilde{g}(\mathbf{s})]_{i}-[\hat{W}\mathbf{s}]_{i}\right\|\leq\epsilon\) w.p. at least \(1-\delta\)._
**Approximation of spatio-temporal nonlinear transformation.** Since the activation functions in ANNs satisfy the Lipschitz condition, the input error can control the output error of the activation function. In contrast, the output error is not governed by the input error when the membrane potential of a spiking neuron is at a step position. To overcome this limitation, in Step 4 we combine the probabilistic modeling method of Theorem 3.5 to analyze the probability of consistent output
Figure 2: The target weight \(\hat{w}\) is simulated in the pruned network \(\tilde{G}\) by only one intermediate spiking neuron (this work), since the spiking neuron only outputs 0 or 1. In contrast, previous ANN works required two intermediate artificial neurons to simulate the target weight.
(only 0 or 1) of spiking layers. Then, in Step 5, we generalize the conclusions of Step 4 to the entire network regardless of the temporal input. Finally, we consider the dynamics of SNNs in the temporal dimension in Step 6. Specifically, we can denote an SNN as follows:
\[G^{t}(\mathbf{S})=G^{t,L}\circ G^{t,L-1}\circ\cdots\circ G^{t,2}\circ G^{t,1}(\mathbf{S}), \tag{9}\]
where \(\mathbf{S}\in\{0,1\}^{N\times T}\) is the spatial input of the entire network at all timesteps. \(G^{t,l}(\cdot)\) represents the function of network \(G\) at the \(t\)-th timestep and \(l\)-th layer. When the specific temporal input is considered, it can be written as:
\[G^{t,l}(\mathbf{S})=\sigma_{\mathbf{W}^{l}\mathbf{S}^{1:t-1,l}}^{t,l}(\mathbf{W}^{l}\mathbf{S}^{t, l-1}), \tag{10}\]
where the spatial input at the \(t\)-th timestep is \(\mathbf{S}^{t,l-1}\in\{0,1\}^{N}\), \(\sigma\) denotes the activation function of the spiking layer, and its subscript \(\mathbf{S}^{1:t-1,l}\) indicates that the membrane potential at the \(t\)-th timestep is related to the spatial inputs at all previous timesteps. We are not concerned with these details in the probabilistic modeling, thus they are omitted in the remaining sections where they do not cause confusion.
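In code, the composition in Eqs. (9)-(10) is simply a per-timestep loop over spiking layers, each carrying its own membrane state. A minimal sketch (hard reset, shared leak \(\beta\), and random placeholder weights are our assumptions):

```python
import numpy as np

def snn_forward(S, Ws, beta=0.5, u_th=1.0):
    """S: {0,1}^(N x T) spatial input; Ws: list of L weight matrices."""
    N, T = S.shape
    h = [np.zeros(W.shape[0]) for W in Ws]   # temporal state per layer
    out = np.zeros((Ws[-1].shape[0], T))
    for t in range(T):                       # G^t = G^{t,L} o ... o G^{t,1}
        s = S[:, t]
        for l, W in enumerate(Ws):
            u = h[l] + W @ s                 # membrane potential, Eq. (10)
            s = (u >= u_th).astype(float)    # binary spiking output of layer l
            h[l] = beta * u * (1.0 - s)      # leak + reset
        out[:, t] = s
    return out

rng = np.random.default_rng(0)
S = (rng.random((8, 4)) < 0.5).astype(float)
print(snn_forward(S, [rng.normal(0, 0.5, (8, 8)) for _ in range(2)]))
```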
_Step 4: Layer spiking activation approximation._ Compared to Step 3, we here include the spiking activations to evaluate the probability that two different spiking layers (same input, different weights) have the same output. See Lemma C.4 for the proof.
**Lemma 4.2**.: _Layer Spiking Activation Approximation. Fix the weight matrix \(\mathbf{\hat{W}}\in[-\frac{1}{\sqrt{N}},\frac{1}{\sqrt{N}}]^{N\times N}\) which is the connection in the target network between a layer of spiking inputs and the next layer of neurons. The equivalent structure is \(k\) spiking neurons with \(k\times N\) weights \(\mathbf{V}\in\mathbf{R}^{k\times N}\) connecting in and \(\mathbf{\tilde{W}}\in\mathbf{R}^{N\times k}\) connecting out. All the weights \(\tilde{w}_{ij}\) and \(v_{ij}\) are randomly initialized with the uniform distribution \(\mathrm{U}[-1,1]\) and i.i.d. \(\mathbf{B}\in\{0,1\}^{N\times k}\) is the mask for matrix \(\mathbf{\tilde{W}}\), \(\sum_{i,j}\left\|B_{ij}\right\|_{0}\leq N^{2},\sum_{j}\left\|B_{ij}\right\|_{0}\leq N\). Then, let the function of the equivalent structure be \(\tilde{g}(\mathbf{s})=\tilde{\sigma}((\mathbf{\tilde{W}}\odot\mathbf{B})\tilde{\sigma}(\bm{V}\mathbf{s})),\) where the input spiking \(\mathbf{s}\) is a vector with \(\mathbf{s}\in\{0,1\}^{N}\). \(C\) is a constant depending on the supremum probability density of the dataset of the network. Then, \(\forall\delta,\exists\epsilon\): when \(k\geq N^{2}\lceil\frac{N}{C_{th}\epsilon}\log\frac{N^{2}}{\delta-NC\epsilon}\rceil\), there exists a mask matrix \(\mathbf{B}\) such that \(\left\|\tilde{g}(\mathbf{s})-\hat{\sigma}(\mathbf{\hat{W}}\mathbf{s})\right\|=0\) w.p. at least \(1-\delta\)._
_Step 5: Network approximation (\(T=1\))._ The lemma is generalized to the whole SNN in Lemma C.5.
_Step 6: Network approximation (\(T>1\))._ If the outputs of the two SNNs at the first \(T-1\) timesteps are consistent, and we assume that the error of the spatial input features at all timesteps has an upper bound, then with a certain probability the outputs of the two SNNs at timestep \(T\) are also the same. See the detailed proof in Theorem C.6; the result is stated as Theorem 4.3 below:
**Theorem 4.3**.: _All Steps Approximation. Fix the weight matrix \(\mathbf{\hat{W}}^{l}\in[-\frac{1}{\sqrt{N}},\frac{1}{\sqrt{N}}]^{N\times N}\) which is the connection in the target network between a layer of spiking inputs and the next layer of neurons. The equivalent structure is \(k\) spiking neurons with \(k\times N\) weights \(\mathbf{V}^{l}\in\mathbf{R}^{k\times N}\) connecting in and \(\mathbf{\tilde{W}}^{l}\in\mathbf{R}^{N\times k}\) connecting out. All the weights \(\tilde{w}_{ij}^{l}\) and \(v_{ij}^{l}\) are randomly initialized with the uniform distribution \(\mathrm{U}[-1,1]\) and i.i.d. \(\mathbf{B}^{l}\in\{0,1\}^{k\times N}\) is the mask for matrix \(\mathbf{V}^{l}\), \(\sum_{i,j}\left\|B_{ij}^{l}\right\|_{0}\leq N^{2},\sum_{j}\left\|B_{ij}^{l}\right\|_{0}\leq N\). Then, let the function of the equivalent network at timestep \(t\) be \(\tilde{G}^{t}(\mathbf{S})=\tilde{G}^{t,L}\circ\tilde{G}^{t,L-1}\circ\cdots\circ\tilde{G}^{t,1}(\mathbf{S})\) with \(\tilde{G}^{t,l}=\tilde{\sigma}^{t}((\mathbf{\tilde{W}}^{l}\odot\mathbf{B}^{l})\tilde{\sigma}^{t}(\mathbf{V}^{l}\mathbf{S}^{t}))\), where the input spiking \(\mathbf{S}\) is a tensor with \(\mathbf{S}\in\{0,1\}^{N\times T}\). And the target network at timestep \(t\) is \(\hat{G}^{t}(\mathbf{S})=\hat{G}^{t,L}\circ\hat{G}^{t,L-1}\circ\cdots\circ\hat{G}^{t,1}(\mathbf{S})\), where \(\hat{G}^{t,l}(\mathbf{S})=\hat{\sigma}^{t}(\hat{W}^{l}\mathbf{S}^{t})\), \(l=1,2,\cdots,L,\ t=1,2,\cdots,T\). \(C\) is a constant depending on the supremum probability density of the dataset of the network. Then, \(\forall\,0<\delta\leq 1,\ \exists\,\epsilon>0\): when \(k\geq N^{2}\lceil\frac{N}{C_{th}\epsilon}\log\frac{N^{2}L}{\delta-NCLT\epsilon}\rceil\), there exists a mask matrix \(\mathbf{B}\) such that \(\left\|\tilde{G}(\mathbf{S})-\hat{G}(\mathbf{S})\right\|=0\) w.p. at least \(1-\delta\)._
## 5 Searching Winning Tickets from SNNs without Weight Training
By applying the top-\(k\%\) sub-network searching (edge-popup) algorithm [11], we empirically verify SLTH as a complement to the theoretical proof in Section 4. Spiking neurons do not satisfy the Lipschitz condition, but the ANN technique can still be employed since SNNs use surrogate gradients during BP training [36]. We first briefly introduce the search algorithm, then show the results.
The sub-network search algorithm first proposed in [11] assigns a score to every weight as a criterion for choosing the sub-network topology. In each forward propagation step, the weights with the top-\(k\%\) scores are kept while the others are masked by \(0\). This means the network used for classification is actually a sub-network of the original randomly initialized network. During BP training, the scores are randomly initialized in the same way as the weights at the beginning, and then only the gradients of the scores are updated while the gradients of the weights are not recorded (the weights are not changed). The updating rule of the scores can be formally expressed as follows:
\[s_{uv}\gets s_{uv}+\alpha\frac{\partial\mathcal{L}}{\partial\mathcal{I}_ {v}}S_{u}w_{uv}, \tag{11}\]
where \(s_{uv}\) and \(w_{uv}\) denote the score and the corresponding weight connecting spiking neuron \(u\) to \(v\), respectively, \(S_{u}\in\{0,1\}\) indicates the output of neuron \(u\), \(\mathcal{L}\) and \(\mathcal{I}_{v}\) are the loss function value of the network and the input value of neuron \(v\), and \(\alpha\) is the learning rate.
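A minimal sketch of one edge-popup update for a single linear spiking layer follows (the function and variable names are ours; in practice the loss gradient comes from surrogate-gradient backpropagation):

```python
import numpy as np

def edge_popup_step(scores, W, S_in, dL_dI, k_frac=0.5, lr=0.1):
    """One edge-popup update: forward with the top-k% mask, then score update."""
    thresh = np.quantile(scores, 1.0 - k_frac)
    mask = (scores >= thresh).astype(float)    # sub-network actually used
    I_v = (W * mask) @ S_in                    # masked input to post-neurons
    # Eq. (11): s_uv <- s_uv + lr * dL/dI_v * S_u * w_uv (weights stay frozen)
    scores = scores + lr * np.outer(dL_dI, S_in) * W
    return scores, mask, I_v

rng = np.random.default_rng(0)
W, scores = rng.uniform(-1, 1, (4, 6)), rng.random((4, 6))
S_in = (rng.random(6) < 0.5).astype(float)     # binary pre-synaptic spikes
scores, mask, _ = edge_popup_step(scores, W, S_in, dL_dI=rng.normal(size=4))
print(mask.mean())                             # ~k_frac of the weights kept
```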
We apply edge-popup to SNN-VGG-16 and Res-SNN-19. We make the weights of all layers maskable (including the first and last layers). The sparsity hyper-parameter \(k\%\) is varied from 0.1 to 0.9 in steps of 0.2. The datasets used for verification are CIFAR-10/100. The network structures and other experimental details are given in Appendix E. As shown in Fig. 3, the existence of good sub-networks is empirically verified, which conforms closely to our theoretical analysis. The additional timesteps and discontinuous activation complicate the sub-network search problem in SNNs, but good sub-networks still exist without any weight training and can be found by specific algorithms.
## 6 Pruning Criterion Based on Probabilistic Modeling
**Discussion.** For two spiking neurons, layers, or networks that have the same input but different weights, the proposed probabilistic modeling method can give the probability that the outputs (that is, the spiking firing) of both are the same. We convert the perturbation brought by pruning under the same input into the error of spatial input features, and then analyze its impact on spiking firing. Then, we prove that the lottery ticket hypothesis also holds in SNNs.
The potential of probabilistic modeling goes far beyond that. Probabilistic modeling prompts us to rethink whether the existing pruning methods in SNNs are reasonable, since most methods directly inherit the ideas and methods of ANN pruning. Firstly, from the view of the linear transformation, _how should the importance of weights in SNNs be measured?_ The relationship between the inner state of the artificial neuron before activation and the input is usually not considered in ANN pruning. But we have to take this into account in binary-activated SNN pruning, because the membrane potential is related to both the weights and the inputs. For instance, when the input spike frequency is 0, no matter how large the weight is, it will not affect the result of the linear transformation (Fig. 4). Secondly, from the view of the nonlinear transformation, _how does pruning affect the output of SNNs?_ As shown in Fig. 1**b**, when the membrane potential is in different intervals, the probability that the output of the spiking neuron changes is also different. Thus, we study this important question theoretically using probabilistic modeling.
Moreover, we can assume that the weights of the network remain unchanged and add a disturbance to the input, then exploit probabilistic modeling to analyze the impact of input perturbations on
Figure 4: The importance of a weight depends on both the magnitude of the weight and the firing of the pre-synaptic neuron.
Figure 3: The horizontal and vertical axes represent sparsity and accuracy, respectively. The solid line is the untrained sub-network. The dashed line is the trained large network (sparsity \(=0\%\)).
the output, i.e., analyze the robustness [45] of SNNs in a new way. Similarly, probabilistic modeling can also be used to analyze other compression methods for SNNs, such as quantization [46], tensor decomposition [47], etc. The perturbations brought about by these compression methods can be converted into errors in the membrane potential, which in turn affect the firing of spiking neurons.
**Pruning criterion design.** We design a new criterion to prune weights according to their probabilities of affecting the firing of spiking neurons. Based on Theorem 3.5, the influence of each weight on the output of the spiking neuron has a probability upper bound \(P\), which can be estimated as:
\[P\approx E(|u^{\prime}-u|)\mathcal{N}(0|\mu-u_{th},var), \tag{12}\]
where \(E(|u^{\prime}-u|)\) is the expectation of the error in the membrane potential brought about by pruning. In this work, it is written as:
\[E(|u^{\prime}-u|)=\frac{E_{act}[|w|]\left|\gamma\right|}{\sqrt{\sigma_{\mathcal{ B}}^{2}+\epsilon}}, \tag{13}\]
where \(E_{act}[|w|]\) is the expectation of \(|w|\) when the weight is activated (i.e., the pre-synaptic spiking neuron outputs a spike). \(\gamma\) and \(\sigma_{\mathcal{B}}\) are a pair of parameters in Batch Normalization [48], which also affect the linear transformation of the weights, thus we incorporate them into Eq. (13). The detailed derivation of Eq. (13) can be found in Appendix D. Note, to keep with the notation of classic BN, some notation is overloaded here, but only for Eq. (12) and Eq. (13). Next, following [43; 44], we suppose the membrane potential follows the Gaussian distribution \(\mathcal{N}(\mu,var)\). Consequently, \(\mathcal{N}(0|\mu-u_{th},var)\) represents the probability density when the membrane potential is at the threshold \(u_{th}\). Finally, each weight has a probability \(P\), and we prune according to the size of \(P\).
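A minimal sketch of the criterion in Eqs. (12)-(13) for one weight tensor follows (all names are ours; as an assumption we approximate \(E_{act}[|w|]\) by \(|w|\) weighted with the pre-synaptic firing rate, and the Gaussian parameters of the membrane potential would be estimated from training batches):

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def pruning_scores(W, fire_rate, gamma, sigma_B, mu_u, var_u, u_th=1.0, eps=1e-5):
    """P ~ E(|u' - u|) * N(0 | mu - u_th, var) per weight, Eq. (12)."""
    # Eq. (13): expected membrane-potential error if this weight is pruned
    E_err = np.abs(W) * fire_rate * np.abs(gamma) / np.sqrt(sigma_B ** 2 + eps)
    # density of the membrane potential at the firing threshold
    return E_err * gaussian_pdf(0.0, mu_u - u_th, var_u)

rng = np.random.default_rng(0)
P = pruning_scores(W=rng.normal(0, 0.3, 100), fire_rate=rng.random(100),
                   gamma=0.9, sigma_B=1.1, mu_u=0.4, var_u=0.8)
keep = P >= np.quantile(P, 0.5)   # prune the half with the smallest P
```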
**Experiments.** Since our subject is to theoretically prove LTH in SNNs, we test the proposed new criterion using LTH-related methods. Note, Eq. (12) is a general pruning criterion and can also be incorporated into traditional non-LTH SNN pruning methods. Specifically, in this work we employ the LTH-based Iterative Magnitude Pruning (IMP) method [10], whose criterion is to prune weights with small absolute values (algorithm details are given in Appendix F). We then change the criterion to pruning according to the size of \(P\) in Eq. (12).
We re-implement the pipeline network (Res-SNN-19 proposed in [43]) and the IMP pruning method on the CIFAR-10/100 [49] datasets. We use the official code provided by the authors of [21] for the whole pruning process. Then, we perform rigorous ablation experiments without bells and whistles, i.e., all experimental settings are the same; the only change is that the pruning criterion is set to Eq. (12). This is IMP pruning based on Spiking Firing, thus we name it SF-IMP-1 (Appendix F). Moreover, the encoding layer (the first convolutional layer) in SNNs has a greater impact on task performance, and we can also allow the pruned weights of the encoding layer to be reactivated [50], called SF-IMP-2 (Appendix F).
The experimental results are shown in Fig. 5. First, the two approaches perform marginally differently on CIFAR-10/100, with SF-IMP-2 outperforming on CIFAR-10 and SF-IMP-1 performing better on CIFAR-100. In particular, compared with the vanilla IMP algorithm on CIFAR-100, SF-IMP-1 performs very well at all pruning sparsities. But on CIFAR-10, when the sparsity is high, vanilla IMP looks better. One possible reason is that the design of Eq. (12) still has room for improvement. These preliminary results demonstrate the effectiveness of the pruning criterion designed based on probabilistic modeling, and also lay the foundation for the subsequent design of more advanced pruning algorithms.
Figure 5: Observations from Iterative Magnitude Pruning (IMP) with the proposed pruning criterion \(P\) in Eq. (12).
## 7 Conclusions
In this work, we aim to theoretically prove that the Lottery Ticket Hypothesis (LTH), which has had a wide influence in traditional ANNs and is often used for pruning, also holds in SNNs. In particular, spiking neurons employ binary discrete activation functions and have complex spatio-temporal dynamics, which differ from those of traditional ANNs. To address these challenges, we propose a new probabilistic modeling method for SNNs, model the impact of pruning on spiking firing, and then prove both theoretically and experimentally that LTH\({}^{4}\) holds in SNNs. We design a new pruning criterion from the probabilistic modeling view and test it in an LTH-based pruning method with promising results. Both the probabilistic modeling and the pruning criterion are general: the former can also be used to analyze the robustness of SNNs and other network compression methods, and the latter can also be exploited in non-LTH pruning methods. In conclusion, we have for the first time theoretically established the link between membrane potential perturbations and spiking firing through the proposed novel probabilistic modeling approach, and we believe that this work will provide new inspiration for the field of SNNs.
Footnote 4: Specifically and strictly, strong LTH. We get a sub-network that performs well without any training.
|
2308.05423 | On the Stability and Convergence of Physics Informed Neural Networks | Physics Informed Neural Networks is a numerical method which uses neural
networks to approximate solutions of partial differential equations. It has
received a lot of attention and is currently used in numerous physical and
engineering problems. The mathematical understanding of these methods is
limited, and in particular, it seems that a consistent notion of stability is
missing. Towards addressing this issue we consider model problems of partial
differential equations, namely linear elliptic and parabolic PDEs. We consider
problems with different stability properties, and problems with time discrete
training. Motivated by tools of nonlinear calculus of variations we
systematically show that coercivity of the energies and associated compactness
provide the right framework for stability. For time discrete training we show
that if these properties fail to hold then methods may become unstable.
Furthermore, using tools of $\Gamma-$convergence we provide new convergence
results for weak solutions by only requiring that the neural network spaces are
chosen to have suitable approximation properties. | Dimitrios Gazoulis, Ioannis Gkanis, Charalambos G. Makridakis | 2023-08-10T08:35:55Z | http://arxiv.org/abs/2308.05423v1 | # On the stability and convergence of Physics Informed Neural Networks
###### Abstract
Physics Informed Neural Networks is a numerical method which uses neural networks to approximate solutions of partial differential equations. It has received a lot of attention and is currently used in numerous physical and engineering problems. The mathematical understanding of these methods is limited, and in particular, it seems that a consistent notion of stability is missing. Towards addressing this issue we consider model problems of partial differential equations, namely linear elliptic and parabolic PDEs. We consider problems with different stability properties, and problems with time discrete training. Motivated by tools of nonlinear calculus of variations we systematically show that coercivity of the energies and associated compactness provide the right framework for stability. For time discrete training we show that if these properties fail to hold then methods may become unstable. Furthermore, using tools of \(\Gamma-\)convergence we provide new convergence results for weak solutions by only requiring that the neural network spaces are chosen to have suitable approximation properties.
## 1 Introduction
### PDEs and Neural Networks
In this work we consider model problems of partial differential equations (PDEs) approximated by deep neural learning (DNN) algorithms. In particular we focus on linear elliptic and parabolic PDEs and Physics Informed Neural Networks, i.e., algorithms where the discretisation is based on the minimisation of the \(L^{2}\) norm of the residual over a set of neural networks with a given architecture. Standard tools of numerical analysis assessing the quality and performance of an algorithm are based on the notions of stability and approximability. Typically, in problems arising in scientific applications another important algorithmic characteristic is the preservation of key qualitative properties of the simulating system at the discrete level. In important classes of problems, stability and structural consistency are often linked. Our aim is to introduce a novel notion of stability for the above DNN algorithms approximating solutions of PDEs. In addition, we show convergence provided that the set of DNNs has the right approximability properties and the training of the algorithm produces stable approximations.
In the area of machine learning for models described by partial differential equations, at present, there is intense activity at multiple fronts: developing new methods for solving differential equations using neural networks, designing special neural architectures to approximate families of differential operators (operator learning), combination of statistical and machine learning techniques for related problems in uncertainty quantification and statistical functional inference. Despite the
progress on all these problems in recent years, basic mathematical, and hence algorithmic, understanding is still under development.
Partial Differential Equations (PDEs) have proven to be an area of very important impact in science and engineering, not only because many physical models are described by PDEs, but crucially because methods and techniques developed in this field have contributed to scientific development in several areas where very few scientists would have guessed this possible. Numerical solution of PDEs utilising neural networks is at an early stage and has received a lot of attention. Such methods have significantly different characteristics compared to more traditional methods, and have been proved quite effective, e.g., in solving problems in high dimensions, or when methods combining statistical approaches and PDEs are needed. Physics Informed Neural Networks is one of the most successful numerical methods which uses neural networks to approximate solutions of PDEs, see e.g., [39], [33]. Residual based methods were considered in [29], [6], [40], [46] and their references. Other neural network methods for differential equations and related problems include, for example, [41], [18], [27], [48], [12], [20], [23]. The term _Physics Informed Neural Networks_ was introduced in the highly influential paper [39]. It was then used extensively in numerous physical and engineering problems; for a broader perspective of the related methodologies and the importance of the NN methods for scientific applications, see e.g., [26]. Despite progress at some fronts, see [46], [3], [44], [45], [35], [36], the mathematical understanding of these methods is limited. In particular, it seems that a consistent notion of stability is missing. Stability is an essential tool in the a priori error analysis and convergence of the algorithms [30]. It provides valuable information for fixed values of the discretisation parameters, i.e., in the pre-asymptotic regime, and it is well known that unstable methods have poor algorithmic performance. On the other hand, stability is a problem dependent notion and not always easy to identify. Towards addressing this issue we consider model problems of partial differential equations, namely linear elliptic and parabolic PDEs. We consider PDEs with different stability properties, and parabolic problems with time discrete training. Since, apparently, the training procedure influences the behaviour of the method in an essential manner, but, on the other hand, complicates the analysis considerably, we have chosen as a first step in this work to consider time discrete only training. Motivated by tools of nonlinear calculus of variations we systematically show that coercivity of the energies and associated compactness provide the right framework for stability. For time discrete training we show that if these properties fail to hold then methods become unstable and it seems that they do not converge. Furthermore, using tools of \(\Gamma-\)convergence we provide new convergence results for weak solutions by only requiring that the neural network spaces are chosen to have suitable approximation properties.
### Model problems and their Machine Learning approximations
In this work we consider linear elliptic and parabolic PDEs. To fix notation, we consider simple boundary value problems of the form,
\[\begin{cases}L\,u=f&\text{in}\ \ \Omega\\ u=0&\text{on}\ \partial\Omega\end{cases} \tag{1}\]
where \(u:\Omega\subset\mathbb{R}^{d}\to\mathbb{R},\ \Omega\) is an open, bounded set with smooth enough boundary, \(f\in L^{2}(\Omega)\) and \(L\) a self-adjoint elliptic operator of the form
\[\begin{split} Lu:=-\sum_{1\leq i,j\leq d}\big{(}a_{ij}u_{x_{i}} \big{)}_{x_{j}}+cu\\ \text{where}\ \ \sum_{i,j}a_{ij}(x)\xi_{i}\xi_{j}\geq\theta|\xi|^{2} \ \ \text{for any}\ \ x\in\Omega\ \ \text{and any}\ \ \xi\in\mathbb{R}^{n},\ \ \ \text{for some}\ \ \theta>0\end{split} \tag{2}\]
also, \(a_{ij}=a_{ji}\in C^{1}(\overline{\Omega}),\ b_{i},\ c\in L^{\infty}(\Omega)\) and hence bounded in \(\overline{\Omega}.\) Further assumptions on \(L\) will be discussed in the next sections. Dirichlet boundary conditions were selected for simplicity.
The results of this work can be extended to other boundary conditions with appropriate technical modifications.
We shall study the corresponding parabolic problem as well. We use the compact notation \(\Omega_{T}=\Omega\times(0,T],\)\(\partial\Omega_{T}=\partial\Omega\times(0,T]\) for some fixed time \(T>0.\) We consider the initial-boundary value problem
\[\begin{cases}u_{t}+Lu=f,&\text{in}\;\;\Omega_{T},\\ u=0,&\text{on}\;\;\partial\Omega\times(0,T],\\ u=u^{0},&\text{in}\;\;\Omega\,,\end{cases} \tag{3}\]
where \(f\in L^{2}(\Omega_{T}),\;u^{0}\in H^{1}_{0}(\Omega)\) and \(L\) is as in (2). In the sequel we shall use the compact operator notation \(\mathscr{L}\) for either \(u_{t}+Lu\) or \(Lu\) for the parabolic or the elliptic case correspondingly. The associated energies used will be the \(L^{2}-\)residuals
\[\mathcal{E}(v)=\int_{\Omega_{D}}|\mathscr{L}v-f|^{2}\mathrm{d}\overline{x}+\, \mu\int_{\Omega}|v-u^{0}|^{2}\,\mathrm{d}x+\tau\,\int_{\partial\Omega_{T}}|v| ^{2}\,\mathrm{d}\overline{S} \tag{4}\]
defined over smooth enough functions and domains \(\Omega_{D}\) being \(\Omega_{T}\) or \(\Omega\) (with measures \(d\overline{x}\) ) for the parabolic or the elliptic case correspondingly. Clearly, the coefficient \(\mu\geq 0\) of the initial condition is set to zero in the elliptic case.
It is typical to consider regularised versions of \(\mathcal{E}(v)\) as well. Such functionals have the form
\[\mathcal{E}_{reg}(v)=\mathcal{E}(v)+\lambda\mathcal{J}(v)\,, \tag{5}\]
where the regularisation parameter \(\lambda=\lambda_{reg}>0\) is in principle small and \(\mathcal{J}(v)\) is an appropriate functional (often a power of a semi-norm) reflecting the qualitative properties of the regularisation. The formulation of the method extends naturally to nonlinear versions of the generic operator \(\mathscr{L}v-f,\) whereby in principle both \(\mathscr{L}\) and \(f\) might depend on \(v.\)
### Discrete Spaces generated by Neural Networks
We consider functions \(u_{\theta}\) defined through neural networks. Notice that the structure described is indicative and is presented in order to fix ideas. Our results do not depend on particular neural network architectures but only on their approximation ability. A deep neural network maps every point \(\overline{x}\in\Omega_{D}\) to a number \(u_{\theta}(\overline{x})\in\mathbb{R},\) through
\[u_{\theta}(\overline{x})=C_{L}\circ\sigma\circ C_{L-1}\cdots\circ\sigma\circ C _{1}(\overline{x})\quad\forall\overline{x}\in\Omega_{D}. \tag{6}\]
The process
\[\mathcal{C}_{L}:=C_{L}\circ\sigma\circ C_{L-1}\cdots\circ\sigma\circ C_{1} \tag{7}\]
is in principle a map \(\mathcal{C}_{L}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m^{\prime}}\); in our particular application, \(m=d\) (elliptic case) or \(m=d+1\) (parabolic case) and \(m^{\prime}=1.\) The map \(\mathcal{C}_{L}\) is a neural network with \(L\) layers and activation function \(\sigma.\) Notice that to define \(u_{\theta}(\overline{x})\) for all \(\overline{x}\in\Omega_{D}\) we use the same \(\mathcal{C}_{L},\) thus \(u_{\theta}(\cdot)=\mathcal{C}_{L}(\cdot).\) Any such map \(\mathcal{C}_{L}\) is characterised by the intermediate (hidden) layers \(C_{k},\) which are affine maps of the form
\[C_{k}y=W_{k}y+b_{k},\qquad\text{where }W_{k}\in\mathbb{R}^{d_{k+1}\times d_{k}},b_{k}\in\mathbb{R}^{d_{k+1}}. \tag{8}\]
Here the dimensions \(d_{k}\) may vary with each layer \(k\) and \(\sigma(y)\) denotes the vector with the same number of components as \(y,\) where \(\sigma(y)_{i}=\sigma(y_{i})\,.\) The index \(\theta\) represents collectively all the parameters of the network \(\mathcal{C}_{L},\) namely \(W_{k},b_{k},\)\(k=1,\dots,L.\) The set of all networks \(\mathcal{C}_{L}\) with a given structure (fixed \(L,d_{k},k=1,\dots,L\) ) of the form (6), (8) is called \(\mathcal{N}.\) The total dimension
(total number of degrees of freedom) of \(\mathcal{N},\) is \(\dim\mathcal{N}=\sum_{k=1}^{L}d_{k+1}(d_{k}+1)\,.\) We now define the space of functions
\[V_{\mathcal{N}}=\{u_{\theta}:\Omega_{D}\to\mathbb{R},\text{ where }u_{\theta}( \overline{x})=\mathcal{C}_{L}(\overline{x}),\text{ for some }\mathcal{C}_{L}\in \mathcal{N}\,\}\,. \tag{9}\]
It is important to observe that \(V_{\mathcal{N}}\) is not a linear space. We denote by
\[\Theta=\{\theta\,:u_{\theta}\in V_{\mathcal{N}}\}. \tag{10}\]
Clearly, \(\Theta\) is a linear subspace of \(\mathbb{R}^{\dim\mathcal{N}}.\)
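For instance, a minimal sketch of a function \(u_{\theta}\in V_{\mathcal{N}}\) as in (6)-(8), with a smooth activation as required later in Remark 2 (PyTorch and the specific widths are our assumptions, chosen here for a two-dimensional input):

```python
import torch

class UTheta(torch.nn.Module):
    """u_theta = C_L o sigma o ... o sigma o C_1 with sigma = tanh."""
    def __init__(self, dims=(2, 32, 32, 1)):       # d_1 = 2, ..., d_{L+1} = 1
        super().__init__()
        self.layers = torch.nn.ModuleList(
            torch.nn.Linear(m, n) for m, n in zip(dims[:-1], dims[1:]))

    def forward(self, x):
        for C in self.layers[:-1]:
            x = torch.tanh(C(x))                   # sigma o C_k
        return self.layers[-1](x)                  # affine output layer C_L

u = UTheta()
print(sum(p.numel() for p in u.parameters()))      # dim N = sum d_{k+1}(d_k+1)
```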
### Discrete minimisation on \(V_{\mathcal{N}}\)
Physics Informed Neural networks are based on the minimisation of residual-type functionals of the form (5) over the discrete set \(V_{\mathcal{N}}\,:\)
**Definition 1**: _Assume that the problem_
\[\min_{v\in V_{\mathcal{N}}}\mathcal{E}(v) \tag{11}\]
_has a solution \(v^{\star}\in V_{\mathcal{N}}.\) We call \(v^{\star}\,\) a deep-\(V_{\mathcal{N}}\) minimiser of \(\mathcal{E}\,.\)_
A key difficulty in studying this problem lies in the fact that \(V_{\mathcal{N}}\) is not a linear space. Computationally, this problem can be equivalently formulated as a minimisation problem in \(\mathbb{R}^{\dim\mathcal{N}}\) by considering \(\theta\) as the parameter vector to be identified through
\[\min_{\theta\in\Theta}\mathcal{E}(u_{\theta}). \tag{12}\]
Notice that although (12) is well defined as a discrete minimisation problem, in general, this is non-convex with respect to \(\theta\) even though the functional \(\mathcal{E}(v)\) is convex with respect to \(v.\) This is the source of one of the main technical difficulties in machine learning algorithms.
### Time discrete Training
To implement such a scheme we shall need computable discrete versions of the energy \(\mathcal{E}(u_{\theta}).\) This can be achieved in different ways. A common way is to use an appropriate quadrature for the integrals over \(\Omega_{D}\) (training through quadrature). To fix ideas, such a quadrature requires a set \(K_{h}\) of discrete points \(z\in K_{h}\) and corresponding nonnegative weights \(w_{z}\) such that
\[\sum_{z\in K_{h}}\,w_{z}\,g(z)\approx\int_{\Omega_{D}}\,g(\overline{x})\, \mathrm{d}\overline{x}. \tag{13}\]
Then one can define the discrete functional
\[\mathcal{E}_{Q,h}(v)=\sum_{z\in K_{h}}\,w_{z}\,|\mathscr{L}v(z)-f(z)|^{2}\,\,. \tag{14}\]
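For instance, for the elliptic case with \(L=-\Delta\) on the unit square, a minimal sketch of the quadrature loss (14) via automatic differentiation (PyTorch, a midpoint rule, and the omission of the boundary terms are our assumptions):

```python
import torch

u = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, 1))        # a u_theta in V_N

def loss_Qh(u, X, f, w):
    """E_{Q,h}(v) = sum_z w_z |L v(z) - f(z)|^2 with L = -Laplacian assumed."""
    X = X.detach().requires_grad_(True)
    g = torch.autograd.grad(u(X).sum(), X, create_graph=True)[0]   # grad v
    lap = sum(torch.autograd.grad(g[:, i].sum(), X, create_graph=True)[0][:, i]
              for i in range(X.shape[1]))                          # Delta v
    return (w * (-lap - f(X)) ** 2).sum()

n = 16; h = 1.0 / n
t = (torch.arange(n) + 0.5) * h                   # midpoint training points
X = torch.cartesian_prod(t, t)                    # K_h on the unit square
print(loss_Qh(u, X, lambda X: torch.ones(len(X)), w=h ** 2))
```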
In the case of the parabolic problem a similar treatment should be applied to the term corresponding to the initial condition \(\int_{\Omega}|v-u^{0}|^{2}dx\,.\) Notice that both deterministic and probabilistic (Monte-Carlo, Quasi-Monte-Carlo) quadrature rules are possible, yielding different final algorithms. In this work we shall not consider in detail the influence of the quadrature (and hence of the training) on the stability and convergence of the algorithms. This requires a much more involved technical analysis and will be the subject of future research. However, for studying the notion of stability introduced herein, it will be instrumental to consider a hybrid algorithm where quadrature (and discretisation) is applied only to the time variable of the parabolic problem. This approach is standard in the design and analysis of time-discrete methods for evolution problems, and we believe that it is quite useful in the present setting.
To apply a quadrature in the time integral only we proceed as follows: Let \(0=t^{0}<t^{1}<\cdots<t^{N}=T\) define a partition of \([0,T]\) and \(I_{n}:=(t^{n-1},t^{n}]\), \(k_{n}:=t^{n}-t^{n-1}\). We shall denote by \(v^{m}(\cdot)\) and \(f^{m}(\cdot)\) the values \(v(\cdot,t^{m})\) and \(f(\cdot,t^{m})\). Then we define the discrete in time quadrature by
\[\sum_{n=1}^{N}\,k_{n}\,g(t^{n})\approx\int_{0}^{T}\,g(t)\,\mathrm{d}t. \tag{15}\]
We proceed to define the time-discrete version of the functional (5) as follows
\[\mathcal{G}_{k,IE}(v)=\sum_{n=1}^{N}\,k_{n}\,\int_{\Omega}\big{|}\frac{v^{n}-v^{n-1}}{k_{n}}+Lv^{n}-f^{n}\big{|}^{2}\,\,\mathrm{d}x+\,\int_{\Omega}|v^{0}-u^{0}|^{2}\mathrm{d}x \tag{16}\]
We shall study the stability and convergence properties of the minimisers of the problems:
\[\min_{v\in V_{\mathcal{N}}}\mathcal{G}_{k,IE}(v)\,. \tag{17}\]
It will be interesting to consider a seemingly similar (from the point of view of quadrature and approximation) discrete functional:
\[\mathcal{G}_{k,EE}(v)=\sum_{n=1}^{N}\,k_{n}\,\int_{\Omega}\big{|}\frac{v^{n}-v^{n-1}}{k_{n}}+Lv^{n-1}-f^{n-1}\big{|}^{2}\,\,\mathrm{d}x+\,\sigma\int_{\Omega}|v^{0}-u^{0}|^{2}dx, \tag{18}\]
and compare its properties to those of the functional \(\mathcal{G}_{k,IE}\) and the corresponding \(V_{\mathcal{N}}\) minimisers.
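To make the comparison concrete, the following is a minimal sketch of the two time-discrete losses (16) and (18) for a space-discretised one-dimensional heat equation (our simplification: finite differences in space stand in for \(L\), and the spatial integral becomes a mesh-weighted sum; all names are ours):

```python
import numpy as np

def loss_IE(V, A, F, k, dx, u0):
    """G_{k,IE}: residual (v^n - v^{n-1})/k + A v^n - f^n, Eq. (16)."""
    r = (V[:, 1:] - V[:, :-1]) / k + A @ V[:, 1:] - F[:, 1:]
    return k * dx * (r ** 2).sum() + dx * ((V[:, 0] - u0) ** 2).sum()

def loss_EE(V, A, F, k, dx, u0):
    """G_{k,EE}: residual uses A v^{n-1} and f^{n-1} instead, Eq. (18)."""
    r = (V[:, 1:] - V[:, :-1]) / k + A @ V[:, :-1] - F[:, :-1]
    return k * dx * (r ** 2).sum() + dx * ((V[:, 0] - u0) ** 2).sum()

M, N, dx, k = 32, 64, 1 / 33, 1 / 64
A = (2 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / dx ** 2  # L = -d^2/dx^2
x = dx * np.arange(1, M + 1)
V = np.zeros((M, N + 1)); F = np.ones((M, N + 1))                  # v^0..v^N, f
print(loss_IE(V, A, F, k, dx, np.sin(np.pi * x)))
```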
## 2 Our results
In this section we discuss our main contributions. Our goal is twofold: to suggest a consistent notion of stability and a corresponding convergence framework for the methods considered.
_Equi-Coercivity and Stability._
Equi-Coercivity is a key notion in the \(\Gamma-\)convergence analysis which drives compactness and the convergence of minimisers of the approximate functionals. Especially, in the case of discrete functionals (denoted below by \(\mathcal{E}_{\ell}\), \(\ell\) stands for a discretisation parameter) stability is a prerequisite for compactness and convergence. Our analysis is driven by two key properties which are roughly stated as follows:
1. If the energies \(\mathcal{E}_{\ell}\) are uniformly bounded, \[\mathcal{E}_{\ell}[u_{\ell}]\leq C,\] then there exist a constant \(C_{1}>0\) and \(\ell-\)dependent norms \(\|\cdot\|_{V_{\ell}}\) such that \[\|u_{\ell}\|_{V_{\ell}}\leq C_{1}.\] (19)
2. Uniformly bounded sequences in \(\|u_{\ell}\|_{V_{\ell}}\) have convergent subsequences in \(H,\) where \(H\) is a normed space (typically a Sobolev space) which depends on the form of the discrete energy considered. Property [S1] requires that \(\mathcal{E}_{\ell}[v_{\ell}]\) is coercive with respect to (possibly \(\ell\)-dependent) norms (or semi-norms). Further, [S2] implies that, although the norms \(\|\cdot\|_{V_{\ell}}\) are \(\ell\)-dependent, they should be such that, from uniformly bounded sequences in these norms, it is possible to extract convergent subsequences in a weaker topology (induced by the space \(H\)).
We argue that these properties provide the right framework for stability. Although, in principle, the use of discrete norms is motivated from a nonlinear theory, [21], [9], [22], in order to focus on ideas rather than on technical tools, we start our study in this work on simple linear problems. To this end, we consider four different problems, where [S1] and [S2] are relevant: two elliptic problems with distinct regularity properties, namely elliptic operators posed on convex and non-convex Lipschitz domains. In addition, we study linear parabolic problems and their time-discrete only version. The last example highlights that training is a key factor in algorithmic design, since it influences not only the accuracy, but crucially, the stability properties of the algorithm. In fact, we provide evidence that functionals related to time discrete training of the form (81), which fail to satisfy the stability criteria [S1] and [S2], produce approximations with unstable behaviour.
Section 3 is devoted to elliptic problems and Section 4 to parabolic. In Section 3.1 and Section 3.2 we consider the same elliptic operator but posed on convex and non-convex Lipschitz domains respectively. It is interesting to compare the corresponding stability results, Propositions 3 and 7 where in the second case the stability is in a weaker norm as expected. Similar considerations apply to the continuous formulation (without training) of the parabolic problem, Proposition 10. Here an interesting feature appears to be that a maximal regularity estimate is required for the parabolic problem. In the case of time-discrete training, Proposition 13, [S1] holds with an \(\ell-\) dependent norm. Again it is interesting to observe that a discrete maximal regularity estimate is required in the proof of Proposition 13. Although we do not use previous results, it is interesting to compare to [28], [31], [2].
Let us mention that for simplicity of the exposition we assume that the discrete energies are defined on spaces where homogeneous Dirichlet conditions are satisfied. This is done only to highlight the ideas presented herein without extra technical complications. It is clear that all results can be extended when these conditions are imposed weakly through the loss functional. It is interesting to note, however, that in certain cases the choice of the form of the boundary terms in the discrete functional might affect how strong the norm of the underlying space \(H\) in [S1], [S2] is, see Remark 4.
_Convergence_ - \(\liminf-\limsup\) _framework._
We show convergence of the discrete minimisers to the solutions of the underlying PDE under minimal regularity assumptions. In certain cases, see Theorem 5 for example, it is possible, by utilising the stability of the energies and the linearity of the problem, to show direct bounds for the errors and convergence. This is in particular doable in the absence of training. In the case of regularised functionals, or when time discrete training is considered, one has to use the liminf-limsup framework of De Giorgi, see Section 2.3.4 of [14], and e.g., [10], used in the \(\Gamma-\)convergence of functionals arising in non-linear PDEs, see Theorems 6, 9 (regularised functionals) and Theorem 14 (time-discrete training). These results show that stable functionals in the sense of [S1], [S2] yield neural network approximations converging to the weak solutions of the PDEs, under no extra assumptions. This analytical framework combined with the stability notion introduced above provides a consistent and flexible toolbox for analysing neural network approximations to PDEs. It can be extended to various other, possibly nonlinear, problems. Furthermore, it provides a clear connection to PDE well-posedness and discrete stability when training is taking place.
_Previous works._
Previous works on the analysis of methods based on residual minimisation over neural network spaces for PDEs include [46], [3], [44], [45], [35], [25], [36]. In [46] convergence was established for smooth enough classical solutions of a class of nonlinear parabolic PDEs, without considering training of the functional. Convergence results, under assumptions on the discrete minimisers or the NN space, when Monte-Carlo training was considered, were derived in [44], [45], [25]. In addition,
in [45], continuous stability of certain linear operators is used in the analysis. The results of [3], [35], [36] were based on estimates where the bounds depend on the discrete minimisers and their derivatives. These bounds imply convergence only under the assumption that these functions are uniformly bounded in appropriate Sobolev norms. The results in [25], with deterministic training, are related, in the sense that they are applicable to NN spaces where by construction high-order derivatives are uniformly bounded in appropriate norms. Conceptually related is the recent work on Variational PINNs (the residuals are evaluated in a weak-variational sense), [8], where the role of quadrature was proven crucial in the analysis of the method.
As mentioned, part of the analysis is based on \(\Gamma\)-convergence arguments. \(\Gamma\)-convergence is a very natural framework which is used in nonlinear energy minimisation. In [37], \(\Gamma\)-convergence was used in the analysis of deep Ritz methods without training. In the recent work [32], the \(\liminf-\limsup\,\) framework was used in general machine learning algorithms with probabilistic training to derive convergence results for global and local discrete minimisers. For recent applications to computational methods where the discrete energies are rather involved, see [5], [21], [9], [22]. It seems that these analytical tools coming from nonlinear PDEs provide very useful insight in the present neural network setting, while standard linear theory arguments are rarely applicable due to the nonlinear character of the spaces \(V_{\mathcal{N}}\).
## 3 Elliptic problems
We consider the problem
\[Lu=f \tag{20}\]
where \(u:\Omega\subset\mathbb{R}^{d}\to\mathbb{R}\), \(\Omega\) is an open, bounded set with Lipschitz boundary, \(f\in L^{2}(\Omega)\) and \(L\) the elliptic operator as in (2).
For smooth enough \(v\) we now define the energy as follows
\[\mathcal{E}(v)=\int_{\Omega}|Lv-f|^{2}\,\mathrm{d}x+\int_{\partial\Omega}|v|^{2}\,\mathrm{d}S \tag{21}\]
Define now the linear space \(\mathcal{H}_{L}=\{v\in H^{1}(\Omega):\;Lv\in L^{2}(\Omega)\,\}.\) We consider now the minimisation problem:
\[\min_{u\in\mathcal{H}_{L}}\mathcal{E}(u)\,. \tag{22}\]
We show next that the (unique) solution of (22) is the weak solution of the PDE (20). The Euler-Lagrange equations for (22) are
\[\int_{\Omega}(Lu-f)\,Lv\,\,\mathrm{d}x+\int_{\partial\Omega}u\,v\,\mathrm{d}S=0\qquad\text{for all }v\in\mathcal{H}_{L}\,. \tag{23}\]
Let \(w\in H^{1}_{0}(\Omega)\) be given but arbitrary. Consider \(\overline{v}\) to be the solution of \(L\overline{v}=w\) with zero boundary conditions. Hence \(\overline{v}\in H^{1}_{0}(\Omega)\,.\) Then there holds,
\[\int_{\Omega}(Lu-f)\,w\,\,\mathrm{d}x+\int_{\partial\Omega}u\,\overline{v}\,\mathrm{d}S=\int_{\Omega}(Lu-f)\,w\,\,\mathrm{d}x=0\qquad\text{for all }w\in H^{1}_{0}(\Omega)\,. \tag{24}\]
Hence, \(Lu=f\) in the sense of distributions. We turn now to (23) and observe that \(\int_{\partial\Omega}u\,v\,\mathrm{d}S=0\) for all \(v\in\mathcal{H}_{L}\,.\) We conclude therefore that \(u=0\) on \(\partial\Omega\) and the claim is proved.
In this section we assume that if we select the networks appropriately, as we increase their complexity we may approximate any \(w\) in \(H^{2}\). To this end, we select a sequence of spaces
as follows: for each \(\ell\in\mathbb{N}\) we associate a DNN space \(V_{\mathcal{N}},\) which is denoted by \(V_{\ell},\) with the following property: For each \(w\in H_{0}^{2}(\Omega)\) there exists a \(w_{\ell}\in V_{\ell}\) such that,
\[\|w_{\ell}-w\|_{H^{2}(\Omega)}\leq\;\beta_{\ell}\left(w\right),\qquad\text{and }\;\beta_{\ell}\left(w\right)\to 0,\;\;\ell\to\infty\,. \tag{25}\]
If in addition, \(w\in H^{m}(\Omega)\cap H_{0}^{2}(\Omega)\) is in higher order Sobolev space then
\[\|w_{\ell}-w\|_{H^{2}(\Omega)}\leq\;\tilde{\beta}_{\ell}\,\|w\|_{H^{m}(\Omega )},\qquad\text{and }\;\tilde{\beta}_{\ell}\,\to 0,\;\;\ell\to\infty\,. \tag{26}\]
We do not need specific rates for \(\tilde{\beta}_{\ell}\,,\) but only the fact that the right-hand side of (26) has an explicit dependence on Sobolev norms of \(w.\) This assumption is a reasonable one in view of the available approximation results of neural network spaces, see for example [48], [13, 24, 43, 16, 7], and their references.
**Remark 2**: _Due to higher regularity needed by the loss functional one has to use smooth enough activation functions, such as \(\tanh\) or ReLU\({}^{k},\) that is, \(\sigma(y)=(\max\{0,y\})^{k},\) see e.g., [48], [15]. In general, the available results so far do not provide enough information on specific architectures required to achieve specific bounds with rates. Since the issue of the approximation properties is an important but independent problem, we have chosen to require minimal assumptions which can be used to prove convergence._
### Convex domains
Next, we study first the case where elliptic regularity bounds hold. Consider the sequence of energies
\[\mathcal{E}_{\ell}(u_{\ell})=\begin{cases}\mathcal{E}(u_{\ell})&,\;\;u_{\ell} \in V_{\ell}\cap H_{0}^{2}(\Omega)\\ +\infty&,\;\;\text{otherwise}\end{cases} \tag{27}\]
where \(V_{\ell}\) are chosen to satisfy (25).
#### 3.1.1 Stability
Now we have equicoercivity of \(\mathcal{E}_{\ell}\) as a corollary of the following result.
**Proposition 3** (Stability/Equi-coercivity): _Assume that \(\Omega\) is convex. Let \((u_{\ell})\) be a sequence of functions in \(V_{\ell}\) such that for a constant \(C>0\) independent of \(\ell\), it holds that_
\[\mathcal{E}_{\ell}(u_{\ell})\leq C. \tag{28}\]
_Then there exists a constant \(C_{1}>0\) such that_
\[||u_{\ell}||_{H^{2}(\Omega)}\leq C_{1}\,. \tag{29}\]
* Since \(\mathcal{E}_{\ell}(u_{\ell})\leq C,\) from the definition of \(\mathcal{E}_{\ell},\) it holds that \(\mathcal{E}(u_{\ell})\leq C.\) We have that \[\mathcal{E}(u)=\int_{\Omega}(|Lu|^{2}-2f\;Lu+|f|^{2})\,\mathrm{d}x\leq C\,.\] (30) From Hölder's inequality we have, since \(f\in L^{2}(\Omega),\) \[||Lu||_{L^{2}(\Omega)}\leq C_{1}\,.\] (31)
Finally, since \(u|_{\partial\Omega}=0\), by the global elliptic regularity in \(H^{2}\) theorem (see Theorem 4, p.334 in [19]) we have
\[||u||_{H^{2}(\Omega)}\leq C_{2}(||Lu||_{L^{2}(\Omega)}+||u||_{L^{2}(\Omega)}) \tag{32}\]
where \(C_{2}\) depends only on \(\Omega\) and the coefficients of \(L\). Now since \(0\notin\Sigma\)\((\Sigma\) is the spectrum of \(L)\), by Theorem 6 in [19] (p.324), we have
\[||u||_{L^{2}(\Omega)}\leq C_{3}||Lu||_{L^{2}(\Omega)} \tag{33}\]
where \(C_{3}\) depends only on \(\Omega\) and the coefficients of \(L\). Thus by (31), (32) and (33) we conclude
\[||u||_{H^{2}(\Omega)}\leq\tilde{C}\,. \tag{34}\]
\(\blacksquare\)
**Remark 4** (Boundary loss): _As mentioned in the introduction, in order to avoid the involved technical issues related to boundary conditions we have chosen to assume throughout that homogenous Dirichlet conditions are satisfied. It is evident that that our results are valid when the boundary conditions are imposed weakly through the discrete loss functional under appropriate technical modifications. In the case where the loss is_
\[\int_{\Omega}|Lv-f|^{2}\mathrm{d}x+\tau\,\int_{\partial\Omega}|v|^{2}\,\mathrm{ d}S \tag{35}\]
_the assumption \(\mathcal{E}_{\ell}(u_{\ell})\leq C\) provides control of the \(\|v\|_{L^{2}(\partial\Omega)}\) which is not enough to guarantee that elliptic regularity estimates will hold up to the boundary, see e.g., [11], [42], for a detailed discussion of subtle issues related to the effect of the boundary conditions on the regularity. Since the choice of the loss is at our disposal during the algorithm design, it will be interesting to consider more balanced choices of the boundary loss, depending on the regularity of the boundary. This is beyond the scope of the present work. Alternatively, one might prefer to use the framework of [47] to exactly satisfy the boundary conditions. As noted in this paper, there are instances where the boundary loss of (35) is too weak to capture accurately the boundary behaviour of the approximations. The above observation is yet another indication that our stability framework is consistent and able to highlight possible imbalances at the algorithmic design level._
#### 3.1.2 Convergence of the minimisers
In this subsection, we discuss the convergence properties of the discrete minimisers. Given the regularity properties of the elliptic problem and in the absence of training, it is possible to show the following convergence result.
**Theorem 5** (Estimate in \(H^{2}\)): _Let \(\mathcal{E}_{\ell}\) be the energy functionals defined in (27) and let \((u_{\ell}),\,u_{\ell}\in V_{\ell},\) be a sequence of minimisers of \(\mathcal{E}_{\ell}.\) Then, if \(u\) is the exact solution of (1),_
\[\|u-u_{\ell}\|_{H^{2}(\Omega)}\leq C\,\inf_{\varphi\in V_{\ell}}\|u-\varphi\| _{H^{2}(\Omega)}\,. \tag{36}\]
_and furthermore,_
\[u_{\ell}\to u,\quad\text{in}\,\,\,\,H^{2}(\Omega)\,,\qquad\ell\to\infty\,. \tag{37}\]
**Proof** Let \(u\in H^{2}_{0}(\Omega)\) be the unique solution of (20). Consider the sequence of minimisers \((u_{\ell})\,.\)
Obviously,
\[\mathcal{E}_{\ell}(u_{\ell})\leq\mathcal{E}_{\ell}(v_{\ell}),\qquad\text{for all}\,\,v_{\ell}\in V_{\ell}\,.\]
Then,
\[\mathcal{E}_{\ell}(u_{\ell})=\int_{\Omega}|Lu_{\ell}-f|^{2}=\int_{\Omega}|L(u _{\ell}-u)|^{2}\geq\beta\|u-u_{\ell}\|_{H^{2}(\Omega)}^{2}, \tag{38}\]
by Proposition 3. Combining this with \(\mathcal{E}_{\ell}(u_{\ell})\leq\mathcal{E}_{\ell}(\varphi)=\|L(\varphi-u)\|_{L^{2}(\Omega)}^{2}\leq C\|\varphi-u\|_{H^{2}(\Omega)}^{2}\) for any \(\varphi\in V_{\ell}\) proves the first claim. For the second, let \(u\in H^{2}_{0}(\Omega)\) be the unique solution of (20). Consider the sequence of minimisers \((u_{\ell})\,.\) Obviously,
\[\mathcal{E}_{\ell}(u_{\ell})\leq\mathcal{E}_{\ell}(v_{\ell}),\qquad\text{for all }v_{\ell}\in V_{\ell}\,.\]
In particular,
\[\mathcal{E}_{\ell}(u_{\ell})\leq\mathcal{E}_{\ell}(\tilde{u}_{\ell}),\]
where \(\tilde{u}_{\ell}\) is the recovery sequence corresponding to \(u\) by assumption (25). Then \(\tilde{u}_{\ell}\to u\) in \(H^{2}(\Omega)\) and
\[\mathcal{E}_{\ell}(\tilde{u}_{\ell})=||L\tilde{u}_{\ell}-f||^{2}_{L^{2}( \Omega)}=||L(\tilde{u}_{\ell}-u)||^{2}_{L^{2}(\Omega)}\,, \tag{39}\]
and the proof is complete in view of (38). \(\blacksquare\)
In the present smooth setting, the above proof hinges on the fact that \(\mathcal{E}(u)=0\) and on the linearity of the problem. In the case of regularised functional
\[\mathcal{E}_{reg}(v)=\mathcal{E}(v)+\lambda\mathcal{J}(v)\,, \tag{40}\]
the proof is more involved. We need certain natural assumptions on the functional \(\mathcal{J}(v)\) to conclude the convergence. We shall work with convex functionals \(\mathcal{J}(v)\) that are \(\mathcal{H}\) consistent, i.e., they satisfy the properties:
\[(i) \mathcal{J}(v)\geq 0,\] \[(ii) \mathcal{J}(v)\leq\liminf_{\ell\to\infty}\mathcal{J}(v_{\ell}) \text{ for all weakly convergent sequences }v_{\ell}\rightharpoonup v\in\mathcal{H}, \tag{41}\] \[(iii) \mathcal{J}(w)=\lim_{\ell\to\infty}\mathcal{J}(w_{\ell})\text{ for all convergent sequences }w_{\ell}\to w\in\mathcal{H},\]
where \(\mathcal{H}\) is an appropriate Sobolev (sub)space which will be specified in each statement.
The proof of the next theorem is very similar to the (more involved) proof of Theorem 9 and is omitted.
**Theorem 6** (Convergence for the regularised functional): _Let \(\mathcal{E}_{reg},\ \mathcal{E}_{reg,\ell}\) be the energy functionals defined in (40) and_
\[\mathcal{E}_{reg,\ell}(u_{\ell})=\begin{cases}\mathcal{E}_{reg}(u_{\ell}),&u_ {\ell}\in V_{\ell}\cap H^{2}_{0}(\Omega)\\ +\infty,&\text{otherwise}\,.\end{cases} \tag{42}\]
_Assume that the convex functional \(\mathcal{J}(v)\) is \(H^{2}(\Omega)\) consistent. Let \((u_{\ell}),\,u_{\ell}\in V_{\ell},\) be a sequence of minimisers of \(\mathcal{E}_{\ell}\), i.e._
\[\mathcal{E}_{reg,\ell}(u_{\ell})=\inf_{v_{\ell}\in V_{\ell}}\mathcal{E}_{reg, \ell}(v_{\ell})\,. \tag{43}\]
_Then,_
\[u_{\ell}\to u^{(\lambda)},\ \ \text{in}\ \ H^{1}(\Omega)\,,\qquad\ell\to\infty\,, \tag{44}\]
_where \(u^{(\lambda)}\) is the exact solution of the regularised problem_
\[\mathcal{E}_{reg}(u^{(\lambda)})=\min_{v\in H^{2}_{0}(\Omega)}\mathcal{E}_{ reg}(v)\,. \tag{45}\]
### Non-convex Lipschitz domains
In this subsection we discuss the case of non-convex Lipschitz domains, where elliptic regularity bounds are no longer valid; solutions might form singularities and in general do not belong to \(H^{2}(\Omega).\) We will see that the stability notion discussed in [S1] and [S2] is still relevant, but in a weaker topology than in the previous case.
In the analysis below we shall use the bilinear form associated to the elliptic operator \(L,\) denoted \(B:H^{1}_{0}(\Omega)\times H^{1}_{0}(\Omega)\rightarrow\mathbb{R}.\) In particular,
\[B(u,v)=\int_{\Omega}\Big{(}\sum_{i,j=1}^{n}a_{ij}u_{x_{i}}v_{x_{j}}+cuv\,\Big{)} \,\mathrm{d}x\,. \tag{46}\]
In the sequel, we shall assume that the coefficients \(a_{ij},\)\(c\) are smooth enough and satisfy the required positivity properties for our purposes. We have the following stability result:
**Proposition 7**: _The functional \(\mathcal{E}\) defined in (5) is stable with respect to the \(H^{1}\)-norm: Let \((u_{\ell})\) be a sequence of functions in \(V_{\ell}\) such that for a constant \(C>0\) independent of \(\ell,\) it holds that_
\[\mathcal{E}_{\ell}(u_{\ell})\leq C. \tag{47}\]
_Then there exists a constant \(C_{1}>0\) such that_
\[\|u_{\ell}\|_{H^{1}(\Omega)}\leq C_{1}\,. \tag{48}\]
* We show that, if \(\mathcal{E}_{\ell}(v)\leq C\) for some \(C>0,\) then \(\|v\|_{H^{1}(\Omega)}\leq\tilde{C}\) for some \(\tilde{C}>0.\) Indeed, the positivity properties of the coefficients imply, for any \(v\in H^{1}_{0}(\Omega),\) \[\theta||\nabla v||^{2}_{L^{2}(\Omega)}\leq B(v,v)\,.\] (49) Also, if \(Lv\in L^{2}(\Omega)\,,\) \[B(v,v)=\int_{\Omega}vLv\,\mathrm{d}x\leq||v||_{L^{2}(\Omega)}||Lv||_{L^{2}(\Omega)}\,,\] (50) and the claim follows by applying the Hölder and Poincaré inequalities.
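Spelling out the last step: combining (49), (50) with the Poincaré inequality \(\|v\|_{L^{2}(\Omega)}\leq C_{P}\|\nabla v\|_{L^{2}(\Omega)}\) gives
\[\theta\|\nabla v\|_{L^{2}(\Omega)}^{2}\leq\|v\|_{L^{2}(\Omega)}\,\|Lv\|_{L^{2}(\Omega)}\leq C_{P}\,\|\nabla v\|_{L^{2}(\Omega)}\,\|Lv\|_{L^{2}(\Omega)}\,,\]
so that \(\|\nabla v\|_{L^{2}(\Omega)}\leq(C_{P}/\theta)\,\|Lv\|_{L^{2}(\Omega)}\), while \(\|Lv\|_{L^{2}(\Omega)}\leq\mathcal{E}_{\ell}(v)^{1/2}+\|f\|_{L^{2}(\Omega)}\leq C^{1/2}+\|f\|_{L^{2}(\Omega)}\) by the energy bound and the triangle inequality.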
The convergence proof below relies on a crucial \(\limsup\) inequality which is proved in the next Theorem 9.
**Theorem 8** (Convergence in \(H^{1}\)): _Let \(\mathcal{E}_{\ell}\) be the energy functionals defined in (27) and let \((u_{\ell}),\)\(u_{\ell}\in V_{\ell},\) be a sequence of minimisers of \(\mathcal{E}_{\ell}\), where \(\Omega\) is a possibly non-convex Lipschitz domain. Then, if \(u\) is the exact solution of (1),_
\[u_{\ell}\to u,\ \ \mbox{in}\ \ H^{1}(\Omega)\,,\qquad\ell\rightarrow\infty\,. \tag{51}\]
* Let \(u\in\mathcal{H}_{L}\) be the unique solution of (20). Consider the sequence of minimisers \((u_{\ell})\,.\) Obviously, \[\mathcal{E}_{\ell}(u_{\ell})\leq\mathcal{E}_{\ell}(v_{\ell}),\qquad\mbox{for all }v_{\ell}\in V_{\ell}\,.\] By the proof of Proposition 7, we have, for \(c_{0}>0,\) \[\mathcal{E}_{\ell}(u_{\ell})=\int_{\Omega}|Lu_{\ell}-f|^{2}=\int_{\Omega}|L(u_ {\ell}-u)|^{2}\geq c_{0}\|u-u_{\ell}\|^{2}_{H^{1}(\Omega)}\,.\] (52)
Furthermore, let \(\tilde{u}_{\ell}\) be the recovery sequence corresponding to \(u\) constructed in the proof of Theorem 9. Since
\[\mathcal{E}_{\ell}(u_{\ell})\leq\mathcal{E}_{\ell}(\tilde{u}_{\ell}),\]
and
\[\lim_{\ell\to\infty}\mathcal{E}_{\ell}(\tilde{u}_{\ell})=\mathcal{E}(u)=0,\]
the proof follows.
Next, we utilise the standard \(\liminf\)-\(\limsup\) framework of \(\Gamma\)-convergence, to prove that the sequence of discrete minimisers \((u_{\ell})\) of the regularised functionals converges to a global minimiser of the continuous regularised functional.
**Theorem 9** (Convergence of the regularised functionals): _Let \(\mathcal{E}_{reg},\ \mathcal{E}_{reg,\ell}\) be the energy functionals defined in (40) and (42) respectively, where \(\Omega\) is a possibly non-convex Lipschitz domain. Assume that the convex functional \(\mathcal{J}(v)\) is \(\mathcal{H}_{L}\) consistent. Let \((u_{\ell}),\,u_{\ell}\in V_{\ell},\) be a sequence of minimisers of \(\mathcal{E}_{reg,\ell}.\) Then,_
\[u_{\ell}\to u^{(\lambda)},\ \ \ \mbox{in}\ \ L^{2}(\Omega),\ \ \ u_{\ell}\rightharpoonup u^{(\lambda)}\,,\ \ \ \mbox{in}\ \ H^{1}(\Omega),\qquad\ell\to\infty\,, \tag{53}\]
_where \(u^{(\lambda)}\) is the exact solution of the regularised problem_
\[\mathcal{E}_{reg}(u^{(\lambda)})=\min_{v\in\mathcal{H}_{L}(\Omega)}\mathcal{ E}_{reg}(v)\,. \tag{54}\]
**Proof** We start with a \(\liminf\) inequality: let \((v_{\ell}),\,v_{\ell}\in V_{\ell},\) be any sequence with \(v_{\ell}\to v\) in \(L^{2}(\Omega)\). We may assume that \(\mathcal{E}_{reg,\ell}(v_{\ell})\leq C\) uniformly in \(\ell\), otherwise \(\mathcal{E}_{reg}(v)\leq\liminf_{\ell\to\infty}\mathcal{E}_{reg,\ell}(v_{\ell})=+\infty\) holds trivially. The above stability result, Proposition 7, implies that \(||v_{\ell}||_{H^{1}(\Omega)}\) are uniformly bounded; therefore, up to subsequences, \(v_{\ell}\rightharpoonup v\) in \(H^{1}(\Omega)\). Also, from the energy bound we have that \(||Lv_{\ell}||_{L^{2}(\Omega)}\leq C\) and therefore \(Lv_{\ell}\rightharpoonup w\) in \(L^{2}(\Omega)\). Next we shall show that \(w=Lv\). Indeed, we have

\[\lim_{\ell\to\infty}\int_{\Omega}Lv_{\ell}\,\phi\,\mathrm{d}x=\int_{\Omega}w\phi\,\mathrm{d}x\ \ \,\ \forall\ \phi\in C_{0}^{\infty}(\Omega)\,, \tag{55}\]

and

\[\lim_{\ell\to\infty}\int_{\Omega}Lv_{\ell}\,\phi\,\mathrm{d}x=\lim_{\ell\to\infty}B(v_{\ell},\phi)=B(v,\phi),\ \ \ \text{since}\ \ v_{\ell}\rightharpoonup v\ \ \text{in}\ \ H^{1}(\Omega)\,, \tag{56}\]

hence,

\[B(v,\phi)=\int_{\Omega}w\phi\ \mathrm{d}x, \tag{57}\]

for all test functions. That is, \(Lv=w\) weakly, so in particular \(v\in\mathcal{H}_{L}\). The convexity of \(\int_{\Omega}|Lv_{\ell}-f|^{2}\) implies weak lower semicontinuity, that is

\[\int_{\Omega}|Lv-f|^{2}\leq\liminf_{\ell\to\infty}\int_{\Omega}|Lv_{\ell}-f|^{2} \tag{58}\]

and since \(\mathcal{J}(v)\) is \(\mathcal{H}_{L}\) consistent, (ii) of (41) implies that \(\mathcal{E}_{reg}(v)\leq\liminf_{\ell\to\infty}\mathcal{E}_{reg,\ell}(v_{\ell})\) for each such sequence \((v_{\ell})\).
Let \(w\in\mathcal{H}_{L}\) be arbitrary; we will show the existence of a recovery sequence \((w_{\ell})\), such that \(\mathcal{E}_{reg}(w)=\lim_{\ell\to\infty}\mathcal{E}_{reg,\ell}(w_{\ell}).\) For each \(\delta>0\) we can select a smooth enough mollified approximation \(w_{\delta}\in H^{2}_{0}(\Omega)\cap C^{m}_{0}(\Omega),\,m>2,\) such that
\[\begin{split}&\|w-w_{\delta}\|_{H^{1}(\Omega)}+\|Lw-Lw_{\delta}\|_{L^{ 2}(\Omega)}\lesssim\delta\,,\ \ \ \mbox{and,}\\ &|w_{\delta}|_{H^{s}(\Omega)}\lesssim\frac{1}{\delta^{s}}|w|_{H^{ 1}(\Omega)}.\end{split} \tag{59}\]
For \(w_{\delta}\), by (26), there exists \(w_{\ell,\delta}\in V_{\ell}\) such that
\[\|w_{\ell,\delta}-w_{\delta}\|_{H^{2}(\Omega)}\leq\ \tilde{\beta}_{\ell}\,\|w_{\delta}\|_{H^{s}(\Omega)}\leq\ \tilde{\beta}_{\ell}\frac{1}{\delta^{s}}\,\|w\|_{H^{1}(\Omega)},\qquad\text{and}\ \ \tilde{\beta}_{\ell}\to 0,\ \ \ell\to\infty\,.\]
Choosing \(\delta\) appropriately as a function of \(\tilde{\beta}_{\ell}\) we can ensure that \(w_{\ell}=w_{\ell,\delta}\) satisfies,
\[||Lw_{\ell}-f||_{L^{2}(\Omega)}\to||Lw-f||_{L^{2}(\Omega)}\,. \tag{60}\]
Since \(\mathcal{J}(v)\) is \(\mathcal{H}_{L}\) consistent, (iii) of (41) implies that \(\mathcal{J}(w_{\ell})\to\mathcal{J}(w)\), and hence
\[\mathcal{E}_{reg,\ell}(w_{\ell})\to\mathcal{E}_{reg}(w). \tag{61}\]
Next, let \(u^{(\lambda)}\in\mathcal{H}_{L}\) be the unique solution of (54) and consider the sequence of the discrete minimisers \((u_{\ell})\,.\) Clearly,
\[\mathcal{E}_{reg,\ell}(u_{\ell})\leq\mathcal{E}_{reg,\ell}(v_{\ell}),\qquad \text{for all}\ v_{\ell}\in V_{\ell}\,.\]
In particular, \(\mathcal{E}_{reg,\ell}(u_{\ell})\leq\mathcal{E}_{reg,\ell}(\tilde{u}_{\ell}),\) where \(\tilde{u}_{\ell}\) is the recovery sequence constructed above corresponding to \(w=u^{(\lambda)}.\) Thus the discrete energies are uniformly bounded. Then the stability result Proposition 7, implies that
\[\|u_{\ell}\|_{H^{1}(\Omega)}<C, \tag{62}\]
uniformly. By the Rellich-Kondrachov theorem, [19], and the \(\liminf\) argument above, there exists \(\tilde{u}\in\mathcal{H}_{L}\) such that \(u_{\ell}\to\tilde{u}\) in \(L^{2}(\Omega)\) up to a subsequence not re-labeled here. Next we show that \(\tilde{u}\) is a global minimiser of \(\mathcal{E}_{reg}.\) We combine the \(\liminf\) and \(\limsup\) inequalities as follows: Let \(w\in\mathcal{H}_{L},\) and \(w_{\ell}\in V_{\ell}\) be its recovery sequence such that \(||Lw_{\ell}-f||_{L^{2}(\Omega)}\to||Lw-f||_{L^{2}(\Omega)}\,.\) Therefore, the \(\liminf\) inequality and the fact that \(u_{\ell}\) are minimisers of \(\mathcal{E}_{reg,\ell}\) imply that
\[\mathcal{E}_{reg}(\tilde{u})\leq\liminf_{\ell\to\infty}\mathcal{E}_{reg,\ell} (u_{\ell})\leq\limsup_{\ell\to\infty}\mathcal{E}_{reg,\ell}(u_{\ell})\leq \limsup_{\ell\to\infty}\mathcal{E}_{reg,\ell}(w_{\ell})=\mathcal{E}_{reg}(w), \tag{63}\]
for all \(w\in\mathcal{H}_{L}.\) Therefore \(\tilde{u}\) is a minimiser of \(\mathcal{E}_{reg},\) and since \(u^{(\lambda)}\) is the unique global minimiser of \(\mathcal{E}_{reg}\) on \(\mathcal{H}_{L}\) we have that \(\tilde{u}=u^{(\lambda)}.\)
\(\blacksquare\)
## 4 Parabolic problems
Let, as before, \(\Omega\subset\mathbb{R}^{d}\) be open and bounded, and set \(\Omega_{T}=\Omega\times(0,T]\) for some fixed time \(T>0.\) We consider the parabolic problem
\[\begin{cases}u_{t}+Lu=f,&\text{in}\ \,\Omega_{T},\\ u=0,&\text{on}\ \,\partial\Omega\times(0,T],\\ u=u^{0},&\text{on}\ \,\Omega\times\{t=0\}\,.\end{cases} \tag{64}\]
In this section we discuss convergence properties of approximations of (64) obtained by minimisation of continuous and time-discrete energy functionals over appropriate sets of neural network functions. We shall assume that \(\Omega\) is a convex Lipschitz domain. The case of a non-convex domain can be treated with the appropriate modifications.
### Exact time integrals
We now define \(\mathcal{G}:H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}_{0}(\Omega))\to\overline{\mathbb{R}}\) as follows
\[\mathcal{G}(v)=\int_{0}^{T}\|v_{t}(t)+Lv(t)-f(t)\|_{L^{2}(\Omega)}^{2}\mathrm{d }t+|v(0)-u^{0}|_{H^{1}(\Omega)}^{2}\,. \tag{65}\]
We use the \(H^{1}(\Omega)\) seminorm for the initial condition, since the regularity properties of the functional are then better. Of course, one can use the \(L^{2}(\Omega)\) norm instead, with appropriate modifications in the proofs.
As before, we select a sequence of DNN spaces as follows: to each \(\ell\in\mathbb{N}\) we associate a DNN space \(W_{\mathcal{N}}\), denoted by \(W_{\ell}\), such that: For each \(w\in H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))\) there exists a \(w_{\ell}\in W_{\ell}\) such that,
\[\|w_{\ell}-w\|_{H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))}\leq\; \beta_{\ell}\left(w\right),\qquad\text{and}\;\;\beta_{\ell}\left(w\right)\to 0,\;\;\ell\to\infty\,. \tag{66}\]
If in addition, \(w\) has higher regularity, we assume that
\[\|(w_{\ell}-w)^{\prime}\|_{H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}( \Omega))}\leq\;\tilde{\beta}_{\ell}\,\|w^{\prime}\|_{H^{m}(0,T;H^{2}(\Omega))},\qquad\text{and}\;\;\tilde{\beta}_{\ell}\;\to 0,\;\;\ell\to\infty\,. \tag{67}\]
As in the elliptic case, we do not need specific rates for \(\tilde{\beta}_{\ell}\,,\) but only the fact that the right-hand side of (67) has an explicit dependence on Sobolev norms of \(w\). See [1] and its references, where space-time approximation properties of neural network spaces are derived; see also [48], [15] and Remark 2.
In the sequel we consider the sequence of energies
\[\mathcal{G}_{\ell}(u_{\ell})=\begin{cases}\mathcal{G}(u_{\ell}),&u_{\ell}\in W _{\ell}\cap L^{2}(0,T;H^{1}_{0}(\Omega))\\ +\infty,&\text{otherwise}\end{cases} \tag{68}\]
where \(W_{\ell}\) is chosen as before.
#### 4.1.1 Equi-coercivity
Now we have equicoercivity of \(\mathcal{G}_{\ell}\) as a corollary of the following result.
**Proposition 10**: _The functional \(\mathcal{G}\) defined in (65) is coercive with respect to the \(L^{2}(0,T;H^{2}(\Omega))\cap H^{1}(0,T;L^{2}(\Omega))\)-norm. That is,_
\[\text{If }\ \mathcal{G}(u)\leq C\ \text{ for some }\ C>0,\ \text{ then}\qquad||u||_{L^{2}(0,T;H^{2}(\Omega))}+||u^{\prime}||_{L^{2}(0,T;L^{2}(\Omega))}\leq C_{1}\,. \tag{69}\]
* As in the proof of equicoercivity for (5), we have \[\mathcal{G}(u)=\int_{\Omega_{T}}(|u_{t}+Lu|^{2}-2f\left(u_{t}+Lu\right)+|f|^{ 2})\leq C\] (70) Hence, one can conclude that since \(f\in L^{2}(\Omega_{T})\), \[||u_{t}+Lu||_{L^{2}(0,T;L^{2}(\Omega))}\leq C_{1}\] (71)
From regularity theory for parabolic equations (see for example Theorem 5, p.382 in [19]) we have
\[\begin{array}{c}\mbox{ess sup}_{0\leq t\leq T}||u(t)||_{H^{1}_{0}(\Omega)}+||u||_ {L^{2}(0,T;H^{2}(\Omega))}+||u^{\prime}||_{L^{2}(0,T;L^{2}(\Omega))}\\ \qquad\qquad\leq\tilde{C}(||u_{t}+Lu||_{L^{2}(0,T;L^{2}(\Omega))}+||u(0)||_{H^ {1}_{0}(\Omega)})\end{array} \tag{72}\]
where the constant \(\tilde{C}\) depends only on \(\Omega,\ T\) and the coefficients of \(L\). Notice that (72) is a maximal parabolic regularity estimate in \(L^{2}(0,T;L^{2}(\Omega))\,.\) This completes the proof.
#### 4.1.2 Compactness and Convergence of Discrete Minimisers
As in the previous section, using standard arguments from the theory of \(\Gamma\)-convergence, we will prove that, under a boundedness hypothesis on \(u_{\ell}\), the sequence of discrete minimisers \((u_{\ell})\) converges in \(L^{2}(0,T;H^{1}(\Omega))\) to a global minimiser of the continuous functional. We will also need the well-known Aubin-Lions theorem, the analogue of the Rellich-Kondrachov theorem in the parabolic case, which can be found, for example, in [49].
**Theorem 11** (Aubin-Lions): _Let \(B_{0},B,B_{1}\) be three Banach spaces where \(B_{0},B_{1}\) are reflexive. Suppose that \(B_{0}\) is continuously imbedded into \(B\), which is also continuously imbedded into \(B_{1}\), and the imbedding from \(B_{0}\) into \(B\) is compact. For any given \(p_{0},p_{1}\) with \(1<p_{0},p_{1}<\infty\), let_
\[W=\{v\,|\,\,v\in L^{p_{0}}([0,T],B_{0})\,\ v_{t}\in L^{p_{1}}([0,T],B_{1})\}. \tag{73}\]
_Then the imbedding from \(W\) into \(L^{p_{0}}([0,T],B)\) is compact._
**Theorem 12** (Convergence of discrete minimisers): _Let \((u_{\ell})\subset W_{\ell}\) be a sequence of minimisers of \({\cal G}_{\ell}\), i.e.,_
\[{\cal G}_{\ell}(u_{\ell})=\inf_{w_{\ell}\in W_{\ell}}{\cal G}_{\ell}(w_{\ell}) \tag{74}\]
_then_
\[u_{\ell}\to u,\ \ \mbox{in}\ \ L^{2}(0,T;H^{1}(\Omega)) \tag{75}\]
_where \(u\) is the solution of (64)._
* We begin with the liminf inequality. We assume there is a sequence, still denoted by \(u_{\ell}\), such that \({\cal G}_{\ell}(u_{\ell})\leq C\) uniformly in \(\ell\), otherwise \({\cal G}(u)\leq\liminf_{\ell\to\infty}{\cal G}_{\ell}(u_{\ell})=+\infty.\) From Proposition 10, the uniform bound \({\cal G}_{\ell}(u_{\ell})\leq C\) implies that \(||u_{\ell}||_{L^{2}(0,T;H^{2}(\Omega))}+||u^{\prime}_{\ell}||_{L^{2}(0,T;L^{2 }(\Omega))}\) are uniformly bounded. This implies (we denote \(u^{\prime}:=u_{t}\)) \[\nabla^{2}u_{\ell}\rightharpoonup\nabla^{2}u\ \ \mbox{and}\ \ u^{\prime}_{\ell} \rightharpoonup u^{\prime}\ \ \mbox{weakly in}\ \ L^{2}(0,T;L^{2}(\Omega)),\] (76) and hence \(u^{\prime}_{\ell}+Lu_{\ell}-f\rightharpoonup u^{\prime}+Lu-f\,.\) The convexity of \(\int_{\Omega_{T}}|u^{\prime}_{\ell}+Lu_{\ell}-f|^{2}\) implies weak lower semicontinuity, that is \[\int_{\Omega_{T}}|u^{\prime}+Lu-f|^{2}\leq\liminf_{\ell\to\infty}\int_{\Omega_ {T}}|u^{\prime}_{\ell}+Lu_{\ell}-f|^{2}\] (77)
and therefore we conclude that \(\mathcal{G}(u)\leq\liminf_{\ell\to\infty}\mathcal{G}_{\ell}(u_{\ell})\).
Let \(w\in H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))\). By (66) there exists \(w_{\ell}\in W_{\ell}\) such that \(w_{\ell}\to w\) in \(H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))\). We can conclude that \(w_{\ell}^{\prime}+Lw_{\ell}\to w^{\prime}+Lw\) in \(L^{2}(0,T;L^{2}(\Omega)),\) and hence
\[||w_{\ell}^{\prime}+Lw_{\ell}-f||_{L^{2}(0,T;L^{2}(\Omega))}\to||w^{\prime}+Lw-f||_{L^{2}(0,T;L^{2}(\Omega))}\,. \tag{78}\]
That is, \(\mathcal{G}_{\ell}(w_{\ell})\,\to\,\mathcal{G}(w)\,.\) We argue as in Theorem 9 and we conclude the proof. The only difference is that we utilise Theorem 11 instead of Rellich-Kondrachov Theorem, with \(B_{0}=H^{2}(\Omega)\,\ B=H^{1}(\Omega)\) and \(B_{1}=L^{2}(\Omega)\).
### Time discrete training
To apply a quadrature in the time integral only we proceed as follows: Let \(0=t^{0}<t^{1}<\cdots<t^{N}=T\) define a partition of \([0,T]\) and \(I_{n}:=(t^{n-1},t^{n}]\), \(k_{n}:=t^{n}-t^{n-1}\). We shall denote by \(v^{m}(\cdot)\) and \(f^{m}(\cdot)\) the values \(v(\cdot,t^{m})\) and \(f(\cdot,t^{m})\). Then we define the discrete in time quadrature by
\[\sum_{n=1}^{N}\,k_{n}\,g(t^{n})\approx\int_{0}^{T}\,g(t)\,\mathrm{d}t. \tag{79}\]
We proceed to define the time-discrete version of the functional (65) as follows
\[\mathcal{G}_{IE,k}(v)=\sum_{n=1}^{N}\,k_{n}\,\int_{\Omega}\big{|}\frac{v^{n}-v^{n-1}}{k_{n}}+Lv^{n}-f^{n}\big{|}^{2}\,\mathrm{d}x+\,|v^{0}-u^{0}|_{H^{1}(\Omega)}^{2}\,. \tag{80}\]
We shall study the stability and convergence properties of the minimisers of the problems:
\[\min_{v\in V_{\mathcal{N}}}\mathcal{G}_{IE,k}(v)\,. \tag{81}\]
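For orientation, the following is a minimal sketch of problem (81) for the one-dimensional heat equation (the example computed numerically at the end of this section), written in PyTorch rather than _DeepXDE_. Here \(L=-\partial_{xx}\), the spatial integrals are replaced by Monte Carlo averages over random training points, the zero boundary condition is hard-wired into the ansatz, and all architectural choices (widths, optimiser, sample sizes) are illustrative assumptions rather than the settings used in the experiments below.

```python
# Minimisation of the implicit-Euler functional G_{IE,k} of (80) for
# u_t - u_xx = f on (0,1) x (0,1], u = 0 on the spatial boundary.
# Illustrative sketch: Monte Carlo spatial quadrature, uniform time steps.
import torch

torch.manual_seed(0)
N = 50                                     # number of time steps, t^n = n*k
k = 1.0 / N
t = torch.linspace(0.0, 1.0, N + 1)

net = torch.nn.Sequential(                 # a candidate element of W_ell
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))

u0 = lambda x: torch.sin(torch.pi * x)     # initial datum u^0
f = lambda x, tn: torch.zeros_like(x)      # source term

def u_and_Lu(x, tn):
    # u = x(1-x) * net(x,t) hard-wires the zero boundary condition;
    # Lu = -u_xx is computed with automatic differentiation.
    x = x.clone().requires_grad_(True)
    u = x * (1.0 - x) * net(torch.cat([x, tn.expand_as(x)], dim=1))
    ux = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    return u, -uxx, x

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):                    # training loop
    x = torch.rand(128, 1)                 # random spatial training points
    u_prev, _, _ = u_and_Lu(x, t[0])
    loss = torch.zeros(())
    for n in range(1, N + 1):              # sum_n k |(u^n-u^{n-1})/k + Lu^n - f^n|^2
        u_n, Lu_n, _ = u_and_Lu(x, t[n])
        loss = loss + k * ((u_n - u_prev) / k + Lu_n - f(x, t[n])).pow(2).mean()
        u_prev = u_n
    # |u(0) - u^0|_{H^1}^2, approximated through the gradient of the error
    e0, _, xg = u_and_Lu(x, t[0])
    e0 = e0 - u0(xg)
    e0x = torch.autograd.grad(e0.sum(), xg, create_graph=True)[0]
    loss = loss + e0x.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```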
Next we introduce the _time reconstruction_\(\widehat{U}\) of a time dependent function \(U\) to be the piecewise linear approximation of \(U\) defined by linearly interpolating between the nodal values \(U^{n-1}\) and \(U^{n}\):
\[\widehat{U}(t):=\ell_{0}^{n}(t)U^{n-1}+\ell_{1}^{n}(t)U^{n},\quad t\in I_{n}, \tag{82}\]
with \(\ell_{0}^{n}(t):=(t^{n}-t)/k_{n}\) and \(\ell_{1}^{n}(t):=(t-t^{n-1})/k_{n}\). This reconstruction of the discrete solution has been proven useful in various instances, see [4], [38], [17] and for higher-order versions [34].
Correspondingly, the piecewise constant interpolant of \(U^{j}\) is denoted by \(\overline{U},\)
\[\overline{U}(t):=U^{n},\quad t\in I_{n}\,. \tag{83}\]
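Note for later use that, differentiating (82) on each \(I_{n}\),
\[\widehat{U}_{t}(t)=\frac{U^{n}-U^{n-1}}{k_{n}},\qquad t\in I_{n}\,,\]
so each summand of (80) is exactly \(\int_{I_{n}}\|\widehat{U}_{t}+L\overline{U}-\overline{f}\|_{L^{2}(\Omega)}^{2}\,\mathrm{d}t\); this yields the representation (84) below.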
The discrete energy \(\mathcal{G}_{IE,k}\) can now be written as follows
\[\mathcal{G}_{IE,k}(U)= \|\widehat{U}_{t}+L\overline{U}-\overline{f}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\,|\widehat{U}^{0}-u^{0}|_{H^{1}(\Omega)}^{2} \tag{84}\] \[= \int_{0}^{T}\|\widehat{U}_{t}+L\overline{U}-\overline{f}\|_{L^{2}(\Omega)}^{2}\,\mathrm{d}t+\,|\widehat{U}^{0}-u^{0}|_{H^{1}(\Omega)}^{2}\,.\]
#### 4.2.1 Stability-Equi-coercivity
Now we have equicoercivity of \({\cal G}_{IE,k}\) as a corollary of the following result.
**Proposition 13**: _The functional \(\mathcal{G}_{IE,k}\) defined in (84) is coercive with respect to \(\widehat{U},\overline{U}\). That is,_
\[\text{If }\ \mathcal{G}_{IE,k}(U)\leq C\ \text{ for some }\ C>0,\ \text{ then}\qquad\|\overline{U}\|_{L^{2}(0,T;H^{2}(\Omega))}+\|\widehat{U}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}\leq C_{1}\,. \tag{85}\]
* As in the proof of equicoercivity for (5), we have \[\int_{\Omega_{T}}(|\widehat{U}_{t}+L\overline{U}|^{2}-2\overline{f}\,( \widehat{U}_{t}+L\overline{U})+|\overline{f}|^{2})\leq C\] (86) Thus we can conclude that since \(f\in L^{2}(\Omega_{T})\), we have the uniform bound \[\|\widehat{U}_{t}+L\overline{U}\|_{L^{2}(0,T;L^{2}(\Omega))}\leq C_{1}\,.\] (87) We shall need a discrete maximal regularity estimate in the present Hilbert-space setting. To this end we observe, \[\|\widehat{U}_{t}+L\overline{U}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2} =\|\widehat{U}_{t}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\|L\overline{U}\|_{L^{2 }(0,T;L^{2}(\Omega))}^{2}+2\sum_{n=1}^{N}\,\int_{I_{n}}\,\left\langle\widehat{ U}_{t},L\overline{U}\right\rangle\ dt\] \[=\|\widehat{U}_{t}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\|L\overline{ U}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}\] \[\qquad+2\sum_{n=1}^{N}\,\int_{I_{n}}\,\left\langle\big{[}\frac{U ^{n}-U^{n-1}}{k_{n}}\big{]},LU^{n}\right\rangle\ dt\] \[=\|\widehat{U}_{t}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\|L\overline{ U}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}\] (88) \[\qquad+2\sum_{n=1}^{N}\,\left\langle\big{[}U^{n}-U^{n-1}\big{]}, LU^{n}\right\rangle\] \[=\|\widehat{U}_{t}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\|L\overline{ U}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\left\langle L\,U^{N},U^{N}\right\rangle\] \[\qquad+\sum_{n=1}^{N}\,\left\langle L\,\big{[}U^{n}-U^{n-1}\big{]},U^{n}-U^{n-1}\right\rangle\,-\left\langle L\,U^{0},U^{0}\right\rangle.\] Since all but the last term \(\left\langle L\,U^{0},U^{0}\right\rangle\) are positive, we conclude, \[\|\widehat{U}_{t}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+\|L\overline{U}\|_{L^{2}(0, T;L^{2}(\Omega))}^{2}\leq\|\widehat{U}_{t}+L\overline{U}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}+ \left\langle L\,U^{0},U^{0}\right\rangle,\] (89) and the proof is complete.
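For completeness, we record the elliptic regularity step implicit in the conclusion: under the stated assumptions on the coefficients, and since \(\Omega\) is convex, \(\|U^{n}\|_{H^{2}(\Omega)}\leq C\,\|LU^{n}\|_{L^{2}(\Omega)}\) for \(U^{n}\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\), whence
\[\|\overline{U}\|_{L^{2}(0,T;H^{2}(\Omega))}\leq C\,\|L\overline{U}\|_{L^{2}(0,T;L^{2}(\Omega))}\,,\]
and (85) follows from (89).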
#### 4.2.2 \(\liminf\) inequality
We assume there is a sequence, still denoted by \(U_{\ell}\), such that \({\cal G}_{IE,\ell}(U_{\ell})\leq C\) uniformly in \(\ell\), otherwise \(\liminf_{\ell\to\infty}{\cal G}_{IE,\ell}(U_{\ell})=+\infty.\) From the discrete stability estimate, the uniform bound \({\cal G}_{IE,\ell}(U_{\ell})\leq C\) implies that \(\|\overline{U}_{\ell}\|_{L^{2}(0,T;H^{2}(\Omega))}+\|\widehat{U}_{\ell}^{ \prime}\|_{L^{2}(0,T;L^{2}(\Omega))}\leq C_{1}\,,\) are uniformly bounded.
By the relative compactness in \(L^{2}(0,T;L^{2}(\Omega))\) we have (up to a subsequence not re-labeled) the existence of \(u_{(1)}\) and \(u_{(2)}\) such that
\[L\overline{U}_{\ell}\rightharpoonup Lu_{(1)}\ \ \text{and}\ \ \widehat{U}_{\ell}^{ \prime}\rightharpoonup u_{(2)}^{\prime}\ \ \text{weakly in}\ \ L^{2}(0,T;L^{2}(\Omega))\,. \tag{90}\]
Notice that, for any space-time test function \(\varphi\in C_{0}^{\infty}\) there holds (we have set \(\tilde{\varphi}^{n}:=\frac{1}{k_{n}}\int_{I_{n}}\varphi\ \,dt\))
\[-\int_{0}^{T}\langle\widehat{U}_{\ell},\varphi^{\prime}\rangle \mathrm{d}t=\int_{0}^{T}\langle\widehat{U}_{\ell}^{\prime},\varphi\rangle \mathrm{d}t \tag{91}\] \[=\sum_{n=1}^{N}\,\int_{I_{n}}\,\langle\big{[}\frac{U_{\ell}^{n}-U _{\ell}^{n-1}}{k_{n}}\big{]},\varphi\rangle\,\ dt=\sum_{n=1}^{N}\,\langle U_{ \ell}^{n},\tilde{\varphi}^{n}\rangle-\langle U_{\ell}^{n-1},\tilde{\varphi}^{ n}\rangle\] \[=\sum_{n=1}^{N}\,\langle U_{\ell}^{n},\varphi^{n-1}\rangle- \langle U_{\ell}^{n-1},\varphi^{n-1}\rangle+\sum_{n=1}^{N}\,\langle U_{\ell}^ {n},\big{[}\tilde{\varphi}^{n}-\varphi^{n-1}\big{]}\rangle-\langle U_{\ell}^{ n-1},\big{[}\tilde{\varphi}^{n}-\varphi^{n-1}\big{]}\rangle\] \[=-\sum_{n=1}^{N}\,\langle U_{\ell}^{n},\varphi^{n}-\varphi^{n-1} \rangle+\sum_{n=1}^{N}\,\langle\big{[}U_{\ell}^{n}-U_{\ell}^{n-1}\big{]},\big{[} \tilde{\varphi}^{n}-\varphi^{n-1}\big{]}\rangle\] \[=-\int_{0}^{T}\langle\overline{U}_{\ell},\varphi^{\prime}\rangle \mathrm{d}t+\sum_{n=1}^{N}\,\langle\big{[}U_{\ell}^{n}-U_{\ell}^{n-1}\big{]}, \big{[}\tilde{\varphi}^{n}-\varphi^{n-1}\big{]}\rangle\,.\]
By the uniform bound,
\[\|\widehat{U}_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}=\sum_{n=1}^{N} \,\frac{1}{k_{n}}\|U_{\ell}^{n}-U_{\ell}^{n-1}\|_{L^{2}(\Omega)}^{2}\leq C_{1} ^{2}\,,\]
and standard approximation properties for \(\tilde{\varphi}^{n}-\varphi^{n-1}\) we conclude that for any fixed test function,
\[\int_{0}^{T}\,\langle\widehat{U}_{\ell},\varphi^{\prime}\rangle \mathrm{d}t-\int_{0}^{T}\,\langle\overline{U}_{\ell},\varphi^{\prime}\rangle \mathrm{d}t\to 0,\qquad\ell\to\infty\,. \tag{92}\]
We can conclude therefore that \(u_{(1)}=u_{(2)}=u\) and thus,
\[\widehat{U}_{\ell}^{\prime}+L\overline{U}_{\ell}-\overline{f}\rightharpoonup u ^{\prime}+Lu-f,\qquad\ell\to\infty\,. \tag{93}\]
The convexity of \(\int_{\Omega_{T}}|\cdot|^{2}\) implies weak lower semicontinuity, that is
\[\int_{\Omega_{T}}|u^{\prime}+Lu-f|^{2}\leq\liminf_{\ell\to\infty}\int_{ \Omega_{T}}|\widehat{U}_{\ell}^{\prime}+L\overline{U}_{\ell}-\overline{f}|^{2} \tag{94}\]
and therefore we conclude that \(\mathcal{G}(u)\leq\liminf_{\ell\to\infty}\mathcal{G}_{IE,\ell}(U_{\ell})\).
#### 4.2.3 \(\limsup\) inequality
Let \(w\in H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))\). We will now show the existence of a recovery sequence \((w_{\ell})\) such that \(w_{\ell}\to w\) and \(\mathcal{G}(w)=\lim_{\ell\to\infty}\mathcal{G}_{IE,\ell}(w_{\ell})\). Since \(C^{\infty}(0,T;H^{2}(\Omega))\) is dense in \(L^{2}(0,T;H^{2}(\Omega))\) we can select a \((w_{\delta})\subset C^{\infty}(0,T;H^{2}(\Omega))\) with the properties
\[\begin{split}&\|w-w_{\delta}\|_{H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^ {2}(\Omega))}\lesssim\delta\,,\quad\text{and,}\\ &|w_{\delta}^{\prime}|_{H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^ {2}(\Omega))}\lesssim\frac{1}{\delta}|w|_{H^{1}(0,T;L^{2}(\Omega))\cap L^{2}( 0,T;H^{2}(\Omega))}\,.\end{split} \tag{95}\]
If \(w_{\delta,\ell}\in W_{\ell}\) is a neural network function satisfying (66), (67), we would like to show
\[||\widehat{w}_{\delta,\ell}^{\prime}+L\,\overline{w}_{\delta,\ell}-\overline{f} ||_{L^{2}(0,T;L^{2}(\Omega))}\rightarrow||w^{\prime}+Lw-f||_{L^{2}(0,T;L^{2}( \Omega))} \tag{96}\]
where \(\delta=\delta(\ell)\) is appropriately selected. Then,
\[\mathcal{G}_{IE,\ell}(w_{\delta,\ell})\rightarrow\mathcal{G}(w)\,. \tag{97}\]
To this end it suffices to consider the difference
\[\|\widehat{w}_{\delta,\ell}^{\prime}+L\,\overline{w}_{\delta,\ell}-w^{\prime} -Lw\|_{L^{2}(0,T;L^{2}(\Omega))}\,. \tag{98}\]
We have
\[\|\widehat{w}_{\delta,\ell}^{\prime}+L\,\overline{w}_{\delta,\ell }-w^{\prime}-Lw\|_{L^{2}(0,T;L^{2}(\Omega))}\leq \|\widehat{w}_{\delta,\ell}^{\prime}+L\,\overline{w}_{\delta,\ell }-\widehat{w}_{\delta}^{\prime}-L\,\overline{w}_{\delta}\|_{L^{2}(0,T;L^{2}( \Omega))} \tag{99}\] \[+\|\widehat{w}_{\delta}^{\prime}+L\,\overline{w}_{\delta}-w^{ \prime}-Lw\|_{L^{2}(0,T;L^{2}(\Omega))}\] \[=:A_{1}+A_{2}\,.\]
To estimate \(A_{1}\) we proceed as follows: Let \(\theta_{\ell}(t):=w_{\delta,\ell}(t)-w_{\delta}(t)\,.\) Then,
\[\|\widehat{w}_{\delta,\ell}^{\prime}-\widehat{w}_{\delta}^{\prime }\|_{L^{2}(0,T;L^{2}(\Omega))}^{2} =\sum_{n=1}^{N}\,\int_{I_{n}}\,\big{\|}\,\frac{\theta_{\ell}^{n}- \theta_{\ell}^{n-1}}{k_{n}}\big{\|}_{L^{2}(\Omega)}^{2}\,\ dt \tag{100}\] \[=\sum_{n=1}^{N}\,\frac{1}{k_{n}}\big{\|}\theta_{\ell}^{n}-\theta_ {\ell}^{n-1}\big{\|}_{L^{2}(\Omega)}^{2}\] \[=\sum_{n=1}^{N}\,\frac{1}{k_{n}}\big{\|}\int_{I_{n}}\,\theta_{ \ell}^{\prime}(t)\,\ dt\big{\|}_{L^{2}(\Omega)}^{2}\] \[\leq\sum_{n=1}^{N}\,\frac{1}{k_{n}}\int_{I_{n}}\big{\|}\theta_{ \ell}^{\prime}(t)\big{\|}_{L^{2}(\Omega)}^{2}\,\ dt\,k_{n}\] \[=\|\theta_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}^{2}\,.\]
Similarly,
\[\|L\,\overline{w}_{\delta,\ell} -L\,\overline{w}_{\delta}\|_{L^{2}(0,T;L^{2}(\Omega))}=\Big{\{} \sum_{n=1}^{N}\,\int_{I_{n}}\,\big{\|}L\,\theta_{\ell}^{n}\big{\|}_{L^{2}( \Omega)}^{2}\,\ dt\Big{\}}^{1/2} \tag{101}\] \[\leq\Big{\{}\sum_{n=1}^{N}\,k_{n}\,\big{\|}L\,\theta_{\ell}^{n}- \frac{1}{k_{n}}\int_{I_{n}}L\,\theta_{\ell}(t)\mathrm{d}t\big{\|}_{L^{2}( \Omega)}^{2}\,\Big{\}}^{1/2}+\Big{\{}\sum_{n=1}^{N}\,\int_{I_{n}}\,\big{\|}L \,\theta_{\ell}(t)\big{\|}_{L^{2}(\Omega)}^{2}\,\ dt\Big{\}}^{1/2}\] \[=\Big{\{}\sum_{n=1}^{N}\,k_{n}\,\big{\|}L\,\theta_{\ell}^{n}- \frac{1}{k_{n}}\int_{I_{n}}L\,\theta_{\ell}(t)\mathrm{d}t\big{\|}_{L^{2}( \Omega)}^{2}\,\Big{\}}^{1/2}+\|L\,\theta_{\ell}\|_{L^{2}(0,T;L^{2}(\Omega))}\,.\]
It remains to estimate,
\[\Big{\{}\sum_{n=1}^{N}\,k_{n}\left\|L\,\theta_{\ell}^{n}-\frac{1}{k_{n}}\int_{I_{n}}L\,\theta_{\ell}(t)\,\mathrm{d}t\right\|_{L^{2}(\Omega)}^{2}\Big{\}}^{1/2}=\Big{\{}\sum_{n=1}^{N}\,\frac{1}{k_{n}}\left\|\int_{I_{n}}\left[L\,\theta_{\ell}^{n}-L\,\theta_{\ell}(t)\right]\mathrm{d}t\right\|_{L^{2}(\Omega)}^{2}\Big{\}}^{1/2} \tag{102}\] \[\leq\Big{\{}\sum_{n=1}^{N}\,\frac{1}{k_{n}}\Big{[}\int_{I_{n}}\int_{I_{n}}\left\|L\,\theta_{\ell}^{\prime}(s)\right\|_{L^{2}(\Omega)}\,\mathrm{d}s\,\mathrm{d}t\Big{]}^{2}\Big{\}}^{1/2}=\Big{\{}\sum_{n=1}^{N}\,k_{n}\Big{[}\int_{I_{n}}\left\|L\,\theta_{\ell}^{\prime}(t)\right\|_{L^{2}(\Omega)}\,\mathrm{d}t\Big{]}^{2}\Big{\}}^{1/2}\leq k\,\|L\,\theta_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}\,.\]
We conclude therefore that, with \(k=\max_{n}k_{n},\)
\[A_{1}\leq\|\theta_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}+\|L\,\theta_{\ell}\|_{L^{2}(0,T;L^{2}(\Omega))}+k\,\|L\,\theta_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}\,. \tag{103}\]
On the other hand, standard time interpolation estimates together with (95) yield,
\[A_{2}\leq C\,k\left[\|w_{\delta}^{\prime\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}+\|L\,w_{\delta}^{\prime}\|_{L^{2}(0,T;L^{2}(\Omega))}\right]+C\,\delta\,. \tag{104}\]
Hence, using (66), (67), (95), we have
\[A_{1}+A_{2}\leq\beta_{\ell}(w_{\delta})+\frac{k}{\delta^{m+1}}\tilde{\beta}_{\ell}\|w\|_{L^{2}(0,T;H^{2}(\Omega))}+C\frac{k}{\delta}\|w\|_{H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega))}+C\,\delta\,. \tag{105}\]
Therefore, we conclude that (96) holds upon selecting \(\delta=\delta(\ell,k)\) appropriately.
#### 4.2.4 Convergence of the minimisers
In this subsection, we conclude the proof that the sequence of discrete minimisers \((u_{\ell})\) converges in \(L^{2}(0,T;H^{1}(\Omega))\) to the minimiser of the continuous problem.
**Theorem 14** (Convergence): _Let \(\mathcal{G},\ \mathcal{G}_{IE,\ell}\) be the energy functionals defined in (65) and (80) respectively. Let \(u\) be the exact solution of (64) and let \((u_{\ell}),\)\(u_{\ell}\in W_{\ell},\) be a sequence of minimisers of \(\mathcal{G}_{IE,\ell}\), i.e._
\[\mathcal{G}_{IE,\ell}(u_{\ell})=\inf_{v_{\ell}\in W_{\ell}}\mathcal{G}_{IE, \ell}(v_{\ell})\,. \tag{106}\]
_Then,_
\[\hat{u}_{\ell}\to u,\ \ \text{in}\ \ L^{2}(0,T;H^{1}(\Omega)), \tag{107}\]
_where \(\hat{u}_{\ell}\) is defined by (82)._
* **Proof** Let \(u\in L^{2}(0,T;H^{2}(\Omega))\cap H^{1}(0,T;L^{2}(\Omega))\) be the solution of (64). Consider the sequence of minimisers \((u_{\ell})\). Obviously, \[\mathcal{G}_{IE,\ell}(u_{\ell})\leq\mathcal{G}_{IE,\ell}(v_{\ell}),\qquad\text{for all }v_{\ell}\in W_{\ell}\,.\] In particular, \[\mathcal{G}_{IE,\ell}(u_{\ell})\leq\mathcal{G}_{IE,\ell}(\tilde{u}_{\ell}),\]
where \(\tilde{u}_{\ell}\) is the recovery sequence \(w_{\delta,\ell}\) corresponding to \(w=u\) constructed above. Hence, we conclude that the sequence \(\mathcal{G}_{IE,\ell}(u_{\ell})\) is uniformly bounded. The stability-equi-coercivity of the discrete functional, see Proposition 13, implies that
\[\|\overline{u}_{\ell}\|_{L^{2}(0,T;H^{2}(\Omega))}+\|\widehat{u}_{\ell}\|_{L^{ 2}(0,T;H^{2}(\Omega))}+\|\widehat{u}_{\ell}^{\prime}\|_{L^{2}(0,T;L^{2}( \Omega))}\leq C\,. \tag{108}\]
The Aubin-Lions theorem ensures that there exists \(\tilde{u}\in L^{2}(0,T;H^{1}(\Omega))\) such that \(\widehat{u}_{\ell}\rightarrow\tilde{u}\) in \(L^{2}(0,T;H^{1}(\Omega))\) up to a subsequence not re-labeled. Furthermore, the previous analysis shows that \(L\tilde{u}\in L^{2}(0,T;L^{2}(\Omega))\,.\) To prove that \(\tilde{u}\) is the minimiser of \(\mathcal{G},\) and hence \(\tilde{u}=u,\) we combine the results of Sections 4.2.2 and 4.2.3: Let \(w\in H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega)).\) We have shown the existence of a recovery sequence \((w_{\ell})\) such that \(w_{\ell}\to w\) and
\[\mathcal{G}(w)=\lim_{\ell\rightarrow\infty}\mathcal{G}_{IE,\ell}(w_{\ell}).\]
Therefore, the \(\liminf\) inequality and the fact that \(u_{\ell}\) are minimisers of the discrete problems imply that
\[\mathcal{G}(\tilde{u})\leq\liminf_{\ell\rightarrow\infty}\mathcal{G}_{IE,\ell }(u_{\ell})\leq\limsup_{\ell\rightarrow\infty}\mathcal{G}_{IE,\ell}(u_{\ell}) \leq\limsup_{\ell\rightarrow\infty}\mathcal{G}_{IE,\ell}(w_{\ell})=\mathcal{G }(w), \tag{109}\]
for all \(w\in H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{2}(\Omega)).\) Therefore \(\tilde{u}\) is the minimiser of \(\mathcal{G},\) hence \(\tilde{u}=u\) and the entire sequence satisfies
\[\hat{u}_{\ell}\to u,\;\;\;\text{in}\;\;L^{2}(0,T;H^{1}(\Omega)).\]
Therefore the proof is complete. \(\blacksquare\)
#### 4.2.5 Explicit time discrete training
It will be interesting to consider a seemingly similar (from the point of view of quadrature and approximation) discrete functional:
\[\mathcal{G}_{k,EE}(v)=\sum_{n=1}^{N}\,k_{n}\,\int_{\Omega}\big{|}\frac{v^{n}-v^{n-1}}{k_{n}}+Lv^{n-1}-f^{n-1}\big{|}^{2}\;\mathrm{d}x+\,\int_{\Omega}|v^{0}-u^{0}|^{2}\,\mathrm{d}x, \tag{110}\]
and compare its properties to the functional \(\mathcal{G}_{IE,k}(v)\) and the corresponding \(V_{\mathcal{N}}\) minimisers. The functional (110) is related to _explicit_ Euler discretisation in time, as opposed to the _implicit_ Euler discretisation in time for \(\mathcal{G}_{IE,k}(v).\) Clearly, in the discrete minimisation framework, both energies are fully implicit, since the evaluation of the minimisers involves the solution of global space-time problems. It is therefore rather interesting that these two energies result in completely different stability properties.
Let us first note that it does not appear possible to prove a discrete coercivity bound such as (85). Indeed, an argument similar to (88) is possible, but with the crucial difference that the second to last term of this relation will be negative instead of positive (see the identity displayed after this paragraph). This is a fundamental point, directly related to the (in)stability of the forward Euler method. Typically, for finite difference forward Euler schemes, one is required to assume a strong CFL condition of the form \(k\leq Ch^{2},\) where \(h\) is the spatial discretisation parameter, to preserve stability. It appears that a phenomenon of a similar nature is present in our case as well. Although we do not show stability bounds when spatial training is taking place, the numerical experiments show that the stability behaviour of the explicit training method deteriorates when we increase the number of spatial training points while keeping \(k\) constant. These stability considerations are verified by the numerical experiments we present below. Indeed, these computations provide convincing evidence that coercivity bounds similar to (85) are necessary for stable behaviour of the approximations. In the computations we solve the one dimensional heat equation with zero boundary conditions and two different initial values plotted in black. All runs were performed using the package _DeepXDE_, [33], with random spatial training and constant time step.
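The sign change referred to above can be made explicit. Assuming the coefficients \(a_{ij}\) are symmetric, so that \(\langle Lv,w\rangle=\langle Lw,v\rangle\) for \(v,w\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\), the analogue of the cross term in (88) for the functional (110) reads
\[2\sum_{n=1}^{N}\,\big{\langle}U^{n}-U^{n-1},L\,U^{n-1}\big{\rangle}=\langle L\,U^{N},U^{N}\rangle-\langle L\,U^{0},U^{0}\rangle-\sum_{n=1}^{N}\,\big{\langle}L\,\big{[}U^{n}-U^{n-1}\big{]},U^{n}-U^{n-1}\big{\rangle}\,,\]
so the quadratic increment term now appears with a negative sign and the argument leading to (89) breaks down.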
2301.02647 | Universal adaptive optics for microscopy through embedded neural network control | The resolution and contrast of microscope imaging is often affected by aberrations introduced by imperfect optical systems and inhomogeneous refractive structures in specimens. Adaptive optics (AO) compensates these aberrations and restores diffraction limited performance. A wide range of AO solutions have been introduced, often tailored to a specific microscope type or application. Until now, a universal AO solution -- one that can be readily transferred between microscope modalities -- has not been deployed. We propose versatile and fast aberration correction using a physics-based machine learning (ML) assisted wavefront-sensorless AO control method. Unlike previous ML methods, we used a bespoke neural network (NN) architecture, designed using physical understanding of image formation, that was embedded in the control loop of the microscope. The approach means that not only is the resulting NN orders of magnitude simpler than previous NN methods, but the concept is translatable across microscope modalities. We demonstrated the method on a two-photon, a three-photon and a widefield three-dimensional (3D) structured illumination microscope. Results showed that the method outperformed commonly-used modal-based sensorless AO methods. We also showed that our ML-based method was robust in a range of challenging imaging conditions, such as extended 3D sample structures, specimen motion, low signal to noise ratio and activity-induced fluorescence fluctuations. Moreover, as the bespoke architecture encapsulated physical understanding of the imaging process, the internal NN configuration was no-longer a ``black box'', but provided physical insights on internal workings, which could influence future designs. | Qi Hu, Martin Hailstone, Jingyu Wang, Matthew Wincott, Danail Stoychev, Huriye Atilgan, Dalia Gala, Tai Chaiamarit, Richard M. Parton, Jacopo Antonello, Adam M. Packer, Ilan Davis, Martin J. Booth | 2023-01-06T18:48:52Z | http://arxiv.org/abs/2301.02647v3 | # Universal adaptive optics for microscopy through embedded neural network control
###### Abstract
The resolution and contrast of microscope imaging is often affected by aberrations introduced by imperfect optical systems and inhomogeneous refractive structures in specimens. Adaptive optics (AO) compensates these aberrations and restores diffraction limited performance. A wide range of AO solutions have been introduced, often tailored to a specific microscope type or application. Until now, a universal AO solution - one that can be readily transferred between microscope modalities - has not been deployed. We propose versatile and fast aberration correction using a physics-based machine learning assisted wavefront-sensorless AO control (MLAO) method. Unlike previous ML methods, we used a bespoke neural network (NN) architecture, designed using physical understanding of image formation, that was embedded in the control loop of the microscope. The approach means that not only is the resulting NN orders of magnitude simpler than previous NN methods, but the concept is translatable across microscope modalities. We demonstrated the method on a two-photon, a three-photon and a widefield three-dimensional (3D) structured illumination microscope. Results showed that the method outperformed commonly-used modal-based sensorless AO methods. We also showed that our ML-based method was robust in a range of challenging imaging conditions, such as extended 3D sample structures, specimen motion, low signal to noise ratio and activity-induced fluorescence fluctuations. Moreover, as the bespoke architecture encapsulated physical understanding of the imaging process, the internal NN configuration was no-longer a "black box", but provided physical insights on internal workings, which could influence future designs.
## Introduction
The imaging quality of high-resolution optical microscopes is often detrimentally affected by aberrations which result in compromised scientific information in the images. These aberrations can arise from imperfections in the optical design of the microscope, but are most commonly due to inhomogeneous refractive index structures within the specimen. Adaptive optics (AO) has been built into many microscopes, restoring image quality through aberration correction by reconfigurable elements, such as deformable mirrors (DMs) or liquid crystal spatial light modulators (LC-SLMs).[1, 2, 3, 4, 5, 6] Applications of AO-enabled microscopes have ranged from deep tissue imaging in multiphoton microscopy through to the ultra-high resolution required for optical nanoscopy. This range of applications has led to a wide variety of AO solutions that have invariably been tailored to a specific microscope modality or application.
There are two main classes of AO operation: in one case, a wavefront sensor measures aberrations; in the other case, aberrations are inferred from images - so called "wavefront sensorless AO", or "sensorless AO" for short. For operations with a wavefront sensor, phase aberrations are measured directly by wavefront sensors such as a Shack-Hartmann sensor[7, 8] or an interferometer[9, 10, 11]. Such operations are direct and fast but also have intrinsic disadvantages such as requiring a complex optical design and suffering from non-common path errors. Furthermore, such wavefront sensors often have limitations and are less versatile. For example, an interferometer requires a coherent source and all such methods suffer from problems due to out-of-focus light. On the other hand, sensorless AO methods normally function with a simpler optical design and thus are more easily adaptable for a wide range of imaging applications. However, sensorless AO methods are based on iterative deductions of phase aberrations and thus tend to be more time consuming; this is coupled with repeated and prolonged sample exposures, which inevitably lead to photo-damage or motion related errors.
There have been many developments in AO technology, and in particular sensorless AO methods. Conventionally, sensorless AO operates based on the principle that the optimal image quality corresponds to the best aberration correction[12, 13]. A suitably defined metric, such as the total signal intensity[14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27] or a spatial frequency based sharpness metric[28, 29, 30, 31, 32, 33], is used to quantify the
image quality. Phase is modulated by the AO while this quality metric reading is measured and optimised. There have been discussions on how the phase should be modulated [12, 24, 34, 35] and how the optimisation algorithm should be designed [36, 37, 38, 21]. However, as mentioned before, such "conventional" sensorless AO methods depend on iterative optimisation of a scalar metric, where all image information is condensed into a single number, and the optimisation process is usually through mode by mode adjustment. Such methods were thus not the most efficient approach to solving this multi-dimensional optimisation problem and the effective range of correction was limited. While a higher dimensional metric was considered to extract more information from images [39], the optimisation of such a vector metric was not straightforward.
While the utility of each of these conventional sensorless AO methods has been demonstrated separately, each method had been defined for a particular microscope type and application. Until now, no such AO solution has been introduced that can be universally transferred between microscope modalities and applications.
We propose in this article a new approach to sensorless AO (named as MLAO) that addresses the limitations of previous methods and provides a route to a universal AO solution that is applicable to any form of microscopy. This solution is constructed around a physics-based machine learning (ML) framework that incorporates novel neural network (NN) architectures with carefully crafted training procedures, in addition to data pre-processing that is informed by knowledge of the image formation process of the microscope. The resulting NN is embedded into the control of the microscope, improving the efficiency and range of sensorless AO estimation beyond that possible with conventional methods. This approach delivers versatile aberration measurement and correction that can be adapted to the application, such as the correction of different types of aberration, over an increased range of aberration size, across different microscope modalities and specimens.
In recent years, machine learning (ML) has been trialled in AO for its great computational capability to extract and process information. However, many of these approaches required access to point spread functions (PSFs) or experimentally acquired bead images [40, 41, 42, 43, 44, 45, 46] ; these requirements limited the translatability of these methods to a wider range of applications. Reinforcement learning was applied to correct for phase aberrations when imaging non point-like objects [47]; however, the method still involved iterative corrections and was not advantageous in terms of its correction efficiency, accuracy and correction working range compared to conventional sensorless AO algorithms. Untrained neural networks (NN) were used to determine wavefront phase and were demonstrated on non point-like objects [48, 49]; however, such methods were reported to normally require a few minutes of network convergence, which limits their potential in live imaging applications.
Our new approach differs considerably from previous ML assisted aberration estimation, as previous methods mostly employed standard deep NN architectures that used raw images as the input data. Our method builds upon physical knowledge of the imaging process and is designed around the abilities of the AO to introduce aberration biases, which improve the information content of the NN input data. This approach means that the resulting NN is orders of magnitude simpler, in terms of trainable parameters, than previous NN methods (See Table S1 in supplemental document). Furthermore, our method is readily translatable across microscope modalities. As NN training is carried out on a synthetic data set, adaptation for a different modality simply requires regeneration of the image data using a new imaging model. The NN architecture and training process are otherwise similar.
To illustrate the versatility of this concept, we have demonstrated the method on three different types of fluorescence microscopes with different forms of AO corrector: a two-photon (2-P) microscope using a SLM, a three-photon (3-P) intravital microscope using a DM, and a widefield three dimensional (3-D) structured illumination microscope (SIM) using a DM. In all cases, we showed that the new method outperformed commonly used conventional sensorless AO methods. The results further showed that the ML-based method was robust in a range of challenging imaging conditions, such as specimen motion, low signal to noise ratio, and fluorescence fluctuations. Moreover, as the bespoke architecture encapsulated into its design physical understanding of the imaging process, there was a link between the weights in the trained NN and physical properties of the imaging process. This means that the internal NN configuration needs no-longer to be considered as a "black box", but can be used to provide physical insights on internal workings and how information about aberrations is encoded into images.
## Concept and implementation
The overall MLAO concept is illustrated in Figure 1. The experimental application follows closely the concept of modal sensorless AO, whereby a sequence of images are taken, each with a different bias aberration applied using the adaptive element. The set of images are then used as the input to the ML-enabled estimator, which replaces the previous conventional method of optimisation of an image quality metric. The estimated correction aberration is then applied to the adaptive element. If necessary, the process can be iterated for refined correction. The significant advantage of the new method is the way in which the estimator can more efficiently use image information to determine the aberration correction.
The concept has been designed in order to achieve particular capabilities that extend beyond those of conventional sensorless AO. The new method should ideally achieve more efficient aberration estimation from fewer images, to reduce time and exposure of measurement. It should operate over a larger range of aberration amplitudes, compared to previous methods. A particular estimator should be robust to variations between similar microscopes and the concept should be translatable across different microscope types and applications. From a practical perspective, it is also important that training can be performed on synthetic data, as it would be impractical to obtain the vast data set necessary for training from experimentally obtained images.

Figure 1: The MLAO concept. (a) Overview of the AO correction process. A minimum of two bias aberrations were introduced by the adaptive element; corresponding images of the same field were captured. The images were passed to the MLAO estimator, which determined the Zernike coefficients for correction. The correction speed was limited only by the speed of image acquisition, not by computation. Further correction could optionally be implemented through iteration. (b) Image pre-processing and NN architecture. Images were pre-processed to compute pseudo-PSFs, which were predominantly independent of specimen structure. \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) represent the forward and inverse Fourier transform, respectively. A central cropped region of the pseudo-PSF images was used as the input to a CNN. The CNN was designed and trained specifically for aberration determination. The output from the overall network was the correction coefficients for the Zernike modes. The NN architecture was such that the convolutional layer outputs could be correlated with spatial scales of the aberration effects on the pseudo-PSFs and hence the imaging process. Hence, the distribution of weights in the network had physical relevance. (c) Training data generation. A range of image variations were included in the synthetic data set for NN training to cope with variations in real experimental scenarios. The data were a combination of artificial and real microscope images, chosen to model a wide range of realistic specimen structures. Images were created through convolution of specimen structures with an appropriate PSF, generated for the specific microscope modality, incorporating aberrations.
An essential step towards efficient use of image data is the pre-processing applied to images before they are presented to the NN. Rather than taking raw image data as the inputs, the NN receives pre-processed data calculated from pairs of biased images, which we term a "pseudo-PSF", as shown in Fig. 1 and explained in the methods section. This pseudo-PSF contains information about the input aberration and is mostly independent of the unknown specimen structure. By removing the specimen information at this stage, we can reduce the demands on the subsequent NN, hence vastly simplifying the architecture required to retrieve the aberration information.
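As a concrete illustration of this pre-processing step, the sketch below assumes one natural construction consistent with the description: since both biased images share the same specimen, dividing their spectra cancels the unknown object contribution (\(\mathcal{F}(I_{1})/\mathcal{F}(I_{2})=\mathrm{OTF}_{1}/\mathrm{OTF}_{2}\) for \(I_{i}=\mathrm{object}\ast\mathrm{PSF}_{i}\)), and an inverse transform then gives a predominantly object-independent map. The regularisation constant and normalisation are illustrative assumptions; the exact recipe is given in the methods section.

```python
# Sketch of computing a pseudo-PSF from two differently biased images.
# Assumes I_i = object * PSF_i, so F(I1)/F(I2) = OTF1/OTF2 is
# (approximately) independent of the specimen structure.
import numpy as np

def pseudo_psf(img1: np.ndarray, img2: np.ndarray,
               crop: int = 32, eps: float = 1e-3) -> np.ndarray:
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    ratio = F1 * np.conj(F2) / (np.abs(F2) ** 2 + eps)  # regularised division
    p = np.abs(np.fft.fftshift(np.fft.ifft2(ratio)))    # object-independent map
    cy, cx = p.shape[0] // 2, p.shape[1] // 2
    h = crop // 2
    return p[cy - h:cy + h, cx - h:cx + h]              # central 32 x 32 region
```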
As most of the useful information related to aberrations was contained within the central pixels of the pseudo-PSF, a region of \(32\times 32\) pixels was extracted as the input to the NN. The first section of the NN was a bespoke convolutional layer that was designed to extract information from the inputs at different spatial scales. The outputs from the convolutional layer were then provided to a fully connected layer, which was connected to the output layer. Full details of the NN design are provided in the methods and the supplementary information. This architecture - rather unusually - provided a link between the physical effects of aberrations on the imaging process and the mechanisms within the NN, specifically through the weights at the output of the first fully connected layer.
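A minimal sketch of such an estimator is given below, assuming \(M\) pseudo-PSF inputs of \(32\times 32\) pixels, one multi-scale convolutional stage (parallel kernels of different sizes standing in for the different spatial scales) and a single fully connected stage; all layer sizes are illustrative and not the configuration detailed in the methods.

```python
# Illustrative estimator: M pseudo-PSF channels in, N Zernike coefficients out.
import torch

class MLAONet(torch.nn.Module):
    def __init__(self, M: int, N: int):
        super().__init__()
        # parallel convolutions with different kernel sizes ~ spatial scales
        self.scales = torch.nn.ModuleList(
            [torch.nn.Conv2d(M, 8, kernel_size=s, padding=s // 2)
             for s in (3, 5, 7)])
        self.fc = torch.nn.Linear(3 * 8 * 32 * 32, 64)   # fully connected stage
        self.out = torch.nn.Linear(64, N)                # Zernike coefficients

    def forward(self, x):                                # x: (batch, M, 32, 32)
        feats = [torch.relu(conv(x)) for conv in self.scales]
        h = torch.relu(self.fc(torch.cat(feats, dim=1).flatten(1)))
        return self.out(h)

net = MLAONet(M=2, N=5)        # e.g. an ast2 MLAO estimator for 5 modes
coeffs = net(torch.randn(1, 2, 32, 32))
```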
NN training was performed using a diverse set of synthesised training data. These images were calculated using an appropriate model of the microscope imaging process in the presence of aberrations. Images were synthesised by convolutions of specimen structures with a PSF, incorporating various likely experimental uncertainties and noise sources. The specimens consisted of a range of artificial and realistic objects. Full details are provided in the methods.
This versatile concept could accommodate different aberration biasing strategies. Conventional modal sensorless AO methods typically required a minimum of \(2N+1\) biased images to estimate \(N\) aberration modes[21]. However, the MLAO method has the ability to extract more information out of the images, such that aberrations could be estimated with as few as two images, although more biased images could provide better-conditioned information. In general, we defined methods that used \(M\) differently biased images to estimate \(N\) Zernike modes for aberration correction. The input layer of the NN was adjusted to accommodate the \(M\) image inputs for each method. Out of the many possibilities, we chose to illustrate the performance using two biasing schemes: one using a single bias mode (astigmatism, Noll index[50] i = 5) and one using all \(N\) modes that were being corrected. In the first case, we used either two or four images (\(M=2\) or 4) each with different astigmatism bias amplitude. We refer to these methods as _ast2 MLAO_ or _ast4 MLAO_. Astigmatism was chosen as the most effective bias mode (see supplementary document, section 6). In the second case, biased images were obtained for all modes being estimated (\(M=2N\) or \(4N\)); this type is referred to in this paper as _2N MLAO_ or _4N MLAO_. For a complete list of the settings for each demonstration, please refer to Table S2 in the supplemental document.
## Results
In order to show its broad application, the MLAO method was demonstrated in three different forms of microscopy: 2-P and 3-P scanning microscopy and widefield 3-D SIM. This enabled testing in different applications to examine its performance coping with different realistic imaging scenarios.
The MLAO methods were compared to two widely used conventional modal based sensorless AO methods (labelled as _2N+1 conv_ and _3N conv_). The _2N+1 conv_ method used two biased images per modulation mode and an additional zero biased image to determine a phase correction consisting of \(N\) modes simultaneously. The _3N conv_ method used three images per modulation mode (two biased and one unbiased image) and determined the coefficients of the modes sequentially. For both methods, the bias size was chosen to be \(\pm 1\) rad for each mode. A suitable metric was selected to quantify the image quality. For each mode, the coefficients were optimised by maximising the quality metric of the corresponding images using a parabolic fitting algorithm. When used in 2-P and 3-P demonstrations, the total fluorescence intensity metric was optimised. For the widefield 3-D SIM microscope, a Fourier based metric was optimised[51]. For the details of the two conventional methods, please refer to [21, 36].
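For reference, the parabolic fitting step admits a closed form (standard quadratic interpolation; the notation here is ours): if \(y_{-},y_{0},y_{+}\) denote the metric values measured at bias coefficients \(-b,0,+b\) for a given mode, fitting \(y(a)=c+\beta a+\alpha a^{2}\) and taking the vertex gives the coefficient estimate
\[a^{*}=-\frac{\beta}{2\alpha}=-\frac{b}{2}\,\frac{y_{+}-y_{-}}{y_{+}-2y_{0}+y_{-}}\,,\]
which is a maximum provided \(y_{+}+y_{-}-2y_{0}<0\).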
Different functions were defined as optimisation metrics for the conventional AO methods, and also to assist quantifiable comparisons of image quality improvement for the MLAO methods. These were defined as an intensity based metric \(y_{1}\), a Fourier based metric \(y_{F}\), and a sharpness metric \(y_{S}\). Details are provided in the methods section.
### Two-photon microscopy
A range of method validations were performed on a 2-P microscope that incorporated a SLM as the adaptive correction element, including imaging bead ensembles and extended specimen structures. The experimental set-up of the 2-P system was included in Figure S8 (a) in the supplemental document. In order to obtain controlled and quantitative comparisons between different AO methods, the SLM was used to both introduce and correct aberrations. This enabled statistical analysis of MLAO
performance with known input aberrations. System aberrations were first corrected using a beads sample before carrying out further experiments.
We performed a statistical analysis to assess how MLAO algorithms (_ast2 MLAO_ and _2N MLAO_) performed in various experimental conditions compared to conventional algorithms (_2N+1 conv_ and _3N conv_). Experiments were conducted on fixed beads samples (Figure 2 (a, b)), and Bovine Pulmonary Artery Endothelial (BPAE) cells (FluoCellsTM Prepared Slide #1) (Figure 2 (c - f)). Dependent on the experiment, either \(N=5\) or \(N=9\) Zernike modes were estimated (see Table S2 in Supplemental document for details).
#### Statistical performance analysis
Figure 2 (a) and (b) showed statistical comparisons of the different correction methods. Figure 2 (a) displayed the residual aberrations gathered from twenty experiments, each consisting of one correction cycle from random initial aberrations including five Zernike modes. If the remaining aberration is below the pre-correction value, then the method provides effective aberration correction. A wide shaded area indicated inconsistent and less reliable correction. The results show that when correcting small aberrations with root mean square (RMS) = 0.63 to 1.19 rad, _2N MLAO_ performed similarly to _2N+1 conv_. Between RMS = 1.19 to 1.92 rad, _2N MLAO_ corrected more accurately (lower mean aberration) and also more reliably (smaller error range). For large aberrations above RMS = 2.12 rad, _2N+1 conv_ completely failed, whereas the MLAO methods still improved aberration correction. _ast2 MLAO_ had poor performance at small aberrations (RMS = 0.63 to 0.84 rad) but provided reasonable correction for large aberrations (RMS = 1.92 to 2.12 rad). However, it is important to note that _ast2 MLAO_ required only two images for each correction cycle, far fewer that the ten and eleven images required respectively for _2N MLAO_ and _2N+1 conv_.
Figure 2 (b) displayed the mean value of metric \(\mathrm{y_{I}}\) from ten experiments against the number of images acquired during multiple iterations of the different correction methods. The corrected aberrations consisted of nine Zernike modes. It was shown that _ast2 MLAO_ corrects the fastest initially when the input aberration is large but converges to a moderate signal level, which indicates only partial correction of the aberration. _2N MLAO_ corrects more quickly and to a higher level than the conventional algorithms. The narrower error bars for both MLAO algorithms at the end of the correction process indicate that they are more reliable than the two conventional methods.
#### Correction on extended specimen structures
Figure 2 (c)-(f) showed experimental results when imaging microtubules of BPAE cells. Specimen regions were chosen to illustrate performance on different structures: (c) contained mainly aligned fine fibrous structures; (d) contained some large scale structures (bottom right); (e) contained fine and sparse features. For (f) we intentionally reduced illumination laser power and increased detector gain to simulate an imaging scenario with very low signal to noise ratio (SNR). The images showed structured noise in the background, which could pose a challenge to estimation performance. A large randomly generated aberration (RMS = 2.12 to 2.23 rad) consisting of five (c and f) or nine (d and e) Zernike modes was used as the input aberration.
In (c), (d) and (e), _ast2 MLAO_ corrected the fastest initially when the aberration was large but converged to a moderate level of correction. _2N MLAO_ corrected faster in general than the conventional methods and converged to a higher level of correction. In (f) when SNR was poor and structured noise was present, _ast2 MLAO_ failed to correct while _2N MLAO_ continued to perform consistently.
### Three-photon intravital microscopy
Three-photon microscopy of neural tissue is particularly challenging for sensorless AO, due to the inherently low fluorescence signal levels. While this could be alleviated by averaging over time, specimen motion creates problems. Further challenges are posed for functional imaging, due to the time dependence of emission from ion or voltage sensitive dyes. The demonstrations here show the robustness of the new MLAO methods in experimental scenarios where the conventional methods were not effective. Importantly, the MLAO methods were able to perform effective correction based on a small number of image frames without averaging.
The experimental set-up of the 3-P system is shown in Figure S8 (b) in the supplemental document. The microscope used an electromagnetic DM for aberration biasing and correction. Two MLAO methods, _ast4 MLAO_ and _4N MLAO_, were used to correct aberrations using single frame images as inputs. In each case, more input frames were chosen than in the 2-P demonstrations, in order to cope with the lower SNR. The NNs were trained to estimate \(N=7\) Zernike modes. Two types of mice were used to perform live brain imaging of green fluorescent protein (GFP) labelled cells (Figure 3 (a)) and functional imaging in GCaMP-expressing neurons (Figure 3 (b)). In Figure 3 (a), results were collected at 660\(\mu m\) depth and the power at the sample was 32 mW. In Figure 3 (b), imaging was at 250\(\mu m\) depth and the power at the sample was 19 mW. Further 3-P results are included in section 8 of the supplemental document. For the details of the sample preparation, please refer to section 9B in the supplemental document.
Figure 3 (a) shows plots of the metrics \(\mathrm{y_{I}}\) and \(\mathrm{y_{F}}\) as proxies for correction quality when imaging GFP labelled cells. Both _ast4 MLAO_ and _4N MLAO_ networks successfully improved the imaging quality. Similar to the _ast2 MLAO_ results in the 2-P
Figure 2: Comparative performance of MLAO methods in a 2-P microscope. (a) Residual aberration after one correction cycle for three methods. Points show the mean and the shaded area indicates the standard deviations (SDs) of aberration distributions. The images show an example field of view (FOV) when different amounts of a random aberration were introduced. (b)-(f) show the intensity metric (\(y_{\mathrm{I}}\)) as a proxy for correction quality, against the number of images used for multiple iterations of correction when random aberrations were introduced. In (b), an ensemble of ten random aberrations were corrected, imaging over the same FOV. Error bars on the plot showed the SD of the fluorescence intensity before and after correction. (c)-(f) show specific corrections imaging microtubules of BPAE cells, illustrating performance for different specimen structures and imaging conditions. The images were acquired before and after correction through the different methods (as marked on the metric plots). Insets on the images show residual wavefronts after correction for each image. The grayscale colorbars show phase in radians.
Figure 3: Aberration correction in three-photon microscopy of live mouse brains: (a) GFP-labelled cells at depth \(660\mu m\) and (b) functional activity of GCaMP-labelled cells at \(250\mu m\). Wavefronts inserted into the figures show the phase modulations applied by the DM at the relevant steps; the common scale for each set of results is indicated by the grayscale bars in (a) and (b). (a) shows on the left example single-frame images used in correction with the corresponding bias modes as insets; these were the image inputs to _ast4 MLAO_. For _4N MLAO_, six more bias modes and thus 24 more images were also used in each iteration. Three images in the central panel are shown averaged from 20 frames after motion correction. The rectangular boxes highlight regions of interest for comparison. The plots on the right show the intensity metric (y\({}_{\rm I}\)) and the Fourier metric (y\({}_{\rm F}\)), respectively, calculated from single image frames, against the number of images acquired for five correction iterations of _ast4 MLAO_ and one correction iteration of _4N MLAO_. (b) shows on the left example single-frame images used as inputs to the _ast4 MLAO_ correction with the corresponding bias modes as insets. White squares highlight two cells for comparison to show the fluorescence fluctuations over time due to neural activity. The central panel shows respectively before and after _ast4 MLAO_ correction through five iterations (iter 1 to 5), 200 frame averages after motion correction. The time traces were taken from the marked line L. The plots on the right show the intensity metric (y\({}_{\rm I}\)) and the sharpness metric (y\({}_{\rm S}\)), respectively, calculated from single image frames, against the number of images acquired for five iterations of _ast4 MLAO_. The lower panel shows the calcium activity of 8 cells (A-H marked on the averaged image).
demonstrations, _ast4 MLAO_ corrected more quickly at first, but converged to a lower correction level. In contrast, _4N MLAO_ performed better overall correction, but required more images. Panels ii-iv show averaged images in which processes previously hidden below the noise level are revealed through MLAO correction (as highlighted in the white rectangles). The example biased images shown in Figure 3 (a) i provide an indication of the low raw-data SNR that the MLAO method can successfully use.
Figure 3 (b) shows results from imaging calcium activity in a live mouse. The _ast4 MLAO_ method successfully improved image quality despite the low SNR and fluorescence fluctuations of the sample. From both the time traces of line L and of cells A-H, it can be clearly seen that signals were increased after correction. The _4N MLAO_ method failed to correct in this experimental scenario (results not shown). We will discuss the likely hypotheses for this in the discussion section.
The fluctuating fluorescence levels due to neural activity mean that conventional metrics would not be effective in sensorless AO optimisation processes. This is illustrated in Figure 3 (b) iv and v, where it can be seen that no single metric can accurately reflect the image quality during the process of _ast4 MLAO_ correction. These observations illustrate the advantages of MLAO methods, as their optimisation process did not rely on any single scalar metric.
### Widefield 3-D structured illumination microscopy
The architecture of the NN was conceived so that it would be translatable to different forms of microscopy. In order to illustrate this versatility, and to complement the previously shown 2-P and 3-P laser scanning systems, we applied MLAO to a widefield method. The 3D SIM microscope included multiple lasers and fluorescence detection channels and an electromagnetic DM as the correction element. Structured illumination patterns were introduced using a focal plane SLM. The detailed experimental set-up is included in Figure S8 (c) in the supplemental document.
Without AO, 3D SIM reconstruction suffers from artefacts caused by aberrations. Since typical specimens contain 3D structures, the lack of optical sectioning in widefield imaging means that the aberration correction process can be affected by out of focus light. As total intensity metrics are not suitable for conventional AO algorithms in widefield imaging, Fourier based sharpness metrics have often been used. However, such metrics depend on the frequency components of the specimen structure [39]. In particular, emission from out of focus planes can also affect the sensitivity and accuracy of correction. The NN based MLAO methods, in contrast, were designed and trained to mitigate the effects of sample structures and out of focus light.
Figure 4 shows results from the two NN-based methods _ast2 MLAO_ and _2N MLAO_ compared to the conventional algorithm _3N conv_, which used the \(y_{\text{S}}\) metric. Sensorless AO was implemented using widefield images as the input (Figure 4 (a, b)). The correction settings thus obtained by the _2N MLAO_ method were then applied to super-resolution 3D SIM operation (Figure 4 (c, d)). \(N=8\) Zernike modes were involved in the aberration determination. The specimen was a multiply labelled _Drosophila_ larval neuromuscular junction (NMJ). For the details of the sample preparation, please refer to section 7B in the supplemental document.
Figure 4 (b) showed that _ast2 MLAO_ corrected most quickly; _2N MLAO_ corrected to a similar level but required more sample exposures; _3N conv_ was less effective. Figure 4 (a) showed the effectiveness of correction on raw and deconvolved widefield images. Part iii showed the changes in the image spectrum after correction. The dashed line shows a threshold where signal falls below the noise level. It can be seen that both (C) _ast2 MLAO_ and (D) _2N MLAO_ increased high frequency content compared to (A) before AO correction and (B) after _3N conv_ correction. Figure 4 (c) and (d) showed the images after 3D SIM reconstruction. It can be clearly seen that when by-passing AO (i), there were strong artefacts due to aberrations. After correcting using five iterations of _2N MLAO_, artefacts were suppressed and z-resolution was improved (see sections through lines 1 and 2 in Figure 4 (d)).
## Discussion
The power and simplicity of the MLAO method arise mainly from a combination of three aspects: the pre-processing of image data, the bespoke NN architecture, and the definition of the training data set. All of these aspects are informed by physical and mathematical principles of image formation. This forms a contrast with many other data-driven deep learning approaches, where complex NNs are trained using vast amount of acquired data.
The calculation of the pseudo-PSF from a pair of biased images (as shown in Figure 1 (c) and elaborated in the Methods) acts to remove most of the effects of unknown specimen structure from the input data. The information contained within the pseudo-PSF encodes indirectly how aberrations affect the imaging PSF (see Figure S2 in the supplemental document for more details). There is a spatial correspondence between a pixel in the pseudo-PSF and the PSF itself. Hence, spatial correlations across the pseudo-PSF relate to spatial effects of aberrations on the images.
The set of pseudo-PSFs forms the input to the convolutional layers of the NN. The masks in each convolutional layer probe, in effect, different scales across the pseudo-PSF. Hence, one can attribute a correspondence between the output of these layers and the effects aberrations have over different physical scales in the image. Such phenomena are heuristically demonstrated in
Figure 4: Aberration correction in a widefield 3-D structured illumination microscope (SIM). (a) Widefield images acquired A before and B-D after correction through different methods (as marked on the metric plot (b)). The second column shows corresponding deconvolved widefield images. The third column shows corresponding image spectra; dashed lines show the threshold where signal falls below the noise level.
(b) The sharpness metric \(\mathrm{y_{S}}\) against the number of images, for two iterations of _3N conv_, ten iterations of _ast2 MLAO_ and three iterations of _2N MLAO_.
(c, d) 3-D projections of 3-D reconstructed SIM image stack of (c) \(10\mu m\) and (d) \(6\mu m\) when by-passing AO and after five iterations of _2N MLAO_ correction. (d) Square inserts show zoomed-in region for comparison. x-z and y-z sections are shown through lines 1 and 2.
Insets to (a,c and d) show wavefronts corrected by the DM for each image acquisition; phase is shown on the adjacent scale bar.
section 3 of the supplementary information. By extracting relevant weight connections from inside the NN, we can observe embedded physical interpretations of how the machine learned to process aberration information contained in images.
To illustrate this, we extracted from the trained NN the weights between the layer embedding physical interpretations and the next fully connected layer (marked by red arrows in Figure 1 (c)). Going down the convolutional layers, the scale of probed features increases from a single pixel, through small scale features, up to large scale features (as explained in section 3 of the supplemental document). The RMS values of the weights from each convolutional layer are shown in Table 1 for the ensembles of the two classes of MLAO networks used in this paper, _astX MLAO_ and _XN MLAO_ (where \(X=2\) or 4). A full breakdown is provided in Figure S4 of the supplementary document.
The largest weight variation was in the first layer in the _XN MLAO_ NN, which indicates that this algorithm extracts more information from the single pixel detail than from larger scale correlations. In contrast, _astX MLAO_ assigns weights more evenly across all layers. As explained in the supplementary document, the single pixel extraction from the pseudo-PSF is related to the Strehl ratio of the PSF and the intensity information of the images in non-linear systems. Hence, it is expected that the _XN MLAO_ NN, which uses a similar set of bias aberrations to the conventional method, would learn as part of its operation similar behaviour to the conventional algorithm. The same phenomenon can also explain why in 3-P GCaMP imaging of neural activity _astX MLAO_ was less affected by the fluorescence fluctuations than _XN MLAO_, as _astX MLAO_ relies less on overall fluorescence intensity changes. Conversely, _astX MLAO_ generally performed worse than _XN MLAO_ in 2-P imaging when structured noise was present, as _astX MLAO_ used fewer images and hence had access to fewer detectable intensity variations than _XN MLAO_. The fact that _astX MLAO_ had access to less well-conditioned image information may also explain why in general it was able to correct aberrations to a lower final level than _XN MLAO_.
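A hypothetical helper for reproducing such an analysis is sketched below. It assumes access to the kernel of the first fully connected layer after the concatenation and to the sizes of the per-layer feature groups (in the order: input, layer 1, layer 2, and so on); neither the grouping order nor the slicing scheme is taken from the original code.

```python
import numpy as np

def per_layer_weight_rms(dense_kernel, group_sizes):
    """RMS of FCL weights grouped by the originating (pseudo-)layer.

    `dense_kernel` is the (features x 32) weight matrix of the first fully
    connected layer after the concatenation; `group_sizes` lists how many
    concatenated features came from the input and from each conv layer.
    """
    out, start = [], 0
    for size in group_sizes:
        block = dense_kernel[start:start + size]
        out.append(float(np.sqrt(np.mean(block ** 2))))  # per-group RMS
        start += size
    return out
```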
## Conclusion
The MLAO methods achieved the aims explained at the outset. They provided more efficient aberration correction with fewer images over a larger range, reducing time required and specimen exposure. The training procedure, which was based on synthesised data, ensured that the AO correction was robust to uncertainty in microscope properties, the presence of noise, and variations in specimen structure. The concept was translatable across different microscope modalities, simply requiring training using a revised imaging model.
The new methods used NN architectures that are orders of magnitude simpler, in terms of trainable parameters, than in previous similar work (see supplementary information, section 5). This vast simplification was achieved through pre-processing of data to remove most of the effects of unknown specimen structure. The physics-informed design of the NN also meant that - unusually for most NN applications - the learned weights inside the network provided indications of the physical information used by the network. This provides constructive feedback that can inform future AO system designs and the basis for extension of the MLAO concept to more demanding tasks in microscopy and other imaging applications.
## Methods
### Image pre-processing
Image data were pre-processed before being used by the NN, in order to remove effects of the unknown specimen structure. The resulting "pseudo-PSFs" were better conditioned for the extraction of aberration information, independently of the specimen. The image formation can be modelled as a convolution between specimen fluorescence distribution and an intensity PSF. The AO introduced pre-chosen bias aberrations, so that multiple images with different PSFs could be acquired over the same FOV. Mathematically, this process can be expressed as
\[I_{1}=O*f_{1}+\delta_{1},\qquad I_{2}=O*f_{2}+\delta_{2} \tag{1}\]
where \(I_{1}\) and \(I_{2}\) were the images acquired with two different PSFs \(f_{1}\) and \(f_{2}\) for the same unknown specimen structure \(O\). \(\delta_{1}\) and \(\delta_{2}\) represent combined background and noise in each image. In order to remove (or at least reduce) the effects of specimen
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Layer & 1 & 2 & 3 & 4 & 5 \\ \hline _astX MLAO_ & 0.23 & 0.19 & 0.17 & 0.18 & 0.23 \\ \hline _XN MLAO_ & 0.39 & 0.14 & 0.15 & 0.13 & 0.20 \\ \hline \end{tabular}
\end{table}
Table 1: The RMS of the weight distributions extracted from different convolutional layers of the two classes of trained CNNs, _astX MLAO_ and _XN MLAO_. The values shown are calculated from the ensemble of corresponding layers from all CNNs of the given class.
structures, we defined the pseudo-PSF as
\[\text{pseudo-PSF}=\mathcal{F}^{-1}\left[\frac{\mathcal{F}(I_{1})}{\mathcal{F}(I_{2})}\right]=\mathcal{F}^{-1}\left[\frac{\mathcal{F}(O*f_{1}+\delta_{1})}{\mathcal{F}(O*f_{2}+\delta_{2})}\right]=\mathcal{F}^{-1}\left[\frac{\mathcal{F}(O)\times\mathcal{F}(f_{1})+\mathcal{F}(\delta_{1})}{\mathcal{F}(O)\times\mathcal{F}(f_{2})+\mathcal{F}(\delta_{2})}\right]\]
where \(\mathcal{F}\) was the 2D Fourier transform and \(\mathcal{F}^{-1}\) was its inverse (see Figure 1 (c)). The term "pseudo-PSF" was chosen as the function was defined in the same variable space as a PSF, although it is not used directly in any imaging process. A similar computational process was shown elsewhere for different applications using defocussed images [52]. Assuming the noise is small enough to be neglected
\[\text{pseudo-PSF}=\mathcal{F}^{-1}\left[\frac{\mathcal{F}(I_{1})}{\mathcal{F}( I_{2})}\right]\approx\mathcal{F}^{-1}\left[\frac{\mathcal{F}(f_{1})}{ \mathcal{F}(f_{2})}\right] \tag{2}\]
There is an implicit assumption here that there are no zeroes in the object spectrum \(\mathcal{F}(O)\) or the optical transfer function \(\mathcal{F}(f_{2})\). In practice, it was found that a small non-zero value of \(\mathcal{F}(\delta_{2})\) mitigated against any problems caused by this. Furthermore, although structured noise was present in the pseudo-PSFs (see e.g. Figure S1 in the supplemental document), it was found that this did not detrimentally affect data extraction through the subsequent NN. As a further mitigation, we calculated pairs of pseudo-PSFs from pairs of biased input images by swapping the order from \((f_{1},f_{2})\) for the first pseudo-PSF to \((f_{2},f_{1})\) for the second.
Example pseudo-PSFs are shown in Figure S1 and S2 in the Supplemental document. As most information was contained within the central region, to ensure more efficient computation, we cropped the central region (\(32\times 32\) pixels) of the pseudo-PSFs to be used as the input to the NN. Dependent upon the MLAO algorithm, the input to the NN would consist of a single pair of cropped pseudo-PSFs, or multiple pairs corresponding to the multiple pairs of bias aberrations applied in different modes.
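To make the pre-processing concrete, the following is a minimal sketch of the pseudo-PSF computation, assuming single-channel images held as NumPy arrays. The regularising constant `eps` (standing in for the small non-zero \(\mathcal{F}(\delta_{2})\)), the use of the magnitude of the inverse transform, and the centring by `fftshift` are implementation assumptions rather than details taken from the original code.

```python
import numpy as np

def pseudo_psfs(img1, img2, crop=32, eps=1e-3):
    """Sketch: compute the pair of pseudo-PSFs from two bias-modulated images.

    The spectral ratio cancels the unknown specimen spectrum (Eq. 2); `eps`
    regularises near-zeros in the denominator, and the order of the pair is
    swapped to form the second pseudo-PSF.
    """
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)

    def one(num, den):
        p = np.fft.fftshift(np.fft.ifft2(num / (den + eps)))
        p = np.abs(p)                                  # magnitude (an assumption)
        cy, cx = p.shape[0] // 2, p.shape[1] // 2
        h = crop // 2
        return p[cy - h:cy + h, cx - h:cx + h]         # central 32x32 patch

    return one(F1, F2), one(F2, F1)
```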
### Neural network training
To estimate phase aberrations from pseudo-PSFs, a convolution-based neural network was designed incorporating physical understanding of the imaging process and was trained through supervised learning. Synthetic data were used for training and the trained networks were then tested on real AO microscopes. For each imaging modality (i.e. 2-P, 3-P and widefield), a separate training dataset was generated, with the imaging model and parameters adjusted for the different applications.
#### Neural network architecture
A convolutional neural network was designed to determine the aberrations from pseudo-PSFs, while embedding physical understanding of image formation. The conceptual structure is shown in Figure 1 (c); more specific details of the architecture and learning process are provided in Section S1 of the supplementary document. This CNN architecture allowed convolutional masks to, in effect, probe different spatial scales within the pseudo-PSF images and, hence, to learn from the effects aberrations had at different spatial scales in microscope images. The outputs from these convolutional layers acted as inputs to a single concatenated fully connected layer (FCL). This was followed by another FCL and then the output layer, whose outputs corresponded to the Zernike mode coefficients estimated for aberration correction. This shallow architecture, with on the order of \(10^{4}\) trainable parameters, was effective because the pre-processing of data meant that the input information was better conditioned for this estimation task than raw images.
The weight connections between the concatenated FCL immediately following the CNN layer and the subsequent FCL (marked in red arrows in Figure 1 (c)) depended upon the significance of the information learnt from the different scales embedded in the CNN layers. Analysis of these weights could therefore provide insight into the pseudo-PSF information that was used by the ML process.
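As an illustration, a network of this kind could be assembled in TensorFlow/Keras as sketched below. The topology (global max-pooling of the input and of every convolutional output, concatenation into an FCL, a 32-neuron FCL and a linear output of \(N\) Zernike coefficients) and the choices of tanh activation, L1L2 regularisation, glorot-uniform initialisation and the AdamW optimizer follow the description in section S1 of the supplement; the number of convolutional layers, the filter counts, the regularisation strengths and the mean-squared-error loss are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_mlao_cnn(m_inputs=2, n_modes=5, n_conv_layers=4, base_filters=8):
    """Sketch of the shallow physics-informed CNN of Figure 1 (c)."""
    inp = layers.Input(shape=(32, 32, m_inputs))       # stack of cropped pseudo-PSFs
    pooled = [layers.GlobalMaxPooling2D()(inp)]        # layer-1 reading: raw maxima
    x, filters = inp, base_filters
    for _ in range(n_conv_layers):
        x = layers.Conv2D(filters, 3, padding="same", activation="tanh",
                          kernel_initializer="glorot_uniform",
                          kernel_regularizer=regularizers.L1L2(1e-5, 1e-4))(x)
        x = layers.MaxPooling2D(2)(x)                  # halve x and y sizes
        pooled.append(layers.GlobalMaxPooling2D()(x))  # per-layer global maxima
        filters *= 2                                   # double the channel count
    h = layers.Concatenate()(pooled)                   # concatenated FCL input
    h = layers.Dense(32, activation="tanh")(h)         # 32-neuron FCL
    out = layers.Dense(n_modes, activation="linear")(h)  # Zernike coefficients
    model = tf.keras.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.AdamW(), loss="mse")
    return model
```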
#### Synthetic data generation
Due to the impracticality of acquiring sufficient high-quality data experimentally, a large dataset of simulated image data was constructed. The simulations were designed to resemble images collected from different microscopes when imaging a range of samples.
We started with a collection of image stacks (containing a total of around 350 images) obtained from high-resolution 3D microscopy of various specimens labelled with nuclear, cytoplasmic membrane and/or single-molecule markers. The images were down-sampled to 8-bit (128\(\times\)128) and separated into their individual channels. This formed a pool of realistic sample structures which were later used to generate synthetic images. To further augment the variety of sample structures, random rotations were applied and synthetic shapes including dots, rings, circular shapes, and curved and straight lines of varying sizes were randomly introduced.
The simulated training dataset was generated by convolving the sample structures with synthetic PSFs, \(f\) (see Eq. 1). \(f\) was modelled as a pixel array through
\[f=\left|\mathcal{F}\left(Pe^{i(\Psi+\Phi+\Xi)}\right)\right|^{l} \tag{3}\]
where \(\mathcal{F}\) represented the 2D discrete Fourier transform. \(P\) was the circular pupil function, defined such that pixels in the region outside the pupil had value zero. The ratio between the radius of the pupil in pixels and the size in pixels of the overall array was adjusted to match the sampling rates of different microscopes. In practical scanning optical microscopes, the sampling rates can be easily adjusted, although perhaps not arbitrarily. Hence, for experimental flexibility, the ratio for the simulated training dataset was tuned to be within the range of 1.0\(\times\) to 1.2\(\times\) the base sampling rate. The base sampling rate was defined as using two pixels to sample the full width half maximum (FWHM) of the PSF of the aberration-free system. For the widefield system, the ratio was tuned to simulate the projection of the camera pixel sampling rate at the specimen. Figure S6 in the supplemental document shows how tolerant a trained network was when tested on data collected at different pixel sampling rates. \(P\) also incorporated the illumination profile of the different practical imaging systems, such as the truncated Gaussian illumination at the pupil in the 3-P microscope. The exponent \(l\) varied with the imaging mode: when simulating a 3-P, a 2-P and a widefield microscope, \(l\) was set to 6, 4 and 2 respectively.
The total aberration was expressed as a sum of chosen Zernike polynomial modes \(\Psi+\Phi+\Xi=\sum_{i}a_{i}Z_{i}\). \(\Psi\) was the sum of the randomly generated specimen aberrations, which included all modes that the AO system was designed to correct. \(\Phi\) represented the additional bias aberrations. \(\Xi\) included additional non-correctable higher order Zernike modes. The coefficients of the correctable modes were randomly generated for each data set. Representing the set of coefficients \(\{a_{i}\}\) as a vector \(\mathbf{a}\), the random coefficients followed a modified uniform n-sphere distribution [53] in which both the direction and the two-norm of \(\mathbf{a}\) were uniformly distributed. The maximum two-norm (size) of \(\mathbf{a}\) was chosen differently for different imaging applications. This distribution gave a denser population close to zero aberration, which was intuitively beneficial for training a stable NN. We also added small random errors to the correctable coefficients so that the labels were slightly inaccurate. This simulated situations where the AO would be incapable of introducing perfect Zernike modes. Spurious high order non-correctable Zernike modes were included to further resemble realistic scenarios in a practical microscope.
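A sketch of this coefficient sampling is given below, assuming the modified uniform n-sphere distribution amounts to a uniformly distributed direction (a normalised Gaussian vector [53]) combined with a uniformly distributed two-norm; the label-error amplitude is an illustrative assumption.

```python
import numpy as np

def sample_coeffs(n_modes, max_norm, label_err=0.05, rng=None):
    """Draw Zernike coefficients with uniform direction and uniform two-norm.

    A Gaussian vector normalised to unit length is uniform on the n-sphere;
    scaling by a uniform radius populates small aberrations densely. Small
    random errors are added to the returned labels to mimic imperfect mode
    generation by the AO element.
    """
    rng = rng or np.random.default_rng()
    d = rng.standard_normal(n_modes)
    a = d / np.linalg.norm(d) * rng.uniform(0.0, max_norm)
    labels = a + label_err * rng.standard_normal(n_modes)
    return a, labels
```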
Poisson, Gaussian, pink and structured noise of varying levels were also introduced to the generated images after the convolution, so that the training dataset simulated real microscope images more closely.
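The core of the image synthesis can be sketched as follows, combining the PSF model of Eq. 3 with the convolution of Eq. 1. The PSF normalisation, the photon scaling and the reduced noise model (Poisson and Gaussian terms only, omitting pink and structured noise) are simplifying assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthetic_psf(pupil, phase, l=4):
    """PSF model of Eq. 3: f = |F(P exp(i phase))|^l on a pixel grid.

    `pupil` encodes the aperture and illumination profile; `phase` is the
    summed Zernike phase map in radians; l = 2, 4, 6 selects the widefield,
    2-P or 3-P imaging model respectively.
    """
    field = pupil * np.exp(1j * phase)
    f = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** l
    return f / f.sum()                       # normalisation is an assumption

def synthetic_image(structure, psf, photons=1e3, gauss_sigma=0.01, rng=None):
    """Convolve a sample structure with the PSF and add simple noise terms."""
    rng = rng or np.random.default_rng()
    img = fftconvolve(structure, psf, mode="same")
    img = rng.poisson(np.clip(img, 0, None) * photons) / photons  # shot noise
    return img + gauss_sigma * rng.standard_normal(img.shape)     # read noise
```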
Note that the scalar Fourier approximation of Eq. 3 was chosen for simplicity, although more accurate, vectorial, high numerical aperture (NA) objective lens models could have been applied [54, 55, 56, 57]. Although the chosen model would deviate from high NA and vectorial effects, the main phenomena under consideration here - namely the effects of phase aberrations on PSFs and images - are adequately modelled by scalar theory.
### Image quality metrics
Different image quality metrics were defined for use as the basis for optimisation in conventional sensorless AO methods and as proxies to quantify the level of aberration correction. \(y_{\mathrm{I}}\) is an intensity based metric and can be used in non-linear imaging systems. It is defined as
\[y_{\mathrm{I}}=\int\int I(x)d^{2}x\]
\(y_{\mathrm{F}}\) is a Fourier based metric and provides an alternative aspect to the intensity metric. It is defined as
\[y_{\mathrm{F}}=\int\int_{0.1f_{max}<|f|<0.6f_{max}}|\mathcal{F}[I(x)]|d^{2}f\]
where \(\mathcal{F}[I(x)]\) is the 2D Fourier transform of image \(I(x)\) from \(x\) domain to \(f\) domain; \(f_{max}\) is the maximum frequency limit of the imaging system. The range \(0.1f_{max}<|f|<0.6f_{max}\) was selected such that most PSF related frequency information was included in the range.
\(y_{\mathrm{S}}\) is a sharpness metric that can be used for optimisation in widefield systems, where the other metrics are not practical, or applications with fluorescence fluctuations. It is defined as
\[y_{\mathrm{S}}=\frac{\int\int_{nf_{max}<|f|<mf_{max}}|\mathcal{F}[I(x)]|d^{2}f}{\int\int_{0<|f|<nf_{max}}|\mathcal{F}[I(x)]|d^{2}f}\]
where \(1>m>n>0\). This metric is defined as the ratio of higher to lower spatial frequency content, which is dependent upon aberration content, but independent of changes in overall brightness.
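For reference, the three metrics can be computed from a single image as sketched below, with spatial frequencies expressed in cycles per pixel; the default values of \(m\) and \(n\) are illustrative user choices subject to \(1>m>n>0\).

```python
import numpy as np

def _radial_freq(shape):
    """Radial spatial frequency grid in cycles per pixel."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.hypot(fy, fx)

def y_intensity(img):
    """y_I: total image intensity."""
    return float(img.sum())

def y_fourier(img, f_max):
    """y_F: spectral content in the band 0.1 f_max < |f| < 0.6 f_max."""
    spec, f = np.abs(np.fft.fft2(img)), _radial_freq(img.shape)
    return float(spec[(f > 0.1 * f_max) & (f < 0.6 * f_max)].sum())

def y_sharpness(img, f_max, m=0.5, n=0.1):
    """y_S: ratio of high to low spatial frequency content (1 > m > n > 0)."""
    spec, f = np.abs(np.fft.fft2(img)), _radial_freq(img.shape)
    high = spec[(f > n * f_max) & (f < m * f_max)].sum()
    low = spec[(f > 0) & (f < n * f_max)].sum()
    return float(high / low)
```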
### Microscope implementations
Three microscopes were used to demonstrate and examine the MLAO method. The microscope implementations are briefly described here and fully elaborated in the supplementary document section 9A.
In the home-built 2-P system, a Newport-Spectra-Physics DeepSee femtosecond laser was used as the illumination source with the wavelength set at 850\(nm\). Light was modulated by a Hamamatsu spatial light modulator before passing through a water immersion objective lens with NA equal to 1.15 and reaching the sample plane.
A commercial Scientifica microscope system was used as the basis for our 3-P demonstration. In the 3-P system, a Ti:Sapphire laser passed through a pair of compressors and operated at 1300\(nm\). Light was modulated by a Mirao 52E deformable mirror before reaching a water dipping objective lens with NA equal to 0.8.
In the home-built widefield 3D SIM system, two continuous wave lasers with wavelengths equal to 488 and 561\(nm\) were used as the illumination. Light was modulated by an ALPAO 69 deformable mirror before reaching a water dipping objective lens with NA of 1.1.
### Image acquisition and processing
For 3-P imaging of live specimens, where motion was present, averaging was performed after inter-frame motion correction using TurboReg [58]. Time traces were taken from 200 raw frames captured at 4 Hz consecutively for each of the pre- and post-MLAO corrections.
For the widefield/SIM results, widefield images were processed where indicated using the Fiji iterative deconvolution 3-D plugin [59]. A PSF for deconvolution was first generated using the Fiji plugin Diffraction PSF 3-D with settings matching the widefield microscope. For the deconvolution, the following settings were applied: Wiener filter gamma equal to 0; both x-y and z direction low pass filter pixels equal to 1; maximum number of iterations equal to 100; and the iteration terminated when the mean delta was smaller than 0.01%.
The thresholds shown on the widefield image spectra were calculated by identifying the largest frequency in all x-y directions with image spectrum components higher than noise level. The noise level was identified by averaging the components of the highest spectral frequency, i.e. at the four corners of the image spectrum. Starting from the lowest frequency, each angular and radial fragment was averaged and compared to the noise level. The largest component which was still above the noise level was traced on the image spectra by the dashed line and identified as the threshold.
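This thresholding procedure can be sketched as follows; the number of radial bins and the corner-selection radius are illustrative assumptions, and for simplicity only the radial (not the angular) fragmentation is shown.

```python
import numpy as np

def spectral_threshold(img, n_radial=64):
    """Largest radius at which the radially averaged spectrum exceeds noise.

    The noise level is estimated by averaging the highest-frequency
    components (near the corners of the shifted spectrum), as described
    above; rings are then scanned outwards from the lowest frequency.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    y, x = np.indices(spec.shape)
    r = np.hypot(y - cy, x - cx)
    noise = spec[r > 0.98 * r.max()].mean()      # corner pixels ~ noise floor
    edges = np.linspace(0.0, r.max(), n_radial + 1)
    threshold = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = spec[(r >= lo) & (r < hi)]
        if ring.size and ring.mean() > noise:
            threshold = hi                        # last ring still above noise
    return threshold
```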
Each 3D-SIM frame was extracted from a set of 15 image frames using the SoftWorx package (Applied Precision) [60]. The projected images were obtained by summing frames at different z depths into an extended focus xy image.
## References
* [1] Booth, M. J. Adaptive optics in microscopy. _Philos. Transactions Royal Soc. Lond. A: Math. Phys. Eng. Sci._**365**, 2829-2843, DOI: 10.1098/rsta.2007.0013 (2007).
* [2] Booth, M. J. Adaptive optical microscopy: the ongoing quest for a perfect image. _Light. Sci. & Appl._**3**, e165-e165, DOI: 10.1038/lsa.2014.46 (2014).
* [3] Booth, M. J. & Patton, B. R. Adaptive Optics for Fluorescence Microscopy. In Cornea, A. & Conn, P. M. (eds.) _Fluorescence Microscopy: Super-Resolution and other Novel Techniques_, 15-33, DOI: 10.1016/B978-0-12-409513-7.00002-6 (Academic Press, Boston, 2014).
* [4] Booth, M., Andrade, D., Burke, D., Patton, B. & Zurauskas, M. Aberrations and adaptive optics in super-resolution microscopy. _Microscopy_**64**, 251-261, DOI: 10.1093/jmicro/dfv033 (2015). [https://academic.oup.com/jmicro/article-pdf/64/4251/26556994/dfv033.pdf](https://academic.oup.com/jmicro/article-pdf/64/4251/26556994/dfv033.pdf).
* [5] Ji, N. Adaptive optical fluorescence microscopy. _Nat. Methods_**14**, 374-380, DOI: 10.1038/nmeth.4218 (2017).
* [6] Hampson, K. M. _et al._ Adaptive optics for high-resolution imaging. _Nat Rev Methods Primers_**1**, DOI: 10.1038/s43586-021-00066-7 (2021).
* [7] Hartmann, J. _Zeitschrift fur Instrumentenkunde_, 1, 33, 97 (Springer, 1904).
* [8] Shack, R. V. & Platt, B. C. Production and use of a lenticular hartmann screen. _J. Opt. Soc. Am._**61**, 656, DOI: 10.1364/JOSA.61.000648 (1971).
* [9] Schwertner, M., Booth, M. & Wilson, T. Characterizing specimen induced aberrations for high NA adaptive optical microscopy. _Opt. Express_**12**, 6540, DOI: 10.1364/opex.12.006540 (2004).
* [10] Booth, M., Wilson, T., Sun, H.-B., Ota, T. & Kawata, S. Methods for the characterization of deformable membrane mirrors. _Appl. Opt._**44**, 5131-5139, DOI: 10.1364/AO.44.005131 (2005).
* [11] Antonello, J., Wang, J., He, C., Phillips, M. & Booth, M. Interferometric calibration of a deformable mirror. [https://doi.org/10.5281/zenodo.3714951](https://doi.org/10.5281/zenodo.3714951), DOI: 10.5281/zenodo.3714951 (2020).
* [12] Hu, Q. _et al._ A universal framework for microscope sensorless adaptive optics: Generalized aberration representations. _APL Photonics_**5**, 100801, DOI: 10.1063/5.0022523 (2020). [https://doi.org/10.1063/5.0022523](https://doi.org/10.1063/5.0022523).
* [13] Hu, Q. _Chapter 4 'Adaptive optics for corrections of phase and polarisation state aberrations in microscopes'_. Ph.D. thesis, University of Oxford (2021).
* [14] Booth, M. J., Neil, M. A. A. & Wilson, T. New modal wave-front sensor: application to adaptive confocal fluorescence microscopy and two-photon excitation fluorescence microscopy. _J. Opt. Soc. Am. A_**19**, 2112-2120, DOI: 10.1364/JOSAA.19.002112 (2002).
* [15] Sherman, L., Ye, J. Y., Albert, O. & Norris, T. B. Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror. _J. Microsc._**206**, 65-71, DOI: [https://doi.org/10.1046/j.1365-2818.2002.01004.x](https://doi.org/10.1046/j.1365-2818.2002.01004.x) (2002). [https://onlinelibrary.wiley.com/doi/pdf/10.1046/j.1365-2818.2002.01004.x](https://onlinelibrary.wiley.com/doi/pdf/10.1046/j.1365-2818.2002.01004.x).
* [16] Marsh, P. N., Burns, D. & Girkin, J. M. Practical implementation of adaptive optics in multiphoton microscopy. _Opt. Express_**11**, 1123-1130, DOI: 10.1364/OE.11.001123 (2003).
* [17] Wright, A. J. _et al._ Exploration of the optimisation algorithms used in the implementation of adaptive optics in confocal and multiphoton microscopy. _Microsc. Res. Tech._**67**, 36-44, DOI: 10.1002/jemt.20178 (2005).
* [18] Jesacher, A. _et al._ Adaptive harmonic generation microscopy of mammalian embryos. _Opt. Lett._**34**, 3154-3156, DOI: 10.1364/OL.34.003154 (2009).
* [19] Debarre, D. _et al._ Image-based adaptive optics for two-photon microscopy. _Opt. Lett._**34**, 2495-2497, DOI: 10.1364/OL.34.002495 (2009).
* [20] Tang, J., Germain, R. N. & Cui, M. Superpenetration optical microscopy by iterative multiphoton adaptive compensation technique. _Proc. Natl. Acad. Sci._**109**, 8434-8439, DOI: 10.1073/pnas.1119590109 (2012).
* [21] Facomprez, A., Beaurepaire, E. & Debarre, D. Accuracy of correction in modal sensorless adaptive optics. _Opt. Express_**20**, 2598, DOI: 10.1364/OE.20.002598 (2012).
* [22] Fiolka, R., Si, K. & Cui, M. Complex wavefront corrections for deep tissue focusing using low coherence backscattered light. _Opt. Express_**20**, 16532-16543, DOI: 10.1364/OE.20.016532 (2012).
* [23] Katz, O., Small, E., Guan, Y. & Silberberg, Y. Noninvasive nonlinear focusing and imaging through strongly scattering turbid layers. _Optica_**1**, 170-174, DOI: 10.1364/OPTICA.1.000170 (2014).
* [24] Kong, L. & Cui, M. In vivo neuroimaging through the highly scattering tissue via iterative multi-photon adaptive compensation technique. _Opt. Express_**23**, 6145-6150, DOI: 10.1364/OE.23.006145 (2015).
* [25] Sinefeld, D., Paudel, H. P., Ouzounov, D. G., Bifano, T. G. & Xu, C. Adaptive optics in multiphoton microscopy: comparison of two, three and four photon fluorescence. _Opt. Express_**23**, 31472-31483, DOI: 10.1364/OE.23.031472 (2015).
* [26] Galwaduge, P. T., Kim, S. H., Grosberg, L. E. & Hillman, E. M. C. Simple wavefront correction framework for two-photon microscopy of in-vivo brain. _Biomed. Opt. Express_**6**, 2997-3013, DOI: 10.1364/BOE.6.002997 (2015).
* [27] Streich, L. _et al._ High-resolution structural and functional deep brain imaging using adaptive optics three-photon microscopy. _Nat. Methods_**18**, 1253-1258, DOI: 10.1038/s41592-021-01257-6 (2021).
* [28] Debarre, D., Booth, M. J. & Wilson, T. Image based adaptive optics through optimisation of low spatial frequencies. _Opt. Express_**15**, 8176-8190, DOI: 10.1364/OE.15.008176 (2007).
* [29] Lee, W. M. & Yun, S. H. Adaptive aberration correction of GRIN lenses for confocal endomicroscopy. _Opt. Lett._**36**, 4608-4610, DOI: 10.1364/OL.36.004608 (2011).
* [30] Gould, T. J., Burke, D., Bewersdorf, J. & Booth, M. J. Adaptive optics enables 3D STED microscopy in aberrating specimens. _Opt. Express_**20**, 20998-21009, DOI: 10.1364/OE.20.020998 (2012).
* [31] Bourgenot, C., Saunter, C. D., Taylor, J. M., Girkin, J. M. & Love, G. D. 3D adaptive optics in a light sheet microscope. _Opt. Express_**20**, 13252-13261, DOI: 10.1364/OE.20.013252 (2012).
* [32] Burke, D., Patton, B., Huang, F., Bewersdorf, J. & Booth, M. J. Adaptive optics correction of specimen-induced aberrations in single-molecule switching microscopy. _Optica_**2**, 177-185, DOI: 10.1364/OPTICA.2.000177 (2015).
* [33] Patton, B. R. _et al._ Three-dimensional STED microscopy of aberrating tissue using dual adaptive optics. _Opt. Express_**24**, 8862-8876, DOI: 10.1364/OE.24.008862 (2016).
* [34] Wang, B. & Booth, M. J. Optimum deformable mirror modes for sensorless adaptive optics. _Opt. Commun._**282**, 4467-4474, DOI: 10.1016/j.optcom.2009.08.010 (2009).
* [35] Milkie, D. E., Betzig, E. & Ji, N. Pupil-segmentation-based adaptive optical microscopy with full-pupil illumination. _Opt. Lett._**36**, 4206-4208, DOI: 10.1364/OL.36.004206 (2011).
* [36] Booth, M. J., Neil, M. A. A., Juskaitis, R. & Wilson, T. Adaptive aberration correction in a confocal microscope. _Proc. Natl. Acad. Sci._**99**, 5788-5792, DOI: 10.1073/pnas.082544799 (2002).
* [37] Wang, F. Wavefront sensing through measurements of binary aberration modes. _Appl. Opt._**48**, 2865-2870, DOI: 10.1364/AO.48.002865 (2009).
* [38] Antonello, J. _et al._ Semidefinite programming for model-based sensorless adaptive optics. _J. Opt. Soc. Am. A_**29**, 2428-2438, DOI: 10.1364/JOSAA.29.002428 (2012).
* [39] Antonello, J., Barbotin, A., Chong, E. Z., Rittscher, J. & Booth, M. J. Multi-scale sensorless adaptive optics: application to stimulated emission depletion microscopy. _Opt. Express_**28**, 16749-16763, DOI: 10.1364/OE.393363 (2020).
* [40] Jin, Y. _et al._ Machine learning guided rapid focusing with sensor-less aberration corrections. _Opt. Express_**26**, 30162-30171, DOI: 10.1364/OE.26.030162 (2018).
* [41] Mockl, L., Petrov, P. N. & Moerner, W. E. Accurate phase retrieval of complex 3D point spread functions with deep residual neural networks. _Appl. Phys. Lett._**115**, 251106, DOI: 10.1063/1.5125252 (2019).
* [42] Vishniakou, I. & Seelig, J. D. Wavefront correction for adaptive optics with reflected light and deep neural networks. _Opt. Express_**28**, 15459-15471, DOI: 10.1364/OE.392794 (2020).
* [43] Cumming, B. P. & Gu, M. Direct determination of aberration functions in microscopy by an artificial neural network. _Opt. Express_**28**, 14511-14521, DOI: 10.1364/OE.390856 (2020).
* [44] Khorin, P. A., Dzyuba, A. P., Serafimovich, P. G. & Khonina, S. N. Neural networks application to determine the types and magnitude of aberrations from the pattern of the point spread function out of the focal plane. _J. Physics: Conf. Ser._**2086**, 012148, DOI: 10.1088/1742-6596/2086/1/012148 (2021).
* [45] Zhang, H. _et al._ Application of adamspgd algorithm to sensor-less adaptive optics in coherent free-space optical communication system. _Opt. Express_**30**, 7477-7490, DOI: 10.1364/OE.451350 (2022).
* [46] Saha, D. _et al._ Practical sensorless aberration estimation for 3D microscopy with deep learning. _Opt. Express_**28**, 29044-29053, DOI: 10.1364/OE.401933 (2020).
* [47] Durech, E., Newberry, W., Franke, J. & Sarunic, M. V. Wavefront sensor-less adaptive optics using deep reinforcement learning. _Biomed. Opt. Express_**12**, 5423-5438, DOI: 10.1364/BOE.427970 (2021).
* [48] Wang, F. _et al._ Phase imaging with an untrained neural network. _Light. Sci. & Appl._**9**, 77, DOI: 10.1038/s41377-020-0302-3 (2020).
* [49] Bostan, E., Heckel, R., Chen, M., Kellman, M. & Waller, L. Deep phase decoder: self-calibrating phase microscopy with an untrained deep neural network. _Optica_**7**, 559-562, DOI: 10.1364/OPTICA.389314 (2020).
* [50] Noll, R. J. Zernike polynomials and atmospheric turbulence. _J. Opt. Soc. Am._**66**, 207-211, DOI: 10.1364/JOSA.66.000207 (1976).
* [51] Hall, N. _Chapter 3.2.2 'Accessible adaptive optics and super-resolution microscopy to enable improved imaging'_. Ph.D. thesis, University of Oxford (2020).
* [52] Xin, Q., Ju, G., Zhang, C. & Xu, S. Object-independent image-based wavefront sensing approach using phase diversity images and deep learning. _Opt. Express_**27**, 26102-26119, DOI: 10.1364/OE.27.026102 (2019).
* [53] Marsaglia, G. Choosing a point from the surface of a sphere. _The Annals Math. Stat._**43**, 645-646, DOI: 10.1214/aoms/1177692644 (1972).
* [54] Ignatowski, V. S. Diffraction by a lens of arbitrary aperture. _Trans. Opt. Inst._**1(4)**, 1, DOI: 10.1017/9781108552264.019 (1919).
* [55] Richards, B., Wolf, E. & Gabor, D. Electromagnetic diffraction in optical systems, ii. structure of the image field in an aplanatic system. _Proc. Royal Soc. London. Ser. A. Math. Phys. Sci._**253**, 358-379, DOI: 10.1098/rspa.1959.0200 (1959). [https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1959.0200](https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1959.0200).
* [56] Stamnes, J. J. _Waves in focal regions : propagation, diffraction, and focusing of light, sound, and water waves_. Adam Hilger series on optics and optoelectronics (Adam Hilger, Bristol, 1986).
* [57] Boruah, B. & Neil, M. Focal field computation of an arbitrarily polarized beam using fast fourier transforms. _Opt. Commun._**282**, 4660-4667, DOI: 10.1016/j.optcom.2009.09.019 (2009).
* [58] Thevenaz, P., Ruttimann, U. & Unser, M. A pyramid approach to subpixel registration based on intensity. _IEEE Transactions on Image Process._**7**, 27-41 (1998).
* [59] Dougherty, R. _Extensions of DAMAS and Benefits and Limitations of Deconvolution in Beamforming_. No. 0 in Aeroacoustics Conferences (American Institute of Aeronautics and Astronautics, 2005).
* [60] Gustafsson, M. G. _et al._ Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination. _Biophys. J._**94**, 4957-4970, DOI: [https://doi.org/10.1529/biophysj.107.120345](https://doi.org/10.1529/biophysj.107.120345) (2008).
## Acknowledgements
This work was supported by grants from the European Research Council (to MJB: AdOMiS, No. 695140, to AMP: No. 852765), Wellcome Trust (to MJB: 203285/C/16/Z, to ID and MJB: 107457/Z/15/Z, to AMP: 204651/Z/16/Z, to HA: 222807/Z/21/Z), and Engineering and Physical Sciences Research Council (to MJB: EP/W024047/1).
## Author contributions
QH and MJB conceived the overall physics-informed approach including data pre-processing and bespoke NN architecture. MH, QH and MJB developed NN architectures and the training approach. QH, MH, MW, JA and DS developed the software packages. JW, QH, AMP set up the microscopes for the experimental demonstrations. QH performed the two-photon experiments, supervised by MJB. HA, JW and QH performed the three-photon experiments, supervised by AMP and MJB. JW, MW, DS, QH and RMP performed the widefield/SIM experiments, for which DG, TC and RMP prepared specimens, supervised by ID and MJB. QH performed data analysis. QH and MJB wrote the manuscript. All authors reviewed the manuscript.
## Additional information
All experimental procedures involving animals were conducted in accordance with the UK Animals (Scientific Procedures) Act 1986.
Universal adaptive optics for microscopy through embedded neural network control: supplemental document
## 1 MLAO process and CNN architecture
The MLAO aberration estimation process consists of two parts: image pre-processing to compute pseudo-PSFs from images and a CNN-based machine learning process for mode coefficient determination. A stack of M images over the same field of view, each with a different pre-determined bias phase modulation, was used to calculate pseudo-PSFs according to the procedure in the methods section. It was observed and understood that most of the information was contained within the central region of the calculated pseudo-PSFs.1 A central patch of \(32\times 32\) pixels was then cropped and used as the input to the CNN. Cropped pseudo-PSFs were processed by a sequence of convolutional layers (CL) with trainable \(3\times 3\) kernels, each followed by a local \(2\times 2\) max-pooling, so that the x and y sizes were halved while the number of channels was doubled going down each CL. For the input pseudo-PSFs and each of the CL outputs, a global max-pooling was applied and concatenated into a fully connected layer (FCL). This concatenated FCL was connected to the next FCL containing 32 neurons, which in turn was connected to the output layer, which produced the coefficients of the N chosen Zernike modes. The activation functions were chosen to be tanh, with a linear activation only for the connection between the 32-neuron FCL and the output. The regularizer used was L1L2, the initializer was glorot-uniform and the optimizer was AdamW. The CNN architecture was built and the network training was conducted using TensorFlow.[1] As elaborated in the results section of the manuscript, M and N may be varied to suit different applications.
Footnote 1: The process of calculating pseudo-PSFs can be interpreted as a deconvolution between two PSFs. Depending on the sampling size of the imaging system, most details of a deformed PSF typically occupy a central region of a few pixels. Most features of the pseudo-PSFs were thus captured within the central region.
The weights in the connection between the concatenated FCL and FCL32 (enclosed by a grey dashed square) were extracted and analysed to understand the physical significance of structures in the pseudo-PSFs in influencing the learning of the CNN. Further analysis of such weights is provided in the Discussion of the main paper and in section 4.
## 2 Zernike polynomials and example pseudo-PSFs
A total of ten Zernike polynomials were used for the aberration estimation and correction presented in the paper. A list of the polynomials, sequenced using Noll's indices, was included in Figure S2 (a).
Figure S2 (b) included some examples of pseudo-PSFs. It can be observed that when the aberration size increases, the maximum pixel value of the pseudo-PSF decreases; a global max-pooling of the pseudo-PSF therefore extracts information related to the Strehl ratio of the PSFs. Pseudo-PSFs also have shapes that are related to the aberrated PSF shapes.
## 3 Physical information embedded in the CNN architecture
As mentioned in the main paper, the bespoke CNN architecture embedded information about the physical effects of aberrations on images within the trainable parameters. To illustrate these phenomena, we designed six input patterns and two filters to calculate how values obtained after global max-poolings from different convolutional layers were related to the features of the patterns. Normally, the filters would be learned as part of the training process, but for illustrative purposes, we have defined them manually here.
As shown in Figure S3, patterns 1 to 3 had the same general shape but varying sizes. They were all convolved with the same filter 1. Pattern 1 had the largest feature and the values obtained were almost constant throughout layers 1 to 5 (see Figure S3 (b)). Patterns 2 and 3 had smaller features and the extracted values reduced when moving further down the layers, where the embedded physical scales were more closely related to large scale features. Patterns 4 to 6 had the same general shape with four peaks positioned at the corners of a square. They were all convolved with filter 2, which shared a similar general shape. Pattern 4 had the smallest feature size and resulted in the largest value in layer 2. Patterns 5 and 6 had larger feature sizes and resulted in the largest values in layers 3 and 4, respectively. This trend confirms the expectation that layers later in the CNN probe larger scales in the input images. Note that all the patterns were designed in such a way that the maximum pixel reading (and thus the value max-pooled from layer 1) equalled 1. A minimal numerical sketch of this probing experiment is given after the figure captions below.
**Fig. S2.** (a) Zernike polynomials with Noll's indices 5-13 and 22. This is the full list of the polynomials used for aberration determination in the paper. (b) Examples of pseudo-PSFs. The first column is the input aberration and the second column is the bias mode used in pseudo-PSF generation.
**Fig. S3.** Demonstrations of the link between feature sizes and convolutional layers. (a) Patterns 1 to 6 each underwent a series of convolutions, each followed by a \(2\times 2\) local max-pooling. Patterns 1 to 3 were convolved with filter 1 and patterns 4 to 6 were convolved with filter 2. For each layer, a global max-pooling was carried out to extract the maximum reading of that layer. The physical interpretations of the extracted values of the different layers were related to the Strehl ratio (layer 1) and to shapes with features ranging from small scales (layer 2) to large scales (layer 5). The extracted readings were normalised by the readings of their respective previous layer and displayed in (b). The horizontal axis of each plot in (b) indicates from which layer the normalised maximum reading (indicated by the vertical axis) was extracted.
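The following is a minimal numerical sketch of this probing experiment; the patterns and filters are supplied by the user, just as they were manually defined for Fig. S3, and the looping scheme is an illustrative assumption rather than the exact procedure used.

```python
import numpy as np
from scipy.signal import convolve2d

def layer_maxima(pattern, filt, n_layers=5):
    """Global max after successive conv + 2x2 local max-pool stages.

    Reproduces the heuristic of Fig. S3: small features lose amplitude in
    deeper layers, while large features persist. The layer-1 reading is the
    raw global maximum of the pattern; normalisation by the previous layer
    (as in Fig. S3 (b)) can be applied to the returned list afterwards.
    """
    def maxpool2(a):
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        a = a[:h, :w]                                  # trim to even size
        return np.maximum.reduce([a[0::2, 0::2], a[0::2, 1::2],
                                  a[1::2, 0::2], a[1::2, 1::2]])
    x, maxima = pattern, [float(pattern.max())]
    for _ in range(n_layers - 1):
        x = maxpool2(convolve2d(x, filt, mode="same"))
        maxima.append(float(x.max()))
    return maxima
```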
## 4 Weight analysis of different trained neural networks
Figure S4 shows the root-mean-square (RMS) values of the weights at the output of each section of the concatenated FCL following the convolutional layers of the CNN. These weights encode information about physical phenomena in the pseudo-PSF that are related to the spatial effects of aberrations on images. Higher numbered layers correspond to larger scale features. Similar distributions are seen for all CNNs of the _ast_ class and for all of the 2/4\(N\) class. Most notably, it can be seen that the _2/4\(N\)_ networks all carry heavier weights in layer 1, which is most similar to the Strehl ratio variations of the PSFs.
## 5 Trainable neural network parameters
The bespoke NN and data pre-processing steps were designed with knowledge of the physical basis of image formation. This permitted a significant reduction in NN complexity compared to previous methods for aberration estimation. This architecture not only allowed improved performance and provided insights into the internal workings, but also had a structure orders of magnitude smaller than common NNs used in similar applications (see the comparison in Table S1). This will be beneficial for future applications, as NNs with fewer trainable parameters generally require less training data and a shorter training time. Furthermore, the simplified design means that there is greater potential for extending the method to more challenging applications.
Figure S4: Analysis of the weight distributions across convolutional layers in the CNNs trained for different biasing schemes and microscopes.
## 6 Choice of bias mode
The simplest MLAO implementation uses a pair of biased images as the input. The nature of the bias aberrations is a design choice. In order to investigate this, we tested individual Zernike modes as the bias and trained different MLAO networks with identical architecture to correct the same randomly generated aberrations. The loss function of the different NNs during training was shown in Fig. S5 (a). Results from correcting 20 randomly generated aberrations were shown in Fig. S5 (b).
The two networks using oblique and vertical astigmatism (index \(i=5\) and 6) converged to similar loss values during training (Fig. S5 (a)). The same two networks also gave similar
\begin{table}
\begin{tabular}{|c|c|} \hline Neural network method & Number of trainable parameters \\ \hline ResNet[2] & \(>\)0.27M \\ \hline Inception V3/ GoogLeNet[3, 4] & 23.6M \\ \hline Xception[5, 6] & 22.8M \\ \hline Deep Image Prior[7] & 2M \\ \hline PHASENET[8, 9] & 1M \\ \hline MLAO in this paper & 0.028M to 0.032M \\ \hline \end{tabular}
\end{table}
Table S1: A list of NNs used in image processing and phase determination with their number of trainable parameters. Inception V3[3], Xception[5] and PHASENET[8] have been directly demonstrated for phase determination. ResNet is a common basic NN architecture that has been used in many different image processing and phase determination architectures[8]. A 20 layer ResNet is the smallest architecture proposed in the ResNet paper[2], with \(\sim\)0.27M trainable parameters. Deep Image Prior employs a U-Net architecture that is commonly used in many biomedical image processing applications. Deep phase decoder[10], a network designed for wavefront and image reconstruction, was also inspired by and adapted from Deep Image Prior.
Figure S5: Testing Zernike modes as the choice of bias aberration. (a) A plot of the root mean square (RMS) loss function against the number of epochs when training NNs of the same architecture on the same dataset but using different bias modes. (b) Statistical results of testing the trained NNs to correct the same sets of random aberrations over 2-P microscope images of beads. Twenty randomly generated aberrations, each consisting of five Zernike modes with an RMS value smaller than 2.2 radians, were introduced for correction (dark gray bar). The remaining aberrations after correction by different networks were averaged and shown in the figure; standard deviations of the remaining aberrations are represented as the error bar. Insets show an example of the FOV when no aberration was introduced and an example when 1.88 rad of aberration was introduced into the system.
averaged remaining aberrations during experimental aberration correction on a bead sample (Fig. S5 (b)). The two networks using vertical and horizontal coma (index 7 and 8) also showed mutually similar values. This was expected, as these pairs of modes (5 and 6; 7 and 8) differ only by rotation, which should not affect how effectively the networks determine aberrations.
From these results, the NNs using astigmatism as the bias modes converged to the smallest loss function during training. This possibly suggests that the astigmatism modes, on average, allowed the network to learn more from the training data. It was also observed in the experimental results that, in general, this NN obtained the smallest remaining aberrations. We therefore chose to use astigmatism as the modulation modes for the two-bias NN methods in the experiments conducted in this paper.
## 7 Tolerance to Sampling Rate
As described in the paper, the networks for scanning microscopy were trained on a simulated dataset with pixel sampling within the range of 1.0\(\times\) to 1.2\(\times\) of the base sampling rate (see the method section in the main paper for more details). However, in many practical cases, there can be uncertainty in the pixel sampling of a system, or constraints on the sampling rates that may be used. We hence tested the tolerance of our networks to pixel sampling rates outside the range of the training dataset (see Fig. S6).
In this case, 188 nm per pixel was close to the sampling of the generated dataset on which the two NNs were trained. When images were sampled at a smaller or larger rate, _ast2 MLAO_ and _2N MLAO_ were still able to correct aberrations, but were slightly less effective.
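A sampling-tolerance test of this kind can be emulated by resampling images to mimic acquisition at different pixel sizes. The snippet below is a plausible sketch of such a procedure, not the authors' pipeline; the function name and the choice of linear interpolation are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def simulate_sampling(image, nm_per_pixel, target_nm_per_pixel):
    """Resample an image as if acquired at a different pixel size."""
    factor = nm_per_pixel / target_nm_per_pixel
    return zoom(image, factor, order=1)  # linear interpolation

# e.g. mimic coarser 250 nm sampling from an image captured at 188 nm/pixel
coarse = simulate_sampling(np.random.rand(256, 256), 188, 250)
```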
## 8 Further Three-Photon Microscope Demonstrations
Figure S7 shows the performance of the _ast4 MLAO_ algorithm for imaging neuronal activity at a depth of 670 \(\mu\)m in a mouse brain. Despite the very low SNR of the image data, the image quality and cell activity data were considerably improved.
## 9 Details of the Experimental Methodology
Three optical systems, a 2-P, 3-P and widefield microscope, were used for demonstrations on different samples. Network parameter settings were also adjusted for the different applications.
Figure S6: Testing of robustness to pixel sampling. Statistical results of remaining aberrations before (red plot) and after correction using the _2N+1 conv_, _ast2 MLAO_ and _2N MLAO_ methods. The results were averaged over 20 randomly generated aberrations and the SDs are shown as the error bars. The same algorithms were used to correct the same aberrations over images collected at different pixel sampling rates, as shown by the horizontal axis. Insets show examples of the images collected at different sampling rates.
**Fig. S7.** Three-photon microscopy imaging GCaMP neuronal activities at depth 670 \(\mu\)m. Power at sample was 44 mW. Wavefronts inserted into the figures show the phase modulations applied by the DM at the relevant step; the common scale is indicated by the colorbar above v. i and iii show respectively before and after _ast4 MLAO_ correction through three iterations (it:1 to 3), 200 frame averages after motion correction. In iii, time traces shown to the right and bottom were taken from the marked lines (1) and (2) respectively. ii 1-4 shows example single-frame images used as inputs to the _ast4 MLAO_ correction with the corresponding bias modes as insets. iv and v show the intensity metric (y\({}_{1}\)) and the sharpness metric (y\({}_{S}\)), respectively, calculated from single image frames, against the number of images acquired for three iterations of _ast4 MLAO_. vi shows the calcium activity of 8 cells (A-H marked on i). vii shows a histogram of the 200 frames collected pre MLAO (blue), post MLAO (red) and the differences between pre and post MLAO (yellow).
### Experimental setups
Figure S8: Configuration of the (a) 2-P, (b) 3-P and (c) widefield 3-D SIM microscopes.
### Sample preparation
The 3-P results were collected from imaging male (Lhx6-eGFP)BP221Gsat; Gt(ROSA)26Sortm32(CAG-COP4^H134R/EYFP)Hze mice (static imaging) and female and male Tg(tetO-GCaMP6s)2Niell mice (calcium imaging). Mice were between 8 and 12 weeks of age when surgery was performed. The scalp was removed bilaterally from the midline to the temporalis muscles, and a metal headplate with a 5 mm circular imaging well was fixed to the skull with dental cement (Super-Bond C&B, Sun-Medical). A 4-5 mm circular craniotomy was performed, during which any bleeding was washed away with sterile external solution or staunched with Sugi-sponges (Sugi, Kettenbach). Cranial windows composed of 4 or 5 mm circular glass coverslips were press-fit into the craniotomy, sealed to the skull by a thin layer of cyanoacrylate (VetBond) and fixed in place by dental cement.
The widefield 3-D SIM results were collected from imaging the NMJ of _Drosophila_ larvae. The immunofluorescence sample with one colour channel was prepared as described previously [11]. Crawling 3rd instar larvae of wildtype Oregon-R _Drosophila melanogaster_ were dissected on a Sylgard-coated Petri dish in HL3 buffer with 0.3 mM Ca\({}^{2+}\) to prepare a larval fillet [12]. The larval fillet samples were then fixed in 4% paraformaldehyde in PBS containing 0.3% (v/v) Triton X-100 (PBSTX) for 30 minutes. The brains were removed post-fixation, and the fillet samples were transferred to a microcentrifuge tube containing PBSTX for 45 minutes of permeabilisation. The samples were stained with HRP conjugated to Alexa Fluor 488 and DAPI for 1 hour at room temperature (21\({}^{\circ}\)C). After the washes, the samples were mounted in Vectashield.
For the 3-D SIM results collected on the _Drosophila_ larvae sample with two colour channels, the sample was prepared following the protocol presented in [11]. 3rd instar _Drosophila melanogaster_ larvae (Brp-GFP strain) were dissected in HL3 buffer with 0.3 mM Ca\({}^{2+}\) to prepare a so-called larval fillet, and the larval brains were removed. After this, larvae were stained for 15 minutes with HRP conjugated to Alexa Fluor 568 to visualise the neurons, washed with HL3 buffer with 0.3 mM Ca\({}^{2+}\) and imaged in HL3 buffer without Ca\({}^{2+}\) to prevent the larvae from moving.
### Network parameters
Table S2 shows the network settings used in the different imaging applications. |
2303.01245 | An Incremental Gray-box Physical Adversarial Attack on Neural Network
Training | Neural networks have demonstrated remarkable success in learning and solving
complex tasks in a variety of fields. Nevertheless, the rise of those networks
in modern computing has been accompanied by concerns regarding their
vulnerability to adversarial attacks. In this work, we propose a novel
gradient-free, gray box, incremental attack that targets the training process
of neural networks. The proposed attack, which implicitly poisons the
intermediate data structures that retain the training instances between
training epochs acquires its high-risk property from attacking data structures
that are typically unobserved by professionals. Hence, the attack goes
unnoticed despite the damage it can cause. Moreover, the attack can be executed
without the attackers' knowledge of the neural network structure or training
data making it more dangerous. The attack was tested under a sensitive
application of secure cognitive cities, namely, biometric authentication. The
conducted experiments showed that the proposed attack is effective and
stealthy. Finally, the attack effectiveness property was concluded from the
fact that it was able to flip the sign of the loss gradient in the conducted
experiments to become positive, which indicated noisy and unstable training.
Moreover, the attack was able to decrease the inference probability in the
poisoned networks compared to their unpoisoned counterparts by 15.37%, 14.68%,
and 24.88% for the Densenet, VGG, and Xception, respectively. Finally, the
attack retained its stealthiness despite its high effectiveness. This was
demonstrated by the fact that the attack did not cause a notable increase in
the training time, in addition, the Fscore values only dropped by an average of
1.2%, 1.9%, and 1.5% for the poisoned Densenet, VGG, and Xception,
respectively. | Rabiah Al-qudah, Moayad Aloqaily, Bassem Ouni, Mohsen Guizani, Thierry Lestable | 2023-02-20T09:48:11Z | http://arxiv.org/abs/2303.01245v1 | # An Incremental Gray-box Physical Adversarial Attack on Neural Network Training
###### Abstract
Neural networks have demonstrated remarkable success in learning and solving complex tasks in a variety of fields. Nevertheless, the rise of those networks in modern computing has been accompanied by concerns regarding their vulnerability to adversarial attacks. In this work, we propose a novel gradient-free, gray box, incremental attack that targets the training process of neural networks. The proposed attack, which implicitly poisons the intermediate data structures that retain the training instances between training epochs acquires its high-risk property from attacking data structures that are typically unobserved by professionals. Hence, the attack goes unnoticed despite the damage it can cause. Moreover, the attack can be executed without the attackers' knowledge of the neural network structure or training data making it more dangerous. The attack was tested under a sensitive application of secure cognitive cities, namely, biometric authentication. The conducted experiments showed that the proposed attack is effective and stealthy. Finally, the attack effectiveness property was concluded from the fact that it was able to flip the sign of the loss gradient in the conducted experiments to become positive, which indicated noisy and unstable training. Moreover, the attack was able to decrease the inference probability in the poisoned networks compared to their unpoisoned counterparts by 15.37%, 14.68%, and 24.88% for the Densenet, VGG, and Xception, respectively. Finally, the attack retained its stealthiness despite its high effectiveness. This was demonstrated by the fact that the attack did not cause a notable increase in the training time, in addition, the Fscore values only dropped by an average of 1.2%, 1.9%, and 1.5% for the poisoned Densenet, VGG, and Xception, respectively.
Adversarial Attacks, Data Poisoning, Neural Networks, Iris Recognition.
## I Introduction
Cognitive cities [1] are proactive, hyper-connected, and citizen-driven cities that are designed to minimize resource consumption in order to achieve sustainability. In addition, the vast advancement in Artificial Intelligence (AI) and Internet of Things (IoT) technologies has enhanced the evolution of research that integrates both technologies to deliver and automate services for cognitive cities' residents. In fact, the great development that emerged from the integration of those technologies has brought unforeseen exposures to cybersecurity, in addition to novel attacks that need to be addressed in order to deliver secure automation to cognitive cities.
Securing access to different services and facilities, such as connected buildings and data centers, and managing the flow of foot traffic are crucial requirements when adopting the cognitive city paradigm. Those requirements can be implemented using biometric authentication, such as fingerprint recognition and iris recognition. Despite the benefits of biometric authentication, privacy concerns and security attacks pose serious challenges to this technology after deployment. Attacks that target biometric recognition systems typically involve presenting human characteristics or artifacts directly to a biometric system to interfere with or bias its standard operation. Such attacks can result in granting access to unauthorized individuals into secured premises, allowing tailgating, or triggering denial of service by rejecting the biometrics of authorized individuals. For instance, in 2017, the Chaos Computer Club executed a successful attack on the Samsung Galaxy S8 iris scanner using a simple photograph and a contact lens [2].
On a different note, neural networks have gained wide popularity in the past decade due to their supremacy in terms of accuracy and minimal need for human intervention. Moreover, those networks are data hungry and are very sensitive to patterns they are exposed to during the training phase. On the other hand, neural networks are vulnerable and can be biased even with the introduction of simple adversarial attacks. For example, altering a single pixel in the data fed to an image classifier can disrupt the learning experience and result in a biased model [3].
Adversarial attacks are considered white box when the attacker has full access to the neural network and data, while gray box attacks assume access to either, and black box attacks assume access to neither. Those attacks can be categorized into digital attacks and physical attacks. Digital attacks engineer pixel values of input images, whereas physical attacks insert pixel patches that represent real-world objects into the input image instance. An attack may aim to fault the predictions of a certain class, in what is called a "targeted attack". Alternatively, an attack can be "non-targeted" and aim to fault the model in general.
Furthermore, attacks that target faulting the inference phase have been extensively studied in the literature. On the contrary, only a handful of papers have focused on faulting the training phase and the intermediate values related to its computations. In 2022, Breier _et al._ introduced the first attack that directly targets the training phase by perturbing the ReLU values during training [4]. In fact, the lack of research attention on attacks
that target the training phase puts many applications that rely on neural networks in jeopardy. In this work, we propose and test a novel attack that focuses on faulting the training process of neural networks in the domain of biometric authentication through iris recognition. The contributions of this work can be summarized as follows:
1. We introduce a novel gradient-free, data poisoning attack that incrementally and directly targets the training set during the training process of a neural network, requiring minimal knowledge on the part of the attacker. To the best of our knowledge, this is the first attack that executes between training epochs and targets the intermediate data structures of the training phase.
2. We conduct extensive experimental verification of the proposed attack to test its effectiveness and stealthiness. We define four evaluation criteria to quantify the effect of the attack, namely, the average of the loss change, the average inference probability, the training time difference, and the performance degradation measure.
3. We evaluate the proposed attack on an important aspect of a cognitive city, namely, iris recognition. To the best of our knowledge, this is the first attempt to test the effect of an adversarial attack that occurs during training on the domain of iris recognition.
The rest of this paper is organized as follows: the most recent literature on the domain of physical attacks and iris recognition is presented in Section II. The proposed methods are outlined in Section III. The results are described and discussed in Section IV. Finally, Section V concludes and summarizes the main highlights and observations of this work.
## II Related Work
### _Attacks on Neural Networks_
Patch attacks are physical attacks that replace a subset of pixels in an image with pixels from adversarial patches to bias a model [5]. While many studies have proposed attacks that target faulting the inference phase [6, 7], only a handful of papers have focused on faulting the training phase and the intermediate values related to its computations [4]. For example, Zhao _et al._[6] applied the alternating direction method of multipliers at inference time to solve the optimization problem of the targeted fault sneaking attack. The results showed that the attack was successful and stealthy; moreover, the success rate was approximately 100% when the number of targeted images was less than 10, whereas the success rate decreased as the number of fooled images increased. Furthermore, the work in [7] studied the effects of bitwise perturbations at inference time on 19 deep networks. The vulnerable parameters of the experimented networks were identified using heuristic functions. The results showed that most deep architectures have at least one parameter that causes an accuracy loss of over 90% when a bit-flip is executed on its bitwise representation.
In addition, the Fast Gradient Sign Method (FGSM) has been widely used in the literature as an attacking strategy [8]. This method adds noise whose direction is the same as that of the gradient of the cost function with respect to the data, using a trained model. The work in [4] proposed the first attack that targets the training phase by changing the values of the ReLU function to bias the neural network. The novel attack was proven to be effective and stealthy.
### _Attacks on Iris Recognition Systems_
The crucial role iris recognition has played in securing premises, in addition to the threatening effects of breaching such authentication systems, have made iris biometric authentication systems an active target for adversarial attacks. A novel morph attack on iris recognition systems was tackled in [9]. Sharma _et al._ generated morphed iris images using the Indian Institute of Technology Delhi (IITD) Iris Database and West Virginia University (WVU) multi-modal datasets. The morph attack achieved a success rate higher than 90% on two state-of-the-art iris recognition methods, which indicates the vulnerability of iris recognition systems.
In order to protect against the increasing attacks, researchers have also focused on studying countermeasures and detection mechanisms for iris recognition attacks. For example, Thukral _et al._[10] proposed an iris spoofing detection system that utilized Gabor filters and Histogram of Oriented Gradients (HOG) bins to extract features. Next, a Support Vector Machine (SVM) was used to detect whether the extracted features represented a fake or real iris. The proposed system was able to detect spoofing attacks with an accuracy of 98%. Finally, Tapia _et al._[11] tackled testing the liveness of the scanned iris to protect the system from being fooled by printed images or artificial eyes. The proposed work utilized a MobileNetV2 network, which was trained from scratch. Moreover, the authors increased the number of filters and weighted each class based on the number of its instances. The proposed method was able to accurately classify irises with competitive Bona Fide Presentation Classification Error Rates (BPCER) of less than 4% in all experiments.
## III Physical Gray-box Adversarial Attacks
A labeled training set of size \(s\) can be represented as \(DS=\{(x^{i},\,y^{i})\}_{i=1}^{s}\), where \(y^{i}\in\mathcal{Y}\) and \(\mathcal{Y}\) is the set of all possible output classes for an image classification problem. When training a deep classifier, we aim to optimize a discriminant function \(\mathcal{F}\) that maps each instance,\(x^{i}\), to the class associated with the highest class probability, as can be seen in Equation 1. This optimization process takes place during the training process by passing \(DS\) to a deep classifier for a number of training rounds. The number of training rounds will be referred to as \(Epochs\) throughout the rest of this paper. The aforementioned setting of training \(\mathcal{F}\) without any attacks will be referred to as the **base model** throughout this work.
\[\mathcal{F}\to argmax(P(\mathcal{Y}\mid x^{i})) \tag{1}\]
### _Attack Definition_
In our proposed attack, an attacker aims to corrupt the training process by perturbing the training instances incrementally between training epochs in order to optimize a
corrupted poisoned discriminant function \(\mathcal{F}^{\prime}\) that produces faulty probability distributions over the possible output classes. The attack is executed implicitly in multiple rounds. In each poisoning round, a poisoning procedure that selects \(X\subseteq DS\) of size \(|X|=\alpha*s\) is executed, where \(\alpha\in(0\%,100\%]\) is the poisoning percentage coefficient. The attacker's goal is to replace \(X=\{(x^{i},y^{i})\}_{i=1}^{|X|}\) with a poisoned set \(X^{\prime}=\{(g(x^{i}),\,y^{i})\}_{i=1}^{|X|}\), where \(g(.)\) is the poisoning function that modifies \(x^{i}\) at a pixel level. The poisoning function replaces the pixels that fall within a selected area, namely \(Patch_{Area}\), with faulty pixels, \(x^{\prime}\), in order to corrupt the image representation and result in a faulty training process. The poisoning function can be seen in Equation 2, where \(W\) and \(H\) are the width and height of the training image instance \(x^{i}\).
\[g(x)=\begin{cases}x^{\prime}_{u,v}&\text{if }(u,v)\in Patch_{Area},\ u\in[0,W),\ v\in[0,H)\\ x_{u,v}&\text{else}\end{cases} \tag{2}\]
The attack targets the intermediate data structures, where the training instances are saved. In addition, it is executed incrementally between training epochs, such that a different \(X\) is selected every poisoning round in order to accumulate the poisoned training instances and increase the effectiveness of the attack.
The attack frequency coefficient determines the number of poisoning rounds and is called \(\beta\in[1,Epochs]\). When the value of \(\beta\) is chosen to be 1, then the attack will be executed after each training epoch causing an increased risk of damage. On the contrary, if the value is chosen to be \(Epochs\), then the poisoning process will only happen once after the first training epoch.
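To make the schedule concrete, the sketch below shows one way the incremental between-epoch poisoning governed by \(\alpha\) and \(\beta\) could be implemented. It is illustrative only: `train_one_epoch` is a caller-supplied placeholder, and the modulo-based scheduling of poisoning rounds is our reading of the frequency coefficient.

```python
import random

def train_with_poisoning(train_one_epoch, dataset, epochs, alpha, beta, g):
    """Toy training loop with between-epoch data poisoning.

    train_one_epoch : callable that performs one ordinary training pass.
    dataset : mutable list of (x, y) pairs, standing in for the intermediate
              data structure that retains training instances between epochs.
    alpha   : poisoning percentage coefficient in (0, 1].
    beta    : attack frequency coefficient in [1, epochs].
    g       : poisoning function applied to an image x (Equation 2).
    """
    for epoch in range(1, epochs + 1):
        train_one_epoch(dataset)
        if epoch % beta == 0:  # a poisoning round
            k = int(alpha * len(dataset))
            # sample without replacement within this round; poisoned
            # instances accumulate in the dataset across rounds
            for i in random.sample(range(len(dataset)), k):
                x, y = dataset[i]
                dataset[i] = (g(x), y)  # the label is left unchanged
```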
### _Poisoning Strategy_
Function \(g(.)\) in Equation 2 replaces pixels in a training instance within the defined poisoning area, \(Patch_{Area}\). This poisoning procedure can be implemented in multiple ways. In this work, we opted to implement \(g(.)\) to execute local perturbations and global perturbations [12]. It is worth mentioning that only one type of perturbation was considered in each of the conducted experiments in this work.
In the local perturbations setting, a small area of the training instance, called a physical patch, is selected and replaced with pixels from another image. In this work, the physical patch was chosen to be close to the training set domain; hence, it was an image of a human eye. It is worth mentioning that the size of the \(Patch_{Area}\) and its location are randomized, and optimizing them is beyond the scope of this work [5].
On the other hand, in the global perturbations setting, each instance in \(X\) is replaced with another randomly selected image from the training set. This way, the classifier will be exposed to a highly redundant training set, which corrupts the training process by increasing the risk of overfitting. Neither poisoning strategy is easy to blacklist, since the local setting only alters a small area of each instance, and the global perturbation setting uses an image from within the training instances in a manner that imitates image augmentation, which is a benign, widely used technique in training neural networks.
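The two instantiations of the poisoning function could look as follows. This is a hedged sketch: the function names are ours, and the randomized patch location and replacement image mirror the description above rather than any released code.

```python
import numpy as np

def poison_local(x, patch, rng=np.random.default_rng()):
    """Local perturbation: paste a small 'physical patch' at a random location."""
    ph, pw = patch.shape[0], patch.shape[1]
    u = rng.integers(0, x.shape[0] - ph)   # random top-left row
    v = rng.integers(0, x.shape[1] - pw)   # random top-left column
    out = x.copy()
    out[u:u + ph, v:v + pw] = patch
    return out

def poison_global(x, training_images, rng=np.random.default_rng()):
    """Global perturbation: replace the whole instance with another
    randomly selected training image."""
    return training_images[rng.integers(len(training_images))].copy()
```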
### _Attack Characteristics_
The attack specifications can be summarized as:
1. **The attack is non-targeted:** the attack definition in III-A shows that no restrictions apply to the choice of \(y^{i}\) in \(X\). Moreover, the value of \(y^{i}\) remains unchanged after poisoning takes place in \(X^{\prime}\).
2. **The attack does not affect packet delay:** the attack only targets the training phase, whereas the inference phase is executed in the usual manner. Hence, the attack is stealthy in the sense that it does not affect the packet delay when the deep classifier is deployed on the cloud.
3. **The attack samples without replacement:** to guarantee faster and stealthier execution, \(X\) is sampled in every poisoning round without replacement; that is, an instance can only be included once in \(X\) in a given poisoning round, although an instance can be included in multiple poisoning rounds. This implies that the network will be exposed to a different training set after every poisoning round, which results in higher training instability.
4. **The attack is incremental for increased effectiveness:** the poisoned instances in \(X^{\prime}\) accumulate in the training set after each poisoning round and throughout the training phase, which in turn intensifies the effect of poisoning even at a low value of \(\alpha\).
5. **The attack is gradient-free [13] and is gray box:** the attack is gray box since we assume that the attacker only has access to the intermediate data structures of the training process without the need to access the physical path of the training instances or the neural network architecture. In other words, the attack is agnostic to the neural network architecture. The attack is also gradient-free since it perturbs the training data between epochs without the need to access the gradients of the attacked neural network.
6. **The attack targets intermediate data structures:** typically, developers' efforts are more focused on preparing and preprocessing the training set before training. On the other hand, what happens during training and the values of the intermediate data structures that keep the training instances are overlooked, especially as the training is usually conducted on powerful servers with limited physical access. Hence, this attack, which poisons the data implicitly between training epochs, acquires its high-risk property from attacking data structures that are typically not monitored by professionals, and thus the attack goes unnoticed despite the damage it causes.
### _Evaluation Metrics_
In each experiment, the neural networks will be evaluated and compared in terms of the following evaluation measures:
1. Attack effectiveness measures: an attack is called effective if it achieves its intended goals. In our proposed attack, the goal is to expose the deep classifier to an unstable training process, which, in turn, will result in faulty probability distributions produced by the network at the inference stage.
    1. Average of Loss Change \((ALC)\): the loss function is typically expected to decrease as the training process progresses, due to backpropagation, which reflects what the network learned during each training epoch. The \(ALC\) measures the average change in the loss value over the training epochs, and the sign of this metric is informative, as it reflects whether the loss was decreasing or increasing throughout training. Executing the attack is expected to cause instability in the training process due to the noisy poisoned data and, hence, increase the \(ALC\) value. The \(ALC\) can be defined as follows, where \(\ell_{i}\) is the loss at epoch \(i\) and \(Epochs\) is the number of training epochs: \[ALC=\frac{\sum_{i=2}^{Epochs}(\ell_{i}-\ell_{i-1})}{Epochs-1}\] (3)
    2. Average Inference Probability (AIP): the softmax function is typically used in the last layer of deep classifiers to normalize the output to a probability distribution over the possible output classes. Each test instance is classified as the class with the highest probability. In this evaluation criterion, we assess the effect of the attack on the probabilities produced by the model at the inference stage, as higher probabilities typically imply more confidence in the selected class. As a result, a decreased average probability reflects the effectiveness of the attack on the final output of the model. \(AIP\) can be calculated using Equation 4, where \(t^{i}\) is a test instance: \[AIP=Average(\max(P(\mathcal{Y}\mid t^{i})))\] (4)
2. Attack stealthiness measures: an attack is called stealthy if the evaluation metrics of the corrupted classifier \(\mathcal{F}^{\prime}\) are close to the metrics of the base model \(\mathcal{F}\) [4].
    1. Training Time Difference \((TTD)\): training a neural network can be a lengthy process, especially when the training instances are large. Hence, it is crucial to ensure that executing the attack does not add an observable amount of time to the training phase, in order to keep the attack unnoticed. The \(TTD\) measure can be defined as follows: \[TTD=TrainingTime^{\prime}-TrainingTime_{base}\] (5) where \(TrainingTime_{base}\) is the time taken to train the base model, and \(TrainingTime^{\prime}\) is the training time when the neural network is trained with poisoned data.
    2. Performance Degradation Measure (PDM): in order to confirm the attack's stealthiness, the metrics of the poisoned classifier need to be reasonably close to the metrics of the base classifier. In this evaluation criterion, the difference between the macro Fscore of the base model and each poisoned model is calculated, as described in Equation 6, where \(Fscore^{\prime}\) is the Fscore of a poisoned model: \[PDM=Fscore_{base}-Fscore^{\prime}\] (6)
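For reference, the four metrics reduce to a few lines of code. The sketch below assumes the per-epoch loss values and per-instance softmax outputs have already been collected; it illustrates Equations 3-6 and is not the authors' implementation.

```python
import numpy as np

def alc(losses):
    """Average of Loss Change over the per-epoch loss curve (Eq. 3)."""
    return np.diff(losses).mean()  # mean of l_i - l_{i-1}

def aip(softmax_outputs):
    """Average Inference Probability: mean top-class score (Eq. 4)."""
    return np.max(softmax_outputs, axis=1).mean()

def ttd(train_time_poisoned, train_time_base):
    """Training Time Difference (Eq. 5)."""
    return train_time_poisoned - train_time_base

def pdm(fscore_base, fscore_poisoned):
    """Performance Degradation Measure (Eq. 6)."""
    return fscore_base - fscore_poisoned
```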
### _Datasets_
The proposed attack perturbs images and hence can target any computer vision application. Nevertheless, we opted to apply it to an iris recognition dataset, due to the significance of this domain. The CASIA Iris Subject Ageing dataset [14] was considered in our experiments. This dataset was collected by the National Laboratory of Pattern Recognition (NLPR) in China in April 2009 and April 2013. In this work, the subset of CASIA Iris Subject Ageing collected in 2009 using the H100 sensor was chosen due to its high diversity and good size. The subset comprises 37912 instances of the left and right eyes of 48 individuals. The dataset instances pose some challenging scenarios, such as glasses and partially closed eyes. Moreover, some instances have very low brightness. The cross-validation method was used to train and evaluate the neural networks, and 100 images from each subject were randomly selected for the test dataset.
### _Technical and Experimental Setup_
Three state-of-the-art deep classifiers, namely, Densenet, VGG, and Xception, were considered for this work. Moreover, the number of epochs was set to 10, the cross-entropy loss function was used, and the networks were trained with a learning rate of 0.01 on the Google Colab Pro platform, which utilizes NVIDIA GPUs. It is worth mentioning that the code of this work is available on Github [15].
Each of the three considered deep classifiers was experimented with \(\alpha\) values of 5%, 10%, 15%, and 20%, as 20% is typically the maximum poisoning percentage considered in the literature [16]. In the experiment descriptions and results, the local perturbations poisoning strategy is referred to as \(P\), and the global perturbations strategy is referred to as \(R\).
## IV Results and Discussion
In this section, the results of evaluating the proposed attack will be presented in detail. Figures 1, 2, and 3 depict the results of the evaluation metrics described in III-D. In all figures, the result of the base model is depicted as the origin point (\(\alpha\) = 0%).
### _Analysis of Attack Effectiveness_
Increasing the attack frequency (i.e., a lower \(\beta\)) resulted in increased effectiveness in all experiments. In the experiments where \(\beta\)'s value was set to 1, the \(ALC\) kept increasing as the value of \(\alpha\) increased, and the value was positive in all experiments where \(\alpha\geq 10\%\). On the other hand, when \(\beta=Epochs\), the \(ALC\) results were increasing but negative in all experiments, which means that the loss values were still decreasing, but at a lower rate compared to the base model and the higher-frequency experiments.
The \(AIP\) results are depicted in Figure 2, where it can be seen that increasing the value of \(\alpha\) resulted in decreasing the \(AIP\) in all experiments. However, this decrease varied across the experiments; for example, the decrease was slight, even when \(\alpha\) increased, in the experiments where \(\beta\)=\(Epochs\). On the other hand, increasing \(\alpha\) with a higher frequency (\(\beta=1\)) resulted in a more noticeable drop in the \(AIP\) values. For example, it can be seen in Figure 2(c) that the \(AIP\) value dropped by 24.88% when \(\alpha=20\%\) and \(\beta=1\) in the random poisoning experiment, \(R\). In contrast, the \(AIP\) value only dropped by 5% when we only changed the value of \(\beta\) to be equal to the number of \(Epochs\). Furthermore, the highest drop in the \(AIP\) in the poisoned networks compared to their unpoisoned counterparts at inference time was 15.37%, 14.68%, and 24.88% for the Densenet, VGG, and Xception, respectively. Overall, we can conclude that the attack was effective in all conducted experiments. Moreover, the attack effectiveness has a positive correlation with the poisoning percentage \(\alpha\) and the attack frequency (i.e., a lower \(\beta\)).
### _Analysis of Attack Stealthiness_
It is crucial to keep the proposed attack undetected. The attack can be easily noticed if it takes long to execute, thus, to ensure the attack stealthiness, the \(TTD\) measure is monitored
Fig. 1: Experimental results of the Average of Loss Change (ALC) values
Fig. 3: Experimental results of the Performance Degradation Measure (PDM) values
Fig. 2: Experimental results of the Average Inference Probability (AIP) values
in all experiments. Among all conducted experiments, the maximum \(TTD\) value was 63 seconds. Hence, the attack did not add a noticeable amount of time to the training time of the base model. Moreover, to monitor the stealthiness of the attack, the \(PDM\) values were recorded, as can be seen in Figure 3. The maximum \(PDM\) value was recorded for the VGG network with \(\alpha=20\%\) and \(\beta=1\) in the random poisoning experiment, \(R\). Overall, the average \(PDM\) values were 1.2%, 1.9%, and 1.5% for the Densenet, VGG, and Xception, respectively. Hence, it can be concluded that the attack demonstrated stealthy behavior.
### _Analysis of Poisoning Strategy_
As explained in Section III-B, the attack was experimented with under the local perturbations setting (\(P\)) and the global perturbations setting (\(R\)). The influence of the perturbation type was highly associated with the value of \(\beta\). It can be seen in Figures 1, 2 and 3 that in the experiments of low frequency, where \(\beta=Epochs\), both perturbation types achieved comparable results. On the other hand, when the poisoning rounds were executed after every epoch, where \(\beta\)=1, the attack showed the highest effectiveness in the global perturbations setting, \(R\).
Finally, the results showed that the proposed attack is effective and stealthy. Its effectiveness increases when the attack is intensified by increasing the value of \(\alpha\), increasing the number of affected pixels, as in the case of global perturbations, and decreasing \(\beta\) for higher execution frequency. Moreover, the proposed attack inherits its riskiness from attacking unobserved data structures that usually reside on powerful servers with limited physical access. The attack is also incremental and accumulates poisoned data gradually to intensify its effectiveness across the training epochs. In addition, the attack requires no knowledge of the neural network structure, as all experiments in this work were conducted using the same injection code.
## V Conclusion and Future work
Neural networks are vulnerable to adversarial attacks. Moreover, the digital transformation adopted worldwide implies continuous acquisition and analytics of big streams of data, which has brought novel digital threats and unforeseen exposures to cybersecurity. In this work, we propose a novel gradient-free, gray box, incremental attack that targets the intermediate data structures of the training phase of neural networks. The attack has 3 main parameters: the attack percentage coefficient, the attack frequency coefficient, and the poisoning strategy. In all conducted experiments, it was noted that the attack stealthiness and effectiveness had a positive correlation with the aforementioned parameters.
Moreover, the attack resulted in unstable training, as it made the loss values increase, which in turn indicates poor learning and generalization. The attack was also able to decrease the probability of the output class (\(AIP\)) in the poisoned networks compared to their unpoisoned counterparts at inference time by 15.37%, 14.68%, and 24.88% for the Densenet, VGG, and Xception, respectively. Despite its effectiveness, the attack remained stealthy, as it only dropped the Fscore values by 1.2%, 1.9%, and 1.5% for the poisoned Densenet, VGG, and Xception, respectively.
In future works, further sensitivity analyses will be conducted on existing and new parameters, such as the type of communication protocol, and the area and size of the patch area. Moreover, the attack will be compared to other iris recognition attacks.
## Acknowledgements
This research was supported by the Technology Innovation Institute (TII), Abu Dhabi, UAE, under the CyberAI project (grant number: TII/DSRC/2022/3036).
|
2304.02933 | Convolutional neural networks for crack detection on flexible road
pavements | Flexible road pavements deteriorate primarily due to traffic and adverse
environmental conditions. Cracking is the most common deterioration mechanism;
the surveying thereof is typically conducted manually using internationally
defined classification standards. In South Africa, the use of high-definition
video images has been introduced, which allows for safer road surveying.
However, surveying is still a tedious manual process. Automation of the
detection of defects such as cracks would allow for faster analysis of road
networks and potentially reduce human bias and error. This study performs a
comparison of six state-of-the-art convolutional neural network models for the
purpose of crack detection. The models are pretrained on the ImageNet dataset,
and fine-tuned using a new real-world binary crack dataset consisting of 14000
samples. The effects of dataset augmentation are also investigated. Of the six
models trained, five achieved accuracy above 97%. The highest recorded accuracy
was 98%, achieved by the ResNet and VGG16 models. The dataset is available at
the following URL: https://zenodo.org/record/7795975 | Hermann Tapamo, Anna Bosman, James Maina, Emile Horak | 2023-04-06T08:46:30Z | http://arxiv.org/abs/2304.02933v1 | # Convolutional neural networks for crack detection on flexible road pavements
###### Abstract
Flexible road pavements deteriorate primarily due to traffic and adverse environmental conditions. Cracking is the most common deterioration mechanism; the surveying thereof is typically conducted manually using internationally defined classification standards. In South Africa, the use of high-definition video images has been introduced, which allows for safer road surveying. However, surveying is still a tedious manual process. Automation of the detection of defects such as cracks would allow for faster analysis of road networks and potentially reduce human bias and error. This study performs a comparison of six state-of-the-art convolutional neural network models for the purpose of crack detection. The models are pretrained on the ImageNet dataset, and fine-tuned using a new real-world binary crack dataset consisting of 14000 samples. The effects of dataset augmentation are also investigated. Of the six models trained, five achieved accuracy above 97%. The highest recorded accuracy was 98%, achieved by the ResNet and VGG16 models. The dataset is available at the following URL: [https://zenodo.org/record/7795975](https://zenodo.org/record/7795975)
Keywords:convolutional neural networks, transfer learning, crack detection
## 1 Introduction
Globally, the most popular form of transport is the motor vehicle. Consequently, public roads experience significant traffic load, which, coupled with adverse environmental and climatic conditions, is the primary contributor to rapid road surface deterioration [14].
A pavement management system (PMS) provides a systematic technique for recording and analysing pavement surface conditions of the road network, and enables personnel in charge to make data-driven maintenance decisions. An efficient monitoring strategy facilitates the development of a maintenance schedule
that would help to significantly reduce life-cycle maintenance costs [5]. The performance of the PMS depends on the quality of the pavement surface condition data recorded. Inspection methods may be divided into three main categories: manual, semi-automated, and automated. Manual inspection is carried out in the field; it is laborious and unsafe. In semi-automated inspections, on-vehicle sensors/cameras record the pavement surface conditions to be analysed manually later. Automated inspection consists of on-vehicle sensors coupled with distress detection algorithms, able to determine pavement conditions automatically. Automated inspection has not yet been widely adopted [2], despite its great potential.
Road surface condition inspection and data collection are still predominantly manual processes conducted by field personnel [8]. Apart from being time-consuming, the manual approach entails personnel risk exposure, low detection efficiency and traffic congestion during inspection [19]. It also introduces human subjectivity and error, resulting in variability in how the road data is interpreted from person to person [9]. Over the years, there have been numerous attempts to embed various technologies into the road inspection process, with the primary aims of counteracting human variability, improving the survey execution speed, and mitigating personnel on-site risk exposure.
Research efforts aimed at introducing a level of automation to this field are focused on the application of image processing and computer vision technologies [1, 8]. Advancements in deep learning and its increased accessibility have led to adoption of the deep learning methods in multiple fields, including pavement inspection. However, no comprehensive comparison of the effectiveness of the various deep learning methods exists to date. The availability of realistic high-definition road crack datasets also remains limited.
One of the most common defects which pavements are susceptible to is cracks, which develop over time primarily due to one or more of the following factors: repeated traffic loading, hostile environmental or climatic conditions and construction quality [11]. These factors contribute to the pavement ageing process and impact its structural integrity [10]. This study considers cracks as a reliable visual indicator of pavement surface condition, and therefore explores the viability of using convolutional neural networks (CNNs) together with transfer learning to detect pavement cracks. The novel contributions of this study are summarised as follows:
* A new binary crack image dataset is collected, consisting of 14000 samples used to train and test the various models evaluated.
* Six different state-of-the-art CNN models are applied to the collected dataset, initially by only training a binary classifier on top of pretrained model layers, and subsequently fine-tuning several top layers. A comparative study is performed to determine the effectiveness of transfer learning and fine-tuning.
* The comparative study is further enhanced by investigating the influence of data augmentation on the training process.
* Based on all of the above, a recommendation is made as to which of the current state-of-the-art models seems to be the most appropriate for the chosen application domain.
The rest of the paper is structured as follows: Section 2 briefly outlines the background of this study. Section 3 details the methodology. Section 4 presents and discusses the results. Section 5 concludes the paper.
## 2 Background
South Africa possesses the longest interconnected road network in sub-Saharan Africa, with a total length of 750,811 kilometres [13]. The South African National Roads Agency (SANRAL) is responsible for the management, maintenance and expansion of the major highways. Regional departments and local municipalities oversee the provincial/regional routes and smaller urban roads, respectively. The state of roads in South Africa is generally regarded as poor: 54% of unpaved roads are in poor to very poor condition, and the same is said of 30% of paved roads [13]. Embedding deep learning into road surveying, which is currently performed manually, can make the process more efficient, less laborious and capable of producing more reliable and consistent records.
This study applies CNNs for the purpose of crack detection on a real-world dataset collected in South Africa. A CNN is a class of artificial neural network notably useful in computer vision tasks such as object recognition [12], due to its ability to extract high-level features from images and thereby reliably recognize various objects after the model is trained. CNNs eliminate the need for the manual feature extraction that primitive image processing methods rely on, and instead learn the image features and characteristics directly from the input data; pre-processing efforts are therefore significantly reduced.
Transfer learning is a machine learning method that makes use of a previously trained model as a starting point for training a model on a new task [20]. It leverages past knowledge to extract valuable features from the new dataset being used. If successful, it should allow for faster model training and provide a performance boost. Training from scratch is compute-intensive and generally requires large datasets to achieve acceptable performance.
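As a concrete illustration of this setup, the sketch below builds a binary crack classifier on top of a frozen ImageNet-pretrained backbone. The use of Keras, of VGG16 as the example backbone, and of a global-average-pooling head are assumptions made for illustration, not a statement of the exact architecture used in this study.

```python
import tensorflow as tf

def build_crack_classifier(input_shape=(200, 200, 3)):
    # ImageNet-pretrained backbone, frozen for the first training phase
    base = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # binary head
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model, base
```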
## 3 Methodology
### Dataset
The images were captured using a multi-functional vehicle (MFV) with an adapted high-definition wide angle camera [4]. Each original image has a resolution of 2440x1080 pixels. At each capture point, there is a limited distance to which relevant image data can be collected, due to the camera focus and resolution. The images are collected on the slow lane, which generally carries more load due to heavier vehicles spending more time on it, consequently making the
slow lane more susceptible to deterioration. Twelve square images of 200x200 pixels were extracted from each original image as shown in Figure 1.
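A tiling step of this kind might be implemented as below. The 2 x 6 grid anchored at the top-left corner is an assumption for illustration; Figure 1 shows the actual sample regions used.

```python
import numpy as np

def extract_tiles(frame, tile=200, rows=2, cols=6):
    """Cut a fixed grid of twelve 200x200 tiles out of one 2440x1080 capture."""
    tiles = []
    for r in range(rows):
        for c in range(cols):
            y, x = r * tile, c * tile
            tiles.append(frame[y:y + tile, x:x + tile])
    return np.stack(tiles)  # shape: (rows * cols, tile, tile, channels)
```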
A dataset consisting of 14000 images was built to train the models, split evenly between positive (cracks present) and negative samples (cracks absent), examples of each shown in Figures 2 and 3, respectively. The 14000 images are a combination of the training data (11200 images), validation data (1400 images) and test data (1400 images). The training data is used to train the machine learning model. The validation data is held back during the training and used to provide an unbiased model performance update throughout the training process, useful for tuning the model hyperparameters. The test data, as with the validation data, is also unseen during model training, used once training is complete to compare the final models. Each of these three data components is split evenly between positive and negative images, where positive images are samples consisting of cracks, and negative images are samples free of cracks. These datasets have been manually prepared to be used for model training. Experiments are conducted on the original dataset, as well as an augmented dataset, where the original data is expanded by flipping and rotating the images.
The dataset is available online: [https://zenodo.org/record/7795975](https://zenodo.org/record/7795975).
### Deep Learning Models
Transfer learning was applied to six image classification models pre-trained on the ImageNet dataset. The two classes, in this case, are; positive (crack present) and negative (crack absent), making this a binary classification task.
The following CNN models were evaluated:
1. EfficientNet [17] - a model that proposes a new scaling method which scales all dimensions using an effective compound coefficient. EfficientNet has been
Figure 1: Example image captured by MFV superimposed with sample regions extracted for the dataset.
found to achieve a state-of-the-art performance compared to other CNNs with a significantly smaller architecture.
2. Inception V3 - a state-of-the-art CNN model based on the original paper by Szegedy et al. [16], which performs convolutions on various scales (called inception modules) throughout the architecture.
3. Xception [3] - a variation on Inception V3, where inception modules are substituted for depth-wise separable convolutions. The architecture has an identical number of parameters as Inception V3.
4. MobileNet [7] - efficient CNN model designed for mobile and embedded vision applications, based on a streamlined architecture which makes use of convolutions separable by depth.
5. ResNet [6] - ResNet, or residual network, is a CNN architecture that adds residual connections between hidden states, allowing for better error propagation during training.
6. VGG16 [15] - a state-of-the-art CNN architecture consisting of 16 convolutional layers.
The methodology adopted for the training of the models consists of two phases. In the first phase, pre-trained models are adapted to the dataset by only training the final fully-connected layer. In the second phase, further fine-tuning is performed through the unfreezing of several top layers and reducing the rate of learning to prevent overfitting. The proportion of top layers unfrozen for fine-tuning out of the total layers in each architecture was approximately 25%. A total of 80 epochs were completed for each model. Fine-tuning (second phase) was started at epoch 60. A batch size of 32 was used in the experiments.
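The two-phase schedule can be sketched as follows, assuming a Keras-style `model` with a frozen backbone `base` (e.g., as returned by the earlier transfer-learning sketch). The phase-two learning rate and the exact layer cutoff are assumptions; the paper states only that roughly the top 25% of layers were unfrozen, fine-tuning started at epoch 60, and training ran to epoch 80 with a batch size of 32 (assumed here to be set when the datasets are built).

```python
import tensorflow as tf

def two_phase_training(model, base, train_ds, val_ds):
    # Phase 1: backbone frozen, train only the classifier head (epochs 1-60)
    model.fit(train_ds, validation_data=val_ds, epochs=60)

    # Phase 2: unfreeze roughly the top quarter of the backbone layers and
    # continue at a reduced learning rate to limit overfitting (epochs 61-80)
    cutoff = int(len(base.layers) * 0.75)
    for layer in base.layers[cutoff:]:
        layer.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, initial_epoch=60, epochs=80)
    return model
```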
Figure 3: Example of negative samples (cracks absent).
Figure 2: Example of positive samples (cracks present).
The cracks that can be detected include block, longitudinal, transverse and crocodile cracks, as well as combinations of the aforementioned types; however, all of these are detected as positive, reducing the problem to binary classification. The model was trained using samples up to a degree 1 type crack as per TMH9 [18]. Each model was trained with and without augmentation. This results in two models trained per architecture, thus twelve models in total. In order to draw inferences on the consistency and reliability of the performance achieved, each model has been trained 5 times and all reported values are averages across the 5 runs, unless stated otherwise.
## 4 Results
The EfficientNetB7 model converged to lower accuracy values and higher loss values for the version trained with data augmentation compared to that without, prior to fine-tuning, as seen in Figure 4. The range of values, represented by the shaded region, shows generally narrow ranges for all metrics prior to fine-tuning, indicative of training stability. Once fine-tuning is initialized, there is a notable increase in accuracy across the remaining epochs. The model trained with data augmentation exhibits superior stability over that without data augmentation, as shown in the similar training and validation accuracy and loss obtained at each epoch. A divergence in the training and validation loss and accuracy occurs in the model trained without augmentation, indicating overfitting.
The aforementioned trends are repeated in Figure 5 for the Inception V3 model, Figure 6 for the ResNet model, and Figure 7 for the VGG16 model.
The most significant divergence in the training and validation accuracy and loss occurs with the Inception V3 model.
The MobileNet V2 model training values shown in Figure 8 show an improvement in performance with the initialization of fine-tuning. The training and validation values of both versions (with and without augmentation) remain similar across all epochs; however, these are based on the average values across the 5 separate training iterations. The shaded regions show that the MobileNet V2 validation accuracy and loss change significantly across the iterations between epochs 60 and 80, where fine-tuning is enabled. This suggests inferior robustness
Figure 4: EfficientNetB7 Accuracy/Loss vs Epoch.
during training. It may therefore be inferred that the MobileNet V2 model has the poorest repeatability amongst all models evaluated when comparing its loss and accuracy ranges with those of the other models.
The trends exhibited in training the Xception model are distinctly different in the epochs prior to fine-tuning being initialized, with the data augmentation version reporting significantly different values of validation and training accuracy and loss, however upon the initialization of fine-tuning, the values for these metrics began to converge.
The significant increase in performance achieved across the board upon the initialization of fine-tuning confirms the hypothesis: the feature representation in the base model has been made more relevant and better suited to the task of crack classification. The introduction of fine-tuning, however, also brings about generally higher variability in the validation accuracy and loss across the epochs. This suggests a notable compromise to the repeatability of performance across multiple runs.
The test accuracy results are shown in Figure 10. The models performed well on the test set, with the exception of the EfficientNetB7 model. This may be due to several reasons: the scaling method proposed in the architecture may not have been well suited for the model to learn the relevant crack features, or the training and validation datasets may have had samples with similar features, so that overfitting occurred, resulting in poor performance when the model was exposed to the unseen test dataset with distinctly different features. As a result of this,
Figure 5: Inception V3 Accuracy/Loss vs Epoch.
Figure 6: ResNet Accuracy/Loss vs Epoch.
the EfficientNetB7 model achieved an accuracy of 0.63, while the other models achieved accuracies between 0.97 and 0.99.
Table 1 shows the model performance metrics, which include the mean F1 score calculated using precision and recall. The EfficientNetB7 model registers the highest precision values, with a score of 1, because the model does not register any false positives. However, a disproportionate number of samples are classified as negative, as seen in Table 2 (approximately 87%). This is what results in the model having a relatively poor accuracy (0.626/0.625) and F1-score (0.402/0.400).
The highest performance increase experienced once fine-tuning was enabled was from the data augmentation version of the Xception model both in accuracy (increase) and loss (decrease), with respective values of 0.151 and 0.276. The best accuracy and F1 score, both being 0.986, are achieved by the version of ResNet trained with data augmentation, and that of VGGNet without data augmentation. The standard deviations for the accuracy values are 0.003 and 0.002, and for the F1-scores 0.003 and 0.002, respectively. The standard deviations of both the F1-scores and accuracy of all models, as shown in Table 1, are extremely low, ranging between 0.001 and 0.005, apart from EfficientNetB7, ranging between 0.015 and 0.054. It can therefore be inferred that, apart from the EfficientNetB7 models, the performance of the models reported may be deemed reliable and repeatable.
Figure 8: MobileNet V2 Accuracy/Loss vs Epoch.
Figure 7: VGG16 Accuracy/Loss vs Epoch.
Table 2 shows the mean values of the confusion matrix entries. A notable takeaway is the difference between the numbers of false positives and false negatives. All models produce more than double (in some instances more than triple) the number of false negatives compared to false positives. This implies that a model is more than twice as likely to produce a false negative as a false positive. This is not a favourable statistic; therefore, an argument can be made that, of the two best performing models, the one less likely to produce a false negative may be deemed superior. The VGGNet model version without augmentation is therefore the best-trained model.
## 5 Conclusion
This study presented a comparison between a selection of state-of-the-art CNN models applied to a real-life crack detection dataset collected in South Africa. The work presented herein shows great potential for the use of CNN architectures in conjunction with transfer learning for binary crack classification. Of all the models trained, VGGNet without data augmentation yielded the best performance overall. The dataset compiled consisted of only 14000 images, hence
Figure 10: Model accuracy on the test set with and without augmentation.
Figure 9: Xception Accuracy/Loss vs Epoch.
an attempt was made to compensate for it with data augmentation. The performance boost this yielded, however, was minimal, though not detrimental to performance. In future, additional data augmentation techniques may be considered. Machine learning models would bring multiple benefits to road inspections. The research herein shows potential, if not for a fully autonomous road survey program, then at a minimum to assist surveyors in the consistent detection of road defects.
|
2310.13037 | Agri-GNN: A Novel Genotypic-Topological Graph Neural Network Framework
Built on GraphSAGE for Optimized Yield Prediction | Agriculture, as the cornerstone of human civilization, constantly seeks to
integrate technology for enhanced productivity and sustainability. This paper
introduces $\textit{Agri-GNN}$, a novel Genotypic-Topological Graph Neural
Network Framework tailored to capture the intricate spatial and genotypic
interactions of crops, paving the way for optimized predictions of harvest
yields. $\textit{Agri-GNN}$ constructs a Graph $\mathcal{G}$ that considers
farming plots as nodes, and then methodically constructs edges between nodes
based on spatial and genotypic similarity, allowing for the aggregation of node
information through a genotypic-topological filter. Graph Neural Networks
(GNN), by design, consider the relationships between data points, enabling them
to efficiently model the interconnected agricultural ecosystem. By harnessing
the power of GNNs, $\textit{Agri-GNN}$ encapsulates both local and global
information from plants, considering their inherent connections based on
spatial proximity and shared genotypes, allowing stronger predictions to be
made than traditional Machine Learning architectures. $\textit{Agri-GNN}$ is
built from the GraphSAGE architecture, because of its optimal calibration with
large graphs, like those of farming plots and breeding experiments.
$\textit{Agri-GNN}$ experiments, conducted on a comprehensive dataset of
vegetation indices, time, genotype information, and location data, demonstrate
that $\textit{Agri-GNN}$ achieves an $R^2 = .876$ in yield predictions for
farming fields in Iowa. The results show significant improvement over the
baselines and other work in the field. $\textit{Agri-GNN}$ represents a
blueprint for using advanced graph-based neural architectures to predict crop
yield, providing significant improvements over baselines in the field. | Aditya Gupta, Asheesh Singh | 2023-10-19T14:49:35Z | http://arxiv.org/abs/2310.13037v1 | Agri-GNN: A Novel Genotypic-Topological Graph Neural Network Framework Built on GraphSAGE for Optimized Yield Prediction
###### Abstract
Agriculture, as the cornerstone of human civilization, constantly seeks to integrate technology for enhanced productivity and sustainability. This paper introduces _Agri-GNN_, a novel Genotypic-Topological Graph Neural Network Framework tailored to capture the intricate spatial and genotypic interactions of crops, paving the way for optimized predictions of harvest yields. _Agri-GNN_ constructs a Graph \(\mathcal{G}\) that considers farming plots as nodes, and then methodically constructs edges between nodes based on spatial and genotypic similarity, allowing for the aggregation of node information through a genotypic-topological filter. Graph Neural Networks (GNN), by design, consider the relationships between data points, enabling them to efficiently model the interconnected agricultural ecosystem. By harnessing the power of GNNs, _Agri-GNN_ encapsulates both local and global information from plants, considering their inherent connections based on spatial proximity and shared genotypes, allowing stronger predictions to be made than traditional Machine Learning architectures. _Agri-GNN_ is built from the GraphSAGE architecture, because of its optimal calibration with large graphs, like those of farming plots and breeding experiments. _Agri-GNN_ experiments, conducted on a comprehensive dataset of vegetation indices, time, genotype information, and location data, demonstrate that _Agri-GNN_ achieves an \(R^{2}=.876\) in yield predictions for farming fields in Iowa. The results show significant improvement over the baselines and other work in the field. _Agri-GNN_ represents a blueprint for using advanced graph-based neural architectures to predict crop yield, providing significant improvements over baselines in the field.
_The ultimate goal of farming is not the growing of the crops, but the cultivation and perfection of human beings. --Masanobu Fukuoka_
**Key Words:** Graph Neural Networks (GNNs), Agricultural Data Integration, Multimodal Data Fusion, Structured Data Modeling, Adaptive and Modular Design, Precision Agriculture Enhancement, Complex Interdependencies Modeling
## 1 Introduction
In an era characterized by escalating climate change, which is resulting in unpredictable weather patterns and increasing environmental stresses, the agricultural sector faces significant challenges (Anwar et al., 2013). Unforeseen climatic events such as droughts, floods, and extreme temperatures are impacting crop yields, highlighting the imperative for advanced, precise, and resilient crop yield prediction models (Kuwayama et al., 2019). Amidst this backdrop of climatic uncertainties (Shrestha et al., 2012), the necessity for accurate and comprehensive crop yield predictions is more acute than ever. A robust system that can efficiently integrate diverse data types and provide a detailed and holistic understanding of the agricultural ecosystem is crucial for mitigating the impacts of climate change on agriculture.
The agricultural ecosystem is inherently complex and interconnected, with numerous factors playing a pivotal role in determining crop yields. Traditional Machine Learning models, while powerful, often fall short in effectively capturing these intricate relationships, as they generally treat data points as independent entities (Liu et al., 2020). The limited capacity of these models to handle relational data and their inability to seamlessly integrate diverse data types such as spatial, temporal, and genetic information, hamper their effectiveness in providing comprehensive and accurate crop yield predictions. Moreover, these models tend to be data-hungry, requiring substantial labeled data for training, which is often a significant challenge in the agricultural context (Majumdar et al., 2017). On the contrary, Graph Neural Networks (GNNs) stand out as a more apt choice for this scenario. GNNs, by design, consider the relationships between data points, enabling them to efficiently model the interconnected agricultural ecosystem. They can effectively synthesize diverse data types into a unified framework, offering a more holistic and nuanced understanding of the factors influencing crop yields (Zhou et al., 2020). The ability of GNNs to work with limited labeled data and their flexibility in handling various data modalities make them a superior choice for developing robust and resilient crop yield prediction models in the face of climate change.
In light of this, the present study introduces Agri-GNN, a pioneering approach employing GNNs to offer an inclusive representation of the agricultural ecosystem. _Agri-GNN_ considers farming plots as nodes, and then constructs edges between nodes based on spatial and genotypic similarity, allowing for the aggregation of node information from a refined selection of nodes. This allows the model to filter out noise in the dataset and to focus yield prediction for each node on the nodes most similar to it in genotypic and spatial terms. _Agri-GNN_ stands out with its capacity to amalgamate diverse data modalities into a cohesive framework, adeptly capturing the complex interaction among genetic, environmental, and spatial factors (Meng et al., 2018). This innovative model transcends traditional methodologies by conceptualizing the agricultural ecosystem as a connected network, where each crop, viewed as an active node, is influenced by its immediate environment and genetic context.
The employment of the GraphSAGE architecture (Hamilton et al., 2017), significantly bolsters the effectiveness of _Agri-GNN_. GraphSAGE is known for its inductive learning approach, where it leverages node attribute information to generate representations for data not seen during the training process. This approach is particularly beneficial for the extensive and heterogeneous datasets that are commonplace in the field of agriculture. Traditional machine learning models often struggle with such diverse and expansive data, leading to suboptimal predictions and insights. However, the GraphSAGE architecture, with its innovative inductive learning, excels in processing and learning from large datasets, thereby ensuring the robustness of _Agri-GNN_.
In agriculture, datasets encompass a wide range of information including weather conditions, soil types, and genetic information, the ability to effectively handle and learn from such data is crucial for accurate yield predictions. The GraphSAGE architecture equips _Agri-GNN_ with this capability, allowing it to seamlessly integrate and process diverse data types to generate detailed and reliable yield predictions. This level of granular insight is important for making informed decisions in agricultural planning and management, ultimately contributing to enhanced productivity and sustainability.
By using the GraphSAGE architecture, _Agri-GNN_ is not just limited to data seen during training. It can generalize and adapt to new data, ensuring that the model remains relevant and useful as more agricultural data becomes available. This adaptability is essential in the dynamic field of agriculture, where new data and insights continuously emerge. The advanced architecture thereby not only enhances _Agri-GNN_'s predictive accuracy but also bolsters its longevity and relevance in the agricultural sector, making it a valuable tool for tackling the challenges of modern agriculture. _Agri-GNN_'s modular and scalable design ensures its adaptability to the fast-paced evolution of the agricultural sector. This flexibility allows for the effortless integration of emerging data sources and insights, ensuring the model remains relevant and effective in a changing landscape (Gandhi and Armstrong, 2016).
_Agri-GNN_ embodies a transformative shift that is taking place in agricultural modeling, providing a novel perspective that comprehensively addresses the complexity and interconnectedness of farming systems. By offering a nuanced, data-driven lens, _Agri-GNN_ stands as a robust tool for navigating the multi-faceted challenges of modern agriculture, particularly in the context of a changing climate.
## 2 Literature Review
Plant breeding specialists are focused on discovering high-quality genetic variations that fulfill the needs of farmers, the wider agricultural sector, and end consumers. One of the key traits often scrutinized is seed yield, particularly in row crops (Singh et al., 2021c). Traditional ways of assessing seed yield involve the laborious and time-restricted activity of machine-harvesting numerous plots when the growing season concludes. This data then informs decisions about which genetic lines to either advance or discontinue in breeding programs. This approach is highly resource-intensive, requiring the harvesting of thousands of test plots each year, thus presenting operational challenges.
In response to these issues, advancements in technology are being harnessed to develop more efficient alternatives. An increasing number of scientists and plant breeders are adopting the use of remote sensing technology, integrated with machine learning methods. This enables more timely predictions of seed yield, substantially cutting down on labor and time requirements during the crucial harvest phase (Li et al., 2022; Yoosefzadeh-Najafabadi et al., 2021; Chiozza et al., 2021; Shook et al., 2021b; Riera et al., 2021; Guo et al., 2021; Singh et al., 2021a).
The newly introduced Cyber-Agricultural System (CAS) takes advantage of cutting-edge continual sensing technology, artificial intelligence, and smart actuators for enhancing both breeding and production in agriculture (Sarkar et al., 2023). Integral to CAS is the concept of phenotyping, which employs sophisticated imaging and computational techniques to streamline the gathering and interpretation of data, thereby facilitating better yield forecasts (Singh et al., 2021b). Numerous studies have honed in on high-throughput phenotyping through the use of drones, investigating yield predictions in crops such as cotton, maize, soybean, and wheat (Herr et al., 2023). Beyond the 2D data collected by drones, research has demonstrated the value of canopy fingerprints, which offer unique insights into the 3D structure of soybean canopies via point cloud data (Young et al., 2023). Despite these advances, there is still scope for refining models that amalgamate diverse datasets, including but not limited to soil features and hyperspectral reflectance, for a more holistic grasp of soybean yields. The soil's physical and chemical attributes play a crucial role in nutrient availability, thereby impacting plant health and growth. Incorporating these soil characteristics could potentially enhance the precision of yield prediction models.
In recent years, the application of neural networks in crop yield prediction has moved beyond traditional architectures to more complex models like Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) (Dahikar and Rode, 2014; Gandhi et al., 2016).
Much work in the field takes advantage of remote sensing data, such as satellite images or NDVI, for yield predictions (You et al., 2017; Nevavuori et al., 2019; Kim et al., 2019). While these methods have shown promise, they often struggle to capture the direct relationships between environmental factors and crop yields. Previous research has also focused on using environmental factors like temperature and rainfall directly as inputs for yield prediction models (Cakir et al., 2014; Khaki and Wang, 2019), but the same failure to capture direct relationships accurately has been seen.
Against this backdrop, Graph Neural Networks (GNNs) offer a significant advancement by incorporating spatial relationships or neighborhood information into the prediction models. By adding the spatial context through GNNs, recent models provide a more nuanced understanding of how localized factors can impact yield, enhancing prediction accuracy (Fan et al., 2022; Park and Park, 2019; Sajitha et al., 2023). Particularly, Fan et al. (2022) uses a GNN-RNN based approach that shows promising results, but fails to generalize on graphs of large size. The GraphSAGE model (Hamilton et al., 2017) stands out as a notable innovation in leveraging spatial and neighborhood information for more accurate crop yield predictions. The incorporation of GraphSAGE allows us to effectively capture localized contextual information, thereby refining our understanding of how specific factors in localized areas can influence crop yields. This results in an enhanced level of accuracy in our yield predictions.
Graph Neural Networks offer a promising avenue for enhancing the state-of-the-art in crop yield prediction. They facilitate the integration of various types of data and have the potential to significantly improve the accuracy of existing models. Future research should focus on leveraging these capabilities to build models that can generalize well across different conditions and scales.
## 3 Background
In this section, we provide background information on Graph Neural Networks and GraphSAGE. We also provide a background on the data features used in the creation of Agri-GNN. This background information is necessary for understanding Agri-GNN.
### Graph Neural Networks
Graphs are a robust and versatile means of capturing relationships among entities, known as nodes, and their interconnections, termed edges. Graph Neural Networks elevate this representational power by extending classical neural network architectures to operate directly on graph-structured data. These networks are particularly effective at generating meaningful node-level or graph-level embeddings, representations that can be subsequently used in various downstream tasks such as node classification, link prediction, and community detection. For an exhaustive review of the techniques and methods employed in GNNs, we direct the reader to seminal survey papers by Battaglia et al. (2018) and Chami et al. (2021).
A GNN can be represented as \(f(A,X;W)\to y\), where \(y\) denotes the set of predicted outcomes (e.g., plot yield predictions), \(A\) is an \(n\times n\) adjacency matrix encapsulating the graph structure, \(X\) is an \(n\times p\) feature matrix detailing node attributes, and \(W\) represents the trainable weight parameters of the network. In this function, \(A\) and \(X\) serve as inputs, and the network \(f\) is parameterized by \(W\).
One of the distinguishing characteristics of GNNs lies in their ability to accumulate and propagate information across nodes (Hamilton et al., 2017). A node's feature vector is updated iteratively based on the feature vectors of its neighboring nodes. The depth or number of layers, \(\ell\), in the GNN controls which neighbors are involved in these updates. Specifically, the final representation of a node only includes information from nodes that are at most \(\ell\)-hop distance away. This scope of nodes and edges involved in the computation of a node's representation is termed its computational graph, formally defined as \(G_{v}=(A_{v},X_{v})\), where \(A_{v}\) is the adjacency matrix of the subgraph and \(X_{v}\) is the feature matrix of the nodes within the \(\ell\)-hop neighborhood of node \(v\).
### GraphSAGE
GraphSAGE (Graph Sample and Aggregation) is a pioneering extension to general Graph Neural Networks and was specifically designed to address challenges such as scalability and inductive learning on large graphs (Hamilton et al., 2017). Unlike traditional GNN architectures that require the entire graph to be loaded into memory for training, GraphSAGE leverages a sampling strategy to extract localized subgraphs, thereby allowing for mini-batch training on large-scale graphs. The key innovation in GraphSAGE is its novel aggregation mechanism, which uses parameterized functions to aggregate information from a node's neighbors. These functions can be as simple as taking an average or as complex as employing a neural network for the aggregation process.
GraphSAGE is expressed as \(f_{\text{SAGE}}(A,X;W)\to y\), where \(f_{\text{SAGE}}\) represents the GraphSAGE model, \(y\) is the output (such as node embeddings or graph-level predictions), \(A\) is the adjacency matrix, \(X\) is the feature matrix, and \(W\) are the trainable parameters. Like generic GNNs, GraphSAGE accumulates and combines information from a node's neighborhood to update its feature representation. However, GraphSAGE can generalize to unseen nodes during inference by leveraging learned aggregation functions, making it particularly valuable for evolving graphs where the node set can change over time. It has been employed effectively in diverse applications such as social network analysis, recommendation systems, and even in specialized fields like computational biology and agronomy, showcasing its adaptability and efficiency (Xiao et al., 2019).
In formal terms, the \(l\)-th layer of GraphSAGE is defined as follows. The aggregated embedding from neighboring counties, denoted \(\mathbf{a}_{c,t}^{(l)}\), is calculated using the function \(g_{l}\) applied to the embeddings \(\mathbf{z}_{c^{\prime},t}^{(l-1)}\) for all neighboring counties \(c^{\prime}\) of county \(c\), represented as:
\[\mathbf{a}_{c,t}^{(l)}=g_{l}(\{\mathbf{z}_{c^{\prime},t}^{(l-1)},\forall c^{ \prime}\in\mathcal{N}(c)\})\]
Here, \(\mathcal{N}(c)=\{c^{\prime},\forall A_{c,c^{\prime}}=1\}\) denotes the set of neighboring counties for \(c\).
The embedding for the \(l\)-th layer, \(\mathbf{z}^{(l)}_{c,t}\), is then obtained by applying a non-linear function \(\sigma\) to the product of a weight matrix \(\mathbf{W}^{(l)}\) and the concatenation of the last layer's embedding \(\mathbf{z}^{(l-1)}_{c,t}\) and \(\mathbf{a}^{(l)}_{c,t}\):
\[\mathbf{z}^{(l)}_{c,t}=\sigma(\mathbf{W}^{(l)}\cdot(\mathbf{z}^{(l-1)}_{c,t}, \mathbf{a}^{(l)}_{c,t}))\]
where \(\mathbf{z}^{(0)}_{c,t}=h_{c,t}\) is the initial (input) embedding, and \(l\) belongs to the set \(\{1,...,L\}\).
The aggregation function for the \(l\)-th layer, \(g_{l}(\cdot)\), can be a mean, pooling, or graph convolution (GCN) function.
In this process, \(\mathbf{a}^{(l)}_{c,t}\) is first concatenated with \(\mathbf{z}^{(l-1)}_{c,t}\), and then transformed using the weight matrix \(\mathbf{W}^{(l)}\). The non-linear function \(\sigma(\cdot)\) is applied to this product to obtain the final embedding for the \(l\)-th layer.
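To make the aggregation-and-update rule concrete, the following is a minimal sketch of a single GraphSAGE layer with a mean aggregator, written in plain PyTorch. It is our illustration of the equations above (using a dense adjacency matrix for brevity), not the library implementation used later in this paper.

```python
import torch
import torch.nn as nn

class MeanSAGELayer(nn.Module):
    """One GraphSAGE layer: mean-aggregate neighbour embeddings,
    concatenate with the node's own embedding, then apply a linear
    map and a non-linearity, as in the update equations above."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(2 * in_dim, out_dim)

    def forward(self, z: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # z: (n, in_dim) node embeddings; adj: (n, n) binary adjacency.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # guard isolated nodes
        a = (adj @ z) / deg                              # mean over neighbours
        return torch.relu(self.linear(torch.cat([z, a], dim=1)))

# Usage on a toy 4-node graph with 8-dimensional features:
adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 0., 1.],
                    [1., 0., 0., 1.],
                    [0., 1., 1., 0.]])
z1 = MeanSAGELayer(8, 16)(torch.randn(4, 8), adj)  # -> shape (4, 16)
```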
### Vegetation Indices
Vegetation indices are essential metrics used in the field of remote sensing phenology to quantify vegetation cover, assess plant health, and (in our study) to estimate crop yields. These indices leverage spectral measurements of electromagnetic radiation, which capture the various wavelengths of light absorbed and reflected by plants. A fundamental understanding of how these wavelengths interact with vegetation is crucial for interpreting these indices. Specifically, the pigments in plant leaves, such as chlorophyll, absorb wavelengths in the visible spectrum, particularly red light. Conversely, leaves reflect a significant amount of near-infrared (NIR) light, which is not visible to the human eye. The indices used in the construction of Agri-GNN are available in Appendix A.
One of the most commonly used vegetation indices is the Normalized Difference Vegetation Index (NDVI). It is calculated using the formula \(\text{NDVI}=\frac{(NIR-RED)}{(NIR+RED)}\), where \(NIR\) represents the near-infrared reflectance and \(RED\) is the reflectance in the red part of the spectrum. The NDVI value ranges between -1 and 1, with higher values typically indicating healthier vegetation and lower values signifying sparse or stressed vegetation. This index is invaluable for various applications, ranging from environmental monitoring to precision agriculture. For Agri-GNN, vegetation indices like NDVI can serve as informative node attributes in agronomic graphs, enhancing the model's ability to make accurate and meaningful predictions in agricultural settings (Bannari et al., 1995).
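As an illustration of how such an index is computed from the two reflectance bands, consider the short sketch below; the reflectance values are synthetic and the epsilon guard is our own addition.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, elementwise over plots."""
    return (nir - red) / (nir + red + 1e-12)  # epsilon guards division by zero

# Synthetic reflectance values for three plots:
nir = np.array([0.62, 0.55, 0.30])
red = np.array([0.08, 0.12, 0.25])
print(ndvi(nir, red))  # healthier vegetation -> values closer to 1
```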
## 4 Methods
Our proposed framework, _Agri-GNN_, aims to provide a comprehensive solution to crop yield prediction by leveraging the power of Graph Neural Networks (GNNs). The methods section goes over the various stages involved in the design, construction, and validation of _Agri-GNN_.
### Graph Construction
To effectively utilize GNNs for agricultural prediction, we first represent the agricultural data as a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) denotes the set of nodes and \(\mathcal{E}\) denotes the set of edges.
#### 4.1.1 Node Representation
Each node \(v_{i}\in\mathcal{V}\) corresponds to a specific plot and is associated with a feature vector \(\mathbf{x}_{i}\). This feature vector encapsulates all given data information:
**Vegetation Indices (r\({}_{i}\))**: The variable \(r_{i}\) is derived from remote sensing imagery and encapsulates various spectral indices, including but not limited to reflectance values, that are crucial for assessing vegetation health and vitality. These spectral indices often make use of different reflectance bands, such as near-infrared and red bands, to capture detailed information about plant life. These indices serve as a valuable source of information, particularly when integrated with other types of data like genotypic and soil information. The computational procedure to obtain \(r_{i}\) often involves sophisticated algorithms that account for
atmospheric corrections and other sources of noise in the imagery. The specific formulas and methodologies used to determine \(r_{i}\) are detailed in Appendix A.
**Genotypic Data (g\({}_{i}\))**: The variable \(g_{i}\) encapsulates the genetic information associated with a specific crop or plant. This genotypic data serves as a foundational element in the realm of Cyber Agricultural Systems. Utilizing advanced imaging techniques like hyperspectral imaging, along with computational methods such as machine learning algorithms, the acquisition and analysis of \(g_{i}\) have been significantly streamlined. These advancements not only ease the process of data collection but also enable more accurate and comprehensive genetic profiling. Such in-depth genotypic information is invaluable for understanding plant characteristics, disease resistance, and yield potential, thereby playing a crucial role in the development of precision agriculture strategies and sustainable farming practices (Singh et al., 2021).
**Weather Data (w\({}_{i}\))**: The variable \(w_{i}\) encompasses an array of meteorological factors related to a specific agricultural plot, capturing elements such as temperature, precipitation, humidity, wind speed, and solar radiation. These weather conditions are collected through a variety of methods, including on-site weather stations, remote sensors, and even satellite data. The comprehensive nature of \(w_{i}\) allows it to serve as a vital input for crop health monitoring systems and predictive yield models. For instance, high-resolution temporal data on factors like soil moisture and air temperature can be instrumental in predicting potential stress events for crops, such as drought or frost risks. Furthermore, when integrated with other data types like genotypic and vegetation indices, \(w_{i}\) contributes to creating a multifaceted, dynamic model of the agricultural environment (Mansfield et al., 2005).
The final node feature vector is a concatenation of these features:
\[\mathbf{x}_{i}=[\mathbf{r}_{i},\mathbf{g}_{i},\mathbf{w}_{i}] \tag{1}\]
#### 4.1.2 Edge Representation in _Agri-GNN_
Edges play an indispensable role in graph-based neural network architectures, encoding vital relationships between nodes. Within the _Agri-GNN_ architecture, the edge set \(\mathcal{E}\) is meticulously constructed to encapsulate the intricate relationships between agricultural plots. This is achieved by harnessing both spatial and genotypic attributes.
There are two edge constructions that are undertaken. \(\mathcal{E}_{\text{spatial}}\) encompasses the edges that are created through spatial proximity. Given the geographical coordinates \(\mathbf{c}(v_{i})\) associated with a node \(v_{i}\), the pairwise distance to another node \(v_{j}\) having coordinates \(\mathbf{c}(v_{j})\) can be defined as:
\[d(v_{i},v_{j})=\|\mathbf{c}(v_{i})-\mathbf{c}(v_{j})\| \tag{2}\]
For every node \(v_{i}\), edges are constructed to nodes that fall within the bottom 3% of all pairwise distances, thereby ensuring that the model captures localized environmental intricacies and dependencies:
\[e_{ij}=\begin{cases}1&\text{if }d(v_{i},v_{j})\in\text{bottom 3\% of distances for }v_{i}\\ 0&\text{otherwise}\end{cases} \tag{3}\]
The second set of edges constructed is represented by \(\mathcal{E}_{\text{genotypic}}\). The genotypic data serves as a repository of the genetic characteristics of agricultural plots, offering a window into inherent traits and susceptibilities. Let \(g(v_{i})\) represent the genotypic data for node \(v_{i}\). An edge is formed between nodes \(v_{i}\) and \(v_{j}\) if their genotypic attributes match:
\[e_{ij}=\begin{cases}1&\text{if }g(v_{i})=g(v_{j})\\ 0&\text{otherwise}\end{cases} \tag{4}\]
The culmination of the edge formation process results in the edge set \(\mathcal{E}\), which is a fusion of edges derived from both spatial proximity and genotypic similarity:
\[\mathcal{E}=\mathcal{E}_{\text{spatial}}\cup\mathcal{E}_{\text{genotypic}} \tag{5}\]
By harmonizing spatial and genotypic data, _Agri-GNN_ crafts a robust and nuanced representation of the agricultural milieu, establishing itself as an efficacious tool for diverse applications in the realm of agriculture.
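A minimal sketch of the two edge-construction rules follows. The array names `coords` and `genotypes` and the helper `build_edges` are our own illustrative choices, and the percentile here follows the global 3rd-percentile threshold described in the graph-construction details later in the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def build_edges(coords: np.ndarray, genotypes: np.ndarray, pct: float = 3.0):
    """Return undirected edges (i, j), i < j, combining the spatial rule
    (pairwise distance within the bottom `pct` percent of all non-zero
    distances) with the genotypic rule (identical genotype)."""
    dist = squareform(pdist(coords))          # (n, n) pairwise distances
    threshold = np.percentile(dist[dist > 0], pct)

    n = len(coords)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if dist[i, j] <= threshold or genotypes[i] == genotypes[j]:
                edges.add((i, j))
    return edges

# Toy example: four plots, two genotypes.
coords = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [9.0, 9.0]])
genotypes = np.array(["A", "B", "A", "B"])
print(build_edges(coords, genotypes))
```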
### Graph Neural Network Architecture
For our crop yield prediction task, we introduce the _Agri-GNN_ model, an adaptation of the GraphSAGE (Graph Sample and Aggregation) architecture discussed in section 3. The model operates on graph \(\mathcal{G}\), designed for efficient aggregation and propagation of information across the graph.
_Agri-GNN_ has four GraphSAGE convolutional layers to process and refine node features, ensuring an effective representation for downstream tasks. The model architecture is explained in detail in this section.
The initial layer of the architecture transitions the input node features to an intermediate representation using hidden channels. The transformation for node \(i\) in this layer is shown by equation (6).
\[\mathbf{h}_{i}^{(1)}=\sigma\left(\mathbf{W}_{\text{init}}\cdot\mathbf{x}_{i}+ \mathbf{b}_{\text{init}}\right) \tag{6}\]
In equation (6), \(\mathbf{x}_{i}\) stands for the initial input features of node \(i\), while \(\mathbf{W}_{\text{init}}\) and \(\mathbf{b}_{\text{init}}\) denote the weight matrix and bias vector of this layer, respectively. The function \(\sigma\) represents the activation function. The Rectified Linear Unit (ReLU) is used.
A salient feature of the _Agri-GNN_ is its intermediary layers, which not only collate features from neighboring nodes but also incorporate skip connections to preserve the essence of the original node's features. The aggregation of features from neighboring nodes in these layers is depicted in equation (7).
\[\mathbf{m}_{i}^{(I)}=\text{AGGREGATE}\left(\left\{\mathbf{h}_{j}^{(I-1)}\, \forall j\in\text{N}(i)\right\}\right) \tag{7}\]
Subsequent to this aggregation, the features undergo a transformation, as expressed in equation (8).
\[\mathbf{h}_{i}^{(I)}=\sigma\left(\mathbf{W}^{(I)}\cdot\text{CONCAT}(\mathbf{h} _{i}^{(I-1)},\mathbf{m}_{i}^{(I)})+\mathbf{b}^{(I)}\right) \tag{8}\]
Here, \(\text{N}(i)\) represents the neighboring nodes of node \(i\), and \(\mathbf{W}^{(I)}\) and \(\mathbf{b}^{(I)}\) signify the weight matrix and bias vector for layer \(I\), respectively.
The architecture culminates in the final layer, a pivotal component that produces the model's refined output. This layer mirrors the operations of the intermediary layers in aggregating neighboring node features, but distinguishes itself by omitting the skip connection that adds back the original node features. The aggregation of features in this layer is portrayed in equation (9).
\[\mathbf{m}_{i}^{(4)}=\text{AGGREGATE}\left(\left\{\mathbf{h}_{j}^{(3)}\, \forall j\in\text{N}(i)\right\}\right) \tag{9}\]
The subsequent transformation, harnessing the aggregated features to yield the final output, is described in equation (10).
\[\mathbf{h}_{i}^{(4)}=\sigma\left(\mathbf{W}^{(4)}\cdot\text{CONCAT}(\mathbf{h} _{i}^{(3)},\mathbf{m}_{i}^{(4)})+\mathbf{b}^{(4)}\right) \tag{10}\]
To ensure stability in convergence and enhance generalization, each hidden layer is succeeded by a batch normalization step. After normalization, dropout regularization with a rate of \(p=0.5\) is employed to combat overfitting, as described by equation (11):
\[\mathbf{h}_{i}^{(I)}=\text{Dropout}(\mathbf{h}_{i}^{(I)},p=0.5) \tag{11}\]
The final output of the model is a prediction of the yield of the given node(s).
The final _Agri-GNN_ model is designed to take in initial node features \(\mathbf{x}_{i}\) and produce an output \(\mathbf{o}_{i}\) for each node, representing the predicted yield. The model is summarized as:
\[\mathbf{o}_{i}=\text{{Agri-GNN}}(\mathbf{x}_{i};\mathcal{G},\Theta) \tag{12}\]
Here, \(\Theta\) stands for the set of all learnable parameters within the model.
The model's performance is evaluated using a Mean Squared Error (MSE) loss between the predicted crop yields \(\mathbf{o}_{i}\) and the actual yields. Optimization is carried out via the Adam optimizer, and hyperparameters such as learning rates are fine-tuned for optimal performance.
_Agri-GNN_'s architecture is summarized in Figure 1.
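For concreteness, the following condensed sketch expresses the architecture above with PyTorch Geometric's `SAGEConv`. It follows the stated design (four layers, batch normalization, dropout with \(p=0.5\), skip connections in the intermediary layers, MSE loss with the Adam optimizer), but it is our reading of equations (6)-(12) rather than the authors' released code.

```python
import torch
import torch.nn.functional as F
from torch import nn
from torch_geometric.nn import SAGEConv

class AgriGNNSketch(nn.Module):
    """Sketch of Agri-GNN: four GraphSAGE layers with batch norm, dropout
    (p=0.5), and skip connections in the two intermediary layers."""

    def __init__(self, in_dim: int, hidden: int = 32, out_dim: int = 1):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden)    # eq. (6): input -> hidden
        self.conv2 = SAGEConv(hidden, hidden)    # eqs. (7)-(8), skip-connected
        self.conv3 = SAGEConv(hidden, hidden)    # eqs. (7)-(8), skip-connected
        self.conv4 = SAGEConv(hidden, out_dim)   # eqs. (9)-(10): yield output
        self.bn1, self.bn2, self.bn3 = (nn.BatchNorm1d(hidden) for _ in range(3))

    def block(self, conv, bn, h, edge_index):
        # conv -> batch norm -> ReLU -> dropout, as in eq. (11)
        return F.dropout(F.relu(bn(conv(h, edge_index))), p=0.5,
                         training=self.training)

    def forward(self, x, edge_index):
        h = self.block(self.conv1, self.bn1, x, edge_index)
        h = h + self.block(self.conv2, self.bn2, h, edge_index)  # skip
        h = h + self.block(self.conv3, self.bn3, h, edge_index)  # skip
        return self.conv4(h, edge_index).squeeze(-1)

# Training outline (data is a torch_geometric.data.Data object):
# model = AgriGNNSketch(data.num_node_features)
# optim = torch.optim.Adam(model.parameters(), lr=0.02)
# loss = F.mse_loss(model(data.x, data.edge_index)[mask], data.y[mask])
```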
## 5 Applications and Experimental Results
The _Agri-GNN_ framework is now applied to plot fields in Ames, Iowa. _Agri-GNN_'s performance is compared to various baseline yield prediction models to accurately gauge its potential.
### Data Collection and Processing
Data on soil attributes, hyperspectral reflectance, seed yield, and weather were collected systematically over two consecutive years, 2020 and 2021. The data collection covered both Preliminary Yield Trials (PYT) and Advanced Yield Trials (AYT), and each year involved multiple trial locations.
#### 5.1.1 Trial Design
The Preliminary Yield Trials (PYT) were designed using a row-column configuration. In this layout, each plot had a width of 1.52m and a length of 2.13m. Plots within each row were interspaced by 0.91m. The Advanced Yield Trials (AYT) followed a similar row-column design, but with each plot measuring 1.52m in width and 5.18m in length. The interspacing between plots remained consistent at 0.91m for both trial types.
#### 5.1.2 Soil Data Collection
Digital soil mapping techniques were used to identify ten specific soil attributes, supplementing the collection of hyperspectral data. Soil cores were extracted down to a depth of 15 cm following a 25m grid sampling pattern, using specialized soil probes. Digital soil maps were then generated with 3mx3m pixel resolution using the Cubist regression machine learning algorithm (Khaledian and Miller, 2020).
For each plot, boundaries were outlined using polygon shape files, and the average value of each soil feature was calculated. To improve data reliability, a 3x3 moving mean was computed for each soil attribute within the plot, and this smoothed value was then used for more detailed analyses. The assessed soil features encompassed Calcium (Ca), Cation Exchange Capacity (CEC), Potassium (K), Magnesium (Mg), Organic Matter (OM), Phosphorus (P1), Percent Hydrogen (Ph), and proportions of Clay, Sand, and Silt.
Figure 1: Summary of Model Architecture
#### 5.1.3 Hyperspectral Reflectance Data
Hyperspectral reflectance data for each plot was captured using a Thorlabs CCS200 spectrometer (Thorlabs, Newton, NJ). The methodology adheres to the system outlined in the study by Bai et al. (2016). Spectral data was collected annually at three distinct time points: T1, T2, and T3, captured sequentially, covering wavelengths from 200 nm to 1000 nm, as illustrated in Figure 2.
Particular emphasis is placed on the T3 timepoint, which has been identified as having superior feature importance values according to preliminary data assessments. As vegetation nears physiological maturity, the correlation between hyperspectral reflectance data and crop yield becomes more significant.
Fifty-two vegetation indices were calculated based on the collected hyperspectral reflectance values, following the methodology detailed in the study by Li et al. (2022). A comprehensive list of these indices is available in Appendix A. The distribution of the collected data across the four fields is summarized in Table 1.
It is interesting to note that many of the vegetation indices show a strong correlation with each other, especially when considering their underlying mathematical formulations. As depicted in Figure 3, certain pairs of indices exhibit notably high correlation values, suggesting that they might be capturing similar information about the vegetation. This redundancy could be attributed to the fact that many vegetation indices are derived from the same spectral bands, primarily the red and near-infrared (NIR) regions, which are known to be indicative of plant health and vigor. However, while two indices might be highly correlated, they might still provide unique insights into different vegetation properties, and thus all of the vegetation indices were kept as features in the construction of the dataset.
| Field No. | Number of observations |
| --- | --- |
| 1 | 770 |
| 2 | 912 |
| 3 | 800 |
| 4 | 679 |

Table 1: Number of datapoints in each of the four fields.
Figure 2: Reflectance plot showcasing wavelengths from 400 nm to 1000 nm. The depicted red, green, and blue bands are illustrative of the visible spectrum wavelengths.
#### 5.1.4 Yield Data
Seed yield data at the plot level was collected using an Almaco small plot combine (Almaco, Nevada, IA). For consistency, all yield data was normalized to a moisture content of 13% and then converted to kilograms per hectare (Kg/ha). The data collection process was organized in blocks, mirroring the layout established by the breeding program. This arrangement grouped the plots based on their genetic lineage and corresponding maturity groups.
Prior to the computation of vegetation indices, a rigorous data preprocessing phase was undertaken to omit anomalies and outliers. The steps encapsulated:
1. Omission of all observations for band values falling below 400 nm due to detected anomalies in these bands' readings.
2. Exclusion of datapoints with negative hyperspectral values within the 400 nm to 1000 nm band range.
3. Removal of datapoints showcasing negative seed yield values.
### Graph Construction
Once the dataset was pruned to retain only the relevant columns, we started the task of graph construction, as explained in detail in Section 4.1. The nodes of the graph represented individual data points from the dataset, while the edges encoded two primary relationships: spatial proximity and genotype similarity.
The geospatial coordinates ('Latitude' and 'Longitude') of each data point facilitated the computation of pairwise distances between them. By harnessing this distance matrix, we established a threshold--specifically the 3rd percentile of non-zero distances--and used it as a criterion to draw edges between nodes. In essence, if the spatial distance between two nodes was less than this threshold, an edge was drawn between them.
Beyond spatial relationships, our graph also recognized the significance of genetic similarities between data points. This was achieved by drawing edges between nodes that shared the same genotype, as denoted by the 'Population' column. By adopting this strategy, our graph was enriched with edges that encapsulated intrinsic genetic relationships, which have an influence on agricultural yield (Shook et al., 2021a).
Figure 3: Correlation of selected vegetation indices is shown.
To facilitate subsequent graph-based deep learning, we represented our graph using the _PyTorch Geometric_ framework. This involved a series of transformations. Firstly, the combined spatial and genotype edges were aggregated. Secondly, the dataset was processed to handle missing values, by imputing them with column-wise means, and categorical columns were one-hot encoded. The final graph representation incorporated node features (derived from the processed dataset), the aggregated edges, and the target label ('Yield').
The resulting graph, thus constructed, served as the foundation for our _Agri-GNN_ experiments.
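The preprocessing and assembly steps above can be sketched as follows; the column name 'Yield' follows the text, while the `edges` input and the helper name are illustrative assumptions of ours.

```python
import numpy as np
import pandas as pd
import torch
from torch_geometric.data import Data

def assemble_graph(df: pd.DataFrame, edges: set) -> Data:
    """Mean-impute numeric gaps, one-hot encode categoricals, and wrap
    features, edges and yields into a PyTorch Geometric Data object."""
    df = df.copy()
    y = torch.tensor(df.pop("Yield").to_numpy(), dtype=torch.float)
    df = df.fillna(df.mean(numeric_only=True))   # column-wise mean imputation
    df = pd.get_dummies(df)                      # one-hot categorical columns
    x = torch.tensor(df.to_numpy(dtype=np.float32))

    # Store each undirected edge in both directions, shape (2, num_edges).
    pairs = list(edges) + [(j, i) for (i, j) in edges]
    edge_index = torch.tensor(pairs, dtype=torch.long).t().contiguous()
    return Data(x=x, edge_index=edge_index, y=y)
```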
### Neural Architecture
The model was developed using the _PyTorch_ framework (Paszke et al., 2017). For the specific needs of graph-based neural networks, we turned to the _PyTorch Geometric (PyG)_ library (Fey and Lenssen, 2019). This library is a comprehensive toolkit, providing a range of architectures and tools tailored for graph neural networks.
Our model, _Agri-GNN_, is an augmentation of the conventional GraphSAGE architecture (Hamilton et al., 2017). It features four GraphSAGE convolutional layers. The initial layer transitions input features to hidden channels. The two intermediary layers, enhanced with skip connections, amplify the model's capacity to discern both rudimentary and advanced patterns in the data. The final layer is designed to yield the model's output. To ensure the stability and efficiency of training, batch normalization is applied following each convolutional layer. Furthermore, to mitigate the risk of overfitting, dropout regularization is integrated after each batch normalization, with a rate of 0.5.
The dimensionality of the input features was determined dynamically based on the dataset. The model was trained for 500 epochs, with performance monitored throughout to gauge accuracy and to guard against potential overfitting.
An \(80-20\) split was utilized, where 80% of the farm nodes were randomly chosen to be part of the training dataset and the remaining 20% were held out for evaluation.
### Hyperparameter Tuning
Hyperparameter tuning was critical to obtaining the optimal model. We conducted an exhaustive exploration of various hyperparameter combinations to pinpoint the most conducive setting for our model. The hyperparameters that we varied are summarized in Table 2.
The best hyperparameters can be seen in Table 3.
## 6 Model Performance and Results
We now show the results of _Agri-GNN_ on the described dataset. The results are compared with two baselines on the same dataset: a K-Nearest Neighbors model and an Ensemble Machine Learning model (Chattopadhyay et al., 2023).
| Hyperparameter | Values Explored |
| --- | --- |
| Learning Rates | 0.001, 0.005, 0.01, 0.02 |
| Hidden Channels | 32, 64, 128 |
| Dropout Rates | 0.3, 0.5, 0.7 |

Table 2: Hyperparameters and their explored values
| Hyperparameter | Best Value |
| --- | --- |
| Learning Rate | 0.02 |
| Hidden Channels | 32 |
| Dropout Rate | 0.3 |

Table 3: Optimal hyperparameters after tuning
We first visualize the embeddings derived from our graph neural network model. To better understand the spatial distribution of these embeddings, we employed the t-Distributed Stochastic Neighbor Embedding (t-SNE) algorithm for dimensionality reduction (Hinton and Roweis, 2002). The embeddings, initially generated by the forward method of our model, were reduced to two dimensions using t-SNE. As shown in Figure 4, each point represents a node in the graph, and the spatial arrangement captures the similarity between node embeddings. Distinct clusters and patterns in the visualization indicate nodes with similar features within the graph.
From Figure 4, we see that multiple distinct graphs may be formed from the data, particularly when subsets of the data are not related in any spatial or genotypic way. _Agri-GNN_ is designed to learn the inherent patterns in each of these subgraphs, while further increasing the accuracy of the main base model.
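Producing such a visualization requires only reducing the learned embeddings to two dimensions; a minimal sketch with scikit-learn is given below, where the random `embeddings` array stands in for the output of the trained model's forward pass.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for the (num_nodes, hidden_dim) embeddings produced by the
# trained model; replace with the real array in practice.
embeddings = np.random.randn(300, 32)
coords_2d = TSNE(n_components=2, perplexity=30.0,
                 random_state=0).fit_transform(embeddings)
print(coords_2d.shape)  # (300, 2), ready for a scatter plot
```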
The performance of _Agri-GNN_ was assessed using standard regression metrics, as presented in Table 4:
_Agri-GNN_ achieves a Root Mean Squared Error (RMSE) of 4.565 on this dataset. The Mean Absolute Error (MAE) is 3.590. The model has an \(R^{2}\) value of 0.876. These results show significant improvement over the baselines used.
The K-Nearest Neighbor (K-NN) Algorithm is used as the first baseline model. The K-NN model predicts the yield of a node based on the average yield of its \(k\) nearest neighbors based on latitude and longitude. This model is optimized by performing a grid search to find the best value of \(k\) from a range of 1 to 20. The optimized K-NN model used \(k=18\) as the number of neighbors for making predictions.
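The baseline's grid search can be reproduced in outline with scikit-learn; in the sketch below the coordinate and yield arrays are random stand-ins for the real plot data.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
lat_lon = rng.uniform(size=(500, 2))       # stand-in plot coordinates
y = rng.uniform(40.0, 100.0, size=500)     # stand-in yields

search = GridSearchCV(KNeighborsRegressor(),
                      {"n_neighbors": list(range(1, 21))},
                      scoring="neg_root_mean_squared_error", cv=5)
search.fit(lat_lon, y)
print(search.best_params_)                 # the paper's optimum was k = 18
```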
| Metric | _Agri-GNN_ Value |
| --- | --- |
| Root Mean Squared Error (RMSE) | 4.565 |
| Mean Absolute Error (MAE) | 3.590 |
| Coefficient of Determination (\(R^{2}\)) | 0.876 |

Table 4: Performance metrics of the _Agri-GNN_ model
Figure 4: t-SNE visualization of graph node embeddings.
The performance metrics of the optimized K-NN model are presented in Table 5. The model achieved a Root Mean Squared Error (RMSE) of 12.93, a Mean Absolute Error (MAE) of 10.33, and a Coefficient of Determination (\(R^{2}\)) of 0.026. The _Agri-GNN_ model shows superior performance to the baseline. The metrics signify a substantial enhancement over the baseline K-NN model, demonstrating the effectiveness of _Agri-GNN_ in yield prediction.
_Agri-GNN_'s prediction error and \(R^{2}\) show significant potential when compared to recent work on a similar dataset (Chattopadhyay et al., 2023), where the highest \(R^{2}\) value achieved was less than 0.8. Such an enhancement underscores the potential benefits of graph-based neural networks, especially when handling datasets with inherent relational structures.
In Figure 5, we present a comparison between the Actual and Predicted Yields. This graphical representation provides a comprehensive insight into the accuracy and precision of our model's predictions. An interesting observation from Figure 5 is the remarkable accuracy of the model's predictions for yields ranging between 50 and 90. The data points in this range cluster closely around the line of perfect agreement, suggesting that the model is particularly adept at predicting yields within this interval. This is noteworthy as it indicates that our model is not only reliable in general but also exhibits enhanced performance when predicting yields in this specific range.
Figure 5: The Actual vs. Predicted Yield of our predictions. The red line signifies a perfect model. The model achieves an \(R^{2}\) value of 0.876
| Metric | Baseline K-NN Value |
| --- | --- |
| Root Mean Squared Error (RMSE) | 12.93 |
| Mean Absolute Error (MAE) | 10.33 |
| Coefficient of Determination (\(R^{2}\)) | 0.026 |

Table 5: Performance metrics of the K-NN baseline model (\(k=18\))
### Scalability and Robustness
In the previous section, an application of _Agri-GNN_ is outlined, focusing on agricultural yield prediction in farms located in Ames, Iowa. The results demonstrate the model's capability to efficiently and accurately predict yields based on geographical coordinates. This section delves into the scalability and robustness of the _Agri-GNN_ framework, discussing its potential application on a broader scale, both nationally and internationally.
_Agri-GNN_ is designed with scalability in mind, allowing for seamless integration and deployment in diverse farming contexts across the world. The model's adaptability to various geographical and environmental conditions ensures consistent and reliable yield predictions irrespective of the location. The use of cloud computing resources and distributed computing frameworks enhances the scalability of _Agri-GNN_, enabling real-time processing and analysis of large datasets encompassing numerous farms across different geographical locations.
Robustness is at the core of the _Agri-GNN_ framework. The model employs advanced machine learning algorithms and techniques to ensure stability and reliability in yield predictions, even in the face of data variability and uncertainties. The robust nature of _Agri-GNN_ ensures uninterrupted and consistent performance, bolstering the confidence of farmers and stakeholders in the accuracy and reliability of the yield predictions provided by the model. Moreover, the continuous learning and adaptation capabilities of _Agri-GNN_ further enhance its robustness, ensuring it remains at the forefront of agricultural yield prediction technology. Unfortunately, _Agri-GNN_ is optimized to work best with large datasets where ample information from previous experiments is available. In many farming contexts, there is not enough information to train _Agri-GNN_ to have reliable performance (Wiseman et al., 2019). Further research should explore how _Agri-GNN_ can be optimized to perform well in instances where a lack of ample farming data may be a problem.
## 7 Conclusion
In an ever-evolving agricultural landscape marked by climatic uncertainties, the pressing need for accurate and holistic crop yield predictions has never been greater. This study introduced Agri-GNN, a pioneering approach that harnesses the power of Graph Neural Networks to provide a comprehensive representation of the agricultural ecosystem. Unlike traditional models that often operate in silos, Agri-GNN's strength lies in its ability to synthesize diverse data modalities into a unified framework, capturing the intricate interplay between genetic, environmental, and spatial factors.
Agri-GNN transcends conventional methodologies by viewing the agricultural ecosystem as an interconnected network, where crops aren't just passive entities but active nodes influenced by their immediate surroundings and broader contexts. This perspective, combined with GNN's superior data processing capabilities, enables Agri-GNN to deliver predictions that are both granular and holistic.
Furthermore, Agri-GNN's modular design ensures its relevance in a rapidly changing agricultural sector, allowing for seamless integration of new data sources and insights. Its precision agriculture approach not only aids in enhancing productivity but also paves the way for sustainable practices that respect both economic and environmental considerations.
Our applications of the Agri-GNN framework's capabilities to the plot fields of Ames, Iowa show promising results, even obtaining better performance than the results obtained in Chattopadhyay et al. (2023). Agri-GNN's performance metrics, including \(RMSE\), \(MAE\), and \(R^{2}\) highlighted its proficiency in yield prediction. Notably, the model showcased significant improvements over existing models, reaffirming the potential of graph-based neural networks in agricultural applications. The t-SNE visualizations further provided insights into the model's embeddings, reinforcing the cohesive and interconnected nature of the data.
In summary, Agri-GNN represents a paradigm shift in agricultural modeling. By capturing the complexity and interconnectedness of farming systems, it offers a fresh lens through which to view and address the multifaceted challenges of modern agriculture. As we stand at the crossroads of traditional practices and technological innovation, Agri-GNN serves as a beacon, guiding the way towards informed, sustainable, and resilient agricultural futures.
## 8 Acknowledgments
The authors thank Joseif Raigne, Dr. Baskar Ganapathysubramanian, and Dr. Soumik Sarkar for their invaluable feedback on the draft manuscript.
The authors thank Joseif Raigne for his help in the construction of spatial data for the field experiments.
The authors thank Shannon Denna and Christopher Grattoni for their support.
The authors thank staff and student members of SinghSoybean group at Iowa State University, particularly Brian Scott, Will Doepke, Jennifer Hicks, Ryan Dunn, and Sam Blair for their assistance with field experiments and phenotyping.
This work was supported by the Iowa Soybean Association, North Central Soybean Research Program, USDA CRIS project IOW04714, AI Institute for Resilient Agriculture (USDA-NIFA #2021-647021-35329), COALESCE: COntext Aware LEarning for Sustainable CyBr-Agricultural Systems (CPS Frontier #1954556), Smart Integrated Farm Network for Rural Agricultural Communities (SIRAC) (NSF S & CC #1952045), RF Baker Center for Plant Breeding, and Plant Sciences Institute.
## 9 Author Contributions
Aditya Gupta conceptualized and designed the proposed model, developed the methodology, conducted the data analysis, constructed and implemented the Agri-GNN, provided visualization of the results, and took primary responsibility for writing, revising, and finalizing this paper. All authors have read, reviewed, and agreed to the published version of the manuscript.
## 10 Conflict of Interest
The authors declare no conflict of interest.
|
2302.05322 | Numerical Methods For PDEs Over Manifolds Using Spectral Physics
Informed Neural Networks | We introduce an approach for solving PDEs over manifolds using physics
informed neural networks whose architecture aligns with spectral methods. The
networks are trained to take in as input samples of an initial condition, a
time stamp and point(s) on the manifold and then output the solution's value at
the given time and point(s). We provide proofs of our method for the heat
equation on the interval and examples of unique network architectures that are
adapted to nonlinear equations on the sphere and the torus. We also show that
our spectral-inspired neural network architectures outperform the standard
physics informed architectures. Our extensive experimental results include
generalization studies where the testing dataset of initial conditions is
randomly sampled from a significantly larger space than the training set. | Yuval Zelig, Shai Dekel | 2023-02-10T15:33:32Z | http://arxiv.org/abs/2302.05322v3 | # Numerical Methods For PDEs Over Manifolds Using Spectral Physics Informed Neural Networks
###### Abstract
We introduce an approach for solving PDEs over manifolds using physics informed neural networks whose architecture aligns with spectral methods. The networks are trained to take in as input samples of an initial condition, a time stamp and point(s) on the manifold and then output the solution's value at the given time and point(s). We provide proofs of our method for the heat equation on the interval and examples of unique network architectures that are adapted to nonlinear equations on the sphere and the torus. We also show that our spectral-inspired neural network architectures outperform the standard physics informed architectures. Our extensive experimental results include generalization studies where the testing dataset of initial conditions is randomly sampled from a significantly larger space than the training set.
## 1 Introduction
Time dependent differential equations are a basic tool for understanding many processes in physics, chemistry, biology, economy and any field that requires the analysis of time-dependent dynamical processes. Therefore, solving these equations is an active area of research [1, 2]. For many of these equations, an analytical solution does not exist and a numerical method must be used.
Numerical methods such as finite differences and finite elements methods are applied successfully in many scenarios; however, when the PDEs are given on manifolds, discretization processes and the application of time steps can become challenging.
In recent years, there has been an emergence of machine learning methods, most notably Physics Informed (PI) deep learning models [4],[20]. Physics Informed Neural Networks (PINN) are designed to solve partial differential equations and inverse problems by enforcing the networks to approximately obey the given governing equations through corresponding loss functions during the training phase. This technique makes it possible to obtain relatively high-quality approximations with relatively small datasets. There are various neural network architectures that have been developed for this purpose, with different settings and strategies such as automatic differentiation [18], numerical schemes [5], grid-free [3, 4] or grid-dependent approaches [5], and the ability to handle different geometries [19]. In this paper, we present a generalization of spectral based deep learning methods for PDEs [23],[24],[25],[26]:
1. We provide a physics informed deep learning approach that can handle the general case of differential equations over compact Riemannian manifolds.
2. The design of the networks relies on the paradigm of spectral approximation, where on each manifold we use the corresponding eigenfunction basis of the Laplace-Beltrami operator. Previous works discussing the connection between harmonic analysis and deep learning over manifolds are described in [16, 21]. As we shall see, this allows to construct neural networks that provide higher accuracy using less parameters when benchmarked with standard PINN architectures.
3. Typically, PINNs need to be re-trained for each given initial condition. Our approach allows to train a network that can take in as input a subspace of initial conditions over the manifold.
The outline for the remainder of this paper is as follows. Section 2 reviews some preliminaries about PINNs and spectral approximation over manifolds. Section 3 describes the key aspects of our approach. In Section 4 we provide, as a pedagogical example, the theory and details of the method for the simple case of the heat equation over the unit interval. In Sections 5 and 6 we show how our approach is applied for nonlinear equations over the sphere and torus. Our extensive experimental results include generalization studies where the testing dataset is sampled from a significantly larger space than the training set. We also verify the stability of our models by injecting random noise to the input and validating the errors increase in controlled manner. Concluding remarks are found in Section 7.
## 2 Preliminaries
### Physics informed neural networks
In this section, we describe the basic approach to PINNs presented in [4]. Generally, the goal is to approximate the solution for a differential equation over a domain \(\Omega\) of the form:
\[u_{t}+\mathcal{N}[u]=0,\quad t\in[0,T],\]
with some pre-defined initial and/or boundary conditions. Typically, a PINN \(\tilde{u}(x,t)\) is realized using a Multi-Layer Perceptron (MLP) architecture. This is a feed-forward network where each \(j\)-th layer takes as input the vector \(v_{j-1}\) which is the output of the previous layer, applies to it an affine transformation \(y=M_{j}v+b_{j}\) and then a coordinate-wise nonlinearity \(\sigma\) to produce the layer's output \(v_{j}\)
\[v_{j}=\sigma\circ(M_{j}v_{j-1}+b_{j}). \tag{1}\]
In some architectures either the bias vector \(b_{j}\) and/or the coordinate-wise nonlinearity \(\sigma\) are not applied in certain layers. In a standard PINN architecture, the input to the network \(\tilde{u}\) is \(v_{0}=(x,t)\). The unknown parameters of the network are the collection of weights \(\{M_{j},b_{j}\}_{j}\) and the network is trained to minimize the following loss function:
\[MSE_{B}+MSE_{D},\]
with the boundary/initial value loss
\[MSE_{B}=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}|\tilde{u}(x_{i}^{b},t_{i}^{b})-u(x_{ i}^{b},t_{i}^{b})|^{2},\]
and the differential loss
\[MSE_{D}=\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}|(\tilde{u}_{t}+\mathcal{N}[\tilde{u }])(x_{i}^{d},t_{i}^{d})|^{2}.\]
In the above, \(\{(x_{i}^{b},t_{i}^{b})\}_{i=1}^{N_{b}}\) is a discretized set of time and space points, where each \(u(x_{i}^{b},t_{i}^{b})\) is the true given initial or boundary value at \((x_{i}^{b},t_{i}^{b})\). The set \(\{(x_{i}^{d},t_{i}^{d})\}_{i=1}^{N_{d}}\) typically contains randomly distributed internal domain collocation points. Since the architecture of the neural network is given analytically (as in (1) for the case of MLP), the value \((\tilde{u}_{t}+\mathcal{N}[\tilde{u}])|_{(x_{i}^{d},t_{i}^{d})}\) at a data-point \((x_{i}^{d},t_{i}^{d})\) can be computed using the automatic differentiation feature of software packages such as TensorFlow and PyTorch [6, 7] (in our work we used TensorFlow). Thus, the aggregated loss function enforces the approximating function \(\tilde{u}\) to satisfy the required initial and boundary conditions as well as the differential equation.
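To make Eq. (1) concrete, here is a minimal TensorFlow sketch of such an MLP (depth, width, and the final affine layer are illustrative assumptions, not the exact architectures used later):

```python
import tensorflow as tf

# Minimal MLP realization of Eq. (1): v_j = tanh(M_j v_{j-1} + b_j).
# Depth and width below are illustrative assumptions.
def make_pinn(width=50, depth=4):
    layers = [tf.keras.layers.Dense(width, activation="tanh") for _ in range(depth)]
    layers.append(tf.keras.layers.Dense(1))  # last layer: affine, no nonlinearity
    return tf.keras.Sequential(layers)

u_net = make_pinn()                      # input v_0 = (x, t)
print(u_net(tf.constant([[0.3, 0.1]])))  # scalar approximation of u(x, t)
```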
### Spectral decompositions over manifolds
We recall a fundamental result in the spectral theory over manifolds regarding the spectrum of the Laplace-Beltrami operator \(\Delta\)[8, Theorem 10.13]
**Theorem 1**.: _Let \(\Omega\) be a non-empty compact relatively open subset of a Riemannian manifold \(\mathcal{M}\) with metric \(g\) and measure \(\mu\). The spectrum of \(\mathcal{L}:=-\Delta\) on \(\Omega\) is discrete and consists of an increasing sequence \(\{\lambda_{k}\}_{k=1}^{\infty}\) of non-negative eigenvalues (with multiplicity) such that \(\lim_{k\to\infty}\lambda_{k}=\infty\). There is an orthonormal basis \(\{\phi_{k}\}_{k=1}^{\infty}\) in \(L_{2}(\Omega)\) such that each function \(\phi_{k}\) is an eigenfunction of \(-\Delta\) with eigenvalue \(\lambda_{k}\). Moreover, if we wish to solve the heat equation \(u_{t}=\Delta u\) on \(\Omega\) with initial condition \(u(x,0)=f(x),\ f\in L_{2}(\Omega)\), the solution is given by:_
\[u(x,t)=\sum_{k=1}^{\infty}e^{-\lambda_{k}t}\langle f,\phi_{k}\rangle\phi_{k}( x).\]
This well-established result motivates the following spectral paradigm. To solve the heat equation with some initial condition, one should first decompose the initial condition function into a linear combination of the eigenfunction basis and then apply a time-dependent exponential decay to the initial value coefficients. An approximation entails working with the subspace spanned by \(\{\phi_{k}\}_{k=1}^{K}\), for some sufficiently large \(K\) (see e.g. Theorem 4 below). For a general manifold \(\mathcal{M}\), the eigenfunctions do not necessarily have an analytic form and need to be approximated numerically. As we will show, we also follow the spectral paradigm for the more challenging cases of nonlinear equations over manifolds, where the time dependent processing of the initial value coefficients is not obvious. Nevertheless, a carefully crafted architecture can provide superior results over standard PINNs.
## 3 The architecture of spectral PINNs
Let \(\mathcal{M}\subset\mathbb{R}^{n}\) be a Riemannian manifold, \(\Omega\subset\mathcal{M}\) a non-empty compact relatively open subset and \(\mathcal{N}\) a differential operator over this manifold, which can possibly be nonlinear. We assume our family of initial conditions comes from a finite-dimensional space \(W\subset L_{2}(\Omega)\), which can be selected to be sufficiently large. Given a vector of samples \(\vec{f}\) of \(f\in W\) over a fixed discrete subset of \(\Omega\), a point \(x\in\mathcal{M}\) and \(t\in[0,T]\), we would like to find an approximation \(\tilde{u}(\vec{f},x,t)\), given by a trained neural network, to the solution \(u\) of
\[u_{t}+\mathcal{N}[u]=0,\]
\[u(x,t=0)=f(x),\ \forall x\in\Omega.\]
Recall that typically PI networks are trained to approximate a solution for a single specific initial condition (such as in [4]). However, we emphasize that our neural network model is trained only once for the family of initial conditions from the subspace \(W\) and that once trained, it can be used to solve the equation with any initial condition from \(W\). Moreover, as we demonstrate in our experimental results, the trained network has the 'generalization' property, since it is able to approximate well the solutions when the initial value functions are randomly sampled from a larger space containing \(W\).
Our method takes inspiration from spectral methods for solving PDEs. It is composed of 3 steps implemented by 3 blocks, as depicted in Figure 1:
1. **Transformation Block -** The role of this block is to compute from the samples \(\vec{f}\) of the initial value condition a 'projection' onto \(W_{K}=span\{\phi_{k}\}_{k=1}^{K}\), for some given \(K\), where \(\{\phi_{k}\}_{k=1}^{\infty}\) are the eigenfunctions of the Laplace-Beltrami operator on the manifold. We denote this block as \(\tilde{\mathcal{C}}:\vec{W}\to\mathbb{R}^{K}\), where \(\vec{W}\) is a subset of \(\mathbb{R}^{L}\) which contains sampling vectors of functions from \(W\) over a fixed discrete subset of \(\Omega\). The desired output of the block is an estimation \(\{\tilde{f}_{k}\}_{k=1}^{K}\) of the coefficients \(\{\langle f,\phi_{k}\rangle\}_{k=1}^{K}\). However, in cases where it is difficult to
work with the spectral basis, one can train an encoder to transform the input samples to a compressed representation space of dimension \(K\). Also, although the network is trained on point samples from \(W\), it is able to receive as input a sample vector \(\vec{f}\) of a function \(f\) from a larger space containing \(W\) and approximate the solution. In most cases, it is advantageous to have the choice of the sampling set and the quantities \(L\) and \(K\) determined by 'Nyquist-Shannon'-type theorems for the given subspaces \(W\), \(W_{K}\) and the manifold. In the scenario where \(W=W_{K}\) and the sampling set of size \(L\) is selected to provide perfect 'Shannon'-type reconstruction, the transformation block may take the form of a simple linear transformation. In complex cases, where we have no prior knowledge about the required sampling rate or we do not have perfect reconstruction from the samples, we train a transformation block \(\tilde{\mathcal{C}}\) that is optimized to perform a nonlinear 'projection' based on a carefully selected training set.
2. **Time Stepping Block -** In this block we apply a neural network that takes as input the output of the transformation block \(\tilde{\mathcal{C}}(\vec{f})\), which may be the approximation of the spectral basis coefficients \(\{\tilde{f}_{k}\}_{k=1}^{K}\), and a time stamp \(t\), to compute a time dependent representation. We denote this block as \(\tilde{\mathcal{D}}:\mathbb{R}^{K}\times[0,T]\rightarrow\mathbb{R}^{K}\).
3. **Reconstruction Block -** In this block we apply an additional neural network on the output of the time stepping block \(\tilde{\mathcal{D}}\), together with the given input point \(x\in\Omega\), to provide an estimate \(\tilde{u}(x,t)\) of the solution \(u(x,t)\). We denote this block as \(\tilde{\mathcal{R}}:\mathbb{R}^{K}\times\Omega\rightarrow\mathbb{R}\).
Thus, our method is in fact a composition of the 3 blocks \(\tilde{u}:\vec{W}\times\Omega\times[0,T]\rightarrow\mathbb{R}\)
\[\tilde{u}(\vec{f},x,t)=\tilde{\mathcal{R}}(\tilde{\mathcal{D}}(t,\tilde{ \mathcal{C}}(\vec{f})),x).\]
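As a shape-level sketch of this composition (the block internals and widths below are placeholders, not the trained architectures), one could write:

```python
import tensorflow as tf

L, K = 101, 20  # number of samples of f and spectral dimension (illustrative)

C = tf.keras.Sequential([tf.keras.layers.Dense(K)])             # transformation
D = tf.keras.Sequential([tf.keras.layers.Dense(64, activation="tanh"),
                         tf.keras.layers.Dense(K)])             # time stepping
R = tf.keras.Sequential([tf.keras.layers.Dense(64, activation="tanh"),
                         tf.keras.layers.Dense(1)])             # reconstruction

def u_tilde(f_vec, x, t):
    """f_vec: (batch, L) samples of f; x: (batch, dim) points; t: (batch, 1)."""
    coeffs = C(f_vec)                               # ~ spectral coefficients
    coeffs_t = D(tf.concat([coeffs, t], axis=-1))   # time-evolved representation
    return R(tf.concat([coeffs_t, x], axis=-1))     # approximation of u(f, x, t)
```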
Observe that in scenarios where one requires multiple evaluations at different locations \(\{\tilde{u}(\vec{f},x_{i},t)\}_{i}\), \(x_{i}\in\Omega\), at a given time step \(t\in[0,T]\), one may compute the output of the time stepping block \(\tilde{\mathcal{D}}\) once and reuse it for all \(\{x_{i}\}_{i}\), thereby reducing the total computation time.
Figure 1: General description of our method
## 4 Introduction of the spectral PINN for the heat equation over \(\Omega=[0,1]\)
We first review the prototype case of the heat equation on the unit interval where we can provide rigorous proofs for our method as well as showcase simple realization versions of our spectral network construction. Recall the heat equation:
\[u_{t}=\alpha u_{xx},\quad x\in[0,1],t\in[0,0.5],\]
with initial time condition:
\[u(x,t=0)=f(x),\ x\in[0,1].\]
### Architecture and theory for the heat equation over \(\Omega=[0,1]\)
The analytic solution to this equation can be computed in 3 steps that are aligned with the 3 blocks of our architecture. Assume the initial condition \(f:[0,1]\rightarrow\mathbb{R}\) has the following spectral representation
\[f(x)=\sum_{k=1}^{\infty}c_{k}\sin(2\pi kx).\]
Next, apply the following transformation on the coefficients for a given time step \(t\)
\[\mathcal{D}(t,c_{1},c_{2},...):=(e^{-4\pi^{2}\alpha t}c_{1},e^{-4\pi^{2}\cdot 2 ^{2}\alpha t}c_{2},...).\]
Finally, evaluate the time dependent representation at the point \(x\):
\[u(x,t)=\mathcal{R}(e^{-4\pi^{2}\alpha t}c_{1},e^{-4\pi^{2}\cdot 2^{2}\alpha t} c_{2},...,x):=\sum_{k=1}^{\infty}e^{-4\pi^{2}k^{2}\alpha t}c_{k}\sin(2\pi kx).\]
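These three analytic steps can be checked numerically; the following NumPy sketch (with arbitrary coefficients) implements the decay-and-reconstruct computation above:

```python
import numpy as np

alpha, K = 0.01, 20
rng = np.random.default_rng(0)
c = rng.uniform(-1, 1, K)
c /= np.linalg.norm(c)                       # coefficients c_k of f (arbitrary)

def u_exact(x, t):
    k = np.arange(1, K + 1)
    decay = np.exp(-4 * np.pi**2 * k**2 * alpha * t)         # time stepping D
    return np.sin(2 * np.pi * np.outer(x, k)) @ (decay * c)  # reconstruction R

x = np.linspace(0.0, 1.0, 101)
print(u_exact(x, 0.25)[:5])   # solution values at t = 0.25
```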
We now proceed to provide the details of the numerical spectral PINN approach in this scenario. First, we select as an example \(K=20\) and \(W=W_{20}\), where
\[W_{20}:=\left\{\sum_{k=1}^{20}c_{k}\sin(2\pi kx),\quad c_{1},...,c_{20}\in[-1, 1],\sqrt{c_{1}^{2}+...+c_{20}^{2}}=1\right\}.\]
We sample each \(f\in W_{20}\) using \(L=101\) equispaced points in the segment \([0,1]\) to compute a vector \(\vec{f}\). For the training of the networks we use a loss function which is a sum of two loss terms \(L_{0}+L_{D}\). The loss \(L_{0}\) enforces the solution to satisfy the initial time condition
\[L_{0}(\theta)=\frac{1}{101N}\sum_{i=1}^{N}\sum_{j=0}^{100}|\tilde{u}_{\theta} (\vec{f}_{i},X_{j},0)-(\vec{f_{i}})_{j}|^{2} \tag{2}\]
where \(\tilde{u}_{\theta}\) is the model with weights \(\theta\) and \((\vec{f}_{i})_{j}\) is the value of \(f_{i}\) at \(X_{j}=\frac{1}{100}j\). For the second loss term we randomly generate \(N=5,000\) triples \((\vec{f}_{i},x_{i},t_{i})_{i=1}^{N}\) and enforce the model to obey the differential condition
\[L_{D}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\left|\frac{\partial\tilde{u}_{\theta} (\vec{f}_{i},x_{i},t_{i})}{\partial t}-\alpha\frac{\partial^{2}\tilde{u}_{ \theta}(\vec{f}_{i},x_{i},t_{i})}{\partial x^{2}}\right|^{2}. \tag{3}\]
The derivatives of the given neural network approximation in (3) are calculated using the automatic differentiation capabilities of deep learning frameworks. In this work we use TensorFlow [6].
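As a sketch of how the differential loss (3) can be assembled with nested gradient tapes, consider the following; `model` stands for any of the networks below, and the batching details are simplified:

```python
import tensorflow as tf

alpha = 0.01

def differential_loss(model, f_vec, x, t):
    """Monte-Carlo estimate of L_D in Eq. (3) on one batch of collocation points."""
    with tf.GradientTape() as outer:
        outer.watch(x)
        with tf.GradientTape(persistent=True) as inner:
            inner.watch([x, t])
            u = model(f_vec, x, t)
        u_t = inner.gradient(u, t)
        u_x = inner.gradient(u, x)   # first derivative, still recorded by `outer`
    u_xx = outer.gradient(u_x, x)    # second spatial derivative
    return tf.reduce_mean(tf.square(u_t - alpha * u_xx))
```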
We compare two PINN architectures that provide an approximation to the solution \(u\):
1. **The naive model -** We benchmark our method with a deep learning model which is a standard MLP neural network, that takes as input \((\vec{f},t,x)\in\mathbb{R}^{103}\) and outputs an approximation. This model is trained to be PI using the loss function \(L_{0}+L_{D}\), where the two terms are defined in (2) and (3). The network is composed of \(5\) dense layers \(\mathbb{R}^{103}\rightarrow\mathbb{R}^{103}\) and finally a dense layer \(\mathbb{R}^{103}\rightarrow\mathbb{R}\). Each of the first five dense layers is followed by a non-linear activation function. Typically, the Rectified Linear Unit (ReLU), \(\sigma(x)=(x)_{+}\), is a popular choice as the nonlinear activation for MLP networks [12]. However, it is not suitable in this case, since its second derivative is almost everywhere zero. Therefore we use \(\tanh\) as the nonlinear activation function.
2. **The spectral model -** In some sense, our spectral model \(\tilde{u}\) is 'strongly' physics informed. Exactly as the naive model, it is also trained using the loss functions (2) and (3), to provide solutions to the heat equation. However, its architecture is different from the naive architecture, in that it is modeled to match the spectral method. The spectral model \(\tilde{u}\) approximates \(u\) using the \(3\) blocks of the spectral paradigm approximation presented in the previous section. We now provide the details of the architecture and support our choice of design with rigorous proofs.
1. **Sine transformation block** This block receives as input a sampling vector \(\vec{f}\) and returns the sine transformation coefficients. Due to the high sampling rate \(L=101\), compared with the frequency used \(K=20\), the sampled function \(f\) can be fully reconstructed from \(\vec{f}\) and this operation can be realized perfectly using the Nyquist-Shannon sampling formula. However, so as to simulate a scenario on a manifold where the sampling formula cannot be applied, we train a network to apply the transformation. To this end, we created \(1,000\) initial value conditions using trigonometric polynomials of degree \(20\), and trained this block to extract the coefficients of those polynomials. In other words, we pre-trained \(\tilde{\mathcal{C}}:\mathbb{R}^{101}\rightarrow\mathbb{R}^{20}\) for the following task: \[\tilde{\mathcal{C}}(\vec{f})=(c_{1},...,c_{20}),\] where \(\vec{f}\) is the sampling vector of the function \[f(x)=\sum_{k=1}^{20}c_{k}\sin(2\pi kx).\] In this simple case where \(\Omega=[0,1]\), the network can simply be composed of one dense layer with no nonlinear activation, which essentially implies computing a transformation matrix from samples to coefficients. In more difficult cases, the architecture of the network will be more complex.
2. **Time stepping block** The time stepping block should approximate the function: \[\mathcal{D}(t,c_{1},...,c_{20})=(e^{-4\pi^{2}\alpha t}c_{1},e^{-4\pi^{2}\cdot 2 ^{2}\alpha t}c_{2},...,e^{-4\pi^{2}20^{2}\alpha t}c_{20}).\] (4) We consider \(2\) architectures for this block: **Realization time stepping block:** In the case of the heat equation we know exactly how the time stepping block should operate and so we can design a true realization. The first layer computes \[t\rightarrow(-4\pi^{2}\alpha t,-4\pi^{2}2^{2}\alpha t,...,-4\pi^{2}20^{2} \alpha t).\] The second layer applies the exponential nonlinearity \[(-4\pi^{2}\alpha t,-4\pi^{2}\cdot 2^{2}\alpha t,...,-4\pi^{2}20^{2}\alpha t )\rightarrow(e^{-4\pi^{2}\alpha t},e^{-4\pi^{2}\cdot 2^{2}\alpha t},...,e^{-4\pi^{2 }20^{2}\alpha t}).\] Finally, we element-wise multiply the output of the second layer with \((c_{1},...,c_{20})\) to output the time dependent spectral representation (4).
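The realization variant is essentially parameter-free; a NumPy sketch of the map (4):

```python
import numpy as np

alpha, K = 0.01, 20

def time_step_realization(t, c):
    """Exact spectral time stepping (4): c_k -> exp(-4*pi^2*k^2*alpha*t) * c_k."""
    k = np.arange(1, K + 1)
    return np.exp(-4 * np.pi**2 * k**2 * alpha * t) * c
```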
**Approximate time stepping block:**
In the case of general manifolds we may not be able to fully realize the time stepping block. Therefore, we examine the consequences of using an MLP network that approximates, for given \(K\geq 1\),
\[\mathcal{D}(t,c_{1},...,c_{K}):=(e^{-4\pi^{2}\alpha t}c_{1},...,e^{-4\pi^{2}K^{2} \alpha t}c_{K}).\]
We prove the following theorem (see the appendix for proofs):
**Theorem 2**.: _For any \(\epsilon>0\) and \(K\geq 1\) there exists a MLP network \(\tilde{\mathcal{D}}\), consisting of dense layers and \(\tanh\) as an activation function, with \(O(K^{3}+K\log^{2}(\epsilon^{-1}))\) weights such that_
\[\|\tilde{\mathcal{D}}(t,c_{1},...,c_{K})-\mathcal{D}(t,c_{1},...,c_{K})\|_{ \infty}\leq\epsilon,\]
_for all inputs \(c_{1},...,c_{K}\in[-1,1],t\in[0,1]\)._
We remark that it is possible to approximate \(\mathcal{D}\) using ReLU as the nonlinear activation, as shown in [13]. However, recall that ReLU is not suitable for our second order differential loss function (3). In the experiments below, the approximating MLP time stepping block is composed of 5 layers.
3. **Reconstruction Block** The reconstruction block should operate as follow: \[\mathcal{R}(a_{1},....,a_{K},x)=\sum_{k=1}^{K}a_{k}\sin(2\pi kx).\] In the case of the heat equation, for given \(t\in[0,1]\), the coefficients \(\{a_{k}\}_{k=1}^{K}\) are \(\{e^{-4\pi^{2}k^{2}t}c_{k}\}_{k=1}^{K}\) or an approximation to these coefficients. Here, also one can design a realization block which uses the sine function as a nonlinearity. To support the general case we have the following result **Theorem 3**.: _For fixed \(A>0\), \(K\geq 1\) and any \(\epsilon>0\), there exists a MLP network \(\tilde{\mathcal{R}}\), consisting of dense layers and \(\tanh\) as an activation function, with \(O(K^{2}+K\log^{2}(K\epsilon^{-1}))\) weights for which_ \[|\tilde{\mathcal{R}}(a_{1},....,a_{K},x)-\mathcal{R}(a_{1},....,a_{K},x)|\leq\epsilon,\] _where \(a_{1},...,a_{K}\in[-A,A],x\in[0,1]\)._
In the experiments below, the approximating MLP reconstruction block is composed of 5 layers. Using Theorems 2 and 3, we can prove a general theorem that provides an estimate for the approximation of an MLP network. We first give the definition of Sobolev spaces [22]:
**Definition 1**.: Let \(\Omega\subset\mathbb{R}^{n}\) and \(C_{0}^{r}(\Omega)\) be the space of continuously \(r\)-differentiable with compact support functions. For \(1\leq p<\infty\), the Sobolev space \(W_{p}^{r}(\Omega)\) is the completion of \(C_{0}^{r}(\Omega)\) with respect to the norm
\[\|f\|_{W_{p}^{r}(\Omega)}=\sum_{|\alpha|\leq r}\|\partial^{\alpha}f\|_{L_{p}( \Omega)},\]
where \(\partial^{\alpha}f=\frac{\partial^{|\alpha|}f}{\partial x_{1}^{\alpha_{1}}\ldots\partial x_{n}^{\alpha_{n}}},\ |\alpha|=\sum_{i=1}^{n}\alpha_{i}\).
With this definition at hand, we are ready to state a result on the approximation capabilities of our spectral architecture when MLP networks are used to approximate the spectral realization:
**Theorem 4**.: _Let \(r\in\mathbb{N}\). For any \(\epsilon>0\) there exists a MLP neural network \(\tilde{u}\), with \(\tanh\) non-linearities and \(O(\epsilon^{-3/r}+\epsilon^{-1/r}\log^{2}(\epsilon^{-(1+1/r)}))\) weights (the constant depends on \(r\)) for which the following holds: For any \(f\in W_{2}^{r}([0,1])\), \(f=\sum_{k=1}^{\infty}c_{k}\sin(2\pi kx)\), \(\|f^{(r)}\|_{2}\leq 1\) and \(u\), the solution to the heat equation on \(\Omega=[0,1]\) with the initial condition \(f\), the network \(\tilde{u}\) takes the input \(\{c_{k}\}_{k=1}^{K}\), \(K\leq c\epsilon^{-1/r}\) and provides the estimate_
\[\|u(f,\cdot,t)-\tilde{u}(f,\cdot,t)\|_{L_{2}[0,1]}\leq\epsilon,\qquad\forall t \in[0,1].\]
### Experimental Results
We benchmark 4 models: the naive PINN model with a vanilla MLP architecture consisting of 6 layers, and 3 variations of the spectral model with the various blocks realized or approximated. The training was performed using \(5,000\) and \(25,000\) samples of the form \((\vec{f},x,t)\), where \(\vec{f}\) is a sampling vector of a trigonometric polynomial of degree 20 on 101 equispaced points in the segment \([0,1]\), with \(t\in[0,0.5]\). To guarantee slow vanishing of the solution over time we used \(\alpha=0.01\). The comparison of the averaged Mean Squared Error (MSE) of the 4 models over 20 randomly sampled test initial conditions is presented in Table 1.
In Figure 2 we plot the norm of the error at different time steps
\[Error(t)=\sum_{i=1}^{20}\frac{1}{101}\sqrt{\sum_{k=0}^{100}\bigg{|}\tilde{u}_{ model}(\vec{f}_{i},\frac{1}{100}k,t)-u(f_{i},\frac{1}{100}k,t)\bigg{|}^{2}}.\]
We show some examples of the exact solution \(u\) and the outputs of the different neural network solutions at different times and with several initial conditions in Figure 3.
In addition, we performed generalization and stability analysis for the different architectures. To evaluate the ability of our networks to generalize beyond the training space of polynomials of degree 20, we tested the different networks using initial conditions from a space of polynomials of degree 30. Namely,
\[W_{30}=\left\{\sum_{k=1}^{30}c_{k}\sin(2\pi kx),\quad c_{1},...,c_{30}\in[-1,1],\ \sqrt{c_{1}^{2}+...+c_{30}^{2}}=1\right\}.\]
To evaluate the stability of our networks, we add normal random noise with mean 0 and variance 0.001 to the initial condition sample vectors and evaluate at different time stamps using the following normalized metric
\[\frac{\|\tilde{u}_{model}(\vec{f}+\vec{\delta},\cdot,t)-\tilde{u}_{model}(\vec {f},\cdot,t)\|_{2}}{\|\delta\|_{2}} \tag{5}\]
where \(\delta_{i}\sim N(0,0.001)\). The results of the generalization test can be found in Table 2, and the results (averaged over 20 random initial conditions) for the stability test can be found in Table 3.
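For reference, the stability metric (5) can be computed as in the sketch below; `model(f_vec, t)` is an assumed helper returning the network's solution values on the evaluation grid:

```python
import numpy as np

def stability_ratio(model, f_vec, t, var=0.001, seed=0):
    """Normalized sensitivity (5) of the model to input noise delta ~ N(0, var)."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(0.0, np.sqrt(var), size=f_vec.shape)
    diff = model(f_vec + delta, t) - model(f_vec, t)
    return np.linalg.norm(diff) / np.linalg.norm(delta)
```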
**Table 1: Heat equation over \(\Omega=[0,1]\) - comparison of the standard naive PINN model with 3 variants of our spectral PINN model**

| Model number in plots | Model architecture | #Model weights | Testing MSE: 5,000 training samples | Testing MSE: 25,000 training samples |
| --- | --- | --- | --- | --- |
| 1 | Naive model | 53,664 | 1.3e-4 | 1.19e-4 |
| 2 | Spectral model - full realization (time stepping and reconstruction blocks) | **2,960** | **9.0e-6** | **8.3e-6** |
| 3 | Spectral model - MLP approximation of time stepping block, realization of reconstruction block | 11,980 | 5.7e-5 | 4.9e-5 |
| 4 | Spectral model - realization of time stepping block, MLP approximation of reconstruction block | 10,401 | 2.9e-5 | 2.87e-5 |
**Table 2: Heat equation on \([0,1]\) - generalization results**

| Model number in plots | Model architecture | MSE |
| --- | --- | --- |
| 1 | Naive model | 1.0e-3 |
| 2 | Spectral model - full realization (time stepping and reconstruction blocks) | **7.1e-4** |
| 3 | Spectral model - MLP approximation of time stepping block, realization of reconstruction block | 8.1e-4 |
| 4 | Spectral model - realization of time stepping block, MLP approximation of reconstruction block | 7.4e-4 |
Figure 2: Heat equation on \([0,1]\) - Error versus time
Figure 3: Heat equation over [0,1] - comparisons of the ground truth solution and the different neural network solutions with different initial conditions and at different times
In both tests, we can observe that all spectral model variants outperform the naive model.
The theoretical and empirical results for the simple case of the heat equation over \(\Omega=[0,1]\) motivate us to establish guidelines for designing spectral PINN networks in much more complicated scenarios. Namely, we should try to realize the various blocks, approximate them, or at least design their sub-components to have elements from the realization, such as nonlinear activations relating to the spectral basis.
## 5 The sphere \(\mathbb{S}^{2}\)
In this section, we demonstrate our method in a more challenging setup, a nonlinear equation on a curved manifold. The Allen-Cahn equation over the sphere \(\mathbb{S}^{2}\) is defined as follows [14]:
\[u_{t}=0.1\Delta u+u-u^{3},\;t>0, \tag{6}\]
where the Laplace-Beltrami operator is
\[\Delta=\frac{\partial^{2}}{\partial\theta^{2}}+\frac{\cos\theta}{\sin\theta} \frac{\partial}{\partial\theta}+\frac{1}{\sin^{2}\theta}\frac{\partial^{2}}{ \partial\phi^{2}},\]
where \(\phi\) is the azimuth angle and \(\theta\) is the polar angle.
### Theory and spectral PINN architecture for the Allen-Cahn equation on \(\mathbb{S}^{2}\)
On \(\mathbb{S}^{2}\subset\mathbb{R}^{3}\) the spectral basis is the spherical harmonic functions [9]:
**Definition 2**.: The spherical harmonic function of degree \(l\) and order \(m\) is given by:
\[Y_{l}^{m}(\theta,\phi)=(-1)^{m}\sqrt{\frac{(2l+1)}{4\pi}\frac{(l-m)!}{(l+m)!}} P_{l}^{m}(\cos\theta)e^{im\phi},\]
where \(\theta\in[0,\pi]\) is the polar angle, \(\phi\in[0,2\pi)\) is the azimuth angle and \(P_{l}^{m}:[-1,1]\rightarrow\mathbb{R}\) is the associated Legendre polynomial.
**Table 3: Heat equation on \([0,1]\) - stability test results using the normalized metric (5)**

| Model number in plots | Model architecture | \(T=0.2\) | \(T=0.4\) | \(T=0.5\) |
| --- | --- | --- | --- | --- |
| 1 | Naive model | 3.53 | 3.55 | 3.66 |
| 2 | Spectral model - full realization (time stepping and reconstruction blocks) | 0.95 | 0.73 | 0.67 |
| 3 | Spectral model - MLP approximation of time stepping block, realization of reconstruction block | 0.97 | 0.79 | 0.75 |
| 4 | Spectral model - realization of time stepping block, MLP approximation of reconstruction block | 0.95 | 0.75 | 0.68 |
Each spherical harmonic function is an eigenfunction of the Laplace-Beltrami operator satisfying
\[\Delta Y_{l}^{m}=-l(l+1)Y_{l}^{m}.\]
In our work, for simplicity, we use the real version of the spherical harmonics, defined by:
\[Y_{lm}=\begin{cases}\sqrt{2}(-1)^{m}Im(Y_{l}^{|m|}),&-l\leq m<0,\\ Y_{l}^{0},&m=0,\\ \sqrt{2}(-1)^{m}Re(Y_{l}^{m}),&0<m\leq l.\end{cases}\]
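A sketch of this real basis using SciPy's complex spherical harmonics is given below; note that `scipy.special.sph_harm` takes the azimuth angle before the polar angle, and Condon-Shortley phase conventions vary between references, so signs should be checked against Definition 2:

```python
import numpy as np
from scipy.special import sph_harm  # sph_harm(m, l, azimuth, polar)

def Y_real(l, m, theta, phi):
    """Real spherical harmonic Y_lm; theta = polar angle, phi = azimuth angle."""
    if m < 0:
        return np.sqrt(2) * (-1) ** m * sph_harm(-m, l, phi, theta).imag
    if m == 0:
        return sph_harm(0, l, phi, theta).real
    return np.sqrt(2) * (-1) ** m * sph_harm(m, l, phi, theta).real

print(Y_real(2, 1, 0.7, 1.2))  # value at (theta, phi) = (0.7, 1.2)
```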
The inputs to our networks are of the form \((F,(\theta,\phi),t)\), where \(F\in\mathbb{R}^{20\times 20}\) is a sampling matrix of the initial condition on an equispaced azimuth-polar grid of a spherical function, \(\theta\in[0,\pi],\phi\in[0,2\pi)\) are the coordinates of a point on the sphere and \(t\in[0,1]\). The loss functions are similar to the loss functions used in Section 4, with the required modifications, such as for the differential loss term
\[L_{D}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\left|\frac{\partial\tilde{u}_{\theta} (F_{i},x_{i},t_{i})}{\partial t}-(0.1\Delta\tilde{u}_{\theta}+\tilde{u}_{ \theta}-\tilde{u}_{\theta}^{3})(F_{i},x_{i},t_{i})\right|^{2}. \tag{7}\]
Our goal is to construct a spectral PINN architecture that will outperform the naive PINN architecture. Here are the details of the 3 blocks of the spectral model:
1. **Transformation Block** This block receives as input a flattened sampling matrix \(\vec{F}\in\mathbb{R}^{400}\) of the initial condition and returns the 100 spherical harmonic coefficients up to degree 9. By [17, Theorem 3], under these conditions spherical harmonics of degree 9 can be perfectly reconstructed. Thus, training one dense linear layer with sufficient samples recovers the perfect reconstruction formula. In other words, we pre-trained \(\tilde{\mathcal{C}}:\mathbb{R}^{400}\rightarrow\mathbb{R}^{100}\) for the following task: \[\tilde{\mathcal{C}}(\vec{F})=(c_{0,0},c_{1,-1},c_{1,0},c_{1,1},...,c_{9,-9},...,c_{9,0},...,c_{9,9}),\] where \(\vec{F}\) is a flattened sampling matrix of the function \[\sum_{l=0}^{9}\sum_{m=-l}^{l}c_{l,m}Y_{lm}(\theta,\phi).\]
2. **Time Stepping Block** Unlike the heat equation on the unit interval, the Allen-Cahn equation (6) on the sphere does not admit an analytic spectral solution. Nevertheless, we design an architecture that follows the spectral paradigm and compare it with a standard PINN MLP architecture. We test our hypothesis by conducting an ablation study using three optional architectures for the time stepping block: 1. **Input of Allen-Cahn Nonlinear Part** In this variant, we further adapt the block to the nature of the equation, specifically to the non-linear part of the Allen-Cahn equation. The input to the time stepping block is composed of the transformation of the initial condition, the transformation of the nonlinear part of the initial condition, and the time variable \((\tilde{\mathcal{C}}(\vec{F}),\tilde{\mathcal{C}}(\vec{F}-\vec{F}^{3}),t)\). Therefore, the time stepping block is defined as \[\tilde{\mathcal{D}}:\mathbb{R}^{100}\times\mathbb{R}^{100}\times[0,T]\rightarrow\mathbb{R}^{100},\] where \[\tilde{\mathcal{D}}(\tilde{\mathcal{C}}(\vec{F}),\tilde{\mathcal{C}}(\vec{F}-\vec{F}^{3}),t)=(c_{0,0}(t),c_{1,-1}(t),c_{1,0}(t),c_{1,1}(t),...,c_{9,-9}(t),...,c_{9,0}(t),...,c_{9,9}(t)).\]
With the additional input of the non-linear part, this variant of the time stepping block is a sum of two sub-blocks \(\tilde{\mathcal{D}}=\tilde{\mathcal{D}}_{1}+\tilde{\mathcal{D}}_{2}\). The component \(\tilde{\mathcal{D}}_{1}\) is a sub-block designed to capture an exponential dynamic of the solution across time. The sub-block \(\tilde{\mathcal{D}}_{2}\) is a standard PINN sub-block. The exponential sub-block \(\tilde{\mathcal{D}}_{1}\) is defined by
\[\tilde{\mathcal{D}}_{1}(\tilde{\mathcal{C}}(\vec{F}),\tilde{\mathcal{C}}(\vec{ F}-\vec{F}^{3}),t)=e^{\tilde{\mathcal{D}}_{1,1}(t)}\odot\tilde{\mathcal{D}}_{1,2}( \tilde{\mathcal{C}}(\vec{F}),\tilde{\mathcal{C}}(\vec{F}-\vec{F}^{3})),\]
where \(\odot\) is element-wise vector multiplication. The component \(\tilde{\mathcal{D}}_{1,1}:\mathbb{R}\rightarrow\mathbb{R}^{100}\) is a simple dense layer with no bias, i.e. \(\tilde{\mathcal{D}}_{1,1}(t)=V\cdot t\) where \(V\in\mathbb{R}^{100}\) is a learnable vector. The component \(\tilde{\mathcal{D}}_{1,2}:\mathbb{R}^{100}\times\mathbb{R}^{100}\rightarrow \mathbb{R}^{100}\) is an MLP subnetwork with 6 layers with \(\tanh\) activations. Finally, the sub-block \(\tilde{\mathcal{D}}_{2}\) is also an MLP subnetwork with 6 layers and \(\tanh\) activations. The full architecture with this time stepping variant is depicted in Figure 4.
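A compact sketch of this exponential sub-block \(\tilde{\mathcal{D}}_{1}\) follows; the layer counts and widths approximate the description above, and this is not the paper's code:

```python
import tensorflow as tf

K = 100  # number of spherical harmonic coefficients (degrees 0..9)

class ExponentialTimeStep(tf.keras.layers.Layer):
    """D_1: exp(V * t) elementwise-multiplied with an MLP of the coefficients."""
    def __init__(self):
        super().__init__()
        self.V = self.add_weight(shape=(K,), initializer="zeros")  # D_{1,1}, no bias
        self.mlp = tf.keras.Sequential(                            # D_{1,2}
            [tf.keras.layers.Dense(K, activation="tanh") for _ in range(5)]
            + [tf.keras.layers.Dense(K)])

    def call(self, coeffs, coeffs_nl, t):
        decay = tf.exp(t * self.V[tf.newaxis, :])  # t: (batch, 1) -> (batch, K)
        return decay * self.mlp(tf.concat([coeffs, coeffs_nl], axis=-1))
```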
2. **Standard Exponential Block** This variant of the time-stepping block is similar to the one described in (a), but without the non-linear input \(\tilde{\mathcal{C}}(\vec{F}-\vec{F}^{3})\). Thus, the subnetwork capturing the exponential behavior takes the form \[\tilde{\mathcal{D}}_{1}(\tilde{\mathcal{C}}(\vec{F}),t)=e^{\tilde{\mathcal{D}}_{1,1}(t)}\odot\tilde{\mathcal{D}}_{1,2}(\tilde{\mathcal{C}}(\vec{F})).\] In this architecture we add 5 more dense layers to this block, as each layer requires fewer weights. 3. **Naive MLP Time Stepping Block** In this variant of the time stepping block, the input is \((\tilde{\mathcal{C}}(\vec{F}),t)\) and the architecture is a simple MLP block of 12 layers with \(\tanh\) activation functions.
3. **Reconstruction Block** The heuristics of our spectral approach is that the output of the time stepping block should be (once trained) a representation space resembling the coefficients of the spectral basis at the given time. Therefore, we design the reconstruction block to be composed of dense layers, but we use activation functions of the form \(\sin^{l},\cos^{l}\), \(0\leq l\leq 9\), on the input data point \((\theta,\phi)\)
Figure 4: Full architecture for spherical setting - the red arrows are used only in variant (a) for time stepping block
since these activation functions are the building blocks of the spherical harmonic functions. To this end, we first apply two subnetworks on the data point \((\theta,\phi)\) \[\mathcal{R}_{l,\sin,0}(\theta,\phi),\mathcal{R}_{l,\cos,0}(\theta,\phi):\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}.\] We then apply on their output, component-wise, the spectral activation functions \[\sin^{l}\circ\mathcal{R}_{l,\sin,0}(\theta,\phi),\quad\cos^{l}\circ\mathcal{R}_{l,\cos,0}(\theta,\phi),\quad 0\leq l\leq 9.\] Next we apply dense layers on the output of the activation functions \[\mathcal{R}_{l,\sin,1},\mathcal{R}_{l,\cos,1}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{100},\quad 0\leq l\leq 9.\] We assemble these pieces to produce a subnetwork \(\mathcal{R}_{loc}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{100}\) \[\mathcal{R}_{loc}(\theta,\phi)=\sum_{l=0}^{9}\mathcal{R}_{l,\sin,1}(\sin^{l}\circ\mathcal{R}_{l,\sin,0}(\theta,\phi))\odot\mathcal{R}_{l,\cos,1}(\cos^{l}\circ\mathcal{R}_{l,\cos,0}(\theta,\phi)),\] where \(\odot\) is element-wise vector multiplication. We separately apply, on the output of the time stepping block, a subnetwork \(\mathcal{R}_{d}:\mathbb{R}^{100}\rightarrow\mathbb{R}^{100}\). Finally, our reconstruction network \(\tilde{\mathcal{R}}\) is a dot-product between the outputs of \(\mathcal{R}_{d}\) and \(\mathcal{R}_{loc}\) \[\tilde{\mathcal{R}}(c_{0,0}(t),c_{1,-1}(t),c_{1,0}(t),c_{1,1}(t),...,c_{9,-9}(t),...,c_{9,0}(t),...,c_{9,9}(t),\theta,\phi)\\ =\langle\mathcal{R}_{d}(c_{0,0}(t),c_{1,-1}(t),c_{1,0}(t),c_{1,1}(t),...,c_{9,-9}(t),...,c_{9,0}(t),...,c_{9,9}(t)),\mathcal{R}_{loc}(\theta,\phi)\rangle.\]
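A simplified sketch of \(\mathcal{R}_{loc}\) with the \(\sin^{l},\cos^{l}\) activations is given below; the wiring is condensed relative to the exact description:

```python
import tensorflow as tf

K, L_MAX = 100, 9

class RLoc(tf.keras.layers.Layer):
    """R_loc(theta, phi): sum over l of post_sin(sin^l(.)) * post_cos(cos^l(.))."""
    def __init__(self):
        super().__init__()
        self.pre_sin = [tf.keras.layers.Dense(2) for _ in range(L_MAX + 1)]
        self.pre_cos = [tf.keras.layers.Dense(2) for _ in range(L_MAX + 1)]
        self.post_sin = [tf.keras.layers.Dense(K) for _ in range(L_MAX + 1)]
        self.post_cos = [tf.keras.layers.Dense(K) for _ in range(L_MAX + 1)]

    def call(self, angles):  # angles: (batch, 2) holding (theta, phi)
        out = 0.0
        for l in range(L_MAX + 1):
            s = self.post_sin[l](tf.sin(self.pre_sin[l](angles)) ** l)
            c = self.post_cos[l](tf.cos(self.pre_cos[l](angles)) ** l)
            out += s * c   # elementwise product, summed over degrees l
        return out         # (batch, K); combined with R_d via a dot product
```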
### Experimental Results
We generated training data consisting of \(N=5,000\) randomly chosen samples of the form \((\vec{F},(\theta,\phi),t)\), where \(\vec{F}\) is a flattened sampling matrix of initial conditions randomly sampled from
\[W=\left\{\sum_{l=0}^{9}\sum_{m=-l}^{l}c_{lm}Y_{lm}(\theta,\phi),\qquad c_{lm} \in[-1,1],\sqrt{\sum_{l=0}^{9}\sum_{m=-l}^{l}c_{lm}^{2}}=1\right\},\]
on the equispaced grid
\[\theta_{j}=\frac{\pi}{19}j,\;j\in\{0,...,19\},\qquad\phi_{k}=\frac{2\pi}{20} k,\;k\in\{0,...,19\}.\]
During the training of the spectral model we used several techniques to improve the results:
1. Pre-training the transformation block and the reconstruction block separately before training the full model, using the PI loss function \[\frac{1}{N}\sum_{i=1}^{N}\left|F(\theta_{i},\phi_{i})-\tilde{\mathcal{R}}( \tilde{\mathcal{C}}(\vec{F}_{i}),(\theta_{i},\phi_{i}))\right|^{2}.\]
2. When training the full model, we started the first 20 epochs by freezing the weights of the transformation and reconstruction blocks that were pre-trained separately in (1) and training only the time stepping block. We observed that this technique, where the transformation and reconstruction blocks are pre-trained and then kept constant for the first epochs, provides better initialization of the time-stepping block and overall better results. In this stage of the training, we used a loss function containing three terms. In addition to the standard initial condition loss and the differential loss, we added a new loss term to enforce that the time stepping block does not change the spherical harmonic coefficients at time zero. Formally, the new loss term over the training set is \[\frac{1}{100N}\sum_{i=1}^{N}\|\tilde{\mathcal{D}}(\tilde{\mathcal{C}}(\vec{F}_{i}),0)-\tilde{\mathcal{C}}(\vec{F}_{i})\|_{2}^{2}. \tag{8}\]
3. Finally, we trained the full model with all 3 loss terms for 25 more epochs.
Since there is no analytical solution for the Allen-Cahn equation over \(\mathbb{S}^{2}\), we used the numerical scheme IMEX-BDF4 [14] as ground truth for testing our models. Unlike [14], we used the spherical harmonic function basis and not the double spherical Fourier method used there, due to performance considerations. We tested our models using 20 random initial conditions and predicted the solutions for all grid points:
\[\theta_{j}=\frac{\pi}{19}j,\ j\in\{0,...,19\},\quad\phi_{k}=\frac{2\pi}{20}k, \ k\in\{0,...,19\},\quad t_{n}=\frac{1}{500}n,\ n\in\{0,...,500\}.\]
We benchmarked the 3 spectral PINN variants against the naive PINN model, which has an MLP architecture consisting of 26 layers with \(\tanh\) activations. The comparison of the 4 models is described in Table 4. We can see that our model achieves better accuracy than the naive model, with significantly fewer parameters. We can also see that there is a benefit to the special processing of the non-linear part of the Allen-Cahn equation by feeding the time stepping block with the non-linear part of the initial condition. In Figure 5 we show the norm of the error at different time steps. As in the previous example, we performed generalization and stability tests. For the generalization test we used random initial conditions from the larger set of spherical harmonics of degree 14:
\[W_{gen}=\left\{\sum_{l=0}^{14}\sum_{m=-l}^{l}c_{lm}Y_{lm}(\theta,\phi),\qquad c _{lm}\in[-1,1],\sqrt{\sum_{l=0}^{14}\sum_{m=-l}^{l}c_{lm}^{2}}=1\right\}.\]
For the stability test we used the same technique as in the previous section with noise \(\delta\sim N(0,0.001)\). The results of the generalization and stability tests can be found in Tables 5 and 6, respectively (averaged over 20 random initial conditions). Again, we can see that all spectral model variants outperform the naive model.
**Table 5: Allen-Cahn equation over \(\mathbb{S}^{2}\) - generalization test results**

| Model number in plots | Model architecture | MSE |
| --- | --- | --- |
| 1 | Naive model | 3.6e-4 |
| 2 | Spectral model, time stepping variant (a) - input of Allen-Cahn nonlinear part | **1.1e-4** |
| 3 | Spectral model, time stepping variant (b) - standard time stepping exponential block | 1.3e-4 |
| 4 | Spectral model, time stepping variant (c) - naive time stepping dense block | 1.2e-4 |
Figure 5: Allen-Cahn equation over \(\mathbb{S}^{2}\) - Error over time of the naive and spectral variant PINN models on testing dataset
## 6 The torus \(\mathbb{T}\subset\mathbb{R}^{3}\)

We next consider the embedded torus \(\mathbb{T}\subset\mathbb{R}^{3}\), parameterized by the angles \((\theta,\phi)\in[0,2\pi)^{2}\), with minor radius \(r\) and major radius \(R\). In this setting, the Laplace-Beltrami operator is [10]
\[\Delta_{\mathbb{T}}=\frac{1}{r^{2}}\frac{\partial^{2}}{\partial\theta^{2}}-\frac{\sin\theta}{r(R+r\cos\theta)}\frac{\partial}{\partial\theta}+\frac{1}{(R+r\cos\theta)^{2}}\frac{\partial^{2}}{\partial\phi^{2}}.\]
On this manifold we demonstrate our spectral PINN method again using the Allen-Cahn equation (6). As the class of initial conditions we use the set
\[W_{5}=\left\{\sum_{k=1}^{5}\sum_{l=1}^{5}c_{k,l}\sin(k\theta)\sin(l\phi),\quad\sqrt{\sum_{k,l=1}^{5}c_{k,l}^{2}}=1\right\}.\]
We sample functions from this set on an equispaced grid with \(N_{\theta}=N_{\phi}=15\). On the embedded torus one is required to use a numeric approximation of the spectral basis, and we used the finite-element method implemented in the Python package SPHARAPY [11]. The choice of spectral basis implementation impacts the design of the architecture of the transformation and reconstruction blocks. We test several options for each block. For the transformation and reconstruction blocks we consider two options:
1. **Numerical Spectral basis blocks -** In this option, we first create a dataset of 5,000 triples, each composed of a sampling matrix of a function \(f(\theta,\phi)=\sum_{k=1}^{5}\sum_{l=1}^{5}c_{k,l}\sin(k\theta)\sin(l\phi)\) on the equispaced grid with \(N_{\theta}=N_{\phi}=15\) and two random coordinates \((\theta,\phi)\in[0,2\pi)^{2}\) of a point on the torus. We then train the transformation and reconstruction blocks separately as follows. For the training of the transformation block \(\tilde{\mathcal{C}}\) we further compute for each function in the training set, using its sampling matrix, the (numerical) spectral transformation using SPHARAPY. We use the coefficients computed by SPHARAPY as ground truth to train our transformation block. The block's architecture is composed of 3 convolution layers followed by one dense layer. The reconstruction block \(\tilde{\mathcal{R}}\) in this variant is trained to take as input the coefficients of the spectral representation and the coordinate \((\theta,\phi)\in[0,2\pi)^{2}\) and approximate the ground truth function value at this coordinate. The block architecture is an MLP subnet with 15 layers.
2. **Auto-Encoder-Decoder blocks -** In this variant, we train the transformation block \(\tilde{\mathcal{C}}\) as an encoder together with the reconstruction block \(\tilde{\mathcal{R}}\) as a decoder, without explicitly using the spectral representation on the torus. However, this approach is certainly inspired by the
**Table 6: Allen-Cahn equation over \(\mathbb{S}^{2}\) - stability test results using the normalized metric (5)**

| Model number in plots | Model architecture | \(T=0.4\) | \(T=0.7\) | \(T=1.0\) |
| --- | --- | --- | --- | --- |
| 1 | Naive model | 3.83 | 3.19 | 2.8 |
| 2 | Spectral model, time stepping variant (a) - input of Allen-Cahn nonlinear part | 0.88 | 0.81 | 0.81 |
| 3 | Spectral model, time stepping variant (b) - standard time stepping exponential block | 0.70 | 0.66 | 0.65 |
| 4 | Spectral model, time stepping variant (c) - naive time stepping dense block | 2.85 | 2.86 | 2.86 |
spectral method, as we are ultimately optimizing some compressed representation space. The transformation encoder block simply learns to create a compressed latent representation of dimension 150 from the 225 function samples. Its architecture is 5 convolution layers followed by one dense layer. Then the decoder takes the compressed representation together with the coordinate \((\theta,\phi)\in[0,2\pi)^{2}\) and tries to recover the ground truth function value at this coordinate. Its architecture is 17 dense layers. The loss function is then
\[\frac{1}{N}\sum_{i=1}^{N}\left|\tilde{\mathcal{R}}(\tilde{\mathcal{C}}(\vec{F}_ {i}),(\theta_{i},\phi_{i}))-f_{i}(\theta_{i},\phi_{i})\right|^{2}.\]
For the time stepping block we test two options
1. A custom made time stepping block that receives as input the coefficients of the initial condition as well as the coefficients of the nonlinear part and a time step \[(\tilde{\mathcal{C}}(\vec{F}),\tilde{\mathcal{C}}(\vec{F}-\vec{F}^{3}),t),\] similarly to variant (a) of the time stepping block in the spherical case from the previous section. Recall that such an architecture aims to be 'more' physics aware and adapted to the nature of the equation. For this variant of the time stepping block we use 9 dense layers.
2. A network that takes as input \[(\tilde{\mathcal{C}}(\vec{F}),t),\] without the nonlinear part. Here we used 15 dense layers.
We denote this block as earlier with \(\tilde{\mathcal{D}}\). For validation of our models, we used the IMEX-BDF4 numeric solver [14] to obtain approximations of solutions to the equations, which we considered as ground truth. In Table 7 we summarize the benchmarks of the various architectures and also compare them to a naive PINN architecture, with 26 layers, that simply takes as input the samples of the initial condition as well as the time step and location on the torus, and outputs an approximation of the value of the solution.
We can observe that the best result, in terms of accuracy and smaller network size, is obtained by using the numerical spectral basis blocks as transformation and reconstruction blocks, combined with the non-linear-input time stepping block. Also, even the encoder-decoder variant, which 'follows' the spectral paradigm to some extent without actually using the numerical spectral basis, provides a better result than the naive PINN model. In Figure 6 we show time plots of the errors of the different PINN models averaged over 20 random initial conditions.
**Table 7: Allen-Cahn equation over \(\mathbb{T}\subset\mathbb{R}^{3}\) - comparison of the standard naive PINN model with 3 variants of our spectral PINN model**

| Model number in plots | Transformation and reconstruction blocks | Time stepping block | #Weights | MSE |
| --- | --- | --- | --- | --- |
| 1 | Numerical spectral basis blocks | Variant (a) - input of Allen-Cahn nonlinear part | **2,564,105** | **2.5e-5** |
| 2 | Auto-encoder-decoder blocks | Variant (a) - input of Allen-Cahn nonlinear part | 3,800,555 | 8.3e-5 |
| 3 | Numerical spectral basis blocks | Variant (b) | 3,129,976 | 1.7e-4 |
| 4 | Naive model | - | 4,130,001 | 2.7e-4 |
As in the previous sections, we performed generalization and stability tests. In the generalization test presented in Table 8, the network that was trained on samples from \(W_{5}\) was tested on random initial conditions from the larger set
\[W_{10}=\left\{\sum_{k=1}^{10}\sum_{l=1}^{10}c_{k,l}\sin(k\theta)\sin(l\phi),\quad\sqrt{\sum_{k,l=1}^{10}c_{k,l}^{2}}=1\right\}.\]
The stability tests listed in Table 9 are averaged over 20 random initial conditions.
## 7 Conclusion
In this work we presented a physics informed deep learning strategy for building PDE solvers over manifolds which is aligned with the method of spectral approximation. Our method allows one to train
**Table 8: Allen-Cahn equation over \(\mathbb{T}\subset\mathbb{R}^{3}\) - generalization test results**

| Model number in plots | Transformation and reconstruction blocks | Time stepping block | MSE |
| --- | --- | --- | --- |
| 1 | Numerical spectral basis blocks | Variant (a) - input of Allen-Cahn nonlinear part | **9.7e-5** |
| 2 | Auto-encoder-decoder blocks | Variant (a) - input of Allen-Cahn nonlinear part | 1.1e-4 |
| 3 | Numerical spectral basis blocks | Variant (b) | 2.0e-4 |
| 4 | Naive model | - | 2.1e-4 |
Figure 6: Allen-Cahn equation over \(\mathbb{T}\subset\mathbb{R}^{3}\) - Error over time of the naive and spectral variant PINN models on testing dataset
a model that can take as input initial conditions from a pre-determined subset or subspace, and is grid free. We showed empirically that our approach provides better approximations with far fewer weights compared with standard PINN architectures. For the case of the heat equation over the unit interval we provided a rigorous proof of the degree of approximation of a spectral PINN based on MLP components.
|
2308.08649 | Towards Zero Memory Footprint Spiking Neural Network Training | Biologically-inspired Spiking Neural Networks (SNNs), processing information
using discrete-time events known as spikes rather than continuous values, have
garnered significant attention due to their hardware-friendly and
energy-efficient characteristics. However, the training of SNNs necessitates a
considerably large memory footprint, given the additional storage requirements
for spikes or events, leading to a complex structure and dynamic setup. In this
paper, to address memory constraint in SNN training, we introduce an innovative
framework, characterized by a remarkably low memory footprint. We \textbf{(i)}
design a reversible SNN node that retains a high level of accuracy. Our design
is able to achieve a $\mathbf{58.65\times}$ reduction in memory usage compared
to the current SNN node. We \textbf{(ii)} propose a unique algorithm to
streamline the backpropagation process of our reversible SNN node. This
significantly trims the backward Floating Point Operations Per Second (FLOPs),
thereby accelerating the training process in comparison to current reversible
layer backpropagation method. By using our algorithm, the training time is able
to be curtailed by $\mathbf{23.8\%}$ relative to existing reversible layer
architectures. | Bin Lei, Sheng Lin, Pei-Hung Lin, Chunhua Liao, Caiwen Ding | 2023-08-16T19:49:24Z | http://arxiv.org/abs/2308.08649v1 | # Towards Zero Memory Footprint Spiking Neural Network Training
###### Abstract
Biologically-inspired Spiking Neural Networks (SNNs), processing information using discrete-time events known as spikes rather than continuous values, have garnered significant attention due to their hardware-friendly and energy-efficient characteristics. However, the training of SNNs necessitates a considerably large memory footprint, given the additional storage requirements for spikes or events, leading to a complex structure and dynamic setup. In this paper, to address memory constraint in SNN training, we introduce an innovative framework, characterized by a remarkably low memory footprint. We **(i)** design a reversible SNN node that retains a high level of accuracy. Our design is able to achieve a \(\mathbf{58.65\times}\) reduction in memory usage compared to the current SNN node. We **(ii)** propose a unique algorithm to streamline the backpropagation process of our reversible SNN node. This significantly trims the backward Floating Point Operations Per Second (FLOPs), thereby accelerating the training process in comparison to current reversible layer backpropagation method. By using our algorithm, the training time is able to be curtailed by \(\mathbf{23.8\%}\) relative to existing reversible layer architectures.
## 1 Introduction
As a representative of bio-inspired neuromorphic computing, the Spiking Neural Network (SNN) has attracted considerable attention as an alternative to traditional Deep Neural Networks (DNNs), which suffer from high computational complexity and energy consumption [28; 2; 25; 10]. SNNs process information using discrete-time events known as spikes rather than continuous values, offering extremely hardware-friendly and energy-efficient characteristics. For instance, in a robot navigation task using Intel's Loihi [1], an SNN could achieve a 276\(\times\) reduction in energy compared to a conventional DNN approach. Work [24] shows that a DNN consumes 111 mJ and 1035 mJ per sample on MNIST and CIFAR-10, respectively, while an SNN consumes only 0.66 mJ and 102 mJ, i.e., 168\(\times\) and 10\(\times\) energy reductions.
Despite their numerous advantages, one major bottleneck in the deployment of SNNs has been memory consumption. The memory complexity of a traditional DNN with a depth of \(L\) is \(\mathcal{O}(L)\). But for an SNN of the same depth \(L\), several timesteps \(T\) are involved in the computation. Consequently, the memory complexity of SNNs escalates to \(\mathcal{O}(L*T)\). For instance, the memory requirement during DNN training of ResNet19 is 0.6 GB, but for the SNN with the same architecture it can reach about 12.34 GB (\(\sim\)20\(\times\)) when the time-step equals 10. This presents a significant challenge for their applicability to resource-constrained systems, such as IoT-Edge devices [21].
To address the issue of high memory usage of SNNs, researchers have proposed several methods, including Quantization [21], Weight Sparsification [22; 12], the Lottery Ticket Hypothesis [13], Knowledge Distillation [9], Efficient Checkpointing [27], and so on. In this paper, we introduce a novel reversible SNN node that drastically reduces the memory footprint of the SNN nodes within the entire network. Our method achieves state-of-the-art results in terms of SNN memory
savings. It achieves this by recalculating all the intermediate states on-the-fly during backpropagation, rather than storing them. To further enhance the efficiency of our approach, we also present a new algorithm for the backpropagation process of our reversible SNN node, which significantly reduces the training time compared with the original reversible layer backpropagation method. Remarkably, our method maintains the same level of accuracy throughout the process.
As a result of our innovations, we reduce the memory complexity of the SNN node from \(\mathcal{O}(n^{2})\) to \(\mathcal{O}(1)\), while maintaining accuracy comparable to a traditional SNN node. Moreover, our method reduces the FLOPs needed for backpropagation by 23% compared to the existing reversible layer backpropagation method, thus accelerating the training process. Collectively, these advances pave the way for more efficient and scalable SNN implementations, enabling the deployment of these biologically inspired networks across a wider range of applications and hardware platforms.
## 2 Background And Related Works
### Spiking Neural Network
Spiking neural network (SNN) uses sparse binary spikes over multiple time steps to deal with visual input in an event-driven manner [1; 3]. We use SNN with the popular Leaky Integrate and Fire (LIF) spiking neuron. The forward pass is formulated as follows.
\[v[t]=\alpha v[t-1]+\sum_{i}w_{i}s_{i}[t]-\vartheta o[t-1] \tag{1a}\]
\[o[t]=h(v[t]-\vartheta) \tag{1b}\]
\[h(x)=\begin{cases}0,&\text{if }x<0\\ 1,&\text{otherwise}\end{cases} \tag{1c}\]
where \(t\) denotes time step. In Eq. (1a), \(v[t]\) is the dynamics of the neuron's membrane potential after the trigger of a spike at time step t. The sequence \(s_{i}[t]\in\{0,1\}\) represents the \(i\)-th input spike train, consisting solely of 0s and 1s, while \(w_{i}\) is the corresponding weight. In Eq. (1b), \(o[t]\in\{0,1\}\) is the neuron's output spike train. In Eq. (1c), \(h(x)\) is the Heaviside step function to generate the outputs.
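A minimal NumPy simulation of Eqs. (1a)-(1c) for a single neuron is sketched below; the sizes and constants are illustrative assumptions:

```python
import numpy as np

T, n_in = 10, 5
alpha, theta = 0.9, 1.0                    # leak factor and threshold (assumed)
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, n_in)             # input weights w_i
s = rng.integers(0, 2, (T, n_in))          # binary input spike trains s_i[t]

v, o_prev = 0.0, 0.0
o = np.zeros(T)
for t in range(T):
    v = alpha * v + w @ s[t] - theta * o_prev  # Eq. (1a): leak, integrate, reset
    o[t] = float(v - theta >= 0)               # Eqs. (1b)-(1c): Heaviside spike
    o_prev = o[t]
print(o)  # output spike train o[t]
```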
In the backward pass, we adopt Backpropagation Through Time (BPTT) to train SNNs. BPTT for SNNs uses a surrogate gradient [19] and is formulated as follows.
\[\delta_{l}[t]=\epsilon_{l+1}[t]w_{l+1} \tag{2a}\]
\[\epsilon_{l}[t]=\delta_{l}[t]\phi_{l}[t]+\alpha\epsilon_{l}[t+1] \tag{2b}\]
\[\frac{\partial L}{\partial w_{l}}=\sum_{t=0}^{T-1}\epsilon_{l}[t]\cdot[s_{l}[t]]^{\intercal} \tag{2c}\]
Denote \(L\) as the loss. In Eq. (2a), \(\delta_{l}[t]{=}\frac{\partial L}{\partial o_{l}[t]}\) is the error signal at layer \(l\), time step \(t\), which is propagated using the above formulations. In Eq. (2b), \(\epsilon_{l}[t]{=}\frac{\partial L}{\partial v_{l}[t]}\) and \(\phi_{l}[t]{=}\frac{\partial o_{l}[t]}{\partial v_{l}[t]}{=}\frac{\partial h(v_{l}[t]-\vartheta)}{\partial v_{l}[t]}\), where we follow the gradient surrogate function in [7] to approximate the derivative of \(h(x)\), such that \(\frac{\partial h(x)}{\partial x}\approx\frac{1}{1+\pi^{2}x^{2}}\). Eq. (2c) calculates the gradient of the \(l^{th}\) layer weight \(w_{l}\).
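In a PyTorch-style implementation, such a surrogate gradient is typically wrapped in a custom autograd function; the sketch below uses the surrogate \(1/(1+\pi^{2}x^{2})\) stated above:

```python
import math
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside forward pass with a smooth surrogate gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):             # x = v[t] - threshold
        ctx.save_for_backward(x)
        return (x >= 0).float()
    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out / (1.0 + math.pi ** 2 * x ** 2)  # surrogate dh/dx

x = torch.tensor([-0.5, 0.2], requires_grad=True)
SpikeFn.apply(x).sum().backward()
print(x.grad)  # nonzero surrogate gradients instead of an a.e.-zero derivative
```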
### Reversible Layer
A reversible layer refers to a family of neural network architectures that are based on the Non-linear Independent Components Estimation [4; 5]. In traditional training methods, the activation values of each layer are stored in memory, which leads to a dramatic increase in memory requirements as the network depth increases. However, reversible transformation technology allows us to store only the final output of the network and to recompute the discarded ones when needed. This approach significantly reduces memory requirements, making it possible to train deeper neural networks and more complex models under limited memory conditions, thus potentially unlocking new insights and improving performance across a wide range of tasks. The reversible transformation has been used in
different kinds of neural networks, such as CNN [8], Graph neural networks (GNN) [15], Recurrent Neural Networks (RNN) [17] and some transformers [18]. Furthermore, studies have demonstrated the effectiveness of reversible layers in different types of tasks, including image segmentation [20], natural language processing [14], compression [16], and denoising [11].
## 3 Reversible SNN Algorithm
### Reversible SNN Memory Analysis
During the training process of SNNs, the activation values occupy most of the memory storage space. The SNN activation value memory analysis schematic diagram is shown in Fig. 1.
In this figure, we use the VGG-13 architecture [26] with ten timesteps as an example. The percentage values represent the memory footprint ratio of each part in the entire network. The left diagram is the original SNN, where the activation values of \(X\) account for 90.9% of the memory usage and the output potentials of each neuron occupy 9.1% of the memory. The right diagram is our designed reversible SNN, which only requires saving the final \(X_{output}\) values and the output potentials of each neuron, without storing all intermediate values, thus significantly saving memory. The intermediate activation values are recovered during the backpropagation process through our inverse calculation equations. In this example, our method is able to save 90.21% of the memory used for activation values. The exact amount of memory saved by our method is shown in the Experiments section.
### Reversible SNN Forward Calculation
Our forward algorithm is in the upper section of Fig.2.
The various input states \(S=(X,V)\) of each neuron are evenly divided into two groups along the last dimension. Namely: \(S=[S_{1},S_{2}]\).
**Step 1**: Calculate the first part of the output, \(Y_{1}\):
\[M_{1}^{t}=V_{1}^{t-1}+\frac{1}{\tau}\cdot\left(X_{1}^{t}-V_{1}^{t-1}\right) \tag{3}\]
\[Y_{1}^{t}=H\left(M_{1}^{t}-V_{th}\right)+\beta\cdot X_{2}^{t} \tag{4}\]
\(M_{1}^{t}\) is the membrane potential of the first half of the neurons at time \(t\). \(V_{1}^{t-1}\) is the input potential of the first half of the neurons at time \(t-1\). \(\tau\) is the time constant. \(X_{2}^{t}\) is the input to the second half of the neurons at time \(t\). \(V_{th}\) is the threshold voltage of the neurons. \(H()\) is the Heaviside step function. \(\beta\) is a scaling factor for the input. \(\beta\cdot X_{2}^{t}\) will help \(Y_{1}^{t}\) to collect information about the second half of the input in the next step. Then calculate the first part of the output voltage, \(V_{1}^{t}\):
\[V_{1}^{t}=\left(1-Y_{1}^{t}\right)\odot M_{1}^{t}+Y_{1}^{t}\cdot V_{res}+ \alpha\cdot V_{1}^{t-1} \tag{5}\]
\(V_{1}^{t}\) is the output potential of the first half neuron at time \(t\). \(V_{res}\) is the reset voltage of the neurons. \(\alpha\) is a scaling factor for the membrane potential.
Figure 1: Activation-value memory comparison between the original SNN network and our reversible SNN network, using VGG-13 with a timestep of ten as an example (legend: SNN node; saved to memory; not saved to memory).
Step 3: Use the first part of the output, \(Y_{1}\), to calculate the second part, \(Y_{2}\):
\[M_{2}^{t}=V_{2}^{t-1}+\frac{1}{\tau}\left(Y_{1}^{t}-V_{2}^{t-1}\right) \tag{6}\] \[Y_{2}^{t}=H\left(M_{2}^{t}-V_{th}\right)+\beta\cdot X_{1}^{t} \tag{7}\]
\(M_{2}^{t}\) is the membrane potential of the second half neuron at time \(t\). \(Y_{2}^{t}\) is the output of the second half neuron at time \(t\). Then calculate the second part of the output voltage, \(V_{2}^{t}\):
\[V_{2}^{t}=\left(1-Y_{2}^{t}\right)\odot M_{2}^{t}+Y_{2}^{t}\cdot V_{res}+ \alpha\cdot V_{2}^{t-1} \tag{8}\]
\(V_{2}^{t}\) is the output potential of the second half neuron at time \(t\).
Step 4: Concatenate all the output states \(S_{output}=([Y_{1},Y_{2}],[V_{1}^{t},V_{2}^{t}])\) along the last dimension.
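The forward coupling of Eqs. 3–8 can be condensed into the following minimal PyTorch sketch; the function name and parameter values are ours (placeholders, not the trained settings), and the Heaviside step is implemented directly, without the surrogate gradient used for training.

```python
import torch

def rev_snn_forward(x, v, tau=2.0, v_th=1.0, v_res=0.0, alpha=0.1, beta=1.0):
    # Step 1: split input currents and potentials into two halves.
    x1, x2 = torch.chunk(x, 2, dim=-1)
    v1, v2 = torch.chunk(v, 2, dim=-1)
    # Step 2: first half (Eqs. 3-5).
    m1 = v1 + (x1 - v1) / tau
    y1 = (m1 >= v_th).float() + beta * x2       # H(m1 - v_th) + beta * x2
    v1_out = (1 - y1) * m1 + y1 * v_res + alpha * v1
    # Step 3: second half, driven by y1 (Eqs. 6-8).
    m2 = v2 + (y1 - v2) / tau
    y2 = (m2 >= v_th).float() + beta * x1
    v2_out = (1 - y2) * m2 + y2 * v_res + alpha * v2
    # Step 4: concatenate along the last dimension.
    return torch.cat([y1, y2], dim=-1), torch.cat([v1_out, v2_out], dim=-1)
```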
### Reversible SNN Inverse Calculation
The purpose of the inverse calculation is to recover the unsaved input values from the outputs, i.e., to compute \(X\) and \(V\) from \(Y\) and \(V_{output}\). Our inverse algorithm is shown in the lower section of Fig. 2.
Step 1: Divide all the output states \(S_{output}=(Y,V_{output})\) into two groups along the last dimension, in the same way as in the first step of the forward calculation, namely \(S_{output}=[S_{output,1},S_{output,2}]\).
Step 2: Calculate \(V_{2}^{t-1}\) by combining Eq. 6 and Eq. 8 and simplifying:
\[V_{2}^{t-1}=\frac{V_{2}^{t}-(1-Y_{2})\cdot\frac{1}{\tau}\odot Y_{1}-Y_{2}\cdot V_{res}}{(1-Y_{2})\cdot(1-\frac{1}{\tau})+\alpha} \tag{9}\]
Calculate \(X_{1}^{t}\) by combining Eqs. 6 and 7 and simplifying:
\[X_{1}^{t}=\left(Y_{2}^{t}-H\left(M_{2}^{t}-V_{th}\right)\right)\div\beta \tag{10}\]
Step 3: Calculate \(V_{1}^{t-1}\) by combining Eq. 3 and Eq. 5 and simplifying:
\[V_{1}^{t-1}=\frac{V_{1}^{t}-(1-Y_{1})\cdot\frac{1}{\tau}\odot X_{1}^{t}-Y_{1}\cdot V_{res}}{(1-Y_{1})\cdot(1-\frac{1}{\tau})+\alpha} \tag{11}\]
Figure 2: Reversibility demo using a \(2\times 2\) toy input as an example, showing our forward and inverse calculations (legend: the origin of the equations in the inverse process).
Calculate \(X_{2}^{t}\) by combining Eqs. 3 and 4 and simplifying:
\[X_{2}^{t}=\left(Y_{1}^{t}-H\left(M_{1}^{t}-V_{th}\right)\right)\div\beta \tag{12}\]
Step 4: Concatenate all the input states \(S=([X_{1},X_{2}],[V_{1}^{t-1},V_{2}^{t-1}])\) along the last dimension.
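Mirroring the forward sketch above, the inverse calculation of Eqs. 9–12 can be sketched as follows (same assumed names and placeholder parameter values):

```python
import torch

def rev_snn_inverse(y, v_out, tau=2.0, v_th=1.0, v_res=0.0, alpha=0.1, beta=1.0):
    # Step 1: split the stored outputs.
    y1, y2 = torch.chunk(y, 2, dim=-1)
    v1_out, v2_out = torch.chunk(v_out, 2, dim=-1)
    # Step 2: recover the second-half states (Eqs. 9-10).
    v2 = (v2_out - (1 - y2) * y1 / tau - y2 * v_res) \
         / ((1 - y2) * (1 - 1 / tau) + alpha)
    m2 = v2 + (y1 - v2) / tau                    # Eq. 6 re-evaluated
    x1 = (y2 - (m2 >= v_th).float()) / beta      # Eq. 10
    # Step 3: recover the first-half states (Eqs. 11-12).
    v1 = (v1_out - (1 - y1) * x1 / tau - y1 * v_res) \
         / ((1 - y1) * (1 - 1 / tau) + alpha)
    m1 = v1 + (x1 - v1) / tau                    # Eq. 3 re-evaluated
    x2 = (y1 - (m1 >= v_th).float()) / beta      # Eq. 12
    # Step 4: concatenate back to the original layout.
    return torch.cat([x1, x2], dim=-1), torch.cat([v1, v2], dim=-1)
```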
## 4 Inverse Gradient Calculation
Although our reversible architecture significantly reduces memory usage, it does extend computation time for two primary reasons: (i) It necessitates the recalculation of the activation values that weren't originally stored. (ii) Many of the previous reversible layer architectures have inherited the backpropagation method from checkpointing [30; 6]. This method requires using the recalculated intermediate activation values to rerun the forward equation, thereby constructing a forward computational graph. This graph is then used to derive the corresponding gradients. This step of rerunning the forward equation introduces additional computational overhead, which extends the overall computation time.
This scenario is prevalent across all existing architectures of reversible layers, including Reversible GNN [15], Reversible RNN [17], Reversible Transformers [18], and so on. To reduce the training time, we have designed a new algorithm, called the inverse gradient calculation method, which is able to substantially decrease the number of FLOPs during the backpropagation process compared to the original reversible architecture. Our design is shown in Fig.3.
The left diagram illustrates the original forward and backward processes. The middle diagram depicts the original calculation process for reversible layers. It contains four steps:
1. The input \(X\) passes through the forward function to compute the output \(Y\), without storing the input data, to conserve memory.
2. For each layer \(n\): the output \(X^{n}\) of the layer passes through the inverse function to compute the layer's input \(X^{n-1}\). This process starts from the final output \(Y\).
3. For each layer \(n\): the input \(X^{n-1}\) passes through the forward function again to reconstruct the forward computational graph, which enables gradient computation.
4. For each layer \(n\): compute the gradient \(\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}\) based on the forward computational graph.
The right diagram is our design; it contains three steps:
1. The input \(X\) passes through the forward function to compute the output \(Y\), without storing the input data, to conserve memory.
2. For each layer \(n\): the output \(X^{n}\) of the layer passes through the inverse function to compute the layer's input \(X^{n-1}\) while constructing an inverse computational graph.
3. For each layer \(n\): compute the gradient \(\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}\) based on the inverse computational graph.
Below are the specific formulas for \(\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}\) based on the inverse computational graph; the derivation is given in the Appendix.
\[\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}=\frac{\theta}{2+\left( \pi\cdot\theta\cdot\left(M_{1}^{t}-V_{th}\right)\right)^{2}}\cdot\frac{1}{\tau }\odot\left(1+\frac{\theta}{2+\left(\pi\cdot\theta\cdot\left(M_{2}^{t}-V_{th} \right)\right)^{2}}\cdot\frac{1}{\tau}\right)+\beta \tag{13}\]
\[\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}=\frac{\theta}{2+\left( \pi\cdot\theta\cdot\left(M_{2}^{t}-V_{th}\right)\right)^{2}}+\beta \tag{14}\]
All variables in Eqs. 13 and 14 have the same meanings as in Eqs. 3–12, and \(\theta\) is an adjustable constant parameter.
The ability to compute gradients on the inverse computational graph in our algorithm relies on the symmetry between our forward function and the inverse calculation function.
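As an illustrative sketch of how this symmetry can be exploited with automatic differentiation (our own simplified formulation, assuming an element-wise coupling so that the forward Jacobian is the element-wise reciprocal of the inverse Jacobian; this is not the paper's exact implementation):

```python
import torch

def backward_through_inverse(y, inverse_fn, grad_y):
    # Recompute the layer input while building only the *inverse* graph.
    y = y.detach().requires_grad_(True)
    x = inverse_fn(y)                            # x = g(y)
    # Diagonal of dg/dy from the inverse graph (element-wise map assumed,
    # so the vector-Jacobian product with ones yields the diagonal).
    (dx_dy,) = torch.autograd.grad(x, y, grad_outputs=torch.ones_like(x))
    # Chain rule with df/dx = 1 / (dg/dy) for the element-wise case.
    return x.detach(), grad_y / dx_dy
```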
For the original reversible network:
\[FLOPS_{backward}^{ori}=FLOPS_{inverse}+FLOPS_{forward}+FLOPS_{\frac{\partial\mathbf{X^{n}}}{\partial\mathbf{X^{n-1}}}} \tag{15}\]
For our reversible network:
\[FLOPS_{backward}^{our}=FLOPS_{inverse}+FLOPS_{part\;of\;\frac{\partial \mathbf{X^{n-1}}}{\partial\mathbf{X^{n}}}} \tag{16}\]
Compared to the standard reversible network, our method reduces FLOPS by 23%. The FLOPS analysis is given in the Appendix, and the detailed time measurements are shown in the Experiment section.
## 5 Experiment
We first compare our design with state-of-the-art (SOTA) memory-efficient methods for SNN training on several datasets. Subsequently, we incorporate our reversible SNN node into different architectures across various datasets. The goal is twofold. First, we want to demonstrate that, compared to the SNN node currently in use, our reversible version offers substantial memory savings. Second, we aim to show that, compared to the existing reversible layer backpropagation method, our reversible SNN node backpropagation design considerably reduces the time spent in the backpropagation process, thereby accelerating the training phase. Finally, we conduct an ablation study to evaluate the influence of various parameters in our equations and the impact of the number of groups into which the input is divided on the performance of our model.
All experiments were conducted on a Quadro RTX6000 GPU with 24GB of memory, using PyTorch 1.13.1 with CUDA 11.4, and an Intel(R) Xeon(R) Gold 6244 CPU running at 3.60GHz. To ensure that the values obtained through inverse calculation match the original forward-calculated values, we use torch.allclose(rtol=1e-06, atol=1e-10) to compare all inverse-calculated values with the original forward-calculated values. All comparisons return true, as they should. Detailed hyperparameter settings for each experiment are provided in the Appendix.
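Concretely, the round-trip check looks like the following, reusing the rev_snn_forward / rev_snn_inverse sketches from Section 3 (shapes are illustrative):

```python
import torch

x, v = torch.randn(128, 64), torch.randn(128, 64)
y, v_out = rev_snn_forward(x, v)
x_rec, v_rec = rev_snn_inverse(y, v_out)
assert torch.allclose(x_rec, x, rtol=1e-06, atol=1e-10)
assert torch.allclose(v_rec, v, rtol=1e-06, atol=1e-10)
```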
### Comparison with the SOTA Methods
We compared our approach with the current SOTA methods in memory efficiency during the SNN training process on the CIFAR10 and CIFAR100 datasets. To verify the universality of our work, we applied our reversible SNN node to the current SOTA sparse training work for SNNs and compared the two methods on the Tiny-ImageNet dataset; the results are shown in Table 1.
Our approach (RevSNN) achieves a \(\mathbf{2.54\times}\) memory reduction on the CIFAR10 dataset and a \(\mathbf{3.13\times}\) memory reduction on the CIFAR100 dataset compared to the current SOTA memory-efficient SNN training method, while maintaining a high level of accuracy. On the Tiny-ImageNet dataset, we only replaced the original SNN node with our reversible SNN node, keeping all other conditions identical (RevND). As a result, with the VGG-16 architecture our accuracy is 0.28 percentage points higher than that of the original dense model, and we save \(\mathbf{1.87\times}\) more memory than the original work at 99% sparsity. With the ResNet-19 architecture, our accuracy is 0.31 percentage points higher than the dense model, and we save \(\mathbf{2.06\times}\) more memory than the original work at 99% sparsity.
### Memory Consumption Evaluation
To investigate whether our newly designed reversible SNN node achieves the expected memory savings compared to the original spiking neural node, we incorporated our node into a range of architectures, including VGG-11/13/16/19 and ResNet-19/34/50/101. For the VGG architectures, we examined the memory usage for timesteps ranging from 1 to 20; for the ResNet architectures, we examined the memory usage for timesteps from 1 to 10. These tests were conducted on the CIFAR-10 dataset. For all experiments, the batch size was kept at 128.
The SNN node memory comparison results are shown in Fig. 4. For the VGG architectures, even with the most memory-intensive VGG-19 architecture at a timestep of 20, the cumulative memory usage of all reversible SNN nodes in the entire network remains below 200MB. In contrast, using conventional SNN nodes demands a substantial amount of memory, up to 9032MB. For the ResNet architectures, ResNet-101 with a timestep of 10 needs about 28993MB using conventional SNN nodes, but only 1382MB using our reversible SNN node. As the number of model layers and the timestep value increase, the memory savings achieved by our reversible SNN node become more pronounced. Specifically, with the VGG-19 architecture at a timestep of 20, our reversible SNN node achieves a \(\mathbf{58.65\times}\) memory reduction compared to the original SNN node. The specific data values are given in the Appendix.
These experimental results align with our theoretical analysis in Section 3.1, further validating that our design is able to significantly reduce memory usage.
### Training Time Evaluation
To investigate the time efficiency of our designed backpropagation architecture in comparison with the traditional reversible layer backpropagation method, we employ two sets of backpropagation architectures for our reversible SNN node. The first set utilizes the original reversible layer backpropagation method, while the second set incorporates our newly designed backpropagation architecture.
Table 1: Comparison of our work with the SOTA methods in memory efficiency during the SNN training process. For all works: batch size = 128. *: Memory for CIFAR100 training was not reported directly; we estimate it based on the reported memory usage for CIFAR10 training and the parameter data.

| Dataset | Method | Architecture | Time-steps | Accuracy | Memory (GB) |
| --- | --- | --- | --- | --- | --- |
| CIFAR10 | OITT [32] | VGG(6/8V8) | 6 | 93.52% | 4 |
| CIFAR10 | S2A-STSU [29] | ResNet-17 | 5 | 92.75% | 27.93 |
| CIFAR10 | IDE-LIF [33] | CIFARNet-F | 30 | 91.74% | 2.8 |
| CIFAR10 | Hybrid [23] | VGG-16 | 100 | 91.13% | 9.36 |
| CIFAR10 | Tandem [31] | CifarNet | 8 | 89.04% | 4.2 |
| CIFAR10 | Skipper [27] | VGG-5 | 100 | 87.44% | 4.6 |
| CIFAR10 | **RevSNN (Ours)** | ResNet-18 | 4 | 91.87% | **1.101** |
| CIFAR100 | IDE-LIF [33] | CIFARNet-F | 30 | 71.56% | 3.51* |
| CIFAR100 | OITT [32] | VGG(8/8V5) | 6 | 71.05% | 4.04* |
| CIFAR100 | S2A-STSU [29] | VGG-13 | 4 | 68.96% | 31.05 |
| CIFAR100 | Skipper [27] | VGG-5 | 100 | 66.48% | 4.6 |
| CIFAR100 | **RevSNN (Ours)** | ResNet-18 | 4 | 71.13% | **1.12** |
| Tiny-ImageNet | ND (Dense) [12] | VGG-16 | 5 | 39.45% | 3.99 |
| Tiny-ImageNet | ND (90% sparsity) [12] | VGG-16 | 5 | 39.12% | 3.78 |
| Tiny-ImageNet | ND (99% sparsity) [12] | VGG-16 | 5 | 33.84% | 3.76 |
| Tiny-ImageNet | **RevSNN (Ours)** | VGG-16 | 5 | 39.73% | **2.01** |
| Tiny-ImageNet | ND (Dense) [12] | ResNet-19 | 5 | 50.32% | 5.29 |
| Tiny-ImageNet | ND (90% sparsity) [12] | ResNet-19 | 5 | 49.25% | 5.11 |
| Tiny-ImageNet | ND (99% sparsity) [12] | ResNet-19 | 5 | 41.96% | 5.09 |
| Tiny-ImageNet | **RevSNN (Ours)** | ResNet-19 | 5 | 50.63% | **2.47** |
We employ VGG-11, VGG-13, VGG-16, and VGG-19 architectures with timesteps ranging from 1 to 10. We compare the time required for one training iteration using the original SNN node, the reversible SNN node with the original reversible layer backpropagation method, and the reversible SNN node with our backpropagation architecture on the CIFAR-10 dataset. All experiments were performed on an otherwise idle RTX6000 GPU with a batch size of 64. The reported times for each forward and backward pass are averages over all iterations within the first five epochs.
Fig. 5 presents the measured training time for 4, 6, and 8 timesteps. The forward computation times of the three methods are virtually identical. The original SNN node exhibits the shortest backward time, primarily because it records all intermediate values throughout the computation and therefore needs no recalculation. Of the two reversible SNN nodes, our backpropagation design achieves a \(\mathbf{20}\%-\mathbf{30}\%\) speedup over the previous reversible layer backpropagation method during the backward process. As the network grows, the advantage of our backpropagation design becomes increasingly evident. Specifically, under the VGG-19 architecture with a timestep of 8, our node saves \(\mathbf{23.8}\%\) of the total training time compared to the reversible node using the original reversible layer backpropagation method. This aligns well with our theoretical predictions in Section 4. Data for the other timesteps are given in the Appendix.
### Ablation Study
#### Effects of parameters \(\alpha\) and \(\beta\) in our equations
In Eq. 4 and Eq. 5, we have two parameters: \(\alpha\) and \(\beta\). The optimal setting for \(\beta\) is 1, as this maximizes the preservation of the original features of the data. We conduct experiments to assess the impact of the \(\alpha\) parameter on the model's performance. We vary \(\alpha\) from 0.05 to 0.8 and evaluate the accuracy of the VGG-19, VGG-16, VGG-13, and VGG-11 architectures on the CIFAR100 dataset. The results are shown on the left of Fig. 6. We observe that varying \(\alpha\) within the range of 0.05 to 0.8 impacts the final accuracy by approximately 1%. Generally, the model exhibits optimal performance when \(\alpha\) is set between 0.1 and 0.2.

Figure 4: Memory comparison between the normal SNN node and our reversible SNN node.

Figure 5: Training time analysis. Solid lines: backward process duration; dashed lines: forward process duration. Red lines: training time for the original SNN; green lines: training time for the reversible SNN using the original reversible layer backpropagation method; blue lines: training time for the reversible SNN employing our proposed backpropagation architecture.
#### Effects of the number of groups for the various states
In Section 3.2, we introduced a method of splitting the various input states into two groups along the last dimension. However, this method can run into issues in specific circumstances; for instance, if the last dimension of a tensor is an odd number, it cannot be evenly divided into two groups. To address this, we enhance the original algorithm: we divide the various input states into \(n\) groups according to the number of elements \(n\) in the last dimension. Eqs. 3–5 are then executed sequentially for each group. This enhancement further improves the universality of our algorithm.
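A sketch of this generalization is given below; the chaining order between groups and the step interface are our assumptions for illustration.

```python
import torch

def grouped_forward(x, v, n, step):
    # Split the states into n groups along the last dimension and apply
    # the coupling of Eqs. 3-5 to each group in turn; each group's output
    # feeds the next group's coupling (assumed ordering).
    xs = torch.chunk(x, n, dim=-1)
    vs = torch.chunk(v, n, dim=-1)
    ys, v_outs = [], []
    partner = xs[-1]                         # assumed seed for the first group
    for xg, vg in zip(xs, vs):
        yg, vg_out = step(xg, vg, partner)   # one LIF update, Eqs. 3-5
        ys.append(yg)
        v_outs.append(vg_out)
        partner = yg
    return torch.cat(ys, dim=-1), torch.cat(v_outs, dim=-1)
```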
To evaluate the impact of the number of groups on the model, we modified some of the fully connected layers in the original ResNet-19, ResNet-18, VGG-16, and VGG-13 networks from 128 activations to 144 activations, so that the layer width has a wider variety of factors. We then evaluated the model's performance with the number of groups set to 2, 3, 6, 12, 24, 48, 72, and 144 on the CIFAR100 dataset. The results are shown on the right of Fig. 6. We observe that the training accuracy improves as the number of groups increases. When the number of groups approaches the number of elements \(n\) in the last dimension, the accuracy typically surpasses that of the original SNN node. This is attributed to a larger number of groups yielding a higher-fidelity representation of the original data.
## 6 Conclusion and Discussion
This work addresses a fundamental bottleneck of current deep SNNs: their high GPU memory consumption. We have designed a novel reversible SNN node that reduces the memory complexity from \(\mathcal{O}(n^{2})\) to \(\mathcal{O}(1)\). Specifically, our reversible SNN node allows our SNN network to achieve \(\mathbf{2.54}\times\) greater memory efficiency than the current SOTA memory-efficient SNN work on the CIFAR10 dataset, and \(\mathbf{3.13}\times\) greater on the CIFAR100 dataset. Furthermore, to tackle the prolonged training time caused by recalculating intermediate values during backpropagation within our reversible SNN node, we have designed a new backpropagation approach suited to reversible architectures. Compared to the original reversible layer architecture, this method reduces the overall training time by \(\mathbf{23.7}\%\). As a result, we are able to train over-parameterized networks that significantly outperform current models on standard benchmarks while consuming less memory.
Figure 6: **Left**: VGG-19, VGG-16, VGG-13, and VGG-11 models tested on the CIFAR100 dataset with different \(\alpha\) settings. **Right**: The number of activations in some fully connected layers inside ResNet-19, ResNet-18, VGG-16, and VGG-13 is changed from 128 to 144, and model performance is tested for different numbers of groups on CIFAR100. Rev.: reversible SNN node. Ori.: original SNN node. Mo.: modified network (some fully connected layers changed). |
2304.07877 | A Neural Network Transformer Model for Composite Microstructure
Homogenization | Heterogeneity and uncertainty in a composite microstructure lead to either
computational bottlenecks if modeled rigorously or to solution inaccuracies in
the stress field and failure predictions if approximated. Although methods
suitable for analyzing arbitrary and non-linear microstructures exist, their
computational cost makes them impractical to use in large-scale structural
analysis. Surrogate models or Reduced Order Models (ROMs) commonly enhance
efficiencies but are typically calibrated with a single microstructure.
Homogenization methods, such as the Mori-Tanaka method, offer rapid
homogenization for a wide range of constituent properties. However, simplifying
assumptions, like stress and strain averaging in phases, render the
consideration of both deterministic and stochastic variations in microstructure
infeasible. This paper illustrates a transformer neural network architecture
that captures the knowledge of various microstructures and constituents,
enabling it to function as a computationally efficient homogenization surrogate
model. Given an image or an abstraction of an arbitrary composite
microstructure of linearly elastic fibers in an elastoplastic matrix, the
transformer network predicts the history-dependent, non-linear, and homogenized
stress-strain response. Two methods for encoding microstructure features were
tested: calculating two-point statistics using Principal Component Analysis
(PCA) for dimensionality reduction and employing an autoencoder with a
Convolutional Neural Network (CNN). Both methods accurately predict the
homogenized material response. The developed transformer neural network offers
an efficient means for microstructure-to-property translation, generalizable
and extendable to a variety of microstructures. The paper describes the network
architecture, training and testing data generation, and performance under
cycling and random loadings. | Emil Pitz, Kishore Pochiraju | 2023-04-16T19:57:52Z | http://arxiv.org/abs/2304.07877v2 | # A Neural Network Transformer Model for Composite Microstructure Homogenization
###### Abstract
Heterogeneity and uncertainty in a composite microstructure lead to either computational bottlenecks if modeled rigorously, or to solution inaccuracies in the stress field and failure predictions if approximated. Although methods suitable for analyzing arbitrary and non-linear microstructures exist, their computational cost makes them impractical to use in large-scale structural analysis. Surrogate models or Reduced Order Models (ROMs) commonly enhance efficiencies, but they are typically calibrated with a single microstructure. Homogenization methods, such as the Mori-Tanaka method, offer rapid homogenization for a wide range of constituent properties. However, simplifying assumptions, like stress and strain averaging in phases, render the consideration of both deterministic and stochastic variations in microstructure infeasible.
This paper illustrates a transformer neural network architecture that captures the knowledge of various microstructures and constituents, enabling it to function as a computationally efficient homogenization surrogate model. Given an image or an abstraction of an arbitrary composite microstructure of linearly elastic fibers in an elastoplastic matrix, the transformer network predicts the history-dependent, non-linear, and homogenized stress-strain response. Two methods were tested that encode features of the microstructure. The first method calculates two-point statistics of the microstructure and uses Principal Component Analysis (PCA) for dimensionality reduction. The second method uses an autoencoder with a Convolutional Neural Network (CNN). Both microstructure encoding methods accurately predict the homogenized material response. The developed transformer neural network offers an efficient means for non-linear and history-dependent microstructure-to-property translation, which is generalizable and extendable to a wide variety of microstructures. The paper describes the network architecture, training and testing data generation and the performance of the transformer network under cycling and random loadings.
_Keywords:_ Transformer · Microstructure Homogenization · Surrogate Model · History-dependence · Microstructure encoding
## 1 Introduction
Multiple studies have shown that Neural Networks (NNs) can replace constitutive laws of homogeneous materials for various material behaviors, such as plastic behavior [1, 2], plastic material under cyclic loading [3], hyperelastic materials [4], or for modeling traction-separation behavior [5]. Extensions of NN models that predict the orthotropic linear elastic properties of 2D and 3D-microcells have been shown in [6, 7, 8] using CNNs. A surrogate model for the anisotropic electrical response of graphene/polymer nanocomposites was presented in [9].
To model the non-linear response of heterogeneous materials, NN surrogates of a non-linear strain energy function were presented in [10, 11]. In [12, 13], localized Mises stress distributions were generated using CNNs for a given monotonic uniaxial tensile loading at a prescribed loading rate. The approaches in [12, 13] were limited to monotonic loading, and history dependence was ignored. At the homogenized material level, the loading history can be decisive for finding the stress state under irreversible behavior and general loading conditions [14]. Therefore, in [15] state variables were used to account for the loading history and build a surrogate for the yield function of foams. An alternative to using state variables is an NN that takes the entire loading history into account. A Long Short-Term Memory (LSTM) based Recurrent Neural Network (RNN) that considers the strain history and microstructure information of elastoplastic composites was described in [16]. Similarly, an RNN surrogate was implemented in [17] for history-dependent homogenization to accelerate nested (micro-macro) finite element (FE\({}^{2}\)) analysis, with automatic differentiation of the surrogate model to obtain consistent tangent matrices. Wu, Nguyen, Kilingar, and Noels used an RNN with Gated Recurrent Units (GRUs) to implement a surrogate model for elastoplastic composite microcells in [14]. The methodology was extended to include localization in the surrogate model to extract the micro stress field using dimensionality reduction with PCA [18].
In this paper, we describe a surrogate model that can predict the response of arbitrary microstructures with two constituent phases. The model predicts the current (history-dependent, e.g., due to plasticity) stress state averaged over the Representative Volume Element (RVE), \(\overline{\mathbf{\sigma}}\), at time \(t\) as a result of the strain history \(\overline{\mathbf{\varepsilon}}_{1:t}\) for a given microstructure defined by parameters \(\mathbf{m}\) and constituent properties \(\mathbf{p}\)[16]:
\[\overline{\mathbf{\sigma}}=f\left(\overline{\mathbf{\varepsilon}}_{1:t},\mathbf{m},\mathbf{p},t\right). \tag{1}\]
The transformer network architecture, developed by Vaswani _et al._ in [19], was used as an alternative to RNNs. Transformer NNs have shown impressive performance in language modeling tasks, machine translation, and a variety of other applications (see Sec. 2). To date, transformers have not yet been used as surrogate models for a history-dependent material response.
Transformers trained on diverse data in the context of Natural Language Processing (NLP) applications have shown an impressive ability to generalize. A single model has been able to generate text, and write and debug code in different programming languages [20]. We believe that this ability of transformers to generalize can be extended to the field of microstructure homogenization. We hypothesized that a transformer could be trained on a variety of material combinations and microstructures and function as a fast and efficient homogenization tool. Because of the prediction speed, the trained model could be used to perform Uncertainty Quantification (UQ) of the response of composite structures with complex and varying microstructures, where current homogenization techniques are still a computational bottleneck.
In this effort, the feasibility of using a transformer network to predict a history-dependent material response was first assessed by training and validating with homogeneous materials. Following the successful prediction of the history-dependent homogeneous material response, the surrogate model was extended and trained for heterogeneous materials. Furthermore, in NLP, pre-training of transformers, where the network is first trained on large unlabeled datasets followed by task-specific fine-tuning, has shown major performance advantages [21]. Following a similar strategy, we used the weights trained on homogeneous data as a starting point for training with heterogeneous materials. This strategy reduced the amount of computationally expensive microstructure homogenization data required to teach homogenization to the transformer network.
This paper will first introduce the neural network techniques required to solve the homogenization problem and give an overview of similar approaches in the literature. The process to generate the required training data and represent the microstructure information will then be described, followed by a description of the used network architecture and the training process. Finally, the results will be presented.
## 2 Applications and Architecture of the Transformer
Transformer models are a recent alternative to RNNs for analyzing sequential data. First presented in [19], transformers have been successful in NLP applications and have more recently been applied to a wider range of problems. Vaswani _et al._ already showed impressive performance gains in language translation tasks for the original transformer [19]. The recent public release of the transformer-based Large Language Model (LLM) ChatGPT by OpenAI has attracted major media attention [22]. ChatGPT is a version of the Generative Pre-trained Transformer (GPT) models developed by OpenAI [22, 23]. Successively larger models (GPT: 110 million parameters, GPT-2: 1.5 billion parameters, GPT-3: 175 billion parameters [24]) have led to an impressive ability to generate human-like text. The new GPT-4 reportedly has 100 trillion parameters and works with both text and images [25]. The GPT models use a decoder-only architecture, with the decoder relatively unchanged from the original transformer model [19, 22, 23]. The GPT models are trained on unlabeled data, allowing them to be trained on data that require virtually no pre-processing and alleviating the need for manually labeled data.
Aside from NLP, multifaceted successful applications of transformers have been shown recently. Examples include "AlphaFold", a neural network incorporating transformers for predicting a protein's three-dimensional structure from its amino acid sequence with atomic accuracy [26, 27, 28]. Applications have been shown in computer vision, such as image classification [29, 30], and in a wide array of additional tasks such as object detection, image segmentation, and pose estimation [31]. In [32], a transformer was adapted to take three-dimensional meshes as input and perform shape classification and segmentation tasks. Two recent publications use transformers to improve the estimation of counterfactual outcomes in, e.g., healthcare settings [33, 34]. Furthermore, studies have shown that transformers or transformer-based models can be used for time series forecasting based on previously observed data, such as predicting the prevalence of influenza [35, 36, 37]. In [38], a transformer-based time series forecasting model named Temporal Fusion Transformer was proposed for multi-horizon forecasting, i.e., prediction of multiple variables of interest. The model incorporates not only previous time series data but a mix of static and time-dependent inputs, as well as known future inputs.
Following the architecture of most sequence-to-sequence models, the transformer has an encoder-decoder structure [19]. Given an input sequence \(\mathbf{x}=(x_{1},\ldots,x_{n})\), the encoder generates an intermediate sequence \(\mathbf{z}=(z_{1},\ldots,z_{n})\). The decoder uses \(\mathbf{z}\) to generate the output \(\mathbf{y}=(y_{1},\ldots,y_{n})\) one element at a time. Previously generated output values are used auto-regressively as supplementary input [19]. Models that process data with sequential dependencies commonly use recurrent layers to process individual values sequentially [39]. Transformers, in contrast, rely solely on so-called attention mechanisms in the encoder and decoder to represent dependencies between input and output sequences [19, 40]. Attention was initially introduced in [40] to improve the performance of sequence models for long-range dependencies. Vaswani _et al._ used a scaled dot-product attention mechanism given by [19]:
\[\text{Attention}\left(\mathbf{Q},\mathbf{K},\mathbf{V}\right)=\text{softmax}\left( \frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{k}}}\right)\mathbf{V} \tag{2}\]
where \(\mathbf{Q}\), \(\mathbf{K}\), and \(\mathbf{V}\) are the queries, keys, and values, of dimensions \(d_{k}\), \(d_{k}\), and \(d_{v}\), respectively, and softmax is the softmax activation function. In the transformer, instead of applying the attention mechanism a single time, Vaswani _et al._ use so-called multi-head attention. The attention function is performed in parallel \(h\) times, where \(h\) is a selected parameter (eight in [19]) representing the number of attention heads. Additionally, \(\mathbf{Q}\), \(\mathbf{K}\), and \(\mathbf{V}\) are linearly projected to \(d_{k}\), \(d_{k}\), and \(d_{v}\) dimensions, respectively, using projection matrices \(\mathbf{W}\) that are learned during model training [19]:
\[\text{head}_{i}=\text{Attention}\left(\mathbf{Q}\mathbf{W}_{i}^{Q},\mathbf{K}\mathbf{W}_{i}^ {K},\mathbf{V}\mathbf{W}_{i}^{V}\right)\qquad\text{with}\quad i=1,\ldots,h. \tag{3}\]
This operation produces \(d_{v}\)-dimensional output for each attention head. The output of the \(h\) attention heads is then concatenated and linearly projected again using the learned projection matrix \(\mathbf{W}^{O}\) to produce the multi-head attention output [19]:
\[\text{MultiHead}\left(\mathbf{Q},\mathbf{K},\mathbf{V}\right)=\text{Concat}\left(\text{ head}_{1},\ldots,\text{head}_{h}\right)\mathbf{W}^{O}. \tag{4}\]
From this, the dimensions of the projection matrices follow [19]:
\[\begin{split}\mathbf{W}_{i}^{Q}&\in\mathbb{R}^{d_{model}\times d_{k}}\\ \mathbf{W}_{i}^{K}&\in\mathbb{R}^{d_{model}\times d_{k}}\\ \mathbf{W}_{i}^{V}&\in\mathbb{R}^{d_{model}\times d_{v}}\\ \mathbf{W}^{O}&\in\mathbb{R}^{hd_{v}\times d_{model}},\end{split} \tag{5}\]
with the selected model size \(d_{model}\) (e.g., \(d_{model}=512\) in [19]).
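A compact PyTorch sketch of Eqs. 2–5 is given below; it is a didactic re-implementation following [19], not the code used in any of the cited works.

```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model=512, h=8):
        super().__init__()
        assert d_model % h == 0
        self.h, self.d_k = h, d_model // h
        self.wq = nn.Linear(d_model, d_model)   # W_i^Q, fused over heads
        self.wk = nn.Linear(d_model, d_model)   # W_i^K
        self.wv = nn.Linear(d_model, d_model)   # W_i^V
        self.wo = nn.Linear(d_model, d_model)   # W^O

    def _split(self, x):
        # (batch, length, d_model) -> (batch, heads, length, d_k)
        b, l, _ = x.shape
        return x.view(b, l, self.h, self.d_k).transpose(1, 2)

    def forward(self, q, k, v, mask=None):
        b, l, _ = q.shape
        q, k, v = self._split(self.wq(q)), self._split(self.wk(k)), self._split(self.wv(v))
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_k)   # Eq. 2
        if mask is not None:                                     # masked self-attention
            scores = scores.masked_fill(mask == 0, float("-inf"))
        heads = torch.softmax(scores, dim=-1) @ v                # Eq. 3, all heads at once
        heads = heads.transpose(1, 2).reshape(b, l, self.h * self.d_k)
        return self.wo(heads)                                    # concat + W^O, Eq. 4

x = torch.rand(2, 10, 512)
out = MultiHeadAttention()(x, x, x)   # self-attention, output shape (2, 10, 512)
```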
There are contradicting studies on whether attention can provide reasoning for a model's predictions by laying out the relative importance of inputs [41, 42]. In [41] the authors compared attention weights with feature importance and found a low correlation, arguing that attention provides little transparency. In contrast, in [42] the authors argue that attention can provide a plausible explanation for a model's predictions, however, not _"one true, faithful interpretation of the link...between inputs and outputs"_.
Using the multi-head attention mechanism, the architecture of the transformer is depicted in Fig. 1. The encoder layer consists of a multi-head attention mechanism and a simple fully connected feed-forward layer, both with residual connections [19, 43]. Layer normalization is performed after both the attention and feed-forward layers. The encoder layer is repeated \(N\) times (\(N=6\) in [19]). The keys, values, and queries of the multi-head attention mechanism are the embedded model input (first encoder layer) or the output of the previous encoder layer, making the attention mechanism a self-attention mechanism.

A similar architecture is used for the decoder layers, with an additional multi-head attention mechanism ("encoder-decoder attention", middle sub-layer in Fig. 1) that uses the encoder output as values and keys and the output of the previous decoder layers as queries [19]. The encoder-decoder multi-head attention mechanism is similarly followed by layer normalization and includes residual connections. Furthermore, the self-attention mechanism attending to the embedded model output (first decoder sub-layer in Fig. 1) is masked, and the output sequence is shifted to the right by one position. Masked self-attention effectively prevents the decoder from "looking into the future", i.e., the decoder predictions can only depend on known past output positions. Again, the decoder layer is repeated \(N\) times. The output of the decoder is finally passed through a linear layer followed by softmax activation to obtain the output probabilities.
The inputs and outputs of the model have to be converted to vectors of dimension \(d_{model}\)[19]. In NLP applications, commonly learned embeddings are used for this purpose [44].
Figure 1: Architecture of the transformer model. (adapted from [19])
Unlike RNNs, the transformer does not use any recurrence and, therefore, does not directly have information about the location of a value in a sequence [19]. Instead, positional information is injected by adding a positional encoding to the embedded input and output. The positional encoding consists of sine and cosine functions of different frequencies.
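The fixed sinusoidal encoding of [19] can be sketched as follows (assuming an even \(d_{model}\)):

```python
import torch

def positional_encoding(max_len, d_model):
    # PE[pos, 2i] = sin(pos / 10000^(2i/d_model)); PE[pos, 2i+1] = cos(...)
    pos = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)
    i = torch.arange(0, d_model, 2, dtype=torch.float32)
    angles = pos / torch.pow(10000.0, i / d_model)
    pe = torch.zeros(max_len, d_model)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe   # added to the embedded input/output sequences
```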
Additionally, transformers can be advantageous compared to RNNs in multiple respects. Long-range dependencies in time sequences are problematic due to the recurrent nature of RNNs [19, 36]. Transformers appear better able to capture long-range dependencies because self-attention mechanisms connect all positions in a sequence with the same number of operations, whereas RNNs process positions sequentially [19]. Furthermore, RNNs can suffer from exploding or vanishing gradients during training, making model training more difficult [39].
## 3 Generation of Training Data
The objective of this effort was to develop a transformer neural network surrogate to predict the non-linear and history-dependent response of composite microcells. Generation of the training dataset requires analysis of a microcell's response to deformation through scale transition from the micro-scale to the macro-scale. An introduction to Finite Element Method (FEM)-based homogenization to solve the multi-scale Boundary Value Problem (BVP) and generate the stress response of a microcell for finite-strain problems will be given in the following section (Sec. 3.1).
### Solving the Multi-Scale BVP to Calculate the Microcell Stress Response
Given a (homogenized) macro-scale deformation gradient tensor \(\mathbf{F}_{M}\), the homogenized Piola-Kirchhoff stress tensor of the underlying microcell \(\mathbf{P}_{M}\left(\mathbf{X},t\right)\) at (macro) material point \(\mathbf{X}\) and time \(t\) shall be found (see Fig. 2). The homogenized stress tensor \(\mathbf{P}_{M}\) can be found by solving the micro-scale BVP using FEM. In the following equations, \(M\) represents the macro-scale values and \(m\) refers to values on the micro-scale. The given summary of the multi-scale homogenization problem is based on the derivation presented in [14, 18].
The linear momentum equations at the macro-scale in the domain \(\Omega\) and neglecting dynamic effects are given by [14, 18]:
\[\mathbf{P}_{M}\left(\mathbf{X}\right)\cdot\nabla_{0}+\mathbf{b}_{M}=\mathbf{0}\qquad\forall \mathbf{X}\in\Omega, \tag{6}\]
with the gradient operator with regard to the reference configuration \(\nabla_{0}\) and the body force \(\mathbf{b}_{M}\). The linear momentum equations are subject to the Dirichlet boundary conditions on the boundary \(\partial_{D}\Omega\) and Neumann boundary conditions on the boundary \(\partial_{N}\Omega\):
\[\mathbf{u}_{M}\left(\mathbf{X}\right)=\hat{\mathbf{u}}_{M}\qquad\forall\mathbf{X} \in\partial_{D}\Omega \tag{7}\] \[\mathbf{P}_{M}\left(\mathbf{X}\right)\cdot\mathbf{N}_{M}=\hat{\mathbf{T}}_{M}\qquad \forall\mathbf{X}\in\partial_{N}\Omega, \tag{8}\]
with the displacements \(\mathbf{u}_{M}\), the constrained displacements \(\hat{\mathbf{u}}_{M}\) on \(\partial_{D}\Omega\), the unit normal \(\mathbf{N}_{M}\) on \(\partial_{N}\Omega\), and the surface traction \(\hat{\mathbf{T}}_{M}\).
Figure 2: Scale-transition: Given a macro-scale deformation gradient \(\mathbf{F}_{M}\) the corresponding homogenized Piola-Kirchhoff stress \(\mathbf{P}_{M}\) is found by solving the micro-scale BVP. (adapted from [14])
On the micro-scale, the BVP is commonly defined on a parallelepipedic RVE with the boundaries of the RVE being planar faces \(\partial\omega\). The linear momentum equations on the micro-scale are given by:
\[\mathbf{P}_{m}\left(\mathbf{x}\right)\cdot\nabla_{0}=\mathbf{0}\qquad\forall\mathbf{x}\in\omega, \tag{9}\]
subject to boundary conditions:
\[\mathbf{P}_{m}\left(\mathbf{x}\right)\cdot\mathbf{N}_{m}=\hat{\mathbf{T}}_{m}\qquad\forall\mathbf{ x}\in\partial\omega, \tag{10}\]
where the traction \(\mathbf{T}_{m}\) acts on the boundary \(\partial\omega\) with unit normal \(\mathbf{N}_{m}\).
The macro scale deformation gradient tensor \(\mathbf{F}_{M}\) and stress tensor \(\mathbf{P}_{M}\) are given by volume average over \(\omega\) of the micro scale deformation gradient \(\mathbf{F}_{m}\) and micro scale stress tensor \(\mathbf{P}_{m}\), respectively [14]:
\[\mathbf{F}_{M}\left(\mathbf{X},t\right) =\frac{1}{V\left(\omega\right)}\int_{\omega}\mathbf{F}_{m}\left(\mathbf{ x},t\right)d\mathbf{x} \tag{11}\] \[\mathbf{P}_{M}\left(\mathbf{X},t\right) =\frac{1}{V\left(\omega\right)}\int_{\omega}\mathbf{P}_{m}\left(\mathbf{ x},t\right)d\mathbf{x} \tag{12}\]
Energy consistency requires the Hill-Mandel condition to be fulfilled [14, 18, 45]:
\[\mathbf{P}_{M}:\delta\mathbf{F}_{M}=\frac{1}{V\left(\omega\right)}\int_{\omega}\mathbf{ P}_{m}:\delta\mathbf{F}_{m}d\mathbf{x}. \tag{13}\]
On the micro-scale, the displacement field \(\mathbf{u}_{m}\), with an introduced perturbation field \(\mathbf{u}^{\prime}\left(\mathbf{x}\right)\), the macro-scale deformation gradient \(\mathbf{F}_{M}\), and the unit tensor \(\mathbf{I}\), is defined by [14]:
\[\mathbf{u}_{m}\left(\mathbf{x}\right)=\left(\mathbf{F}_{M}-\mathbf{I}\right)\cdot\left(\mathbf{x }-\mathbf{x}_{0}\right)+\mathbf{u}^{\prime}\left(\mathbf{x}\right). \tag{14}\]
The point \(\mathbf{x}_{0}\) is a reference point in \(\omega\). Using trial and test functions, the weak form of the micro-scale BVP can be formulated:
\[\int_{\omega}\mathbf{P}_{m}\left(\mathbf{u}^{\prime}\right):\left(\delta\mathbf{u}^{\prime}\otimes\nabla_{0}\right)d\mathbf{x}=0,\qquad\forall\delta\mathbf{u}^{\prime}\in\mathcal{U}\left(\omega\right)\subset\mathcal{U}^{\text{min}}\left(\omega\right), \tag{15}\]
where \(\delta\mathbf{u}^{\prime}\in\mathcal{U}\left(\omega\right)\) is a test function in the admissible kinematic vector field \(\mathcal{U}\left(\omega\right)\), \(\mathbf{u}^{\prime}\in\mathcal{U}\left(\omega\right)\subset\mathcal{U}^{\text{min}}\left(\omega\right)\) is to be found, and \(\mathcal{U}^{\text{min}}\left(\omega\right)\) is the minimum kinematic vector field.
Given a prescribed macro-scale deformation gradient \(\mathbf{F}_{M}\), the weak form can be solved using FEM to find the homogenized stress tensor \(\mathbf{P}_{M}\). On opposite faces of the RVE \(\mathbf{x}^{+}\in\omega^{+}\) and \(\forall\mathbf{x}^{-}\in\omega^{-}\), Periodic Boundary Conditions (PBCs) are applied:
\[\begin{split}\mathcal{U}^{\text{BC}}\left(\omega\right)=& \left\{\mathbf{u}^{\prime}\in\mathcal{H}\left(\omega\right)\left|\mathbf{u}^{\prime} \left(\mathbf{x}^{+}\right)=\mathbf{u}^{\prime}\left(\mathbf{x}^{-}\right)\right.,\\ &\left.\forall\mathbf{x}^{+}\in\omega^{+}\right.\quad\text{and corresponding}\quad\forall\mathbf{x}^{-}\in\omega^{-}\right\}\subset\mathcal{U}^{\text{min}}\left(\omega \right),\end{split} \tag{16}\]
with the Hilbert space \(\mathcal{H}\). While various kinds of boundary conditions can be applied to fulfill the Hill-Mandel condition [14], PBCs were chosen for this work for ease of implementation.
### Description of the Micro-Scale Composite Volume Elements
To generate the training dataset, a workflow was implemented to:
1. Generate random composite Stochastic Volume Elements (SVEs).
2. Setup and execute FEM simulations with PBCs and given homogenized load to generate the homogenized microcells' response by solving the micro-scale BVPs.
3. Save the microstructure data and homogenized loading and response (as sequences of strain increments and corresponding stress) in a database to use as training data.
The term SVE refers to a volume element of the analyzed composite material that shows varying homogenized properties depending on the SVE realization [46] (e.g., given multiple volume elements with similar Fiber Volume Ratio (FVR), each volume element will yield different results). This is in contrast to an RVE, which assumes statistical representativity.
For simplicity, in this effort, the application was restricted to two-dimensional problems, more specifically to the transversal behavior of continuously reinforced composites assuming plane strain conditions. Therefore, the generated volume elements consisted of circular fibers inside a matrix (see Fig. 3 for three arbitrary SVEs at different volume
fractions). Given plane-strain conditions and with the \(z\)-axis being the out-of-plane direction (normal to an analyzed SVE), the three non-zero strain components \(\varepsilon_{xx}\), \(\varepsilon_{yy}\), and \(\varepsilon_{xy}\) can be prescribed resulting in four stress components \(\sigma_{xx}\), \(\sigma_{yy}\), \(\sigma_{zz}\), and \(\sigma_{xy}\).
Periodic SVEs were generated by randomly placing fibers inside the microcell until the desired FVR was reached. Uniform distributions were used to place the fibers. The uniformly distributed fiber locations inside the generated microstructures may not represent the fiber location distributions found in a real composite material [46]; further analysis of the fiber location distributions could be an interesting objective for future model improvements. After generating the microcell geometry, the SVE was meshed with quadratic (to prevent locking effects) plane strain triangles using the Python API of the open-source software _Gmsh_[47]. Periodic nodes were generated on opposite sides of the SVE during meshing to allow PBCs to be applied. The generated mesh was then used to generate an input file for the FEM solver _CalculiX_[48]. PBCs were applied, mapping the macro-scale deformation gradient to the SVE. An incremental plasticity model with a multiplicative decomposition of the deformation gradient was used to model the matrix behavior. Linear elastic behavior was assumed until the yield strength was reached, followed by perfectly plastic behavior (accumulating plastic strain at constant stress with further loading) [49]. The matrix plasticity introduces a history-dependency of the microcell response. Fibers were modeled linear elastically. Using _CalculiX_, the micro-scale BVP was solved, and the strains and stresses at the integration points were recorded. Subsequently, the volume-averaged strain and stress were calculated as the homogenized properties.
Following [14, 18], the generated training data consisted of both cyclic strain sequences and random strain walks. Strain sequences were generated with a maximum length of 100 strain increments. To apply boundary conditions to the micro-scale BVP, the deformation gradient tensor \(\mathbf{F}_{M}\) was required. Therefore, loading paths, again following [14], were derived from a sequence of right stretch tensors \(\mathbf{U}_{M_{0}},\mathbf{U}_{M_{1}},\ldots\mathbf{U}_{M_{n}}\). At each time step, a load increment \(\Delta\mathbf{U}_{M_{n}}=\mathbf{U}_{M_{n}}-\mathbf{U}_{M_{n-1}}\) was applied. The increment of the right stretch tensor can be represented using orthogonal eigenvectors \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) (only the 2D-case is addressed here) that control the loading direction and eigenvalues \(\Delta\lambda_{1}\) and \(\Delta\lambda_{2}\), governing the load increment size [14]:
\[\Delta\mathbf{U}_{M}=\Delta\lambda_{1}\mathbf{n}_{1}\otimes\mathbf{n}_{1}+\Delta\lambda_{ 2}\mathbf{n}_{2}\otimes\mathbf{n}_{2}. \tag{17}\]
To generate the random loading paths, random orthogonal vectors were generated:
\[\mathbf{n}_{1} =\left[\cos\alpha\quad\sin\alpha\right]^{T} \tag{18}\] \[\mathbf{n}_{2} =\left[-\sin\alpha\quad\cos\alpha\right]^{T},\]
with the angle \(\alpha\) being randomly chosen from a uniform distribution with \(\alpha\in[0,\pi)\). Corresponding eigenvalues were generated as:
\[\Delta\lambda_{1} =\sqrt{\mathcal{R}}\cos\left(\theta\right) \tag{19}\] \[\Delta\lambda_{2} =\sqrt{\mathcal{R}}\sin\left(\theta\right),\]
where \(\mathcal{R}\) and \(\theta\) are uniformly distributed variables in the intervals \(\mathcal{R}\in\left[\Delta R_{min}^{2},\Delta R_{max}^{2}\right]\) and \(\theta\in[0,2\pi]\). This guarantees the total step size to be bounded between \(\Delta R_{min}\) and \(\Delta R_{max}\):
\[\Delta R_{min}\leq\sqrt{\Delta\lambda_{1}^{2}+\Delta\lambda_{2}^{2}}\leq \Delta R_{max}. \tag{20}\]
A lower step size of \(\Delta R_{min}=5\times 10^{-5}\) and an upper limit of \(\Delta R_{max}=2.5\times 10^{-3}\) were used for random strain sequences. Furthermore, the maximum total stretch change was limited to 0.05 for each component of the stretch tensor.
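A NumPy sketch of one random load increment following Eqs. 17–20 (the function name and defaults are ours; the bounds match the values stated above):

```python
import numpy as np

rng = np.random.default_rng()

def random_stretch_increment(dr_min=5e-5, dr_max=2.5e-3):
    # Random loading direction (Eq. 18).
    a = rng.uniform(0.0, np.pi)
    n1 = np.array([np.cos(a), np.sin(a)])
    n2 = np.array([-np.sin(a), np.cos(a)])
    # Random magnitudes with bounded total step size (Eqs. 19-20).
    R = rng.uniform(dr_min**2, dr_max**2)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    dl1, dl2 = np.sqrt(R) * np.cos(theta), np.sqrt(R) * np.sin(theta)
    # Increment of the right stretch tensor (Eq. 17).
    return dl1 * np.outer(n1, n1) + dl2 * np.outer(n2, n2)

dU = random_stretch_increment()   # 2x2 symmetric stretch increment
```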
Figure 3: Three random composite microcells with varying FVRs.
In addition to the random loading paths, cyclic loading paths were generated, randomly choosing between uniaxial loading in the \(x\) or \(y\) direction, biaxial loading (\(x\) and \(y\)), and pure shear loading. To generate cyclic strain sequences, the lower step size was increased to \(\Delta R_{min}=5\times 10^{-4}\).
To define the boundary conditions using PBCs, the deformation gradient was calculated from the right stretch tensor:
\[\mathbf{F}_{M}=\mathbf{R}_{M}\cdot\mathbf{U}_{M}, \tag{21}\]
where \(\mathbf{R}_{M}\) is a rotation tensor set to \(\mathbf{R}_{M}=\mathbf{I}\) in this effort due to the frame indifference of the micro-scale BVP [14]. The Green-Lagrange strain tensor \(\mathbf{E}_{M}\) as input to the NN surrogate was then calculated:
\[\mathbf{E}_{M}=\frac{1}{2}\left(\mathbf{U}_{M}^{2}-\mathbf{I}\right). \tag{22}\]
The homogenized response of three random microcells with different FVRs to cyclic loading (Fig. 4a) and random loading (Fig. 4b) can be seen in Fig. 4.
### Dimensionality Reduction of Microstructure Representation
Homogenization of heterogeneous materials requires knowledge about the composition and internal structure of the material. To represent the microstructure, a set of descriptors is required that can capture the features governing the material behavior while being sufficiently low-dimensional to be tractable [50, 51]. Common examples of microstructure descriptors include volume fraction, inclusion size, or inclusion spacing. In [16], FVR, fiber radius, and mean fiber distance were used to train RNNs to predict the homogenized response of continuously reinforced composites. However, intuitively selected microstructure descriptors might not sufficiently capture the relevant information and, when examining different types of microstructures, might not be generally applicable. Instead, an adaptive microstructure encoding that applies to a wide range of microstructures without manual feature selection is preferable.
It has been shown that the relative spatial distribution of local heterogeneity based on two-point statistics can be used to effectively capture microstructure features [51, 52, 53]. Two approaches will be examined in this effort: a microstructure representation based on dimensionality-reduced two-point statistics and a learned microstructure auto-encoding where a CNN reduces the dimensionality and extracts information from an image of the microstructure. An introduction to two-point statistics will be given first.
Two-point statistics or two-point spatial correlations can give information about the distribution of local states and the fractions of local states in a microstructure [52, 53, 54]. Effectively, two-point spatial correlations provide the probability that the start and end points of a vector are on a certain specified local state, respectively. A discretized (e.g., voxelized) microstructure \(j\) is described by a microstructure function \(m_{j}\left[h;s\right]\) that gives the probability distribution for a local state \(h\in H\), where \(H\) are all possible local states, at each position \(s\in S\) with the complete set of all positions \(S\)[52]. In a voxelized two-dimensional microstructure with uniform cell size the position \(s\) can be simply given by indices \(i\) and \(j\) of the voxels. A set of two-point correlations \(f_{j}\left[h,h^{\prime};r\right]\), providing relative spatial information of the local states \(h\) and \(h^{\prime}\), can be calculated from the correlation of the microstructure function:
\[f_{j}\left[h,h^{\prime};r\right]=\frac{1}{\Omega_{j}\left[r\right]}\sum_{s}m _{j}\left[h;s\right]m_{j}\left[h^{\prime};s+r\right], \tag{23}\]
with a discrete vector \(r\) inside the microstructure domain and a normalization factor \(\Omega_{j}\left[r\right]\) dependent on \(r\). A simple two-phase microstructure (gray and white cells representing the different phases) with the positions \(s\) is shown in Fig. 5. The two-point statistics can be understood as the likelihood of encountering the local states \(h\) and \(h^{\prime}\) at the start (tail) and end (head) of a randomly placed vector \(r\), respectively [52]. The matrix-phase auto-correlations of the microcells from the training set shown in Fig. 3 are shown in Fig. 6.
In this effort, a computational implementation of two-point statistics in the Python package _Materials Knowledge Systems in Python (PyMKS)_[52] was used.
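For a periodic microstructure, these correlations reduce to circular cross-correlations that can be evaluated with FFTs; the following is a simplified NumPy illustration of Eq. 23 with \(\Omega_{j}\left[r\right]\) equal to the total number of cells, not the _PyMKS_ implementation itself.

```python
import numpy as np

def two_point_stats(m_h, m_hp):
    # Periodic two-point correlation f[h, h'; r] of two indicator arrays
    # (Eq. 23 with Omega[r] = number of cells, i.e. periodic boundaries).
    S = m_h.size
    F = np.fft.ifftn(np.conj(np.fft.fftn(m_h)) * np.fft.fftn(m_hp)).real / S
    return np.fft.fftshift(F)   # center the zero vector r = 0

# Example: matrix-phase auto-correlation of a binary microstructure image;
# the value at r = 0 equals the volume fraction of the phase.
micro = (np.random.rand(32, 32) > 0.5).astype(float)   # 1 = matrix phase
auto = two_point_stats(micro, micro)
```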
To reduce the dimensionality of the microstructure representation provided by two-point statistics, PCA was used [55]. Two-point statistics generate a high-dimensional feature space that can contain redundant information; therefore, PCA can be applied to efficiently remove redundant information and reduce the dimensionality [52]. Using PCA, the vector \(\mathbf{f}_{j}\left[l\right]\), containing all possible combinations of \(h\), \(h^{\prime}\), and \(r\) in \(f_{j}\left[h,h^{\prime};r\right]\), can be approximated as:
\[\mathbf{f}_{j}\left[l\right]\approx\overline{\mathbf{f}\left[l\right]}+\sum_{n=1}^{k} \mu_{j,n}\mathbf{\phi}_{n}\left[l\right]. \tag{24}\]
In Eq. 24, \(\overline{\mathbf{f}\left[l\right]}\) represents the average values over all microstructures \(j\) from the calibration data for all \(l\), \(\mu_{j,n}\) are the principal component scores (effectively reduced-dimensionality microstructure descriptors), \(\mathbf{\phi}_{n}\left[l\right]\) are orthonormal vectors (principal components) that retain the maximum variance under projection, and \(k\) is the number of selected components [52, 55, 56]. The principal component scores \(\mu_{j,n}\) are ordered by their contributions to the variance, with \(n=1\) having the highest contribution.
Randomly choosing 1000 microstructures from the training dataset, calculating two-point statistics and performing PCA, the individual and cumulative contribution to the variance of the first ten principal components are plotted in Fig. 7. The first principal component already explains more than 85 % of the dataset variance, while contributions of further components are quickly decreasing. Therefore, with six components explaining more than 90 % of the variance, PCA combined with two-point statistics is a highly efficient method to reduce the dimensionality of the microstructure description (compared to e.g., a geometric representation of the microstructure).
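A minimal sketch of this step with scikit-learn, assuming `microstructures` is a list of binary microstructure arrays and reusing the `two_point_stats` sketch above:

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows of f are the flattened two-point statistics f_j[l] of each microstructure j.
f = np.stack([two_point_stats(m, m).ravel() for m in microstructures])
pca = PCA(n_components=3)
mu = pca.fit_transform(f)                        # scores mu_{j,n} of Eq. 24
print(np.cumsum(pca.explained_variance_ratio_))  # cumulative explained variance
```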
Figure 4: Homogenized stress-strain response of three random SVE realizations with different FVRs for cyclic (4a) and random (4b) loading.
In continuously reinforced composites, the predominant microstructure property is the ratio between fiber and matrix content [53]. To gain insight into the meaning of the principal component scores in the microstructure representation, the scores of the first three principal components for 1000 random microstructures are plotted against the FVR in Fig. 8. A clear correlation between the first principal component score and the FVR is evident, with all microstructures with the same FVR having similar scores. This suggests that the first PCA score is a direct reflection of the FVR, unrelated to other microstructural characteristics. Similar values for the second PCA score are likewise observed in microstructures with similar FVRs: a low score is present for intermediate FVR, and both high and low FVR exhibit high scores. However, the third PCA score exhibits variability in microstructures with similar FVR, implying that the FVR is not the dominant factor. Further examination is needed to identify the microstructural parameters represented by the principal component scores. Based on the diminishing returns of later principal components, the surrogate model was trained using the first 3 principal components to describe the microstructure. These 3 components accounted for 88 % of the dataset variance.

Figure 5: Schematic depiction of two-point correlations for a simple discretized microstructure. (adapted from [52])

Figure 6: Matrix-phase auto-correlation corresponding to the microstructures shown in Fig. 3.
It should be noted that a set of parameters describing the microstructure (such as the FVR) might not necessarily capture the features that govern strength behavior. As an example, composite stiffness can generally be predicted well based purely on the FVR, while strength is highly dependent on local material inhomogeneity [57]. The validity of the
Figure 8: Analysis of principal component scores against the FVR for 1000 random microstructures.
Figure 7: Contribution of the principal components to the variance for a random set of 1000 microstructures.
chosen microstructure representation parameters must therefore be evaluated with respect to their expressiveness for both stiffness and strength prediction.
As an alternative to two-point statistics with PCA for dimensionality reduction, a learned microstructure encoding was implemented. An image of the microstructure (as seen in Fig. 3) was used as input to a CNN. The CNN reduced the dimensionality of the microstructure image to a pre-defined number of parameters (e.g., 3) and created a hidden state that was used as input to the transformer model. The weights of the CNN were trained simultaneously with the transformer network to minimize the model prediction error. The intended purpose of the learned microstructure encoding was to enable the automatic selection of the microstructural features that control the homogenized stiffness and strength response. An advantage of a learned encoding over two-point correlations could be that the CNN adapts better to the features governing composite strength, whereas dimensionality-reduced two-point statistics provide pre-defined parameters that are not necessarily optimal for capturing property-governing features.
The CNN architecture was loosely based on AlexNet [58]. Five convolutional blocks were used with a progressively increasing number of output channels. Maximum pooling layers were added in the second and fifth convolutional blocks for dimensionality reduction, and batch normalization was used in the first, second, and fifth blocks to stabilize training. The convolutional blocks were followed by a three-layer feed-forward neural network producing the desired number of outputs, with dropout layers after the feed-forward layers to prevent overfitting. Throughout the CNN, Rectified Linear Unit (ReLU) activation functions were used. Black-and-white microstructure images input to the CNN were scaled to 32\(\times\)32 pixels. An overview of the network architecture is given in Tab. 1, and a code sketch follows the table. To compare the efficacy of the CNN encoding with that of the dimensionality-reduced two-point statistics, an equal number of parameters was employed to characterize the microstructure, resulting in a CNN with 3 output parameters.
To analyze the microstructure features extracted by the CNN, the output values of the trained network are plotted against the FVR in Fig. 9 for 1000 random microstructures. All three output values show a clear correlation with the FVR, with larger output values for larger FVRs. This clear correlation shows that the CNN was able to extract the governing microstructure features. However, because all output values were related to the FVR, a single output value would likely have been sufficient for the training data used. Introducing more variability into the microstructure, such as varying microcell size and fiber diameter, non-constant fiber diameter, or differently shaped inclusions, will require a larger number of parameters to describe the microstructure features.
## 4 Surrogate Model Architecture and Training Procedure
In this effort, a transformer network was chosen as a surrogate to predict the history-dependent response of composite materials. While the original transformer [19] consisted of an encoder-decoder architecture, only the decoder was used to build the surrogate model, similar to the GPT models [23]. The decoder's masked self-attention (see Sec. 2)
\begin{table}
\begin{tabular}{c l c c} \hline \hline Layer nr & Type and nonlinearity & Input size & Output size \\ \hline 1 & Input & & 32 \(\times\) 32 \(\times\) 1 \\ 2 & Convolution (5\(\times\)5, 16 filters), Batch normalization, ReLU & 32 \(\times\) 32 \(\times\) 1 & 32 \(\times\) 32 \(\times\) 16 \\ 3 & Convolution (3\(\times\)3, 32 filters), Batch normalization, ReLU, Max pooling (2\(\times\)2) & 32 \(\times\) 32 \(\times\) 16 & 16 \(\times\) 16 \(\times\) 32 \\ 4 & Convolution (3\(\times\)3, 64 filters), Batch normalization, ReLU & 16 \(\times\) 16 \(\times\) 32 & 16 \(\times\) 16 \(\times\) 64 \\ 5 & Convolution (3\(\times\)3, 128 filters), Batch normalization, ReLU, Max pooling (2\(\times\)2) & 16 \(\times\) 16 \(\times\) 64 & 8 \(\times\) 8 \(\times\) 128 \\ 6 & Flatten & 8 \(\times\) 8 \(\times\) 128 & 8192 \\ 7 & Fully connected, ReLU, Dropout (0.3) & 8192 & 2048 \\ 8 & Fully connected, ReLU, Dropout (0.3) & 2048 & 1024 \\ 9 & Fully connected, Batch normalization & 1024 & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: CNN architecture used to extract microstructure information from images.
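A minimal _PyTorch_ sketch of the encoder in Tab. 1 is given below; the padding values and the exact placement of batch normalization are assumptions chosen to reproduce the tabulated layer sizes:

```python
# Sketch of the CNN microstructure encoder of Tab. 1 (layer ordering and
# padding are assumptions that reproduce the tabulated input/output sizes).
import torch
import torch.nn as nn

class MicrostructureCNN(nn.Module):
    def __init__(self, n_out=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
        )
        self.head = nn.Sequential(
            nn.Flatten(),                          # 8 * 8 * 128 = 8192
            nn.Linear(8192, 2048), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(2048, 1024), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(1024, n_out), nn.BatchNorm1d(n_out),
        )

    def forward(self, x):                          # x: (batch, 1, 32, 32)
        return self.head(self.features(x))
```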
prevents the network from seeing "future", i.e., yet unknown, strain values in the input sequence. This guarantees that the predicted stress is only a function of the deformation history up to the current strain increment.
Transformers require an embedding to convert the input and the target sequence to vectors matching the transformer dimension. In the input path, the strain sequence, the stress sequence, and the vector containing the microstructure information (either PCA scores or CNN outputs) were linearly transformed to match the transformer dimension \(d_{model}\). Two stacked Gated Residual Networks (GRNs) were used as encoder and decoder to embed the static (time-independent) microstructure parameters. The GRN was adapted from the _Temporal Fusion Transformer_ [38]. In addition to the primary input (strain or stress sequence), the GRN takes an optional context vector (in our case the microstructure information) as input. By using gating mechanisms, the GRN can filter out insignificant inputs and can be skipped entirely if not relevant to the output [38]. A schematic representation of the GRN architecture is shown in Fig. 10. After passing through a linear layer, input and context are added and an Exponential Linear Unit (ELU) activation [59] is applied. This is followed by a second linear layer, dropout, and a Gated Linear Unit (GLU) [60] gating mechanism:
\[\text{GLU}\left(\boldsymbol{\gamma}\right)=\sigma\left(\boldsymbol{W}_{1} \boldsymbol{\gamma}+\boldsymbol{b}_{1}\right)\odot\left(\boldsymbol{W}_{2} \boldsymbol{\gamma}+\boldsymbol{b}_{2}\right), \tag{25}\]
where \(\boldsymbol{\gamma}\) is the input to the GLU, \(\boldsymbol{W}_{1}\), \(\boldsymbol{W}_{2}\), \(\boldsymbol{b}_{1}\), and \(\boldsymbol{b}_{2}\) are the weights and biases, respectively, \(\sigma\left(\cdot\right)\) is the sigmoid activation function, and \(\odot\) is the element-wise Hadamard product [38]. A residual connection adds the GRN input to its output, allowing the GRN to be skipped entirely. Finally, the output is normalized. Residual connections have been shown to improve network training and performance and to allow for far deeper networks [43]. The input and output size of all GRNs was chosen equal to \(d_{model}\) (512), and the dense layers in the GRN were of dimension 2048.
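A minimal _PyTorch_ sketch of such a GRN block, with the GLU gating of Eq. 25, is shown below; the exact layer ordering of the original implementation [38] may differ:

```python
# Sketch of a GRN block (Eq. 25 and Fig. 10); layer ordering is an assumption.
import torch
import torch.nn as nn

class GRN(nn.Module):
    def __init__(self, d=512, d_hidden=2048, dropout=0.1):
        super().__init__()
        self.in_proj = nn.Linear(d, d_hidden)
        self.ctx_proj = nn.Linear(d, d_hidden, bias=False)  # static context
        self.out_proj = nn.Linear(d_hidden, d)
        self.gate = nn.Linear(d, 2 * d)    # produces the two GLU halves
        self.glu = nn.GLU(dim=-1)          # a * sigmoid(b) on the two halves
        self.dropout = nn.Dropout(dropout)
        self.norm = nn.LayerNorm(d)

    def forward(self, x, c=None):
        h = self.in_proj(x)
        if c is not None:                  # optional microstructure context
            h = h + self.ctx_proj(c)
        h = nn.functional.elu(h)
        h = self.dropout(self.out_proj(h))
        h = self.glu(self.gate(h))         # gating can suppress this branch
        return self.norm(x + h)            # residual: GRN can be skipped
```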
After the GRN encoders, the microstructure information was added again to the input path, followed by normalization and positional encoding. The encoded source (strain) and right-shifted target (stress) sequences were then passed through the transformer decoder. The right-shifted stress sequence required as decoder input was generated by inserting a zero vector at the first position of the stress sequence and truncating it at the second-to-last element (see the sketch below). After the transformer decoder, two stacked GRNs with the microstructure as context were used, followed by a linear transformation to generate the stress predictions. The neural network architecture is shown in Fig. 11. The model architecture was implemented in _PyTorch_ [61] using the _PyTorch Lightning_ interface. Surrogate model parameters are summarized in Tab. 2.
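The right shift of the target sequence can be sketched as follows (the batch \(\times\) time \(\times\) components tensor layout is an assumption):

```python
# Sketch: decoder input is the stress sequence shifted right by one step.
import torch

def right_shift(stress):                   # stress: (batch, T, n_stress)
    zeros = torch.zeros_like(stress[:, :1, :])   # zero vector at position 0
    return torch.cat([zeros, stress[:, :-1, :]], dim=1)  # drop last element
```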
With CNN microstructure encoding, the chosen network had a total of 66.3 million trainable parameters: 19 million for the CNN encoding, 25.2 million for the transformer decoder, 7.4 million for each of the two GRN encoders (including the linear transformations of the strain and stress sequences, respectively), and 7.3 million for the GRN decoder.
Following [19], the transformer model was trained using the Adam optimizer [62] with learning rate warmup. During training, the learning rate \(lr\) was initially increased linearly and subsequently decreased proportionally to the
Figure 9: Output values of the trained CNN for 1000 random microstructures with varying FVR.
inverse square root of the current step number [19]. The learning rate was calculated by:
\[lr=d_{model}^{-0.5}\min\left(step_{num}^{-0.5},step_{num}\cdot warmup_{steps}^{-1. 5}\right), \tag{26}\]
where \(d_{model}\) is the attention mechanism dimension, \(step_{num}\) is the current training step number, and \(warmup_{steps}\) is the number of steps with linearly increasing learning rate. As in [19], \(warmup_{steps}\) was set to 4000. Learning rate warmup can prevent divergent training in transformer models [63].
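Eq. 26 corresponds to the following schedule, sketched as a plain function that could, e.g., be wrapped in a _PyTorch_ `LambdaLR` scheduler:

```python
# Sketch of the warmup schedule of Eq. 26 (d_model = 512, warmup = 4000).
def lr_schedule(step_num, d_model=512, warmup_steps=4000):
    step_num = max(step_num, 1)            # guard against step 0
    return d_model ** -0.5 * min(step_num ** -0.5,
                                 step_num * warmup_steps ** -1.5)
```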
Target and source sequences were standardized to zero mean and unit variance before training to speed up the training process. The training sequences were truncated at random indices to let the transformer adapt to sequences of different lengths, resulting in sequence lengths between one and 100. As loss function, the Mean Squared Error (MSE) between the predicted and true values was used. Note that, due to this choice of loss function, the training process was dominated by high-stress data [14], so the relative prediction performance might decrease for smaller stresses. Since this effect is reduced by the normalization of the data, the loss function was deemed suitable.
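The preprocessing described above amounts to the following sketch (array shapes and names are assumptions):

```python
# Sketch: standardize a sequence to zero mean / unit variance and truncate
# it at a random length between 1 and 100 increments.
import numpy as np

def preprocess(seq, mean, std, rng):       # seq: (T, n_components) array,
    seq = (seq - mean) / std               # rng: np.random.Generator
    t = rng.integers(1, min(len(seq), 100) + 1)   # random truncation index
    return seq[:t]
```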
### Training the Surrogate Model on Homogeneous Data
To test the feasibility of a transformer network as a surrogate model for history-dependent plasticity, the model was initially trained on the response of a purely homogeneous elastoplastic material. Using the strain sequence generation described in Sec. 3.2 and a (semi-)analytical implementation of incremental J2-plasticity, a large amount of data could be generated at low computational cost. In this way, and using the matrix material parameters listed in Tab. 3, 500,000 stress-strain sequences with a maximum sequence length of 100 were generated. Half of the generated sequences consisted of random strain walks, while the remaining sequences were cyclic. During training, 80 % of the data were used for training and the remaining 20 % for model validation. The surrogate model architecture remained
\begin{table}
\begin{tabular}{l c} \hline \hline Model parameter & Size \\ \hline Transformer size (\(d_{model}\)) & 512 \\ Transformer feed-forward size & 2048 \\ Number of decoder layers & 6 \\ Number of attention heads & 8 \\ Dropout probability & 0.1 \\ GRN encoder / decoder size & 2048 \\ Number of stacked GRNs in encoder / decoder & 2 \\ GRN input / output size & 512 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Transformer architecture parameters.
Figure 10: Architecture of the GRN. (adapted from [38])
unchanged, except that the GRN context input (microstructure) and the microstructure addition step after the GRN encoders were removed. Training was performed for 800 epochs on four _NVIDIA RTX A6000_ GPUs with a batch size of 500. The total training time was approximately 24 hours.
Figure 11: Architecture of the transformer network to predict the history-dependent response of heterogeneous composites using a CNN to extract features from microstructure images.
After 800 epochs, the lowest validation MSE was \(5.86\times 10^{-4}\). Note that this MSE refers to the standardized dataset, not the actual stress prediction. Subsequently, an additional 500 random and 500 cyclic strain sequences were generated to test the model performance on previously unseen data. Using the trained network, the average Root-Mean-Square Error (RMSE) of the stress predictions with respect to the ground truth was calculated for the unscaled (non-standardized) data to be 0.804 MPa. Comparisons between the network predictions and the ground truth data can be seen in Fig. 12 for a cyclic strain sequence and in Fig. 13 for a random strain sequence. The history-dependence of the material response for both cyclic (Fig. 12) and random (Fig. 13) strain sequences was correctly predicted by the network, with only minor visible differences between the predicted and true values.
### Adapting the Pre-Trained Surrogate Model for Homogenization
Following the training of the surrogate model for a homogeneous elastoplastic material, a total of 330,000 homogenized stress-strain sequences were generated following the homogenization procedure in Sec. 3.2. Each strain sequence had a length of 100 strain increments. Two-dimensional SVEs modeling the transversal behavior of continuously reinforced composites under plane strain conditions were used. The FVR of the generated SVEs was varied between 0.2 and 0.5 (the realized values vary somewhat, as the target value cannot always be achieved by randomly placing circular fibers inside a square microcell), with fixed microcell size and fiber diameter. Around 40 % of the generated sequences were random walks; the remaining sequences had cyclic loading paths. The material parameters were kept constant. An overview of the material parameters and SVE properties used to generate the training data is
Figure 13: Random input strain sequence (13a) and comparison between surrogate model prediction and ground truth (13b).
listed in Tab. 3. Simulating 330,000 strain sequences took approximately 4 weeks on two _AMD EPYC 7513 32-Core_ processors.
Using the generated dataset, two surrogate models with different microstructure representations (Sec. 3.3) were then trained. One model used a CNN with 3 output parameters, while the other took as input the first 3 PCA scores of the respective microstructure. Both models were initialized with the pre-trained weights from the homogeneous model and subsequently trained for 50 epochs. Because large batch sizes commonly degrade generalization performance [64], a smaller batch size of 100 was used. Furthermore, the learning rate was reduced by an order of magnitude to avoid "forgetting" the previously learned information. The other training parameters remained unchanged from those listed in Sec. 4.1.
After 50 training epochs, the lowest MSE on the validation dataset (standardized data) was \(2.30\times 10^{-3}\), reached after epoch 39, when using the CNN for microstructure feature extraction, and \(3.04\times 10^{-3}\), reached after epoch 15, when using two-point correlations. The longer training time required to reach the minimum MSE with the CNN can be explained by its randomly initialized weights, whereas the transformer weights were initialized from pre-training; during training, therefore, mainly the CNN parameters had to be optimized. The larger MSE compared to homogeneous data (minimum homogeneous validation MSE \(8.14\times 10^{-5}\)) shows the difficulty the network has in adapting to heterogeneous data. This was expected to a certain degree, as the homogenization problem represents inherently complex compound elastoplastic behavior, with the fibers storing significant amounts of elastic energy while plastic deformation occurs in the matrix phase [16].
To evaluate the impact of using pre-trained weights, the surrogate model using PCA was also trained from random weight initialization. The minimum validation loss with random initialization, \(3.34\times 10^{-3}\), was reached after 46 epochs, representing a considerably longer training time and a marginally worse minimum loss.
A test dataset containing 3000 strain sequences for random microstructures was generated to validate the performance of the trained surrogate models on previously unseen data. The RMSE between stress predictions and FEM results on the test data was 2.16 MPa using the CNN, 1.51 MPa using PCA with pre-training, and 2.63 MPa using PCA with random weight initialization. Stresses in the test dataset ranged between -387.5 MPa and 360.9 MPa. Both microstructure representation methods successfully related microstructure information to homogenized properties, with a somewhat lower prediction error when using PCA. Random weight initialization also resulted in worse performance than using the pre-trained weights from homogeneous training. The prediction error rises incrementally with increasing sequence length, due to the transformer's approach of predicting one time step at a time, relying on the preceding data (see Fig. 14).
Fig. 15 compares cyclic FEM sequences with the surrogate model predictions for both microstructure encoding methods (CNN in Fig. 15a, two-point statistics in Fig. 15b). Generally, good agreement between the FEM and predicted curves can be seen. The surrogate models correctly capture the changing stiffness with varying FVR and accurately predict the material history-dependence.
Predictions of the surrogate model using two-point correlations for a random strain sequence are depicted in Fig. 16. Generally, a good agreement between surrogate model predictions and FEM results can be seen, including for the out-of-plane stress \(\sigma_{zz}\) component.
However, several strain sequences from the test dataset, particularly random walks, showed relatively large prediction errors. The fact that the majority of training sequences were cyclic loading paths might contribute to these outliers. Furthermore, as transformers commonly require large amounts of data [33], increasing the number of training sequences could improve prediction performance. In this effort, a relatively large model (around 50 million parameters) was selected. Large models, when trained on small datasets, are prone to overfitting, leading to poor
\begin{table}
\begin{tabular}{l c} \hline \hline SVE size (\(\mu\)m) & 70 \\ Fiber diameter (\(\mu\)m) & 7 \\ FVR (-) & 0.2 – 0.5 \\ Young’s modulus matrix (MPa) & 1000 \\ Poisson’s ratio matrix (-) & 0.4 \\ Yield stress matrix (MPa) & 20 \\ Young’s modulus fiber (GPa) & 200 \\ Poisson’s ratio fiber (-) & 0.18 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Material and microcell parameters used to generate the training data.
generalization performance. Optimizing the network size and tuning the hyperparameters could further improve the prediction performance of the surrogate models, reduce the training duration, and lower the amount of required training data.
Three principal components and three output parameters of a CNN were used to encode the microstructure information input to the surrogate model. However, it was shown (see Sec. 3.3) that a single parameter, namely the encoded FVR, would likely have been sufficient for the analyzed microstructures. Increasing the complexity of the training microstructures, for example by varying the RVE size and fiber diameter or introducing non-circular inclusions, will require more parameters to encode the RVE features. Similarly, higher-fidelity damage models accounting for matrix damage will require more refined structure descriptions to capture fiber clusters and voids that initiate and accumulate local damage.
Comparing computational cost, solving a single microstructure subject to a random strain sequence of 100 increments using parallel FEM on two CPUs took approximately 40 s. In contrast, predicting the response of 1000 microcells on an _NVIDIA RTX A6000_ GPU using the surrogate model took less than 25 s. Due to the fast batch processing of the neural network, major time advantages can especially be achieved when predicting a large number of sequences, as would be the case in structural FE\({}^{2}\) analysis.
### Testing Prediction Performance on an Unseen Microstructure
To gain additional insight into the knowledge assimilated by the neural network and to test the model's robustness, a completely new microstructure containing a rectangular inclusion (see Fig. 17) was simulated under cyclic loading in the \(x\)- and \(y\)-directions using FEM, and the results were compared with the network predictions. The rectangular inclusion was centered inside the microcell with a FVR of 0.2 and a width (\(x\)) to height (\(y\)) ratio of \(0.25:0.8\). The elongated inclusion shape introduces anisotropy in the response (see Fig. 18). Because the surrogate models were trained only on circular inclusions, the prediction performance for new microstructures gives information about the generalizability of the models.
Comparing the simulated data with the neural network predictions, both microstructure encoding methods were unable to capture the microcell's anisotropic response. Instead, the surrogate models predicted almost identical responses for transversal and longitudinal loading. This was expected, as the networks were trained on mostly isotropic data and their predictions were predominantly based on the FVR. However, the CNN was able to extract information about the FVR even for a completely unseen microstructure (see Fig. 18a). Using two-point statistics, the transversal response of the microstructure was captured relatively well (see Fig. 18b), with a larger error for longitudinal loading. This shows that the surrogate models learned an FVR-based homogenization for elastoplastic
Figure 14: Average RMSE as a function of number of strain increments using two-point correlation microstructure encoding.
composites. Additional training on a wider variety of microstructures would allow the models to generalize and give improved predictions for a wide range of inclusion geometries.
## 5 Concluding Remarks
A surrogate model to predict the history-dependent homogenized response of elastoplastic composite SVEs, based on an NLP transformer neural network, has been presented. A decoder-only architecture with GRNs to encode time-dependent (strain, stress) and static (microstructure) data was chosen. Features were generated from images
Figure 15: Comparison between surrogate model prediction and FEM for cyclic loading paths at three different FVRs using CNN (15a) and PCA (15b).
of the material microstructure comparing (i) a learned auto-encoding based on a CNN and (ii) two-point correlation functions using PCA for dimensionality reduction.
After training the developed network using random strain walks and cyclic loading paths, the model successfully predicted the history-dependent response of previously unseen composite SVEs. Furthermore, both microstructure representation techniques were able to extract parameters from the microstructure that governed the material response.
Considerable savings in computational cost for microstructure homogenization compared to FEM were demonstrated. Efficiency gains compared to FE\({}^{2}\) make UQ of composite structures with randomness in the underlying constituent distribution attainable. The transformer surrogate model could be used for scale transition to estimate the local
Figure 16: Random input strain sequence (16a) and comparison between surrogate model prediction using PCA microstructure representation and ground truth (16b).
homogenized material response based on the micro-scale constituent composition and included in a Monte Carlo (MC) or Quasi-Monte Carlo (QMC) loop to sample the response distribution.
The current implementation was restricted to simple two-dimensional composite SVEs of fixed size and fiber diameter and fixed constituent properties. Having shown the feasibility of using a transformer surrogate for homogenization, the model will be extended to account for more complex microstructures, such as randomly sized SVEs and non-circular inclusions, and varying material parameters. Extension to three-dimensional microstructures is possible by using a three-dimensional CNN or three-dimensional two-point correlations to encode the microstructure.
In [2], the authors implement a data-driven material discovery approach using feature importance scores of microstructure parameters to more efficiently cover the training space, thereby escaping the curse of dimensionality. Following their approach, the amount of required training data to extend the transformer model to more complex microstructures could be reduced by carefully designing experiments based on feature importance. Furthermore, physics-informed loss functions could be used to decrease the amount of required training data [65].
## Funding Sources
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Figure 17: Microcell with rectangular inclusion.
2303.06516 | Efficient Computation of Shap Explanation Scores for Neural Network
Classifiers via Knowledge Compilation | The use of Shap scores has become widespread in Explainable AI. However,
their computation is in general intractable, in particular when done with a
black-box classifier, such as a neural network. Recent research has unveiled
classes of open-box Boolean Circuit classifiers for which Shap can be computed
efficiently. We show how to transform binary neural networks into those
circuits for efficient Shap computation. We use logic-based knowledge
compilation techniques. The performance gain is huge, as we show in the light
of our experiments. | Leopoldo Bertossi, Jorge E. Leon | 2023-03-11T23:33:43Z | http://arxiv.org/abs/2303.06516v3
# Opening Up the Neural Network Classifier for Shap Score Computation
###### Abstract
We address the problem of efficiently computing Shap explanation scores for classifications with machine learning models. With this goal, we show the transformation of binary neural networks (BNNs) for classification into deterministic and decomposable Boolean circuits, for which knowledge compilation techniques are used. The resulting circuit is treated as an open-box model, to compute Shap scores by means of a recent efficient algorithm for this class of circuits. Detailed experiments show a considerable gain in performance in comparison with computing Shap directly on the BNN treated as a black-box model.
## 1 Introduction
In recent years, as more sophisticated machine learning (ML) models have emerged and their uses have become widespread, there has been a growing demand for methods to explain and interpret the results they produce; for example, explanations for why a loan application is rejected, why a medication is recommended, or why a candidate is selected for a job. In this work, we concentrate on explanations for binary classification models that assign one of two labels to the inputs, say \(0\) or \(1\).
Explanations come in different forms and can be obtained through different approaches. A common one assigns _attribution scores_ to the feature values associated with an input that goes through an ML-based model, to _quantify_ their relevance for the obtained outcome. We concentrate on _local_ scores, i.e. associated with a particular input, as opposed to a global score that indicates the overall relevance of a feature.
A popular local score is Shap [1], which is based on the Shapley value that has been introduced and used in coalition game theory and practice for a long time [14, 15]. Another attribution score that has recently been investigated in [1, 16] is Resp, the responsibility score [1] associated with _actual causality_ [17]. In this work we concentrate on the Shap score, but the issues investigated here would also be interesting for other scores.
Shap scores can be computed with a black-box or an open-box model [1]. With the former, we do not know or use its internal components, only its input/output relation; this is the most common approach. In the latter case, we have access to its internal structure and components, and we can use them for score computation. It is common to consider neural-network-based models as black-box models, because their internal gates and structure may be difficult to understand or process when it comes to explaining classification outputs. However, a decision-tree model, due to its much simpler structure, is considered open-box for the same purpose.
Even for binary classification models, the complexity of Shap computation is provably hard, actually \(\#P\)-hard for several kinds of binary classification models, independently of whether the internal components of the model are used in the computation [1, 16, 17]. However, there are classes of classifiers for which, using the model's components and structure, the complexity of Shap computation can be brought down to polynomial time [1, 16, 17].
A polynomial time algorithm for Shap computation with _deterministic and decomposable Boolean circuits_ (dDBCs) was presented in [1]. From this result, the tractability of Shap computation can be obtained for a variety of Boolean circuit-based classifiers and classifiers that can be represented as (or compiled into) them. In particular, this holds for _Ordered Binary Decision Diagrams_ (OBDDs) [15], decision trees, binary neural networks (BNNs), and other established classification models that can be compiled into OBDDs [14, 13]. Similar results can be obtained for _Sentential Decision Diagrams_ (SDDs) [16], which can be seen as dDBCs and form a convenient _knowledge compilation_ target language [13, 12]. In [12], through a different approach, tractability of Shap computation was obtained for a collection of classifiers that intersects with that in [1].
In this work, we concentrate on explicitly developing this approach to the efficient computation of Shap for _binary neural networks_ (BNNs). For this, and inspired by [14], a BNN is transformed into a dDBC using techniques from _knowledge compilation_[13], an area that investigates the transformation of (usually) propositional theories into an equivalent one with a
canonical syntactic form that has good computational properties, e.g. tractable model counting. The compilation may incur a relatively high computational cost [15, 16], but it may still be worth the effort when a particular property will be checked often, as is the case of explanations for the same BNN.
More specifically, we describe in detail how a BNN is first compiled into a propositional formula in Conjunctive Normal Form (CNF), which, in its turn, is compiled into an SDD, which is finally compiled into a dDBC. We show how \(\mathsf{Shap}\) is computed on the resulting circuit via the efficient algorithm in [1]. This compilation is performed once, and is independent from any input to the classifier. The final circuit can be used to compute \(\mathsf{Shap}\) scores for different input entities.
We also make experimental comparisons between this open-box and circuit-based \(\mathsf{Shap}\) computation and that based directly on the BNN treated as a black-box, i.e. using only its input/output relation. We perform comparisons in terms of computation time and alignment of \(\mathsf{Shap}\) scores.
For our experiments, we consider real estate as an application domain, where house prices depend on certain features, which we appropriately binarize.1 The problem consists in classifying property blocks, represented as entity records of thirteen feature values, as _high-value_ or _low-value_. This is a binary classification problem for which a BNN is first learnt and then used.
Footnote 1: We use the California Housing Prices dataset available at [https://www.kaggle.com/datasets/camnugent/california-housing-prices](https://www.kaggle.com/datasets/camnugent/california-housing-prices).
To the best of our knowledge, our work is the first to use knowledge compilation techniques for efficiently computing \(\mathsf{Shap}\) scores, and the first to report experiments with the polynomial time algorithms for \(\mathsf{Shap}\) computation on binary circuits. We confirm that \(\mathsf{Shap}\) computation via the dDBC vastly outperforms the direct \(\mathsf{Shap}\) computation on the BNN. The scores obtained are also fully aligned, as expected, since the dDBC represents the BNN.
Compilation of binary classifiers, in particular BNNs, into OBDDs was used in [15] to provide different kinds of explanations for outcomes, but not for \(\mathsf{Shap}\) computation or any other kind of attribution score. In this work we concentrate only on explanations based on \(\mathsf{Shap}\) scores. There are several other explanation mechanisms for ML-based classification and decision systems in general, and also specific ones for neural networks. C.f. [17] and [14] for surveys.
This paper is structured as follows. Section 2 contains background on \(\mathsf{Shap}\) and Boolean circuits. Section 3 show in detail, by means of a running example, the kind of compilation of BNNs into dDBCs we use for the experiments. Section 4 presents the experimental setup, and the results of our experiments with \(\mathsf{Shap}\) computation. In Section 5 we draw some conclusions. Appendices A and B provide additional technical details for Section 3.
## 2 Preliminaries
In coalition game theory and its applications, the Shapley value is an established measure of the contribution by players to a shared wealth that is modelled as a game function. Given a set of players \(S\) and a game function \(G:\mathcal{P}(S)\rightarrow\mathbb{R}\), i.e. mapping subsets of players to real numbers, the general form of the Shapley value for a player \(p\in S\) is:
\[\mathit{Shapley}(S,G,p):=\sum_{s\subseteq S\setminus\{p\}}\frac{|s|!\,(|S|-|s|-1)!}{|S|!}\,(G(s\cup\{p\})-G(s)). \tag{1}\]
It quantifies the contribution of \(p\) to \(G\). It emerges as the only measure that enjoys certain desired properties [12]. Here, all possible subsets of \(S\), and their complements, are considered; this value is thus the average of the differences between having \(p\) or not in a subteam. To apply the Shapley value, one defines a game function \(G\).
Now, consider a fixed entity \(\mathbf{e}=\langle F_{1}(\mathbf{e}),\dots,F_{N}(\mathbf{e})\rangle\) subject to classification. It has values \(F_{i}(\mathbf{e})\) for features \(\mathcal{F}=\{F_{1},\dots,F_{N}\}\). These values are \(0\) or \(1\) for binary features, which is our case. In [15, 16], the Shapley value in (1) is applied with \(\{F_{1}(\mathbf{e}),\dots,F_{N}(\mathbf{e})\}\) as the set of players and the game function \(\mathcal{G}_{\mathbf{e}}(s):=\mathbb{E}(L(\mathbf{e}^{\prime})\mid\mathbf{e} ^{\prime}_{s}=\mathbf{e}_{s})\), giving rise to the \(\mathsf{Shap}\) score. Here, \(\mathbf{e}_{s}\) is the projection (or restriction) of \(\mathbf{e}\) on (to) the subset \(s\) of features, and \(L\) is the label function associated to the classifier, \(0\) or \(1\) in our case. The \(\mathbf{e}^{\prime}\) inside the expected value is an entity whose values coincide with those of \(\mathbf{e}\) for the features in \(s\). Accordingly, for a feature \(F\in\mathcal{F}\) and an entity \(\mathbf{e}\), \(\mathsf{Shap}\) becomes:
\[\mathsf{Shap}(\mathcal{F},\mathcal{G}_{\mathbf{e}},F)=\sum_{s\subseteq\mathcal{F}\setminus\{F\}}\frac{|s|!\,(|\mathcal{F}|-|s|-1)!}{|\mathcal{F}|!}\left[\mathbb{E}(L(\mathbf{e}^{\prime})\mid\mathbf{e}^{\prime}_{s\cup\{F\}}=\mathbf{e}_{s\cup\{F\}})-\mathbb{E}(L(\mathbf{e}^{\prime})\mid\mathbf{e}^{\prime}_{s}=\mathbf{e}_{s})\right]. \tag{2}\]
The expected value is defined on the basis of an underlying probability distribution on the entity population. \(\mathsf{Shap}\) quantifies the contribution of feature value \(F(\mathbf{e})\) to the outcome label.
In order to compute \(\mathsf{Shap}\), we only need function \(L\), and none of the internal components of the classifier. Given that all possible subsets of features appear in its definition, \(\mathsf{Shap}\) is bound to be hard to compute. Actually, for some classifiers, its computation may become \(\#P\)-hard (c.f. [14] for some cases). However, in [14], it is shown that \(\mathsf{Shap}\) can be computed in polynomial time for every _deterministic and decomposable Boolean circuit_ (dDBC) used as a classifier. The circuit's internal structure is used in the computation (c.f. Section 4).
Figure 1 shows a Boolean circuit that can be used as a binary classifier, with input features \(x_{1},x_{2},x_{3}\). Their binary values are input at the bottom nodes, and then propagated upwards through the Boolean gates. The binary label is read off from the top node. This circuit is _deterministic_ in that, for every \(\vee\)-gate, at most one of its inputs is \(1\) when the output is \(1\). It is _decomposable_ in that, for every \(\wedge\)-gate, the inputs do not share features. The dDBC in the Figure is also _smooth_,
in that sub-circuits that feed a same \(\vee\)-gate share the same features. It has _fan-in_ at most two, in that every \(\wedge\)-gate and \(\vee\)-gate have at most two inputs. We denote this subclass of dDBCs by dDBCSFi(2).
In [1] it is established that Shap can be computed in polynomial time for dDBCSFi(2)-classifiers, assuming that the underlying probability distribution is the uniform, \(P^{u}\), or the product distribution, \(P^{\times}\). They are as follows for binary features: \(P^{u}(\mathbf{e}):=\frac{1}{2^{N}}\) and \(P^{\times}(\mathbf{e}):=\Pi_{i=1}^{N}p_{i}(F_{i}(\mathbf{e}))\), where \(p_{i}(v)\) is the probability of value \(v\in\{0,1\}\) for feature \(F_{i}\).
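For concreteness, formula (2) under the uniform distribution can be evaluated by brute force for a black-box classifier over a small number of binary features; the following sketch is exponential in \(N\) and serves only to make the definition concrete:

```python
# Brute-force Shap of Eq. (2) under the uniform distribution; L is any
# black-box binary classifier over N binary features, e the input entity
# (tuple of 0/1 values), F the index of the feature of interest.
from itertools import combinations
from math import factorial

def shap_uniform(L, e, F):
    N = len(e)
    others = [i for i in range(N) if i != F]

    def expect(fixed):                 # E[L(e') | e'_s = e_s], s = fixed
        free = [i for i in range(N) if i not in fixed]
        total = 0
        for m in range(2 ** len(free)):
            e2 = list(e)
            for j, i in enumerate(free):
                e2[i] = (m >> j) & 1   # enumerate all completions of e_s
            total += L(tuple(e2))
        return total / 2 ** len(free)

    score = 0.0
    for k in range(N):
        w = factorial(k) * factorial(N - k - 1) / factorial(N)
        for s in combinations(others, k):
            score += w * (expect(set(s) | {F}) - expect(set(s)))
    return score
```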
## 3 Compiling BNNs into dDBCs
In order to compute Shap with a BNN, we convert the latter into a dDBC. Shap scores will be computed in the end with the resulting circuit and the polynomial time algorithm in [1]. The transformation follows the following path, which we briefly describe next:
\[\begin{array}{ccccccc}\text{BNN}&\underset{\text{(a)}}{\longrightarrow}&\text{CNF}&\underset{\text{(b)}}{\longrightarrow}&\text{SDD}&\underset{\text{(c)}}{\longrightarrow}&\text{dDBC}\end{array} \tag{3}\]
A BNN can be converted into a CNF formula [1, 2], which, in its turn, can be converted into an SDD [2, 3]. It is also known that SDDs can be compiled into a formula in d-DNNF [deterministic and decomposable negated normal form] [2], which forms a subclass of dDBCs. Actually, the resulting dDBC in (3) is eventually converted in polynomial time into a dDBCSFi(2).
Some of the steps in (3) may not be polynomial-time transformations, which we will discuss in more technical terms later in this section. However, we can already claim at this stage that: (a) any exponential cost of a transformation is kept under control by a usually small parameter; and (b) the resulting dDBCSFi(2) is meant to be used multiple times, to explain different and multiple outcomes, so it may be worth taking a one-time, relatively high transformation cost. A good reason for our transformation path is the availability of implementations we can take advantage of.2
Footnote 2: The path in (3) is not the only way to obtain a dDBC. For example, [1] describe a conversion of BNNs into OBDDs, which can also be used to obtain dDBCs. However, the asymptotic time complexity is basically the same.
We will describe, explain and illustrate the conversion path (3) by means of a running example with a simple BNN, which is not the BNN used for our experiments. For them, we used a BNN with one hidden layer with 13 gates.
**Example 1**.: The BNN in Figure 2 has hidden neuron gates \(h_{1},h_{2},h_{3}\), an output gate \(o\), and three input gates, \(x_{1},x_{2},x_{3}\), that receive binary values. The latter represent, together, an input entity \(\bar{x}=\langle x_{1},x_{2},x_{3}\rangle\) that is being classified by means of a label returned by \(o\). Each gate \(g\) is activated by means of a _step function_ of the form:
\[\phi_{g}(\bar{i})=\text{\emph{sp}}(\bar{w}_{g}\bullet\bar{i}+b_{g})\ :=\left\{\begin{array}{ll}1&\text{if}\ \bar{w}_{g}\bullet\bar{i}+b_{g}\geq 0,\\ -1&\text{otherwise},\end{array}\right. \tag{4}\]
which is parameterized by a vector of binary weights \(\bar{w}_{g}\) and a real-valued constant bias \(b_{g}\).3 The \(\bullet\) is the inner vector product. For technical, non-essential reasons, for the gates, we use \(1\) and \(-1\) instead of \(0\) and \(1\). However, we assume we have a single output gate, for which the activation function returns instead \(1\) or \(0\), for _true_ or _false_, respectively.
Footnote 3: It is also possible to use more complex activation functions, such as sigmoid and softmax functions, as long as the output is binarized.
For example, \(h_{1}\) is _true_, i.e. outputs \(1\), for an input \(\bar{x}=(x_{1},x_{2},x_{3})\) iff \(\bar{w}_{h_{1}}\bullet\bar{x}+b_{h_{1}}=(-1)\times x_{1}+(-1)\times x_{2}+1 \times x_{3}+0.16\geq 0\); otherwise, \(h_{1}\) is _false_, i.e. it returns \(-1\). Similarly, the output gate \(o\) is _true_, i.e. returns label \(1\), for a binary input \(\bar{h}=(h_{1},h_{2},h_{3})\) iff \(\bar{w}_{o}\bullet\bar{h}+b_{o}=1\times h_{1}+1\times h_{2}+(-1)\times h_{3}-0.01\geq 0\), and \(0\) otherwise. \(\Box\)
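As a sketch, the gates of this example can be evaluated as follows; only \(h_{1}\) and \(o\) have their parameters spelled out in the text, so the parameters of \(h_{2},h_{3}\) (to be read off Figure 2) are left symbolic:

```python
# Step-activated gates of Example 1; parameters of h2 and h3 must be read
# off Fig. 2 and are not repeated here.
import numpy as np

def step(w, b, x, on=1, off=-1):           # Eq. (4)
    return on if np.dot(w, x) + b >= 0 else off

x = np.array([-1, -1, 1])                  # a sample input entity
h1 = step([-1, -1, 1], 0.16, x)            # 1 + 1 + 1 + 0.16 >= 0, so h1 = 1
# h2 = step(w_h2, b_h2, x); h3 = step(w_h3, b_h3, x)   # weights from Fig. 2
# The output gate returns labels 1/0 instead of 1/-1:
# o = step([1, 1, -1], -0.01, [h1, h2, h3], on=1, off=0)
```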
The first step, (a) in (3), consists in representing the BNN as a CNF formula, i.e. as a conjunction of disjunctions of _literals_ (atomic formulas or their negations). For this, we adapt the approach in [2], used there to verify properties of BNNs. Contrary to them, we avoid the use of auxiliary variables, since their subsequent elimination conflicts with our need for determinism.
Each gate of the BNN is represented by a propositional formula, initially not necessarily in CNF, which, in its turn,
Figure 1: A dDBC.
Figure 2: A BNN.
is used as one of the inputs to gates next to the right. In this way, we eventually obtain a defining formula for the output gate. The formula is converted into CNF. The participating propositional variables are logically treated as _true_ or _false_, even if they take numerical values \(1\) or \(-1\), resp.
**Example 2**.: (example 1 cont.) Consider gate \(h_{1}\), with parameters \(\bar{w}=\langle-1,-1,1\rangle\) and \(b=0.16\), and input \(\bar{i}=\langle x_{1},x_{2},x_{3}\rangle\). An input \(x_{j}\) is said to be _conveniently instantiated_ if it has the same sign as \(w_{j}\), thus contributing to a larger number on the LHS of the comparison in (4). E.g., this is the case for \(x_{1}=-1\). In order to represent its output variable, also denoted \(h_{1}\), as a propositional formula, we first compute the number, \(d\), of conveniently instantiated inputs that are necessary and sufficient to make the LHS of the comparison in (4) greater than or equal to \(0\). This is the (only) case when \(h_{1}\) becomes _true_; otherwise, it is _false_. This number can be computed in general by [1]:
\[d=\left\lceil(-b+\sum_{j=1}^{\left|\bar{i}\right|}w_{j})/2\right\rceil+\mbox{\# of negative weights in }\bar{w}. \tag{5}\]
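A minimal sketch of Eq. (5), checked against the \(h_{1}\) gate treated next:

```python
# Sketch of Eq. (5): the number d of conveniently instantiated inputs.
from math import ceil

def d_convenient(w, b):
    return ceil((-b + sum(w)) / 2) + sum(1 for wj in w if wj < 0)

assert d_convenient([-1, -1, 1], 0.16) == 2    # h1 of the running example
```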
In the case of \(h_{1}\), with 2 negative weights: \(d=\lceil(-0.16+(-1-1+1))/2\rceil+2=2\). With this, we can impose conditions on two input variables with the right sign at a time, considering all possible convenient pairs. For \(h_{1}\) we obtain its condition to be true:
\[h_{1}\;\longleftrightarrow\;(-x_{1}\wedge-x_{2})\vee(-x_{1}\wedge x_{3}) \vee(-x_{2}\wedge x_{3}). \tag{6}\]
This is a DNF formula, directly obtained by considering all possible convenient pairs (which is already better than trying all cases of three variables at a time). However, there is a more expedite, iterative method that still uses the number of convenient inputs. In order to convey the bigger picture, we postpone the detailed description of this method (which is also used in our experiments) to Appendix A. Using this algorithm, we obtain an equivalent formula defining \(h_{1}\):
\[h_{1}\;\longleftrightarrow\;(x_{3}\wedge(-x_{2}\vee-x_{1}))\vee(-x_{2} \wedge-x_{1}). \tag{7}\]
Similarly, we obtain defining formulas for gates \(h_{2}\) and \(h_{3}\), and \(o\): (for all of them, \(d=2\))
\[h_{2} \longleftrightarrow(-x_{3}\wedge(-x_{2}\vee-x_{1}))\vee(-x_{2} \wedge-x_{1}),\] \[h_{3} \longleftrightarrow(x_{3}\wedge(x_{2}\lor x_{1}))\vee(x_{2} \wedge x_{1}),\] \[o \longleftrightarrow(-h_{3}\wedge(h_{2}\lor h_{1}))\vee(h_{2} \wedge h_{1}). \tag{8}\]
Replacing the definitions of \(h_{1},h_{2},h_{3}\) into (8), we finally obtain:
\[o\longleftrightarrow(-[(x_{3}\wedge(x_{2}\lor x_{1}))\vee(x_{2 }\wedge x_{1})]\wedge\] \[\qquad([(-x_{3}\wedge(-x_{2}\vee-x_{1}))\vee(-x_{2}\wedge-x_{1})]\lor\] \[\qquad[(x_{3}\wedge(-x_{2}\vee-x_{1}))\vee(-x_{2}\wedge-x_{1})])\lor\] \[\qquad([(-x_{3}\wedge(-x_{2}\vee-x_{1}))\vee(-x_{2}\wedge-x_{1})]\wedge\] \[\qquad[(x_{3}\wedge(-x_{2}\vee-x_{1}))\vee(-x_{2}\wedge-x_{1})]). \tag{9}\]
The final part of step (a) in path (3) requires transforming this formula into CNF. In this example, it can be taken straightforwardly into CNF.4 The resulting CNF formula is, in its turn, simplified into a shorter and simpler CNF formula by means of the _Confer_ SAT solver [1]. For this example, the simplified CNF formula is as follows:
Footnote 4: For our experiments, we programmed a simple algorithm that does this job, while making sure the generated CNF does not grow too much (c.f. Appendix A).
\[o\;\longleftrightarrow\;(-x_{1}\vee-x_{2})\wedge(-x_{1}\vee-x_{3})\wedge(- x_{2}\vee-x_{3}). \tag{10}\]
Having a CNF formula will be convenient for the next conversion steps along path (3). \(\Box\)
Following with step (b) along path (3), the resulting CNF formula is transformed into a _Sentential Decision Diagram_ (SDD) [1, 2], which, as a particular kind of _decision diagram_ [1], is a directed acyclic graph. Like the popular OBDDs [1], which SDDs generalize, they can be used to represent general Boolean formulas, in particular propositional formulas (without necessarily being _per se_ propositional formulas).
**Example 3**.: (example 2 cont.) Figure 3(a) shows an SDD, \(\mathcal{S}\), to be used for illustration. (C.f. [1, 1] for precise definitions.) An SDD has different kinds of nodes. Those represented with encircled numbers are _decision nodes_ [2], e.g. 1 and 3, that consider alternatives for the inputs (in essence, disjunctions). There are also nodes called _elements_. They are labeled with constructs of the form \([\ell_{1}|\ell_{2}]\), where \(\ell_{1},\ell_{2}\), called the _prime_ and the _sub_, resp., are Boolean literals, e.g. \(x_{1}\) and \(\neg x_{2}\), including \(\top\) and \(\bot\), for \(1\) or \(0\), resp. E.g. \([\neg x_{2}|\top]\) is one of them. The _sub_ can also be a pointer, \(\bullet\), with an edge to a decision node. \([\ell_{1}|\ell_{2}]\) represents two conditions that have to be satisfied simultaneously (in essence, a conjunction). An element without \(\bullet\) is a _terminal_.
An SDD represents (or defines) a total Boolean function \(F_{\mathcal{S}}:\;\langle x_{1},x_{2},x_{3}\rangle\in\{0,1\}^{3}\mapsto\{0,1\}\). For example, \(F_{\mathcal{S}}(0,1,1)\) is evaluated by following the graph downwards. Since \(x_{1}=0\), we descend to the right; next, via node 3 underneath, with \(x_{2}=1\), we reach the instantiated leaf node labeled with \([1|0]\), a "conjunction", with the second
Figure 3: An SDD (a) and a vtree (b).
component due to \(x_{3}=1\). We obtain \(F_{\mathcal{S}}(0,1,1)=0\). \(\Box\)
In SDDs, the orders of occurrence of variables in the diagram must be compliant with a so-called _vtree_ (for "variable tree").5 The connection between a vtree and an SDD refers to the compatibility between the partitions \([\mathit{prime}|sub]\) and the tree structure (c.f. Example 4 below). Depending on the chosen vtree, substructures of an SDD can be better reused when representing a Boolean function, e.g. a propositional formula, which becomes important to obtain a compact representation. An important feature of SDDs is that they can easily be combined via propositional operations, resulting in a new SDD [12].
Footnote 5: Extending OBDDs, which have special kinds of vtrees that capture the condition that variables in a path must always appear in the same order. This generalization makes SDDs much more succinct than OBDDs [13, 14].
A vtree for a set of variables \(\mathcal{V}\) is a binary tree that is full, i.e. every node has \(0\) or \(2\) children, and ordered, i.e. the children of a node are totally ordered, and there is a bijection between the set of leaves and \(\mathcal{V}\) [1].
**Example 4**.: (example 3 cont.) Figure 3(b) shows a vtree, \(\mathcal{T}\), for \(\mathcal{V}=\{x_{1},x_{2},x_{3}\}\). Its leaves, \(0,2,4\), show their associated variables in \(\mathcal{V}\). The SDD \(\mathcal{S}\) in Figure 3(a) is compatible with \(\mathcal{T}\). Intuitively, the variables at \(\mathcal{S}\)'s terminals, when they go upwards through the decision nodes (the encircled numbers), also go upwards through the corresponding nodes \(n\) in \(\mathcal{T}\). (C.f. [16, 17] for a precise, recursive definition.)
The SDD \(S\) can be straightforwardly represented as a propositional formula by interpreting decision points as disjunctions, and elements as conjunctions, obtaining \([x_{1}\wedge((-x_{2}\wedge-x_{3})\vee(x_{2}\wedge\bot))]\vee[-x_{1}\wedge((x_ {2}\wedge-x_{3})\vee(-x_{2}\wedge\top))]\), which is logically equivalent to the formula on the RHS of (10) that represents the BNN. Accordingly, the BNN is represented by the SDD in Figure 3(a). \(\Box\)
In our experiments and in the running example, we used the _PySDD_ system [15], which, given a CNF formula \(\psi\), produces a vtree and a compliant SDD, both optimized in size, that represent \(\psi\) [1, 12].
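For illustration, a formula such as (10) can be compiled with _PySDD_ roughly as follows; the calls mirror the library's documented API and may vary slightly across versions:

```python
# Sketch: compiling the CNF of Eq. (10) into an SDD with PySDD.
from pysdd.sdd import SddManager

mgr = SddManager(var_count=3)
x1, x2, x3 = mgr.vars                     # literals for the three variables
f = (~x1 | ~x2) & (~x1 | ~x3) & (~x2 | ~x3)   # Eq. (10)
f.ref()                                   # protect from garbage collection
mgr.minimize()                            # search for a smaller vtree / SDD
print(f.size())                           # number of SDD nodes
```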
This compilation takes space and time that are exponential only in the _tree-width_, \(\mathit{TW}(\psi)\), of \(\psi\), which is the tree-width of the graph \(\mathcal{G}\) associated to \(\psi\) [1, 12]. \(\mathcal{G}\) contains the variables as nodes, with an undirected edge between any two of them that appear in a same clause. The tree-width measures how close the graph is to being a tree. The exponential upper bound in the tree-width is a positive _fixed-parameter tractability_ result [13], in that \(\mathit{TW}(\psi)\) is in general much smaller than \(|\psi|\).
For example, the graph \(\mathcal{G}\) for the formula \(\psi\) on the RHS of (10) has \(x_{1},x_{2},x_{3}\) as nodes, and edges between any pair of variables, which makes \(\mathcal{G}\) a complete graph. Since every complete graph has a tree-width equal to the number of nodes minus one, we have \(\mathit{TW}(\psi)=2\).
Our final transformation step consists in obtaining a dDBC from the resulting SDD, as follows: An SDD turns out to correspond to a d-DNNF Boolean circuit, for which decomposability and determinism hold, and has only variables as inputs to negation gates [12]. The class d-DNNF is contained in dDBC.
The algorithm in [16] for \(\mathsf{Shap}\) computation requires the dDBC to be a dDBCSFi(2). Every dDBC can be transformed in linear time into a dDBCSFi(2) [1]. More details can be found in Appendix B.
**Example 5**.: (example 3 cont.) By interpreting decision points and elements as disjunctions and conjunctions, resp., the SDD in Figure 3(a) can easily be converted into a d-DNNF circuit. Notice that only variables are affected by negations. However, due to the children of node 3, which do not have the same variables, the directly resulting dDBC is not smooth (it has fan-in \(2\), though). It can be transformed into the dDBCSFi(2) in Figure 1. \(\Box\)
## 4 Shap Computation: Experiments
The dataset "California Housing Prices" dataset was used for our experiments, in particular, to train a BNN and compute Shap scores. It can be downloaded from Kaggle [12] (it has been used before, e.g. in [13]). It consists of 20,640 observations for 10 features with information on the block groups of houses in California, from the 1990 Census. Table 1 lists and describes the features, and the way in which they are binarized. The categorical feature #1 is one-hot encoded, giving rise to 5 binary features: #1\({}_{a}\),..., #1\({}_{e}\). Accordingly, we end up with 13 binary input features, plus the binary output feature, #10. Accordingly, thirteen binary predictor features are used to predict a binary output, representing whether the median price at each block is high or low (i.e. above or below the average of the original #10).
We used the "Tensorflow" and "Larq" Python libraries to train a BNN with one hidden layer, with as many neurons as predictors, and one neuron for the output. For the hidden neurons, the activation functions are step function, as in (4), with outputs \(1\) or \(-1\), whereas the step function for the output returns \(1\) or \(0\). All weights were rounded to binary values (1 or \(-1\)) and the biases were kept as real numbers. The loss function employed was the _binary cross-entropy_, defined by \(\mathit{BCE}(\bar{y},\hat{y})=-\frac{1}{|\hat{y}|}\sum_{i=1}^{|\hat{y}|}\left[y_{ i}\text{log}(\hat{y}_{i})+(1-y_{i})\text{log}(1-\hat{y}_{i})\right]\), where \(\hat{\hat{y}}\) represents the labels predicted by the model and \(\bar{y}\) are the true labels. The BNN ended up having a binary cross-entropy of 0.9041, and an accuracy of 0.6580, based on the test dataset.
According to the transformation path (3), the constructed BNN was first represented as a CNF formula with 2,391 clauses. It has a tree-width of 12, which makes sense having a middle layer of 13 gates, each with all features as inputs. The CNF was transformed, via the SDD conversion, into a dDBCSFi(2), \(\mathcal{C}\), which ended up having 18,671 nodes (without counting the negations affecting only input gates). Both transformations were programmed in Python. For the intermediate simplification of the CNF we used the SAT solver _Riss_. The initial transformation into CNF took 1.3 hrs. The conversion of the simplified CNF into the dDBCSFi(2) took 0.8276 secs.
After the transformation, we could compute Shap for a given input entity in three different ways:
1. Directly on the BNN as a black-box model, i.e. using only its input/output relation for multiple calls, i.e. directly using formula (2);
2. Similarly, using the circuit \(\mathcal{C}\) as a black-box model; and
3. Using the efficient algorithm in [1], treating circuit \(\mathcal{C}\) as an open-box model. For completeness, it is reproduced here as Algorithm 1.
We performed these three computations for sets of 20, 40, 60, 80, and 100 input entities, to compare average times with increasing numbers of entities. In all cases, Shap was computed on the uniform probability distribution over the joint feature domain of size \(2^{13}\) (using Algorithm 1 with the product distribution with \(\frac{1}{2}\) for value \(1\)). Everything was programmed in Python.
The experimental results we report below are the average Shap score for each feature over 100 entities; and the average time taken to compute them for 20, 40, 60, 80, 100 entities. For the cases (a) and (b) above, i.e. computations with black-box models, the classification labels were first computed for all entities in the population \(\mathcal{E}\). After that, for each individual, fixed input entity \(\mathbf{e}\), when computing its Shap scores, all the other precomputed "counterfactual" versions in \(\mathcal{E}\) were used in formula (2).6 The specialized algorithm for (c), the open-box case, does not require the precomputation of all entity labels. All experiments were run on _Google Colab_ (with an NVIDIA Tesla T4 enabled).7
Footnote 6: The Shap scores reported in [1] were also computed in this way, but considering only the entity sample instead of the whole entity population.
Footnote 7: The complete code for _Google Colab_ can be found at: [https://github.com/Jorvan758/dDBCSFi2](https://github.com/Jorvan758/dDBCSFi2).
To report on Shap score computation, we used two different averages: over the original Shap scores, and over their absolute values. C.f. Figure 4. In this way, we can see the usual label each feature is associated with, and also its average relevance in absolute terms.
As expected, since the dDBC faithfully represents the BNN, we obtained exactly the same Shap scores under the three modes of computation, i.e. (a)-(c) above. Our results tell us that the three most relevant features for the classification label are #\(1_{b}\), #\(1_{e}\) and #6.
The times taken to compute Shap in the three cases, (a)-(c), are shown in Figure 5, in logarithmic scale. For example, the time with the BNN for 100 entities was 7.7 hrs, whereas for the open-box dDBC it was 4.2 min. We observe a huge gain in performance with the use of the efficient
| **Id #** | **Feature Name** | **Description** | **Original Values** | **Binarization** |
| --- | --- | --- | --- | --- |
| #1 | _ocean\_proximity_ | A label of the location of the house w.r.t. sea/ocean | Labels _<1h\_ocean_, _inland_, _island_, _near\_bay_ and _near\_ocean_ | Five features (one representing each label), for which 1 means a match with the value of _ocean\_proximity_, and \(-1\) otherwise |
| #2 | _households_ | The total number of households (a group of people residing within a home unit) for a block | Integer numbers from 1 to 6,082 | 1 (above average of the feature) or \(-1\) (below average) |
| #3 | _housing\_median\_age_ | The median age of a house within a block (lower numbers mean newer buildings) | Integer numbers from 1 to 52 | 1 (above average of the feature) or \(-1\) (below average) |
| #4 | _latitude_ | The angular measure of how far north a block is (the higher the value, the farther north) | Real numbers from 32.54 to 41.95 | 1 (above average of the feature) or \(-1\) (below average) |
| #5 | _longitude_ | The angular measure of how far west a block is (the higher the value, the farther west) | Real numbers from \(-124.35\) to \(-114.31\) | 1 (above average of the feature) or \(-1\) (below average) |
| #6 | _median\_income_ | The median income for households within a block (measured in tens of thousands of US dollars) | Real numbers from 0.50 to 15.00 | 1 (above average of the feature) or \(-1\) (below average) |
| #7 | _population_ | The total number of people residing within a block | Integer numbers from 3 to 35,682 | 1 (above average of the feature) or \(-1\) (below average) |
| #8 | _total\_bedrooms_ | The total number of bedrooms within a block | Integer numbers from 1 to 6,445 | 1 (above average of the feature) or \(-1\) (below average) |
| #9 | _total\_rooms_ | The total number of rooms within a block | Integer numbers from 2 to 39,320 | 1 (above average of the feature) or \(-1\) (below average) |

| **Id #** | **Output** | **Description** | **Original Values** | **Labels** |
| --- | --- | --- | --- | --- |
| #10 | _median\_house\_value_ | The median house value for households within a block (measured in US dollars) | Integer numbers from 14,999 to 500,001 | 1 (above average of the feature) or 0 (below average) |

Table 1: Features of the "California Housing Prices" dataset.
Those times do not include the one-time cost of the transformation of the BNN into the dDBC.
The difference in time between the BNN and the black-box dDBC, i.e. cases (a) and (b), resides in the fact that the BNN allows some batch processing for the initial label predictions, which translates into just 6 matrix operations (2 weight multiplications, 2 bias additions, and 2 activations), whereas with the dDBC the classifications are made one at a time.
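In matrix form, this batched forward pass can be sketched as follows (shapes and variable names are assumptions):

```python
import numpy as np

def bnn_predict_batch(X, W1, b1, W2, b2):
    # X: (num_entities, 13); W1 (13, 13) and W2 (13, 1) hold binary +1/-1
    # weights; b1, b2 are the real-valued biases.
    h = np.sign(X @ W1 + b1)      # 1st multiplication, 1st bias addition, 1st activation
    z = h @ W2 + b2               # 2nd multiplication, 2nd bias addition
    return (z > 0).astype(int)    # 2nd activation: the 0/1 output step
```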
## 5 Conclusions
The main objective of the research we reported on has been twofold. On one side, we have shown the convenience of applying knowledge compilation techniques to obtain explanations for ML-based models. On the other side, we have also established the huge difference in performance when computing explanation scores for a classifier treated as a black-box vs. treated as an open-box.
In relation to the knowledge compilation phase, we emphasize once more that the effort invested in transforming the BNN into a dDBC is incurred only once, with the compilation of the BNN into a CNF being the most expensive step. In comparison, the subsequent transformations into the SDD and the final dDBC take very little time.
Despite the intrinsic complexity involved, there is much room for improving the algorithmic and implementation aspects of the BNN compilation. The same applies to the implementation of the efficient Shap computation algorithm.
The resulting circuit can be used to obtain Shap scores multiple times, and for multiple inputs. We have shown that the performance gain in Shap computation with the circuit exceeds by far both the compilation time and the Shap computation time for the BNN used as a black-box classifier. Furthermore, the resulting circuit can be used for other purposes, such as _verification_ of general properties of the classifier [10, 11].
When we can easily obtain explanation scores, such as Shap, we have an additional tool to analyze the goodness of our classifier. Obtaining unexpected or surprising explanations may invite us to reconsider the whole machine learning process, starting from the original dataset all the way to the deployment of the classifier.
We computed Shap scores using the uniform distribution on the entity population. There are a few issues to discuss in this regard. First, it is computationally costly to use it with a large number of features. One could use instead the _empirical distribution_ associated with the dataset, as in [1] for black-box Shap computation. This would require appropriately modifying the applied algorithm, which is left for future work. Secondly, and more generally, the uniform distribution does not capture possible dependencies among features. The algorithm is still efficient with the _product distribution_, which also suffers from imposing feature independence (c.f. [1] for a discussion of its empirical version and related issues). It would be interesting to explore to what extent other distributions could be used in combination with our efficient algorithm.
Figure 4: Average Shap score for each feature with 100 entities (a) (where red means an average closer to \(-1\) and blue closer to \(1\)), and for their absolute values (b).
Independently from the algorithmic and implementation aspects of Shap computation, an important research problem has to do with bringing _domain knowledge_ or _domain semantics_ into the definitions and computations of attribution scores, to obtain more meaningful and interpretable results. This additional knowledge could come, for example, in declarative terms, expressed as _logical constraints_. They could be used to appropriately modify the algorithm or the underlying distribution (Bertossi, 2021). It is likely that domain knowledge can more easily be brought into a score computation when it is done on a logic-based classifier, such as a dDBC.
In this work we have considered only binary NNs. It would be interesting to investigate to what extent our methods can be applied to non-binary NNs. There is work on binarizing NNs (Qin et al., 2020; Yuan and Agaian, 2021; Simons and Lee, 2019), but it is not clear whether that would be a feasible way to go.
## Acknowledgments
Our special thanks go to: (a) Arthur Choi, for the use of the PySDD repository, his kind and permanent advice on how to use it, and for answering more fundamental questions; (b) Andy Shih, for sharing some preliminary code for the conversion of BNNs into CNF, and for explaining many issues; (c) Norbert Manthey, for his Riss tool collection repository, and for answering questions about its use and the simplification of CNF formulas; (d) Maximilian Schleich, for sharing some preliminary code for the black-box computation of Shap; and (e) Adnan Darwiche, for sharing useful insights.
Part of this work was funded by ANID - Millennium Science Initiative Program - Code ICN17002; and "Centro Nacional de Inteligencia Artificial" CENIA, FB210017 (Financiamiento Basal para Centros Cientificos y Tecnologicos de Excelencia de ANID), Chile. Both CENIA and SKEMA Canada funded Jorge Leon's visit to Montreal. L. Bertossi is a Professor Emeritus at Carleton University, Ottawa, Canada; and a Senior Fellow of the Universidad Adolfo Ibanez (UAI), Chile.
## Appendix A BNN to CNF Formula
Our conversion of the BNN into a CNF formula is inspired by a technique introduced in (Narodytska et al., 2018). However, it cannot be applied straightforwardly since it introduces auxiliary variables, whose final elimination becomes a challenge for our purposes. The application of a _variable forgetting_ technique (Oztok and Darwiche, 2017) turns out to damage the required determinism of the dDBC. Accordingly, it is necessary to adapt the conversion technique, avoiding the use of the auxiliary variables. We sketch the approach in the following. C.f. Example 2 for intuitions and motivation.
Consider a BNN receiving \(\ell_{0}\) input features, say \(\bar{x}=\langle x_{1},\dots,x_{\ell_{0}}\rangle\). The BNN has \(m\) hidden layers, each layer \(z\) (from \(1\) to \(m\)) with \(\ell_{z}\) neurons. A neuron in layer \(z\) receives the input vector \(\vec{i}=\langle i_{1},\dots,i_{\ell_{z-1}}\rangle\). The output layer has a single neuron. As in Section 3, all weights and features are \(-1\) or \(1\), with the activation functions returning \(-1\) or \(1\), except for the output layer that returns \(0\) or \(1\).
To convert the BNN into a CNF formula, we do it layer-wise from input to output. All the neurons in a layer can be converted in parallel. For every neuron gate \(g\), we encode the case when it returns \(1\), representing its activation function \(\mathit{sp}(\bar{w}_{g}\bullet\vec{i}+b_{g})\) as a CNF formula. For this, we compute, as in (5), the number \(d_{g}\) of inputs that must be properly instantiated for the output to be \(1\).
For each neuron \(g\) on the first layer, and to conveniently manage the whole process, we construct a matrix-like structure \(\mathit{M}_{g}\) of dimension \(|\vec{i}|\times d_{g}\), whose components \(c_{k,t}\) are propositional formulas.
\[M_{g}=\begin{bmatrix}w_{1}\cdot i_{1}&\mathit{false}&\mathit{false}&\dots&\mathit{false}\\ w_{2}\cdot i_{2}\lor c_{1,1}&w_{2}\cdot i_{2}\land c_{1,1}&\mathit{false}&\dots&\mathit{false}\\ w_{3}\cdot i_{3}\lor c_{2,1}&(w_{3}\cdot i_{3}\land c_{2,1})\lor c_{2,2}&w_{3}\cdot i_{3}\land c_{2,2}&\dots&\mathit{false}\\ \dots&\dots&\dots&\dots&\dots\\ w_{|\vec{i}|}\cdot i_{|\vec{i}|}\lor c_{|\vec{i}|-1,1}&(w_{|\vec{i}|}\cdot i_{|\vec{i}|}\land c_{|\vec{i}|-1,1})\lor c_{|\vec{i}|-1,2}&(w_{|\vec{i}|}\cdot i_{|\vec{i}|}\land c_{|\vec{i}|-1,2})\lor c_{|\vec{i}|-1,3}&\dots&(w_{|\vec{i}|}\cdot i_{|\vec{i}|}\land c_{|\vec{i}|-1,d_{g}-1})\lor c_{|\vec{i}|-1,d_{g}}\end{bmatrix}\]
Figure 5: Seconds taken to compute Shap on 20, 40, 60, 80 and 100 entities; using the BNN as a black-box (blue bar), the dDBC as a black-box (red bar), and the dDBC as an open-box (orange bar). Note that the vertical axis employs a logarithmic scale.
In this array, a term of the form \(w_{k}\cdot i_{k}\) _does not represent a number_, but a propositional formula, namely \(i_{k}\) if \(w_{k}=1\), and \(\neg i_{k}\) if \(w_{k}=-1\). Every \(M_{g}\) is filled in a row-wise manner starting from the top, and then column-wise from left to right. The formula that will be passed over to the next layer to the right is the last entry, \(c_{|\vec{i}|,d_{g}}\) (highlighted).
Only in the arrays for the first layer do we find new propositional variables, for the inputs. For the next layers, the whole output formulas from the previous layer are passed over as inputs, and they become the \(i_{k}\)'s just mentioned. In this way, no auxiliary variables other than those for the initial inputs are created. Each row represents the number of the first \(k\in\{1,\ldots,|\vec{i}|\}\) inputs considered for the encoding, and each column, the threshold \(t\in\{1,\ldots,d_{g}\}\) to surpass (meaning that at least \(t\) inputs should be instantiated conveniently). For every component where \(k<t\), the threshold cannot be reached, which makes every component in the upper-right triangle \(\mathit{false}\).
To illustrate, the first row of \(M_{g}\) has \(w_{1}\cdot i_{1}\) as its first entry, and \(\mathit{false}\) for the rest. Each entry \(c_{k,1}\) of the first column becomes \(w_{k}\cdot i_{k}\lor c_{k-1,1}\). For any other component \(c_{k,t}\), with \(t>1\), we use \((w_{k}\cdot i_{k}\wedge c_{k-1,t-1})\lor c_{k-1,t}\). So, the lower-right component \(c_{|\vec{i}|,d_{g}}\) of \(M_{g}\) ends up being the encoding of neuron \(g\), and it is the only formula that will be used in the encodings for the next layer. All the \(M_{g}\)'s for a layer can be generated in parallel, as their encodings do not influence each other.
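The filling rule can be sketched as follows (an illustrative sketch, not our implementation: formulas are represented as nested tuples, and `lits[k-1]` stands for the literal \(w_{k}\cdot i_{k}\)):

```python
def neuron_encoding(lits, d):
    """Encoding c_{|i|,d} of a neuron with input literals `lits` and threshold d."""
    n = len(lits)
    M = [[False] * d for _ in range(n)]   # the upper-right triangle stays false
    M[0][0] = lits[0]
    for k in range(1, n):
        M[k][0] = ("or", lits[k], M[k - 1][0])              # first column
        for t in range(1, min(k + 1, d)):                   # only t <= k is reachable
            M[k][t] = ("or", ("and", lits[k], M[k - 1][t - 1]), M[k - 1][t])
    return M[n - 1][d - 1]
```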
For the second layer, we take as inputs the encodings \(c_{\ell_{0},d_{g}}\) from the previous one, and we keep repeating this conversion. In other words, we compute \(d_{g}\) and \(M_{\!g}\) for every neuron \(g\) in the new layer, taking the previous encodings as inputs. The resulting \(c_{\ell_{1},d_{g}}\)s are given as inputs for the next layer, and we iterate all the way until the last layer. For illustration, c.f. Figure 6.
From the array \(M_{g}\) for the single output neuron, \(o\), at the last layer, we extract the component \(c_{\ell_{m},d_{o}}\), which becomes the propositional encoding of the whole BNN. This is a propositional formula that is next converted into a CNF formula, which in turn is further simplified (c.f. Section 3).8
Footnote 8: Actually, every entry in an array \(M_{\!g}\) is immediately transformed into CNF and simplified, avoiding the iterative creation of extremely long, complex and non-CNF formulas.
```
Input : Original dDBC (starting from the root node).
Output: A dDBCSFi(2) equivalent to the given dDBC.

function FIX_NODE(node)
    if node is a disjunction then
        c_new = false
        for each subcircuit sc in node do
            sc_fix = FIX_NODE(sc)
            if sc_fix is true or is equal to ¬c_new then
                return true
            else if sc_fix is not false then
                for each variable v in c_new and not in sc_fix do
                    sc_fix = sc_fix ∧ (v ∨ ¬v)
                for each variable v in sc_fix and not in c_new do
                    c_new = c_new ∧ (v ∨ ¬v)
                c_new = c_new ∨ sc_fix
        return c_new
    else if node is a conjunction then
        c_new = true
        for each subcircuit sc in node do
            sc_fix = FIX_NODE(sc)
            if sc_fix is false or is equal to ¬c_new then
                return false
            else if sc_fix is not true then
                c_new = c_new ∧ sc_fix
        return c_new
    else if node is a negation then
        return ¬FIX_NODE(subcircuit of node)
    else
        return node

dDBCSFi(2) = FIX_NODE(root_node)
```
**Algorithm 2** From dDBC to dDBCSFi(2) |
2302.01636 | Symbiosis of an artificial neural network and models of biological
neurons: training and testing | In this paper we show the possibility of creating and identifying the
features of an artificial neural network (ANN) which consists of mathematical
models of biological neurons. The FitzHugh--Nagumo (FHN) system is used as an
example of a model demonstrating simplified neuron activity. First, in order to
reveal how biological neurons can be embedded within an ANN, we train the ANN
with nonlinear neurons to solve a basic image recognition problem with the MNIST
database; and next, we describe how FHN systems can be introduced into this
trained ANN. Finally, we show that an ANN with FHN systems inside can be
successfully trained and its accuracy increases. What has been done above
opens up great opportunities in terms of the direction of analog neural
networks, in which artificial neurons can be replaced by biological ones.
| Tatyana Bogatenko, Konstantin Sergeev, Andrei Slepnev, Jürgen Kurths, Nadezhda Semenova | 2023-02-03T10:06:54Z | http://arxiv.org/abs/2302.01636v1 | # Symbiosis of an artificial neural network and models of biological neurons:
###### Abstract
In this paper we show the possibility of creating and identifying the features of an artificial neural network (ANN) which consists of mathematical models of biological neurons. The FitzHugh-Nagumo (FHN) system is used as an example of a model demonstrating simplified neuron activity. First, in order to reveal how biological neurons can be embedded within an ANN, we train the ANN with nonlinear neurons to solve a basic image recognition problem with the MNIST database; next, we describe how FHN systems can be introduced into this trained ANN. Finally, we show that an ANN with FHN systems inside can be successfully trained and its accuracy increases. This opens up great opportunities in the direction of analog neural networks, in which artificial neurons can be replaced by biological ones.
keywords: machine learning, neural network, FitzHugh-Nagumo system, linear regression, artificial neural network, biological neuron
PACS: 05.45.-a, 05.10.-a, 47.54.-r, 07.05.Mh, 87.18.Sn, 87.19.ll
MSC: 70K05, 82C32, 92B20
## Introduction
There are two different approaches to implementing neural networks and two different definitions of neural networks. From nonlinear dynamics' perspective, a neural network is of interest for describing biological phenomena and features of interaction between neurons within a neural circuit in response to an internal or external impact. The temporal dynamics of biological neurons and the connections between them are extremely complex, so there are a large number of works describing models of different levels of complexity [1; 2; 3; 4].
On the other hand, there are artificial neural networks (ANNs). Despite having a similar name, these networks are totally different from biological neural networks in their purpose and design. Artificial neural networks have been a promising and widely used tool for solving many computational tasks in a variety of scientific and engineering areas, ranging from climate research and medicine to sociology and economics [5; 6]. An ANN consists of artificial neurons whose role is to generate an output signal based on a linear or nonlinear transformation of the input signal. Training an artificial neural network consists in fitting and altering the connection matrices between its neurons. In the learning process, the connection matrices are built in such a way that the network outputs the result required from it. The idea of implementing such neural networks came from biology [7]. In the 1940s scientists were inspired by how a neural circuit is arranged and tried to implement a simplified model of it in order to solve non-trivial problems that do not admit a strictly formulated solution algorithm [8].
Having been first developed in the 1940s and 1950s, ANNs have undergone numerous substantial enhancements. Simple threshold neurons of the first generation, which produced binary-valued outputs, evolved into systems which use smooth activation functions, thus making it possible for the output to be real-valued. The newest kind of ANN is based on spiking neurons [9; 10] and has received the name of spiking neural network (SNN).
In contrast to neural networks of the previous generations, an SNN considers temporal characteristics of the information on the input. In this regard an SNN architecture makes a step closer to a plausible model of a biological neural network, although still being highly simplified [11]. Within such a network, information transmission between artificial neurons resembles that of biological neurons.
Similar to traditional ANNs, SNNs are arranged in layers, and a signal travels from an input to the output layer traversing one or more hidden layers. However, in hidden layers SNNs use spiking neurons which are described by a phenomenological model representing a spike generation process. In a biological neuron, the activity of pre-synaptic neurons affects the membrane potential of post-synaptic neurons which results in a generation of a spike when the membrane potential crosses a threshold [11; 9]. This complex process has been described with the use of many mathematical models, with the Hodgkin-Huxley model being the first and the most famous one [12].
In order to find a balance between computational expense and biological realism, several other models have been proposed, e.g. the Leaky Integrate-and-Fire model [13] or the Izhikevich model [14].
Based on the available knowledge from neuroscience, several methods of information encoding have been developed, e.g. rate coding or latency coding. For rate coding the rate (or the frequency) of spikes is used for information interpretation, while latency coding uses the timing of spikes. Both of these methods are special cases of a fully temporal code. In a fully temporal code a timing correlates with a certain event, for instance, a spike of a reference neuron.
With artificial neural networks and their tasks becoming more and more sophisticated, we may soon verge on some kind of a crisis [15; 16]: tasks are becoming so complex that the capacities of modern computers will soon not be enough to meet the growing needs. Here the bleeding-edge direction of hardware neural networks comes to the rescue [17]. According to this approach, a neural network is not created with a computer but is a real device that can learn and solve tasks. The neurons themselves and the connections between them exist at the physical level, i.e. the model is not simulated on a computer, but is implemented in hardware according to its physical principles.
The main purpose of this work is to show the possibility of creating and identifying the features of a trained neural network which consists of mathematical models of biological neurons. In this research the FitzHugh-Nagumo system [18; 19] is used as an example. The FHN system is a well-known simplified model of a biological neuron that demonstrates spike dynamics under certain conditions. It is often used to model simplified neural activity.
This task helps us bring artificial neural networks closer to biological ones and reveal how biological neurons can be embedded within an artificial neural network. That is, we can take the topology (the connections) from an ANN and the features of dynamics and interactions from a biological system. Thus, it is possible to bring spiking ANNs closer to biological ones.
The work consists of several stages. First, a simple neural network with artificial linear and nonlinear neurons is trained to solve a basic image recognition problem with handwritten digits of the MNIST database (Sect. 1). Then, a certain number of FHN systems is introduced into the existing neural network, and the task is to identify the conditions under which the network still functions (Sect. 3). The next step makes the task more difficult: starting from a network in which the FHN systems are implemented, we attempt to train the neural network (Sect. 4).
## 1 MNIST database and network topology
At the first step a simple deep neural network with one hidden layer is being trained. The neural network is schematically shown in Fig. 1.
Since a task of recognising hand-written MNIST digits is being solved, the input signal of the ANN is an image of \(28\times 28\) pixels. Usually such an image is transformed into a vector \(\mathbf{X}\) of size \(1\times 784\) and is fed to the input layer of the ANN. The first layer thus consists of 784 simple linear neurons with the activation function \(f(x)=x\). These neurons do not transform the signal but merely pass it to the next layer. To do this, the vector \(\mathbf{X}\) is multiplied by the corresponding matrix \(\mathbf{W}^{\text{in}}\) of size \(784\times 100\). Thus, the input image is transformed into an input signal and is passed to the 100 neurons of the hidden layer. Within this layer the neurons have a sigmoid activation function: \(f(x)=1/(1+e^{-x})\). However, the choice of the function does not affect the subsequent results, and the use of a hyperbolic tangent function would lead to a similar outcome. Next, the signal from the 100 hidden neurons is fed to the output layer using the connection matrix \(\mathbf{W}^{\text{out}}\). The output layer consists of 10 neurons which also use the linear activation function \(f(x)=x\).
The response of the neural network is the index number of the output neuron with the maximum output signal; this selection is denoted softmax() below. Specifically, if an image with the number 2 is fed to the input of the ANN, as in Fig. 1, then the ANN response "2" corresponds to the situation when the neuron numbered \(i=2\) (where \(i\in[0;9]\)) has the maximum output.
The MNIST database [20] was used to train the ANN. The database includes a training set (60,000 images of the digits 0-9) and a test set (10,000 images). To train the neural network, we used the Keras [21] library, a freely distributed API. The accuracy of the trained ANN was 99.5% on the training set and 97.7% on the test set.
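A minimal Keras sketch of this network is the following (the optimizer, loss, number of epochs and pixel normalization are assumptions, as they are not specified above):

```python
import tensorflow as tf

# Network of Fig. 1: 784 linear inputs -> 100 sigmoid hidden neurons -> 10 linear outputs.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(100, activation="sigmoid"),  # hidden layer (W_in)
    tf.keras.layers.Dense(10, activation="linear"),    # output layer (W_out)
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
model.fit(x_tr.reshape(-1, 784) / 255.0, y_tr, epochs=10,
          validation_data=(x_te.reshape(-1, 784) / 255.0, y_te))
```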
## 2 FitzHugh-Nagumo system
In order to implement FHN systems in the place of the 100 artificial neurons in the hidden layer, one needs to understand the dynamics of the FHN system itself and how it should be fed with the input signal. After the input signal \(\mathbf{X}\) is multiplied by the connection matrix \(\mathbf{W}^{\text{in}}\), a vector of 100 values is obtained.
Figure 1: Schematic illustration of neural network under study. The artificial neurons with linear activation function are colored in yellow, while the neurons with nonlinear activation function, which will later be replaced by FHN systems, are marked in red.
These values are fed to the inputs of 100 neurons. Now FHN systems play the role of the hidden layer neurons and each of them is described by the following equations [18; 19]:
\[\begin{array}{l}\varepsilon\dot{x}=x-\frac{x^{3}}{3}-y\\ \dot{y}=x+a+I(t),\end{array} \tag{1}\]
where \(x\) is an activator variable, while \(y\) is an inhibitor variable. This is a widely used form of the FHN system, where the parameter \(\varepsilon\) sets the time scale, \(I(t)\) is the input signal, and \(a\) is the control parameter. Depending on the value of \(a\), the system undergoes an Andronov-Hopf bifurcation: for \(|a|>1\) a stable equilibrium state of the "focus" type is observed (excitable mode); if \(|a|<1\), the mode is called oscillatory, and the system exhibits periodic spike dynamics.
A description of the system (1) from the perspective of current and voltage is also common. Then the variable \(x\) is a voltage-like membrane potential with cubic nonlinearity that allows regenerative self-excitation via a positive feedback. The variable \(y\) is called the recovery variable with linear dynamics that provides a slower negative feedback. The parameter \(I\) corresponds to a stimulus current. A positive current corresponds to a current directed from the outside of the cell membrane to the inside.
Figure 2 shows the phase plane of the system (1). Also, the corresponding activator \(\dot{x}=0\) and inhibitor \(\dot{y}=0\) nullclines are depicted. The activator nullcline corresponds to the \(y=x-x^{3}/3\) line (Fig. 2, the orange line), while the inhibitor nullcline corresponds to the \(x=-a\) line when there is no input signal (Fig. 2, the green line). For \(a=1\) the nullclines intersect at \(x_{0}=-1\), \(y_{0}=-2/3\). This point is a stable equilibrium state for \(a>1\).
If the input signal does not change in time but introduces an additional constant component \(I(t)=I=\) const into the second equation, the sum \(a+I\) allows one to shift the position of the vertical nullcline of the system (see Fig. 2, green dashed line). Thus, depending on the input signal \(I\), the system can be set in either an oscillatory or an excitable mode. For \(a=0\), the values \(|I|<1\) correspond to the oscillatory mode, and \(|I|>1\) establish the excitable one.
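Both regimes can be checked with a simple explicit-Euler integration of Eq. (1) (a sketch; the value of \(\varepsilon\), the time step and the number of steps are assumptions not taken from the text):

```python
import numpy as np

def integrate_fhn(a, I, eps=0.05, dt=1e-3, steps=100_000, x0=-1.0, y0=-2.0 / 3.0):
    """Explicit-Euler integration of system (1) with a constant input current I."""
    x, y = x0, y0
    xs = np.empty(steps)
    for n in range(steps):
        dx = (x - x**3 / 3.0 - y) / eps
        dy = x + a + I
        x, y = x + dt * dx, y + dt * dy
        xs[n] = x
    return xs

spiking = integrate_fhn(a=0.0, I=0.5)   # |a + I| < 1: periodic spike dynamics
resting = integrate_fhn(a=0.0, I=1.5)   # |a + I| > 1: excitable mode (stable focus)
```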
## 3 Implementing the FHN systems into the trained ANN
Since the neural network was initially trained in such a way that the hidden layer neurons have a "sigmoid" type activation function, they accept an input signal of the range \((-\infty;+\infty)\), and return an output signal of \((0;1)\). In this case, the product of the input image vector \(\mathbf{X}\) and the connection matrix \(\mathbf{W}^{\text{in}}\) may contain such large numbers that they will not be commensurate with the scale of the variables \((x,y)\) of the FHN system.
In order for the product \(\mathbf{X}\cdot\mathbf{W}^{\text{in}}\) to be further used as an input signal of the FHN system, we propose to introduce the following normalization:
\[I=\gamma\cdot\tanh(\mathbf{X}\cdot\mathbf{W}^{\text{in}}). \tag{2}\]
Then, no matter how large the values of the matrix \(\mathbf{W}^{\text{in}}\) are, after applying the hyperbolic tangent the range of values is transformed into \((-1;1)\). The \(\gamma\) multiplier allows one to set the range of the values more precisely.
The output signal of the system is defined as follows:
\[Y=\text{softmax}(\vec{x}\cdot\mathbf{W}^{\text{out}}), \tag{3}\]
It is also computed with the softmax() function as earlier, but now as a function of the product of the \(\mathbf{W}^{\text{out}}\) matrix and the vector of variables \(x_{i}\) of the 100 FHN systems. The ANN response is then the index number of the output layer neuron with the maximum output signal.
Figure 3 shows the temporal dependence of the neural network response for three different input images containing the numbers 0, 3 and 5. Strictly speaking, Fig. 3 does not show the immediate output of the neural network: a transient time of 1000 dimensionless units, \(T^{\text{trans}}=1000\), is discarded in order to exclude the transient process. Now there is a problem of result interpretation. Since FHN systems are spiking systems and may be in an oscillatory mode, the ANN response may also oscillate, and at some moments "wrong" neurons can be activated. In order to interpret the response of the ANN correctly, one can choose the answer that is held for the largest fraction of the control record of the ANN output signal, \(T=100\). For example, in Fig. 3, the longest response time is "0" for the top record, "3" for the middle one, and "5" for the bottom one.
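This interpretation rule can be sketched as follows (array shapes are assumptions; `x_traj` collects the activator variables of the 100 FHN systems over the control window):

```python
import numpy as np

def ann_response(x_traj, W_out):
    # x_traj: (time_steps, 100) activator variables x_i over the window T;
    # W_out: (100, 10) output connection matrix.
    winners = np.argmax(x_traj @ W_out, axis=1)          # instantaneous winner per step
    return np.bincount(winners, minlength=10).argmax()   # label held for the longest time
```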
Identical results were obtained for other digits. Table 1 shows the accuracy calculated for the digits 0-9 on the training and test sets for the parameter \(\gamma=-0.5\). As can be seen from the table, the average accuracy on the training set was 73.4%, and the average accuracy on the test set was 73.3%.
Average accuracies were also calculated for other values of \(\gamma\). Figure 4 shows the dependence of the average accuracy on \(\gamma\) for the test set. As can be seen from the figure, the lowest accuracy is obtained if \(\gamma>0\) or if \(\gamma\) is close to zero.
Figure 2: Phase portrait of FHN system (1) with corresponding \(\dot{x}=0\) (orange) and \(\dot{y}=0\) nullclines (green).
Also, for \(-1<\gamma<0\), the accuracy increases with the decrease of \(\gamma\) and saturates when \(\gamma<-1\).
## 4 ANN training
In order to speed up training and testing, not all the images from the training and test sets were used: 10,000 examples from the training set and 1,000 examples from the test set, with an equal number of examples of each digit, were used.
The parameters of the FHN systems remained the same. Since ANN training is associated with a large number of runs of input images and connection matrices, in order to speed up this process, the settling time was reduced to \(T^{\text{trans}}=200\), but the control time \(T=100\) remained the same. This did not affect the accuracy when repeating the previously described steps.
There were some difficulties in training the ANN. A large number of network topologies and several nonlinear activation functions were considered, and the number of layers was also varied. In the end, we came to the conclusion that the optimal network topology is the one presented in Fig. 5. 784 pixels are fed to the ANN input, so the input layer still consists of 784 neurons. In the previous section, it was shown that before a signal is applied to the FHN input, it must first be renormalized using a hyperbolic tangent, so we added this processing step to the first layer of the new ANN. As a result, the number of neurons in the first layer remains the same, but their activation function becomes a hyperbolic tangent. The second layer consists of 100 FHN systems. The input layer is connected to the second layer by a \(784\times 100\) matrix \(\mathbf{W}^{1}\).
The third layer was necessary to simplify the processes of learning and calculating derivatives. It contains 100 artificial neurons with a "sigmoid" activation function. Layers 2 and 3 are interconnected using the identity connection matrix \(\mathbf{W}^{2}=E\), which is fixed and does not change during the learning process, i.e. neurons of layers 2 and 3 are connected one-to-one throughout the training.
The output layer contains 10 linear neurons with the function softmax(). It is connected to the previous layer by the connection matrix \(\mathbf{W}^{3}\).
The interpretation of the output signal of the obtained ANN was the same as in the previous section. An ANN's response over time \(T=100\) was the index number of the neuron which produced the largest output signal for the longest time.
The training was carried out using the backpropagation method based on linear regression. Here the following trick is applied: during forward propagation, the FHN systems were used as layer 2 neurons, while during backpropagation they were replaced by conventional artificial neurons with a "sigmoid" activation function \(f(x)=1/(1+e^{-x})\) for the correct calculation of the derivative.
Figure 4: Average testing accuracy of ANN with implemented FHN systems depending on the parameter \(\gamma\).
Figure 3: Temporal evolution of the output of ANN with implemented FHN systems for three different kinds of input image: digit “0” (top panel), “3” (middle) and “5” (bottom).
In Fig. 5 these neurons are represented in blue. In Fig. 6 the cost function (a) and the accuracy on the training and test sets depending on the training epoch (b) illustrate the training process.
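Conceptually, this trick is a surrogate-gradient estimator; a simplified TensorFlow sketch is given below (`fhn_response` is a hypothetical stand-in, not the authors' routine, for the actual integration of the FHN systems):

```python
import tensorflow as tf

def fhn_response(z):
    # Stand-in: in practice this would integrate system (1) for each hidden
    # unit and return the time-averaged activator variables x_i.
    return tf.sigmoid(z)

@tf.custom_gradient
def fhn_hidden_layer(z):
    out = fhn_response(z)                  # forward pass: FHN-derived response
    def grad(upstream):
        s = tf.sigmoid(z)                  # backward pass: sigmoid derivative
        return upstream * s * (1.0 - s)
    return out, grad
```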
The corresponding accuracies on the training and test sets by digit are given in Table 2, both for the truncated sets (for which Fig. 6 was built) and for the full MNIST sets (to which the already trained network was applied). The above results were obtained for the value \(\gamma=-0.5\). The average accuracy of the resulting network is about 80%. For comparison, embedding the FHN systems into the pre-trained ANN at this value of \(\gamma\) (Sect. 3) gives an accuracy of about 73%. Thus, with the proposed training technique, it was possible to increase the accuracy of the neural network.
The distribution of accuracy for different digits is of particular interest (see Table 2 and Fig. A.7 in Appendix). The resulting ANN does not work well with the number 8. If it is not considered, the overall accuracy is about 90%.
## Conclusion
We managed to find ways to introduce FitzHugh-Nagumo systems into artificial neural networks, which made it possible to combine the neural network topology with the peculiarities of the FitzHugh-Nagumo spike dynamics. We were also able to find the conditions under which the resulting neural network demonstrates good accuracy.
The signal from the first layer is multiplied by the corresponding connection matrix and then sent to each of the 100 FHN systems in the ANN's hidden layer according to Eq. (2). The multiplier \(\gamma\) allows one to control the amplitude of the FHN input signal more precisely. We show here that only negative \(\gamma\) values lead to appropriate accuracy.
In addition, we proposed a method for training the ANN with introduced FHN systems; the trainable architecture is schematically shown in Fig. 5 and the training process in Fig. 6. The resulting neural network increases the accuracy by \(\approx 12\%\).
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
| Digit | Tr. set (10000) | Test. set (1000) | Tr. set (60000) | Test. set (10000) |
| --- | --- | --- | --- | --- |
| 0 | 97.1 | 96.5 | 95.9 | 95.6 |
| 1 | 97.5 | 97.6 | 96.6 | 97.8 |
| 2 | 89.2 | 87.1 | 86.4 | 86.6 |
| 3 | 87.1 | 85.1 | 84.2 | 86.8 |
| 4 | 91.9 | 86.4 | 88.6 | 89.1 |
| 5 | 85.9 | 86.2 | 81.0 | 81.7 |
| 6 | 95.2 | 86.2 | 92.2 | 91.1 |
| 7 | 94.4 | 86.9 | 92.2 | 90.5 |
| 8 | 0.1 | 0 | 0.1 | 0.1 |
| 9 | 83.7 | 80.9 | 79.5 | 79.9 |
| Average | 82.2 | 79.3 | 79.7 | 79.9 |

Table 2: Accuracies of the trained ANN with FHN systems applied to the training (Tr.) and testing (Test.) datasets of each digit type. Parameter \(\gamma\) is set to \(-0.5\).
Figure 5: Schematic illustration of trainable ANN with FHN systems. The artificial neurons with linear activation function are colored in yellow, while the neurons with nonlinear activation function are marked in red. Violet color corresponds to FHN systems.
Figure 6: Training process of ANN with FHN systems inside illustrated by cost function (a) and accuracies (b) on training and testing datasets depending on the training epoch.
## Acknowledgements
This work is partially supported by the Russian President scholarship SP-749.2022.5.
## Appendix A Testing accuracy of trained ANN according to digit type
|
2301.11612 | A neural network potential with self-trained atomic fingerprints: a test
with the mW water potential | We present a neural network (NN) potential based on a new set of atomic
fingerprints built upon two- and three-body contributions that probe distances
and local orientational order respectively. Compared to existing NN potentials,
the atomic fingerprints depend on a small set of tuneable parameters which are
trained together with the neural network weights. To tackle the simultaneous
training of the atomic fingerprint parameters and neural network weights we
adopt an annealing protocol that progressively cycles the learning rate,
significantly improving the accuracy of the NN potential. We test the
performance of the network potential against the mW model of water, which is a
classical three-body potential that well captures the anomalies of the liquid
phase. Trained on just three state points, the NN potential is able to
reproduce the mW model in a very wide range of densities and temperatures, from
negative pressures to several GPa, capturing the transition from an open random
tetrahedral network to a dense interpenetrated network. The NN potential also
reproduces very well properties for which it was not explicitly trained, such
as dynamical properties and the structure of the stable crystalline phases of
mW. | Francesco Guidarelli Mattioli, Francesco Sciortino, John Russo | 2023-01-27T09:28:08Z | http://arxiv.org/abs/2301.11612v1 | # A neural network potential with self-trained atomic fingerprints:
###### Abstract
We present a neural network (NN) potential based on a new set of atomic fingerprints built upon two- and three-body contributions that probe distances and local orientational order respectively. Compared to existing NN potentials, the atomic fingerprints depend on a small set of tuneable parameters which are trained together with the neural network weights. To tackle the simultaneous training of the atomic fingerprint parameters and neural network weights we adopt an annealing protocol that progressively cycles the learning rate, significantly improving the accuracy of the NN potential. We test the performance of the network potential against the mW model of water, which is a classical three-body potential that well captures the anomalies of the liquid phase. Trained on just three state points, the NN potential is able to reproduce the mW model in a very wide range of densities and temperatures, from negative pressures to several \(GPa\), capturing the transition from an open random tetrahedral network to a dense interpenetrated network. The NN potential also reproduces very well properties for which it was not explicitly trained, such as dynamical properties and the structure of the stable crystalline phases of mW.
## I Introduction
Machine learning (ML) potentials represent one of the emerging trends in condensed matter physics and are revolutionising the landscape of computational research. Nowadays, different methods to derive ML potentials have been proposed, providing a powerful methodology to model liquids and solid phases in a large variety of molecular systems [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. Among these methods, probably the most successful representation of a ML potential so far is given by Neural Network (NN) potentials, where the potential energy surface is the output of a feed-forward neural network [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35].
In short, the idea underlying the construction of NN potentials is to train a neural network to represent the potential energy surface of a target system. The model is initially trained on a set of configurations generated ad hoc, for which total energies and forces are known, by minimizing a suitably defined loss function based on the error in the energy and force predictions. If the training set is sufficiently broad and representative, the model can then be used to evaluate the total energy and forces of any related atomic configuration with an accuracy comparable to the original potential. Typically the original potential will include additional degrees of freedom, such as the electron density for DFT calculations, or solvent atoms in protein simulations, which make the full computation very expensive. By training the network only on a subset of the original degrees of freedom, one obtains a coarse-grained representation that can be simulated at a much reduced computational cost. NN potentials thus combine the best of two worlds, retaining the accuracy of the underlying potential model at the much lower cost of coarse-grained classical molecular dynamics simulations. The accuracy of the NN potential depends crucially on how local atomic positions are encoded in the input of the neural network, which needs to retain the symmetries of the underlying Hamiltonian, i.e. rotational, translational, and index permutation invariance. Several methods have been proposed in the literature [36; 12], such as the approaches based on the Behler-Parrinello (BP) symmetry functions [18], the Smooth Overlap of Atomic Positions (SOAP) [37], N-body iterative contraction of equivariants (NICE) [38] and polynomial symmetry functions [39], or frameworks like DeepMD [23], SchNet [22] and RuNNer [18]. In all cases, atomic positions are transformed into atomic fingerprints (AFs). The choice of the AFs is particularly relevant, as it greatly affects the accuracy and generality of the resulting NN potential.
We develop here a fully learnable NN potential in which the AFs, while retaining the simplicity of typical local fingerprints, do not need to be fixed beforehand but instead are learned during the training procedure. The coupled training of the atomic fingerprint parameters and of the network weights makes the NN training process more efficient since the NN representation is spontaneously built on a variable atomic fingerprint representation. To tackle the combined minimization of the AF parameters and of the network weights we adopt an efficient annealing procedure, that periodically cycles the learning rate, i.e. the step size of the minimization algorithm, resulting in a fast and accurate training process.
We validate the NN potential on the mW model of water [40], which is a one-site classical potential that has found widespread adoption to study water's anomalies [41; 42] and crystallization phenomena [43; 44]. Since the first pioneering MD simulations [45; 46], water has often been chosen as a prototypical case study, as the large number of distinct local structures that are compatible with its tetrahedral coordination make it the molecule with the most complex thermodynamic behavior [47], for example
displaying a liquid-liquid critical point at supercooled conditions [48; 49; 50; 51; 52]. NN potentials for water have been developed starting from density functional calculations, with different levels of accuracy [53; 54; 55; 56; 57; 58; 59; 60]. NN potentials have also been proposed to parametrise accurate classical models for water with the aim of speeding up the calculations when multi-body interactions are included [61], as in the MBpol model [62; 63; 64], or for testing the relevance of the long-range interactions, as for the SPC/E model [65]. We choose the mW potential as our benchmark system because its explicit three-body potential term offers a challenge to the NN representation that is not found in molecular models built from pair-wise interactions. We stress that we train the NN potential against data which can be generated easily and for which structural and dynamic properties are well known (or can be evaluated with small numerical errors) in a wide range of temperatures and densities. In this way, we can perform a quantitatively accurate comparison between the original mW model and the hereby proposed NN model.
Our results show that training the NN potential at even just one density-temperature state point provides an accurate description of the mW model in a surrounding phase space region that is approximately a hundred kelvins wide. A training based on three different state points extends the convergence window extensively, accurately reproducing state points at extreme conditions, i.e. large negative and crushingly positive pressures. We will show that the NN reproduces thermodynamic, structural and dynamical properties of the mW liquid state, as well as structural properties of all the stable crystalline phases of mW water.
The paper is organized as follows. In Section II we describe the new atomic fingerprints and the details about the Neural Network potential implementation, including the _warm restart_ procedure used to train the weights and the fingerprints at the same time. In Section III we present the results, which include the accuracy of the models built from training sets that include one or three state points, and a comparison of the thermodynamic, structural and dynamic properties with those of the original mW model. We conclude in Section IV.
## II The Neural Network Model
The most important step in the design of a feed-forward neural network potential is the choice of how to define the first and the last layers of the network, respectively named the _input_ and _output_ layers. We start with the output layer, as it determines the NN potential architecture to be constructed. Here we follow the Behler-Parrinello NN potential architecture [18], in which the total energy of the system is decomposed as the sum of local fields (\(E_{i}\)), each one representing the contribution of a local environment centered around atom \(i\). Since this is a many-body contribution, it is important to note that \(E_{i}\) is not the energy of the single atom \(i\), but of all its environment (see also Appendix A). With this choice, the total energy of the system is simply the sum over all atoms, \(E=\sum E_{i}\), and the force \(\vec{f}_{i}\) acting on atom \(i\) is the negative gradient of the _total_ energy with respect to the coordinates \(\nu\) of atom \(i\), i.e. \(f_{i\nu}=-\partial E/\partial x_{i\nu}\). We point out that a NN potential is differentiable, and hence it is possible to evaluate the gradient of the energy analytically. This allows one to compute forces of the NN potential in the same way as for other force fields, i.e. as the negative gradient of the total potential energy.
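This analytic force evaluation can be sketched via automatic differentiation (here with TensorFlow; `nn_energy` is a hypothetical callable returning the total energy \(E=\sum_i E_i\)):

```python
import tensorflow as tf

def nn_forces(positions, nn_energy):
    # positions: (N, 3) tensor of atomic coordinates.
    with tf.GradientTape() as tape:
        tape.watch(positions)
        energy = nn_energy(positions)           # E = sum_i E_i
    return -tape.gradient(energy, positions)    # f_{i,nu} = -dE/dx_{i,nu}
```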
The input layer is built from two-body (distances) and three-body (angles) descriptors of the local environment, \(\vec{D}^{(i)}\) and \(\vec{T}^{(i)}\) respectively, ensuring translational and rotational invariance. The first layer of the neural network is the _Atomic Fingerprint Constructor_ (_AFC_), as shown in Fig. 1, which applies an exponential weighting on the atomic descriptors, restoring the invariance under permutations of atomic indexes. The outputs of this first layer are the atomic fingerprints (AFs) and in turn these are given to the first _hidden layer_. We will show how this organization of the AFC layer allows for the internal parameters of the exponential weighting to be trained together with the weights in the hidden layers of the network. In the following we describe in detail the construction of the inputs and the calculation flow in the first layers.
### The atomic fingerprints
The choice of input layer presents considerably more freedom, and it is here that we deviate from previous NN potentials. The data in this layer should retain all the information needed to properly evaluate forces and energies of the particles in the system, possibly exploiting the internal symmetries of the Hamiltonian (which in isotropic fluids are the rotational, translational and permutational invariance) to reduce the number of degenerate inputs. Given that the output was chosen as \(E_{i}\), the energy of the atomic environment surrounding atom \(i\), the input uses an atom-centered representation of the local environment of atom \(i\).
In the input layer, we define an atom-centered representation of the local environment of atom \(i\), considering both the distances \(r_{ij}\) with the nearest neighbours \(j\) within a spatial cut-off \(R_{c}\), and the angles \(\theta_{jik}\) between atom \(i\) and the pair of neighbours \(jk\) that are within a cut-off \({R_{c}}^{\prime}\). More precisely, for each atom \(j\) within \(R_{c}\) from \(i\) we calculate the following descriptors
\[D_{j}^{(i)}(r_{ij};R_{c})=\begin{cases}\frac{1}{2}\left[1+\cos\left(\pi\frac{r _{ij}}{R_{c}}\right)\right]&r_{ij}\leq R_{c}\\ 0&r_{ij}>R_{c}\end{cases} \tag{1}\]
and, for each triplet \(j-i-k\) within \({R_{c}}^{\prime}\) from \(i\),
\[T_{jk}^{(i)}(r_{ij},r_{ik},\theta_{jik})= \tag{2}\] \[\frac{1}{2}\left[1+\cos\left(\theta_{jik}\right)\right]\ D_{j}^{(i )}(r_{ij};{R_{c}}^{\prime})\ D_{k}^{(i)}(r_{ik};{R_{c}}^{\prime})\]
Here \(i\) indicates the label of the \(i\)-th particle, while indexes \(j\) and \(k\) run over all other particles in the system. In Eq. 1, \(D_{j}^{(i)}(r_{ij};R_{c})\) is a function that goes continuously to zero at the cut-off (including its derivatives). The choice of this functional form guarantees that \(D_{j}^{(i)}\) is able to express contributions even from neighbours close to the cut-off. Other choices, based on polynomials or other non-linear functions, have been tested in the past [31]. For example, we tested a parabolic cutoff function, which produced considerably worse results than the cutoff function in Eq. 1. The function \(T_{jk}^{(i)}(r_{ij},r_{ik},\theta_{jik})\) is also continuous at the triplet cutoff \({R_{c}}^{\prime}\). The angular function \(\frac{1}{2}\left[1+\cos\left(\theta_{jik}\right)\right]\) guarantees that \(0\leq T_{jk}^{(i)}(r_{ij},r_{ik},\theta_{jik})\leq 1\). We note that the use of relative distances and angles in Eqs. 1-2 guarantees translational and rotational invariance.
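For illustration, the descriptors of Eqs. 1-2 can be coded as follows (a sketch, not our production implementation):

```python
import numpy as np

def pair_descriptor(r, Rc):
    # Eq. 1: smooth cosine cutoff, going to zero (with zero slope) at r = Rc.
    return np.where(r <= Rc, 0.5 * (1.0 + np.cos(np.pi * np.minimum(r, Rc) / Rc)), 0.0)

def triplet_descriptor(r_ij, r_ik, theta_jik, Rc3):
    # Eq. 2: bounded in [0, 1] and continuous at the triplet cutoff Rc3.
    return (0.5 * (1.0 + np.cos(theta_jik))
            * pair_descriptor(r_ij, Rc3) * pair_descriptor(r_ik, Rc3))
```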
The pair and triplet descriptors are then fed to the _AFC layer_ to compute the atomic fingerprints (AFs). These are computed by projecting the \(D_{j}^{(i)}\) and \(T_{jk}^{(i)}\) descriptors on an exponential set of functions defined by
Figure 1: Schematic representation of the Neural Network Potential flow. (A) Starting from the relative distances and the triplet angles between neighbouring atoms, the input layer evaluates the atomic descriptors \(\vec{D}^{(i)}=\{D_{j}^{(i)}\}\) (Eq. 1) and \(\mathbf{T}^{(i)}=\{T_{jk}^{(i)}\}\) (Eq. 2). (B) The first layer is the Atomic Fingerprint Constructor (AFC) and it combines the atomic descriptors into atomic fingerprints, weighting them with an exponential function. The red nodes perform the calculation of Eq. 5, where from the two-body descriptors a weighting vector \(\vec{D}_{w}^{(i)}(\alpha)=\{e^{\alpha D_{j}^{(i)}}\}\) is calculated (square with \(\alpha\)), then the scalar product \(\vec{D}^{(i)}\cdot\vec{D}_{w}^{(i)}(\alpha)\) is computed (square with point) and finally a logarithm is applied (circle). The blue nodes perform the calculation of Eq. 7, where two weighting vectors are calculated from the two-body descriptors, namely \(\vec{D}_{w}^{(i)}(\gamma)\) and \(\vec{D}_{w}^{(i)}(\delta)\), and one weighting matrix from the three-body descriptors, \(\mathbf{T}_{w}^{(i)}(\beta)=\{e^{\beta T_{jk}^{(i)}}/2\}\). Finally, in the compression unit (Eq. 6) the values are combined as \(0.5[\vec{D}^{(i)}\circ\vec{D}_{w}^{(i)}(\gamma)]^{T}[\mathbf{T}^{(i)}\circ\mathbf{T}_{w}^{(i)}(\beta)][\vec{D}^{(i)}\circ\vec{D}_{w}^{(i)}(\delta)]\), where we use the circle symbol for the element-wise multiplication. The output value of the compression unit is given to the logarithm function (circle). The complete network (D) is made of ten AFC units and two hidden layers with 25 nodes per layer, and is depicted here 2.5 times smaller.
\[\overline{D}^{(i)}(\alpha) = \ln\left[\sum_{j\neq i}D_{j}^{(i)}e^{\alpha D_{j}^{(i)}}+\epsilon \right]-Z_{\alpha} \tag{3}\] \[\overline{T}^{(i)}(\beta,\gamma,\delta) = \ln\left[\sum_{j\neq k\neq i}\frac{T_{jk}^{(i)}e^{\beta T_{jk}^{(i )}}e^{\gamma D_{j}^{(i)}}e^{\delta D_{k}^{(i)}}}{2}+\epsilon\right]\] (4) \[-Z_{\beta\gamma\delta}\]
These AFs are built summing over all pairs and all triplets involving particle \(i\), making them invariant under permutations, and multiplying each descriptor by an exponential filter whose parameters are called \(\alpha\) for distance AFs, and \(\beta,\ \gamma,\ \delta\) for the triplet AFs. These parameters play the role of feature selectors, i.e. by choosing an appropriate list of \(\alpha,\ \beta,\ \gamma,\ \delta\) the AFs can extract the necessary information from the atomic descriptors. The best choice of \(\alpha,\ \beta,\ \gamma,\ \delta\) will emerge automatically during the training stage. In Eqs. 3-4, the number \(\epsilon\) is set to \(10^{-3}\) and fixes the value of energy in the rare event that no neighbors are found inside the cutoff. Parameters \(Z_{\alpha}\) and \(Z_{\beta\gamma\delta}\) are optimized during the training process, shifting the AFs towards positive or negative values, and act as normalization factors that improve the representation of the NN.
The definitions in equations 3-4 can be reformulated in terms of product between vectors and matrices in the following way. The descriptors in equations 1-2 for particle i can be represented as a vector \(\vec{D}^{(i)}=\{D_{j}^{(i)}\}\) and a matrix \(\mathbf{T}^{(i)}=\{T_{jk}^{(i)}\}\) respectively. Given a choice of \(\alpha\), \(\beta\), \(\gamma\) and \(\delta\), three weighting vector \(\vec{D}_{w}^{(i)}(\alpha)=\{e^{\alpha D_{j}^{(i)}}\}\), \(\vec{D}_{w}^{(i)}(\gamma)=\{e^{\gamma D_{j}^{(i)}}\}\) and \(\vec{D}_{w}^{(i)}(\delta)=\{e^{\delta D_{j}^{(i)}}\}\) and one weighting matrix \(\mathbf{T}_{w}^{(i)}(\beta)=\{e^{\beta T_{jk}^{(i)}}/2\}\) are calculated from \(\vec{D}^{(i)}\) and \(\mathbf{T}^{(i)}\). The 2-body atomic fingerprint (Eq. 3) is finally computed as
\[\overline{D}^{(i)}(\alpha)=\ln\left[\vec{D}^{(i)}\cdot\vec{D}_{w}^{(i)}( \alpha)+\epsilon\right]-Z_{\alpha} \tag{5}\]
The 3-body atomic fingerprint (Eq. 4) is computed first by what we call _compression_ step in Fig. 1 as
\[\overline{T}_{c}^{(i)}=\frac{[\vec{D}^{(i)}\circ\vec{D}_{w}^{(i)}(\gamma)]^{T }[\mathbf{T}^{(i)}\circ\mathbf{T}_{w}^{(i)}(\beta)][\vec{D}^{(i)}\circ\vec{D}_{w}^{(i )}(\delta)]}{2} \tag{6}\]
and finally by
\[\overline{T}^{(i)}(\beta,\gamma,\delta)=\ln\left[\overline{T}_{c}^{(i)}(\beta, \gamma,\delta)+\epsilon\right]-Z_{\beta\gamma\delta} \tag{7}\]
where we use the circle symbol for the element-wise multiplication. The NN potential flow is depicted in Figure 1 following the vectorial representation.
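A compact numpy sketch of Eqs. 5-7 for a single atom is the following (`D` and `T` are the descriptor vector and matrix of that atom; the single overall factor 1/2 follows Eq. 4):

```python
import numpy as np

def two_body_af(D, alpha, Z_a, eps=1e-3):
    # Eq. 5: exponentially weighted sum of pair descriptors, then log and shift.
    return np.log(D @ np.exp(alpha * D) + eps) - Z_a

def three_body_af(D, T, beta, gamma, delta, Z_bgd, eps=1e-3):
    # Eqs. 6-7: weighted vector-matrix-vector contraction (compression step),
    # then log and shift; the 1/2 compensates the double counting of the
    # (j, k) and (k, j) triplets.
    Tc = 0.5 * (D * np.exp(gamma * D)) @ (T * np.exp(beta * T)) @ (D * np.exp(delta * D))
    return np.log(Tc + eps) - Z_bgd
```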
In summary, our AFs select the local descriptors useful for the reconstruction of the potential by weighting them with an exponential factor tuned by the exponents \(\alpha\), \(\beta\), \(\gamma\), \(\delta\). A similar weighting procedure has been shown to be extremely powerful in the selection of complex patterns and is widely applied in the so-called _attention layer_ first introduced by Google Brain [66]. However, the AFC layer imposes additional, physically motivated constraints on the neural network representation.
We note that the expression for the system energy is a sum over the fields \(E_{i}\), but the local fields \(E_{i}\) are not additive energies, as they involve all the pair distances and triplet angles within the cut-off sphere centered on particle \(i\). This non-additive feature favours the NN's ability to capture higher-order correlations (multi-body contributions to the energy), and has been shown to outperform additive models on complex datasets [67]. The non-additivity of the NN requires the derivative of the whole energy \(E\) (as opposed to \(E_{i}\)) to estimate the force on a particle \(i\). In this way, contributions to the force on particle \(i\) come not only from the descriptors of \(i\) but also from the descriptors of all particles that have \(i\) as a neighbour, de facto enlarging the effective region in space where interactions between particles are included. This allows the network to include contributions from length-scales larger than the cutoffs that define the atomic descriptors. Appendix A provides further information on this point.
### Hidden layers
We employ a standard feed-forward fully-connected neural network composed of two hidden layers with 25 nodes per layer and using the hyperbolic tangent (tanh) as the activation function. The nodes of the first hidden layer are fully connected to the ones in the second layer, and these connections have associated weights \(W\) which are optimized during the training stage.
The input of the first hidden layer is given by the AFC layer, where we use five nodes for the two-body AFs (Eq. 3) and five nodes for the three-body AFs (Eq. 4), for a total of 10 AFs for each atom. We explore the performance of several combinations of the numbers of two-body and three-body AFs in Appendix D and find that the choice of five and five is the most efficient.
The output is the local field \(E_{i}\), for each atomic environment \(i\), whose sum \(E=\sum_{i=1}^{N}E_{i}\) represents the NN estimate of the potential energy \(E\) of the whole system.
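A minimal sketch of this feed-forward stage follows; the biases are an assumption on our part (the text only discusses the weights \(W\)), and a linear output node is assumed:

```python
import numpy as np

def nn_energy(AF, params):
    """Sketch of the feed-forward part: the 10 fingerprints of each atom
    pass through two tanh hidden layers (25 nodes each) to give the local
    field E_i; the total energy is E = sum_i E_i.

    AF     : (N, 10) fingerprints for the N atoms
    params : dict with weights W1 (10, 25), W2 (25, 25), W3 (25, 1)
             and (assumed) biases b1 (25,), b2 (25,), b3 (1,)
    """
    h1 = np.tanh(AF @ params["W1"] + params["b1"])   # first hidden layer
    h2 = np.tanh(h1 @ params["W2"] + params["b2"])   # second hidden layer
    E_i = h2 @ params["W3"] + params["b3"]           # local fields, (N, 1)
    return E_i.sum()                                 # NN estimate of E
```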
### Loss function and training strategy
To train the NN-potential we minimize a loss function computed over \(n_{f}\) frames, i.e. the number of independent configurations extracted from an equilibrium simulation of the liquid phase of the target potential (in our case the mW potential). The loss function is the sum of two contributions.
The first contribution, \(H[\{\Delta\epsilon^{k},\Delta f_{i\nu}^{k}\}]\), expresses the difference in each frame \(k\) between the NN estimates and the target values for both the total potential energy (normalized by total number of atoms) \(\epsilon^{k}\) and the atomic
forces \(f_{i\nu}^{k}\) acting in direction \(\nu\) on atom \(i\). The \(n_{f}\) energy \(\epsilon^{k}\) values and \(3Nn_{f}\) force \(f_{i\nu}^{k}\) values are combined in the following expression
\[H[\{\Delta\epsilon^{k},\Delta f_{i\nu}^{k}\}]=\frac{p_{e}}{n_{f}}\sum_{k=1}^{n_{f}}h_{\text{Huber}}(\Delta\epsilon^{k})+\frac{p_{f}}{3Nn_{f}}\sum_{k=1}^{n_{f}}\sum_{i=1}^{N}\sum_{\nu=1}^{3}h_{\text{Huber}}(\Delta f_{i\nu}^{k}) \tag{8}\]
where \(p_{e}=0.1\) and \(p_{f}=1\) control the relative contribution of the energy and the forces to the loss function, and \(h_{\text{Huber}}(x)\) is the so-called Huber function
\[h_{\text{Huber}}(x)=\begin{cases}0.5x^{2}\;\text{ if }|x|\leq 1\\ 0.5+(|x|-1)\;\text{ if }|x|>1\end{cases} \tag{9}\]
\(p_{e}\) and \(p_{f}\) are hyper-parameters of the model, and we selected them with some preliminary tests that found those values to be near the optimal ones. The Huber function [68] is an optimal choice whenever the exploration of the loss function goes through large errors caused by outliers, i.e. data points that differ significantly from previous inputs. Indeed, when a large deviation between the model and the data occurs, a mean-square-error minimization may give rise to an anomalous trajectory in parameter space, largely affecting the stability of the training procedure. This may happen especially in the first part of the training procedure, when the parameter optimization, relaxing on both the energy and force error surfaces, may experience some instabilities.
The second contribution to the loss function is a regularization function, \(R[\{\alpha^{l},\beta^{m},\gamma^{m},\delta^{m}\}]\), that serves to limit the range of positive values of \(\alpha^{l}\) and of the triplets \(\beta^{m},\gamma^{m},\delta^{m}\) (where the indexes \(l\) and \(m\) run over the five different values of \(\alpha\) and the five different triplets of values for \(\beta\), \(\gamma\) and \(\delta\)) to the window \(-\infty\) to \(5\). To this aim we select a shifted version of the commonly used relu function
\[r_{\text{relu}}(x)=\begin{cases}x-5\;\text{ if }x>5\\ 0\;\text{ if }x\leq 5\end{cases} \tag{10}\]
and write
\[R[\{\alpha^{l},\beta^{m},\gamma^{m},\delta^{m}\}]=\sum_{l=1}^{5}r_{\text{relu}}(\alpha^{l})+\sum_{m=1}^{5}\left[r_{\text{relu}}(\beta^{m})+r_{\text{relu}}(\gamma^{m})+r_{\text{relu}}(\delta^{m})\right] \tag{12}\]
Thus, the \(R\) function is activated whenever one of the parameters of the _AFC layer_ becomes larger than \(5\) during the minimization.
To summarize, the global loss function \(\mathcal{L}\) used in the training of the NN is
\[\mathcal{L}[\epsilon,f]=H[\{\Delta\epsilon^{k},\Delta f_{i\nu}^{k}\}]+p_{b}R [\{\alpha^{l},\beta^{m},\gamma^{m},\delta^{m}\}] \tag{13}\]
where \(p_{b}=1\) weights the relative contribution of \(R\) compared to \(H\).
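Putting Eqs. 8-13 together, a compact sketch of the loss evaluation reads as follows; the array shapes and function names are illustrative assumptions:

```python
import numpy as np

def huber(x):
    """Huber function of Eq. 9, applied element-wise."""
    ax = np.abs(x)
    return np.where(ax <= 1.0, 0.5 * x**2, 0.5 + (ax - 1.0))

def relu5(x):
    """Shifted relu of Eq. 10: activates only above 5."""
    return np.maximum(np.asarray(x) - 5.0, 0.0)

def loss(d_eps, d_f, sigma_params, p_e=0.1, p_f=1.0, p_b=1.0):
    """Global loss of Eq. 13.

    d_eps        : (n_f,) per-frame energy errors
    d_f          : (n_f, N, 3) per-component force errors
    sigma_params : flat array of all alpha, beta, gamma, delta exponents
    """
    # Eq. 8: means over frames and over all 3*N*n_f force components
    H = p_e * huber(d_eps).mean() + p_f * huber(d_f).mean()
    R = relu5(sigma_params).sum()        # Eq. 12
    return H + p_b * R
```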
Compared to a standard NN-potential, we train not only the network weights \(W\) but also the AF parameters \(\Sigma\equiv\{\alpha^{l},\beta^{m},\gamma^{m},\delta^{m}\}\) at the same time. The simultaneous optimization of the weights \(W\) and of the AFs \(\Sigma\) prevents possible bottlenecks in the optimization of \(W\) at a fixed representation \(\Sigma\). Other NN potential approaches implement a separate initial procedure to optimize the \(\Sigma\) parameters, followed by the optimization of \(W\) at fixed \(\Sigma\) [69]. The two-step procedure not only requires a specific methodological choice for optimizing \(\Sigma\), but may also not result in the optimal values, compared to a search in the full parameter space (i.e. both \(\Sigma\) and \(W\)). Since the complexity of the loss function has increased, we have investigated in some detail efficient strategies that lead to a fast and accurate training. Firstly, we initialize the parameters \(W\) via the Xavier algorithm, in which the weights are extracted from a random uniform distribution [70]. To initialize the \(\Sigma\) parameters we use a uniform distribution in the interval \([-5,5]\). We then minimize the loss function using the _warm restart procedure_ proposed in reference [71]. In this procedure, the learning rate \(\eta\) is reinitialized at every cycle \(l\) and inside each cycle it decays as a function of the number of training steps \(t\) following
\[\eta^{(l)}(t)=A_{l}\left\{\frac{(1-\xi_{f})}{2}\left[1+\cos\left(\frac{\pi t}{T_{l}}\right)\right]+\xi_{f}\right\},\qquad 0\leq t\leq T_{l} \tag{14}\]
where \(\xi_{f}=10^{-7}\), \(A_{l}=\eta_{0}\xi_{0}^{l}\) is the initial learning rate of the \(l\)-th cycle with \(\eta_{0}=0.01\) and \(\xi_{0}=0.9\), and \(T_{l}=b\tau^{l}\) is the period of the \(l\)-th cycle with \(\tau=1.4\) and \(b=40\). The absolute number of training steps \(n\) during cycle \(l\) can be calculated by summing over the lengths of all previous cycles as \(n=t+\sum_{m=0}^{l-1}T_{m}\).
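The schedule of Eq. 14 can be written directly; the sketch below uses the hyper-parameter values quoted in the text and is purely illustrative:

```python
import numpy as np

def warm_restart_lr(l, t, eta0=0.01, xi0=0.9, xi_f=1e-7, b=40.0, tau=1.4):
    """Learning rate of Eq. 14 at step t (0 <= t <= T_l) of cycle l."""
    A_l = eta0 * xi0**l          # initial learning rate of the l-th cycle
    T_l = b * tau**l             # period of the l-th cycle
    return A_l * ((1.0 - xi_f) / 2.0 * (1.0 + np.cos(np.pi * t / T_l)) + xi_f)
```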
We evaluate the loss function over groups of four frames (mini-batches). We randomly select \(n_{f}=200\) frames for a system of \(1000\) atoms and split this dataset into \(160\) frames (80%) for the training set and \(40\) frames (20%) for the test set.
In Fig. 2(A) we represent the typical decay of the learning rate of the warm restart procedure, which will be compared to the standard exponential decay protocol in the Results section.
### The Target Model
To test the quality of the proposed novel NN, we train it with data produced with the mW [40] model
of water. This potential, a re-parametrization of the Stillinger-Weber model for silicon [72], uses a combination of pairwise functions complemented with an additive three-body potential term
\[E=\sum_{i}\sum_{j>i}U_{2}(r_{ij})+\lambda\sum_{i}\sum_{j\neq i}\sum_{k>j}U_{3}\left(r_{ij},r_{ik},\theta_{jik}\right) \tag{15}\]
where the two body contribution between two particles \(i\) and \(j\) at relative distance \(r_{ij}\) is a generalized Lennard-Jones potential
\[U_{2}\left(r_{ij}\right)=A\epsilon\left[B\left(\frac{\sigma}{r_{ij}}\right)^{p}-\left(\frac{\sigma}{r_{ij}}\right)^{q}\right]\exp\left(\frac{\sigma}{r_{ij}-a\sigma}\right) \tag{16}\]
where the \(p=12\) and \(q=6\) powers are substituted by \(p=4\) and \(q=0\), multiplied by an exponential cut-off that brings the potential to zero at \(a\sigma\), with \(a=1.8\) and \(\sigma=2.3925\) Å. \(A\epsilon\) (with \(A=7.049556277\) and \(\epsilon=6.189\) kcal mol\({}^{-1}\)) controls the strength of the two-body part, while \(B=0.6022245584\) controls the two-body repulsion.
The three body contribution is computed from all possible ordered triplets formed by the central particle with the interacting neighbors (with the same cut-off \(a\sigma\) as the two-body term) and favours the tetrahedral coordination of the atoms via the following functional form
\[U_{3}\left(r_{ij},r_{ik},\theta_{jik}\right)=\epsilon\left[\cos \left(\theta_{jik}\right)-\cos\left(\theta_{0}\right)\right]^{2}\times\\ \exp\left(\frac{\gamma\sigma}{r_{ij}-a\sigma}\right)\exp\left( \frac{\gamma\sigma}{r_{ik}-a\sigma}\right) \tag{17}\]
where \(\theta_{jik}\) is the angle formed in the triplet \(jik\) and \(\gamma=1.2\) controls the smoothness of the cut-off function on approaching the cut-off. Finally, \(\theta_{0}=109.47^{\circ}\) sets the preferred angle and \(\lambda=23.15\) controls the strength of the angular part of the potential.
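For reference, the two- and three-body terms of Eqs. 16-17 with the quoted parameters can be sketched as follows; this is a minimal illustration, not the production code used for the simulations:

```python
import numpy as np

# mW parameters quoted in the text (energies in kcal/mol, lengths in angstrom)
A, B = 7.049556277, 0.6022245584
EPS, SIGMA, CUT = 6.189, 2.3925, 1.8 * 2.3925   # cutoff a*sigma
GAM, THETA0, LAM = 1.2, np.radians(109.47), 23.15

def U2(r):
    """Two-body term of Eq. 16 with p = 4, q = 0 (valid for r < CUT)."""
    return A * EPS * (B * (SIGMA / r)**4 - 1.0) * np.exp(SIGMA / (r - CUT))

def U3(rij, rik, theta_jik):
    """Three-body term of Eq. 17 (the lambda prefactor of Eq. 15 is
    applied outside, in the triple sum over triplets within the cutoff)."""
    return (EPS * (np.cos(theta_jik) - np.cos(THETA0))**2
            * np.exp(GAM * SIGMA / (rij - CUT))
            * np.exp(GAM * SIGMA / (rik - CUT)))
```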
Figure 3: Comparison of the root mean square error calculated on the validation set for 60 replicas differing in the initial seed of the training procedure using both an exponential decay of the learning rate (points) and the warm restart method (squares), for the energy (panel A) and for the forces (panel B). For the forces, a significant improvement both in the average error and in its variance is found for the warm restart schedule.
Figure 2: Model convergence properties: (A) Learning rate schedule (Eq. 14) as a function of the absolute training step \(n\) (one step is defined as an update of the network parameters). (B) The training and validation loss (see \(\mathcal{L}[\epsilon,f]\) in Eq. 13) evolution during the training procedure, reported as a function of the number of epoch \(n_{e}\) (an epoch is defined as a complete evaluation of the training dataset). Root mean square (RMS) error of the total potential energy per particle (C) and of the force cartesian components (D) during the training evaluated in the test dataset. Data in panels B-C-D refers to the NN3 model and the green point shows the best model location.
The mW model, with its three-body terms centered around a specific angle and non-monotonic radial interactions, is based on a functional form which is quite different from the radial and angular descriptors selected in the NN model. The NN is thus agnostic with respect to the functional form that describes the physical system (the mW in this case). But having a reference model with explicit three body contributions offers a more challenging target for the NN potential compared to potential models built entirely from pairwise interactions. The mW model is thus an excellent candidate to test the performance of the proposed NN potential.
## III Results
### Training
We study two different NN models, indicated with the labels NN1 and NN3, differing in the number of state points included in the training set. These two models are built with a cut-off of \(R_{c}=4.545\) Å for the two-body atomic descriptors and a cut-off of \(R_{c}^{\prime}=4.306\) Å for the three-body atomic descriptors. \(R_{c}^{\prime}\) is the same as the mW cutoff, while \(R_{c}\) was made slightly larger to mitigate the suppression of information at the boundaries by the cutoff functions. The NN1 model uses only training information based on mW equilibrium configurations from one state point at \(\rho_{1}=1.07\) g cm\({}^{-3}\), \(T_{1}=270.9\) K, where the stable phase is the liquid. The NN3 model uses training information based on mW liquid configurations at three different state points: two state points at \(\rho_{1}=0.92\) g cm\({}^{-3}\), \(T_{1}=221.1\) K and \(\rho_{2}=0.92\) g cm\({}^{-3}\), \(T_{2}=270.9\) K, where the stable solid phase is the clathrate Si34/Si136 [73], and one state point at \(\rho_{3}=1.15\) g cm\({}^{-3}\), \(T_{2}=270.9\) K.
This choice of points in the phase diagram is aimed at improving agreement with the low-temperature, low-density as well as the high-density regions of the phase diagram. Importantly, all configurations come from either stable or metastable liquid state configurations. Indeed, the point at \(\rho_{2}=0.92\) g cm\({}^{-3}\), \(T_{2}=270.9\) K is quite close to the limit of stability (with respect to cavitation) of the liquid state.
To generate the training set, we simulate a system of \(N=1000\) mW particles with a standard molecular dynamics code in the NVT ensemble, where we use a time step of 4 fs and run \(10^{7}\) steps for each state point. From these trajectories, we randomly select 200 configurations (frames) to create a dataset of positions, total energies and forces. We then split the dataset into the _training_ and the _test_ datasets, the first one containing 80% of the data. We then run the training for 4000 epochs with a minibatch of 4 frames. At the end of every epoch, we check whether the validation loss has improved and, if so, we save the model parameters. In Fig. 2 we plot the loss function for the training and test datasets (B), the root mean square error of the total energy per particle (C), and of the force (D) for the NN3 model. The results show that the learning rate schedule of Eq. 14 is very effective in reducing both the loss and error functions.
Interestingly, the neural network seems to avoid overfitting (i.e. the validation loss is decreasing at the same rate as the loss on the training data), and the best model (deepest local minimum explored), in a given window of training steps, is always found at the end of that window, which also indicates that the accuracy could be further improved by running more training steps. Indeed we found that by increasing the number of training steps by one order of magnitude the error in the forces decreases by a further 30%. Similar accuracy of the training stage is obtained also for the NN1 model (not shown).
The training procedure always terminates with an error on the test set equal to or less than \(\Delta\epsilon\simeq 0.01\) kcal mol\({}^{-1}\) (0.43 meV) for the energy, and \(\Delta f\simeq 1.55\) kcal mol\({}^{-1}\) nm\({}^{-1}\) (6.72 meV Å\({}^{-1}\)) for the forces. These values are comparable to state-of-the-art NN potentials [54; 55; 61; 23], and within the typical accuracy of DFT calculations [74].
We can compare the precision of our model with that of alternative NN potentials trained on a range of water models. An alternative mW neural network potential has been trained on a dataset made of 1991 configurations of a 128-particle system at different pressures and temperatures (including both liquid and ice structures) with Behler-Parrinello symmetry functions [24]. The training of this model (which uses more atomic fingerprints and a larger cutoff radius) converged to an error in energy of \(\Delta\epsilon\simeq 0.0062\) kcal mol\({}^{-1}\) (0.27 meV), and \(\Delta f\simeq 3.46\) kcal mol\({}^{-1}\) nm\({}^{-1}\) (15.70 meV Å\({}^{-1}\)) for the forces. In a recent work searching for liquid-liquid transition signatures in an ab-initio water NN model [55], a dataset of configurations spanning a temperature range of \(0-600\) K and a pressure range of \(0-50\) GPa was selected. For a system of 192 particles, the training converged to an error in energy of \(\Delta\epsilon\simeq 0.010\) kcal mol\({}^{-1}\) (0.46 meV), and \(\Delta f\simeq 9.96\) kcal mol\({}^{-1}\) nm\({}^{-1}\) (43.2 meV Å\({}^{-1}\)) for the forces. In the NN model of MB-POL [61], a dataset spanning a temperature range from 198 K to 368 K at ambient pressure was selected. In this case, for a system of 256 water molecules, an accuracy of \(\Delta\epsilon\simeq 0.01\) kcal mol\({}^{-1}\) (0.43 meV) and \(\Delta f\simeq 10\) kcal mol\({}^{-1}\) nm\({}^{-1}\) (43.36 meV Å\({}^{-1}\)) was reached. Finally, the NN for water at \(T=300\) K used in Ref. [54] reached precisions of \(\Delta\epsilon\simeq 0.046\) kcal mol\({}^{-1}\) (2 meV) and \(\Delta f\simeq 25.36\) kcal mol\({}^{-1}\) nm\({}^{-1}\) (110 meV Å\({}^{-1}\)).
While a direct comparison between NN potentials trained on different reference potentials is not a valid test to rank the respective accuracies, the comparisons above show that our NN potential reaches a similar precision in energies, and possibly an improved error in the force estimation.
The accuracy of the NN potential could be further improved by extending the size of the dataset and the choice of the state points. In fact, while the datasets in Ref. [54; 55; 61] have been built with optimized procedures, the dataset used in this study was prepared by sampling just one (NN1) or three (NN3) state points. Also, the size of the dataset used in the present work is smaller than or comparable to those of Ref. [54; 55; 61].
In Fig. 3 we compare the error in the energies (A) and the forces (B) between sixty independent training runs using the standard exponential decay of the learning rate (points) and the warm restart protocol (squares). The figure shows that while the errors in the energy computations are comparable between the two methods, the warm restart protocol allows the forces to be computed with higher accuracy. Moreover we found that the warm restart procedure is less dependent on the initial seed and that it reaches deeper basins than the standard exponential cooling rate.
### Comparing NN1 with NN3
The NN potential model was implemented in a custom MD code that makes use of the tensorflow C API [75]. We adopted the same time step (4 fs), the same number of particles (\(N=1000\)) and the same number of steps (\(10^{7}\)) as for the simulations in the mW model.
As described in the Training Section, we compare the accuracy of two different training strategies: NN1, which was trained on a single state point, and NN3, which is instead trained on three different state points. In Fig. 4 we plot the energy error (\(\Delta\epsilon\)) between the NN potential and the mW model for both NN1 (panel A) and NN3 (panel B). Starting from NN1, we see that the model already provides an excellent accuracy for a large range of temperatures and for densities close to the training density. The biggest shortcoming of the NN1 model is at densities lower than the training density, where the NN potential model cavitates and does not retain the long-lived metastable liquid state displayed by the mW model. We speculate that this behaviour is due to the absence of low-density configurations in the training set, which prevents the NN potential model from correctly reproducing the attractive tails of the mW potential.
To overcome this limitation we have included two additional state points at low density in the NN3 model. In this case, Fig. 4B shows that NN3 provides a quite accurate reproduction of the energy in the entire explored density and temperature window (despite being trained only with data at \(\rho=0.92\) g cm\({}^{-3}\) and \(\rho=1.15\) g cm\({}^{-3}\)).
We can also compare the accuracy obtained during production runs against the accuracy reached during training, which was \(\Delta\epsilon\simeq 0.01\) kcal mol\({}^{-1}\). Fig. 4B shows that the error is of the order of 0.032 kcal mol\({}^{-1}\) (1.3 meV) for densities above the training-set density. In the density region between 0.92 and 1.15 g cm\({}^{-3}\), the error is even smaller, around 0.017 kcal mol\({}^{-1}\) (0.7 meV) at the lowest density boundary.
We can thus conclude that the NN3 model, which adds to the NN1 model information at lower density and temperature, in the region where tetrahedrality in the water structure is enhanced, is indeed capable of representing, with only three state points, a quite large region of the phase space, encompassing dense and stretched liquid states. This suggests that a training based on a few state points at the boundary of the density/temperature region which needs to be studied is sufficient to produce a high-quality NN model. In the following we focus entirely on the NN3 model.
### Comparison of thermodynamic, structural and dynamical quantities
In Fig. 5 we present a comparison of thermodynamic data between the mW model (squares) and its NN potential representation (points) across a wide range of state points. Fig. 5A plots the energy as a function of density for temperatures ranging from melting to deeply supercooled conditions. Perhaps the most interesting result is that the NN potential is able to capture the energy minimum, also called the _optimal network forming density_, which is a distinctive anomalous property of water and other _empty liquids_ [76].
Fig. 5(B) shows the pressure as a function of the temperature for different densities, comparing the mW with the NN3 model. Also the pressure shows a good agreement between the two models in the region of densities between \(\rho=0.92\) g cm\({}^{-3}\) and \(\rho=1.15\) g cm\({}^{-3}\), which, as for the energy, tends to deteriorate at \(\rho=1.22\) g cm\({}^{-3}\).
In the large density region explored, the structure of the liquid changes considerably. On increasing density, a transition takes place from tetrahedrally coordinated local structures, prevalent at low \(T\) and low \(\rho\), towards denser local environments
Figure 4: Comparison between the mW total energy and the NN1 model (A) and NN3 model (B) for different temperatures and densities. While the NN3 model is able to reproduce the mW total energy with good agreement in a wide region of densities and temperatures, the NN1 model provides a good representation only in a limited region of density and temperature values. Blue squares represent the state points used for building the NN models.
with interstitial molecules included in the first coordination shell. This structural change is well displayed in the radial distribution function, shown for different densities at fixed temperature in Fig. 6. Fig. 6 also shows the progressive onset of a peak around 3.5 Å developing on increasing pressure, which signals the growth of interstitial molecules coexisting with open tetrahedral local structures [77; 78]. At the highest density, the tetrahedral peak completely merges with the interstitial peak. The NN3 model reproduces quite accurately all features of the radial distribution functions, the positions of maxima and minima and their relative amplitudes, at all densities, from the tetrahedral-dominated to the interstitial-dominated limits. In general, the NN3 model reproduces the mW potential quite well in energies, pressures and structures, and it appreciably deviates from the mW pressures and energies only at densities (above 1.15 g cm\({}^{-3}\)) which are outside of the training region.
To assess the ability of the NN potential to correctly describe also the crystal phases of the mW potential, we compare in Fig. 7 the \(g(r)\) of mW with the \(g(r)\) of the NN3 model for four different stable solid phases [73]: hexagonal and cubic ice (\(\rho=1.00\) g cm\({}^{-3}\) and \(T=246\) K), the dense crystal SC16 (\(\rho=1.20\) g cm\({}^{-3}\) and \(T=234\) K) and the clathrate phase Si136 (\(\rho=0.80\) g cm\({}^{-3}\) and \(T=221\) K). The results in Fig. 7 show that, despite the fact that no crystal configurations were included in the training set, the NN3 model provides a quite accurate representation of the crystal structure at finite temperature for all the distinct sampled lattices.
Finally, we compare in Fig. 8 the diffusion coefficient (evaluated from the long time limit of the mean square displacement) for the mW and the NN3 model, in a wide range of temperatures and densities, where water displays a diffusion anomaly. Fig. 8 shows again that, also for
Figure 5: Comparison between the mW total energy and the NN3 total energy as a function of density along different isotherm (A) and comparison between the mW pressure and the NN3 pressure as a function of temperature along different isochores (B). The relative error of the NN vs the mW potential grows with density, but remains within 3% even for densities larger than the densities used in the training set.
Figure 6: Comparison between the mW radial distribution functions \(g(r)\) and the NN3 \(g(r)\) at \(T=270.9\) K for four different densities. The tetrahedral structure (signalled by the peak at 4.54 Å ) progressively weakens in favour of an interstitial peak progressively growing at \(3.5-3.8\) Å. Different \(g(r)\) have been progressively shifted by two to improve clarity.
Figure 7: Comparison between the mW radial distribution functions \(g(r)\) and the NN3 \(g(r)\) for four different lattices: (A) hexagonal diamond (the oxygen positions of the ice \(I_{h}\)); (B) cubic diamond (the oxygen positions of the ice \(I_{c}\)); (C) the SC16 crystal (the dense crystal form stable at large pressures in the mW model) and (D) the Si136 clathrate structure, which is stable at negative pressures in the mW model. Different \(g(r)\) have been progressively shifted by four to improve clarity.
dynamical quantities, the NN potential offers an excellent representation of the mW potential, despite the fact that no dynamical quantity was included in the training set. A comparison between fluctuations of energy and pressure of mW and NN3 potential is reported in Appendix B.
## IV Conclusions
In this work we have presented a novel neural network (NN) potential based on a new set of atomic fingerprints (AFs) built from two- and three-body local descriptors that are combined in a permutation-invariant way through an exponential filter (see Eqs. 3-4). One of the distinctive advantages of our scheme is that the AFs' parameters are optimized during the training procedure, making the present algorithm a self-training network that automatically selects the best AFs for the potential of interest.
We have shown that the added complexity of the concurrent training of the AFs and of the NN weights can be overcome with an annealing procedure based on the warm restart method [71], where the learning rate goes through damped oscillatory ramps. This strategy not only gives better accuracy compared to the commonly implemented exponential learning rate decay, but also allows the training procedure to converge rapidly, independently of the initialization strategies of the model's parameters.
Moreover, we show in Appendix C that the potential hyper-surface of the NN model has the same smoothness as the target model, as confirmed by (i) the possibility of using the same timestep in the NN and in the target model when integrating the equations of motion, and (ii) the possibility of simulating the NN model even in the NVE ensemble with proper energy conservation.
We test the novel NN on the mW model [40], a one-component model system commonly used to describe water in classical simulations. This model, a re-parametrization of the Stillinger-Weber model for silicon [72], while treating the water molecule as a simple point, is able to reproduce the characteristic tetrahedral local structure of water (and its distortion on increasing density) via the use of three-body interactions. Indeed water changes from a liquid of tetrahedrally coordinated molecules to a denser liquid, in which a relevant fraction of interstitial molecules are present in the first nearest-neighbour shell. The complexity of the mW model, both due to its functional form as well as to the variety of different local structures which characterise water, makes it an ideal benchmark system to test our NN potential.
We find that a training based on configurations extracted from three different state points is able to provide a quite accurate representation of the mW potential hyper-surface, when the densities and temperatures of the training state points delimit the region in which the NN potential is expected to work. We also find that the error in the NN estimate of the total energy is low, always smaller than \(0.03\) kcal mol\({}^{-1}\), with a mean error of \(0.013\) kcal mol\({}^{-1}\). The NN model reproduces very well not only the thermodynamic properties but also the structural properties, as quantified by the radial distribution function, and the dynamic properties, as expressed by the diffusion coefficient, in the extended density interval from \(\rho=0.92\) g cm\({}^{-3}\) to \(\rho=1.22\) g cm\({}^{-3}\).
Interestingly, we find that the NN model, trained only on disordered configurations, is also able to properly describe the radial distribution of the ordered lattices which characterise the mW phase diagram, encompassing the cubic and hexagonal ices, the SC16 and the Si136 clathrate structure [73]. In this respect, the ability of the NN model to properly represent crystal states suggests that, in the case of the mW, and as such probably in the case of water, the geometrical information relevant to the ordered structures is contained in the sampling of phase space typical of the disordered liquid phase. These findings have been recently discussed in reference [80] where it has been demonstrated that liquid water contains all the building blocks of diverse ice phases.
We conclude by noticing that the present approach can be generalized to multicomponent systems, following the same strategy implemented by previous approaches [23; 18]. Work in this direction is underway.
###### Acknowledgements.
FGM and JR acknowledge support from the European Research Council Grant DLV-759187 and CINECA grant ISCAB NNPROT.
Figure 8: Comparison between the mW diffusion coefficient \(D\) and the corresponding NN3 quantity for different temperatures and densities, in the interval \(221-271\) K. For this dynamic quantity, the relative error is, for all temperatures, around \(8\%\). Note also that in this \(T\) window the diffusion coefficient shows a clear maximum, reproducing the well-known diffusion anomaly of water. Diffusion coefficients have been calculated in the NVT ensemble using the same Andersen thermostat algorithm [79] for the mW and NN3 potentials.
2303.07609 | Training Robust Spiking Neural Networks with ViewPoint Transform and
SpatioTemporal Stretching | Neuromorphic vision sensors (event cameras) simulate biological visual
perception systems and have the advantages of high temporal resolution, less
data redundancy, low power consumption, and large dynamic range. Since both
events and spikes are modeled from neural signals, event cameras are inherently
suitable for spiking neural networks (SNNs), which are considered promising
models for artificial intelligence (AI) and theoretical neuroscience. However,
the unconventional visual signals of these cameras pose a great challenge to
the robustness of spiking neural networks. In this paper, we propose a novel
data augmentation method, ViewPoint Transform and SpatioTemporal Stretching
(VPT-STS). It improves the robustness of SNNs by transforming the rotation
centers and angles in the spatiotemporal domain to generate samples from
different viewpoints. Furthermore, we introduce the spatiotemporal stretching
to avoid potential information loss in viewpoint transformation. Extensive
experiments on prevailing neuromorphic datasets demonstrate that VPT-STS is
broadly effective on multi-event representations and significantly outperforms
pure spatial geometric transformations. Notably, the SNNs model with VPT-STS
achieves a state-of-the-art accuracy of 84.4\% on the DVS-CIFAR10 dataset. | Haibo Shen, Juyu Xiao, Yihao Luo, Xiang Cao, Liangqi Zhang, Tianjiang Wang | 2023-03-14T03:09:56Z | http://arxiv.org/abs/2303.07609v1 | # Training Robust Spiking Neural Networks with Viewpoint Transform and Spatiotemporal Stretching
###### Abstract
Neuromorphic vision sensors (event cameras) simulate biological visual perception systems and have the advantages of high temporal resolution, less data redundancy, low power consumption, and large dynamic range. Since both events and spikes are modeled from neural signals, event cameras are inherently suitable for spiking neural networks (SNNs), which are considered promising models for artificial intelligence (AI) and theoretical neuroscience. However, the unconventional visual signals of these cameras pose a great challenge to the robustness of spiking neural networks. In this paper, we propose a novel data augmentation method, ViewPoint Transform and SpatioTemporal Stretching (VPT-STS). It improves the robustness of SNNs by transforming the rotation centers and angles in the spatiotemporal domain to generate samples from different viewpoints. Furthermore, we introduce the spatiotemporal stretching to avoid potential information loss in viewpoint transformation. Extensive experiments on prevailing neuromorphic datasets demonstrate that VPT-STS is broadly effective on multi-event representations and significantly outperforms pure spatial geometric transformations. Notably, the SNNs model with VPT-STS achieves a state-of-the-art accuracy of 84.4% on the DVS-CIFAR10 dataset.
Haibo Shen\({}^{1}\), Juyu Xiao\({}^{1}\), Yihao Luo\({}^{2,1}\), Xiang Cao\({}^{3,1}\), Liangqi Zhang\({}^{1}\), Tianjiang Wang\({}^{1}\)+School of Huazhong University of Science and Technology\({}^{1}\)
Yichang Testing Technique Research Institute\({}^{2}\)
Changsha University\({}^{3}\)
Spiking Neural Networks, Neuromorphic Data, Data Augmentation, ViewPoint Transform and SpatioTemporal Stretching
Footnote †: This work was supported in part by the National Natural Science Foundation of China under Grant 61572214 and Seed Foundation of Huazhong University of Science and Technology (2020kfyXGYJ114). (Corresponding author: Tianjiang Wang.)
## 1 Introduction
Inspired by the primate visual system, neuromorphic vision cameras generate events by sampling the brightness of objects. For example, the Dynamic Vision Sensor (DVS) [1] camera and the Vidar [2] camera are inspired by the outer three-layer structure of the retina and the foveal three-layer structure, respectively. Both of them have the advantages of high temporal resolution, less data redundancy, low power consumption, and large dynamic range [3]. In addition, spiking neural networks (SNNs) are similarly inspired by the learning mechanisms of the mammalian brain and are considered a promising model for artificial intelligence (AI) and theoretical neuroscience [4]. In theory, as the third generation of neural networks, SNNs are computationally more powerful than traditional convolutional neural networks (CNNs) [4]. Therefore, event cameras are inherently suitable for SNNs.
However, the unconventional visual signals of these cameras also pose a great challenge to the robustness of SNNs. Most existing data augmentations are fundamentally designed for RGB data and lack exploration of neuromorphic events. For example, Cutout [5] artificially impedes a rectangular block in the image to simulate the impact of occlusion on the image. Random erasing [6] further optimizes the erased pixel value by adding noise. Mixup [7] uses the weighted sum of two images as training samples to smooth the transition line between classes. Since neuromorphic data have an additional temporal dimension and differ widely in imaging principles, novel data augmentations are required to process the spatiotemporal visual signals of these cameras.
In this paper, we propose a novel data augmentation method suitable for events, ViewPoint Transformation and SpatioTemporal Stretching (VPT-STS). Viewpoint transformation solves the spatiotemporal scale mismatch of samples by introducing a balance coefficient, and generates samples from different viewpoints by transforming the rotation centers and angles in the spatiotemporal domain. Furthermore, we introduce spatiotemporal stretching to avoid potential information loss in viewpoint transformation. Extensive experiments are performed on prevailing neuromorphic datasets. It turns out that VPT-STS is broadly effective on multiple event representations and significantly outperforms pure spatial geometric transformations. Insightful analysis shows that VPT-STS improves the robustness of SNNs against different spatial locations. In particular, the SNNs model with VPT-STS achieves a state-of-the-art accuracy of 84.4% on the DVS-CIFAR10 dataset.
Furthermore, while this work is related to EventDrop [8],
NDA [9], there are some notable differences. For example, NDA is a pure global geometric transformation, while VPT-STS changes the viewpoint of samples in the spatiotemporal domain. EventDrop has only been evaluated on CNNs; it introduces noise by dropping events, which may cause dead-neuron problems in SNNs. VPT-STS is applicable to both CNNs and SNNs and maintains the continuity of samples. In addition, EventDrop transforms both the temporal and spatial domains, but as two independent strategies it does not combine the spatiotemporal information of the samples. To our knowledge, VPT-STS is the first event data augmentation that simultaneously incorporates spatiotemporal transformations.
## 2 Method
### Event Generation Model
The event generation model [3, 4] is abstracted from dynamic vision sensors [1]. Each pixel of the event camera responds to changes in its logarithmic photocurrent \(L=\log(I)\). Specifically, in a noise-free scenario, an event \(e_{k}=(x_{k},y_{k},t_{k},p_{k})\) is triggered at pixel \(X_{k}=(y_{k},x_{k})\) and at time \(t_{k}\) as soon as the brightness variation \(|\Delta L|\) reaches a temporal contrast threshold \(C\) since the last event at the pixel. The event generation model can be expressed by the following formula:
\[\Delta L(X_{k},t_{k})=L(X_{k},t_{k})-L(X_{k},t_{k}-\Delta t_{k})=p_{k}C \tag{1}\]
where \(C>0\), \(\Delta t_{k}\) is the time elapsed since the last event at the same pixel, and the polarity \(p_{k}\in\{+1,-1\}\) is the sign of the brightness change. During a period, the event camera triggers event stream \(\mathcal{E}\):
\[\mathcal{E}=\{e_{k}\}_{k=1}^{N}=\{(X_{k},t_{k},p_{k})\}_{k=1}^{N} \tag{2}\]
where \(N\) represents the number of events in the set \(\mathcal{E}\).
As shown in Figure 1, an event is generated each time the brightness variation reaches the threshold, after which \(|\Delta L|\) is reset. The event stream can be represented as a matrix:
\[M_{\varepsilon}=\begin{pmatrix}y_{1}&x_{1}&t_{1}&1\\ \vdots&\vdots&\vdots&\vdots\\ y_{N}&x_{N}&t_{N}&1\end{pmatrix}_{N\times 4} \tag{3}\]
For convenience, we omit the polarity \(p\), which is not transformed.
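A minimal sketch of this representation, assuming events are given as \((x, y, t, p)\) tuples:

```python
import numpy as np

def events_to_matrix(events):
    """Build the homogeneous event matrix of Eq. 3 from an iterable of
    (x, y, t, p) tuples; the polarity p is kept aside, untransformed."""
    ev = np.asarray(events, dtype=float)
    y, x, t = ev[:, 1], ev[:, 0], ev[:, 2]
    return np.column_stack([y, x, t, np.ones(len(ev))])   # shape (N, 4)
```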
### Motivation
This work stems from the observation that it is difficult to maintain an absolutely frontal view between the sample and the camera, which easily leads to a slight shift of the viewpoint. Considering this small offset distance, we use viewpoint rotation to approximate the deformation of samples in space and time. In addition, since events record the brightness changes of samples, especially changes of the edges, variations of the illumination angle will also cause the effect of viewpoint transformation, which suggests that we can enhance the robustness of SNNs by generating viewpoint-transformed samples.
### The Proposed Method
To generate viewpoint-transformed samples, we draw on the idea of spatio-temporal rotation. For viewpoint transformation (**VPT**), we introduce translation matrices \(T_{b}\), \(T_{a}\), which represent the translation to the rotation center \((x_{c},y_{c},t_{c})\) and the translation back to the original position, respectively.
\[T_{b}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ -y_{c}&-x_{c}&-t_{c}&1\end{pmatrix},T_{a}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ y_{c}&x_{c}&t_{c}&1\end{pmatrix} \tag{4}\]
Suppose we rotate in the \(y\)-\(t\) plane with \(x\) as the axis; we can easily derive the rotation matrix \(R_{r}^{YT}\):
\[R_{r}^{YT}=\begin{pmatrix}cos\theta&0&sin\theta&0\\ 0&1&0&0\\ -sin\theta&0&cos\theta&0\\ 0&0&0&1\end{pmatrix} \tag{5}\]
where \(\theta\) is the rotation angle. In practice, Eq. 5 is an unbalanced matrix due to the mismatch between the time and space dimensions in the \(M_{\varepsilon}\) matrix. Therefore, we introduce a balance coefficient \(\tau\) to scale the space and time dimensions, which results in better visual effects. The balanced matrix \(R_{br}^{YT}\) can be formulated as:
\[R_{br}^{YT}=\begin{pmatrix}cos\theta&0&\tau sin\theta&0\\ 0&1&0&0\\ -\frac{1}{\tau}sin\theta&0&cos\theta&0\\ 0&0&0&1\end{pmatrix} \tag{6}\]
Setting \(x_{c}=0\), the viewpoint transformation matrix \(M_{br}^{YT}\) can be formulated by calculating \(T_{b}R_{br}^{YT}T_{a}\):
\[M_{br}^{YT}=\begin{pmatrix}\cos\theta&0&\tau\sin\theta&0\\ 0&1&0&0\\ -\frac{1}{\tau}\sin\theta&0&\cos\theta&0\\ -y_{c}\cos\theta+\frac{1}{\tau}t_{c}\sin\theta+y_{c}&0&-\tau y_{c}\sin\theta-t_{c}\cos\theta+t_{c}&1\end{pmatrix} \tag{7}\]
Similarly, the viewpoint transformation matrix \(M_{br}^{XT}\) in
Figure 1: Event generation model.
the \(x\) and \(t\) dimensions can be formulated as:
\[M_{br}^{XT}=\begin{pmatrix}1&0&0&0\\ 0&\cos\theta&\tau\sin\theta&0\\ 0&-\frac{1}{\tau}\sin\theta&\cos\theta&0\\ 0&-x_{c}\cos\theta+\frac{1}{\tau}t_{c}\sin\theta+x_{c}&-\tau x_{c}\sin\theta-t_{c}\cos\theta+t_{c}&1\end{pmatrix} \tag{8}\]
Therefore, the viewpoint-transformed matrix \(M_{VPT}^{YT}\) and \(M_{VPT}^{XT}\) can be formulated as:
\[M_{VPT}^{YT}=M_{e}M_{br}^{YT} \tag{9}\] \[M_{VPT}^{XT}=M_{e}M_{br}^{XT}\]
Furthermore, since events beyond the sensor resolution are discarded during the viewpoint transformation, we introduce spatiotemporal stretching (**STS**) to avoid this potential information loss. STS stretches the temporal mapping in the VPT by a coefficient \(\frac{1}{\cos\theta}\) while keeping the spatial coordinates unchanged. Therefore, by setting \(t_{c}=0\), we get the transformed \((t)_{STS}^{YT}\) and \((t)_{STS}^{XT}\) from Eq. 7 and Eq. 8:
\[(t_{k})_{STS}^{YT}=t_{k}-\tau\tan\theta\cdot(y_{k}-y_{c}) \tag{10}\] \[(t_{k})_{STS}^{XT}=t_{k}-\tau\tan\theta\cdot(x_{k}-x_{c})\]
The time of STS is advanced or delayed according to the distance from the center, \(|x-x_{c}|\) (or \(|y-y_{c}|\)), causing the event stream to be stretched along the time axis according to the spatial coordinates.
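A minimal NumPy sketch of the two operations, assuming the \(N\times 4\) row-vector convention of Eq. 3 (function names are ours):

```python
import numpy as np

def vpt_yt(M, theta, tau, y_c, t_c):
    """Viewpoint transform in the y-t plane (Eqs. 4-7, 9): translate to
    the rotation center, apply the balanced rotation, translate back.
    M is the N x 4 event matrix [y, x, t, 1] of Eq. 3."""
    Tb = np.eye(4); Tb[3, 0], Tb[3, 2] = -y_c, -t_c
    Ta = np.eye(4); Ta[3, 0], Ta[3, 2] = y_c, t_c
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c,        0.0, tau * s, 0.0],
                  [0.0,      1.0, 0.0,     0.0],
                  [-s / tau, 0.0, c,       0.0],
                  [0.0,      0.0, 0.0,     1.0]])
    return M @ (Tb @ R @ Ta)

def sts_yt(M, theta, tau, y_c):
    """Spatiotemporal stretching (Eq. 10): only the timestamps move,
    proportionally to the distance from the rotation center."""
    out = M.copy()
    out[:, 2] -= tau * np.tan(theta) * (out[:, 0] - y_c)
    return out
```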
## 3 Experiments
### Implementation
Extensive experiments are performed to demonstrate the superiority of the VPT-STS method on prevailing neuromorphic datasets, including the CIFAR10-DVS (CIF-DVS) [10], N-Caltech101 (N-Cal) [11], and N-CARS [12] datasets. N-Caltech101 and CIFAR10-DVS are generated by neuromorphic vision sensors from traditional datasets, while N-CARS is collected in the real world. For ease of comparison, the model without VPT-STS with the same parameters is used as the baseline. The STBP [13] method is used to train the SNN-VGG9 network; other parameters mainly follow NDA [14]. For example, the Adam optimizer is used with an initial learning rate of \(1e-3\). The neuron threshold and leakage coefficient are \(1\) and \(0.5\), respectively. In addition, we also evaluate the performance of VPT-STS on various event representations with the Resnet9 network, including the EST [15], VoxelGrid [16], EventFrame [17] and EventCount [18] representations.
### Performance on various representations
Extensive experiments are conducted to evaluate the performance of the VPT-STS method on different event representations, covering SNNs and CNNs. As shown in Tab. 1, SNNs with VPT-STS achieve significant improvements on three prevailing datasets, and VPT-STS also performs well on four representations commonly used by CNNs. It is worth noting that EST retains the most spatiotemporal information from neuromorphic data and thus performs best overall. Furthermore, since the samples of N-CARS are collected in the real world, its initial viewpoint diversity is already enriched compared to the other two datasets. Considering the high baseline on N-CARS, VPT-STS still further improves the robustness of SNNs.
| Datasets | Method | SNNs | EventFrame | EventCount | VoxelGrid | EST |
| --- | --- | --- | --- | --- | --- | --- |
| CIFAR10-DVS | Baseline | 83.20 | 78.71 | 78.85 | 77.47 | 78.81 |
| CIFAR10-DVS | VPT-STS | 84.40 | 79.58 | 79.12 | 79.62 | 79.37 |
| N-Caltech101 | Baseline | 78.98 | 73.08 | 73.66 | 77.08 | 78.41 |
| N-Caltech101 | VPT-STS | 81.05 | 76.96 | 76.38 | 79.13 | 78.88 |
| N-CARS | Baseline | 95.40 | 94.44 | 94.76 | 93.86 | 94.97 |
| N-CARS | VPT-STS | 95.85 | 94.60 | 94.81 | 94.30 | 94.99 |

Table 1: Performance of VPT-STS on SNNs and CNNs with various representations (accuracy, %).
| Methods | References | CIF-DVS | N-CARS |
| --- | --- | --- | --- |
| HATS [12] | CVPR 2018 | 52.40 | 81.0 |
| Dart [19] | TPAMI 2020 | 65.80 | - |
| Dspike [20] | NeurIPS 2021 | 75.40 | - |
| STBP [13] | AAAI 2021 | 67.80 | - |
| AutoSNN [21] | ICML 2022 | 72.50 | - |
| RecDis [22] | CVPR 2022 | 72.42 | - |
| DSR [23] | CVPR 2022 | 77.27 | - |
| NDA [9] | ECCV 2022 | 81.70 | 90.1 |
| **VPT-STS** | - | **84.40** | **95.85** |

Table 2: Performance of VPT-STS and previous SOTAs on CIFAR10-DVS and N-CARS datasets (accuracy, %).
### Compared with SOTAs
As shown in Tab. 2, we compare VPT-STS with recent state-of-the-art results on neuromorphic datasets. The results show that VPT-STS achieves substantial improvements over previous SOTAs. It is worth noting that VPT-STS significantly outperforms NDA, which is an ensemble of six geometric transformations. The experimental results demonstrate the superiority of combining spatiotemporal information for data augmentation. Since VPT-STS is orthogonal to most training algorithms, it can provide a better baseline and improve the performance of existing models.
### Ablation Studies on VPT-STS
As shown in Fig. 3, the performance of VPT-STS with different rotation angles is evaluated on the N-Caltech101 dataset. It turns out that a suitable rotation angle is important for the performance of data augmentation, which can increase data diversity without losing features.
### Analysis of VPT-STS
To gain further insight into the workings of VPT-STS, we add different strategies on top of the baseline to analyze the effective components of VPT-STS. As shown in Table 3, spatial rotation (Rotation) is performed as a comparative experiment for VPT-STS. It turns out that both VPT and STS, which include spatiotemporal transformations, are significantly better than pure spatial geometric transformations on all three datasets, which illustrates the importance of spatiotemporal transformations.
While VPT and STS are implemented with operations similar to rotation, they actually improve the robustness of SNNs to different viewpoints. Furthermore, we evaluate the robustness of SNNs to viewpoint fluctuations by adding different degrees of spatiotemporal rotation to the test data. Figures 2(a) and 2(b) show the performance of the baseline model and the model trained with VPT-STS under different disturbances, respectively. The results show that accuracy generally decreases as the perturbation amplitude increases. In addition, Fig. 2(c) shows the difference in the accuracy reduction of VPT-STS compared to the baseline. As the perturbation amplitude increases, the difference in the accuracy reduction of the two models stays below zero and its absolute value grows, which illustrates that the accuracy reduction of the baseline is larger than that of VPT-STS. These experimental results show that the model trained with VPT-STS generalizes better, improving the robustness of SNNs against spatial location variances.
## 4 Conclusion
We propose a novel data augmentation method suitable for events, viewpoint transformation and spatiotemporal stretching (VPT-STS). Extensive experiments on prevailing neuromorphic datasets show that VPT-STS is broadly effective on multiple event representations and significantly outperforms pure spatial geometric transformations. It achieves substantial
| Methods | CIF-DVS | N-Cal | N-CARS |
| --- | --- | --- | --- |
| Baseline | 83.20 | 78.98 | 95.40 |
| Rotation | 83.90 | 80.19 | 95.46 |
| VPT | **84.40** | **81.05** | 95.56 |
| STS | 84.30 | 80.56 | **95.85** |

Table 3: Comparison of different strategies (accuracy, %).
Figure 2: Performance of VPT-STS and Baseline under different perturbations.
improvements over previous SOTAs by improving the robustness of SNNs to different viewpoints.
|
2305.01848 | Photo-Voltaic Panel Power Production Estimation with an Artificial
Neural Network using Environmental and Electrical Measurements | Weather is one of the main problems in implementing forecasts for
photovoltaic panel systems. Since it is the main generator of disturbances and
interruptions in electrical energy. It is necessary to choose a reliable
forecasting model for better energy use. A measurement prototype was
constructed in this work, which collects in-situ voltage and current
measurements and the environmental factors of radiation, temperature, and
humidity. Subsequently, a correlation analysis of the variables and the
implementation of artificial neural networks were performed to perform the
system forecast. The best estimate was the one made with three variables
(lighting, temperature, and humidity), obtaining an error of 0.255. These
results show that it is possible to make a good estimate for a photovoltaic
panel system. | Antony Morales-Cervantes, Oscar Lobato-Nostroza, Gerardo Marx Chávez-Campos, Yvo Marcelo Chiaradia-Masselli, Rafael Lara-Hernández | 2023-05-03T01:28:53Z | http://arxiv.org/abs/2305.01848v1 | Photo-Voltaic Panel Power Production Estimation with an Artificial Neural Network using Environmental and Electrical Measurements
###### Abstract
Weather is one of the main problems in implementing forecasts for photovoltaic panel systems. Since it is the main generator of disturbances and interruptions in electrical energy. It is necessary to choose a reliable forecasting model for better energy use. A measurement prototype was constructed in this work, which collects in-situ voltage and current measurements and the environmental factors of radiation, temperature, and humidity. Subsequently, a correlation analysis of the variables and the implementation of artificial neural networks were performed to perform the system forecast. The best estimate was the one made with 3 variables (lighting, temperature, and humidity), obtaining an error of 0.255. These results show that it is possible to make a good estimate for a photovoltaic panel system.
Photovoltaic generation systems; Energy storage systems; Radiation Forecasting; Artificial Neural Networks.
## I Introduction
Nowadays, energy consumption increases significantly each year. As a result, pollution grows due to the use of fossil fuels during the production of complementary energy by energy companies [1]. Companies have incorporated alternative energy sources to reduce this impact and meet the energy demand [2]. One of the most significant sources is solar energy, which has become the most popular alternative in the world [3]. Solar energy has been used to provide electricity for many years [4] by means of Photo-Voltaic (PV) panels. The amount of current and power generated by a PV cell depends on external factors such as the environment and on internal factors typical of the photovoltaic system.
Specifically, the weather creates disturbances and interruptions in PV cells' electrical power [5, 6, 7, 8]. Due to this variability, it is necessary to implement reliable forecasts when deploying these PV systems, to avoid penalties resulting from the differences between the programmed and produced energy [9]. Different forecasting models can be chosen based on the parameters to be analyzed, depending on particular needs [10].
The use of artificial intelligence (AI) has increased considerably in recent years, due to its ability to model and solve complicated computational tasks [11]. In fact, AI algorithms in data collection systems help to improve the profitability of the measurement equipment [12, 13]. Forecasting models are also good indicators for detecting the right moment to perform maintenance on the photovoltaic system and for planning the distribution of the PV system. There are different methods of forecasting PV energy. The very short-term method (from a few seconds to a few minutes) is used for the control and management of PV systems in the electricity market over micro-networks. The short-term method (48-72 hrs) is used for the control of the energy system's operations, economic dispatch, and unit commitment, among others. The medium-term (a few days to a week) and long-term methods (a few months to a year or more) are used to plan PV systems [14].
Hence, several models and methods have been implemented to estimate the generated energy of PV systems. The advanced methods include diverse artificial intelligence and machine learning techniques, such as artificial neural networks (ANN), k-nearest neighbors (kNN), extreme learning machine (ELM), and support vector machine (SVM), to mention the most used [15]. ML algorithms can be classified into three main groups:
1. Supervised learning, where the algorithm creates relationships between input and output characteristics.
2. Unsupervised learning, in which the algorithm looks for patterns and rules to describe in a better way the data.
3. Reinforcement learning, which is used mainly with extensive data and reduces them for visualization or analysis purposes [16].
The forecasting of power in PV systems is mainly based on ANN due to the complexity of the parameters involved. ANN are techniques that seek to emulate the human brain's behavior and generate responses for decision making [17]. The fundamental part of each ANN is its element processor, the neuron. The neural networks gather these element processors with different methods, to respond to their different numerical needs [18].
Recently reported forecasting models are based on the ANNs. One type of model uses the ANN to estimate solar radiation (in specific cities) based on past weather data (temperature, humidity, and rain probability) together with radiation, with the estimated radiation, the models try to estimate the PV's output energy [19, 20, 21]. Other models use the ANN to estimate the produced energy directly using different inputs like: (i) humidity [22]; (ii) temperature, humidity, and rainfall [23]; (iii) solar radiation and temperature [24]; (iv) solar power and weather data [25]. Eventually, an improved model considers data correlation and ANNs to select the most critical inputs [26]. However, most models are based on databases available online or meteorological measurements not precisely in the same place as the photovoltaic system. Thus, the data information is insufficient from a zone or region, and in some cases, no continuous measurements are available. On the other hand, it is well known that semiconductors are extremely sensitive to high temperatures.
Therefore, the present paper proposes an IoT device to measure and log _in-situ_ data about the PV system. The solar radiation, the solar panel's temperature and humidity, and the panel's electrical power (voltage and current) have been collected over 120 days with a sampling period of 5 min, yielding more than 32,200 measurements. These measurements were then used to train diverse ANN topologies and to compare them with a Multiple Linear Regressor model. The best topology has an error level of 0.255326464, which indicates a reliable data model.
## II Materials and Methods
The present research methodology consists of acquiring and validating data from a photovoltaic system in order to perform the data analysis and compute electrical power forecasts with artificial neural networks (see Figure 1). Therefore, a measurement prototype has been designed to collect _in-situ_ voltage and current measurements together with environmental factors such as radiation levels, temperature, and humidity. The system's behavior is obtained through the analysis of the prototype's readings, performing a validation process and a correlation of variables. Then, the regression model is obtained with a training set, to finally implement the ANN. The obtained estimations were evaluated against a test set using the Root-Mean-Square Error (RMSE) as the primary measure, applying Equation 1:
\[RMSE=\sqrt{\frac{\sum\left(x_{T}-x_{i}\right)^{2}}{N}} \tag{1}\]
where \(x_{T}\) is the estimated value, \(x_{i}\) is the actual value and \(N\) is the total number of measurements.
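For concreteness, Eq. 1 can be computed with a few lines of Python; this is an illustrative sketch, and the array names are ours rather than part of the original analysis.

```python
import numpy as np

def rmse(estimated: np.ndarray, actual: np.ndarray) -> float:
    """Root-Mean-Square Error between estimated and actual values (Eq. 1)."""
    return float(np.sqrt(np.mean((estimated - actual) ** 2)))
```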
### _Experimental setup_
Data has been collected using a stand-alone IoT system embedded with three sensors with \(I^{2}C\) communication. The IoT system also embeds a web server to configure and manage the logged data [27]. Figure 2 shows the experimental system, composed of three main sections. The left-most section
Fig. 1: Experimental setup for data gathering and analysis.
includes the environmental variable sensors: the OPT3001 and HDC2080 integrated circuits. The OPT3001 is a light sensor with a \(0.01\,\mathrm{lx}\) resolution and an upper limit of \(128\,\mathrm{klx}\); an attenuating glass has been used to extend the device limit by 55%, and a calibration procedure was conducted using the commercial digital luxometer MASTECH ms6612. The HDC2080 sensor measures relative humidity (RH) and temperature. For temperature, the sensor has a \(\pm 2\,\mathrm{\SIUnitSymbolCelsius}\) resolution over a range of \(-40\,\mathrm{\SIUnitSymbolCelsius}\) to \(85\,\mathrm{\SIUnitSymbolCelsius}\); for RH, it provides measurements with a resolution of \(\pm 0.2\%\).
### _Logged data analysis_
Prior to the forecasting process, the logged data were compared with similar commercial measurement systems to evaluate performance. According to the central limit theorem, our data can be treated as normal, so we measured the linear dependency between the measured variables using the Pearson correlation method to determine their importance for forecasting.
The behavior comparison is made against data from the meteorological station of the Technological Institute of Morelia. Both data sets consist of solar radiation, humidity, and temperature measurements, corresponding to the period from the 12th of November 2018 to the 6th of December 2018. Since the measurements originate from different equipment, at different scales, and at different locations (5 meters apart), they were normalized using Eq. 2:
\[x=\frac{x_{i}-x_{min}}{x_{max}-x_{min}} \tag{2}\]
where \(x_{i}\) is the actual value to be normalized, \(x_{min}\) is the minimum value of the entire data set, and \(x_{max}\) is the maximum value. Fig. 3 shows the measurements collected by the IoT system and the meteorological station for the radiation variable only. The small variations between the data sets can be attributed to the location of each system; however, the general behavior is consistent.
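A minimal sketch of this min-max normalization (Eq. 2); the function name and NumPy usage are illustrative assumptions:

```python
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """Scale a data set to the [0, 1] range as in Eq. 2."""
    return (x - x.min()) / (x.max() - x.min())
```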
Fig. 4 shows three correlation plots: (a) luxes, temperature, and power; (b) humidity, luxes, and power; and (c) temperature, humidity, and power. All three plots consider the power variable as the output. The third plot shows the lowest correlation, meaning that humidity has a lower impact on the final model.
## III Results
The ANN performance analysis considered cases with two and three input variables: (i) lighting, temperature, and humidity; (ii) lighting and temperature; (iii) lighting and humidity; and (iv) humidity and temperature. Each option was tested with different topologies.
The training process was performed with random data from the 14th of January to the 10th of February 2019. Table I shows only the best-performing topologies based on the RMSE obtained during cross-validation. In the table, the first column indicates the input variables; the second represents the topology used; the third and fourth are the maximum training cycles and error levels, respectively; and the last column is the RMSE value. Notice that the maximum number of training cycles was defined based on the RMSE's performance during various training experiments.
The first topology (Table I) has three processing elements in the input layer, three elements in the hidden layer, and a single output (3:3:1). This topology was evaluated with randomly selected data, and the Easy-NN software was used to import the training set (700 measurements), the validation set (200), and the test set (100). Fig. 5a shows the comparison of the data estimated with topology 3:3:1
Fig. 3: Data comparison between the meteorological station and the IoT system.
Fig. 2: Experimental setup system for measurements.
vs. the test data set. The computed error level for this topology was 0.255326464.
Another good estimator was implemented using only lighting and temperature, with a 2:8:1 topology, resulting in an error level of 0.273086254; see Fig. 5b. For the estimator based on illumination and humidity, the best topology was 2:3:1, with an error level of 0.26061261; see Fig. 5c. Finally, for the estimation with the temperature and humidity variables, the best-performing topology was 2:7:1, with an error level of 1.522621379; see Fig. 5d.
Notice that the best network uses all three variables. However, the alternative networks can be used when some data is missing or corrupted. In this research, the humidity sensor was the most problematic due to the temperature variability in Morelia city, which resulted in saturated measurements during early morning periods. Nevertheless, when all variables are available, the ANN produces better estimations.
On the other hand, a Multiple Linear Regressor (MLR) was developed with the same three-variable data set. Figure 6 shows a comparison of one-day estimations from the ANN, the MLR, and the real measurements. Notice that the ANN estimations track the photovoltaic system's real behavior more closely than the estimations made using the MLR model.
According to the central limit theorem, in large samples the sampling distribution tends to be normal, regardless of the underlying data [28]. Therefore, an ANOVA analysis was carried out. The ANOVA of the MLR was performed on lighting, temperature, and humidity to determine their importance, yielding Table II. The ANOVA results gave a regression model with a 95% confidence interval. The model explains 91.77% of the real phenomenon. This process is carried out to obtain learning with the right level of confidence, enabling a suitable prediction of the power levels in the solar panels.
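As a hedged sketch of the MLR step described above, the following fits a three-variable linear model by least squares and reports its coefficient of determination; the column ordering (illumination, temperature, humidity) is an assumption for illustration:

```python
import numpy as np

def fit_mlr(X: np.ndarray, y: np.ndarray):
    """Fit y = b0 + b1*illumination + b2*temperature + b3*humidity; return (coefficients, R^2)."""
    A = np.column_stack([np.ones(len(X)), X])      # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares
    y_hat = A @ coef
    ss_res = np.sum((y - y_hat) ** 2)              # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)           # total sum of squares
    return coef, 1.0 - ss_res / ss_tot             # R^2: fraction of variance explained
```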
Fig. 4: Correlation variables plots for (a) luxes, temperature and power; (b) humidity, luxes and power; and (c) temperature, humidity and power.
## IV Discussion and Conclusions
During this study, it was observed that the most suitable neural network topology changes with the input variables, as judged by the lowest RMSE value. Experiments were conducted with different training cycles, input information, numbers of neurons, and hidden layers to assess their performance and choose the most appropriate configuration for each data set. The prototype used has certain limitations, such as the resolution and ranges of the measurements and the throughput of the readings. A comparison was made to verify the prototype's values against a commercial station located at the Technological Institute of Morelia, Mexico. The RMSE of 0.19309 confirms that the prototype data is reliable for estimation.
\begin{table}
\begin{tabular}{l c c c c c}
\hline
**Source** & **DF** & **Adj SS** & **Adj MS** & **F-Value** & **P-Value** \\ \hline
Regression & 3 & 10159.50 & 3386.51 & 5895.20 & 0.000 \\
Illumination & 1 & 2761.10 & 2761.10 & 4806.50 & 0.000 \\
Temperature & 1 & 6.80 & 6.78 & 11.81 & 0.001 \\
Humidity & 1 & 47.00 & 47.01 & 81.83 & 0.000 \\
Error & 1586 & 911.10 & 0.57 & & \\
Total & 1589 & 11070.60 & & & \\
\hline
\end{tabular}
\end{table} TABLE II: ANOVA analysis of the \(x\) variables (independent variables) and the \(y\) variable (dependent variable).
Fig. 5: Graphical comparison of estimated data against actual data for the best topology of two and three input variables
The discrimination process and the creation of the data sets were performed in Matlab®. The most suitable configuration was determined to carry out both instantaneous estimates and short-term forecasts. The most suitable topology for the neural networks and their parameters (the number of computational elements, the number of hidden layers, the number of inputs and outputs, and the training cycles) was identified. The determination of each parameter starts from a randomly proposed topology, eventually reaching the 3:3:1 configuration with 5,000 training cycles. This configuration makes it possible to know instantly how the solar panel will behave under normal conditions. The results obtained in this work show that the best option for the estimation is the 3:3:1 neural network topology, which uses three variables (lighting, temperature, and humidity) and allows estimating how much power can be obtained from a panel. As for forecasts, the best configuration is a 9:4:3:1 network. Although using two variables gives a larger estimation error than using the three variables, this error (0.30320) is still reliable for the estimation.
This article introduces a solar forecasting algorithm based on an artificial neural network (ANN) model. The proposed model has a 3:3:1 topology with 5,000 training cycles. The clear-sky model and meteorological data from the prototype are used to train the model. The prototype was compared with the meteorological station of the Technological Institute of Morelia, located in Morelia, Mexico. The RMSE value confirms that it is possible to make sensible estimates using the lighting, temperature, and humidity data. Future work will focus on developing a more comprehensive multi-layer ANN model that takes into account rainfall factors and time of day, as well as using a larger data set to train the ANN model to achieve greater forecast accuracy. The system's accuracy must also be improved.
## V Acknowledgments
The authors would like to acknowledge the "Consejo Nacional de Ciencia y Tecnologia" (CONACYT) for the support received in developing this project by supporting student 625015, the "Tecnologico Nacional de Mexico" (TecNM), which supports project 6127.17-P, and the "National Laboratory SEDEAM" for helping with the development of the electronic prototypes.
## VI Conflict of interest
The authors declare that there is no conflict of interest.
|
2303.08248 | An Intrusion Detection Mechanism for MANETs Based on Deep Learning
Artificial Neural Networks (ANNs) | Mobile Ad-hoc Network (MANET) is a distributed, decentralized network of
wireless portable nodes connecting directly without any fixed communication
base station or centralized administration. Nodes in MANET move continuously in
random directions and follow an arbitrary manner, which presents numerous
challenges to these networks and makes them more susceptible to different
security threats. Due to this decentralized nature of their overall
architecture, combined with the limitation of hardware resources, those
infrastructure-less networks are more susceptible to different security attacks
such as black hole attack, network partition, node selfishness, and Denial of
Service (DoS) attacks. This work aims to present, investigate, and design an
intrusion detection predictive technique for Mobile Ad hoc networks using deep
learning artificial neural networks (ANNs). A simulation-based evaluation and a
deep ANNs modelling for detecting and isolating a Denial of Service (DoS)
attack are presented to improve the overall security level of Mobile ad hoc
networks. | Mohamad T Sultan, Hesham El Sayed, Manzoor Ahmed Khan | 2023-03-14T21:45:12Z | http://arxiv.org/abs/2303.08248v1 | # An Intrusion Detection Mechanism for MANETs Based on Deep Learning Artificial Neural Networks (ANNs)
###### Abstract
Mobile Ad-hoc Network (MANET) is a distributed, decentralized network of wireless portable nodes connecting directly without any fixed communication base station or centralized administration. Nodes in a MANET move continuously in random directions and in an arbitrary manner, which presents numerous challenges to these networks and makes them more susceptible to different security threats. Due to the decentralized nature of their overall architecture, combined with the limitation of hardware resources, these infrastructure-less networks are more susceptible to different security attacks such as the black hole attack, network partition, node selfishness, and Denial of Service (DoS) attacks. This work aims to present, investigate, and design an intrusion detection predictive technique for Mobile Ad hoc networks using deep learning artificial neural networks (ANNs). A simulation-based evaluation and a deep ANN modelling for detecting and isolating a Denial of Service (DoS) attack are presented to improve the overall security level of Mobile ad hoc networks.
Network Protocols, deep learning, ANN, intrusion detection
## 1 Introduction
Recent significant advances in wireless networking systems have made them among the most innovative topics in computer technologies. Users can access a wide range of information and services through mobile wireless networks. The latest technology developments in wireless data communication devices have led to cheaper prices and larger data rates. Compared to conventional wired networking, wireless networking provides a great deal of flexibility, efficiency, and cost effectiveness, which makes it a good alternative for providing efficient network connectivity. The development of Mobile Ad hoc networks (MANETs) [1][2] introduced reliable, cost-effective, and efficient techniques that exploit the availability and presence of mobile hosts in the absence of a fixed communication infrastructure. In a MANET, the mobile nodes are independent and can effortlessly initiate a direct communication channel with each other as they move freely around the infrastructure-less network in different directions and at different velocities. The ad hoc network functions in a very specific way, and node cooperation is its main element for forwarding communication-related information from source nodes to the intended destination nodes. Nodes in a MANET rely entirely on batteries as their energy source and move arbitrarily with no restrictions. Mobile nodes may leave or join the dynamic network at any time
and can take independent decisions without relying on any centralized authority. Because the network abandons any fixed infrastructure as a prerequisite for communication, the transmission and communication range of the entire network is determined by the transmission range of the individual mobile nodes, and it is usually smaller than the range of cellular networks. In cellular networks, to avoid interference and provide guaranteed bandwidth, each communication cell uses frequencies different from those of its one-hop adjacent neighbouring cells. This expands the communication range of the cellular network, especially when different communication cells are joined together to offer radio coverage for a wide geographical area. In a MANET, however, each mobile node has a wireless interface and communicates with other nodes over a wireless channel. Mobile nodes in a MANET can range from portable laptops to smartphones or any other digital devices with a wireless communication antenna. Among the numerous advantages the infrastructure-less ad hoc network offers are robustness, efficiency, and inherent support for dynamic random mobility. Fig. 1 illustrates the architecture of a MANET.
The special characteristics of MANETs have made their deployment a preferable choice for many fields, such as military battlefield operations, natural disasters, and remote areas. However, due to their openness and decentralized structure, MANETs have become vulnerable to different kinds of malicious threats and attacks; their flexibility brings new threats to their security. The threats that affect mobile ad hoc networks can be categorized in many ways, based on the behaviour, level, and position of the specific attack, flaws in the security algorithms used, and weaknesses in the structure of the routing protocols. Attacks such as the black hole attack, network partition, node selfishness, malicious nodes, and denial of service (DoS) are among the most common threats that MANETs face [3]. The shared goal of these threats is to degrade the overall network performance. Researchers have become increasingly focused on how to provide a secure and reliable mobile ad hoc network. Several techniques have been developed, such as signature-based, statistical anomaly-based, and protocol analysis. This research focuses on deep learning intrusion detection techniques in MANETs based on artificial neural networks. The paper concentrates on a very specific attack, the denial-of-service (DoS) attack, which can easily disrupt MANET operations.
## 2 Related Work
Identifying malicious and misbehaving mobile nodes is necessary to protect the MANET. Researchers have studied the security threats of mobile ad hoc
Figure 1: MANET architecture
networks to make MANETs more secure and reliable. In describing the security threats, many researchers propose their own categorizations. MANET threats can be classified into two levels. The first level comprises attacks on the basic mechanisms, resulting from nodes being captured or compromised, or from the misbehaviour of nodes that do not follow the rules of the cooperative algorithms. The second level comprises attacks on the security mechanisms, which exploit the vulnerabilities of the security mechanisms employed in MANETs. In [4] and [5], researchers have classified security attacks in correspondence with the communication layers, which means that each layer has its own threats and vulnerabilities. Table 1 shows the security threats at the communication layers.
In [6], the authors studied the effect of misbehaving nodes on the MANET. In this research, a new method was used to efficiently detect and separate malicious mobile nodes from the network, so that the network performance remains balanced and stable regardless of the presence of colluding nodes. Malicious behaviour is represented by the suspicious behaviour of unauthorized mobile nodes that can inflict damage on other nodes in the network, intentionally or unintentionally. An example is the scenario where the aim of the mobile node is not the attack itself but to gain unauthorized benefits over other nodes.
The researchers in [7] proposed a blackhole attack identification mechanism for MANETs using fuzzy-based intrusion detection techniques. Their main target was to detect the blackhole attack, a very common type of malicious attack that disrupts the operation of MANETs. An adaptive neuro-fuzzy inference system was developed by the researchers, based on the popular particle swarm optimization (PSO) technique. Similarly, using fuzzy logic techniques, the authors in [8] used a new intrusion detection technique called the node blocking mechanism to differentiate two popular attacks that target the network: the grey hole and black hole attacks.
The authors in [9] proposed a system that uses malicious-behaviour detection ratios to enhance security in mobile networks using modified zone-based intrusion detection techniques. In [10], another intrusion detection system was proposed using a smart approach for intrusion identification and isolation. This system detects an attack on the ad hoc network by employing a deep learning neural network with a bootstrapped optimistic algorithm. In this system, each mobile node submits a finger vein biometric, user ID, and latitude and longitude; intrusion detection is then executed to verify these entities and detect any suspicious behaviour in the network.
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline
**Layers** & **Attacks** \\ \hline Application layer & Selfish nodes attacks, Malicious attacks like viruses, worms and spyware. \\ \hline Transport layer & Session hijacking, Session control, flooding attack and ACK-storm attack on TCP \\ \hline Network layer & Cache poisoning attacks, Routing protocols attacks (e.g., AODV, DSR, TORA), Packet dropping attacks, blackhole attack node impersonation, denial-of-service DoS attack. \\ \hline Data link layer & Man in the middle attack, MAC interruption (802.11), WEP vulnerability. \\ \hline Physical layer & Eavesdropping, jamming, traffic interceptions. \\ \hline \end{tabular}
\end{table}
Table 1: Communication Layers Security Threats
## 3 Manet Routing Protocols
The random, arbitrary movement of mobile nodes in a MANET, combined with the absence of any fixed communication infrastructure, keeps the network topology in constant change. This rapid and dynamic change in topology makes routing in MANETs a challenging task. Thus, an effective routing strategy is required to smoothly accomplish the forwarding of packets from the source to the destination. The routing information changes frequently to reflect the dynamic changes in network topology, and there are numerous potential paths from source to destination. The routing protocol algorithms discover a route and transport the data packets to the appropriate destination. Numerous routing algorithms for MANETs have been developed [11], and the performance of the ad hoc network is highly dependent on the efficiency of the routing protocol used. The proposed routing algorithms for MANETs can be divided into three categories based on their behaviour and functionality: proactive, reactive, and hybrid routing algorithms [11][12]. The basic concept of these routing algorithms is to discover the shortest route for source-destination route selection.
### AODV and Targeted Attacks
In MANETs, one of the most widely used routing protocols following the reactive routing mechanism is the Ad hoc On-demand Distance Vector (AODV) [13]. The AODV protocol sets up routes using a query cycle consisting of Route Request (RREQ) and Route Reply (RREP) messages. When a node needs to deliver data packets to a destination for which it has no valid route, it broadcasts an RREQ message, carrying the most recent sequence number it knows for that destination, to its neighbours. This message is rebroadcast until a route to the requested destination is found. After receiving the RREQ message, every node builds a reverse path back to its original sender. Upon receiving an RREQ message, the destination responds with an RREP message that includes the destination's current sequence number and the number of hops taken to reach it [14][17]. Keep in mind that if a given intermediate node has a sufficiently fresh route to the final destination, it will not relay the RREQ to its neighbours but will instead send an RREP back towards the source. Each node that receives the RREP message sets up a forward route to the final destination. Thus, rather than storing the whole path, each node simply stores the information necessary for the next hop. When a node detects that it has received a duplicate RREQ, it discards the packet. As an added measure, AODV verifies the freshness of routes by using sequence numbers: the route to a destination is only altered if a new path has a higher sequence number than the previous one, or the same sequence number but fewer hops. Moreover, when a link failure or routing problem occurs in the network, another technique is executed in AODV: the Route Error (RERR) message [13][14]. This technique sends warning error packets to the source and destination nodes in the ad hoc network. As examples used in this research, this section discusses attacks on the AODV routing protocol.
* Packet dropping attack: In this type of attack, malicious mobile nodes may drop all the legitimate incoming data packets, including those employed in the route discovery and route maintenance stages. This usually happens with the aim of disrupting network services such as RREQ, RREP, and RERR.
* Denial-of-Service (DoS): One of the most well-known attacks. This type of network attack renders a resource or a service inaccessible in the network [15]. The main aim of this malicious attack is not to gain unauthorized access but rather to commit an act of vandalism that shuts down a machine, resource, or network, usually leaving legitimate users unable to access the available resources. In the AODV routing algorithm, a malicious node that wants to disrupt MANET resources begins to frequently broadcast route request (RREQ) messages while the route discovery process is taking place. The main goal of a DoS attack is to consume the battery power of nodes and to disrupt or deny legitimate nodes' access to specific network services. A sketch illustrating this flooding pattern is given after this list.
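To make the flooding behaviour concrete, the sketch below counts RREQ broadcasts per node in a sliding time window and flags nodes whose rate exceeds a threshold. The event format and threshold are illustrative assumptions, not part of AODV itself:

```python
from collections import defaultdict, deque

def flag_rreq_flooders(events, window=1.0, max_rreq=10):
    """events: (timestamp, node_id) tuples for observed RREQ broadcasts, sorted by time.
    Returns node ids that ever exceed max_rreq RREQs within `window` seconds."""
    recent = defaultdict(deque)          # node_id -> timestamps of its recent RREQs
    flagged = set()
    for t, node in events:
        q = recent[node]
        q.append(t)
        while q and t - q[0] > window:   # discard timestamps outside the window
            q.popleft()
        if len(q) > max_rreq:
            flagged.add(node)
    return flagged
```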
## 4 Methodology
Mobile ad hoc networks are highly vulnerable to different kinds of security threats due to their dynamic randomness, decentralization, and lack of central authority. The aim of this work is to propose an effective intrusion detection mechanism for MANETs using deep learning techniques such as artificial neural networks (ANNs). The denial-of-service attack considered in this research is implemented in a way where a malicious intruder node injects malicious data packets in large volumes into the mobile ad hoc network, leading to a disruption and denial of services at the destination node. The main routing protocol used to perform the simulation experiments is AODV, chosen for its popularity in MANETs. In this study and in our experimental setup, all the factors and issues that have an impact on link stability in the network are considered and analysed. The main attack considered in this research is the Denial of Service (DoS) attack, which aims to render MANET resources and services inaccessible by overloading them with junk packets in a two-way network communication setting. This type of attack can happen over both wired and wireless networks; however, wireless networks are more susceptible due to their radio nature and more loosely specified restrictions, as is the case with MANETs. Fig. 2 shows an example of a DoS attack where intruder D floods the host node C with extra malicious packets.
Deep learning ANNs are used to detect intrusions based on abnormal network activity, and the attributes, labels, and features are selected from the packets generated during the network simulation. Given the learning and generalization attributes of artificial neural networks (ANNs), and their ability to obtain knowledge from data and infer new information, they are well suited to such tasks. The performance of the proposed intrusion detection system is illustrated by means of simulation using the AODV routing protocol in MANETs. ANN modelling for attack detection using a simulated MANET environment is used in this research.
Figure 2: Example of DoS attack, the host node C flooded by Intruder D
### Implementation
The proposed research is implemented by means of simulations using the NS-2 network simulator on a Linux Ubuntu 10.04 platform, to evaluate the performance of a MANET formed by 15 mobile nodes. Attack detection, using a simulated MANET environment and ANN modelling, is used. Once a simulation is completed, NS2 reports the simulation details by generating a large trace file holding all the events sequentially, line by line. For this reason, the event-driven technique is used in NS2: it keeps all occurring events as records, and these records can be traced and analysed for evaluation purposes. In NS2 there are typically two kinds of output data records that help in further investigating a specific simulation scenario. The first is a trace file which records the event traces; it assists in studying the performance of the network, and can be processed and analysed using numerous methods. The second is a network animator (NAM) file which assists in visually inspecting the interactions and movements of the mobile nodes. Fig. 3 illustrates the complete procedure of how a simulation is conducted using NS2.
#### 4.1.1 Mobility Model
The mobility model plays a very important role in MANET simulations. The chosen model should simulate the movement, behaviour, and actions of real MANET nodes. However, the mobile nodes in a MANET move in a very dynamic, arbitrary, and decentralized manner: it is a dynamic network of autonomous, decentralized mobile nodes. A node may join or leave the network at any time, which leads to high rates of link and topology changes. Moreover, the mobile nodes make decisions independently and behave as routers, able to send, receive, or route information simultaneously. To model this kind of unpredictability and randomness in mobile ad hoc networks, researchers have proposed different probability distribution models of MANET node movement. The most popular one, which closely represents the distribution of MANET nodes, is the Random Waypoint Mobility Model [16]. In this model, the spatial distribution of mobile node movements is in general non-uniform. The mobility model that represents the movements of the mobile nodes is an important aspect of any simulation process, because the way these nodes move and behave affects in different ways the performance of the routing protocol they utilize. The random waypoint mobility model is simple, reliable, and widely used to assess the behaviour of MANETs [16]. It can closely represent the actions and movements of real mobile nodes in real conditions. There is a specific pause time in this model that applies whenever there is a change in the direction or velocity of a mobile node: when a wireless node finishes travelling across the network, it remains in one location for a particular period of time, the pause time, before it moves to another location.
Figure 3: NS2 simulation process
The node chooses the subsequent destination randomly within the simulation region once the specified pause time has expired. The mobile nodes also select a speed, generally specified between the minimum and maximum speeds (0, Maxspeed), during the simulation process. Each node then travels to the newly chosen point at the selected speed. When the mobile node arrives at the targeted place, it waits again for a certain pause time before selecting another new waypoint and another speed, and then initiates the same procedure all over again. Numerous researchers have adopted and implemented this mobility model in their studies. The movement of individuals in a cafeteria or shopping mall, and the movement of nodes in a conference, are some of its practical examples. Fig. 4 presents an illustration of the movement pattern for a mobile node which begins at a randomly selected location (133, 180) and chooses a speed between 0 and 10 m/s using the random waypoint mobility model.
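The random waypoint procedure described above can be sketched in a few lines of Python; the area size, speed range, pause time, and time step are illustrative parameters, and this is a simplified reimplementation rather than the NS-2 scenario generator:

```python
import random

def random_waypoint(n_waypoints, area=(500.0, 500.0), speed=(0.0, 20.0),
                    pause=2.0, dt=0.1):
    """Yield (t, x, y) samples of one node following the random waypoint model."""
    x = random.uniform(0.0, area[0])
    y = random.uniform(0.0, area[1])
    t = 0.0
    for _ in range(n_waypoints):
        tx = random.uniform(0.0, area[0])   # pick the next waypoint uniformly
        ty = random.uniform(0.0, area[1])
        v = random.uniform(*speed)
        if v <= 0.0:                        # zero speed: the node simply pauses
            t += pause
            continue
        dist = ((tx - x) ** 2 + (ty - y) ** 2) ** 0.5
        steps = max(1, int(dist / (v * dt)))
        for i in range(1, steps + 1):       # move in straight-line increments
            yield t + i * dt, x + (tx - x) * i / steps, y + (ty - y) * i / steps
        t += steps * dt
        x, y = tx, ty
        t += pause                          # wait before choosing a new waypoint
```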
### Simulation Setup and Parameter Selection
A scenario file that defines the exact motion of every node in the network, along with the exact number of packets generated by each node, is taken as input for every simulation run. It is accompanied by the exact time at which each change in motion or packet origination is to occur. The simulation is done using the NS2 simulator with the parameters shown in Table 2 below:
Figure 4: Random waypoint mobility for node movement pattern
In this simulation, 15 MANET nodes are deployed within a terrain of 500m X 500m using random waypoint mobility, to realize a real-time simulation; the simulation runs for a maximum experiment duration of 200 s, with a maximum speed of 20 m/s. The "Maximum Speed" indicated in the simulation parameters table implies that a node's speed varies from 0 m/s (a stationary, paused node with no movement) up to the maximum speed of 20 m/s. Since we used the popular Random Waypoint Mobility Model, which specifies how the mobile nodes' movement, location, velocity, and acceleration change over time, the node speed in our simulation environment can change at any random time within 0-20 m/s. The MAC layer used is IEEE 802.11b. Once the simulation is finished, the generated output files, such as trace files, are analysed to extract useful data and statistics. As stated earlier, a pair of files is produced once the simulation process ends. The first is an event trace file which records all simulation events, while the second is a network visualization (NAM) file which records the data used for network animation. The event trace files are in raw format, and an analysis must be performed to extract the necessary information. Generating both files is CPU intensive during simulation, and they occupy a considerable amount of memory. The example excerpt in Fig. 5 below shows what a generated trace file looks like after a simulation run.
The excerpt above indicates that a data packet was sent (s) at time (t) 2.556838879 s, from the source node (Hs) 1 to the target mobile node (Hd) 2. The source node id (Ni) is 1, the source node's X coordinate (Nx) is 342.47, and its Y coordinate (Ny) is 4.35.
\begin{table}
\begin{tabular}{|l|l|}
\hline
\multicolumn{2}{|c|}{**Simulation parameters**} \\ \hline
**Parameter** & **Selected Value** \\ \hline
Routing Protocol & Ad hoc on demand distance vector (AODV) \\ \hline
Platform & Linux distribution ubuntu version 10.04 \\ \hline
Number of Nodes & 15 \\ \hline
Simulation Software & NS-2 \\ \hline
MAC Layer Protocol & IEEE 802.11b \\ \hline
Simulation Area & 500m X 500m \\ \hline
Traffic Generation Model & CBR (Constant Bit Rate) \\ \hline
Size of packet & 512 bytes \\ \hline
Mobility Model & Random Waypoint \\ \hline
Maximum Speed & 0-20 m/s \\ \hline
No. of Connections & 2 to 10 \\ \hline
Duration of experiment & 200 sec \\ \hline
Type of Antenna & Antenna/OmniAntenna \\ \hline
\end{tabular}
\end{table} Table 2: Simulation Parameters
Figure 5: Excerpt of trace file
Moreover, its Z coordinate (Nz) is 0.00. The available energy level (Ne) is 1.000000, the trace format type for this mobile node for routing (Nl) is RTR, and the node event (Nw) is blank. Moreover, (Ma) 0 is the MAC-level information, (Md) 0 is the destination Ethernet address, (Ms) 0 is the source Ethernet address, and (Mt) 0 is the Ethernet type. The features extracted from the logged details can then be used in the ANN for attack detection. The analysis of these trace files can be done using different tools, such as AWK commands and Perl scripts. Different parameter selections for data extraction can be considered, depending on the nature of the network and the specific attack. The following parameters are considered: Packet Loss (PL), Packets Sent (PS), Packets Received (PR), and Energy Consumption (EC). These parameters were extracted by analysing the log files of the simulation runs. The data was split for training and testing: 65% of the data, covering 15 mobile nodes over 200 seconds, was selected randomly for training, and 35% for testing and validation.
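As an illustrative sketch of this extraction step, the following counts packets sent, received, and dropped per node from an NS-2 new-format wireless trace. The tag-based field layout can vary between NS-2 versions, so the parsing below is an assumption rather than a general-purpose parser:

```python
from collections import defaultdict

def extract_features(trace_path):
    """Count packets sent (s), received (r), and dropped (d) per node
    from an NS-2 new-format trace file with '-t'/'-Ni' style tag pairs."""
    counts = defaultdict(lambda: {"PS": 0, "PR": 0, "PL": 0})
    with open(trace_path) as f:
        for line in f:
            fields = line.split()
            if not fields or fields[0] not in ("s", "r", "d"):
                continue
            tags = dict(zip(fields[1::2], fields[2::2]))  # e.g. '-Ni' -> node id
            node = tags.get("-Ni")
            if node is None:
                continue
            key = {"s": "PS", "r": "PR", "d": "PL"}[fields[0]]
            counts[node][key] += 1
    return counts
```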
### Designing Artificial Neural Network
An intrusion detection system using a neural network (NN) is proposed to secure the MANET. The neural network model is trained by applying the simulation data as inputs to the ANN. Feed-Forward Back-Propagation (FFBP) from the Neural Network toolbox is used, and the artificial neural network is implemented with four inputs, two middle hidden layers, and one output layer. The network training is conducted using the back-propagation (BP) learning process: the Levenberg-Marquardt back-propagation training function (TRAINLM) is used, together with LEARNGDM as the adaptive learning function. Different transfer functions are available, such as Purelin, Log-Sigmoid, and Tan-Sigmoid. The transfer function is used to estimate the output of a specific network layer from its net input. Log-Sigmoid and Tan-Sigmoid are used in this study. Fig. 6 below shows an example of different transfer functions.
Screenshots of the artificial neural network setup and design are presented in Fig. 7 and Fig. 8, respectively. All the setup parameters must be specified before running the artificial neural network.
### Modelling Artificial Neural Networks for DOS Attack Detection
Given the learning and generalization attributes of feedforward neural networks with the back-propagation training algorithm, these deep learning networks are used for DoS intrusion detection and for identifying and predicting any unusual activity; the features are selected from the packets generated in the simulation process. The number of input nodes is determined by the input data set. The number of nodes in the hidden layers of the neural network was varied during the experiments to achieve a highly accurate and stable neural network model and to avoid overfitting. The structural design of the proposed deep learning neural network consists of two different network setups. The first has 4 inputs, 15 neurons in the first hidden layer, 10 neurons in the second hidden layer, and one output; the second has 4 inputs, 20 neurons in the first hidden layer, 10 neurons in the second hidden layer, and one output. Training using feed-forward back-propagation (FFBP) in the ANN is presented in Fig. 9, and the process proceeds as follows (a minimal sketch implementing these steps is given after the list):
Figure 8: Neural network design
Figure 7: Neural network setup
* The model selects a training epoch from the training set and initializes weights and biases.
* The model then calculates the output of the network.
* Then, the error between the network output and the desired output is calculated.
* The model modifies the weights of the network in a way that minimizes the error.
* The model repeats these steps for each input in the training set until the error for the entire set is acceptably low.
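The following is a minimal NumPy sketch of the loop above for the 4-15-10-1 network with tanh hidden layers. It uses plain gradient descent in place of the Levenberg-Marquardt (TRAINLM) update, so it illustrates the procedure rather than reproducing the toolbox configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(sizes=(4, 15, 10, 1)):
    """Random weights and zero biases for a 4-15-10-1 feed-forward network."""
    return [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass; tanh on hidden layers, linear output. Returns all activations."""
    acts = [x]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(z if i == len(params) - 1 else np.tanh(z))
    return acts

def train(params, X, Y, lr=0.01, epochs=500):
    """Plain gradient-descent back-propagation on the mean squared error."""
    for _ in range(epochs):
        acts = forward(params, X)
        delta = 2.0 * (acts[-1] - Y) / len(X)         # d(MSE)/d(output)
        for i in reversed(range(len(params))):
            W, b = params[i]
            grad_W = acts[i].T @ delta
            grad_b = delta.sum(axis=0)
            if i > 0:                                  # backprop through tanh
                delta = (delta @ W.T) * (1.0 - acts[i] ** 2)
            params[i] = (W - lr * grad_W, b - lr * grad_b)
    return params
```

Test-set estimates for the RMSE comparison can then be obtained from `forward(params, X_test)[-1]`.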
## 5 Performance Results
The deep learning technique used to design the ANN uses the back-propagation training algorithm to predict a specific output. This output is compared with the actual known class label to measure the error between the predicted and actual outputs, and the error is sent back to the neurons for weight adjustments. FFBP measures the variance of the residuals in a repeated process. The root-mean-squared error is one way to calculate this error: the errors are squared before being summed, which prevents positive and negative values from cancelling out when summing the errors of all the nodes. We used the root-mean-squared error instead of the mean absolute error (MAE) to measure the standard deviation of the errors, since gradient descent requires the derivative of the loss function to be calculated in order to minimize it and generate better outputs. The results are presented in Table 3 below, which shows the performance of the designed deep learning model on the training data. The selection of the best-performing model is based on the results obtained.
Figure 9: Training process
We executed the neural network to detect unusual malicious activities in the MANET. As can be noticed in the performance results, two different transfer functions are used in this research: Log-Sigmoid and Tan-Sigmoid. A well-trained ANN should have a very low RMSE at the end of the training phase. The best result among the ANNs for the FFBP network with the Tan-Sigmoid function corresponds to the 4-15-10-1 network, which produces RMSE\(=\)0.0452 at 14 epochs. An MSE that is very small, or almost zero, indicates that the neural network model output and the desired output have become very close to each other on the training data set. The rest of the results are given in Table 4 below, which shows the performance of the neural network model on the testing data. It can be noticed that the best result for the neural network model using FFBP with the Tan-Sigmoid function corresponds to the 4-15-10-1 design, which produces RMSE\(=\)0.0512.
Fig. 10 and Fig. 11 summarize how the designed artificial neural networks (ANN 4-15-10-1) and (ANN 4-20-10-1) performed in the training and testing phases. After selecting the model with the best RMSE value, we used it to evaluate the performance of the proposed system. The goal is to distinguish a normal connection from a malicious attack connection in the MANET. Thus, we used the Detection Rate (DR) as a performance measure, calculated as the number of attack connections correctly classified as attacks over the total number of connections in the network. Using this measure, we were able to detect the attack in the network with high accuracy, as shown in the table below. It can be noticed that as the number of connections increases, the detection rate decreases due to higher false positive rates.
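A minimal sketch of the Detection Rate as defined above, where attacks are labelled 1; the array-based interface is an illustrative assumption:

```python
import numpy as np

def detection_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Attack connections correctly classified as attacks (label 1)
    over the total number of connections, as defined in the text."""
    correct_attacks = np.sum((y_true == 1) & (y_pred == 1))
    return float(correct_attacks) / len(y_true)
```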
\begin{table}
\begin{tabular}{|l|l|l|l|l|}
\hline
**Network** & **Training Function** & **Layers** & **Transfer function** & **RMSE** \\ \hline
\multirow{8}{*}{Feed Forward Back Propagation (FFBP)} & \multirow{8}{*}{Training: TrainLM, Learning: LearnGDM} & \multirow{4}{*}{4-15-10-1} & LogSigmoid & 0.1998 \\
 & & & LogSigmoid & 0.1982 \\
 & & & TanSigmoid & 0.0512 \\
 & & & TanSigmoid & 0.0781 \\ \cline{3-5}
 & & \multirow{4}{*}{4-20-10-1} & LogSigmoid & 0.2337 \\
 & & & LogSigmoid & 0.1891 \\
 & & & TanSigmoid & 0.0821 \\
 & & & TanSigmoid & 0.0935 \\ \hline
\end{tabular}
\end{table} Table 4: Artificial neural networks based on testing data
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
**Network** & **Training/Learning functions** & **Layers** & **Transfer function** & **RMSE** & **Epoch** \\ \hline
\multirow{8}{*}{Feed Forward Back Propagation (FFBP)} & \multirow{8}{*}{Training: TrainLM, Learning: LearnGDM} & \multirow{4}{*}{4-15-10-1} & LogSigmoid & 0.1924 & 10 \\
 & & & LogSigmoid & 0.1901 & 12 \\
 & & & TanSigmoid & 0.0452 & 14 \\
 & & & TanSigmoid & 0.0618 & 16 \\ \cline{3-6}
 & & \multirow{4}{*}{4-20-10-1} & LogSigmoid & 0.1927 & 10 \\
 & & & LogSigmoid & 0.1801 & 12 \\
 & & & TanSigmoid & 0.0492 & 14 \\
 & & & TanSigmoid & 0.0835 & 16 \\ \hline
\end{tabular}
\end{table} Table 3: Artificial neural network for training data
## 6 Conclusions
This research paper focused on modelling and investigating the use of artificial neural networks (ANNs) as a means of intrusion detection in mobile ad hoc networks (MANETs). The main objective of this work was to analyse, simulate, and evaluate the use of feedforward neural networks with back-propagation (FFBP) in MANETs. A data set generated by simulating mobile ad hoc networks was used to compute the input parameters of this approach, and the RMSE was employed as the metric to evaluate the performance of the proposed deep learning artificial neural network modelling. The proposed modelling can be utilized for detecting
Figure 11: RMSE for Training and Testing (ANN 4-20-10-1)
Figure 10: RMSE for Training and Testing (ANN 4-15-10-1)
a DoS attack in a MANET. The best results among the ANNs for the FFBP network with the Tan-Sigmoid function correspond to the 4-15-10-1 network, which produces RMSE=0.0452 at 14 epochs on the training data and RMSE=0.0512 on the testing data. We also used the Detection Rate (DR) as a performance measure to evaluate the selected neural network model. In future work, different types of network attacks will be considered for intrusion detection. Another measure that could be used in the analysis is the coefficient of determination, or R squared. R squared is the percentage of the variation in Y explained by the model; the higher this percentage, the better. However, the value of R squared will always be less than one, irrespective of whether the values in the data set are small or large.
## Conflicts of Interest
The authors declare no conflict of interest.
## Acknowledgements
This research was funded by the Emirates Center for Mobility Research (ECMR) of the United Arab Emirates University (grant number 31R271).
|
2307.11588 | Transferability of Convolutional Neural Networks in Stationary Learning
Tasks | Recent advances in hardware and big data acquisition have accelerated the
development of deep learning techniques. For an extended period of time,
increasing the model complexity has led to performance improvements for various
tasks. However, this trend is becoming unsustainable and there is a need for
alternative, computationally lighter methods. In this paper, we introduce a
novel framework for efficient training of convolutional neural networks (CNNs)
for large-scale spatial problems. To accomplish this we investigate the
properties of CNNs for tasks where the underlying signals are stationary. We
show that a CNN trained on small windows of such signals achieves a nearly
identical performance on much larger windows without retraining. This claim is supported
by our theoretical analysis, which provides a bound on the performance
degradation. Additionally, we conduct thorough experimental analysis on two
tasks: multi-target tracking and mobile infrastructure on demand. Our results
show that the CNN is able to tackle problems with many hundreds of agents after
being trained with fewer than ten. Thus, CNN architectures provide solutions to
these problems at previously computationally intractable scales. | Damian Owerko, Charilaos I. Kanatsoulis, Jennifer Bondarchuk, Donald J. Bucci Jr, Alejandro Ribeiro | 2023-07-21T13:51:45Z | http://arxiv.org/abs/2307.11588v1 | # Transferability of Convolutional Neural Networks in Stationary Learning Tasks
###### Abstract
Recent advances in hardware and big data acquisition have accelerated the development of deep learning techniques. For an extended period of time, increasing the model complexity has led to performance improvements for various tasks. However, this trend is becoming unsustainable and there is a need for alternative, computationally lighter methods. In this paper, we introduce a novel framework for efficient training of convolutional neural networks (CNNs) for large-scale spatial problems. To accomplish this we investigate the properties of CNNs for tasks where the underlying signals are stationary. We show that a CNN trained on small windows of such signals achieves a nearly identical performance on much larger windows without retraining. This claim is supported by our theoretical analysis, which provides a bound on the performance degradation. Additionally, we conduct thorough experimental analysis on two tasks: multi-target tracking and mobile infrastructure on demand. Our results show that the CNN is able to tackle problems with many hundreds of agents after being trained with fewer than ten. Thus, CNN architectures provide solutions to these problems at previously computationally intractable scales.
convolutional neural networks, transfer learning, deep learning, stationary process
## I Introduction
How much longer can we sustain the growing dataset size and model complexity? Over the past decade, we have seen rapid advancements in deep learning, which produced state-of-the-art results in a wide range of applications [3, 4, 5]. In large part, these successes are due to increasingly powerful hardware [3, 4] that enabled processing larger datasets [6] and training deep learning models with more parameters.
Recently, this trend has been taken to new extremes with the advent of large language models [7, 8, 9, 10]. For example, GPT-3 was trained on a dataset with approximately 374 billion words and has 175 billion parameters [7]. However, this strategy is not sustainable due to diminishing returns and the increasing cost of computation and data acquisition [11, 12]. Moreover, in applications with a limited amount of data, training such large models is impractical. A natural question that arises is whether we can build computationally efficient models that maintain the favorable properties of deep learning. In this paper, we give an affirmative answer and develop an analytical and computational framework that uses fully convolutional models and transfer learning.
Convolutional neural networks (CNNs) are one of the most popular deep learning architectures [4], especially for image classification [13]. Though initially used for image processing, they have proven useful for a wide variety of other signals such as text, audio, weather, ECG data, traffic data, and many others [4, 14, 15]. Despite the recent increased interest in transformer-based models [5], convolutions continue to play an important role under the hood of novel architectures. For example, the autoencoder in latent diffusion models [16] is fully convolutional [17].
Transfer learning, on the other hand, is a set of methodologies that exploit knowledge from one domain to another [18, 19]. The works in [20, 21] taxonomize a wide range of transfer learning approaches. One common transfer learning approach is _parameter transfer_, where a model is pre-trained to perform one task and then some or all of its parameters are reused for another task. This process allows a model to grasp intricate patterns and features present in one domain and apply that knowledge to a different, but related domain. For example, the work in [22] employs parameter transfer to improve CNN performance on medical images by pre-training on ImageNet. Yet, the true power of transfer learning transcends mere performance improvements. It offers an opportunity to construct highly efficient and lightweight solutions, which is crucial in large-scale systems with high computational needs or constrained resources. In the context of graph neural networks, transfer learning has been explored, both theoretically and experimentally, for large-scale graphs [23]. Training on small graphs and transferring to larger graphs is a promising approach [24, 25]. However, this flavor of transfer learning has been underexploited for CNNs, with a few exceptions; [26] shows that training on CIFAR can be decomposed into training on smaller patches of images.
In this paper, we fill this gap and develop a novel theoretical and computational framework to tackle large-scale spatial problems with transfer learning and CNNs. In particular, we train a CNN model on a small version of a spatial problem and transfer it to an arbitrarily large problem, without re-training. To justify this approach we leverage the shift-equivariance property of CNNs and analyze their performance at arbitrarily large scales. Note that, many machine learning tasks are shift-equivariant in some sense. For instance, in image segmentation, if the input image is translated the output will move accordingly. Other systems exhibit this phenomenon, including financial [27], weather [28], and multi-agent systems [1].
Our analysis studies CNNs when the input-output pairs are stationary processes over space or time. This is motivated
by the fact that, in stochastic process theory, shift-invariance is equivalent to stationarity [29] and shift-equivariance is equivalent to joint stationarity of the input and output signals. We demonstrate that a CNN can be efficiently trained for tasks with jointly stationary inputs and outputs. To accomplish this, we prove a bound on the generalization error of a fully convolutional model when trained on small windows of the input and output signals. Our theory indicates that such a model can be deployed on arbitrarily large windows on the signals, with minimal performance degradation.
To support this analysis we conducted numerical experiments on two tasks: multi-target tracking [30] and mobile infrastructure on demand [31, 32]. In both problems, the inputs and outputs are finite sets, which we reinterpret as image-to-image tasks. We describe this method in detail to show how to apply our framework to a broad class of spatial problems. Overall, our contributions can be summarized as follows.
* Present a transfer learning framework for efficient training of CNNs for large-scale spatial problems.
* Prove a bound for the generalization error of a CNN trained under this framework.
* Describe a methodology to apply CNNs to non-image problems.
* Demonstrate the effectiveness of the framework in multi-target tracking and mobile infrastructure on demand.
## II Learning to process stationary signals
The following is a common machine learning (ML) problem that becomes challenging when signals have infinite support. Given an input-output signal pair \((X,Y)\), the task is to find a model \(\mathbf{\Phi}(\cdot)\) that minimizes the mean squared error (MSE) between \(\mathbf{\Phi}(X)\) and \(Y\). Let \(X\) and \(Y\) be random functions of a vector parameter \(\mathbf{t}\in\mathbb{R}^{d}\), which can represent time, space, or an abstract \(d\)-dimensional space. Mathematically, this is a minimization problem with loss
\[\mathcal{L}_{1}(\mathbf{\Phi})=\mathbb{E}\left[\int_{\mathbb{R}^{d}}|\mathbf{ \Phi}(X)(\mathbf{t})-Y(\mathbf{t})|^{2}d\mathbf{t}\right], \tag{1}\]
where \(\mathbf{\Phi}(X)(\mathbf{t})\) is the output of the model at \(\mathbf{t}\). When the signals have infinite support, the integral may diverge. To accommodate this, the MSE can be redefined as,
\[\mathcal{L}_{2}(\mathbf{\Phi})=\mathbb{E}\left[\lim_{T\rightarrow\infty}\frac{ 1}{T^{d}}\int_{\mathcal{C}^{d}_{T}}|\mathbf{\Phi}(X)(\mathbf{t})-Y(\mathbf{t} )|^{2}d\mathbf{t}\right], \tag{2}\]
where \(C^{d}_{T}\) is an open set of the points in \(\mathbb{R}^{d}\) within a \(T\)-wide hypercube, centered around the origin. The primary difference between (1) and (2) is the normalization by the hypercube volume \(T^{d}\). This extends the MSE to infinite signals where (1) diverges. Practically, large values of \(T\) represent situations where signals are too wide to be computationally tractable.
This paper explores how to solve the problem of minimizing (2). Numerically evaluating and minimizing (2) is intractable in general. To overcome this limitation, Section II-C shows that if \(\mathbf{\Phi}(\cdot)\) is a CNN trained on small windows of the signals, then (2) is bounded by the training loss on the small window plus a small quantity. To accomplish this, \(X,\ Y\) are modeled as jointly stationary random signals and the output of the CNN \(\mathbf{\Phi}(X)\) is analyzed. Before this analysis is presented in Section II-C, the following sections define stochastic processes, stationarity, and CNNs.
### _Jointly Stationary Stochastic Processes_
We focus on continuous stationary stochastic processes with a multidimensional index. In this context, a stochastic process [29, 33, 34] is a family of random variables \(\{X(\mathbf{t})\}_{\mathbf{t}\in\mathbb{R}^{d}}\) indexed by a parameter \(\mathbf{t}\in\mathbb{R}^{d}\). One of the key assumptions in the analysis is joint stationarity between the signals \(X\) and \(Y\). Two continuous random signals are jointly stationary if all their finite joint distributions are shift-invariant.
**Definition 1**.: _Consider two real continuous stochastic processes \(\{X(\mathbf{t})\}_{\mathbf{t}\in\mathbb{R}^{d}}\) and \(\{Y(\mathbf{t})\}_{\mathbf{t}\in\mathbb{R}^{d}}\). The two processes are jointly stationary if and only if they satisfy (3) for any shift \(\boldsymbol{\tau}\in\mathbb{R}^{d}\), non-negative \(n,m\in\mathbb{N}_{0}\), indices \(\mathbf{t}_{i},\mathbf{s}_{i}\in\mathbb{R}^{d}\), and Borel sets of the real line \(A_{i},B_{i}\)._
\[\begin{split}& P(X(\mathbf{t}_{1})\in A_{1},...,X(\mathbf{t}_{n})\in A_{n},Y(\mathbf{s}_{1})\in B_{1},...,Y(\mathbf{s}_{m})\in B_{m})\\ &=P(X(\mathbf{t}_{1}+\boldsymbol{\tau})\in A_{1},...,X(\mathbf{t}_{n}+\boldsymbol{\tau})\in A_{n},Y(\mathbf{s}_{1}+\boldsymbol{\tau})\in B_{1},...,Y(\mathbf{s}_{m}+\boldsymbol{\tau})\in B_{m})\end{split} \tag{3}\]
Modeling \(X\) and \(Y\) as jointly stationary stochastic processes in \(d\)-dimensions has practical applications in many domains. Numerous signals can be represented by stationary or quasi-stationary processes. Examples include signals in financial [27], weather [28], sensor network [35] and multi-agent systems [1]. Therefore, understanding the performance of ML models for stationary signal processing has widespread importance. CNNs are of particular interest because they admit translational symmetries similar to jointly stationary signals. The following analysis attempts to demystify their performance and broaden our knowledge of how to train CNNs efficiently.
### _Convolutional Neural Networks_
CNNs are versatile architectures, heavily used to minimize (1), typically for image processing. A CNN is a cascade of non-linear functions, called layers. The output of the \(l^{\text{th}}\) layer is defined recursively:
\[x_{l}(\mathbf{t})=\sigma\left(\int_{\mathbb{R}^{d}}h_{l}(\mathbf{s})x_{l-1}( \mathbf{t}-\mathbf{s})d\mathbf{s}\right). \tag{4}\]
At each layer, the output \(x_{l}\) is obtained by convolving the previous output \(x_{l-1}\) by a function \(h_{l}\) and applying a pointwise nonlinearity \(\sigma\). As the name suggests, convolution operations are the cornerstone of the architecture. They are the key to exploiting spatial symmetries since they are shift-equivariant. The CNN has \(L\) layers and is parametrized by a set of functions \(\mathcal{H}=\{h_{1},...,h_{L}\}\), also known as parameters. Let
\[\mathbf{\Phi}(X;\mathcal{H})(\mathbf{t})\coloneqq x_{L}(\mathbf{t}) \tag{5}\]
represent a CNN with output \(x_{L}\) and input \(x_{0}\coloneqq X\). Then the loss function in (2) can be rewritten as a function of these parameters.
\[\mathcal{L}_{\infty}(\mathcal{H})=\mathbb{E}\left[\lim_{T\rightarrow\infty}\frac {1}{T^{d}}\int_{\mathcal{C}^{d}_{T}}|\mathbf{\Phi}(X;\mathcal{H})(\mathbf{t})- Y(\mathbf{t})|^{2}d\mathbf{t}\right] \tag{6}\]
In general, evaluating (6) is as challenging as evaluating (2), but two assumptions make the analysis tractable. First, the input and output signals are jointly stationary. Second, the model \(\mathbf{\Phi}\) is a CNN. The next section presents our main theoretical result, which shows how a CNN can be trained efficiently on narrow signal windows but evaluated on arbitrarily wide signals without sacrificing performance.
### _Training on a window_
Instead of minimizing (6) directly, which is computationally prohibitive, we propose to train the CNN on narrow windows of the signals \(X,\ Y\). In particular, we can rewrite (6) to consider finite windows over the signals:
\[\mathcal{L}_{\cap}(\mathcal{H})=\frac{1}{B^{d}}\mathbb{E}\left[\int_{\mathcal{ C}_{B}^{d}}|\mathbf{\Phi}(\cap_{A}X;\mathcal{H})(\mathbf{t})-Y(\mathbf{t})|^{2} d\mathbf{t}\right], \tag{7}\]
where \(\cap_{s}(\mathbf{t})\coloneqq 1(\mathbf{t}\in\mathcal{C}_{s}^{d})\) is an indicator function that is equal to one whenever \(\mathbf{t}\) is within an \(s\)-wide hypercube. Minimizing (7) is equivalent to learning a model between \(\cap_{A}X\) and \(\cap_{B}Y\). These are windowed versions of the original signals, with window widths \(A\) and \(B\), respectively. We only consider \(A\geq B\), so that the input is wider than the output of the CNN. Most practical CNN architectures either shrink their input width or keep it constant, because padding at each convolutional layer introduces boundary effects.
Unlike (6), the problem in (7) can be readily computed numerically because the inputs and outputs have finite support. Therefore, optimizing the set of parameters \(\mathcal{H}\) with respect to \(\mathcal{L}_{\cap}\) is possible in practice using stochastic gradient descent, as shown in Sections V and VI. The window sizes \(A\) and \(B\) can be chosen to maximize training performance. For example, one might want the signals to be wide enough to saturate GPU computing while being small enough to fit into memory.
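As a concrete illustration, the following is a minimal sketch of this windowed training procedure, assuming a PyTorch model and a data loader yielding windowed input-output pairs; the names `model` and `loader` are hypothetical, and the pixel-wise mean squared error serves as a Monte Carlo estimate of (7).

```python
import torch

def train_windowed(model, loader, epochs=10, lr=1e-4):
    """Minimize the windowed loss (7) by stochastic gradient descent.

    `loader` is assumed to yield pairs (x, y), where x covers an A-wide
    input window and y a B-wide output window, and `model` is assumed to
    crop its output to the B-wide window.
    """
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            loss = ((model(x) - y) ** 2).mean()  # Monte Carlo estimate of (7)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```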
To summarize, we want to find a model \(\mathbf{\Phi}(X;\mathcal{H})\) that approximates a map from \(X\) to \(Y\), described by the loss function \(\mathcal{L}_{\infty}(\mathcal{H})\) in (6). However, minimizing \(\mathcal{L}_{\infty}(\mathcal{H})\) is challenging, especially when the signals involved are of large scale (very wide). Instead, we propose to train the CNN model using \(\mathcal{L}_{\cap}(\mathcal{H})\), which is easy to compute. Our proposed approach is theoretically justified by our novel theoretical analysis that provides an upper bound on \(\mathcal{L}_{\infty}\) in terms of \(\mathcal{L}_{\cap}\), i.e.,
\[\mathcal{L}_{\infty}(\hat{\mathcal{H}})\leq\mathcal{L}_{\cap}(\hat{\mathcal{H }})+C. \tag{8}\]
To derive this bound, let \(\hat{\mathcal{H}}\) be a set of parameters with associated windowed loss \(\mathcal{L}_{\cap}(\hat{\mathcal{H}})\), obtained by minimizing (7) or by another training procedure. Under the following assumptions, we show that \(\hat{\mathcal{H}}\) performs almost as well on signals with infinite support, in terms of (6).
**Assumption 1**.: _The random signals \(X\) and \(Y\) are jointly stationary following Definition 1._
In many important application domains, signals are at least approximately stationary, which makes our result directly applicable to these cases. Such a result also offers theoretical insights into the fundamental understanding and explainability of CNNs.
**Assumption 2**.: _The model \(\mathbf{\Phi}(X;\mathcal{H})\) is a convolutional neural network as defined by (4) with a set of parameters \(\mathcal{H}\)._
Given that the signals under consideration are stationary, it is appropriate to use a convolutional model, which is shift-equivariant and can exploit this stationarity. Assumptions 1 and 2 are the two key assumptions of our analysis. The remaining assumptions are mild and typical for model analysis.
**Assumption 3**.: _The stochastic processes \(X\) and \(Y\) are bounded so that \(|X(\mathbf{t})|<\infty\) and \(|Y(\mathbf{t})|<\infty\) for all \(\mathbf{t}\in\mathbb{R}^{d}\)._
Boundedness of the signals is a sufficient condition for the existence and finiteness of some limits. The magnitude of the bound is inconsequential to the final result.
**Assumption 4**.: _The filter functions \(h_{l}\in\mathcal{H}\) are continuous with a finite width \(K\). That is \(h_{l}(\mathbf{t})=0\) for all \(\mathbf{t}\notin[-K/2,K/2]^{d}\)._
**Assumption 5**.: _The filters have finite \(L1\) norms, \(||h_{l}||_{1}<\infty\) for all \(h_{l}\in\mathcal{H}\)._
Assumptions 4 and 5 are satisfied in typical CNN implementations since the CNN filters are finite in practice.
**Assumption 6**.: _The nonlinearities \(\sigma(\cdot)\) are normalized Lipschitz continuous, so that the Lipschitz constant is equal to 1. Mathematically, this means that \(|\sigma(x)-\sigma(y)|\leq|x-y|\) for any \(x,y\in\mathbb{R}\)._
Note that the majority of pointwise nonlinear functions used in deep learning, e.g., ReLU, Leaky ReLU, and the hyperbolic tangent, are normalized Lipschitz for numerical stability.
Under these assumptions, Theorem 1 provides a bound on the generalization error \(\mathcal{L}_{\infty}\) of a CNN trained on a finite window and tested on an infinite window.
**Theorem 1**.: _Let \(\hat{\mathcal{H}}\) be a set of filters that achieves a cost of \(\mathcal{L}_{\cap}(\hat{\mathcal{H}})\) on the windowed problem as defined by (7) with an input window of width \(A\) and an output window width \(B\). Under Assumptions 1-6, the associated cost \(\mathcal{L}_{\infty}(\hat{\mathcal{H}})\) on the original problem as defined by (6) is bounded by the following._
\[\mathcal{L}_{\infty}(\hat{\mathcal{H}})\leq\mathcal{L}_{\cap}(\hat{\mathcal{H }})+\mathbb{E}\left[X(0)^{2}\right]C+\sqrt{\mathcal{L}_{\cap}(\hat{\mathcal{H }})\mathbb{E}\left[X(0)^{2}\right]C} \tag{9}\]
\[C=\frac{H^{2}}{B^{d}}\max\left(0,[B+LK]^{d}-A^{d}\right) \tag{10}\]
_In (9), \(H=\prod_{l=1}^{L}||h_{l}||_{1}\) is the product of the L1 norms of the CNN's filters, \(L\) is the number of layers, and \(K\) is the width of the filters._
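As a worked example, the bound in (9)-(10) is straightforward to evaluate numerically. The sketch below, with illustrative argument values, shows that the correction term vanishes whenever \(A\geq B+LK\).

```python
import math

def theorem1_bound(loss_window, ex2, norms_h, L, K, A, B, d):
    """Evaluate the right-hand side of (9)-(10).

    loss_window: windowed loss; ex2: E[X(0)^2]; norms_h: L1 norms of the
    filters; L layers of width-K filters; A, B: window widths; d: dimension.
    """
    H = math.prod(norms_h)
    C = (H ** 2 / B ** d) * max(0.0, (B + L * K) ** d - A ** d)
    return loss_window + ex2 * C + math.sqrt(loss_window * ex2 * C)

# With A >= B + L*K the correction vanishes and the bound equals the
# windowed loss itself:
print(theorem1_bound(0.1, 1.0, [1.0] * 3, L=3, K=9, A=128, B=64, d=2))  # 0.1
```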
The proof is provided in Appendix A. The derivation exploits the shift-equivariance of CNNs, which preserves the joint stationarity of the signals. Theorem 1 shows that minimizing \(\mathcal{L}_{\infty}\) can be approximated by instead minimizing \(\mathcal{L}_{\cap}\). The maximum deterioration in performance is bounded by a quantity that depends on the variance of the input signal, the number of layers, the filter widths, and the sizes of the input and output windows. The inequality in (9) reduces to \(\mathcal{L}_{\infty}(\hat{\mathcal{H}})\leq\mathcal{L}_{\cap}(\hat{\mathcal{H}})\) whenever \(A\geq B+LK\). In this special case, the output becomes
unaffected by padding, which explains the last two terms on the right-hand side of (9).
## III Representing Sparse Problems as Mixtures
CNNs are well suited to processing images or, more generally, multi-dimensional signals. In various spatial problems [1, 31, 32], the decision variables are finite sets of real vectors with random cardinality, such as \(\mathcal{X}=\{\mathbf{x}\in\mathbb{R}^{d}\}\). Therefore, in this section we propose a method to represent sets of vectors such as \(\mathcal{X}\) by multi-dimensional signals \(X(\mathbf{t};\mathcal{X}):\mathbb{R}^{d}\mapsto\mathbb{R}\).
Representing the sets by a superposition of Gaussian pulses is an intuitive approach. Specifically,
\[X(\mathbf{t};\mathcal{X})\coloneqq\sum_{\mathbf{x}\in\mathcal{X}}(2\pi\sigma _{X}^{2})^{-1}\exp(-\frac{1}{2\sigma_{X}^{2}}||\mathbf{t}-\mathbf{x}||_{2}^{2 }), \tag{11}\]
where there is one pulse with variance \(\sigma_{X}^{2}\) for each element in the set. The second image in Figure 1 exemplifies this.
This approach can be made more general by incorporating additional information about the distribution of the set. Assume that each element \(\mathbf{x}\in\mathcal{X}\) is in fact a sample from a distribution that depends on \(\mathbf{t}\). The density function \(g(\mathbf{x}\mid\mathbf{t})\) describes this dependence. For example, when \(\mathbf{x}\) is a noisy measurement of some underlying quantity, \(g(\mathbf{x}\mid\mathbf{t})\) describes the likelihood of sampling \(\mathbf{x}\) given \(\mathbf{t}\) as the ground truth. There may be multiple sensors making measurements, each with a different density function \(g_{i}(\mathbf{x}\mid\mathbf{t})\). We write \(\mathbf{x}_{i}\in\mathcal{X}\) to make it clear that each element of \(\mathcal{X}\) is associated with some density function \(g_{i}\). Therefore, the intensity function \(X(\mathbf{t};\mathcal{X})\) can be written as a sum over these densities:
\[X(\mathbf{t};\mathcal{X})\coloneqq\sum_{\mathbf{x}_{i}\in\mathcal{X}}g_{i}( \mathbf{x}_{i}|\mathbf{t}). \tag{12}\]
The function \(g_{i}\) is sometimes known as a _measurement model_. In the first image in Figure 1, the measurements contain noise applied in polar coordinates around the sensors. This results in an arc-shaped distribution. Notice that (11) is a special case of (12) in which the distribution is Gaussian.
In practice, we window and discretize the intensity function \(X(\mathbf{t};\mathcal{X})\). The function is windowed to width \(T\) around zero and sampled with resolution \(\rho\). This produces a \(d\)-dimensional tensor \(\mathbf{X}\in\mathbb{R}^{N^{d}}\) with width \(N=\lfloor\rho T\rfloor\) along each dimension. Similarly, given a set \(\mathcal{Y}=\{\mathbf{y}\in\mathbb{R}^{d}\}\) we construct a tensor \(\mathbf{Y}\in\mathbb{R}^{N^{d}}\). This allows training a CNN to learn a mapping between \(\mathbf{X}\) and \(\mathbf{Y}\).
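For concreteness, the following sketch rasterizes a 2D point set into an image by combining (11) with the windowing and discretization just described; the parameter values are illustrative only.

```python
import numpy as np

def rasterize(points, T=1000.0, rho=0.128, sigma=10.0):
    """Sample the intensity function (11) on a T-wide 2D window.

    points: (n, 2) array; rho: samples per unit length; sigma: pulse std.
    """
    N = int(rho * T)  # N = floor(rho * T) samples per dimension
    axis = np.linspace(-T / 2, T / 2, N)
    tt = np.stack(np.meshgrid(axis, axis, indexing="ij"), axis=-1)  # (N, N, 2)
    img = np.zeros((N, N))
    for x in points:
        sq = ((tt - x) ** 2).sum(-1)  # squared distance to the pulse center
        img += np.exp(-0.5 * sq / sigma**2) / (2 * np.pi * sigma**2)
    return img
```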
To summarize, our original problem was to model the relationship between \(\mathcal{X}\) and \(\mathcal{Y}\). These are random sets and we assume that there exists a dataset of their realizations. We can represent each pair in this dataset as a tensor \(\mathbf{X}\) and \(\mathbf{Y}\), respectively. Therefore, we can use a CNN to learn a map between these tensors. Hopefully, we find parameters so that \(\hat{\mathbf{Y}}:=\mathbf{\Phi}(\mathbf{X};\mathcal{H})\) is close to \(\mathbf{Y}\). However, we also need a method to recover an estimate of \(\mathcal{Y}\) from \(\hat{\mathbf{Y}}\), the output of our CNN.
To achieve this, we propose the following approach. Assume that \(\mathbf{Y}\) is obtained by following (11) and discretizing; this is the case in our experiments. Notice that the L1 norm of the output image, \(||Y(\mathbf{t};\mathcal{Y})||_{1}=|\mathcal{Y}|\), is by construction equal to the cardinality of \(\mathcal{Y}\). If the discretization preserves the L1 norm, then \(\mathbf{Y}\) has the same property for all elements of \(\mathcal{Y}\) within the window. If CNN training is successful, then \(\hat{\mathbf{Y}}\) retains this property, so its rounded L1 norm estimates the number of Gaussian components. Knowing this number, we can use k-means clustering or fit a Gaussian mixture via expectation maximization. In our implementation, we use the subroutines provided by Scikit-Learn [36].
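A minimal sketch of this extraction step is given below, assuming the output image was constructed (approximately) via (11); it estimates the cardinality from the L1 norm and fits a Gaussian mixture with Scikit-Learn. The resampling step that converts pixel intensities into weighted samples is an implementation assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_points(y_hat, T=1000.0, n_samples=5000):
    """Recover a point set from a CNN output image.

    The rounded L1 norm estimates the number of Gaussian components; pixel
    coordinates are then resampled proportionally to intensity and clustered.
    """
    n = max(int(round(y_hat.sum())), 0)
    if n == 0:
        return np.empty((0, 2))
    N = y_hat.shape[0]
    axis = np.linspace(-T / 2, T / 2, N)
    ii, jj = np.meshgrid(axis, axis, indexing="ij")
    coords = np.stack([ii.ravel(), jj.ravel()], axis=1)
    p = np.clip(y_hat.ravel(), 0, None)
    p = p / p.sum()  # normalize intensities into a sampling distribution
    samples = coords[np.random.choice(len(coords), size=n_samples, p=p)]
    return GaussianMixture(n_components=n).fit(samples).means_
```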
## IV Architecture
Theorem 1 suggests that training a CNN on small, representative windows of multi-dimensional input-output signals is effective at learning a mapping between the original signal pairs. Such a CNN can later generalize to arbitrary scales by leveraging the shift-equivariance property of CNNs. In this section, we describe a fully convolutional architecture for the processing of multi-dimensional signals.
Fig. 1: Example 2D signals from multi-target tracking; see Section V for a full problem description. _(Sensor Measurements)_: \(\bar{\mathbf{Z}}_{n}\) represents the measurements made by the sensors in one time-step; their positions are marked in red. The sensors are modeled as range-bearing with additive noise; therefore, in Cartesian coordinates, the distribution has a distinct banana shape. This example was cherry-picked to show multiple sensors in the 1km\({}^{2}\) area, but in simulations the sensors can be up to \(R\)km outside this area. _(Target Positions)_: \(\mathbf{X}_{n}\) represents the ground truth positions of all the targets. _(CNN Output)_: \(\mathbf{\Phi}(\bar{\mathbf{Z}}_{n};\hat{\mathcal{H}})\) is the output of the trained CNN. The CNN is trained to approximate the target positions image, given the past \(K\) sensor measurement images.
### _Fully convolutional encoder-decoder_
The inputs and outputs of the architecture are multidimensional signals:
\[\mathbf{X} \in\underbrace{\mathbb{R}^{N}\times\cdots\times\mathbb{R}^{N}}_{d \text{ times}}\times\mathbb{R}^{F_{0}}\] \[\text{and}\quad\mathbf{\Phi}(\mathbf{X},\mathcal{H}) \in\underbrace{\mathbb{R}^{N}\times\cdots\times\mathbb{R}^{N}}_{d \text{ times}}\times\mathbb{R}^{F_{L}}\]
with \(F_{0}\) and \(F_{L}\) features, respectively. The first \(d\) dimensions correspond to the translationally symmetric space in which the task takes place. To process these signals, we propose a fully convolutional encoder-decoder architecture, as shown in Figure 2. The proposed architecture is organized into three sequential components: the encoder, the hidden block, and the decoder. Together, these components contain \(L\) layers in total. The input to the first layer is \(\mathbf{X}\) and the output of the final layer is \(\mathbf{\Phi}(\mathbf{X},\mathcal{H})\). Each layer contains a convolution, a pointwise nonlinearity, and possibly a resampling operation.
Convolution operations are the first component of every layer. The input to the \(l^{\text{th}}\) layer is a tensor \(\mathbf{X}_{l-1}\) of order \(d+1\), such that \(\mathbf{X}_{l-1}[\mathbf{n}]\in\mathbb{R}^{F_{l-1}}\) with \(F_{l-1}\) features and index \(\mathbf{n}\in\mathbb{Z}^{d}\). This input is passed through a multiple-input multiple-output convolutional filter
\[\mathbf{Z}_{l}[\mathbf{n}]=\sum_{\mathbf{m}\in\mathbb{Z}^{d}}\mathbf{H}_{l}[ \mathbf{m}]\mathbf{X}_{l-1}[\mathbf{n}-\mathbf{m}], \tag{13}\]
to obtain an intermediate output \(\mathbf{Z}_{l}[\mathbf{n}]\in\mathbb{R}^{F_{l}}\) with \(F_{l}\) features. In (13), \(\mathbf{H}_{l}\) is a high-order tensor with \(\mathbf{H}_{l}[\mathbf{n}]\in\mathbb{R}^{F_{l}\times F_{l-1}}\) representing a filter with a finite width \(K\). As a result, the filter output \(\mathbf{Z}_{l}\) is also a high-order tensor with finite width that is indexed by \(\mathbf{n}\in\mathbb{Z}^{d}\).
The encoder and decoder are mirror images of each other. The encoder downsamples the signal, while the decoder upsamples it. This effectively compresses the signal, so that the hidden block can efficiently process it. Each encoder layer is composed of a convolutional layer, as defined in (13), followed by a downsampling operation and a pointwise nonlinearity:
\[\mathbf{X}_{l}[\mathbf{n}]=\sigma\left(\mathbf{Z}_{l}[M\mathbf{n}]\right). \tag{14}\]
Each layer in the encoder downsamples the input by a factor of \(M\). Conversely, the decoder upsamples its inputs by the same factor. Much like an encoder layer, each decoder layer contains a convolution followed by an upsampling operation and a pointwise nonlinearity:
\[\mathbf{X}_{l}[\mathbf{n}]=\sigma\left(\mathbf{Z}_{l}[\mathbf{n}/M]\right). \tag{15}\]
In (15), the use of square brackets in \(\mathbf{Z}_{l}[\mathbf{n}/M]\) indicates that the index is rounded to the nearest integer.
The hidden block layers are interposed between the encoder and decoder. Each hidden layer is composed of a convolution followed by a pointwise nonlinearity:
\[\mathbf{X}_{l}[\mathbf{n}]=\sigma\left(\mathbf{Z}_{l}[\mathbf{n}]\right). \tag{16}\]
They are similar to the encoder and decoder layers, but they do not contain any resampling operations.
The proposed encoder-decoder architecture has two main benefits. First, it is compatible with Theorem 1 because it is fully convolutional. Therefore, it preserves the stationarity of the input signals [37]. Second, it allows for efficient processing of multi-dimensional signals. The encoder downsamples the input signal, reducing the amount of computation needed in the hidden block. Afterwards, the decoder upsamples the processed signal back to the original representation. This compression is especially beneficial for our experiments because, as can be seen from Figure 1, the tasks we consider have sparse input and output signals.
### _Hyperparameters_
In practice, we found the following hyperparameter values to be effective in our simulations. Our encoder and decoder layers have filters with width \(K=9\), \(F_{l}=128\) channels, and a resampling factor of \(M=2\). Meanwhile, each hidden layer uses filters with width \(K=1\) and \(F_{l}=1024\) channels. We use three encoder layers, four hidden layers, and three decoder layers. In Section V we will elaborate on how we arrived at these hyperparameters.
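A 2D PyTorch sketch of this configuration is shown below. The Leaky ReLU nonlinearity, padding choices, and the identity on the final layer are assumptions for illustration, not a definitive specification of our implementation.

```python
import torch.nn as nn

def make_cnn(f_in=20, f_out=1, F=128, F_hidden=1024, K=9):
    """Encoder-decoder of Figure 2 with the Section IV-B hyperparameters.

    f_in=20 matches the K stacked measurement frames used in Section V.
    """
    layers, c = [], f_in
    for _ in range(3):  # encoder: strided convolutions, M = 2
        layers += [nn.Conv2d(c, F, K, stride=2, padding=K // 2), nn.LeakyReLU()]
        c = F
    for _ in range(4):  # hidden block: width-1 filters, no resampling
        layers += [nn.Conv2d(c, F_hidden, 1), nn.LeakyReLU()]
        c = F_hidden
    for i in range(3):  # decoder: transpose convolutions, M = 2
        out = f_out if i == 2 else F
        layers += [nn.ConvTranspose2d(c, out, K, stride=2, padding=K // 2,
                                      output_padding=1)]
        layers += [nn.LeakyReLU()] if i < 2 else []
        c = out
    return nn.Sequential(*layers)
```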
There are several possible practical methods of downsampling and upsampling in the encoder and decoder blocks. In our best-performing variant of the architecture, we combine them with convolutions. In the encoder, we use convolutions with a stride of two. Meanwhile, in the decoder, we use transpose convolutions with a stride of two. This approach is used in [16], although it is prone to producing checkerboard artifacts [38]. Alternatively, the downsampling can be performed separately or using a pooling layer such as average pooling. Similarly, the transpose convolutions can be replaced by an upsampling or interpolation layer followed by a convolution. However, in our simulations, we found this approach to be less performant, although it allowed training to converge slightly faster.

Fig. 2: Diagram of the proposed CNN architecture. Above we depict a CNN with two encoder, hidden, and decoder layers. This illustrates the architecture but is different from the configuration used in the experiments.
## V Multi Target Tracking
Multi-target tracking (MTT) is a type of sequential estimation problem where the goal is to jointly determine the number of objects and their states from noisy, discrete-time sensor returns. Such sensor returns can include the pulse energy observed across a series of sampled time delays in radar systems or the pixel intensities observed from a camera system. It is distinct from _state filtering_ in that object appearance and disappearance (i.e., birth and death) must be addressed in the problem formulation. We direct the reader to [30] for a detailed overview of the field and the relevant application areas.
We focus on the canonical application known as _point object tracking_, where sensor returns are quantized via a detection process into measurements before the tracker is applied. To make the problem definition more precise, we introduce notation for the multi-target state and measurement sets at the \(n^{\text{th}}\) time-step. Let the _multi-target state_ be denoted by the set \(\mathcal{X}_{n}\). Each element of this set represents the state of a target in some space. At each time step, each sensor produces noisy measurements of the targets, resulting in a set \(\mathcal{Z}_{n}\) of measurements in a different space. The goal in MTT is to estimate the multi-target state \(\mathcal{X}_{n}\) given all the preceding measurements \(\mathcal{Z}_{1},...,\mathcal{Z}_{n}\). The principal challenge in this task is _measurement origin uncertainty_: it is not known _a priori_ which measurements were generated by true targets and which resulted from an independent clutter process (false positives), nor which targets failed to generate measurements (false negatives). A direct Bayesian solution to the measurement origin uncertainty problem with multiple sensors requires solving an NP-hard multi-dimensional ranked assignment problem [39].
Algorithms in this area are well-studied and include global nearest neighbor techniques [40], joint probabilistic data association [41], multi-hypothesis tracking [42], and belief propagation [43]. Recently random finite sets (RFS) have emerged as a promising all-in-one Bayesian framework that addresses the measurement origin uncertainty problem jointly [44, 45]. In particular, _labeled RFS_ were introduced in [46] specifically for MTT, resulting in the Generalized Labeled Multi-Bernoulli (GLMB) [47, 48] and Labeled Multi-Bernoulli (LMB) [49, 50] filters. However, the GLMB and LMB filters exhibit exponential complexity in the number of sensors [39, 48, 51].
Our proposed analytical and computational framework is an excellent fit to overcome these scalability issues for two reasons: (i) it is independent of the number of sensors and targets within a specific region and (ii) it can leverage transfer learning to perform MTT in large areas. The proposed CNN tracker is trained on a small \(1\text{km}^{2}\) area and tested on larger areas up to \(25\text{km}^{2}\) where the number of sensors and targets are increased proportionally to maintain a fixed density of each. The computational complexity of this approach is proportional to the simulation area rather than the number of targets and sensors. Based on Theorem 1, we would expect the tracking performance from training to be maintained as this scaling up is carried out.
### _Modeling targets and sensors_
The multi-target state changes over time. First, new targets may be born according to a Poisson distribution with mean \(\lambda_{\text{birth}}\). Second, old targets may disappear with probability \(p_{\text{death}}\). Finally, the states of existing targets evolve over time. In our simulations, we use a (nearly) constant velocity (CV) model [52, 53]. In the CV model, the state \(\mathbf{x}_{n,i}\in\mathcal{X}\) of the \(i^{\text{th}}\) target at time-step \(n\) is given by (17).
\[\mathbf{x}_{n,i}=\begin{bmatrix}\mathbf{p}_{n,i}&\mathbf{v}_{n,i}\end{bmatrix} \tag{17}\]
A target state is composed of its position \(\mathbf{p}_{n,i}\in\mathbb{R}^{2}\) and velocity \(\mathbf{v}_{n,i}\in\mathbb{R}^{2}\). The state evolves according to (18).
\[\Delta\mathbf{p}_{n,i} =\tau\mathbf{v}_{n,i}+\frac{\tau^{2}}{2}\boldsymbol{\mu}_{ n,i} \tag{18}\] \[\Delta\mathbf{v}_{n,i} =\tau\boldsymbol{\mu}_{n,i}\]
The CV model is discretized with time-step \(\tau\) seconds. Also, note that the acceleration \(\boldsymbol{\mu}_{n,i}\in\mathbb{R}^{2}\) is sampled from a multivariate normal distribution,
\[\boldsymbol{\mu}_{n,i}\sim\mathcal{N}\left(0,\left[\begin{smallmatrix}\sigma_ {a}^{2}&0\\ 0&\sigma_{a}^{2}\end{smallmatrix}\right]\right). \tag{19}\]
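A minimal sketch of one step of this dynamic model is given below; \(\tau\) and \(\sigma_{a}\) take illustrative values since the simulation parameters of Table I are not reproduced here.

```python
import numpy as np

def cv_step(p, v, tau=1.0, sigma_a=0.5, rng=None):
    """Propagate (nearly) constant-velocity targets per (17)-(19).

    p, v: (n, 2) arrays of positions and velocities.
    """
    rng = rng or np.random.default_rng()
    mu = rng.normal(0.0, sigma_a, size=v.shape)  # random acceleration (19)
    p_next = p + tau * v + 0.5 * tau**2 * mu     # position update (18)
    v_next = v + tau * mu                        # velocity update (18)
    return p_next, v_next
```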
There are \(M\) sensors that provide measurements of the multi-target state. Consider the \(j^{\text{th}}\) sensor with position \(\mathbf{s}_{j}\in\mathbb{R}^{2}\). If a target with state \(\mathbf{x}_{n,i}\in\mathcal{X}_{n}\) is within the sensor's radial range of \(R\) km, then at each time step the sensor may detect the target with probability \(p_{\text{detect}}\). When this occurs, a corresponding measurement \(\mathbf{z}_{n,i}^{j}\) is added to the measurement set \(\mathcal{Z}_{n}\). Multiple sensors can detect the same target, but each can provide at most one detection per target. The measurement \(\mathbf{z}_{n,i}^{j}\) is stochastic and its distribution is known as the _measurement model_. In our simulations, the measurement model is range-bearing relative to the position of the \(j^{\text{th}}\) sensor:
\[\mathbf{z}_{n,i}^{j}\sim\mathcal{N}\left(\begin{bmatrix}||\mathbf{p}_{n,i}- \mathbf{s}_{j}||_{2}\\ \angle(\mathbf{p}_{n,i}-\mathbf{s}_{j})\end{bmatrix},\begin{bmatrix} \eta_{r}^{2}&0\\ 0&\eta_{\theta}^{2}\end{bmatrix}\right). \tag{20}\]
The measurements are Gaussian distributed with a mean equal to the distance and bearing to the target. The covariance matrix is diagonal with a standard deviation of \(\eta_{r}\) meters in range and \(\eta_{\theta}\) radians in angle. Additionally, clutter is added to the measurement set \(\mathcal{Z}_{n}\). The number of clutter measurements is Poisson with mean \(\lambda_{\text{clutter}}\) per sensor. Clutter is generated independently for each sensor, with clutter positions sampled uniformly within its range.
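The sketch below generates one sensor's measurement set per the range-bearing model (20), with detections limited to the sensor's range and Poisson clutter drawn uniformly over its coverage disk; the parameter values are illustrative.

```python
import numpy as np

def sense(p_targets, s, R=2000.0, p_detect=0.9, eta_r=10.0, eta_th=0.02,
          lam_clutter=5.0, rng=None):
    """Simulate range-bearing measurements (20) plus uniform clutter."""
    rng = rng or np.random.default_rng()
    z = []
    for p in p_targets:
        d = p - s
        if np.linalg.norm(d) <= R and rng.random() < p_detect:
            r = np.linalg.norm(d) + rng.normal(0, eta_r)          # noisy range
            th = np.arctan2(d[1], d[0]) + rng.normal(0, eta_th)   # noisy bearing
            z.append(np.array([r, th]))
    for _ in range(rng.poisson(lam_clutter)):
        # uniform over the disk: radius ~ R * sqrt(U), angle ~ U(-pi, pi)
        z.append(np.array([R * np.sqrt(rng.random()),
                           rng.uniform(-np.pi, np.pi)]))
    return z
```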
### _Simulations_
To train the CNN we construct a dataset of images representing the MTT problem within a \(T=1\text{km}\) wide square window. Similarly, we construct a testing dataset for window
widths between 1km and 5km. For any particular window of width \(T\), constructing a dataset of images for training or testing is the same. First, we simulate the temporal evolution of the multi-target state \(\mathcal{X}_{n}\) and at each time step generate sensor measurements \(\mathcal{Z}_{n}\). Table I describes the parameters used for the simulations. At the beginning of each simulation we initialize \(\mathrm{Poisson}(\lambda_{\text{initial}})\) targets and \(\mathrm{Poisson}(\lambda_{\text{sensor}})\) sensors. Their positions are uniformly distributed within the simulation region, while the initial velocity for each target is sampled from \(\mathcal{N}(0,\sigma_{\text{v}}^{2})\).
Notice that sensors outside of the window \(T\) can make measurements of targets within the window. Unless we want to break our assumption of stationarity, we have to include these measurements as well. Recalling equation (6), the images should be cropped versions of some infinitely wide signal. Hence, it is necessary to simulate the targets and sensors over a window larger than \(T\). Luckily, the sensors have a finite range of \(R\)km, so we run simulations over a square window of width \(2R+T\), padding the simulation area by \(R\)km on each side. Similarly, we also keep track of targets that could be born outside the \(T\)km window and targets that leave the window but might return.
To generate training data, we ran 10,000 simulations with 100 steps each at a window width of \(T=1\)km. We generate images \(\mathbf{Z}(\mathcal{Z}_{n})\) and \(\mathbf{X}(\mathcal{X}_{n})\) from \(\mathcal{Z}_{n}\) and \(\mathcal{X}_{n}\), respectively, using the procedure outlined in Section III. In this section, \(\mathbf{X}\) denotes the _output_ of the CNN for consistency with the literature. To provide temporal information, the input to the CNN is a stack of the \(K=20\) past sensor measurement images:
\[\bar{\mathbf{Z}}_{n}\coloneqq\begin{bmatrix}\mathbf{Z}(\mathcal{Z}_{n-K+1})& \ldots&\mathbf{Z}(\mathcal{Z}_{n})\end{bmatrix}. \tag{21}\]
Data for \(n<K\) is excluded. We train the model for 84 epochs using AdamW [54] with a batch size of 32, a learning rate of \(6.112\times 10^{-6}\), and a weight decay of \(0.07490\). We arrived at these hyperparameters and the ones in Subsection IV-B by running a random search using the Optuna library [55]. The hyperparameters were sampled using the built-in tree-structured Parzen estimator [56].
### _Baseline Approaches_
Our baseline for comparing the CNN's performance is defined by two state-of-the-art filters. The filters used are the sequential Monte Carlo multi-sensor iterated corrector formulation of the Labeled Multi-Bernoulli (LMB) and the \(\delta\)-Generalized Labeled Multi-Bernoulli (GLMB) [57]. The bearing-range measurement models, sensor generation, and dynamic model for the targets are the same as described in Subsection V-A.
Both filters used the Monte Carlo approximation for the adaptive birth procedure from [58]. The proposal distribution is created by decomposing the state space into observable (\(x_{o}\)) and unobservable (\(x_{u}\)) states, with observable (\(p_{B}^{o}(x_{o},l_{+})\)) and unobservable (\(p_{B}^{u}(x_{u},l_{+})\)) prior densities. For the observable states, \(\mathbb{X}_{o}\), an uninformative uniform prior distribution \(p_{B}^{o}(x_{o},l_{+})=\mathcal{U}(\mathbb{X}_{o})\) was used. The unobservable states, which for this scenario are the velocities, were sampled from a zero-mean Gaussian distribution:
\[p_{B}^{u}(x_{u},l_{+})=\mathcal{N}(x_{u};0,\begin{bmatrix}\sigma_{\text{v}}^{ 2}&0\\ 0&\sigma_{\text{v}}^{2}\end{bmatrix}). \tag{22}\]
As the scenario scaled up in the number of targets and sensors, the number of measurements increased significantly. To accommodate this, the number of Gibbs iterations was increased for each scenario. Due to computational limitations, window sizes of \(T\in\{1,2,3\}\)km were used, with 2000, 4000, and 6000 Gibbs iterations, respectively. Every 10 Gibbs iterations, the current solution was reset to the all miss-detected tuple to encourage exploration.
### _Results_
To test the performance of the trained CNN we ran 100 simulations at each window size \(T\in\{1,2,3,4,5\}\)km and constructed images from them. Figure 3 shows example inputs and outputs of the CNN, picked at random during testing at each window size. We compare our results against state-of-the-art filters: Labeled Multi-Bernoulli (LMB) and Generalized Labeled Multi-Bernoulli (GLMB).
The output of the CNN and the output of the LMB and GLMB filters are not in the same space. Specifically, the output of the CNN filter is an image, whereas the LMB and GLMB filters output a set of vectors in \(\mathcal{X}\) and a set of weights. The vectors represent the estimated states of the targets, while the weights are their existence probability. This makes it difficult to compare the two. To draw a fair comparison, we consider two different metrics: the mean squared error (MSE) and optimal sub-pattern assignment (OSPA) [59].
In this context, the MSE is computed between the CNN output and the image representing the true target positions, as
defined by equation (11). We convert the LMB and GLMB outputs to images using a superposition of Gaussians, as described by equation (11). The only notable difference is that we weight the Gaussians by the estimated existence probabilities. Figure 4 shows that the MSE of the CNN is unaffected as the window size increases. This directly supports Theorem 1. Moreover, this is the case even though our CNN implementation employs padding. Therefore, the practical effect of padding is smaller than the second and third terms in (9) suggest.
However, MSE does not fairly compare the different approaches. The output of the MTT problem should be a set of estimates. OSPA [59], as defined in Definition 2, allows us to measure the average "distance" between two sets of vectors, possibly of different cardinalities. We use OSPA with a cutoff distance of \(c=500\) meters. We describe the method used to extract a set of vectors from the CNN output image in Section III.
**Definition 2**.: _(OSPA) If \(\mathcal{X}=\{\mathbf{x}_{i}\}_{i=1,...,n}\) and \(\mathcal{Y}=\{\mathbf{y}_{i}\}_{i=1,...,m}\) are two sets of vectors in \(\mathbb{R}^{d}\) with \(m\leq n\) then OSPA is defined as_
\[\text{OSPA}(\mathcal{X},\mathcal{Y},c)=\min_{\pi\in\Pi_{n}}\sqrt{\frac{1}{n} \left[\sum_{i=1}^{m}||\mathbf{x}_{\pi_{i}}-\mathbf{y}_{i}||_{2}^{2}+c^{2}(n-m) \right]}\]
_where \(c\) is some cutoff distance and \(\pi=[\pi_{1},...,\pi_{n}]\) is a permutation._
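A sketch of this metric is given below, following Definition 2; scipy's `linear_sum_assignment` (the Hungarian algorithm) replaces the explicit minimum over permutations.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=500.0):
    """OSPA per Definition 2 for point sets X: (n, d) and Y: (m, d)."""
    n, m = len(X), len(Y)
    if m > n:  # ensure m <= n as in Definition 2
        X, Y, n, m = Y, X, m, n
    if n == 0:
        return 0.0
    cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared distances
    rows, cols = linear_sum_assignment(cost)  # optimal matching of Y into X
    return np.sqrt((cost[rows, cols].sum() + c**2 * (n - m)) / n)
```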
The CNN maintains consistent performance across scales and outperforms the GLMB and LMB filters. This claim is supported by Figure 5, which shows the OSPA for different values of \(T\) from testing. GLMB and LMB filters have comparable performance to the CNN at \(T=1\)km, but their performance degrades as the region is scaled up due to truncation issues. In contrast, the performance of the CNN improves at scale. Within a 1km\({}^{2}\) region the CNN obtains an OSPA of \(215\pm 73\) meters. This decreases to \(152\pm 39\) meters for a 25km\({}^{2}\) region.
We observe that our architecture has memory requirements that increase quickly for higher-dimensional signals. In this section, we considered 2D signals with multiple input features representing time. However, we also attempted to process this task as a 3D convolution over two spatial dimensions and time. In that case, while we were able to obtain preliminary results, we encountered significant memory bottlenecks. Sparse convolutions show promise in mitigating this challenge [60, 61].
## VI Mobile Infrastructure on Demand
In this section, we continue to evaluate the utility of our framework on a different problem, called mobile infrastructure on demand (MID). The overall goal in MID is to deploy a team of _communication agents_ that provides a wireless network
Fig. 4: Comparison of MSE of the three filters for different window sizes \(T\). We report the mean value for 100 simulations at each scale. The error bars indicate a 95% confidence interval in the MSE.
Fig. 3: The input and output of the CNN at different window widths of \(T=\{1,...,5\}\)km [1]. To increase the contrast in this figure, we rescaled the sensor image to \(\log(\mathbf{Z}(\mathcal{Z}_{n})+0.001)\), which is shown in the first row. The second row shows the corresponding output of the trained CNN, \(\mathbf{\Phi}(\bar{\mathbf{Z}}_{n};\hat{\mathcal{H}})\), without any adjustments.
for a team of _task agents_. Below we describe the task, summarize previous works, and describe our experiments. There, as before, Theorem 1 motivates training a CNN on small examples but evaluating for large tasks.
The MID task is defined as follows. Let \(\mathcal{X}=\{\mathbf{x}\in\mathbb{R}^{2}\}\) be a set that represents the positions of the task agents. Denote the positions of the communication agents by a set \(\mathcal{Y}=\{\mathbf{y}\in\mathbb{R}^{2}\}\). The positions of the task agents are exogenous. The goal is to determine communication agent positions \(\mathcal{Y}\) that maximize connectivity between task agents. This task is static, unlike in MTT where system dynamics are simulated.
This problem was first formulated by [31] as a convex optimization problem. However, the time it takes to find a solution grows quickly with the total number of agents \(|\mathcal{X}|+|\mathcal{Y}|\). The runtime is 30 seconds for 20 agents [32] and can be extrapolated to 48 minutes for 100 agents and 133 _hours_ for 600 agents\({}^{1}\). To alleviate this issue, [32] proposes a deep learning model that imitates the convex optimization solution. The authors utilize a convolutional encoder-decoder trained with a dataset of images representing configurations with two to six task agents uniformly distributed throughout a 320x320 meter area. In that scenario, the CNN achieved nearly identical performance to the convex solver while taking far less time to compute solutions. Its zero-shot performance was also evaluated on out-of-distribution configurations with up to 20 agents. In the following set of experiments, we go one step further and test the performance on larger images that represent bigger areas and more agents. In particular, we perform MID on up to 1600x1600 meter areas and with over 600 total agents.
Footnote 1: We obtain these estimates by fitting exponential, polynomial, and power models to data from [32] and reporting the lowest runtime.
### _Simulations_
We use the following method to generate task agent configurations, \(\mathcal{X}\). Consider a square window with width \(T\) meters. The number of task agents \(|\mathcal{X}|\) is proportional to the window area, \(T^{2}\). There are five agents for \(T=320\) and 125 agents for \(T=1600\). We sample each task agent position independently from \(\mathbf{x}\sim U(-T/2,T/2)^{2}\), a uniform distribution covering the entire window. We represent each configuration \(\mathcal{X}\) by an intensity function \(X(\mathbf{t},\mathcal{X})\) following (11), so that each agent is represented by a Gaussian pulse with standard deviation \(\sigma_{X}=6.4\). The image \(\mathbf{X}\in\mathbb{R}^{N\times N}\) is sampled from \(X(\mathbf{t},\mathcal{X})\) at a spatial resolution of \(\rho=1.25\) meters per pixel.
We use the pre-trained CNN from [32]. It was trained on images representing configurations within a \(T=320\) meter wide window. It is publicly available online\({}^{2}\). During inference, \(\mathbf{X}\) is the input to the CNN. The corresponding output \(\mathbf{\Phi}(\mathbf{X};\mathcal{H})\) is assumed to represent a proposed communication agent configuration. Figure 6 shows example inputs and outputs of the CNN for different window sizes.
Footnote 2: [https://github.com/dannox/learning-connectivity](https://github.com/dannox/learning-connectivity)
### _Baseline Approaches_
We do not compare the performance of MID to any baseline. In our experiments on the MTT task, we were able to compare the performance of the CNN against Bayesian filters. Unfortunately, this was not possible for MID due to the poor scalability of the convex optimization-based approach. Based on the experimental runtimes reported in [32], we extrapolate that the runtimes for 100 and 600 agents would be 48 minutes and 133 hours, respectively. Therefore, we focus on comparing the performance of the CNN at different scales.
### _Modeling Multi-Agent Communication_
To test the performance of the proposed CNN framework on MID, we measure the power required to maintain a minimum bitrate between any two agents in the network. To do so, we assume a path-loss channel model [62]. It provides a relatively simple description of point-to-point communication between any two agents. The model relates three key variables: the transmit power \(P(d)\) in milliwatts, the distance \(d\) in meters, and the expected communication rate \(R\). Their relationship is summarized by (23).
\[P(d)=[\text{erf}^{-1}(R)]^{2}\frac{P_{N_{0}}d^{n}}{K} \tag{23}\]
We assume that the agents' communication hardware is homogeneous, so the following parameters are constant: \(P_{N_{0}}=1\times 10^{-7}\) is the noise level in the environment, \(K=5\times 10^{-6}\) is a parameter that describes the efficiency of the hardware, and \(n=2.52\) characterizes the attenuation of the wireless signal with distance. Finally, we require any two agents to communicate at a normalized rate of \(R=0.5\). Hence, (23) expresses the required transmission power as a function of distance.
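A direct transcription of (23) with these constants is shown below; it is a sketch, with power in milliwatts as stated above.

```python
from scipy.special import erfinv

def transmit_power(d, R=0.5, P_N0=1e-7, K=5e-6, n=2.52):
    """Transmit power (mW) needed for normalized rate R at distance d (m),
    following the path-loss model (23)."""
    return (erfinv(R) ** 2) * P_N0 * d**n / K
```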
Equation (23) computes the power needed for agents to communicate directly. However, not all agents need to communicate directly. In many configurations, it is more efficient
Fig. 5: Comparison of OSPA of the three filters for different window sizes \(T\)[1]. We report the mean value for 100 simulations at each scale. The error bars indicate a 95% confidence interval in the OSPA.
if messages are routed along multi-hop paths between agents. To measure the power needed to maintain this we introduce the _average minimum transmit power_ (AMTP).
**Definition 3**.: _(AMTP) Consider a fully connected graph between all agents. Let \(P(d)\) be the edge weights following (23) with constants \(P_{N_{0}}\), \(K\), \(n\), and \(R\). Now, consider the minimum spanning tree of this weighted graph. We define the AMTP as the average edge weight of this tree._
Notice that in Definition 3 there is a path between any two agents. Each pair of agents along this path can communicate at a rate of \(R\). Therefore, the communication rate along this path is \(R\). We assume that overhead is negligible. Therefore AMTP quantifies the average transmit power needed to sustain the minimal communication rate.
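A minimal sketch of computing AMTP is given below; it builds the fully connected weighted graph from (23) and averages the edge weights of its minimum spanning tree. It assumes no two agents share the same position, since scipy treats zero weights as absent edges.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.special import erfinv

def amtp(positions, R=0.5, P_N0=1e-7, K=5e-6, n=2.52):
    """Average minimum transmit power (Definition 3).

    positions: (num_agents, 2) array of task and communication agents.
    """
    diff = positions[:, None, :] - positions[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))             # pairwise distances
    W = (erfinv(R) ** 2) * P_N0 * d**n / K       # edge weights from (23)
    mst = minimum_spanning_tree(W)               # sparse matrix of tree edges
    return mst.data.mean()                       # average edge weight
```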
We use AMTP to evaluate the performance of the proposed CNN. Recall that the output of the CNN is an image \(\mathbf{\Phi}(\mathbf{X},\mathcal{H})\). To compute AMTP, we estimate the communication agent positions using the procedure outlined in Section III. The result is a set of vectors \(\mathcal{Y}=\{\mathbf{y}\in\mathbb{R}^{2}\}\), which, along with \(\mathcal{X}\), is used to compute AMTP.
The authors of [32] measure CNN performance via the algebraic connectivity of a communication network with fixed transmission power. Algebraic connectivity is defined as the second smallest eigenvalue of the graph Laplacian. There are two main reasons to use AMTP instead. First, algebraic connectivity depends on the number of nodes in a graph, which complicates comparing networks of different sizes. Second, when the transmission power is fixed, the communication graph can be disconnected. In those cases, the connectivity is zero, even if the graph is almost connected. This is especially problematic for large graphs because a single out-of-place agent is enough to disconnect the network. AMTP does not require any normalization for different graph sizes and accounts for how close a communication network is to being connected.
### _Results_
To test the performance of the proposed convolutional approach in large-scale MID and the applicability of Theorem 1, we conduct the following experiments. We have a model that was trained on a \(T=320\) meter wide area and evaluate its performance on varying window sizes with \(T=320,480,960,1280,1600\) meters. At each window size, we generate 100 random task agent configurations following Subsection VI-A. For each configuration, the CNN produces a communication agent configuration.
We evaluate the AMTP for the combined communication network that includes both the task and communication agents. Table III summarizes the numerical results. The AMTP increases modestly with scale: it is lowest for a 320-meter window width and peaks at \(T=1280\)m, at 11.33% above the \(T=320\)m level. Interestingly, we observe a decrease at \(T=1600\)m to only 8.61% above the \(T=320\)m level. Simultaneously, the AMTP variance decreases as the scale increases. Combined, these two observations are strong evidence that the AMTP will plateau at this level.
Additionally, there is a convincing explanation for the trend in variance. The rapid decrease in variance is clearly visualized in Figure 7. Two factors could be working together. Recall that AMTP is effectively the average transmission power over the edges of a minimum-spanning
Fig. 6: Example inputs and outputs to the CNN for the MID task at different window widths \(A=320,650,960,1280,1600\) but with a constant spatial resolution of \(\rho=1.25\) meters per pixel [2]. The top row images represent the positions of the task agents. The bottom row images represent the estimated optimal positions of the communication agents by the CNN. Additionally, the task agent positions are marked in red on the bottom images.
tree. Therefore, by the Central Limit Theorem, the variance should be inversely proportional to the number of edges in the graph. However, the variance decreases faster than that. Therefore we think there is a second factor involved: border effects due to padding. These contribute to variability but would be less pronounced as the area grows relative to the perimeter.
Overall, we observe that the proposed CNN model is a strong solution to large-scale MID, as it maintains low levels of AMTP in all settings. Unlike in Section V, we are unable to compute the MSE because the convex optimization solution was computationally intractable at large scales. Although Theorem 1 bounds the MSE rather than the AMTP, the good AMTP generalization is strongly aligned with the theorem, as there is only a modest deterioration in performance as the window size increases.
## VII Conclusions
This paper presented a transfer learning framework that allows for efficient training of CNNs for large-scale translationally symmetric problems. Training is performed on a small window of the larger problem and the trained network is transferred to much larger areas. The proposed approach is theoretically justified by analyzing the CNN behavior with jointly stationary input-output signals. The novel analysis proves that the MSE for the large-scale problem is bounded above by the MSE for the small-scale problem plus a small term. This term results from errors introduced by zero-padding before convolutions, and it is equal to zero when padding is not used. The novel convolutional framework was evaluated through simulations on two distinct problems: multi-target tracking and mobile infrastructure on demand. In both tasks, the proposed architecture showcased remarkable performance and generalized to large-scale problems.
## Appendix A Proof of Theorem 1
In this section, we derive an upper bound for (6) and prove Theorem 1. For conciseness, we make the substitution \(\varepsilon(\mathbf{t})=\mathbf{\Phi}(X;\mathcal{H})(\mathbf{t})-Y(\mathbf{t})\).
\[\mathcal{L}_{\infty}(\mathcal{H})=\mathbb{E}\left[\lim_{T\to\infty}\frac{1}{T^ {d}}\int_{\mathcal{C}^{d}_{T}}|\varepsilon(\mathbf{t})|^{2}d\mathbf{t}\right] \tag{24}\]
Recall that \(\mathcal{C}^{d}_{T}\subset\mathbb{R}^{d}\) is a \(T\)-wide hypercube centered at zero.
Next, we construct a sequence that bounds the limit from above. For any \(T\), let \(N=\lceil\frac{T}{B}\rceil\). Substituting \(NB\) for \(T\) yields an upper bound.
\[\mathcal{L}_{\infty}(\hat{\mathcal{H}})\leq\mathbb{E}\left[\lim_{N\to\infty} \frac{1}{(NB)^{d}}\int_{\mathcal{C}^{d}_{NB}}|\varepsilon(\mathbf{t})|^{2}d \mathbf{t}\right] \tag{25}\]
Notice that we can partition the hypercube \(\mathcal{C}^{d}_{NB}\) into smaller hypercubes. Since \(N\) is integer, we can evenly arrange \(N^{d}\) hypercubes with side length \(B\) within it. Let \(\mathcal{C}^{d}_{B}(\mathbf{\tau})\) be such a hypercube centered at \(\mathbf{\tau}\). To partition \(\mathcal{C}^{d}_{NB}\) they must be centered at
\[\mathcal{T}=\left\{iB-\frac{(N-1)B}{2}\mid i\in\mathbb{Z},0\leq i<N\right\}^ {d}. \tag{26}\]
Thus, we write the original hypercube as \(\mathcal{C}^{d}_{NB}=\bigcup_{\mathbf{\tau}\in\mathcal{T}}\mathcal{C}^{d}_{B} (\mathbf{\tau})\) and break up the integral in (25) into \(N^{d}\) smaller integrals.
\[\mathcal{L}_{\infty}(\hat{\mathcal{H}})\leq\mathbb{E}\left[\lim_{N\to\infty} \frac{1}{(NB)^{d}}\sum_{\mathbf{\tau}\in\mathcal{T}}\int_{\mathcal{C}^{d}_{B}}| \varepsilon(\mathbf{t}-\mathbf{\tau})|^{2}d\mathbf{t}\right] \tag{27}\]
Above, instead of shifting the hypercube by \(\mathbf{\tau}\) we equivalently shift the integrand. By Assumption 3, the stochastic processes are bounded. Therefore, both the limit and expectation exist and are finite, so we can exchange them.
\[\mathcal{L}_{\infty}(\hat{\mathcal{H}})\leq\lim_{N\to\infty}\frac{1}{(NB)^{d }}\sum_{\mathbf{\tau}\in\mathcal{T}}\mathbb{E}\left[\int_{\mathcal{C}^{d}_{B}}| \varepsilon(\mathbf{t}-\mathbf{\tau})|^{2}d\mathbf{t}\right] \tag{28}\]
In [37], the authors show that if the input to a CNN is strictly stationary, then so is the output. Their result can be extended to jointly stationary signals as follows. Consider a vector-valued process \(Z(\mathbf{t})=[X(\mathbf{t}),Y(\mathbf{t})]\) that is the concatenation of \(X\) and \(Y\). Since \(X\) and \(Y\) are jointly stationary, \(Z\) is stationary. Notice that for any CNN \(\mathbf{\Phi}(X;\mathcal{H})\), we can define a new CNN with output \(\mathbf{\bar{\Phi}}(Z;\mathcal{H})(\mathbf{t})=[\mathbf{\Phi}(X;\mathcal{H}) (\mathbf{t}),Y(\mathbf{t})]\). According to [37], this output is stationary whenever \(Z\) is stationary, so the difference \(\varepsilon(\mathbf{t})=\mathbf{\Phi}(X;\mathcal{H})(\mathbf{t})-Y(\mathbf{t})\) must also be stationary.
The expectation of a stationary signal is shift-invariant, meaning that \(\mathbb{E}\left[|\varepsilon(\mathbf{t}-\mathbf{\tau})|^{2}\right]=\mathbb{E} \left[|\varepsilon(\mathbf{t})|^{2}\right]\).
\[\mathcal{L}_{\infty}(\hat{\mathcal{H}})\leq\lim_{N\to\infty}\frac{1}{(NB)^{d} }\sum_{\mathbf{\tau}\in\mathcal{T}}\mathbb{E}\left[\int_{\mathcal{C}^{d}_{B}}| \varepsilon(\mathbf{t})|^{2}d\mathbf{t}\right] \tag{29}\]
Fig. 7: The distributions of the minimum transmitter power needed to maintain a normalized communication rate of at least 50% between any two agents [2]. The distributions for each window width are visualized by box plots, with notches representing a 95% confidence interval for the estimate of the median.
The summand no longer depends on \(\mathbf{\tau}\). Since the cardinality of \(\mathcal{T}\) is \(N^{d}\), this cancels out with the factor of \(N^{d}\) in the denominator. The limit expression no longer depends on \(N\), so the right hand side reduces to an average over \(\mathcal{C}_{B}^{d}\).
\[\mathcal{L}_{\infty}(\hat{\mathcal{H}})\leq\frac{1}{B^{d}}\mathbb{E}\left[\int_ {\mathcal{C}_{B}^{d}}|\varepsilon(\mathbf{t})|^{2}d\mathbf{t}\right] \tag{30}\]
At this point, it is convenient to change notation. Let \(\sqcap_{s}(\mathbf{t})\) be an indicator function that is one whenever \(\mathbf{t}\in\mathcal{C}_{s}^{d}\) for all \(s\in\mathbb{R}\). Recognize that the integral above is the L2 norm squared of \(\varepsilon(\mathbf{t})\) multiplied by a window, \(\sqcap_{B}(\mathbf{t})\).
\[\mathcal{L}_{\infty}(\hat{\mathcal{H}})\leq\frac{1}{B^{d}}\mathbb{E}\left[ \|\sqcap_{B}(\mathbf{\Phi}(X;\hat{\mathcal{H}})-Y)\|_{2}^{2}\right] \tag{31}\]
The output of \(\mathbf{\Phi}(X;\hat{\mathcal{H}})\) is \(x_{L}\), the output of the \(L^{\text{th}}\) layer. Similarly, denote \(\tilde{x}_{L}\) as the output of the \(L^{\text{th}}\) layer in \(\mathbf{\Phi}(\sqcap_{A}X;\hat{\mathcal{H}})\). Hence, write the norm in (31) as \(||\sqcap_{B}(x_{L}-\tilde{x}_{L}+\tilde{x}_{L}-Y)||_{2}^{2}\) and apply the triangle inequality.
\[\mathcal{L}_{\infty}(\hat{\mathcal{H}}) \leq\frac{1}{B^{d}}\mathbb{E}\left[\|\sqcap_{B}(\tilde{x}_{L}-Y) \|_{2}^{2}\right]\] \[+\frac{1}{B^{d}}\mathbb{E}\left[\|\sqcap_{B}(x_{L}-\tilde{x}_{L} )\|_{2}^{2}\right]\] \[+\frac{1}{B^{d}}\mathbb{E}\left[\|\sqcap_{B}(\tilde{x}_{L}-Y) \|_{2}\|\sqcap_{B}(x_{L}-\tilde{x}_{L})\|_{2}\right] \tag{32}\]
Let us consider the expectation of each term individually. The first term is equal to \(\mathcal{L}_{\sqcap}(\hat{\mathcal{H}})\) as defined in (7); this is easy to see by substituting \(\tilde{x}_{L}=\mathbf{\Phi}(\sqcap_{A}X;\hat{\mathcal{H}})\). The second term is bounded above following Lemma 1. We define \(H\coloneqq\prod_{l=1}^{L}\|h_{l}\|_{1}\) to express this concisely. Finally, the third term can be bounded above by applying the Cauchy-Schwarz inequality to the expectation, that is, \(\mathbb{E}\left[XY\right]^{2}\leq\mathbb{E}\left[X^{2}\right]\mathbb{E}\left[Y^{2}\right]\), where \(X,Y\) are dummy random variables.
\[\mathcal{L}_{\infty}(\hat{\mathcal{H}}) \leq\mathcal{L}_{\sqcap}(\hat{\mathcal{H}})+\frac{H^{2}}{B^{d}} \mathbb{E}\left[\|\sqcap_{B}(X-\sqcap_{A}X)\|_{2}^{2}\right]\] \[+\sqrt{\mathcal{L}_{\sqcap}(\hat{\mathcal{H}})\frac{H^{2}}{B^{d}} \mathbb{E}\left[\|\sqcap_{B}(X-\sqcap_{A}X)\|_{2}^{2}\right]} \tag{33}\]
We can further evaluate the expectation. Since \(X\) is stationary, the second term is just the second moment \(\mathbb{E}[X(0)^{2}]\) times the volume of \(\mathcal{C}_{B+LK}^{d}\setminus\mathcal{C}_{A}^{d}\). Both hypercubes are centered around zero, so the net hypervolume is the difference between their volumes. If \(B+LK\geq A\), the hypervolume is \((B+LK)^{d}-A^{d}\); otherwise, it is zero.
\[\mathcal{L}_{\infty}(\hat{\mathcal{H}}) \leq\mathcal{L}_{\sqcap}(\hat{\mathcal{H}})+\mathbb{E}\left[X(0)^ {2}\right]C+\sqrt{\mathcal{L}_{\sqcap}(\hat{\mathcal{H}})\mathbb{E}\left[X(0) ^{2}\right]C} \tag{34}\] \[C =\frac{H^{2}}{B^{d}}\max\left(0,[B+LK]^{d}-A^{d}\right) \tag{35}\]
## Appendix B Proof of Lemma 1
**Lemma 1**.: _Let \(x_{l}\) denote the output of the \(l^{\text{th}}\) layer of \(\mathbf{\Phi}(X;\mathcal{H})\) with \(L\) layers and filter width \(K\). Similarly, let \(\tilde{x}_{l}\) denote the layers' outputs in \(\mathbf{\Phi}(\sqcap_{A}X;\mathcal{H})\). Then for all integers \(L\geq 0\) the following inequality holds._
\[\|\sqcap_{B}(x_{L}-\tilde{x}_{L})\|_{2}\leq\|\sqcap_{B+LK}(X-\sqcap_{A}X)\|_{2} \prod_{l=1}^{l=L}\|h_{l}\|_{1}\]
Proof.: We proceed by induction. For \(L=0\) the above inequality holds by definition. Hence, assume that the inequality holds for some \(L>0\) and consider the left hand side for \(L+1\), which we denote \(\Delta_{L+1}\).
\[\Delta_{L+1}\coloneqq\|\sqcap_{B}(x_{L+1}-\tilde{x}_{L+1})\|_{2} \tag{36}\]
Then, by the definition of the CNN, we can expand the output of the \((L+1)^{\text{th}}\) layer.
\[\Delta_{L+1}=\|\sqcap_{B}(\sigma(h_{L+1}*x_{L})-\sigma(h_{L+1}*\tilde{x}_{L})) \|_{2} \tag{37}\]
Now we use the fact that \(\sigma\) is normalized Lipschitz from Assumption 6.
\[\Delta_{L+1}\leq\|\sqcap_{B}[h_{L+1}*(x_{L}-\tilde{x}_{L})]\|_{2} \tag{38}\]
Now notice that for all \(\mathbf{t}\) such that \(|\mathbf{t}|\leq B/2\), the values of \((h_{L+1}*x_{L})(\mathbf{t})\) depend only on \(x_{L}(\mathbf{s})\) for \(|\mathbf{s}|<(B+K)/2\), because \(h_{L+1}\) has width \(K\). Using the monotonicity of the norm, we can therefore replace the window \(\sqcap_{B}\) outside the convolution with the window \(\sqcap_{B+K}\) inside it.
\[\Delta_{L+1}\leq\|h_{L+1}*[\sqcap_{B+K}(x_{L}-\tilde{x}_{L})]\|_{2} \tag{39}\]
Therefore we can apply Young's convolution inequality.
\[\Delta_{L+1}\leq\|h_{L+1}\|_{1}\|\sqcap_{B+K}(x_{L}-\tilde{x}_{L})\|_{2} \tag{40}\]
Assuming the inequality holds for \(L\), the substitution \(\hat{B}=B+K\) yields the desired form.
\[\Delta_{L+1} \leq\|h_{L+1}\|_{1}\|\sqcap_{\hat{B}+LK}(X-\sqcap_{A}X)\|_{2}\prod_{ l=1}^{l=L}\|h_{l}\|_{1} \tag{41}\] \[\Delta_{L+1} \leq\|\sqcap_{B+(L+1)K}(X-\sqcap_{A}X)\|_{2}\prod_{l=1}^{l=L+1}\|h_{l }\|_{1} \tag{42}\]
Thus, we have proven the inequality by induction for any integer \(L\geq 0\).
The membrane potential \(u_{i}^{l}\) of neuron \(i\) in layer \(l\) follows the spike response model,

\[u_{i}^{l}(t)=\sum_{j}W_{ij}^{l-1}(\epsilon*s_{j}^{l-1})(t)+(\nu*s_{i}^{l})(t) \tag{1}\]

where \(W_{ij}^{l-1}\) indicates the synaptic weight from neuron \(j\) to neuron \(i\) at layer \(l-1\), and \(s_{j}^{l-1}\) is the incoming spike pattern from the preceding neuron \(j\). In this experiment, we use the response signal \(a(t)=(\epsilon*s^{l-1})(t)\) to describe the response of neurons by convolving input spikes \(s^{l-1}(t)\) with the response kernel \(\epsilon\), where \(\epsilon(t)=\frac{t}{\tau_{s}}\exp(1-\frac{t}{\tau_{s}})\Theta(t)\). Here, \(\Theta(t)\) represents the Heaviside step function. Likewise, the refractory signal can be described as \((\nu*s^{l})(t)\), where \(\nu(t)=-2\theta_{u}\frac{t}{\tau_{r}}\exp(1-\frac{t}{\tau_{r}})\Theta(t)\). The parameters \(\tau_{s}\) and \(\tau_{r}\) are the time constants of the corresponding kernels. An output spike is generated whenever \(u_{i}^{l}\) surpasses the pre-defined threshold \(\theta_{u}\). This spike-generation process can be formulated as
\[s_{i}^{l}(t)=\Theta(u_{i}^{l}(t)-\theta_{u}) \tag{2}\]
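As an illustration of this neuron model, the following is a minimal NumPy sketch (ours, not the SLAYER-PyTorch implementation) of the response-kernel convolution and the thresholding of Eq. (2); the refractory term is omitted and all numerical values are simplifying assumptions.

```
import numpy as np

def srm_response(spikes, tau_s, dt=1.0):
    # Response kernel: eps(t) = (t / tau_s) * exp(1 - t / tau_s) for t >= 0
    t = np.arange(0.0, 10.0 * tau_s, dt)
    eps = (t / tau_s) * np.exp(1.0 - t / tau_s)
    # Causal convolution of the binary spike train with the kernel
    return np.convolve(spikes, eps)[: len(spikes)]

def generate_spikes(u, theta_u):
    # Eq. (2): a spike is emitted whenever u exceeds the threshold theta_u
    return (u >= theta_u).astype(float)

# Toy usage: a single synapse with (assumed) weight 6.0 and threshold 10
s_in = np.zeros(150)
s_in[[10, 12, 14]] = 1.0
u = 6.0 * srm_response(s_in, tau_s=5.0)   # membrane potential, refractory term omitted
s_out = generate_spikes(u, theta_u=10.0)
```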
### Axonal delay module and adaptive delay caps
In Fig.1, the axonal delay module and an adaptive training scheduler for delay caps are shown. The axonal delay is part of spike transmission, and we formulate it such that it can be jointly learned with an optimization algorithm.
\[s_{d}^{l}(\hat{t})=\delta(t-d^{l})*s^{l}(t) \tag{3}\]
In layer \(l\), \(d^{l}\) represents the set of delays \(\{d_{1},d_{2},\dots,d_{n}\}\) subject to the constraint \(d_{1}<d_{2}<\dots<d_{n}\leq\theta_{d}\). Meanwhile, \(s_{d}^{l}(\hat{t})\) denotes the spike trains output by the delay module at a shifted time \(\hat{t}\). From the optimization point of view, constraining the delay value can facilitate learning. Here, we compute the fraction of delayed neurons against the total number of neurons within a sliding window [24] in the adaptive training scheduler so as to optimize training.
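Eq. (3) amounts to shifting each neuron's output spike train forward in time by its own delay; a minimal sketch, assuming integer delays in simulation time steps and illustrative names:

```
import numpy as np

def apply_axonal_delay(spikes, delays):
    # spikes: (num_neurons, T) binary spike trains; delays: one integer per neuron
    out = np.zeros_like(spikes)
    T = spikes.shape[1]
    for i, d in enumerate(delays):
        d = int(d)
        if d < T:
            out[i, d:] = spikes[i, : T - d]   # Eq. (3): shift neuron i by d steps
    return out
```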
Consider the sliding window \(m\): when the fraction of delayed neurons within this window exceeds the pre-defined cap fraction \(\alpha_{\theta}\), the sliding window right-shifts by 1 and the delay cap \(\theta_{d}\) also increases by 1. Pseudo-code of the proposed adaptive training scheduler is presented in Algorithm 1.
During training (in the second while loop), the delay will be clipped as follows
\[d=max(0,min(d,\theta_{d})) \tag{4}\]
where the \(\theta_{d}\) and delays \(d\) will be adaptively adjusted according to our scheduler.
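Since Algorithm 1 is described here only in prose, the following sketch reflects our reading of the scheduler: the window of size \(m\) is taken to cover the \(m\) largest delay values below the cap, and the clipping of Eq. (4) is included. Both the interpretation and the names are assumptions.

```
import numpy as np

def scheduler_step(delays, theta_d, m=2, alpha=0.05):
    # Fraction of neurons whose delay lies in the current window of the m
    # largest admissible values, i.e. [theta_d - m + 1, theta_d]
    frac = np.mean(delays >= theta_d - m + 1)
    if frac > alpha:
        theta_d += 1          # right-shift the window: raise the cap by 1
    return theta_d

def clip_delays(delays, theta_d):
    # Eq. (4): d = max(0, min(d, theta_d))
    return np.clip(delays, 0, theta_d)
```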
## 3 Experimental Setup
### Datasets
We evaluate our proposed methods on the SHD [25] and NTIDIGITS [26] datasets. These datasets contain spike patterns produced by artificial cochlear models [25, 26] from the original audio signals and have high temporal complexity [27]. The SHD dataset consists of 10,420 utterances of varying durations (0.24\(s\) to 1.17\(s\)) by 12 speakers. It contains a total of 20 digit classes from '0' to '9' in the English and German languages. We adopt the same data preprocessing method as in [28], and the same train/validation/test split of the dataset as the official one [25].
The NTIDIGITS dataset is another event-based spoken digit recognition task. The original TIDIGITS corpus is processed by the 64-channel CochleaAMS1b sensor and its output is recorded. In total, there are 11 spoken digits (the digits '0' to '9' in the English language and 'oh'). We follow the same train and test split as used in [26].

Figure 1: Illustration of how the adaptive delay caps are determined and the axonal delays adjusted. The generated spikes \(s^{l}(t)\) are shifted in time by \(d_{i}\) and then output as spike trains \(s_{d}^{l}(\hat{t})\) in the axonal delay module. The adaptive scheduler adjusts the delay cap accordingly. The delay value may be the same across neurons, e.g., the top two neurons share the same delay value \(d_{1}\). The layer can be a traditional convolutional layer, dense layer, or recurrent layer.
### Implementation details
In our experiment, we use the Adam optimizer to jointly update the synaptic weights and axonal delays, with a constant learning rate of 0.1 and a minibatch size of 128. The initial delay caps are set to 64 for SHD and 128 for NTIDIGITS, respectively. The number of pre-training epochs \(x\) is set to 40, and the simulation time step is 1 \(ms\) for both datasets. Table 1 lists the parameters used in our experiments.
We train our SNN on the SLAYER-Pytorch framework [8]. This recently introduced GPU-accelerated software is publicly available and has proven to be effective for training SNN. For both tasks, we use an SNN with two hidden fully-connected layers, with 128 hidden units for SHD and 256 neurons for NTIDIGITS. The total number of model parameters is approximately 0.11M for the SHD dataset and 0.08M for the NTIDIGITS dataset.
## 4 Results
### Ablation Study of Different Cap Fraction and Sliding Window Size
The window size and cap fraction are important parameters for obtaining a reasonable delay cap. To evaluate their influence, we design an ablation study covering 5 different window sizes and 3 different fraction parameters. The impact of these parameters on the SHD and NTIDIGITS datasets is shown in Fig. 2. For the SHD dataset, we observe that a small fraction consistently gives good results, and the best results are obtained by keeping the number of neurons with the two largest delay values within 5% of the total number of neurons, i.e., a window size of 2 and a cap fraction of 5%. For the NTIDIGITS dataset, a larger window size and fraction are more helpful, and the accuracy keeps increasing as the window size grows, except for the combination of window size 5 and cap fraction 10%.
### SHD
Table 2 reports the performance of a feed-forward SNN, a recurrent spiking neural network (RSNN) with adaptive firing threshold, a heterogeneous RSNN, an RSNN with temporal attention, and our proposed method. As can be seen from Table 2, our proposed method achieves an accuracy of 92.45%, which is the best performance reported for
| Dataset | \(\tau_{s}\) | \(\tau_{r}\) | Initial \(\theta_{d}\) | \(\theta_{u}\) | \(T_{steps}\) |
| --- | --- | --- | --- | --- | --- |
| SHD | 1 | 1 | 64 | 10 | 150 |
| NTIDIGITS | 5 | 5 | 128 | 10 | 150 |

Table 1: Detailed parameter settings for different datasets.
Figure 2: Ablation study of different sliding window sizes \(m\) and cap fractions \(\alpha_{\theta}\) on (top) SHD and (bottom) NTIDIGITS. The x-axis indicates the sliding window size, and the y-axis refers to the accuracy. We use 3 different cap fractions (5%, 10%, and 15%) and 5 different window sizes (1, 2, 3, 4, and 5) in our experiments.
this task. More importantly, it can be observed that the RSNN outperforms the feed-forward SNN, which implies that the task is inherently dynamic. However, our results show that a feed-forward SNN, without recurrent connections, but with an adaptive axonal delay module, can achieve flexible short-term memory and better accuracy, using fewer parameters.
### NTIDIGITS
When ANNs (RNN and Phased-LSTM) are applied to this digit recognition task, they achieve accuracies of 90.90% and 91.25% (see Table 2), respectively. However, these networks cannot fully exploit the advantages of sparse event-based information and have to rely on an event synthesis algorithm, effectively losing the advantage gained when processing information embedded in spike timing. Using our adaptive axonal delay module, we achieve the best performance of 95.09%; compared with Zhang et al. [30], which directly uses spike-train-level features, our model improves performance by 1.4% while using only 23% of the parameters.
### Effect of the Delay Value
As shown in Table 3, the delay cap is important to the performance of the model. For both audio classification tasks, the performance is competitive even without limiting the range of delay values, demonstrating the effectiveness of the axonal delay module. However, it is still beneficial to limit the delay range, and our experiments show that an appropriate delay cap improves the classification ability of the model. Taking the SHD dataset as an example, our adaptive training scheduler can determine the optimal delay distribution, and this distribution enables the model to achieve the best performance. For other combinations of delay caps, the obtained accuracy drops, which may indicate that each network structure has an optimal delay cap; such caps are, however, difficult to find manually. Our method provides an adaptive training scheduler to search for these optimal parameters. The NTIDIGITS dataset conforms to the same phenomenon.
## 5 Conclusions
In this paper, we integrate a learnable axonal delay module into the spiking neuron model and introduce an adaptive training scheduler to adjust the caps of the axonal delay in each network layer. Compared to previous work that adopts a static delay cap, our proposed method significantly improves the classification capability without extra parameters. Furthermore, our adaptive scheduler can be easily integrated into the existing delay module and adaptively determines the optimal delay distribution of the network. We achieve the best performance on the SHD (92.45%) and NTIDIGITS (95.09%) datasets with the fewest parameters. These results suggest that a neuron axonal delay with an adaptive delay cap can be used to model a lightweight, flexible short-term memory module, so as to achieve an accurate and efficient spoken word recognition system. We conjecture that the axonal delay mechanism introduces a form of short-term memory without increasing the number of trainable parameters. For certain datasets in ASR, whereby 1) information is organized in short sequences, without the need for long-term memory, and 2) data is limited in size and hence prone to overfitting, the axonal delay mechanism may work best in combination with a small feed-forward SNN. Our experiments agree with the above and further confirm the great potential of using spike timing as part of the solution to an ASR problem. Furthermore, the use of spike-based losses [31] can expedite decision-making, thereby further reducing the impact of the additional latency.
| Datasets | Method | Params | Accuracy |
| --- | --- | --- | --- |
| SHD | Feed-forward SNN [25] | 0.11 MB | 48.6 ± 0.9% |
| SHD | RSNN [25] | 1.79 MB | 83.2 ± 1.3% |
| SHD | RSNN with Adaption [28] | 0.14 MB | 84.4% |
| SHD | Heterogeneous RSNN [20] | 0.11 MB | 82.7 ± 0.8% |
| SHD | SNN with Time Attention [29] | 0.12 MB | 91.08% |
| SHD | **This work (m=2, \(\alpha_{\theta}\) = 5%)** | **0.11 MB** | **92.45%** |
| NTIDIGITS | GRU-RNN [26]† | 0.11 MB | 90.90% |
| NTIDIGITS | Phased-LSTM [26]† | 0.61 MB | 91.25% |
| NTIDIGITS | ST-RSBP [30] | 0.35 MB | 93.63 ± 0.27% |
| NTIDIGITS | SNN with Axonal delay [23] | 0.08 MB | 94.45% |
| NTIDIGITS | **This work (m=4, \(\alpha_{\theta}\) = 10%)** | **0.08 MB** | **95.09%** |

† Non-SNN implementation.

Table 2: Comparison with the state-of-the-art in terms of network size and accuracy.
| Dataset | Method | \((\theta_{d_{1}},\theta_{d_{2}})\) | Params | Accuracy |
| --- | --- | --- | --- | --- |
| SHD | Manual | (0, 0) | 108,820 | 67.05% |
| SHD | Manual | (64, 64) | 109,076 | 86.84% |
| SHD | Manual | (128, 128) | 109,076 | 87.24% |
| SHD | Adaptive | (107, 175) | 109,076 | **92.45%** |
| SHD | Manual | (\(+\infty\), \(+\infty\)) | 109,076 | 84.99% |
| NTIDIGITS | Manual | (0, 0) | 85,259 | 78.86% |
| NTIDIGITS | Manual | (128, 128) | 85,771 | 94.19% |
| NTIDIGITS | Adaptive | (215, 215) | 85,771 | **95.09%** |
| NTIDIGITS | Manual | (\(+\infty\), \(+\infty\)) | 85,771 | 93.83% |

Table 3: Ablation studies for different delay cap methods and the effect of the delay cap \(\theta_{d}\) in the axonal delay module. \(\theta_{d_{i}}\) indicates the delay cap of the \(i^{th}\) layer. 'Manual' refers to using a static delay cap, while 'Adaptive' refers to using our proposed adaptive scheduler.
2303.07153 | SA-CNN: Application to text categorization issues using simulated
annealing-based convolutional neural network optimization | Convolutional neural networks (CNNs) are a representative class of deep
learning algorithms including convolutional computation that perform
translation-invariant classification of input data based on their hierarchical
architecture. However, classical convolutional neural network learning methods
use the steepest descent algorithm for training, and the learning performance
is greatly influenced by the initial weight settings of the convolutional and
fully connected layers, requiring re-tuning to achieve better performance under
different model structures and data. Combining the strengths of the simulated
annealing algorithm in global search, we propose applying it to the
hyperparameter search process in order to increase the effectiveness of
convolutional neural networks (CNNs). In this paper, we introduce SA-CNN neural
networks for text classification tasks based on Text-CNN neural networks and
implement the simulated annealing algorithm for hyperparameter search.
Experiments demonstrate that we can achieve greater classification accuracy
than earlier models with manual tuning, and the improvement in time and space
for exploration relative to human tuning is substantial. | Zihao Guo, Yueying Cao | 2023-03-13T14:27:34Z | http://arxiv.org/abs/2303.07153v1 | SA-CNN: Application to text categorization issues using simulated annealing-based convolutional neural network optimization
###### Abstract
Convolutional neural networks (CNNs) are a representative class of deep learning algorithms including convolutional computation that perform translation-invariant classification of input data based on their hierarchical architecture. However, classical convolutional neural network learning methods use the steepest descent algorithm for training, and the learning performance is greatly influenced by the initial weight settings of the convolutional and fully connected layers, requiring re-tuning to achieve better performance under different model structures and data. Combining the strengths of the simulated annealing algorithm in global search, we propose applying it to the hyperparameter search process in order to increase the effectiveness of convolutional neural networks (CNNs). In this paper, we introduce SA-CNN neural networks for text classification tasks based on Text-CNN neural networks and implement the simulated annealing algorithm for hyperparameter search. Experiments demonstrate that we can achieve greater classification accuracy than earlier models with manual tuning, and the improvement in time and space for exploration relative to human tuning is substantial.
Simulated Annealing Algorithm; Text Classification; Deep Learning; Self-optimization
## I Introduction
In recent years, significant breakthroughs have been achieved in the field of convolutional neural networks for text classification. Yoon Kim [1] proposed a straightforward single-layer CNN architecture that can outperform traditional algorithms in a variety of uses. Rie Johnson and Tong Zhang [2] applied CNNs to high-dimensional text data and learned embeddings of small text regions for classification. Tong He and Weilin Huang [3] proposed a convolutional neural network that extracts regions and features related to text from image components. This type of model uses vectors to characterize each sentence in the text, which are then merged into a matrix and utilized as input for constructing a CNN network model.
Numerous experiments have shown, however, that the performance of neural networks is highly dependent on their architecture [4][5][6]. Due to the discrete nature of these parameters, exact optimization algorithms cannot be used to solve the architecture optimization problem [7]. Manually tuning the parameters of a model to optimize its performance for different tasks is not only inefficient, but also likely to miss the optimal parameters, resulting in a network architecture that does not achieve maximum performance and offers no advantage over traditional classification algorithms [8]. In addition, the widely utilized grid search is an enumerative search, i.e., it tries every possibility by cyclically traversing all candidate parameter choices, which makes it time-consuming and limits its global search capability. Therefore, it is practical to use an algorithm to automatically and fairly rapidly determine the optimal architecture of a neural network.
It has been shown that tuning neural network hyperparameters with metaheuristic algorithms not only simplifies the network [9][10], but also enhances its classification performance. In this paper, we use the simulated annealing algorithm to optimize the neural network architecture, and we model the neural network hyperparameter optimization problem as a dual-criteria optimization problem over classification accuracy and computational complexity. The resulting network achieves improved classification performance on the text classification task.
## II Background and Related Work
### _The current utilisation of neural networks in text classification_
Text classification was initially performed by using knowledge engineering to build an expert system, a laborious task with limited accuracy. After that, along with the development of statistical learning methods and machine learning disciplines, the classical approach of feature engineering plus shallow classification models gradually developed (Fig. 1). During this period, rule-based models such as decision trees [11], probability-based models such as Naive Bayes classification algorithms [12][13], geometry-based models such as SVM [14], and statistical models such as KNN [15] emerged. However, these models typically rely heavily on time-consuming feature engineering or large quantities of additional linguistic resources, and are ineffective at learning semantic information about words.
In recent years, research on deep learning that incorporates feature engineering into the process of model fitting has significantly enhanced the performance of text classification tasks. Kim [1] explored the use of convolutional neural networks
with multiple windows for text classification, a method that has been widely adopted in industry due to its high computational speed and parallelizability. Yang et al. [16] proposed HAN, a hierarchical attention network that mitigates the vanishing gradient problem caused by RNNs. Johnson and Zhang [17] proposed a word-level deep CNN model that improves network performance by increasing network depth without significantly increasing the computational burden.
In addition to convolutional and recurrent neural networks, numerous researchers have proposed more intricate models in recent years. The capsule-network-based text classification model proposed by Zhao et al. [18] outperformed conventional neural networks. Google proposed the BERT model [19], which overcomes the problem that static word vectors cannot represent multiple meanings of a word. However, the parameters of each of the aforementioned deep learning models have a substantial effect on network performance and must be optimized for the network to perform at its best.
### _Current research status on the simulated annealing method and associated neural network optimization_
The simulated annealing (SA) technique is a stochastic optimization algorithm based on the Monte Carlo iterative solution approach that was first designed for combinatorial optimization and then adapted for general optimization. Its fundamental concept is based on the importance sampling approach published by Metropolis in 1953, but it was not properly introduced to the field of combinatorial optimization until Kirkpatrick et al. [20] did so in 1983.
The simulated annealing algorithm is inspired by the process of metal annealing [21] and may be summarized roughly as follows. It begins with a high initial temperature; then, as the temperature parameter decreases, it searches for the optimal solution among all conceivable possibilities. The SA method has a nonzero probability of accepting a solution that is worse than the current one, i.e., it can probabilistically jump out of locally optimal solutions and finally converge to the global optimum. This likelihood of accepting a suboptimal answer decreases as SA approaches the global optimal solution.
The standard optimization process of the simulated annealing algorithm can be described as follows.
```
input : Initial feasible solution \(x_{0}\); Initial temperature \(T_{0}\); Termination temperature \(T_{f}\); Maximum iteration number \(k_{max}\)
\(x \leftarrow x_{0}\), \(T \leftarrow T_{0}\), \(k \leftarrow 0\)
while \(k \leq k_{max}\) and \(T > T_{f}\) do
    \(x_{k} \leftarrow\) NEIGHBOR(\(x\))
    \(\Delta f = f(x_{k}) - f(x)\)
    if \(\Delta f < 0\) or RANDOM(0, 1) \(\leq P(\Delta f, T)\) then
        \(x \leftarrow x_{k}\)
    end if
    \(T \leftarrow\) COOLING(\(T, k, k_{max}\))
    \(k \leftarrow k + 1\)
end while
```
**Algorithm 1** Simulated Annealing Algorithm
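A minimal Python rendering of Algorithm 1 follows, assuming the Metropolis acceptance probability \(P(\Delta f,T)=\exp(-\Delta f/T)\) and a geometric cooling schedule; both are common choices rather than details fixed by the pseudo-code above.

```
import math
import random

def simulated_annealing(x0, f, neighbor, T0, Tf, k_max, cooling=0.95):
    # Algorithm 1 with P(delta_f, T) = exp(-delta_f / T) and geometric cooling
    x, T = x0, T0
    for k in range(k_max):
        if T <= Tf:
            break
        x_k = neighbor(x)
        delta_f = f(x_k) - f(x)
        if delta_f < 0 or random.random() <= math.exp(-delta_f / T):
            x = x_k                     # accept the (possibly worse) solution
        T *= cooling                    # COOLING(T, k, k_max)
    return x

# toy usage: minimize a one-dimensional quadratic
best = simulated_annealing(
    x0=5.0,
    f=lambda x: (x - 2.0) ** 2,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
    T0=1.0, Tf=1e-3, k_max=5000)
```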
SA has been utilized extensively in VLSI [22], production scheduling [23], machine learning [24], signal processing [25], and other domains as a general-purpose stochastic search method. The Boltzmann machine [26], the first neural network capable of learning internal representations and solving complicated combinatorial optimization problems, uses precisely the SA principle for optimization; the optimization potential of SA is therefore evident.
Rasdi Rere et al. [27] employed simulated annealing to automatically construct neural networks and adjust hyperparameters, and experimental findings revealed that the method could increase the performance of the original CNN, demonstrating the efficacy of this optimization technique. Mousavi et al. [28] improved a solar radiation prediction model by integrating artificial neural networks and simulated annealing with temperature cycling to increase ANN calibration performance. Choubin et al. [29] utilized the simulated annealing (SA) feature selection approach to find the influential factors of PM modeling, based on current air-quality machine learning models, for the spatial risk assessment of PM10 in Barcelona.
To the best of our knowledge, however, there is no existing research on combining simulated annealing with neural networks for text categorization tasks. Given that neural networks now produce superior results in text categorization, this research conducts experiments on neural networks that employ simulated annealing to enable automated hyperparameter search.
Fig. 1: A diagrammatic representation of the process of shallow learning
Fig. 2: Schematic illustration of the simulated annealing method.
## III Methods
### _Convolutional neural networks for text processing tasks (Text-CNN)_
Convolutional neural networks (CNN) originated in the field of computer vision; however, with the adaptation of the CNN input layer, this neural network structure has been steadily transferred to the field of natural language processing, where it is often referred to as Text-CNN. The schematic is shown below.
A text statement consists of \(n\) words; the text is therefore separated into words, and each word is mapped to a \(k\)-dimensional vector in the Word Embedding layer. For this input model, the text may then be regarded as an \(n\times k\) single-channel image. During the processing of the convolution layer, when the width of the convolution kernel equals the dimension \(k\) of the word vector, the convolution extracts the relationships between tuples containing different numbers of words, i.e., it generates new features and obtains different feature maps.
The feature map, in the form of an \(n\)-dimensional vector, is then downsampled using max-pooling to retain the maximum value, and the pooled data is used as input to the fully connected layer. The softmax function [Eq. (1)] is then employed to produce a probability distribution over classes, from which the discrete class label is obtained for this classification task.
\[y=softmax(W_{1}h+b_{1}) \tag{1}\]
As demonstrated in Eq. (2), a cross-entropy loss function is frequently utilized for the classification job in model training.
\[Loss=-\sum_{i=1}^{n}y_{i}\times log(y_{i}^{{}^{\prime}}) \tag{2}\]
where \(y_{i}\) is the label value corresponding to the true probability distribution of the \(i^{th}\) sample, \(y_{i}^{{}^{\prime}}\) is the predicted probability of the \(i^{th}\) sample, and \(n\) is the number of samples in the training set.
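For concreteness, Eqs. (1) and (2) can be sketched in NumPy as follows (variable names are ours):

```
import numpy as np

def softmax(z):
    # Eq. (1) activation: subtract the max for numerical stability
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(y_true, y_pred, eps=1e-12):
    # Eq. (2): Loss = -sum_i y_i * log(y'_i), averaged over the n samples
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=-1))

# toy usage: 2 samples, 3 classes
logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 3.0]])
labels = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])   # one-hot y_i
loss = cross_entropy(labels, softmax(logits))
```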
The hyperparameter optimization process of the neural network via the simulated annealing method is shown below.
### _Hyperparametric optimization method based on multi-objective simulated annealing method_
In this study, we use the MOSA algorithm proposed by Gulcu and Kus [30] to optimize the hyperparameters of the convolutional neural network, in order to find suitable parameters quickly and achieve a higher accuracy rate. We extend the single-objective SA algorithm, which only considers the error rate achieved by the network, to consider two objectives: the number of FLOPs required by the network and the error rate of the network. The stopping criterion of the simulated annealing method is defined as the number of iterations.
#### Iii-B1 Main flow of MOSA algorithm
The MOSA algorithm relies on Smith's Pareto dominance rule [31], which avoids complications such as the need to place the two objective values on the same scale, the application of decision rules to aggregate the acceptance probabilities, and the need to maintain a different temperature for each objective. All non-dominated solutions encountered during the search are stored in an external archive A, beginning with the first iteration at the initial temperature. As new solutions are accepted, A is updated (by inserting the new solution X' and removing all solutions dominated by it), eventually forming the Pareto front of superior solutions. As depicted in the flowchart, whenever a new solution X' is obtained, X and A are updated based on the dominance relationships between the current solution X, the new solution X', and the solutions in the external archive A. The process of re-visiting previously visited archive solutions is known as the return-to-base strategy. In contrast to the single-objective SA algorithm, the \(\Delta F\) calculation used to determine the acceptance probability differs in this method. For calculation purposes, a single temperature is maintained regardless of the number of objectives.
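A minimal sketch of the Pareto dominance test and the archive update described above, for two minimized objectives (error rate, FLOPs); names are illustrative:

```
def dominates(a, b):
    # a dominates b if a is no worse on every objective
    # and strictly better on at least one (minimization)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, new):
    # Insert `new` into the external archive A only if it is non-dominated,
    # removing any archived solutions that `new` dominates
    if any(dominates(old, new) for old in archive):
        return archive
    return [old for old in archive if not dominates(new, old)] + [new]

# toy usage: objectives are (error rate, FLOPs)
A = []
for sol in [(0.10, 5e8), (0.12, 3e8), (0.09, 6e8), (0.15, 7e8)]:
    A = update_archive(A, sol)   # the last solution is dominated and rejected
```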
#### Iii-B2 Setting and calculation of temperature and probability for SA method
First, we consider the initial and final temperatures.
Fig. 4: Flowchart of neural network hyperparameter tuning using simulated annealing.
Fig. 3: Schematic diagram of Text-CNN.
Theoretically, the higher the initial temperature setting, the better; but since the time required for convergence increases correspondingly with the temperature, a compromise between convergence time and convergence accuracy is necessary. To define \(T_{init}\), we apply a real-time initial temperature selection strategy. In this strategy, we do not use Eq. (3) to calculate the initial acceptance probability for a given initial temperature (where \(\Delta F\) is the amount of deterioration in the objective function and \(T_{cur}\) is the current temperature).
\[p_{acc}=\min\{1,\exp(-\Delta F/T_{cur})\} \tag{3}\]
Instead, we use Eq. (4) to calculate the initial temperature from a given initial probability value, where \(\Delta F_{ave}\) is the average amount of deterioration observed during a short preliminary "burn-in" run. \(T_{final}\) is also defined by this real-time temperature adjustment method.
\[T_{init}=-\Delta F_{ave}/\ln(p_{acc}) \tag{4}\]
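Eq. (4) is easy to check numerically; the value \(\Delta F_{ave}=0.4\) below is our assumption, chosen so that \(p_{acc}=0.5\) reproduces the \(T_{init}\approx 0.577\) reported later in the experiments:

```
import math

def initial_temperature(delta_f_ave, p_acc=0.5):
    # Eq. (4): T_init = -(Delta F_ave / ln(p_acc))
    return -delta_f_ave / math.log(p_acc)

# With p_acc = 0.5, T_init = Delta F_ave / ln 2, about 1.44 * Delta F_ave;
# Delta F_ave = 0.4 (assumed) gives T_init of roughly 0.577.
print(initial_temperature(0.4))
```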
#### Iii-B3 Acceptance criteria in searching parameters
In this experiment, the number of iterations is used to define the stopping criterion for the simulated annealing method. The rationale is as follows: if poor early performance is observed on the validation set, the current training process is terminated and the next search round is initiated. This approach has the benefit of introducing noise while decreasing the total running time of the HPO method. In each iteration of the simulated annealing method, the newly generated configuration is trained on the training split of the original training set and evaluated on the validation set (i.e., the test split of the original training set). We apply Xavier weight initialization, a learning rate of 0.001, the RMSprop optimizer, and a batch size of 32.
#### Iii-B4 Optimization of the Simulated Annealing method
Starting with the initial solution and initial temperature \(T_{init}\), the iterative process "generate a new solution \(\rightarrow\) calculate the objective function difference \(\rightarrow\) accept or reject" is repeated for the current solution and temperature. \(T_{cur}\) is gradually decayed, and if the optimal error rate achieved on the validation set does not improve after three consecutive epochs, the training procedure is terminated, indicating that the current solution is the approximate optimal solution, i.e., the optimal state of the network has been reached.
## IV Experiments and Results
### _Introduction to the experimental data set_
This experiment utilizes three short text categorization datasets: MR, CR, and TREC. MR is a dataset for the sentiment analysis of movie reviews, with each sample classified as expressing positive or negative sentiment. The CR dataset consists of reviews of five electronic products, with each sentence manually tagged with the sentiment of the review. TREC is a question classification dataset in which each data point is a question description, and the job entails categorizing a question into one of six question types: person, place, numerical information, abbreviation, description, and entity.
The table displays the statistical information of the three data sets.
Several samples from each of the three datasets are provided below.
* It's a square, sentimental drama that satisfies, as comfort food often can. [**MR Dataset**, tags: positive].
* The sort of movie that gives tastelessness a bad rap. [**MR Dataset**, tags: negative].
* this camera is so easy to use! [**CR Dataset**, tags: positive].
* the sound level is also not as high as i would have expected. [**CR Dataset**, tags: negative].
* What is Australia's national flower? [**TREC Dataset**, tags: place]
* Who was the first man to fly across the Pacific Ocean? [**TREC Dataset**, tags: person]
### _Introduction to the comparison model_
We compare our model with the experimental models in Kim's [1] study.
* **CNN-rand:** All word vectors are initialized at random before being utilized as optimization parameters during training.
* **CNN-static:** All word vectors are directly acquired with the Word2Vec tool and are fixed.
Fig. 5: Schematic diagram of MOSA algorithm flow.
* **CNN-multichannel:** A mix of CNN-static and CNN-non-static, i.e. two types of inputs.
* **DCNN [32]:** Dynamic Convolutional Neural Network with k-max pooling.
* **MV-RNN [33]:** Matrix-Vector Recursive Neural Network with parse trees.
### _SA-CNN parameter setting_
#### Iv-C1 Parameter setting of MOSA
In this research, we employ the same parameter settings for the simulated annealing approach as Gulcu and Kus [30], set the initial probability value to 0.5, and derive \(T_{init}\approx 0.577\) and \(T_{final}\approx 0.12\) in a similar fashion. The relationship between the number of outer and inner iterations estimated for different cooling rate values is depicted in Table 2.
The number of outer iterations determines the number of networks searched, whereas the number of inner iterations determines the number of training iterations for a single network. We chose 0.95 as the cooling rate, for which the number of outer cycles exceeds the number of inner cycles. This ensures that as many network structures as possible are explored and avoids repeated training of a single network structure, thereby preventing the search from becoming trapped in a locally optimal network selection.
#### Iv-C2 Search range of neural network hyperparameters
In this study, we utilize the 300-dimensional word2vec embeddings trained by Mikolov [34] as the initial representation and continually update the word vectors during training, similar to Kim's [1] approach. The empirical search ranges of the hyperparameters to be tuned in the network are listed below, so that the simulated annealing technique can search for a new solution from the existing one; a minimal neighbor-sampling sketch over this search space is given after the list. Expanding the range of searchable hyperparameters may yield better experimental outcomes, provided computational resources permit.
* kernelCount: [32, 64, 96, 100, 128, 160, 256]
* dropoutRate: [0.1, 0.2, 0.3, 0.4, 0.5]
* unitCount: [16, 32, 64, 128, 256, 512]
* dropoutRate: [0.1, 0.2, 0.3, 0.4, 0.5]
* activation: [relu, leaky_relu, elu, tanh, linear]
* learningRate: [0.0001, 0.001, 0.01, 0.0002, 0.0005, 0.0008, 0.002, 0.004, 0.005, 0.008]
* batchSize: [64, 128, 256]
* seedNumber: 40
* ratioInit: 0.9
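A minimal sketch of one way to sample an initial configuration and generate a neighboring solution over this search space; the exact neighbor operator of MOSA is not specified above, so the single-mutation strategy below is an assumption.

```
import random

# Candidate lists copied from the search ranges above; the fixed values
# (seedNumber, ratioInit) are not part of the search.
SEARCH_SPACE = {
    "kernelCount":  [32, 64, 96, 100, 128, 160, 256],
    "dropoutRate":  [0.1, 0.2, 0.3, 0.4, 0.5],
    "unitCount":    [16, 32, 64, 128, 256, 512],
    "activation":   ["relu", "leaky_relu", "elu", "tanh", "linear"],
    "learningRate": [0.0001, 0.001, 0.01, 0.0002, 0.0005,
                     0.0008, 0.002, 0.004, 0.005, 0.008],
    "batchSize":    [64, 128, 256],
}

def random_config():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def neighbor(config, n_mutations=1):
    # Generate a new solution by re-sampling a few entries of the current one
    new = dict(config)
    for key in random.sample(list(SEARCH_SPACE), n_mutations):
        new[key] = random.choice(SEARCH_SPACE[key])
    return new
```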
### _Experimental results and discussion_
#### Iv-D1 Comparison of model accuracy results
The following table shows the accuracy of different CNN models for text classification tasks on MR, CR and TREC datasets.
As shown in Table 3, on the MR and CR datasets the model presented in this paper did not outperform all other neural network structures; we attribute this to the lack of computational resources, which restricted the parameter search range so that the optimal network was not found. Nevertheless, the capability of the convolutional neural network was still well exploited, and on the TREC dataset our model SA-CNN achieved the highest accuracy rate. Under the assumption of the same model structure, the experimental results demonstrate that using the simulated annealing algorithm to find the optimal hyperparameters not only removes the tedium of manual parameter tuning, but also yields better parameters than manual tuning if the search range is appropriate, thereby achieving high test accuracy.
#### Iv-D2 Discussion of experimental results
To understand the characteristics of the optimal hyperparameters discovered by the simulated annealing technique, the following tables list the top 3 optimal hyperparameter configurations found by the algorithm on the TREC dataset.
Compared to manual tuning, the simulated annealing algorithm may discover hyperparameter combinations that one would not ordinarily consider; for instance, the numbers of convolutional kernels of the Top-1 model on the TREC dataset are 100, 64, and 32 for the three different filter sizes, a combination unlikely to be tried by hand. Therefore, by utilizing the simulated annealing process to optimize the neural network's hyperparameters, it is theoretically possible to obtain hyperparameter combinations that have been overlooked or disregarded based on prior experience, and thereby achieve improved performance.
## V Conclusion
In this article, we proposed a machine learning method combining simulated annealing and convolutional neural networks. The main goal is to tune the hyperparameters of neural networks using simulated annealing, in order to prevent manual tuning from falling into local optima and thus failing to enhance neural network performance. The experimental results demonstrate that tuning the hyperparameters of a neural network with simulated annealing effectively overcomes the constraints of manual parameter tuning, is practical, and can in principle be applied to additional natural language processing problems. Because the resource space is limited and different cooling rates lead to different initializations, time costs, and architectures, the final solution may only be an approximate optimum, and this approximation may vary. Consequently, the simulated annealing method may be integrated with other algorithms, or the multi-objective simulated annealing algorithm may be further optimized, thereby further enhancing the efficiency of simulated annealing for neural network optimization.
|
2303.11725 | Online Learning of Wheel Odometry Correction for Mobile Robots with
Attention-based Neural Network | Modern robotic platforms need a reliable localization system to operate daily
beside humans. Simple pose estimation algorithms based on filtered wheel and
inertial odometry often fail in the presence of abrupt kinematic changes and
wheel slips. Moreover, despite the recent success of visual odometry, service
and assistive robotic tasks often present challenging environmental conditions
where visual-based solutions fail due to poor lighting or repetitive feature
patterns. In this work, we propose an innovative online learning approach for
wheel odometry correction, paving the way for a robust multi-source
localization system. An efficient attention-based neural network architecture
has been studied to combine precise performances with real-time inference. The
proposed solution shows remarkable results compared to a standard neural
network and filter-based odometry correction algorithms. Nonetheless, the
online learning paradigm avoids the time-consuming data collection procedure
and can be adopted on a generic robotic platform on-the-fly. | Alessandro Navone, Mauro Martini, Simone Angarano, Marcello Chiaberge | 2023-03-21T10:30:31Z | http://arxiv.org/abs/2303.11725v1 | # Online Learning of Wheel Odometry Correction for Mobile Robots with Attention-based Neural Network
###### Abstract
Modern robotic platforms need a reliable localization system to operate daily beside humans. Simple pose estimation algorithms based on filtered wheel and inertial odometry often fail in the presence of abrupt kinematic changes and wheel slips. Moreover, despite the recent success of visual odometry, service and assistive robotic tasks often present challenging environmental conditions where visual-based solutions fail due to poor lighting or repetitive feature patterns. In this work, we propose an innovative online learning approach for wheel odometry correction, paving the way for a robust multi-source localization system. An efficient attention-based neural network architecture has been studied to combine precise performances with real-time inference. The proposed solution shows remarkable results compared to a standard neural network and filter-based odometry correction algorithms. Nonetheless, the online learning paradigm avoids the time-consuming data collection procedure and can be adopted on a generic robotic platform on-the-fly.
Mobile Robots Odometry Correction Deep Learning Robot Localization
## 1 Introduction
Wheel odometry (WO) and inertial odometry (IO) are the simplest forms of self-localization for wheeled mobile robots [1]. However, extended trajectories without re-localization, together with abrupt kinematic and ground changes, drastically reduce the reliability of wheel encoders as the unique odometric source. For this reason, visual odometry (VO) has recently emerged as a more general solution for robot localization [2], relying only on the visual features extracted from images. Nonetheless, service and assistive robotics platforms may often encounter working conditions that forbid the usage of visual data. Concrete scenarios are often related to the lack of light in indoor environments where GPS signals are denied, as occurs in tunnels exploration [3, 4] or in assistive nightly routines [5, 6, 7]. Repetitive feature patterns in the scene can also hinder the precision of VO algorithms, a condition that always exists while navigating through empty corridors [8] or row-based crops [9]. Therefore, an alternative or secondary localization system besides VO can provide a substantial advantage for the robustness of mobile robot navigation. Wheel-inertial odometry is still widely considered a simple but effective option for localization in naive indoor scenarios. However, improving its precision in time would extend its usage to more complex scenarios. Previous works tackle the problem with filters or simple neural networks, as discussed in Section1.1. Learning-based solutions demonstrate to mitigate the odometric error at the cost of a time-consuming data collection and labeling process. Recently, online learning
has emerged as a competitive paradigm to efficiently train neural networks on-the-fly avoiding dataset collection [10]. In this context, this work aims at paving the way for a learning-based system directly integrated into the robot and enabling a seamless transition between multiple odometry sources to increase the reliability of mobile robot localization in disparate conditions. Figure 1 summarizes the proposed methodology schematically.
### Related Works
Several studies have explored using machine learning techniques to estimate wheel odometry (WO) in mobile robotics applications. Approaches include different feed-forward neural networks (FFNN) [11], whose output has in some cases been fused with other sensor data [12], and long short-term memory (LSTM) networks, which have been applied to car datasets [13]. These approaches show promising improvements in WO accuracy, which is crucial for mobile robotics applications.
Many works have focused on using Inertial Measurement Unit (IMU) data in mobile robots or other applications, such as person tracking using IMU data from cell phones [14]. One system was improved by implementing zero-velocity detection with a Gated Recurrent Unit (GRU) neural network [15]. Another study used an Extended Kalman Filter (EKF) to estimate positions and velocities in real time in a computationally lightweight manner [16]. Additionally, a custom deep Recurrent Neural Network (RNN) model, IONNet, was used to estimate changes in position and orientation over independent time windows [17]. Some studies used a Kalman Filter (KF) to remove noise from the accelerometer, gyroscope, and magnetometer signals and integrated the filtered signal to reconstruct the trajectory [18]. Another KF approach was combined with a neural network that estimates the noise parameters of the filter [19].
Several neural network architectures have been proposed to predict or correct IO odometry over time. For example, a three-channel LSTM was fed with IMU measurements to output variations in position and orientation and tested on a vehicle dataset [20]. Another LSTM-based architecture mimics a kinematic model, predicting orientation and velocity given IMU input data. Studies have investigated the role of hyper-parameters in IO estimation [21].
Figure 1: Diagram of the proposed approach. Red blocks and arrows refer to the online training phase, blue ones to the model inference stage, and yellow ones to the odometric input data.
Sensor fusion of wheel encoder and IMU data is a common method for obtaining a robust solution. One approach involves fusing the data with a Kalman Filter, which can assign a weight to each input based on its accuracy [22]. A fully connected layer combined with a convolutional layer has been employed to estimate changes in position and orientation in 2D space over time for an Ackermann vehicle, along with a data enhancement technique to improve learning efficiency [23]. Additionally, a GRU-RNN-based method has been proposed to compensate for drift in mecanum-wheel mobile robots, with in-depth fine-tuning of hyperparameters to improve performance [24].
### Contributions
In this work, we tackle the problem of improving wheel-inertial odometry by learning how to correct it online with an efficient artificial neural network. At this first stage, the study has been conceived to provide the robot with a more reliable, secondary odometric source in standard indoor environments where the working conditions for VO can temporarily vanish, as in the case of robots for domestic night surveillance or assistance. The main contribution of this work can be summarized as:
* A novel online learning approach for wheel-inertial odometry correction which allows avoiding complex trajectory data collection and can be directly included in a ROS 2 system;
* An efficient model architecture to preserve both easy online training and fast inference performance.
Nonetheless, a validation dataset of sensor data has been collected with the robot following different trajectories to conduct extensive experiments and comparisons with state-of-the-art offline methods.
## 2 Methodology
### Problem Formulation
The position of a robot at time \(t\) referred to the starting reference frame \(\textbf{R}_{0}\) can be calculated by accumulating its increments during time segments \(\delta t\). The time stamp \(n\) refers to the generic time instant \(t=n\delta t\). The state of the robot \(\textbf{x}_{n}\) is defined by the position and orientation of the robot, such as:
\[\textbf{x}_{n}=(x_{n},y_{n},\theta_{n})^{T}, \tag{1}\]
where \((x_{n},y_{n})\) is the robot's position in the 2D space and \(\theta_{n}\) is its heading angle. Given the state, it is possible to parametrize the roto-translation \(\textbf{T}_{0}^{m}\) matrix from the robot's frame \(\textbf{R}_{m}\) to the global frame \(\textbf{R}_{0}\). Its first two columns represent the axes of the robot frame, and the last one is its position with respect to the origin.
The robot employed to develop this work is equipped with an IMU, which includes a gyroscope and an accelerometer, and two wheel encoders. Therefore, \(\textbf{u}_{n}\) is defined as the measurement array referred to instant \(n\), i.e.:
\[\textbf{u}_{n}=\left(v_{l},v_{r},\ddot{x},\ddot{y},\ddot{z},\dot{\theta}_{x},\dot{\theta}_{y},\dot{\theta}_{z}\right)^{T}, \tag{2}\]
Figure 2: Architecture of the proposed model. The batch dimension is omitted for better clarity.
where \((v_{l},v_{r})\) are the wheels' velocities, \((\ddot{x},\ddot{y},\ddot{z})\) are the linear accelerations, and \((\dot{\theta}_{x},\dot{\theta}_{y},\dot{\theta}_{z})\) are the angular velocities. The input \(\mathbf{U}_{n}\) to the proposed model consists of the concatenation of the last \(N\) samples of the measurements, \(\mathbf{U}_{n}=(\mathbf{u}_{(n)},\mathbf{u}_{(n-1)},\dots,\mathbf{u}_{(n-N)})^{T}\). At each time sample, the state is updated as a function of the measurements \(f(\mathbf{U}_{n})\): first, the change of pose \(\delta\hat{\mathbf{x}}_{n}=f(\mathbf{U}_{n})\) of the robot is estimated, relative to the previous pose \(\hat{\mathbf{x}}_{n-1}\). Then, the updated state is calculated, given the transformation matrix obtained before, as:
\[\hat{\mathbf{x}}_{n}=\hat{\mathbf{x}}_{n-1}\boxplus f(\mathbf{U}_{n})= \mathbf{T}_{0(n-1)}^{m}\delta\hat{\mathbf{x}}_{n}, \tag{3}\]
where the operator \(\boxplus\) symbolizes the state update.
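To make the update rule of Equation (3) concrete, the following minimal Python sketch (assuming a planar robot and hypothetical variable names) rotates the predicted increment, expressed in the previous robot frame, into the global frame and accumulates it onto the previous state.

```python
import numpy as np

def update_state(x_prev, delta_x):
    """Accumulate a pose increment as in Eq. (3).

    x_prev:  previous state (x, y, theta) in the global frame R_0.
    delta_x: predicted increment (dx, dy, dtheta) in the robot frame R_(n-1).
    """
    x, y, theta = x_prev
    c, s = np.cos(theta), np.sin(theta)
    # Rotation part of the roto-translation: robot frame axes expressed in R_0.
    R = np.array([[c, -s],
                  [s,  c]])
    dxy_global = R @ delta_x[:2]        # rotate the translation increment
    theta_new = theta + delta_x[2]      # headings add directly in 2D
    return np.array([x + dxy_global[0], y + dxy_global[1], theta_new])

# Example: one update step with a hypothetical network prediction.
x_n = update_state(np.array([1.0, 2.0, np.pi / 4]), np.array([0.05, 0.0, 0.01]))
```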
### Neural Network Architecture
As formalized in the previous section, the prediction of \(\delta\hat{\mathbf{x}}_{n}\in\mathbb{R}^{3}\) from \(\mathbf{U}_{n}\in\mathbb{R}^{T\times C}\) is framed as a regression problem. The architecture we propose to solve this task is inspired by REMNet [25, 26], though it uses 2D convolutions instead of the original 1D convolutional blocks (Figure 2). This modification aims at exploiting temporal correlations without compressing the channel dimension throughout the backbone. In particular, we keep the channel dimension \(C\) separated from the filter dimension \(F\). In this way, the first convolutional step with kernel \((K,1)\) and \(F\) filters outputs a low-level feature map \(f_{1}\in\mathbb{R}^{T\times C\times F}\). Then, a stack of \(N\) Residual Reduction Modules (RRM) extracts high-level features while reducing the temporal dimension \(T\). Each RRM consists of a residual (\(Res\)) block followed by a reduction (\(Red\)) module:
\[RRM(x)=Red(Res(x)) \tag{4}\]
The \(Res\) block comprises a 2D convolution with kernel \(K\times 1\) followed by a Squeeze-and-Excitation (SE) block [27] on the residual branch. The SE block applies attention to the channel dimension of the features with a scaling factor learned from the features themselves. First, the block applies average pooling to dimensions \(T\) and \(C\). Then, it reduces the channel dimensionality with a bottleneck dense layer of \(F/R\) units. Finally, another dense layer restores the original dimension and outputs the attention weights. After multiplying the attention mask with the features, the result is used as a residual and added to the input of the residual block. The \(Red\) block halves the temporal dimension by summing two parallel convolutional branches with a stride of 2. The layers have kernels \(K\times 1\) and \(1\times 1\), respectively, to extract features at different scales. After \(N\) RRM blocks, we obtain the feature tensor \(f\in\mathbb{R}^{T/2^{N}\times C\times F}\), which is flattened to predict the output through the last dense layer. We also include a dropout layer to discourage overfitting.
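Based on our reading of this description, the following is a minimal PyTorch sketch of the SE attention and one Residual Reduction Module; the class names, the (B, F, T, C) tensor layout, and the padding choices are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation over the filter dimension F of a (B, F, T, C) tensor."""
    def __init__(self, filters, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # average over T and C
        self.fc = nn.Sequential(
            nn.Linear(filters, filters // reduction), nn.ReLU(),
            nn.Linear(filters // reduction, filters), nn.Sigmoid())

    def forward(self, x):
        w = self.pool(x).flatten(1)                    # (B, F)
        w = self.fc(w).view(x.size(0), -1, 1, 1)       # learned attention weights
        return x * w

class RRM(nn.Module):
    """Residual Reduction Module: Res block followed by a temporal Red block."""
    def __init__(self, filters, k=3, reduction=4):
        super().__init__()
        self.res_conv = nn.Conv2d(filters, filters, (k, 1), padding=(k // 2, 0))
        self.se = SEBlock(filters, reduction)
        # Two parallel strided branches halve the temporal dimension T.
        self.red_a = nn.Conv2d(filters, filters, (k, 1), stride=(2, 1), padding=(k // 2, 0))
        self.red_b = nn.Conv2d(filters, filters, (1, 1), stride=(2, 1))

    def forward(self, x):                              # x: (B, F, T, C)
        x = x + self.se(self.res_conv(x))              # residual branch with SE attention
        return self.red_a(x) + self.red_b(x)           # Red: sum of two scales, stride 2

# Usage with the dimensions quoted later (T=10, C=8, F=64):
x = torch.randn(32, 64, 10, 8)
print(RRM(filters=64)(x).shape)                        # torch.Size([32, 64, 5, 8])
```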
### Training Procedure
The goal of this work consists of learning the positioning error of the robot using wheel odometry. Nonetheless, it is important to remark that, nowadays, visual-inertial odometry (VIO) is a standard approach on robotic platforms. This work does not aim to propose a more precise localization system but to learn wheel-inertial odometry as a second reliable localization algorithm available whenever visual approaches fail.
We exploit a basic VIO system on the robot only for the training process, since it enables a competitive online learning paradigm that trains the model directly on the robot. Batch learning, the most widely used training paradigm, requires all the data to be available in advance. Since the data are collected over time, the proposed method instead trains the network continuously, whenever a batch of \(N\) samples becomes available. This approach has been tested extensively in [28], demonstrating a negligible loss in accuracy compared to the batch-learning paradigm.
The proposed model's training consists of two main steps, which are repeated as long as new data are available. First, a batch of \(N\) elements is collected, containing the network inputs \(\mathbf{U}_{n}\) and the corresponding expected outputs \(\delta\mathbf{x}_{n}\). Then, an update step is carried out using an SGD-based optimizer with a Mean Absolute Error loss function, which does not overemphasize outliers or excessive noise in the training data.
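A minimal sketch of this incremental loop is given below, assuming a PyTorch model and a hypothetical helper `next_measurement()` that yields synchronized sensor windows and VIO-derived target increments; it is illustrative, not the authors' exact implementation.

```python
import torch

def online_training_loop(model, next_measurement, batch_size=32, lr=7e-5):
    """Continuously update the model as sensor/VIO pairs stream in (hypothetical loop)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()              # MAE: does not overemphasize outliers
    inputs, targets = [], []
    while True:
        u, delta_x = next_measurement()      # sensor window U_n, VIO pose increment
        inputs.append(u)
        targets.append(delta_x)
        if len(inputs) == batch_size:        # one update per collected mini-batch
            x, y = torch.stack(inputs), torch.stack(targets)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
            inputs.clear()
            targets.clear()
```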
## 3 Tests and Results
In this section, the proposed approach is tested through extensive experimental evaluations. The model presented in Section 2.2 has been trained with both the incremental learning method and a classical batch training approach. Results obtained with a simple FFNN model and a standard localization solution based on an EKF are also discussed in the comparison. To this end, both training processes were carried out on the same dataset, and all the tests were executed on the same test set.
### Experimental Setting
The dataset used for the experiments was collected in a generic indoor environment. The employed robotic platform was a Clearpath Jackal1, a four-wheeled skid-steer robot designed for indoor and outdoor applications. All the code was developed in a ROS 2 framework and tested on Ubuntu 20.04 LTS using the ROS 2 Foxy distribution.
Footnote 1: [https://clearpathrobotics.com/jackal-small-unmanned-ground-vehicle/](https://clearpathrobotics.com/jackal-small-unmanned-ground-vehicle/)
Since an indoor environment was considered, the linear velocity of the robot was limited to \(0.4m/s\) and its angular velocity to \(1rad/s\). The data from the embedded IMU and wheel encoders were used as inputs to the model. Under these conditions, we used the robot pose provided by an Intel Realsense T265 tracking camera as ground truth. As the testing environment is a single room, the tracking camera is guaranteed to provide a drift of less than \(1\%\) in a closed loop path2. All the data have been sampled at \(1/\delta t=25Hz\).
Footnote 2: [https://www.intelrealsense.com/tracking-camera-t265/](https://www.intelrealsense.com/tracking-camera-t265/)
The data were collected by teleoperating the robot around the room and recording the sensor measurements. For the training dataset, the robot was moved along random trajectories. For the test dataset, critical situations in which skid-steer odometry is known to lose the most accuracy were reproduced, such as tight curves, hard braking, strong accelerations, and in-place turns. The obtained training dataset consists of 156579 samples; 80% have been used for training and 20% for validation and hyperparameter tuning. The test dataset consists of 61456 samples.
Figure 4: Absolute error of position and orientation of different methods during the test performed on a subset of infinite-shaped trajectories. The considered subset is the same as Figure 3.
Figure 3: Infinite-shaped trajectories estimated by different methods. The data are collected during a total navigation time of about \(60s\).
The model hyperparameters have been tuned by performing a grid search using a batch learning process, considering a trade-off between accuracy and efficiency. In the identified model, we adopted \(F=64\) filters, \(N=2\) reduction modules, and a ratio factor \(R=4\). A kernel size \(K=3\) is used for all the convolutional layers, including the backbone. The input dimensions were fixed to \(T=10\) and \(C=8\). The former corresponds to the number of temporal steps: a higher value proved superfluous, while a lower value led to performance degradation. The latter, \(C\), corresponds to the number of input features, i.e., the sensor measurements described in Equation (2).
We adopted Adam [29] as the optimizer for training. The exponential decay rate for the first-moment estimates is fixed to \(\beta_{1}=0.9\), and the decay rate for the second-moment estimates is fixed to \(\beta_{2}=0.999\). The epsilon factor for numerical stability is fixed to \(\epsilon=10^{-8}\). The optimal learning rate \(\eta\) was experimentally determined as \(1\times 10^{-4}\) for batch learning. Conversely, the incremental learning process showed that a value of \(\eta=7\times 10^{-5}\) avoided overfitting, since the data were not shuffled. In both learning processes, a batch size of \(B=32\) was used.
### Evaluation Metrics
To evaluate the performance of the proposed model, two different metrics were used [30]:
* _Mean Absolute Trajectory Error (m-ATE)_, which averages the magnitude of the error between the estimated position and orientation of the robot and its ground truth pose in the same frame. It can sometimes lack generality due to possible error compensations along the trajectory.
* _Segment Error (SE)_, which averages the errors along all the possible segments of a given length \(s\), considering multiple starting points. It is far less sensitive to local degradations or compensations than the previous metric; a minimal sketch of both computations is given below.
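The following numpy sketch shows one plausible reading of these two metrics; the exact pose-alignment and segment-sampling details of [30] may differ, and segment length is expressed here in samples rather than meters.

```python
import numpy as np

def m_ate(est, gt):
    """Mean Absolute Trajectory Error over poses (x, y, theta) of shape (N, 3)."""
    pos_err = np.linalg.norm(est[:, :2] - gt[:, :2], axis=1)
    ang_err = np.abs(np.arctan2(np.sin(est[:, 2] - gt[:, 2]),
                                np.cos(est[:, 2] - gt[:, 2])))  # wrap to [-pi, pi]
    return pos_err.mean(), ang_err.mean()

def segment_error(est, gt, seg_len):
    """Average end-point position error over all segments of seg_len samples."""
    errs = []
    for i in range(len(gt) - seg_len):
        # Error accumulated over the segment, relative to its starting pose.
        d_est = est[i + seg_len] - est[i]
        d_gt = gt[i + seg_len] - gt[i]
        errs.append(np.linalg.norm((d_est - d_gt)[:2]))
    return float(np.mean(errs))
```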
### Quantitative Results
The proposed method was tested by training the neural network from scratch using the real-time stream of sensor data delivered over ROS 2 topics. The data were first collected in mini-batches of 32 elements. After each mini-batch is complete, backpropagation is performed on the model to update all the weights. The data stream was recorded to provide the training dataset described in Section 3.1, which was later used to evaluate the other methods. The results are compared to different state-of-the-art solutions: i) the same network trained with traditional batch learning, ii) a feedforward neural network, as in [23], and iii) an Extended Kalman Filter based method, which can be considered one of the most common wheel-inertial odometry estimators.
All the models were evaluated offline using a test set composed of 19 sequences of various lengths, comprised between \(60s\) and \(280s\), which aim to recreate different critical situations for wheel inertial odometry. In particular, the sequences can be separated into three main trajectory types:
* _Type A_ comprises round trajectories, which do not allow fortunate error compensation over time. Therefore, they may lead to fast degradation of the estimated pose, and especially of the orientation.
Figure 5: Histograms of the SE error in position and orientation in section B of the test set.
* _Type B_ comprises infinite-shaped trajectories. This test allows partial error compensation, but a possible unbalanced orientation prediction may lead to fast degradation of the position accuracy. A partial sequence of type B trajectories is shown in Figure 3.
* _Type C_ comprises irregular trajectories, including hard brakings and accelerations, and aims to test the different methods' overall performance.
Table 1 presents the numeric results of the different tests, considering the proposed model (Online Learning) and the selected benchmarks. All the learning-based approaches show a significant error reduction compared to the EKF results, which can be considered a baseline for improvement. Considering both the neural network architectures trained offline, the proposed convolutional one achieves an average improvement of 73.5% on the position \(\text{m-ATE}_{(x,y)}\) and 85.3% on the orientation \(\text{m-ATE}_{\theta}\). In comparison, the FFNN model achieves 49.0% and 48.4%, respectively. The Segment Error improves in both cases: the proposed model improves by 42.6% on the position \(\text{SE}_{(x,y)}\) and 75.8% on the orientation \(\text{SE}_{\theta}\). The FFNN architecture improves by 39.3% and 60.3%, respectively.
Compared with the EKF baseline, the online learning model shows almost the same improvement as batch learning. The improvement on the m-ATE equals 60.4% on the position and 79.2% on the orientation. The Segment Error also appears to be lower, showing an improvement of 19.1% on position and 60.3% on orientation. The slight loss of accuracy of online training compared to batch training is an acceptable trade-off for the possibility of training the model without a pre-collected dataset.
Figure 5 reports the histograms of the distribution of the Segment Errors, in position and orientation respectively, for test scenario B. It emerges that the learning-based methods achieve, on average, a smaller error than the EKF method. Figure 4 shows the error trend over time for the trajectory of Figure 3. It is evident that the batch-trained and online-trained models perform similarly to each other.
Regarding computational efficiency, a training step performed on an external PC with 32-GB RAM and a \(12^{th}\)-generation Intel Core i7 @ 4.7 GHz took an average time of 25 ms per batch, considering 100 measurements.
## 4 Conclusions
This paper introduces an online learning approach and an efficient neural network architecture for wheel-inertial odometry estimation in mobile robots from raw sensor data. The online training paradigm does not need a pre-collected dataset and allows fine-tuning the performance of the model over time, adapting to environmental changes. Moreover, the proposed model's reduced size allows on-the-fly training and fast inference on a low-resource robotic platform.
Future works may include developing a collaborative system based on integrating multiple odometry sources with a seamless transition to constantly guarantee accurate localization data to the robot.
## 5 Acknowledgement
This work has been developed with the contribution of Politecnico di Torino Interdepartmental Centre for Service Robotics PIC4SeR3.
Footnote 3: www.pic4ser.polito.it
|
2303.07621 | Two-stage Neural Network for ICASSP 2023 Speech Signal Improvement
Challenge | In ICASSP 2023 speech signal improvement challenge, we developed a dual-stage
neural model which improves speech signal quality induced by different
distortions in a stage-wise divide-and-conquer fashion. Specifically, in the
first stage, the speech improvement network focuses on recovering the missing
components of the spectrum, while in the second stage, our model aims to
further suppress noise, reverberation, and artifacts introduced by the
first-stage model. Achieving 0.446 in the final score and 0.517 in the P.835
score, our system ranks 4th in the non-real-time track. | Mingshuai Liu, Shubo Lv, Zihan Zhang, Runduo Han, Xiang Hao, Xianjun Xia, Li Chen, Yijian Xiao, Lei Xie | 2023-03-14T04:19:41Z | http://arxiv.org/abs/2303.07621v1 | # Two-Stage Neural Network for ICASSP 2023 Speech Signal Improvement Challenge
###### Abstract
In ICASSP 2023 speech signal improvement challenge, we developed a dual-stage neural model which improves speech signal quality induced by different distortions in a stage-wise divide-and-conquer fashion. Specifically, in the first stage, the speech improvement network focuses on recovering the missing components of the spectrum, while in the second stage, our model aims to further suppress noise, reverberation, and artifacts introduced by the first-stage model. Achieving 0.446 in the final score and 0.517 in the P.835 score, our system ranks 4th in the non-real-time track.
Mingshuai Liu\({}^{1}\), Shubo Lv\({}^{1}\), Zihan Zhang\({}^{1}\), Runduo Han\({}^{1}\), Xiang Hao\({}^{1}\), Xianjun Xia\({}^{2}\), Li Chen\({}^{2}\), Yijian Xiao\({}^{2}\), Lei Xie\({}^{1*}\)
\({}^{1}\)Audio, Speech and Language Processing Group (ASLP@NPU), School of Software, Northwestern Polytechnical University, Xi'an, China
\({}^{2}\)ByteDance, China
_Index Terms_: speech distortion, speech enhancement
## 1 Introduction
During audio communication, speech signals may be degraded by multiple distortions, including coloration, discontinuity, loudness, noisiness, and reverberation. Existing methods have an impressive performance in noise suppression, but there is still an open problem on how to repair several distortions simultaneously. To stimulate research in this direction, ICASSP 2023 has specifically held the first Speech Signal Improvement Challenge1, as a flagship event of Signal Processing Grand Challenge.
Footnote 1: [https://www.microsoft.com/en-us/research/academic-program/speech-signal-improvement-challenge-icassp-2023/](https://www.microsoft.com/en-us/research/academic-program/speech-signal-improvement-challenge-icassp-2023/) * Corresponding author.
Inspired by the idea of decoupling difficult tasks into multiple easier sub-tasks [1], we propose a neural speech signal improvement approach with a training procedure that involves two stages. In the first stage, we adopt DCCRN [2] as a repair tool and substitute its original convolution structure with a more powerful gated convolution, with the aim of mainly repairing the missing components in the spectrum. Thus, with simulated paired data of perfect and impaired speech, GateDCCRN learns to improve quality problems caused by coloration, discontinuity, and loudness. For the loss function, besides SI-SNR loss and power-law compressed loss [3], we also integrate adversarial loss to further improve speech naturalness. In the second stage, a variant of S-DCCRN [4] is cascaded to the first-stage GateDCCRN model to remove noise and reverberation, and to suppress possible artifacts introduced by the previous stage. Specifically, S-DCCRN is a powerful denoising model working on super wide-band and full-band signals, consisting of two small-footprint DCCRNs - one operates on the sub-band signal and one works on the full-band signal, benefiting from both local and global frequency information. With this denoising model, we further update its bottleneck layers from LSTM to STCM [5] for better temporal modeling. The proposed system has achieved 0.446 in the final evaluation score and 0.517 in the P.835 score, leading our submission to the 4th place in the non-real-time track.
## 2 Approach
As illustrated in Fig.1, the training procedure of our system consists of two stages. The details of the network architecture, training data, and loss function used in these two stages are introduced below.
### Stage 1: Repairing Net
Considering its impressive ability in signal mapping, we employ DCCRN [2] as the backbone of our first-stage speech repair network. DCCRN is a U-Net structured complex network working on the complex-valued spectrum, where the encoder and decoder are both composed of layered convolutions, and an LSTM serves as the bottleneck for temporal modeling. Inspired by the superior performance of gated convolution in image inpainting [6], we update the complex convolution with gated complex convolution, as shown in Fig.1, resulting in GateDCCRN. For the model training loss, in addition to SI-SNR (\(\mathcal{L}_{\text{SI-SNR}}\)) and power-law compressed loss (\(\mathcal{L}_{\text{PLC}}\)), we particularly integrate adversarial loss (\(\mathcal{L}_{\text{Adv}}\)) by adding the Multi-Period Discriminator [7] and Multi-Scale Discriminator [7] into the model optimization process to further improve the speech naturalness. Thus the final loss function is \(\mathcal{L}_{\text{stage1}}=\mathcal{L}_{\text{SI-SNR}}+10\cdot\mathcal{L}_{\text{PLC}}+15\cdot\mathcal{L}_{\text{Adv}}\).
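For reference, a minimal PyTorch sketch of the SI-SNR term and the stage-1 combination is given below; the power-law compressed and adversarial terms are passed in as precomputed placeholders, since their exact forms follow [3] and [7].

```python
import torch

def si_snr_loss(est, ref, eps=1e-8):
    """Negative scale-invariant SNR between estimated and reference waveforms (B, T)."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    # Project the estimate onto the reference to obtain the scaled target.
    s_target = (est * ref).sum(-1, keepdim=True) * ref / (ref.pow(2).sum(-1, keepdim=True) + eps)
    e_noise = est - s_target
    si_snr = 10 * torch.log10(s_target.pow(2).sum(-1) / (e_noise.pow(2).sum(-1) + eps) + eps)
    return -si_snr.mean()

def stage1_loss(est, ref, l_plc, l_adv):
    # Stage-1 objective: L = L_SI-SNR + 10 * L_PLC + 15 * L_Adv,
    # with l_plc and l_adv computed elsewhere following [3] and [7].
    return si_snr_loss(est, ref) + 10.0 * l_plc + 15.0 * l_adv
```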
### Stage 2: Denoising Net
In the second stage, the pre-trained GateDCCRN and an S-DCCRN [4] are chained as the denoising structure. Specifically, as shown in Fig.1 stage 2, two lightweight DCCRN sub-modules consequently work on sub-band and full-band signals, with the design to model fine details of different frequency bands with further inter-band smoothing. Different from the original S-DCCRN, we substitute LSTM with a squeezed temporal convolution module (STCM) [5] in the bottleneck layer of the two DCCRNs, which aims to further strengthen the temporal modeling ability. With this update, the new model is named S-DCCSN. During training, we add noise and reverberation to the data simulated in the first stage to train the second-stage model, which makes the model further achieve the ability to suppress noise, reverberation, and artifacts introduced by GateDCCRN. Note that the parameters of the pre-trained GateDCCRN are frozen in this training stage. We adopt SI-SNR loss (\(\mathcal{L}_{\text{SI-SNR}}\)), power-law compressed loss (\(\mathcal{L}_{\text{PLC}}\)), and mean square error loss (\(\mathcal{L}_{\text{Mtg}}\)) to optimize the model parameters, and the final loss becomes \(\mathcal{L}_{\text{stage2}}=\mathcal{L}_{\text{SI-SNR}}+\mathcal{L}_{\text{PLC}}+\mathcal{L}_{\text{Mtg}}\).
## 3 Experiments
### Datasets
The training set is created using the DNS4 dataset. Specifically, the DNS4 dataset includes a total of 750 hours of clean speech and 181 hours of noise. In addition, 50,000 RIR clips are simulated by HYB method. The RT60 of RIRs ranges from 0.2s to 1.2s. The room
size ranges from \(5\times 3\times 3m^{3}\) to \(8\times 5\times 4m^{3}\). In total, there are 110,248 RIR clips, combined from the simulated RIR set and the RIR set provided by the challenge. We perform dynamic simulation of speech distortions during the training stage. In the first stage, we generate training data degraded by coloration, discontinuity, and loudness, accounting for 60\(\%\), 25\(\%\), and 15\(\%\) respectively. For coloration, we follow the description in [8] to design a low pass filter. We convolve it with full-band speech and perform resampling on the filtered result to generate low-band speech. Besides producing low-bandwidth distortions, we also restrict signal amplitudes within a range \([-\eta,+\eta]\) (\(\eta\in(0,1)\)). When we produce coloration distortion, the low-bandwidth speech and clipping speech account for 60\(\%\) and 40\(\%\) respectively. Specifically, full-band speech is down-sampled to 4K, 8K, 16K, and 24K with the same probability. For discontinuity, the value of speech samples is randomly set to zero with a window size of 20ms, with a zeroing probability of 10\(\%\). For loudness, we multiply the signal amplitudes by a scale within the range [0.1, 0.5]. In the second training stage, noise is further added to the first-stage training data with an SNR range of [0, 20]dB, and reverberation is further added to 50\(\%\) of the first-stage training data. Dynamic mixing is still used during training. We denote the first-stage and second-stage training data as S1 and S2 respectively, where S2 includes simulations for all signal distortions.
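A simplified numpy sketch of the three first-stage distortions is shown below, assuming 48 kHz full-band input; the filter design details from [8] are omitted, and the parameter ranges mirror those quoted above.

```python
import numpy as np
from scipy.signal import resample_poly

def simulate_coloration(x, sr=48000, target_sr=8000):
    """Band-limit by down/up-sampling (low-bandwidth variant of coloration)."""
    y = resample_poly(x, target_sr, sr)
    return resample_poly(y, sr, target_sr)

def simulate_clipping(x, eta=0.5):
    """Clipping variant of coloration: restrict samples to [-eta, +eta]."""
    return np.clip(x, -eta, eta)

def simulate_discontinuity(x, sr=48000, win_ms=20, p=0.1, rng=np.random):
    """Zero out 20 ms windows, each with probability p."""
    y = x.copy()
    win = int(sr * win_ms / 1000)
    for start in range(0, len(y) - win, win):
        if rng.random() < p:
            y[start:start + win] = 0.0
    return y

def simulate_loudness(x, rng=np.random):
    """Scale the signal amplitude by a factor drawn from [0.1, 0.5]."""
    return x * rng.uniform(0.1, 0.5)
```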
### Experiment Setup
The window length and frame shift are 20ms and 10ms respectively, resulting in 10ms algorithmic latency and 10ms buffering latency. The STFT length is 1024. The number of channels for DCCRN / GateDCCRN is {16, 32, 64, 128, 256, 256}, and the convolution kernel size and stride are set to (5,2) and (2,1) respectively. There are two LSTM layers with 256 nodes followed by a 2048 \(\times\) 256 fully connected layer between the encoder and decoder. The number of channels for the sub-band module and the full-band module of S-DCCRN / S-DCCSN is {64, 64, 64, 64, 128, 128}, and the convolution kernel size and stride are set to (5,2) and (2,1) respectively. For the CED and CFD of S-DCCRN / S-DCCSN, the number of channels is 32 and the depth of the DenseBlock is 5. In S-DCCSN, the hidden channels of the STCM adopted by the sub-band module and full-band module are 64. In S-DCCRN, there are instead two LSTM layers with 256 nodes, with a fully connected layer of 512 nodes adopted after the last LSTM layer. Models are optimized by Adam with an initial learning rate of 0.001, halved when the validation loss does not decrease for two consecutive epochs.
### Results
We conduct ablation experiments to validate each proposed module. In Table 1, we train DCCRN and GateDCCRN with and without (w/o) the discriminator using the first-stage training data (S1). Then we freeze the parameters of the pre-trained GateDCCRN and cascade this model with S-DCCRN and S-DCCSN respectively; the cascaded models are trained using the second-stage training data (S2). In addition, we also train a GateDCCRN and an S-DCCSN with the second-stage training data (S2) only. DNSMOS results in Table 1 show that gated convolution and adversarial training with a discriminator can effectively improve the signal improvement ability of DCCRN. Moreover, S-DCCSN surpasses S-DCCRN in all evaluation metrics with a small increase in model size, which demonstrates the better temporal modeling ability of STCM. Finally, the single-stage models (GateDCCRN and S-DCCSN) trained using S2, which reflects all distortions, are inferior to the multi-stage model (GateDCCRN+S-DCCSN). Table 2 shows the subjective results of our submitted two-stage model (GateDCCRN+S-DCCSN) on the official test set. We can see that speech quality is clearly improved for all types of distortions except discontinuity. The bad cases may be attributed to our imperfect data simulation of discontinuity, which warrants further investigation. The number of parameters of the submitted two-stage system is 10M. The RTF is 1.478, tested on an Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.4GHz using a single thread.
|
2306.09987 | Transforming Observations of Ocean Temperature with a Deep Convolutional
Residual Regressive Neural Network | Sea surface temperature (SST) is an essential climate variable that can be
measured via ground truth, remote sensing, or hybrid model methodologies. Here,
we celebrate SST surveillance progress via the application of a few relevant
technological advances from the late 20th and early 21st century. We further
develop our existing water cycle observation framework, Flux to Flow (F2F), to
fuse AMSR-E and MODIS into a higher resolution product with the goal of
capturing gradients and filling cloud gaps that are otherwise unavailable. Our
neural network architecture is constrained to a deep convolutional residual
regressive neural network. We utilize three snapshots of twelve monthly SST
measurements in 2010 as measured by the passive microwave radiometer AMSR-E,
the visible and infrared monitoring MODIS instrument, and the in situ Argo
dataset ISAS. The performance of the platform and success of this approach is
evaluated using the root mean squared error (RMSE) metric. We determine that
the 1:1 configuration of input and output data and a large observation region
is too challenging for the single compute node and dcrrnn structure as is. When
constrained to a single 100 x 100 pixel region and a small training dataset,
the algorithm improves from the baseline experiment covering a much larger
geography. For next discrete steps, we envision the consideration of a large
input range with a very small output range. Furthermore, we see the need to
integrate land and sea variables before performing computer vision tasks like
those within. Finally, we see parallelization as necessary to overcome the
compute obstacles we encountered. | Albert Larson, Ali Shafqat Akanda | 2023-06-16T17:35:11Z | http://arxiv.org/abs/2306.09987v1 | Transforming Observations of Ocean Temperature with a Deep Convolutional Residual Regressive Neural Network
###### Abstract
Sea surface temperature (SST) is an essential climate variable that can be measured via ground truth, remote sensing, or hybrid "model" methodologies. Here, we celebrate SST surveillance progress via the application of a few relevant technological advances from the late 20th and early 21st century. We further develop our existing water cycle observation framework, Flux to Flow (F2F), to fuse AMSR-E and MODIS into a higher resolution product with the goal of capturing gradients and filling cloud gaps that are otherwise unavailable. Our neural network architecture is constrained to a deep convolutional residual regressive neural network. We utilize three snapshots of twelve monthly SST measurements in 2010 as measured by the passive microwave radiometer AMSR-E, the visible and infrared monitoring MODIS instrument, and the in situ Argo dataset ISAS. The performance of the platform and success of this approach is evaluated using the root mean squared error (RMSE) metric. We determine that the 1:1 configuration of input and output data and a large observation region is too challenging for the single compute node and dcrrnn structure as is. When constrained to a single 100 x 100 pixel region and a small training dataset, the algorithm improves from the baseline experiment covering a much larger geography. For next discrete steps, we envision the consideration of a large input range with a very small output range. Furthermore, we see the need to integrate land and sea variables before performing computer vision tasks like those within. Finally, we see parallelization as necessary to overcome the compute obstacles we encountered.
## 1 Introduction
Water is both an essential and abundant resource on earth; its availability and quality are critical for sustaining life and ecosystems. Though it is abundant, the majority of earth's water, about 97%, is found in the ocean, while the remaining 3% is freshwater found in glaciers, lakes, rivers, and underground aquifers on land. Water is not only crucial for sustaining life, but also plays a vital role in shaping the planet's climate and weather patterns. One significant but understudied climate variable that hydrologists must consider is sea surface temperature. SST has a profound impact on the water cycle, specifically evaporation [1]. Over-ocean anomalies like atmospheric rivers can lead to both anomalous and enormous quantities of meteorological water falling on land [2]. The same is true in reverse, as failures of the rains in India are influenced not only by orographic factors from the towering Himalayan mountain range but also by determinants of the Bay of Bengal and the eponymous nearby ocean [3]. Understanding the relationship between sea surface temperature and the water cycle is essential for predicting and adapting to extreme events, managing water resources, and sustaining the global ecosystem [4].
Evidence shows that human beings through industrialization have modified and are continuing to significantly modify the climate. However, the modern cause for concern is the rate at which our climate has changed rather than the binary question of whether it has changed at all. Measurements of carbon dioxide (CO\({}_{2}\)) tell the story: detected values of atmospheric CO\({}_{2}\) have increased by 50% over the value at the advent of industrialization [5, 6]. Invariant to latitude and longitude, the impacts are felt everywhere. Earth's response to our stimuli manifests in the form of heat waves, stronger storms, longer periods of drought, greater impulses of meteorological water accumulation over land, and a general increase in environmental variability. While in wealthy communities, modern civil infrastructure serves as a boundary layer to environment-related catastrophes, the poor and powerless are unequally yoked [7]. One must consider also
the importance of the ecology itself. How is the global health of all creatures of the atmosphere, land and oceans [8, 9, 10, 11]? What does the next five, ten, or one hundred years look like at the current rate [12, 13]? What changes can be made to mitigate or adapt to implications of past and present poor actions [14]? What are the global environmental quality standards [15]? How can changes be promoted in unequal nation states [16, 17, 18]?
SST is worthy of study for a number of reasons: its status as a key observational characteristic of water in the environment; the importance of SST in numerical weather and climate forecasting; SST's detectability via satellite-mounted RS instrumentation; and the availability of matched continuous ground truth temporospatial measurements that can be studied for intercomparison of dataset bias, variances, and uncertainties. We compare the raw satellite observations to the lower resolution but more precise in situ measurements of sea surface temperature (ISAS). We apply a treatment to the lower resolution but generally more available satellite instrument (AMSR-E), setting its target output to be the higher resolution MODIS product. Our hypothesis is that fusing the AMSR-E data to MODIS data will create a product that is closer in performance to MODIS than its AMSR-E input.
SST is a fascinating variable because not only is it a predictor of future weather anomalies, but it represents a largely unexplored stage regarding the capture and storage of CO\({}_{2}\). Marine carbon dioxide removal (mCDR) is a crucial strategy for mitigating climate change by extracting and sequestering carbon dioxide (CO\({}_{2}\)) from the atmosphere. Natural processes, such as the biological pump and carbon storage in marine ecosystems, contribute significantly to CO\({}_{2}\) removal [19, 20]. Engineered approaches, including iron fertilization to stimulate primary production [21], nutrient optimization for enhanced carbon export [22], and ocean alkalinity enhancement to increase CO\({}_{2}\) absorption capacity [23], offer potential avenues for mCDR. However, challenges exist, including environmental risks and unintended ecosystem disruptions [24].
One tool humans have in their arsenal is the ability to artificially model the environment. Modeling is the combination of a priori measurement data with empirically derived functions that simulate and represent the behavior of the environment. Given observed information obtained by sensors or measurement devices, we can use the collective knowledge to make better decisions. The types of observation data we collect about the state of the climate now in a given place, (e.g. tree ring cores, core samples) or via man-made data collection devices (satellites, weather balloons, airplanes, drifters, buoys, gauges), and have better insights to what tomorrow or the same time in five years might bode given current inputs, trajectories, and these empirical functions [25].
The use of neural networks as a tool for modeling has grown in frequency over the last two decades alongside the increase in availability and speed of fast mathematical computing hardware like graphics processing units. Nvidia has just crossed a trillion dollars in market capitalization, in large part due to the AI boom driven by the proliferation of generative models like GPT and Stable Diffusion [26]. Here, we build on an existing neural network architecture called dcrrnn, which stands for a deep convolutional residual regressive neural network and is pronounced "discern" [27, 28]. When applied to problems involving the water cycle, dcrrnn falls under the umbrella of Flux to Flow (F2F). This naming separation is an anticipatory measure adopted during the creation of the first experiment: it is hypothesized that one might prefer to extract one or the other depending on the subject environment. Specifically, one might extract just the dcrrnn structure for use in another vertical. Likewise, developments of F2F with a different but related structure and a focus specifically on water variables are anticipated to be better categorized as Flux to Flow projects. Dcrrnn is narrow, whereas F2F is wide.
The premise and motivation of dcrrnn and F2F is to study measurements of water, be they remotely sensed, gauged measurements, or hybrid datasets. From ingestion, some preprocessing of the data occurs to prepare for training, either via statistical conformation or temporospatial constraint. The data is used to train a neural network. Inferences on unseen data are performed to evaluate the trained algorithm according to a relevant figure of merit for the experiment. Our chosen metric in this set of experiments is the dimensionless root mean squared error (RMSE). This metric is a solid introductory measurement because of its frequent use and acceptance within the scientific community. Our stochastic framework was successful in mimicking the performance of deterministic algorithms used to predict gauged river streamflow from meteorological forcings. In light of this success and our interest in all facets of the water cycle, we elected to next examine the ability of dcrrnn and F2F to mimic the abilities of image enhancement and ocean modeling software such as the Scatterometer Image Reconstruction (SIR) algorithm, the regional ocean modeling system (ROMS) algorithm, and the Massachusetts Institute of Technology's General Circulation Model (MITgcm) [29, 30, 31, 32]. These algorithms take in a variety of different spatial datasets and perform some deterministic process to provide a value-added output. We aimed to evaluate the research questions: "Can dcrrnn, the supervised learning structure, be used as a surrogate to other commonly used image optimization algorithms and ocean modeling software? When applied to SST fields, does dcrrnn improve an input dataset based solely on statistical optimization associated with neural networks and the high resolution training data?"
From the outset, one important facet of dcrrnn and F2F to retain is the integral relationship between the paper and the code used to perform the experiments. A primary motivation behind the work writ large is to disseminate the
programming element so others will want to join. Frequently, frustrations and disagreements arise about scientific discourse because the experiment process is not documented, or there is no repeatable proof. We take, as much as possible, a white box approach. Scripts for these experiments are made available as jupyter notebooks at [https://github.com/albertlarson/f2f_sst](https://github.com/albertlarson/f2f_sst). As a primer, we recommend the genesis dcrrnn paper [27]. Though the likely audience is one with or approaching a graduate degree and a penchant for education, our intention is that the language is such that it might be introduced in an undergraduate or advanced K-12 classroom. Coincidentally, pointing out pitfalls is a valuable exercise to help prevent others from repeating the same mistakes. Here, we capture what occurs when a naive, stochastic system is used on a single compute node to attempt the classic data fusion problem via neural network methods with SST fields as the subject matter. We determine that our approach has definite merit, but requires further investigation with current datasets and other configurations. A prerequisite for advancement of this effort should be a larger amount of compute time, either via a single node or via multiple compute nodes conducted in parallel.
With this article, our contributions to the field are the following: 1, the continued development of F2F as an open source water cycle measurement framework; 2, the further consideration of dcrrnn as a viable neural network architecture; 3, the active constraint of the work to a single compute node and a focus on the integration of the manuscript with the code behind the experiments; 4, the consideration and focus on the importance of sea surface temperature from a hydrologist's point of view; 5, a fresh consideration of neural network based data fusion and data engineering techniques; 6, an illustration of the limitations of underparameterization; 7, a demonstration of how the neural network improves its performance when the requirements of the process are relaxed; and last but not least, 8, a biased focus on global water resources.
## 2 Materials and Methods
### Sea Surface Temperature (SST)
The origin of SST as a continuously monitored variable began when Benjamin Franklin captured measurements of the ocean as he traversed the Atlantic, acquiring data and synthesizing these observations into information about the Gulf Stream [33]. Since then, the field of physical oceanographic research has made great strides in a variety of advancements relevant here such as observational techniques and data analysis methods. In recent decades, satellite remote sensing has emerged as a crucial tool for observing the ocean on a global scale, and for acquiring SST data. Satellites equipped with infrared sensors provide accurate and high-resolution measurements of SST [34; 35]. These satellite-based measurements offer advantages over traditional in situ measurements, as they provide comprehensive coverage of the oceans, including remote and inaccessible regions. In situ devices are point source measurements and have a relatively limited observation window compared to the hundreds of kilometers sun-synchronous satellites capture at a single moment.
In addition to infrared sensors, satellite sensors that detect microwave radiation, starting with the Nimbus-5 in 1972, have also been used to retrieve SST [36]. Microwave-based SST retrieval methods have the benefit of low sensitivity to atmospheric conditions, such as cloud cover and aerosols, compared to infrared measurements. The availability of long-term satellite-based SST datasets has led to significant advancements in climate research. Scientists have utilized these fields to study various climate phenomena, including oceanic oscillations such as the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO) [37; 38]. Large-scale oscillations influence the long-term variability of SST and have implications for regional climate patterns.
Furthermore, satellite-based SST observations have been essential in understanding the impacts of climate change on the oceans. Studies have shown that global warming has led to widespread changes in SST, including increases in average temperatures, alterations in temperature gradients, and shifts in the distribution of warm and cold regions [39]. These changes have significant implications for marine ecosystems, including shifts in species distributions, changes in phenology, and coral reef bleaching events [40; 41]. Satellite-derived SST data also contribute to the prediction and forecasting of weather and climate events. The accurate representation of SST conditions is crucial for weather prediction models, as it affects the development and intensity of atmospheric phenomena such as hurricanes and tropical cyclones [42; 43]. The integration of satellite-based SST data into numerical weather prediction models has improved forecast accuracy, particularly in regions where in situ observations are sparse or nonexistent.
In addition to weather forecasting, satellite-based SST data has practical applications in fisheries management and marine resource monitoring. SST information helps identify optimal fishing grounds by indicating areas with suitable temperature conditions for target species [44; 45]. Furthermore, monitoring changes in SST can provide insights into the health of marine ecosystems and aid in the assessment and management of protected areas and biodiversity hot spots [46; 47; 48].
### Aqua
The Aqua satellite was launched on May 4, 2002. Upon it, two instruments sit: AMSR-E and MODIS. Both, among other things, are designed to study the temperature of the ocean. In its ascending passes, moving from the South Pole toward the North, the satellite always crosses the equator at approximately 1:30 PM local time at nadir (directly below the satellite). In the descending portion of the orbit, the satellite crosses the equator at 1:30 AM local time at nadir [49]. The AMSR-E instrument ceased functioning after ten years of service. MODIS continues to operate, logging over twenty years of active surveillance. AMSR-E has been succeeded by a follow-on instrument, AMSR2. AMSR2 is aboard a Japanese mission called GCOM-W, one of a series of global change observation missions [50]. AMSR2 has a slightly larger antenna than AMSR-E, but is similar in scope to the AMSR-E instrument. The de facto replacement of MODIS is VIIRS, an instrument series carried on the Suomi National Polar-orbiting Partnership (SNPP), NOAA-20, and NOAA-21 satellites [51, 52]. Other global aerospace mission datasets associated with SST are available, like the Chinese Haiyang and Gaofen series [53, 54].
### Amsr-E
AMSR-E was a passive microwave radiometer [55, 56]. The acronym stands for Advanced Microwave Scanning Radiometer for Earth Observing System. There are several products produced on top of the raw radiance data that were collected by this instrument, and the AMSR-E data was processed by different ground stations depending on the parameter of interest. The produced datasets contain latitude, longitude, several physical parameters (e.g., SST, Albedo, soil moisture) as well as other pertinent metadata. As it pertains to sea surface temperature, AMSR-E is available in Level 3 products and as part of Level 4 assimilation system output. As of the writing of this document, Level 2 SST is no longer publicly available. However, we were able to obtain Level 2 fields from the Physical Oceanography Distributed Active Archive Center (PODAAC) through the Jet Propulsion Laboratory before the sunsetting of the data product.
To detail a sample, one single Level 2 netCDF (.nc) file containing AMSR-E data was procured. The record selected is that of March 3rd, 2004, with a UTC time of 01:07:01. The file contains three coordinates (latitude, longitude, and time) and thirteen data variables. Each variable is a single matrix comprised of columns and rows of measurements. The important distinction here is that the data structure is stored to reflect the path of the orbit. See Figure 1. When the sea surface temperature is plotted as it sits in the matrix, it is difficult to discern what is transpiring. There appears to be some curvature of the measurements, but other than that little is known to an untrained eye beyond the title and colormap.
Inclusion of the latitude and longitude coordinates, as well as a global basemap generates a clearer picture as seen in Figure 2. A single Level 2 AMSR-E SST file contains matrices representing one full orbit around the globe. Each file holds partially daytime and partially nighttime observations. Because of diurnal warming, it is desirable to separate the nighttime and daytime passes. Furthermore, many analyses are comprised of an ensemble of satellite observations from different platforms such as this one. A grid makes for more orderly computations at large spatial scales. Certainly, one could elect to grid every observation to the AMSR-E or MODIS native product coordinate system. With our experiments, we choose the path of rectangular gridding. We consider the Level 3 product because of the interest in spatial relationship across large geographic scales and variable time (daily, weekly, seasonally, yearly, generationally).
AMSR-E is available (accessed 7 June 2023) via its producer, Remote Sensing Systems of Santa Rosa, CA [57]. This daily product comes in 25 km resolution and is delineated by daytime and nighttime passes of the satellite. The time series runs from June of 2002 until October of 2011, when the AMSR-E instrument ceased functioning. Figure 3 illustrates the point that even without explicitly defining the coordinate system in the visualization, the matrix of SST values is already placed in proper spatial order. Figure 4 reinforces the fact that little change occurs with the inclusion of latitude and longitude coordinates when plotted on a rectangular grid. Here, we simply average each month's worth of daily daytime and nighttime passes on a pixel-wise basis. We call these day and night in the experiments that follow. We also create a hybrid dataset, where we average the monthly averages of the day and night passes together. Finally, we
Figure 1: L2 AMSR-E SST field, March 3, 2004, no coordinate system
transform all three of these datasets from the native AMSR-E grid system to the slightly different MODIS grid; this function is carried out using the xESMF software [58].
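As an illustration, a minimal xESMF regridding sketch looks like the following; the file names are hypothetical, and we assume the AMSR-E and MODIS fields are held in xarray Datasets exposing `lat`/`lon` coordinates.

```python
import xarray as xr
import xesmf as xe

# Hypothetical file names; each dataset must expose lat/lon coordinates.
amsre = xr.open_dataset("amsre_monthly_sst.nc")   # 25 km equidistant grid
modis = xr.open_dataset("modis_monthly_sst.nc")   # 4 km target grid

# Build a bilinear regridder from the AMSR-E grid to the MODIS grid.
regridder = xe.Regridder(amsre, modis, method="bilinear")

# Apply it to the SST variable; the output lands on the MODIS lat/lon grid.
amsre_on_modis = regridder(amsre["sst"])
```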
### Modis
MODIS, or Moderate Resolution Imaging Spectroradiometer, measures thirty-six different radiance bands in the infrared and visible ranges of the electromagnetic spectrum [59]. Level 3 sea surface skin temperature as obtained from MODIS comes in 4 km and 9 km products, and is derived from a subset of the thirty-six radiance bands. The products are available as daily, eight-day average, and monthly products. They are also delineated by daytime and nighttime passes of Aqua's polar orbit. SST products deriving from MODIS are further specified by the length of the waves within the thermal infrared range used to derive the measurement: longer waves (11-12 microns) and middling waves (3-4 microns). The MODIS documentation states that the 3-4 micron wave SST product is less uncertain, but
Figure 3: L3 AMSR-E file plotted without supplied coordinate system
Figure 2: L2 AMSR-E SST, plotted with available coordinates and world map
only usable at night because of the daytime sun glint impact on 3-4 micron waves. We use the long-wave 11-12 micron infrared measurements to keep the source of both daytime and nighttime passes constant.
The MODIS Aqua Level 3 SST Thermal IR Monthly 4km V2019.0 product [60; 61] comes with latitude and longitude coordinates, SST values, and per-pixel quality measurements denoting when contamination is likely. The grid is equidistant rectangular, a match with the AMSR-E grid but at a finer original resolution. Of the over thirty million 4 km MODIS pixels for an entire day, 90% in the random sample selected here are deemed contaminated and filtered out (Figure 5). This contrasts with the 50% loss of AMSR-E pixels. This great loss in pixels due to quality is attributed to cloud contamination. To compensate, we use the monthly product (Figure 6), where a greater amount of time has transpired, allowing for a higher probability of clean global coverage. A randomly sampled MODIS monthly image yields 50% loss, in line with the AMSR-E daily product and much improved relative to the daily MODIS observation files.
Figure 4: L3 AMSR-E file plotted with available coordinates and world map
Figure 5: L3 daily MODIS file containing only high quality flagged pixels
### Ground Truth Measurements
For a source of ground truth data, we selected the "In Situ Analysis System" (ISAS) dataset obtained from the University of California's Argo repository and produced by a consortium of French institutions [62]. An important constraint for this work was to obtain only the surface level measurement of temperature at the highest frequency available during the years of both AMSR-E and MODIS. These products are provided in a gridded format and are used to observe temperature measurements at many depth levels. In the publication attached to the ISAS dataset [62], the target physical quantity is steric height and ocean heat content; with these as their target output, gridded depth-dependent temperature is stored as a byproduct. The 0.5 degree monthly dataset is presented in a Mercator projection, slightly different from the AMSR-E and MODIS grids. Mercator lines of longitude have a uniform distance between them; the distance between latitudes changes with distance from the equator. Identical to AMSR-E, we re-grid this data to the MODIS grid and coordinate system.
### Treatment
This work is an extension of [27]. The F2F code base provides a step-by-step approach to the fundamentals of the materials and methods applied within. As such, it is a key part of the work and has been made openly accessible at [https://github.com/albertlarson/f2f](https://github.com/albertlarson/f2f) (accessed on 6 June 2023). The scripts follow the logical order of that paper. Concretely, this work builds upon that foundation. The notebooks found at [https://github.com/albertlarson/f2f_sst](https://github.com/albertlarson/f2f_sst) follow the simple process of extracting the data, transforming the data, feeding the data into a transformation system (dcrrnn), and then evaluating the performance of the system.
The treatments we apply to the data are several configurations of one common concept: neural networks. Neural networks are not new, but the growth of graphical processing units (hardware) has enabled them to flourish in software. Neural networks are a type of learned representation. A structure is fed connected input and target pairs. Based on the predictive quality of the initial network structure, an error between the neural network output and the target is computed. This error is in turn fed to an optimization algorithm that iteratively and slightly alters each "neuron" of the initial network structure until it reaches a designated optimal state. Via many small calculations and the simultaneous application of statistical mechanics, neural networks are known to provide qualities like those of a brain, such as capturing spatial eccentricities and temporal changes in sets of related images. Neural networks are applied to a range of tasks, from the more mundane, such as learning a quadratic equation, to the more cutting edge, like extreme event forecasting or cancer detection.
Transfer learning has become commonplace in the field of machine learning [63]. Transfer learning places an emphasis on creating reusable treatment structures for others to build on top of without inadvertently causing the audience to get lost in possibly unimportant details. We employ transfer learning to create a complex configuration with a relatively short learning curve. The neural network is characteristically deep, convolutional, residual, and regressive.
Figure 6: Monthly L3 MODIS image containing only high quality observations
Our construct is inspired by the work of residual networks [64]. However, our problem is one of a regressive nature. Sea surface temperature has a continuous temperature range that it exists within. This is a notable difference to some of the more common introductory neural network examples, such as those associated with the MNIST and CIFAR datasets where the number of possible outputs is very small [65; 66]. Loss functions associated with regressive problems are constrained to just a couple: mean absolute error (MAE) and mean squared error (MSE). The calculation of the loss function must be differentiable. This is due to the optimization component of neural networks. The literature is rich with publications regarding neural network optimizers, as well as the general mechanics of neural nets [67; 68].
Once the neural network architecture and hyperparameters are chosen, training and validation data are loaded into the network. While training the neural network, close observation is made of the reduction in error between training input and output as the neural network begins to optimize, or learn. We also monitor the validation dataset at each training iteration. The learning process stops once the training and validation data have been passed through the network a certain number of times, or epochs. When prototyping or pilot-testing the experiments to be carried out, one should test with both a small and a large number of epochs to see where good performance meets fast computation time.
After training, the optimized neural network structure is intentionally frozen. Before the point of freezing, the neurons of the network can be adjusted for optimization, like a student asking a teacher for advice when studying. The frozen state and inference imposed upon it is like a student being prompted with a pop quiz and no teacher assistance. This test or input data are similar enough to the training that the teacher believes the student will have success in passing the test according to the selected merit (mean squared error, the loss function). After the test, the performance of the model is evaluated and a decision is made regarding next logical steps in the research.
A neural network can become biased to its training inputs. It starts to memorize the training dataset, which does not make for a generally applicable algorithm. Avoidance of bias comes at the cost of variance [69]. Applying dropout is one technique to systematically prevent system bias by simply "turning off" a certain percentage of random neurons at each iteration of the algorithm [70; 71]. Another approach is the application of early stopping. The loss function of a neural network typically looks like a very steep curve down to a flat bottom. Rather than allow the network to persist in the flat bottom for long and become overfit, simple logic can be employed to stop training early when the network shows evidence that it has reached an optimal state. The percentage split of the data between training and testing portions is another relevant training hyperparameter. A larger proportion of the dataset in the training portion could lead to overfitting of the model and a lack of generalized predictability. On the other hand, insufficient training data might lead to an inability to adequately characterize the reality of the data pairs.
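As an illustration of this early-stopping logic, a minimal sketch with hypothetical `train_one_epoch` and `validate` helpers could look like the following.

```python
def train_with_early_stopping(model, train_one_epoch, validate, max_epochs=100, patience=5):
    """Stop training once the validation loss has not improved for `patience` epochs."""
    best_loss, stale = float("inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch(model)
        val_loss = validate(model)
        if val_loss < best_loss:
            best_loss, stale = val_loss, 0   # improvement: reset the counter
        else:
            stale += 1                       # flat-bottom region of the loss curve
            if stale >= patience:
                break                        # stop before the model overfits
    return model
```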
Figure 7: Sample training monthly observation; January 2010 MODIS day observation of the Hawaiian Islands; segmented into 100 x 100 pixel regions.
The image sets subjected to treatment are on the large side computationally. Holding many one-million- or nine-million-pixel images within the memory of a single graphical processing unit becomes intolerable to the device. One could elect to use multiple GPUs or a compute node with a great provision of memory. Here, we constrain the experiment to a single GPU and cut the images up into smaller square pieces. Our patch size is fixed at 100 x 100, though this is a tunable hyperparameter. Figure 7 shows a Pacific Ocean study region, highlighting Hawaii and regions east. While this image is too large to process directly in the neural network, we can solve this problem by creating the eighteen 100 x 100 pixel patches that represent the 300 x 600 pixel region under observation, as sketched below.
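A compact numpy version of this tiling (with the function name ours) is:

```python
import numpy as np

def to_patches(field, patch=100):
    """Split a (H, W) SST field into non-overlapping (patch, patch) squares."""
    h, w = field.shape
    assert h % patch == 0 and w % patch == 0, "field must tile evenly"
    return (field.reshape(h // patch, patch, w // patch, patch)
                 .swapaxes(1, 2)
                 .reshape(-1, patch, patch))

# A 300 x 600 scene yields 3 x 6 = 18 patches of 100 x 100 pixels.
scene = np.random.rand(300, 600)
patches = to_patches(scene)        # shape (18, 100, 100)
```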
Neural networks do not function when nan values are present in any of the images. We applied a broad treatment to the AMSR-E and MODIS images: we compute the mean of the entire image, excluding the nan values, and then replace every nan with that mean. This has the convenient byproduct of introducing into the neural network many training pairs in which the input and output consist simply of the average global SST value as observed by the AMSR-E and MODIS instruments.
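This mean-fill treatment amounts to a couple of lines; a NumPy sketch:

```python
import numpy as np

def fill_nans_with_mean(image):
    """Replace nan pixels (land, cloud, rain) with the nan-excluded image mean."""
    filled = image.copy()
    filled[np.isnan(filled)] = np.nanmean(image)
    return filled

img = np.array([[20.1, np.nan], [21.3, 20.8]])
print(fill_nans_with_mean(img))  # the nan becomes the mean of the valid pixels
```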
## 3 Results
We randomly selected a single calendar year from the available time series where AMSR-E, MODIS, and ISAS overlap. From this year of data, we settled on nine cases consisting of three locations (Atlantic Ocean, Pacific Ocean, and Indian Ocean) and three observation windows (day, night, and a hybrid mean of day and night). We train the neural network on the first ten months of the year, validate with the eleventh month, and test with the twelfth month. However, we did not intend for this system to be deterministic or biased in nature; therefore, we shuffle the training pairs to promote regularization [72]. A training session runs for 100 epochs. Each image in the geographically constrained time series is 300 x 600 pixels in size, divided into eighteen 100 x 100 pixel segments to incrementally feed the neural net.
Results of the nine cases are illustrated in Figure 8. The upper panel is a test matrix. Each of the ten rows in the test matrix represents a tuple of datasets that are compared: all of the data products we use are compared to the ISAS data and the MODIS data as benchmarks. Column-wise, the test matrix is delineated into the nine cases (experiments) that we perform. For example, PD stands for Pacific Day, AN for Atlantic Night, and IH for Indian Hybrid; the hybrid is simply a mean of the day and night images for a given month. Take the PD column: we compute the RMSE between the complete 300 x 600 Pacific scene for each of the comparison tuples. In row 1, column 1, the RMSE value of the (Argo, Argo) comparison for the Pacific Day experiment comes out to zero. This is expected, because there should be no error between two identical datasets. Root mean squared error is not computed at locations where land is present.
The bottom two panels are graphs of the in-the-loop performance of the dcrrnn algorithm as the learning process occurs. The loss function, sometimes referred to as the cost function, for dcrrnn is mean squared error (MSE). This calculation is related to the RMSE metric used in the upper panel; though the computations are similar, the results are not perfectly related. The discrepancy stems from a constraint of the training process: if an input or target image has any non-numerical pixels present, we have to replace them in some way. The nans are largely attributable to locations that are partially or completely land, or to pixels contaminated by clouds or rainfall. We retain a global land mask for use before and after the training process, enabling the removal of the temporarily filled land pixels, which gives way to the clean RMSE test matrix. Our fill mechanism for each of the training and validation squares is to use the local mean of that square as the fill value. As such, the same number of pixels is factored into each 100 x 100 square during neural network training and, in particular, into the calculation of the in-the-loop training loss.
Pred in the test matrix refers to the neural network's prediction on the unseen test data, the single held-out month. The month is fed into the trained network as eighteen 100 x 100 squares, then recombined into one 180,000-pixel array; land locations are added back on top, and no other compute mechanism is employed. The Optim dataset applies an extremely faint bandpass filter on top of the Pred dataset: any pixels in the Pred result that lie outside three standard deviations from the mean of the image are converted to nans and deemed erroneous. In this experiment, the filter has little effect. One would see more drastic effects if the filtering trigger were tightened from three standard deviations to two, one, or fewer. The risk of using a bandpass filter is that much of the interesting, nuanced information can be filtered away. This feature was implemented during the experiment phase in response to the analyst's observation that artefacts were occurring along the coast.
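The Pred-to-Optim filter described above can be sketched as follows, with the three-standard-deviation trigger exposed as a tunable parameter:

```python
import numpy as np

def bandpass(pred, n_sigma=3.0):
    """Mark pixels farther than n_sigma standard deviations from the mean as nan."""
    mean, std = np.nanmean(pred), np.nanstd(pred)
    optim = pred.copy()
    optim[np.abs(pred - mean) > n_sigma * std] = np.nan
    return optim

pred = np.random.normal(20.0, 1.0, size=(300, 600))  # stand-in Pred scene
optim = bandpass(pred)  # with n_sigma=3 this removes very few pixels
```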
Figure 8: December 2010. The top panel shows the RMSE matrix for all datasets (inputs and outputs) for each of the nine cases. The two bottom panels are MSE plots of training and validation losses during neural network training, for the Atlantic, Pacific, and Indian Oceans segmented by monthly Day, Night, and Hybrid cases.
## 4 Discussion
For every case summarized in Figure 8, the RMSE between Optim and MODIS is higher than for the AMSR-E input. In occasional instances, the December test case does produce an Optim output that is closer to Argo than the input AMSR-E; in other cases, though, it produces a product that performs worse with regard to Argo than either AMSR-E or MODIS. See Figures 9 and 10 for samples of how the RMSE translates into actual transformation of the images. In the second row of the first column, one sees all green. This solid green color demarcates a clear relationship between the two compared datasets; in this case, the square is the difference between Argo and itself. Considering all images in row 2 of Figure 9, the second column, where AMSR-E is compared to Argo, appears the most green, denoting closeness. Pred and Optim are both largely green; however, the bandpass filter in Optim is not being utilized. Therefore, we see the coastal artefact around the Hawaiian islands, a result of dealing with nan values due to land. We are hesitant to use a liberal bandpass filter, because there is a real risk of deleting good dynamic data. This fact has led us to an open question on the most appropriate way to handle water cycle data. It is evident that a gridded ocean product in which land has no numerical representation is not acceptable, and solutions like replacing values with local means are workarounds rather than true solutions. The most obvious answer is to create a unified global surface temperature dataset. However, this is less than ideal for the hydrology community. An adequate replacement specification for land surface temperature might be soil moisture or calculated surface and subsurface flows.
Figure 10: Relatively “poor” perceptual change, Indian Day case, axes are pixels
Figure 9: Relatively “good” perceptual change, Pacific Night case, axes are pixels
Figure 10 represents a dcrrnn experiment conducted over the Arabian Sea and Bay of Bengal; it also illustrates a less desirable outcome than that of Figure 9. The telltale signal of underparameterization is evident in columns four and five of Figure 10, most notably in rows 2 and 3. As mentioned earlier, because of compute constraints, we cannot load the entire image into the neural network at one time; we have to break it into squares, and here there are clear horizontal and vertical boundary lines. Furthermore, there is the appearance of much colder estimates of SST, driven by artefacts from the coastal regions. It appears that edges were systematically unresolvable given the configuration of the dcrrnn structure and the quantity of data fed into the system.
We determined that examining the nine cases in one fixed way did not sufficiently account for potential unseen indirect effects or confounding variables [73, 74]. As an additional measure, we dug deeper into the night-only observations of the Atlantic Ocean. We selected a single 100 x 100 pixel patch that lies completely over ocean within the Atlantic region; our selection is shown in Figure 11. Figure 12 illustrates that all pixels in the selection are real numbers (left); the presence of the color red would denote nan pixels. This point is reinforced in the right panel of Figure 12, where the single vertical line of the histogram denotes all real pixels.
We extend the number of epochs from the 100 used in Figure 8 to 500 in this zoomed-in observation. A larger number of training epochs, a smaller dataset size, and the absence of nans are strategic departures from the baseline experiments intended to improve the outcome. Results of this experiment are illustrated in Figure 13.
Figure 11: Atlantic Night, single patch selection overlain on the land mask (left) and SST field (right)
Figure 12: Atlantic Night, single 100 x 100 pixel patch real/nan Boolean (left) and histogram of real/nan (right)
Figure 13: Atlantic Night, AMSR-E (col 1), Pred (col 2), MODIS (col 3), histogram of differences (col 4)
With this configuration, we find that the network reduces the RMSE between AMSR-E and MODIS by roughly 20%, from 53.4 for (AMSR-E, MODIS) down to 44.0 for (Pred, MODIS). However, there is no major perceptual change between the AMSR-E image and the Pred image. It is a promising result that performance improves here relative to the nine larger cases. This indicates that the performance issues we are seeing can be further alleviated by shrinking the target size relative to the input, or by increasing the input size. As the output size shrinks, the problem converges with that of the earlier dcrrnn work [27].
Wherever land and coastal regions are present, the neural network alone struggles. This is due to the nature of the land-sea boundary in all of these datasets. Where land is present, the raw data (as downloaded as .nc files) carry a non-number (nan) designation. To train a neural network of the convolutional flavor, images with zero nans are needed; otherwise the software fails catastrophically. As referenced earlier, steps were taken during training to circumvent the presence of land by temporarily substituting those pixels with the local mean value. Another option is to substitute the nan values with the mean computed over the entire "scene" or day. There is a risk that these substitutions introduce a source of structured noise. This noise might be one factor behind the higher-than-zero RMSE scores in Figure 8, and it is possible that it hinders the training of the neural network itself. This is unfortunate, because coastal regions are the stage for a variety of interesting SST events such as boundary currents. At the same time, hydrologists and global health professionals have a growing stake in the influence of the ocean: sea level rise and saltwater intrusion are two hallmark conditions at the coastal interface [75]. Beyond sea surface temperature, ocean color and sea surface height are variables intimately linked to the coastal environment [76, 77, 78]. We hypothesize that global datasets of these parameters face the same challenges when a gridded structure and a neural network process are prescribed. We believe that some harmonized integration of land and ocean datasets would be a logical next step to evaluate the impact on training and inference with Flux to Flow. A good candidate is surface and subsurface flow as produced by the Global Land Data Assimilation System (GLDAS). GRACE, VIIRS, and SMAP all have land products available as well as oceanic products. If the data are ocean focused, perhaps leaving the land observations in their raw state with no physical scaling would be the best practice.
Our images are single-channel inputs and can be considered grayscale pictures, with one difference: grayscale pictures are usually digitized as pixels with values between 0 and 1, whereas when displaying SST images, which are measurements of physical properties, we use a colormap whose minimum and maximum are based on the known physical limits of the parameter itself. Nevertheless, the representation is closely related to a grayscale image. In grayscale computer vision tasks such as this set of experiments, the use of mean squared error as a loss function has an audience of skeptics [79]: there are examples where the loss function optimized in a neural network drops significantly in value with no improvement in the quality of the image. Alternative loss functions to the standards built into PyTorch are available [80]; without alteration, these functions require the inputs to be either between 0 and 1 (grayscale) or 0 and 255 (color images). Another avenue is the pursuit of physics-based loss functions [81]. Some data engineering is definitely needed in future iterations. As we have experience with the surface and subsurface flow rasters, which are land viewing, we think the combination of SST and GLDAS as predictors of streamflow in coastal regions is a suitable next step. This next step will force us to decrease the number of target outputs, as there is a paucity of point measurements in streamflow monitoring relative to the interpolated ISAS target data used herein.
Only a fraction of the available data was examined in this study. The ISAS Argo dataset comes as a single compressed file; we extracted the surface layer only. There is great value in considering SST at depth; however, that was outside the scope of our investigation. Furthermore, we studied monthly time series images of all three raw datasets. AMSR-E and MODIS each achieve near-complete global coverage within two to three days. These datasets are then transformed in various ways and can lose fidelity through different types of decimation, such as re-gridding from swaths to squares, uncertainty in the formulas used for conversion from base input to high-level physical parameter, or forms of compression.
We took a naive approach to the problem. It is common to initialize a model with a known historical bias and make slight inferences based upon the long-term mean; this tightly constrains the problem to the known past environment. We did not do that. We actively attempted to root out and prevent any sort of deterministic bias, to see what the dcrrnn algorithm does under tough conditioning. This leads to some less-than-desirable results; however, our zoom-in shows that when the complexity of the system is relaxed, the algorithm improves according to the RMSE target. In a future study, it would be helpful to initialize training with a long-term SST bias for an entire year based on, for example, the Operational Sea Surface Temperature and Ice Analysis time series dataset produced by the Group for High Resolution Sea Surface Temperature [82].
## 5 Conclusions
Sea surface temperature (SST) is an essential climate variable. A better understanding of SST equates to a better understanding of global hydrology and of the interactions of water as it moves around the hydrosphere in liquid, solid, and vapor forms. The advent of satellite SST observation has allowed the study of large-scale phenomena otherwise invisible. The beginning of the 21st century marked a new frontier in the measurement of SST via the Aqua mission and the Argo program. Recently, neural networks have changed the way scientists consider modeling the environment. In this study, we continued to develop Flux to Flow, an extract, transform, load, treat, and evaluate framework built around a deep convolutional residual regressive neural network (dcrrnn). We extended its functionality beyond streamflow prediction to transform one Aqua instrument dataset into another: AMSR-E observations towards MODIS observations.
We focused on three large oceanic regions: the Indian, Pacific, and Atlantic. With each of the three locations comprising eighteen 100 x 100 pixel image pairs per month and ten months of training data, the neural network struggles to transform AMSR-E into MODIS. When we relax the amount of data fed into dcrrnn, looking at a single 100 x 100 pixel image pair per month with ten months of training data, the network is statistically better able to transform AMSR-E into MODIS data. Given these results, we believe that a logical next step is to focus on coastal areas where the hydrology and oceanography are closely linked. We would like to investigate the performance of dcrrnn in predicting the streamflow of a river when it does and does not consider behavior in the adjacent ocean. We hypothesize that merging ocean and land datasets will not only ease the challenges associated with handling non-numbers, but also that streamflow prediction will benefit from the unique signatures present in the ocean data alongside the land measurements of the water cycle.
|
2302.04954 | Mixed formulation of physics-informed neural networks for
thermo-mechanically coupled systems and heterogeneous domains | Physics-informed neural networks (PINNs) are a new tool for solving boundary
value problems by defining loss functions of neural networks based on governing
equations, boundary conditions, and initial conditions. Recent investigations
have shown that when designing loss functions for many engineering problems,
using first-order derivatives and combining equations from both strong and weak
forms can lead to much better accuracy, especially when there are heterogeneity
and variable jumps in the domain. This new approach is called the mixed
formulation for PINNs, which takes ideas from the mixed finite element method.
In this method, the PDE is reformulated as a system of equations where the
primary unknowns are the fluxes or gradients of the solution, and the secondary
unknowns are the solution itself. In this work, we propose applying the mixed
formulation to solve multi-physical problems, specifically a stationary
thermo-mechanically coupled system of equations. Additionally, we discuss both
sequential and fully coupled unsupervised training and compare their accuracy
and computational cost. To improve the accuracy of the network, we incorporate
hard boundary constraints to ensure valid predictions. We then investigate how
different optimizers and architectures affect accuracy and efficiency. Finally,
we introduce a simple approach for parametric learning that is similar to
transfer learning. This approach combines data and physics to address the
limitations of PINNs regarding computational cost and improves the network's
ability to predict the response of the system for unseen cases. The outcomes of
this work will be useful for many other engineering applications where deep
learning is employed on multiple coupled systems of equations for fast and
reliable computations. | Ali Harandi, Ahmad Moeineddin, Michael Kaliske, Stefanie Reese, Shahed Rezaei | 2023-02-09T21:56:59Z | http://arxiv.org/abs/2302.04954v2 | Mixed formulation of physics-informed neural networks for thermo-mechanically coupled systems and heterogeneous domains
###### Abstract
Deep learning methods find a solution to a boundary value problem by defining loss functions of neural networks based on governing equations, boundary conditions, and initial conditions. Furthermore, the authors show that for many engineering problems, designing the loss functions based on first-order derivatives results in much better accuracy, especially when there are heterogeneity and variable jumps in the domain [1]. The so-called mixed formulation for PINN has been applied to basic engineering problems such as the balance of linear momentum and diffusion problems. In this work, the proposed mixed formulation is further extended to solve multi-physical problems. In particular, we focus on a stationary thermo-mechanically coupled system of equations that can be utilized in designing the microstructure of advanced materials. First, sequential unsupervised training and, second, fully coupled unsupervised learning are discussed. The results of each approach are compared in terms of accuracy and the corresponding computational cost. Finally, the idea of transfer learning is employed by combining data and physics to address the capability of the network to predict the response of the system for unseen cases. The outcome of this work will be useful for many other engineering applications where DL is employed on multiple coupled systems of equations.
keywords: Physics-informed neural networks, Thermo-mechanically coupled problems, Heterogeneous solids
## 1 Introduction
Deep learning (DL) methods are a branch of machine learning (ML) with remarkable flexibility in approximating arbitrary functions and operators [2; 3]. They perform the latter task by finding the hidden patterns in the training data set (also known as supervised learning). Moreover, DL methods are capable of discovering the solution to a boundary value problem (BVP) by solely considering physical constraints (unsupervised learning). Therefore, DL models find their way into many engineering applications, from material design to structural analysis [4]. One advantage of ML algorithms compared to classical solvers is the huge speedup that one achieves after successful training. The efficiency of these models makes them extremely relevant for multi-scale analysis, where the bottleneck lies in the process of information transformation between the upper scale and the lower scale (see [5; 6; 7] and references therein). An important issue is that the network has to be trained on enough observations covering all the relevant scenarios. Otherwise, one cannot completely rely on the network outcome. Adding well-known physical constraints to the data training process improves the network's prediction capabilities (see [1] and references therein). The latter point needs to be investigated properly for different test cases and physical problems to gain the most advantage from applying the physical laws to the classical data-driven approaches.
Here, we intend to examine the potential of the DL method for the field of computational engineering, where we deal with a complex heterogeneous domain as well as a coupled system of governing equations. In what follows, we shall review some of the major contributions and address the new contributions of the current study.
Training ML algorithms on the available physical data obtained from numerical solvers can speed up the process of predicting the solution. Park et al. [8] utilized a multi-scale kernel neural network to enhance the prediction of strain fields for unseen cases. In traditional ML approaches, where NNs are trained on data alone, the underlying physics of the problem is embedded in the data itself, which is the result of other numerical solvers. Yang et al. [9] employed the DL method to predict complex stress and strain fields in composites. Khorrami et al. [10] employed a convolutional neural network to predict the stress fields for history-dependent material behavior. See also [11], where the authors employed a Fourier neural operator to predict a stress field at high resolution by performing the training on data of low resolution. NN surrogate models are powerful tools for solving complex systems of equations in coupled thermal, chemical, and hydrological engineering problems [12]. Data-driven models are used to estimate the state of charge based on the driving conditions [13] or as digital twins of complex systems like proton exchange membrane fuel cells [14]. See also [15; 16] for a comparison between the performance of different neural operators which are suitable for industrial applications.
Interestingly enough, following the idea of physics-informed neural networks (PINNs) [17], and by employing the physical laws in the final loss functions, one can reasonably rely on the network outcome. This is due to the fact that the network outcomes automatically respect and satisfy important physical constraints. However, a proper and in-depth theoretical understanding of the convergence and generalization properties of the PINNs method is still under investigation. Therefore, the idea behind PINNs is also further extended and explored by several authors to improve the network performance even for unseen scenarios. For some examples see [18; 19; 20]. In case the underlying physics of the problem is completely known and complete, the NN can be trained without any initial data and solely based on the given BVP [1]. The latter point is also referred to as unsupervised learning since no initial solution to the problem is needed beforehand. Hu et al. [21] initiate discussions on how extended PINNs that work based on domain decomposition method show effectiveness in modeling multi-scale and multi-physical problems. The authors introduce a tradeoff for generalization. On the one hand, the decomposition of a complex PDE solution into several simple parts decreases the complexity and on the other hand, decomposition might lead to overfitting and the obtained solution may become less generalizable, see also investigations by Jagtap et al. [19], D. Jagtap and Em Karniadakis [22], Henkes et al. [23] and Wang et al. [24] related to the idea of domain decomposition. Wang et al. [25] investigated how PINNs are biased towards learning functions along the dominant eigen-directions of the neural tangent kernel and proposed architectures that can lead to robust and accurate PINN models. Different loss terms in the PINNs formulation may be treated differently on their importance. McClenny and Braga-Neto [26] proposed to utilize adaptive
weights for each training point, so the neural network learns which regions of the solution are difficult and is forced to focus on them. The basic idea here is to make the weights increase as the corresponding losses increase. Xiang et al. [27] also addressed that the PINNs' performance is affected by the weighting of losses and established Gaussian probabilistic models to define the self-adaptive loss functions through the adaptive weights for each loss term. See also [28; 29]. Another promising extension is the idea of reducing the differential orders for the construction of loss functions. For this purpose, researchers combined different ideas from the variational formulation of partial differential equations and apply them to various problems in engineering applications [1; 23; 30; 31].
One important aspect of engineering applications is to consider the multi-physical characteristics. In other words, in many realistic problems, various coupled and highly nonlinear fields have to be considered simultaneously. As an example in solid mechanics, besides the mechanical deformation, often one needs to take into account the influence of the thermal field [32], chemical concentrations [33], electrical and/or magnetic fields [34], damage field [35; 36], and others.
The idea of employing PINNs for solving the underlying equations within the multi-physical context has been addressed by several authors. In such problems, we deal with different loss terms that represent different physics and may differ completely from each other in the magnitude of their numerical values. A balance between the different loss terms is therefore often necessary, and classical PINN models need to be further improved. Amini et al. [37] investigated the application of PINNs to find the solution to problems involving thermo-hydro-mechanical processes. The authors proposed to use a dimensionless version of the governing equations as well as a sequential training strategy to obtain better accuracy in the multiobjective optimization problem. Raj et al. [38] studied the thermo-mechanical problem using PINN for functionally graded material and showed that while the network approximates the displacements well, it yields an order of magnitude higher error when approximating the stress fields. Laubscher [39] presented single and segregated PINNs to predict the momentum, species, and temperature distributions of a dry air humidification problem. It is reported that the segregated network has lower losses when compared to the single network. Mattey and Ghosh [40] also reported that the PINN's accuracy suffers for strongly non-linear and higher-order PDEs such as the Allen-Cahn and Cahn-Hilliard equations. The authors proposed to solve the PDE sequentially over successive time segments using a single neural network. See also the studies by Chen et al. [41], Nguyen et al. [42] and Bischof and Kraus [28]. In modeling crack propagation in solids via a smeared approach, one usually assigns a field variable to damage. In this case, equations for the evolution of the damage field are usually coupled to those from the mechanical equilibrium [36]. Goswami et al. [35] proposed a physics-informed variational formulation of DeepONet for brittle fracture analysis; see also investigations by [43; 44].
The computational efficiency and cost of the training can be improved by utilizing a transfer learning algorithm [45]. The key idea of transfer learning is to capture the common features and behavior of the source domain and transfer it to a new target domain. To do so, one can transfer the pre-trained hyperparameters of the NN and initialize them as hyperparameters of the NN for the new domain. The latter decreases the computational cost of training [1; 46]. Tang et al. [47] utilized transfer learning-PINNs to solve vortex-induced vibration problems. Goswami et al. [48] employed the idea of transfer learning to partially retrain the network's hyperparameters to identify the crack patterns in a domain. Pellegrin et al. [49] exploited the transfer learning idea to solve a system of equations by having various configurations (initial conditions) and showed
that utilizing the trained multi-head network leads to a significant speed-up for branched flow simulations without losing accuracy. Desai et al. [50] utilized transfer learning to solve linear systems of PDEs such as Poisson and Schrodinger equations by PINNs. Gao et al. [51] proposed a singular-valued-based transfer learning algorithm for PINNs and showed its performance for Allen Cahn equations.
According to the reviewed articles, it becomes clear that obtaining fast and reliable results with the PINNs method for engineering applications has to be done with care. The naive version of PINN may not converge to the correct solution, and even when it does, the training process may take significantly more time compared to other numerical solvers based on the finite element method. This work discusses ideas on how to overcome the previously mentioned issues. Here, we focus on the first-order PINNs formulation for a stationary thermo-mechanically coupled system in heterogeneous solids and compare the performance of coupled and sequential training (see also Fig. 1). In Section 2, we provide the derivation of the governing equations and their variational form. In Section 3, the architecture of the new mixed formulation of PINNs is described. The results and conclusions are then provided in Sections 4 and 5, respectively.
## 2 Formulation of the problem
### Stationary thermo-elasticity in heterogeneous solids
The formulation behind the thermo-mechanical problem is first summarized in what follows. In this problem, we are dealing with the displacement field denoted by the vector \(\mathbf{u}\) and the temperature field denoted by the scalar parameter \(T\). Through different coupling terms,
Figure 1: Sequential training and coupled training algorithms for solving a multi-physical boundary value problem as well as the mixed PINN architecture. \(\mathcal{L}_{\mathrm{Phy}}\) stands for the relevant physics law. The variable \(\mathbf{x}\) denotes the location of collocation points. The variable \(\mathbf{u}_{i}\) shows the \(i\)-th primary variable (solution) and \(\nabla\mathbf{u}_{i}^{o}\) is their spatial gradient. Separated neural networks are then utilized to predict primary variables and their spatial gradients.
these two fields influence each other. Starting with kinematics, the total strain \(\mathbf{\varepsilon}\) is additively decomposed into the elastic part and the thermal part as
\[\mathbf{\varepsilon}\,=\,\mathbf{\varepsilon}_{e}\,+\,\mathbf{\varepsilon}_{t}=\,\text{sym} \left(\text{grad}(\mathbf{u})\right)\,=\,\frac{1}{2}\left(\nabla\mathbf{u}\,+\,\nabla \mathbf{u}^{T}\right). \tag{1}\]
In Eq. (1), \(\mathbf{\varepsilon}_{e}\) represents the elastic strain tensor, and \(\mathbf{\varepsilon}_{t}\) represents the thermal strain caused by the temperature field. Considering isotropic material behavior, the thermal strain tensor follows a linear expansion law and reads
\[\mathbf{\varepsilon}_{t}\,=\,\alpha(x,y)\left(T-T_{0}\right)\,\mathbf{I}. \tag{2}\]
Here, \(\alpha\) is the thermal expansion coefficient and it can vary spatially as we are dealing with a heterogeneous microstructure. Moreover, \(T_{0}\) is the initial temperature and \(\mathbf{I}\) is the second-order identity tensor. Utilizing the fourth-order elasticity tensor \(\mathbb{C}\), one can write the stress tensor as
\[\mathbf{\sigma}\,=\mathbb{C}(x,y)\,\left(\mathbf{\varepsilon}-\mathbf{\varepsilon}_{t} \right). \tag{3}\]
Note, that at this point, we assume the elastic properties to be temperature-independent. Considering temperature-dependent material properties shall be investigated in future work and the investigations in the current work can easily be extended to such problems. Writing Eq. (3) in Voigt notation reads
\[\hat{\mathbf{\sigma}}\,=\hat{C}(x,y)\,\,\left(\hat{\mathbf{\varepsilon}}\,-\,\hat{\bm {\varepsilon}}_{t}\right). \tag{4}\]
In Eq. (4), \(\hat{\bullet}\) denotes the tensor \((\bullet)\) in Voigt notation. For the two-dimensional plane strain setup, the position-dependent elasticity tensor \(\hat{C}\) is written as
\[\hat{C}(x,y)=\frac{E(x,y)}{(1-2\nu(x,y))(1+\nu(x,y))}\left[\begin{array}{ccc}1-\nu(x,y)&\nu(x,y)&0\\ \nu(x,y)&1-\nu(x,y)&0\\ 0&0&\frac{1-2\nu(x,y)}{2}\end{array}\right]. \tag{5}\]
Here, Young's modulus \(E\) and Poisson's ratio \(\nu\) represent the elastic constants of the material. The isotropic assumption holds for every point of material. It is worth mentioning that having a heterogeneous domain causes the dependency of material parameters on the coordinates of collocation points. For the mechanical field, the balance of linear momentum by having no body force vector, and the Dirichlet and the Neumann boundary conditions read
\[\text{div}(\mathbf{\sigma})=\mathbf{0}\quad\text{in}\ \Omega, \tag{6}\]
\[\mathbf{u}=\bar{\mathbf{u}}\quad\text{on}\ \Gamma_{D}, \tag{7}\]
\[\mathbf{\sigma}\cdot\mathbf{n}=\mathbf{t}=\bar{\mathbf{t}}\quad\text{on}\ \Gamma_{N}. \tag{8}\]
Next, we write the equation of energy balance in the absence of a heat source, and for a steady-state problem ([32])
\[\text{div}(\mathbf{q})=0\quad\text{in}\ \Omega, \tag{9}\]
\[T=\bar{T}\quad\text{on}\ \bar{\Gamma}_{D}, \tag{10}\]
\[\mathbf{q}\cdot\mathbf{n}=q_{n}=\bar{q}\quad\text{on}\ \bar{\Gamma}_{N}. \tag{11}\]
In the above equations, \(\Omega\) and \(\Gamma\) denote the material points in the body and on the boundary of the mechanical field, respectively. The temperature boundary condition is satisfied on \(\bar{\Gamma}\). In Eqs. (9) and (11), \(\mathbf{q}\) denotes the heat flux and is defined based on Fourier's law as
\[\mathbf{q}=-k(x,y)\,\nabla T. \tag{12}\]
Here, \(k(x,y)\) shows the position-dependent heat conductivity coefficient.
To obtain the Galerkin weak form (WF) of Eq. (6), it is multiplied by a test function \(\delta\mathbf{u}\) that satisfies the Dirichlet boundary conditions of the mechanical field. After integration by parts and employing the Neumann boundary conditions, one finally gets
\[\int_{\Omega}\delta\mathbf{\hat{\varepsilon}}_{e}^{T}\,\hat{C}(x,y)\, \mathbf{\hat{\varepsilon}}_{e}\,\,\,dV\,-\,\int_{\Gamma_{N}}\delta\mathbf{u}^{T}\, \bar{\mathbf{t}}\,\,dA=0. \tag{13}\]
Considering the balance between the mechanical internal energy \(E_{\rm int}^{M}\) and the mechanical external energy \(E_{\rm ext}^{M}\), we write the weak form in Eq. (13) as a minimization of the total energy, see [30], as
\[\text{Minimize:}\,\,\underbrace{\int_{\Omega}\frac{1}{2}\mathbf{\hat {\varepsilon}}_{e}^{T}\,\hat{C}(x,y)\,\mathbf{\hat{\varepsilon}}_{e}\,\,\,dV}_{E _{\rm int}^{M}}-\underbrace{\int_{\Gamma_{N}}\mathbf{u}^{T}\,\bar{\mathbf{t}}\,\,dA}_ {E_{\rm ext}^{M}}, \tag{14}\]
which is subject to the BCs in Eqs. (7) and (8). By multiplying Eq. (9) with a test function \(\delta T\), following a similar procedure as for the mechanical field, and applying the boundary conditions in Eqs. (10) and (11), the Galerkin weak form for the thermal field leads to
\[\int_{\Omega}\,k(x,y)\,\nabla^{T}T\,\delta(\nabla T)\,\,dV\,+\, \int_{\bar{\Gamma}_{N}}\bar{q}\,\,\delta T\,\,dA\,=\,0. \tag{16}\]
One can also reformulate the weak form in Eq. (16) as a variation of the energy. The latter introduces the balance of the thermal internal and external energies, \(E_{\rm int}^{T}\) and \(E_{\rm ext}^{T}\), respectively, which leads to
\[\text{Minimize:}\,\,\underbrace{\int_{\Omega}\frac{1}{2}\,k(x,y)\, \nabla^{T}T\,\nabla T\,\,\,dV}_{E_{\rm int}^{T}}+\underbrace{\int_{\bar{ \Gamma}_{N}}\bar{q}\,T\,\,dA}_{E_{\rm ext}^{T}}. \tag{17}\]
The above equation is minimized subject to the BCs in Eqs. (10) and (11).
It is important to note that in Eq. (14), the minimization is performed for both the displacement field \(\mathbf{u}\) and temperature field \(T\) in the coupled training process. However, by using sequential training and fixing one field variable, the referring loss function is minimized with respect to the displacement field \(\mathbf{u}\). For Eq. (16), in the absence of the other fields in the formulation, the minimization is done with respect to the temperature field \(T\) in both training procedures.
Similarly to the previous works [1, 30], expressions in Eqs. (13) and (16) are added to the neural network's loss function for the primary variables \(\mathbf{u}\) and \(T\), respectively.
Two different sets of geometries according to Fig. 2 are considered. The first geometry is selected to represent one of the most common cases in engineering problems, the fiber matrix
setup. The second geometry stands for an advanced engineering alloy microstructure containing rectangular inclusions. Properties of phase 1 (matrix) are denoted by the sub-index "mat", while the material properties of phase 2 (inclusion) are denoted by the sub-index "inc". Next, we define the ratio \(E_{\text{mat}}/E_{\text{inc}}=k_{\text{mat}}/k_{\text{inc}}=R\). The value of \(R\) is \(0.3\) and \(2\) for the left and right geometries, respectively. The stationary thermal elasticity problem is solved for these two setups. Finally, the boundary conditions are summarized in Fig. 3. For the mechanical field, the left and right edges are restrained from moving in the \(x\)-direction and the upper and lower edges are fixed along the \(y\)-direction. For the thermal field, constant temperatures are applied on the left and right edges, while on the upper and lower edges there is no heat flux normal to the surface.
Figure 3: Boundary conditions for the thermo-mechanically coupled problem. Left: mechanical field boundary conditions. Right: Thermal field boundary conditions.
Figure 2: Selected geometries for studies on the thermal elasticity problem. The color variation shows the different values of Young’s modulus and thermal conductivity.
## 3 PINNs' architecture
### Thermo-elasticity problem
Based on the mixed formulation proposed by Rezaei et al. [1], the input for the network is the location of the collocation points (i.e. the coordinates \(x\) and \(y\) in a 2D setting). The outputs are then divided into the field variables from the mechanical and thermal sub-problems. For the mechanical field, the outputs are the components of the displacement vector as well as the stress tensor (i.e. \(u_{x}\), \(u_{y}\), \(\sigma_{xx}\), \(\sigma_{xy}\), and \(\sigma_{yy}\)). For the thermal field, the output layer includes the scalar temperature value as well as the components of the heat flux (i.e. \(T\), \(q_{x}\), and \(q_{y}\)). It is important to mention that we use a separate fully connected feed-forward neural network (FFNN) for each individual output variable (see also Fig. 4 and explanations in [1]). The structure of each neural network takes the standard form: it can be split into a single input layer, several possible hidden layers, and the output layer. Each layer is connected to the next layer for transferring information [52]. Within a single layer, there is no connection among its neurons. Therefore, we represent the information passed from layer \(l-1\) to layer \(l\) via the vector \(\mathbf{z}^{l}\). Every component of the vector \(\mathbf{z}^{l}\) is computed by
\[z_{m}^{l}=a(\sum_{n=1}^{N_{l}}w_{mn}^{l}z_{n}^{l-1}+b_{m}^{l}), \quad l=1,\ldots,L. \tag{18}\]
In Eq. (18), \(z_{n}^{l-1}\) is the \(n\)-th neuron within the \((l-1)\)-th layer. The component \(w_{mn}\) shows the connection weight between the \(n\)-th neuron of layer \(l-1\) and the \(m\)-th neuron of layer \(l\). Every individual neuron in the \(l\)-th hidden layer owns a bias variable \(b_{m}^{l}\). The number \(N_{l}\) corresponds to the number of neurons in the \(l\)-th hidden layer, and the total number of hidden layers is \(L\). The letter \(a\) stands for the activation function in each neuron. The activation function \(a(\cdot)\) is usually non-linear. In this work, the hyperbolic tangent function is utilized, which is defined as
\[\tanh(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}. \tag{19}\]
The proper choice of the activation function is problem dependent and shall be obtained based on hyperparameter studies on the final performance of the network. See also discussions in [1], [53] and [54].
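For illustration, the layer update of Eq. (18) with the tanh activation of Eq. (19) can be sketched as follows (NumPy, with random weights standing in for the trainable parameters):

```python
import numpy as np

def layer_forward(z_prev, W, b):
    """One hidden layer: z^l = tanh(W^l z^(l-1) + b^l), cf. Eqs. (18)-(19)."""
    return np.tanh(W @ z_prev + b)

z = np.array([0.3, 0.7])            # input layer, e.g. the (x, y) coordinates
for _ in range(5):                  # L = 5 hidden layers of N_l = 40 neurons
    W = np.random.randn(40, z.size) * 0.1
    b = np.zeros(40)
    z = layer_forward(z, W, b)
print(z.shape)                      # (40,)
```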
The input layer of the PINNs for the described thermo-mechanical problem consists of the \(x\) and \(y\) coordinates of the position vector. For the output layer, we have \(u_{x}\), \(u_{y}\), \(\sigma_{xx}\), \(\sigma_{xy}\), and \(\sigma_{yy}\) in the mechanical field and \(T\), \(q_{x}\), and \(q_{y}\) in the thermal field. The current work focuses on a 2-D setting; the extension to a 3-D setting is trivial. The rest of the network architecture will be discussed in what follows. The trainable set of parameters of the network is represented by \(\mathbf{\theta}=\{\mathbf{W},\mathbf{b}\}\), the weights and biases of the neural network, respectively. Considering each neural network structure as \(\mathcal{N}\), the final outcome of the network for the mechanical and thermal fields reads
\[u_{x}=\mathcal{N}_{u_{x}}(\mathbf{X};\mathbf{\theta}),\ u_{y}=\mathcal{N}_{u_{y}}(\bm {X};\mathbf{\theta}),\ \sigma_{xx}^{o}=\mathcal{N}_{\sigma_{xx}}(\mathbf{X};\mathbf{\theta}),\ \sigma_{xy}^{o}=\mathcal{N}_{\sigma_{xy}}(\mathbf{X};\mathbf{\theta}),\ \sigma_{yy}^{o}=\mathcal{N}_{\sigma_{yy}}(\mathbf{X};\mathbf{\theta}), \tag{20}\]
\[T=\mathcal{N}_{T}(\mathbf{X};\mathbf{\theta}),\ q_{x}^{o}=\mathcal{N}_{q_{x}}(\mathbf{X};\mathbf{\theta}),\ q_{y}^{o}=\mathcal{N}_{q_{y}}(\mathbf{X};\mathbf{\theta}). \tag{21}\]
The neural network outputs are functions of the trainable parameters, and the training is done by minimizing different physical loss functions. Next, we build the BVP via the defined input and output layers. To do so, the partial differential equation and its corresponding initial and/or boundary conditions will be defined in terms of loss functions for the neural networks. Therefore, we need to obtain the derivatives of the output layer with respect to the input layer (for instance, the derivative of stress with respect to the \(x\)- or \(y\)-direction). Current deep learning packages, i.e., PyTorch [55] and TensorFlow [56], are capable of computing the derivatives based on automatic differentiation algorithms [57]. The algorithms developed in the current work are implemented using the SciANN package [58], but the methodology can easily be transferred to other platforms.
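As an illustration of this automatic differentiation step (a PyTorch sketch, not the SciANN implementation used here), the spatial gradient of a network output with respect to the input coordinates can be obtained as:

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 40), torch.nn.Tanh(),
                          torch.nn.Linear(40, 1))  # stand-in for N_T(x, y)

X = torch.rand(128, 2, requires_grad=True)         # collocation points (x, y)
T = net(X)

# dT/dx and dT/dy at every collocation point, as needed e.g. for the heat flux
grads = torch.autograd.grad(T, X, grad_outputs=torch.ones_like(T),
                            create_graph=True)[0]
dT_dx, dT_dy = grads[:, 0], grads[:, 1]
```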
Denoting the summation of mechanical-related loss terms by \(\mathcal{L}_{M}\), it is defined based on a set of equations, Eqs. (2) - (8), as
\[\mathcal{L}_{M}=\underbrace{\mathcal{L}_{M}^{EF}+\mathcal{L}_{M}^{DBC}}_{\text {based on }\boldsymbol{u}}+\mathcal{L}_{M}^{cnc}+\underbrace{\mathcal{L}_{M}^{SF}+ \mathcal{L}_{M}^{NBC}}_{\text{based on }\boldsymbol{\sigma}^{o}}. \tag{22}\]
In Eq. (22), the energy form of the problem (\(\mathcal{L}_{M}^{EF}\)) and the prescribed Dirichlet boundary conditions losses (\(\mathcal{L}_{M}^{DBC}\)) are minimized by means of the displacement vector \(\boldsymbol{u}\) output. For Neumann boundary conditions (\(\mathcal{L}_{M}^{NBC}\)) and the strong form (\(\mathcal{L}_{M}^{SF}\)) of the problem requiring the first-order displacement derivatives, relevant loss terms are applied to the \(\boldsymbol{\sigma}^{o}\),
\[\mathcal{L}_{M}^{EF}=\text{MAE}_{M}^{EF}\left(\int_{\Omega}\frac{1}{2}\hat{\boldsymbol{\varepsilon}}_{e}^{T}\,\hat{C}(x,y)\,\hat{\boldsymbol{\varepsilon}}_{e}\,dV-\int_{\Gamma_{N}}\boldsymbol{u}^{T}\,\bar{\boldsymbol{t}}\,dA\right), \tag{23}\]
\[\mathcal{L}_{M}^{DBC}=\text{MSE}_{M}^{DBC}\left(\boldsymbol{u}-\overline{\boldsymbol{u}}\right), \tag{24}\]
\[\mathcal{L}_{M}^{cnc}=\text{MSE}_{M}^{cnc}\left(\boldsymbol{\sigma}^{o}-\boldsymbol{\sigma}\right), \tag{25}\]
\[\mathcal{L}_{M}^{SF}=\text{MSE}_{M}^{SF}\left(\text{div}(\boldsymbol{\sigma}^{o})\right), \tag{26}\]
\[\mathcal{L}_{M}^{NBC}=\text{MSE}_{M}^{NBC}\left(\boldsymbol{\sigma}^{o}\cdot\boldsymbol{n}-\overline{\boldsymbol{t}}\right). \tag{27}\]
The loss function related to the thermal field is also computed based on Eqs. (9)- (17).
Analogously to the mechanical losses, the thermal loss \(\mathcal{L}_{T}\) consists of \(\mathcal{L}_{T}^{EF}\) (energy form of the thermal diffusion problem), \(\mathcal{L}_{T}^{DBC}\) (Dirichlet boundary conditions), \(\mathcal{L}_{T}^{NBC}\) (Neumann boundary conditions), \(\mathcal{L}_{T}^{SF}\) (strong form of diffusion problem), and \(\mathcal{L}_{T}^{cnc}\) (connection term) and it reads
\[\mathcal{L}_{T}=\underbrace{\mathcal{L}_{T}^{EF}+\mathcal{L}_{T}^{DBC}}_{\text{based on }T}+\mathcal{L}_{T}^{cnc}+\underbrace{\mathcal{L}_{T}^{SF}+\mathcal{L}_{T}^{NBC}}_{\text{based on }\boldsymbol{q}^{o}}, \tag{28}\]
\[\mathcal{L}_{T}^{EF}=\text{MAE}_{T}^{EF}\left(\int_{\Omega}\frac{1}{2}\,\boldsymbol{q}^{T}\cdot\nabla T\,dV+\int_{\bar{\Gamma}_{N}}(\boldsymbol{q}\cdot\boldsymbol{n})\,T\,dA\right), \tag{29}\]
\[\mathcal{L}_{T}^{DBC}=\text{MSE}_{T}^{DBC}\left(T-\overline{T}\right), \tag{30}\]
\[\mathcal{L}_{T}^{cnc}=\text{MSE}_{T}^{cnc}\left(\boldsymbol{q}^{o}-\boldsymbol{q}\right), \tag{31}\]
\[\mathcal{L}_{T}^{SF}=\text{MSE}_{T}^{SF}\left(\text{div}(\boldsymbol{q}^{o})\right), \tag{32}\]
\[\mathcal{L}_{T}^{NBC}=\text{MSE}_{T}^{NBC}\left(\boldsymbol{q}^{o}\cdot\boldsymbol{n}-\overline{\boldsymbol{q}}\right). \tag{33}\]
**Remark 1** The energy form Eqs. (23) and (29) are evaluated at the global level and a single loss term is available for all of the collocation points. The other loss terms are minimized
at every single collocation point.
For completeness, the mean squared and mean absolute error are defined as
\[\text{MSE}(\bullet)_{type}=\frac{1}{k_{type}}\sum_{i=1}^{k_{type}}( \bullet)^{2},\quad\text{MAE}(\bullet)_{type}=\frac{1}{k_{type}}\sum_{i=1}^{k_{ type}}|\bullet|. \tag{34}\]
The mathematical optimization problem is written as
\[\mathbf{\theta}^{*}=\arg\min_{\mathbf{\theta}}\mathcal{L}(\mathbf{X};\mathbf{ \theta}), \tag{35}\]
where \(\mathbf{\theta}^{*}\) are the optimal trainable parameters (weights and biases) of the network which is the result of the optimization problem.
We investigate two distinct training algorithms to solve the coupled system of equations. In one approach, we utilize so-called sequential training, and in the other, we use coupled training, as depicted in Fig. 1. The sequential training algorithm minimizes the losses of one individual physics while the other physical parameters are kept frozen. Readers are also encouraged to see Haghighat et al. [59] for more details. Sequential training follows a procedure similar to staggered numerical schemes [32; 36; 60]. For sequential training, we define the following loss terms
\[\mathcal{L}_{Seq1}=\mathcal{L}_{\text{Phy1}}=\mathcal{L}_{T}, \tag{36}\] \[\mathcal{L}_{Seq2}=\mathcal{L}_{\text{Phy2}}=\mathcal{L}_{M}. \tag{37}\]
During the minimization of the loss function in Eq. (36), the output related to the mechanical field (i.e. \(u_{x},u_{y},\sigma_{xx}^{o},\sigma_{xy}^{o},\text{ and }\sigma_{yy}^{o}\)) is kept frozen. While the loss function in Eq. (37) is minimized, the output related to the thermal field (i.e. \(T,q_{x}^{o},\text{ and }q_{y}^{o}\)) is kept frozen. On the other hand, we have coupled training, where we minimize all the loss terms simultaneously.
For the coupled training, the total loss function is the summation of all the loss terms and it is written as
\[\mathcal{L}_{Cpl}=\mathcal{L}_{\text{Phy1}}\,+\mathcal{L}_{\text{ Phy2}}=\mathcal{L}_{M}+\mathcal{L}_{T}. \tag{38}\]
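The two strategies differ only in how the losses enter the optimizer. A schematic sketch (not the SciANN implementation used in this work; `loss_T` and `loss_M` are assumed callables returning the thermal and mechanical losses, and each optimizer holds only the parameters of its own sub-networks, so the other field stays frozen):

```python
import itertools
import torch

def train_coupled(opt, loss_T, loss_M, n_A):
    for _ in range(n_A):                # minimize L_Cpl = L_M + L_T at once
        opt.zero_grad()
        (loss_T() + loss_M()).backward()
        opt.step()

def train_sequential(opt_T, opt_M, loss_T, loss_M, n_A, n_T, n_M):
    done = 0
    for phase in itertools.cycle(["thermal", "mechanical"]):
        steps, opt, loss = ((n_T, opt_T, loss_T) if phase == "thermal"
                            else (n_M, opt_M, loss_M))
        for _ in range(steps):          # the other field's outputs stay frozen
            opt.zero_grad()
            loss().backward()
            opt.step()
        done += steps
        if done >= n_A:
            break
```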
Fig. 4 depicts the mixed PINN formulation for solving the thermoelasticity problem. In addition to the position vector (\(x\), \(y\)), the material parameters \(C(x,y)\) and \(k(x,y)\) are added to the input layer. The latter enables the network to learn the solution for other material properties as well.
In the proposed network structure, the primary variables and their spatial derivatives are directly evaluated. For example, for the mechanical problem, the displacement vector \(\mathbf{u}\) and the stress tensor \(\mathbf{\sigma}^{o}\) are defined as network outputs. Moreover, the strain tensor \(\mathbf{\varepsilon}\) is calculated by means of the predicted values for the deformation (\(u_{x}\) and \(u_{y}\)) and the predicted temperature \(T\) according to Eqs. (1) and (2). By inserting the computed strain tensor \(\mathbf{\varepsilon}\) into Eq. (4), one computes the stress tensor \(\mathbf{\sigma}\) for the coupled system of equations. Consequently, the stress tensor \(\mathbf{\sigma}\) resulting from the evaluated displacement is linked to the network's direct evaluation \(\mathbf{\sigma}^{o}\) through the connection loss term \(\mathcal{L}_{M}^{cnc}\) in Eq. (25). The latter is a soft constraint connecting \(\mathbf{\sigma}^{o}\) to \(\mathbf{\sigma}\).
An analogous strategy is employed for the thermal field, in which the heat flux vector \(\mathbf{q}\) is derived on the basis of Eq. (12), with the thermal field gradients computed by means of automatic differentiation. \(\mathbf{q}\) is connected to \(\mathbf{q}^{o}\) by the loss term \(\mathcal{L}_{T}^{cnc}\) described in Eq. (31).
According to Fig. 4, the optimization in a sequential training procedure starts by minimizing the loss function of the first desired physical field.
Figure 4: Network architecture and loss functions for the multi-physical PINNs, sequential and coupled training flowcharts for the thermoelasticity problem.
In the thermoelasticity problem, training starts by finding the optimum network parameters to satisfy the thermal loss function (\(\mathcal{L}_{T}\)). This choice is made based on the fact that the thermally induced strain tensor affects the results of the mechanical field. In our proposed algorithm, training of the thermal field loss function (the first physics' loss function) is stopped after a certain number of epochs \(n_{T}\). Subsequently, training is done for the next physics' loss function (the mechanical field loss function \(\mathcal{L}_{M}\)) for the same or a different number of epochs \(n_{M}\). The whole training stops when the total number of epochs reaches the desired value \(n_{A}\); until that value is reached, the same procedure is repeated (minimizing \(\mathcal{L}_{T}\) for \(n_{T}\) epochs and minimizing \(\mathcal{L}_{M}\) for \(n_{M}\) epochs). For the coupled training procedure, the total loss function is computed by summing the different physics loss functions (\(\mathcal{L}_{M}\) and \(\mathcal{L}_{T}\)) into a single one, \(\mathcal{L}_{Cpl}\); see Eq. (38) and Fig. 4.
**Remark 2** It is shown that employing separated networks and utilizing only first-order derivatives in the network's loss function leads to a better performance [1]. See also the studies by Chiu et al. [61] on this matter. Reducing the order of derivatives also has other benefits. For example, one has more flexibility in choosing the activation functions (i.e. we avoid problems related to vanishing gradients due to higher-order derivatives). Moreover, the computational cost is decreased compared to cases involving second-order derivatives.
**Remark 3** Another possibility to stop each stage of the sequential training is to reach a certain desired value of the loss for each field. This is achieved by placing no limit on the number of epochs while optimizing each field's loss function; the training stage then stops whenever the desired value of the loss function is reached.
**Remark 4** The current mixed PINN approach can also be interpreted as a multi-objective optimization problem. Therefore, having a proper balance between different loss terms is essential. Often in multi-physical problems, it is important to normalize the quantities before the minimization starts [23]. Also, the loss terms \(\mathcal{L}_{T}^{EF}\) and \(\mathcal{L}_{M}^{EF}\) are defined based on the mean absolute error so that they remain of the same order as the other loss terms.
In this work, the Adam optimizer is employed, and the relevant network parameters are summarized in Table 1. Please note that in the case of a simple PINNs formulation, where the energy form of the problem is absent, one can use multiple batches for the training of the network.
| Parameter | Value |
| --- | --- |
| Input, Output | \(\mathbf{X}\), \((\mathbf{u},\ \mathbf{\sigma}^{o},\ T,\ \mathbf{q}^{o})\) |
| activation function | tanh |
| number of layers and neurons per layer (\(L\), \(N_{l}\)) | (5, 40) |
| batch size | full batch for the mixed PINNs formulation |
| (learning rate \(\alpha\), number of epochs) | (\(10^{-3}\), \(10^{5}\)) |
| **coupled training** | |
| total number of epochs (\(n_{A}\)) | 100k epochs |
| **sequential training** | |
| total number of iterations (\(n_{A}\)) | 200k epochs |
| iterations for the thermal field before switch (\(n_{T}\)) | 20k epochs |
| iterations for the mechanical field before switch (\(n_{M}\)) | 20k epochs |

Table 1: Summary of the network parameters for coupled and sequential training procedures.
Based on the boundary conditions discussed in Fig. 3, we write all the loss terms more specifically for the given BVP in what follows. The components of the total strain tensor and temperature gradient vector are calculated by taking derivatives of displacements and temperature fields, respectively. Considering Eqs. (20) and (21), we define the following networks
\[\mathcal{N}_{\varepsilon_{x}}(\mathbf{X};\mathbf{\theta})=\partial_{x}\mathcal{N}_{u_{x}}, \tag{39}\]
\[\mathcal{N}_{\varepsilon_{y}}(\mathbf{X};\mathbf{\theta})=\partial_{y}\mathcal{N}_{u_{y}}, \tag{40}\]
\[\mathcal{N}_{\varepsilon_{xy}}(\mathbf{X};\mathbf{\theta})=\frac{1}{2}\left(\partial_{y}\mathcal{N}_{u_{x}}+\partial_{x}\mathcal{N}_{u_{y}}\right), \tag{41}\]
\[\mathcal{N}_{\nabla_{x}T}(\mathbf{X};\mathbf{\theta})=\partial_{x}\mathcal{N}_{T}, \tag{42}\]
\[\mathcal{N}_{\nabla_{y}T}(\mathbf{X};\mathbf{\theta})=\partial_{y}\mathcal{N}_{T}. \tag{43}\]
Components of the (isotropic) thermal strain tensor are then derived by
\[\mathcal{N}_{\varepsilon_{tx}}(\mathbf{X};\mathbf{\theta})=\mathcal{N}_{\varepsilon_{ ty}}=\alpha\left(\mathbf{X}\right)\left(\mathcal{N}_{T}-T_{0}\right). \tag{44}\]
Considering the total and thermally induced strain tensors, the elastic part of the strain tensor reads
\[\mathcal{N}_{\varepsilon_{ex}}(\mathbf{X};\mathbf{\theta})=\mathcal{N}_{\varepsilon_{x}}-\mathcal{N}_{\varepsilon_{tx}}, \tag{45}\]
\[\mathcal{N}_{\varepsilon_{ey}}(\mathbf{X};\mathbf{\theta})=\mathcal{N}_{\varepsilon_{y}}-\mathcal{N}_{\varepsilon_{ty}}, \tag{46}\]
\[\mathcal{N}_{\varepsilon_{exy}}(\mathbf{X};\mathbf{\theta})=\mathcal{N}_{\varepsilon_{xy}}. \tag{47}\]
Utilizing the constitutive relations (Hooke's law) as well as Fourier's law, one computes the components of the stress tensor and heat flux vector according to the following relations
\[\mathcal{N}_{\sigma_{x}}(\mathbf{X};\mathbf{\theta}) =\frac{E(\mathbf{X})}{(1+\nu(\mathbf{X}))(1-2\nu(\mathbf{X}))}\left((1-\nu(\mathbf{X}))\mathcal{N}_{\varepsilon_{ex}}+\nu(\mathbf{X})\mathcal{N}_{\varepsilon_{ey}}\right), \tag{48}\] \[\mathcal{N}_{\sigma_{y}}(\mathbf{X};\mathbf{\theta}) =\frac{E(\mathbf{X})}{(1+\nu(\mathbf{X}))(1-2\nu(\mathbf{X}))}\left(\nu(\mathbf{X})\mathcal{N}_{\varepsilon_{ex}}+(1-\nu(\mathbf{X}))\mathcal{N}_{\varepsilon_{ey}}\right),\] (49) \[\mathcal{N}_{\sigma_{xy}}(\mathbf{X};\mathbf{\theta}) =\frac{E(\mathbf{X})}{1+\nu(\mathbf{X})}\mathcal{N}_{\varepsilon_{exy}},\] (50) \[\mathcal{N}_{q_{x}}(\mathbf{X};\mathbf{\theta}) =-k(\mathbf{X})\mathcal{N}_{\nabla_{x}T},\] (51) \[\mathcal{N}_{q_{y}}(\mathbf{X};\mathbf{\theta}) =-k(\mathbf{X})\mathcal{N}_{\nabla_{y}T}. \tag{52}\]
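A PyTorch sketch of Eqs. (39)-(52) via automatic differentiation is given below. The assumed output ordering of the network (matching the sketch after Table 1) and the helper name `derived_fields` are our own conventions, not part of the original formulation.

```python
import torch

# Sketch of Eqs. (39)-(52): derived strains, stresses, and fluxes via autograd.
# Assumes `model(X)` returns columns [u_x, u_y, sig_x_o, sig_y_o, sig_xy_o, T,
# q_x_o, q_y_o], and that E, nu, k, alpha are given per collocation point
# (heterogeneous material fields).
def derived_fields(model, X, E, nu, k, alpha, T0=0.0):
    X = X.requires_grad_(True)
    out = model(X)
    u_x, u_y, T = out[:, 0], out[:, 1], out[:, 5]
    grad = lambda f: torch.autograd.grad(f.sum(), X, create_graph=True)[0]
    du_x, du_y, dT = grad(u_x), grad(u_y), grad(T)
    eps_x, eps_y = du_x[:, 0], du_y[:, 1]            # Eqs. (39)-(40)
    eps_xy = 0.5 * (du_x[:, 1] + du_y[:, 0])         # Eq. (41)
    eps_t = alpha * (T - T0)                         # Eq. (44), isotropic
    eps_ex, eps_ey = eps_x - eps_t, eps_y - eps_t    # Eqs. (45)-(46)
    c = E / ((1 + nu) * (1 - 2 * nu))                # plane-strain Hooke's law
    sig_x = c * ((1 - nu) * eps_ex + nu * eps_ey)    # Eq. (48)
    sig_y = c * (nu * eps_ex + (1 - nu) * eps_ey)    # Eq. (49)
    sig_xy = E / (1 + nu) * eps_xy                   # Eq. (50)
    q_x, q_y = -k * dT[:, 0], -k * dT[:, 1]          # Fourier's law, Eqs. (51)-(52)
    return (eps_ex, eps_ey, eps_xy), (sig_x, sig_y, sig_xy), T, (q_x, q_y)
```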
To simplify the equations for formulating the loss functions, in what follows, the dependency of the neural network outputs \(\mathcal{N}\left(\mathbf{X};\mathbf{\theta}\right)\) on the collocation points' coordinates \(\mathbf{X}\) and the trainable parameters \(\mathbf{\theta}\) is not shown.
Finally, the mathematical expressions for calculating loss functions for the mechanical and thermal field are expanded in what follows. See also Eqs. (23) to (33) as well as Fig. 4,
\[\mathcal{L}_{M}^{EF} =\left|-\frac{1}{2N_{b}}\sum_{\mathbf{X}\in\Omega}\left(\mathcal{N}_{\sigma_{x}}\mathcal{N}_{\varepsilon_{ex}}+\mathcal{N}_{\sigma_{y}}\mathcal{N}_{\varepsilon_{ey}}+\mathcal{N}_{\sigma_{xy}}\mathcal{N}_{\varepsilon_{exy}}\right)\right|, \tag{53}\] \[\mathcal{L}_{M}^{DBC} =\frac{1}{N_{DBC}^{L}}\sum_{\mathbf{X}\in\Gamma_{D}^{L}}\left(\mathcal{N}_{u_{x}}-0\right)^{2}+\frac{1}{N_{DBC}^{R}}\sum_{\mathbf{X}\in\Gamma_{D}^{R}}\left(\mathcal{N}_{u_{x}}-0\right)^{2}+\frac{1}{N_{DBC}^{T}}\sum_{\mathbf{X}\in\Gamma_{D}^{T}}\left(\mathcal{N}_{u_{y}}-0\right)^{2}\] \[+\frac{1}{N_{DBC}^{B}}\sum_{\mathbf{X}\in\Gamma_{D}^{B}}\left(\mathcal{N}_{u_{y}}-0\right)^{2},\] (54) \[\mathcal{L}_{M}^{cnc} =\frac{1}{N_{b}}\sum_{\mathbf{X}\in\Omega}\left(\left(\mathcal{N}_{\sigma_{x}^{\circ}}-\mathcal{N}_{\sigma_{x}}\right)^{2}+\left(\mathcal{N}_{\sigma_{y}^{\circ}}-\mathcal{N}_{\sigma_{y}}\right)^{2}+\left(\mathcal{N}_{\sigma_{xy}^{\circ}}-\mathcal{N}_{\sigma_{xy}}\right)^{2}\right),\] (55) \[\mathcal{L}_{M}^{SF} =\frac{1}{N_{b}}\sum_{\mathbf{X}\in\Omega}\left(\left(\partial_{x}\mathcal{N}_{\sigma_{x}^{\circ}}\,+\,\partial_{y}\mathcal{N}_{\sigma_{xy}^{\circ}}\right)^{2}+\left(\partial_{y}\mathcal{N}_{\sigma_{y}^{\circ}}\,+\,\partial_{x}\mathcal{N}_{\sigma_{xy}^{\circ}}\right)^{2}\right),\] (56) \[\mathcal{L}_{M}^{NBC} =0. \tag{57}\]
In the above equations, \(\Gamma^{L}\), \(\Gamma^{R}\), \(\Gamma^{T}\), and \(\Gamma^{B}\) denote the locations of points at the left, right, top, and bottom boundaries, respectively. _DBC_ and _NBC_ stand for Dirichlet and Neumann boundary conditions.
Note that the loss term related to the Neumann BCs (Eq. (57)) is zero according to the defined BVP: no external traction is applied to the microstructure of the material. For the computation of the integral in the energy form, we use a summation over the collocation points with equal weights, as the collocation points are uniformly distributed in the domain. Accordingly, \(N_{b}\) denotes the number of collocation points within the domain (body). Moreover, \(N^{L}\), \(N^{R}\), \(N^{T}\), and \(N^{B}\) represent the numbers of collocation points at the left, right, top, and bottom boundaries, respectively. See also [31] for other possible methodologies.
The loss functions related to the thermal field read as
\[\mathcal{L}_{T}^{EF} =\left|\frac{1}{2N_{b}}\sum_{\mathbf{X}\in\Omega}\left(\mathcal{N}_{q_{x}}\mathcal{N}_{\nabla_{x}T}+\mathcal{N}_{q_{y}}\mathcal{N}_{\nabla_{y}T}\right)+\frac{1}{N_{NBC}^{L}}\sum_{\mathbf{X}\in\Gamma_{N}^{L}}\left(\mathcal{N}_{q_{x}}\mathcal{N}_{T}\right)\right|, \tag{58}\] \[\mathcal{L}_{T}^{DBC} =\frac{1}{N_{DBC}^{L}}\sum_{\mathbf{X}\in\Gamma_{D}^{L}}\left(\mathcal{N}_{T}-1\right)^{2}+\frac{1}{N_{DBC}^{R}}\sum_{\mathbf{X}\in\Gamma_{D}^{R}}\left(\mathcal{N}_{T}-0\right)^{2},\] (59) \[\mathcal{L}_{T}^{cnc} =\frac{1}{N_{b}}\sum_{\mathbf{X}\in\Omega}\left(\left(\mathcal{N}_{q_{x}^{\circ}}-\mathcal{N}_{q_{x}}\right)^{2}+\left(\mathcal{N}_{q_{y}^{\circ}}-\mathcal{N}_{q_{y}}\right)^{2}\right),\] (60) \[\mathcal{L}_{T}^{SF} =\frac{1}{N_{b}}\sum_{\mathbf{X}\in\Omega}\left(\partial_{x}\mathcal{N}_{q_{x}^{\circ}}\,+\,\partial_{y}\mathcal{N}_{q_{y}^{\circ}}\right)^{2},\] (61) \[\mathcal{L}_{T}^{NBC} =\frac{1}{N_{NBC}^{T}}\sum_{\mathbf{X}\in\Gamma_{N}^{T}}\left(\mathcal{N}_{q_{y}^{\circ}}-0\right)^{2}+\frac{1}{N_{NBC}^{B}}\sum_{\mathbf{X}\in\Gamma_{N}^{B}}\left(\mathcal{N}_{q_{y}^{\circ}}-0\right)^{2}. \tag{62}\]
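A sketch of how the mechanical losses can be assembled is given below; the thermal counterpart of Eqs. (58)-(62) follows analogously. The stress-derivative term \(\mathcal{L}_{M}^{SF}\) (Eq. (56)) would require a further autograd pass on the predicted stresses and is omitted here. The argument names and the boundary-mask convention are assumptions for illustration.

```python
import torch

# Sketch of assembling Eqs. (53)-(55) and (57). `sig`/`sig_o` are tuples of
# (sigma_x, sigma_y, sigma_xy) from the constitutive law and from the direct
# network output, `eps_e` the (elastic) strains, `u = (u_x, u_y)`, and `masks`
# boolean index tensors selecting boundary collocation points (all assumed).
def mechanical_loss(sig, sig_o, eps_e, u, masks):
    L_EF = torch.abs((-0.5 * (sig[0] * eps_e[0]
                              + sig[1] * eps_e[1]
                              + sig[2] * eps_e[2])).mean())            # Eq. (53)
    L_DBC = (u[0][masks["L"]] ** 2).mean() + (u[0][masks["R"]] ** 2).mean() \
          + (u[1][masks["T"]] ** 2).mean() + (u[1][masks["B"]] ** 2).mean()  # Eq. (54)
    L_cnc = sum(((a - b) ** 2).mean() for a, b in zip(sig_o, sig))     # Eq. (55)
    return L_EF + L_DBC + L_cnc    # L_M^SF omitted; L_M^NBC = 0 by Eq. (57)
```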
## 4 Results
### A circular heterogeneity in a matrix (geometry 1)
We start with the first geometry, which is described in Fig. 2. The collocation points are set as depicted on the left-hand side of Fig. 5. As explained in [1], to enhance the accuracy of the results, one needs to increase the number of collocation points in the heterogeneity, on the domain boundaries, and in the matrix. Approximately 5000 collocation points are considered in a rectangular domain from 0 to 1 [mm] in both the \(x\)- and \(y\)-directions. Minimization of the loss functions (described in Sec. 3.1) is done at the collocation points. The material parameters for the current boundary value problem are given in Tab. 2.
To evaluate the network response, the thermoelasticity problem is also solved for the same domain with the finite element program FEAP [62]. The discretization of the domain for the FE analysis is done in such a way that the elements coincide with the regular collocation points used for evaluating the response of the trained network. Moreover, in Fig. 5, the variation of color shows the different Young's modulus and thermal conductivity values at the PINN collocation points. The latter comes from the fact that some points lie at the interface of the matrix and the inclusion. The same situation arises during the assembly procedure in the FE method.
The minimization of the loss functions is done by two different procedures, coupled and sequential training (see also Fig. 4). In both cases, the training is performed on a Skylake CPU (Platinum 8160 model, Intel HNS2600BPB hardware node type). For the coupled training, every loss term is minimized by performing 100 k epochs (iterations). To have a fair
\begin{table}
\begin{tabular}{l l} \hline & Value/Unit \\ \hline \hline
**Thermal elasticity problem of geometry 1** & \\ Matrix Young’s modulus and Poisson’s ratio (\(E_{\text{mat}}\), \(\nu_{\text{mat}}\)) & (0.3 GPa, 0.3) \\ Inclusion Young’s modulus and Poisson’s ratio (\(E_{\text{inc}}\), \(\nu_{\text{inc}}\)) & (1.0 GPa, 0.3) \\ Matrix heat conductivity (\(k_{\text{mat}}\)) & 0.3 W/mK \\ Inclusion heat conductivity (\(k_{\text{inc}}\)) & 1.0 W/mK \\ \hline \end{tabular}
\end{table}
Table 2: Geometry 1 input parameters for the linear thermal elasticity problem
Figure 5: Left: collocation points for minimizing the loss functions in PINNs. Middle: points for evaluating the trained network response. Right: discretized domain using FEM to solve the same boundary value problem in thermoelasticity. The color variation shows the different values of Young’s modulus.
comparison for the sequential training, we perform a total of 200 k epochs, i.e., each field's loss function is minimized for 100 k epochs. The number of epochs is chosen based on the final loss value and the accuracy of the results. This number may easily change depending on the optimizer type and the desired accuracy.
#### 4.1.1 Coupled training for geometry 1
The decay of prominent loss terms during the training procedure is shown in Fig. 6.
In the coupled training minimization, we try to find the optimum network parameters that satisfy all of the loss terms simultaneously. Note that, for the sake of clarity and a better
Figure 6: Decay of the prominent loss terms for the thermal and mechanical fields during the training procedure utilizing the coupled training algorithm.
comparison with the sequential training, the loss terms related to the mechanical and thermal sub-problems are plotted individually in Fig. 6.
The deformed configurations of PINNs and FEM are plotted in Fig. 7 and are almost identical. Note that all the deformations are induced by the temperature gradient introduced in the system. For a more detailed view, the displacements and stress components from PINNs and FEM are compared in Fig. 8. The PINN results are presented in the first column, while the FEM results and the difference between PINNs and FEM are shown in the second and third columns, respectively.

According to the results in Fig. 8, the mechanical field results from PINNs for the thermoelasticity problem are in good agreement with the outcomes of the FE analysis. The averaged relative difference lies at \(0.59\,\%\) for the displacements in the \(x\)-direction and at about \(0.87\,\%\) for the stresses in the \(x\)-direction. The maximum relative differences are about \(4.1\,\%\) for \(u_{x}\) and \(13.5\,\%\) for \(\sigma_{x}\). These larger errors occur only at a few points in the domain around the inclusion boundaries.

The maximum difference is observed at the interface of the matrix and the heterogeneity. This is a result of the rather sharp transition of the material properties at the interface. Possible remedies are to add more collocation points near the interface or to apply domain decomposition methods, see [23, 63, 64]. Increasing the number of training epochs also helps.

The agreement of the thermal field results between the PINN and the FE analysis for the coupled training is addressed in Fig. 9. The relative averaged error is about \(0.3\,\%\) for \(T\) and around \(0.76\,\%\) for \(q_{x}\). The maximum relative differences between the PINN and the FE analysis are about \(1.0\,\%\) for the temperature field and \(24.93\,\%\) for the heat flux in the \(x\)-direction. The measured difference again occurs mostly at a few points on the interface. In the next section, a more detailed comparison is made by drawing sections in the \(x\)-direction and having a closer look at the responses of the PINN and the FE analysis.
Figure 7: Left: undeformed configuration of evaluation points in PINNs. Middle: deformed configuration of evaluation points in PINNs employing coupled training. Right: deformed configuration of the discretized domain in FE analysis.
Figure 8: The mechanical field results of the thermoelasticity problem employing the coupled training procedure. Left: PINNs, Middle: FEM, Right: FEM with PINNs difference.
#### 4.1.2 Sequential training
The identical collocation points from Fig. 5 are utilized to minimize the thermal and mechanical loss functions (\(\mathcal{L}_{T}\) and \(\mathcal{L}_{M}\)) with the sequential algorithm. As discussed before, the minimization is performed for \(200\,\mathrm{k}\) epochs in total, which results in \(100\,\mathrm{k}\) epochs for each field's loss function. The decay of the main losses of each field with respect to the number of epochs is plotted separately for the mechanical and thermal fields in Fig. 10.
In Fig. 10, one can see the effect of the chosen values for \(n_{T}\) and \(n_{M}\). These two numbers determine when we switch between the minimization of the thermal and the mechanical loss functions; both are set to \(20\,\mathrm{k}\). Note that the loss terms of the thermal field are not evaluated during the training of the mechanical part, and vice versa. Therefore, to show the complete loss curves, the losses are recorded for each individual part during its training and appended afterwards. One observes that minimizing the loss functions of one field may increase the losses of the other field: the network seeks the optimum parameters for the active field, which may perturb the previously found optimum parameters of the inactive field. This effect can be seen for each loss function right after the vertical lines in Fig. 10, especially for the thermal field losses. Finally, one should note that alternating between the fields several times during training is essential to find the global minimum of the system. For this example, \(n_{M}\), \(n_{T}\), and \(n_{A}^{Seq}\) are chosen such that five such iterations are performed.
Figure 9: The thermal field results of the thermoelasticity problem employing the coupled training procedure. Left: PINNs, Middle: FEM, Right: FEM with PINNs difference.
Further studies are required to find the optimum switching times between the fields and the optimum number of epochs spent minimizing each field's loss function.
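The sequential schedule can be sketched as follows; `thermal_loss` and `mechanical_loss` are hypothetical closures evaluating Eqs. (58)-(62) and (53)-(57) on the collocation points, and `model`/`optimizer` are as in the sketch after Table 1.

```python
# Sketch of the sequential training schedule (Table 1): alternate between
# minimizing the thermal and the mechanical loss, switching every n_T / n_M
# epochs; with these values, five thermal/mechanical iterations are performed.
n_T, n_M, n_A = 20_000, 20_000, 200_000
epoch = 0
while epoch < n_A:
    for loss_fn, n_steps in ((thermal_loss, n_T), (mechanical_loss, n_M)):
        for _ in range(n_steps):
            optimizer.zero_grad()
            loss_fn().backward()     # minimize only the active field's loss
            optimizer.step()
            epoch += 1
```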
Utilizing the mixed PINN with sequential training leads to averaged relative differences with respect to the FE analysis of \(0.81\,\%\) and \(0.8\,\%\) for the displacement and stress components in the \(x\)-direction, respectively, while the maximum relative errors are \(5.9\,\%\) and \(14.89\,\%\). Again, one should emphasize that the error is localized at a few points around the inclusion. In the thermal field, the maximum relative temperature error stands at around \(1.3\,\%\) and at almost \(29.16\,\%\) for the heat flux in the \(y\)-direction, \(q_{y}\), see Fig. 15. The averaged relative errors lie at \(0.26\,\%\) and \(0.66\,\%\)
Figure 10: Top: decay of the prominent loss terms for the thermal field during the training procedure utilizing the sequential training algorithm. Bottom: decay of the prominent loss terms for the mechanical field during the training procedure utilizing the sequential training algorithm.
for the mentioned values.
To emphasize the capability of the mixed PINN formulation in comparison with the standard PINN structure [20; 37], the standard PINN is also used to solve the thermoelasticity boundary value problem with an identical number of training epochs. Note that the standard PINN structure utilizes a separate neural network to predict the outputs, which are only \(\mathbf{u}\) and \(T\). Therefore, second-order derivatives appear in the formulation. The network has five hidden layers with forty neurons each (\(5\times 40\)) and _tanh_ as the activation function. The training is done with both the coupled and the sequential training algorithm for each of the standard PINN and mixed PINN formulations.
The \(x\)-components of displacement, stress, and heat flux, as well as the temperature field, are depicted in Fig. 11 for the section made at \(x=0.5\,\mathrm{mm}\). The FE analysis results are added to the figures as a reference solution.
The standard PINN formulation is not able to capture the effect of the heterogeneity for the displacement component \(u_{x}\) and the temperature field \(T\). It also shows poor performance for
Figure 11: Comparison of the main outputs of the networks through the section at \(x=0.5\,\mathrm{mm}\). Mix. PINNs: the formulation of this paper, having \(\mathbf{\sigma}\) and \(\mathbf{u}\) as outputs and utilizing the energy-form loss in addition to the conventional losses. PINNs: the standard formulation, which has only \(\mathbf{u}\) as output, without the energy form of the problem in the loss function. Coupled: the loss function is minimized with the coupled training algorithm (the loss functions of all fields are minimized simultaneously). Sequential: minimization is done with the proposed sequential training method.
the prediction of \(q_{x}\) and \(\sigma_{x}\), regardless of whether the coupled or the sequential training procedure is used. However, employing sequential training leads to slightly better performance for the stress and flux components. The mixed PINN formulation performs much better and is able to capture the effect of the heterogeneity with both the coupled and the sequential training algorithm. A closer look at the results reveals a modestly better performance of the coupled training algorithm compared to the sequential one. Due to the pixel-based mesh of the FEM discretization, some small jumps exist in the FE analysis results in Fig. 11 which are absent in the mixed PINN results. One can state that the mixed PINN solves the current multi-physical problem accurately with either the sequential or the coupled training procedure and leads to almost the same results as the FE analysis.
**Remark 5** To address the computational efficiency of each training procedure: the average time per epoch is about 0.67 seconds for the coupled training, while for the sequential training this value decreases to 0.16 seconds. However, since at least two passes over the fields are needed to reach the solution in the case of sequential training, the effective gain is about 0.35 seconds per epoch, which still reduces the computational time by more than a factor of two compared to the coupled training method.
### Complex microstructure containing rectangular inclusions (geometry 2)
To further evaluate the performance of the proposed approach for solving multi-physical problems in heterogeneous domains, the second geometry in Fig. 12 is now considered. The pattern of heterogeneity is designed to be more challenging, with sharp corners. Such a pattern also corresponds to specific material microstructures such as nickel alloy systems, see [65].
For the second geometry, we introduce 2601 collocation points to minimize the loss functions (see the left-hand side of Fig. 12). The corresponding mesh, which discretizes the domain in order to solve the thermoelasticity problem by means of FEM, is depicted on the right-hand side of Fig. 12. To evaluate the response of the system, 10201 points are employed. In this case, approximately 75 % of these points differ in position from the collocation points, see the middle part of Fig. 12. This demonstrates the ability of the network to interpolate the solution. The material parameters utilized in the second boundary value problem are stated in Tab. 3.
Figure 12: Left: collocation points for minimizing the loss functions for the second geometry. Middle: points for evaluating the trained network response. Right: discretized domain using FEM to solve the same boundary value problem in thermoelasticity. The color variation shows the different values of Young’s modulus.
#### 4.2.1 Coupled training of the second geometry
The loss functions of the thermoelasticity problem are minimized by the coupled training procedure for the second geometry. Fig. 13 illustrates the deformed (no scaling applied) and undeformed configurations of the microstructure from PINNs and FEM. The mixed PINN formulation is capable of predicting the response of this multi-physical problem, despite the shape of the heterogeneity and the large deformation of the inclusions. However, on the left-hand side of the deformed configuration in Fig. 13, one can observe that the left boundary condition is not satisfied. This is the main source of error for the mechanical field responses of the network; see the first row in Fig. 14, where the left Dirichlet boundary condition is not fully satisfied in certain regions.
A remedy for this problem is to use hard-constraint boundary conditions or to add extra collocation points at the boundaries. Similar to the results of the first geometry, the difference accumulates mainly in the interface regions. The maximum differences between the mixed PINN approach and the FEM are \(9.7\,\%\) and \(13.4\,\%\) for \(u_{x}\) and \(\sigma_{x}\), respectively. The averaged relative errors for these fields are \(1.6\,\%\) and \(3.3\,\%\), which again shows the localization of the error around the inclusions. The maximum relative difference between the trained network and the FEM lies at around \(3.4\,\%\) for the temperature field and at \(21.5\,\%\) for \(q_{y}\), which is the critical component of the flux vector. For the temperature prediction, the error is concentrated near the left boundary, while for the fluxes the right boundary is also critical, see Fig. 15. For these values, the averaged relative errors are \(1.3\,\%\) and \(1.8\,\%\).
Figure 13: Undeformed and deformed configurations result from the mixed PINNs formulation employing coupled training algorithm and FEM respecting the second geometry. No scaling is applied.
\begin{table}
\begin{tabular}{l l} \hline \hline & Value/Unit \\ \hline \hline
**Thermal elasticity problem of geometry 2** & \\ Matrix Young’s modulus and Poisson’s ratio (\(E_{\mathrm{mat}}\), \(\nu_{\mathrm{mat}}\)) & (1.0 GPa, 0.3) \\ Inclusion Young’s modulus and Poisson’s ratio (\(E_{\mathrm{inc}}\), \(\nu_{\mathrm{inc}}\)) & (0.5 GPa, 0.3) \\ Matrix heat conductivity (\(k_{\mathrm{mat}}\)) & 1.0 W/mK \\ Inclusion heat conductivity (\(k_{\mathrm{inc}}\)) & 0.5 W/mK \\ \hline \hline \end{tabular}
\end{table}
Table 3: Geometry 2 input parameters for the linear thermal elasticity problem
Figure 14: The mechanical field results of the thermoelasticity for geometry 2 using coupled training. Left: PINNs, Middle: FEM, Right: FEM with PINNs difference.
#### 4.2.2 Sequential training of the second geometry
The same number of collocation points as in Sec. 4.2.1 are utilized to minimize the loss function in the sequential training approach. The corresponding results of the network are partially plotted in Fig. 16.
We observe that the same issue occurs for this training strategy as well: the Dirichlet boundary condition on the left-hand side is not fully satisfied. In general, the results are in good agreement with the results from FEM; the averaged relative differences are about \(0.49\,\%\) and \(2.12\,\%\) for \(u_{x}\) and \(\sigma_{x}\), respectively. The maximum relative errors for these fields are \(2.8\,\%\) and \(10.19\,\%\). For the thermal field, the maximum and averaged relative errors lie at \(2.5\,\%\) and \(0.7\,\%\). For the heat flux in the \(y\)-direction, these values are \(21.5\,\%\) and \(1.82\,\%\).
For a closer inspection of the results of the coupled and sequential methods, the components of the mechanical field outputs and the heat flux in the \(x\)-direction, as well as the temperature field, are compared along a section at \(x=0.5\,\mathrm{mm}\). The section is chosen such that it passes through at least one inclusion. The results are compared in Fig. 16, with the FE analysis results serving as the reference solution. The results of sequential training slightly outperform the coupled training results with respect to the FEM. More specifically, near the inclusion, the results of the sequential
Figure 15: The thermal field results of the thermoelasticity for the second geometry using coupled training. Left: PINNs, Middle: FEM, Right: FEM with PINNs difference.
training are closer to the reference solution. The corresponding relative difference values reflect the better performance of the sequential training strategy.
### Transfer learning by combining physics and data
In this section, we aim to obtain full-field solutions for unseen cases by combining data and physics. The trained network reaches the solution of a given BVP promptly, which makes the neural network enormously faster than a conventional solver such as FEM.
For the given BVP, one can change the topology of the microstructure, the boundary conditions, as well as the material properties. For the sake of simplicity, we focus only on different material properties for the involved phases. Therefore, geometry 2 from the previous section is taken with different sets of material parameters. To this end, the ratio of Young's modulus between the inclusion and the matrix, as well as their thermal conductivity ratio, is varied in the range of 1 to 10. For each case, the thermoelasticity problem is solved by means of FE analysis, and the corresponding results are later used to train the neural network. The FE data is utilized to facilitate the training process and reduce the computational cost.
For training the network, Young's modulus and the thermal conductivity of each point are fed in alongside the collocation points' coordinates as inputs. The designed network architecture is depicted in Fig. 17. In addition to the loss based on data, the losses based on
Figure 16: Comparison of the network’s outputs through the section at \(x=0.5\)mm of the geometry 2.
the mixed PINN formulation are taken into account, namely \(\mathcal{L}_{T}\) and \(\mathcal{L}_{M}\), see Eqs. (36) and (37). The total loss function in the presence of data from the FE analysis is written as
\[\mathcal{L}_{total}=\underbrace{\mathcal{L}_{data}}_{\text{FE simulations}}+\,w\left(\underbrace{\mathcal{L}_{T}+\mathcal{L}_{M}}_{\text{Physics}}\right). \tag{63}\]
In Eq. (63), \(w\) denotes the weight of the physics in the total loss function. The network's predictions are compared for four different cases in which the parameter \(w\) varies from \(0.0\) to \(1.0\); the case \(w=0.0\) corresponds to the purely data-driven setting. After studying a set of different weight values, \(w=0.1\) is chosen as it yields the best results. The training is done by a transfer learning procedure in which \(\mathcal{L}_{total}\) in Eq. (63) is minimized for a specific ratio of Young's modulus \(E(x,y)\) and thermal conductivity \(k(x,y)\) of the different phases. The network's optimum parameters are found for the first set of values after \(10\,\mathrm{k}\) epochs. Starting from the previously tuned parameters, the minimization of the losses is then performed for the next value of the ratio for another \(10\,\mathrm{k}\) iterations.
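A sketch of this combined data-and-physics training is given below; `make_dataset` (returning collocation inputs and FE reference outputs for one ratio) is a hypothetical helper, the listed ratio values are only illustrative samples from the range [1, 10], and the loss closures are as in the earlier sketches.

```python
# Sketch of minimizing Eq. (63) with transfer learning over the material ratios.
# The network here takes (x, y, E, k) as input, so n_in = 4 in this setting.
w = 0.1                                    # weight of the physics term
for ratio in (1.0, 2.0, 5.0, 10.0):        # assumed training ratios E_r = K_r
    inputs, y_fe = make_dataset(ratio)     # FE reference data for this ratio
    for _ in range(10_000):                # 10 k epochs per ratio value
        optimizer.zero_grad()
        L_data = ((model(inputs) - y_fe) ** 2).mean()
        L_total = L_data + w * (thermal_loss() + mechanical_loss())
        L_total.backward()
        optimizer.step()                   # parameters carry over to next ratio
```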
For a better comparison, a section is made at \(x=0.35\) mm. The results from the combination of data and physics, as well as from the purely data-driven case, are reported. The outputs of interest along this section are \(u_{x}\) and \(\sigma_{x}\) for the mechanical field, and \(T\) and \(q_{x}\) for the thermal field.
In general, the results of both approaches are in good agreement with the reference solution computed by the FE analysis, see Fig. 20. The boundary values are approximated better by the mixed PINN formulation combined with data. Both methods are able to capture the effects of the inclusions, and the use of physical constraints enhances the accuracy of the predictions in areas where heterogeneity exists.
Figure 18: Mechanical and thermal fields results of the thermoelasticity for the case of \(E_{r}=K_{r}=15\). Left: PINNs + Data (\(w=0.1\)), Middle: FEM, Right: FEM with PINNs difference.
**Remark 6** The computational cost of the data-driven network employing transfer learning is almost \(82\,\%\) lower than that of the network trained by combining data and physics. For constructing the physical constraints, one needs to differentiate the outputs with respect to the inputs, which is the main reason for the higher computational cost.
Figure 19: Mechanical and thermal fields results of the thermoelasticity for the case of \(E_{r}=K_{r}=15\). Left: Data-driven (\(w=0.0\)), Middle: FEM, Right: FEM with a data-driven difference.
## 5 Conclusion and outlooks
This work shows the capability of physics-informed neural networks to solve a multi-physical problem (coupled PDEs) in heterogeneous domains in the absence of any ground truth data (unsupervised learning). This is achieved by utilizing the mixed PINN formulation proposed by Rezaei et al. [1], in which only first-order derivatives are used to formulate the loss functions. This leads to a better approximation of the solution of a boundary value problem compared to standard approaches, where second-order derivatives appear in the loss functions. It is shown that employing sequential training, in which the minimization of the loss functions is done step-wise (one field's losses are minimized while the other fields are frozen, and then the roles are swapped), is beneficial in terms of computational cost as well as the accuracy of the network. The proposed network is employed to solve a quasi-static thermoelasticity problem. The corresponding results are compared to those of the coupled training, where the losses of all fields are minimized simultaneously.
The prescribed network architecture is also employed in combination with data (in order to reduce the computational cost) to predict unseen cases. To do so, with Young's modulus and thermal conductivity ratios as additional input parameters, the training is done for several values
Figure 20: Comparison of main outputs of the networks through the section at \(x=0.5\)mm of the second geometry using mixed PINNs formulation combined with data utilizing transfer learning training procedure or pure data-driven scheme.
of the ratios, and the resulting network is used to predict (extrapolate) an unseen value of the ratio. The trained network can then provide the solution of a given boundary value problem at almost no computational cost. This can be utilized in digital twins, where the solution of the system is required in a fraction of a second. Further studies are needed to predict the response of arbitrary microstructures under the same boundary and initial conditions.
In future works, one could aim to solve other coupled partial differential equations such as the phase-field damage model [36, 48]. Furthermore, one can include more complex physics for predicting cracking in multi-physical environments [32, 66]. As an example, the trained networks can be utilized for structural optimization and the optimal design of materials created through additive manufacturing. The methodology discussed in this paper for transfer learning can be employed to generalize the network for various problem configurations. It is also rewarding to combine and compare these ideas with those from operator learning [3, 67].
**Acknowledgements**:
The authors acknowledge the computing time granted via the high-performance computers of RWTH Aachen University. Ali Harandi and Stefanie Reese would like to acknowledge the financial support of the German Science Foundation (DFG) for project A6 "Multi-scale modeling of damage and fracture behavior of nano-structured layers" as project number 138690629.
**Author Statement**:
Ali Harandi: Methodology, Software, Writing - Review & Editing. Ahmad Moeineddin: Methodology, Software, Writing - Review & Editing. Michael Kaliske: Supervision, Review & Editing. Stefanie Reese: Funding acquisition, Review & Editing. Shahed Rezaei: Conceptualization, Methodology, Supervision, Writing - Review & Editing.
|
2306.03266 | Extending the Design Space of Graph Neural Networks by Rethinking
Folklore Weisfeiler-Lehman | Message passing neural networks (MPNNs) have emerged as the most popular
framework of graph neural networks (GNNs) in recent years. However, their
expressive power is limited by the 1-dimensional Weisfeiler-Lehman (1-WL) test.
Some works are inspired by $k$-WL/FWL (Folklore WL) and design the
corresponding neural versions. Despite the high expressive power, there are
serious limitations in this line of research. In particular, (1) $k$-WL/FWL
requires at least $O(n^k)$ space complexity, which is impractical for large
graphs even when $k=3$; (2) The design space of $k$-WL/FWL is rigid, with the
only adjustable hyper-parameter being $k$. To tackle the first limitation, we
propose an extension, $(k,t)$-FWL. We theoretically prove that even if we fix
the space complexity to $O(n^k)$ (for any $k\geq 2$) in $(k,t)$-FWL, we can
construct an expressiveness hierarchy up to solving the graph isomorphism
problem. To tackle the second problem, we propose $k$-FWL+, which considers any
equivariant set as neighbors instead of all nodes, thereby greatly expanding
the design space of $k$-FWL. Combining these two modifications results in a
flexible and powerful framework $(k,t)$-FWL+. We demonstrate $(k,t)$-FWL+ can
implement most existing models with matching expressiveness. We then introduce
an instance of $(k,t)$-FWL+ called Neighborhood$^2$-FWL (N$^2$-FWL), which is
practically and theoretically sound. We prove that N$^2$-FWL is no less
powerful than 3-WL, and can encode many substructures while only requiring
$O(n^2)$ space. Finally, we design its neural version named N$^2$-GNN and
evaluate its performance on various tasks. N$^2$-GNN achieves record-breaking
results on ZINC-Subset (0.059), outperforming previous SOTA results by 10.6%.
Moreover, N$^2$-GNN achieves new SOTA results on the BREC dataset (71.8%) among
all existing high-expressive GNN methods. | Jiarui Feng, Lecheng Kong, Hao Liu, Dacheng Tao, Fuhai Li, Muhan Zhang, Yixin Chen | 2023-06-05T21:35:32Z | http://arxiv.org/abs/2306.03266v3 | # Towards Arbitrarily Expressive GNNs in \(O(n^{2})\) Space by Rethinking Folklore Weisfeiler-Lehman
###### Abstract
Message passing neural networks (MPNNs) have emerged as the most popular framework of graph neural networks (GNNs) in recent years. However, their expressive power is limited by the 1-dimensional Weisfeiler-Lehman (1-WL) test. Some works are inspired by \(k\)-WL/FWL (Folklore WL) and design the corresponding neural versions. Despite the high expressive power, there are serious limitations in this line of research. In particular, (1) \(k\)-WL/FWL requires at least \(O(n^{k})\) space complexity, which is impractical for large graphs even when \(k=3\); (2) The design space of \(k\)-WL/FWL is rigid, with the only adjustable hyper-parameter being \(k\). To tackle the first limitation, we propose an extension, \((k,t)\)-FWL. We theoretically prove that even if we fix the space complexity to \(O(n^{2})\) in \((k,t)\)-FWL, we can construct an expressiveness hierarchy up to solving the graph isomorphism problem. To tackle the second problem, we propose \(k\)-FWL+, which considers any equivariant set as neighbors instead of all nodes, thereby greatly expanding the design space of \(k\)-FWL. Combining these two modifications results in a flexible and powerful framework \((k,t)\)-FWL+. We demonstrate \((k,t)\)-FWL+ can implement most existing models with matching expressiveness. We then introduce an instance of \((k,t)\)-FWL+ called Neighborhood\({}^{2}\)-FWL (N\({}^{2}\)-FWL), which is practically and theoretically sound. We prove that N\({}^{2}\)-FWL is no less powerful than 3-WL, can encode many substructures while only requiring \(O(n^{2})\) space. Finally, we design its neural version named **N\({}^{2}\)-GNN** and evaluate its performance on various tasks. N\({}^{2}\)-GNN achieves superior performance on almost all tasks, with record-breaking results on ZINC-Subset **(0.059)** and ZINC-Full **(0.013)**, outperforming previous state-of-the-art results by \(10.6\)% and \(40.9\)%, respectively.
## 1 Introduction
In recent years, graph neural networks (GNNs) have become one of the most popular and powerful methods for graph representation learning, following a message passing framework [1; 2; 3; 4]. However, the expressive power of message passing GNNs is bounded by the one-dimensional Weisfeiler-Lehman (1-WL) test [5; 6]. As a result, numerous efforts have been made to design GNNs with higher expressive power. We provide a more detailed discussion in Section 5.
Several works have drawn inspiration from the \(k\)-dimensional Weisfeiler-Lehman (\(k\)-WL) or Folklore Weisfeiler-Lehman (\(k\)-FWL) test [7] and developed corresponding neural versions [6; 8; 9; 10]. However, \(k\)-WL/FWL has two inherent limitations. First, while the expressive power increases with higher values of \(k\), the space and time complexity also grows exponentially, requiring \(O(n^{k})\)
space complexity and \(O(n^{k+1})\) time complexity, which makes it impractical even when \(k=3\). Thus, the question arises: **Can we retain high expressiveness without exploding both time and space complexities?** Second, the design space of WL-based algorithms is rigid, with the only adjustable hyper-parameter being \(k\). However, there is a significant gap in expressive power even between consecutive values of \(k\), making it hard to fine-tune the tradeoffs. Moreover, increasing the expressive power does not necessarily translate into better real-world performance, as it may lead to overfitting [8; 11]. Although some works try to tackle this problem [8; 10], there is still limited understanding of **how to expand the design space of the original \(k\)-FWL to a broader space** that enables us to identify the most appropriate instance to match the complexity of real-world tasks.
To tackle the first limitation, we notice that \(k\)-FWL and \(k\)-WL have the same space complexity but \(k\)-FWL can achieve the same expressive power as (\(k\)+1)-WL. We found the key component that allows \(k\)-FWL to have stronger power is the tuple aggregation style. Enlightened by this observation, we propose \((k,t)\)-FWL, which extends the tuple aggregation style in \(k\)-FWL. Specifically, in the original \(k\)-FWL, a neighbor of a \(k\)-tuple is defined by iteratively replacing its \(i\)-th element with a node \(u\), and \(u\) traverses all nodes in the graph. In \((k,t)\)-FWL, we extend \(u\) to be a \(t\)-tuple, and allow flexible replacement schemes to insert the \(t\)-tuple into a \(k\)-tuple to define its neighbor. We demonstrate that even with a space complexity of \(O(n^{2})\), \((k,t)\)-FWL can construct an expressive hierarchy capable of solving the graph isomorphism problem. To deal with the second limitation, we revisit the definition of neighborhood in \(k\)-FWL. Inspired by previous works [8; 12] which consider only local neighbors (i.e., \(u\) must be connected to the \(k\)-tuple) instead of global neighbors in \(k\)-WL/FWL, we find that the neighborhood (i.e., which \(u\) are used to construct a \(k\)-tuple's neighbors) can actually be extended to any equivariant set of the \(k\)-tuple, resulting in \(k\)-FWL+. Combining the two modifications leads to a novel and powerful FWL-based algorithm \((k,t)\)-FWL+. \((k,t)\)-FWL+ is highly flexible and can be used to design different versions to fit the complexity of real-world tasks. Based on the proposed \((k,t)\)-FWL+ framework, we implement many different instances that are closely related to existing powerful GNN/WL models, further demonstrating the flexibility of \((k,t)\)-FWL+.
Finally, we propose an instance of \((2,2)\)-FWL+ that is both theoretically expressive and practically powerful. It considers the local neighbors of both two nodes in the 2-tuple. Despite having a space complexity of \(O(n^{2})\), which is lower than 3-WL's \(O(n^{3})\) space complexity, this instance can still partially outperform 3-WL and is able to count many substructures. We implement a neural version named **Neighborhood\({}^{2}\)-GNN (N\({}^{2}\)-GNN)** and evaluate its performance on various synthetic and real-world datasets. Our results demonstrate that N\({}^{2}\)-GNN outperforms existing SOTA methods across most tasks. Particularly, it achieves **0.059** in ZINC-Subset and **0.013** in ZINC-Full, surpassing existing state-of-the-art models by significant margins.
## 2 Preliminaries
**Notations.** Let \(\{\cdot\}\) denote a set, \(\{\!\!\{\cdot\}\!\!\}\) denote a multiset (set that allows repetition), and \((\cdot)\) denote a tuple. As usual, let \([n]=\{1,2,\ldots,n\}\). Let \(G=(V(G),E(G),l_{G})\) be an undirected, colored graph, where \(V(G)=[n]\) is the node set with \(n\) nodes, \(E(G)\subseteq V(G)\times V(G)\) is the edge set, and \(l_{G}\colon V(G)\to C\) is the graph coloring function with \(C=\{c_{1},\ldots,c_{d}\}\) denoting a set of \(d\) distinct colors. Let \(\mathcal{N}_{k}(v)\) denote the set of nodes within \(k\) hops of node \(v\) and \(Q_{k}(v)\) denote the \(k\)-th hop neighbors of node \(v\); we have \(\mathcal{N}_{k}(v)=\bigcup_{i=0}^{k}Q_{i}(v)\). Let SPD\((u,v)\) denote the shortest path distance between \(u\) and \(v\). We use \(x_{v}\in\mathbb{R}^{d_{n}}\) to denote the attributes of node \(v\in V(G)\) and \(e_{uv}\in\mathbb{R}^{d_{e}}\) to denote the attributes of edge \((u,v)\in E(G)\). They are usually the one-hot encodings of the node and edge colors, respectively. We say that two graphs \(G\) and \(H\) are _isomorphic_ (denoted as \(G\simeq H\)) if there exists a bijection \(\varphi\colon V(G)\to V(H)\) such that \(\forall u,v\in V(G),(u,v)\in E(G)\Leftrightarrow(\varphi(u),\varphi(v))\in E(H)\) and \(\forall v\in V(G),l_{G}(v)=l_{H}(\varphi(v))\). Denote by \(V(G)^{k}\) the set of \(k\)-tuples of vertices and by \(\mathbf{v}=(v_{1},\ldots,v_{k})\in V(G)^{k}\) a \(k\)-tuple of vertices. Let \(S_{n}\) denote the permutation group of \([n]\) and \(g\in S_{n}:[n]\rightarrow[n]\) be a particular permutation. When a permutation \(g\in S_{n}\) operates on any target \(X\), we denote it by \(g\cdot X\). Particularly, a permutation operating on an edge set \(E(G)\) is \(g\cdot E(G)=\{(g(u),g(v))|(u,v)\in E(G)\}\). A permutation operating on a \(k\)-tuple \(\mathbf{v}\) is \(g\cdot\mathbf{v}=(g(v_{1}),\ldots,g(v_{k}))\). A permutation operating on a graph is \(g\cdot G=(g\cdot V(G),g\cdot E(G),g\cdot l_{G})\).

\(k\)**-dimensional Weisfeiler-Lehman test.** The \(k\)-dimensional Weisfeiler-Lehman (\(k\)-WL) test is a family of algorithms used to test graph isomorphism. There are two variants of the \(k\)-WL test: \(k\)-WL and \(k\)-FWL (Folklore WL). We first describe the procedure of 1-WL, which is also called the color
refinement algorithm [7]. Let \(\mathcal{C}^{0}_{1wl}(v)=l_{G}(v)\) be the initial color of node \(v\in V(G)\). At the \(l\)-th iteration, 1-WL updates the color of each node using the following equation:
\[\mathcal{C}^{l}_{1wl}(v)=\text{HASH}\left(\mathcal{C}^{l-1}_{1wl}(v),\{\!\!\{ \mathcal{C}^{l-1}_{1wl}(u)|u\in Q_{1}(v)\}\!\}\right). \tag{1}\]
After the algorithm converges, a color histogram is constructed using the colors assigned to all nodes. If the color histogram is different for two graphs, then the two graphs are non-isomorphic. However, if the color histogram is the same for two graphs, they can still be non-isomorphic.
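A minimal plain-Python sketch of this color refinement is given below; the graph is stored as an adjacency list, and the helper name and the use of Python's built-in `hash` as the HASH function are our own conventions.

```python
from collections import Counter

# Minimal sketch of 1-WL color refinement (Eq. (1)) on an adjacency-list graph.
def one_wl(adj, colors, n_iter=100):
    for _ in range(n_iter):
        new = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
               for v in adj}
        if len(set(new.values())) == len(set(colors.values())):
            break                 # partition stable: refinement has converged
        colors = new
    # class-size histogram; if it differs between two graphs, they are
    # non-isomorphic (equal histograms are inconclusive)
    return sorted(Counter(colors.values()).values())

# Usage: a 6-cycle and two disjoint triangles get the same 1-WL histogram,
# illustrating that 1-WL cannot distinguish these non-isomorphic graphs.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
assert one_wl(cycle6, {v: 0 for v in cycle6}) == \
       one_wl(two_triangles, {v: 0 for v in two_triangles})
```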
The \(k\)-WL and \(k\)-FWL, for \(k\geq 2\), are generalizations of the 1-WL, which do not color individual nodes but node tuples \(\mathbf{v}\in V(G)^{k}\). Let \(\mathbf{v}_{w/j}\) be a \(k\)-tuple obtained by replacing the \(j\)-th element of \(\mathbf{v}\) with \(w\). That is, \(\mathbf{v}_{w/j}=(v_{1},\ldots,v_{j-1},w,v_{j+1},\ldots,v_{k})\). The main difference between \(k\)-WL and \(k\)-FWL lies in their definition of neighbors. For \(k\)-WL, the set of \(j\)-th neighbors of tuple \(\mathbf{v}\) is denoted as \(Q_{j}(\mathbf{v})=\{\mathbf{v}_{w/j}|w\in V(G)\}\), \(j\in[k]\). Instead, the \(w\)-neighbor of tuple \(\mathbf{v}\) for \(k\)-FWL is denoted as \(Q^{F}_{w}(\mathbf{v})=\left(\mathbf{v}_{w/1},\mathbf{v}_{w/2},\ldots,\mathbf{v}_{w/k}\right)\). Let \(\mathcal{C}^{0}_{kwl}(\mathbf{v})=\mathcal{C}^{0}_{kfwl}(\mathbf{v})\) be the initial colors for \(k\)-WL and \(k\)-FWL, respectively. They are usually the isomorphism types of tuple \(\mathbf{v}\). At the \(l\)-th iteration, \(k\)-WL and \(k\)-FWL update the color of each tuple according to the following equations:
**WL:** \[\mathcal{C}^{l}_{kwl}(\mathbf{v})=\text{HASH}\left(\mathcal{C}^{l-1}_{kwl}(\mathbf{v}),\left(\{\!\!\{\mathcal{C}^{l-1}_{kwl}(\mathbf{u})\,|\,\mathbf{u}\in Q_{j}(\mathbf{v})\}\!\!\}\;\middle|\;j\in[k]\right)\right), \tag{2}\]

**FWL:** \[\mathcal{C}^{l}_{kfwl}(\mathbf{v})=\text{HASH}\left(\mathcal{C}^{l-1}_{kfwl}(\mathbf{v}),\{\!\!\{\left(\mathcal{C}^{l-1}_{kfwl}(\mathbf{u})\,|\,\mathbf{u}\in Q^{F}_{w}(\mathbf{v})\right)|\,w\in V(G)\}\!\!\}\right). \tag{3}\]

To see why the tuple-style aggregation of \(k\)-FWL is more powerful, consider the case \(k=2\). In each update step of 2-FWL, for every \(w\in V(G)\),
a tuple of colors \((\mathcal{C}^{l-1}_{2fwl}(v_{1},w),\mathcal{C}^{l-1}_{2fwl}(w,v_{2}))\) is aggregated, while in 2-WL the colors of nodes are considered in separate multisets. To understand why this aggregation is more powerful, we can pull the color of the root tuple \(\mathcal{C}^{l-1}_{2fwl}(v_{1},v_{2})\) into the aggregation and rewrite the update equation of 2-FWL as follows:
\[\mathcal{C}^{l}_{2fwl}(v_{1},v_{2})=\text{HASH}\left(\{\!\!\{\left(\mathcal{C}^{l-1}_{2fwl}(v_{1},v_{2}),\mathcal{C}^{l-1}_{2fwl}(v_{1},w),\mathcal{C}^{l-1}_{2fwl}(w,v_{2})\right)|\,w\in V(G)\}\!\!\}\right).\]
It is easy to verify that the above equation is equivalent to the original one. This means that 2-FWL updates the color of tuple \((v_{1},v_{2})\) by aggregating different tuples of \(((v_{1},v_{2}),(v_{1},w),(w,v_{2}))\), which can be viewed as aggregating information of 3-tuples \((v_{1},v_{2},w)\). For example, \(\text{HASH}(\mathcal{C}^{0}_{2fwl}(v_{1},v_{2}),\mathcal{C}^{0}_{2fwl}(v_{1},w),\mathcal{C}^{0}_{2fwl}(w,v_{2}))\) can fully recover the isomorphism type of tuple \((v_{1},v_{2},w)\) used in 3-WL. The above observations can be easily extended to \(k\)-FWL. The key insight here is that **tuple-style aggregation is the key to lifting the expressive power of \(k\)-FWL.** Since aggregating \(k\)-tuple can boost the expressive power of \(k\)-FWL to be equivalent to (\(k\)+1)-WL, we may wonder, what if we continue to extend the size of the tuple?
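Before extending the tuple size, the reformulated 2-FWL update above can be sketched compactly as follows; the plain-Python representation (colors stored in a dict keyed by 2-tuples, Python's `hash` standing in for HASH) is our own convention.

```python
from itertools import product

# Sketch of one refinement step of the reformulated 2-FWL update: the new color
# of (v1, v2) hashes the multiset of color triples over all w in V(G).
def fwl2_step(V, color):
    return {(v1, v2): hash(tuple(sorted(
                (color[v1, v2], color[v1, w], color[w, v2]) for w in V)))
            for v1, v2 in product(V, V)}
```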
Let \(Q^{F}_{\mathbf{w}}(\mathbf{v})\) denote a neighborhood tuple related to \(k\)-tuple \(\mathbf{v}\) and \(t\)-tuple \(\mathbf{w}\). The neighborhood tuple contains all possible results such that we sequentially select \(m\in[0,1,...,min(k,t)]\) elements from \(\mathbf{w}\) to replace \(m\) elements in \(\mathbf{v}\). Note that the selection and replacement procedure must follow a predefined order such that the result is invariant. For example, a possible \(Q^{F}_{(w_{1},w_{2})}(v_{1},v_{2})\) could be \(((v_{1},v_{2}),(v_{1},w_{1}),(v_{1},w_{2}),(w_{1},v_{2}),(w_{2},v_{2}),(w_{1},w_{2}))\), and the procedure of the construction is shown in Figure 1. We leave further discussion of the neighborhood tuple in Appendix A. By default, we adopt the above \(Q^{F}_{\mathbf{w}}(\mathbf{v})\) definition in the rest of the paper if \(k=t=2\). Then, we introduce \((k,t)\)-**FWL**, which extends \(k\)-FWL by extending the tuple size in the update function. Let \(\mathcal{C}^{l}_{ktfwl}(\mathbf{v})\) be the color of tuple \(\mathbf{v}\) at iteration \(l\) for \((k,t)\)-FWL, the update function of \((k,t)\)-FWL is:
\[(k,t)\text{-FWL:}\quad\mathcal{C}^{l}_{ktfwl}(\mathbf{v})=\text{HASH}\left(\mathcal{C}^{l-1}_{ktfwl}(\mathbf{v}),\{\!\!\{\left(\mathcal{C}^{l-1}_{ktfwl}(\mathbf{u})\,|\,\mathbf{u}\in Q^{F}_{\mathbf{w}}(\mathbf{v})\right)|\,\mathbf{w}\in V^{t}(G)\}\!\!\}_{t}\right), \tag{6}\]
where \(\{\!\!\{\cdot\}\!\!\}_{k}\) is a hierarchical multiset over \(k\)-tuples such that elements in the set are grouped hierarchically given the order of the tuple. For example, \(\{\!\!\{(v_{1},v_{2})\,|\,(v_{1},v_{2})\in V^{2}(G)\}\!\!\}_{2}=\{\!\!\{\{\!\!\{(v_{1},v_{2})\,|\,v_{1}\in V(G)\}\!\!\}\,|\,v_{2}\in V(G)\}\!\!\}\). It is easy to see that \((k,1)\)-FWL is equivalent to \(k\)-FWL under our definition of \(Q^{F}_{\mathbf{w}}(\mathbf{v})\). Further, as we only need to maintain representations of all \(k\)-tuples, \((k,t)\)-FWL has a fixed space complexity of \(O(n^{k})\). Here we show the expressive power of \((k,t)\)-FWL.
**Proposition 3.1**.: _For \(k\geq 2\) and \(t\geq 1\), if \(t\geq n-k\), \((k,t)\)-FWL can solve the graph isomorphism problems with the size of the graph less than or equal to \(n\)._
**Theorem 3.2**.: _For \(k\geq 2\), \(t\geq 1\), \((k,t)\)-FWL is at most as powerful as \((k+t)\)-WL. In particular, \((k,1)\)-FWL is as powerful as \((k+1)\)-WL._
**Proposition 3.3**.: _For \(k\geq 2\), \(t\geq 1\), \((k,t+1)\)-FWL is strictly more powerful than \((k,t)\)-FWL; \((k+1,t)\)-FWL is strictly more powerful than \((k,t)\)-FWL._
We leave all formal proofs in Appendix B. Briefly speaking, even if we fix the value of \(k\), \((k,t)\)-FWL can still construct an expressiveness hierarchy by varying \(t\). Further, if \(t\) is large enough, \((k,t)\)-FWL can actually enumerate all possible combinations of tuples up to the size of the graph, and is thus
Figure 1: Illustration of the construction of the neighborhood tuple \(Q^{F}_{(w_{1},w_{2})}(v_{1},v_{2})\) in \((2,2)\)-FWL. We sequentially select 0, 1, and 2 elements from \((w_{1},w_{2})\) to replace 0, 1, and 2 elements in \((v_{1},v_{2})\), resulting in three sub-tuples of lengths 1, 4, and 1, respectively. The final neighborhood tuple is the concatenation of the three sub-tuples. We can easily recover high-order graph structures from the constructed neighborhood tuple.
equivalent to relational pooling on graphs [15]. It is worth noting that the size of \(Q_{\mathbf{w}}^{F}(\mathbf{v})\) grows exponentially with the size of \(t\). However, the key contribution of \((k,t)\)-FWL is that even when \(k=2\), \((k,t)\)-FWL can still construct an expressiveness hierarchy for solving graph isomorphism problems. Therefore, high-order embeddings may not be necessary for building high-expressivity WL algorithms. Note that our \((k,t)\)-FWL is also different from subgraph GNNs such as \(k,l\)-WL [16] and \(l\)-OSAN [17], where \(l\)-tuples are labeled independently to enable learning \(k\)-tuple representations in all \(l\)-tuples' subgraphs, resulting in \(O(n^{k+l})\) space complexity.
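The default neighborhood tuple for \(k=t=2\) described above (and in Figure 1) can be sketched directly; the function name is our own, and the replacement order matches the example \(Q^{F}_{(w_{1},w_{2})}(v_{1},v_{2})\) given in the text.

```python
# Sketch of the neighborhood tuple Q^F_{(w1, w2)}(v1, v2) used by (2,2)-FWL
# (cf. Figure 1): sequentially replace 0, 1, then 2 elements of (v1, v2) by
# elements of (w1, w2), in a fixed order so the result is invariant.
def neighborhood_tuple(v1, v2, w1, w2):
    return ((v1, v2),                  # replace 0 elements
            (v1, w1), (v1, w2),        # replace the 2nd element by w1, then w2
            (w1, v2), (w2, v2),        # replace the 1st element by w1, then w2
            (w1, w2))                  # replace both elements

# e.g. neighborhood_tuple(1, 2, 3, 4)
# -> ((1, 2), (1, 3), (1, 4), (3, 2), (4, 2), (3, 4))
```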
### Rethinking and extending the aggregation scope of \(k\)-FWL
Another problem of \(k\)-FWL is its limited design space, as the only adjustable hyperparameter is \(k\). It is well known that there is a huge gap in expressive power even if we increase from \(k\) to \(k+1\). For example, 1-WL cannot count any cycle even with a length of 3, but 2-FWL can already count up to 7-cycle [18; 19; 20]. Moreover, increasing the expressive power does not always bring better performance when designing the corresponding neural version as it quickly leads to overfitting [11; 6; 9]. Therefore, we ask another question:
_Can we extend the \(k\)-FWL to a more flexible and fine-grained design space?_
To address this issue, we identify that the inflexibility of \(k\)-FWL's design space arises from the definition of the neighbors used in the aggregation step. Unlike 1-WL, \(k\)-FWL lacks the concept of local neighbors and instead requires the aggregation of all \(|V(G)|\) global neighbors to update the color of each tuple \(\mathbf{v}\). Recently, some works have extended \(k\)-WL by incorporating local information [8; 12]. Inspired by these works, we find that the definition of neighbors can actually be much more flexible than just considering local neighbors or global neighbors. Specifically, for each \(k\)-tuple \(\mathbf{v}\) in graph \(G\), we define an equivariant set \(ES(\mathbf{v})\) to be the neighbor set of tuple \(\mathbf{v}\) and propose \(k\)**-FWL+**. An equivariant set \(ES(\mathbf{v})\) is a set of nodes related to \(\mathbf{v}\) that is equivariant under any permutation \(g\in S_{n}\). That is, \(\forall w\in ES(\mathbf{v})\) in graph \(G\) implies \(g(w)\in ES(g\cdot\mathbf{v})\) in graph \(g\cdot G\). Some natural equivariant sets \(ES(v)\) include \(V(G)\), \(\mathcal{N}_{k}(v)\), and \(Q_{k}(v)\). Let \(\mathcal{C}^{l}_{kfwl+}(\mathbf{v})\) be the color of tuple \(\mathbf{v}\) at iteration \(l\) for \(k\)-FWL+; we have:
\[k\text{-FWL+:}\quad\mathcal{C}^{l}_{kfwl+}(\mathbf{v})=\text{HASH}\left(\mathcal{C}^{l-1}_{kfwl+}(\mathbf{v}),\{\!\!\{\left(\mathcal{C}^{l-1}_{kfwl+}(\mathbf{u})\,|\,\mathbf{u}\in Q^{F}_{w}(\mathbf{v})\right)|\,w\in ES(\mathbf{v})\}\!\!\}\right). \tag{7}\]

Combining the extended tuple aggregation of \((k,t)\)-FWL with the equivariant-set neighborhood of \(k\)-FWL+ results in \((k,t)\)-FWL+. Let \(ES^{t}(\mathbf{v})\) denote an equivariant set of \(t\)-tuples related to \(\mathbf{v}\); the update function of \((k,t)\)-FWL+ is:

\[(k,t)\text{-FWL+:}\quad\mathcal{C}^{l}_{ktfwl+}(\mathbf{v})=\text{HASH}\left(\mathcal{C}^{l-1}_{ktfwl+}(\mathbf{v}),\{\!\!\{\left(\mathcal{C}^{l-1}_{ktfwl+}(\mathbf{u})\,|\,\mathbf{u}\in Q^{F}_{\mathbf{w}}(\mathbf{v})\right)|\,\mathbf{w}\in ES^{t}(\mathbf{v})\}\!\!\}_{t}\right). \tag{8}\]
### Gallery of \((k,t)\)-FWL+ instances
In this section, we provide some practical designs of \(ES^{t}(\mathbf{v})\) that are closely correlated to many existing powerful GNN/WL models. Note that any model that directly follows the original \(k\)-FWL, like PPGN [9], can obviously be implemented by \((k,t)\)-FWL+, as \((k,t)\)-FWL+ also includes the original \(k\)-FWL. We write the \(k\)-tuple as \(\mathbf{v}=(v_{1},v_{2},\ldots,v_{k})\) to avoid any confusion.
**Proposition 3.4**.: _Let \(t=1\), \(k=2\), and \(ES(\mathbf{v})=\mathcal{N}_{1}(v_{1})\cup\mathcal{N}_{1}(v_{2})\), the corresponding \((k,t)\)-FWL+ instance is equivalent to SLFWL(2) [12] and strictly more powerful than any existing node-based subgraph GNNs._
**Proposition 3.5**.: _Let \(t=1\), \(k=k\), and \(ES(\mathbf{v})=\bigcup_{i=1}^{k}Q_{1}(v_{i})\), the corresponding \((k,t)\)-FWL+ instance is more powerful than \(\delta\)-k-LWL [8]._
**Proposition 3.6**.: _Let \(t=2\), \(k=2\), and \(ES^{2}(\mathbf{v})=(Q_{1}(v_{1})\cap Q_{1}(v_{2}))\times(Q_{1}(v_{1})\cap Q_{ 1}(v_{2}))\), the corresponding \((k,t)\)-FWL+ instance is more powerful than GraphSNN [21]._
**Proposition 3.7**.: _Let \(t=2\), \(k=2\), and \(ES^{2}(\mathbf{v})=Q_{1}(v_{2})\times Q_{1}(v_{1})\), the corresponding \((k,t)\)-FWL+ instance is more powerful than edge-based subgraph GNNs like \(\text{I}^{2}\)-GNN [19]._
**Proposition 3.8**.: _Let \(t=1\), \(k=2\), and \(ES(\mathbf{v})=Q_{\text{SPD}(v_{1},v_{2})}(v_{1})\cap Q_{1}(v_{2})\), the corresponding \((k,t)\)-FWL+ instance is at most as powerful as KP-GNN [22] with the peripheral subgraph encoder as powerful as 1-WL._
**Proposition 3.9**.: _Let \(t=2\), \(k=2\), and \(ES^{2}(\mathbf{v})=Q_{1}(v_{2})\times\mathcal{SP}(v_{1},v_{2})\), the corresponding \((k,t)\)-FWL+ is more powerful than GDGNN [23]._
We leave all formal proofs in Appendix C. Significantly, Proposition 3.7 indicates that \((k,t)\)-FWL+ can be more powerful than edge-based subgraph GNNs while requiring only \(O(n^{2})\) space, strictly lower than the \(O(nm)\) space used in edge-based subgraph GNNs, where \(m\) is the number of edges in the graph. However, most existing works employ MPNNs as their basic encoder, which differs from the tuple aggregation in \((k,t)\)-FWL+. Thus it is still non-trivial to find an instance that exactly fits many existing works. Nevertheless, given the strict hierarchy between node-based subgraph GNNs and SLFWL(2) [12], we conjecture that there also exists a strict hierarchy between the \((k,t)\)-FWL+ instances and the corresponding existing works based on MPNNs, and leave the detailed proof to future work.
## 4 Neighborhood\({}^{2}\)-GNN: a practical and powerful \((2,2)\)-FWL+ implementation
Due to the extremely flexible and broad design space of \((k,t)\)-FWL+, it is hard to evaluate all possible instances. Therefore, we focus on a particular instance that is both theoretically and practically sound.
### Neighborhood\({}^{2}\)-FWL
In this section, we propose a practical and powerful implementation of \((2,2)\)-FWL+ named Neighborhood\({}^{2}\)-FWL (N\({}^{2}\)-FWL). Let \(\mathcal{C}_{n^{2}fwl}^{l}(\mathbf{v})\) denote the color of tuple \(\mathbf{v}\) at iteration \(l\) for N\({}^{2}\)-FWL; the color is updated by:
\[\text{N}^{2}\text{-FWL:}\quad\mathcal{C}_{n^{2}fwl}^{l}(\mathbf{v})=\text{HASH}\Big(\mathcal{C}_{n^{2}fwl}^{l-1}(\mathbf{v}),\{\!\!\{\big(\mathcal{C}_{n^{2}fwl}^{l-1}(\mathbf{u})\,|\,\mathbf{u}\in Q_{\mathbf{w}}^{F}(\mathbf{v})\big)\,|\,\mathbf{w}\in N^{2}(\mathbf{v})\}\!\!\}_{2}\Big), \tag{9}\]
where \(N^{2}(\mathbf{v})=(\mathcal{N}_{1}(v_{2})\times\mathcal{N}_{1}(v_{1}))\cap(\mathcal{N}_{h}(v_{1})\cap\mathcal{N}_{h}(v_{2}))^{2}\) and \(h\) is the number of hops. Briefly speaking, N\({}^{2}\)-FWL considers neighbors of both nodes \(v_{1}\) and \(v_{2}\) within the overlapping \(h\)-hop subgraph between \(v_{1}\) and \(v_{2}\). In the following, we show the expressive power and counting power of N\({}^{2}\)-FWL.
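To make the set \(N^{2}(\mathbf{v})\) concrete, the following minimal Python sketch (assuming a `networkx` graph; `n2_neighborhood` is an illustrative helper of ours, not the authors' released code) enumerates it for a 2-tuple:

```python
import itertools
import networkx as nx

def n2_neighborhood(G, v1, v2, h):
    """Enumerate N^2(v) for v = (v1, v2): pairs (w1, w2) with w1 in N_1(v2)
    and w2 in N_1(v1), restricted to the overlapping h-hop neighborhood."""
    n1_v1 = set(G.neighbors(v1))            # N_1(v1)
    n1_v2 = set(G.neighbors(v2))            # N_1(v2)
    # Nodes within h hops of v1 and of v2 (the overlapping h-hop subgraph).
    h_v1 = set(nx.single_source_shortest_path_length(G, v1, cutoff=h))
    h_v2 = set(nx.single_source_shortest_path_length(G, v2, cutoff=h))
    overlap = h_v1 & h_v2
    # (N_1(v2) x N_1(v1)) intersected with overlap^2.
    return {(w1, w2) for w1, w2 in itertools.product(n1_v2, n1_v1)
            if w1 in overlap and w2 in overlap}

G = nx.cycle_graph(6)
print(sorted(n2_neighborhood(G, 0, 1, h=2)))
```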
**Theorem 4.1**.: _Given \(h\) is large enough, N\({}^{2}\)-FWL is more powerful than SLFWL(2) [12] and edge-based subgraph GNNs._
**Corollary 4.2**.: _N\({}^{2}\)-FWL is strictly more powerful than all existing node-based subgraph GNNs and no less powerful than 3-WL._
**Theorem 4.3**.: _N\({}^{2}\)-FWL can count up to (1) 6-cycles; (2) all connected graphlets with size 4; (3) 4-paths at node level._
### Neighborhood\({}^{2}\)-GNN
In this section, we provide a neural version of N\({}^{2}\)-FWL called Neighborhood\({}^{2}\)-GNN (N\({}^{2}\)-GNN). Let \(h_{\mathbf{v}}^{l}\) be the output of N\({}^{2}\)-GNN of tuple \(\mathbf{v}\) at layer \(l\) and \(h_{\mathbf{v}}^{0}\) encode the isomorphism type of tuple \(\mathbf{v}\). At the \(l\)-th layer, the representation of each tuple is updated by:
\[h_{\mathbf{v}}^{l}=\mathbf{U}^{l}\Big(h_{\mathbf{v}}^{l-1},\{\!\!\{\mathbf{M}^{l}\big(h_{\mathbf{u}}^{l-1}\,|\,\mathbf{u}\in Q_{\mathbf{w}}^{F}(\mathbf{v})\big)\,|\,\mathbf{w}\in N^{2}(\mathbf{v})\}\!\!\}_{2}\Big),\]

where \(\mathbf{U}^{l}\) and \(\mathbf{M}^{l}\) denote the learnable update and message functions (e.g., MLPs) at layer \(l\) that replace the HASH of Eq. (9).
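As an illustrative sketch of one such layer (a DeepSets-style sum aggregation in PyTorch; tuple representations are kept in a plain dictionary and \(Q_{\mathbf{w}}^{F}(\mathbf{v})\) is treated abstractly as a precomputed list of tuple pairs, so this is not the authors' implementation):

```python
import torch
import torch.nn as nn

class N2GNNLayerSketch(nn.Module):
    """One tuple-update layer: each 2-tuple v aggregates one message per
    w in N^2(v), built from the representations of the tuples in Q^F_w(v)."""
    def __init__(self, dim):
        super().__init__()
        self.message = nn.Linear(2 * dim, dim)  # M^l over the pair of tuples
        self.update = nn.Linear(2 * dim, dim)   # U^l combining h_v and aggregate

    def forward(self, h, neighborhoods):
        # h: dict 2-tuple -> (dim,) tensor; neighborhoods: dict v -> list of
        # (u1, u2) tuple pairs, one pair per w in N^2(v).
        new_h = {}
        for v, hv in h.items():
            msgs = [self.message(torch.cat([h[u1], h[u2]], dim=-1))
                    for (u1, u2) in neighborhoods[v]]
            # Sum over the multiset is permutation-invariant.
            agg = torch.stack(msgs).sum(dim=0) if msgs else torch.zeros_like(hv)
            new_h[v] = torch.relu(self.update(torch.cat([hv, agg], dim=-1)))
        return new_h
```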
## 5 Related Work

The method of [43] computes graph polynomial features and injects them into PPGN to achieve expressive power greater than 3-WL within \(O(n^{2})\) space. Although such feature-based methods are highly efficient, they often suffer from overfitting in real-world tasks and produce suboptimal results.
## 6 Experiments
In this section, we conduct various experiments on synthetic and real-world tasks to verify the effectiveness of N\({}^{2}\)-GNN. The details of all experiments can be found in Appendix E. We include additional experimental results and ablation studies in Appendix F. Our code is available at [https://github.com/JiaruiFeng/N2GNN](https://github.com/JiaruiFeng/N2GNN) for reproducing all results.
### Expressive power
**Datasets**. We evaluate the expressive power of N\({}^{2}\)-GNN on three synthetic datasets: EXP, CSL, and SR25 [44]. SR25 contains 15 non-isomorphic strongly regular graphs, each with 25 nodes; even 3-WL is unable to distinguish these graphs.
**Models**. As we are solely evaluating the expressive power of N\({}^{2}\)-GNN, we only report results for N\({}^{2}\)-GNN. We vary the hop \(h\) for different datasets and report the best result.
**Results**. We report results in Table 1. For EXP and CSL, we report the average accuracy over 10 cross-validation runs; for SR25, we report single-run accuracy. N\({}^{2}\)-GNN achieves perfect results on all three datasets, empirically verifying its expressive power. Notably, N\({}^{2}\)-GNN is able to distinguish strongly regular graphs that even 3-WL cannot differentiate, which empirically verifies Corollary 4.2. Despite this improved expressiveness, N\({}^{2}\)-GNN requires at most \(O(n^{2})\) space complexity, which is less than 3-WL. This further demonstrates the superiority of N\({}^{2}\)-GNN.
### Substructure counting
**Datasets**. To verify the substructure counting power of N\({}^{2}\)-GNN, we select the synthetic dataset from [25; 19]. The dataset consists of 5000 randomly generated graphs from different distributions, which are split into training, validation, and test sets with a ratio of 0.3/0.2/0.5. The task is to perform node-level counting regression. Specifically, we choose tailed-triangle, chordal cycles, 4-cliques, 4-paths, triangle-rectangle, 3-cycles, 4-cycles, 5-cycles, and 6-cycles as target substructures.
**Models**. We compare N\({}^{2}\)-GNN with the following baselines: Identity-aware GNN (ID-GNN) [28], Nested GNN (NGNN) [24], GNN-AK+ [25], PPGN [9], I\({}^{2}\)-GNN [19]. Results for all baselines are reported from [19].
**Results**. We report the results for counting substructures in Table 2. All results are the average normalized test MAE of three runs with different random seeds. Following [19], an MAE of less than 0.01 is taken as an indication of successful counting. We can see that N\({}^{2}\)-GNN achieves an MAE of less than 0.01 on all substructures/cycles, which empirically verifies
| Datasets | EXP | CSL | SR25 |
| --- | --- | --- | --- |
| N\({}^{2}\)-GNN | 100 | 100 | 100 |

Table 1: Expressive power verification (Accuracy).
| Target | ID-GNN [28] | NGNN [24] | GIN-AK+ [25] | PPGN [9] | I\({}^{2}\)-GNN [19] | N\({}^{2}\)-GNN |
| --- | --- | --- | --- | --- | --- | --- |
| Tailed Triangle | 0.1053 | 0.1044 | 0.0043 | 0.0026 | 0.0011 | 0.0033 |
| Chordal Cycle | 0.0454 | 0.0392 | 0.0112 | 0.0015 | 0.0010 | 0.0024 |
| 4-Clique | 0.0026 | 0.0045 | 0.0049 | 0.1646 | 0.0003 | 0.0005 |
| 4-Path | 0.0273 | 0.0244 | 0.0075 | 0.0041 | 0.0041 | 0.0038 |
| Tri.-Rec. | 0.0628 | 0.0729 | 0.1311 | 0.0144 | 0.0013 | 0.0055 |
| 3-Cycles | 0.0006 | 0.0003 | 0.0004 | 0.0003 | 0.0003 | 0.0004 |
| 4-Cycles | 0.0022 | 0.0013 | 0.0041 | 0.0009 | 0.0016 | 0.0031 |
| 5-Cycles | 0.0490 | 0.0402 | 0.0133 | 0.0036 | 0.0028 | 0.0043 |
| 6-Cycles | 0.0495 | 0.0439 | 0.0238 | 0.0071 | 0.0082 | 0.0077 |

Table 2: Evaluation on Counting Substructures (norm MAE); an MAE below 0.01 indicates successful counting.
Theorem 4.3. Specifically, N\({}^{2}\)-GNN achieves the best result on 4-path counting and comparable performance to I\({}^{2}\)-GNN on most substructures. Moreover, N\({}^{2}\)-GNN can count 4-cliques extremely well, which is theoretically and empirically infeasible for 3-WL [42] and PPGN [9].
### Molecular properties prediction
**Datasets**. To evaluate the performance of N\({}^{2}\)-GNN on real-world tasks, we select two popular molecular graphs datasets: QM9 [45; 46] and ZINC [47]. QM9 dataset contains over 130K molecules with 12 different molecular properties as the target regression task. The dataset is split into training, validation, and test sets with a ratio of 0.8/0.1/0.1. ZINC dataset has two variants: ZINC-subset (12k graphs) and ZINC-full (250k graphs), and the task is graph regression. The training, validation, and test splits for the ZINC datasets are provided.
**Models**. For QM9, we report baseline results of DTNN and MPNN from [45]. We further adopt PPGN [9], NGNN [24], KP-GIN\(\prime\) [22], and I\({}^{2}\)-GNN [19]. We report results of PPGN, NGNN, and KP-GIN\(\prime\) from [22] and results of I\({}^{2}\)-GNN from [19]. For ZINC, the baselines include CIN [48], \(\delta\)-2-GNN [8], KC-SetGNN [10], PPGN [9], Graphormer-GD [41], GPS [49], Specformer [50], NGNN [24], GNN-AK-ctx [25], ESAN [34], SUN [27], KP-GIN\(\prime\) [22], I\({}^{2}\)-GNN [19], and SSWL+ [12]. We report results of all baselines from [12; 22; 43; 10; 49; 50; 19].
**Results**. For QM9, we report the single-run final test MAE in Table 3. We can see that N\({}^{2}\)-GNN outperforms all other methods on almost all targets by a large margin. In particular, we achieve \(63.8\%\), \(54.7\%\), \(71.6\%\), and \(63.2\%\) improvement compared to the second-best results on targets \(U_{0}\), \(U\), \(H\), and \(G\), respectively. For ZINC-Subset and ZINC-Full, we report the average test MAE and standard deviation of 10 runs with different random seeds in Table 4. N\({}^{2}\)-GNN again surpasses all previous methods, achieving \(10.6\%\) and \(40.9\%\) improvement over the second-best methods. All these results demonstrate the superiority of N\({}^{2}\)-GNN on real-world tasks.
| Model | # Param | ZINC-Subset (Test MAE) | ZINC-Full (Test MAE) |
| --- | --- | --- | --- |
| CIN [48] | \(\sim\)100k | 0.079 \(\pm\) 0.006 | 0.022 \(\pm\) 0.002 |
| \(\delta\)-2-GNN [8] | - | - | 0.042 \(\pm\) 0.003 |
| KC-SetGNN [10] | - | 0.075 \(\pm\) 0.003 | - |
| PPGN [9] | - | 0.079 \(\pm\) 0.005 | 0.022 \(\pm\) 0.003 |
| Graphormer-GD [41] | 503k | 0.081 \(\pm\) 0.009 | 0.025 \(\pm\) 0.004 |
| GPS [49] | 424k | 0.070 \(\pm\) 0.004 | - |
| Specformer [50] | \(\sim\)500k | 0.066 \(\pm\) 0.003 | - |
| NGNN [24] | \(\sim\)500k | 0.111 \(\pm\) 0.003 | 0.029 \(\pm\) 0.001 |
| GNN-AK-ctx [25] | \(\sim\)500k | 0.093 \(\pm\) 0.002 | - |
| ESAN [34] | 446k | 0.097 \(\pm\) 0.006 | 0.025 \(\pm\) 0.003 |
| SUN [27] | 526k | 0.083 \(\pm\) 0.003 | 0.024 \(\pm\) 0.003 |
| KP-GIN\(\prime\) [22] | 489k | 0.093 \(\pm\) 0.007 | - |
| I\({}^{2}\)-GNN [19] | - | 0.083 \(\pm\) 0.001 | 0.023 \(\pm\) 0.001 |
| SSWL+ [12] | 387k | 0.070 \(\pm\) 0.005 | 0.022 \(\pm\) 0.002 |
| N\({}^{2}\)-GNN | 306k | **0.059 \(\pm\) 0.002** | **0.013 \(\pm\) 0.0005** |

Table 4: MAE results on ZINC (smaller the better).
| Target | DTNN [45] | MPNN [45] | PPGN [9] | NGNN [24] | KP-GIN\(\prime\) [22] | I\({}^{2}\)-GNN [19] | N\({}^{2}\)-GNN |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(\mu\) | 0.244 | 0.358 | **0.231** | 0.433 | 0.358 | 0.428 | 0.333 |
| \(\alpha\) | 0.95 | 0.89 | 0.382 | 0.265 | 0.233 | 0.230 | **0.193** |
| \(\varepsilon_{\text{HOMO}}\) | 0.00388 | 0.00541 | 0.00276 | 0.00279 | 0.00240 | 0.00261 | **0.00217** |
| \(\varepsilon_{\text{LUMO}}\) | 0.00512 | 0.00623 | 0.00287 | 0.00276 | 0.00236 | 0.00267 | **0.00210** |
| \(\Delta_{E}\) | 0.0112 | 0.0066 | 0.00406 | 0.00390 | 0.00333 | 0.00380 | **0.00304** |
| \(\langle R^{2}\rangle\) | 17.0 | 28.5 | 16.7 | 20.1 | 16.51 | 18.64 | **14.47** |
| ZPVE | 0.00172 | 0.00216 | 0.00064 | 0.00015 | 0.00017 | 0.00014 | **0.00013** |
| \(U_{0}\) | 2.43 | 2.05 | 0.234 | 0.205 | 0.0682 | 0.211 | **0.0247** |
| \(U\) | 2.43 | 2.00 | 0.234 | 0.200 | 0.0696 | 0.206 | **0.0315** |
| \(H\) | 2.43 | 2.02 | 0.229 | 0.249 | 0.0641 | 0.269 | **0.0182** |
| \(G\) | 2.43 | 2.02 | 0.238 | 0.253 | 0.0484 | 0.261 | **0.0178** |
| \(C_{v}\) | 0.27 | 0.42 | 0.184 | 0.0811 | 0.0869 | **0.0730** | 0.0760 |

Table 3: MAE results on QM9 (smaller the better).
## 7 Conclusion
In this work, we propose \((k,t)\)-FWL+, a flexible and powerful extension of \(k\)-FWL. First, \((k,t)\)-FWL+ expands the tuple aggregation style in \(k\)-FWL. We theoretically prove that even with a space complexity of \(O(n^{2})\), \((k,t)\)-FWL+ can still construct an expressiveness hierarchy up to solving the graph isomorphism problem. Second, \((k,t)\)-FWL+ extends the global neighborhood definition in \(k\)-FWL to any equivariant set, which enables a finer-grained design space. We show that the \((k,t)\)-FWL+ framework can implement many existing powerful GNN models with matching expressiveness. We further implement an instance named N\({}^{2}\)-GNN which is both practically powerful and theoretically expressive. Theoretically, it partially outperforms 3-WL while requiring only \(O(n^{2})\) space. Empirically, it achieves new SOTA on both ZINC-Subset and ZINC-Full. However, the practical complexity of N\({}^{2}\)-GNN can still be prohibitive, especially for dense graphs. Nevertheless, there still exists a vast unexplored landscape within \((k,t)\)-FWL+ that is worth further investigation. We envision that \((k,t)\)-FWL+ can serve as a promising general framework for designing highly effective GNNs tailored to real-world problems.
|
2301.05868 | Modulation spectral features for speech emotion recognition using deep
neural networks | This work explores the use of constant-Q transform based modulation spectral
features (CQT-MSF) for speech emotion recognition (SER). The human perception
and analysis of sound comprise of two important cognitive parts: early auditory
analysis and cortex-based processing. The early auditory analysis considers
spectrogram-based representation whereas cortex-based analysis includes
extraction of temporal modulations from the spectrogram. This temporal
modulation representation of spectrogram is called modulation spectral feature
(MSF). As the constant-Q transform (CQT) provides higher resolution at emotion
salient low-frequency regions of speech, we find that CQT-based spectrogram,
together with its temporal modulations, provides a representation enriched with
emotion-specific information. We argue that CQT-MSF when used with a
2-dimensional convolutional network can provide a time-shift invariant and
deformation insensitive representation for SER. Our results show that CQT-MSF
outperforms standard mel-scale based spectrogram and its modulation features on
two popular SER databases, Berlin EmoDB and RAVDESS. We also show that our
proposed feature outperforms the shift and deformation invariant scattering
transform coefficients, hence, showing the importance of joint hand-crafted and
self-learned feature extraction instead of reliance on complete hand-crafted
features. Finally, we perform Grad-CAM analysis to visually inspect the
contribution of constant-Q modulation features over SER. | Premjeet Singh, Md Sahidullah, Goutam Saha | 2023-01-14T09:36:49Z | http://arxiv.org/abs/2301.05868v1 | # Modulation spectral features for speech emotion recognition using deep neural networks
###### Abstract
This work explores the use of constant-Q transform based modulation spectral features (CQT-MSF) for speech emotion recognition (SER). The human perception and analysis of sound comprise of two important cognitive parts: early auditory analysis and cortex-based processing. The early auditory analysis considers spectrogram-based representation whereas cortex-based analysis includes extraction of temporal modulations from the spectrogram. This temporal modulation representation of spectrogram is called modulation spectral feature (MSF). As the constant-Q transform (CQT) provides higher resolution at emotion salient low-frequency regions of speech, we find that CQT-based spectrogram, together with its temporal modulations, provides a representation enriched with emotion-specific information. We argue that CQT-MSF when used with a 2-dimensional convolutional network can provide a time-shift invariant and deformation insensitive representation for SER. Our results show that CQT-MSF outperforms standard mel-scale based spectrogram and its modulation features on two popular SER databases, Berlin EmoDB and RAVDESS. We also show that our proposed feature outperforms the shift and deformation invariant scattering transform coefficients, hence, showing the importance of joint hand-crafted and self-learned feature extraction instead of reliance on complete hand-crafted features. Finally, we perform Grad-CAM analysis to visually inspect the contribution of constant-Q modulation features over SER.
keywords: Constant-Q transform, Convolutional neural network, Modulation spectrogram, Gammatone spectrogram, Shift invariance, Speech emotion recognition. +
Footnote †: journal: Speech Communication Journal
## 1 Introduction
Speech emotion recognition (SER) is the process of automatic prediction of a speaker's emotional state from his/her speech samples. A speech sample generally remains enriched with various information, such as speaker, language, emotion, context, recording environment, gender and age, intricately entangled with each other [1]. The human mind
is innately trained to disentangle such information; however, the same is not true for machines [2]. Machines need to be specifically trained to extract cues pertaining to a particular type of information. Among these, the extraction of emotion-specific cues for SER is still considered a challenging task. The challenge persists mainly because of the differences in the manner of emotion expression across individuals [3]. These differences stem from factors such as the speaker's culture and background, ethnicity, mood, gender, manner of speech, etc. [3; 4]. For automatic SER, a machine should be capable of extracting emotion-specific cues in the presence of all such variabilities.
SER finds application in several human-computer interaction domains such as sentiment analysis in customer service, health care systems, self-driving vehicles, auto-pilot systems, product advertisement and analysis [1; 3; 5]. One of the first seminal works in SER was aimed towards emotion information extraction using different speech cues [6]. Various works that followed discovered that _speech prosody_ (pitch, intonation, energy, loudness, etc.) contain significant information for emotion discrimination [7; 8; 9]. Similarly, several other works report that _spectral features_ (spectral flux, centroid, mel-frequency cepstral coefficients (MFCCs), etc.) and _voice-quality features_ (jitter, shimmer, harmonic-to-noise ratio (HNR), etc.) of speech are also important for SER [10]. For classification, these extracted features are processed with a classifier back-end such as _support vector machine_ (SVM), _Gaussian mixture model_ (GMM), and _k-nearest neighbour_ (k-NN) for emotion class prediction. These approaches which employ certain signal processing algorithm for feature extraction are termed hand-crafted feature based approaches for SER. Hand-crafted approaches enjoy the advantage of being interpretable, in terms of which feature or speech characteristic is more relevant for emotions, and are computationally inexpensive. However, hand-crafted features often suffer from _curse of dimensionality_, especially when _brute-force_ method based SER system is used [11].
Recent advancements in signal processing have introduced _deep neural networks_ (DNN) into the speech processing domain. DNNs have the impressive ability by which, given the required data, they automatically learn to obtain a possible solution to pattern recognition problems. This is accomplished by automatically updating the DNN parameters so as to reduce the defined loss function and approach towards the local minima. In SER system, deep networks are either used as automatic feature extractors or as classifiers for emotion class prediction. Recently, a new deep learning paradigm is also introduced which performs both feature extraction and emotion classification in an end-to-end fashion. Along these lines, several works use _convolutional neural network_ (CNN) as automatic feature extractor for SER [12; 13]. In contrast, other approaches use hand-crafted methods for feature extraction which are then used as features for DNN classifier input [14; 15; 16]. To obtain an end-to-end solution for SER works in [17; 18; 19] have used DNNs where the initial layers extract the emotion-relevant features and final layers act as classifier. In recent years, deep learning methods have been consistently shown to outperform hand-crafted feature based SER techniques.
In spite of their tremendous success, DNNs have major practical disadvantages. One such disadvantage is the requirement of large labelled database for proper DNN training [20]. In contrast to other speech classification problems, such as speech and speaker recognition, large speech corpora are not available for evaluating SER task. Various
ethical and legal issues make it difficult to collect large dataset of natural emotional voices from real-world scenario [3; 5]. To somewhat alleviate this issue, acted emotion recordings are generally used where skilled actors enact a predefined set of emotions. However, this approach is not considered very appropriate as acted emotions are often exaggerated versions of natural emotions [3; 5]. Another disadvantage of DNN is its complexity. Due to large trainable parameter set and high non-linear relationship between input and output, DNNs are often termed _black-box_ models, which are very difficult to understand/interpret [21; 22; 23]. As the training includes optimization of all DNN parameters, it takes much larger time for DNNs to train as compared to classical statistical methods (e.g., SVM or GMM). Hence, even though very appealing, DNN models are still far-off a completely optimized SER approach.
The above discussion leads to the conclusion that both hand-crafted and DNN based feature extraction methods have their own set of advantages and disadvantages. In this work, we aim to exploit the advantages of both the methods for improved SER performance. Our framework is similar to the combination of hand-crafted feature, in the form of time-frequency representation, and a DNN model for further feature enrichment as used in other SER works [14; 15; 16]. However, our approach incorporates the prior (domain) knowledge of speech processing in humans, i.e., early auditory and cortex-based processing of speech [24], for an improved hand-crafted feature representation. Being data-driven, DNN-based machine learning approaches suffer in performance, especially when there are constraints over the training data, e.g., limited size, ethical concerns in recording and poor quality of data [25], all of which are relevant for SER databases. Evidences reveal that such disadvantages can be alleviated by the use of domain knowledge [25; 26] in hand-crafted feature generation. Further, regarding speech representations, spectrogram and mel-spectrogram are considered the _de-facto_ standard of time-frequency representations in SER. However, they encompass only the early auditory processing of speech and lack cortical information.
Inspired by this fact, we first employ a hand-crafted feature extraction technique which combines an emotion relevant early auditory representation with corresponding cortex-based representation of the speech for SER. These features are then processed by a deep convolutional neural network which further extracts the emotion relevant information. Two machine learning frameworks are used at the back-end: Convolutional network with fully connected layer, and convolutional layer for embedding extraction with SVM classifier for final emotion class prediction. Such combination of multi-stage hand-crafted feature with DNN at back-end more closely follows the natural speech processing workflow in humans where the auditory system captures the signal and extracts the features, which are then transmitted to the inner regions of brain for further analysis and understanding. The achieved improvement in performance over different databases further consolidates our hypothesis of two-staged hand-crafted speech processing for SER. Figure 1 provides a general overview of the two-staged processing framework in human auditory system for SER.
In the next section (Section 2), we describe the relevant literature and discuss the motivation and major contributions of this work. Section 3 and 4 provides a brief introduction to the early auditory and cortex-based feature representations used in this
work. Section 5 describes the experimental setup used to perform the experiments. Section 6 describes the results obtained with the proposed feature and comparison with the standard features followed by corresponding discussion. Finally, Section 7 includes the conclusive statements of the work.
## 2 Related Works and Motivation
In this section, we provide a brief review of works related to the frequency localisation of emotions. We then discuss some works which describe the relevance of modulation spectrogram in speech processing. This is followed by description of the motivation of our proposed feature and the major contributions of this work.
### Literature Review
Several studies, aimed towards analysing the importance of spectral frequencies, have reported the prominence of low frequencies in SER. Authors in [27] report the
Figure 1: The two-staged speech processing in the human auditory system for SER. The input speech captured at ear is converted to a form similar to time-frequency (TF) representation by the early auditory processing filters present in cochlea. This representation is then passed on to the auditory cortex in the brain for processing with cortical filters. The highlighted part in brain image identifies the auditory cortex region of the brain. The cortical filter processing leads to a modulation spectrogram based representation. This is further processed by the inner regions of the brain to finally decode emotions. Our employed deep neural network, which is loosely based on the studies of the brain and nervous system, models the inner processing of the brain to identify the emotion classes from the input modulation spectrogram feature. The \(h_{\omega_{n}^{m}}(t)\) and \(g_{\omega_{k}^{m}}(t)\) depict the impulse response of \(n\)th early auditory and \(m\)th cortical processing filter, respectively. The Figure shows logarithm applied TF and modulation spectrogram representation.
prominence of the first formant frequency (F1) for recognition of _Anger_ and the second formant (F2) for recognition of _Neutral_. Studies performed in [28] found that high arousal emotions, e.g., _Anger_ and _Happy_, have a higher average F1 value and a lower F2 value. They also found that positive valence emotions (e.g., _Happy_, _Pride_, _Relief_) have a higher average F2 value. Authors in [29] also report discrimination between idle and negative emotions using the temporal patterns of the first two formant frequencies. In [30], authors show that non-linear frequency scales (e.g., equivalent rectangular bandwidth (ERB), mel, logarithmic), when applied for sub-band partitioning and energy computation over a discrete Fourier transform based spectrogram, result in improved SER accuracy. Such studies hint toward the requirement of a non-linear frequency scale based time-frequency representation with higher emphasis on the low-frequency regions of speech.
Regarding human sound perception, evidence in the literature suggests that the process of auditory signal analysis can be modelled in two stages: (i) the _early auditory stage_, which models the incoming audio signal into a spectrogram based representation, and (ii) the _cortical analysis stage_, which extracts the spectro-temporal modulation relationship among different audio cues from the auditory spectrogram [24; 31]. Such a modelling strategy has been found effective in the analysis of both speech and music signals [24]. The spectral and temporal modulation features of the speech spectrogram are also highly related to speech intelligibility, noise, and reverberation effects [31]. In [32], authors report that the spectro-temporal representation of audio (non-speech) signals with positive or negative valence is different from that of neutral sounds. They also report that spectral frequency and temporal modulation frequency can represent the valence information of sounds.
Authors in [33] compared the temporal modulations of human speech with screamed voice and concluded that slow temporal variations (\(<20\) Hz) contain most of the linguistic information (both prosodic and syllabic cues).
In speech analysis, temporal modulation features are called _modulation spectral features_ (MSFs). Owing to their relatedness to speech intelligibility, MSFs have been extensively used in speech processing. Some works have also successfully explored modulation features for speaker identification, verification, and audio coding [34; 35]. The author of [36] provides a comprehensive description of the history of the use of modulation features in speech recognition.
Modulation features have also been explored for emotion recognition in speech. Authors in [37] used MSF for emotion recognition and provided a detailed explanation of its relevance for SER. In [38], authors used a smoothed nonlinear operator to obtain the amplitude modulated power spectrum of the gammatone filterbank generated spectrogram and showed improvement over standard MFCC for SER. Authors in [39] studied the relationship between human emotion perception and the MSFs of emotional speech and concluded on the suitability of modulation features for emotion recognition. Authors in [40] used 3-D convolutions and attention-based recurrent networks to combine auditory analysis and attention mechanisms for SER. This work also explains that the temporal modulations extracted from auditory analysis contain periodicity information important for emotion recognition. In [41], various feature poolings, such as _mean_, _standard deviation_, and _kurtosis_, are applied over frame-level MSF measures for "in-the-wild" dimension-based SER (dimensional SER includes projection of speech onto three emotion
dimensions: _valence_, _arousal_ and _dominance_). The authors report improvement in results over frame-wise modulation spectral feature baseline for various noise and reverberated speech scenarios. Similar MSF measures when used with Bag-of-Audio-Words (BoAW) approach showed SER improvement against environmental noise in [42]. In [43], authors use modulation spectral features with convolutional neural networks to discriminate between stress-based speech and neutral speech. The authors show that the modulation spectral features when used with CNN with the time frames intact (without statistics pooling over time frames of MSF) gives better performance, especially over increased number of target emotion classes. In [44], authors show that joint spectro-temporal modulation representation outperforms standard MFCC in emotion classification of noisy speech. Recently, the authors in [45] have also used MSF over cochleagram features with a long short term memory (LSTM) based system for dimensional SER. The work explains that arousal information can be characterised by the amplitude envelope of speech signal, whereas valence information is characterised by the temporal dynamics of amplitude envelope. Since it is difficult to obtain such dynamics from low-level descriptor (LLD) features, auditory analysis based temporal modulation features can potentially represent the required temporal dynamics for SER.
### Motivation and Contributions
The literature in SER reveals two important speech characteristics for emotion prediction: the importance of low frequencies, and the importance of temporal modulations of spectrogram. To address the importance of low-frequency information, we use constant-Q transform (CQT) based time-frequency representation for SER. CQT provides higher frequency resolution and increased time invariance at low frequencies thereby emphasizing the low-frequency regions of speech [46]. This helps in better resolution of emotion salient frequency regions of speech and improved SER performance [47]. CQT is also known to provide a representation with visible pitch frequency and well-separated pitch harmonics [48]. Because of high relevance of pitch information in emotion discrimination, this property of CQT makes it more suitable for SER over standard mel-based features.
To further enhance the CQT-based system while utilising the domain knowledge of human auditory-cortical physiology [31], we propose to use temporal modulations of the CQT spectrogram representation for SER. Specifically, we use the CQT spectrogram representation for auditory analysis and extract temporal modulations of the CQT by again using constant-Q filters for cortical analysis. In this way, we obtain the temporal modulation of the emotion salient low-frequency regions which are emphasized by CQT. Studies show that such use of a constant-Q modulation filterbank better approximates the cortical sound processing in humans [49; 50]. The constant-Q factor characteristic of the modulation filters also leads to higher resolution at lower modulation frequencies, hence providing an arrangement that helps in identifying any deviation from the general (or _Neutral_) speech modulation rate (2-4 Hz) [51]. Our choice of constant-Q filters in both stages is also inspired by the study of the early auditory and cortical stages of the mammalian auditory cortex [31; 52]. We term our proposed feature the _constant-Q transform based modulation spectral feature_ (CQT-MSF). A 2-dimensional convolutional neural network architecture (2-D CNN) is used to further refine the emotion information present in the CQT-MSF feature. We compare the performance of CQT-MSF with mel-frequency spectral coefficients (MFSC) and show that the constant-Q non-linearity based auditory-cortical
features outperform the mel-scale non-linearity based features. We also investigate the performance differences obtained with auditory and cortical representations taken separately. We also highlight the striking similarity of CQT-MSF with the wavelet-based time-shift and deformation invariant coefficients, known as scattering transform coefficients [53]. Our main contributions in this work are as follows:
* This study proposes a new human auditory-cortical physiology based SER framework.
* We propose a modulation feature extraction technique using constant-Q filterbank over constant-Q spectrogram and analyse its relevance from vocal emotion perspective.
* We perform similarity analysis with another two-staged auditory-cortical feature representation: Scattering transform.
* We also perform explainability analysis to visually inspect different regions of CQT-MSF that weigh the most in prediction of a particular emotion class.
* The study further hints at a correlation between music training and emotion understanding by discussing the case of _Amusia_ [54], and the possible analogy between modulations computed over the CQT spectrogram and the cortex-level processing of sound in music-trained individuals [55; 56; 57; 58; 59; 60].
## 3 Early auditory processing: Constant-Q Transform (CQT)
Our proposed features are based on the use of constant-Q filterbanks for both time-frequency (early auditory) and temporal modulation (cortical) based analysis of speech. In this section, we briefly discuss the CQT method of time-frequency representation. CQT uses constant _quality factor_ (Q-factor) bandpass filters with logarithmically spaced center frequencies [61]. Mathematical formulation of constant-Q transform is given by,
\[X^{CQT}[k,n]=\sum_{j=n-\lfloor N_{k}/2\rfloor}^{n+\lfloor N_{k}/2\rfloor}x(j)\,a_{k}^{*}(j-n+N_{k}/2) \tag{1}\]
where \(k\) denotes the CQT frequency index, \(\lfloor.\rfloor\) denotes the rounding-off to nearest integer towards negative infinity and \(a_{k}^{*}(n)\) is the complex conjugate of the CQT basis function for \(k^{\text{th}}\) CQT bin. The CQT basis, or the time-frequency _atom_, is a complex time domain waveform given as,
\[a_{k}(n)\ =\ \frac{1}{N_{k}}w\left(\frac{n}{N_{k}}\right)exp\left[-i2\pi n \frac{f_{k}}{f_{s}}\right] \tag{2}\]
where \(f_{k}\) is the center frequency of \(a_{k}\), \(f_{s}\) is the sampling frequency and \(w(n)\) is the window function with length \(N_{k}\). We use the standard _Hann_ window in this work for CQT computation. The center frequencies of filters in constant-Q transform are
spaced by the relation \(f_{k}~{}=~{}f_{\min}2^{\frac{k-1}{B}}\) where \(f_{k}\) is the frequency of \(k\)th filterbank, \(f_{\min}\) being the frequency of the lowest bin and \(B\) the number of frequency bins used per octave of frequency. This binary logarithmic spacing leads to more frequency bins at lower frequencies, as compared to high frequencies, and hence provides higher frequency resolution at low frequencies [61]. In time domain, such filters can be given as truncated sinusoids (e.g., truncated with _Hann_ window) with different lengths [62], given by,
\[N_{k}~{}=~{}\frac{qf_{s}}{f_{k}(2^{\frac{1}{B}}-1)} \tag{3}\]
where \(q\) is the filter scaling factor. This scaling factor allows changing the time (and hence frequency) resolution of the CQT bases without affecting \(B\) [62]. When compared with mel-based features, the mel scale is also logarithmic in nature. However, the mel scale uses a decadic logarithm (or a natural logarithm in some implementations), because of which the emphasis on low frequencies is not as prominent as in CQT.
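The following minimal NumPy sketch (illustrative only, not the librosa implementation) constructs one such time-frequency atom from Eqs. (2)-(3); note how the atom length \(N_{k}\) shrinks as the center frequency grows:

```python
import numpy as np

def cqt_atom(f_k, f_s, B=3, q=1.0):
    """Hann-windowed complex sinusoid a_k(n) of length N_k (Eqs. 2-3)."""
    N_k = int(round(q * f_s / (f_k * (2.0 ** (1.0 / B) - 1.0))))
    n = np.arange(N_k)
    window = np.hanning(N_k)                           # w(n / N_k)
    return (window / N_k) * np.exp(-2j * np.pi * n * f_k / f_s)

# Atoms for the three bins of the first octave above f_min = 32.7 Hz.
f_min, f_s = 32.7, 16000
atoms = [cqt_atom(f_min * 2 ** (k / 3), f_s) for k in range(3)]
print([a.size for a in atoms])  # atom length decreases with frequency
```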
The computation of CQT, as described in Eq. 1, includes convolution of _atom_ with every time sample of the input signal. However, the fast CQT computation algorithm [62] introduced a _hop length_ parameter, which describes the number of samples the time window is shifted for next time frame CQT computation. The hop length is kept equal to integer multiples of \(2^{\text{No. of octaves}}\) so that the corresponding signal frames at different frequencies do not fall out of alignment [62]. In CQT representation, the number of octaves is given by \(\log_{2}\frac{F_{\max}}{F_{\min}}\)[61] where \(F_{\min}\) and \(F_{\max}\) are the minimum and maximum frequency of operation, respectively. For CQT computation in this work, we use the _LibROSA1_ toolkit [63] which follows all the computational details of the fast CQT implementation mentioned above.
Footnote 1: [https://librosa.github.io/](https://librosa.github.io/)
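In practice, the early auditory representation \(|Y(t,\omega^{a})|\) described above can be obtained in a few lines of Python (a sketch only: "speech.wav" is a placeholder path, and we use a hop of 128 here because, to our understanding, librosa constrains the hop length to a multiple of \(2^{\text{octaves}-1}\) for an 8-octave analysis):

```python
import librosa
import numpy as np

y, sr = librosa.load("speech.wav", sr=16000)
C = np.abs(librosa.cqt(y, sr=sr,
                       fmin=32.7,          # F_min: lowest CQT bin
                       n_bins=24,          # 8 octaves x 3 bins/octave
                       bins_per_octave=3,
                       hop_length=128))    # multiple of 2**(octaves - 1)
print(C.shape)  # (24, n_frames): 24-channel early auditory representation
```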
Fig. 2: Auditory and Modulation filter banks used in CQT-MSF. The modulation filters shown here have scale factor (\(q\)) value 2.
## 4 Cortex-based processing: Modulation Spectrogram
Modulation spectrogram shows the temporal variation pattern of the spectral components in a spectrogram. According to [51], a speech signal is composed of two parts: the carrier, i.e., the vocal cord excitation, and the varying modulation envelope, which is the result of changes in the orientation of different vocal organs over time. The low frequencies of the modulation envelope characterise slow variations of the complete spectral structure, which are known to encode most of the phonetic information [36; 64; 65]. Let \(S(t,\ \omega)\) be the speech spectrogram. The temporal evolution of a frequency bin \(\omega_{o}\) in \(S(t,\ \omega)\), over time \(t\), is a one-dimensional time-series. The spectral representation of this time-series \(S(t,\ \omega_{o})\) constitutes the modulation spectrum of frequency bin \(\omega_{o}\) over \(T\), where \(T\) is the spectrogram time window (with duration equal to window length \(N_{k}\)).
For speech, most of the modulation energy remains concentrated in the 2-4 Hz range with a peak at 4 Hz [51]. This makes 4 Hz the commonly considered syllabic rate of normal (_Neutral_) speech. Deviations from this rate generally result from the influence of noise or reverberation effects over speech [65; 66]. It is reported in the SER literature that the rate of speech is higher than that of the _Neutral_ class for high arousal emotions, such as _Anger_ and _Fear_, and lower for low arousal emotions, such as _Sad_ and _Boredom_ [67]. Hence, this deviation of the modulation energy peak from 4 Hz can be used for emotion discrimination along the arousal scale.
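A quick way to see this syllabic-rate peak is to take the FFT of a single band envelope \(S(t,\omega_{o})\) and locate its maximum; the toy sketch below (illustrative only) recovers the 4 Hz modulation of a synthetic _Neutral_-rate envelope sampled at a 250 frames/s spectrogram rate:

```python
import numpy as np

def modulation_peak_hz(band_envelope, frame_rate):
    """Dominant temporal-modulation frequency of one spectrogram bin,
    from the FFT of its mean-removed envelope."""
    env = band_envelope - band_envelope.mean()   # drop DC before peak-picking
    spectrum = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(env.size, d=1.0 / frame_rate)
    return freqs[np.argmax(spectrum)]

frame_rate = 250.0                               # spectrogram frames per second
t = np.arange(0, 2.0, 1.0 / frame_rate)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 4.0 * t)  # 4 Hz syllabic modulation
print(modulation_peak_hz(envelope, frame_rate))  # ~4.0 Hz
```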
Figure 3: Block diagram of proposed CQT-MSF feature extraction method. In figure, \(\omega^{a}\) refers to acoustic frequency and \(\omega^{m}\) refers to the modulation frequency.
### Constant-Q based Modulation Spectral Features (CQT-MSF)
In this subsection, we compute modulations of CQT bins and combine them with CQT spectrogram to generate CQT-MSF. The first stage early auditory analysis (CQT spectrogram) in the CQT-MSF feature can be given as,
\[Y(t,\ \omega^{a})=s(t)*h_{\omega^{a}_{n}}(t);\ \omega^{a}_{0}\leq\omega^{a}_{n}< \omega^{a}_{C}, \tag{4}\]
where, \(s(t)\) is the input speech signal, \(h_{\omega^{a}_{n}}(t)\) is the impulse response of \(n\)th constant quality factor auditory filter with \(\omega^{a}_{n}\) center frequency, \(C\) is the number of auditory filters and \(Y(t,\ \omega^{a})\) is the corresponding time-frequency representation. Fig. 2a shows the frequency response of different \(h_{\omega^{a}_{n}}(t)\) used. For envelope extraction, modulus operation is applied over \(Y(t,\ \omega^{a})\), i.e., \(|Y(t,\ \omega^{a})|\). The resulting representation provides the temporal trajectories of different frequency bins in \(Y(t,\ \omega^{a})\).
For cortical analysis, the \(|Y(t,\ \omega^{a})|\) is passed through a modulation filterbank. The modulation spectrogram computed over time-frequency representation \(|Y(t,\ \omega^{a})|\), is given as,
\[Y(t,\omega^{a},\omega^{m})=|Y(t,\ \omega^{a}_{n})|*g_{\omega^{m}_{k}}(t);\ \omega^{m}_{0}\leq\omega^{m}_{k}<\omega^{m}_{M},\ \omega^{a}_{0}\leq\omega^{a}_{n}<\omega^{a}_{C}, \tag{5}\]
where \(g_{\omega^{m}_{k}}(t)\) is the impulse response of the \(k\)th modulation filter with center frequency \(\omega^{m}_{k}\) and \(M\) is the total number of modulation filters. Similar to the output of the first stage, we use the modulus of the modulation spectrum coefficients computed over all
Figure 4: Modulation spectral features (averaged over time) for different emotions in EmoDB. The ‘**M F**’ refers to the modulation frequency channels and ‘**A F**’ refers to the acoustic frequency channels. The modulation filters used in this analysis has filter scale (\(q\)) value 2.
frequency bins of CQT spectrogram, i.e., \(|Y(t,\omega^{a},\omega^{m})|\)[51]. Fig. 3 shows the block diagram of CQT-MSF feature extraction. The complete MSF includes concatenation of temporal modulations, computed using every modulation filter (\(g_{\omega_{0}^{m}}\leq g_{\omega_{k}^{m}}<g_{\omega_{M}^{m}}\)), of all frequency bins in the time-frequency representation, i.e., for \(\omega_{0}^{a}\leq\omega_{n}^{a}<\omega_{C}^{a}\) bins where \(C=24\) auditory channels in our experiments. Regarding the properties of MSF, study performed in [31] report distinction between three different temporal modulation rates: slow, intermediate, and fast. The slow modulation rate is shown to roughly correspond to the syllable or speaking rate. Whereas the intermediate modulation rate appearing because of interharmonic interaction is shown to reflect the fundamental frequency of the signal. This shows the importance of temporal modulation for pitch representation, and hence, SER. Temporal modulation extracted by MSF represent tempo [68], pitch, and timber [52], all of which are related to emotion information in speech.
Fig. 4 shows the time-averaged CQT-MSF coefficients for utterances of different emotion classes of the EmoDB database. The MF and AF refer to the modulation and auditory frequency channels, respectively. In terms of modulation frequency, the highest peak in the _Neutral_ emotion is observed at 4 Hz modulation frequency, with another peak around 0.5 Hz. Compared to the _Neutral_ class, low arousal emotions (_Boredom_ and _Sad_) also have energies extending towards the 0-4 Hz modulation frequency range. High arousal emotions (_Anger_, _Fear_) have peaks in the 4-8 Hz modulation frequency range. In contrast, _Happy_ also has a peak at 4 Hz, similar to _Neutral_. Similarly, _Disgust_ also shows a peak at 4 Hz followed by another peak at 2 Hz. From the AF perspective, the _Anger_ emotion shows a peak at high frequencies (high AF), whereas in _Sad_, low auditory frequencies are more dominant. For the remaining emotions, the auditory energy distribution extends almost similarly over mid-auditory frequencies. This analysis shows the higher emotion discrimination potential of combined MF and AF channels as compared to an only-AF channel-based representation.
To further analyze the discriminative potential of modulation spectrum features, we perform F-ratio analysis between the time-averaged modulation features of various emotion classes and the _Neutral_ class of the EmoDB database [69, 70]. Fig. 5 shows the 3-D projection of F-ratio values in AF-MF plane. Different auditory and modulation bins show varying discriminative characteristics for different emotions. For every emotion class, F-ratio peaks are observed at low MF bins showing their potential for emotion discrimination. Similarly, low AF also shows high F-ratio for every class except _Sad_. High arousal emotions (_Anger_, _Happy_, _Fear, etc._) in general show greater F-ratios at high MF bins as compared to low arousal emotions (_Sad_, _Boredom_). The highest F-ratio value with respect to _Neutral_ is observed for _Anger_ and lowest for _Boredom_ emotion class. For _Anger_ and _Happy_ classes, low AF bins exhibit higher discriminative characteristic. _Disgust_ shows F-ratio peaks over wide range of AF and corresponding MF bins. In _Fear_, F-ratio peaks are observed at low and high AF values with gradual slope towards increasing MF bins. The presence of moderately higher F-ratio values at MF bins is a result of increase in speaking rate in high arousal emotions. _Boredom_ class has lowest F-ratio values, mostly focused at low AF and low MF bins, whereas, _Sad_ shows higher discrimination w.r.t. _Neutral_ over low and high AF and MF bins. Lower F-ratio values for _Boredom_ also indicate its similarity in characteristics with _Neutral_ class. The F-ratio analysis again shows the higher discrimination potential of joint AF and MF bins, with respect to _Neutral_ emotion.
### Comparison between CQT-MSF and Scattering Transform
Our proposed CQT-MSF feature, combined with standalone CQT, has striking similarity with _scattering transform_ feature representation of 1-D signals. Authors in [53] compute scattering coefficients of 1-D signals and show their characteristic invariance against temporal shifts and deformations. The features (or coefficients) are computed by convolving the signal with a set of predefined filter kernels. The feature extraction process includes the following steps: 1) Scalogram computation by passing the signal through a bank of wavelet filters. 2) Passing the obtained time-series of frequency bins in scalogram through another set of wavelet filterbank to obtain modulation spectrogram. 3) Introduce stability to deformations by low-pass filtering the signal, scalogram and modulation spectrogram coefficients. The scattering transform coefficients are mathematically described as,
\[S_{J_{2}}x(t)=U_{2}x*\phi_{2^{J}}(t)=\int U_{2}x(u)\phi_{2^{J}}(t-u)du \tag{6}\]
where,
\[U_{2}x=U[\lambda_{2}]U[\lambda_{1}]x=||x*\psi_{\lambda 1}|*\psi_{\lambda 2}|. \tag{7}\]
Here, \(x\) is the 1-D signal, \(\phi_{2^{J}}(t)\) defines the averaging low-pass filter with scale \(2^{J}\), \(\psi_{\lambda_{N}}\) describes the \(N\)th layer complex _Mortel_ wavelet filterbank (layer 1 are scalogram
Fig. 5: F-ratio values of different auditory frequency (AF) and modulation frequency (MF) bins of modulation spectrum features computed over EmoDB database. The F-ratio is calculated over time-averaged modulation spectrum features (MSF) between _Neutral_ and every other emotion class. The modulation filters used in this analysis have filter scale (\(q\)) value 2.
coefficients and layer 2 constitutes modulation coefficients) and operator '\(*\)' is the convolution operator. Scattering coefficients are found useful in various speech and audio processing domains, e.g., speech recognition [53], speaker identification [71], urban and environmental sound classification [72], etc. In [73], scattering coefficients also showed improvement in SER performance over mel-frequency cepstral coefficients (MFCCs).
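For reference, first- and second-order scattering coefficients of the kind defined in Eqs. (6)-(7) can be computed with the kymatio package (a sketch with arbitrary toy parameters, not the configuration used in [73]):

```python
import numpy as np
from kymatio.numpy import Scattering1D

T = 2 ** 14                      # signal length in samples
J, Q = 8, 3                      # averaging scale 2^J, Q wavelets per octave
scattering = Scattering1D(J=J, shape=T, Q=Q)

x = np.random.randn(T).astype(np.float32)
Sx = scattering(x)               # rows: order-0/1/2 coefficients over time
print(Sx.shape)
```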
In our proposed CQT-MSF feature, the CQT time-frequency representation is similar to the first-layer scalogram coefficients computed by the scattering transform. Similarly, the MSF computed over CQT is similar to the modulation spectrogram computed over the scalogram in the second layer of the scattering transform. Also, CQT follows the same constant-Q non-linearity as the filterbanks in both the first and second layers of the scattering transform. However, the averaging performed in scattering coefficients to obtain invariance to time-shift and deformations is absent in CQT-MSF. The design parameter (e.g., bandwidth) of the low-pass filter which performs this averaging in the scattering transform is manually selected depending upon the input signal characteristics. To address the absence of time-shift invariance, we employ a 2-D convolutional neural network over the computed CQT-MSF. The employed CNN architecture includes multiple layers with different filter scale values in different layers. According to [74], convolutional neural networks inherently exhibit invariance in the vertical direction (the direction of network depth), which mainly appears due to feature pooling. Hence, a generic CNN architecture with pooling layers can learn to apply the required averaging and obtain the required time-shift invariance characteristic.

Fig. 6: Visual description of deformation stability of STFT and CQT. a) Signal \(x(t)\) (left) and its deformed version \(x^{\prime}(t)\) (right). b) CQT representations of the same signal and its deformed version. c) CQT of the original and deformed signals projected on a linear frequency scale. Figure taken with permission from [46].
Regarding sensitivity to deformation, authors in [74] and [75] prove that convolutional feature extractors provide inherent but limited deformation stability. The extent of this stability depends upon the deformation sensitivity of the input signal. Signals which are slowly varying or band-limited are more deformation insensitive than signals with sudden changes or discontinuities [75]. Since CQT, like the mel filters, provides a non-uniform filterbank representation, similar stability to temporal deformation can be assumed for the CQT filterbank as well. Fig. 6 shows the deformation stability of STFT, CQT, and CQT with a linear frequency scale for a signal \(x(t)\) deformed by a factor \(\epsilon t\) (i.e., \(x^{\prime}(t)=x(t-\epsilon t)=x((1-\epsilon)t)\)) [53]. The upward shift of the spectral response in STFT appears due to the '\(\epsilon t\)' term, leading to instability to the deformation imposed by \(\epsilon t\). However, in CQT, the spectral responses of the deformed signal do not show any major frequency shift. Instead, the deformed signal overlaps well with the original signal because of the higher filter bandwidth at higher frequencies. This shows that CQT is indeed deformation stable as compared to STFT. The linear-frequency CQT plot is given for comparison of STFT and CQT over a linear frequency scale, hence confirming the deformation stability of CQT in both linear and non-linear frequency scales.
Therefore, convolutional neural network layers can be used to inherently provide the required time-shift and deformation invariance for a better emotion-rich representation at their output. We hence write the feature extracted by the convolution layers of our employed DNN model as,
\[S=F(||x*\psi_{\lambda 1}|*\psi_{\lambda 2}|) \tag{8}\]
where \(F(.)\) is the function estimated by the 2-D convolution layers, and \(\psi_{\lambda 1}\) and \(\psi_{\lambda 2}\) correspond to the filterbanks \(h_{\omega_{n}^{a}}(t)\) and \(g_{\omega_{k}^{m}}(t)\) used in the CQT-MSF generation (Fig. 2). Another point of dissimilarity is the difference between the basis functions used in the filterbanks of the scattering transform and CQT-MSF. The former uses _Morlet_ wavelets, whereas the latter employs sinusoids multiplied with a _Hann_ window function. The ripples observed in the frequency response of the filters in Fig. 2 are because of the small spectral leakage of the _Hann_ window.
## 5 Experimental Setup
### Database Description
For analysis of CQT-MSF and its comparison with mel-scale features, we perform experiments with two different speech corpora. We use Berlin EmoDB and RAVDESS datasets which are most widely used and publicly available.
#### 5.1.1 Berlin Emotion Database (EmoDB)
Berlin Emotion Database [76] contains acted emotional speech recordings of 10 professional artists (5 female and 5 male). The actors speak ten emotionally neutral and phonetically rich sentences in German language. Seven different emotion categories are used in the database: _Anger, Happy, Fear, Sad, Boredom, Disgust,_ and _Neutral_. To
evaluate the authenticity of the recordings, a listening test was performed by 20 subjects. A total of 800 utterances were recorded, but only 535, having more than 80% recognition rate and 60% naturalness, were finally selected. Our choice of this database is explained by its diligent recording setup, its popularity in the SER domain [77; 78; 79; 80; 8; 12], and its free availability.
#### 5.1.2 Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS)
The RAVDESS database [81] contains acted utterances of 12 male and 12 female artists speaking English language. A total of 7536 clips were recorded in three different modalities, namely audio-only, video-only, and audio-video, out of which the audio-only modality contains 1440 spoken utterances from all speakers. The database includes eight different emotion categories (_Happy, Anger, Sad, Neutral, Disgust, Calm, Surprised,_ and _Fear_) with two intensity levels, strong and normal. Recorded clips were evaluated by 319 subjects out of which 247 tested the validity and 72 evaluated test-retest reliability of recordings. An average of 60% accuracy was obtained in validity test over recordings of all emotions. Up-to-date design and inclusion of an extensive emotion set with varying intensities make this an important database for SER.
To further increase the diversity in the training data, five-fold data augmentation following the x-vector _Kaldi_2 recipe is used [82]. The augmentation adds additive and reverberation noise to the clean speech samples. The RAVDESS database is downsampled to 16 kHz before data augmentation and feature extraction.
Footnote 2: [https://github.com/kaldi-asr/kaldi/tree/master/egs/voxceleb/v2](https://github.com/kaldi-asr/kaldi/tree/master/egs/voxceleb/v2)
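A minimal sketch of the augmentation idea is given below (additive noise at a target SNR plus reverberation by room-impulse-response convolution); the actual setup follows the Kaldi recipe above, with its MUSAN noise and RIR corpora:

```python
import numpy as np

def augment(clean, noise, rir, snr_db):
    """Toy additive-noise + reverberation augmentation of a clean waveform.
    `noise` and `rir` are hypothetical 1-D arrays standing in for a MUSAN
    noise clip and a simulated room impulse response."""
    rev = np.convolve(clean, rir)[: len(clean)]  # add reverberation
    noise = np.resize(noise, len(rev))           # loop noise to signal length
    # Gain that realises the requested signal-to-noise ratio in dB.
    gain = np.sqrt(np.mean(rev**2) / (np.mean(noise**2) * 10 ** (snr_db / 10)))
    return rev + gain * noise
```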
### Parameter settings for feature extraction
We compare the performance of the proposed CQT-MSF features with baseline CQT and MFSC. The parameter values used for CQT and MFSC are based on our preliminary comparison of the two methods [47]. For CQT computation, we select the minimum frequency value \(F_{\min}\) to be 32.7 Hz and \(F_{\max}\) equal to the Nyquist frequency, giving a total of eight octaves over the complete frequency range. Every octave contains three bins, which provides a total of 24 frequency bins over the complete frequency range (\(F_{\min}\) to \(F_{\max}\)). The obtained CQT representation corresponds to 24-channel early auditory stage feature extraction. Another important parameter in CQT computation is the hop length, i.e., the number of samples the observation window advances between successive frames. We keep the hop length fixed at 64. For cortical analysis, the above CQT representation is passed through another set of constant-Q filters, referred to as the modulation filterbank, as described in Section 4. We use an 8-channel modulation filterbank with center frequencies ranging from 0.5 to 64 Hz, covering a total of eight frequency octaves.
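Putting these settings together, a compact sketch of CQT-MSF extraction follows. Since librosa's multirate CQT requires the hop to be a multiple of \(2^{\text{octaves}-1}\), the sketch resamples to 32 kHz and uses a 128-sample hop to reproduce the paper's 250 Hz frame rate (64-sample hop at 16 kHz); the authors' own implementation may differ:

```python
import numpy as np
import librosa

def mod_filter(fc, q, frame_rate):
    # Hann-windowed complex sinusoid; length q*frame_rate/fc gives constant Q.
    n = max(int(round(q * frame_rate / fc)), 3)
    w = np.hanning(n)
    return np.exp(2j * np.pi * fc * np.arange(n) / frame_rate) * w / w.sum()

def cqt_msf(y, sr=16000, q=1):
    """CQT-MSF sketch: 24-bin CQT (8 octaves x 3 bins, Fmin = 32.7 Hz) followed
    by an 8-channel constant-Q modulation filterbank (0.5-64 Hz) applied along
    time to every auditory bin."""
    y32 = librosa.resample(y, orig_sr=sr, target_sr=32000)
    C = np.abs(librosa.cqt(y32, sr=32000, hop_length=128, fmin=32.7,
                           n_bins=24, bins_per_octave=3))   # (24, T) at 250 Hz
    fr = 32000 / 128
    bands = []
    for fc in 0.5 * 2.0 ** np.arange(8):                     # 0.5 ... 64 Hz
        h = mod_filter(fc, q, fr)
        bands.append(np.abs([np.convolve(r, h, mode='same') for r in C]))
    return np.vstack([C] + bands)   # (24 + 8*24 = 216, T) feature-fusion layout
```

The 216-bin output matches the stacked layout shown in Fig. 7.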
For a fair comparison with CQT and CQT-MSF, similar modifications are applied to the MFSC parameter values. We use a 24-filter bank for MFSC computation over an STFT with 512 frequency bins. The frame size is fixed to 320 samples with a hop length of 64 samples at a 16 kHz sampling frequency. We also compute modulation spectrum coefficients over the MFSC time-frequency representation for comparison with the CQT-MSF features.
We refer to these as MFSC-MSF features. We use the same _LibROSA_ toolkit for MFSC feature generation.
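The matching MFSC front-end under these settings can be sketched with the same toolkit:

```python
import numpy as np
import librosa

def mfsc(y, sr=16000):
    """24-band log mel filterbank energies with the settings of Section 5.2:
    512-point FFT, 320-sample (20 ms) frames, 64-sample hop."""
    M = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512,
                                       win_length=320, hop_length=64, n_mels=24)
    return np.log(M + 1e-10)  # small floor avoids log(0)
```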
### Evaluation Methodology
Unlike other speech processing tasks, such as automatic speech recognition (ASR) and automatic speaker verification (ASV), SER lacks a standardized evaluation methodology for performance benchmarking on publicly available datasets. This leads to a wide variation in results across different works reported in the literature. Examples of differences in evaluation protocol include the use of different subsets of emotion classes from the databases, the choice of performance evaluation metric, and the selection of cross-validation strategy. For these reasons, meaningful comparison of obtained results with those reported in the literature becomes inaccurate, if not impossible, in SER research.
We adopt a leave-one-speaker-out (LOSO) cross-validation strategy for evaluation and benchmarking. The databases are divided into train/validation/test groups, with every group containing disjoint speakers. The test and validation groups each contain utterances from one speaker, and the remaining speakers are kept for training. This keeps the total number of train/validation/test sets equal to the number of speakers in every database. The final performance over a database is reported by averaging the performance metrics obtained for every train/validation/test group. For SER, speaker-dependent testing is known to fare better than speaker-independent testing [83]. However, speaker-independent splits eliminate the chance of the trained classifier being biased towards a set of speakers and better simulate the real-world scenario. Although LOSO cross-validation is computationally expensive, given the small database sizes in SER, the added cost can be safely ignored. Also, keeping a single speaker for testing makes more training data available, which is essential with small databases.
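A sketch of the LOSO split with speaker-disjoint train/validation/test groups, using scikit-learn's `LeaveOneGroupOut` (which remaining speaker serves as validation is an assumption here):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

def loso_folds(speakers):
    """Yield speaker-disjoint (train, validation, test) index arrays: the test
    set is one held-out speaker; one remaining speaker is used for validation."""
    speakers = np.asarray(speakers)
    splitter = LeaveOneGroupOut()
    for train_idx, test_idx in splitter.split(
            np.zeros((len(speakers), 1)), groups=speakers):
        val_spk = speakers[train_idx][0]        # pick one speaker for validation
        val_idx = train_idx[speakers[train_idx] == val_spk]
        tr_idx = train_idx[speakers[train_idx] != val_spk]
        yield tr_idx, val_idx, test_idx
```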
### Classifier Description
In this work, we use two different machine learning frameworks to evaluate the performance of the studied features: (1) a convolutional neural network with a fully connected layer for emotion classification (henceforth termed DNN), and (2) convolutional layers to extract emotion embeddings followed by an SVM to classify the embeddings into emotion classes (termed DNN-SVM). Our selection is inspired by the success of embedding-based networks [12; 82; 84] and fully DNN-based frameworks [40] in speech processing. Evaluating both also enables us to compare the SER efficiency of the two DNN frameworks.
The DNN-SVM framework comprises two parts: embedding extraction from the convolutional layers of the trained model used in the DNN framework, and final classification using an SVM. The SVM is trained and tested on the embeddings extracted from the trained DNN model. We extract embeddings at the output of the global average pooling (GAP) layer, placed after the final convolutional layer, and process them with an SVM back-end for performance evaluation. In the SVM model, we empirically set the regularization parameter \(C\) and the width of the _radial basis function_ kernel (parameter \(\gamma\)) to 1 and 0.001, respectively [85].
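With those hyperparameters fixed, the SVM back-end reduces to a few lines; the embedding arrays below are hypothetical stand-ins for GAP-layer outputs:

```python
import numpy as np
from sklearn.svm import SVC

# GAP-layer embeddings -> RBF-kernel SVM back-end with C = 1, gamma = 0.001.
train_emb = np.random.randn(100, 64)          # stand-in (n_utterances, dim)
train_lab = np.random.randint(0, 7, 100)      # stand-in emotion labels
test_emb = np.random.randn(10, 64)

svm = SVC(C=1.0, kernel='rbf', gamma=0.001)
svm.fit(train_emb, train_lab)
pred = svm.predict(test_emb)
```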
In the DNN framework, to train and validate the model, the speech utterances from every database are chunked into segments of 100 frames with 50% overlap across consecutive segments. With a 64-sample hop and 16 kHz sampling rate, this corresponds to a 400 ms speech duration. Our choice of 400 ms is based on reports in the SER literature explaining that segment lengths greater than 250 ms contain the information required for emotion prediction [12]. For testing, however, complete utterances are used, since the emotion labels are provided over complete utterances and not over segments. Segmentation also increases the number of available training samples. Similarly, in the DNN-SVM framework, the training embeddings are generated over 100-frame (400 ms) segments, whereas test embeddings are generated from complete utterances.
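The segmentation step can be sketched as follows:

```python
import numpy as np

def segment(feature, seg_len=100):
    """Split a (bins, T) feature map into 100-frame chunks with 50% overlap
    (~400 ms at a 64-sample hop and 16 kHz). Used for training; testing runs
    on the whole utterance. Assumes T >= seg_len."""
    hop = seg_len // 2
    return np.stack([feature[:, s:s + seg_len]
                     for s in range(0, feature.shape[1] - seg_len + 1, hop)])
```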
Table 1 describes the DNN architecture employed in this work. We train with a cross-entropy loss, a learning rate of 0.001, a batch size of 64, and a dropout of 0.3 applied only over the fully connected (FC) layer. The model is trained for 50 epochs and the version with the best performance on the validation set is used for testing.
As the convolution layers accept input in 2-D (time-frequency) form, we use two different strategies to combine the CQT/MFSC time-frequency representation with the corresponding MSF features. In the first method, we directly concatenate the CQT/MFSC with its corresponding MSF features over the frequency axis. This leads to a representation with time frames on the x-axis and early auditory frequency bins, followed by the modulation frequency bins corresponding to every auditory bin, placed in succession along the y-axis. Fig. 7 shows
\begin{table}
\begin{tabular}{c c c c}
\hline
**Layer** & **No. of Filters** & **Height (Frequency)** & **Length (Time)** \\
\hline \hline
2-D Conv & 128 & 5 & 5 \\
Maxpool & - & 2 & 1 \\
2-D Conv & 128 & 3 & 3 \\
Maxpool & - & 2 & 1 \\
2-D Conv & 128 & 1 & 1 \\
Maxpool & - & 2 & 1 \\
Global Average Pool (GAP) & - & - & - \\
Fully Connected & 64 & - & - \\
Softmax & \#Classes & - & - \\
\hline
\end{tabular}
\end{table}
Table 1: The parameters of the CNN architecture for SER. The number of 2-D Conv layers and the kernel sizes are inspired from the x-vector TDNN architecture [82]. Maxpooling applied after every 2-D Conv layer provides time- and frequency-invariant feature representations.
the 2-D representation obtained by concatenating CQT/MFSC with the corresponding MSF. In the second approach, to better combine the information from time-frequency and modulation features, we use an embedding-fusion-based DNN architecture. The architecture consists of two parallel but similar branches of convolutional and GAP layers, followed by a common FC and softmax layer. For both feature fusion and embedding fusion, the embeddings are extracted from the GAP layer of the DNN model.
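A PyTorch sketch of the Table 1 network is given below; activation functions and padding are not specified in Table 1 and are assumptions here. The `return_embedding` path corresponds to the GAP output used by the DNN-SVM framework:

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Sketch of the Table 1 architecture: three Conv2D blocks with
    frequency-axis max-pooling, a GAP embedding, an FC layer and a softmax
    output (realised via cross-entropy loss on the logits)."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),
            nn.Conv2d(128, 128, kernel_size=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 1)),
            nn.AdaptiveAvgPool2d(1),            # GAP -> 128-dim embedding
        )
        self.fc = nn.Sequential(nn.Flatten(), nn.Dropout(0.3),
                                nn.Linear(128, 64), nn.ReLU(),
                                nn.Linear(64, n_classes))

    def forward(self, x, return_embedding=False):
        emb = self.features(x).flatten(1)       # used by the DNN-SVM back-end
        return emb if return_embedding else self.fc(emb)
```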
### Evaluation Metrics
For performance evaluation, we use the accuracy and UAR metrics. We choose these metrics owing to their popularity in SER and for easier comparison of our results with the literature. Accuracy is defined as the ratio of the number of correctly classified utterances to the total number of utterances in the test set. Following [86], the UAR metric is given as:
\[\text{UAR}=\frac{1}{K}\sum_{i=1}^{K}\frac{A_{ii}}{\sum_{j=1}^{K}A_{ij}} \tag{9}\]
where \(A\) is the contingency matrix, \(A_{ij}\) is the number of samples of class \(i\) classified as class \(j\), and \(K\) is the total number of classes. As accuracy is considered _unintuitive_ for databases with uneven samples across classes, we use UAR to measure the validation-set performance of the DNN model and to select the best-performing model over the set of epochs.
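Eq. (9) translates directly into code:

```python
import numpy as np

def uar(y_true, y_pred, n_classes):
    """Unweighted average recall, Eq. (9): the mean of per-class recalls
    computed from the contingency (confusion) matrix A. Assumes every class
    appears at least once in y_true."""
    A = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        A[t, p] += 1
    return float(np.mean(np.diag(A) / A.sum(axis=1)))
```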
Figure 7: Logarithm of the feature fusion of CQT and modulation spectral features, i.e., CQT-MSF, extracted over utterances taken from the EmoDB database. The first 24 bins on the y-axis correspond to the CQT spectrogram. The bins that follow stack the 8 modulation bins corresponding to every auditory bin. The total number of bins then becomes 24 auditory bins + 24 auditory bins \(\times\) 8 modulation bins per auditory bin \(=\) 24 auditory bins \(+\) 192 MSF bins \(=\) 216 total bins on the y-axis.
## 6 Results & Discussion
### Performance Comparison of Different Features
First, we experimentally optimize the constant-Q filters used for CQT-MSF computation by varying the value of the filter scaling factor (\(q\)).
Fig. 8 shows the frequency response of modulation filterbanks with \(q\) = 1 and 2. Since \(q\) affects the time resolution of the filters, as given in Eq. 3, filters with \(q\) = 1 are wider (have higher bandwidth), which leads to clipping of the filter frequency response at low frequencies and hence the inclusion of zero-frequency (DC) components. Also, because of the greater overlap between filters, the generated filter outputs have higher redundancy. Increased redundancy helps the convolutional layers to better extract the emotion-relevant correlation among modulation frequency bins. With \(q\) = 2, the filter responses remain limited inside the frequency range, providing a less redundant filterbank structure, as shown in Fig. 8(b). Table 2 reports the results obtained for modulation features computed with different values of the scaling factor \(q\). The filterbank structure with \(q\) = 1 outperforms the arrangements with \(q\) = 2 and 3. Hence, we select modulation filters with \(q\) = 1 for further experiments. We perform the optimization and detailed experimentation of \(q\) over only the EmoDB database due to its small size and the similar trends observed with other databases.
Table 3 shows the performance of time-frequency representations combined with their corresponding modulation features for different classification frameworks over EmoDB
\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
**Classification Framework** & \multicolumn{2}{c}{\(q\) = 1} & \multicolumn{2}{c}{\(q\) = 2} & \multicolumn{2}{c}{\(q\) = 3} \\
 & **Accuracy** & **UAR** & **Accuracy** & **UAR** & **Accuracy** & **UAR** \\
\hline \hline
DNN & 76.97 & 68.25 & 70.79 & 64.93 & 69.77 & 64.27 \\
DNN-SVM & 79.86 & 76.17 & 79.50 & 77.00 & 74.28 & 70.91 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Comparison between different filter scale (\(q\)) values of modulation filters for experiments performed with feature-fused CQT-MSF over the EmoDB database. Given values are in percentages.
Figure 8: Modulation filter banks for different values of the filter scale factor \(q\). Filters with \(q\) = 1 have the same center frequencies but higher bandwidth than filters with \(q\) = 2.
database. The DNN-SVM classification framework outperforms DNN framework for every feature and over both performance metrics. This observation is counter-intuitive
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
**Features** & \multicolumn{2}{c}{**DNN**} & \multicolumn{2}{c}{**DNN-SVM**} \\
 & **Accuracy** & **UAR** & **Accuracy** & **UAR** \\
\hline \hline
MSF (computed over CQT) & 72.67 & 66.32 & 78.73 & 76.33 \\
MSF (computed over MFSC) & - & - & - & - \\
CQT & 71.77 & 64.91 & 76.74 & 73.21 \\
MFSC & 61.62 & 57.32 & 66.18 & 62.58 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Performance comparison of early auditory and cortical features taken separately over the EmoDB database. A filter scale (\(q\)) value of 1 is used in the modulation filterbank for MSF computation. Given values are in percentages.
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
**Features** & \multicolumn{2}{c}{**DNN**} & \multicolumn{2}{c}{**DNN-SVM**} \\
 & **Accuracy** & **UAR** & **Accuracy** & **UAR** \\
\hline \hline
CQT-MSF (Feature Fusion) & 76.97 & 68.25 & 79.86 & 76.17 \\
CQT-MSF (Embedding Fusion) & - & - & - & - \\
MFSC-MSF (Feature Fusion) & - & - & - & - \\
MFSC-MSF (Embedding Fusion) & - & - & - & - \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Performance comparison of combined early auditory and cortical features over the EmoDB database. A filter scale (\(q\)) value of 1 is used in the modulation filterbank for MSF computation. Given values are in percentages.
\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
**Database** & \multicolumn{2}{c}{**MFSC**} & \multicolumn{2}{c}{**CQT**} & \multicolumn{2}{c}{**CQT-MSF**} & **Chance level** \\
 & **Accuracy** & **UAR** & **Accuracy** & **UAR** & **Accuracy** & **UAR** & **Accuracy** \\
\hline \hline
EmoDB & 66.01 & 61.75 & 76.74 & 73.21 & 79.86 & 76.17 & 14.28 \\
RAVDESS & 36.94 & 36.16 & 48.68 & 44.64 & 52.24 & 48.83 & 12.5 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Feature performance comparison over different databases with the DNN-SVM framework. Given values are in percentages.
as the activations extracted from the same convolutional layer in the DNN-SVM and DNN frameworks perform better with an SVM at the back-end than with a fully-connected layer at the back-end. Regarding the two fusion types, there is no consistent pattern in performance improvement across the classification frameworks. With CQT-MSF, feature fusion outperforms embedding fusion in the DNN framework, whereas embedding fusion outperforms feature fusion in the DNN-SVM framework. For MFSC-MSF, the trend is opposite to that of CQT-MSF. However, CQT-MSF in both fusion styles performs better than MFSC-MSF.
To further analyse the separate contributions of early auditory and cortical features to SER, we evaluate the performance of standalone MSF extracted over CQT/MFSC (without any fusion). From the results in Table 4, the cortical features (standalone MSF over CQT) outperform standalone CQT. However, the same is not true for mel-scale features: MSF computed over MFSC performs worse than standalone MFSC. This questions the usefulness of the temporal trajectories of the mel-scale time-frequency representation for emotion classification. The inferiority of MFSC to CQT is also indicated by their direct comparison: CQT outperforms MFSC in both the DNN and DNN-SVM classification frameworks. Among the CQT feature types, MSF computed over CQT achieves performance very similar to both fusion types of CQT-MSF. This indicates the high emotion relevance of the temporal modulations of the low-frequency regions emphasised by CQT.
Fig. 9: Confusion matrices for CQT-MSF (feature fusion), MFSC-MSF (feature fusion), CQT, and MFSC features. CQT-MSF shows the best comparative performance in classifying the different emotion classes of the EmoDB database. Matrices are computed with the DNN-SVM framework owing to its greater performance.
Fig. 9 shows the confusion matrices for different features over the EmoDB database. We choose only the feature-fused CQT-MSF and MFSC-MSF for comparison, owing to their improved performance compared to embedding fusion in the DNN-SVM framework. Even though CQT-MSF is comparatively better at classifying the different emotion classes, some instances of _Happy_ and _Disgust_ are confused with _Anger_, and some of _Fear_ are confused with _Happy_. The highest misclassification is in the _Happy-Anger_ pair, which has similar arousal but opposite valence characteristics. This observation is consistent across various SER works [12, 40, 87] and can also be related to the similar F-ratio characteristics of _Anger_ and _Happy_ in Fig. 5. Among the low-arousal classes, some _Sad-Boredom_ confusion is also visible with CQT-MSF. As prosody features are less effective in valence discrimination [88], the confusion among pairs with similar arousal but opposite valence can be attributed to the higher emphasis on pitch in the constant-Q-scale-based spectral representation. However, modulations computed over the constant-Q scale reduce this confusion, which is evident from the comparison between standalone CQT and CQT-MSF. In standalone CQT, confusion among both low- and high-arousal emotions is higher and appears across multiple emotion classes (e.g., _Disgust_ and _Fear_ with _Happy_). With MFSC and MFSC-MSF, compared to CQT-based features, the misclassification is more prominent across multiple classes. Unable to emphasise speech prosody, MFSC is inferior to CQT-based features in emotion classification along the arousal scale (e.g., increased confusion of _Boredom_ with _Neutral_, and _Fear_ with _Sad_). Also, the modulations of MFSC are computed with more focus on high and mid speech frequencies and less focus on prosody at low frequencies, further deteriorating the performance.
Finally, we compare the SER performance of the CQT-MSF feature with scattering transform coefficients. As the scattering network is also a deep convolutional network, its comparison with our proposed CQT-MSF in the DNN-SVM classification framework shows the benefit of automatic feature extraction and of learning the required time- and frequency-shift invariance for SER. The scattering coefficients are computed using a similar train/test strategy: training utterances are chunked into 400 ms segments with 50% overlap, whereas test utterances are used as is. Following our experiments in [73], the Q-factor value (Q) for the first-layer coefficients is chosen as Q = 5. As the training segment size is fixed to 400 ms (6400 samples at 16 kHz), the maximum wavelet length or averaging scale (\(T\)) is kept at 4096 samples for both training and validation. However, since we use complete utterances in testing, the test duration \(N\) is empirically fixed to 51000 samples (3.18 seconds at 16 kHz); longer utterances are cropped to 51000 samples, whereas shorter ones are zero-padded. We obtain **72.67**% accuracy and **69.8**% UAR with the scattering coefficients, outperforming MFSC and indicating the value of time and deformation stability for SER. However, the performance is inferior to that of the proposed CQT-MSF (especially with feature fusion). The superiority of CQT-MSF shows that deep networks learn to provide better time and deformation stability, as described in Section 4.2, while extracting emotion-relevant information through multiple convolution layers. Although the scattering transform also uses convolutions and averaging for stability, it employs fixed kernels that are not learned or optimized to improve performance.
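For reference, the scattering baseline with the stated settings can be sketched with the _Kymatio_ toolkit (an assumption; [73] may use a different implementation), where the averaging scale \(T=2^{12}=4096\) samples corresponds to \(J=12\):

```python
import numpy as np
from kymatio.numpy import Scattering1D

# Q = 5 first-order filters per octave, averaging scale 2**12 = 4096 samples,
# test utterances fixed to 51000 samples by cropping / zero-padding.
N = 51000
scattering = Scattering1D(J=12, shape=N, Q=5)

x = np.random.randn(N).astype(np.float32)  # stand-in for one test utterance
Sx = scattering(x)                          # (n_coefficients, time) array
print(Sx.shape)
```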
Table 5 shows the results obtained with different features over the EmoDB and RAVDESS databases. The proposed CQT-MSF feature outperforms the other features over the RAVDESS database as well, confirming the suitability of CQT-MSF, i.e., two-staged auditory analysis, for SER across databases. Compared to EmoDB, the relative performance improvement of CQT-MSF and CQT over MFSC is higher on the RAVDESS database.
### Comparison with Related Works
In this subsection, we compare our results with related works in SER. The SER literature uses widely differing strategies to evaluate system performance, for example, different databases, different numbers of emotion classes, and different evaluation methodologies, which makes direct comparison of SER works difficult. The cross-validation scheme, train/test split, and speaker-dependent/independent testing are found to differ substantially across studies, and the lack of reproducible research in the SER domain adds further uncertainty. Hence, a comparison with other relevant SER literature cannot be considered accurate.
Due to the above-mentioned issues, to contextualize our obtained results, we re-implement several studies from the literature using our proposed CQT-MSF feature and experimental framework. Table 6 lists the selected works and the corresponding performances obtained; Sections 5.3 and 5.4 describe the experimental framework employed for the studies listed in the table. Our choice of works is based on the use of modulation-spectrogram-related features and of advanced neural network architectures (e.g., contextual long short-term memory (LSTM), multi-head attention, ResNet architectures, and multi-time-scale kernels). Below we briefly describe the selected works.
* The study by Avila et al. (2021) proposes feature pooling techniques over different measures of modulation spectral features for dimensional emotion recognition. For comparison, we compute the same modulation spectral measures over the CQT representation (unlike Avila et al., who use a GMT representation) and report results within our experimental framework. The feature-pooling schemes reported in the study are skipped, as our framework does not include handcrafted pooling operations.
* Aftab et al. (2022) use mel-frequency cepstral coefficients (MFCCs) with a fully convolutional neural network architecture comprising parallel paths of different kernel sizes followed by stacked convolutional layers, reporting impressive SER performance. We use the GitHub implementation3 of this state-of-the-art architecture with our experimental framework and CQT-MSF feature for performance comparison.
* We also select the state-of-the-art study by Liu et al. (2022) on multimodal emotion recognition. As our primary focus is emotion recognition from speech, we use only the bidirectional-contextualised LSTM (bc-LSTM) with the multi-head attention block used for the speech modality in [90], with our experimental framework and databases. As emotion information spreads temporally across utterances, temporal pattern extraction architectures such as LSTM and attention are natural points of comparison for a handcrafted temporal modulation feature like CQT-MSF.
* The study by Gerczuk et al. (2021) employs an adapter-ResNet architecture for the multi-corpus SER scenario. As our study does not mix different corpora, we use the reported ResNet architecture without the adapter module, but with the CQT-MSF feature, for performance comparison. We use the open-source GitHub implementation4 of the model. Footnote 4: [https://github.com/EIHW/EmoNet](https://github.com/EIHW/EmoNet)
* The study by Guizzo et al. (2020) uses a multi-time-scale convolutional-kernel-based front-end, which employs multiple temporally resampled versions of the
\begin{table}
\begin{tabular}{l l c c c c}
\hline \hline
**References** & **Brief Description (Feature \& Classifier)** & \multicolumn{2}{c}{**EmoDB**} & \multicolumn{2}{c}{**RAVDESS**} \\
 & & **Acc.** & **UAR** & **Acc.** & **UAR** \\
\hline \hline
Avila et al. (2021) & Different modulation spectral measures. DNN classifier. & 55.40 & 46.83 & 37.77 & 29.10 \\
Aftab et al. (2022) & MFCC. Parallel path fully convolutional network. & 73.37 & 67.21 & 48.68 & 44.51 \\
Liu et al. (2022) & Interspeech 2009 feature set. bc-LSTM + Multi-head attention. & 53.45 & 47.49 & 24.58 & 21.79 \\
Gerczuk et al. (2021) & Mel-spectrogram. Convolutional ResNet. & 76.34 & 71.07 & 52.56 & 49.46 \\
Guizzo et al. (2020) & Spectrogram. Multi-time-scale kernel based CNN. & 63.01 & 58.03 & 40.27 & 36.46 \\
Parra-Gallego and Orozco-Arroyave (2022) & x-vectors, i-vectors, INTERSPEECH 2010 feature set, articulation, phonation, and prosody features. SVM classifier. & 50.21 & 42.74 & 46.59 & 43.68 \\
This work & GMT-MSF feature. DNN-SVM framework. & 77.05 & 74.35 & **54.05** & **51.16** \\
This work & CQT-MSF feature. DNN-SVM framework. & **79.86** & **76.17** & 52.24 & 48.83 \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Performance comparison with selected works. All values are in percentages. **Boldface** values show the best obtained results.
original convolution kernel, performing convolution with every version in parallel. We adopt this method by employing the multi-time-scale convolution layer as a front-end to our DNN framework (Section 5.4), using the available open-source GitHub implementation5. Similar to CQT-MSF, the multi-time-scale kernels also focus on extracting speech temporal patterns for emotion recognition. Footnote 5: [https://github.com/ericguizzo/multi_time_scale](https://github.com/ericguizzo/multi_time_scale)
* We select the feature-set-based study by Parra-Gallego and Orozco-Arroyave (2022) for the notably high performance reported on the RAVDESS database. The work combines x-vectors, i-vectors, the Interspeech 2010 paralinguistic (IS10) feature set, and articulation, phonation and prosody features extracted from the _Disvoice_6 framework, with an SVM classifier. As the study reports that the combination of only x-vectors, IS10 and Disvoice features provides the best performance, we use this reduced feature set with an SVM classifier, but in our LOSO cross-validation framework. To maintain similarity, we use complete utterances (no segmentation) for both training and testing in this particular implementation. Note that this implementation does not use our CQT-MSF feature and follows the original paper. Footnote 6: [https://github.com/jcvasquez/DisVoice](https://github.com/jcvasquez/DisVoice)
* As several modulation-feature-based SER works use the gammatone scale to generate the time-frequency representation over which modulations are computed [37; 38; 41],
Fig. 10: Comparison between different non-linear time-frequency representation scales with filterbank center frequencies (shown by dots). The zoomed-in portion shows the difference between the non-linearities for emotion-salient low frequency filter bins. For better visibility of differences among the scales, we plot every scale with 96 frequency bins.
we compare the performance of CQT-MSF with gammatone-scale based modulation spectral features (GMT-MSF) using the DNN-SVM framework of Section 5.4. The obtained performance is reported in Table 6 along with the other comparison techniques. Our implementation of the gammatone spectrogram designs the gammatone filters with the python _Spafe_[94] toolkit and then applies them to the signal spectrogram. The GMT-MSF is computed over the resulting gammatone time-frequency representation following the same steps used to compute CQT-MSF (Fig. 3).
From Table 6 we observe that CQT-MSF outperforms GMT-MSF on the EmoDB database but performs relatively poorly on the RAVDESS database. This observation stems from the difference in non-linearity between the constant-Q and gammatone scales. Previous studies on speech recognition [95], speaker recognition [96], and SER [46] also show how different non-linearities in frequency warping affect recognition performance. Inspired by those studies, we compare the three non-linear scales used in our experiments (mel, constant-Q, and gammatone) in Fig. 10, along with the filterbank center frequencies. For better visibility of the differences in filter placement, we use 96 frequency bins for every scale in this analysis. The figure shows that the constant-Q scale provides the highest
Fig. 11: Average energy spectral density (ESD) computed over all utterances of EmoDB and RAVDESS. The spectrogram (Spec.), MFSC, CQT, and GMT ESD corresponds to the energy of corresponding coefficients average across all utterances. For better visibility of differences among the features, we plot every ESD with 96 frequency bins.
low-frequency emphasis (because of its binary logarithm), followed by the gammatone and mel scales. Considering the relevance of low-frequency information for SER [46; 47], the underperformance of the constant-Q scale on RAVDESS is an interesting observation.
Previous studies indicate that a dataset-dependent non-linearity scale is more appropriate than a general-purpose scale, and that emphasising the higher-energy region helps in performance optimization [95; 70]. Hence, to further investigate the performance gap, we compare the energy spectral density (ESD) averaged across all utterances of both databases. Fig. 11 shows the averaged energy density plots. The energy densities are computed by time-averaging the squared feature coefficients, followed by averaging across all utterances. Fig. 11 shows that, compared to EmoDB, the energy density in RAVDESS is shifted towards higher frequency regions. The constant-Q scale provides greater emphasis on low-frequency bins (refer to Fig. 10), but due to its strong non-linearity (binary logarithm), the resolution reduces drastically towards higher frequencies. Hence, for RAVDESS, the resolution at the frequencies with larger ESD values is lower in CQT than that provided by the gammatone filterbank placed on the ERB scale. Thus, for RAVDESS, GMT-MSF better captures the signal energy information and performs better than CQT-MSF.
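The averaged ESD of Fig. 11 is computed as sketched below:

```python
import numpy as np

def average_esd(feature_maps):
    """Average energy spectral density over a corpus: square the (bins, T)
    coefficients, average over time, then average across utterances (Fig. 11).
    Assumes every feature map has the same number of frequency bins."""
    per_utt = [np.mean(F.astype(float) ** 2, axis=1) for F in feature_maps]
    return np.mean(per_utt, axis=0)  # one averaged energy value per bin
```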
From Table 6 we also observe that the EmoNet-based ResNet architecture shows competitive performance on the RAVDESS database when compared to the CQT-MSF feature. Its larger model size, combined with the comparatively larger database, explains this observation.
### Visual Inspection of Learned Features
Despite the performance gain achieved with the proposed CQT-MSF feature, the complexity of the model (Table 1) in terms of network parameters makes it difficult to understand which information in the input helps the network recognize emotion patterns. To gain insight into the operation of deep networks, several works use the gradient generated at the final layer with respect to the network input to produce a saliency map, showing the input regions that weigh most in generating the output probability scores [97; 98; 99]. We use one such method, _gradient-based class activation mapping_ (Grad-CAM) [99], to obtain insight into the working of the employed model.
Grad-CAM uses the class-wise gradient generated at the network output with respect to the activations of the final convolution layer to generate a _heatmap_ showing the importance of different regions of the input. The steps included in Grad-CAM heatmap generation are as follows (a minimal code sketch is given after the list):
1. Compute the gradient of network output score (before softmax non-linearity) with respect to the activations of final convolution layer, i.e., \(\frac{\partial y_{c}}{\partial A_{k}}\), where \(y_{c}\) is the score of the \(c\)th class and \(A_{k}\) is the 3-dimensional (height, width and channel dimension) class activations from the final convolution layer.
2. Average the computed gradients over the length and width dimensions (global average pooling) to obtain a single vector representation of the gradients. Mathematically, \(\alpha_{k}=\frac{1}{N}\sum_{\text{length}}\sum_{\text{width}}\frac{\partial y_{c}}{\partial A_{k}}\), where \(N\) is the number of spatial positions in \(A_{k}\).
Fig. 12: Grad-CAM output of three different emotion utterances of EmoDB database. To analyse the significance of various frequency regions, pitch and first three formants are also shown along with the Grad-CAM output. For MSF plot, the y-axis labels on the left describe the auditory frequencies and ticks on the right describe the modulation bins corresponding to every auditory frequency bin. The title of every plot describes the EmoDB file name used for analysis.
3. Multiply the computed gradient vector with the final convolution layer activations and average the result over the number of filters in the convolution layer, i.e., \(\mathit{map}=\frac{1}{N}\sum_{k}(\alpha_{k}A_{k})\).
4. Apply \(\mathit{ReLU}\) activation over the class activation map computed in the previous step: \(L^{c}_{\mathrm{Grad-CAM}}=\mathit{ReLU}(\mathit{map})\).
5. Upsample the computed 2-D heatmap over the length and width axes to match the shape of the input image. The upsampled heatmap shows the importance of the various regions of the input image that led to the final class prediction.
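A minimal PyTorch sketch of steps 1-5, assuming the hypothetical `EmotionCNN` sketch given earlier (with its `features` convolution stack ending in a GAP layer and its `fc` head):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class):
    """Grad-CAM heatmap over the final convolution layer of EmotionCNN.
    x: input tensor of shape (1, 1, bins, frames)."""
    model.eval()
    A = model.features[:-1](x)      # conv activations before GAP, (1, K, H, W)
    A.retain_grad()                 # keep dL/dA for a non-leaf tensor
    score = model.fc(A.mean(dim=(2, 3)))[0, target_class]  # step 1: class score
    score.backward()
    alpha = A.grad.mean(dim=(2, 3), keepdim=True)          # step 2: GAP of grads
    cam = F.relu((alpha * A).mean(dim=1, keepdim=True))    # steps 3-4
    cam = F.interpolate(cam, size=x.shape[2:], mode='bilinear',
                        align_corners=False)               # step 5: upsample
    return cam[0, 0].detach()
```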
Fig. 12 shows the Grad-CAM output of the CQT-MSF feature for the _Anger_, _Neutral_, and _Sad_ utterances of the same speaker (speaker 03) and context (a05) from the EmoDB database. For analysis, we also plot the pitch and the first three formant frequencies (F1, F2 & F3) of the utterances to identify the frequency regions the network focuses on most when predicting emotion classes. We observe that pitch and the first two formants (F1 & F2) are important for emotion class prediction. Lower pitch harmonics are apparently more significant for the _Sad_ emotion, as shown in Fig. 12c. Another important observation for _Sad_ is the presence of high Grad-CAM scores in the silence and unvoiced regions (where pitch frequency is 0 Hz) of the utterance, showing that the silence between spoken phonemes is also important for emotion recognition. For _Anger_ and _Neutral_, formants F1 & F2 are more prominent than pitch, and both emotions are identified mostly in the voiced regions of the utterances. Interestingly, the Grad-CAM response of _Sad_ also shows some focus on high frequencies (near formant F3) over the complete utterance, compared to the _Neutral_ class. These observations indicate the importance of low frequencies for emotion recognition, especially for low-arousal emotions like _Sad_, further justifying the use of the CQT time-frequency representation for SER. Also, the MSF representation provides the Conv2D classifier with the modulation rates of different speech characteristics, such as pitch harmonics and formants. This helps the classifier emphasize the emotion-wise differences appearing across the modulation rates of different speech characteristics, hence improving the performance.
### Discussion
Our experiments show the higher relevance of CQT-based features, compared to mel-scale features, for SER. We summarise and interpret the results in the following points:
* **The two-staged auditory processing based features, i.e., early auditory analysis combined with cortical analysis, improve emotion classification performance**. This also justifies combining human auditory analysis (domain knowledge) with neural networks for the betterment of SER. However, such improvement is observed only for temporal modulations extracted from the CQT representation; temporal modulations of MFSC show the opposite effect and degrade the performance.
* **The improvement observed with CQT-MSF (both fusion types) and standalone CQT over MFSC features shows the relevance of the increased low-frequency resolution of CQT for SER**. As the low-frequency resolution of the mel scale is not as high, it does not provide enough emphasis on the emotion-relevant low frequencies to capture the information required for emotion discrimination. Hence, its modulation spectrum coefficients also end up encoding less emotion-relevant parts of speech (e.g., irrelevant high-frequency regions), leading to reduced performance.
* **The DNN framework lags behind the DNN-SVM classification framework with an RBF kernel function**. This observation is consistent across all employed features. The performance advantage of the DNN-SVM framework has also been reported in SER [12] and speaker recognition [82] works.
* **The confusion matrices of different features show a general misclassification trend in the _Happy-Anger_ and _Fear-Happy_ emotion pairs**. This confusion mainly appears due to the very similar arousal characteristics of these emotions, even though the _Happy-Anger_ pair is placed very distantly in the valence plane. This is due to the higher focus on speech prosody in the constant-Q representation, and the higher sensitivity of speech prosody to arousal characteristics [88]. The inclusion of modulations of the constant-Q representation, however, reduces the confusion among emotions with opposite valence characteristics.
* **Both CQT-MSF and standalone CQT also outperform scattering transform coefficients**. The scattering transform applies averaging over features with an empirically defined averaging scale to obtain a translation-invariant representation. A convolutional neural network with constant-Q features as input automatically learns this required invariance under the cross-entropy objective. Hence, the joint effect of CQT/CQT-MSF and learned translation invariance makes our frameworks superior. The scattering transform still outperforms mel-scale features, again because of its constant-Q filterbanks and the time-shift and deformation stability of the scattering coefficients.
* **The CQT-MSF feature outperforms GMT-MSF on EmoDB but underperforms on the RAVDESS database.** Comparative analysis shows a slight high-frequency shift in the average energy spectral density of RAVDESS compared to EmoDB. The difference between the gammatone and constant-Q non-linearities results in the gammatone spectrogram better capturing energies under this high-frequency shift, which explains the observed anomaly on RAVDESS.
Studies in psychology report that individuals with music expertise are better at perceiving emotions from speech [55, 56, 57, 58, 59, 60, 100, 101]. This finding falls in line with our experiments: as CQT was originally invented for music analysis, its better suitability for SER can be considered mathematical evidence supporting the findings in the psychology domain. Another justification for the increased SER suitability of CQT comes from studies of _amusia_ [54, 60]. Amusia is a medical condition in which individuals have a limited capability to perceive or resolve pitch. The study in [54] reports that the emotion recognition ability of amusic individuals is below par with that of normal individuals, which is attributed to their limited ability to resolve pitch, i.e., the low frequencies of speech. Amusics then utilize the high-frequency content of speech to decipher emotions but are not as efficient as healthy individuals. An amusic brain can therefore be assumed to represent speech as a time-frequency representation with low resolution at low frequencies and comparatively higher resolution at high frequencies. Conversely, a representation with high low-frequency resolution should improve SER ability, which is what we observe in our experiments with CQT-based features. Also, the modulation coefficients computed over CQT are analogous to the cortex-level analysis performed by a music-trained brain; the study in [60] reports that such cortical analysis further benefits SER ability.
## 7 Conclusion
This paper proposed the use of constant-Q transform based modulation spectral features for SER. The proposed feature employs the knowledge of two-staged sound processing in humans as domain knowledge and is tested over two different deep-network-based classification frameworks. We show that the proposed feature outperforms the standard mel-frequency based feature and scattering transform coefficients. From the performed experiments, we conclude the following:
1. A representation with increased low-frequency resolution is a better contender for SER. A similar conclusion is endorsed in psychology-based studies as well.
2. The combination of a time-frequency representation with high low-frequency resolution and its temporal modulation (a two-staged representation) efficiently represents the emotion content of speech.
3. Mel-scale based features and their temporal modulations are not very significant from a speech emotion information perspective.
4. Similar to mel-scale features, CQT and its temporal modulation representation are also time-deformation invariant. With CQT-MSF as input, the CNN can learn the required invariance to time-frequency shifts, leading to a representation stable to shifts and deformations.
5. The DNN-SVM framework provides better SER performance as compared to the standard DNN framework.
6. Grad-CAM based analysis performed over MSF reestablishes the importance of pitch and formant frequencies for SER. It also describes the importance of different modulation rates of pitch and formants, apart from their crude values, for SER.
Although the proposed feature performs better than standard features, the performance is still not optimal for practical real-world deployment. Also, the performance varies by a large margin when the feature is used over different databases. Even though it is more efficient than the mel scale, the constant-Q scale is effective mainly for utterances with larger average energy in low-frequency regions, which opens up the opportunity to explore database-dependent non-linear scales for SER. In the future, we would also like to experiment with joint spectral and temporal modulation features and analyse their suitability for SER. To combine domain knowledge with self-learning, a deep-network-based architecture for self-learned modulation feature extraction can also be explored. |
2303.01287 | High-throughput optical neural networks based on temporal computing | An emerging generative artificial intelligence (AI) based on neural networks
starts to grow in popularity with a revolutionizing capability of creating new
and original content. As giant generative models with millions to billions of
parameters are developed, trained and maintained, a massive and
energy-efficient computing power is highly required. However, conventional
digital computers are struggling to keep up with the pace of the generative
model improvements. In this paper, we propose and demonstrate high-throughput
optical neural networks based on temporal computing. The core weighted
summation operation is realized with the use of high-speed electro-optic
modulation and low-speed balanced photodetection. The input data and weight are
encoded in a time sequence separately and loaded on an optical signal via two
electro-optic modulators sequentially. By precisely controlling the
synchronization time of the data and weight loading, the matrix multiplication
is performed. Followed by a balanced photodetector, the summation is conducted,
thanks to the electron accumulation of the inherent electronic integrator
circuit of the low-speed photodetector. Thus, the linear weighted summation
operation is implemented based on temporal computing in the optical domain.
With the proposed optical linear weighted summation, a fully-connected neural
network and convolutional neural network are realized. Thanks to the high-speed
feature of temporal computing, a high data throughput of the optical neural
network is experimentally demonstrated, and the weighting coefficients can be
specified on demand, which enables a strong programmability of the optical
neural network. By leveraging wavelength multiplexing technology, a scalable
optical neural network could be created with a massive computing power and
strong reconfigurability, which holds great potential for future giant AI
applications. | Shuang Zheng, Jiawei Zhang, Weifeng Zhang | 2023-03-02T14:11:39Z | http://arxiv.org/abs/2303.01287v2 | ## Scalable optical neural networks based on temporal computing
## Abstract
The optical neural network (ONN) has been considered as a promising candidate for next-generation neurocomputing due to its high parallelism, low latency, and low energy consumption, with significant potential to release unprecedented computational capability. Large-scale ONNs could process more neural information and improve the prediction performance. However, previous ONN architectures based on matrix multiplication are difficult to scale up due to manufacturing limitations, resulting in limited scalability and small input data volumes. To address this challenge, we propose a compact and scalable photonic computing architecture based on temporal photoelectric multiplication and accumulate (MAC) operations, allowing direct processing of large-scale matrix computations in the time domain. By employing a temporal computing unit composed of cascaded modulators and time-integrator, we conduct a series of proof-of-principle experiments including image edge detection, optical neural networks-based recognition tasks, and sliding-window method-based multi-target detection. Thanks to its intrinsic scalability, the demonstrated photonic computing architecture could be easily integrated on a single chip toward large-scale photonic neural networks with ultrahigh computation throughputs.
## Introduction
With the advent of the big-data era, the proliferation of artificial intelligence (AI) applications and powerful machine learning techniques creates exponentially increasing data streams that can hardly be processed efficiently by conventional computing systems and architectures[1, 2]. Meanwhile, the slowing of Moore's law and the failure of Dennard scaling greatly hinder the development of current computing hardware based on electronic transistors and the von Neumann architecture[3, 4]. To resolve this mismatch, optical computing has been envisioned as a promising solution, holding great potential to unlock unprecedented computational capability by leveraging the advantages inherent to optical signals, such as low power consumption, low latency, ultra-wide bandwidth and high parallelism[5, 6, 7]. In particular, optical neural network (ONN) layouts have in recent years successfully entered the deep learning era. With the rapidly increasing complexity and dataset sizes of various application scenarios, large-scale ONNs are highly desired to meet technological demands and performance criteria. To achieve better accuracy and efficiency in complex neural tasks, it is critical to scale up the dimensions of ONNs, including depth, width, and resolution, which correspond to the number of layers, the number of neurons in a layer, and the size of the input image, respectively[8, 9]. Due to its capacity for large-scale matrix operations and high computational capability, spatial optical computing has shown great potential for establishing the large and compact computing units predominantly required in optical AI computers[10, 11, 12]. Fourier transforms and convolutions, as well as applications in classification tasks, have been proposed utilizing optical diffraction masks in free space. However, traditional free-space approaches require heavy auxiliary or modulation equipment[13, 14, 15, 16, 17], such as lenses, spatial light modulators (SLMs) and digital micro-mirror devices (DMDs), which makes the systems bulky and costly, hindering their feasibility for the large-scale implementation and commercialization of photonic neural networks. Therefore, there remain great opportunities for substantial improvements in photonic hardware accelerators.
With the rapid development of advanced photonic manufacturing techniques, silicon photonic integrated circuits (PICs) have been considered a promising candidate for establishing large-scale photonic neural networks, with the advantages of compact size, high-density integration, and power efficiency[18, 19, 20, 21, 22]. Typical silicon PIC-based ONN architectures use cascades of multiple Mach-Zehnder interferometers (MZIs) or microring-resonator-array-based wavelength division multiplexing (WDM) technology. For these architectures, the number of units scales quadratically with the input data dimension for matrix multiplication, leading to quadratic scaling in footprint and energy consumption. Though PICs have shown great potential in integrated optics, the integration scale imposed by manufacturing limitations, together with the complex control circuits needed to suppress resonance-wavelength fluctuations, restricts the development of large-scale programmable photonic neural networks. Consequently, the size of the input vectors and matrices that can be loaded on such hardware is extremely limited by the integration scale. More recently, chip-scale microcomb devices have been employed as multi-wavelength light sources [37; 38; 39; 40], which bring much convenience and greatly increase the input data size. However, the input matrix size may still be limited by the number of multiplexed wavelength channels. Additionally, there are other on-chip architectures that support matrix loading and computing [41; 42; 43; 44; 45]. Despite impressive performance, their practical compute rates and processed matrix sizes still fall short of their promise as high-speed photonic engines. In this scenario, an integrable and scalable optical computing architecture featuring large-scale matrix operation is highly desired, which would increase the network size and resolution to enhance predictive accuracy.
In this paper, we propose an integrable and scalable photonic computing architecture capable of performing parallel MAC operations. Different from prior works, the MAC operations are implemented in the time domain by using a fundamental execution unit. The intrinsic temporal accumulation mechanism of the proposed approach determines that the temporal, wavelength and spatial dimensions can be fully utilized to perform large-scale matrix computation and enhance the computation capability [46; 47]. We conduct a series of proof of concept experiments to take full advantage of the proposed computing architecture, including typical image edge detection, fully-connected neural network and convolutional neural network-based classification tasks, and a primary validation for sliding-window method-based multi-target detection task. With further on-chip integration and multidimensional multiplexing, the demonstrated photonic computing architecture is promising toward large-scale photonic neural networks with ultrahigh computation throughputs.
## Results
**Scalable photonic computing architecture.** The key components in the proposed scalable photonic computing architecture are the temporal photonic MAC operation units, which are used as the fundamental
Figure 1: **Scalable photonic computing architecture.** (a) Schematic of the proposed scalable photonic computing architecture, which is implemented by employing temporal, wavelength and spatial dimensions. When the light beams transmit through cascaded MZMs, the multiplication process is accomplished. The temporal waveforms are detected by BPDs and time-integrators, effectively achieving the temporal accumulation operation. (b) For the input data, the matrix rows and columns are mapped to a time-wavelength basis. The input row vectors are encoded as temporal serial symbols and loaded onto different light wavelengths by high-speed modulators. The weight matrix rows and columns are mapped to a space-time basis. The column vectors are also encoded as temporal serial symbols by another MZM array. (c) Schematic of single-channel temporal photonic MAC operation unit. In the proposed solution, the input matrix could be encoded as non-negative values, and the weight matrix could be encoded as both positive and negative values. LD, laser diode; MUX, wavelength division multiplexer; BS, beam splitter; DEMUX, wavelength division demultiplexer; VOA, variable optical attenuator, BPD, balanced photodetector.
computing units to multiply and accumulate temporal signals. Figure 1(a) shows the proposed scalable photonic computing architecture, which can perform large-scale matrix operations by simultaneously exploiting the temporal, wavelength and spatial dimensions. The architecture consists of three major parts. The first part is composed of an MZM array, which is used for high-speed image data loading. The data matrix rows and columns are mapped to a time-wavelength basis. As shown in Fig. 1(b), the input row vectors of size N are encoded as temporal serial symbols and loaded onto M different light wavelengths by high-speed modulators; thus an M\(\times\)N data matrix with M rows and N columns can be loaded. All the wavelengths are then combined through a wavelength division multiplexer and fanned out to L different spatial channels. Thus, M\(\times\)L multiplexed channels can be fully used to enhance both the weight matrix dimension and the computational capability. In the second part, another MZM array is used to load the weight matrix and also functions as the multipliers. The weight matrix rows and columns are mapped to a space-time basis. Due to the broadband modulation characteristic of the MZMs, the values in the data matrix (M\(\times\)N) are correspondingly multiplied with the values in the column vector (N\(\times\)1) loaded in each spatial channel. The third part of the computing architecture is used for the temporal accumulation process. The multiplexed wavelengths in each spatial channel are separated into different optical paths via a wavelength division demultiplexer. Then, the temporal optical signals are converted into electrical signals by a high-speed BPD array. Finally, time-integrators are used to complete the accumulation of the temporal serial signals. Thanks to the temporal accumulation mechanism of the proposed method, the length of the loaded temporal vector is theoretically unlimited. At the same time, by multiplexing the wavelength and spatial dimensions, large-scale matrix-vector and matrix-matrix multiplication can be realized on the proposed temporal photonic computing architecture.
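Numerically, this time-wavelength-space mapping realises an ordinary matrix product, with one temporal MAC per (wavelength, spatial channel) pair, as the following sketch verifies:

```python
import numpy as np

# Each (wavelength m, spatial channel l) pair computes one temporal MAC:
# the row data[m] and the column weights[:, l] meet symbol-by-symbol in time.
M, N, L = 4, 6, 3
data = np.random.rand(M, N)       # M wavelengths x N time slots
weights = np.random.rand(N, L)    # L spatial channels, each loading one column

out = np.empty((M, L))
for m in range(M):                # one wavelength per data-matrix row
    for l in range(L):            # one spatial channel per weight column
        out[m, l] = np.sum(data[m] * weights[:, l])  # multiply, then integrate
assert np.allclose(out, data @ weights)
```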
Figure 1(c) shows the details of the proposed temporal photonic MAC operation unit, where the values in the weight matrix could be positive or negative. The input data is first flattened into a vector M and encoded as a time-domain serial electrical signal, and then loaded onto the optical carrier by the first-stage MZM with a half-wave voltage of \(V_{\pi}\). Thus, the input vector values are represented by the modulated light intensity \(I_{\text{M}}\), which can be expressed as \(\dfrac{M(t)}{V_{\pi}}\) after linearization. After the first-stage MZM, the optical signal \(I_{\text{M}}\) is divided into two parts through a 50:50 beam splitter. One part is sent to the second-stage MZM, and then detected by the positive port of the BPD. The other part is sent to the negative port of the BPD after adjusting the intensity through a variable optical attenuator (VOA). The second-stage MZM is used to encode the weight matrix. Since the weight matrices usually have both positive and negative values, the second-stage MZMs are biased at the quadrature transmission point and work in the linear region. The weight matrix W is also flattened into a vector and loaded by the second-stage MZM. After propagating through two-stage MZMs, the light intensity in the upper optical path can be expressed as:
\[I_{\rm up}(t)=\frac{1}{4}\,I_{\rm M}(t)\times\left[1+\frac{W(t)}{V_{\pi}}\right] \tag{1}\]
Here, \(I_{\rm M}\) is the light intensity modulated by the first MZM, which represents the temporal signal sequence of the input data stream, and \(W(t)\) is the temporal signal sequence representing the weight data. The signal \(I_{\rm up}\) modulated by the two-stage MZMs is then sent to the positive port of the BPD. After propagating through the first-stage MZM and the VOA, the optical signal in the other optical path could be expressed as:
\[I_{\rm down}(t)=\frac{\alpha}{2}\,I_{\rm M}(t) \tag{2}\]
The temporal signal \(I_{\rm down}\) is detected by the negative port of the BPD. Thus, the output of the BPD can be given by:
\[I_{\rm BPD}(t)=\frac{1}{2}\,\Re\times I_{\rm M}(t)\times\left[\frac{W(t)}{V_{ \pi}}+\frac{1}{2}-\alpha\right] \tag{3}\]
where \(\Re\) is the responsivity of the BPD. To get the multiplication result from the BPD, the attenuation coefficient \(\alpha\) of the VOA should be set equal to 1/2. To further achieve the temporal accumulation, the temporal serial signals are sent to a time-integrator. When the duration of the optical signal that needs to be accumulated is less than the maximum integration time, the electrical output could be derived as:
\[V_{\rm out}\,\propto\int_{\delta t}\left[\frac{1}{2}\,\Re\times M(t)\times W (t)\right]dt \tag{4}\]
where the output voltage \(V_{\rm out}\) is proportional to the result of the vector dot product.
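The chain from Eqs. (1)-(2) to Eq. (4) can be checked numerically. The sketch below is a minimal model of ours, with unit responsivity and a normalized half-wave voltage (the exact prefactor depends on these normalization conventions): it forms the two detected intensities, takes the balanced-detector difference, and integrates over one data group. With \(\alpha=1/2\) the bias term cancels and the output is proportional to the dot product \(M\cdot W\), as stated in Eq. (4).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 9           # symbols in one data group (e.g. a flattened 3x3 kernel window)
V_pi = 1.0      # half-wave voltage (normalized)
alpha = 0.5     # VOA attenuation coefficient

M = rng.random(n)                      # non-negative input symbols
W = rng.uniform(-0.5, 0.5, n)          # signed weights, |W| < V_pi (linear region)

I_M = M / V_pi                         # intensity after the first-stage MZM
I_up = 0.25 * I_M * (1.0 + W / V_pi)   # Eq. (1): upper path, two-stage MZMs
I_down = 0.5 * alpha * I_M             # Eq. (2): lower path, after the VOA

I_bpd = I_up - I_down                  # balanced detection (responsivity = 1)
V_out = I_bpd.sum()                    # time-integrator over the data group

# With alpha = 1/2 the bias term cancels and V_out is proportional to M . W
assert np.isclose(V_out, 0.25 * np.dot(M, W) / V_pi**2)
```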
In the experiment, the temporal accumulation process is achieved by using a low-speed BPD as an integrator (see Supplementary Note 1). A series of square light pulses with varied durations are detected by a BPD to test the temporal integration. Since the duration of the square light pulses is less than the response time of the BPD, the amplitude of the output electrical signal increases linearly with the pulse width. The details of the accumulation process are further illustrated in the experiment (see Supplementary Note 1). Hence, the temporal MAC operation could be accomplished by the proposed unit.
**Convolutional image processing with photonic MAC operation unit.** Figure 2(a) shows the experimental setup of the proposed temporal photonic MAC operation unit for convolutional image processing. In the experiment, we perform two-dimensional (2D) convolution on an 8-bit 92\(\times\)92 flower image to detect its edge feature. One channel of the RGB three-channel flower picture shown in Fig. 2(d)
is used as the input grayscale image. A 3\(\times\)3 Laplacian kernel \(\begin{bmatrix}-1&-1&-1\\ -1&8&-1\\ -1&-1&-1\end{bmatrix}\) is shown in Fig. 2(e), which could be used to realize a 2D isotropic measurement of the second-order spatial derivative of an image, and highlights regions of rapid intensity change (see Supplementary Fig. 4 for more detailed information). According to the order in which the convolution kernel moves in the convolution calculation, the grayscale image is divided into a series of 3\(\times\)3 data groups, and each data group contains 9 pixels. Then, the image is flattened into a vector following the order of the groups. A certain time interval is added between each data group to ensure that different groups will not interfere with each other when the temporal signal is accumulated. In the experiment, the time interval between each data group is 16 ns. Like the input image, the convolution kernel also needs to be flattened into a vector, and repeated periodically with a time interval of 16 ns. After the flattening operation, the image signal and the convolution kernel signal are loaded into the photonic MAC operation unit through the cascaded MZMs using an arbitrary waveform generator (AWG). The symbol rate of the signals generated by the AWG is 2.88 gigabaud, with a resolution of 8 bits per symbol. When the light propagates through the cascaded MZMs, the vectors representing the image information and the convolution kernel are multiplied in the time domain. The multiplication results for each data group are then accumulated in the time domain through a BPD with a bandwidth of 150 MHz. The processed image could be recovered by sampling and rearranging the output temporal signal from the BPD. Since there is a fixed time interval between each data group in the temporal accumulation process, the effective computing speed is 2\(\times\)9/16=1.125 GMAC operations per second. Figure 2(b) shows part of the measured temporal waveforms for convolutional image processing, and Fig. 2(c) shows the details of the electrical signal waveform output from the BPD. As can be seen in Fig. 2(f) and (g), the experimentally measured edge-detected image agrees very well with the theoretically calculated one using convolutional image processing algorithms. The measured experimental result confirms the feasibility of convolutional image processing using the photonic MAC operation unit, which is a prerequisite for building an optical convolutional neural network (OCNN).
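The flattening scheme described above can be summarized in a few lines of NumPy. This is a software emulation of ours (the guard intervals between data groups are omitted since they only separate groups in time, and border pixels are ignored via a 'valid' output): the image is cut into 3\(\times\)3 data groups in the kernel's scan order, the flattened kernel is reused for every group, and per-group summation reproduces the 2D convolution.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(92, 92)).astype(float)   # stand-in image
kernel = np.array([[-1., -1., -1.],
                   [-1.,  8., -1.],
                   [-1., -1., -1.]])                      # 3x3 Laplacian

k = 3
out_h = out_w = 92 - k + 1        # 'valid' output size; borders are ignored here

# Flatten the image into 3x3 data groups following the kernel's scan order;
# the guard interval between groups only matters in the time domain.
groups = np.stack([img[i:i + k, j:j + k].ravel()
                   for i in range(out_h) for j in range(out_w)])   # (G, 9)
kernel_stream = kernel.ravel()     # the kernel vector is repeated per group

# Cascaded-MZM multiplication followed by per-group temporal integration:
edge = (groups * kernel_stream).sum(axis=1).reshape(out_h, out_w)

# Cross-check against a direct 2D convolution (correlation here, but the
# symmetric Laplacian kernel makes the two coincide).
ref = np.array([[(img[i:i + k, j:j + k] * kernel).sum()
                 for j in range(out_w)] for i in range(out_h)])
assert np.allclose(edge, ref)
```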
**OCNN for image recognition.** In deep learning, a CNN is a class of ANN, most commonly applied to analyze visual imagery[49]. The powerful feature extraction capability of CNN makes it highly efficient for various image recognition tasks. In general, a typical CNN is comprised of two major parts. The first part contains alternating convolutional and pooling layers, which is used for feature learning. The second part is the fully connected layer, which can act as a classifier. The convolutional layer performs a dot product of the convolution kernel with the layer's input matrix. The pooling layer serves to progressively reduce the spatial size of the representation, to reduce the number of parameters and amount of computation in the
network, and hence to also reduce the computation cost. There are several nonlinear functions to implement pooling, of which max-pooling is the most common. It calculates the maximum value for patches of a feature map, and creates a downsampled (pooled) feature map. Meanwhile, this also implies that the pooling operation results in a potentially unacceptable information loss.
Recently, many optical neural networks based on various architectures have been proposed and experimentally demonstrated with impressive performance. However, these hardware implementations only support small matrices and implement few neurons. To run large neural networks on the implemented photonic hardware accelerators, down-sampling operations are usually applied in preprocessing before photonic computing[39], and simplified neural network models are also utilized[42, 27, 37].
Figure 3: **An OCNN for image classification.****a** Schematic of OCNN-based image classification. The OCNN consists of a convolutional layer and a fully connected layer. The linear operations in these two layers are implemented by the photonic MAC operation unit. **b** The output temporal waveform for the feature map extraction. Inset shows the reconstructed feature map. **c** The output temporal waveform from the fully connected layer. The sampling points from left to right correspond to the handwritten digits from 0 to 9. The maximum sampling point corresponds to the classification result.
Here, based on the proposed photonic MAC operation unit, we further construct an OCNN for classifying handwritten digit images in the MNIST dataset. Thanks to the intrinsic advantage of temporal large-scale matrix operation, the max-pooling layer is not used in the designed CNN architecture. As shown in Fig. 3(a), a 5\(\times\)5 convolution kernel is used to extract the feature map of the 28\(\times\)28 handwritten digit images in the convolutional layer. By sampling the output temporal waveform of the photonic MAC operation unit and using the activation function (ReLU) to process the sampling result, a vector containing 784 elements is obtained. By arranging these 784 elements into a 28\(\times\)28 matrix, the feature map of the input handwritten digit image shown in Fig. 3(b) is obtained. The experimentally reconstructed feature maps from the convolutional layer exhibit good agreement with the theoretically calculated results (see Supplementary Figs. 5 and 6 for more detailed information). The photonic MAC operation unit can be further used to implement a fully connected layer, since the fully connected layer is also a linear matrix operation. To realize the complete functionality of an OCNN, the obtained feature maps from the convolutional layer are loaded into the photonic MAC operation unit again, which this time functions as a fully connected layer. In order to match the output of the convolutional layer, the fully connected layer is formed with 784 inputs and 10 outputs. Thus, the weight bank of the fully connected layer is a 10\(\times\)784 matrix. The weight bank matrix could be divided into 10 vectors with a length of 784, and sequentially loaded on the photonic MAC operation unit together with the feature map vectors. The output temporal waveform from the fully connected layer is recorded in Fig. 3(c). The sampling points from left to right correspond to the handwritten digits from 0 to 9. The image classification result is determined by the serial number of the maximum value in the output waveform of the fully connected layer. As can be seen in Fig. 3(c), the maximum sampling point corresponds to the handwritten digit 2, which agrees with the theoretically calculated one. We use the trained 5\(\times\)5 convolution kernel to process 20 handwritten images to get their feature maps, and the corresponding classification result gives a classification accuracy of 85% (see Supplementary Note 3 for more detailed information). In addition, more classification tests are performed by using 100 theoretically calculated feature maps. The confusion matrix for 100 images shown in Fig. 4(a) gives a classification accuracy of 88%, in contrast to the theoretically calculated accuracy of 93.2% on a digital computer (Fig. 4(b)). Due to fluctuations in device performance, the experimental accuracy is slightly lower than the theoretically calculated value for the adopted CNN model. By integrating the discrete devices on a single small chip, the temporal computing system would become more stable.
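The full OCNN inference pipeline, as it is emulated digitally, can be sketched as follows. All weights are random stand-ins for the trained ones, and 'same' zero-padding is assumed so that the 5\(\times\)5 convolution returns a 28\(\times\)28 feature map, consistent with the 784-element vector described above.

```python
import numpy as np

rng = np.random.default_rng(3)
image = rng.random((28, 28))            # stand-in MNIST digit
kernel = rng.uniform(-1, 1, (5, 5))     # stand-in for the trained 5x5 kernel
W_fc = rng.uniform(-1, 1, (10, 784))    # stand-in fully connected weight bank

# Convolutional layer: 'same' zero-padding keeps the 28x28 size, so the
# feature map again flattens into a 784-symbol temporal vector.
padded = np.pad(image, 2)
feat = np.array([[(padded[i:i + 5, j:j + 5] * kernel).sum()
                  for j in range(28)] for i in range(28)])
feat = np.maximum(feat, 0.0)            # ReLU applied to the sampled outputs

# Fully connected layer: the 10x784 weight bank is split into ten length-784
# vectors, each paired with the feature-map vector in its own data group.
scores = np.array([np.dot(W_fc[c], feat.ravel()) for c in range(10)])
print(int(np.argmax(scores)))           # index of the maximum sampling point
```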
In the experiment, both the image data and the convolution kernel are flattened into temporal data groups. Each data group contains 25 symbols with a symbol rate of 6.92 gigabaud, and the time interval between each data group is 10 ns. The bandwidth of the BPD used for accumulation is 150 MHz. Therefore, in the convolutional layer, the total computing speed is 2\(\times\)25/10=5 GMAC operations per second (see Supplementary Note 6). In the implementation of the fully connected layer, the modulated symbol rate
is set to 20.56 gigabaud, and a BPD with a bandwidth of 4 MHz is used to achieve the accumulation operation. As shown in Fig. 3(c), the time interval between data groups is 583.66 ns, which could be further shortened. The effective computing speed for the fully connected layer is 2\(\times\)784/583.66=2.69 GMAC operations per second (see Supplementary Note 6).
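The effective computing speeds quoted throughout are all instances of the same bookkeeping: 2 MAC operations per symbol (one multiplication, one accumulation) divided by the data-group period. A one-line helper reproduces the reported figures.

```python
def effective_speed_gmacs(symbols_per_group, group_period_ns):
    """2 x (MACs per group) / (group period) in GMAC/s; the factor 2 counts
    one multiplication and one accumulation per symbol."""
    return 2 * symbols_per_group / group_period_ns

print(effective_speed_gmacs(9, 16))        # edge detection:      1.125 GMAC/s
print(effective_speed_gmacs(25, 10))       # convolutional layer: 5.0   GMAC/s
print(effective_speed_gmacs(784, 583.66))  # fully connected:     ~2.69 GMAC/s
print(effective_speed_gmacs(784, 243.19))  # sliding window:      ~6.45 GMAC/s
```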
As mentioned above, the down-sampling process and model simplification will cause some input information to be lost, thereby reducing the computational accuracy of neural networks. To investigate the impact of the down-sampling operation on the computational accuracy of neural networks, a simulated model shown in Fig. 4(c) is built in TensorFlow, which is trained by the AdamOptimizer with a default learning rate of 0.001, a training period of 50 iterations and a batch size of 1024. As in the architecture used in the OCNN experiment, the convolutional layer contains a 5\(\times\)5 kernel and a nonlinear activation function (ReLU). A max-pooling layer is employed after the convolutional layer in the simulated CNN model. The training accuracies of the CNN model with four different down-sampling rates (1\(\times\), 4\(\times\), 9\(\times\), and
Figure 4: **Performance analysis of the proposed OCNN.****a** Confusion matrix obtained from 100 repeated experiments. **b** Confusion matrix obtained by simulation with 10,000 samples. **c** A common CNN architecture composed of a convolutional layer, a pooling layer and a fully connected layer. **d** Training curves of the CNN model with different down-sampling rates. Notably, the accuracy that the CNN model can obtain decreases with the increase of the down-sampling rate in the pooling layer.
16\(\times\)) are simulated. As can be seen in Fig. 4(d), the simulated classification accuracy that the CNN model can achieve becomes lower as the down-sampling rate increases. Compared with the simulated accuracy of 93.2% in the CNN model without the down-sampling operation, 4\(\times\), 9\(\times\), and 16\(\times\) down-sampling will lead to training accuracies of 90.6%, 85.9%, and 80.7%, respectively. Hence, the CNN without a pooling layer significantly outperforms its down-sampled counterparts. A similar performance analysis for the fully connected neural network is carried out in Supplementary Fig. 10, which further verifies the feasibility and advantages of the proposed approach. The temporal computing approach holds great potential to enhance the learning capability of optical neural networks and to process more complex computational tasks, thereby helping to alleviate the problems imposed by the current limitations of manufacturing large-scale photonic integrated chips.
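For reference, the simulated down-sampling study can be sketched in TensorFlow/Keras as follows. This is a minimal reconstruction under our assumptions (a single-channel 5\(\times\)5 convolution, pool sizes 2/3/4 for the 4\(\times\)/9\(\times\)/16\(\times\) rates, and one epoch shown for brevity instead of the 50 training iterations used in the text); it is not the authors' original script.

```python
import tensorflow as tf

def make_cnn(pool_size=None):
    """CNN from the down-sampling study: one 5x5 convolution with ReLU,
    an optional max-pooling stage, and a 10-way fully connected classifier."""
    layers = [tf.keras.Input(shape=(28, 28, 1)),
              tf.keras.layers.Conv2D(1, 5, padding="same", activation="relu")]
    if pool_size is not None:        # pool sizes 2/3/4 -> 4x/9x/16x down-sampling
        layers.append(tf.keras.layers.MaxPooling2D(pool_size))
    layers += [tf.keras.layers.Flatten(),
               tf.keras.layers.Dense(10, activation="softmax")]
    model = tf.keras.Sequential(layers)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0

for pool in (None, 2, 3, 4):         # 1x, 4x, 9x and 16x down-sampling rates
    model = make_cnn(pool)
    model.fit(x_train, y_train, epochs=1, batch_size=1024, verbose=0)
    print(pool, model.evaluate(x_train, y_train, verbose=0)[1])
```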
Figure 5: **Object detection based on the sliding-window method.****a** The principle of object detection based on the sliding-window method. **b** Schematic of using the photonic MAC operation unit to detect the handwritten digits in a 68\(\times\)68 image. **c, d, e** The detection results of the handwritten digits 0, 4, and 8; the order of the detected peaks in the waveforms indicates the position of the object in the original image.
**Sliding-window object detection with photonic MAC operation unit.** The sliding-window method is a very classical approach in object detection, and it also forms the basis of many other advanced object detection algorithms[48, 49]. Here, the photonic MAC operation unit is further used to realize a primary sliding-window multi-target detection. As shown in Fig. 5(a), the procedure of the sliding-window method is to crop the image into a series of image patches and classify all image patches as objects or non-objects independently. Figure 5(b) shows the details of using the photonic MAC operation unit to detect the handwritten digits in a 68\(\times\)68 image. The size of the sliding window is set to 28\(\times\)28 to match the size of the handwritten digits in the image. By moving the window in steps of ten pixels, the 68\(\times\)68 image is divided into 25 sets of 28\(\times\)28 image patches. All 25 image patches are sequentially fed into a linear classifier, which is the most computationally intensive part of the sliding-window method. In the simulation, the linear classifier with a 28\(\times\)28\(\times\)10 three-dimensional (3D) matrix is trained in TensorFlow on the MNIST dataset. The linear classifier matrix is then flattened into 10 vectors with a length of 784. The image patches and the trained vectors are loaded into the photonic MAC operation unit to imitate the process of the classifier sliding through the image.
By sampling the output temporal waveform of the photonic MAC operation unit, the decision values for all the image patches are obtained. A decision threshold is adopted to judge whether there is an object in the image patch. In the experiment, the symbol rate is 20.56 gigabaud, and a time interval of 243.19 ns is used between each data group. The effective computing speed is 2\(\times\)784/243.19=6.45 GMAC operations per second. Figures 5(c)-(e) show the corresponding output waveforms from the BPD for the handwritten digits 0, 4, and 8 shown in Fig. 5(b). The order of the 25 pulses in the waveform represents the order in which the image patches are divided from the original image. The sampling points that exceed the decision threshold determine the position of the object in the original image. For example, the output waveform of the handwritten digit 8 is shown in Fig. 5(e), where the value of the 5th sampling point from left to right exceeds the decision threshold. Thus, the object for the handwritten digit 8 is detected in the first row and fifth column of the original image. The object detection for the other 4 sets of handwritten digits has also been successfully achieved in the experiment (see Supplementary Note 4). Here, a linear classifier is used instead of a multi-layer neural network classifier for multi-target detection. The straightforward approach adopted in the proposed model enables more than 4 million object/non-object judgments per second in the designed task. The sliding-window object detection model could be further expanded to a more powerful model, such as a multi-layer neural network, for more complex tasks such as face detection[50, 51].
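The sliding-window pipeline reduces to patch extraction plus 25 dot products per detector vector. Below is a minimal sketch of ours, with random stand-ins for the image and the trained classifier and a purely illustrative data-driven threshold.

```python
import numpy as np

rng = np.random.default_rng(4)
scene = rng.random((68, 68))             # stand-in 68x68 image
W_cls = rng.uniform(-1, 1, (10, 784))    # stand-in trained linear classifier

# Slide a 28x28 window in steps of ten pixels: 5 x 5 = 25 image patches,
# each flattened into one 784-symbol data group.
patches = [scene[i:i + 28, j:j + 28].ravel()
           for i in range(0, 41, 10) for j in range(0, 41, 10)]

digit = 8                                # class whose detector vector is loaded
decisions = np.array([np.dot(W_cls[digit], p) for p in patches])
threshold = decisions.mean() + decisions.std()   # illustrative threshold only

for h in np.flatnonzero(decisions > threshold):  # patch order encodes position
    row, col = divmod(h, 5)
    print(f"digit {digit} candidate at window row {row}, column {col}")
```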
Figure 6: **Parallel fully connected neural network for an image recognition task.****a** Schematic of the parallel fully connected neural network. The input image signals are multiplexed on two different light wavelengths and transmitted through the implemented optical neural network. **b** The transmission spectrum of the multiplexed optical signals. **c** The transmission spectrum of the multiplexed optical signal with one channel filtered out by the optical filter. **d, e** The measured waveforms for the handwritten digit 4 at 1549.8 nm and the handwritten digit 2 at 1550.2 nm, respectively. The maximum sampling point corresponds to the classification result. **f** The normalized sampling results at 1549.8 nm. **g** The normalized sampling results at 1550.2 nm.
**WDM-compatible parallel photonic computing.** In addition to the ultrahigh bandwidth inherent in optics, the capability of highly parallel processing is another major advantage. Thanks to the scalability and the broadband operating characteristic of the proposed photonic MAC operation unit, it can be further expanded and enhanced by multiplexing the wavelength and space dimensions. As a proof-of-concept experimental demonstration, we construct a WDM-compatible fully connected neural network, where two wavelength-multiplexed channels are employed to classify the handwritten digit images in the MNIST dataset. The experimental setup of the WDM-compatible fully connected neural network is shown in Supplementary Fig. 9. Two different handwritten digit images with 28\(\times\)28 pixels are separately flattened into two vectors and loaded on two different wavelengths through two MZMs in the first stage. Then the temporal signals carried by the two different light wavelengths are combined through an optical coupler as the input of the fully connected layer. The diagram of the parallel fully connected neural network is shown in Fig. 6. Two 1\(\times\)784 vectors flattened from two 28\(\times\)28 images are loaded on the first-stage MZMs, and a 10\(\times\)784 matrix serving as the weight bank is loaded onto the second-stage MZM. After completing the signal multiplication operation, optical bandpass filters are used to filter out one of the wavelengths, as shown in Fig. 6(b) and (c). Then the filtered light signal is sent to the BPD for the temporal accumulation. The final step is to sample the temporal waveform to get the output results of the fully connected layer. As can be seen in Fig. 6(d) and (e), the maximum sampling points in the measured temporal waveforms correspond to the handwritten digit 4 at 1549.8 nm and the handwritten digit 2 at 1550.2 nm, respectively (see Supplementary Note 5 for more experimental results). The normalized sampling results for handwritten digits from 0 to 9 at the wavelengths of 1549.8 nm and 1550.2 nm are summarized in Fig. 6(f) and (g), which agree well with the theoretically calculated ones. In the experiment, the symbol rate is 20.56 gigabaud, and a time interval of 583.66 ns is used between each data group. Thus, the effective computing speed is 2\(\times\)784/583.66=2.69 GMAC operations per second for one wavelength. The total computing speed of the WDM-compatible fully connected neural network is 2\(\times\)2.69=5.38 GMAC operations per second (see Supplementary Note 6). The proposed computing architecture can significantly enhance the computation capability of optical neural networks with more multiplexed channels.
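At the signal level, the WDM-parallel fully connected layer amounts to two independent input vectors sharing one weight bank, with the optical filter selecting which wavelength reaches the BPD. A compact emulation (random stand-ins of ours) reads:

```python
import numpy as np

rng = np.random.default_rng(5)
img_a = rng.random(784)                 # digit carried at 1549.8 nm
img_b = rng.random(784)                 # digit carried at 1550.2 nm
W_fc = rng.uniform(-1, 1, (10, 784))    # shared fully connected weight bank

# Both wavelength channels see the same second-stage (weight) modulation; the
# optical filter selects one wavelength before balanced detection, so each
# channel independently yields its 10 class scores from the common weights.
scores_a = W_fc @ img_a
scores_b = W_fc @ img_b
print(int(np.argmax(scores_a)), int(np.argmax(scores_b)))
```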
## Discussion
We propose a compact and scalable photonic computing architecture based on temporal photoelectric MAC operations, where large-scale matrix multiplication operations could be directly processed in the time domain. In particular, the proposed approach offers the opportunity to extend the length of the operand vectors and avoid the additional down-sampling process in neural networks, which helps to obtain higher computing accuracy in a range of tasks. Based on the photonic MAC operation unit, a series of proof-of-principle experiments are demonstrated, including high-speed convolutional image processing, optical neural network-based recognition tasks, and sliding-window-based multi-target detection. Furthermore, based on the proposed parallel photonic computing architecture, we construct a WDM-compatible fully connected neural network for an image recognition task.
Recently, many approaches have been experimentally demonstrated to realize photonic matrix multiplication and neural networks. Table 1 shows the comparison between the reported works and our work, and some key metrics evaluating the physical network are listed, including the architecture, the accumulation mechanism, and the matrix dimension. Previously reported MZI-based integrated photonic approaches have been predominantly limited by the current difficulty of manufacturing large-scale photonic integration hardware, resulting in large footprints and small matrix sizes (4\(\times\)4 and 6\(\times\)6). Additionally, the accumulation operation of MRR/PCM array-based solutions is accomplished by summing up the light fields
\begin{table}
\begin{tabular}{c c c c c} \hline
**Year** & **Ref.** & **Accumulation approach** & **Architecture** & **Matrix dimension** \\ \hline
2017 & 25 & Coherent light accumulation & MZI array & 4\(\times\)4 \\
2021 & 27 & Coherent light accumulation & MZI array & 6\(\times\)6 \\
2021 & 28 & Coherent light accumulation & MZM array & 9\(\times\)1 \\
2019 & 33 & Multi-wavelength accumulation & MRR \& PCM array & 4\(\times\)15 \\
2021 & 36 & Multi-wavelength accumulation & MRR array & 4\(\times\)2 \\
2021 & 37 & Multi-wavelength accumulation & PCM array & 16\(\times\)16 \\
2021 & 40 & Multi-wavelength accumulation & Discrete device & 90\(\times\)1 \\
2021 & 43 & Multiple photodetectors accumulation & MRR \& VOA array & 5\(\times\)6 \\
2022 & 53 & Coherent light accumulation & Cascaded MMI & 10\(\times\)10 \\ & Ours & Time-domain integration & MZM array & 784\(\times\)1 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison between the reported optical neural networks and our work.
of multi-wavelength channels into a single photodetector, which also leads to a limited matrix dimension (16\(\times\)16). Other approaches are likewise limited by either the integration scale or the number of wavelength channels. By comparison, our approach is based on time-domain integration, which means that the length of the vector for the dot product is theoretically unlimited. In the experiment, the length of the loaded vector can be up to 784, which is much larger than that of previous approaches. Furthermore, the demonstrated photonic computing unit could be easily integrated on a chip and scaled up to a multidimensional parallel photonic MAC operation architecture by employing the time, wavelength and spatial dimensions, which can further increase the number of parallel computation channels. The advantages are briefly summarized as follows: (a) it offers the ability to perform large-scale matrix multiplication operations, which is determined by the temporal MAC operation mechanism; (b) its strong scalability and high parallelism make it possible to use multiple dimensional resources simultaneously for computational acceleration.
In this work, the effective computing speed is at the level of Giga MAC operations per second, which is restricted by the modulators and the BPD. With further improvement, state-of-the-art integrated photonic transmitters and photodetectors can drive the MAC operation unit at a speed of 100 GOPS[52]. When scaling up to a larger network with 100 parallel wavelength/space computing channels, the total effective computing speed can easily reach 10 Tera MAC operations per second (TOPS). Thus, the proposed photonic computing architecture is capable of performing large-scale matrix-vector multiplication. Furthermore, it is worth mentioning that the proposed temporal computing architecture can be easily fabricated on a single small chip, and thus its enhanced functionality and performance will make it practical to implement large-scale matrix computation for photonic neural networks. Additionally, complex-valued neural networks could be realized by IQ modulation and reception, which would further improve the computing performance.
## Methods
### Experimental setup.
The light source used in the experiments was an NKT Koheras BASIK 1550 nm single-frequency fiber laser. A 4-channel Keysight M8196A AWG with 92 GSa/s sampling rate, 8-bit vertical resolution and 32 GHz analog bandwidth was used for digital-to-analog conversion. All electro-optic modulation was performed with Fujitsu FTM7937 modulators. The BPD was a Thorlabs PDB 450C with switchable gain and bandwidth. A Tektronix DPO75002SX oscilloscope was used to analyze the output waveform of the BPD. The digital filter of the oscilloscope was set to 500 MHz to suppress the high-frequency noise of the BPD.
2307.02279 | From NeurODEs to AutoencODEs: a mean-field control framework for
width-varying Neural Networks | The connection between Residual Neural Networks (ResNets) and continuous-time
control systems (known as NeurODEs) has led to a mathematical analysis of
neural networks which has provided interesting results of both theoretical and
practical significance. However, by construction, NeurODEs have been limited to
describing constant-width layers, making them unsuitable for modeling deep
learning architectures with layers of variable width. In this paper, we propose
a continuous-time Autoencoder, which we call AutoencODE, based on a
modification of the controlled field that drives the dynamics. This adaptation
enables the extension of the mean-field control framework originally devised
for conventional NeurODEs. In this setting, we tackle the case of low Tikhonov
regularization, resulting in potentially non-convex cost landscapes. While the
global results obtained for high Tikhonov regularization may not hold globally,
we show that many of them can be recovered in regions where the loss function
is locally convex. Inspired by our theoretical findings, we develop a training
method tailored to this specific type of Autoencoders with residual
connections, and we validate our approach through numerical experiments
conducted on various examples. | Cristina Cipriani, Massimo Fornasier, Alessandro Scagliotti | 2023-07-05T13:26:17Z | http://arxiv.org/abs/2307.02279v2 | # From NeurODEs to AutoencODEs:
###### Abstract
The connection between Residual Neural Networks (ResNets) and continuous-time control systems (known as NeurODEs) has led to a mathematical analysis of neural networks which has provided interesting results of both theoretical and practical significance. However, by construction, NeurODEs have been limited to describing constant-width layers, making them unsuitable for modeling deep learning architectures with layers of variable width. In this paper, we propose a continuous-time Autoencoder, which we call AutoencODE, based on a modification of the controlled field that drives the dynamics. This adaptation enables the extension of the mean-field control framework originally devised for conventional NeurODEs. In this setting, we tackle the case of low Tikhonov regularization, resulting in potentially non-convex cost landscapes. While the global results obtained for high Tikhonov regularization may not hold globally, we show that many of them can be recovered in regions where the loss function is locally convex. Inspired by our theoretical findings, we develop a training method tailored to this specific type of Autoencoders with residual connections, and we validate our approach through numerical experiments conducted on various examples.
_Keywords--_ Machine Learning, Optimal Control, Gradient Flow, Minimising Movement Scheme, Autoencoders
## 1 Introduction
In recent years, the field of artificial intelligence has witnessed remarkable progress across diverse domains, including computer vision and natural language processing. In particular, neural networks have emerged as a prominent tool, revolutionizing numerous machine learning tasks. Consequently, there is an urgent demand for a robust mathematical framework to analyze their intricate characteristics. A deep neural network can be seen as a map \(\phi:\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}^{d_{\mathrm{out}}}\), obtained as the composition of \(L\gg 1\) applications \(\phi=\phi_{L}\circ\ldots\circ\phi_{1}\), where, for every \(n=1,\ldots,L\), the function \(\phi_{n}:\mathbb{R}^{d_{n}}\to\mathbb{R}^{d_{n+1}}\) (also referred to as _the \(n\)-th layer_ of the network) depends on a _trainable_ parameter \(\theta_{n}\in\mathbb{R}^{m_{n}}\). The crucial process of choosing the values of the parameters \(\theta_{1},\ldots,\theta_{L}\) is known as the _training of the network_. For a complete survey on the topic, we recommend the textbook [24].
Recent advancements have explored the link between dynamical systems, optimal control, and deep learning, proposing a compelling perspective. In the groundbreaking work [29], it was highlighted how the problem of training very deep networks can be alleviated by the introduction of a new type layer called "Residual Block". This consists in using the identity map as skip connection, and after-addition activations. In other words, every layer has the following form:
\[X_{n+1}=\phi_{n}(X_{n})=X_{n}+\mathcal{F}(X_{n},\theta_{n}), \tag{1.1}\]
where \(X_{n+1}\) and \(X_{n}\) are, respectively, the output and the input of the \(n\)-th layer. This kind of architecture is called _Residual Neural Network_ (or ResNet). It is important to observe that, in order to give sense to the sum in (1.1), in each layer the dimension of the input should coincide with the dimension of the output. In the practice of Deep
Learning, this novel kind of layer has turned out to be highly beneficial, since it is effective in avoiding the "vanishing of the gradients" during the training [5], or the saturation of the network's accuracy [28]. Indeed, before [29], these two phenomena had long limited the large-scale application of deep architectures.
Despite the original arguments in support of residual blocks being based on empirical considerations, their introduction revealed nevertheless a more mathematical and rigorous bridge between residual deep networks and controlled dynamical systems. Indeed, what makes Residual Neural Networks particularly intriguing is that they can be viewed as discretized versions of continuous-time dynamical systems. This dynamical approach was proposed independently in [18] and [26], and it was greatly popularized in the machine learning community under the name of NeurODEs by [13]. This connection with dynamical systems relies on reinterpreting the iteration (1.1) as a step of the forward-Euler approximation of the following dynamical system:
\[\dot{X}(t)=\mathcal{F}(X(t),\theta(t)), \tag{1.2}\]
where \(t\mapsto\theta(t)\) is the map that, instant by instant, specifies the value of the parameter \(\theta\). Moreover, the training of these neural networks, typically formulated as empirical risk minimization, can be reinterpreted as an optimal control problem. Given a labelled dataset \(\{(X_{0}^{i},Y_{0}^{i})\}_{i=1}^{N}\) of size \(N\geq 1\), the depth of the time-continuous neural network (1.2) is denoted by \(T>0\). Then, training this network amounts to learning the control signals \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) in such a way that the terminal output \(X_{T}^{i}\) of (1.2) is close to its corresponding label \(Y_{0}^{i}\) for all \(i=1,\ldots,N\), with respect to some distortion measure \(\ell(\cdot,\cdot)\in C^{1}\). A typical choice is \(\ell(x,y):=|x-y|^{2}\), which is often referred to as the _squared loss function_ in the machine learning literature. Therefore, it is possible to formulate the following optimal control problem
\[\inf_{\theta\in L^{2}([0,T];\mathbb{R}^{m})}J^{N}(\theta):=\begin{cases} \frac{1}{N}\sum_{i=1}^{N}\ell\big{(}X^{i}(T),Y^{i}(T)\big{)}+\lambda \int_{0}^{T}|\theta(t)|^{2}\,dt,\\ \text{s.t.}\ \begin{cases}\dot{X}^{i}(t)=\mathcal{F}(t,X^{i}(t),\theta(t)), \hskip 28.452756pt\dot{Y}^{i}(t)=0,\\ \big{(}X^{i}(t),Y^{i}(t)\big{)}\big{|}_{t=0}=(X_{0}^{i},Y_{0}^{i}),\end{cases}&i \in\{1,\ldots,N\},\end{cases}\]
where, differently from (1.2), we admit here the explicit dependence of the dynamics on the time variable. Notice that the objective function also comprises of Tikhonov regularization, tuned by the parameter \(\lambda\), which plays a crucial role in the analysis of this control problem. The benefit of interpreting the training process in this manner results in the possibility of exploiting established results from the branch of mathematical control theory, to better understand this process. A key component of optimal control theory is a set of necessary conditions, known as Pontryagin Maximum Principle (PMP), that must be satisfied by any (local) minimizer \(\theta\). These conditions were introduced in [37] and have served as inspiration for the development of innovative algorithms [33] and network structures [12] within the machine learning community.
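For readers coming from the machine-learning side, the correspondence between (1.1) and (1.2) is easy to make concrete in code: a forward-Euler discretization of (1.2) with \(L\) steps is exactly an \(L\)-layer ResNet whose residual map is \(dt\cdot\mathcal{F}\). The field \(\tanh(Wx+b)\) below is an illustrative choice of ours, not one prescribed by the text.

```python
import numpy as np

def F(t, x, theta):
    """Illustrative controlled field F(t, x, theta) = tanh(W x + b); this
    specific parametrization is our assumption, not fixed by the text."""
    W, b = theta
    return np.tanh(W @ x + b)

def neurode_forward(x0, thetas, T=1.0):
    """Forward-Euler integration of (1.2): each Euler step is one residual
    layer of the form (1.1), with the step size dt absorbed into F."""
    L = len(thetas)
    dt = T / L
    x = np.array(x0, dtype=float)
    for n, theta in enumerate(thetas):
        x = x + dt * F(n * dt, x, theta)   # X_{n+1} = X_n + dt * F(t_n, X_n, theta_n)
    return x

d, L = 4, 16                               # state dimension and number of layers
rng = np.random.default_rng(0)
thetas = [(rng.normal(size=(d, d)) / d, rng.normal(size=d)) for _ in range(L)]
print(neurode_forward(rng.normal(size=d), thetas))
```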
This work specifically addresses a variant of the optimal control problem presented above, in which the focus is on the case of an infinitely large dataset. This formulation gives rise to what is commonly known as a _mean-field optimal control problem_, where the term "mean-field" emphasizes the description of a multiparticle system through its averaged effect. In this context, the focus is on capturing the collective behavior of the system rather than individual particle-level dynamics, by considering the population as a whole. As a consequence, the parameter \(\theta\) is shared by the entire population of input-target pairs, and the optimal control is required to depend on the initial distribution \(\mu_{0}(x,y)\in\mathcal{P}(\mathbb{R}^{d}\times\mathbb{R}^{d})\) of the input-target pairs. Therefore, the optimal control problem needs to be defined over spaces of probability measures, and it is formulated as follows:
\[\inf_{\theta\in L^{2}([0,T];\mathbb{R}^{m})}J(\theta):=\begin{cases}\int_{ \mathbb{R}^{2d}}\ell(x,y)\,d\mu_{T}(x,y)+\lambda\int_{0}^{T}|\theta(t)|^{2} \,dt,\\ \text{s.t.}\ \begin{cases}\partial_{t}\mu_{t}(x,y)+\nabla_{x}\cdot( \mathcal{F}(t,x,\theta)\mu_{t}(x,y))=0&t\in[0,T],\\ \mu_{t}|_{t=0}(x,y)=\mu_{0}(x,y),\end{cases}\end{cases}\]
This area of study has gained attention in recent years, and researchers have derived the corresponding Pontryagin Maximum Principle in various works, such as [19] and [7]. It is worth mentioning that there are other types of mean-field analyses of neural networks, such as the well-known work [36], which focus on mean-field at the parameter level, where the number of parameters is assumed to be infinitely large. However, our approach in this work takes a different viewpoint, specifically focusing on the control perspective in the case of an infinitely large dataset.
One of the contributions of this paper is providing a more accessible derivation of the necessary conditions for optimality, such as the well-known Pontryagin Maximum Principle. Namely, we characterize the stationary points of the cost functional, and we are able to recover the PMP that was deduced in [7] under the assumption of large values of the regularization parameter \(\lambda\), and whose proof relied on an infinite-dimensional version of the Lagrange multiplier rule. This alternative perspective offers a clearer and more intuitive understanding of the PMP, making it easier to grasp and apply it in practical scenarios.
In addition, we aim at generalizing the applicability of the results presented in [7] by considering a possibly non-convex regime, corresponding to small values of the parameter \(\lambda>0\). As mentioned earlier, the regularization coefficient \(\lambda\) plays a crucial role in determining the nature of the cost function. Indeed, when \(\lambda\) is sufficiently large, the cost function is convex on the sub-level sets, and it is possible to prove the existence and uniqueness of the solution of the optimal control problem that arises from training NeurODEs. Additionally, in this highly-regularized scenario, desirable properties of the solution, such as its continuous dependence on the initial data and a bound on the generalization capabilities of the networks, have been derived in [7].
However, in practical applications, a large regularization parameter may cause a poor performance of the trained NeurODE on the task. In other words, in the highly-regularized case, the cost functional is unbalanced towards the \(L^{2}\)-penalization, at the expense of the term that promotes driving each datum \(X_{0}^{i}\) as close as possible to the corresponding target \(Y_{0}^{i}\). This motivated us to investigate the case of low Tikhonov regularization. While we cannot globally recover the same results as in the highly-regularized regime, we find interesting results concerning local minimizers. Moreover, we also show that the (mean field) optimal control problem related to the training of the NeurODE induces a gradient flow in the space of admissible controls. The perspective of the gradient flow leads us to consider the well-known minimizing movement scheme, and to introduce a proximal stabilization term to the cost function in numerical experiments. This approach effectively addresses the well-known instability issues (see [14]) that arise when solving numerically optimal control problems (or when training NeurODEs) with iterative methods based on the PMP. It is important to note that our stabilization technique differs from previous methods, such as the one introduced in [33].
From NeurODEs to AutoencODEs.Despite their huge success, it should be noted that NeurODEs (as well as ResNets, their discrete-time counterparts) in their original form face a limitation in capturing one of the key aspects of modern machine learning architectures, namely the discrepancy in dimensionality between consecutive layers. As observed above, the use of skip connections with identity mappings requires a "rectangular" shape of the network, where the width of the layers are all identical and constant with respect to the input's dimension. This restriction poses a challenge when dealing with architectures that involve layers with varying dimensions, which are common in many state-of-the-art models. Indeed, the inclusion of layers with different widths can enhance the network's capacity to represent complex functions and to capture intricate patterns within the data. In this framework, Autoencoders have emerged as a fundamental class of models specifically designed to learn efficient representations of input data by capturing meaningful features through an encoder-decoder framework. More precisely, the encoder compresses the input data into a lower-dimensional latent space, while the decoder reconstructs the original input from the compressed representation. The concept of Autoencoders was first introduced in the 1980s in [39], and since then, it has been studied extensively in various works, such as [30], among many others. Nowadays, Autoencoders have found numerous applications, including data compression, dimensionality reduction, anomaly detection, and generative modeling. Their ability to extract salient features and capture underlying patterns in an unsupervised manner makes them valuable tools in scenarios where labeled training data is limited or unavailable. Despite their huge success in practice, there is currently a lack of established theory regarding the performance guarantees of these models.
Prior works, such as [20], have extended the control-theoretic analysis of NeurODEs to more general width-varying neural networks. Their model is based on an integro-differential equation that was first suggested in [34] in order to study the continuum limit of neural networks with respect to width and depth. In such an equation the state variable has a dependency on both time and space since the changing dimension over time is viewed as an additional spatial variable. In [20, Section 6] the continuous space-time analog of residual neural networks proposed in [34] has been considered and discretized in order to model variable width ResNets of various types, including convolutional neural networks. The authors assume a simple time-dependent grid, and use forward difference discretization for the time derivative and Newton-Cotes for discretizing the integral term, but refer to more sophisticated moving grids in order to possibly propose new types of architectures. In this setting, they are also able to derive some stability estimates and generalization properties in the overparametrized regime, making use of turnpike theory in optimal control [22]. In principle, there could be several different ways to model width-varying neural networks with dynamical systems, e.g., forcing some structure on the control variables, or formulating a viability problem. In this last case, a possibility could be to require admissible trajectories to visit some lower-dimensional subsets during the evolution. For an introduction to viability theory, we recommend the monograph [4], while we refer to [8, 9] for recent results on viability theory for differential inclusions in Wasserstein spaces.
In contrast, our work proposes a simpler extension of the control-theoretical analysis. It is based on a novel design of the vector field that drives the dynamics, allowing us to develop a continuous-time model capable of accommodating various types of width-varying neural networks. This approach has the advantage of leveraging insights and results obtained from our previous work [7]. Moreover, the simplicity of our model facilitates the implementation of residual networks with variable width and allows us to test their performance in machine learning tasks. In order to capture
width-varying neural networks, we need to extend the previous control-theoretical framework to a more general scenario, in particular we need to relax some of the assumptions of [7]. This is done in Subsection 2.2, where we introduce a discontinuous-in-time dynamics that can describe a wider range of neural network architectures. By doing so, we enable the study of Autoencoders (and, potentially, of other width-varying architectures) from a control-theoretic point of view, with the perspective of getting valuable insights into their behavior.
Furthermore, we also generalize the types of activation functions that can be employed in the network. The previous work [7] primarily focused on sigmoid functions, which do not cover the full range of activations commonly employed in practice. Our objective is to allow for unbounded activation functions, which are often necessary for effectively solving certain tasks. By considering a broader set of activation functions, we aim at enhancing the versatility and applicability of our model.
Furthermore, in contrast to [7], we introduce a stabilization method to allow the numerical resolution of the optimal control problem in the low-regularized regime, as previously discussed. This stabilization technique provides the means to test the architecture with our training approach on various tasks: from low-dimensional experiments, which serve to demonstrate the effectiveness of our method, to more sophisticated and high-dimensional tasks such as image reconstruction. In Section 5, we present all the experiments and highlight noteworthy behaviors that we observe. An in-depth exploration of the underlying reasons for these behaviors is postponed to future works.
The structure of the paper is the following: Section 2 discusses the dynamical model of NeurODEs and extends it to the case of width-varying neural networks, including Autoencoders, which we refer to as AutoencODEs. In Section 3, we present our mean-field analysis, focusing on the scenario of an infinitely large dataset. We formulate the mean-field optimal control problem, we derive a set of necessary optimality conditions, and we provide a convergence result for the finite-particles approximation. At the end of this section, we compare our findings with the ones previously obtained in [7]. Section 4 covers the implementation and the description of the training procedure, and we compare it with other methods for NeurODEs existing in the literature. Finally, in Section 5, we present the results of our numerical experiments, highlighting interesting properties of the AutoencODEs that we observe.
### Measure-theoretic preliminaries
Given a metric space \((X,d_{X})\), we denote by \(\mathcal{M}(X)\) the space of signed Borel measures in \(X\) with finite total variation, and by \(\mathcal{P}(X)\) the space of probability measures, while \(\mathcal{P}_{c}(X)\subset\mathcal{P}(X)\) represents the set of probability measures with compact support. Furthermore, \(\mathcal{P}_{c}^{N}(X)\subset\mathcal{P}_{c}(X)\) denotes the subset of empirical or atomic probability measures. Given \(\mu\in\mathcal{P}(X)\) and a \(\mu\)-measurable map \(f:X\to Y\), we denote with \(f_{\#}\mu\in\mathcal{P}(Y)\) the push-forward measure defined by \(f_{\#}\mu(B)=\mu(f^{-1}(B))\) for any Borel set \(B\subset Y\). Moreover, we recall the change-of-variables formula
\[\int_{Y}g(y)\,d\big{(}f_{\#}\mu\big{)}(y)=\int_{X}g\circ f(x)\,d\mu(x) \tag{1.3}\]
whenever either one of the integrals makes sense.
We now focus on the case \(X=\mathbb{R}^{d}\) and briefly recall the definition of the Wasserstein metrics of optimal transport in the following definition, and refer to [2, Chapter 7] for more details.
**Definition 1.1**.: _Let \(1\leq p<\infty\) and \(\mathcal{P}_{p}(\mathbb{R}^{d})\) be the space of Borel probability measures on \(\mathbb{R}^{d}\) with finite \(p\)-moment. In the sequel, we endow the latter with the \(p\)-Wasserstein metric_
\[W_{p}^{p}(\mu,\nu):=\inf\left\{\int_{\mathbb{R}^{2d}}|z-\hat{z}|^{p}\ d\pi(z, \hat{z})\ \big{|}\ \pi\in\Pi(\mu,\nu)\right\},\]
_where \(\Pi(\mu,\nu)\) denotes the set of transport plan between \(\mu\) and \(\nu\), that is the collection of all Borel probability measures on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) with marginals \(\mu\) and \(\nu\) in the first and second component respectively._
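For uniform empirical measures with the same number of atoms, the infimum in Definition 1.1 is attained at a plan induced by a permutation (the extreme points of \(\Pi(\mu,\nu)\) are then permutation matrices), so \(W_{p}\) can be computed exactly with a linear assignment solver. A short sketch of ours:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_p(X, Y, p=2):
    """W_p between two uniform empirical measures with the same number of
    atoms; the optimal plan is then induced by a permutation, so the infimum
    in Definition 1.1 reduces to a linear assignment problem."""
    cost = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** p
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean() ** (1.0 / p)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))            # atoms of mu
Y = rng.normal(loc=1.0, size=(50, 3))   # atoms of nu
print(wasserstein_p(X, Y, p=2))
```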
It is a well-known result in optimal transport theory that when \(p=1\) and \(\mu,\nu\in\mathcal{P}_{c}(\mathbb{R}^{d})\), then the following alternative representation holds for the Wasserstein distance
\[W_{1}(\mu,\nu)=\sup\left\{\int_{\mathbb{R}^{d}}\varphi(x)\,d\big{(}\mu-\nu \big{)}(x)\,\big{|}\,\varphi\in\mathrm{Lip}(\mathbb{R}^{d}),\ \mathrm{Lip}(\varphi)\leq 1 \right\}\,, \tag{1.4}\]
by Kantorovich's duality [2, Chapter 6]. Here, \(\mathrm{Lip}(\mathbb{R}^{d})\) stands for the space of real-valued Lipschitz continuous functions on \(\mathbb{R}^{d}\), and \(\mathrm{Lip}(\varphi)\) is the Lipschitz constant of a mapping \(\varphi\), defined as
\[\mathrm{Lip}(\varphi):=\sup_{x,y\in\mathbb{R}^{d},\,x\neq y}\frac{|\varphi(x)-\varphi(y)|}{|x-y|}.\]
## 2 Dynamical Model of NeurODEs
### Notation and basic facts
In this paper, we consider controlled dynamical systems in \(\mathbb{R}^{d}\), where the velocity field is prescribed by a function \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) that satisfies these basic assumptions.
**Assumption 1**.: _The vector field \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) satisfies the following:_
1. _For every_ \(x\in\mathbb{R}^{d}\) _and every_ \(\theta\in\mathbb{R}^{m}\)_, the map_ \(t\mapsto\mathcal{F}(t,x,\theta)\) _is measurable in_ \(t\)_._
2. _For every_ \(R>0\) _there exists a constant_ \(L_{R}>0\) _such that, for every_ \(\theta\in\mathbb{R}^{m}\)_, it holds_ \[|\mathcal{F}(t,x_{1},\theta)-\mathcal{F}(t,x_{2},\theta)|\leq L_{R}(1+|\theta |)|x_{1}-x_{2}|,\quad\text{ for a.e. }t\in[0,T]\text{ and every }x_{1},x_{2}\in B_{R}(0),\] _from which it follows that_ \(|\mathcal{F}(t,x,\theta)|\leq L_{R}(1+|x|)(1+|\theta|)\) _for a.e._ \(t\in[0,T]\)_._
3. _For every_ \(R>0\) _there exists a constant_ \(L_{R}>0\) _such that, for every_ \(\theta_{1},\theta_{2}\in\mathbb{R}^{m}\)_, it holds_ \[|\mathcal{F}(t,x,\theta_{1})-\mathcal{F}(t,x,\theta_{2})|\leq L_{R}(1+|\theta _{1}|+|\theta_{2}|)|\theta_{1}-\theta_{2}|,\quad\text{ for a.e. }t\in[0,T]\text{ and every }x\in B_{R}(0).\]
The control system that we are going to study is
\[\begin{cases}\dot{x}(t)=\mathcal{F}(t,x(t),\theta(t)),\ \ \text{ a.e. in }[0,T],\\ x(0)=x_{0},\end{cases} \tag{2.1}\]
where \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) is the control that drives the dynamics. Owing to Assumption 1, the classical Caratheodory Theorem (see [27, Theorem 5.3]) guarantees that, for every \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) and for every \(x_{0}\in\mathbb{R}^{d}\), the Cauchy problem (2.1) has a unique solution \(x:[0,T]\to\mathbb{R}^{d}\). Hence, for every \((t,\theta)\in[0,T]\times L^{2}([0,T],\mathbb{R}^{m})\), we introduce the flow map \(\Phi^{\theta}_{(0,t)}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) defined as
\[\Phi^{\theta}_{(0,t)}(x_{0}):=x(t), \tag{2.2}\]
where \(t\mapsto x(t)\) is the absolutely continuous curve that solves (2.1), with Cauchy datum \(x(0)=x_{0}\) and corresponding to the admissible control \(t\mapsto\theta(t)\). Similarly, given \(0\leq s<t\leq T\), we write \(\Phi^{\theta}_{(s,t)}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) to denote the flow map obtained by prescribing the Cauchy datum at the more general instant \(s\geq 0\). We now present the properties of the flow map defined in (2.2) that describes the evolution of the system: we show that it is well-posed, and we report some classical properties.
**Proposition 2.1**.: _For every \(t\in[0,T]\) and for every \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\), let \(\mathcal{F}\) satisfy Assumption 1. Then, the flow \(\Phi^{\theta}_{(0,t)}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) is well-defined for any \(x_{0}\in\mathbb{R}^{d}\) and it satisfies the following properties._
* _For every_ \(R>0\) _and_ \(\rho>0\)_, there exists a constant_ \(\bar{R}>0\) _such that_ \[|\Phi^{\theta}_{(0,t)}(x)|\leq\bar{R}\] _for every_ \(x\in B_{R}(0)\) _and every_ \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) _such that_ \(||\theta||_{L^{2}}\leq\rho\)_._
* _For every_ \(R>0\) _and_ \(\rho>0\)_, there exists a constant_ \(\bar{L}>0\) _such that, for every_ \(t\in[0,T]\)_, it holds_ \[|\Phi^{\theta}_{(0,t)}(x_{1})-\Phi^{\theta}_{(0,t)}(x_{2})|\leq\bar{L}|x_{1}-x _{2}|\] _for every_ \(x_{1},x_{2}\in B_{R}(0)\) _and every_ \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) _such that_ \(||\theta||_{L^{2}}\leq\rho\)_._
Even though the framework introduced in Assumption 1 is rather general, in this paper we specifically have in mind the case where the mapping \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) represents the feed-forward dynamics associated to residual neural networks. In this scenario, the parameter \(\theta\in\mathbb{R}^{m}\) encodes the _weights_ and _shifts_ of the network, i.e., \(\theta=(W,b)\), where \(W\in\mathbb{R}^{d\times d}\) and \(b\in\mathbb{R}^{d}\). Moreover, the mapping \(\mathcal{F}\) has the form:
\[\mathcal{F}(t,x,\theta)=\sigma(Wx+b),\]
where \(\sigma:\mathbb{R}^{d}\to\mathbb{R}^{d}\) is a nonlinear function acting component-wise, often called in literature _activation function_. In this work, we consider sigmoidal-type activation functions, such as the hyperbolic tangent function:
\[\sigma(x)=\tanh(x),\]
as well as smooth approximations of the Rectified Linear Unit (ReLU) function, which is defined as:
\[\sigma(x)=\max\{0,x\}. \tag{2.3}\]
We emphasize the need to consider smoothed versions of the ReLU function due to additional differentiability requirements on \(\mathcal{F}\), which will be further clarified in Assumption 2. Another useful activation function covered by Assumption 2 is the Leaky Rectified Linear Unit (Leaky ReLU) function:
\[\sigma(x)=\max\{0,x\}-\max\{-\alpha x,0\} \tag{2.4}\]
where \(\alpha\in[0,1]\) is a predetermined parameter that allows the output of the function to have negative values. The smooth approximations of (2.3) and (2.4) that we consider will be presented in Section 4.
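The precise smoothings employed in our analysis are the ones presented in Section 4; purely as an illustration, the sketch below implements one standard option, a softplus-type approximation (the parameter \(\beta\) and the helper names are choices of this example, not taken from the text):

```python
import numpy as np

def softplus(x, beta=10.0):
    # (1/beta) * log(1 + exp(beta * x)): a smooth approximation of
    # max{0, x} that converges pointwise as beta -> infinity.
    # np.logaddexp keeps the evaluation stable for large |x|.
    return np.logaddexp(0.0, beta * x) / beta

def smooth_leaky_relu(x, alpha=0.1, beta=10.0):
    # Smooth approximation of max{0, x} - max{-alpha * x, 0}:
    # behaves like x for x >> 0 and like alpha * x for x << 0.
    return alpha * x + (1.0 - alpha) * softplus(x, beta)
```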
### From NeurODEs to AutoencODEs
As explained in the Introduction, NeurODEs and ResNets, their discrete-time counterparts, face the limitation of a "rectangular" shape of the network because of formulas (1.2) and (1.1), respectively. To overcome this fact, we aim at designing a continuous-time model capable of describing width-varying neural networks, with a particular focus on Autoencoders, as they represent the prototype of neural networks whose layers operate between spaces of different dimensions. Indeed, Autoencoders consist of an _encoding phase_, where the layers' dimensions progressively decrease until reaching the "latent dimension" of the network. Subsequently, in the _decoding phase_, the layers' widths are increased until the same dimensionality as the input data is restored. For this reason, Autoencoders are prominent examples of width-varying neural networks, since the changes in layers' dimensions lie at the core of their functioning. Sketches of encoders and Autoencoders are presented in Figure 1. Finally, we insist on the fact that our model can encompass other types of architectures as well. In this regard, in Remark 2.2 we discuss how our approach can be extended to U-nets.
**Encoder.** Our goal is to first model the case of a network which sequentially reduces the dimensionality of the layers' outputs. For this purpose, we artificially force some of the components not to evolve anymore, while we let the others be an active part of the dynamics. More precisely, given an input variable \(x_{0}\in\mathbb{R}^{d}\), we denote with \((\mathcal{I}_{j})_{j=0,\dots,r}\) an increasing filtration, where each element \(\mathcal{I}_{j}\) contains the set of indices whose corresponding components are _inactive_, i.e., they are constant and do not contribute to the dynamics. Clearly, since the layers' width will decrease sequentially, the filtration of inactive components \(\mathcal{I}_{j}\) will increase, i.e.
\[\emptyset=:\mathcal{I}_{0}\subsetneq\mathcal{I}_{1}\subsetneq\ldots\subsetneq\mathcal{I}_{r}\subsetneq\{1,\dots,d\},\quad r<d.\]
Figure 1: Left: network with an encoder structure. Right: Autoencoder.
On the other hand, the sets of indices of _active_ components define a decreasing filtration \(\mathcal{A}_{j}:=\{1,\ldots,d\}\setminus\mathcal{I}_{j}\) for \(j=0,\ldots,r\). As opposed to before, the sets of active components \((\mathcal{A}_{j})_{j=0,\ldots,r}\) satisfy
\[\{1,\ldots,d\}=:\mathcal{A}_{0}\supsetneq\mathcal{A}_{1}\supsetneq\ldots\supsetneq\mathcal{A}_{r}\supsetneq\emptyset,\quad r<d.\]
We observe that, for every \(j=0,\ldots,r\), the sets \(\mathcal{A}_{j}\) and \(\mathcal{I}_{j}\) provide a partition of \(\{1,\ldots,d\}\). A visual representation of this model for encoders is presented on the left side of Figure 2.
Now, in the time interval \([0,T]\), let us consider \(r+1\) nodes \(0=t_{0}<t_{1}<...<t_{r}<t_{r+1}=T\). For \(j=0,\ldots,r\), we denote with \([t_{j},t_{j+1}]\) the sub-interval and, for every \(x\in\mathbb{R}^{d}\), we use the notation \(x_{\mathcal{I}_{j}}:=(x_{i})_{i\in\mathcal{I}_{j}}\) and \(x_{\mathcal{A}_{j}}:=(x_{i})_{i\in\mathcal{A}_{j}}\) to access the components of \(x\) belonging to \(\mathcal{I}_{j}\) and \(\mathcal{A}_{j}\), respectively. Hence, the controlled dynamics for any \(t\in[t_{j},t_{j+1}]\) can be described by
\[\begin{cases}\dot{x}_{\mathcal{I}_{j}}(t)=0,\\ \dot{x}_{\mathcal{A}_{j}}(t)=\mathcal{G}_{j}(t,x_{\mathcal{A}_{j}}(t),\theta( t)),\end{cases} \tag{2.5}\]
where \(\mathcal{G}_{j}:[t_{j},t_{j+1}]\times\mathbb{R}^{|\mathcal{A}_{j}|}\times \mathbb{R}^{m}\rightarrow\mathbb{R}^{|\mathcal{A}_{j}|}\), for \(j=0,\ldots,r\), and \(x(0)=x_{\mathcal{A}_{0}}(0)=x_{0}\). Furthermore, the dynamical system describing the encoding part is
\[\begin{cases}\dot{x}(t)=\mathcal{F}(t,x(t),\theta(t)),\quad\text{a.e. }t\in[0,T],\\ x(0)=x_{0},\end{cases}\]
where, for \(t\in[t_{j},t_{j+1}]\), we define the discontinuous vector field as follows
\[\left(\mathcal{F}(t,x,\theta)\right)_{k}=\begin{cases}\left(\mathcal{G}_{j}(t,x_{\mathcal{A}_{j}},\theta)\right)_{k},&\text{if }k\in\mathcal{A}_{j},\\ 0,&\text{if }k\in\mathcal{I}_{j}.\end{cases}\]
**Remark 2.1**.: _Notice that \(\theta(t)\in\mathbb{R}^{m}\) for every \(t\in[0,T]\), according to the model that we have just described. However, it is natural to expect that, since \(x\) has varying active components, in a similar way the controlled dynamics \(\mathcal{F}(t,x,\theta)\) shall not explicitly depend at every \(t\in[0,T]\) on every component of \(\theta\)._
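For illustration only, the sketch below (our own construction, not taken from the text) evaluates the discontinuous vector field of the encoder model in (2.5): on each sub-interval \([t_{j},t_{j+1}]\) the components outside \(\mathcal{A}_{j}\) are frozen, while the active ones follow \(\mathcal{G}_{j}\).

```python
import numpy as np

def encoder_field(t, x, theta, time_nodes, active_sets, G):
    # time_nodes : increasing array [t_0, ..., t_{r+1}] partitioning [0, T]
    # active_sets: list of index arrays A_0, ..., A_r (decreasing filtration)
    # G          : callable G(t, x_active, theta) giving the active velocities
    j = np.searchsorted(time_nodes, t, side="right") - 1
    j = min(max(j, 0), len(active_sets) - 1)  # clamp the endpoints t = 0, T
    A = active_sets[j]
    dx = np.zeros_like(x)       # inactive components: derivative equal to 0
    dx[A] = G(t, x[A], theta)   # active components evolve through G_j
    return dx
```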
**Autoencoder.** We now extend the previous model to the case of networks which not only decrease the dimensionality of the layers, but are also able to increase the layers' width in order to restore the original dimension of the input data. Here we denote by \(z_{0}\in\mathbb{R}^{\bar{d}}\) the input variable, and we fictitiously augment the input's dimension, so that we consider the initial datum \(x_{0}=(z_{0},\underline{0})\in\mathbb{R}^{d}=\mathbb{R}^{\bar{d}}\times\mathbb{R}^{\bar{d}}\), where \(\underline{0}\in\mathbb{R}^{\bar{d}}\). We make use of the following notation for every \(x\in\mathbb{R}^{d}\):

\[x=\big{(}(z_{i})_{i=1,\ldots,\bar{d}},(z^{H}_{i})_{i=1,\ldots,\bar{d}}\big{)},\]

where \(z^{H}\) is the augmented (or _shadow_) part of the vector \(x\). In this model, the time horizon \([0,T]\) is split using the following time-nodes:
\[0=t_{0}\leq t_{1}\leq...\leq t_{r}\leq...\leq t_{2r}\leq t_{2r+1}:=T\]
where \(t_{r}\), which was the end of the encoder in the previous model, is now the instant corresponding to the bottleneck of the autoencoder.

Figure 2: Left: Embedding of an encoder into a dynamical system. Right: model for an Autoencoder.

As before, we introduce two families of partitions of \(\{1,\ldots,\bar{d}\}\) modeling the active and non-active components of, respectively, \(z\) and \(z^{H}\). The first filtrations are relative to the encoding phase and they involve the components of \(z\):
\[\begin{cases}\mathcal{I}_{j-1}\subsetneq\mathcal{I}_{j}&\text{if }1\leq j\leq r,\\ \mathcal{I}_{j}=\mathcal{I}_{j-1}&\text{if }j>r,\end{cases}\qquad\qquad\qquad\qquad\begin{cases}\mathcal{A}_{j-1}\supsetneq\mathcal{A}_{j}&\text{if }1\leq j\leq r,\\ \mathcal{A}_{j}=\mathcal{A}_{j-1}&\text{if }j>r.\end{cases}\]
where \(\mathcal{I}_{0}:=\emptyset\), \(\mathcal{I}_{r}\subsetneq\{1,\ldots,\bar{d}\}\) and \(\mathcal{A}_{0}=\{1,\ldots,\bar{d}\}\), \(\mathcal{A}_{r}\supsetneq\emptyset\). The second filtrations, which aim at modeling the decoder, act on the shadow part of \(x\), i.e., they involve the components of \(z^{H}\):

\[\begin{cases}\mathcal{I}_{j-1}^{H}=\{1,\ldots,\bar{d}\}&\text{if }1\leq j\leq r,\\ \mathcal{I}_{j}^{H}\subsetneq\mathcal{I}_{j-1}^{H}&\text{if }r<j\leq 2r,\end{cases}\qquad\qquad\qquad\begin{cases}\mathcal{A}_{j-1}^{H}=\emptyset&\text{if }1\leq j\leq r,\\ \mathcal{A}_{j}^{H}\supsetneq\mathcal{A}_{j-1}^{H}&\text{if }r<j\leq 2r.\end{cases}\]
While the encoder structure acting on the input data \(z_{0}\) is the same as before, in the decoding phase we aim at activating the components that have been previously turned off during the encoding. However, since the information contained in the original input \(z_{0}\) should be first compressed and then decompressed, we should not make use of the values of the components that we have turned off in the encoding, and hence we cannot re-activate them. Therefore, in our model the dimension is restored by activating components of \(z^{H}\), the shadow part of \(x\), which we recall was initialized equal to \(\underline{0}\in\mathbb{R}^{\bar{d}}\). This is the reason why we introduce sets of active and inactive components also for the shadow part of the state variable. A sketch of this type of model is presented on the right of Figure 2. Moreover, in order to be consistent with the classical structure of an autoencoder, the following identities must be satisfied:
1. \(\mathcal{A}_{j}\cap\mathcal{A}_{j}^{H}=\emptyset\) for every \(j=1,\ldots,2r\),
2. \(\mathcal{A}_{2r}\cup\mathcal{A}_{2r}^{H}=\{1,\ldots,\bar{d}\}\).
The first identity formalizes the constraint that the active components of \(z\) and those of \(z^{H}\) cannot overlap and must be distinct, while the second identity imposes that, at the end of the evolution, the active components of \(z\) and \(z^{H}\) together cover the whole index set \(\{1,\ldots,\bar{d}\}\). Furthermore, from the first identity we derive that \(\mathcal{A}_{j}\subseteq(\mathcal{A}_{j}^{H})^{C}=\mathcal{I}_{j}^{H}\) and, similarly, \(\mathcal{A}_{j}^{H}\subseteq\mathcal{I}_{j}\) for every \(j=1,\ldots,2r\). Moreover, \(\mathcal{A}_{r}\) satisfies the inclusion \(\mathcal{A}_{r}\subseteq\mathcal{A}_{j}\) for every \(j=1,\ldots,2r\), which is consistent with the fact that the layer with the smallest width is located at the bottleneck, i.e., in the interval \([t_{r},t_{r+1}]\). In addition, from the first and the second identity we obtain that \(\mathcal{A}_{2r}^{H}=\mathcal{I}_{2r}\), i.e., the final active components of \(z^{H}\) coincide with the inactive components of \(z\), and, similarly, \(\mathcal{I}_{2r}^{H}=\mathcal{A}_{2r}\). Finally, to access the active components of \(x=(z,z^{H})\), we make use of the following notation:
\[x_{\mathcal{A}_{j}}=(z_{k})_{k\in\mathcal{A}_{j}},\quad x_{\mathcal{A}_{j}^{H }}=(z_{k}^{H})_{k\in\mathcal{A}_{j}^{H}}\quad\text{and}\quad x_{\mathcal{A}_{j },\mathcal{A}_{j}^{H}}=(z_{\mathcal{A}_{j}},z_{\mathcal{A}_{j}^{H}}^{H}),\]
and we do the same for the inactive components:
\[x_{\mathcal{I}_{j}}=(z_{k})_{k\in\mathcal{I}_{j}},\quad x_{\mathcal{I}_{j}^{H }}=(z_{k}^{H})_{k\in\mathcal{I}_{j}^{H}}\quad\text{and}\quad x_{\mathcal{I}_{ j},\mathcal{I}_{j}^{H}}=(z_{\mathcal{I}_{j}},z_{\mathcal{I}_{j}^{H}}^{H}).\]
We are now in position to write the controlled dynamics in the interval \(t_{j}\leq t\leq t_{j+1}\):
\[\begin{cases}\dot{x}_{\mathcal{I}_{j},\mathcal{I}_{j}^{H}}(t)=0,\\ \dot{x}_{\mathcal{A}_{j},\mathcal{A}_{j}^{H}}(t)=\mathcal{G}_{j}(t,x_{\mathcal{ A}_{j},\mathcal{A}_{j}^{H}}(t),\theta(t)),\end{cases} \tag{2.6}\]
where \(\mathcal{G}_{j}:[t_{j},t_{j+1}]\times\mathbb{R}^{|\mathcal{A}_{j}|+|\mathcal{A}_{j}^{H}|}\times\mathbb{R}^{m}\to\mathbb{R}^{|\mathcal{A}_{j}|+|\mathcal{A}_{j}^{H}|}\,,\) for \(j=0,\ldots,2r\), and \(x_{\mathcal{I}_{0}^{H}}(0)=\underline{0}\), \(x_{\mathcal{A}_{0}}(0)=z_{0}\). As before, we define the discontinuous vector field \(\mathcal{F}\) for \(t\in[t_{j},t_{j+1}]\) as follows
\[\big{(}\mathcal{F}(t,x,\theta)\big{)}_{k}=\begin{cases}\big{(}\mathcal{G}_{j}(t,x_{\mathcal{A}_{j},\mathcal{A}_{j}^{H}},\theta)\big{)}_{k}\,,&\text{if }k\in\mathcal{A}_{j}\cup\mathcal{A}_{j}^{H},\\ 0,&\text{if }k\in\mathcal{I}_{j}\cup\mathcal{I}_{j}^{H}.\end{cases}\]
Hence, we are now able to describe any type of width-varying neural network through a continuous-time model depicted by the following dynamical system
\[\begin{cases}\dot{x}(t)=\mathcal{F}(t,x(t),\theta(t))\quad\text{a.e. in }[0,T],\\ x(0)=x_{0}.\end{cases}\]
It is essential to highlight the key difference between the previous NeurODE model (2.1) and the current one: the vector field \(\mathcal{F}\) now explicitly depends on the time variable \(t\) to account for sudden dimensionality drops, where certain components are forced to remain constant. As a matter of fact, the resulting dynamics exhibit high discontinuity in
the variable \(t\). To the best of our knowledge, this is the first attempt to consider such discontinuous dynamics in NeurODEs. Previous works, such as [26, 18], typically do not include an explicit dependence on the time variable in the right-hand side of NeurODEs, or they assume a continuous dependency on time, as in [7]. Furthermore, it is worth noting that the vector field \(\mathcal{F}\) introduced to model autoencoders satisfies the general assumptions outlined in Assumption 1 at the beginning of this section.
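To make the discontinuous dynamics concrete, here is a minimal explicit-Euler discretization of the AutoencODE flow, assuming piecewise-constant controls (one per sub-interval); this is only a sketch of ours, while the actual training procedure is described in Section 4.

```python
import numpy as np

def autoencode_forward(x0, thetas, time_nodes, active_sets, G, steps=1):
    # x0         : augmented initial state (z_0, 0) in R^d
    # thetas[j]  : control used on [t_j, t_{j+1}] (piecewise-constant choice)
    # active_sets: indices of the components allowed to evolve on each interval
    x = np.array(x0, dtype=float)
    for j in range(len(time_nodes) - 1):
        h = (time_nodes[j + 1] - time_nodes[j]) / steps
        A = active_sets[j]
        for s in range(steps):
            t = time_nodes[j] + s * h
            # inactive components stay frozen, mirroring (2.6)
            x[A] = x[A] + h * G(t, x[A], thetas[j])
    return x
```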
**Remark 2.2**.: _The presented model, initially designed for Autoencoders, can be easily extended to accommodate various types of width-varying neural networks, including architectures with long skip-connections such as U-nets [38]. While we do not discuss U-nets in full detail, their general structure is outlined in Figure 3. U-nets consist of two main components: the contracting path (encoder) and the expansive path (decoder). These paths are symmetric, with skip connections between corresponding layers in each part. Within each path, the input passes through a series of convolutional layers, followed by a non-linear activation function (often ReLU), and other operations (e.g., max pooling) which are not encompassed by our model. The long skip-connections that characterize U-nets require some modifications to the model of autoencoder described above. If we denote with \(\bar{d}_{i}\) for \(i=0,\ldots,r\) the dimensionality of each layer in the contracting path, we have that \(\bar{d}_{2r-i}=\bar{d}_{i}\) for every \(i=0,\ldots,r\). Then, given an initial condition \(z_{0}\in\mathbb{R}^{\bar{d}_{0}}\), we embed it into the augmented state variable_
\[x_{0}=(z_{0},\underline{0}),\text{ where }\underline{0}\in\mathbb{R}^{\bar{d}_{ 1}+\ldots+\bar{d}_{r}}.\]
_As done in the previous model for autoencoder, we consider time-nodes \(0=t_{0}<\ldots<t_{2r}=T\), and in each sub-interval we introduce a controlled dynamics with the scheme of active/inactive components depicted in Figure 3._
## 3 Mean-field Analysis
In this section, we extend the dynamical model introduced in Section 2 to its mean-field limit, which corresponds to the scenario of an infinitely large dataset. Within this framework, we formulate the training of NeurODEs and AutoencODEs as a mean-field optimal control problem and provide the associated necessary optimality conditions. It is worth noting that our analysis covers both the high-regularized regime, as studied in previous work [7], as well as the low-regularized regime, which has not been extensively addressed before. In this regard, we dedicate a subsection to a detailed comparison with the results obtained in [7]. Additionally, we investigate the case of finite-particles approximation and we establish a quantitative bound on the generalization capabilities of these networks.
### Mean-field dynamical model
Figure 3: Embedding of the U-net into a higher-dimensional dynamical system.

In this section, we employ the same view-point as in [7], and we consider the case of a dataset with an infinite number of observations. In our framework, each datum is modeled as a point \(x_{0}\in\mathbb{R}^{d}\), and it comes associated to its corresponding label \(y_{0}\in\mathbb{R}^{d}\). Notice that, in principle, in Machine Learning applications the label (or _target_) datum \(y_{0}\) may have dimension different from \(d\). However, the labels' dimension is just a matter of notation and does not represent a limit of our model. Following [7], we consider the curve \(t\mapsto(x(t),y(t))\) which satisfies
\[\dot{x}(t)=\mathcal{F}(t,x(t),\theta(t))\qquad\text{and}\qquad\dot{y}(t)=0 \tag{3.1}\]
for a.e. \(t\in[0,T]\), and \((x(0),y(0))=(x_{0},y_{0})\). We observe that the variable \(y\) corresponding to the labels is not changing, nor is it affecting the evolution of the variable \(x\). We recall that the flow associated to the dynamics of the variable \(x\) is denoted by \(\Phi^{\theta}_{(0,t)}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) for every \(t\in[0,T]\), and it has been defined in (2.2). Moreover, as regards the full dynamics prescribed by (3.1), for every admissible control \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) we introduce the extended flow \(\boldsymbol{\Phi}^{\theta}_{(0,t)}:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{d}\times\mathbb{R}^{d}\), which reads
\[\boldsymbol{\Phi}^{\theta}_{(0,t)}(x_{0},y_{0})=(\Phi^{\theta}_{(0,t)}(x_{0} ),y_{0}) \tag{3.2}\]
for every \(t\in[0,T]\) and for every \((x_{0},y_{0})\in\mathbb{R}^{d}\times\mathbb{R}^{d}\). We now consider the case of an infinite number of labeled data \((X^{i}_{0},Y^{i}_{0})_{i\in I}\), where \(I\) is an infinite set of indexes. In our mathematical model, we understand this data distribution as a compactly-supported probability measure \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{d}\times\mathbb{R}^{d})\). Moreover, for every \(t\in[0,T]\), we denote by \(t\mapsto\mu_{t}\) the curve of probability measures in \(\mathcal{P}_{c}(\mathbb{R}^{d}\times\mathbb{R}^{d})\) that models the evolution of the solutions of (3.1) corresponding to the Cauchy initial conditions \((X^{i}_{0},Y^{i}_{0})_{i\in I}\). In other words, the curve \(t\mapsto\mu_{t}\) satisfies the following continuity equation:
\[\partial_{t}\mu_{t}(x,y)+\nabla_{x}\cdot\big{(}\mathcal{F}(t,x,\theta_{t})\mu _{t}(x,y)\big{)}=0,\qquad\mu_{t}|_{t=0}(x,y)=\mu_{0}(x,y), \tag{3.3}\]
understood in the sense of distributions, i.e.
**Definition 3.1**.: _For any given \(T>0\) and \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\), we say that \(\mu\in\mathcal{C}([0,T],\mathcal{P}_{c}(\mathbb{R}^{2d}))\) is a weak solution of (3.3) on the time interval \([0,T]\) if_
\[\int_{0}^{T}\int_{\mathbb{R}^{2d}}\big{(}\partial_{t}\psi(t,x,y)+\nabla_{x} \psi(t,x,y)\cdot\mathcal{F}(t,x,\theta_{t})\big{)}\,d\mu_{t}(x,y)\,dt=0, \tag{3.4}\]
_for every test function \(\psi\in\mathcal{C}^{1}_{c}((0,T)\times\mathbb{R}^{2d})\)._
Let us now discuss the existence and the characterisation of the solution.
**Proposition 3.2**.: _Under Assumptions 1, for every \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\) we have that (3.3) admits a unique solution \(t\mapsto\mu_{t}\) in the sense of Definition 3.1. Moreover, we have that for every \(t\in[0,T]\)_
\[\mu_{t}=\boldsymbol{\Phi}^{\theta}_{(0,t)\#}\mu_{0}. \tag{3.5}\]
Proof.: Existence and uniqueness of the measure solution of (3.3) follow from [1, Proposition 2.1, Theorem 3.1 and Remark 2.1].
From the characterisation of the solution of (3.3) provided in (3.5), it follows that the curve \(t\mapsto\mu_{t}\) inherits the properties of the flow map \(\Phi^{\theta}\) described in Proposition 2.1. These facts are collected in the next result.
**Proposition 3.3**.: _Let us fix \(T>0\) and \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\), and let us consider \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) satisfying Assumption 1. Let \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) be an admissible control, and let \(t\mapsto\mu_{t}\) be the corresponding solution of (3.3). Then, the curve \(t\mapsto\mu_{t}\) satisfies the properties listed below._
* _For every_ \(R>0\) _and_ \(\rho>0\)_, there exists_ \(\bar{R}>0\) _such that, for every_ \(t\in[0,T]\)_, it holds that_ \[\operatorname{supp}(\mu_{t})\subset B_{\bar{R}}(0)\] _for every_ \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) _such that_ \(\|\theta\|_{L^{2}}\leq\rho\)_, and for every_ \(\mu_{0}\) _such that_ \(\operatorname{supp}(\mu_{0})\subset B_{R}(0)\)_._
* _For every_ \(R>0\) _and_ \(\rho>0\)_, there exists_ \(\bar{L}>0\) _such that, for every_ \(t\in[0,T]\)_, it holds that_ \[W_{1}(\mu_{t},\nu_{t})\leq\bar{L}W_{1}(\mu_{0},\nu_{0})\] _for every_ \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) _such that_ \(\|\theta\|_{L^{2}}\leq\rho\)_, and for every initial conditions_ \(\mu_{0},\nu_{0}\) _such that the supports satisfy_ \(\operatorname{supp}(\mu_{0}),\operatorname{supp}(\nu_{0})\subset B_{R}(0)\)_, where_ \(\mu_{t}=\boldsymbol{\Phi}^{\theta}_{(0,t)\#}\mu_{0}\) _and_ \(\nu_{t}=\boldsymbol{\Phi}^{\theta}_{(0,t)\#}\nu_{0}\)_._
* _For every_ \(R>0\) _and_ \(\rho>0\)_, there exists_ \(\bar{L}>0\) _such that, for every_ \(t_{1},t_{2}\in[0,T]\)_, it holds that_ \[W_{1}(\mu_{t_{1}},\mu_{t_{2}})\leq\bar{L}\cdot|t_{1}-t_{2}|^{\frac{1}{2}}\] _for every_ \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) _such that_ \(\|\theta\|_{L^{2}}\leq\rho\)_, and for every_ \(\mu_{0}\) _such that_ \(\operatorname{supp}(\mu_{0})\subset B_{R}(0)\)_._
* _For every_ \(R>0\) _and_ \(\rho>0\)_, there exists_ \(\bar{L}>0\) _such that, for every_ \(t\in[0,T]\)_, it holds that_ \[W_{1}(\mu_{t},\nu_{t})\leq\bar{L}\|\theta_{1}-\theta_{2}\|_{L^{2}}\] _for every_ \(\theta_{1},\theta_{2}\in L^{2}([0,T],\mathbb{R}^{m})\) _such that_ \(\|\theta_{1}\|_{L^{2}},\|\theta_{2}\|_{L^{2}}\leq\rho\)_, and for every initial condition_ \(\mu_{0}\) _such that_ \(\operatorname{supp}(\mu_{0})\subset B_{R}(0)\)_, where_ \(\mu_{t}=\boldsymbol{\Phi}_{(0,t)\#}^{\theta_{1}}\mu_{0}\) _and_ \(\nu_{t}=\boldsymbol{\Phi}_{(0,t)\#}^{\theta_{2}}\mu_{0}\)_._
Proof.: All the results follow from Proposition 3.2 and from the properties of the flow map presented in Proposition 2.1, combined with the Kantorovich duality (1.4) for the distance \(W_{1}\), and the change-of-variables formula (1.3). Since the argument is essentially the same for all the properties, we detail the computations only for the second point, i.e., the Lipschitz-continuous dependence on the initial distribution. For any \(t\in[0,T]\) and any \(\varphi\in\operatorname{Lip}(\mathbb{R}^{2d})\) whose Lipschitz constant satisfies \(\mathrm{Lip}(\varphi)\leq 1\), it holds that

\[\int_{\mathbb{R}^{2d}}\varphi(x,y)\,d(\mu_{t}-\nu_{t})(x,y)=\int_{\mathbb{R}^{2d}}\varphi(\Phi_{(0,t)}^{\theta}(x),y)\,d(\mu_{0}-\nu_{0})(x,y)\leq\bar{L}W_{1}(\mu_{0},\nu_{0}),\]

where the equality follows from the definition of push-forward and from (3.2), while the inequality descends from the duality (1.4), since the map \((x,y)\mapsto\varphi(\Phi_{(0,t)}^{\theta}(x),y)\) is \(\bar{L}\)-Lipschitz by the local Lipschitz estimate of \(\Phi_{(0,t)}^{\theta}\) established in Proposition 2.1. Taking the supremum over all such \(\varphi\) and using again (1.4), we conclude that \(W_{1}(\mu_{t},\nu_{t})\leq\bar{L}W_{1}(\mu_{0},\nu_{0})\).
### Mean-field optimal control
Using the transport equation (3.3), we can now formulate the mean-field optimal control problem that we aim to address. To this end, we introduce the functional \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\), defined as follows:
\[J(\theta)=\left\{\begin{aligned} &\int_{\mathbb{R}^{2d}}\ell(x,y)\,d\mu_{T}(x, y)+\lambda\int_{0}^{T}|\theta(t)|^{2}\,dt,\\ &\text{s.t.}\ \left\{\begin{aligned} &\partial_{t}\mu_{t}(x,y)+ \nabla_{x}\cdot(\mathcal{F}(t,x,\theta_{t})\mu_{t}(x,y))=0\quad t\in[0,T],\\ &\mu_{t}|_{t=0}(x,y)=\mu_{0}(x,y),\end{aligned}\right. \end{aligned}\right. \tag{3.6}\]
for every admissible control \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\). The objective is to find the optimal control \(\theta^{*}\) that minimizes \(J(\theta^{*})\), subject to the PDE constraint (3.3) being satisfied by the curve \(t\mapsto\mu_{t}\). The term "mean-field" emphasizes that \(\theta\) is shared by an entire population of input-target pairs, and the optimal control must depend on the distribution of the initial data. We observe that when the initial measure \(\mu_{0}\) is empirical, i.e.
\[\mu_{0}:=\mu_{0}^{N}=\frac{1}{N}\sum_{i=1}^{N}\delta_{(X_{0}^{i},Y_{0}^{i})},\]
then the minimization of (3.6) reduces to a classical finite-particle optimal control problem with ODE constraints.
We now state the further regularity hypotheses that we require, in addition to the ones contained in Assumption 1.
**Assumption 2**.: _For any given \(T>0\), the vector field \(\mathcal{F}\) satisfies the following._
* \((iv)\) _For every_ \(R>0\) _there exists a constant_ \(L_{R}>0\) _such that, for every_ \(x_{1},x_{2}\in B_{R}(0)\)_, it holds_ \[|\nabla_{x}\mathcal{F}(t,x_{1},\theta)-\nabla_{x}\mathcal{F}(t,x_{2},\theta)|\leq L_{R}(1+|\theta|^{2})|x_{1}-x_{2}|,\quad\text{ for a.e. }t\in[0,T]\text{ and every }\theta\in\mathbb{R}^{m}.\]
* \((v)\) _There exists another constant_ \(L_{R}>0\) _such that, for every_ \(\theta_{1},\theta_{2}\in\mathbb{R}^{m}\)_, it holds_ \[|\nabla_{\theta}\mathcal{F}(t,x,\theta_{1})-\nabla_{\theta}\mathcal{F}(t,x,\theta_{2})|\leq L_{R}|\theta_{1}-\theta_{2}|,\quad\text{ for a.e. }t\in[0,T]\text{ and every }x\in B_{R}(0).\] _From this, it follows that_ \(|\nabla_{\theta}\mathcal{F}(t,x,\theta)|\leq L_{R}(1+|\theta|)\) _for every_ \(x\in B_{R}(0)\) _and for every_ \(\theta\in\mathbb{R}^{m}\)_._
* \((vi)\) _There exists another constant_ \(L_{R}>0\) _such that, for every_ \(\theta_{1},\theta_{2}\in\mathbb{R}^{m}\)_, it holds_ \[|\nabla_{x}\mathcal{F}(t,x,\theta_{1})-\nabla_{x}\mathcal{F}(t,x,\theta_{2})|\leq L_{R}(1+|\theta_{1}|+|\theta_{2}|)|\theta_{1}-\theta_{2}|,\quad\text{ for a.e. }t\in[0,T]\text{ and every }x\in B_{R}(0).\] _From this, it follows that_ \(|\nabla_{x}\mathcal{F}(t,x,\theta)|\leq L_{R}(1+|\theta|^{2})\) _for every_ \(x\in B_{R}(0)\) _and for every_ \(\theta\in\mathbb{R}^{m}\)_._
* \((vii)\) _There exists another constant_ \(L_{R}>0\) _such that_ \[|\nabla_{\theta}\mathcal{F}(t,x_{1},\theta)-\nabla_{\theta}\mathcal{F}(t,x_{2},\theta)|\leq L_{R}(1+|\theta|)|x_{1}-x_{2}|,\quad\text{ for a.e. }t\in[0,T]\text{ and every }x_{1},x_{2}\in B_{R}(0).\]
Additionally, it is necessary to specify the assumptions on the function \(\ell\) that quantifies the discrepancy between the output of the network and its corresponding label.
**Assumption 3**.: _The function \(\ell:\mathbb{R}^{d}\times\mathbb{R}^{d}\mapsto\mathbb{R}_{+}\) is \(C^{1}\)-regular and non-negative. Moreover, for every \(R>0\), there exists a constant \(L_{R}>0\) such that, for every \(x_{1},x_{2}\in B_{R}(0)\), it holds_
\[|\nabla_{x}\ell(x_{1},y_{1})-\nabla_{x}\ell(x_{2},y_{2})|\leq L_{R}\left(|x_{1}-x_{2}|+|y_{1}-y_{2}|\right). \tag{3.7}\]

_A model example is the squared loss \(\ell(x,y)=|x-y|^{2}\), whose gradient \(\nabla_{x}\ell(x,y)=2(x-y)\) satisfies (3.7) with \(L_{R}=2\)._
Let us begin by establishing a regularity result for the reduced final cost, which refers to the cost function without the regularization term.
**Lemma 3.4** (Differentiability of the cost).: _Let \(T,R>0\) and \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\) be such that \(\operatorname{supp}(\mu_{0})\subset B_{R}(0)\), and let us consider \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) and \(\ell:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) that satisfy, respectively, Assumptions 1-2 and Assumption 3. Then, the reduced final cost_
\[J_{\ell}:\theta\in L^{2}([0,T];\mathbb{R}^{m})\mapsto\left\{\begin{array}{l}\int_{\mathbb{R}^{2d}}\ell(x,y)\,d\mu_{T}^{\theta}(x,y),\\ \mathrm{s.t.}\begin{cases}\partial_{t}\mu_{t}^{\theta}(x,y)+\nabla_{x}\cdot\big{(}\mathcal{F}(t,x,\theta_{t})\mu_{t}^{\theta}(x,y)\big{)}=0,\\ \mu_{t}^{\theta}|_{t=0}(x,y)=\mu_{0}(x,y),\end{cases}\end{array}\right. \tag{3.8}\]
_is Frechet-differentiable. Moreover, using the standard Hilbert space structure of \(L^{2}([0,T],\mathbb{R}^{m})\), we can represent the differential of \(J_{\ell}\) at the point \(\theta\) as the function:_

\[\nabla_{\theta}J_{\ell}(\theta):t\mapsto\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,\Phi_{(0,t)}^{\theta}(x),\theta(t)\big{)}\cdot\mathcal{R}_{(t,T)}^{\theta}(x)^{\top}\cdot\nabla_{x}\ell^{\top}\big{(}\Phi_{(0,T)}^{\theta}(x),y\big{)}\,d\mu_{0}(x,y) \tag{3.9}\]
_for a.e. \(t\in[0,T]\)._
Before proving the statement, we need to introduce the linear operator \(\mathcal{R}_{\tau,s}^{\theta}(x):\mathbb{R}^{d}\to\mathbb{R}^{d}\) with \(\tau,s\in[0,T]\), that is related to the linearization along a trajectory of the dynamics of the control system (2.1), and that appears in (3.9). Given an admissible control \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\), let us consider the corresponding trajectory curve \(t\mapsto\Phi_{(0,t)}^{\theta}(x)\) for \(t\in[0,T]\), i.e., the solution of (2.1) starting at the point \(x\in\mathbb{R}^{d}\) at the initial instant \(t=0\). Given any \(\tau\in[0,T]\), we consider the following linear ODE in the phase space \(\mathbb{R}^{d\times d}\):
\[\begin{cases}\frac{d}{ds}\mathcal{R}_{(\tau,s)}^{\theta}(x)=\nabla_{x} \mathcal{F}(s,\Phi_{(0,s)}^{\theta}(x),\theta(s))\cdot\mathcal{R}_{(\tau,s)}^ {\theta}(x)\quad\text{for a.e. }s\in[0,T],\\ \mathcal{R}_{(\tau,\tau)}^{\theta}(x)=\mathrm{Id}.\end{cases} \tag{3.10}\]
We insist on the fact that, when we write \(\mathcal{R}_{(\tau,s)}^{\theta}(x)\), \(x\) denotes the starting point of the trajectory along which the dynamics has been linearized. We observe that, using Assumption 2-\((iv)-(vi)\) and Caratheodory Theorem, it follows that (3.10) admits a unique solution, for every \(x\in\mathbb{R}^{d}\) and for every \(\tau\in[0,T]\). Since it is an elementary object in Control Theory, the properties of \(\mathcal{R}^{\theta}\) are discussed in the Appendix (see Proposition A.7). We just recall here that the following relation is satisfied:
\[\mathcal{R}_{(\tau,s)}^{\theta}(x)=\nabla_{x}\Phi_{(\tau,s)}^{\theta}\big{|}_{\Phi_{(0,\tau)}^{\theta}(x)} \tag{3.11}\]
for every \(\tau,s\in[0,T]\) and for every \(x\in\mathbb{R}^{d}\) (see, e.g., [10, Theorem 2.3.2]). Moreover, for every \(\tau,s\in[0,T]\) the following identity holds:
\[\mathcal{R}_{\tau,s}^{\theta}(x)\cdot\mathcal{R}_{s,\tau}^{\theta}(x)=\mathrm{ Id},\]
i.e., the matrices \(\mathcal{R}_{\tau,s}^{\theta}(x),\mathcal{R}_{s,\tau}^{\theta}(x)\) are one the inverse of the other. From this fact, it is possible to deduce that
\[\frac{\partial}{\partial\tau}\mathcal{R}_{\tau,s}^{\theta}(x)=-\mathcal{R}_{ \tau,s}^{\theta}(x)\cdot\nabla_{x}\mathcal{F}(\tau,\Phi_{(0,\tau)}^{\theta}(x ),\theta(\tau)) \tag{3.12}\]
for almost every \(\tau,s\in[0,T]\) (see, e.g., [10, Theorem 2.2.3] for the details).
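Numerically, \(\mathcal{R}^{\theta}_{(0,t)}(x)\) can be obtained by integrating the variational equation (3.10) alongside the trajectory; the explicit-Euler sketch below is ours and assumes user-supplied callables for \(\mathcal{F}\) and \(\nabla_{x}\mathcal{F}\). The operators at intermediate instants can then be recovered from the group property \(\mathcal{R}^{\theta}_{(\tau,s)}(x)=\mathcal{R}^{\theta}_{(0,s)}(x)\cdot\mathcal{R}^{\theta}_{(0,\tau)}(x)^{-1}\).

```python
import numpy as np

def trajectory_and_resolvent(x0, theta, F, dF_dx, t_grid):
    # Jointly integrate x(t) from (2.1) and R_{(0,t)}(x0) from (3.10)
    # with explicit Euler; dF_dx(t, x, th) is the d x d Jacobian in x.
    d = len(x0)
    x = np.array(x0, dtype=float)
    R = np.eye(d)                              # R_{(0,0)} = Id
    xs, Rs = [x.copy()], [R.copy()]
    for k in range(len(t_grid) - 1):
        t, h = t_grid[k], t_grid[k + 1] - t_grid[k]
        th = theta(t)
        R = R + h * dF_dx(t, x, th) @ R        # variational equation (3.10)
        x = x + h * F(t, x, th)                # controlled dynamics (2.1)
        xs.append(x.copy()); Rs.append(R.copy())
    return xs, Rs
```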
Proof of Lemma 3.4.: Let us fix an admissible control \(\theta\in L^{2}([0,T];\mathbb{R}^{m})\) and let \(\mu_{\cdot}^{\theta}\in\mathcal{C}^{0}([0,T];\mathcal{P}_{c}(\mathbb{R}^{2d}))\) be the unique solution of the continuity equation (3.3), corresponding to the control \(\theta\) and satisfying \(\mu^{\theta}|_{t=0}=\mu_{0}\). According to Proposition 3.2, this curve can be expressed as \(\mu_{t}^{\theta}=\mathbf{\Phi}_{(0,t)\#}^{\theta}\mu_{0}\) for every \(t\in[0,T]\), where the map \(\mathbf{\Phi}_{(0,t)}^{\theta}=(\Phi_{(0,t)}^{\theta},\mathrm{Id}):\mathbb{R}^{2 d}\to\mathbb{R}^{2d}\) has been introduced in (3.2) as the flow of the extended control system (3.1). In particular, we can rewrite the terminal cost \(J_{\ell}\) defined in (3.8) as
\[J_{\ell}(\theta)=\int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta}(x),y)d\mu_{0}(x,y).\]
In order to compute the gradient \(\nabla_{\theta}J_{\ell}\), we preliminarily need to focus on the differentiability with respect to \(\theta\) of the mapping \(\theta\mapsto\ell(\Phi^{\theta}_{(0,T)}(x),y)\), when \((x,y)\) is fixed. Indeed, given another control \(\vartheta\in L^{2}([0,T];\mathbb{R}^{m})\) and \(\varepsilon>0\), from Proposition A.6 it descends that
\[\begin{split}\Phi^{\theta+\varepsilon\vartheta}_{(0,T)}(x)& =\Phi^{\theta}_{(0,T)}(x)+\varepsilon\xi^{\theta}(T)+o_{\theta}( \varepsilon)\\ &=\Phi^{\theta}_{(0,T)}(x)+\varepsilon\int_{0}^{T}\mathcal{R}^{ \theta}_{(s,T)}(x)\nabla_{\theta}\mathcal{F}(s,\Phi^{\theta}_{(0,s)}(x), \theta(s))\vartheta(s)ds+o_{\theta}(\varepsilon)\end{split}\qquad \text{ as }\varepsilon\to 0, \tag{3.13}\]
where \(o_{\theta}(\varepsilon)\) is uniform for every \(x\in B_{R}(0)\subset\mathbb{R}^{d}\), and as \(\vartheta\) varies in the unit ball of \(L^{2}\). Owing to Assumption 3, for every \(x,y,v\in B_{R}(0)\) we observe that
\[|\ell(x+\varepsilon v+o(\varepsilon),y)-\ell(x,y)-\varepsilon\nabla_{x}\ell (x,y)\cdot v|\leq|\nabla_{x}\ell(x,y)|o(\varepsilon)+\frac{1}{2}L_{R}| \varepsilon v+o(\varepsilon)|^{2}\qquad\text{as }\varepsilon\to 0. \tag{3.14}\]
Therefore, combining (3.13) and (3.14), we obtain that
\[\ell(\Phi^{\theta+\varepsilon\vartheta}_{(0,T)}(x),y)-\ell(\Phi^{\theta}_{(0,T)}(x),y)=\varepsilon\int_{0}^{T}\big{(}\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}( x),y)\cdot\mathcal{R}^{\theta}_{(s,T)}(x)\cdot\nabla_{\theta}\mathcal{F}(s, \Phi^{\theta}_{(0,s)}(x),\theta(s))\big{)}\cdot\vartheta(s)ds+o_{\theta}( \varepsilon).\]
Since the previous expression is uniform for \(x,y\in B_{R}(0)\), then if we integrate both sides of the last identity with respect to \(\mu_{0}\), we have that
\[J_{\ell}(\theta+\varepsilon\vartheta)-J_{\ell}(\theta)=\varepsilon\int_{ \mathbb{R}^{2d}}\int_{0}^{T}\big{(}\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x),y) \cdot\mathcal{R}^{\theta}_{(s,T)}(x)\cdot\nabla_{\theta}\mathcal{F}(s,\Phi^{ \theta}_{(0,s)}(x),\theta(s))\big{)}\cdot\vartheta(s)\,ds\,d\mu_{0}(x,y)+o_{ \theta}(\varepsilon). \tag{3.15}\]
This proves the Frechet differentiability of the functional \(J_{\ell}\) at the point \(\theta\). We observe that, from Proposition 2.1, Proposition A.7 and Assumption 2, it follows that the function \(s\mapsto\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x),y)\cdot\mathcal{R}^{\theta}_{ (s,T)}(x)\cdot\nabla_{\theta}\mathcal{F}(s,\Phi^{\theta}_{(0,s)}(x),\theta(s))\) is uniformly bounded in \(L^{2}\), as \(x,y\) vary in \(B_{R}(0)\subset\mathbb{R}^{d}\). Then, using Fubini Theorem, the first term of the expansion (3.15) can be rewritten as
\[\int_{0}^{T}\left(\int_{\mathbb{R}^{2d}}\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}( x),y)\cdot\mathcal{R}^{\theta}_{(s,T)}(x)\cdot\nabla_{\theta}\mathcal{F}(s, \Phi^{\theta}_{(0,s)}(x),\theta(s))\,d\mu_{0}(x,y)\right)\cdot\vartheta(s)\,ds.\]
Hence, from the previous asymptotic expansion and from Riesz Representation Theorem, we deduce (3.9).
We now prove the most important result of this subsection, concerning the Lipschitz regularity of the gradient \(\nabla_{\theta}J_{\ell}\).
**Proposition 3.5**.: _Under the same assumptions and notations as in Lemma 3.4, we have that the gradient \(\nabla_{\theta}J_{\ell}:L^{2}([0,T],\mathbb{R}^{m})\to L^{2}([0,T],\mathbb{R}^ {m})\) is Lipschitz-continuous on every bounded set of \(L^{2}\). More precisely, given \(\theta_{1},\theta_{2}\in L^{2}([0,T];\mathbb{R}^{m})\), there exists a constant \(\mathcal{L}(T,R,\|\theta_{1}\|_{L^{2}},\|\theta_{2}\|_{L^{2}})>0\) such that_
\[\big{\|}\nabla_{\theta}J_{\ell}(\theta_{1})-\nabla_{\theta}J_{\ell}(\theta_{2}) \big{\|}_{L^{2}}\leq\mathcal{L}(T,R,\|\theta_{1}\|_{L^{2}},\|\theta_{2}\|_{L^{ 2}})\,\big{\|}\theta_{1}-\theta_{2}\big{\|}_{L^{2}}.\]
Proof.: Let us consider two admissible controls \(\theta_{1},\theta_{2}\in L^{2}([0,T],\mathbb{R}^{m})\) such that \(\|\theta_{1}\|_{L^{2}},\|\theta_{2}\|_{L^{2}}\leq C\). In order to simplify the notations, given \(x\in B_{R}(0)\subset\mathbb{R}^{d}\), we define the curves \(x_{1}:[0,T]\to\mathbb{R}^{d}\) and \(x_{2}:[0,T]\to\mathbb{R}^{d}\) as
\[x_{1}(t):=\Phi^{\theta_{1}}_{(0,t)}(x),\quad x_{2}(t):=\Phi^{\theta_{2}}_{(0,t)}(x)\]
for every \(t\in[0,T]\), where the flows \(\Phi^{\theta_{1}},\Phi^{\theta_{2}}\) were introduced in (2.2). We recall that, in virtue of Proposition 2.1, \(x_{1}(t),x_{2}(t)\in B_{R}(0)\) for every \(t\in[0,T]\). Then, for every \(y\in B_{R}(0)\), we observe that
\[\begin{split}\Big{|}\nabla_{\theta}&\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}\mathcal{R}^{\theta_{1}}_{(t,T)}(x)^{\top}\nabla_{x}\ell^{\top}\big{(}x_{1}(T),y\big{)}-\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{2}(t),\theta_{2}(t)\big{)}\mathcal{R}^{\theta_{2}}_{(t,T)}(x)^{\top}\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\\ &\leq\Big{|}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}\Big{|}\,\Big{|}\mathcal{R}^{\theta_{1}}_{(t,T)}(x)^{\top}\Big{|}\,\Big{|}\nabla_{x}\ell^{\top}\big{(}x_{1}(T),y\big{)}-\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\\ &\quad+\Big{|}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}\Big{|}\,\Big{|}\mathcal{R}^{\theta_{1}}_{(t,T)}(x)^{\top}-\mathcal{R}^{\theta_{2}}_{(t,T)}(x)^{\top}\Big{|}\,\Big{|}\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\\ &\quad+\Big{|}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}-\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{2}(t),\theta_{2}(t)\big{)}\Big{|}\,\Big{|}\mathcal{R}^{\theta_{2}}_{(t,T)}(x)^{\top}\Big{|}\,\Big{|}\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\end{split} \tag{3.16}\]
for a.e. \(t\in[0,T]\). We bound separately the three terms at the right-hand side of (3.16). As regards the first addend, from Assumption 2-(\(v\)), Assumption 3, Proposition A.7 and Lemma A.4, we deduce that there exists a positive constant \(C_{1}>0\) such that
\[\Big{|}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}\Big{|}\,\Big{|}\mathcal{R}^{\theta_{1}}_{(t,T)}(x)^{\top}\Big{|}\,\Big{|}\nabla_{x}\ell^{\top}\big{(}x_{1}(T),y\big{)}-\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\leq C_{1}\left(1+|\theta_{1}(t)|\right)\|\theta_{1}-\theta_{2}\|_{L^{2}} \tag{3.17}\]
for a.e. \(t\in[0,T]\). Similarly, using again Assumption 2-\((v)\), Assumption 3, and Proposition A.7 on the second addend at the right-hand side of (3.16), we obtain that there exists \(C_{2}>0\) such that
\[\left|\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)} \right|\Big{|}\mathcal{R}_{(t,T)}^{\theta_{1}}(x)^{\top}-\mathcal{R}_{(t,T)}^{ \theta_{2}}(x)^{\top}\Big{|}\left|\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)} \right|\leq C_{2}\left(1+|\theta_{1}(t)|\right)\|\theta_{1}-\theta_{2}\|_{L^{ 2}} \tag{3.18}\]
for a.e. \(t\in[0,T]\). Moreover, there exists \(C_{3}>0\) such that the third term can be bounded as follows:

\[\Big{|}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}-\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{2}(t),\theta_{2}(t)\big{)}\Big{|}\,\Big{|}\mathcal{R}_{(t,T)}^{\theta_{2}}(x)^{\top}\Big{|}\,\Big{|}\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\leq C_{3}\Big{(}(1+|\theta_{1}(t)|+|\theta_{2}(t)|)\|\theta_{1}-\theta_{2}\|_{L^{2}}+|\theta_{1}(t)-\theta_{2}(t)|\Big{)} \tag{3.19}\]
for a.e. \(t\in[0,T]\), where we used Assumption 2-\((v)-(vii)\), Proposition A.7 and Lemma A.4. Therefore, combining (3.16)-(3.19), we deduce that
\[\Big{|}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{1}(t),\theta_{1}(t)\big{)}\mathcal{R}_{(t,T)}^{\theta_{1}}(x)^{\top}\nabla_{x}\ell^{\top}\big{(}x_{1}(T),y\big{)}-\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,x_{2}(t),\theta_{2}(t)\big{)}\mathcal{R}_{(t,T)}^{\theta_{2}}(x)^{\top}\nabla_{x}\ell^{\top}\big{(}x_{2}(T),y\big{)}\Big{|}\leq\bar{C}\Big{[}(1+|\theta_{1}(t)|+|\theta_{2}(t)|)\|\theta_{1}-\theta_{2}\|_{L^{2}}+|\theta_{1}(t)-\theta_{2}(t)|\Big{]} \tag{3.20}\]
for a.e. \(t\in[0,T]\). We observe that the last inequality holds for every \(x,y\in B_{R}(0)\). Therefore, if we integrate both sides of (3.20) with respect to the probability measure \(\mu_{0}\), recalling the expression of the gradient of \(J_{\ell}\) reported in (3.9), we have that
\[\left|\nabla_{\theta}J_{\ell}(\theta_{1})[t]-\nabla_{\theta}J_{\ell}(\theta_{2})[t]\right|\leq\bar{C}\Big{[}(1+|\theta_{1}(t)|+|\theta_{2}(t)|)\|\theta_{1}-\theta_{2}\|_{L^{2}}+|\theta_{1}(t)-\theta_{2}(t)|\Big{]} \tag{3.21}\]
for a.e. \(t\in[0,T]\). Taking the \(L^{2}\)-norm in \(t\) of both sides and using that \(\|\theta_{1}\|_{L^{2}},\|\theta_{2}\|_{L^{2}}\leq C\), we obtain the desired Lipschitz estimate, and this concludes the proof.
From the previous result we can deduce that the terminal cost \(J_{\ell}:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) is locally semi-convex.
**Corollary 3.6** (Local semiconvexity of the cost functional).: _Under the same assumptions and notations as in Lemma 3.4, let us consider a bounded subset \(\Gamma\subset L^{2}([0,T];\mathbb{R}^{m})\). Then, \(\nabla_{\theta}J:L^{2}([0,T],\mathbb{R}^{m})\to L^{2}([0,T],\mathbb{R}^{m})\) is Lipschitz continuous on \(\Gamma\). Moreover, there exists a constant \(\mathcal{L}(T,R,\Gamma)>0\) such that the cost functional \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) defined in (3.6) satisfies the following semiconvexity estimate:_
\[J\big{(}(1-\zeta)\theta_{1}+\zeta\theta_{2}\big{)}\leq(1-\zeta)J(\theta_{1})+ \zeta J(\theta_{2})-(2\lambda-\mathcal{L}(T,R,\Gamma))\tfrac{\zeta(1-\zeta)} {2}\|\theta_{1}-\theta_{2}\|_{L^{2}}^{2} \tag{3.22}\]
_for every \(\theta_{1},\theta_{2}\in\Gamma\) and for every \(\zeta\in[0,1]\). In particular, if \(\lambda>\frac{1}{2}\mathcal{L}(T,R,\Gamma)\), the cost functional \(J\) is strictly convex over \(\Gamma\)._
Proof.: We recall that \(J(\theta)=J_{\ell}(\theta)+\lambda\|\theta\|_{L^{2}}^{2}\), where \(J_{\ell}\) has been introduced in (3.8). Owing to Proposition 3.5, it follows that \(\nabla_{\theta}J_{\ell}\) is Lipschitz continuous on \(\Gamma\) with constant \(\mathcal{L}(T,R,\Gamma)\). Since \(\nabla_{\theta}J(\theta)=\nabla_{\theta}J_{\ell}(\theta)+2\lambda\theta\), this implies that \(\nabla_{\theta}J\) is Lipschitz continuous on \(\Gamma\) as well. Moreover, it descends that
\[J_{\ell}\big{(}(1-\zeta)\theta_{1}+\zeta\theta_{2}\big{)}\leq(1-\zeta)J_{\ell} (\theta_{1})+\zeta J_{\ell}(\theta_{2})+\mathcal{L}(T,R,\Gamma)\tfrac{\zeta(1- \zeta)}{2}\|\theta_{1}-\theta_{2}\|_{L^{2}}^{2}\]
for every \(\theta_{1},\theta_{2}\in\Gamma\) and for every \(\zeta\in[0,1]\). On the other hand, recalling that
\[\|(1-\zeta)\theta_{1}+\zeta\theta_{2}\|_{L^{2}}^{2}=(1-\zeta)\|\theta_{1}\|_{L ^{2}}^{2}+\zeta\|\theta_{2}\|_{L^{2}}^{2}-\zeta(1-\zeta)\|\theta_{1}-\theta_{2} \|_{L^{2}}^{2}\]
for every \(\theta_{1},\theta_{2}\in L^{2}\), we immediately deduce (3.22).
**Remark 3.1**.: _When the parameter \(\lambda>0\) that tunes the \(L^{2}\)-regularization is large enough, we can show that the functional \(J\) defined by (3.6) admits a unique global minimizer. Indeed, since the control identically \(0\) is an admissible competitor, we have that_
\[\inf_{\theta\in L^{2}}J(\theta)\leq J(0)=J_{\ell}(0),\]
_where we observe that the right-hand side is not affected by the value of \(\lambda\). Hence, recalling that \(J(\theta)=J_{\ell}(\theta)+\lambda\|\theta\|_{L^{2}}^{2}\), we have that the sublevel set \(\{\theta:J(\theta)\leq J_{\ell}(0)\}\) is included in the ball \(B_{\lambda}:=\{\theta:\|\theta\|_{L^{2}}^{2}\leq\frac{1}{\lambda}J_{\ell}(0)\}\). Since these balls are decreasing as \(\lambda\) increases, owing to Corollary 3.6, we deduce that there exists a parameter \(\bar{\lambda}>0\) such that the cost functional \(J\) is strongly convex when restricted to \(B_{\bar{\lambda}}\). Then, Lemma 3.4 guarantees that the functional \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) introduced in (3.6) is continuous with respect to the strong topology of \(L^{2}\), while the convexity implies that it is weakly lower semi-continuous as well. Being the ball \(B_{\bar{\lambda}}\) weakly compact, we deduce that the restriction to \(B_{\bar{\lambda}}\) of the functional \(J\) admits a unique minimizer \(\theta^{*}\). However, since \(B_{\bar{\lambda}}\) includes the sublevel set \(\{\theta:J(\theta)\leq J_{\ell}(0)\}\), it follows that \(\theta^{*}\) is actually the unique global minimizer. It is interesting to observe that, even though \(\lambda\) is chosen large enough to ensure existence (and uniqueness) of the global minimizer, it is not possible to conclude that the functional \(J\) is globally convex. This is essentially due to the fact that Corollary 3.6 holds only on bounded subsets of \(L^{2}\)._
Taking advantage of the representation of the gradient of the terminal cost \(J_{\ell}\) provided by (3.9), we can formulate the necessary optimality conditions for the cost \(J\) introduced in (3.6). In order to do that, we introduce the function \(p:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{d}\) as follows:
\[p_{t}(x,y):=\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x),y)\cdot\mathcal{R}^{\theta }_{(t,T)}(x), \tag{3.23}\]
where \(\mathcal{R}^{\theta}_{(t,T)}(x)\) is defined according to (3.10). We observe that \(p\) (as well as \(\nabla_{x}\ell\)) should be understood as a row vector. Moreover, using (3.12), we deduce that, for every \(x,y\in\mathbb{R}^{d}\), the \(t\mapsto p_{t}(x,y)\) is solving the following backward Cauchy problem:
\[\frac{\partial}{\partial t}p_{t}(x,y)=-p_{t}(x,y)\cdot\nabla_{x}\mathcal{F}(t, \Phi^{\theta}_{(0,t)}(x),\theta(t)),\qquad p_{T}(x,y)=\nabla_{x}\ell(\Phi^{ \theta}_{(0,T)}(x),y). \tag{3.24}\]
Hence, we can equivalently rewrite \(\nabla_{\theta}J_{\ell}\) using \(p\):
\[\nabla_{\theta}J_{\ell}(\theta)[t]=\int_{\mathbb{R}^{2d}}\nabla_{\theta} \mathcal{F}^{\top}\big{(}t,\Phi^{\theta}_{(0,t)}(x),\theta(t)\big{)}\cdot p_ {t}^{\top}(x,y)\,d\mu_{0}(x,y) \tag{3.25}\]
for almost every \(t\in[0,T]\). Therefore, recalling that \(J(\theta)=J_{\ell}(\theta)+\lambda\|\theta\|_{L^{2}}^{2}\), we deduce that the stationary condition \(\nabla_{\theta}J(\theta^{*})=0\) can be rephrased as
\[\begin{cases}\partial_{t}\mu_{t}^{*}(x,y)+\nabla_{x}\cdot\big{(}\mathcal{F}(t,x,\theta^{*}(t))\mu_{t}^{*}(x,y)\big{)}=0,&\mu_{t}^{*}|_{t=0}(x,y)=\mu_{0}(x,y ),\\ \partial_{t}p_{t}^{*}(x,y)=-p_{t}^{*}(x,y)\cdot\nabla_{x}\mathcal{F}(t,\Phi^{ \theta^{*}}_{(0,t)}(x),\theta^{*}(t)),&p_{t}^{*}|_{t=T}(x,y)=\nabla_{x}\ell( \Phi^{\theta^{*}}_{(0,T)}(x),y),\\ \theta^{*}(t)=-\frac{1}{2\lambda}\int_{\mathbb{R}^{2d}}\nabla_{\theta} \mathcal{F}^{\top}\big{(}t,\Phi^{\theta^{*}}_{(0,t)}(x),\theta^{*}(t)\big{)} \cdot p_{t}^{*\top}(x,y)\,d\mu_{0}(x,y).\end{cases} \tag{3.26}\]
**Remark 3.2**.: _The computation of \(p\) through the backward integration of (3.24) can be interpreted as the control theoretic equivalent of the "back-propagation of the gradients". We observe that, in order to check whether (3.26) is satisfied, it is sufficient to evaluate \(p^{*}\) only on \(\operatorname{supp}(\mu_{0})\). Moreover, the evaluation of \(p^{*}\) on different points \((x_{1},y_{1}),(x_{2},y_{2})\in\operatorname{supp}(\mu_{0})\) involves the resolution of two uncoupled backward ODEs. This means that, when dealing with a measure \(\mu_{0}\) that charges only finitely many points, we can solve the equation (3.24) in parallel for every point in \(\operatorname{supp}(\mu_{0})\)._
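To illustrate the "back-propagation" reading of Remark 3.2, the following sketch (our own construction; all derivative callables and names are assumptions of the example) approximates \(\nabla_{\theta}J_{\ell}\) via (3.25) for an empirical \(\mu_{0}\): a forward sweep for each trajectory, a backward sweep for the covector \(p\) of (3.24), and an average over the samples, which can indeed be carried out independently for each point of \(\operatorname{supp}(\mu_{0})\).

```python
import numpy as np

def grad_J_ell(samples, theta, F, dF_dx, dF_dth, dl_dx, t_grid):
    # samples: list of pairs (x0, y0); theta: callable t -> R^m;
    # dF_dx: (d, d) Jacobian in x; dF_dth: (d, m) Jacobian in theta;
    # dl_dx: gradient of the loss in x. Returns grad[k] ~ grad J_ell [t_k].
    K, m = len(t_grid), theta(t_grid[0]).shape[0]
    grad = np.zeros((K, m))
    for x0, y0 in samples:
        # forward sweep: x_k approximates Phi_{(0, t_k)}(x0)
        xs = [np.asarray(x0, dtype=float)]
        for k in range(K - 1):
            h = t_grid[k + 1] - t_grid[k]
            xs.append(xs[-1] + h * F(t_grid[k], xs[-1], theta(t_grid[k])))
        # backward sweep for the row vector p_t, cf. (3.24)
        p = dl_dx(xs[-1], y0)                  # p_T = grad_x loss at time T
        for k in range(K - 1, -1, -1):
            grad[k] += dF_dth(t_grid[k], xs[k], theta(t_grid[k])).T @ p
            if k > 0:
                h = t_grid[k] - t_grid[k - 1]
                p = p + h * p @ dF_dx(t_grid[k], xs[k], theta(t_grid[k]))
    return grad / len(samples)
```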
In virtue of Proposition 3.5, we can study the gradient flow induced by the cost functional \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) on its domain. More precisely, given an admissible control \(\theta_{0}\in L^{2}([0,T],\mathbb{R}^{m})\), we consider the gradient flow equation:
\[\begin{cases}\dot{\theta}(\omega)=-\nabla_{\theta}J(\theta(\omega))\quad \text{for $\omega\geq 0$},\\ \theta(0)=\theta_{0}.\end{cases} \tag{3.27}\]
In the next result we show that the gradient flow equation (3.27) is well-posed and that the solution is defined for every \(\omega\geq 0\). In the particular case of linear-control systems, the properties of the gradient flow trajectories have been investigated in [42].
**Lemma 3.7**.: _Let \(T,R>0\) and \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\) be a probability measure such that \(\operatorname{supp}(\mu_{0})\subset B_{R}(0)\), and let us consider \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) and \(\ell:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) that satisfy, respectively, Assumptions 1-2 and Assumption 3. Then, for every \(\theta_{0}\in L^{2}([0,T],\mathbb{R}^{m})\), the gradient flow equation (3.27) admits a unique solution \(\omega\mapsto\theta(\omega)\) of class \(C^{1}\) that is defined for every \(\omega\in[0,+\infty)\)._
Proof.: Let us consider \(\theta_{0}\in L^{2}([0,T],\mathbb{R}^{m})\), and let us introduce the sub-level set
\[\Gamma:=\{\theta\in L^{2}([0,T],\mathbb{R}^{m}):J(\theta)\leq J(\theta_{0})\},\]
where \(J\) is the functional introduced in (3.6) defining the mean-field optimal control problem. Using the fact that the end-point cost \(\ell:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}_{+}\) is non-negative, we deduce that \(\Gamma\subset\{\theta\in L^{2}([0,T],\mathbb{R}^{m}):\|\theta\|_{L^{2}}^{2}\leq \frac{1}{\lambda}J(\theta_{0})\}\). Hence, from Proposition 3.5 it follows that the gradient field \(\nabla_{\theta}J\) is Lipschitz (and bounded) on \(\Gamma\). Hence, using a classical result on ODE in Banach spaces (see, e.g., [31, Theorem 5.1.1]), it follows that the initial value problem (3.27) admits a unique small-time solution \(\omega\mapsto\theta(\omega)\) of class \(C^{1}\) defined for \(\omega\in[-\delta,\delta]\), with \(\delta>0\). Moreover, we observe that
\[\frac{d}{d\omega}J(\theta(\omega))=\langle\nabla_{\theta}J(\theta(\omega)),\dot{\theta}(\omega)\rangle=-\|\nabla_{\theta}J(\theta(\omega))\|_{L^{2}}^{2}\leq 0,\]

and this implies that \(\theta(\omega)\in\Gamma\) for every \(\omega\in[0,\delta]\). Hence, it is possible to recursively extend the solution to every interval of the form \([0,M]\), with \(M>0\).
We observe that, under the current working assumptions, we cannot provide any convergence result for the gradient flow trajectories. This is not surprising since, when the regularization parameter \(\lambda>0\) is small, it is not even possible to prove that the functional \(J\) admits minimizers. Indeed, the argument presented in Remark 3.1 requires the regularization parameter \(\lambda\) to be sufficiently large.
We conclude the discussion with an observation on a possible discretization of (3.27). If we fix a sufficiently small parameter \(\tau>0\), given an initial guess \(\theta_{0}\), we can consider the sequence of controls \((\theta_{k}^{\tau})_{k\geq 0}\subset L^{2}([0,T],\mathbb{R}^{m})\) defined through the Minimizing Movement Scheme:
\[\theta_{0}^{\tau}=\theta_{0},\qquad\theta_{k+1}^{\tau}\in\arg\min_{\theta} \left[J(\theta)+\frac{1}{2\tau}\|\theta-\theta_{k}^{\tau}\|_{L^{2}}^{2}\right] \quad\text{for every }k\geq 0. \tag{3.28}\]
**Remark 3.3**.: _We observe that the minimization problems in (3.28) are well-posed as soon as the functionals \(\theta\mapsto J_{\theta_{k}}^{\tau}(\theta):=J(\theta)+\frac{1}{2\tau}\|\theta-\theta_{k}^{\tau}\|_{L^{2}}^{2}\) are strictly convex on the bounded sublevel set \(K_{\theta_{0}}:=\{\theta:J(\theta)\leq J(\theta_{0}^{\tau})\}\), for every \(k\geq 0\). Hence, the parameter \(\tau>0\) can be calibrated by means of the estimates provided by Corollary 3.6, considering the bounded set \(K_{\theta_{0}}\). Then, using an inductive argument, it follows that, for every \(k\geq 0\), the functional \(J_{\theta_{k}}^{\tau}:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) admits a unique global minimizer \(\theta_{k+1}^{\tau}\). Also for \(J_{\theta_{k}}^{\tau}\) we can derive the necessary conditions for optimality satisfied by \(\theta_{k+1}^{\tau}\), which are analogous to the ones formulated in (3.26), and which descend as well from the identity \(\nabla_{\theta}J_{\theta_{k}}^{\tau}(\theta_{k+1}^{\tau})=0\):_
\[\begin{cases}\partial_{t}\mu_{t}(x,y)+\nabla_{x}\cdot\big{(}\mathcal{F}(t,x,\theta_{k+1}^{\tau}(t))\mu_{t}(x,y)\big{)}=0,&\mu_{t}|_{t=0}(x,y)=\mu_{0}(x,y),\\ \partial_{t}p_{t}(x,y)=-p_{t}(x,y)\cdot\nabla_{x}\mathcal{F}(t,\Phi_{(0,t)}^{\theta_{k+1}^{\tau}}(x),\theta_{k+1}^{\tau}(t)),&p_{t}|_{t=T}(x,y)=\nabla_{x}\ell(\Phi_{(0,T)}^{\theta_{k+1}^{\tau}}(x),y),\\ \theta_{k+1}^{\tau}(t)=\frac{1}{1+2\lambda\tau}\left(\theta_{k}^{\tau}(t)-\tau\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,\Phi_{(0,t)}^{\theta_{k+1}^{\tau}}(x),\theta_{k+1}^{\tau}(t)\big{)}\cdot p_{t}^{\top}(x,y)\,d\mu_{0}(x,y)\right).\end{cases} \tag{3.29}\]
_Finally, we observe that the mapping \(\Lambda_{\theta_{k}^{\tau}}^{\tau}:L^{2}([0,T],\mathbb{R}^{m})\to L^{2}([0,T],\mathbb{R}^{m})\) defined for a.e. \(t\in[0,T]\) as_

\[\Lambda_{\theta_{k}^{\tau}}^{\tau}(\theta)[t]:=\frac{1}{1+2\lambda\tau}\left(\theta_{k}^{\tau}(t)-\tau\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^{\top}\big{(}t,\Phi_{(0,t)}^{\theta}(x),\theta(t)\big{)}\cdot p_{t}^{\top}(x,y)\,d\mu_{0}(x,y)\right) \tag{3.30}\]
_is a contraction on \(K_{\theta_{0}}\) as soon as_
\[\frac{\tau}{1+2\lambda\tau}\mathrm{Lip}\left(\nabla_{\theta}J_{\ell}|_{K_{ \theta_{0}}}\right)<1.\]
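Under the contraction condition above, one step of the scheme (3.28) can be computed by iterating \(\Lambda^{\tau}_{\theta^{\tau}_{k}}\) until convergence. The sketch below is a hypothetical implementation of ours for controls discretized on a time grid; `grad_J_ell` stands for any routine returning \(\nabla_{\theta}J_{\ell}\) on the same grid (for instance, a wrapper around the routine sketched after Remark 3.2).

```python
import numpy as np

def mms_step(theta_k, grad_J_ell, lam, tau, n_iter=50):
    # Fixed-point iteration for Lambda in (3.30):
    #   theta <- (theta_k - tau * grad_J_ell(theta)) / (1 + 2 * lam * tau),
    # whose fixed point is the minimizer theta_{k+1} of (3.28).
    # theta_k: array of shape (K, m) with the control values on the grid.
    theta = theta_k.copy()
    for _ in range(n_iter):
        theta = (theta_k - tau * grad_J_ell(theta)) / (1.0 + 2.0 * lam * tau)
    return theta
```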
For every \(\tau>0\) such that the sequence \((\theta_{k}^{\tau})_{k\geq 0}\) is defined, we denote with \(\tilde{\theta}^{\tau}:[0,+\infty)\to L^{2}([0,T],\mathbb{R}^{m})\) the piecewise affine interpolation obtained as
\[\tilde{\theta}^{\tau}(\omega)=\theta_{k}^{\tau}+\frac{\theta_{k+1}^{\tau}- \theta_{k}^{\tau}}{\tau}(\omega-k\tau)\quad\text{for }\omega\in[k\tau,(k+1)\tau]. \tag{3.31}\]
We finally report a classical result concerning the convergence of the piecewise affine interpolation \(\tilde{\theta}^{\tau}\) to the gradient flow trajectory solving (3.27).
**Proposition 3.8**.: _Under the same assumptions and notations as in Lemma 3.7, let us consider an initial point \(\theta_{0}\in L^{2}([0,T],\mathbb{R}^{m})\) and a sequence \((\tau_{j})_{j\in\mathbb{N}}\) such that \(\tau_{j}\to 0\) as \(j\to\infty\), and let \((\tilde{\theta}^{\tau_{j}})_{j\in\mathbb{N}}\) be the sequence of piecewise affine curves defined by (3.31). Then, for every \(\Omega>0\), there exists a subsequence \((\tilde{\theta}^{\tau_{j_{k}}})_{k\in\mathbb{N}}\) converging uniformly on the interval \([0,\Omega]\) to the solution of (3.27) starting from \(\theta_{0}\)._
Proof.: The proof follows directly from [41, Proposition 2.3].
### Finite particles approximation
In this section, we study the stability of the mean-field optimal control problem (3.6) with respect to finite-samples distributions. More precisely, assume that we are given samples \(\{(X_{0}^{i},Y_{0}^{i})\}_{i=1}^{N}\) of size \(N\geq 1\) independently and identically distributed according to \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\), and consider the empirical loss minimization problem
\[\inf_{\theta\in L^{2}([0,T];\mathbb{R}^{m})}J^{N}(\theta):=\begin{cases}\frac{1 }{N}\sum_{i=1}^{N}\ell\big{(}X^{i}(T),Y^{i}(T)\big{)}+\lambda\int_{0}^{T}| \theta(t)|^{2}\,dt\\ \text{s.t.}\begin{cases}\dot{X}^{i}(t)=\mathcal{F}(t,X^{i}(t),\theta(t)),&\dot {Y}^{i}(t)=0,\\ \big{(}X^{i}(t),Y^{i}(t)\big{)}\big{|}_{t=0}=(X_{0}^{i},Y_{0}^{i}),&i\in\{1, \ldots,N\}.\end{cases}\end{cases} \tag{3.32}\]
By introducing the empirical measure \(\mu_{0}^{N}\in\mathcal{P}_{c}^{N}(\mathbb{R}^{2d})\), defined as
\[\mu_{0}^{N}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{(X_{0}^{i},Y_{0}^{i})},\]
the cost function in (3.32) can be rewritten as
\[J^{N}(\theta)=\int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta}(x),y)\,d\mu_{0} ^{N}(x,y)+\lambda\|\theta\|_{L^{2}}^{2} \tag{3.33}\]
for every \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\), and the empirical loss minimization problem in (3.32) can be recast as a mean-field optimal control problem with initial datum \(\mu_{0}^{N}\). In this section we are interested in studying the asymptotic behavior of the functional \(J^{N}\) as \(N\) tends to infinity. More precisely, we consider a sequence of probability measures \((\mu_{0}^{N})_{N\geq 1}\) such that \(\mu_{0}^{N}\) charges uniformly \(N\) points, and such that
\[W_{1}(\mu_{0}^{N},\mu_{0})\ \underset{N\to+\infty}{\longrightarrow}\ 0.\]
Then, in Proposition 3.9 we study the uniform convergence of \(J^{N}\) and of \(\nabla_{\theta}J^{N}\) to \(J\) and \(\nabla_{\theta}J\), respectively, where \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) is the functional defined in (3.6) and corresponding to the limiting measure \(\mu_{0}\). Moreover, in Theorem 3.10, assuming the existence of a region where the functionals \(J^{N}\) are uniformly strongly convex, we provide an estimate of the so-called _generalization error_ in terms of the distance \(W_{1}(\mu_{0}^{N},\mu_{0})\).
**Proposition 3.9** (Uniform convergence of \(J^{N}\) and \(\nabla_{\theta}J^{N}\)).: _Let us consider a probability measure \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\) and a sequence \((\mu_{0}^{N})_{N\geq 1}\) such that \(\mu_{0}^{N}\in\mathcal{P}_{c}^{N}(\mathbb{R}^{2d})\) for every \(N\geq 1\). Let us further assume that \(W_{1}(\mu_{0}^{N},\mu_{0})\to 0\) as \(N\to\infty\), and that there exists \(R>0\) such that \(\operatorname{supp}(\mu_{0}),\operatorname{supp}(\mu_{0}^{N})\subset B_{R}(0)\) for every \(N\geq 1\). Given \(T>0\), let \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) and \(\ell:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) satisfy, respectively, Assumptions 1-2 and Assumption 3, and let \(J,J^{N}:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) be the cost functionals defined in (3.6) and (3.32), respectively. Then, for every bounded subset \(\Gamma\subset L^{2}([0,T],\mathbb{R}^{m})\), we have that_
\[\lim_{N\to\infty}\ \sup_{\theta\in\Gamma}|J^{N}(\theta)-J(\theta)|=0 \tag{3.34}\]
_and_
\[\lim_{N\to\infty}\ \sup_{\theta\in\Gamma}\|\nabla_{\theta}J^{N}(\theta)-\nabla_{ \theta}J(\theta)\|_{L^{2}}=0, \tag{3.35}\]
_where \(J\) was introduced in (3.6), and \(J^{N}\) is defined as in (3.33)._
Proof.: Since we have that \(J(\theta)=J_{\ell}(\theta)+\lambda\|\theta\|_{L^{2}}^{2}\) and \(J^{N}(\theta)=J_{\ell}^{N}(\theta)+\lambda\|\theta\|_{L^{2}}^{2}\), it is sufficient to prove (3.34)-(3.35) for \(J_{\ell}\) and \(J_{\ell}^{N}\), where we set
\[J_{\ell}^{N}(\theta):=\int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta}(x),y)\, d\mu_{0}^{N}(x,y)\]
for every \(\theta\in L^{2}\) and for every \(N\geq 1\). We first observe that, for every \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) such that \(\|\theta\|_{L^{2}}\leq\rho\), from Proposition 3.3 it follows that \(\operatorname{supp}(\mu_{t}),\operatorname{supp}(\mu_{t}^{N})\subset B_{\bar{R}}(0)\) for every \(t\in[0,T]\), for some \(\bar{R}>0\). Then, denoting with \(t\mapsto\mu_{t}^{N}\) and \(t\mapsto\mu_{t}\) the solutions of the continuity equation (3.3) driven by the control \(\theta\) and with initial datum, respectively, \(\mu_{0}^{N}\) and \(\mu_{0}\), we compute
\[\begin{split}|J_{\ell}^{N}(\theta)-J_{\ell}(\theta)|& =\left|\int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta}(x),y)\, \big{(}d\mu_{0}^{N}-d\mu_{0}\big{)}(x,y)\right|=\left|\int_{\mathbb{R}^{2d}} \ell(x,y)\,\big{(}d\mu_{T}^{N}-d\mu_{T}\big{)}(x,y)\right|\\ &\leq\bar{L}_{1}\bar{L}_{2}W_{1}(\mu_{0}^{N},\mu_{0}),\end{split} \tag{3.36}\]
where we have used (3.2) and Proposition 3.2 in the second identity, and we have indicated with \(\bar{L}_{1}\) the Lipschitz constant of \(\ell\) on \(B_{\bar{R}}(0)\), while \(\bar{L}_{2}\) descends from the continuous dependence of solutions of (3.3) on the initial datum (see Proposition 3.3). We insist on the fact that both \(\bar{L}_{1},\bar{L}_{2}\) depend on \(\rho\), i.e., the upper bound on the \(L^{2}\)-norm of the controls.
We now address the uniform convergence of \(\nabla_{\theta}J_{\ell}^{N}\) to \(\nabla_{\theta}J_{\ell}\) on bounded sets of \(L^{2}\). As before, let us consider an admissible control \(\theta\) such that \(\|\theta\|_{L^{2}}\leq\rho\). Hence, using the representation provided in (3.9), for a.e. \(t\in[0,T]\) we have:
\[\begin{split}\big{|}\nabla_{\theta}J_{\ell}^{N}(\theta)[t]& -\nabla_{\theta}J_{\ell}(\theta)[t]\big{|}=\\ &\left|\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^{\top} \big{(}t,\Phi_{(0,t)}^{\theta}(x),\theta(t)\big{)}\cdot\mathcal{R}_{(t,T)}^{\theta}(x)^{\top}\cdot\nabla_{x}\ell^{\top}\big{(}\Phi_{(0,T)}^{\theta}(x),y\big{)}\,\big{(}d\mu_{0}^{N}-d\mu_{0}\big{)}(x,y)\right|.\end{split} \tag{3.37}\]
In order to prove uniform convergence in the \(L^{2}\) norm, we have to show that the integrand is Lipschitz-continuous in \((x,y)\) for a.e. \(t\in[0,T]\), where the Lipschitz constant has to be \(L^{2}\)-integrable as a function of the \(t\) variable. First of all, combining Assumption 2\(-(v)\) and Lemma A.2, we can prove that there exist constants \(C_{1},\bar{L}_{3}>0\) (depending on \(\rho\)) such that
\[\begin{split}|\nabla_{\theta}\mathcal{F}(t,\Phi^{\theta}_{(0,t)} (x),\theta(t))|\leq& C_{1}(1+|\theta(t)|),\\ |\nabla_{\theta}\mathcal{F}(t,\Phi^{\theta}_{(0,t)}(x_{1}), \theta(t))-\nabla_{\theta}\mathcal{F}(t,\Phi^{\theta}_{(0,t)}(x_{2}),\theta( t))|\leq&\bar{L}_{3}\bar{L}_{2}(1+|\theta(t)|)|x_{1}-x_{2}|\end{split} \tag{3.38}\]
for a.e. \(t\in[0,T]\). We recall that the quantity \(\bar{L}_{2}>0\) (that already appeared in (3.36)) represents the Lipschitz constant of the flow \(\Phi_{(0,t)}\) with respect to the initial datum. Moreover, from Proposition A.7, it descends that
\[\begin{split}|\mathcal{R}^{\theta}_{(t,T)}(x)|\leq& C_{2},\\ |\mathcal{R}^{\theta}_{(t,T)}(x_{1})-\mathcal{R}^{\theta}_{(t,T)}( x_{2})|\leq&\bar{L}_{4}|x_{1}-x_{2}|\end{split} \tag{3.39}\]
for every \(t\in[0,T]\), where the constants \(C_{2},\bar{L}_{4}\) both depend on \(\rho\). Finally, owing to Assumption 3 and Proposition 2.1, we deduce
\[\begin{split}|\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x),y)|\leq& C_{3},\\ |\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x_{1}),y_{1})-\nabla_{x} \ell(\Phi^{\theta}_{(0,T)}(x_{2}),y_{2})|\leq&\bar{L}_{5}(\bar{L} _{2}|x_{1}-x_{2}|+|y_{1}-y_{2}|)\end{split} \tag{3.40}\]
for every \(x,y\in B_{R}(0)\), where the constants \(C_{3},\bar{L}_{2}\) and the Lipschitz constant \(\bar{L}_{5}\) of \(\nabla_{x}\ell\) depend, once again, on \(\rho\). Combining (3.38), (3.39) and (3.40), we obtain that there exists a constant \(\tilde{L}_{\rho}>0\) such that
\[|\nabla_{\theta}J_{\ell}^{N}(\theta)[t]-\nabla_{\theta}J_{\ell}(\theta)[t]|\leq\tilde{L}_{\rho}(1+|\theta( t)|)W_{1}(\mu_{0}^{N},\mu_{0}),\]
for a.e. \(t\in[0,T]\). Observing that the right-hand side is \(L^{2}\)-integrable in \(t\), the previous inequality yields
\[\|\nabla_{\theta}J_{\ell}^{N}-\nabla_{\theta}J_{\ell}\|_{L^{2}}\leq\tilde{L}_{\rho}(1+\rho) W_{1}(\mu_{0}^{N},\mu_{0}),\]
and this concludes the proof.
In the next result we provide an estimate of the _generalization error_ in terms of the distance \(W_{1}(\mu_{0}^{N},\mu_{0})\). In this case, the important assumption is that there exists a sequence \((\theta^{\star,N})_{N\geq 1}\) of local minimizers of the functionals \((J^{N})_{N\geq 1}\), and that it is contained in a region where \((J^{N})_{N\geq 1}\) are uniformly strongly convex.
**Theorem 3.10**.: _Under the same notations and hypotheses as in Proposition 3.9, let us further assume that the functional \(J\) admits a local minimizer \(\theta^{\star}\) and, similarly, that, for every \(N\geq 1\), \(\theta^{\star,N}\) is a local minimizer for \(J^{N}\). Moreover, we require that there exist \(\bar{N}\in\mathbb{N}\) and a radius \(\rho>0\) such that, for every \(N\geq\bar{N}\), \(\theta^{\star,N}\in B_{\rho}(\theta^{\star})\) and the functional \(J^{N}\) is \(\eta\)-strongly convex in \(B_{\rho}(\theta^{\star})\), with \(\eta>0\). Then, there exists a constant \(C>0\) such that, for every \(N\geq\bar{N}\), we have_
\[\bigg{|}\int_{\mathbb{R}^{2d}}\ell(x,y)\,d\mu_{T}^{\theta^{\star,N}}(x,y)- \int_{\mathbb{R}^{2d}}\ell(x,y)\,d\mu_{T}^{\theta^{\star}}(x,y)\bigg{|}\leq C \left(W_{1}(\mu_{0}^{N},\mu_{0})+\frac{1}{\sqrt{\eta}}\sqrt{W_{1}(\mu_{0}^{N}, \mu_{0})}\right). \tag{3.41}\]
Proof.: According to our assumptions, the control \(\theta^{\star,N}\in B_{\rho}(\theta^{\star})\) is a local minimizer for \(J^{N}\), and, being \(J^{N}\) strongly convex on \(B_{\rho}(\theta^{\star})\) for \(N\geq\bar{N}\), we deduce that \(\{\theta^{\star,N}\}=\arg\min_{B_{\rho}(\theta^{\star})}J^{N}\). Furthermore, from the \(\eta\)-strong convexity of \(J^{N}\), it follows that for every \(\theta_{1},\theta_{2}\in B_{\rho}(\theta^{\star})\), it holds
\[\langle\nabla_{\theta}J^{N}(\theta_{1})-\nabla_{\theta}J^{N}(\theta_{2}),\, \theta_{1}-\theta_{2}\rangle\geq\eta||\theta_{1}-\theta_{2}||_{L^{2}}^{2}.\]
According to Proposition 3.9, we can pass to the limit in the latter and deduce that
\[\langle\nabla_{\theta}J(\theta_{1})-\nabla_{\theta}J(\theta_{2}),\,\theta_{1}- \theta_{2}\rangle\geq\eta||\theta_{1}-\theta_{2}||_{L^{2}}^{2}\]
for every \(\theta_{1},\theta_{2}\in B_{\rho}(\theta^{\star})\). Hence, \(J\) is \(\eta\)-strongly convex in \(B_{\rho}(\theta^{\star})\) as well, and \(\{\theta^{\star}\}=\arg\min_{B_{\rho}(\theta^{\star})}J\). Therefore, from the \(\eta\)-strong convexity of \(J^{N}\) and \(J\), we obtain
\[J^{N}(\theta^{\star})-J^{N}(\theta^{\star,N})\geq\frac{\eta}{2}|| \theta^{\star,N}-\theta^{\star}||_{L^{2}}^{2},\] \[J(\theta^{\star,N})-J(\theta^{\star})\geq\frac{\eta}{2}|| \theta^{\star,N}-\theta^{\star}||_{L^{2}}^{2}.\]
Summing the last two inequalities, we deduce that
\[\eta||\theta^{\star,N}-\theta^{\star}||_{L^{2}}^{2}\leq\left(J^{N}(\theta^{\star})- J(\theta^{\star})\right)+\left(J(\theta^{\star,N})-J^{N}(\theta^{\star,N})\right) \leq 2C_{1}W_{1}(\mu_{0}^{N},\mu_{0}), \tag{3.42}\]
where the second inequality follows from the local uniform convergence of Proposition 3.9. We are now in a position to derive a bound on the generalization error:
\[\begin{split}\left|\int_{\mathbb{R}^{2d}}\ell(x,y)\left(d\mu_{T}^{ \theta^{*,N}}-d\mu_{T}^{\theta^{*}}\right)(x,y)\right|&=\left| \int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta^{*,N}}(x),y)\,d\mu_{0}^{N}(x,y) -\int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta^{*}}(x),y)\,d\mu_{0}(x,y) \right|\\ &\leq\int_{\mathbb{R}^{2d}}\left|\ell(\Phi_{(0,T)}^{\theta^{*,N}}( x),y)-\ell(\Phi_{(0,T)}^{\theta^{*}}(x),y)\right|\,d\mu_{0}^{N}(x,y)\\ &\quad+\left|\int_{\mathbb{R}^{2d}}\ell(\Phi_{(0,T)}^{\theta^{*}} (x),y)\big{(}d\mu_{0}^{N}(x,y)-d\mu_{0}(x,y)\big{)}\right|\\ &\leq\bar{L}\sup_{x\in\mathrm{supp}(\mu_{0}^{N})}\left|\Phi_{(0, T)}^{\theta^{*,N}}(x)-\Phi_{(0,T)}^{\theta^{*}}(x)\right|+\bar{L}_{R}W_{1}( \mu_{0}^{N},\mu_{0}),\end{split} \tag{3.43}\]
where \(\bar{L}\) and \(\bar{L}_{R}\) are constants coming from Assumption 3 and Proposition 2.1. Then, we combine Proposition 2.1 with the estimate in (3.42), in order to obtain
\[\sup_{x\in\mathrm{supp}(\mu_{0}^{N})}\left|\Phi_{(0,T)}^{\theta^{*,N}}(x)- \Phi_{(0,T)}^{\theta^{*}}(x)\right|\leq C_{2}\|\theta^{*,N}-\theta^{*}\|_{L^{2 }}\leq C_{2}\sqrt{\frac{2C_{1}}{\eta}W_{1}(\mu_{0}^{N},\mu_{0})}.\]
Finally, from the last inequality and (3.43), we deduce (3.41).
**Remark 3.4**.: _Since the functional \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) defined in (3.6) is continuous (and, in particular, lower semi-continuous) with respect to the strong topology of \(L^{2}\), the locally uniform convergence of the functionals \(J^{N}\) to \(J\) (see Proposition 3.9) implies that \(J^{N}\) is \(\Gamma\)-converging to \(J\) with respect to the strong topology of \(L^{2}\). However, this fact is of little use, since the functionals \(J,J^{N}\) are not strongly coercive. On the other hand, if we equip \(L^{2}\) with the weak topology, in general the functional \(J\) is not lower semi-continuous. In our framework, the only circumstance where one can hope for \(\Gamma\)-convergence with respect to the weak topology corresponds to the highly-regularized scenario, i.e., when the parameter \(\lambda>0\) is sufficiently large. Therefore, in the situations of practical interest when \(\lambda\) is small, we cannot rely on this tool, and the crucial aspect is that the dynamics (2.1) is nonlinear with respect to the control variable. Indeed, in the case of affine-control systems considered in [44], it is possible to establish \(\Gamma\)-convergence results in the \(L^{2}\)-weak topology (see [43] for an application to diffeomorphisms approximation). Finally, we report that in [46], in order to obtain the \(L^{2}\)-strong equi-coercivity of the functionals, the authors introduced in the cost the \(H^{1}\)-seminorm of the controls._
### Convex regime and previous result
In order to conclude our mean-field analysis, we now compare our results with the ones obtained in the similar framework of [7], where the regularization parameter \(\lambda\) was assumed to be _sufficiently large_, leading to a convex regime in the sublevel sets (see Remark 3.1). We recall below the main results presented in [7].
**Theorem 3.11**.: _Given \(T,R,R_{T}>0\), and an initial datum \(\mu_{0}\in\mathcal{P}_{c}(\mathbb{R}^{2d})\) with \(\mathrm{supp}(\mu_{0})\subset B_{R}(0)\), let us consider a terminal condition \(\psi_{T}:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) such that \(\mathrm{supp}(\psi_{T})\subset B_{R_{T}}(0)\) and \(\psi_{T}(x,y)=\ell(x,y)\ \forall x,y\in B_{R}(0)\). Let \(\mathcal{F}\) satisfy [7, Assumptions 1-2] and \(\ell\in C^{2}(\mathbb{R}^{d}\times\mathbb{R}^{d},\mathbb{R})\). Assume further that \(\lambda>0\) is large enough. Then, there exists a triple \((\mu^{*},\theta^{*},\psi^{*})\in\mathcal{C}([0,T],\mathcal{P}_{c}(\mathbb{R}^ {2d}))\times Lip([0,T],\mathbb{R}^{m})\times\mathcal{C}^{1}([0,T],\mathcal{C} _{c}^{2}(\mathbb{R}^{2d}))\) solution of_
\[\begin{cases}\partial_{t}\mu_{t}^{*}(x,y)+\nabla_{x}\cdot(\mathcal{F}(t,x, \theta^{*}(t))\mu_{t}^{*}(x,y))=0,&\mu_{t}^{*}|_{t=0}(x,y)=\mu_{0}(x,y),\\ \partial_{t}\psi_{t}^{*}(x,y)+\nabla_{x}\psi_{t}^{*}(x,y)\cdot\mathcal{F}(t,x,\theta^{*}(t))=0,&\psi_{t}^{*}|_{t=T}(x,y)=\ell(x,y),\\ \theta^{*^{\top}}(t)=-\frac{1}{2\lambda}\int_{\mathbb{R}^{2d}}\nabla_{x}\psi_{t }^{*}(x,y)\cdot\nabla_{\theta}\mathcal{F}(t,x,\theta^{*}(t))\,d\mu_{t}^{*}(x,y ),\end{cases} \tag{3.44}\]
_where \(\psi^{*}\in\mathcal{C}^{1}([0,T],\mathcal{C}_{c}^{2}(\mathbb{R}^{2d}))\) is understood in characteristic form. Moreover, the control solution \(\theta^{*}\) is unique in a ball \(\Gamma_{C}\subset L^{2}([0,T],\mathbb{R}^{m})\) and depends continuously on the initial datum \(\mu_{0}\)._
We observe that the condition that \(\lambda>0\) is large enough is crucial to obtain local convexity of the cost functional and, consequently, existence and uniqueness of the solution. However, in the present paper we have not made any assumption on the magnitude of \(\lambda\), hence, as already noticed in Remark 3.1, we might end up in a non-convex regime. Nevertheless, in Proposition 3.12 we show that, when \(\lambda\) is sufficiently large, the previous approach and the current one are "equivalent".
**Proposition 3.12**.: _Under the same hypotheses as in Theorem 3.11, let \(J:L^{2}([0,T],\mathbb{R}^{m})\to\mathbb{R}\) be the functional defined in (3.6). Then, \(\theta^{*}\) satisfies (3.44) if and only if it is a critical point for \(J\)._
Proof.: According to Lemma 3.4, the gradient of the functional \(J\) at \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) is defined for a.e. \(t\in[0,T]\) as
\[\nabla_{\theta}J(\theta)[t]=\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^{ \top}\big{(}t,\Phi^{\theta}_{(0,t)}(x),\theta(t)\big{)}\cdot\mathcal{R}^{ \theta}_{(t,T)}(x)^{\top}\cdot\nabla_{x}\ell^{\top}\big{(}\Phi^{\theta}_{(0,T )}(x),y\big{)}\,d\mu_{0}(x,y)+2\lambda\theta(t).\]
Hence, if we set the previous expression equal to zero, we obtain the characterization of the critical point
\[\theta(t)=-\frac{1}{2\lambda}\int_{\mathbb{R}^{2d}}\nabla_{\theta}\mathcal{F}^ {\top}\big{(}t,\Phi^{\theta}_{(0,t)}(x),\theta(t)\big{)}\cdot\mathcal{R}^{ \theta}_{(t,T)}(x)^{\top}\cdot\nabla_{x}\ell^{\top}\big{(}\Phi^{\theta}_{(0,T )}(x),y\big{)}\,d\mu_{0}(x,y) \tag{3.45}\]
for a.e. \(t\in[0,T]\). On the other hand, according to Theorem 3.11, the optimal \(\theta\) satisfies for a.e. \(t\in[0,T]\) the following
\[\theta(t) =-\frac{1}{2\lambda}\int_{\mathbb{R}^{2d}}\big{(}\nabla_{x}\psi_ {t}(x,y)\cdot\nabla_{\theta}\mathcal{F}(t,x,\theta(t))\big{)}^{\top}\,d\mu_{t} (x,y) \tag{3.46}\] \[=-\frac{1}{2\lambda}\int_{\mathbb{R}^{2d}}\nabla_{\theta} \mathcal{F}^{\top}(t,\Phi^{\theta}_{(0,t)}(x),\theta(t))\cdot\nabla_{x}\psi_{ t}^{\top}(\Phi^{\theta}_{(0,t)}(x),y)\,d\mu_{0}(x,y).\]
Hence, to conclude that \(\nabla_{\theta}J=0\) is equivalent to the condition stated in Theorem 3.11, we are left to show that
\[\mathcal{R}^{\theta}_{(t,T)}(x)^{\top}\cdot\nabla_{x}\ell^{\top}\big{(}\Phi^ {\theta}_{(0,T)}(x),y\big{)}=\nabla_{x}\psi_{t}^{\top}(\Phi^{\theta}_{(0,t)}( x),y), \tag{3.47}\]
where the operator \(\mathcal{R}^{\theta}_{(t,T)}(x)\) is defined as the solution of (3.10). First of all, we recall that \((t,x,y)\mapsto\psi(t,\Phi^{\theta}_{(0,t)}(x),y)\) is defined as the characteristic solution of the second equation in (3.44) and, as such, it satisfies
\[\psi_{t}(x,y)=\ell(\Phi^{\theta}_{(t,T)}(x),y),\]
for every \(t\in[0,T]\) and for every \(x,y\in B_{R_{T}}(0)\). By taking the gradient with respect to \(x\), we obtain that
\[\nabla_{x}\psi_{t}(x,y)=\nabla_{x}\ell(\Phi^{\theta}_{(t,T)}(x),y)\cdot \nabla_{x}\Phi^{\theta}_{(t,T)}\big{|}_{x},\]
for all \(x,y\in B_{R_{T}}(0)\). Hence, using (3.11), we deduce that
\[\nabla_{x}\psi_{t}(\Phi^{\theta}_{(0,t)}(x),y)=\nabla_{x}\ell(\Phi^{\theta}_{(t,T)}\circ\Phi^{\theta}_{(0,t)}(x),y)\cdot\nabla_{x}\Phi^{\theta}_{(t,T)} \big{|}_{\Phi^{\theta}_{(0,t)}(x)}=\nabla_{x}\ell(\Phi^{\theta}_{(0,T)}(x),y) \cdot\mathcal{R}^{\theta}_{(t,T)}(x),\]
which proves (3.47).
## 4 Algorithm
In this section, we present our training procedure, which is derived from the necessary optimality conditions related to the minimizing movement scheme (see (3.29)). Since the mean-field optimal control problem as presented in (3.6) is numerically intractable (especially in high dimension), in practice we always consider the functional corresponding to the finite particles approximation (see (3.32)). For its resolution, we employ an algorithm belonging to the family of shooting methods, which consists of the forward evolution of the trajectories, the backward evolution of the adjoint variables, and the update of the controls. Variants of this method have already been employed in different works, e.g. [33, 26, 6, 7, 14], under the name of _method of successive approximations_, and they have been proven to be an alternative way of performing the training of NeurODEs for a range of tasks, including high-dimensional problems.
In our case, we start with a random guess for the control parameter \(\theta_{0}\in L^{2}([0,T],\mathbb{R}^{m})\). Subsequently, we solve the necessary optimality conditions specified in equation (3.29) for a suitable \(\tau>0\) to obtain an updated control parameter \(\theta_{1}\). More precisely, since the last identity in (3.29) has the form \(\theta_{1}=\Lambda^{\tau}_{\theta_{0}}(\theta_{1})\), the computation of \(\theta_{1}\) is performed via fixed-point iterations of the mapping \(\Lambda^{\tau}_{\theta_{0}}\), which is defined as in (3.30). In this regard, we recall that \(\Lambda^{\tau}_{\theta_{0}}\) is a contraction if \(\tau\) is small enough. The scheme that we implemented is presented in Algorithm 1.
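To make the scheme concrete, we sketch below one minimizing-movement update in the particle formulation of Section 4, written in Python. The function handles (`F`, `grad_x_F`, `grad_theta_F`, `grad_x_loss`) are hypothetical placeholders for the dynamics, its Jacobians and the gradient of the terminal loss; the control update implements the proximal step obtained from \(\nabla_{\theta}J^{\tau}_{\theta_{k}}(\theta_{k+1})=0\), with the sign convention \(p_{T}=\nabla_{x}\ell\) of (3.29), iterated as a fixed point of \(\Lambda^{\tau}_{\theta_{k}}\). It is a minimal sketch, not the exact implementation of Algorithm 1.

```python
import numpy as np

def shooting_step(theta_k, X0, Y0, F, grad_x_F, grad_theta_F, grad_x_loss,
                  dt, lam, tau, n_fixed_point=10):
    """One minimizing-movement update theta_k -> theta_{k+1}, cf. (3.29)-(3.30).

    theta_k : (n_steps, m) discretized control
    X0, Y0  : (N, d) particle positions and targets
    F(t, X, th)            -> (N, d)    vector field
    grad_x_F(t, X, th)     -> (N, d, d) Jacobian dF_i/dx_j
    grad_theta_F(t, X, th) -> (N, d, m) Jacobian dF_i/dtheta_a
    grad_x_loss(X, Y)      -> (N, d)    gradient of ell in its first argument
    """
    n_steps = theta_k.shape[0]
    N, d = X0.shape
    theta = theta_k.copy()
    for _ in range(n_fixed_point):        # fixed-point iterations of Lambda
        # forward pass: explicit Euler for all N particles simultaneously
        X = np.empty((n_steps + 1, N, d))
        X[0] = X0
        for j in range(n_steps):
            X[j + 1] = X[j] + dt * F(j * dt, X[j], theta[j])
        # backward pass: adjoint p integrated backwards, p_T = grad_x ell
        P = np.empty_like(X)
        P[-1] = grad_x_loss(X[-1], Y0)
        for j in range(n_steps - 1, -1, -1):
            P[j] = P[j + 1] + dt * np.einsum(
                'ni,nij->nj', P[j + 1], grad_x_F(j * dt, X[j], theta[j]))
        # proximal update of the control: average over the empirical measure
        for j in range(n_steps):
            g = np.einsum('nia,ni->a',
                          grad_theta_F(j * dt, X[j], theta[j]), P[j]) / N
            theta[j] = (theta_k[j] - tau * g) / (1.0 + 2.0 * lam * tau)
    return theta
```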
**Remark 4.1**.: _It is interesting to observe that, in the highly-regularized regime considered in [7], the authors managed to obtain a contractive map directly from the necessary conditions for optimality, and they did not need to consider the minimizing movements scheme. This is rather natural since, when the parameter \(\lambda>0\) that tunes the \(L^{2}\)-penalization is large enough, the functional associated to the optimal control problem is strongly convex in the sublevel set corresponding to the control \(\theta\equiv 0\), as discussed in Remark 3.1. However, as reported in [7], determining the appropriate value for \(\lambda\) in each application can be challenging. On the other hand, from the practitioners' perspective, dealing with high regularization is not always desirable, since the machine learning task that the system should learn is encoded in the final-time cost. The authors highlighted the complexity involved in selecting a regularization parameter that is large enough to achieve contractivity, while ensuring that the resulting controls are not excessively small (due to high regularization) and of little use._
These considerations motivated us to consider a scenario where the regularization parameter does not need to be set sufficiently large. From a numerical perspective, the parameter \(\tau\) in Equation (3.29) (coming from the minimizing movement scheme) plays the role of the _learning rate_, and it provides the missing amount of convexity, addressing the stability issues related to the resolution of optimal control problems in the non-convex regime. These kinds of instabilities were already known in the Soviet literature of numerical optimal control (see the review paper [14]), and various solutions have been proposed to address them. For example, in [40] the authors proposed an iterative method based on the Maximum Principle and on an augmented Hamiltonian, with an approach that is somehow reminiscent of minimizing movements. More recently, in the framework of NeurODEs, another stabilization strategy was proposed in [33]; it is different from ours since it enforces similarity between the evolution of state and co-state variables after the control update. Implicitly, the approach of [33] leads to a penalization of significant changes in the controls. In our approach, on the other hand, this penalization is more explicit, and it is enforced via the memory term of the minimizing movement scheme. To the best of our knowledge, this is the first instance where a regularization based on the minimizing movement scheme is employed for training NeurODEs.
**Remark 4.2**.: _Although we formulate and analyze theoretically our problem within the mean-field framework, it is not advantageous to numerically solve the forward equation as a partial differential equation. In [7], various numerical methods for solving PDEs were employed and compared. However, these methods encounter limitations when applied to high-dimensional data, which is often the case in Machine Learning scenarios. Therefore, in this study, we employ a particle method to solve both the forward partial differential equation and the backward dynamics. This particle-based approach involves reformulating the PDE as a system of ordinary differential equations in which particles represent mathematical collocation points that discretize the continuous fields. By employing this particle method, we address the challenges associated with high-dimensional data, enabling efficient numerical solutions for the forward and backward dynamics._
To conclude this section, we briefly present the forward and the backward systems that are solved during the execution of the method. For the sake of simplicity, we will focus on the case of an encoder. The objective is to minimize the following function:
\[J(\theta)=\frac{1}{N}\sum_{i=1}^{N}\ell(X_{\mathcal{A}_{r}}^{i}(T),Y^{i}(0))+ \frac{\lambda}{2}\left\|\theta\right\|_{2}^{2}, \tag{4.1}\]
where \(\mathcal{A}_{r}\) denotes the set of active indices in the bottleneck, i.e. at \(t_{r}=T\), of the state-vector \(X^{i}(T)\). The latter denotes the encoded output at time \(T\) for the \(i\)-th particle, while \(Y^{i}(0)\) represents the corresponding target at time \(0\) (which we recall is the same at time \(T\), being \(\dot{Y}^{i}\equiv 0\) for every \(i=1,\ldots,N\)). For the \(i\)-th particle and every \(t\) such that
\(t_{j}\leq t\leq t_{j+1}\), the forward dynamics can be described as follows:
\[\begin{cases}\dot{X}_{\mathcal{I}_{j}}^{i}(t)=0,\\ \dot{X}_{\mathcal{A}_{j}}^{i}(t)=\mathcal{G}_{j}\big{(}t,X_{\mathcal{A}_{j}}^{i} (t),\theta(t)\big{)},\end{cases} \tag{4.2}\]
subject to the initial condition \(X^{i}(0)=X_{\mathcal{A}_{0}}^{i}(0)=X_{0}^{i}\in\mathbb{R}^{d}\). In the same interval \(t_{j}\leq t\leq t_{j+1}\), the backward dynamics reads
\[\begin{cases}\dot{P}_{\mathcal{I}_{j}}^{i}(t)=0,\\ \dot{P}_{\mathcal{A}_{j}}^{i}(t)=-P_{\mathcal{A}_{j}}^{i}(t)\cdot\nabla_{x_{ \mathcal{A}_{j}}}\mathcal{G}_{j}\big{(}t,X_{\mathcal{A}_{j}}^{i}(t),\theta(t) \big{)},\end{cases} \tag{4.3}\]
where the final co-state is
\[P_{k}^{i}(T)=\begin{cases}-\partial_{x_{k}}\ell(X_{\mathcal{A}_{r}}^{i}(T),Y^{i}(0)), &\text{if }k\in\mathcal{A}_{r},\\ 0,&\text{if }k\notin\mathcal{A}_{r}.\end{cases}\]
We notice that, for \(t_{j}\leq t\leq t_{j+1}\) and every \(i\in\{1,\dots,N\}\), we have
\[\mathcal{F}(t,X^{i}(t),\theta(t))=\mathcal{F}(t,(X_{\mathcal{A}_{j}}^{i},X_{ \mathcal{I}_{j}}^{i})(t),\theta(t))=\begin{pmatrix}\mathcal{G}_{j}(t,X_{ \mathcal{A}_{j}}^{i}(t),\theta(t))\\ 0\end{pmatrix}, \tag{4.4}\]
and, consequently, we deduce that
\[\nabla_{x}\mathcal{F}(t,X^{i}(t),\theta(t))=\begin{pmatrix}\nabla_{x_{ \mathcal{A}_{j}}}\mathcal{G}_{j}(t,X_{\mathcal{A}_{j}}^{i}(t),\theta(t))&0\\ 0&0\end{pmatrix}, \tag{4.5}\]
where the null blocks are due to the fact that, for \(t_{j}\leq t\leq t_{j+1}\), \(\nabla_{x}\mathcal{F}_{k}(t,x,\theta)=0\) if \(k\in\mathcal{I}_{j}\), and \(\nabla_{x_{\mathcal{I}_{j}}}\mathcal{G}_{j}(t,x,\theta)=0\). In the case of an Autoencoder, the structure of the forward and backward dynamics is analogous.
**Remark 4.3**.: _From the calculations reported above it is evident that the matrices and the vectors involved in our forward and backward dynamics are quite sparse (see (4.5) and (4.4)), and that the state and co-state variables contain components that are constant in many sub-intervals (see (4.2) and (4.3)). Hence, in the practical implementation, especially when dealing with an Autoencoder, we do not actually need to double the original state variables and to introduce the shadow ones, but we can simply overwrite those values and, in this way, we obtain a more memory-efficient code. A similar argument holds as well for the co-state variables. Moreover, we expect the control variable \(\theta\) to have several null components during the evolution. This relates to Remark 2.1 and descends from the fact that, even though in our model \(\theta\in\mathbb{R}^{m}\) for every \(t\in[0,T]\), in the internal sub-intervals \([t_{j},t_{j+1}]\) only few of its components are influencing the dynamics. Hence, owing to the \(L^{2}\)-squared regularization on \(\theta\), it results that, if in an interval \([t_{j},t_{j+1}]\) a certain component of \(\theta\) is not affecting the velocity, then it is convenient to keep it null._
## 5 Numerical Experiments
In this section, we present a series of numerical examples to illustrate the practical application of our approach. We consider datasets of varying dimensions, ranging from low-dimensional data to a more typical Machine Learning dataset such as MNIST. Additionally, we provide justifications and insights into some of the choices made in our theoretical analysis. For instance, we examine the process of choosing the components to be deactivated during the modeling phase, and we investigate whether this hand-picked selection can lead to any issues or incorrect results. In this regard, in our first experiment concerning a classification task, we demonstrate that this a priori choice does not pose any problem, as the network effectively learns to separate the dataset into two classes before accurately classifying them. Furthermore, as we already pointed out, we have extended some of the assumptions from [7] to accommodate the use of a smooth approximation of the ReLU function. This extension is not merely a theoretical exercise, since in our second numerical example we show how valuable it is to leverage unbounded activation functions. While both of these examples involve low-dimensional data and may not be representative of typical tasks for an Autoencoder architecture, we address this limitation in our third experiment by performing a reconstruction task on the MNIST dataset. Lastly, we present noteworthy results obtained from analyzing the performance on MNIST, highlighting specific behaviors that warrant further investigation in future research.
The layers of the networks that we employ in all our experiments have the form:
\[\mathbb{R}^{d}\ni X=\big{(}X_{\mathcal{A}_{j}},X_{\mathcal{I}_{j}}\big{)}^{ \top}\mapsto\phi_{j}^{W,b}(X)=\big{(}X_{\mathcal{A}_{j}},X_{\mathcal{I}_{j}} \big{)}^{\top}+h\Big{(}\sigma\left(W_{\mathcal{A}_{j}}\cdot X_{\mathcal{A}_{j }}+b_{\mathcal{A}_{j}}\right),0\Big{)}^{\top},\]
where \(\mathcal{A}_{j},\mathcal{I}_{j}\) are, respectively, the sets of active and inactive components at the \(j\)-th layer, \(b_{\mathcal{A}_{j}}\) are the components of \(b\in\mathbb{R}^{d}\) belonging to \(\mathcal{A}_{j}\), while \(W_{\mathcal{A}_{j}}\) is the square sub-matrix of \(W\in\mathbb{R}^{d\times d}\) corresponding to the active components. Finally, the activation function \(\sigma\) will be specified case by case.
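For concreteness, we provide below a minimal NumPy sketch of one such layer; the function name, the boolean-mask encoding of \(\mathcal{A}_{j}\), and the default activation are illustrative choices of ours, not part of the implementation described above. Stacking \(2r\) such layers with suitable masks reproduces the Autoencoder structure.

```python
import numpy as np

def residual_layer(X, W, b, active, h, sigma=np.tanh):
    """Explicit-Euler residual layer: X <- X + h * (sigma(W_A X_A + b_A), 0).

    X      : (d,) state vector
    W, b   : (d, d) weight matrix and (d,) bias of the full layer
    active : (d,) boolean mask encoding the set A_j of active components
    h      : step size dt of the time discretization
    """
    idx = np.where(active)[0]
    X_new = X.copy()                       # inactive components stay frozen
    X_new[idx] += h * sigma(W[np.ix_(idx, idx)] @ X[idx] + b[idx])
    return X_new
```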
### Bidimensional Classification
In our initial experiment, we concentrate on a bidimensional classification task that has been extensively described in [7]. Although this task deviates from the typical application of Autoencoders, where the objective is data reconstruction rather than classification, we believe it gives valuable insights into how our model works. The objective is to classify particles sampled from a standard Gaussian distribution in \(\mathbb{R}^{2}\) based on the sign of their first component. Given an initial data point \(x_{0}\in\mathbb{R}^{2}\), denoted by \(x_{0}[i]\) with \(i=1,2\) representing its \(i\)-th component, we assign a positive label \(+1\) to it if \(x_{0}[1]>0\), and a negative label \(-1\) otherwise. To incorporate the labels into the Autoencoder framework, we augment the labels to obtain a positive label \([1,0]\) and a negative one \([-1,0]\). In such a way, we obtain target vectors in \(\mathbb{R}^{2}\), i.e., with the same dimension as the input data-points in the first layer.
The considered architecture is an Autoencoder comprising twenty-one layers, corresponding to \(T=2\) and \(dt=0.05\). The first seven layers maintain a constant active dimension equal to \(2\), followed by seven layers of active dimension \(1\). Finally, the last seven layers, representing the prototype of a decoder, have again constant active dimension \(2\), restoring the initial one.
A sketch of the architecture is presented on the right side of Figure 4. We underline that we make use of the observation presented in Remark 4.3 to construct the implemented network, and we report that we employ the hyperbolic tangent as activation function.
The next step is to determine which components to deactivate, i.e., we have to choose the sets \(\mathcal{I}_{j}\) for \(j=1,\ldots,2r\): the natural choice is to deactivate the second component, since the information on which the classification is based is contained in the first component (the sign) of the input data-points. Since we use the memory-saving regime of Remark 4.3, we observe that, in the encoder, the particles are "projected" onto the \(x\)-axis, as their second component is deactivated and set equal to \(0\). Then, in the decoding phase, both components again have the possibility of evolving. This particular case is illustrated on the left side of Figure 4.
Now, let us consider a scenario where the network architecture remains the same, but instead of deactivating the second component, we turn off the first one. This has the effect of "projecting" the particles onto the \(y\)-axis in the encoding phase.
The results are presented in Figure 5, where an interesting effect emerges. In the initial phase (left), where the particles can evolve in the whole space \(\mathbb{R}^{2}\), the network is capable of rearranging the particles in order to separate them. More precisely, in this part, the relevant information for the classification (i.e., the sign of the first
Figure 4: Left: Classification task performed when the turned-off component is the natural one. Right: sketch of the AutoencODE architecture considered.
Figure 5: Left: Initial phase, i.e., separation of the data along the \(y\)-axis. Center: Encoding phase, i.e., only the second component is active. Right: Decoding phase and classification result after the “unnatural turn off”. Notice that, for a nice clustering of the classified data, we have increased the number of layers from \(20\) to \(40\). However, we report that the network accomplishes the task even if we use the same structure as in Figure 4.
component) is transferred to the second component, which will not be deactivated. Therefore, once the data-points are projected onto the \(y\)-axis in the bottleneck (middle), two distinct clusters are already formed, corresponding to the two classes of particles. Finally, when the full dimension is restored, the remaining task consists in moving these clusters towards the respective labels, as demonstrated in the plot on the right of Figure 5.
This numerical evidence confirms that our a priori choice (even when it is very unnatural) of the components to be deactivated does not affect the network's ability to learn and classify the data. Finally, while studying this low-dimensional numerical example, we test one of the assumptions that we made in the theoretical setting. In particular, we want to check if it is reasonable to assume that the cost landscape is convex around local minima, as assumed in Theorem 3.10. In Table 1, we report the smallest and highest eigenvalues of the Hessian matrix of the loss function recorded during the training process, i.e., starting from a random initial guess, until the convergence to an optimal solution.
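The computation behind Table 1 can be reproduced, for instance, with automatic differentiation; the following PyTorch sketch assumes a hypothetical callable `loss_fn` mapping the flattened control parameters to the scalar cost, and is a sketch rather than the exact script used in our experiments.

```python
import torch

def hessian_extreme_eigenvalues(loss_fn, theta):
    """Smallest and largest eigenvalues of the Hessian of loss_fn at theta."""
    H = torch.autograd.functional.hessian(loss_fn, theta)
    H = H.reshape(theta.numel(), theta.numel())
    eigvals = torch.linalg.eigvalsh(0.5 * (H + H.T))  # symmetrize for safety
    return eigvals[0].item(), eigvals[-1].item()
```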
### Parabola Reconstruction
In our second numerical experiment, we focus on the task of reconstructing a two-dimensional parabola. To achieve this, we sample points from the parabolic curve and we use them as the initial data for our network. The network architecture consists of a first block of seven layers with active dimension 2, followed by seven additional layers with active dimension 1. Together, these two blocks represent the encoding phase, in which the sets of active components are \(\mathcal{A}_{j}=\{0\}\) for \(j=7,\ldots,14\). Similarly as in the previous example, the points at the 7-th layer are "projected" onto the \(x\)-axis, and for the six subsequent layers they are constrained to stay in this subspace. After the 14-th layer, the original active dimension is restored, and the particles can move in the whole space \(\mathbb{R}^{2}\), aiming at reaching their original positions. Despite the low dimensionality of this task, it provides an interesting application that allows us to observe the distinct phases of our model, which are presented in Figure 6.
Notably, in the initial seven layers, the particles show quite tiny movements (top left of Figure 6). This is because the relevant information needed to reconstruct the position is encoded in the first component, which is kept active in the bottleneck. On the other hand, if in the encoder we chose to deactivate the first component instead of the second one, we would expect that the points need to move considerably before the projection takes place, as was the case in the previous classification task. During the second phase (top right of Figure 6), the particles separate along the \(x\)-axis, preparing for the final decoding phase, which proves to be the most challenging to learn (depicted in the
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Epochs & 0 & 80 & 160 & 240 & 320 & 400 & 480 & 560 & 640 & 720 \\ \hline Min Eigenvalue & -1.72e-2 & -1.19e-2 & -1.09e-2 & -8.10e-3 & -3.44e-3 & -6.13e-3 & 6.80e-4 & 7.11e-4 & 7.25e-4 & 7.33e-4 \\ Max Eigenvalue & 3.78e-2 & 2.84e-1 & 7.30e-1 & 9.34e-1 & 1.11 & 1.18 & 1.22 & 1.25 & 1.26 & 1.27 \\ \hline \end{tabular}
\end{table}
Table 1: Minimum and maximum eigenvalues of the Hessian matrix across epochs.
Figure 6: Top Left: Initial phase. Top Right: Encoding phase. Bottom Left: Decoding phase. Bottom Right: network’s reconstruction with alternative architecture.
bottom left of Figure 6). Based on our theoretical knowledge and the results from initial experiments, we attempt to improve the performance of the AutoencODE network by modifying its structure. One possible approach is to design the network in a way that allows more time for the particles to evolve during the decoding phase, while reducing the time spent in the initial and bottleneck phases. Indeed, we try to use 40 layers instead of 20, and most of the new ones are allocated in the decoding phase. The result is illustrated in the bottom right of Figure 6, where we observe that changing the network's structure has a significant positive impact on the reconstruction quality, leading to better results. This result is inspired by the heuristic observation that the particles "do not need to move" in the first two phases. On this point, a more theoretical analysis of the network's structure will be further discussed in the next paragraph, where we perform a sanity check, and we relate the need for extra layers to the Lipschitz constant of the trained network.
This experiment highlights an important observation regarding the choice of activation functions. Specifically, it becomes evident that certain bounded activation functions, such as the hyperbolic tangent, are inadequate for moving the particles back to their original positions during the decoding phase. The bounded nature of these activation functions limits the range of displacements they can produce, which can cause the points to get stuck at suboptimal positions and fail to reconstruct the parabolic curve accurately. To overcome this limitation and achieve a successful reconstruction, it is necessary to employ unbounded activation functions that allow for a wider range of values, in particular the well-known Leaky ReLU function. An advantage of our approach is that our theory permits the use of smooth approximations of well-known activation functions, such as the Leaky ReLU (2.4). Specifically, we employ the following smooth approximation of the Leaky ReLU function:
\[\sigma_{smooth}(x)=\alpha x+\frac{1}{s}\log\big{(}1+e^{sx}\big{)}, \tag{5.1}\]
where letting \(s\) tend to infinity ensures convergence to the original Leaky ReLU function. While alternative approximations are available, we employed (5.1) in our study. This observation emphasizes the importance of considering the characteristics and properties of activation functions when designing and training neural networks, and it motivates our goal in this work to encompass unbounded activation functions in our working assumptions.
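For reference, a numerically stable implementation of (5.1) reads as follows; the default values of \(\alpha\) and \(s\) are illustrative and not the ones used in our experiments.

```python
import numpy as np

def sigma_smooth(x, alpha=0.1, s=10.0):
    """Smooth approximation (5.1) of the Leaky ReLU.

    log(1 + exp(s*x)) is computed as logaddexp(0, s*x) to avoid overflow;
    as s -> infinity the function converges to alpha*x + max(0, x).
    """
    return alpha * x + np.logaddexp(0.0, s * x) / s
```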
### MNIST Reconstruction
In this experiment, we apply the AutoencODE architecture and our training method to the task of reconstructing images from the MNIST dataset. The MNIST dataset contains 70000 grayscale images of handwritten digits ranging from zero to nine. Each image has a size of \(28\times 28\) pixels and has been normalized. This dataset is commonly used as a benchmark for image classification tasks or for evaluating image recognition and reconstruction algorithms. However, our objective in this experiment is not to compare our reconstruction error with state-of-the-art results, but rather to demonstrate the applicability of our method to high-dimensional data, and to highlight interesting phenomena that we encounter. In general, when performing an autoencoder reconstruction task, the goal is to learn a lower-dimensional representation of the data that captures its essential features. On the other hand, determining the dimension of the lower-dimensional representation, often referred to as the _latent dimension_, requires setting a hyperparameter, i.e., the width of the bottleneck's layers, which might depend on the specific application.
We now discuss the architecture we employed and the choice we made for the latent dimension. Our network consists of twenty-three layers. The first ten layers serve as encoder, gradually reducing the dimension of the layers from the initial value \(d_{0}=784\) to a latent dimension of \(d_{r}=32\). This latent dimension is then kept in the bottleneck for three layers, and the last ten layers act as decoder: symmetrically to the encoder, they increase the width of the layers from 32 back to \(d_{2r}=784\). Finally, for each layer we employ a smooth version of the Leaky ReLU, see (5.1), as activation function. The architecture is visualized in Figure 7, while the achieved reconstruction results are presented in Figure 8. We observe that, once again, we made use of Remark 4.3 for the implementation of the AutoencODE-based model.
Latent dimensionality in the bottleneck:One of the first findings that we observe in our experiments pertains to the latent dimension of the network and to the intrinsic dimension of the dataset. The problem of determining the intrinsic dimension has been the object of previous studies such as [47, 16, 17], where it was estimated to be approximately equal to 13 in the case of the MNIST dataset. On this interesting topic, we also report the paper [32], where a maximum likelihood estimator was proposed and datasets of images were considered, and the recent contribution [35]. Finally, the model of the _hidden manifold_ has been formulated and studied in [23].
Notably, our network exhibits an interesting characteristic in which, starting from the initial guess of weights and biases initialized at 0, the training process automatically identifies an intrinsic dimensionality of 13. Namely, we observe that the latent vectors of dimension 32 corresponding to each image in the dataset are sparse vectors with 13 non-zero components, forming a consistent support across all latent vectors derived from the original images. To further analyze this phenomenon, we compute the means of all the latent vectors for each digit and we compare them,
as depicted in the left and middle of Figure 9. These mean vectors always have exactly the same support of dimension 13, and, interestingly, we observe that digits that share similar handwritten shapes, such as the numbers 4 and 9 or digits 3 and 5, actually have latent means that are close to each other. Additionally, we explore the generative capabilities of our network by allowing the latent means to evolve through the decoding phase, aiming to generate new images consistent with the mean vector. On the right of Figure 9, we present the output of the network when using a latent vector corresponding to the mean of all latent vectors representing digit 3.
This intriguing behavior of our network warrants further investigation into its ability to detect the intrinsic dimension of the input data, and into the exploration of its generative potential. Previous studies have demonstrated that the ability of neural networks to converge to simpler solutions is significantly influenced by the initial parameter values (see e.g. [15]). Indeed, in our case we have observed that this phenomenon only occurs when initializing the parameters with zeros. Moreover, it is worth mentioning that this behavior does not seem to appear in standard Autoencoders without residual connections.
Sanity check of the network's architecture.An advantage of interpreting neural networks as discrete approximations of dynamical systems is that we can make use of typical results on the numerical resolution of ODEs in
Figure 8: Reconstruction of some numbers achieved by the Autoencoder.
Figure 7: Architecture used for the MNIST reconstruction task. The inactive nodes are marked in green.
order to better analyze our results. Indeed, we notice that, according to well-known results, in order to solve a generic ODE we need to take as discretization step-size \(dt\) a value smaller than the inverse of the Lipschitz constant of the vector field driving the dynamics. We recall that the quantity \(dt\) is related to the number of layers of the network through the relation \(n_{\text{layers}}=\frac{T}{dt}\), where \(T\) is the right extreme of the evolution interval \([0,T]\).
In our case, we choose _a priori_ the amplitude of \(dt\), we train the network and, once we have computed \(\theta^{*}\), we can compare _a posteriori_ the discretization step-size chosen at the beginning with the quantity \(\Delta=\frac{1}{\mathrm{Lip}(\mathcal{F}(t,\cdot,\theta^{*}(t)))}\) for each time-node \(t\) and every datum \(x\). In Figure 10, we show the time discretization \(dt\) in orange and in blue the quantity \(\Delta\),
for the case of a wrongly constructed autoencoder (on the left) and the correct one (on the right). From these plots, we can perform a "sanity check" and make sure that the number of layers that we chose is sufficient to solve the task. Indeed, in the wrong autoencoder on the left, we see that in the last layer the quantity \(\Delta\) is smaller than \(dt\), and this violates the condition that guarantees the stability of the explicit Euler discretization.
Indeed, the introduction of two symmetric layers to the network (corresponding to the plot on the right of Figure 10) allows the network to satisfy everywhere the relation \(\Delta>dt\). Moreover, we also notice that during the encoding phase the inverse of the Lipschitz constant of \(\mathcal{F}\) is quite high, which means that the vector field does not need to move the points much. This suggests that we could get rid of some of the layers in the encoder and only keep the necessary ones, i.e., the ones in the decoder where \(\Delta\) is small and a finer discretization step-size is required. We report that this last observation is consistent with the results recently obtained in [11]. Finally, we also draw attention to the work [45], which shares a similar spirit with our experiments, since the Lipschitz constant of the layers is the main subject of investigation. In their study, the authors employ classical results on the numerical integration of ordinary differential equations in order to understand how to constrain the weights of the network with the aim of designing stable architectures. This approach leads to networks with non-expansive properties, which is highly advantageous for mitigating instabilities in various scenarios, such as testing adversarial examples [25], training generative adversarial networks [3], or solving inverse problems using deep learning.
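A possible implementation of this a posteriori check is sketched below: the local Lipschitz constant of the trained vector field is estimated layer by layer from finite differences over pairs of particles and compared with \(dt\). The handles `F` and `theta_star` are hypothetical stand-ins for the trained network and its optimal controls.

```python
import numpy as np

def stability_check(F, theta_star, X0, dt, rng=np.random.default_rng(0)):
    """Compare dt with Delta = 1/Lip(F(t_j, ., theta*)) at every layer."""
    X = X0.copy()
    n_steps = len(theta_star)
    deltas = []
    for j in range(n_steps):
        # crude lower bound on the Lipschitz constant from sampled pairs
        perm = rng.permutation(len(X))
        dx = np.linalg.norm(X - X[perm], axis=1)
        dF = np.linalg.norm(F(j * dt, X, theta_star[j])
                            - F(j * dt, X[perm], theta_star[j]), axis=1)
        keep = dx > 1e-12
        deltas.append(1.0 / np.max(dF[keep] / dx[keep]))
        # propagate the particles to the states of the next layer
        X = X + dt * F(j * dt, X, theta_star[j])
    return np.array(deltas) > dt      # True where the Euler step is stable
```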
Entropy across layers.We present our first experiments on the study of the information propagation within the network, where some intriguing results appear. This phenomenon is illustrated in Figure 11, where we examine the entropy across the layers after the network has been trained. We introduce two different measures of entropy, depicted in the two graphs of the figure. First, we consider the well-known Shannon entropy, denoted as \(H(E)\), which quantifies the information content of a discrete random variable \(E\), distributed according to a discrete
Figure 10: Left: wrong autoencoder detected with \(\Delta\). Right: correct version of the same autoencoder.
Figure 9: Left: Comparing two similar latent means. Center: again two similar latent means. Right: Output of the decoding of one latent mean.
probability measure \(p:\Omega\to[0,1]\) such that \(p(e)=\mathbb{P}(E=e)\). The Shannon entropy is computed as follows:
\[H(E)=\mathbb{E}\big{[}-\log(p(E))\big{]}=\sum_{e\in E}-p(e)\log(p(e)).\]
In our context, the random variable of interest is \(E=\sum_{j=1}^{N}\mathds{1}_{|X_{0}^{i}-X_{0}^{j}|\leq\epsilon}\), where \(X_{0}^{i}\) represents a generic image from the MNIST dataset. Additionally, we introduce another measure of entropy, denoted as \(\mathcal{E}\), which quantifies the probability that the dataset can be partitioned into ten clusters corresponding to the ten different digits. This quantity has been introduced in [21] and it is defined as
\[\mathcal{E}=\mathbb{P}\left(X\in\bigcup_{i=1}^{k}B_{\varepsilon}(X_{0}^{i}) \right),\]
where \(\varepsilon>0\) is a small radius, and \(X_{0}^{1},\ldots,X_{0}^{k}\) are samples from the dataset. Figure 11 suggests the existence of a distinct pattern in the variation of information entropy across the layers, which offers a hint for further investigations.
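Both quantities can be estimated directly from the particle states at a given layer. The following sketch uses our own (hypothetical) function names and a brute-force \(O(N^{2})\) pairwise-distance computation:

```python
import numpy as np

def shannon_entropy(X, eps):
    """Entropy H(E) of the neighbour-count variable E at radius eps."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    E = (D <= eps).sum(axis=1)          # eps-neighbours of each particle
    _, counts = np.unique(E, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def clustering_entropy(X, eps, k, rng=np.random.default_rng(0)):
    """Empirical probability that X lies in a union of k eps-balls."""
    centres = X[rng.choice(len(X), size=k, replace=False)]
    D = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=-1)
    return float((D.min(axis=1) <= eps).mean())
```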
Let us first focus on the Shannon entropy: as the layers' dimensionality decreases in the encoding phase, there is an expected decrease of entropy, reflecting the compression and reduction of information in the lower-dimensional representation. The bottleneck layer, where the dimension is kept constant, represents a critical point where the entropy reaches a minimum. This indicates that the information content is highly concentrated and compressed in this latent space. Then, during the decoding phase, the Shannon entropy does not revert to its initial value but instead exhibits a slower increase. This behavior suggests that the network retains some of the learned structure and information from the bottleneck layer. Something similar happens for the second measure of entropy: at the beginning, the data is unlikely to be highly clustered, since two distinct images of the same digit may be quite distant from each other. In the inner layers, this probability increases until it reaches its maximum (rather close to 1) in the bottleneck, where the data can then be fully partitioned into clusters of radius \(\epsilon\). As for the Shannon entropy, the information from the bottleneck layer is retained during the decoding phase, which is why the entropy remains constant for a while and then decreases more slowly.
It is worth noticing that in both cases the entropy does not fully return to its initial level. This might be attributed to the phenomenon of mode collapse, where the network fails to capture the full variability in the input data and instead produces similar outputs for different inputs, hence inducing some sort of _implicit bias_. Mode collapse is often considered undesirable in generative models, as it hinders the ability to generate diverse and realistic samples. However, in the context of understanding data structure and performing clustering, the network's capability to capture the main modes or clusters of the data can be seen as a positive aspect. The network learns to extract salient features and represent the data in a compact and informative manner, enabling tasks such as clustering and classification. Further investigation is needed to explore the relationship between the observed entropy patterns, mode collapse, and the overall performance of the network on different tasks.
## Acknowledgments
The authors would like to thank Prof. Giuseppe Savare for the fruitful discussions during his stay in Munich. Moreover, the authors are grateful to Dr. Oleh Melnyk for the suggestion on the extension of the dynamical model to the U-net architecture.
This work has been funded by the German Federal Ministry of Education and Research and the Bavarian State Ministry for Science and the Arts. C.C. and M.F. acknowledge also the partial support of the project "Online Firestorms And Resentment Propagation On Social Media: Dynamics, Predictability and Mitigation" of the TUM
Figure 11: Left: Shannon entropy across layers. Right: Our measure of entropy across layers.
Institute for Ethics in Artificial Intelligence and of the DFG Project "Implicit Bias and Low Complexity Networks" within the DFG SPP 2298 "Theoretical Foundations of Deep Learning". A.S. acknowledges the partial support from INdAM-GNAMPA.
## Appendix A Appendix
**Lemma A.1** (Boundedness of trajectories).: _Let us consider the controlled system_
\[\dot{x}=\mathcal{F}(t,x,\theta),\quad x(0)=x_{0},\]
_where \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) satisfies Assumption 1, and \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\). Then, for every \(R>0\) and any \(x_{0}\in B_{R}(0)\), we have that \(x(t)\in B_{\bar{R}}(0)\) for every \(t\in[0,T]\), where \(\bar{R}=(R+L_{R}(1+\|\theta\|_{L^{1}}))e^{L_{R}(1+\|\theta\|_{L^{1}})}\)._
Proof.: According to Assumption 1\(-(ii)\) on \(\mathcal{F}\), the trajectories can be bounded as follows:
\[|x(t)|\leq|x_{0}|+\int_{0}^{t}|\mathcal{F}(s,x(s),\theta(s))|\,ds\leq|x_{0}|+L _{R}\int_{0}^{t}(1+|x(s)|)(1+|\theta(s)|)\,ds\]
for every \(t\in[0,T]\). Using Gronwall's lemma, it follows that
\[|x(t)|\leq\Big{(}|x_{0}|+L_{R}(1+\|\theta\|_{L^{1}})\Big{)}e^{L_{R}(1+\| \theta\|_{L^{1}})}.\]
**Lemma A.2** (Flow's dependency on initial datum).: _For every \(t\in[0,T]\), let us consider the flow mapping \(\Phi^{\theta}_{(0,t)}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) defined in (2.2) and driven by the control \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\). Let us assume that the controlled dynamics \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) satisfies Assumption 1. Then, for every \(R>0\), and every \(x_{1},x_{2}\in B_{R}(0)\), it follows that_
\[|\Phi^{\theta}_{(0,t)}(x_{1})-\Phi^{\theta}_{(0,t)}(x_{2})|\leq e^{L_{\bar{R}}(1+\| \theta\|_{L^{1}})}|x_{1}-x_{2}|,\]
_where \(\bar{R}\) is defined as in Lemma A.1, and \(L_{\bar{R}}\) is prescribed by Assumption 1-(ii)._
Proof.: Let us denote with \(t\mapsto x_{1}(t),t\mapsto x_{2}(t)\) the solutions of (2.1) driven by \(\theta\) and starting, respectively, from \(x_{1}(0)=x_{1},x_{2}(0)=x_{2}\). Then, for every \(t\in[0,T]\), we have
\[\begin{split}|x_{1}(t)-x_{2}(t)|&\leq|x_{1}-x_{2}|+\int_{0}^{t}|\mathcal{F}(s,x_{1}(s),\theta(s))-\mathcal{F}(s,x_{2}(s),\theta(s))|\,ds\\ &\leq|x_{1}-x_{2}|+L_{\bar{R}}\int_{0}^{t}(1+|\theta(s)|)|x_{1}(s)-x_{2}(s)|\,ds,\end{split}\]
by using Assumption 1\(-(ii)\). As before, the statement follows from Gronwall's Lemma.
**Lemma A.3** (Flow's dependency on time).: _Under the same assumptions and notations as in Lemma A.2, for every \(R>0\), for every \(x\in B_{R}(0)\) and for every \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\), we have that_
\[|\Phi^{\theta}_{(0,t_{2})}(x)-\Phi^{\theta}_{(0,t_{1})}(x)|\leq L_{\bar{R}}(1+\bar{ R})(1+\|\theta\|_{L^{2}})|t_{2}-t_{1}|^{\frac{1}{2}}\]
_for every \(0\leq t_{1}<t_{2}\leq T\), where \(\bar{R}\) is defined as in Lemma A.1, and \(L_{\bar{R}}\) is prescribed by Assumption 1-(ii). Moreover, if \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\cap L^{\infty}([0,T],\mathbb{R}^{m})\), then, for every \(0\leq t_{1}<t_{2}\leq T\), it holds:_
\[|\Phi^{\theta}_{(0,t_{2})}(x)-\Phi^{\theta}_{(0,t_{1})}(x)|\leq L_{\bar{R}}(1+ \bar{R})(1+\|\theta\|_{L^{\infty}})|t_{2}-t_{1}|.\]
Proof.: If we denote by \(t\mapsto x(t)\) the solution of (2.1) driven by the control \(\theta\), then
\[|x(t_{2})-x(t_{1})|\leq\int_{t_{1}}^{t_{2}}|\mathcal{F}(s,x(s),\theta(s)|\,ds \leq\int_{t_{1}}^{t_{2}}L_{\bar{R}}(1+\bar{R})(1+|\theta(s)|)\,ds.\]
The thesis follows by using Cauchy-Schwarz for \(\theta\in L^{2}\), or from basic estimates if \(\theta\in L^{\infty}\).
**Lemma A.4** (Flow's dependency on controls).: _For every \(t\in[0,T]\), let \(\Phi^{\theta_{1}}_{(0,t)},\Phi^{\theta_{2}}_{(0,t)}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be the flows defined in (2.2) and driven, respectively, by \(\theta_{1},\theta_{2}\in L^{2}([0,T],\mathbb{R}^{m})\). Let us assume that the controlled dynamics \(\mathcal{F}:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}^{d}\) satisfies Assumption 1. Then, for every \(R>0\) and for every \(x\in B_{R}(0)\), it holds that_
\[|\Phi^{\theta_{1}}_{(0,t)}(x)-\Phi^{\theta_{2}}_{(0,t)}(x)|\leq L_{\bar{R}}(1+ \|\theta_{1}\|_{L^{2}}+\|\theta_{2}\|_{L^{2}})e^{L_{\bar{R}}(1+\|\theta_{1}\|_{L^{1}}) }\left\|\theta_{1}-\theta_{2}\right\|_{L^{2}},\]
_where \(\bar{R}\) is defined as in Lemma A.1, and \(L_{\bar{R}}\) is prescribed by Assumption 1-(ii)._
Proof.: By using Assumptions 1-(ii) and 1-(iii) and the triangle inequality, we obtain that
\[|\Phi^{\theta_{1}}_{(0,t)}(x)-\Phi^{\theta_{2}}_{(0,t)}(x)| \leq\int_{0}^{t}|\mathcal{F}(s,x_{1}(s),\theta_{1}(s))-\mathcal{F}(s,x_{2}(s),\theta_{2}(s))|\,ds\] \[\leq\int_{0}^{t}|\mathcal{F}(s,x_{1}(s),\theta_{1}(s))-\mathcal{F}(s,x_{2}(s),\theta_{1}(s))|\,ds\] \[\quad+\int_{0}^{t}|\mathcal{F}(s,x_{2}(s),\theta_{1}(s))-\mathcal{F}(s,x_{2}(s),\theta_{2}(s))|\,ds\] \[\leq L_{\bar{R}}\int_{0}^{t}(1+|\theta_{1}(s)|)|x_{1}(s)-x_{2}(s)|\,ds+L_{\bar{R}}(1+\left\|\theta_{1}\right\|_{L^{2}}+\left\|\theta_{2}\right\|_{L^{2}})\left\|\theta_{1}-\theta_{2}\right\|_{L^{2}}.\]
The statement follows again by applying Gronwall's Lemma.
**Proposition A.5** (Differentiability with respect to trajectories perturbations).: _Let us assume that the controlled dynamics \(\mathcal{F}\) satisfies Assumptions 1-2. Given an admissible control \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) and a trajectory \(t\mapsto x(t)=\Phi^{\theta}_{(0,t)}(x_{0})\) with \(x_{0}\in B_{R}(0)\), let \(\xi:[0,T]\to\mathbb{R}^{d}\) be the solution of the linearized problem_
\[\begin{cases}\dot{\xi}(t)=\nabla_{x}\mathcal{F}(t,x(t),\theta(t))\xi(t),\\ \xi(\bar{t})=v,\end{cases}\] (A.1)
_where \(\bar{t}\in[0,T]\) is the instant of perturbation and \(v\) is the direction of perturbation of the trajectory. Then, for every \(t\in(\bar{t},T)\), it holds_
\[|\Phi^{\theta}_{(\bar{t},t)}(x(\bar{t})+\epsilon v)-\Phi^{\theta}_{(\bar{t},t )}(x(\bar{t}))-\epsilon\xi(t)|\leq C|v|^{2}\epsilon^{2}\]
_where \(C\) is a constant depending on \(T,R,\left\|\theta\right\|_{L^{2}}\)._
Proof.: For \(t\geq\bar{t}\), let us denote with \(t\mapsto y(t):=\Phi^{\theta}_{(\bar{t},t)}(x(\bar{t})+\epsilon v)\) the solution of the modified problem, obtained by perturbing the original trajectory with \(\epsilon v\) at instant \(\bar{t}\). Then, since \(\xi\) solves (A.1), we can write
\[|y(t)-x(t)-\epsilon\xi(t)| =|\Phi^{\theta}_{(\bar{t},t)}(x(\bar{t})+\epsilon v)-\Phi^{\theta}_{(\bar{t},t)}(x(\bar{t}))-\epsilon\xi(t)|\] \[\leq\int_{\bar{t}}^{t}|\mathcal{F}(s,y(s),\theta(s))-\mathcal{F}(s,x(s),\theta(s))-\epsilon\nabla_{x}\mathcal{F}(s,x(s),\theta(s))\xi(s)|\,ds\] \[\leq\int_{\bar{t}}^{t}|\mathcal{F}(s,y(s),\theta(s))-\mathcal{F}(s,x(s),\theta(s))-\nabla_{x}\mathcal{F}(s,x(s),\theta(s))(y(s)-x(s))|\,ds\] \[\quad+\int_{\bar{t}}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s))|\,|y(s)-x(s)-\epsilon\xi(s)|\,ds\] \[\leq\int_{\bar{t}}^{t}\left[\int_{0}^{1}|\nabla_{x}\mathcal{F}(s,x(s)+\tau(y(s)-x(s)),\theta(s))-\nabla_{x}\mathcal{F}(s,x(s),\theta(s))|\,|y(s)-x(s)|\,d\tau\right]ds\] \[\quad+\int_{\bar{t}}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s))|\,|y(s)-x(s)-\epsilon\xi(s)|\,ds\]
for every \(t\geq\bar{t}\). We now address the two integrals separately. Using Assumption 2-(iv) and the result of Lemma A.2, we obtain the following bound
\[\int_{\bar{t}}^{t}\left[\int_{0}^{1}|\nabla_{x}\mathcal{F}(s,x(s)+\tau(y(s)-x(s)),\theta(s))-\nabla_{x}\mathcal{F}(s,x(s),\theta(s))|\,|y(s)-x(s)|\,d\tau\right]ds\] \[\leq\int_{\bar{t}}^{t}L_{\bar{R}}(1+|\theta(s)|^{2})\frac{1}{2}|y(s)-x(s)|^{2}\,ds\] \[\leq\frac{1}{2}L_{\bar{R}}\left(1+\|\theta\|_{L^{2}}^{2}\right)e^{2L_{\bar{R}}(1+\|\theta\|_{L^{1}})}|\epsilon v|^{2}.\]
Similarly, for the second integral, owing to Assumption 2-(iv), we can compute:
\[\int_{\bar{t}}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s))||y(s)-x(s)- \epsilon\xi(s)|ds\leq\int_{\bar{t}}^{t}L_{R}(1+|\theta(s)|^{2})(1+\bar{R})|y(s )-x(s)-\epsilon\xi(s)|ds\]
Finally, by combining the two estimates and using Gronwall's Lemma, we prove the statement.
**Proposition A.6** (Differentiability with respect to control perturbations).: _Consider the solution \(\xi\) of the linearized problem_
\[\begin{cases}\dot{\xi}(t)=\nabla_{x}\mathcal{F}(t,x^{\theta}(t),\theta(t))\xi(t) +\nabla_{\theta}\mathcal{F}(t,x^{\theta}(t),\theta(t))\nu(t)\\ \xi(0)=0\end{cases}\] (A.2)
_where the control \(\theta\) is perturbed into \(\theta+\epsilon\nu\), starting from an initial datum \(x_{0}\in B_{R}(0)\). Then,_
\[|\Phi^{\theta+\epsilon\nu}_{(0,t)}(x_{0})-\Phi^{\theta}_{(0,t)}(x_{0})- \epsilon\xi(t)|\leq C||\nu||_{L^{2}}^{2}\epsilon^{2}\] (A.3)
_where \(C\) is a constant depending on \(T,\bar{R},L_{R},\left\|\theta\right\|_{L^{1}}\). Moreover, we have that for every \(t\in[0,T]\)_
\[\xi(t)=\int_{0}^{t}\mathcal{R}^{\theta}_{(s,t)}(x_{0})\cdot\nabla_{\theta} \mathcal{F}(s,x^{\theta}(s),\theta(s))\nu(s)\,ds,\] (A.4)
_where \(\mathcal{R}^{\theta}_{(s,t)}(x_{0})\) has been defined in (3.10)._
Proof.: We first observe that the dynamics in (A.2) is affine in the \(\xi\) variable. Moreover, Assumptions 1-2 guarantee that the coefficients are \(L^{1}\)-regular in time. Hence, from the classical Caratheodory Theorem we deduce the existence and the uniqueness of the solution of (A.2). Finally, the identity (A.4) follows as a classical application of the resolvent map (see, e.g., [10, Theorem 2.2.3]).
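For clarity, (A.4) can also be checked by direct differentiation: using the resolvent equation \(\frac{\partial}{\partial t}\mathcal{R}^{\theta}_{(s,t)}(x_{0})=\nabla_{x}\mathcal{F}(t,x^{\theta}(t),\theta(t))\,\mathcal{R}^{\theta}_{(s,t)}(x_{0})\) together with \(\mathcal{R}^{\theta}_{(t,t)}(x_{0})=\mathrm{Id}\), the Leibniz rule gives
\[\dot{\xi}(t)=\nabla_{\theta}\mathcal{F}(t,x^{\theta}(t),\theta(t))\nu(t)+\nabla_{x}\mathcal{F}(t,x^{\theta}(t),\theta(t))\int_{0}^{t}\mathcal{R}^{\theta}_{(s,t)}(x_{0})\,\nabla_{\theta}\mathcal{F}(s,x^{\theta}(s),\theta(s))\nu(s)\,ds,\]
which is exactly the dynamics in (A.2), together with \(\xi(0)=0\).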
Let us denote with \(t\mapsto x(t)\) and \(t\mapsto y(t)\) the solutions of the Cauchy problem (2.1) corresponding, respectively, to the admissible controls \(\theta\) and \(\theta+\epsilon\nu\). In virtue of Lemma A.1, there exists \(\bar{R}>0\) such that \(x(t),y(t)\in B_{\bar{R}}(0)\) for every \(t\in[0,T]\). Then, recalling the definition of the flow map provided in (2.2), we compute
\[|y(t)-x(t)-\epsilon\xi(t)| =|\Phi^{\theta+\epsilon\nu}_{(0,t)}(x_{0})-\Phi^{\theta}_{(0,t)}(x_{0})-\epsilon\xi(t)|\] \[\leq\int_{0}^{t}|\mathcal{F}(s,y(s),\theta(s)+\epsilon\nu(s))-\mathcal{F}(s,x(s),\theta(s))-\epsilon\dot{\xi}(s)|\,ds\] \[\leq\int_{0}^{t}|\mathcal{F}(s,y(s),\theta(s)+\epsilon\nu(s))-\mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s))\] \[\qquad\qquad-\nabla_{x}\mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s))\cdot(y(s)-x(s))|\,ds\] \[\quad+\int_{0}^{t}|\mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s))-\mathcal{F}(s,x(s),\theta(s))-\epsilon\nabla_{\theta}\mathcal{F}(s,x(s),\theta(s))\cdot\nu(s)|\,ds\] \[\quad+\int_{0}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s))-\nabla_{x}\mathcal{F}(s,x(s),\theta(s))|\,|y(s)-x(s)|\,ds\] \[\quad+\int_{0}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s))|\,|y(s)-x(s)-\epsilon\xi(s)|\,ds.\]
We now handle each term separately:
\[\int_{0}^{t}|\mathcal{F}(s,y(s),\theta(s)+\epsilon\nu(s))-\mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s))-\nabla_{x}\mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s))(y(s)-x(s))|\,ds\] (A.5) \[\leq\int_{0}^{t}\left[\int_{0}^{1}L_{\bar{R}}(1+|\theta(s)+\epsilon\nu(s)|^{2})\tau|y(s)-x(s)|^{2}d\tau\right]ds\] \[\leq L_{\bar{R}}^{3}(1+\left\|\theta\right\|_{L^{2}}+\epsilon\left\|\nu\right\|_{L^{2}})^{4}e^{2L_{\bar{R}}(1+\left\|\theta\right\|_{L^{1}})}\left\|\nu\right\|_{L^{2}}^{2}\epsilon^{2}\]
where we used Assumption 2-(iv) and Lemma A.4. By using Assumption 2-(v), we obtain the following bound for the second integral:
\[\int_{0}^{t}|\mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s))- \mathcal{F}(s,x(s),\theta(s))-\nabla_{\theta}\mathcal{F}(s,x(s),\theta(s)) \cdot\epsilon\nu(s)|\,ds\] (A.6) \[\leq\int_{0}^{t}\left[\int_{0}^{1}L_{R}|\nu(s)|^{2}\epsilon^{2} \tau d\tau\right]ds=\frac{1}{2}L_{R}\left\|\nu\right\|_{L^{2}}^{2}\epsilon^{2}.\]
Similarly, the third integral can be bounded by using Assumption 2-(vi) and Lemma A.4, and it yields
\[\int_{0}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s)+\epsilon\nu(s ))-\nabla_{x}\mathcal{F}(s,x(s),\theta(s))||y(s)-x(s)|\,ds\] (A.7) \[\leq\int_{0}^{t}L_{R}(1+|\theta(s)|+\epsilon|\nu(s)|)\epsilon|y(s)- x(s)||\nu(s)|\,ds\] \[\leq L_{R}^{2}(1+\left\|\theta\right\|_{L^{2}}+\epsilon\left\| \nu\right\|_{L^{2}})^{2}e^{L_{R}(1+\left\|\theta\right\|_{L^{1}})}\left\|\nu \right\|_{L^{2}}^{2}\epsilon^{2}.\]
Finally, the fourth integral can be bounded using Assumption 2-(iv) as follows:
\[\int_{0}^{t}|\nabla_{x}\mathcal{F}(s,x(s),\theta(s))||y(s)-x(s)-\epsilon\xi(s)|\,ds\leq\int_{0}^{t}L_{R}(1+\bar{R})(1+|\theta(s)|^{2})|y(s)-x(s)-\epsilon\xi(s)|\,ds.\] (A.8)
Hence, by combining (A.5), (A.6), (A.7) and (A.8), the thesis follows from Gronwall's Lemma.
**Proposition A.7** (Properties of the resolvent map).: _Let us assume that the controlled dynamics \(\mathcal{F}\) satisfies Assumptions 1-2. Given an admissible control \(\theta\in L^{2}([0,T],\mathbb{R}^{m})\) and a trajectory \(t\mapsto x(t)=\Phi^{\theta}_{(0,t)}(x)\) with \(x\in B_{R}(0)\), for every \(\tau\in[0,T]\) the resolvent map \(\mathcal{R}^{\theta}_{(\tau,\cdot)}(x):[0,T]\to\mathbb{R}^{d\times d}\) is the curve \(s\mapsto\mathcal{R}^{\theta}_{(\tau,s)}(x)\) that solves_
\[\begin{cases}\frac{d}{ds}\mathcal{R}^{\theta}_{(\tau,s)}(x)=\nabla_{x} \mathcal{F}(s,\Phi^{\theta}_{(0,s)}(x),\theta(s))\cdot\mathcal{R}^{\theta}_{( \tau,s)}(x)\quad\text{for $a.e.$ $s\in[0,T]$},\\ \mathcal{R}^{\theta}_{(\tau,\tau)}(x)=\mathrm{Id}.\end{cases}\] (A.9)
_Then for every \(\tau,s\in[0,T]\), there exists a constant \(C_{1}\) depending on \(T,R,\left\|\theta\right\|_{L^{2}}\) such that_
\[|\mathcal{R}^{\theta}_{(\tau,s)}(x)|:=\sup_{v\neq 0}\frac{|\mathcal{R}^{ \theta}_{(\tau,s)}(x)\cdot v|}{|v|}\leq C_{1}.\] (A.10)
_Moreover, for every \(x,y\in B_{R}(0)\), there exists a constant \(C_{2}\) depending on \(T,R,\left\|\theta\right\|_{L^{2}}\) such that_
\[|\mathcal{R}^{\theta}_{(\tau,s)}(x)-\mathcal{R}^{\theta}_{(\tau,s)}(y)|:=\sup _{v\neq 0}\frac{|\mathcal{R}^{\theta}_{(\tau,s)}(x)\cdot v-\mathcal{R}^{\theta }_{(\tau,s)}(y)\cdot v|}{|v|}\leq C_{2}|x-y|.\] (A.11)
_Finally, if \(\theta_{1},\theta_{2}\) satisfy \(\left\|\theta_{1}\right\|,\left\|\theta_{2}\right\|\leq\rho\), then there exists a constant \(C_{3}\) depending on \(T,R,\rho\) such that_
\[|\mathcal{R}^{\theta_{1}}_{(\tau,s)}(x)-\mathcal{R}^{\theta_{2}}_{(\tau,s)}(x )|:=\sup_{v\neq 0}\frac{|\mathcal{R}^{\theta_{1}}_{(\tau,s)}(x)\cdot v- \mathcal{R}^{\theta_{2}}_{(\tau,s)}(x)\cdot v|}{|v|}\leq C_{3}\|\theta_{1}- \theta_{2}\|_{L^{2}}.\] (A.12)
Proof.: We first prove the boundedness of the resolvent map. Let us fix \(v\in\mathbb{R}^{d}\) with \(v\neq 0\), and let us define \(\xi(s):=\mathcal{R}^{\theta}_{(\tau,s)}(x)\cdot v\) for every \(s\in[0,T]\). Then, in virtue of Assumption 2-(vi), we have:
\[|\xi(s)|\leq|\xi(\tau)|+\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi^{\theta}_{(0,\sigma)}(x),\theta(\sigma))\right|\left|\xi(\sigma)\right|d\sigma\leq|v|+L_{R}\int_{\tau}^{s}(1+|\theta(\sigma)|^{2})|\xi(\sigma)|\,d\sigma,\]
and, by Gronwall's Lemma, we deduce (A.10). As before, given \(x,y\in B_{R}(0)\) and \(v\neq 0\), let us define \(\xi^{x}(s):=\mathcal{R}^{\theta}_{(\tau,s)}(x)\cdot v\) and \(\xi^{y}(s):=\mathcal{R}^{\theta}_{(\tau,s)}(y)\cdot v\) for every \(s\in[0,T]\). Then, we have that
\[|\xi^{x}(s)-\xi^{y}(s)| \leq\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi^{\theta}_{(0,\sigma)}(x),\theta(\sigma))\xi^{x}(\sigma)-\nabla_{x}\mathcal{F}(\sigma,\Phi^{\theta}_{(0,\sigma)}(y),\theta(\sigma))\xi^{y}(\sigma)\right|\,d\sigma\] \[\leq\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi^{\theta}_{(0,\sigma)}(x),\theta(\sigma))-\nabla_{x}\mathcal{F}(\sigma,\Phi^{\theta}_{(0,\sigma)}(y),\theta(\sigma))\right|\left|\xi^{y}(\sigma)\right|d\sigma\] \[\quad+\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi^{\theta}_{(0,\sigma)}(x),\theta(\sigma))\right|\left|\xi^{x}(\sigma)-\xi^{y}(\sigma)\right|d\sigma\] \[\leq C_{1}|v|\int_{\tau}^{s}L_{R}(1+|\theta(\sigma)|^{2})\left|\Phi^{\theta}_{(0,\sigma)}(x)-\Phi^{\theta}_{(0,\sigma)}(y)\right|\,d\sigma\] \[\quad+\int_{\tau}^{s}L_{R}(1+|\theta(\sigma)|^{2})|\xi^{x}(\sigma)-\xi^{y}(\sigma)|\,d\sigma,\]
where we used (A.10) and Assumption 2-(iv). Hence, combining Lemma A.2 with Gronwall's Lemma, we deduce (A.11). Finally, we prove the dependence of the resolvent map on different controls \(\theta_{1},\theta_{2}\in L^{2}([0,T];\mathbb{R}^{m})\). Given \(x\in B_{R}(0)\) and \(v\neq 0\), let us define \(\xi^{\theta_{1}}(s):=\mathcal{R}^{\theta_{1}}_{(\tau,s)}(x)\cdot v\) and \(\xi^{\theta_{2}}(s):=\mathcal{R}^{\theta_{2}}_{(\tau,s)}(x)\cdot v\) for every \(s\in[0,T]\). Then, we
compute
\[|\xi^{\theta_{1}}(s)-\xi^{\theta_{2}}(s)| \leq\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi_{(0,\sigma)}^{\theta_{1}}(x),\theta_{1}(\sigma))\xi^{\theta_{1}}(\sigma)-\nabla_{x}\mathcal{F}(\sigma,\Phi_{(0,\sigma)}^{\theta_{2}}(x),\theta_{2}(\sigma))\xi^{\theta_{2}}(\sigma)\right|\,d\sigma\] \[\leq\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi_{(0,\sigma)}^{\theta_{1}}(x),\theta_{1}(\sigma))-\nabla_{x}\mathcal{F}(\sigma,\Phi_{(0,\sigma)}^{\theta_{2}}(x),\theta_{2}(\sigma))\right|\left|\xi^{\theta_{1}}(\sigma)\right|d\sigma\] \[\quad+\int_{\tau}^{s}\left|\nabla_{x}\mathcal{F}(\sigma,\Phi_{(0,\sigma)}^{\theta_{2}}(x),\theta_{2}(\sigma))\right|\left|\xi^{\theta_{1}}(\sigma)-\xi^{\theta_{2}}(\sigma)\right|d\sigma\] \[\leq C_{1}|v|\int_{\tau}^{s}L_{R}(1+|\theta_{1}(\sigma)|^{2})\left|\Phi_{(0,\sigma)}^{\theta_{1}}(x)-\Phi_{(0,\sigma)}^{\theta_{2}}(x)\right|\,d\sigma\] \[\quad+C_{1}|v|\int_{\tau}^{s}L_{R}(1+|\theta_{1}(\sigma)|+|\theta_{2}(\sigma)|)|\theta_{1}(\sigma)-\theta_{2}(\sigma)|\,d\sigma\] \[\quad+\int_{\tau}^{s}L_{R}(1+|\theta_{2}(\sigma)|^{2})|\xi^{\theta_{1}}(\sigma)-\xi^{\theta_{2}}(\sigma)|\,d\sigma,\]
where we used Assumptions 2-(iv) and 2-(vi). The estimate (A.12) then follows by combining Lemma A.4 with Gronwall's Lemma.
|
2307.11563 | Feature Map Testing for Deep Neural Networks | Due to the widespread application of deep neural networks~(DNNs) in
safety-critical tasks, deep learning testing has drawn increasing attention.
During the testing process, test cases that have been fuzzed or selected using
test metrics are fed into the model to find fault-inducing test units (e.g.,
neurons and feature maps, activating which will almost certainly result in a
model error) and report them to the DNN developer, who subsequently repairs
them~(e.g., retraining the model with test cases). Current test metrics,
however, are primarily concerned with the neurons, which means that test cases
that are discovered either by guided fuzzing or selection with these metrics
focus on detecting fault-inducing neurons while failing to detect
fault-inducing feature maps.
In this work, we propose DeepFeature, which tests DNNs from the feature map
level. When testing is conducted, DeepFeature will scrutinize every internal
feature map in the model and identify vulnerabilities that can be enhanced
through repairing to increase the model's overall performance. Exhaustive
experiments are conducted to demonstrate that (1) DeepFeature is a strong tool
for detecting the model's vulnerable feature maps; (2) DeepFeature's test case
selection has a high fault detection rate and can detect more types of
faults~(comparing DeepFeature to coverage-guided selection techniques, the
fault detection rate is increased by 49.32\%). (3) DeepFeature's fuzzer also
outperforms current fuzzing techniques and generates valuable test cases more
efficiently. | Dong Huang, Qingwen Bu, Yahao Qing, Yichao Fu, Heming Cui | 2023-07-21T13:15:15Z | http://arxiv.org/abs/2307.11563v1 | # Feature Map Testing for Deep Neural Networks
###### Abstract
Due to the widespread application of deep neural networks (DNNs) in safety-critical tasks, deep learning testing has drawn increasing attention. During the testing process, test cases that have been fuzzed or selected using test metrics are fed into the model to find fault-inducing test units (e.g., neurons and feature maps, activating which will almost certainly result in a model error) and report them to the DNN developer, who subsequently repairs them (e.g., by retraining the model with test cases). Current test metrics, however, are primarily concerned with the neurons, which means that test cases that are discovered either by guided fuzzing or selection with these metrics focus on detecting fault-inducing neurons while failing to detect fault-inducing feature maps.
In this work, we propose DeepFeature, which tests DNNs from the feature-map level. When testing is conducted, DeepFeature scrutinizes every internal feature map in the model and identifies vulnerable ones, which can then be repaired to increase the model's overall performance. Exhaustive experiments are conducted to demonstrate that (1) DeepFeature is a strong tool for detecting the model's vulnerable feature maps; (2) DeepFeature's test case selection has a high fault detection rate and can detect more types of faults (compared to coverage-guided selection techniques, DeepFeature increases the fault detection rate by 49.32%); and (3) DeepFeature's fuzzer also outperforms current fuzzing techniques and generates valuable test cases more efficiently.
## I Introduction
Deep neural networks (DNNs) are widely used in safety-critical domains, such as autonomous driving [1] and medical diagnosis [2]. However, DNNs can be complex and susceptible to data bias, overfitting, and underfitting, making them far from dependable for these applications. To address this problem, DNNs must undergo deep learning testing techniques before deployment to ensure their reliability [3, 4]. This testing helps identify and repair units (e.g., neurons and feature maps) in the model that are particularly vulnerable to change and can significantly impact the model's performance, resulting in a more reliable model.
Deep learning testing has three key elements: the coverage metric, test case selection, and the fuzzing strategy. The testing process typically works iteratively: in each iteration, test cases, which are selected or fuzzed according to certain rules (e.g., a coverage metric), are fed into the model, after which the coverage metric is calculated; new test cases are then generated to further improve the coverage metric until it meets the developers' requirements.
The most important element mentioned above is the coverage metric since it determines how test cases are generated (i.e., fuzzed or selected from a massive candidate test set) to reveal the fault-inducing units of a tested DNN (e.g., neurons and feature maps, illustrated in Fig. 1). Therefore, the main goal of a coverage metric is to explore as much diversity as possible of a certain subspace defined at different abstraction levels (e.g., neuron activation [3], neuron activation pattern [4], and neuron activation conditions [5]), while test case selection and fuzzing strategies are used to obtain new test cases, which are fed into the model to increase the coverage metric. To explore more types of model units' behaviors, multiple **N**euron **C**overage (NC) metrics, including NAC [3], KMNC [4], IDC [6], and MC/DC [7], have been proposed.
**Key problems of existing NC-guided testing.** Although existing NC-guided testing techniques have greatly advanced deep learning testing, there remains a significant constraint: not all fault-inducing units in a model are neurons. Recent studies [8, 9] have revealed that feature maps are also important fault-inducing units that can affect DNN performance, while NC-guided techniques focus on analyzing neuron behavior and fail to capture and utilize feature-map-level information, so DNN developers cannot detect fault-inducing feature maps. Consequently, NC-guided testing techniques yield a sub-optimal fault detection rate and can hardly bring a significant boost to the model [10, 11], as retraining the model with test cases generated by NC-Fuzz (e.g., DeepXplore [3], DeepHunter [12],
Fig. 1: An illustration of feature maps and neurons. A neuron is a basic unit of computation in a neural network. A feature map is a set of neurons that are connected to the same input and that together produce an output that encodes certain features (e.g., color and texture).
and ADAPT [13]) can only repair fault-inducing neurons.
In this work, we address this problem by proposing DeepFeature, which tests DNNs at the feature-map level. Instead of heuristically defining various neuron-level metrics, we establish a clear connection between output variation at the feature-map level and the performance of the model through experiments. As illustrated in Fig. 1, the main distinction between DeepFeature and existing neuron coverage testing works is that DeepFeature tests the DNN model at the feature-map level. Specifically, during the testing process, DeepFeature delves into every inner feature map in the model to evaluate its vulnerability. The vulnerable feature maps therein are then repaired by retraining with test cases selected or generated using our corresponding selection and fuzzing strategies, further improving the overall performance of the model.
To validate the effectiveness of DeepFeature, we conduct experiments with four popular DNN models across four widely-used datasets. The experiment results validate our motivation and demonstrate the superiority of our method. Specifically, in comparison to state-of-the-art neuron-level test case selection methods, DeepFeature yields a higher fault detection rate (e.g., DeepFeature improves the fault detection rate by up to 49.32% over NC-guided baselines), and in limited test case selection scenarios, DeepFeature can detect more types of faults that are ignored by the baselines. DeepFeature's fuzzer is further shown to be the most effective and efficient among state-of-the-art fuzzing techniques.
In a nutshell, we make the following contributions.
* We propose a novel set of testing metrics to quantify the feature map's vulnerability.
* We propose a new selection method that can productively select test cases with a high probability of fooling the model from a massive number of unlabeled test cases (e.g., DeepFeature improves the fault detection rate by up to 49.32% over NC-guided baselines).
* We propose a feature-map-guided fuzzing technique that outperforms current fuzzing techniques.
## II Background
This section introduces basic concepts of neural networks, neuron coverage metrics, test case selection, and fuzzing strategies.
### _Neuron, Feature Map and Neural Network_
In this work, we focus on deep learning models (e.g., deep neural networks) for classification, which can be represented as a function \(f:\mathcal{X}\rightarrow\mathcal{Y}\) mapping an input \(x\in\mathcal{X}\) to a label \(y\in\mathcal{Y}\). Unlike traditional software, which is programmed with deterministic algorithms by developers, DNNs are defined by the training data together with the network structure.
As shown in Fig. 1, neurons and feature maps are fundamental concepts in the field of deep neural networks (DNNs), which have emerged as powerful tools for image recognition and other tasks involving large amounts of data. A neuron is a basic unit of computation in a neural network. It receives input from other neurons or from the input layer, applies a set of weights and biases to this input, and then applies an activation function to produce an output, where the activation function is a mathematical function that determines the output of a neuron based on its input. Common activation functions include Sigmoid, Tanh, and ReLU. A feature map, on the other hand, is a set of neurons that are connected to the same input and that together produce an output that encodes certain features or patterns in the input. In a DNN, multiple feature maps are typically stacked together to form a hierarchy of features, with lower-level feature maps encoding simpler patterns and higher-level feature maps encoding more complex patterns.
The main difference between neurons and feature maps is that neurons are individual units of computation, whereas feature maps are collections of neurons that work together to extract specific features from the input. Neurons are the building blocks of a DNN, whereas feature maps are the building blocks of the DNN's ability to recognize patterns in the input data.
### _Neuron Coverage Metrics_
Neuron Activation Coverage (NAC(\(k\)))NAC(\(k\)), proposed by DeepXplore [3], assumes that the more neurons are activated, the more states of the DNN are explored. The parameter \(k\) of this coverage criterion is defined by the developer to specify when a neuron in a DNN is counted as covered (i.e., if the output of a neuron is larger than \(k\), the developer takes the neuron as covered). The rate of NAC(\(k\)) for a test is defined as the ratio of the number of covered neurons to the total number of neurons.
K-multisection Neuron Coverage (KMNC(\(k\)))Based on NAC(\(k\))'s assumption about DNN states, DeepGauge [4] further partitions a neuron's output range into \(k\) segments (e.g., 2,000), each representing one state of the DNN. Specifically, suppose that the output of a neuron \(N\) on the training set lies in the interval \([low_{N},high_{N}]\); KMNC divides this interval equally into \(k\) segments, and a segment is covered if the neuron's output falls into it during testing.
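As a per-neuron illustration, the following hedged sketch computes the covered fraction of one neuron's \(k\) segments; the full KMNC metric averages this over all neurons, and the interface names are our assumptions:

```python
def kmnc_per_neuron(outputs, low, high, k=1000):
    # Fraction of the k equal segments of [low, high] hit by a neuron's
    # outputs over a test set (assumes high > low).
    covered = set()
    for o in outputs:
        if low <= o <= high:
            # Map the output to one of k equal segments of [low, high].
            seg = min(int((o - low) / (high - low) * k), k - 1)
            covered.add(seg)
    return len(covered) / k
```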
### _Test Case Selection Methods_
In this section, we introduce several widely used test case selection methods, which are used to select valuable test cases from a massive dataset. Specifically, the goal of a test case selection method is to sample a fixed-size (\(N\)) subset \(I_{N}\) from the total test set \(I_{T}\). We divide test case selection methods into two types: coverage-guided test case selection and prioritization test case selection.
#### II-C1 Coverage-guided test case selection
Coverage-guided (e.g., NAC) selection methods try to select test cases that can reach maximum coverage metrics and lead to a higher fault detection rate [3, 4, 14].
#### II-C2 Prioritization test case selection
Generally, for a given total test set \(I_{T}\), prioritization test selection methods compute a probability \(p_{i}\) for each test case \(i\) in the total test set, where \(p_{i}\) represents the probability that test case \(i\) is sampled by the selection method.
### _Fuzzing Strategies_
DeepXplore [3] is the first DNN testing framework. It proposes a neuron activation coverage (NAC) guided fuzzing
strategy to increase the neuron activation coverage metric. Inspired by DeepXplore, Hu et al. [15] propose DeepMutation++ [15], which generates test cases to increase multi-granularity coverage metrics. Inspired by DeepXplore and DeepMutation++, Lee et al. [13] propose ADAPT, which integrates multiple neuron behaviors (e.g., NAC and TKNC) to fuzz test cases.
## III Methodology
### _Motivation_
Deep Neural Networks (DNNs) are known to be sensitive to perturbations in their parameters, and certain units (e.g., neurons or feature maps) are more vulnerable to such perturbations than others [3, 12, 23]. Deep learning testing aims to discover these vulnerable units and repair them to make the model more reliable. Traditional deep learning testing has only considered the vulnerability of neurons, generating test cases that vary certain neurons' status to see whether the DNN produces wrong labels; however, recent studies [8, 9] have shown that, without altering any neuron's status, feature maps (sets of neurons) can also lead to wrong DNN outputs and thus be vulnerable. Therefore, a feature map may contain no vulnerable neurons and still be vulnerable, with a significant impact on the model's performance.
Unfortunately, current neuron testing techniques are not equipped to detect vulnerable feature maps due to a lack of proper metrics, evaluation methods for vulnerable feature maps, and testing techniques designed to detect vulnerable feature maps. Existing coverage metrics for deep learning testing are based on neurons, and there are no methods to evaluate the vulnerability of a feature map. As a result, there is a pressing need for a comprehensive testing approach that addresses these limitations.
In this section, we address the above-mentioned problem by proposing DeepFeature, a novel feature-map-level testing technique for deep neural networks. DeepFeature can detect vulnerable feature maps in a DNN and repair them. The overall framework of DeepFeature is shown in Fig. 2. For ease of discussion, this section defines the following notations for DNNs and feature maps: \(f_{\theta}\) is a DNN parameterized by \(\theta\), with \(C\) channels (i.e., \(C\) different feature maps) in the model. The \(c\)-th feature map in the DNN is denoted as \(f^{c}_{\theta}\), and \(f^{c}_{\theta}(x)\) denotes the output of the corresponding feature map when the network input is \(x\).
### _Feature Map Attack_
We first propose the Feature Map Attack (FMA), which generates mutation cases for a given model's feature map and test cases. The FMA algorithm, shown in Algorithm 1, iteratively generates mutated samples by maximizing the difference between the feature maps of the mutated cases and the original test cases. Specifically, the algorithm first initializes each test case with added random noise and then performs a specified number of iterations, using gradient-based optimization to maximize the feature-map difference. A clamping function ensures that the perturbations stay within a given range; the number of iterations is set by \(steps\), while the parameter \(\epsilon\) controls the strength of the perturbations.
```
Input: \(f^{c}_{\theta}\): model's feature map to be tested; \(x\): test case (clean sample);
       \(\alpha\): step size; \(\epsilon\): maximum perturbation; \(steps\): mutation steps
Output: \(x^{\prime}\): mutation case (adversarial sample)

1  Function FMA(\(f^{c}_{\theta}\), \(x\), \(steps\), \(\alpha\), \(\epsilon\)):
2      Initialize: \(x^{\prime}\gets x+\text{random\_noise}\)
3      for \(step=1\) to \(steps\) do
4          \(loss\leftarrow\|f^{c}_{\theta}(x)-f^{c}_{\theta}(x^{\prime})\|^{2}\)
5          \(grad\leftarrow\partial loss/\partial x^{\prime}\)
6          \(x^{\prime}\gets x^{\prime}+\alpha\cdot\text{sign}(grad)\)
7          \(\delta\leftarrow\text{clamp}(x^{\prime}-x,-\epsilon,\epsilon)\)
8          \(x^{\prime}\leftarrow\text{clamp}(x+\delta,0,1)\)
9      return \(x^{\prime}\)
```
**Algorithm 1** Feature Map Attack
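For concreteness, the following is a minimal PyTorch sketch of Algorithm 1. The callable `feature_map` (standing for \(f^{c}_{\theta}\)) is an illustrative assumption, and the default hyperparameters follow the values in Tab. V; this is a sketch, not a released implementation.

```python
import torch

def fma(feature_map, x, steps=7, alpha=1/255, eps=4/255):
    """Feature Map Attack: maximize the feature-map distance to the clean input."""
    # Start from a randomly perturbed copy of the clean batch x (pixels in [0, 1]).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    target = feature_map(x).detach()  # f_theta^c(x), fixed during the attack
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Squared distance between the clean and mutated feature-map outputs.
        loss = (feature_map(x_adv) - target).pow(2).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # gradient-ascent step
        delta = (x_adv - x).clamp(-eps, eps)           # project onto the eps-ball
        x_adv = (x + delta).clamp(0, 1).detach()       # keep a valid image
    return x_adv
```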
### _Feature Map Attack Score_
The attack success rate of DNNs under FMA can vary greatly across feature maps. Generally, different feature maps in a DNN have different vulnerabilities, and their impact on the model's attack success rate also differs. Therefore, we propose the Feature map Attack Score (FAS) to measure the effect of different feature maps on model performance. The FAS for the \(c\)-th feature map is defined as the attack success rate of the DNN when the \(c\)-th feature map is attacked by FMA. Specifically, for the \(c\)-th feature map in the model, \(FAS(f^{c}_{\theta})\) is defined as:
\begin{table}
\begin{tabular}{c|l|c c c} \hline \hline Dataset & DNN Model & Neurons & Layers & Ori Acc (\%) \\ \hline \multirow{2}{*}{MNIST [16]} & LeNet-1 [17] & 3,350 & 5 & 89.50 \\ & LeNet-5 [17] & 44,426 & 7 & 91.79 \\ \hline \multirow{2}{*}{CIFAR-10 [18]} & ResNet-20 [19] & 543,754 & 20 & 86.07 \\ & VGG-16 [20] & 35,749,834 & 21 & 82.52 \\ \hline \multirow{2}{*}{Fashion [21]} & LeNet-1 [17] & 3,350 & 5 & 78.99 \\ & ResNet-20 [19] & 543,754 & 20 & 86.12 \\ \hline \multirow{2}{*}{SVHN [22]} & LeNet-5 [17] & 44,426 & 7 & 84.17 \\ & VGG-16 [20] & 35,749,834 & 21 & 92.02 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Datasets and DNNs for evaluating DeepFeature.
Fig. 2: Overview of DeepFeature’s framework.
\[FAS(f_{\theta}^{c})=1-\frac{\sum_{i=1}^{N}\mathbb{I}(f_{\theta}(x_{i}^{\prime})==y_{i})}{\sum_{i=1}^{N}\mathbb{I}(f_{\theta}(x_{i})==y_{i})}\]
where \(\mathcal{X}=\{x_{i}\mid i=1,2,\ldots,N\}\) are the developer-provided test cases, \(\{y_{i}\mid i=1,2,\ldots,N\}\) are their ground-truth labels, and the mutation samples \(\{x_{i}^{\prime}\mid i=1,2,\ldots,N\}\) are generated by FMA based on the feature map \(f_{\theta}^{c}\) (i.e., for different feature maps, FMA generates different mutation samples). The FAS for a feature map ranges between 0 and 1: a value of 0 means that attacking the feature map with FMA does not affect the performance of the DNN, indicating that the feature map is not vulnerable, while a value of 1 means that attacking it has a significant effect on the DNN's performance; the larger the FAS value, the more vulnerable the feature map.
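Assuming a standard PyTorch classifier whose logits yield predictions via `argmax` (the interface names are ours), FAS can be computed with the following sketch:

```python
import torch

def fas(model, xs, ys, xs_adv):
    """FAS = 1 - (#correct on FMA-mutated cases) / (#correct on clean cases)."""
    with torch.no_grad():
        correct_clean = (model(xs).argmax(dim=1) == ys).sum().item()
        correct_adv = (model(xs_adv).argmax(dim=1) == ys).sum().item()
    # Guard against division by zero if no clean case is classified correctly.
    return 1.0 - correct_adv / max(correct_clean, 1)
```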
Example 1.We first take an SVHN & LeNet5 pre-trained model as an example to illustrate the FAS introduced above. For this pre-trained model, there are 6 different feature maps in the first layer. We feed the labeled dataset \(\mathcal{X},\mathcal{Y}\) into the FMA for each feature map. The FAS results are shown in Tab. II: the FAS differs largely across the feature maps in the model's first layer. For instance, the 3rd feature map is the most robust, obtaining a low FAS under all attack strengths. In contrast, the 1st and 6th feature maps are vulnerable to even a 3-step FMA attack, which indicates that DNN developers need to enhance them to avoid errors in safety-critical scenarios. We then further illustrate other combinations to evaluate the feature maps' vulnerability. As shown in Fig. 3, the distribution of FAS is consistent across different strengths of the FMA, datasets, and models, indicating that the vulnerability of feature maps is an intrinsic characteristic of pre-trained models. Additionally, the experiment results reveal that some feature maps learned in a layer are particularly vulnerable (e.g., the 12th feature map in Fig. 3(b)); attacking them can easily cause the model to misbehave. These feature maps are what DNN developers need to pay extra attention to, as test cases that induce errors in these feature maps can more easily cause the model to misclassify.
### _Feature Map Vulnerability Score_
As discussed in Sec. III-C, FAS needs ground-truth labels to be calculated, which means that developers who only have unlabeled data cannot use FAS to detect vulnerable feature maps. To solve this problem, we introduce FVS, another metric for detecting vulnerable feature maps, which measures a feature map's vulnerability by directly computing the difference between the feature map outputs of a test case and of its mutated counterpart (i.e., the feature map output difference between \(x\) and \(x^{\prime}\)). Formally, given a feature map \(f_{\theta}^{c}\) in the model, a test set \(\mathcal{X}=\{(x_{i})\mid i=1,2,\ldots,N\}\) and its corresponding mutated test set \(\mathcal{X}^{\prime}=\{(x_{i}^{\prime})\mid i=1,2,\ldots,N\}\) generated by a user-defined mutation method (e.g., rotation, shear, blur, or FMA), the FVS is defined as:
\[FVS(f_{\theta}^{c})=\frac{1}{N}\sum_{i=1}^{N}\|f_{\theta}^{c}(x_{i})-f_{\theta}^{c}(x_{i}^{\prime})\|^{2}\]
where \(f_{\theta}^{c}(x_{i})\) is the output of \(x_{i}\) in the \(c\)-th feature map of
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Feature Map & \multicolumn{4}{c}{FMA steps} \\ Index & 3 & 7 & 10 & 20 \\ \hline
1 & 58.85 & 68.73 & 68.65 & 67.72 \\
2 & 35.15 & 44.09 & 44.11 & 43.07 \\
3 & 3.45 & 3.72 & 3.70 & 3.91 \\
4 & 48.26 & 60.98 & 59.29 & 62.81 \\
5 & 32.90 & 42.42 & 42.74 & 41.09 \\
6 & 56.68 & 66.07 & 66.93 & 66.39 \\ \hline \hline \end{tabular}
\end{table} TABLE II: A example of FAS in LeNet5 pretrained model
Fig. 4: FVS values of different feature maps. A darker color indicates a greater degree of data perturbation, aligned with Fig. 3. We use mutation to generate test cases here to be consistent with the selection experiment. Mutation refers to benign augmentation, whose strength is increased by widening the augmentation parameters' value ranges (as listed in Tab. IV).
Fig. 3: Different feature maps’ vulnerability. The x-axis is the index of the feature maps, and the y-axis is the FAS value (attack success rate). A larger FAS value indicates a more vulnerable feature map.
the model \(f_{\theta}\). The FVS for a feature map ranges between 0 and \(\infty\): a value of 0 means that the feature map is not affected by the perturbation, indicating that the feature map is not vulnerable, while a value larger than 0 means it is affected by the perturbation; the larger the FVS, the more vulnerable the feature map.
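A corresponding label-free sketch (with `feature_map` as in the FMA sketch above; the batched interface is our assumption):

```python
import torch

def fvs(feature_map, xs, xs_mut):
    """FVS: mean squared feature-map deviation between clean and mutated cases."""
    with torch.no_grad():
        diff = feature_map(xs) - feature_map(xs_mut)
        # Sum the squared deviation per sample, then average over the batch.
        return diff.pow(2).flatten(start_dim=1).sum(dim=1).mean().item()
```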
```
Input: \(x\in\mathcal{X}\): original test cases; \(x^{\prime}\in\mathcal{X}^{\prime}\): benign mutation test cases;
       \(f_{\theta}\): DNN to be tested; \(N\): number of test cases to be selected;
       \(K\): top-\(K\) vulnerable feature maps
Output: \(\mathcal{X}_{f}\): selected test cases

1   Function FVS-Selection(\(f_{\theta}\), \(\mathcal{X}\), \(\mathcal{X}^{\prime}\), \(N\), \(K\)):
        /* Detect the top-K vulnerable feature maps */
2       \(FM\gets f_{\theta}[\text{argsort}(\{FVS(f_{\theta}^{c})\}\ \text{for each}\ f_{\theta}^{c}\in f_{\theta})][-K:]\)
        /* Test case selection begins */
3       \(FVS\_array=[]\)
4       for each \(x_{i},x^{\prime}_{i}\) in \(\mathcal{X},\mathcal{X}^{\prime}\) do
5           \(FM\_score=0\)
6           for each \(f_{\theta}^{c}\in FM\) do
7               \(FM\_score\mathrel{+}=FVS(f_{\theta}^{c}(x_{i}),f_{\theta}^{c}(x^{\prime}_{i}))\)
8           \(FVS\_array.append(FM\_score)\)
9       \(\mathcal{X}_{f}=\mathcal{X}[\text{argsort}(FVS\_array)][-N:]\)
10      return \(\mathcal{X}_{f}\)
```
**Algorithm 2** Test Case Selection Pipeline
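The following PyTorch sketch mirrors Algorithm 2; `feature_maps` is an assumed list of callables, one per channel, and the per-sample scores reuse the FVS computation above:

```python
import torch

def fvs_selection(feature_maps, xs, xs_mut, n_select, k=5):
    """Select the n_select cases with the largest FVS over the top-k feature maps."""
    with torch.no_grad():
        # Per-sample squared feature-map deviation, one tensor of shape [N] per map.
        per_map = []
        for fm in feature_maps:
            diff = fm(xs) - fm(xs_mut)
            per_map.append(diff.pow(2).flatten(start_dim=1).sum(dim=1))
        # Step 1: rank feature maps by their dataset-level FVS; keep the top-k.
        dataset_fvs = torch.stack([s.mean() for s in per_map])
        top_k = dataset_fvs.argsort()[-k:]
        # Step 2: score each case by its summed FVS over the top-k maps.
        per_case = torch.stack([per_map[int(c)] for c in top_k]).sum(dim=0)
        selected = per_case.argsort()[-n_select:]
    return xs[selected]
```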
Example 2.In this part, we take the same SVHN & LeNet5 pre-trained model as in Sec. III-C to illustrate the effectiveness of the FVS introduced above. Specifically, for the dataset \(\mathcal{X}\) we use the mutation methods in Tab. IV to generate \(\mathcal{X}^{\prime}\), then we feed these cases into the model to calculate the FVS for each feature map in the first layer. The results are shown in Tab. III: the 3rd feature map has a low FVS value, while the 1st and 6th have high FVS values, which means the 3rd feature map is robust to perturbation while the 1st and 6th are vulnerable. These observations are consistent with Tab. II, indicating the interchangeability of FVS and FAS in label-agnostic scenarios. We then further illustrate other combinations to evaluate the FVS. As shown in Fig. 4, FVS effectively captures the most vulnerable feature maps in the model. We find that FVS and FAS have the same distribution under the same model, which means we can use FVS to replace FAS in label-agnostic scenarios (i.e., with unlabeled data).
### _Enhancing the DNN with DeepFeature_
#### III-E1 Test Case Selection From Unlabeled Data
Both testing and repairing a DNN-driven system rely on manually labeled data. While collecting a massive amount of unlabeled data is usually easy, the cost of manual labeling is much greater; for data requiring strong domain expertise (e.g., medical data), it is unrealistic to blindly label all collected data. Therefore, DNN test case selection is crucial for selecting valuable data and reducing the labeling cost.
To select valuable test cases from unlabeled data, we propose an FVS-guided test case selection method, which selects test cases with high FVS values. Algorithm 2 demonstrates the workflow of our FVS-guided test case selection. Specifically, we first feed the unlabeled test cases and their benign mutation cases into \(FVS\text{-}Selection()\), which uses \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\) to obtain the vulnerable feature map list (Algorithm 2, line 2). After obtaining the vulnerable feature map list, we begin to select test cases: for each input \(x\) and its mutation case \(x^{\prime}\), we calculate the \(FM\_score\), and we finally return the \(N\) cases \(\mathcal{X}_{f}\) with the largest \(FM\_score\).
#### III-E2 Data augmentation by Fuzzing
Another important application of DeepFeature is generating data to enlarge the dataset when it is limited. As one of the main elements for training DNN models, the dataset's size seriously affects the trained model's performance. We apply DeepFeature to the data augmentation task by using our FMA-Fuzzer to expand the dataset. Algorithm 3 presents the details of the proposed FMA-Fuzzer. The inputs include the model \(f_{\theta}\), the original test cases \(\mathcal{X}\), the fuzzing boundary \(\epsilon\), the maximum number of fuzzing steps per test case \(steps\), and the fuzzing step size \(step\_size\). For each test case \(x\in\mathcal{X}\), FMA-Fuzzer generates multiple samples, one per vulnerable feature map (Alg. 3). Specifically, for each test case \(x\), we iteratively generate perturbations targeting each feature map by maximizing the FVS value (Alg. 3, lines 5-9). As the first step, we add a random perturbation to generate the initial \(x^{\prime}\), since we need to make sure the \(loss\), formulated as:
\[loss=FVS(f_{\theta}^{c}(x^{\prime}),f_{\theta}^{c}(x))\]
starts at a non-zero value to obtain a non-zero gradient. The generated sample is then added to the fuzzing list \(\mathcal{X}_{f}\) if the model misclassifies it.
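Since Algorithm 3 itself is not reproduced here, the following hedged sketch conveys the loop described above; it reuses the `fma` function from the Algorithm 1 sketch, and the interface is our assumption:

```python
import torch

def fma_fuzzer(model, vulnerable_maps, xs, ys, steps=7, alpha=1/255, eps=4/255):
    """Mutate the batch against each vulnerable feature map; keep misclassified mutants."""
    fuzzed = []
    for fm in vulnerable_maps:
        x_adv = fma(fm, xs, steps=steps, alpha=alpha, eps=eps)  # Algorithm 1 sketch
        with torch.no_grad():
            wrong = model(x_adv).argmax(dim=1) != ys
        fuzzed.append(x_adv[wrong])  # only keep fault-triggering cases
    return torch.cat(fuzzed) if fuzzed else xs[:0]
```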
## IV Evaluation
We evaluate DeepFeature and answer the following research questions.
* **RQ1:** How effective is DeepFeature's test case selection?
* **RQ2:** How effective is DeepFeature's fuzzing algorithm?
* **RQ3:** Can DeepFeature behave stably under different settings?
### _Experiment Setup_
Datasets and Models. We adopt four widely used image classification benchmark datasets for the evaluation (i.e., MNIST [16], Fashion MNIST [21], SVHN [24], and CIFAR10 [18]), which are among the most commonly used datasets in deep learning testing [3, 4, 6, 10, 12, 13, 14, 15, 23, 25, 26, 27]. Table I presents the details of the datasets and models. The MNIST [16] dataset is a large collection of handwritten digits, containing a training set of 60,000 examples and a test set of 10,000 examples. The CIFAR-10 [18] dataset consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class; there are 50,000 training images and 10,000 test images. Fashion [21] is a dataset of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples; each example is a 28x28 grayscale image associated with a label from 10 classes. SVHN [22] is a real-world image dataset similar in flavor to MNIST (e.g., the images are of small cropped digits), obtained from existing house numbers in Google Street View images. The models we evaluate include LeNet [17], VGG [20], and ResNet [19], which are also commonly used in deep learning testing tasks [3, 4, 6, 10, 12, 13, 14, 15, 23, 25, 26, 27].
Test Case Generation. We follow prior data simulation work [7, 26, 27] to generate realistic test cases. Specifically, we use seven well-used benign mutations (i.e., shift, rotation, scale, shear, contrast, brightness, and blur) to generate test cases with their original labels. The configurable parameters of the mutations are shown in Tab. IV. We do not use adversarial attacks (e.g., FGSM, PGD, and BIM) to generate test cases because such data cannot represent data collected in real-world scenarios and may lead to unreliable conclusions [28]. During test case generation, for each test data point in the dataset, we randomly select one benign data augmentation from our seven augmentations to mutate a test case with its original label. When the original test size is 10,000, we generate the same number of test cases.
Fuzzing Case Generation. A branch of existing works [3, 13, 15, 25, 29, 30] has been proposed for generating fuzzing cases. To show the effectiveness of DeepFeature, we select the state-of-the-art neuron coverage-guided fuzzing method (i.e., ADAPT [13]) and DeepMutation++ [15] as our baselines. Compared with other existing works, ADAPT integrates 29 neuron characteristics (e.g., NAC), while others only have one to four different characteristics, which are already covered by ADAPT. DeepMutation++ also integrates multiple coverage-guided fuzzing metrics, e.g., neuron-level (NAC) and layer-level (e.g., TKNC) ones. We believe evaluating DeepFeature's fuzzing module against ADAPT and DeepMutation++ demonstrates DeepFeature's effectiveness. We also notice that DeepHyperion [31] generates test cases from the feature-map level, so we choose it as one of our baselines.
Test Case Selection. Multiple test case selection methods have been proposed by recent works (e.g., NC-guided [3, 4, 6, 26, 32], priority-based [25, 33], active-learning-guided [27, 34], robustness-guided [23, 35, 36], and SA-guided [14]). However, robustness-guided selection [23, 35, 36] focuses on model robustness; as mentioned by Zhang et al. [37], there is a trade-off between accuracy and robustness, which means that using these strategies (e.g., RobOT [23] and PACE [36]) to increase model robustness decreases accuracy. We therefore do not choose these strategies as baselines.
To show the effectiveness of DeepFeature, we use the most widely used metric (i.e., NAC) and its fine-grained version (i.e., KMNC) proposed by DeepGauge [4]. To evaluate DeepFeature against SOTA coverage metrics, we take NPC [32] as a baseline: compared with DC/MC, it can directly evaluate the DNN's decision logic flow for each connected layer, which reduces the overhead of DC/MC. Since DeepFeature's test case selection is a prioritization technique (i.e., it prioritizes test cases with certain rules and returns the cases with higher priority), we take DeepGini, the SOTA open-sourced prioritization technique, as a baseline; compared with PRIMA [33], it needs only 1/100 of the time to select test cases. To compare DeepFeature with SA metrics, we use DSA, proposed by Kim et al. [14]. To evaluate DeepFeature against the current SOTA active-learning-guided selection strategy, we use ATS [27] as a baseline. Finally, we also use Random Selection (RS) as a natural baseline, which helps us evaluate whether a selection method is effective at all. The configurable parameters of the selection strategies are shown in Tab. V.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Criteria & Parameters & Parameter Config \\ \hline Random & - & - \\ NAC & t (threshold) & 0.5 \\ KMNC & k (k-bins) & 1000 \\ NPC & \(\alpha\) & 0.7 \\ DSA & \(n\) & 1000 \\ Gini & None & None \\ ATS & None & None \\ DeepFeature & K=5, step\_size=7, \(\alpha=\epsilon/4=1/255\) \\ \hline \hline \end{tabular}
\end{table} TABLE V: The parameter configuration of test case selection. The \(step\_size=7\) is from Fig. 3.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Transformations & Parameters & Parameter ranges \\ \hline Shift & \((s_{x},s_{y})\) & [0.05, 0.15] \\ Rotation & \(q\) (\(degree\)) & [5, 25] \\ Scale & \(r\) (\(ratio\)) & [0.8,1.2] \\ Shear & \(s\) (\(angle\)) & [15, 30] \\ contrast & \(\alpha\) (\(gain\)) & [0.5,1.5] \\ Brightness & \(\beta\) (\(bias\)) & [0.5,1.5] \\ Blur & \(ks\) (\(kernelsize\)) & \{3,5,7\} \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Transformations and parameters used by DeepFeature for generating test cases.
### **RQ1: How effective is DeepFeature's test case selection?**
#### IV-B1 Fault Detection Effectiveness
Similar to traditional software testing [38, 39, 40], test case selection tries to find valuable test cases from a large pool of candidate unlabeled tests, which can reduce the cost of manual labeling when labeling resources are limited. For a given selection method, a selected test set that triggers more faults reveals more defects in the software. We take the **F**ault **D**etection **R**ate (FDR) as our evaluation metric to measure the effectiveness of DeepFeature's test case selection method. Specifically, the FDR is defined as follows:
\[FDR(X)=\frac{|X_{wrong}|}{|X|}\]
where \(|X|\) denotes the size of the selected test set, and \(|X_{wrong}|\) represents the number of test cases misclassified by the DNN.
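In code, the metric is a one-liner (a sketch with the same assumed classifier interface as above):

```python
import torch

def fdr(model, xs, ys):
    """Fault Detection Rate: fraction of selected cases the model misclassifies."""
    with torch.no_grad():
        return (model(xs).argmax(dim=1) != ys).float().mean().item()
```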
As shown in Tab. VI, we compare the FDR of DeepFeature and our baselines at different test case selection rates (5%, 10%, 15%, and 20%). First, we find that NC-guided test case selection methods (i.e., NAC, KMNC, and NPC) perform poorly in fault detection, and their FDR is sometimes very similar to RS's, which indicates that neuron coverage is not a proper metric to guide test case selection; this is consistent with previous research [10, 28]. We also notice that DSA performs similarly to RS, which may be because surprise adequacy is not correlated with FDR or with prediction confidence. Compared to the coverage-guided baselines, DeepFeature, DeepGini, and ATS show higher defect detection capability, while DeepFeature still achieves the best results for most combinations. Specifically, compared to coverage-guided and prioritization test case selection methods, DeepFeature improves FDR by up to 49.32% and 24.8% on MNIST&LeNet1, respectively. We believe the main reason DeepFeature achieves a higher FDR is that, as a feature-map-guided testing technique, DeepFeature can detect more types of faults, including those that
\begin{table}
\begin{tabular}{l|c|c|c c c c c c|c|c c c c c c c} \hline \hline \multirow{2}{*}{Dataset(DNN)} & \multicolumn{8}{c|}{Select 5\% Test Cases} & \multicolumn{8}{c}{Select 10\% Test Cases} \\ & Our & NAC & KMNC & NPC & DSA & Gini & ATS & RS & Our & NAC & KMNC & NPC & DSA & Gini & ATS & RS \\ \hline MNIST (L-1) & **84.4** & 22.6 & 44.2 & 20.2 & 22.2 & 61.8 & 58.7 & 21.3 & **78.0** & 20.8 & 38.7 & 20.2 & 22.5 & 55.7 & 54.3 & 21.3 \\ MNIST (L-5) & **66.2** & 18.4 & 25.8 & 21.6 & 22.6 & 58.2 & 60.2 & 18.8 & **61.4** & 18.1 & 20.8 & 20.0 & 21.3 & 50.5 & 52.3 & 18.8 \\ Fashion (L-1) & **67.0** & 32.7 & 35.9 & 31.0 & 31.1 & 57.8 & 53.3 & 31.2 & **63.8** & 30.9 & 35.9 & 31.0 & 48.2 & 48.7 & 31.0 \\ Fashion (R-20) & **79.8** & 21.1 & 23.7 & 30.5 & 24.1 & 55.0 & 40.3 & 26.3 & **70.6** & 20.5 & 22.0 & 28.4 & 27.1 & 48.7 & 35.2 & 26.2 \\ SVHN (L-5) & **73.0** & 29.1 & 28.8 & 31.0 & 31.1 & 53.2 & 47.8 & 28.8 & **67.6** & 29.1 & 27.2 & 31.0 & 31.1 & 47.3 & 42.1 & 29.1 \\ SVHN (V-16) & **63.6** & 21.5 & 16.2 & 18.9 & 24.5 & 53.0 & 55.3 & 16.0 & **59.6** & 19.6 & 16.2 & 17.1 & 23.4 & 43.1 & 48.7 & 16.0 \\ CIFAR-10 (V-16) & **67.4** & 28.4 & 16.4 & 30.6 & 43.6 & 60.2 & 62.1 & 30.9 & **60.2** & 29.4 & 17.4 & 30.3 & 43.3 & 56.2 & 56.3 & 30.8 \\ CIFAR-10 (R-20) & **59.6** & 28.2 & 24.6 & 26.0 & 27.4 & 50.4 & 50.2 & 28.8 & **53.7** & 25.8 & 21.8 & 28.7 & 27.4 & 45.0 & 47.1 & 28.9 \\ \hline \hline \multicolumn{12}{c}{Select 15\% Test Cases} & \multicolumn{8}{c}{Select 20\% Test Cases} \\ & Our & NAC & KMNC & NPC & DSA & Gini & ATS & RS & Our & NAC & KMNC & NPC & DSA & Gini & ATS & RS \\ \hline MNIST (L-1) & **70.7** & 21.6 & 35.3 & 20.6 & 23.6 & 50.1 & 47.6 & 21.3 & **62.1** & 22.0 & 35.7 & 20.9 & 23.5 & 45.4 & 41.5 & 21.3 \\ MNIST (L-5) & **56.2** & 18.5 & 19.3 & 19.3 & 21.0 & 46.2 & 40.2 & 18.7 & **51.5** & 18.4 & 19.4 & 19.1 & 20.8 & 42.3 & 37.1 & 18.7 \\ Fashion (L-1) & **58.4** & 30.5 & 34.2 & 31.0 & 31.0 & 44.4 & 45.1 & 31.21 & **56.8** & 30.3 & 34.6 & 31.1 & 31.0 & 39.4 & 40.1 & 31.3 \\ Fashion (R-20) & **66.1** & 21.1 & 22.2 & 27.3 & 27.8 & 47.2 & 30.3 & 26.2 & **60.7** & 21.4 & 22.8 & 26.7 & 27.6 & 42.7 & 28.5 & 26.2 \\ SVHN (L-5) & **64.7** & 29.1 & 27.5 & 30.9 & 31.0 & 41.3 & 33.9 & 28.9 & **62.3** & 29.1 & 28.3 & 30.9 & 31.0 & 35.4 & 33.1 & 28.9 \\ SVHN (V-16) & **59.3** & 18.6 & 15.5 & 17.0 & 23.5 & 34.7 & 43.2 & 15.9 & **58.6** & 18.2 & 15.6 & 16.6 & 23.1 & 28.9 & 40.1 & 15.9 \\ CIFAR-10 (V-16) & **54.4** & 27.0 & 17.7 & 30.6 & 42.8 & 51.9 & 50.5 & 30.8 & **50.8** & 29.8 & 18.3 & 30.7 & 42.3 & 48.5 & 46.1 & 30.8 \\ CIFAR-10 (R-20) & **48.3** & 25.6 & 21.5 & 28.8 & 27.2 & 40.6 & 42.7 & 28.9 & **47.9** & 26.4 & 20.7 & 28.5 & 26.4 & 35.4 & 36.9 & 28.9 \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Fault Detection Rate of DeepFeature and baselines.
\begin{table}
\begin{tabular}{l|c|c|c c c c c c|c c c c c c c} \hline \hline \multirow{2}{*}{Dataset(DNN)} & \multicolumn{8}{c}{Select 5\% Test Cases} & \multicolumn{8}{c}{Select 10\% Test Cases} \\ & Our & NAC & KMNC & NPC & DSA & Gini & ATS & RS & Our & NAC & KMNC & NPC & DSA & Gini & ATS & RS \\ \hline MNIST (L-1) & **8.01** & 7.07 & 7.31 & 7.10 & 7.10 & 7.54 & 7.41 & 7.09 & **8.18** & 7.14 & 7.42 & 7.14 & 7.15 & 7.91 & 7.82 & 7.14 \\ MNIST (L-5) & **8.65** & 7.04 & 7.24 & 7.03 & 7.04 & 7.57 & 7.21 & 7.04 & **9.27** & 7.06 & 7.31 & 7.08 & 7.07 & 8.21 & 7.24 & 7.06 \\ Fashion (L-1) & **6.03** & 4.95 & 5.01 & 4.98 & 4.97 & 5.31 & 5.73 & 4.97 & **6.44** & 5.04 & 5.11 & 5.05 & 5.03 & 5.77 & 6.23 & 5.04 \\ Fashion (R-20) & **6.17** & 6.08 & 6.10 & 6.09 & 6.10 & 6.13 & 6.15 & 6.08 & **7.39** & 6.14 & 6.23 & 6.12 & 6.16 & 6.37 & 6.42 & 6.14 \\ SVHN (L-5) & **7.10** & 3.86 & 3.94 & 3.85 & 3.86 & 4.21
induce feature-map-type defects rather than neuron-type defects (see Sec. IV-B3).
#### IV-B2 Repairing Effectiveness
Finding faults and using them to enhance the DNN model by retraining is the ultimate purpose of DNN testing. To evaluate the repairing effectiveness of DeepFeature, we add the selected test cases, with selection ratios ranging from 5% to 20%, to retrain the models, as listed in Table VII. For each dataset & model combination, we use the same training hyperparameters (e.g., epochs, optimizer settings) to perform a fair comparison. Specifically, all models are trained for 40 epochs, with the learning rate initialized to 0.001 and stepped down to one-tenth of its value at the 20th and 30th epochs. We use SGD as the optimizer with a Nesterov momentum of 0.99. Detailed accuracy improvement results are listed in Tab. VII. DeepFeature achieves the highest model accuracy improvement across all datasets, models, and test case selection ratios (e.g., on the CIFAR-10 & ResNet20 combination, DeepFeature increases accuracy by 5.81% while the baselines increase it by at most 3.91%), which indicates that DeepFeature can repair more faults in DNN models than our baselines.
#### IV-B3 Fault Type Detection Effectiveness
In this part, we delve deeper into why DeepFeature has a higher FDR and can improve accuracy more than the baselines. Insights from traditional software testing show that error-inducing inputs tend to be dense, suggesting that in DNN testing, detecting a greater variety of errors may be as important as detecting more errors. We use the concept of fault type to investigate this. For a given test case \(x\) that has been misclassified, its fault type is defined as:
\[Fault\_Type(x)=(Predict(x)\to Label(x))\]
where \(Label(x)\) denotes the ground-truth label and \(Predict(x)\) denotes the DNN's prediction. For instance, if \(Predict(x)\) is "7" while \(Label(x)\) is "9", then the fault type of \(x\) is denoted as \(Fault\_Type(x)=7\to 9\). For a typical classification dataset with 10 categories, the number of possible fault types is \(10\times 9=90\). Since the candidate test cases may not introduce all types of errors, we use the _Fault Type Coverage Rate (FTCR)_, the proportion of fault types introduced by the selected test cases among the fault types introduced by all candidate samples, to quantify the ability to select diverse faults.
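A small sketch of the fault-type bookkeeping (function and variable names are ours):

```python
def fault_types(preds, labels):
    """Set of (prediction -> label) fault types among misclassified cases."""
    return {(int(p), int(y)) for p, y in zip(preds, labels) if int(p) != int(y)}

def ftcr(sel_preds, sel_labels, all_preds, all_labels):
    """FTCR: fault types covered by the selected cases, relative to the
    fault types present in the whole candidate set."""
    covered = fault_types(sel_preds, sel_labels)
    total = fault_types(all_preds, all_labels)
    return len(covered) / max(len(total), 1)
```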
Plots of FTCR with the percentage of selected cases increasing from 1% to 20% are shown in Fig. 5. Our selection method achieves better fault diversity compared to random selection and the baseline methods under the vast majority of dataset & model combinations, indicating that DeepFeature can detect more types of faults in a DNN, which explains why DeepFeature has a higher FDR and yields larger accuracy gains than the baselines. Second, in most of our experiments, the FTCR of neuron-coverage-driven selection stops increasing after reaching 80%; even if we increase the selection rate further, their FTCR still converges to 80%. However, DeepFeature's FTCR continues to increase to 100%, implying that some cases are not detected by neuron-coverage-guided selection. In contrast, DeepFeature can detect them, implying that these cases induce feature-map-type errors.
As shown in Tab. VIII, we also calculate the ratio of the area under the curve (RAUC) of FTCR to quantitatively demonstrate how well each approach identifies various defects. Existing neuron-coverage metrics are generally weaker than even random selection in detecting diverse defects, and the homogeneous faults from neuron-coverage-guided selection make only a trivial contribution to follow-up repairing. In contrast, DeepFeature selects more types of faults, yielding greater accuracy improvement after retraining (repairing) the model.
_Answer to RQ1: DeepFeature can select more diverse and valuable test cases, with which we repair the model to further improve its accuracy._
### **RQ2: How effective and efficient is DeepFeature's fuzzing algorithm?**
To answer this question, we compare DeepFeature with the SOTA fuzzing algorithms ADAPT, DeepMutation++ (DeepM++), and DeepHyperion (DeepH). We run DeepFeature and the baselines for the same period of time (i.e., 5 minutes) to generate test cases. For the baselines' parameters, we follow the default settings in their original papers (i.e., the fuzzing time for each case is limited to 1 second, and the coverage settings follow Tab. V). We then retrain the models with the generated cases and re-evaluate them to compare the accuracy improvement brought by DeepFeature and the baselines.
Table IX shows our experimental results. For the same amount of time, DeepFeature mutates more valuable fuzzing cases, i.e., cases that will be misclassified by the model, than the baselines. We also evaluate the quality of the generated fuzzing cases. Since the baselines produce fewer cases than DeepFeature, we randomly select the same number of cases from DeepFeature's output to make the comparison fair (e.g., we select only 771 cases from DeepFeature in the MNIST & LeNet1 combination). We then retrain the model using these fuzzing cases. The results reveal that DeepFeature brings a greater improvement in model accuracy in the majority of dataset & model combinations (e.g., we improve the accuracy of MNIST & LeNet1 to 93.45%, while the baselines only reach 92.84% and 92.88%), indicating that the test cases fuzzed by DeepFeature are more valuable than those of the baselines.
We also observe that DeepFeature generates 5 to 20 times more misclassified cases than DeepH. This is because DeepHyperion uses illumination search to generate test cases for every feature map in the model, which makes it time-consuming. Moreover, when the same number of cases is used to repair the model, DeepFeature yields a larger accuracy increase than DeepH: the cases generated by DeepFeature focus on repairing vulnerable feature maps, while those generated by DeepH may fine-tune feature maps that are already robust, which decreases the repairing efficiency.
_Answer to RQ2: DeepFeature is more effective and efficient than the baseline approaches._
### **RQ3: Can DeepFeature behave stably under different settings?**
Number of vulnerable feature maps. The FDR results in Tab. VI are based on the five most vulnerable feature maps across all datasets and models. However, a fixed number of feature maps may not generalize well to models of different sizes, so we also analyze how the number of vulnerable feature maps affects the estimation of the _value_ of a test case and, in turn, the FDR. Specifically, for each experimental combination, we use the top-1, 5, 10, 15, 20, and 25 vulnerable feature maps to guide test case selection. Results are listed in Tab. X. We find that once the number of selected vulnerable feature maps exceeds 5, the effectiveness of DeepFeature becomes stable. For instance, when the number increases from 5 to 25, DeepFeature's FDR only changes from 62.10% to 62.01% in the MNIST & LeNet1 combination, and the RAUC only changes from 87.97% to 88.55%, indicating that DeepFeature is stable under different numbers of selected vulnerable feature maps.
Perturbation. In Algorithm 1, we use \(\epsilon\) to constrain the maximum perturbation of the FMA. To evaluate the impact of \(\epsilon\) and \(\alpha\) on the model, we use four different \(\epsilon\) and \(\alpha\) settings to compare DeepFeature's effectiveness (for ease of discussion, we set \(\alpha=\epsilon/4\)). The experimental results are shown in Tab. XI. We observe that under different perturbation strengths, DeepFeature's effectiveness does not change much for any model. For instance, when \(\alpha\) increases from \(1/255\) to \(4/255\), DeepFeature's FDR only changes from 62.10% to 62.29% in the MNIST & LeNet1 combination, and the RAUC only changes from 87.97% to 88.17%, indicating that DeepFeature is stable under different perturbation sizes.
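As a rough illustration of how \(\alpha\) and \(\epsilon\) interact, the sketch below shows a single generic \(\epsilon\)-clipped perturbation step in PyTorch. This is an FGSM-style update written for illustration only; the actual FMA update in Algorithm 1 may differ, and the gradient tensor is a placeholder supplied by the caller.

```python
import torch

def perturb_step(x, x_orig, grad, alpha=1 / 255, eps=4 / 255):
    """One iterative perturbation step: move by alpha along the gradient
    sign, then clip back into the eps-ball around the original input."""
    x_adv = x + alpha * grad.sign()
    x_adv = torch.clamp(x_adv, x_orig - eps, x_orig + eps)  # enforce eps bound
    return torch.clamp(x_adv, 0.0, 1.0)  # keep a valid pixel range
```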
_Answer to RQ3: DeepFeature works stably under different hyper-parameter settings._
### _Threats to Validity_
**Test subject selection.** The selection of evaluation subjects (i.e., datasets and DNN models) could be a threat to validity. We counter this by using four commonly studied datasets (i.e., MNIST, Fashion, SVHN, and CIFAR-10) and four well-known pre-trained DNN models (i.e., LeNet-1, LeNet-5, ResNet-20, and VGG-16) of different sizes and complexity, ranging from 3,350 neurons to more than 35,749,834 neurons. However, this does not guarantee that DeepFeature can be applied to all models; additional models will be used to evaluate DeepFeature in future work.
**Data simulation.** Another threat to validity comes from the augmented test input generation. To generate test cases, we choose seven widely used benign mutations (i.e., shift, rotation, scale, shear, contrast, brightness, and blur) to simulate faults from different sources and granularities. Although these simulations closely resemble natural environmental noise, there is no guarantee that the distribution of real unseen inputs matches our simulation. Additional experiments based on real unseen inputs need to be conducted in future work.
**Parameter settings.** The last threat could be the parameter settings of the baselines. To compare with neuron coverage-guided and prioritization-based test set selection methods, we reproduced
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline & ADAPT & DeepM++ & DeepH & Our \\ Dataset\&Model & Cases/Acc(\%) & Cases/Acc(\%) & Cases/Acc(\%) & Cases/Acc(\%) \\ \hline MNIST (LeNet1) & 771/92.84 & 3071/92.88 & 2315/92.81 & 41196/93.45 \\ MNIST (LeNet5) & 2254/95.22 & 5032/96.31 & 2431/95.33 & 46048/97.92 \\ Fashion (LeNet1) & 6578/08.01 & 1057/187.27 & 7759/81.03 & 4820/85.36 \\ Fashion (ResNet20) & 11059/1.61 & 3086/91.83 & 10379/00.07 & 12422/91.75 \\ SVHN (VCG16) & 5599/93.77 & 3989/93.20 & 1277/93.39 & 1941/94.21 \\ CIFAR10 (VGG16) & 73373/81.62 & 2554/84.33 & 1123/83.55 & 10244/856.96 \\ CIFAR10 (ResNet200) & 1622/86.14 & 348/86.93 & 2441/86.76 & 12255/88.86 \\ \hline \hline \end{tabular}
\end{table} TABLE IX: Fuzz test cases generated by DeepFeature and the baselines within 5 minutes that are **misclassified** by the DNN. In the repairing experiments, we randomly select the same number of cases as ADAPT to eliminate the effect of training set size (e.g., we select only 771 cases from DeepFeature and the other baselines in MNIST & LeNet1). DeepM++ and DeepH denote DeepMutation++ and DeepHyperion.
\begin{table}
\begin{tabular}{c|c c c c c c c c} \hline \hline Dataset & \multicolumn{2}{c}{MNIST} & \multicolumn{2}{c}{Fashion} & \multicolumn{2}{c}{CIFAR10} & SVHN \\ Model & LeNet1 & LeNet5 & LeNet1 & ResNet20 & VGG16 & ResNet20 & LeNet5 & VGG16 \\ \hline NAC & 57.36 & 55.49 & 54.12 & 46.71 & 58.81 & 65.53 & 69.75 & 47.15 \\ KMNC & 81.48 & 55.86 & 81.57 & 52.63 & 65.39 & 64.71 & 88.35 & 47.98 \\ NPC & 50.68 & 61.29 & 52.78 & 63.15 & 62.07 & 65.07 & 65.61 & 49.84 \\ DSA & 59.77 & 56.46 & 60.72 & 70.45 & 77.03 & 68.27 & 63.83 & 67.02 \\ Gini & 85.00 & 85.33 & 82.96 & 62.28 & 62.23 & 78.53 & 91.54 & **88.47** \\ ATS & 84.48 & 83.98 & 81.57 & 62.28 & 63.48 & 75.42 & 80.96 & 85.69 \\ RS & 74.01 & 65.37 & 70.04 & **71.99** & 74.15 & 72.66 & 90.02 & 74.62 \\ Our & **87.97** & **86.03** & **84.86** & 70.07 & **77.25** & **79.42** & **95.17** & 86.11 \\ \hline \hline \end{tabular}
\end{table} TABLE VIII: The ratio of area under the curve (RAUC) of the FTCR plots when selecting 20% of the test cases.
Fig. 5: The cumulative fault type coverage rate (FTCR) achieved by our selection method. We only report four combinations due to page limitation. Please refer to Tab. VIII for quantitative results.
existing coverage methods for DNNs, which may include user-defined parameters. With different parameter settings, the selected data could differ. To alleviate potential bias, we follow the authors' suggested settings or employ the default settings from the original papers. Due to page constraints, the impact of different \(step\_size\) values is not discussed in this paper and will be evaluated in future work.
## V Related Work
This section discusses related work in two groups: neuron coverage metrics and deep learning testing.
### _Neuron Coverage Metrics_
Pei et al. [3] propose the first white-box coverage criterion (i.e., NAC), which calculates the percentage of activated neurons. Since NAC is a coarse-grained neuron metric, DeepGauge [4] extends it with a set of finer-grained coverage criteria that consider the distribution of neuron outputs on the training data. Noticing that DNNs have millions of neurons and testing all of them incurs a large overhead, [6] propose IDC, which focuses on the important neurons in a DNN. Inspired by coverage criteria in traditional software testing, several conventional coverage metrics [7, 41, 42] have been proposed. DeepCover [41] proposes MC/DC coverage for DNNs based on the dependence between neurons in adjacent layers. To explore how inputs affect a neuron's internal decision logic flow, Xie et al. [32] propose NPC, which measures neuron path coverage. DeepCT [42] adopts the combinatorial testing idea and proposes a coverage metric that considers the combination of different neurons at each layer. DeepMutation [7] brings mutation testing into DL testing and proposes a set of operators to generate mutants of a DNN. Furthermore, DeepConcolic [5] analyzes the limitations of existing coverage criteria and proposes a finer-grained coverage metric that considers the relationships between two adjacent layers and the combinations of neuron values at each layer. Since neuron coverage testing cannot detect fault-inducing feature maps in a DNN, in this work we propose DeepFeature to address this problem: it detects faults that are induced by the feature maps of the DNN (Sec. IV-B).
### _Deep Learning Testing_
Pei et al. [3] propose the first deep learning testing technique (i.e., DeepXplore), which tests a DNN's internal neuron activation patterns. Based on DeepXplore's coverage metric, Tian et al. [26] propose DeepTest, which generates test cases to explore DNN behaviors. Noticing that DeepXplore is coarse-grained, Ma et al. [4] propose DeepGauge, which tests DNNs with finer-grained metrics. Building on DeepGauge [4], Ma et al. propose DeepMutation [7] and DeepMutation++ [15], which mutate test cases to expose incorrect model behaviors. Odena et al. [29] also use the above-mentioned coverage metrics to generate test cases. Since the generated test cases are not always useful, and both feeding all test cases into the model and the benign selection strategies used by coverage metrics are time-consuming, test case selection techniques are needed to reduce this cost.
On top of the above-mentioned testing techniques, several automated testing techniques [3, 5, 12, 15, 26, 29, 41] have been proposed to generate test inputs that expose incorrect DNN behaviors. In addition, while neuron coverage-guided testing techniques are widely studied, existing work [10, 25, 27, 28] has found that NC-guided testing cannot detect all types of faults in a DNN. Such findings motivate this work, which proposes a new type of test unit, the feature map, to detect faults that are missed by neuron-criteria testing techniques.
## VI Conclusion
In this work, we propose DeepFeature, which tests DNNs at the feature map level. Unlike existing neuron-level testing techniques, DeepFeature takes the feature map as the testing unit, delving into every inner feature map the model has learned and detecting vulnerable ones during testing. The key components of DeepFeature are its testing metrics, FVS and FAS: FVS quantifies the vulnerability of the model's feature maps, and FAS measures the impact of vulnerable feature maps on the model's accuracy. We further propose a feature map-guided test case selection method, which selects test cases by measuring each case's FVS value on the vulnerable feature maps. Compared with coverage-guided and prioritization-based test case selection methods, DeepFeature's selection increases the fault detection rate by 49.97% and 24.8% on average, respectively. Finally, we utilize the
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline \multirow{2}{*}{Dataset (DNN)} & \multicolumn{5}{c}{FDR of Top k vulnerable feature maps} \\ & 1 & 5 & 10 & 15 & 20 & 25 \\ \hline MNIST (LeNet-1) & 56.15 & 62.10 & 62.72 & 62.30 & 62.10 & 62.01 \\ Fashion (ResNet-20) & 46.85 & 60.70 & 60.30 & 60.01 & 58.30 & 58.30 \\ SVHN (LeNet-5) & 50.20 & 62.34 & 62.32 & 62.0 & 62.17 & 62.15 \\ CIFAR-10 (ResNet-20) & 51.30 & 50.40 & 50.15 & 49.80 & 50.35 & 50.75 \\ \hline \multirow{2}{*}{Dataset (DNN)} & \multicolumn{5}{c}{RAUC of Top k vulnerable feature maps} \\ & 1 & 5 & 10 & 15 & 20 & 25 \\ \hline MNIST (LeNet-1) & 88.09 & 87.97 & 88.31 & 88.42 & 88.51 & 88.55 \\ Fashion (ResNet-20) & 59.68 & 70.07 & 66.66 & 60.63 & 64.84 & 62.30 \\ SVHN (LeNet-5) & 50.78 & 95.17 & 94.82 & 95.23 & 94.46 & 95.10 \\ CIFAR-10 (ResNet-20) & 76.89 & 78.78 & 81.11 & 78.89 & 81.63 & 85.93 \\ \hline \hline \end{tabular}
\end{table} TABLE X: FDR and RAUC with the number of vulnerable feature maps ranging from 1 to 25, when 20% of the test cases have been selected. We only report four combinations; the other combinations show similar results.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline \multirow{2}{*}{Dataset (DNN)} & \multicolumn{5}{c}{FDR of different \(\alpha\)} \\ & 1/255 & 2/255 & 3/255 & 4/255 \\ \hline MNIST (LeNet-1) & 62.10 & 62.17 & 62.24 & 62.29 \\ Fashion (ResNet-20) & 60.70 & 60.78 & 60.85 & 60.90 \\ SVHN (LeNet-5) & 62.34 & 62.39 & 62.43 & 62.48 \\ CIFAR-10 (ResNet-20) & 50.40 & 50.48 & 50.55 & 50.60 \\ \hline \multirow{2}{*}{Dataset (DNN)} & \multicolumn{5}{c}{RAUC of different \(\alpha\)} \\ & 1/255 & 2/255 & 3/255 & 4/255 \\ \hline MNIST (LeNet-1) & 87.97 & 88.09 & 88.14 & 88.17 \\ Fashion (ResNet-20) & 70.07 & 70.14 & 70.17 & 70.18 \\ SVHN (LeNet-5) & 95.17 & 95.30 & 95.41 & 94.44 \\ CIFAR-10 (ResNet-20) & 78.78 & 78.89 & 78.89 & 78.90 \\ \hline \hline \end{tabular}
\end{table} TABLE XI: FDR and RAUC with different \(\alpha\) and \(\epsilon\).
proposed FVS metric to automatically fuzz more valuable test cases, which repair vulnerable feature maps and improve the model's accuracy. The source code of DeepFeature, with all evaluation details, is available on our GitHub page [43].
|
2305.16475 | Initialization-Dependent Sample Complexity of Linear Predictors and
Neural Networks | We provide several new results on the sample complexity of vector-valued
linear predictors (parameterized by a matrix), and more generally neural
networks. Focusing on size-independent bounds, where only the Frobenius norm
distance of the parameters from some fixed reference matrix $W_0$ is
controlled, we show that the sample complexity behavior can be surprisingly
different than what we may expect considering the well-studied setting of
scalar-valued linear predictors. This also leads to new sample complexity
bounds for feed-forward neural networks, tackling some open questions in the
literature, and establishing a new convex linear prediction problem that is
provably learnable without uniform convergence. | Roey Magen, Ohad Shamir | 2023-05-25T21:09:11Z | http://arxiv.org/abs/2305.16475v2 | # Initialization-Dependent Sample Complexity
###### Abstract
We provide several new results on the sample complexity of vector-valued linear predictors (parameterized by a matrix), and more generally neural networks. Focusing on size-independent bounds, where only the Frobenius norm distance of the parameters from some fixed reference matrix \(W_{0}\) is controlled, we show that the sample complexity behavior can be surprisingly different than what we may expect considering the well-studied setting of scalar-valued linear predictors. This also leads to new sample complexity bounds for feed-forward neural networks, tackling some open questions in the literature, and establishing a new convex linear prediction problem that is provably learnable without uniform convergence.
## 1 Introduction
In this paper, we consider the sample complexity of learning function classes, where each function is a composition of one or more transformations given by
\[\mathbf{x}\ \rightarrow\ f(W\mathbf{x})\,\]
where \(\mathbf{x}\) is a vector, \(W\) is a parameter matrix, and \(f\) is some fixed Lipschitz function. A natural example is vanilla feed-forward neural networks, where each such transformation corresponds to a layer with weight matrix \(W\) and some activation function \(f\). A second natural example is vector-valued linear predictors (e.g., for multi-class problems), where \(W\) is the predictor matrix and \(f\) corresponds to some loss function. A special case of the above is the class of scalar-valued linear predictors (composed with some scalar loss or nonlinearity \(f\)), namely \(\mathbf{x}\to f(\mathbf{w}^{\top}\mathbf{x})\), whose sample complexity is extremely well-studied. However, we are interested in the more general case of a matrix-valued \(W\), which (as we shall see) is far less understood.
Clearly, in order for learning to be possible, we must impose some constraints on the size of the function class. One possibility is to bound the number of parameters (i.e., the dimensions of the matrix W), in which case learnability follows from standard VC-dimension or covering number arguments (see Anthony and Bartlett (1999)). However, an important thread in statistical learning theory is understanding whether bounds on the number of parameters can be replaced by bounds on the magnitude of the weights - say, a bound on some norm of \(W\). For example, consider the class of scalar-valued linear predictors of the form
\[\{\mathbf{x}\rightarrow\mathbf{w}^{\top}\mathbf{x}:\mathbf{w},\mathbf{x}\in \mathbb{R}^{d},\|\mathbf{w}\|\leq B\}\]
and inputs \(||\mathbf{x}||\leq 1\), where \(||\cdot||\) is the Euclidean norm. For this class, it is well-known that the sample complexity required to achieve excess error \(\epsilon\) (w.r.t. Lipschitz losses) scales as \(O(B^{2}/\epsilon^{2})\), independent of the number of parameters \(d\) (e.g., Bartlett and Mendelson (2002), Shalev-Shwartz and Ben-David (2014)). Moreover, the same bound holds when we replace \(\mathbf{w}^{\top}\mathbf{x}\) by \(f(\mathbf{w}^{\top}\mathbf{x})\) for some \(1\)-Lipschitz function \(f\). Therefore, it is natural to ask whether similar size-independent bounds can be obtained when \(W\) is a matrix, as described above. This question is the focus of our paper.
When studying the matrix case, there are two complicating factors: The first is that there are many possible generalizations of the Euclidean norm for matrices (namely, matrix norms which reduce to the Euclidean norm in the case of vectors), so it is not obvious which one to study. A second is that rather than constraining the norm of \(W\), it is increasingly common in recent years to constrain the distance to some fixed reference matrix \(W_{0}\), capturing the standard practice of non-zero random initialization (see, e.g., Bartlett et al. (2017)). Following a line of recent works in the context of neural networks (e.g., Vardi et al. (2022), Daniely and Granot (2019, 2022)), we will be mainly interested in the case where we bound the _spectral norm_\(||\cdot||\) of \(W_{0}\), and the distance of \(W\) from \(W_{0}\) in the _Frobenius norm_\(||\cdot||_{F}\), resulting in function classes of the form
\[\left\{\mathbf{x}\to f(W\mathbf{x}):W\in\mathbb{R}^{n\times d},||W-W_{0}||_{F} \leq B\right\}. \tag{1}\]
for some Lipschitz, possibly non-linear function \(f\) and a fixed \(W_{0}\) of bounded spectral norm. This is a natural class to consider, as we know that spectral norm control is necessary (but insufficient) for finite sample complexity guarantees (see, e.g., Golowich et al. (2018)), whereas controlling the (larger) Frobenius norm is sufficient in many cases. Moreover, the Frobenius norm (which is simply the Euclidean norm of all matrix entries) is the natural metric to measure distance from initialization when considering standard gradient methods, and also arises naturally when studying the implicit bias of such methods (see Lyu and Li (2019)). As to \(W_{0}\), we note that in the case of scalar-valued linear predictors (where \(W,W_{0}\) are vectors), the sample complexity is not affected\({}^{1}\) by \(W_{0}\). This is intuitive, since the function class corresponds to a ball of radius \(B\) in parameter space, and \(W_{0}\) affects the location of the ball but not its size. A similar weak dependence on \(W_{0}\) is also known to occur in other settings that were studied (e.g., Bartlett et al. (2017)).
Footnote 1: More precisely, it is an easy exercise to show that the Rademacher complexity of the function class \(\{\mathbf{x}\to f(\mathbf{w}^{\top}\mathbf{x}):\mathbf{w}\in\mathbb{R}^{d},|| \mathbf{w}-\mathbf{w}_{0}||\leq B\}\), for some fixed Lipschitz function \(f\), can be upper bounded independent of \(\mathbf{w}_{0}\).
In this paper, we provide several new contributions on the size-independent sample complexity of this and related function classes, in several directions.
In the first part of the paper (Section 3), we consider function classes as in Eq. (1), without further assumptions on \(f\) besides being Lipschitz, and assuming \(\mathbf{x}\) has a bounded Euclidean norm. As mentioned above, this is a very natural class, corresponding (for example) to vector-valued linear predictors with generic Lipschitz losses, or neural networks composed of a single layer and some general Lipschitz activation. In this setting, we make the following contributions:
* In subsection 3.1 we study the case of \(W_{0}=0\), and prove that the size-independent sample complexity (up to some accuracy \(\epsilon\)) is both upper and lower bounded by \(2^{\tilde{\Theta}(B^{2}/\epsilon^{2})}\). This is unusual and perhaps surprising, as it implies that this function class does enjoy a finite, size-independent sample complexity bound, but the dependence on the problem parameters \(B,\epsilon\) is exponential. This is in very sharp contrast to the scalar-valued case, where the sample complexity is just \(O(B^{2}/\epsilon^{2})\) as described earlier. Moreover, and again perhaps unexpectedly, this sample complexity remains the same even if we consider the much larger function class where the Lipschitz function \(f\) itself is a parameter.
* Building on the result above, we prove a size-independent sample complexity upper bound for deep feed-forward neural networks, which depends only on the Frobenius norm of the first layer, and the product of the spectral norms of the other layers. In particular, it has no dependence whatsoever on the network depth, width or any other matrix norm constraints, unlike previous works in this setting.
* In subsection 3.2, we turn to consider the case of \(W_{0}\neq 0\), and ask if it is possible to achieve similar size-independent sample complexity guarantees. Perhaps unexpectedly, we show that the answer is no, even for \(W_{0}\) with very small spectral norm. Again, this is in sharp qualitative contrast to the scalar-valued case and other settings in the literature involving a \(W_{0}\) term, where the choice of \(W_{0}\) does not strongly affect the bounds.
* In subsection 3.3, we show that the negative result above yields a new construction of a convex linear prediction problem which is learnable (via stochastic gradient descent), but where uniform convergence provably does not hold. This adds to a well-established line of works in statistical learning theory, studying when uniform convergence is provably unnecessary for learnability (e.g., Shalev-Shwartz et al. (2010), Daniely et al. (2011), Feldman (2016)).
In the second part of our paper (Section 4), we turn to a different and more specific choice of the function \(f\), considering one-hidden-layer neural networks with activation applied element-wise:
\[\mathbf{x}\ \longrightarrow\ \mathbf{u}^{\top}\sigma(W\mathbf{x})=\sum_{j}u_{j} \sigma(\mathbf{w}_{j}^{\top}\mathbf{x}),\]
with weight matrix \(W\in\mathbb{R}^{n\times d}\), weight vector \(\mathbf{u}\in\mathbb{R}^{n}\), and a fixed (generally non-linear) Lipschitz activation function \(\sigma(\cdot)\). As before, we focus on a Euclidean setting, where \(\mathbf{x}\) and \(\mathbf{u}\) have bounded Euclidean norms and \(||W-W_{0}||_{F}\) is bounded, for some initialization \(W_{0}\) with bounded spectral norm. In this part, our sample complexity bounds have polynomial dependencies on the norm bounds and on the target accuracy \(\epsilon\). Our contributions here are as follows:
* We prove a fully size-independent Rademacher complexity bound for this function class, under the assumption that the activation \(\sigma(\cdot)\) is smooth. In contrast, earlier results that we are aware of were either not size-independent, or assumed \(W_{0}=0\). Although we do not know whether the smoothness assumption is necessary, we consider this an interesting example of how smoothness can be utilized in the context of sample complexity bounds.
* With \(W_{0}=0\), we show an upper bound on the Rademacher complexity of deep neural networks (more than one layer) that is fully independent of the network width or the input dimension, and for generic element-wise Lipschitz activations. For constant-depth networks, this bound is fully independent of the network size.
These two results answer some of the open questions in Vardi et al. (2022). We conclude with a discussion of open problems in Section 5. Formal proofs of our results appear in the appendix.
## 2 Preliminaries
**Notations.** We use bold-face letters to denote vectors, and let \([m]\) be shorthand for \(\{1,2,\ldots,m\}\). Given a vector \(\mathbf{x}\), \(x_{j}\) denotes its \(j\)-th coordinate. Given a matrix \(W\), \(\mathbf{w}_{j}\) is its \(j\)-th row, and \(W_{j,i}\) is its entry in row \(j\) and column \(i\). Let \(0_{n\times d}\) denote the zero matrix in \(\mathbb{R}^{n\times d}\), and let \(I_{d\times d}\) be the \(d\times d\) identity matrix. Given
a function \(\sigma(\cdot)\) on \(\mathbb{R}\), we somewhat abuse notation and let \(\sigma(\mathbf{x})\) (for a vector \(\mathbf{x}\)) or \(\sigma(M)\) (for a matrix M) denote applying \(\sigma(\cdot)\) element-wise. We use standard big-Oh notation, with \(\Theta(\cdot),\Omega(\cdot),O(\cdot)\) hiding constants and \(\tilde{\Theta}(\cdot),\tilde{\Omega}(\cdot),\tilde{O}(\cdot)\) hiding constants and factors that are polylogarithmic in the problem parameters.
\(\left\|\cdot\right\|\) denotes the operator norm: For vectors, it is the Euclidean norm, and for matrices, the spectral norm (i.e., \(\left\|M\right\|=\sup_{x:\left\|\mathbf{x}\right\|=1}\left\|M\mathbf{x}\right\|\)). \(\left\|\cdot\right\|_{F}\) denotes the Frobenius norm (i.e., \(\left\|M\right\|_{F}=\sqrt{\sum_{i,j}M_{i,j}^{2}}\)). It is well-known that the spectral norm of a matrix is equal to its largest singular value, and that the Frobenius norm is equal to \(\sqrt{\sum_{i}\sigma_{i}^{2}}\), where \(\sigma_{1},\sigma_{2},\dots\) are the singular values of the matrix.
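These identities are easy to verify numerically. The numpy snippet below (our own illustration, not from the paper) checks that the spectral norm equals the largest singular value and that the Frobenius norm equals \(\sqrt{\sum_{i}\sigma_{i}^{2}}\).

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3))
s = np.linalg.svd(M, compute_uv=False)  # singular values of M

# Spectral norm = largest singular value.
assert np.isclose(np.linalg.norm(M, 2), s.max())
# Frobenius norm = sqrt of the sum of squared singular values.
assert np.isclose(np.linalg.norm(M, 'fro'), np.sqrt((s ** 2).sum()))
```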
When we say that a function \(f\) is Lipschitz, we refer to the Euclidean metric unless specified otherwise. We say that \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is \(\mu\)-smooth if \(f\) is continuously differentiable and its gradient \(\nabla f\) is \(\mu\)-Lipschitz.
**Sample Complexity Measures.** In our results and proofs, we consider several standard complexity measures of a given class of functions, which are well-known to control uniform convergence, and imply upper or lower bounds on the sample complexity:
* Fat-Shattering dimension: It is well-known that the fat-shattering dimension (at scale \(\epsilon\)) lower bounds the number of samples needed to learn in a distribution-free learning setting, up to accuracy \(\epsilon\) (see for example Anthony and Bartlett (2002)). It is formally defined as follows: **Definition 1** (Fat-Shattering).: _A class of functions \(\mathcal{F}\) on an input domain \(\mathcal{X}\) shatters \(m\) points \(\mathbf{x}_{1},...,\mathbf{x}_{m}\in\mathcal{X}\) with margin \(\epsilon\), if there exists a number \(s\), such that for all \(y\in\{0,1\}^{m}\) we can find some \(f\in\mathcal{F}\) such that \[\forall i\in[m],\;\;f(\mathbf{x}_{i})\leq s-\epsilon\;\;\text{if}\;\;y_{i}=0\;\;\text{and}\;\;f(\mathbf{x}_{i})\geq s+\epsilon\;\;\text{if}\;\;y_{i}=1.\] The fat-shattering dimension of \(\mathcal{F}\) (at scale \(\epsilon\)) is the cardinality \(m\) of the largest set of points in \(\mathcal{X}\) for which the above holds._
Thus, by proving the existence of a large set of points shattered by the function class, we get lower bounds on the fat-shattering dimension, which translate to lower bounds on the sample complexity.
* Rademacher Complexity: This measure can be used to obtain upper bounds on the sample complexity: indeed, the number of inputs \(m\) required to make the Rademacher complexity of a function class \(\mathcal{F}\) smaller than some \(\epsilon\) is generally an upper bound on the number of samples required to learn \(\mathcal{F}\) up to accuracy \(\epsilon\) (see Shalev-Shwartz and Ben-David (2014)); a numerical illustration appears after this list. **Definition 2** (Rademacher complexity).: _Given a class of functions \(\mathcal{F}\) on a domain \(\mathcal{X}\), its Rademacher complexity is defined as \(R_{m}(\mathcal{F})=\sup_{\{\mathbf{x}_{i}\}_{i=1}^{m}\subseteq\mathcal{X}}\mathbb{E}_{\epsilon}\left[\sup_{f\in\mathcal{F}}\frac{1}{m}\sum_{i=1}^{m}\epsilon_{i}f(\mathbf{x}_{i})\right]\), where \(\epsilon=(\epsilon_{1},...,\epsilon_{m})\) is uniformly distributed on \(\{-1,+1\}^{m}\)._
* Covering Numbers: This is a central tool in the analysis of the complexity of classes of functions (see, e.g., Anthony and Bartlett (2002)), which we use in our proofs. **Definition 3** (Covering Number).: _Given any class of functions \(\mathcal{F}\) from \(\mathcal{X}\) to \(\mathcal{Y}\), a metric \(d\) over functions from \(\mathcal{X}\) to \(\mathcal{Y}\), and \(\epsilon>0\), we let the covering number \(N(\mathcal{F},d,\epsilon)\) denote the minimal number \(n\) of functions \(f_{1},f_{2},...,f_{n}\) from \(\mathcal{X}\) to \(\mathcal{Y}\), such that for all \(f\in\mathcal{F}\), there exists some \(f_{i}\) with \(d(f_{i},f)\leq\epsilon\). In this case we also say that \(\{f_{1},f_{2},...,f_{n}\}\) is an \(\epsilon\)-cover for \(\mathcal{F}\)._
In particular, we will consider covering numbers with respect to the empirical \(L_{2}\) metric defined as \(d_{m}(f,f^{\prime})=\sqrt{\frac{1}{m}\sum_{i=1}^{m}||f(\mathbf{x}_{i})-f^{\prime}(\mathbf{x}_{i})||^{2}}\) for some fixed set of inputs \(\mathbf{x}_{1},\ldots,\mathbf{x}_{m}\). In addition, if \(\{f_{1},f_{2},...,f_{n}\}\subseteq\mathcal{F}\), then we say that the cover is _proper_. It is well known that the distinction between proper and improper covers is minor, in the sense that the proper \(\epsilon\)-covering number is sandwiched between the improper \(\epsilon\)-covering number and the improper \(\frac{\epsilon}{2}\)-covering number (see the appendix for a formal proof):
**Observation 1**.: _Let \(\mathcal{F}\) be a class of functions. Then the proper \(\epsilon\)-covering number for \(\mathcal{F}\) is at least \(N(\mathcal{F},d,\epsilon)\) and at most \(N(\mathcal{F},d,\frac{\epsilon}{2})\)._
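To build intuition for Definition 2 (as promised above), consider the class of scalar-valued linear predictors \(\{\mathbf{x}\mapsto\mathbf{w}^{\top}\mathbf{x}:||\mathbf{w}||\leq B\}\), for which the inner supremum has the closed form \(\sup_{||\mathbf{w}||\leq B}\frac{1}{m}\sum_{i}\epsilon_{i}\mathbf{w}^{\top}\mathbf{x}_{i}=\frac{B}{m}||\sum_{i}\epsilon_{i}\mathbf{x}_{i}||\). The numpy sketch below (our own illustration) estimates the expectation over \(\epsilon\) by Monte Carlo; the data matrix is a placeholder.

```python
import numpy as np

def rademacher_linear(X, B, trials=1000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of
    {x -> w.x : ||w|| <= B} on the rows of X, using the closed form
    sup_w (1/m) sum_i eps_i w.x_i = (B/m) ||sum_i eps_i x_i||."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    eps = rng.choice([-1.0, 1.0], size=(trials, m))
    sums = eps @ X  # row t holds sum_i eps_{t,i} x_i
    return B / m * np.linalg.norm(sums, axis=1).mean()
```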
## 3 Linear Predictors and Neural Networks with General Activations
We begin by considering the following simple matrix-parameterized class of functions on \(\mathcal{X}=\{\mathbf{x}\in\mathbb{R}^{d}:||\mathbf{x}||\leq 1\}\):
\[\mathcal{F}_{B,n,d}^{f,W_{0}}:=\left\{\mathbf{x}\to f(W\mathbf{x}):W \in\mathbb{R}^{n\times d},||W-W_{0}||_{F}\leq B\right\}\;,\]
where \(f\) is assumed to be some fixed \(L\)-Lipschitz function, and \(W_{0}\) is a fixed matrix in \(\mathbb{R}^{n\times d}\) with a bounded spectral norm. As discussed in the introduction, this can be interpreted as a class of vector-valued linear predictors composed with some Lipschitz loss function, or alternatively as a generic model of one-hidden-layer neural networks with a generic Lipschitz activation function. Moreover, \(W_{0}\) denotes an initialization/reference point which may or may not be \(0\).
In this section, we study the sample complexity of this class (via its fat-shattering dimension for lower bounds, and Rademacher complexity for upper bounds). Our focus is on size-independent bounds, which do not depend on the input dimension \(d\) or the matrix size/network width \(n\). Nevertheless, to understand the effect of these parameters, we explicitly state the conditions on \(d,n\) necessary for the bounds to hold.
**Remark 1**.: _Enforcing \(f\) to be Lipschitz and the domain \(\mathcal{X}\) to be bounded is known to be necessary for meaningful size-independent bounds, even in the case of scalar-valued linear predictors \(\mathbf{x}\mapsto f(\mathbf{w}^{\top}\mathbf{x})\) (e.g., Shalev-Shwartz and Ben-David (2014)). For simplicity, we mostly focus on the case of \(\mathcal{X}\) being the Euclidean unit ball, but this is without much loss of generality: For example, if we consider the domain \(\{\mathbf{x}\in\mathbb{R}^{d}:||\mathbf{x}||\leq b_{x}\}\) in Euclidean space for some \(b_{x}\geq 0\), we can embed \(b_{x}\) into the weight constraints, and analyze instead the class \(\mathcal{F}_{b_{x}B,n,d}^{f,b_{x}W_{0}}\) over the Euclidean unit ball \(\{\mathbf{x}\in\mathbb{R}^{d}:||\mathbf{x}||\leq 1\}\)._
### Size-Independent Sample Complexity Bounds with \(W_{0}=0\)
First, we study the case of initialization at zero (i.e. \(W_{0}=0_{n\times d}\)). Our first lower bound shows that the size-independent fat-shattering dimension of \(\mathcal{F}_{B,n,d}^{f,W_{0}}\) (at scale \(\epsilon\)) is at least exponential in \(B^{2}/\epsilon^{2}\):
**Theorem 1**.: _For any \(B,L\geq 1\) and \(0<\epsilon\leq 1\) s.t. \(\frac{L^{2}B^{2}}{128\epsilon^{2}}\geq 20\), there exists large enough \(d=\Theta(L^{2}B^{2}/\epsilon^{2}),n=\exp(\Theta(L^{2}B^{2}/\epsilon^{2}))\) and an \(L\)-Lipschitz function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) for which \(\mathcal{F}_{B,n,d}^{f,W_{0}=0}\) can shatter_
\[\exp(cL^{2}B^{2}/\epsilon^{2})\]
_points from \(\{\mathbf{x}\in\mathbb{R}^{d}:||\mathbf{x}||\leq 1\}\) with margin \(\epsilon\), where \(c>0\) is a universal constant._
The proof is directly based on the proof technique of Theorem 3 in Daniely and Granot (2022), and differs from it mainly in that we focus on the dependence on \(B,\epsilon\) (whereas they considered the dependence on \(n,d\)). The main idea is to use the probabilistic method to show the existence of \(\mathbf{x}_{1},...,\mathbf{x}_{m}\in\mathbb{R}^{d}\) and \(W_{1},...,W_{2^{m}}\in\mathbb{R}^{n\times d}\) for \(m=\Theta(B^{2}/\epsilon^{2})\), with the property that every two different vectors from \(\{W_{y}\mathbf{x}_{i}:i\in[m],y\in[2^{m}]\}\) are far enough from each other. We then construct an \(L\)-Lipschitz function \(f\) which assigns arbitrary outputs to all these points, resulting in a shattering as we range over \(W_{1},\ldots,W_{2^{m}}\).
We now turn to our more novel contribution, which shows that the bound above is nearly tight, in the sense that we can upper bound the Rademacher complexity of the function class by a similar quantity. In fact, and perhaps surprisingly, a much stronger statement holds: A similar quantity upper bounds the complexity of the much larger class of _all_\(L\)-Lipschitz function \(f\) on \(\mathbb{R}^{n}\), composed with all norm-bounded linear functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{n}\):
**Theorem 2**.: _For any \(L,B\geq 1\) and \(0<\epsilon\leq 1\) s.t. \(\frac{LB}{\epsilon}\geq 1\), let \(\Psi_{L,a,n}\) be the class of all \(L\)-Lipschitz functions from \(\{\mathbf{x}\in\mathbb{R}^{n}:||\mathbf{x}||\leq B\}\) to \(\mathbb{R}\), which equal some fixed \(a\in\mathbb{R}\) at \(\mathbf{0}\). Let \(\mathcal{W}_{B,n}\) be the class of linear functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{n}\) over the domain \(\{\mathbf{x}\in\mathbb{R}^{d}:||\mathbf{x}||\leq 1\}\) with Frobenius norm at most \(B\), namely_
\[\mathcal{W}_{B,n}:=\{\mathbf{x}\to W\mathbf{x}:W\in\mathbb{R}^{n\times d},||W|| _{F}\leq B\}.\]
_Then the Rademacher complexity of \(\Psi_{L,a,n}\circ\mathcal{W}_{B,n}:=\{\psi\circ g:\psi\in\Psi_{L,a,n},g\in\mathcal{W}_{B,n}\}\) on \(m\) inputs from \(\{\mathbf{x}\in\mathbb{R}^{d}:||\mathbf{x}||\leq 1\}\) is at most \(\epsilon\), if \(m\geq\left(\frac{LB}{\epsilon}\right)^{\frac{cL^{2}B^{2}}{\epsilon^{2}}}\) for some universal constant \(c>0\)._
Since \(\mathcal{F}_{B,n,d}^{f,W_{0}=0}\subseteq\Psi_{L,a,n}\circ\mathcal{W}_{B,n}\) for any fixed \(f\), the Rademacher complexity of the latter upper bounds the Rademacher complexity of the former. Thus, we get the following corollary:
**Corollary 1**.: _Let \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a fixed \(L\)-Lipschitz function. Then the Rademacher complexity of \(\mathcal{F}_{B,n,d}^{f,W_{0}=0}\) on \(m\) inputs from \(\{\mathbf{x}\in\mathbb{R}^{d}:||\mathbf{x}||\leq 1\}\) is at most \(\epsilon\), if \(m\geq\left(\frac{LB}{\epsilon}\right)^{\frac{cL^{2}B^{2}}{\epsilon^{2}}}\) for some universal constant \(c>0\)._
Comparing the corollary and Theorem 1, we see that the sample complexity of Lipschitz functions composed with matrix-valued linear ones is \(\exp\left(\tilde{\Theta}(L^{2}B^{2}/\epsilon^{2})\right)\) (regardless of whether the Lipschitz function is fixed in advance or not). On the one hand, this implies that the complexity of this class is very large (exponential) as a function of \(L,B\) and \(\epsilon\). On the other hand, it implies that for any fixed \(L,B,\epsilon\), it is finite, completely independent of the number of parameters. The exponential dependence on the problem parameters is rather unusual, and in sharp contrast to the case of scalar-valued predictors (that is, functions of the form \(\mathbf{x}\mapsto f(\mathbf{w}^{\top}\mathbf{x})\) where \(||\mathbf{w}-\mathbf{w}_{0}||\leq B\) and \(f\) is \(L\)-Lipschitz), for which the sample complexity is just \(O(L^{2}B^{2}/\epsilon^{2})\).
The key ideas in the proof of Theorem 2 can be roughly sketched as follows: First, we show that due to the Frobenius norm constraints, every function \(\mathbf{x}\mapsto f(W\mathbf{x})\) in our class can be approximated (up to some \(\epsilon\)) by a function of the form \(\mathbf{x}\mapsto f(\tilde{W}_{\epsilon}\mathbf{x})\), where the rank of \(\tilde{W}_{\epsilon}\) is at most \(B^{2}/\epsilon^{2}\). In other words, this approximating function can be written as \(f(UV\mathbf{x})\), where \(V\) maps to \(\mathbb{R}^{B^{2}/\epsilon^{2}}\). Equivalently, this can be written as \(g(V\mathbf{x})\), where \(g(z)=f(Uz)\) over \(\mathbb{R}^{B^{2}/\epsilon^{2}}\). This reduces the problem to bounding the complexity of the function class which is the composition of all linear functions to \(\mathbb{R}^{B^{2}/\epsilon^{2}}\), and all Lipschitz functions over \(\mathbb{R}^{B^{2}/\epsilon^{2}}\), which we perform through covering numbers and the following technical result:
**Lemma 1**.: _Let \(\mathcal{F}\) be a class of functions from Euclidean space to \(\{\mathbf{x}\in\mathbb{R}^{r}:||\mathbf{x}||\leq B\}\). Let \(\Psi_{L,a}\) be the class of all \(L\)-Lipschitz functions from \(\{\mathbf{x}\in\mathbb{R}^{r}:||\mathbf{x}||\leq B\}\) to \(\mathbb{R}\), which equal some fixed \(a\in\mathbb{R}\) at \(\mathbf{0}\). Letting \(\Psi_{L,a}\circ\mathcal{F}:=\{\psi\circ f:\psi\in\Psi_{L,a},f\in\mathcal{F}\}\), its covering number satisfies_
\[\log N(\Psi_{L,a}\circ\mathcal{F},d_{m},\epsilon)\leq\left(1+\frac{8BL}{\epsilon }\right)^{r}\cdot\log\frac{8B}{\epsilon}+\log N(\mathcal{F},d_{m},\frac{ \epsilon}{4L}).\]
The proof of this lemma is an extension of Theorem 4 of Golowich et al. (2018), which considered the case \(r=1\).
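The low-rank approximation step in the sketch above rests on an elementary fact: if \(\|W\|_{F}\leq B\), then the singular values satisfy \((r+1)\sigma_{r+1}^{2}\leq B^{2}\), i.e., \(\sigma_{r+1}\leq B/\sqrt{r+1}\), so truncating the SVD at rank \(r\approx B^{2}/\epsilon^{2}\) perturbs \(W\) by at most \(\epsilon\) in spectral norm. The numpy check below illustrates this; the matrix and constants are placeholders, and this is our illustration rather than the paper's formal argument.

```python
import numpy as np

rng = np.random.default_rng(1)
B, eps = 3.0, 0.5
W = rng.standard_normal((50, 40))
W *= B / np.linalg.norm(W, 'fro')  # enforce ||W||_F = B

r = int(np.ceil(B ** 2 / eps ** 2))  # target rank, here 36
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_r = (U[:, :r] * s[:r]) @ Vt[:r]  # best rank-r approximation

# Spectral-norm error is sigma_{r+1} <= B / sqrt(r+1) <= eps.
assert np.linalg.norm(W - W_r, 2) <= eps + 1e-9
```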
**Application to Deep Neural Networks.** Theorem 2 which we established above can also be used to study other types of predictor classes. In what follows, we show an application to deep feed-forward neural networks, establishing a size/dimension-independent sample complexity bound that depends _only_ on the Frobenius norm of the first layer and the spectral norms of the other layers (albeit exponentially). This is surprising, since all previous bounds of this type we are aware of either strongly depend on norms of all layers that can be arbitrarily larger than the spectral norm in a size-independent setting (such as the Frobenius norm), or make strong assumptions on the activation function (e.g., Neyshabur et al. (2015, 2017), Bartlett et al. (2017), Golowich et al. (2018), Du and Lee (2018), Daniely and Granot (2019), Vardi et al. (2022)).
Formally, we consider scalar-valued depth-\(k\) "neural networks" of the form
\[\mathcal{F}_{k}:=\{\mathbf{x}\rightarrow\mathbf{w}_{k}^{\top}f_{k-1}(W_{k-1}f_{k-2}(\ldots f_{1}(W_{1}\mathbf{x})\ldots))\;\;:\;\;||\mathbf{w}_{k}||\leq S_{k}\;,\;\forall j\;||W_{j}||\leq S_{j}\;,\;||W_{1}||_{F}\leq B\}\]
where \(\mathbf{w}_{k}\) is a parameter vector, each \(W_{j}\) is a parameter matrix of some arbitrary dimensions, and each \(f_{j}\) is some fixed \(1\)-Lipschitz\({}^{2}\) function satisfying \(f_{j}(\mathbf{0})=0\). This is a rather relaxed definition of neural networks, as we assume nothing about the activation functions \(f_{j}\) except that they are Lipschitz. To analyze this function class, we view \(\mathcal{F}_{k}\) as a subset of the class
Footnote 2: This is without loss of generality, since if \(f_{j}\) is \(L_{j}\)-Lipschitz, we can rescale it by \(1/L_{j}\) and multiply \(S_{j+1}\) by \(L_{j}\).
\[\left\{\mathbf{x}\to f(W\mathbf{x}):\left\|W\right\|_{F}\leq B\;,\;f:\mathbb{R}^{n}\rightarrow\mathbb{R}\text{ is }L\text{-Lipschitz}\right\},\]
where \(L=\prod_{j=2}^{k}S_{j}\) (as this upper bounds the Lipschitz constant of \(\mathbf{z}\mapsto\mathbf{w}_{k}^{\top}f_{k-1}(W_{k-1}f_{k-2}(\ldots f_{1}(\mathbf{z})\ldots))\)). By applying Theorem 2 (with the same conditions) we have
**Corollary 2**.: _For any \(B,L\geq 1\) and \(0<\epsilon\leq 1\) s.t. \(\frac{B}{\epsilon}\geq 1\), we have that the Rademacher complexity of \(\mathcal{F}_{k}\) on \(m\) inputs from \(\{\mathbf{x}\in\mathbb{R}^{d}:\left\|\mathbf{x}\right\|\leq 1\}\) is at most \(\epsilon\), if_
\[m\geq\left(\frac{LB}{\epsilon}\right)^{\frac{\epsilon L^{2}B^{2}}{\epsilon^{2}}}\]
_where \(L:=\prod_{j=2}^{k}S_{j}\) and \(c>0\) is some universal constant._
Of course, the bound has a poor dependence on the norm bounds, the Lipschitz parameter, and \(\epsilon\). On the other hand, it is finite for any fixed choice of these parameters, fully independent of the network depth and width, and of any matrix norm other than the spectral norms and the Frobenius norm of the first layer. We note that in the size-independent setting, controlling the product of the spectral norms is necessary but not sufficient for finite sample complexity bounds (see the discussion in Vardi et al. (2022)). The bound above is achieved only by additionally controlling the Frobenius norm of the first layer.
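For concreteness, the Lipschitz factor \(L=\prod_{j=2}^{k}S_{j}\) entering Corollary 2 is simply the product of the spectral norms of layers \(2,\ldots,k\). Assuming the layer weights are available as numpy arrays (a hypothetical setup), it can be computed as follows:

```python
import numpy as np

def lipschitz_factor(weights):
    """Product of spectral norms of the given weight matrices; an upper
    bound on the Lipschitz constant of their composition interleaved
    with 1-Lipschitz activations."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))
```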
### No Finite Sample Complexity with \(W_{0}\neq 0\)
In subsection 3.1, we showed size-independent sample complexity bounds when the initialization/reference matrix \(W_{0}\) is zero. Therefore, it is natural to ask if it is possible to achieve similar size-independent bounds with non-zero \(W_{0}\). In this subsection we show that perhaps surprisingly, the answer is negative: Even for very small non-zero \(W_{0}\), it is impossible to control the sample complexity of \(\mathcal{F}_{B,n,d}^{f,W_{0}}\) independent of the size/dimension parameters \(d,n\). Formally, we have the following theorem:
**Theorem 3**.: _For any \(m\in\mathbb{N}\) and \(0<\epsilon\leq 0.5\), there exists \(d=\Theta(m),n=\Theta(\exp(m))\), \(W_{0}\in\mathbb{R}^{n\times d}\) with \(\|W_{0}\|=2\sqrt{2}\cdot\epsilon\) and a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) which is \(1\)-Lipschitz with respect to the infinity norm (and hence also with respect to the Euclidean norm), for which \(\mathcal{F}_{B=\sqrt{2},n,d}^{f,W_{0}}\) can shatter \(m\) points from \(\{\mathbf{x}\in\mathbb{R}^{d}:||\mathbf{x}||\leq 1\}\) with margin \(\epsilon\)._
The theorem strengthens the lower bound of Daniely and Granot (2022) and the previous subsection, which only considered the \(W_{0}=0\) case. We emphasize that the result holds already when \(||W_{0}||\) is very small (equal to \(2\sqrt{2}\cdot\epsilon\)). Moreover, it holds even if we allow for functions Lipschitz w.r.t. the infinity norm (and not just the Euclidean norm as we have done so far). This is of interest, since non-element-wise activations used in practice (such as variants of the max function) are Lipschitz with respect to that norm, and some previous work utilized such stronger Lipschitz constraints to obtain sample complexity guarantees (e.g., Daniely and Granot (2019)).
Interestingly, the proof of the theorem is simpler than the \(W_{0}=0\) case, and involves a direct non-probabilistic construction. It can be intuitively described as follows: We choose a fixed set of vectors \(\mathbf{x}_{1},\ldots,\mathbf{x}_{m}\) and a matrix \(W_{0}\) (essentially the identity matrix with some padding) so that \(W_{0}\mathbf{x}_{i}\) encodes the index \(i\). For any choice of target values \(\mathbf{y}\in\{\pm\epsilon\}^{m}\), we define a matrix \(W_{\mathbf{y}}^{\prime}\) (which is all zeros except for a single entry with value \(1\) in a strategic location), so that \(W_{\mathbf{y}}^{\prime}\mathbf{x}_{i}\) encodes the entire vector \(\mathbf{y}\). We note that this is possible if the number of rows of \(W_{\mathbf{y}}^{\prime}\) is exponentially large in \(m\). Letting \(W_{\mathbf{y}}=W_{\mathbf{y}}^{\prime}+W_{0}\), we get a matrix of bounded distance to \(W_{0}\), so that \(W_{\mathbf{y}}\mathbf{x}_{i}\) encodes both \(i\) and \(\mathbf{y}\). Thus, we just need \(f\) to be the fixed function that given an encoding for \(\mathbf{y}\) and \(i\), returns \(y_{i}\), hence \(\mathbf{x}\mapsto f(W_{\mathbf{y}}\mathbf{x})\) shatters the set of points.
### Vector-valued Linear Predictors are Learnable without Uniform Convergence
The class \(\mathcal{F}_{B,n,d}^{f,W_{0}}\), which we considered in the previous subsection, is closely related to the natural class of matrix-valued linear predictors (\(\mathbf{x}\mapsto W\mathbf{x}\)) with bounded Frobenius distance from initialization, composed with some Lipschitz loss function \(\ell\). We can formally define this class as
\[\mathcal{G}_{B,n,d}^{\ell,W_{0}}:=\left\{(\mathbf{x},y)\rightarrow\ell(W \mathbf{x};y)\ :\ W\in\mathbb{R}^{n\times d},\|W-W_{0}\|_{F}\leq B\right\}\.\]
For example, standard multiclass linear predictors fall into this form. Note that when \(y\) is fixed, this is nothing more than the class \(\mathcal{F}_{B,n,d}^{\ell_{y},W_{0}}\) where \(\ell_{y}(z)=\ell(z;y)\). The question of learnability here boils down to the question of whether, given an i.i.d. sample \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{m}\) from an unknown distribution, we can approximately minimize \(\mathbb{E}_{(\mathbf{x},y)}[\ell(W\mathbf{x},y)]\) arbitrarily well over all \(W:\|W-W_{0}\|\leq B\), provided that \(m\) is large enough.
For multiclass linear predictors, it is natural to consider the case where the loss \(\ell\) is also convex in its first argument. In this case, we can easily establish that the class \(\mathcal{G}_{B,n,d}^{\ell,W_{0}}\) is learnable with respect to inputs of bounded Euclidean norm, regardless of the size/dimension parameters \(n,d\). This is because for each \((\mathbf{x},y)\), the function \(W\mapsto\ell(W\mathbf{x};y)\) is convex and Lipschitz in \(W\), and the domain \(\{W:||W-W_{0}||_{F}\leq B\}\) is bounded. Therefore, we can approximately minimize \(\mathbb{E}_{(\mathbf{x},y)}[\ell(W\mathbf{x},y)]\) by applying stochastic gradient descent (SGD) over the sequence of examples \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{m}\). This is a consequence of well-known results (see for example Shalev-Shwartz and Ben-David (2014)), and is formalized as follows:
**Theorem 4**.: _Suppose that for any \(y\), the function \(\ell(.,y)\) is convex and \(L\)-Lipschitz. For any \(B>0\) and fixed matrix \(W_{0}\), there exists a randomized algorithm (namely stochastic gradient descent) with the following property: For any distribution over \((\mathbf{x},y)\) such that \(||\mathbf{x}||\leq 1\) with probability \(1\), given an i.i.d. sample \(\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{m}\), the algorithm returns a matrix \(\tilde{W}\) such that \(||\tilde{W}-W_{0}||_{F}\leq B\) and_
\[\mathbb{E}_{\tilde{W}}\left[\mathbb{E}_{(\mathbf{x},y)}[\ell(\tilde{W}\mathbf{x} ;y)]-\min_{W:\|W-W_{0}\|_{F}\leq B}\mathbb{E}_{(\mathbf{x},y)}[\ell(W\mathbf{ x};y)]\right]\ \leq\ \frac{BL}{\sqrt{m}}.\]
_Thus, the number of samples \(m\) required to make the above less than \(\epsilon\) is at most \(\frac{B^{2}L^{2}}{\epsilon^{2}}\)._
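A minimal sketch of the algorithm behind Theorem 4 (projected stochastic gradient descent over the Frobenius ball of radius \(B\) around \(W_{0}\)) is given below in numpy. The loss subgradient is a placeholder supplied by the caller, and the step size \(\eta=B/(L\sqrt{m})\) with iterate averaging is a standard choice achieving the stated \(BL/\sqrt{m}\) rate.

```python
import numpy as np

def project(W, W0, B):
    """Project W onto the Frobenius ball of radius B centered at W0."""
    D = W - W0
    nrm = np.linalg.norm(D, 'fro')
    return W0 + D * (B / nrm) if nrm > B else W

def sgd_matrix(W0, B, L, samples, grad_loss):
    """One pass of projected SGD; returns the averaged iterate.
    `grad_loss(W, x, y)` must return a subgradient of l(Wx; y) in W."""
    m = len(samples)
    eta = B / (L * np.sqrt(m))  # standard step size for this rate
    W, avg = W0.copy(), np.zeros_like(W0)
    for x, y in samples:
        W = project(W - eta * grad_loss(W, x, y), W0, B)
        avg += W / m
    return avg
```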
Perhaps unexpectedly, we now turn to show that this positive learnability result is _not_ due to uniform convergence: namely, we can learn this class, but not because the empirical average and expected value of \(\ell(W\mathbf{x};y)\) are close uniformly over all \(W:\|W-W_{0}\|\leq B\). Indeed, that would require a uniform convergence measure such as the fat-shattering dimension of our class to be bounded. However, this turns out to be false: the class \(\mathcal{G}^{\ell,W_{0}}_{B,n,d}\) can shatter arbitrarily many points of norm \(\leq 1\), at any scale \(\epsilon\leq 1\), for some small \(W_{0}\) and provided that \(n,d\) are large enough\({}^{3}\). In the previous section, we already showed such a result for the class \(\mathcal{F}^{f,W_{0}}_{B,n,d}\), which equals \(\mathcal{G}^{\ell,W_{0}}_{B,n,d}\) when \(y\) is fixed and \(f(W\mathbf{x})=\ell(W\mathbf{x};y)\). Thus, it is enough to prove that the same impossibility result (Theorem 3) holds even if \(f\) is a convex function. This is indeed true, using a slightly different construction:
Footnote 3: This precludes uniform convergence, since it implies that for any \(m\), we can find a set of \(2m\) points \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{2m}\), such that if we sample \(m\) points with replacement from a uniform distribution over this set, then there is always some \(W\) in the class so that the average value of \(\ell(W\mathbf{x};y)\) over the sample and in expectation differs by a constant independent of \(m\). The fact that the fat-shattering dimension is unbounded does not contradict learnability here, since our goal is to minimize the expectation of \(\ell(W\mathbf{x};y)\), rather than view it as predicted values which are then composed with some other loss.
**Theorem 5**.: _For any \(m\in\mathbb{N}\) and \(0<\epsilon\leq 0.5\), there exists large enough \(d=\Theta(m),n=\Theta(\exp(m))\), \(W_{0}\in\mathbb{R}^{n\times d}\) with \(\|W_{0}\|=4\sqrt{2}\cdot\epsilon\) and a **convex** function \(f:\mathbb{R}^{n}\to\mathbb{R}\) which is \(1\)-Lipschitz with respect to the infinity norm (and hence also with respect to the Euclidean norm), for which \(\mathcal{F}^{f,W_{0}}_{B=\sqrt{2},n,d}\) can shatter \(m\) points from \(\{\mathbf{x}\in\mathbb{R}^{d}:\|\mathbf{x}\|\leq 1\}\) with margin \(\epsilon\)._
Overall, we see that the problem of learning vector-valued linear predictors composed with a convex Lipschitz loss (as defined above) is solvable by a certain algorithm, but without uniform convergence. This connects to a line of work establishing learning problems which are provably learnable without uniform convergence (such as Shalev-Shwartz et al. (2010), Feldman (2016)). However, whereas these papers considered synthetic constructions, we consider an arguably more natural class of linear predictors of bounded Frobenius norm, composed with a convex Lipschitz loss over stochastic inputs of bounded Euclidean norm. In any case, this provides another example of learnability without uniform convergence.
## 4 Neural Networks with Element-Wise Lipschitz Activation
In section 3 we studied the complexity of functions of the form \(\mathbf{x}\mapsto f(W\mathbf{x})\) (or possibly deeper neural networks) where nothing is assumed about \(f\) besides Lipschitz continuity. In this section, we consider more specifically functions which are applied element-wise, as is common in the neural networks literature. Specifically, we will consider the following hypothesis class of scalar-valued, one-hidden-layer neural networks of width \(n\) on inputs in \(\{\mathbf{x}\in\mathbb{R}^{d}:||\mathbf{x}||\leq b_{x}\}\), where \(\sigma(\cdot)\) is a Lipschitz function on \(\mathbb{R}\) applied element-wise, and where we only bound the norms as follows:
\[\mathcal{F}^{\sigma,W_{0}}_{b,B,n,d}:=\left\{\mathbf{x}\to\mathbf{u}^{\top} \sigma(W\mathbf{x}):\mathbf{u}\in\mathbb{R}^{n},W\in\mathbb{R}^{n\times d},|| \mathbf{u}||\leq b,||W-W_{0}||_{F}\leq B\right\}\,\]
where \(\mathbf{u}^{\top}\sigma(W\mathbf{x})=\sum_{j}u_{j}\sigma(\mathbf{w}_{j}^{ \top}\mathbf{x})\). We note that we could have also considered a more general version, where \(\mathbf{u}\) is also initialization-dependent: Namely, where the constraint \(||\mathbf{u}||\leq b\) is replaced by \(||\mathbf{u}-\mathbf{u}_{0}||\leq b\) for some fixed \(\mathbf{u}_{0}\). However, this extension is rather trivial, since for vectors \(\mathbf{u}\) there is no distinction between the Frobenius and spectral norms. Thus, to consider \(\mathbf{u}\) in some ball of radius \(b\) around some \(\mathbf{u}_{0}\), we might
as well consider the function class displayed above with the looser constraint \(||\mathbf{u}||\leq b+||\mathbf{u}_{0}||\). This does not lose much tightness, since such a dependence on \(||\mathbf{u}_{0}||\) is also necessary (see remark 2 below).
The sample complexity of \(\mathcal{F}^{\sigma,W_{0}}_{b,B,n,d}\) was first studied in the case of \(W_{0}=0\), with works such as Neyshabur et al. (2015, 2017); Du and Lee (2018); Golowich et al. (2018); Daniely and Granot (2019) proving bounds for specific families of the activation \(\sigma(\cdot)\) (e.g., homogeneous or quadratic). For general Lipschitz \(\sigma(\cdot)\) and \(W_{0}=0\), Vardi et al. (2022) proved that the Rademacher complexity of \(\mathcal{F}^{\sigma,W_{0}=0}_{b,B,n,d}\) for any \(L\)-Lipschitz \(\sigma(\cdot)\) is at most \(\epsilon\) if the number of samples is \(\tilde{O}\left(\left(\frac{bb_{x}BL}{\epsilon}\right)^{2}\right)\). They left the case of \(W_{0}\neq 0\) as an open question. In a recent preprint, Daniely and Granot (2022) used an innovative technique to prove a bound in this case, but not a fully size-independent one (there remains a logarithmic dependence on the network width \(n\) and the input dimension \(d\)). In what follows, we prove a bound which handles the \(W_{0}\neq 0\) case and is fully size-independent, under the assumption that \(\sigma(\cdot)\) is smooth. The proof (which is somewhat involved) uses techniques different from both previous papers, and may be of independent interest.
**Theorem 6**.: _Suppose \(\sigma(\cdot)\) (as a function on \(\mathbb{R}\)) is \(L\)-Lipschitz, \(\mu\)-smooth (i.e., \(\sigma^{\prime}(\cdot)\) is \(\mu\)-Lipschitz) and \(\sigma(0)=0\). Then for any \(b,B,n,d,\epsilon>0\) such that \(Bb_{x}\geq 2\), and any \(W_{0}\) such that \(||W_{0}||\leq B_{0}\), the Rademacher complexity of \(\mathcal{F}^{\sigma,W_{0}}_{b,B,n,d}\) on \(m\) inputs from \(\{\mathbf{x}\in\mathbb{R}^{d}:||\mathbf{x}||\leq b_{x}\}\) is at most \(\epsilon\), if_
\[m\geq\frac{1}{\epsilon^{2}}\cdot\tilde{O}\left(\left(1+bb_{x}\left(LB_{0}+(\mu+L)B(1+B_{0}b_{x})\right)\right)^{2}\right).\]
Thus, we get a sample complexity bounds that depend on the norm parameters \(b,b_{x},B_{0}\), the Lipschitz parameter \(L\), and the smoothness parameter \(\mu\), but is fully independent of the size parameters \(n,d\). Note that for simplicity, the bound as written above hides some factors logarithmic in \(m,B,L,b_{x}\) - see the proof in the appendix for the precise expression.
We note that if \(W_{0}=0\), we can take \(B_{0}=0\) in the bound above, in which case the sample complexity scales as \(\tilde{O}(((\mu+L)bb_{x}B)^{2}/\epsilon^{2})\). This is the same as in Vardi et al. (2022) (see above), up to the dependence on the smoothness parameter \(\mu\).
**Remark 2**.: _The upper bound on Theorem 6 depends quadratically on the spectral norm of \(W_{0}\) (i.e., \(B_{0}\)). This dependence is necessary in general. Indeed, even by taking the activation function \(\sigma(\cdot)\) to be the identity, \(B=0\) and \(b=1\) we get that our function class contains the class of scalar-valued linear predictors \(\{\mathbf{x}\rightarrow\mathbf{v}^{\top}\mathbf{x}:\mathbf{x},\mathbf{v}\in \mathbb{R}^{d},||\mathbf{v}||\leq B_{0}\}\). For this class, it is well known that the number of samples should be \(\Theta(\frac{B_{0}^{2}}{\epsilon^{2}})\), to ensure that the Rademacher complexity of that class is at most \(\epsilon\)._
### Bounds for Deep Networks with Lipschitz Activations
As a final contribution, we consider the case of possibly deep neural networks, when \(W_{0}=0\) and the activations are Lipschitz and element-wise. Specifically, given the domain \(\{\mathbf{x}\in\mathbb{R}^{d}:||\mathbf{x}||\leq b_{x}\}\) in Euclidean space, we consider the class of scalar-valued neural networks of the form
\[\mathbf{x}\rightarrow\mathbf{w}_{k}^{\top}\sigma_{k-1}(W_{k-1}\sigma_{k-2}(\cdots\sigma_{1}(W_{1}\mathbf{x})\cdots))\]
where \(\mathbf{w}_{k}\) is a vector (i.e. the output of the function is in \(\mathbb{R}\)) with \(||\mathbf{w}_{k}||\leq b\), each \(W_{j}\) is a parameter matrix s.t. \(||W_{j}||_{F}\leq B_{j}\), \(||W_{j}||\leq S_{j}\) and each \(\sigma_{j}(\cdot)\) (as a function on \(\mathbb{R}\)) is an \(L\)-Lipschitz function applied element-wise, satisfying \(\sigma_{j}(0)=0\). Let \(\mathcal{F}^{\{\sigma_{j}\}}_{k,\{S_{j}\},\{B_{j}\}}\) be the class of neural networks as above. Vardi et al. (2022) proved a sample complexity guarantee for \(k=2\) (one-hidden-layer neural networks), and left the case of higher depths as an open problem. The theorem below addresses this problem, using a combination
of their technique and a "peeling" argument to reduce the complexity bound of depth-\(k\) neural networks to depth-\(k-1\) neural networks. The resulting bound is fully independent of the network width (although strongly depends on the network depth), and is the first of this type (to the best of our knowledge) that handles general Lipschitz activations under Frobenius norm constraints.
**Theorem 7**.: _For any \(\epsilon,b>0,\{B_{j}\}_{j=1}^{k-1},\{S_{j}\}_{j=1}^{k-1},L\) with \(S_{1},...,S_{k-1},L\geq 1\), the Rademacher complexity of \(\mathcal{F}_{k,\{S_{j}\},\{B_{j}\}}^{\{\sigma_{j}\}}\) on \(m\) inputs from \(\{\mathbf{x}\in\mathbb{R}^{d}:||\mathbf{x}||\leq b_{x}\}\) is at most \(\epsilon\), if_
\[m\geq\frac{c\left(kL^{k-1}bR_{k-2}\log^{\frac{3(k-1)}{2}}(m)\cdot\prod_{i=1}^{k-1}B_{i}\right)^{2}}{\epsilon^{2}},\]
_where \(R_{k-2}=b_{x}L^{k-2}\prod_{i=1}^{k-2}S_{i}\), \(R_{0}=b_{x}\) and \(c>0\) is a universal constant._
We note that for \(k=2\), this reduces to the bound of Vardi et al. (2022) for one-hidden-layer neural networks.
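For completeness, instantiating Theorem 7 at \(k=2\) (so \(R_{0}=b_{x}\)) gives, by direct substitution on our part,

\[m\geq\frac{c\left(2Lbb_{x}B_{1}\log^{\frac{3}{2}}(m)\right)^{2}}{\epsilon^{2}},\]

which is a one-hidden-layer bound of the form proved by Vardi et al. (2022), up to constants and the logarithmic factor in \(m\).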
## 5 Discussion and Open Problems
In this paper, we provided several new results on the sample complexity of vector-valued linear predictors and feed-forward neural networks, focusing on size-independent bounds and constraining the distance from some reference matrix. The paper leaves open quite a few avenues for future research. For example, in Section 3, we studied the sample complexity of \(\mathcal{F}_{B,n,d}^{f,W_{0}}\) when \(n,d\) are unrestricted. Can we get a full picture of the sample complexity when \(n,d\) are also controlled? Even more specifically, can the lower bounds in the section be obtained for any smaller values of \(d,n\)? As to the results in Section 4, is our Rademacher complexity bound for \(\mathcal{F}_{b,B,n,d}^{\sigma,W_{0}}\) (one-hidden-layer networks and smooth activations) the tightest possible, or can it be improved? Also, can we generalize the result to arbitrary Lipschitz activations? In addition, what is the sample complexity of such networks when \(n,d\) are also controlled? In a different direction, it would be very interesting to extend the results of this section to deeper networks and non-zero \(W_{0}\).
### Acknowledgements
This research is supported in part by European Research Council (ERC) grant 754705.
|
2306.10232 | Multi-Task Offloading via Graph Neural Networks in Heterogeneous
Multi-access Edge Computing | In the rapidly evolving field of Heterogeneous Multi-access Edge Computing
(HMEC), efficient task offloading plays a pivotal role in optimizing system
throughput and resource utilization. However, existing task offloading methods
often fall short of adequately modeling the dependency topology relationships
between offloaded tasks, which limits their effectiveness in capturing the
complex interdependencies of task features. To address this limitation, we
propose a task offloading mechanism based on Graph Neural Networks (GNN). Our
modeling approach takes into account factors such as task characteristics,
network conditions, and available resources at the edge, and embeds these
captured features into the graph structure. By utilizing GNNs, our mechanism
can capture and analyze the intricate relationships between task features,
enabling a more comprehensive understanding of the underlying dependency
topology. Through extensive evaluations in heterogeneous networks, our proposed
algorithm improves 18.6\%-53.8\% over greedy and approximate algorithms in
optimizing system throughput and resource utilization. Our experiments showcase
the advantage of considering the intricate interplay of task features using
GNN-based modeling. | Mulei Ma | 2023-06-17T02:04:49Z | http://arxiv.org/abs/2306.10232v2 | # Multi-Task Offloading via Graph Neural Networks in Heterogeneous Multi-access Edge Computing
###### Abstract
In the rapidly evolving field of Heterogeneous Multi-access Edge Computing (HMEC), efficient task offloading plays a pivotal role in optimizing system throughput and resource utilization. However, existing task offloading methods often fall short of adequately modeling the dependency topology relationships between offloaded tasks, which limits their effectiveness in capturing the complex interdependencies of task features. To address this limitation, we propose a task offloading mechanism based on Graph Neural Networks (GNN). Our modeling approach takes into account factors such as task characteristics, network conditions, and available resources at the edge, and embeds these captured features into the graph structure. By utilizing GNNs, our mechanism can capture and analyze the intricate relationships between task features, enabling a more comprehensive understanding of the underlying dependency topology. Through extensive evaluations in heterogeneous networks, our proposed algorithm improves 18.6%-53.8% over greedy and approximate algorithms in optimizing system throughput and resource utilization. Our experiments showcase the advantage of considering the intricate interplay of task features using GNN-based modeling.
Task Offloading, Graph Neural Network, Multi-access Edge Computing, Heterogeneous Network
## I Introduction
Edge computing refers to the decentralized infrastructure that brings computation and data storage closer to the network edge, reducing latency and improving response times for end-users [1]. Task offloading in edge scenes has become an essential technique in the field of edge computing, involving the distribution of computationally intensive tasks from resource-constrained devices to edge servers or cloud platforms, leveraging their computational capabilities and proximity to the data source [2]. By offloading tasks to edge servers, the computational burden on these devices is alleviated, enabling them to focus on tasks that are better suited for local execution. Furthermore, compared to cloud computing, offloading tasks to edge servers can significantly reduce latency and response times. Scholars in the field have recognized the potential benefits of task offloading in edge scenes. They emphasize that efficient task offloading strategies can lead to improved user experiences, real-time responsiveness, and reduced energy consumption for resource-constrained devices [3, 4]. However, some challenges need to be addressed to fully realize the potential of task offloading in edge environments.
One of the challenges is to generate reasonable strategies that take into account various factors such as task characteristics, network conditions, and the availability of resources at the edge. The decision of whether to offload a task and where to offload it requires careful consideration and optimization algorithms. A common solution is to use heuristic algorithms to generate offloading strategies, including genetic algorithms, simulated annealing, and ant colony optimization [5, 6]. However, these algorithms may fall into local optima and require manual tuning of algorithm parameters to achieve good results. A substantial body of work also uses game theory to model the offloading cooperation between users and service providers and to design corresponding auction algorithms [7, 8]. However, game-theoretic methods assume that all players are rational, i.e., that they always adopt the optimal strategy, which is difficult to guarantee in real scenarios. In addition, reinforcement learning has been introduced into the offloading decision-making process, allowing agents to take offloading actions based on environmental feedback [9, 10]. Unfortunately, researchers have found that some of these models are difficult to converge and struggle to find the optimal solution accurately.
With the development of Graph Neural Networks (GNNs) in recent years, GNNs have become a powerful model widely used in graph data analysis [11]. GNNs can effectively process graph data with complex structures, which enables their application in many fields. In the domain of task offloading, GNNs offer advantages that make them well-suited to addressing the challenges of efficient offloading strategies in edge scenes. One key motivation for employing GNNs in task offloading is their ability to leverage the inherent graph structure that underlies many real-world scenarios. In the context of edge scenes, the relationships between tasks, devices, and edge servers can often be represented as a graph, where nodes represent tasks or devices, and edges represent the dependencies or relationships between them [12]. By utilizing GNNs, it becomes possible to exploit the structural information encoded in the graph to make informed offloading decisions. Moreover, GNNs can adaptively update their representations as the graph evolves, allowing them to adapt to dynamic changes in task requirements, network conditions, or the availability of edge resources. This adaptability is crucial in edge scenes where the availability and characteristics of edge servers may vary dynamically. GNNs can continuously learn and update their models based on the evolving graph, enabling more flexible and responsive offloading decisions.
The primary research objective of this study is to address the challenges associated with efficient offloading decisions in edge environments by utilizing GNNs, and to leverage the inherent graph structure of edge scenes to improve resource utilization and reduce latency. Firstly, we propose a framework that models the tasks that users need to offload as a graph structure. Secondly, we apply the Graph Convolutional Network (GCN) to the offloading graph. We define a training method for the GCN, which uses a graph state transfer method to generate offloading strategies, allowing for the integration of contextual information and dependencies. This approach has the potential to enhance the decision-making process, considering task characteristics, network conditions, and available edge resources. Finally, we conduct a comprehensive evaluation of the GCN-based offloading strategies in edge scenarios. Through extensive simulations and performance analysis, we assess the effectiveness of the proposed framework in terms of system throughput and resource utilization optimization. Compared with various methods, the GCN strategy effectively improves system processing efficiency. Our contributions can be summarized as follows:
1. We design a modeling approach that converts the tasks to be offloaded into a graph structure. This method generates the inherent graph structure based on task attributes and adapts to dynamic changes. This design advances the understanding of utilizing graph-based neural networks for optimizing task offloading in edge scenes.
2. We propose a framework to explore the potential of GCN for task offloading in edge scenarios. GCN provides a means to incorporate complex contextual information and dependencies into offloading decisions, leading to improved system performance and enhanced resource utilization.
3. Our experiments show that the GCN exhibits promising results in graph-related tasks, indicating its potential effectiveness in the field of task offloading. Compared with the baselines, the GCN achieves better system throughput and resource utilization.
## II Related Work
### _Overview of existing approaches to task offloading in edge scenes_
In task offloading, it is important to consider the heterogeneity of MEC (Multi-access Edge Computing) servers. This heterogeneity refers to variations in computational capabilities, storage capacities, and communication capabilities among MEC servers. Researchers have explored different approaches to address these challenges and optimize the performance of edge computing systems. This section provides an overview of some existing approaches and their contributions. Y. Tu _et al._[13] optimized task transmission in fog computing networks through numerical methods, balancing the costs of local computing, offloading, and discarding, and achieving a low-complexity offloading algorithm. However, although numerical methods can accurately find the optimal solution, they lack dynamic awareness of the scene. S. Yang _et al._[14] proposed a multi-round sequential combinatorial auction mechanism for the heterogeneous Internet of Vehicles scenario and modeled the matching problem as a grouped knapsack problem. However, strategy generation with game-theoretic algorithms generally requires substantial computing resources and time. The above literature indicates that researchers have recognized that the heterogeneity of MEC servers poses additional challenges in resource allocation and task scheduling, and have proposed solutions to efficiently utilize the available resources, while appropriate approaches for generating offloading decisions are still needed.
### _Discussion of GNN-based strategies in edge computing_
GNNs have shown great potential in capturing and leveraging the inherent graph structures present in edge computing environments. By representing the relationships between tasks, edge devices, and cloud servers as a graph, GNNs enable efficient decision-making and resource allocation in task offloading scenarios. Several studies have demonstrated the effectiveness of GNN-based strategies in achieving better performance metrics, such as reduced latency, improved energy efficiency, and enhanced user experience. Works [15, 16, 17] combine GNNs and reinforcement learning in edge scenes. Their common approach is to use a GNN for scene modeling and then use reinforcement learning algorithms to generate offloading strategies. K. Li _et al._[18] propose a depth-graph-based reinforcement learning framework for unmanned aerial vehicle systems, which reduces the loss rate of aerial task transmission. However, there is currently a lack of work utilizing GNNs to capture the spatial dependencies among tasks, and effective literature references are scarce. Moreover, existing graph representations do not allow real-time updates and adjustments based on changing heterogeneous network conditions and resource availability.
## III System Overview
In this section, we consider a general multi-cell heterogeneous network. This network consists of multiple randomly distributed heterogeneous edge servers (ES), each with varying computing and communication capabilities. For convenience of description and analysis, we consider a quasi-static network scenario, where time is divided into time slots of the same size, and the resource allocation and network conditions in each time slot are considered static and invariant. Within each time slot, user devices (UDs) generate tasks that await offloading. To complete the task response within strict delay requirements, a task is transferred to a nearby ES with high computing power for processing, i.e., task offloading. In any specific time slot, each ES has two states: idle and busy. It should be pointed out that each ES can compute at most one task at a time. If a task is already being executed on the current ES, subsequent tasks are placed in a waiting queue. We assume that in time slot \(\tau\), the ES set is \(S\) and the set of tasks to be offloaded is \(N\). Each
task is represented by a pair \((z,c)\), where \(z\) denotes the task size in bits, and \(c\) denotes the computational demand of the task in CPU cycles.
### _Communication model_
In terms of communication, the system is based on the OFDMA protocol. The entire frequency band is divided into multiple subcarriers, which is in line with most current wireless standards. Communication devices occupying different subcarriers do not generate interference, and interference mainly comes from inter cell interference. The Signal Interference plus Noise Ratio (SINR) of task \(n\in N\) can be expressed as:
\[\mathrm{SINR}_{n}=\frac{h_{ns}p_{n}}{\sigma_{n}^{2}} \tag{1}\]
\[\sigma_{n}^{2}=\sigma^{2}+\sum_{m\in I^{k}\setminus\{n\}}p_{m}h_{ms}. \tag{2}\]
where \(h_{ns}\) denotes the communication gain between the UD and ES \(s\) during the offloading process, and \(p_{n}\) denotes the communication power of the UD. \(\sigma_{n}^{2}\) denotes the transmission noise of task \(n\), and \(\sigma^{2}\) denotes the background noise. The subcarrier index is denoted by \(k\in K\), while \(I^{k}\) denotes the set of tasks transmitted on subcarrier \(k\). Next, we denote the transmission rate between task \(n\) and ES \(s\) by \(r_{n}\):
\[r_{n}=B_{k}\log_{2}\left(1+\mathrm{SINR}_{n}\right), \tag{3}\]
where \(B_{k}\) denotes the bandwidth of subcarrier \(k\). Then the task transmission time \(t_{n}^{tran}\) can be expressed as:
\[t_{n}^{tran}=\frac{z_{n}}{r_{n}}. \tag{4}\]
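For concreteness, Eqs. (1)-(4) can be computed as in the following sketch (our own illustrative Python; the function name and the default noise value are assumptions, not taken from the paper):

```python
import numpy as np

def transmission_delay(z_n, p_n, h_ns, interferers, bandwidth_k, noise=1e-13):
    """Transmission delay of task n following Eqs. (1)-(4).

    z_n          -- task size in bits
    p_n, h_ns    -- transmit power and channel gain of task n
    interferers  -- list of (p_m, h_ms) for the other tasks on subcarrier k
    bandwidth_k  -- bandwidth of subcarrier k in Hz
    noise        -- background noise power sigma^2 (illustrative value)
    """
    sigma_n2 = noise + sum(p_m * h_ms for p_m, h_ms in interferers)  # Eq. (2)
    sinr_n = p_n * h_ns / sigma_n2                                   # Eq. (1)
    r_n = bandwidth_k * np.log2(1.0 + sinr_n)                        # Eq. (3), bits/s
    return z_n / r_n                                                 # Eq. (4)
```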
### _Computation model_
Computationally, the computation time \(t_{n}^{comp}\) for task \(n\) is obtained by dividing the computational demand by the computational power of the ES, both of which are expressed in terms of CPU cycles, as follows:
\[t_{n}^{comp}=\frac{c_{n}}{f_{s}} \tag{5}\]
where \(f_{s}\) is the computation rate of ES \(s\), i.e., its CPU clock frequency (cycles per second). Each ES serves only one task per time slot, so subsequently arriving tasks cannot be executed immediately. Moreover, we assume that the ES starts processing a task only when all of its data transfer is completed. Therefore, we use \(t_{n}^{start}\) to denote the time at which the task to be offloaded starts transmission. The time at which task processing is completed, \(t_{n}^{end}\), can be expressed as:
\[t_{n}^{end}=t_{n}^{start}+t_{n}^{tran}+t_{n}^{comp} \tag{6}\]
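Eqs. (5) and (6) then combine with the transmission delay above (again a sketch; the names are ours):

```python
def completion_time(t_start, t_tran, c_n, f_s):
    """Task completion time following Eqs. (5)-(6).

    t_start -- time at which the task starts transmission
    t_tran  -- transmission delay, e.g. from transmission_delay()
    c_n     -- computational demand of the task in CPU cycles
    f_s     -- CPU clock frequency of the serving ES in cycles/s
    """
    t_comp = c_n / f_s                 # Eq. (5)
    return t_start + t_tran + t_comp   # Eq. (6)
```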
Fig. 1 shows an example where multiple tasks are offloaded to the ES cluster via a wireless network. The size and computational requirements vary from task to task. The right side of Fig. 1 shows typical task arrival times and execution times for selected specific ES and communication scenarios.
## IV Training Process for Generating Offloading Decision Using GCN
In this section, we will delve into the training process for generating offloading decisions using GCN.
### _Node and edge features_
Our modeling approach captures the spatial and temporal dependencies among tasks. First, we represent the tasks to be offloaded as nodes in the graph. The edges represent the contextual relationships between tasks and are mainly affected by features such as ES computing power, network conditions, task size, and computational demand. When there is a temporal conflict between task executions, edges are generated as connections between the corresponding nodes in the graph.
As an example, the three nodes on the left side of Fig. 2 represent task 3, task 5, and task 6. A comparison with Fig. 1 shows that all three tasks have execution time conflicts (marked by red dashed lines). Therefore, all three are connected by edges in Fig. 2 to indicate the dependencies among the tasks. The graph generated based on the execution of all tasks is shown on the right side of Fig. 2. Furthermore, GCN-based approaches facilitate dynamic and adaptive task offloading. The graph representation allows for real-time updates and adjustments based on changing network conditions and resource availability.
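A minimal sketch of this graph construction, assuming each task's execution interval \((t_{n}^{start},t_{n}^{end})\) is already known; the use of networkx and the pairwise overlap test are our illustrative choices, not published code:

```python
import itertools
import networkx as nx

def build_task_graph(intervals):
    """Build the task graph: nodes are tasks, and an edge connects any two
    tasks whose execution intervals overlap in time (a scheduling conflict)."""
    g = nx.Graph()
    g.add_nodes_from(intervals)
    for (i, (s_i, e_i)), (j, (s_j, e_j)) in itertools.combinations(
            intervals.items(), 2):
        if max(s_i, s_j) < min(e_i, e_j):  # execution intervals overlap
            g.add_edge(i, j)
    return g

# Example mirroring Fig. 2: tasks 3, 5 and 6 overlap pairwise and thus
# form a connected triple in the resulting graph.
```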
Fig. 1: Example of multiple tasks offloaded to the ES cluster, with task arrival and execution times.
Fig. 2: Task graph generated from execution-time conflicts among tasks.
### _GCN-based offloading decision_
Quadratic Unconstrained Binary Optimization (QUBO) is a mathematical form for solving optimization problems. QUBO can encode offloading problems in Hamiltonian form. For computational offloading problems, the offloading decision between a particular UD \(n\) and ES \(s\) can be represented by a binary variable \(x\): \(x=1\) means that an offloading mapping is formed between the two, while \(x=0\) means it is not. We input the offloading Hamiltonian into the QUBO solver and obtain the decision \(x\). We set \(V_{s}\) to be the set of tasks offloaded to ES \(s\), and \(E_{s}\) denotes the set of all edges connected to nodes in \(V_{s}\). \(\delta\) is a constant penalty parameter. Following the modeling approach of Schuetz et al. [19], the offloading strategy can be formulated as the Hamiltonian:
\[H_{s}=-|V_{s}|+\delta|E_{s}|,s\in S \tag{7}\]
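In the physics-inspired GNN approach of [19], such a Hamiltonian is minimized by relaxing the binary variables to probabilities output by the network and using the relaxed Hamiltonian as a differentiable loss. A sketch of this relaxation as we read it (PyTorch; we treat \(|E_{s}|\) as the number of conflict edges whose two endpoints are both selected):

```python
import torch

def hamiltonian_loss(p, edge_index, delta=2.0):
    """Differentiable relaxation of Eq. (7).

    p          -- tensor of shape (N,), soft probability that each task node
                  is offloaded to the given ES (relaxed binary x)
    edge_index -- long tensor of shape (2, E) listing conflict edges (i, j)
    delta      -- penalty weight (a hyperparameter)
    """
    selected = p.sum()                    # expected |V_s|
    i, j = edge_index
    conflicts = (p[i] * p[j]).sum()       # expected number of violated edges
    return -selected + delta * conflicts  # H_s = -|V_s| + delta * |E_s|
```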
The GCN propagates node features to adjacent nodes via the message-passing mechanism, i.e., the feature vector \(X_{i}\) of node \(i\) is passed to the feature vectors of its adjacent nodes through the adjacency matrix \(A\). This process can be expressed by the following equation:
\[M^{(l+1)}=\sigma_{R}\left(\hat{D}^{-\frac{1}{2}}\hat{A}\hat{D}^{-\frac{1}{2}} M^{(l)}W^{(l)}\right), \tag{8}\]
where \(M^{(l)}\) denotes the feature matrix of the \(l\)-th layer and \(W^{(l)}\) denotes the weight matrix of the \(l\)-th layer. \(\hat{A}=A+I\) is the adjacency matrix \(A\) with added self-connections. \(\hat{D}\) is the diagonal degree matrix, where \(\hat{D}_{ii}=\sum_{j}\hat{A}_{ij}\). \(\sigma_{R}\) denotes the ReLU activation function.
We follow the unsupervised learning method in [19]. A GCN can build a multilayer network by stacking multiple graph convolutional layers. The output feature vector of each layer serves as the input feature vector of the next layer, so that information is passed and updated layer by layer. A fully connected layer and a softmax activation function are applied to the feature vector of the last layer, so that the output vector represents the probability that each node belongs to each class.
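A minimal NumPy sketch of this forward pass, i.e., Eq. (8) stacked over layers followed by a dense layer with row-wise softmax (illustrative only; training code omitted):

```python
import numpy as np

def gcn_layer(M, A, W):
    """One graph convolutional layer implementing Eq. (8) with ReLU."""
    A_hat = A + np.eye(A.shape[0])           # add self-connections
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ M @ W)

def gcn_forward(X, A, hidden_weights, W_out):
    """Stacked GCN layers, then a fully connected layer and softmax that
    yield per-node class probabilities."""
    M = X
    for W in hidden_weights:
        M = gcn_layer(M, A, W)
    logits = M @ W_out
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```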
The proposed research presents a multi-task offloading framework based on GCN. The framework aims to optimize the decision-making process for offloading tasks in a heterogeneous network environment. The algorithmic steps for computing the offloading decisions are outlined in Algorithm 1. The first step involves the initialization of the GCN parameters. These parameters are crucial for the subsequent decision-making process. Subsequently, the algorithm starts monitoring various network parameters, namely network bandwidth \(B_{k}\), ES computing power \(f_{s}\), and channel gain between the UD and ES \(h_{ns}\). These parameters are essential for determining the feasibility and efficiency of offloading tasks.
```
1: GCN parameter initialization.
2: Monitoring the network bandwidth, ES computing power, and channel gain of UD and ES as \(B_{k}\), \(f_{s}\) and \(h_{ns}\), \(n\in N,s\in S\).
3:for\(s\in S\)do
4: Collect the tasks to be offloaded in the system and compose a waiting list
5:for episode number t in 1,2,...T do
6: Calculate the transfer delay \(t_{n}^{tran}\) and computing delay \(t_{n}^{comp}\) of task n to s, \(n\in N\).
7: Determine the task execution context based on the task arrival schedule.
8: Generate task graph \(G\) based on task execution schedules.
9: Input \(G\) into GCN to generate the offloading decision.
10: Remove the task corresponding to the selected offloaded node from the waiting list.
11:endfor
12:endfor
```
**Algorithm 1** Multi-Task Offloading Framework Based on GCN
The algorithm then proceeds to iterate over each server \(s\) in the set of servers \(S\). Within each server iteration, the algorithm collects the tasks that need to be offloaded in the system and composes a waiting list. This waiting list represents the pool of tasks available for offloading. For each episode (indexed by \(t\)) within the predetermined number of episodes \(T\), the algorithm performs the following steps. Firstly, it calculates the transfer delay \(t_{n}^{tran}\) and computing delay \(t_{n}^{comp}\) for each task \(n\) to the selected server \(s\). These delays capture the time required for transferring data and executing computations, respectively. Next, the algorithm determines the task execution context based on the task arrival schedule. This step ensures that tasks are executed in the correct order and timeframe.
To enable the decision-making process, the algorithm generates a task graph \(G\) based on the task execution schedules. This graph serves as the input to the GCN model. By utilizing the GCN, the algorithm generates the optimal offloading decision for each task. The decision indicates whether a task should be offloaded or executed locally. Finally, the algorithm removes the task corresponding to the selected offloaded node from the waiting list, as it has been successfully offloaded and processed. The entire process repeats for each server in the set \(S\), ensuring that all tasks are considered for offloading. The algorithm terminates once it completes the iteration over all servers and episodes, providing a comprehensive offloading decision for each task in the system.
## V Experimental and Model Evaluation
Next, we evaluate the performance of the GCN-based computational offloading algorithm through experiments. We consider a multi-cell system with a heterogeneous ES in each cell center. Each ES not only has abundant communication resources but also has powerful computing power. We assume that there are a total of 6 ES in the network, and their computing power is randomly selected from [0.6, 0.8, 1.0, 1.2] GHz. The scenario is based on the OFDMA protocol, with a bandwidth of 5 MHz and channel noise of -117 dBm. Considering the diversity of user devices, we set the communication power of each device to be selected from [0.3, 0.5, 0.7] W. The size and computational requirement of each offloading task are randomly selected from [800, 4000] KB and \([3000,6000]\times 10^{6}\) cycles, respectively. In addition, to better reflect algorithm performance, we have implemented the following baselines:
* _GCN-based Offloading Algorithm_ (**GCNOA**): Our proposed algorithm. Use GCN to generate offloading policies based on task attribute graphs.
* _Branch and Bound for Offloading Algorithm_ (**BBOA**): The branch-and-bound algorithm traverses all possible solutions recursively while using pruning strategies to exclude invalid solutions. The branch-and-bound algorithm can achieve exponential time complexity in the worst case.
* _Approximation Strategy for Offloading Algorithm_ (**ASOA**): Approximation algorithms match tasks and nodes according to heuristic rules in pursuit of maximizing offloading performance. It can find an approximate optimal solution with high efficiency, providing a way to trade off time and solution quality.
* _Greedy Strategy for Offloading Algorithm_ (**GSOA**): In the greedy policy, pending offload tasks are offloaded to the nearest ES based on network conditions to maximize system performance.
### _Server workload status_
Fig. 3 shows the actual workload of the server cluster. There are a total of 6 ES in the scene, and the number of offloaded tasks is 200. In Fig. 3, the horizontal axis represents the time frame, and the vertical axis represents the server ID. The first 400 time frames are captured in Fig. 3 for display. The different colored line segments represent the time during which an offloaded task occupies the corresponding server's resources. Each server has tasks to process most of the time; the less idle time, the higher the resource utilization rate. Specifically, the resource utilization rates from server 1 to server 6 are 78.2%, 73.3%, 72.7%, 73.7%, 68.2%, and 65.9%, respectively. Therefore, our proposed GCN algorithm achieves efficient task scheduling and reasonable allocation of server cluster resources.
Fig. 4: System throughput.
Fig. 5: Server resource utilization rate.
Fig. 3: Workload occupancy of the server cluster.
### _System throughput & resource occupancy_
Fig. 4 shows the changes in system throughput under different task densities. The x-axis represents the number of tasks to be offloaded, while the y-axis represents the system throughput. The blue, green, purple, and red curves represent the GCNOA, BBOA, ASOA, and GSOA algorithms, respectively. All four curves show that as the number of tasks to be offloaded increases, the overall system throughput rises. The curves corresponding to the ASOA and GSOA algorithms are the lowest, indicating that greedy and approximation strategies are prone to falling into local optima, so their performance is difficult to guarantee. In contrast, the GCNOA algorithm matches the performance of BBOA, with the two performing similarly. In addition, GCNOA achieves higher system throughput than the other baselines. Specifically, when the number of tasks is 120, the throughput of GCNOA is improved by 23.5% and 50.0% compared to the ASOA and GSOA algorithms, respectively.
Next, we analyze the resource utilization rate of the system. To describe the resource utilization of the different algorithms, we expanded the number of tasks to the range [50-250]. In Fig. 5, as the number of tasks increases, the overall resource utilization rates of the offloading algorithms from Section IV show an upward trend. Among them, GSOA and ASOA perform poorly, with significant curve fluctuations. This indicates that the greedy and approximation algorithms are not stable enough to find the global optimal solution. Our proposed GCNOA approaches BBOA in resource utilization, which indicates that the GCN can effectively generate efficient and robust offloading strategies. Specifically, when the number of tasks is 100, the resource utilization rate of GCNOA is 18.6% and 53.8% higher than that of ASOA and GSOA, respectively; when the number of tasks is 200, it is 22.1% and 44.9% higher.
## VI Conclusion
In this study, we addressed the limitations of existing task offloading methods in modeling the dependency topology relationships between offloaded tasks within the context of HMEC. We introduced a novel task offloading mechanism based on GCN that effectively captures the complex interdependencies of task features. The motivation for utilizing GCNs in task offloading stems from their ability to leverage the graph structure inherent in edge scenes, capture complex relationships, propagate information, and adapt to dynamic changes. By harnessing the power of GCNs, task offloading strategies benefit from improved decision-making, enhanced resource utilization, and optimized performance in edge environments. By considering task characteristics, network conditions, and available edge resources, we embedded these features into a graph structure and utilized GCNs to analyze the integrated relationships.
Our extensive evaluations in heterogeneous networks have demonstrated the superiority of our proposed algorithms, resulting in significant improvements in system throughput and resource utilization. Compared with the greedy and approximate algorithms, our proposed algorithm improves the system throughput by 23.5%-50.0% and resource utilization by 18.6%-53.8%. These findings strongly support the notion that considering the integrated interplay of task features through GCN-based modeling is crucial for achieving optimal offloading decisions. We believe that our findings will inspire further exploration and refinement of GCN applications in edge networks, ultimately leading to more efficient and intelligent computing offloading solutions in heterogeneous edge computing environments.
|
2303.09340 | Improving Automated Hemorrhage Detection in Sparse-view Computed
Tomography via Deep Convolutional Neural Network based Artifact Reduction | This is a preprint. The latest version has been published here:
https://pubs.rsna.org/doi/10.1148/ryai.230275
Purpose: Sparse-view computed tomography (CT) is an effective way to reduce
dose by lowering the total number of views acquired, albeit at the expense of
image quality, which, in turn, can impact the ability to detect diseases. We
explore deep learning-based artifact reduction in sparse-view cranial CT scans
and its impact on automated hemorrhage detection. Methods: We trained a U-Net
for artefact reduction on simulated sparse-view cranial CT scans from 3000
patients obtained from a public dataset and reconstructed with varying levels
of sub-sampling. Additionally, we trained a convolutional neural network on
fully sampled CT data from 17,545 patients for automated hemorrhage detection.
We evaluated the classification performance using the area under the receiver
operator characteristic curves (AUC-ROCs) with corresponding 95% confidence
intervals (CIs) and the DeLong test, along with confusion matrices. The
performance of the U-Net was compared to an analytical approach based on total
variation (TV). Results: The U-Net performed superior compared to unprocessed
and TV-processed images with respect to image quality and automated hemorrhage
diagnosis. With U-Net post-processing, the number of views can be reduced from
4096 (AUC-ROC: 0.974; 95% CI: 0.972-0.976) views to 512 views (0.973;
0.971-0.975) with minimal decrease in hemorrhage detection (P<.001) and to 256
views (0.967; 0.964-0.969) with a slight performance decrease (P<.001).
Conclusion: The results suggest that U-Net based artifact reduction
substantially enhances automated hemorrhage detection in sparse-view cranial
CTs. Our findings highlight that appropriate post-processing is crucial for
optimal image quality and diagnostic accuracy while minimizing radiation dose. | Johannes Thalhammer, Manuel Schultheiss, Tina Dorosti, Tobias Lasser, Franz Pfeiffer, Daniela Pfeiffer, Florian Schaff | 2023-03-16T14:21:45Z | http://arxiv.org/abs/2303.09340v4 | Improving Automated Hemorrhage Detection in Sparse-view Computed Tomography via Deep Convolutional Neural Network based Artifact Reduction
###### Abstract
Intracranial hemorrhage poses a serious health problem requiring rapid and often intensive medical treatment. For diagnosis, a Cranial Computed Tomography (CCT) scan is usually performed. However, the increased health risk caused by radiation is a concern. The most important strategy to reduce this potential risk is to keep the radiation dose as low as possible and consistent with the diagnostic task. Sparse-view CT can be an effective strategy to reduce dose by reducing the total number of views acquired, albeit at the expense of image quality. In this work, we use a U-Net architecture to reduce artifacts from sparse-view CCTs, predicting fully sampled reconstructions from sparse-view ones. We evaluate the hemorrhage detectability in the predicted CCTs with a hemorrhage classification convolutional neural network, trained on fully sampled CCTs to detect and classify different sub-types of hemorrhages. Our results suggest that the automated classification and detection accuracy of hemorrhages in sparse-view CCTs can be improved substantially by the U-Net. This demonstrates the feasibility of rapid automated hemorrhage detection on low-dose CT data to assist radiologists in routine clinical practice.
Artifact reduction computed tomography (CT) deep learning hemorrhage detection sparse-view CT U-Net
## 1 Introduction
Intracranial hemorrhage (ICH) refers to a pathological bleeding within the cranial vault. It is a potentially life-threatening disease with a median 30-day fatality rate of 40.4%, accounting for approximately 10% to 30% of strokes annually [1][2]. Its most important risk factors include hypertension [3], cerebral amyloid angiopathy [4] and anticoagulation treatment [5]. Furthermore, factors like drug abuse [6] and excessive alcohol consumption [3] are linked to an increased risk. In the event of an ICH, rapid and accurate diagnosis is crucial for an optimal treatment. The clinical standard for diagnosis is an intracranial Computed Tomography (CT) scan [7]. However, with the increasing use
of medical CT, not only for intracranial scans, concerns over the associated health risk caused by radiation exposure are gaining more and more attention [8][9]. Especially for CCTs, the associated overall median effective dose of 2 mSv for a single routine head CT is close to the natural radiation accumulated per year and affects not only the brain but also nearby areas, for example the lenses of the eyes, which have a particularly high sensitivity to radiation [9][10][11].
Besides lowering the X-ray tube current, sparse-view CT is a promising approach to reduce dose by decreasing the number of views. However, this decrease leads to severe artifacts in filtered backprojection (FBP) reconstruction. Hence, adequate processing is required to restore image quality to an acceptable level for diagnostic CT images. In the past, compressed sensing (CS) and iterative reconstruction (IR) approaches have been widely investigated, which generally minimize a CS-based regularization term and a data-fidelity term that ensures data consistency. While these approaches have been proven to produce good results, they are also very computationally expensive due to the repeated update steps during iterative optimization [12][13][14].
Recently, machine learning approaches using deep neural networks have gathered vast scientific attention. It has been shown that in the context of artifact reduction in sparse-view CT, excellent results can be achieved with comparably low computational effort during inference [15][16][17]. Approaches that combine deep learning and iterative techniques can also be found in literature [18][19].
On the other hand, extensive research has been conducted on utilizing deep learning techniques for automated detection and classification of pathological features in CT images to aid radiologists in clinical practice [20][21][22][23]. Despite their outstanding performance, the presence of artifacts in sparse-view CT images can impair the accuracy of these algorithms.
The excellent results achieved by convolutional neural networks in reducing artifacts, coupled with their efficient computational performance, served as a motivation for our study. We aim to explore the potential benefits of deep learning based artifact reduction in sparse-view CCTs and to determine whether this approach could enhance automated hemorrhage detection.
For that, we use a U-Net architecture to reduce artifacts from sparse-view CCTs, predicting fully sampled reconstructions from sparse-view ones. The performance of the U-Net is compared to an analytical approach based on total variation (TV) [24]. We evaluate the hemorrhage detectability of the predicted full-view CCTs by a hemorrhage classification convolutional neural network, trained on full-view data to detect and classify different types of intracranial hemorrhages. To quantify the classification performance, we use the area under the receiver operator characteristic curve (AUC-ROC) [25].
## 2 Materials and Methods
The code associated with this publication can be found at [https://github.com/ThalhammerJ/Sparse-View-Cranial-CT-Reconstruction](https://github.com/ThalhammerJ/Sparse-View-Cranial-CT-Reconstruction).
### Network Architectures
Our network follows the U-Net architecture with an additional skip connection between the input and output of the network, similar to the implementation by Jin et al. [26][16]. The architecture is depicted in figure 1. A 512x512 input image is processed by a series of convolutional layers and downsampled by four pooling layers, leading to a resolution of 32x32 at the bottleneck. The pooling is performed by strided convolutions with stride (2, 2), followed by a 1x1 convolution. The unpooling is implemented with transposed strided convolutions, again with stride (2, 2). The use of strided convolutions was adapted from the U-Net implementation by Guo et al. [27].
As described by Han et al., one of the most important features of the multi-resolution architecture of the U-Net is the exponentially large receptive field due to the pooling and unpooling layers [15]. This feature allows the U-Net to deal with streak artifacts that occur in sparse-view CT and typically spread over a large portion of the image.
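To make the description concrete, a minimal Keras sketch of such a network is given below; the strided-convolution pooling, 1x1 convolutions, transposed-convolution unpooling, and input-output skip follow the text, while filter counts and remaining details are our illustrative choices:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(512, 512, 1), base_filters=64, depth=4):
    inputs = layers.Input(input_shape)
    x, skips, f = inputs, [], base_filters
    for _ in range(depth):                              # encoder
        x = conv_block(x, f)
        skips.append(x)
        x = layers.Conv2D(f, 2, strides=2)(x)           # strided-conv pooling
        x = layers.Conv2D(f, 1)(x)                      # 1x1 convolution
        f *= 2
    x = conv_block(x, f)                                # bottleneck at 32x32
    for skip in reversed(skips):                        # decoder
        f //= 2
        x = layers.Conv2DTranspose(f, 2, strides=2)(x)  # transposed-conv unpooling
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, f)
    residual = layers.Conv2D(1, 1)(x)
    outputs = layers.Add()([inputs, residual])          # input-output skip
    return tf.keras.Model(inputs, outputs)
```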
For the classification of the hemorrhage types, the EfficientNetB2 architecture implemented in Keras was used [28][29]. Because of the demonstrated benefits of using pre-trained models in image recognition tasks in terms of convergence, robustness and accuracy, the network was initialized with ImageNet pre-trained weights [30][31][32][33]. The final dense layer, originally with 1000 output classes, was substituted by a dense layer with 6 output classes. Due to the use of pre-trained weights, the network expects three input channels of shape 260x260 pixels in the [0, 255] range.
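A minimal Keras sketch of this classifier setup; the global pooling layer and the sigmoid output (one unit per non-exclusive label) are our assumptions, since the text only states that the final 1000-class dense layer was replaced by a 6-class one and trained with binary cross-entropy:

```python
import tensorflow as tf

def build_classifier(num_classes=6):
    base = tf.keras.applications.EfficientNetB2(
        include_top=False, weights="imagenet", input_shape=(260, 260, 3))
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    outputs = tf.keras.layers.Dense(num_classes, activation="sigmoid")(x)
    return tf.keras.Model(base.input, outputs)

model = build_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy")
```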
### Dataset
The dataset used was taken from the RSNA 2019 Brain CT Hemorrhage Challenge [34]. It was compiled from the clinical PACS (picture archiving and communication system) archives of three institutions, namely Stanford University, Universidade Federal de Sao Paulo, and Thomas Jefferson University Hospital. The annotation was performed in a collaborative effort between the RSNA and the American Society of Neuroradiology (ASNR). The label for each CT image is a six-dimensional vector with binary entries, where the first entry indicates whether a hemorrhage is present or not, and the remaining five labels indicate which of the five possible subtypes (subarachnoid, intraventricular, subdural, epidural, and intraparenchymal) is present. Each image is provided in DICOM format [35]. In total, there are 21,784 labeled CT scans in the dataset from 18,938 different patients, with a total of 752,803 individual images. After discarding all images that were either not 512x512 pixels or caused errors during the data generation process, 696,842 images from 18,545 patients remained.
For the training of the U-Net, 4000 patients were randomly selected. The data from 2400 patients (95384 individual images) served as training data, the data from 600 patients (23842 individual images) was used for validation. The remaining 1000 patients (39425 individual images) were held back for testing purposes. The pixel intensities in the DICOM format are usually stored as 12-bit numbers, leading to possible intensity values in the range [0-4095]. To produce sparse-view CTs, the data was normalized to range [0, 1] by division by 4095. After that, sinograms with 4096 views were created under parallel beam geometry from the CT dataset using the Astra Toolbox [36]. For the ground truth images, CT images were reconstructed again via FBP using 4096 views. Six corresponding sparse-view CT data subsets at varying levels of undersampling were generated using FBP with only 64, 128, 256, 512, 1024 and 2048 views, respectively.
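The sparse-view simulation can be illustrated as follows; for brevity, this sketch uses scikit-image's parallel-beam radon/iradon transforms instead of the Astra Toolbox actually used in the paper:

```python
import numpy as np
from skimage.transform import radon, iradon

def sparse_view_fbp(image, n_views):
    """Forward-project a normalized CT slice under parallel-beam geometry
    with n_views angles and reconstruct it via filtered backprojection."""
    angles = np.linspace(0.0, 180.0, n_views, endpoint=False)
    sinogram = radon(image, theta=angles)
    return iradon(sinogram, theta=angles)

# Training pairs for the U-Net, e.g. for the 256-view subset:
# x = sparse_view_fbp(img, 256); y = sparse_view_fbp(img, 4096)
```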
Regarding the training of the EfficientNetB2 classification network, the same subset of 1000 patients was used for testing as for the U-Net. Because overfitting was much more of a problem for the EfficientNetB2, the earlier training set was expanded with the remaining data. Therefore, the training set for the classification task consisted of 17,545 individual patients with a total of 657,417 images. The individual images were first scaled to Hounsfield Units (HUs) using the rescale slope and intercept provided in the DICOM metadata. After that, the images were clipped to the diagnostically relevant brain window ranging from 0 HU to 80 HU. To meet the requirements of the pre-trained ImageNet weights, the images were rescaled to the range [0, 255] by multiplying the pixel values with \(\frac{255}{80}\), resized to 260x260 pixels by the bilinear resize function from the tensorflow library, and transformed to three-channel images by concatenating three neighboring CT images [37].
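A sketch of this preprocessing for a single slice (illustrative; the function name and the exact resize call are ours):

```python
import numpy as np
import tensorflow as tf

def preprocess_slice(dicom_pixels, slope, intercept):
    """Rescale to HU, clip to the brain window [0, 80] HU, map to [0, 255],
    and resize to 260x260; three neighboring slices are later stacked along
    the channel axis to form the three-channel input."""
    hu = dicom_pixels * slope + intercept
    windowed = np.clip(hu, 0.0, 80.0)
    scaled = windowed * (255.0 / 80.0)
    resized = tf.image.resize(scaled[..., np.newaxis], (260, 260),
                              method="bilinear")
    return resized.numpy()[..., 0]
```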
Figure 2 depicts the distribution of the labeled hemorrhage subtypes in the used training set of the EfficientNet, the U-Net and the test set in descending order. The majority of the used images have no hemorrhage labeled. The subtypes
Figure 1: Architecture of the used U-Net for a 512x512 input.
subdural, intraparenchymal, subarachnoid, and intraventricular occur in the same order of magnitude, whereas the epidural subtype is much less common.
### Training
The U-Net architecture was trained separately for each subset of the sparse-view data, ranging from 64 to 2048 views, receiving the sparse-view images as input and the fully sampled image as ground truth. The U-Net was trained with mean squared error loss for 75 epochs with a mini-batchsize of 32. From each image, a patch of size 256x256 was randomly selected and randomly rotated by either 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\), or 270\({}^{\circ}\). The learning rate \(lr\) of the networks was determined for each epoch by \(lr=10^{-4}/(epoch+1)\).
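The random-patch augmentation and learning-rate schedule just described, as a short sketch (our code, not the published implementation):

```python
import numpy as np

def augment_pair(x, y, patch=256, rng=np.random):
    """Random 256x256 crop and random 90-degree rotation, applied
    identically to the sparse-view input x and the fully sampled target y."""
    i = rng.randint(0, x.shape[0] - patch + 1)
    j = rng.randint(0, x.shape[1] - patch + 1)
    k = rng.randint(4)  # 0, 90, 180 or 270 degrees
    crop = lambda a: np.rot90(a[i:i + patch, j:j + patch], k)
    return crop(x), crop(y)

def lr_schedule(epoch):
    """Per-epoch U-Net learning rate: lr = 1e-4 / (epoch + 1)."""
    return 1e-4 / (epoch + 1)
```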
The training scheme for the EfficientNetB2 was adapted from Wang et al. [20]. The model was trained with k-fold cross validation with k=5 and binary cross-entropy loss. The learning rate was determined using a cosine decay schedule with restarts after epochs one, three, and seven, with an initial learning rate of \(5\cdot 10^{-4}\) and a minimum learning rate of \(10^{-5}\). The network was trained with a mini-batchsize of 32 for 15 epochs.
### Total Variation
In this work, the total variation (TV) denoising method by Chambolle implemented in the scikit-image library was used [24][38]. The optimal weight for the TV algorithm was chosen for each sparse subset by randomly sampling 1000 images from the training set and iterating through weights ranging from 0.001 to 1.000 in 0.001 increments. For each level of subsampling, the weight that resulted in the best score on the structural similarity index measure (SSIM) was then used to calculate the metrics on the test set [39].
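A sketch of this weight search with scikit-image (illustrative; the `data_range` handling is our assumption):

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.metrics import structural_similarity

def best_tv_weight(sparse_imgs, full_imgs,
                   weights=np.arange(0.001, 1.001, 0.001)):
    """Grid-search the TV weight that maximizes mean SSIM against the
    fully sampled reconstructions."""
    def mean_ssim(w):
        return np.mean([
            structural_similarity(denoise_tv_chambolle(s, weight=w), f,
                                  data_range=f.max() - f.min())
            for s, f in zip(sparse_imgs, full_imgs)])
    return max(weights, key=mean_ssim)
```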
### AUC-ROC
To quantify the hemorrhage classification performance, the AUC-ROC metric was used [25]. First, predictions on the test dataset for the raw and post-processed reconstructions are generated for each level of sparse sampling. The ROC curves are then created by plotting the true positive rate (TPR) against the false positive rate (FPR) at different classification thresholds. To obtain the final AUC-ROC values, the areas under the respective curves are calculated. For all these operations the scikit-learn library was used [40].
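A sketch of the per-class computation with scikit-learn (function name and the subtype column order are ours):

```python
from sklearn.metrics import roc_curve, auc

CLASSES = ["any", "epidural", "intraparenchymal",
           "intraventricular", "subarachnoid", "subdural"]

def auc_per_class(y_true, y_score, class_names=CLASSES):
    """ROC curve and AUC per label; y_true and y_score are (n_samples, 6)."""
    results = {}
    for c, name in enumerate(class_names):
        fpr, tpr, _ = roc_curve(y_true[:, c], y_score[:, c])
        results[name] = auc(fpr, tpr)
    return results
```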
## 3 Results
In the following, the results of the artifact reduction by the U-Net are presented. The outcomes are compared with artifact reduction by TV, both by visual impression and in terms of peak signal-to-noise ratio (PSNR) and SSIM values. Subsequently, the hemorrhage detection and classification performance on the raw sparse-view images and on the images post-processed by either the U-Net or the TV method is presented.
Figure 2: Distribution of the labeled hemorrhage subtypes in the used training set of the EfficientNetB2, the U-Net and in the test set. Note the logarithmic scaling of the x-axis.
### Artifact Reduction
Figure 3 shows an example of the artifact reduction by the U-Net on a CT image from the test set, reconstructed from a varying number of views. According to the provided label, an intraparenchymal and an intraventricular hemorrhage are present in the depicted example. The extracts depicted below the respective images display the zoomed-in hemorrhages. The shown images were rescaled to HU and clipped to the brain window. In a) the image reconstructed from 4096 views, which serves as the ground truth image, is shown. Here, the labeled intraparenchymal hemorrhage (cyan arrow) is clearly visible. The intraventricular subtype (orange arrow) is more difficult to detect but still stands out from the background tissue. Images b) - e) depict the same image reconstructed with 512, 256, 128, and 64 views, respectively. In b) it can be seen that the image quality deteriorates and artifacts become visible. However, the hemorrhages are still recognizable. In image c) the streak artifacts become more pronounced, making it more difficult to distinguish small features. The intraparenchymal subtype is still distinguishable; the intraventricular subtype, on the other hand, is only barely recognizable. In images d) and e) the streak artifacts are even more dominant and no features of the tissue inside the cranial vault can be recognized anymore. The hemorrhages are also not detectable. In images f) - i) the predictions of the U-Net for the corresponding sparse-view images are depicted. In all cases we note a clear reduction in streak artifacts compared to the respective input images b) - e). With increasingly sparse-sampled input, the predictions also tend to become smoother, i.e., sharp image features are not retained. Despite the image smoothing during U-Net processing, the image quality of the sparse-view CTs still improves, making the predictions appear much more similar to the full-view images. This improves the visibility of pathological features, as seen for the hemorrhages present. The contours of the intraparenchymal hemorrhage can be recognized up to and including 128-view sparse sampling (h), those of the intraventricular hemorrhage up to and including 256 views (g).
Figure 3: CT image from the test set labeled with intraparenchymal (cyan arrow) and intraventricular (orange arrow) hemorrhage. Image a) shows the image reconstructed from 4096 views using FBP. Images b) - e) show the same image reconstructed from 512, 256, 128, and 64 views, respectively. Images f) - i) show the U-Net predictions of the corresponding sparse-view images in the upper row. All images are presented in the brain window ranging from 0 HU to 80 HU. Both inserts are 80x80 pixels.
### Comparison with Total Variation (TV)
To put the performance of the U-Net into perspective, we also implemented artifact reduction using the TV method for comparison. TV is commonly applied to this type of undersampling problem [41][42].
Figure 4 compares results for one image labeled "healthy" from the test set that was processed in various different ways. The shown SSIM and PSNR values were calculated with respect to the image reconstructed from 4096 views. Images a) - f) show the image reconstructed using FBP from 2048, 1024, 512, 256, 128 and 64 views without any post-processing, respectively. While the image reconstructed from 2048 views (a) shows no artifacts, the image quality deteriorates as the number of views is further reduced. For the 64-view reconstruction only the shape of the skull is recognizable, but not the tissue inside the cranial vault. In images g) - m) the respective predictions by the U-Net of the sparse-view CTs in the upper row are shown. Consistent with the results shown in figure 3, the U-Net is able to reduce the artifacts considerably. We once again note that the resulting images are increasingly smoothed as the number of views decreases. In images n) - s) the results of TV post-processing are depicted. TV is able to reduce streak artifacts comparatively well down to 256-view sparse sampling. However, the results for 128- and 64-view sparse data are sub-par compared to the results from U-Net processing. While both post-processing methods are able to reduce the artifacts and improve the image quality compared to the raw sparse-view FBP reconstruction, the U-Net produces superior results both visually and in terms of the calculated SSIM and PSNR values.
\begin{table}
\begin{tabular}{c|c c c c c c} SSIM & 2048 views & 1024 views & 512 views & 256 views & 128 views & 64 views \\ \hline FBP & 1.000 \(\pm\) 0.000 & 0.994 \(\pm\) 0.006 & 0.974 \(\pm\) 0.025 & 0.906 \(\pm\) 0.078 & 0.747 \(\pm\) 0.149 & 0.574 \(\pm\) 0.190 \\ U-Net & 1.000 \(\pm\) 0.000 & **0.999 \(\pm\) 0.001** & **0.997 \(\pm\) 0.003** & **0.990 \(\pm\) 0.008** & **0.985 \(\pm\) 0.010** & **0.960 \(\pm\) 0.024** \\ TV & 1.000 \(\pm\) 0.000 & 0.997 \(\pm\) 0.003 & 0.991 \(\pm\) 0.007 & 0.981 \(\pm\) 0.012 & 0.946 \(\pm\) 0.034 & 0.873 \(\pm\) 0.073 \\ \hline \hline PSNR[dB] & 2048 views & 1024 views & 512 views & 256 views & 128 views & 64 views \\ \hline FBP & 68.102 \(\pm\) 10.194 & 54.797 \(\pm\) 12.674 & 45.268 \(\pm\) 12.334 & 35.860 \(\pm\) 10.751 & 27.631 \(\pm\) 9.687 & 20.766 \(\pm\) 9.372 \\ U-Net & **70.490 \(\pm\) 8.476** & **59.812 \(\pm\) 9.030** & **52.575 \(\pm\) 10.292** & **44.707 \(\pm\) 8.042** & **41.300 \(\pm\) 5.239** & **35.256 \(\pm\) 4.376** \\ TV & 61.991 \(\pm\) 6.661 & 53.350 \(\pm\) 10.061 & 44.323 \(\pm\) 10.155 & 36.293 \(\pm\) 9.849 & 29.464 \(\pm\) 9.798 & 23.590 \(\pm\) 9.780 \\ \end{tabular}
\end{table}
Table 1: SSIM and PSNR values of the sparse-view CT images reconstructed by FBP, the predictions of the U-Net, and the TV method. The values were calculated on the test set with respect to the ground truth reconstructions.
Figure 4: CT image from the test set labeled as "healthy". Images a) - f) show the FBP reconstruction from 2048, 1024, 512, 256, 128, and 64 views, respectively. Images g) - m) show the U-Net predictions of the respective images in the upper row, and images n) - s) show the results of the TV method. The presented SSIM and PSNR values were calculated over the entire CT image scaled to (0, 1) from the full range. All images are presented in the brain window ranging from 0 HU to 80 HU. The insert is 100x100 pixels.
Table 1 depicts the average SSIM and PSNR values of the reconstructed images calculated over the entire test set. The individual values were calculated with respect to the ground truth reconstructions. Comparison of the SSIM values of the unprocessed, U-Net-processed, and TV-processed sparse-view CTs confirms quantitatively that both methods increase image similarity to the fully sampled reconstructions by reducing streak artifacts. The quantitative assessment is in agreement with the visual result of U-Net post-processing depicted in figure 3.
The PSNR values are also improved by U-Net post-processing. Interestingly, no clear trend in the PSNR values can be identified for TV processing. This is most likely due to the fact that the weights for the TV method were set to optimize the SSIM, rather than PSNR, as described in section 2.4. The direct comparison of SSIM and PSNR values between U-Net and TV processing reveals a stronger performance of the U-Net in all investigated cases.
### Detection of Hemorrhage Subtypes
Finally, we evaluate the results of the artifact reduction via the EfficientNetB2 hemorrhage classification network, trained on fully sampled CT images.
In figure 5 the AUC-ROC values for the raw images (blue) and the images post-processed by either TV (green) or the U-Net (orange) are shown for varying levels of subsampling. In a) the AUC-ROC values of the "any" class are depicted, which indicates whether any kind of hemorrhage is present in the CT image or not. In subfigures b) - f) the AUC-ROC plots of the classified subtypes subdural, subarachnoid, intraparenchymal, intraventricular, and epidural are shown. In all cases the classification performance declines as the number of views used for the reconstruction decreases. The drop is most pronounced in the raw images, followed by the images post-processed by TV and least pronounced by the images post-processed by the U-Net. For the raw reconstructions, the 2048-view images yield almost identical AUC-ROC values as the full-view images. When the number of views is reduced to 1024, the classification performance decreases slightly. By further decreasing the number of views, the AUC-ROC values drop substantially. For the images post-processed by the TV algorithm, the AUC-ROC values decrease slightly down to 512-views and decline substantially for fewer views in subfigures a) - e). For the epidural subtype in f) the hemorrhage detection performance of the raw and TV post-processed images are nearly identical down to 256 views. By further decreasing the number of views, the classification network performs substantially worse on the TV images than on the raw sparse-view ones. A possible explanation for this is that the artifacts smoothed by the TV algorithm get classified as epidural hemorrhages, leading to a high false positive rate. With the U-Net, the number of views can be reduced from 4096 views down to 512 views with almost no decrease in classification performance and to 256 views with a slight performance decrease. Below 256 views, a distinct decline in classification performance is also perceptible. In all cases, the AUC-ROC values obtained from the images post-processed by the U-Net are superior compared to the TV processed and raw sparse-view
Figure 5: Results of the EfficientNetB2 classification network. Images a) - f) depict the AUC values of the ROC curves associated with the any, subdural, subarachnoid, intraparenchymal, intraventricular, and epidural class, respectively.
Looking at subfigure f), it is also noticeable that the detection of epidural hemorrhages is significantly worse compared to the other subtypes. This is most likely due to the fact that this type of hemorrhage was much less represented in the dataset than the other subtypes (cf. figure 2).
The individual ROC curves of the EfficientNetB2 classification results used for the calculation of the AUC-ROC values are plotted in figure 6.
## 4 Conclusion
In this work it was shown that a deep, U-Net-type neural network is able to substantially improve the quality of cranial sparse-view CTs visually, as well as in terms of calculated PSNR and SSIM values. A hemorrhage detection and classification network was trained on fully sampled data and applied to the sparse-view data with and without post-processing by the U-Net. With the U-Net, the number of views can be reduced from 4096 views to 512 views with almost no decrease in classification performance and to 256 views with a slight performance decrease. The results of the U-Net were compared with an analytical approach based on TV. The U-Net was found to be superior to TV post-processing with respect to image quality parameters and automated hemorrhage diagnosis. Our results suggest that the classification accuracy of hemorrhages in sparse-view CCTs can be improved substantially by deep learning methods. This demonstrates the feasibility of rapid automated hemorrhage classification on sparse-view CCT data to assist radiologists in routine clinical practice.
Figure 6: Results of the EfficientNetB2 hemorrhage classification network. Depicted are the ROC curves of the classes any, subdural, subarachnoid, intraparenchymal, intraventricular, and epidural, for the different levels of subsampling. In addition to the results of the raw images (blue), the classification results on the images post-processed by either U-Net (orange) or TV (green) are shown.
## 5 Acknowledgements
This work was funded by the Federal Ministry of Education and Research (BMBF) and the Free State of Bavaria under the Excellence Strategy of the Federal Government and the Länder, the German Research Foundation, as well as by the Technical University of Munich-Institute for Advanced Study.
|
2306.04904 | An adaptive augmented Lagrangian method for training physics and
equality constrained artificial neural networks | Physics and equality constrained artificial neural networks (PECANN) are
grounded in methods of constrained optimization to properly constrain the
solution of partial differential equations (PDEs) with their boundary and
initial conditions and any high-fidelity data that may be available. To this
end, adoption of the augmented Lagrangian method within the PECANN framework is
paramount for learning the solution of PDEs without manually balancing the
individual loss terms in the objective function used for determining the
parameters of the neural network. Generally speaking, ALM combines the merits
of the penalty and Lagrange multiplier methods while avoiding the ill
conditioning and convergence issues associated singly with these methods. In
the present work, we apply our PECANN framework to solve forward and inverse
problems that have an expanded and diverse set of constraints. We show that ALM
with its conventional formulation to update its penalty parameter and Lagrange
multipliers stalls for such challenging problems. To address this issue, we
propose an adaptive ALM in which each constraint is assigned a unique penalty
parameter that evolves adaptively according to a rule inspired by the adaptive
subgradient method. Additionally, we revise our PECANN formulation for improved
computational efficiency and savings which allows for mini-batch training. We
demonstrate the efficacy of our proposed approach by solving several forward
and PDE-constrained inverse problems with noisy data, including simulation of
incompressible fluid flows with a primitive-variables formulation of the
Navier-Stokes equations up to a Reynolds number of 1000. | Shamsulhaq Basir, Inanc Senocak | 2023-06-08T03:16:21Z | http://arxiv.org/abs/2306.04904v2 | An adaptive augmented Lagrangian method for training physics and equality constrained artificial neural networks
###### Abstract
Physics and equality constrained artificial neural networks (PECANN) are grounded in methods of constrained optimization to properly constrain the solution of partial differential equations (PDEs) with their boundary and initial conditions and any high-fidelity data that may be available. To this end, adoption of the augmented Lagrangian method within the PECANN framework is paramount for learning the solution of PDEs without manually balancing the individual loss terms in the objective function used for determining the parameters of the neural network. Generally speaking, ALM combines the merits of the penalty and Lagrange multiplier methods while avoiding the ill conditioning and convergence issues associated singly with these methods. In the present work, we apply our PECANN framework to solve forward and inverse problems that have an expanded and diverse set of constraints. We show that ALM with its conventional formulation to update its penalty parameter and Lagrange multipliers stalls for such challenging problems. To address this issue, we propose an adaptive ALM in which each constraint is assigned a unique penalty parameter that evolves adaptively according to a rule inspired by the adaptive subgradient method. Additionally, we revise our PECANN formulation for improved computational efficiency and savings, which allows for mini-batch training. We demonstrate the efficacy of our proposed approach by solving several forward and PDE-constrained inverse problems with noisy data, including simulation of incompressible fluid flows with a primitive-variables formulation of the Navier-Stokes equations up to a Reynolds number of 1000.
Augmented Lagrangian method · constrained optimization · incompressible flows · inverse problems · physics-informed neural networks
## 1 Introduction
Partial differential equations (PDEs) are used to describe a wide range of physical phenomena, including sound propagation, heat and mass transfer, fluid flow, and elasticity. The most common numerical methods for solving PDE problems involve domain discretization, such as finite difference, finite volume, finite element, and spectral element methods. However, the accuracy of the solution heavily depends on the quality of the mesh used for domain discretization. Additionally, mesh generation can be tedious and time-consuming, particularly for complex geometries or problems with moving boundaries. While numerical methods are efficient for solving forward problems, they are not well-suited for solving inverse problems, particularly data-driven modeling. In this regard, neural networks can be viewed as a promising alternative meshless approach to solving PDEs.
The use of neural networks for solving PDEs can be traced back to the early 1990s [6, 32, 22, 17]. Recently, there has been a resurgence of interest in the use of neural networks to solve PDEs [7, 11, 29, 37], particularly after the introduction of
the term physics-informed neural networks (PINNs) [27]. In PINNs, the governing equations of a physical phenomenon of interest are embedded in an objective function along with any data to learn the solution to those governing equations. Although the performance of PINNs can be influenced by several factors such as the choice of activation function [14], sampling strategy [36], and architecture [33; 2], the formulation of the objective function that is used to determine the neural network parameters is paramount for satisfactory predictions.
In the baseline PINN formulation [27], several loss terms with different physical scales are aggregated into a single composite objective function with tunable weights, which is a rudimentary approach to a multi-objective optimization problem. Since neural networks are trained using gradient-descent-type optimizers, the model parameters can be dominated by the loss term with the largest gradient, irrespective of its physical scale. This can lead to unstable training, as the optimizer may prioritize one objective over another, sacrificing either the PDE loss or the boundary loss. Therefore, adjusting the interplay between the objective terms requires manual hyperparameter tuning, which can be a time-consuming and challenging task. Additionally, the absence of validation data or prior knowledge of the solution to the PDE for the purpose of tuning can render the baseline PINN approach impractical for the solution of PDEs [3].
Dynamic determination of the weights in the composite objective function of the baseline PINN approach has attracted the attention of several researchers. Wang et al. [33] proposed an empirical method that has several limitations that we discussed in a prior work [2]. van der Meer et al. [31] proposed an empirical method by considering a bi-objective loss function for the solution of linear PDEs. Liu and Wang [18] proposed a dual dimer method to adjust the interplay between the loss terms by searching for a saddle point. Wang et al. [34] studied PINNs using the Neural Tangent Kernel (NTK). NTK provides insights into the dynamics of fully-connected neural networks with infinite width during training via gradient descent. The authors proposed using the eigenvalues of the NTK to adapt the interplay between the objective terms. McClenny and Braga-Neto [21] proposed self-adaptive PINNs (SA-PINN) with a minimax formulation to adjust the interplay between the loss terms. Lagrange multipliers are sent through an empirical mask function such as a sigmoid, which makes the dual unconstrained optimization formulation not equivalent to the constrained optimization problem. Because of that, equality constraints are not strictly enforced. The SA-PINN method produced results that are comparable to or better than those of the NTK method [34]. In the present work, we compare our proposed method with both of these approaches for the solution of the wave equation.
By and large, the aforementioned works adopt an unconstrained optimization approach in the first place to formulate the objective function used to determine neural network parameters. In physics and equality constrained artificial neural networks (PECANNs) [2], we pursued a constrained optimization method to formulate the objective function. Specifically, we used the augmented Lagrangian method (ALM) [13; 26] to formally cast the constrained optimization problem into an unconstrained one in which the PDE domain loss is constrained by boundary and initial conditions and by any high-fidelity data that may be available. It is worth noting that ALM combines the merits of the penalty method and the Lagrange multiplier method. It balances feasibility and optimality by updating a penalty parameter to control the influence of constraint violations [24].
In what follows, we show that the conventional ALM with a single penalty parameter used in the PECANN model [2] struggles when applied to problems with multiple constraints of varying characteristics. To overcome this limitation, we propose an adaptive augmented Lagrangian method, which introduces multiple penalty parameters and independently updates them based on the characteristics of each constraint. Additionally, we propose a computationally efficient formulation of the objective function to handle a large number of constraints to enable mini-batch training while maintaining a small memory footprint during training. We solve several forward and inverse PDE problems, including the solution of incompressible fluid flow equations with a primitive-variables formulation up to a Reynolds number of 1000, to demonstrate the efficacy of our PECANN model with an adaptive augmented Lagrangian method. The codes used to produce the results in this paper are publicly available at [https://github.com/HiPerSimLab/PECANN](https://github.com/HiPerSimLab/PECANN)
## 2 Technical formulation
Let us consider a general constrained optimization problem with equality constraints
\[\min_{\theta}\mathcal{J}(\theta),\quad\text{ such that }\quad\mathcal{C}_{i}( \theta)=0,\quad\forall i\in\mathcal{E}, \tag{1}\]
where the objective function \(\mathcal{J}\) and the constraint functions \(\mathcal{C}_{i}\) are all smooth, real-valued functions on a subset of \(\mathbb{R}^{n}\), and \(\mathcal{E}\) is a finite set of equality constraints. We can write an equivalent _minimax_ unconstrained dual problem using the augmented Lagrangian method as follows
\[\min_{\theta}\max_{\lambda}\mathcal{L}(\theta;\lambda,\mu)=\mathcal{J}(\theta)+\sum_{i\in\mathcal{E}}\lambda_{i}\mathcal{C}_{i}(\theta)+\frac{\mu}{2}\sum_{i\in\mathcal{E}}\mathcal{C}_{i}^{2}(\theta). \tag{2}\]
We can swap the order of the minimum and the maximum by using the following _minimax_ inequality concept or weak duality
\[\max_{\lambda}\min_{\theta}\mathcal{L}(\theta;\lambda,\mu)\leq\min_{\theta}\max_{\lambda}\mathcal{L}(\theta;\lambda,\mu). \tag{3}\]
The minimization can be performed for a sequence of multipliers generated by
\[\lambda_{i}\leftarrow\lambda_{i}+\mu\:\mathcal{C}_{i}(\theta), \forall i\in\mathcal{E}. \tag{4a}\]
We should note that \(\mu\) can be viewed as a global learning rate for the Lagrange multipliers. ALM combines the merits of the penalty method and the Lagrange multiplier method by updating the penalty parameter in such a way that it balances the trade-off between feasibility and optimality, and ensures convergence. Traditionally, in ALM, textbook descriptions typically rely on a single penalty parameter to address all constraints as in (4a), and updating the penalty parameter is often done through empirical strategies. However, as we show in a later section, these update strategies become ineffective when there are multiple constraints with different characteristics. Next, we consider two existing strategies to update the penalty parameter in ALM.
First, in Algorithm 1, we monotonically increase the penalty parameter \(\mu\) at each training iteration until a maximum safeguarding value (i.e., \(\mu_{\max}\)) is reached. Safeguarding the penalty parameter is a common strategy and prevents it from reaching excessively large values that may lead to numerical instability or overflow. To establish clear distinction from other strategies and facilitate comparison, we will refer to Algorithm 1 as ALM with monotonically increasing penalty update (MPU).
```
Input: \(\theta^{0},\ \mu_{\max},\ \beta\)
\(\lambda_{i}^{0}=1\ \forall i\in\mathcal{E}\)    /* initialize Lagrange multipliers */
\(\mu^{0}=1\)    /* initialize the penalty parameter */
for \(t=1\) to ... do
    \(\theta^{t}\leftarrow\operatorname{argmin}_{\theta}\ \mathcal{L}(\theta^{t-1};\lambda^{t-1},\mu^{t-1})\)    /* primal update */
    \(\lambda_{i}^{t}\leftarrow\lambda_{i}^{t-1}+\mu^{t-1}\mathcal{C}_{i}(\theta^{t}),\ \forall i\in\mathcal{E}\)    /* dual update */
    \(\mu^{t}=\min(\beta\,\mu^{t-1},\ \mu_{\max})\)    /* penalty parameter update */
end for
Output: \(\theta^{t}\)
Defaults: \(\beta=2,\ \mu_{\max}=1\times 10^{4}\)
```
**Algorithm 1** ALM with monotonically increasing penalty update (MPU) [20]
In the context of training neural networks, the inputs to Algorithm 1 are the parameters of the neural network model \(\theta^{0}\), a maximum safeguarding penalty parameter \(\mu_{\max}\), and a multiplicative factor \(\beta\) for increasing the penalty parameter \(\mu\) at each iteration. We should note that in Algorithm 1, both the Lagrange multipliers and the penalty parameter are updated at each iteration [20]. A similar approach without the maximum safeguarding penalty parameter has been used in [19]. To prevent divergence, the maximum penalty parameter was set to \(10^{4}\). However, updating the penalty parameter at each epoch may increase it too aggressively, which could lead to divergence as we will demonstrate in a later section. Additionally, finding a suitable maximum penalty parameter can be challenging.
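For concreteness, a minimal Python sketch of the MPU loop could read as follows; `solve_primal` and `constraint_residuals` are hypothetical placeholders for the inner minimization of the augmented Lagrangian and the evaluation of the constraint values \(\mathcal{C}_{i}(\theta)\), and are not taken from any released PECANN code.

```python
import numpy as np

def alm_mpu(theta0, solve_primal, constraint_residuals,
            epochs=100, beta=2.0, mu_max=1e4):
    """Sketch of ALM with monotonically increasing penalty update (MPU)."""
    theta = theta0
    c = np.asarray(constraint_residuals(theta))
    lam = np.ones_like(c)                        # initialize multipliers to one
    mu = 1.0                                     # single global penalty parameter
    for _ in range(epochs):
        theta = solve_primal(theta, lam, mu)     # primal update
        c = np.asarray(constraint_residuals(theta))
        lam = lam + mu * c                       # dual update, Eq. (4a)
        mu = min(beta * mu, mu_max)              # monotonic, safeguarded growth
    return theta, lam, mu
```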
Another strategy to update the penalty parameter \(\mu\) is to update it only when the constraints have not decreased sufficiently at the current iteration [4, 35]. In Algorithm 2, we present an augmented Lagrangian method in which the penalty parameter is updated conditionally [35]. A similar strategy has been adopted in the work of Dener et al. [5] to train an encoder-decoder neural network for approximating the Fokker-Planck-Landau collision operator. We will refer to Algorithm 2 as ALM with conditional penalty update (CPU). Our inputs to Algorithm 2 are \(\eta\), which is a placeholder for the previous value of the constraints; \(\mu_{\max}\), a safeguarding penalty parameter; and \(\beta\), a multiplicative weight for increasing the penalty parameter. Similar to the previous algorithm, the maximum penalty parameter is set to \(10^{4}\) to prevent numerical overflow and divergence. It should be noted that finding a suitable maximum penalty parameter can be challenging.
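The sketch below gives one plausible rendering of such a conditional rule: the penalty parameter grows only when the constraint norm has not decreased sufficiently relative to the best value seen so far. The decrease factor `eta_factor` and the exact bookkeeping are illustrative assumptions and may differ from Algorithm 2.

```python
import numpy as np

def alm_cpu(theta0, solve_primal, constraint_residuals,
            epochs=100, beta=2.0, mu_max=1e4, eta_factor=0.25):
    """Sketch of ALM with conditional penalty update (CPU)."""
    theta = theta0
    c = np.asarray(constraint_residuals(theta))
    lam = np.ones_like(c)
    mu = 1.0
    eta = np.inf                          # best constraint norm seen so far
    for _ in range(epochs):
        theta = solve_primal(theta, lam, mu)
        c = np.asarray(constraint_residuals(theta))
        lam = lam + mu * c                # dual update
        c_norm = np.linalg.norm(c)
        if c_norm > eta_factor * eta:     # insufficient decrease:
            mu = min(beta * mu, mu_max)   # increase the penalty parameter
        eta = min(eta, c_norm)
    return theta, lam, mu
```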
In our recent investigations, we have discovered that the rate of update for Lagrange multipliers can be a critical factor when dealing with problems that have different types of constraints (such as flux, data, or PDE constraints). Because the penalty parameter behaves like a learning rate for the Lagrange multiplier, employing the same update rate or penalty parameter for all constraints can lead to issues of instability during training. In some cases, the parameter may be too large or too small for certain constraints, which can adversely affect the optimization process. As such, we propose a
new approach to address this issue. Our method involves assigning a unique penalty parameter for each constraint type with its own update rate. By tailoring the penalty parameter to each specific constraint type, we can effectively manage the optimization process and ensure greater stability during training. We will show in the results section that we are able to learn the solution of challenging PDEs when other strategies fail or perform poorly.
### Adaptive augmented Lagrangian method
In this section, we propose an augmented Lagrangian method with a novel adaptive update strategy for multiple penalty parameters to handle diverse constraints in the optimization problem. We note that it is common to use a single penalty parameter that is monotonically increased in most implementations of the ALM. However, that strategy becomes insufficient to tackle challenging problems. As we discuss above, we need a unique, adaptive penalty parameter or learning rate for each Lagrange multiplier associated with a constraint. This tailored approach ensures that the penalty parameter conforms to the characteristics of individual Lagrange multipliers, enabling effective handling of diverse constraints. We formulate our unconstrained optimization problem for problem (1) as follows:
\[\max_{\lambda}\min_{\theta}\mathcal{L}(\theta,\lambda;\mu)=\mathcal{J}(\theta )+\sum_{i\in\mathcal{E}}\lambda_{i}\mathcal{C}_{i}(\theta)+\frac{1}{2}\sum_{i \in\mathcal{E}}\mu_{i}\mathcal{C}_{i}^{2}(\theta), \tag{5}\]
where \(\lambda_{i}\) are the Lagrange multipliers and \(\mu_{i}\) are the penalty parameters, with one pair assigned to each constraint. The minimization can be performed using a gradient-descent-type optimizer for a sequence of Lagrange multipliers generated by
\[\lambda_{i}\leftarrow\lambda_{i}+\mu_{i}\mathcal{C}_{i}(\theta), \forall i\in\mathcal{E}, \tag{6}\]
during training. Upon examining the dual update of the augmented Lagrangian method as shown in Eq. (6), we observe that it involves a gradient ascent step with learning rates denoted by \(\mu_{i}\) for each Lagrange multiplier \(\lambda_{i}\). Hence, a suitable approach is to adopt the strategy of the RMSprop algorithm by G. Hinton [9] to find an independent, effective learning rate or penalty parameter for each Lagrange multiplier. The main reason behind our choice is that the update strategy in RMSprop is consistent with the dual update of ALM in Eq. (6). Hence, we divide our global learning rates by the weighted moving average of the squared gradients of our Lagrange multipliers as follows
\[\bar{v}_{i}\leftarrow\alpha\bar{v}_{i}+(1-\alpha)\mathcal{C}_{i}(\theta)^{2},\quad\forall i\in\mathcal{E}, \tag{7}\]
\[\mu_{i}\leftarrow\frac{\gamma}{\sqrt{\bar{v}_{i}}+\epsilon},\quad\forall i\in\mathcal{E}, \tag{8}\]
\[\lambda_{i}\leftarrow\lambda_{i}+\mu_{i}\mathcal{C}_{i}(\theta),\quad\forall i\in\mathcal{E}, \tag{9}\]
where \(\bar{v}_{i}\) is the weighted moving average of the squared gradient of our Lagrange multipliers, \(\gamma\) is a scheduled global learning rate, \(\epsilon\) is a term added to the denominator to avoid division by zero for numerical stability, and \(\alpha\) is a smoothing constant. Algorithm 3 presents our training procedure. The inputs to the algorithm are an initialized set of parameters
for the network \(\theta^{0}\), a global learning rate \(\gamma\), a smoothing constant \(\alpha\), collocation points, noisy measurement data and high-fidelity data to calculate our augmented Lagrangian. In Algorithm 3, the Lagrange multipliers are initialized to \(1.0\) with their respective averaged square-gradients initialized to zero. In summary, our main approach involves reducing the penalty parameters for Lagrange multiplier updates with large and rapidly changing gradients, while increasing the penalty parameters for Lagrange multiplier updates with small and slowly changing gradients. Next, we delve into the subtle differences in learning the solution of forward and inverse problems and illustrate how they can be formulated as constrained optimization problems, which can then be cast as unconstrained optimization problems using the augmented Lagrangian method.
```
Defaults: \(\gamma=1\times 10^{-2},\ \alpha=0.99,\ \epsilon=1\times 10^{-8}\)
Input: \(\theta^{0}\)
\(\lambda_{i}^{0}=1\quad\forall i\in\mathcal{E}\)    /* initialize Lagrange multipliers */
\(\mu_{i}^{0}=1\quad\forall i\in\mathcal{E}\)    /* initialize penalty parameters */
\(\bar{v}_{i}^{0}=0\quad\forall i\in\mathcal{E}\)    /* initialize averaged square-gradients */
for \(t=1\) to ... do
    \(\theta^{t}\leftarrow\operatorname{argmin}_{\theta}\ \mathcal{L}(\theta^{t-1};\lambda^{t-1},\mu^{t-1})\)    /* primal update */
    \(\bar{v}_{i}^{t}\leftarrow\alpha\,\bar{v}_{i}^{t-1}+(1-\alpha)\,\mathcal{C}_{i}(\theta^{t})^{2},\quad\forall i\in\mathcal{E}\)    /* square-gradient update */
    \(\mu_{i}^{t}\leftarrow\frac{\gamma}{\sqrt{\bar{v}_{i}^{t}}+\epsilon},\quad\forall i\in\mathcal{E}\)    /* penalty update */
    \(\lambda_{i}^{t}\leftarrow\lambda_{i}^{t-1}+\mu_{i}^{t}\,\mathcal{C}_{i}(\theta^{t}),\quad\forall i\in\mathcal{E}\)    /* dual update */
end for
Output: \(\theta^{t}\)
```
**Algorithm 3** Augmented Lagrangian method with adaptive penalty updates (APU)
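A compact PyTorch rendering of Algorithm 3 could look as follows. Here `pde_loss` and `constraint_losses` are hypothetical closures supplied by the user: the former returns the objective \(\mathcal{J}(\theta)\) and the latter a one-dimensional tensor with one expected constraint value per constraint type; both are assumed to close over `model`.

```python
import torch

def train_pecann_apu(model, pde_loss, constraint_losses, optimizer,
                     epochs=2000, gamma=1e-2, alpha=0.99, eps=1e-8):
    """Sketch of PECANN training with adaptive penalty updates (Algorithm 3)."""
    n = constraint_losses().numel()    # number of constraint types
    lam = torch.ones(n)                # Lagrange multipliers
    mu = torch.ones(n)                 # per-constraint penalty parameters
    v_bar = torch.zeros(n)             # running squared constraint values
    for _ in range(epochs):
        def closure():
            optimizer.zero_grad()
            c = constraint_losses()
            # augmented Lagrangian of Eq. (5); lam and mu are held fixed here
            loss = pde_loss() + (lam * c).sum() + 0.5 * (mu * c ** 2).sum()
            loss.backward()
            return loss
        optimizer.step(closure)                            # primal update
        with torch.no_grad():
            c = constraint_losses()
            v_bar = alpha * v_bar + (1 - alpha) * c ** 2   # Eq. (7)
            mu = gamma / (v_bar.sqrt() + eps)              # Eq. (8)
            lam = lam + mu * c                             # Eq. (9)
    return model
```

Because the multipliers and penalty parameters are plain tensors updated outside the optimizer, this sketch works with any PyTorch optimizer that accepts a closure, such as L-BFGS.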
### Constrained optimization formulation for solving forward and inverse problems
In this section, we explain our proposed formulation for solving a generic PDE problem with boundary and initial conditions for demonstration purposes. We will also compare our new formulation to our previous approach and highlight the advantages and improvements of our new approach. We first formulate a constrained optimization problem by minimizing the loss on the governing PDE and constraining the individual points on the boundary and initial conditions, assuming these conditions are noise-free and can be defined as equality constraints. Consider a scalar function \(u(\mathbf{x},t):\mathbb{R}^{d+1}\rightarrow\mathbb{R}\) on the domain \(\Omega\subset\mathbb{R}^{d}\) with its boundary \(\partial\Omega\) satisfying the following partial differential equation
\[\mathcal{F}(\mathbf{x},t;\frac{\partial u}{\partial t},\frac{\partial^{2}u}{\partial t^{2}},\cdots,\frac{\partial u}{\partial\mathbf{x}},\frac{\partial^{2}u}{\partial\mathbf{x}^{2}},\cdots,\mathbf{\nu})=0,\quad\forall(\mathbf{x},t)\in\mathcal{U}, \tag{10}\]
\[\mathcal{B}(\mathbf{x},t,g;u,\frac{\partial u}{\partial\mathbf{x}},\cdots)=0,\quad\forall(\mathbf{x},t)\in\partial\mathcal{U}, \tag{11}\]
\[\mathcal{I}(\mathbf{x},t,h;u,\frac{\partial u}{\partial t},\cdots)=0,\quad\forall(\mathbf{x},t)\in\Gamma, \tag{12}\]
where \(\mathcal{F}\) is the residual form of the PDE containing differential operators, \(\mathbf{\nu}\) is a vector of PDE parameters, \(\mathcal{B}\) is the residual form of the boundary condition containing a source function \(g(\mathbf{x},t)\) and \(\mathcal{I}\) is the residual form of the initial condition containing a source function \(h(\mathbf{x},t)\). \(\mathcal{U}=\{(\mathbf{x},t)\;|\;\mathbf{x}\in\Omega,t=[0,T]\}\), \(\partial\mathcal{U}=\{(\mathbf{x},t)\;|\;\mathbf{x}\in\partial\Omega,t=[0,T]\}\) and \(\Gamma=\{(\mathbf{x},t)\;|\;\mathbf{x}\in\Omega,t=0\}\).
#### 2.2.1 Formulation for problems with point-wise constraints
In this section, we present a constrained optimization formulation, as represented in equation (1), within the context of solving partial differential equations (PDEs). Considering Eq.(10) with its boundary condition (11) and initial condition (12), we previously formulated the following constrained optimization problem [2]:
\[\min_{\theta}\sum_{i=1}^{N_{\mathcal{F}}}\|\mathcal{F}(\mathbf{x}^{(i)},t^{(i)}; \mathbf{\nu},\theta)\|_{2}^{2}, \tag{13}\]
subject to
\[\phi(\mathcal{B}(\mathbf{x}^{(i)},t^{(i)},g^{(i)};\theta)) =0,\;\forall(\mathbf{x}^{(i)},t^{(i)},g^{(i)})\in\partial\mathcal{U},\;i =1,\cdots,N_{\mathcal{B}} \tag{14}\] \[\phi(\mathcal{I}(\mathbf{x}^{(i)},t^{(i)},h^{(i)};\theta)) =0,\;\forall(\mathbf{x}^{(i)},t^{(i)},h^{(i)})\in\Gamma,\;i=1,\cdots,N_ {\mathcal{I}}, \tag{15}\]
where \(N_{\mathcal{F}}\), \(N_{\mathcal{B}}\), \(N_{\mathcal{I}}\) are the number of data points in \(\mathcal{U}\), \(\partial\mathcal{U}\) and \(\Gamma\), respectively, and \(\phi\) is a distance function. We should note that the number of Lagrange multipliers scales with the numbers of constraints \(N_{\mathcal{B}}\) and \(N_{\mathcal{I}}\). When dealing with a substantial number of constraints, the efficiency of the optimization can be compromised due to the size of the computational graph and the memory requirements. To mitigate that, we propose to minimize the expected loss on the PDE while incorporating an expected equality constraint on the boundary and initial conditions. Furthermore, we justify our proposed approach by examining the distribution of the Lagrange multipliers for a large number of constraints.
#### 2.2.2 Computationally efficient formulation based on expectation of constraints
In this section, we introduce an efficient constrained optimization formulation, represented in the form of equation (1), for solving partial differential equations (PDEs). In our original PECANN approach [2], we learned the solution of PDEs by constraining the optimization problem point-wise using the formulation described in the previous section. However, this strategy becomes computationally expensive for challenging problems, which benefit from an increased number of collocation points. Constraining the learning problem point-wise is also not suitable for taking advantage of deep learning techniques designed to accelerate the training process. As we will demonstrate later in the present work, for a large number of point-wise constraints, the Lagrange multipliers assume a probabilistic distribution with a clear expected value. Based on this observation, we revise our formulation for point-wise constraints such that instead of constraining the loss on individual high-fidelity data, we constrain the expected loss on a batch of our high-fidelity data. Consequently, the Lagrange multipliers in this approach represent expected values, which has a direct impact on computational efficiency. Because the Lagrange multipliers are not attached to any individual data point, we can perform mini-batch training if our data does not fit in memory for full-batch training. Considering Eq. (10) with its boundary condition (11) and initial condition (12), we write the following constrained optimization problem:
\[\min_{\theta}\;\mathcal{J}(\theta)=\frac{1}{N_{\mathcal{F}}}\sum_{i=1}^{N_{ \mathcal{F}}}\|\mathcal{F}(\mathbf{x}^{(i)},t^{(i)};\mathbf{\nu},\theta)\|_{2}^{2}, \tag{16}\]
subject to
\[\frac{1}{N_{\mathcal{B}}}\sum_{i=1}^{N_{\mathcal{B}}}\phi(\mathcal{ B}(\mathbf{x}^{(i)},t^{(i)},g^{(i)};\theta)) :=0,\;\forall(\mathbf{x}^{(i)},t^{(i)},g^{(i)})\in\partial\mathcal{U}, \;i=1,\cdots,N_{\mathcal{B}}, \tag{17}\] \[\frac{1}{N_{\mathcal{I}}}\sum_{i=1}^{N_{\mathcal{I}}}\phi( \mathcal{I}(\mathbf{x}^{(i)},t^{(i)},h^{(i)};\theta)) :=0,\;\forall(\mathbf{x}^{(i)},t^{(i)},h^{(i)})\in\Gamma,\;i=1, \cdots,N_{\mathcal{I}}, \tag{18}\]
where \(N_{\mathcal{F}}\), \(N_{\mathcal{B}}\), \(N_{\mathcal{I}}\) are the number of data points in \(\mathcal{U}\), \(\partial\mathcal{U}\) and \(\Gamma\), respectively, and \(\phi\) is a convex distance function. In all of the experiments \(\phi\) is a quadratic function unless specified otherwise. A minimal code sketch of this expectation-based formulation is given below, after which we present a typical PDE-constrained inverse problem.
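As a sketch of Eqs. (16)-(18), the following hypothetical helper assembles the objective and the two expected constraints from (mini-)batches; the residual functions stand in for the PDE, boundary, and initial operators and are assumptions for illustration.

```python
import torch

def expected_constraints(model, pde_batch, bc_batch, ic_batch,
                         pde_residual, bc_residual, ic_residual):
    """Objective J and expected constraints of Eqs. (16)-(18).

    Each *_residual(model, batch) is assumed to return a tensor of
    point-wise residuals; phi is the quadratic distance used in the text.
    """
    J = pde_residual(model, pde_batch).pow(2).mean()   # Eq. (16)
    c_bc = bc_residual(model, bc_batch).pow(2).mean()  # Eq. (17)
    c_ic = ic_residual(model, ic_batch).pow(2).mean()  # Eq. (18)
    return J, torch.stack([c_bc, c_ic])
```

Since the multipliers attach to expectations rather than to individual points, the batches can be resampled or split without resizing the multiplier vector.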
#### 2.2.3 Formulation for PDE-constrained inverse problems
In our original formulation of the PECANN framework [2], we minimized the loss on the governing PDE and noisy data while constraining the loss with high-fidelity data. However, a disparity in physical scales between the partial differential equations (PDEs) and the noisy measurement data can exist, which could complicate the inference problem. To address this challenge of solving inverse PDE problems using noisy and high-fidelity data, we propose a PDE-constrained formulation for inverse problems by minimizing the loss on noisy data while treating the governing PDE and any available high-fidelity data as constraints.
Let us consider a generic inverse PDE problem to demonstrate the new formulation. We assume that the initial condition is known exactly and can be treated as high-fidelity data. Additionally, we assume that the underlying physical problem is governed by a typical PDE operator as shown in Eq. (10). Given a set of noisy measurement data \(\{(\mathbf{x}^{(i)},t^{(i)}),\hat{u}^{(i)}\}_{i=1}^{N_{\mathcal{M}}}\), we can minimize the following objective
\[\min_{\theta,\mathbf{\nu}}\;\frac{1}{N_{\mathcal{M}}}\sum_{i=1}^{N_{\mathcal{M}}} \|u(\mathbf{x}^{(i)},t^{(i)};\theta)-\hat{u}(\mathbf{x}^{(i)},t^{(i)})\|_{2}^{2}, \tag{19}\]
subject to
\[\frac{1}{N_{\mathcal{F}}}\sum_{i=1}^{N_{\mathcal{F}}}\phi(\mathcal{F}(\mathbf{x}^{(i)},t^{(i)};\mathbf{\nu},\theta)):=0,\;\forall(\mathbf{x}^{(i)},t^{(i)})\in\mathcal{U},\;i=1,\cdots,N_{\mathcal{F}}, \tag{20}\]
\[\frac{1}{N_{\mathcal{I}}}\sum_{i=1}^{N_{\mathcal{I}}}\phi(\mathcal{I}(\mathbf{x}^{(i)},t^{(i)},h^{(i)};\theta)):=0,\;\forall(\mathbf{x}^{(i)},t^{(i)},h^{(i)})\in\Gamma,\;i=1,\cdots,N_{\mathcal{I}}, \tag{21}\]
where \(N_{\mathcal{F}}\), \(N_{\mathcal{I}}\) are the number of data points in \(\mathcal{U}\) and \(\Gamma\) respectively. \(\phi\) is a convex distance function. In all of the experiments \(\phi\) is a quadratic function unless specified otherwise. It is worth noting that if additional high-fidelity data is available, we can incorporate it as an additional constraint into the above formulation along with other constraints.
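In code, the inverse objective of Eqs. (19)-(21) could be assembled analogously; `nu` is a trainable tensor of PDE parameters optimized jointly with the network weights, and the residual functions are hypothetical placeholders.

```python
import torch

def inverse_objective(model, nu, data_batch, pde_batch, ic_batch,
                      pde_residual, ic_residual):
    """Data-misfit objective and expected constraints of Eqs. (19)-(21)."""
    xt, u_hat = data_batch                                    # noisy measurements
    J = (model(xt) - u_hat).pow(2).mean()                     # Eq. (19)
    c_pde = pde_residual(model, nu, pde_batch).pow(2).mean()  # Eq. (20)
    c_ic = ic_residual(model, ic_batch).pow(2).mean()         # Eq. (21)
    return J, torch.stack([c_pde, c_ic])
```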
## 3 Applications
In this section, we apply our proposed augmented Lagrangian method with an adaptive update strategy to learn the solution of forward and inverse problems. We provide additional examples in the Appendix as well. Given an \(n\)-dimensional vector of predictions \(\hat{\mathbf{u}}\in\mathbb{R}^{n}\) and an \(n\)-dimensional vector of exact values \(\mathbf{u}\in\mathbb{R}^{n}\), we define the relative Euclidean or \(l^{2}\)-norm, the infinity or \(l^{\infty}\)-norm, and the root-mean-square (RMS) error, respectively, to assess the accuracy of our predictions
\[\mathcal{E}_{r}(\hat{u},u)=\frac{\|\hat{\mathbf{u}}-\mathbf{u}\|_{2}}{\|\mathbf{u}\|_{2}},\quad\mathcal{E}_{\infty}(\hat{u},u)=\|\hat{\mathbf{u}}-\mathbf{u}\|_{\infty},\quad\text{RMS}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{\mathbf{u}}^{(i)}-\mathbf{u}^{(i)})^{2}} \tag{22}\]
where \(\|\cdot\|_{2}\) denotes the Euclidean norm, and \(\|\cdot\|_{\infty}\) denotes the maximum norm. All the codes used in the following examples are available as open source at [https://github.com/HiPerSimLab/PECANN](https://github.com/HiPerSimLab/PECANN).
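These metrics translate directly into a few lines of PyTorch; the function names below are illustrative, not taken from the released code.

```python
import torch

def rel_l2(u_pred, u):   # relative Euclidean error, E_r in Eq. (22)
    return torch.linalg.norm(u_pred - u) / torch.linalg.norm(u)

def linf(u_pred, u):     # maximum-norm error, E_inf in Eq. (22)
    return (u_pred - u).abs().max()

def rms(u_pred, u):      # root-mean-square error in Eq. (22)
    return torch.sqrt(torch.mean((u_pred - u) ** 2))
```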
### Forward problem: unsteady heat transfer in a composite medium
In this section, we study a heat transfer problem in a composite material where the temperature and heat fluxes are matched across the interface [1]. This canonical problem has an initial condition, boundary conditions and a flux constraint, which makes it ideal for highlighting the implementation intricacies of our method as well as demonstrating the significant improvement we achieve using our proposed formulation. Consider a time-dependent heat equation in a composite medium,
\[\frac{\partial u(x,t)}{\partial t}=\frac{\partial}{\partial x}[\kappa(x,t) \frac{\partial u(x,t)}{\partial x}]+s(x,t),\quad(x,t)\in\Omega\times[0,\tau] \tag{23}\]
along with Dirichlet boundary condition
\[u(x,t)=g(x,t),\quad(x,t)\in\partial\Omega\times(0,\tau], \tag{24}\]
and initial condition
\[u(x,0)=h(x),\quad x\in\Omega, \tag{25}\]
where \(u\) is the temperature, \(s\) is a heat source function, \(\kappa\) is the thermal conductivity, and \(g\) and \(h\) are the boundary and initial source functions, respectively. The composite medium consists of two non-overlapping sub-domains where \(\Omega=\Omega_{1}\cup\Omega_{2}\). We consider the thermal conductivity of the medium to vary as follows
\[\kappa(x,t)=\begin{cases}1,&(x,t)\in\Omega_{1}\times[0,2],\\ 3\pi,&(x,t)\in\Omega_{2}\times[0,2],\end{cases} \tag{26}\]
where \(\Omega_{1}=\{x|-1\leq x<0\}\) and \(\Omega_{2}=\{x|0<x\leq 1\}\). To accurately evaluate our model, we consider an exact solution of the form
\[u(x,t)=\begin{cases}\sin(3\pi x)t,&(x,t)\in\Omega_{1}\times[0,2],\\ tx,&(x,t)\in\Omega_{2}\times[0,2].\end{cases} \tag{27}\]
The corresponding source functions \(s(x,t)\), \(g(x,t)\) and \(h(x,t)\) can be calculated exactly using (27).
First, let us introduce an auxiliary flux parameter \(\sigma(x,t)=\kappa(x,t)\frac{\partial u}{\partial x}\) to obtain a system of first-order partial differential equations that reads
\[\mathcal{F}(x,t):=\frac{\partial u(x,t)}{\partial t}-\frac{\partial\sigma(x,t)}{\partial x}-s(x,t),\quad(x,t)\in\Omega\times[0,\tau], \tag{28a}\]
\[\mathcal{Q}(x,t):=\sigma(x,t)-\kappa(x,t)\frac{\partial u(x,t)}{\partial x},\quad(x,t)\in\Omega\times[0,\tau], \tag{28b}\]
\[\mathcal{B}(x,t):=u(x,t)-g(x,t),\quad(x,t)\in\partial\Omega\times(0,\tau], \tag{28c}\]
\[\mathcal{I}(x,t):=u(x,0)-h(x),\quad x\in\Omega,\;t=0, \tag{28d}\]
where \(\mathcal{F}\) is the residual form of our differential equation (used as our objective function), \(\mathcal{Q}\) is the residual form of our flux operator (equality constraint), \(\mathcal{B}\) is the residual form of our boundary condition operator (equality constraint), and \(\mathcal{I}\) is the residual form of our initial condition operator (equality constraint).
We use a fully connected neural network architecture parameterized by \(\theta\) with two inputs for \((x,t)\) and two outputs for \(u\) and \(\sigma\), consisting of three hidden layers with 30 neurons per layer and tangent hyperbolic activation functions. We use the L-BFGS optimizer [23] available in the PyTorch package [25] with the _strong Wolfe_ line search function and its maximum iteration number set to five. For the purpose of this problem, we generate \(N_{\mathcal{F}}=N_{\mathcal{Q}}=10^{4}\) collocation points, and \(N_{\mathcal{B}}=2\times 5000\) and \(N_{\mathcal{I}}=5000\) points for approximating the boundary and initial conditions, only once before training.
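A minimal sketch of this setup, assuming the hyperparameters stated above, could be written as:

```python
import torch
import torch.nn as nn

# Two inputs (x, t), two outputs (u, sigma), three hidden layers of
# 30 tanh units, as stated above; naming and initialization are
# illustrative assumptions.
model = nn.Sequential(
    nn.Linear(2, 30), nn.Tanh(),
    nn.Linear(30, 30), nn.Tanh(),
    nn.Linear(30, 30), nn.Tanh(),
    nn.Linear(30, 2),
)
optimizer = torch.optim.LBFGS(model.parameters(), max_iter=5,
                              line_search_fn="strong_wolfe")
```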
First, we solve this problem using the formulation discussed in section 2.2.1 with point-wise constraints. We present the results at \(t=1\) obtained using the point-wise constraint formulation in Fig. 1 and compare the impact of three different strategies to update the penalty parameters in ALM. Figs. 1(a) and (b) present the temperature and heat flux distribution in the composite medium, respectively. We observe that the adaptive penalty update strategy produces results that are in excellent agreement with the exact solution. However, the monotonic and conditional penalty updates produce a solution that is markedly different from the exact solution.
With Fig. 1 we show the importance of adaptively updating the penalty parameter when dealing with diverse constraint types (e.g. boundary condition, initial condition, and flux conservation) that exhibit different physical characteristics. However, we apply the constraints point-wise. As we mentioned previously, the computational cost of this particular implementation scales linearly with the number of points assigned to each constraint, especially when constraining the flux, because it is applied to every collocation point in the domain. Furthermore, with the point-wise constraint formulation, we cannot train our model in a mini-batch fashion since each Lagrange multiplier is attached to a constraint data point.
In Fig. 2, we present the distribution of Lagrange multipliers resulting from the point-wise constraint formulation presented in section 2.2.1. Our goal is to find out if it is necessary to constrain the solution point-wise. We observe that the Lagrange multipliers assume probabilistic distributions regardless of the penalty update strategy. This observation indicates that when there are a large number of point-wise constraints, we can constrain an expected value of the loss on each type of constraint using a single Lagrange multiplier, as seen in Figs. 2(a)-(c). We should note that the expected loss is also commonly used as an objective function in machine learning applications. Hence, we employ
Figure 1: Effects of augmented Lagrangian penalty update strategies on the heat transfer in a composite medium problem using the point-wise constraint formulation (i.e. section 2.2.1). (a) predicted temperature at t = 1, (b) predicted heat flux at t = 1
our proposed expectation-based formulation to train our models using ALM with the different penalty update strategies and showcase the efficiency of our adaptive ALM.
Figure 3 presents the results of our PECANN model with the adaptive ALM and the expected loss formulation given in section 2.2.2 applied to the heat transfer in a composite medium problem. Specifically, Figs. 3(a)-(b) present the space-time solution of the temperature and the absolute value of the point-wise error in temperature produced by our model, respectively. The space-time solution of the heat flux and its absolute error are shown in Figs. 3(c)-(d), respectively. Collectively, Fig. 3 demonstrates that our predictions closely match the exact solutions. Our PECANN models using the monotonic (MPU) and conditional (CPU) penalty update strategies do not converge, and therefore we do not report temperature and heat flux predictions for those two strategies. However, we provide the error norms resulting from all three penalty update strategies for comparison purposes in Fig. 4, from which we observe that ALM with the adaptive penalty update strategy outperforms the other two strategies. From Fig. 4, we also observe that the adaptive penalty update (APU) strategy can reach acceptable error levels with an even smaller number of epochs.
To gain further insight into how ALM with adaptive penalty updates (APU) works, we analyze the evolution of the penalty parameters during training. Fig. 5(a) shows the evolution of the penalty parameters with each epoch for enforcing boundary conditions \(\mu_{\mathcal{B}}\), initial conditions \(\mu_{\mathcal{I}}\), and flux constraints \(\mu_{\mathcal{Q}}\) using the adaptive penalty update (APU). The penalty parameters for each constraint type grow at different rates over the course of training, justifying the need for adaptive and independent penalty parameters for each constraint type. Fig. 5(b) illustrates the evolution of the penalty parameter for enforcing boundary conditions, initial conditions, and flux constraints using the conditional penalty update (CPU) and monotonically increasing penalty update (MPU) strategies. In both cases, the penalty parameter quickly reaches the maximum penalty value of \(\mu_{\max}=10^{4}\), causing the problem to become ill-conditioned and leading to divergence during optimization. This observation again highlights the importance of using an adaptive penalty update strategy to avoid ill-conditioning and ensure convergence. Additionally, in Fig. 5(c), we present the evolution of the loss on our equality constraints during training when using the adaptive penalty update strategy. We observe that all
Figure 2: Distribution of Lagrange multipliers arising from using the formulation with point-wise constraints (i.e. section 2.2.1). (a) ALM with monotonically increasing penalty update (Algorithm 1). (b) ALM with conditional penalty update (Algorithm 2). (c) ALM with adaptive penalty update (Algorithm 3, proposed).
our constraints decay at different rates, which provides further justification for the varying rates at which the penalty parameters increase.
Our investigation and findings from the heat transfer in a composite medium problem provide strong support for our proposed augmented Lagrangian method with an adaptive penalty update strategy (i.e. Algorithm 3) and for the computationally efficient formulation of the objective function using the expectation of constraints (i.e. the formulation given in section 2.2.2). In the rest of the examples, we adopt this combination to further showcase its efficacy and versatility.
### Forward problem: wave equation
The baseline PINN model [28] with a fully connected neural network architecture is known to struggle for certain types of PDEs [21, 34, 3]. Wang et al. [34] and McClenny and Braga-Neto [21] single out the one-dimensional (1D) wave equation as a challenge case for future PINN models because the baseline PINN model faces severe challenges when applied to this PDE problem. Here, we apply our PECANN model with the proposed adaptive ALM to learn the solution of the same 1D wave equation as studied in [34, 21] and compare the accuracy of our predictions and the size of our neural network model against the results and networks presented in those two studies. While comparing the accuracy levels is significant, we also believe it is imperative to take into perspective the size of the neural network architecture and the number of collocation points deployed to obtain the reported level of accuracy in predictions.
Consider the following 1D wave equation
\[\frac{\partial^{2}u}{\partial t^{2}}-4\frac{\partial^{2}u}{ \partial x^{2}}=0,\quad(x,t)\in\mathcal{U}, \tag{29}\]
with the boundary condition
\[u(x,t)=0,\quad(x,t)\in\partial\mathcal{U}, \tag{30}\]
Figure 3: Heat transfer in a composite medium using the adaptive penalty update strategy and the formulation based expectation of constraints (i.e. section 2.2.2). (a) predicted temperature (b) absolute point-wise error in predicted temperature, (c) predicted heat flux, (d) absolute point-wise error in heat flux prediction.
and the initial conditions
\[u(x,0)=\sin(\pi x)+\frac{1}{2}\sin(4\pi x),\quad\frac{\partial u}{\partial t}(x,0)=0,\quad x\in\Omega. \tag{31}\]
The exact solution to the problem is
\[u(x,t)=\sin(\pi x)\cos(2\pi t)+\frac{1}{2}\sin(4\pi x)\cos(8\pi t),\quad(x,t)\in\mathcal{U}, \tag{32}\]
where \(\mathcal{U}=\{(x,t)\mid x\in\Omega,t=[0,1]\}\), \(\partial\mathcal{U}=\{(x,t)\mid x\in\partial\Omega,t=[0,1]\}\) and \(\Gamma=\{(x,t)\mid x\in\Omega,t=0\}\). \(\partial\Omega\) represents the boundary of the spatial domain \(\Omega=(0,1)\).
Wang et al. [34] and McClenny and Braga-Neto [21] used a fully connected feed-forward neural network with five hidden layers \((H_{L}=5)\) and 500 neurons per hidden layer \((N_{H}=500)\), with \(N_{\mathcal{U}}=300\) collocation points, \(N_{\partial\mathcal{U}}=300\) boundary points and \(N_{\Gamma}=300\) initial points generated at every epoch, to solve the same 1D wave equation. In the present work, we use a single hidden layer \((H_{L}=1)\) with 50 neurons \((N_{H}=50)\) and a tangent hyperbolic activation function. We sample \(N_{\mathcal{U}}=300\) collocation points, \(N_{\partial\mathcal{U}}=300\) boundary points and \(N_{\Gamma}=300\) initial points only once
| Models | Mean \(\mathcal{E}_{r}(u,\hat{u})\pm\) stdev | No. trials | \(H_{L}\times N_{H}\) | \(N_{\mathcal{U}}\) | \(N_{\partial\mathcal{U}}\) | \(N_{\Gamma}\) |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline PINN [34] | \(4.518\times 10^{-1}\) | 1 | \(5\times 500\) | \(24\times 10^{6}\) | \(24\times 10^{6}\) | \(24\times 10^{6}\) |
| PINN using NTK [34] | \(1.728\times 10^{-3}\) | 1 | \(5\times 500\) | \(24\times 10^{6}\) | \(24\times 10^{6}\) | \(24\times 10^{6}\) |
| Baseline PINN [21] | \(3.792\times 10^{-1}\pm 0.162\times 10^{-1}\) | 10 | \(5\times 500\) | \(24\times 10^{6}\) | \(8\times 10^{6}\) | \(8\times 10^{6}\) |
| SA-PINN [21] | \(8.105\times 10^{-1}\pm 1.591\times 10^{-1}\) | 10 | \(5\times 500\) | \(24\times 10^{6}\) | \(8\times 10^{6}\) | \(8\times 10^{6}\) |
| SA-PINN with SGD [21] | \(2.950\times 10^{-2}\pm 7.0\times 10^{-3}\) | 10 | \(5\times 500\) | \(24\times 10^{6}\) | \(8\times 10^{6}\) | \(8\times 10^{6}\) |
| **Current Method** | \(\mathbf{3.990\times 10^{-3}\pm 9.179\times 10^{-4}}\) | 10 | \(\mathbf{1\times 50}\) | **300** | **300** | **300** |

Table 1: Comparison of relative error \(\mathcal{E}_{r}\) levels and network size used in the solution of the 1D wave equation.
Figure 4: Error levels during training using ALM with different penalty update strategy. Top row: Temperature (a) relative error (b) absolute error. Bottom row: Heat flux (c) relative error, (d) absolute error.
before training. Our objective function is the residual of the governing PDE given in (29), and our constraints are the boundary condition (30) and the initial conditions (31). We assign three Lagrange multipliers and three penalty parameters for this problem. We train our model for \(10,000\) epochs even though we obtain an acceptable level of accuracy with \(2,000\) epochs. We use the L-BFGS optimizer with its default parameters in the PyTorch package. Table 1 presents a summary of the prediction errors, architecture size and collocation points for this problem in comparison to other works.
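For illustration, the PDE residual of Eq. (29) can be evaluated with automatic differentiation as in the following sketch; `model` is a hypothetical network mapping \((x,t)\) to \(u\).

```python
import torch

def wave_residual(model, xt):
    """Residual of Eq. (29), u_tt - 4 u_xx, via automatic differentiation.

    `xt` is an (N, 2) tensor of collocation points whose columns are
    assumed ordered as (x, t).
    """
    xt = xt.clone().requires_grad_(True)
    u = model(xt)
    du = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = du[:, 0:1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x),
                               create_graph=True)[0][:, 0:1]
    u_tt = torch.autograd.grad(u_t, xt, torch.ones_like(u_t),
                               create_graph=True)[0][:, 1:2]
    return u_tt - 4.0 * u_xx
```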
We present the prediction of our model in Fig. 6. We can observe that our model has successfully learned the underlying solution to a good level of accuracy. In Table 1 we compare our predictions against other works that tackled the same problem. We report the mean and standard deviation of the relative error calculated over 10 different trials. The results show that our method produces results with error levels that are one order of magnitude lower than the results obtained with the self-adaptive PINNs [21], while using a much smaller neural network model.
### Forward problem: lid-driven cavity flow simulation up to a Reynolds number of 1000
In this section, we apply our adaptive augmented Lagrangian method to solve the steady-state flow in the two-dimensional lid-driven cavity benchmark problem [10] at Reynolds numbers of 100, 400, and 1000. Unlike other works where a stream function-vorticity formulation was adopted, we use the primitive-variables formulation of the Navier-Stokes equations, which potentially makes our approach extensible to three-dimensional fluid flow problems. We should also mention that the lid-driven cavity at a Reynolds number of 1000 is considered much more challenging than the same problem at lower Reynolds numbers, such as \(Re=100\) or \(Re=400\).
Figure 5: Evolution of penalty parameters with different penalty update strategy: (a) evolution of the penalty parameter \(\mu_{\mathcal{B}}\) for enforcing boundary conditions, \(\mu_{\mathcal{I}}\) for enforcing initial conditions, and \(\mu_{\mathcal{Q}}\) for enforcing flux constraints using ALM with adaptive penalty update strategy. (b) evolution of penalty parameters for enforcing boundary condition, initial condition and flux equation using ALM with MPU and CPU. (c) evolution of constraints during training using ALM with APU
Steady-state Navier-Stokes equations in two dimensions and the continuity equation are written as follows:
\[u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}+\frac{\partial p}{\partial x}-\frac{1}{Re}\left(\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\right)=0,\quad(x,y)\in\Omega, \tag{33a}\]
\[u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y}+\frac{\partial p}{\partial y}-\frac{1}{Re}\left(\frac{\partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y^{2}}\right)=0,\quad(x,y)\in\Omega, \tag{33b}\]
\[\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0,\quad(x,y)\in\Omega, \tag{33c}\]
where \(\mathbf{u}=(u,v)\) is the velocity vector, \(Re\) is the Reynolds number and \(p\) is the pressure. We aim to learn the solution of the momentum equations constrained by the continuity equation on \(\Omega=\{(x,y)\mid 0\leq x\leq 1,0\leq y\leq 1\}\) with its boundary \(\partial\Omega\). No-slip boundary conditions are imposed on all walls of the square cavity. The top wall of the square cavity moves at a velocity of \(u(x,1)=1\) in the \(x\) direction. Because we have a closed system with no inlets or outlets where a pressure level would otherwise be defined, we fix the pressure field at an arbitrary reference point as \(p(0.5,0)=0\). Fig. 7 illustrates our simulation setup for the lid-driven cavity flow problem. We should note that our objective function in this problem is the loss on the momentum equations \(\mathbf{\mathcal{F}}\), and our constraints are the continuity equation (i.e. a divergence-free velocity field) \(\mathcal{Q}\), the boundary conditions and the pressure fixed at a reference point.
To formulate our constrained optimization problem, let us define \(\mathbf{\mathcal{F}}=(\mathcal{F}_{x},\mathcal{F}_{y})\), which represents the steady-state, two-dimensional momentum equations, and \(\mathcal{Q}\) as the divergence-free condition as follows
\[\mathcal{F}_{x}(x,y):=u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}+\frac{\partial p}{\partial x}-\frac{1}{Re}\left(\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\right),\quad(x,y)\in\Omega, \tag{34a}\]
\[\mathcal{F}_{y}(x,y):=u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y}+\frac{\partial p}{\partial y}-\frac{1}{Re}\left(\frac{\partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y^{2}}\right),\quad(x,y)\in\Omega, \tag{34b}\]
\[\mathcal{Q}(x,y):=\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y},\quad(x,y)\in\Omega. \tag{34c}\]
Figure 6: Solution of 1D wave equation obtained from a single hidden layer neural network with 50 neurons trained using the proposed augmented Lagrangian method: (a) predicted solution, (b) exact solution (c) absolute pointwise error
Figure 7: Boundary conditions for the lid-driven cavity problem
We sample \(N_{\mathcal{B}}=4\times 64\) points for approximating the boundary conditions and \(N_{\mathcal{Q}}=N_{\mathcal{F}}=10^{4}\) points for approximating the loss on conservation of mass and momentum, respectively, only once before training. Additionally, we constrain a single point for pressure as shown in Fig. 7. We use a fully connected neural network architecture consisting of four hidden layers with 30 neurons in each layer and the tangent hyperbolic activation function. We choose the L-BFGS optimizer with its default parameters and the _strong Wolfe_ line search function that are available in the PyTorch package. We train our network for 2000 epochs for each Reynolds number separately.
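A sketch of how the residuals in Eqs. (34a)-(34c) could be computed with automatic differentiation is given below; the three-output network `model` mapping \((x,y)\) to \((u,v,p)\) is an illustrative assumption.

```python
import torch

def cavity_residuals(model, xy, Re=100.0):
    """Momentum residuals F_x, F_y and divergence Q of Eqs. (34a)-(34c)."""
    xy = xy.clone().requires_grad_(True)
    u, v, p = model(xy).split(1, dim=1)

    def grad(f):
        # gradient of a scalar field with respect to the (x, y) inputs
        return torch.autograd.grad(f, xy, torch.ones_like(f),
                                   create_graph=True)[0]

    u_x, u_y = grad(u).split(1, dim=1)
    v_x, v_y = grad(v).split(1, dim=1)
    p_x, p_y = grad(p).split(1, dim=1)
    u_xx, u_yy = grad(u_x)[:, 0:1], grad(u_y)[:, 1:2]
    v_xx, v_yy = grad(v_x)[:, 0:1], grad(v_y)[:, 1:2]

    f_x = u * u_x + v * u_y + p_x - (u_xx + u_yy) / Re   # Eq. (34a)
    f_y = u * v_x + v * v_y + p_y - (v_xx + v_yy) / Re   # Eq. (34b)
    q = u_x + v_y                                        # Eq. (34c)
    return f_x, f_y, q
```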
In Fig. 8, we investigate the effect of different penalty update strategies on the predictions. We observe that our model trained with the adaptive penalty update strategy in ALM learns the underlying solution for Re=100 well and matches the benchmark data closely. Figs. 8(a)-(b) also show that ALM with monotonic (MPU) or conditional (CPU) penalty update strategies are incapable of producing the expected solution. Next, we consider the case with Reynolds number 400 and train our model for 2000 epochs using the adaptive penalty update strategy only since the other two strategies did not produce acceptable results for \(Re=100\).
We observe from Fig. 9 and Fig. 10 that our model with the adaptive update strategy learns the underlying pattern for \(Re=400\) and \(Re=1000\), respectively. However, the predicted velocity fields do not closely match the benchmark numerical solution, despite our model's superior performance for the \(Re=100\) case.
It is worth noting that our objective in these two simulation cases is to highlight the fact that despite implementing and constraining appropriate boundary conditions, divergence-free velocity field, and anchoring the pressure at a single point, the task of learning becomes increasingly more challenging as the Reynolds number increases. Learning the solution of lid-driven cavity problem at a high Reynolds numbers may be achievable with a larger neural network or a
Figure 8: Impact of three different ALM penalty update strategies on the solution of lid-driven cavity problem at Re = 100: (a) predicted horizontal velocity along the line at \(x=0.5\) compared against the benchmark data, (b) predicted vertical velocity along the line at \(y=0.5\) compared against the benchmark data.
Figure 9: Solution of lid-driven cavity problem at Re = 400 using ALM with APU: (a) predicted horizontal velocity along the line at \(x=0.5\) compared against the benchmark data, (b) predicted vertical velocity along the line at \(y=0.5\) compared against the benchmark data.
higher number of collocation points. However, our aim here is not to demonstrate whether such learning is possible or not. Instead, we aim to illustrate the challenges involved in high Reynolds number flow problems, even when formulated properly as a constrained optimization problem. We hypothesize that for high Reynolds number flow problems, the underlying optimization problems become ill-conditioned and require additional regularization. Therefore, in order to further regularize the optimization process, we introduce a small set of exact, randomly sampled velocity data within the flow domain, as shown in Fig. 11(a)-(b). We use these clean data points as equality constraints on the prediction of our model, similar to the way we handle boundary conditions.
We observe from Figs. 11(c)-(e) that our PECANN model with an adaptive ALM successfully predicts the fluid flow in the cavity at a Reynolds number of 400 with a high degree of accuracy by leveraging a small sample of high fidelity data. Specifically, Figs. 11(c)-(e) show the predicted solution, the \(x\)-component of the velocity along a line at \(x=0.5\), and the \(y\)-component of the velocity along a line at \(y=0.5\) for \(Re=400\), while Figs. 11(f)-(h) show the corresponding results for \(Re=1000\). Compared to the results shown in Fig. 9 and Fig. 10, our new results suggest that a small set of randomly sampled labeled data is beneficial to regularize the optimization process for high Reynolds numbers. We aim to explore the feasibility of learning high Reynolds number flows in two and three dimensions without any labeled data using larger and different neural network architectures in the future.
### Inverse problem: estimating transient thermal boundary condition from noisy data
Here we aim to learn the transient thermal boundary condition and the temperature field based on measured data. Let us consider a parabolic partial differential equation of the form
\[\frac{\partial T}{\partial t}=\kappa\frac{\partial^{2}T}{\partial x^{2}}\quad (x,t)\in(0,1)\times(0,1], \tag{35}\]
where \(T\) is the temperature, \(\kappa\) is the thermal conductivity. The forward solution of Eq. 35 requires the following boundary information
\[T(0,t)\ \forall t\in(0,1],\quad T(1,t)\ \forall t\in(0,1],\quad T(x,0)\ \forall x \in(0,1). \tag{36}\]
However, in this problem, we do not know the boundary condition at \(T(0,t)\). Instead we have noisy data at the interior part of our domain. We constrain the prediction of our model on the governing physics and the noiseless boundary condition that we treat as high-fidelity data. Fig. 12(a)-(c) present the collocation points distributed over the space-time domain, and synthetically generated noisy data from the exact solution at two locations.
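A minimal sketch of generating such synthetic noisy measurements, assuming a closed-form exact solution, is given below; the 10% noise rule follows the description in the next paragraph, and the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_measurements(exact, x_sensor, t, noise_level=0.10):
    """Synthetic sensor readings: the clean signal plus Gaussian noise whose
    standard deviation is `noise_level` times the clean signal's standard
    deviation. `exact(x, t)` is a hypothetical closed-form solution."""
    clean = exact(x_sensor, t)
    sigma = noise_level * clean.std()
    return clean + rng.normal(0.0, sigma, size=clean.shape)

# Usage sketch: 128 noisy readings from a sensor located at x = 0.2
# t = np.linspace(0.0, 1.0, 128)
# T_noisy = noisy_measurements(T_exact, 0.2, t)
```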
We use a fully connected artificial neural network with three hidden layers and 30 neurons per layer. We adopt the tangent hyperbolic activation function. We randomly generate \(N_{\mathcal{F}}=512\) collocation points, \(N_{\mathcal{I}}=64\) training points for approximating the initial condition and \(N_{\mathcal{B}}=64\) boundary training points only once before training. For the noisy measurement data we generate \(N_{\mathcal{M}}=2\times 128\) points with 10 percent noise (Gaussian noise with a standard deviation that is 10 \(\%\) of the standard deviation of the synthetically generated clean data). We use the L-BFGS optimizer with its default parameters and the _strong Wolfe_ line search function. We train our model for \(5000\) epochs. The result of our numerical experiment is presented in Fig. 13.
Figure 13(a) presents the exact solution. From Fig. 13(b) we observe that our predicted solution is in excellent agreement with the exact solution. We provide the absolute point-wise error distribution in Fig. 13(c). We have shown that by
Figure 10: Same caption as in Fig. 9, but for \(Re=1000\)
having two sensors at two different locations inside the domain that are measuring noisy data, we can sufficiently learn not only the boundary condition, but also the entire temperature distribution across the space-time domain.
Figure 11: Lid-driven cavity predictions with a small sample of high fidelity velocity data for different Reynolds numbers. ALM with APU is used in the computations. Locations of randomly sampled high fidelity data: (a) Re = 400, (b) Re = 1000. Results for Re=400: (c) predicted velocity streamlines, (d) predicted horizontal velocity along the line at \(x=0.5\) compared against the benchmark result, (e) predicted vertical velocity along the line at \(y=0.5\) compared against the benchmark result. Bottom row: Results for Re = 1000, same captions as in (c)-(e)
### Inverse problem: estimation of transient source term in diffusion problems
In this example, we aim to learn the temperature distribution as well as the heat source using noisy measurements and noiseless boundary or initial conditions, which is a more challenging inverse problem than the previous example. Hasanov [12] proposed a collocation algorithm, based on the piece-wise linear approximation of the unknown heat source. Several works have also been proposed to learn the unknown heat source with a single dependent variable [8, 15]. In our approach, we use a neural network model to represent the temperature and the heat source with time and space dependency. We then learn the heat distribution and the heat source simultaneously.
Let us consider the following initial-boundary value problem of the heat conduction equation as follows:
\[\frac{\partial u}{\partial t}=\kappa\frac{\partial^{2}u}{\partial x^{2}}+s, \quad(x,t)\in(0,1)\times(0,1], \tag{37}\]
where \(u\) is the temperature, \(\kappa\) is the thermal conductivity and \(s\) is the heat source. The objective is to learn the solution \(u\) and the heat source \(s\) from noiseless boundary condition and noisy measurements inside our domain. A schematic representation of data is given in Fig. 14.
Unlike typical inverse source problems that rely on a single input variable, this paper addresses a more challenging scenario where the problem is dependent not only on the spatial variable \(\mathbf{x}\) but also on the temporal variable \(t\). We demonstrate that we obtain an accurate prediction without resorting to any stabilizing scheme or assuming any fundamental solution for the temperature or the heat source distribution. It is worth mentioning that our constrained optimization problem may also be called a PDE-constrained optimization problem. For this problem, we use a fully connected artificial neural network with three hidden layers, 30 neurons per layer, and the tangent hyperbolic activation function. We randomly generate \(N_{\mathcal{F}}=4096\) collocation points, \(N_{\mathcal{I}}=128\) training points for approximating the initial condition, and \(N_{\mathcal{B}}=2\times 128\) training boundary data only once before training. For the noisy data, we generate \(N_{\mathcal{M}}=512\) measurements with 10 percent noise (Gaussian noise with a standard
Figure 12: Collocation points and measurement data for estimating the transient thermal boundary condition from noisy data: (a) collocation points (magenta), boundary points (black) and collocation points for approximating the initial condition (blue), (b) noisy measured temperature \(\tilde{T}\) and exact temperature \(T\) at \(x=0.2\), (c) noisy measured temperature \(\tilde{T}\) and exact temperature \(T\) at \(x=0.6\).
Figure 13: Estimating transient thermal boundary condition: (a) exact solution (b), predicted solution (c), absolute pointwise error
Figure 14: Schematic representation of training data: (a) initial conditions (red points), boundary conditions (magenta points) and noisy measurement data (blue points), (b) collocation points.
Figure 15: Estimating transient heat source: (a) exact temperature distribution, (b) predicted temperature distribution, (c) absolute point wise error in temperature distribution, (d) exact heat source distribution, (e) predicted heat source distribution, (f) absolute point wise error in heat source distribution
deviation that is 10 \(\%\) of the standard deviation of the synthetically generated clean data). We use the L-BFGS optimizer with its default parameters and the _strong Wolfe_ line search function. We train our model for \(5000\) epochs.
The results of this numerical experiment are presented in Fig. 15. The exact temperature distribution is plotted in Fig. 15(a). From Figs. 15(b)-(c), we observe that our predicted solution agrees with the exact solution to a great degree. The exact distribution of the heat source is plotted in Fig. 15(d). Similar to the temperature distribution, the predicted heat source distribution agrees closely with the exact solution, as shown in Figs. 15(e)-(f).
## 4 Conclusion
Constrained optimization is central to the formulation of the PECANN framework [2] for learning the solution of forward and inverse problems and sets it apart from the popular PINN framework, which adopts an unconstrained optimization in which different loss terms are combined into a composite objective function through a linear scalarization. In the PECANN framework, the augmented Lagrangian method (ALM) enables us to constrain the solution of a PDE with its boundary and initial conditions and with any high-fidelity data that may be available in a principled fashion. However, for challenging forward and inverse PDE problems, the set of constraints can be large and diverse in their characteristics. In the present work, we have demonstrated that conventional approaches to update penalty and Lagrange multipliers within the ALM can fail for PDE problems with complex constraints. Consequently, we introduced an adaptive ALM in which a unique penalty parameter is assigned to each constraint and updated according to a proposed rule. This new method within the PECANN framework enables us to constrain the solution of PDEs flexibly toward convergence without the need to manually tune the hyperparameters in the objective function or resort to alternative strategies to balance the loss terms, which may not necessarily satisfy the optimality conditions.
When the problem size gets larger, constraining each collocation point associated with a constraint becomes computationally taxing. We have empirically demonstrated that the Lagrange multipliers associated with a particular type of constraint exhibit a distribution with a clear mean value. Consequently, we have revised our constrained optimization formulation based on the expectation of each loss and constraint term. Based on our experiments, the resulting formulation does not affect the efficacy of our PECANN framework, but makes it computationally more efficient and enables mini-batch training.
We applied our proposed model to several forward and PDE-constrained inverse problems to demonstrate its efficacy and versatility. A noteworthy example is our simulation of the lid-driven cavity benchmark problem at a Reynolds number of 1000 using the primitive-variables formulation of the Navier-Stokes equations. The \(Re=1000\) case is known to be challenging to simulate because of the dominant effect of advection in the flow physics and the development of strong circulations in the bottom corners of the cavity. Unlike the streamfunction-vorticity formulation of the two-dimensional Navier-Stokes equations, the primitive-variables formulation is more general, but at the same time more challenging to solve computationally because it retains the pressure as a variable in addition to the velocity field. In our flow simulations, we found that the key to obtaining results in close agreement with the benchmark data [10] was to constrain the momentum equations with the divergence-free condition, the boundary conditions, and a small set of randomly sampled high-fidelity data, while handling each type of constraint with its own unique, adaptive penalty parameter. Another PDE problem that has been viewed as a challenge case for PINNs is the one-dimensional wave equation. Again, our proposed model performed better than state-of-the-art PINN models for the wave equation problem as well. The codes used to produce the results in this paper are publicly available at [https://github.com/HiPerSimLab/PECANN](https://github.com/HiPerSimLab/PECANN). Two additional examples are provided in the Appendix.
## Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No. 1953204 and in part by the University of Pittsburgh Center for Research Computing through the resources provided.
## Appendix A Additional Examples
### Inviscid scalar transport in one dimension
In this section, we study the transport of a physical quantity dissolved or suspended in an inviscid fluid with a constant velocity. Numerical solutions of inviscid flows often exhibit a viscous or diffusive behavior owing to numerical dispersion. Let us consider a convection equation of the form
\[\frac{\partial\xi}{\partial t}+u\frac{\partial\xi}{\partial x}=0,\;\forall(x,t )\in\Omega\times[0,1], \tag{38}\]
satisfying the following boundary condition
\[\xi(0,t)=\xi(2\pi,t)\;\forall t\in[0,1], \tag{39}\]
and initial condition
\[\xi(x,0)=h(x),\;\forall x\in\Omega, \tag{40}\]
where \(\xi\) is any physical quantity to be convected with the velocity \(u\), \(\Omega=\{x\mid 0<x<2\pi\}\) and \(\partial\Omega\) is its boundary. Eq.(38) is inviscid, so it lacks viscosity or diffusivity. For this problem, we consider \(u=40\) and \(h(x)=\sin(x)\). The analytical solution for the above problem is given as follows [16]
\[\xi(x,t)=F^{-1}\left(F(h(x))\,e^{-iukt}\right), \tag{41}\]
where \(F\) is the Fourier transform, \(k\) is the frequency in the Fourier domain and \(i=\sqrt{-1}\). Since the PDE is already first-order, there is no need to introduce any auxiliary parameters. Following is the residual form of the PDE used to formulate our objective function,
\[\mathcal{F}(x,t)=\frac{\partial\xi(x,t)}{\partial t}+u\frac{ \partial\xi(x,t)}{\partial x}, \tag{42a}\] \[\mathcal{B}(t)=\xi(0,t)-\xi(2\pi,t),\] (42b) \[\mathcal{I}(x)=\xi(x,0)-\sin(x), \tag{42c}\]
where \(\mathcal{F}\) is the residual form of our differential equation, \(\mathcal{B}\) and \(\mathcal{I}\) are our boundary condition and initial condition constraints.
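The residual forms in Eqs. (42a)-(42c) and the spectral solution in Eq. (41) translate into a few lines of PyTorch/NumPy, as sketched below; the small network is only a stand-in for the four-hidden-layer architecture described next, and the random sampling of boundary times is an illustrative choice.

```python
import numpy as np
import torch

u = 40.0
model = torch.nn.Sequential(torch.nn.Linear(2, 50), torch.nn.Tanh(),
                            torch.nn.Linear(50, 1))   # stand-in for the 4-layer network

def residuals(x, t):
    """PDE, boundary, and initial-condition residuals of Eqs. (42a)-(42c)."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    xi = model(torch.stack([x, t], dim=-1)).squeeze(-1)
    xi_t = torch.autograd.grad(xi.sum(), t, create_graph=True)[0]
    xi_x = torch.autograd.grad(xi.sum(), x, create_graph=True)[0]
    F_res = xi_t + u * xi_x                                           # Eq. (42a)
    tb = torch.rand_like(t)                                           # boundary times
    B_res = (model(torch.stack([torch.zeros_like(tb), tb], -1))
             - model(torch.stack([torch.full_like(tb, 2 * np.pi), tb], -1)))   # Eq. (42b)
    I_res = model(torch.stack([x, torch.zeros_like(x)], -1)).squeeze(-1) \
            - torch.sin(x)                                            # Eq. (42c)
    return F_res, B_res, I_res

def exact_solution(x, t):
    """Spectral evaluation of Eq. (41) on a uniform periodic grid x in [0, 2*pi)."""
    k = np.fft.fftfreq(len(x), d=(x[1] - x[0]) / (2 * np.pi))   # integer wavenumbers
    return np.real(np.fft.ifft(np.fft.fft(np.sin(x)) * np.exp(-1j * u * k * t)))
```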
For this problem, we use a fully connected neural network architecture consisting of four hidden layers with 50 neurons each and tangent hyperbolic activation functions. We generate \(N_{\mathcal{F}}=512\) collocation points from the interior part of the domain, \(N_{\mathcal{B}}=512\) from each boundary, and \(N_{\mathcal{I}}=512\) for approximating the initial conditions at each epoch. We use the L-BFGS optimizer with its default parameters and the _strong Wolfe_ line search function that is available in the PyTorch package. We train our network for 5000 epochs. We present the prediction of our neural network in Fig. 16. We observe that our neural network model has successfully learned the underlying solution, as shown in Figs. 16(b)-(c).
We also present a summary of the error norms from our approach and the state-of-the-art results given in [16] in Table 2. We observe that our method achieves a relative error \(\mathcal{E}_{r}=8.161\times 10^{-4}\), which is two orders of magnitude better than the curriculum learning method presented in [16], and three orders of magnitude better than the baseline PINN model performance reported in the same study.
\begin{table}
\begin{tabular}{l c c}
\hline \hline
Models & \(\mathcal{E}_{r}(\xi,\hat{\xi})\) & MAE \\
\hline
Baseline PINN [16] & \(9.61\times 10^{-1}\) & \(5.82\times 10^{-1}\) \\
Curriculum learning [16] & \(5.33\times 10^{-2}\) & \(2.69\times 10^{-2}\) \\
Current method & \(\mathbf{8.161\times 10^{-4}}\) & \(\mathbf{6.810\times 10^{-4}}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Inviscid scalar transport problem: summary of the relative error \(\mathcal{E}_{r}\) and the mean absolute error (MAE) metrics for training a fixed neural network architecture.
Figure 16: Inviscid scalar transport problem: (a) exact solution, (b) predicted solution, (c) absolute point-wise error
### Mixing of a hot front with a cold front
In this section, we explore the formation of cold and warm fronts in a two-dimensional environment, which can be described by the following convection equation:
\[\mathcal{F}:=\frac{\partial\xi}{\partial t}+u\frac{\partial\xi}{\partial x}+v \frac{\partial\xi}{\partial y}=0\ \in\Omega\times(0,T], \tag{43}\]
where \(\Omega=\{(x,y)\ |\ -4\leq x,y\leq 4\}\) and \(T=4\). The boundary conditions considered throughout this work are zero flux of the variable \(\xi\) along each boundary as shown in Fig. 17. The problem has the following analytical solution:
\[\xi(x,y,t)=-\tanh\Big{[}\frac{y}{2}\cos\omega t-\frac{x}{2}\sin\omega t\Big{]}, \tag{44}\]
where \(\omega\) is the frequency, which is defined as
\[\omega=\frac{\nu_{t}}{r\nu_{t,max}}. \tag{45}\]
The velocity field \(\nu_{t}=\mathrm{sech}^{2}r\,\tanh r\) is the tangential velocity around the center, with a maximum value \(\nu_{t,\max}=0.385\). The velocity components in the \(x\) and \(y\) directions can be obtained, respectively, as follows:
\[u(x,y)=-\omega y,\quad v(x,y)=\omega x. \tag{46}\]
We also present the velocity field in Fig. 18(a). The initial scalar field varies gradually from positive values at the bottom to negative values at the top, as can be seen in Fig. 18(b); the maximum and minimum values of the field are \(\xi_{\max}=0.964\) and \(\xi_{\min}=-0.964\), respectively.
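The exact fields in Eqs. (44)-(46) are easy to evaluate directly, as the short NumPy sketch below shows; the grid resolution is arbitrary, and the small constant added to \(r\) is an implementation detail to avoid division by zero at the vortex center.

```python
import numpy as np

def fields(x, y, t):
    """Velocity (Eq. 46) and exact scalar field (Eq. 44) of the mixing-fronts problem."""
    r = np.sqrt(x**2 + y**2) + 1e-12             # avoid division by zero at the center
    nu_t = (1.0 / np.cosh(r))**2 * np.tanh(r)    # tangential velocity sech^2(r) tanh(r)
    omega = nu_t / (r * 0.385)                   # Eq. (45) with nu_t,max = 0.385
    u, v = -omega * y, omega * x                 # Eq. (46)
    xi = -np.tanh(0.5 * y * np.cos(omega * t)
                  - 0.5 * x * np.sin(omega * t))  # Eq. (44)
    return u, v, xi

X, Y = np.meshgrid(np.linspace(-4, 4, 81), np.linspace(-4, 4, 81))
u0, v0, xi0 = fields(X, Y, 0.0)
print(xi0.max(), xi0.min())   # approximately +/-0.964, matching Fig. 18(b)
```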
Figure 17: Mixing hot and cold front: Schematic representation of the domain with its boundary condition
Figure 18: Velocity and initial state for the mixing of hot and cold fronts: (a) velocity field, (b) initial state of the scalar field with \(\xi_{\max}=0.964\) and \(\xi_{\min}=-0.964\).
We use a deep fully connected neural network with 4 hidden layers, each with 30 neurons, that we train for 5000 epochs in total. We use a Sobol sequence to generate \(N_{\mathcal{F}}=10{,}000\) residual points from the interior part of the domain, \(N_{\mathcal{B}}=512\) points from each of the boundaries (faces), and \(N_{\mathcal{I}}=512\) points for enforcing the initial condition only once before training. Our optimizer is L-BFGS with its default parameters and the _strong Wolfe_ line search function that is available in the PyTorch package.
Figure 19 presents the temperature contours obtained from the exact solution along with our predictions. We observe that our PECANN model is in good agreement with the exact solution. We also provide a summary of the RMS errors obtained from our neural network model, along with those of conventional numerical methods used in other works, in Table 3.
Table 3 compares the RMS error level of our predictions against the RMS error levels produced by five different finite-volume-based advection schemes, as presented in the work of Tamamidis and Assanis [30]. Among the finite-volume-based methods, the most accurate numerical results were consistently obtained with the QUICK scheme on all meshes. We should note that our results for the Cartesian mesh sizes shown in Table 3 are post-training evaluations of the trained neural network on those meshes. For this reason, our error levels are nearly the same across all mesh sizes.
2301.03766 | Optimal Power Flow Based on Physical-Model-Integrated Neural Network with Worth-Learning Data Generation | Zuntao Hu, Hongcai Zhang | 2023-01-10T03:06:08Z | http://arxiv.org/abs/2301.03766v1

# Optimal Power Flow Based on Physical-Model-Integrated Neural Network with Worth-Learning Data Generation
###### Abstract
Fast and reliable solvers for optimal power flow (OPF) problems are attracting surging research interest. As surrogates of physical-model-based OPF solvers, neural network (NN) solvers can accelerate the solving process. However, they may be unreliable for "unseen" inputs when the training dataset is unrepresentative. Enhancing the representativeness of the training dataset for NN solvers is indispensable but is not well studied in the literature. To tackle this challenge, we propose an OPF solver based on a physical-model-integrated NN with worth-learning data generation. The designed NN is a combination of a conventional multi-layer perceptron (MLP) and an OPF-model module, which outputs not only the optimal decision variables of the OPF problem but also the constraint violation degree. Based on this NN, the worth-learning data generation method can identify feasible samples that are not well generalized by the NN. By iteratively applying this method and including the newly identified worth-learning samples in the training set, the representativeness of the training set can be significantly enhanced. Therefore, the solution reliability of the NN solver can be remarkably improved. Experimental results show that the proposed method leads to an over 50% reduction of constraint violations and optimality loss compared to conventional NN solvers.
_Index Terms_—Optimal power flow, physical-model-integrated neural network, worth-learning data generation.
## I Introduction
Optimal power flow (OPF) is a fundamental but challenging problem for power systems [1]. A typical OPF problem usually involves determining the optimal power dispatch with an objective, e.g., minimizing total generation costs or power loss, while satisfying nonlinear power flow equations and other physical or engineering constraints [2]. Due to the nonlinear interrelation of nodal power injections and voltages, OPF is non-convex, NP-hard, and cannot be solved efficiently [3]. With the increasing integration of renewable generation and flexible demands, uncertainty and volatility have been rising on both the demand and supply sides of modern power systems [4], which requires OPF to be solved more frequently. Thus, fast and reliable OPF solvers have become indispensable to ensure effective operations of modern power systems and have attracted surging interest in academia.
There is a dilemma between the solving efficiency and solution reliability of OPF. Conventionally, OPF is solved by iterative algorithms, such as interior point algorithms, based on explicit physical models [5]. However, these methods may converge to locally optimal solutions. Recently, some researchers have made great progress in designing conic relaxation models for OPF, which are convex and can be efficiently solved [6, 7, 8]. Nevertheless, the exactness of these relaxations may not hold in practical scenarios, and they may obtain infeasible solutions [9]. In addition, the scalability of the conic relaxation of alternating current optimal power flow (AC-OPF) may still be a challenge, particularly in online, combinatorial, and stochastic settings [10].
To overcome the limitation of the aforementioned physical-model-based solvers, some researchers propose surrogate OPF solvers based on neural networks (NNs) [11, 12, 13]. These solvers use NNs to approximate the functional mapping from the operational parameters (e.g., profiles of renewable generation and power demands) to the decision variables (e.g., power dispatch) of OPF. Compared to iterative algorithms, they can introduce significant speedup because an NN is only composed of simple fundamental functions in sequence [12, 13]. However, one of the critical problems of NN solvers is that they may be unreliable if not properly trained, especially for "unseen" inputs in feasible regions due to NNs' mystery generalization mechanism [14].
The generalization of NNs is mainly influenced by their structures, loss functions, and training data. Most published papers propose to enhance the generalization of NN OPF solvers by adjusting the structures and loss functions. Various advanced NN structures rather than conventional fully connected networks are employed to imitate AC-OPF. For example, Overkro _et al._[15] use graph NNs to approximate a given optimal solution. Su _et al._[16] employ a deep belief network to fit the generator's power in OPF. Zhang _et al._[17] construct a convex NN solving DC-OPF to guarantee the generalization of NNs. Jeyaraj _et al._[18] employ a Bayesian regularized deep NN to solve the OPF in DC microgrids. Some researchers design elaborate loss functions that penalize the constraints violation, combine Karush-Kuhn-Tucker conditions, or include derivatives of decision variables to operational parameters. For example, Pan _et al._[11] introduce a penalty term related to the inequality constraints into the loss function. This approach can speed up the computation by up to two orders of magnitude compared to the Gurobi solver, but 18.3% of its solutions are infeasible. Ferdinado _et al._[12] include a Lagrange item in the loss function of NNs. Their method's prediction errors are as low as 0.2%, and its solving speed is faster than DC-OPF by at least two orders of magnitude. Manish _et al._[10] include sensitivity information in the training of NN so that only using about 10% to 25% of training data can attain the same approximation accuracy as methods without sensitivity information. Nellikkath _et al._[19] apply physics-informed
NNs to OPF problems, and their results have higher accuracy than conventional NNs.
The above-mentioned studies have made significant progress in designing elaborate network structures and loss functions. However, little attention has been paid to the training set generation problem. Specifically, they all adopt conventional probability sampling methods to produce datasets for training and testing, such as simple random sampling [10, 11, 12, 13, 15, 17, 20], Monte Carlo simulation [18], or Latin hypercube sampling [19, 16]. These probability sampling methods cannot provide a theoretical guarantee that a generated training set can represent the input space of the OPF problem properly. As a result, probability sampling methods may generate insufficient and unrepresentative training sets, so the trained NN solvers may provide unreliable solutions.
It is important to create a sufficiently representative dataset for training an NN OPF solver. A training set's representativeness depends on its size and distribution in its feasible region [21]. Taking a medium-scale OPF problem as an example, millions of data samples may still be sparse given the high dimension of the NN's inputs (e.g., operational parameters of the OPF problem: renewable generation and power demands at all buses); in addition, because the OPF problem is non-convex, the feasible region of the NN's inputs is a complicated irregular space. Thus, generating a representative training set to cover all the feasible regions of the inputs with an acceptable size is quite challenging. Without a representative training set, it is difficult to guarantee that the NN OPF solver's outputs are reliable, especially given "unseen" inputs in the inference process, as discussed in [22, 23].
To address the above challenge, this study proposes a physical-model-integrated deep NN method with worth-learning data generation to solve AC-OPF problems. To the best of our knowledge, this is the first study that has addressed the representativeness problem of the training dataset for NN OPF solvers. The major contributions of this study are twofold:
1. A novel physical-model-integrated NN is designed for solving the AC-OPF problem. This NN is constructed by a conventional MLP integrating an OPF-model module, which outputs not only the optimal decision variables of the OPF problem but also the violation degree of constraints. By penalizing the latter in the loss function during training, the NN can generate more reliable decision variables.
2. Based on the designed NN, a novel generation method for worth-learning training data is proposed, which can identify samples in the input feasible region that are not well generalized by the previous NN. By iteratively applying this method during the training process, the trained NN gradually generalizes to the whole feasible region. As a result, the generalization and reliability of the proposed NN solver can be significantly enhanced.
Furthermore, comprehensive numerical experiments are conducted, which prove that the proposed method is effective in terms of both reliability and optimality for solving AC-OPF problems with high computational efficiency.
The remainder of this article is organized as follows. Section II provides preliminary models and the motivations behind this study. Section III introduces the proposed method. Section IV details the experiments. Section V concludes this paper.
## II Analysis of Approximating OPF Problems by NN
### _AC-OPF problem_
The AC-OPF problem aims to determine the optimal power dispatch (usually for generators) given specific operating conditions of a power system, e.g., power loads and renewable generation. A typical AC-OPF model can be formulated as
\[\min_{\mathbf{V},\,\mathbf{S}^{\mathrm{G}}}\ \mathrm{C}\left(\mathbf{S}^{\mathrm{G}}\right) \tag{1a}\]
s.t.:
\[[\mathbf{V}]\,\mathbf{Y}_{\text{bus}}^{*}\mathbf{V}^{*}=\mathbf{S}^{\mathrm{G}}-\mathbf{S}^{\mathrm{L}}, \tag{1b}\]
\[\underline{\mathbf{S}}^{\mathrm{G}}\leq\mathbf{S}^{\mathrm{G}}\leq\overline{\mathbf{S}}^{\mathrm{G}}, \tag{1c}\]
\[\underline{\mathbf{V}}\leq\mathbf{V}\leq\overline{\mathbf{V}}, \tag{1d}\]
\[\left|\mathbf{Y}_{\text{b}}\mathbf{V}\right|\leq\overline{\mathbf{I}}, \tag{1e}\]
where Eq. (1a) is the objective, e.g., minimizing total generation costs, and Eqs. (1b) to (1e) denote constraints. Symbols \(\mathbf{S}^{\mathrm{G}}\) and \(\mathbf{S}^{\mathrm{L}}\) are \(n\times 1\) vectors representing complex bus injections from generators and loads, respectively, where \(n\) is the number of buses. Symbol \(\mathbf{V}\) is an \(n\times 1\) vector denoting node voltages. Symbol \([.]\) denotes an operator that transforms a vector into a diagonal matrix with the vector elements on the diagonal. Symbol \(\mathbf{Y}_{\text{bus}}\) is a complex \(n\times n\) bus admittance matrix, written as \(\mathbf{Y}\) in other sections for convenience. Symbol \(\mathbf{Y}_{\text{b}}\) is a complex \(n_{\text{b}}\times n\) branch admittance matrix, and \(n_{\text{b}}\) is the number of branches. The upper and lower bounds of any variable \(\mathbf{x}\) are represented by \(\bar{\mathbf{x}}\) and \(\underline{\mathbf{x}}\), respectively. Vector \(\overline{\mathbf{I}}\) denotes the current flow limit of branches.
### _AC-OPF mapping from loads to optimal dispatch_
An NN model describes an input-output mapping. Specifically, for an NN model solving the AC-OPF problem shown in Eq. (1), the input is the power demand \(\mathbf{S}^{\mathrm{L}}\), and the output is the optimal generation \(\mathbf{S}^{\mathrm{G}^{\ast}}\). Hence, an NN OPF solver describes the mapping \(\mathbf{S}^{\mathrm{G}^{\ast}}=f^{\text{OPF}}(\mathbf{S}^{\mathrm{L}})\). A well-trained NN should be able to accurately approximate this mapping.
We provide a basic example of a 3-bus system, as shown in Fig. 1, to illustrate how an NN works for OPF problems and to explain the corresponding challenge for generalization. For simplicity, we assume there is no reactive power in the system and set \(r_{31}=r_{12}=0.01\); \(\underline{P}_{i}=0\), \(\overline{P}_{i}=4\) for \(i\in\{1,3\}\); \(P_{2}\in[-7,0]\); and \(\underline{V}_{i}=0.95\), \(\overline{V}_{i}=1.05\) for \(i\in\{1,2,3\}\).
Fig. 1: The 3-bus system.
Then, the OPF model Eq. (1) is reduced to the following quadratic programming:
\[\min_{\mathbf{V},\,\mathbf{P}^{\text{G}}}\ P_{1}+1.5\times P_{3} \tag{2a}\]
s.t.:
\[P_{1}=V_{1}\left(V_{1}-V_{2}\right)/0.01+V_{1}\left(V_{1}-V_{3}\right)/0.01, \tag{2b}\]
\[P_{2}=V_{2}\left(V_{2}-V_{1}\right)/0.01, \tag{2c}\]
\[P_{3}=V_{3}\left(V_{3}-V_{1}\right)/0.01, \tag{2d}\]
\[0.95\leq V_{3}\leq 1.05,\ 0.95\leq V_{2}\leq 1.05, \tag{2e}\]
\[0\leq P_{1}\leq 4,\ 0\leq P_{3}\leq 4,\ V_{1}=1, \tag{2f}\]
where \(\mathbf{V}\) is \([V_{1}\,\ V_{2}\,\ V_{3}]^{\top}\), and \(\mathbf{P}^{\text{G}}\) is \([P_{1}\,\ P_{3}]^{\top}\).
Given that \(P_{2}\) ranges from \(-7\) to \(0\), the 3-bus OPF model can be solved analytically. The closed-form solution \([P_{1}^{*}\ P_{3}^{*}]=f_{\text{3-bus}}^{\text{OPF}}(P_{2})\) is formulated as follows:
\[P_{1}^{*}=\begin{cases}50-50\sqrt{0.04P_{2}+1},&\text{c1},\\ 4,&\text{c2},\end{cases} \tag{3}\]

\[P_{3}^{*}=\begin{cases}0,&\text{c1},\\ 213\left(1-0.34\sqrt{0.04P_{2}+1}\right)^{2}+50\sqrt{0.04P_{2}+1}-146,&\text{c2},\end{cases} \tag{4}\]
where c1 denotes condition 1: \(-3.84\leq P_{2}\leq 0\), and c2 denotes condition 2: \(-7\leq P_{2}<-3.84\).
To further analyze the mapping \(f_{\text{3-bus}}^{\text{OPF}}\), we draw the \([P_{1}^{*}\ P_{3}^{*}]\)-\(P_{2}\) curve according to Eqs. (3) and (4), shown in Fig. 2. Both the \(P_{1}^{*}\)-\(P_{2}\) and \(P_{3}^{*}\)-\(P_{2}\) curves are piecewise nonlinear functions, in which two oblique lines are nonlinear because of the quadratic equality constraints. The reason why the two curves above are piecewise is that the active inequalities change the \([P_{1}^{*}\ P_{3}^{*}]\)-\(P_{2}\) relationship. From an optimization perspective, each active inequality will add a unique equality constraint to the relationship, so the pieces in \(f_{\text{3-bus}}^{\text{OPF}}\) are determined by the sets of active inequalities. In this example, the two pieces in each curve correspond to two sets of active inequalities: \(P_{1}\leq 4\) and \(0\leq P_{3}\). Moreover, the two intersection points are the critical points where these inequalities are just satisfied as equalities.
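The breakpoint between the two pieces can be verified numerically from Eqs. (3)-(4); the short Python sketch below, which takes the rounded constants of Eq. (4) as given, recovers \(P_{2}=-3.84\) as the point where \(P_{1}^{*}\) reaches its upper limit of 4.

```python
import numpy as np

def dispatch(P2):
    """Closed-form optimal dispatch of the 3-bus example, Eqs. (3)-(4)."""
    s = np.sqrt(0.04 * P2 + 1)
    if P2 >= -3.84:                                        # condition c1
        return 50 - 50 * s, 0.0
    return 4.0, 213 * (1 - 0.34 * s) ** 2 + 50 * s - 146   # condition c2

# Breakpoint: the c1 expression for P1* hits its upper bound, 50 - 50*sqrt(0.04*P2 + 1) = 4.
P2_star = ((1 - 4 / 50) ** 2 - 1) / 0.04
print(P2_star)            # -3.84
print(dispatch(-3.0))     # on c1: P1* < 4 and P3* = 0
print(dispatch(-5.0))     # on c2: P1* saturates at 4 and P3* > 0
```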
For a general AC-OPF problem, its input is usually high-dimensional (commonly determined by the number of buses), and its feasible space is partitioned into some distinct regions by different sets of active inequality constraints. From an optimization perspective, a set of active constraints uniquely characterizes the relationship \(\mathbf{S}^{\text{G}^{*}}=f^{\text{OPF}}(\mathbf{S}^{\text{L}})\), and the number of pieces theoretically increases with the number of inequality constraints by exponential order [24, 25, 26]. Therefore, there are massive regions, and each region corresponds to a unique mapping relation, i.e., a piece of mapping function \(f^{\text{OPF}}\).
### _Challenges of fitting OPF mapping by NN_
As shown in Fig. 2(a), to fit the two-dimensional piecewise nonlinear curve of \(f_{\text{3-bus}}^{\text{OPF}}\), we first adopt four data samples by simple random sampling and then use an NN to learn the curve. Obviously, there are significant fitting errors between the fitted and the original curves. Because the training set lacks samples near the intersections of the curve (where \(P_{2}=-3.84\) in this case), the NN cannot accurately approximate the mapping in the neighboring region of the intersections.
A training set representing the whole input space is a prerequisite for an NN to approximate the curve properly. However, it is nontrivial to generate a representative training set by probability sampling. As shown in Fig. 2(a), the intersections of \(f^{\text{OPF}}\) are key points for representativeness, and the number of intersections increases exponentially with that of the inequality constraints, as analyzed in II-B. When each sample is selected with a small probability \(\rho\), generating a dataset containing all the intersection points is a low-probability event whose probability is equal to \(\rho^{m}\), where \(m\) is the number of intersections. In practice, the only way to collect sufficient data representing the input space by probability sampling is to expand the dataset as much as possible [27]. This is impractical for large power networks. Therefore, the conventional probability sampling in the literature can hardly produce a representative dataset of moderate size.
As shown in Fig. 2(b), if we are able to identify the two intersections, i.e., (\(P_{2}=-3.84\), \(P_{1}=4\)) and (\(P_{2}=-3.84\), \(P_{3}=0\)), and include them as new samples in the training dataset, the corresponding large fitting errors of the NN can be eliminated. These samples are termed _worth-learning_ data samples. The focus of this study is to propose a _worth-learning data generation method_ that can help identify worth-learning data samples and overcome the aforementioned disadvantage of conventional probability sampling (detailed in the following section).
## III A Physical-Model-Integrated NN with Worth-Learning Data Generation
This section proposes a physical-model-integrated NN with a worth-learning data generation method to solve AC-OPF problems. The proposed NN is a combination of a fully-connected network and a transformed OPF model. It outputs not only the optimal decision variables of the OPF problem but also the violation degree of constraints, which provides guidance for identifying worth-learning data. The worth-learning data generation method creates representative training sets to enhance the generalization of the NN solver.
Fig. 2: Examples of an NN fitting the OPF of the 3-bus system based on (a) simple random sampling, and (b) worth-learning data generation.
### _Framework of the proposed method_
The proposed data generation method has an iterative process, as shown in Fig. 3. First, a training set is initialized by random sampling; second, the physical-model-integrated NN is trained on the training set, where an elaborate loss function is utilized; third, worth-learning data for the current NN are identified; fourth, if worth-learning data are identified, these data are added to the training set and the process returns to the second step; otherwise, the current NN is output.
The above training process terminates when no more worth-learning data are identified. This means that the training set is sufficiently representative of the input space of the OPF problem. As a result, the NN trained on this dataset generalizes well to the input feasible set. The following subsections introduce the proposed method in detail.
### _Physical-model-integrated NN_
In the second step of the proposed method (Fig. 3), the NN is trained to fit the mapping \(\boldsymbol{S}^{\text{G}^{\ast}}=f^{\text{OPF}}(\boldsymbol{S}^{\text{L}})\). To obtain better results, we design a physical-model-integrated NN structure consisting of a _conventional NN module_ and a _physical-model module_, as shown in Fig. 4. The former is a conventional MLP, while the latter is a computational graph transformed from the OPF model.
#### III-B1 Conventional NN module
This module first adopts a conventional MLP with learnable parameters to fit the mapping from \(\boldsymbol{S}^{\text{L}}\) to the optimal decision variable \(\boldsymbol{V}_{\text{NN}}\)[28]. The output \(\boldsymbol{V}_{\text{NN}}\) is subject to the box constraint defined in Eq. (1). To ensure that the output \(\boldsymbol{V}_{\text{NN}}\) satisfies this constraint, we design a function \(\text{dRe}()\) to adjust any infeasible output \(\boldsymbol{V}_{\text{NN}}\) into its feasible region, which is formulated as follows:
\[x\leftarrow\text{dRe}(x,\underline{x},\overline{x})=\text{ReLU}(x-\underline {x})-\text{ReLU}(x-\overline{x})+\underline{x}, \tag{5}\]
where \(\text{ReLU}(\text{x})=\text{max}(x,0)\); \(x\) is the input of the function, and its lower and upper bounds are \(\underline{x}\) and \(\overline{x}\), respectively. The diagram of this function is illustrated in Fig. 5.
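Eq. (5) maps directly to a few lines of PyTorch; the snippet below is a direct transcription of the formula, and the test values are illustrative.

```python
import torch
import torch.nn.functional as F

def dRe(x, lower, upper):
    """Eq. (5): a ReLU-based map that clips x into [lower, upper] while remaining differentiable."""
    return F.relu(x - lower) - F.relu(x - upper) + lower

x = torch.linspace(-2.0, 2.0, 9)
print(dRe(x, torch.tensor(-1.0), torch.tensor(1.0)))   # elements clipped to [-1, 1]
```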
Applying \(\text{dRe}()\) as the activation function of the last layer of the conventional MLP, the mathematical model of this conventional NN module is formulated as follows:
\[\boldsymbol{V}_{\text{NN}}=\text{MLP}(\boldsymbol{S}^{\text{L}}), \tag{6}\] \[\boldsymbol{V}_{\text{NN}}\leftarrow\text{dRe}(\boldsymbol{V}_{ \text{NN}},\underline{\boldsymbol{V}},\overline{\boldsymbol{V}}), \tag{7}\]
where Eq. (6) describes the conventional model of MLP and Eq. (7) adjusts the output of the MLP.
#### III-B2 Physical model module
This module receives \(\boldsymbol{V}_{\text{NN}}\) from the previous module, and then it outputs the optimal power generation \(\boldsymbol{S}^{\text{G}}_{\text{phm}}\) and the corresponding constraints violation \(\boldsymbol{V}\boldsymbol{io}_{\text{phm}}\), where the subscript "phm" denotes the _physical model module_. The first output \(\boldsymbol{S}^{\text{G}}_{\text{phm}}\) is the optimal decision variable of the AC-OPF problem. It can be calculated by \(\boldsymbol{V}_{\text{NN}}\) and \(\boldsymbol{S}^{\text{L}}\), as follows:
\[\boldsymbol{S}^{\text{G}}_{\text{phm}}=[\boldsymbol{V}_{\text{NN}}|\boldsymbol {Y}^{\ast}\boldsymbol{V}_{\text{NN}}^{\ast}+\boldsymbol{S}^{\text{L}}. \tag{8}\]
The second output \(\boldsymbol{V}\boldsymbol{io}_{\text{phm}}\) (termed as violation degree) measures the quality of \(\boldsymbol{S}^{\text{G}}_{\text{phm}}\) and is the key metric to guide the proposed worth-learning data generation (see details in the following subsection III-C). Given \(\boldsymbol{V}_{\text{NN}}\) and \(\boldsymbol{S}^{\text{G}}_{\text{phm}}\), the violations of inequality constraints of the AC-OPF problem \(\boldsymbol{V}\boldsymbol{io}_{\text{phm}}\) are calculated as follows:
\[\boldsymbol{V}\boldsymbol{io}^{\text{S}}_{\text{phm}}=\text{ ReLU}(\boldsymbol{S}^{\text{G}}_{\text{phm}}-\overline{\boldsymbol{S}}^{\text{G}})+ \text{ReLU}(\underline{\boldsymbol{S}}^{\text{G}}-\boldsymbol{S}^{\text{G} }_{\text{phm}}), \tag{9a}\] \[\boldsymbol{V}\boldsymbol{io}^{\text{I}}_{\text{phm}}=\text{ ReLU}(|\boldsymbol{Y}_{\boldsymbol{V}}\boldsymbol{V}_{\text{NN}}|-\overline{\boldsymbol{I}}),\] (9b) \[\boldsymbol{V}\boldsymbol{io}_{\text{phm}}=(\boldsymbol{V} \boldsymbol{io}^{\text{S}}_{\text{phm}}\quad\boldsymbol{V}\boldsymbol{io}^{ \text{I}}_{\text{phm}})^{\top}, \tag{9c}\]
where \(\boldsymbol{V}\boldsymbol{io}^{\text{S}}_{\text{phm}}\) denotes the violation of the upper or lower limit of \(\boldsymbol{S}^{\text{G}}_{\text{phm}}\), and \(\boldsymbol{V}\boldsymbol{io}^{\text{I}}_{\text{phm}}\) represents the violation of branch currents.
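In code, this module amounts to one power-flow evaluation plus two ReLU comparisons. The sketch below treats all quantities as real vectors for readability (the AC case applies the same operations to the real and imaginary parts separately, and the conjugations in Eq. (8) then reappear); the function signature is our own convention.

```python
import torch

def physical_model_module(V_nn, S_load, Y_bus, Y_branch, S_min, S_max, I_max):
    """Eqs. (8)-(9): recover generation from V_NN and S^L, then score constraint violations."""
    S_gen = V_nn * (Y_bus @ V_nn) + S_load                          # Eq. (8), real-valued form
    vio_S = torch.relu(S_gen - S_max) + torch.relu(S_min - S_gen)   # Eq. (9a)
    vio_I = torch.relu((Y_branch @ V_nn).abs() - I_max)             # Eq. (9b)
    return S_gen, torch.cat([vio_S, vio_I])                         # Eq. (9c)
```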
_Remark 1_.: _The physical-model-integrated NN is formed by combining the conventional NN module and the physical model module. It inputs \(\boldsymbol{S}^{\text{L}}\) and outputs \(\boldsymbol{S}^{\text{G}}_{\text{phm}}\) and \(\boldsymbol{V}\boldsymbol{io}_{\text{phm}}\), as shown in Fig. 4. Its function is the same as conventional OPF numerical solvers. In addition, it is convenient for users
Fig. 4: The physical-model-integrated NN.
Fig. 5: The dRe() function.
Fig. 3: Framework of the proposed training process.
to directly determine whether the result of the NN OPF solver is acceptable or not based on the violation degree \(\mathbf{Vio_{\text{phm}}}\). In contrast, most NN OPF solvers in the literature are incapable of outputting the violation degree directly [10, 11, 12]._
#### III-B3 Loss function
To enhance the training accuracy of the physical-model-integrated NN, we design an elaborate loss function, which consists of \(\mathbf{V}_{\text{NN}}\) from the _conventional NN module_, and \(\mathbf{S}^{\text{G}}_{\text{phm}}\) and \(\mathbf{Vio}_{\text{phm}}\) from the _physical model module_. The formula is as follows:
\[loss=||\hat{\mathbf{V}}-\mathbf{V}_{\text{NN}}||_{1}+||\hat{\mathbf{S}}^{\text{G}}-\mathbf{S}^{\text{G}}_{\text{phm}}||_{1}+\mathbf{Vio}_{\text{phm}}, \tag{10}\]
where \(\hat{\mathbf{V}}\) and \(\hat{\mathbf{S}}^{\text{G}}\) are label values from the training set, which is a ground truth dataset from numerical solvers.
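Assuming the label tensors \(\hat{\mathbf{V}}\) and \(\hat{\mathbf{S}}^{\text{G}}\) and the two module outputs are available, Eq. (10) becomes a one-liner; summing the violation vector into a scalar penalty is our implementation choice in this sketch.

```python
def loss_fn(V_hat, V_nn, S_hat, S_gen, vio_phm):
    """Eq. (10): L1 fitting errors on voltages and generation, plus the violation degree."""
    return ((V_hat - V_nn).abs().sum()       # ||V_hat - V_NN||_1
            + (S_hat - S_gen).abs().sum()    # ||S_hat - S^G_phm||_1
            + vio_phm.sum())                 # penalize constraint violations
```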
Combining the three terms in the loss function can help enhance fitting precision. As shown in Fig. 6, if the loss function only has the first two items \(||\hat{\mathbf{V}}-\mathbf{V}_{\text{NN}}||_{1}+||\hat{\mathbf{S}}^{\text{G}}-\mathbf{S}^{\text{G}}_{\text{phm}}||_{1}\) to penalize conventional fitting errors, the predicted value will be in a tiny square space (the red square in Fig. 6) around the label value. From an optimization perspective, the optimal label value is usually on the edge of its feasible region (the blue polyhedron in Fig. 6). This edge through the label value splits the square into two parts: the feasible (blue) part and the infeasible (white) part. Intuitively, we would prefer the predicted values to be in the feasible part. Thus, we also penalize the violation degree \(\mathbf{Vio}_{\text{phm}}\) in the loss function to force the predicted values with large \(\mathbf{Vio}_{\text{phm}}\) close to the square's feasible half space for smaller constraint violations.
Although the proposed NN with the elaborate loss function has high training accuracy, it is still difficult to guarantee the generalization of the NN OPF solver to the whole input space with conventional random sampling. Therefore, it is indispensable and challenging to obtain a representative training dataset of moderate size to train the proposed NN, which is the focus of the following subsection.
### _Worth-learning data generation_
As shown in Fig. 3, we adopt an iterative process to identify the worth-learning data. For an NN trained in the previous iteration, we utilize its output \(\mathbf{Vio}_{\text{phm}}\) to help identify new data samples that are not yet properly generalized. Specifically, if an input \(\mathbf{S}^{\text{L}*}\) is feasible for the original OPF problem while the current NN outputs a large violation degree \(\mathbf{Vio}_{\text{phm}}\), the contradiction means the NN has a large fitting error at \(\mathbf{S}^{\text{L}*}\). This is probably because sample \(\mathbf{S}^{\text{L}*}\) was not included in the previous training set and was not generalized by the NN. Hence, this sample \(\mathbf{S}^{\text{L}*}\) can be regarded as a _worth-learning sample_. Including the sample in the training dataset in the next iteration helps enhance the generalization of the NN.
The key to the proposed worth-learning data generation method is to identify worth-learning samples efficiently. Instead of traversing all of the possible inputs, we maximize \(\mathbf{Vio}_{\text{phm}}\) for a given NN to identify the input with a large violation degree. However, the inputs identified in the maximizing process should be feasible for the original OPF problem. Otherwise, the found inputs might be infeasible and useless for the representation of the training data.
#### III-C1 Input feasible set module
To keep the inputs identified in the maximizing process feasible for the original OPF problem, we formulate the _input feasible set module_ to restrict power loads \(\mathbf{S^{\text{L}}}\) to their feasible set. The _feasible set_ is composed of box constraints, current limits, and KCL&KVL constraints, which are transformed from the feasible set of the OPF problem defined in Eq. (1). The partial formulations of the _input feasible set_ are as follows, where the subscript "ifs" denotes the _input feasible set module_:
\[\mathbf{S}^{\text{G}}_{\text{ifs}}=\text{dRe}\left(\mathbf{S^{\prime}}^{\text{G}}_{\text{ifs}},\,\underline{\mathbf{S}}^{\text{G}},\,\overline{\mathbf{S}}^{\text{G}}\right),\ \mathbf{S^{\prime}}^{\text{G}}_{\text{ifs}}\in\mathbb{R}^{n}, \tag{11a}\]
\[\mathbf{V}_{\text{ifs}}=\text{dRe}\left(\mathbf{V^{\prime}}_{\text{ifs}},\,\underline{\mathbf{V}},\,\overline{\mathbf{V}}\right),\ \mathbf{V^{\prime}}_{\text{ifs}}\in\mathbb{R}^{n}, \tag{11b}\]
\[\mathbf{S}^{\text{L}}_{\text{ifs}}=\mathbf{S}^{\text{G}}_{\text{ifs}}-[\mathbf{V}_{\text{ifs}}]\,\mathbf{Y}^{*}\mathbf{V}^{*}_{\text{ifs}}, \tag{11c}\]
\[\mathbf{I}_{\text{ifs}}=\mathbf{Y}_{\text{b}}\mathbf{V}_{\text{ifs}}, \tag{11d}\]
where \(\mathbf{S^{\prime}}^{\text{G}}_{\text{ifs}}\) and \(\mathbf{V^{\prime}}_{\text{ifs}}\) are auxiliary \(n\times 1\) vectors in \(\mathbb{R}^{n}\) and have no physical meaning. Symbols \(\mathbf{S^{\text{G}}_{\text{ifs}}}\) and \(\mathbf{V}_{\text{ifs}}\) are restricted in their box constraints in Eqs. (11a) and (11b). Then the KCL&KVL correlations of \(\mathbf{S^{\text{L}}_{\text{ifs}}}\), \(\mathbf{S^{\text{G}}_{\text{ifs}}}\), and \(\mathbf{V}_{\text{ifs}}\) are described by Eq. (11c). Symbol \(\mathbf{I_{\text{ifs}}}\) in Eq. (11d) denotes the currents at all branches.
The other formulations of the _input feasible set_ aim to calculate \(\mathbf{Vio_{\text{ifs}}}\), the AC-OPF's constraint violations corresponding to \(\mathbf{S^{\text{L}}_{\text{ifs}}}\) and \(\mathbf{I_{\text{ifs}}}\), as follows:
\[\mathbf{Vio}^{\text{S}}_{\text{ifs}}=\text{ReLU}(\mathbf{S}^{\text{L}}_{\text{ifs}}-\overline{\mathbf{S}}^{\text{L}})+\text{ReLU}(\underline{\mathbf{S}}^{\text{L}}-\mathbf{S}^{\text{L}}_{\text{ifs}}), \tag{12a}\]
\[\mathbf{Vio}^{\text{I}}_{\text{ifs}}=\text{ReLU}(|\mathbf{I}_{\text{ifs}}|-\overline{\mathbf{I}}), \tag{12b}\]
\[\mathbf{Vio}_{\text{ifs}}=(\mathbf{Vio}^{\text{S}}_{\text{ifs}}\ \ \mathbf{Vio}^{\text{I}}_{\text{ifs}})^{\top}, \tag{12c}\]
where \(\mathbf{Vio}^{\text{S}}_{\text{ifs}}\) denotes the violation of the upper or lower limit of \(\mathbf{S}^{\text{L}}_{\text{ifs}}\), and \(\mathbf{Vio}^{\text{I}}_{\text{ifs}}\) denotes the violation of branch currents. _Remark 2_.: _This module takes \(\mathbf{S^{\prime}}^{\text{G}}_{\text{ifs}}\) and \(\mathbf{V^{\prime}}_{\text{ifs}}\) as the inputs, and then outputs \(\mathbf{S}^{\text{L}}_{\text{ifs}}\) and \(\mathbf{Vio}_{\text{ifs}}\), as shown in Fig. 7. When \(\mathbf{Vio}_{\text{ifs}}=0\), the corresponding \(\mathbf{S}_{\text{ifs}}^{\text{L}}\) lies in the feasible set of
Fig. 6: Illustration of the effectiveness of the three terms in the loss function.
Fig. 7: The _input feasible set module_.
the AC-OPF problem. To identify feasible \(\mathbf{S}_{\text{ifs}}^{\text{L}}\) in the process of maximizing \(\mathbf{Vio}_{\text{phm}}\), this module backpropagates the gradient of \(\mathbf{Vio}_{\text{phm}}\) while keeping \(\mathbf{Vio}_{\text{ifs}}\leq\zeta\) (\(\zeta\) is a small positive tolerance), and then it updates \(\mathbf{S^{\prime}}_{\text{ifs}}^{\text{G}}\) and \(\mathbf{V^{\prime}}_{\text{ifs}}\). As a result, the corresponding \(\mathbf{S}_{\text{ifs}}^{\text{L}}\) is always feasible. Furthermore, because \(\mathbf{S^{\prime}}_{\text{ifs}}^{\text{G}}\) and \(\mathbf{V^{\prime}}_{\text{ifs}}\) are not bounded, changing them can theoretically reach any feasible \(\mathbf{S}_{\text{ifs}}^{\text{L}}\)._
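A sketch of this module, reusing the dRe function transcribed in Section III-B and again treating quantities as real vectors for readability, is given below; the dictionary of bounds is our own packaging convention.

```python
import torch

def input_feasible_set(S_free, V_free, Y_bus, Y_branch, b):
    """Eqs. (11)-(12): map free variables to a candidate load S^L_ifs and its violation Vio_ifs."""
    S_g = dRe(S_free, b["S_min"], b["S_max"])            # Eq. (11a): box-feasible generation
    V = dRe(V_free, b["V_min"], b["V_max"])              # Eq. (11b): box-feasible voltages
    S_load = S_g - V * (Y_bus @ V)                       # Eq. (11c): KCL & KVL, real-valued form
    I = Y_branch @ V                                     # Eq. (11d): branch currents
    vio = torch.cat([torch.relu(S_load - b["SL_max"]) + torch.relu(b["SL_min"] - S_load),
                     torch.relu(I.abs() - b["I_max"])])  # Eq. (12)
    return S_load, vio
```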
#### III-C2 Max violation backpropagation
To identify worth-learning data, a novel NN is created by inputting \(\mathbf{S}_{\text{ifs}}^{\text{L}}\) into the physical-model-integrated NN (see Fig. 8). This NN has two outputs, i.e., \(\mathbf{Vio}_{\text{phm}}\) and \(\mathbf{Vio}_{\text{ifs}}\). The former measures the constraint violation degree of the OPF solution \(\mathbf{S}^{\text{G}*}\); the latter indicates the feasibility of the OPF input \(\mathbf{S}_{\text{ifs}}^{\text{L}}\). If \(\mathbf{S}_{\text{ifs}}^{\text{L}}\) is a feasible input, i.e., \(\mathbf{Vio}_{\text{ifs}}\leq\zeta\), but the optimal solution \(\mathbf{S}^{\text{G}*}\) is infeasible, i.e., \(\mathbf{Vio}_{\text{phm}}\geq\xi\) (\(\xi\) is a threshold), this means the corresponding input is worth learning (i.e., it has not been learned or generalized by the current NN). Based on this analysis, we design the loss function \(loss_{\text{max}}\) for max violation backpropagation, as follows:
\[loss_{\text{max}}=\mathbf{Vio}_{\text{phm}}-\lambda\times\mathbf{Vio}_{\text{ifs}}, \tag{13}\]
where \(\lambda\) is a large, constant weight parameter. When maximizing this loss function, the algorithm tends to find a worth-learning \(\mathbf{S}_{\text{ifs}}^{\text{L}}\) that has a small \(\mathbf{Vio}_{\text{ifs}}\) but a large \(\mathbf{Vio}_{\text{phm}}\).
During the max violation backpropagation, the proposed algorithm maximizes \(loss_{\text{max}}\) to update the variables \(\mathbf{S^{\prime}}_{\text{ifs}}^{\text{G}}\) and \(\mathbf{V^{\prime}}_{\text{ifs}}\) by gradient backpropagation until \(loss_{\text{max}}\) converges to a local maximum. After this process, the corresponding \(\mathbf{S}_{\text{ifs}}^{\text{L}}\) is also found. Because the maximizing process can be carried out in parallel by the deep learning framework PyTorch, worth-learning samples are found in batches, where the max violation backpropagation uses the previous training set as initial points to identify the new data. Further, the auto-differentiation technique in PyTorch can accelerate the parallel computation. Based on these techniques, massive worth-learning data samples are identified efficiently.
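Putting the two module sketches together, the max violation backpropagation is an ordinary gradient ascent over the free variables. In the sketch below, the toy system data, the Adam optimizer, and the step count are illustrative assumptions (Algorithm 1 below uses plain gradient steps and a convergence test on \(loss_{\text{max}}\)).

```python
import torch

torch.manual_seed(0)
n = 6                                          # toy system size (assumption)
Y_bus, Y_branch = torch.randn(n, n), torch.randn(n, n)
b = {"S_min": -torch.ones(n), "S_max": torch.ones(n),
     "V_min": 0.95 * torch.ones(n), "V_max": 1.05 * torch.ones(n),
     "SL_min": -2 * torch.ones(n), "SL_max": 2 * torch.ones(n), "I_max": torch.ones(n)}
mlp = torch.nn.Sequential(torch.nn.Linear(n, 64), torch.nn.ReLU(), torch.nn.Linear(64, n))

S_free = torch.randn(n, requires_grad=True)    # warm-started from training labels in practice
V_free = torch.randn(n, requires_grad=True)
opt = torch.optim.Adam([S_free, V_free], lr=1e-3)
lam = 1e3                                      # large feasibility weight in Eq. (13)

for step in range(4000):
    S_load, vio_ifs = input_feasible_set(S_free, V_free, Y_bus, Y_branch, b)
    V_nn = dRe(mlp(S_load), b["V_min"], b["V_max"])          # Eqs. (6)-(7)
    _, vio_phm = physical_model_module(V_nn, S_load, Y_bus, Y_branch,
                                       b["S_min"], b["S_max"], b["I_max"])
    loss_max = vio_phm.sum() - lam * vio_ifs.sum()           # Eq. (13)
    opt.zero_grad()
    (-loss_max).backward()                                   # ascend loss_max
    opt.step()

# S_load is worth learning if vio_ifs stays below zeta while vio_phm exceeds xi
```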
### _Overall training process_
The overall training process is presented in Algorithm 1, which first takes an initial training dataset \(\mathbb{D}_{\text{l}}\) (obtained by any conventional sampling method) as input. The learning rate \(\eta\) is equal to \(10^{-3}\), the loss difference tolerance \(\epsilon\) is equal to \(10^{-2}\), the added dataset \(\mathbb{A}\) is empty, and the loss difference \(\Delta L\) is equal to infinity at initialization. The training is performed for a fixed number of epochs (lines 2-5). Then the max violation backpropagation starts to identify worth-learning data (lines 6 and 7) by using the training data as the initial points (line 8) and updating \(\mathbf{S^{\prime}}_{\text{ifs}}^{\text{G}}\) and \(\mathbf{V^{\prime}}_{\text{ifs}}\) until \(\Delta L\) is less than \(\epsilon\) (lines 9-12), which indicates that \(loss_{\text{max}}\) has converged.
```
0: Input: \(\mathbb{D}_{\text{l}}=\left(\hat{\mathbf{S}}^{\text{L}},\hat{\mathbf{V}},\hat{\mathbf{S}}^{\text{G}}\right)\)
   Initialization: \(\eta\leftarrow 10^{-3}\), \(\epsilon\leftarrow 10^{-2}\), \(\mathbb{A}\leftarrow\varnothing\), \(\Delta L\leftarrow\infty\)
1: repeat
2:   for epoch \(k=0,1,\ldots\) do
3:     Train the NN with \(loss\) in Eq. (10):
4:     \(w\gets w-\eta\nabla loss\)
5:   end for
6:   while \(\Delta L\geq\epsilon\) do
7:     Identify data with \(loss_{\text{max}}\) in Eq. (13):
8:     \(\mathbf{S^{\prime}}_{\text{ifs}}^{\text{G}},\ \mathbf{V^{\prime}}_{\text{ifs}}\leftarrow\hat{\mathbf{S}}^{\text{G}},\ \hat{\mathbf{V}}\)
9:     \(\mathbf{S^{\prime}}_{\text{ifs}}^{\text{G}}\leftarrow\mathbf{S^{\prime}}_{\text{ifs}}^{\text{G}}+\eta\nabla loss_{\text{max}}\)
10:    \(\mathbf{V^{\prime}}_{\text{ifs}}\leftarrow\mathbf{V^{\prime}}_{\text{ifs}}+\eta\nabla loss_{\text{max}}\)
11:    \(\Delta L\leftarrow\left|loss_{\text{max},i}-loss_{\text{max},i-100}\right|\)
12:  end while
13:  \(\{\mathbf{Vio}_{\text{phm},N}\}\leftarrow f_{\text{filter}}(\mathbf{Vio}_{\text{phm},N}\geq\xi)\)
14:  Collect \(\{\mathbf{S}_{\text{ifs}}^{\text{L}}\}\) corresponding to \(\{\mathbf{Vio}_{\text{phm},N}\}\) based on the novel NN in Fig. 8
15:  Calculate \(\{\hat{\mathbf{V}},\hat{\mathbf{S}}^{\text{G}}\}\) corresponding to \(\{\mathbf{S}_{\text{ifs}}^{\text{L}}\}\) using numerical solvers
16:  \(\mathbb{A}\leftarrow\{\mathbf{S}_{\text{ifs}}^{\text{L}},\hat{\mathbf{V}},\hat{\mathbf{S}}^{\text{G}}\}\)
17:  \(\mathbb{D}_{\text{l}}\leftarrow\mathbb{D}_{\text{l}}\cup\mathbb{A}\)
18: until \(\mathbb{A}\) is \(\varnothing\)
```
**Algorithm 1** Training process of the physical-model-integrated NN OPF solver with worth-learning data generation.
### _Efficiency and convergence of the proposed method_
Unlike general training processes for conventional NNs, the proposed physical-model-integrated NN with worth-learning data generation adopts an iterative training process. It iteratively checks the NN's generalization to the input's feasible space by identifying worth-learning data, as shown in Fig. 3 and Algorithm 1. This difference introduces two critical questions. 1) Efficiency: is the process of identifying worth-learning data computationally efficient? 2) Convergence: is the training set representative of the whole input space after the iterations? In terms of computational efficiency, the theoretical analysis (detailed in Appendix A) shows that it takes no more than 0.08 s to find one sample, which brings little computational burden into the training process. According to the experimental results, the average time for finding one sample is 0.056 s. In terms of convergence, we prove in Appendix B that the training set gradually comes to represent the whole input space, because the number of worth-learning samples identified converges to zero after a finite number of iterations.
Fig. 8: The novel NN for max violation backpropagation by integrating physical-model-integrated NN with the _input feasible set module_.
## IV Numerical Experiments
The proposed method is evaluated using the IEEE 12-bus, 14-bus, 30-bus, 57-bus, and 118-bus systems. The ground-truth datasets are constructed using PANDAPOWER based on a primal-dual interior point algorithm.
### _The efficiency of worth-learning data generation_
As shown in Algorithm 1, the proposed worth-learning data generation (lines 6-12) is the second loop in one iteration (lines 1-18), and the number of initial data points for the generation varies with iterations (lines 8, 15-17). To evaluate the efficiency of the worth-learning data generation, we conduct an experiment on the IEEE 57-bus system in three different iterations to quantitatively measure how much time it takes to finish one worth-learning data generation loop. The time consumption of the data-generation loops in the three different iterations is illustrated in Fig. 9. The x-axis is the number of times the codes are repeated (lines 6-12) divided by 100, which represents the time consumed in one data generation loop; the y-axis is the violation degree. The three lines converge to the terminal stage within 4000 times. The trends are similar: they increase very quickly at first (with 100 epochs) and then approach the local maximum slowly (with 2900-3900 epochs). The inflection points on the three lines are \((1,7228)\), \((1,9065)\), and \((1,5841)\).
In the three iterations, 300, 500, and 800 new data samples are identified. Each data-generation loop takes about 30 s on average to run its 3000-4000 repetitions. Hence, one worth-learning data sample costs \((30\times 3)/(300+500+800)\approx 0.056\) s, which introduces little computational burden into the training process compared to the other steps in Algorithm 1. For example, each label value calculated by numerical solvers costs around 1 s (line 14), and the NN training on a dataset with 1100 samples costs around 600 s (lines 2-5). In conclusion, the numerical experiment verifies that the worth-learning data generation brings little computational burden to the training process.
Furthermore, we list the time consumption comparison of the conventional and proposed training processes in Table I, where the conventional training process uses simple random sampling in place of the data generation loop (lines 6-12) in Algorithm 1. By comparing the time consumption of the two methods, we can conclude that the training time of the proposed method only increases by 4%-8%. Hence, these experiments validate that the proposed worth-learning data generation is computationally efficient.
### _Reliability and optimality of the proposed solver_
To validate the superiority of the proposed NN OPF solver (denoted by **Proposed NN**), we compare it with two benchmarks: 1) **B1 NN**, which adopts the conventional loss function and NN model (MLP) with a training dataset generated by simple random sampling; 2) **B2 NN**, which adopts the proposed loss function and physical-model-integrated model with a training dataset generated by simple random sampling.
A dedicated test set, disjoint from the training datasets above, is created to examine the models fairly. The test set has 600 samples, produced by uniformly sampling 200 points in \([80\%,120\%]\) of the nominal value of one load, repeated three times. In the three repetitions, the other loads are fixed at light (\(80\%\times\) nominal value), nominal (\(100\%\times\) nominal value), and heavy (\(120\%\times\) nominal value) load conditions. The swept load has the largest nominal value so as to cover a large region of the input space. Under these settings, the test set contains much data "unseen" by the models.
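For reproducibility, the test-set construction can be sketched as follows; the nominal load vector and the swept index are placeholders for the actual case data.

```python
import numpy as np

def build_test_set(nominal, swept_idx, n_points=200, seed=0):
    """Sweep one load over [80%, 120%] of its nominal value, three times,
    with all other loads fixed at light/nominal/heavy conditions."""
    rng = np.random.default_rng(seed)
    samples = []
    for level in (0.8, 1.0, 1.2):            # light / nominal / heavy background
        base = nominal * level
        for v in rng.uniform(0.8, 1.2, n_points) * nominal[swept_idx]:
            s = base.copy()
            s[swept_idx] = v                 # overwrite the swept load
            samples.append(s)
    return np.stack(samples)                 # 3 x n_points = 600 samples

loads = build_test_set(np.array([90.0, 60.0, 45.0, 30.0, 20.0]), swept_idx=0)
print(loads.shape)  # (600, 5)
```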
The reliability of the NN OPF solvers is evaluated by the constraint violation degrees on all test data. The optimality loss is evaluated by the relative error between predicted results and label values. For a fair comparison, the three methods all stop their training processes when the value of \(||\hat{\mathbf{V}}-\mathbf{V_{\text{NN}}}||_{1}\) is less than \(2\times 10^{-4}\). In view of the iterative training process, the performance of the three solvers is studied with increasing training data, and the initial NNs are identical because they are trained on an initial dataset with \(N\) samples.
The results are statistically analyzed by creating the box plots displayed in Fig. 10. The violation degrees and optimality losses of the NNs trained by the three methods gradually converge to their terminal stages. **Proposed NN** converges the fastest, **B2 NN** the second fastest, and **B1 NN** the slowest.
In Figs. 10(a) to 10(c), the comparison of the final violation degrees gives notable results in the three cases. Specifically, the median values in the three cases are 7, 15, and 75 for **B1 NN**; 6, 12.5, and 60 for **B2 NN**; and 3.2, 6.1, and 25 for **Proposed NN**, respectively. Comparing **B1 NN** with **B2 NN**, the novel loss function brings a 19% reduction of the violation degree on average; comparing **B2 NN** with **Proposed NN**, the proposed training-data generation method introduces a further 50% reduction on average. Moreover, the height of the last box in each subfigure reflects the robustness of each solver, and **Proposed NN** has the smallest height in all three cases, which indicates that the worth-learning data generation improves reliability when encountering "unseen" data from the feasible region.

Fig. 9: Time consumption of the worth-learning data generation in three different iterations. The number of times the code in the data generation loop is repeated (x-axis) reflects the time consumed in one loop; the violation degrees (y-axis) quickly converge to their terminal stages.
The comparison of optimality losses is similar to that of violation degrees, as illustrated in Figs. 10(d) to 10(f). The **Proposed NN** method achieves the best results in all three cases, with final median optimality losses of 0.6%, 0.5%, and 0.3%, respectively. Relative to **Proposed NN**, the optimality losses of **B2 NN** increase by 150%, 66%, and 360%, and those of **B1 NN** by 142%, 167%, and 460%, in the three cases.
In conclusion, the proposed physical-model-integrated NN OPF solver with worth-learning data generation improves the generalization of NN models compared with conventional NN solvers. Specifically, the proposed method reduces constraint violations and optimality losses in the results by over 50% on average.
### _Comparison with numerical solvers_
To further evaluate the capability of the proposed method, the next experiment compares it with the classical AC-OPF solver based on the primal-dual interior-point algorithm and the classical DC-OPF solver with a linear approximation of the power flow equations. The classical AC-OPF solver produces the optimal solutions used as ground-truth values, and the DC-OPF solver is a widely used approximation in the power industry. The test set is the same as that in Section IV-B. The performance of the three methods is evaluated by the following metrics: 1) the average time consumed to solve one OPF problem; 2) the average constraint violation degree \(\mathbf{Vio}_{\text{phm}}\), which is calculated by Eqs. (8) and (9) for the two numerical solvers; and 3) the average relative error of dispatch costs. These three metrics are denoted as Time (ms), Vio. (MW), and Opt. (%), respectively.
The results are tabulated in Table II. The bottom row of the table shows the average results over the three cases. As shown, the proposed method achieves high computational efficiency: it is at least three orders of magnitude faster than the DC-OPF solver and four orders of magnitude faster than the AC-OPF solver. Furthermore, the method also yields much lower constraint violations and optimality losses than the DC-OPF solver. The average Vio. (MW) and Opt. (%) of the proposed solver are only 10.882 and 0.462, which are 44% and 18% of those of the DC-OPF solver, respectively.
### _Interpretation of worth-learning data generation_
This subsection interprets why the worth-learning data generated by the proposed method improve the representativeness of the training dataset. The proposed worth-learning data generation method is compared with the conventional simple random sampling method. Without loss of generality, the experiment is conducted on the 14-bus system. Starting from an identical initial dataset, the conventional and proposed methods each generate 100 samples per step, for 8 steps. To visualize the representativeness, we draw the distribution of these high-dimensional training samples using the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm [29, 30], a statistical method for visualizing high-dimensional data by giving each data point a location in a two- or three-dimensional map.
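A minimal sketch of this visualization with scikit-learn is given below; the two arrays are random placeholders for the actual training sets produced by the two sampling strategies.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
random_set = rng.uniform(0.8, 1.2, size=(900, 28))   # stand-in: simple random sampling
worth_set = rng.uniform(0.7, 1.3, size=(900, 28))    # stand-in: worth-learning data
# Embed both sets jointly so their 2-D locations are directly comparable.
emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(
    np.vstack([random_set, worth_set]))
# emb[:900] and emb[900:] can then be scatter-plotted as in Fig. 11.
```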
The reduced-dimensional data distributions of the conventional and proposed methods are shown in Fig. 11. In Fig. 11(a), the data are produced by the simple random sampling method, and their distribution is concentrated in a "\(\nabla\)"-shaped region, which means the probability of sampling in this region is high. Furthermore, the new data added in each step either overlap with existing data or fill in the intervals between them. The new data overlapping with existing data are redundant for NN training, and the data filling in the intervals may also be redundant when those blanks are already generalized well by the trained NN model. In contrast, as shown in Fig. 11(b), the new data generated by the proposed method in each step hardly overlap with existing data and usually lie outside the region covered by the initial data. These new data enlarge the area covered by the training set, so the training set better represents the input feasible region. This explains the effectiveness of the proposed worth-learning data generation method.

Fig. 10: The violation degree and optimality loss of the results of the NNs trained by the three methods change with the number of training data in different cases: (a), (d) IEEE 30-bus; (b), (e) IEEE 57-bus; (c), (f) IEEE 118-bus.
## V Conclusion
This study proposes an AC-OPF solver based on a physical-model-integrated NN with worth-learning data generation to produce reliable solutions efficiently. To the best of our knowledge, this is the first study that has addressed the generalization problem of NN OPF solvers regarding the representativeness of training datasets. The physical-model-integrated NN is designed by integrating an MLP and an OPF-model module. This specific structure outputs not only the optimal decision variables of the OPF problem but also the constraint violation degree. Based on this NN, the worth-learning data generation method can identify feasible training samples that are not well generalized by the NN. Accordingly, by iteratively applying this method and including the newly identified worth-learning data samples in the training set, the representativeness of the training set can be significantly enhanced.
The theoretical analysis shows that the method brings little computational burden into the training process and can make the models generalize over the feasible region. Experimental results show that the proposed method leads to over a 50% reduction of both constraint violations and optimality loss compared to conventional NN solvers. Furthermore, the computation speed of the proposed method is three orders of magnitude faster than that of the DC-OPF solver.
## Appendix A Computational Efficiency of Worth-Learning Data Generation
To analyze the computational complexity of the proposed NN model with worth-learning data generation, we adopt a widely used measure, the number of floating-point operations (FLOPs) during the NN model's forward-backward propagation. The total FLOPs of a single layer of a fully-connected NN model can be calculated as follows:
\[\textbf{Forward}:\quad\text{FLOPs}=(2I-1)\times O, \tag{14a}\] \[\textbf{Backward}:\quad\text{FLOPs}=(2I-1)\times O, \tag{14b}\]
where \(I\) is the dimension of the layer's input, and \(O\) is the dimension of its output.
To approximate an OPF mapping based on a 57-bus system, the proposed NN model uses the following structure: \(84\times 1000\times 2560\times 2560\times 5120\times 2000\times 114\). According to Eq. (14), the total FLOPs of the NN per forward-backward process is around \(1\times 10^{8}\). The GPU used in the experiment is the Quadro P6000, and its performance is 12.2 TFLOP/s (\(1\) TFLOP/s \(=10^{12}\) FLOP/s). Using the GPU, we can perform the forward-backward process \(1.22\times 10^{5}\) times per second.
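The estimate above can be reproduced with the following short calculation, which applies Eq. (14) layer by layer to the listed architecture (the GPU peak throughput is taken from the text):

```python
# FLOPs of the 57-bus MLP per forward-backward pass, per Eqs. (14a)-(14b).
layers = [84, 1000, 2560, 2560, 5120, 2000, 114]
per_pass = sum((2 * i - 1) * o for i, o in zip(layers[:-1], layers[1:]))  # forward
total = 2 * per_pass                    # forward + backward
gpu_flops = 12.2e12                     # Quadro P6000 peak, FLOP/s
print(f"{total:.2e} FLOPs per pass")    # ~1e8, as stated in the text
print(f"{gpu_flops / total:.2e} forward-backward passes per second")  # ~1e5
```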
For the worth-learning data generation in Algorithm 1, the forward process is to calculate \(\mathbf{Vio}_{\text{fs}}\) and \(\mathbf{Vio}_{\text{phm}}\), and the backward process is to update \(\mathbf{S}^{\prime}_{\text{fs}}\) and \(\mathbf{V}^{\prime}_{\text{fs}}\) by the gradients. We concatenate \(\mathbf{S}^{\prime}_{\text{fs}}\) and \(\mathbf{V}^{\prime}_{\text{fs}}\) as a vector \(\mathbf{x}\), and we suppose the range of each item in \(\mathbf{x}\) is \([0,10]\) and that \(\mathbf{x}\) changes by \(10^{-3}\) in each update step. Varying from 0 to 10 thus costs \(10^{4}\) forward-backward processes. In other words, the algorithm can update at least \(1.22\times 10^{5}/10^{4}\approx 12\) samples in 1 s, so finding one sample costs no more than about 0.08 s.
In practice, there is a slight error between the actual speed in experiments and the theoretical analysis. According to the numerical experiments in Section IV-A, an average of 533 samples are found in 30 s. The average consumption time for identifying one sample is 0.056 s.
Fig. 11: Reduced-dimensional distributions of the training datasets generated by two different methods.
From the analysis presented above, we can conclude that the proposed worth-learning data generation method brings little computational burden into the training process.
## Appendix B Convergence of Worth-Learning Data Generation
This section verifies that the proposed NN with worth-learning data generation can generalize to the whole feasible set. NN models are continuous functions because both linear layers and activation functions are continuous. We define a critical violation value \(\epsilon\) that divides the input space into two regions: the covered region (where the \(\mathbf{Vio_{\text{phm}}}\) values of all points are less than or equal to \(\epsilon\)) and the uncovered region (where the \(\mathbf{Vio_{\text{phm}}}\) values of all points are greater than \(\epsilon\)). The boundaries of the two regions consist of the points whose \(\mathbf{Vio_{\text{phm}}}\) values are approximately equal to \(\epsilon\). Using these points as initial points, we can identify points with local maxima of violation in the uncovered region by max-violation backpropagation.
Next, these new points \(\{\mathbf{x}_{1}\}\) (the red points) are added to the training set. After training, the neighborhood of these new points \(\{\mathbf{x}_{1}\}\) would be covered. Due to the generalization of NNs, most points in the area \(\mathbb{S}_{\text{add}}=\{\mathbf{x}\mid\mathbf{x}=a\,\mathbf{x}_{0}^{\text{ini}}+(1-a)\,\mathbf{x}_{1},\ 0\leq a\leq 1\}\) would also be covered, where \(\{\mathbf{x}_{0}^{\text{ini}}\}\) are the initial points on the boundaries (the black points), as shown in Fig. 12.
Therefore, the area \(\mathbb{S}_{\text{add}}\) is subtracted from the uncovered region. Through iterations, the uncovered region is emptied, and the number of added samples converges to zero.
In practice, we choose the training set instead of the boundary points as initial points for convenience. Although some samples in the training set are not at boundaries, they are eliminated by the filter function, as shown in Algorithm 1. Therefore, the replacement of the boundary points has no impact on the results.
|
2310.01618 | Operator Learning Meets Numerical Analysis: Improving Neural Networks
through Iterative Methods | Deep neural networks, despite their success in numerous applications, often
function without established theoretical foundations. In this paper, we bridge
this gap by drawing parallels between deep learning and classical numerical
analysis. By framing neural networks as operators with fixed points
representing desired solutions, we develop a theoretical framework grounded in
iterative methods for operator equations. Under defined conditions, we present
convergence proofs based on fixed point theory. We demonstrate that popular
architectures, such as diffusion models and AlphaFold, inherently employ
iterative operator learning. Empirical assessments highlight that performing
iterations through network operators improves performance. We also introduce an
iterative graph neural network, PIGN, that further demonstrates benefits of
iterations. Our work aims to enhance the understanding of deep learning by
merging insights from numerical analysis, potentially guiding the design of
future networks with clearer theoretical underpinnings and improved
performance. | Emanuele Zappala, Daniel Levine, Sizhuang He, Syed Rizvi, Sacha Levy, David van Dijk | 2023-10-02T20:25:36Z | http://arxiv.org/abs/2310.01618v1 | # Operator Learning Meets Numerical Analysis: Improving Neural Networks through Iterative Methods
###### Abstract
Deep neural networks, despite their success in numerous applications, often function without established theoretical foundations. In this paper, we bridge this gap by drawing parallels between deep learning and classical numerical analysis. By framing neural networks as operators with fixed points representing desired solutions, we develop a theoretical framework grounded in iterative methods for operator equations. Under defined conditions, we present convergence proofs based on fixed point theory. We demonstrate that popular architectures, such as diffusion models and AlphaFold, inherently employ iterative operator learning. Empirical assessments highlight that performing iterations through network operators improves performance. We also introduce an iterative graph neural network, PIGN, that further demonstrates benefits of iterations. Our work aims to enhance the understanding of deep learning by merging insights from numerical analysis, potentially guiding the design of future networks with clearer theoretical underpinnings and improved performance.
## 1 Introduction
Deep neural networks have become essential tools in domains such as computer vision, natural language processing, and physical system simulations, consistently delivering impressive empirical results. However, a deeper theoretical understanding of these networks remains an open challenge. This study seeks to bridge this gap by examining the connections between deep learning and classical numerical analysis.
By interpreting neural networks as operators that transform input functions to output functions, discretized on some grid, we establish parallels with numerical methods designed for operator equations. This approach facilitates a new iterative learning framework for neural networks, inspired by established techniques like the Picard iteration.
Our findings indicate that certain prominent architectures, including diffusion models, AlphaFold, and Graph Neural Networks (GNNs), inherently utilize iterative operator learning (see Figure 1). Empirical evaluations show that adopting a more explicit iterative approach in these models can enhance performance. Building on this, we introduce the Picard Iterative Graph Neural Network (PIGN), an iterative GNN model, demonstrating its effectiveness in node classification tasks.
In summary, our work:
* Explores the relationship between deep learning and numerical analysis from an operator perspective.
* Introduces an iterative learning framework for neural networks, supported by theoretical convergence proofs.
* Evaluates the advantages of explicit iterations in widely-used models.
* Presents PIGN and its performance metrics in relevant tasks.
* Provides insights that may inform the design of future neural networks with a stronger theoretical foundation.
The remainder of this manuscript is organized as follows: We begin by delving into the background and related work to provide the foundational understanding for our contributions. This is followed by an introduction to our theoretical framework for neural operator learning. Subsequently, we delve into a theoretical exploration of how various prominent deep learning frameworks undertake operator learning. We conclude with empirical results underscoring the advantages of our proposed framework.
## 2 Background and Related Work
**Numerical Analysis.** Numerical analysis is rich with algorithms designed for approximating solutions to mathematical problems. Among these, the Banach-Caccioppoli theorem is notable: it underlies the iterative solution of operator equations in Banach spaces. The iterations, often called fixed-point iterations or Picard iterations, allow one to solve an operator equation approximately, in an iterative manner. Given an operator \(T\), this approach seeks a function \(u\) such that \(T(u)=u\), called a fixed point, starting with an initial guess and refining it iteratively.
The use of iterative methods has a long history in numerical analysis for approximate solutions of intractable equations, for instance those involving nonlinear operators. For example, integral equations, e.g. of Urysohn and Hammerstein type, arise frequently in physics and engineering applications, and their study has long been treated as a fixed point problem [13, 14, 15].

Figure 1: Overview of iterative framework. (A) Popular architectures which incorporate iterative components in their framework. (B) Convergence behavior of an iterative solver. (C) Behavior of iterative solver converging to a fixed point in the data manifold.
Convergence to fixed points can be guaranteed under contractivity assumptions by the Banach-Caccioppoli fixed point theorem [1]. Iterative solvers have also been crucial for partial differential equations and many other operator equations [16].
**Operator Learning.** Operator learning is a class of deep learning methods whose optimization objective is to learn an operator between function spaces. Examples and an extended literature can be found in [17, 18]. The appeal of such an approach is that, by mapping functions to functions, we can model dynamics datasets and leverage the theory of operators. When the operator learned is defined through an equation, e.g. an integral equation as in [19], along with the training procedure we also need a way of solving said equation, i.e. we need a solver. For highly nonlinear problems, when deep learning is not involved, these solvers often utilize some iterative procedure, as in [16]. Our approach brings the generality of iterative methods into deep learning by learning operators between function spaces through the iterative procedures used in solving nonlinear operator equations.
**Transformers.** Transformers ([23, 17, 18]), originally proposed for natural language processing tasks, have recently achieved state-of-the-art results in a variety of computer vision applications ([19, 18, 16, 17, 20, 2]). Their self-attention mechanisms make them well-suited for tasks beyond just sequence modeling. Notably, transformers have been applied in an iterative manner in some contexts, such as the "recycling" technique used in AlphaFold2 [15].
**AlphaFold.** DeepMind's AlphaFold [19] is a protein structure prediction model, which was significantly improved in [15] with the introduction of AlphaFold2 and further extended to protein complex modeling in AlphaFold-Multimer [14]. AlphaFold2 employs an iterative refinement technique called "recycling", which recycles the predicted structure through its entire network. The number of iterations was increased from 3 to 20 in AF2Complex [1], where improvement was observed. An analysis of DockQ scores with increased iterations can be found in [1]. We only look at monomer targets, where DockQ scores do not apply, and focus on global distance test (GDT) scores and root-mean-square deviation (RMSD).
**Diffusion Models.** Diffusion models were first introduced in [13] and were shown to have strong generative capabilities in [19] and [20]. They are motivated by diffusion processes in non-equilibrium thermodynamics [14] related to Langevin dynamics and the corresponding Kolmogorov forward and backward equations. Their connection to stochastic differential equations and numerical solvers is highlighted in [13, 14, 15, 16]. We focus on the performance of diffusion models at different numbers of timesteps used during training, including an analysis of FID [12] scores.
**Graph Neural Networks (GNNs).** GNNs are designed to process graph-structured data through iterative mechanisms. Through a process called message passing, they repeatedly aggregate and update node information, refining their representations. The iterative nature of GNNs was explored in [12], where the method combined repeated applications of the same GNN layer using confidence scores. Although this shares similarities with iterative techniques, our method distinctly leverages fixed-point theory, offering specific guarantees and enhanced performance, as detailed in Section 5.1.
## 3 Iterative Methods for Solving Operator Equations
In the realm of deep learning and neural network models, direct solutions to operator equations often become computationally intractable. This section offers a perspective that is applicable to
machine learning, emphasizing the promise of iterative methods for addressing such challenges in operator learning. We particularly focus on how the iterative numerical methods converge and their application to neural network operator learning. These results will be used in the Appendix to derive theoretical convergence guarantees for iterations on GNNs and Transformer architectures, see Appendix A.
### Setting and Problem Statement
Consider a Banach space \(X\). Let \(T:X\longrightarrow X\) be a continuous operator. Our goal is to find solutions to the following equation:
\[\lambda T(x)+f=x, \tag{1}\]
where \(f\in X\) and \(\lambda\in\mathbb{R}-\{0\}\) is a nontrivial scalar. A solution to this equation is a fixed point \(x^{*}\) for the operator \(P=\lambda T+f\):
\[\lambda T(x^{*})+f=x^{*}. \tag{2}\]
### Iterative Techniques
It is clear that for arbitrary nonlinear operators, solving Equation (1) is not feasible. Iterative techniques such as Picard or Newton-Kantorovich iterations become pivotal. These iterations utilize a function \(g\) and progress as:
\[x_{n+1}=g(T,x_{n}). \tag{3}\]
Central to our discussion is the interplay between iterative techniques and neural network operator learning. We highlight the major contribution of this work: By using network operators iteratively during training, convergence to network fixed points can be ensured. This approach uniquely relates deep learning with classical numerical techniques.
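The following is a minimal sketch of this iteration for the fixed-point form of Equation (1), with a toy contractive operator; in our setting \(T\) would instead be a (discretized) neural network operator.

```python
import numpy as np

def picard(T, f, lam, tol=1e-8, max_iter=1000):
    """Iterate y_{n+1} = f + lam * T(y_n), starting from y_0 = f,
    until successive iterates are within tol (cf. the proof of Theorem 1)."""
    y = f.copy()
    for _ in range(max_iter):
        y_next = f + lam * T(y)
        if np.linalg.norm(y_next - y) < tol:
            return y_next
        y = y_next
    return y

# tanh is 1-Lipschitz, so with lam = 0.5 we have |lam| * k < 1 and convergence:
f = np.array([1.0, -2.0])
y = picard(np.tanh, f, lam=0.5)
print(np.allclose(y, f + 0.5 * np.tanh(y)))  # True: y solves Equation (1)
```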
### Convergence of Iterations and Their Application
A particular case of great interest is when the operator \(T\) takes an integral form and \(X\) represents a function space, our framework captures the essence of an integral equation (IE). By introducing \(P_{\lambda}(x)=\lambda T(x)+f\), we can rephrase our problem as a search for fixed points.
We now consider the problem of approximating a fixed point of a nonlinear operator. The results of this section are applied to various deep learning settings in Appendix A to obtain theoretical guarantees for the iterative approaches.
**Theorem 1**.: _Let \(\epsilon>0\) be fixed, and suppose that \(T\) is Lipschitz with constant \(k\). Then, for any \(\lambda\) such that \(|\lambda|k<1\), and independently of the choice of \(f\), we can find \(y\in X\) such that \(||\lambda T(y)+f-y||<\epsilon\)._
Proof.: Let us set \(y_{0}:=f\) and \(y_{n+1}=f+\lambda T(y_{n})\) and consider the term \(||y_{1}-y_{0}||\). We have
\[||y_{1}-y_{0}||=||\lambda T(y_{0})||=|\lambda|||T(y_{0})||.\]
For an arbitrary \(n>1\) we have
\[||y_{n+1}-y_{n}||=||\lambda T(y_{n})-\lambda T(y_{n-1})||\leq k|\lambda|||y_{ n}-y_{n-1}||.\]
Therefore, applying the same procedure to \(y_{n}-y_{n-1}=T(y_{n-1})-T(y_{n-2})\) until we reach \(y_{1}-y_{0}\), we obtain the inequality
\[||y_{n+1}-y_{n}||\leq|\lambda|^{n}k^{n}||T(y_{0})||.\]
Since \(|\lambda|k<1\), the term \(|\lambda|^{n}k^{n}||T(y_{0})||\) is eventually smaller than \(\epsilon\), for all \(n\geq\nu\) for some choice of \(\nu\). Defining \(y:=y_{\nu}\) for such \(\nu\) gives the following
\[||\lambda T(y_{\nu})+f-y_{\nu}||=||y_{\nu+1}-y_{\nu}||<\epsilon.\]
The following now follows easily.
**Corollary 1**.: _Consider the same hypotheses as above. Then Equation 1 admits a solution for any choice of \(\lambda\) such that \(|\lambda|k<1\)._
Proof.: From the proof of Theorem 1 it follows that the sequence \(y_{n}\) is a Cauchy sequence. Since \(X\) is Banach, then \(y_{n}\) converges to \(y\in X\). By continuity of \(T\), \(y\) is a solution to Equation 1.
Recall that for nonlinear operators, continuity and boundedness are not equivalent conditions.
**Corollary 2**.: _If in the same situation above \(T\) is also bounded, then the choice of \(\nu\) of the iteration can be chosen uniformly with respect to \(f\), for a fixed choice of \(\lambda\)._
Proof.: From the proof of Theorem 1, we have that
\[||y_{n+1}-y_{n}||\leq|\lambda|^{n}k^{n}||T(y_{0})||=|\lambda|^{n}k^{n}||T(f)||.\]
If \(T\) is bounded by \(M\), then the previous inequality is independent of the element \(f\in X\). Let us choose \(\nu\) such that \(|\lambda|^{\nu}k^{\nu}<\epsilon/M\). Then, suppose \(f\) is an arbitrary element of \(X\). Initializing \(y_{0}=f\), \(y_{\nu}\) will satisfy \(||\lambda T(y_{\nu})+f-y_{\nu}||<\epsilon\), for any given choice of \(\epsilon\).
The following result is classic, and its proof can be found in several sources. See for instance Chapter 5 in [1].
**Theorem 2**.: _(Banach-Caccioppoli fixed point theorem) Let \(X\) be a Banach space, and let \(T:X\longrightarrow X\) be contractive mapping with contractivity constant \(0<k<1\). Then, \(T\) has a unique fixed point, i.e. the equation \(T(x)=x\) has a unique solution \(u\) in \(X\). Moreover, for any choice of \(u_{0}\), \(u_{n}=T^{n}(u_{0})\) converges to the solution with rate of convergence_
\[||u_{n}-u|| < \frac{k^{n}}{1-k}||u_{0}-u_{1}||, \tag{4}\] \[||u_{n}-u|| < \frac{k}{1-k}||u_{n-1}-u_{n}||. \tag{5}\]
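As a concrete check of the a priori bound (4), the sketch below iterates the classical contraction \(T(u)=\cos(u)\) on \([0,1]\), whose contractivity constant is \(k=\sin(1)\approx 0.84\) and whose fixed point is \(u^{*}\approx 0.7391\):

```python
import math

k = math.sin(1.0)                      # contractivity constant of cos on [0, 1]
u0 = 0.0
u1 = math.cos(u0)
u_star = 0.7390851332151607            # fixed point of cos (Dottie number)
u = u0
for n in range(1, 31):
    u = math.cos(u)                    # u_n = T^n(u_0)
    bound = k**n / (1 - k) * abs(u0 - u1)   # a priori bound, Eq. (4)
    assert abs(u - u_star) < bound     # the bound holds at every step
print(abs(u - u_star))                 # ~5e-6 after 30 iterations
```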
The possibility of solving Equation 1 with different choices of \(f\) is particularly important in the applications that we intend to consider, as \(f\) is interpreted as the initialization of the model. While various models implicitly employ iterative procedures for operator learning tasks, they lack a general theoretical perspective that justifies their approach. Several other models can be modified using iterative approaches to achieve better performance with fewer parameters. We will give experimental results in this regard to validate the practical benefit of our theoretical framework.
While the iterations considered so far repeat an identical fixed procedure at every step, more general iterative procedures in which the step changes between iterations are also widespread, and the step can be chosen adaptively.
### Applications
**Significance and Implications.** Our results underscore the existence of a solution for Equation 1 under certain conditions. Moreover, when the operator \(T\) is bounded, our iterative method showcases uniform convergence. It follows that, by ensuring that the operators approximated by deep neural network architectures are contractive, we can introduce an iterative procedure that converges to the fixed point as in Equation 2.
**Iterative Methods in Modern Deep Learning.** In contemporary deep learning architectures, especially those like Transformers, Stable Diffusion, AlphaFold, and Neural Integral Equations, the importance of operator learning is growing. However, these models, despite employing iterative techniques, often lack the foundational theoretical perspective that our framework provides. We will subsequently present experimental results that vouch for the efficacy and practical advantages of our theoretical insights.
**Beyond Basic Iterations.** While we have discussed iterations with fixed procedures, it is imperative to highlight that more general iterative procedures exist, and they can adapt dynamically. Further, there exist methods to enhance the rate of convergence of iterative procedures, and our framework is compatible with them.
## 4 Neural Network Architectures as Iterative Operator Equations
In this section, we explore how various popular neural network architectures align with the framework of iterative operator learning. By emphasizing this operator-centric view, we unveil new avenues for model enhancements. Notably, shifting from implicit to explicit iterations can enhance model efficacy, e.g. through shared parameters across layers. A detailed discussion of the various methodologies given in this section is reported in Appendix B. In the appendix, we investigate architectures such as neural integral equations, transformers, AlphaFold for protein structure prediction, diffusion models, graph neural networks, autoregressive models, and variational autoencoders. We highlight the iterative numerical techniques underpinning these models, emphasizing potential advancements via methods like warm restarts and adaptive solvers. Empirical results substantiate the benefits of this unified perspective in terms of accuracy and convergence speed.
**Diffusion models.** Diffusion models, especially denoising diffusion probabilistic models (DDPMs), capture a noise process and its reverse (denoising) trajectory. While score matching with Langevin dynamics models (SMLDs) is relevant, our focus is primarily on DDPMs for their simpler setup. These models transition from complex pixel-space distributions to more tractable Gaussian distributions. Notably, increasing iterations can enhance the generative quality of DDPMs, a connection we wish to deepen. The reverse process can be seen as an iterative scheme in which the step changes between iterations, as in the methods found in [20]. This operator setting and its iterative interpretation are described in detail in Appendix B.1.
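For reference, the reverse (denoising) pass of a DDPM is itself a plain iteration of the learned operator, as the following sketch shows; `eps_model` is a placeholder for the trained noise-prediction network:

```python
import torch

@torch.no_grad()
def ddpm_sample(eps_model, shape, betas):
    """Ancestral sampling: each reverse step applies the learned denoising
    operator once, so generation is an iteration over the diffusion steps."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                                # start from pure noise
    for t in reversed(range(len(betas))):
        z = torch.randn(shape) if t > 0 else torch.zeros(shape)
        eps = eps_model(x, torch.full((shape[0],), t))    # predicted noise at step t
        x = (x - (1 - alphas[t]) / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = x + betas[t].sqrt() * z                       # stochastic correction
    return x
```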
To empirically explore convergence with iterations in diffusion models, we train 10 different DDPMs with 100-1000 iterations and analyze their training dynamics and perceptual quality. Figure 2 reveals that increasing timesteps improves FID scores of generated images. Additionally, Figure 2 demonstrates a consistent decrease in both training and test loss with more time steps, attributed to the diminished area under the expected KL divergence curve over time (Figure 8). Notably, FID scores decline beyond the point of test loss convergence, stabilizing after approximately 150,000 steps (Figure 8). This behavior indicates robust convergence with increasing iterations.
**AlphaFold.** AlphaFold, a revolutionary protein structure prediction model, takes amino acid sequences and predicts their three-dimensional structure. While the model's intricacies can be found in [17], our primary interest lies in placing AlphaFold within the operator learning context. An input amino acid sequence is processed to yield a multiple sequence alignment (MSA) and a pairwise feature representation. These data are subsequently fed into Evoformers and Structure Modules, which iteratively refine the protein's predicted structure. We can think of the output of the Evoformer model as a pair of functions lying in some discretized Banach space, while the Structure Modules of AlphaFold can be thought of as operators over a space of matrices. This is described in detail in Appendix B.2.
To empirically explore the convergence behavior of AlphaFold as a function of iterations, we applied AlphaFold-Multimer with a range of 0-20 recycles to each of the 29 monomers using ground truth targets from CASP15. Figure 3 presents the summarized results, which show that, while the GDT scores and RMSD improve with more recycles on average, not all individual targets consistently converge, as depicted in Figures 4 and 5. Given that AlphaFold lacks a convergence constraint in its training, its predictions can exhibit variability across iterations.
**Graph Neural Networks.** Graph neural networks (GNNs) excel in managing graph-structured data by harnessing a differentiable message-passing mechanism. This enables the network to assimilate information from neighboring nodes to enhance their representations. We can think of the feature spaces as Banach spaces of functions discretized according to some grid. The GNN architecture can then be thought of as an operator acting on the direct sum of these Banach spaces, where the topological structure of the graph determines how the operator combines information. A detailed description is given in Appendix A.3, where theoretical guarantees for the convergence of the iterations are given, and in Appendix B.3.
**Neural Integral Equations.** Neural Integral Equations (NIEs), and their variant Attentional Neural Integral Equations (ANIEs), draw inspiration from integral equations. Here, an integral operator, determined by a neural network, plays a pivotal role.
Denoting the integrand of the integral operator as \(G_{\theta}\) within an NIE, the equation becomes:
\[\mathbf{y}=f(\mathbf{y},\mathbf{x},t)+\int_{\Omega\times[0,1]}G_{\theta}( \mathbf{y},\mathbf{x},\mathbf{z},t,s)d\mathbf{z}ds\]
To solve such integral equations, one very often uses iterative methods, as done in [23], and training the NIE model consists of finding the parameters \(\theta\) such that the solutions of the corresponding integral equations model the given data. A more detailed discussion of this model is given in Appendix B.4.
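A minimal sketch of such a solver is given below: the integral is approximated by a uniform-grid quadrature and the equation is iterated in Picard fashion. The scalar toy integrand stands in for the network \(G_{\theta}\), and the spatial and temporal arguments are folded into a single grid for brevity.

```python
import torch

def solve_nie(G, f, y0, n_grid=64, n_iter=10):
    """Picard iteration y_{n+1} = f + integral of G(y_n, z) dz on a grid."""
    z = torch.linspace(0.0, 1.0, n_grid)
    y = y0.clone()
    for _ in range(n_iter):
        y = f + G(y, z).mean()        # mean * |domain| (= 1) approximates the integral
    return y

# Toy integrand with a small Lipschitz constant, so the iteration contracts:
G = lambda y, z: 0.3 * torch.sin(y + z)
print(solve_nie(G, f=torch.tensor(0.5), y0=torch.tensor(0.0)))
```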
## 5 Experiments
In this section, we showcase experiments highlighting the advantages of explicit iterations. We introduce a new GNN architecture based on Picard iteration and enhance vision transformers with Picard iterations.

Figure 2: **Left and Middle**: Losses always decrease with more iterations in DDPMs. Training is stable and overfitting never occurs. EMA smoothing with \(\alpha=0.1\) is used for the loss curves to make the differences clearer. **Right**: DDPMs show that FID and loss improve with an increased number of iterations on CIFAR-10. The number of iterations represents the denoising steps during training and inference. All diffusion models, UNets of identical architecture, are trained on CIFAR-10's training dataset with 64-image batches.
### PIGN: Picard Iterative Graph Neural Network
To showcase the benefits of explicit iterations in GNNs, we developed **P**icard **I**teration **G**raph neural **N**etwork (PIGN), a GNN that applies Picard iterations for message passing. We evaluate PIGN against state-of-the-art GNN methods and another iterative approach called IterGNN [THG\({}^{+}\)20] on node classification tasks.
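A minimal sketch of the propagation rule is shown below; the damping factor and iteration count are illustrative, and any weight-shared message-passing layer can play the role of the operator:

```python
import torch

def picard_message_passing(layer, x, edge_index, n_iter=8, lam=0.5):
    """PIGN-style propagation: one weight-shared GNN layer applied as a
    damped Picard iteration, x_{k+1} = (1 - lam) * layer(x_k) + lam * x_k,
    instead of stacking distinct layers."""
    for _ in range(n_iter):
        x = (1 - lam) * layer(x, edge_index) + lam * x
    return x

# e.g., with a PyTorch Geometric backbone:
#   from torch_geometric.nn import GCNConv
#   conv = GCNConv(64, 64)
#   out = picard_message_passing(conv, x, edge_index)
```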
GNNs can suffer from over-smoothing and over-squashing, limiting their ability to capture long-range dependencies in graphs [NHN\({}^{+}\)23]. We assess model performance on noisy citation graphs (Cora and CiteSeer) with added drop-in noise. Drop-in noise involves increasing a percentage \(p\) of the bag-of-words feature values, hindering classification. We also evaluate on a long-range benchmark (LRGB) for graph learning [DRG\({}^{+}\)22].
Table 5 shows PIGN substantially improves accuracy over baselines on noisy citation graphs. The explicit iterative process enhances robustness. Table 1 illustrates PIGN outperforms prior iterative and non-iterative GNNs on the long-range LRGB benchmark, using various standard architectures. Applying Picard iterations enables modeling longer-range interactions.
The PIGN experiments demonstrate the benefits of explicit iterative operator learning. Targeting weaknesses of standard GNN training, PIGN effectively handles noise and long-range dependencies. A theoretical study of convergence guarantees is given in Appendix A.3.
### Enhancing Transformers with Picard Iteration
We hypothesize that many neural network frameworks can benefit from Picard iterations. Here, we empirically explore adding iterations to Vision Transformers. Specifically, we demonstrate the benefits of explicit Picard iteration in transformer models on the task of solving the Navier-Stokes partial differential equation (PDE) as well as on self-supervised masked prediction of images.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **GCN**[15] & **GAT**[15] & **GraphSAGE**[15] \\ \hline w/o iterations & \(0.1510\pm 0.0029\) & \(0.1204\pm 0.0127\) & \(0.3015\pm 0.0032\) \\ IterGNN & \(0.1736\pm 0.0311\) & \(0.1099\pm 0.0459\) & \(0.1816\pm 0.0014\) \\ PIGN (Ours) & \(\mathbf{0.1831\pm 0.0038}\) & \(\mathbf{0.1706\pm 0.0046}\) & \(\mathbf{0.3560\pm 0.0037}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: F1 scores of different models on the standard test split of the LRGB PascalVOC-SP dataset. Rows refer to model frameworks and columns are GNN backbone layers. A budget of 500k trainable parameters is set for each model. Each model is run on the same set of 5 random seeds. The mean and standard deviation are reported. For IterGNN with the GAT backbone, two of the runs produced exploding losses, so the reported statistics include only three runs.
Figure 3: On average, additional iterations enhance AlphaFold-Multimer's performance, though the model does not invariably converge with more iterations. Target-specific trends can be seen in Figures 4 and 5.
We evaluate various Vision Transformer (ViT) architectures [1] along with Attentional Neural Integral Equations (ANIE) [13].
For each model, we perform training and evaluation with different numbers of Picard iterations as described in Section 3.2. We empirically observe improved performance with more iterations for all models, since additional steps help better approximate solutions to the operator equations.
Table 2 shows lower mean squared error on the PDE task for Vision Transformers when using up to three iterations compared to the standard single-pass models. Table 3 shows a similar trend for self-supervised masked prediction of images. Finally, Table 4 illustrates that higher numbers of iterations in ANIE solvers consistently reduce error. Across several transformer-based models and datasets, we observe that, generally, more iterations improve performance.
Overall, these experiments highlight the benefits of explicit iterative operator learning. For transformer-based architectures, repeating model application enhances convergence to desired solutions. Our unified perspective enables analyzing and improving networks from across domains. A theoretical study of the convergence guarantees of the iterations is given in Appendix A.2.
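The iteration used in these experiments reduces to a few lines; as in the Table 3 caption, the operator output and the current iterate are averaged with \(\lambda=1/2\):

```python
import torch

def iterate_operator(T, x, n_iter=3, lam=0.5):
    """Damped Picard iteration x_{i+1} = (1 - lam) * T(x_i) + lam * x_i,
    where T is a transformer (e.g., a ViT) treated as an operator."""
    for _ in range(n_iter):
        x = (1 - lam) * T(x) + lam * x
    return x
```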
## 6 Discussion
We introduced an iterative operator learning framework for neural networks, drawing connections between deep learning and numerical analysis. Viewing networks as operators and employing techniques like Picard iteration, we established convergence guarantees. Our empirical results, exemplified by PIGN, an iterative GNN, as well as an iterative vision transformer, underscore the benefits of explicit iterations in modern architectures.
For future work, a deeper analysis is crucial to pinpoint the conditions necessary for convergence and stability within our iterative paradigm.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & \(N_{\text{iter}}=1\) & \(N_{\text{iter}}=2\) & \(N_{\text{iter}}=3\) \\ \hline ViT & \(0.2472\pm 0.0026\) & \(0.2121\pm 0.0063\) & \(\mathbf{0.0691\pm 0.0024}\) \\ ViTsmall & \(0.2471\pm 0.0025\) & \(0.1672\pm 0.0087\) & \(\mathbf{0.0648\pm 0.0022}\) \\ ViTparallel & \(0.2474\pm 0.0027\) & \(0.2172\pm 0.0066\) & \(\mathbf{0.2079\pm 0.0194}\) \\ ViT3D & \(0.2512\pm 0.0082\) & \(\mathbf{0.2237\pm 0.0196}\) & \(0.2529\pm 0.0079\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: ViT models used to solve a PDE (Navier-Stokes). The mean squared error is reported for each model as the number of iterations varies. A single iteration indicates the baseline ViT model. Higher iterations perform better than the regular ViT (\(N_{\text{iter}}=1\)).
There remain unanswered theoretical elements about dynamics and generalization. Designing network architectures inherently tailored for iterative processes might allow for a more effective utilization of insights from numerical analysis. We are also intrigued by the potential of adaptive solvers that modify the operator during training, as these could offer notable advantages in both efficiency and flexibility.
In summation, this work shines a light on the synergies between deep learning and numerical analysis, suggesting that the operator-centric viewpoint could foster future innovations in the theory and practical applications of deep neural networks.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Model size** & \(N_{\text{iter}}=1\) & \(N_{\text{iter}}=2\) & \(N_{\text{iter}}=4\) & \(N_{\text{iter}}=6\) & \(N_{\text{iter}}=8\) \\ \hline \(1H|1B\) & \(0.0564\pm 0.0070\) & \(0.0474\pm 0.0065\) & \(0.0448\pm 0.0062\) & \(0.0446\pm 0.0065\) & \(\mathbf{0.0442\pm 0.0065}\) \\ \(4H|1B\) & \(0.0610\pm 0.0078\) & \(0.0516\pm 0.0083\) & \(0.0512\pm 0.0070\) & \(0.0480\pm 0.0066\) & \(\mathbf{0.0478\pm 0.0066}\) \\ \(2H|2B\) & \(0.0476\pm 0.0065\) & \(0.0465\pm 0.0067\) & \(0.0458\pm 0.0067\) & \(0.0451\pm 0.0064\) & \(\mathbf{0.0439\pm 0.0062}\) \\ \(4H|4B\) & \(0.0458\pm 0.0062\) & \(0.0461\pm 0.0065\) & \(0.0453\pm 0.0063\) & \(0.0453\pm 0.0061\) & \(\mathbf{0.0445\pm 0.0059}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of ANIE on a PDE (Navier-Stokes) as the number of iterations of the integral equation solver varies, and for different sizes of architecture. Here \(H\) indicates the number of heads and \(B\) indicates the number of blocks (layers). A single iteration means that the integral operator is applied once. As the number of iterations of the solver increases, the performance of the model in terms of mean squared error improves.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & \(N_{\text{iter}}=1\) & \(N_{\text{iter}}=2\) & \(N_{\text{iter}}=3\) \\ \hline ViT (MSE) & \(0.0126\pm 0.0006\) & \(\mathbf{0.0121\pm 0.0006}\) & \(0.0122\pm 0.0006\) \\ ViT (FID) & \(20.0433\) & \(20.0212\) & \(\mathbf{19.2956}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: ViT models trained with a pixel dropout reconstruction objective on CIFAR-10. The ViT architecture contains 12 encoder layers, 4 decoder layers, 3 attention heads in both the encoder and decoder. The embedding dimension and patch size are 192 and 2. The employed loss is \(\text{MSE}((1-\lambda)T(x_{i})+\lambda x_{i},y)\), computed on the final iteration \(N_{\text{iter}}=i\). Images are altered by blacking out 75% of pixels. During inference, iterative solutions are defined as \(x_{i+1}=(1-\lambda)T(x_{i})+\lambda x_{i}\), for \(i\in\{0,1,\ldots N\}\). Here, \(N=2\) and \(\lambda=1/2\). |
2306.04086 | TEC-Net: Vision Transformer Embrace Convolutional Neural Networks for
Medical Image Segmentation | The hybrid architecture of convolution neural networks (CNN) and Transformer
has been the most popular method for medical image segmentation. However, the
existing networks based on the hybrid architecture suffer from two problems.
First, although the CNN branch can capture image local features by using
convolution operation, the vanilla convolution is unable to achieve adaptive
extraction of image features. Second, although the Transformer branch can model
the global information of images, the conventional self-attention only focuses
on the spatial self-attention of images and ignores the channel and
cross-dimensional self-attention leading to low segmentation accuracy for
medical images with complex backgrounds. To solve these problems, we propose
vision Transformer embrace convolutional neural networks for medical image
segmentation (TEC-Net). Our network has two advantages. First, dynamic
deformable convolution (DDConv) is designed in the CNN branch, which not only
overcomes the difficulty of adaptive feature extraction using fixed-size
convolution kernels, but also solves the defect that different inputs share the
same convolution kernel parameters, effectively improving the feature
expression ability of CNN branch. Second, in the Transformer branch, a
(shifted)-window adaptive complementary attention module ((S)W-ACAM) and
compact convolutional projection are designed to enable the network to fully
learn the cross-dimensional long-range dependency of medical images with few
parameters and calculations. Experimental results show that the proposed
TEC-Net provides better medical image segmentation results than SOTA methods
including CNN and Transformer networks. In addition, our TEC-Net requires fewer
parameters and computational costs and does not rely on pre-training. The code
is publicly available at https://github.com/SR0920/TEC-Net. | Rui Sun, Tao Lei, Weichuan Zhang, Yong Wan, Yong Xia, Asoke K. Nandi | 2023-06-07T01:14:16Z | http://arxiv.org/abs/2306.04086v3 | # TEC-Net: Vision Transformer Embrace Convolutional Neural Networks for Medical Image Segmentation
###### Abstract
The hybrid architecture of convolutional neural networks (CNN) and Transformer has been the most popular method for medical image segmentation. However, existing networks based on the hybrid architecture suffer from two problems. First, although the CNN branch can capture local image features through convolution operations, vanilla convolution is unable to achieve adaptive extraction of image features. Second, although the Transformer branch can model the global information of images, conventional self-attention only focuses on the spatial self-attention of images and ignores channel and cross-dimensional self-attention, leading to low segmentation accuracy for medical images with complex backgrounds. To solve these problems, we propose vision Transformer embrace convolutional neural networks for medical image segmentation (TEC-Net). Our network has two advantages. First, dynamic deformable convolution (DDConv) is designed in the CNN branch, which not only overcomes the difficulty of adaptive feature extraction using fixed-size convolution kernels, but also solves the defect that different inputs share the same convolution kernel parameters, effectively improving the feature expression ability of the CNN branch. Second, in the Transformer branch, a (shifted)-window adaptive complementary attention module ((S)W-ACAM) and compact convolutional projection are designed to enable the network to fully learn the cross-dimensional long-range dependency of medical images with few parameters and little computation. Experimental results show that the proposed TEC-Net provides better medical image segmentation results than SOTA methods including CNN and Transformer networks. In addition, our TEC-Net requires fewer parameters and computational costs and does not rely on pre-training. The code is publicly available at [https://github.com/SR0920/TEC-Net](https://github.com/SR0920/TEC-Net).
Deep learning, Medical image segmentation, Dynamic deformable convolution, (Shifted)-window adaptive complementary attention.
## I Introduction
Medical image segmentation refers to dividing a medical image into several specific regions with unique properties. Segmentation results can not only support the detection of abnormalities in human body regions but also guide clinicians. Therefore, accurate medical image segmentation has become a key component of computer-aided diagnosis and treatment, patient condition analysis, image-guided surgery, tissue and organ reconstruction, and treatment planning. Compared with common RGB images, medical images usually suffer from problems such as high-density noise, low contrast, and blurred edges. Consequently, quickly and accurately segmenting specific human organs and lesions from medical images has always been a major challenge in the field of smart medicine.
The early traditional medical image segmentation algorithms were based on hand-crafted features designed by medical experts using professional knowledge [1]. These methods have a strong mathematical basis and theoretical support, but they generalize poorly across different organs or lesions of the human body. Later, inspired by fully convolutional networks (FCN) [2] and the encoder-decoder idea, Ronneberger et al. designed the U-Net [3] network, which was first applied to medical image segmentation. After that, the U-shaped encoder-decoder structure received widespread attention. At the same time, owing to the small number of parameters and the good segmentation effect of U-Net, deep learning made a breakthrough in medical image segmentation, and a series of improved medical image segmentation networks were reported, such as 2D U-Net++ [4], ResDO-UNet [5], SGU-Net [6], 2.5D RIU-Net [7], 3D Unet [8], and V-Net [9]. The rapid development of CNN in the field of medical image segmentation is largely due to the scale invariance and inductive bias of the convolution operation. Although this
fixed receptive field improves the computational efficiency of CNN, it limits the ability of CNN to capture the long-range dependency relationship.
Aiming at the shortcomings of CNN in obtaining global features of images, researchers adapted the Transformer architecture of Vaswani et al. [10] into the vision Transformer for image classification. The Transformer achieves a global representation of image information through complex spatial transformations and long-range dependency modeling, effectively solving the problem that CNN can only obtain local features of images. Currently, many Transformer-based methods have been applied to medical image segmentation; representative methods include Swin-Unet [11], BAT [12], Swin UNETR [13], and UC-TransNet [14]. These methods can be roughly divided into pure Transformer architectures and hybrid architectures of CNN and Transformer. The pure Transformer architecture realizes long-range dependency modeling using self-attention. However, due to its lack of inductive bias, the traditional Transformer cannot be widely used on small-scale datasets like medical images [15]. At the same time, the Transformer architecture is prone to ignore local detail features, which reduces the separability between the background and the foreground of small lesions or objects with large-scale shape changes in medical images. The hybrid architecture of CNN and Transformer realizes both local and global information modeling of medical images by exploiting the complementary advantages of CNN and Transformer, thus achieving a better medical image segmentation effect. However, these hybrid architectures still suffer from the following two problems. First, these networks ignore organ deformation and lesion irregularity when modeling local features, resulting in weak local feature expression. Second, these networks ignore the correlation between spatial positions and channels when modeling global features, resulting in inadequate expression of self-attention. To address the above problems, our main contributions are as follows:
* _A novel dynamic deformable convolution (DDConv) is proposed. Through task-adaptive learning, DDConv can flexibly change its own weight coefficients and deformation offsets, overcoming the fixed, narrow receptive field and non-adaptive kernel parameters of vanilla convolution and its variants, such as atrous convolution and involution. DDConv improves the ability to perceive tiny lesions and targets with large-scale shape changes in medical images (a minimal sketch is given after this list)._
* _A new (shifted)-window adaptive complementary attention module ((S)W-ACAM) is proposed. (S)W-ACAM realizes cross-dimensional global modeling of medical images through four parallel branches with adaptively learned weight coefficients. Compared with popular attention mechanisms, such as CBAM and Non-Local, (S)W-ACAM fully compensates for the deficiency of conventional attention in modeling the cross-dimensional relationship between spatial positions and channels, thus enhancing the separability between the segmented object and the background in medical images._
* _A new parallel network structure based on a dynamically adaptive CNN and a cross-dimensional feature fusion Transformer is proposed for medical image segmentation, called TEC-Net. Compared with popular hybrid architectures of CNN and Transformer, such as Swin-Unet [11] and Swin UNETR [13], TEC-Net enhances representation learning by tightly combining local and global features at different resolutions through parallel interaction between the CNN and Transformer branches. Moreover, TEC-Net dispenses with pre-training and requires few parameters and computational costs, namely 11.58 M and 4.53 GFLOPs, respectively._
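As a rough illustration of the first contribution, the following PyTorch sketch combines input-conditioned kernel mixing (the dynamic part) with learned sampling offsets (the deformable part); the aggregation scheme, kernel count, and per-sample loop are our assumptions for exposition, not the exact TEC-Net implementation.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DDConvSketch(nn.Module):
    """Sketch of DDConv: an attention branch makes the kernel weights
    input-dependent (dynamic), and an offset branch deforms the sampling
    grid (deformable); the combination here is illustrative only."""
    def __init__(self, cin, cout, k=3, n_kernels=4):
        super().__init__()
        self.pad = k // 2
        self.kernels = nn.Parameter(0.02 * torch.randn(n_kernels, cout, cin, k, k))
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(cin, n_kernels), nn.Softmax(dim=-1))
        self.offset = nn.Conv2d(cin, 2 * k * k, k, padding=self.pad)

    def forward(self, x):
        a = self.attn(x)                     # (N, K): per-input kernel mixing weights
        off = self.offset(x)                 # (N, 2*k*k, H, W): sampling offsets
        out = [deform_conv2d(x[i:i + 1], off[i:i + 1],
                             torch.einsum("k,koiuv->oiuv", a[i], self.kernels),
                             padding=self.pad)
               for i in range(x.size(0))]    # per-sample kernels, so loop over batch
        return torch.cat(out, dim=0)

print(DDConvSketch(16, 32)(torch.randn(2, 16, 24, 24)).shape)  # (2, 32, 24, 24)
```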
## II Related Work
Medical image segmentation plays a very important role in the field of medical image processing and is one of the core techniques of computer-aided diagnosis and treatment systems. Because manually labeling medical images is tedious and complex, and it is difficult to guarantee the efficiency and accuracy of manual labeling, fast and accurate segmentation of medical images is of great significance for clinical treatment. In recent years, with the rapid development of deep learning techniques, researchers have developed many deep network models for medical image segmentation. These networks can be coarsely divided into two categories: CNN-based and Transformer-based networks.
### CNN-based Methods
Different from traditional medical image segmentation algorithms, algorithms based on deep learning can learn the high-dimensional feature information of medical images through a multi-layer network structure. Among the various deep networks for medical image segmentation, CNN perform extremely well. CNN can effectively learn discriminative features and prior knowledge from large-scale medical datasets, making them an important part of smart medical image analysis systems.
In 2015, Ronneberger et al., inspired by the FCN [2] network, designed the first end-to-end network, U-Net [3], for medical image segmentation in the ISBI cell tracking challenge. U-Net adopts a symmetric encoder-decoder structure, which makes full use of the local details of medical images and reduces the dependence on training datasets. Therefore, even on small datasets, U-Net can still achieve good medical image segmentation results. Based on U-Net, Alom et al. designed R2U-Net [16] by combining U-Net, ResNet [17], and the recurrent neural network (RCNN) [18], which achieved good performance on multiple medical image segmentation datasets such as blood vessels and retinas. To further improve the performance of U-Net, Gu et al. introduced dynamic convolution [19] into U-Net and proposed CA-Net [20]. Experiments on medical datasets show that CA-Net can not only improve the segmentation accuracy of medical images but also reduce the training time of the network. Inspired by the ideas of residual connections and deformable convolution [21], Yang et al. added a residual deformable
convolution to U-Net, and proposed DCU-Net [22]. DCU-Net shows a more advanced segmentation effect than U-Net on DRIVE medical dataset. Lei et al. designed SGU-Net [6] based on U-Net, and proposed an ultralight convolution module and additional adversarial shape-constraint that can significantly improve the segmentation accuracy of abdominal medical images through self-supervised training. Although CNN have made great progress in network structure, the main reason for their success is due to the invariance in dealing with different scales and the inductive bias in local modeling. Although this fixed receptive field improves the computational efficiency of CNN, it also limits the ability to capture the relationship between distant pixels in medical images and lacks the ability to model medical images in a long-range.
### Transformer-based Methods
In 2017, Vaswani et al. [10] proposed the first Transformer network. Thanks to its unique structure, the Transformer can process variable-length inputs, establish long-range dependencies, and capture global features of the input data. The Transformer's success is mainly attributed to the self-attention (SA) mechanism, which captures long-range dependencies.
With the excellent performance of the Transformer in NLP, ViT [23] first applied the Transformer to image processing, capturing the global context of input images through multiple cascaded Transformer layers and making the Transformer a great success in image classification. Chen et al. then proposed TransUNet [24], which opened a new era for Transformers in medical image segmentation. Since TransUNet directly reuses the Transformer designed for NLP, the input patch size is fixed and the computation is massive. To solve this problem, Valanarasu et al. proposed MedT [25] for medical image segmentation. It adds a gating mechanism to the network, whose parameters can be automatically adjusted to obtain position embedding weights suitable for datasets of different sizes. Since images are more diverse than text and have high resolution, Cao et al. proposed a pure Transformer network, Swin-Unet [11], for medical image segmentation by adopting the shifted window multi-head self-attention (SW-MSA) of the Swin Transformer [26]. Swin-Unet achieved the most advanced segmentation performance at the time on the Synapse and ACDC multi-organ segmentation datasets. To better process dermoscopic image data with Transformers, Wang et al. designed the BAT [12] network based on the idea of edge detection. Its boundary-wise attention gate (BAG) fully utilizes the prior knowledge of image boundaries to capture more details of medical images. BAT achieves an impressive segmentation Dice value on skin lesion datasets and surpasses many of the latest medical image segmentation networks.
Compared with previous vanilla convolution [3], dynamic convolution [19][27], and deformable convolution [21], our DDConv can not only adaptively change its weight coefficients and deformation offsets according to the medical image task, but also better adapt to the shapes of organs and small lesions with large scale changes in medical images, improving the local feature expression ability of the segmentation network. Compared with the self-attention mechanisms in existing Transformer architectures [11][12], our (S)W-ACAM requires fewer parameters and less computation while capturing global cross-dimensional long-range dependencies in medical images, improving the global feature expression ability of the segmentation network. Our TEC-Net does not require large amounts of labeled data for pre-training, yet it maximally retains local details and global semantic information in medical images, achieving the best segmentation performance on dermoscopic images, liver datasets, and cardiac multi-organ datasets.
## 3 Method
### Overall Architecture
The fusion of local and global features is clearly helpful for improving medical image segmentation. CNNs capture local features of medical images through convolution operations and hierarchical feature representations. In contrast, the Transformer extracts global features of medical images through cascaded self-attention and context-interacting matrix operations. To make full use of both local details and global semantic features of medical images, we design a parallel interactive network architecture, TEC-Net. The overall architecture is shown in Fig. 1(a).
TEC-Net fully considers the complementary properties of CNNs and Transformers. During forward propagation, TEC-Net continuously feeds the local details extracted by the CNN branch to the decoder of the Transformer branch. Similarly, TEC-Net feeds the global long-range relationships captured by the Transformer branch to the decoder of the CNN branch. The proposed TEC-Net thus provides better local and global feature representations than pure CNN or Transformer networks, and it shows great potential in medical image segmentation.
Specifically, TEC-Net consists of four components: a patch embedding module, a dynamically adaptive CNN branch, a cross-dimensional fusion Transformer branch, and a feature fusion module. The dynamically adaptive CNN branch and the cross-dimensional fusion Transformer branch follow the designs of U-Net and Swin-Unet, respectively. The CNN branch consists of seven main stages. By using DDConv, whose weight coefficients and deformation offsets are adaptive, in each stage, the segmentation network better understands the local semantic features of medical images, better perceives subtle changes in organs or lesions, and improves the extraction of targets with multi-scale changes. Similarly, the cross-dimensional fusion Transformer branch also consists of seven main stages. By using (S)W-ACAM attention in each stage, as shown in Fig. 1(b), the segmentation network better captures the global dependencies of medical images and the positional information between different organs, improving the separability of segmented objects from the background.
Although our TEC-Net effectively improves the feature representation of medical images, its dual-branch structure demands large amounts of training data and many network parameters. The conventional Transformer contains many multi-layer perceptron (MLP) layers, which not only aggravate the training burden but also sharply increase the number of parameters, slowing down training. Inspired by the Ghost network [28], we redesign the MLP layer of the original Transformer and propose a lightweight perceptron module (LPM). The LPM helps TEC-Net achieve better medical image segmentation results than the MLP while greatly reducing the number of parameters and the computational costs, allowing the Transformer to achieve good results even without training on large amounts of labeled data. It is worth mentioning that the dual-branch structure uses mutually symmetric encoders and decoders, so the parallel interactive structure can maximally preserve the local and global features of medical images.
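A minimal PyTorch sketch of a Ghost-style lightweight perceptron in the spirit of the LPM: half of the hidden features come from a full linear layer and the rest from a cheap depthwise operation. The half/half split and the depthwise 1D convolution are our assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class LightweightPerceptron(nn.Module):
    """Ghost-style stand-in for the Transformer MLP (see lead-in for assumptions)."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        half = hidden_dim // 2
        self.primary = nn.Linear(dim, half)                # full-cost features
        self.cheap = nn.Conv1d(half, half, kernel_size=3,
                               padding=1, groups=half)     # cheap "ghost" features
        self.act = nn.GELU()
        self.proj = nn.Linear(hidden_dim, dim)

    def forward(self, x):                    # x: (B, N, dim) token sequence
        p = self.act(self.primary(x))        # (B, N, half)
        g = self.cheap(p.transpose(1, 2)).transpose(1, 2)  # (B, N, half)
        return self.proj(torch.cat([p, g], dim=-1))
```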
### Dynamic Deformable Convolution
Vanilla convolution is spatially invariant and channel-specific, so it has limited ability to adapt to different visual modalities at different spatial locations. At the same time, due to the limited receptive field, it is difficult for vanilla convolution to extract features of small targets or targets with blurred edges. Therefore, vanilla convolution inevitably shows poor adaptability and weak generalization for feature representation of complex medical images. Although existing deformable convolution [21] and dynamic convolution [19][27] outperform vanilla convolution to some extent, they still struggle to balance network performance against network size in medical image segmentation.
To overcome the shortcomings of popular convolution operations, this paper proposes a new convolution strategy, namely DDConv, as shown in Fig. 2. DDConv can adaptively learn kernel deformation offsets and weight coefficients according to the specific task and data distribution, realizing changes in both the shapes and the values of the convolution kernels. It can effectively deal with large differences in data distribution and large target deformations in medical image segmentation. Moreover, DDConv is plug-and-play and can be embedded in any network structure.
The shape change of the convolution kernel in DDConv is driven by network-learned deformation offsets. The segmentation network first samples the input feature map \(X\) using a square convolution kernel \(S\), and then performs a weighted sum with a weight matrix \(M\). The square convolution kernel \(S\) determines the range of the receptive field; e.g., a \(3\times 3\) convolution kernel can be expressed as:
\[S=\{(0,0),(0,1),(0,2),...,(2,1),(2,2)\}, \tag{1}\]
then the output feature map \(Y\) at coordinate \(\varphi_{n}\) can be expressed as:

\[Y\left(\varphi_{n}\right)=\sum_{\varphi_{m}\in S}S\left(\varphi_{m}\right)\cdot X\left(\varphi_{n}+\varphi_{m}\right), \tag{2}\]

When the deformation offsets \(\{\triangle\varphi_{m}\mid m=1,2,3,\ldots,N\}\) are introduced through the weight matrix \(M\), where \(N\) is the total length of \(S\), Equation (2) becomes:

\[Y\left(\varphi_{n}\right)=\sum_{\varphi_{m}\in S}S\left(\varphi_{m}\right)\cdot X\left(\varphi_{n}+\varphi_{m}+\triangle\varphi_{m}\right). \tag{3}\]

Figure 1: (**a**) The architecture of TEC-Net. TEC-Net consists of a dual-branch interaction between a dynamically adaptive CNN and a cross-dimensional feature fusion Transformer. The DDConv in the CNN branch can adaptively change its weight coefficients and deformation offsets, which improves the segmentation accuracy of irregular objects in medical images. The (S)W-ACAM in the Transformer branch captures cross-dimensional long-range dependencies in medical images, improving the separability of segmented objects and backgrounds. The lightweight perceptron module (LPM) greatly reduces the parameters and computations of the original Transformer network by using the Ghost strategy. (b) Two successive Transformer blocks. W-ACAM and SW-ACAM are cross-dimensional self-attention modules with shifted-window and compact convolutional projection configurations.

Figure 2: The module of the proposed DDConv. Compared with current popular convolution strategies, DDConv can dynamically adjust its weight coefficients and deformation offsets during training, which facilitates the capture and extraction of irregular targets in medical images. \(\alpha\) and \(\beta\) represent the different weight values of DDConv in different states.
Through network learning, an offset matrix with the same spatial size as the input feature map is finally obtained; its channel dimension is twice that of the input feature map, since each sampling position needs an offset in both the x and y directions.
To show that the convolution kernel of DDConv is dynamic, we first present the output feature map of vanilla convolution:
\[y=\sigma(W\cdot x), \tag{4}\]
where \(\sigma\) is the activation function, \(W\) is the convolution kernel weight matrix, and \(y\) is the output feature map. In contrast, the output feature map of DDConv is:
\[\hat{y}=\sigma\left(\left(\alpha_{1}\cdot W_{1}+\ldots+\alpha_{n}\cdot W_{n} \right)\cdot x\right), \tag{5}\]
where \(n\) is the number of weight matrices, the \(\alpha_{i}\) are learnable weight coefficients, and \(\hat{y}\) is the output feature map generated by DDConv. DDConv dynamically adjusts the convolution kernel weights by linearly combining different weight matrices with their corresponding coefficients before performing the convolution operation.
The above analysis shows that DDConv realizes dynamic adjustment of both the shape and the weights of the convolution kernel. Compared with directly increasing the number or size of convolution kernels, DDConv is simpler and more efficient. It not only solves the poor adaptability of fixed-size convolution kernels in feature extraction, but also overcomes the defect that different inputs share the same convolution kernel parameters. Consequently, DDConv can improve the segmentation accuracy of small targets and of large targets with blurred edges in medical images.
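A minimal PyTorch sketch of how DDConv's two mechanisms can be combined: dynamic weight mixing as in Eq. (5) and learned offsets as in Eq. (3), realized here via torchvision's `deform_conv2d`. The coefficient head, the number of candidate kernels, and the batch-averaged mixing are our assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import deform_conv2d

class DDConv(nn.Module):
    """Sketch of a dynamic deformable convolution (see lead-in for assumptions)."""
    def __init__(self, in_ch, out_ch, k=3, n_kernels=4):
        super().__init__()
        self.pad = k // 2
        # Bank of candidate kernels W_1..W_n (Eq. 5).
        self.weight_bank = nn.Parameter(
            torch.randn(n_kernels, out_ch, in_ch, k, k) * 0.02)
        # Head predicting the mixing coefficients alpha_1..alpha_n from the input.
        self.coef = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_ch, n_kernels))
        # Head predicting per-position deformation offsets (Eq. 3).
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=self.pad)

    def forward(self, x):
        alpha = F.softmax(self.coef(x), dim=1)               # (B, n)
        # Input-conditioned kernel sum_i alpha_i * W_i; averaging the mixed
        # kernel over the batch is a simplification (per-sample mixing would
        # need grouped convolution).
        w = torch.einsum('bn,noikl->oikl', alpha, self.weight_bank) / x.size(0)
        offsets = self.offset(x)                             # (B, 2*k*k, H, W)
        return deform_conv2d(x, offsets, w, padding=self.pad)
```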
### Shifted Window Adaptive Complementary Attention Module
The self-attention mechanism is the core computing unit of Transformer networks; it captures long-range dependencies of feature maps through matrix operations. However, self-attention only considers dependencies in the spatial dimension, not cross-dimensional dependencies between space and channels [29]. Therefore, when dealing with medical image segmentation with low contrast and high-density noise, self-attention easily confuses segmentation targets with their background, resulting in poor segmentation results.
To solve the problems mentioned above, we propose a new cross-dimensional self-attention module called (S)W-ACAM. As shown in Fig. 3, (S)W-ACAM consists of four parallel branches: the top two are conventional dual attention modules and the bottom two are cross-dimensional attention modules. Compared with popular self-attention modules such as spatial self-attention, channel self-attention, and dual self-attention, our (S)W-ACAM can not only fully extract long-range dependencies in both the spatial and channel dimensions, but also capture cross-dimensional long-range dependencies between space and channels. The four branches complement each other, provide richer long-range dependencies, enhance the separability between foreground and background, and thus improve segmentation results for medical images.
The standard Transformer architecture [23] uses global self-attention to compute the relationship between one token and all other tokens, which is expensive since the computational cost grows quadratically with image size. To improve efficiency, we use the shifted-window scheme of the Swin Transformer [26], which computes self-attention only within local windows. However, for our four-branch (S)W-ACAM module, shifted windows alone do not reduce the overall computational complexity. Therefore, we also design a compact convolutional projection: we first reduce the spatial extent of the medical image through the shifted-window operation, then compress the channel dimension of the feature maps through the compact convolutional projection, and finally compute self-attention. This method not only captures the global information of medical images well but also significantly reduces the computational cost of the module.
Suppose an image contains \(h\times w\) patches and each window contains \(M\times M\) patches; then the complexities of (S)W-ACAM, the global MSA in the original Transformer, and the (S)W-MSA in the Swin Transformer compare as follows:
\[\Omega\left(MSA\right)=4hwC^{2}+2(hw)^{2}C, \tag{6}\]
Figure 3: The module of the proposed (S)W-ACAM. Unlike conventional self-attention, (S)W-ACAM has the advantages of both spatial and channel attention and can also capture long-range correlations between space and channels. The shifted-window operation significantly reduces the spatial resolution of the feature maps, and the compact convolutional projection significantly reduces their channel dimension; thus, the overall computational cost and complexity of the proposed network are reduced. \(\mathbf{\lambda_{1}}\), \(\mathbf{\lambda_{2}}\), \(\mathbf{\lambda_{3}}\) and \(\mathbf{\lambda_{4}}\) are learnable weight parameters.
\[\Omega\left((S)W\text{-}MSA\right)=4hwC^{2}+2M^{2}hwC, \tag{7}\]
\[\Omega\left((S)W\text{-}ACAM\right)=\frac{hwC^{2}}{4}+M^{2}hwC, \tag{8}\]
The second term of Eq. (6) is quadratic in the number of patches \(h\cdot w\), whereas the second terms of Eqs. (7) and (8) are linear in it when \(M\) is fixed (the default is 7). The computational cost of (S)W-ACAM is therefore smaller than that of MSA and (S)W-MSA.
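To make the comparison concrete, the following sketch evaluates Eqs. (6)-(8) for an illustrative feature-map size; the example dimensions are our assumptions.

```python
def attention_cost(h, w, C, M=7):
    """Evaluate the complexity formulas of Eqs. (6)-(8) for one feature map."""
    hw = h * w
    msa = 4 * hw * C ** 2 + 2 * hw ** 2 * C          # global MSA, Eq. (6)
    sw_msa = 4 * hw * C ** 2 + 2 * M ** 2 * hw * C   # (S)W-MSA, Eq. (7)
    sw_acam = hw * C ** 2 / 4 + M ** 2 * hw * C      # (S)W-ACAM, Eq. (8)
    return msa, sw_msa, sw_acam

# Illustrative 56x56 feature map with 96 channels:
print(attention_cost(56, 56, 96))
```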
Among the four parallel branches of (S)W-ACAM, two branches capture channel correlation and spatial correlation, respectively; the remaining two capture the correlation between the channel dimension \(C\) and the spatial dimension \(H\), and between the channel dimension \(C\) and the spatial dimension \(W\). After adopting the shifted window partitioning method, as shown in Fig. 1(b), consecutive Transformer blocks are computed as follows:
\[\hat{T}^{l}=W\text{-}ACAM\left(LN\left(T^{l-1}\right)\right)+T^{l-1}, \tag{9}\]
\[T^{l}=LPM\left(LN\left(\hat{T}^{l}\right)\right)+\hat{T}^{l}, \tag{10}\]
\[\hat{T}^{l+1}=SW\text{-}ACAM\left(LN\left(T^{l}\right)\right)+T^{l}, \tag{11}\]
\[T^{l+1}=LPM\left(LN\left(\hat{T}^{l+1}\right)\right)+\hat{T}^{l+1}, \tag{12}\]
where \(\hat{T}^{l}\) and \(T^{l}\) denote the output features of (S)W-ACAM and LPM, respectively; W-ACAM denotes window adaptive complementary attention, SW-ACAM denotes shifted-window adaptive complementary attention, and LPM denotes the lightweight perceptron module. For the attention calculation of each branch, we follow the same formulation as the Swin Transformer:
\[Attention\left(Q,K,V\right)=SoftMax\left(\frac{QK^{T}}{\sqrt{C/8}}+B\right)V, \tag{13}\]
where the relative position bias \(B\in\mathbb{R}^{M^{2}\times M^{2}}\), and \(Q,K,V\in\mathbb{R}^{M^{2}\times\frac{C}{8}}\) are the query, key, and value matrices, respectively. \(\frac{C}{8}\) represents the dimension of the query/key, and \(M^{2}\) represents the number of patches in a window.
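As an illustration of Eq. (13), below is a minimal PyTorch sketch of one branch's windowed attention; the tensor shapes follow the \(C/8\) compact projection described above, while the function name and calling convention are our assumptions.

```python
import torch.nn.functional as F

def acam_attention(q, k, v, bias):
    """Eq. (13): scaled dot-product attention with relative position bias B.

    q, k, v: (num_windows, M*M, C/8) compactly projected tokens;
    bias:    (M*M, M*M) relative position bias, broadcast over windows.
    """
    d = q.size(-1)                                  # query/key dimension C/8
    attn = q @ k.transpose(-2, -1) / d ** 0.5 + bias
    return F.softmax(attn, dim=-1) @ v
```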
After obtaining \(Out_{1}\), \(Out_{2}\), \(Out_{3}\) and \(Out_{4}\), the final feature fusion output is:
\[Out=\lambda_{1}\cdot Out_{1}+\lambda_{2}\cdot Out_{2}+\lambda_{3}\cdot Out_{ 3}+\lambda_{4}\cdot Out_{4}, \tag{14}\]
where \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) and \(\lambda_{4}\) are learnable parameters that enable adaptive control of the importance of each attention branch.
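A minimal sketch of the adaptive fusion in Eq. (14), with \(\lambda_{1},\ldots,\lambda_{4}\) as learnable parameters; initializing them to ones is our assumption.

```python
import torch
import torch.nn as nn

class BranchFusion(nn.Module):
    """Eq. (14): adaptively weighted sum of the four attention branches."""
    def __init__(self):
        super().__init__()
        self.lam = nn.Parameter(torch.ones(4))   # lambda_1..lambda_4

    def forward(self, outs):          # outs: four equally shaped tensors
        return sum(l * o for l, o in zip(self.lam, outs))
```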
Different from other self-attention mechanisms, our (S)W-ACAM fully captures the correlation between space and channels and reasonably uses the contextual information of medical images to model long-range dependencies. Since (S)W-ACAM overcomes the defect that conventional self-attention focuses only on spatial self-attention and ignores channel and cross-dimensional self-attention, it achieves better feature representation for medical images with high-density noise, low contrast, and complex backgrounds.
### Loss Function
In our task, three loss functions are used for model training: the overall loss \(L_{TEC}\) of the TEC-Net network, the loss \(L_{CNN}\) of the CNN branch, and the loss \(L_{Trans}\) of the Transformer branch.
\[L_{TEC}=L_{MSE}\left(y_{i}^{TEC},y_{i}^{label}\right)+L_{Dice}\left(y_{i}^{TEC},y_{i}^{label}\right), \tag{15}\]
\[L_{CNN}=L_{MSE}\left(y_{i}^{CNN},y_{i}^{label}\right)+L_{Dice}\left(y_{i}^{CNN },y_{i}^{label}\right), \tag{16}\]
\[L_{Trans}=L_{MSE}\left(y_{i}^{Trans},y_{i}^{label}\right)+L_{Dice}\left(y_{i}^{ Trans},y_{i}^{label}\right), \tag{17}\]
where \(L_{MSE}(\bullet)\) denotes the mean squared error loss and \(L_{Dice}(\bullet)\) denotes the Dice loss. \(y_{i}^{TEC}\), \(y_{i}^{CNN}\), \(y_{i}^{Trans}\), and \(y_{i}^{label}\) denote the final prediction of the TEC-Net network, the prediction of the CNN branch, the prediction of the Transformer branch, and the label, respectively. The total loss of the TEC-Net network can be expressed as:
\[L_{Total}=\lambda L_{TEC}+\left(\left(1-\lambda\right)/2\right)L_{CNN}+\left( \left(1-\lambda\right)/2\right)L_{Trans}, \tag{18}\]
where \(\lambda=\delta e^{-5\left(1-k\right)^{2}}\) is a Gaussian ramp-up curve and \(k\) represents the number of epochs.
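The following sketch assembles Eqs. (15)-(18) in PyTorch. Interpreting \(k\) as the normalized training progress (so the ramp-up saturates at 1) and setting \(\delta=1\) are our assumptions.

```python
import math
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over probability maps."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def total_loss(y_tec, y_cnn, y_trans, label, progress, delta=1.0):
    """Eqs. (15)-(18); `progress` plays the role of k (assumed in [0, 1])."""
    lam = delta * math.exp(-5 * (1 - progress) ** 2)   # Gaussian ramp-up, Eq. (18)
    branch = lambda y: F.mse_loss(y, label) + dice_loss(y, label)
    return (lam * branch(y_tec)
            + (1 - lam) / 2 * branch(y_cnn)
            + (1 - lam) / 2 * branch(y_trans))
```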
### Architecture Variants
We build TEC-Net-T as the base network, with a model size of 11.58 M and a computational cost of 4.53 GFLOPs. In addition, we build TEC-Net-B for a fair comparison with the latest networks such as CvT [33] and PVT [34]. The window size is set to 7 and the input image size is \(224\times 224\). The other network parameters are set as follows:
* TEC-Net-T: \(layer\)\(number=\{2,\ 2,\ 6,\ 2,\ 6,\ 2,\ 2\}\), \(H=\{3,\ 6,\ 12,\ 24,\ 12,\ 6,\ 3\}\), \(D=96\)
* TEC-Net-B: \(layer\)\(number=\{2,\ 2,\ 18,\ 2,\ 18,\ 2,\ 2\}\), \(H=\{4,\ 8,\ 16,\ 32,\ 16,\ 8,\ 4\}\), \(D=96\)
where \(D\) is the number of image channels entering the first layer of the dynamically adaptive CNN branch and the cross-dimensional fusion Transformer branch, \(layer\ number\) is the number of Transformer blocks used in each stage, and \(H\) is the number of attention heads in the Transformer branch.
## 4 Experiment and Results
### Datasets
We conducted experiments on the skin lesion segmentation dataset (ISIC2018) from the International Symposium on Biomedical Imaging (ISBI) [36], the Liver Tumor Segmentation Challenge dataset (LiTS) from the Medical Image Computing and Computer Assisted Intervention Society (MICCAI) [37], and the Automated Cardiac Diagnosis Challenge dataset (ACDC) from the University Hospital of Dijon, France [38]. These three datasets have different data types and distributions. The ISIC2018 dataset consists of dermoscopic images and contains 2,594 images for training; since the ground truth of the testing set has not been released, we performed five-fold cross-validation on the training set for a fair comparison. The LiTS dataset consists of abdominal CT images and contains 131 3D CT liver scans, of which 100 are used for training and the remaining 31 for testing. The ACDC dataset consists of cardiac MRI images and contains short-axis cine MRI data from 100 patients, including healthy patients and patients with previous myocardial infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, or an abnormal right ventricle, with 20 scans per group. The data were acquired over a 6-year period using two MRI scanners of different magnetic strengths (1.5 T and 3.0 T). In addition, all images are empirically resized to \(224\times 224\) for efficiency.
### Implementation Details and Evaluation Indicators
All networks are implemented on an NVIDIA GeForce RTX 3090 (24 GB) with PyTorch 1.7. We use Adam with an initial learning rate of 0.001 to optimize the networks. The learning rate is halved when the validation loss has not dropped for 10 epochs. We use the mean squared error (MSE) loss and the Dice loss as loss functions in our experiments.
On the ISIC2018 dataset, we evaluated the SOTA networks and the proposed TEC-Net with five indicators: Dice (DI), Jaccard (JA), Sensitivity (SE), Accuracy (AC), and Specificity (SP) [39]. On the LiTS-Liver dataset, we used five indicators: DI, VOE, RVD, ASD, and RMSD. On the ACDC dataset, we used DI and 95HD [40].
The values of DI, JA, AC, SE, and SP range from 0 to 100; better segmentation corresponds to higher DI, JA, AC, SE, and SP and to lower VOE, RVD, ASD, RMSD, and 95HD [41]. The 95HD is defined as the 95th percentile of the Hausdorff distances (HD) instead of their maximum [42].
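For reference, a minimal NumPy sketch of two of the reported metrics, Dice and 95HD; the brute-force boundary-distance computation is an illustrative simplification, suitable only for small contours.

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum() + 1e-6)

def hd95(pred_pts, gt_pts):
    """95th-percentile Hausdorff distance between boundary point sets
    of shapes (N, 2) and (M, 2); brute force, fine for small contours."""
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    return np.percentile(np.hstack([d.min(1), d.min(0)]), 95)
```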
### Evaluation and Results
In this paper, we select the popular SOTA medical image segmentation networks U-Net [3], R2U-Net [16], Attention U-Net [30], CE-Net [31], 3D U-Net [8], V-Net [9], Swin-Unet [11], TransUNet [24], CvT [33], PVT [34], and CrossForm [35], and comprehensively compare them with the proposed TEC-Net on three datasets of different modalities: ISIC2018, LiTS-Liver, and ACDC.
TABLE I shows the quantitative analysis of the proposed TEC-Net and the competitive CNN and Transformer networks on the ISIC2018 dataset. From the experimental results, we conclude that our TEC-Net needs the fewest parameters and the lowest computational cost, and obtains the best segmentation results on dermoscopic images without pre-training. Moreover, our TEC-Net-T network requires only 11.58 M parameters and 4.53 GFLOPs yet still achieves the second-best segmentation performance. Our TEC-Net-B, BAT, CvT, and CrossForm have similar parameters or computational costs, but on the ISIC2018 dataset the segmentation Dice value of TEC-Net-B is 1.02%, 3.00%, and 3.79% higher than that of BAT, CvT, and CrossForm, respectively. In terms of the other evaluation indicators, TEC-Net-B is also significantly better than the other competitive networks.
TABLE II shows the quantitative analysis of the proposed TEC-Net and the competitive networks on the LiTS-Liver dataset. The experimental results show that TEC-Net has great advantages in medical image segmentation, further verifying the completeness of TEC-Net in extracting local and global features of medical images. Notably, TEC-Net-B and TEC-Net-T achieve the first- and second-best segmentation results with the fewest model parameters and lowest computational costs. The segmentation Dice value of TEC-Net-B without pre-training is 1.20%, 1.03%, and 1.01% higher than that of Swin-Unet, TransUNet, and CvT with pre-training, respectively. In terms of the other evaluation indicators, TEC-Net-B is also significantly better than the other competitive networks.
TABLE III shows the quantitative analysis of the proposed TEC-Net and the competitive networks on the ACDC dataset. The results show that TEC-Net also exhibits significant advantages on MRI-type multi-organ segmentation data. Both TEC-Net-T and TEC-Net-B provide state-of-the-art segmentation of the left ventricle (LV), right ventricle (RV), and left ventricular myocardium (MYO); LV is segmented best, while MYO is segmented worst. Compared with the latest CvT, PVT, and CrossForm, the average segmentation performance of TEC-Net-B improves by 0.98%, 1.45%, and 1.38%, respectively, while the average 95HD decreases by 0.72%, 0.78%, and 0.74%, respectively. This also shows that TEC-Net generalizes well across datasets and can be flexibly applied to medical image segmentation tasks of different modalities and collection environments.
From the visualization in Fig. 4, it can be clearly seen that the TEC-Net network effectively extracts local detail features and global semantic features of medical images, and therefore provides accurate segmentation for targets with irregular edges and large deformation scales. Fig. 4(a) shows the final output of the CNN branch, and Fig. 4(b) shows the final output of the Transformer branch. The CNN branch captures more accurate local details of segmented targets; however, because it mainly captures local features, it is more susceptible to noise interference. The Transformer branch captures more accurate positional information of segmented targets; however, because it mainly captures global features, it tends to overlook fine details. After integrating the two branches, the TEC-Net network fully inherits the structural and generalization advantages of CNNs and Transformers, providing better local and global feature representations for medical images and demonstrating great potential in medical image segmentation.

Fig. 4: Visualization of the TEC-Net network on the ISIC2018 and LiTS-Liver datasets. (a) The predicted image output by the CNN branch, (b) the predicted image output by the Transformer branch, (c) the predicted image output by the TEC-Net network, (d) the corresponding label, (e) the corresponding original image.
### Ablation Study
To fully demonstrate the effectiveness of the different modules in TEC-Net, we conducted a series of ablation experiments on the ISIC2018 dataset. As shown in TABLE IV, the proposed DDConv and (S)W-ACAM each perform well, and their combination, TEC-Net, shows the best medical image segmentation results.
### Network Visualization
To better understand the TEC-Net network, we visualize the feature maps of each stage, as shown in Fig. 5. We use linear interpolation to upsample the low-resolution deep feature maps to the size of the input and output images. In the visualization, red marks areas the network attends to strongly, while blue marks areas it attends to weakly.
The visualizations clearly show the whole process of feature extraction and feature fusion after an image enters the network. In the two encoder branches, the model analyzes and identifies the semantic information of the image. In the two decoder branches, after skip connections and feature fusion, the model attends increasingly to the semantic features of the target regions. During this process, the information exchange and fusion between the CNN and Transformer branches play an important role in the accurate segmentation of target regions.
Overall, as the network deepens, TEC-Net gradually refines the localization and contour segmentation of semantic objects in medical images, confirming the effectiveness of the proposed network for global information modeling and accurate segmentation of medical images.
## 5 Conclusion
In this study, we have proposed TEC-Net, a new architecture that combines a dynamically adaptive CNN and a cross-dimensional fusion Transformer in parallel for medical image segmentation. TEC-Net integrates the advantages of both CNNs and Transformers, retaining the local details and global semantic features of medical images through local relationship modeling and long-range dependency modeling. The proposed DDConv overcomes the fixed receptive field and parameter sharing of vanilla convolution, enhances local feature expression, and realizes adaptive extraction of spatial features. The proposed (S)W-ACAM self-attention mechanism fully captures cross-dimensional correlations between space and channels and adaptively learns the important information between them through network training. In addition, by replacing the MLP of the traditional Transformer with the LPM, TEC-Net significantly reduces the number of parameters, removes the dependence on pre-training, and avoids both the shortage of labeled medical images and the risk of over-fitting. Compared with popular CNN and Transformer networks for medical image segmentation, TEC-Net shows significant advantages in operational efficiency and segmentation quality.
|
2304.04199 | Information-Theoretic Testing and Debugging of Fairness Defects in Deep
Neural Networks | The deep feedforward neural networks (DNNs) are increasingly deployed in
socioeconomic critical decision support software systems. DNNs are
exceptionally good at finding minimal, sufficient statistical patterns within
their training data. Consequently, DNNs may learn to encode decisions --
amplifying existing biases or introducing new ones -- that may disadvantage
protected individuals/groups and may stand to violate legal protections. While
the existing search based software testing approaches have been effective in
discovering fairness defects, they do not supplement these defects with
debugging aids -- such as severity and causal explanations -- crucial to help
developers triage and decide on the next course of action. Can we measure the
severity of fairness defects in DNNs? Are these defects symptomatic of improper
training or they merely reflect biases present in the training data? To answer
such questions, we present DICE: an information-theoretic testing and debugging
framework to discover and localize fairness defects in DNNs.
The key goal of DICE is to assist software developers in triaging fairness
defects by ordering them by their severity. Towards this goal, we quantify
fairness in terms of protected information (in bits) used in decision making. A
quantitative view of fairness defects not only helps in ordering these defects,
our empirical evaluation shows that it improves the search efficiency due to
resulting smoothness of the search space. Guided by the quantitative fairness,
we present a causal debugging framework to localize inadequately trained layers
and neurons responsible for fairness defects. Our experiments over ten DNNs,
developed for socially critical tasks, show that DICE efficiently characterizes
the amounts of discrimination, effectively generates discriminatory instances,
and localizes layers/neurons with significant biases. | Verya Monjezi, Ashutosh Trivedi, Gang Tan, Saeid Tizpaz-Niari | 2023-04-09T09:16:27Z | http://arxiv.org/abs/2304.04199v1 | # Information-Theoretic Testing and Debugging of Fairness Defects in Deep Neural Networks
###### Abstract
The deep feedforward neural networks (DNNs) are increasingly deployed in socioeconomic critical decision support software systems. DNNs are exceptionally good at finding minimal, sufficient statistical patterns within their training data. Consequently, DNNs may learn to encode decisions--amplifying existing biases or introducing new ones--that may disadvantage protected individuals/groups and may stand to violate legal protections. While the existing search based software testing approaches have been effective in discovering fairness defects, they do not supplement these defects with debugging aids--such as severity and causal explanations--crucial to help developers triage and decide on the next course of action. Can we measure the severity of fairness defects in DNNs? Are these defects symptomatic of improper training or they merely reflect biases present in the training data? To answer such questions, we present Dice: an information-theoretic testing and debugging framework to discover and localize fairness defects in DNNs.
The key goal of Dice is to assist software developers in triaging fairness defects by ordering them by their severity. Towards this goal, we quantify fairness in terms of protected information (in bits) used in decision making. A quantitative view of fairness defects not only helps in ordering these defects, our empirical evaluation shows that it improves the search efficiency due to resulting smoothness of the search space. Guided by the quantitative fairness, we present a causal debugging framework to localize inadequately trained layers and neurons responsible for fairness defects. Our experiments over ten DNNs, developed for socially critical tasks, show that Dice efficiently characterizes the amounts of discrimination, effectively generates discriminatory instances (vis-a-vis the state-of-the-art techniques), and localizes layers/neurons with significant biases.
## I Introduction
AI-assisted software solutions--increasingly implemented as deep neural networks [1] (DNNs)--have made substantial inroads into critical software infrastructure where they routinely assist in socio-economic and legal-critical decision making [2]. Instances of such AI-assisted software include software deciding on recidivism, software predicting benefit eligibility, and software deciding whether to audit a given taxpayer. The DNN-based software development, driven by the _principle of information bottleneck_[3], involves a delicate balancing act between over-fitting and detecting useful, parsimonious patterns. It is, therefore, not a surprise that such solutions often encode and amplify pre-existing biases in the training data. What's worse, improper training may even introduce biases not present in the training data or irrelevant to the decision making. The resulting fairness defects may not only disadvantage protected groups [4, 5, 6, 7, 8], but may stand to violate statutory requirements [9].
_This paper presents Dice, an information-theoretic testing and debugging framework for fairness defects in deep neural networks._
**Quantifying Fairness.** Concentrated efforts from the software engineering and the machine learning communities have produced a number of successful _fairness testing_ frameworks [10, 11, 12, 13]. These frameworks characterize various notions of fairness--such as group fairness [14] (decision outcome for various protected groups must be similar) and individual fairness [15] (individuals differing only on protected attributes must receive similar outcome)--and employ search-based testing to discover fairness defects. While a binary classification of fairness is helpful in discovering defects, developers may require further insights into the nature of these defects to decide on the potential "bug fix". Are some defects more severe than others? Whether these defects stem from biases present in the training data, or they are artifacts of an inadequate training? Is it possible to find an alternative explanation of the training data that does not use protected information?
_Individual discrimination_ is a well-studied [16, 11, 17, 18] causal notion of fairness that deems a function discriminatory towards an individual (input) if there exists another individual (potentially counterfactual), differing only in the protected features, who receives a more favorable outcome. We present a quantitative generalization of this notion as the _quantitative individual discrimination_ (QID). We define QID as the amount of protected information--characterized by entropy metrics such as Shannon entropy and min entropy--used in deriving an outcome. Observe that a zero value for the QID measure implies the absence of individual discrimination. The QID measure allows us to order various discriminating inputs in terms of their severity, as in an application that is not supposed to base its decisions on protected information, inputs
with higher dependence indicate a more severe violation. Our first _research question_ (**RQ1**) concerns the usefulness of QID measure in finding inputs with different severity.
**Search-Based Testing.** Search-based software testing provide scalable optimization algorithms to automate discovery of software bugs. In the context of fairness defects, the search of such bugs involves finding twin inputs exhibiting discriminatory instances. The state-of-the-art algorithms for fairness testing [19, 17, 18] explore the input space governed by a binarized feedback, resulting in a discontinuous search domain. On the other hand, QID-based search algorithms can benefit a smooth (quantitative) feedback during the optimization, resulting in a more guided search. Our next research question (**RQ2**) is to investigate whether this theoretical promise materializes in practice in terms of discovering richer discriminating instances than using classic notions of discrimination.
**Causal Explanations.** While the discriminating instances (ordered by their severity) provide a clear evidence of fairness defects in the DNN, it is unclear whether these defects are inherent in the training data, or whether they are artifacts of the training process. Inspired by the notion of "the average causal effects" [20] and Audee framework [21] for bug localization in deep learning models, we develop a layer and neuron localization framework for fairness defects. If the cause of the defects is found to be at the input layer, it is indicative of discrimination existing in the training data. On the other hand, if we localize the cause of the defect to some internal layer, we wish to further prod the DNN to extract quantitative information about neurons and their counterfactual parameters that can mitigate the defect while maintaining the accuracy. This debugging activity informed our next research question (**RQ3**): is it possible to identify a subset of neurons and their causal effects on QID to guide a mitigation without affecting accuracy?
**Experiments.** Dice implements a search algorithm (Algorithm 1) to discover inputs that maximize QID and a causal debugging algorithm (Algorithm 2) to localize layers and neurons that causally affect the amounts of QID. Using \(10\) socio-critical DNNs from the algorithmic fairness literature, we show that Dice finds inputs that use significant amounts of protected information in decision making; outperforms three state-of-the-art techniques [19, 17, 18] in generating discriminatory instances; and localizes neurons that guide a simple mitigation strategy to reduce QID to \(15\%\) of the reported initial QID with at most \(5\%\) loss of accuracy. The key contributions of this paper are:
1. **Quantitative Individual Discrimination.** We introduce an information-theoretic characterization of discrimination, dubbed quantitative individual discrimination (QID), based on Shannon entropy and Min entropy.
2. **Search-based Testing.** We present a search-based algorithm to discover circumstances under which the DNNs exhibit severe discrimination.
3. **Causal Debugging.** We develop a causal fairness debugging based on the language of interventions to localize the root cause of the fairness defects.
4. **Experimental Evaluation.** Extensive experiments over different datasets and DNN models that show feasibility, usefulness, and scalability (viz-a-viz state-of-the-art). Our framework can handle multiple protected attributes and can easily be adapted for regression tasks.
## II Preliminaries
**Fairness Terminology.** We consider decision support systems as _binary classifiers_ where a prediction label is _favorable_ if it gives a desirable outcome to an input (individual). Such favorable predictions may include higher income estimations for loans, low risk of re-offending in parole assessments, and high risk of failing a class. Each dataset consists of a number of _attributes_ (such as income, experience, prior arrests, sex, and race) and a set of _instances_ that give the values of the attributes for each individual. According to ethical and legal requirements, data-driven software should not _discriminate_ on the basis of an individual's _protected attributes_, such as sex, race, age, disability, colour, creed, national origin, religion, genetic information, marital status, and sexual orientation.
There are several well-established fairness definitions. _Group fairness_ requires the statistics of ML outcomes for different _protected groups_ to be similar [14], using metrics such as the _equal opportunity difference_ (EOD), which is the difference between the true positive rates (TPR) of two protected groups. Fairness through unawareness (FTU) [15] requires removing protected attributes during training. However, FTU may provide inadequate protection, since protected attributes can influence the prediction via a non-protected collider attribute (e.g., race and ZIP code). Fairness through awareness (FTA) [15] is an _individual fairness_ notion requiring that two _individuals_ deemed similar (based on their non-protected attributes) are treated similarly. _Our approach is geared toward individual fairness_.
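As a concrete reference for the EOD metric mentioned above, a minimal NumPy sketch computing the TPR gap between two protected groups; the 0/1 group encoding is an assumption.

```python
import numpy as np

def equal_opportunity_difference(y_true, y_pred, group):
    """EOD: true-positive-rate gap between two protected groups (0/1 array)."""
    def tpr(mask):
        pos = (y_true == 1) & mask            # ground-truth positives in group
        return (y_pred[pos] == 1).mean() if pos.any() else 0.0
    return tpr(group == 0) - tpr(group == 1)
```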
**Individual Discrimination.** Causal discrimination, first studied in Themis [10], measures the difference between two subgroups via _counterfactual_ queries: it samples individuals with the protected attributes set to \(A\) and compares the outcome to a counterfactual scenario where the protected attributes are set to \(B\). Individual discrimination (ID) is a prevalent notion that adapts counterfactual queries to find an individual whose counterfactual with different protected attributes receives a more favorable outcome. This fairness notion is used by state-of-the-art fairness testing to generate fairness defects [22, 19, 17, 18] and is closely related to the notion of situation testing [23]. While standard group fairness metrics (e.g., AOD/EOD) are already quantitative, quantitative measures do not exist for individual fairness. We propose to adapt information-theoretic tools to provide quantitative measures for individual fairness.
**Information-Theoretic Concepts.** The notion of Rényi entropy [24], \(H_{\alpha}(X)\), quantifies the uncertainty (randomness) of a system responding to inputs \(X\). In particular, Shannon entropy (\(\alpha{=}1\)) and min-entropy (\(\alpha{=}\infty\)) are two important subclasses of Rényi entropy. Shannon entropy (\(H_{1}\)) measures the expected amount of uncertainty over finitely many events, whereas min entropy (\(H_{\infty}\)) measures the uncertainty over the single maximum-likelihood event.
Consider a deterministic system (like a pre-trained DNN) with a finite set of responses, and assume that the input \(X\) is distributed uniformly. The system induces an equivalence relation over the input set \(X\) such that two inputs are equivalent if their system outputs are approximately close, i.e., \(x{\sim}x^{\prime}\) iff \(\mathit{DNN}(x)\approx_{\epsilon}\mathit{DNN}(x^{\prime})\). Let \(X_{o}\) denote the equivalence class of \(X\) with output \(o\). Then, the remaining uncertainty after observing the output of the DNN over \(X\) can be written as:
\[H_{1}(X|O)=\sum_{O=o}\frac{|X_{o}|}{|X|}.\log_{2}(|X_{o}|)\qquad\quad\text{( Shannon entropy)}\]
where \(|X|\) is the cardinality of \(X\) and \(|X_{o}|\) is the size of equivalence class of output \(o\). Similarly, the min-entropy is given as
\[H_{\infty}(X|O)=\log_{2}(\frac{|X|}{|O|})\qquad\qquad\qquad\qquad\qquad\text{ (min-entropy)}\]
where \(|O|\) is the number of equivalence classes over \(X\)[25, 26, 27]. Given that the initial entropy is equal to \(\log_{2}(|X|)\) for both entropies, the amount of information from \(X\) used by the system to make decisions are
\[I_{1}(X;O) = \log_{2}(|X|)-H_{1}(X|O),\text{ and}\] \[I_{\infty}(X;O) = \log_{2}(|O|),\]
under Shannon- and min- entropies with \(I_{1}\leq I_{\infty}\).
**Quantitative Notion and Fairness.** Our approach differs from the state-of-the-art techniques [28, 22, 19, 17, 29, 18] in that it extends the individual discrimination notion with quantitative information flow, which enables us to measure the amount of discrimination in ML-based software systems. Given non-protected attributes and ML outcomes, _Shannon entropy_ measures the expected amount of individual discrimination over all possible responses varying the protected values, whereas _min entropy_ measures the amount over a single response from the maximum-likelihood class.
**Example 1**.: _Consider a dataset with \(16\) different protected value combinations--sex (2), race (2), and age (4)--distributed uniformly, and suppose that we have \(4\) individuals in the system. We perturb the protected attributes of these individuals to generate \(16\) counterfactuals each and run them through the DNN to get their prediction scores. Suppose that for the first individual all \(16\) outputs fall in the same class (absolutely fair), and for the second individual they fall into \(16\) classes of size one (absolutely discriminatory). For the third individual, let the outputs form \(4\) classes with \(\{4,4,4,4\}\) elements (e.g., one output class per age group). For the fourth individual, let the outputs form \(5\) classes with \(\{8,4,2,1,1\}\) elements (e.g., if race=1 the output is class \(1\); otherwise, if sex=1 the output is class \(2\); otherwise, if age={1,2} the output is class \(3\); otherwise there is one output class for each of age=3 and age=4)._
We work with the following notions of discrimination.
* _Individual Discrimination Notion_. The individual discrimination used by the state-of-the-art techniques can only distinguish between the first individual and the rest, but they cannot distinguish among individuals two to four. In fact, these techniques generate tens of thousands of individual discriminatory instances in a short amount of time [19, 17, 18]. However, they fail to prioritize test cases for mitigation and cannot characterize the amounts of discrimination (i.e., their severity).
* _Shannon Entropy_. Using Shannon entropy, the initial fairness is \(4.0\) bits, the maximum possible discrimination. The remaining fairness values of the DNN are \(4.0\), \(0.0\), \(2.0\), and \(2.125\) bits for the first to fourth individuals, respectively. The discrimination is the difference between the initial and remaining fairness: \(0.0\), \(4.0\), \(2.0\), and \(1.875\) bits for the first to fourth individuals, respectively. It is important to note that, beyond the two extreme cases, Shannon entropy deems perturbations to the third individual (rather than the fourth) to create a higher amount of discrimination.
* _Min Entropy_. The initial fairness via min entropy is also \(4\) bits. The conditional min entropies are \(\log\frac{16}{1}=4.0\), \(\log\frac{16}{16}=0.0\), \(\log\frac{16}{4}=2.0\), and \(\log\frac{16}{5}=1.7\) for the four individuals, respectively. The amounts of discrimination thus are \(0.0\), \(4.0\), \(\log 4=2.0\), and \(\log 5=2.3\), respectively. Beyond the two extreme cases where both entropies agree, min entropy deems perturbations to the fourth individual to create a higher amount of discrimination. This is intuitive, since the discrimination in the fourth case is more subtle, complex, and significant. Therefore, ML software developers might prioritize the cases characterized by min entropy (the sketch below reproduces these numbers).
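A minimal Python sketch that computes both QID measures from the equivalence-class sizes and reproduces the numbers for the third and fourth individuals in Example 1.

```python
import math

def discrimination_bits(class_sizes):
    """Shannon- and min-entropy QID from the sizes of the output equivalence
    classes induced over the m protected-value perturbations."""
    m = sum(class_sizes)
    init = math.log2(m)                                    # initial uncertainty
    remaining = sum(s / m * math.log2(s) for s in class_sizes)
    q_shannon = init - remaining
    q_min = math.log2(len(class_sizes))                    # log2 |O|
    return q_shannon, q_min

print(discrimination_bits([4, 4, 4, 4]))     # third individual: (2.0, 2.0)
print(discrimination_bits([8, 4, 2, 1, 1]))  # fourth individual: (1.875, ~2.32)
```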
## III Overview
**Dice in a Nutshell.** Figure 1 shows an overview of our framework Dice. It consists of two components: (1) an automatic test-generation mechanism based on search algorithms and (2) a debugging approach that localizes the neurons with significant impacts on fairness using a causal algorithm. First, Dice searches the space of the input dataset for circumstances on the non-protected attributes under which the DNN under test shows a significant dependency on the protected attributes in its decisions. It works in global and local phases: the global phase explores the input space to increase the amount of discrimination at each search step, while the local phase exploits promising seeds from the global phase to generate as many discriminatory instances as possible.
The key element of the search is a threshold-based clustering algorithm used for computing both gradients and objective functions, providing smooth feedback. The search characterizes the quantitative individual discrimination (QID) and
returns a set of interesting inputs. Second, Dice uses those inputs to localize the neurons with the largest causal effects on the amount of discrimination. In doing so, it intervenes [30] on a set of suspicious neurons: for every neuron, our debugging approach forces the neuron to be active (\(n{>}0\)) or inactive (\(n{=}0\)) over the test cases, as long as the functional accuracy of the DNN remains in a valid range. Then, it computes the difference between the amounts of QID in the two cases to characterize the causal effect of the neuron on fairness.
Dice reports the top \(k\) neurons with positive impacts (their activation reduces the amount of discrimination) and negative impacts (their activation increases the amount of discrimination). A potential mitigation strategy is to intervene to keep a small set of neurons activated (for positive neurons) or deactivated (for negative neurons).
**Test Cases.** Consider the _adult census income_ [31] dataset with a pre-trained model with \(6\) layers [17] as a running example for Dice. We ran Dice for \(1\) hour and obtained \(230,593\) test cases. It discovered \(36\) clusters from the initial \(14\) clusters, and the amounts of QID are \(4.05\) and \(2.64\) bits for min entropy and Shannon entropy, respectively, out of a total of \(5.3\) bits of information from the protected attributes. Ordering the test cases by severity, we have \(6\) test cases with the maximum QID of \(5.3\) bits, and \(29\) and \(112\) test cases with \(5.2\) and \(5.1\) bits of QID, respectively. The reported numbers are averaged (and rounded) over \(10\) runs.
**Localization and Mitigation.** Dice uses the generated test cases to localize layers and neurons with significant causal contributions to the discrimination. For the census dataset, it identifies the second layer as the layer with the largest sensitivity to protected attributes. Among the neurons in this layer, Dice finds that the \(15\)th neuron has the largest negative influence on fairness (the discrimination decreases by \(19.6\%\) when it is deactivated) and the \(19\)th neuron has the largest positive influence on fairness (the discrimination decreases by \(17.6\%\) when it is activated). Following this localization, a simple mitigation strategy of activating or deactivating these neurons reduces the amount of QID discrimination by \(20\%\) with a \(3\%\) accuracy loss.
**Comparison to the State-of-the-art.** We compare Dice to the state-of-the-art techniques in terms of generating individual discrimination (ID) instances (rather than the quantitative notion) for each protected attribute. Our goal is to evaluate whether the clustering-based search is effective in generating discriminatory instances. We run Dice and the baselines for \(15\) minutes and report results averaged over \(10\) runs. The baselines are Aequitas [19], ADF [17], and NeuronFair [18]. Considering sex as the protected attribute in the census dataset, Dice generated \(79.0k\) instances, whereas Aequitas, ADF, and NeuronFair generated \(10.4k\), \(18.2k\), and \(21.6k\) discriminatory instances, respectively. Overall, Dice generates more ID instances in all cases with higher success rates. However, Dice is slower in finding the first ID instance by a few seconds on average, since our approach does not generate ID instances in the global phase. When considering the time to the first 1,000 instances, Dice significantly outperforms the state-of-the-art. We conjecture that the improvements are due to the smooth search space induced by the quantitative feedback.
## IV Problem Statement
We consider DNN-based classifiers with the set of input variables \(A\) partitioned into a set of protected variables \(Z\) (such as race, sex, and age) and non-protected variables \(X\) (such as profession, income, and education). We further assume the output consists of \(t\) prediction classes.
**Definition IV.1** (DNN: Semantics).: _A deep neural network (DNN) encodes a function \(\mathcal{D}:X\times Z\rightarrow[0,1]^{t}\) where \(X=X_{1}\times X_{2}\cdots\times X_{n}\) is the set of non-protected input variables, \(Z=Z_{1}\times Z_{2}\cdots\times Z_{r}\) is the set of protected input variables, and the output is a \(t\)-dimensional probabilistic vector corresponding to \(t\) prediction classes. The predicted label \(\mathcal{D}_{\ell}(x,z)\) of an input pair \((x,z)\) is the index of the maximum score, i.e., \(\mathcal{D}_{\ell}(x,z)=\arg\max_{i}\mathcal{D}(x,z)(i)\). We assume that the protected input variables have finite domains, and we let \(m\) be the cardinality of the set of protected variables \(Z\)._
Fig. 1: Workflow of Dice. Given a DNN and a relevant input dataset, Dice quantifies QID discrimination via testing and applies causal debugging to localize and mitigate QID discrimination.

**Definition IV.2** (DNN: Syntax).: _A DNN \(\mathcal{D}\) is parameterized by the input dimension \(n{+}r\), the output dimension \(t\), the depth of hidden layers \(N\), and the weights of its hidden layers \(W_{1},W_{2},\ldots,W_{N}\). Our goal is to test and debug a pre-trained neural network with known parameters and weights. Let \(D_{i}\) be the output of layer \(i\), which implements an affine mapping from the output of the previous layer \(D_{i-1}\) and its weights \(W_{i-1}\) for \(1\leq i\leq N\), followed by_
1. _a fixed non-linear activation unit (e.g., ReLU defined as_ \(D_{i-1}\mapsto\max\left\{W_{i-1}.D_{i-1},0\right\}\)_) for_ \(1\leq i<N\)_, or_
2. _a SoftMax function that maps scores to probabilities of each class for_ \(i=N\)_._
_Let \(D_{i}^{j}\) be the output of neuron \(j\) at layer \(i\)._
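To make the semantics and syntax concrete, the following minimal NumPy sketch (illustrative only; the helper names are ours, not the paper's implementation) realizes Definitions IV.1 and IV.2:

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def softmax(v):
    e = np.exp(v - np.max(v))  # subtract the max for numerical stability
    return e / e.sum()

def forward(weights, x, z):
    """Forward pass per Definition IV.2: hidden layers apply an affine
    map followed by ReLU; the output layer applies SoftMax."""
    d = np.concatenate([x, z])        # non-protected and protected inputs
    for w in weights[:-1]:
        d = relu(w @ d)               # D_i = ReLU(W_{i-1} . D_{i-1})
    return softmax(weights[-1] @ d)   # t-dimensional probability vector

def predicted_label(weights, x, z):
    """D_l(x, z): the index of the maximum score."""
    return int(np.argmax(forward(weights, x, z)))
```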
**Individual Discrimination.** We say a DNN \(\mathcal{D}\) is biased based on the causal discrimination notion [16, 11, 17, 18] if
\[\exists z_{1},z_{2}\in Z,x\in X\ s.t.\ \mathcal{D}_{\ell}(x,z_{1})\neq \mathcal{D}_{\ell}(x,z_{2}),\]
for some protected values \(z_{1}\neq z_{2}\). Intuitively, the idea is to find an individual whose counterfactual, differing only in protected attributes such as race, receives a different outcome.
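A direct (if exhaustive) check of this definition enumerates the finite protected domain and compares counterfactual labels; a sketch, assuming the `predicted_label` helper above:

```python
from itertools import combinations

def find_id_witness(weights, x, protected_values):
    """Returns a witnessing pair (z1, z2) of protected values that receive
    different predicted labels for the same non-protected x, or None."""
    labels = [predicted_label(weights, x, z) for z in protected_values]
    for i, j in combinations(range(len(protected_values)), 2):
        if labels[i] != labels[j]:
            return protected_values[i], protected_values[j]
    return None
```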
**Quantitative Individual Discrimination.** In the setting of fairness testing, it is often desirable to quantify the _amount_ of bias for individuals. We define the notion of quantitative individual discrimination (QID) based on the equivalence classes induced from the output of the DNN over protected attributes. Formally, \(\mathit{QID}(Z,X=x)=\langle\mathcal{Z}_{1},\ldots,\mathcal{Z}_{k}\rangle\) is the quotient space of \(Z\) characterized by the DNN outputs under an individual with non-protected value \(x\). Using this notion, a pair of protected values \(z,z^{\prime}\) are in the same equivalence class \(i\) (i.e., \(z,z^{\prime}\in\mathcal{Z}_{i}\)) if and only if \(\mathcal{D}(z,x)\approx\mathcal{D}(z^{\prime},x)\).
Given that \(Z\) is uniformly distributed and \(\mathcal{D}\) is a deterministic function, we can quantify the QID notion for an individual \((z,x)\) according to the Shannon and min entropy, respectively:
\[Q_{1}(Z,x)=\log_{2}(m)-\sum_{i=1}^{k}\frac{|\mathit{QID}_{i}(Z,x)|}{m}\cdot\log_{2}(|\mathit{QID}_{i}(Z,x)|),\]
\[Q_{\infty}(Z,x)=\log_{2}(m)-\log_{2}\left(\frac{m}{k}\right)=\log_{2}(k),\]
where \(m\) is the cardinality of \(Z\), \(|\mathit{QID}_{i}(Z,x)|\) is the size of equivalence class \(i\), and \(k\) is the number of equivalence classes.
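Both quantities follow directly from the sizes of the equivalence classes; a minimal sketch (`class_sizes` lists the values \(|\mathit{QID}_{i}(Z,x)|\); the names are illustrative):

```python
import math

def qid_shannon(class_sizes, m):
    # Q_1 = log2(m) - sum_i (|QID_i| / m) * log2(|QID_i|)
    return math.log2(m) - sum((s / m) * math.log2(s) for s in class_sizes)

def qid_min_entropy(class_sizes):
    # Q_inf = log2(k), with k the number of equivalence classes
    return math.log2(len(class_sizes))

# Example: m = 8 protected values split into classes of sizes [4, 2, 1, 1]:
# qid_shannon([4, 2, 1, 1], 8) = 3 - (0.5*2 + 0.25*1) = 1.75 bits
# qid_min_entropy([4, 2, 1, 1]) = log2(4) = 2.0 bits
```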
**Debugging/Mitigating DNN for QID.** After characterizing the amount of discrimination via \(\mathit{QID}\), our next step is to localize a set of layers and neurons that _causally_ affect the output of the DNN to have \(k\) equivalence classes.
Causal logic [30] provides a firm foundation to reason about the causal relationships between variables. We consider a structural causal model (SCM) with exogenous variables \(U\) over the unobserved input factors, endogenous variables \(V\) over \((X,Z,D_{i}^{j})\); and the set of functions \(\mathcal{F}\) over the set \(V\) using the DNN function \(\mathcal{D}\) and exogenous variables \(U\). Using the SCM, we aim to estimate the average causal effect (ACE) [30] of neuron \(D_{i}^{j}\) on the QID.
A primary tool for performing such computations is do logic [20]. We write do\((i,j,y)\) to indicate that the output of neuron \(j\) at layer \(i\) is intervened on to stay at \(y\). In doing so, we remove the incoming edges to the neuron and force its output to take a pre-defined value \(y\); we are not required to control back-door variables due to the feed-forward structure of the DNN. Then, the ACE of neuron \(D_{i}^{j}\) on the quantitative individual discrimination with min entropy can be written as \(\mathbb{E}[Q_{\infty}\mid\texttt{do}(i,j,y),k,l]\), which is the expected QID after intervening on the neuron, given that the non-intervened DNN characterized \(k\) classes with an accuracy of \(l\). Our goal is to find neurons with the largest causal effects on the QIDs, requiring that such interventions be faithful to the functionality of the DNN.
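In a feed-forward network, do\((i,j,y)\) amounts to clamping one activation during the forward pass; a sketch on the NumPy model above (illustrative, not the paper's implementation):

```python
def forward_do(weights, x, z, layer, neuron, y):
    """Forward pass under do(layer, neuron, y): the incoming edges of the
    chosen neuron are ignored and its output is fixed to y."""
    d = np.concatenate([x, z])
    for i, w in enumerate(weights[:-1], start=1):
        d = relu(w @ d)
        if i == layer:
            d[neuron] = y  # intervention on neuron `neuron` at layer `layer`
    return softmax(weights[-1] @ d)
```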
**Definition IV.3** (Quantitative Fairness Testing and Debugging).: _Given a deep neural network model \(\mathcal{D}\) trained over a dataset \(A\) with protected (\(Z\subset A\)) and non-protected (\(X\subset A\)) attributes, the search problem is to find a single non-protected value \(x\in X\) such that the quantitative individual discrimination (QID), for a chosen measure \(Q_{1}\) or \(Q_{\infty}\), is maximized over the \(m\) protected values \(\Sigma=\{z_{1},\ldots,z_{m}\}\). Given the inputs \((\Sigma,x)\) characterizing the maximum QID, our debugging problem is to find a minimal subset of layers \(l\subset\{1,\ldots,N\}\) and neurons \(D_{l}^{j}\) for \(j\in J\subseteq\{1,\ldots,|W_{l}|\}\) such that the average causal effects of \(D_{l}^{j}\) on the QID are maximized._
## V Approach
**Characterizing Quantitative Individual Discrimination.** Given a DNN \(\mathcal{D}\) over a dataset \(A\), our goal is to characterize the worst-case QID over all possible individuals. Since min entropy characterizes the amount of discrimination from a single prediction, with \(Q_{\infty}(Z,x)\geq Q_{1}(Z,x)\), it is a useful notion for prioritizing test cases. Therefore, we focus on \(Q_{\infty}\) and propose the following objective function:
\[\max_{x\in X}\ 2^{Q_{\infty}(Z,x)}+\left(1-\exp(-0.1\cdot\delta)\right)\]
where \(2^{Q_{\infty}(Z,x)}=k\) and \(\delta\) is the maximum distance between equivalence classes, normalized with the exponential function to remain between \(0\) and \(1\). The second term breaks ties when two instances characterize the same number of classes, preferring the one with the larger distance. Overall, the goal is to find a single value of the non-protected attributes \(x\) such that the neural network model \(\mathcal{D}\) predicts many distinguishable classes of outcomes when \(x\) is paired with the \(m\) protected values. However, finding such inputs requires an exhaustive search over the exponential set of subsets of the input space, and is hence clearly intractable. We propose a gradient-guided search algorithm that searches the space of input variables (attributes) to maximize the number of equivalence classes and generate as many discriminatory instances as possible.
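In code, the fitness of a candidate \(x\) reduces to the number of \(\epsilon\)-separated classes plus the bounded tie-breaker; a minimal sketch (illustrative names):

```python
import math

def fitness(classes, delta):
    """Global search objective: 2^{Q_inf} = k epsilon-separated classes,
    plus a tie-breaker in (0, 1) favoring well-separated classes."""
    k = len(classes)
    return k + (1.0 - math.exp(-0.1 * delta))
```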
**Search Approach.** Our search strategy consists of global and local phases, as in prior work [17, 29, 18]. The goal of the global phase is to maximize the quantitative individual discrimination via gradient-guided clustering. The local phase uses the promising instances to generate the maximum number of discriminatory instances (ID).
_Global Phase._ Given a current instance \(x\), the global phase first pairs it with the \(m\) different values from the space of protected attributes, while keeping the values of the non-protected attributes the same. Then, it receives \(m\) prediction scores from the DNN and partitions them into \(k\) classes. We adopt a constraint-based clustering with tolerance \(\epsilon\), where two elements cannot be in the same cluster if their scores differ by more than \(\epsilon\). The critical step is then to perturb the current instance over a subset of non-protected attributes in a direction that is likely to increase the number of clusters induced from the perturbed instance in the next step of the global search.
In doing so, we first compute the gradients of the DNN loss function for a pair of instances (say \(a,a^{\prime}\)) in the cluster with the most elements. The intuition is that the largest cluster is the most likely to split into \(2\) or more sub-clusters, increasing the number of partitions in the next step. For this pair of samples, we use the non-protected attributes whose gradients have the same direction \(d\), since this indicates a high sensitivity of the loss function to small changes in those common features. If we were to use gradients of opposite directions, we would neutralize their effects, since we only perturb one instance over the non-protected attributes. Finally, we perturb the current sample \(x\) to generate \(x^{\prime}\) using the direction \(d\) and step size \(s_{g}\).
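One global iteration can thus be sketched as follows, assuming callables `predict` (one score per protected variant) and `grad_loss` (gradient of \(J\) w.r.t. the non-protected attributes); the greedy one-dimensional clustering is one simple realization of the \(\epsilon\)-constrained clustering, and all names are illustrative:

```python
import numpy as np

def eps_clusters(scores, eps):
    """Greedy 1-D constraint clustering: after sorting, a new class starts
    whenever the gap to the previous score exceeds eps."""
    order = np.argsort(scores)
    classes, current = [], [order[0]]
    for idx in order[1:]:
        if scores[idx] - scores[current[-1]] > eps:
            classes.append(current)
            current = [idx]
        else:
            current.append(idx)
    classes.append(current)
    return classes

def global_step(x, protected_values, predict, grad_loss, eps, step):
    """One gradient-guided global iteration: cluster the m scores, pick a
    pair in the largest class, and follow their common gradient directions."""
    scores = np.array([predict(x, z) for z in protected_values])
    classes = eps_clusters(scores, eps)
    largest = max(classes, key=len)
    a, b = largest[0], largest[-1]   # a pair from the largest class
    g1 = grad_loss(x, protected_values[a])
    g2 = grad_loss(x, protected_values[b])
    d = np.sign(g1) * (np.sign(g1) == np.sign(g2))  # keep common directions only
    return x + step * d, len(classes)
```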
_Local Phase._ Once we detect an instance with more than \(2\) clusters, we enter a local phase whose goal is to generate as many discriminatory instances (ID) as possible. In our quantitative approach, an unfavorable decision for an individual \(x\) is discriminatory if there is a counterfactual individual \(x^{\prime}\) that received a favorable outcome. Similar to the state-of-the-art [19, 17, 18], we use a non-linear optimizer that takes an initial instance \(x\), a step function to generate the next instance in the neighborhood of the current one, and an objective function that quantifies the discrimination of the current instance. Since our approach uses a continuous objective based on the characteristics of clusters, it can guide the local search to generate discriminatory instances.
**Search Procedure.** Algorithm 1 sketches our search algorithm for quantifying the amount of bias. We first use clustering (the KMeans algorithm) to partition the data points into \(p\) groups (line 1). Next, we run the algorithm until the time-out \(T\) is reached, where in each iteration we seed a sample randomly from one of the \(p\) partitions (line 2). Then, we proceed to the global and local phases of the search.
```
Input: Dataset \(A\), deep learning model \(\mathcal{D}\), the DNN loss function \(J\), protected attributes \(P\), non-protected attributes \(NP\), the number of partitions over the dataset \(p\), the step size in global perturbation \(s_{g}\), the step size in local perturbation \(s_{l}\), the maximum number of global iterations \(N_{g}\), the maximum number of local iterations \(N_{l}\), the tolerance \(\epsilon\), and time-out \(T\). Output: Num. clusters and (local+global) test cases.
1  \(A^{\prime}\), \(cur\leftarrow\) KMeans(\(A\), \(p\)), time()
2  while time() - \(cur<T\) do
3    \(x\), \(i\), \(k\), \(\delta\leftarrow\) pick(\(A^{\prime}\)), 0, 1, 0.0
4    while \(i<N_{g}\) do
5      \(I_{m},S_{m}\leftarrow\) Generate_Predict(\(x\), \(P\))
6      \(X_{k}\), \(\delta^{\prime}\leftarrow\) Clust(\(S_{m}\), \(\epsilon\))
7      \(a,a^{\prime}\leftarrow\) Choose_Pair_Max(\(X_{k}\))
8      \(Gs\leftarrow(\nabla J(a),\nabla J(a^{\prime}))\)
9      \(d\leftarrow\) choose_common_direct(\(Gs\), \(NP\))
10     \(x^{\prime}\leftarrow\) perturb(\(x\), \(d\), \(s_{g}\))
11     if (\(|X_{k}|>k\)) or (\(|X_{k}|=k\) and \(\delta^{\prime}>\delta\)) then
12       eval_f \(\leftarrow\lambda x.\) {
13         \(I^{\prime}_{m},S^{\prime}_{m}\leftarrow\) Generate_Predict(\(x\), \(P\))
14         \(X_{k^{\prime}}\leftarrow\) Clust(\(S^{\prime}_{m}\), \(\epsilon\))
15         \(\Delta\leftarrow\arg\max(X_{k^{\prime}})-\arg\min(X_{k^{\prime}})\)
16         \(local\_inps\).add(\(x\))
17         return \(-\Delta\) }
18       step_f \(\leftarrow\lambda x.\) perturb_local(\(x\), \(s_{l}\))
19       LBFGS(\(x\), eval_f, step_f, \(N_{l}\))
20       \(global\_inps\).add(\(x\))
21     \(k\), \(\delta\), \(x\), \(i\leftarrow\) max(\(k\), \(|X_{k}|\)), \(\delta^{\prime}\), \(x^{\prime}\), \(i+1\)
22
23 return \(k\), \(I=global\_inps\cup local\_inps\)
```
**Algorithm 1** Dice (Search)
In the local phase, we use the general-purpose optimizer LBFGS [32], which takes an initial seed \(x\), an objective function, a step function, and the maximum number of local iterations \(N_{l}\); it returns the instances generated during the optimization (lines 12-19). In the objective function, shown as eval_f (lines 12-17), we generate \(m\) instances with the same non-protected values but different protected ones (line 13). We generate prediction scores for those instances and cluster them with tolerance parameter \(\epsilon\) (line 14). Then, we compute the difference between the indices of the two clusters with the smallest and largest scores (line 15). Finally, we record the generated sample and return the negated difference as the evaluation of the optimizer at the current sample (lines 16-17). The step function, shown as perturb_local (line 18), guides the optimizer to take one step in the input space. It picks a random sample from a different cluster than the current sample, computes the normalized sum of gradients, and perturbs the current sample along the smallest gradients to remain in its neighborhood.
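Concretely, the evaluation function can be handed to an off-the-shelf quasi-Newton optimizer; a sketch with SciPy's L-BFGS-B, using the score spread as a continuous surrogate for the cluster-index difference (illustrative; the paper's optimizer additionally takes a custom step function):

```python
import numpy as np
from scipy.optimize import minimize

def make_eval_f(predict, protected_values, found):
    """Continuous local objective, negated for minimization."""
    def eval_f(x):
        scores = np.array([predict(x, z) for z in protected_values])
        found.append(x.copy())                  # record every probed instance
        return -(scores.max() - scores.min())   # maximize the score spread
    return eval_f

# found = []
# minimize(make_eval_f(predict, protected_values, found), x0,
#          method="L-BFGS-B", options={"maxiter": N_l})
```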
**Debugging Approach.** Since it is computationally difficult to intervene on all possible neurons in a DNN, we first adapt a layer-localization technique from the literature on DL framework debugging [33, 21], where we detect the layer with the largest sensitivity to the protected attributes. Let \(D_{i}(z,x)\) be the output of layer \(i\) over protected value \(z\) and non-protected value \(x\). Let \(\Delta_{i}(x):\mathbb{R}^{|D_{i}|}\times\mathbb{R}^{|D_{i}|}\rightarrow\mathbb{R}\) be the distance between the outputs of the DNN at layer \(i\) as triggered by the \(m\) different protected values and the same non-protected value \(x\), and let \(\delta_{i}\) be \(\max_{x}\Delta_{i}(x)\). The rate of change in the sensitivity of layer \(i\) (w.r.t. protected attributes) is
\[\rho_{i}=\frac{\delta_{i}-\max_{0\leq j<i}\delta_{j}}{\max_{0\leq j<i}\delta_{j}+\epsilon},\]
where \(\delta_{0}=0.0\) and \(\epsilon=10^{-7}\) (to avoid division by zero [33, 21]). Let \(l=\arg\max_{i}\rho_{i}\) be the layer index with the maximum rate of change. Our next step is to localize neurons in layer \(l\) that have significant positive or negative effects on fairness. Let \(V_{l}^{j}\) be the set of possible values for neuron \(j\) at layer \(l\) (recorded during the layer localization). We are interested in computing the average causal effects when the neuron \(D_{l}^{j}\) is activated vs. deactivated, noting that such interventions might affect the functionality of the DNN. Therefore, among a set of intervention values, we choose one activated value \(v_{1}\in V_{l}^{j}\) with \(v_{1}>0\) and one deactivated value \(v_{2}\in V_{l}^{j}\) with \(v_{2}\approx 0\), keeping the functional accuracy of the DNN within \(\epsilon\) of the original accuracy \(A\). We define the average causal difference (ACD) for a neuron \(D_{l}^{j}\) as:
\[\mathbb{E}[Q_{\infty}\mid\texttt{do}(l,j,v_{1}>0),k,A]-\mathbb{E}[Q_{\infty} \mid\texttt{do}(l,j,v_{2}\approx 0),k,A],\]
where the do notation forces the output of neuron \(j\) at layer \(l\) to a fixed value \(v\). We then return the neuron indices with the largest positive ACD (aggravating discrimination) and the smallest negative ACD (mitigating discrimination). Let \(\hat{i}\) and \(\hat{j}\) be the layer and neuron with the largest positive \(ACD\). One simple mitigation strategy is thus to deactivate the neuron \(D_{\hat{i}}^{\hat{j}}\), which is expected to reduce QID by \(ACD/k\) percent. Similarly, activating the neuron with the smallest negative \(ACD\) is expected to reduce QID by \(ACD/k\) percent.
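Both localization steps are short computations once the layer outputs and intervened runs are available; a sketch (illustrative names; `expected_classes_do` would re-run the QID characterization under `forward_do` from Section IV):

```python
import numpy as np

def localize_layer(deltas, eps=1e-7):
    """deltas[i]: max spread of layer-i outputs across the m protected
    variants; returns argmax_i rho_i with delta_0 = 0."""
    best, rhos = 0.0, []
    for d in deltas:
        rhos.append((d - best) / (best + eps))
        best = max(best, d)
    return int(np.argmax(rhos))

def acd(expected_classes_do, l, j, v_on, v_off):
    """Average causal difference of neuron j at layer l: expected number of
    equivalence classes when activated minus when deactivated."""
    return expected_classes_do(l, j, v_on) - expected_classes_do(l, j, v_off)
```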
**Debugging Procedure.** Algorithm 2 shows the debugging aspect of Dice. Given a set of test cases from the search algorithm, we first use a notion of distance (e.g., \(\Delta=L_{1}\)) to compute the difference between any pair of protected values \(z,z^{\prime}\) w.r.t. the outputs of every layer \(l\in\{1,\ldots,N\}\) (line 1). Then, we compute the rates of change (lines 2-3) and return the layer \(l\) with the largest change (line 4). We compute various statistics on the output of every neuron \(i\) at layer \(l\), such as the minimum, maximum, average, average \(\pm\) std. dev., average \(\pm\) 2 \(\times\) std. dev., etc. (line 5). Among those values, we take the smallest and largest values such that intervening on neuron \(i\) at layer \(l\) has minimal impact on the accuracy of the DNN (line 6). Finally, we compute the average causal difference (line 7) and return the indices of the layer, the neurons with large negative influence, and the neurons with large positive influence.
```
Input: Dataset \(A=(A_{X},A_{Z})\), \(\mathcal{D}\) with accuracy \(\mathcal{A}_{\mathcal{D}}\), test cases \(I\), the distance function \(\Delta\), the tolerance of layer localization \(\epsilon_{1}\), the tolerance of accuracy loss \(\epsilon_{2}\), and \(k\) top items. Output: Layer index, negative, and positive neurons.
/* Layer Localization */
1 \(\delta\leftarrow\lambda l.\ \max\sum_{x\in I}\Delta\big{(}D_{l}(z,x),D_{l}(z^{\prime},x)\big{)}\)
2 \(\delta[0]\), \(\delta_{max}\leftarrow\) 0.0, \(\lambda i.\ \max_{j<i}\delta[j]\)
3 \(\rho\leftarrow\lambda i.\ \frac{\delta[i]-\delta_{max}[i]}{\delta_{max}[i]+\epsilon_{1}}\)
4 \(l\leftarrow\arg\max_{i}\rho[i]\)
/* Neuron Localization */
5 \(V_{l}\leftarrow\lambda i.\ \text{stats}(D_{l}^{i})\)
6 \(v_{l}\leftarrow\lambda i.\ \lambda j.\ |\mathcal{A}_{\mathcal{D}}-\mathcal{A}_{\mathcal{D}\leftarrow\texttt{do}(l,i,V_{l}[j])}|\leq\epsilon_{2}\)
7 \(ACD_{l}\leftarrow\lambda j.\ \mathbb{E}(k^{\prime}\mid\texttt{do}(l,j,v_{l}^{j}>0))-\mathbb{E}(k^{\prime}\mid\texttt{do}(l,j,v_{l}^{j}\approx 0))\)
8 return \(l\), \(top_{k}(\max\ ACD_{l})\), \(top_{k}(\min\ ACD_{l})\)
```
**Algorithm 2** Dice (Debugging)
## VI Experiments
**Datasets and DNN models.** We consider \(10\) socially critical datasets from the literature on algorithmic fairness. These datasets and their properties are described in Table I. For the DNN model, we used the same architecture as the literature [17, 18, 29] and trained all datasets on a six-layer fully-connected neural network with \(\langle 64,32,16,8,4,2\rangle\) neurons. We used the same hyperparameters for all trainings, with num_epochs, batch_size, and learning_rate set to \(1000\), \(128\), and \(0.01\), respectively. The accuracy of the trained models is reported in Table V.
**Technical Details.** We implemented Dice with TensorFlow v2.7.0 and scikit-learn v0.22.2. We ran all experiments on an Ubuntu 20.04.4 LTS server with an AMD Ryzen Threadripper PRO 3955WX 3.9GHz 16-core/32-thread CPU and two NVIDIA GeForce RTX 3090 GPUs. We chose the values \(10\), \(1000\), \(0.025\), \(1\), and \(1\) for max_global, max_local, \(\epsilon\), \(s_{g}\), and \(s_{l}\) in Algorithm 1, respectively, and take the average of \(10\) runs for all experiments. In Algorithm 2, we used the \(L_{1}\)-norm, \(10^{-7}\), \(0.05\), and \(3\) for \(\Delta\), \(\epsilon_{1}\), \(\epsilon_{2}\), and \(k\), respectively.
**Research Questions.** We seek to answer the following three questions using our experimental setup.
* **RQ1:** Can Dice characterize the amount of information from protected attributes used for inference?
* **RQ2:** Is the proposed search algorithm effective and efficient (vis-a-vis the state-of-the-art techniques) in generating individual discrimination instances?
* **RQ3:** Can the proposed causal debugging guide us to localize and mitigate the amount of discrimination?

Our open-source tool Dice with all experimental subjects is publicly accessible.
### _Characterizing QID via Search (RQ1)_
An important goal is to characterize the amount of information from protected attributes used during the inference of DNN models. Table II shows the results of experiments answering this research question. The left side of the table shows the initial characteristics such as the number of protected values (\(m\)), the maximum possible amount of discrimination (\(Q_{I}\)) based on \(\min(\epsilon^{-1},m)\), and the initial number of clusters found using samples from the dataset (\(K_{I}\)). The right side of the table shows the results after running our search for \(1\) hour. The column #\(I\) is the number of QID instances generated, and \(K_{F}\) is the maximum number of clusters discovered by Dice. The column T\({}_{K_{F}}\) is the time taken (in seconds) to find the maximum number of clusters from an input with initial clusters \(K_{I}\). The columns \(Q_{\infty}\) and \(Q_{1}\) are the quantitative individual discrimination based on min entropy and Shannon entropy, respectively. The columns #\(I_{K_{F}^{1}}\), #\(I_{K_{F}^{2}}\), and #\(I_{K_{F}^{3}}\) show the number of test cases with the highest, second-highest, and third-highest QIDs, respectively, which order test cases by their QID severity. Overall, the results show that Dice can find \(3.4\times\) more clusters (on average) than the initial characteristics within one minute of search. The DNN for the Students dataset showed the largest increase in the number of clusters, going from \(1.9\) to \(10.9\). Dice found that the Adult Income Census dataset has the largest amount of QID, where \(4.05\) out of \(5.3\) bits (\(76.4\%\)) from the protected variables are used to make decisions. The German Credit dataset, with \(1.61\) out of \(4.0\) bits (\(40.0\%\)), showed the smallest amount of discrimination. For test-case prioritization, the column #\(I_{K_{F}^{1}}\) shows our approach to be useful in finding a small set of generated test cases with the worst-case discrimination. In \(7\) out of \(10\) experiments, Dice found fewer than \(50\) test cases with severe discrimination out of hundreds of thousands of inputs.
**Answer RQ1:** The search algorithm is effective in characterizing the amount of discrimination via QID. Within \(1\) hour, it increased the number of clusters by \(3.4\times\) on average and found instances that used up to \(76\%\) of the protected information (\(4.05\) out of \(5.3\) bits) to infer DNN outcomes. Dice is useful for prioritizing test cases by severity: it generates fewer than \(50\) test cases with the maximum QID among hundreds of thousands of test cases.
### _Individual Discriminatory Instances (RQ2)_
In this section, we compare the efficiency and effectiveness of our search algorithm against the state-of-the-art techniques for finding individual discrimination (\(ID\)) instances (as defined in Section IV). Our baselines are Aequitas [19], ADF [17], and NeuronFair [18]. We obtained the implementations of these tools from their GitHub repositories and configured them according to the prescribed settings for best performance. Following these techniques, we report the results for each protected attribute separately. Table III shows the results of the baselines and Dice in runs of 15 minutes. The results are averaged over \(10\) repeated runs. The column #\(ID\) is the total number of generated individual discriminatory instances. The column \(l_{s}\) is the success rate of the local stage of the search. We exclude the global success rate since the goal of the global phase in our search is to maximize QID, whereas the local phase focuses on generating many \(ID\) instances. We calculate the success rate as the number of \(ID\)s found over the total number of generated samples. The columns \(T.1st\) and \(T.1k\) are the amounts of time (in seconds) taken to find the first \(ID\) instance and to generate \(1{,}000\) individual discriminatory instances (note: \(N/A\) in column \(T.1k\) means that the tool did not generate \(1{,}000\) \(ID\)s within the experiment timeout of \(900\) seconds, on average over 10 runs).
The results show that Dice outperforms the state-of-the-art in generating many ID instances. In particular, Dice finds \(27.1\times\), \(16.0\times\), and \(16.0\times\) more \(ID\)s in the best case compared to Aequitas, ADF, and NeuronFair, respectively. Dice also generates \(3.2\times\), \(2.3\times\), and \(2.6\times\) more \(ID\)s in the worst case compared to Aequitas, ADF, and NeuronFair, respectively. The local search success rates are \(20.6\%\), \(33.0\%\), \(29.6\%\), and \(78.2\%\) on average for Aequitas, ADF, NeuronFair, and Dice, respectively. For the time taken to find the first \(ID\), Aequitas achieves the best result with an average of \(0.03\) (s), whereas Dice took \(1.46\) (s) on average to find the first \(ID\). On average, Dice took the lowest time to generate \(1{,}000\) \(ID\)s at \(57.2\) (s), while ADF
\begin{table}
\begin{tabular}{|l|c|c|l|c|l|l|} \hline
**Dataset** & **\#Instances** & **\#Features** & **Protected Attributes** & \(m\) & \multicolumn{2}{c|}{**Outcome Label**} \\ \hline
Adult Census Income [31] & 32,561 & 13 & Sex, Race, Age & 90 & High Income & Low Income \\ \hline
Compas [34] & 7,214 & 12 & Sex, Age & 12 & Did not Reoffend & Reoffend \\ \hline
German Credit [35] & — & — & Sex, Age & 16 & Good Credit & Bad Credit \\ \hline
Default [36] & 13,636 & 23 & Sex, Age & 12 & Default & Not Default \\ \hline
Heart Health [37] & — & — & Sex, Age & 14 & — & — \\ \hline
Bank Marketing [38] & 45,211 & 16 & Age & 9 & Subscriber & Non-subscriber \\ \hline
Diabetes [39] & 768 & 8 & Age & 10 & Positive & Negative \\ \hline
Students Performance [40] & 1,044 & 32 & Sex, Age & 16 & Pass & Not Pass \\ \hline
MEPS15 [41] & 15,830 & 137 & Sex, Race, Age & 36 & Utilized Benefits & Not Utilized Benefits \\ \hline
MEPS16 [41] & 15,675 & 137 & Sex, Race, Age & 36 & Utilized Benefits & Not Utilized Benefits \\ \hline
\end{tabular}
\end{table} TABLE I: Datasets used in our experiments.
took \(179.1\) (s), NeuronFair took \(135.4\) (s), and Aequitas took the longest at \(197.7\) (s). Overall, our experiments indicate that Dice is more effective in generating \(ID\) instances than the three state-of-the-art techniques, largely due to the smoothness of the feedback during the local search.
**Answer RQ2:** Our experiments demonstrate that Dice outperforms the state-of-the-art fairness testing techniques [19, 17, 18]. In the best case, our approach found \(20\times\) more individual discrimination (ID) instances than these techniques, with almost \(3\times\) higher success rates on average. However, we found that Dice is slower than those techniques in finding the first ID instance, on the order of a few seconds.
### _Causal Debugging of DNNs for Fairness (RQ3)_
We perform experiments on the DNN models to study whether the proposed causal debugging approach is useful in identifying layers and neurons that significantly affect the amount of discrimination as characterized by \(QID\). Table IV shows the results of the experiments (averaged over \(10\) independent runs). The first two columns show the localized layer and its influence (i.e., \(l\) and \(\rho\) in Algorithm 2). The next six columns show the top \(3\) neurons with a positive influence on fairness (i.e., activating those neurons reduces the amount of discrimination based on \(Q_{\infty}\)). The last six columns show the top \(3\) neurons with a negative influence on fairness (i.e., activating those neurons increases the amount of discrimination based on \(Q_{\infty}\)). Layer index \(2\) is localized more frequently than the other layers, while layers \(3\), \(4\), and \(5\) are each localized once. Overall, the average causal difference (ACD) ranges from \(4\%\) to \(55\%\) for neurons with positive fairness effects and from \(0.6\%\) to \(18.3\%\) for neurons with negative fairness effects.
Guided by the localization, Dice intervenes to activate neurons with positive fairness influence or deactivate those with negative influence. Table V shows the results of this mitigation strategy. The columns \(A\) and \(K\) show the accuracy and the number of clusters (averaged over a set of random test cases) reported by Dice before mitigation of the DNN model. The columns \(A^{=0}\) and \(K^{=0}\) are the accuracy and the number of clusters after mitigating the DNN model by _de-activating_ the neuron with the highest negative fairness impact (as suggested by Neuron\({}_{1}^{-}\) in Table IV). Similarly, the columns \(A^{>0}\) and \(K^{>0}\) are the accuracy and the number of clusters after mitigating the DNN model by _activating_ the neuron with the highest positive fairness impact (as suggested by Neuron\({}_{1}^{+}\) in Table IV). The results indicate that the activation interventions can reduce QID discrimination by at least \(5\%\) with a \(3\%\) loss of accuracy and up to \(64.3\%\) with a \(2\%\) loss of accuracy. The de-activation, on the other hand, can improve fairness by at least \(6\%\) with a \(1\%\) loss of accuracy and up to \(27\%\) with a \(2\%\) loss.
**Answer RQ3:** The debugging approach implemented in Dice identified neurons that have at least \(5\%\) and up to \(55\%\) positive causal effects on fairness and those that have at least \(0.6\%\) and up to \(18.3\%\) negative causal effects. A mitigation strategy following the localization can reduce the amount of discrimination by at least \(6\%\) and up to \(64.3\%\) with less than \(5\%\) loss of accuracy.
## VII Discussion
_Limitation_. In this work, we consider the full set of protected values and perturb them to generate counterfactuals. Some perturbations of protected attributes may yield unrealistic counterfactuals and contribute to false positives (an over-approximation of discrimination). This limitation can be mitigated by supplying domain-specific constraints (e.g., Age\(<\)YY \(\Longrightarrow\) NOT(married)); we already apply some common-sense constraints (e.g., to ensure a valid range of age). In addition, similar to any dynamic testing method, our approach might miss discriminatory inputs and is thus prone to false negatives. The probability of missing relevant inputs can be contained under suitable statistical testing (e.g., Bayes factor). Finally, our debugging approach is akin to pin-pointing suspicious code fragments and is based on causal reasoning about a neuron's effect on decision making rather than correlation; however, it is not intended to furnish explanations or interpretations of black-box DNN functions.
_Threats to Validity_. To address internal validity and ensure our findings do not lead to invalid conclusions, we follow established guidelines and take the average of repeated experiments. To ensure that our results are generalizable and to address external validity, we perform our experiments on \(10\) DNN models taken from the literature on fairness testing. However,
\begin{table}
\begin{tabular}{|l|c|c||c|c|c|c|c|c|c|c|c|} \hline
**Dataset** & \(m\) & \(Q_{I}\) & \(K_{I}\) & \#\(I\) & \(K_{F}\) & T\({}_{K_{F}}\) & \(Q_{\infty}\) & \(Q_{1}\) & \#\(I_{K_{F}^{1}}\) & \#\(I_{K_{F}^{2}}\) & \#\(I_{K_{F}^{3}}\) \\ \hline Census & \(90\) & \(5.3\) & \(13.54\) & \(230,593\) & \(35.61\) & \(21.04\) & \(4.05\) & \(2.64\) & \(6.0\) & \(28.6\) & \(111.6\) \\ \hline Compas & \(12\) & \(3.6\) & \(3.12\) & \(157,968\) & \(10.24\) & \(6.50\) & \(1.81\) & \(1.40\) & \(35.2\) & \(338.7\) & \(1,016.9\) \\ \hline German & \(16\) & \(4.0\) & \(2.34\) & \(245,915\) & \(9.56\) & \(13.14\) & \(1.61\) & \(1.10\) & \(6.6\) & \(16.2\) & \(54.8\) \\ \hline Default & \(12\) & \(3.6\) & \(5.58\) & \(258,105\) & \(11.26\) & \(10.94\) & \(2.10\) & \(1.78\) & \(3,528.8\) & \(9,847.2\) & \(9,771.0\) \\ \hline Heart & \(14\) & \(3.8\) & \(4.54\) & \(270,029\) & \(10.01\) & \(11.88\) & \(2.31\) & \(1.80\) & \(21.7\) & \(135.2\) & \(579.7\) \\ \hline Bank & \(9\) & \(3.2\) & \(1.45\) & \(172,686\) & \(8.93\) & \(3.68\) & \(2.25\) & \(1.98\) & \(5,118.5\) & \(13,513.3\) & \(20,438\) \\ \hline Diabetes & \(10\) & \(3.3\) & \(2.39\) & \(504,414\) & \(7.90\) & \(0.016\) & \(1.40\) & \(1.11\) & \(89.7\) & \(609.6\) & \(2,310.1\) \\ \hline Students & \(16\) & \(4\) & \(1.90\) & \(133,221\) & \(10.90\) & \(14\) & \(1.93\) & \(1.35\) & \(16.0\) & \(130.7\) & \(128.7\) \\ \hline MEPS15 & \(36\) & \(5.2\) & \(7.03\) & \(19,673\) & \(18.52\) & \(31.62\) & \(2.61\) & \(1.62\) & \(2.6\) & \(3.5\) & \(6.0\) \\ \hline MEPS16 & \(36\) & \(5.2\) & \(9.06\) & \(14,266\) & \(19.25\) & \(49.16\) & \(2.21\) & \(1.52\) & \(2.0\) & \(3.5\) & \(6.0\) \\ \hline \end{tabular}
\end{table} TABLE II: Dice characterizes \(QID\) for \(10\) datasets and DNNs in \(1\)-hour runs (results are the average of \(10\) runs).
it is an open problem whether these datasets and DNN models are sufficiently representative for fairness testing.
## VIII Related Work
**Fairness Testing of ML systems.** Themis [10] presents a causal discrimination notion, measuring the difference between the fairness metrics of two subgroups via _counterfactual_ queries; i.e., it samples individuals with the protected attributes set to A and compares the outcome to a counterfactual scenario where the protected attributes are set to B. Symbolic generation (SG) [28, 22] presents a black-box testing approach that approximates ML models with decision trees and leverages symbolic execution over the tree structure to find individual discrimination (ID). Aequitas [19] uses a two-step approach that first samples instances uniformly at random from the input dataset to find a discriminatory instance and then locally perturbs those instances to generate further biased test cases. ExpGA [16] proposes a genetic algorithm (GA) to generate ID instances in natural language processing tasks, using a prior knowledge graph to guide the perturbation of protected attributes. While these techniques are black-box, they potentially suffer from a lack of local guidance during the search. ADF [17] utilizes the gradient of the loss function as guidance in generating ID instances: the global phase explores the input space to find a diverse set of individual discrimination instances, whereas the local phase exploits each instance to generate many individual discriminatory (ID) instances in its neighborhood. EIDIG [29] follows similar ideas to ADF but computes gradients differently. First, it uses the gradients of the output (rather than the loss function) to reduce the computation cost at each iteration. Second, it uses the momentum of gradients in
\begin{table}
\end{table} TABLE IV: The localized layer and its influence for each dataset, together with the top three neurons with positive fairness influence and their average causal differences, and the top three neurons with negative fairness influence and their average causal differences.

\begin{table}
\end{table} TABLE V: \(A\) is accuracy, \(K\) is the average number of clusters from test cases; \(A^{=0}\) is the accuracy after deactivating the neuron with the highest negative fairness impacts; \(K^{=0}\) is the average number of clusters after the deactivation; \(A^{>0}\) is the accuracy after activating the neuron with the highest positive fairness impacts; \(K^{>0}\) is the average number of clusters after the activation; and \(T_{I}\) is the computation time for localization and mitigation in seconds.
\begin{table}
\end{table} TABLE III: Comparing Dice with Aequitas [19], ADF [17], and NeuronFair [18] in generating individual discriminatory (\(ID\)) instances per protected attribute (15-minute runs, averaged over \(10\) repetitions).
the global phase to avoid local optima. NeuronFair [18] extends ADF and EIDIG to support unstructured data (e.g., image, text, speech, etc.) where the protected attributes might not be well-defined. In addition, NeuronFair is guided by the DNN's internal neuron states (e.g., the pattern of activation and deactivation) and their activation differences. Beyond the capability of these techniques, Dice quantifies the amount of discrimination, enables software developers to prioritize test cases, and searches multiple protected attributes at once.
Beyond the scope of this paper, a body of prior work [42, 43, 44, 23, 45, 46] considers testing for group fairness. Fairway [43] mitigates biases by finding suitable ML algorithm configurations, using a multi-objective optimization approach (FLASH) [47]. Parfait-ML [45] searches the hyperparameter space of classic ML algorithms via a gray-box evolutionary algorithm to characterize the magnitude of biases arising from hyperparameter configurations.
**Debugging of Deep Neural Networks.** Cradle [33] traces the execution graph of a DNN model over two different deep-learning frameworks and uses the differences in the outcomes to localize which backend functions might cause a bug. However, since Cradle does not use causal analysis, it shows a high rate of false positives. Audee [48] uses a similar approach but leverages causal-testing methods: it designs strategies to intervene on the DNN models and tracks how the interventions affect the observed inconsistencies. We adapted the layer localization of Cradle and Audee, but our causal localization is developed using do logic for a meta-property (fairness); Audee uses a simple perturbation of neuron values for functional correctness (i.e., any inconsistency indicates a bug) without considering the accuracy or the severity of neuron contributions to a bug.
**In-process Mitigation.** A line of work considers in-process algorithms to mitigate biases in ML predictions [49, 50, 51]. Adversarial debiasing [49] and the prejudice remover [50] improve fairness by adding constraints to model parameters or the loss function. Exponentiated gradient [51] uses a meta-learning algorithm to infer a family of classifiers that maximizes accuracy and fairness. Different from these approaches, we develop a mitigation approach that is specialized to handle neural networks for individual fairness. This setting allows us to exploit the layer-based structure of NNs for causal reasoning and mitigation. We believe that our approach can be extended with in-process mitigation techniques to maximize fairness in DNN-based decision support systems.
**Formal Methods.** We believe that this paper connects to the rich literature on formal verification and its applications. We provide two examples. FairSquare [52] certifies a fair decision-making process in probabilistic programs using a novel verification technique called the weighted-volume-computation algorithm. SFTREE [53] formulates the problem of inferring fair decision trees as mixed integer linear programming and applies constraint solvers iteratively to find solutions.
**Fairness in income, wealth, and taxation.** We develop a fairness testing and debugging approach that is uniquely geared toward handling regression problems. Therefore, our approach can be useful to study and address biases in income and wealth distributions [54] across different races and genders. Furthermore, our approach can be useful to study fairness in taxation (e.g., vertical and horizontal equities [55, 56]). We leave further study in these directions to future work.
## IX Conclusion
DNN-based software solutions are increasingly being used in socio-critical applications where a bug in their design may lead to discriminatory behavior. In this paper, we presented Dice: an information-theoretic model to characterize the amount of protected information used in DNN-based decision making. Our experiments showed that the search and debugging algorithms, based on the quantitative landscape, are effective in discovering and localizing fairness defects.
**Acknowledgement.** The authors thank the anonymous ICSE reviewers for their time and invaluable feedback to improve this paper. This research was partially supported by NSF under grant DGE-2043250 and UTEP College of Engineering under startup package.
|
2308.11127 | How Expressive are Graph Neural Networks in Recommendation? | Graph Neural Networks (GNNs) have demonstrated superior performance on
various graph learning tasks, including recommendation, where they leverage
user-item collaborative filtering signals in graphs. However, theoretical
formulations of their capability are scarce, despite their empirical
effectiveness in state-of-the-art recommender models. Recently, research has
explored the expressiveness of GNNs in general, demonstrating that message
passing GNNs are at most as powerful as the Weisfeiler-Lehman test, and that
GNNs combined with random node initialization are universal. Nevertheless, the
concept of "expressiveness" for GNNs remains vaguely defined. Most existing
works adopt the graph isomorphism test as the metric of expressiveness, but
this graph-level task may not effectively assess a model's ability in
recommendation, where the objective is to distinguish nodes of different
closeness. In this paper, we provide a comprehensive theoretical analysis of
the expressiveness of GNNs in recommendation, considering three levels of
expressiveness metrics: graph isomorphism (graph-level), node automorphism
(node-level), and topological closeness (link-level). We propose the
topological closeness metric to evaluate GNNs' ability to capture the
structural distance between nodes, which aligns closely with the objective of
recommendation. To validate the effectiveness of this new metric in evaluating
recommendation performance, we introduce a learning-less GNN algorithm that is
optimal on the new metric and can be optimal on the node-level metric with
suitable modification. We conduct extensive experiments comparing the proposed
algorithm against various types of state-of-the-art GNN models to explore the
explainability of the new metric in the recommendation task. For
reproducibility, implementation codes are available at
https://github.com/HKUDS/GTE. | Xuheng Cai, Lianghao Xia, Xubin Ren, Chao Huang | 2023-08-22T02:17:34Z | http://arxiv.org/abs/2308.11127v3 | # How Expressive are Graph Neural Networks in Recommendation?
###### Abstract.
Graph Neural Networks (GNNs) have demonstrated superior performance in various graph learning tasks, including recommendation, where they explore user-item collaborative filtering signals within graphs. However, despite their empirical effectiveness in state-of-the-art recommender models, theoretical formulations of their capability are scarce. Recently, researchers have explored the expressiveness of GNNs, demonstrating that message passing GNNs are at most as powerful as the Weisfeiler-Lehman test, and that GNNs combined with random node initialization are universal. Nevertheless, the concept of "expressiveness" for GNNs remains vaguely defined. Most existing works adopt the graph isomorphism test as the metric of expressiveness, but this graph-level task may not effectively assess a model's ability in recommendation, where the objective is to distinguish nodes of different closeness. In this paper, we provide a comprehensive theoretical analysis of the expressiveness of GNNs in recommendation, considering three levels of expressiveness metrics: graph isomorphism (graph-level), node automorphism (node-level), and topological closeness (link-level). We propose the topological closeness metric to evaluate GNNs' ability to capture the structural distance between nodes, which closely aligns with the recommendation objective. To validate the effectiveness of this new metric in evaluating recommendation performance, we introduce a learning-less GNN algorithm that is optimal on the new metric and can be optimal on the node-level metric with suitable modification. We conduct extensive experiments comparing the proposed algorithm against various types of state-of-the-art GNN models to explore the effectiveness of the new metric in the recommendation task. For the sake of reproducibility, implementation codes are available at [https://github.com/HKUDS/GTE](https://github.com/HKUDS/GTE).
2023
Graph Neural Networks, Recommender Systems

+
Footnote †: ccs: Information systems Recommender systems.
which primarily focuses on node similarity rather than graph-level properties. In their study on link prediction with subgraph sketching, Chamberlain et al. (Chamberlain et al., 2017) emphasize the importance of distinguishing automorphic nodes (symmetric nodes in the same orbit induced by the graph automorphism group) for link prediction, highlighting that the message passing mechanism of GNNs with equivalent power to the WL test lacks this ability. Although node automorphism is more relevant to link prediction and recommendation than graph isomorphism, it still does not fully align with the objective of recommendation, as it only requires distinguishing different nodes without considering their relative proximity. Secondly, existing works primarily focus on general graphs, while recommendation systems typically involve bipartite user-item interaction graphs, allowing for stronger conclusions regarding the expressiveness of GNNs (as we will demonstrate in Section 3). Figure 1(a) provides an overview of the current progress in formulating the expressiveness of GNNs, depicted by the light orange and green boxes.
In this paper, we propose a comprehensive theoretical framework for analyzing the expressiveness of GNNs in recommendation, encompassing three levels of expressiveness metrics: i) Graph-level: The ability to distinguish isomorphic graphs, while less directly relevant to recommendation tasks, is included in our framework to ensure coherence and consistency with previous works (Beng et al., 2017; Chen et al., 2018; Chen et al., 2018). ii) Node-level: The ability to distinguish automorphic nodes, as mentioned in Chamberlain et al. (Chamberlain et al., 2017), is particularly relevant to recommendation systems as it assesses the model's capability to identify different users and items. At this level, we investigate the impact of equipping message passing GNNs with distinct (yet non-random) initial embeddings. Our research demonstrates that GNNs with distinct initial embeddings can successfully differentiate some of the automorphic nodes, although not all of them. Notably, we establish that when the graph is constrained to a bipartite structure, GNNs with distinct initial embeddings are capable of distinguishing all automorphic nodes. iii) Link-level: The ability to discriminate nodes of different topological closeness to a given node. We define topological closeness in Section 4 and propose it as a new metric of expressiveness that directly aligns with the recommendation objective. Our theoretical analysis shows that the popular paradigm adopted in most GNN-based recommender systems (message passing GNN with random initial embeddings, using the inner product between embeddings as the prediction score) cannot fully discriminate nodes based on topological closeness. The relations of the three levels of metrics to different graph tasks are illustrated in Figure 1(b).
It is worth noting that no single expressiveness metric can fully explain recommendation, as user preferences involve much more complicated factors than what can be encoded in a user-item interaction graph. Even if a metric directly aligns with the recommendation objective, achieving optimality on that metric does not guarantee flawless recommendation performance. Therefore, in Section 5, we analyze the effectiveness of topological closeness, the newly proposed metric, in explaining recommendation performance. Specifically, we propose a lightweight Graph Topology Encoder (GTE) that adopts the message passing framework, but does not have learnable parameters. The learning-less characteristic of GTE makes it much more efficient than learning-based GNN recommenders. We prove that GTE is optimal on the new topological closeness metric and, with suitable modification of the mapping function, can achieve optimality on the node automorphism metric and expressive power equivalent to the WL test on the graph isomorphism metric. Since GTE is optimal in discriminating nodes by topological closeness, we conduct various experiments with GTE and state-of-the-art GNN and GCL models to evaluate the effectiveness of the new metric in the recommendation task. The theories we prove in this paper and their relations to previous works are presented in Figure 1(a) (highlighted blue boxes).
In summary, our contributions are highlighted as follows:
* We perform a comprehensive theoretical analysis on the expressiveness of GNNs in recommendation under a three-level framework designed specifically for the recommendation task.
* We introduce a new link-level metric of GNN expressiveness, topological closeness, that directly aligns with the recommendation objective and is more suitable for evaluating recommendation expressiveness.
* We propose a learning-less GNN algorithm GTE that is optimal on the link-level metric, whose learning-less feature enables it to be much more efficient than learning-based GNN recommenders.
* We conduct extensive experiments on six real-world datasets of different sparsity levels with GTE and various baselines to explore the effectiveness of topological closeness in recommendation.
## 2. Preliminaries and Related Work
### GNNs for Recommendation
The ability to extract multi-hop collaborative signals by aggregating neighborhood representations makes graph neural networks a prominent direction of research in recommender systems (Kang et al., 2017; Wang et al., 2018). Most GNN-based recommender models adopt the message passing type of GNNs, or more specifically, the graph convolutional networks (GCN) (Kipf and Welling, 2017), as the backbone, such as NGCF (Zhou et al., 2018) and PinSage (Ying et al., 2018). GCCF (Chen et al., 2020) incorporates a residual structure into this paradigm. LightGCN (He et al., 2020) further adapts to the recommendation task by removing the non-linear embedding transformation. There are also attempts to enhance the GCN framework with supplementary tasks, such as masked node embedding reconstruction in (Zhou et al., 2018).

Figure 1. Theoretical framework of GNN expressiveness.
Recently, the self-supervised learning (SSL) paradigm has been incorporated into GNN-based recommenders to address the data sparsity issue (Zhou et al., 2018; PinSage and GCN, 2018; Zhou et al., 2018; GCN and GCN, 2017). Graph contrastive learning (GCL) is one of the most prominent lines of research, leveraging the self-supervision signals produced by aligning embeddings learned from different views. Various GCL approaches have been proposed to produce augmented views, such as random feature dropout (Zhou et al., 2018), hypergraph global learning (Zhou et al., 2018), adaptive dropping based on node centrality (Zhou et al., 2018), noisy embedding perturbation (Zhou et al., 2018), and SVD reconstruction (Beng et al., 2018). Despite their promising performance on real data, these models are often validated only by experiments, with little theoretical formulation of or guarantee on their capability.
### The Graph-Level Expressive Power of GNNs
In recent years, there has been significant research investigating the expressiveness of GNNs using graph-level metrics, such as the graph isomorphism test. In particular, Xu et al. (2019) demonstrate that the expressive power of message passing GNNs is at most equivalent to the Weisfeiler-Lehman (WL) test, a popular graph isomorphism algorithm. Additionally, they establish that this equivalence can be achieved by ensuring that both the aggregation function and the readout function are injective. In our analysis, we adopt and adapt their conclusions, introducing relevant concepts and notations that will be utilized throughout our study.
**Theorem 1** (from Xu et al. (2019)).: _Let \(h_{v}^{(k)}\) be the feature of node \(v\) in the \(k\)-th iteration, and \(\mathcal{N}(v)\) be the set of neighboring nodes of \(v\). With sufficient layers, a GNN can map any two graphs \(G_{1}\) and \(G_{2}\) that the WL test decides as non-isomorphic to two different multisets of node features \(\{h_{v}^{(k)}\}_{1}\) and \(\{h_{v}^{(k)}\}_{2}\), if the aggregation function \(h_{v}^{(k)}=g(\{h_{u}^{(k-1)}:u\in\mathcal{N}(v)\cup\{v\}\})\) is injective. Here, a multiset is defined as a set that allows multiple instances of its elements._
It is noteworthy that the aforementioned theorem solely focuses on the message passing mechanism of GNNs and assumes that nodes possess identical constant initial features, similar to the WL test. However, recent studies by Abboud et al. (2018) and Sato et al. (2018) highlight that the capacity of GNNs can be significantly enhanced when nodes are equipped with randomly initialized features. Remarkably, random node initialization empowers GNNs with universality, enabling them to approximate any function on a graph and potentially solve the graph isomorphism problem (Beng et al., 2018). It is important to emphasize that this conclusion does not come with a complexity guarantee since the computational complexity of the graph isomorphism problem remains unknown. Random initialization represents a stronger assumption and presents greater challenges for analysis. In our paper, we specifically investigate the impact of a weaker assumption, distinct initialization, on the expressiveness of GNNs.
However, when evaluating the expressiveness of GNNs in recommendation systems, it is important to use metrics that align with the specific task of recommendation itself. Graph isomorphism tests and graph-level metrics, such as graph classification, may not directly capture the ability of GNNs to distinguish between different item nodes and accurately discriminate their closeness to a user node. In the context of link prediction, Chamberlain et al. (2018) suggest that the performance of message passing GNNs is hindered by their inability to differentiate automorphic nodes. Automorphic nodes refer to symmetric nodes within the same orbit induced by the graph automorphism group (e.g., nodes \(a\) and \(b\), \(c\) and \(d\), \(e\) and \(f\) in Figure 2). The issue arises because automorphic nodes are assigned the same final features during the Weisfeiler-Lehman (WL) test, leading to indistinguishable effects in message passing GNNs. However, it is worth noting that this limitation assumes that all nodes have identical initial features. As a node-level metric, the ability to distinguish automorphic nodes becomes more relevant as it reflects the model's capability to differentiate between users and items. Therefore, it serves as a more appropriate metric for assessing GNN expressiveness in recommendation systems. Subsequent sections of our paper will demonstrate how GNNs, with distinct initial embeddings, can effectively address the issue of distinguishing automorphic nodes in user-item bipartite graphs.
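For concreteness, the color-refinement procedure underlying the WL test can be sketched in a few lines (a minimal illustration of our own with identical initial colors, as the test assumes; the helper name `wl_refine` and the toy graph are ours):

```python
def wl_refine(N, colors, rounds):
    """Iteratively refine node colors: each node's new color is a hash of
    its own color together with the sorted multiset of its neighbors' colors."""
    for _ in range(rounds):
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in N[v]))))
            for v in N
        }
    return colors

# With identical initial colors, automorphic nodes always end up with the
# same color, which is exactly why plain message passing cannot separate them.
N = {0: [1], 1: [0, 2], 2: [1]}   # path graph; nodes 0 and 2 are automorphic
colors = {v: 0 for v in N}
print(wl_refine(N, colors, 3))    # the colors of nodes 0 and 2 coincide
```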
## 3. Expressing Node-Level
Distinguishability of GNNs
This section explores how distinct initial embeddings (DIE) enhance GNNs' ability to distinguish automorphic nodes. It's important to note that distinct initialization is a more relaxed assumption than random initialization in (Beng et al., 2018; GCN and GCN, 2017), as it accepts any set of predefined unique initial embeddings, including the simplest one-hot encoding by node ID. We classify the automorphic nodes into three types:
* **Type I**: a pair of automorphic nodes \(u\) and \(v\) such that \(\mathcal{N}(u)=\mathcal{N}(v)\) (i.e., they share the same set of neighboring nodes). For example, \(c\) and \(d\) in Figure 2.
* **Type II**: a pair of automorphic nodes \(u\) and \(v\) such that \(u\in\mathcal{N}(v)\), \(v\in\mathcal{N}(u)\), and \(\mathcal{N}(u)-\{v\}=\mathcal{N}(v)-\{u\}\) (i.e., they are neighbors to each other, and share the same set of other neighboring nodes). For example, \(a\) and \(b\) in Figure 2.
* **Type III**: a pair of automorphic nodes \(u\) and \(v\) such that \(\mathcal{N}(u)-\{v\}\neq\mathcal{N}(v)-\{u\}\) (i.e., no matter whether they are neighbors to each other, their neighborhoods differ by at least a pair of automorphic nodes \(w\) and \(w^{\prime}\), such that \(w\in N(u)\) but \(w\notin N(v)\), and \(w^{\prime}\in N(v)\) but \(w^{\prime}\notin N(u)\)). For example, \(e\) and \(f\) in Figure 2; the three conditions are made concrete in the sketch below.
Figure 2. Illustrated examples of three types of automorphic nodes. (\(a\) and \(b\) belong to Type II, \(c\) and \(d\) belong to Type I, \(e\) and \(f\) belong to Type III.)
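The sketch referenced above (a minimal Python illustration; the helper name `classify_pair` and the toy graph are ours, not the graph of Figure 2) distinguishes the three conditions:

```python
def classify_pair(u, v, N):
    """Classify an automorphic pair (u, v) into Type I/II/III.
    N: dict mapping each node to the set of its neighbors.
    Assumes u and v are already known to be automorphic."""
    if N[u] == N[v]:
        return "Type I"    # identical neighbor sets
    if u in N[v] and v in N[u] and N[u] - {v} == N[v] - {u}:
        return "Type II"   # mutual neighbors sharing all other neighbors
    return "Type III"      # neighborhoods differ beyond u and v themselves

# toy 4-node graph: c and d share neighbors {a, b}; a and b are linked
# to each other and share the other neighbors {c, d}
N = {"a": {"b", "c", "d"}, "b": {"a", "c", "d"},
     "c": {"a", "b"}, "d": {"a", "b"}}
print(classify_pair("c", "d", N))  # -> Type I
print(classify_pair("a", "b", N))  # -> Type II
```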
### DIE Does Not Fully Solve Node Automorphism on General Graphs
In this section, we show that GNNs with distinct initial embeddings can only distinguish two of the three types of automorphic nodes.
**Theorem 2**.: _Assuming that the aggregation function \(g\) of the graph neural networks is injective, and every node receives a distinct initial embedding, for a pair of automorphic nodes \(u\) and \(v\), in every iteration, they will be assigned different embeddings if and only if one of the two conditions is satisfied: (i) The GNN implements residual connections, and \(u\) and \(v\) belong to Type I or Type III. (ii) The GNN does not implement residual connections, and \(u\) and \(v\) belong to Type II or Type III. In other words, regardless of whether the GNN implements residual connections or not, it can only distinguish two out of the three types of automorphic nodes._
**Proof**. We first prove case (i), i.e., with residual connections:
We prove that for a pair of automorphic nodes \(u\) and \(v\) of Type I or III, they will be assigned different embeddings in any iteration \(k\). This is obviously true when \(k=0\), because the nodes are initialized with distinct embeddings. If it holds for iteration \(k=i-1\), then in iteration \(k=i\), the embeddings for \(u\) and \(v\) are:
\[h_{u}^{(i)}=g(\{h_{u}^{(i-1)}\}\cup\{h_{w}^{(i-1)}:w\in\mathcal{N}(u)\}) \tag{1}\]
\[h_{v}^{(i)}=g(\{h_{v}^{(i-1)}\}\cup\{h_{w^{\prime}}^{(i-1)}:w^{\prime}\in \mathcal{N}(v)\}) \tag{2}\]
If \(u\) and \(v\) are of Type I, then \(\{h_{u}^{(i-1)}\}\cup\{h_{w}^{(i-1)}:w\in\mathcal{N}(u)\}\) and \(\{h_{v}^{(i-1)}\}\cup\{h_{w^{\prime}}^{(i-1)}:w^{\prime}\in\mathcal{N}(v)\}\) are two distinct multisets, one exclusively containing \(h_{u}^{(i-1)}\) and the other exclusively containing \(h_{v}^{(i-1)}\), and \(h_{u}^{(i-1)}\neq h_{v}^{(i-1)}\). If \(u\) and \(v\) are of Type III, the above-mentioned two multisets are also distinct, because there exists at least a pair of automorphic nodes \(w\) and \(w^{\prime}\) (distinct from \(u\), \(v\)), such that \(w\in N(u)\) but \(w\notin N(v)\), and \(w^{\prime}\in N(v)\) but \(w^{\prime}\notin N(u)\). Thus, \(h_{w}^{(i-1)}\) and \(h_{w^{\prime}}^{(i-1)}\) are exclusively contained in one of the two multisets, respectively. Moreover, \(w,w^{\prime}\) must be Type III automorphic nodes, because their neighborhoods differ by nodes other than each other (\(u\) and \(v\)). By the induction assumption, \(h_{w}^{(i-1)}\neq h_{w^{\prime}}^{(i-1)}\). In summary, the input multisets of \(u\) and \(v\) to \(g\) must be different if they are Type I or III. Then, since \(g\) is injective, \(h_{u}^{(i)}\neq h_{v}^{(i)}\). By induction, this holds for all iterations. If \(u\) and \(v\) are of Type II, the two input multisets are the same, and the above result does not hold.
Next, we prove case (ii), i.e., without residual connections:
Similarly, we can demonstrate that for a pair of automorphic nodes \(u\) and \(v\) of Type II or III, they will be assigned different embeddings in any iteration \(k\). This is evident when \(k=0\) because the nodes are initially initialized with distinct embeddings. Assuming it holds true for iteration \(k=i-1\), we can show that when \(k=i\), the embeddings for nodes \(u\) and \(v\) are as follows:
\[h_{u}^{(i)}=g(\{h_{w}^{(i-1)}:w\in\mathcal{N}(u)\}) \tag{3}\]
\[h_{v}^{(i)}=g(\{h_{w^{\prime}}^{(i-1)}:w^{\prime}\in\mathcal{N}(v)\}) \tag{4}\]
If nodes \(u\) and \(v\) are of Type II, we can observe that the multisets \(\{h_{w}^{(i-1)}:w\in\mathcal{N}(u)\}\) and \(\{h_{w^{\prime}}^{(i-1)}:w^{\prime}\in\mathcal{N}(v)\}\) are two distinct multisets. Specifically, one exclusively contains the embedding \(h_{v}^{(i-1)}\), while the other exclusively contains the embedding \(h_{u}^{(i-1)}\). It is also established that \(h_{u}^{(i-1)}\neq h_{v}^{(i-1)}\). If nodes \(u\) and \(v\) are of Type III, the Type III argument presented in case (i) still applies: there exists a pair of automorphic nodes \(w\) and \(w^{\prime}\) with \(w\in N(u)\) but \(w\notin N(v)\), and \(w^{\prime}\in N(v)\) but \(w^{\prime}\notin N(u)\), so the two multisets differ. Thus, the input multisets of nodes \(u\) and \(v\) to the aggregation function \(g\) must be different if they are of Type II or III. Then, since the aggregation function \(g\) is injective, we can conclude that \(h_{u}^{(i)}\neq h_{v}^{(i)}\). By induction, we can assert that this holds for all iterations.
However, if \(u\) and \(v\) are of Type I, the two input multisets are the same, and the above result does not hold.
This theorem suggests that on a general graph, using distinct embedding initialization can only provide partial differentiation between automorphic nodes for GNNs. However, it is important to note that user-item interaction graphs in recommendation systems are bipartite graphs, where a user node cannot be connected to another user node, and an item node cannot be connected to another item node. This bipartite structure allows us to leverage this inherent feature and demonstrate that with distinct initial embeddings, GNNs can fully address the node automorphism problem on user-item bipartite graphs.
### DIE Solves Node Automorphism on Bipartite Graphs
We start by proving that Type II automorphic nodes cannot exist in a connected bipartite graph with more than two nodes.
**Lemma 3**.: _In a connected bipartite graph with more than 2 nodes, if a pair of nodes \(u\) and \(v\) are automorphic, they cannot be Type II automorphic nodes._
**Proof**. If \(u\) and \(v\) are both user nodes or both item nodes, there cannot be an edge between them, so the condition for Type II, \(u\in\mathcal{N}(v)\) and \(v\in\mathcal{N}(u)\), cannot hold. If one of \(u\), \(v\) is a user node and another one is an item node, and they are neighbors to each other, then since all neighbors of a user node must be item nodes, and all neighbors of an item node must be user nodes, \(\mathcal{N}(u)-\{v\}\) cannot be equal to \(\mathcal{N}(v)-\{u\}\), unless \(\mathcal{N}(u)-\{v\}=\emptyset\) and \(\mathcal{N}(v)-\{u\}=\emptyset\). If \(\mathcal{N}(u)-\{v\}=\emptyset\) and \(\mathcal{N}(v)-\{u\}=\emptyset\), then \(u\) and \(v\) must form a connected component that is isolated from the rest of the graph. However, since the graph is a connected bipartite graph with more than 2 nodes, the above scenario cannot occur.
With Lemma 3 and Theorem 2, we prove the following theorem.
**Theorem 4**.: _Assume that the graph is a connected bipartite graph with more than 2 nodes. Assume that the aggregation function \(g\) of the GNN is injective, and residual connections are implemented. Assume that every node receives a distinct initial embedding. For a pair of automorphic nodes \(u\) and \(v\), in every iteration, they will be assigned different embeddings._
**Proof**. By Lemma 3, \(u\) and \(v\) must be either Type I or III automorphic nodes. By Theorem 2, as long as the GNN implements residual connections, \(u\) and \(v\) will be assigned different embeddings in every iteration due to the injectivity of the aggregation function.
This conclusion indicates that a GNN is capable of distinguishing automorphic nodes on a connected user-item bipartite graph with the following three design choices: i) distinct initial embeddings; ii) residual connections; iii) injective aggregation function. These design choices provide a useful guideline for developing powerful
GNN-based recommender systems that aim to achieve optimal expressiveness on the node-level metric.
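To make these design choices concrete, a single layer combining all three could be sketched as follows (an illustrative numpy sketch of our own, not any specific published model; by Theorem 9 in Section 4.4, a suitable \(\phi\) makes the sum aggregator injective):

```python
import numpy as np

def gnn_layer(H, A, phi):
    """One layer with residual connection and sum aggregation.

    H:   (n, d) node embeddings; rows must be distinct at initialization
    A:   (n, n) 0/1 adjacency matrix
    phi: mapping applied after aggregation (an MLP in practice; here a
         placeholder for the function that makes the aggregator injective)
    """
    aggregated = A @ H + H   # sum over N(v), plus the node itself (residual)
    return phi(aggregated)

# distinct initial embeddings, e.g. one-hot by node ID
n = 4
H = np.eye(n)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = gnn_layer(H, A, phi=lambda X: X)   # identity phi for illustration only
```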
## 4. Encoding Node Topological Closeness with GNNs
While the capability of node automorphism provides a better metric for recommendation expressiveness, it is not directly aligned with the intended task. This is because it only focuses on distinguishing between different nodes, without considering their structural closeness to one another. In recommendation systems, models should not only differentiate between two items but also determine which one is closer to the target user's profile. Therefore, it is important to focus on a link-level metric that captures the closeness between nodes in the graph's structural space.
The traditional metric for evaluating the closeness between two nodes is the geodesic distance, which is defined as the length of the shortest path between the two nodes (Beng et al., 2017; Chen et al., 2018). However, this metric may not accurately reflect the true similarity between nodes as encoded in the graph structure. This is especially true in tasks where clusters and community structures are important, such as recommendation systems. For instance, in Figure 3(b), nodes \(v_{1}\) and \(v_{2}\) have the same geodesic distance to node \(u\). However, it is evident that \(v_{2}\) should be considered "closer" to \(u\) than \(v_{1}\) because they reside together in a denser cluster.
In light of this limitation, we propose a new link-level metric _topological closeness_ that evaluates the closeness between two nodes by considering not only the length of the shortest path but also the number of possible paths between them.
### Topological Closeness
Here, we define the k-hop topological closeness between two nodes \(u\) and \(v\), denoted as \(k\)-\(TC(u,v)\), as follows.
**Definition 5** (k-Hop Topological Closeness).: _Given two nodes \(u\) and \(v\) in an undirected graph, the \(k\)-hop topological closeness between \(u\) and \(v\) in this graph is defined as:_
\[k\text{-}TC(u,v)=\left|\mathcal{P}^{k}_{u,v}\right|\]
_where \(\mathcal{P}^{k}_{u,v}\) is the set of all possible paths of length \(k\) between \(u\) and \(v\). Note that the paths here allow repeated vertices, repeated edges, and self-loops. We further define the 0-hop topological closeness between a node \(u\) and itself as \(0\)-\(TC(u,u)=1\)._
The capability of topological closeness (TC) to intuitively capture both the distance and clustering information between two nodes is illustrated in Figure 3. In Figure 3(a), both \(v_{1}\) and \(v_{2}\) have only one simple path2 to \(u\), representing the minimum clustering structure they can have when connected to \(u\). Additionally, \(v_{1}\) has a shorter distance to \(u\) compared to \(v_{2}\). Consequently, \(v_{1}\) exhibits a higher TC value to \(u\) than \(v_{2}\). This occurs because TC takes into account self-loops and repeated vertices, and a shorter simple path allows more possible paths of length \(k\) with self-loops and repeated vertices. In Figure 3(b), both \(v_{1}\) and \(v_{2}\) are equidistant from \(u\). However, \(v_{2}\) resides within a denser cluster alongside \(u\). Thus, \(v_{2}\) demonstrates a higher TC value to \(u\) compared to \(v_{1}\), due to the larger number of paths connecting \(v_{2}\) and \(u\) than those connecting \(v_{1}\) and \(u\).
Footnote 2: A simple path is defined as a path in which all the internal vertices are distinct.
It is important to note that there are multiple methods available for combining clustering density information and the shortest distance between two nodes. For instance, one approach involves calculating a weighted sum of all simple paths connecting the two nodes, where the weight is determined by the inverse of the path length. In our case, we define topological closeness in this manner because this definition aligns naturally with the structure of GNNs and can be efficiently computed using a message-passing GNN. The subsequent two subsections will provide further clarity on this concept and its relevance within our framework.
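Because the paths in Definition 5 may repeat vertices and take self-loops, \(k\)-\(TC(u,v)\) equals the number of length-\(k\) walks between \(u\) and \(v\) after adding a self-loop to every node, i.e. entry \((u,v)\) of \((A+I)^{k}\) for adjacency matrix \(A\); this is also what Lemma 7 below establishes via GTE's propagation. A minimal numpy sketch (function name and toy graph are ours):

```python
import numpy as np

def k_tc(A, k):
    """All-pairs k-hop topological closeness.

    Since paths may repeat vertices/edges and take self-loops,
    k-TC(u, v) equals the (u, v) entry of (A + I)^k."""
    M = A + np.eye(A.shape[0])
    return np.linalg.matrix_power(M, k)

# On a path graph 0-1-2, node 1 is "closer" to node 0 than node 2 is,
# mirroring the intuition of Figure 3(a).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
TC3 = k_tc(A, 3)
print(TC3[0, 1], TC3[0, 2])   # prints 5.0 3.0: 3-TC(0,1) > 3-TC(0,2)
```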
### Popular GNN Recommender Paradigm Does Not Fully Capture Topological Closeness
In recent years, most GNN-based recommender systems (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Wang et al., 2018; Wang et al., 2018) adopt the following paradigm. Initially, each user \(u\) and item \(v\) are assigned random initial embedding vectors \(\mathbf{e}^{(0)}_{u}\in\mathbb{R}^{d}\) and \(\mathbf{e}^{(0)}_{v}\in\mathbb{R}^{d}\) respectively, where \(d\) represents the dimensionality of the embeddings. In the \(k\)-th GNN layer, the embeddings are updated using the following rule, which incorporates residual connections:
\[\mathbf{e}^{(k)}_{u}=\sigma\Big(\sum_{v\in\mathcal{N}(u)}\phi(\mathbf{e}^{(k-1)}_{v})\Big)+\mathbf{e}^{(k-1)}_{u} \tag{5}\]
\[\mathbf{e}^{(k)}_{v}=\sigma\Big(\sum_{u\in\mathcal{N}(v)}\phi(\mathbf{e}^{(k-1)}_{u})\Big)+\mathbf{e}^{(k-1)}_{v} \tag{6}\]
where \(\sigma\) denotes an activation function and \(\phi\) generally represents element-wise multiplication with the coefficients of the normalized adjacency matrix. The final embeddings of the nodes are then computed as follows:
\[\mathbf{e}_{u}=\theta(\{\mathbf{e}^{(k)}_{u}\}_{k=0}^{L}),\quad\mathbf{e}_{v}= \theta(\{\mathbf{e}^{(k)}_{v}\}_{k=0}^{L}) \tag{7}\]
where \(L\) represents the number of GNN layers, and \(\theta\) denotes an aggregation function applied to the embeddings from all layers, typically implemented as a sum or mean operation. Finally, the predicted score of a user \(u\)'s preference for an item \(v\) is computed by taking the inner product of their final embeddings:

\[\hat{y}_{u,v}=\mathbf{e}_{u}\cdot\mathbf{e}_{v} \tag{8}\]

Figure 3. Illustrations of Topological Closeness with k=3.
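Putting Eqs. (5)-(8) together, the paradigm reduces to a few lines of numpy (a stripped-down sketch with \(\sigma\) and \(\phi\) as identity, matching the simplified version analyzed below; all variable names are ours):

```python
import numpy as np

def popular_gnn_scores(A_norm, E0, L):
    """A_norm: (n, n) normalized adjacency of the user-item graph;
    E0: (n, d) random initial embeddings; L: number of GNN layers.
    Returns the (n, n) matrix of inner-product scores hat{y}."""
    layers = [E0]
    for _ in range(L):
        # Eqs. (5)-(6) with sigma = phi = identity: neighbor sum + residual
        layers.append(A_norm @ layers[-1] + layers[-1])
    E = np.mean(layers, axis=0)   # theta in Eq. (7), taken as a mean over layers
    return E @ E.T                # Eq. (8): hat{y}_{u,v} = e_u . e_v

rng = np.random.default_rng(0)
A = np.array([[0, 0, 1, 1],      # 2 users (rows 0-1), 2 items (rows 2-3)
              [0, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
scores = popular_gnn_scores(A / A.sum(1, keepdims=True),
                            rng.normal(size=(4, 8)), L=2)
```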
This section provides a proof demonstrating that the message passing GNN and inner product prediction mechanism employed in this paradigm are unable to fully discriminate between two item nodes based on their topological closeness to a user node. To further substantiate this claim, we conduct experiments on real-world datasets, which are presented in Section 5.4.
For the sake of simplicity in calculations and clearer presentation, we consider a simplified version of the aforementioned paradigm. Specifically, we assume that \(\sigma\) and \(\phi\) are identity functions. Hence, the embedding updating rule is simplified as:
\[\mathbf{e}_{u}^{(k)}=\sum_{v\in\mathcal{N}(u)}\mathbf{e}_{v}^{(k-1)}+\mathbf{e}_{u}^{(k-1)},\quad\mathbf{e}_{v}^{(k)}=\sum_{u\in\mathcal{N}(v)}\mathbf{e}_{u}^{(k-1)}+\mathbf{e}_{v}^{(k-1)} \tag{9}\]
Note that we only study the model structure of message passing and prediction in the above analysis, without considering the learning process. It is possible that the models could fit the task better with the learnable embeddings that are widely adopted in GNN recommenders, but theoretical analysis involving learning would require complicated case-by-case analysis that we cannot include here.
### Graph Topology Encoder (GTE)
Indeed, it is evident that no single metric can fully capture a model's capability in recommendation, even if the metric aligns with the recommendation objective. User behavior is considerably more intricate than what is encompassed within the user-item graph's interaction history. As a result, it is crucial to investigate the level of interpretability of the proposed metric in the context of recommendation. To address this concern, we introduce a learning-less GNN algorithm termed GTE. GTE is proven to be optimal in terms of the link-level metric of topological closeness.
At the beginning, each item \(v_{i}\) is assigned a one-hot initial feature \(\mathbf{h}_{v_{i}}^{(0)}\in\mathbb{R}^{I}\), where \(I\) is the total number of items. For \(j\in[0,I)\), the \(j\)-th entry of the initial feature is \(\mathbf{h}_{v_{i}}^{(0)}\left[j\right]=\begin{cases}1&i=j\\ 0&i\neq j\end{cases}.\)
Each user \(u_{i}\) is assigned an initial feature vector \(\mathbf{h}_{u_{i}}^{(0)}\in\mathbb{R}^{I}\), where all entries are initialized to \(0\). It is important to note that, unlike traditional GNN-based recommenders where the embeddings are learnable, the features in GTE are fixed. In fact, the entire algorithm does not involve any learning process. In the \(k\)-th GNN layer, the features are propagated on the graph using a simple rule:
\[\mathbf{h}_{u}^{(k)}=\phi\left(\sum_{v\in\mathcal{N}(u)}\mathbf{h}_{v}^{(k-1)}+\mathbf{h}_{u}^{(k-1)}\right),\quad\mathbf{h}_{v}^{(k)}=\phi\left(\sum_{u\in\mathcal{N}(v)}\mathbf{h}_{u}^{(k-1)}+\mathbf{h}_{v}^{(k-1)}\right)\]
where \(\phi\) is a mapping function, and in our case we simply use the identity function as \(\phi\). The message propagation is performed \(L\) times, where \(L\) is the number of GNN layers. Finally, for user \(u_{i}\), the predicted preference score for item \(v_{j}\) is \(\hat{y}_{i,j}=\mathbf{h}_{u_{i}}^{(L)}\left[j\right]\). Note that unlike most GNN-based recommenders that use inner product for prediction, GTE directly uses a specific dimension in the feature space to represent the score for an item.
The procedure of GTE is formally presented in Algorithm 1. GTE exploits the graph collaborative filtering signals via a fixed GNN structure without any learnable components, which makes the execution of the algorithm fast and reliable. We prove that GTE is optimal on the proposed topological closeness metric by showing that for a user node \(u_{i}\) and two item nodes \(v_{j}\) and \(v_{k}\), if \(L\)-\(TC(u_{i},v_{j})>L\)-\(TC(u_{i},v_{k})\), it is certain that \(\hat{y}_{i,j}>\hat{y}_{i,k}\).
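As Algorithm 1 itself is not reproduced here, the following is a minimal sparse-matrix sketch of the procedure just described (function and variable names are ours):

```python
import scipy.sparse as sp

def gte_scores(A, num_users, num_items, L=3):
    """Learning-less GTE: propagate fixed one-hot item features L times.

    A: (n, n) sparse adjacency of the user-item bipartite graph, with the
       first num_users rows/columns being users and the rest being items.
    Returns a (num_users, num_items) score matrix; by Lemma 7,
    scores[i, j] = L-TC(u_i, v_j)."""
    n = num_users + num_items
    # users start at all-zeros; item v_j starts as the one-hot vector e_j
    H = sp.vstack([sp.csr_matrix((num_users, num_items)),
                   sp.eye(num_items, format="csr")], format="csr")
    P = sp.csr_matrix(A) + sp.eye(n, format="csr")  # neighbor sum + residual
    for _ in range(L):
        H = P @ H                                   # phi = identity
    return H[:num_users].toarray()
```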
**Lemma 7**.: _In the \(k\)-th layer of GTE, the \(j\)-th dimension of the feature of a node \(w\) (it could be user or item node) can be written as follows, where \(v_{j}\) is the \(j\)-th item node:_
\[\mathbf{h}_{w}^{(k)}\left[j\right]=k\text{-}TC(w,v_{j})\]
**Proof**. This is true when \(k=0\), because if \(w\neq v_{j}\), \(\mathbf{h}_{w}^{(0)}\left[j\right]=0\), and \(0\)-\(TC(w,v_{j})=0\); if \(w=v_{j}\), \(\mathbf{h}_{w}^{(0)}\left[j\right]=\mathbf{h}_{v_{j}}^{(0)}\left[j\right]=1\), and \(0\)-\(TC(w,v_{j})=0\)-\(TC(v_{j},v_{j})=1\).
If it holds for \(k=l-1\), then in layer \(k=l\), the \(j\)-th entry of the feature of \(w\) is:
\[\begin{split}\mathbf{h}_{w}^{(l)}\left[j\right]&=\sum_{x\in\mathcal{N}(w)}\mathbf{h}_{x}^{(l-1)}\left[j\right]+\mathbf{h}_{w}^{(l-1)}\left[j\right]\\ &=\sum_{x\in\mathcal{N}(w)\cup\{w\}}(l-1)\text{-}TC(x,v_{j})\\ &=\sum_{x\in\mathcal{N}(w)\cup\{w\}}\left|\mathcal{P}_{x,v_{j}}^{(l-1)}\right|\\ &=\left|\mathcal{P}_{w,v_{j}}^{(l)}\right|=l\text{-}TC(w,v_{j})\end{split}\]

where the last equality holds because every length-\(l\) path from \(w\) to \(v_{j}\) (self-loops included) decomposes uniquely into a first step to some \(x\in\mathcal{N}(w)\cup\{w\}\) followed by a length-\((l-1)\) path from \(x\) to \(v_{j}\).
By induction, the expression holds for every iteration k.
**Theorem 8**.: _In GTE, after \(L\) layers, for a user node \(u_{i}\) and two item nodes \(v_{j}\) and \(v_{k}\), if \(L\)-\(TC(u_{i},v_{j})>L\)-\(TC(u_{i},v_{k})\), then \(\hat{y}_{i,j}>\hat{y}_{i,k}\)._
**Proof**. We have shown in Lemma 7 that \(\hat{y}_{i,j}=\mathbf{h}_{u_{i}}^{(L)}\left[j\right]=L\)-\(TC(u_{i},v_{j})\), and \(\hat{y}_{i,k}=\mathbf{h}_{u_{i}}^{(L)}\left[k\right]=L\)-\(TC(u_{i},v_{k})\). Thus, if it holds that \(L\)-\(TC(u_{i},v_{j})>L\)-\(TC(u_{i},v_{k})\), then \(\hat{y}_{i,j}>\hat{y}_{i,k}\).
The above analysis proves that GTE is optimal on the link-level metric topological closeness, and thus its performance on recommendation datasets can be used to evaluate the effectiveness of the proposed metric.
### The Relations between GTE and the Graph and Node-Level Expressiveness Metrics
As shown in Theorem 1 in Section 2.2, Xu et al. (2019) proved that a GNN can have WL-equivalent expressive power on the graph isomorphism test if the aggregation function \(h_{v}^{(k)}=g(\{h_{u}^{(k-1)}:u\in\mathcal{N}(v)\cup\{v\}\})\) is injective. They further proved that the sum aggregator allows injective functions over multisets. We present an adapted version of the latter conclusion below.
**Theorem 9** (from Xu et al. (2019)).: _Assume the feature space \(\mathcal{X}\) is countable. If the initial features are one-hot encodings, then there exists some function \(\phi\) such that the aggregator \(h_{v}^{(k)}=g(\{h_{u}^{(k-1)}:u\in\mathcal{N}(v)\cup\{v\}\})=\phi\left(h_{v}^{(k-1)}+\sum_{u\in\mathcal{N}(v)}h_{u}^{(k-1)}\right)\) is injective._
In Section 3.2, we demonstrate the effectiveness of a GNN in distinguishing various automorphic nodes on a connected user-item bipartite graph. This is achieved by proving Theorem 4, which shows that when residual connections are implemented and the aggregation function \(h_{v}^{(k)}=g(\{h_{u}^{(k-1)}:u\in\mathcal{N}(v)\cup\{v\}\})\) is injective, any pair of automorphic nodes will be assigned different embeddings in every iteration.
In GTE, residual connections are employed. Consequently, based on the aforementioned conclusions, if the aggregation function is injective, GTE will exhibit expressiveness equivalent to the Weisfeiler-Lehman (WL) algorithm on the graph-level metric and achieve optimal expressiveness on the node-level metric. It is worth noting that in our implementation, the aggregation function of GTE is not injective, as we simply utilize the identity function as \(\phi\). However, according to Theorem 9, it is possible to select \(\phi\) carefully, thereby making the aggregation function injective and enhancing the theoretical capability of GTE on both graph- and node-level metrics. It is important to acknowledge that while a suitable choice of \(\phi\) can render the aggregation function injective, it may also compromise the optimality of GTE on the topological closeness metric. Addressing this issue will be a topic for future research endeavors.
## 5. Experiments
To evaluate the explainability of topological closeness in recommendation, we perform experiments on real-world datasets to compare GTE, which is optimal on the metric, against various baselines.
### Experimental Settings
#### 5.1.1. **Datasets.**

**Yelp**: a widely used dataset containing users' ratings on venues from the Yelp platform. **Douban**: consisting of book reviews and ratings collected by Douban. **Tmall**: an e-commerce dataset recording the online purchase history of users on Tmall. **Gowalla**: collected by Gowalla with users' check-in records in different locations. **Amazon-beauty**: a dataset containing Amazon users' reviews and ratings of beauty products. **Sparser-Tmall**: a sparser version of the _Tmall_ dataset obtained by sampling users and items with fewer interactions. The statistics are summarized in Table 2.
#### 5.1.2. **Evaluation Protocols and Metrics.**
The all-rank protocol is commonly adopted to mitigate sampling bias (Gowalla et al., 2018; Gowalla et al., 2019). For the evaluation metrics, we adopt the widely used Recall@N and Normalized Discounted Cumulative Gain (NDCG)@N, where N = {20, 40} [3; 15; 26; 32]. The p-values are calculated with the T test.
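For completeness, both metrics can be computed per user as follows (a standard formulation of our own; exact protocols vary slightly across papers):

```python
import numpy as np

def recall_ndcg_at_n(ranked_items, ground_truth, n):
    """ranked_items: item IDs for one user, sorted by predicted score;
    ground_truth: set of the user's held-out positive items."""
    top = ranked_items[:n]
    hits = [1.0 if item in ground_truth else 0.0 for item in top]
    recall = sum(hits) / max(len(ground_truth), 1)
    dcg = sum(h / np.log2(i + 2) for i, h in enumerate(hits))
    ideal_hits = min(len(ground_truth), n)         # best achievable DCG
    idcg = sum(1.0 / np.log2(i + 2) for i in range(ideal_hits))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return recall, ndcg
```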
#### 5.1.3. **Baseline Models.**
We adopt the following 8 baselines. We tune the hyperparameters of all the baselines with the ranges suggested in the original papers, except that the size of learnable embeddings is fixed as 32 for all the baselines to ensure fairness.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline Dataset & Density & Metric & BiasMF & NGCF & GCCF & LightGCN & SimGRACE & SAIL & HCCF & LightGCL & GTE & _p-val_ & _impr._ \\ \hline \multicolumn{12}{l}{_Denser datasets_} \\ \hline \multirow{3}{*}{Yelp} & & R@20 & 0.0190 & 0.0294 & 0.0462 & 0.0482 & 0.0603 & 0.0471 & 0.0626 & **0.0793** & 0.0647 & 3e\({}^{-13}\) & -18\% \\ & & N@20 & 0.0161 & 0.0243 & 0.0398 & 0.0409 & 0.0435 & 0.0405 & 0.0527 & **0.0668** & 0.0556 & 3e\({}^{-14}\) & -16\% \\ & & R@40 & 0.0371 & 0.0522 & 0.0760 & 0.0803 & 0.0989 & 0.0733 & 0.1040 & **0.1292** & 0.0556 & 2e\({}^{-14}\) & -18\% \\ & & N@40 & 0.0227 & 0.0330 & 0.0508 & 0.0527 & 0.0656 & 0.0516 & 0.081 & **0.0852** & 0.0704 & 4e\({}^{-15}\) & -17\% \\ \hline \multirow{3}{*}{Douban} & & R@20 & 0.0926 & 0.0999 & 0.0986 & 0.1006 & 0.0827 & 0.1211 & 0.1083 & 0.1216 & **0.1269** & 3e\({}^{-7}\) & 4\% \\ & & N@20 & 0.0687 & 0.0739 & 0.0730 & 0.0759 & 0.0603 & 0.0910 & 0.0828 & 0.0927 & **0.1029** & 3e\({}^{-9}\) & 11\% \\ & & R@40 & 0.1424 & 0.1505 & 0.1482 & 0.1530 & 0.1251 & 0.1778 & 0.1593 & 0.1708 & **0.1777** & 3e\({}^{-6}\) & 4\% \\ & & N@40 & 0.0845 & 0.0897 & 0.0887 & 0.0923 & 0.0735 & 0.1090 & 0.0988 & 0.1077 & **0.1182** & 3e\({}^{-9}\) & 10\% \\ \hline \multirow{3}{*}{Tmall} & & R@20 & 0.0103 & 0.0180 & 0.0209 & 0.0225 & 0.0222 & 0.0254 & 0.0314 & 0.0528 & **0.0578** & 1e\({}^{-11}\) & 9\% \\ & & N@20 & 0.0072 & 0.0123 & 0.0141 & 0.0154 & 0.0152 & 0.0177 & 0.0213 & **0.0361** & 0.0290 & 1e\({}^{-13}\) & -20\% \\ & & R@40 & 0.0170 & 0.0310 & 0.0356 & 0.0378 & 0.0367 & 0.0424 & 0.0519 & **0.0852** & 0.0752 & 5e\({}^{-11}\) & -12\% \\ & & N@40 & 0.0095 & 0.0168 & 0.0196 & 0.0208 & 0.0203 & 0.0236 & 0.0284 & **0.0473** & 0.0326 & 5e\({}^{-15}\) & -31\% \\ \hline \multicolumn{12}{l}{_Sparser datasets_} \\ \hline \multirow{3}{*}{Amazon-beauty} & & R@20 & 0.0607 & 0.0734 & 0.0782 & 0.0797 & 0.0539 & 0.0834 & 0.0813 & 0.0896 & **0.0976** & 1e\({}^{-8}\) & 9\% \\ & & N@20 & 0.0249 & 0.0290 & 0.0315 & 0.0326 & 0.0212 & 0.0334 & 0.0339 & 0.0369 & **0.0440** & 1e\({}^{-11}\) & 19\% \\ & & R@40 & 0.0898 & 0.1078 & 0.1155 & 0.1161 & 0.0836 & 0.1196 & 0.1178 & 0.1286 & **0.1322** & 1e\({}^{-4}\) & 3\% \\ & & N@40 & 0.0308 & 0.0360 & 0.0391 & 0.0400 & 0.0272 & 0.0408 & 0.0413 & 0.0447 & **0.0511** & 1e\({}^{-11}\) & 14\% \\ \hline \multirow{3}{*}{Gowalla} & & R@20 & 0.0196 & 0.0552 & 0.0951 & 0.0985 & 0.0869 & 0.0999 & 0.1070 & 0.1578 & **0.1706** & 2e\({}^{-10}\) & 8\% \\ & & N@20 & 0.0105 & 0.0298 & 0.0535 & 0.0593 & 0.0528 & 0.0602 & 0.0644 & 0.0935 & **0.1001** & 8e\({}^{-11}\) & 7\% \\ & & R@40 & 0.0346 & 0.0810 & 0.1392 & 0.1431 & 0.1276 & 0.1472 & 0.1535 & 0.2245 & **0.2400** & 5e\({}^{-11}\) & 7\% \\ & & N@40 & 0.0145 & 0.0367 & 0.0684 & 0.0710 & 0.0637 & 0.0725 & 0.0767 & 0.1108 & **0.1181** & 1e\({}^{-10}\) & 7\% \\ \hline \multirow{3}{*}{Sparser-Tmall} & & R@20 & 0.0328 & 0.0395 & 0.0543 & 0.0542 & 0.0174 & 0.0521 & 0.0501 & 0.0518 & **0.0588** & 1e\({}^{-10}\) & 14\% \\ & & N@20 & 0.0169 & 0.0196 & 0.0290 & 0.0288 & 0.0084 & 0.0282 & 0.0270 & 0.0300 & **0.0368** & 2e\({}^{-9}\) & 23\% \\ \cline{1-1} & & R@40 & 0.0439 & 0.0552 & 0.0717 & 0.0708 & 0.0274 & 0.0685 & 0.0655 & 0.0653 &
* **BiasMF**(Krishna et al., 2017). This model employs matrix factorization to learn latent embeddings for users and items.
* **NGCF**(Krishna et al., 2017). It aggregates feature embeddings with high-order connection information over the user-item interaction graph.
* **GCCF**(Chen et al., 2017). It adopts an improved GNN model that incorporates a residual structure and eliminates non-linear transformations.
* **LightGCN**(Krishna et al., 2017). It implements an effective GNN structure without embedding transformations and non-linear projections.
* **SimGRACE**(Krishna et al., 2017). It creates an augmented view by introducing random perturbations to GNN parameters.
* **SAIL**(Wang et al., 2017). It adopts a self-augmented approach that maximizes the alignment between learned features and initial features.
* **HCCF**(Wang et al., 2017). It contrasts global information encoded with a hypergraph against local signals propagated with GNN.
* **LightGCL**(Chen et al., 2017). It guides the contrastive view augmentation with singular value decomposition and reconstruction.
### Performance Comparison and Analysis
The performance results are presented in Table 1. The number of GNN layers for GTE is set to 3 for all datasets. As shown in the table, GTE can achieve comparable performance to the SOTA GCL models. The mechanism of GTE revolves around effectively discriminating different items based on their topological closeness to a user. Therefore, these results indicate that the proposed topological closeness metric accurately reflects a model's proficiency in performing the recommendation task.
Furthermore, GTE exhibits superior performance on sparse data. Specifically, while GTE consistently outperforms other baselines, it occasionally performs worse than LightGCL on the three denser datasets. However, on the three sparser datasets, GTE consistently outperforms LightGCL. This discrepancy can be attributed to the reliance of learning-based methods on the availability of supervision signals. In scenarios where there is an abundance of user-item collaborative signals, learning-based methods can more effectively capture complex patterns in the data compared to learning-less or heuristic algorithms like GTE. However, in cases where the interaction history is limited, which often occurs in recommendation systems, the performance of learning-based methods deteriorates more rapidly than that of fixed algorithms.
By capturing the topological closeness between nodes with a learning-less structure, GTE achieves satisfactory performance rapidly and reliably. Since GTE is not subject to any random factors, the p-values in Table 1 are of negligible magnitudes. Table 3 presents an intuitive comparison of the running time of GTE and the efficient GCL baseline LightGCL, where GTE is run on CPU only, and LightGCL is run with an NVIDIA GeForce RTX 3090 GPU.
### Topological Closeness Aligns with Real Data
To demonstrate the discriminative ability of the topological closeness metric in real datasets, we randomly sample 400 pairs of positive and negative items and calculate the difference between their topological closeness to the target users. The results are plotted in Figure 4. It is clear from the plot that the majority of the differences are larger than 0, with only negligible exceptions. This indicates that the positive items have a higher topological closeness to the user than the negative items.
### Popular GNN Recommenders Perform Poorly on Topological Closeness
Section 4.2 provides a mathematical demonstration showing that popular GNN recommender models are unable to fully capture the topological closeness between nodes. Specifically, we sort the items based on the prediction scores generated by GTE and the baseline models. We then calculate the Kendall rank correlation coefficient (\(\tau\)), a widely used indicator for measuring the ordinal difference between two rankings (Krishna et al., 2017), between these rankings. Since GTE is optimized for topological closeness, its ranking represents the ideal prediction based on this metric. As depicted in Figure 5, the \(\tau\) values for the baselines cluster around 0.05 to 0.25. This indicates that the predictions made by the baselines exhibit positive correlations with the ideal ranking, but are significantly misaligned with it.
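As an illustration, the coefficient between two score vectors can be computed directly with scipy (the toy numbers below are ours, not from the experiments):

```python
from scipy.stats import kendalltau

# scores for 5 items w.r.t. one user: from GTE (ideal under topological
# closeness) and from a hypothetical baseline model
scores_gte = [0.9, 0.7, 0.5, 0.3, 0.1]
scores_model = [0.8, 0.2, 0.6, 0.4, 0.1]
tau, p_value = kendalltau(scores_gte, scores_model)
print(tau)  # tau = 1 means identical rankings; near 0, weak ordinal alignment
```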
## 6. Conclusion
In this paper, we provide a comprehensive analysis of GNNs' expressiveness in recommendation, using a three-level theoretical framework consisting of graph/node/link-level metrics. We prove that with distinct initial embeddings, injective aggregation function and residual connections, GNN-based recommenders can achieve optimality on the node-level metric on bipartite graphs. We introduce the topological closeness metric that aligns with recommendation, and propose the learning-less GTE algorithm provably optimal on this link-level metric. By comparing GTE with baseline models, we show the effectiveness of the proposed metric. The presented theoretical framework aims to inspire the design of powerful GNN-based recommender systems. While achieving optimality on all three metrics is challenging, the framework serves as a guiding direction for selecting well-justified model structures.
Figure 4. Difference between the topological closeness of users to their positive items and negative items.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & Yelp & Douban & Tmall & Gowalla & Amazon & s-Tmall \\ LightGCL & 216.12 & 38.54 & 65.03 & 284.12 & 21.06 & 30.24 \\ GTE & 1.85 & 0.64 & 0.84 & 5.11 & 0.26 & 2.10 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Efficiency comparison in terms of total running time between GTE (w/o GPU) and LightGCL (w. GPU), in minutes.
Figure 5. The ordinal difference (measured in Kendall’s \(\tau\) coefficient) between the predictions made by the baselines and the ideal predictions based on topological closeness. |
2305.09958 | SIGMA: Similarity-based Efficient Global Aggregation for Heterophilous
Graph Neural Networks | Graph neural networks (GNNs) realize great success in graph learning but
suffer from performance loss when meeting heterophily, i.e. neighboring nodes
are dissimilar, due to their local and uniform aggregation. Existing attempts
of heterophilous GNNs incorporate long-range or global aggregations to
distinguish nodes in the graph. However, these aggregations usually require
iteratively maintaining and updating full-graph information, which limits their
efficiency when applying to large-scale graphs. In this paper, we propose
SIGMA, an efficient global heterophilous GNN aggregation integrating the
structural similarity measurement SimRank. Our theoretical analysis illustrates
that SIGMA inherently captures distant global similarity even under
heterophily, that conventional approaches can only achieve after iterative
aggregations. Furthermore, it enjoys efficient one-time computation with a
complexity only linear to the node set size $\mathcal{O}(n)$. Comprehensive
evaluation demonstrates that SIGMA achieves state-of-the-art performance with
superior aggregation and overall efficiency. Notably, it obtains 5$\times$
acceleration on the large-scale heterophily dataset \emph{pokec} with over 30
million edges compared to the best baseline aggregation. | Haoyu Liu, Ningyi Liao, Siqiang Luo | 2023-05-17T05:35:49Z | http://arxiv.org/abs/2305.09958v3 | # SIMGA: A Simple and Effective Heterophilous Graph Neural Network with Efficient Global Aggregation
###### Abstract
Graph neural networks (GNNs) realize great success in graph learning but suffer from performance loss when meeting heterophily, i.e. neighboring nodes are dissimilar, due to their local and uniform aggregation. Existing attempts in incorporating global aggregation for heterophilous GNNs usually require iteratively maintaining and updating full-graph information, which entails \(\mathcal{O}(n^{2})\) computation efficiency for a graph with \(n\) nodes, leading to weak scalability to large graphs. In this paper, we propose SIMGA, a GNN structure integrating SimRank structural similarity measurement as global aggregation. The design of SIMGA is simple, yet it leads to promising results in both efficiency and effectiveness. The simplicity of SIMGA makes it the first heterophilous GNN model that can achieve a propagation efficiency near-linear to \(n\). We theoretically demonstrate its effectiveness by treating SimRank as a new interpretation of GNN and prove that the aggregated node representation matrix has expected grouping effect. The performances of SIMGA are evaluated with 11 baselines on 12 benchmark datasets, usually achieving superior accuracy compared with the state-of-the-art models. Efficiency study reveals that SIMGA is up to 5\(\times\) faster than the state-of-the-art method on the largest heterophily dataset _pokec_ with over 30 million edges.
## 1 Introduction
Graph neural networks (GNNs) have recently shown remarkable performance in graph learning tasks [1; 2; 3; 4; 5; 6; 7]. Despite the wide range of model architectures, traditional GNNs [1] operate under the assumption of _homophily_, which assumes that connected nodes belong to the same class or have similar attributes. In line with this assumption, they employ a uniform message-passing framework to aggregate information from a node's local neighbors and update its representations accordingly [8; 9; 10]. However, real-world graphs often exhibit _heterophily_, where linked nodes are more likely to have different labels and dissimilar features [11]. Conventional GNN approaches struggle to achieve optimal performance in heterophily scenarios because their local and uniform aggregations fail to recognize distant, similar nodes and assign distinct weights for the ego node to aggregate features from nodes with the same or different labels. Consequently, the network's expressiveness is limited [12].
To apply GNNs in heterophilous graphs, several recent works incorporate long-range connections and distinguishable aggregation mechanisms. Examples of long-range connections include the amplified multi-hop neighbors [13], and the geometric criteria to discover distant relationships [14]. The main challenge in such design lies in deciding and tuning proper neighborhood sizes for different graphs to realize stable performance. With regard to distinguishable aggregations, Yang et al. [15] and Liu et al. [16] exploited attention mechanism in a whole-graph manner, while Jin et al. [17] and Li et al. [18] respectively considered feature Cosine similarity and global homophily correlation. Nonetheless, the above approaches require iteratively maintaining and updating a large correlation matrix for all node pairs, which entails \(\mathcal{O}(n^{2})\) computation complexity for a graph of \(n\) nodes, leading to weak scalability. In the state-of-the-art method GloGNN++ [18], the complexity of global aggregation
can be improved to \(\mathcal{O}(m)\) with extensive algorithmic and architectural optimizations, where \(m\) is the number of edges. However, \(\mathcal{O}(m)\) can still be significantly larger than \(\mathcal{O}(n)\). In summary, the existing models, while achieving promising effectiveness in general, suffer from efficiency issues due to the adoption of the whole-graph long-range connections.
Addressing the aforementioned challenges calls for a global node-similarity metric with two properties. First, a high degree of similarity should indicate a high likelihood of having the same label, particularly when the nodes also entail similar attributes. Second, the computation of pair-wise similarity can be approximated efficiently. For the first property, Suresh et al. [19] suggested that nodes surrounded by similar graph structures are likely to share the same label. Hence, the property boils down to the computation of similarity between the nodes regarding their surrounding graph structure. This insight motivates us to incorporate SimRank [20] into Graph Neural Networks (GNNs) to facilitate global feature aggregation. SimRank is defined based on an intuition that two nodes are similar if they are connected by similar nodes. Hence, nodes with high SimRank scores indicate that they reside within similar sub-structures of the graph. For instance, in an academic network illustrated in Figure 1(c), two professors from the same university are likely to belong to the same category (i.e., possess the same label) because they are connected to similar neighbors, such as their respective students in the university. Furthermore, this similarity propagates recursively, as the students themselves are similar due to their shared publication topics. To examine the effectiveness of SimRank in capturing the pairwise label information, we further conduct empirical evaluation as shown in Figure 1 (a) and (b). In contrast to traditional neighbor-based local aggregation (Figure 1(a)), SimRank succeeds in discovering distant nodes with the same labels as the center node (Figure 1(b)). In heterophilous graphs, this property is especially useful for recognizing same-label nodes which may be distant. As we will show in Theorem 3.3, we further derive a novel theoretical interpretation connecting SimRank and GNNs: the SimRank score is exactly a decayed sum of inner products of node embeddings output by a converging GNN, which reveals SimRank's global aggregation expressiveness. SimRank also satisfies the second property thanks to its high computational efficiency. As we will show in Proposition 3.7, the SimRank matrix can be efficiently computed beforehand in near \(\mathcal{O}(d^{2})\) time for average degree \(d\), where \(d^{2}\) is significantly smaller than \(n\) for all the datasets that are commonly used.
Putting the ideas together, we propose **SIMGA1**, a heterophilous GNN model that can effectively capture distant structurally similar nodes for heterophilous learning, while achieving high efficiency by its one-time global aggregation whose complexity is only near-linear to the number of graph nodes \(n\). To the best of our knowledge, we make the first attempt of developing an inherent relationship between SimRank structural similarity and a converging GNN, as well as integrating it into GNN designs to achieve high efficacy. Benefiting from its expressiveness attribute, SIMGA enjoys a simple model architecture and efficient calculation. Its simple feature transformation also brings fast inference and scalability to large graphs. Experiments demonstrate that our method achieves \(5\times\) speed-up over the state-of-the-art GloGNN++ [18] on the _pokec_ dataset, currently the largest heterophily dataset with 30 million edges.
Footnote 1: SIMGA, **SIMRank** based **GNN** Aggregation.
We summarize our main contributions as follows: (1) We demonstrate that SimRank can be seen as a new interpretation of GNN aggregation. Based on the theory, we propose SIMGA, utilizing SimRank as the measure to aggregate global node similarities for heterophilous graphs. (2) We demonstrate SIMGA's effectiveness by proving the grouping effect of the generated node embedding matrix and design a simple yet scalable network architecture that simultaneously realizes effective representation learning and efficient computation, incorporating near-linear approximate SimRank calculation and fast computation on million-scale large graphs. (3) We conduct extensive experiments to show the superiority of our model on a range of datasets with diverse domains, scales, and heterophilies. SIMGA achieves top-tier accuracy with \(5-10\times\) faster computation than the best baseline.

Figure 1: **(a) Neighborhood-based local aggregation and (b) SimRank aggregation on _Texas_ heterophily graph.** Node color represents aggregation score with respect to the center node (\(\blacktriangle\) in orange). Conventional aggregation focuses on nearest nodes regardless of node label, while SimRank succeeds in assigning high values for nodes with same label (\(\blacktriangle\) in blue). **(c) Simrank illustration on a social network.** Two professors inherit high similarity because they share similar neighbors.
## 2 Preliminaries
**[Notations].** We denote an undirected graph \(G=(V,E)\) with node set \(V\) and edge set \(E\). Let \(n=|V|\), \(m=|E|\), and \(d=m/n\) denote the number of nodes, the number of edges, and the average degree, respectively. For each node \(v\), we use \(N(v)\) to denote node \(v\)'s neighborhood, i.e. the set of nodes directly linked to \(v\). The graph connectivity is represented by the adjacency matrix \(A\in\mathbb{R}^{n\times n}\), and the diagonal degree matrix is \(D\in\mathbb{R}^{n\times n}\). The input node feature matrix is \(F\in\mathbb{R}^{n\times f}\) and we denote by \(Y\in\mathbb{R}^{n\times N_{y}}\) the ground truth node label one-hot matrix, where \(f\) is the feature dimension and \(N_{y}\) is the number of labels in classification.
**[Graph Neural Network].** Graph neural network is a type of neural networks designed for processing graph data. We summarize the key operation in GNNs into two steps: Firstly, aggregating local representations: \(\hat{h}_{u}^{(l)}=\textit{AGG}(\{h_{v}^{(l)}:\forall v\in N(u)\})\). Then, combining the aggregated information with ego node to update its embeddings: \(h_{u}^{(l+1)}=\textit{UPD}(h_{u}^{(l)},\hat{h}_{u}^{(l)}),\) where \(h_{u}^{(l)}\) denotes the embeddings of node \(u\) generated by the \(l\)-th layer of network. For a model of \(L\) layers in total, the final output of all nodes form the embedding matrix \(H^{(L)}\), which can be fed to higher-level modules for specific tasks, such as a classifier for node label prediction. _AGG_(\(\cdot\)) and _UPD_(\(\cdot\)) are functions specified by concrete GNN models. For example, GCN [8], representing a series of popular graph convolution networks, uses _sum_ function for neighbor aggregation, and updates the embedding by learnable weights together with _ReLU_ activation.
**[Graph Heterophily and Homophily].** The heterophily and homophily indicate how similar the nodes in the graph are to each other. Typically, in the context of this paper, it is mainly determined by the node labels, i.e., which category a node belongs to. The homophily of a graph can be measured by metrics like node homophily [14] and edge homophily [11]. Here we employ node homophily defined as the average proportion of the neighbors with the same category of each node:
\[\mathcal{H}_{node}=\frac{1}{|V|}\sum\limits_{v\in V}\frac{|\{u\in N(v):y_{u}=y _{v}\}|}{|N(v)|}. \tag{1}\]
\(\mathcal{H}_{node}\) is in the range \((0,1)\), and homophilous graphs have \(\mathcal{H}_{node}\) values closer to 1. Generally, high homophily is correlated with low heterophily, and vice versa.
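Eq. (1) translates directly into code (a minimal sketch of our own; isolated nodes are skipped to avoid division by zero):

```python
def node_homophily(N, labels):
    """Average fraction of each node's neighbors sharing its label, Eq. (1).
    N: dict node -> iterable of neighbors; labels: dict node -> class."""
    scores = [
        sum(labels[u] == labels[v] for u in N[v]) / len(N[v])
        for v in N if len(N[v]) > 0
    ]
    return sum(scores) / len(scores)
```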
**[SimRank]** SimRank holds the intuition that two nodes are similar if they are connected by similar neighbors, described by the following recursive formula [20]:
\[S(u,v)=\begin{cases}1&(u=v)\\ \frac{c}{|N(u)||N(v)|}\sum\limits_{u^{\prime}\in N(u),v^{\prime}\in N(v)}S(u^{ \prime},v^{\prime})&(u\neq v)\end{cases}, \tag{2}\]
where \(c\in(0,1)\) is a decay factor empirically set to 0.6. Generally, a high SimRank score of a node pair \(u,v\) corresponds to high structural similarity, which is beneficial for detecting similar nodes.
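Eq. (2) can be evaluated by fixed-point iteration: in matrix form with the column-normalized adjacency \(W=AD^{-1}\), each step computes \(c\,W^{\top}SW\) and resets the diagonal to 1. A minimal dense sketch of our own (fine for small graphs; this is not the near-linear approximation discussed later, and it assumes no isolated nodes):

```python
import numpy as np

def simrank(A, c=0.6, iters=10):
    """Fixed-point iteration of Eq. (2) on a dense adjacency matrix."""
    W = A / A.sum(axis=0, keepdims=True)   # column-normalized: W[u', u] = 1/|N(u)| for u' in N(u)
    S = np.eye(A.shape[0])
    for _ in range(iters):
        S = c * (W.T @ S @ W)              # averages S over neighbor pairs, as in Eq. (2)
        np.fill_diagonal(S, 1.0)           # enforce S(u, u) = 1
    return S
```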
## 3 Simga
In this section, we first elaborate on our intuition of incorporating SimRank as a novel GNN aggregation by presenting the new interpretation between SimRank and GNN. Then we propose our SIMGA model design and analyze its scalability and complexity.
### Interpreting SimRank in GNN
In graphs with node heterophily, neighboring nodes usually belong to different categories. Their feature distributions also diversify. Such disparity prevents the model to aggregate information from neighbors that is meaningful for predictive tasks such as node classification [11]. Consequently, local uniform GNN aggregation produces suboptimal results for smoothing local dissimilarity, i.e. heterophilous representations of nodes in a few hops, failing to recognize intra-class node pairs distant [21; 22]. On the contrary, it is inferred from Eq. (2) that, SimRank is defined for any given node pairs in the graph and is calculated by recognizing whole-graph structural information. Even for long distances, SimRank succeeds in assigning higher scores to structurally similar node pairs and
extracts relationships globally. We validate the ability of SimRank for aggregating global homophily in Figure 2, where frequencies of the SimRank score of node pairs in six graphs are displayed. It is clearly observed that similarity values for intra-class node pairs are significantly higher than those of inter-class ones, which is useful for the model to recognize information from homophily nodes.
To theoretically analyze the utilization of SimRank as GNN global aggregation, we further explore the association between SimRank and GNN. First we look into the concept of random walk:
**Definition 3.1** (_Pairwise Random Walk_[20]).: The pairwise random walk measures the probability that two random walks starting from node \(u\) and \(v\) simultaneously and meet at the same node \(x\) for the first time. The probability of such random-walk pairs for all tours \(t^{(2l)}\) with length \(2l\) can be formulated by:
\[P_{pair}(u,v|t^{(2l)})=\sum_{t^{(2l)}_{u:v}}P_{pair}(u,v|t^{(2l)}_{u:v})=\sum_ {x\in V}p(x|u,t^{(l)}_{u:x})\cdot p(x|v,t^{(l)}_{v:x}), \tag{3}\]
where \(t^{(2l)}_{u:v}\) is one possible tour of length \(2l\), \(t^{(l)}_{v:x}:\{v,...,x\}\) is a sub-tour of length \(l\) composing the total tour, and \(p(x|v,t^{(l)}_{v:x})\) denotes the probability that a random walk starting from node \(v\) reaches node \(x\) along the tour \(t^{(l)}_{v:x}\).
Since a random walk visits the neighbors of the ego node with equal probability, \(p(x|v,t^{(l)}_{v:x})\) can be trivially calculated as \(p(x|v,t^{(l)}_{v:x})=\prod_{w_{i}\in t_{v:x}}\frac{1}{|N(w_{i})|},\) where \(w_{i}\) is the \(i\)-th node in tour \(t^{(l)}_{v:x}\). Generally, a higher probability of such walks indicates a higher similarity between the source and end nodes. The pairwise random walk paves a way for node-pair aggregation that can be calculated globally, in contrast to conventional GNN aggregation, which can be interpreted as random walks with a limited number of hops and is thus highly local [10; 23]. In fact, we will later show that random walk probabilities can be seen as GNN embeddings. In this paper, we especially consider the SGC-like architecture as the representative GNN aggregation [24]. In each layer, the _AGG_ and _UPD_ operations are accomplished by a 1-hop neighborhood propagation, and the node features are only learned before or after all layers. In other words, each intermediate layer can be written in matrix form as \(H^{(l+1)}=\mathcal{L}H^{(l)}\), where \(\mathcal{L}=D^{-1}A\) denotes the random-walk normalized adjacency (transition) matrix. For other typical GNN architectures such as GCN and APPNP, similar expressions also exist [22]. We hereby present the next lemma, linking random walks to such network embeddings:
**Lemma 3.2**.: _(See Appendix B.1 for proof). For any \(l\geq 0\) and node pair \(u,v\in V\), the probability of a length-\(l\) random walk over all tours \(t^{(l)}\) equals the \(l\)-th layer embedding value \(h^{(l)}_{u}[v]\) of node \(u\):_
\[h^{(l)}_{u}[v]=p(v|u,t^{(l)}_{u:v}).\]
Lemma 3.2 states that an \(l\)-layer GNN is able to simulate length-\(l\) random walks, and its embedding contains each node's reaching probability. This conclusion is in line with other studies and displays the expressiveness of GNNs in graph learning [10; 24; 22]. Based on this connection, we investigate a GNN mapping function \(\mathcal{G}\) stacking multiple SGC layers. We are specifically interested in the case when the number of layers goes to infinity, \(L\rightarrow\infty\), denoted as \(\mathcal{G}^{(\infty)}\). We derive SimRank as a novel interpretation of such a GNN model in the following theorem:
**Theorem 3.3**.: _(See Appendix B.2 for proof). On a graph \(G=(V,E)\), the SimRank matrix calculated by Eq. (2) is \(S\). Then, the SimRank value \(S(u,v)\) of node-pair \(u,v\in V\) is equivalent to the layer-wise
Figure 2: **Distribution of SimRank scores over intra-class and inter-class node pairs. X-axis denotes the logged value for each SimRank score corresponding to one node pair and Y-axis reflects the density (frequency). Note that we filter values which are trivial, i.e. near zero.**
summation of the inner products of the node representations \(h_{u}^{(l)},h_{v}^{(l)}\) of the network \(\mathcal{G}^{(\infty)}\), with a decay factor \(c\):_
\[S(u,v)=\sum_{l=1}^{\infty}\ c^{l}\cdot\langle h_{u}^{(l)},h_{v}^{(l)}\rangle, \tag{4}\]
_where \(h_{u}^{(l)}\) is the embedding of node \(u\) generated by the \(l\)-th layer of \(\mathcal{G}^{(\infty)}\), and \(\langle\cdot\,,\cdot\rangle\) denotes the inner product._
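Theorem 3.3 can be transcribed numerically by truncating the sum in Eq. (4) at a finite depth. In the sketch below, the SGC-style embeddings follow Lemma 3.2 with \(H^{(0)}=I\) (one-hot node features); the initialization and the truncation depth are assumptions made for illustration:

```python
import numpy as np

def simrank_via_embeddings(A, c=0.6, depth=50):
    """Truncated Eq. (4): S(u,v) ~ sum_{l=1}^{depth} c^l <h_u^(l), h_v^(l)>,
    with embeddings h^(l) given by rows of (D^{-1} A)^l (Lemma 3.2)."""
    P = A / A.sum(axis=1, keepdims=True)   # transition matrix D^{-1} A
    H = np.eye(A.shape[0])                 # H^(0) = I (one-hot features)
    S = np.zeros_like(H)
    for l in range(1, depth + 1):
        H = P @ H                          # one SGC propagation step
        S += c**l * (H @ H.T)              # layer-wise inner products
    np.fill_diagonal(S, 1.0)               # S(u,u) = 1 by definition
    return S
```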
### SIMGA Workflow
The implication of Theorem 3.3 is that, by calculating the SimRank matrix, we naturally retrieve the information that common GNN architectures can only represent after a number of local aggregations. As SimRank assigns high scores to node relations with topological similarities, it can be regarded as a powerful global aggregation for GNN representations, guiding the network to put higher weights on such distant but homophilous node pairs. Hence, our goal is to design an effective and efficient model architecture that exploits this desirable capability of SimRank.
Considering that SimRank already carries global similarity information implicitly, our model removes the need to iteratively aggregate and update embeddings with the similarity matrix, as in [17; 18]. Instead, we employ a precomputation process to acquire the SimRank matrix \(S\) and apply it only once as the model aggregation. Since the aggregation is as straightforward as a matrix multiplication, we establish a simple but effective transformation scheme that combines and learns node attributes and local adjacency. In a nutshell, the architecture of SIMGA is summarized as:
\[Z_{Sim}=\textit{SoftMax}(S\cdot f_{\theta}(F,A)), \tag{5}\]
where \(f_{\theta}\) is a mapping function that transforms features \(F\) and adjacency \(A\) into node embeddings. Here \(S\) plays the role of the final, global aggregation, which gathers information for all nodes using the SimRank scores as weights. To further explain why SIMGA is effective, we note that SIMGA in the form of Eq. (5) shares a similar idea with the representative message-passing model APPNP [10]: the latter follows a local propagation according to the Personalized PageRank matrix \(\Pi\), while ours spotlights \(S\) as a global similarity measure in heterophilous settings. As a natural merit, SIMGA inherits various advantages of decoupled GNN models like APPNP, ranging from batch computation to scalability on large graphs.
Specifically, SIMGA first uses two _MLPs_ to embed the node feature matrix and the adjacency matrix, respectively. Then, it linearly combines them with a tunable parameter \(\delta\in[0,1]\), namely the feature factor, after which the combined embedding is fed into a main _MLP_ network:
\[H_{F}=\textit{MLP}_{F}(F),\ H_{A}=\textit{MLP}_{A}(A);\qquad H=\textit{MLP} _{H}\big{(}\delta\cdot H_{F}+(1-\delta)\cdot H_{A}\big{)}, \tag{6}\]
where \(H_{F}\in\mathbb{R}^{n\times N_{y}}\) and \(H_{A}\in\mathbb{R}^{n\times N_{y}}\) represent the intermediate embeddings of features and adjacency, \(H\in\mathbb{R}^{n\times N_{y}}\) is the main node embedding, and \(N_{y}\) is the number of categories. To conduct the SimRank aggregation, we apply the score matrix \(S\) to the main embedding as \(S\cdot H\). Following the common approach in Klicpera et al. [10], we adopt a skip connection with a tunable parameter \(\alpha\in[0,1]\) to balance the global aggregation and the raw embeddings:
\[Z=(1-\alpha)\cdot S\cdot H+\alpha\cdot H. \tag{7}\]
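Putting Eqs. (5)–(7) together, a minimal PyTorch sketch of the SIMGA forward pass looks as follows; the hidden sizes and the one-layer main MLP are illustrative assumptions rather than the exact configuration used in our experiments:

```python
import torch
import torch.nn as nn

class SIMGA(nn.Module):
    """Minimal sketch of Eqs. (5)-(7)."""
    def __init__(self, n_nodes, n_feats, n_classes, hidden=64,
                 delta=0.5, alpha=0.5):
        super().__init__()
        self.mlp_f = nn.Sequential(nn.Linear(n_feats, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_classes))
        self.mlp_a = nn.Sequential(nn.Linear(n_nodes, hidden), nn.ReLU(),
                                   nn.Linear(hidden, n_classes))
        self.mlp_h = nn.Linear(n_classes, n_classes)  # 1-layer main MLP
        self.delta, self.alpha = delta, alpha

    def forward(self, F, A, S):
        # Eq. (6): embed features/adjacency, combine with the feature factor
        H = self.mlp_h(self.delta * self.mlp_f(F)
                       + (1 - self.delta) * self.mlp_a(A))
        # Eq. (7): one-shot global SimRank aggregation with skip connection
        Z = (1 - self.alpha) * (S @ H) + self.alpha * H
        return torch.softmax(Z, dim=-1)               # Eq. (5)

n, f, ny = 100, 16, 5
model = SIMGA(n, f, ny)
F, A, S = torch.rand(n, f), torch.rand(n, n), torch.rand(n, n)
print(model(F, A, S).shape)  # torch.Size([100, 5])
```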
We depict the model architecture in Appendix C. Next, we explain the expressiveness of SIMGA.
### Grouping Effect
If two nodes \(u,v\) share similar attributes and graph structures, their characterizations by other nodes are supposed to be close. This motivates the formal definition of the grouping effect as follows:
**Definition 3.4**.: [_Grouping Effect [25]_] Given the node set \(V\), two nodes \(u,v\in V\), and the \(u\)-th rows of \(A^{k}\) and \(F\) denoted as \(a_{u}^{k}\) and \(f_{u}\) respectively, let \(u\to v\) denote the conditions that (1) \(\|f_{u}-f_{v}\|_{2}\to 0\) and (2) \(\|a_{u}^{k}-a_{v}^{k}\|_{2}\to 0\), \(\forall k\in[1,K]\). A matrix \(Z\) is said to have the grouping effect if the condition below is met:
\[u\to v\Rightarrow\forall p\in V,\ \ |Z(u,p)-Z(v,p)|\to 0. \tag{8}\]
Equipped with the approximated matrix \(S\), we demonstrate the effectiveness of SIMGA as follows:
**Theorem 3.5**.: _(See Appendix B.3 for proof). Matrices \(Z\) and \(H\) both have the grouping effect when \(K>\log_{c}\epsilon\approx 4\), where \(\epsilon\) denotes the absolute error for approximately calculating \(S\)._
The grouping effect of \(Z\) sufficiently demonstrates the effectiveness of SIMGA: for two nodes, regardless of their distance, if they share similar features and graph structures, their aggregated representations will be similar. Conversely, for inter-class nodes, due to the low or near-zero coefficients assigned by the aggregation matrix \(S\), their combined representations will be different. In a nutshell, SIMGA is capable of handling heterophilous graphs owing to this distinguishable aggregation.
### Scalability & Complexity
Examining the respective stages of SIMGA, the operations with dominant computation are the calculation of the SimRank matrix \(S\) and the final aggregation \(S\cdot H\). Without optimization, calculating the matrix \(S\in\mathbb{R}^{n\times n}\) based on Eq. (2) in a recursive way requires \(\mathcal{O}(Tn^{2}d^{2})\) time and \(\mathcal{O}(n^{2})\) space, where \(T\) is the number of iterations and \(d\) is the average node degree. In addition, the aggregation \(S\cdot H\) also needs \(\mathcal{O}(N_{y}n^{2})\) computation time. This is inapplicable to graphs with a large number of nodes. Fortunately, SimRank can be calculated very efficiently [26].
```
Input: Graph \(G\), decay factor \(c\), error threshold \(\epsilon\)
Output: Approximate SimRank matrix \(\hat{S}\)
1\(\hat{S}\gets 0\), \(R\gets I\)
2while\(\max_{(u,v)}R(u,v)>(1-c)\epsilon\)do
3\((u,v)\leftarrow\operatorname{argmax}_{(u,v)}R(u,v)\)
4\(\hat{S}(u,v)\leftarrow\hat{S}(u,v)+R(u,v)\)
5for\(u^{\prime}\in N(u),v^{\prime}\in N(v)\)do
6\(R(u^{\prime},v^{\prime})\gets R(u^{\prime},v^{\prime})+c\cdot\frac{R(u,v)}{|N(u^{\prime})|\,|N(v^{\prime})|}\)
7\(R(u,v)\gets 0\)
8return matrix \(\hat{S}\)
```
**Algorithm 1** Localpush [27]
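For concreteness, a plain-Python sketch of Algorithm 1 is given below. It is a simplified, dictionary-based version that linearly scans for the largest residual at every step; the optimized data structures of the actual Localpush implementation [27] are omitted:

```python
import numpy as np

def localpush(A, c=0.6, eps=0.1):
    """Dictionary-based sketch of Algorithm 1 (Localpush).

    A: (n, n) adjacency of an undirected graph without isolated nodes.
    Returns a sparse dict {(u, v): score} approximating SimRank with
    max-norm error below eps.
    """
    n = A.shape[0]
    nbrs = [np.flatnonzero(A[i]) for i in range(n)]
    deg = [len(nb) for nb in nbrs]
    S = {}                                  # approximate SimRank entries
    R = {(u, u): 1.0 for u in range(n)}     # residuals, R <- I
    thresh = (1 - c) * eps
    while R:
        (u, v), r = max(R.items(), key=lambda kv: kv[1])
        if r <= thresh:
            break
        S[(u, v)] = S.get((u, v), 0.0) + r  # push residual into estimate
        del R[(u, v)]                       # R(u, v) <- 0
        for up in nbrs[u]:                  # propagate to neighbor pairs
            for vp in nbrs[v]:
                R[(up, vp)] = R.get((up, vp), 0.0) + c * r / (deg[up] * deg[vp])
    return S
```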
To address the above limitations, we first use Localpush [27], a state-of-the-art all-pair SimRank computation algorithm with an accuracy guarantee, to speed up the calculation. Its pseudo-code is shown in Algorithm 1, and it has the following approximation guarantee:
**Lemma 3.6** ([27]).: _Algorithm 1 returns an approximate SimRank matrix \(\hat{S}\) in \(\mathcal{O}(\frac{d^{2}}{c(1-c)^{2}\epsilon})\) time and guarantees \(\|\hat{S}-S\|_{max}<\epsilon\)._
The accuracy of the approximate SimRank matrix is bounded by the parameter \(\epsilon\), a constant representing the absolute error threshold. Thus, the computation time of \(S\), which is \(\mathcal{O}(\frac{d^{2}}{c(1-c)^{2}\epsilon})\), becomes practical. Further, to reduce the memory overhead of the SimRank matrix, we sparsify it with top-\(k\) pruning, selecting the \(k\) largest scores for each node as in [28]. This reduces the space cost from at most \(\mathcal{O}(n^{2})\) to \(\mathcal{O}(kn)\), where \(k\) is the number of values selected in top-\(k\) pruning. With the pruned matrix, the aggregation by multiplying \(S\) is also reduced to \(\mathcal{O}(N_{y}kn)\). The other parts of SIMGA are all based on matrix multiplication. In Eq. (6), the computation complexities for \(H_{F}\), \(H_{A}\) and \(H\) are \(\mathcal{O}(N_{y}fn)\), \(\mathcal{O}(dfn)\) and \(\mathcal{O}(N_{y}^{2}n)\), respectively. Combining them together, we derive the time complexity of SIMGA as follows:
**Proposition 3.7**.: _The total time complexity of SIMGA is \(\mathcal{O}(\frac{d^{2}}{c(1-c)^{2}\epsilon}+c_{1}n)\), where \(c_{1}=kN_{y}+N_{y}f+df+N_{y}^{2}\ll n\)._
Based on Proposition 3.7, we conclude that SIMGA with these optimizations achieves a time complexity near-linear in the number of nodes \(n\), since \(\mathcal{O}(\frac{d^{2}}{c(1-c)^{2}\epsilon})\) is insignificant when \(d\ll\sqrt{n}\) and \(\epsilon\sim 10^{-1}\). In Table 1, we compare the complexity of SIMGA with heterophily GNNs regarding the cost of linking global connections in one hidden layer.
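The top-\(k\) pruning used above is a one-line sparsification; a possible PyTorch sketch, paired with the aggregation of Eq. (7), is shown below (the toy sizes are illustrative):

```python
import torch

def topk_prune(S, k):
    """Keep the k largest scores per row of S, zeroing the rest;
    the sparse result brings S @ H down to O(N_y * k * n)."""
    vals, idx = torch.topk(S, k, dim=1)
    return torch.zeros_like(S).scatter_(1, idx, vals).to_sparse()

# pruned global aggregation as used in Eq. (7)
S, H = torch.rand(6, 6), torch.rand(6, 3)
Z = torch.sparse.mm(topk_prune(S, 2), H)
```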
## 4 Related Work
GNNs have achieved great advances; they are usually rooted in graph manipulations and develop a variety of explicit designs surrounding aggregation and update schemes for effectively retrieving graph information [29; 30; 22]. GCN [8] mimics the operation in convolutional neural networks by summoning information from neighboring nodes in graphs and achieves larger receptive fields by stacking multiple layers. GCNII [31] stacks network layers for a deeper model and further aggregation. GAT [9] exploits the attention mechanism for more precise aggregation, while Mixhop [13] assigns weights to multi-hop neighbors. GraphSAGE [2] enhances the aggregation by sampling the neighborhood, reducing computational overhead. APPNP [10] and SGC [24] decouple the network update into a separate local aggregation calculation and a simple feature transformation. Follow-up works [32; 33; 34; 35] extend these designs in scalability and generality.
Recent works have noticed the issue of graph heterophily and the limited capability of conventional GNNs. Zheng et al. [12] pointed out that this limitation arises because local neighbors are unable to capture informative nodes at a large distance; in addition, a uniform aggregation scheme ignores the implicit differences among edges. Approaches addressing heterophilous graph learning can be mainly divided into two categories. Some studies propose diverse modifications that widen the concept of local aggregation. H\({}_{2}\)GCN [11] and WRGAT [19] establish new connections between graph nodes,
\begin{table}
\begin{tabular}{|c|c|} \hline
**Methods** & **Time Complexity** \\ \hline \hline Geom-GCN [14] & \(\mathcal{O}(|R|nmf)\) \\ GPNN [15] & \(\mathcal{O}(nmf)\) \\ U-GCN [17] & \(\mathcal{O}(n^{2}f)\) \\ WRGAT [19] & \(\mathcal{O}(|R|n^{2}f)\) \\ GloGNN++ [18] & \(\mathcal{O}(mf)\) \\ SIMGA & \(\mathcal{O}(\textbf{knf})\) \\ \hline \end{tabular}
\end{table}
Table 1: **Time complexity comparison with one hidden layer as an example.**
while GGCN [36] and GPR-GNN [37] alter the aggregation function to allow negative influence from neighboring nodes. Another series of models rethink locality and introduce global information, including global attention [15; 16] and similarity measurements [17]. In particular, LINKX [38] and GloGNN++ [18] both employ feature-topology decoupling and simple MLP layers for embedding updates. We highlight the uniqueness of SIMGA relative to all the above works: its global aggregation incorporates a well-defined topological similarity.
## 5 Experiments
We comprehensively evaluate the performance of SIMGA, including classification accuracy as well as precomputation and training efficiency. We also provide a specific study on embedding homophily for aggregating similar nodes globally. More details can be found in the Appendix.
### Experiment Setup
**Baselines** We compare SIMGA with 11 baselines, including: (1) general graph learning models: MLP, GCN [8], GAT [9], Mixhop [13], and GCNII [31]; (2) graph convolution-based heterophilous models: GGCN [36], H\({}_{2}\)GCN [11], GPR-GNN [37] and WRGAT [19]; (3) decoupling heterophilous models: LINKX [38] and GloGNN++ [18].
**Parameter Settings** We calculate exact SimRank scores for small datasets, while adopting the approximation in Section 3.4 with \(\epsilon=0.1\) and \(k=1024\), which is sufficient to derive excellent aggregation coefficients (see Section 5.4). The decay factor is \(c=0.6\) and \(\alpha=0.5\) on all datasets. The number of layers of _MLP\({}_{H}\)_ is set to 1 and 2 for small and large datasets, respectively. We use the same dataset train/validation/test splits as in Pei et al. [14] and Liu et al. [16]. We conduct 5 and 10 repeated experiments on the small and large datasets, respectively. Explorations of parameters including the feature factor \(\delta\), learning rate \(r\), dropout \(p\) and weight decay are elaborated in Appendices E and H.
### Performance Comparison
We measure SIMGA's performance against 11 baselines on 12 benchmark datasets of both small and large scales, as shown in Table 2, where we also rank and order the models based on their accuracy. We stress the effectiveness of SIMGA with the following observations:
\(\bullet\) Common graph learning models generally perform worse. Among MLP, GAT, and GCN, GCN achieves the best average rank of 7.25 over all datasets. The reason may be that they fail to distinguish homophilous and heterophilous nodes due to their uniform aggregation design. It is surprising that MLP, which utilizes only node features, learns well on some datasets such as _Texas_, indicating that node features are important in heterophilous graph learning. Meanwhile, Mixhop and GCNII generally achieve better performance than the plain models. On _pokec_, the accuracies of Mixhop and GCNII are 81.07 and 78.94, outperforming the former three models significantly. They benefit from strategies that take more potentially homophilous nodes into account and combine node representations to boost performance, showing the importance of modifying the neighborhood aggregation process.
\(\bullet\) With respect to heterophilous models, those graph convolution-based approaches enjoy proper performance on small graphs by incorporating structural information, but their scalability is a
\begin{table}
\end{table}
Table 2: **Accuracy (%) comparison of SIMGA against 11 baselines on 12 benchmark datasets of both small and large scales, together with dataset statistics. Models are ranked by their average accuracy across datasets.**
bottleneck. Specifically, GGCN, H\({}_{2}\)GCN, and WRGAT cannot be employed on most large datasets due to their high memory requirements for simultaneously processing features and conducting propagation, which hinders their application. GPR-GNN, on the contrary, shows no superiority in effectiveness. On the other hand, the decoupling heterophilous models LINKX and GloGNN++ achieve the most competitive accuracy on most datasets: LINKX separately embeds node features and graph topology, and GloGNN++ further performs neighborhood aggregation over the whole set of nodes in the graph, bringing them performance improvements. Notably, the advantage of LINKX is not consistent and is usually larger on large graphs than on smaller ones.
\(\bullet\) Our method SIMGA achieves outstanding performance, with the highest average accuracy on 9 out of 12 datasets and the top rank of 1.25, significantly better than the runner-up GloGNN++. We attribute this superior and stable performance across various heterophilous graphs to the capability of the global aggregation to exploit structural information and evaluate node similarity. The simple decoupled feature transformation network also effectively retrieves node features and generates embeddings. Specifically, on datasets like _snap-patents_, SIMGA outperforms the runner-up method by nearly 1%, a significant improvement indicating that our method aggregates excellent neighborhood information to distinguish homophily from heterophily. Besides, we attribute SIMGA being slightly worse than the best method on three datasets to the shallow feature transformation layers, which may have insufficient capacity for a large amount of input information and can be improved by introducing more specific designs. We leave this for future work.
### Efficiency Study
To evaluate the efficiency of SIMGA, we investigate its learning time on 12 datasets (Table 3) and its convergence on 6 large-scale datasets (Figure 3).
**Learning Time** We compare the learning time of SIMGA with LINKX and GloGNN++, as they are all decoupling heterophilous methods and achieve first-tier performance. Learning time is the sum of pre-computation time and training time, since SIMGA needs to calculate the aggregation matrix before network training. We use the same training set on each dataset and run 5 repeated experiments of 500 epochs each. The average learning time is reported in Table 3.
It can be seen that SIMGA requires the least learning time on all 12 datasets, especially the large ones, thanks to its scalable pre-computation and top-\(k\) global aggregation mechanism. The result aligns with our complexity analysis in Section 3.4. Besides, the one-time similarity calculation in SIMGA also benefits its speed compared with GloGNN++, whose similarity measurement must be updated during training. On datasets such as _Penn94_ and _pokec_, SIMGA is around \(10\times\) faster than GloGNN++, a significant speed-up. Although LINKX is relatively fast due to its simple yet effective architecture, it is still slower than SIMGA, e.g., 5 times slower on the largest dataset _pokec_.
**Convergence** We then study the convergence time as another indicator of model efficiency in graph learning. In Figure 3, we compare SIMGA against the models with leading efficacy in Table 2. Among these methods, MixHop and GCNII are GNNs that aggregate over multi-hop neighbors of the ego node or combine nodes with their degree information, while LINKX and GloGNN++ are decoupling heterophily methods without explicit graph convolution operations. Both SIMGA and GloGNN++ select nodes from the whole set of nodes in the graph, while SIMGA only considers nodes with significant scores and filters out irrelevant ones through top-\(k\) pruning.
Figure 3 shows that SIMGA exhibits favorable convergence on all graphs, i.e., it achieves high accuracy within a short training time. Generally, SIMGA and LINKX are the two fastest models among all competitors. GloGNN++ can converge to first-tier results, but its speed is usually slower than SIMGA's. For example, on the largest dataset _pokec_ with over 1.6 million nodes, SIMGA is approximately 5\(\times\) faster than GloGNN++; for _Penn94_, the speed difference even reaches 10\(\times\). Other methods such as MixHop are exceeded by a larger margin. These results validate that SIMGA is both highly effective and efficient, and can be capably applied to large heterophilous graphs.
\begin{table}
\begin{tabular}{c|c c c c c c c c c c c c} \hline \hline & **Texas** & **Citeseer** & **Cora** & **Chameleon** & **Pubmed** & **Squirrel** & **genius** & **arXiv-year** & **Penn94** & **twitch-gamers** & **snap-patents** & **pokec** \\ \hline LINKX & 8.6 & 19.0 & 12.4 & 12.9 & 20.1 & 19.9 & 292.3 & 51.2 & 112.1 & 302.7 & 701.8 & 2472.9 \\ GloGNN++ & 7.41 & 12.7 & 11.4 & 10.8 & 13.7 & 13.7 & 358.7 & 134.1 & 183.5 & 783.0 & 732.9 & 1564.7 \\
**SIMGA** & **4.3** & **10.2** & **6.6** & **7.5** & **12.1** & **12.7** & **153.6** & **36.5** & **17.2** & **236.5** & **408.2** & **388.5** \\ \hline _precomp_ & 0.0 & 0.1 & 0.2 & 0.1 & 1.6 & 0.3 & 8.6 & 9.3 & 3.9 & 14.0 & 15.9 & 11.4 \\ _training_ & 4.3 & 10.1 & 6.4 & 7.4 & 10.5 & 12.4 & 145.0 & 27.1 & 13.3 & 222.5 & 392.3 & 377.1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **The average learning time (s) of decoupling heterophily methods on 12 datasets. We separately show the _precomputation_ and _training_ time of SIMGA. The fastest is marked in bold.**
### Ablation Study
**Parameters \(\delta\) and \(S\).** In the initial embedding formation of SIMGA, the node attribute matrix and adjacency matrix are transformed into vectors using two MLPs. These vectors are then combined using the parameter \(\delta\) in Eq. (6). To investigate the importance of these two sources of information, we consider two variants: SIMGA-NA and SIMGA-NF. SIMGA-NA removes the adjacency matrix (\(\delta=1\)), while SIMGA-NF removes the node attributes (\(\delta=0\)). After obtaining the initial embeddings \(H\), SIMGA aggregates each node's features using \(S\) in Eq. (7) to capture distant intra-class nodes. We denote the variant that excludes the aggregation matrix \(S\) as SIMGA-NS. The results on large-scale datasets, as shown in Figure 4(a), indicate that removing either matrix leads to a performance drop. This demonstrates the effectiveness of the aggregation with \(S\) and highlights the benefits of incorporating both node attributes and the adjacency matrix in forming initial node embeddings.
**Parameters \(k\) and \(\epsilon\).** Choosing larger values of \(k\) and smaller values of \(\epsilon\) in the calculation of \(S\) increases the computational complexity (Proposition 3.7). In our empirical evaluation (Figure 4(b)), we observed that setting \(\epsilon=0.1\) provides satisfactory classification scores for the largest dataset, _pokec_. A more stringent accuracy requirement of \(\epsilon=0.01\) yields only minimal performance improvements while inflating pre-computation time by approximately 100 times. Similarly, we found that larger values of \(k\) lead to better results. Notably, setting \(k=1024\) already achieves excellent performance, and the amount of information filtered out of \(S\) becomes negligible (refer to Appendix E for a more detailed discussion).
## 6 Conclusion
In this paper, we propose SIMGA, a simple and effective heterophilous graph neural network with near-linear time efficiency. We first derive a new interpretation of SimRank as a global GNN aggregation, highlighting its capability of discovering structural similarity among nodes that are distant but share the same label in the whole graph. Based on this connection, we employ a decoupled GNN architecture and utilize SimRank as the aggregation without iterative calculation, where an efficient approximate computation can be applied, realizing a time complexity near-linear in \(n\).
Figure 4: **Parameter effects of \(\delta\), \(S\), \(k\) and \(\epsilon\). \(\delta\) denotes the balancing factor in Eq. (6), \(k\) the number of selected nodes, \(S\) the aggregation matrix, and \(\epsilon\) the absolute error tolerance for \(S\).**
Figure 3: **Convergence efficiency of SIMGA and selected baselines. X-axis: training time (s); Y-axis: accuracy.**
2304.05293 | Equivariant Graph Neural Networks for Charged Particle Tracking | Graph neural networks (GNNs) have gained traction in high-energy physics
(HEP) for their potential to improve accuracy and scalability. However, their
resource-intensive nature and complex operations have motivated the development
of symmetry-equivariant architectures. In this work, we introduce EuclidNet, a
novel symmetry-equivariant GNN for charged particle tracking. EuclidNet
leverages the graph representation of collision events and enforces rotational
symmetry with respect to the detector's beamline axis, leading to a more
efficient model. We benchmark EuclidNet against the state-of-the-art
Interaction Network on the TrackML dataset, which simulates high-pileup
conditions expected at the High-Luminosity Large Hadron Collider (HL-LHC). Our
results show that EuclidNet achieves near-state-of-the-art performance at small
model scales (<1000 parameters), outperforming the non-equivariant benchmarks.
This study paves the way for future investigations into more resource-efficient
GNN models for particle tracking in HEP experiments. | Daniel Murnane, Savannah Thais, Ameya Thete | 2023-04-11T15:43:32Z | http://arxiv.org/abs/2304.05293v1 | # Equivariant Graph Neural Networks for Charged Particle Tracking
###### Abstract
Graph neural networks (GNNs) have gained traction in high-energy physics (HEP) for their potential to improve accuracy and scalability. However, their resource-intensive nature and complex operations have motivated the development of symmetry-equivariant architectures. In this work, we introduce EuclidNet, a novel symmetry-equivariant GNN for charged particle tracking. EuclidNet leverages the graph representation of collision events and enforces rotational symmetry with respect to the detector's beamline axis, leading to a more efficient model. We benchmark EuclidNet against the state-of-the-art Interaction Network on the TrackML dataset, which simulates high-pileup conditions expected at the High-Luminosity Large Hadron Collider (HL-LHC). Our results show that EuclidNet achieves near-state-of-the-art performance at small model scales (\(<1000\) parameters), outperforming the non-equivariant benchmarks. This study paves the way for future investigations into more resource-efficient GNN models for particle tracking in HEP experiments.
## 1 Introduction
In recent years there has been a sharp increase in the use of graph neural networks (GNNs) for high-energy physics (HEP) analyses [1]. These studies have demonstrated that GNNs have the potential to deliver large improvements in accuracy and can easily scale to sizable volumes of data. However, many of these architectures involve either a large number of parameters or complex graph operations and convolutions, which make GNNs resource-intensive and time-consuming to deploy. Many real-world datasets, including those from HEP experiments, display known symmetries, which can be used to construct alternatives to computationally expensive unconstrained architectures. By exploiting the inherent symmetry in a given problem, we can restrict the function space of neural networks to relevant candidates by enforcing equivariance with respect to transformations belonging to a certain symmetry group. This approach has a two-fold benefit: incorporating equivariance introduces inductive biases into the neural network, and equivariant models may be more resource-efficient than their non-equivariant counterparts [2, 3, 4]. In this work, we introduce a new architecture of symmetry-equivariant GNNs for charged particle tracking. Using a graph representation of a collision event, we propose EuclidNet, which scalarizes input tracking features to enforce rotational symmetry. In particular, given the detector's symmetry around the beamline (\(z\)) axis, we develop a formulation of EuclidNet equivariant to the SO(2) rotation group. Benchmarked against the current state-of-the-art (SoTA) Interaction Network, EuclidNet achieves near-SoTA performance at small model sizes.
We explore the out-of-distribution inference performance of EuclidNet and the Interaction Network in order to explain the behavior of model performance versus model size. In doing so, we reveal hints not only about the upper ceiling of fully-equivariant models, but also about the symmetries learned by non-equivariant models. The code and models are publicly available at [https://github.com/ameya1101/equivariant-tracking](https://github.com/ameya1101/equivariant-tracking).
## 2 Theory and Background
In this section, we introduce the concept of symmetry-group equivariance and provide a short introduction to the theory of graph neural networks.
### Equivariance
Formally, if \(T_{g}:X\to X\) is a set of transformations on a vector space \(X\) for an abstract symmetry group \(g\in G\), a function \(\phi:X\to Y\) is defined to be equivariant to \(g\) if there exists an equivalent set of transformations on the output space \(S_{g}:Y\to Y\) such that:
\[\phi(T_{g}(\mathbf{x}))=S_{g}\phi(\mathbf{x}) \tag{2.1}\]
A model is said to be equivariant to a group \(G\) if it is composed of functions \(\phi\) that are equivariant to \(G\). In this work, we limit our discussion to rotational equivariance corresponding to the \(\mathrm{SO}(2)\) group. As an example, let \(\mathbf{x}=(\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{M})\) be a set of \(M\) points embedded in \(n-\)dimensional space, and \(\phi(\mathbf{x})=\mathbf{y}\in\mathbb{R}^{M\times n}\) be the transformed set of points. For an orthogonal rotation matrix \(Q\in\mathbb{R}^{n\times n}\), \(\phi\) is equivariant to rotations if \(Q\mathbf{y}=\phi(Q\mathbf{x})\).
### Graph Neural Networks
Consider a graph \(\mathcal{G}=(V,E)\) with nodes \(v_{i}\in V\) and edges \(e_{ij}\in E\). A graph neural network is a permutation-invariant deep learning architecture that operates on graph-structured data [5]. A GNN commonly consists of multiple layers, with each layer performing the graph convolution operation, which is defined as [6]:
\[\mathbf{m}_{ij}=\phi_{e}(\mathbf{h}_{i}^{l},\mathbf{h}_{j}^{l},a_{ij}) \tag{2.2}\] \[\mathbf{m}_{i}=\sum_{j\in\mathcal{N}(i)}\mathbf{m}_{ij} \tag{2.3}\] \[\mathbf{h}_{i}^{l+1}=\phi_{h}(\mathbf{h}_{i}^{l},\mathbf{m}_{i}) \tag{2.4}\]
where \(\mathbf{h}_{i}^{l}\in\mathbb{R}^{k}\) is the \(k\)-dimensional embedding of node \(v_{i}\) at layer \(l\), \(a_{ij}\) are edge attributes, and \(\mathcal{N}(i)\) is the set of neighbors of node \(v_{i}\). Finally, \(\phi_{e}\) and \(\phi_{h}\) are edge and node operations approximated by multi-layer perceptrons (MLPs). The \(\mathbf{m}_{ij}\) are called messages, which are passed between nodes \(v_{i}\) and \(v_{j}\).
## 3 Network Architecture
In this section, we describe the architecture of EuclidNet. As described in Figure 1, EuclidNet is constructed by stacking Euclidean Equivariant Blocks (EEB) along with some encoding and decoding layers. The architecture of EuclidNet closely follows that of LorentzNet presented in [7]. In this section, \(\phi(a,b,\dots,f)\) implies that the quantities \(a\) through \(f\) are concatenated before being passed to \(\phi\).
**Input layer.** The inputs to the network are 3D hit positions. For the \(\mathrm{SO}(2)\) group, scalars are the \(z\)-coordinate of the hit position, while the 2D coordinates form the vectors. The scalars are projected to an embedding space using an embedding layer before being passed to the first
equivariant block.
**Euclidean Equivariant Block.** Following convention, we use \(h^{l}=(h^{l}_{1},h^{l}_{2},\dots,h^{l}_{N})\) to denote node embedding scalars and \(x^{l}=(x^{l}_{1},x^{l}_{2},\dots,x^{l}_{N})\) to denote coordinate embedding vectors in the \(l-\)th EEB layer. \(x^{0}\) corresponds to the hit positions and \(h^{0}\) corresponds to the embedded input of scalar variables. The message \(m^{l}_{ij}\) to be passed is constructed as follows:
\[m^{l}_{ij} =\phi_{m}\left(h^{l}_{i},h^{l}_{j},e^{l}_{ij},\psi(\|x^{l}_{i}-x^{ l}_{j}\|^{2}),\psi(\langle x^{l}_{i},x^{l}_{j}\rangle)\right) \tag{3.1}\] \[e^{l}_{ij} =\phi_{e}\left(\psi(\|x^{l}_{i}-x^{l}_{j}\|^{2}),\psi(\langle x^{ l}_{i},x^{l}_{j}\rangle),e^{l-1}_{ij}\right) \tag{3.2}\]
where \(\phi_{m}(\cdot)\) is a neural network and \(\psi(\cdot)=\text{sgn}(\cdot)\log(|\cdot|+1)\) normalizes large numbers from broad distributions to ease training. The input to \(\phi_{m}\) also contains the Euclidean dot product \(\langle x^{l}_{i},x^{l}_{j}\rangle\). The Euclidean distance \(\|x^{l}_{i}-x^{l}_{j}\|^{2}\) between hits is an important feature and we include it for ease of training. \(e^{l}_{ij}\) is an edge significance weight learnt by an MLP.
We use a Euclidean formulation of the dot-product attention from [7] as the aggregation function, which is defined as:
\[x^{l+1}_{i}=x^{l}_{i}+c\sum_{j\in\mathcal{N}(i)}\phi_{x}(m^{l}_{ij})\cdot(x^{l }_{i}-x^{l}_{j}) \tag{3.3}\]
where \(\phi_{x}(\cdot)\in\mathbb{R}\) is a scalar function modeled by an MLP. The hyperparameter \(c\) is introduced to control the scale of the updates. The scalar features, \(h^{l}_{i}\), are updated as:
\[h^{l+1}_{i}=h^{l}_{i}+\phi_{h}\left(h^{l}_{i},\sum_{j\in\mathcal{N}(i)}m^{l}_{ ij}\right) \tag{3.4}\]
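The following PyTorch sketch transcribes Eqs. (3.1)–(3.4) for a single EEB layer operating on an edge list; the MLP widths and SiLU activations are illustrative assumptions, not the exact configuration used in our experiments:

```python
import torch
import torch.nn as nn

def psi(z):
    # psi(z) = sgn(z) * log(|z| + 1): tames broad input distributions
    return torch.sign(z) * torch.log(torch.abs(z) + 1)

class EEB(nn.Module):
    """One Euclidean Equivariant Block, Eqs. (3.1)-(3.4)."""
    def __init__(self, d_h, d_e, c=1e-3):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(2 + d_e, d_e), nn.SiLU())
        self.phi_m = nn.Sequential(nn.Linear(2 * d_h + d_e + 2, d_h), nn.SiLU())
        self.phi_x = nn.Linear(d_h, 1)      # scalar edge weight for Eq. (3.3)
        self.phi_h = nn.Sequential(nn.Linear(2 * d_h, d_h), nn.SiLU())
        self.c = c

    def forward(self, h, x, e, edge_index):
        # h: (N, d_h) scalars, x: (N, 2) coordinates, e: (E, d_e) edge
        # features, edge_index: (2, E) LongTensor of edges j -> i
        src, dst = edge_index
        diff = x[dst] - x[src]
        d2 = psi((diff ** 2).sum(-1, keepdim=True))         # psi(||x_i - x_j||^2)
        dot = psi((x[dst] * x[src]).sum(-1, keepdim=True))  # psi(<x_i, x_j>)
        e = self.phi_e(torch.cat([d2, dot, e], dim=-1))                  # Eq. (3.2)
        m = self.phi_m(torch.cat([h[dst], h[src], e, d2, dot], dim=-1))  # Eq. (3.1)
        x = x.index_add(0, dst, self.c * self.phi_x(m) * diff)           # Eq. (3.3)
        agg = torch.zeros_like(h).index_add(0, dst, m)
        h = h + self.phi_h(torch.cat([h, agg], dim=-1))                  # Eq. (3.4)
        return h, x, e
```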
Figure 1: (**left**) The structure of the Euclidean Equivariant Block (EEB). (**right**) The network architecture of EuclidNet.
**Decoding layer.** After \(L\) stacks of EEBs, we decode the node embedding \(h^{L}=(h^{L}_{1},h^{L}_{2},\dots,h^{L}_{N})\). Message passing ensures that the information contained in the vector embeddings is propagated to the scalars, so it is redundant to decode both. A decoding block with three fully connected layers, followed by a softmax function, is used to generate a truth score for each track segment.
## 4 Results
### Dataset
In this study, we test the developed tracking models on the TrackML dataset, a simulated set of proton-proton collision events developed for the TrackML Particle Tracking Challenge [8]. Events are generated with 200 pileup interactions on average, simulating the high-pileup conditions expected at the HL-LHC. Each event contains three-dimensional hit positions and truth information about the particles that generated them. In this work, we limit our discussion to the pixel layers only, which consist of a highly granular set of four barrel and fourteen endcap layers in the innermost region. Each event's tracker hits are converted to a hitgraph through an edge construction algorithm. In addition to transverse momentum, noise, and same-layer filters that modulate the number of hits, graph edges are also required to satisfy constraints on certain geometric quantities. In this study, we use the geometric graph construction strategy from [9] to generate graphs, with \(p_{T}^{\text{min}}=1.5\) GeV for each event in the dataset.
### Experiments
We train EuclidNet and the Interaction Network for three different values of hidden channels: 8, 16, and 32. The results obtained on the TrackML dataset are summarized in Table 1. We evaluate the models using the Area Under the ROC curve (AUC), which is a commonly used metric in classification problems. We also report the number of model parameters, as well as the purity (fraction of true edges to total edges in the graph) and efficiency (fraction of all true track segments correctly classified) of the resulting event graphs.
For small model sizes, EuclidNet outperforms the unconstrained Interaction Network. Rotational symmetry is approximately obeyed in the generic TrackML detector, and using this symmetry appears, to first order, to produce an accurate edge-classification GNN. However, given a larger latent space, the performance with rotational symmetry enforced plateaus; at that point, the unconstrained network is more performant. We hypothesize that, as in real detectors like ATLAS and CMS, the rotational symmetry of the TrackML dataset is likely only approximate due to the granularity of the detector segments and inhomogeneity in the magnetic field surrounding the detector. To study this possibility, we train both models on a set of rotations of the dataset by \(\theta\in[0,\pi/4,\pi/2,3\pi/4,\pi]\), and run inference for each of these instances across the set of rotated
\begin{table}
\begin{tabular}{l l l l l l} \hline \(N_{hidden}\) & **Model** & **Params** & **AUC** & **Efficiency** & **Purity** \\ \hline \multirow{2}{*}{8} & EuclidNet & 967 & \(\mathbf{0.9913\pm 0.004}\) & \(\mathbf{0.9459\pm 0.022}\) & \(\mathbf{0.7955\pm 0.040}\) \\ & InteractionNet & 1432 & \(0.9849\pm 0.006\) & \(0.9314\pm 0.021\) & \(0.7319\pm 0.052\) \\ \hline \multirow{2}{*}{16} & EuclidNet & 2580 & \(0.9932\pm 0.003\) & \(0.9530\pm 0.014\) & \(\mathbf{0.8194\pm 0.033}\) \\ & InteractionNet & 4392 & \(0.9932\pm 0.004\) & \(\mathbf{0.9575\pm 0.019}\) & \(0.8168\pm 0.073\) \\ \hline \multirow{2}{*}{32} & EuclidNet & 4448 & \(0.9941\pm 0.003\) & \(0.9547\pm 0.019\) & \(0.9264\pm 0.023\) \\ & InteractionNet & 6448 & \(\mathbf{0.9978\pm 0.003}\) & \(\mathbf{0.9785\pm 0.022}\) & \(\mathbf{0.9945\pm 0.043}\) \\ \hline \end{tabular}
\end{table}
Table 1: Performance comparison between EuclidNet and the Interaction Network (IN) on the TrackML dataset. The results for EuclidNet and IN are averaged over 5 runs.
datasets. In this way, we can capture whether the unconstrained network is learning a function specific to that orientation of the detector's material and magnetic field. The results of this study are summarized in Figure 2.
In Figure 2, we observe that both the SO(2)-equivariant EuclidNet and the Interaction Network are robust to rotations in input space, irrespective of which rotation of the dataset the model was trained on. The general trend observed in Table 1 is also replicated here, with models having a larger latent space producing larger AUC scores. Although further study is needed to interpret the unconstrained network's superior performance, we posit that it might be an artifact of the dataset's inherent approximate rotational symmetry. In that case, a less expressive EuclidNet would not be able to completely match the approximate symmetry of the event, and would outperform the IN only at latent space sizes that are not sufficient for the IN to capture the complete symmetry set. However, given a large enough latent space, an unconstrained network such as the IN appears to easily learn both the approximate symmetry and the non-symmetric corrections in the dataset. In the future, a more in-depth study leveraging explainable AI techniques and methods like 'LieGG' [10] to determine the actual symmetry learnt by both EuclidNet and the IN is required to ascertain the cause of this behaviour.
## 5 Conclusions and Future Work
In this study, we have presented EuclidNet, the first Euclidean rotation-equivariant GNN for the particle tracking problem. The SO(2)-equivariant model offers a marginal improvement (AUC = 0.9913) over the benchmark (AUC = 0.9849) at small model scales (\(<1000\) parameters). However, for the particle tracking problem, we find that an unconstrained model still outperforms the equivariant architecture at larger model sizes. More work is needed to concretely establish the reasons for this result. Possible future directions include studying the problem's equivariance and identifying non-equivariant facets, if they exist, as well as investigating the quality of the learnt symmetry. However, if the dataset inherently contains non-equivariant features, any symmetric model will likely always underperform. In this case, models such as [4], which relax the strict constraints imposed by symmetry-following architectures, might be able to learn the
Figure 2: The AUC at inference time plotted as a function of rotations in the input space by an angle \(\theta\) for models trained on a set of rotations of the dataset by \(\theta\in[0,\pi/4,\pi/2,3\pi/4,\pi]\) for (**left**) EuclidNet and (**right**) the Interaction Network calculated over 5 independent inference runs.
non-equivariant aspects of the tracking dataset.
## Acknowledgements
This work was supported by IRIS-HEP through the U.S. National Science Foundation under Cooperative Agreement OAC-1836650. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.
## Appendix A Training Details
We use the binary cross-entropy loss function for both EuclidNet and the Interaction Network. All models are optimized using the Adam optimizer [11] implemented in PyTorch with a learning rate \(\eta=1\times 10^{-3}\) and momentum coefficients \((\beta_{1},\beta_{2})=(0.9,0.999)\). The learning rate is decayed by a factor of 0.3 every 100 epochs. Both models are trained on a single NVIDIA A100 GPU for 200 epochs, using early stopping with a patience of 50 epochs. The total training time for each model is typically 2 hours.
## Appendix B Equivariance Tests
We also test the equivariance of EuclidNet and find that it is indeed equivariant to SO(2) transformations up to floating-point numerical errors. For a given transformation \(R\in SO(2)\), we compare \(R_{\theta}\phi(\mathbf{x})\) and \(\phi(R_{\theta}\mathbf{x})\), where \(\mathbf{x}\) are the hit coordinates and \(\phi(\cdot)\) is a EuclidNet instance. In Figure 11 we plot the relative deviation, defined as
\[\varepsilon(\theta)=\frac{\langle\phi(R_{\theta}\mathbf{x})\rangle-\langle R_ {\theta}\phi(\mathbf{x})\rangle}{\langle R_{\theta}\phi(\mathbf{x})\rangle}\] (B.1)
where \(\langle\cdot\rangle\) is the mean computed over 500 tracking events in the test dataset. We find the relative deviation from rotations to be \(<10^{-8}\).
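A minimal sketch of this test is shown below; the model is assumed to map \((N,2)\) transverse hit coordinates to \((N,2)\) outputs (e.g., the coordinate channel of EuclidNet), and a trivially equivariant map is used here as a stand-in for a trained network:

```python
import math
import torch

def relative_deviation(model, events, theta):
    """Eq. (B.1): compare <phi(R_theta x)> against <R_theta phi(x)>."""
    c, s = math.cos(theta), math.sin(theta)
    R = torch.tensor([[c, -s], [s, c]])
    with torch.no_grad():
        phi_Rx = torch.stack([model(x @ R.T).mean() for x in events]).mean()
        R_phix = torch.stack([(model(x) @ R.T).mean() for x in events]).mean()
    return ((phi_Rx - R_phix) / R_phix).item()

# sanity check with a trivially equivariant map: the deviation vanishes
events = [torch.rand(10, 2) + 1.0 for _ in range(5)]
print(relative_deviation(lambda x: 2.0 * x, events, theta=0.7))
```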
|
2305.00997 | The Expressivity of Classical and Quantum Neural Networks on
Entanglement Entropy | Analytically continuing the von Neumann entropy from R\'enyi entropies is a
challenging task in quantum field theory. While the $n$-th R\'enyi entropy can
be computed using the replica method in the path integral representation of
quantum field theory, the analytic continuation can only be achieved for some
simple systems on a case-by-case basis. In this work, we propose a general
framework to tackle this problem using classical and quantum neural networks
with supervised learning. We begin by studying several examples with known von
Neumann entropy, where the input data is generated by representing $\text{Tr}
\rho_A^n$ with a generating function. We adopt KerasTuner to determine the
optimal network architecture and hyperparameters with limited data. In
addition, we frame a similar problem in terms of quantum machine learning
models, where the expressivity of the quantum models for the entanglement
entropy as a partial Fourier series is established. Our proposed methods can
accurately predict the von Neumann and R\'enyi entropies numerically,
highlighting the potential of deep learning techniques for solving problems in
quantum information theory. | Chih-Hung Wu, Ching-Che Yen | 2023-05-01T18:00:01Z | http://arxiv.org/abs/2305.00997v1 | # The Expressivity of Classical and Quantum Neural Networks on Entanglement Entropy
###### Abstract
Analytically continuing the von Neumann entropy from Renyi entropies is a challenging task in quantum field theory. While the \(n\)-th Renyi entropy can be computed using the replica method in the path integral representation of quantum field theory, the analytic continuation can only be achieved for some simple systems on a case-by-case basis. In this work, we propose a general framework to tackle this problem using classical and quantum neural networks with supervised learning. We begin by studying several examples with known von Neumann entropy, where the input data is generated by representing \(\operatorname{Tr}\rho_{A}^{n}\) with a generating function. We adopt KerasTuner to determine the optimal network architecture and hyperparameters with limited data. In addition, we frame a similar problem in terms of quantum machine learning models, where the expressivity of the quantum models for the entanglement entropy as a partial Fourier series is established. Our proposed methods can accurately predict the von Neumann and Renyi entropies numerically, highlighting the potential of deep learning techniques for solving problems in quantum information theory.
###### Contents
* 1 Introduction
* 2 Analytic continuation of von Neumann entropy from Renyi entropies
* 3 Deep learning von Neumann entropy
* 3.1 Model architectures and training strategies
* 3.2 Entanglement entropy of a single interval
* 3.3 Entanglement entropy of two disjoint intervals
* 4 Renyi entropies as sequential deep learning
* 4.1 Model architectures and training strategies
* 4.2 Examples of the sequential models
* 5 Quantum neural networks and von Neumann entropy
* 5.1 Fourier series from variational quantum machine learning models
* 5.2 The generating function as a Fourier series
* 5.3 The expressivity of the quantum models on the entanglement entropy
* 5.4 Recovering the von Neumann entropy
* 6 Discussion
* A Fourier series representation of the generating function
* B The Gegenbauer polynomials and the Gibbs phenomenon
## 1 Introduction
The _von Neumann entropy_ is widely regarded as an effective measure of quantum entanglement, and is often referred to as _entanglement entropy_. The study of entanglement entropy has yielded valuable applications, particularly in the context of quantum information and quantum gravity (see [1; 2] for a review). However, the analytic continuation from the _Renyi entropies_ to von Neumann entropy remains a challenge in quantum field theory for general systems. We tackle this problem using both classical
and quantum neural networks to examine their expressive power on entanglement entropy and the potential for simpler reconstruction of the von Neumann entropy from Renyi entropies.
Quantum field theory (QFT) provides an efficient method to compute the \(n\)-th Renyi entropy with integer \(n>1\), which is defined as [3]
\[S_{n}(\rho_{A})\equiv\frac{1}{1-n}\ln\mathrm{Tr}(\rho_{A}^{n}). \tag{1}\]
The computation is done by replicating the path integral representation of the reduced density matrix \(\rho_{A}\) by \(n\) times. This step is non-trivial; however, we will be mainly looking at examples where explicit analytic expressions of the Renyi entropies are available, especially in two-dimensional conformal field theories (CFT\({}_{2}\)) [4; 5; 6; 7]. Then upon analytic continuation of \(n\to 1\), we have the von Neumann entropy
\[S(\rho_{A})=\lim_{n\to 1}S_{n}(\rho_{A}). \tag{2}\]
The continuation can be viewed as an independent problem from computing the \(n\)-th Renyi entropy. Although the uniqueness of \(S(\rho_{A})\) from the continuation is guaranteed by Carlson's theorem, analytic expressions in closed forms are currently unknown for most cases.
Furthermore, while \(S_{n}(\rho_{A})\) are well-defined in both integer and non-integer \(n\), determining it for a set of integer values \(n>1\) is not sufficient. To obtain the von Neumann entropy, we must also take the limit \(n\to 1\) through a _space_ of real \(n>1\). The relationship between the Renyi entropies and the von Neumann entropy is therefore complex, and the required value of \(n\) for a precise numerical approximation of \(S(\rho_{A})\) is not clear.
Along this line, we are motivated to adopt an alternative method proposed in [8], which would allow us to study the connection between higher Renyi entropies and von Neumann entropy "accumulatively." This method relies on defining a generating function that manifests as a Taylor series
\[G(w;\rho_{A})=\sum_{k=1}^{\infty}\frac{\tilde{f}(k)}{k}w^{k},\quad\tilde{f}(k )=\mathrm{Tr}[\rho_{A}(1-\rho_{A})^{k}]. \tag{3}\]
Summing over \(k\) explicitly yields an absolutely convergent series that approximates the von Neumann entropy with increasing accuracy as \(w\to 1\). This method has both numerical and analytical advantages, where we refer to [8] for explicit examples. Note that the accuracy we can achieve in approximating the von Neumann entropy depends on the truncation of the partial sum in \(k\), which is case-dependent and can be
difficult to evaluate. It becomes particularly challenging when evaluating the higher-order Riemann-Siegel theta function in the general two-interval case of CFT\({}_{2}\)[8], which remains an open problem.
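As a sanity check of this approach in a setting where the spectrum of \(\rho_{A}\) is known, the partial sum of Eq. (3) can be evaluated directly; the following NumPy sketch, with an illustrative two-level reduced density matrix and hand-picked values of \(w\) and the truncation order \(k_{\max}\), recovers the exact von Neumann entropy to high accuracy:

```python
import numpy as np

def entropy_from_generating_function(rho, w=1.0 - 1e-6, kmax=2000):
    """Partial sum of Eq. (3) from the spectrum of a finite-dimensional
    density matrix; G(w) -> S(rho) as w -> 1 and kmax -> infinity."""
    lam = np.linalg.eigvalsh(rho)                  # eigenvalues of rho_A
    k = np.arange(1, kmax + 1)[:, None]
    # f~(k) = Tr[rho (1 - rho)^k] = sum_i lam_i (1 - lam_i)^k
    terms = lam[None, :] * (1.0 - lam[None, :]) ** k * w ** k / k
    return terms.sum()

# qubit example: a two-level reduced density matrix with eigenvalues p, 1-p
p = 0.9
rho_A = np.diag([p, 1.0 - p])
exact = -(p * np.log(p) + (1 - p) * np.log(1 - p))
print(entropy_from_generating_function(rho_A), exact)  # both ~0.3251
```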
On the other hand, deep learning techniques have emerged as powerful tools for tackling the analytic continuation problem [9; 10; 11; 12; 13; 14], thanks to their universal approximation property. The universal approximation theorem states that artificial neural networks can approximate any continuous function under mild assumptions [15], where the von Neumann entropy is no exception. A neural network is trained on a dataset of known function values, with the objective of learning a latent manifold that can approximate the original function within the known parameter space. Once trained, the model can be used to make predictions outside the space by extrapolating the trained network. The goal is to minimize the prediction errors between the model's outputs and the actual function values. In our study, we frame the supervised learning task in two distinct ways: the first approach involves using densely connected neural networks to predict von Neumann entropy, while the second utilizes sequential learning models to extract higher Renyi entropies.
Instead of using a static "define-and-run" scheme, where the model structure is defined beforehand and remains fixed throughout training, we have opted for a dynamic "define-by-run" approach. Our goal is to determine the optimal model complexity and hyperparameters based on the input validation data automatically. To achieve this, we employ KerasTuner [16] with Bayesian optimization, which efficiently explores the hyperparameter space by training and evaluating different neural network configurations using cross-validation. KerasTuner uses the results to update a probabilistic model of the hyperparameter space, which is then used to suggest the next set of hyperparameters to evaluate, aiming to maximize expected performance improvement.
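As a concrete illustration of this "define-by-run" search, the sketch below builds a tunable densely connected network with KerasTuner's Bayesian optimization; the depth and width ranges, trial budget, and regression loss are illustrative assumptions rather than the exact search space used in this work:

```python
import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    """Define-by-run model: depth, widths, and learning rate are searched."""
    model = keras.Sequential()
    for i in range(hp.Int("num_layers", 1, 4)):
        model.add(keras.layers.Dense(
            hp.Int(f"units_{i}", 32, 256, step=32), activation="relu"))
    model.add(keras.layers.Dense(1))               # scalar regression target
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Float("lr", 1e-4, 1e-2, sampling="log")),
        loss="mse")
    return model

tuner = kt.BayesianOptimization(build_model, objective="val_loss",
                                max_trials=20)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val))  # hypothetical data
```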
A similar question can be explicitly framed in terms of quantum machine learning, where a trainable quantum circuit can be used to emulate neural networks by encoding both the data inputs and the trainable weights using quantum gates. This approach bears many different names [17; 18; 19; 20; 21; 22], but we will call it a _quantum neural network_. Unlike classical neural networks, quantum neural networks are defined through a series of well-defined unitary operations, rather than by numerically optimizing the weights for the non-linear mapping between targets and data. This raises a fundamental question for quantum computing practitioners: can _any unitary operation_ be realized, or is there a particular characterization for the _learnable function class_? In other words, is the quantum model universal in its ability to express any function with the given data input? Answering these questions will not only aid in designing future algorithms, but also provide deeper insights into how quantum models achieve universal approximation [23; 24].
Recent progress in quantum neural networks has shown that data-encoding strategies play a crucial role in their expressive power. The problem of data encoding has been the subject of extensive theoretical and numerical studies [25; 26; 27; 28]. In this work, we build on the idea introduced in [29; 30], which demonstrated the expressivity of quantum models as partial Fourier series. By rewriting the generating function for the von Neumann entropy in terms of a Fourier series, we can similarly establish the expressivity using quantum neural networks. However, the Gibbs phenomenon in the Fourier series poses a challenge in recovering the von Neumann entropy. To overcome this, we reconstruct the entropy by expanding the Fourier series into a basis of Gegenbauer polynomials.
The structure of this paper is as follows. In Sec. 2, we provide a brief overview for the analytic continuation of the von Neumann entropy from Renyi entropies within the framework of QFT. In addition, we introduce the generating function method that we use throughout the paper. In Sec. 3, we use densely connected neural networks with KerasTuner to extract the von Neumann entropy for several examples where analytic expressions are known. In Sec. 4, we employ sequential learning models for extracting higher Renyi entropies. Sec. 5 is dedicated to studying the expressive power of quantum neural networks in approximating the von Neumann entropy. In Sec. 6, we summarize our findings and discuss possible applications of our approach. A is devoted to the details of rewriting the generating function as a partial Fourier series, while Appendix. B addresses the Gibbs phenomenon using Gegenbauer polynomials.
## 2 Analytic continuation of von Neumann entropy from Renyi entropies
Let us discuss how to calculate the von Neumann entropy in QFTs [31; 32; 33; 34]. Suppose we start with a QFT on a \(d\)-dimensional Minkowski spacetime with its Hilbert space specified on a Cauchy slice \(\Sigma\) of the spacetime. Without loss of generality, we can divide \(\Sigma\) into two disjoint sub-regions \(\Sigma=A\cup A^{c}\), where \(A^{c}\) denotes the complement sub-region of \(A\). The Hilbert space then factorizes into the tensor product \(\mathcal{H}_{\Sigma}=\mathcal{H}_{A}\otimes\mathcal{H}_{A^{c}}\). From a pure state on \(\Sigma\), we define a reduced density matrix \(\rho_{A}\), which is in general mixed, to capture the entanglement between the two regions. The von Neumann entropy \(S(\rho_{A})\) allows us to quantify this entanglement
\[S(\rho_{A})\equiv-\operatorname{Tr}(\rho_{A}\ln\rho_{A})=\frac{\operatorname{ Area}(\partial A)}{\epsilon^{d-2}}+\cdots. \tag{1}\]
Along with several nice properties, such as the invariance under unitary operations, complementarity for pure states, and a smooth interpolation between pure and maximally mixed states, it is therefore a fine-grained measure for the amount of entanglement between \(A\) and \(A^{c}\). The second equality holds for field theory, where we require a length scale \(\epsilon\) to regulate the UV divergence encoded in the short-distance correlations. The leading-order divergence is captured by the area of the entangling surface \(\partial A\), a universal feature of QFTs [35].1
Footnote 1: In CFT\({}_{2}\), the leading divergence for a single interval \(A\) of length \(\ell\) in the vacuum state on an infinite line is instead a logarithmic function of the length; this is the simplest example we will consider later.
There have been efforts to better understand the structure of the entanglement in QFTs, including free theory [36], heat kernels [37; 38], CFT techniques [39] and holographic methods based on AdS/CFT [40; 41]. But operationally, computing the von Neumann entropy analytically or numerically is still a daunting challenge for generic interacting QFTs. For a review, see [1].
The path integral provides a general method to access \(S(\rho_{A})\). The method starts with the Renyi entropies [3]
\[S_{n}(\rho_{A})=\frac{1}{1-n}\ln\operatorname{Tr}\rho_{A}^{n}, \tag{2}\]
for real \(n>1\). As previously mentioned, obtaining the von Neumann entropy via analytic continuation in \(n\) with \(n\to 1\) requires two crucial steps. An analytic form for the \(n\)-th Renyi entropy must first be derived from the underlying field theory, and then we need to perform the analytic continuation toward \(n\to 1\). These two steps are independent problems and often require different techniques. We briefly comment on the two steps below.
Computing \(\operatorname{Tr}\rho_{A}^{n}\) directly is not easy; this is where the replica method enters. The early form of the replica method was developed in [34], and it was later used to compute various examples in CFT\({}_{2}\)[4; 5; 6; 7], which can be compared with holographic ones [42]. The idea behind the replica method is to consider an orbifold of \(n\) copies of the field theory to compute \(\operatorname{Tr}\rho_{A}^{n}\) for positive integers \(n\). The computation reduces to evaluating the partition function on an \(n\)-sheeted Riemann surface, which can alternatively be computed from correlation functions of twist operators in the \(n\) copies. For more details on the construction in CFTs, see [4; 5; 6; 7]. If we are able to compute \(\operatorname{Tr}\rho_{A}^{n}\) for any positive integer \(n\geq 1\), we have
\[S(\rho_{A})=\lim_{n\to 1}S_{n}(\rho_{A})=-\lim_{n\to 1}\frac{\partial}{ \partial n}\operatorname{Tr}\rho_{A}^{n}. \tag{3}\]
This is computable for special states and regions, such as ball-shaped regions for the vacuum of the CFT\({}_{d}\). Moreover, in CFT\({}_{2}\), whose infinite-dimensional symmetry is sufficient to fix lower-point correlation functions, we are able to compute \(\operatorname{Tr}\rho_{A}^{n}\) in several instances.
The analytic continuation in \(n\to 1\) is more subtle. Ensuring the existence of a unique analytic extension away from integer \(n\) typically requires the application of Carlson's theorem. This theorem guarantees the uniqueness of the analytic continuation from Renyi entropies to the von Neumann entropy, provided that we can find some locally holomorphic function \(\mathcal{S}_{\nu}\) with \(\nu\in\mathbb{C}\) such that \(\mathcal{S}_{n}=S_{n}(\rho)\) for all integers \(n>1\), with appropriate asymptotic behavior as \(\nu\to\infty\). Then the continuation \(S_{\nu}(\rho)=\mathcal{S}_{\nu}\) is unique [43; 44]. Carlson's theorem addresses not only the problem of unique analytic continuation but also the issue of continuing across non-integer values of the Renyi entropies.
There are other methods to evaluate \(S(\rho_{A})\) in the context of string theory and AdS/CFT; see for examples [45; 46; 47; 48; 49; 50]. In this work, we would like to focus on an effective method outlined in [8] that is suitable for numerical considerations. In [8], the following generating function is used for the analytic continuation in \(n\) with a variable \(z\)
\[G(z;\rho_{A})\equiv-\operatorname{Tr}\bigg(\rho_{A}\ln\frac{1-z\rho_{A}}{1-z}\bigg)=\sum_{k=1}^{\infty}\frac{z^{k}}{k}\bigg(\operatorname{Tr}(\rho_{A}^{k+1})-1\bigg). \tag{4}\]
This Taylor series is absolutely convergent in the unit disc \(|z|<1\). We can analytically continue the function from the unit disc to a holomorphic function on \(\mathbb{C}\setminus[1,\infty)\) by choosing the branch cut of the logarithm to lie along the positive real axis. The limit \(z\to-\infty\) is within the domain of holomorphicity and is exactly where we obtain the von Neumann entropy
\[S(\rho_{A})=\lim_{z\to-\infty}G(z;\rho_{A}). \tag{5}\]
However, a more useful form can be obtained by performing a Möbius transformation to a new variable \(w\)
\[G(w;\rho_{A})=-\operatorname{Tr}\bigg{(}\rho_{A}\ln\left\{1-w(1-\rho_{A}) \right\}\bigg{)},\quad w=\frac{z}{z-1}. \tag{6}\]
It again manifests as a Taylor series
\[G(w;\rho_{A})=\sum_{k=1}^{\infty}\frac{\tilde{f}(k)}{k}w^{k}, \tag{7}\]
where
\[\tilde{f}(k)=\operatorname{Tr}[\rho_{A}(1-\rho_{A})^{k}]=\sum_{m=0}^{k}\frac{ (-1)^{m}k!}{m!(k-m)!}\operatorname{Tr}\big{(}\rho_{A}^{m+1}\big{)}. \tag{8}\]
We again have a series written in terms of \(\operatorname{Tr}\rho_{A}^{n}\), and it is absolutely convergent in the unit disc \(|w|<1\). The convenience of using \(w\) is that by taking \(w\to 1\), we have the von Neumann entropy
\[S(\rho_{A})=\lim_{w\to 1}G(w;\rho_{A})=\sum_{k=1}^{\infty}\frac{\tilde{f}(k)}{k}. \tag{9}\]
This provides an exact expression of \(S(\rho_{A})\) starting from a known expression of \(\operatorname{Tr}\rho_{A}^{n}\). Numerically, we can obtain an accurate value of \(S(\rho_{A})\) by computing a partial sum in \(k\). The method guarantees that by summing to sufficiently large \(k\), we approach the von Neumann entropy with increasing accuracy.
However, a difficulty is that we need to sum up \(k\sim 10^{3}\) terms to achieve precision within \(10^{-3}\) in general [8]. It will be computationally costly for certain cases with complicated \(\operatorname{Tr}\rho_{A}^{n}\). Therefore, one advantage the neural network framework offers is the ability to give accurate predictions with only a limited amount of data, making it a more efficient method.
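To make this cost concrete, the truncated sum can be evaluated directly from \(\operatorname{Tr}\rho_{A}^{n}\) via (8) and (9). The following is a minimal Python sketch (not the code used for our experiments), where `tr_rho_n` is a placeholder callable returning \(\operatorname{Tr}\rho_{A}^{n}\) for whichever example is under consideration:

```python
from math import comb

def tilde_f(k, tr_rho_n):
    """Eq. (8): binomial expansion of Tr[rho_A (1 - rho_A)^k]."""
    return sum((-1)**m * comb(k, m) * tr_rho_n(m + 1) for m in range(k + 1))

def entropy_partial_sum(k_max, tr_rho_n):
    """Truncation of Eq. (9); every term is non-negative, so the partial
    sums increase monotonically toward S(rho_A) as k_max grows."""
    return sum(tilde_f(k, tr_rho_n) / k for k in range(1, k_max + 1))
```

Pushing `k_max` to \(\sim 10^{3}\) in such a sketch reproduces the slow convergence described above, which is precisely the cost that the trained networks avoid.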
In this paper, we focus on various examples from CFT\({}_{2}\) with known analytic expressions of \(\operatorname{Tr}\rho_{A}^{n}\)[6], and we use the generating function \(G(w;\rho_{A})\) to generate the required training datasets for the neural networks.
## 3 Deep learning von Neumann entropy
This section aims to utilize deep neural networks to predict the von Neumann entropy via a supervised learning approach. By leveraging the gradient-based learning principle of the networks, we expect to find a non-linear mapping between the input data and the output targets. In the analytic continuation problem from the \(n\)-th Renyi entropy to the von Neumann entropy, such a non-linear mapping naturally arises. Accordingly, we consider \(S_{n}(\rho_{A})\) (equivalently \(\operatorname{Tr}\rho_{A}^{n}\) and the generating function) as our input data and \(S(\rho_{A})\) as the target function for the training process. Since this is supervised learning, we consider examples where analytic expressions for both are available. Ultimately, we will employ the trained models to predict the von Neumann entropy across various physical parameter regimes, demonstrating the efficacy and robustness of the approach.
The major advantage of using deep neural networks lies in that they improve the accuracy of the generating function for computing the von Neumann entropy. As we mentioned, the accuracy of this method depends on where we truncate the partial sum, and it often requires summing up a large \(k\) in (9), which is numerically difficult. In a sense, it requires knowing much more information, such as those of the higher Renyi entropies indicated by \(\operatorname{Tr}\rho_{A}^{n}\) in the series. Trained neural networks are able to predict
the von Neumann entropy more accurately given much fewer terms in the input data. We can even predict the von Neumann entropy for other parameter spaces without resorting to any data from the generating function.
Furthermore, the non-linear mappings the deep neural networks uncover can be useful for investigating the expressive power of neural networks on the von Neumann entropy. Additionally, they can be applied to study cases where analytic continuations are unknown and other entanglement measures that require analytic continuations.
In the following subsections, we will give more details on our data preparation and training strategies, then we turn to explicit examples as demonstrations.
### Model architectures and training strategies
Generating suitable training datasets and designing flexible deep learning models are empirically driven. In this subsection, we outline our strategies for both aspects.
**Data preparation**
To prepare the training datasets, we consider several examples with known \(S(\rho_{A})\). We use the generating function \(G(w;\rho)\), which can be computed from \(\operatorname{Tr}\rho_{A}^{n}\) for each example. This is equivalent to computing the higher Renyi entropies for different choices of physical parameters, since the "information" available is always \(\operatorname{Tr}\rho_{A}^{n}\). Note, however, that the higher Renyi entropies all carry distinct information. Adopting the generating function is therefore preferable to using \(S_{n}(\rho_{A})\) itself, as it approaches the von Neumann entropy with increasing accuracy, making the comparison more transparent.
We generate \(N=10000\) input datasets for a fixed range of physical parameters, where each set contains \(k_{\text{max}}=50\) terms in (9); their corresponding von Neumann entropies will be the targets. We limit the amount of data to mimic the computational cost of using the generating function. We shuffle the input datasets randomly and then split the data into \(80\%\) for training, \(10\%\) for validation, and \(10\%\) as the test datasets. Additionally, we use the trained neural networks to make predictions on another set of \(10000\) test datasets with a different physical parameter regime and compare them with the correct values as a non-trivial test for each example.
**Model design**
To prevent overfitting and enhance the generalizability of our model, we have employed a combination of techniques in the design of the neural networks. The ReLU activation function is used throughout this section. We adopt the Adam optimizer [51] in the training process with the mean square error (MSE) as the loss function.
We consider a neural network consisting of a few hidden Dense layers with varying numbers of units in TensorFlow-Keras [52; 53]. In this case, each neuron in a layer
receives input from all the neurons in the previous layer. The Dense connection allows the model to find non-linear relations between the input and output, which is the case for analytic continuation. The final layer is a Dense layer with a single unit that outputs a unique value for each training dataset, which is expected to correspond to the von Neumann entropy. As an example, we show a neural network with 3 hidden Dense layers, each with 8 units, in Figure 1.
Figure 1: An architecture of 3 densely connected layers, where each layer has 8 units. The final output layer is a single Dense unit with a unique output corresponding to the von Neumann entropy.
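For concreteness, the architecture in Figure 1 takes only a few lines of Keras. This is an illustrative sketch only; the widths and depth actually used are selected by KerasTuner below:

```python
from tensorflow import keras

# Figure 1: three hidden Dense layers of 8 units each, mapping the
# k_max = 50 generating-function terms to a single entropy prediction.
model = keras.Sequential([
    keras.Input(shape=(50,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1),  # predicted von Neumann entropy
])
model.compile(optimizer=keras.optimizers.Adam(), loss="mse")
```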
To determine the optimal setting of our neural networks, we employ KerasTuner [16], a powerful tool that allows us to explore different combinations of model complexity, depth, and hyperparameters for a given task. An illustration of the KerasTuner process can be found in Figure 2. We use Bayesian optimization, and adjust the following designs and hyperparameters:
* We allow a maximum of 4 Dense layers. For each layer, we allow variable units in the range of 16 to 128 with a step size of 16. The number of units for each layer will be independent of each other.
* We allow BatchNormalization layers after the Dense layers as a Boolean choice to improve generalization and act as a regularization.
* A final dropout with log sampling of a dropout rate in the range of 0.1 to 0.5 is added as a Boolean choice.
* In the Adam optimizer, we only adjust the learning rate with log sampling from the range of 3\(\times\)10\({}^{-3}\) to 9\(\times\)10\({}^{-3}\). All other parameters are taken as default values in TensorFlow-Keras. We also use the AMSGrad [54] variant of this algorithm as a Boolean choice.
We deploy the KerasTuner for 100 trials with 2 executions per trial and monitor the validation loss with EarlyStopping of patience 8. Once the training is complete, since we will not be making any further hyperparameter changes, we no longer evaluate
Figure 2: Flowchart illustrating the steps of KerasTuner with Bayesian optimization. Bayesian optimization is a method for finding the optimal set of designs and hyperparameters for a given dataset, by iteratively constructing a probabilistic model from a prior distribution for the objective function and using it to guide the search. Once the tuner search loop is complete, we extract the best model in the final training phase by including both the training and validation data.
performance on the validation data. A common practice is to initialize new models using the best model designs found by KerasTuner while also including the validation data as part of the training data. Indeed, we select the top 5 best designs and train each one 20 times with EarlyStopping of patience 8. We pick the one with the smallest relative errors from the targets among the \(5\times 20\) models as our final model. We set the batch size in both the KerasTuner and the final training to be 512.
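The search space described above can be encoded in a KerasTuner hypermodel along the following lines. This is a simplified sketch: the function name `build_model` and the exact placement of the BatchNormalization layers are our own choices.

```python
import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    model = keras.Sequential([keras.Input(shape=(50,))])
    for i in range(hp.Int("num_layers", 1, 4)):          # at most 4 Dense layers
        model.add(keras.layers.Dense(
            hp.Int(f"units_{i}", 16, 128, step=16), activation="relu"))
        if hp.Boolean(f"batch_norm_{i}"):                # optional regularizer
            model.add(keras.layers.BatchNormalization())
    if hp.Boolean("final_dropout"):                      # optional final dropout
        model.add(keras.layers.Dropout(
            hp.Float("dropout_rate", 0.1, 0.5, sampling="log")))
    model.add(keras.layers.Dense(1))
    model.compile(
        optimizer=keras.optimizers.Adam(
            learning_rate=hp.Float("lr", 3e-3, 9e-3, sampling="log"),
            amsgrad=hp.Boolean("amsgrad")),
        loss="mse")
    return model

tuner = kt.BayesianOptimization(build_model, objective="val_loss",
                                max_trials=100, executions_per_trial=2)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val),
#              batch_size=512,
#              callbacks=[keras.callbacks.EarlyStopping(patience=8)])
```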
In the following two subsections, we will examine examples from \(\mathrm{CFT}_{2}\) with \(\mathrm{Tr}\,\rho_{A}^{n}\) and their corresponding von Neumann entropies \(S(\rho_{A})\)[4; 5; 6; 7; 8]. These instances are distinct and worth studying for several reasons. They have different mathematical structures and lack common patterns in their derivation from the field theory side, despite involving the evaluation of certain partition functions. Moreover, the analytic continuation for each case is intricate, providing strong evidence for the necessity of independent model designs.
### Entanglement entropy of a single interval
Throughout the following, we will only present the analytic expression of \(\mathrm{Tr}\,\rho_{A}^{n}\) since it is the only input of the generating function. We will also keep the UV cut-off \(\epsilon\) explicit in the formula.
#### Single interval
The simplest example corresponds to a single interval \(A\) of length \(\ell\) in the vacuum state of a \(\mathrm{CFT}_{2}\) on an infinite line. In this case, both the analytic forms of \(\mathrm{Tr}\,\rho_{A}^{n}\) and \(S(\rho_{A})\) are known [4], where \(S(\rho_{A})\) reduces to a simple logarithmic function that depends on \(\ell\). We have the following analytic form with a central charge \(c\)
\[\mathrm{Tr}\,\rho_{A}^{n}=\left(\frac{\ell}{\epsilon}\right)^{\frac{c}{6}(\frac{1}{n}-n)}, \tag{3.1}\]
that defines \(G(w;\rho_{A})\). The corresponding von Neumann entropy is given by
\[S(\rho_{A})=\frac{c}{3}\ln\frac{\ell}{\epsilon}. \tag{3.2}\]
We fixed the central charge \(c=1\) and the UV cutoff \(\epsilon=0.1\) when preparing the datasets. We generated 10000 sets of data for the train-validation-test split from \(\ell=1\) to 50, with an increment of \(\Delta\ell=5\times 10^{-3}\) between consecutive steps, keeping terms up to \(k=50\) in \(G(w;\rho_{A})\). To further validate our model, we generated an additional 10000 test datasets for the following physical parameters: \(\ell=51\) to 100 with \(\Delta\ell=5\times 10^{-3}\). For a density plot of the data distribution with respect to the target von Neumann entropy, see Figure 3.
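As an illustration, the datasets for this example can be generated directly from (3.1) and the partial sums of (9), reusing the `tilde_f` helper sketched in Section 2; the parameter values follow the text above, and the grid is only approximate:

```python
import numpy as np

def tr_rho_n_single(n, ell, c=1.0, eps=0.1):
    """Eq. (3.1) for a single interval of length ell."""
    return (ell / eps)**(c / 6 * (1 / n - n))

ells = np.arange(1.0, 50.0, 5e-3)  # roughly the parameter grid described above
# Inputs: the first k_max = 50 terms tilde_f(k)/k of the series (9);
# targets: the analytic entropy (3.2), S = (c/3) ln(ell/eps).
X = np.array([[tilde_f(k, lambda n: tr_rho_n_single(n, ell)) / k
               for k in range(1, 51)] for ell in ells])
y = np.log(ells / 0.1) / 3.0
```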
Figure 4: Left: The MSE loss function as a function of epochs. We monitor the loss function with EarlyStopping, where the minimum loss is achieved at epoch 410 with loss \(\approx 10^{-7}\) for this instance. Right: The density plot of relative errors between the model predictions and targets. Note that the blue color corresponds to the test datasets from the initial train-validation-test split, while the green color is for the additional test datasets. We can see clearly that for both datasets, we have achieved high accuracy with relative errors \(\lesssim 0.30\%\).
Figure 3: The distribution of the data for the case of a single interval, where we plot density as a function of the von Neumann entropy computed by (3.2) with varying \(\ell\). The left plot represents the 10000 datasets for the train-validation-test split, while the right plot corresponds to the additional 10000 test datasets with a different physical parameter regime.
Figure 4 illustrates that the process outlined in the previous subsection effectively reduces the relative errors in predicting the test data to a very small level. Moreover, the model's effectiveness is further confirmed by its ability to achieve similarly small relative errors when predicting the additional test datasets. The model's predictions for the two test datasets are significantly more accurate than the approximate entropy obtained by summing the first 50 terms of the generating function, as can be seen in Figure 5. We emphasize that for the generating function to achieve the same accuracy as the deep neural networks, we generally need to sum up to \(k\geq 400\) in (9) [8]. This applies to all the following examples.
In this example, the von Neumann entropy is a simple logarithmic function, making it relatively straightforward for the deep learning models to decipher. However, we will now move on to a more challenging example.
#### Single interval at finite temperature and length
We extend the single interval case to finite temperature and length, where \(\operatorname{Tr}\rho_{A}^{n}\) becomes a complicated function of the inverse temperature \(\beta=T^{-1}\) and the length \(\ell\). The analytic expression of the Renyi entropies was first derived in [55] for a two-dimensional free Dirac fermion on a circle from bosonization. We can impose periodic
Figure 5: We plot the predictions from the model with the analytic von Neumann entropy computed by (3.2) for the 1000 test datasets (left) from the training-validation-test split and the additional 10000 test datasets (right), with the same scale on both figures. The correct von Neumann entropy overlaps with the model’s predictions precisely. We have also included the approximate entropy obtained by summing over \(k=50\) terms in the generating function.
boundary conditions that correspond to finite size and finite temperature. For simplicity, we set the total spatial size \(L\) to \(1\), and use \(\ell\) to denote the interval length. In this case we have [55]
\[\operatorname{Tr}\rho_{A}^{n}=\prod_{k=-\frac{n-1}{2}}^{\frac{n-1}{2}}\left|\frac{2\pi\epsilon\eta(\tau)^{3}}{\theta_{1}(\ell|\tau)}\right|^{\frac{2k^{2}}{n^{2}}}\frac{|\theta_{\nu}(\frac{k\ell}{n}|\tau)|^{2}}{|\theta_{\nu}(0|\tau)|^{2}}, \tag{3.3}\]
where \(\epsilon\) is a UV cutoff. We study the case of \(\nu=3\), which is the Neveu-Schwarz (NS-NS) sector. We then have the following Dedekind eta function \(\eta(\tau)\) and the Jacobi theta functions \(\theta_{1}(z|\tau)\) and \(\theta_{3}(z|\tau)\)
\[\eta(\tau)\equiv q^{\frac{1}{24}}\prod_{n=1}^{\infty}(1-q^{n}), \tag{3.4}\]
\[\theta_{1}(z|\tau)\equiv\sum_{n=-\infty}^{n=\infty}(-1)^{n-\frac{1}{2}}e^{(n +\frac{1}{2})^{2}i\pi\tau}e^{(2n+1)\pi iz}\,,\qquad\theta_{3}(z|\tau)\equiv \sum_{n=-\infty}^{n=\infty}e^{n^{2}i\pi\tau}e^{2n\pi iz}\,. \tag{3.5}\]
Previously, the von Neumann entropy after analytically continuing (3.3) was only known in the high- and low-temperature regimes [55]. In fact, only the infinite length or zero temperature pieces are universal. However, the analytic von Neumann entropy for all temperatures was recently worked out by [56; 57], which we present below
\[S(\rho_{A})=\frac{1}{3}\log\frac{\sigma(\ell)}{\epsilon}+4i\ell\int_{0}^{ \infty}dq\frac{\zeta(iq\ell+1/2+i\beta/2)-\zeta(1/2)-\zeta(i\beta/2)}{e^{2\pi q }-1}. \tag{3.6}\]
Here \(\sigma\) and \(\zeta\) are the Weierstrass sigma function and zeta function with periods \(1\) and \(i\beta\), respectively. We can see clearly that the analytic expressions for both \(\operatorname{Tr}\rho_{A}^{n}\) and \(S(\rho_{A})\) are rather different compared to the previous example.
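Numerically, (3.3) is straightforward to evaluate with an arbitrary-precision library. Below is a sketch using mpmath, assuming its theta conventions \(\theta_{j}(z|\tau)=\mathtt{jtheta}(j,\pi z,q)\) with nome \(q=e^{i\pi\tau}\); overall phase conventions drop out of the absolute values:

```python
import mpmath as mp

def dedekind_eta(tau):
    q = mp.exp(2j * mp.pi * tau)              # q = e^{2 pi i tau}
    return q**(mp.mpf(1) / 24) * mp.qp(q)     # qp(q) = prod_{n>=1} (1 - q^n)

def tr_rho_n_thermal(n, ell, beta, eps=0.1):
    """Eq. (3.3) in the NS sector (nu = 3), unit spatial size, tau = i*beta."""
    tau = mp.mpc(0, beta)
    q = mp.exp(1j * mp.pi * tau)              # nome used by mp.jtheta
    pref = abs(2 * mp.pi * eps * dedekind_eta(tau)**3
               / mp.jtheta(1, mp.pi * ell, q))
    out = mp.mpf(1)
    for j in range(n):
        k = (2 * j - (n - 1)) / mp.mpf(2)     # k = -(n-1)/2, ..., (n-1)/2
        out *= (pref**(2 * k**2 / n**2)
                * abs(mp.jtheta(3, mp.pi * k * ell / n, q))**2
                / abs(mp.jtheta(3, 0, q))**2)
    return out
```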
In preparing the datasets, we fixed the interval length \(\ell=0.5\) and the UV cutoff \(\epsilon=0.1\). We generated \(10000\) sets of data for the train-validation-test split from \(\beta=0.5\) to \(1.0\), with an increment of \(\Delta\beta=5\times 10^{-5}\) between consecutive steps, keeping terms up to \(k=50\) in \(G(w;\rho_{A})\). Since \(\beta\) corresponds to the inverse temperature, this is a natural parameter to vary, as the formula (3.6) is valid for all temperatures. To further validate our model, we generated \(10000\) additional test datasets for the following physical parameters: \(\beta=1.0\) to \(1.5\) with \(\Delta\beta=5\times 10^{-5}\). A density plot of the data with respect to the von Neumann entropy is shown in Figure 6. As shown in Figure 7 and Figure 8, our model demonstrates its effectiveness in predicting both test datasets, providing accurate results for this highly non-trivial example.
Figure 6: The distribution of the two test datasets for the case of a single interval at finite temperature and length, where we plot density as a function of the von Neumann entropy computed by (3.6) with varying \(\beta\).
Figure 7: Left: The MSE loss function as a function of epochs. The minimum loss close to \(10^{-8}\) is achieved at epoch 86 for this instance. Right: The relative errors between the model predictions and targets for the two test datasets, where we have achieved high accuracy with relative errors \(\lesssim 0.6\%\).
### Entanglement entropy of two disjoint intervals
We now turn to the von Neumann entropy for the union of two intervals on an infinite line. In this case, several analytic expressions can be derived for both the Renyi and von Neumann entropies. The theory we consider is a CFT\({}_{2}\) of a free boson with central charge \(c=1\), and the von Neumann entropy will be characterized by two parameters: a cross-ratio \(x\) and a universal critical exponent \(\eta\). The latter is proportional to the square of the compactification radius.
To set up the system, we define the union of the two intervals as \(A\cup B\) with \(A=[x_{1},x_{2}]\) and \(B=[x_{3},x_{4}]\). The cross-ratio is defined to be
\[x=\frac{x_{12}x_{34}}{x_{13}x_{24}},\quad x_{ij}=x_{i}-x_{j}. \tag{3.7}\]
With this definition, we can write down \(\operatorname{Tr}\rho^{n}\), which defines the generating function for two intervals in a free boson CFT with finite \(x\) and \(\eta\)[5]
\[\text{Tr}(\rho^{n})=c_{n}\bigg{(}\frac{\epsilon^{2}x_{13}x_{24}}{x_{12}x_{34} x_{14}x_{23}}\bigg{)}^{\frac{1}{6}(n-\frac{1}{n})}\mathcal{F}_{n}(x,\eta), \tag{3.8}\]
where \(\epsilon\) is a UV cutoff and \(c_{n}\) is a model-dependent coefficient [6] that we set to \(c_{n}=1\) for simplicity. An exact expression for \(\mathcal{F}_{n}(x,\eta)\) is given by
\[\mathcal{F}_{n}(x,\eta)=\frac{\Theta(0|\eta\Gamma)\Theta(0|\Gamma/\eta)}{[ \Theta(0|\Gamma)]^{2}}, \tag{3.9}\]
Figure 8: We plot the predictions from the model with the analytic von Neumann entropy computed by (3.6) for the two test datasets. Again, the approximate entropy by summing over \(k=50\) terms in the generating function is included.
for integers \(n\geq 1\). Here \(\Theta(z|\Gamma)\) is the Riemann-Siegel theta function defined as
\[\Theta(z|\Gamma)\equiv\sum_{m\in\mathbb{Z}^{n-1}}\exp[i\pi m^{t}\cdot\Gamma\cdot m +2\pi im^{t}\cdot z], \tag{3.10}\]
where \(\Gamma\) is a \((n-1)\times(n-1)\) matrix with elements
\[\Gamma_{rs}=\frac{2i}{n}\sum_{k=1}^{n-1}\sin\left(\frac{\pi k}{n}\right)\beta_ {k/n}\cos\bigg{[}\frac{2\pi k}{n}(r-s)\bigg{]}, \tag{3.11}\]
and
\[\beta_{y}=\frac{F_{y}(1-x)}{F_{y}(x)},\qquad F_{y}(x)\equiv{}_{2}F_{1}(y,1-y;1 ;x), \tag{3.12}\]
where \({}_{2}F_{1}\) is the hypergeometric function. A property of this example is that (3.9) is manifestly invariant under \(\eta\leftrightarrow 1/\eta\).
The analytic continuation towards the von Neumann entropy is not known, making it impossible to study this example directly with supervised learning. Although the Taylor series of the generating function guarantees convergence towards the true von Neumann entropy for sufficiently large values of \(k\) in the partial sum, evaluating the higher-dimensional Riemann-Siegel theta function becomes increasingly difficult. For efforts in this direction, see [58; 59]. However, we will revisit this example in the next section when discussing the sequence model.
However, there are two limiting cases where analytic perturbative expansions are available, and approximate analytic continuations of the von Neumann entropies can be obtained. The first limit corresponds to small values of the cross-ratio \(x\), where the von Neumann entropy has been computed analytically up to second order in \(x\). The second limit is the decompactification limit, where we take \(\eta\to\infty\). In this limit, there is an approximate expression for the von Neumann entropy.
#### Two intervals at small cross-ratio
Let us consider the following expansion of \(\mathcal{F}_{n}(x,\eta)\) at small \(x\) for some \(\eta\neq 1\)
\[\mathcal{F}_{n}(x,\eta)=1+\left(\frac{x}{4n^{2}}\right)^{\alpha}s_{2}(n)+ \left(\frac{x}{4n^{2}}\right)^{2\alpha}s_{4}(n)+\cdots, \tag{3.13}\]
where we can look at the first order contribution with
\[s_{2}(n)\equiv\mathcal{N}\frac{n}{2}\sum_{j=1}^{n-1}\frac{1}{\left[\sin(\pi j /n)\right]^{2\alpha}}. \tag{3.14}\]
The coefficient \(\alpha\) for a free boson is given by \(\alpha=\min[\eta,1/\eta]\). \(\mathcal{N}\) is the multiplicity of the lowest dimension operators, where for a free boson we have \(\mathcal{N}=2\). Up to this order, the analytic von Neumann entropy is given by
\[S(\rho_{AB})=\frac{1}{3}\ln\left(\frac{x_{12}x_{34}x_{14}x_{23}}{\epsilon^{2}x _{13}x_{24}}\right)-\mathcal{N}\bigg{(}\frac{x}{4}\bigg{)}^{\alpha}\frac{ \sqrt{\pi}\Gamma(\alpha+1)}{4\Gamma\left(\alpha+\frac{3}{2}\right)}-\cdots. \tag{3.15}\]
We can set up the numerics by taking \(|x_{12}|=|x_{34}|=r\), and the distance between the centers of \(A\) and \(B\) to be \(L\), then the cross-ratio is simply
\[x=\frac{x_{12}x_{34}}{x_{13}x_{24}}=\frac{r^{2}}{L^{2}}. \tag{3.16}\]
Similarly, we can express \(|x_{14}|=L+r=L(1+\sqrt{x})\) and \(|x_{23}|=L-r=L(1-\sqrt{x})\). This allows us to express everything in terms of \(x\) and \(L\).
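With this parametrization, the combination inside the logarithm of (3.15) simplifies to \(x_{12}x_{34}x_{14}x_{23}/(\epsilon^{2}x_{13}x_{24})=L^{2}x(1-x)/\epsilon^{2}\), so the target entropy can be coded directly. A minimal sketch, with default values anticipating the dataset described below:

```python
import math

def s_two_intervals_small_x(x, L=14.0, alpha=0.5, eps2=0.1, N_mult=2):
    """Eq. (3.15) to first order in x, using |x12| = |x34| = L*sqrt(x)."""
    log_term = math.log(L**2 * x * (1 - x) / eps2) / 3.0
    correction = (N_mult * (x / 4.0)**alpha * math.sqrt(math.pi)
                  * math.gamma(alpha + 1.0) / (4.0 * math.gamma(alpha + 1.5)))
    return log_term - correction
```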
For the datasets, we fixed \(L=14\), \(\alpha=0.5\), and \(\epsilon^{2}=0.1\). We generated 10000 sets of data for the train-validation-test split from \(x=0.05\) to \(0.1\), with an increment of \(\Delta x=5\times 10^{-6}\) between consecutive steps, keeping terms up to \(k=50\) in \(G(w;\rho_{A})\). To further validate our model, we generated 10000 additional test datasets for the following physical parameters: \(x=0.1\) to \(0.15\) with \(\Delta x=5\times 10^{-6}\). A density plot of the data with respect to the von Neumann entropy is shown in Figure 9. We refer to Figure 10 and Figure 11 for a clear demonstration of the learning outcomes.
The study up to second order in \(x\) using the generating function method is available in [8], as well as through the use of holographic methods [60]. Additionally, an analytic continuation toward the von Neumann entropy up to second order in \(x\) for general CFT\({}_{2}\) can be found in [61]. Although this is a subleading correction, it can also be approached using our method.
Figure 10: Left: The MSE loss function as a function of epochs. The minimum loss close to \(10^{-8}\) is achieved at epoch 696 for this instance. Right: The relative errors between the model predictions and targets for the two test datasets, where we have achieved high accuracy with relative errors \(\lesssim 0.03\%\).
Figure 9: The distribution of the two test datasets for the case of two intervals at small cross-ratio, where we plot density as a function of the von Neumann entropy computed by (3.15) with varying \(x\).
#### Two intervals in the decompactification limit
There is a different limit that can be taken other than the small cross-ratio, in which approximate analytic Renyi entropies can be obtained. This is the decompactification limit, where we take \(\eta\to\infty\); then for each fixed value of \(x\) we have \(\mathcal{F}_{n}(x,\eta)\) as
\[\mathcal{F}_{n}(x,\eta)=\bigg{[}\frac{\eta^{n-1}}{\prod_{k=1}^{n-1}F_{k/n}(x)F_ {k/n}(1-x)}\bigg{]}^{\frac{1}{2}}, \tag{3.17}\]
where \(F_{k/n}\) is defined in (3.12) in terms of the hypergeometric function \({}_{2}F_{1}\). Equation (3.17) is invariant under \(\eta\leftrightarrow 1/\eta\), so we will instead use the result with \(\eta\ll 1\)
\[\mathcal{F}_{n}(x,\eta)=\bigg{[}\frac{\eta^{-(n-1)}}{\prod_{k=1}^{n-1}F_{k/n}( x)F_{k/n}(1-x)}\bigg{]}^{\frac{1}{2}}. \tag{3.18}\]
In this case, the exact analytic continuation of the von Neumann entropy is not known, but there is an approximate result following the expansion
\[S(\rho_{AB})\simeq S^{W}(\rho_{AB})+\frac{1}{2}\ln\eta-\frac{D_{1}^{\prime}(x) +D_{1}^{\prime}(1-x)}{2}+\cdots,\quad(\eta\ll 1) \tag{3.19}\]
with \(S^{W}(\rho_{AB})\) being the von Neumann entropy computed from the Renyi entropies without the special function \(\mathcal{F}_{n}(x,\eta)\) in (3.8). Note that
\[D_{1}^{\prime}(x)=-\int_{-i\infty}^{i\infty}\frac{dz}{i}\frac{\pi z}{\sin^{2 }(\pi z)}\ln F_{z}(x). \tag{3.20}\]
Figure 11: We plot the predictions from the model with the analytic von Neumann entropy computed by (3.15) for the two test datasets. We also include the approximate entropy by summing over \(k=50\) terms in the generating function.
This approximate von Neumann entropy has been well tested in previous studies [5; 8], and we will adopt it as the target values in our deep learning models.
For the datasets, we fixed \(L=14\), \(x=0.5\), and \(\epsilon^{2}=0.1\). We generated 10000 sets of data for the train-validation-test split from \(\eta=0.1\) to \(0.2\), with an increment of \(\Delta\eta=10^{-5}\) between consecutive steps, keeping terms up to \(k=50\). To further validate our model, we generated 10000 additional test datasets for the following physical parameters: \(\eta=0.2\) to \(0.3\) with \(\Delta\eta=10^{-5}\). A density plot of the data with respect to the von Neumann entropy is shown in Figure 12. We again refer to Figure 13 and Figure 14 for a clear demonstration of the learning outcomes.
We have seen that deep neural networks, when treated as supervised learning, can achieve accurate predictions for the von Neumann entropy that extends outside the parameter regime in the training phase. However, the potential for deep neural networks may go beyond this.
As we know, the analytic continuation must be worked out on a case-by-case basis (see the examples in [4; 5; 6; 7]) and may even depend on the method we use [8]. Finding general patterns in the analytic continuation is still an open question. Although it remains ambitious, the non-linear mapping that the neural networks uncover would
Figure 14: We plot the predictions from the model with the analytic von Neumann entropy computed by (3.19) for the two test datasets. We also include the approximate entropy by summing over \(k=50\) terms in the generating function.
Figure 13: Left: The MSE loss function as a function of epochs. The minimum loss at around \(10^{-7}\) is achieved at epoch 132 for this instance. Right: The relative errors between the model predictions and targets for the two test datasets, where we have achieved high accuracy with relative errors \(\lesssim 0.4\%\).
allow us to investigate the expressive power of deep neural networks for the analytic continuation problem of the von Neumann entropy.
Our approach also opens up the possibility of using deep neural networks to study cases where analytic continuations are unknown, such as the general two-interval case. Furthermore, it may enable us to investigate other entanglement measures that follow similar patterns or require analytic continuations. We leave these questions as future tasks.
## 4 Renyi entropies as sequential deep learning
In this section, we focus on higher Renyi entropies using sequential learning models. Studying the higher Renyi entropies that depend on \(\operatorname{Tr}\rho_{A}^{n}\) is equivalent to studying the higher-order terms in the Taylor series representation of the generating function (9). There are a few major motivations. Firstly, although the generating function can be used to compute higher-order terms, it becomes inefficient for more complex examples. In particular, evaluating \(\operatorname{Tr}\rho_{A}^{n}\) in (3.8) for the general two-interval case involves the Riemann-Siegel theta function, which poses a challenge in computing higher Renyi entropies [58; 59; 8]. Secondly, the higher Renyi entropies are all independent and cannot be obtained from one another in a linear fashion. Each of them can be used to predict the von Neumann entropy, but in the Taylor series expansion (9), knowing higher Renyi entropies is equivalent to knowing a more accurate von Neumann entropy. As we cannot simply extrapolate the series, a sequential learning approach is a statistically robust way to identify the underlying patterns.
_Recurrent neural networks_ (RNNs) are a powerful type of neural network for processing sequences due to their "memory" property [62]. RNNs use internal loops to iterate through sequence elements while keeping a state that contains information about what has been observed so far. This property allows RNNs to identify patterns in a sequence regardless of their position in the sequence. To train an RNN, we initialize an arbitrary state and encode a rank-2 tensor of size (steps, input features), looping over the steps. At each step, the network combines the current state at \(k\) with the input to obtain the output at \(k+1\), which then becomes the state for the next iteration.
RNNs incorporate both feedforward networks and _back-propagation through time_ (BPTT) [63; 64], with "time" representing the steps \(k\) in our case. The networks connect the outputs from a fully connected layer to the inputs of the same layer, referred to as the hidden states. These inputs receive the output values from the previous step, with the number of inputs to a neuron determined by both the number of inputs to the layer and the number of neurons in the layer itself, known as _recurrent connections_. Computing the output involves iteratively feeding the input vector from one step, computing the hidden states, and presenting the input vector for the next step to compute the new hidden states.
RNNs are useful for making predictions based on sequential data, or "sequential regression," as they learn patterns from past steps to predict the most probable values for the next step.
### Model architectures and training strategies
In this subsection, we discuss the methodology of treating the Renyi entropies (the Taylor series of the generating function) as sequence models.
**Data preparation**
To simulate the scenario where \(k_{\rm max}\) in the series cannot be efficiently computed, we generate \(N=10000\) datasets for different physical parameters, with each dataset having a maximum of \(k_{\rm max}=50\) steps in the series. We also shuffle the \(N\) datasets, since samples with close physical parameters will have most of their values in common. Among the \(N\) datasets, we only take a fraction \(p<N\) for the train-validation-test split. The remaining \(q=N-p\) datasets will all be used as test data for the trained model. This serves as a critical examination of the sequence models we find. The ideal scenario is that we only need a small \(p\) while achieving accurate performance on the \(q\) datasets.
Due to the rather small number of steps available, we adopt the SimpleRNN structure in TensorFlow-Keras2 instead of more complicated variants such as LSTM or GRU networks [66; 67].
Footnote 2: SimpleRNN suffers from the vanishing gradient problem when learning long dependencies [65]. Even using ReLU, which does not cause a vanishing gradient, back-propagation through time with weight sharing can still lead to a vanishing gradient across different steps. However, since the length of the sequence is small due to the limited maximum steps available in our case, we have found that SimpleRNN generally performs better than its variants.
We also need to be careful about the train-validation-test splitting process. In this type of problem, it is important to use validation and test data that is more recent than the training data. This is because the objective is to predict the next value given the past steps, and the data splitting should reflect this fact. Furthermore, by giving more weight to recent data, it is possible to mitigate the vanishing gradient (memory loss) problem that can occur early in the BPTT. In this work, the first 60% of the steps (\(k=1\sim 30\)) are used for training, the middle 20% (\(k=31\sim 40\)) for validation, and the last 20% (\(k=41\sim 50\)) for testing.
We split the datasets in the following way: for a single dataset from each step, we use a fixed number of past steps3, specified by \(\ell\), to predict the next value. This will create \((\text{steps}-\ell)\) sequences from each dataset, resulting in a total of \((\text{steps}-\ell)\times p\) sequences for the \(p\) datasets in the train-validation-test splitting. Using a fixed sequence length \(\ell\) allows the network to focus on the most relevant and recent information for predicting the next value, while also simplifying the input size and making it more compatible with our network architectures. We take \(p=1000\), \(q=9000\), and \(\ell=5\). An illustration of our data preparation strategy is shown in Figure 15.
Footnote 3: We could also include as many past steps as possible, but we have found it less effective. This can be attributed to our choice of network architectures and the fact that we have rather short maximum steps available.
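Concretely, the slicing described above (and illustrated in Figure 15) can be implemented as follows; this is a sketch with hypothetical array shapes:

```python
import numpy as np

def make_sequences(series_batch, window=5):
    """Turn each series of partial-sum terms into (window -> next value) pairs.

    series_batch: array of shape (num_datasets, steps).
    Returns X of shape (num_sequences, window, 1) and targets y,
    matching the (samples, steps, features) layout expected by SimpleRNN.
    """
    X, y = [], []
    for series in series_batch:
        for i in range(len(series) - window):
            X.append(series[i:i + window])
            y.append(series[i + window])
    return np.asarray(X)[..., np.newaxis], np.asarray(y)
```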
**Model design**
After the pre-processing of data, we turn to the model design. Throughout the section, we use the ReLU activation function and Adam optimizer with MSE as the loss function.
Figure 15: Data preparation process for the sequential models. A total of \(N\) datasets are separated into two parts: the \(p\) datasets are for the initial train-validation-test split, while the \(q\) datasets are treated purely as test datasets. The zoomed-in figure on the right hand side illustrates how a single example sequence is generated, where we have used a fixed number of past steps \(\ell=5\). Note that for the additional \(q\) test datasets, a total of \((\text{steps}-\ell)\times q=405000\) sequences are generated.
In KerasTuner, we employ Bayesian optimization by adjusting a few crucial hyperparameters and designs. We summarize them in the following list:
* We introduce one or two SimpleRNN layers, with or without recurrent dropouts. The units of the first layer range from 64 to 256 with a step size of 16. If a second layer is used, the units range from 32 to 128 with a step size of 8. Recurrent dropout is applied with a dropout rate in the range of 0.1 to 0.3 using log sampling.
* We take LayerNormalization as a Boolean choice to enhance the training stability, even with shallow networks. The LayerNormalization is added after the SimpleRNN layer if there is only one layer; in between the two layers if there are two SimpleRNN layers.
* We allow a Dense layer with units ranging from 16 to 32 and a step size of 8 as an optional regressor after the recurrent layers.
* A final dropout with log sampling of a dropout rate in the range of 0.2 to 0.5 is added as a Boolean choice.
* In the Adam optimizer, we only adjust the learning rate with log sampling from the range of \(10^{-5}\) to \(10^{-4}\). All other parameters are taken as the default values in TensorFlow-Keras. We take the AMSGrad [54] variant of this algorithm as a Boolean choice.
The KerasTuner is deployed for 300 trials with 2 executions per trial. During the process, we monitor the validation loss using EarlyStopping of patience 8. Once the best set of hyperparameters and model architecture are identified based on the validation data, we initialize a new model with the same design, but with both the training and validation data. This new model is trained 30 times while monitoring the training loss using EarlyStopping of patience 10. The final predictions are obtained by averaging the results of the few cases with close yet overall smallest relative errors from the targets. The purpose of taking the average instead of picking the case with minimum loss is to smooth out possible outliers. We set the batch size in both the KerasTuner and the final training to be 2048.
We will also use the trained model to make predictions on the \(q\) test data and compare them with the correct values as validation for hitting the benchmark.
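As a reference point, one configuration inside the search space above looks like the following; this is a sketch only, since the actual depth, units, and learning rate are selected by KerasTuner:

```python
from tensorflow import keras

def build_rnn(window=5):
    model = keras.Sequential([
        keras.Input(shape=(window, 1)),
        keras.layers.SimpleRNN(128, activation="relu"),  # recurrent layer
        keras.layers.LayerNormalization(),               # optional stabilizer
        keras.layers.Dense(16, activation="relu"),       # optional regressor
        keras.layers.Dropout(0.3),                       # optional final dropout
        keras.layers.Dense(1),                           # next term in the series
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=5e-5, amsgrad=True),
        loss="mse")
    return model
```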
### Examples of the sequential models
The proposed approach will be demonstrated using two examples. The first example is the simple representative case of a single interval (3.1), while the second is the more challenging case of two intervals in the decompactification limit (3.19), where the higher-order terms in the generating function cannot be efficiently computed. Additionally, we will briefly comment on the most non-trivial example of the general two-interval case.
**Single interval**
In this example, we have used the same \(N\) datasets for the single interval as in Sec. 3.2. Following the data splitting strategy we just outlined, it is worth noting that the ratio of training data to the overall dataset is relatively small. We have plotted the losses of the three best-performing models, as well as the density plot of relative errors for the two test datasets in Figure 16. Surprisingly, even with a small ratio of training data, we were able to achieve small relative errors on the additional test datasets.
Figure 16: Top: The loss function for the best 3 models as a function of epochs. We monitor the loss function with EarlyStopping, where the epochs of minimum losses at around \(10^{-8}\) for different models are specified in the parentheses of the legend. Bottom: The density plots as a function of relative errors for the two test datasets. The relative errors for the \(p\) test datasets are concentrated at around \(1\%\); while for the additional \(q\) test datasets, they are concentrated at around \(2.5\%\) with a very small ratio of outliers.
**Two intervals in the decompactification limit**
Again, we have used the same \(N\) datasets for the two intervals in the \(\eta\to\infty\) limit as in Sec. 3.3. In Figure 17, we have plotted the losses of the four best-performing models and the density plot of relative errors for the two test datasets. In this example, the KerasTuner identified a relatively small learning rate, which led us to truncate the training at a maximum of 1500 epochs since we had achieved the required accuracy. In this case, the predictions are of high accuracy, essentially without outliers.
Let us briefly address the most challenging example discussed in this paper, which is the general two-interval case (3.8) where the analytic expression for the von Neumann entropy is not available. In this example, only \(\operatorname{Tr}\rho_{A}^{n}\) is known, and since it involves the Riemann-Siegel theta function, computing the generating function for large \(k\) in the partial sum becomes almost infeasible. Therefore, the sequential learning models we have introduced represent the most viable approach for extracting useful information in this case.
Figure 17: Top: The loss function for the best 4 models as functions of epochs. We monitor the loss function with EarlyStopping. Bottom: The density plot as a function of relative errors for the two test datasets. The relative errors for the \(p\) test datasets are well within \(\lesssim 1.5\%\); while for the additional \(q\) test datasets, they are well within \(\lesssim 2\%\).
Since only \(k_{\rm max}\approx 10\) terms can be efficiently computed from the generating function in this case, we have much shorter steps for the sequential learning models. We have tested the above procedure with \(N=10000\) datasets and \(k_{\rm max}=10\); however, we could only achieve an average relative error of \(5\%\). Improvements may come from a larger dataset with a longer training time, which we leave as a future task.
In general, sequential learning models offer a potential solution for efficiently computing higher-order terms in the generating function. To extend our approach to longer sequences beyond the \(k_{\rm max}\) steps, we can treat the problem as self-supervised learning. However, this may require a more delicate model design to prevent error propagation. Nonetheless, exploring longer sequences can provide a more comprehensive understanding of the behavior of von Neumann entropy and its relation to Renyi entropies.
## 5 Quantum neural networks and von Neumann entropy
In this section, we explore a similar supervised learning task by treating quantum circuits as models that map data inputs to predictions, which probes the expressive power of quantum circuits as function approximators.
### Fourier series from variational quantum machine learning models
We will focus on a specific function class that a quantum neural network can explicitly realize, namely a simple Fourier-type sum [29, 30]. Before linking it to the von Neumann entropy, we shall first give an overview of the seminal works in [30].
Consider a general Fourier-type sum in the following form
\[f_{\theta_{i}}(\vec{x})=\sum_{\vec{\omega}\in\Omega}c_{\vec{\omega}}(\theta_{ i})e^{i\vec{\omega}\cdot\vec{x}}, \tag{5.1}\]
with the frequency spectrum specified by \(\Omega\subset\mathbb{R}^{N}\). Note that \(c_{\vec{\omega}}(\theta_{i})\) are the (complex) Fourier coefficients. We need a quantum model that can learn the characteristics of the sum through its control over the frequency spectrum and the Fourier coefficients.
Now we define the quantum machine learning model as the following expectation value
\[f_{\theta_{i}}(x)=\langle 0|U^{\dagger}(x,\theta_{i})MU(x,\theta_{i})|0\rangle, \tag{5.2}\]
where \(|0\rangle\) is taken to be some initial state of the quantum computer, and \(M\) is the physical observable. Note that we have omitted the vector symbol and the hat on the operators, which should be clear from the context. The crucial component is \(U(x,\theta_{i})\), which is a quantum circuit that depends on the data input \(x\) and the trainable
parameters \(\theta_{i}\) with \(L\) layers. Each layer has a data-encoding circuit block \(S(x)\), and the trainable circuit block \(W(\theta_{i})\). Schematically, it has the form
\[U(x,\theta_{i})=W^{(L+1)}(\theta_{i})S(x)W^{(L)}(\theta_{i})\cdots W^{(2)}(\theta_{i})S(x)W^{(1)}(\theta_{i}), \tag{5.3}\]
where we refer to Figure 18 for a clear illustration.
Let us discuss the three major components of the quantum circuit in the following:
* The repeated data-encoding circuit block \(S(x)\) prepares an initial state that encodes the (one-dimensional) input data \(x\) and is not trainable due to the absence of free parameters. It is represented by certain gates that embed classical data into quantum states, with gates of the form \(g(x)=e^{-ixH}\), where \(H\) is the encoding Hamiltonian that can be any unitary operator. In this work, we use the Pauli X-rotation gate, and the encoding Hamiltonians in \(S(x)\) will determine the available frequency spectrum \(\Omega\).
* The trainable circuit block \(W(\theta_{i})\) is parametrized by a set of free parameters \(\theta_{i}=(\theta_{1},\theta_{2},...)\). There is no special assumption made here and we can take these trainable blocks as arbitrary unitary operations. The trainable parameters will contribute to the coefficients \(c_{\omega}\).
Figure 18: Quantum neural networks with repeated data-encoding circuit blocks \(S(x)\) (whose gates are of the form \(g(x)=e^{-ixH}\)) and trainable circuit blocks \(W^{(i)}\). The data-encoding circuit blocks determine the available frequency spectrum for \(\vec{\omega}\), while the remainder determines the Fourier coefficients \(c_{\vec{\omega}}\).
* The final piece is the measurement of a physical observable \(M\) at the output. This observable is general; it could be local for each wire or act on a subset of wires in the circuit.
Our goal is to establish that \(f(x)\) can be written as a partial Fourier series [29; 30]
\[f_{\theta_{i}}(x)=\langle 0|U^{\dagger}(x,\theta_{i})MU(x,\theta_{i})|0\rangle=\sum_{n\in\Omega}c_{n}e^{inx}. \tag{5.4}\]
Note that here, for simplicity, we have taken the frequencies to be integers, \(\Omega\subset\mathbb{Z}^{N}\). The training process goes as follows: we sample a quantum model with \(U(x,\theta_{i})\), and then define the mean square error as the loss function. To optimize the loss function, we tune the free parameters \(\theta=(\theta_{1},\theta_{2},...)\). The optimization is performed by a classical algorithm that queries the quantum device, where we can treat the quantum process as a black box and only examine the classical data input and the measurement output. The output of the quantum model is the expectation value of a Pauli-Z measurement.
We use the single-qubit Pauli rotation gate as the encoding \(g(x)\)[30]. The frequency spectrum \(\Omega\) is determined by the encoding Hamiltonians. Two scenarios can be considered to determine the available frequencies: the _data reuploading_[68] and the _parallel encodings_[69] models. In the former, we repeat a Pauli rotation gate \(r\) times in sequence, acting on the same qubit but with multiple layers \(r=L\); in the latter, we perform similar operations in parallel on \(r\) different qubits, but with a single layer \(L=1\). These models allow quantum circuits to access increasingly rich frequency spectra, with \(\Omega=\{-r,...,-1,0,1,...,r\}\) containing integer-valued frequencies up to degree \(r\). This will correspond to the maximum degree of the partial Fourier series we want to compute.
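A minimal PennyLane sketch of the serial (data reuploading) model, with single-qubit Pauli-X encoding gates interleaved with arbitrary trainable rotations; this illustrates the structure of (5.3) rather than the exact circuits behind our figures:

```python
import pennylane as qml
from pennylane import numpy as np

r = 6                                    # number of encoding repetitions (= L)
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def serial_model(x, weights):
    for l in range(r):
        qml.Rot(*weights[l], wires=0)    # trainable block W^(l)
        qml.RX(x, wires=0)               # data-encoding block S(x)
    qml.Rot(*weights[r], wires=0)        # final trainable block W^(L+1)
    return qml.expval(qml.PauliZ(0))     # Pauli-Z measurement

# Random initialization of the trainable parameters.
weights = 2 * np.pi * np.random.random(size=(r + 1, 3), requires_grad=True)
```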
From the discussion above, one can immediately derive the maximum accessible frequencies of such quantum models [30]. But in practice, if the degree of the target function is greater than the number of layers (for example, in the single qubit case), the fit will be much less accurate.4 Increasing the value of \(L\) typically requires more training epochs to converge at the same learning rate.
Footnote 4: Certain initial weight samplings may not even converge to a satisfactory solution. This is relevant to the barren plateau problem [70] generically present in variational quantum circuits with a random initialization, similar to the classical vanishing gradient problem.
This is relevant to a more difficult question of how to control the Fourier coefficients in the training process, given that all the blocks \(W^{(i)}(\theta_{i})\) and the measurement observable contribute to "every" Fourier coefficient. However, these coefficients are functions of the quantum circuit with limited degrees of freedom. This means that a
quantum circuit with a certain structure can only realize a subset of all possible Fourier coefficients, even with enough degrees of freedom. While a systematic understanding is not yet available, a simulation exploring which Fourier coefficients can be realized can be found in [30]. In fact, it remains an open question whether, for asymptotically large \(L\), a single-qubit model can approximate any function by constructing arbitrary Fourier coefficients.
### The generating function as a Fourier series
Given the framework of the quantum model and its relation to a partial Fourier series, a natural question arises as to whether the entanglement entropy can be realized within this setup. To approach this question, it is meaningful to revisit the generating function for the von Neumann entropy
\[G(z;\rho_{A})\equiv-\operatorname{Tr}\left(\rho_{A}\ln\frac{1-z\rho_{A}}{1-z}\right)=\sum_{k=1}^{\infty}\frac{f(k)}{k}z^{k}, \tag{5.5}\]
which is manifestly a Taylor series, with \(f(k)=\operatorname{Tr}(\rho_{A}^{k+1})-1\) as in (4). The goal is to rewrite the generating function in terms of a partial Fourier series, so that we can determine whether the von Neumann and Renyi entropies belong to the function class that the quantum neural network can describe. Note that we will only focus on small-scale tests with a low depth or width of the circuit, as the depth or width of the circuit corresponds exactly to the orders that can be approximated in the Fourier series.
However, we cannot simply convert either the original generating function or its Taylor series form to a Fourier series. Doing so would generally involve special functions of \(\rho_{A}\) that we would be unable to specify in terms of \(\operatorname{Tr}\rho_{A}^{n}\). It is therefore essential to have an expression of the Fourier series that allows us to compute the corresponding Fourier coefficients at different orders using \(\operatorname{Tr}\rho_{A}^{n}\), for which we know the analytic form from CFTs.
This can indeed be achieved; see Appendix A for a detailed derivation. The Fourier series representation of the generating function on an interval \([w_{1},w_{2}]\) with period \(T=w_{2}-w_{1}\) is given by
\[G(w;\rho)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}\bigg\{\sum_{m=1}^{\infty}\frac{\tilde{f}(m)}{m}C_{cos}(n,m)\cos\left(\frac{2\pi nw}{T}\right)+\sum_{m=1}^{\infty}\frac{\tilde{f}(m)}{m}C_{sin}(n,m)\sin\left(\frac{2\pi nw}{T}\right)\bigg\}, \tag{5.6}\]
where \(C_{cos}\) and \(C_{sin}\) are some special functions defined as
\[C_{cos}(n,m)=\frac{2}{(m+1)T}\bigg[{}_{p}F_{q}\bigg(\frac{m+1}{2};\frac{1}{2},\frac{m+3}{2};-\frac{n^{2}\pi^{2}w_{2}^{2}}{T^{2}}\bigg)w_{2}^{m+1}-{}_{p}F_{q}\bigg(\frac{m+1}{2};\frac{1}{2},\frac{m+3}{2};-\frac{n^{2}\pi^{2}w_{1}^{2}}{T^{2}}\bigg)w_{1}^{m+1}\bigg], \tag{5.7}\]
\[C_{sin}(n,m)=\frac{4n\pi}{(m+2)T^{2}}\bigg[{}_{p}F_{q}\bigg(\frac{m+2}{2};\frac{3}{2},\frac{m+4}{2};-\frac{n^{2}\pi^{2}w_{2}^{2}}{T^{2}}\bigg)w_{2}^{m+2}-{}_{p}F_{q}\bigg(\frac{m+2}{2};\frac{3}{2},\frac{m+4}{2};-\frac{n^{2}\pi^{2}w_{1}^{2}}{T^{2}}\bigg)w_{1}^{m+2}\bigg], \tag{5.8}\]
with \({}_{p}F_{q}\) being the generalized hypergeometric function. Note also that
\[\tilde{f}(m)\equiv\sum_{k=0}^{m}\frac{(-1)^{k}m!}{k!(m-k)!}\,{\rm Tr}\,(\rho_{A}^{k+1}), \tag{5.9}\]
in agreement with (8). Similarly, the zeroth-order Fourier coefficient is given by
\[a_{0}=\sum_{m=1}^{\infty}\frac{\tilde{f}(m)}{m}C_{cos}(0,m)=\sum_{m=1}^{\infty}\frac{\tilde{f}(m)}{m}\frac{2(w_{2}^{m+1}-w_{1}^{m+1})}{(m+1)T}. \tag{5.10}\]
Note that summing to \(m=10\) suffices for our purpose, while the summation in \(n\) corresponds to the degree of the Fourier series. The complex-valued Fourier coefficients \(c_{n}\) to be used in our simulation can be easily reconstructed from this expression. Therefore, the only required input for evaluating the Fourier series is \(\tilde{f}(m)\), with \({\rm Tr}\,\rho_{A}^{k+1}\) explicitly given. This is exactly what we anticipated and allows for a straightforward comparison with the Taylor series form.
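Numerically, the coefficients (5.7) and (5.8) are easy to evaluate with mpmath's generalized hypergeometric function. A sketch for the cosine coefficients on the interval \([w_{1},w_{2}]=[-1,1]\) used below:

```python
import mpmath as mp

def C_cos(n, m, w1=-1.0, w2=1.0):
    """Eq. (5.7); at n = 0 it reduces to 2*(w2^(m+1) - w1^(m+1))/((m+1)*T)."""
    T = w2 - w1
    def piece(t):
        return mp.hyper([(m + 1) / mp.mpf(2)],
                        [mp.mpf(1) / 2, (m + 3) / mp.mpf(2)],
                        -(n * mp.pi * t / T)**2) * mp.mpf(t)**(m + 1)
    return 2 / ((m + 1) * T) * (piece(w2) - piece(w1))
```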
Note that the interval for the Fourier series is not arbitrary. We will take the interval \([w_{1},w_{2}]\) to be \([-1,1]\), which is the maximum interval on which the Fourier series (5.6) is convergent. Furthermore, we expect that as \(w\to 1\) in (5.6), we arrive at the von Neumann entropy, that is
\[S(\rho_{A})=\lim_{w\to 1}G(w;\rho_{A}). \tag{5.11}\]
However, as we can see in Figure 19, there is a rapid oscillation near the end points of the interval for the Fourier series. The occurrence of such "jump discontinuity" behavior is a generic feature of Fourier series approximations of discontinuous or non-periodic functions, known as the _Gibbs phenomenon_. This phenomenon poses a serious problem for recovering accurate values of the von Neumann entropy, because we are taking the limit to the boundary point \(w\to 1\). We will return to this issue in Section 5.4.
### The expressivity of the quantum models on the entanglement entropy
In this subsection, we will demonstrate the expressivity of the quantum models of the partial Fourier series with examples from CFTs. We will focus on two specific examples: a single interval and two intervals at small cross-ratio \(x\). While these examples suffice for our purpose, it is worth noting that once the Fourier series representation is derived using the expression in (5.6), any example with a known analytic form of \(\operatorname{Tr}\rho_{A}^{n}\) can be studied.
The demonstration is performed using PennyLane [71]. We have adopted the Adam optimizer with a learning rate of \(0.005\) and a batch size of \(100\), where MSE is the loss function. Note that we have chosen a smaller learning rate compared to [30] and monitor the training with EarlyStopping. For the two examples we study, we have considered both the serial (data reuploading) and parallel (parallel encodings) models for the training. Note that in the parallel model, we have used the StronglyEntanglingLayers template in PennyLane with 3 user-defined layers. In each case, we start by randomly initializing a quantum model with 300 sample points to fit the target function
\[f(x)=\sum_{n=-k}^{k}c_{n}e^{-inx}, \tag{5.12}\]
where the complex-valued Fourier coefficients are calculated from the real coefficients in (5.6). We have chosen \(k=4\) with prescribed physical parameters in the single- and two-interval examples. Therefore, we need \(r\) in the serial and parallel models to be larger than \(k=4\). We have executed multiple trials for each case, where we
Figure 19: Gibbs phenomenon for the Fourier series near the end point for \(w\to 1\). We take the single interval example where the yellow curve represents the generating function as a Taylor series, and the blue curve is the Fourier series approximation of the generating function.
include the most successful results, with maximum relative errors controlled to \(\lesssim 3\%\), in Figures 20\(\sim\)23.
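For concreteness, a minimal sketch of the serial (data-reuploading) model is given below; it is our own illustration rather than the code used for the figures. The placeholder coefficients `c` stand in for those constructed from (5.6), the encoding is rescaled by the hyperparameter \(\gamma\) discussed later in this subsection, and full-batch optimization replaces the batched training with EarlyStopping described above.

```python
import pennylane as qml
from pennylane import numpy as np

r, k, gamma = 6, 4, 6.0                        # r sequential repetitions, r > k
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def serial_model(x, weights):
    # data re-uploading: W(theta_0) g(gamma*x) W(theta_1) ... g(gamma*x) W(theta_r)
    for i in range(r):
        qml.Rot(*weights[i], wires=0)
        qml.RX(gamma * x, wires=0)             # Pauli encoding gate g(x) = exp(-i x H)
    qml.Rot(*weights[r], wires=0)
    return qml.expval(qml.PauliZ(0))

c = np.array([0.05] * k + [0.5] + [0.05] * k)  # placeholder coefficients c_n, degree k = 4
def target(x):                                 # the target series (5.12)
    return np.real(sum(c[n + k] * np.exp(-1j * n * x) for n in range(-k, k + 1)))

X = np.linspace(0.0, 1.0, 300, requires_grad=False)   # 300 sample points
Y = np.array([target(x) for x in X])

weights = np.random.uniform(0, 2 * np.pi, size=(r + 1, 3), requires_grad=True)
opt = qml.AdamOptimizer(stepsize=0.005)        # learning rate quoted in the text

def cost(w):                                   # full-batch MSE loss
    preds = [serial_model(x, w) for x in X]
    return sum((p - y) ** 2 for p, y in zip(preds, Y)) / len(Y)

for epoch in range(200):                       # EarlyStopping omitted in this sketch
    weights = opt.step(cost, weights)
```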
Figure 20: A random serial quantum model trained with data samples to fit the target function of the single interval case. Top: the MSE loss function as a function of epochs, where the minimum loss is achieved at epoch 982. Bottom left: a random initialization of the serial quantum model with \(r=6\) sequential repetitions of Pauli encoding gates. Bottom right: the circles represent the 300 data samples of the single interval Fourier series with \(\ell=2\) and \(\epsilon=0.1\) for (3.2). The red curve represents the quantum model after training.
Figure 21: A random parallel quantum model for the single interval case. Top: the loss function achieves minimum loss at epoch 917. Bottom: a random initialization of the quantum model with \(r=5\) parallel repetitions of Pauli encoding gates that has achieved a good fit.
Figure 22: A random serial quantum model trained with data samples to fit the target function of the two-interval system with a small cross-ratio. Top: the loss function achieves minimum loss at epoch 968. Bottom left: a random initialization of the serial quantum model of \(r=6\) sequential repetitions of Pauli encoding gates. Bottom right: the circles represent the 300 data samples of the two-interval Fourier series with \(x=0.05\), \(\alpha=0.1\), and \(\epsilon=0.1\) for (3.15). The red curve represents the quantum model after training.
As observed from Figures 20\(\sim\)23, a rescaling of the data is necessary to achieve precise matching between the quantum models and the Fourier spectrum of our examples. This rescaling is possible because the global phase is unobservable [30], which introduces an ambiguity in the data-encoding. Consider our quantum model
\[f_{\theta}(x)=\langle 0|U^{\dagger}(x,\theta)MU(x,\theta)|0\rangle=\sum_{\omega \in\Omega}c_{\omega}(\theta)e^{i\omega x}, \tag{5.13}\]
Figure 23: A random parallel quantum model for the two-interval case. Top: the loss function achieves minimum loss at epoch 818. Bottom: a random initialization of the quantum model with \(r=5\) parallel repetitions of Pauli encoding gates that has achieved a good fit.
where we consider the case of a single qubit \(L=1\), then
\[U(x)=W^{(2)}g(x)W^{(1)}. \tag{5.14}\]
Note that the frequency spectrum \(\Omega\) is determined by the eigenvalues of the data-encoding Hamiltonian, which enters through the operator
\[g(x)=e^{-ixH}. \tag{5.15}\]
\(H\) has two eigenvalues \((\lambda_{1},\lambda_{2})\), but we can rescale the energy spectrum to \((-\gamma,\gamma)\) as the global phase is unobservable (e.g. for Pauli rotations, we have \(\gamma=\frac{1}{2}\)). We can absorb \(\gamma\) from the eigenvalues of \(H\) into the data input by re-scaling with
\[\tilde{x}=\gamma x. \tag{5.16}\]
Therefore, we are effectively free to assign other values to the eigenvalues of \(H\). Specifically, we have chosen \(\gamma=6\) in the training, where the interval in \(x\) is stretched from \([0,1]\) to \([0,6]\), as can be seen in Figures 20\(\sim\)23.
We should emphasize that we are not re-scaling the original target data; instead, we are re-scaling how the data is encoded. Effectively, we are re-scaling the frequency of the quantum model itself. The intriguing part is that the global phase shift of the operator acting on a quantum state cannot be observed, yet it affects the expressive power of the quantum model. This can be understood as a pre-processing of the data, which is argued to extend the function classes that the quantum model can represent [30].
This suggests that one may consider treating the re-scaling parameter \(\gamma\) as a trainable parameter [68], which would turn the scaling into an adaptive "frequency matching" process and potentially increase the expressivity of the quantum model. Here we only treat \(\gamma\) as a tunable hyperparameter. The scaling does not need to match the data, but finding an appropriate scaling parameter is crucial for model training.
### Recovering the von Neumann entropy
So far, we have managed to rewrite the generating function into a partial Fourier series \(f_{N}(w)\) of degree \(N\), defined on the interval \(w\in[-1,1]\). By leveraging variational quantum circuits, we have been able to reproduce the Fourier coefficients of the series accurately. In principle, with appropriate data-encoding and re-scaling strategies, increasing the depth or width of the quantum models would enable us to capture the series to any arbitrary degree \(N\). Thus, the expressivity of the Renyi entropies can be
established in terms of quantum models. However, a crucial problem remains: we need to recover the von Neumann entropy in the limit \(w\to 1\)
\[\lim_{w\to 1}G(w;\rho_{A})=S(\rho_{A}), \tag{5.17}\]
where the limiting point is exactly at the boundary of the interval that we are approximating. However, as we can see clearly from Figure 24, taking such a limit naively gives a very inaccurate value compared to the true von Neumann entropy. This effect does not diminish even when increasing \(N\) to achieve a better approximation of the series relative to its Taylor series form, as shown in Figure 24. This is because the Fourier series approximation is always oscillatory at the endpoints, a general feature known as the _Gibbs phenomenon_ of Fourier series approximations of discontinuous or non-periodic functions.
_A priori_, a partial Fourier series is a very accurate way to reconstruct the point values of a function \(f(x)\), as long as \(f(x)\) is smooth and periodic. Furthermore, if \(f(x)\) is analytic and periodic, then the partial Fourier series \(f_{N}\) converges to \(f(x)\) exponentially fast with increasing \(N\). However, \(f_{N}(x)\) is in general not an accurate approximation of \(f(x)\) if \(f(x)\) is either discontinuous or non-periodic. Not only is the convergence slow, but there is also an overshoot near the boundary of the interval. There are many different ways to understand this phenomenon. Broadly speaking, the difficulty lies in the fact that we are trying to obtain accurate local information from the global properties of the Fourier coefficients defined via an integral over the interval, which seems to be inherently impossible.
Figure 24: We have plotted the single interval example with \(\ell=2\) and \(\epsilon=0.1\) for (3.2). Here the legends \(G_{N}\) refer to the Fourier series of the generating function to degree \(N\), obtained by summing up to \(m=10\) in (5.6). \(G_{\text{Taylor}}\) refers to the Taylor series form (2.9) of the generating function, summed up to \(k=100\).
Mathematically, the occurrence of the Gibbs phenomenon can be easily understood in terms of the oscillatory nature of the Dirichlet kernel, which arises when the Fourier series is written as a convolution. Explicitly, the Fourier partial sum can be written as
\[s_{n}(x)=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\xi)D_{n}(\xi-x)d\xi, \tag{5.18}\]
where the Dirichlet kernel \(D_{n}(x)\) is given by
\[D_{n}(x)=\frac{\sin{(n+\frac{1}{2})x}}{2\sin{\frac{x}{2}}}. \tag{5.19}\]
This function oscillates between positive and negative values, and this behavior is responsible for the appearance of the Gibbs phenomenon near the jump discontinuities of the Fourier series at the boundary.
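The overshoot is easy to reproduce numerically. The self-contained snippet below (ours, using a unit step rather than the generating function) shows that the maximum of the partial Fourier sum does not approach the true value 1 as the number of terms grows, but instead tends to the Wilbraham-Gibbs value \(\approx 1.179\).

```python
import numpy as np

def partial_sum_step(x, n_terms):
    """Partial Fourier sum of the odd unit step f(x) = sign(x) on [-pi, pi]."""
    m = 2 * np.arange(n_terms) + 1                       # odd harmonics only
    return (4 / np.pi) * (np.sin(np.outer(m, x)) / m[:, None]).sum(axis=0)

x = np.linspace(-np.pi, np.pi, 20001)
for n in (10, 100, 1000):
    print(n, partial_sum_step(x, n).max())               # tends to ~1.1790, not to 1
```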
Therefore, our problem can be accurately framed as follows: given the \(2N+1\) Fourier coefficients \(\hat{f}_{k}\) of our generating function (5.6) for \(-N\leq k\leq N\), with the generating function defined on the interval \(w\in[-1,1]\), we need to reconstruct the point value of the function in the limit \(w\to 1\). The point value of the generating function in this limit exactly corresponds to the von Neumann entropy. In particular, we need the reconstruction to converge exponentially fast with \(N\) to the correct point value of the generating function, that is
\[\lim_{w\to 1}|G(w;\rho_{A})-f_{N}(w)|\leq e^{-\alpha N},\quad\alpha>0. \tag{5.20}\]
This is required for a realistic application of the quantum model, where the degree \(N\) that we can currently approximate for the partial Fourier series is limited by the depth or the width of the quantum circuits.
We are in need of an operation that can diminish the oscillations or, even better, completely remove them. Several filtering methods have been developed to ameliorate the oscillations, including the non-negative and decaying Fejér kernel, which smooths out the Fourier series over the entire interval, and the Lanczos \(\sigma\) factor, which locally reduces the oscillations near the boundary. For a comprehensive discussion of the Gibbs phenomenon and these filtering methods, see [72]. However, we emphasize that none of these methods is satisfactory, as they still cannot recover accurate point values of the function \(f(x)\) near the boundary.
Therefore, we need a more effective method to remove the Gibbs phenomenon completely. Here we will adopt a powerful method by re-expanding the partial Fourier
series into a basis of Gegenbauer polynomials.5
Footnote 5: Note that other methods exist based on periodically extending the function to give an accurate representation within the domain of interest, which involves reconstructing the function based on Chebyshev polynomials [73]. However, we do not explore this method in this work.
This method was developed in the 1990s in a series of seminal works [74; 75; 76; 77; 78; 79]; we also refer to [80; 81] for more recent reviews.
The Gegenbauer expansion method allows for an accurate representation, within exponential accuracy, by summing only a few terms built from the Fourier coefficients. Consider an analytic and non-periodic function \(f(x)\) on the interval \([-1,1]\) (or a sub-interval \([a,b]\subset[-1,1]\)) with the Fourier coefficients
\[\hat{f}_{k}=\frac{1}{2}\int_{-1}^{1}f(x)e^{-ik\pi x}dx, \tag{5.21}\]
and the partial Fourier series
\[f_{N}(x)=\sum_{k=-N}^{N}\hat{f}_{k}e^{ik\pi x}. \tag{5.22}\]
The following Gegenbauer expansion represents the original function that we want to approximate using the Fourier information:
\[S_{N,M}(x)=\sum_{n=0}^{M}g_{n,N}^{\lambda}C_{n}^{\lambda}(x), \tag{5.23}\]
where \(g_{n,N}^{\lambda}\) are the Gegenbauer expansion coefficients and \(C_{n}^{\lambda}(x)\) are the Gegenbauer polynomials.6
Footnote 6: The Gegenbauer expansion coefficients \(g_{n,N}^{\lambda}\) are defined with the partial Fourier series \(f_{N}(x)\) as
\[g_{n,N}^{\lambda}=\frac{1}{h_{n}^{\lambda}}\int_{-1}^{1}(1-x^{2})^{\lambda- \frac{1}{2}}f_{N}(x)C_{n}^{\lambda}(x)dx,\quad 0\leq n\leq M. \tag{5.24}\]
For \(\lambda\geq 0\), the Gegenbauer polynomial of degree \(n\) is defined to satisfy \[\int_{-1}^{1}(1-x^{2})^{\lambda-\frac{1}{2}}C_{k}^{\lambda}(x)C_{n}^{\lambda} (x)dx=0,\quad k\neq n.\] (5.25)
We refer to Appendix B for a more detailed account of the properties of the Gegenbauer expansion. Note that we have the following integral formula for computing \(g_{n,N}^{\lambda}\)
\[\frac{1}{h_{n}^{\lambda}}\int_{-1}^{1}(1-x^{2})^{\lambda-\frac{1}{2}}e^{ik\pi x }C_{n}^{\lambda}(x)dx=\Gamma(\lambda)\bigg{(}\frac{2}{\pi k}\bigg{)}^{\lambda} i^{n}(n+\lambda)J_{n+\lambda}(\pi k), \tag{5.26}\]
then
\[g_{n,N}^{\lambda}=\delta_{0,n}\hat{f}(0)+\Gamma(\lambda)i^{n}(n+\lambda)\sum_ {k=-N,k\neq 0}^{N}J_{n+\lambda}(\pi k)\bigg{(}\frac{2}{\pi k}\bigg{)}^{ \lambda}\hat{f}_{k}, \tag{5.27}\]
where we only need the Fourier coefficients \(\hat{f}_{k}\).
In fact, the Gegenbauer expansion is a two-parameter family of functions, characterized by \(\lambda\) and \(M\). It has been shown that by setting \(\lambda=M=\beta\epsilon N\) where \(\epsilon=(b-a)/2\) and \(\beta<\frac{2\pi e}{27}\) for the Fourier case, the expansion can achieve exponential accuracy with \(N\). Note that \(M\) will determine the degrees of the Gegenbauer polynomials, and as such, we should allow the degrees of the original Fourier series to grow with \(M\). For a clear demonstration of how the Gegenbauer expansion approaches the generating function from the Fourier data, see Figure 25. We will eventually be able to reconstruct the point value of the von Neumann entropy near \(w\to 1\) with increasing order in the expansion. A more precise statement regarding the exponential accuracy can be found in Appendix B. This method is indeed a process of reconstructing local information from global information with exponential accuracy, thereby effectively removing the Gibbs phenomenon.
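A compact numerical sketch of the whole procedure (ours; the analytic test function \(f(x)=e^{x}\) and the parameters are assumptions) is given below: the Fourier coefficients (5.21) are converted into Gegenbauer coefficients via (5.27), and the re-expansion (5.23) recovers the point value at the boundary \(x=1\), precisely where the raw partial Fourier series oscillates.

```python
import numpy as np
from scipy.special import eval_gegenbauer, gamma, jv

f, N = np.exp, 32                                  # assumed analytic, non-periodic test function
xg = np.linspace(-1, 1, 4096, endpoint=False)
# Fourier coefficients (5.21): f_hat_k = (1/2) * integral of f(x) exp(-i k pi x) over [-1, 1]
f_hat = np.array([np.mean(f(xg) * np.exp(-1j * k * np.pi * xg)) for k in range(-N, N + 1)])

lam = M = int(0.25 * N)                            # lambda = M = beta*eps*N with beta*eps = 0.25

def g_coeff(n):
    # (5.27): Gegenbauer coefficients built from the Fourier data alone
    s = sum(jv(n + lam, np.pi * k) * (2 / (np.pi * k)) ** lam * f_hat[k + N]
            for k in range(-N, N + 1) if k != 0)
    return (n == 0) * f_hat[N] + gamma(lam) * 1j ** n * (n + lam) * s

g = np.array([g_coeff(n) for n in range(M + 1)])   # imaginary parts cancel for real f
S = lambda x: sum(g[n].real * eval_gegenbauer(n, lam, x) for n in range(M + 1))
print(S(1.0), f(1.0))                              # boundary value recovered, no Gibbs overshoot
```

Note that, as stated above, both \(\lambda\) and \(M\) are taken to grow linearly with \(N\), which is the ingredient behind the exponential accuracy.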
## 6 Discussion
In this paper, we have considered a novel approach of using classical and quantum neural networks to study the analytic continuation of the von Neumann entropy from Renyi entropies. We approach the analytic continuation problem in a way suitable for deep learning techniques by rewriting \(\operatorname{Tr}\rho_{A}^{n}\) in the Renyi entropies in terms of a generating function that manifests as a Taylor series (2.9). We show that our deep learning models achieve this goal with a limited number of Renyi entropies.
Figure 25: Gegenbauer expansion constructed from the Fourier information. Here \(S_{M}\) refers to the Gegenbauer polynomials of order \(M\). Note that we set \(\beta\epsilon=0.25\), then \(\lambda=M=0.25N\). Therefore, in order to construct the polynomials of order \(M\), we need the information of the Fourier coefficients to order \(N=4M\).
Instead of using a static model design for the classical neural networks, we adopt KerasTuner to find the optimal model architecture and hyperparameters. There are two supervised learning scenarios: predicting the von Neumann entropy given the knowledge of Renyi entropies using densely connected neural networks, and treating higher Renyi entropies as sequential deep learning using RNNs. In both cases, we have achieved high accuracy in predicting the corresponding targets.
For the quantum neural networks, we frame a similar supervised learning problem as a mapping from inputs to predictions. This allows us to investigate the expressive power of quantum neural networks as function approximators, particularly for the von Neumann entropy. We study quantum models that can explicitly realize the generating function as a partial Fourier series. However, the Gibbs overshooting hinders the recovery of an accurate point value for the von Neumann entropy. To resolve this issue, we re-expand the series in terms of Gegenbauer polynomials, which leads to exponential convergence and improved accuracy.
Several relevant issues and potential improvements arise from our approach:
* It is crucial to choose appropriate architectures before employing KerasTuner (for instance, densely connected layers in Sec. 3 and RNNs in Sec. 4), because these architectures are built for certain tasks _a priori_. KerasTuner only serves as an effective method to determine the optimal complexity and hyperparameters for model training. However, since the examples from CFT\({}_{2}\) have different analytic structures for both the von Neumann and Renyi entropies, it would be interesting to explore how the different hyperparameters correlate with each example.
* Despite being efficient, the parameter spaces we sketched in Sec. 3.1 and Sec. 4.1 for the KerasTuner search are not guaranteed to contain the optimal setting, and there could be better approaches.
* We can generate datasets by fixing different physical parameters, such as the temperature for (3.6) or the cross-ratio \(x\) for (3.15). While we have varied the natural parameters, exploring different parameters may offer more representational power. It is possible to find a Dense model that provides feasible predictions in all parameter ranges, but this may require an ensemble of models.
* Regularization methods, such as K-fold validation, can potentially reduce the model size or datasets while maintaining the same performance. It would be valuable to determine the minimum datasets required or whether models with low complexity still have the same representational power for learning entanglement entropy.
* On the other hand, training the model with more data and resources is the most effective approach to improve the model's performance. One can also scale up the search process in the KerasTuner or use ensemble methods to combine the models found by it.
* For the quantum neural networks, note that our approach does not guarantee convergence to the correct Fourier coefficients, as we outlined in Sec. 5.1. It may be beneficial to investigate various pre-processing or data-encoding strategies to improve the approximation of the partial Fourier series with a high degree \(r\).
There are also future directions that are worth exploring that we shall comment on briefly:
* **Mutual information:** We can extend our study to mutual information for two disjoint intervals \(A\) and \(B\), which is an entanglement measure related to the von Neumann entropy defined as \[I(A:B)\equiv S(\rho_{A})+S(\rho_{B})-S(\rho_{A\cup B}).\] (6.1) In particular, there is a conjectured form of the generating function in [8], with \(\operatorname{Tr}\rho_{A}^{n}\) being replaced by \(\operatorname{Tr}\rho_{A}^{n}\operatorname{Tr}\rho_{B}^{n}/\operatorname{Tr} \rho_{A\cup B}^{n}\). It is worth exploring the expressivity of classical and quantum neural networks using this generating function, particularly as mutual information allows one to eliminate the UV divergence and can be compared with realistic simulations, such as spin-chain models [82].
* **Self-supervised learning for higher Renyi entropies:** Although we have shown that RNN architecture is effective in the sequence learning problem in Sec. 4, it is worth considering other architectures that could potentially offer better performance. For instance, a time-delay neural network, depthwise separable convolutional neural network, or a Transformer may be appropriate for certain types of data. These architectures may be worth exploring in extending the task of extracting higher Renyi entropies as self-supervised learning, particularly for examples where analytic continuation is not available.
* **Other entanglement measures from analytic continuation:** There are other important entanglement measures, say, relative entropy or entanglement negativity that may require analytic continuation and can be studied numerically based on neural networks. We may also consider entanglement entropy or entanglement spectrum that can be simulated in specific models stemming from condensed matter or holographic systems.
* **Expressivity of classical and quantum neural networks:** We have studied the expressivity of classical and neural networks for the von Neumann and Renyi entropies, with the generating function as the medium. This may help us in designing good generating functions for other entanglement measures suitable for neural networks. It is also worth understanding whether other entanglement measures are also in the function classes that the quantum neural networks can realize.
## Acknowledgments
We thank Xi Dong for his encouragement of this work. C-H.W. was supported in part by the U.S. Department of Energy under Grant No. DE-SC0023275, and the Ministry of Education, Taiwan. This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-19-1-0360.
## Appendix A Fourier series representation of the generating function
Suppose there is a Fourier series representation of the generating function from (2.4)
\[G(z;\rho_{A})=\sum_{n=-\infty}^{\infty}c_{n}e^{inz}.\] (A.1)
The idea is that we want to compute the Fourier coefficients given only the information about \(G(z;\rho)\) or \(\operatorname{Tr}\rho_{A}^{n}\). We can compute the complex-valued Fourier coefficients \(c_{n}\) using real-valued coefficients \(a_{n}\) and \(b_{n}\) for a general period \(T\) where
\[G(z;\rho_{A})=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\bigg{(}\frac{2\pi nz }{T}\bigg{)}+b_{n}\sin\bigg{(}\frac{2\pi nz}{T}\bigg{)}.\] (A.2)
Note that
\[a_{n} =\frac{2}{T}\int_{z_{1}}^{z_{2}}G(z;\rho_{A})\cos\bigg{(}\frac{2 \pi nz}{T}\bigg{)}dz,\] (A.3) \[b_{n} =\frac{2}{T}\int_{z_{1}}^{z_{2}}G(z;\rho_{A})\sin\bigg{(}\frac{2 \pi nz}{T}\bigg{)}dz,\] (A.4)
where we only need to compute these two sets of Fourier coefficients using the generating function of \(\operatorname{Tr}\rho_{A}^{n}\). However, the above integrals are hard to evaluate in general. Instead, we will show that both \(a_{n}\) and \(b_{n}\) can be written as the following series
\[a_{n}=\sum_{m=0}^{\infty}\frac{G(0;\rho)^{(m)}}{m!}C_{cos}(n,m),\] (A.5)
\[b_{n}=\sum_{m=0}^{\infty}\frac{G(0;\rho)^{(m)}}{m!}C_{sin}(n,m), \tag{A.6}\]
where \(C_{cos}(n,m)\) and \(C_{sin}(n,m)\) involve certain special functions. The definition of \(G(0;\rho_{A})^{(m)}\) starts from the generating function written in terms of \(w\),
\[G(w;\rho_{A})=-\operatorname{Tr}\left(\rho_{A}\ln\left[1-w(1-\rho_{A})\right]\right), \tag{A.7}\]
where the \(m\)-th derivative as \(w\to 0\) is
\[G(0;\rho_{A})^{(m)} = -\operatorname{Tr}[(-1)^{m+1}(m-1)!\rho_{A}(\rho_{A}-1)^{m}] \tag{A.8}\] \[= -(m-1)!\sum_{k=0}^{m}\frac{(-1)^{2m-k+1}m!}{k!(m-k)!}\operatorname {Tr}\left(\rho_{A}^{k+1}\right).\]
Note that for \(m=0\) we define
\[G(0;\rho_{A})^{(0)}=-\operatorname{Tr}(\rho_{A}\ln 1)=0. \tag{A.9}\]
Then we have the Fourier series representation of the generating function on an interval \([w_{1},w_{2}]\) with period \(T=w_{2}-w_{1}\) given by
\[G(w;\rho_{A})=\frac{a_{0}}{2} + \sum_{n=1}^{\infty}\bigg{\{}\sum_{m=0}^{\infty}\frac{\tilde{f}(m )}{m}C_{cos}(n,m)\cos\left(\frac{2\pi nw}{T}\right) \tag{A.10}\] \[+ \sum_{m=0}^{\infty}\frac{\tilde{f}(m)}{m}C_{sin}(n,m)\sin\left( \frac{2\pi nw}{T}\right)\bigg{\}},\]
where we have defined
\[\tilde{f}(m)\equiv-\sum_{k=0}^{m}\frac{(-1)^{2m-k+1}m!}{k!(m-k)!}\operatorname {Tr}\left(\rho_{A}^{k+1}\right), \tag{A.11}\]
with manifest \(\operatorname{Tr}\rho_{A}^{k+1}\) appearing in the expression.
Now we need to work out \(C_{cos}(n,m)\) and \(C_{sin}(n,m)\). First, let us consider in general
\[a_{n}=\frac{2}{T}\int_{t_{1}}^{t_{2}}f(t)\cos\bigg{(}\frac{2\pi nt}{T}\bigg{)}dt, \tag{A.12}\]
where we have written \(G(w;\rho_{A})\) as \(f(t)\) for simplicity. We can write down the Taylor series of both pieces
\[f(t)=\sum_{j=0}^{\infty}\frac{f^{(j)}(0)}{j!}t^{j},\quad\cos\bigg{(}\frac{2 \pi nt}{T}\bigg{)}=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(2k)!}\bigg{(}\frac{2 \pi nt}{T}\bigg{)}^{2k}. \tag{A.13}\]
Consider the following function
\[T_{cos}(t)\equiv f(t)\cos\left(\frac{2\pi nt}{T}\right)=\bigg{[}\sum_{j=0}^{ \infty}\frac{f^{(j)}(0)}{j!}t^{j}\bigg{]}\bigg{[}\sum_{k=0}^{\infty}\frac{(-1)^ {k}}{(2k)!}\bigg{(}\frac{2\pi nt}{T}\bigg{)}^{2k}\bigg{]},\] (A.14)
then let us collect the terms in powers of \(t\):
\[T_{cos}(t)=f(0)+f^{(1)}(0)t + \bigg{(}\frac{1}{2}f^{(2)}(0)-2f(0)\bigg{(}\frac{\pi n}{T}\bigg{)} ^{2}\bigg{)}t^{2}\] (A.15) \[+ \bigg{(}\frac{1}{6}f^{(3)}(0)-2f^{(1)}(0)\bigg{(}\frac{\pi n}{T} \bigg{)}^{2}\bigg{)}t^{3}\] \[+ \bigg{(}\frac{1}{24}f^{(4)}(0)-f^{(2)}(0)\bigg{(}\frac{\pi n}{T} \bigg{)}^{2}+\frac{2}{3}f(0)\bigg{(}\frac{\pi n}{T}\bigg{)}^{4}\bigg{)}t^{4}\] \[+ \cdots,\]
then the integral becomes
\[\int_{t_{1}}^{t_{2}}T_{cos}(t)dt=f(0)(t_{2}-t_{1}) + \frac{1}{2}f^{(1)}(0)(t_{2}^{2}-t_{1}^{2})\] (A.16) \[+ \frac{1}{3}\bigg{(}\frac{1}{2}f^{(2)}(0)-2f(0)\bigg{(}\frac{\pi n }{T}\bigg{)}^{2}\bigg{)}(t_{2}^{3}-t_{1}^{3})\] \[+ \frac{1}{4}\bigg{(}\frac{1}{6}f^{(3)}(0)-2f^{(1)}(0)\bigg{(} \frac{\pi n}{T}\bigg{)}^{2}\bigg{)}(t_{2}^{4}-t_{1}^{4})\] \[+ \frac{1}{5}\bigg{(}\frac{1}{24}f^{(4)}(0)-f^{(2)}(0)\bigg{(}\frac{ \pi n}{T}\bigg{)}^{2}+\frac{2}{3}f(0)\bigg{(}\frac{\pi n}{T}\bigg{)}^{4}\bigg{)} (t_{2}^{5}-t_{1}^{5})\] \[+ \cdots.\]
Now we want to re-order this expression, collecting terms by \(f^{(m)}(0)\):
\[\int_{t_{1}}^{t_{2}}T_{cos}(t)dt = f(0)\bigg{(}(t_{2}-t_{1})-\frac{2}{3}\bigg{(}\frac{\pi n}{T} \bigg{)}^{2}(t_{2}^{3}-t_{1}^{3})+\frac{2}{15}\bigg{(}\frac{\pi n}{T}\bigg{)} ^{4}(t_{2}^{5}-t_{1}^{5})+\cdots\bigg{)}\] (A.17) \[+ f^{(1)}(0)\bigg{(}\frac{1}{2}(t_{2}^{2}-t_{1}^{2})-\frac{1}{2} \bigg{(}\frac{\pi n}{T}\bigg{)}^{2}(t_{2}^{4}-t_{1}^{4})+\cdots\bigg{)}\] \[+ f^{(2)}(0)\bigg{(}\frac{1}{24}(t_{2}^{4}-t_{1}^{4})+\cdots\bigg{)} +\cdots.\]
After multiplying by a factor of \(2/T\), this can be written as
\[a_{n}=\frac{2}{T}\int_{t_{1}}^{t_{2}}T_{cos}(t)dt=\sum_{m=0}^{\infty}\frac{f^{( m)}(0)}{m!}C_{cos}(n,m),\] (A.18)
where
\[C_{cos}(n,m) = \sum_{p=0}^{\infty}\bigg{[}\frac{(-1)^{p}2^{(2p+1)}n^{2p}\pi^{2p}(t _{2}^{(2p+m+1)}-t_{1}^{(2p+m+1)})}{(2p+m+1)(2p)!T^{2p+1}}\bigg{]}\] (A.19) \[= \frac{2}{(m+1)T}\bigg{[}{}_{p}F_{q}\bigg{(}\frac{m+1}{2};\frac{1} {2},\frac{m+3}{2};-\frac{n^{2}\pi^{2}t_{2}^{2}}{T^{2}}\bigg{)}t_{2}^{m+1}\] \[-{}_{p}F_{q}\bigg{(}\frac{m+1}{2};\frac{1}{2},\frac{m+3}{2};- \frac{n^{2}\pi^{2}t_{1}^{2}}{T^{2}}\bigg{)}t_{1}^{m+1}\bigg{]}.\]
Next, we consider the case for \(C_{sin}(n,m)\), where we need to work out
\[b_{n}=\frac{2}{T}\int_{t_{1}}^{t_{2}}f(t)\sin\bigg{(}\frac{2\pi nt}{T}\bigg{)}dt,\] (A.20)
again, we know
\[\sin\bigg{(}\frac{2\pi nt}{T}\bigg{)}=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(2k+ 1)!}\bigg{(}\frac{2\pi nt}{T}\bigg{)}^{(2k+1)},\] (A.21)
then we define
\[T_{sin}(t)\equiv f(t)\sin\bigg{(}\frac{2\pi nt}{T}\bigg{)}=\bigg{[}\sum_{j=0}^ {\infty}\frac{f^{(j)}(0)}{j!}t^{j}\bigg{]}\bigg{[}\sum_{k=0}^{\infty}\frac{(-1 )^{k}}{(2k+1)!}\bigg{(}\frac{2\pi nt}{T}\bigg{)}^{2k+1}\bigg{]},\] (A.22)
with the only differences being the denominator, \((2k)!\rightarrow(2k+1)!\), and the power of \(\frac{2\pi nt}{T}\), which becomes \(2k+1\). Then
\[C_{sin}(n,m) = \sum_{p=0}^{\infty}\bigg{[}\frac{(-1)^{p}2^{(2p+2)}n^{2p+1}\pi^{ 2p+1}(t_{2}^{(2p+m+2)}-t_{1}^{(2p+m+2)})}{(2p+m+2)(2p+1)!T^{2p+2}}\bigg{]}\] (A.23) \[= \frac{4n\pi}{(m+2)T^{2}}\bigg{[}{}_{p}F_{q}\bigg{(}\frac{m+2}{2} ;\frac{3}{2},\frac{m+4}{2};-\frac{n^{2}\pi^{2}t_{2}^{2}}{T^{2}}\bigg{)}t_{2}^ {m+2}\] \[-{}_{p}F_{q}\bigg{(}\frac{m+2}{2};\frac{3}{2},\frac{m+4}{2};- \frac{n^{2}\pi^{2}t_{1}^{2}}{T^{2}}\bigg{)}t_{1}^{m+2}\bigg{]}.\]
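As a sanity check of this resummation (ours, with \(f(t)=e^{t}\) as an assumed test function so that \(f^{(m)}(0)=1\)), the closed form (A.19) can be compared against direct numerical evaluation of the integral (A.12):

```python
from mpmath import cos, exp, factorial, hyper, mp, pi, quad

mp.dps = 30
t1, t2 = mp.mpf(-1), mp.mpf(1)
T = t2 - t1
n = 3

def C_cos(n, m):
    # the closed form (A.19), with the 1F2 argument -(n pi t / T)^2 evaluated at t2 and t1
    F = lambda t: hyper([(m + 1) / mp.mpf(2)], [mp.mpf(1) / 2, (m + 3) / mp.mpf(2)],
                        -(n * pi * t / T) ** 2)
    return 2 / ((m + 1) * T) * (F(t2) * t2 ** (m + 1) - F(t1) * t1 ** (m + 1))

direct = 2 / T * quad(lambda t: exp(t) * cos(2 * pi * n * t / T), [t1, t2])
series = sum(C_cos(n, m) / factorial(m) for m in range(30))   # f^(m)(0) = 1 for f = exp
print(direct, series)                                         # agree to high precision
```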
## Appendix B The Gegenbauer polynomials and the Gibbs phenomenon
In this appendix, we briefly discuss the definition and properties of the Gegenbauer polynomials used to remove the Gibbs phenomenon in Section 5.4.
The Gegenbauer polynomials \(C_{n}^{\lambda}(x)\) of degree \(n\), for \(\lambda\geq 0\), are defined by the orthogonality relation
\[\int_{-1}^{1}(1-x^{2})^{\lambda-\frac{1}{2}}C_{k}^{\lambda}(x)C_{n}^{\lambda} (x)dx=0,\quad k\neq n,\] (B.1)
with the following normalization
\[C_{n}^{\lambda}(1)=\frac{\Gamma(n+2\lambda)}{n!\Gamma(2\lambda)}. \tag{B.2}\]
Note that the polynomials are not orthonormal; the norm of \(C_{n}^{\lambda}(x)\) is
\[\int_{-1}^{1}(1-x^{2})^{\lambda-\frac{1}{2}}(C_{n}^{\lambda}(x))^{2}dx=h_{n}^{ \lambda}, \tag{B.3}\]
where
\[h_{n}^{\lambda}=\pi^{\frac{1}{2}}C_{n}^{\lambda}(1)\frac{\Gamma(\lambda+\frac {1}{2})}{\Gamma(\lambda)(n+\lambda)}. \tag{B.4}\]
Given a function \(f(x)\) defined on the interval \([-1,1]\) (or a sub-interval \([a,b]\subset[-1,1]\)), the corresponding Gegenbauer coefficients \(\hat{f}^{\lambda}(l)\) are given by
\[\hat{f}^{\lambda}(l)=\frac{1}{h_{l}^{\lambda}}\int_{-1}^{1}(1-x^{2})^{\lambda -\frac{1}{2}}f(x)C_{l}^{\lambda}(x)dx, \tag{B.5}\]
then the truncated Gegenbauer expansion up to the first \(m+1\) terms is
\[f_{m}^{\lambda}(x)=\sum_{l=0}^{m}\hat{f}^{\lambda}(l)C_{l}^{\lambda}(x). \tag{B.6}\]
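For completeness, a short numerical sketch (ours; the test function \(f(x)=e^{x}\) and \(\lambda=2\) are assumptions) evaluates the coefficients (B.5) by direct quadrature using the norm (B.4) and checks the truncated expansion (B.6) against the test function:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_gegenbauer, gamma

lam, f = 2.0, np.exp

def h(n):
    # norm (B.4), with C_n^lambda(1) from (B.2)
    Cn1 = gamma(n + 2 * lam) / (gamma(n + 1) * gamma(2 * lam))
    return np.sqrt(np.pi) * Cn1 * gamma(lam + 0.5) / (gamma(lam) * (n + lam))

def f_hat(l):
    # Gegenbauer coefficient (B.5) by direct quadrature against the weight
    integrand = lambda x: (1 - x ** 2) ** (lam - 0.5) * f(x) * eval_gegenbauer(l, lam, x)
    return quad(integrand, -1, 1)[0] / h(l)

m = 8
coeffs = [f_hat(l) for l in range(m + 1)]
f_m = lambda x: sum(c * eval_gegenbauer(l, lam, x) for l, c in enumerate(coeffs))
print(f_m(0.7), f(0.7))   # the truncated expansion (B.6) already matches closely at m = 8
```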
Here we briefly sketch how the Gegenbauer expansion leads to a resolution of the Gibbs phenomenon, as discussed in Section 5.4. In fact, one can prove that there is exponential convergence between the function \(f(x)\) we want to approximate and its expansion in Gegenbauer polynomials of degree up to \(m\). We will only sketch the idea behind the proof and refer the readers to the review in [79] for the details.
One can establish exponential convergence by demonstrating that the errors incurred in expanding the \(N\)-th partial Fourier sum into Gegenbauer polynomials can be made exponentially small. Let \(f_{N}^{m}(x)\) denote the expansion of \(f_{N}(x)\) into Gegenbauer polynomials of degree up to \(m\), and let \(f^{m}(x)\) denote the corresponding expansion of \(f(x)\). Then we have the following relation, where the error in approximating \(f(x)\) by \(f_{N}^{m}(x)\) is bounded by the error between \(f(x)\) and \(f^{m}(x)\) plus the error between \(f^{m}(x)\) and \(f_{N}^{m}(x)\):
\[||f(x)-f_{N}^{m}(x)||\leq||f(x)-f^{m}(x)||+||f^{m}(x)-f_{N}^{m}(x)||. \tag{B.7}\]
On the right-hand side of the inequality, we call the first norm the _regularization error_ and the second norm the _truncation error_. Note that we take the norm to
be the maximum norm over the interval \([-1,1]\). To be more precise, we can write the truncation error as
\[||f^{m}-f^{m}_{N}||=\max_{-1\leq x\leq 1}\bigg{|}\sum_{k=0}^{m}(\hat{f}^{\lambda}_ {k}-\hat{g}^{\lambda}_{k})C^{\lambda}_{k}(x)\bigg{|}, \tag{B.8}\]
where we take \(\hat{f}^{\lambda}_{k}\) to be the unknown Gegenbauer coefficients of the function \(f(x)\). If both \(\lambda\) and \(m\) grow linearly with \(N\), this error is shown to be exponentially small. On the other hand, the regularization error can be written as
\[||f-f^{m}||=\max_{-1\leq x\leq 1}\bigg{|}f(x)-\sum_{k=0}^{m}\hat{f}^{\lambda}_{k}C^{ \lambda}_{k}(x)\bigg{|}. \tag{B.9}\]
It can also be shown that this error is exponentially small for \(\lambda=\gamma m\) with a positive constant \(\gamma\). Since both the regularization and truncation errors can be made exponentially small with the prescribed conditions, the Gegenbauer expansion achieves uniform exponential accuracy and removes the Gibbs phenomenon from the Fourier data.
|
2302.01390 | Controlling the Skyrmion Density and Size for Quantized Convolutional
Neural Networks | Skyrmion devices show energy efficient and high integration data storage and
computing capabilities. Herein, we present the results of experimental and
micromagnetic investigations of the creation and stability of magnetic
skyrmions in the Ta/IrMn/CoFeB/MgO thin film system. We investigate the
magnetic-field dependence of the skyrmion density and size using polar magneto
optical Kerr effect MOKE microscopy supported by a micromagnetic study. The
evolution of the topological charge with time under a magnetic field is
investigated, and the transformation dynamics are explained. Furthermore,
considering the voltage control of these skyrmion devices, we evaluate the
dependence of the skyrmion size and density on the Dzyaloshinskii Moriya
interaction and the magnetic anisotropy. We furthermore propose a skyrmion
based synaptic device based on the results of the MOKE and micromagnetic
investigations. We demonstrate the spin-orbit torque controlled discrete
topological resistance states with high linearity and uniformity in the device.
The discrete nature of the topological resistance makes it a good candidate to
realize hardware implementation of weight quantization in a quantized neural
network (QNN). The neural network is trained and tested on the CIFAR10 dataset,
where the devices act as synapses to achieve a recognition accuracy of 87%,
which is comparable to the result of ideal software-based methods. | Aijaz H. Lone, Arnab Ganguly, Hanrui Li, Nazek El- Atab, Gobind Das, H. Fariborzi | 2023-02-02T19:56:14Z | http://arxiv.org/abs/2302.01390v1 | # Controlling the Skyrmion Density and Size for Quantized Convolutional Neural Networks
###### Abstract
_ABSTRACT: Skyrmion devices show energy-efficient and high-integration data storage and computing capabilities. Herein, we present the results of experimental and micromagnetic investigations of the creation and stability of magnetic skyrmions in the Ta/IrMn/CoFeB/MgO thin-film system. We investigate the magnetic-field dependence of the skyrmion density and size using polar magneto-optic Kerr effect (MOKE) microscopy supported by a micromagnetic study. The evolution of the topological charge with time under a magnetic field is investigated, and the transformation dynamics are explained. Furthermore, considering the voltage control of these skyrmion devices, we evaluate the dependence of the skyrmion size and density on the Dzyaloshinskii-Moriya interaction and the magnetic anisotropy. We furthermore propose a skyrmion-based synaptic device based on the results of the MOKE and micromagnetic investigations. We demonstrate the spin-orbit torque-controlled discrete topological resistance states with high linearity and uniformity in the device. The discrete nature of the topological resistance (weights) makes it a candidate to realize hardware implementation of weight quantization in a quantized neural network (QNN). The neural network is trained and tested on the CIFAR-10 dataset, where the devices act as synapses to achieve a recognition accuracy of \(\thicksim\)87%, which is comparable to the result of ideal software-based methods._
_KEYWORDS: Skyrmions, Magnetic Tunnel Junction, Magneto-Optical Kerr Effect, Micromagnetics, Topological Resistance, and Neuromorphic devices_
## I Introduction
Magnetic skyrmions are topologically protected swirling structures obtained using chiral interactions in noncentrosymmetric magnetic compounds or thin films exhibiting broken inversion symmetry [1]. The intrinsic topological protection of skyrmions makes them stable against external perturbations [2]. Topological protection means that skyrmions have a characteristic topological integer that cannot change under continuous deformation of the field [3]. The Dzyaloshinskii-Moriya interaction (DMI), a chiral antisymmetric exchange interaction responsible for the formation of magnetic skyrmions, arises from the strong spin-orbit coupling at
the heavy metal/ferromagnetic (HM/FM) interface with broken inversion symmetry [4]. Magnetic skyrmions emerge from the interplay between different energy terms: the exchange coupling and anisotropy terms support the parallel alignment of spins, whereas the DMI and dipolar energy terms prefer a noncollinear alignment of spins [5]. In asymmetric FM multilayer systems, such as Pt/Co/Ta [6] and Pt/CoFeB/MgO [7], the DMI is facilitated by the strong interfacial spin-orbit coupling induced by symmetry breaking. Because the DMI and anisotropy terms depend on material properties and geometry, combinations of different HM/FM structures have been investigated to stabilize skyrmions and define a specific chirality [8]. Spintronic devices based on magnetic skyrmions offer increased density and energy-efficient data storage because of the nanometric size and topological protection of skyrmions [9]. Magnetic skyrmions can be driven by extremely low depinning current densities [10], and they can be scaled down to 1 nm [11]. These properties indicate their potential for application in data storage and computing [12]. In particular, skyrmion devices exhibit excellent potential for unconventional computing techniques such as neuromorphic computing [13] and reversible computing [14]. Neuromorphic computing is inspired by the performance and energy efficiency of the brain [15]; it employs neuromimetic devices that emulate neurons for computing tasks, while synapses store information in terms of weights. Spintronic devices, particularly magnetic tunnel junctions (MTJs), have attracted considerable interest in neuromorphic computing [16][17][18][19]. Recently, multiple skyrmion-based neuromorphic computing elements built on MTJs, such as skyrmion neurons [20][15][21] and skyrmion synapses [13][22][23], have been proposed. Furthermore, the electric-field control of spintronic devices has attracted considerable attention for memory and logic applications because it is an efficient approach for improving the data-storage density [24][25] and reducing the switching energy [26]. The important challenges encountered in the application of skyrmions to storage and computing (both conventional and unconventional) are the controlled motion and readability of skyrmions [27].
Quantization of neural networks is an effective method for model compression and acceleration. To compress the network, all model weights stored as high-precision floating-point numbers are quantized into low-bit integers or fixed-point numbers, which significantly reduces the required memory size and computational cost [28]. Emerging memory devices can emulate the functions of biological synapses to realize in-memory computing, paving the way for quantized-weight implementations in neuromorphic computing systems [29][30].
In this study, we conduct experimental and micromagnetic investigations of magnetic skyrmions in the Ta/IrMn/CoFeB/MgO thin-film system for the realization of a neuromorphic device. The dependence of the skyrmion density and diameter on the magnetic field is studied by polar magneto-optic Kerr effect (MOKE) microscopy supported by micromagnetic simulations. The evolution of the topological charge with time under the magnetic field is investigated, and the transformation dynamics from the labyrinth domains to Neel skyrmions are explained. Furthermore, we evaluate the dependence of the skyrmion density and size on the DMI and the magnetic anisotropy in order to realize voltage-controlled skyrmion devices. Based on these results of the MOKE and micromagnetic investigations, we propose a skyrmion-based synaptic device. We demonstrate the spin-orbit torque (SOT)-controlled skyrmion device and its discrete topological resistance states. Considering the discrete nature of the topological resistance (weights), we demonstrate a neuromorphic implementation based on a quantized convolutional neural network (QCNN) architecture. The NN is trained and tested on the CIFAR-10 dataset, and the devices acting as synapses achieve a recognition accuracy of \(\sim\)90%, which is comparable to the accuracy of ideal software-based methods.
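As a schematic illustration of the weight-quantization step (our sketch, not the paper's implementation; the number of states and the conductance range are assumptions standing in for the measured discrete topological-resistance levels), float weights can be snapped to the nearest discrete synaptic state:

```python
import numpy as np

def quantize_to_states(w, g_min=0.1, g_max=1.0, n_states=16):
    """Map float weights onto n_states equally spaced device conductances."""
    levels = np.linspace(g_min, g_max, n_states)        # discrete synaptic states
    w_norm = g_min + (g_max - g_min) * (w - w.min()) / (w.max() - w.min() + 1e-12)
    idx = np.abs(w_norm[..., None] - levels).argmin(axis=-1)
    return levels[idx]

w = np.random.randn(64, 32).astype(np.float32)          # trained float32 weights
w_q = quantize_to_states(w)
print(np.unique(w_q).size)                              # at most 16 distinct weights remain
```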
## II Methods
As shown in Fig. 1, a multilayer thin film of Ta (5 nm)/IrMn (5 nm)/CoFeB (1.04 nm)/MgO (2 nm)/Ta (2 nm) was deposited on thermally oxidized Si substrates by Singulus DC/RF magnetron sputtering. In this sample, the thickness of CoFeB is a critical parameter that provides suitable anisotropy for creating high-density skyrmions. The sputtering conditions were carefully optimized to achieve perpendicular magnetic anisotropy (PMA). Then, the sample was subjected to a post-deposition annealing treatment at 250\({}^{\circ}\)C for 30 min to enhance the PMA. The investigations were performed using MOKE microscopy in the polar configuration. Differential Kerr imaging was performed to observe the magnetic domains and eliminate the contribution of any nonmagnetic intensity. Square pulses of the magnetic field (MF) were applied simultaneously in the plane and out of the plane of the sample using two independent electromagnets. The sample exhibited a labyrinth domain structure in the absence of an MF. First, using a sufficiently large out-of-plane field, the magnetization was saturated in one perpendicular direction. A reference polar-MOKE image was captured in this state. Next, the out-of-plane MF strength (\(H_{z}\)) was decreased to the desired value
while the in-plane field with strength \(H_{x}\) was applied, as required. Subsequently, a second polar-MOKE image was captured in this state. The magnetic image of the final state was obtained as the differential image relative to the reference image. Fig. 1(b) shows the typical double-loop magnetic hysteresis characteristic of the multidomain state in the system, which supports the MOKE imaging results.
### Micromagnetics
Magnetic skyrmions are described using their topological or skyrmion number (Q), calculated as follows [31]:
\[Q=\tfrac{1}{4\pi}\int\int\mathbf{m}.\left(\tfrac{\partial\mathbf{m}}{ \partial x}\times\tfrac{\partial\mathbf{m}}{\partial y}\right)dxdy. \tag{1}\]
The normalized magnetization vector (\(\mathbf{m}\)), with the spins projected on the xy plane, can be parameterized by the radial function (\(\theta\)), the azimuthal angle (\(\varphi\)), the vorticity (\(Q_{v}\)), and the helicity (\(Q_{h}\)):
\[m(r)=[\sin(\theta)\cos(Q_{v}\varphi+Q_{h}),\sin(\theta)\sin(Q_{v }\varphi+Q_{h}),\cos\left(\theta\right)]. \tag{2}\]
The vorticity is related to the skyrmion number via the following expression [12]:
\[Q=\tfrac{Q_{v}}{2}\Big{[}\lim_{r\rightarrow\infty}\cos(\theta(r ))-\cos(\theta(0))\Big{]}. \tag{3}\]
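A minimal finite-difference sketch of Eq. (1) (ours; the analytic Neel profile below is only a test texture, not simulation output) computes \(Q\) for a magnetization field sampled on a regular grid:

```python
import numpy as np

def topological_charge(m, dx=1.0, dy=1.0):
    """Eq. (1): Q = (1/4pi) * sum of m . (dm/dx x dm/dy), for m of shape (Nx, Ny, 3), |m| = 1."""
    dmdx = np.gradient(m, dx, axis=0)
    dmdy = np.gradient(m, dy, axis=1)
    q = np.einsum('xyk,xyk->xy', m, np.cross(dmdx, dmdy))
    return q.sum() * dx * dy / (4 * np.pi)

# Test texture: Neel skyrmion (Q_v = 1, Q_h = 0), core down, background up
N, R = 256, 10.0
x = np.arange(N) - N / 2
X, Y = np.meshgrid(x, x, indexing='ij')
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
theta = 2 * np.arctan2(R, r)                 # theta(0) = pi, theta -> 0 far from the core
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)
print(topological_charge(m))                 # ~ -1 for this core-down texture, up to finite-size effects
```

The sign of the result flips with the background orientation, consistent with the convention dependence implicit in Eq. (3).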
Figure 1: Schematic of the fabricated sample for MOKE and vibrating sample magnetometry (VSM) characterizations
Micromagnetic simulations were performed using MuMax, with the Landau-Lifshitz-Gilbert (LLG) equation as the basic magnetization-dynamics computing unit [32]. The LLG equation describes the magnetization evolution as follows:
\[\frac{d\widehat{\mathbf{m}}}{dt}=\frac{-\gamma}{1+\alpha^{2}}\big{[}\mathbf{m}\times\mathbf{H}_{\mathbf{eff}}+\alpha\,\mathbf{m}\times(\mathbf{m}\times\mathbf{H}_{\mathbf{eff}})\big{]}, \tag{4}\]
where \(\gamma\) is the gyromagnetic ratio, \(\alpha\) is the Gilbert damping coefficient, and \(\mathbf{H}_{\mathbf{eff}}\) is the effective field defined in Eq. (7) below.
The total magnetic energy of the free layer includes the exchange, Zeeman, uniaxial anisotropy, demagnetization, and DMI energies [33][34]:
\[E(\mathbf{m})=\int_{V}[\,A(\nabla\mathbf{m})^{2}-\mu_{0}\mathbf{m}.\,H_{\mathbf{ext}}\,-\frac{ \mu_{0}}{2}\mathbf{m}.\,H_{\mathbf{d}}-K_{\mathbf{u}}(\widehat{\alpha}.\mathbf{m})^{2}+\mathbf{\varepsilon }_{\mathbf{DM}}\,]d\mathbf{v}, \tag{5}\]
where A is the exchange stiffness, \(\mu_{0}\) is the permeability, \(K_{u}\) is the anisotropy energy density, \(H_{\mathbf{d}}\) is the demagnetization field, and \(H_{\mathbf{ext}}\) is the external field. Moreover, the DMI energy density is computed as follows:
\[\varepsilon_{DM}=D[m_{\text{z}}(\nabla.\mathbf{m})-(\mathbf{m}.\nabla).\,\mathbf{m}]. \tag{6}\]
\[\mathbf{H}_{\mathbf{eff}}=\frac{-1}{\mu_{0}M_{\text{S}}}\frac{\delta E}{\delta\mathbf{m}} \tag{7}\]
is the effective magnetic field around which the magnetization vector precesses.
Then, the SOT is introduced as a custom field term into MuMax [35]:
\[\mathbf{\tau}_{\mathbf{SOT}}=-\frac{\gamma}{1+\alpha^{2}}a_{J}[(1+\xi\alpha)\mathbf{m} \times(\mathbf{m}\times\mathbf{p})+(\xi-\alpha)(\mathbf{m}\times\mathbf{p})], \tag{8}\]
\[a_{J}=\left|\frac{\hbar}{2M_{\text{S}}e\mu_{0}}\frac{\theta_{SH}j}{d}\right| \qquad\text{and}\,\,\,\mathbf{p}=sign(\theta_{SH})\mathbf{j}\times\mathbf{n}\,\,,\]
where \(\theta_{SH}\) is the spin Hall coefficient of the material, \(j\) is the current density, and \(d\) is the free-layer thickness. The synapse resistance and neuron output voltage are computed using the nonequilibrium Green's function (NEGF) formalism. We then consider the magnetization profile of the free layer and feed it to the NEGF model, which computes the resistance of the device, as follows [36][37]:
\[R_{syn}=\frac{V_{syn}}{I_{syn}}. \tag{9}\]
Subsequently, the MTJ read current is computed as follows:
\[I_{syn}=trace\left\{\sum_{k_{t}}C_{\sigma}\frac{i}{h}\begin{pmatrix}H_{k,k+1}&G ^{n}_{k+1,k}\\ -G^{n}_{k,k+1}H_{k+1,k}\end{pmatrix}\right\}, \tag{10}\]
where \(H_{k}\) is the \(k\)th lattice site in the device Hamiltonian, and \(G^{n}_{k}\) is the electron correlation at the \(k\)th site, which yields the electron density.
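To make Eqs. (4) and (8) concrete, the following macrospin sketch (ours, not the MuMax implementation; all material numbers are illustrative) integrates a single normalized moment under an effective field plus the SOT term:

```python
import numpy as np

gamma_e = 1.760859e11          # gyromagnetic ratio (rad s^-1 T^-1)
alpha, xi = 0.02, 0.0          # Gilbert damping; field-like torque ratio (assumed)

def llg_sot_rhs(m, H_eff, a_J, p):
    # Eq. (4) plus the SOT term of Eq. (8), for a single macrospin
    pre = -gamma_e / (1 + alpha ** 2)
    rhs = np.cross(m, H_eff) + alpha * np.cross(m, np.cross(m, H_eff))
    rhs = rhs + a_J * ((1 + xi * alpha) * np.cross(m, np.cross(m, p))
                       + (xi - alpha) * np.cross(m, p))
    return pre * rhs

m = np.array([0.0, 0.02, 1.0]); m /= np.linalg.norm(m)
p = np.array([0.0, 1.0, 0.0])  # p = sign(theta_SH) j x n
H = np.array([0.0, 0.0, 0.05]) # effective field (T), illustrative
dt = 1e-14                     # explicit Euler step with renormalization
for _ in range(200_000):
    m = m + llg_sot_rhs(m, H, a_J=5e-3, p=p) * dt
    m /= np.linalg.norm(m)
print(m)                       # the moment tilts toward p in this sign convention
```

A full micromagnetic solver additionally evaluates \(\mathbf{H}_{\mathbf{eff}}\) from the energy functional (5) and (7) on every cell of the mesh, which is what MuMax does internally.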
## III Results and Discussion
In this study, skyrmions were created and annihilated in a multilayer thin film using a combination of in-plane and out-of-plane MFs. The sample and experimental conditions were optimized to achieve a relatively high skyrmion density at a low field amplitude. We experimentally obtained a maximum skyrmion density of \(350\times 10^{3}\)/mm\({}^{2}\) for the resultant MFs with \(H_{z}\) and \(H_{x}\) values of 20 Oe and 35 Oe, respectively. As shown in Fig. 2(a), we experimentally observed the evolution from the labyrinth domains into skyrmions under a constant in-plane MF (\(H_{X}=2\) mT) and an out-of-plane MF (\(H_{z}=0\)-2.4 mT), as reported in the top-right corner of each figure. The white and black contrasts correspond to the \(\uparrow\) and \(\downarrow\) domains, respectively. We then observed that the large labyrinth domains start splitting into small domain structures, and skyrmions start emerging as \(H_{z}\) increases from 0 to 1.2 mT. At \(H_{z}=2\) mT, all the labyrinth domains disappear and the skyrmion density reaches its maximum value. Any additional increase in the field strength then decreases the skyrmion density because of skyrmion annihilation. The complete annihilation of the skyrmions occurs when \(H_{z}\) is \(\sim\)2.6 mT. The magnetization reversal mechanism is explained analytically using micromagnetic simulations. Here, we employ the finite difference method to solve the LLG equation and examine the spin dynamics in a system similar to that used in the experiments. We consider a 1024 \(\times\) 512 \(\times\) 1-nm\({}^{3}\) free layer discretized as a mesh with 1 \(\times\) 1 \(\times\) 1-nm\({}^{3}\) cells. The magnetic parameters used for the simulations are as follows: anisotropy, K = 1.1 \(\times\) 10\({}^{6}\) J/m\({}^{3}\); exchange constant, A = 1 \(\times\) 10\({}^{-11}\) J/m; and saturation magnetization, M\({}_{\rm s}\) = 800 kA/m. Fig. 2(b) shows that the micromagnetic simulation results are consistent with the experimental results. At \(H_{z}=2\) mT, we observe the splitting of large labyrinth domains into small domain structures, followed by the gradual stabilization of skyrmions; accordingly, an increased skyrmion density is observed at \(H_{z}=10\) mT. An important observation from these simulations is that the magnetic domains split, rotate counterclockwise, and shrink until the skyrmions are stabilized. However, during this
process, the already stabilized skyrmions are robust, and except for small translation motions, we observe no change in their size. Once only skyrmions are present in the magnetic free layer, any additional increase in the field strength will cause the skyrmion size to shrink. However, at this point, the skyrmions are less responsive to the MF, reflecting their topological stability. The experimental field dependence of the skyrmion density is summarized in a 3D color plot in Fig. 2(c). The horizontal axes represent \(H_{x}\) and \(H_{z}\), and the height and the color represent the corresponding skyrmion densities. The skyrmion density peaks at \(H_{z}\!=\!2\) mT, independent of \(H_{x}\) (black dashed line). However, the skyrmion density is not symmetric with respect to the \(H_{z}\!=\!2\) mT line and is larger for higher values of \(H_{z}\). The red and blue arrows in Fig. 2(c) show the asymmetry of the skyrmion density line with respect to \(H_{z}\!=\!2\) mT. The degree of asymmetry increases with \(H_{x}\). This result demonstrates that the evolution of skyrmions involves two different phases. On the left side of \(H_{z}\!=\!2\) mT, the skyrmion creation rate is higher than the annihilation rate, indicating the net creation of skyrmions. On the right side, net annihilation occurs. At high \(H_{x}\) values, a larger field range is required for skyrmion annihilation than that for creation. This observation can be explained by the increase in the skyrmion size with \(H_{x}\) and the topological protection of skyrmions. Therefore, for annihilation, large fields are required to decrease the critical diameter of skyrmions. Thus, at high \(H_{x}\) values, additional energy is required for the annihilation of skyrmions. This result demonstrates that the in-plane field promotes the stability of the skyrmion spin structures; as such, their annihilation occurs at high chiral energies. This can be observed in Figs. 2(b) and 2(e), where the skyrmions are created under fields with \(H_{z}\!=\!40\) mT; however, the annihilation occurs at \(\sim\)100 mT. When comparing to the MOKE results shown in Fig. 2(d), we observe that the micromagnetic simulations agree very well qualitatively (see Fig. 2(e)). Moreover, the skyrmion density increases with \(H_{x}\), and a relatively broad window of \(H_{z}\) is observed, indicating the presence of skyrmions. When an in-plane field is applied to the labyrinth domains, the domains align along the field direction and their widths decrease. The aligned domains increase the efficiency of the skyrmion creation compared to the case with misaligned labyrinth domains. The experimental results qualitatively agree with the simulation results (Fig. 2(e)). The lack of quantitative agreement is explained by the differences in sample size and in the duration of the applied magnetic field between the experiment and the simulation. The simulation time considered in MuMax was in the range of nanoseconds, while in the experiment the magnetic field was applied for 100 ms. In the experiment, the stack area was 100 mm\({}^{2}\)
whereas in the simulation, considering the computational viability, the MuMax stack size was set to \(1.024~{}\mu\mathrm{m}\times 512~{}\mathrm{nm}\).
Fig. 3(a) shows the dynamics of the skyrmion stabilization. We observe that the highly asymmetrical domains at \(\mathrm{t}=3~\mathrm{ns}\) rotate counterclockwise while gradually shrinking into more symmetrical textures; thereafter, they are transformed into topologically protected skyrmions at \(\mathrm{t}=14.2~\mathrm{ns}\). The rotation stops when the domain transforms into a highly symmetrical skyrmion. Thus, initially, the magnetic energy is dissipated through three degrees of freedom (DOFs): rotation, translation, and shrinkage. Gradually, the DOFs are restricted; in particular, the rotational DOF completely vanishes with time because of the formation of symmetrical magnetic textures (skyrmions). In Figs. 3(b) and (c), we demonstrate the different torques acting on the domains. For the asymmetrical domains, the unbalanced torques due to the in-plane component (\(H_{x}\)) and the perpendicular component (\(H_{z}\)) induce domain rotation and domain shrinkage, respectively. However, for skyrmions, the torques are rotation torques, and they are balanced
Figure 2: (a) MOKE captures of the sample showing the MF dependence of the skyrmion density. (b) Micromagnetic simulation results corresponding to the experimental results. (c) Skyrmion-density tuning by applying in-plane and perpendicular MFs. (d) Skyrmion density as a function of \(H_{z}\) (MOKE). (e) Skyrmion density versus \(H_{z}\) showing an increasing trend, followed by a saturated constant (wide window) and, subsequently, annihilation (micromagnetics).
because of the symmetrical magnetic textures; thus, the rotation DOF is eliminated. Consequently, the skyrmion size gradually decreases because of the increase in the shrinking energy.
In Figs. 4 (a1-a3), we demonstrate the evolution of the total topological charge under the MF at different times. The topological charge is initially \(-10\), which indicates that skyrmions with Q = \(-1\) dominate the overall free-layer magnetic texture. However, under the field with \(H_{Z}\) = \(-30\) mT, the topological charge switches to \(+55\), and within a fraction of a nanosecond, Q settles at around 40. On increasing the MF, the Q = \(+1\) skyrmion density increases, as shown in Figs. 4(a2) (d-f). The maximum topological charge reaches 50 when all the Q = \(-1\) skyrmions are annihilated, and we
Figure 3: (a) Skyrmion-stabilization dynamics showing the counterclockwise rotation of the labyrinth domains and the gradual shrinking to yield skyrmions. (b) The unbalanced torque caused by the field in the asymmetrical domains causes rotation due to \(H_{x}\) and gradual shrinking due to \(H_{z}\). (c) For the skyrmion, the symmetrical texture balances the net torque, and hence no rotation occurs.
observe only Q = +1 skyrmions, as shown in (f). The Q = +1 skyrmions are stable from 40 to 80 mT, as shown in Fig. 2(e), and the corresponding topological charge is fixed at 50 in this range, as shown in Fig. 4(b).
Figure 4: (a1-a3) Time evolution of the topological charge (Q) under the MF. (b) Topological charge vs. MF. (c) Magnetic texture corresponding to different times and fields.
As we increase the field, the skyrmion size decreases and we start observing the annihilation of Q = +1 skyrmions and the decrease in the topological charge. The topological charge reaches 0 around \(H_{Z}\) = 100 mT.
Figure 5: (a) Anisotropy dependence of the skyrmion density for different DMI magnitudes (2, 2.5, 3, 3.5, and 4 mJ/m\({}^{2}\), respectively); the plots show the behavior of two types of skyrmions with (Q, Q\({}_{v}\), Q\({}_{h}\)) = (\(-\)1, 1, 0) and (1, 1, \(\pi\)). (b) DMI = 4 mJ/m\({}^{2}\). (c) Combined skyrmion density versus anisotropy with increasing DMI magnitude. (d) DMI = 4 mJ/m\({}^{2}\). (e) Control of the skyrmion density with increasing DMI.
In Fig. 5(a), we show the dependence of the skyrmion density on the DMI and anisotropy for skyrmions with indices (Q, Q\({}_{v}\), Q\({}_{h}\)) = (\(-\)1, 1, 0) and (1, 1, \(\pi\)). The simulations were carried out for DMI values in the range 2-4 mJ/m\({}^{2}\) and anisotropies from K = \(0.5\times 10^{6}\) J/m\({}^{3}\) to K = \(1.6\times 10^{6}\) J/m\({}^{3}\). In all simulations, we observe the presence of two types of skyrmions with different winding numbers (charges, Q), vorticities (Q\({}_{\rm v}\)), and helicities (Q\({}_{\rm h}\)) in the same FM thin film. The magnetization texture splits into large domains, and these domains, in turn, split into small labyrinth domains and skyrmions. Depending on the background magnetization of these large domains, the skyrmions have (Q, Q\({}_{\rm v}\), Q\({}_{\rm h}\)) either as (\(-\)1, 1, 0) or (1, 1, \(\pi\)). Here, we independently consider the impact of the DMI and anisotropy on the size and density of these two types of skyrmions. At a DMI magnitude of \(2\times 10^{-3}\) J/m\({}^{2}\), the density of skyrmions with attributes (1, 1, \(\pi\)) decreases smoothly with an increase in the anisotropy and undergoes complete annihilation at K = \(1.3\times 10^{6}\) J/m\({}^{3}\). However, the skyrmions with attributes (\(-\)1, 1, 0) undergo abrupt annihilation at \(\sim\)K = \(1.1\times 10^{6}\) J/m\({}^{3}\), indicating their lower stability compared to the other skyrmions. As the DMI increases to \(2.5\times 10^{-3}\) J/m\({}^{2}\), the behavior of the skyrmion density with respect to the anisotropy starts changing: the number of skyrmions with attributes (1, 1, \(\pi\)) decreases from 29 to a local minimum of 20, and a peak value of 35 is observed at K = \(1\times 10^{6}\) J/m\({}^{3}\), followed by sharp annihilation at K = \(1.25\times 10^{6}\) J/m\({}^{3}\). The density of the other type of skyrmion remains almost constant until K = \(1\times 10^{6}\) J/m\({}^{3}\), followed by an abrupt decay at K = \(1.1\times 10^{6}\) J/m\({}^{3}\). At DMI = \(3\times 10^{-3}\) J/m\({}^{2}\), the density of the (1, 1, \(\pi\)) skyrmions gradually decreases with a few oscillations, whereas that of the skyrmions with the (\(-\)1, 1, 0) attribute remains constant until K = \(1.2\times 10^{6}\) J/m\({}^{3}\), after which a sharp annihilation is observed. With an additional increase in DMI, the skyrmion density demonstrates a more stable behavior with an increase in the anisotropy, followed by an abrupt annihilation. For both types of skyrmions at DMI = \(4\times 10^{-3}\) J/m\({}^{2}\), the skyrmion density decreases at low anisotropies and increases with the anisotropy to a peak value, as shown in Fig. 5(b). Thus, a normal trend of the skyrmion density decreasing with the anisotropy is observed, as in previous cases. We plot the combined skyrmion density in Fig. 5(c) and Fig. 5(d); these results provide a clearer picture of the dependence of skyrmion stability on the DMI and anisotropy. If the DMI of a system is low, skyrmions can exist only at low anisotropies; however, the stability increases with the DMI magnitude, and the skyrmions can exist over a range of anisotropies. At high DMI values, the reverse behavior is observed, and the skyrmion density attains its maximum value at a high anisotropy. The simultaneous dependence of the skyrmion density on the DMI is shown in Fig. 5(e); we observe the maximum
skyrmion density around DMI = \(3\times 10^{-3}\) J/m\({}^{2}\). As both the DMI and anisotropy depend on the FM thickness and the spin-orbit interaction, this study provides additional insights into the optimization of material and geometrical properties, particularly the thickness of FM thin films, for the stabilization of skyrmions; for example, stable racetrack memories and conventional and unconventional logics depend on the SOT-driven motion of skyrmions.
The optimum DMI magnitude and anisotropy ranges are 3.1-3.5 mJ/m\({}^{2}\) and 0.5-1.2 \(\times\) 10\({}^{6}\) J/m\({}^{3}\), respectively, as the skyrmion density remains almost constant in these regions, which is important for realizing a reliable memory/logic operation. However, low DMI values (2-2.5 mJ/m\({}^{2}\)) appear to be ideal for voltage-controlled operations. Considering the behavior in Figs. 5(a) and 5(c) for D \(<\) 2.5 mJ/m\({}^{2}\), we observe a smooth variation in the skyrmion density with the anisotropy. The dependence of the anisotropy on the voltage has been demonstrated in [38]:
\[K_{S}(V)=K_{S}(0)-\frac{\xi E}{t_{FL}}\,, \tag{11}\]
where \(K_{S}(V)\) is the anisotropy at voltage \(V\), \(E\) is the electric field across the oxide, \(\xi\) is the voltage-controlled magnetic anisotropy (VCMA) coefficient, and \(t_{FL}\) is the free-layer thickness. In Fig. 6(a), the MOKE results demonstrate that the skyrmion size decreases with an increase in \(H_{x}\) and \(H_{z}\). The skyrmion size is maximum at \(H_{z}\) = 1 mT and \(H_{x}\) = 0; the size decreases with an increase in both \(H_{x}\) and \(H_{z}\). Fig. 6(b) shows how the skyrmion responds to the magnetic field and that the size of the skyrmion sharply decreases. However, as the size decreases, the intraskyrmion forces increase, decreasing the responsiveness of the skyrmion to the MF. Consequently, the skyrmion
Figure 6: (a) Skyrmion size decreasing with increasing \(H_{x}\) and \(H_{z}\) (MOKE). (b) The micromagnetic simulation captures a similar trend. (c) Magnetization behavior with time shows the gradual decrease in the responsiveness of the skyrmion to the field.
size decreases more gradually than in the case of a large skyrmion. Fig. 6(c) shows the free-layer magnetization with a single skyrmion under the magnetic field. As the skyrmion size decreases, the magnetization increases; however, with time, saturation is reached because of the increase in the strength of the intraskyrmion forces, and the magnetic field finally overcomes the topological barrier. Thus, the skyrmion is annihilated under a strong field.
Fig. 7(a) shows the change in the magnetization texture of the free layer with an increase in the anisotropy; the skyrmion density and size decrease with an increase in the anisotropy. For our device simulations, we consider \(\xi=130\) fJ/(V\(\cdot\)m) [35] and \(t_{FL}=1\) nm. Therefore, we used the VCMA to control the skyrmion density, which controls the synaptic conductance and demonstrates its neuromimetic behavior. Fig. 7(b) shows the MuMax simulations of the variations in the skyrmion size with the anisotropy. The micromagnetic simulation exhibits a trend similar to that observed in the experiment: at the beginning, the skyrmion size decreases rapidly; however, the skyrmion becomes more stable and unresponsive to the magnetic field as the size decreases because of the intraskyrmion forces (topological barrier). Fig. 7(c) shows the variation in the skyrmion size with an increase in the anisotropy at a constant DMI value (3 mJ/m\({}^{2}\)). The skyrmion diameter decreases linearly in the anisotropy range of 0.58-0.64 \(\times\) 10\({}^{6}\) J/m\({}^{3}\) and deviates from this linear behavior at both extreme ends of the anisotropy range. The plot of the magnetization behavior with time shows a gradual decrease in the responsiveness of the skyrmion to the field, as well as the final annihilation under a considerably strong field.
Thus, we can exploit the linear-region response for the synapses that act as linear weights; furthermore, the overall skyrmion behavior has considerable relevance to the sigmoid neuron behavior in artificial neural networks. We express this behavior in terms of a fitting model, as a modified sigmoid function:
\[R\,=\,\frac{\beta D}{1+e^{\,c_{1}(K-c_{2})}+e^{\,c_{1}(K+c_{2})}}\,. \tag{12}\]
From this equation, the critical condition for a skyrmion to exist can be derived by expanding to first order in the anisotropy K. For a fixed DMI value of 3 mJ/m\({}^{2}\), we obtain the following radius dependence on K:
\[R\,=\frac{\beta D}{3+2c_{1}K}. \tag{13}\]
Here, \(\beta\), \(c_{1}\), and \(c_{2}\) are the fitting coefficients (1.03 \(\times\) 10\({}^{5}\) m\({}^{3}\)/J, 5 \(\times\) 10\({}^{-5}\) m\({}^{3}\)/J, and 6.1 \(\times\) 10\({}^{5}\) J/m\({}^{3}\), respectively). The simulation results agree with the fitting model results.
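To make the fitting step concrete, the following is a minimal sketch of how a model of the form of Eq. (12) can be fitted to radius-versus-anisotropy data with SciPy; the synthetic data, noise level, initial guesses, and coefficient values are illustrative assumptions rather than our simulation output.

```python
import numpy as np
from scipy.optimize import curve_fit

D = 3e-3  # fixed DMI magnitude (J/m^2), as in the text

def skyrmion_radius(K, beta, c1, c2):
    # Eq. (12): R = beta * D / (1 + exp(c1 (K - c2)) + exp(c1 (K + c2)))
    return beta * D / (1.0 + np.exp(c1 * (K - c2)) + np.exp(c1 * (K + c2)))

# Synthetic R(K) data standing in for the micromagnetic results (illustrative values).
rng = np.random.default_rng(0)
K = np.linspace(0.5e6, 1.2e6, 40)                     # anisotropy (J/m^3)
true_coeffs = (1.0e-5, 5.0e-6, 6.1e5)                 # assumed (beta, c1, c2)
R_data = skyrmion_radius(K, *true_coeffs) * (1.0 + 0.02 * rng.standard_normal(K.size))

(beta, c1, c2), _ = curve_fit(skyrmion_radius, K, R_data, p0=(1e-5, 1e-6, 5e5))
print(f"fitted: beta={beta:.3e}, c1={c1:.3e}, c2={c2:.3e}")

# First-order expansion in K recovers Eq. (13): R ~ beta * D / (3 + 2 c1 K).
R_linear = beta * D / (3.0 + 2.0 * c1 * K)
```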
These results open another possibility for the realization of voltage-controlled neuromorphic skyrmion devices. On application of a voltage, the anisotropy decreases or increases, and a corresponding variation in the skyrmion size can thus be obtained.
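As a sketch of this voltage-control pathway, the snippet below chains Eq. (11) with the linearized radius model of Eq. (13): a gate voltage shifts the anisotropy through the VCMA effect, which in turn shifts the skyrmion radius. The oxide thickness used to convert voltage to field, as well as the numeric values, are illustrative assumptions.

```python
def anisotropy_under_voltage(K0, V, xi=130e-15, t_ox=1e-9, t_fl=1e-9):
    # Eq. (11): K_S(V) = K_S(0) - xi * E / t_FL, with E = V / t_ox (assumed oxide field)
    return K0 - xi * (V / t_ox) / t_fl

def radius_linear(K, beta=1e-5, c1=5e-6, D=3e-3):
    # Eq. (13): R = beta * D / (3 + 2 c1 K), with illustrative coefficients
    return beta * D / (3.0 + 2.0 * c1 * K)

K0 = 0.6e6                      # zero-bias anisotropy (J/m^3)
for V in (0.0, 0.5, 1.0):       # applied gate voltage (V)
    K = anisotropy_under_voltage(K0, V)
    print(f"V = {V:.1f} V -> K = {K:.3e} J/m^3 -> R = {radius_linear(K):.3e} m")
```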
The results based on the skyrmion density and size manipulation using the magnetic field, DMI, and anisotropy (voltage) terms provide in-depth insight into the
Figure 7: (a) The CoFeB (1 nm) layer magnetization texture evolution with anisotropy, showing that the skyrmion density remains almost constant at 5–7 \(\times\) 10\({}^{5}\) J/m\({}^{3}\) and decreases with a further increase in K. The skyrmions are annihilated at \(\sim\)K = 1.1 \(\times\) 10\({}^{6}\) J/m\({}^{3}\). (b) The skyrmion size at a constant DMI value decreases with an increase in the anisotropy. (c) The skyrmion-size dependence on the anisotropy at a constant DMI of 3 mJ/m\({}^{2}\) agrees with the fitted analytical model. The model can be used as a neuron thresholding function in spiking and artificial neural networks.
parameters, stack geometry, and switching techniques for tuning the skyrmion density and diameter for memory and logic applications.
## VI Modulating the skyrmion density and size for a quantized convolutional neural network
Motivated by the skyrmion density, topological charge, and skyrmion-size modulation discussed thus far, in Fig. 7(a), we propose a skyrmion-based memristive device, where the skyrmion topological resistance increases/decreases when a skyrmion is moved into/out of the active region. The current is applied across T-1 and T-2, which drives the skyrmions from the pre-synapse to the main-synapse region. The topological Hall resistance due to the skyrmions is expressed as follows [40]:
\[B_{Z}^{e}=\frac{\Phi_{Z}^{e}}{A}=-\frac{h}{eA}\iint\frac{1}{4\pi}\,\mathbf{m}\cdot\left(\frac{\partial\mathbf{m}}{\partial x}\times\frac{\partial\mathbf{m}}{\partial y}\right)\mathrm{d}x\,\mathrm{d}y,\]
\[\rho_{xy}^{T}=PR_{0}\left|\frac{h}{e}\right|\frac{1}{A}.\]
Here, \(P\) is the spin polarization of the conduction electrons, \(R_{0}\) is the normal Hall coefficient, \(h\) is Planck's constant, \(e\) is the electron charge, \(A\) is the area of the cross-overlap, and \(\frac{h}{e}\) is the flux quantum. The topological resistivity change is measured across T-3 and T-4. Following the results from [2], the resistivity contribution of one skyrmion is 22 \(n\Omega\)cm. Therefore, in the proposed skyrmion synapse, the topological resistance (\(R_{THE}\)) across XY is expected to increase by 22 \(n\Omega\) on adding each skyrmion to the synapse region. We create a discrete set of skyrmions in the pre-synapse region, as shown in Fig. 7(b). Thereafter, using SOT current pulses, the skyrmions are driven into the synapse region, and the corresponding topological resistivity is reflected in \(R_{AH}\) across XY. The skyrmions move at 80 m/s; thus, each skyrmion takes a time roughly equal to its initial distance divided by this velocity to reach the main-synapse region. We observe that the first skyrmion, located at \(-50\) nm from the center, arrives in \(\sim\)0.6 ns, and a step in the topological charge is detected. Likewise, under the constant current pulse, the other skyrmions move into the synapse region, as shown in Fig. 7(b), and we observe discrete steps, as shown in Fig. 7(c). For 8 skyrmions, eight discrete steps are detected. This results in discrete topological resistivity, as shown in Fig. 7(d). For 16 skyrmions, the resistivity increases in 16 discrete steps on the application of current pulses. As shown in Fig. 7(b), the current flows from \(+\)x to \(-\)x, and the skyrmions move from \(-\)x to \(+\)x. Assuming that the background magnetization is not impacted by the current but by the
skyrmion motion, the resistivity remains constant during the period before the arrival of the next skyrmion. Thus, we observe that the topological resistivity increases in discrete steps, as shown in Fig. 7(d); this is referred to as synaptic potentiation, during which the topological charge in the synapse region increases from 0 to 15. After 16 pulses, the current direction is reversed, the skyrmions move from +x to -x, and the topological charge in the main synapse decreases, which reduces the topological resistivity; this is referred to as synaptic depression. To induce the synaptic depression, the current direction is reversed and the skyrmions are removed from the active region into the pre-synapse region. Note that any number of discrete states, such as 8 (3-bit), 16 (4-bit), 32 (5-bit), and 64 (6-bit) states, can be realized by creating the corresponding number of skyrmions in the pre-synapse region. However, with an increase in the skyrmion density, the skyrmion-skyrmion repulsion and the skyrmion Hall effect begin to distort the skyrmion size, as shown in Fig. 7(b), although no impact on the topological charge is observed. This indicates that the topological-resistivity-based skyrmion synapse tends to be more stable and noise resilient. In Fig. 7(e), the magneto-tunnel resistance corresponding to the discrete skyrmions in the synapse is shown. If the device resistance is measured using an MTJ, we observe that with each skyrmion moving into or out of the synapse, the vertical tunnel resistance varies by 5.62 \(\Omega\) (73 m\(\Omega\).cm). In addition to the MF and SOT control, the variation in the skyrmion size with the MF and VCMA discussed in this work can introduce extra novelty to skyrmion device design. In particular, the tunnel resistance exhibited by the device can be tuned by SOT and voltage control; thus, additional advanced functional devices based on skyrmions can be realized.
_Fig. 7 (a) Topological resistivity-based skyrmion synapse; current is applied across T-1 and T-2. (b) Magnetic texture of the synapse showing the skyrmion motion from the pre-synapse region into the main-synapse region and vice-versa; the corresponding topological resistivity change is measured across T-3 and T-4. (c) Evolution of the topological charge in the presence of continuous current (\(8~{}\times 10^{11}\)A/m\({}^{2}\)). (d) Discrete topological resistivity of the device (measured across T-3 and T-4) for 16 skyrmions; by moving the skyrmions into/out of the main synapse, we achieve potentiation and depression. (e) Tunnel magneto-resistance in the MTJ configuration._
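For reference, the topological charge integral given earlier can be evaluated numerically from a discretized magnetization texture, and the resulting charge count maps to the stepwise resistivity via the per-skyrmion contribution quoted above. The sketch below uses a simple finite-difference estimate on an analytic test texture; the grid size and the test profile are illustrative assumptions.

```python
import numpy as np

def topological_charge(m, dx=1.0, dy=1.0):
    # Q = (1/4pi) * integral of m . (dm/dx x dm/dy); m has shape (nx, ny, 3) with unit vectors
    dmdx = np.gradient(m, dx, axis=0)
    dmdy = np.gradient(m, dy, axis=1)
    density = np.einsum('xyk,xyk->xy', m, np.cross(dmdx, dmdy))
    return density.sum() * dx * dy / (4.0 * np.pi)

# Analytic skyrmion-like test texture on a grid (illustrative).
L, R0 = 64, 10.0
x, y = np.meshgrid(np.arange(L) - L / 2, np.arange(L) - L / 2, indexing='ij')
r = np.hypot(x, y) + 1e-9
theta = 2 * np.arctan2(R0, r)        # polar-angle profile; |Q| should be close to 1
m = np.stack([np.sin(theta) * x / r, np.sin(theta) * y / r, np.cos(theta)], axis=-1)

Q = topological_charge(m)
rho_step = 22e-9 * 1e-2              # ~22 nOhm*cm per skyrmion, converted to Ohm*m
print(f"Q ~ {Q:.2f}; resistivity step per skyrmion ~ {rho_step:.2e} Ohm*m")
```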
#### Synaptic weight and neural network configuration
The synapse value is configured using a differential pair of skyrmion devices. To measure the synaptic weight, we employ two skyrmion devices to subtract the values and obtain the corresponding positive and negative weights [32]. The target synapse values are determined by the following equation:
\[G_{i,j}^{target}=G_{i,j}^{+}-G_{i,j}^{-}=k\big{(}\big{|}w_{i,j}^{+}-w_{i,j}^{-} \big{|}\big{)}=kw_{i,j}^{target},\]
where \(G_{i,j}^{target}\) indicates the target conductance, which can be positive or negative depending on the difference between \(G_{i,j}^{+}\) and \(G_{i,j}^{-}\), the conductances of the devices receiving positive and negative voltage stresses at site \((i,j)\) in the array. Similarly, \(w_{i,j}^{target}\) is the target weight obtained from the devices, and \(k\) is the coefficient that relates the weights to the conductance. Fig. 8(b) shows an example of the synaptic weights, which are directly obtained from eight discrete skyrmion states. Fig. 8(a) shows the schematic circuit diagram considered for the vector-matrix multiplication operations with differential skyrmion device pairs.
_Fig. 8: (a) Schematic of the circuit diagram comprising skyrmion-based synapses. (b) G diamond plot describing the method through which two skyrmion devices map to the synaptic weights, where each skyrmion device takes one of the eight conductance values from G1 to G8._
Through Kirchhoff's law, the weighted current sum can simply be calculated as the result of matrix multiplication, which increases the computing speed and decreases the power consumption within the in-memory computing architecture.
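As a minimal sketch of this readout, the code below maps signed weights onto differential pairs drawn from eight discrete conductance levels and computes the weighted current sum as a matrix product; the conductance values, the scaling coefficient \(k\), and the example weights are illustrative assumptions.

```python
import numpy as np

G_levels = np.linspace(1e-6, 8e-6, 8)   # eight discrete states G1..G8 (siemens), illustrative

def program_pair(w, k=1e-6):
    # Pick (G+, G-) from the allowed levels so that G+ - G- ~ k * w.
    gp, gm = np.meshgrid(G_levels, G_levels, indexing='ij')
    i, j = np.unravel_index(np.abs((gp - gm) - k * w).argmin(), gp.shape)
    return G_levels[i], G_levels[j]

def crossbar_vmm(W, v, k=1e-6):
    # Kirchhoff current sum of a differential crossbar: I = (G+ - G-) @ v.
    Gp, Gm = np.empty_like(W), np.empty_like(W)
    for idx, w in np.ndenumerate(W):
        Gp[idx], Gm[idx] = program_pair(w, k)
    return (Gp - Gm) @ v / k             # divide by k to read the result in weight units

W = np.array([[0.5, -2.0], [3.0, 1.5]])  # signed weights (illustrative)
v = np.array([0.3, 0.7])                 # input voltages
print(crossbar_vmm(W, v), "vs exact", W @ v)
```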
To measure the learning capability of our skyrmion device, we conducted an image-recognition task on the CIFAR-10 dataset with a nine-layer CNN, a compact variant of VGG-Net [33]. The network structure is shown in Fig. 9(a). Our architecture comprises six convolutional layers for feature extraction and three fully connected layers for image classification. After every two convolutional layers, we add a max-pooling layer to subsample and aggregate the features. In the simulation, the CNN
Fig. 9: Neural network structure for the CNN model (a variant of VGG-NET) comprising six convolutional layers (CONV), three max-pooling layers (Pooling), and three fully connected layers (FC). (b) System-level performance in terms of the recognition accuracy.
weights are directly used to obtain the synaptic values from the skyrmion device. The topological resistivity of the skyrmion device has the advantages of excellent uniformity and linearity, which are beneficial for the implementation of the QCNN. For example, our skyrmion device has exhibited suitable properties on both long-term potentiation (LTP) and long-term depression (LTD) with 64 and 32 states. We can thus implement 6-bit and 5-bit quantized parameter networks separately with our device synapse.
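The following sketch shows the corresponding weight quantization: trained floating-point weights are snapped onto N uniformly spaced levels (32 for 5-bit, 64 for 6-bit), mimicking the discrete topological-resistance states of the synapse; the random stand-in weights and the symmetric clipping range are illustrative assumptions.

```python
import numpy as np

def quantize_to_states(W, n_states=32):
    # Uniformly quantize weights onto n_states discrete levels (e.g. 32 -> 5-bit),
    # mimicking the discrete topological-resistance states of the skyrmion synapse.
    w_max = np.abs(W).max()
    step = 2 * w_max / (n_states - 1)          # symmetric range [-w_max, w_max]
    return np.round(W / step) * step

W = np.random.randn(128, 64) * 0.1             # stand-in for a trained CNN layer
for bits, n in ((5, 32), (6, 64)):
    Wq = quantize_to_states(W, n)
    err = np.linalg.norm(W - Wq) / np.linalg.norm(W)
    print(f"{bits}-bit ({n} states): relative quantization error {err:.4f}")
```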
The simulation results show that the QCNN implemented with the skyrmion-based synaptic device achieves results comparable to those of software-based CNN algorithms. As illustrated in Fig. 9(b), we implement 5- and 6-bit synaptic weights with 32 and 64 skyrmions in the active region, respectively, and achieve recognition accuracies of around 87.76% and 86.81%, which is slightly lower than that of the default 32-bit floating-point (FP32) software arithmetic. These results demonstrate the learning capability of the device, which achieves competitive accuracy in image recognition, and highlight its applicability as a synaptic device for neuromorphic computing systems.
## VII Conclusions
In this study, we investigated the creation, stability, and controllability of skyrmions using experimental and simulation techniques to understand their applications in data storage and computing. We analyzed multiple aspects of the skyrmion stability, size, and density modulation under the influence of the MF and anisotropy, and obtained detailed insights into the transition from the labyrinth domains to skyrmions, along with the topological charge evolution. Subsequently, an analytical model was developed to demonstrate the relationship between the skyrmion size and anisotropy, which helps in realizing VCMA-controlled synapses and neurons. We then analyzed the influence of the DMI and anisotropy on the skyrmion size and density for device-parameter optimization in multiple skyrmion applications. Our results, in particular, contribute to the understanding of skyrmion voltage switching for data storage and unconventional computing applications. Building on these results, we proposed a skyrmion-based synaptic device and demonstrated an SOT-controlled skyrmion device with discrete topological resistance states. The discrete topological resistance of the skyrmion device shows the inherent advantages of high linearity and uniformity, which make it a suitable candidate for the weight implementation of a QCNN. The neural network was trained and tested on the CIFAR-10 dataset, where we adopted the devices as synapses to achieve a competitive recognition accuracy against software-based weights.
2306.08125 | Implicit Compressibility of Overparametrized Neural Networks Trained
with Heavy-Tailed SGD | Neural network compression has been an increasingly important subject, not
only due to its practical relevance, but also due to its theoretical
implications, as there is an explicit connection between compressibility and
generalization error. Recent studies have shown that the choice of the
hyperparameters of stochastic gradient descent (SGD) can have an effect on the
compressibility of the learned parameter vector. These results, however, rely
on unverifiable assumptions and the resulting theory does not provide a
practical guideline due to its implicitness. In this study, we propose a simple
modification for SGD, such that the outputs of the algorithm will be provably
compressible without making any nontrivial assumptions. We consider a
one-hidden-layer neural network trained with SGD, and show that if we inject
additive heavy-tailed noise to the iterates at each iteration, for any
compression rate, there exists a level of overparametrization such that the
output of the algorithm will be compressible with high probability. To achieve
this result, we make two main technical contributions: (i) we prove a
'propagation of chaos' result for a class of heavy-tailed stochastic
differential equations, and (ii) we derive error estimates for their Euler
discretization. Our experiments suggest that the proposed approach not only
achieves increased compressibility with various models and datasets, but also
leads to robust test performance under pruning, even in more realistic
architectures that lie beyond our theoretical setting. | Yijun Wan, Melih Barsbey, Abdellatif Zaidi, Umut Simsekli | 2023-06-13T20:37:02Z | http://arxiv.org/abs/2306.08125v2 | # Implicit Compressibility of Overparametrized Neural Networks Trained with Heavy-Tailed SGD
###### Abstract
Neural network compression has been an increasingly important subject, due to its practical implications in terms of reducing the computational requirements and its theoretical implications, as there is an explicit connection between compressibility and the generalization error. Recent studies have shown that the choice of the hyperparameters of stochastic gradient descent (SGD) can have an effect on the compressibility of the learned parameter vector. Even though these results have shed some light on the role of the training dynamics over compressibility, they relied on unverifiable assumptions and the resulting theory does not provide a practical guideline due to its implicitness. In this study, we propose a simple modification for SGD, such that the outputs of the algorithm will be provably compressible without making any nontrivial assumptions. We consider a one-hidden-layer neural network trained with SGD and we inject additive heavy-tailed noise to the iterates at each iteration. We then show that, for _any_ compression rate, there exists a level of overparametrization (i.e., the number of hidden units), such that the output of the algorithm will be compressible with high probability. To achieve this result, we make two main technical contributions: (i) we build on a recent study on stochastic analysis and prove a 'propagation of chaos' result with improved rates for a class of heavy-tailed stochastic differential equations, and (ii) we derive strong-error estimates for their Euler discretization. We finally illustrate our approach on experiments, where the results suggest that the proposed approach achieves compressibility with a slight compromise from the training and test error.
## 1 Introduction
Obtaining compressible neural networks has become an increasingly important task in the last decade, with essential implications from both practical and theoretical perspectives. From a practical point of view, as modern network architectures might contain an excessive number of parameters, compression plays a crucial role in the deployment of such networks in resource-limited environments O'Neill (2020); Blalock et al. (2020). On the other hand, from a theoretical perspective, several studies have shown that compressible neural networks should achieve a better generalization performance due to their lower-dimensional structure Arora et al. (2018); Suzuki et al. (2020, 2020); Hsu et al. (2021); Barsbey et al. (2021); Sefidgaran et al. (2022).
Despite their evident benefits, it is still not yet clear how to obtain compressible networks with provable guarantees. In an empirical study Frankle and Carbin (2018), introduced the 'lottery ticket hypothesis', which indicated that a randomly initialized neural network will have a sub-network that can achieve a performance that is comparable to the original network; hence, the original network can be compressed to the smaller sub-network. This empirical study has formed a fertile ground for subsequent theoretical research, which showed that such a sub-network can indeed exist (see e.g., Malach et al. (2020); Burkholz et al. (2021); da Cunha et al. (2022)); yet, it is not clear how to develop an algorithm that can find it in a feasible amount of time.
Another line of research has developed methods to enforce the compressibility of neural networks by using sparsity-enforcing regularizers; see, e.g., Papyan et al. (2018); Aytekin et al. (2019); Chen et al. (2020); Lederer (2023); Kengne and Wade (2023). While these methods have led to interesting algorithms, they typically incur higher computational costs due to the increased complexity of the problem. On the other hand, due to the nonconvexity of the overall objective, it is also not trivial to provide theoretical guarantees for the compressibility of the resulting network weights.
Recently it has been shown that the training dynamics can have an influence on the compressibility of the algorithm output. In particular, motivated by the empirical and theoretical evidence that heavy-tails might arise in stochastic optimization (see e.g., Martin and Mahoney (2019); Simsekli et al. (2019); Simsekli et al. (2019); Simsekli et al. (2020); Zhou et al. (2020); Zhang et al. (2020); Camuto et al. (2021)), Barsbey et al. (2021); Shin (2021) showed that the network weights learned by stochastic gradient descent (SGD) will be compressible if we assume that they are heavy-tailed and there exists a certain form of statistical independence within the network weights. These studies illustrated that, even _without_ any modification to the optimization algorithm, the learned network weights can be compressible depending on the algorithm hyperparameters (such as the step-size or the batch-size). Even though the tail and independence conditions were recently relaxed in Lee et al. (2022), the resulting theory relies on unverifiable assumptions, and hence does not provide a practical guideline.
In this paper, we focus on single-hidden-layer neural networks with a fixed second layer (i.e., the setting used in previous work De Bortoli et al. (2020)) trained with vanilla SGD, and show that, when the iterates of SGD are simply perturbed by heavy-tailed noise with infinite variance (similar to the settings considered in Simsekli (2017); Nguyen et al. (2019); Simsekli et al. (2020); Huang et al. (2021); Zhang and Zhang (2023)), the assumption made in Barsbey et al. (2021) in effect holds. More precisely, denoting the number of hidden units by \(n\) and the step-size of SGD by \(\eta\), we consider the _mean-field limit_, where \(n\) goes to infinity and \(\eta\) goes to zero. We show that in this limiting case, the columns of the weight matrix will be independent and identically distributed (i.i.d.) with a common _heavy-tailed_ distribution. Then, we focus on the finite \(n\) and \(\eta\) regime and we prove that for _any_ compression ratio (to be made precise in the next section), there exists a number \(N\), such that if \(n\geq N\) and \(\eta\) is sufficiently small, the network weight matrix will be compressible with high probability. Figure 1 illustrates the overall approach and makes precise our notion of compressibility.
To prove our compressibility result, we make two main technical contributions. We first consider the case where the step-size \(\eta\to 0\), for which the SGD recursion perturbed with heavy-tailed noise yields a _system_ of heavy-tailed stochastic differential equations (SDE)
with \(n\) particles. As our first technical contribution, we show that as \(n\to\infty\) this particle system converges to a mean-field limit, which is a McKean-Vlasov-type SDE that is driven by a heavy-tailed process Jourdain et al. (2007); Liang et al. (2021); Cavallazzi (2023). For this convergence, we obtain a rate of \(n^{-1/2}\), which is faster than the best known rates, as recently proven in Cavallazzi (2023). This result indicates that a _propagation of chaos_ phenomenon Sznitman (1991) emerges1: in the mean-field regime, the columns of the weight matrix will be i.i.d. and heavy-tailed due to the injected noise.
Footnote 1: Here, the term chaos refers to statistical independence: when the particles are initialized independently, they stay independent through the whole process even though their common distribution might evolve.
Next, we focus on the Euler discretizations of the particle SDE to be able to obtain a practical, implementable algorithm. As our second main technical contribution, we derive _strong-error_ estimates for the Euler discretization Kloeden et al. (1992) and show that for sufficiently small \(\eta\), the trajectories of the discretized process will be close to the one of the continuous-time SDE, in a precise sense. This result is similar to the ones derived for vanilla SDEs (e.g., Mikulevicius and Xu (2018)) and enables us to incorporate the error induced by using a finite step-size \(\eta\) to the error of the overall procedure.
Equipped with these results, we finally prove a high-probability compression bound by invoking Gribonval et al. (2012); Amini et al. (2011), which essentially shows that an i.i.d. sequence of heavy-tailed random variables will have a small proportion of elements that will dominate the whole sequence in terms of absolute values (to be stated formally in the next section). This establishes our main contribution. Here, we shall note that similar mean-field regimes have already been considered in machine learning (see e.g., Mei et al. (2018); Chizat and Bach (2018); Rotskoff and Vanden-Eijnden (2018); Jabir et al. (2019); Mei et al. (2019); De Bortoli et al. (2020); Sirignano and Spiliopoulos (2022)). However, these studies all focused on particle SDE systems that either converge to deterministic systems or that are driven by Brownian motion. While they have introduced interesting analysis tools, we cannot directly benefit from their analysis in this paper, since the heavy-tails are crucial for obtaining compressibility, and the Brownian-driven SDEs cannot produce heavy-tailed solutions in general. Hence, as we consider heavy-tailed SDEs in this paper, we need to use different techniques to prove mean-field limits, compared to the prior art in machine learning.
Figure 1: The illustration of the overall approach. We consider a one-hidden-layer neural network with \(n\) hidden units, which results in a weight matrix of \(n\) columns (first layer). We show that, when SGD is perturbed with heavy-tailed noise, as \(n\to\infty\), each column will follow a multivariate heavy-tailed distribution in an i.i.d. fashion. This implies that a small number of columns will have significantly larger norms compared to the others; hence, the norm of the overall weight matrix will be determined by such columns Gribonval et al. (2012). As a result, the majority of the columns can be removed (i.e., set to zero), which we refer to as compressibility.
To validate our theory, we conduct experiments on single-hidden-layer neural networks on different datasets. Our results show that, even with a minor modification to SGD (i.e., injecting heavy-tailed noise), the proposed approach can achieve compressibility with a negligible computational overhead and with a slight compromise in the training and test error. For instance, on a classification task with the MNIST dataset, when we set \(n=10\)K, with vanilla SGD, we obtain a test accuracy of 94.69%, whereas with the proposed approach, we can remove 44% of the columns of the weight matrix, while maintaining a test accuracy of 94.04%. We provide all the proofs in the appendix.
## 2 Preliminaries and Technical Background
**Notation.** For a vector \(u\in\mathbb{R}^{d}\), denote by \(\|u\|\) its Euclidean norm, and by \(\|u\|_{p}\) its \(\ell_{p}\) norm. For a function \(f\in C(\mathbb{R}^{d_{1}},\mathbb{R}^{d_{2}})\), denote by \(\|f\|_{\infty}:=\sup_{x\in\mathbb{R}^{d_{1}}}\|f(x)\|\) its \(L^{\infty}\) norm. For a family of \(n\) (or infinity) vectors, the indexing \(\cdot^{i,n}\) denotes the \(i\)-th vector in the family. In addition, for random variables, \(\stackrel{{\rm(d)}}{{=}}\) means equality in distribution, and the space of probability measures on \(\mathbb{R}^{d}\) is denoted by \(\mathcal{P}(\mathbb{R}^{d})\). For a matrix \(A\in\mathbb{R}^{d_{1}\times d_{2}}\), its Frobenius norm is denoted by \(\|A\|_{F}=\sqrt{\sum_{i=1}^{d_{1}}\sum_{j=1}^{d_{2}}|a_{i,j}|^{2}}\). Without specifically mentioning, \(\mathbb{E}\) denotes the expectation over all the randomness taken into consideration.
### Alpha-stable processes
A random variable \(X\in\mathbb{R}^{d}\) is called \(\alpha\)_-stable_ with the stability parameter \(\alpha\in(0,2]\), if \(X_{1}\), \(X_{2}\), \(\ldots\) are independent copies of \(X\), then \(n^{-1/\alpha}\sum_{j=1}^{n}X_{j}\stackrel{{\rm(d)}}{{=}}X\) for all \(n\geq 1\)Samoradnitsky (2017). Stable distributions appear as the limiting distribution in the generalized central limit theorem (CLT) Gnedenko and Kolmogorov (1954). In the one-dimensional case (\(d=1\)), we call the variable \(X\) a symmetric \(\alpha\)-stable random variable if its characteristic function is of the following form: \(\mathbb{E}[\exp(i\omega X)]=\exp(-|\omega|^{\alpha})\) for \(\omega\in\mathbb{R}\).
For symmetric \(\alpha\)-stable distributions, the case \(\alpha=2\) corresponds to the Gaussian distribution, while \(\alpha=1\) corresponds to the Cauchy distribution. An important property of \(\alpha\)-stable distributions is that in the case \(\alpha\in(1,2)\), the \(p\)-th moment of an \(\alpha\)-stable random variable is finite if and only if \(p<\alpha\); hence, the distribution is heavy-tailed. In particular, \(\mathbb{E}[|X|]<\infty\) and \(\mathbb{E}[|X|^{2}]=\infty\), which can be used to model phenomena with heavy-tailed observations.
There exist different types of \(\alpha\)-stable random vectors in \(\mathbb{R}^{d}\). In this study we will be interested in the following three variants, whose characteristic functions (for \(u\in\mathbb{R}^{d}\)) are given as follows:
* **Type-I.** Let \(Z\in\mathbb{R}\) be a symmetric \(\alpha\)-stable random variable. We then construct the random vector \(X\) such that all the coordinates of \(X\) are equated to \(Z\). In other words, \(X=\mathbf{1}_{d}Z\), where \(\mathbf{1}_{d}\in\mathbb{R}^{d}\) is a vector of ones. With this choice, \(X\) admits the following characteristic function: \(\mathbb{E}\left[\exp(i\langle u,X\rangle)\right]=\exp(-|\langle u,\mathbf{1}_{d}\rangle|^{\alpha})\);
* **Type-II.**\(X\) has i.i.d. coordinates, such that each component of \(X\) is a symmetric \(\alpha\)-stable random variable in \(\mathbb{R}\). This choice yields the following characteristic function: \(\mathbb{E}\left[\exp(i\langle u,X\rangle)\right]=\exp(-\sum_{i=1}^{d}|u_{i}|^{\alpha})\);
* **Type-III.**\(X\) is a rotationally invariant \(\alpha\)-stable random vector with the characteristic function \(\mathbb{E}\left[\exp(i\langle u,X\rangle)\right]=\exp(-\|u\|^{\alpha})\). Note that the Type-II and Type-III noises reduce to a Gaussian distribution when \(\alpha=2\) (i.e., the characteristic function becomes \(\exp(-\|u\|^{2})\)). Similar to the fact that stable distributions extend the Gaussian distribution, we can define a more general random process, called the _\(\alpha\)-stable Levy process_, that extends the Brownian motion. Formally, \(\alpha\)-stable processes are stochastic processes \((\mathrm{L}_{t}^{\alpha})_{t\geq 0}\) with independent and stationary \(\alpha\)-stable increments, defined as follows:
* \(\mathrm{L}_{0}^{\alpha}=0\) almost surely,
* For any \(0\leq t_{0}<t_{1}<\cdots<t_{N}\), the increments \(\mathrm{L}_{t_{n}}^{\alpha}-\mathrm{L}_{t_{n-1}}^{\alpha}\) are independent,
* For any \(0\leq s<t\), the difference \(\mathrm{L}_{t}^{\alpha}-\mathrm{L}_{s}^{\alpha}\) and \((t-s)^{1/\alpha}\mathrm{L}_{1}^{\alpha}\) have the same distribution,
* \(\mathrm{L}_{t}^{\alpha}\) is stochastically continuous, i.e. for any \(\delta>0\) and \(s\geq 0\), \(\mathbb{P}(\|\mathrm{L}_{t}^{\alpha}-\mathrm{L}_{s}^{\alpha}\|>\delta)\to 0\) as \(t\to s\).

To fully characterize an \(\alpha\)-stable process, we further need to specify the distribution of \(\mathrm{L}_{1}^{\alpha}\). Along with the above properties, the choice for \(\mathrm{L}_{1}^{\alpha}\) will fully determine the process. For this purpose, we will again consider the previous three types of \(\alpha\)-stable vectors: we will call the process \(\mathrm{L}_{t}^{\alpha}\) a Type-I process if \(\mathrm{L}_{1}^{\alpha}\) is a Type-I \(\alpha\)-stable random vector. We define the Type-II and Type-III processes analogously. Note that, when \(\alpha=2\), Type-II and Type-III processes reduce to the Brownian motion. For notational clarity, we will occasionally drop the index \(\alpha\) and denote the process by \(\mathrm{L}_{t}\).
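To make the three constructions concrete, the following sketch draws samples of each noise type with SciPy's stable sampler; the Type-III draw uses the standard sub-Gaussian representation (a Gaussian vector scaled by the square root of a positive \((\alpha/2)\)-stable amplitude), written here only up to a scale constant, so the exact characteristic-function scaling is an assumption left implicit.

```python
import numpy as np
from scipy.stats import levy_stable

def stable_noise(alpha, d, kind, rng):
    """Draw one d-dimensional alpha-stable vector of Type-I, II, or III."""
    if kind == 'I':    # all coordinates equal one symmetric scalar stable variable
        return np.full(d, levy_stable.rvs(alpha, 0.0, random_state=rng))
    if kind == 'II':   # i.i.d. symmetric stable coordinates
        return levy_stable.rvs(alpha, 0.0, size=d, random_state=rng)
    if kind == 'III':  # rotationally invariant: sqrt(A) * Gaussian, A positive (alpha/2)-stable
        A = levy_stable.rvs(alpha / 2.0, 1.0, random_state=rng)   # totally skewed, hence positive
        return np.sqrt(np.abs(A)) * rng.standard_normal(d)        # isotropic, up to a scale constant
    raise ValueError(f"unknown kind: {kind}")

rng = np.random.default_rng(0)
for kind in ('I', 'II', 'III'):
    print(kind, np.round(stable_noise(1.8, 5, kind, rng), 3))
```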
### Compressibility of heavy-tailed processes
One interesting property of heavy-tailed distributions in the one-dimensional case is that they exhibit a certain compressibility property. Informally, if we consider a sequence of i.i.d. random variables coming from a heavy-tailed distribution, a small portion of these variables will likely have a very large magnitude due to the heaviness of the tails, and they will dominate all the other variables in terms of magnitude Nair et al. (2022). Therefore, if we only keep this small number of variables with large magnitude, we can 'compress' (in a lossy way) the whole sequence of random variables by representing it with this small subset. Concurrently, Amini et al. (2011); Gribonval et al. (2012) provided formal proofs of these explanations. Formally, Gribonval et al. (2012) characterized the family of probability distributions whose i.i.d. realizations are compressible. They introduced the notion of \(\ell_{p}\)-compressibility, defined in terms of the error made after pruning a fixed portion of small (in magnitude) elements of an i.i.d. sequence whose common distribution has diverging \(p\)-th order moments. More precisely, let \(X_{n}=(x_{1},\ldots,x_{n})\) be a sequence of i.i.d. random variables such that \(\mathbb{E}\left[|x_{1}|^{\alpha}\right]=\infty\) for some \(\alpha\in\mathbb{R}_{+}\). Then, for all \(p\geq\alpha\) and \(0<\kappa\leq 1\), denoting by \(X_{n}^{(\kappa n)}\) the \(\lfloor\kappa n\rfloor\) largest ordered statistics2 of \(X_{n}\), the following asymptotic on the relative compression error holds almost surely:
Footnote 2: In other words, \(X_{n}^{(\kappa n)}\) is obtained by keeping only the largest (in magnitude) \(\kappa n\) elements of \(X_{n}\) and setting all the other elements to \(0\).
\[\lim_{n\to\infty}\frac{\|X_{n}^{(\kappa n)}-X_{n}\|_{p}}{\|X_{n}\|_{p}}=0\]
Building upon this fact, Barsbey et al. (2021) proposed structural pruning of neural networks (the procedure described in Figure 1) by assuming that the network weights provided by SGD will be asymptotically independent. In this study, instead of making this assumption, we will directly prove that the network weights will be asymptotically independent in the two-layer neural network setting with additive heavy-tailed noise injections to SGD.
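Before setting up the model, the following small experiment illustrates the phenomenon numerically under assumed distributions: keeping only the largest 10% of entries of an i.i.d. standard Cauchy sequence (for which \(\mathbb{E}|x_{1}|=\infty\)) yields a near-zero relative \(\ell_{2}\) error, while the same pruning of a Gaussian sequence does not.

```python
import numpy as np

def relative_compression_error(x, kappa, p=2):
    # ||x - x_(kappa*n)||_p / ||x||_p after keeping the floor(kappa*n) largest-|.| entries
    keep = np.argsort(np.abs(x))[-int(np.floor(kappa * x.size)):]
    xk = np.zeros_like(x)
    xk[keep] = x[keep]
    return np.linalg.norm(xk - x, ord=p) / np.linalg.norm(x, ord=p)

rng = np.random.default_rng(0)
n, kappa = 100_000, 0.1
cauchy = rng.standard_cauchy(n)    # heavy-tailed: E|x| = infinity
gauss = rng.standard_normal(n)     # light-tailed
print("Cauchy  :", relative_compression_error(cauchy, kappa))   # close to 0
print("Gaussian:", relative_compression_error(gauss, kappa))    # bounded away from 0
```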
## 3 Problem Setting and the Main Result
We consider a single-hidden-layer overparametrized network of \(n\) units and use the setup provided in De Bortoli et al. (2020). Our goal is to minimize the expected loss in a supervised learning regime, where each data point \(z=(x,y)\) is distributed according to \(\pi(\mathrm{d}x,\mathrm{d}y)\);3 the feature \(x\) lies in \(\mathcal{X}\subset\mathbb{R}^{d}\) and the label \(y\) in \(\mathcal{Y}\). We denote by \(\theta^{i,n}\in\mathbb{R}^{p}\) the parameter for the \(i\)-th unit, and the parametrized model is denoted by \(h_{x}:\mathbb{R}^{p}\rightarrow\mathbb{R}^{l}\). The mean-field network is the average over the models for the \(n\) units:
Footnote 3: Note that for finite datasets, \(\pi\) can be chosen as a measure supported on finitely many points.
\[f_{\Theta^{n}}(x)=\frac{1}{n}\sum_{i=1}^{n}h_{x}(\theta^{i,n}),\]
where \(\Theta^{n}=(\theta^{i,n})_{i=1}^{n}\in\mathbb{R}^{p\times n}\) denotes the collection of parameters in the network and \(x\in\mathcal{X}\) is the feature variable for the data point. In particular, the mean-field network corresponds to a two-layer neural network in which the weights of the second layer are fixed to \(1/n\) and \(\Theta^{n}\) contains the parameters of the first layer. While this model is less realistic than the models used in practice, we believe that it is desirable from a theoretical point of view, and this restriction can be circumvented upon replacing \(h_{x}(\theta^{i,n})\) by \(h_{x}(c^{i,n},\theta^{i,n})=c^{i,n}h_{x}(\theta^{i,n})\), where \(c^{i,n}\) and \(\theta^{i,n}\) are weights corresponding to the different layers. However, in order to obtain results similar to ours in this setup, stronger assumptions are inevitable and the proof would be more involved; this is left for future work.
Given a loss function \(\ell:\mathbb{R}^{l}\times\mathcal{Y}\rightarrow\mathbb{R}^{+}\), the goal (for each \(n\)) is to minimize the expected loss
\[R(\Theta^{n})=\mathbb{E}_{(x,y)\sim\pi}\left[\ell\left(f_{\Theta^{n}}(x),y \right)\right]. \tag{1}\]
One of the most popular approaches to minimize this loss is the stochastic gradient descent (SGD) algorithm. In this study, we consider a simple modification of SGD, where we inject a stable noise vector into the iterates at each iteration. For notational clarity, we will describe the algorithm and develop the theory for gradient descent, where we will assume that the algorithm has access to the true gradient \(\nabla R\) at every iteration. However, since we are already injecting a heavy-tailed noise with _infinite variance_, our techniques can be adapted to handle the stochastic gradient noise (under additional assumptions, e.g., De Bortoli et al. (2020)), which typically has a milder behavior compared to the \(\alpha\)-stable noise4.
Footnote 4: In Simsekli et al. (2019) the authors argued that the stochastic gradient noise in neural networks can be modeled by using stable distributions. Under such an assumption, the effect of the stochastic gradients can be directly incorporated into \(\mathrm{L}_{t}^{\alpha}\).
Let us set the notation for the proposed algorithm. Let \(\hat{\theta}_{0}^{i,n}\), \(i=1,\ldots,n\), be the initial values of the iterates, which are \(n\) random variables in \(\mathbb{R}^{d}\) distributed independently according to a given initial probability distribution \(\mu_{0}\). Then, we consider the gradient descent updates
with stepsize \(\eta n\), perturbed by i.i.d. \(\alpha\)-stable noises \(\sigma\cdot\eta^{1/\alpha}X_{k}^{i,n}\) for each unit \(i=1,\ldots,n\) and some \(\sigma>0\):
\[\begin{cases}\hat{\theta}_{k+1}^{i,n}=\hat{\theta}_{k}^{i,n}-\eta n\left[ \partial_{\theta^{i,n}}R(\Theta^{n})\right]+\sigma\cdot\eta^{1/\alpha}X_{k}^{ i,n}\\ \hat{\theta}_{0}^{i,n}\sim\mu_{0}\in\mathcal{P}(\mathbb{R}^{d}),\end{cases} \tag{2}\]
where the scaling factor \(\eta^{1/\alpha}\) in front of the stable noise enables the discrete dynamics of the system to homogenize to SDEs as \(\eta\to 0\). At this stage, we do not have to determine which type of stable noise (e.g., Type-I, II, or III) we shall consider, as they all satisfy the requirements of our theory. However, our empirical findings will illustrate that the choice affects the overall performance.
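For concreteness, a minimal sketch of the recursion (2) is given below for a mean-field network with \(h_{x}(\theta)=\tanh(\langle\theta,x\rangle)\), a quadratic loss, and Type-II noise; the data, model choice, and hyperparameters are illustrative assumptions and do not correspond to our exact experimental configuration.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
n, d, eta, sigma, alpha = 256, 2, 1e-4, 0.1, 1.8

# Toy data: scalar regression with features in R^d (illustrative).
X = rng.standard_normal((512, d))
y = np.tanh(X @ np.array([1.0, -1.0]))

theta = rng.standard_normal((n, d))      # rows are the units theta^{i,n}, initialized i.i.d.

def f(theta, X):
    # Mean-field network: f(x) = (1/n) sum_i tanh(<theta_i, x>)
    return np.tanh(X @ theta.T).mean(axis=1)

for k in range(1000):
    pred = f(theta, X)
    # For squared loss, d_1 ell = pred - y; grad below equals n * dR/dtheta_i.
    err = (pred - y)[:, None]                              # (batch, 1)
    sech2 = 1.0 - np.tanh(X @ theta.T) ** 2                # (batch, n)
    grad = (err * sech2).T @ X / X.shape[0]                # (n, d)
    noise = levy_stable.rvs(alpha, 0.0, size=(n, d), random_state=rng)
    theta += -eta * grad + sigma * eta ** (1.0 / alpha) * noise   # recursion (2)
```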
We now state the assumptions that will imply our theoretical results. The following assumptions are similar to (De Bortoli et al., 2020, Assumption A1).
**Assumption 1**.:
* Regularity of the model: for each \(x\in\mathcal{X}\), the function \(h_{x}:\mathbb{R}^{p}\to\mathbb{R}^{l}\) is two-times differentiable, and there exists a function \(\Psi:\mathcal{X}\to\mathbb{R}_{+}\) such that for any \(x\in\mathcal{X}\), \[\|h_{x}(\cdot)\|_{\infty}+\|\nabla h_{x}(\cdot)\|_{\infty}+\|\nabla^{2}h_{x}( \cdot)\|_{\infty}\leq\Psi(x).\]
* Regularity of the loss function: there exists a function \(\Phi:\mathcal{Y}\to\mathbb{R}_{+}\) such that \[\|\partial_{1}\ell(\cdot,y)\|_{\infty}+\|\partial_{1}^{2}\ell(\cdot,y)\|_{ \infty}\leq\Phi(y)\]
* Moment bounds on \(\Phi(\cdot)\) and \(\Psi(\cdot)\): there exists a positive constant \(B\) such that \[\mathbb{E}_{(x,y)\sim\pi}[\Psi^{2}(x)(1+\Phi^{2}(y))]\leq B^{2}.\]
Let us remark that these are rather standard smoothness assumptions that have been made in the mean field literature Mei et al. (2018, 2019) and are satisfied by several smooth activation functions, including the sigmoid and hyper-tangent functions.
We now proceed to our main result. Let \(\hat{\Theta}_{k}^{n}\in\mathbb{R}^{p\times n}\) be the concatenation of all parameters \(\hat{\theta}_{k}^{i,n}\), \(i=1,\ldots,n\), obtained by the recursion (2) after \(k\) iterations. We will now compress \(\hat{\Theta}_{k}^{n}\) by pruning its columns with small norms. More precisely, fix a compression ratio \(\kappa\in(0,1)\) and compute the norms of the columns of \(\hat{\Theta}_{k}^{n}\), i.e., \(\|\hat{\theta}_{k}^{i,n}\|\). Then, keep the \(\lfloor\kappa n\rfloor\) columns with the largest norms and set all the other columns entirely to zero. Finally, denote by \(\hat{\Theta}_{k}^{(\kappa n)}\in\mathbb{R}^{p\times n}\) the pruned version of \(\hat{\Theta}_{k}^{n}\).
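In code, this pruning procedure and the relative Frobenius error bounded in Theorem 3.1 below read as follows; the helper also searches for the largest prunable fraction under a fixed error tolerance, which is how the pruning ratios of Section 5 are computed. The example matrices are illustrative.

```python
import numpy as np

def prune_columns(theta, kappa):
    # Keep the floor(kappa*n) columns of theta (p x n) with the largest norms; zero the rest.
    k = max(int(np.floor(kappa * theta.shape[1])), 1)
    keep = np.argsort(np.linalg.norm(theta, axis=0))[-k:]
    pruned = np.zeros_like(theta)
    pruned[:, keep] = theta[:, keep]
    return pruned

def relative_error(theta, kappa):
    return np.linalg.norm(prune_columns(theta, kappa) - theta) / np.linalg.norm(theta)

def max_pruning_ratio(theta, tol=0.1, grid=100):
    # Largest fraction of columns that can be zeroed while the relative
    # Frobenius error stays below tol.
    for kappa in np.linspace(1.0 / grid, 1.0, grid):
        if relative_error(theta, kappa) <= tol:
            return 1.0 - kappa
    return 0.0

# Heavy-tailed columns are far more prunable than Gaussian ones.
rng = np.random.default_rng(0)
heavy = rng.standard_cauchy((32, 2000))
light = rng.standard_normal((32, 2000))
print("heavy-tailed:", max_pruning_ratio(heavy), " gaussian:", max_pruning_ratio(light))
```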
**Theorem 3.1**.: _Suppose that Assumption 1 holds. For any fixed \(t>0\), \(\kappa\in(0,1)\) and \(\epsilon>0\) sufficiently small, with probability \(1-\epsilon\), there exists \(N\in\mathbb{N}_{+}\) such that for all \(n\geq N\) and \(\eta\) such that \(\eta\leq n^{-\alpha/2-1}\), the following upper bound on the relative compression error for the parameters holds:_
\[\frac{\left\|\hat{\Theta}_{\lfloor t/\eta\rfloor}^{(\kappa n)}-\hat{\Theta}_{ \lfloor t/\eta\rfloor}^{n}\right\|_{F}}{\left\|\hat{\Theta}_{\lfloor t/\eta \rfloor}^{n}\right\|_{F}}\leq\epsilon.\]
This bound shows that, thanks to the heavy-tailed noise injections, the weight matrices will be compressible at _any_ compression rate, as long as the network is sufficiently over-parametrized and the step-size is sufficiently small. We shall note that this bound also enables us to directly obtain a generalization bound by invoking (Barsbey et al., 2021, Theorem 4).
## 4 Proof Strategy and Intermediate Results
In this section, we gather the main technical contributions with the purpose of demonstrating Theorem 3.1. We begin by rewriting (2) in the following form:
\[\begin{cases}\hat{\theta}^{i,n}_{k+1}-\hat{\theta}^{i,n}_{k}=\eta b(\hat{\theta }^{i,n}_{k},\hat{\mu}^{n}_{k})+\sigma\cdot\eta^{1/\alpha}X^{i,n}_{k}\\ \hat{\theta}^{i,n}_{0}\sim\mu_{0}\in\mathcal{P}(\mathbb{R}^{d}),\end{cases} \tag{3}\]
where \(\hat{\mu}^{n}_{k}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\hat{\theta}^{i,n}_{k}}\) is the empirical distribution of the parameters at iteration \(k\) and \(\delta\) is the Dirac measure, and the drift is given by \(b(\theta^{i,n}_{k},\mu^{n}_{k})=-\mathbb{E}[\partial_{1}\ell(\mu^{n}_{k}(h_{x}(\cdot)),y)\nabla h_{x}(\theta^{i,n}_{k})]\), where \(\partial_{1}\) denotes the partial derivative with respect to the first argument and
\[\mu^{n}_{k}(h_{x}(\cdot)):=\int h_{x}(\theta)\mathrm{d}\mu^{n}_{k}(\theta)=\frac{1}{n}\sum_{i=1}^{n}h_{x}(\theta^{i,n}_{k})=f_{\Theta^{n}_{k}}(x).\]
It is easy to check that \(b(\theta^{i,n}_{k},\mu^{n}_{k})=-n\partial_{\theta^{i,n}}R(\Theta^{n})\). By looking at the dynamics from this perspective, we can treat the evolution of the parameters as a system of evolving probability distributions \(\mu^{n}_{k}\): the empirical distribution of the parameters during the training process will converge to a limit as \(\eta\) goes to \(0\) and \(n\) goes to infinity.
We start by linking the recursion (2) to its limiting case where \(\eta\to 0\). The limiting dynamics can be described by the following system of SDEs:
\[\begin{cases}\mathrm{d}\theta^{i,n}_{t}=b(\theta^{i,n}_{t},\mu^{n}_{t}) \mathrm{d}t+\sigma\mathrm{d}\mathrm{L}^{i,n}_{t}\\ \theta^{i,n}_{0}\sim\mu_{0}\in\mathcal{P}(\mathbb{R}^{d}),\end{cases} \tag{4}\]
where \(\mu^{n}_{t}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\theta^{i,n}_{t}}\) and \((\mathrm{L}^{i,n}_{t})_{t\geq 0}\) are independent \(\alpha\)-stable processes such that \(\mathrm{L}^{i,n}_{1}\stackrel{{(\mathrm{d})}}{{=}}X^{i,n}_{1}\). We can now see the original recursion (2) as an Euler discretization of (4), and we have the following strong uniform error estimate for the discretization.
**Theorem 4.1**.: _Let \((\theta^{i,n}_{t})_{t\geq 0}\) be the solutions to the SDE (4) and \((\hat{\theta}^{i,n}_{k})_{k\in\mathbb{N}_{+}}\) be given by the SGD recursion (2) with the same initial conditions \(\xi^{i,n}\) and \(\alpha\)-stable Levy noises \(\mathrm{L}^{i,n}_{\cdot}\), \(i=1,\ldots,n\). Under Assumption 1, for any \(T>0\), if \(\eta k\leq T\), there exists a constant \(C\) depending on \(B,T,\alpha\) such that the approximation error satisfies_
\[\mathbb{E}\left[\sup_{i\leq n}\|\theta^{i,n}_{\eta k}-\hat{\theta}^{i,n}_{k}\| \right]\leq C(\eta n)^{1/\alpha}.\]
In comparison to the standard error estimates in the Euler-Maruyama scheme concerning only the stepsize \(\eta\), the additional \(n\)-dependence is due to the fact that here we consider the supremum of the approximation error over all \(i\leq n\), which involves the expectation of the supremum of the modulus of \(n\) independent \(\alpha\)-stable random variables.
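The statement can be checked numerically by coupling a fine and a coarse Euler scheme through the same stable increments, since by \(\alpha\)-stability a coarse increment is exactly the sum of the fine increments it spans. The sketch below does this with a bounded stand-in drift, which is an illustrative simplification of the mean-field drift \(b\).

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, T, d = 1.8, 1.0, 2

def drift(theta):
    # bounded, smooth stand-in for the mean-field drift b(theta, mu)
    return -np.tanh(theta)

def euler_path(eta, increments):
    # Euler scheme: theta_{k+1} = theta_k + eta * b(theta_k) + dL_k
    theta = np.zeros(d)
    for dL in increments:
        theta = theta + eta * drift(theta) + dL
    return theta

eta_fine = 1e-4
n_fine = int(T / eta_fine)
dL_fine = eta_fine ** (1 / alpha) * levy_stable.rvs(alpha, 0.0, size=(n_fine, d), random_state=rng)
theta_fine = euler_path(eta_fine, dL_fine)

# Coarse increments are sums of the fine ones (alpha-stability makes the coupling exact).
for ratio in (10, 100):
    dL_coarse = dL_fine.reshape(n_fine // ratio, ratio, d).sum(axis=1)
    err = np.linalg.norm(euler_path(eta_fine * ratio, dL_coarse) - theta_fine)
    print(f"eta = {eta_fine * ratio:.0e}: strong error = {err:.4f}")
```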
Next, we start from the system (4) and consider the case where \(n\to\infty\). In this limit, we obtain the following McKean-Vlasov-type stochastic differential equation:
\[\begin{cases}\mathrm{d}\theta^{\infty}_{t}=b(\theta^{\infty}_{t},[\theta^{ \infty}_{t}])\mathrm{d}t+\mathrm{d}\mathrm{L}_{t}\\ [\theta^{\infty}_{0}]=\mu\in\mathcal{P}(\mathbb{R}^{d}),\end{cases} \tag{5}\]
where \((\mathrm{L}_{t})_{t\geq 0}\) is an \(\alpha\)-stable process and \([\theta_{t}^{\infty}]\) denotes the distribution of \(\theta_{t}^{\infty}\). The existence and uniqueness of a strong solution to (5) are given in Cavallazzi (2023). Moreover, for any positive \(T\), \(\mathbb{E}\left[\sup_{t\leq T}\|\theta_{t}^{\infty}\|^{\alpha}\right]<+\infty.\) This SDE with measure-dependent coefficients turns out to be a useful mechanism for analyzing the behavior of neural networks and provides insights into the effects of noise on the learning dynamics.
In this step, we link the system (4) to its limit (5) through a strong uniform propagation of chaos result for the weights. The next result shows that, when \(n\) is sufficiently large, the trajectories of the weights asymptotically behave as i.i.d. solutions to (5).
**Theorem 4.2**.: _Following the existence and uniqueness of strong solutions to (4) and (5), let \((\theta_{t}^{i,\infty})_{t\geq 0}\) be the solutions to the McKean-Vlasov equation (5) and \((\theta_{t}^{i,n})_{t\geq 0}\) be the solutions to (4) associated with the same realization of the \(\alpha\)-stable processes \((\mathrm{L}_{t}^{i})_{t\geq 0}\) for each \(i\). Suppose that the \((\mathrm{L}_{t}^{i})_{t\geq 0}\) are independent. Then there exists a constant \(C\) depending on \(T,B\) such that_
\[\mathbb{E}\left[\sup_{t\leq T}\sup_{i\leq n}|\theta_{t}^{i,n}-\theta_{t}^{i, \infty}|\right]\leq\frac{C}{\sqrt{n}}\]
It is worth mentioning that the \(O(1/\sqrt{n})\) rate here is better, when \(\alpha<2\), than the \(O(1/n^{\alpha})\) rate in the literature on the propagation of chaos Cavallazzi (2023) under classical Lipschitz assumptions on the coefficients of the SDEs. The reason is that here, thanks to Assumption 1, we can take into account the specific structure of one-hidden-layer neural networks.
Finally, we are interested in the distributional properties of the McKean-Vlasov equation (5). The following result establishes that the marginal distributions of (5) will have diverging second-order moments, hence, they will be heavy-tailed.
**Theorem 4.3**.: _Let \((\mathrm{L}_{t})_{t\geq 0}\) be an \(\alpha\)-stable process. For any time \(t\), let \(\theta_{t}^{\infty}\) be the solution to (5) with an initialization \(\theta_{0}^{\infty}\) that is independent of \((\mathrm{L}_{t})_{t\geq 0}\) and satisfies \(\mathbb{E}\left[\|\theta_{0}^{\infty}\|\right]<\infty\). Then the following holds:_
\[\mathbb{E}\left[\|\theta_{t}^{\infty}\|^{2}\right]=+\infty.\]
We remark that the result is weak in the sense that the details of the tails of \(\theta_{t}^{\infty}\) with respect to \(\alpha\) and \(t\) remain implicit. However, it suffices for our compressibility result in Theorem 3.1.
Now, having proved all the necessary ingredients, Theorem 3.1 is obtained by accumulating the error bounds proven in Theorems 4.1 and 4.2, and applying (Gribonval et al., 2012, Proposition 1) along with Theorem 4.3.
## 5 Empirical Results
In this section, we validate our theory with empirical results. Our goal is to investigate the effects of the heavy-tailed noise injection in SGD in terms of compressibility and the train/test performance. We consider a single-hidden-layer neural network with ReLU activations and the cross-entropy loss, applied to classification tasks. We chose the Electrocardiogram (ECG) dataset Yanping and Eamonn, as well as the MNIST and CIFAR10 datasets. By slightly stretching the scope of our theoretical framework, we also train the weights of the second layer instead of fixing them to \(1/n\).
For SGD, we fix the batch-size to one tenth of the number of training data points. The step-size is chosen to be small enough to approximate the continuous dynamics given by the McKean-Vlasov equation, in order to stay close to the theory, but not too small, so that SGD converges in a reasonable amount of time. As for the noise level \(\sigma\), we tried a range of values for each dataset and each \(n\), and chose the largest \(\sigma\) such that the perturbed SGD converges. Intuitively, we can expect that a smaller \(\alpha\), with heavier tails, will lead to a lower relative compression error. However, it does not guarantee a better test performance: one has to tune the parameters appropriately to achieve a favorable trade-off between the compression error and the test performance. We repeat all the experiments 5 times and report the average and the standard deviation. For the noiseless case (vanilla SGD), the results of the different runs were almost identical; hence, we did not report the standard deviations. All the experimentation details are given in Appendix C and we present additional experimental results in Appendix D.
In our first experiment, we consider the ECG5000 dataset and choose the Type-I noise. Our goal is to investigate the effects of \(\alpha\) and \(n\) on the performance. Tables 1-2 illustrate the results. Here, for different cases, we monitor the training and test accuracies (over 1.00), the pruning ratio (the percentage of the weight matrix that can be pruned while keeping 90% of the norm of the original matrix5), and the training/test accuracies after pruning (a.p.) the network with the specified pruning ratio.
Footnote 5: The pruning ratio plays the same role as \(\kappa\): we fix the compression error to 0.1 and report the largest fraction of columns that can be pruned while satisfying this error threshold.
The results show that, even for a moderate number of neurons \(n=2\)K, the heavy-tailed noise results in a significant improvement in the compression capability of the neural network. For \(\alpha=1.9\), we can see that the pruning ratio increases to 39%, whereas vanilla SGD can only be compressed at a rate of 11%. Besides, the compromise in the test accuracy is almost negligible: the proposed approach achieves 95.3%, whereas vanilla SGD achieves 95.7% accuracy. We also observe that decreasing \(\alpha\) (i.e., increasing the heaviness of the tails) results in a better compression rate; yet, there is a tradeoff between this rate and the test performance. In Table 2, we repeat the same experiment for \(n=10\)K. We observe that the
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\alpha\) & Train Acc. & Test Acc. & Pruning Ratio & Train Acc. a.p. & Test Acc. a.p. \\ \hline \hline no noise & 0.974 & 0.957 & 11.45 & 0.97 & 0.954 \\ \hline
1.75 & \(0.97\pm 0.007\) & \(0.955\pm 0.003\) & \(48.07\pm 7.036\) & \(0.944\pm 0.03\) & \(0.937\pm 0.022\) \\ \hline
1.8 & \(0.97\pm 0.007\) & \(0.955\pm 0.003\) & \(44.68\pm 5.4\) & \(0.95\pm 0.025\) & \(0.963\pm 0.016\) \\ \hline
1.9 & \(0.966\pm 0.008\) & \(0.959\pm 0.01\) & \(39.37\pm 2.57\) & \(0.962\pm 0.012\) & \(0.953\pm 0.005\) \\ \hline \end{tabular}
\end{table}
Table 1: ECG5000, Type-I noise, \(n=2\)K.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\alpha\) & Train Acc. & Test Acc. & Pruning Ratio & Train Acc. a.p. & Test Acc. a.p. \\ \hline \hline no noise & 0.978 & 0.963 & 11.46 & 0.978 & 0.964 \\ \hline
1.75 & \(0.978\pm 0.001\) & \(0.964\pm 0.001\) & \(52.59\pm 6.55\) & \(0.95\pm 0.03\) & \(0.954\pm 0.022\) \\ \hline
1.8 & \(0.978\pm 0.001\) & \(0.964\pm 0.001\) & \(52.59\pm 6.55\) & \(0.95\pm 0.03\) & \(0.954\pm 0.022\) \\ \hline
1.9 & \(0.978\pm 0.001\) & \(0.964\pm 0.001\) & \(40.85\pm 2.89\) & \(0.96\pm 0.021\) & \(0.958\pm 0.013\) \\ \hline \end{tabular}
\end{table}
Table 2: ECG5000, Type-I noise, \(n=10\)K.
previous conclusions become even clearer in this case, as our theory applies to large \(n\). For the case where \(\alpha=1.75\), we obtain a pruning ratio of \(52\%\) with a test accuracy of \(95.4\%\), whereas for vanilla SGD the ratio is only \(11\%\) and the original test accuracy is \(96.3\%\).
In our second experiment, we investigate the impact of the noise type. We set \(n=10\)K and use the same setting as in Table 2. Tables 3-4 illustrate the results. We observe that the choice of the noise type can make a significant difference in terms of both compressibility and accuracy. While the Type-III noise seems to obtain a similar accuracy when compared to Type-I, it achieves a worse compression rate. On the other hand, the behavior of the Type-II noise is perhaps more interesting: for \(\alpha=1.9\) it both increases the compressibility and achieves a better accuracy when compared to unpruned, vanilla SGD. However, we see that its behavior is much more volatile: the performance quickly degrades as we decrease \(\alpha\). From these comparisons, the Type-I noise seems to achieve a better tradeoff.
In our next experiment, we consider the MNIST dataset, set \(n=10\)K, and use the Type-I noise. Table 5 illustrates the results. Similar to the previous results, we observe that the injected noise has a visible benefit on compressibility. When \(\alpha=1.9\), our approach doubles the compressibility of vanilla SGD (from \(10\%\) to \(21\%\)), whereas the training and test accuracies remain almost unchanged. On the other hand, when we decrease \(\alpha\), we observe that the pruning ratio goes up to \(44\%\), while only compromising \(1\%\) of the test accuracy. To further illustrate this result, we pruned vanilla SGD by using this pruning ratio (\(44\%\)). In this case, the test accuracy of SGD drops down to \(92\%\), whereas our simple noising scheme achieves \(94\%\) test accuracy with the same pruning ratio.
Our last experiment is a negative result that might be useful for illustrating the limitations of our approach. In this case, we consider the CIFAR10 dataset, set \(n=5000\), and use the Type-I
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\alpha\) & Train Acc. & Test Acc. & Pruning Ratio & Train Acc. a.p. & Test Acc. a.p. \\ \hline \hline \(1.75\) & \(0.97\pm 0.007\) & \(0.957\pm 0.005\) & \(33.48\pm 7.33\) & \(0.969\pm 0.008\) & \(0.957\pm 0.011\) \\ \hline \(1.8\) & \(0.97\pm 0.007\) & \(0.956\pm 0.007\) & \(26.81\pm 4.72\) & \(0.963\pm 0.008\) & \(0.952\pm 0.008\) \\ \hline \(1.9\) & \(0.97\pm 0.005\) & \(0.955\pm 0.005\) & \(17.59\pm 1.56\) & \(0.968\pm 0.004\) & \(0.954\pm 0.96\) \\ \hline \end{tabular}
\end{table}
Table 4: ECG5000, Type-III noise, \(n=10\)K.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\alpha\) & Train Acc. & Test Acc. & Pruning Ratio & Train Acc. a.p. & Test Acc. a.p. \\ \hline \hline \(1.75\) & \(0.986\pm 0.003\) & \(0.982\pm 0.005\) & \(52.13\pm 27.78\) & \(0.865\pm 0.261\) & \(0.866\pm 0.251\) \\ \hline \(1.8\) & \(0.985\pm 0.003\) & \(0.980\pm 0.005\) & \(39.9\pm 21.55\) & \(0.971\pm 0.025\) & \(0.972\pm 0.023\) \\ \hline \(1.9\) & \(0.982\pm 0.003\) & \(0.976\pm 0.006\) & \(20.95\pm 6.137\) & \(0.982\pm 0.004\) & \(0.977\pm 0.006\) \\ \hline \end{tabular}
\end{table}
Table 3: ECG5000, Type-II noise, \(n=10\)K.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\alpha\) & Train Acc. & Test Acc. & Pruning Ratio & Train Acc. a.p. & Test Acc. a.p. \\ \hline \hline no noise & \(0.95\) & \(0.9487\) & \(10.59\) & \(0.9479\) & \(0.9476\) \\ \hline \(1.75\) & \(0.95\pm 0.0001\) & \(0.9454\pm 0.0005\) & \(44.42\pm 7.16\) & \(0.944\pm 0.0025\) & \(0.9409\pm 0.0019\) \\ \hline \(1.8\) & \(0.95\pm 0.0001\) & \(0.9457\pm 0.0007\) & \(34.49\pm 5.07\) & \(0.9453\pm 0.0015\) & \(0.9397\pm 0.0036\) \\ \hline \(1.9\) & \(0.95\pm 0.0001\) & \(0.9463\pm 0.0004\) & \(21.31\pm 1081\) & \(0.9478\pm 0.0008\) & \(0.9444\pm 0.0009\) \\ \hline \end{tabular}
\end{table}
Table 5: MNIST, Type-I noise, \(n=10\)K until reaching \(95\%\) training accuracy.
noise. We compute the pruning ratio and accuracies as before and illustrate the results in Table 6. We observe that the injected noise does not bring an advantage in this case: vanilla SGD achieves a better pruning ratio when compared to the case where \(\alpha=1.9\). Moreover, the noise injections result in a significant drop in the training accuracy, and the situation becomes even more prominent as we decrease \(\alpha\). This indicates that the injected noise can complicate the training process.
Following the arguments of Barsbey et al. (2021), we suspect that, in this case, vanilla SGD already exhibits some sort of heavy tails, so the additional noise might not be as beneficial as it was in the other cases. Although vanilla SGD can achieve similar compressibility in such cases, that regime is not easily controllable, and our paper is able to provide a more practical guideline for achieving compressibility along with theoretical guarantees.
## 6 Conclusion
We provided a methodological and theoretical framework for provably obtaining compressibility in mean-field neural networks. Our approach requires minimal modification to vanilla SGD and has the same computational complexity. By proving discretization error bounds and propagation of chaos results, we showed that the resulting algorithm is guaranteed to provide compressible parameters. We illustrated our approach on several experiments, where we showed that, in most cases, the proposed approach achieves a high compressibility ratio while only slightly compromising accuracy.
The limitations of our approach are as follows: (i) we consider mean-field networks; it would be of interest to generalize our results to more sophisticated architectures; (ii) we focused on compressibility, yet the noise injection also affects the train/test accuracy, so its effect on the training loss needs to be investigated to understand the bigger picture. Finally, due to the theoretical nature of our paper, it does not have a direct negative social impact.
## Acknowledgements
The authors thank Alain Durmus for fruitful discussions. U.S. is partially supported by the French government under management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA0001 (PRAIRIE 3IA Institute) and by the European Research Council Starting Grant DYNASTY - 101039676.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\alpha\) & Train Acc. & Test Acc. & Pruning Ratio & Train Acc. a.p. & Test Acc. a.p. \\ \hline \hline no noise & 0.9514 & 0.5636 & 25.1 & 0.8289 & 0.5214 \\ \hline
1.75 & 0.9503 & 0.5626 & 21.74 & 0.8196 & 0.52 \\ \hline
1.9 & 0.95 & 0.5755 & 16.56 & 0.8870 & 0.5641 \\ \hline \end{tabular}
\end{table}
Table 6: CIFAR10, Type-I noise, \(n=5\)K until reaching 95% training accuracy. |
2302.10289 | Tackling Shortcut Learning in Deep Neural Networks: An Iterative
Approach with Interpretable Models | We use concept-based interpretable models to mitigate shortcut learning.
Existing methods lack interpretability. Beginning with a Blackbox, we
iteratively carve out a mixture of interpretable experts (MoIE) and a residual
network. Each expert explains a subset of data using First Order Logic (FOL).
While explaining a sample, the FOL from biased BB-derived MoIE detects the
shortcut effectively. Finetuning the BB with Metadata Normalization (MDN)
eliminates the shortcut. The FOLs from the finetuned-BB-derived MoIE verify the
elimination of the shortcut. Our experiments show that MoIE does not hurt the
accuracy of the original BB and eliminates shortcuts effectively. | Shantanu Ghosh, Ke Yu, Forough Arabshahi, Kayhan Batmanghelich | 2023-02-20T20:25:41Z | http://arxiv.org/abs/2302.10289v9 | # Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat
###### Abstract
ML model design either starts with an interpretable model or a Blackbox and explains it post hoc. Blackbox models are flexible but difficult to explain, while interpretable models are inherently explainable. Yet, interpretable models require extensive ML knowledge and tend to be less flexible and underperforming than their Blackbox variants. This paper aims to blur the distinction between a post hoc explanation of a Blackbox and constructing interpretable models. Beginning with a Blackbox, we iteratively _carve out_ a mixture of interpretable experts (MoIE) and a _residual network_. Each interpretable model specializes in a subset of samples and explains them using First Order Logic (FOL), providing basic reasoning on concepts from the Blackbox. We route the remaining samples through a flexible residual. We repeat the method on the residual network until all the interpretable models explain the desired proportion of data. Our extensive experiments show that our _route, interpret, and repeat_ approach (1) identifies a diverse set of instance-specific concepts with high concept completeness via MoIE without compromising in performance, (2) identifies the relatively "harder" samples to explain via residuals, (3) outperforms the interpretable by-design models by significant margins during test-time interventions, and (4) fixes the shortcut learned by the original Blackbox. The code for MoIE is publicly available at: [https://github.com/batmanlab/ICML-2023-Route-interpret-repeat](https://github.com/batmanlab/ICML-2023-Route-interpret-repeat).
## 1 Introduction
Model explainability is essential in high-stakes applications of AI, _e.g.,_ healthcare. While Blackbox models (_e.g.,_ Deep Learning) offer flexibility and modular design, post hoc explanation is prone to confirmation bias (Wan et al., 2022), lack of fidelity to the original model (Adebayo et al., 2018), and insufficient mechanistic explanation of the decision-making process (Rudin, 2019). Interpretable-by-design models overcome those issues but tend to be less flexible than Blackbox models and demand substantial expertise to design. Using a post hoc explanation or adopting an inherently interpretable model is a mutually exclusive decision to be made at the initial phase of AI model design. This paper blurs the line on that dichotomous model design.
The literature on post hoc explanations is extensive. This includes model attributions ((Simonyan et al., 2013; Selvaraju et al., 2017)), counterfactual approaches (Abid et al., 2021; Singla et al., 2019), and distillation methods (Alharbi et al., 2021; Cheng et al., 2020). Those methods either identify key input features that contribute the most to the network's output (Shrikumar et al., 2016), generate input perturbation to flip the network's output (Samek et al., 2016; Montavon et al., 2018), or estimate simpler functions to approximate the network output locally. Post hoc methods preserve the flexibility and performance of the Blackbox but suffer from a lack of fidelity and mechanistic explanation of the network output (Rudin, 2019). Without a mechanistic explanation, recourse to a model's undesirable behavior is unclear. Interpretable models are alternative designs to the Blackbox without many such drawbacks. For example, modern interpretable methods highlight human understandable _concepts_ that contribute to the downstream prediction.
Several families of interpretable models have existed for a long time, such as rule-based approaches and generalized additive models (Hastie and Tibshirani, 1987; Letham et al., 2015; Breiman et al., 1984). They primarily focus on tabular data. Such models for high-dimensional data (_e.g.,_ images) primarily rely on projecting to a lower-dimensional human-understandable _concept_ or _symbolic_ space (Koh et al., 2020) and predicting the output with an interpretable classifier. Despite their utility, the current State-Of-The-Art (SOTA) models are limited in design; for example, they do not model the
interaction between the concepts except for a few exceptions (Ciravegna et al., 2021; Barbiero et al., 2022), offering limited reasoning capabilities and robustness. Furthermore, if a portion of the samples does not fit the template design of the interpretable model, they do not offer any flexibility, compromising performance.
**Our contributions** We propose an interpretable method, aiming to achieve the best of both worlds: not sacrificing Blackbox performance, similar to post hoc explainability, while still providing actionable interpretation. We hypothesize that a Blackbox encodes several interpretable models, each applicable to a different portion of data. Thus, a single interpretable model may be insufficient to explain all samples. We construct a hybrid neuro-symbolic model by progressively _carving out_ a mixture of interpretable models and a _residual network_ from the given Blackbox. We coin the term _expert_ for each interpretable model, as they specialize over a subset of data. All the interpretable models are termed a "Mixture of Interpretable Experts" (MoIE). Our design identifies a subset of samples and _routes_ them through the interpretable models to explain the samples with FOL, providing basic reasoning on concepts from the Blackbox. The remaining samples are routed through a flexible residual network. On the residual network, we repeat the method until MoIE explains the desired proportion of data. We quantify the sufficiency of the identified concepts to explain the Blackbox's prediction using the concept completeness score (Yeh et al., 2019). Using FOL for interpretable models offers recourse when undesirable behavior is detected in the model. We provide an example of fixing shortcut learning by modifying the FOL. FOL can be used in human-model interaction (not explored in this paper). Our method is a divide-and-conquer approach, where the instances covered by the residual network need progressively more complicated interpretable models. Such insight can be used to inspect the data and the model further. Finally, our model allows an _unexplainable_ category of data, which current interpretable models do not.
## 2 Method
**Notation:** Assume we have a dataset \(\{\mathcal{X}\), \(\mathcal{Y}\), \(\mathcal{C}\}\), where \(\mathcal{X}\), \(\mathcal{Y}\), and \(\mathcal{C}\) are the input images, class labels, and human interpretable attributes, respectively. \(f^{0}:\mathcal{X}\rightarrow\mathcal{Y}\), is our pre-trained initial Blackbox model. We assume that \(f^{0}\) is a composition \(h^{0}\circ\Phi\), where \(\Phi:\mathcal{X}\rightarrow\mathbb{R}^{l}\) is the image embeddings and \(h^{0}:\mathbb{R}^{l}\rightarrow\mathcal{Y}\) is a transformation from the embeddings, \(\Phi\), to the class labels. We denote the learnable function \(t:\mathbb{R}^{l}\rightarrow\mathcal{C}\), projecting the image embeddings to the concept space. The concept space is the space spanned by the attributes \(\mathcal{C}\). Thus, function \(t\) outputs a scalar value representing a concept for each input image.
**Method Overview:** Figure 1 summarizes our approach. We iteratively carve out an interpretable model from the given Blackbox. Each iteration yields an interpretable model (the downward grey paths in Figure 1) and a residual (the straightforward black paths in Figure 1). We start with the initial Blackbox \(f^{0}\). At iteration \(k\), we distill the Blackbox from the previous iteration \(f^{k-1}\) into a neuro-symbolic interpretable model, \(g^{k}:\mathcal{C}\rightarrow\mathcal{Y}\). Our \(g\) is flexible enough to be any interpretable model (Yuksekgonul et al., 2022; Koh et al., 2020; Barbiero et al., 2022). The _residual_ \(r^{k}=f^{k-1}-g^{k}\) emphasizes the portion of \(f^{k-1}\) that \(g^{k}\) cannot explain. We then approximate \(r^{k}\) with \(f^{k}=h^{k}\circ\Phi\). \(f^{k}\) will be the Blackbox for the subsequent iteration and be explained by the respective interpretable model. A learnable gating mechanism, denoted by \(\pi^{k}:\mathcal{C}\rightarrow\{0,1\}\) (shown as the _selector_ in Figure 1) routes an input sample towards either \(g^{k}\) or \(r^{k}\). The thickness of the lines in Figure 1 represents the samples covered by the interpretable models (grey line) and the residuals (black line). With every iteration, the cumulative coverage of the interpretable models increases, but the residual decreases. We name our method _route, interpret_ and _repeat_.
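In code, this loop can be sketched as below; `train_selector_and_expert` and `fit_residual_head` are hypothetical callables standing in for the optimization problems of Equations (2)-(4) defined in the following subsections.

```python
def route_interpret_repeat(f0, data, K, train_selector_and_expert, fit_residual_head):
    # Iteratively carve experts g^1..g^K out of the blackbox; `f` always
    # holds the current residual blackbox f^k.
    f, selectors, experts = f0, [], []
    for k in range(K):
        pi_k, g_k = train_selector_and_expert(f, data, selectors)  # Eqs. (2)-(3)
        selectors.append(pi_k)
        experts.append(g_k)
        f = fit_residual_head(f, g_k, selectors, data)  # approximates r^k, Eq. (4)
    return selectors, experts, f  # f is the final residual
```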
### Neuro-Symbolic Knowledge Distillation
Knowledge distillation in our method involves 3 parts: (1) a series of trainable selectors, _routing_ each sample through the interpretable models and the residual networks, (2) a sequence of learnable neuro-symbolic interpretable models, each providing FOL explanations to _interpret_ the Blackbox, and (3) _repeating_ with Residuals for the samples that cannot be explained with their interpretable counterparts. We detail each component below.
Figure 1: Schematic view of _route, interpret_ and _repeat_. At iteration \(k\), the selector _routes_ each sample either towards the interpretable model \(g^{k}\) (to _interpret_) with probability \(\pi^{k}(.)\) or the residual \(r^{k}=f^{k-1}-g^{k}\) with probability \(1-\pi^{k}(.)\) (to _repeat_ in the further iterations). \(f^{k-1}\) is the Blackbox of the \((k-1)^{th}\) iteration. \(g^{k}\) generates FOL-based explanations for the samples it covers. Otherwise, the selector routes through the next step until it either goes through a subsequent interpretable model or reaches the last residual. Components in black and grey indicate the fixed and trainable modules in our model, respectively.
#### 2.1.1 The selector function
As the first step of our method, the selector \(\pi^{k}\)_routes_ the \(j^{th}\) sample through the interpretable model \(g^{k}\) or residual \(r^{k}\) with probability \(\pi^{k}(\mathbf{c_{j}})\) and \(1-\pi^{k}(\mathbf{c_{j}})\) respectively, where \(k\in[0,K]\), with \(K\) being the number of iterations. We define the empirical coverage of the \(k^{th}\) iteration as \(\zeta(\pi^{k})=\frac{1}{m}\sum_{j=1}^{m}\pi^{k}(\mathbf{c_{j}})\), the empirical mean of the samples selected by the selector for the associated interpretable model \(g^{k}\), with \(m\) being the total number of samples in the training set. Thus, the entire selective risk is:
\[\mathcal{R}^{k}(\pi^{k},g^{k})=\frac{\frac{1}{m}\sum_{j=1}^{m}\mathcal{L}^{k}_ {(g^{k},\pi^{k})}\big{(}\mathbf{x_{j}},\mathbf{c_{j}}\big{)}}{\zeta(\pi^{k})}, \tag{1}\]
where \(\mathcal{L}^{k}_{(g^{k},\pi^{k})}\) is the optimization loss used to learn \(g^{k}\) and \(\pi^{k}\) together, discussed in Section 2.1.2. For a given coverage of \(\tau^{k}\in(0,1]\), we solve the following optimization problem:
\[\theta^{*}_{s^{k}},\theta^{*}_{g^{k}}= \operatorname*{arg\,min}_{\theta_{s^{k}},\theta_{g^{k}}}\mathcal{ R}^{k}\Big{(}\pi^{k}(.;\theta_{s^{k}}),g^{k}(.;\theta_{g^{k}})\Big{)}\] \[\text{s.t.}\quad\zeta\big{(}\pi^{k}(:;\theta_{s^{k}})\big{)}\geq \tau^{k}, \tag{2}\]
where \(\theta^{*}_{s^{k}},\theta^{*}_{g^{k}}\) are the optimal parameters at iteration \(k\) for the selector \(\pi^{k}\) and the interpretable model \(g^{k}\), respectively. In this work, the selectors \(\pi\) of different iterations are neural networks with sigmoid activations. At inference time, the selector routes the \(j^{th}\) sample with concept vector \(\mathbf{c_{j}}\) to \(g^{k}\) if and only if \(\pi^{k}(\mathbf{c_{j}})\geq 0.5\) for \(k\in[0,K]\).
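The coverage term and the inference rule translate directly into code; in this sketch the selectors, experts, and residual are assumed to be callables on a concept vector.

```python
import numpy as np

def empirical_coverage(pi, concepts):
    # zeta(pi): the empirical mean of the selection probabilities.
    return float(np.mean([pi(c) for c in concepts]))

def route(c, selectors, experts, residual):
    # Send the sample to the first expert whose selector fires
    # (pi^k(c) >= 0.5); otherwise fall through to the final residual.
    for pi_k, g_k in zip(selectors, experts):
        if pi_k(c) >= 0.5:
            return g_k(c)
    return residual(c)
```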
#### 2.1.2 Neuro-Symbolic interpretable models
In this stage, we design interpretable model \(g^{k}\) of \(k^{th}\) iteration to _interpret_ the Blackbox \(f^{k-1}\) from the previous \((k-1)^{th}\) iteration by optimizing the following loss function:
\[\mathcal{L}^{k}_{(g^{k},\pi^{k})}(\mathbf{x_{j}},\mathbf{c_{j}})=\underbrace{\ell \big{(}f^{k-1}(\mathbf{x_{j}}),g^{k}(\mathbf{c_{j}})\big{)}\pi^{k}(c_{j})}_{ \begin{subarray}{c}\text{variable component}\\ \text{for current iteration $k$}\end{subarray}}(\underbrace{\prod_{i=1}^{k-1}(1-\pi^{i}(\mathbf{c_{j}}))}_ {\begin{subarray}{c}\text{fixed component trained}\\ \text{in the previous iterations}\end{subarray}}), \tag{3}\]
where the term \(\pi^{k}(\mathbf{c_{j}})\prod_{i=1}^{k-1}\big{(}1-\pi^{i}(\mathbf{c_{j}})\big{)}\) denotes the probability of \(j^{th}\) sample being routed through the interpretable model \(g^{k}\). It is the probability of the sample going through the residuals for all the previous iterations from \(1\) through \(k-1\) (_i.e.,_\(\prod_{i=1}^{k-1}\big{(}1-\pi^{i}(\mathbf{c_{j}})\big{)}\)) times the probability of going through the interpretable model at iteration \(k\) (_i.e.,_\(\pi^{k}(\mathbf{c_{j}})\)). Refer to Figure 1 for an illustration. We learn \(\pi^{1},\dots\pi^{k-1}\) in the prior iterations and are not trainable at iteration \(k\). As each interpretable model \(g^{k}\) specializes in explaining a specific subset of samples (denoted by coverage \(\tau\)), we refer to it as an _expert_. We use SelectiveNet's (Geifman and El-Yaniv, 2019) optimization method to optimize Equation (2) since selectors need a rejection mechanism to route samples through residuals. Appendix A.4 details the optimization procedure in Equation (3). We refer to the interpretable experts of all the iterations as a "Mixture of Interpretable Experts" (MoIE) cumulatively after training. Furthermore, we utilize E-LEN, _i.e.,_ a Logic Explainable Network (Ciravegna et al., 2023) implemented with an Entropy Layer as first layer (Barbiero et al., 2022) as the interpretable symbolic model \(g\) to construct First Order Logic (FOL) explanations of a given prediction.
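The weighting term of Equation (3) is just a product of selector outputs; a sketch, in which the selectors from earlier iterations are treated as fixed:

```python
def routing_weight(selectors, c, k):
    # pi^k(c) * prod_{i<k} (1 - pi^i(c)): the probability that sample c
    # falls through the first k-1 residuals and is covered by expert k.
    w = selectors[k](c)
    for pi_i in selectors[:k]:  # frozen from previous iterations
        w = w * (1.0 - pi_i(c))
    return w
```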
#### 2.1.3 The Residuals
The last step is to _repeat_ with the residual \(r^{k}\), as \(r^{k}(\mathbf{x_{j}},\mathbf{c_{j}})=f^{k-1}(\mathbf{x_{j}})-g^{k}(\mathbf{c_{j}})\). We train \(f^{k}=h^{k}\big{(}\Phi(.)\big{)}\) to approximate the residual \(r^{k}\), creating a new Blackbox \(f^{k}\) for the next iteration \((k+1)\). This step is necessary to specialize \(f^{k}\) over samples not covered by \(g^{k}\). Optimizing the following loss function yields \(f^{k}\) for the \(k^{th}\) iteration:
\[\mathcal{L}^{k}_{f}(\mathbf{x_{j}},\mathbf{c_{j}})=\underbrace{\ell\big{(}r^{k}(\mathbf{x _{j}},\mathbf{c_{j}}),f^{k}(\mathbf{x_{j}})\big{)}}_{\begin{subarray}{c}\text{ trainable component}\\ \text{for iteration $k$}\end{subarray}}\underbrace{\prod_{i=1}^{k}\big{(}1-\pi^{i}(\mathbf{c_{j}}) \big{)}}_{\begin{subarray}{c}\text{non-trainable component}\\ \text{for iteration $k$}\end{subarray}} \tag{4}\]
Notice that we fix the embedding \(\Phi(.)\) for all the iterations. Due to computational overhead, we only finetune the last few layers of the Blackbox (\(h^{k}\)) to train \(f^{k}\). At the final iteration \(K\), our method produces a MoIE and a Residual, explaining the interpretable and uninterpretable components of the initial Blackbox \(f^{0}\), respectively. Appendix A.5 describes the training procedure of our model, the extraction of FOL, and the architecture of our model at inference.
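One residual-fitting step per Equation (4) can be sketched as follows, with \(f^{k-1}\), \(g^{k}\), and \(f^{k}\) as callables returning logits and `ell` a distillation loss:

```python
def residual_step_loss(x, c, f_prev, g_k, f_k, selectors, ell):
    # Train f^k to mimic r^k(x, c) = f^{k-1}(x) - g^k(c), weighted by the
    # (fixed) probability that no expert so far has covered the sample.
    target = f_prev(x) - g_k(c)
    not_covered = 1.0
    for pi_i in selectors:
        not_covered = not_covered * (1.0 - pi_i(c))
    return ell(target, f_k(x)) * not_covered
```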
**Selecting the number of iterations \(K\):** We follow two principles to select the number of iterations \(K\) as a stopping criterion: 1) Each expert should have enough data to be trained reliably (coverage \(\zeta^{k}\)); if an expert covers insufficient samples, we stop the process. 2) If the final residual (\(r^{K}\)) underperforms a threshold, it is not reliable to distill from the Blackbox, and we stop the procedure to ensure that overall accuracy is maintained.
\begin{table}
\begin{tabular}{l c c} \hline \hline DATASET & BLACKBOX & \# EXPERTS \\ \hline CUB-200 (Wah et al., 2011) & RESNET101 (He et al., 2016) & 6 \\ CUB-200 (Wah et al., 2011) & VIT (Wang et al., 2021) & 6 \\ AWA2 (Xian et al., 2018) & RESNET101 (He et al., 2016) & 4 \\ AWA2 (Xian et al., 2018) & VIT (Wang et al., 2021) & 6 \\ HAM10000 (Tschandl et al., 2018) & INCEPTION (Szegedy et al., 2015) & 6 \\ SIIM-ISIC (Rotemberg et al., 2021) & INCEPTION (Szegedy et al., 2015) & 6 \\ EFFUSION IN MIMIC-CXR (Johnson et al., 2019) & DENSENET121 (Huang et al., 2017) & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Datasets and Blackboxes.
## 3 Related work
**Post hoc explanations:** Post hoc explanations retain the flexibility and performance of the Blackbox. The post hoc explanation has many categories, including feature attribution (Simonyan et al., 2013; Smilkov et al., 2017; Binder et al., 2016) and counterfactual approaches (Singla et al., 2019; Abid et al., 2021). For example, feature attribution methods associate a measure of importance to features (e.g., pixels) that is proportional to the feature's contribution to BlackBox's predicted output. Many methods were proposed to estimate the importance measure, including gradient-based methods (Selvaraju et al., 2017; Sundararajan et al., 2017), game-theoretic approach (Lundberg and Lee, 2017). The post hoc approaches suffer from a lack of fidelity to input (Adebayo et al., 2018) and ambiguity in explanation due to a lack of correspondence to human-understandable concepts. Recently, Posthoc Concept Bottleneck models (PCBMs) (Yuksekgonul et al., 2022) learn the concepts from a trained Blackbox embedding and use an interpretable classifier for classification. Also, they fit a residual in their hybrid variant (PCBM-h) to mimic the performance of the Blackbox. We will compare against the performance of the PCBMs method. Another major shortcoming is that, due to a lack of mechanistic explanation, post hoc explanations do not provide a recourse when an undesirable property of a Blackbox is identified. Interpretable-by-design provides a remedy to those issues (Rudin, 2019).
**Concept-based interpretable models:** Our approach falls into the category of concept-based interpretable models. Such methods provide a mechanistically interpretable prediction that is a function of human-understandable concepts. The concepts are usually extracted from the activation of the middle layers of the Neural Network (bottleneck). Examples include Concept Bottleneck models (CBMs) (Koh et al., 2020), an ante-hoc concept decoder (Sarkar et al., 2022), and high-dimensional Concept Embedding models (CEMs) (Zarlenga et al., 2022) that use high-dimensional concept embeddings to allow extra supervised learning capacity and achieve SOTA performance in the interpretable-by-design class. Most concept-based interpretable models do not model the interaction between concepts and cannot be used for reasoning. An exception is E-LEN (Barbiero et al., 2022), which uses an entropy-based approach to derive explanations in terms of FOL using the concepts. The underlying assumption of those methods is that one interpretable function can explain the entire set of data, which can limit flexibility and consequently hurt the performance of the models. Our approach relaxes that assumption by allowing multiple interpretable functions and a residual. Each function is appropriate for a portion of the data, and a small portion of the data is allowed to be uninterpretable by the model (_i.e.,_ residual). We will compare our method with CBMs, CEMs, and their E-LEN-enhanced variants.
**Application in fixing the shortcut learning:** Shortcuts are spurious features that correlate with both input and the label on the training dataset but fail to generalize in more challenging real-world scenarios. Explainable AI (X-AI) aims to identify and fix such an undesirable property. Related work in X-AI includes LIME (Ribeiro et al., 2016), utilized to detect spurious background as a shortcut to classify an
Figure 2: MoIE identifies diverse concepts for specific subsets of a class, unlike the generic ones by the baselines. **(i)** We construct the FOL explanations of the samples of "Bay breasted warbler" in the CUB-200 dataset for VIT-based **(a)** CBM + E-LEN as an _interpretable-by-design_ baseline, **(b)** PCBM + E-LEN as a _posthoc_ baseline, **(c)** experts in MoIE at inference. We highlight the unique concepts for experts 1, 2, and 3 in _red, blue_, and _magenta_, respectively. **(ii)** Comparison of FOL explanations by MoIE with the PCBM + E-LEN baselines for HAM10000 **(top)** and ISIC **(bottom)** to classify a malignant lesion. We highlight unique concepts for experts 3, 5, and 6 in _red, blue_, and _violet_, respectively. For brevity, we combine FOLs for each expert for the samples covered by them.
animal. Recently, an interpretable model (Rosenzweig et al., 2021) involving local image patches has been used as a proxy to the Blackbox to identify shortcuts. However, both methods operate in pixel space, not concept space. Also, both approaches are post hoc and do not provide a way to eliminate the shortcut learning problem. Our MoIE discovers shortcuts using the high-level concepts in the FOL explanation of the Blackbox's prediction and eliminates them via metadata normalization (MDN) (Lu et al., 2021).
## 4 Experiments
We perform experiments on a variety of vision and medical imaging datasets to show that 1) MoIE captures a diverse set of concepts, 2) the performance of the residuals degrades over successive iterations as they cover "harder" instances, 3) MoIE does not compromise the performance of the Blackbox, 4) MoIE achieves superior performances during test time interventions, and 5) MoIE can fix the shortcuts using the Waterbirds dataset (Sagawa et al., 2019). We repeat our method until MoIE covers at least 90% of samples or the final residual's accuracy falls below 70%. Refer to Table 1 for the datasets and Blackboxes experimented with. For ResNets and Inception, we flatten the feature maps from the last convolutional block to extract the concepts. For VITs, we use the image embeddings from the transformer encoder to perform the same. We use SIIM-ISIC as a real-world transfer learning setting, with the Blackbox trained on HAM10000 and evaluated on a subset of the SIIM-ISIC Melanoma Classification dataset (Yuksekgonul et al., 2022). Appendix A.6 and Appendix A.7 expand on the datasets and hyperparameters.
**Baselines:** We compare our methods to two concept-based baselines: 1) interpretable-by-design and 2) posthoc. They consist of two parts: a) a concept predictor \(\Phi:\mathcal{X}\rightarrow\mathcal{C}\), predicting concepts from images; and b) a label predictor \(g:\mathcal{C}\rightarrow\mathcal{Y}\), predicting labels from the concepts. The end-to-end CEMs and sequential CBMs serve as interpretable-by-design baselines. Similarly, PCBM and PCBM-h serve as post hoc baselines. Convolution-based \(\Phi\) includes all layers till the last convolution block. VIT-based \(\Phi\) consists of the transformer encoder block. The standard CBM and PCBM models do not show how the concepts are composed to make the label prediction. So, we create CBM + E-LEN, PCBM + E-LEN, and PCBM-h + E-LEN by using the identical \(g\) of MoIE (shown in Appendix A.7) as a replacement for the standard classifiers of CBM and PCBM. We train the \(\Phi\) and \(g\) in these new baselines to sequentially generate FOLs (Barbiero et al., 2022). Due to the unavailability of concept annotations, we extract the concepts from the Derm7pt dataset (Kawahara et al., 2018) using the pretrained embeddings of the Blackbox (Yuksekgonul et al., 2022) for HAM10000. Thus, we do not have interpretable-by-design baselines for HAM10000 and ISIC.
### Results
#### 4.1.1 Expert driven explanations by MoIE
First, we show qualitatively that MoIE captures a rich set of diverse instance-specific concepts. Next, we show quantitatively that MoIE-identified concepts are faithful to the Blackbox's final prediction using the "completeness score" metric and by zeroing out relevant concepts.
**Heterogeneity of Explanations:** At each iteration of MoIE, the Blackbox \(h^{k}(\Phi(.))\) splits into an interpretable expert (\(g^{k}\)) and a residual (\(r^{k}\)). Figure 2i shows this mechanism for VIT-based MoIE and compares the FOLs with CBM + E-LEN and PCBM + E-LEN baselines to classify "Bay Breasted Warbler" of CUB-200. The experts of different iterations specialize in specific instances of "Bay Breasted Warbler". Thus, each expert's FOL comprises its instance-specific concepts of the same class (Figure 2i-c). For example, the concept _leg_color_grey_ is unique to expert 4, but _belly_pattern_solid_ and _back_pattern_multicolored_ are unique to experts 1 and 2, respectively, to classify the instances of "Bay Breasted Warbler". Unlike MoIE, the baselines employ a single interpretable model \(g\), resulting in a generic FOL with identical concepts for all the samples of "Bay Breasted Warbler" (Figure 2i(a-b)). Thus, the baselines fail to capture the heterogeneity of explanations. For additional results of CUB-200, refer to Appendix A.10.6.
Figure 2ii shows such diverse explanations for HAM10000 (_top_) and ISIC (_bottom_). In Figure 2ii-(top), the baseline-FOL consists of concepts such as _AtypicalPigmentNetwork_ and _BlueWhitishVeil (BWV)_ to classify "Malignancy" for all the instances for HAM10000. However, expert 3 relies on _RegressionStructures_ along with \(BWV\) to classify the same for the samples it covers while expert 5 utilizes several other concepts _e.g., IrregularStreaks_, _Irregular dots and globules (IrregularDG) etc._ Due to space constraints, Appendix A.10.7 reports similar results for the Awa2 dataset. Also, VIT-based experts compose less concepts per sample than the ResNet-based experts, shown in Appendix A.10.8.
**MoIE-identified concepts attain higher completeness scores.** Figure 5(a-b) shows the completeness scores (Yeh et al., 2019) for a varying number of concepts. The completeness score is a post hoc measure, signifying the identified concepts as a "sufficient statistic" of the predictive capability of the Blackbox. Recall that \(g\) utilizes E-LEN (Barbiero et al., 2022), associating each concept with an attention weight after training. A concept with a high attention weight implies high predictive significance. Iteratively, we select the top relevant concepts based on their attention weights and compute the completeness scores for the top concepts for MoIE and the PCBM + E-LEN baseline in Figure 5(a-b) (see Appendix A.8 for details). For example, MoIE achieves a completeness score of 0.9 compared to 0.75 of the baseline (\(\sim 20\%\uparrow\)) for the 10 most significant concepts for the CUB-200 dataset with VIT as the Blackbox.
**MoIE identifies more meaningful instance-specific concepts.** Figure 5(c-d) reports the drop in accuracy by zeroing out the significant concepts. Any interpretable model (\(g\)) supports concept intervention (Koh et al., 2020). After identifying the top concepts from \(g\) using the attention weights, as in the last section, we set these concepts' values to zero, compute the model's accuracy drop, and plot it in Figure 5(c-d). When zeroing out the top 10 essential concepts for VIT-based CUB-200 models, MoIE records a drop of 53% compared to 28% and 42% for the CBM + E-LEN and PCBM + E-LEN baselines, respectively, showing the faithfulness of the identified concepts to the prediction.
In both of the last experiments, MoIE outperforms the baselines as the baselines mark the same concepts as significant for all samples of each class. However, MoIE leverages various experts specializing in different subsets of samples of different classes. For results of MIMIC-CXR and Awa2, refer to Appendix A.10.2 and Appendix A.10.4 respectively.
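The zeroing-out ablation used above amounts to a few lines; in this sketch `predict` stands in for an expert \(g\) applied to a concept matrix and `attention` for its E-LEN attention weights:

```python
import numpy as np

def accuracy_after_zeroing(predict, attention, C_val, y_val, top_k):
    # Zero the top-k concepts ranked by attention weight, then re-evaluate;
    # a large accuracy drop means those concepts were truly used.
    idx = np.argsort(-attention)[:top_k]
    C_abl = C_val.copy()
    C_abl[:, idx] = 0.0
    return float(np.mean(predict(C_abl) == y_val))
```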
#### 4.1.2 Identification of harder samples by successive residuals
Figure 3(a-c) displays the proportional accuracy of the experts and the residuals of our method per iteration. The proportional accuracy of each model (experts and/or residuals) is defined as the accuracy of that model times its coverage.
Figure 4: Across architectures test time interventions of concepts on all the samples **(a-d)**, on the “hard” samples **(e)**, covered by only the last two experts of MoIE.
Figure 3: The performance of experts and residuals across iterations. **(a-c)** Coverage and proportional accuracy of the experts and residuals. **(d-f)** We route the samples covered by the residuals across iterations to the initial Blackbox \(f^{0}\) and compare the accuracy of \(f^{0}\) (red bar) with the residual (blue bar). Figures **d-f** show the progressive decline in performance of the residuals across iterations as they cover the samples in the increasing order of “hardness”. We observe the similar abysmal performance of the initial blackbox \(f^{0}\) for these samples.
Recall that the model's coverage is the empirical mean of the samples selected by the selector. Figure 3(a) shows that the experts and residual cumulatively achieve an accuracy of \(\sim\) 0.92 for the CUB-200 dataset in iteration 1, with more contribution from the residual (black bar) than from expert1 (blue bar). Later iterations cumulatively increase the performance of the experts and worsen that of the corresponding residuals. The final iteration carves out the entire interpretable portion from the Blackbox \(f^{0}\) via all the experts, resulting in their more significant contribution to the cumulative performance. The residual of the last iteration covers the "hardest" samples, achieving low accuracy. Tracing these samples back to the original Blackbox \(f^{0}\), it also classifies these samples poorly (Figure 3(d-f)). As shown in the coverage plot, this experiment reinforces Figure 1, where the flow through the experts gradually becomes thicker compared to the narrower flow of the residual with every iteration. Refer to Figure 12 in Appendix A.10.3 for the results of the ResNet-based MoIEs.
#### 4.1.3 Quantitative analysis of MoIE with the Blackbox and baselines
**Comparing with the interpretable-by-design baselines:** Table 2 shows that MoIE achieves comparable performance to the Blackbox. Recall that "MoIE" refers to the mixture of all interpretable experts (\(g\)) only excluding any residuals. MoIE outperforms the interpretable-by-design baselines for all the datasets except Awa2. Since Awa2 is designed for zero-shot learning, its rich concept annotation makes it appropriate for interpretable-by-design models. In general, VIT-derived MoIEs perform better than their ResNet-based variants.
**Comparing with the PCBMs:** Table 2 shows that interpretable MoIE outperforms the interpretable posthoc baselines - PCBM and PCBM + E-LEN for all the datasets, especially by a significant margin for CUB-200 and ISIC. We also report "MoIE + Residual" as the mixture of interpretable experts plus the final residual to compare with the residualized PCBM, _i.e.,_ PCBM-h. Table 2 shows that PCBM-h performs slightly better than MoIE + Residual. Note that PCBM-h learns the residual by fitting the complete dataset to fix the interpretable PCBM's mistakes to replicate the performance of the Blackbox, resulting in better performance for PCBM-h than PCBM. However, we assume the Blackbox to be a combination of interpretable and uninterpretable components. So, we train the experts and the final residual to cover the interpretable and uninterpretable portions of the Blackbox respectively. In each iteration, our method learns the residuals to focus on the samples, which are not covered by the respective interpretable experts. Therefore, residuals are not designed to fix the mistakes made by the experts. In doing so, the final residual in MoIE + Residual covers the "hardest" examples, lowering its overall performance compared to MoIE.
#### 4.1.4 Test time interventions
Figure 4(a-d) shows the effect of test time interventions. Any concept-based model (Koh et al., 2020; Zarlenga et al., 2022) allows test time interventions for datasets with concept annotation (_e.g.,_ CUB-200, Awa2). We identify the significant concepts via their attention scores in \(g\), as during the computation of completeness scores, and set their values with the ground truths, considering the ground truth concepts as an oracle. As MoIE identifies a more diverse set of concepts by focusing on different subsets of classes, MoIE
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline
**MODEL** & \multicolumn{7}{c}{**Datasets**} \\ & CUB-200 & CUB-200 (VIT) & AWA2 & AWA2 (VIT) & HAM10000 & SIIM-ISIC & Effusion \\ & (RESNET101) & & (RESNET101) & & & & \\ \hline BLACKBOX & 0.88 & 0.92 & 0.89 & 0.99 & 0.96 & 0.85 & 0.91 \\ \hline
**INTERPRETABLE-BY-DESIGN** & & & & & & & \\ CEM (Zarlenga et al., 2022) & \(0.77\pm 0.22\) & - & \(0.88\pm 0.50\) & - & NA & NA & \(0.79\pm 0.00\) \\ CBM (Koh et al., 2020) & \(0.65\pm 0.37\) & \(0.86\pm 0.24\) & \(0.88\pm 0.35\) & \(0.94\pm 0.28\) & NA & NA & \(0.79\pm 0.00\) \\ CBM + E-LEN (Koh et al., 2020; Barbiero et al., 2022) & \(0.71\pm 0.35\) & \(0.88\pm 0.24\) & \(0.86\pm 0.35\) & \(0.93\pm 0.25\) & NA & NA & \(0.79\pm 0.00\) \\ \hline
**POSTHOC** & & & & & & & \\ PCBM (Yuksekgonul et al., 2022) & \(0.36\pm 0.01\) & \(0.88\pm 0.20\) & \(0.82\pm 0.23\) & \(0.94\pm 0.17\) & \(0.93\pm 0.00\) & \(0.71\pm 0.01\) & \(0.81\pm 0.01\) \\ PCBM-h (Yuksekgonul et al., 2022) & \(0.35\pm 0.01\) & \(0.91\pm 0.18\) & \(0.87\pm 0.20\) & \(0.98\pm 0.17\) & \(0.95\pm 0.00\) & \(0.79\pm 0.05\) & \(0.82\pm 0.07\) \\ PCBM + E-LEN (Yuksekgonul et al., 2022; Barbiero et al., 2022) & \(0.80\pm 0.36\) & \(0.89\pm 0.26\) & \(0.85\pm 0.25\) & \(0.96\pm 0.18\) & \(0.94\pm 0.02\) & \(0.73\pm 0.01\) & \(0.83\pm 0.01\) \\ PCBM-h + E-LEN (Yuksekgonul et al., 2022; Barbiero et al., 2022) & \(0.88\pm 0.24\) & \(0.98\pm 0.20\) & \(0.95\pm 0.03\) & \(0.82\pm 0.05\) & \(0.87\pm 0.03\) & & \\ \hline
**OURS** & & & & & & & \\ MoIE (COVERAGE) & **\(0.85\pm 0.01\) (0.9)** & **\(0.91\pm 0.00\)** & **\(0.87\pm 0.00\)** & **\(0.97\pm 0.00\)** & **\(0.95\pm 0.00\) (0.97)** & **\(0.84\pm 0.00\)** & **\(0.87\pm 0.00\)** \\ MoIE + RESIDUAL & & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: MoIE does not hurt the performance of the original Blackbox using a held-out test set. We provide the mean and standard errors of AUROC and accuracy for medical imaging (_e.g.,_ HAM10000, ISIC, and Effusion) and vision (_e.g.,_ CUB-200 and Awa2) datasets, respectively, over 5 random seeds. For MoIE, we also report the percentage of test set samples covered by all experts as "coverage". Here, MoIE + Residual represents the experts with the final residual. Following the setting of (Zarlenga et al., 2022), we only report the performance of the convolutional CEM, leaving the construction of a VIT-based CEM as future work. Recall that interpretable-by-design models cannot be constructed for HAM10000 and ISIC as they have no concept annotation; we learn the concepts from the Derm7pt dataset. For all the datasets, MoIE covers a significant portion of data (at least 90%) cumulatively. We boldface our results.
outperforms the baselines in terms of accuracy for such test time interventions. Instead of manually deciding which samples to intervene on, it is generally preferable to intervene on the "harder" samples, making the process efficient. As per Section 4.1.2, experts of different iterations cover samples in increasing order of "hardness". To intervene efficiently, we perform identical test-time interventions with varying numbers of concepts for the "harder" samples covered by the final two experts and plot the accuracy in Figure 4(e). For the VIT-derived MoIE of CUB-200, intervening on only 20 concepts enhances the accuracy of MoIE from 91% to 96% (\(\sim 6.1\%\) \(\uparrow\)). We cannot perform the same for the baselines as they cannot directly estimate "harder" samples. Also, Figure 4 shows a relatively higher gain for ResNet-based models in general. Appendix A.10.5 demonstrates an example of test time intervention of concepts for relatively "harder" samples, identified by the last two experts of MoIE.
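The intervention itself reduces to overwriting the most-attended predicted concepts with their oracle values before re-running the expert; the names below are illustrative.

```python
import numpy as np

def test_time_intervention(C_pred, C_true, attention, top_k):
    # Replace the top-k most-attended predicted concepts with ground
    # truth (treated as an oracle), as done for CUB-200 and Awa2.
    idx = np.argsort(-attention)[:top_k]
    C_int = C_pred.copy()
    C_int[:, idx] = C_true[:, idx]
    return C_int
```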
#### 4.1.5 Application in the removal of shortcuts
First, we create the Waterbirds dataset as in (Sagawa et al., 2019) by using forest and bamboo as the spurious land concepts of the Places dataset for landbirds of the CUB-200 dataset. We do the same by using oceans and lakes as the spurious water concepts for waterbirds. We utilize ResNet50 as the Blackbox \(f^{0}\) to identify each bird as a Waterbird or a Landbird. The Blackbox quickly latches onto the spurious backgrounds to classify the birds. As a result, the Blackbox's accuracy differs for land-based versus aquatic subsets of the bird species, as shown in Figure 6a. Waterbirds on water are classified more accurately than those on land (96% vs. 67%, orange bars in Figure 6a). The FOL from the biased Blackbox-derived MoIE captures the spurious concept _forest_ for a waterbird, misclassified as a landbird. Treating the background concepts as metadata, we minimize the background bias from the representation of the Blackbox using Metadata Normalization (MDN) layers (Lu et al., 2021) between two successive layers of the convolutional backbone to fine-tune the biased Blackbox. Next, we train \(t\), using the embedding \(\Phi\) of the robust Blackbox, and compare the accuracy of the spurious concepts with that of the biased Blackbox in Figure 6d. The validation accuracy of all the spurious concepts retrieved from the robust Blackbox falls well short of the predefined threshold of 70%, unlike for the biased Blackbox. Finally, we re-train the MoIE, distilling from the new robust Blackbox. Figure 6b illustrates similar accuracies of MoIE for Waterbirds on water vs. Waterbirds on land (91% vs. 88%). The FOL from the robust Blackbox does not include any background concepts (Figure 6c, bottom row). Refer to Figure 8 in Appendix A.9 for the flow diagram of this experiment.
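For intuition, here is a batch-level sketch of the residualization an MDN layer performs, in the spirit of Lu et al. (2021); the exact layer placement and training schedule follow that paper, not this sketch.

```python
import torch

def metadata_normalize(feats, meta):
    # Project out the component of the features that a linear model on
    # the metadata (here, the spurious background concepts) can explain:
    # F - X * beta, with beta solving min ||X beta - F|| on the batch.
    beta = torch.linalg.lstsq(meta, feats).solution
    return feats - meta @ beta
```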
Figure 5: Quantitative validation of the extracted concepts. **(a-b)** Completeness scores of the models for a varying number of top concepts. **(c-d)** Drop in accuracy compared to the original model after zeroing out the top significant concepts iteratively. The highest drop for MoIE indicates that MoIE selects more instance-specific concepts than generic ones by the baselines.
Figure 6: MoIE fixes shortcuts. **(a)** Performance of the biased Blackbox. **(b)** Performance of final MoIE extracted from the robust Blackbox after removing the shortcuts using MDN. **(c)** Examples of samples **(top-row)** and their explanations by the biased **(middle-row)** and robust Blackboxes (**bottom-row)**. **(d)** Comparison of accuracies of the spurious concepts extracted from the biased vs. the robust Blackbox.
## 5 Discussion & Conclusions
This paper proposes a novel method to iteratively extract a mixture of interpretable models from a flexible Blackbox. The comprehensive experiments on various datasets demonstrate that our method 1) captures more meaningful instance-specific concepts with higher completeness scores than the baselines without losing the performance of the Blackbox, 2) does not require explicit concept annotation, 3) identifies the "harder" samples using the residuals, 4) achieves significant performance gains over the baselines during test time interventions, and 5) eliminates shortcuts effectively. In the future, we aim to apply our method to other modalities, such as text or video. Also, as in the prior work, MoIE-captured concepts may not reflect a causal effect. The assessment of causal concept effects necessitates estimating inter-concept interactions, which will be the subject of future research.
## 6 Acknowledgement
We would like to thank Mert Yuksekgonul of Stanford University for providing the code to construct the concept bank of Derm7pt to conduct the skin experiments. This work was partially supported by NIH Award Number 1R01HL141813-01 and the Pennsylvania Department of Health. We are grateful for the computational resources provided by Pittsburgh Supercomputing grant number TG-ASC170024.
|
2303.01913 | Bespoke: A Block-Level Neural Network Optimization Framework for
Low-Cost Deployment | As deep learning models become popular, there is a lot of need for deploying
them to diverse device environments. Because it is costly to develop and
optimize a neural network for every single environment, there is a line of
research to search neural networks for multiple target environments
efficiently. However, existing works for such a situation still suffer from
requiring many GPUs and expensive costs. Motivated by this, we propose a novel
neural network optimization framework named Bespoke for low-cost deployment.
Our framework searches for a lightweight model by replacing parts of an
original model with randomly selected alternatives, each of which comes from a
pretrained neural network or the original model. In the practical sense,
Bespoke has two significant merits. One is that it requires near zero cost for
designing the search space of neural networks. The other merit is that it
exploits the sub-networks of public pretrained neural networks, so the total
cost is minimal compared to the existing works. We conduct experiments
exploring Bespoke's merits, and the results show that it finds efficient
models for multiple targets with meager cost. | Jong-Ryul Lee, Yong-Hyuk Moon | 2023-03-03T13:27:00Z | http://arxiv.org/abs/2303.01913v2 | # Bespoke: A Block-Level Neural Network Optimization Framework for Low-Cost Deployment
###### Abstract
As deep learning models become popular, there is a lot of need for deploying them to diverse device environments. Because it is costly to develop and optimize a neural network for every single environment, there is a line of research to search neural networks for multiple target environments efficiently. However, existing works for such a situation still suffer from requiring many GPUs and expensive costs. Motivated by this, we propose a novel neural network optimization framework named Bespoke for low-cost deployment. Our framework searches for a lightweight model by replacing parts of an original model with randomly selected alternatives, each of which comes from a pretrained neural network or the original model. In the practical sense, Bespoke has two significant merits. One is that it requires near zero cost for designing the search space of neural networks. The other merit is that it exploits the sub-networks of public pretrained neural networks, so the total cost is minimal compared to the existing works. We conduct experiments exploring Bespoke's merits, and the results show that it finds efficient models for multiple targets with meager cost.
1 Electronics and Telecommunications Research Institute (ETRI), Daejeon, Korea
2 University of Science and Technology (UST), Daejeon, Korea
{jongryul.lee,yhmoon}@etri.re.kr
## Introduction
Due to the great success of deep learning, neural networks are deployed into various environments such as smartphones, edge devices, and so forth. In addition, lots of techniques have been proposed for optimizing neural networks for fast inference and small memory usage Park et al. (2020); Liu et al. (2021); Luo and Wu (2020); Yu et al. (2021); Lee et al. (2020); Peng et al. (2019); Hinton et al. (2015); Yim et al. (2017).
One of the major issues about optimizing neural networks for efficient deployment is that it is not cheap to search for an optimized model for even a single target environment. If the target environment is changed, we need to search for another optimized model again. That is, if we have 100 target environments, we have to do optimization 100 times separately. This situation is not preferred in practice due to the expensive model search/training cost.
There is a line of research for addressing this issue. Cai et al. (2020) proposed an efficient method, called Once-For-All (OFA), producing non-trivial neural networks simultaneously for multiple target environments. Sahni et al. (2021) proposed a method called CompOFA, which is more efficient than OFA in terms of training cost and search cost based on reducing the design space for model configuration. Since these methods do not utilize a pretrained network, they require too much time for training the full network. In order to address such an expensive cost, Molchanov et al. (2021) proposed a latency-aware network acceleration method called LANA based on a pretrained teacher model. This method dramatically reduces training and search costs with a large design space. In LANA, for each layer in the teacher model, more than 100 alternative blocks (sub-networks) are trained to mimic it. After training them, LANA replaces ineffective layers in the teacher model with lightweight alternatives using integer programming. Even though LANA utilizes a pretrained teacher model, it still requires designing alternative blocks. In addition, training many alternatives for every single layer may increase peak CPU/GPU memory usage. This limitation leads LANA to require \(O(M)\) epochs for training the alternatives on ImageNet, where \(M\) is the number of alternatives for each layer.
To significantly reduce the cost of searching and retraining models for multiple target environments, we focus on public pretrained neural networks like models in Keras' applications1. Each of those networks was properly searched and trained with plentiful resources. Thus, instead of newly exploring large search space with a number of expensive GPUs, we devise a novel way of exploiting decent reusable blocks (sub-networks) in those networks. Our work starts from this rationale.
Footnote 1: [https://keras.io/api/applications](https://keras.io/api/applications)
In this paper, we devise a framework named Bespoke that efficiently optimizes neural networks for various target environments. This framework is designed to produce fast and small neural networks at a practically affordable cost, one that a single modern GPU can handle. The illustration comparing OFA, LANA, and Bespoke is depicted in Figure 1. Our framework has the following notable features compared to previous methods like OFA and LANA.
* **Near Zero-Cost Model Design Space**: Our framework is based on the sub-networks of a teacher (original) model and those of pretrained models. Our framework
extracts sub-networks uniformly at random from the teacher and the pretrained networks, and uses them to construct a lightweight model. Thus, we do not need to struggle to expand the design space with fancy blocks and numerous channel/spatial settings. Such blocks are already included in the pretrained neural networks, which were well-trained with large-scale data. We can use them for a target task with a minor cost of knowledge distillation. To the best of our knowledge, this is the first work to utilize parts of such pretrained networks for student model construction.
* **Low-Cost Preprocessing**: The randomly selected sub-networks are trained to mimic a part of the teacher via knowledge distillation in preprocessing. They can be trained simultaneously with low peak memory usage compared to OFA, CompOFA, and LANA.
* **Efficient Model Search**: Our framework deals with a neural network as a set of sub-networks. Each sub-network is extracted from the original network so that we can precisely measure its actual inference time. In addition, for evaluating the suitability of an alternative sub-network, our framework produces an induced neural network by rerouting the original network to use the alternative. Then, we compute the accuracy of the induced network on a validation dataset. The measured inference time and accuracy lead us to an incremental algorithm for evaluating a candidate neural network. Based on this, we devise an efficient model search method.
## Related Work
**Block-wise NAS.** As our framework deals with a neural network in a block-wise way, there are several early works for block-wise architecture search/generation [10, 11]. Such works find network architectures at block-level to effectively reduce search space. Zhong et al. (2018) proposed a block-wise network generation method with Q-learning, but it is still somewhat slow because it requires training a sampled model from scratch for evaluation. Li et al. (2020) proposed another block-wise method to ensure that potential candidate networks are fully trained without additional training. Since the potential candidate networks are fully trained, the time for evaluating each architecture can be significantly reduced. For this, Li et al. (2020) used a teacher model and knowledge distillation with it. Compared to [11], we not only use a teacher model for knowledge distillation but also use it as a base model to find a result model. In addition, while Li et al. (2020) only considered sequentially connected blocks, this work does not have such a limitation regarding the form of a block. Since we reuse the components (blocks) in the teacher model with their weights, our framework can have lower costs for pretraining and searching than such NAS-based approaches, even if they are block-wisely formulated.
**Hardware-aware NAS.** Hardware-aware architecture search has been one of the popular topics in deep learning, so there are many existing works [1, 13, 14, 15, 16]. Those works usually aim to reduce model search space, predict the accuracy of a candidate model, or consider actual latency in the optimization process. In addition, some of these works were proposed to reduce the cost of deploying models to multiple hardware environments [13, 14, 15, 16]. Moons et al. (2021) proposed a method named DONNA built on block-wise knowledge distillation for rapid architecture search with consideration of multiple hardware platforms. DONNA uses teacher blocks defined in a pre-trained reference model for distilling knowledge to student blocks, not for building student block architectures. Despite the many existing works, we think that there is an opportunity for improvement in terms of searching/training costs with the sub-networks in pretrained models.
**Knowledge Distillation.** Our framework includes knowledge distillation to train the sub-networks. Some early works on block-wise knowledge distillation related to our framework [13, 14, 15, 16] proposed block-wise distilling methods for a student model given by model compression or an initialized lightweight model. In contrast to [11, 16], they used a teacher model for making a student model. However, while our framework deals with any sub-network with a single input and a single output as a block, they used sequentially connected blocks at a high level. That is, student models produced by our framework are likely to be more diverse than those by [14].
Figure 1: Illustration of model optimization frameworks for multiple deployment targets
## Method
Bespoke consists of two steps: preprocessing and model search. Given a teacher (original) model, the preprocessing step finds sub-networks which will be used as alternatives. This step is described in Figure 2. The model search step then constructs a student model by replacing parts of the teacher model with the alternative sub-networks. All proofs for this section are included in the supplemental material.
### Formulation
For a neural network denoted by \(\mathcal{N}\), the set of all layers in \(\mathcal{N}\) is denoted by \(L(\mathcal{N})\). Then, for any subset \(S\subseteq L(\mathcal{N})\), we define a sub-network of \(\mathcal{N}\) which consists of layers in \(S\) with the same connections in \(\mathcal{N}\). Its inputs are defined over the connections from layers in \(L(\mathcal{N})\setminus S\) to \(S\) in \(\mathcal{N}\). The outputs of the sub-network are similarly defined. For simplicity, we consider only sub-networks having a single input and a single output. We denote a set of sub-networks in \(\mathcal{N}\) sampled in a certain way by \(\Omega(\mathcal{N})\).
For a sub-network \(\mathcal{S}\), the spatial change of \(\mathcal{S}\) is defined to be the ratio between its spatial input size and its spatial output size. For example, if \(\mathcal{S}\) has a convolution with stride 2, its spatial change is \(\frac{1}{2}\). Then, for any two sub-networks \(\mathcal{S}\) and \(\mathcal{R}\), we define that if the spatial change of \(\mathcal{S}\) is equal to that of \(\mathcal{R}\), \(\mathcal{R}\) is compatible with \(\mathcal{S}\) or vice versa.
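To make the compatibility notion concrete, the following is a minimal Python sketch, with illustrative names of our own rather than the paper's code, of how spatial changes can be computed and compared:

```python
# A minimal sketch of the compatibility check between two sub-networks.
# All names here are illustrative assumptions, not the authors' code.
from dataclasses import dataclass
from fractions import Fraction

@dataclass
class SubNetwork:
    name: str
    in_size: int   # spatial input size (assuming square feature maps)
    out_size: int  # spatial output size

    def spatial_change(self) -> Fraction:
        # e.g. a stride-2 convolution halves the resolution: change = 1/2
        return Fraction(self.out_size, self.in_size)

def compatible(s: SubNetwork, r: SubNetwork) -> bool:
    # R is compatible with S iff their spatial changes are equal;
    # channel mismatches are handled separately by pruning/masking.
    return s.spatial_change() == r.spatial_change()

teacher_block = SubNetwork("teacher", in_size=56, out_size=28)
candidate = SubNetwork("alt", in_size=112, out_size=56)
print(compatible(teacher_block, candidate))  # True: both have change 1/2
```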
Suppose that we have a neural network \(\mathcal{N}\) and a sub-network \(\mathcal{S}\) of \(\mathcal{N}\). Suppose that we have another sub-network \(\mathcal{R}\) that is compatible with \(\mathcal{S}\), and we replace \(\mathcal{S}\) with \(\mathcal{R}\) in \(\mathcal{N}\). \(\mathcal{R}\) may come from \(\mathcal{N}\) or another neural network. If the computational cost of \(\mathcal{R}\) is smaller than that of \(\mathcal{S}\), such a replacement can be seen as model acceleration/compression. One might worry about the case where the input/output channel dimensions of \(\mathcal{S}\) are not matched to those of \(\mathcal{R}\). Even in this case, by pruning the input/output channels of \(\mathcal{R}\), we can make a pruned network \(\mathcal{R}^{\prime}\) from \(\mathcal{R}\) having the same input/output channel dimensions as \(\mathcal{S}\). Then, we can replace \(\mathcal{S}\) with \(\mathcal{R}^{\prime}\) in \(\mathcal{N}\) without any computational violation. We say that a neural network produced by such a replacement is induced from \(\mathcal{N}\).
Suppose that we have a teacher neural network \(\mathcal{T}\) which we want to optimize. Let us consider a set of public pre-trained networks, denoted by \(\mathbf{P}\). Then, suppose that we have sub-networks in \(\mathcal{T}\) as well as sub-networks in the pretrained networks of \(\mathbf{P}\). By manipulating such sub-networks, we propose a novel structure named a model house \(\mathsf{H}=(\Omega(\mathcal{T}),\mathbf{A})\) where \(\mathbf{A}\) is a set of sub-networks each of which is derived from \(\mathcal{T}\) or a neural network in \(\mathbf{P}\). \(\mathbf{A}\) is called the alternative set, and every element in it is compatible with a sub-network in \(\Omega(\mathcal{T})\). Our framework finds a lightweight model \(\mathcal{M}^{*}\) which is induced from \(\mathcal{T}\) as follows:
\[\mathcal{M}^{*}=\operatorname*{arg\,min}_{\mathcal{M}\in\text{{\it satisfy}}( \mathsf{H},R)}Loss_{val}(\mathcal{M}), \tag{1}\]
where \(Loss_{val}(\mathcal{M})\) is the task loss of \(\mathcal{M}\) upon the validation dataset, \(R\) represents a certain requirement, and \(\text{{\it satisfy}}(\mathsf{H},R)\) is a set of candidate neural networks induced from \(\mathcal{T}\) with \(\mathsf{H}\) which satisfy \(R\).
### Preprocessing
Let us introduce how to find effective sub-networks for a model house \(\mathsf{H}\) and how to make them ready for searching a student model.
**Enumeration.** Let us introduce a way of finding a sub-network in \(\mathcal{T}\). Note that this is not trivial due to complex connections between layers, such as skip connections. To address this issue, we use a modified version of Depth-First Search (DFS). The modified algorithm starts from any layer in \(\mathcal{T}\), and it can move from a current layer to an outbound neighbor \(u\) only if all the inbound layers of \(u\) have already been visited. For understanding, we provide the detailed procedure of the modified DFS in Algorithm 1. In this algorithm, \(|\mathcal{T}|\) represents the number of layers in \(\mathcal{T}\). In addition, for any layer \(l\), \(\mathbf{N}_{\text{out}}(l)\) is the set of outbound layers of \(l\), and \(\mathbf{N}_{\text{in}}(l)\) is the set of inbound layers of \(l\). By maintaining a vector \(\delta\), we make \(u\) wait until all the inbound layers of \(u\) are visited.
```
Input: l0: a layer in T; T: a neural network
begin
    Initialize a stack S with l0;
    Initialize a vector δ as the zero vector in R^{|T|};
    while S is not empty do
        v := pop(S);
        mark v as being visited;
        for u ∈ N_out(v) do
            Increase δ[u] by 1;
            if δ[u] = |N_in(u)| then
                Push u into S;
```
**Algorithm 1** Modified Depth-First Search(\(l_{0}\), \(\mathcal{T}\))
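For concreteness, below is a runnable Python rendition of Algorithm 1; the graph representation and variable names are illustrative assumptions, not the authors' implementation.

```python
# A runnable Python sketch of Algorithm 1. The network is modelled as a
# DAG: `out_edges[l]` lists N_out(l) and `in_degree[l]` is |N_in(l)|.
def modified_dfs(l0, out_edges, in_degree):
    stack = [l0]
    delta = {layer: 0 for layer in out_edges}  # visited-inbound counters
    order = []                                 # layers in popped order
    boundaries = []                            # candidate (l_i, P_i) pairs (see Lemma 1)
    while stack:
        was_only_element = len(stack) == 1
        v = stack.pop()
        if was_only_element and order:         # l_i alone on the stack when popped
            boundaries.append((v, list(order)))
        order.append(v)                        # mark v as visited
        for u in out_edges[v]:
            delta[u] += 1
            if delta[u] == in_degree[u]:       # all inbound layers of u visited
                stack.append(u)
    return order, boundaries

# Toy DAG with a skip-like structure: a -> {b, c}, b -> d, c -> d, d -> e.
out_edges = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"], "e": []}
in_degree = {"a": 0, "b": 1, "c": 1, "d": 2, "e": 1}
order, boundaries = modified_dfs("a", out_edges, in_degree)
print(order)       # ['a', 'c', 'b', 'd', 'e']
print(boundaries)  # includes ('d', ['a', 'c', 'b']): a single-input/single-output sub-network
```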
Suppose that the modified DFS algorithm started from a layer \(l_{0}\). Let us denote the \((i+1)\)-th layer popped from \(S\) by \(l_{i}\) and the set of popped layers before \(l_{i}\) by \(P_{i}\). Then, the modified DFS algorithm guarantees the following lemma.
**Lemma 1**.: _Suppose that \(l_{0}\) has a single input and \(l_{i}\) has a single output. If \(l_{i}\) is the only element in the stack when \(l_{i}\) is being popped from the traversing stack, the sub-network induced by \(P_{i}\cup\{l_{i}\}\) from \(\mathcal{T}\), with \(l_{i}\) as its output layer, has a single input and a single output._
Figure 2: The flow of the preprocessing step
The key point of this lemma is completeness: every layer in \(P_{i}\cup\{l_{i}\}\), except \(l_{0}\) and \(l_{i}\), is connected from or to another layer in the set. Based on this lemma, we can derive the following theorem.
**Theorem 1**.: _All the possible sub-networks of a neural network \(\mathcal{N}\) having a single input and a single output can be enumerated by running the modified DFS algorithm multiple times._
For understanding, Figure 3 depicts sub-networks that can be found with the modified DFS algorithm. An example of the simple skip connection is a sub-network with residual connections in ResNet [1]. An example of the nested skip connections is a sub-network with a residual connection and a squeeze-and-excitation module in EfficientNet [15]. Our enumeration algorithm can also find a sub-network including multiple blocks with skip connections depicted in Figure 3(c).
**Sampling.** Because a neural network usually has more than 100 layers, enumerating all of its sub-networks is simply prohibitive with respect to time and space. A more practical option is to randomly sample sub-networks. Given a neural network \(\mathcal{N}\), the sub-network random sampling algorithm works as follows. First, it collects all individual layers in \(\mathcal{N}\). Then, it selects a layer among them uniformly at random and conducts the modified DFS algorithm. Whenever a layer \(l_{i}\) is popped while being the only element in the stack, the algorithm stores the pair of \(l_{i}\) and \(P_{i}\). After the traversal, the algorithm selects one of the stored pairs uniformly at random and derives a sub-network as the return value. In practice, for this random selection, we use the part of the stored pairs close to \(l_{0}\) to avoid selecting over-sized sub-networks. The size ratio of this part to the entire set of stored pairs is parameterized by \(r\). We denote the sub-network sampling algorithm applied to a neural network \(\mathcal{N}\) by _SubnetSampling_(\(\mathcal{N}\)).
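Below is a minimal Python sketch of this sampling procedure, reusing `modified_dfs` and the toy graph from the sketch above; the function and parameter names are our own illustrative assumptions.

```python
# A minimal sketch of SubnetSampling. The ratio r keeps only stored
# pairs close to l0 to avoid over-sized sub-networks.
import random

def subnet_sampling(layers, out_edges, in_degree, r=0.3):
    l0 = random.choice(layers)
    _, boundaries = modified_dfs(l0, out_edges, in_degree)
    if not boundaries:
        return None
    k = max(1, int(len(boundaries) * r))        # pairs closest to l0
    l_i, p_i = random.choice(boundaries[:k])
    return set(p_i) | {l_i}                     # layers inducing the sub-network

print(subnet_sampling(["a", "b", "c", "d", "e"], out_edges, in_degree))
```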
The sub-network sampling algorithm is used to get the sub-networks of a teacher network and pretrained networks. Let us denote a set of sampled sub-networks of pretrained networks in \(\mathbf{P}\) by \(\Omega(\mathbf{P})\). Algorithm 2 describes the overall procedure to find \(\Omega(\mathcal{T})\) and \(\Omega(\mathbf{P})\) with _SubnetSampling_. In Lines 4-6, it computes sub-networks in \(\mathcal{T}\). In Lines 7-13, for a randomly sampled sub-network \(\mathcal{N}\in\Omega_{\mathcal{T}}\), it uniformly at random samples a pretrained neural network \(\mathcal{P}\) and gets a sub-network \(\mathcal{P}^{\prime}\) of it. We store a mapping from \(\mathcal{P}^{\prime}\) to \(\mathcal{N}\) into _TMap_ for our search algorithm.
```
Input:  T: the input teacher network; P: the set of pretrained networks;
        n_T: the number of sub-networks to sample from T;
        n_P: the number of sub-networks to sample over P
Output: Ω(T): a set of sampled sub-networks in T;
        Ω(P): a set of sampled sub-networks in the networks of P;
        TMap: a map (dictionary)
begin
    Ω_T := {}; Ω_P := {};
    Initialize map TMap;
    while |Ω_T| < n_T do
        N := SubnetSampling(T);
        Add N into Ω_T;
    while |Ω_P| < n_P do
        Randomly sample N ∈ Ω_T;
        Randomly sample P ∈ P;
        P' := SubnetSampling(P);
        if P' is compatible with N then
            Add P' into Ω_P;
            TMap[P'] := N;
    return Ω_T, Ω_P, TMap;
```
**Algorithm 2** _Construct_(\(\mathcal{T}\), \(\mathbf{P}\), \(n_{\mathcal{T}}\), \(n_{\mathbf{P}}\))
**Further Approximation.** In order to expand the search space of our framework, we generate more sub-networks and add them to \(\mathbf{A}\). One straightforward approach is to apply model compression techniques such as channel pruning and decomposition to the sub-networks in \(\Omega(\mathcal{T})\) and \(\Omega(\mathbf{P})\). For this purpose, we use a variant of a discrepancy-aware channel pruning method [11] named CURL. This variant simply uses mean-squared-error as the objective function, while the original version uses KL divergence.
Since a channel pruning method changes the number of in/out channels of sub-networks, one may ask how to handle the compatibility between sub-networks. While every alternative must have the same in/out shapes as the corresponding teacher layers in [10], we permit changes to an alternative's in/out shapes by introducing channel-wise mask layers.
Figure 3: Illustration of example sub-networks having a single input and a single output with skip connections. A rectangle represents a layer and a circle represents an element-wise operation like addition and multiplication.

To implement this, for any alternative sub-network \(\mathcal{A}\in\mathbf{A}\), we append an input masking layer \(M_{\text{in}}\) and an output masking layer \(M_{\text{out}}\) after the input layer and the output layer, respectively. If a channel pruning method produces channel-wise masking information for the in/out channels of \(\mathcal{A}\), it is stored in \(M_{\text{in}}\) and \(M_{\text{out}}\). The masked version of \(\mathcal{A}\) is used instead of \(\mathcal{A}\) when \(\mathcal{A}\) is selected as a replacement. Suppose that a resulting student model \(\mathcal{S}\) has such masked sub-networks. After fine-tuning \(\mathcal{S}\), we prune internal channels in \(\mathcal{S}\) according to the masking information in the masking layers. Keeping the masking layers during fine-tuning is essential for intermediate feature-level knowledge distillation. For understanding, we provide figures explaining this procedure in the supplemental material.
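The mask layers can be realized as element-wise multiplications with constant binary vectors. Below is a hedged Keras sketch of this idea (our own illustration under assumed shapes, not the paper's code):

```python
# A sketch of channel-wise mask layers reconciling pruned channel counts
# with the teacher's shapes. The mask zeroes pruned channels but keeps the
# tensor shape, so the alternative can be spliced in and the masked
# channels physically removed only after fine-tuning.
import numpy as np
import tensorflow as tf

def masked_alternative(alt: tf.keras.Model, in_mask: np.ndarray, out_mask: np.ndarray):
    x = tf.keras.Input(shape=alt.input_shape[1:])
    h = x * tf.constant(in_mask, dtype=tf.float32)    # input masking layer M_in
    h = alt(h)
    y = h * tf.constant(out_mask, dtype=tf.float32)   # output masking layer M_out
    return tf.keras.Model(x, y)

# Example: an alternative with 8 channels, of which some in/out channels
# were marked as pruned by the CURL variant (masks here are made up).
alt = tf.keras.Sequential(
    [tf.keras.layers.Conv2D(8, 3, padding="same", input_shape=(32, 32, 8))])
in_mask = np.array([1, 1, 0, 1, 1, 1, 0, 1], dtype=np.float32)
out_mask = np.array([1, 0, 1, 1, 0, 1, 1, 0], dtype=np.float32)
wrapped = masked_alternative(alt, in_mask, out_mask)
```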
We denote a set of networks added by the expansion process from \(\Omega(\mathcal{T})\) and \(\Omega(\mathbf{P})\) by \(\Omega^{*}(\mathcal{T},\mathbf{P})\). Then, the alternative set \(\mathbf{A}\) is defined as,
\[\mathbf{A}=\Omega(\mathcal{T})\cup\Omega(\mathbf{P})\cup\Omega^{*}(\mathcal{T },\mathbf{P}). \tag{2}\]
**Sub-Network Training and Profiling.** Let us discuss how to train the sub-networks in \(\mathbf{A}\) so that they can be used to produce a student model. We train them via knowledge distillation, as Molchanov et al. (2021) did. Recall that each alternative sub-network \(\mathcal{A}\in\mathbf{A}\) has a corresponding sub-network \(\mathcal{N}\in\Omega(\mathcal{T})\). Thus, for each sub-network \(\mathcal{A}\in\mathbf{A}\), we can construct a distillation loss defined by the mean-squared-error between the output of \(\mathcal{A}\) and that of \(\mathcal{N}\). By minimizing such an MSE loss via gradient descent optimization, we can train the sub-networks in \(\mathbf{A}\). Formally, the mean-squared-error loss for training alternatives is defined as follows:
\[\min_{W_{\mathbf{A}}}\sum_{x\in X_{\text{sample}}}\sum_{\mathcal{N}\in\Omega( \mathcal{T})}\sum_{\mathcal{A}\in\mathbf{A}_{\mathcal{N}}}\left(f_{\mathcal{N }}(x^{*})-f_{\mathcal{A}}(x^{*})\right)^{2}, \tag{3}\]
where \(W_{\mathbf{A}}\) is the set of all weight parameters over \(\mathbf{A}\), \(X_{\text{sample}}\) is a set of sampled input data, \(\mathbf{A}_{\mathcal{N}}\) is the set of alternatives compatible with \(\mathcal{N}\), \(f_{\mathcal{N}}\) is the functional form of \(\mathcal{N}\), and \(x^{*}\) is the output of the previous layer of \(\mathcal{N}\) in \(\mathcal{T}\). It is noteworthy that if we have enough GPU memory, all the sub-networks in \(\mathbf{A}\) can be trained together in a single distillation process. If GPU memory is insufficient, they can be trained over several distillation processes.
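As a concrete illustration, a minimal TensorFlow sketch of this feature-level distillation loop might look as follows; `teacher_sub`, `alt`, and `x_star_batches` are assumed stand-ins for a frozen teacher sub-network, one alternative, and the cached inputs \(x^{*}\):

```python
# A minimal sketch of alternative training by the MSE distillation loss
# of Eq. 3, for a single (teacher sub-network, alternative) pair.
import tensorflow as tf

def train_alternative(teacher_sub, alt, x_star_batches, lr=0.02):
    opt = tf.keras.optimizers.SGD(learning_rate=lr)
    mse = tf.keras.losses.MeanSquaredError()
    for x_star in x_star_batches:
        target = teacher_sub(x_star, training=False)   # f_N(x*), frozen teacher
        with tf.GradientTape() as tape:
            pred = alt(x_star, training=True)          # f_A(x*)
            loss = mse(target, pred)
        grads = tape.gradient(loss, alt.trainable_variables)
        opt.apply_gradients(zip(grads, alt.trainable_variables))
```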
**Requirement and Profiling.** A requirement can be related to FLOPs, actual latency, memory usage, and so forth. In order to produce a student model satisfying a requirement, we measure such metrics for every alternative sub-network after the sub-network training step.
### Model Searching
Given a requirement and a model house \(\mathsf{H}=(\Omega(\mathcal{T}),\mathbf{A})\), our searching algorithm for a student model is based on simulated annealing. Simulated annealing is a meta-heuristic algorithm for optimizing a function within a computational budget. In order to exploit simulated annealing, we need to define three things: an initial solution, a function that produces the next candidate, and the objective function. A solution \(S\) is defined to be a set of sub-networks in \(\mathbf{A}\), and we say that \(S\) is feasible when the sub-networks in \(\mathcal{T}\) that correspond to the sub-networks in \(S\) via _TMap_ do not overlap. In this work, the initial solution is selected by a simple greedy method. For a solution \(S\), the score function for the greedy method is defined as,
\[\Delta acc(S) =\sum_{\mathcal{N}\in S}\Delta acc(\mathcal{N},\textit{TMap}[ \mathcal{N}],\mathcal{T}), \tag{4}\] \[\textit{score}(S) =\max\left\{1.0,\frac{\textit{metric}(\mathcal{N}_{S})}{R}\right\} +\lambda\Delta acc(S), \tag{5}\]
where \(\Delta acc(\mathcal{N},\textit{TMap}[\mathcal{N}],\mathcal{T})\) represents the accuracy loss when _TMap_[\(\mathcal{N}\)] is replaced with \(\mathcal{N}\) for \(\mathcal{T}\). The values for \(\Delta acc\) can be calculated in preprocessing. _metric_ is a metric function to evaluate the inference time (e.g., latency) of a candidate student model, and \(\mathcal{N}_{S}\) is a candidate student model induced by \(S\). We define _score_ handling a single requirement (metric) for simplicity, but it can be easily extended to multiple requirements. This score function is also used in the annealing process. Note that since constructing \(\mathcal{N}_{S}\) with alternatives is time-consuming, we take another strategy for computing it. For each sub-network \(\mathcal{N}\in S\), \(\textit{metric}(\mathcal{N}_{S})\) is approximated incrementally from \(\textit{metric}(\mathcal{T})\) by adding \(\textit{metric}(\mathcal{N})\) and subtracting \(\textit{metric}(\textit{TMap}[\mathcal{N}])\).
The remaining piece needed to complete the annealing process is the computation of the next candidate solution. Given a current solution \(S\), we first copy it to \(S^{\prime}\) and uniformly at random remove one sub-network from it. Then, we repeatedly add a randomly selected sub-network from \(\mathbf{A}\) to \(S^{\prime}\) until the selected sub-network violates the feasibility of \(S^{\prime}\).
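Putting the pieces together, a minimal sketch of the scoring and annealing loop could look as follows; `metric`, `delta_acc`, `tmap`, and `feasible` are assumed helpers standing in for the profiled latencies, the precomputed accuracy losses of Eq. 4, _TMap_, and the feasibility check:

```python
# A sketch of the search step (Eqs. 4-5) and the simulated annealing loop,
# under the stated assumptions; scores are minimized.
import math
import random

def score(S, metric, delta_acc, tmap, metric_teacher, R, lam=1.0):
    # Incremental estimate of metric(N_S): start from the teacher metric
    # and swap in each replacement, as described in the text.
    m = metric_teacher + sum(metric(n) - metric(tmap[n]) for n in S)
    return max(1.0, m / R) + lam * sum(delta_acc[n] for n in S)

def neighbour(S, alternatives, feasible, max_tries=100):
    S2 = list(S)
    if S2:
        S2.remove(random.choice(S2))            # drop one sub-network
    for _ in range(max_tries):                  # grow until infeasible
        cand = random.choice(alternatives)
        if not feasible(S2 + [cand]):
            break
        S2.append(cand)
    return S2

def anneal(S0, score_fn, neighbour_fn, steps=1000, t0=1.0):
    current = best = S0
    for i in range(steps):
        t = max(t0 * (1 - i / steps), 1e-8)
        cand = neighbour_fn(current)
        d = score_fn(cand) - score_fn(current)
        if d < 0 or random.random() < math.exp(-d / t):
            current = cand
        if score_fn(current) < score_fn(best):
            best = current
    return best
```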
**Retraining with Distillation.** After getting a student model, we retrain it with knowledge distillation. Consider that the final solution of the simulated annealing process is denoted by \(S^{*}\). This distillation process works with the mean-square-error loss between the outputs corresponding to the sub-networks of \(S^{*}\) in \(\mathcal{N}_{S^{*}}\) and their corresponding sub-networks in \(\mathcal{T}\).
## Experiments
### Implementation Detail
**Datasets.** We use two representative datasets: CIFAR-100 [10] and ImageNet-1K [12]. CIFAR-100 has 50K training images and 10K validation images with 100 classes. ImageNet has 1,281K training images and 50K validation images with 1000 classes.
**Models.** As original (teacher) models, we use EfficientNet [13]. As pretrained models, we use not only these but also ResNet50V2 [14], MobileNet [15], MobileNetV2 [16], MobileNetV3 [17], Inception-ResNet [12], ResNet [11], ResNetBS [14], and EfficientNetV2 [13]. These models are already pretrained and fully available in Keras.
Footnote 2: [https://keras.io](https://keras.io)
Note that for CIFAR-100, EfficientNet and MobileNetV3 are transformed for transfer learning to additionally include a global average pooling layer, a dropout layer and a fully-connected layer. This is why they are slower for CIFAR-100 than for ImageNet.
**Computation Environment.** Each experiment for CIFAR-100 is conducted with NVIDIA GeForce Titan RTX, while that for ImageNet is conducted with NVIDIA RTX A5000.
CPU latency and GPU latency are measured with ONNX Runtime. For reference, an Intel i9-10900X is used for CIFAR-100, while an AMD EPYC 7543 32-core processor is used for ImageNet.
Footnote 3: [https://github.com/onnx/onnx](https://github.com/onnx/onnx)
**Implementation.** We implement everything with TensorFlow Keras because it is easy to implement the sub-network extraction and injection with Keras. Our training code is based on NVIDIA Deep Learning Examples.
Footnote 4: [https://github.com/NVIDIA/DeepLearningExamples](https://github.com/NVIDIA/DeepLearningExamples)
EfficientNet and MobileNetV3 are fine-tuned for CIFAR-100 with a learning rate of 0.0001 and cosine annealing [11]. We used CutMix [20] and MixUp [12] together for fine-tuning them, but not for retraining a student model.
The learning rate for training alternative sub-networks and fine-tuning a student model after the model search step is 0.02. We used the SGD optimizer for training the alternatives, and the AdaBelief optimizer [12] for fine-tuning the student model. We used the same dataset-dependent hyperparameter values for the AdaBelief optimizer as described in [12]. The number of epochs for training the alternatives is one for ImageNet and 20 for CIFAR-100. The number of epochs for fine-tuning the student model is 15 for ImageNet and 30 for CIFAR-100.
Recall that \(r\) is the ratio parameter controlling the maximum size of sampled sub-networks. It is set to 20% for ImageNet and 30% for CIFAR-100. \(n_{\mathcal{T}}\) is the parameter for \(|\Omega(\mathcal{T})|\) and \(n_{\mathbf{P}}\) is the parameter for \(|\Omega(\mathbf{P})|\). For all the experiments, \(n_{\mathcal{T}}\) and \(n_{\mathbf{P}}\) are fixed to 100 and 200, respectively. In addition, \(|\Omega^{*}|\), the number of sub-networks added by the expansion process, is also fixed to 200.
### Results
Building a model house and searching for a student model are each repeated 5 times. Thus, each reported latency/accuracy for Bespoke is the average over 25 measured values.
**Overall Inference Results.** CURL [13] is a channel pruning method which gives a global score to each channel in a network. Based on the score information, we can make models with various scales. Thus, we compare our framework with it in terms of latency and accuracy.
The overall results are presented in Table 1, Table 2, and Table 3. In these tables, the results of applying Bespoke to EfficientNet-B2 (EfficientNet-B0) are presented as Bespoke-EB2 (Bespoke-EB0). 'Rand' represents the results of models built by random selection on the same model house as Bespoke-EB2. 'Only' represents the results of making Bespoke use only the sub-networks of the teacher. It is noteworthy that the accuracy of Bespoke-EB2 is higher than that of EfficientNet-B0 for ImageNet while they have similar CPU latency. This means that Bespoke can effectively scale down EfficientNet-B2 in terms of CPU latency. The accuracy of Bespoke-EB2 for CIFAR-100 is lower than that of EfficientNet-B0, but Bespoke-EB2 is more efficient than EfficientNet-B0. Compared to the MobileNetV3 models, which were proposed for fast inference, Bespoke-EB2 is somewhat slower, but it can still be more accurate. Bespoke-EB0 is faster than MobileNetV3-Large with slightly lower accuracy. Note that the accuracy gap between MobileNetV3-Large and Bespoke-EB2 is larger than the gap between MobileNetV3-Large and Bespoke-EB0, while the latency gap is not. In addition, the latency difference between Bespoke-EB0 and MobileNetV3-Small is marginal, but the accuracy gap is significant. This means that the student models of Bespoke have a better trade-off between accuracy and latency than those of MobileNetV3.
| Methods | CIFAR-100 Top-1 Acc. (%) | CIFAR-100 Latency (ms) | CIFAR-100 Params (M) | ImageNet Top-1 Acc. (%) | ImageNet Latency (ms) | ImageNet Params (M) |
|---|---|---|---|---|---|---|
| EfficientNetB2 | 88.01 | 72.78 | 7.91 | 79.98 | 28.40 | 9.18 |
| EfficientNetB0 | 86.38 | 41.28 | 4.18 | 77.19 | 18.63 | 5.33 |
| Bespoke-EB2-Rand | 79.45 | 26.11 | 6.98 | 71.78 | 16.72 | 9.58 |
| Bespoke-EB2-Only | 84.47 | 33.07 | 4.39 | 65.62 | 13.74 | 6.30 |
| **Bespoke-EB2** | **85.45** | **31.82** | **7.43** | **78.61** | **17.55** | **9.39** |
| MobileNetV3-Large | 84.33 | 24.71 | 3.09 | 75.68 | 13.54 | 5.51 |
| MobileNetV3-Small | 81.98 | 11.98 | 1.00 | 68.21 | 6.99 | 2.55 |
| CURL | 81.49 | 28.65 | 1.89 | 71.21 | 17.20 | 2.37 |
| **Bespoke-EB0** | **84.86** | **22.05** | **3.78** | **73.93** | **9.39** | **6.77** |

Table 1: Overall Results (CPU Latency)
| Methods | Top-1 Acc. (%) | Latency (ms) |
|---|---|---|
| EfficientNetB2 | 88.01 | 5.78 |
| EfficientNetB0 | 86.38 | 4.06 |
| Bespoke-EB2-Rand | 79.45 | 5.36 |
| Bespoke-EB2-Only | 87.30 | 5.15 |
| **Bespoke-EB2** | **87.01** | **3.08** |
| MobileNetV3-Large | 84.33 | 3.04 |
| MobileNetV3-Small | 81.98 | 2.52 |
| CURL | 81.49 | 4.02 |
| **Bespoke-EB0** | **86.18** | **3.19** |

Table 2: GPU Latency (CIFAR-100)
| Methods | Top-1 Acc. (%) | Latency (ms) |
|---|---|---|
| EfficientNetB2 | 79.98 | 3.55 |
| EfficientNetB0 | 77.19 | 2.29 |
| Bespoke-EB2-Rand | 71.78 | 2.55 |
| Bespoke-EB2-Only | 74.22 | 2.83 |
| **Bespoke-EB2** | **78.08** | **2.16** |
| MobileNetV3-Large | 75.68 | 1.82 |
| MobileNetV3-Small | 68.21 | 1.29 |
| CURL | 71.21 | 1.81 |
| **Bespoke-EB0** | **72.75** | **2.40** |

Table 3: GPU Latency (ImageNet)
Models found by Bespoke-EB2-Rand have inferior accuracy compared to the other competitors. This result supports that selecting good alternatives is important for Bespoke, and that its search algorithm works effectively. In addition, we observed that models found by Bespoke-EB2-Only are less competitive than those found by Bespoke-EB2. This result shows that the public pretrained networks are actually helpful for constructing fast and accurate models.
Meanwhile, Bespoke-EB0 is somewhat behind in terms of the trade-off between accuracy and GPU latency, especially for ImageNet. This may be due to the quality of the sampled sub-networks; Bespoke-EB0 would likely produce better results with more sampled sub-networks and more epochs for training them.
Since CURL does not exploit actual latency, its latency gain is insignificant. Recall that Bespoke utilizes the variant of CURL to expand the alternative set. This observation implies that the latency gain of Bespoke mainly comes from replacing expensive sub-networks with fast alternatives, not from the expansion. We expect that Bespoke can find faster models with latency-aware compression methods.
**Cost analysis.** We analyze the costs for preprocessing, searching, and retraining. The results for OFA [12], DNA [13], DONNA [14], and LANA [15] come from [16]. Note that 'preprocess' includes the pretraining process and 'search' is the cost for searching for a single target.
The result is described in Table 4. Note that \(G\) is the number of sub-networks that can be trained with a single GPU in parallel. In the experiments, \(G\) is 100 for ImageNet. Since each sub-network is pretrained with one epoch, the entire cost for retraining the alternatives is \(|\mathbf{A}|/G\). The preprocessing includes applying the channel pruning method, but its cost is negligible. Note that the preprocessing cost of LANA is also calculated with parallelism.
From this table, we do not want to claim that the retraining cost of Bespoke is lower than that of the other methods. We only want to demonstrate the assumption that sub-networks in pretrained neural networks are actually useful as building blocks for neural architecture search.
**Comparison with LANA.** We provide an analysis comparing Bespoke with LANA [16], described in Table 5. Because it is not possible to set up the same environment as LANA, we use the relative speed-up from EfficientNet-B0. The model found by LANA is more accurate than that found by Bespoke for GPU latency, with faster inference latency. On the other hand, Bespoke-EB2 is not dominated by the models of LANA in terms of accuracy and CPU latency. This result is quite promising because Bespoke requires much less cost for preprocessing, yet there are settings in which Bespoke is comparable to LANA.
**Random sub-alternative set test.** We conducted experiments evaluating Bespoke with random subsets of the alternative set \(\mathbf{A}\); the results are shown in Table 6. The ratio is the size of a random subset relative to \(\mathbf{A}\). Bespoke finds an inefficient student model with the random subset of half size, which is slower than EfficientNet-B0. This is because many efficient and effective sub-networks are not included in the subset. This result supports the importance of a good alternative set.
## Conclusions
This paper proposes an efficient neural network optimization framework named Bespoke, which works with sub-networks of an original network and public pretrained networks. The model design space of Bespoke is defined by those sub-networks. Based on this feature, it can produce a lightweight model for a target environment at very low cost. The experimental results support the assumption that such sub-networks can actually be helpful in reducing searching and retraining costs. Compared to LANA, Bespoke can find a lightweight model with comparable accuracy and CPU latency. Its entire cost is much lower than that required by LANA and the other competitors.
For future work, we consider making Bespoke support other valuable tasks such as pose estimation and image segmentation. In addition, by adding more recent pretrained neural architectures into Bespoke, we want to see how Bespoke improves with them.
## Acknowledgments
This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No. 2021-0-00907, Development of Adaptive and Lightweight Edge-Collaborative Analysis Technology for Enabling Proactively Immediate Response and Rapid Learning).
| Ratio | Top-1 Acc. (%) | CPU Latency (ms) |
|---|---|---|
| 1.0 | 85.45 | 31.82 |
| 0.75 | 86.23 | 37.34 |
| 0.5 | 86.23 | 44.50 |

Table 6: Random Partial Alternative Set Test (CIFAR-100, Bespoke-EB2)
| Methods | Preprocess | Search | Retrain |
|---|---|---|---|
| OFA | 1205 | 40 | 75 |
| DNA | 320 | 14 | 450 |
| DONNA | 1920-1500 | 1500 | 50 |
| LANA | 197 | < 1h | 100 |
| **Bespoke** | \(\|\mathbf{A}\|/G=4\) | < 0.5h | 15 |

Table 4: Cost Analysis for ImageNet (Epochs)
| Methods | Top-1 Acc. (%) | Speed-up |
|---|---|---|
| EfficientNet-B0 | 77.19 | 1.0 |
| LANA-EB2-0.4 (CPU) | 78.11 | 1.19 |
| LANA-EB2-0.5 (CPU) | 78.87 | 0.98 |
| **Bespoke-EB2 (CPU)** | 78.61 | 1.06 |
| LANA-EB2-0.45 (GPU) | 79.71 | 1.10 |
| **Bespoke-EB2 (GPU)** | 78.08 | 1.06 |

Table 5: Comparison with LANA (ImageNet) |
2307.09660 | Neural Priority Queues for Graph Neural Networks | Graph Neural Networks (GNNs) have shown considerable success in neural
algorithmic reasoning. Many traditional algorithms make use of an explicit
memory in the form of a data structure. However, there has been limited
exploration on augmenting GNNs with external memory. In this paper, we present
Neural Priority Queues, a differentiable analogue to algorithmic priority
queues, for GNNs. We propose and motivate a desiderata for memory modules, and
show that Neural PQs exhibit the desiderata, and reason about their use with
algorithmic reasoning. This is further demonstrated by empirical results on the
CLRS-30 dataset. Furthermore, we find the Neural PQs useful in capturing
long-range interactions, as empirically shown on a dataset from the Long-Range
Graph Benchmark. | Rishabh Jain, Petar Veličković, Pietro Liò | 2023-07-18T22:01:08Z | http://arxiv.org/abs/2307.09660v1 | # Neural Priority Queues for Graph Neural Networks (GNNs)
###### Abstract
Graph Neural Networks (GNNs) have shown considerable success in neural algorithmic reasoning. Many traditional algorithms make use of an explicit memory in the form of a data structure. However, there has been limited exploration on augmenting GNNs with external memory. In this paper, we present Neural Priority Queues, a differentiable analogue to algorithmic priority queues, for GNNs. We propose and motivate a desiderata for memory modules, and show that Neural PQs exhibit the desiderata, and reason about their use with algorithmic reasoning. This is further demonstrated by empirical results on the CLRS-30 dataset. Furthermore, we find the Neural PQs useful in capturing long-range interactions, as empirically shown on a dataset from the Long-Range Graph Benchmark.
## 1 Introduction
Algorithms and Deep Learning methods possess very fundamentally different properties. Training deep learning models to mimic algorithms would allow us to get neural models that show generalisation ability similar to the algorithms, while retaining the robustness to noise of deep learning systems. This building and training of neural networks to execute algorithmic computations is referred to as Neural Algorithmic Reasoning (Velickovic and Blundell, 2021).
Architectures that align more closely with the underlying algorithm for the reasoning task tend to generalize better (Xu et al., 2019). Previous works have drawn inspiration from the external memory and data structure use of programmes and algorithms, and have found success in improving the algorithmic reasoning capabilities of recurrent neural networks (RNNs) by extending them with differentiable variants of these memory and data structures (Graves et al., 2014; Grefenstette et al., 2015).
Recently, graph neural networks (GNNs) have found immense success with algorithmic tasks (Chen et al., 2020; Velickovic et al., 2022). There have been works attempting to augment GNNs with memory, with the majority using gates to do so. However, gated memory leads to very limited persistence. Furthermore, these works have solely focused on dynamic graphs, and extending them to non-dynamic graphs would involve significant effort.
In this paper, we propose the extension of the message-passing framework of GNNs with external memory modules. We focus on adding a differentiable analogue to priority queues, as priority queues are a general data structure used by different algorithms and can be reduced to other data structures like stacks and queues. We name the resulting framework for differentiable priority queues 'Neural PQs'. We describe NPQ, an implementation under this framework, and also explore various variants of it. NPQ shows various properties that were lacking in previous works with GNNs, which we believe enable NPQ to help GNNs with algorithmic reasoning.
We summarize the contributions of this paper below:
* We propose the 'Neural PQ' framework, an extension of the message-passing GNN framework to allow use of memory modules, with particular inspiration from priority queues.
* We propose and motivate a desiderata for memory modules - (1) Memory-Persistence, (2) Permutation-Equivariance, (3) Reducibility to Priority Queues, and (4) No dependence on intermediate supervision. Past works have already expressed some subsets of these as desirables.
* We propose NPQs, an implementation within the Neural PQ framework, that exhibit all the above mentioned properties. This is the first differentiable analogue to priority queues, and the first memory modules for GNNs to exhibit all the above desiderata, to the best of our knowledge.
* We perform extensive quantitative analysis via a variety of experiments and find:
* NPQs, when trained to reason about Dijkstra's shortest path algorithm, close the gap between the baseline test performance and the ground truth by over \(40\%\).
* The various Neural PQs outperform the baseline on 26 out of 30 algorithms from the CLRS-30 dataset (Velickovic et al., 2022). The performance gains are not restricted to algorithms that actually use a priority queue.
* Neural PQs also help with long-range reasoning. These help local message-passing networks to capture long-range interaction. Thus, the benefits of using the Neural PQs are not limited to algorithmic reasoning, and these can be used on a variety of other tasks.
## 2 Background
**CLRS Benchmark (Velickovic et al., 2022).** Various prior works have shown the efficiency of GNNs for algorithmic tasks. However, many of these works tend to be disconnected in terms of the algorithms they target, data processing and evaluation, making direct comparisons difficult. To take the first steps towards solving this issue, Velickovic et al. (2022) propose the CLRS Algorithmic Reasoning Benchmark, which consists of 30 algorithms from the 'Introduction to Algorithms' textbook by Cormen et al. (2022). They name this dataset CLRS-30.
The authors employ the encode-process-decode paradigm (Hamrick et al., 2018) and compare different processor network (i.e. GNN) choices. Below we provide some more details on this encode-process-decode setup. Since we focus on the CLRS Benchmark for evaluation, this forms the baseline architectural structure.
Let us take a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), with \(\mathcal{N}_{i}\) denoting the one-hop neighbourhood of node \(i\). Let \(\mathbf{x}_{i}\in\mathbb{R}^{d_{k}}\) be the node features for node \(i\in\mathcal{V}\), \(\mathbf{e}_{ji}\in\mathbb{R}^{d_{e}}\) the edge features for edge \((j,i)\in\mathcal{E}\) and \(\mathbf{g}\in\mathbb{R}^{d_{g}}\) the graph features. The _encode_ step involves encoding these inputs using linear layers \(f_{n}:\mathbb{R}^{d_{k}}\rightarrow\mathbb{R}^{d_{h}}\), \(f_{e}:\mathbb{R}^{d_{e}}\rightarrow\mathbb{R}^{d_{h}}\) and \(f_{g}:\mathbb{R}^{d_{g}}\rightarrow\mathbb{R}^{d_{h}}\):
\[\mathbf{h}_{i}=f_{n}(\mathbf{x}_{i})\qquad\mathbf{h}_{ij}=f_{e}(\mathbf{e}_{ij })\qquad\mathbf{h}_{g}=f_{g}(\mathbf{g}) \tag{1}\]
These are then used by a processor network during the _process_ step. The previous latent features \(\mathbf{h}_{i}^{(t-1)}\) are used along with the current node feature encoding \(\mathbf{h}_{i}\) to get a recurrent encoded input \(\mathbf{z}_{i}^{(t)}\) using a recurrent encoding function \(f_{A}\). This recurrent cell update is in line with the work of Velickovic et al. (2019). A message from node \(i\) to node \(j\), \(\mathbf{m}_{ij}\), is computed for each pair of nodes using a message function \(f_{m}\). These messages are aggregated using a permutation-invariant aggregation function \(\bigoplus\). Finally, a readout function \(f_{r}\) transforms the aggregated messages and node encodings into processed node latent features.
\[\begin{aligned}\mathbf{z}_{i}^{(t)}&=f_{A}(\mathbf{h}_{i},\mathbf{h}_{i}^{(t-1)})&&(2)\\ \mathbf{m}_{ij}&=f_{m}(\mathbf{z}_{i}^{(t)},\mathbf{z}_{j}^{(t)},\mathbf{h}_{ij},\mathbf{h}_{g})&&(3)\\ \mathbf{m}_{i}&=\bigoplus_{j\in\mathcal{N}_{i}}\mathbf{m}_{ji}&&(4)\\ \mathbf{h}_{i}^{(t)}&=f_{r}(\mathbf{z}_{i}^{(t)},\mathbf{m}_{i})&&(5)\end{aligned}\]
For different processors, \(f_{A}\), \(f_{m}\) and \(f_{r}\) may differ.
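For illustration, below is a minimal NumPy sketch of one such process step (Eqs. 2-5), stubbing the learnable functions with single linear maps and using max aggregation; this is a toy rendition under assumed shapes, not the CLRS benchmark code:

```python
# A NumPy sketch of one baseline processor step (Eqs. 2-5).
import numpy as np

rng = np.random.default_rng(0)
n, d_h = 4, 8
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]])                 # adjacency defining N_i
h_enc = rng.normal(size=(n, d_h))              # encoded inputs h_i
h_prev = rng.normal(size=(n, d_h))             # previous latents h_i^(t-1)
W_a = rng.normal(size=(2 * d_h, d_h))
W_m = rng.normal(size=(2 * d_h, d_h))
W_r = rng.normal(size=(2 * d_h, d_h))

z = np.concatenate([h_enc, h_prev], axis=-1) @ W_a          # Eq. 2
senders = np.repeat(z[:, None, :], n, axis=1)               # z_j at [j, i]
receivers = np.repeat(z[None, :, :], n, axis=0)             # z_i at [j, i]
msgs = np.concatenate([senders, receivers], -1) @ W_m       # Eq. 3 (edge/graph terms omitted)
masked = np.where(adj[..., None] == 1, msgs, -np.inf)
m = masked.max(axis=0)                                      # Eq. 4 with max over j in N_i
h_next = np.concatenate([z, m], axis=-1) @ W_r              # Eq. 5
```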
The last _decode_ step consists of using relevant decoding functions to get the required prediction. This might be the predicted hints or predicted output.
## 3 Related Work
Figure 1: **Left:** A GNN processor based on the message passing framework. At each timestep, pair-wise messages are formed using the node features. These messages are aggregated, and then used to update the node features for the next timestep. **Right:** The memory module framework we propose. The memory module takes the node features and previous memory state, to output the next memory state and messages for the GNN processor. These messages are aggregated alongside the traditional node-to-node messages. In this project, we focus on memory modules inspired from priority queues.

The ability of RNNs to work in a sequential manner led to their popularity in previous works for reasoning about algorithms, as algorithms tend to be iterative in nature. Noting that most computer programmes make use of external memory, Graves et al. (2014) proposed the addition of an external memory module to RNNs, which makes reasoning about algorithms easier. Subsequent methods have worked upon this idea and have found success with memory modules inspired from different data structures. These include Stack-Augmented RNNs by Joulin and Mikolov (2015) and Neural DeQues by Grefenstette et al. (2015). A key limitation of all these proposals is that they are only defined for use by RNNs. Unlike GNNs, RNNs are unable to use the structured information about the algorithms' input spaces.
Early explorations on augmenting GNNs with memory focused on the use of internal memory in the form of gates, such as Gated Graph Sequence Networks by Li et al. (2015), and Temporal Graph Networks by Rossi et al. (2020). However, the use of such RNN-like gate mechanisms limits the persistence of the graph/node histories. Persistent Message Passing (PMP) by Strathmann et al. (2021) is a noteworthy GNN that makes use of non-gated persistent external memory, by persisting some of the nodes at each timestep. However, PMPs cannot be applied to non-dynamic graphs without significant effort. Furthermore, they require intermediate-supervision.
## 4 Neural PQ Framework
Previous works have proposed Neural Stacks, Queues and DeQues (Grefenstette et al., 2015; Joulin and Mikolov, 2015) that have an RNN controller. In this project, we propose the use of memory modules, with a focus on differentiable PQs (or Neural PQs), with a GNN model acting as the controller. Furthermore, we propose the integration of such memory modules with the message-passing framework by allowing the Neural PQ to send messages to each node. The setup for this is shown in Figure 2.
### Desiderata
We form the framework with the following desiderata in mind - (1) Memory-Persistence, (2) Permutation-Equivariance, (3) Reducibility to Priority Queues, and (4) No dependence on intermediate supervision. We motivate the need for these below.
**Memory-Persistence.** Memory-persistence is necessary to make full use of the extended capacity provided by external memory modules. This is especially true for models running over several timesteps, with long temporal interactions, where we may want to access memory added at a much earlier timestep. Furthermore, memory-persistence helps avoid over-smoothing. Node embeddings of GNNs tend to start off varied; as more messages are passed, the embeddings tend to converge to each other, making the nodes indistinguishable as the number of layers increases. Memory-persistence would allow GNNs to remember older states, when the embeddings were more distinguished, and use these to promote more varied embeddings, despite the depth of the model.
**Permutation-Equivariance.** A GNN layer is said to be equivariant to permutation of the nodes if and only if any permutation of the node IDs, while maintaining the overall graph structure, leads to the same permutation of the node features. This reflects one of the most basic symmetries of graph structures, and is necessary to ensure that isomorphic graphs receive the same representation, up to certain permutations and transformations. This makes permutation-equivariance an essential property for GNN layers. Thus, we want our Neural PQs to also be equivariant to permutations.
**Priority Queue Alignment.** Priority queues are a general data structure used by various algorithms. Furthermore, different data structures can be modelled using priority queues; for example, stacks and queues are priority queues with the time of insertion as the priority. A memory module that aligns well with priority queues would lead the overall model that uses it to align better with the related algorithms. Algorithmically aligned models lead to greater generalisation (Xu et al., 2019). Thus, this is essential for greater algorithmic reasoning.
**No intermediate supervision requirement.** By not requiring any intermediate supervision, memory modules become easier to apply directly to different algorithmic tasks. This allows us to even use the Neural PQ for reasoning over algorithms that may not use a priority queue themselves. General Neural PQs can be helpful with such tasks due to additional properties, like memory-persistence.
Figure 2: Neural PQ controlled using a GNN processor. At each timestep, the node features are used to pop values from the priority queue. These values are used to form the messages that are sent to the different nodes. The node features and previous state are also used to determine what values to push, and update the priority queue. This uses Neural PQ as the memory module in Figure 1.
### Framework
We present the framework for a Neural PQ controlled by a message-passing GNN below. We use the equations for the baseline message-passing GNN as described in Section 2. Let us suppose we have the same setup as the baseline. The encode and decode parts remain the same, but now the processor uses an implementation of the Neural PQ framework. Let the graph in consideration be \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). Let the previous hidden state of the GNN be \(\mathbf{h}_{i}^{(t-1)}\), and the previous state of the Neural PQ be \(\mathbf{h}_{pq}^{(t-1)}\).
In the Neural PQ framework, we calculate the set of values \(V_{i}\) to be popped for each node \(i\in\mathcal{V}\), using a pop function \(f_{pop}\). Messages are formed from these popped values using a message encoding function \(f_{M}^{(pq)}\). Each node aggregates these messages along with the traditional node-to-node pair-wise messages. Lastly, a push function \(f_{push}\) updates the state of the Neural PQ, to obtain the next state \(\mathbf{h}_{pq}^{(t)}\). Formally, we can define the following equations for the framework:
\[\begin{aligned}\mathbf{z}_{i}^{(t)}&=f_{A}(\mathbf{h}_{i},\mathbf{h}_{i}^{(t-1)})&&(6)\\ \mathbf{m}_{ij}&=f_{m}(\mathbf{z}_{i}^{(t)},\mathbf{z}_{j}^{(t)},\mathbf{h}_{ij},\mathbf{h}_{g})&&(7)\\ V_{i}&=f_{pop}(\mathbf{z}_{i}^{(t)},\mathbf{z}^{(t)},\mathbf{h}_{pq}^{(t-1)})&&(8)\\ M_{i}&=f_{M}^{(pq)}(V_{i},\mathbf{z}_{i}^{(t)})\,\cup\,\{\mathbf{m}_{ji}\,|\,j\in\mathcal{N}_{i}\}&&(9)\\ \mathbf{m}_{i}&=\bigoplus_{\mathbf{m}\in M_{i}}\mathbf{m}&&(10)\\ \mathbf{h}_{i}^{(t)}&=f_{r}(\mathbf{z}_{i}^{(t)},\mathbf{m}_{i})&&(11)\\ \mathbf{h}_{pq}^{(t)}&=f_{push}(\mathbf{h}_{pq}^{(t-1)},\mathbf{z}^{(t)})&&(12)\end{aligned}\]
where \(\mathbf{z}^{(t)}\) is a multi-set of all the encoded inputs \(\mathbf{z}_{i}^{(t)}\), i.e. \(\mathbf{z}^{(t)}=\{\{\mathbf{z}_{i}^{(t)}\,|\,i\in\mathcal{V}\}\}\). \(f_{A},f_{m}\) and \(f_{r}\) depend on which message-passing GNN processor we choose, while \(f_{pop}\), \(f_{M}^{(pq)}\) and \(f_{push}\) depend on the Neural PQ implementation.
Note that, in the above framework, we choose to delay the update of the priority queue due to the pop operation until \(f_{push}\). This is done to keep the Neural PQ framework general and to segregate the queue read and update operations. It also allows us to prove the permutation-equivariance properties of the framework, as discussed below.
Even though the presented framework is inspired by priority queues, we can implement various other data structures, like queues and stacks, through appropriate \(f_{pop}\) and \(f_{push}\) definitions. The Neural PQ framework exhibits and promotes various properties from the desiderata. By design, implementations do not require any additional supervision. Furthermore, since the push, pop and message encoding functions only depend on the destination node's features, the multi-set of all node features and the Neural PQ state, all implementations are also equivariant to permutations of the nodes, under certain assumptions. For a detailed proof, refer to Appendix A.
## 5 NPQ
We propose NPQ, an implementation following the aforementioned Neural PQ framework that exhibits all the proposed desiderata. We divide the overall definition of NPQ into four parts - (1) State, (2) Pop Function, (3) Message Encoding Function, and (4) Push Function. Taking inspiration from Neural DeQues (Grefenstette et al., 2015), NPQs consist of continuous push and pop operations.
### State
The state of the NPQ must hold all the memory values pushed into the queue. Alongside these values, since we define continuous push and pop operations, we also need to keep track of the strengths/proportions of each queue element still present. We can represent this state \(\mathbf{h}_{pq}^{(t)}\) as a tuple of the list of memory values \(\mathbf{v}^{(t)}=[\mathbf{v}_{1}^{(t)},\ldots,\mathbf{v}_{i}^{(t)},\ldots]\) and the list of strengths of these memory values \(\mathbf{s}^{(t)}=[\mathbf{s}_{1}^{(t)},\ldots,\mathbf{s}_{i}^{(t)},\ldots]\). The \(i\)th element of \(\mathbf{v}^{(t)}\) is \(\mathbf{v}_{i}^{(t)}\), the value of the \(i\)th element of the priority queue, and the \(i\)th element of \(\mathbf{s}^{(t)}\) is \(\mathbf{s}_{i}^{(t)}\in(0,1)\), the strength of the \(i\)th element of the priority queue.
\[\mathbf{h}_{pq}^{(t)}=\langle\mathbf{v}^{(t)},\mathbf{s}^{(t)}\rangle \tag{13}\]
where \(\langle\cdot\rangle\) is a tuple. Figure 3 shows a sample state for the NPQ.
### Pop function
We propose a continuous pop function, i.e. we pop fractional proportions of the values in the queue. This fraction, \(s_{pop}^{(i)}\in(0,1)\) for node \(i\), is computed as noted in Equation 14. This equation is similar to the ones used for Neural DeQues by Grefenstette et al. (2015).
\[s_{pop}^{(i)}=\text{ sigmoid}\left(f_{s}^{(pop)}(\mathbf{z}_{i}^{(t)})\right) \tag{14}\]
Figure 3: **State of the NPQ.** It consists of two lists of the same length, representing the values \(\mathbf{v}\) in the queue and their respective strengths \(\mathbf{s}\). In the above example, the NPQ consists of three elements, \(\mathbf{v}_{1}\), \(\mathbf{v}_{2}\) and \(\mathbf{v}_{3}\), with strengths \(0.6\), \(0.8\) and \(0.3\) respectively.

We use a request-grant framework to maintain the constraint that no value can be popped more than it is present, i.e. one cannot pop \(0.7\) of a value \(v_{j}\) that may be present in the PQ with only a strength of \(s_{j}=0.4\). Each node \(i\in\mathcal{V}\) requests to pop a fraction \(p_{j}^{(i)}\in(0,1)\) of PQ element \(j\). NPQ takes all the \(p_{j}^{(i)}\) values into consideration and grants a fraction \(q_{j}^{(i)}\in(0,1)\) of PQ element \(j\) to node \(i\), which may or may not be the same as the requested \(p_{j}^{(i)}\). Equation 15 shows the calculation of this granted fraction given the requested fractions \(p_{j}^{(i)}\) and the PQ element strengths \(\mathbf{s}_{j}^{(t-1)}\).
\[q_{j}^{(i)}=\left\{\begin{array}{cl}p_{j}^{(i)}&\text{, if }\sum_{k\in\mathcal{V }}p_{j}^{(k)}\leq\mathbf{s}_{j}^{(t-1)}\\ \frac{p_{j}^{(i)}}{\sum_{k\in\mathcal{V}}p_{j}^{(k)}}\cdot\mathbf{s}_{j}^{(t- 1)}&\text{, else}\end{array}\right. \tag{15}\]
The main idea behind this equation is that ideally we would want to satisfy each pop request \(p_{j}^{(i)}\). We cannot do that when the sum of pop requests is greater than the strength with which the element is present in the PQ. In this case, we completely pop the element and return to each node a fraction of this value in proportion to the strength it requested. The equation maintains the requirements that \(q_{j}^{(i)}\leq p_{j}^{(i)}\) and \(\sum_{i\in\mathcal{V}}q_{j}^{(i)}\leq\mathbf{s}_{j}^{(t-1)}\). These granted proportions are used to calculate the final value popped.
\[\mathbf{v}=\sum_{j\in\mathcal{I}_{pq}^{(t-1)}}q_{j}^{(i)}\cdot \mathbf{v}_{j}^{(t-1)} \tag{16}\] \[f_{pop}(\mathbf{z}_{i}^{(t)},\mathbf{z}^{(t)},\left\langle \mathbf{v}^{(t-1)},\mathbf{s}^{(t-1)}\right\rangle)=\{\mathbf{v}\} \tag{17}\]
where \(\mathcal{I}_{pq}^{(t-1)}=\left[1,\ldots,|\mathbf{v}^{(t-1)}|\right]\) is the set of indices for the NPQ. Note that since we are only popping a single value from the NPQ, we return a single-element set. The requested pop proportions \(p_{j}^{(i)}\) are calculated using the continuous pop strength value \(s_{pop}^{(i)}\), and attention coefficients \(c_{j}^{(i)}\in(0,1)\), denoting the coefficient for the \(j\)th element of the queue with respect to the \(i\)th node. These are calculated using a multi-head additive attention mechanism (Bahdanau et al., 2014; Vaswani et al., 2017). This is done with inspiration from GATs by Velickovic et al. (2018).
\[e_{j}^{(i,h)}= \text{ LeakyReLU}\left(f_{a_{1}}^{(h)}(\mathbf{z}_{i}^{(t)})+f_{a_{2}}^{(h) }(\mathbf{v}_{j}^{(t-1)})\right) \tag{18}\] \[\alpha_{j}^{(i,h)}= \text{ softmax}_{j}\left(e_{j}^{(i,h)}\right)\] (19) \[c_{j}^{(i)}= \text{ softmax}_{j}\left(f_{a}([\alpha_{j}^{(i,1)},\ldots,\alpha_ {j}^{(i,h)},\ldots])\right) \tag{20}\]
where \(\alpha_{j}^{(i,h)}\in(0,1)\) is the attention coefficients for \(j\)th element of the queue with respect to node \(i\) via attention-head \(h\), and \(f_{a_{1}}^{(h)}\), \(f_{a_{2}}^{(h)}\) and \(f_{a}\) are linear layers.
Using these coefficients, we propose two ways of popping elements from the queue - Max Popping and Weighted Popping. We refer to NPQ using max popping and weighted popping as NPQ\({}_{\text{M}}\) and NPQ\({}_{\text{W}}\), respectively.
#### Max Popping
The element \(j\) of the queue with the highest attention coefficient \(c_{j}^{(i)}\) is requested to be popped for the node \(i\).
\[k =\operatorname*{argmax}_{k\in\mathcal{I}_{pq}^{(t-1)}}c_{k}^{(i)} \tag{21}\] \[p_{j}^{(i)} =s_{pop}^{(i)}\cdot\mathbb{I}_{\{k\}}\left(j\right) \tag{22}\]
where \(\mathbb{I}_{A}(\cdot)\) is the indicator function for set \(A\), i.e. \(\mathbb{I}_{A}(a)=1\iff a\in A\) and \(\mathbb{I}_{A}(a)=0\iff a\notin A\).
#### Weighted Popping
The attention coefficients are treated as soft-weights with which each element in the PQ is requested to be popped.
\[p_{j}^{(i)}=s_{pop}^{(i)}\cdot c_{j}^{(i)} \tag{23}\]
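A minimal NumPy sketch of the full pop pathway (the popping rules of Eqs. 22-23 and the request-grant rule of Eq. 15) is given below; the attention and pop-strength networks are stubbed with simple projections for illustration, so only the request-grant arithmetic follows the text exactly.

```python
# A NumPy sketch of the continuous pop (Eqs. 14-23).
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 3, 4, 8                         # nodes, queue length, feature dim
z = rng.normal(size=(n, d))               # node embeddings z_i
v = rng.normal(size=(k, d))               # queue values v_j
s = np.array([0.6, 0.8, 0.3, 0.9])        # queue strengths s_j

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

c = softmax(z @ v.T, axis=1)                   # attention coefficients c_j^(i) (stub)
s_pop = 1.0 / (1.0 + np.exp(-z.sum(axis=1)))   # pop strengths (stub for Eq. 14)

p_weighted = s_pop[:, None] * c           # Eq. 23: weighted popping
p_max = np.zeros_like(c)                  # Eq. 22: max popping
p_max[np.arange(n), c.argmax(axis=1)] = s_pop

def grant(p, s):
    total = p.sum(axis=0)                 # total request per queue element
    scale = np.where(total > s, s / np.maximum(total, 1e-12), 1.0)
    return p * scale                      # Eq. 15: granted fractions q_j^(i)

q = grant(p_max, s)
popped = q @ v                            # Eq. 16: value popped for each node
```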
Figure 4 shows a sample pop operation for NPQ.
### Priority Queue Message Function
We use a simple message encoding function, where each output is passed through a linear layer \(f_{m}^{(pq)}\).
\[f_{M}^{(pq)}(V_{i},\mathbf{z}_{i}^{(t)})=\left\{f_{m}^{(pq)}(\mathbf{v})\,|\, \mathbf{v}\in V_{i}\right\} \tag{24}\]
### Push function
Figure 4: Sample **pop operation** for a single node \(i\) in NPQ\({}_{\text{M}}\). **Left:** Pop-request \(p_{j}^{(i)}\) generation. First we compute the attention coefficients \(c_{j}^{(i)}\) using the node features \(\mathbf{z}_{i}\) and the memory values \(\mathbf{v}_{j}^{(t-1)}\). We also calculate the pop-strength \(s_{pop}^{(i)}\). The coefficients and the pop strength are together used to determine the pop fractions to request. NPQ\({}_{\text{M}}\) uses Max Popping, which only pops the element with the highest attention coefficient, in this case \(\mathbf{v}_{2}\). Thus, we request popping of only this element with strength \(s_{pop}^{(i)}\). **Right:** Using pop requests from all the nodes and the strengths of each value currently in the queue, NPQ grants a certain fraction \(q_{j}^{(i)}\) to node \(i\) to pop element \(j\). We use this granted proportion to determine the value popped.

As mentioned earlier, the push function is actually the state update function. Here we first delete the popped proportions from the NPQ. Let \(\mathbf{h}_{pq}^{(t-1)}=\left\langle\mathbf{v}^{(t-1)},\mathbf{s}^{(t-1)}\right\rangle\) be the previous NPQ state. Then, we can define the NPQ state with the popped proportions deleted as \(\langle\mathbf{v^{\prime}},\mathbf{s^{\prime}}\rangle\), calculated as below.
\[\begin{aligned}\mathbf{s}^{\prime}_{i}&=\mathbf{s}_{i}^{(t-1)}-\sum_{k\in\mathcal{V}}q_{i}^{(k)}&&(25)\\ \mathbf{s}^{\prime}&=\text{nonzero}_{i}\left(\mathbf{s}^{\prime}_{i}\right)&&(26)\\ \mathbf{v}^{\prime}&=\mathbf{v}^{(t-1)}[\text{arg-nonzero}_{i}\left(\mathbf{s}^{\prime}_{i}\right)]&&(27)\end{aligned}\]
where \(q_{i}^{(k)}\in(0,1)\) is the proportion NPQ element \(i\) granted to be popped for node \(k\) as defined in Equation 15, \(\text{nonzero}_{i}(\mathbf{s^{\prime}_{i}})\) is sequence of \(\mathbf{s^{\prime}_{i}}\) with all zero \(\mathbf{s^{\prime}_{i}}\) removed, and similarly, \(\text{arg-nonzero}_{i}\) is the relevant indices of the sequence.
We push a single value \(\mathbf{v}\) for the whole graph. To determine this value, we pass each node embedding through a linear layer \(f_{v}\) and sum the resulting values across all the nodes. In line with Neural DeQues by Grefenstette et al. (2015), this value is activated using a tanh function to get the final value to be pushed.
The push function is continuous and so requires calculation of the push strength \(\mathbf{s}_{push}\). This is done in a similar manner to the push values calculation, using a linear layer \(f_{s}^{(push)}\). We use a logistic sigmoid activation here instead of tanh, akin to Neural DeQues.
\[\begin{aligned}\mathbf{v}&=\text{tanh}\left(\sum_{i\in\mathcal{V}}f_{v}(\mathbf{z}_{i}^{(t)})\right)&&(28)\\ \mathbf{s}_{push}&=\text{sigmoid}\left(\sum_{i\in\mathcal{V}}f_{s}^{(push)}(\mathbf{z}_{i}^{(t)})\right)&&(29)\\ f_{push}(\langle\mathbf{v}^{(t-1)},\mathbf{s}^{(t-1)}\rangle,\mathbf{z}^{(t)})&=\langle\mathbf{v}^{\prime}\,||\,[\mathbf{v}],\ \mathbf{s}^{\prime}\,||\,[\mathbf{s}_{push}]\rangle&&(30)\end{aligned}\]
Note that in the above equations, we use \(q_{i}^{(k)}\) and \(\mathbf{z}_{i}^{(t)}\), which are not actually inputs to the \(f_{push}\) function. This is done mainly to maintain readability of the functions. These equations can be easily reformulated to only use \(\textbf{h}_{pq}^{(t-1)}\) and \(\mathbf{z}^{(t)}\), in order to follow the general Neural PQ framework. Refer to Appendix B for the reformulation.
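Continuing the pop sketch above (reusing its `z`, `v`, `s`, `q`, `d`, and `rng`), the push/state update of Eqs. 25-30 can be sketched as follows, again with the value/strength networks stubbed by random projections:

```python
# A NumPy sketch of the push/state update (Eqs. 25-30).
s_new = s - q.sum(axis=0)                     # Eq. 25: subtract granted fractions
keep = s_new > 1e-9                           # Eqs. 26-27: drop exhausted elements
v_kept, s_kept = v[keep], s_new[keep]

v_push = np.tanh((z @ rng.normal(size=(d, d))).sum(axis=0))          # Eq. 28 (stub f_v)
s_push = 1.0 / (1.0 + np.exp(-(z @ rng.normal(size=(d, 1))).sum()))  # Eq. 29 (stub)

v_next = np.vstack([v_kept, v_push])          # Eq. 30: append pushed value
s_next = np.append(s_kept, s_push)
```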
### Properties
Simply by virtue of following the Neural PQ framework, NPQ exhibits two of the desiderata - permutation-equivariance and no dependence on intermediate supervision. We do not update or replace the previously stored NPQ elements, but rather persist them as long as possible, and only delete their proportions when we pop them. This allows NPQ to achieve much greater memory-persistence than gated memories.
Lastly, the push and pop operations of the NPQ are defined to align closely with the push and pop operations of a traditional priority queue. In fact, under some assumptions, we can prove that \(\text{NPQ}_{\text{M}}\) can be reduced to a traditional priority queue. This can be done by taking the push and pop functions to be encoding the key-value pairs for the priority queue elements. For a detailed proof, refer to Appendix C.
Thus, NPQ satisfies the four stated desiderata.
### Variants
We also explore some variations of the proposed NPQ. One such variant achieves greater memory-persistence by not deleting the popped elements. We refer to this variation as NPQ-P.
Notably, NPQ treats popping as a node-wise activity. We can instead treat popping as a graph operation, i.e. each node receives the same set of popped values. This can be done by either sending all the node-wise popped values to all the nodes, or by popping a single value for all the nodes. We refer to these two variants as NPQ-SA and NPQ-SV, respectively.
Empirically, we found these latter two variants more useful when combined with the first one. We refer to these combined variations as NPQ-P-SA and NPQ-P-SV, respectively. For the exact equations for the variants, refer to Appendix D-F.
Figure 5: Sample **push operation**. The overall push operation can be divided into two main steps. **Step 1:** In the first step, we update the queue strengths to reflect the removal of popped fractions of each element. This is done by summing the granted pop fractions \(q_{j}^{(i)}\) from all nodes. The granted pop-fractions \(q_{j}^{(i)}\) can be calculated from the previous NPQ state and the node embeddings \(\mathbf{z}\). These aggregated pop-grants are then subtracted from the strengths of the respective queue elements. Some elements might end up with \(0\) strength, and these are then removed from the queue, as shown with the greyed-out value \(\mathbf{v}_{2}\) here. **Step 2:** The next step is to actually push a value into the queue. The value to be pushed and its strength are determined by the node embeddings \(\mathbf{z}\).
## 6 Evaluation
The main hypothesis we test is whether the Neural PQ implementations are useful for algorithmic reasoning, using the CLRS-30 dataset (Velickovic et al., 2022). To do so, we undertake multiple experiments - (1) We first focus on a single algorithm, Dijkstra's Shortest Path algorithm, evaluating the performance of the Neural PQs with an MLP MPNN as the base GNN, comparing them with the MPNN baseline as well as an MLP MPNN with an oracle priority queue. (2) We also evaluate the performance of the Neural PQs on the rest of the algorithms from the CLRS benchmark. (3) Lastly, we test whether the Neural PQs are useful for long-range reasoning, by evaluating their performance on a dataset from the Long Range Graph Benchmark (Dwivedi et al., 2022). Appendix G reports additional experiments.
### Dijkstra's Algorithm - MPNN Base
We train the models on Dijkstra's algorithm from CLRS-30, and test for out-of-distribution (OOD) generalisation, i.e. the models are trained on smaller input graphs, containing 16 nodes, and tested on larger graphs, containing 256 nodes. The training data consists of 1000 samples, while the testing and validation data consist of 32 samples each. We test the models on larger graph sizes than done by Velickovic et al. (2022) (they use graphs with 64 nodes) to better test the generalisation ability, and because the baseline MPNN model already achieves around \(91.5\%\) test performance with 64 nodes.
To test the limit of attainable performance from Neural PQs, we test an MPNN with access to an Oracle PQ, where apart from the standard input features, we also take information about the values pushed and popped from the priority queue as input. The Oracle Neural PQ forces the push and pop operation to be determined by the algorithmic PQ. This information about the actual PQ is used in training, validation as well as testing.
Table 1 shows the test performance of the different models. We see that the last model performs much better than the early-stopped model for the baseline and Oracle PQ. Notably, the last and early-stopped models perform similarly for NPQ\({}_{\text{W}}\). NPQ\({}_{\text{W}}\) outperforms the baseline as well as the Oracle PQ. In fact, we see that it **closes the gap** between the test performance of the baseline MPNN and the true solution, i.e. 100% test performance, by over **40\(\%\)**.
### Different Algorithms from CLRS-30
We train and test five models for each algorithm from CLRS-30 dataset - 'Baseline' (no memory module), NPQ\({}_{\text{M}}\)-P-SA, NPQ\({}_{\text{W}}\)-P-SV, NPQ\({}_{\text{W}}\) and NPQ\({}_{\text{M}}\). We train each model on graphs with 16 nodes, and test them on graphs with 128 nodes, and consider only the early-stopped models.
Figure 6 shows the comparison for best performing Neural PQ and the baseline MPNN for each algorithm. We see that for **26 out of the 30 algorithms, at least one of the Neural PQs outperforms the baseline MPNN**. Interestingly, the optimal Neural PQ version depends on the algorithm of choice. Notably, the performance gain of using a Neural PQ does not seem to be limited to algorithms that use a traditional priority queue. This supports our belief that the Neural PQ implementations are quite general, and these can perform various roles, such as acting as a traditional data structure, or a persistent-memory for accessing past overall graph states. We provide the table with algorithm-wise performance of each Neural PQ in Appendix G. Focussing on NPQ\({}_{\text{M}}\), we found that it outperforms the baseline for \(17\) algorithms (more than half of the algorithms). We see that for \(12\) algorithms, it improves the performance or closes the gap to true prediction by at least \(10\%\). For \(4\) algorithms, it improves performance/reduces the gap by at least \(50\%\).
### Long-Range Reasoning
Message-passing based GNNs exchange information between 1-hop neighbours to build node representations at each layer. Past works have shown that such information propagation leads to over-squashing when the path of information traversal is long, and so such models perform poorly on tasks requiring long-range interaction (Alon and Yahav, 2021; Dwivedi et al., 2022). Dwivedi et al. (2022) have proposed a collection of graph learning datasets to form 'Long Range Graph Benchmark' (LRGB), each of which arguably require long-range interaction reasoning to achieve strong performance. In these experiments, we test the performance of using Neural PQs on Peptides-struct dataset from the LRGB benchmark.
Figure 7 shows the test MAE results for the different Neural PQs and the baseline. Notably, all Neural PQs outperform the baseline for GATv2 processor, while only NPQ\({}_{\text{W}}\)-P
\begin{table}
\begin{tabular}{l l l} \hline
**Method** & **Best** & **Last** \\ \hline Baseline & \(68.58\%\pm 9.71\) & \(76.97\%\pm 4.38\) \\ NPQ\({}_{\text{W}}\) & \(\mathbf{85.48\%\pm 3.50}\) & \(\mathbf{86.22\%\pm 2.20}\) \\ NPQ\({}_{\text{M}}\) & \(74.54\%\pm 8.37\) & \(74.68\%\pm 3.06\) \\ NPQ\({}_{\text{W}}\)-SA & \(79.74\%\pm 2.99\) & \(69.36\%\pm 12.02\) \\ NPQ\({}_{\text{W}}\)-SV & \(77.04\%\pm 2.99\) & \(79.89\%\pm 6.28\) \\ NPQ\({}_{\text{M}}\)-P-SA & \(79.19\%\pm 5.17\) & \(78.26\%\pm 6.19\) \\ NPQ\({}_{\text{W}}\)-P-SV & \(71.46\%\pm 6.75\) & \(79.44\%\pm 5.90\) \\ \hline \hline Oracle PQ & \(75.85\%\pm 3.65\) & \(85.37\%\pm 3.72\) \\ \hline \end{tabular}
\end{table}
Table 1: Test performance (Mean \(\pm\) Standard Deviation) of models with MLP MPNN base on learning Dijkstra’s Algorithm with 256 node graphs, run with 3 different seeds. The table shows the results for the _Best_ validation score model (early-stopped model) and the _Last_ model in training.
SV and \(\text{NPQ}_{\text{W}}\) outperform the baseline on the other two processors. The success of \(\text{NPQ}_{\text{W}}\)-P-SV and \(\text{NPQ}_{\text{W}}\) means that these Neural PQs are empirically helping the models with long-range reasoning. Notably, we see that Weighted popping seems more useful for long-range reasoning.
## 7 Conclusion and Future Works
External memory modules have helped traditional RNNs improve their algorithmic reasoning capabilities. A natural hypothesis would be that external memory modules can also help graph neural networks (GNNs) with algorithmic reasoning. However, this remains a largely unexplored domain. In this paper, we proposed Neural PQs, a general framework for adding memory modules to GNNs, with inspirations from traditional priority queues. We proposed and motivated a set of desiderata for memory modules, and presented NPQ, an implementation within the Neural PQ framework that exhibits these desiderata.
We empirically show that NPQs indeed help with algorithmic reasoning and, without any extra supervision, match the performance of the baseline model that has access to true priority queue operations on Dijkstra's algorithm. The performance gains are not limited to algorithms using priority queues. Furthermore, we show that the Neural PQs help with capturing long-range interactions, as demonstrated by their performance on the Peptides-struct dataset from the Long-Range Graph Benchmark.
The success of the Neural PQs has a broad impact on the field of representation learning. It opens up a research domain exploring the use of memory modules with GNNs, especially their interfacing with the message-passing framework. The Neural PQs take crucial steps towards advancing the neural algorithmic reasoning field. They also hold potential for various other fields and tasks, as seen by their performance on the long-range reasoning task.
We have limited our focus to simple memory module operations. Potential future work could involve exploring more complicated definitions. These definitions might be formed by analysing the reasons behind the greater success of Neural PQs on some algorithms as opposed to others. Neural PQs can also be used for various other graph tasks, and it would be interesting to explore their use in these settings.
Figure 6: Evaluation results for best performing Neural PQ and the baseline MPNN model for the 30 algorithms from CLRS-30, sorted by the relative improvement in performance.
Figure 7: Test MAE (Mean \(\pm\) Standard Deviation) of different Neural PQs with different base processors on Peptides-struct dataset, run with 3 different seeds. **Lower** the test MAE, **better** is the performance.
## Acknowledgements
We thank Adria Puigdomenech and Karl Tuyls for reviewing the paper prior to the submission.
|
2310.05495 | On the Convergence of Federated Averaging under Partial Participation
for Over-parameterized Neural Networks | Federated learning (FL) is a widely employed distributed paradigm for
collaboratively training machine learning models from multiple clients without
sharing local data. In practice, FL encounters challenges in dealing with
partial client participation due to the limited bandwidth, intermittent
connection and strict synchronized delay. Simultaneously, there exist few
theoretical convergence guarantees in this practical setting, especially when
associated with the non-convex optimization of neural networks. To bridge this
gap, we focus on the training problem of federated averaging (FedAvg) method
for two canonical models: a deep linear network and a two-layer ReLU network.
Under the over-parameterized assumption, we provably show that FedAvg converges
to a global minimum at a linear rate $\mathcal{O}\left((1-\frac{\min_{i \in
[t]}|S_i|}{N^2})^t\right)$ after $t$ iterations, where $N$ is the number of
clients and $|S_i|$ is the number of the participated clients in the $i$-th
iteration. Experimental evaluations confirm our theoretical results. | Xin Liu, Wei li, Dazhi Zhan, Yu Pan, Xin Ma, Yu Ding, Zhisong Pan | 2023-10-09T07:56:56Z | http://arxiv.org/abs/2310.05495v2 | # A Neural Tangent Kernel View on Federated Averaging for Deep Linear Neural Network
###### Abstract
Federated averaging (FedAvg) is a widely employed paradigm for collaboratively training models from distributed clients without sharing data. Nowadays, the neural network has achieved remarkable success due to its extraordinary performance, which makes it a preferred choice as the model in FedAvg. However, the optimization problem of the neural network is often non-convex and even non-smooth. Furthermore, FedAvg always involves multiple clients and local updates, which results in an inaccurate updating direction. These properties bring difficulties in analyzing the convergence of FedAvg in training neural networks. Recently, neural tangent kernel (NTK) theory has been proposed for understanding the convergence of first-order methods on the non-convex problems posed by neural networks. The deep linear neural network is a classical model in the theoretical community due to its simple formulation. Nevertheless, there exists no theoretical result for the convergence of FedAvg in training the deep linear neural network. By applying NTK theory, we take a further step and provide the first theoretical guarantee for the global convergence of FedAvg in training deep linear neural networks. Specifically, we prove that FedAvg converges to the global minimum at a linear rate \(\mathcal{O}\big{(}(1-\eta K/N)^{t}\big{)}\), where \(t\) is the number of iterations, \(\eta\) is the learning rate, \(N\) is the number of clients and \(K\) is the number of local updates. Finally, experimental evaluations on two benchmark datasets are conducted to empirically validate the correctness of our theoretical findings.
Federated learning, Deep linear neural network, Neural tangent kernel
## I Introduction
Federated learning (FL) is an efficient and privacy-preserving distributed learning scheme. In the framework of centralized distributed learning, there exists a server node that communicates with distributed clients. In general, the server updates the global model by collecting local data from clients. However, this approach has several drawbacks. Firstly, it is not suitable for scenarios with high privacy requirements. For instance, the diagnosis data of a patient cannot be shared and collected due to regulations [1], which prevents data collection from clients. Secondly, some distributed devices are sensitive to communication and energy costs, such as wireless sensors, which have limited battery capacity and large network latency. As a result, it is difficult to support frequent communication between the server and clients. To address these problems, McMahan _et al._[2] proposed the Federated Averaging (FedAvg) algorithm, in which the updating of the model is conducted on clients to avoid transmitting private information. Meanwhile, each client performs multiple training steps on its local dataset before uploading the local model to the server, which reduces the communication burden.
With the widespread practice of FedAvg, there is a growing interest in providing theoretical guarantees for its convergence. Under convex and smooth assumptions, previous work [3, 4] has proven the convergence of FedAvg based on gradient descent (GD) and stochastic gradient descent (SGD). Nowadays, the neural network has achieved great success due to its extraordinary performance, which makes it a preferred model choice in FedAvg. Recently, progress has been made in analyzing the convergence of gradient-based methods for training over-parameterized neural networks [5, 6, 7, 8, 9, 10], whose number of parameters is significantly larger than that of the training samples. For federated learning, researchers extended these theoretical results to consider the optimization problem of FedAvg in training the two-layer ReLU neural network [11] and the deep ReLU neural network [12, 13]. However, there is no guarantee for the convergence rate of FedAvg in training the deep linear neural network. Despite the simplicity of its formulation, the deep linear neural network has attracted considerable attention due to its hierarchical structure similar to non-linear networks, as well as its high-dimensional and non-convex objective function, which makes it a representative model in the theoretical community [14, 15, 16, 10].
In this paper, we focus on the optimization problem of FedAvg with GD in training the deep fully-connected linear neural network. Specifically, our contributions are as follows
* We first establish the recursive formulation of the residual error of the FedAvg in training the deep linear neural network, which is closely related to an asymmetric coefficient matrix. Based on the over-parameterized assumption, the spectrum of the coefficient matrix is derived using NTK theory.
* Then, we provide the first theoretical guarantee for the global convergence of FedAvg by analyzing its residual error. Specifically, FedAvg can converge to a global minimum at a non-asymptotic linear rate \(\mathcal{O}\big{(}(1-\eta K/N)^{t}\big{)}\), where \(t\) is the number of iterations, \(\eta\) is the learning rate, \(N\) is the number of clients and \(K\) is the number of local updates. Moreover, the required network width only has
a linear dependence on the network depth.
* Finally, we conduct experiments on two benchmark datasets to empirically validate our theoretical findings. The result shows the correctness of our theoretical findings.
## II Related works
### _Federated learning_
The server node uses the collected client-side models to update the global model, thereby enabling the acquisition of a global model without accessing private data. To further reduce communication costs, [17] exploited model sparsification and quantization compression while minimising the impact on model performance. To address the problem of limited communication and computational resources, [18] proposed an adaptive approach that strikes a balance between the number of local update rounds and communications. [19, 20] proposed gradient compression methods to reduce the upload of server data on client nodes. To address security concerns in federated learning, [21] considered a secure aggregation approach. [22, 23] exploited differential privacy (DP) by adding noise. [11] investigated the convergence of the FedAvg algorithm when training an over-parameterised two-layer ReLU neural network, demonstrating that the algorithm can converge to a global minimum at a linear rate. Furthermore, [12] analysed FedAvg in training a deep ReLU fully connected neural network, establishing its convergence rates.
### _Deep linear networks_
The deep linear network implements only a linear map between the input and output spaces, which leads to a simple formulation of the optimization problem. However, its loss landscape is non-convex due to the existence of the layered structure. Recent works mainly focus on investigating its characteristics to promote the understanding of practical non-linear networks. One line of research analyzed the properties of critical points (e.g., where the gradient is zero). In 1989, Baldi _et al._[24] conjectured that all local minima (e.g., where the gradient is zero and the Hessian is positive semidefinite) of deep linear networks are global minima. Kawaguchi [25] was the first to prove this conjecture by extracting the second-order information from the local minimum. Subsequent work extended this result to different settings, such as fewer assumptions [26] and arbitrary loss functions [27]. However, the above results are insufficient to provide convergence guarantees for gradient-based optimization methods.
To establish theoretical convergence results for the deep linear network, some work considered the trajectory induced by gradient-based methods. Shamir [28] showed that GD requires at least \(exp\big{(}\Omega(L)\big{)}\) iterations to converge for a \(L\)-layer linear network, where all layers have a width of 1 and the initialization scheme uses the standard Gaussian. However, this result only applies to narrow networks. Later, Du _et al._[29] proved that GD exhibits a linear convergence to the global optimum for a wide deep linear network, but the lower bound of the width scales linearly in the depth. Hu _et al._[30] analyzed the effect of the orthogonal initialization in the deep linear network, showing that the required width is independent of the depth. Zou _et al._[14] provided the convergence analysis for both GD and stochastic GD in the training of deep linear ResNets.
## III Problem Setting
### _Notations_
We use lowercase, lowercase boldface and uppercase boldface letters to denote scalars, vectors and matrices, respectively. The other notations are introduced in the Table I.
### _Preliminaries_
The FedAvg involves the following four steps:
1. In the \(t\)-th global round, the server broadcasts the global model to each client.
2. Each client \(c\) initializes its local model with the global model and updates using GD for \(K\) local steps.
3. After all local updates finish, each client sends its local model to the server.
4. Finally, the server averages all local parameters to derive a new global model.
FedAvg repeats the above procedures with \(T\) global rounds. The framework of the algorithm is provided in Fig 1.
For the architecture of the model, we consider a \(L\)-layer fully-connected linear neural network \(f:\mathbb{R}^{d_{in}}\rightarrow\mathbb{R}^{d_{out}}\)
\[f(\mathbf{W}^{1},\cdots\mathbf{W}^{L};\mathbf{x}):=\frac{1}{\sqrt{m^{L-1}d_{out}}}\mathbf{W} ^{L}\cdots\mathbf{W}^{1}\mathbf{x}, \tag{1}\]
\begin{table}
\begin{tabular}{|c|l|} \hline
**Notation** & **Meaning** \\ \hline \(\otimes\) & The Kronecker product. \\ \(\|\cdot\|\) & The \(\ell_{2}\) norm of a vector or the spectral norm of a matrix. \\ \(\|\cdot\|_{F}\) & The Frobenius norm of a matrix. \\ \(\operatorname{vec}(\cdot)\) & The vectorization of a matrix in the column-first order. \\ \(rank(\cdot)\) & The rank of a matrix. \\ \(\lambda_{max}(\cdot)\), \(\lambda_{min}(\cdot)\) & Largest and smallest eigenvalues of a matrix. \\ \(\sigma_{max}(\cdot)\), \(\sigma_{min}(\cdot)\) & Largest and smallest singular values of a matrix. \\ \(\kappa(\cdot)\) & The condition number of a matrix. \\ \(\mathcal{N}(0,1)\) & The standard Gaussian distribution. \\ \(N\) & The number of clients. \\ \(c\) & The index of a client. \\ \(T\) & The number of communication rounds. \\ \(t\) & The index of a communication round. \\ \(K\) & The number of local update steps. \\ \(k\) & The index of a local update step. \\ \(\mathbf{X},\mathbf{Y}\) & The feature matrix and the ground truth. \\ \(\mathbf{X}_{c},\mathbf{Y}_{c}\) & The feature matrix and the ground truth on the \(c\)-th client. \\ \(\mathbf{W}^{i}_{k,c}(t)\) & The parameter of the \(i\)-th layer of the \(c\)-th client in the \(k\)-th local step of the \(t\)-th global round. \\ \(\overline{\mathbf{W}}^{i}(t)\) & The \(i\)-th layer of the global model in the \(t\)-th global round. \\ \(\overline{\mathbf{U}}(t)\) & Outputs of the server model in the \(t\)-th global round. \\ \(\overline{\mathbf{U}}_{c}(t)\) & Outputs of the server model on \(\mathbf{X}_{c}\) in the \(t\)-th global round. \\ \(\mathbf{U}_{k,c}(t)\) & Outputs of the \(c\)-th client in the \(k\)-th local step of the \(t\)-th global round. \\ \(\mathbf{U}_{k}(t)\) & The concatenation of the outputs of all clients in the \(k\)-th local step of the \(t\)-th global round. \\ \(\overline{\mathbf{\xi}}(t)\), \(\overline{\mathbf{\xi}}_{c}(t)\) & The residual errors of the global model, \(\overline{\mathbf{\xi}}(t)=\overline{\mathbf{U}}(t)-\mathbf{Y}\) and \(\overline{\mathbf{\xi}}_{c}(t)=\overline{\mathbf{U}}_{c}(t)-\mathbf{Y}_{c}\). \\ \(\mathbf{\xi}_{k,c}(t)\), \(\mathbf{\xi}_{k}(t)\) & The residual errors of the local models, \(\mathbf{\xi}_{k,c}(t)=\mathbf{U}_{k,c}(t)-\mathbf{Y}_{c}\) and \(\mathbf{\xi}_{k}(t)=\mathbf{U}_{k}(t)-\mathbf{Y}\). \\ \hline \end{tabular}
\end{table} TABLE I: Summary of notations
where \(\mathbf{x}\in\mathbb{R}^{d_{in}}\) denotes the feature, \(\mathbf{W}^{1}\in\mathbb{R}^{m\times d_{in}}\), \(\mathbf{W}^{L}\in\mathbb{R}^{d_{out}\times m}\) and \(\mathbf{W}^{i}\in\mathbb{R}^{m\times m}\) (\(1<i<L\)) denote the parameters of the network for each layer, respectively. Each entry of \(\mathbf{W}^{i}\) is identically and independently initialized with the standard Gaussian distribution \(\mathcal{N}(0,1)\). It is noted that the coefficient \(\frac{1}{\sqrt{m^{L-1}d_{out}}}\) in Eq.(1) is a scaling factor according to [8].
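As an illustration, here is a minimal PyTorch sketch of the network in Eq. (1), with the standard Gaussian initialization and the \(\frac{1}{\sqrt{m^{L-1}d_{out}}}\) scaling; the helper names and toy dimensions are ours, not from the paper.

```python
import torch

def init_deep_linear(L, d_in, d_out, m):
    """Weights of the L-layer linear network in Eq. (1), N(0,1)-initialized."""
    dims = [d_in] + [m] * (L - 1) + [d_out]
    return [torch.randn(dims[i + 1], dims[i]) for i in range(L)]

def forward(Ws, X, d_out):
    """f(W^L, ..., W^1; X) = W^L ... W^1 X / sqrt(m^{L-1} d_out)."""
    m, L = Ws[0].shape[0], len(Ws)
    out = X
    for W in Ws:                      # applies W^1 first and W^L last
        out = W @ out
    return out / (m ** ((L - 1) / 2) * d_out ** 0.5)

L, d_in, d_out, m, n = 3, 8, 10, 64, 20
Ws = init_deep_linear(L, d_in, d_out, m)
X = torch.randn(d_in, n)
print(forward(Ws, X, d_out).shape)    # torch.Size([10, 20])
```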
Under federated learning setting, we aim to minimize the sum of loss \(\mathcal{L}\) over all clients
\[\min_{\mathbf{W}^{L},\cdots,\mathbf{W}^{1}}\mathcal{L}(\mathbf{W}^{L},\cdots,\mathbf{W}^{1}). \tag{2}\]
For the loss function, we use the square loss
\[\mathcal{L}_{j}(\mathbf{W}^{L},\cdots,\mathbf{W}^{1}) :=\frac{1}{2}\sum_{i\in S_{j}}\|f(\mathbf{W}^{L},\cdots,\mathbf{W}^{1};\mathbf{x}_{i})-\mathbf{y}_{i}\|^{2}, \tag{3}\] \[\mathcal{L}(\mathbf{W}^{L},\cdots,\mathbf{W}^{1}) :=\frac{1}{N}\sum_{j=1}^{N}\mathcal{L}_{j}(\mathbf{W}^{L},\cdots,\mathbf{W}^{1}), \tag{4}\]
where \(\mathbf{x}_{i}\) and \(\mathbf{y}_{i}\) denote the feature and the label of the \(i\)-th training instance, and \(S_{j}\) denotes the index set of the training instances on the \(j\)-th client. For brevity, we denote the outputs of the neural network \(f\) on \(\mathbf{X}\) as
\[\mathbf{U}=\frac{1}{\sqrt{m^{L-1}d_{out}}}\mathbf{W}^{L:1}\mathbf{X}\in\mathbb{R}^{d_{out}\times n}, \tag{5}\]
where \(\mathbf{W}^{L:1}=\mathbf{W}^{L}\mathbf{W}^{L-1}\cdots\mathbf{W}^{1}\).
Based on (1) and (3), the gradient \(\frac{\partial\mathcal{L}_{c}(\mathbf{W}^{L},\cdots,\mathbf{W}^{1})}{\partial\mathbf{W}^{i}}\) on each client \(c\) has the following form
\[\frac{\partial\mathcal{L}_{c}(\mathbf{W}^{L},\cdots,\mathbf{W}^{1})}{\partial\mathbf{W}^{i}}=\frac{1}{\sqrt{m^{L-1}d_{out}}}\left(\mathbf{W}^{L:i+1}\right)^{\top}\left(\mathbf{U}_{c}-\mathbf{Y}_{c}\right)\left(\mathbf{W}^{i-1:1}\mathbf{X}_{c}\right)^{\top}. \tag{6}\]
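The closed form in Eq. (6) can be sanity-checked against automatic differentiation; below is a small sketch (toy shapes of our choosing) that verifies the formula for the middle layer of a 3-layer network.

```python
import torch

torch.manual_seed(0)
L, d_in, d_out, m, n = 3, 5, 4, 16, 7
dims = [d_in] + [m] * (L - 1) + [d_out]
Ws = [torch.randn(dims[i + 1], dims[i], requires_grad=True) for i in range(L)]
Xc, Yc = torch.randn(d_in, n), torch.randn(d_out, n)
c0 = 1.0 / (m ** ((L - 1) / 2) * d_out ** 0.5)

U = c0 * Ws[2] @ Ws[1] @ Ws[0] @ Xc
loss = 0.5 * ((U - Yc) ** 2).sum()
loss.backward()

# closed form for layer i = 2: c0 * (W^3)^T (U_c - Y_c) (W^1 X_c)^T
manual = c0 * Ws[2].T @ (U - Yc).detach() @ (Ws[0] @ Xc).T
print(torch.allclose(manual, Ws[1].grad, atol=1e-5))  # True
```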
According to the procedures of FedAvg, the optimization of the deep linear neural network involves local update and global update.
**Local update**. For each client \(c\), the local parameter \(\mathbf{W}^{i}_{k,c}(t)\) of each layer \(i\) for local iteration \(k\) at the global iteration \(t\) is updated by GD as:
\[\mathbf{W}^{i}_{k+1,c}(t)=\mathbf{W}^{i}_{k,c}(t)-\eta\frac{\partial\mathcal{L}_{c}( \mathbf{W}^{L},\cdots,\mathbf{W}^{1})}{\partial\mathbf{W}^{i}_{k,c}(t)}, \tag{7}\]
where \(\eta>0\) is the learning rate.
**Global update**. In the server side, it updates the global parameter by averaging all local parameters from clients as
\[\overline{\mathbf{W}}^{i}(t+1)= \sum_{c=1}^{N}\mathbf{W}^{i}_{K,c}(t)/N. \tag{8}\]
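Putting the local update (7) and the global update (8) together, a compact sketch of one FedAvg round for this model is given below; gradients are obtained via autograd rather than the closed form, and all names and hyperparameter values are our own choices, not prescribed by the paper.

```python
import torch

def local_grads(Ws, Xc, Yc, d_out):
    """Gradients of the local squared loss (Eq. 3) w.r.t. every layer."""
    m, L = Ws[0].shape[0], len(Ws)
    out = Xc
    for W in Ws:
        out = W @ out
    U = out / (m ** ((L - 1) / 2) * d_out ** 0.5)
    loss = 0.5 * ((U - Yc) ** 2).sum()
    return torch.autograd.grad(loss, Ws)

def fedavg_round(global_Ws, clients, eta, K, d_out):
    """One global round: K local GD steps (Eq. 7) per client, then Eq. (8)."""
    local_models = []
    for Xc, Yc in clients:
        Ws = [W.clone().requires_grad_(True) for W in global_Ws]
        for _ in range(K):
            grads = local_grads(Ws, Xc, Yc, d_out)
            with torch.no_grad():
                for W, g in zip(Ws, grads):
                    W -= eta * g                 # local update, Eq. (7)
        local_models.append([W.detach() for W in Ws])
    # global averaging, Eq. (8)
    return [sum(layer) / len(clients) for layer in zip(*local_models)]

# e.g. Ws = fedavg_round(Ws, [(X1, Y1), (X2, Y2)], eta=1e-3, K=5, d_out=d_out)
```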
## IV Theoretical Results
In this section, we provide a detailed convergence analysis of FedAvg for training the deep fully-connected linear network. Our proof is composed of two parts: 1) Establishing the connection between two consecutive residual errors. 2) Deriving the closed-form of the residual error to provide a recursive bound.
### _The dynamics of the residual error_
At the \((t+1)\)-th global iteration, the global parameter \(\overline{\mathbf{W}}^{i}(t+1)\) of each layer \(i\) can be expanded as
\[\overline{\mathbf{W}}^{i}(t+1)= \sum_{c=1}^{N}\mathbf{W}^{i}_{K,c}(t)/N\] \[= \overline{\mathbf{W}}^{i}(t)\!-\!\frac{\eta}{N}\sum_{c=1}^{N}\sum_{k=0 }^{K-1}\frac{\partial\mathcal{L}_{c}(\mathbf{W}^{L},\cdots,\mathbf{W}^{1})}{\partial \mathbf{W}^{i}_{k,c}(t)}. \tag{9}\]
For brevity, we denote the accumulated gradient as \(\frac{\partial\mathcal{L}}{\partial\overline{\mathbf{W}}^{i}(t)}=\frac{1}{N}\sum_{c= 1}^{N}\sum_{k=0}^{K-1}\frac{\partial\mathcal{L}_{c}(\mathbf{W}^{L},\cdots,\mathbf{W}^{ 1})}{\partial\mathbf{W}^{i}_{k,c}(t)}\).
Thus, the multiplication of the global parameters of \(L\) layers has
\[\overline{\mathbf{W}}^{L:1}(t+1)=\prod_{i=L}^{1}\left(\overline{\mathbf{W}}^{i}(t)-\eta\frac{\partial\mathcal{L}}{\partial\overline{\mathbf{W}}^{i}(t)}\right)\] \[= \overline{\mathbf{W}}^{L:1}(t)-\eta\sum_{i=1}^{L}\overline{\mathbf{W}}^{L:i+1}(t)\frac{\partial\mathcal{L}}{\partial\overline{\mathbf{W}}^{i}(t)}\overline{\mathbf{W}}^{i-1:1}(t)+\mathbf{E}(t), \tag{10}\]
where the second term contains first-order items with respect to \(\eta\) and \(\mathbf{E}(t)\) contains all high-order items. When multiplying \(\frac{1}{\sqrt{m^{L-1}d_{out}}}\mathbf{X}\) on both sides of Eq.(10), it has the corresponding outputs of the global model as
\[\overline{\mathbf{U}}(t+1)\!=\!\overline{\mathbf{U}}(t)\!-\!\frac{\eta}{ \sqrt{m^{L-1}d_{out}}}\!\sum_{i=1}^{L}\!\!\left(\!\overline{\mathbf{W}}^{L:i+1}(t) \frac{\partial\mathcal{L}}{\partial\overline{\mathbf{W}}^{i}(t)}\overline{\mathbf{W}}^{i -1:1}(t)\mathbf{X}\!\right)\] \[\qquad+\frac{1}{\sqrt{m^{L-1}d_{out}}}\mathbf{E}(t)\mathbf{X}. \tag{11}\]
Fig. 1: The framework of FedAvg, in which the solid line represents the upload of the local model, the dashed line denotes the download of the global model and the circles correspond to the procedures of FedAvg.
Taking the vectorization of the second term on the right side of Eq.(11), it has
\[\text{vec}\left(\frac{\eta}{\sqrt{m^{L-1}d_{out}}}\sum_{i=1}^{L}( \overline{\mathbf{W}}^{L:i+1}(t)\frac{\partial\mathcal{L}}{\partial\overline{\mathbf{W}} ^{i}(t)}\overline{\mathbf{W}}^{i-1:1}(t)\mathbf{X})\right)\] \[= \frac{\eta}{Nm^{L-1}d_{out}}\sum_{i=1}^{L}\sum_{c=1}^{N}\sum_{k=0 }^{K-1}\text{vec}(\mathbf{T}_{k,c}^{i}), \tag{12}\]
where the first equality uses the gradient Eq.(6) and \(\mathbf{T}_{k,c}^{i}=\overline{\mathbf{W}}^{L:i+1}(t)\mathbf{W}_{k,c}^{L:i+1}(t)^{\top}(\mathbf{U}_{k,c}(t)-\mathbf{Y}_{c})(\mathbf{W}_{k,c}^{i-1:1}(t)\mathbf{X}_{c})^{\top}\overline{\mathbf{W}}^{i-1:1}(t)\mathbf{X}\).
For simplicity, we denote \(\mathbf{M}_{k,c}^{i}=\text{vec}(\mathbf{T}_{k,c}^{i})\). Using \(\text{vec}(\mathbf{ACB})=(\mathbf{B}^{\top}\otimes\mathbf{A})\text{vec}(\mathbf{C})\), it has
\[\mathbf{M}_{k,c}^{i}=\big{(}\big{(}\overline{\mathbf{W}}^{i-1:1}(t)\mathbf{X}\big{)}^{\top}(\mathbf{W}_{k,c}^{i-1:1}(t)\mathbf{X}_{c})\otimes\overline{\mathbf{W}}^{L:i+1}(t)\mathbf{W}_{k,c}^{L:i+1}(t)^{\top}\big{)}\text{vec}(\mathbf{U}_{k,c}(t)-\mathbf{Y}_{c}). \tag{13}\]
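This step rests on the identity \(\text{vec}(\mathbf{A}\mathbf{C}\mathbf{B})=(\mathbf{B}^{\top}\otimes\mathbf{A})\text{vec}(\mathbf{C})\) with column-first vectorization, as listed in the notation table; a quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
A, C, B = rng.normal(size=(3, 4)), rng.normal(size=(4, 5)), rng.normal(size=(5, 2))

vec = lambda M: M.reshape(-1, order="F")   # column-first vectorization
print(np.allclose(vec(A @ C @ B), np.kron(B.T, A) @ vec(C)))  # True
```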
For centralized training setting, the analysis in [8] mainly depends on the spectral properties of a symmetric gram matrix defined by
\[\mathbf{P}(t)=\frac{1}{m^{L-1}d_{out}}\sum_{i=1}^{L} \big{(}(\mathbf{W}^{i-1:1}(t)\mathbf{X})^{\top}(\mathbf{W}^{i-1:1}(t)\mathbf{X})\] \[\otimes\mathbf{W}^{L:i+1}(t)\mathbf{W}^{L:i+1}(t)^{\top}\big{)}, \tag{14}\]
where the eigenvalues of the time-varying \(\mathbf{P}(t)\) can be bounded by applying perturbation analysis.
In light of [8, 11], we abuse the notation \(\mathbf{P}\) to define a similar matrix.
**Definition IV.1**.: _For any \(t\in[0,T],k\in[0,K],c\in[1,N]\), we define a matrix \(\mathbf{P}(t,k,c)\) as_
\[\mathbf{P}(t,k,c):=\frac{1}{m^{L-1}d_{out}}\sum_{i=1}^{L} \big{(}(\overline{\mathbf{W}}^{i-1:1}(t)\mathbf{X})^{\top}(\mathbf{W}_{k,c}^ {i-1:1}(t)\mathbf{X}_{c})\] \[\otimes\overline{\mathbf{W}}^{L:i+1}(t)\mathbf{W}_{k,c}^{L:i+1}(t)^{\top }\big{)}. \tag{15}\]
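The following NumPy sketch assembles \(\mathbf{P}(t,k,c)\) from given global and local weights (function and variable names are assumptions of ours); with \(\mathbf{W}_{loc}=\overline{\mathbf{W}}\) and \(\mathbf{X}_{c}=\mathbf{X}\) it reduces to the symmetric Gram matrix of Eq. (14).

```python
import numpy as np

def P_matrix(W_bar, W_loc, X, Xc):
    """P(t, k, c) of Definition IV.1 (a sketch; names and shapes are ours).

    W_bar: global weights [W^1, ..., W^L]; W_loc: local weights of client c.
    """
    L, m, d_out = len(W_bar), W_bar[0].shape[0], W_bar[-1].shape[0]

    def prefix(Ws, i):                # W^{i-1:1}, identity when i = 1
        P = np.eye(Ws[0].shape[1])
        for W in Ws[:i - 1]:
            P = W @ P
        return P

    def suffix(Ws, i):                # W^{L:i+1}, identity when i = L
        P = np.eye(Ws[-1].shape[0])
        for W in reversed(Ws[i:]):    # right-multiply W^L, W^{L-1}, ..., W^{i+1}
            P = P @ W
        return P

    total = 0.0
    for i in range(1, L + 1):
        left = (prefix(W_bar, i) @ X).T @ (prefix(W_loc, i) @ Xc)
        right = suffix(W_bar, i) @ suffix(W_loc, i).T
        total = total + np.kron(left, right)
    return total / (m ** (L - 1) * d_out)
```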
Compared to Eq.(14), the matrix \(\mathbf{P}(t,k)=[\mathbf{P}(t,k,1),\cdots,\mathbf{P}(t,k,N)]\) is asymmetric, so it is non-trivial to directly derive its eigenvalues. Instead, we bound its eigenvalues by bounding the distance between \(\mathbf{P}(t,k)\) and \(\mathbf{P}(0)\), following [11].
Along with \(\mathbf{P}(t,k)\), taking the vectorization of both sides of Eq.(11), it has
\[\text{vec}(\overline{\mathbf{U}}(t+1))= \text{vec}(\overline{\mathbf{U}}(t))-\frac{\eta}{N}\sum_{c=1}^{N}\sum_ {k=0}^{K-1}\mathbf{P}(t,k,c)\text{vec}(\mathbf{U}_{k,c}(t)-\mathbf{Y}_{c})\] \[+\frac{1}{\sqrt{m^{L-1}d_{out}}}\text{vec}(\mathbf{E}(t)\mathbf{X})\] \[= \text{vec}(\overline{\mathbf{U}}(t))-\frac{\eta}{N}\sum_{k=0}^{K-1}\bm {P}(t,k)\text{vec}(\mathbf{U}_{k}(t)-\mathbf{Y})\] \[+\frac{1}{\sqrt{m^{L-1}d_{out}}}\text{vec}(\mathbf{E}(t)\mathbf{X}), \tag{16}\]
where the first equality uses Eq.(12) and Eq.(15). To simplify notation, we use \(\overline{\mathbf{\xi}}(t)=\text{vec}(\overline{\mathbf{U}}(t)-\mathbf{Y})\) and \(\mathbf{\xi}_{k}(t)=\text{vec}(\mathbf{U}_{k}(t)-\mathbf{Y})\) as the vectorizations of the global residual error and the local residual error, respectively. Then, it has
\[\overline{\mathbf{\xi}}(t+1)\] \[= \overline{\mathbf{\xi}}(t)-\frac{\eta}{N}\sum_{k=0}^{K-1}\mathbf{P}(t,k)\mathbf{\xi}_{k}(t)+\frac{1}{\sqrt{m^{L-1}d_{out}}}\text{vec}(\mathbf{E}(t)\mathbf{X})\] \[= \overline{\mathbf{\xi}}(t)-\frac{\eta}{N}\sum_{k=0}^{K-1}(\mathbf{P}(t,k)-\mathbf{P}(0))\mathbf{\xi}_{k}(t)-\frac{\eta}{N}\sum_{k=0}^{K-1}\mathbf{P}(0)\overline{\mathbf{\xi}}(t)\] \[-\frac{\eta}{N}\sum_{k=0}^{K-1}\mathbf{P}(0)\text{vec}(\mathbf{U}_{k}(t)-\overline{\mathbf{U}}(t))+\frac{1}{\sqrt{m^{L-1}d_{out}}}\text{vec}(\mathbf{E}(t)\mathbf{X})\] \[= \underbrace{(I-\frac{\eta}{N}\sum_{k=0}^{K-1}\mathbf{P}(0))\overline{\mathbf{\xi}}(t)}_{\text{first term}}\underbrace{-\frac{\eta}{N}\sum_{k=0}^{K-1}(\mathbf{P}(t,k)-\mathbf{P}(0))\mathbf{\xi}_{k}(t)}_{\text{second term}}\] \[\underbrace{-\frac{\eta}{N}\sum_{k=0}^{K-1}\mathbf{P}(0)\text{vec}(\mathbf{U}_{k}(t)-\overline{\mathbf{U}}(t))}_{\text{third term}}+\underbrace{\frac{1}{\sqrt{m^{L-1}d_{out}}}\text{vec}(\mathbf{E}(t)\mathbf{X})}_{\text{fourth term}}. \tag{17}\]
Then we can obtain the bound of \(\|\overline{\mathbf{\xi}}(t+1)\|_{F}\) through analyzing the four terms in Eq.(17).
### _Theoretical Results_
**Theorem IV.2**.: _Suppose \(r=rank(\mathbf{X})\), \(\kappa=\frac{\sigma_{max}^{2}(\mathbf{X})}{\sigma_{min}^{2}(\mathbf{X})}\), \(\eta=\mathcal{O}(\frac{d_{out}}{L\kappa K\|\mathbf{X}\|^{2}})\) and \(m=\Omega(L\max\{r\kappa^{4}N^{2}d_{out}(1+\|\mathbf{W}^{*}\|^{2}),r\kappa^{4}N^{2}\log(\frac{r}{\delta}),\log L\})\). With probability at least \(1-\delta\), for any \(t\geq 0\), FedAvg has the following bound on the training loss when updating the randomly initialized deep linear neural network_
\[\mathcal{L}(t)\leq(1-\frac{\eta K\lambda_{min}(\mathbf{P}(0))}{2N})^{t}\mathcal{L}(0). \tag{18}\]
**Remarks**. From the above theorem, we show that FedAvg is capable of attaining the global optimum at a \((1-\frac{\eta K\lambda_{min}(\mathbf{P}(0))}{2N})^{t}\) rate after \(t\) training iterations when optimizing the deep fully-connected linear neural network. In addition, increasing the number of clients slows down the convergence of FedAvg. On the contrary, FedAvg converges faster as the number of local updates increases. Besides, it is noted that the convergence rate of FedAvg is determined by the ratio \(K/N\). Therefore, the convergence of FedAvg behaves similarly when \(K/N\) is fixed.
Now, we turn to introduce the details of the proof. Firstly, we make the following three inductive hypotheses.
1. \(\mathcal{A}(\tau)\): \(\mathcal{L}(\tau)\leq\rho^{\tau}\mathcal{L}(0)\), where \(\rho=1-\frac{\eta K\lambda_{min}(\mathbf{P}(0))}{2N}\) for brevity.
2. \(\mathcal{B}(\tau)\): \[\sigma_{max}(\overline{\mathbf{W}}^{L:i}(\tau))\leq 1.25m^{\frac{L-i+1}{2}},\quad\forall 1<i\leq L\] \[\sigma_{min}(\overline{\mathbf{W}}^{L:i}(\tau))\geq 0.75m^{\frac{L-i+1}{2}},\quad\forall 1<i\leq L\] \[\sigma_{max}(\overline{\mathbf{W}}^{i:1}(\tau)\mathbf{X})\leq 1.25m^{\frac{i}{2}}\|\mathbf{X}\|,\quad\forall 1\leq i<L\] \[\sigma_{min}(\overline{\mathbf{W}}^{i:1}(\tau)\mathbf{X})\geq 0.75m^{\frac{i}{2}}\sigma_{min}(\mathbf{X}),\quad\forall 1\leq i<L\] \[\|\overline{\mathbf{W}}^{j:i}(\tau)\|\leq\mathcal{O}(\sqrt{L}m^{\frac{j-i+1}{2}}),\quad\forall 1<i\leq j<L.\]
3. \(\mathcal{C}(\tau)\): For any \(i\in[L]\), the global parameter stays in a norm ball around its initialization, \[\|\overline{\mathbf{W}}^{i}(\tau)-\overline{\mathbf{W}}^{i}(0)\|_{F}\leq R:=\frac{16\sqrt{Bd_{out}}N\|\mathbf{X}\|}{L\sigma_{min}^{2}(\mathbf{X})},\] where \(B\) bounds the initial loss, \[\mathcal{L}(0)\leq B^{2}=\mathcal{O}(\max\{1,\log(\frac{r}{\delta})/d_{out},\|\mathbf{W}^{*}\|^{2}\})\|\mathbf{X}\|_{F}^{2}.\]
The derivation of the bound of \(\mathcal{L}(0)\) can be found in [8].
Note that \(\mathcal{A}(\tau)\) provides the convergence result of FedAvg for training deep fully-connected linear networks, which is also the main part of Theorem IV.2. \(\mathcal{B}(\tau)\) establishes the bounds on the singular values of the multiplication of consecutive \(\overline{\mathbf{W}}^{i}\). \(\mathcal{C}(\tau)\) ensures that the parameter of each layer always stays in a norm ball whose center is the initial parameter.
Next, we prove the three hypotheses by induction. Specifically, assuming \(\mathcal{A}(\tau)\), \(\mathcal{B}(\tau)\) and \(\mathcal{C}(\tau)\) hold for \(\tau\leq t\), we need to prove that they hold for \(\tau=t+1\). To start with, we consider \(\mathcal{C}(t+1)\) using the hypothesis \(\mathcal{A}(\tau)\) for \(\tau\leq t\).
#### IV-B1 \(\mathcal{C}(t+1)\)
Before proving \(\mathcal{C}\), we first provide the bound of the accumulated gradient. Note that the distance between \(\overline{\mathbf{W}}^{i}(t+1)\) and \(\overline{\mathbf{W}}^{i}(0)\) involves the gradients from initialization to the \(t\)-th iteration. Hence, we bound the gradient as
\[\|\frac{\partial\mathcal{L}}{\partial\overline{\mathbf{W}}^{i}(t)}\|_{F}= \|\frac{1}{N}\sum_{c=1}^{N}\sum_{k=0}^{K-1}\frac{\partial\mathcal{L}_{c}(\mathbf{W}^{L},\cdots,\mathbf{W}^{1})}{\partial\mathbf{W}^{i}_{k,c}(t)}\|_{F}\] \[\stackrel{{(a)}}{{\leq}} C_{1}\sum_{c=1}^{N}\sum_{k=0}^{K-1}\|\mathbf{W}^{i-1:1}_{k,c}(t)\mathbf{X}_{c}\|\|\mathbf{\xi}_{k,c}(t)\|_{F}\|\mathbf{W}^{L:i+1}_{k,c}(t)\|\] \[\stackrel{{(b)}}{{\leq}} C_{1}\sum_{c=1}^{N}\sum_{k=0}^{K-1}1.26^{2}m^{(L-1)/2}\|\mathbf{\xi}_{k,c}(t)\|_{F}\|\mathbf{X}_{c}\|\] \[\stackrel{{(c)}}{{\leq}} \frac{8\|\mathbf{X}\|}{5N\sqrt{d_{out}}}\sum_{c=1}^{N}\sum_{k=0}^{K-1}\rho^{k}\|\overline{\mathbf{\xi}}_{c}(t)\|_{F}\] \[\stackrel{{(d)}}{{\leq}} \frac{8\|\mathbf{X}\|K}{5\sqrt{Nd_{out}}}\|\overline{\mathbf{\xi}}(t)\|_{F}, \tag{19}\]
where (a) uses Eq.(6) and \(C_{1}=\frac{1}{N\sqrt{m^{L-1}d_{out}}}\), (b) uses \(\mathcal{B}(t)\), Eq.(34) and Eq.(35) that
\[\|\mathbf{W}^{i-1:1}_{k,c}(t)\mathbf{X}_{c}\|\] \[\leq \|\mathbf{W}^{i-1:1}_{k,c}(t)\mathbf{X}_{c}-\overline{\mathbf{W}}^{i-1:1}(t) \mathbf{X}_{c}\|+\|\overline{\mathbf{W}}^{i-1:1}(t)\mathbf{X}_{c}\|\] \[\leq \frac{0.01}{\kappa\sqrt{N}}m^{\frac{i-1}{2}}\|\mathbf{X}_{c}\|+1.25m ^{\frac{i-1}{2}}\|\mathbf{X}_{c}\|\leq 1.26m^{\frac{i-1}{2}}\|\mathbf{X}_{c}\|\]
and
\[\|\mathbf{W}^{L:i+1}_{k,c}(t)\|\leq \|\mathbf{W}^{L:i+1}_{k,c}(t)-\overline{\mathbf{W}}^{L:i+1}(t)\|+\| \overline{\mathbf{W}}^{L:i+1}(t)\|\] \[\leq 1.26m^{\frac{L-i}{2}},\]
(c) uses Lemma VII.4 with \(\sqrt{1-\frac{\eta L\lambda_{min}(\mathbf{X}^{\top}\mathbf{X})}{4d_{out}}}\leq 1\), and (d) uses \(\sum_{c\in[N]}\|\overline{\mathbf{\xi}}_{c}(t)\|_{F}\leq\sqrt{N}\|\overline{\mathbf{\xi}}(t)\|_{F}\) according to the Cauchy-Schwarz inequality and \(\eta\leq\frac{d_{out}}{50L\kappa K\|\mathbf{X}\|^{2}}\).
Thus, it has
\[\|\overline{\mathbf{W}}^{i}(t+1)-\overline{\mathbf{W}}^{i}(0)\|_{F} \stackrel{{(a)}}{{\leq}}\sum_{s=0}^{t}\|\overline{\mathbf{W}}^{i}(s+1)-\overline{\mathbf{W}}^{i}(s)\|_{F}\] \[\leq \eta\sum_{s=0}^{t}\|\frac{\partial\mathcal{L}}{\partial\overline{\mathbf{W}}^{i}(s)}\|_{F}\stackrel{{(b)}}{{\leq}}\eta\sum_{s=0}^{t}\frac{8\|\mathbf{X}\|K}{5\sqrt{Nd_{out}}}\|\overline{\mathbf{\xi}}(s)\|_{F}\stackrel{{(c)}}{{\leq}}\frac{16\sqrt{Bd_{out}}N\|\mathbf{X}\|}{L\sigma_{min}^{2}(\mathbf{X})},\]
where (a) uses the triangle inequality of the norm, (b) uses (19) and (c) uses \(\mathcal{A}(\tau)\) for \(\tau\leq t\), \(\sqrt{1-x}\leq 1-x/2\) for \(0\leq x\leq 1\), \(\eta=\mathcal{O}(\frac{d_{out}}{L\kappa K\|\mathbf{X}\|^{2}})\) and the following bound on \(\lambda_{min}(\mathbf{P}(0))\)
\[\lambda_{min}(\mathbf{P}(0))\geq \frac{1}{m^{L-1}d_{out}}L(0.8)^{4}m^{L-1}\sigma_{min}^{2}(\mathbf{X})\] \[= \frac{0.8^{4}L\sigma_{min}^{2}(\mathbf{X})}{d_{out}}, \tag{20}\]
which is according to Lemma VII.1 and the definition of \(\mathbf{P}(0)\) in Eq.(14). This completes the proof. Then we turn to prove assumption \(\mathcal{B}\) holds at \(t+1\).
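Reusing the `P_matrix` sketch given after Definition IV.1, the Eq. (20)-style lower bound at initialization can be probed numerically; this is a rough illustration with toy sizes and a single random draw (chosen by us), not a verification of the high-probability statement.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d_in, d_out, m, n = 3, 6, 2, 400, 3        # n <= d_in so X^T X has full rank
dims = [d_in] + [m] * (L - 1) + [d_out]
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(L)]
X = rng.normal(size=(d_in, n))

P0 = P_matrix(Ws, Ws, X, X)                   # symmetric PSD at initialization
lam_min = np.linalg.eigvalsh(P0).min()
bound = 0.8 ** 4 * L * np.linalg.svd(X, compute_uv=False).min() ** 2 / d_out
print(lam_min, bound)                         # lam_min should exceed bound w.h.p.
```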
#### IV-B2 \(\mathcal{B}(t+1)\)
Note that \(\overline{\mathbf{W}}^{L:i}(t+1)=(\overline{\mathbf{W}}^{L}(0)+\Delta^{L}(t+1))\cdots( \overline{\mathbf{W}}^{i}(0)+\Delta^{i}(t+1))\) for \(\Delta^{i}(t+1)=\overline{\mathbf{W}}^{i}(t+1)-\overline{\mathbf{W}}^{i}(0)\), it has
\[\|\overline{\mathbf{W}}^{L:i}(t+1)-\overline{\mathbf{W}}^{L:i}(0)\|\] \[\stackrel{{(a)}}{{\leq}} \sum_{s=1}^{L-i+1}\binom{L-i+1}{s}R^{s}(\mathcal{O}(\sqrt{L}))^{s}1.2m^{\frac{L-i+1-s}{2}}\] \[\stackrel{{(b)}}{{\leq}} \sum_{s=1}^{L-i+1}L^{s}R^{s}(\mathcal{O}(\sqrt{L}))^{s}1.2m^{\frac{L-i+1-s}{2}}\] \[\leq 1.2m^{\frac{L-i+1}{2}}\sum_{s=1}^{L-i+1}\left(\frac{\mathcal{O}(L^{3/2}R)}{\sqrt{m}}\right)^{s}\] \[\leq 1.2m^{\frac{L-i+1}{2}}\frac{\mathcal{O}(L^{3/2}R)}{\sqrt{m}}\sum_{s=1}^{L-i+1}\left(\frac{\mathcal{O}(L^{3/2}R)}{\sqrt{m}}\right)^{s-1}\] \[\stackrel{{(c)}}{{\leq}} \frac{0.01}{\kappa\sqrt{N}}m^{\frac{L-i+1}{2}}, \tag{21}\]
where (a) uses \(\mathcal{C}(t+1)\) and Lemma VII.1, (b) uses \(\binom{L-i+1}{s}\leq L^{s}\) and (c) uses \(m=\Omega\left((L^{3/2}R\kappa\sqrt{N})^{2}\right)\). Combining the bound of the initial parameter in Lemma VII.1, it has
\[\sigma_{max}(\overline{\mathbf{W}}^{L:i}(t+1))\] \[= \max_{\|\mathbf{z}\|=1}\|\overline{\mathbf{W}}^{L:i}(t+1)\mathbf{z}\|\] \[\leq \max_{\|\mathbf{z}\|=1}\|(\overline{\mathbf{W}}^{L:i}(t+1)-\overline{\mathbf{W}} ^{L:i}(0))\mathbf{z}\|+\max_{\|\mathbf{z}\|=1}\|\overline{\mathbf{W}}^{L:i}(0)\mathbf{z}\|\] \[\leq \frac{0.01}{\kappa\sqrt{N}}m^{\frac{L-i+1}{2}}+1.2m^{\frac{L-i+1}{2 }}\leq 1.25m^{\frac{L-i+1}{2}},\]
and
\[\sigma_{min}(\overline{\mathbf{W}}^{L:i}(t+1))= \min_{\|\mathbf{z}\|=1}\|\overline{\mathbf{W}}^{L:i}(t+1)\mathbf{z}\|\] \[\geq \min_{\|\mathbf{z}\|=1}\|\overline{\mathbf{W}}^{L:i}(0)\mathbf{z}\|-\max_{\|\mathbf{z}\|=1}\|(\overline{\mathbf{W}}^{L:i}(t+1)-\overline{\mathbf{W}}^{L:i}(0))\mathbf{z}\|\] \[\geq 0.8m^{\frac{L-i+1}{2}}-\frac{0.01}{\kappa\sqrt{N}}m^{\frac{L-i+1}{2}}\geq 0.75m^{\frac{L-i+1}{2}}.\]
An analogous argument applied to the prefix products yields
\[\|(\overline{\mathbf{W}}^{i:1}(t+1)-\overline{\mathbf{W}}^{i:1}(0))\mathbf{X}\|\leq\frac{0.01}{\kappa\sqrt{N}}m^{\frac{i}{2}}\|\mathbf{X}\|, \tag{22}\]
which gives the corresponding bounds on \(\sigma_{max}(\overline{\mathbf{W}}^{i:1}(t+1)\mathbf{X})\) and \(\sigma_{min}(\overline{\mathbf{W}}^{i:1}(t+1)\mathbf{X})\).
Then, the upper bound of \(\|\overline{\mathbf{W}}^{j:i}(t+1)\|\) can be derived by applying the same analysis, which completes the proof of \(\mathcal{B}(t+1)\).
#### IV-B3 \(\mathcal{A}(t+1)\)
Finally, we focus on analyzing Eq.(17) to derive the convergence rate of FedAvg in training deep fully-connected linear networks.
**1. The bound of the first term**: According to Eq.(20), it has
\[\|(I\!-\!\frac{\eta}{N}\sum_{k\in[K]}\mathbf{P}(0))\overline{\mathbf{\xi}}( t)\|_{2}\] \[\leq(1\!-\!\frac{\eta\lambda_{min}(\mathbf{P}(0))K}{N})\|\overline{ \mathbf{\xi}}(t)\|_{F}, \tag{23}\]
where the last inequality uses \(\eta\leq\frac{d_{out}}{50L\kappa K\|\mathbf{X}\|^{2}}\).
**2. The bound of the second term**:
\[\|\frac{\eta}{N}\sum_{k\in[K]}(\mathbf{P}(t,k)-\mathbf{P}(0))\mathbf{\xi}_{k}(t)\|_{2}\] \[\leq \frac{\eta}{N}\sum_{k\in[K]}\|\mathbf{P}(t,k)-\mathbf{P}(0)\|\|\mathbf{\xi}_{k}(t)\|_{F}\] \[\leq K\frac{\eta}{N}\|\overline{\mathbf{\xi}}(t)\|_{F}\frac{0.109L}{d_{out}\kappa}\|\mathbf{X}\|^{2}\leq\frac{0.109K\eta\lambda_{min}(\mathbf{P}(0))}{0.8^{4}N}\|\overline{\mathbf{\xi}}(t)\|_{F}. \tag{24}\]
where the second inequality uses Lemma VII.5, and the last inequality uses Eq.(31), Eq.(36) and \(\eta\leq\frac{d_{out}}{50L\kappa K\|\mathbf{X}\|^{2}}\).
**3. The bound of the third term**:
\[\|\frac{\eta}{N}\sum_{k\in[K]}\mathbf{P}(0)\text{vec}(\mathbf{U}_{k}(t)-\overline{\mathbf{U}}(t))\|\] \[\leq \frac{\eta\lambda_{max}(\mathbf{P}(0))}{N}\|\sum_{k\in[K]}\text{vec}(\mathbf{U}_{k}(t)-\overline{\mathbf{U}}(t))\|\] \[\leq \frac{\eta\lambda_{max}(\mathbf{P}(0))}{N}\frac{\eta 57K^{2}\sigma_{max}^{2}(\mathbf{X})}{20d_{out}}\|\overline{\mathbf{\xi}}(t)\|_{F}\] \[\leq \frac{\eta\lambda_{min}(\mathbf{P}(0))K}{5N}\|\overline{\mathbf{\xi}}(t)\|_{F}. \tag{25}\]
where the first inequality uses Lemma VII.7, the second inequality uses Lemma VII.1 and \(\mathcal{B}(t)\) that \(\lambda_{max}(\mathbf{P}(0))\leq\frac{1.2^{4}L\sigma_{max}^{2}(\mathbf{X})}{d_{out}}\), and the last inequality uses \(\eta=\mathcal{O}(\frac{d_{out}}{L\kappa K\|\mathbf{X}\|^{2}})\).
**4. The bound of the fourth term**:
From Eq.(10), we know that \(\mathbf{E}(t)\) contains high-order items in terms of \(\eta\). Moreover, it has \(\|\frac{\partial\mathcal{L}}{\partial\overline{\mathbf{W}}^{i}(t)}\|\leq\frac{8\|\mathbf{X}\|K}{5\sqrt{Nd_{out}}}\|\overline{\mathbf{\xi}}(t)\|_{F}\) according to Eq.(19). Thus, it has
\[\|\frac{1}{\sqrt{m^{L-1}d_{out}}}\text{vec}(\mathbf{E}(t)\mathbf{X})\|\] \[\leq \frac{1}{\sqrt{m^{L-1}d_{out}}}\sum_{s=2}^{L}\binom{L}{s}\left(\eta\frac{8\|\mathbf{X}\|K}{5\sqrt{Nd_{out}}}\|\overline{\mathbf{\xi}}(t)\|_{F}\right)^{s}(\mathcal{O}(\sqrt{L}))^{s-1}m^{\frac{L-s}{2}}\|\mathbf{X}\|\] \[\leq \frac{1}{\sqrt{d_{out}}}\sum_{s=2}^{L}L^{s}\left(\eta\frac{8\|\mathbf{X}\|K}{5\sqrt{Nd_{out}}}\|\overline{\mathbf{\xi}}(t)\|_{F}\right)^{s}(\mathcal{O}(\sqrt{L}))^{s-1}m^{\frac{1-s}{2}}\|\mathbf{X}\|\] \[\leq \frac{1}{\sqrt{d_{out}}}\sum_{s=2}^{L}\left(\mathcal{O}(L^{3/2}\eta\frac{8\|\mathbf{X}\|K}{5\sqrt{Nd_{out}}}\|\overline{\mathbf{\xi}}(t)\|_{F})\right)^{s}L^{-1/2}m^{\frac{1-s}{2}}\|\mathbf{X}\|\] \[\leq L\eta\frac{\|\mathbf{X}\|^{2}K}{\sqrt{Nd_{out}}}\|\overline{\mathbf{\xi}}(t)\|_{F}\sum_{s=2}^{L}\left(\mathcal{O}(L^{3/2}\eta\frac{8\|\mathbf{X}\|K}{5\sqrt{N}\sqrt{md_{out}}}\|\overline{\mathbf{\xi}}(t)\|_{F})\right)^{s-1}.\]
where the second inequality uses \(\binom{L}{s}\leq L^{s}\).
In addition, it has
\[\mathcal{O}(L^{3/2}\eta\frac{\|\mathbf{X}\|K}{5\sqrt{N}\sqrt{md_{out}}}\|\overline{ \mathbf{\xi}}(t)\|_{F})\leq\frac{1}{2},\]
where \(m=\Omega(\frac{Ld_{out}}{\kappa^{2}N\|\mathbf{X}\|^{2}}\|\overline{\mathbf{\xi}}(0)\|_{F }^{2})\) and \(\eta=\mathcal{O}(\frac{d_{out}}{L\kappa K\|\mathbf{X}\|^{2}})\).
\[L\eta\frac{\|\mathbf{X}\|^{2}K}{\sqrt{Nd_{out}}}\|\overline{\mathbf{\xi}}(t)\|_{F}\cdot 2\left(L^{3/2}\eta\frac{\|\mathbf{X}\|K}{\sqrt{N}\sqrt{md_{out}}}\|\overline{\mathbf{\xi}}(t)\|_{F}\right)\] \[\leq \frac{\eta\lambda_{min}(\mathbf{P}(0))K}{10N}\|\overline{\mathbf{\xi}}(t)\|_{F}, \tag{26}\]
where \(m=\Omega(\frac{Ld_{out}}{\|\mathbf{X}\|^{2}}\|\overline{\mathbf{\xi}}(0)\|_{F}^{2})\) and \(\eta=\mathcal{O}(\frac{d_{out}}{L\kappa K\|\mathbf{X}\|^{2}})\).
Finally, combining Eq.(23), Eq.(24), Eq.(25) and Eq.(26), it has
\[\|\overline{\mathbf{\xi}}(t+1)\|_{F}\] \[= \|(I-\frac{\eta}{N}\sum_{k\in[K]}\mathbf{P}(0))\overline{\mathbf{\xi}}(t)-\frac{\eta}{N}\sum_{k\in[K]}(\mathbf{P}(t,k)-\mathbf{P}(0))\mathbf{\xi}_{k}(t)\] \[-\frac{\eta}{N}\sum_{k\in[K]}\mathbf{P}(0)\text{vec}(\mathbf{U}_{k}(t)-\overline{\mathbf{U}}(t))+\frac{1}{\sqrt{m^{L-1}d_{out}}}\text{vec}(\mathbf{E}(t)\mathbf{X})\|\] \[\leq (1-\frac{\eta K\lambda_{min}(\mathbf{P}(0))}{N}+\frac{0.109\eta K\lambda_{min}(\mathbf{P}(0))}{0.8^{4}N}+\frac{\eta K\lambda_{min}(\mathbf{P}(0))}{5N}\] \[+\frac{\eta K\lambda_{min}(\mathbf{P}(0))}{10N})\|\overline{\mathbf{\xi}}(t)\|_{F}\] \[\leq (1-\frac{\eta K\lambda_{min}(\mathbf{P}(0))}{2N})\|\overline{\mathbf{\xi}}(t)\|_{F}.\]
Thus, it has
\[\mathcal{L}(t\!+\!1)\!=\!\|\overline{\mathbf{\xi}}(t\!+\!1)\|_{F}^{2}\] \[\leq (1\!-\!\frac{\eta K\lambda_{min}(\mathbf{P}(0))}{2N})^{2}\mathcal{L}( t)\leq(1\!-\!\frac{\eta K\lambda_{min}(\mathbf{P}(0))}{2N})\mathcal{L}(t).\]
Finally, based on \(m=\Omega(L^{3}R^{2}\kappa^{2}N)\) and the requirement on \(m\) in Lemma VII.6, the condition on \(m\) can be derived with \(R\) defined in assumption \(\mathcal{C}\).
## V Experimental evaluation
### _Experimental Setup_
In the experiment, we consider two widely used datasets: MNIST [31] and FMNIST [32], where each dataset contains 60,000 samples. For the architecture of the network, the deep linear network is evaluated on a 10-class classification task, where the initialization uses the scheme in Section III-B. The network depth is set to 3 and the width to 500. In addition, we set the learning rate to 0.0005 for both datasets. For evaluating the impacts of the number of local updates \(K\) and the number of local clients \(N\), we experiment with \(K\in\{1,5,10\}\) and \(N\in\{40,80,160\}\). In addition, we test the influence of the ratio \(K/N\) on the convergence by setting \(K/N\in\{1/8,1/16\}\). Following the training setting in [33, 11], we use two schemes to distribute the dataset to clients: independent and identically distributed (i.i.d.) and non-i.i.d. For the non-i.i.d. scheme, each client only randomly accesses two classes of the dataset. To assess the impact of the global aggregation
### _Experimental Result_
**The impact of \(N\)**: As shown in Fig.2, it can be observed that FedAvg requires more rounds to achieve the same training loss as the number of clients increases, on both the i.i.d. and non-i.i.d. schemes, which is consistent with our theoretical finding that the convergence rate is inversely proportional to \(N\).
**The impact of \(K\)**: Furthermore, the experimental result depicted in Fig.3 shows that FedAvg converges faster as the number of local iterations increases. This also verifies the correctness of our theoretical finding that increasing \(K\) can speed up the convergence of FedAvg.
**The impact of \(K/N\)**: According to Theorem IV.2, the convergence rate of FedAvg is actually influenced by the ratio \(K/N\). As depicted in Fig.4, it can be observed that the training loss curves stay similar for \(\{N=80,K=5\}\) and \(\{N=160,K=10\}\), which also holds for \(\{N=40,K=5\}\) and \(\{N=80,K=10\}\). These observations are in accordance with our theoretical convergence rate \(\mathcal{O}\big{(}(1-\frac{\eta K}{N})^{t}\big{)}\) for FedAvg.
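The \(K/N\) observation can also be reproduced in miniature with a one-layer least-squares toy model (not the paper's deep linear setup; all constants and names below are ours): with the ratio \(K/N\) fixed, the per-round loss curves of different \((N,K)\) pairs should behave similarly, up to the randomness of the synthetic data.

```python
import numpy as np

def fedavg_linear(N, K, eta=0.05, T=60, d=10, n_per=20, seed=0):
    """FedAvg with full participation on client-wise least squares."""
    rng = np.random.default_rng(seed)
    W_true = rng.normal(size=(1, d))
    Xs = [rng.normal(size=(d, n_per)) for _ in range(N)]
    Ys = [W_true @ X for X in Xs]
    W, losses = np.zeros((1, d)), []
    for _ in range(T):
        local_models = []
        for Xc, Yc in zip(Xs, Ys):
            Wc = W.copy()
            for _ in range(K):                            # K local GD steps
                Wc -= eta * (Wc @ Xc - Yc) @ Xc.T / n_per
            local_models.append(Wc)
        W = sum(local_models) / N                         # global averaging
        losses.append(np.mean([0.5 * np.sum((W @ X - Y) ** 2)
                               for X, Y in zip(Xs, Ys)]))
    return losses

# fixed ratio K/N = 1/4: both runs decay at a broadly comparable rate
print(fedavg_linear(N=4, K=1)[-1], fedavg_linear(N=8, K=2)[-1])
```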
## VI Conclusion
FedAvg is widely used for training neural networks on distributed clients without sharing data. However, the corresponding optimization problem is often non-convex. Moreover, FedAvg includes multiple clients and local updates, which results in an inaccurate gradient-descent direction. In this paper, we provide the first theoretical guarantee for the global convergence of FedAvg in training the deep linear neural network, where the convergence rate is \(\mathcal{O}\big{(}(1-\eta K/N)^{t}\big{)}\). In the experiments, we empirically validate the correctness of our theoretical results by evaluating the effects of \(K\) and \(N\) on the convergence speed.
In future work, it would be interesting to explore the convergence of FedAvg in training other commonly used neural network architectures. Furthermore, partial participation of clients is common in real scenarios, so its impact on the convergence of FedAvg in the non-convex optimization of neural networks also deserves study.
## VII Supporting Lemma
**Lemma VII.1**.: _([8]) With probability at least \(1-\delta\), it has_
\[\left\{\begin{array}{l}\sigma_{max}(\overline{\mathbf{W}}^{L:i}(0))\leq 1.2m ^{\frac{L-i+1}{2}},\\ \sigma_{min}(\overline{\mathbf{W}}^{Li:i}(0))\geq 0.8m^{\frac{L-i+1}{2}}, \end{array}\right.\forall 1<i\leq L\]
\[\left\{\begin{array}{l}\sigma_{max}(\overline{\mathbf{W}}^{j:1}(0)\mathbf{X})\leq 1.2m^{\frac{j}{2}}\sigma_{max}(\mathbf{X}),\\ \sigma_{min}(\overline{\mathbf{W}}^{j:1}(0)\mathbf{X})\geq 0.8m^{\frac{j}{2}} \sigma_{min}(\mathbf{X}),\end{array}\right.\forall 1\leq j<L\]
\[\|\overline{\mathbf{W}}^{j:i}(0)\|\leq\mathcal{O}(\sqrt{L}m^{\frac{i-i+1}{2}}), \ \forall 1<i\leq j<L\]
\[\frac{1}{2}\|\overline{\mathbf{\xi}}(0)\|_{F}^{2}\leq B^{2}=\mathcal{O}(\max\{1, \frac{\log(r/\delta)}{d_{out}},\|\mathbf{W}^{*}\|^{2}\})\|\mathbf{X}\|_{F}^{2},\]
_where the requirement on the width satisfies \(m=\Omega(\frac{L\|\mathbf{X}\|^{4}d_{out}B}{\sigma_{min}^{2}(\mathbf{X})})=\Omega(L\kappa^{2}\sigma_{min}^{2}(\mathbf{X})d_{out}B)\)_
**Lemma VII.2**.: _For a matrix \(\mathbf{A}=[\mathbf{A}_{0},\cdots,\mathbf{A}_{N-1}]\), it holds that_
\[\|\mathbf{A}\|_{2}\leq\sqrt{\sum_{c\in[N]}\|\mathbf{A}_{c}\|_{2}^{2}}.\]
The above lemma can be proved by applying the triangle inequality.
**Lemma VII.3**.: _([35]) For any matrix \(\mathbf{A}\) and \(\mathbf{B}\), then it has_
\[\|\mathbf{A}\otimes\mathbf{B}\|_{2}=\|\mathbf{A}\|_{2}\|\mathbf{B}\|_{2}.\]
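A one-line numerical check of Lemma VII.3 (the spectral norm of a Kronecker product factorizes):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=(4, 3)), rng.normal(size=(2, 5))
lhs = np.linalg.norm(np.kron(A, B), 2)                                # spectral norm
print(np.isclose(lhs, np.linalg.norm(A, 2) * np.linalg.norm(B, 2)))  # True
```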
**Lemma VII.4**.: _(Theorem 4.1 in [8]) With \(\eta=\frac{d_{out}}{3L\|\mathbf{X}^{\top}\mathbf{X}\|}\leq\frac{d_{out}}{3L\|\mathbf{X}_{c}^{\top}\mathbf{X}_{c}\|}\), it has_
\[\|\mathbf{U}_{k,c}(t)-\mathbf{Y}_{c}\|_{2}^{2}\leq (1-\frac{\eta L\lambda_{min}(\mathbf{X}_{c}^{\top}\mathbf{X}_{c})}{4d_{out }})^{k}\|\mathbf{U}_{0,c}(t)-\mathbf{Y}_{c}\|_{2}^{2}\] \[\leq (1-\frac{\eta L\lambda_{min}(\mathbf{X}^{\top}\mathbf{X})}{4d_{out}})^{k} \|\mathbf{U}_{0,c}(t)-\mathbf{Y}_{c}\|_{2}^{2}.\]
It is noted that \(\lambda_{min}(\mathbf{X}^{\top}\mathbf{X})\leq\lambda_{min}(\mathbf{X}_{c}^{\top}\mathbf{X}_{c})\) and \(\lambda_{max}(\mathbf{X}^{\top}\mathbf{X})\geq\lambda_{max}(\mathbf{X}_{c}^{\top}\mathbf{X}_{c})\), then \(\kappa=\frac{\lambda_{max}(\mathbf{X}^{\top}\mathbf{X})}{\lambda_{min}(\mathbf{X}^{\top}\mathbf{X})}\geq\kappa_{c}=\frac{\lambda_{max}(\mathbf{X}_{c}^{\top}\mathbf{X}_{c})}{\lambda_{min}(\mathbf{X}_{c}^{\top}\mathbf{X}_{c})}\). As a result, it has
\[1-\eta L\frac{\lambda_{min}(\mathbf{X}_{c}^{\top}\mathbf{X}_{c})}{4d_{out}}\leq 1-\eta L \frac{\lambda_{min}(\mathbf{X}^{\top}\mathbf{X})}{4d_{out}}. \tag{27}\]
**Lemma VII.5**.: _Based on the inductive hypotheses for \(\tau\leq t\), with \(m=\Omega(L^{3}R^{2}\kappa^{2}N)\), it has_
\[\|\mathbf{P}(t,k)-\mathbf{P}(0)\|\leq\frac{0.109L}{d_{out}\kappa}\|\mathbf{X}\|^{2}.\]
Proof.: Denote \(\Delta_{1}(t)=\overline{\mathbf{W}}^{i-1:1}(t)\mathbf{X}-\overline{\mathbf{W}}^{i-1:1}(0)\mathbf{X}\), \(\Delta_{2}(t)=\mathbf{W}_{k,c}^{i-1:1}(t)\mathbf{X}_{c}-\overline{\mathbf{W}}^{i-1:1}(0)\mathbf{X}_{c}\), \(\Delta_{3}(t)=\overline{\mathbf{W}}^{L:i+1}(t)-\overline{\mathbf{W}}^{L:i+1}(0)\) and \(\Delta_{4}(t)=\mathbf{W}_{k,c}^{L:i+1}(t)-\overline{\mathbf{W}}^{L:i+1}(0)\).
It has
\[\|\Delta_{1}(t)\|\leq \frac{0.01}{\kappa\sqrt{N}}m^{\frac{i-1}{2}}\|\mathbf{X}\|,\|\Delta_{3} (t)\|\leq\frac{0.01}{\kappa\sqrt{N}}m^{\frac{L-i}{2}}, \tag{28}\]
according to Eq.(21) and Eq.(22).
Moreover, based on Eq.(34) and Eq.(22), it has
\[\|\Delta_{2}(t)\|=\|\mathbf{W}_{k,c}^{i-1:1}(t)\mathbf{X}_{c}-\overline{\mathbf{W}}^{i-1:1}(0)\mathbf{X}_{c}\|\] \[\leq \|\mathbf{W}_{k,c}^{i-1:1}(t)\mathbf{X}_{c}-\overline{\mathbf{W}}^{i-1:1}(t)\mathbf{X}_{c}\|+\|\overline{\mathbf{W}}^{i-1:1}(t)\mathbf{X}_{c}-\overline{\mathbf{W}}^{i-1:1}(0)\mathbf{X}_{c}\|\] \[\leq \frac{0.02}{\kappa\sqrt{N}}m^{\frac{i-1}{2}}\|\mathbf{X}_{c}\|. \tag{29}\]
Similarly, it has
\[\|\Delta_{4}(t)\|_{2}\leq \frac{0.02}{\kappa\sqrt{N}}m^{\frac{L-i}{2}}. \tag{30}\]
Considering \(\mathbf{P}(0)=[\mathbf{P}(0,1),\cdots,\mathbf{P}(0,N)]\) with \(\mathbf{P}(0,c)=\frac{1}{m^{L-1}d_{out}}\sum_{i=1}^{L}((\overline{\mathbf{W}}^{i-1:1}(0)\mathbf{X})^{\top}(\overline{\mathbf{W}}^{i-1:1}(0)\mathbf{X}_{c})\otimes\overline{\mathbf{W}}^{L:i+1}(0)\overline{\mathbf{W}}^{L:i+1}(0)^{\top})\) for \(1\leq c\leq N\) and \(C_{2}=\frac{1}{m^{L-1}d_{out}}\), it has
\[\|\mathbf{P}(t,k)-\mathbf{P}(0)\|\leq \sqrt{\sum_{c\in[N]}\|\mathbf{P}(t,k,c)-\mathbf{P}(0,c)\|^{2}},\]
where each summand is bounded by expanding the difference with the perturbation terms \(\Delta_{1}(t),\cdots,\Delta_{4}(t)\) and applying Lemma VII.3:
\[\|\mathbf{P}(t,k,c)-\mathbf{P}(0,c)\|\] \[\leq LC_{2}\Big{(}\|(\overline{\mathbf{W}}^{i-1:1}(0)\mathbf{X})^{\top}\overline{\mathbf{W}}^{i-1:1}(0)\mathbf{X}_{c}\|\|\overline{\mathbf{W}}^{L:i+1}(0)\Delta_{4}(t)^{\top}+\Delta_{3}(t)\overline{\mathbf{W}}^{L:i+1}(0)^{\top}+\Delta_{3}(t)\Delta_{4}(t)^{\top}\|\] \[+\|(\overline{\mathbf{W}}^{i-1:1}(0)\mathbf{X})^{\top}\Delta_{2}(t)+\Delta_{1}(t)^{\top}\overline{\mathbf{W}}^{i-1:1}(0)\mathbf{X}_{c}+\Delta_{1}(t)^{\top}\Delta_{2}(t)\|\|\overline{\mathbf{W}}^{L:i+1}(t)\mathbf{W}_{k,c}^{L:i+1}(t)^{\top}\|\Big{)}\] \[\leq LC_{2}\Big{(}1.2^{2}m^{i-1}\|\mathbf{X}\|^{2}\big{(}1.2\cdot\frac{0.02}{\kappa\sqrt{N}}+1.2\cdot\frac{0.01}{\kappa\sqrt{N}}+\frac{2\cdot 10^{-4}}{\kappa^{2}N}\big{)}m^{L-i}\] \[+1.25^{2}m^{L-i}\big{(}1.2\cdot\frac{0.02}{\kappa\sqrt{N}}+1.2\cdot\frac{0.01}{\kappa\sqrt{N}}+\frac{2\cdot 10^{-4}}{\kappa^{2}N}\big{)}m^{i-1}\|\mathbf{X}\|^{2}\Big{)}\leq \frac{0.109L}{d_{out}\kappa\sqrt{N}}\|\mathbf{X}\|^{2}.\]
Therefore,
\[\|\mathbf{P}(t,k)-\mathbf{P}(0)\|\leq \sqrt{\sum_{c\in[N]}\Big{(}\frac{0.109L}{d_{out}\kappa\sqrt{N}}\|\mathbf{X}\|^{2}\Big{)}^{2}}=\frac{0.109L}{d_{out}\kappa}\|\mathbf{X}\|^{2}.\]
**Lemma VII.6**.: _(Claims 7.1, 7.2 and 7.3 in [8]) With probability at least \(1-\delta\) over the random initialization, the following inequalities hold for all \(k\in[K]\), \(c\in[N]\) and \(1\leq i\leq j\leq L\) in local iteration \(k\), with \(m=\Omega(L\max\{r\kappa^{3}d_{out}(1+\|\mathbf{W}^{*}\|^{2}),r\kappa^{3}\log\frac{r}{\delta},\log L\})\):_
\[\|\mathbf{U}_{k,c}(t)-\mathbf{Y}_{c}\|_{F}^{2}\leq(1-\eta L\frac{\lambda_{min}(\mathbf{X}_{c}^{\top}\mathbf{X}_{c})}{4d_{out}})^{k}\|\overline{\mathbf{U}}_{c}(t)-\mathbf{Y}_{c}\|_{F}^{2}, \tag{31}\]
\[\|\mathbf{W}_{k,c}^{j:i}(t)-\overline{\mathbf{W}}^{j:i}(t)\|\leq\mathcal{O}(\sqrt{L})m^{\frac{j-i+1}{2}}\sum_{s=1}^{j-i+1}\left(\frac{\mathcal{O}(L^{3/2}R)}{\sqrt{m}}\right)^{s},\]
\[\|(\mathbf{W}_{k,c}^{i:1}(t)-\overline{\mathbf{W}}^{i:1}(t))\mathbf{X}\|\leq\frac{5}{4}m^{\frac{i}{2}}\sum_{s=1}^{i}\left(\frac{\mathcal{O}(L^{3/2}R)}{\sqrt{m}}\right)^{s}\|\mathbf{X}\|,\quad 1\leq i<L,\]
\[\|\mathbf{W}_{k,c}^{i}(t)-\overline{\mathbf{W}}^{i}(t)\|\leq R:=\frac{24\sqrt{d_{out}}\|\mathbf{X}_{c}\|}{L\sigma_{min}^{2}(\mathbf{X}_{c})}\|\overline{\mathbf{U}}_{c}(t)-\mathbf{Y}_{c}\|_{F}. \tag{32}\]
In addition, according to [8], it has
\[\|\operatorname{vec}(\mathbf{U}_{k+1,c}(t)-\mathbf{U}_{k,c}(t))\|\] \[\leq \eta\lambda_{max}(\mathbf{P}^{k+1,c}(t))\|\mathbf{\xi}_{k,c}(t)\|_{F}+\frac{\eta\lambda_{min}(\mathbf{P}^{k+1,c}(t))}{6}\|\overline{\mathbf{\xi}}(t)\|_{F}\] \[\leq \frac{7\eta\lambda_{max}(\mathbf{P}^{k+1,c}(t))}{6}\|\mathbf{\xi}_{k,c}(t)\|_{F}\leq\frac{57\eta\sigma_{max}^{2}(\mathbf{X}_{c})}{20d_{out}}\|\mathbf{\xi}_{k,c}(t)\|_{F}, \tag{33}\]
where \(\mathbf{P}^{k+1,c}(t)\) denotes the centralized Gram matrix calculated on client \(c\), and the last inequality uses the upper bound of the largest eigenvalue, \(\lambda_{max}(\mathbf{P}^{k+1,c}(t))\leq 1.25^{4}L\sigma_{max}^{2}(\mathbf{X}_{c})/d_{out}\), as proved in [8].
Based on Lemma VII.6 and \(m=\Omega(L^{3}R^{2}\kappa^{2}N)\), it has
\[\|(\mathbf{W}_{k,c}^{i:1}(t)-\overline{\mathbf{W}}^{i:1}(t))\mathbf{X}\|\leq \frac{5}{4}m^{\frac{i}{2}}\sum_{s=1}^{i}(\frac{\mathcal{O}(L^{3/ 2}R)}{\sqrt{m}})^{s}\|\mathbf{X}\|\] \[\leq \frac{0.01}{\kappa\sqrt{N}}m^{\frac{i}{2}}\|\mathbf{X}\|. \tag{34}\]
Similarly, we have
\[\|\mathbf{W}_{k,c}^{L:i}(t)-\overline{\mathbf{W}}^{L:i}(t)\|\leq \frac{5}{4}m^{\frac{L-i+1}{2}}\sum_{s=1}^{L-i+1}(\frac{\mathcal{ O}(L^{3/2}R)}{\sqrt{m}})^{s}\] \[\leq \frac{0.01}{\kappa\sqrt{N}}m^{\frac{L-i+1}{2}}. \tag{35}\]
According to Eq. (21) and Lemma VII.6, we have
\[\|\mathbf{U}_{k,c}(t)-\mathbf{Y}_{c}\|_{F}^{2}\leq(1-\eta L\frac{\lambda_{min}(\mathbf{X} ^{\top}\mathbf{X})}{4d_{out}})^{k}\|\overline{\mathbf{U}}_{c}(t)-\mathbf{Y}_{c}\|_{F}^{2}\]
As a result, we have
\[\|\mathbf{U}_{k}(t)-\mathbf{Y}\|_{F}^{2}= \sum_{c\in[N]}\|\mathbf{U}_{k,c}(t)-\mathbf{Y}_{c}\|_{F}^{2}\] \[\leq (1-\eta L\frac{\lambda_{min}(\mathbf{X}^{\top}\mathbf{X})}{4d_{out}})^{k} \|\overline{\mathbf{U}}(t)-\mathbf{Y}\|_{F}^{2}, \tag{36}\] \[\|\mathbf{U}_{k}(t)-\mathbf{Y}\|_{F}\leq (1-\frac{\eta L\lambda_{min}(\mathbf{X}^{\top}\mathbf{X})}{8d_{out}})^{k} \|\overline{\mathbf{U}}(t)-\mathbf{Y}\|_{F}, \tag{37}\]
where the last inequality uses \(\sqrt{1-x}\leq 1-x/2\) for \(0\leq x\leq 1\).
We then prove a lemma that controls the parameter updates during the local steps.
Fig. 3: The impact of \(K\) on the convergence of FedAvg.
**Lemma VII.7**.: _In the \(t\)-th global update, for all \(k\in[K]\) and \(c\in[N]\), we have_
\[\|\overline{\mathbf{U}}_{c}(t)-\mathbf{U}_{k,c}(t)\|_{F}\leq\frac{57k\eta\sigma_{max}^{2}(\mathbf{X}_{c})}{10d_{out}}\|\overline{\mathbf{U}}_{c}(t)-\mathbf{Y}_{c}\|_{F},\] \[\|\mathbf{U}_{k}(t)-\overline{\mathbf{U}}(t)\|_{F}^{2}\leq \Big(\max_{c\in[N]}\frac{57k\eta\sigma_{max}^{2}(\mathbf{X}_{c})}{10d_{out}}\Big)^{2}\|\overline{\mathbf{U}}(t)-\mathbf{Y}\|_{F}^{2}.\]
Proof.: From Eq. (33), we have
\[\|\mathbf{U}_{k,c}(t)-\mathbf{Y}_{c}\|_{F}\leq \|\mathbf{U}_{k,c}(t)-\mathbf{U}_{k-1,c}(t)\|_{F}+\|\mathbf{U}_{k-1,c}(t)-\mathbf{Y }_{c}\|_{F}\] \[\leq (\frac{57\eta\sigma_{max}^{2}(\mathbf{X}_{c})}{20d_{out}}+1)\|\mathbf{U} _{k-1,c}(t)-\mathbf{Y}_{c}\|_{F}\] \[\leq (\frac{57\eta\sigma_{max}^{2}(\mathbf{X}_{c})}{20d_{out}}+1)^{k}\| \overline{\mathbf{U}}_{c}(t)-\mathbf{Y}_{c}\|_{F}.\]
In turn, we have
\[\|\overline{\mathbf{U}}_{c}(t)-\mathbf{U}_{k,c}(t)\|_{F}\leq\sum_{i=1}^{k }\|\mathbf{U}_{i,c}(t)-\mathbf{U}_{i-1,c}(t)\|_{F}\] \[\leq \sum_{i=1}^{k}\frac{57\eta\sigma_{max}^{2}(\mathbf{X}_{c})}{20d_{out} }\|\mathbf{U}_{i-1,c}(t)-\mathbf{Y}_{c}\|_{F}\] \[\leq \sum_{i=1}^{k}\frac{57\eta\sigma_{max}^{2}(\mathbf{X}_{c})}{20d_{out} }(\frac{57\eta\sigma_{max}^{2}(\mathbf{X}_{c})}{20d_{out}}+1)^{i-1}\|\overline{\bm {U}}_{c}(t)-\mathbf{Y}_{c}\|_{F}\] \[\leq \frac{57k\eta\sigma_{max}^{2}(\mathbf{X}_{c})}{10d_{out}}\|\overline{ \mathbf{U}}_{c}(t)-\mathbf{Y}_{c}\|_{F},\]
where the last inequality applies
\[\sum_{i=1}^{k}(x+1)^{i-1}\leq\sum_{i=1}^{k}(x+1)^{K}\leq 2k, \tag{38}\]
which holds because
\[(\frac{57\eta\sigma_{max}^{2}(\mathbf{X}_{c})}{20d_{out}}+1)^{K}\leq (\frac{7}{100\kappa K}+1)^{K}\leq e^{\frac{7}{100\kappa}}\leq 2\]
Therefore, we have
\[\|\mathbf{U}_{k}(t)-\overline{\mathbf{U}}(t)\|_{F}^{2}= \sum_{c\in[N]}\|\mathbf{U}_{k,c}(t)-\overline{\mathbf{U}}_{c}(t)\|_{F}^{2}\] \[\leq \Big(\max_{c\in[N]}\frac{57k\eta\sigma_{max}^{2}(\mathbf{X}_{c})}{10d_{out}}\Big)^{2}\|\overline{\mathbf{U}}(t)-\mathbf{Y}\|_{F}^{2} \tag{39}\]
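As a quick numeric sanity check of the geometric-sum step (38), the bound can be verified directly; a minimal Python sketch, where the values of \(\kappa\) and \(K\) are illustrative and not taken from the analysis above:

```python
import numpy as np

# Check sum_{i=1}^{k} (x+1)^{i-1} <= 2k for x = 7/(100*kappa*K) and all k <= K,
# which is the arithmetic step used in Eq. (38).
for kappa in [1.0, 2.0, 10.0]:      # illustrative condition numbers
    for K in [5, 20, 100]:          # illustrative local-step counts
        x = 7.0 / (100.0 * kappa * K)
        for k in range(1, K + 1):
            lhs = np.sum((x + 1.0) ** np.arange(k))  # geometric sum
            assert lhs <= 2 * k
print("geometric-sum bound (38) holds on all tested (kappa, K, k)")
```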
|
2303.07392 | Efficient Bayesian Physics Informed Neural Networks for Inverse Problems
via Ensemble Kalman Inversion | Bayesian Physics Informed Neural Networks (B-PINNs) have gained significant
attention for inferring physical parameters and learning the forward solutions
for problems based on partial differential equations. However, the
overparameterized nature of neural networks poses a computational challenge for
high-dimensional posterior inference. Existing inference approaches, such as
particle-based or variance inference methods, are either computationally
expensive for high-dimensional posterior inference or provide unsatisfactory
uncertainty estimates. In this paper, we present a new efficient inference
algorithm for B-PINNs that uses Ensemble Kalman Inversion (EKI) for
high-dimensional inference tasks. We find that our proposed method can achieve
inference results with informative uncertainty estimates comparable to
Hamiltonian Monte Carlo (HMC)-based B-PINNs with a much reduced computational
cost. These findings suggest that our proposed approach has great potential for
uncertainty quantification in physics-informed machine learning for practical
applications. | Andrew Pensoneault, Xueyu Zhu | 2023-03-13T18:15:26Z | http://arxiv.org/abs/2303.07392v1 | Efficient Bayesian Physics Informed Neural Networks for Inverse Problems via Ensemble Kalman Inversion
###### Abstract
Bayesian Physics Informed Neural Networks (B-PINNs) have gained significant attention for inferring physical parameters and learning the forward solutions for problems based on partial differential equations. However, the overparameterized nature of neural networks poses a computational challenge for high-dimensional posterior inference. Existing inference approaches, such as particle-based or variational inference methods, are either computationally expensive for high-dimensional posterior inference or provide unsatisfactory uncertainty estimates. In this paper, we present a new efficient inference algorithm for B-PINNs that uses Ensemble Kalman Inversion (EKI) for high-dimensional inference tasks. By reframing the setup of B-PINNs as a traditional Bayesian inverse problem, we can take advantage of EKI's key features: (1) gradient-free, (2) computational complexity scales linearly with the dimension of the parameter spaces, and (3) rapid convergence with typically \(\mathcal{O}(100)\) iterations. We demonstrate the applicability and performance of the proposed method through various types of numerical examples. We find that our proposed method can achieve inference results with informative uncertainty estimates comparable to Hamiltonian Monte Carlo (HMC)-based B-PINNs with a much reduced computational cost. These findings suggest that our proposed approach has great potential for uncertainty quantification in physics-informed machine learning for practical applications.
Keywords: Bayesian Physics Informed Neural Networks, Inverse Problems, Ensemble Kalman Inversion, Gradient-free
## 1 Introduction
Many applications in science and engineering can be accurately modeled by partial differential equations (PDEs). These equations often contain parameters corresponding to the physical properties of the system. In practice, these properties are often challenging to measure directly. Inverse problems arise when indirect measurements of the system are used to infer these model parameters. These problems are often ill-posed, with the available measurements being limited and noisy. Traditional approaches for solving these problems can be sensitive to noise and data scarcity. In addition, they often require many runs of sophisticated forward numerical solvers [46], which can be computationally expensive. Furthermore, it is not uncommon that the initial or boundary conditions are missing for real-world applications. In such a case, the traditional numerical forward solver might not even be able to run. To address these issues, there is growing interest in developing more efficient, robust, and flexible alternatives.
Recently, Scientific Machine Learning (Sci-ML) [4, 42, 37, 41], a set of approaches that combine domain-specific scientific and engineering knowledge with powerful machine learning tools, has received much attention. These approaches have proven effective in solving PDE-based inverse problems efficiently. One particularly promising approach is Physics Informed Neural Networks (PINNs) [42, 31]. PINNs construct a neural network approximation of the forward solution to the underlying problem while inferring model parameters simultaneously by minimizing data misfit regularized by the underlying governing PDE residual loss. In addition to obtaining estimates of these quantities, the presence of noise and lack of data make it essential to quantify the impact of uncertainty on the surrogate and parameters, especially in high-stakes
applications [50, 41]. However, standard PINN approaches provide only deterministic estimates. Non-Bayesian approaches to uncertainty quantification (UQ) in PINN, such as neural network dropout [48], have been explored. Despite their efficiency, the estimates provided by these methods tend to be less satisfactory, as pointed out in [48, 30].
Alternatively, several attempts to develop Bayesian PINNs (B-PINNs) have been explored, enabling uncertainty quantification in the neural network approximations and the corresponding physical parameter estimates. These methods utilize Bayesian Neural Networks (BNNs) as surrogates by treating weights of neural networks and physical parameters as random variables. In general, Markov Chain Monte Carlo (MCMC) methods are one of the most popular approaches for Bayesian inference tasks. These methods approximate the posterior distribution of parameters with a finite set of samples. While these approaches provide inference with asymptotic convergence guarantees, the use of standard MCMC methods in deep learning applications is limited by their poor scalability for large neural network architectures and large data sets [40]. In practice, Hamiltonian Monte Carlo (HMC) is the gold standard for inference for Bayesian neural networks [17]. HMC is an MCMC method that constructs and solves a Hamiltonian system from the posterior distribution. This approach enables higher acceptance rates than traditional MCMC methods. In the context of B-PINNs [48], an HMC-based B-PINN has been investigated for inference tasks. Despite this, the computational cost of inference remains high.
Alternatively, Variational Inference (VI) approaches to B-PINNs [48] posit that the posterior distribution of the physical and neural network parameters lives within a family of parameterized distributions and solves a deterministic optimization problem to find an optimal distribution within that family [7]. In general, VI approaches are more efficient than HMC approaches and scale well for large parameter spaces and data sizes. Variational inference methods, however, do not share the same theoretical guarantees as MCMC approaches and only provide estimates within the function space of the family of parameterized distributions. Additionally, the inference quality depends on the space of parameterized densities chosen. For example, in [48], KL divergence-based B-PINNs tend to provide less satisfactory estimates than the corresponding HMC B-PINNs and are less robust to measurement noise.
The particle-based VI approach bridges the gap between VI and MCMC methods, combining the strengths of both approaches to provide more efficient inference than MCMC methods while also providing greater flexibility through non-parametric estimates in contrast with the parametric approximations utilized in VI [47]. In the context of B-PINNs, Stein Variational Gradient Descent (SVGD) has been proposed for inference tasks to reconstruct idealized vascular flows with sparse and noisy velocity data [48]. However, SVGD tends to underestimate the uncertainty of the distribution for high dimensional problems, collapsing to several modes [3, 45].
Recently, Ensemble Kalman inversion (EKI) [29, 32, 28] has been introduced as a particle-based VI approach that uses the Ensemble Kalman filter algorithm to solve traditional Bayesian inverse problems. EKI methods have many appealing properties: they are gradient-free, easily parallelizable, robust to noise, and computationally scale linearly with ensemble size [36]. Additionally, these methods use low-rank approximate Hessian (and gradient) information, allowing for rapid convergence of the methods [49]. In the case of a Gaussian prior and linear Gaussian likelihood [33], Ensemble Kalman methods have asymptotic convergence to the correct posterior distribution in a Bayesian sense. Ensemble Kalman methods are asymptotically biased
when these assumptions are violated, yet they are computationally efficient compared to asymptotically unbiased methods and empirically provide reasonable estimates [19, 8]. Recently, these methods have also been used to train neural networks efficiently [32, 23, 14, 22, 49]. In [32], EKI was first proposed as a derivative-free approach to train the neural networks but primarily for traditional purely data-driven machine learning problems. Recently, a one-shot variant of EKI [22] has been used to learn maximum a posteriori (MAP) estimates of a NN surrogate and model parameters for several inverse problems involving PDEs; however, this approach still requires traditional numerical solvers or discretizations to enforce the underlying physical laws, which can be computationally expensive for large scale complex applications. Additionally, while EKI has been traditionally used to obtain MAP estimates of unknown parameters, recent works have begun to investigate the use of the EKI methods to efficiently draw approximate samples for Bayesian inference for traditional Bayesian inverse problems [8, 19, 25, 26, 15].
Motivated by the recent advances in EKI, we present a novel efficient inference method for B-PINNs based on EKI. Specifically, we first recast the setup of B-PINNs as a traditional Bayesian inverse problem so that EKI can be applied to obtain approximate posterior estimates. Furthermore, based on the variant of EKI in [32], we present an efficient sampling-based inference approach to B-PINNs. Because our approach inherits the properties of EKI, it provides efficient gradient-free inference and is well suited for large-scale neural networks thanks to the linear computational complexity of the dimension of unknown parameters [36]. Further, unlike the traditional setting for EKI [29], this approach replaces the expensive numerical forward solver with a NN surrogate trained on the measurements and the underlying physics laws to jointly estimate the physical parameters and forward solution with the corresponding uncertainty estimation. Because the trained neural network surrogate is cheap to evaluate, our proposed approach can draw a large number of samples efficiently and is thus expected to reduce sampling errors significantly. Through various classes of numerical examples, we show that the EKI method can efficiently infer physical parameters and learn valuable uncertainty estimates in the context of B-PINNs. Furthermore, the empirical results show that EKI B-PINNs can deliver comparable inference results to HMC B-PINNs while requiring significantly less computational time. To our best knowledge, this is the _first attempt_ to use EKI for efficient inference under the context of B-PINNs.
The rest of the paper is organized as follows: in section 2, we first introduce the setup of the PDE-based inverse problem and briefly review B-PINNs. In section 3, we first reframe the problem in terms of a Bayesian Inverse problem and introduce EKI under the context of Bayesian inverse problems. We then introduce the building blocks of our proposed EKI B-PINNs framework. We demonstrate the performance of this approach via several numerical examples in section 4. Finally, we conclude in section 5.
## 2 Problem Setup and Background
We consider the following partial differential equation (PDE):
\[\mathcal{N}_{x}(u(\mathbf{x});\mathbf{\lambda}) =f(\mathbf{x}) \mathbf{x}\in\Omega, \tag{1}\] \[\mathcal{B}_{x}(u(\mathbf{x});\mathbf{\lambda}) =b(\mathbf{x}) \mathbf{x}\in\partial\Omega, \tag{2}\]
where \(\mathcal{N}_{x}\) and \(\mathcal{B}_{x}\) denote the differential and boundary operators, respectively. The spatial domain \(\Omega\subseteq\mathbb{R}^{d}\) has boundary \(\partial\Omega\), and \(\mathbf{\lambda}\in\mathbb{R}^{N_{\lambda}}\) represents a vector of unknown physical parameters. The forcing function \(f(\mathbf{x})\) and boundary function \(b(\mathbf{x})\) are given,
and \(\mathbf{u}(\mathbf{x})\) is the solution of the PDE. For time-dependent problems, we consider time \(t\) as a component of \(\mathbf{x}\) and consider domain \(\Omega\) and boundary \(\partial\Omega\) to additionally contain the temporal domain and initial boundary, respectively.
In this setting, we additionally have access to \(N_{u}\) measurements \(\mathcal{D}_{u}=\{(\mathbf{x}_{u}^{i},u^{i})\}_{i=1}^{N_{u}}\) of the forward solution at various locations. Given the available information, we aim to infer the physical parameters \(\mathbf{\lambda}\) with uncertainty estimation.
### Bayesian Physics Informed Neural Networks (B-PINNs)
Over the last several years, SciML approaches for solving inverse problems have received much attention [4, 31]. One of the most promising approaches is Physics Informed Neural Networks (PINNs), which approximate the forward solution \(u(\mathbf{x})\) with a fully connected neural network surrogate \(\tilde{u}(\mathbf{x};\mathbf{\theta})\), parameterized by the neural network's weight parameters \(\mathbf{\theta}\in\mathbb{R}^{N_{\theta}}\). Denote \(\mathbf{\xi}=[\mathbf{\theta},\mathbf{\lambda}]\) as the concatenation of the neural network and physical parameters. The network is trained by minimizing a weighted sum of the data misfit, the initial/boundary data misfit, and the PDE residual loss (1), each evaluated over a set of discrete points in the domain, the "residual points" and "boundary points," respectively, defined as follows:
\[\mathcal{D}_{f}=\{(\mathbf{x}_{f}^{i},f(\mathbf{x}_{f}^{i}))\}_{ i=1}^{N_{f}} =\{(\mathbf{x}_{f}^{i},f^{i})\}_{i=1}^{N_{f}} \tag{3}\] \[\mathcal{D}_{b}=\{(\mathbf{x}_{b}^{i},b(\mathbf{x}_{b}^{i}))\}_{ i=1}^{N_{b}} =\{(\mathbf{x}_{b}^{i},b^{i})\}_{i=1}^{N_{b}}, \tag{4}\]
with residual locations \(\mathbf{x}_{f}^{i}\in\Omega\) and boundary locations \(\mathbf{x}_{b}^{i}\in\partial\Omega\). With these notions, the corresponding PINN loss function is defined as follows:
\[\mathcal{L}(\mathbf{\xi})=\omega_{u}\mathcal{L}_{u}(\mathbf{\xi})+\omega_{f}\mathcal{ L}_{f}(\mathbf{\xi})+\omega_{b}\mathcal{L}_{b}(\mathbf{\xi}), \tag{5}\]
where
\[\mathcal{L}_{u}(\mathbf{\xi}) =\frac{1}{N_{u}}\sum_{i=1}^{N_{u}}|u^{i}-\tilde{u}(\mathbf{x}_{u}^ {i};\mathbf{\theta})|^{2}, \tag{6}\] \[\mathcal{L}_{f}(\mathbf{\xi}) =\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}|f^{i}-\mathcal{N}_{x}(\tilde{ u}(\mathbf{x}_{f}^{i};\mathbf{\theta});\mathbf{\lambda})|^{2},\] (7) \[\mathcal{L}_{b}(\mathbf{\xi}) =\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}|b^{i}-\mathcal{B}_{x}(\tilde{ u}(\mathbf{x}_{b}^{i};\mathbf{\theta});\mathbf{\lambda})|^{2}, \tag{8}\]
and \(\omega_{u}\), \(\omega_{f}\), and \(\omega_{b}\) are the weights for each term. In practice, this loss is often minimized with ADAM or L-BFGS optimizers. Standard PINNs typically provide only a deterministic estimate of the target parameters \(\mathbf{\xi}\)[42, 37, 31]. These estimates may be inaccurate and unreliable for problems with small or noisy datasets. Therefore, qualifying the uncertainty in the estimate would be desirable.
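For concreteness, the composite loss (5)-(8) is only a few lines of code. Below is a minimal Python/JAX sketch for the simplified linear operator \(\mathcal{N}_{x}(u)=u_{xx}\) with Dirichlet boundary conditions; the small tanh network mirrors the architecture described in Section 4, but all names and sizes here are illustrative assumptions, not the authors' implementation:

```python
import jax
import jax.numpy as jnp

def init_params(key, sizes=(1, 50, 50, 1)):
    # Random init of a small tanh MLP (two hidden layers of width 50).
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (o, i)) / jnp.sqrt(i), jnp.zeros(o))
            for k, i, o in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    # Scalar-in, scalar-out surrogate u~(x; theta).
    h = jnp.atleast_1d(x)
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = params[-1]
    return (W @ h + b)[0]

def pinn_loss(params, xu, u_obs, xf, f_obs, xb, b_obs,
              wu=1.0, wf=1.0, wb=1.0):
    u = jax.vmap(lambda x: mlp(params, x))
    u_xx = jax.vmap(jax.grad(jax.grad(lambda x: mlp(params, x))))
    L_u = jnp.mean((u_obs - u(xu)) ** 2)      # data misfit, Eq. (6)
    L_f = jnp.mean((f_obs - u_xx(xf)) ** 2)   # residual for u_xx = f, Eq. (7)
    L_b = jnp.mean((b_obs - u(xb)) ** 2)      # boundary misfit, Eq. (8)
    return wu * L_u + wf * L_f + wb * L_b     # weighted sum, Eq. (5)
```

The loss can then be handed to any gradient-based optimizer via `jax.grad(pinn_loss)`.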
To account for the uncertainty, Bayesian Physics-Informed Neural Networks (B-PINNs) (e.g., [43, 48, 35, 2]) have been proposed. B-PINNs are built on Bayesian Neural Networks (BNNs) by treating neural network weights and biases \(\mathbf{\theta}\) and physical parameters \(\mathbf{\lambda}\) as random variables. By Bayes' theorem, the posterior distribution of the parameters \(\xi\) conditions on the forward measurements \(\mathcal{D}_{u}\), the residual points \(\mathcal{D}_{f}\), and the boundary points \(\mathcal{D}_{b}\) can be obtained as follows:
\[p(\mathbf{\xi}|\mathcal{D}_{u},\mathcal{D}_{f},\mathcal{D}_{b})\propto p(\mathbf{\xi} )p(\mathcal{D}_{u},\mathcal{D}_{f},\mathcal{D}_{b}|\mathbf{\xi}). \tag{9}\]
The choice of prior distribution \(p(\mathbf{\xi})\) and likelihood \(p(\mathcal{D}_{u},\mathcal{D}_{f},\mathcal{D}_{b}|\mathbf{\xi})\) will greatly affect the properties of the posterior distribution \(p(\mathbf{\xi}|\mathcal{D}_{u},\mathcal{D}_{f},\mathcal{D}_{b})\). A typical choice for the prior is to assume independence between the physical parameters \(\mathbf{\lambda}\) and neural network parameters \(\mathbf{\theta}\), i.e. \(p(\mathbf{\xi})=p(\mathbf{\theta})p(\mathbf{\lambda})\). Additionally, the neural network parameters \(\mathbf{\theta}=\{\theta^{i}\}_{i=1}^{N_{\theta}}\) are often assumed to follow independent zero-mean Gaussian distributions, i.e.
\[p(\mathbf{\theta})=\prod_{i=1}^{N_{\theta}}p(\theta^{i}),\quad p(\theta^{i})\sim \mathcal{N}\left(0,\sigma_{\theta}^{i}\right), \tag{10}\]
where \(\sigma_{\theta}^{i}\) is the standard deviation of the corresponding neural network parameter \(\theta^{i}\). For the likelihood, independence between the forward measurements \(\mathcal{D}_{u}\), residual points \(\mathcal{D}_{f}\), and boundary points \(\mathcal{D}_{b}\) is often assumed as follows:
\[p(\mathcal{D}_{u},\mathcal{D}_{f},\mathcal{D}_{b}|\mathbf{\xi})=p(\mathcal{D}_{u} |\mathbf{\xi})p(\mathcal{D}_{f}|\mathbf{\xi})p(\mathcal{D}_{b}|\mathbf{\xi}). \tag{11}\]
Each term within \(\mathcal{D}_{u}\), \(\mathcal{D}_{f}\), and \(\mathcal{D}_{b}\) is often assumed to follow a Gaussian distribution of the form
\[p(\mathcal{D}_{u}|\mathbf{\xi}) =\prod_{i=1}^{N_{u}}p(u^{i}|\mathbf{\xi}),\quad p(\mathcal{D}_{f}|\bm {\xi})=\prod_{i=1}^{N_{f}}p(f^{i}|\mathbf{\xi}),\quad p(\mathcal{D}_{b}|\mathbf{\xi})= \prod_{i=1}^{N_{b}}p(b^{i}|\mathbf{\xi}), \tag{12}\] \[p(u^{i}|\mathbf{\xi}) =\frac{1}{\sqrt{2\pi\sigma_{\eta_{u}}^{2}}}\exp\left(-\frac{ \left(u^{i}-\tilde{u}(\mathbf{x}_{u}^{i};\mathbf{\theta})\right)^{2}}{2\sigma_{\eta _{u}}^{2}}\right),\] (13) \[p(f^{i}|\mathbf{\xi}) =\frac{1}{\sqrt{2\pi\sigma_{\eta_{f}}^{2}}}\exp\left(-\frac{ \left(f^{i}-\mathcal{N}_{x}(\tilde{u}(\mathbf{x}_{f}^{i};\mathbf{\theta});\mathbf{ \lambda})\right)^{2}}{2\sigma_{\eta_{f}}^{2}}\right),\] (14) \[p(b^{i}|\mathbf{\xi}) =\frac{1}{\sqrt{2\pi\sigma_{\eta_{b}}^{2}}}\exp\left(-\frac{ \left(b^{i}-\mathcal{B}_{x}(\tilde{u}(\mathbf{x}_{b}^{i};\mathbf{\theta});\mathbf{ \lambda})\right)^{2}}{2\sigma_{\eta_{b}}^{2}}\right). \tag{15}\]
Here, \(\sigma_{\eta_{u}}\), \(\sigma_{\eta_{f}}\), and \(\sigma_{\eta_{b}}\) are the standard deviations for the forward measurement, residual point, and boundary point, respectively. Choice of physical parameter prior \(p(\mathbf{\lambda})\) is often problem-dependent, as this distribution represents domain knowledge of the corresponding physical property. Given these choices of prior and likelihood functions, one can construct the corresponding posterior distribution of the BNN. For most BNNs, the closed-form expressions for the posterior distribution are unavailable, and approximate inference methods must be employed. Due to the overparameterized nature of neural networks, the resulting Bayesian inference problem is often high-dimensional for even moderately sized BNNs.
### Hamiltonian Monte Carlo (HMC)
Next, we briefly review Hamiltonian Monte Carlo (HMC), a popular inference algorithm for B-PINNs [48] that serves as a baseline for our proposed method. HMC is a powerful method for sampling-based posterior inference [18] and has been utilized for inference in Bayesian Neural Networks (BNNs) [39]. HMC employs Hamiltonian dynamics to propose states in parameter space that enjoy high acceptance rates in the Metropolis-Hastings acceptance step. Given the posterior distribution
\(p(\mathbf{\xi}|\mathcal{D}_{u},\mathcal{D}_{f},\mathcal{D}_{b})\propto e^{-U(\mathbf{\xi})}\), where \(U\) is the negative log-density of the posterior, we define the Hamiltonian as follows
\[H(\mathbf{\xi},\mathbf{r})=U(\mathbf{\xi})+\frac{1}{2}\mathbf{r}^{T}M^{-1}\mathbf{r}, \tag{16}\]
where \(\mathbf{r}\in\mathbb{R}^{N_{\xi}}\) is an auxiliary momentum vector, and \(M\in\mathbb{R}^{N_{\xi}\times N_{\xi}}\) is the corresponding mass matrix, often set to the identity \(I_{N_{\xi}}\). Starting from an initial sample of \(\mathbf{\xi}\), the HMC generates proposal samples by resampling momentum \(\mathbf{r}\sim\mathcal{N}(0,M)\) and advancing \((\mathbf{\xi},\mathbf{r})\) through Hamiltonian dynamics
\[\frac{d\mathbf{\xi}}{dt} =M^{-1}\mathbf{r}, \tag{17}\] \[\frac{d\mathbf{r}}{dt} =-\nabla U(\mathbf{\xi}). \tag{18}\]
This is often done via Leapfrog integration [18] for \(L\) steps given a step size \(\delta t\). Following this, a Metropolis-Hastings acceptance step is applied to determine if the given sample will be accepted. The details of the HMC are shown in Algorithm 1. The variant HMC B-PINN used in this paper is based on the version in [50], to which we refer interested readers for more details.
```
1:Input: \(\mathbf{\xi}_{0}\) (initial sample), \(\delta t\) (step size), \(L\) (leapfrog steps)
2:for\(i=1,...,J\)do
3:\(\mathbf{\xi}_{i}\leftarrow\mathbf{\xi}_{i-1}\)
4: Sample \(\mathbf{r}_{i}\sim\mathcal{N}(0,M)\)
5:\(\hat{\mathbf{\xi}}_{i}\leftarrow\mathbf{\xi}_{i}\)
6:\(\hat{\mathbf{r}}_{i}\leftarrow\mathbf{r}_{i}\)
7:for\(j=1,...,L\)do
8:\(\hat{\mathbf{r}}_{i}\leftarrow\hat{\mathbf{r}}_{i}-\frac{\delta t}{2}\nabla U(\hat{ \mathbf{\xi}}_{i})\)
9:\(\hat{\mathbf{\xi}}_{i}\leftarrow\hat{\mathbf{\xi}}_{i}+\delta tM^{-1}\hat{\mathbf{r}}_{i}\)
10:\(\hat{\mathbf{r}}_{i}\leftarrow\hat{\mathbf{r}}_{i}-\frac{\delta t}{2}\nabla U(\hat{ \mathbf{\xi}}_{i})\)
11:endfor
12: Sample \(p\sim\mathcal{U}(0,1)\)
13:\(\alpha\leftarrow\min[1,\exp(H(\mathbf{\xi}_{i},\mathbf{r}_{i})-H(\hat{\mathbf{\xi}}_{i},\hat{\mathbf{r}}_{i}))]\)
14:if\(p<\alpha\)then
15:\(\mathbf{\xi}_{i}\leftarrow\hat{\mathbf{\xi}}_{i}\)
16:endif
17:endfor
18:Return: \(\mathbf{\xi}_{1},...,\mathbf{\xi}_{J}\) (Posterior samples)
```
**Algorithm 1** Hamiltonian Monte Carlo (HMC)
From the HMC algorithm, we obtain a set of approximate samples from the B-PINNs posterior distribution \(p(\mathbf{\xi}|\mathcal{D}_{u},\mathcal{D}_{f},\mathcal{D}_{b})\). We shall use these samples to obtain uncertainty estimates of the approximate forward solution and physical parameters. Denote \(\bar{\lambda}\) and \(\bar{u}\) to be the sample mean of physical parameter \(\lambda\) and forward surrogate \(\tilde{u}\), respectively. Additionally, we denote the corresponding sample standard deviations \(s_{\lambda}\) and \(s_{\tilde{u}}\). We compute the sample statistics over the \(J\) samples \(\{\lambda_{j}\}_{j=1}^{J}\) and
\(\{\tilde{u}(\mathbf{x};\mathbf{\theta}_{j})\}_{j=1}^{J}\) obtained from the HMC as follows:
\[\bar{\lambda} =\frac{1}{J}\sum_{j=1}^{J}\lambda_{j},\quad\bar{u}(\mathbf{x})= \frac{1}{J}\sum_{j=1}^{J}\tilde{u}(\mathbf{x};\mathbf{\theta}_{j}), \tag{19}\] \[s_{\lambda} =\sqrt{\frac{\sum_{j=1}^{J}\left(\lambda_{j}-\bar{\lambda} \right)^{2}}{J-1}},\quad s_{\tilde{u}}(\mathbf{x})=\sqrt{\frac{\sum_{j=1}^{J} \left(\tilde{u}(\mathbf{x};\mathbf{\theta}_{j})-\bar{u}(\mathbf{x})\right)^{2}}{J- 1}}. \tag{20}\]
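For reference, the leapfrog proposal and Metropolis correction of Algorithm 1 condense to a short routine. The following plain Python/NumPy sketch targets a generic negative log-density `U` and assumes the identity mass matrix \(M=I\); the function names are illustrative:

```python
import numpy as np

def hmc_step(xi, U, grad_U, dt, L, rng):
    """One HMC transition targeting exp(-U(xi)); M = I is assumed."""
    r = rng.standard_normal(xi.shape)           # resample momentum
    xi_new, r_new = xi.copy(), r.copy()
    r_new -= 0.5 * dt * grad_U(xi_new)          # half step (momentum)
    for _ in range(L):
        xi_new += dt * r_new                    # full step (position)
        r_new -= dt * grad_U(xi_new)            # full step (momentum)
    r_new += 0.5 * dt * grad_U(xi_new)          # correct the last half step
    H_old = U(xi) + 0.5 * r @ r
    H_new = U(xi_new) + 0.5 * r_new @ r_new
    # Metropolis-Hastings acceptance with probability min(1, exp(H_old - H_new))
    return xi_new if rng.uniform() < np.exp(H_old - H_new) else xi
```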
## 3 Ensemble Kalman Inversion-based B-PINNs
In this section, we first briefly review Ensemble Kalman Inversion (EKI) as an efficient method for solving Bayesian inverse problems. Following that, we present our proposed method, denoted _EKI B-PINNs_. Specifically, we first recast the setup of B-PINNs in the traditional Bayesian inverse problem setting and then employ EKI for efficient sampling-based inference.
### Ensemble Kalman Inversion (EKI)
Ensemble Kalman Inversion (EKI) [29, 27, 28, 32] is a popular class of methods that utilize the Ensemble Kalman Filter (EnKF) [20] in the context of traditional inverse problems. These methods are derivative-free, easily parallelizable, and scale well in high-dimensional inverse problems with ensemble sizes much smaller than the total number of parameters [32]. Assume that the unknown parameters \(\mathbf{\xi}\in\mathbb{R}^{N_{\xi}}\) have a prior distribution \(p(\mathbf{\xi})\) and the observations \(\mathbf{y}\in\mathbb{R}^{N_{y}}\) are related to the parameters through the observation operator \(\mathcal{G}\):
\[\mathbf{y}=\mathcal{G}(\mathbf{\xi})+\mathbf{\eta}, \tag{3.1}\]
where \(\mathbf{\eta}\in\mathbb{R}^{N_{y}}\) is a zero-mean Gaussian random vector with observation covariance matrix \(R\in\mathbb{R}^{N_{y}\times N_{y}}\), i.e., \(\mathbf{\eta}\sim\mathcal{N}(0,R)\). In the Bayesian context, this problem corresponds to the following posterior distribution:
\[p(\mathbf{\xi}|\mathbf{y})\propto p(\mathbf{\xi})\exp\left(-\frac{\left\|R^{-1/2}( \mathbf{y}-\mathcal{G}(\mathbf{\xi}))\right\|_{2}^{2}}{2}\right). \tag{3.2}\]
Given a Gaussian prior \(\mathbf{\xi}\sim\mathcal{N}(\mathbf{\xi}_{0},C_{0})\), the posterior becomes
\[p(\mathbf{\xi}|\mathbf{y})\propto\exp\left(-\frac{\left\|C_{0}^{-1/2}(\mathbf{\xi}_{0} -\mathbf{\xi})\right\|_{2}^{2}+\left\|R^{-1/2}(\mathbf{y}-\mathcal{G}(\mathbf{\xi})) \right\|_{2}^{2}}{2}\right). \tag{3.3}\]
For weakly nonlinear systems, approximate samples from (3.3) can be obtained by minimizing an ensemble of loss functions of the form
\[f(\mathbf{\xi}|\mathbf{\xi}_{j},\mathbf{y}_{j})=\frac{1}{2}\left\|C_{0}^{-1/2}(\mathbf{\xi }_{j}-\mathbf{\xi})\right\|_{2}^{2}+\frac{1}{2}\Big{\|}R^{-1/2}(\mathbf{y}_{j}- \mathcal{G}(\mathbf{\xi}))\Big{\|}_{2}^{2}, \tag{3.4}\]
where \(\mathbf{\xi}_{j}\sim\mathcal{N}(\mathbf{\xi}_{0},C_{0})\) and \(\mathbf{y}_{j}\sim\mathcal{N}(\mathbf{y},R)\) [15, 21]. EnKF-based methods such as EKI can be derived as an approximation to the minimizer of an ensemble of loss functions of the form (3.4) [21].
In practice, EKI and its variants consider the following artificial dynamics state-space model formulation based on the original Bayesian inverse problem (3.1) so that
the EnKF update equations can be applied:
\[\mathbf{\xi}_{i} =\mathbf{\xi}_{i-1}+\mathbf{\epsilon}_{i},\quad\mathbf{\epsilon}_{i}\sim\mathcal{ N}(0,Q), \tag{3.5}\] \[\mathbf{y}_{i} =\mathcal{G}(\mathbf{\xi}_{i})+\mathbf{\eta}_{i},\quad\mathbf{\eta}_{i}\sim \mathcal{N}(0,R), \tag{3.6}\]
where \(\mathbf{\epsilon}_{i}\) is an artificial parameter noise term with the artificial parameter covariance \(Q\in\mathbb{R}^{N_{\xi}\times N_{\xi}}\) and \(\mathbf{\eta}_{i}\) represents the observation error with the observation covariance \(R\in\mathbb{R}^{N_{y}\times N_{y}}\). Given an initial ensemble \(\{\mathbf{\xi}_{0}^{(j)}\}_{j=1}^{J}\) of \(J\) members, the iterative EKI methods correct the ensemble \(\{\mathbf{\xi}_{i}^{(j)}\}_{j=1}^{J}\) via Kalman update equations similar to [32]:
\[\mathbf{\hat{\xi}}_{i}^{(j)} =\mathbf{\xi}_{i-1}^{(j)}+\mathbf{\epsilon}_{i}^{(j)},\quad\mathbf{\epsilon} _{i}^{(j)}\sim\mathcal{N}(0,Q), \tag{3.7}\] \[\hat{\mathbf{y}}_{i}^{(j)} =\mathcal{G}(\mathbf{\hat{\xi}}_{i}^{(j)}), \tag{3.8}\] \[\mathbf{\xi}_{i}^{(j)} =\mathbf{\hat{\xi}}_{i}^{(j)}+C_{i}^{\hat{\xi}\hat{y}}(C_{i}^{\hat{y }\hat{y}}+R)^{-1}(\mathbf{y}-\hat{\mathbf{y}}_{i}^{(j)}+\mathbf{\eta}_{i}^{(j)}), \quad\mathbf{\eta}_{i}^{(j)}\sim\mathcal{N}(0,R), \tag{3.9}\]
where \(C_{i}^{\hat{y}\hat{y}}\) and \(C_{i}^{\hat{\xi}\hat{y}}\) are the sample covariance matrices defined as follows:
\[C_{i}^{\hat{y}\hat{y}} =\frac{1}{J-1}\sum_{j=1}^{J}(\hat{\mathbf{y}}_{i}^{(j)}-\bar{ \mathbf{y}}_{i})(\hat{\mathbf{y}}_{i}^{(j)}-\bar{\mathbf{y}}_{i})^{T}, \tag{3.10}\] \[C_{i}^{\hat{\xi}\hat{y}} =\frac{1}{J-1}\sum_{j=1}^{J}(\mathbf{\hat{\xi}}_{i}^{(j)}-\bar{\mathbf{ \xi}}_{i})(\hat{\mathbf{y}}_{i}^{(j)}-\bar{\mathbf{y}}_{i})^{T}. \tag{3.11}\]
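A single EKI iteration (3.7)-(3.9) therefore amounts to a few lines of linear algebra. The following Python/NumPy sketch assumes diagonal \(Q\) and \(R\) (as later chosen in (3.17)-(3.18)), specified through their per-coordinate standard deviations; `G` is the user-supplied measurement operator and all names are illustrative:

```python
import numpy as np

def eki_update(Xi, G, y, q_std, r_std, rng):
    """One EKI iteration. Xi: (J, N_xi) ensemble; y: (N_y,) observations;
    q_std, r_std: square roots of the diagonals of Q and R."""
    J, Ny = Xi.shape[0], y.shape[0]
    Xi_hat = Xi + q_std * rng.standard_normal(Xi.shape)       # prediction (3.7)
    Y_hat = np.stack([G(xi) for xi in Xi_hat])                # forward map (3.8)
    Xi_c = Xi_hat - Xi_hat.mean(axis=0)                       # centered ensembles
    Y_c = Y_hat - Y_hat.mean(axis=0)
    C_xy = Xi_c.T @ Y_c / (J - 1)                             # Eq. (3.11)
    C_yy = Y_c.T @ Y_c / (J - 1)                              # Eq. (3.10)
    K = np.linalg.solve(C_yy + np.diag(r_std**2), C_xy.T).T   # Kalman gain
    eta = r_std * rng.standard_normal((J, Ny))                # perturbed observations
    return Xi_hat + (y - Y_hat + eta) @ K.T                   # analysis (3.9)
```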
Here, \(\bar{\mathbf{\xi}}_{i}\) and \(\bar{\mathbf{y}}_{i}\) are the corresponding sample averages of the prior ensembles \(\{\mathbf{\hat{\xi}}_{i}^{(j)}\}\) and \(\{\hat{\mathbf{y}}_{i}^{(j)}\}\). We remark that many EKI variants take \(Q=0\), which may lead to the ensemble collapsing [13]. This ensemble collapse is not observed for a positive definite artificial dynamics covariance \(Q\), and desirable convergence properties with reasonable uncertainty estimates have been shown for traditional Bayesian inverse problems [26, 25]. In the case of a Gaussian prior and a linear measurement operator, convergence to the correct Bayesian posterior corresponding to the artificial dynamics (3.5)-(3.6) can be shown [33]. However, these assumptions are typically not satisfied, and thus the corresponding estimates will be biased. Nonetheless, empirical evidence suggests that EKI can still provide reasonable posterior estimates even when these assumptions are violated [8].
### EKI B-PINNs
While EKI-based methods have often been applied to estimate physical model parameters (e.g., [29, 11, 12]) under the context of traditional Bayesian inverse problems, these approaches often rely on existing numerical forward solvers or the corresponding discrete operators. As the EKI requires multiple evaluations of the numerical forward solvers over EKI iterations, this can be computationally expensive for large-scale complex applications. In contrast, our approach (EKI B-PINN) learns a neural network surrogate and infers the model parameters simultaneously without the need for a traditional numerical solver. Combining the inexpensive forward surrogate with EKI allows us to efficiently explore the high-dimensional posterior with larger ensemble sizes.
To employ EKI in the context of B-PINNs, we first recast the setup of B-PINNs in Section 2.1 by interpreting the corresponding notation in the context of EKI. Recall the notation introduced in Section 2.1: the forward measurement vector \(\mathbf{u}=\{u^{i}\}_{i=1}^{N_{u}}\), the residual vector \(\mathbf{f}=\{f^{i}\}_{i=1}^{N_{f}}\) and boundary vector \(\mathbf{b}=\{b^{i}\}_{i=1}^{N_{b}}\) from datasets \(\mathcal{D}_{u}=\{(\mathbf{x}_{u}^{i},u^{i})\}_{i=1}^{N_{u}}\), \(\mathcal{D}_{f}=\{(\mathbf{x}_{f}^{i},f^{i})\}_{i=1}^{N_{f}}\) and \(\mathcal{D}_{b}=\{(\mathbf{x}_{b}^{i},b^{i})\}_{i=1}^{N_{b}}\). We utilize the concatenated neural network and physical parameters \(\boldsymbol{\xi}=[\boldsymbol{\theta},\boldsymbol{\lambda}]\). With the approximate solution \(\tilde{u}(\mathbf{x};\boldsymbol{\theta})\) parameterized by \(\boldsymbol{\theta}\), the PDE operator \(\mathcal{N}_{x}\), and the boundary operator \(\mathcal{B}_{x}\), we can define the corresponding observation operators \(\mathcal{G}_{u}\), \(\mathcal{G}_{f}\), and \(\mathcal{G}_{b}\):
\[\mathcal{G}_{u}(\boldsymbol{\xi}) =[\tilde{u}(\mathbf{x}_{u}^{1};\boldsymbol{\theta}),...,\tilde{u }(\mathbf{x}_{u}^{N_{u}};\boldsymbol{\theta})], \tag{3.12}\] \[\mathcal{G}_{f}(\boldsymbol{\xi}) =[\mathcal{N}_{x}(\tilde{u}(\mathbf{x}_{f}^{1};\boldsymbol{\theta });\boldsymbol{\lambda}),...,\mathcal{N}_{x}(\tilde{u}(\mathbf{x}_{f}^{N_{f}} ;\boldsymbol{\theta});\boldsymbol{\lambda})],\] (3.13) \[\mathcal{G}_{b}(\boldsymbol{\xi}) =[\mathcal{B}_{x}(\tilde{u}(\mathbf{x}_{b}^{1};\boldsymbol{ \theta});\boldsymbol{\lambda}),...,\mathcal{B}_{x}(\tilde{u}(\mathbf{x}_{b}^{ N_{b}};\boldsymbol{\theta});\boldsymbol{\lambda})]. \tag{3.14}\]
Given these notations, we now define our measurement vector \(\mathbf{y}\) and the corresponding measurement operator \(\mathcal{G}(\boldsymbol{\xi})\) in the context of EKI as follows:
\[\mathbf{y} =[\mathbf{u},\mathbf{f},\mathbf{b}], \tag{3.15}\] \[\mathcal{G}(\boldsymbol{\xi}) =[\mathcal{G}_{u}(\boldsymbol{\xi}),\mathcal{G}_{f}(\boldsymbol{ \xi}),\mathcal{G}_{b}(\boldsymbol{\xi})]. \tag{3.16}\]
After identifying each component in the EKI setting, we employ the version of EKI in (3.7)-(3.9) to infer the parameters of B-PINNs. Following that, we can compute the sample means and standard deviations of the ensemble \(\{\lambda^{(j)}\}_{j=1}^{J}\) of the physical parameter \(\lambda\) and the ensemble of approximate solutions \(\{\tilde{u}(\mathbf{x};\boldsymbol{\theta^{(j)}})\}_{j=1}^{J}\) via (19)-(20), as described in Section 2.2.
We remark that in EKI B-PINN, the underlying physics laws are enforced as soft constraints on the residual points. This approach contrasts with the standard EKI measurement operator, which enforces the physics law as hard constraints imposed by the traditional numerical solvers.
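Concretely, the stacked operator \(\mathcal{G}\) in (3.15)-(3.16) can be assembled from the network surrogate and the PDE residual. Below is a sketch for the illustrative 1D toy model \(\lambda u_{xx}=f\), where `u_fn` and `uxx_fn` evaluate the surrogate and its second derivative (e.g., built with `jax.vmap`/`jax.grad` as in the earlier loss sketch) and `unflatten` is a user-supplied splitter of the flat parameter vector; none of these names come from the paper:

```python
import jax.numpy as jnp

def make_G(u_fn, uxx_fn, xu, xf, xb, unflatten):
    """Stack G_u, G_f, G_b of (3.12)-(3.14) for the toy model lambda * u_xx = f."""
    def G(xi):
        theta, lam = unflatten(xi)      # split xi into network and physics parts
        return jnp.concatenate([
            u_fn(theta, xu),            # G_u: surrogate at measurement points
            lam * uxx_fn(theta, xf),    # G_f: PDE residual at residual points
            u_fn(theta, xb),            # G_b: surrogate at boundary points
        ])
    return G
```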
**Choice of Covariance matrices \(Q\) and \(R\).** The choices of the observation covariance \(R\) and the parameter evolution covariance \(Q\) are important in Ensemble Kalman type algorithms, including EKI. Several attempts to automatically estimate these covariance matrices have been suggested in the Ensemble Kalman literature (e.g., [6, 34, 1]); however, there is no standard approach to estimating them. In practice, \(R\) is often assumed to be known or estimated empirically from the instrument error and the representation error between the states and observations [44]. The matrix \(Q\) is more difficult to estimate, as its dimension is often much larger than the number of available measurements.
In this paper, we assume \(R\) corresponds to the covariance matrix chosen for the Gaussian likelihood \(p(\mathbf{y}|\boldsymbol{\xi})\) in Section 2.1, i.e.,
\[R=\begin{bmatrix}\sigma_{\eta_{u}}^{2}I_{N_{u}}&0&0\\ 0&\sigma_{\eta_{f}}^{2}I_{N_{f}}&0\\ 0&0&\sigma_{\eta_{b}}^{2}I_{N_{b}}\end{bmatrix}. \tag{3.17}\]
The choice of \(Q\) is important in this setting as it prevents the collapse of the ensemble and improves the uncertainty quantification in the EKI. In this study, we assume \(Q\) takes the form
\[Q=\begin{bmatrix}\sigma_{\theta}^{2}I_{N_{\theta}}&0\\ 0&\sigma_{\lambda}^{2}I_{N_{\lambda}}\end{bmatrix}, \tag{3.18}\]
where \(\sigma_{\theta}\) and \(\sigma_{\lambda}\) are standard deviations associated with the neural weight parameters \(\boldsymbol{\theta}\) and the physical parameters \(\boldsymbol{\lambda}\), respectively.
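Under these diagonal choices, assembling the square roots of \(\mathrm{diag}(R)\) and \(\mathrm{diag}(Q)\) is immediate; a minimal sketch using the default values reported in Section 4 and, purely for illustration, the problem sizes of Example 4.1:

```python
import numpy as np

sigma_u, sigma_f, sigma_b = 0.01, 0.01, 0.01   # likelihood stds, Eq. (3.17)
sigma_theta, sigma_lam = 0.002, 0.1            # artificial dynamics stds, Eq. (3.18)
N_u, N_f, N_b = 8, 100, 2                      # data sizes of Example 4.1
N_theta, N_lam = 5251, 1                       # parameter sizes (Table 4.1)

r_std = np.concatenate([np.full(N_u, sigma_u),
                        np.full(N_f, sigma_f),
                        np.full(N_b, sigma_b)])       # sqrt of diag(R)
q_std = np.concatenate([np.full(N_theta, sigma_theta),
                        np.full(N_lam, sigma_lam)])   # sqrt of diag(Q)
```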
**Stopping Criterion.** We consider a stopping criterion based on the discrepancy principle [27, 29, 24, 38], originally utilized as a stopping rule in iteratively regularized Gauss-Newton methods and a common choice in many EKI formulations. The discrepancy principle suggests that an acceptable choice of \(\boldsymbol{\xi}\) for the inverse problem (3.1) can be made when
\[||R^{-1/2}(y-\mathcal{G}(\boldsymbol{\xi}))||\leq||R^{-1/2}(y- \mathcal{G}(\boldsymbol{\xi}^{\dagger}))||, \tag{3.19}\]
where \(\boldsymbol{\xi}^{\dagger}\) is the true solution of (3.1). This choice avoids instabilities in the solution and provides a criterion for stopping the EKI iteration. Since \(\boldsymbol{\xi}^{\dagger}\) is not known, the criterion is often stated as
\[||R^{-1/2}(y-\mathcal{G}(\boldsymbol{\xi}))||\leq\eta \tag{3.20}\]
where \(\eta>0\) is some stopping threshold. In the context of EKI, a sample mean-based discrepancy principle stopping criterion has been employed [27]:
\[\left\|R^{-1/2}\left(y-\frac{1}{J}\sum_{j=1}^{J}\mathcal{G}( \boldsymbol{\xi_{i}^{(j)}})\right)\right\|\leq\eta. \tag{3.21}\]
The choice of \(\eta\) depends on \(R\), which can be an issue when a reasonable choice of \(R\) is unclear, as is the case for the residual points \(\mathcal{D}_{f}\) and boundary points \(\mathcal{D}_{b}\) in the context of B-PINNs. Instead, we consider the relative change in the discrepancy over several iterations. We define \(D_{i}\) as the discrepancy at the \(i\)-th iteration:
\[D_{i}=\left\|R^{-1/2}\left(y-\frac{1}{J}\sum_{j=1}^{J}\mathcal{G}(\mathbf{\xi_{i}^{ (j)}})\right)\right\|, \tag{3.22}\]
We stop at the first iteration at which the relative change of \(D_{i}\) over a fixed iteration window of length \(W\) is less than \(\tau\), i.e.,
\[\max_{j\in\{i-W,\dots,i\}}\frac{|D_{j}-D_{i}|}{D_{i}}<\tau. \tag{3.23}\]
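The windowed rule (3.23) takes only a few lines to check; a Python sketch, where `D` holds the discrepancies \(D_{0},\dots,D_{i}\) computed so far and the defaults follow the values reported in Section 4:

```python
def should_stop(D, W=25, tau=0.05):
    """Relative-improvement stopping rule (3.23) over a window of length W."""
    i = len(D) - 1
    if i < W:                      # not enough history yet
        return False
    window = D[i - W : i + 1]      # discrepancies D_{i-W}, ..., D_i
    return max(abs(Dj - D[i]) for Dj in window) / D[i] < tau
```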
**Remark 3.1**: _If an EKI iteration results in a failure state, which typically occurs within the first several iterations, we can reinitialize the ensemble from the prior \(\mathbf{\xi}_{i}^{(j)}\sim p(\mathbf{\xi})\)._
**Complexity Analysis.** Each EKI iteration in Algorithm 2 has computational complexity \(\mathcal{O}(JN_{y}N_{\xi}+N_{y}^{3}+JN_{y}^{2})\), dominated by the following steps:
* Construction of matrix \(C^{\hat{y}\hat{y}}\) in (3.10)
* Construction of matrix \(C^{\hat{\xi}\hat{y}}\) in (3.11)
* Evaluation of \(C^{\hat{\xi}\hat{y}}(C^{\hat{y}\hat{y}}+R)^{-1}(\mathbf{y}-\hat{\mathbf{y}}_{i}^{(j)}+\mathbf{\eta}_{i}^{(j)})\) in (3.9).
As a result, the computational complexity of EKI grows linearly with both parameter dimension and ensemble size and thus can easily scale to high-dimensional inverse problems efficiently. Nevertheless, the method does scale cubically with data dimension and hence is computationally prohibitive when naively applied to large data sets. In this scenario, mini-batching, as in standard neural network optimization problems, can be used to reduce the computational cost [32]. Furthermore, it is important to note that the EKI requires storing an ensemble of \(J\) neural network parameter sets, which can be memory-demanding for large ensemble sizes and large networks. Dimension reduction techniques might be helpful to address this issue [16].
## 4 Numerical Examples
In this section, we shall demonstrate the applicability and performance of EKI B-PINNs via various numerical examples. We also compare our approach with a variant of the HMC B-PINN to assess the inference accuracy and computational efficiency of our proposed method.
For each example, we generate a synthetic data set, \(\mathcal{D}_{u}\), by solving the corresponding problem and corrupting the solution with i.i.d. zero-mean Gaussian noise, \(\mathcal{N}(0,\sigma_{u})\). To demonstrate the robustness of the EKI method, we consider two noise levels: \(\sigma_{u}=0.1\) and \(\sigma_{u}=0.01\). We generate residual points \(\mathcal{D}_{f}\) by evaluating \(f\) at locations generated using Latin hypercube sampling over the problem domain for 2D problems and equally spaced over the domain for 1D problems. For 2D problems, boundary points \(\mathcal{D}_{b}\) are placed equally spaced over the boundary. Boundary and residual points are assumed to be noise-free in this paper.
For all examples, the neural network architecture used for both B-PINNs consists of 2 hidden layers with 50 neurons in each layer and the tanh activation function. The neural network parameter dimension \(N_{\theta}\) for each type of example can be seen in Table 4.1. For each problem, we assume the standard deviations for the boundary and residual likelihoods in (3.17): \(\sigma_{\eta_{b}}=0.01\) and \(\sigma_{\eta_{f}}=0.01\) for the B-PINNs unless otherwise specified. Furthermore, we assume that the measurement noise level \(\sigma_{u}\) is
```
1:Input: \(\mathbf{y}\) (observations), \(Q\) (parameter covariance), \(R\) (observation covariance), \(W\) (stopping window), \(\tau\) (stopping threshold)
2:Initialize prior samples for \(j=1,...,J\): \[\boldsymbol{\xi}_{0}^{(j)}\sim p(\boldsymbol{\xi}).\]
3:for\(i=1,...,I\)do
4: Obtain the \(i\)-th prior parameter and measurement ensembles for \(j=1,...,J\): \[\boldsymbol{\epsilon}_{i}^{(j)} \sim\mathcal{N}(0,Q).\] \[\boldsymbol{\hat{\xi}}_{i}^{(j)} =\boldsymbol{\xi}_{i-1}^{(j)}+\boldsymbol{\epsilon}_{i}^{(j)}.\] \[\boldsymbol{\hat{y}}_{i}^{(j)} =\mathcal{G}(\boldsymbol{\hat{\xi}}_{i}^{(j)}).\]
5: Evaluate sample mean and covariance terms: \[\boldsymbol{\bar{\xi}}_{i} =\frac{1}{J}\sum_{j=1}^{J}\boldsymbol{\hat{\xi}}_{i}^{(j)}.\] \[\bar{\mathbf{y}}_{i} =\frac{1}{J}\sum_{j=1}^{J}\boldsymbol{\hat{y}}_{i}^{(j)}.\] \[C_{i}^{\hat{y}\hat{y}} =\frac{1}{J-1}\sum_{j=1}^{J}(\boldsymbol{\hat{y}}_{i}^{(j)}- \bar{\mathbf{y}}_{i})(\boldsymbol{\hat{y}}_{i}^{(j)}-\bar{\mathbf{y}}_{i})^{ T}.\] \[C_{i}^{\hat{\xi}\hat{y}} =\frac{1}{J-1}\sum_{j=1}^{J}(\boldsymbol{\hat{\xi}}_{i}^{(j)}- \boldsymbol{\bar{\xi}}_{i})(\boldsymbol{\hat{y}}_{i}^{(j)}-\bar{\mathbf{y}}_{ i})^{T}.\]
6: Update posterior ensemble for \(j=1,...,J\): \[\boldsymbol{\eta}_{i}^{(j)} \sim\mathcal{N}(0,R),\] \[\boldsymbol{\xi}_{i}^{(j)} =\boldsymbol{\hat{\xi}}_{i}^{(j)}+C_{i}^{\hat{\xi}\hat{y}}(C_{i}^{\hat{y}\hat{y}}+R)^{-1}(\mathbf{y}-\hat{\mathbf{y}}_{i}^{(j)}+\boldsymbol{\eta}_{i}^{(j)}).\]
7: Check the discrepancy \[D_{i}=\left\|R^{-1/2}\left(y-\frac{1}{J}\sum_{j=1}^{J}\mathcal{G}(\boldsymbol{\xi}_{i}^{(j)})\right)\right\|.\]
8:if\(\max\limits_{j\in\{i-W,...,i\}}|D_{j}-D_{i}|/D_{i}<\tau\)then
9:\(I:=i\)
10: Break
11:endif
12:endfor
13:Return: \(\boldsymbol{\xi}_{I}^{(1)},...,\boldsymbol{\xi}_{I}^{(J)}\)
```
**Algorithm 2** Ensemble Kalman Inversion (EKI) B-PINNs
known for each example and set \(\sigma_{\eta_{u}}=\sigma_{u}\). Finally, unless otherwise specified, we use physical parameter prior \(\mathbf{\lambda}\sim\mathcal{N}(0,I_{N_{\lambda}})\) for both B-PINNs.
For the EKI B-PINNs, we choose an ensemble size of \(J=1000\) and stopping criterion parameters from (3.23): \(W=25\) and \(\tau=0.05\). Additionally, the artificial dynamics standard deviations for the parameters from (3.18) are chosen to be \(\sigma_{\lambda}=0.1\) and \(\sigma_{\theta}=0.002\). For the HMC B-PINNs, we use \(L=50\) leapfrog steps, and the initial step size \(\delta t=0.1\) is adaptively tuned to reach an acceptance rate of \(60\%\) during the burn-in steps, as in [50]. We draw a total of \(1000\) samples following \(1000\) burn-in steps.
**Metrics.** We examine the accuracy of the forward solution approximation and physical parameter estimation from the B-PINNs using the following metrics, evaluated over an independent test set \(\{\mathbf{x}_{t}^{i}\}_{i=1}^{N_{t}}\):
\[e_{u}=\sqrt{\frac{\sum_{i=1}^{N_{t}}\left|u(\mathbf{x}_{t}^{i})-\bar{u}( \mathbf{x}_{t}^{i})\right|^{2}}{\sum_{i=1}^{N_{t}}\left|u(\mathbf{x}_{t}^{i}) \right|^{2}}},\quad e_{\lambda}=\frac{\left|\lambda-\bar{\lambda}\right|}{ \left|\lambda\right|}, \tag{4.1}\]
where \(u(x)\) and \(\lambda\) are the reference solution and the reference physical parameters. The sample means of the forward solution approximation \(\bar{u}\) and the physical parameter \(\bar{\lambda}\) are computed from the B-PINNs as defined in (19). The mean predictions are obtained at the final iteration for the EKI B-PINN and over the \(1000\) samples generated for the HMC B-PINN. Furthermore, we assess the quality of our uncertainty estimates by examining the sample standard deviations of the estimated forward solutions and physical parameters, \(s_{\lambda}\) and \(s_{\tilde{u}}\), defined in (20).
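The error metrics (4.1) translate directly into code; a small Python sketch with illustrative argument names:

```python
import numpy as np

def rel_errors(u_ref, u_mean, lam_ref, lam_mean):
    """Relative errors e_u and e_lambda of Eq. (4.1) over a test set."""
    e_u = np.sqrt(np.sum((u_ref - u_mean) ** 2) / np.sum(u_ref ** 2))
    e_lam = np.abs(lam_ref - lam_mean) / np.abs(lam_ref)
    return e_u, e_lam
```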
To demonstrate the efficiency of our proposed method, we compare the walltime of the EKI B-PINN and HMC B-PINN experiments (including both burn-in and training time), averaged over \(10\) trials. The average numbers of iterations over \(10\) trials are also presented for the EKI B-PINN tests. All experiments use the JAX [9] library in single-precision floating-point on a single GPU (Tesla T4 with \(16\) GB of memory).
### 1D linear Poisson equation
We first consider the 1D linear Poisson equation motivated by [10] as follows:
\[u_{xx}+k\cos(x)=0,\quad x\in[0,8], \tag{4.2}\] \[u(0)=k,\quad u(8)=k\cos(8), \tag{4.3}\]
where the exact solution is \(u(x)=k\cos(x)\). Setting the unknown constant to \(k=1.0\), we generate \(N_{u}=8\) equally spaced measurements of \(u\) over \([0,8]\), excluding the boundary points, and \(N_{b}=2\) boundary points at \(x=0\) and \(x=8\). Additionally, \(N_{f}=100\) equally spaced residual points are utilized.
Due to the linearity of the solution with respect to \(k\), one can derive a Gaussian "reference" posterior distribution for parameter \(k\) conditioned on knowledge of
\begin{table}
\begin{tabular}{|l|l|} \hline & \(\mathbf{\theta}\) size \\ \hline
1D PDE - Examples 4.1, 4.2 & 5251 \\ \hline
2D PDE - Examples 4.3, 4.5, 4.6 & 5301 \\ \hline System of ODEs - Example 4.4 & 5353 \\ \hline \end{tabular}
\end{table}
Table 4.1: The number of neural network parameters \(\mathbf{\theta}\) for each example.
the correct solution parameterization \(\tilde{u}(x;k)=k\cos(x)\)[10]. In the case where the forward solution and boundary standard deviations are equal (i.e., \(\sigma_{\eta_{u}}=\sigma_{\eta_{b}}\)), if we denote for simplicity \(\mathcal{D}_{u}\) as containing both forward solution and boundary data, the distribution takes the form:
\[p(k|\mathcal{D}_{u})\propto\exp\left(-\left(\frac{\sum_{i=1}^{N_{u}}(u^{i}-k \cos(x_{u}^{i}))^{2}}{2\sigma_{\eta_{u}}^{2}}+\frac{(k-k_{0})^{2}}{2\sigma_{k} ^{2}}\right)\right), \tag{4.4}\]
where \(k_{0}\), \(\sigma_{k}\) are the prior mean and standard deviation of \(k\).
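Since (4.4) is Gaussian in \(k\), completing the square yields its mean and standard deviation in closed form; a Python sketch for evaluating the reference posterior, assuming the arrays `x` and `u` collect the measurement and boundary locations and values:

```python
import numpy as np

def reference_posterior_k(x, u, sigma_u, k0=0.0, sigma_k=1.0):
    """Closed-form Gaussian posterior (4.4) for k when u(x) = k * cos(x)."""
    c = np.cos(x)
    prec = 1.0 / sigma_k**2 + np.sum(c**2) / sigma_u**2     # posterior precision
    mean = (k0 / sigma_k**2 + np.sum(u * c) / sigma_u**2) / prec
    return mean, 1.0 / np.sqrt(prec)                        # posterior mean, std
```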
In this example, we choose \(k_{0}=0,\sigma_{k}=1\) and consider the noise level \(\sigma_{u}=0.01\). We present the "reference" density and the estimated posterior densities of \(k\) in Figure 4.1. Both approaches deliver inference results for the posterior distribution of \(k\) that are comparable to the "reference" density.

Table 4.2 provides the mean and one standard deviation of the posterior \(k\) approximations. The uncertainty estimates of \(k\) for both B-PINNs show that the true value of \(k\) is within one standard deviation of the posterior mean. Table 4.3 shows the mean relative error of the solution and parameters, with the corresponding walltime and the number of EKI iterations utilized. From the table, it is clear that the EKI B-PINN achieves approximation quality comparable to the HMC B-PINN, but with an 8-fold speed-up.
### 1D nonlinear Poisson equation
We now consider a 1D nonlinear Poisson equation as presented in [48] as follows:
\[\lambda u_{xx}+k\tanh(u)=f,\quad x\in[-0.7,0.7], \tag{4.5}\]
where \(\lambda=0.01\). Here, \(k=0.7\) is the unknown physical parameter to be inferred. By assuming the true solution \(u(x)=\sin^{3}(6x)\), the right-hand side \(f\) and boundary conditions can be analytically constructed. The objective of this problem is to infer the physical parameter \(k\) and the forward solution \(u\), along with corresponding uncertainty estimates, using \(N_{u}=6\) measurements \(\mathbf{u}\) equally spaced over the spatial domain, \(N_{b}=2\) boundary points \(\mathbf{b}\), and \(N_{f}=32\) equally spaced PDE residual points \(\mathbf{f}\).
Table 4.4 compares the posterior sample mean and one standard deviation of \(k\) obtained from the EKI B-PINN and HMC B-PINN. Both methods accurately capture
\begin{table}
\begin{tabular}{c|c|c c|c} \hline & & \(e_{u}\) & \(e_{k}\) & Walltime & Iterations \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.01\)} & EKI & 0.63\% & 0.18\% & 5.81 seconds & 219 \\ & HMC & 1.43\% & 0.08\% & 46.66 seconds & - \\ \hline \end{tabular}
\end{table}
Table 4.3: Example 4.1: relative errors \(e_{u}\) of the forward solution \(u\) and \(e_{k}\) of parameter \(k\) for the noise level \(\sigma_{u}=0.01\), as well as average walltime. Average EKI iterations for the EKI B-PINN are also reported.
the true value of \(k=0.7\) within one standard deviation of the mean for both noise levels. Figure 4.2 compares the sample mean and standard deviation of the surrogate based on both approaches. The mean of the EKI B-PINN is in good agreement with the reference solution. The one standard deviation bound captures most of the error in the mean prediction.
Table 4.5 reports the relative errors of the forward solution and parameter estimates using EKI B-PINN and HMC B-PINN for two noise levels, along with the corresponding walltime and number of EKI iterations. Both methods are reasonably accurate, with estimates of \(k\) within \(1\%\) error for \(\sigma_{u}=0.01\) and \(5\%\) error for \(\sigma_{u}=0.1\). Additionally, the relative errors of the mean forward solution \(e_{u}\) for both B-PINNs reach similar levels of accuracy, approximately \(2\%\) and \(10\%\) for \(\sigma_{u}=0.01\) and \(\sigma_{u}=0.1\), respectively. Notably, the EKI B-PINN achieves inference results comparable to the HMC B-PINN approximately \(9\) times faster.
\begin{table}
\begin{tabular}{c|c|c} \hline & & \(k\) (mean \(\pm\) std) \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.01\)} & EKI & \(0.701\pm 0.006\) \\ & HMC & \(0.701\pm 0.006\) \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.1\)} & EKI & \(0.697\pm 0.012\) \\ & HMC & \(0.688\pm 0.022\) \\ \hline \end{tabular}
\end{table}
Table 4.4: Example 4.2: sample mean and standard deviation of parameter \(k\) for EKI B-PINN and HMC B-PINN for \(\sigma_{u}=0.01,0.1\) noise levels. The true value of \(k\) is \(0.7\).
Figure 4.1: Example 4.1: posterior distributions of \(k\) for EKI B-PINN and HMC B-PINN at \(\sigma_{u}=0.01\) with the corresponding reference posterior distribution.
\begin{table}
\begin{tabular}{c|c|c|c} \hline & & \(k\) (mean\(\pm\) std) \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.01\)} & EKI & \(0.999\pm 0.006\) \\ & HMC & \(0.996\pm 0.006\) \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.1\)} & EKI & \(0.988\pm 0.017\) \\ & HMC & \(1.023\pm 0.032\) \\ \hline \end{tabular}
\end{table}
Table 4.6: Example 4.3: sample mean and standard deviation of parameter \(k\) for EKI B-PINN and HMC B-PINN for \(\sigma_{u}=0.01,0.1\) noise levels. The true value of \(k\) is 1.
Figure 4.2: Example 4.2: sample mean and standard deviation of \(u\) for EKI B-PINN and HMC B-PINN for \(\sigma_{u}=0.01,0.1\) noise levels.
\begin{table}
\begin{tabular}{c|c c|c|c|c} \hline & & \(e_{u}\) & \(e_{k}\) & Walltime & Iterations \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.01\)} & EKI & \(1.19\%\) & \(0.19\%\) & \(6.43\) seconds & \(282\) \\ & HMC & \(1.12\%\) & \(0.10\%\) & \(55.55\) seconds & - \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.1\)} & EKI & \(9.32\%\) & \(0.38\%\) & \(6.47\) seconds & \(289\) \\ & HMC & \(8.76\%\) & \(1.73\%\) & \(55.77\) seconds & - \\ \hline \end{tabular}
\end{table}
Table 4.5: Example 4.2: relative errors \(e_{u}\) of the forward solution \(u\) and \(e_{k}\) of parameter \(k\) for the noise levels \(\sigma_{u}=0.01,0.1\), as well as average walltime. Average EKI iterations for the EKI B-PINN are also reported.
### 2D nonlinear diffusion-reaction equation
Next, we examine the 2D nonlinear diffusion-reaction equation in [48] as follows:
\[\lambda\Delta u+ku^{2} =f,\quad(x,y)\in[-1,1]^{2}, \tag{4.6}\] \[u(x,-1) =u(x,1)=0,\] (4.7) \[u(-1,y) =u(1,y)=0, \tag{4.8}\]
where \(\lambda=0.01\) is known. Here, \(k=1\) is an unknown physical parameter, and the ground truth solution \(u(x,y)=\sin(\pi x)\sin(\pi y)\). We construct the source term \(f\) to satisfy the PDE with the given solution. For this problem, we have \(N_{u}=100\) measurements \(\mathbf{u}\) and \(N_{f}=100\) residual points \(\mathbf{f}\) both sampled via Latin Hypercube sampling over the spatial domain. Additionally, we have \(N_{b}=100\) boundary points \(\mathbf{b}\), which are equally spaced over the boundary. As in the previous example, we aim to estimate \(k\) and \(u\) with uncertainty estimates. The solution \(u\) and measurements \(\mathbf{u}\) can be seen in Figure 4.3.
Table 4.6 shows the one standard deviation confidence interval of the B-PINN estimates for parameter \(k\). Notably, the ground truth \(k=1.0\) falls within one standard deviation of the mean estimates for both noise levels. Additionally, Figure 4.4 shows the sample mean, one standard deviation, and the error of the forward surrogate based on both approaches. Both B-PINNs provide reasonably good mean estimates of the true solution for two measurement noise levels. Moreover, the standard deviation for both B-PINNs increases as the measurement noise level increases as expected, indicating that the uncertainty estimates provided are plausible. Although the standard deviations by the EKI B-PINN somewhat differ from those of the HMC B-PINN, both
\begin{table}
\begin{tabular}{c|c|c c|c|c} \hline & & \(e_{u}\) & \(e_{k}\) & Walltime & Iterations \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.01\)} & EKI & 1.12\% & 0.04\% & 2.47 seconds & 53 \\ & HMC & 1.06\% & 0.03\% & 63.26 seconds & - \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.1\)} & EKI & 3.64\% & 1.46\% & 3.41 seconds & 66 \\ & HMC & 2.53\% & 3.73\% & 64.71 seconds & - \\ \hline \end{tabular}
\end{table}
Table 4.7: Example 4.3: relative errors \(e_{u}\) of the forward solution \(u\) and \(e_{k}\) of parameter \(k\) for the noise levels \(\sigma_{u}=0.01,0.1\), as well as average walltime. Average EKI iterations for the EKI B-PINN are also reported.
Figure 4.3: Example 4.3: measurements of forward solution \(\mathbf{u}\) and boundary measurements \(\mathbf{b}\) with solution \(u(x,y)=\sin(\pi x)\sin(\pi y)\).
methods appear to agree on the rough locations of regions with higher uncertainty. For instance, when \(\sigma_{u}=0.01\), both methods exhibit large peaks around \((x,y)=(-1,0),(-0.3,0.7),(0.9,0.9),(1,-1)\). Similarly, major peaks can be seen around \((x,y)=(0.8,0.8),(-0.8,-0.8),(-0.8,0.8),(0.8,-0.8)\) for both methods when \(\sigma_{u}=0.1\).
The relative errors of the mean estimates of \(u\) and \(k\) and the walltime for the B-PINN methods are presented in Table 4.7. The mean approximations of \(k\) for both B-PINNs are within \(1\%\) and \(5\%\) of the ground truth for \(\sigma_{u}=0.01\) and \(\sigma_{u}=0.1\), respectively. Similarly, the sample mean approximation of the forward surrogate \(u\) achieves a relative error of less than \(2\%\) and \(5\%\), respectively. The EKI B-PINN approximates the forward solution and physical parameter reasonably well, with mean estimates comparable to the HMC, while providing inference approximately \(18\) times faster than the HMC B-PINN.
Figure 4.4: Example 4.3: (Top Row) the sample mean of the forward surrogate for EKI and HMC B-PINNs for different noise levels \(\sigma_{u}\). (Middle Row) The standard deviation of the forward surrogate based on EKI and HMC B-PINNs for different noise levels. (Bottom Row) The absolute difference between the ground truth solution \(u(x,y)=\sin(\pi x)\sin(\pi y)\) and the sample mean of the forward approximation for different noise levels.
### Kraichnan-Orszag system
We next consider the Kraichnan-Orszag model [50] consisting of three coupled nonlinear ODEs describing the temporal evolution of a system composed of several interacting inviscid shear waves:
\[\frac{du_{1}}{dt}-au_{2}u_{3}=0, \tag{4.9}\] \[\frac{du_{2}}{dt}-bu_{1}u_{3}=0,\] (4.10) \[\frac{du_{3}}{dt}+(a+b)u_{1}u_{2}=0,\] (4.11) \[u_{1}(0)=1.0,\;u_{2}(0)=0.8,\;u_{3}(0)=0.5, \tag{4.12}\]
where \(u_{1}\), \(u_{2}\), and \(u_{3}\) are solutions to the above system of ODEs and \(a\) and \(b\) are unknown physical parameters. We choose \(a=b=1\) in this example. We place \(12\) equally spaced observation times over \(t\in[1,10]\); we observe \(u_{1}\) and \(u_{3}\) at all of these times and \(u_{2}\) at the first \(7\), so that \(N_{u}=31\) for this example. We also utilize \(N_{f}=300\) residual points (with \(100\) points equally spaced over \(t\in[0,10]\) for each equation in the system of ODEs). The initial conditions of the ODEs are assumed to be unknown, thus \(N_{b}=0\). As in [50], we place Gaussian priors on \(a\) and \(b\) such that \(a\sim\mathcal{N}(0,2)\) and \(b\sim\mathcal{N}(0,2)\). Our goal is to estimate \(u_{1}\), \(u_{2}\), \(u_{3}\), and the parameters \(a\) and \(b\) with uncertainty estimates.
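A hedged sketch of this data-generation step is given below; it integrates the system with SciPy's `solve_ivp` (a solver choice of ours, not stated in the text) under the ground truth \(a=b=1\) and the initial condition above.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 1.0                        # true physical parameters
u0 = [1.0, 0.8, 0.5]                   # u1(0), u2(0), u3(0)

def ko_rhs(t, u):
    u1, u2, u3 = u
    # rearranged from (4.9)-(4.11): du1/dt = a u2 u3, etc.
    return [a * u2 * u3, b * u1 * u3, -(a + b) * u1 * u2]

t_obs = np.linspace(1.0, 10.0, 12)     # 12 equally spaced observation times
sol = solve_ivp(ko_rhs, (0.0, 10.0), u0, t_eval=t_obs, rtol=1e-9, atol=1e-9)

sigma_u = 0.01
rng = np.random.default_rng(0)
y_u1 = sol.y[0] + sigma_u * rng.normal(size=12)     # u1 observed at all 12 times
y_u3 = sol.y[2] + sigma_u * rng.normal(size=12)     # u3 observed at all 12 times
y_u2 = sol.y[1][:7] + sigma_u * rng.normal(size=7)  # u2 observed at the first 7
# total number of scalar measurements: N_u = 12 + 12 + 7 = 31
```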
Table 4.8 shows the mean and one standard deviation of the estimated \(a\) and \(b\). For the small noise level \(\sigma_{u}=0.01\), the parameter estimates capture the true \(b\) within one standard deviation and the true \(a\) within two standard deviations of the mean. For the \(\sigma_{u}=0.1\) noise level, both estimates find \(b\) within two standard deviations of the mean. However, \(a\) is less well approximated for both B-PINNs due to bias in the mean approximation. Furthermore, Figure 4.5 presents the approximation of the forward solutions \((u_{1},u_{2},u_{3})\) with the means and one standard deviation bands. For \(\sigma_{u}=0.01\), the forward predictions closely match the true solutions with narrow uncertainty bands for the B-PINNs. For \(\sigma_{u}=0.1\), the deviations between the mean prediction of \(u_{1}\) and the ground truth are more pronounced, particularly at locations where the standard deviation is also larger, suggesting that both B-PINNs offer informative uncertainty estimates.
Table 4.9 presents the mean relative errors for the parameter estimations and forward approximations. The mean approximations of the EKI B-PINN are accurate and comparable to those of the HMC B-PINN for both noise levels, with the exception of parameter \(a\) for \(\sigma_{u}=0.1\), where both B-PINNs show less accuracy. However, the EKI method is approximately \(25\) times faster than the HMC method in this example.
\begin{table}
\begin{tabular}{c|c|c c} \hline \hline & & \(a\) (mean \(\pm\) std) & \(b\) (mean \(\pm\) std) \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.01\)} & EKI & \(0.953\pm 0.023\) & \(1.003\pm 0.018\) \\ & HMC & \(0.978\pm 0.025\) & \(1.001\pm 0.016\) \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.1\)} & EKI & \(0.847\pm 0.033\) & \(1.027\pm 0.026\) \\ & HMC & \(0.826\pm 0.053\) & \(1.029\pm 0.032\) \\ \hline \hline \end{tabular}
\end{table}
Table 4.8: Example 4.4: sample mean and standard deviation of parameters \(a\) and \(b\) for the EKI B-PINN and HMC B-PINN for \(\sigma_{u}=0.01,0.1\) noise levels. The true values are \(a=1\), \(b=1\).
\begin{table}
\begin{tabular}{c|c|c c c c c|c|c} \hline & & \(e_{u_{1}}\) & \(e_{u_{2}}\) & \(e_{u_{3}}\) & \(e_{a}\) & \(e_{b}\) & Walltime & Iterations \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.01\)} & EKI & 1.04\% & 2.96\% & 1.63\% & 4.67\% & 0.35\% & 2.93 seconds & 71 \\ & HMC & 0.66\% & 1.45\% & 1.06\% & 2.25\% & 0.14\% & 93.34 seconds & - \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.1\)} & EKI & 3.49\% & 5.57\% & 4.45\% & 15.26\% & 2.73\% & 3.51 seconds & 85 \\ & HMC & 3.96\% & 6.49\% & 5.19\% & 17.42\% & 2.88\% & 94.31 seconds & - \\ \hline \end{tabular}
\end{table}
Table 4.9: Example 4.4: relative errors \(e_{u_{1}},e_{u_{2}},e_{u_{3}}\) of the forward solutions and \(e_{a}\), \(e_{b}\) of parameters \(a\) and \(b\) for the noise levels \(\sigma_{u}=0.01,0.1\), as well as the average walltime. Average EKI iterations are also reported.
Figure 4.5: Example 4.4: the ground truth, sample mean, standard deviation, and measurements \(\mathbf{u}\) for \(u_{1}\) (top row), \(u_{2}\) (middle row), and \(u_{3}\) (bottom row) for the EKI B-PINN and HMC B-PINN with noise levels \(\sigma_{u}=0.01,0.1\).
### Burgers' Equation
We now consider the following 1D time-dependent Burgers' equation motivated by [42]:
\[u_{t}+uu_{x}-\nu u_{xx} =0, x\in[-1,1], \tag{4.13}\] \[u(x,0) =-\sin(\pi x), x\in[-1,1],\] (4.14) \[u(-1,t) =u(1,t)=0, \tag{4.15}\]
where \(\nu=\frac{0.1}{\pi}\) is an unknown physical parameter, and \(t\in[0,\frac{3}{\pi}]\). A reference solution \(u\) for (4.13) is taken from [5], which we evaluate on a \(256\times 100\) equally spaced grid over the domain. We randomly sampled \(N_{u}=100\) measurements \(\mathbf{u}\) from the solution grid and placed \(N_{b}=75\) equally spaced boundary points over the boundary; these are visualized with the solution in Figure 4.6. Residual points \(\mathbf{f}\) were sampled using Latin Hypercube sampling over \((x,t)\in[-1,1]\times[0,\frac{3}{\pi}]\) with \(N_{f}=100\). Instead of directly approximating \(\nu\), we approximate the transformed parameter \(\log\nu\) and place the prior \(\log\nu\sim\mathcal{N}(0,3)\). Additionally, for this example, we choose the standard deviation of the likelihood for the residual to be \(\sigma_{f}=0.1\). In this example, we aim to infer \(\nu\) and \(u\) with corresponding uncertainty estimates.
Table 4.10 shows the posterior mean and one standard deviation of the estimated parameter \(\nu\). Both methods provide accurate mean approximations of the parameter, and the true \(\nu\) is contained within the two standard deviation bounds for \(\sigma_{u}=0.01\), lying just outside the one standard deviation confidence interval from the EKI B-PINN. For \(\sigma_{u}=0.1\), the EKI B-PINN provides a slightly better mean estimate, but it underestimates the uncertainty compared to the HMC.
\begin{table}
\begin{tabular}{c|c|c} \hline & & \(\nu\) (\(\times 10^{3}\)) (\(\mathrm{mean}\pm\mathrm{std}\)) \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.01\)} & EKI & \(32.449\pm 0.602\) \\ & HMC & \(30.957\pm 1.027\) \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.1\)} & EKI & \(33.740\pm 0.808\) \\ & HMC & \(34.174\pm 3.523\) \\ \hline \end{tabular}
\end{table}
Table 4.10: Example 4.5: the sample mean and standard deviation of parameter \(\nu\) for the EKI B-PINN and HMC B-PINN for \(\sigma_{u}=0.01,0.1\) noise levels. The true value is \(\nu=\frac{0.1}{\pi}\approx 0.031831\).
Figure 4.6: Example 4.5: measurements of forward solution \(\mathbf{u}\) and boundary measurements \(\mathbf{b}\) with solution \(u(x,t)\) over \([-1,1]\times[0,\frac{3}{\pi}]\).
Figure 4.7 displays the posterior mean and standard deviation of \(u\) estimated by both methods, as well as their corresponding approximation error. As expected, the uncertainty in both B-PINNs increases with the measurement noise level. Notably, larger uncertainty is clustered around \(x=0\), and greater errors are observed in the same region in the error plot, which suggests that the uncertainty estimates are informative as an error indicator.
Table 4.11 provides the mean relative errors for \(\nu\) and \(u\). Both methods achieve reasonably good accuracy, with relative errors below \(3\%\) for both \(\nu\) and \(u\) when \(\sigma_{u}=0.01\) and below \(8\%\) when \(\sigma_{u}=0.1\). The inference results from EKI are obtained approximately \(10\) times faster than with the HMC method.
### Diffusion-Reaction Equation with Source Localization
Figure 4.7: Example 4.5: (Top Row) the sample mean of the forward surrogate for the EKI and HMC B-PINNs for different noise levels \(\sigma_{u}\). (Middle Row) The standard deviation of the forward surrogate based on the EKI and HMC B-PINNs for different noise levels. (Bottom Row) The absolute difference between the ground truth solution \(u(x,t)\) and the sample mean of the forward approximation for different noise levels.

Finally, we consider the following source inversion problem for a two-dimensional linear diffusion-reaction equation, inspired by the example in [48]:
\[-\lambda\Delta u-f_{2} =f_{1},\quad(x,y)\in[0,1]^{2}, \tag{4.16}\] \[u(0,y)=u(1,y) =0,\] (4.17) \[u(x,0)=u(x,1) =0, \tag{4.18}\]
where \(\lambda=0.02\) is the known diffusivity and \(f_{1}(x,y)=0.1\sin(\pi x)\sin(\pi y)\) is a known forcing. The goal is to infer the location parameters of the field \(f_{2}\), which corresponds to three contaminant sources at unknown locations:
\[f_{2}(\mathbf{x})=\sum_{i=1}^{3}k_{i}\exp\left(-\frac{||\mathbf{x}-\mathbf{x}_ {i}||^{2}}{2\ell^{2}}\right). \tag{4.19}\]
Here, \(\mathbf{k}=(k_{1},k_{2},k_{3})=(2,-3,0.5)\) and \(\ell=0.15\) are known constants, while the source locations \(\mathbf{x}_{1}=(0.3,0.3)\), \(\mathbf{x}_{2}=(0.75,0.75)\), and \(\mathbf{x}_{3}=(0.2,0.7)\) are the parameters to be recovered. The prior distributions on the locations \(\mathbf{x}_{i}=(x_{i},y_{i})\) are chosen to be log-normal, that is, \(\log(x_{i})\sim\mathcal{N}(0,1)\) and \(\log(y_{i})\sim\mathcal{N}(0,1)\) for \(i=1,2,3\). The measurements \(\mathbf{u}\) are generated by solving (4.16) using the _solvepde_ function in Matlab with 1893 triangular mesh elements, sampling \(N_{u}=100\) points randomly from among the nodes and \(N_{b}=100\) equally spaced points along the domain's boundary, as shown in Figure 4.8. In addition, \(N_{f}=100\) residual points were obtained via Latin Hypercube sampling within the domain.
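For concreteness, the source field (4.19) with the ground-truth values above can be evaluated as in the short NumPy sketch below; the grid evaluation at the end is purely illustrative.

```python
import numpy as np

k = np.array([2.0, -3.0, 0.5])                            # known strengths k_i
ell = 0.15                                                # known width
X_src = np.array([[0.3, 0.3], [0.75, 0.75], [0.2, 0.7]])  # locations to recover

def f2(x):
    """Evaluate the source field (4.19) at points x of shape (n, 2)."""
    d2 = ((x[:, None, :] - X_src[None, :, :]) ** 2).sum(axis=-1)  # (n, 3)
    return (k * np.exp(-d2 / (2.0 * ell ** 2))).sum(axis=-1)      # (n,)

# illustrative evaluation on a coarse grid over [0, 1]^2
g = np.linspace(0.0, 1.0, 5)
pts = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1).reshape(-1, 2)
print(f2(pts).reshape(5, 5))
```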
The posterior means and standard deviations of the locations \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\), and \(\mathbf{x}_{3}\) in Table 4.12 show an accurate estimation of most of the contaminant centers.
\begin{table}
\begin{tabular}{c|c|c c|c|c} \hline & & \(e_{u}\) & \(e_{\nu}\) & Walltime & Iterations \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.01\)} & EKI & 1.04\% & 1.94\% & 5.90 seconds & 71 \\ & HMC & 0.77\% & 2.75\% & 62.57 seconds & - \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.1\)} & EKI & 3.48\% & 6.00\% & 6.72 seconds & 116 \\ & HMC & 3.07\% & 7.36\% & 62.92 seconds & - \\ \hline \end{tabular}
\end{table}
Table 4.11: Example 4.5: relative errors \(e_{u}\) of the forward solution \(u\) and \(e_{\nu}\) of parameter \(\nu\) for the noise levels \(\sigma_{u}=0.01,0.1\), as well as the average walltime. Average iterations for EKI B-PINN are also reported.
Figure 4.8: Example 4.6: measurements of forward solution \(\mathbf{u}\) and boundary measurements \(\mathbf{b}\) with solution \(u(x,y)\).
All values fall within two standard deviations, and most within one standard deviation, of the mean estimates. The corresponding solutions obtained by both B-PINNs, with the first two moments and approximation errors shown in Figure 4.9, show high accuracy in the mean prediction. Although the uncertainty estimates for the surrogate solutions by the two B-PINNs somewhat differ, their magnitudes are comparable over the domain for both noise levels. Furthermore, when \(\sigma_{u}=0.01\), both methods identify some similar bulk regions with higher uncertainty, particularly near \((x,y)=(0.5,0.25)\) and \((0.75,0.75)\). Similarly, for \(\sigma_{u}=0.1\), the regions of higher uncertainty show rough similarities between the two methods, with larger uncertainties clustering near the right diagonal of the domain.
Table 4.13 compares the mean relative errors of the physical parameters \(\mathbf{x}_{i}\) and the forward solution \(u\) for the two noise levels, as well as the walltime and the number of EKI iterations. Consistent with the previous examples, EKI-based inference is approximately 25-fold faster than that of the HMC.
## 5 Summary
This paper presents a new and efficient inference method for B-PINNs, utilizing Ensemble Kalman Inversion (EKI) to infer physical and neural network parameters. We demonstrate the applicability and performance of our proposed approach using several benchmark problems. Interestingly, while EKI methodology theoretically only provides unbiased inference with Gaussian priors and linear operators, our results show that it can still provide reasonable inference in non-linear and non-Gaussian settings, which is consistent with findings in other literature [8, 19]. In all examples, our proposed method delivers comparable inference results to HMC-based B-PINNs, but with around 8-30 times speedup. Furthermore, EKI B-PINNs can provide informative uncertainty estimates for physical model parameters and forward predictions at different noise levels comparable to the results of HMC B-PINNs. In cases where more detailed uncertainty quantification is necessary, our proposed approach can serve as a good initialization for other inference algorithms, such as HMC, to produce better results with reduced computational costs.
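For readers unfamiliar with the core iteration, the sketch below shows one generic EKI update in the standard perturbed-observation form; it is a simplified illustration rather than the implementation used here (the PINN-specific forward map, regularization, and stopping rules are omitted).

```python
import numpy as np

def eki_step(theta, G, y, Gamma, rng):
    """One perturbed-observation EKI update.

    theta : (d, J) ensemble of parameter vectors
    G     : callable mapping a (d,) parameter vector to an (m,) output
    y     : (m,) observed data
    Gamma : (m, m) observation noise covariance
    """
    _, J = theta.shape
    G_ens = np.stack([G(theta[:, j]) for j in range(J)], axis=1)  # (m, J)

    theta_c = theta - theta.mean(axis=1, keepdims=True)           # centered ensembles
    G_c = G_ens - G_ens.mean(axis=1, keepdims=True)

    C_tG = theta_c @ G_c.T / J    # (d, m) parameter-output cross-covariance
    C_GG = G_c @ G_c.T / J        # (m, m) output covariance

    # each ensemble member is updated toward its own perturbed copy of the data
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), Gamma, size=J).T
    return theta + C_tG @ np.linalg.solve(C_GG + Gamma, Y - G_ens)
```

In the B-PINN setting, \(\theta\) would stack the network weights and the physical parameters, and \(G\) would return the predicted measurements together with the scaled PDE residuals.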
It is also worth noting that our study did not investigate the case of large datasets or large numbers of residual points. In such cases, naive approaches to EKI would be computationally expensive due to the cubic scaling of EKI with the size of the dataset; the mini-batching techniques proposed in [32] for training NNs with EKI on large datasets may help to overcome this challenge. Finally, we acknowledge that EKI requires storing an ensemble of \(J\) neural network parameter sets, which can be memory-demanding for large ensemble sizes and large networks. In this situation, dimension reduction techniques may help address the issue. We plan to investigate this strategy in future work.
\begin{table}
\begin{tabular}{c|c|c c c c c c c|c|c} \hline & & \(e_{u}\) & \(e_{x_{1}}\) & \(e_{y_{1}}\) & \(e_{x_{2}}\) & \(e_{y_{2}}\) & \(e_{x_{3}}\) & \(e_{y_{3}}\) & Walltime & Iterations \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.01\)} & EKI & 0.72\% & 0.28\% & 2.88\% & 0.60\% & 0.11\% & 0.73\% & 1.46\% & 2.66 seconds & 59 \\ & HMC & 0.72\% & 0.28\% & 2.88\% & 0.60\% & 0.11\% & 0.73\% & 1.46\% & 68.50 seconds & - \\ \hline \multirow{2}{*}{\(\sigma_{u}=0.1\)} & EKI & 1.30\% & 0.08\% & 1.02\% & 0.53\% & 0.30\% & 2.11\% & 1.24\% & 2.97 seconds & 67 \\ & HMC & 1.43\% & 0.45\% & 0.17\% & 0.23\% & 0.06\% & 5.08\% & 2.62\% & 68.58 seconds & - \\ \hline \end{tabular}
\end{table}
Table 4.13: Example 4.6: relative errors \(e_{u}\) of the forward solution \(u\) and \(e_{x_{i}}\), \(e_{y_{i}}\) of the source locations \(x_{i}\) and \(y_{i}\) for \(i=1,2,3\), for the noise levels \(\sigma_{u}=0.01,0.1\), as well as the average walltime. Average EKI iterations for the EKI B-PINN are also reported.
2305.04267 | Provable Identifiability of Two-Layer ReLU Neural Networks via LASSO
Regularization | LASSO regularization is a popular regression tool to enhance the prediction
accuracy of statistical models by performing variable selection through the
$\ell_1$ penalty, initially formulated for the linear model and its variants.
In this paper, the territory of LASSO is extended to two-layer ReLU neural
networks, a fashionable and powerful nonlinear regression model. Specifically,
given a neural network whose output $y$ depends only on a small subset of input
$\boldsymbol{x}$, denoted by $\mathcal{S}^{\star}$, we prove that the LASSO
estimator can stably reconstruct the neural network and identify
$\mathcal{S}^{\star}$ when the number of samples scales logarithmically with
the input dimension. This challenging regime has been well understood for
linear models while barely studied for neural networks. Our theory lies in an
extended Restricted Isometry Property (RIP)-based analysis framework for
two-layer ReLU neural networks, which may be of independent interest to other
LASSO or neural network settings. Based on the result, we advocate a neural
network-based variable selection method. Experiments on simulated and
real-world datasets show promising performance of the variable selection
approach compared with existing techniques. | Gen Li, Ganghua Wang, Jie Ding | 2023-05-07T13:05:09Z | http://arxiv.org/abs/2305.04267v1 | # Provable Identifiability of Two-Layer ReLU Neural Networks via LASSO Regularization
###### Abstract
LASSO regularization is a popular regression tool to enhance the prediction accuracy of statistical models by performing variable selection through the \(\ell_{1}\) penalty, initially formulated for the linear model and its variants. In this paper, the territory of LASSO is extended to two-layer ReLU neural networks, a fashionable and powerful nonlinear regression model. Specifically, given a neural network whose output \(y\) depends only on a small subset of input \(x\), denoted by \(\mathcal{S}^{*}\), we prove that the LASSO estimator can stably reconstruct the neural network and identify \(\mathcal{S}^{*}\) when the number of samples scales logarithmically with the input dimension. This challenging regime has been well understood for linear models while barely studied for neural networks. Our theory lies in an extended Restricted Isometry Property (RIP)-based analysis framework for two-layer ReLU neural networks, which may be of independent interest to other LASSO or neural network settings. Based on the result, we advocate a neural network-based variable selection method. Experiments on simulated and real-world datasets show promising performance of the variable selection approach compared with existing techniques.
Lasso, Identifiability, Neural network, Nonlinear regression, Variable selection.
## I Introduction
Given \(n\) observations \((y_{i},\mathbf{x}_{i})\), \(i=1,\ldots,n\), we often model them with the regression form of \(y_{i}=f(\mathbf{x}_{i})+\xi_{i}\), with an unknown function \(f\), \(\mathbf{x}_{i}\in\mathbb{R}^{p}\) being the input variables, and \(\xi_{i}\) representing statistical errors. A general goal is to estimate a regression function \(\widehat{f}_{n}\) close to \(f\) for prediction or interpretation. This is a challenging problem when the input dimension \(p\) is comparable or even much larger than the data size \(n\). For linear regressions, namely \(f(\mathbf{x})=\mathbf{w}^{\top}\mathbf{x}\), the least absolute shrinkage and selection operator (LASSO) [1] regularization has been established as a standard tool to estimate \(f\). The LASSO has also been successfully used and studied in many nonlinear models such as generalized linear models [2], proportional hazards models [3], and neural networks [4]. For LASSO-regularized neural networks, existing works have studied different properties, such as convergence of training [5], model pruning [6, 7], and feature selection [8, 9]. The LASSO regularization has also been added into the standard deep learning toolbox of many open-source libraries, e.g., Tensorflow [10] and Pytorch [11].
Despite the practical success of LASSO in improving the generalizability and sparsification of neural networks, whether one can use LASSO for identifying significant variables is underexplored. For linear models, the variable selection problem is also known as support recovery or feature selection in different literature. Selection consistency requires that the probability of \(\text{supp}(\widehat{\mathbf{w}})=\text{supp}(\mathbf{w})\) converges to one as \(n\to\infty\). The standard approach to selecting a parsimonious sub-model is to either solve a penalized regression problem or iteratively pick up significant variables [12]. The existing methods differ in how they incorporate unique domain knowledge (e.g., sparsity, multicollinearity, group behavior) or what desired properties (e.g., consistency in coefficient estimation, consistency in variable selection) to achieve [13]. For instance, consistency of the LASSO method [1] in estimating the significant variables has been extensively studied under various technical conditions, including sparsity, mutual coherence [14], restricted isometry [15], irrepresentable condition [16], and restricted eigenvalue [17].
Many theoretical studies of neural networks have focused on the generalizability. For example, a universal approximation theorem was established that shows any continuous multivariate function can be represented precisely by a polynomialized two-layer network [18]. It was later shown that any continuous function could be approximated arbitrarily well by a two-layer perceptron with sigmoid activation functions [19], and an approximation error bound of using two-layer neural networks to fit arbitrary smooth functions has been established [20, 21]. Statistically, generalization error bounds for two-layer neural networks [21, 22] and multi-layer networks [23, 24, 25] have been developed. From an optimization perspective, the parameter estimation of neural networks was cast into a tensor decomposition problem where a provably global optimum can be obtained [26, 27, 28]. Very recently, a dimension-free Rademacher complexity to bound the generalization error for deep ReLU neural networks was developed to avoid the curse of dimensionality [29]. It was proved that certain deep neural networks with few nonzero network parameters could achieve minimax rates of convergence [30]. A tight error bound free from the input dimension was developed by assuming that the data is generated from a generalized hierarchical interaction model [31].
This work theoretically studies the identifiability of neural networks and uses it for variable selection. Specifically, suppose data observations are generated from a neural network with only a few nonzero coefficients. The identifiability concerns the possibility of identifying those coefficients, which may be further used to identify a sparse set of input variables that are genuinely relevant to the response. In this direction, LASSO and its variant Group LASSO [32] have been advocated for regularizing neural networks for variable selection in practice (see, e.g., [6, 8, 9, 33]).
In this paper, we consider the following class of two-layer ReLU neural networks for regression.
\[\mathcal{F}_{r}=\bigg{\{}f:\boldsymbol{x}\mapsto f(\boldsymbol{x})=\sum_{j=1}^{r}a_{j}\mathrm{relu}(\boldsymbol{w}_{j}^{\top}\boldsymbol{x}+b_{j}),\ \text{where }a_{j},b_{j}\in\mathbb{R},\ \boldsymbol{w}_{j}\in\mathbb{R}^{p}\bigg{\}}.\]
Here, \(p\) and \(r\) denote the input dimension and the number of neurons, respectively. We will assume that data are generated from a regression function in \(\mathcal{F}_{r}\) perturbed by a small term. We will study the following two questions.
First, if the underlying regression function \(f\) admits a parsimonious representation so that only a small set of input variables, \(\mathcal{S}^{\star}\), is relevant, can we identify them with high probability given possibly noisy measurements \((y_{i},\boldsymbol{x}_{i})\), for \(i=1,\ldots,n\)? Second, is such an \(\mathcal{S}^{\star}\) estimable, meaning that it can be solved from an optimization problem with high probability, even in small-\(n\) and large-\(p\) regimes?
To address the above questions, we will establish a theory for neural networks with the LASSO regularization by considering the problem \(\min_{\boldsymbol{W},\boldsymbol{a},\boldsymbol{b}}\|\boldsymbol{W}\|_{1}\) under the constraint of
\[\frac{1}{n}\sum_{i=1}^{n}\biggl{(}y_{i}-\sum_{j=1}^{r}a_{j}\mathrm{relu}( \boldsymbol{w}_{j}^{\top}\boldsymbol{x}_{i}+b_{j})\biggr{)}^{2}\leq\sigma^{2},\]
which is an alternative version of the \(\ell_{1}\)-regularization. More notational details will be introduced in Subsection II-B.
Our theory gives positive answers to the above questions. We theoretically show that the LASSO-type estimator can stably identify ReLU neural networks with sparse input signals, up to a permutation of hidden neurons. We focus only on varying \(n\) and \(p\), implicitly assuming that the sparsity of \(\boldsymbol{W}^{\star}\) and the number of neurons \(r\) are fixed. While this does not address wide neural networks, we think it still corresponds to a nontrivial and interesting function class. For example, the class contains linear functions when input signals are bounded. Our result is rather general as it applies to noisy observations of \(y\) and to dimension regimes where the sample size \(n\) is much smaller than the number of input variables \(p\). The theory is derived from new concentration bounds and function-analytic arguments that may be of independent interest.
Inspired by the developed theory, we also propose a neural network-based variable selection method. The idea is to use the neural system as a vehicle to model nonlinearity and extract significant variables. Through various experimental studies, we show encouraging performance of the technique in identifying a sparse set of significant variables from large dimensional data, even if the underlying data are not generated from a neural network. Compared with popular approaches based on tree ensembles and linear-LASSO, the developed method is suitable for variable selection from nonlinear, large-dimensional, and low-noise systems.
The rest of the paper is outlined as follows. Section II introduces the main theoretical result and proposes an algorithm to perform variable selection. Section III uses simulated and real-world datasets to demonstrate the proposed theory and algorithm. Section IV concludes the paper.
## II Main result
### _Notation_
Let \(\boldsymbol{u}_{\mathcal{S}}\) denote the vector whose entries indexed in the set \(\mathcal{S}\) remain the same as those in \(\boldsymbol{u}\), and the remaining entries are zero. For any matrix \(\boldsymbol{W}\in\mathbb{R}^{p\times r}\), we define
\[\|\boldsymbol{W}\|_{1}=\sum_{1\leq k\leq p,1\leq j\leq r}|w_{kj}|,\,\| \boldsymbol{W}\|_{\mathrm{F}}=\biggl{(}\sum_{1\leq k\leq p,1\leq j\leq r}w_{ kj}^{2}\biggr{)}^{1/2}.\]
Similar notations apply to vectors. The inner product of two vectors is denoted as \(\langle\boldsymbol{u},\boldsymbol{v}\rangle\). Let \(\boldsymbol{w}_{j}\) denote the \(j\)-th column of \(\boldsymbol{W}\). The sparsity of a matrix \(\boldsymbol{W}\) refers to the number of nonzero entries in \(\boldsymbol{W}\). Let \(\mathcal{N}(\boldsymbol{0},\boldsymbol{I}_{p})\) denote the standard \(p\)-dimensional Gaussian distribution, and \(1(\cdot)\) denote the indicator function. The rectified linear unit (ReLU) function is defined by \(\mathrm{relu}(v)=\max\{v,0\}\) for all \(v\in\mathbb{R}\).
### _Formulation_
Consider \(n\) independently and identically distributed (i.i.d.) observations \(\{\boldsymbol{x}_{i},y_{i}\}_{1\leq i\leq n}\) satisfying
\[y_{i}=\sum_{j=1}^{r}a_{j}^{\star}\cdot\mathrm{relu}(\boldsymbol{w}_{j}^{\star \top}\boldsymbol{x}_{i}+b_{j}^{\star})+\xi_{i} \tag{1}\]
with \(\boldsymbol{x}_{i}\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{I}_{p})\), where \(r\) is the number of neurons, \(a_{j}^{\star}\in\{1,-1\}\), \(\boldsymbol{w}_{j}^{\star}\in\mathbb{R}^{p}\), \(b_{j}^{\star}\in\mathbb{R}\), and \(\xi_{i}\) denotes the random noise or approximation error obeying
\[\frac{1}{n}\sum_{i=1}^{n}\xi_{i}^{2}\leq\sigma^{2}. \tag{2}\]
In the above formulation, the assumption \(a_{j}^{\star}\in\{1,-1\}\) does not lose generality since \(a\cdot\mathrm{relu}(b)=ac\cdot\mathrm{relu}(b/c)\) for any \(c>0\). The setting of Inequality (2) is for simplicity; if the \(\xi_{i}\)'s are unbounded random variables, the theoretical result to be introduced still holds, as explained further in the Appendix. The \(\xi_{i}\)'s are not necessarily i.i.d., and \(\sigma\) is allowed to be zero, which reduces to the noiseless scenario.
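As a concrete illustration of the data-generating process (1)-(2), here is a hedged NumPy sketch with a planted row-sparse \(\boldsymbol{W}^{\star}\); the particular dimensions and parameter draws are ours (they mirror the synthetic experiment in Section III), and the sketch does not enforce the normalization in Assumption 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r, s_rows = 500, 100, 16, 10   # samples, inputs, neurons, relevant variables

# planted parameters: only the first s_rows input variables are relevant
W_star = np.zeros((p, r))
W_star[:s_rows] = rng.uniform(size=(s_rows, r))
a_star = rng.choice([-1.0, 1.0], size=r)            # a_j in {1, -1}
b_star = rng.uniform(size=r)

X = rng.standard_normal((n, p))                     # x_i ~ N(0, I_p)
y = np.maximum(X @ W_star + b_star, 0.0) @ a_star   # two-layer ReLU outputs
sigma = 0.1
y = y + sigma * rng.standard_normal(n)              # noise, consistent with (2) in expectation
```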
Let \(\boldsymbol{W}^{\star}=[\boldsymbol{w}_{1}^{\star},\ldots,\boldsymbol{w}_{r}^{ \star}]\in\mathbb{R}^{p\times r}\) denote the data-generating coefficients. The main problem to address is whether one can stably identify those nonzero elements, given that most entries in \(\boldsymbol{W}^{\star}\) are zero. The study of neural networks from an identifiability perspective is essential. Unlike the generalizability problem that studies the predictive performance of machine learning models, the identifiability may be used to interpret modeling results and help scientists make trustworthy decisions. To illustrate this point, we will propose to use neural networks for variable selection in Subsection II-E.
To answer the above questions, we propose to study the following LASSO-type optimization. Let \(\big{(}\widehat{\boldsymbol{W}},\widehat{\boldsymbol{a}},\widehat{ \boldsymbol{b}}\big{)}\) be a solution to the following optimization problem,
\[\min_{\boldsymbol{W},\boldsymbol{a},\boldsymbol{b}}\|\boldsymbol{W} \|_{1} \tag{3}\] \[\text{subject to }\frac{1}{n}\sum_{i=1}^{n}\biggl{(}y_{i}-\sum_{j=1}^{r}a_ {j}\cdot\mathrm{relu}(\boldsymbol{w}_{j}^{\top}\boldsymbol{x}_{i}+b_{j}) \biggr{)}^{2}\leq\sigma^{2},\]
within the feasible range \(\boldsymbol{a}\in\{1,-1\}^{r}\), \(\boldsymbol{W}\in\mathbb{R}^{p\times r}\), and \(\boldsymbol{b}\in\mathbb{R}^{r}\). Intuitively, the optimization operates under the
constraint that the training error is not too large, and the objective function tends to sparsify \(\mathbf{W}\). Under some regularity conditions, we will prove that the solution is indeed sparse and close to the data-generating process.
### _Main result_
We make the following technical assumption.
**Assumption 1**.: _For some constant \(B\geq 1\),_
\[1\leq\|\mathbf{w}_{j}^{\star}\|_{2}\leq B\quad\text{and}\quad|b_{j}^{\star}|\leq B \quad\forall 1\leq j\leq r. \tag{4}\]
_In addition, for some constant \(\omega>0\),_
\[\max_{j,k=1,\dots,r,j\neq k}\frac{\left|\langle\mathbf{w}_{j}^{\star},\mathbf{w}_{k}^ {\star}\rangle\right|}{\|\mathbf{w}_{j}^{\star}\|_{2}\|\mathbf{w}_{k}^{\star}\|_{2}}\leq \frac{1}{r^{\omega}}. \tag{5}\]
**Remark 1** (Discussion of Assumption 1).: _The condition in (4) is a normalization only for technical convenience, since we can re-scale \(\mathbf{w}_{j},b_{j},y_{i},\sigma\) proportionally without loss of generality. Though this condition implicitly requires \(\mathbf{w}_{j}^{\star}\neq\mathbf{0}\) for all \(j=1,\dots,r\), it is reasonable since \(\mathbf{w}_{j}^{\star}=\mathbf{0}\) would mean that neuron \(j\) is never used/activated. The condition in (5) requires that the angle between any two different coefficient vectors is not too small. This condition is analogous to a bounded-eigenvalue condition often assumed for linear regression problems, where each \(\mathbf{w}_{j}^{\star}\) is understood as a column in the design matrix. This condition is by no means mild or easy to verify in practice. Nevertheless, as our focused regime is large \(p,n\) but small \(r\), we think the condition in (5) is still reasonable. For example, when \(r=2\), this condition simply requires that \(\mathbf{w}_{1}^{\star}\) and \(\mathbf{w}_{2}^{\star}\) are not parallel._
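For a given weight matrix, the condition in (5) is straightforward to check numerically; below is a minimal sketch, continuing `W_star` from the sketch above, with an illustrative choice of \(\omega\).

```python
import numpy as np

def max_coherence(W):
    """max over j != k of |<w_j, w_k>| / (||w_j|| ||w_k||), columns as neurons."""
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
    G = np.abs(Wn.T @ Wn)
    np.fill_diagonal(G, 0.0)
    return G.max()

omega = 0.5                                    # illustrative choice
r = W_star.shape[1]
print(max_coherence(W_star) <= r ** (-omega))  # does condition (5) hold?
```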
Our main result shows that if \(\mathbf{W}^{\star}\) is sparse, one can stably reconstruct a neural network when the number of samples (\(n\)) scales logarithmically with the input dimension (\(p\)). A skeptical reader may ask how the constants exactly depend on the sparsity and \(r\). We will provide a more elaborated result in Subsection II-D and introduce the proof there.
**Theorem 1**.: _Under Assumption 1, there exist some constants \(c_{1},c_{2},c_{3}>0\) depending only (polynomially) on the sparsity of \(\mathbf{W}^{\star}\) such that for any \(\delta>0\), one has with probability at least \(1-\delta\),_
\[\widehat{\mathbf{a}}=\mathbf{\Pi}\mathbf{a}^{\star}\text{ and }\|\widehat{\mathbf{W}}-\mathbf{W}^{ \star}\mathbf{\Pi}^{\top}\|_{\mathrm{F}}+\|\widehat{\mathbf{b}}-\mathbf{\Pi}\mathbf{b}^{\star }\|_{2}\leq c_{1}\sigma \tag{6}\]
_for some permutation matrix \(\mathbf{\Pi}\), provided that_
\[n>c_{2}\log^{4}\frac{p}{\delta}\qquad\text{and}\qquad\sigma<c_{3}. \tag{7}\]
**Remark 2** (Interpretation of Theorem 1).: _The permutation matrix \(\mathbf{\Pi}\) is necessary since the considered neural networks produce identical predictive distributions (of \(y\) conditional \(\mathbf{x}\)) under any permutation of the hidden neurons. The result states that the underlying neural coefficients can be stably estimated even when the sample size \(n\) is much smaller than the number of variables \(p\). Also, the estimation error bound is at the order of \(\sigma\), the specified noise level in (2)._
_Suppose that we define the signal-to-noise ratio (SNR) to be \(\mathbb{E}\|\mathbf{x}\|^{2}/\sigma^{2}\). An alternative way to interpret the theorem is that a large SNR ensures that the global minimizer is close to the ground truth with high probability. One may wonder what happens if the \(\sigma<c_{3}\) condition is not met. We note that if \(\sigma\) is too large, the error bound in (6) becomes loose and is not of much interest anyway. In other words, if the SNR is small, we may not be able to estimate the parameters stably. This point will be demonstrated by experimental studies in Section III._
The estimation results in Theorem 1 can be translated into the variable selection results shown in the following Corollary 1. The connection is based on the fact that if the \(i\)-th variable is redundant, the underlying coefficients associated with it should be zero. Let \(\mathbf{w}_{i,\cdot}^{\star}\) denote the \(i\)-th row of \(\mathbf{W}^{\star}\). Then,
\[\mathcal{S}^{\star}=\{1\leq i\leq p:\|\mathbf{w}_{i,\cdot}^{\star}\|_{2}>0\}\]
characterizes the "significant variables." Corollary 1 states that the set of variables with non-vanished coefficient estimates contains all the significant variables. The corollary also shows that with a suitable shrinkage of the coefficient estimates, one can achieve variable selection consistency.
**Corollary 1** (Variable selection).: _Let \(\widehat{\mathcal{S}}_{c_{1}\sigma}\subseteq\{1,\dots,p\}\) denote the set of \(i\)'s such that \(\|\widehat{\mathbf{w}}_{i,\cdot}\|_{2}>c_{1}\sigma\). Under the same assumption as in Theorem 1, and \(\min_{i\in\mathcal{S}^{\star}}\|\mathbf{w}_{i,\cdot}^{\star}\|_{2}>2c_{1}\sigma\), for any \(\delta>0\), one has_
\[\mathbb{P}(\mathcal{S}^{\star}=\widehat{\mathcal{S}}_{c_{1}\sigma})\geq 1-\delta,\]
_provided that \(n>c_{2}\log^{4}\frac{p}{\delta}\) and \(\sigma<c_{3}\)._
Considering the noiseless scenario \(\sigma=0\), Theorem 1 also implies the following corollary.
**Corollary 2** (Unique parsimonious representation).: _Under the same assumption as in Theorem 1, there exists a constant \(c_{2}>0\) depending only on the sparsity of \(\mathbf{W}^{\star}\) such that for any \(\delta>0\), one has with probability at least \(1-\delta\),_
\[\widehat{\mathbf{a}}=\mathbf{\Pi}\mathbf{a}^{\star},\quad\widehat{\mathbf{W}}=\mathbf{W}^{\star} \mathbf{\Pi}^{\top},\quad\text{and}\quad\widehat{\mathbf{b}}=\mathbf{\Pi}\mathbf{b}^{\star}\]
_for some permutation matrix \(\mathbf{\Pi}\), provided that \(n>c_{2}\log^{4}\frac{p}{\delta}\)._
Corollary 2 states that among all the possible representations \(\mathbf{W}\) in (1) (with \(\xi_{i}=0\)), the one(s) with the smallest \(\ell_{1}\)-norm must be identical to \(\mathbf{W}^{\star}\) up to a column permutation with high probability. In other words, the most parsimonious representation (in the sense of \(\ell_{1}\) norm) of two-layer ReLU neural networks is unique. This observation addresses the questions raised in Section I.
It is worth mentioning that since the weight matrix \(\mathbf{W}\) of the neural network is row-sparse, Group-LASSO is a suitable alternative to LASSO. We leave the analysis of Group-LASSO for future study.
### _Elaboration on the main result_
Suppose that \(\mathbf{W}^{\star}\) has at most \(s\) nonzero entries. The following theorem is a more elaborated version of Theorem 1.
**Theorem 2**.: _There exist some constants \(c_{1},c_{2},c_{3}>0\) such that for any \(\delta>0\), one has with probability at least \(1-\delta\),_
\[\widehat{\mathbf{a}}=\mathbf{\Pi}\mathbf{a}^{\star}\text{ and }\|\widehat{\mathbf{W}}-\mathbf{W}^{ \star}\mathbf{\Pi}^{\top}\|_{\mathrm{F}}+\|\widehat{\mathbf{b}}-\mathbf{\Pi}\mathbf{b}^{\star} \|_{2}\leq c_{1}\sigma \tag{8}\]
_for some permutation \(\mathbf{\Pi}\in\{0,1\}^{r\times r}\), provided that Assumption 1 holds and_
\[n>c_{2}s^{3}r^{13}\log^{4}\frac{p}{\delta}\qquad\text{and}\qquad \sigma<\frac{c_{3}}{r}. \tag{9}\]
**Remark 3** (Sketch proof of Theorem 1).: _The proof of Theorem 1 is nontrivial and is included in the Appendix. Next, we briefly explain the sketch of the proof. First, we will define what we refer to as \(D_{1}\)-distance and \(D_{2}\)-distance between \((\mathbf{W},\mathbf{a},\mathbf{b})\) and \((\mathbf{W}^{\star},\mathbf{a}^{\star},\mathbf{b}^{\star})\). These distances can be regarded as the counterpart of the classical \(\ell_{1}\) and \(\ell_{2}\) distances between two vectors, but allowing the invariance under any permutation of neurons (Remark 2). Then, we let_
\[\Delta_{n}\coloneqq\frac{1}{n}\sum_{i=1}^{n} \bigg{[}\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x} _{i}+b_{j})\] \[-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star \top}\mathbf{x}_{i}+b_{j}^{\star})\bigg{]}^{2},\]
_where \((\mathbf{W},\mathbf{a},\mathbf{b})\) is the solution of the problem in (3), and develop the following upper and lower bounds of it:_
\[\Delta_{n} \leq c_{6}\left(\frac{r}{S}+\frac{r\log^{3}\frac{p}{n\delta}}{n} \right)D_{1}^{2}+c_{6}\sigma^{2}\quad\text{and}\] \[\Delta_{n} \geq c_{4}\min\left\{\frac{1}{r},D_{2}^{2}\right\} \tag{10}\]
_hold with probability at least \(1-\delta\), provided that \(n\geq c_{5}S^{3}r^{4}\log^{4}\frac{p}{\delta}\), for some constants \(c_{4},c_{5},c_{6}\), and \(S\) to be specified. Here, the upper bound will be derived from a series of elementary inequalities. The lower bound is reminiscent of the Restricted Isometry Property (RIP) [15] for linear models. We will derive it from the lower bound of the population counterpart by concentration arguments, namely_
\[\mathbb{E}\left[\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top }\mathbf{x}+b_{j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top }\mathbf{x}+b_{j}^{\star})\right]^{2}\] \[\geq c\min\left\{\frac{1}{r},D_{2}^{2}\right\},\]
_for some constant \(c>0\). The bounds in (10) imply that with high probability,_
\[c_{4}\min\left\{\frac{1}{r},D_{2}^{2}\right\}\leq c_{6}\left( \frac{r}{S}+\frac{r\log^{3}\frac{p}{n\delta}}{n}\right)D_{1}^{2}+c_{6}\sigma ^{2},\]
_Using this and an inequality connecting \(D_{1}\) and \(D_{2}\), we can prove the final result._
**Remark 4** (Alternative assumption and result).: _We provide an alternative to Theorem 2. Consider the following Assumption 1' as an alternative to Assumption 1._
**Assumption 1'**. For some constant \(B>0\),_
\[\|\mathbf{w}_{j}^{\star}\|_{2}\leq B\quad\text{and}\quad|b_{j}^{\star }|\leq B\quad\text{ for all }1\leq j\leq r.\]
_In addition,_
\[\mathbb{E}\Big{[}\langle\mathbf{a},\mathrm{relu}(\mathbf{W}^{\top}\mathbf{ x}+\mathbf{b})\rangle-\langle\mathbf{a}^{\star},\mathrm{relu}(\mathbf{W}^{\star\top}\mathbf{x}+\mathbf{b}^{ \star})\rangle\Big{]}^{2}\] \[\geq\psi D_{2}\left[(\mathbf{W},\mathbf{a},\mathbf{b}),(\mathbf{W}^{\star},\mathbf{ a}^{\star},\mathbf{b}^{\star})\right]^{2}, \tag{11}\]
_and_
\[n>\frac{c_{2}}{\psi}s^{3}r^{3}\log^{4}\frac{p}{\delta}. \tag{12}\]
_With Assumption 1' instead of Assumption 1, one can still derive the same result as Theorem 2. The proof of the above result is similar to that of Theorem 2, except that we insert Inequality (11) instead of Inequality (22) into (21) in Appendix A._
### _Variable selection_
To solve the optimization problem (3) in practice, we consider the following alternative problem,
\[\min_{\mathbf{W}\in\mathbb{R}^{p\times r},\mathbf{a}\in\mathbb{R}^{r}, \mathbf{b}\in\mathbb{R}^{r}} \bigg{\{}\frac{1}{n}\sum_{i=1}^{n}\biggl{(}y_{i}-\sum_{j=1}^{r}a_{j} \cdot\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j})\biggr{)}^{2}\] \[+\lambda\|\mathbf{W}\|_{1}\bigg{\}}. \tag{13}\]
It has been empirically shown that algorithms such as stochastic gradient descent can find a good approximate solution to the above optimization problem [34, 35]. Next, we discuss some details regarding variable selection using LASSO-regularized neural networks.
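As a concrete reference for that discussion, here is a minimal PyTorch sketch of problem (13); the optimizer choice and the default hyperparameter values are illustrative placeholders for the tuning procedure described next, and, as in (13), the second-layer weights are left unconstrained.

```python
import torch
import torch.nn as nn

def train_lasso_nn(X, y, r=16, lam=0.01, lr=0.005, epochs=200):
    """Fit a two-layer ReLU network with an l1 penalty on the input weights.

    X: (n, p) float tensor; y: (n,) float tensor.
    Returns the model and per-variable importance scores.
    """
    n, p = X.shape
    model = nn.Sequential(nn.Linear(p, r), nn.ReLU(), nn.Linear(r, 1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        mse = ((y - model(X).squeeze(-1)) ** 2).mean()
        l1 = model[0].weight.abs().sum()          # ||W||_1 over the input layer
        (mse + lam * l1).backward()
        opt.step()
    # importance of variable i: l2 norm of the weights attached to input i
    importance = model[0].weight.detach().norm(dim=0)  # shape (p,)
    return model, importance
```

The returned `importance` vector is used for the variable selection steps below.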
**Tuning parameters.** Given a labeled dataset in practice, we will need to tune hyper-parameters including the penalty term \(\lambda\), the number of neurons \(r\), learning rate, and the number of epochs. We suggest the usual approach that splits the available data into training and validation parts. The training data are used to estimate neural networks for a set of candidate hyper-parameters. The most suitable candidate will be identified based on the predictive performance on the validation data.
**Variable importance.** Inspired by Corollary 1, we interpret the \(\ell_{2}\)-norm of \(\widehat{\mathbf{w}}_{i,\cdot}\) as the importance of the \(i\)-th variable, for \(i=1,\ldots,p\). Corollary 1 indicates that we can accurately identify all the significant variables in \(\mathcal{S}^{\star}\) with high probability if we correctly set the cutoff value \(c_{1}\sigma\).
**Setting the cutoff value.** It is conceivable that variables with large importance are preferred over those with near-zero importance. This inspires us to cluster the variables into two groups based on their importance. Here, we suggest two possible approaches. The first is to use a data-driven approach such as \(k\)-means and Gaussian mixture model (GMM). The second is to manually set a threshold value according to domain knowledge on the number of important variables.
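For instance, a hedged sketch of the first, data-driven approach using \(k\)-means with two clusters, continuing the `importance` vector from the sketch above (a GMM variant is analogous):

```python
import numpy as np
from sklearn.cluster import KMeans

def select_variables(importance):
    """Split variables into high- and low-importance groups by 2-means."""
    imp = np.asarray(importance).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(imp)
    keep = labels == labels[int(np.argmax(imp))]  # cluster of the top variable
    return np.flatnonzero(keep)

# selected = select_variables(importance.numpy())
```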
**Extension to deep neural networks.** Inspired by (13), we can intuitively generalize the proposed method to deep neural networks by penalizing the \(\ell_{1}\)-norm of the weight matrix in the input layer. Though we do not have a theoretical analysis for this broader setting, numerical studies show that it is still effective.
## III Experiments
We perform experimental studies to show the promising performance of the proposed variable selection method. We compare the variable selection accuracy and prediction performance of the proposed algorithm ('NN') with several
baseline methods, including LASSO ('LASSO'), orthogonal matching pursuit ('OMP'), random forest ('RF'), gradient boosting ('GB'), neural networks with group LASSO ('GLASSO') [5], group sparse regularization ('GSR') [8], and LNET ('LNET') [9]. The 'NN' hyperparameters to search over are the penalty term \(\lambda\in\{0.1,0.05,0.01,0.005,0.001\}\), the number of neurons \(r\in\{20,50,100\}\), the learning rate in \(\{0.01,0.005,0.001\}\), and the number of epochs in \(\{100,200,500\}\). Moreover, we extend 'NN' to a neural network that contains an additional hidden layer of ten neurons. We distinguish the proposed method with two-layer and three-layer neural networks by 'NN-2' and 'NN-3', respectively. Further experimental details are included in Appendix E.
### _Synthetic datasets_
#### III-A1 NN-generated dataset
The first experiment uses data generated from Equation (1) with \(p=100\) variables and \(r=16\) neurons. The first \(10\) rows of the neural coefficients \(\mathbf{W}\) are independently generated from the standard uniform distribution, and the remaining rows are zeros, representing \(10\) significant variables. The neural biases \(\mathbf{b}\) are also generated from the standard uniform distribution. The signs of the neurons, \(\mathbf{a}\), follow an independent Bernoulli distribution. The training size is \(n=500\), and the test size is \(2000\). The noise is zero-mean Gaussian with standard deviation \(\sigma\) set to \(0\), \(0.5\), \(1\), and \(5\). For each \(\sigma\), we evaluate each method's mean squared error on the test dataset and three quantities for variable selection: the number of correctly selected variables ('TP', the larger the better), the number of wrongly selected variables ('FP', the smaller the better), and the area-under-curve score ('AUC', the larger the better). Here, 'AUC' is evaluated based on the variable importance given by each method, which is detailed in Appendix E. The procedure is independently replicated \(20\) times.
The results are reported in Table I and Table II, which suggest that 'NN' has the best overall performance for both selection and prediction. In particular, 'NN-2' and 'NN-3' have almost the same performance among all situations, which empirically demonstrates that the proposed method also works for deeper neural networks. It is interesting to compare 'NN' with 'LNET': 'NN' has slightly higher test error than 'LNET' when the noise level is small, but a much smaller false positive rate and higher AUC score than 'LNET'. It indicates that 'NN' is more accurate for variable selection, while 'LNET' tends to over-select variables for better prediction accuracy. Also, all the methods perform worse as the noise level \(\sigma\) increases.
the quadratic factor \(x_{3}\). As for the prediction performance, neural network-based methods outperform other methods. In particular, 'NN' is better than 'GLASSO' and 'GSR', while 'LNET' exhibits better prediction and worse selection performance as seen in previous experiments.
### _BGSBoy dataset_
The BGSBoy dataset involves 66 boys from the Berkeley guidance study (BGS) of children born in 1928-29 in Berkeley, CA [36]. The dataset includes the height ('HT'), weight ('WT'), leg circumference ('LG'), strength ('ST') at different ages (2, 9, 18 years), and body mass index ('BMI18'). We choose 'BMI18' as the response, which is defined as follows.
\[\text{BMI18}=\text{WT18}/(\text{HT18}/100)^{2}, \tag{14}\]
where WT18 and HT18 denote the weight and height at the age of 18, respectively. In other words, 'WT18' and 'HT18' are sufficient for modeling the response among \(p=10\) variables. Other variables are correlated but redundant. The training size is \(n=44\) and the test size is 22. Other settings are the same as before. We compare the prediction performance and explore the three features which are most frequently selected by each method. The results are summarized in Table VII.
From the results, both linear and NN-based methods can identify 'WT18' and 'HT18' in all the replications. Meanwhile, tree-based methods may miss 'HT18' but select 'LG18' instead, which is only correlated with the response. Interestingly, we find that the linear methods still predict well in this experiment. A possible reason is that Equation (14) can be well-approximated by a first-order Taylor expansion on 'HT18' at the value of around \(180\), and the range of 'HT18' is within a small interval around \(180\).
### _UJIIndoorLoc dataset_
The UJIIndoorLoc dataset aims to solve the indoor localization problem via WiFi fingerprinting and other variables such as the building and floor numbers. A detailed description
can be found in [37]. Specifically, we have 520 Wireless Access Points (WAPs) signals (which are continuous variables) and 'FLOOR', 'BUILDING', 'SPACEID', 'RELATIVEPOSITION', 'USERID', and 'PHONEID' as categorical variables. The response variable is a user's longitude ('Longitude'). The dataset has \(19937\) observations. We randomly sample \(3000\) observations and split them into \(n=2000\) for training and \(1000\) for test. As part of the pre-processing, we create binary dummy variables for the categorical variables, which results in \(p=681\) variables in total. We explore the ten features that are most frequently selected by each method. We set the cutoff value as the tenth-largest variable importance. The procedure is independently replicated \(100\) times. The results are reported in Table VIII.
Based on the results, 'NN' achieves the best prediction performance and significantly outperforms the other methods. As for variable selection, since 'BUILDING' greatly influences the location according to our domain knowledge, it is unsurprisingly selected by all methods in every replication. Beyond 'BUILDING', however, different methods select different variables with some overlap, e.g., 'PHONEID_14' selected by 'GLASSO' and 'GB', and 'USERID_16' selected by 'NN' and 'LASSO', which indicates potentially important variables. 'LNET' again selects more variables than the other methods: nearly 60 variables are selected by 'LNET' in every replication. Overall, the methods do not reach an agreement on variable selection. 'NN' implies that all the WAPs signals are weak while the categorical variables provide more information about the user location. Given the very high missing rate of WAPs signals (\(97\%\) on average, as reported in [37]), the interpretation of 'NN' seems reasonable.
### _Summary_
The experiment results show the following points. First, 'NN' can stably identify the important variables and achieves competitive prediction performance compared with the baselines. Second, an increase in the noise level hinders both the selection and prediction performance. Third, the LASSO regularization is crucial for 'NN' to avoid over-fitting, especially for small data; using group LASSO or a mixed type of penalty yields a performance similar to 'NN', while 'LNET' tends to over-select important variables. Fourth, the selection and prediction performances are often positively associated for 'NN', but this may not be the case for the baseline methods.
## IV Concluding Remarks
We established a theory for the use of LASSO in two-layer ReLU neural networks. In particular, we showed that the LASSO estimator could stably reconstruct the neural network coefficients and identify the critical underlying variables under reasonable conditions. We also proposed a practical method to solve the optimization and perform variable selection. We briefly remark on some interesting further work. First, a limitation of the work is that we considered only a small \(r\). An interesting future problem is to study \(r\) that may grow fast with \(p\) and \(n\). Second, our experiments show that the algorithm can be extended to deeper neural networks. It will be exciting to generalize the main theorem to the multi-layer cases.
The Appendix includes proofs and experimental details.
## Acknowledgment
The authors would like to thank the anonymous reviewers and Associate Editor for their constructive review comments. The work was supported in part by the U.S. National Science Foundation under Grant Numbers DMS-2134148 and CNS-2220286.
## Appendix A Analysis: proof of Theorem 2
Let \(\mathcal{S}\) be the index set with cardinality \(S\) consisting of the support for \(\mathbf{W}^{\star}\) and top entries of \(\widehat{\mathbf{W}}\), where \(S\) will be specified momentarily. Define
\[\mathbf{W}\coloneqq\widehat{\mathbf{W}}_{\mathcal{S}}\in\mathbb{R}^{p\times r},\]
and \(a_{j}=\widehat{a}_{j}\), \(b_{j}=\widehat{b}_{j}\). Define
\[d_{1}(\mathbf{w}_{1},a_{1},b_{1},\mathbf{w}_{2},a_{2},b_{2})\] \[=\left\{\begin{array}{ll}\|\mathbf{w}_{1}-\mathbf{w}_{2}\|_{1}+|b_{1}- b_{2}|&\text{if }a_{1}=a_{2};\\ \|\mathbf{w}_{1}\|_{1}+\|\mathbf{w}_{2}\|_{1}+|b_{1}|+|b_{2}|&\text{if }a_{1}\neq a_{2}, \end{array}\right. \tag{15}\]
and
\[d_{2}(\mathbf{w}_{1},a_{1},b_{1},\mathbf{w}_{2},a_{2},b_{2})\] \[=\left\{\begin{array}{ll}\sqrt{\|\mathbf{w}_{1}-\mathbf{w}_{2}\|_{2}^{2 }+|b_{1}-b_{2}|^{2}}&\text{if }a_{1}=a_{2};\\ 1&\text{if }a_{1}\neq a_{2}.\end{array}\right. \tag{16}\]
In addition, for permutation \(\pi\) on \([r]\), let
\[D_{1} \coloneqq\min_{\pi}\sum_{j=1}^{r}d_{1}(\mathbf{w}_{\pi(j)},a_{\pi(j)}, b_{\pi(j)},\mathbf{w}_{j}^{\star},a_{j}^{\star},b_{j}^{\star}), \tag{17a}\] \[D_{2} \coloneqq\min_{\pi}\sqrt{\sum_{j=1}^{r}d_{2}(\mathbf{w}_{\pi(j)},a_{ \pi(j)},b_{\pi(j)},\mathbf{w}_{j}^{\star},a_{j}^{\star},b_{j}^{\star})^{2}} \tag{17b}\]
denote the \(D_{1}\)-distance and \(D_{2}\)-distance between \((\mathbf{W},\mathbf{a},\mathbf{b})\) and \((\mathbf{W}^{\star},\mathbf{a}^{\star},\mathbf{b}^{\star})\), respectively.
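Since the focused regime keeps \(r\) small, the permutation minimization in (17) can be evaluated by brute force; the following is a small illustrative sketch for \(D_{2}\) (the \(D_{1}\) case is analogous).

```python
import numpy as np
from itertools import permutations

def d2(w1, a1, b1, w2, a2, b2):
    # pairwise distance (16)
    if a1 != a2:
        return 1.0
    return np.sqrt(np.sum((w1 - w2) ** 2) + (b1 - b2) ** 2)

def D2(W, a, b, Ws, as_, bs):
    """Permutation-minimized distance (17b); columns of W index neurons."""
    r = W.shape[1]
    best = np.inf
    for pi in permutations(range(r)):   # feasible only for small r
        tot = sum(d2(W[:, pi[j]], a[pi[j]], b[pi[j]],
                     Ws[:, j], as_[j], bs[j]) ** 2 for j in range(r))
        best = min(best, tot)
    return np.sqrt(best)
```

With these distances in hand, one has the following bounds.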
**Lemma 1**.: _For any \(\mathbf{W}\in\mathbb{R}^{p\times r}\) with \(\left\|\mathbf{W}\right\|_{0}\leq S\), there exists some universal constants \(c_{4},c_{5}>0\) such that_
\[\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{j=1}^{r}a_{j}\mathrm{relu}( \mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}( \mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j}^{\star})\right]^{2}\] \[\geq c_{4}\min\left\{\frac{1}{r},D_{2}^{2}\right\} \tag{18}\]
_holds with probability at least \(1-\delta\) provided that_
\[n\geq c_{5}S^{3}r^{4}\log^{4}\frac{p}{\delta}. \tag{19}\]
**Lemma 2**.: _There exists a universal constant \(c_{6}>0\) such that_
\[\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{j=1}^{r}a_{j}\mathrm{relu}( \mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}( \mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j}^{\star})\right]^{2}\] \[\leq c_{6}\left(\frac{r}{S}+\frac{r\log^{3}\frac{p}{n\delta}}{n }\right)D_{1}^{2}+c_{6}\sigma^{2}\]
_holds with probability at least \(1-\delta\)._
By comparing the bounds given in Lemma 1 and 2, one has
\[c_{4}\min\left\{\frac{1}{r},D_{2}^{2}\right\}\leq c_{6}\left(\frac{r}{S}+\frac {r\log^{3}\frac{p}{n\delta}}{n}\right)D_{1}^{2}+c_{6}\sigma^{2},\]
provided that
\[n>c_{5}S^{3}r^{4}\log^{4}\frac{p}{\delta}.\]
Let \(\widehat{\mathcal{S}}^{\star}\) be the index set with cardinality \(2s\) consisting of the support for \(\mathbf{W}^{\star}\) and top entries of \(\widehat{\mathbf{W}}\). In addition, let \(D_{1}^{\star}\) and \(D_{2}^{\star}\) denote the \(D_{1}\)-distance and \(D_{2}\)-distance between \(\left(\widehat{\mathbf{W}}_{\widehat{\mathcal{S}}^{\star}},\widehat{\mathbf{a}},\widehat{\mathbf{b}}\right)\) and \(\left(\mathbf{W}^{\star},\mathbf{a}^{\star},\mathbf{b}^{\star}\right)\) analogously to (17). Observing the fact that for \(S\geq 2s\), one has \(\mathcal{S}^{\star}\subset\widehat{\mathcal{S}}^{\star}\subset\mathcal{S}\), we have
\[\|\mathbf{w}_{j}-\mathbf{w}_{j}^{\star}\|_{2}\geq\|\mathbf{w}_{j,\widehat{\mathcal{S}}^{ \star}}-\mathbf{w}_{j}^{\star}\|_{2}=\|\widehat{\mathbf{w}}_{j}-\mathbf{w}_{j}^{\star}\|_ {2},\]
after some permutation, and then
\[D_{2}^{\star}\leq D_{2}.\]
In addition, after some permutation, we have \(D_{1}^{\star}\geq\left\|\widehat{\mathbf{W}}_{\widehat{\mathcal{S}}^{\star}}-\mathbf{W}^{\star}\right\|_{1}\geq\left\|\mathbf{W}^{\star}\right\|_{1}-\left\|\widehat{\mathbf{W}}_{\mathcal{S}^{\star}}\right\|_{1}\) and \(\left\|\mathbf{W}\right\|_{1}\leq\left\|\widehat{\mathbf{W}}\right\|_{1}\leq\left\|\mathbf{W}^{\star}\right\|_{1}\).
Combined with Lemma 3 in Appendix D, the above results give
\[D_{2}^{\star}\leq\frac{2c_{6}}{c_{4}}\sigma,\]
provided that for some constant \(c_{7}>0\)
\[n\geq c_{5}S^{3}\log^{4}\frac{p}{\delta}\qquad\text{with }S\geq c_{7}sr,\]
such that
\[c_{6}\left(\frac{r}{S}+\frac{r\log^{3}\frac{p}{n\delta}}{n}\right)D_{1}^{\star 2 }\leq\frac{c_{4}}{8}D_{2}^{\star 2}.\]
Then, we can conclude the proof since after appropriate permutation,
\[\|\widehat{\mathbf{W}}-\mathbf{W}^{\star}\|_{\mathrm{F}}\leq 2\|\widehat{\mathbf{W}}_{ \widehat{\mathcal{S}}^{\star}}-\mathbf{W}^{\star}\|_{\mathrm{F}}.\]
## Appendix B Proof of Lemma 1 (Lower bound)
This can be seen from the following three properties.
* Consider the case that \[D_{1}\leq\epsilon=\frac{\delta}{4nr}\sqrt{\frac{\pi}{\log\frac{4pn}{\delta}}}.\] With probability at least \(1-\delta\), \[\frac{1}{n}\sum_{i=1}^{n} \bigg{[}\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}_ {i}+b_{j})\] \[-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top} \mathbf{x}_{i}+b_{j}^{\star})\bigg{]}^{2}\] \[=\frac{D_{1}^{2}}{\epsilon^{2}}\frac{1}{n}\sum_{i=1}^{n} \bigg{[}\sum_{j=1}^{r}a_{j}\mathrm{relu}(\widetilde{\mathbf{w}}_{j}^{\top}\mathbf{x }_{i}+\widetilde{b}_{j})\] \[-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top} \mathbf{x}_{i}+b_{j}^{\star})\bigg{]}^{2},\] (20) where \(\widetilde{\mathbf{w}}_{j}=\mathbf{w}_{j}^{\star}+\frac{\epsilon}{D_{1}}\left(\mathbf{w} _{j}-\mathbf{w}_{j}^{\star}\right)\) and \(\widetilde{b}_{j}=b_{j}^{\star}+\frac{\epsilon}{D_{1}}\left(b_{j}-b_{j}^{ \star}\right)\).
* For any \(\epsilon>0\) and \[D_{1}\geq\frac{\epsilon}{\sqrt{\frac{S}{n}\log\frac{pr}{S}\log\frac{BS}{ \epsilon\delta}}},\] there exists some universal constant \(C_{1}>0\), such that with probability at least \(1-\delta\), \[\frac{1}{n}\sum_{i=1}^{n} \bigg{[}\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}_ {i}+b_{j})\] \[-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top} \mathbf{x}_{i}+b_{j}^{\star})\bigg{]}^{2}\] \[\geq\mathbb{E} \left[\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}+b_ {j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x}+b_ {j}^{\star})\right]^{2}\] \[-C_{1}D_{1}^{2}\log\frac{pn}{\delta}\sqrt{\frac{S}{n}\log\frac{pr} {S}\log\frac{BS}{\epsilon\delta}}.\] (21)
* For some universal constant \(C_{2}>0\) \[\mathbb{E} \left[\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}+b_ {j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x}+b_ {j}^{\star})\right]^{2}\] \[\geq C_{2}\min\left\{\frac{1}{r},D_{2}^{2}\right\}.\] (22)
_Putting the above together._ Let
\[\epsilon=C_{3}\frac{\delta}{nr}\sqrt{\frac{S}{n}\log\frac{BnS}{\delta}},\]
for some universal constant \(C_{3}>0\) such that
\[\frac{\epsilon}{\sqrt{\frac{S}{n}\log\frac{pr}{S}\log\frac{BS}{\epsilon\delta} }}<\frac{\delta}{4nr}\sqrt{\frac{\pi}{\log\frac{4pn}{\delta}}}.\]
Inserting (22) into (21) gives that
\[\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{j=1}^{r}a_{j}\mathrm{relu}( \mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}( \mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j}^{\star})\right]^{2}\] \[\geq C_{2}\min\left\{\frac{1}{r},D_{2}^{2}\right\}-C_{1}D_{1}^{2} \log\frac{pn}{\delta}\sqrt{\frac{S}{n}\log\frac{pr}{S}\log\frac{BS}{\epsilon \delta}}\] \[\geq\frac{C_{2}}{2}\min\left\{\frac{1}{r},D_{2}^{2}\right\}, \tag{23}\]
holds with probability at least \(1-\delta\) provided that for some constant \(C_{4}>0\)
\[n\geq C_{4}S^{3}r^{4}\log\frac{pr}{S}\log\frac{BS}{\epsilon \delta}\log^{2}\frac{pn}{\delta}\quad\text{and}\] \[D_{1}\geq\frac{\delta}{4nr}\sqrt{\frac{\pi}{\log\frac{4pn}{\delta }}}.\]
Here, the last line holds due to Lemma 3 and we assume that \(\max\left\{\|\mathbf{W}\|_{\infty},\|\mathbf{b}\|_{\infty}\right\}\) is bounded by some constant. On the other hand, if
\[D_{1}<\frac{\delta}{4nr}\sqrt{\frac{\pi}{\log\frac{4pn}{\delta}}},\]
it follows from (20) and (23) that
\[\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{j=1}^{r}a_{j}\mathrm{relu}( \mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}( \mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j}^{\star})\right]^{2}\] \[\geq\frac{D_{1}^{2}}{\epsilon^{2}}\frac{C_{2}}{2}\min\left\{ \frac{1}{r},\widetilde{D}_{2}^{2}\right\}=\frac{C_{2}}{2}D_{2}^{2}.\]
Summing up, we conclude the proof by verifying (20), (21), and (22) below.
### _Proof of (20)_
Since \(D_{1}\leq\epsilon=\frac{\delta}{4nr}\sqrt{\frac{\pi}{\log\frac{4pn}{\delta}}}\), without loss of generality, we assume that \(a_{j}=a_{j}^{\star}\) for \(1\leq j\leq r\), and
\[D_{1}=\sum_{j=1}^{r}\left(\|\mathbf{w}_{j}-\mathbf{w}_{j}^{\star}\|_{1}+|b_{j}-b_{j}^{ \star}|\right)\leq\epsilon.\]
By taking a union bound, with probability at least \(1-\frac{\delta}{2}\), one has for all \(1\leq i\leq n\) and \(1\leq j\leq r\),
\[\left|\mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j}^{\star}\right|>\frac{\delta}{2nr} \sqrt{\frac{\pi}{2}},\]
since \(\|\mathbf{w}_{j}^{\star}\|_{2}\geq 1\) and \(\mathbf{x}_{i}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). In addition, for all \(1\leq i\leq n\) and \(1\leq j\leq r\),
\[\left|\mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j}-\mathbf{w}_{j}^{\star\top}\mathbf{x}_ {i}-b_{j}^{\star}\right| \leq\|\mathbf{w}_{j}-\mathbf{w}_{j}^{\star}\|_{1}\|\mathbf{x}_{i}\|_{\infty}+|b_{j}-b_{j }^{\star}|\] \[\leq\epsilon\sqrt{2\log\frac{4pn}{\delta}}\]
holds with probability at least \(1-\frac{\delta}{2}\). Here, the last inequality comes from the fact that with probability at least \(1-\frac{\delta}{2}\),
\[\|\mathbf{x}_{i}\|_{\infty}\leq\sqrt{2\log\frac{4pn}{\delta}}\qquad\text{for all }1\leq i\leq n. \tag{24}\]
Putting together, we have with probability at least \(1-\delta\),
\[u(\mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j})=u(\mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j}^{\star}), \tag{25}\]
provided that
\[\epsilon\leq\frac{\delta}{4nr}\sqrt{\frac{\pi}{\log\frac{4pn}{\delta}}}.\]
Note that \(u(x)=1\) if \(x>0\), and \(u(x)=0\) if \(x\leq 0\). Then, combining with the definitions of \(\widetilde{\mathbf{w}}_{j}\) and \(\widetilde{b}_{j}\), the above property yields
\[\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j}^{\star})\right]^{2}\] \[=\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{j=1}^{r}a_{j}^{\star}u(\mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j}^{\star})\left(\mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j}-\mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}-b_{j}^{\star}\right)\right]^{2}\] \[=\frac{D_{1}^{2}}{\epsilon^{2}}\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{j=1}^{r}a_{j}^{\star}u(\mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j}^{\star})\left(\widetilde{\mathbf{w}}_{j}^{\top}\mathbf{x}_{i}+\widetilde{b}_{j}-\mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}-b_{j}^{\star}\right)\right]^{2}\] \[=\frac{D_{1}^{2}}{\epsilon^{2}}\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{j=1}^{r}a_{j}\mathrm{relu}(\widetilde{\mathbf{w}}_{j}^{\top}\mathbf{x}_{i}+\widetilde{b}_{j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j}^{\star})\right]^{2},\]
and the claim is proved. Here, the last equality holds due to (25) and \(a_{j}=a_{j}^{\star}\) for \(j=1,\ldots,r\).
### _Proof of (21)_
Notice that
\[\left|a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}+b_{j})-a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x}+b_{j}^{\star})\right|\] \[\leq\left\{\begin{array}{ll}\|\mathbf{w}_{j}-\mathbf{w}_{j}^{\star}\|_{1}\|\mathbf{x}\|_{\infty}+|b_{j}-b_{j}^{\star}|&\text{if }a_{j}=a_{j}^{\star},\\ \left(\|\mathbf{w}_{j}\|_{1}+\|\mathbf{w}_{j}^{\star}\|_{1}\right)\|\mathbf{x}\|_{\infty}+|b_{j}|+|b_{j}^{\star}|&\text{if }a_{j}\neq a_{j}^{\star},\end{array}\right.\]
which leads to
\[\left|\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}+b _{j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x}+b _{j}^{\star})\right|\] \[\leq D_{1}\max\left\{\|\mathbf{x}\|_{\infty},1\right\}. \tag{26}\]
For any fixed \((\mathbf{W},\mathbf{a},\mathbf{b})\), let
\[z_{i}:=\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j})- \sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j }^{\star}),\]
and define the following event set
\[\mathcal{E}:=\left\{\|\mathbf{x}_{i}\|_{\infty}\leq\sqrt{2\log\frac{4pn}{\delta}} \qquad\text{for all }1\leq i\leq n\right\}.\]
Then, with probability at least \(1-\delta\),
\[\frac{1}{n}\sum_{i=1}^{n}\left(z_{i}^{2}-\mathbb{E}\left[z_{i}^{ 2}\right]\right) \tag{27}\] \[=\frac{1}{n}\sum_{i=1}^{n}\biggl{\{}z_{i}^{2}\mathbb{1}(\mathcal{ E})-\mathbb{E}\left[z_{i}^{2}\mathbb{1}(\mathcal{E})\right]-\mathbb{E}\left[z_{i}^{2} \mathbb{1}(\overline{\mathcal{E}})\right]\biggr{\}}\] \[\geq-4D_{1}^{2}\log\frac{4pn}{\delta}\sqrt{\frac{1}{n}\log\frac{ 2}{\delta}}-D_{1}^{2}\frac{\delta}{n}\] \[\geq-5D_{1}^{2}\log\frac{4pn}{\delta}\sqrt{\frac{1}{n}\log\frac{ 2}{\delta}}. \tag{28}\]
Here, the first line holds due to (24); the last line comes from Hoeffding's inequality, and the fact that
\[\left|\mathbb{E}\left[z_{i}^{2}\mathbb{1}(\overline{\mathcal{E}})\right]\right| \leq D_{1}^{2}\left|\mathbb{E}\left[\|\mathbf{x}_{i}\|_{\infty}^{2}1( \|\mathbf{x}_{i}\|_{\infty}>\sqrt{2\log\frac{4pn}{\delta}})\right]\right|\] \[\leq D_{1}^{2}\int_{\sqrt{2\log\frac{4pn}{\delta}}}^{\infty}x^{2}d \mathbb{P}(\|\mathbf{x}_{i}\|_{\infty}<x)\] \[\leq D_{1}^{2}\int_{\sqrt{2\log\frac{4pn}{\delta}}}^{\infty}4xp \exp(-\frac{x^{2}}{2})dx\leq D_{1}^{2}\frac{\delta}{n}.\]
In addition, consider the following \(\epsilon\)-net
\[\mathcal{N}_{\epsilon}\coloneqq\bigg\{(\mathbf{W},\mathbf{a},\mathbf{b}):|W_{ij}|\in\frac{\epsilon}{r+S}\left[\left\lfloor\frac{B(r+S)}{\epsilon}\right\rfloor\right],\ \|\mathbf{W}\|_{0}\leq S,\ |b_{j}|\in\frac{\epsilon}{r+S}\left[\left\lfloor\frac{B(r+S)}{\epsilon}\right\rfloor\right],\ |a_{j}|=1\bigg\},\]
where \([n]:=\{1,2,\ldots,n-1\}\). Then, for all \((\mathbf{W},\mathbf{a},\mathbf{b})\) with \(\|\mathbf{W}\|_{1}\leq B\) and \(\|\mathbf{b}\|_{1}\leq B\), there exists some point, denoted by \(\left(\widetilde{\mathbf{W}},\widetilde{\mathbf{a}},\widetilde{\mathbf{b}}\right)\), in \(\mathcal{N}_{\epsilon}\) whose \(D_{1}\)-distance from \((\mathbf{W},\mathbf{a},\mathbf{b})\) is less than \(\epsilon\). For simplicity, define
\[z_{i}\coloneqq\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top} \mathbf{x}_{i}+b_{j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{ \star\top}\mathbf{x}_{i}+b_{j}^{\star}),\] \[\widetilde{z}_{i}\coloneqq\sum_{j=1}^{r}\widetilde{a}_{j}\mathrm{relu }(\widetilde{\mathbf{w}}_{j}^{\top}\mathbf{x}_{i}+\widetilde{b}_{j})-\sum_{j=1}^{r}a_{j}^ {\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j}^{\star}).\]
Similar to (26), we can derive that
\[\left|\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}+b_{j})-\sum_{j =1}^{r}\widetilde{a}_{j}\mathrm{relu}(\widetilde{\mathbf{w}}_{j}^{\top}\mathbf{x}+ \widetilde{b}_{j})\right|\] \[\leq\epsilon\max\left\{\|\mathbf{x}\|_{\infty},1\right\},\]
which implies
\[\left|z_{i}^{2}-\widetilde{z}_{i}^{2}\right|\leq\epsilon\left(\epsilon+D_{1}\right)\max\left\{\|\mathbf{x}_{i}\|_{\infty}^{2},1\right\},\]
and then, with probability at least \(1-\delta\), the analogous concentration bounds (29) and (30) hold uniformly over \(\mathcal{N}_{\epsilon}\) for some universal constant \(C_{5}>0\). Combining (28), (29), and (30) leads to
\[\frac{1}{n}\sum_{i=1}^{n}\left(z_{i}^{2}-\mathbb{E}\left[z_{i}^{2}\right]\right)\geq -5\left(\epsilon+D_{1}\right)^{2}\log\frac{4pn}{\delta}\sqrt{\frac{1}{n}\log\frac{2\left|\mathcal{N}_{\epsilon}\right|}{\delta}}\] \[-4\epsilon\left(\epsilon+D_{1}\right)\log\frac{4pn}{\delta}.\]
It follows that (21) holds.
### _Proof of (22)_
We first consider a simple case that \(b_{j}=0\) and \(b_{j}^{\star}=0\) for \(1\leq j\leq r\), and show that for some small constant \(C_{6}>0\),
\[\mathbb{E}\left[\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{ \top}\mathbf{x})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top} \mathbf{x})\right]^{2}\] \[\geq C_{6}\min\left\{\frac{1}{r},D_{2}^{2}\right\}. \tag{31}\]
Next, we will assume that
\[\mathbb{E}\left[\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{ \top}\mathbf{x})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top} \mathbf{x})\right]^{2}\leq\frac{C_{6}}{r}.\]
Otherwise, Inequality (31) already holds. According to Lemma 4, for any constant \(k\geq 0\) there exists some constant \(\alpha_{k}>0\) such that
\[\mathbb{E}\left[\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{ \top}\mathbf{x})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top} \mathbf{x})\right]^{2}\] \[\geq\alpha_{k}\bigg{\|}\sum_{j=1}^{r}a_{j}\|\mathbf{w}_{j}\|_{2}\big{(} \frac{\mathbf{w}_{j}}{\|\mathbf{w}_{j}\|_{2}}\big{)}^{\otimes 2k}\] \[\qquad-\sum_{j=1}^{r}a_{j}^{\star}\|\mathbf{w}_{j}^{\star}\|_{2}\big{(} \frac{\mathbf{w}_{j}^{\star}}{\|\mathbf{w}_{j}^{\star}\|_{2}}\big{)}^{\otimes 2k} \bigg{\|}_{F}^{2}. \tag{32}\]
Assumption 1 tells us that for any integer \(k\geq\frac{2}{\omega}\),
\[\big{|}\langle\mathbf{v}_{j_{1}}^{\star},\mathbf{v}_{j_{2}}^{\star}\rangle\big{|}\leq\frac{1}{r^{2}}\qquad\text{for }j_{1}\neq j_{2}, \tag{33}\]
where
\[\mathbf{v}_{j}\coloneqq\mathrm{vec}\left(\big{(}\frac{\mathbf{w}_{j}}{\|\mathbf{w}_{j}\|_ {2}}\big{)}^{\otimes k}\right)\quad\text{with}\quad\beta_{j}\coloneqq a_{j}\| \mathbf{w}_{j}\|_{2},\]
and
\[\mathbf{v}_{j}^{\star}\coloneqq\mathrm{vec}\left(\big{(}\frac{\mathbf{w}_{j}^{\star}} {\|\mathbf{w}_{j}^{\star}\|_{2}}\big{)}^{\otimes k}\right)\quad\text{with}\quad \beta_{j}^{\star}\coloneqq a_{j}^{\star}\|\mathbf{w}_{j}^{\star}\|_{2}.\]
Then, (32) gives
\[\mathbb{E}\left[\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{ \top}\mathbf{x})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top} \mathbf{x})\right]^{2}\] \[\geq\alpha_{3k}\left\|\sum_{j=1}^{r}\beta_{j}\mathbf{v}_{j}^{\otimes 6 }-\sum_{j=1}^{r}\beta_{j}^{\star}\mathbf{v}_{j}^{\star\otimes 6}\right\|_{F}^{2}.\]
Define
\[\mathbb{S}_{+}\coloneqq\mathrm{span}\left\{\mathbf{v}_{j}\right\}_{j;\beta_{j}>0} \qquad\mathbb{S}_{-}\coloneqq\mathrm{span}\left\{\mathbf{v}_{j}\right\}_{j:\beta_ {j}<0},\]
and
\[\mathbb{S}_{+}^{\star}\coloneqq\mathrm{span}\left\{\mathbf{v}_{j}^{\star}\right\}_{j :\beta_{j}^{\star}>0}\qquad\mathbb{S}_{-}^{\star}\coloneqq\mathrm{span} \left\{\mathbf{v}_{j}^{\star}\right\}_{j:\beta_{j}^{\star}<0}.\]
Let \(\mathbf{P}_{\mathbb{S}}\) and \(\mathbf{P}_{\mathbb{S}}^{\perp}\) denote the projections onto and perpendicular to the subspace \(\mathbb{S}\), respectively. By noticing that \(\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\mathbf{v}_{j}=\mathbf{0}\) for \(j\) obeying \(\beta_{j}<0\), and \(\mathbf{P}_{\mathbb{S}_{+}^{\star}}^{\perp}\mathbf{v}_{j}^{\star}=\mathbf{0}\) for \(j\) obeying \(\beta_{j}^{\star}>0\), one has
\[\left\|\sum_{j=1}^{r}\beta_{j}\mathbf{v}_{j}^{\otimes 6}-\sum_{j=1}^{r} \beta_{j}^{\star}\mathbf{v}_{j}^{\star\otimes 6}\right\|_{F}^{2}\] \[\geq\left\|\sum_{j:\beta_{j}>0}\beta_{j}\big{(}\mathbf{P}_{\mathbb{S} _{-}}^{\perp}\mathbf{v}_{j}\big{)}^{\otimes 2}\otimes\big{(}\mathbf{P}_{\mathbb{S}_{+}}^{ \perp}\mathbf{v}_{j}\big{)}^{\otimes 4}\right.\] \[\qquad-\sum_{j:\beta_{j}^{\star}<0}\beta_{j}^{\star}\big{(}\mathbf{P}_{ \mathbb{S}_{-}}^{\perp}\mathbf{v}_{j}^{\star}\big{)}^{\otimes 2}\otimes\big{(}\mathbf{P}_{\mathbb{S}_{+}}^{ \perp}\mathbf{v}_{j}^{\star}\big{)}^{\otimes 4}\bigg{\|}_{F}^{2}\] \[\geq\sum_{j:\beta_{j}^{\star}<0}\left\|\mathbf{P}_{\mathbb{S}_{-}}^{ \perp}\mathbf{v}_{j}^{\star}\right\|_{2}^{4},\]
where the penultimate inequality holds since the inner product between every pair of terms is positive, and the last inequality comes from the facts that \(|\beta_{j}^{\star}|\geq 1\) and (33).
Moreover, by means of AM-GM inequality and (33), one can see that
\[\sum_{j:\beta_{j}^{\star}<0}\left\|\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\mathbf{v}_{j}^{\star}\right\|_{2}^{4} \geq\frac{1}{r}\Big{(}\sum_{j:\beta_{j}^{\star}<0}\left\|\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\mathbf{v}_{j}^{\star}\right\|_{2}^{2}\Big{)}^{2}\] \[=\frac{1}{r}\left\|\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\big{[}\mathbf{v}_{j}^{\star}\big{]}_{j:\beta_{j}^{\star}<0}\right\|_{F}^{4}\] \[\geq\frac{1}{2r}\left\|\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\mathbf{P}_{\mathbb{S}_{-}^{\star}}\right\|_{F}^{4}.\]
Then combining with (31), the above result and the counterpart for \(\beta_{j}^{\star}>0\) lead to
\[\dim(\mathbb{S}_{-})\geq\dim(\mathbb{S}_{-}^{\star})\quad\text{and}\quad\dim( \mathbb{S}_{+})\geq\dim(\mathbb{S}_{+}^{\star}),\]
which gives
\[\dim(\mathbb{S}_{-})=\dim(\mathbb{S}_{-}^{\star})\quad\text{and}\quad\dim( \mathbb{S}_{+})=\dim(\mathbb{S}_{+}^{\star}).\]
Furthermore, for some small constant \(C_{6}>0\), we have
\[\mathrm{dist}(\mathbb{S}_{-},\mathbb{S}_{-}^{\star})\leq C_{6}\quad\text{and} \quad\mathrm{dist}(\mathbb{S}_{+},\mathbb{S}_{+}^{\star})\leq C_{6}.\]
Let \(\mathbf{P}_{i}^{\perp}\) denote the projection perpendicular to
\[\mathrm{span}\left\{\mathbf{v}_{j}^{\star}\right\}_{j\neq i:\beta_{j}^{\star}>0},\]
and
\[\gamma_{j}\coloneqq\frac{\beta_{j}\langle\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\mathbf{v}_{j},\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\mathbf{v}_{i}^{\star}\rangle^{2}\langle\mathbf{P}_{i}^{\perp}\mathbf{v}_{j},\mathbf{P}_{i}^{\perp}\mathbf{v}_{i}^{\star}\rangle^{2}}{\left\|\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\mathbf{v}_{i}^{\star}\right\|_{2}^{2}\left\|\mathbf{P}_{i}^{\perp}\mathbf{v}_{i}^{\star}\right\|_{2}^{2}}.\]
Then for any \(i\),
\[\left\|\sum_{j=1}^{r}\beta_{j}\mathbf{v}_{j}^{\otimes 6}-\sum_{j=1}^{r}\beta_{j}^{\star}\mathbf{v}_{j}^{\star\otimes 6}\right\|_{F}^{2}\] \[\geq\left\|\sum_{j:\beta_{j}>0}\beta_{j}\big{(}\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\mathbf{v}_{j}\big{)}^{\otimes 2}\otimes\mathbf{v}_{j}^{\otimes 4}-\sum_{j=1}^{r}\beta_{j}^{\star}\big{(}\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\mathbf{v}_{j}^{\star}\big{)}^{\otimes 2}\otimes\mathbf{v}_{j}^{\star\otimes 4}\right\|_{F}^{2}\] \[\geq\frac{1}{2}\left\|\sum_{j:\beta_{j}>0}\beta_{j}\big{(}\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\mathbf{v}_{j}\big{)}^{\otimes 2}\otimes\big{(}\mathbf{P}_{i}^{\perp}\mathbf{v}_{j}\big{)}^{\otimes 2}\otimes\mathbf{v}_{j}^{\otimes 2}-\beta_{i}^{\star}\big{(}\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\mathbf{v}_{i}^{\star}\big{)}^{\otimes 2}\otimes\big{(}\mathbf{P}_{i}^{\perp}\mathbf{v}_{i}^{\star}\big{)}^{\otimes 2}\otimes\mathbf{v}_{i}^{\star\otimes 2}\right\|_{F}^{2}\] \[\geq\frac{1}{2}\left\|\sum_{j:\beta_{j}>0}\gamma_{j}\mathbf{v}_{j}^{\otimes 2}-\beta_{i}^{\star}\big{\|}\mathbf{P}_{\mathbb{S}_{-}}^{\perp}\mathbf{v}_{i}^{\star}\big{\|}_{2}^{2}\big{\|}\mathbf{P}_{i}^{\perp}\mathbf{v}_{i}^{\star}\big{\|}_{2}^{2}\,\mathbf{v}_{i}^{\star\otimes 2}\right\|_{F}^{2},\]
which, together with (31), implies that there exists some \(j\) such that
\[\|\sqrt{\beta_{j}}\mathbf{v}_{j}-\sqrt{\beta_{i}^{*}}\mathbf{v}_{i}^{*}\|_{2}^{2}\leq \frac{1}{r}.\]
Without loss of generality, assume that
\[\|\sqrt{\beta_{j}}\mathbf{v}_{j}-\sqrt{\beta_{j}^{*}}\mathbf{v}_{j}^{*}\|_{2}^{2}\leq \frac{1}{r},\qquad\text{for all }1\leq j\leq r. \tag{34}\]
Then
\[\mathbb{E}\big{[}\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x})\big{]}^{2}\] \[\geq\alpha_{k}\left\|\sum_{j=1}^{r}\beta_{j}\mathbf{v}_{j}\mathbf{v}_{j}^{\top}-\sum_{j=1}^{r}\beta_{j}^{\star}\mathbf{v}_{j}^{\star}\mathbf{v}_{j}^{\star\top}\right\|_{F}^{2}\] \[\geq\alpha_{k}\sum_{j=1}^{r}\big{\|}\beta_{j}\mathbf{v}_{j}\mathbf{v}_{j}^{\top}-\beta_{j}^{\star}\mathbf{v}_{j}^{\star}\mathbf{v}_{j}^{\star\top}\big{\|}_{F}^{2}\] \[\quad-\frac{\alpha_{k}}{2r}\left(\sum_{j=1}^{r}\big{\|}\beta_{j}\mathbf{v}_{j}\mathbf{v}_{j}^{\top}-\beta_{j}^{\star}\mathbf{v}_{j}^{\star}\mathbf{v}_{j}^{\star\top}\big{\|}_{F}\right)^{2}\] \[\geq\frac{\alpha_{k}}{2}\sum_{j=1}^{r}\big{\|}\beta_{j}\mathbf{v}_{j}\mathbf{v}_{j}^{\top}-\beta_{j}^{\star}\mathbf{v}_{j}^{\star}\mathbf{v}_{j}^{\star\top}\big{\|}_{F}^{2}\,.\]
Here, the first line comes from (32); the second line holds through the following claim
\[\big{|}\big{\langle}\beta_{j_{1}}\mathbf{v}_{j_{1}}\mathbf{v}_{j_{1}}^{\top}-\beta_{j_{1}}^{\star}\mathbf{v}_{j_{1}}^{\star}\mathbf{v}_{j_{1}}^{\star\top},\,\beta_{j_{2}}\mathbf{v}_{j_{2}}\mathbf{v}_{j_{2}}^{\top}-\beta_{j_{2}}^{\star}\mathbf{v}_{j_{2}}^{\star}\mathbf{v}_{j_{2}}^{\star\top}\big{\rangle}\big{|}\] \[\leq\frac{1}{2r}\|\beta_{j_{1}}\mathbf{v}_{j_{1}}\mathbf{v}_{j_{1}}^{\top}-\beta_{j_{1}}^{\star}\mathbf{v}_{j_{1}}^{\star}\mathbf{v}_{j_{1}}^{\star\top}\|_{F}\|\beta_{j_{2}}\mathbf{v}_{j_{2}}\mathbf{v}_{j_{2}}^{\top}-\beta_{j_{2}}^{\star}\mathbf{v}_{j_{2}}^{\star}\mathbf{v}_{j_{2}}^{\star\top}\|_{F}\qquad\text{for }j_{1}\neq j_{2},\]
since for \(\mathbf{\delta}_{j}\coloneqq\sqrt{\beta_{j}}\mathbf{v}_{j}-\sqrt{\beta_{j}^{*}}\mathbf{v}_{j }^{*}\),
\[\beta_{j}\mathbf{v}_{j}\mathbf{v}_{j}^{\top}-\beta_{j}^{*}\mathbf{v}_{j}^{*}\mathbf{v}_{j}^{* \top}=\mathbf{\delta}_{j}\mathbf{\delta}_{j}^{\top}+\sqrt{\beta_{j}^{*}}\mathbf{\delta}_{j} \mathbf{v}_{j}^{*\top}+\sqrt{\beta_{j}^{*}}\mathbf{\delta}_{j}\mathbf{\delta}_{j}^{*\top}.\]
Then the conclusion is obvious by noticing that
\[\big{\|}\beta_{j}\mathbf{v}_{j}\mathbf{v}_{j}^{\top}-\beta_{j}^{*}\mathbf{v}_{j}^{*}\mathbf{v}_{ j}^{*\top}\big{\|}_{F}\geq\|\mathbf{w}_{j}-\mathbf{w}_{j}^{*}\|_{2}.\]
Finally, we analyze the general case with \(b_{j},b_{j}^{\star}\neq 0\), which is similar to the above argument. For simplicity, we only explain the different parts here. According to Lemma 4, for any constant \(k\geq 0\) there exist some constant \(\alpha_{k}>0\) and some function \(f_{k}:\mathbb{R}\to\mathbb{R}\) such that
\[\mathbb{E}\left[\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}+b_{j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x}+b_{j}^{\star})\right]^{2}\] \[\geq\sum_{k\geq\frac{12}{\omega}}^{\infty}\left\|\sum_{j=1}^{r}a_{j}f_{k}\Big{(}\frac{b_{j}}{\|\mathbf{w}_{j}\|_{2}}\Big{)}\|\mathbf{w}_{j}\|_{2}\Big{(}\frac{\mathbf{w}_{j}}{\|\mathbf{w}_{j}\|_{2}}\Big{)}^{\otimes k}-\sum_{j=1}^{r}a_{j}^{\star}f_{k}\Big{(}\frac{b_{j}^{\star}}{\|\mathbf{w}_{j}^{\star}\|_{2}}\Big{)}\|\mathbf{w}_{j}^{\star}\|_{2}\Big{(}\frac{\mathbf{w}_{j}^{\star}}{\|\mathbf{w}_{j}^{\star}\|_{2}}\Big{)}^{\otimes k}\right\|_{F}^{2}\] \[\gtrsim\sum_{j=1}^{r}\sum_{k\geq\frac{12}{\omega}}^{\infty}\left\|a_{j}f_{k}\Big{(}\frac{b_{j}}{\|\mathbf{w}_{j}\|_{2}}\Big{)}\mathbf{w}_{j}-a_{j}^{\star}f_{k}\Big{(}\frac{b_{j}^{\star}}{\|\mathbf{w}_{j}^{\star}\|_{2}}\Big{)}\mathbf{w}_{j}^{\star}\right\|_{F}^{2}\] \[\gtrsim\sum_{j=1}^{r}\inf_{R_{l}(\mathbf{x})}\mathbb{E}\Big{[}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}+b_{j})-a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x}+b_{j}^{\star})-R_{l}(\mathbf{x})\Big{]}^{2} \tag{35}\] \[\gtrsim\sum_{j=1}^{r}\big{(}\|\mathbf{w}_{j}-\mathbf{w}_{j}^{\star}\|_{2}^{2}+|b_{j}-b_{j}^{\star}|^{2}\big{)}\,.\]
Here, \(l=\big{\lceil}\frac{12}{\omega}\big{\rceil}\), and the second inequality holds in a similar way to the analysis above. The general conclusion then follows.
## Appendix C Proof of Lemma 2 (Upper bound)
For simplicity, let
\[z_{i}\coloneqq\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x}_{i}+b_{j}^{\star}),\] \[\widehat{z}_{i}\coloneqq\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j})-\sum_{j=1}^{r}a_{j}\mathrm{relu}(\widehat{\mathbf{w}}_{j}^{\top}\mathbf{x}_{i}+\widehat{b}_{j}).\]
We can bound the first term in the right hand side by
\[\frac{1}{n}\sum_{i=1}^{n}\widehat{z}_{i}^{2}\] \[=\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{j=1}^{r}a_{j}\left(\mathrm{relu }(\mathbf{w}_{j}^{\top}\mathbf{x}_{i}+b_{j})-\mathrm{relu}(\widehat{\mathbf{w}}_{j}^{\top} \mathbf{x}_{i}+\widehat{b}_{j})\right)\right]^{2}\] \[\leq\frac{1}{n}\sum_{i=1}^{n}\left[\sum_{j=1}^{r}\left|(\mathbf{w}_{j }-\widehat{\mathbf{w}}_{j})^{\top}\mathbf{x}_{i}\right|\right]^{2}\] \[\leq\frac{r}{n}\sum_{i=1}^{n}\sum_{j=1}^{r}\left|(\mathbf{w}_{j}- \widehat{\mathbf{w}}_{j})^{\top}\mathbf{x}_{i}\right|^{2},\]
where the second line holds due to the contraction (1-Lipschitz) property of the ReLU function, and the last line comes from the AM-GM inequality. Lemma 5 further gives, for some constant \(C_{7}>0\),
\[\sum_{j=1}^{r}\frac{1}{n}\sum_{i=1}^{n}\left|(\mathbf{w}_{j}-\widehat{\mathbf{w}}_{j})^{\top}\mathbf{x}_{i}\right|^{2}\leq C_{7}\sum_{j=1}^{r}\left\|\mathbf{w}_{j}-\widehat{\mathbf{w}}_{j}\right\|_{2}^{2}\] \[+C_{7}\frac{\log^{3}\frac{p}{n\delta}}{n}\sum_{j=1}^{r}\left\|\mathbf{w}_{j}-\widehat{\mathbf{w}}_{j}\right\|_{1}^{2}\]
holds with probability at least \(1-\delta\). In addition,
\[\sum_{j=1}^{r}\left\|\mathbf{w}_{j}-\widehat{\mathbf{w}}_{j}\right\|_{1}^ {2} \leq\left\|\mathbf{W}-\widehat{\mathbf{W}}\right\|_{1}^{2}\] \[\leq\left(\left\|\mathbf{W}^{\star}\right\|_{1}-\left\|\widehat{\mathbf{ W}}\right\|_{1}\right)^{2}\leq D_{1}^{2},\]
and
\[\sum_{j=1}^{r}\left\|\mathbf{w}_{j}-\widehat{\mathbf{w}}_{j}\right\|_{2}^{2} =\left\|\mathbf{W}-\widehat{\mathbf{W}}\right\|_{\mathrm{F}}^{2}\leq\left\|\mathbf{W}-\widehat{\mathbf{W}}\right\|_{1}\left\|\mathbf{W}-\widehat{\mathbf{W}}\right\|_{\infty}\] \[\leq\frac{\left(\left\|\mathbf{W}^{\star}\right\|_{1}-\left\|\widehat{\mathbf{W}}\right\|_{1}\right)\left(\left\|\mathbf{W}^{\star}\right\|_{1}-\left\|\widehat{\mathbf{W}}^{\star}\right\|_{1}\right)}{S/2}\] \[\leq\frac{4}{S}D_{1}^{2}.\]
Here, \(\widehat{\mathbf{W}}^{\star}\) denotes the entries of \(\widehat{\mathbf{W}}\) on the support set of \(\mathbf{W}^{\star}\), and we make use of the fact that \(\|\widehat{\mathbf{W}}\|_{1}\leq\|\mathbf{W}^{\star}\|_{1}\) and
\[\left\|\mathbf{W}-\widehat{\mathbf{W}}\right\|_{\infty}\leq\frac{\|\widehat{\mathbf{W}}^ {\star}-\widehat{\mathbf{W}}\|_{1}}{S-s}\leq\frac{\|\mathbf{W}^{\star}\|_{1}-\| \widehat{\mathbf{W}}^{\star}\|_{1}}{S/2}.\]
Putting everything together gives the desired result.
## Appendix D Technical lemmas
**Lemma 3**.: _Let \((\mathbf{W},\mathbf{a},\mathbf{b})\) be such that \(\left\|\mathbf{W}\right\|_{0}+\left\|\mathbf{b}\right\|_{0}+\left\|\mathbf{W}^{\star}\right\|_{0}+\left\|\mathbf{b}^{\star}\right\|_{0}\leq S\). Assume that \(\left\|\mathbf{W}\right\|_{1}+\left\|\mathbf{b}\right\|_{1}\leq\left\|\mathbf{W}^{\star}\right\|_{1}+\left\|\mathbf{b}^{\star}\right\|_{1}\) and \(\left\|\mathbf{w}_{j}^{\star}\right\|_{2}^{2}+|b_{j}^{\star}|^{2}\leq 1\). Then one has_
\[D_{1}\leq 2\sqrt{S}D_{2}, \tag{37}\]
_where \(D_{1},D_{2}\) are defined in (17)._
Proof.: For simplicity, assume that
\[D_{2}^{2}=\sum_{j\in\mathcal{J}}\left(\|\mathbf{w}_{j}-\mathbf{w}_{j}^{\star}\|_{2}^{2}+|b_{j}-b_{j}^{\star}|^{2}\right)+\sum_{j\notin\mathcal{J}}\left(\|\mathbf{w}_{j}^{\star}\|_{2}^{2}+|b_{j}^{\star}|^{2}\right).\]
Here, \(j\in\mathcal{J}\) means that \(a_{j}=a_{j}^{\star}\) and
\[\|\mathbf{w}_{j}-\mathbf{w}_{j}^{\star}\|_{2}^{2}+|b_{j}-b_{j}^{\star}|^{2}\leq\|\mathbf{w} _{j}^{\star}\|_{2}^{2}+|b_{j}^{\star}|^{2}.\]
Then, according to the AM-GM inequality, one has
\[\sqrt{S}D_{2} \geq\sum_{j\in\mathcal{J}}\left(\|\mathbf{w}_{j}-\mathbf{w}_{j}^{\star}\|_{1}+|b_{j}-b_{j}^{\star}|\right)\] \[\quad+\sum_{j\notin\mathcal{J}}\left(\|\mathbf{w}_{j}^{\star}\|_{1}+|b_{j}^{\star}|\right)\] \[\geq\sum_{j\in\mathcal{J}}\left(\|\mathbf{w}_{j}^{\star}\|_{1}-\|\mathbf{w}_{j}\|_{1}+|b_{j}^{\star}|-|b_{j}|\right)+\|\mathbf{W}^{\star}\|_{1}\] \[\quad+\|\mathbf{b}^{\star}\|_{1}-\sum_{j\in\mathcal{J}}\left(\|\mathbf{w}_{j}^{\star}\|_{1}+|b_{j}^{\star}|\right)\] \[\geq\sum_{j\notin\mathcal{J}}\biggl{(}\|\mathbf{w}_{j}\|_{1}+|b_{j}|\biggr{)},\]
which implies that
\[2\sqrt{S}D_{2} \geq\sum_{j\in\mathcal{J}}\left(\|\mathbf{w}_{j}-\mathbf{w}_{j}^{\star}\|_ {1}+|b_{j}-b_{j}^{\star}|\right)\] \[\quad+\sum_{j\notin\mathcal{J}}\left(\|\mathbf{w}_{j}^{\star}\|_{1}+|b_ {j}^{\star}|+\|\mathbf{w}_{j}\|_{1}+|b_{j}|\right).\]
Thus we conclude the proof.
**Lemma 4** (Theorem 2.1 [27]).: _For any constant \(k\geq 0\), there exists some universal function \(f_{k}:\mathbb{R}\to\mathbb{R}\) such that_
\[\mathbb{E} \left[\sum_{j=1}^{r}a_{j}\mathrm{relu}(\mathbf{w}_{j}^{\top}\mathbf{x}+b_ {j})-\sum_{j=1}^{r}a_{j}^{\star}\mathrm{relu}(\mathbf{w}_{j}^{\star\top}\mathbf{x}+b_{j }^{\star})\right]^{2}\] \[=\sum_{k=0}^{\infty}\biggl{\|}\sum_{j=1}^{r}a_{j}f_{k}\biggl{(}\frac{ b_{j}}{\|\mathbf{w}_{j}\|_{2}}\biggr{)}\|\mathbf{w}_{j}\|_{2}\biggl{(}\frac{\mathbf{w}_{j}}{\|\mathbf{w}_{j}\|_{2}} \biggr{)}^{\otimes k}\] \[\qquad-\sum_{j=1}^{r}a_{j}^{\star}f_{k}\biggl{(}\frac{b_{j}^{\star}}{ \|\mathbf{w}_{j}^{\star}\|_{2}}\biggr{)}\|\mathbf{w}_{j}^{\star}\|_{2}\biggl{(}\frac{\mathbf{ w}_{j}^{\star}}{\|\mathbf{w}_{j}^{\star}\|_{2}}\biggr{)}^{\otimes k}\biggr{\|}_{F}^{2}, \tag{38}\]
_with_
\[\alpha_{k}\coloneqq f_{2k}(0)>0,\qquad\text{for all }k>0. \tag{39}\]
_In addition, we have_
\[\inf_{R_{l}}\mathbb{E}\left[a\mathrm{relu}(\mathbf{w}^{\top}\mathbf{x}+b )-\sum_{j=1}^{r}a^{\star}\mathrm{relu}(\mathbf{w}^{\star\top}\mathbf{x}+b^{\star})-R_{l }(\mathbf{x})\right]^{2}\] \[=\sum_{k>l}^{\infty}\biggl{\|}af_{k}\biggl{(}\frac{b}{\|\mathbf{w}\|_{2} }\biggr{)}\|\mathbf{w}\|_{2}\biggl{(}\frac{\mathbf{w}}{\|\mathbf{w}\|_{2}}\biggr{)}^{ \otimes k}\] \[\qquad-a^{\star}f_{k}\biggl{(}\frac{b^{\star}}{\|\mathbf{w}^{\star}\|_{2}} \biggr{)}\|\mathbf{w}^{\star}
**Lemma 5**.: _There exists some constant \(c_{0}>0\) such that, if \(n\geq c_{0}\left(s\log\frac{p}{s}+\log\frac{1}{\delta}\right)\), then_

\[\frac{1}{n}\sum_{i=1}^{n}(\mathbf{w}^{\top}\mathbf{x}_{i})^{2}\leq 4\|\mathbf{w}\|_{2}^{2}+\frac{4\log^{2}L}{s}\|\mathbf{w}\|_{1}^{2}\qquad\text{for all }\mathbf{w}\in\mathbb{R}^{p}\]

_holds with probability at least \(1-\delta\), where \(L=\lceil p/s\rceil\)._
Proof.: Before proceeding, we introduce some useful techniques based on the Restricted Isometry Property (RIP). Let \(\mathbf{X}\coloneqq\frac{1}{\sqrt{n}}[\mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{n}]\). For some constant \(c_{0}>0\), if \(n\geq c_{0}\left(s\log\frac{p}{s}+\log\frac{1}{\delta}\right)\), then with probability at least \(1-\delta\),
\[\big{\|}\mathbf{X}^{\top}\mathbf{w}\big{\|}_{2}^{2}\leq 2\|\mathbf{w}\|_{2}^{2} \tag{42}\]
holds for all \(\mathbf{w}\) satisfying \(\|\mathbf{w}\|_{0}\leq s\).
We divide the entries of \(\mathbf{w}\) into several groups \(\mathcal{S}_{1}\cup\mathcal{S}_{2}\cup\dots\cup\mathcal{S}_{L}\) of equal size \(s\) (except for \(\mathcal{S}_{L}\)), such that the entries in \(\mathcal{S}_{j}\) are no smaller in magnitude than those in \(\mathcal{S}_{k}\) for any \(j<k\). Then, according to (42), one has
\[\frac{1}{n}\sum_{i=1}^{n}(\mathbf{w}^{\top}\mathbf{x}_{i})^{2} =\mathbf{w}^{\top}\mathbf{X}\mathbf{X}^{\top}\mathbf{w}=\sum_{j,k}\mathbf{w}_{\mathcal{ S}_{j}}^{\top}\mathbf{X}\mathbf{X}^{\top}\mathbf{w}_{\mathcal{S}_{k}}\] \[\leq 2\sum_{j,k}\|\mathbf{w}_{\mathcal{S}_{j}}\|_{2}\|\mathbf{w}_{ \mathcal{S}_{k}}\|_{2}=2\Big{(}\sum_{l=1}^{L}\|\mathbf{w}_{\mathcal{S}_{l}}\|_{2 }\Big{)}^{2}.\]
In addition, the order of \(\mathbf{w}_{\mathcal{S}_{l}}\) yields for \(l>1\),
\[\|\mathbf{w}_{\mathcal{S}_{l}}\|_{2} \leq\sqrt{s}\|\mathbf{w}_{\mathcal{S}_{l}}\|_{\infty}\leq\frac{1}{(l -1)\sqrt{s}}\|\mathbf{w}\|_{1},\]
which leads to
\[\Big{(}\sum_{l=1}^{L}\|\mathbf{w}_{\mathcal{S}_{l}}\|_{2}\Big{)}^{2} \leq 2\|\mathbf{w}_{\mathcal{S}_{1}}\|_{2}^{2}+2\Big{(}\sum_{l=2}^{L}\frac{1}{(l-1)\sqrt{s}}\|\mathbf{w}\|_{1}\Big{)}^{2}\] \[\leq 2\|\mathbf{w}\|_{2}^{2}+\frac{2\log^{2}L}{s}\|\mathbf{w}\|_{1}^{2}.\]
We conclude the proof by combining the above inequalities.
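The shelling (group-splitting) step above is deterministic, so it can be sanity-checked numerically. The following minimal Python sketch (numpy only; the dimension \(p=200\), group size \(s=10\), and the random draw are arbitrary test choices, not part of the paper) verifies the bound \(\big(\sum_{l}\|\mathbf{w}_{\mathcal{S}_{l}}\|_{2}\big)^{2}\leq 2\|\mathbf{w}\|_{2}^{2}+\frac{2\log^{2}L}{s}\|\mathbf{w}\|_{1}^{2}\) on random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def shelling_bound_holds(w, s):
    # Sort entries by magnitude and split into groups of size s
    # (the last group may be smaller), exactly as in the proof above.
    idx = np.argsort(-np.abs(w))
    groups = [idx[i:i + s] for i in range(0, len(w), s)]
    L = len(groups)
    lhs = sum(np.linalg.norm(w[g]) for g in groups) ** 2
    rhs = 2 * np.linalg.norm(w) ** 2 \
        + (2 * np.log(L) ** 2 / s) * np.linalg.norm(w, 1) ** 2
    return lhs <= rhs + 1e-9

for _ in range(1000):
    w = rng.standard_normal(200) * rng.exponential(1.0, size=200)
    assert shelling_bound_holds(w, s=10)
print("shelling bound verified on 1000 random vectors")
```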
## Appendix E Further experimental details
The hyper-parameters used in Section III are summarized in Table IX.
We briefly explain the variable selection procedure. We first obtain a vector of the variables' importance. For 'LASSO' and 'OMP', we use the absolute value of the estimated coefficient as the variable importance; for 'NN', 'GLASSO', and 'GSR', we obtain the importance by applying a row-wise \(\ell_{2}\)-norm to the weight matrix in the input layer of the neural network; for 'RF', 'GB', and 'LNET', we use the importance produced by those methods. Once we have the importance vector, we can obtain the receiver operating characteristic (ROC) curve for the synthetic datasets by varying the cut-off threshold and calculating the AUC score. As for variable selection, we apply a two-component GMM to the importance vector for the synthetic datasets. The variables in the cluster with higher importance are considered significant. Then, we count the correctly or wrongly selected variables accordingly. For the BGSBoy and UJIIndoorLoc datasets, the variables with the three and ten largest importance scores are selected, respectively.
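As a concrete illustration of this selection step, the sketch below (assuming scikit-learn is available; `select_variables` and the random weight matrix are illustrative stand-ins, not the paper's code) fits a two-component GMM to an importance vector and returns the indices in the higher-importance cluster.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_variables(importance):
    """Cluster a variable-importance vector with a two-component GMM
    and keep the variables falling in the higher-importance cluster."""
    imp = np.asarray(importance).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(imp)
    labels = gmm.predict(imp)
    high = np.argmax(gmm.means_.ravel())  # cluster with the larger mean
    return np.flatnonzero(labels == high)

# Illustrative stand-in: importance scores from an input-layer weight
# matrix with one row per input variable (row-wise l2-norm).
W_input = np.random.default_rng(0).standard_normal((100, 64))
importance = np.linalg.norm(W_input, axis=1)
print(select_variables(importance))
```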
|
2306.10742 | BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic
Programming | In this paper, we introduce BNN-DP, an efficient algorithmic framework for
analysis of adversarial robustness of Bayesian Neural Networks (BNNs). Given a
compact set of input points $T\subset \mathbb{R}^n$, BNN-DP computes lower and
upper bounds on the BNN's predictions for all the points in $T$. The framework
is based on an interpretation of BNNs as stochastic dynamical systems, which
enables the use of Dynamic Programming (DP) algorithms to bound the prediction
range along the layers of the network. Specifically, the method uses bound
propagation techniques and convex relaxations to derive a backward recursion
procedure to over-approximate the prediction range of the BNN with piecewise
affine functions. The algorithm is general and can handle both regression and
classification tasks. On a set of experiments on various regression and
classification tasks and BNN architectures, we show that BNN-DP outperforms
state-of-the-art methods by up to four orders of magnitude in both tightness of
the bounds and computational efficiency. | Steven Adams, Andrea Patane, Morteza Lahijanian, Luca Laurenti | 2023-06-19T07:19:15Z | http://arxiv.org/abs/2306.10742v1 | # BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic Programming
###### Abstract
In this paper, we introduce BNN-DP, an efficient algorithmic framework for analysis of adversarial robustness of Bayesian Neural Networks (BNNs). Given a compact set of input points \(T\subset\mathbb{R}^{n}\), BNN-DP computes lower and upper bounds on the BNN's predictions for all the points in \(T\). The framework is based on an interpretation of BNNs as stochastic dynamical systems, which enables the use of Dynamic Programming (DP) algorithms to bound the prediction range along the layers of the network. Specifically, the method uses bound propagation techniques and convex relaxations to derive a backward recursion procedure to over-approximate the prediction range of the BNN with piecewise affine functions. The algorithm is general and can handle both regression and classification tasks. On a set of experiments on various regression and classification tasks and BNN architectures, we show that BNN-DP outperforms state-of-the-art methods by up to four orders of magnitude in both tightness of the bounds and computational efficiency.
precision and computational time. For instance, on the Fashion MNIST dataset, our approach achieves an average 93% improvement in certified lower bound compared to Berrada et al. (2021), while being around 3 orders of magnitude faster. In summary, this paper makes the following main contributions:
* we introduce a framework based on stochastic dynamic programming and convex relaxation for the analysis of adversarial robustness of BNNs,
* we implement an efficient algorithmic procedure of our framework for BNNs trained with Gaussian variational inference (VI) in both regression and classification settings,1 and

Footnote 1: Our code is available at [https://github.com/sjladams/BNN_DP](https://github.com/sjladams/BNN_DP).
* we benchmark the robustness of a variety of BNN models on five datasets, empirically demonstrating how our method outperforms state-of-the art approaches by orders of magnitude in both tightness and efficiency.
Related Works. Many algorithms have been developed for certification of deterministic (i.e., non-Bayesian) neural networks (NNs) (Katz et al., 2017; Weng et al., 2018; Wong and Kolter, 2018; Bunel et al., 2020). However, these methods cannot be employed for BNNs because they all assume the weights of the network have a fixed value, whereas in the Bayesian setting they are distributed according to the BNN posterior. Methods for certification of BNNs have recently been presented in (Wicker et al., 2020; Berrada et al., 2021; Lechner et al., 2021). Wicker et al. (2020) consider a different notion of robustness than the one in this paper, not directly related to adversarial attacks on the BNN decision. Furthermore, that work considers a partitioning procedure in weight space that makes it applicable only to small networks and/or networks with small variance. The method proposed in (Berrada et al., 2021) is based on dual optimization. Hence, it is restricted to distributions with bounded support and needs to solve non-convex problems at large computational cost for classification tasks. Separately, Lechner et al. (2021) aim to build an intrinsically safe BNN by truncating the posterior in the weight space. Cardelli et al. (2019); Wicker et al. (2021) introduced statistical approaches to quantify the robustness of a BNN, which, however, do not return the formal guarantees that are necessary in safety-critical settings. Empirical methods that use the uncertainty of BNNs to flag adversarial examples are introduced in (Rawat et al., 2017; Smith and Gal, 2018). These, however, consider only point-wise uncertainty estimates, specific to a particular test point, and do not account for worst-case adversarial perturbations.
Various recent works have proposed formal methods to compute adversarial robustness for Gaussian Processes (GPs) (Cardelli et al., 2019; Smith and Gal, 2018; Patane et al., 2022; Smith et al., 2022). In BNNs, however, due to the non-linearity of activation functions, the distribution over the space of functions induced by a BNN is generally non-Gaussian, even if a Gaussian distribution in weight space is assumed. Hence, the techniques that are developed for GPs cannot be directly applied to BNNs.
## 2 Robust Certification of BNNs Problem
### Bayesian Neural Networks (BNNs)
For an input vector \(x\in\mathbb{R}^{n_{0}}\), we consider fully connected neural networks \(f^{w}:\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}^{n_{K+1}}\) of the following form for \(k=0,\ldots,K\):2
Footnote 2: Note that the formulation of neural networks considered in Eqn (1) also includes convolutional neural networks (CNNs). In fact, the convolutional operation can be interpreted as a linear transformation into a larger space; see, e.g., Chapter 3.4.1 in (Gal and Ghahramani, 2016). This allows us to represent convolutional layers equivalently as fully connected layers, and do verification for CNNs as we show in Section 7.
\[\begin{split} z_{0}&=x,\qquad\qquad\zeta_{k+1}=W_{k} (z_{k}^{T},1)^{T},\\ z_{k}&=\phi_{k}(\zeta_{k}),\quad f^{w}\left(x\right) =\zeta_{K+1},\end{split} \tag{1}\]
where \(K\) is the number of hidden layers, \(n_{k}\) is the number of neurons of layer \(k\), \(\phi_{k}:\mathbb{R}^{n_{k}}\rightarrow\mathbb{R}^{n_{k}}\) is a vector of continuous activation functions (one for each neuron) in layer \(k\), and \(W_{k}\in\mathbb{R}^{n_{k}\times n_{k+1}}\) is the matrix of weights and biases that correspond to the \(k\)th layer of the network. We denote the vector of parameters by \(w=(W_{1}^{T},\ldots,W_{K}^{T})^{T}\) and the mapping from \(\zeta_{k_{1}}\) to \(\zeta_{k_{2}}\) by \(f^{w}_{k_{1}:k_{2}}:\mathbb{R}^{n_{k_{1}}}\rightarrow\mathbb{R}^{n_{k_{2}}}\) for \(k_{1},k_{2}\in\{0,...,K\}\). \(\zeta_{K+1}\) is the final output of the network (or the logit in the case of classification problems).
In the Bayesian setting, one starts by assuming a prior distribution \(p(w)\) over the parameters \(w\) and a likelihood function \(p(y|x,w)\). We adopt bold notation to denote random variables and write \(f^{w}\) to denote a BNN defined according to Eqns. (1). The likelihood is generally assumed to be Gaussian in case of regression and categorical for classification, where the probability for each class is given as the softmax of the neural network final logits (MacKay, 1992). Then, given a training dataset \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{N_{\mathcal{D}}}\), learning amounts to computing the posterior distribution \(p(w|\mathcal{D})\) via the Bayes rule (MacKay, 1992). The posterior predictive distribution over an input \(x^{*}\) is finally obtained by marginalising the posterior over the likelihood, i.e., \(p(y^{*}|x^{*},\mathcal{D})=\int p(y^{*}|x^{*},w)p(w|\mathcal{D})dw\). The final output (decision) of the BNN, \(\hat{y}(x^{*})\), is then computed by minimising a loss function \(\mathcal{L}\) averaged over the predictive distribution, i.e.,
\[\hat{y}(x^{*})=\arg\min_{y}\int\mathcal{L}(y,y^{*})p(y^{*}|x^{*},\mathcal{D})dy ^{*}.\]
In this paper, we focus on both regression and classification problems. In regression, an \(l_{2}\) loss is generally used, which leads to an optimal decision \(\hat{y}(x^{*})\) given by the mean of the predictive posterior distribution (Neal, 2012), i.e., \(\hat{y}(x^{*})=\mathbb{E}_{y\sim p(y|x^{*},\mathcal{D})}\left[\mathbf{y}\right].\)3 For classification, \(\ell_{0-1}\) loss is typically employed, which results in
Footnote 3: In the remainder, we may omit the probability measure of an expectation or probability when it is clear from the context.
\[\hat{y}(x^{*})=\operatorname*{arg\,max}_{i\in\{1,...,n_{K+1}\}}\mathbb{E}_{ \boldsymbol{w}\sim p(w|\mathcal{D})}\left[\text{softmax}^{(i)}(f^{\boldsymbol {w}}\left(x^{*}\right))\right],\]
where \(\text{softmax}^{(i)}\) is the \(i\)th component of the \(n_{K+1}\)-dimensional softmax function.4 Unfortunately, because of the non-linearity introduced by the neural network architecture, the computation of the posterior distribution and consequently of \(\hat{y}(x^{*})\) cannot be done analytically. Therefore, approximate inference methods are required. In what follows, we focus on mean-field Gaussian Variational Inference (VI) approximations (Blundell et al., 2015). Specifically, we fit an approximating multivariate Gaussian \(q(w)=\mathcal{N}\left(w\mid\mu_{w};\,\Sigma_{w}\right)\approx p(w\mid\mathcal{ D})\) with mean \(\mu_{w}\) and block diagonal covariance matrix \(\Sigma_{w}\) such that for \(k\in\{0,\ldots,K\}\) and \(i\in\{1,\ldots,n_{k}\}\), the approximating distribution of the parameters corresponding to the \(i\)th node of the \(k\)th layer is
Footnote 4: Analogous formulas can be obtained for the weighted classification loss by factoring in misclassification weights in the argmax.
\[q(W_{k}^{(i,:)})=\mathcal{N}\left(W_{k}^{(i,:)}\mid\mu_{w,k,i};\,\Sigma_{w,k,i }\right) \tag{2}\]
with mean \(\mu_{w,k,i}\) and covariance matrix \(\Sigma_{w,k,i}\).5
Footnote 5: BNN-DP can be extended to Gaussian approximation distributions with inter-node or inter-layer correlations. In that case, to solve the backward iteration scheme of Theorem 1, the value functions need to be marginalized over partitions in weight space.
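For concreteness, the following minimal sketch (numpy; all shapes and parameter values are illustrative assumptions, not taken from the paper) samples weights from a mean-field Gaussian posterior as in Eqn. (2) and estimates the predictive mean of the network in Eqn. (1) by Monte Carlo. Such sampling only yields statistical estimates; the goal of this paper is to replace them with certified bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Mean-field Gaussian posterior q(w) for a BNN with one hidden layer
# (illustrative shapes: n0 = 2 inputs, n1 = 8 hidden units, n2 = 1 output;
# each row of [W_k | b_k] is an independent Gaussian as in Eqn. (2)).
mu = [rng.standard_normal((8, 3)), rng.standard_normal((1, 9))]
sd = [0.1 * np.ones((8, 3)), 0.1 * np.ones((1, 9))]

def forward(x, weights):
    z = x
    for k, W in enumerate(weights):
        zeta = W @ np.append(z, 1.0)          # zeta_{k+1} = W_k (z_k^T, 1)^T
        z = relu(zeta) if k < len(weights) - 1 else zeta
    return z

def mc_predictive_mean(x, n_samples=10_000):
    total = 0.0
    for _ in range(n_samples):
        ws = [m + s * rng.standard_normal(m.shape) for m, s in zip(mu, sd)]
        total += forward(x, ws)
    return total / n_samples

print(mc_predictive_mean(np.array([0.3, -0.7])))
```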
**Remark 1**.: _While our primary focus is on VI, the techniques presented in this paper can be applied to other approximate inference methods, such as HMC (Neal, 2012) and Dropout (Gal & Ghahramani, 2016). In these cases, the prediction of the BNN is obtained by averaging over a finite ensemble of NNs. For this setting, the dynamic programming problem in Theorem 1 reduces to computing piecewise linear relaxations for each layer of a weighted sum, i.e., an average, of deterministic neural networks, and propagating the resulting relaxations backward._
### Problem Statement
Given a BNN \(f^{\boldsymbol{w}}\) trained on a dataset \(\mathcal{D}\), as common in the literature (Madry et al., 2017), for a generic test point \(x^{*}\), we represent the possible adversarial perturbations by defining a compact neighbourhood \(T\) around \(x^{*}\) and measure the changes in the BNN output caused by limiting the perturbations to lie within \(T\).
**Definition 1** (Adversarial Robustness).: _Consider a BNN \(f^{\boldsymbol{w}}\), a compact set \(T\subset\mathbb{R}^{n_{0}}\), and input point \(x^{*}\in T\). For a given threshold \(\gamma>0\), \(f^{\boldsymbol{w}}\) is adversarially robust in \(x^{*}\) iff_
\[\forall x\in T,\quad\|\hat{y}(x)-\hat{y}(x^{*})\|_{p}\leq\gamma,\]
_where \(\|\cdot\|_{p}\) is an \(\ell_{p}\) norm._
Definition 1 is analogous to the standard notion of adversarial robustness employed for deterministic neural networks (Katz et al., 2017) and Bayesian models (Patane et al., 2022). As discussed in Section 2.1, the particular form of a BNN's output depends on the specific application considered. Below, we focus on regression and classification problems.
**Problem 1**.: _Let \(T\subset\mathbb{R}^{n_{0}}\) be a compact subset. Define functions \(I(y)=y\) and \(\text{softmax}(y)=\left[\text{softmax}^{(1)}(y),...,\text{softmax}^{(n_{K+1})}(y)\right]\). Then, for a BNN \(f^{\boldsymbol{w}}\), \(h\in\{I,\text{softmax}\}\), and \(i\in\{1,...,n_{K+1}\}\), compute:_
\[\begin{split}\pi_{\min}^{(i)}(T)&=\min_{x\in T} \mathbb{E}_{\boldsymbol{w}\sim q(\cdot)}\left[h^{(i)}(f^{\boldsymbol{w}}\left( x\right))\right],\\ \pi_{\max}^{(i)}(T)&=\max_{x\in T}\mathbb{E}_{ \boldsymbol{w}\sim q(\cdot)}\left[h^{(i)}(f^{\boldsymbol{w}}\left(x\right)) \right].\end{split} \tag{3}\]
In the regression case (\(h=I\)), Problem 1 seeks to compute the ranges of the expectation of the BNN for all \(x\in T\). Similarly, in the classification case (\(h=\text{softmax}\)), Eqns. (3) define the ranges of the expectation of the softmax of each class for \(x\in T\). It is straightforward to see that these quantities are sufficient to check whether \(f^{\boldsymbol{w}}\) is adversarially robust for \(x\in T\); that is, if \(\sup_{x\in T}||\hat{y}(x)-\hat{y}(x^{*})||_{p}\leq\gamma\).
**Remark 2**.: _Our method can be extended to other losses, i.e., other forms of \(h\) in Eqns. (3), as long as affine relaxations of \(h\) can be computed._
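Given the solution of Problem 1, checking the adversarial robustness of Definition 1 is a simple post-processing step. A minimal sketch for the regression case, where \(\hat{y}\) is the predictive mean (the bound vectors and threshold below are hypothetical inputs): since \(\hat{y}(x)\) is only known to lie in the box \([\pi_{\min},\pi_{\max}]\), the check is sound but possibly conservative.

```python
import numpy as np

def certified_robust(pi_min, pi_max, y_star, gamma, p=2):
    """Check sup_{x in T} ||yhat(x) - yhat(x*)||_p <= gamma from
    component-wise bounds pi_min <= yhat(x) <= pi_max over T."""
    worst = np.maximum(np.abs(pi_max - y_star), np.abs(pi_min - y_star))
    return np.linalg.norm(worst, ord=p) <= gamma

print(certified_robust(np.array([0.9]), np.array([1.2]),
                       y_star=np.array([1.0]), gamma=0.25))
```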
Approach Outline. Due to the non-convex nature of \(f^{\mathbf{w}}\) and possibly \(h\), the computation of \(\mathbb{E}_{\mathbf{w}\sim q(\cdot)}\left[h(f^{\mathbf{w}}\left(x\right))\right]\) is analytically infeasible. To solve this problem, in Section 4, we view BNNs as stochastic dynamical systems evolving over the layers of the neural network. Through this, we show that adversarial robustness can be characterized as the solution of a dynamic programming (DP) problem. This allows us to break its computation into \(K\) simpler optimization problems, one for each layer. Each problem essentially queries a back-propagation of the uncertainty of the BNN through \(h\) and from one layer of the neural network to the next. Due to the non-convex nature of the layers of the BNN, these problems still cannot be solved exactly. We overcome this problem by using convex relaxations. Specifically, in Section 5, we show that efficient PWA relaxations can be obtained by recursively bounding the DP problem. In Section 6, we combine the theoretical results into a general algorithm called BNN-DP that solves Problem 1 efficiently.
## 3 Preliminaries on Relaxations of Functions
To propagate the uncertainty of the BNN from one layer to the other, we rely on upper and lower approximations of the corresponding Neural Network (NN), also known as _relaxations_. For vectors \(\hat{x},\hat{x}\in\mathbb{R}^{n}\), we denote by \([\hat{x},\hat{x}]\) the \(n\)-dimensional hyper-rectangle defined by \(\hat{x}\) and \(\hat{x}\), i.e., \([\hat{x},\hat{x}]=[\hat{x}^{(1)},\hat{x}^{(1)}]\times[\hat{x}^{(2)},\hat{x}^{(2 )}]\times\ldots\times[\hat{x}^{(n)},\hat{x}^{(n)}]\). We consider two types of relaxations, interval and affine.
**Definition 2** (Interval Relaxation).: _An interval relaxation of a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) over a set \(T\subseteq\mathbb{R}^{n}\) are two vectors \(\hat{b},\hat{b}\in\mathbb{R}^{m}\) such that \(f(x)\in[\hat{b},\hat{b}]\) for all \(x\in T\)._
**Definition 3** (Affine Relaxation).: _An affine relaxation of a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) over a set \(T\subseteq\mathbb{R}^{n}\) are two affine functions \(\hat{A}x+\hat{b}\) and \(\hat{A}x+\hat{b}\) with \(\hat{A},\hat{A}\in\mathbb{R}^{m\times n}\) and \(\hat{b},\hat{b}\in\mathbb{R}^{m}\) such that \(f(x)\in[\hat{A}x+\hat{b},\hat{A}x+\hat{b}]\) for all \(x\in T\)._
Interval and symbolic arithmetic can be used to propagate relaxations through the layers of a NN. Let \([\alpha]_{+}\coloneqq\max\{\alpha,0\}\) and \([\alpha]_{-}\coloneqq\min\{\alpha,0\}\) represent the saturation operators on \(\alpha\). For a vector or matrix, \([\cdot]_{+}\) and \([\cdot]_{-}\) represent element-wise max and min, respectively. We adopt the notation of Liu et al. (2021) and write interval arithmetic w.r.t. a linear mapping \(M\) compactly as \(\otimes\), where \(M\otimes[\hat{b},\hat{b}]\coloneqq[[M]_{+}\hat{b}+[M]_{-}\hat{b},\,[M]_{+}\hat{b}+[M]_{-}\hat{b}]\), and use similar notation for symbolic arithmetic.
## 4 BNN Certification via Dynamic Program
As observed in Marchi et al. (2021), NNs and consequently BNNs can be viewed as dynamical systems evolving over the layers of the network. In particular, for \(k\in\{0,...,K\}\), Eqn. (1) can be rewritten as:
\[z_{k+1}=\phi_{k+1}(\mathbf{W}_{k}(z_{k}^{T},1)^{T}) \tag{4}\]
with initial condition \(z_{0}=x\). Since, in a BNN, weights and biases are random variables sampled from the approximate posterior \(q(\cdot)\), Eqn. (4) describes a non-linear stochastic process evolving over the layers of the NN. This observation leads to the following theorem, which shows that \(\mathbb{E}_{\mathbf{w}\sim q(\cdot)}\left[h(\mathbf{W}_{k}(z^{T},1)^{T})\right]\) can be characterized as the solution to a backward recursion DP problem.
**Theorem 1**.: _Let \(f^{\mathbf{w}}\left(x\right)\) be a fully connected BNN with \(K\) hidden layers and \(h:\mathbb{R}^{n_{K+1}}\rightarrow\mathbb{R}^{l}\) be an integrable function. For \(k=0,...,K\), define functions \(V_{k}:\mathbb{R}^{n_{k}}\rightarrow\mathbb{R}^{l}\) backwards-recursively as:_
\[V_{K}(z)=\mathbb{E}_{\mathbf{W}_{K}\sim q(\cdot)}\left[h(\mathbf{W}_{K}( z^{T},1)^{T})\right], \tag{5a}\] \[V_{k-1}(z)=\mathbb{E}_{\mathbf{W}_{k-1}\sim q(\cdot)}\left[V_{k}( \phi_{k}(\mathbf{W}_{k-1}(z^{T},1)^{T}))\right]. \tag{5b}\]
_Then, it holds that \(\mathbb{E}_{\mathbf{w}\sim q(\cdot)}\left[h(f^{\mathbf{w}}\left(x\right))\right]=V_{0}(x)\)._
The proof of Theorem 1 is reported in Appendix A.1 and is obtained by induction over the layers of the NN, relying on the law of total expectation and the independence of the parameter distributions at different layers.6 Figure 1 illustrates the backward-iteration scheme of Theorem 1 for a two-hidden-layer BNN. Starting from the last layer, value functions \(V_{k}\) are constructed according to Eqns. (5a) and (5b), describing how the output of layer \(k\) is transformed by the remaining layers. Theorem 1 is a central piece of our framework as it allows one to break the computation of \(\mathbb{E}_{\mathbf{w}\sim q(\cdot)}\left[h(f^{\mathbf{w}}\left(x\right))\right]\) into \(K+1\) (simpler) sub-problems, one for each layer of the BNN. In fact, note that \(V_{k}\) is a deterministic function. Hence, all the uncertainty in \(V_{k-1}\) depends only on the weights of layer \(k-1\). This is a key point that we use to derive efficient methods to solve Problem 1. Nevertheless, we stress that since \(V_{k}(z)\) is obtained by propagating \(z\) over \(K-k\) layers of the BNN, it is still generally a non-convex function, whose exact optimisation is infeasible in practice. Consequently, we employ the following corollary, which guarantees that, to solve Problem 1, it suffices to recursively bound \(V_{k}\) following Eqns. (5a) and (5b).
Footnote 6: While the vast majority of VI algorithms make this assumption, Theorem 1 can be generalized to the case where there is inter-layer correlation by marginalizing Eqn. (5a) and (5b) over partitions in correlated weight-spaces.
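To make the recursion concrete, consider a BNN with a single hidden layer (\(K=1\)); unrolling Eqns. (5a)–(5b) and using the independence of \(\mathbf{W}_{0}\) and \(\mathbf{W}_{1}\) together with the law of total expectation gives

\[V_{1}(z)=\mathbb{E}_{\mathbf{W}_{1}\sim q(\cdot)}\left[h(\mathbf{W}_{1}(z^{T},1)^{T})\right],\] \[V_{0}(x)=\mathbb{E}_{\mathbf{W}_{0}\sim q(\cdot)}\left[V_{1}(\phi_{1}(\mathbf{W}_{0}(x^{T},1)^{T}))\right]=\mathbb{E}_{\mathbf{W}_{0},\mathbf{W}_{1}}\left[h\big{(}\mathbf{W}_{1}\big{(}\phi_{1}(\mathbf{W}_{0}(x^{T},1)^{T})^{T},1\big{)}^{T}\big{)}\right]=\mathbb{E}_{\mathbf{w}\sim q(\cdot)}\left[h(f^{\mathbf{w}}(x))\right].\]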
**Corollary 1**.: _For \(k\in\{1,\ldots,K\}\), let functions \(\check{V}_{k},\hat{V}_{k}:\mathbb{R}^{n_{k}}\rightarrow\mathbb{R}^{l}\) be relaxations of \(V_{k}(z_{k})\), i.e., \(\forall z_{k}\in\mathbb{R}^{n_{k}},\check{V}_{k}(z_{k})\leq V_{k}(z_{k})\leq\hat{V}_{k}(z_{k})\). Then_
\[\mathbb{E}_{\mathbf{W}_{k-1}\sim q(\cdot)}\left[\check{V}_{k}(\phi_{k}(\mathbf{W}_{k-1}(z^{T},1)^{T}))\right]\leq V_{k-1}(z)\leq\] \[\mathbb{E}_{\mathbf{W}_{k-1}\sim q(\cdot)}\left[\hat{V}_{k}(\phi_{k}(\mathbf{W}_{k-1}(z^{T},1)^{T}))\right].\]
Figure 1: Illustration of the DP algorithm in Theorem 1 for a BNN with two hidden layers. Value functions \(V_{k}\) are mappings from the latent and input spaces of the BNN to the mean of the output distribution. For each mapping, the distribution and mapping of a single point is displayed in orange. Starting from the last hidden layer, we recursively compute PWA approximations of the mappings. The true mean of the BNN for all \(z_{2}\in\mathcal{Z}_{2}\) is in the green oval, which we over-approximate by the blue hexagon.
_Further, for \(i\in\{1,\ldots,l\}\), it holds that \(\pi^{(i)}_{\min}(T)\geq\min_{x\in T}\check{V}^{(i)}_{0}(x)\) and \(\pi^{(i)}_{\max}(T)\leq\max_{x\in T}\hat{V}^{(i)}_{0}(x)\)._
Corollary 1 allows us to recursively find relaxations of \(V_{k}\) via Theorem 1. In what follows, we focus on finding PWA relaxations \(\check{V}_{k}\) and \(\hat{V}_{k}\). To achieve that, there are two basic steps: (i) initialization of \(\check{V}_{k},\hat{V}_{k}\) via Eqn. (5a), and (ii) backward propagation of \(\check{V}_{k},\hat{V}_{k}\) through a hidden layer of the BNN via Eqn. (5b). In Section 5, we first show an efficient method to perform step (ii) and then focus on (i).
## 5 PWA Relaxation for Dynamic Programming
Our goal in this section is to find PWA relaxations of \(V_{k}\). To do that, we first show how to propagate affine relaxations of the value function backwards through a single hidden layer of the BNN via Eqn. (5b) and then generalize this result to PWA relaxations. Note that, because the support of a BNN is generally unbounded, affine relaxations of Eqns. (5a) and (5b) lead to overly conservative results (a single affine function would have to over-approximate a non-linear function over an unbounded set). Thus, PWA relaxations are necessary to obtain tight approximations. Finally, in Subsection 5.3 we show how to compute relaxations for Eqn. (5a).
### Affine Value Functions
For the sake of presentation, we focus on the upper bound \(\hat{V}_{k-1}\); the lower bound case follows similarly. Let \(\hat{V}_{k}:\mathbb{R}^{n_{k}}\rightarrow\mathbb{R}^{l}\) be an affine upper bound on \(V_{k}\). Then, by Corollary 1 and the linearity of expectation, it holds that
\[\hat{V}_{k-1}(z)=\hat{V}_{k}(\mathbb{E}_{\mathbf{W}_{k-1}\sim q(\cdot)}\left[ \phi_{k}(\mathbf{W}_{k-1}(z^{T},1)^{T})\right]). \tag{6}\]
Recall that here \(q\) is a Gaussian distribution (see Section 2.1). Hence, due to the closure of Gaussian random variables w.r.t. linear transformations, we can rewrite Eqn. (6) as:
\[\hat{V}_{k-1}(z)=\hat{V}_{k}(\mathbb{E}_{\mathbf{\zeta}\sim\mathcal{N}(m_{k}(z); \text{ diag}(s_{k}(z)))}\left[\phi_{k}(\mathbf{\zeta})\right]), \tag{7}\]
where \(m_{k}:\mathbb{R}^{n_{k-1}}\rightarrow\mathbb{R}^{n_{k}}\) and \(s_{k}:\mathbb{R}^{n_{k-1}}\rightarrow\mathbb{R}^{n_{k}}_{\geq 0}\) are defined component-wise as
\[\begin{split}& m^{(i)}_{k}(z)=\mu_{w,k-1,i}(z^{T},1)^{T},\\ & s^{(i)}_{k}(z)=(z^{T},1)\Sigma_{w,k-1,i}(z^{T},1)^{T},\end{split} \tag{8}\]
for all \(i\in\{1,\ldots,n_{k}\}\), with \(\mu_{w,k-1,i},\Sigma_{w,k-1,i}\) being the mean and covariance of the \(i\)th node of the \(k\)th layer. \(\text{diag}(s)\) is a diagonal matrix with the elements of \(s\) on the main diagonal. Note that Eqn. (7) reduces the propagation of the value function to the propagation of a Gaussian random variable (\(\mathbf{\zeta}\)) through an activation function (\(\phi_{k}\)). In Proposition 2, we show how this propagation can be achieved analytically for ReLU activation functions. Generalization to other activation functions is discussed in Remark 3.
**Proposition 2**.: _For \(k\in\{1,\ldots,K\}\), let \(\hat{V}_{k}\) be an affine function and \(Z\subset\mathbb{R}^{n_{k-1}}\) be a compact set. Define function \(r_{k}:\mathbb{R}^{n_{k-1}}\rightarrow\mathbb{R}^{n_{k}}_{\geq 0}\) as \(r_{k}(z)=\sqrt{s_{k}(z)}\), and let \(\check{r}_{k},\check{r}_{k}:\mathbb{R}^{n_{k-1}}\rightarrow\mathbb{R}^{n_{k}}_ {\geq 0}\) be an affine-relaxation of \(r_{k}\) w.r.t. \(Z\). Further, define \(g:\mathbb{R}^{2}\rightarrow\mathbb{R}\) as_
\[g(\mu,\sigma)=\frac{\mu}{2}\left[1-\text{erf}\!\left(\frac{-\mu}{\sigma\sqrt{2 }}\right)\right]+\frac{\sigma}{\sqrt{2\pi}}\,e^{-(\mu/\sigma\sqrt{2})^{2}},\]
_and, let \(\check{g}_{i},\hat{g}_{i}:\mathbb{R}^{2}\rightarrow\mathbb{R}\) be an affine-relaxation of \(g\) w.r.t. \(\{(m^{(i)}_{k}(z),r^{(i)}_{k}(z))\mid\forall z\in Z\}\), Then, for \(\check{A},\hat{A}\in\mathbb{R}^{n_{k-1}\times n_{k}}\) and \(\check{b},\hat{b}\in\mathbb{R}^{n_{k}}\) defined as, \(\forall i\in\{1,\ldots,n_{k}\}\),_
\[\check{A}^{(i,:)}=[\nabla_{z}g(m^{(i)}_{k}(z),r^{(i)}_{k}(z))]_{z=z^{*}},\] \[\check{b}^{(i)}=g(m^{(i)}_{k}(z^{*}),r^{(i)}_{k}(z^{*}))-\check{A}^{(i,:)}z^{*},\] \[[\,\cdot\,,\hat{A}^{(i,:)}z+\hat{b}^{(i)}]=(m^{(i)}_{k},\check{r}^{(i)}_{k})^{T}\otimes[\check{g},\hat{g}],\]
_with \(z^{*}\in Z\) and \(\nabla_{z}\) being the gradient w.r.t. \(z\), it holds that \(\forall z\in Z\), \(\mathbb{E}_{\mathbf{\zeta}\sim\mathcal{N}(m_{k}(z);\;\text{diag}(s_{k}(z)))}\left[\hat{V}_{k}(\text{ReLU}(\mathbf{\zeta}))\right]\in\hat{V}_{k}\otimes[\check{A}z+\check{b},\hat{A}z+\hat{b}]\)._
The proof of Proposition 2 is based on the convexity of the expected value of a rectified Gaussian w.r.t. its mean and variance. The proof and detailed procedures for obtaining affine relaxations of \(g\) and \(r\) are reported in Appendix B.1. Next, we show how the result of Proposition 2 can be extended to PWA relaxations of the value functions.
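Before moving on, note that \(g\) above is the mean of a rectified Gaussian, \(g(\mu,\sigma)=\mathbb{E}[\text{ReLU}(\mathbf{\zeta})]\) for \(\mathbf{\zeta}\sim\mathcal{N}(\mu,\sigma^{2})\). A quick numerical sanity check of this closed form (a minimal sketch assuming scipy; \(\mu\) and \(\sigma\) are arbitrary test values):

```python
import numpy as np
from scipy.special import erf

def g(mu, sigma):
    """Closed-form mean of a rectified Gaussian, E[ReLU(N(mu, sigma^2))]."""
    return 0.5 * mu * (1.0 - erf(-mu / (sigma * np.sqrt(2.0)))) \
        + sigma / np.sqrt(2.0 * np.pi) * np.exp(-(mu / (sigma * np.sqrt(2.0))) ** 2)

rng = np.random.default_rng(0)
mu, sigma = 0.4, 1.3
mc = np.maximum(mu + sigma * rng.standard_normal(1_000_000), 0.0).mean()
print(g(mu, sigma), mc)  # the two values should agree up to Monte Carlo error
```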
**Remark 3**.: _The results of Proposition 2 (as well as Propositions 4 and 5 below) extend to any continuous activation function \(\phi_{k}\). That is, as shown in (Benussi et al., 2022), every continuous activation function can be under- and over-approximated by PWA functions \(\check{\phi}_{k},\hat{\phi}_{k}:\mathbb{R}^{n_{k}}\rightarrow\mathbb{R}^{n_{k}}\) such that \(\check{\phi}_{k}\leq\phi_{k}\leq\hat{\phi}_{k}\). Consequently, \(\mathbb{E}\left[\check{\phi}_{k}(\mathbf{\zeta})\right]\leq\mathbb{E}\left[\phi_{k}(\mathbf{\zeta})\right]\leq\mathbb{E}\left[\hat{\phi}_{k}(\mathbf{\zeta})\right]\), which allows the extension of Proposition 2 from ReLU to general continuous \(\phi_{k}\)._
### Piecewise Affine Value Functions
For \(N\in\mathbb{N}\), let \(\mathcal{Z}_{k}=\{Z_{k,1},\ldots,Z_{k,N}\}\subseteq\mathbb{R}^{n_{k}}\) be a partition of the support of \(f^{\mathbf{w}}_{0:k}\), and let \(\check{V}_{k,j},\hat{V}_{k,j}:\mathbb{R}^{n_{k}}\rightarrow\mathbb{R}^{l}\) be an affine relaxation of \(V_{k}(z_{k})\) w.r.t. \(Z_{k,j}\) for all \(j\in\{1,\ldots,N\}\), i.e., \(\forall z_{k}\in Z_{k,j}\), \(\check{V}_{k,j}(z_{k})\leq V_{k}(z_{k})\leq\hat{V}_{k,j}(z_{k})\) with \(\hat{V}_{k,j}\coloneqq\hat{A}_{k,j}z_{k}+\hat{b}_{k,j}\). Then, by Eqn. (5b) and the law of total expectation, we obtain an upper bound on \(V_{k-1}\):
\[V_{k-1}(z)\leq\sum_{j=1}^{N}\hat{b}_{k,j}\underbrace{\mathbb{P}_{\mathbf{\zeta}\sim \mathcal{N}(m_{k}(z);\text{ diag}(s_{k}(z)))}\left[\mathbf{\zeta}\in Z_{k,j}\right]}_{9 \text{a}}+ \tag{9}\] \[\hat{A}_{k,j}\underbrace{\mathbb{E}_{\mathbf{\zeta}\sim\mathcal{N}(m_{k} (z);\text{ diag}(s_{k}(z)))}\left[\phi_{k}(\mathbf{\zeta})\mid\mathbf{\zeta}\in Z_{k,j}\right]}_{9 \text{b}},\]
where, with a slight abuse of notation, \(\mathbb{E}_{\mathbf{\zeta}\sim p}\left[\mathbf{\zeta}\mid\mathbf{\zeta}\in Z\right]\) denotes the unnormalized conditional expectation, i.e., the standard conditional expectation multiplied by \(\mathbb{P}_{\mathbf{\zeta}\sim p}\left[\mathbf{\zeta}\in Z\right]\). The lower bound on \(V_{k-1}\) follows similarly. Term 9a is
simply the probability that a Gaussian random variable (\(\zeta\)) lies in a given set (a region \(Z_{k,j}\) of the partition). If the partition regions are hyperrectangles, Lemma 3 expresses Term 9a in closed form.
**Lemma 3**.: _For \(k\in\{1,\ldots,K\}\) and \(\tilde{\zeta},\hat{\zeta}\in\mathbb{R}^{n_{k}}\), it holds that\({}^{7}\)_
Footnote 7: A similar result holds for unbounded regions defined by a vector \(\tilde{z}\), that is, \(\mathbf{\zeta}\in[\tilde{z},\infty)\) or \(\mathbf{\zeta}\in(-\infty,\tilde{z}]\), as shown in Appendix B.2.
\[\mathbb{P}_{\mathbf{\zeta}\sim\mathcal{N}(m_{k}(z);\;\text{diag}(s_{k}(z)))}\left[\mathbf{\zeta}\in[\tilde{\zeta},\hat{\zeta}]\right]=\frac{1}{2^{n_{k}}}\prod_{i=1}^{n_{k}}\left(\text{erf}\left(\frac{\hat{\zeta}^{(i)}-m_{k}^{(i)}(z)}{\sqrt{2s_{k}^{(i)}(z)}}\right)-\text{erf}\left(\frac{\tilde{\zeta}^{(i)}-m_{k}^{(i)}(z)}{\sqrt{2s_{k}^{(i)}(z)}}\right)\right) \tag{10}\]
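As an illustration, the closed form of Lemma 3 is cheap to evaluate numerically. The following is a minimal illustrative sketch (not the released BNN-DP implementation), where `mean` and `var` stand for \(m_{k}(z)\) and \(s_{k}(z)\) at a fixed \(z\):

```python
import numpy as np
from scipy.special import erf

def box_probability(mean, var, lo, hi):
    """P[zeta in [lo, hi]] for zeta ~ N(mean, diag(var)), per Lemma 3.

    mean, var, lo, hi: arrays of shape (n_k,); var holds the diagonal
    of the covariance, i.e., the s_k^{(i)}(z) terms.
    """
    z_hi = (hi - mean) / np.sqrt(2.0 * var)
    z_lo = (lo - mean) / np.sqrt(2.0 * var)
    # Product over independent coordinates of (erf(z_hi) - erf(z_lo)) / 2.
    return np.prod(0.5 * (erf(z_hi) - erf(z_lo)))
```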
Term 9b is the unnormalized conditional expectation of the random variable propagated through the activation function. The following proposition shows that we can decompose this term into unconditional expectations, which we can bound using the result of Proposition 2, and probabilities, to which Lemma 3 can be applied.
**Proposition 4**.: _For \(k\in\{1,\ldots,K\}\), vectors \(\tilde{\zeta},\hat{\zeta}\in\mathbb{R}^{n_{k}}\), and \(\mathbf{\zeta}\sim\mathcal{N}\left(m_{k}(z);\;\text{diag}\,(s_{k}(z))\right)\), it holds that\({}^{8}\)_
Footnote 8: A similar relation can be obtained for \(\phi_{k}\) being the identity function, as shown in Appendix B.3.
\[\tilde{\mathbb{E}}\left[\text{ReLU}\left(\mathbf{\zeta}\right)\mid\mathbf{\zeta}\in[\tilde{\zeta},\hat{\zeta}]\right]=\mathbb{E}\left[\text{ReLU}\left(\mathbf{\zeta}-[\tilde{\zeta}]_{+}\right)\right]-\mathbb{E}\left[\text{ReLU}\left(\mathbf{\zeta}-[\hat{\zeta}]_{+}\right)\right]+[\tilde{\zeta}]_{+}\mathbb{P}\left[\mathbf{\zeta}\in[[\tilde{\zeta}]_{+},\infty)\right]-[\hat{\zeta}]_{+}\mathbb{P}\left[\mathbf{\zeta}\in[[\hat{\zeta}]_{+},\infty)\right].\]
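This decomposition can be sanity-checked numerically in the scalar case. The following sketch (illustrative only) compares a Monte Carlo estimate of the left-hand side with the right-hand side, using the classical closed form \(\mathbb{E}[\text{ReLU}(\zeta-c)]=(\mu-c)\Phi(\tfrac{\mu-c}{\sigma})+\sigma\varphi(\tfrac{\mu-c}{\sigma})\) for \(\zeta\sim\mathcal{N}(\mu,\sigma^{2})\):

```python
import numpy as np
from scipy.stats import norm

def relu_mean(mu, sigma, c=0.0):
    # Closed form of E[ReLU(zeta - c)] for zeta ~ N(mu, sigma^2).
    t = (mu - c) / sigma
    return (mu - c) * norm.cdf(t) + sigma * norm.pdf(t)

mu, sigma, lo, hi = 0.3, 1.2, -0.5, 1.0
lo_p, hi_p = max(lo, 0.0), max(hi, 0.0)   # the [.]_+ operator

rhs = (relu_mean(mu, sigma, lo_p) - relu_mean(mu, sigma, hi_p)
       + lo_p * norm.sf((lo_p - mu) / sigma)
       - hi_p * norm.sf((hi_p - mu) / sigma))

z = np.random.default_rng(0).normal(mu, sigma, 1_000_000)
lhs = np.mean(np.maximum(z, 0.0) * ((z >= lo) & (z <= hi)))
print(lhs, rhs)   # the two values agree up to Monte Carlo error
```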
Next, we show how these results can be extended to unbounded sets in the partition \(\mathcal{Z}_{k}\), i.e., to the case where the support of \(f_{0:k}^{\mathbf{w}}\) is unbounded.
Unbounded Support. If \(f_{0:k}^{\mathbf{w}}\) has an unbounded support, then there must necessarily be at least one unbounded region in the partition \(\mathcal{Z}_{k}\). While for this region we can still apply Lemma 3 to compute Term 9a, we cannot use Proposition 4 to compute a bound for Term 9b. Instead, we rely on Proposition 5 (below), which derives relaxations based on the fact that Gaussian distributions decay exponentially fast (thus, faster than a linear function grows).
**Proposition 5**.: _For \(k\in\{1,\ldots,K\}\), \(i\in\{1,\ldots,n_{k}\}\), and vector \(\tilde{\zeta}\in\mathbb{R}^{n_{k}}\), it holds that\({}^{9}\)_
Footnote 9: A similar relation can be obtained for \(\phi_{k}\) being the identity function, as shown in Appendix B.4.
\[\frac{1}{2}\big[m_{k}^{(i)}(z)\big]_{-}\leq\tilde{\mathbb{E}}_{\mathbf{\zeta}\sim\mathcal{N}(m_{k}(z);\;\text{diag}(s_{k}(z)))}\left[\text{ReLU}\left(\mathbf{\zeta}\right)\mid\mathbf{\zeta}\in[\tilde{\zeta},\infty)\right]^{(i)}\leq\frac{1}{2}\big[m_{k}^{(i)}(z)\big]_{+}+\sqrt{\frac{s_{k}^{(i)}(z)}{2\pi}}.\]
**Algorithm 1** Adversarial Robustness for Classification
### Relaxation of the Last Layer of BNN
We now show how to compute interval relaxations of Eqn. (5a). For the regression case (\(h=I\)) the process is simple, since Eqn. (5a) becomes an affine function. That is, \(V_{K}(z)=m_{K}(z)\), where \(m_{K}(z)\) is as defined in Eqn. (8), and hence no relaxation is required. For classification, however, further relaxations are needed because \(h=\text{softmax}\), i.e., the output distribution of the BNN (the logit) is propagated through the softmax. The following proposition shows that an interval relaxation can be obtained by relaxing the distribution of \(\mathbf{W}_{K}(z^{T},1)^{T}\) with Dirac delta functions at the extremes of \(h\) for each set in the partition of the BNN's output.
**Proposition 6**.: _For \(N\in\mathbb{N}\), let \(\{Z_{1},\ldots,Z_{N}\}\subseteq\mathbb{R}^{n_{K+1}}\) be a partition of \(\text{supp}(f^{\mathbf{w}}\left(x\right))\). Then, for \(i\in\{1,\ldots,n_{K+1}\}\) and \(\mathbf{w}\sim q(\cdot)\), it holds that_
\[\sum_{j=1}^{N}\big[\underset{\zeta\in Z_{j}}{\min}h^{(i)}(\zeta)\big]\mathbb{P}\left[f^{\mathbf{w}}\left(x\right)\in Z_{j}\right]\leq\mathbb{E}\left[h^{(i)}(f^{\mathbf{w}}\left(x\right))\right]\leq\sum_{j=1}^{N}\big[\underset{\zeta\in Z_{j}}{\max}h^{(i)}(\zeta)\big]\mathbb{P}\left[f^{\mathbf{w}}\left(x\right)\in Z_{j}\right].\]
A particularly simple case is when there are only two sets in the partition of the BNN's output layer. Then, the following corollary of Proposition 6 guarantees that, similarly to deterministic NNs (Zhang et al., 2018), we can determine adversarial robustness by simply looking at the logit.
**Corollary 6**.: _Let \(\{[\tilde{\zeta},\hat{\zeta}],Z\}\subseteq\mathbb{R}^{n_{K+1}}\) be a partition of \(\text{supp}(f^{\mathbf{w}})\). Then, for \(i,j\in\{1,\ldots,n_{K+1}\}\) and \(\mathbf{w}\sim q(\cdot)\), it holds that_
\[e^{\hat{\zeta}^{(j)}}-e^{\tilde{\zeta}^{(i)}}+\Big(\frac{1}{\mathbb{P}\left[f^{\mathbf{w}}(x)\in[\tilde{\zeta},\hat{\zeta}]\right]}-1\Big)\sum_{l=1}^{n_{K+1}}e^{\hat{\zeta}^{(l)}}\leq 0\] \[\implies\mathbb{E}\left[\text{softmax}^{(j)}(f^{\mathbf{w}}(x))-\text{softmax}^{(i)}(f^{\mathbf{w}}(x))\right]\leq 0.\]
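In code, this certificate amounts to a few vector operations on the logit box; below is a minimal illustrative sketch, assuming `z_lo` and `z_hi` bound the logits on \([\tilde{\zeta},\hat{\zeta}]\) and `p_box` is the probability mass of the box (computable via Lemma 3):

```python
import numpy as np

def dominates(z_lo, z_hi, p_box, i, j):
    """True if E[softmax_j - softmax_i] <= 0 is certified (Corollary 6)."""
    slack = (1.0 / p_box - 1.0) * np.sum(np.exp(z_hi))
    return np.exp(z_hi[j]) - np.exp(z_lo[i]) + slack <= 0.0
```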
## 6 BNN-DP Algorithm
We summarize our overall procedure to solve Problem 1 in an algorithm called BNN-DP. Algorithm 1 presents BNN-DP for the classification setting; the procedure for the regression setting follows similarly and is provided in Appendix D. Algorithm 1 consists of a forward pass to partition the latent space of the BNN (Lines 2-4), and a backward pass to recursively approximate the value functions via Eqns. (5a) and (5b) (Lines 7-10). The last layer of the BNN (Eqn. 5a) is handled by the IBPSoftmax function in Line 7 using the results of Proposition 6. The BP function in Line 9 performs the back-propagation over the hidden layers of the BNN (Eqn. 5b) using the results of Lemma 3 and Propositions 4 and 5. The detailed procedures of IBPSoftmax and BP can be found in Appendix D. In what follows, we describe how we partition the support of the latent space of the BNN, and discuss the computational complexity of BNN-DP.
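To make the flow of Algorithm 1 concrete, the following is a high-level sketch of the two passes described above; the helpers `partition_support`, `ibp_softmax`, and `back_propagate` are hypothetical stand-ins for the procedures of Proposition 7, Proposition 6, and Lemma 3 with Propositions 4-5, respectively, and do not correspond to the released implementation:

```python
def bnn_dp_classification(bnn, X, K, eps):
    """Sketch of BNN-DP for classification (all helpers are hypothetical).

    bnn: Bayesian network with Gaussian layer posteriors; X: input set;
    K: number of hidden layers; eps: mass discarded per layer (Prop. 7).
    """
    # Forward pass: partition the latent space of every hidden layer.
    partitions = []
    for k in range(1, K + 1):
        # Hyper-rectangle capturing >= 1 - eps of the layer support,
        # optionally refined by interval splitting.
        partitions.append(partition_support(bnn, X, layer=k, eps=eps))

    # Last layer: interval relaxation of the softmax output (Prop. 6).
    V_lo, V_hi = ibp_softmax(bnn, partitions[-1])

    # Backward pass: recursively relax the value functions (Eqn. 5b),
    # using Lemma 3 (probabilities) and Props. 4-5 (expectations).
    for k in range(K, 0, -1):
        V_lo, V_hi = back_propagate(bnn, partitions[k - 1], V_lo, V_hi, layer=k)
    return V_lo, V_hi
```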
Partitioning. Recall that our results rely on hyper-rectangular partitions. Hence, for each layer \(k\), we employ the following proposition to find a hyper-rectangular subset of the support of each layer that captures at least \(1-\epsilon\) of the probability mass of \(\text{supp}(f_{0:k}^{\mathbf{w}})\).
**Proposition 7**.: _For \(k\in\{1,\ldots,K\}\), let \(\epsilon\in[0,1]\) be a constant, and \(Z\subset\mathbb{R}^{n_{k-1}}\) be a compact set. Then, for vectors \(\tilde{\zeta}_{k},\hat{\zeta}_{k}\in\mathbb{R}^{n_{k}}\) defined such that \(\forall i\in\{1,\ldots,n_{k}\}\),_
\[\tilde{\zeta}_{k}^{(i)} =\min_{z\in Z}\left[\text{erf}^{-1}\left(-\eta\right)\sqrt{2s_{k}^{(i)}(z)}+m_{k}^{(i)}(z)\right], \tag{11}\] \[\hat{\zeta}_{k}^{(i)} =\max_{z\in Z}\left[\text{erf}^{-1}\left(\eta\right)\sqrt{2s_{k}^{(i)}(z)}+m_{k}^{(i)}(z)\right], \tag{12}\]
_where \(\eta=(1-\epsilon)^{\frac{1}{n_{k}}}\), it holds that, \(\forall z\in Z\),_
\[\mathbb{P}_{\boldsymbol{\zeta}\sim\mathcal{N}(m_{k}(z);\text{ diag}(s_{k}(z)))}\left[\boldsymbol{\zeta}\in[\tilde{\zeta}_{k},\hat{\zeta}_{k}] \right]\geq 1-\epsilon.\]
Here, Eqns. (11) and (12) are convex optimization problems, which can be efficiently solved via, e.g., the gradient descent algorithm. We denote the resulting region obtained via Proposition 7 as \(Z_{k,main}\subset\text{supp}(f_{0:k}^{\mathbf{w}})\). Then, \(Z_{k,main}\) can be further refined by interval splitting.
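For a fixed \(z\), the per-coordinate bounds of Eqns. (11)-(12) reduce to evaluating inverse error functions; a minimal illustrative sketch is given below, whereas the full Proposition 7 additionally optimizes these bounds over \(Z\):

```python
import numpy as np
from scipy.special import erfinv

def per_z_box(mean, var, eps):
    """[zeta_lo, zeta_hi] capturing >= 1 - eps of N(mean, diag(var)).

    mean, var: arrays of shape (n_k,) for one fixed z. Proposition 7
    additionally minimizes/maximizes these bounds over the set Z.
    """
    n = mean.size
    eta = (1.0 - eps) ** (1.0 / n)
    lo = erfinv(-eta) * np.sqrt(2.0 * var) + mean
    hi = erfinv(eta) * np.sqrt(2.0 * var) + mean
    return lo, hi
```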
Computational Complexity. As with linear bounding procedures for deterministic neural networks (see, e.g., [Zhang et al., 2018]), the cost of computing piecewise-affine relaxations of a BNN with \(K\) layers and \(n\) neurons per layer is polynomial in both \(K\) and \(n\). Refinement, which is not part of the main algorithm, has exponential cost in \(n\). In practice, however, in NNs and consequently in BNNs, only a few neurons are generally active, and those are the ones that most influence the posterior [Frankle and Carbin, 2018]. Therefore, the refining procedure can focus only on these neurons. Because of this, in almost all the experiments in Section 7, only \(2\) regions in the partition per hidden layer were required to certify robustness, even in cases where the BNN had large posterior variance and thousands of neurons.
## 7 Experimental Results
We empirically evaluated BNN-DP on various regression and classification benchmarks. We ran our experiments on an AMD EPYC 7252 8-core CPU and trained the BNNs using Noisy Adam [Zhang et al., 2018] and variational online Gauss-Newton [Khan et al., 2018]. We first validate the bounds obtained by BNN-DP for BNNs trained on samples from a 1D sine with additive noise (referred to as the 1D Noisy Sine). We then analyse a set of BNNs with various architectures trained on the 2D equivalent of the 1D Noisy Sine and the Kin8nm dataset.\({}^{10}\) The latter dataset contains state-space readings for the dynamics of an 8-link robot arm, and is commonly used as a regression task to benchmark BNNs [Hernandez-Lobato and Adams, 2015, Gal and Ghahramani, 2016]. Last, we turn our attention to classification and evaluate BNNs trained on the MNIST, Fashion MNIST and CIFAR-10 datasets.\({}^{11}\)
Footnote 10: Available at [http://www.cs.toronto.edu/~delve](http://www.cs.toronto.edu/~delve).
Footnote 11: Our code is available at [https://github.com/sjladams/BNN_DP](https://github.com/sjladams/BNN_DP).
As a baseline for our experiments, we consider the state-of-the-art approach of Berrada et al. [2021], which we refer to as "FL". In fact, FL is the only existing method that can provide robustness certification for BNNs in similar settings as our BNN-DP. Nevertheless, we must remark that even
Figure 2: Certified affine bounds on the mean of BNNs trained on the 1D Noisy Sine dataset w.r.t. the grey marked interval of input.
FL is not fully formal; it works by truncating the Gaussian posterior distribution associated with each weight at a given multiple of its standard deviation (std), disregarding a large part of the posterior distribution. Hence, the returned bound is not sound over the full posterior but only over a subset of it. More importantly, the disregarded portion of the posterior grows exponentially with the number of weights of the network. Already for a two-hidden-layer BNN with 48 neurons per layer, FL covers only \(0.1\%\) of the BNN posterior when truncated at 3 std. Thus, the bounds computed by FL are optimistic and not mathematically guaranteed to hold. In contrast, not only does BNN-DP return formal bounds accounting for the whole posterior, but the benchmark results also show that the BNN-DP bounds are much tighter than the FL ones.
### Bound Validation
We validate and qualitatively compare the bounds obtained by BNN-DP and FL on BNNs with 1 and 2 hidden layers trained on 1D Noisy Sine. The results of these analyses are reported in Figure 2. Visually, we see that BNN-DP is able to compute tight affine relaxations (blue lines) on the mean of the BNNs over the grey shaded intervals. In contrast, already in this simple scenario, and even when truncating the posterior distribution at just 1 std, FL returns the guaranteed output intervals \([-1.68,0.59]\) and \([0.23,0.71]\) for the 1 and 2 hidden layer BNNs, respectively. Hence, even though FL disregards most of the BNNs' posterior, BNN-DP still produces tighter bounds. When using 3 std, the FL interval bounds become even wider, namely \([-7.07,9.31]\) and \([-0.65,1.54]\) for the 1 and 2 hidden layer BNNs, respectively. Intuitively, the major improvement in the bounds can be explained by the fact that, while BNN-DP directly averages the uncertainty of each layer by solving the DP in Theorem 1, FL solves an overall optimisation problem that at each layer considers the worst combination of parameters in the support of the (truncated) distribution, leading to conservative bounds. In fact, the bound computed by FL is looser by one order of magnitude in the one-hidden-layer case than in the two-hidden-layer one, precisely because of the higher variance of the former BNN compared to the latter. In what follows, we see that analogous observations apply to more complex deep learning benchmarks.
### Regression Benchmarks
We consider a set of BNNs with various architectures trained on the 2D Noisy Sine and Kin8nm regression datasets. To assess the certification performance of BNN-DP, we compute the difference between the upper and lower bounds on the expectation of the BNNs, referred to as the \(\gamma\)-robustness, for inputs in a \(\ell_{\infty}\)-norm ball of radius \(\epsilon\) centered at a sampled data point. Clearly, a smaller value of \(\gamma\) implies a tighter bound computation. Results, averaged over 100 randomly sampled test points, are reported in Table 1a. For all experiments, BNN-DP greatly improves the value of \(\gamma\)-robustness provided by the FL baseline by 1 to 4 orders of magnitude with similar computation times. We also note that the larger the BNN is, the larger the improvement in the (tightness of the) bounds, which empirically demonstrates the superior scalability of BNN-DP. Figure 3 explicitly shows the impact of the model size and variance on the certified \(\gamma\)-robustness. For BNNs with 1 hidden layer, BNN-DP guarantees a small \(\gamma\)-robustness (and hence tighter bounds) irrespective of the number of neurons as well as the amount of uncertainty. In contrast, as already observed for the 1D Noisy Sine case, FL is particularly impacted by the variance of the posterior distribution. For BNNs with two hidden layers, BNN-DP requires partitioning the latent space, which leads to a positive correlation between the value of \(\gamma\)-robustness and the number of hidden neurons. A similar, but more extreme, trend is also observed for FL.
Table 1: Comparison between BNN-DP and FL on various fully connected BNN architectures, with \(K\) being the number of hidden layers, and \(n_{hid}\) the number of neurons per layer. The results are averaged over \(100\) test points, and the computation times are averaged over all architectures. The best values for each comparison are reported in bold.
### Classification Benchmarks
We now evaluate BNN-DP on the MNIST, Fashion MNIST and CIFAR-10 classification benchmarks. In order to quantitatively measure the robustness of an input point \(x^{*}\), we consider the maximum radius \(\epsilon\) for which the decision on \(\ell_{\infty}\)-norm perturbations of \(x^{*}\) with radius \(\epsilon\) is invariant. That is, any perturbation of \(x^{*}\) smaller than \(\epsilon\) does not change the classification output; hence, the larger \(\epsilon\) is at \(x^{*}\), the more robust the BNN is at that specific point. Results are reported in Tables 1b and 2. For the fully connected BNN architectures, BNN-DP is not only able to certify a substantially larger \(\epsilon\) compared to the baseline, but it also does so with orders-of-magnitude smaller computation time. This is because our approach uses interval relaxations (Proposition 6) to bound the softmax, whereas FL explicitly considers a non-convex optimization problem, which is computationally demanding. For the Bayesian CNN architectures, FL is able to certify a slightly larger \(\epsilon\), at the cost of an orders-of-magnitude increase in computation time. This can be explained by the decreasing support of the BNN posterior certified by FL for increasing network size, whereas the \(\epsilon\) certified by BNN-DP holds for the whole posterior.
## 8 Conclusion
We introduced BNN-DP, an algorithmic framework to certify adversarial robustness of BNNs. BNN-DP is based on a reformulation of adversarial robustness for BNNs as the solution of a dynamic program, for which efficient relaxations can be derived. Our experiments on multiple datasets for both regression and classification tasks show that our approach greatly outperforms competing state-of-the-art methods, thus paving the way for the use of BNNs in safety-critical applications.
## Acknowledgements
This work was supported in part by the NSF grant 2039062.
|
2310.06995 | Accelerated Modelling of Interfaces for Electronic Devices using Graph
Neural Networks | Modern microelectronic devices are composed of interfaces between a large
number of materials, many of which are in amorphous or polycrystalline phases.
Modeling such non-crystalline materials using first-principles methods such as
density functional theory is often numerically intractable. Recently, graph
neural networks (GNNs) have shown potential to achieve linear complexity with
accuracies comparable to ab-initio methods. Here, we demonstrate the
applicability of GNNs to accelerate the atomistic computational pipeline for
predicting macroscopic transistor transport characteristics via learning
microscopic physical properties. We generate amorphous heterostructures,
specifically the HfO$_{2}$-SiO$_{2}$-Si semiconductor-dielectric transistor
gate stack, via GNN predicted atomic forces, and show excellent accuracy in
predicting transport characteristics including injection velocity for nanoslab
silicon channels. This work paves the way for faster and more scalable methods
to model modern advanced electronic devices via GNNs. | Pratik Brahma, Krishnakumar Bhattaram, Sayeef Salahuddin | 2023-10-10T20:26:46Z | http://arxiv.org/abs/2310.06995v1 | # Accelerated Modelling of Interfaces for Electronic Devices using Graph Neural Networks
###### Abstract
Modern microelectronic devices are composed of interfaces between a large number of materials, many of which are in amorphous or polycrystalline phases. Modeling such non-crystalline materials using first-principles methods such as density functional theory is often numerically intractable. Recently, graph neural networks (GNNs) have shown potential to achieve linear complexity with accuracies comparable to ab-initio methods. Here, we demonstrate the applicability of GNNs to accelerate the atomistic computational pipeline for predicting macroscopic transistor transport characteristics via learning microscopic physical properties. We generate amorphous heterostructures, specifically the HfO\({}_{2}\)-SiO\({}_{2}\)-Si semiconductor-dielectric transistor gate stack, via GNN predicted atomic forces, and show excellent accuracy in predicting transport characteristics including injection velocity for nanoslab silicon channels. This work paves the way for faster and more scalable methods to model modern advanced electronic devices via GNNs.
+
Footnote †: * These authors contributed equally to this work
## 1 Introduction
The modern Si transistor gate stack comprises multiple material interfaces whose electronic interactions significantly affect electron transport and ultimately transistor performance. In particular, the heterogeneous semiconductor-dielectric gate stack introduces many atomic-scale modeling challenges, which, if addressed, can help design higher-performance gate stacks for next-generation transistors. Fundamentally, the starting point for modeling any transport process of an atomistic electronic device stems from the continuity equation \(\frac{\partial Q}{\partial t}=-\vec{\nabla}\cdot\vec{J}\), where \(Q\) is the non quasi-static charge and \(J\) is the current, which in turn is a function of \(Q\) and the injection velocity (\(v_{inj}\)) [10; 8]. In practical devices, a significant contribution to \(Q\) comes from parasitic sources, which traditional Poisson solvers[15] capture well. The main challenge in transport calculations is calculating the intrinsic \(Q\) and \(v_{inj}\), which depend on the specific combination of material interfaces and confinement effects. When amorphous/polycrystalline phases are involved, one cannot directly leverage the E-k diagram. The density of states (DOS) is then used to calculate all relevant parameters, including \(Q\) and \(v_{inj}\), which are later used in electrostatic solvers and transport models to directly estimate the fast behavior of nanoscale devices. Fig.1 summarises this atomistic computational pipeline for calculating transistor characteristics from the macroscopic transistor dimensions. This pipeline has two bottlenecks for scalability: (i) Molecular Dynamics (MD), which generates atomistic transistor gate stacks with different structural phases, and (ii) Electronic Structure Calculators, which generate atomistic properties like the DOS from the quantum Hamiltonian. These bottlenecks arise as the current state-of-the-art atomistic simulation models[6] diagonalize the quantum Hamiltonian, an operation
that scales cubically with system size. This poses a challenge for fast and accurate simulations of practically large material systems containing thousands of atoms and varied crystalline states.
Following the success of graph neural networks (GNNs)[11; 18] in learning ab initio molecular forces[9] and predicting atomistic properties [4; 3], we propose to overcome the scaling challenge by learning the functional relationship between atomic local chemical environments and macroscopic transistor properties. As GNN inference (summarized in Fig.2) scales linearly with system size, orders of magnitude speedup can be realized. Other existing neural network algorithms for transistor characterization prioritize speed but sacrifice generalizability to unseen geometries and complex material interfaces [17] by inferring on macroscopic scales and ignoring microscopic properties. Our work focuses on learning atomistic contributions to macroscopic transistor characteristics, which we demonstrate yields accurate and generalizable predictions even on unseen transistor geometries.
## 2 Methods
Neural Network Architecture: Our GNN architecture combines the SchNet[11] and SpookyNet[18] architectures. The forward pass of the GNN (Fig.2) is divided into three phases:
Atom Embedding: The atomistic transistor structure is modeled as a graph where each atom is a node, and each neighboring atom interaction within a given cutoff \(r_{c}\) is a weighted edge. Every node is initialized with a random vector (\(x_{v}^{0}\)) according to its atomic number, and every distance vector (\(\vec{r}\)) to its neighboring atoms is projected onto radial (\(R\)) and spherical harmonic (\(Y_{l}^{m}\)) bases[18]. This phase emulates a system of isolated atoms unaware of their local chemical environment.
Message Passing: This phase starts with a system of isolated atoms and allows interactions with atomic neighbors to influence the atomic chemical state. Continuous convolutional filter-generated messages (\(m_{vj}^{t}\)) are sent over the edges as given in Fig.2b. The state vector (\(x_{v}^{t}\)) of each node is updated by summing up the incoming messages (\(m_{vj}^{t}\)) along the edges \(j\) as \(x_{v}^{t+1}=x_{v}^{t}+\sum_{j\in\mathcal{N}(v)}m_{vj}^{t}\). A minimal sketch of this update is given below.
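The sketch below is illustrative only, with a toy element-wise filter standing in for the actual continuous convolutional filter of [11]:

```python
import numpy as np

def message_passing_step(x, edges, edge_feat, W_msg):
    """One residual update x_v <- x_v + sum_{j in N(v)} m_vj.

    x: (num_nodes, dim) node state vectors x_v^t.
    edges: list of (v, j) pairs with j a neighbor of v within r_c.
    edge_feat: (num_edges, dim) encoded distance features.
    W_msg: (dim, dim) toy filter weights (a stand-in for the
           continuous convolutional filter).
    """
    x_new = x.copy()
    for e, (v, j) in enumerate(edges):
        # Toy filter-modulated message from neighbor j to node v.
        m_vj = (x[j] * edge_feat[e]) @ W_msg
        x_new[v] += m_vj
    return x_new
```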
Readout: This phase calculates the local atomic contributions to the desired global property from the state vectors of all nodes. We focus on two sets of properties: (i) Energy and Atomic Forces: These properties are used to generate the atomistic transistor gate stack via MD. A dense linear layer predicts the local atomic energy from the final atomic state of each node. Summing up the local atomistic energy predictions gives the total energy, and its negative gradient with respect to the atomic positions gives the atomic forces. (ii) Injection Velocity: This property characterizes the drain current through small-channel transistors[10; 8]. It relates to the average velocity of all electrons over the source-channel potential barrier (\(U\)). In the ballistic limit, the drain current (\(I_{D}\)) through a transistor is related to the injection velocity as \(I_{D}=qN_{inv}v_{inj}\), where:
\[v_{inj}=\frac{\int dE\,v_{x}(E)D(E)f(E+U-E_{f})}{N_{inv}},\qquad N_{inv}=\int dE\,D(E)f(E+U-E_{f})\]
Figure 1: **Atomistic Computational Pipeline**: Procedure for ab initio accurate predictions of advanced transistor characteristics containing various material interfaces given the macroscopic transistor dimensions. The blue boxes represent the current bottlenecks for scalable simulations of large atomistic devices. We propose to substitute these blocks with GNNs for accelerated predictions.
\(N_{inv}\) is the inversion electron density present in the silicon channel, \(v_{x}(E)\) is the band-structure velocity of an electron at energy \(E\), and \(D(E)\) is the DOS in the silicon channel. A dense linear layer with multiple outputs predicts \(D(E)\) and \(J_{x}(E)=v_{x}(E)D(E)\) from the final node state vectors. We perform PCA on the dataset to reduce the number of output nodes[1]. \(v_{inj}\) and \(N_{inv}\) are subsequently calculated using the predicted properties at a given Fermi level (\(E_{f}\)).
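Given \(D(E)\) and \(J_{x}(E)\) predicted on an energy grid, \(v_{inj}\) and \(N_{inv}\) follow from simple numerical quadrature. A minimal illustrative sketch, assuming \(f\) is the Fermi-Dirac occupation and all energies are in eV:

```python
import numpy as np

def injection_velocity(E, D, Jx, U, Ef, kT=0.02585):
    """v_inj and N_inv from the DOS D(E) and flux J_x(E) = v_x(E) D(E).

    E: (n,) energy grid (eV); D, Jx: (n,) predicted quantities;
    U: source-channel barrier height; Ef: Fermi level; kT: thermal
    energy (300 K default). Assumes Fermi-Dirac occupation f.
    """
    f = 1.0 / (1.0 + np.exp((E + U - Ef) / kT))
    N_inv = np.trapz(D * f, E)
    v_inj = np.trapz(Jx * f, E) / N_inv
    return v_inj, N_inv
```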
Datasets: Two datasets are generated, one for the transistor gate stack and one for the silicon channel.
Gate Stack: The transistor gate stack, a heterostructure of crystalline silicon, amorphous silica, and hafnia, is generated via a high-temperature quench using the LAMMPS MD package[16]. Forces between atoms are estimated using the Charge-Optimized Many Body (COMB) potential[13; 12]. The crystalline forms \(\beta\)-cristobalite and orthorhombic HfO\({}_{2}\) are melted using constant number, pressure, temperature (NPT) dynamics at 2700K and 3500K, respectively. Subsequently, we reshape the melts to match the silicon substrate area using non-equilibrium constant number, volume, temperature (NVT) dynamics and quench them via a damped force minimizer. The generated amorphous silica, hafnia, and crystalline silicon are stacked on each other, and the material interfaces are allowed to relax using the COMB potential. This procedure generates a dataset of \(\sim\)200k molecular structures ranging from 25 to 96 atoms.
Silicon Channel: We consider the silicon channel of the transistor as a nanoslab passivated by hydrogen atoms. The empirical \(sp^{3}d^{5}s^{*}\) tight-binding model in Quantum ATK[14; 7] generates the electronic properties of the silicon channel, primarily \(D(E)\) and \(J_{x}(E)\). Around 1k structures are generated by varying the strain (0.900-1.035) and the nanoslab silicon channel thickness (0.6-2.4 nm).
## 3 Results
Gate Stack Generation: We simultaneously train on atomic forces and energy using a 90-10 weighted sum of mean squared error (MSE) losses, which yields a final mean absolute error of 3.0e-2 eV/Å for the MD dataset on force predictions. The trained model is then used to generate gate stacks of around 200 atoms, a factor of 2 larger than the structures provided during training. The validity of the GNN-generated heterostructure (Fig.3a) is confirmed by the excellent match of the pair distribution functions \(g(r)\) of amorphous silica and hafnia to those of the baseline simulation model (Fig.3b). Predicted Si-Si, Hf-Hf, Si-O, and Hf-O bond lengths are within \(3\%\) of their experimental values (Fig.3b)[2; 5]. This demonstrates that our approach of learning local chemical environments can
Figure 2: **Graph Neural Network Architecture**: The forward pass of a GNN is divided into three phases: a) Atom Embedding, b) Message Passing and c) Readout.
be generalized to predicting atomic forces and energy in transistor geometries unseen in the initial training set.
Injection Velocity: We simultaneously train on \(D(E)\) and \(J_{x}(E)\) using an equally weighted sum of MSE losses, which yields a final mean absolute error of 9.0e-4 /eV (0.18% error) for \(D(E)\) and 4.9e4 cm/s-eV (0.82% error) for \(J_{x}(E)\). The trained task-specific GNN predicts PCA coefficients for \(D(E)\) and \(J_{x}(E)\) over 200 and 15 basis functions, respectively, for the crystalline silicon nanoslab. We found high model performance in reproducing \(D(E)\) and \(J_{x}(E)\) for a 2.6nm thick unstrained silicon nanoslab, a structure larger than those in the training set, as shown in Fig.3c. From these predictions, we derived \(v_{inj}\) for a range of chemical potentials \(E_{f}\), finding errors within 5.4% for both the on and off states of the transistor (Fig. 3d). Furthermore, we evaluate our neural network on unstrained silicon nanoslabs with thicknesses ranging from 0.67 to 2.4 nm. We reproduce \(v_{inj}\) as a function of thickness with high fidelity (within 5.3%) (Fig.3e), demonstrating the ability of our model to successfully predict macroscopic dynamics of unseen geometries of the silicon channel. The runtime for predicting injection velocity with the GNN is around 20ms, while the baseline simulation takes 430s, a speedup of over four orders of magnitude.
## 4 Conclusion
In this study, we demonstrate the efficacy of GNNs to accelerate an end-to-end simulation framework for predicting the material properties of complex material interfaces. Starting from macroscopic transistor dimensions, we use our neural network to predict forces for generating the modern transistor gate stack (HfO\({}_{2}\)-SiO\({}_{2}\)-Si) with bond lengths within 3% of the experimental values. We further reproduce global electronic (\(D(E)\), \(J_{x}(E)\)) and transport properties (\(v_{inj}\)) of crystalline silicon channels. We show agreement within 5.4% for injection velocity on unseen geometries and demonstrate high performance on a structure size outside the training domain. The scalability and accuracy of our predictions over a wide range of material and transport properties demonstrate the viability of our approach for the modeling of advanced heterogeneous devices, paving the way for rapid design and modeling of next-generation transistors.
Figure 3: **Gate Stack Generation and Injection Velocity Prediction**: (a) The gate stack heterostructure generated via the GNN predicted atomic forces. (b) Excellent agreement between the \(g(r)\) of the heterostructure generated by GNN and the baseline material simulation. [Exp] refers to the experimental bond length values[2; 5]. (c), (d) Excellent agreement of the predicted \(D(E)\), \(J_{x}(E)\), and \(v_{inj}\) between the GNN and the baseline model for a 2.6nm thick unstrained nanoslab silicon. (e) GNN correctly predicts the transistor \(v_{inj}\) trend as a function of the unstrained nanoslab silicon channel thickness.
## Acknowledgements
This work is supported by the Defense Advanced Research Projects Agency (DARPA) within the Nanosim Microsystems Exploration Program.
|
2306.15969 | Separable Physics-Informed Neural Networks | Physics-informed neural networks (PINNs) have recently emerged as promising
data-driven PDE solvers showing encouraging results on various PDEs. However,
there is a fundamental limitation of training PINNs to solve multi-dimensional
PDEs and approximate highly complex solution functions. The number of training
points (collocation points) required on these challenging PDEs grows
substantially, but it is severely limited due to the expensive computational
costs and heavy memory overhead. To overcome this issue, we propose a network
architecture and training algorithm for PINNs. The proposed method, separable
PINN (SPINN), operates on a per-axis basis to significantly reduce the number
of network propagations in multi-dimensional PDEs unlike point-wise processing
in conventional PINNs. We also propose using forward-mode automatic
differentiation to reduce the computational cost of computing PDE residuals,
enabling a large number of collocation points (>10^7) on a single commodity
GPU. The experimental results show drastically reduced computational costs (62x
in wall-clock time, 1,394x in FLOPs given the same number of collocation
points) in multi-dimensional PDEs while achieving better accuracy. Furthermore,
we present that SPINN can solve a chaotic (2+1)-d Navier-Stokes equation
significantly faster than the best-performing prior method (9 minutes vs 10
hours in a single GPU), maintaining accuracy. Finally, we showcase that SPINN
can accurately obtain the solution of a highly nonlinear and multi-dimensional
PDE, a (3+1)-d Navier-Stokes equation. For visualized results and code, please
see https://jwcho5576.github.io/spinn.github.io/. | Junwoo Cho, Seungtae Nam, Hyunmo Yang, Seok-Bae Yun, Youngjoon Hong, Eunbyung Park | 2023-06-28T07:11:39Z | http://arxiv.org/abs/2306.15969v4 | # Separable Physics-Informed Neural Networks
###### Abstract
Physics-informed neural networks (PINNs) have recently emerged as promising data-driven PDE solvers showing encouraging results on various PDEs. However, there is a fundamental limitation of training PINNs to solve multi-dimensional PDEs and approximate highly complex solution functions. The number of training points (collocation points) required on these challenging PDEs grows substantially, but it is severely limited due to the expensive computational costs and heavy memory overhead. To overcome this issue, we propose a network architecture and training algorithm for PINNs. The proposed method, _separable PINN (SPINN)_, operates on a per-axis basis to significantly reduce the number of network propagations in multi-dimensional PDEs unlike point-wise processing in conventional PINNs. We also propose using forward-mode automatic differentiation to reduce the computational cost of computing PDE residuals, enabling a large number of collocation points (\(>10^{7}\)) on a single commodity GPU. The experimental results show drastically reduced computational costs (\(62\times\) in wall-clock time, \(1,394\times\) in FLOPs given the same number of collocation points) in multi-dimensional PDEs while achieving better accuracy. Furthermore, we present that SPINN can solve a chaotic (2+1)-d Navier-Stokes equation significantly faster than the best-performing prior method (9 minutes vs 10 hours in a single GPU), maintaining accuracy. Finally, we showcase that SPINN can accurately obtain the solution of a highly nonlinear and multi-dimensional PDE, a (3+1)-d Navier-Stokes equation. For visualized results and code, please see [https://jwcho5576.github.io/spinn.github.io/](https://jwcho5576.github.io/spinn.github.io/).
## 1 Introduction
Solving partial differential equations (PDEs) has been a long-standing problem in various science and engineering domains. Finding analytic solutions requires in-depth expertise and is often infeasible in many useful and important PDEs [11]. Hence, numerical approximation methods to solutions have been extensively studied [17], e.g., spectral methods [4], finite volume methods (FVM) [10], finite difference method (FDM) [38], and finite element methods (FEM) [45]. While successful, classical methods have several limitations, such as expensive computational costs, requiring sophisticated techniques to support multi-physics and multi-scale systems, and the curse of dimensionality in high dimensional PDEs.
With the vast increases in computational power and methodological advances in machine learning, researchers have explored data-driven and learning-based methods [2; 5; 24; 29; 33; 37]. Among the promising methods, physics-informed neural networks (PINNs) have recently emerged as new data-driven PDE solvers for both forward and inverse problems [34]. PINNs employ neural networks and
gradient-based optimization algorithms to represent and obtain the solutions, leveraging automatic differentiation to enforce the physical constraints of underlying PDE. It has enjoyed great success in various forward and inverse problems thanks to its numerous benefits, such as flexibility in handling a wide range of forward and inverse problems, mesh-free solutions, and not requiring observational data, hence, unsupervised training.
Despite its advantages and promising results, there is a fundamental limitation of training PINN to solve multi-dimensional PDEs and approximate very complex solution functions. It primarily stems from using coordinate-based MLP architectures to represent the solution function, which takes input coordinates and outputs corresponding solution quantities. For each training point, computing PDE residual loss involves multiple forward and backward propagations, and the number of training points (collocation points) required to solve multi-dimensional PDEs and obtain more accurate solutions grows substantially. The situation deteriorates as the dimensionality of the PDE or the solution's complexity increases.
Recent studies have presented empirical evidence showing that choosing a larger batch size (i.e., a large number of collocation points) in training PINNs leads to enhanced precision [34, 35, 36, 42]. Furthermore, more collocation points also accelerate the convergence speed due to the scaling characteristic between the batch size and the learning rate (the larger the batch size, the higher the learning rate) [39, 27, 12]. As long as the computational resources allow, we have the flexibility to utilize a substantial number of training points in PINNs since they can be continuously sampled from the input domain in an unsupervised training manner.
We propose a novel PINN architecture, _separable PINN (SPINN)_, which utilizes forward-mode* automatic differentiation (AD) to enable a large number of collocation points (\(>10^{7}\) in a single GPU), reducing the computational cost of solving multi-dimensional PDEs. Instead of feeding every multi-dimensional coordinate into a single MLP, we use separated sub-networks, in which each sub-network takes independent one-dimensional coordinates as input. The final output is generated by an aggregation module such as a simple outer product and element-wise summation, where the predicted solution can be interpreted as a low-rank tensor approximation [25] (Fig. 4b). The suggested architecture obviates the need to query every multi-dimensional coordinate input pair, exponentially reducing the number of network propagations to generate a solution, \(\mathcal{O}(N^{d})\rightarrow\mathcal{O}(Nd)\), where \(N\) is the resolution of the solution for each dimension, and \(d\) is the dimension of the system.
Footnote *: a.k.a. forward accumulation mode or tangent linear mode.
We have conducted comprehensive experiments on representative PDEs to show the effectiveness of the suggested method. The experimental results demonstrate that the computational costs of the conventional PINN increase linearly with the number of collocation points, while the proposed model shows logarithmic growth. This allows SPINN to accommodate an orders-of-magnitude larger number of collocation points _in a single batch_ during training. We also show that given the same number of training points, SPINN improves wall-clock training time by up to \(62\times\) on commodity GPUs and FLOPs by up to \(1,394\times\) while achieving better accuracy. Furthermore, with large-scale collocation points, SPINN can solve a turbulent Navier-Stokes equation much faster than the state-of-the-art PINN method [42] (9 minutes vs 10 hours on a single GPU) without bells and whistles, such as causal
Figure 1: Training speed (w/ a single GPU) of our model compared to the causal PINN [42] in (2+1)-d Navier-Stokes equation of time interval [0, 0.1].
inductive bias in the loss function (Fig. 1). Our experimental results on the Navier-Stokes equation show that SPINN can solve highly nonlinear PDEs and is sufficiently expressive to represent complex functions. This is further supported by the provided theoretical result, potentiating the use of our method for more challenging and various PDEs.
## 2 Related Works
Physics-informed neural networks. Physics-Informed Neural Networks (PINNs) [34] have received great attention as a promising learning-based PDE solver. Given the underlying PDE and initial and boundary conditions embedded in a loss function, a coordinate-based neural network is trained to approximate the desired solution. Since its inception, many techniques have been studied to improve training PINNs for more challenging PDE systems [26; 42; 43], or to accelerate the training speed [20; 36]. Our method is orthogonal to most of the previously suggested techniques above and improves PINNs' training from a computational perspective.
The effect of collocation points. The residual loss of PINNs is calculated by Monte Carlo integration, not an exact definite integral. Therefore, it inherits a core property of Monte Carlo methods [15]: the impact of the number of sampled points. The importance of the sampling strategy and the number of collocation points in training PINNs has been highlighted by recent works [8; 23; 35; 42]. In particular, Sankaran et al. [35] empirically found that training with a large number of collocation points is unconditionally favorable for PINNs in terms of accuracy and convergence speed. Another line of research established a theoretical upper bound on PINN's statistical error with respect to the number of collocation points [23]. It showed that a larger number of collocation points is required when a bigger network is used for training. Our work builds on this evidence to bring PINNs out into more practical scenarios and overcome their limitations.
Derivative computations in PINNs. Several studies have introduced techniques to increase the efficiency of derivative calculations in training PINNs, where obtaining derivatives with respect to input coordinates is an essential process, often achieved through reverse-mode automatic differentiation. DT-PINN [36] utilized the finite difference (FD) method to accelerate the training. They specifically employed the radial basis function-finite differences (RBF-FD) method [41], which can be applied to irregular domains to maintain the mesh-agnostic characteristic of PINNs. However, the RBF-FD method can only be used for calculating spatial derivatives, and DT-PINN still had to employ AD for temporal derivatives. Another work, fPINN [32], also used the FD method to solve fractional-order PDEs, which addresses the limitation of automatic differentiation. CAN-PINN [6] adopted both FD and AD to redefine the loss function and accelerate the training. Employing Taylor-mode AD [13] in training PINNs was introduced by causal PINN [42] to handle high-order PDEs such as the Kuramoto-Sivashinsky equation [28]. To the best of our knowledge, the proposed method is the first approach to leverage forward-mode AD in training PINNs, which is fully applicable to both time-dependent and time-independent PDEs and does not incur any truncation errors.
Multiple MLP networks. Employing multiple MLPs for PINNs has been introduced by several works to utilize parallelized training [19; 21; 31]. They share the same concept of decomposing the spatio-temporal domain and training multiple individual MLPs on each sub-domain. Although these methods showed promising results, they still suffer from the fundamental problem of heavy computation as the number of collocation points increases. While these methods decompose input domains, with each small MLP covering a particular subdomain, we decompose input dimensions and solve PDEs over the entire domain with all the separated MLPs cooperating. In terms of the model architecture and function representation, our work is also related to NAM [1], Haghighat et al. [14], and CoordX [30]. NAM [1] suggested separated network architectures and inputs, but only for achieving interpretability of the model's predictions in multi-task learning. Haghighat et al. [14] used multiple MLPs to individually predict each component of the _output_ vector. CoordX's [30] primary purpose is to reconstruct natural signals, such as images or 3D shapes. Hence, they are not motivated to improve the efficiency of computing higher-order gradients. In addition, they had to use additional layers after the feature merging step, which makes their model more computationally expensive than ours. Furthermore, in neural fields [44], there is an explicit limitation on the number of ground truth data points (e.g., the number of pixels in an image). Therefore, CoordX cannot fully maximize the advantage of enabling a large number of input coordinates. We focus on solving
PDEs and carefully devised the architecture to exploit forward-mode AD to efficiently compute PDE residual losses.
## 3 Preliminaries: Forward/Reverse-mode AD
For the completeness of this paper, we start by briefly introducing the two types of AD and how Jacobian matrices are evaluated. For clarity, we will follow the notations used in [3] and [13]. Suppose our function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\) is a two-layer MLP with 'tanh' activation. The left-hand side of Tab. 1 demonstrates a single forward trace of \(f\). To obtain a \(m\times n\) Jacobian matrix \(\mathbb{J}_{f}\) in forward-mode, we compute the Jacobian-vector product (JVP),
\[\mathbb{J}_{f}r=\begin{bmatrix}\frac{\partial y_{1}}{\partial x_{1}}&\dots&\frac{\partial y_{1}}{\partial x_{n}}\\ \vdots&\ddots&\vdots\\ \frac{\partial y_{m}}{\partial x_{1}}&\dots&\frac{\partial y_{m}}{\partial x_{n}}\end{bmatrix}\begin{bmatrix}\frac{\partial x_{1}}{\partial x_{i}}\\ \vdots\\ \frac{\partial x_{n}}{\partial x_{i}}\end{bmatrix}, \tag{1}\]
for \(i\in\{1,\dots,n\}\). The forward-mode AD is a one-phase process: while tracing the primals (intermediate values) \(v_{k}\), it continues to evaluate and accumulate their tangents \(\dot{v}_{k}=\partial v_{k}/\partial x_{i}\) (the middle column of Tab. 1). This is equivalent to decomposing one large JVP into a series of JVPs by the chain rule and computing them from right to left. A run of JVP with the initial tangent \(\dot{v}_{0}\) set to the first column of the identity matrix \(I_{n}\) gives the first column of \(\mathbb{J}_{f}\). Thus, the full Jacobian can be obtained in \(n\) forward passes.
On the other hand, the reverse-mode AD computes vector-Jacobian product (VJP):
\[r^{\top}\mathbb{J}_{f}=\begin{bmatrix}\frac{\partial y_{j}}{\partial y_{1}}&\dots&\frac{\partial y_{j}}{\partial y_{m}}\end{bmatrix}\begin{bmatrix}\frac{\partial y_{1}}{\partial x_{1}}&\dots&\frac{\partial y_{1}}{\partial x_{n}}\\ \vdots&\ddots&\vdots\\ \frac{\partial y_{m}}{\partial x_{1}}&\dots&\frac{\partial y_{m}}{\partial x_{n}}\end{bmatrix}, \tag{2}\]
for \(j\in\{1,\dots,m\}\), which is the reverse-order operation of JVP. This is a two-phase process. The first phase corresponds to forward propagation, storing all the primals, \(v_{k}\), and recording the elementary operations in the computational graph. In the second phase, the derivatives are computed by accumulating the adjoints \(\bar{v}_{k}=\partial y_{j}/\partial v_{k}\) (the right-hand side of Tab. 1). Since VJP builds one row of a Jacobian at a time, it takes \(m\) evaluations to obtain the full Jacobian. To sum up, the forward-mode is more efficient for a tall Jacobian (\(m>n\)), while the reverse-mode is better suited for a wide Jacobian (\(n>m\)). Fig. 2 shows an illustrative example and please refer to Baydin et al. [3] for more details.
\begin{table}
\begin{tabular}{l l l} \hline Forward primal trace & Forward tangent trace & Backward adjoint trace \\ \hline \(v_{0}=x\) & \(\dot{v}_{0}=\dot{x}\) & \(\bar{x}=\bar{v}_{0}\) \\ \(v_{1}=W_{1}\cdot v_{0}\) & \(\dot{v}_{1}=W_{1}\cdot\dot{v}_{0}\) & \(\bar{v}_{0}=\bar{v}_{1}\cdot\frac{\partial v_{1}}{\partial v_{0}}=\bar{v}_{1}\cdot W_{1}\) \\ \(v_{2}=\text{tanh}(v_{1})\) & \(\dot{v}_{2}=\text{tanh}^{\prime}(v_{1})\circ\dot{v}_{1}\) & \(\bar{v}_{1}=\bar{v}_{2}\cdot\frac{\partial v_{2}}{\partial v_{1}}=\bar{v}_{2}\circ\text{tanh}^{\prime}(v_{1})\) \\ \(v_{3}=W_{2}\cdot v_{2}\) & \(\dot{v}_{3}=W_{2}\cdot\dot{v}_{2}\) & \(\bar{v}_{2}=\bar{v}_{3}\cdot\frac{\partial v_{3}}{\partial v_{2}}=\bar{v}_{3}\cdot W_{2}\) \\ \hline \(y=v_{3}\) & \(\dot{y}=\dot{v}_{3}\) & \(\bar{v}_{3}=\bar{y}\) \\ \hline \end{tabular}
\end{table}
Table 1: An example of forward and reverse-mode AD in a two-layer tanh MLP. Here \(v_{0}\) denotes the input variable, \(v_{k}\) the primals, \(\dot{v}_{k}\) the tangents, \(\bar{v}_{k}\) the adjoints, and \(W_{1},W_{2}\) the weight matrices. Biases are omitted for brevity.
Figure 3: An illustrative example of separated approach vs non-separated approach when \(f^{(\theta)}:\mathbb{R}^{d}\to\mathbb{R}\) (an example case of \(N=3\), \(d=3\) is shown above). The number of JVP (forward-mode) evaluations (propagations) of computing the Jacobian for separated approach (a) is \(Nd\), while the number of VJP (reverse-mode) evaluations for non-separated approach (b) is \(N^{d}\).
Figure 2: Simple neural networks with different input and output dimensions. To compute \(\frac{\partial y}{\partial x}\), (a) requires one forward pass using forward-mode AD or three backward passes using reverse-mode AD, while (b) requires three forward passes using forward-mode AD or one backward pass using reverse-mode AD.
## 4 Separable PINN
### Forward-Mode AD with Separated Functions
We demonstrate that leveraging forward-mode AD and separating the function into multiple functions with respect to input axes can significantly reduce the cost of computing the Jacobian. In the proposed separated approach (Fig. 3(a)), we first sample \(N\) one-dimensional coordinates on each of \(d\) axes, which makes a total batch size of \(Nd\). Next, these coordinates are fed into \(d\) individual functions. Let \(f\) be a function which takes a coordinate as input to produce a feature representation, and denote the number of operations of \(f\) by \(\mathsf{ops}(f)\). Then, a feature merging function \(h\) is used to construct the solution over the entire \(N^{d}\) discretized points. The computation for AD is known to be 2\(\sim\)3 times more expensive than forward propagation [3; 13]. According to Fig. 3 and with the scale constants \(c_{f},c_{h}\in[2,3]\), we can approximate the total number of operations to compute the Jacobian matrix of the proposed separated approach (Fig. 3(a)).
\[\mathcal{C}_{\text{sep}}=Ndc_{f}\mathsf{ops}(f)+N^{d}c_{h}\mathsf{ops}(h). \tag{3}\]
For the non-separated approach (Fig. 3(b)),
\[\mathcal{C}_{\text{non-sep}}=N^{d}c_{f}\mathsf{ops}(f), \tag{4}\]
If we can make \(\mathsf{ops}(h)\) sufficiently small, then the ratio \(\mathcal{C}_{\text{sep}}/\mathcal{C}_{\text{non-sep}}\) becomes \(\frac{Nd}{N^{d}}\ll 1\), quickly converging to 0 as \(d\) and \(N\) increase. Our separated approach has almost linear complexity with respect to \(N\), implying it can obtain more accurate (higher-resolution) solutions efficiently.
Fig. 3(a) illustrates the overall SPINN architecture, parameterizing multiple separated functions with neural networks. SPINN consists of \(d\) body-networks (MLPs), each of which takes an individual 1-dimensional coordinate component as an input. Each body-network \(f^{(\theta_{i})}:\mathbb{R}\rightarrow\mathbb{R}^{r}\) (parameterized by \(\theta_{i}\)) is a vector-valued function which transforms the coordinates of \(i\)-th axis into a \(r\)-dimensional feature representation. The final prediction is computed by feature merging:
\[\hat{u}(x_{1},x_{2},\ldots,x_{d})=\sum_{j=1}^{r}\prod_{i=1}^{d}f_{j}^{(\theta _{i})}(x_{i}) \tag{5}\]
where \(\hat{u}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is the predicted solution function, \(x_{i}\in\mathbb{R}\) is a coordinate of \(i\)-th axis, and \(f_{j}^{(\theta_{i})}\) denotes the \(j\)-th element of \(f^{(\theta_{i})}\). We used 'tanh' activation function throughout the paper. As shown in Eq. 5, the feature merging operation is a simple product (\(\Pi\)) and summation (\(\Sigma\)) which
Figure 4: (a) SPINN architecture in a 3-dimensional system. To solve a \(d\)-dimensional PDE, our model requires \(d\) body MLP networks, each of which takes individual scalar coordinate values as input and gives \(r\)-dimensional feature vector. The final output is obtained by element-wise product and summation. (b) Construction process of the entire discretized solution tensor when the inputs are given in batches. Each outer product between the column vectors \(F_{:,j,i}\) from the feature tensor \(F\) constructs a rank-1 tensor and summing all the \(r\) tensors gives a rank-\(r\) tensor. The output tensor of SPINN can be interpreted as a low-rank decomposed representation of a solution.
corresponds to the merging function \(h\) described in Eq. 3. Due to its simplicity, \(h\) operations are much cheaper than operations in MLP layers (i.e., \(\mathsf{ops}(h)\ll\mathsf{ops}(f)\) in Eq. 3). Note that SPINN can also approximate any \(m\)-dimensional vector functions \(\hat{u}:\mathbb{R}^{d}\to\mathbb{R}^{m}\) by using a larger output feature size (see section D.4 in the appendix for details).
The collocation points of our model and conventional PINNs have a distinct difference (Fig. 5). Both are _uniformly_ evaluated on a \(d\)-dimensional hypercube, but the collocation points of SPINN form a lattice-like structure, which we call _factorizable coordinates_. In SPINN, 1-dimensional input points on each axis are randomly sampled, and \(d\)-dimensional points are generated via the Cartesian product of the point sets from each axis. On the other hand, non-factorizable coordinates are randomly sampled points without any structure. Factorizable coordinates with our separated MLP architecture enable us to evaluate functions on dense (\(N^{d}\)) collocation points with a small number (\(Nd\)) of input points.
In practice, the input coordinates are given in a batch during training and inference. Assume that \(N\) input coordinates (training points) are sampled from each axis. Note that the sampling resolutions for each axis need not be the same. The input coordinates now form a matrix \(X\in\mathbb{R}^{N\times d}\). The batchified form of the feature representation is \(F\in\mathbb{R}^{N\times r\times d}\), and Eq. 5 now becomes
\[\hat{U}(X_{:,1},X_{:,2},\ldots,X_{:,d})=\sum_{j=1}^{r}\bigotimes_{i=1}^{d}F_{:,j,i}, \tag{6}\]
where \(\hat{U}\in\mathbb{R}^{N\times N\times\ldots\times N}\) is the discretized solution tensor, \(\bigotimes\) denotes outer product, \(F_{:,:,i}\in\mathbb{R}^{N\times r}\) is an \(i\)-th frontal slice matrix of tensor \(F\), and \(F_{:,j,i}\in\mathbb{R}^{N}\) is the \(j\)-th column of the matrix \(F_{:,:,i}\). Fig. 4b shows an illustrative procedure of Eq. 6. Due to its structural input points and outer products between feature vectors, SPINN's solution approximation can be viewed as a low-rank tensor decomposition where the feature size \(r\) is the rank of the reconstructed tensor. Among many decomposition methods, SPINN corresponds to CP-decomposition [16], which approximates a tensor by finite summation of rank-1 tensors. While traditional methods use iterative methods such as alternating least-squares (ALS) or alternating slicewise diagonalization (ASD) [22] to directly fit the decomposed vectors, we train neural networks to learn the decomposed vector representation and approximate the solution functions in continuous input domains. This, in turn, allows for the calculation of derivatives with respect to arbitrary input coordinates.
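In JAX, the merging step of Eq. 6 is a single einsum over the per-axis features; below is a minimal sketch for \(d=3\), with random arrays standing in for the body-network outputs:

```python
import jax
import jax.numpy as jnp

def merge_features(f1, f2, f3):
    """Eq. 6 for d = 3: U[a,b,c] = sum_j f1[a,j] * f2[b,j] * f3[c,j]."""
    return jnp.einsum('aj,bj,cj->abc', f1, f2, f3)

# Toy per-axis features standing in for the body-network outputs
# f^{(theta_i)} evaluated on N one-dimensional coordinates.
N, r = 8, 4
keys = jax.random.split(jax.random.PRNGKey(0), 3)
feats = [jax.random.normal(k, (N, r)) for k in keys]
U = merge_features(*feats)   # (N, N, N) tensor of rank at most r
```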
### Gradient Computation of SPINN
In this section, we show that the number of JVP (forward-mode AD) evaluations for computing the full gradient of SPINN (\(\nabla\hat{u}(x)\)) is \(Nd\), where \(N\) is the number of coordinates for each axis and \(d\) is the input dimension. According to Eq. 5, the \(i\)-th element of \(\nabla\hat{u}(x)\) is:
\[\frac{\partial\hat{u}}{\partial x_{i}}=\sum_{j=1}^{r}f_{j}^{(\theta_{1})}(x_{ 1})f_{j}^{(\theta_{2})}(x_{2})\ldots\frac{\partial f_{j}^{(\theta_{i})}(x_{i} )}{\partial x_{i}}\ldots f_{j}^{(\theta_{d})}(x_{d}). \tag{7}\]
Computing this derivative requires the feature representations \(f_{j}^{(\theta_{i})}\), but we can reuse them from the forward-pass results computed beforehand. Note that all \(r\) components of the feature derivatives \(\partial f^{(\theta_{i})}(x_{i})/\partial x_{i}:\mathbb{R}\to\mathbb{R}^{r}\) can be obtained in a single pass thanks to forward-mode AD. To obtain the full gradient vector \(\nabla\hat{u}(x):\mathbb{R}^{d}\to\mathbb{R}^{d}\), we iterate the calculation of Eq. 7 \(d\) times, switching the input axis \(i\). Also, since each iteration involves \(N\) training samples, the total number of JVP evaluations becomes \(Nd\), which is consistent with Eq. 3.
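Concretely, each per-axis feature derivative costs one `jax.jvp` call over the batch of \(N\) scalar coordinates. The sketch below, with toy single-layer body networks in place of the actual MLPs, computes the \(\partial\hat{u}/\partial x_{1}\) term of Eq. 7 on the full \(N^{3}\) grid:

```python
import jax
import jax.numpy as jnp

def make_body_fn(key, r=4):
    # Toy stand-in for a body network f^{(theta_i)}: (N, 1) -> (N, r).
    W = jax.random.normal(key, (1, r))
    return lambda x: jnp.tanh(x @ W)

keys = jax.random.split(jax.random.PRNGKey(0), 3)
body_fns = [make_body_fn(k) for k in keys]
coords = [jnp.linspace(0.0, 1.0, 8).reshape(-1, 1) for _ in range(3)]

f2, f3 = body_fns[1](coords[1]), body_fns[2](coords[2])
# One JVP over the batch of N scalar coordinates yields all r components
# of d f^{(theta_1)} / d x_1 at once (the Jacobian is batch-diagonal).
_, df1 = jax.jvp(body_fns[0], (coords[0],), (jnp.ones_like(coords[0]),))
du_dx1 = jnp.einsum('aj,bj,cj->abc', df1, f2, f3)   # Eq. 7 with i = 1
```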
Figure 5: An illustrative 2-dimensional example of (a) factorizable and (b) non-factorizable coordinates. (a) has a lattice-like structure, where SPINN can be evaluated on more dense collocation points with fewer input points. Conventional PINNs sample non-factorizable coordinates which do not have any structures.
### Universal Approximation Property
It is widely known that neural networks with sufficiently many hidden units have the expressive power of approximating any continuous functions [7; 18]. However, it is not straightforward that our suggested architecture enjoys the same capability. For the completeness of the paper, we provide a universal approximation property of the proposed method.
**Theorem 1**.: _(Proof in appendix) Let \(X,Y\) be compact subsets of \(\mathbb{R}^{d}\). Choose \(u\in L^{2}(X\times Y)\). Then, for arbitrary \(\varepsilon>0\), we can find a sufficiently large \(r>0\) and neural networks \(f_{j}\) and \(g_{j}\) such that_
\[\left\|u-\sum_{j=1}^{r}f_{j}g_{j}\right\|_{L^{2}(X\times Y)}<\varepsilon. \tag{8}\]
By repeatedly applying Theorem 1, we can show that SPINN can approximate any function in \(L^{2}\) in the high-dimensional input space. An approximation property for a broader function space is a fruitful research area, and we leave it to future work. To support the expressive power of the suggested method, we empirically showed that SPINN can accurately approximate solution functions of various challenging PDEs, including the diffusion, Helmholtz, Klein-Gordon, and Navier-Stokes equations.
## 5 Experiments
### Experimental setups
We compared SPINN against vanilla PINN [34] on 3-d (diffusion, Helmholtz, Klein-Gordon, and Navier-Stokes equation) and 4-d (Klein-Gordon and Navier-Stokes) PDE systems. Every experiment was run with a different number of collocation points, and we also applied the modified MLP introduced in [43] to both PINN and SPINN. For 3-d systems, \(64^{3}\) collocation points was the upper limit for the vanilla PINN (\(54^{3}\) for modified MLP) when we trained with a single NVIDIA RTX3090 GPU with 24GB of memory. However, the memory usage of our model was significantly smaller, enabling SPINN to use a larger number of collocation points (up to \(256^{3}\)) to get more accurate solutions. This is because SPINN stores a much smaller batch of tensors, which are the primals (\(v_{k}\) in Tab. 1), while building the computational graph. All reported error metrics are average relative \(L_{2}\) errors computed by \(\|\hat{u}-u\|_{2}/\|u\|_{2}\), where \(\hat{u}\) is the model prediction and \(u\) is the reference solution. Every experiment is performed seven times (three times for (2+1)-d and five times for (3+1)-d Navier-Stokes, respectively) with different random seeds. More detailed experimental settings are provided in the appendix.
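For reference, the reported metric is a one-liner (our transcription):

```python
import jax.numpy as jnp

def relative_l2_error(u_pred, u_ref):
    # Relative L2 error between prediction and reference on a grid.
    return jnp.linalg.norm(u_pred - u_ref) / jnp.linalg.norm(u_ref)
```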
Figure 6: Overall results of (a) diffusion, (b) Helmholtz, (c) (2+1)-d Klein-Gordon, and (d) (3+1)-d Klein-Gordon experiments. It shows comparisons among PINN, PINN with modified MLP (PINN-mod), SPINN, and SPINN with modified MLP (SPINN-mod). For each experiment, the first to third columns show the relative error, runtime, and GPU memory versus different numbers of collocation points, respectively. The rightmost column shows the training curves of each model when the number of collocation points is \(64^{3}\) (\(54^{3}\) for PINN with modified MLP). For (3+1)-d Klein-Gordon, the training curve is plotted when each model is trained with \(16^{4}\) collocation points. The error and the training curves are averaged over 7 different runs and 70% confidence intervals are provided. Note that the y-axis of every plot is on a log scale.
### Results
Fig. 6 shows the overall results of forward problems on three 3-d systems (diffusion, Helmholtz, and Klein-Gordon) and one 4-d system (Klein-Gordon). SPINN is significantly more computationally efficient than the baseline PINN in wall-clock run-time. For every PDE, SPINN with the modified MLP found the most accurate solution. Furthermore, when the number of collocation points grows exponentially, the memory usage and the actual run-time of SPINN increase almost linearly. We also confirmed that we can get more accurate solutions with more collocation points. This training characteristic of SPINN substantiates that our method is very effective for solving multi-dimensional PDEs. Furthermore, with the help of the method outlined in Griewank and Walther [13], we estimated the FLOPs for evaluating the derivatives. Compared to the baseline, SPINN requires \(1,394\times\) fewer operations to compute the forward pass, first and second order derivatives. Further details regarding the FLOPs estimation are provided in section C of the appendix. In the following sections, we give more detailed descriptions and analyses of each experiment.
**Diffusion Equation.** When trained with \(128^{3}\) collocation points, SPINN with the modified MLP finds the most accurate solution. Furthermore, when trained with the same number of collocation points (\(64^{3}\)), SPINN-mod is \(52\times\) faster and \(29\times\) more memory efficient than the baseline with modified MLP. The visualized solution can be found in the appendix. Since the constructed solution is relatively simple compared to the other PDE experiments, the baseline PINNs also find a comparatively good solution. However, our model finds a more accurate solution with a larger number of collocation points at minor computational overhead. The exact numerical values are provided in the appendix.
**Helmholtz Equation.** The results of the Helmholtz experiments are shown in Fig. 5(b). Due to stiffness in the gradient flow, conventional PINNs struggle to find accurate solutions. Therefore, a modified MLP and a learning-rate annealing algorithm were suggested to mitigate this phenomenon [43]. We found that even PINNs with the modified MLP fail to solve the equation when the solution is complex (contains high-frequency components). However, SPINN obtains one order of magnitude lower relative error without bells and whistles. This is because each body network of SPINN learns an individual 1-dimensional function, which mitigates the burden of representing the entire 3-dimensional function. This makes it much easier for our model to learn complex functions. Given the same number of collocation points (\(64^{3}\)), the training speed of our proposed model with modified MLP is \(62\times\) faster, and the memory usage is \(29\times\) smaller than the baseline with modified MLP. The exact numerical values are provided in the appendix.
**Klein-Gordon Equation.** Fig. 5(c) shows the results. Again, our method shows the best performance in terms of accuracy, runtime (\(62\times\)), and memory usage (\(29\times\)). Note that both the Helmholtz and Klein-Gordon equations contain three second-order derivative terms, while the diffusion equation contains only two. Accordingly, our results showed the largest differences in runtime and memory usage on the Helmholtz and Klein-Gordon equations. This is because SPINN significantly reduces the AD computations with forward-mode AD, implying SPINN is very efficient for solving high-order PDEs.
We investigated the Klein-Gordon experiment further, extending it to a 4-d system by adding one more spatial axis. As shown in Fig. 5(d), the baseline PINN can only process \(23^{4}\) (\(18^{4}\) for modified MLP) collocation points at once due to its high memory demand. On the other hand, SPINN can exploit \(64^{4}\) collocation points (\(160\times\) larger than the baseline) to obtain a more precise solution. The exact numerical values are provided in the appendix.
**(2+1)-d Navier-Stokes Equation.** We constructed SPINN to predict the velocity field \(u\) and applied forward-mode AD to obtain the vorticity \(\omega\). We used the same PDE setting as causal PINN [42] and compared SPINN against their model. As shown in Tab. 2, our model finds a comparably accurate solution even without the causal loss function. Furthermore, since causal PINN had to iterate over four different tolerance values \(\epsilon\) in the causal loss function, it had to run 3\(\sim\)4 times more training epochs, depending on the stopping criterion. As a result, given the same number of collocation points, SPINN converges \(60\times\) faster than causal PINN in terms of training runtime. The results on the Navier-Stokes equation confirm that SPINN can successfully solve a chaotic, highly nonlinear PDE. In Fig. 7, we demonstrate the effect of the rank \(r\) on solving the equation. We observed that the performance almost converges at a rank of 128, and increasing the rank does not slow down the training speed too much. Further details and a visualization of the predicted vorticity map are shown in appendix section D.4.

Figure 7: Relative error and training speed of the (2+1)-d Navier-Stokes experiment vs. different ranks of SPINN.
**(3+1)-d Navier-Stokes Equation.** We proceeded to explore our model on the (3+1)-d Navier-Stokes equation by devising a manufactured solution corresponding to the Taylor-Green vortex [40]. It models a decaying vortex and is widely used for testing Navier-Stokes numerical solvers [9]. Since the vorticity of the (3+1)-d Navier-Stokes equation is a 3-d vector, as opposed to the (2+1)-d system, the equation has three independent components, resulting in a total of 33 derivative terms (see section D.5 in the appendix). Similar to the (2+1)-d experiment, the network's output is the velocity field, and we obtained the 3-d vorticity by forward-mode AD. Tab. 3 shows the relative error, runtime, and GPU memory. Trained with a single GPU, SPINN achieves a relative error of 1.9e-3 in less than 30 minutes. Visualized velocity and vorticity vector fields are provided in the appendix.
## 6 Limitations and Future Works
Although we presented comprehensive experimental results to show the effectiveness of our model, there remain questions on solving more challenging and higher dimensional PDEs. Combining existing PINN training techniques, such as learning-rate annealing [43], curriculum learning [26], and the causal loss function [42], with ours is an exciting research direction, and we leave it to future work. Applying SPINN to higher dimensional PDEs, such as the BGK equation, would also have a tremendous practical impact. Lastly, we are still far from the theoretical speed-up (wall-clock runtime vs. FLOPs), which is further discussed in appendix section C. We expect to additionally reduce runtime by optimizing hardware/software, e.g., using customized CUDA kernels.
## 7 Conclusion
We showed a simple yet powerful method to employ large-scale collocation points for PINNs training, leveraging forward-mode AD with the separated MLPs. Experimental results demonstrated that our method significantly reduces both spatial and computational complexity while achieving better accuracy on various three and four-dimensional PDEs. To our knowledge, it is the first attempt to exploit the power of forward-mode AD. We believe this work opens up a new direction to rethink the architectural design of neural networks in many scientific applications.
\begin{table}
\begin{tabular}{c|c c|c c c|c c c} \hline \hline model & \multicolumn{2}{c|}{PINN+mod [43]} & \multicolumn{3}{c|}{causal PINN [42]} & \multicolumn{3}{c}{SPINN (ours)} \\ \hline \(N_{c}\) & \(2^{12}\) & \(2^{15}\) & \(2^{12}\) & \(2^{15}\) & \(2^{15}\) & \(2^{15}\) & \(2^{18}\) & \(2^{21}\) \\ \hline relative \(L_{2}\) error & 0.0694\(\pm\)0.0091 & 0.0581\(\pm\)0.0135 & 0.0578\(\pm\)0.0117 & 0.0401\(\pm\)0.0084 & 0.0353\({}^{\dagger}\) & 0.0780 & 0.0363 & 0.0355 \\ runtime & 03:20 & 07:52 & 10:09 & 23:03 & - & 00:07 & 00:09 & 00:14 \\ memory (MB) & 5,198 & 17,046 & 5,200 & 17,132 & - & 764 & 892 & 1,276 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The numerical result of the (2+1)-d Navier-Stokes equation compared against PINN with modified MLP (PINN+mod) and causal PINN. \(N_{c}\) is the number of collocation points. The fourth row shows the training runtime of a single time window in time-marching training. All relative \(L_{2}\) errors are averaged over 3 runs.
\({}^{\dagger}\) This is the error (a single-run result) reported by the causal PINN paper. Since causal PINN is sensitive to the choice of parameter initialization, we also report the average error obtained from different random seeds by running their official code.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline model & \(N_{c}\) & relative \(L_{2}\) error & runtime (ms/iter.) & memory (MB) \\ \hline & \(8^{4}\) & 0.0090 & 15.33 & 768 \\ SPINN & \(16^{4}\) & 0.0041 & 16.83 & 1,192 \\ & \(32^{4}\) & 0.0019 & 26.73 & 2,946 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The numerical result of the (3+1)-d Navier-Stokes equation. \(N_{c}\) is the number of collocation points on the entire domain. The relative errors are obtained by comparing the vorticity vector field and averaged over five different runs.
## Acknowledgments and Disclosure of Funding
This research was supported by the Ministry of Science and ICT (MSIT) of Korea, under the National Research Foundation (NRF) grants (2022R1F1A1064184, 2022R1A4A3033571) and the Institute of Information & Communications Technology Planning & Evaluation (IITP) grants for the AI Graduate School program (IITP-2019-0-00421). The research of Seok-Bae Yun was supported by Samsung Science and Technology Foundation under Project Number SSTF-BA1801-02.
## References
* [1] Rishabh Agarwal, Levi Melnick, Nicholas Frosst, Xuezhou Zhang, Ben Lengerich, Rich Caruana, and Geoffrey E Hinton. Neural additive models: Interpretable machine learning with neural nets. _Advances in Neural Information Processing Systems_, 34:4699-4711, 2021.
* [2] Vsevolod I Avrutskiy. Neural networks catching up with finite differences in solving partial differential equations in higher dimensions. _Neural Computing and Applications_, 32(17):13425-13440, 2020.
* [3] Atilim Gunes Baydin, Barak A Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. Automatic differentiation in machine learning: a survey. _Journal of Machine Learning Research_, pages 1-43, 2018.
* [4] John P Boyd. _Chebyshev and Fourier spectral methods_. Courier Corporation, 2001.
* [5] Steven L Brunton and J Nathan Kutz. _Data-driven science and engineering: Machine learning, dynamical systems, and control_. Cambridge University Press, 2022.
* [6] Pao-Hsiung Chiu, Jian Cheng Wong, Chinchun Ooi, My Ha Dao, and Yew-Soon Ong. CAN-PINN: A fast physics-informed neural network based on coupled-automatic-numerical differentiation method. _Computer Methods in Applied Mechanics and Engineering_, 395:114909, 2022.
* [7] George Cybenko. Approximation by superpositions of a sigmoidal function. _Mathematics of Control, Signals and Systems_, 2(4):303-314, 1989.
* [8] Arka Daw, Jie Bu, Sifan Wang, Paris Perdikaris, and Anuj Karpatne. Rethinking the importance of sampling in physics-informed neural networks. _arXiv preprint arXiv:2207.02338_, 2022.
* [9] C Ross Ethier and DA Steinman. Exact fully 3D Navier-Stokes solutions for benchmarking. _International Journal for Numerical Methods in Fluids_, 19(5):369-375, 1994.
* [10] Robert Eymard, Thierry Gallouet, and Raphaele Herbin. Finite volume methods: handbook of numerical analysis. _PG Ciarlet and JL Lions (Eds)_, 2000.
* [11] Charles L Fefferman. Existence and smoothness of the Navier-Stokes equation. _The Millennium Prize Problems_, 57:67, 2000.
* [12] Priya Goyal, Piotr Dollar, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. _arXiv preprint arXiv:1706.02677_, 2017.
* [13] Andreas Griewank and Andrea Walther. _Evaluating derivatives: principles and techniques of algorithmic differentiation_. SIAM, 2008.
* [14] Ehsan Haghighat, Maziar Raissi, Adrian Moure, Hector Gomez, and Ruben Juanes. A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics. _Computer Methods in Applied Mechanics and Engineering_, 379:113741, 2021.
* [15] John Hammersley. _Monte Carlo methods_. Springer Science & Business Media, 2013.
* [16] Frank L Hitchcock. The expression of a tensor or a polyadic as a sum of products. _Journal of Mathematics and Physics_, 6(1-4):164-189, 1927.
* [17] Joe D Hoffman and Steven Frankel. _Numerical methods for engineers and scientists_. CRC Press, 2018.
* [18] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. _Neural Networks_, 2(5):359-366, 1989.
* [19] Ameya D Jagtap and George E Karniadakis. Extended physics-informed neural networks (XPINNs): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations. In _AAAI Spring Symposium: MLPS_, 2021.
* [20] Ameya D Jagtap, Kenji Kawaguchi, and George Em Karniadakis. Adaptive activation functions accelerate convergence in deep and physics-informed neural networks. _Journal of Computational Physics_, 404:109136, 2020.
* [21] Ameya D Jagtap, Ehsan Kharazmi, and George Em Karniadakis. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. _Computer Methods in Applied Mechanics and Engineering_, 365:113028, 2020.
* [22] Jian-hui Jiang, Hai-long Wu, Yang Li, and Ru-qin Yu. Three-way data resolution by alternating slice-wise diagonalization (asd) method. _Journal of Chemometrics: A Journal of the Chemometrics Society_, 14(1):15-36, 2000.
* [23] Yuling Jiao, Yanming Lai, Dingwei Li, Xiliang Lu, Fengru Wang, Yang Wang, and Jerry Zhijian Yang. A rate of convergence of physics informed neural networks for the linear second order elliptic pdes. _Communications in Computational Physics_, 31(4):1272, 2022.
* [24] George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed machine learning. _Nature Reviews Physics_, 3(6):422-440, 2021.
* [25] Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. _SIAM review_, 51(3):455-500, 2009.
* [26] Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. _Advances in Neural Information Processing Systems_, 34:26548-26560, 2021.
* [27] Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. _arXiv preprint arXiv:1404.5997_, 2014.
* [28] Yoshiki Kuramoto and Toshio Tsuzuki. Persistent propagation of concentration waves in dissipative media far from thermal equilibrium. _Progress of theoretical physics_, 55(2):356-369, 1976.
* [29] Zongyi Li, Nikola Borislavov Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. In _International Conference on Learning Representations_, 2021.
* [30] Ruofan Liang, Hongyi Sun, and Nandita Vijaykumar. Coordx: Accelerating implicit neural representation with a split mlp architecture. In _International Conference on Learning Representations_, 2022.
* [31] Ben Moseley, Andrew Markham, and Tarje Nissen-Meyer. Finite basis physics-informed neural networks (FBPINNs): a scalable domain decomposition approach for solving differential equations. _arXiv preprint arXiv:2107.07871_, 2021.
* [32] Guofei Pang, Lu Lu, and George Em Karniadakis. fPINNs: Fractional physics-informed neural networks. _SIAM Journal on Scientific Computing_, 41(4):A2603-A2626, 2019.
* [33] Maziar Raissi. Deep hidden physics models: Deep learning of nonlinear partial differential equations. _The Journal of Machine Learning Research_, 19(1):932-955, 2018.
* [34] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. _Journal of Computational physics_, 378:686-707, 2019.
* [35] Shyam Sankaran, Hanwen Wang, Leonardo Ferreira Guilhoto, and Paris Perdikaris. On the impact of larger batch size in the training of physics informed neural networks. In _The Symbiosis of Deep Learning and Differential Equations II_, 2022.
* [36] Ramansh Sharma and Varun Shankar. Accelerated training of physics informed neural networks (pinns) using meshless discretizations. _Advances in Neural Information Processing Systems_, 2022.
* [37] Justin Sirignano and Konstantinos Spiliopoulos. Dgm: A deep learning algorithm for solving partial differential equations. _Journal of computational physics_, 375:1339-1364, 2018.
* [38] Gordon Dennis Smith. _Numerical solution of partial differential equations: finite difference methods_. Oxford university press, 1985.
* [39] Samuel L. Smith, Pieter-Jan Kindermans, and Quoc V. Le. Don't decay the learning rate, increase the batch size. In _International Conference on Learning Representations_, 2018.
* [40] Geoffrey Ingram Taylor and Albert Edward Green. Mechanism of the production of small eddies from large ones. _Proceedings of the Royal Society of London. Series A-Mathematical and Physical Sciences_, 158(895):499-521, 1937.
* [41] AI Tolstykh and DA Shirobokov. On using radial basis functions in a "finite difference mode" with applications to elasticity problems. _Computational Mechanics_, 33(1):68-79, 2003.
* [42] Sifan Wang, Shyam Sankaran, and Paris Perdikaris. Respecting causality is all you need for training physics-informed neural networks. _arXiv preprint arXiv:2203.07404_, 2022.
* [43] Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. _SIAM Journal on Scientific Computing_, 43(5):A3055-A3081, 2021.
* [44] Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, and Srinath Sridhar. Neural fields in visual computing and beyond. In _Computer Graphics Forum_, pages 641-676. Wiley Online Library, 2022.
* [45] Olek C Zienkiewicz, Robert Leroy Taylor, and Jian Z Zhu. _The finite element method: its basis and fundamentals_. Elsevier, 2005.
## Appendix A Proof of Theorem 1
Here we show the preliminary lemmas and proofs for Theorem 1 in the main paper. We start by defining a general tensor product between two Hilbert spaces.
**Definition 1**.: Let \(\{v_{\beta}\}\) be an orthonormal basis for \(\mathcal{H}_{2}\). **Tensor product** between Hilbert spaces \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), denoted by \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\), is a set of all antilinear mappings \(A:\mathcal{H}_{2}\rightarrow\mathcal{H}_{1}\) such that \(\sum_{\beta}\lVert Av_{\beta}\rVert^{2}<\infty\) for every orthonormal basis for \(\mathcal{H}_{2}\).
Then by Theorem 7.12 in Folland [3], \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) is also a Hilbert space with respect to the norm \(|\!|\!|\cdot|\!|\!|\) and associated inner product \(\langle\cdot,\cdot\rangle\):

\[|\!|\!|A|\!|\!|^{2}\equiv\sum_{\beta}\lVert Av_{\beta}\rVert^{2}, \tag{1}\]
\[\langle A,B\rangle\equiv\sum_{\beta}\langle Av_{\beta},Bv_{\beta}\rangle, \tag{2}\]
where \(A,B\in\mathcal{H}_{1}\otimes\mathcal{H}_{2}\), and \(\{v_{\beta}\}\) is any orthonormal basis of \(\mathcal{H}_{2}\).
**Lemma 1**.: _Let \(x\in\mathcal{H}_{1}\) and \(y,y^{\prime}\in\mathcal{H}_{2}\). Then, \((x\otimes y)y^{\prime}=\langle y,y^{\prime}\rangle x\)._
Proof.: Let \(A\) be the mapping \(A:y^{\prime}\mapsto\langle y,y^{\prime}\rangle x\), where \(\lVert y\rVert=1\). We expand \(y^{\prime}\) in the orthonormal basis \(\{y,z_{\beta}\}\); i.e., \(\{y,z_{\beta}\}\) is a basis for \(\mathcal{H}_{2}\). Then,
\[\lVert Ay\rVert^{2}+\sum_{\beta}\lVert Az_{\beta}\rVert^{2} =\lVert Ay\rVert^{2}+\sum_{\beta}\lVert\langle y,z_{\beta}\rangle x \rVert^{2} \tag{3}\] \[=\lVert Ay\rVert^{2}<\infty \tag{4}\]
It is obvious that \(A\) is antilinear. Then by Definition 1, \(A\in\mathcal{H}_{1}\otimes\mathcal{H}_{2}\), and we write \(Ay^{\prime}=(x\otimes y)y^{\prime}\). Therefore, \((x\otimes y)y^{\prime}=\langle y,y^{\prime}\rangle x\).
**Lemma 2**.: \(\{u_{\alpha}\otimes v_{\beta}\}\) _is an orthonormal basis for \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\)._
Proof.: Let \(A,B\in\mathcal{H}_{1}\otimes\mathcal{H}_{2}\), where \(B=u_{\alpha}\otimes v_{\beta}\). Then by the definition of the inner product in Eq. 2,

\[\langle A,B\rangle=\sum_{i}\langle Av_{i},Bv_{i}\rangle=\sum_{i}\langle Av_{i},(u_{\alpha}\otimes v_{\beta})v_{i}\rangle=\sum_{i}\langle Av_{i},\langle v_{\beta},v_{i}\rangle u_{\alpha}\rangle=\langle Av_{\beta},u_{\alpha}\rangle,\]

where the third equality follows from Lemma 1. In particular, taking \(A=u_{\alpha^{\prime}}\otimes v_{\beta^{\prime}}\) gives

\[\langle u_{\alpha^{\prime}}\otimes v_{\beta^{\prime}},\,u_{\alpha}\otimes v_{\beta}\rangle=\langle(u_{\alpha^{\prime}}\otimes v_{\beta^{\prime}})v_{\beta},\,u_{\alpha}\rangle=\langle v_{\beta^{\prime}},v_{\beta}\rangle\langle u_{\alpha^{\prime}},u_{\alpha}\rangle=\delta_{\alpha\alpha^{\prime}}\delta_{\beta\beta^{\prime}},\]

so \(\{u_{\alpha}\otimes v_{\beta}\}\) is an orthonormal set. Moreover, if \(\langle A,u_{\alpha}\otimes v_{\beta}\rangle=\langle Av_{\beta},u_{\alpha}\rangle=0\) for every \(\alpha\) and \(\beta\), then \(Av_{\beta}=0\) for all \(\beta\), and hence \(A=0\).
\(\therefore\{u_{\alpha}\otimes v_{\beta}\}\) is a basis.
Now we begin the proof of Theorem 1. in the main paper.
**Theorem 1**.: _Let \(X,Y\) be compact subsets of \(\mathbb{R}^{d}\). Choose \(u\in L^{2}(X\times Y)\). Then, for arbitrary \(\varepsilon>0\), we can find a sufficiently large \(r>0\) and neural networks \(f_{j}\) and \(g_{j}\) such that_
\[\left\|u-\sum_{j=1}^{r}f_{j}g_{j}\right\|_{L^{2}(X\times Y)}<\varepsilon. \tag{13}\]
Proof.: Let \(\{\phi_{i}\}\) and \(\{\psi_{j}\}\) be orthonormal bases for \(L^{2}(X)\) and \(L^{2}(Y)\), respectively. Then \(\{\phi_{i}\psi_{j}\}\) forms an orthonormal basis for \(L^{2}(X\times Y)\) (\(\because\) Lemma 2). Therefore, we can find a sufficiently large \(r\) such that
\[\left\|u-\sum_{i,j}^{r}a_{ij}\phi_{i}\psi_{j}\right\|_{L^{2}(X\times Y)}<\frac {\varepsilon}{2}, \tag{14}\]
where \(a_{ij}\) denotes
\[a_{ij}=\int_{X\times Y}u(x,y)\phi_{i}(x)\psi_{j}(y)dxdy.\]
On the other hand, by the universal approximation theorem [1], we can find neural networks \(f_{j}\) and \(g_{j}\) such that
\[\|\phi_{i}-f_{j}\|_{L^{2}(X)}\leq\frac{\varepsilon}{3^{j}\|u\|_{L^{2}(X \times Y)}}\text{ and }\|\psi_{j}-g_{j}\|_{L^{2}(Y)}\leq\frac{\varepsilon}{3^{j}\|u\|_{L^{2}(X \times Y)}}. \tag{15}\]
We first consider the difference between \(u\) and \(\sum_{i,j}^{r}a_{ij}f_{i}g_{j}\):
\[\left\|u-\sum_{i,j}^{r}a_{ij}f_{i}g_{j}\right\|_{L^{2}(X\times Y)} \tag{16}\] \[\leq\left\|u-\sum_{i,j}^{r}a_{ij}\phi_{i}\psi_{j}\right\|_{L^{2} (X\times Y)}+\left\|\sum_{i,j}^{r}a_{ij}\phi_{i}\psi_{j}-\sum_{i,j}^{r}a_{ij} f_{i}g_{j}\right\|_{L^{2}(X\times Y)}\] (17) \[\equiv I+II \tag{18}\]
Since \(|I|<\varepsilon/2\) from (14), it is enough to estimate \(II\). For this, we consider
\[\sum_{i,j}^{r}a_{ij}\phi_{i}\psi_{j}-\sum_{i,j}^{r}a_{ij}f_{i}g_{j} =\sum_{i,j}^{r}a_{ij}\phi_{i}\big{(}\psi_{j}-g_{j}\big{)}+\sum_{i,j}^{r}a_{ij}\big{(}\phi_{i}-f_{i}\big{)}g_{j} \tag{19}\] \[\equiv II_{1}+II_{2}. \tag{20}\]
We first compute \(II_{1}\):
\[\|II_{1}\|_{L^{2}(X\times Y)}^{2} =\int_{X\times Y}\left\{\sum_{i,j}^{r}a_{ij}\phi_{i}\big{(}\psi_{j}-g_{j}\big{)}\right\}^{2}dxdy \tag{21}\] \[=\int_{X\times Y}\left\{\sum_{j}^{r}\left(\sum_{i}^{r}a_{ij}\phi_{i}\right)(\psi_{j}-g_{j})\right\}^{2}dxdy \tag{22}\]
We set
\[A_{j}(x)=\sum_{i}^{r}a_{ij}\phi_{i}(x),\qquad B_{j}(y)=\psi_{j}(y)-g_{j}(y) \tag{23}\]
to write \(II_{1}\) as
\[\|II_{1}\|_{L^{2}(X\times Y)}^{2}=\int_{X\times Y}\left\{\sum_{j}^{r}A_{j}(x)B_{j} (y)\right\}^{2}dxdy. \tag{24}\]
We then apply the Cauchy-Schwarz inequality to get
\[\|II_{1}\|_{L^{2}(X\times Y)}^{2} \leq\int_{X\times Y}\Big{(}\sum_{j}^{r}|A_{j}(x)|^{2}\Big{)}\Big{(} \sum_{j}^{r}|B_{j}(y)|^{2}\Big{)}dxdy \tag{25}\] \[=\left(\int_{X}\sum_{j}^{r}|A_{j}(x)|^{2}dx\right)\left(\int_{Y} \sum_{j}^{r}|B_{j}(y)|^{2}dy\right). \tag{26}\]
Now
\[\int_{X}\sum_{j}^{r}|A_{j}(x)|^{2}dx =\int_{X}\sum_{j}^{r}\left(\sum_{i}^{r}a_{ij}\phi_{i}\right)^{2}dx \tag{27}\] \[=\sum_{j}^{r}\Big{\|}\sum_{i}^{r}a_{ij}\phi_{i}\Big{\|}_{L^{2}(X)}^{2}. \tag{28}\]
Since \(\{\phi_{i}\}\) is an orthonormal basis, we see that
\[\Big{\|}\sum_{i}^{r}a_{ij}\phi_{i}\Big{\|}_{L^{2}(X)}\leq\sum_{i}^{r}|a_{ij}|\big{\|}\phi_{i}\big{\|}_{L^{2}(X)}=\sum_{i}^{r}|a_{ij}|. \tag{29}\]
Therefore,
\[\int_{X}\sum_{j}^{r}|A_{j}(x)|^{2}dx =\int_{X}\sum_{j}^{r}\left(\sum_{i}^{r}a_{ij}\phi_{i}\right)^{2}dx \tag{30}\] \[=\sum_{j}^{r}\int_{X}\left(\sum_{i}^{r}a_{ij}\phi_{i}\right)^{2}dx\] (31) \[=\sum_{j}^{r}\Big{\|}\sum_{i}^{r}a_{ij}\phi_{i}\Big{\|}_{L^{2}(X)}^{2}\] (32) \[=\sum_{j}^{r}\Big{(}\sum_{i}^{r}|a_{ij}|\Big{)}^{2}. \tag{33}\]
Finally, we recall
\[\Big{(}\sum_{i}^{r}|a_{ij}|\Big{)}^{2}\leq 2\sum_{i}^{r}|a_{ij}|^{2} \tag{34}\]
to conclude
\[\int_{X}\sum_{j}^{r}|A_{j}(x)|^{2}dx\leq 2\sum_{j}^{r}\sum_{i}^{r}|a_{ij}|^{2 }<2\sum_{i,j}^{\infty}|a_{ij}|^{2}=2\|u\|_{L^{2}(X\times Y)}^{2}. \tag{35}\]
On the other hand, we have from (15)
\[\int_{Y}\sum_{j}^{r}|B_{j}(y)|^{2}dy =\sum_{j}^{r}\int_{Y}|\psi_{j}(y)-g_{j}(y)|^{2}dy \tag{36}\] \[=\sum_{j}^{r}\left\|\psi_{j}(y)-g_{j}(y)\right\|_{L^{2}(Y)}^{2}\] (37) \[\leq\sum_{j}^{r}\frac{\varepsilon^{2}}{9^{j}\|u\|_{L^{2}(X\times Y)}^{2}}\] (38) \[<\frac{\varepsilon^{2}}{8\|u\|_{L^{2}(X\times Y)}^{2}}. \tag{39}\]
Hence we have
\[\|II_{1}\|_{L^{2}(X\times Y)}^{2}<\frac{\varepsilon^{2}}{8\|u\|_{L^{2}(X \times Y)}^{2}}2\|u\|_{L^{2}(X\times Y)}^{2}=\frac{\varepsilon^{2}}{4}. \tag{40}\]
Likewise,
\[\|II_{2}\|_{L^{2}(X\times Y)}^{2}<\frac{\varepsilon^{2}}{4}. \tag{41}\]
Therefore,
\[\|II\|_{L^{2}(X\times Y)}^{2}<\frac{\varepsilon^{2}}{2}. \tag{42}\]
We go back to (16) with the estimates (14) and (42) to derive
\[\left\|u-\sum_{i,j}^{r}a_{ij}f_{i}g_{j}\right\|_{L^{2}(X\times Y)} <\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon. \tag{43}\]
Finally, we reorder the index to rewrite
\[\sum_{i,j}^{r}a_{ij}f_{i}g_{j} =\sum_{i}^{\tilde{r}}b_{i}\tilde{f}_{i}\tilde{g}_{i} \tag{44}\] \[=\sum_{i}^{\tilde{r}}\left\{b_{i}\tilde{f}_{i}\right\}\tilde{g}_ {i}\] (45) \[=\sum_{i}^{\tilde{r}}k_{i}\tilde{g}_{i} \tag{46}\]
Without loss of generality, we rewrite \(\tilde{r},k_{i}\), \(\tilde{g}_{i}\) into \(r\), \(f_{i}\), \(g_{i}\) respectively, to complete the proof.
## Appendix B Training with Physics-Informed Loss
After SPINN predicts an output function with the methods described above, the rest of the training procedure follows the same process used in conventional PINN training [11], except that we use forward-mode AD to compute the PDE residuals (standard back-propagation, i.e., reverse-mode AD, is still used for the parameter updates). With a slight abuse of notation, our predicted solution function is denoted as \(\hat{u}^{(\theta)}(x,t)\) from now on, explicitly expressing the time coordinate. Given an underlying PDE (or ODE), the initial, and the boundary conditions, SPINN is trained with a 'physics-informed' loss function:
\[\min_{\theta}\mathcal{L}(\hat{u}^{(\theta)}(x,t))=\min_{\theta} \lambda_{\text{pde}}\mathcal{L}_{\text{pde}}+\lambda_{\text{ic}}\mathcal{L}_{ \text{ic}}+\lambda_{\text{bc}}\mathcal{L}_{\text{bc}}, \tag{47}\] \[\mathcal{L}_{\text{pde}}=\int_{\Gamma}\int_{\Omega}\|\mathcal{N} [\hat{u}^{(\theta)}](x,t)\|^{2}dxdt,\] (48) \[\mathcal{L}_{\text{ic}}=\int_{\Omega}\|\hat{u}^{(\theta)}(x,0)-u _{\text{ic}}(x)\|^{2}dx,\] (49) \[\mathcal{L}_{\text{bc}}=\int_{\Gamma}\int_{\partial\Omega}\| \mathcal{B}[\hat{u}^{(\theta)}](x,t)-u_{\text{bc}}(x,t)\|^{2}dxdt, \tag{50}\]
where \(\Omega\) is an input domain, \(\mathcal{N},\mathcal{B}\) are generic differential operators, and \(u_{\text{ic}}\), \(u_{\text{bc}}\) are the initial and boundary conditions, respectively. \(\lambda\) are weighting factors for each loss term. When calculating the PDE loss (\(\mathcal{L}_{\text{pde}}\)) with the Monte-Carlo integral approximation, we sampled collocation points from factorized coordinates and used forward-mode AD. The remaining \(\mathcal{L}_{\text{ic}}\) and \(\mathcal{L}_{\text{bc}}\) are then computed with initial and boundary coordinates to regress the given conditions. By minimizing the objective loss in Eq. 47, the model output is enforced to satisfy the given equation, the initial, and the boundary conditions.
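The following is a minimal sketch of Eq. 47 with the integrals in Eqs. 48-50 replaced by Monte-Carlo means over sampled points; `residual_fn` and `model` are assumed user-supplied callables, not names from the authors' code.

```python
import jax.numpy as jnp

def physics_informed_loss(residual_fn, model, params,
                          pde_pts, ic_pts, ic_vals, bc_pts, bc_vals,
                          w_pde=1.0, w_ic=1.0, w_bc=1.0):
    # Monte-Carlo estimates of Eqs. 48-50: mean squared PDE residual,
    # initial-condition mismatch, and boundary-condition mismatch,
    # combined with the weighting factors of Eq. 47.
    l_pde = jnp.mean(residual_fn(params, pde_pts) ** 2)
    l_ic = jnp.mean((model(params, ic_pts) - ic_vals) ** 2)
    l_bc = jnp.mean((model(params, bc_pts) - bc_vals) ** 2)
    return w_pde * l_pde + w_ic * l_ic + w_bc * l_bc
```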
## Appendix C FLOPs Estimation
The FLOPs for evaluating the derivatives can be systematically calculated by disassembling the computational graph into elementary operations such as additions and multiplications. Given a computational graph of forward pass for computing the primals, AD augments each elementary operation into other elementary operations. The FLOPs in the forward pass can be precisely calculated since it consists of a series of matrix multiplications and additions. We used the method described in [4] to estimate FLOPs for evaluating the derivatives. Table 1 shows the number of additions (ADDS) and multiplications (MULTS) in each evaluation process. Note that FLOPs is a summation of ADDS and MULTS by definition.
One thing to note here is that this is a theoretical estimation. Theoretically, the number of JVP evaluations for computing the gradient with respect to the input coordinates is \(Nd\), where \(N\) is the number of coordinates for each axis and \(d\) is the input dimension (see section 4.3 in the main paper). However, our actual implementation of the gradient calculation involves re-computing the feature representations \(f^{(\theta_{i})}\), which increases the complexity of network propagations from \(\mathcal{O}(Nd)\) to \(\mathcal{O}(Nd^{2})\). Ideally, these feature vectors could be computed only once and stored to be used later for gradient computations. Although our implementation is still significantly more efficient than the conventional PINN's complexity (\(Nd^{2}\ll N^{d}\)), there is room to bridge the gap between theoretical FLOPs and actual training runtime by further software optimization.
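As a sketch of the optimization described above, the per-axis features and their derivatives can be computed once and cached for all residual terms; the toy `nets` below are stand-ins for trained body networks, not part of the paper's code.

```python
import jax
import jax.numpy as jnp

def cached_features(nets, coords):
    # Evaluate each per-axis network and its derivative exactly once,
    # so every PDE-residual term can reuse them: O(Nd) network passes
    # instead of O(Nd^2) when features are recomputed per term.
    cache = []
    for net, x in zip(nets, coords):
        f, df = jax.jvp(net, (x,), (jnp.ones_like(x),))
        cache.append((f, df))
    return cache

# Toy stand-ins: three "networks" mapping (N,) -> (N, 2).
nets = [lambda x, s=s: jnp.stack([jnp.sin((s + 1.0) * x), jnp.cos(x)], -1)
        for s in range(3)]
coords = [jnp.linspace(0.0, 1.0, 8)] * 3
f0, df0 = cached_features(nets, coords)[0]
print(f0.shape, df0.shape)  # (8, 2) (8, 2)
```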
## Appendix D Experimental Details and Results
In this section, we provide experimental details, numerical results, and visualizations for each experiment in the main paper.
### Diffusion Equation
The diffusion equation is one of the most representative parabolic PDEs, often used for modeling the heat diffusion process. We especially choose a nonlinear diffusion equation where it can be written as:
\[\partial_{t}u=\alpha\left(\|\nabla u\|^{2}+u\Delta u\right), x\in\Omega,t\in\Gamma, \tag{51}\] \[u(x,0)=u_{\text{ic}}(x), x\in\Omega,\] (52) \[u(x,t)=0, x\in\partial\Omega,t\in\Gamma. \tag{53}\]
We used diffusivity \(\alpha=0.05\), spatial domain \(\Omega=[-1,1]^{2}\), temporal domain \(\Gamma=[0,1]\), and a superposition of three Gaussian functions for the initial condition \(u_{\text{ic}}\).
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline & \multicolumn{2}{c|}{SPINN (ours)} & \multicolumn{2}{c}{PINN (baseline)} \\ \hline & ADDS (\(\times 10^{6}\)) & MULTS (\(\times 10^{6}\)) & ADDS (\(\times 10^{6}\)) & MULTS (\(\times 10^{6}\)) \\ \hline forward pass & 20 & 20 & 21,609 & 21,609 \\
1st-order derivative & 40 & 40 & 86,638 & 43,419 \\
2nd-order derivative & 80 & 80 & 130,057 & 87,040 \\ \hline MFLOPs (total) & \multicolumn{2}{c|}{280} & \multicolumn{2}{c}{390,370} \\ \hline \hline \end{tabular}
\end{table}
Table 1: The number of elementary operations for evaluating forward pass, first and second-order derivatives. The calculation is based on \(64^{3}\) collocation points in a 3-d system and the vanilla MLP settings used for diffusion, Helmholtz, and Klein-Gordon equations. We assumed that each derivative is evaluated on every coordinate axis.
We obtained the reference solution (\(101\times 101\times 101\) resolution) through FEniCS [9], a widely-used PDE solver platform. Note that FEniCS is a FEM-based solver. Specifically, the initial condition is a superposition of three Gaussian functions:
\[u_{\text{ic}}(x,y)=0.25\exp\left[-10\{(x-0.2)^{2}+(y-0.3)^{2}\} \right]+0.4\exp\left[-15\{(x+0.1)^{2}+(y+0.5)^{2}\}\right]\] \[+0.3\exp\left[-20\{(x+0.5)^{2}+y^{2}\}\right]. \tag{54}\]
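For reproducibility, Eq. 54 transcribes directly into code (a straightforward sketch):

```python
import jax.numpy as jnp

def u_ic(x, y):
    # Superposition of three Gaussians (Eq. 54).
    return (0.25 * jnp.exp(-10.0 * ((x - 0.2) ** 2 + (y - 0.3) ** 2))
            + 0.4 * jnp.exp(-15.0 * ((x + 0.1) ** 2 + (y + 0.5) ** 2))
            + 0.3 * jnp.exp(-20.0 * ((x + 0.5) ** 2 + y ** 2)))
```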
For our model, we used three body networks of 4 hidden layers with 64/32 hidden feature/output size each. For the baseline model, we used a single MLP of 5 hidden layers with 128 hidden feature sizes. We used Adam [6] optimizer with a learning rate of 0.001 and trained for 50,000 iterations for every experiment. All weight factors \(\lambda\) in the loss function in Eq. 47 are set to 1. The final reported errors are extracted where the total loss was minimum across the entire training iteration. We also resampled the input points every 100 epochs. Tab. 3 shows the numerical results, and the visualized solutions are provided in Fig. 1.
### Helmholtz Equation
The Helmholtz equation is a time-independent wave equation that takes the form:
\[\Delta u+k^{2}u=q, x\in\Omega, \tag{55}\] \[u(x)=0, x\in\partial\Omega, \tag{56}\]
where the spatial domain is \(\Omega=[-1,1]^{3}\). For a given source term \(q=-(a_{1}\pi)^{2}u-(a_{2}\pi)^{2}u-(a_{3}\pi)^{2}u+k^{2}u\), we devised a manufactured solution \(u=\sin(a_{1}\pi x_{1})\sin(a_{2}\pi x_{2})\sin(a_{3}\pi x_{3})\), where we take \(k=1,a_{1}=4,a_{2}=4,a_{3}=3\).
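A direct transcription of the manufactured solution and the corresponding source term (our sketch of the setup above):

```python
import jax.numpy as jnp

a1, a2, a3, k = 4.0, 4.0, 3.0, 1.0

def u_exact(x1, x2, x3):
    # Manufactured solution of the Helmholtz problem.
    return (jnp.sin(a1 * jnp.pi * x1) * jnp.sin(a2 * jnp.pi * x2)
            * jnp.sin(a3 * jnp.pi * x3))

def source_q(x1, x2, x3):
    # q = -(a1 pi)^2 u - (a2 pi)^2 u - (a3 pi)^2 u + k^2 u.
    coef = -(a1 * jnp.pi) ** 2 - (a2 * jnp.pi) ** 2 - (a3 * jnp.pi) ** 2 + k ** 2
    return coef * u_exact(x1, x2, x3)
```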
For our model, we used three body networks of 4 hidden layers with 64/32 hidden feature/output size each. For the baseline model, we used a single MLP of 5 hidden layers with 128 hidden feature sizes. We used Adam [6] optimizer with a learning rate of 0.001 and trained for 50,000 iterations for every experiment. All weight factors \(\lambda\) in the loss function in Eq. 47 are set to 1. The final reported errors are extracted where the total loss was minimum across the entire training iteration. We also resampled the input points every 100 epochs. Tab. 4 shows the numerical results and the visualized solutions are provided in Fig. 2.
### Klein-Gordon Equation
The Klein-Gordon equation is a nonlinear hyperbolic PDE, which arises in diverse applied physics for modeling relativistic wave propagation. The inhomogeneous Klein-Gordon equation is given by
\[\partial_{tt}u-\Delta u+u^{2}=f, x\in\Omega,t\in\Gamma, \tag{57}\] \[u(x,0)=x_{1}+x_{2}, x\in\Omega,\] (58) \[u(x,t)=u_{\text{bc}}(x), x\in\partial\Omega,t\in\Gamma, \tag{59}\]
where we chose the spatial/temporal domains to be \(\Omega=[-1,1]^{2}\) and \(\Gamma=[0,10]\), respectively. For error measurement, we used a manufactured solution \(u=(x_{1}+x_{2})\cos(2t)+x_{1}x_{2}\sin(2t)\), and \(f\), \(u_{\text{bc}}\) are extracted from this exact solution.
For our model, we used three body networks of 4 hidden layers with 64/32 hidden feature/output size each. For the baseline model, we used a single MLP of 5 hidden layers with 128 hidden feature sizes. We used Adam [6] optimizer with a learning rate of 0.001 and trained for 50,000 iterations for every experiment. All weight factors \(\lambda\) in the loss function in Eq. 47 are set to 1. The final reported errors are extracted where the total loss was minimum across the entire training iteration. We also resampled the input points every 100 epochs. Tab. 5 shows the numerical results.
We used the same settings as in the (2+1)-d Klein-Gordon experiment for the (3+1)-d experiment, except that the manufactured solution was chosen as:
\[u=(x_{1}+x_{2}+x_{3})\cos{(t)}+x_{1}x_{2}x_{3}\sin{(t)}, \tag{60}\]
where \(f\), \(u_{\text{bc}}\) are extracted from this exact solution. Tab. 6 shows the numerical results. The collocation sizes of \(23^{4}\) and \(18^{4}\) were the maximum values for PINN and PINN with modified MLP, respectively.
### (2+1)-d Navier-Stokes Equation
Navier-Stokes equation is a nonlinear time-dependent PDE that describes the motion of a viscous fluid. Various engineering fields rely on this equation, such as modeling the weather, airflow, or ocean currents. The vorticity form for incompressible fluid can be written as below:
\[\partial_{t}\omega+u\cdot\nabla\omega=\nu\Delta\omega, x\in\Omega,t\in\Gamma, \tag{61}\] \[\nabla\cdot u=0, x\in\Omega,t\in\Gamma,\] (62) \[\omega(x,0)=\omega_{0}(x), x\in\Omega, \tag{63}\]
where \(u\in\mathbb{R}^{2}\) is the velocity field, \(\omega=\nabla\times u\) is the vorticity, \(\omega_{0}\) is the initial vorticity, and \(\nu\) is the viscosity. We used a viscosity of 0.01 and set the spatial/temporal domains to \(\Omega=[0,2\pi]^{2}\) and \(\Gamma=[0,1]\), respectively. Note that Eq. 61 models decaying turbulence since there is no forcing term, and Eq. 62 is the incompressibility condition. The reference solution is generated by the JAX-CFD solver [7], which specifically used the pseudo-spectral method. The initial condition was generated using a Gaussian random field with a maximum velocity of 5. The resolution of the obtained solution is \(100\times 128\times 128\) (\(N_{t}\times N_{x}\times N_{y}\)), and we tested our model on this data.
For our model, we used three body networks (modified MLP) of 3 hidden layers with 128/256 hidden feature/output sizes each. We divided the temporal domain into ten time windows to adopt the time-marching method [8; 13]. We used the Adam [6] optimizer with a learning rate of 0.002, and each time window is trained for 100,000 iterations. Following causal PINN, the PDE (residual) loss and the initial condition loss functions are written as follows.
\[\mathcal{L}_{\text{pde}} =\frac{\lambda_{w}}{N_{c}}\sum_{i=1}^{N_{c}}|\partial_{t}w+u_{x} \partial_{x}w+u_{y}\partial_{y}w-\nu(\partial_{xx}w+\partial_{yy}w)|^{2}+\frac {\lambda_{c}}{N_{c}}\sum_{i=1}^{N_{c}}|\partial_{x}u_{x}+\partial_{y}u_{y}|^{2}, \tag{64}\] \[\mathcal{L}_{\text{ic}} =\frac{\lambda_{ic}}{N_{ic}}\sum_{i=1}^{N_{ic}}\left(|u_{x}-u_{x 0}|^{2}+|u_{y}-u_{y0}|^{2}+|w-w_{0}|^{2}\right), \tag{65}\]
where \(u_{x},u_{y}\) are the \(x,y\) components of the predicted velocity, \(w=\partial_{x}u_{y}-\partial_{y}u_{x}\), \(N_{c}\) is the collocation size, \(N_{ic}\) is the number of coordinates for the initial condition, and \(u_{x0},u_{y0},w_{0}\) are the initial conditions. We chose the weighting factors \(\lambda_{w}=1\), \(\lambda_{c}=5,000\), and \(\lambda_{ic}=10,000\). We also resampled the input points every 100 epochs. The periodic boundary condition can be explicitly enforced by positional encoding [2], and we specifically used the following encoding function only for the spatial input coordinates.
\[\gamma(x)=[1,\sin(x),\sin(2x),\sin(3x),\sin(4x),\sin(5x),\cos(x), \cos(2x),\cos(3x),\cos(4x),\cos(5x)]^{\top}. \tag{66}\]
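Eq. 66 transcribes into the following sketch, applied elementwise to a batch of scalar spatial coordinates:

```python
import jax.numpy as jnp

def gamma(x):
    # Periodic encoding of Eq. 66 for a batch of scalar coordinates x
    # of shape (N,); returns shape (N, 11): [1, sin(kx), cos(kx)], k=1..5.
    ks = jnp.arange(1, 6)
    return jnp.concatenate([jnp.ones_like(x)[:, None],
                            jnp.sin(x[:, None] * ks),
                            jnp.cos(x[:, None] * ks)], axis=-1)

x = jnp.linspace(0.0, 2.0 * jnp.pi, 4)
print(gamma(x).shape)  # (4, 11)
```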
Unlike the other experiments, the solution of the Navier-Stokes equation is a 2-dimensional vector-valued function \(u:\mathbb{R}^{3}\rightarrow\mathbb{R}^{2}\). We can rewrite the feature merging equation (Eq. 5 in the main paper) to construct SPINN as a 2-dimensional vector function:
\[u_{1} =\sum_{j=1}^{r}\prod_{i=1}^{d}f_{j}^{(\theta_{i})}(x_{i}), \tag{67}\] \[u_{2} =\sum_{j=r+1}^{2r}\prod_{i=1}^{d}f_{j}^{(\theta_{i})}(x_{i}). \tag{68}\]
For example, in our (2+1)-d Navier-Stokes setting, \(r=128\) since the network output feature size is 256. This can be extended to any \(m\)-dimensional vector function by using a larger output feature size. More formally, if we want to construct an \(m\)-dimensional vector function with SPINN of rank \(r\), the \(k\)-th element of the function output can be written as
\[u_{k}=\sum_{j=(k-1)r+1}^{kr}\prod_{i=1}^{d}f_{j}^{(\theta_{i})}(x_{i}), \tag{69}\]
where the output feature size of each body network is \(mr\).
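Eq. 69 can be sketched as follows for \(d=3\): the \(mr\) feature columns are split into \(m\) groups of \(r\), and each group is merged as in Eq. 6 (our illustration, not the released code).

```python
import jax
import jax.numpy as jnp

def merge_vector_output(Fx, Fy, Fz, m, r):
    # Each F* has shape (N, m*r). The k-th output component is built
    # from feature columns k*r ... (k+1)*r - 1, as in Eq. 69 (d = 3).
    outs = []
    for k in range(m):
        sl = slice(k * r, (k + 1) * r)
        outs.append(jnp.einsum('aj,bj,cj->abc',
                               Fx[:, sl], Fy[:, sl], Fz[:, sl]))
    return outs

key = jax.random.PRNGKey(0)
N, m, r = 8, 2, 128
Fx, Fy, Fz = (jax.random.normal(k, (N, m * r)) for k in jax.random.split(key, 3))
u1, u2 = merge_vector_output(Fx, Fy, Fz, m, r)
print(u1.shape, u2.shape)  # (8, 8, 8) (8, 8, 8)
```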
### (3+1)-d Navier-Stokes Equation
The vorticity form of (3+1)-d Navier-Stokes equation is given as:
\[\partial_{t}\omega+(u\cdot\nabla)\omega=(\omega\cdot\nabla)u+\nu \Delta\omega+F, x\in\Omega,t\in\Gamma, \tag{70}\] \[\nabla\cdot u=0, x\in\Omega,t\in\Gamma,\] (71) \[\omega(x,0)=\omega_{0}(x), x\in\Omega. \tag{72}\]
We set the spatial/temporal domains to \(\Omega=[0,2\pi]^{3}\) and \(\Gamma=[0,5]\), respectively. We used the analytic solution for the (3+1)-d Navier-Stokes equation introduced in Taylor and Green [12]. The manufactured velocity and vorticity are
\[u_{x} =2e^{-9\nu t}\cos(2x)\sin(2y)\sin(z), \tag{73}\] \[u_{y} =-e^{-9\nu t}\sin(2x)\cos(2y)\sin(z),\] (74) \[u_{z} =-2e^{-9\nu t}\sin(2x)\sin(2y)\cos(z),\] (75) \[\omega_{x} =-3e^{-9\nu t}\sin(2x)\cos(2y)\cos(z),\] (76) \[\omega_{y} =6e^{-9\nu t}\cos(2x)\sin(2y)\cos(z),\] (77) \[\omega_{z} =-6e^{-9\nu t}\cos(2x)\cos(2y)\sin(z), \tag{78}\]
where we chose the viscosity to be \(\nu=0.05\). Each forcing term \(F\) in the Eq. 70 is then given as
\[F_{x} =-6e^{-18\nu t}\sin(4y)\sin(2z), \tag{79}\] \[F_{y} =-6e^{-18\nu t}\sin(4x)\sin(2z),\] (80) \[F_{z} =6e^{-18\nu t}\sin(4x)\sin(4y). \tag{81}\]
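The manufactured velocity field in Eqs. 73-75 transcribes directly (a sketch for reproducing the reference):

```python
import jax.numpy as jnp

nu = 0.05  # viscosity used in this experiment

def velocity(t, x, y, z):
    # Manufactured Taylor-Green-type velocity field (Eqs. 73-75).
    decay = jnp.exp(-9.0 * nu * t)
    ux = 2.0 * decay * jnp.cos(2.0 * x) * jnp.sin(2.0 * y) * jnp.sin(z)
    uy = -decay * jnp.sin(2.0 * x) * jnp.cos(2.0 * y) * jnp.sin(z)
    uz = -2.0 * decay * jnp.sin(2.0 * x) * jnp.sin(2.0 * y) * jnp.cos(z)
    return ux, uy, uz
```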
We constructed SPINN with four body networks (modified MLP) of 5 hidden layers with 64/384 hidden feature/output sizes each. We used the Adam [6] optimizer with a learning rate of 0.001 and trained for 50,000 iterations. The weight factors in the loss function in Eq. 47 are chosen as \(\lambda_{\text{pde}}=1\), \(\lambda_{\text{ic}}=10\), and \(\lambda_{\text{bc}}=1\). We also weighted the incompressibility loss (Eq. 71) by 100. The visualized solution vector field is shown in Fig. 4.
## Appendix E Additional Experiments
### (5+1)-d Heat Equation
We tested our model on (5+1)-d heat equation to verify the effectiveness of our model on higher dimensional PDE:
\[\frac{\partial u(t,x)}{\partial t}=\Delta u(t,x),\qquad\qquad x\in[-1,1]^{5}, t\in[0,1], \tag{82}\]
where the manufactured solution is chosen to be \(\|x\|^{2}+10t\). When trained with \(8^{6}\) collocation points, SPINN achieved a relative \(L_{2}\) error of 0.0074 within 2 minutes.
### Fine Tuning with L-BFGS
We also conducted experiments to explore the use of L-BFGS when training SPINN. We found that training with Adam first and then fine-tuning with L-BFGS showed a slight increase in accuracy. Note that this training strategy is used by other works [5; 10] and is known to be effective in some cases. Tab. 2 shows the numerical results on three 3-d PDEs. Understanding the effect of the optimization algorithm is still an open question in PINNs; we believe that investigating this issue in the context of SPINN would be a valuable direction for future study.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline & Diffusion & Helmholtz & Klein-Gordon \\ \hline Adam & 0.0041 & 0.0360 & 0.0013 \\ Adam + L-BFGS & 0.0041 & 0.0308 & 0.0010 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Numerical result of 3-d PDEs with L-BFGS fine tuning. The number of training collocation points is \(64^{3}\) and a single outer loop L-BFGS is applied.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline \multirow{2}{*}{model} & \multirow{2}{*}{\(N_{c}\)} & relative & runtime & memory \\ & & \(L_{2}\) error & (ms/iter.) & (MB) \\ \hline \multirow{3}{*}{PINN} & \(16^{3}\) & 0.0193 & 4.70 & 2,810 \\ & \(32^{3}\) & 0.0281 & 14.95 & 2,938 \\ & \(64^{3}\) & 0.0299 & 112.00 & 18,118 \\ \hline \multirow{3}{*}{PINN +} & \(16^{3}\) & 0.0158 & 17.87 & 7,036 \\ & \(32^{3}\) & 0.0185 & 34.61 & 9,082 \\ & \(54^{3}\) & 0.0163 & 159.20 & 22,246 \\ \hline \multirow{3}{*}{SPINN} & \(16^{3}\) & 0.0193 & 1.55 & 762 \\ & \(32^{3}\) & 0.0060 & 1.71 & 762 \\ & \(64^{3}\) & 0.0045 & 1.82 & 762 \\ & \(128^{3}\) & 0.0040 & 1.85 & 890 \\ & \(256^{3}\) & 0.0039 & 3.98 & 1,658 \\ \hline \multirow{3}{*}{SPINN +} & \(16^{3}\) & 0.0062 & 2.20 & 764 \\ & \(32^{3}\) & 0.0020 & 2.41 & 764 \\ & \(64^{3}\) & 0.0013 & 2.57 & 764 \\ \cline{1-1} & \(128^{3}\) & **0.0008** & 2.79 & 892 \\ \cline{1-1} & \(256^{3}\) & 0.0009 & 5.61 & 1,660 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Full results of the **(3+1)-d Klein-Gordon equation**. \(N_{c}\) is the number of collocation points.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline \multirow{2}{*}{model} & \multirow{2}{*}{\(N_{c}\)} & relative & runtime & memory \\ & & \(L_{2}\) error & (ms/iter.) & (MB) \\ \hline \multirow{3}{*}{PINN} & \(16^{3}\) & 0.0919 & 4.84 & 2,810 \\ & \(32^{3}\) & 0.9757 & 14.84 & 2,938 \\ & \(64^{3}\) & 0.9723 & 110.23 & 18,118 \\ \hline \multirow{3}{*}{\begin{tabular}{c} PINN + \\ modified MLP \\ \end{tabular} } & \(16^{3}\) & 0.4770 & 18.32 & 7,034 \\ & \(32^{3}\) & 0.5176 & 35.02 & 9,082 \\ & \(54^{3}\) & 0.4770 & 159.90 & 22,244 \\ \hline \multirow{3}{*}{SPINN} & \(16^{3}\) & 0.1177 & 1.54 & 762 \\ & \(32^{3}\) & 0.0809 & 1.71 & 762 \\ & \(64^{3}\) & 0.0592 & 1.85 & 762 \\ & \(128^{3}\) & 0.0449 & 1.89 & 762 \\ & \(256^{3}\) & 0.0435 & 3.84 & 1,146 \\ \hline \multirow{3}{*}{
\begin{tabular}{c} SPINN \\ + \\ modified MLP \\ \end{tabular} } & \(16^{3}\) & 0.1161 & 2.24 & 764 \\ & \(32^{3}\) & 0.0595 & 2.50 & 764 \\ & \(64^{3}\) & 0.0360 & 2.57 & 764 \\ \cline{1-1} & \(128^{3}\) & **0.0300** & 2.76 & 764 \\ \cline{1-1} & \(256^{3}\) & 0.0311 & 5.50 & 1,148 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Full results of the **Helmholtz equation**. \(N_{c}\) is the number of collocation points.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline \multirow{2}{*}{model} & \multirow{2}{*}{\(N_{c}\)} & relative & runtime & memory \\ & & \(L_{2}\) error & (ms/iter.) & (MB) \\ \hline \multirow{3}{*}{PINN} & \(16^{3}\) & 0.0095 & 3.98 & 1,022 \\ & \(32^{3}\) & 0.0082 & 12.82 & 2,942 \\ & \(64^{3}\) & 0.0081 & 95.22 & 18,122 \\ \hline \multirow{3}{*}{\begin{tabular}{c} PINN \\ + \\ modified MLP \\ \end{tabular} } & \(16^{3}\) & 0.0048 & 14.94 & 1,918 \\ & \(32^{3}\) & 0.0043 & 29.91 & 4,990 \\ & \(54^{3}\) & 0.0041 & 134.64 & 22,248 \\ \hline \multirow{3}{*}{SPINN} & \(16^{3}\) & 0.0447 & 1.45 & 766 \\ & \(32^{3}\) & 0.0115 & 1.76 & 766 \\ & \(64^{3}\) & 0.0075 & 1.90 & 766 \\ & \(128^{3}\) & 0.0061 & 2.09 & 894 \\ & \(256^{3}\) & 0.0061 & 10.54 & 2,174 \\ \hline \multirow{3}{*}{
\begin{tabular}{c} SPINN \\ + \\ modified MLP \\ \end{tabular} } & \(16^{3}\) & 0.0390 & 2.17 & 766 \\ & \(32^{3}\) & 0.0067 & 2.44 & 768 \\ & \(64^{3}\) & 0.0041 & 2.59 & 768 \\ \cline{1-1} & \(128^{3}\) & **0.0036** & 3.06 & 896 \\ \cline{1-1} & \(256^{3}\) & **0.0036** & 12.13 & 2,176 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Full results of **diffusion equation**. \(N_{c}\) is the number of collocation points.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline \multirow{2}{*}{model} & \multirow{2}{*}{\(N_{c}\)} & relative & runtime & memory \\ & & \(L_{2}\) error & (ms/iter.) & (MB) \\ \hline \multirow{3}{*}{PINN} & \(16^{3}\) & 0.0343 & 4.70 & 2,810 \\ & \(32^{3}\) & 0.0281 & 14.95 & 2,938 \\ & \(64^{3}\) & 0.0299 & 112.00 & 18,118 \\ \hline \multirow{3}{*}{\begin{tabular}{c} PINN \\ + \\ modified MLP \\ \end{tabular} } & \(16^{3}\) & 0.0158 & 17.87 & 7,036 \\ & \(32^{3}\) & 0.0185 & 34.61 & 9,082 \\ & \(54^{3}\) & 0.0163 & 159.20 & 22,246 \\ \hline \multirow{5}{*}{SPINN} & \(16^{3}\) & 0.0193 & 1.55 & 762 \\ & \(32^{3}\) & 0.0060 & 1.71 & 762 \\ & \(64^{3}\) & 0.0045 & 1.82 & 762 \\ & \(128^{3}\) & 0.0040 & 1.85 & 890 \\ & \(256^{3}\) & 0.0039 & 3.98 & 1,658 \\ \hline \multirow{4}{*}{\begin{tabular}{c} SPINN \\ + \\ modified MLP \\ \end{tabular} } & \(16^{3}\) & 0.0062 & 2.20 & 764 \\ & \(32^{3}\) & 0.0020 & 2.41 & 764 \\ & \(64^{3}\) & 0.0013 & 2.57 & 764 \\ & \(128^{3}\) & **0.0008** & 2.79 & 892 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Full results of the **(2+1)-d Klein-Gordon equation**. \(N_{c}\) is the number of collocation points.
Figure 1: Visualized solution of **nonlinear diffusion equation** obtained by the baseline PINN and SPINN, both trained on \(64^{3}\) collocation points.
Figure 2: Visualized solution of **Helmholtz equation** obtained by the baseline PINN and SPINN, both trained on \(64^{3}\) collocation points.
Figure 3: Visualized vorticity maps of **(2+1)-d Navier-Stokes equation** experiment predicted by SPINN. Three snapshots at timestamps \(t=0.0\), \(0.5\), \(1.0\) are presented.
Figure 4: Visualized solution of **(3+1)-d Navier-Stokes equation** obtained by SPINN, trained on \(32^{4}\) collocation points. Each arrow is colored by its zenith angle. |
2306.08086 | Safe Use of Neural Networks | Neural networks in modern communication systems can be susceptible to internal numerical errors that can drastically affect decision results. Such structures are composed of many sections, each of which generally contains weighting operations and activation function evaluations. The safe use comes from methods employing number-based codes that can detect arithmetic errors in the network's processing steps. Each set of operations generates parity values dictated by a code in two ways. One set of parities is obtained from a section's outputs while a second comparable set is developed directly from the original inputs. The parity values protecting the activation functions involve a Taylor series approximation to the activation functions. We focus on using long numerically-based convolutional codes because of the large size of data sets. The codes are based on Discrete Fourier Transform kernels and there are many design options available. Mathematical program simulations show our error-detecting techniques are effective and efficient. | George Redinbo | 2023-06-13T19:07:14Z | http://arxiv.org/abs/2306.08086v1 | # Safe Use of Neural Networks
###### Abstract
Neural networks in modern communication systems can be susceptible to internal numerical errors that can drastically affect decision results. Such structures are composed of many sections, each of which generally contains weighting operations and activation function evaluations. The safe use comes from methods employing number-based codes that can detect arithmetic errors in the network's processing steps. Each set of operations generates parity values dictated by a code in two ways. One set of parities is obtained from a section's outputs while a second comparable set is developed directly from the original inputs. The parity values protecting the activation functions involve a Taylor series approximation to the activation functions. We focus on using long numerically-based convolutional codes because of the large size of data sets. The codes are based on DFT kernels and there are many design options available. MatLab simulations show our error-detecting techniques are effective and efficient.
Neural networks, convolutional codes, error detection, soft errors, matrix operations, activation functions, DFT-based convolutional codes, algorithm-based fault tolerance (ABFT)
## 1 Introduction
Communication systems can use neural networks in many parts, as outlined in an article describing many applications of neural networks [1]. Neural networks have many processing operations that are susceptible to random internal numerical errors that can drastically alter their decision outputs. Safe use requires knowing when errors have appeared. Networks in commercial situations on standard computing hardware are extremely reliable.
On the other hand, there are situations involving neural networks that operate in what we term hostile environments, where radiation particles can disrupt normal numerical operations, causing erroneous decisions and improper tuning. For example, remote sensing in earth orbit or on foreign planets faces disruptions. Neural networks can be used in orbital control systems or in medical systems within high-energy environments. Control systems in heavy industrial locations can be influenced dramatically. This paper addresses a standard neural network configuration and proposes protective methods that can detect when errors have affected numerical calculations, voiding safe use.
Neural networks appear in many forms. They all have some common operations in stages forming the network, both in a forward direction when yielding decision results and in a backward propagation needed for tuning and training the network [Chapter 5, 2]. Each stage in the network whether a forward or backward type involves weighting data, scale by coefficients and summing, or passing through activation functions, nonlinear operations with limited output range. We propose numerically based error-detecting codes wherein the code word symbols are numerical values with coding operations defined on an arithmetic field.
The purpose of this paper is to guarantee safe use by detecting errors in operations supporting neural networks. Error-detecting codes are applied to generic models of the stages in neural networks, not aimed at any specific implementation. In this way, these new methods can be appropriately modified to address any practical neural network implementation.
The concerns about errors in neural networks have been expressed in many papers with quite different viewpoints and sometimes with new approaches for increasing the protection levels. There are many, many articles concerning neural networks in the literature, and some of them address reliability issues. We mention some in particular that are closest in direction to the results in this article.
Some papers [3-5] evaluate various architectures that mitigate the effects of single-event upset (SEU) errors in networks of various kinds. A fault-tolerant method in [6] employs training algorithms combined with error-correction codes to separate the decision results, allowing errors to be more noticeable. The training procedures for this approach are complicated. A series of papers [7-10] make hardware changes to the underlying implementation devices to avoid errors. Paper [10] diversifies the decision steps, allowing better detection of errors. When memristors are employed [11-13], several clustering techniques, including binary error-correcting codes on the network's outputs, offer some protection.
Several papers are concerned with the impact of SEU errors on the memory system supporting the network [14-16]. One approach focuses on storage of the weight values [16]. Another article addresses modifying the activation function (models of neurons) so that failures during these evaluations can be checked more easily.
After we had completed our development of error-detection codes for protecting the weighting operations and activation function outputs, we discovered a short conference paper (2 pages) [18] that started in the same direction as our approach. We recognized their approach as using algorithm-based fault tolerance (ABFT) methods [19]. Their results relied on BCH codes upgraded to real number block codes, which they applied to a three-stage network for simulation purposes. We believe our new results employing numerically based convolutional codes for both weighting actions and activation functions provide a much broader scope for protection techniques.
The next section describes the general features of most neural networks. The following section explains our novel technique for detecting numerical errors in both forward and backward stages, aligning with the data flow of the network. The use of wavelet codes, convolutional codes with numbers as symbols and parities, permits the large size of the data vectors to be handled, offering protection through and across the weighting and activation function computations. A special method is developed for protecting the calculations producing the new weighting matrices that are determined in the backward stages. Thus, the infrequent tuning of the network as implemented by the backward sections is protected. The last section evaluates the effectiveness of these new codes, showing unusually broad detecting performances. A short appendix outlines the design details of the new wavelet codes.
## 2 Modeling Neural Networks for Protection Evaluation
There are many different neural network configurations for implementing artificial intelligence operations. Our goal is to demonstrate methods for detecting any errors that occur in arithmetic operations in the network. Computational errors arise from failures of the underlying electronic structures, such as soft errors [3]. They are very infrequent and their sources are difficult to pinpoint, but they can have a devastating effect on the results of a neural network. Accordingly, we will adopt a reasonable model that considers all normal operations that appear in neural networks. We employ a model offered by a text on neural networks [Chapter 5, 2]. The common arithmetic processing features involve large arithmetic weighting operations and activation functions. All are included in Fig. 1, which depicts the weighting sections \(\mathrm{W}^{(p)}\) and activation operator \(\mathrm{A}\).
The data processed in stages are collected into vectors \(\mathrm{Y}\), dimension \(\mathrm{K}\), as will be more formally described shortly. The number of data samples passed between stages can sometimes differ, but for exposition purposes, we assume the same size through the stages. The weighting functions are implemented by a matrix \(\mathrm{W}\) with its scaling features. The outputs of the weighting operations are denoted by \(\mathrm{\underline{S}}\), whereas the outputs to the next stage are from the activation functions, each of which has a limited output range. The role of the forward stages is to produce decision variable outputs in \(\mathrm{\underline{Y}}^{(M)}\). These final outputs of the forward network yield the decision variables.
The neural network is adjusted, possibly infrequently, by the actions of the backward propagation stages that process errors between the actual outputs and the desired outputs of the forward stages. These error values are passed through the backward stages. Each backward stage processes errors from the previous stage. Then, based on the perceived errors and using the outputs of a comparably indexed forward stage, a new set of weights is computed in a backward stage. These new weights will be employed in future forward stages. In addition, the newly defined weights are used to continue through the backward propagation processing stages. This approach also detects any errors in the control operations since they ultimately lead to numerical errors.
The arithmetic operations in a typical forward stage, labeled by index \(\mathrm{p}\) out of \(\mathrm{M}\) forward stages, are detailed further in Fig. 2. We consider the arithmetic operations as separate entities, whether of fixed-point or floating-point nature. Thus, arithmetic values are the fundamental quantities in this work. The inputs to this stage \(\mathrm{p}\) are outputs of activation functions from the previous stage. These inputs are collected into a vector \(\mathrm{\underline{Y}}^{(p-1)}\), \(1\times\mathrm{K}\). They are combined with the weighting matrix \(\mathrm{W}^{(p)},\mathrm{K}\times\mathrm{K}\), yielding outputs \(\mathrm{\underline{S}}^{(p)},1\times\mathrm{K}\). The activation function \(g(\mathrm{x})\) is applied to each variable in \(\mathrm{\underline{S}}^{(p)}\), providing the outputs of stage \(\mathrm{p}\), \(\mathrm{\underline{Y}}^{(p)}\). Since the weightings are applied linearly, the processing from \(\mathrm{\underline{Y}}^{(p-1)}\) to \(\mathrm{\underline{S}}^{(p)}\) is a matrix-vector equation.
Figure 1: Arithmetic Stages of Neural Network
\[\underline{S}^{(p)}=\underline{Y}^{(p-1)}W^{(p)} \tag{1}\]
In a compressed notation, the outputs \(\underline{Y}^{(p)}\) are expressed by applying \(g(x)\), activation function, to each component of \(\underline{S}^{(p)}\).
\[\underline{Y}^{(p)}=g\left(\underline{S}^{(p)}\right)\quad\text{ Activation Function Outputs} \tag{2}\]
The activation function can take several nonlinear forms, e.g., \(\tanh(x)\) or \(\mathrm{ReLU}(x)\) [20]. It is useful to have a well-behaved derivative since the backward stages use derivatives of the forward function. (Some functions besides \(\tanh(x)\) may not have derivatives everywhere, but that can be handled [20].)
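As a minimal illustration of (1) and (2), the following NumPy sketch runs one forward stage with \(\tanh\) as the activation; the sizes and random matrices are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 16                               # illustrative stage width
y_prev = rng.normal(size=K)          # Y^(p-1): outputs of the previous stage
W_p = rng.normal(size=(K, K)) / K    # W^(p): the stage's weighting matrix

s_p = y_prev @ W_p                   # eq. (1): linear weighting
y_p = np.tanh(s_p)                   # eq. (2): activation function outputs
print(y_p.shape)                     # (16,) -- passed on as the next stage's input
```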
The forward stages produce decision variables that give the choices for the output. If the forward network is to be trained (adjusted), the decision variables are compared to the desired outputs, and any discrepancies are used by the backward propagation stages to adjust the weighting values in the \(W^{(p)}\)'s. Any future adjusted network will use the new values contained in the new weighting matrices \(W^{(p),new}\). The role of the backward propagation stages is indicated in Fig. 1 by the adjustable symbol under the backward stages. Keep in mind that the activation function in the backward section is the derivative of the one in the correspondingly indexed forward stage.

The update of each weighting matrix is done in two phases. The output of a backward stage, \(\underline{\delta}^{(p)}\), as indicated in Fig. 3 showing a generic backward stage p, is derived using new weights called \(W^{(p),new}\); this notation designates the new variables in the backward stages of Fig. 3. They are derived using the propagating error vector \(\underline{\delta}^{(p+1)}\) from the previous backward stage (p+1) and the input vector \(\underline{Y}^{(p-1)}\) saved from the comparably indexed FORWARD stage.
\[W^{(p),new}_{ij}=W^{(p)}_{ij}+\eta\,\delta^{(p+1)}_{j}Y^{(p-1)}_{i} \tag{3}\] \[;\eta\text{ learning rate, }Y^{(p-1)}_{i}\text{ forward network values}\]
The output vector \(\underline{\delta}^{(p)}\) is calculated using the new weights. Remember, the backward propagation stages are not used during normal decision operations.
## 3 Protecting Processing Operations
There are two major processing operations involved in every forward and backward stage (Fig. 1). The filter weighting calculations may be viewed as a large matrix structure, far larger than most matrix-vector products. The second part of each stage has a bank of activation functions. It might seem that the backward feedback which adjusts the filter weightings could eventually correct any errors introduced in the updating of the weights. However, it is easy to conjure up situations where any errors would continue through to the feedback. We propose using error-detecting codes defined over large arithmetic fields to determine when computational errors in either part of a stage are present. When errors are detected, the processing steps need to be repeated to support safe use.
### Weighting Operations
The weighting of input data in stages is effectively a vector-matrix operation. For example, the data input to stage p of forward section, Fig. 2, weights a vector \(\underline{Y}\), 1\(\times\)K, with matrix \(W,K\times\)K. The output values are placed in a K vector \(\underline{S}\).
\[\underline{S}=\underline{Y}W\quad\quad\quad\quad 1\times\text{K vector} \tag{4}\]
There can be occasional infrequent sparse numerical errors in the resulting vector caused by failures in the vector-multiply operations. (Later, we will model these errors as additive errors inserted in the components of W.)
Error-detecting codes can be employed to sense any errors in \(\underline{\mathrm{S}}\).
Figure 3: Backward Propagation Stage of Neural Network
Figure 2: Forward Stage of Neural Network
We are considering block codes first for describing the concepts. Later, we will expand the approach to employ wavelet codes. Think of a large-length code word in systematic form (data and check symbols are distinct), defined by an encoding matrix \(\mathrm{G_{s}=\left(I\ P\right)}\). The parity-generating matrix for a linear code is \(\mathrm{P}\), \(\mathrm{K\times(N\text{-}K)}\), for a length \(\mathrm{N}\) code with \(\mathrm{K}\) data positions.
We will use a methodology called algorithm-based fault tolerance (ABFT) [19] where the parity values associated with output vector \(\underline{\mathrm{S}}\) will be computed in two independent ways and then compared. One set of parity values can be developed:
\[\underline{\rho}=\underline{\mathrm{S}}\mathrm{P}\qquad\qquad 1\times\left(\text{N-}\text{K}\right) \tag{5a}\]
However, \(\underline{\mathrm{S}}\) results from using matrix \(\mathrm{W}\) applied on the inputs \(\underline{\mathrm{Y}}\). Thus, an alternative version of the parity values can be defined:
\[\underline{\rho}_{a}=\underline{\mathrm{Y}}\mathrm{WP}\qquad\qquad 1\times\left(\text{N-}\text{K}\right) \tag{5b}\]
When the parities in \(\underline{\rho}\) and \(\underline{\rho}_{a}\) are compared, if they disagree in even one place, errors appear in \(\underline{\mathrm{S}}\) or \(\underline{\mathrm{Y}}\) or both (up to the error-detecting capability of the code).
The matrix (WP) appearing in developing \(\underline{\rho}_{a}\) is smaller than \(\mathrm{W}\); \(\mathrm{WP}\) is \(\mathrm{K\times(N\text{-}K)}\). It can be formed beforehand, independently. The overhead in this ABFT scheme is in the calculations \(\underline{\mathrm{Y}}\mathrm{WP}\) (5b) and \(\underline{\mathrm{S}}\mathrm{P}\) (5a). Note that the computation of the output \(\underline{\mathrm{S}}\) using \(\mathrm{W}\) is already required for forwarding the data. The efficiency of the ABFT method relies on the value (N-K) being much smaller than the data size \(\mathrm{K}\). The overall concept of ABFT is shown in Fig. 4.
The comparison of the parities in \(\underline{\rho}\) and \(\underline{\rho}_{a}\) allows a small tolerance in each position, since roundoff noise can enter the two different calculations even when no errors are present. Of course, it is always possible to overwhelm any error-detecting structure by encountering too many errors; this is true for all error-detecting codes. The absence of errors guarantees safe processing.
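A minimal NumPy sketch of this parity comparison follows; the random W and P and the tolerance are illustrative stand-ins (the paper's codes are DFT-based), used only to show the mechanics of (5a)-(5b).

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 96, 120                        # illustrative sizes: K data symbols, (N-K) parity symbols
W = rng.normal(size=(K, K))           # stage weighting matrix
P = rng.normal(size=(K, N - K))       # parity-generating matrix of the code
WP = W @ P                            # combined matrix, computable off-line

y = rng.normal(size=K)                # stage input Y
s = y @ W                             # weighted output S, the protected computation

rho = s @ P                           # parities from the output, eq. (5a)
rho_a = y @ WP                        # parities recomputed from the input, eq. (5b)
tol = 1e-8 * np.linalg.norm(rho)      # small tolerance for roundoff noise
print(np.any(np.abs(rho - rho_a) > tol))    # False: no fault injected

s[7] += 0.5                           # inject an additive numerical error into S
print(np.any(np.abs(s @ P - rho_a) > tol))  # True: the mismatch flags the error
```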
We can use the same parity-check matrix \(\mathrm{P}\) to produce parities associated with the calculation of the new weighting function in each stage in the backward direction. A generic backward stage first develops an updated weighting matrix, which is employed in the next forward stages and henceforth until the next training session. We can describe a method for verifying proper calculations using a generic backward stage with this new updated weighting, called \(\mathrm{W^{new}}\), with a formula similar to (3).
\[\mathrm{W^{new}=W+}\ \eta\,\underline{\delta}^{\mathrm{T}}\underline{\mathrm{Y}};\qquad\begin{array}{l}\eta\ \text{learning factor}\\ \underline{\delta},\ 1\times\mathrm{K}\ \text{error gradient}\\ \underline{\mathrm{Y}},\ 1\times\mathrm{K}\ \text{input data, forward}\\ \mathrm{W},\ \mathrm{K}\times\mathrm{K}\ \text{matrix, forward stage}\end{array} \tag{6}\]
The parity-check matrix \(\mathrm{P}\) is applied to \(\mathrm{W^{new}}\), generating rows of parity values \(\left(\left(\rho_{i}^{new}\right)\right)\).
\[\left(\left(\rho_{i}^{new}\right)\right)=\mathrm{W^{new}P}\qquad\mathrm{K\times\left(N\text{-}K\right)} \tag{7a}\]
Each of the \(\mathrm{K}\) rows of \(\left(\left(\rho_{i}^{new}\right)\right)\) holds (N-K) parity values.
However, similar parity values can be computed by applying \(\mathrm{P}\) to the two parts of (6) individually and adding.
\[\left(\left(\rho_{i}^{a}\right)\right)=\mathrm{WP}+\eta\,\underline{\delta}^{\mathrm{T}}\underline{\mathrm{Y}}\mathrm{P} \tag{7b}\]
Note that two items in this parity equation have already been calculated in the forward operation: \(\underline{\mathrm{Y}}\mathrm{P}\), \(1\times\left(\text{N-}\text{K}\right)\), was formed in the forward stage, as was \(\mathrm{WP}\). Then scaling \(\underline{\mathrm{Y}}\mathrm{P}\) by the error gradient \(\underline{\delta}^{\mathrm{T}}\) produces a \(\mathrm{K\times(N\text{-}K)}\) matrix, which, when adjusted by \(\eta\) and added to \(\mathrm{WP}\), yields \(\mathrm{K}\) new row vectors, each \(1\times\left(\text{N-}\text{K}\right)\). When the rows of \(\left(\left(\rho_{i}^{new}\right)\right)\) are compared to the rows of \(\left(\left(\rho_{i}^{a}\right)\right)\), any mismatch indicates a detected error in the calculation of \(\mathrm{W^{new}}\).
\[\left(\left(\rho_{i}^{new}\right)\right)-\left(\left(\rho_{i}^{a}\right)\right)\qquad\text{Parity Comparisons} \tag{7c}\]
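A sketch of the update check (7a)-(7c) follows, again with random stand-ins for the code matrices; note how (7b) reuses the quantities \(\underline{Y}P\) and \(WP\) that the forward check already formed.

```python
import numpy as np

rng = np.random.default_rng(1)
K, NK = 96, 24                          # illustrative sizes; NK = N - K parities per row
W = rng.normal(size=(K, K))             # forward-stage weights
P = rng.normal(size=(K, NK))            # parity-generating matrix
WP = W @ P                              # already available from the forward check

eta = 0.01                              # learning rate
delta = rng.normal(size=K)              # error gradient from the adjacent backward stage
y = rng.normal(size=K)                  # saved forward-stage input

W_new = W + eta * np.outer(delta, y)    # weight update, eq. (6)

rho_rows = W_new @ P                                  # parities from the updated matrix, eq. (7a)
rho_rows_a = WP + eta * np.outer(delta, y @ P)        # parities from the reused pieces, eq. (7b)
print(np.max(np.abs(rho_rows - rho_rows_a)) < 1e-8)   # True when the update was error-free
```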
Once each updated weighting matrix is computed and checked, it is employed in the backward stage. Now, checking this backward stage: its output, shown generically as a vector \(\underline{\sigma}\), \(1\times\text{K}\), can produce (N-K) parity values using check matrix \(\mathrm{P}\).
\[\underline{\rho}=\underline{\sigma}\mathrm{P} \tag{8a}\]
However, as before, the inputs to backward stages can be used to generate another set of comparable parities using the new weighting matrix just computed. This alternate parity vector is designated
\[\underline{\rho}_{a}=\underline{Q}\left(\mathrm{W^{new}P}\right)\quad 1\times\left(\text{N-}\text{K}\right);\qquad\underline{Q},\ 1\times\text{K input vector} \tag{8b}\]
Fig. 5 shows this ABFT method for a generic backward step.
Long error-detecting block codes have very poor performance, regardless of whether they take finite-field or numerically based forms. One way to obtain long error-detecting codes over numerical fields is through convolutional codes. We pioneered a new form of convolutional codes over the complex numbers [24]. Their construction uses the popular discrete Fourier transform (DFT). A brief description with defining equations is contained in Appendix A.
Convolutional codes employ a sliding memory segment of data, linearly producing parity samples. This memory
Figure 4: Algorithm-Based Fault Tolerance Method
Figure 5: Algorithm-Based Fault Tolerance
usually called the constraint length, provides error detection along the codeword stream. We propose such codes to protect the large-sized weighting operations, again employing the concept of algorithm-based fault tolerance.

The codeword in a standard systematic form intermingles the parity samples among the data segment, allowing detecting operations to proceed using a sliding memory. However, it is always possible to handle the parity parts separately. The coding structure we developed actually uses a subcode giving a slightly shorter information parameter. The underlying DFT codes have length n, (k-1) information positions, and a constraint length \(\mu\) (memory length parameter). A constraint among these parameters is required.
\[\text{n}>\big{(}\mu+1\big{)}\text{(n-k)};\quad\text{n, k and }\mu\text{ are the DFT code parameters} \tag{9}\]
We change the notation for the processing and coding to match the traditional form of convolutional codes, even though they are viewed here as a "block" code. The data and the associated parity values are each contained in long vectors: the K data samples are collected in a wide vector \(\underline{\text{Y}}\) of L subvectors, each \(1\times\)(k-1), while the affiliated parity values are contained in a vector \(\underline{\rho}\), L(n-k+1) long.
\[\underline{\text{Y}}=\big{(}\underline{Y}_{0},\underline{Y}_{1},\ldots,\underline{Y}_{L-1}\big{)};\quad\text{each }\underline{Y}_{i}\ 1\times\text{(k-1)},\ \underline{Y}\ 1\times\text{L(k-1)},\ \text{K=L(k-1)} \tag{10a}\]
\[\underline{\rho}=\underline{\text{Y}}\Gamma^{\text{T}};\qquad 1\times\text{L(n-k+1)},\ \Gamma^{\text{T}}\ \text{the parity-generating matrix of the code} \tag{10b}\]
The weighting operations are then protected exactly as in Fig. 4, with \(\Gamma^{\text{T}}\) playing the role of P: one parity set is computed from the weighted outputs \(\underline{S}\), and a comparable set is computed from the inputs using the combined matrix \(\big{(}\text{W}\Gamma^{\text{T}}\big{)}\), which can be formed off-line.

### Activation Function Operations

The bank of activation functions in each stage can be protected in a similar manner. The parity generation on the output side uses a Taylor series expansion of the activation function \(g\) about \(\underline{S}=0\).
\[g\left(\underline{S}\right)=g(0)+\frac{dg}{d\underline{S}}\bigg{|}_{\underline{S}=0}\underline{S}+\frac{1}{2!}\frac{d^{2}g}{d\underline{S}^{2}}\bigg{|}_{\underline{S}=0}\underline{S}^{2}+\cdots+\frac{1}{m!}\frac{d^{m}g}{d\underline{S}^{m}}\bigg{|}_{\underline{S}=0}\underline{S}^{m}+\text{error terms} \tag{16}\]
This is an expansion around \(\underline{S}=0\). Several coefficients in this expansion are known constants and will be designated by elements \(A_{i}\).
\[A_{i}=\frac{1}{i!}\frac{d^{i}g}{d\underline{S}^{i}}\bigg{|}_{\underline{S}=0} \tag{17}\]
Again, using vector notation, the output vector \(\underline{Y}{=}g\left(\underline{S}\right)\) can be written as an approximation out to (m+1) terms.
\[g\left(\underline{S}\right)=\sum_{i=0}^{m}A_{i}\underline{S}^{i}\ ;\ \text{where}\ \underline{S}^{i}=\left(S^{i}_{1},S^{i}_{2},\ldots,S^{i}_{K}\right)^{\text{T}} \tag{18}\]
An alternate set of parities may be determined using the approximation to \(g(X)\) and incorporating the parity-generating matrix \(\Gamma^{\text{T}}\).
\[\underline{\rho}_{a}=g\left(\underline{S}\right)\Gamma^{\text{T}}=\sum_{i=0}^{m}\left(A_{i}\Gamma^{\text{T}}\right)\underline{S}^{i} \tag{19}\]
The linearity of the expansion permits the parity matrix \(\Gamma^{\text{T}}\) to be applied to the individual powers. The parity matrix guarantees that the codewords have good separation, ensuring detection of errors up to a certain level [25]. Fig. 7 outlines the ABFT protection scheme using the Taylor series expansion.
Of course, the powers of items in vector \(\underline{S}\) have to be computed and the expansion in (16) may be the basis for computing the activation function itself.
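For \(g=\tanh\), the parity pair of (19) can be sketched as follows; the series coefficients match (20) below, and the random \(\Gamma^{\text{T}}\) is an illustrative stand-in for the DFT-based parity matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
K, NK = 96, 24
Gamma_T = rng.normal(size=(K, NK))        # stand-in for the DFT-based parity matrix

# Taylor coefficients A_i of tanh about 0 (eq. (17)); odd powers through degree 9, cf. (20)
A = {1: 1.0, 3: -1/3, 5: 2/15, 7: -17/315, 9: 62/2835}

s = rng.uniform(-0.9, 0.9, size=K)        # activation inputs inside the series' useful range
y = np.tanh(s)                            # protected activation outputs

rho = y @ Gamma_T                                        # parities from the outputs
rho_a = sum(a * (s**i) @ Gamma_T for i, a in A.items())  # eq. (19): parities from input powers

print(np.max(np.abs(rho - rho_a)))        # small: only the series truncation error remains
```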
The Taylor series expansion (18) is only valid over a limited range of the components of \(\underline{S}\). So, extending the range to develop the alternate parity calculations (19) requires approximating \(\tanh(X)\) outside \(\mid X\mid<1\). The Taylor series for \(\tanh(X)\), with terms up to degree nine, is called \(g(X)\).
\[\begin{split} g(x){=}& x{-}\left(\frac{1}{3}\right) x^{3}+\left(\frac{2}{15}\right)x^{5}-\left(\frac{17}{315}\right)x^{7}\\ &+\left(\frac{62}{2835}\right)x^{9}\end{split} \tag{20}\]
One way to do this extension approximation is to assign fixed values over segments outside the good range. A reasonable approximation for \(\tanh(X)\) is given by \(g_{s}(x)\).
\[g_{s}(x){=}\begin{cases}g(x)&\left|x\right|{\leq}1\\ 0.8049&1<\left|x\right|{\leq}1.25\\ 0.8767&1.25<\left|x\right|{\leq}1.5\\ 0.9233&1.5<\left|x\right|{\leq}1.75\\ 0.9527&1.75<\left|x\right|{\leq}2\\ 0.9820&2<\left|x\right|{\leq}5\\ 1&5<\left|x\right|\end{cases} \tag{21}\]
The fixed segments of the range of \(X\) where constant values are defined are shown in Fig. 8. The constant values are chosen as the midpoint value between the endpoints of each segment. The approximation error introduced by these segment values is on the order of \(10^{-2}\). The approximation errors are reflected into the parity calculations when \(g_{s}(x)\) is substituted in (19).
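A direct implementation of (20)-(21) can be sketched as follows; the odd-symmetric extension via \(\mathrm{sign}(x)\) is our assumption, since \(\tanh\) is an odd function and (21) lists magnitudes.

```python
import numpy as np

# Degree-9 Taylor polynomial of tanh about 0, eq. (20)
def g(x):
    return x - x**3/3 + 2*x**5/15 - 17*x**7/315 + 62*x**9/2835

# Segmented extension g_s of eq. (21): series inside |x| <= 1,
# midpoint constants on fixed segments outside, saturating at 1
def g_s(x):
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    conds = [ax <= 1, ax <= 1.25, ax <= 1.5, ax <= 1.75, ax <= 2.0, ax <= 5.0]
    vals = [g(x), 0.8049 * np.sign(x), 0.8767 * np.sign(x), 0.9233 * np.sign(x),
            0.9527 * np.sign(x), 0.9820 * np.sign(x)]
    return np.select(conds, vals, default=np.sign(x))   # first true condition wins

xs = np.linspace(-3.0, 3.0, 25)
print(np.max(np.abs(g_s(xs) - np.tanh(xs))))  # approximation error on the order of 10^-2
```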
Similar protection methods can be applied for the derivative of the activation function appearing in the backward sections. The derivative of \(\tanh(X)\) is \(\tanh^{\prime}(X)=\left[1-\tanh^{2}(X)\right]\), which has a diminishing curve from 1 to 0 for positive values of \(X\). Also, the Taylor series for \(g^{\prime}\left(X\right)\) is extracted from the expansion for \(g(X)\) by taking derivatives inside (20).
\[g^{\prime}\left(\underline{S}\right)=\frac{dg}{d\underline{S}}\bigg{|}_{\underline{S}=0}+\frac{d^{2}g}{d\underline{S}^{2}}\bigg{|}_{\underline{S}=0}\underline{S}+\frac{1}{2!}\frac{d^{3}g}{d\underline{S}^{3}}\bigg{|}_{\underline{S}=0}\underline{S}^{2}+\cdots+\frac{1}{m!}\frac{d^{m+1}g}{d\underline{S}^{m+1}}\bigg{|}_{\underline{S}=0}\underline{S}^{m}+\text{error terms} \tag{22}\]
Then an approximation using segmented pieces like \(g_{s}(x)\) in (21) is easily established for protection purposes.
On the other hand, if the activation functions are implemented by look-up tables, a simple, straightforward, and effective error-detecting scheme uses duplication: two comparable outputs are obtained by employing two different look-up operations per position. This simple but brute-force detection philosophy is outlined in Fig. 9.
## 4 Evaluations
Will the protection methods work, how well, and what are the overhead processing costs? Two parts of the neural network stages are addressed by our detection techniques. The large weighting operations, viewed as matrix multiplication, are protected by long numerically based error-detecting codes. In addition, the bank of activation functions can be protected by similar codes using a segmented Taylor series approximation for parity generation.
Figure 8: Approximations of \(\tanh(X)\) by Taylor Series and Constant Value Segments
Figure 7: Parity Detection Method Using Taylor Series Expansion of Activation Function
The large filtering operations are efficiently covered by DFT-motivated codes. An algorithm-based fault tolerance (ABFT) methodology produces two sets of comparable parity values related to the output data weighted by matrix W. Equations (12) and (13) outline this approach, relying on a parity-generating matrix \(\Gamma^{\mathrm{T}}\) (see Fig. 6). The matrix W, L(k-1)\(\times\)L(k-1), gives an output \(\underline{S}\) that in turn yields a set of L(n-k+1) parity values. The extra cost of the parity generation is a matrix-vector multiplication \(\underline{S}\Gamma^{\mathrm{T}}\). Using the number of multiplications to form the parity components in \(\underline{\rho}\) (5a) as an indication of extra cost gives L(n-k+1)(L(k-1))\({}^{2}\) multiplications. (\(\Gamma^{\mathrm{T}}\) (A-17) contains only short spans of nonzero pieces, which could greatly reduce this number.)
The other set of parities, \(\underline{\rho}_{a}\) in (13), results from applying the combined parity matrix \(\left(\mathrm{W}\Gamma^{\mathrm{T}}\right)\), which is smaller at L(k-1)\(\times\)L(n-k+1). This matrix can be computed off-line (not required during the data processing steps). It operates on the input data \(\underline{\mathrm{Y}}\) to matrix W. This second set of parities also needs L(n-k+1)\(\left(\mathrm{L(k\text{-}1)}\right)^{2}\) products. Thus, the parity calculations involve 2L(n-k+1)\(\left(\mathrm{L(k\text{-}1)}\right)^{2}\) extra multiplications. This is the operational overhead required to insert error-detecting codes in the stages.
We ran extensive MatLab simulations concerning the processing of data with weighting matrix W. These simulations focus on errors inserted in the numerical values during processing. The experiments considered three instances of matrix processing: one modeling the forward section and two backward sections. One of the backward steps used the same matrix from a similarly indexed forward section, while the other backward propagation section used the new weighting matrix \(\mathrm{W^{new}}\) (6), resulting from the updating actions of the forward matrix W.
The simulations go through three executions of the matrix iterations with randomly generated input data for each pass. This guarantees virtually everything is random, testing all operations. The total number of passes numbered in the millions. The errors in multiplications in the processing steps are modeled by randomly selecting locations in the matrix W into which random-sized errors are added. The additive error values follow a Gaussian-Bernoulli probability law. The positions where these random errors are inserted are selected uniformly over all positions in the matrix, with independent probability \(\varepsilon\). Fig. 10 indicates the main loop for one pass of the simulation.
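One pass of such a simulation can be sketched as follows; the random parity matrix and the tolerance are stand-ins for the DFT-based code, and the sizes follow code 1 of Table 1.

```python
import numpy as np

rng = np.random.default_rng(3)
K, NK = 96, 48                           # L(k-1) = 96, L(n-k+1) = 48, as for code 1 of Table 1
P = rng.normal(size=(K, NK))             # stand-in parity matrix for the DFT-based code
W = rng.normal(size=(K, K))
WP = W @ P                               # formed off-line

def one_pass(eps, tol=1e-6):
    """One pass: Bernoulli(eps) error locations in W with Gaussian amplitudes."""
    y = rng.normal(size=K)
    mask = rng.random((K, K)) < eps      # independent insertion with probability eps
    E = mask * rng.normal(size=(K, K))   # Gaussian-Bernoulli additive errors
    s = y @ (W + E)                      # faulty weighting computation
    mismatch = np.abs(s @ P - y @ WP)    # the two parity sets compared
    return np.any(mismatch > tol), int(mask.sum())

for eps in (1e-3, 1e-2, 1e-1):
    detected, n_err = one_pass(eps)
    print(f"eps={eps:g}: {n_err} errors inserted, detected={detected}")
```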
The operations are protected by a DFT-based convolutional code (the wavelet code of Appendix A). The parameters of each code are the basic length of pieces n, the (k-1) information content, and the number of parity values per code piece (n-k+1), with constraint length \(\mu\). The two parity sets for each section of a pass are compared, and if any position differs between the two, an error is detected. (Many parity positions could deviate, still counting as one detection.) The independent insertion probability \(\varepsilon\) was varied over 13 values ranging from \(\varepsilon=10^{-3}\) to \(\varepsilon=0.1\), and for each value, \(10^{7}\) passes were executed. The numbers of errors detected were collected for each pass. The performance of the coding was so good that no errors were missed. Thus, the probability of detection for runs in these ranges was 1.0.
The simulations were performed for four different numerical-valued convolutional codes whose different parameters are given in Table 1. The sizes of the matrix W used in each suite of simulations depended on a parameter L=12 and the parameters k and n-k. The matrix W was L(k-1)\(\times\)L(k-1) and the number of parity values used in each case was L(n-k+1).
The number of errors that were detected was counted. For each data point, 4 million passes were made. The codes are intrinsically powerful, so only a few errors were missed (under 5 per pass). However, if the insertion probability \(\varepsilon\) is raised above 0.5, errors were missed because they exceeded the capabilities of the codes. The number of errors detected for each insertion probability \(\varepsilon\) and for each of the codes employed is plotted in Fig. 11.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Code & k-1 & (n-k+1) & \(\mu\) & L(k-1) & L(n-k+1) \\ \hline
1 & 8 & 4 & 2 & 96 & 48 \\ \hline
2 & 11 & 4 & 2 & 132 & 48 \\ \hline \end{tabular}
\end{table}
Table 1: Simulation Code Parameters
Figure 10: Basic Simulation Loop, Filter Section
For the case of activation function processing, the simulations inserted errors on top of the normal outputs. The first parity set came directly from these outputs, possibly corrupted with infrequent additive errors. The other parity set used the \(\tanh()\) approximation in (21), Fig. 8. A comparison of these parity values provides detection capabilities. However, this approximation increased the level of small errors appearing in this set of parities. The Taylor series part utilizes powers of the input samples; it is hard to estimate its overhead because of the randomness of the samples.
The look-up part of the segmented Taylor-series-based approximation introduces errors even when normal error-free data are processed through it. Thus, the threshold for determining whether parities are mismatched has to be increased. Nevertheless, the segmented part could (infrequently) cause false detections. These, in turn, can prompt some processing to be repeated after incorrectly detecting errors in this part. We simulated millions of passes with no errors present and quickly found that there was a discernible threshold above which normal error-free data processing caused no false detections. The threshold was increased by a factor of 10 over that used in the weighting simulations.
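The calibration idea can be sketched as follows: run error-free passes through the series-based parity pair, record the worst mismatch, and scale it by the margin factor. The matrices, pass count, and input range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
K, NK = 96, 48
Gamma_T = rng.normal(size=(K, NK))       # stand-in parity matrix
A = {1: 1.0, 3: -1/3, 5: 2/15, 7: -17/315, 9: 62/2835}   # tanh series coefficients, eq. (20)

def parity_pair(s):
    rho = np.tanh(s) @ Gamma_T                               # parities from the actual outputs
    rho_a = sum(a * (s**i) @ Gamma_T for i, a in A.items())  # parities from the approximation
    return rho, rho_a

worst = 0.0                              # error-free calibration passes
for _ in range(10_000):
    s = rng.uniform(-0.9, 0.9, size=K)
    rho, rho_a = parity_pair(s)
    worst = max(worst, float(np.max(np.abs(rho - rho_a))))

threshold = 10 * worst                   # margin factor of 10, as in the text
print(threshold)
```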
Once we had set a threshold properly to avoid false error detections, simulated errors were properly detected all the time. We ran long simulations inputting random data and selecting error positions in the output places to add Gaussian errors. The results were gathered as before, and the detection levels are plotted in Fig. 13. We note that the number of detected errors is of course much lower, since there are only L(k-1) places where errors are inserted, compared to the case of weighting matrix W, which had \(\left(\text{L}\left(\text{k-1}\right)\right)^{2}\) places for error insertions. The detection performances when approximations for the activation function are employed are shown in Fig. 13.
## 5 Summary
Neural networks employ several stages that each include weighting operations and activation functions. Safe operation requires no errors appearing in any stage. The weighting parts involve a large number of arithmetic operations, multiplications and additions, which could be susceptible to infrequent numerical errors. The activation functions in the stages, which are nonlinear and have limiting outputs, can also suffer numerical errors. In all cases, errors can drastically change the final decisions at the outputs of the network.
This paper proposes and evaluates methods using numerically based error-detecting codes to sense any processing errors that can lead to erroneous decisions. The elements in the code are arithmetic symbols, not bits or groups of bits as used in more familiar error-detecting codes. We explored long codes in both sections of a network stage, weighting and activation functions. The detecting procedures involve generating parity values dictated by the code in two ways: one set from the outputs and the other directly
Figure 11: Detection Values of Protected Matrix Operations
Figure 12: Simulation Activation Functions Section
from inputs to the process stage. Protecting activation functions' operations uses a segmented Taylor series to generate one set of the necessary parity pairs. We also showed a technique for detecting numerical errors in computing the updated weighting functions implemented in the backward stages.
Extensive MatLab simulations modeling errors in a hostile environment of both weighting and activation function operations show this approach is extremely effective in detecting randomly occurring errors in the processing steps. The Taylor series approximation method for generating checking parities gives good results.
## Appendix A: Codes over Number Fields
The proposed protection methods for the two major parts of neural network configurations, the forward and backward processing stages, involve error-detecting codes. Our codes are defined over standard arithmetic fields. We outline here the details of these types of codes, block and convolutional.

Our focus was on the BCH class, which can be constructed for large lengths, 74-150 symbols. As mentioned in a recent paper [18], real number codes can be declared by treating the binary bits as real numbers. This is a well-known result, and all defining matrices then contain real number 1's [Theorem 2, 22]. The error-detecting capabilities are preserved.

BCH binary codes [Sect. 6.5, 23] are a form of cyclic codes described by a generator polynomial g(X) whose coefficients in turn define a generator matrix. The generator polynomial of degree (n-k), for a code of length n and information content k, has coefficients that also translate into a generator matrix G.

\[\mathrm{g(X)=g_{n\text{-}k}X^{n\text{-}k}+g_{n\text{-}k\text{-}1}X^{n\text{-}k\text{-}1}+\cdots+g_{1}X+g_{0}}\] (A-1a)

\[\mathrm{G}=\left(\begin{array}{ccccccc}g_{0}&g_{1}&\cdots&g_{n\text{-}k}&0&\cdots&0\\ 0&g_{0}&g_{1}&\cdots&g_{n\text{-}k}&\cdots&0\\ \vdots&&\ddots&&&\ddots&\vdots\\ 0&\cdots&0&g_{0}&g_{1}&\cdots&g_{n\text{-}k}\end{array}\right)\] (A-1b)

All coding features for protecting arithmetic operations should use codes in systematic form; the information symbols are completely distinguished from the easily identified parity symbols. This requires the encoding matrix to have a particular form.

There is a difficulty in transferring the binary code to a real number code. In the binary field, -1=+1, so when the binary coefficients of the original code are converted to the encoding matrix, each 1 must be distinguished as either +1 or -1 in the real field. These distinctions can be made by examining the coefficients of g(X) when transferred.

The proper sign for the translated coefficients can be established by noting that the generator polynomial is a product of first-degree polynomial factors in an extension field.

\[\mathrm{g(X)=\prod_{i=1}^{n\text{-}k}(X-b_{i})}\ ;\quad\mathrm{b_{i}}\ \text{roots in an extension field}\] (A-2)

The coefficients in g(X) (A-1a) are products of the roots taken the proper number of times. For example, the coefficient \(\mathrm{g_{n\text{-}k\text{-}t}}\) is a sum of all combinations of roots taken t at a time:

\[\sum(-\mathrm{b_{i}})(-\mathrm{b_{j}})\cdots(-\mathrm{b_{l}})\,.\] (A-3)

All items in the sum have the same sign depending on the size of t, even or odd: the product will be + if t is even, while the product is - if t is odd. Of course, \(\mathrm{g_{n\text{-}k}}=1\), a product of all X terms. Consequently, the signs of the coefficients in matrix G trace back to the coefficients in g(X). A code vector \(\underline{x}\) in systematic form is composed of
two subvectors.
\[\underline{x}=\left(\underline{u}\text{:}\underline{\varepsilon}\right);\qquad\underline{\varepsilon}\ \text{is}\ 1\times\text{(n-k)},\ \text{the parity vector}\] (A-4)
The parity-check symbols are related to the data symbols through the submatrix P.
\[\underline{\varepsilon}=\underline{u}\text{P};\qquad 1\times\text{(n-k) parity vector}\] (A-5)
These checking symbols are used to determine if errors have been added to the code vector \(\underline{x}\).
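A tiny sketch of the systematic encode-and-check of (A-4)-(A-5) follows, with a random stand-in for the parity submatrix P:

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 12, 8
P = rng.normal(size=(k, n - k))          # stand-in for the parity submatrix of G_s = (I P)
u = rng.normal(size=k)                   # data symbols
x = np.concatenate([u, u @ P])           # systematic codeword x = (u : eps), eqs. (A-4)-(A-5)

u_hat, eps_hat = x[:k], x[k:]            # re-derive the parities from the received data part
print(np.allclose(eps_hat, u_hat @ P))   # True for an error-free codeword

x[3] += 0.2                              # an additive error in a data position
print(np.allclose(x[k:], x[:k] @ P))     # False: the check exposes the error
```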
BCH codes are defined only for certain lengths and information contents due to their construction using roots in a binary extension field [23]. However, the information content, factor k, can be adjusted by setting a number of the normal information positions to 0. The information content becomes (k-r) by setting r information positions to 0, and the length is also shortened to (n-r). This can be described by a generator matrix like (A-1b), formed by selecting k-r rows of the original G and removing r columns.

Another matrix acting on the data is a checking matrix H. This checking matrix annihilates all codewords \(\underline{x}\), i.e., \(\underline{0}=\text{H}\underline{x}\). When additive errors appear in the data stream \(\underline{u}=\left(\underline{u}_{0},\ \underline{u}_{1},\ \ldots,\underline{u}_{p},\ \ldots\right)\), the parity-checking matrix produces a syndrome vector \(\underline{S}\), which is composed of syndrome subvectors \(\underline{S}_{p}\).
\[\underline{S}=\text{H}\underline{x}\ ;\qquad\underline{S}=\left(\underline{S}_{0},\ \underline{S}_{1},\ \ldots,\underline{S}_{p},\ \ldots\right);\ \ \underline{S}_{p}=\left(s_{p0},\ s_{p1},\ \ldots\right)\] (A-8)
The parity-checking matrix H is formed using a finite number of submatrices, so that it engages only a finite length of the codeword stream at a time. The number of these submatrices is governed by the constraint length of the code, m.
\[\text{H}=\left(\begin{array}{cccccccc}H_{0}&0&0&\cdots&&&&\\ H_{1}&H_{0}&0&0&\cdots&&&\\ H_{2}&H_{1}&H_{0}&0&0&\cdots&&\\ \vdots&&\vdots&\ddots&\ddots&&&\\ H_{m}&H_{m-1}&\cdots&\cdots&H_{1}&H_{0}&0&\cdots\\ 0&H_{m}&H_{m-1}&&&H_{1}&H_{0}&0\\ 0&0&&\ddots&\ddots&&\ddots&\ddots\end{array}\right)\] (A-9)
Each group of syndrome subvectors \(\underline{S}_{p}\) in \(\underline{S}\) involves the product of a set of submatrices in H, collected as \(\text{H}_{\text{SG}}\).
\[\text{H}_{\text{SG}}=\left(\begin{array}{ccccc}H_{m}&H_{m-1}&\cdots&H_{1}&H_{0}\end{array}\right)\] (A-10)
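The sliding-window syndrome computation of (A-9)-(A-10) can be sketched as follows; the random submatrices are stand-ins (a real code's H would annihilate valid codewords) used only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, m = 12, 9, 2                       # illustrative: block length n, info k, constraint length m
H_subs = [rng.normal(size=(n - k, n)) for _ in range(m + 1)]   # stand-ins for H_0 ... H_m
H_SG = np.hstack(H_subs[::-1])           # (H_m  H_{m-1}  ...  H_1  H_0), eq. (A-10)

# a stream of length-n blocks; each syndrome subvector engages (m+1) consecutive blocks
x = [rng.normal(size=n) for _ in range(8)]

syndromes = []
for p in range(m, len(x)):
    window = np.concatenate(x[p - m : p + 1])    # blocks x_{p-m}, ..., x_p
    syndromes.append(H_SG @ window)              # syndrome subvector S_p
print(len(syndromes), syndromes[0].shape)        # 6 subvectors, each (n-k,) = (3,)
```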
This class of codes, sometimes called Piret convolutional codes [24], does require a constraint among the governing parameters.
\[\big{(}\text{m+1}\big{)}\big{(}\text{n-k}\big{)}\text{<n}\] (A-11)
This requirement guarantees that there are enough DFT vectors to construct the code. A description of all the details for constructing these types of codes is given in [23], along with many other features of such codes. It is also possible to develop the generators of these DFT-based convolutional codes to have real-valued coefficients.
For our use, it is best to describe these codes using a polyphase representation [26], wherein the vectors and matrices are Z-transforms of the sequences of symbols, and each element in a submatrix is labeled according to its location in the submatrix. The systematic form (A-15) indicates that the submatrices \(\Xi_{i}\), (n-k+1)\(\times\)(k-1), i=0,1,...,\(\mu\), are useful in determining parity values. Expanding on the use of the convolutional code, the data components for this subcode can be collected into a semi-infinite vector containing well-defined pieces, subvectors each (k-1) long.
\[\underline{\text{Y}}=\left(\underline{\text{Y}}_{0},\,\underline{\text{Y}}_{1},\,\ldots,\underline{\text{Y}}_{\text{i}},\,\underline{\text{Y}}_{\text{i+1}},\ldots\,\right)^{\text{T}}\ ;\ \underline{\text{Y}}_{\text{i}}\ 1\times\text{(k-1) data}\] (A-19)
The semi-infinite parity vector is composed of subvectors P, 1\(\times\)(n-k+1), each calculated using the submatrices \(\Xi_{i}\) shown shortly.
\[\underline{\text{P}}=\left(\underline{\text{P}}_{0},\,\underline{\text{P}}_{1},\ldots,\underline{\text{P}}_{\text{i}},\,\underline{\text{P}}_{\text{i+1}},\ldots\,\right)^{\text{T}}\ ;\ \underline{\text{P}}_{\text{i}}\ 1\times\text{(n-k+1) parity}\] (A-20a)
All subvectors result from using a parity-generating matrix \(\Gamma\) with the same form as H above.
\[\Gamma=\left(\begin{array}{cccccccc}-\Xi_{0}&0&0&\cdots&&&&\\ -\Xi_{1}&-\Xi_{0}&0&0&\cdots&&&\\ -\Xi_{2}&-\Xi_{1}&-\Xi_{0}&0&0&\cdots&&\\ \vdots&&\vdots&\ddots&\ddots&&&\\ -\Xi_{\mu}&-\Xi_{\mu-1}&\cdots&\cdots&-\Xi_{1}&-\Xi_{0}&0&\cdots\\ 0&-\Xi_{\mu}&-\Xi_{\mu-1}&&&-\Xi_{1}&-\Xi_{0}&0\\ 0&0&&\ddots&\ddots&&\ddots&\ddots\end{array}\right)\] (A-20b)
\[\underline{\text{P}}=\Gamma\underline{\text{Y}}\] (A-20c)
Each parity subvector \(\underline{\text{P}}_{\text{q}}\) involves engaging a finite number of the data subvectors \(\underline{\text{Y}}_{\text{i}}\) of (A-19).
\[\underline{\text{P}}_{\text{q}}=\left(-\Xi_{0},\ -\Xi_{1},\ \cdots,\ -\Xi_{\mu}\right)\left(\begin{array}{c}\underline{\text{Y}}_{\text{q}}^{\text{T}}\\ \underline{\text{Y}}_{\text{q+1}}^{\text{T}}\\ \vdots\\ \underline{\text{Y}}_{\text{q+}\mu}^{\text{T}}\end{array}\right)\] (A-21)
\[\text{(n-k+1)}\times(\mu+1)\text{(k-1)}\qquad\quad(\mu+1)\text{(k-1)}\times 1\]
Note that the parity subvector \(\underline{\text{P}}_{\text{q+1}}\) involves the input subvectors \(\underline{\text{Y}}_{\text{q+1}},\underline{\text{Y}}_{\text{q+2}},\cdots,\underline{\text{Y}}_{\text{q+1+}\mu}\), which overlap those involved in \(\underline{\text{P}}_{\text{q}}\). Hence, parity symbols are generated by the matrix \(\Xi=\left(-\Xi_{0},\ -\Xi_{1},\ \cdots,\ -\Xi_{\mu}\right)\).
The data samples in stream \(\underline{\text{Y}}\) can be segmented into pieces, each \((\mu+1)\text{(k-1)}\times 1\), and then applied to the matrix \(\Xi\). If there are errors in a segment of \(\underline{\text{Y}}\), the resulting parity values will not match those computed in the second, independent way. The error-detection process can proceed as segments of the observed data progress.
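A sketch of this parity-stream computation per (A-21) follows; the \(\Xi_{i}\) blocks are random stand-ins for the DFT-derived submatrices, with sizes matching code 1 of Table 1.

```python
import numpy as np

rng = np.random.default_rng(6)
k1, nk1, mu = 8, 4, 2                    # (k-1), (n-k+1), mu, matching code 1 of Table 1
Xi = [rng.normal(size=(nk1, k1)) for _ in range(mu + 1)]   # stand-ins for Xi_0 ... Xi_mu
Xi_row = np.hstack([-X for X in Xi])     # (-Xi_0, -Xi_1, ..., -Xi_mu): (n-k+1) x (mu+1)(k-1)

L = 12
Y = [rng.normal(size=k1) for _ in range(L + mu)]           # data subvectors, each 1 x (k-1)

# each parity subvector engages (mu+1) overlapping data subvectors, eq. (A-21)
P_stream = [Xi_row @ np.concatenate(Y[q : q + mu + 1]) for q in range(L)]
print(len(P_stream), P_stream[0].shape)  # 12 parity subvectors, each (n-k+1,) = (4,)
```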
The ABFT technique in the text uses a modified weighting matrix combining \(\Gamma\) and the weighting matrix W, as \(\Gamma\)W. The banded structure of \(\Gamma\), with the submatrices \(\Xi_{i}\) appearing in limited parts of \(\Gamma\) as clearly visible in (A-20b), means there are many fewer multiplications required in forming \(\Gamma\)W: only \((\mu+1)\text{(k-1)}\) columns are nonzero when applying \(\Gamma\) to W.
|
2306.08938 | Scalable Resource Management for Dynamic MEC: An Unsupervised
Link-Output Graph Neural Network Approach | Deep learning has been successfully adopted in mobile edge computing (MEC) to
optimize task offloading and resource allocation. However, the dynamics of edge
networks raise two challenges in neural network (NN)-based optimization
methods: low scalability and high training costs. Although conventional
node-output graph neural networks (GNN) can extract features of edge nodes when
the network scales, they fail to handle a new scalability issue whereas the
dimension of the decision space may change as the network scales. To address
the issue, in this paper, a novel link-output GNN (LOGNN)-based resource
management approach is proposed to flexibly optimize the resource allocation in
MEC for an arbitrary number of edge nodes with extremely low algorithm
inference delay. Moreover, a label-free unsupervised method is applied to train
the LOGNN efficiently, where the gradient of edge tasks processing delay with
respect to the LOGNN parameters is derived explicitly. In addition, a
theoretical analysis of the scalability of the node-output GNN and link-output
GNN is performed. Simulation results show that the proposed LOGNN can
efficiently optimize the MEC resource allocation problem in a scalable way,
with an arbitrary number of servers and users. In addition, the proposed
unsupervised training method has better convergence performance and speed than
supervised learning and reinforcement learning-based training methods. The code
is available at \url{https://github.com/UNIC-Lab/LOGNN}. | Xiucheng Wang, Nan Cheng, Lianhao Fu, Wei Quan, Ruijin Sun, Yilong Hui, Tom Luan, Xuemin Shen | 2023-06-15T08:21:41Z | http://arxiv.org/abs/2306.08938v2 | Scalable Resource Management for Dynamic MEC: An Unsupervised Link-Output Graph Neural Network Approach
###### Abstract
Deep learning has been successfully adopted in mobile edge computing (MEC) to optimize task offloading and resource allocation. However, the dynamics of edge networks raise two challenges in neural network (NN)-based optimization methods: low scalability and high training costs. Although conventional node-output graph neural networks (GNN) can extract features of edge nodes when the network scales, they fail to handle a new scalability issue where the dimension of the decision space may change as the network scales. To address the issue, in this paper, a novel link-output GNN (LOGNN)-based resource management approach is proposed to flexibly optimize the resource allocation in MEC for an arbitrary number of edge nodes with extremely low algorithm inference delay. Moreover, a label-free unsupervised method is applied to train the LOGNN efficiently, where the gradient of edge tasks processing delay with respect to the LOGNN parameters is derived explicitly. In addition, a theoretical analysis of the scalability of the node-output GNN and link-output GNN is performed. Simulation results show that the proposed LOGNN can efficiently optimize the MEC resource allocation problem in a scalable way, with an arbitrary number of servers and users. In addition, the proposed unsupervised training method has better convergence performance and speed than supervised learning and reinforcement learning-based training methods. The code is available at [https://github.com/UNIC-Lab/LOGNN](https://github.com/UNIC-Lab/LOGNN).
edge computing, link-output graph neural network, scalability, unsupervised learning
## I Introduction
In recent years, mobile edge computing (MEC) has garnered widespread attention for its ability to reduce task processing latency by providing edge computing resources to users with limited computing capabilities [1]. The conventional approach involves transmitting user tasks to the server via a wireless network, with several variables optimized to minimize task computing delays, such as offloading proportion, user transmission power, and server computing resource allocation, among others. Given the critical role of MEC in enhancing user experience, numerous researchers have focused their efforts on designing resource management methods that can further optimize MEC performance [2, 3]. Drawing inspiration from the remarkable achievements of deep learning, several studies have employed neural networks (NN) to address the optimization problem in MEC, outperforming traditional optimization methods both in terms of performance and algorithm execution time [4, 5].
The dynamics of MEC systems, which are changes in numbers and locations of the edge servers (e.g., flying drones as MEC servers) and users (e.g., vehicular users), raise two challenges in NN-based methods: low scalability and high training costs [6]. Scalability plays a crucial role in determining whether a new NN architecture needs to be manually designed and retrained when the number of edge servers and users changes. Additionally, training costs are a key factor in determining the ability of an NN-based method to be rapidly deployed in the edge network. Some recent works explore using graph neural networks (GNN) to deal with the scalability issue in the input space, which usually includes the features of network nodes. This is because the inference of GNN relies on message passing, which is independent of the input dimension [7, 8]. However, in MEC, the dimension of offloading and resource allocation decisions can also change as the network scales. For instance, a task can be offloaded to a varying number of servers, thereby leading to a unique scalability issue, i.e., scalability in the decision space. Unfortunately, the conventional node-output GNN, which allocates resources from an edge node to others through a fixed-dimension node feature vector, fails to address the dimension change of the decision space caused by the changing size of the edge network. Thus, the development of a novel approach that can efficiently tackle the scalability issues in both the input and decision spaces remains a crucial area of research in optimizing MEC.
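A minimal sketch of the node-output versus link-output distinction follows, assuming a plain PyTorch implementation rather than the authors' released code: after one message-passing step, the readout scores every (server, user) edge, so the number of outputs tracks the number of links instead of a fixed decision dimension.

```python
import torch
import torch.nn as nn

class LinkOutputLayer(nn.Module):
    """One message-passing step followed by a per-edge (per-link) readout."""

    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)     # node update from [self, aggregated]
        self.edge_score = nn.Linear(2 * dim, 1)   # readout on [source, target] pairs

    def forward(self, x, edge_index):
        src, dst = edge_index                     # edge_index: [2, num_edges]
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, x[src])            # sum incoming messages per node
        h = torch.relu(self.update(torch.cat([x, agg], dim=-1)))
        e = torch.cat([h[src], h[dst]], dim=-1)   # one feature vector per link
        return torch.sigmoid(self.edge_score(e))  # e.g. a per-link allocation ratio

# The same weights run on any number of servers and users:
x = torch.randn(7, 16)                            # 7 nodes, hypothetical sizes
edge_index = torch.tensor([[0, 1, 2, 3], [4, 4, 5, 6]])
alloc = LinkOutputLayer(16)(x, edge_index)        # shape [4, 1]: one value per link
```

Because the two weight matrices are shared across all nodes and links, the layer can be reused unchanged when servers or users join or leave the network.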
This paper presents a novel resource management scheme for dynamic MEC that leverages the capabilities of a link-output Graph Neural Network (LOGNN). Although the dimension of each output link feature is fixed, the number of graph links grows as the number of edge servers and users increases, so the size of the decision space scales with the network.
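The label-free training idea from the abstract can then be sketched by backpropagating a differentiable delay surrogate through the per-link outputs, reusing the LinkOutputLayer and toy graph from the previous sketch; the surrogate below is a hypothetical stand-in, not the delay model the paper derives explicitly.

```python
model = LinkOutputLayer(16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def delay(alloc, load, capacity):
    """Toy processing-delay surrogate: per-link load over allocated capacity."""
    return (load / (alloc.squeeze(-1) * capacity + 1e-6)).mean()

load = torch.rand(4)                  # hypothetical per-link task loads
capacity = torch.full((4,), 10.0)     # hypothetical per-link capacities

for step in range(200):
    alloc = model(x, edge_index)      # per-link allocation ratios in (0, 1)
    loss = delay(alloc, load, capacity)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

No labels or solved instances appear anywhere in the loop, which is the sense in which the training is unsupervised.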