# kNN-Res: Residual Neural Network with kNN-Graph coherence for point cloud registration

Muhammad S. Battikh, Dillon Hammill, Matthew Cook, Artem Lensky
2023-03-31. http://arxiv.org/abs/2304.00050v2
###### Abstract
In this paper, we present a residual neural network-based method for point set registration that preserves the topological structure of the target point set. Similar to coherent point drift (CPD), the registration (alignment) problem is viewed as the movement of data points sampled from a target distribution along a regularized displacement vector field. While the coherence constraint in CPD is stated in terms of local motion coherence, the proposed regularization term relies on a global smoothness constraint as a proxy for preserving local topology. This makes CPD less flexible when the deformation is locally rigid but globally non-rigid, as in the case of multiple objects and articulated pose registration. A Jacobian-based cost function and geometry-aware statistical distances are proposed to mitigate these issues. The latter allows for measuring misalignment between the target and the reference. The justification for the k-Nearest Neighbour (kNN) graph preservation of target data, when the Jacobian cost is used, is also provided. Further, to tackle the registration of high-dimensional point sets, a constant-time stochastic approximation of the Jacobian cost is introduced. The proposed method is illustrated on several 2-dimensional toy examples and tested on high-dimensional flow cytometry datasets where the task is to align two distributions of cells
whilst preserving the kNN-graph in order to preserve the biological signal of the transformed data. The implementation of the proposed approach is available at [https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/](https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/) under the MIT license.
## 1 Introduction
Point set registration is a widely studied problem in the field of computer vision that also arises in other fields, e.g., bioinformatics, as discussed below. The problem involves aligning a deformed target set of \(d\)-dimensional points to another reference point set by applying a constrained transformation. This alignment allows for improved comparison and analysis of the two sets of points and is used in a variety of fields including object tracking, body shape modeling, human pose estimation, and removal of batch effects in biological data [1, 2, 3, 4, 5].
Point set registration techniques are typically categorized based on two main properties: first, whether the technique is correspondence-based or correspondence-free, and second, whether the estimated transformation is rigid or non-rigid. Correspondence-based techniques require the availability of correspondence information (e.g. labels) between the two point sets, while correspondence-free registration, sometimes called simultaneous pose and correspondence registration, does not require such information and is therefore considered a significantly more difficult problem. Rigid registration techniques are also generally simpler. A rigid transformation is an isometric transformation that preserves the pairwise distances between points; such a transformation is typically modeled as a combination of rotation and translation. Several rigid registration techniques have been proposed in [6, 7, 8, 9, 10, 11, 12, 13, 14]. Assuming the transformation is rigid, however, limits the types of deformations that can be handled. Non-rigid transformations allow for more flexibility; however, this makes the problem ill-posed, as there are an infinite number of transformations that could align two point sets; thus, non-rigid registration techniques employ additional constraints.
### Problem Formulation
In this section, we formulate the alignment problem. Inspired by CPD [15], we view an alignment method as finding a map \(\phi\) that transforms data points sampled from an underlying distribution \(Q\) to distribution \(P\) in such a way that preserves the topological structure of data sampled from \(Q\). This is an ill-posed density estimation problem; therefore, we require an additional desideratum that \(\phi\) be as simple as possible. In this context, we call a map \(\phi\) simple if it is close to the identity transformation. Importantly, this can be visualized as data points sampled from \(Q\) moving along a regularized displacement vector field \(F\).
More formally, we denote two sets of \(d\)-dimensional vectors (points): a reference point set \(\mathbf{R}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{n}\}\), and a target point set \(\mathbf{T}=\{\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{m}\}\), generated by probability distributions \(P\) and \(Q\) respectively. Additionally, a \(k\)-Nearest Neighbour (kNN) graph is associated with (or constructed from) the set \(\mathbf{T}\), which must be preserved after transformation. A kNN graph for the set \(\mathbf{T}\) is a directed graph such that there is an edge from node \(i\) to \(j\) if and only if \(\mathbf{y}_{j}\) is among \(\mathbf{y}_{i}\)'s \(k\) most similar items in \(\mathbf{T}\) under some similarity measure \(\rho\).
Thus, the goal of an alignment method, given the sets \(\mathbf{R}\) and \(\mathbf{T}\) in matrix form as \(X\in\mathbb{R}^{n\times d}\) and \(Y\in\mathbb{R}^{m\times d}\) respectively, is to find a transformation \(\phi\) parameterized by \(\theta\) such that:
\[\hat{\theta}=\arg\min_{\theta}D(\phi(Y;\theta),X) \tag{1}\]
subject to the constraints:
\[\texttt{kNN}_{g}(\phi(Y;\theta))=\texttt{kNN}_{g}(Y) \tag{2}\]
where \(D\) is a statistical distance that measures the difference between two probability distributions.
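For concreteness, the kNN graph in constraint (2) can be materialized as a boolean adjacency matrix; the following is a minimal PyTorch sketch of ours, assuming Euclidean similarity for \(\rho\):

```
import torch

def knn_graph(T: torch.Tensor, k: int) -> torch.Tensor:
    """Directed kNN graph of a point set T of shape (m, d).

    Returns an (m, m) boolean adjacency matrix A, where A[i, j] is True
    iff y_j is among the k nearest neighbours of y_i (self excluded).
    """
    dist = torch.cdist(T, T)                   # (m, m) pairwise Euclidean distances
    dist.fill_diagonal_(float("inf"))          # exclude self-matches
    idx = dist.topk(k, largest=False).indices  # k nearest neighbours per point
    A = torch.zeros(T.size(0), T.size(0), dtype=torch.bool)
    A.scatter_(1, idx, True)
    return A
```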
### Limitations of existing approaches
A classic example of such a constraint is found in CPD [15] and its extensions [16, 17, 18]. CPD uses a Gaussian Mixture Model to induce a displacement field from the target to the source points and uses local motion coherence to constrain the field such that nearby target points move together. CPD achieves this, however, via a global smoothing constraint, which makes it locally inflexible and therefore unsuitable for articulated deformations in 3D human data, scenes with multiple objects, and biological data [19].
In this work, we introduce a Jacobian orthogonality loss and show that it is a sufficient condition for preserving the kNN graph of the data. Jacobian orthogonality is introduced as a penalty \(|\mathbf{J}_{\mathbf{x}}^{\top}\mathbf{J}_{\mathbf{x}}-\mathbf{I}_{d}|\), where \(\mathbf{J}_{\mathbf{x}}\) is the Jacobian matrix at a point \(\mathbf{x}\) and \(\mathbf{I}_{d}\) is the \(d\times d\) identity matrix. The penalty has been proposed in other contexts as well, such as unsupervised disentanglement [20] and medical image registration [21, 22].
In [21], the finite difference method is employed to compute the Jacobian penalty for the B-splines warping transformation, and mutual information of corresponding voxel intensities is used as the similarity measure. Instead of using finite difference for the Jacobian penalty, which produces a numerical approximation of first-order derivatives, the authors of [22] derive an analytical derivative specific to the multidimensional B-splines case. Such approaches however are limited to low dimensions by the nature of the transformations used, the way in which the Jacobian penalty is computed, and their proposed similarity measures.
### Contributions
To address these limitations, we use Hutchinson's estimator [20, 23] for fast computation of the Jacobian loss for high-dimensional point clouds, a scalable residual neural network (ResNet) [24] architecture as our warping transformation, and geometry-aware statistical distances. The choice of a ResNet with identity block \(\phi(x)=x+\delta(x)\) is natural since, similar to CPD, we view alignment as a regularized movement of data points along a displacement vector field, which in our case is simply \(\phi(x)-x=\delta(x)\). It is also worth mentioning that ResNets can learn the identity mapping more easily. Further discussion on this choice is given in section 2.2. Moment-matching ResNet (MM-Res) [5] uses a similar ResNet architecture with an RBF-kernel maximum mean discrepancy as its similarity measure [25, 26]; however, no topological constraints are imposed to preserve the topological structure of the transformed data nor to limit the nature of the learned transformation, as shown in Figure 1. Additionally, while maximum mean discrepancy is a geometry-aware distance, we address some of its limitations by incorporating Sinkhorn divergence into our framework [27].
Figure 1: Stanford Bunny example showing the effect of the Jacobian penalty on the learned transformation.
To elaborate further, we start by defining the Maximum Mean Discrepancy (MMD):
\[\texttt{MMD}(\alpha,\beta):=\frac{1}{2}\int_{X^{2}}k(\mathbf{x},\mathbf{y})d \zeta(\mathbf{x})d\zeta(\mathbf{y}) \tag{3}\]
where \(\alpha,\beta\in M_{1}^{+}(X)\) are unit-mass positive empirical distributions on a feature space \(X\), \(\zeta=\alpha-\beta\), and \(k(\mathbf{x},\mathbf{y})\) is a kernel function. MM-Res uses an RBF kernel, which is suitable for high-dimensional Euclidean feature spaces (e.g. to represent \(X\subset\mathbb{R}^{n}\)) and keeps training complexity low as it scales up to large batches; nonetheless, such a kernel blinds the model to details smaller than its standard deviation, and the network's gradient suffers from the well-known vanishing gradient problem. One simple solution is to decrease the standard deviation of the kernel; however, this introduces another issue, namely, the target points will not be properly attracted to source points [28]. In practice, this makes such a framework incapable of learning simple deformations with sizable translations, as we show in section 4. Optimal transport (OT) losses do not typically suffer from this issue and produce more stable gradients; however, such losses require solving computationally costly linear programs. A well-known efficient approximation of the OT problem is entropy-regularized \(\texttt{OT}_{\epsilon}\) [29]; for \(\epsilon>0\), it is defined as:
\[\texttt{OT}_{\epsilon}(\alpha,\beta):=\min_{\pi_{1}=\alpha,\pi_{2}=\beta}\int _{X^{2}}C(\mathbf{x},\mathbf{y})d\pi+\epsilon\texttt{KL}(\pi|\alpha\times\beta) \tag{4}\]
where \(C\) is a cost function (typically the Euclidean distance), \((\pi_{1},\pi_{2})\) denotes the two marginals of the coupling measure \(\pi\in M_{1}^{+}\) and KL is the KL-divergence. The solution for this formulation could be efficiently computed using the Sinkhorn algorithm as long as \(\epsilon>0\). It is clear that by setting \(\epsilon\) to 0, this minimization problem reduces back to standard OT. Sinkhorn divergence combines the advantages of MMD and OT and is defined as:
\[S_{\epsilon}(\alpha,\beta)=\texttt{OT}_{\epsilon}(\alpha,\beta)-\frac{1}{2}( \texttt{OT}_{\epsilon}(\alpha,\alpha)+\texttt{OT}_{\epsilon}(\beta,\beta)) \tag{5}\]
The authors of [29] show that:
\[\lim_{\epsilon\to 0}S_{\epsilon}(\alpha,\beta)=\texttt{OT}(\alpha,\beta) \tag{6}\]
and
\[\lim_{\epsilon\rightarrow\infty}S_{\epsilon}(\alpha,\beta)=\frac{1}{2}\texttt{MMD}_{-C}^{2}(\alpha,\beta) \tag{7}\]
where \(C\) is the kernel used by MMD.
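To make equations (4) and (5) concrete, the following is a minimal log-domain Sinkhorn sketch for uniform empirical measures. The helper names and the plain Euclidean cost are our illustrative choices; production implementations (e.g. the geomloss package) add further refinements:

```
import math
import torch

def sinkhorn_ot(x, y, eps=0.01, n_iter=200):
    """Entropic OT cost OT_eps (4) between uniform empirical measures on
    x (n, d) and y (m, d), via log-domain Sinkhorn iterations.
    A simplified balanced sketch, not a full production solver."""
    n, m = x.size(0), y.size(0)
    C = torch.cdist(x, y)                      # Euclidean cost matrix
    log_a, log_b = -math.log(n), -math.log(m)  # log of uniform weights
    f = torch.zeros(n, device=x.device)        # dual potentials
    g = torch.zeros(m, device=x.device)
    for _ in range(n_iter):
        f = -eps * torch.logsumexp((g[None, :] - C) / eps + log_b, dim=1)
        g = -eps * torch.logsumexp((f[:, None] - C) / eps + log_a, dim=0)
    return f.mean() + g.mean()                 # <f, a> + <g, b> at (approximate) optimality

def sinkhorn_divergence(x, y, eps=0.01):
    """Debiased Sinkhorn divergence S_eps of equation (5)."""
    return sinkhorn_ot(x, y, eps) - 0.5 * (sinkhorn_ot(x, x, eps) + sinkhorn_ot(y, y, eps))
```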
In the following section, we review other related methods.
### Other related work
Several point cloud registration approaches have been proposed. Thin-plate spline-based techniques preserve the local topology via local rigidity on the surface of a deformed shape; however, these approaches are not scalable to large datasets and are typically limited to 3-dimensional point clouds [30, 31, 32, 33, 34, 35]. To address these limitations, a deep learning paradigm for point cloud registration has been adopted. Deep learning-based approaches can be divided into two categories, namely, feature-based and end-to-end learning. In feature-based methods, a neural network is used as a feature extractor. By developing sophisticated network architectures or loss functions, these methods aim to estimate robust correspondences from the learned distinctive features [30, 36, 37, 38]. While feature-based learning typically involves elaborate pipelines with various steps such as feature extraction, correspondence estimation, and registration, end-to-end learning methods combine these steps in one objective and try to solve the registration problem directly by converting it to a regression problem [39, 40]. For example, [39] employs a key point detection method while simultaneously estimating relative pose.
Another class of methods comprises graph matching techniques, which cast registration as a quadratic assignment problem (QAP) [40]. The main challenge for such methods is finding efficient approximations to the otherwise NP-hard QAP. Congruent Sets Gaussian Mixture (CSGM) [41] uses a linear program to solve the graph-matching problem and applies it to the cross-source point cloud registration task. Another approach is high-order graph matching [42], which uses an integer projection algorithm to optimize the objective function in the integer domain. Finally, the Factorized Graph Matching (FGM) method [43] factorizes the large pairwise affinity matrix into smaller matrices, and the graph-matching problem is then solved with a simple path-following optimization algorithm.
## 2 Proposed model
### Methodology
In our case, we parametrize the transformation \(\phi\) as a residual neural network and formulate the optimization problem as:
\[\mathcal{L}(\theta)=\mathcal{L}_{1}+\lambda\mathcal{L}_{2} \tag{8}\]
where \(\mathcal{L}_{1}\) is the alignment loss \(D(\phi(Y;\theta),X)\), \(\lambda\) is a hyperparameter, and \(\mathcal{L}_{2}\) is the topology-preserving loss:
\[\mathcal{L}_{2}=\frac{1}{m}\sum_{\mathbf{y}\in\mathbf{T}}|\mathbf{J}_{\mathbf{y}}^{\top}\mathbf{J}_{\mathbf{y}}-\mathbf{I}_{d}| \tag{9}\]
where \(\mathbf{J}_{\mathbf{y}}\) is the Jacobian matrix at point \(\mathbf{y}\) and \(\mathbf{I}_{d}\) is the \(d\times d\) identity matrix. In section 2.4 we prove that the orthogonality of the Jacobian matrix is indeed a sufficient condition for preserving the kNN graph of the data. We use two statistical distances, namely, Sinkhorn divergence and maximum mean discrepancy. Sinkhorn divergence is a computationally efficient approximation of the Wasserstein distance in high dimensions and converges to the maximum mean discrepancy.
\[\mathcal{L}_{1}(\theta)=S_{\epsilon}(\alpha,\beta)=\texttt{OT}_{\epsilon}(\alpha,\beta)-\frac{1}{2}(\texttt{OT}_{\epsilon}(\alpha,\alpha)+\texttt{OT}_{\epsilon}(\beta,\beta)) \tag{10}\]
where \(\texttt{OT}_{\epsilon}\) is the optimal transport with \(\ell_{2}\)-norm cost, and \(\alpha\) and \(\beta\) are measures over the reference and target distributions respectively. The measures \(\alpha\) and \(\beta\) are unknown and are accessible only via the samples \(\mathbf{R}\) and \(\mathbf{T}\) respectively. Although \(S_{\epsilon}(\alpha,\beta)\) interpolates to MMD as \(\epsilon\) goes to infinity, we still maintain an efficient standalone MMD distance for data where MMD performs better than the Wasserstein distance, so as to avoid the interpolation overhead. Specifically, we use Gaussian-based MMD:
\[\texttt{MMD}(\alpha,\beta):=\frac{1}{2}\int_{X^{2}}k(\mathbf{x},\mathbf{y})d \zeta(\mathbf{x})d\zeta(\mathbf{y}) \tag{11}\]
### Architecture
We use a simple ResNet identity block with a skip connection as our transformation, where the output dimension is equal to the input dimension and the output is calculated as \(\phi(\mathbf{y};\theta)=\mathbf{y}+\delta(\mathbf{y};\theta)\), where \(\delta\) is a standard multi-layer perceptron (MLP) with LeakyReLU activation functions and \(\theta\) represents the trainable weights of the network. The ResNet identity block is chosen for two reasons. First, biasing \(\theta\) toward small values via weight decay, or initializing the output layer from a zero-mean distribution with a small standard deviation, minimizes the contribution of \(\delta(\mathbf{y};\theta)\) to the final transformation, which makes \(\phi(\mathbf{y};\theta)\) close to the identity by design. Second, since we take a similar approach to CPD by viewing the alignment transformation as a regularized movement of data points along a displacement vector field \(F\), the identity block is mathematically convenient: a displacement vector is the difference between the final position \(\phi(\mathbf{y};\theta)\) (transformed point) and the initial position \(\mathbf{y}\) (data point), so that \(F(\mathbf{y})=\phi(\mathbf{y};\theta)-\mathbf{y}=\delta(\mathbf{y};\theta)\), and we only need to model \(\delta(\mathbf{y};\theta)\) directly.
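A minimal sketch of this identity block (the 50 hidden units match the experiments in section 3; the depth of \(\delta\) is an illustrative choice of ours):

```
import torch
import torch.nn as nn

class KNNResBlock(nn.Module):
    """Identity ResNet block phi(y) = y + delta(y), where delta is an MLP
    with LeakyReLU activations. The output layer is initialized near zero
    so that phi starts close to the identity map."""
    def __init__(self, dim: int, hidden: int = 50):
        super().__init__()
        self.delta = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.LeakyReLU(),
            nn.Linear(hidden, hidden),
            nn.LeakyReLU(),
            nn.Linear(hidden, dim),
        )
        # small-variance init of the last layer keeps delta(y) ~ 0 at the start
        nn.init.normal_(self.delta[-1].weight, std=1e-4)
        nn.init.zeros_(self.delta[-1].bias)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        return y + self.delta(y)
```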
### Orthogonal Jacobian preserves kNN graph
In this section, we show that the orthogonality of the Jacobian matrix evaluated at data points is a sufficient condition for preserving the kNN graph of the data. A vector-valued function \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) preserves the kNN graph of data points \(X\subset\mathbb{R}^{n}\) if, for every two points \(\mathbf{v}\) and \(\mathbf{w}\) that are in some small \(\epsilon\)-neighborhood of \(\mathbf{u}\), the following holds:
\[||\mathbf{u}-\mathbf{v}||_{2}^{2}<||\mathbf{u}-\mathbf{w}||_{2}^{2}\rightarrow||F(\mathbf{u})-F(\mathbf{v})||_{2}^{2}<||F(\mathbf{u})-F(\mathbf{w})||_{2}^{2}, \tag{12}\]
where \(||\cdot||_{2}^{2}\) is the squared Euclidean distance. Without loss of generality, we choose two points \(\mathbf{w}\), \(\mathbf{v}\) that lie in an \(\epsilon\)-neighborhood of the point \(\mathbf{u}\) and linearize the vector field \(F\) around the point \(\mathbf{u}\) such that:
\[F(\mathbf{x};\mathbf{u})\approx F(\mathbf{u})+\mathbf{J}_{\mathbf{u}}(\mathbf{x }-\mathbf{u}), \tag{13}\]
where \(\mathbf{J}_{\mathbf{u}}\) is the Jacobian matrix evaluated at point \(\mathbf{u}\).
The squared distance of \(\mathbf{u}\) and \(\mathbf{v}\) is:
\[||\mathbf{u}-\mathbf{v}||_{2}^{2}=(\mathbf{u}-\mathbf{v})^{\top}(\mathbf{u}-\mathbf{v})=\sum_{i=1}^{n}\left(\mathbf{u}_{i}-\mathbf{v}_{i}\right)^{2} \tag{14}\]
Similarly, the squared distance between \(F(\mathbf{u};\mathbf{u})\) and \(F(\mathbf{v};\mathbf{u})\) is computed as follows:
\[\begin{aligned}||F(\mathbf{u};\mathbf{u})-F(\mathbf{v};\mathbf{u})||_{2}^{2}&=(F(\mathbf{u};\mathbf{u})-F(\mathbf{v};\mathbf{u}))^{\top}(F(\mathbf{u};\mathbf{u})-F(\mathbf{v};\mathbf{u}))\\ &=(F(\mathbf{u})-F(\mathbf{u})-\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))^{\top}(F(\mathbf{u})-F(\mathbf{u})-\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))\\ &=(\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))^{\top}(\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))\\ &=(\mathbf{v}-\mathbf{u})^{\top}\mathbf{J}_{\mathbf{u}}^{\top}\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u})\\ &=(\mathbf{v}-\mathbf{u})^{\top}(\mathbf{v}-\mathbf{u})\end{aligned}\]
The last step follows from the orthogonality of \(\mathbf{J}_{\mathbf{u}}\), i.e., \(\mathbf{J}_{\mathbf{u}}^{\top}\mathbf{J}_{\mathbf{u}}=\mathbf{I}\).
### Jacobian Loss Via Finite Difference
Given a vector-valued function \(F:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) and a data batch \(\mathbf{X}\in\mathbb{R}^{m\times d}\), the Jacobian \(\mathbf{J}_{\mathbf{X}}\) of \(F\) at the points \(\mathbf{X}\) is an \(m\times d\times d\) tensor. It is possible to compute \(\mathbf{J}_{\mathbf{X}}\) analytically using autodifferentiation modules; however, such computation is highly inefficient, so we use a numerical approximation.
Given a \(d\)-dimensional vector \(\mathbf{x}=[x_{1},...,x_{d}]\), the partial first derivative of \(F\) with respect to \(x_{i}\) is:
\[\frac{\partial F}{\partial x_{i}}=\lim_{\epsilon\to 0}\frac{F(\mathbf{x}+ \epsilon e_{i})-F(\mathbf{x})}{\epsilon}, \tag{15}\]
where \(e_{i}\) is a standard basis vector (i.e. only the \(i\)-th component equals 1 and the rest are zero). This can be approximated numerically using a small \(\epsilon\). The Jacobian matrix \(\mathbf{J}_{\mathbf{x}}\) is simply \([\frac{\partial F}{\partial x_{1}},...,\frac{\partial F}{\partial x_{d}}]\). To ensure the orthogonality of the Jacobian at \(\mathbf{X}\), we minimize the following loss:
\[\mathcal{L}_{2}=\frac{1}{m}\sum_{\mathbf{x}\in\mathbf{X}}|\mathbf{J}_{\mathbf{ x}}^{\top}\mathbf{J}_{\mathbf{x}}-\mathbf{I}_{d}| \tag{16}\]
This process could be computed efficiently in a few lines of code as indicated in algorithm 1.
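A minimal sketch of this finite-difference computation, in the spirit of the referenced algorithm 1 (the helper name and the entrywise-mean reading of the norm \(|\cdot|\) are our assumptions):

```
import torch

def jacobian_orth_loss_fd(F, X: torch.Tensor, epsilon: float = 1e-3) -> torch.Tensor:
    """Finite-difference estimate of the Jacobian orthogonality loss (16).

    F: vector-valued map R^d -> R^d, applied row-wise to a batch.
    X: (m, d) batch of points at which the Jacobian is evaluated.
    """
    m, d = X.shape
    FX = F(X)
    cols = []
    for i in range(d):
        e = torch.zeros(d, device=X.device)
        e[i] = 1.0
        cols.append((F(X + epsilon * e) - FX) / epsilon)  # column dF/dx_i, shape (m, d)
    J = torch.stack(cols, dim=2)                          # (m, d, d) batch of Jacobians
    JtJ = J.transpose(1, 2) @ J
    I = torch.eye(d, device=X.device).expand(m, d, d)
    return (JtJ - I).abs().mean()
```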
### Training
The training process (algorithm 2) takes advantage of two sets of \(d\)-dimensional vectors (points), a reference point set \(\mathbf{R}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{n}\}\), and target point set \(\mathbf{T}=\{\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{m}\}\). First, we sample points from \(\mathbf{R}\) and \(\mathbf{T}\) and create two matrices \(\mathbf{X}\) and \(\mathbf{Y}\). We feed \(\mathbf{Y}\) to the model and obtain \(\hat{\mathbf{Y}}\). Under the GMM assumption, we compute the GMM posterior probability as a similarity matrix and estimate \(\mathcal{L}_{1}\) as the negative log-likelihood. For the Sinkhorn divergence approach, we compute equation (10). We use the SoftkNN operator to construct the kNN graph for both the input \(\mathbf{Y}\) and the output \(\hat{\mathbf{Y}}\) and compute \(\mathcal{L}_{2}\) as the mean squared error between the two. Finally, we use backpropagation by minimizing the loss \(\mathcal{L}=\mathcal{L}_{1}+\lambda\mathcal{L}_{2}\) until convergence.
### Stochastic Approximation of Orthogonal Jacobian Loss
Using finite differences to compute the Jacobian for low-dimensional point clouds is efficient; however, the computational cost increases linearly with the dimension of the data. Thus, an approximate estimator with constant computational cost is introduced.
Given a vector-valued function \(F\), and a sample \(\mathbf{x}\), we would like to minimize the following:
\[\mathcal{L}_{\mathbf{J}}(F)=|\mathbf{J}^{\top}\mathbf{J}\circ(1-\mathbf{I})| _{2}=\sum_{i\neq j}\frac{\partial F_{i}}{\partial x_{j}}\frac{\partial F_{j}} {\partial x_{i}} \tag{17}\]
Following [20, 23], \(\mathcal{L}_{\mathbf{J}}(F)\) can be approximated with Hutchinson's estimator:
\[\mathcal{L}_{\mathbf{J}}(F)\approx\texttt{Var}_{r}(r_{\epsilon}^{\top}(\mathbf{J}^{\top}\mathbf{J})r_{\epsilon})=\texttt{Var}_{r}((\mathbf{J}r_{\epsilon})^{\top}(\mathbf{J}r_{\epsilon})) \tag{18}\]
where \(r_{\epsilon}\) denotes a scaled Rademacher vector (each entry is either \(-\epsilon\) or \(+\epsilon\) with equal probability), \(\epsilon>0\) is a hyperparameter that controls the granularity of the first directional derivative estimate, and \(\texttt{Var}_{r}\) is the variance. It is worth noting that this does not guarantee orthonormality, only orthogonality. In practice, however, we find that such an estimator produces results comparable to the standard finite-difference method, and it can be efficiently implemented in PyTorch as shown in algorithm 3.
```
Input: point sets R and T, blurring factor σ, step size ε, regularisation λ, batch size b
Output: trained model φ
while (X, Y) ∈ (R, T) until convergence do    ▷ sample mini-batches of size b from R and T
    φ(Y) ← Y + δ(Y)
    if loss == "sinkhorn" then
        L1 ← S(X, φ(Y); σ²)
    else
        L1 ← MMD(X, φ(Y); σ²)
    J_Y[i, :] ← (δ(Y + ε e_i) − δ(Y)) / ε     ▷ finite-difference Jacobian
    L2 ← (1/m) Σ_y |J_y^T J_y − I_d|
    L ← L1 + λ L2
    Minimize(L)                                ▷ backpropagation step
```
**Algorithm 2** Training kNN-ResNet
### Parameters Selection
The proposed model has three main hyperparameters, namely \(\sigma\), \(\epsilon\), and \(\lambda\). In the case of Sinkhorn divergence, \(\sigma>0\) is the blur (interpolation) parameter between OT and MMD, with a default value of \(0.01\) for datasets that lie in the first quadrant of the unit hypercube (min-max normalized data). Decreasing \(\sigma\) has the effect of solving for an exact OT, which typically produces very accurate registration; however, this comes at the cost of slower convergence. In the cases where it is more advantageous to use MMD, \(\sigma\) represents the standard deviation of the Gaussian kernel. \(\epsilon>0\) represents the finite-difference step size and controls the radius of topology preservation around each point. It is worth noting that a large \(\epsilon\) value that covers all the data tends to produce globally isomorphic transformations. \(\lambda>0\) is simply a regularization parameter that prioritizes regularization over alignment and is typically less than \(0.01\). An additional hyperparameter \(k\) is introduced when using the stochastic approximation of Jacobian orthogonality for high-dimensional data. This hyperparameter determines the number of Rademacher vectors sampled to estimate the Jacobian orthogonality penalty. Generally, a larger \(k\) tends to produce a more accurate estimate; in practice, however, \(k=5\) seems to be sufficient for the datasets we experimented with.
```
import torch

def stochastic_orth_jacobian(G, z, k=5, epsilon=0.01):
    '''
    Input G: function to compute the Jacobian penalty for.
    Input z: (batch_size, d) input to G that the Jacobian is
        computed w.r.t.
    Input k: number of directions to sample (default 5).
    Input epsilon: scale of the Rademacher perturbations (default 0.01).
    Output: Hutchinson estimate of mean(|J_x^T J_x - I_d|).
    '''
    # r: Rademacher random vectors (entries are +/-1 with equal probability)
    r = torch.randint(0, 2, size=(k, *z.size()), device=z.device, dtype=z.dtype)
    r[r == 0] = -1
    vs = epsilon * r
    # sfd: stochastic finite differences of G along the sampled directions
    diffs = [G(z + v) - G(z) for v in vs]
    sfd = torch.stack(diffs) / epsilon
    loss = torch.var(sfd, dim=0).max()
    return loss
```
**Algorithm 3** PyTorch code for the Hutchinson approximation of the Jacobian off-diagonal penalty at data points \(z\).
## 3 Experiments
In this section, we provide experimental results on several datasets, namely, the Chui-Rangarajan synthesized dataset used in [31, 44, 45], and the single-cell RNA data used in [5]. The Chui-Rangarajan synthesized dataset comprises two shapes: a fish shape and a Chinese character shape. Each shape is subjected to 5 increasing levels of deformation using an RBF kernel, and each deformation level contains 100 different samples. The samples are generated using different RBF coefficients, which are sampled from a zero-mean normal distribution with standard deviation \(\sigma\), whereby increasing \(\sigma\) leads to generally larger deformations.
### Results on 2D Data
We use the root-mean-squared error (RMSE) between the transformed data \(\hat{\mathbf{y}}_{i}\) and the ground truth \(\mathbf{y}_{i}\) available from the Chui-Rangarajan synthesized dataset: \(error=\sqrt{\frac{1}{m}\sum_{i=1}^{m}{(\hat{\mathbf{y}}_{i}-\mathbf{y}_{i})^{2}}}\).
It is important to note that such ground-truth correspondence is absent during training time and is only available at test time. Figures 2 and 3 show the initial point set distributions and their corresponding aligned versions for the Chinese character and the fish examples respectively. We also report results for our kNN-Res, MM-Res [5], CPD [15], TRS-RPM [31], RPM-LNS [45], and GMMREG [32] over 5 deformation levels and 100 samples per level. Figures 4a and 4b show results for the tested models on the Chinese character and fish datasets respectively. We notice that after a certain level of non-rigid deformation, MM-Res is unable to converge. For our kNN-Res, we set \(\epsilon=0.005\), \(\lambda=10^{-5}\), \(\sigma=0.001\), and the number of hidden units to 50. We start with a relatively high learning rate (0.01) for the ADAM optimizer [46] and use a reduce-on-plateau scheduler with a reduction factor of 0.7 and a minimum learning rate of \(5\times 10^{-5}\). Qualitatively, the grid-warp representations in the second column of figures 2 and 3 indicate that our estimated transformations are, at least visually, "simple" and "coherent". Furthermore, to quantitatively assess neighborhood preservation, we use the Hamming loss \(\mathcal{L}_{H}\) to estimate the difference between the kNN graphs before and after transformation:
\[\mathcal{L}_{H}=\sum_{i=1}^{m}\sum_{j=1}^{m}I(\hat{p}_{i,j}^{k}\neq p_{i,j}^{k})\]
where \(p_{i,j}^{k}\) is the \((i,j)\) element of the kNN-graph matrix before transformation, \(\hat{p}_{i,j}^{k}\) is the corresponding element after transformation, and \(I\) is the indicator function. Figures 5b and 5a show the difference in neighborhood preservation between MM-Res and our kNN-Res for the Chinese character and fish datasets respectively, for three different levels of deformation.
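Given the adjacency matrices, this Hamming loss is a direct comparison of entries; a short sketch reusing the hypothetical knn_graph helper from section 1.1:

```
import torch

def hamming_loss(Y: torch.Tensor, Y_hat: torch.Tensor, k: int) -> int:
    """Number of disagreeing edges between the kNN graphs of Y and phi(Y)."""
    P = knn_graph(Y, k)          # kNN graph before transformation
    P_hat = knn_graph(Y_hat, k)  # kNN graph after transformation
    return (P != P_hat).sum().item()
```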
Moreover, despite the additional topology regularization term, our kNN-Res generally incurred smaller alignment errors and was able to converge under large deformation levels.
### Results on High-Dimensional CyTOF Data
Cytometry by time of flight (CyTOF) provides the means for the quantification of multiple cellular components; however, it is susceptible to the so-called batch effect problem, where systematic non-biological variations during
Figure 2: The Chinese character deformation example: the top row represents the original and deformed sets, the middle row represents the vector field, and the bottom row is the final alignment.
the measuring process result in a distributional shift of otherwise similar samples. This effect breaks the intra-comparability of samples, which is a crucial component of downstream tasks such as disease diagnosis, and typically requires the intervention of human experts to remove these batch effects. The CyTOF dataset used in our experiments was curated by the Yale New Haven Hospital. There are two patients, and two conditions were measured on two different days. All eight samples have 25 markers, each representing a separate dimension ('CD45', 'CD19', 'CD127', 'CD4', 'CD8a', 'CD20', 'CD25', 'CD278', 'TNFa', 'Tim3', 'CD27', 'CD14', 'CCR7', 'CD28', 'CD152', 'FOXP3', 'CD45RO', 'INFg', 'CD223', 'GzB', 'CD3', 'CD274', 'HLADR', 'PD1', 'CD11b'), and contain between 1800 and 5000 cells (points) per sample. The split is done such that samples collected on day 1 are the target and samples collected on day 2 are
Figure 5: The figures show Hamming loss for the following levels of deformations: (left) level 1, (mid) level 2, (right) level 3.
Figure 6: The blue and red dots represent 1st and 2nd principal components of reference (patient #2 on day 2) and the target samples (patient #2 on day 1) correspondingly.
the reference, resulting in four alignment experiments.
We follow the exact preprocessing steps described in [5]. To adjust the dynamic range of samples, a standard pre-processing step for CyTOF data is applying a log transformation [47]. Additionally, CyTOF data typically contains a large number of zero values (40%) due to instrumental instability; these are not considered biological signals. Thus, a denoising autoencoder (DAE) is used to remove these zero values [48]. The encoder of the DAE is comprised of two fully-connected layers with ReLU activation functions. The decoder (output) is a single linear layer with no activation function. All layers of the DAE have the same number of neurons as the dimensionality of the data. Next, each cell is multiplied by an independent Bernoulli random vector with probability 0.8, and the DAE is trained to reconstruct the original cell using an MSE loss. Furthermore, the DAE is optimized via RMSprop with weight decay regularization. The zero values in both reference and target are then removed using the trained DAE. Finally, each feature in both target and reference samples is independently standardized to have zero mean and unit variance. For our kNN-Res, we set \(\epsilon=0.05\), \(\lambda=0.1\), \(\sigma=0.04\), \(k=5\) for Hutchinson's estimator, and the number of hidden units to 50. We start with a relatively high learning rate (0.01) for the ADAM optimizer and use a reduce-on-plateau scheduler with a reduction factor of 0.7 and a minimum learning rate of \(5\times 10^{-5}\). Figure 6 shows the first two principal components of the data before and after alignment using two kNN-Res models with different values of \(\lambda\). Although the two samples appear less aligned when using a large \(\lambda\), this comes with the benefit of preserving the topology of the data, as shown by the learned transformation in figure 7, where points (cells) are moved in a coherent way.
This becomes clearer when looking at the marginals in figure 13 in the appendix. In this experiment, we trained five models with five different
Figure 7: Point set transformation (alignment) of patient #2 samples on day 1 and day 2, shown in the space of the 1st and 2nd principal components.
values of \(\lambda\) ranging from 0 to 1. It is clear that having a small \(\lambda\) favors alignment over faithfulness to the original distribution; however, increasing \(\lambda\) preserves the shape of the original data after transformation, which is desirable in biological settings. For the results of other experiments, see the Appendix.
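For completeness, a sketch of the denoising autoencoder used in the preprocessing step described above (the class name and the weight-decay value are illustrative assumptions):

```
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """DAE for imputing instrument-induced zero values in CyTOF data:
    two ReLU encoder layers and a single linear decoder, all of width d."""
    def __init__(self, d: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                                     nn.Linear(d, d), nn.ReLU())
        self.decoder = nn.Linear(d, d)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def dae_train_step(dae, x, optimizer):
    mask = torch.bernoulli(torch.full_like(x, 0.8))  # keep each entry w.p. 0.8
    loss = nn.functional.mse_loss(dae(x * mask), x)  # reconstruct the clean cell
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. optimizer = torch.optim.RMSprop(dae.parameters(), weight_decay=1e-5)
```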
## 4 Discussion
### Implications
Point-set registration methods are typically used for problems in computer vision to align point clouds produced either by stereo vision or by Light Detection and Ranging devices (e.g. a Velodyne scanner), for instance to stitch scenes and align objects. These datasets are usually 2- or 3-dimensional, and hence the methods have had limited exposure to high-dimensional datasets. Biological data, on the other hand, is usually high-dimensional, and hence methods from point-set registration do not directly translate to biological data. The proposed method in this study was tested on a 25-dimensional CyTOF dataset. However, in flow and mass cytometry, data can easily go beyond 50 dimensions (markers). For instance, methods that combine protein marker detection with unbiased transcriptome profiling of single cells provide an even higher number of markers. These methods show that multimodal data analysis can achieve a more detailed characterization of cellular phenotypes than transcriptome measurements alone [49, 50] and hence have recently gained significant traction. Unfortunately, these methods require more sophisticated batch normalization algorithms, since manual gating and normalization using marginal distributions become infeasible. It is worth mentioning that even though experts make sure that the marginal distributions are aligned, there is still no guarantee that the samples are aligned in the higher-dimensional space. Moreover, the alignment might result in nonlinear and non-smooth transformations that break biological relationships or introduce non-existing biological variability. The proposed method mitigates these issues and guarantees smooth transformations.
### Limitations
It is clear from the last step of the proof that the orthogonal Jacobian is too strong a condition for preserving the kNN graph:
\[(\mathbf{v}-\mathbf{u})^{\top}\mathbf{J}_{\mathbf{u}}^{\top}\mathbf{J}_{ \mathbf{u}}(\mathbf{v}-\mathbf{u})=(\mathbf{v}-\mathbf{u})^{\top}(\mathbf{v}- \mathbf{u}) \tag{19}\]
The objective is satisfied by preserving the inequality, not the equality. In other words, it is necessary and sufficient for \(\mathbf{J}\) to preserve the kNN graph that the following holds:
\[\mathbf{u}^{\top}\mathbf{u}\leq\mathbf{v}^{\top}\mathbf{v}\rightarrow\mathbf{ u}^{\top}\mathbf{J}^{\top}\mathbf{J}\mathbf{u}\leq\mathbf{v}^{\top}\mathbf{J} ^{\top}\mathbf{J}\mathbf{v} \tag{20}\]
or
\[\langle\mathbf{u},\mathbf{u}\rangle\leq\langle\mathbf{v},\mathbf{v}\rangle \rightarrow\langle\mathbf{J}\mathbf{u},\mathbf{J}\mathbf{u}\rangle\leq \langle\mathbf{J}\mathbf{v},\mathbf{J}\mathbf{v}\rangle \tag{21}\]
Having strict equality puts a limitation on the kinds of transformations the model is capable of learning. Furthermore, even if the deformation could theoretically be expressed, such a penalty makes convergence unnecessarily slower. On the empirical side, we only have a limited number of experiments to test the proposed method. More experimentation and ablation are required to better understand the limits of our current approach and to learn how it fares on a wider selection of real-world data such as RNA-Seq.
### Future Work
An important future direction is incorporating local or partial matching using modified alignment losses such as Gromov-Wasserstein distance. This should lead to a much more robust solution than global matching, especially in the case of outliers and missing objects. We also posit that solving point set registration under topological constraints such as preserving the kNN graph is naturally extendable to dimensionality reduction.
## 5 Conclusion
This paper presents a simple, scalable framework for point cloud registration. At its core, it consists of three components, namely (a) residual neural network with identity blocks as a parametrized displacement field, (b) Jacobian penalty as a topology-preserving loss, and (c) Sinkhorn Divergence as a sample-based, geometry-aware statistical distance. Additionally, by incorporating Hutchinson's estimator for the Jacobian loss, we show that our model is easily extensible to high dimensions with constant complexity. Furthermore, we offer both qualitative and quantitative analysis for synthetic and CyTOF datasets showing the flexibility and applicability of our model in multiple domains.
---

# Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks

Jiayuan Ye, Zhenyu Zhu, Fanghui Liu, Reza Shokri, Volkan Cevher
2023-10-31. http://arxiv.org/abs/2310.20579v1
###### Abstract
We analytically investigate how over-parameterization of models in randomized machine learning algorithms impacts the information leakage about their training data. Specifically, we prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets, and explore its dependence on the initialization, width, and depth of fully connected neural networks. We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training. Notably, for the special setting of a linearized network, our analysis indicates that the squared gradient norm (and therefore the escalation of privacy loss) is tied directly to the per-layer variance of the initialization distribution. By using this analysis, we demonstrate that the privacy bound improves with increasing depth under certain initializations (LeCun and Xavier), while it degrades with increasing depth under other initializations (He and NTK). Our work reveals a complex interplay between privacy and depth that depends on the chosen initialization distribution. We further prove excess empirical risk bounds under a fixed KL privacy budget, and show that the interplay between the privacy-utility trade-off and depth is similarly affected by the initialization.
## 1 Introduction
Deep neural networks (DNNs) in the over-parameterized regime (i.e., with more parameters than data) perform well in practice, but the model predictions can easily leak private information about the training data under inference attacks such as membership inference attacks [44] and reconstruction attacks [17; 7; 29]. This leakage can be mathematically measured by the extent to which the algorithm's output distribution changes if DNNs are trained on a neighboring dataset (differing only in one record), following the differential privacy (DP) framework [23].
To train a differentially private model, a typical approach is to randomly perturb each gradient update of the training process, e.g. stochastic gradient descent (SGD), which leads to the most widely applied DP training algorithm in the literature: DP-SGD [2]. To be specific, in each step DP-SGD employs gradient clipping and adds calibrated Gaussian noise, yielding a differential privacy guarantee that scales with the noise multiplier (i.e., the per-dimension Gaussian noise standard deviation divided by the clipping threshold) and the number of training epochs. However, this privacy bound [2] is overly general, as its gradient clipping artificially neglects the network properties (e.g., width and depth) and training schemes (e.g., initializations). Accordingly, a natural question arises in the community:
_How does the over-parameterization of neural networks (under different initializations) affect the privacy bound of the training algorithm over_ worst-case _datasets?_
To answer this question, we circumvent the difficulties of analyzing gradient clipping, and instead _algorithmically_ focus on analyzing privacy for the Langevin diffusion algorithm _without_ gradient clipping or a Lipschitz assumption on the loss function. 2 It avoids an artificial setting in DP-SGD [2] where a constant sensitivity constraint is enforced for each gradient update, which makes the privacy bound insensitive to the network over-parameterization. _Theoretically_, we prove that the KL privacy loss for Langevin diffusion scales with the expected gradient difference between training on any two worst-case neighboring datasets (Theorem 3.1). 3 By proving precise upper bounds on the expected \(\ell_{2}\)-norm of this gradient difference, we obtain KL privacy bounds for the fully connected neural network (Lemma 3.2) and its linearized variant (Corollary 4.2) that change with the network width, depth, and per-layer variance of the initialization distribution. We summarize our KL privacy bounds in Table 1 and highlight our key observations below.
Footnote 2: A key difference between this paper and existing privacy utility analysis of Langevin diffusion [26] is that we analyze in the absence of gradient clipping or Lipschitz assumption on loss function. Our results also readily extend to discretized noisy GD with constant step-size (as discussed in Appendix E).
Footnote 3: We focus on KL privacy loss because it is a more relaxed distinguishability notion than standard \((\varepsilon,\delta)\)-DP, and therefore could be upper bounded even without gradient clipping. Moreover, KL divergence enables upper bound for the advantage (relative success) of various inference attacks, as studied in recent works [39; 28].
* Width always worsens privacy under all the considered initialization schemes. Meanwhile, the interplay between network depth and privacy is much more complex and crucially depends on which initialization scheme is used and how long the training time is.
* Regarding the specific initialization schemes, under small per-layer variance in initialization (e.g. LeCun and Xavier), if the depth is large enough, our KL privacy bound for training a fully connected network (for a small amount of time) as well as a linearized network (for finite time) decays exponentially with increasing depth. To the best of our knowledge, this is the first time that an improvement of the privacy bound under over-parameterization has been observed.
We further perform numerical experiments (Section 5) on deep neural networks trained via noisy gradient descent to validate our privacy analyses. Finally, we analyze the privacy-utility trade-off for training linearized networks, and prove that the excess empirical risk bound (given any fixed KL privacy budget) scales with a lazy-training distance bound \(R\) (i.e., how close the initialization is to a minimizer of the empirical risk) and a gradient norm constant \(B\) throughout training (Corollary 6.4). By analyzing these two terms precisely, we prove that under certain initialization distributions (such as LeCun and Xavier), the privacy-utility trade-off strictly improves with increasing depth for linearized networks (Table 1). To the best of our knowledge, this is the first time that such a gain in privacy-utility trade-off due to over-parameterization (increasing depth) has been shown. Meanwhile, prior results only prove (nearly) dimension-independent privacy-utility trade-offs for such linear models [45; 32; 37]. Our improvement demonstrates the unique benefits of our algorithmic framework and privacy-utility analysis in understanding the effect of over-parameterization.
Table 1: Per-layer initialization variance \(\beta_{l}\), gradient norm constant \(B\) (7), approximate lazy-training distance \(R\) (9), and excess empirical risk under \(\varepsilon\)-KL privacy (Corollary 6.4) for the considered initialization schemes (LeCun [34], Xavier, He, and NTK); e.g., LeCun initialization has \(\beta_{l}=1/m_{l-1}\).
### Related Works
**Over-parameterization in DNNs and NTK.** Theoretical demonstrations of the benefits of over-parameterization in DNNs occur in global convergence [3; 21] and generalization [4; 16]. Under proper initialization, the training dynamics of over-parameterized DNNs can be described by a kernel function, termed the neural tangent kernel (NTK) [31], which has stimulated a series of analyses of DNNs. Accordingly, over-parameterization has been demonstrated to be beneficial/harmful for several topics in deep learning, e.g., robustness [15; 54] and covariate shift [50]. However, the relationship between over-parameterization and privacy (in the differential privacy framework) remains largely an unsolved problem, as the training dynamics typically change [14] after adding new components to the privacy-preserving learning algorithm (such as DP-SGD [2]) to enforce privacy constraints.
**Membership inference privacy risk under over-parameterization.** A recent line of works [47; 48] investigates how over-parameterization affects theoretical and empirical privacy in terms of membership inference advantage, and proves a novel trade-off between privacy and generalization error. This line of work is closest to our objective of investigating the interplay between privacy and over-parameterization. However, Tan et al. [47; 48] focus on proving upper bounds for an average-case privacy risk defined by the advantage (relative success) of membership inference attacks on models trained from training datasets randomly sampled from a population distribution. By contrast, our KL privacy bound is based on the strongest adversary model in the differential privacy definition, and holds under an arbitrary _worst-case_ pair of neighboring datasets differing only in one record. Our model setting (e.g., fully connected neural networks) is also quite different from that of Tan et al. [47; 48], and the employed analysis tools are accordingly different.
**Differentially private learning in high dimension.** Standard results for private empirical risk minimization [9; 46] and private stochastic convex optimization [11; 12; 5] prove that there is an unavoidable factor \(d\) in the empirical and population risk that depends on the model dimension. However, for unconstrained optimization, it is possible to avoid this dimension dependency when proving risk bounds for certain classes of problems (such as generalized linear models [45]). Recently, a growing line of works proves dimension-independent excess risk bounds for differentially private learning by utilizing the low-rank structure of data features [45] or gradient matrices [32; 37] during training. Several follow-up works [33; 13] further explore techniques to enforce the low-rank property (via random projection) and boost the privacy-utility trade-off. However, all these works study a general high-dimensional problem for private learning, rather than separating the study across different network choices such as width, depth, and initialization. Instead, our study focuses on the fully connected neural network and its linearized variant, which enables us to prove more precise privacy-utility trade-off bounds for these particular networks under over-parameterization.
## 2 Problem and Methodology
We consider the following standard multi-class supervised learning setting. Let \(\mathcal{D}=(\mathbf{z}_{1},\cdots,\mathbf{z}_{n})\) be an input dataset of size \(n\), where each data record \(\mathbf{z}_{i}=(\mathbf{x}_{i},\mathbf{y}_{i})\) contains a \(d\)-dimensional feature vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) and a label vector \(\mathbf{y}_{i}\in\mathcal{Y}=\{-1,1\}^{o}\) on \(o\) classes. We aim to learn a neural network output function \(\mathbf{f}_{\mathbf{W}}(\cdot):\mathcal{X}\rightarrow\mathcal{Y}\) parameterized by \(\mathbf{W}\) via empirical risk minimization (ERM)
\[\min_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathcal{D}):=\frac{1}{n}\sum_{i=1}^{n}\ell( \mathbf{f}_{\mathbf{W}}(\mathbf{x}_{i});\mathbf{y}_{i})\,, \tag{1}\]
where \(\ell(\mathbf{f}_{\mathbf{W}}(\mathbf{x}_{i});\mathbf{y}_{i})\) is a loss function that reflects the approximation quality of the model prediction \(\mathbf{f}_{\mathbf{W}}(\mathbf{x}_{i})\) compared to the ground-truth label \(\mathbf{y}_{i}\). For simplicity, throughout our analysis we employ the cross-entropy loss \(\ell(\mathbf{f}_{\mathbf{W}}(\mathbf{x});\mathbf{y})=-\langle\mathbf{y},\log\text{softmax}(\mathbf{f}_{\mathbf{W}}(\mathbf{x}))\rangle\) for the multi-class network with \(o\geq 2\), and \(\ell(\mathbf{f}_{\mathbf{W}}(\mathbf{x});\mathbf{y})=\log(1+\exp(-\mathbf{y}\mathbf{f}_{\mathbf{W}}(\mathbf{x})))\) for the single-output network with \(o=1\).
**Fully Connected Neural Networks.** We consider the \(L\)-layer, multi-output, fully connected, deep neural network (DNN) with ReLU activation. Denote the width of hidden layer \(l\) as \(m_{l}\) for \(l=1,\cdots,L-1\). For consistency, we also denote \(m_{0}=d\) and \(m_{L}=o\). The network output \(\mathbf{f}_{\mathbf{W}}(\mathbf{x})\coloneqq\mathbf{h}_{L}(\mathbf{x})\) is defined recursively as follows.
\[\mathbf{h}_{0}(\mathbf{x})=\mathbf{x};\quad\mathbf{h}_{l}(\mathbf{x})=\phi(\mathbf{W}_{l}\mathbf{x})\text{ for }l=1,\cdots,L-1;\quad\mathbf{h}_{L}(\mathbf{x})=\mathbf{W}_{L}\mathbf{h}_{L-1}(\mathbf{x})\,, \tag{2}\]
where \(\mathbf{h}_{l}(\mathbf{x})\) denotes the post-activation output at the \(l\)-th layer, and \(\{\mathbf{W}_{l}\in\mathbb{R}^{m_{l}\times m_{l-1}}:l=1,\ldots,L\}\) denotes the set of per-layer weight matrices of the DNN. For brevity, we denote the vector \(\mathbf{W}\coloneqq(\text{Vec}(\mathbf{W}_{1}),\ldots,\text{Vec}(\mathbf{W}_{L}))\in \mathbb{R}^{m_{1}\cdot d+m_{2}\cdot m_{1}+\cdots+o\cdot m_{L-1}}\), i.e., the concatenation of the vectorized weight matrices of all layers, as the model parameter.
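To make the recursion in Eq. (2) concrete, the following is a minimal NumPy sketch of the forward pass together with the Gaussian initialization introduced below in Eq. (5); the helper names and the `betas` argument are ours:

```python
import numpy as np

def init_params(widths, betas, rng):
    # widths = [m_0 = d, m_1, ..., m_L = o]; betas = per-layer variances of Eq. (5)
    return [rng.normal(0.0, np.sqrt(b), size=(m_out, m_in))
            for b, m_in, m_out in zip(betas, widths[:-1], widths[1:])]

def forward(params, x):
    # Eq. (2): h_0 = x; h_l = ReLU(W_l h_{l-1}) for l < L; linear output layer
    h = x
    for W in params[:-1]:
        h = np.maximum(W @ h, 0.0)
    return params[-1] @ h
```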
**Linearized Network.** We also analyze the following _linearized network_, which is used in prior works [35, 3, 41] as an important tool to (approximately and qualitatively) analyze the training dynamics of DNNs. Formally, the linearized network \(\mathbf{f}_{\mathbf{W}}^{lin,0}(\mathbf{x})\) is a first-order Taylor expansion of the fully connected ReLU network at the initialization parameter \(\mathbf{W}_{0}^{lin}\), as follows.
\[\mathbf{f}_{\mathbf{W}}^{lin,0}(\mathbf{x})\equiv\mathbf{f}_{\mathbf{W}_{0}^{lin}}(\mathbf{x})+\frac{ \partial\mathbf{f}_{\mathbf{W}}(\mathbf{x})}{\partial\mathbf{W}}\Big{|}_{\mathbf{W}=\mathbf{W}_{0}^{ lin}}\left(\mathbf{W}-\mathbf{W}_{0}^{lin}\right), \tag{3}\]
where \(\mathbf{f}_{\mathbf{W}_{0}^{lin}}(\mathbf{x})\) is the output function of the fully connected ReLU network (2) at initialization \(\mathbf{W}_{0}^{lin}\). We denote \(\mathcal{L}_{0}^{lin}(\mathbf{W};\mathcal{D})=\frac{1}{n}\sum_{i=1}^{n}\ell\left( \mathbf{f}_{\mathbf{W}_{0}^{lin}}(\mathbf{x}_{i})+\frac{\partial\mathbf{f}_{\mathbf{W}}(\mathbf{x})}{ \partial\mathbf{W}}|_{\mathbf{W}=\mathbf{W}_{0}^{lin}}(\mathbf{W}-\mathbf{W}_{0}^{lin});\mathbf{y}_{i}\right)\) as the empirical loss function for training linearized network, by plugging (3) into (1).
**Langevin Diffusion.** Regarding the optimization algorithm, we focus on the _Langevin diffusion_ algorithm [36] with per-dimension noise variance \(\sigma^{2}\). Note that we aim to _avoid gradient clipping_ while still proving KL privacy bounds. After initializing the model parameters \(\mathbf{W}_{0}\) at time zero, the model parameters \(\mathbf{W}_{t}\) at subsequent time \(t\) evolve according to the following stochastic differential equation.
\[\mathrm{d}\mathbf{W}_{t}=-\,\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D})\mathrm{d}t+ \sqrt{2\sigma^{2}}\mathrm{d}\mathbf{B}_{t}\,. \tag{4}\]
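In the experiments of Section 5, this continuous-time process is approximated by noisy gradient descent. A minimal Euler-Maruyama sketch of one step of Eq. (4), assuming a user-supplied `grad_fn` that returns \(\nabla\mathcal{L}(\mathbf{W};\mathcal{D})\), could look as follows:

```python
import numpy as np

def langevin_step(W, grad_fn, eta, sigma2, rng):
    # Euler-Maruyama discretization of Eq. (4) with step size eta:
    # W <- W - eta * grad(W) + N(0, 2 * sigma^2 * eta * I)
    noise = rng.normal(0.0, np.sqrt(2.0 * sigma2 * eta), size=W.shape)
    return W - eta * grad_fn(W) + noise
```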
**Initialization Distribution.** The initialization of the parameters \(\mathbf{W}_{0}\) crucially affects the convergence of Langevin diffusion, as observed in the prior literature [52, 25, 24]. In this work, we investigate the following general class of Gaussian initialization distributions with different (possibly depth-dependent) variances for the parameters in each layer. For any layer \(l=1,\cdots,L\), we have
\[[\mathbf{W}^{l}]_{ij}\sim\mathcal{N}(0,\beta_{l})\text{, for }(i,j)\in[m_{l}]\times[m_{l-1}]\,, \tag{5}\]
where \(\beta_{1},\cdots,\beta_{L}>0\) are the per-layer variances for the Gaussian initialization. By choosing different variances, we recover many common initialization schemes from the literature, as summarized in Table 1.
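For reference, the per-layer variances of the schemes discussed later in Section 4 can be written compactly; this is our own helper (with `widths = [d, m_1, ..., m_{L-1}, o]`), following the variance formulas stated there:

```python
def per_layer_variances(widths, scheme):
    # returns [beta_1, ..., beta_L] for Eq. (5); widths = [m_0 = d, ..., m_L = o]
    L = len(widths) - 1
    if scheme == "lecun":   # beta_l = 1/m_{l-1} (so beta_1 = 1/d)
        return [1.0 / widths[l] for l in range(L)]
    if scheme == "he":      # beta_l = 2/m_{l-1}
        return [2.0 / widths[l] for l in range(L)]
    if scheme == "xavier":  # beta_l = 2/(m_{l-1} + m_l)
        return [2.0 / (widths[l] + widths[l + 1]) for l in range(L)]
    if scheme == "ntk":     # beta_l = 2/m_l for l < L, and beta_L = 1/o
        return [2.0 / widths[l] for l in range(1, L)] + [1.0 / widths[L]]
    raise ValueError(f"unknown scheme: {scheme}")
```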
### Our objective and methodology
We aim to understand the relation between privacy, utility and over-parameterization (depth and width) for the Langevin diffusion algorithm (under different initialization distributions). For privacy analysis, we prove a KL privacy bound for running Langevin diffusion on any two _worst-case_ neighboring datasets. Below we first give the definition for neighboring datasets.
**Definition 2.1**.: We denote \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) as neighboring datasets if they are of same size and only differ in one record. For brevity, we also denote the differing records as \((\mathbf{x},\mathbf{y})\in\mathcal{D}\) and \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\in\mathcal{D}^{\prime}\).
**Assumption 2.2** (Bounded Data).: For simplicity, we assume bounded data, i.e., \(\|\mathbf{x}\|_{2}\leq\sqrt{d}\).
We now give the definition for KL privacy, which is a more relaxed, yet closely connected privacy notion to the standard \((\varepsilon,\delta)\) differential privacy [22], see Appendix A.2 for more discussions. KL privacy and its relaxed variants are commonly used in previous literature [8, 10, 53].
**Definition 2.3** (KL privacy).: A randomized algorithm \(\mathcal{A}\) satisfies \(\varepsilon\)-KL privacy if for any neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\), we have that the KL divergence \(\mathrm{KL}(\mathcal{A}(\mathcal{D})\|\mathcal{A}(\mathcal{D}^{\prime}))\leq\varepsilon\), where \(\mathcal{A}(\mathcal{D})\) denotes the algorithm's output distribution on dataset \(\mathcal{D}\).
In this paper, we prove a KL privacy upper bound for \(\max_{\mathcal{D},\mathcal{D}^{\prime}}\mathrm{KL}(\mathbf{W}_{[0:T]}\|\mathbf{W}_{[0:T]}^{\prime})\) when running Langevin diffusion on any _worst-case_ pair of neighboring datasets. For brevity, here (and in the remainder of the paper) we abuse notation and denote by \(\mathbf{W}_{[0:T]}\) and \(\mathbf{W}_{[0:T]}^{\prime}\) the distributions of the model parameter trajectories of the Langevin diffusion processes Eq. (4) run for time \(T\) on \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\), respectively.
For the utility analysis, we prove an upper bound for the excess empirical risk, given any fixed KL divergence privacy budget, for a single-output neural network under the following additional assumption (which is required only for the utility analysis, not for our privacy bound).
**Assumption 2.4** ([40; 20; 21]).: The training data \(\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\) are i.i.d. samples from a distribution \(P_{x}\) that satisfies \(\mathbb{E}[\mathbf{x}]=0,\|\mathbf{x}\|_{2}=\sqrt{d}\) for \(\mathbf{x}\sim P_{x}\), and with probability one for any \(i\neq j\), \(\mathbf{x}_{i}\nparallel\mathbf{x}_{j}\).
Our ultimate goal is to precisely understand how the excess empirical risk bounds (given a fixed KL privacy budget) are affected by increasing width and depth under different initialization distributions.
## 3 KL Privacy for Training Fully Connected ReLU Neural Networks
In this section, we perform a composition-based KL privacy analysis of Langevin diffusion for the fully connected ReLU network under the random Gaussian initialization of Eq. (5). More specifically, we prove an upper bound for the KL divergence between the distributions of output model parameters when running Langevin diffusion on an arbitrary pair of neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\).
Our first insight is that, via a Bayes-rule decomposition of the density function, KL privacy can be proved under a relaxed gradient sensitivity condition (one that can hold _without_ gradient clipping).
**Theorem 3.1** (KL composition under possibly unbounded gradient difference).: _The KL divergence between running Langevin diffusion (4) for DNN (2) on neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) satisfies_
\[\mathrm{KL}(\mathbf{W}_{[0:T]}\|\mathbf{W}_{[0:T]}^{\prime})=\frac{1}{2\sigma^{2}} \int_{0}^{T}\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D})- \nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D}^{\prime})\|_{2}^{2}\right]\mathrm{d}t\,. \tag{6}\]
Proof sketch.: We compute the partial derivative of KL divergence with regard to time \(t\), and then integrate it over \(t\in[0,T]\) to compute the KL divergence during training with time \(T\). For computing the limit of differentiation, we use Girsanov's theorem to compute the KL divergence between the trajectory of Langevin diffusion processes on \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\). The complete proof is in Appendix B.1.
Theorem 3.1 is an extension of the standard additivity [51] of KL divergence (also known as the chain rule [1]) from a finite sequence of distributions to continuous-time processes with (possibly) unbounded drift difference. The key extension is that Theorem 3.1 does not require bounded sensitivity between the drifts of Langevin diffusion on neighboring datasets. Instead, it only requires a finite second-order moment of the drift difference (in the \(\ell_{2}\)-norm sense) between neighboring datasets \(\mathcal{D},\mathcal{D}^{\prime}\), which can be proved via the following lemma. We show that this expectation of the squared gradient difference admits a closed-form upper bound for deep neural networks (under mild assumptions), for running Langevin diffusion (without gradient clipping) on any neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\).
**Lemma 3.2** (Drift Difference in Noisy Training).: _Let \(M_{T}\) be the subspace spanned by the gradients \(\{\nabla\ell(f_{\mathbf{W}_{t}}(\mathbf{x}_{i});\mathbf{y}_{i}):(\mathbf{x}_{i},\mathbf{y}_{i})\in \mathcal{D},t\in[0,T]\}_{i=1}^{n}\) throughout Langevin diffusion \((\mathbf{W}_{t})_{t\in[0,T]}\). Denote \(\|\cdot\|_{M_{T}}\) as the \(\ell_{2}\) norm of the projection of the input vector onto \(M_{T}\). Suppose that there exist constants \(c,\beta>0\) such that for any \(\mathbf{W}\), \(\mathbf{W}^{\prime}\) and \((\mathbf{x},\mathbf{y})\), we have \(\|\nabla\ell(f_{\mathbf{W}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}^{\prime}}(\mathbf{x});\mathbf{y})\|_{2}<\max\{c,\beta\|\mathbf{W}-\mathbf{W}^{\prime}\|_{M_{T}}\}\). Then running Langevin diffusion Eq. (4) with Gaussian initialization distribution (5) satisfies \(\varepsilon\)-KL privacy with \(\varepsilon=\frac{\max_{\mathcal{D},\mathcal{D}^{\prime}}\int_{0}^{T}\mathbb{ E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D})-\nabla\mathcal{L}(\mathbf{W}_{t}; \mathcal{D}^{\prime})\|_{2}^{2}\right]\mathrm{d}t}{2\sigma^{2}}\), where_

\[\int_{0}^{T}\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D})-\nabla\mathcal{L}(\mathbf{W}_{t};\mathcal{D}^{\prime})\|_{2}^{2}\right]\mathrm{d}t\leq\underbrace{\frac{2T}{n^{2}}\,\mathbb{E}\left[\|\nabla\ell(f_{\mathbf{W}_{0}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}_{0}}(\mathbf{x}^{\prime});\mathbf{y}^{\prime})\|_{2}^{2}\right]}_{\text{gradient difference at initialization}}\]

\[+\underbrace{\frac{2\beta^{2}}{n^{2}(2+\beta^{2})}\left(\frac{e^{(2+\beta^{2})T}-1}{2+\beta^{2}}-T\right)\cdot\left(\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{0};\mathcal{D})\|_{2}^{2}\right]+2\sigma^{2}\text{rank}(M_{T})+c^{2}\right)}_{\text{gradient difference fluctuation during training}}+\underbrace{\frac{2c^{2}T}{n^{2}}}_{\text{non-smoothness}}.\]
Proof sketch.: The key is to reduce the problem of upper bounding the gradient difference at any training time \(T\) to analyzing its subcomponents: \(\|\nabla\ell(f_{\mathbf{W}_{t}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}_{t}}(\mathbf{x}^{\prime});\mathbf{y}^{\prime})\|_{2}^{2}\leq\underbrace{2\left\|\nabla\ell(f_{\mathbf{W}_{0}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}_{0}}(\mathbf{x}^{\prime});\mathbf{y}^{\prime})\right\|_{2}^{2}}_{\text{gradient difference at initialization}}+2\beta^{2}\underbrace{\left\|\mathbf{W}_{t}-\mathbf{W}_{0}\right\|_{M_{T}}^{2}}_{\text{parameters' change after time }T}+2c^{2}\), where \((\mathbf{x},\mathbf{y})\) and \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) are the differing records between the neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\). This inequality follows from the Cauchy-Schwartz inequality. In this way, the second term in Lemma 3.2 uses the change of parameters
to bound the gradient difference between datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) at time \(T\), via the relaxed smoothness assumption of loss function (that is explained in Remark 3.5 in details). The complete proof is in Appendix B.2.
_Remark 3.3_ (Gradient difference at initialization).: The first term in our upper bound scales linearly with the difference between the gradients on neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) at initialization. Under different initialization schemes, this gradient difference exhibits a different dependency on the network depth and width, as we prove theoretically in Theorem 4.1.
_Remark 3.4_ (Gradient difference fluctuation during training).: The second term in Lemma 3.2 bounds the change of the gradient difference during training, and is proportional to the rank of the subspace \(M_{T}\) spanned by the gradients of all training data. Intuitively, this fluctuation arises because Langevin diffusion adds per-dimension noise with variance \(\sigma^{2}\), thus perturbing the training parameters away from the initialization at a scale of \(O(\sigma\sqrt{\text{rank}(M_{T})})\) in expected \(\ell_{2}\) distance.
_Remark 3.5_ (Relaxed smoothness of loss function).: The third term in Lemma 3.2 is due to the assumption \(\|\nabla\ell(f_{\mathbf{W}}(\mathbf{x});\mathbf{y})-\nabla\ell(f_{\mathbf{W}^{\prime}}(\mathbf{x});\mathbf{y})\|_{2}<\max\{c,\beta\|\mathbf{W}-\mathbf{W}^{\prime}\|_{M_{T}}\}\). This assumption is similar to smoothness of the loss function, but is more relaxed, as it allows non-smoothness at places where the gradient difference is bounded by \(c\). Therefore, this assumption is general enough to cover commonly used smooth and non-smooth activation functions, e.g., sigmoid and ReLU.
_Growth of KL privacy bound with increasing training time \(T\)._ The first and third terms in our upper bound Lemma 3.2 grow linearly with the training time \(T\), while the second term grows exponentially in \(T\). Consequently, for learning tasks that require a long training time to converge, the second term becomes the dominating term and the KL privacy bound suffers from exponential growth with the training time. Nevertheless, observe that for small \(T\to 0\), the second component in Lemma 3.2 contains a small factor \(\frac{e^{(2+\beta^{2})T}-1}{2+\beta^{2}}-T=o(T)\) by Taylor expansion. Therefore, for small training time, the second component is smaller than the first and third components in Lemma 3.2 (which scale linearly in \(T\)), and thus does not dominate the privacy bound. Intuitively, this phenomenon is related to lazy training [19]. In Section 5 and Figure 2, we also numerically validate that the second component does not have a large effect on the KL privacy loss for small training time.
_Dependence of KL privacy bound on network over-parameterization_. Under a fixed training time \(T\) and noise scale \(\sigma^{2}\), Lemma 3.2 shows that the KL divergence upper bound in Theorem 3.1 depends on the gradient difference and gradient norm at initialization, and on the rank of the gradient subspace \(\text{rank}(M_{T})\) throughout training. We now discuss how these two quantities change under increasing width and depth, and whether there are possibilities to improve them under over-parameterization.
1. The gradient norm at initialization crucially depends on how the per-layer variance in the Gaussian initialization distribution scales with the network width and depth. Therefore, it is possible to reduce the gradient difference at initialization (and thus improve the KL privacy bound) by using specific initialization schemes, as we later show in Section 4 and Section 5.
2. Regarding the rank of the gradient subspace \(\text{rank}(M_{T})\): when the gradients along the training trajectory span the whole optimization space, \(\text{rank}(M_{T})\) equals the dimension of the learning problem. Consequently, the gradient fluctuation upper bound (and thus the KL privacy bound) worsens with an increasing number of model parameters (over-parameterization) in the worst case. However, if the gradients are low-dimensional [45; 32; 43] or sparse [37], \(\text{rank}(M_{T})\) could be dimension-independent and thus enable a better bound for the gradient fluctuation (and the KL privacy bound). We leave this as an interesting open problem.
## 4 KL privacy bound for Linearized Network under over-parameterization
In this section, we focus on the training of linearized networks (3), which enables a refined analysis of the interplay between KL privacy and over-parameterization (increasing width and depth). Analyzing DNNs via linearization is a commonly used technique in both theory [19] and practice [43; 41]. We hope our analysis of the linearized network serves as an initial attempt that opens a door to theoretically understanding the relationship between over-parameterization and privacy.
To derive a composition-based KL privacy bound for training a linearized network, we apply Theorem 3.1, which requires an upper bound for the norm of the gradient difference between the training processes on neighboring datasets \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) at any time \(t\). Note that the empirical risk function for training linearized models enjoys convexity, and thus a relatively short training time suffices for convergence. In this case, intuitively, the gradient difference between neighboring datasets does not change much during training, which allows a tighter upper bound on the gradient difference norm for linearized networks (than Lemma 3.2).
In the following theorem, we prove that for a linearized network, the gradient difference throughout training has a uniform upper bound that only depends on the network width, depth and initialization.
**Theorem 4.1** (Gradient difference throughout training a linearized network).: _Under Assumption 2.2, taking expectation over the randomness of the random initialization and the Brownian motion, for any \(t\in[0,T]\), running Langevin diffusion on the linearized network in Eq. (3) satisfies_
\[\mathbb{E}\left[\|\nabla\mathcal{L}(\mathbf{W}_{t}^{lin};\mathcal{D})-\nabla\mathcal{L}(\mathbf{W}_{t}^{lin};\mathcal{D}^{\prime})\|_{2}^{2}\right]\leq\frac{4B}{n^{2}} \,,\text{ where }B\coloneqq d\cdot o\cdot\left(\prod_{i=1}^{L-1}\frac{\beta_{i}m_{i}}{2} \right)\sum_{l=1}^{L}\frac{\beta_{L}}{\beta_{l}}\,, \tag{7}\]
_where \(n\) is the training dataset size, and \(B\) is a constant that only depends on the data dimension \(d\), the number of classes \(o\), the network depth \(L\), the per-layer network width \(\{m_{i}\}_{i=1}^{L}\), and the per-layer variances \(\{\beta_{i}\}_{i=1}^{L}\) of the Gaussian initialization distribution._
Theorem 4.1 provides a precise analytical upper bound for the gradient difference during the training of a linearized network, by tracking the gradient distribution of a fully connected feed-forward ReLU network with Gaussian weight matrices. Our proof borrows some techniques from [3, 54] for computing the gradient distribution; we refer to Appendix C.1 and C.2 for the full proofs. By plugging Eq. (7) into Theorem 3.1, we obtain the following KL privacy bound for training a linearized network.
**Corollary 4.2** (KL privacy bound for training linearized network).: _Under Assumption 2.2 and neural networks (3) initialized by Gaussian distribution with per-layer variance \(\{\beta_{i}\}_{i=1}^{L}\), running Langevin diffusion for linearized network with time \(T\) on any neighboring datasets satisfies that_
\[\mathrm{KL}(\mathbf{W}_{[0:T]}^{lin}\|\mathbf{W}_{[0:T]}^{\prime\,lin})\leq\frac{2BT}{n^{2}\sigma^{2}}\,, \tag{8}\]
_where \(B\) is the constant that specifies the gradient norm upper bound, given by Eq. (7)._
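To see how the bound scales with width and depth, one can evaluate the constant \(B\) of Eq. (7) and the bound of Eq. (8) directly; a sketch (function names are ours, and the variances can be produced by the hypothetical `per_layer_variances` helper sketched in Section 2):

```python
import math

def gradient_bound_B(widths, betas):
    # Eq. (7): B = d * o * (prod_{i=1}^{L-1} beta_i * m_i / 2) * sum_l beta_L / beta_l
    d, o, L = widths[0], widths[-1], len(betas)
    prod = math.prod(betas[i - 1] * widths[i] / 2.0 for i in range(1, L))
    return d * o * prod * sum(betas[-1] / b for b in betas)

def kl_privacy_bound(widths, betas, T, n, sigma2):
    # Corollary 4.2 / Eq. (8): KL <= 2 * B * T / (n^2 * sigma^2)
    return 2.0 * gradient_bound_B(widths, betas) * T / (n ** 2 * sigma2)
```

For instance, under LeCun variances the sum \(\sum_{l}\beta_{L}/\beta_{l}\) grows with width and depth while the product factor decays geometrically in the depth, which is exactly the trade-off elaborated below.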
**Over-parameterization affects privacy differently under different initializations.** Corollary 4.2 and Theorem 4.1 show that the role of over-parameterization in our KL privacy bound crucially depends on how the per-layer Gaussian initialization variance \(\beta_{i}\) scales with the per-layer network width \(m_{i}\) and the depth \(L\). We summarize our KL privacy bounds for the linearized network under different widths, depths and initialization schemes in Table 1, and elaborate on the comparison below.
**(1) LeCun initialization** uses a small, width-independent variance for initializing the first layer, \(\beta_{1}=\frac{1}{d}\) (where \(d\) is the number of input features), and width-dependent variances \(\beta_{2}=\cdots=\beta_{L}=\frac{1}{m}\) for initializing all subsequent layers. Therefore, the second term \(\sum_{l=1}^{L}\frac{\beta_{L}}{\beta_{l}}\) in the constant \(B\) of Eq. (7) increases linearly with the width \(m\) and depth \(L\). However, since \(\frac{m_{l}\cdot\beta_{l}}{2}<1\) for all \(l=2,\cdots,L\), the first product term \(\prod_{l=1}^{L-1}\frac{\beta_{l}m_{l}}{2}\) in the constant \(B\) decays with increasing depth. Combining the two terms, we prove that the KL privacy bound worsens with increasing width, but improves with increasing depth (as long as the depth is large enough). Similarly, under **Xavier initialization** \(\beta_{l}=\frac{2}{m_{l-1}+m_{l}}\), we prove that the KL privacy bound (specifically the constant \(B\) in (7)) improves with increasing depth as long as the depth is large enough.
**(2) NTK and He initializations** use large per-layer variance \(\beta_{l}=\begin{cases}\frac{2}{m_{l}}&l=1,\cdots,L-1\\ \frac{1}{o}&l=L\end{cases}\) (for NTK) and \(\beta_{l}=\frac{2}{m_{l-1}}\) (for He). Consequently, the gradient difference under NTK or He initialization is significantly larger than that under LeCun initialization. Specifically, the gradient norm constant \(B\) in Eq. (7) grows linearly with the width \(m\) and the depth \(L\) under He and NTK initializations, thus indicating a worsening of KL privacy bound under increasing width and depth.
## 5 Numerical validation of our KL privacy bounds
To understand the relation between privacy and over-parameterization in _practical_ DNNs training (and to validate our KL privacy bounds Lemma 3.2 and Corollary 4.2), we perform experiments for
DNN training via noisy GD to numerically estimate the KL privacy loss. We will show that if the total training time is small, it is indeed possible to obtain numerical KL privacy estimates that do not grow with the total number of parameters (under carefully chosen initialization distributions).
_Numerical estimation procedure_. Theorem 3.1 proves that the exact KL privacy loss scales with the expectation of the squared gradient difference norm during training, which can be estimated by the empirical average across training runs. For the training dataset \(\mathcal{D}\), we consider all 'car' and 'plane' images of CIFAR-10. For the neighboring dataset, we consider all possible \(\mathcal{D}^{\prime}\) that remove one record from \(\mathcal{D}\) or add one test record to \(\mathcal{D}\), i.e., the standard "add-or-remove-one" neighboring notion [2]. We run noisy gradient descent with constant step-size \(0.01\) for \(50\) epochs on both datasets.
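A sketch of this estimator (our own naming; `grad_fn(W, data)` is assumed to return the full-batch gradient of Eq. (1)) accumulates the Riemann sum of Eq. (6) along one noisy-GD run:

```python
import numpy as np

def kl_privacy_estimate(W0, grad_fn, D, D_prime, eta, sigma2, n_steps, rng):
    # Theorem 3.1: KL = (1 / (2 sigma^2)) * int_0^T E||grad_D - grad_D'||^2 dt,
    # approximated by the Riemann sum eta * sum_t ||.||^2 / (2 sigma^2); average
    # over several runs and maximize over neighboring D' to report the estimate.
    W, kl = W0.copy(), 0.0
    for _ in range(n_steps):
        g, g_prime = grad_fn(W, D), grad_fn(W, D_prime)
        kl += eta * float(np.sum((g - g_prime) ** 2)) / (2.0 * sigma2)
        # advance the run on D with the noisy-GD update of Eq. (4)
        W = W - eta * g + rng.normal(0.0, np.sqrt(2.0 * sigma2 * eta), size=W.shape)
    return kl
```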
_Numerically validate the growth of KL privacy loss with regard to training time_. Figure 1 shows the numerical KL privacy loss under different initializations, for fully connected networks with width \(1024\) and depth \(10\). We observe that the KL privacy loss grows linearly at the beginning of training (\(<10\) epochs), which validates the first and third terms in the KL privacy bound of Lemma 3.2. Moreover, the KL privacy loss under LeCun and Xavier initialization is close to zero at the beginning of training (\(<10\) epochs). This shows that LeCun and Xavier initializations induce small gradient norms at small training time, which is consistent with Theorem 4.1. However, when the number of epochs is large, the numerical KL privacy loss grows faster than linearly under all initializations, thus validating the second term in Lemma 3.2.
Figure 1: Numerically estimated KL privacy loss for noisy GD with constant step-size \(0.001\) on a deep neural network with width \(1024\) and depth \(10\). We report the mean and standard deviation across \(6\) training runs, taking the worst case over all neighboring datasets. The numerical KL privacy loss grows with the number of training epochs under all initializations. The growth rate is close to linear at the beginning of training (epochs \(<10\)) and faster than linear for epochs \(\geq 10\).

Figure 2: Numerically estimated KL privacy loss for noisy GD with constant step-size on fully connected ReLU networks with different widths, depths and initializations. We report the mean and standard deviation across \(6\) training runs, taking the worst case over all neighboring datasets. Under increasing width, the KL privacy loss always grows under all evaluated initializations. Under increasing depth, at the beginning of training (20 epochs), the KL privacy loss worsens with depth under He initialization, but first worsens with depth (\(\leq 8\)) and then improves with depth (\(\geq 8\)) under Xavier and LeCun initializations. At later phases of training (50 epochs), KL privacy worsens (increases) with depth under all evaluated initializations.

_Numerically validate the dependency of KL privacy loss on network width, depth and initializations_. Figure 2 shows the numerical KL privacy loss under different network depths, widths and initializations, for a fixed training time. In Figure 2(c), we observe that increasing width and training time always increases the KL privacy loss. This is consistent with Theorem 4.1, which shows that increasing width worsens the gradient norm at initialization (given fixed depth), thus harming the KL privacy bound of Lemma 3.2 at the beginning of training. We also observe that the relationship between KL privacy and network depth depends on the initialization distribution and the training time. Specifically, in Figure 2(a), when the training time is small (20 epochs), for LeCun and Xavier initializations the numerical KL privacy loss improves with increasing depth once the depth exceeds \(8\). Meanwhile, when the training time is large (50 epochs), as in Figure 2(b), the KL privacy loss worsens with increasing depth under all initializations. This shows that for small training time, the choice of initialization distribution affects the dependency of the KL privacy loss on increasing depth, thus validating Lemma 3.2 and Theorem 4.1.
## 6 Utility guarantees for Training Linearized Network
Our privacy analysis suggests that training linearized networks under certain initialization schemes (such as LeCun initialization) allows for significantly better privacy bounds under over-parameterization by increasing depth. In this section, we further prove utility bounds for Langevin diffusion under these initialization schemes and investigate the effect of over-parameterization on the privacy utility trade-off. In other words, we aim to understand whether there is any utility degradation when training linearized networks with the more privacy-preserving initialization schemes.
**Convergence of training linearized networks.** We now prove convergence of the excess empirical risk when training linearized networks via Langevin diffusion. This is a well-studied problem in the literature on noisy gradient descent. We extend the convergence theorem to continuous-time Langevin diffusion below and investigate the factors that affect convergence under over-parameterization. The proof is deferred to Appendix D.1.
**Lemma 6.1** (Extension of [42, Theorem 2] and [45, Theorem 3.1]).: _Let \(\mathcal{L}_{0}^{lin}(\mathbf{W};\mathcal{D})\) be the empirical risk function of a linearized network in Eq. (3) expanded at initialization vector \(\mathbf{W}_{0}^{lin}\). Let \(\mathbf{W}_{0}^{*}\) be an \(\alpha\)-near-optimal solution for the ERM problem such that \(\mathcal{L}_{0}^{lin}(\mathbf{W}_{0}^{*};\mathcal{D})-\min_{\mathbf{W}}\mathcal{L}_{0 }^{lin}(\mathbf{W};\mathcal{D})\leq\alpha\). Let \(\mathcal{D}=\{\mathbf{x}_{i}\}_{i=1}^{n}\) be an arbitrary training dataset of size \(n\), and denote \(M_{0}=\left(\nabla f_{\mathbf{W}_{0}^{lin}}(\mathbf{x}_{1}),\cdots,\nabla f_{\mathbf{W}_{0 }^{lin}}(\mathbf{x}_{n})\right)^{\top}\) as the NTK feature matrix at initialization. Then running Langevin diffusion (4) on \(\mathcal{L}_{0}^{lin}(\mathbf{W})\) with time \(T\) and initialization vector \(\mathbf{W}_{0}^{lin}\) satisfies_
\[\mathbb{E}[\mathcal{L}_{0}^{lin}(\bar{\mathbf{W}}_{T}^{lin})]-\min_{\mathbf{W}} \mathcal{L}_{0}^{lin}(\mathbf{W};\mathcal{D})\leq\alpha+\frac{R}{2T}+\frac{1}{2} \sigma^{2}\text{rank}(M_{0})\,,\]

_where the expectation is over the Brownian motion \(B_{T}\) in the Langevin diffusion Eq. (4), \(\bar{\mathbf{W}}_{T}^{lin}=\frac{1}{T}\int_{0}^{T}\mathbf{W}_{t}^{lin}\mathrm{d}t\) is the average of all iterates, and \(R=\|\mathbf{W}_{0}^{lin}-\mathbf{W}_{0}^{*}\|_{M_{0}}^{2}\) is the gap between the initialization parameters \(\mathbf{W}_{0}^{lin}\) and the solution \(\mathbf{W}_{0}^{*}\)._
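As a sanity check of the trade-off in this bound, a small helper (ours, with inputs named after the quantities in Lemma 6.1) shows that a longer time \(T\) shrinks the optimization term \(R/(2T)\) while the injected-noise term \(\sigma^{2}\,\text{rank}(M_{0})/2\) is unaffected by \(T\):

```python
def excess_risk_bound(alpha, R, T, sigma2, rank_M0):
    # Lemma 6.1: alpha + R / (2T) + (sigma^2 / 2) * rank(M_0)
    return alpha + R / (2.0 * T) + 0.5 * sigma2 * rank_M0
```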
Remark 6.2.: The excess empirical risk bound in Lemma 6.1 is smaller if the data is low-rank (e.g., image data), since then \(\text{rank}(M_{0})\) is small. This is consistent with the prior dimension-independent private learning literature [32, 33, 37] and shows the benefit of low-dimensional gradients for private learning.
Lemma 6.1 highlights that the excess empirical risk scales with the gap \(R\) between initialization and solution (denoted the lazy training distance), the rank of the gradient subspace, and the constant \(B\) that upper bounds the expected gradient norm during training. Specifically, the smaller the lazy training distance \(R\), the better the excess risk bound for fixed training time \(T\) and noise variance \(\sigma^{2}\). We have discussed how over-parameterization affects the gradient norm constant \(B\) and the gradient subspace rank \(\text{rank}(M_{0})\) in Section 3. Therefore, it only remains to investigate how the lazy training distance \(R\) changes with the network width, depth, and initialization, as follows.
**Lazy training distance \(R\) decreases with model over-parameterization.** It is widely observed in the literature [19, 55, 38] that, under appropriate choices of initialization, gradient descent on fully connected neural networks falls into a lazy training regime. That is, with high probability, there exists a (nearly) optimal solution of the ERM problem that is close to the initialization parameters in \(\ell_{2}\) distance. Moreover, this lazy training distance \(R\) is closely related to the smallest eigenvalue of the NTK matrix, and generally decreases as the model becomes increasingly over-parameterized. In the following lemma, we compute a near-optimal solution via the pseudo-inverse of the NTK matrix, and prove that it has a small distance to the initialization parameters via existing lower bounds on the smallest eigenvalue of the NTK matrix [40].
**Lemma 6.3** (Bounding lazy training distance via smallest eigenvalue of the NTK matrix).: _Under Assumption 2.4 and single-output linearized network Eq. (3) with \(o=1\), assume that the per-layer network widths \(m_{0},\cdots,m_{L}=\tilde{\Omega}(n)\) are large. Let \(\mathcal{L}_{0}^{lin}(\mathbf{W})\) be the empirical risk Eq. (1) for
linearized network expanded at initialization vector \(\mathbf{W}_{0}^{lin}\). Then for any \(\mathbf{W}_{0}^{lin}\), there exists a corresponding solution \(\mathbf{W}_{0}^{\frac{1}{n^{2}}}\), s.t. \(\mathcal{L}_{0}^{lin}(\mathbf{W}_{0}^{\frac{1}{n^{2}}})-\min_{\mathbf{W}}\mathcal{L}_{0 }^{lin}(\mathbf{W};\mathcal{D})\leq\frac{1}{n^{2}}\), \(\text{rank}(M_{0})=n\) and_
\[R\leq\tilde{\mathcal{O}}\left(\max\left\{\frac{1}{d\beta_{L}\left(\prod_{i=1} ^{L-1}\beta_{i}m_{i}\right)},1\right\}\frac{n}{\sum_{l=1}^{L}\beta_{l}^{-1}} \right)\,, \tag{9}\]
_with high probability over training data sampling and random initialization Eq. (5), where \(\tilde{\mathcal{O}}\) ignores logarithmic factors with regard to \(n\), \(m\), \(L\), and tail probability \(\delta\)._
The full proof is deferred to Appendix D.2. Using Lemma 6.3, we summarize the bounds for \(R\) under different initializations in Table 1. We observe that the lazy training distance \(R\) decreases with increasing width and depth under LeCun, He and NTK initializations, while under Xavier initialization \(R\) decreases only with increasing depth.
_Privacy & excess empirical risk trade-offs for Langevin diffusion under linearized networks_. We now use the lazy training distance \(R\) to prove an empirical risk bound, and combine it with our KL privacy bound from Section 4 to show the privacy utility trade-off under over-parameterization.
**Corollary 6.4** (Privacy utility trade-off for linearized network).: _Assume that all conditions in Lemma 6.3 holds. Let \(B\) be the gradient norm constant in Eq. (7), and let \(R\) be the lazy training distance bound in Lemma 6.3. Then for \(\sigma^{2}=\frac{2BT}{\varepsilon n^{2}}\) and \(T=\sqrt{\frac{\varepsilon nR}{2B}}\), releasing all iterates of Langevin diffusion with time \(T\) satisfies \(\varepsilon\)-KL privacy, and has empirical excess risk upper bounded by_
\[\mathbb{E}[\mathcal{L}_{0}^{lin}(\bar{\mathbf{W}}_{T}^{lin})] -\min_{\mathbf{W}}\mathcal{L}_{0}^{lin}(\mathbf{W};\mathcal{D})\leq \tilde{\mathcal{O}}\left(\frac{1}{n^{2}}+\sqrt{\frac{BR}{\varepsilon n}}\right) \tag{10}\] \[=\tilde{\mathcal{O}}\left(\frac{1}{n^{2}}+\sqrt{\frac{\max\{1,d \beta_{L}\prod_{l=1}^{L-1}\beta_{l}m_{l}\}}{2^{L-1}\varepsilon}}\right) \tag{11}\]
_with high probability over random initialization Eq. (5), where the expectation is over Brownian motion \(B_{T}\) in Langevin diffusion, and \(\tilde{O}\) ignores logarithmic factors with regard to width \(m\), depth \(L\), number of training data \(n\) and tail probability \(\delta\)._
See Appendix D.3 for the full proof. Corollary 6.4 proves that the excess empirical risk worsens under a stronger privacy constraint, i.e., a smaller privacy budget \(\varepsilon\), thus exhibiting a trade-off between privacy and utility. However, the excess empirical risk also scales with the lazy training distance \(R\) and the gradient norm constant \(B\). These constants depend on the network width, depth and initialization distribution, and we prove privacy utility trade-offs for training linearized networks under commonly used initialization distributions, as summarized in Table 1.
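As an illustration, the calibration of Corollary 6.4 can be computed directly from \(B\), \(R\), \(\varepsilon\) and \(n\) (a sketch with our own function name):

```python
import math

def calibrate_langevin(B, R, eps, n):
    # Corollary 6.4: T = sqrt(eps * n * R / (2B)) and sigma^2 = 2 B T / (eps n^2)
    # give eps-KL privacy with excess risk O(1/n^2 + sqrt(B * R / (eps * n))).
    T = math.sqrt(eps * n * R / (2.0 * B))
    sigma2 = 2.0 * B * T / (eps * n ** 2)
    return T, sigma2
```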
We would like to highlight that our privacy utility trade-off bound under LeCun and Xavier initialization strictly improves with increasing depth as long as the data satisfies Assumption 2.4 and the hidden-layer width is large enough. To the best of our knowledge, this is the first time that a strictly improving privacy utility trade-off under over-parameterization has been shown in the literature. This demonstrates the benefit of precisely bounding the gradient norm (Appendix C.1) in our privacy and utility analysis.
## 7 Conclusion
We prove new KL privacy bounds for training fully connected ReLU networks (and their linearized variants) using the Langevin diffusion algorithm, and investigate how privacy is affected by the network width, depth and initialization. Our results suggest a complex interplay between privacy and over-parameterization (width and depth) that crucially relies on the initialization distribution and on how much the gradient fluctuates during training. Moreover, for a linearized variant of the fully connected network, we prove KL privacy bounds that improve with increasing depth under certain initialization distributions (such as LeCun and Xavier). We further prove excess empirical risk bounds for linearized networks under KL privacy, which similarly improve as depth increases under LeCun and Xavier initializations. This shows the gain of our new privacy analysis in capturing the effect of over-parameterization. We leave it as an important open problem whether our privacy utility trade-off results for linearized networks can be generalized to deep neural networks.
## Acknowledgments and Disclosure of Funding
The authors would like to thank Yaxi Hu and anonymous reviewers for helpful discussions on drafts of this paper. This work was supported by Hasler Foundation Program: Hasler Responsible AI (project number 21043), and the Swiss National Science Foundation (SNSF) under grant number 200021_205011, Google PDPO faculty research award, Intel within the www.private-ai.org center, Meta faculty research award, the NUS Early Career Research Award (NUS ECRA award number NUS ECRA FY19 P16), and the National Research Foundation, Singapore under its Strategic Capability Research Centres Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
|
2306.17396 | Koopman operator learning using invertible neural networks | In Koopman operator theory, a finite-dimensional nonlinear system is
transformed into an infinite but linear system using a set of observable
functions. However, manually selecting observable functions that span the
invariant subspace of the Koopman operator based on prior knowledge is
inefficient and challenging, particularly when little or no information is
available about the underlying systems. Furthermore, current methodologies tend
to disregard the importance of the invertibility of observable functions, which
leads to inaccurate results. To address these challenges, we propose the
so-called FlowDMD, aka Flow-based Dynamic Mode Decomposition, that utilizes the
Coupling Flow Invertible Neural Network (CF-INN) framework. FlowDMD leverages
the intrinsically invertible characteristics of the CF-INN to learn the
invariant subspaces of the Koopman operator and accurately reconstruct state
variables. Numerical experiments demonstrate the superior performance of our
algorithm compared to state-of-the-art methodologies. | Yuhuang Meng, Jianguo Huang, Yue Qiu | 2023-06-30T04:26:46Z | http://arxiv.org/abs/2306.17396v2 | # Physics-informed invertible neural network for the Koopman operator learning
###### Abstract
In Koopman operator theory, a finite-dimensional nonlinear system is transformed into an infinite but linear system using a set of observable functions. However, manually selecting observable functions that span the invariant subspace of the Koopman operator based on prior knowledge is inefficient and challenging, particularly when little or no information is available about the underlying systems. Furthermore, current methodologies tend to disregard the importance of the invertibility of observable functions, which leads to inaccurate results. To address these challenges, we propose the so-called FlowDMD, a Flow-based Dynamic Mode Decomposition that utilizes the Coupling Flow Invertible Neural Network (CF-INN) framework. FlowDMD leverages the intrinsically invertible characteristics of the CF-INN to learn the invariant subspaces of the Koopman operator and accurately reconstruct state variables. Numerical experiments demonstrate the superior performance of our algorithm compared to state-of-the-art methodologies.
keywords: Koopman operator, Generative models, Invertible neural networks
## 1 Introduction
Nonlinear dynamic systems are widely prevalent in both theory and engineering applications. Since the governing equations are generally unknown, it can be challenging to study such systems directly from first principles. Fortunately, data about the systems of interest may be available from experiments or observations; instead, one can seek to understand the behavior of a nonlinear system through data-driven approaches [1; 2; 3; 4; 5].
The Koopman operator [6], which embeds a nonlinear system of interest into an infinite-dimensional linear space through observable functions, has attracted much attention. The Koopman operator acts on an infinite-dimensional Hilbert space and aims to capture a full representation of the nonlinear system. Dynamic mode decomposition (DMD) computes the spectral decomposition of the Koopman operator numerically by extracting dynamic information from the collected data. Concretely, DMD devises a procedure to extract the spectral information directly from a data sequence without an explicit formulation of the Koopman operator, which is efficient for handling high-dimensional data [7]. Variants of DMD have been proposed to address challenges in different scenarios [8; 9; 10; 11; 12; 13; 14; 15].
The selection of observable functions plays an essential role in the DMD algorithm. Exact DMD [8] exploits the identity mapping as the observables. This amounts to approximating a nonlinear system with a linear one from the given data [16], which can yield inaccurate or even completely mistaken outcomes. Furthermore, while the short-term predictions of Exact DMD may be acceptable in some cases, its long-term predictions are often unreliable. Typically, prior knowledge is required to select observable functions that span an invariant subspace of the Koopman operator, but such an invariant subspace is not readily available. To overcome the limitations of the Exact DMD algorithm and capture the full features of the nonlinear system, several data-driven selection strategies for observable functions have been proposed. Extended DMD (EDMD) [17] lifts the state variables from the original space into a higher-dimensional space using dictionary functions. The accuracy and rate of convergence of EDMD depend on the choice of the dictionary functions; therefore, EDMD needs as many dictionary functions as possible. This implies that the set of dictionary functions (nonlinear transformations) must be sufficiently rich, which results in an enormous computational cost. Kernel-based DMD (KDMD) [18] differs from EDMD in that it utilizes the kernel trick to exploit an implicit representation of the dictionary functions, whereas EDMD uses an explicit one. Nonetheless, both EDMD and KDMD are prone to overfitting [19], which leads to large generalization errors. How to efficiently choose observable functions that span an invariant subspace of the Koopman operator thus becomes a significant challenge.
In contrast to EDMD and KDMD, the observable functions can be represented by neural networks. Dictionary learning [20] couples EDMD with a set of trainable dictionary functions, where the dictionary functions are represented by a fully connected neural network and an untrainable component. Fixing part of the dictionary functions facilitates the reconstruction of the state variables; however, this setting implicitly assumes that the linear term lies in the invariant subspace of the Koopman operator. Yeung et al. [21] select low-dimensional dictionary functions more efficiently using deep neural networks.
Autoencoder (AE) neural networks have been widely applied to learn the optimal observable functions and reconstruction functions in Koopman embedding [19; 22; 23; 24; 25; 26]. Concretely, the invariant subspace of the Koopman operator and reconstruction functions are represented by the encoder and decoder network in AE, respectively. Lusch et al. [23] utilize neural networks to identify the Koopman eigenfunctions and introduced an auxiliary network to cope with the dynamic systems with continuous spectrum. Azencot et al. [24] propose the Consistent Koopman AE model that combines the forward-backward DMD method [27] with the AE model. This approach extracts the latent representation of high-dimensional non-linear data and eliminates the effect of noise in the data simultaneously. Pan and Duraisamy [25] parameterize the structure of the transition matrix in linear space and construct an AE model to learn the residual of the DMD. Li and Jiang [26] utilize deep learning and the Koopman operator to model the nonlinear multiscale dynamical problems, where coarse-scale data is used to learn the fine-scale information through a set of multiscale basis functions. Wang et al. [28] propose Koopman Neural Forecaster (KNF) combining AE with Koopman operator theory to predict the data with distributional shifts.
Representing Koopman embedding by dictionary learning or AE networks has several drawbacks. Firstly, the reconstruction in dictionary learning partially fixes the dictionary functions, which leads to a low level of interpretability of the model. Secondly, the encoder and decoder in an AE model are trained simultaneously, but neither of them is invertible, cf. [29] for more details. Moreover, due to the structural noninvertibility of the encoder and decoder, it typically requires a large amount of training data in order to obtain accurate representations, which makes the AE model prone to overfitting. Alford-Lago et al. [29] analyze the property of both the encoder and decoder in AE and proposed the deep learning dynamic mode decomposition (DLDMD). Bevanda et al. [30] constructed a conjugate map between the nonlinear system and its Jacobian linearization, which is learned by a diffeomorphic neural network.
In this paper, we develop a novel architecture that incorporates physical knowledge to learn the Koopman embedding. Specifically, we apply coupling flow invertible neural networks (CF-INN) to learn the observable functions and reconstruction functions. The invertibility of the learned observable functions makes our method more flexible than dictionary learning or AE-based learning. Our contributions are threefold:
1. We utilize a structurally invertible mapping to reconstruct the state variables, which increases the interpretability of the neural network and alleviates the overfitting of AE.
2. The difficulty of learning the observable functions and the reconstruction functions is reduced by exploiting the structural invertibility of the neural network. Therefore, the reconstruction error term in the loss function can be eliminated.
3. Since physical information is embedded into the model, fewer parameters are needed to achieve accuracy comparable to other methods. Additionally, the number of parameters to be optimized is reduced dramatically since the learned mappings and their inverses share the same parameters.
This paper is organized as follows. In Section 2, we briefly review the Koopman operator theory and DMD. In Section 3, we present the structure of CF-INN and introduce how to learn the invariant subspace of the Koopman operator and the reconstruction functions. In Section 4, several numerical experiments are performed to demonstrate the performance of our method, and we summarize our work in Section 5.
## 2 Preliminaries
### Koopman operator theory
Consider the nonlinear autonomous system in discrete form,
\[\mathbf{x}_{k+1}=f(\mathbf{x}_{k}),\quad\mathbf{x}_{k}\in\mathcal{M}\subset \mathbb{R}^{m}, \tag{1}\]
where \(\mathcal{M}\) represents the set of state space, \(f:\mathcal{M}\rightarrow\mathcal{M}\) is an unknown nonlinear map, and \(k\) is the time index.
**Definition 1** (Koopman operator [16]).: _For the nonlinear system (1), the Koopman operator \(\mathcal{K}\) is an infinite-dimensional linear operator that acts on all observable functions \(g:\mathcal{M}\rightarrow\mathbb{C}\) such that_
\[\mathcal{K}g(\mathbf{x})=g(f(\mathbf{x})).\]
_Here, \(g(x)\in\mathcal{H}\) and \(\mathcal{H}\) represents the infinite dimensional Hilbert space._
Through the observable functions, the nonlinear system (1) could be transformed into an infinite-dimensional linear system using the Koopman operator,
\[g(\mathbf{x}_{k+1})=g(f(\mathbf{x}_{k}))=\mathcal{K}g(\mathbf{x}_{k}). \tag{2}\]
Note that the Koopman operator is linear, _i.e._, \(\mathcal{K}(\alpha_{1}g_{1}(\mathbf{x})+\alpha_{2}g_{2}(\mathbf{x}))=\alpha_{1} g_{1}(f(\mathbf{x}))+\alpha_{2}g_{2}(f(\mathbf{x}))\), with \(g_{1}(\mathbf{x}),g_{2}(\mathbf{x})\in\mathcal{H}\) and \(\alpha_{1},\alpha_{2}\in\mathbb{R}\). As \(\mathcal{K}\) is an infinite-dimensional operator, we denote its eigenfunctions and eigenvalues by \(\{\lambda_{i},\varphi_{i}(x)\}_{i=0}^{\infty}\) such that \(\mathcal{K}\varphi_{i}(\mathbf{x})=\lambda_{i}\varphi_{i}(\mathbf{x})\), where \(\varphi_{i}(\mathbf{x}):\mathcal{M}\rightarrow\mathbb{R}\), \(\lambda_{i}\in\mathbb{C}\).
The Koopman eigenfunctions define a set of intrinsic measurement coordinates, then a vector-valued observable function \(\mathbf{g}(\mathbf{x})=[g_{1}(\mathbf{x}),\cdots,g_{n}(\mathbf{x})]^{T}\) could be written in terms of the Koopman eigenfunctions,
\[\mathbf{g}(\mathbf{x}_{k})=\begin{bmatrix}g_{1}(\mathbf{x}_{k})\\ \vdots\\ g_{n}(\mathbf{x}_{k})\end{bmatrix}=\sum_{i=1}^{\infty}\varphi_{i}(\mathbf{x}_{ k})\begin{bmatrix}<\varphi_{i},g_{1}>\\ \vdots\\ <\varphi_{i},g_{n}>\end{bmatrix}=\sum_{i=1}^{\infty}\varphi_{i}(\mathbf{x}_{k}) \mathbf{v}_{i}, \tag{3}\]
where \(\mathbf{v}_{i}\) refers to the \(i\)-th Koopman mode with respect to the Koopman eigenfunction \(\varphi_{i}(\mathbf{x})\). Combining (2) and (3), we have the decomposition of a vector-valued observable functions
\[\mathbf{g}(\mathbf{x}_{k+1})=\mathcal{K}\mathbf{g}(\mathbf{x}_{k})=\mathcal{K }\sum_{i=1}^{\infty}\varphi_{i}(\mathbf{x}_{k})\mathbf{v}_{i}=\sum_{i=1}^{ \infty}\lambda_{i}\varphi_{i}(\mathbf{x}_{k})\mathbf{v}_{i}.\]
Furthermore, the decomposition could be rewritten as
\[\mathbf{g}(\mathbf{x}_{k})=\sum_{i=1}^{\infty}\lambda_{i}^{k}\varphi_{i}( \mathbf{x}_{0})\mathbf{v}_{i}.\]
In practice, we need a finite-dimensional representation of the infinite-dimensional Koopman operator. Denote the \(n\)-dimensional invariant subspace of the Koopman operator \(\mathcal{K}\) by \(\mathcal{H}_{g}\), _i.e._, \(\forall g(\mathbf{x})\in\mathcal{H}_{g},\ \mathcal{K}g(\mathbf{x})\in\mathcal{H}_{g}\). Let \(\{g_{i}(\mathbf{x})\}_{i=1}^{n}\) be a basis of \(\mathcal{H}_{g}\); this induces a finite-dimensional linear operator \(\mathbf{K}\) [16] that projects the Koopman operator \(\mathcal{K}\) onto \(\mathcal{H}_{g}\), _i.e._, for the \(n\)-dimensional vector-valued observable function \(\mathbf{g}(\mathbf{x})=[g_{1}(\mathbf{x}),\cdots,g_{n}(\mathbf{x})]^{T}\), we have
\[\mathbf{g}(x_{k+1})=\begin{bmatrix}g_{1}(x_{k+1})\\ \vdots\\ g_{n}(x_{k+1})\end{bmatrix}=\begin{bmatrix}\mathcal{K}g_{1}(x_{k})\\ \vdots\\ \mathcal{K}g_{n}(x_{k})\end{bmatrix}=\mathbf{K}\begin{bmatrix}g_{1}(x_{k})\\ \vdots\\ g_{n}(x_{k})\end{bmatrix}=\mathbf{K}\mathbf{g}(x_{k}) \tag{4}\]
### Dynamic mode decomposition
DMD approximates the spectral decomposition of the Koopman operator numerically. Given the state variables \(\{\mathbf{x}_{0},\mathbf{x}_{1},\cdots,\mathbf{x}_{p}\}\) and a vector-valued observable function \(\mathbf{g}(\mathbf{x})=[g_{1}(\mathbf{x}),\cdots,g_{n}(\mathbf{x})]^{T}\), then we get the sequence \(\{\mathbf{g}(\mathbf{x}_{0}),\mathbf{g}(\mathbf{x}_{1}),\cdots,\mathbf{g}( \mathbf{x}_{p})\}\), where each \(\mathbf{g}(\mathbf{x}_{k})\in\mathbb{R}^{n}\) is the observable snapshot of the \(k\)-th time step. According to (4), we have
\[\mathbf{g}(\mathbf{x}_{k+1})=\mathbf{K}\mathbf{g}(\mathbf{x}_{k}),\]
where \(\mathbf{K}\in\mathbb{R}^{n\times n}\) is the matrix form of the finite-dimensional operator. For the two data matrices, \(\mathbf{X}=[\mathbf{g}(\mathbf{x}_{0}),\cdots,\mathbf{g}(\mathbf{x}_{p-1})]\) and \(\mathbf{Y}=[\mathbf{g}(\mathbf{x}_{1}),\cdots,\mathbf{g}(\mathbf{x}_{p})]\), where \(\mathbf{X}\) and \(\mathbf{Y}\) are both in \(\mathbb{R}^{n\times p}\), which satisfies \(\mathbf{Y}=\mathbf{K}\mathbf{X}\). Therefore, \(\mathbf{K}\) can be represented by
\[\mathbf{K}=\mathbf{Y}\mathbf{X}^{\dagger},\]
where \(\mathbf{X}^{\dagger}\) denotes the Moore-Penrose inverse of \(\mathbf{X}\).
The Exact DMD algorithm developed by Tu et al. [8] computes the dominant eigen-pairs (eigenvalues and eigenvectors) of \(\mathbf{K}\) without an explicit formulation of \(\mathbf{K}\). In Algorithm 1, we present the DMD algorithm on the observable space, which is a generalization of the Exact DMD algorithm. When using the identity mapping as the observable function, _i.e._, \(\mathbf{g}(\mathbf{x})=\mathbf{x}\), Algorithm 1 reduces to the Exact DMD algorithm.
```
1. Compute the (reduced) SVD of \(\mathbf{X}\), \(\mathbf{X}=\mathbf{U}_{\mathbf{r}}\mathbf{\Sigma}_{\mathbf{r}}\mathbf{V}_{ \mathbf{r}}^{*}\), where \(\mathbf{U}_{\mathbf{r}}\in\mathbb{C}^{n\times r}\), \(\mathbf{\Sigma}_{\mathbf{r}}\in\mathbb{R}^{r\times r}\), \(\mathbf{V}_{\mathbf{r}}\in\mathbb{C}^{p\times r}\).
2. Compute \(\tilde{\mathbf{K}}=\mathbf{U}_{\mathbf{r}}^{*}\mathbf{Y}\mathbf{V}_{\mathbf{r} }\mathbf{\Sigma}_{\mathbf{r}}^{-1}\).
3. Compute the eigen-pairs of \(\tilde{\mathbf{K}}\): \(\tilde{\mathbf{K}}\mathbf{W}=\mathbf{W}\mathbf{\Lambda}\).
4. Reconstruct the eigen-pairs of \(\mathbf{K}\), where eigenvalues of \(\mathbf{K}\) are diagonal entries of \(\Lambda\), the corresponding eigenvectors of \(\mathbf{K}\)(DMD modes) are columns of \(\mathbf{\Phi}=\mathbf{Y}\mathbf{V}_{\mathbf{r}}\mathbf{\Sigma}_{\mathbf{r}}^{ -1}\mathbf{W}\).
5. Approximate the observation data via DMD, \(\hat{\mathbf{g}}(\mathbf{x}_{k})=\mathbf{\Phi}\mathbf{\Lambda}^{k}\mathbf{b}\), where \(\mathbf{b}=\mathbf{\Phi}^{\dagger}\mathbf{g}(\mathbf{x}_{0})\).
6. Reconstruct the state variables \(\hat{\mathbf{x}}_{k}=\mathbf{g}^{-1}(\hat{\mathbf{g}}(\mathbf{x}_{k}))= \mathbf{g}^{-1}\left(\mathbf{\Phi}\mathbf{\Lambda}^{k}\mathbf{b}\right)\).
```
**Algorithm 1** DMD on observable space [16; 31]
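For concreteness, a compact NumPy sketch of Algorithm 1 (our own implementation; the observable snapshots are assumed to be collected in the matrices \(\mathbf{X}\) and \(\mathbf{Y}\) defined above):

```python
import numpy as np

def dmd(X, Y, r):
    """Steps 1-5 of Algorithm 1 on observable snapshot matrices X, Y (n x p)."""
    U, S, Vh = np.linalg.svd(X, full_matrices=False)   # step 1: reduced SVD
    Ur, Sr, Vr = U[:, :r], S[:r], Vh[:r].conj().T
    K_tilde = Ur.conj().T @ Y @ Vr / Sr                # step 2
    evals, W = np.linalg.eig(K_tilde)                  # step 3
    Phi = (Y @ Vr / Sr) @ W                            # step 4: DMD modes
    b = np.linalg.pinv(Phi) @ X[:, 0]                  # step 5: amplitudes
    return evals, Phi, b

def dmd_predict(evals, Phi, b, k):
    # step 5: g_hat(x_k) = Phi * Lambda^k * b
    return Phi @ (evals ** k * b)
```

Step 6 then applies \(\mathbf{g}^{-1}\) to `dmd_predict(evals, Phi, b, k)`, which is precisely where the invertibility of the observable functions discussed next becomes essential.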
### State reconstruction
Koopman operator theory utilizes observable functions \(\mathbf{g}\) to transform the nonlinear system (1) into a linear system while preserving the nonlinearity. Evolving the nonlinear system (1) directly is computationally expensive or even impossible when \(f\) is unknown, whereas evolving through the Koopman operator (2) offers a promising and computationally efficient alternative.
Figure 1 illustrates the relation between the nonlinear evolution \(f\) and the Koopman operator evolution, where the system evolves linearly in the observation space \(\mathcal{H}\). By computing the Koopman eigenvalues and modes, we can make predictions of the observable functions \(\mathbf{g}(\mathbf{x})\). We can then reconstruct the state \(\mathbf{x}\) through the inverse of the observable functions, \(\mathbf{g}^{-1}\), provided that \(\mathbf{g}(\mathbf{x})\) is invertible. The invertibility of the observable functions is essential to ensure the reconstruction accuracy and the interpretability of the outcomes.
Typically, the observable functions \(\mathbf{g}(\mathbf{x})\) are selected manually based on prior knowledge. Exact DMD takes the identity mapping, while EDMD utilizes a set of pre-defined functions such as polynomials, Fourier modes, radial basis functions, and so forth [17]. However, these methods can be inaccurate and inefficient for learning Koopman embeddings. Deep neural networks, as efficient global nonlinear approximators, can be applied to represent the observable function \(\mathbf{g}(\mathbf{x})\) and the reconstruction function \(\mathbf{g}^{-1}(\mathbf{x})\). Several studies have demonstrated that the encoder and decoder networks in an AE correspond to \(\mathbf{g}(\mathbf{x})\) and \(\mathbf{g}^{-1}(\mathbf{x})\), respectively [19; 22; 23; 24; 25; 26].
In practical applications, it is not always guaranteed that \(\mathbf{g}(\mathbf{x})\) is invertible. When learning Koopman embeddings via an AE, the invertibility of \(\mathbf{g}(\mathbf{x})\) is enforced only through a numerical constraint, _i.e._, the reconstruction error \(\|\mathbf{x}-\mathbf{g}^{-1}(\mathbf{g}(\mathbf{x}))\|_{2}^{2}\), which tends to result in overfitting and suboptimal performance [29]. Besides, the reconstruction error is trained simultaneously with the prediction error and the linearity error [23], and the weights assigned to each loss term are hyperparameters that can be challenging to tune. In this paper, we propose a structurally invertible mapping learning framework, which eliminates the need for the reconstruction term in the loss function and yields more robust and accurate results. We present the details of our method in Section 3.
Figure 1: Koopman operator and inverse of observable functions
## 3 Learning Koopman embedding by invertible neural networks
In this section, we first briefly review the AE neural network and demonstrate the limitation of this class of neural networks in the Koopman embedding learning. Then, we introduce our method to overcome this limitation.
### Drawback of AE in the Koopman embedding learning
Most existing works use Autoencoder (AE) neural networks as the backbone to learn the invariant subspace of the Koopman operator and to reconstruct the state variables. The AE, a frequently used unsupervised neural network structure, consists of two parts, _i.e._, the encoder \(\mathcal{E}\) and the decoder \(\mathcal{D}\). The AE learns the two mappings \(\mathcal{E}\) and \(\mathcal{D}\) by optimizing
\[\min_{\mathcal{E},\mathcal{D}}\mathbb{E}_{x\sim m(x)}[\text{loss}(x,\mathcal{ D}\circ\mathcal{E}(x))]. \tag{5}\]
Here \(m(x)\) denotes the distribution of the input data, \(\text{loss}(x,y)\) describes the difference between \(x\) and \(y\), and \(\mathbb{E}(\cdot)\) represents the expectation.
**Definition 2**.: _Let \(f_{1}:S\to S^{\prime}\) be an arbitrary mapping, and it is said to be invertible if there exists a mapping \(f_{2}:S^{\prime}\to S\) such that_
\[f_{1}\circ f_{2}=\mathcal{I},f_{2}\circ f_{1}=\mathcal{I},\]
_where \(\mathcal{I}\) is the identity mapping. Then, \(f_{2}\) is said to be the inverse mapping of \(f_{1}\)._
Let \(\mathcal{E}\) and \(\mathcal{D}\) be two mappings learned by an AE such that \(\mathcal{D}\circ\mathcal{E}\approx\mathcal{I}\). However, the reverse composition \(\mathcal{E}\circ\mathcal{D}\) is not necessarily a good approximation of the identity mapping; moreover, \(\mathcal{E}\) and \(\mathcal{D}\) are generally not invertible [29]. The main reason is that while the AE strives to achieve \(\mathcal{D}\circ\mathcal{E}\approx\mathcal{I}\), it omits the additional constraint \(\mathcal{E}\circ\mathcal{D}\approx\mathcal{I}\), which would require latent-variable data to train. Unfortunately, the latent variables are not accessible, thus rendering it impossible for the AE to satisfy \(\mathcal{E}\circ\mathcal{D}\approx\mathcal{I}\) and \(\mathcal{D}\circ\mathcal{E}\approx\mathcal{I}\) simultaneously.
The AE learns an identity mapping \(\mathcal{I}\) from a training data set \(\mathcal{S}\), _i.e._, for any \(\mathbf{x}\in\mathcal{S},\mathcal{D}\circ\mathcal{E}(\mathbf{x})\approx\mathbf{x}\). For data outside the set \(\mathcal{S}\), the mapping learned by the AE may perform badly; in other words, the AE may have poor generalization capability. Next, we use a preliminary experiment to demonstrate this limitation. The details of this numerical example are given in Section 4.1. We use the AE structure defined in [26], randomly generate 120 trajectories to train it, and depict the results in Figure 2.
Figure 2 compares input data points outside the distribution of the training data with the corresponding reconstructed data points produced by the trained AE model. Figure 2(a) shows the density distribution of the training data set \(\mathcal{S}\), which provides a rough illustration of the data space \(\mathcal{S}\). For the reconstruction test of the AE, we generate three types of data, _i.e._, sin-shaped scatters, S-shaped scatters, and scatters from the standard 2-d normal distribution. We plot the corresponding input points (blue) and reconstructed data points (red) of the AE. The results in the next three subfigures illustrate that the AE reconstructs input data points near the training data set \(\mathcal{S}\) very well, but performs badly for data points far away from \(\mathcal{S}\). The same situation occurs in Koopman embedding learning. Specifically, in the training process of the AE, one aims to find the Koopman invariant subspace by minimizing the error of the Koopman embedding learning together with the reconstruction error. However, minimizing the error between the latent variables and their reconstructions, denoted by \(\text{loss}(\mathbf{x},\mathcal{E}\circ\mathcal{D}(\mathbf{x}))\), is intractable. This results in poor stability and generalization capability.
### Structure of CF-INN
We have shown that the mapping learned by the AE performs poorly on out-of-distribution data, which suggests that structural invertibility can greatly reduce computational complexity and yield better
Figure 2: Generalization capability test of AE. (a) the training data distribution. (b) the \(sin(x)\) test function. (c) S-shaped scatters test. (d) random scatters from 2-d standard normal distribution.
generalization capability. Next, we introduce an invertible neural network to overcome this drawback of the AE. Let \(\mathbf{g}_{\boldsymbol{\theta}}(\mathbf{x}):\mathbf{X}\rightarrow\mathbf{Y}\) denote the input-output mapping of the invertible neural network, where \(\boldsymbol{\theta}\) represents the parameters of the network, and let \(\mathbf{f}_{\boldsymbol{\theta}}\) be the inverse mapping of \(\mathbf{g}_{\boldsymbol{\theta}}\), which shares the same parameters as \(\mathbf{g}_{\boldsymbol{\theta}}\). Then we can reconstruct \(\mathbf{x}\) in the backward direction by \(\mathbf{f}_{\boldsymbol{\theta}}(\mathbf{y}):\mathbf{Y}\rightarrow\mathbf{X}\). In generative machine learning tasks, the forward (generating) direction is called the flow direction and the backward direction is called the normalizing direction. Next, we introduce the concept of coupling flows, a class of invertible neural networks.
**Definition 3** (Coupling flow [32]).: _Let \(m\in\mathbb{N}\) and \(m\geq 2\), for a vector \(\mathbf{z}\in\mathbb{R}^{m}\) and \(2\leq q\leq m-1\), we define \(\mathbf{z}_{up}\) as the vector \((z_{1},\ldots,z_{q})^{\top}\in\mathbb{R}^{q}\) and \(\mathbf{z}_{low}\) as the vector \((z_{q+1},\ldots,z_{m})^{\top}\in\mathbb{R}^{m-q}\). A coupling flow (CF), denoted by \(h_{q,\tau}\), has the following form,_
\[h_{q,\tau}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},\tau(\mathbf{z}_ {low},\sigma(\mathbf{z}_{up}))),\]
_where \(\sigma:\mathbb{R}^{q}\rightarrow\mathbb{R}^{l}\), and \(\tau(\cdot,\sigma(\mathbf{y})):\mathbb{R}^{m-q}\times\mathbb{R}^{l}\rightarrow \mathbb{R}^{m-q}\) is a bijection mapping for any \(\mathbf{y}\in\mathbb{R}^{q}\)._
A coupling flow defined in _Definition 3_ is invertible if and only if \(\tau\) is invertible, with inverse \(h_{q,\tau}^{-1}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},\tau^{-1}(\mathbf{z}_{low},\sigma(\mathbf{z}_{up})))\)[33]. The key to making a CF invertible is the invertibility of \(\tau\). One of the most widely used CFs is the affine coupling function (ACF) [34, 35, 36], where \(\tau\) is an invertible element-wise function.
**Definition 4** (Affine coupling function [33]).: _Define an affine coupling function by the mapping \(\Psi_{q,s,t}\) from \(\mathbb{R}^{q}\times\mathbb{R}^{m-q}\) to \(\mathbb{R}^{m}\) such that_
\[\Psi_{q,s,t}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},(\mathbf{z}_{ low}+t(\mathbf{z}_{up}))\odot s(\mathbf{z}_{up})), \tag{6}\]
_where \(\odot\) is the Hadamard product, \(s,t:\mathbb{R}^{q}\rightarrow\mathbb{R}^{m-q}\) are two arbitrary vector-valued mappings._
Definition 4 defines the forward direction of computation; the backward direction is given by \(\Psi_{q,s,t}^{-1}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},\mathbf{z}_{low}\oslash s(\mathbf{z}_{up})-t(\mathbf{z}_{up}))\), where \(\oslash\) denotes the element-wise division of vectors. The mappings \(s\) and \(t\) in Definition 4 can be arbitrary nonlinear functions; neural networks such as fully-connected neural networks (FNNs) are typically used to parameterize \(t\) and \(s\).
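A minimal PyTorch sketch of an ACF and its inverse is given below. The FNN widths and the parameterization of \(s\) through an exponential (keeping \(s>0\) so the element-wise scaling is invertible) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ACF(nn.Module):
    """Affine coupling function (Definition 4) with an explicit inverse."""
    def __init__(self, m, q, hidden=16):
        super().__init__()
        self.q = q
        self.t = nn.Sequential(nn.Linear(q, hidden), nn.Tanh(),
                               nn.Linear(hidden, m - q))
        # Parameterize log(s) so that s is strictly positive.
        self.log_s = nn.Sequential(nn.Linear(q, hidden), nn.Tanh(),
                                   nn.Linear(hidden, m - q))

    def forward(self, z):                       # Eq. (6)
        z_up, z_low = z[:, :self.q], z[:, self.q:]
        s = torch.exp(self.log_s(z_up))
        return torch.cat([z_up, (z_low + self.t(z_up)) * s], dim=1)

    def inverse(self, z):                       # z_low ⊘ s(z_up) - t(z_up)
        z_up, z_low = z[:, :self.q], z[:, self.q:]
        s = torch.exp(self.log_s(z_up))
        return torch.cat([z_up, z_low / s - self.t(z_up)], dim=1)
```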
Let \(\Psi_{1},\ldots,\Psi_{L}\) be a sequence of \(L\) affine coupling functions and define \(\mathbf{g}_{\boldsymbol{\theta}}=\Psi_{L}\circ\Psi_{L-1}\circ\cdots\circ\Psi_{1}\), where \(\boldsymbol{\theta}\) represents the parameters of \(\{\Psi_{i}\}_{i=1}^{L}\). The resulting vector-valued function \(\mathbf{g}_{\boldsymbol{\theta}}\) is an invertible neural network, called a coupling flow invertible neural network (CF-INN) in this paper. Moreover, for any \(\Psi_{i}\), the division index \(q\) of the input vector is user-specified; in this paper, we set \(q=\lceil m/2\rceil\), where \(\lceil\cdot\rceil\) is the ceiling function. Furthermore, in order to mix the information sufficiently, we can flip the ACF by using the form \(\bar{\Psi}_{q,s,t}(\mathbf{z}_{up},\mathbf{z}_{low})=((\mathbf{z}_{up}+t(\mathbf{z}_{low}))\odot s(\mathbf{z}_{low}),\mathbf{z}_{low})\). We plot the computation process of an ACF and a flipped ACF in Figure 3, where the left network structure diagram shows the forward direction and the right one shows the backward direction. The red area is an ACF block consisting of a standard ACF and a flipped ACF, which is a CF-INN of depth 2.
When the depth \(L\) of a CF-INN is large, its training becomes challenging. The main cause is that the divisor term \(s\) in \(\Psi\) may become too small in the backward-direction computations. This can be mitigated by replacing the affine coupling functions with residual coupling functions; a similar idea underlies the residual connections in ResNet.
**Definition 5** (Residual coupling functions [37]).: _Define a residual coupling function (RCF) by the map \(\Psi_{q,t}\) from \(\mathbb{R}^{q}\times\mathbb{R}^{m-q}\) to \(\mathbb{R}^{m}\) such that_
\[\Psi_{q,t}(\mathbf{z}_{up},\mathbf{z}_{low})=(\mathbf{z}_{up},\mathbf{z}_{low }+t(\mathbf{z}_{up})),\]
_where \(t:\mathbb{R}^{q}\rightarrow\mathbb{R}^{m-q}\) is a nonlinear mapping._
RCFs are simplifications of ACFs; when we connect an RCF with a flipped RCF, we obtain an RCF block, a simplified version of the ACF block in Figure 3.
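The sketch below composes RCFs, alternating standard and flipped couplings, into a CF-INN whose inverse reuses the same parameters; the widths are again illustrative assumptions.

```python
import torch
import torch.nn as nn

class RCF(nn.Module):
    """Residual coupling function (Definition 5); flip=True updates z_up instead."""
    def __init__(self, m, q, hidden=16, flip=False):
        super().__init__()
        self.q, self.flip = q, flip
        d_in, d_out = (m - q, q) if flip else (q, m - q)
        self.t = nn.Sequential(nn.Linear(d_in, hidden), nn.Tanh(),
                               nn.Linear(hidden, d_out))

    def forward(self, z):
        z_up, z_low = z[:, :self.q], z[:, self.q:]
        if self.flip:
            return torch.cat([z_up + self.t(z_low), z_low], dim=1)
        return torch.cat([z_up, z_low + self.t(z_up)], dim=1)

    def inverse(self, z):
        z_up, z_low = z[:, :self.q], z[:, self.q:]
        if self.flip:
            return torch.cat([z_up - self.t(z_low), z_low], dim=1)
        return torch.cat([z_up, z_low - self.t(z_up)], dim=1)

class CFINN(nn.Module):
    """g_theta = Psi_L ∘ ... ∘ Psi_1; f_theta shares the same parameters."""
    def __init__(self, layers):
        super().__init__()
        self.layers = nn.ModuleList(layers)

    def forward(self, x):                 # observable g_theta(x)
        for layer in self.layers:
            x = layer(x)
        return x

    def inverse(self, y):                 # reconstruction f_theta(y)
        for layer in reversed(self.layers):
            y = layer.inverse(y)
        return y

# An RCF block of depth 2 on m = 4 dimensional states, split at q = 2:
g_theta = CFINN([RCF(4, 2), RCF(4, 2, flip=True)])
```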
### Loss function for Koopman embedding
In this paper, we use the CF-INN to learn the Koopman invariant subspace and the reconstruction simultaneously, where the forward direction of the CF-INN is represented by \(\mathbf{g}_{\boldsymbol{\theta}}\) and its backward direction by \(\mathbf{f}_{\boldsymbol{\theta}}\). The observable
Figure 3: The illustration of the forward and backward direction in an ACF block.
functions evolve linearly in the Koopman invariant subspace. Hence, the linearity-constrained loss function that represents the DMD approximation error is given by
\[\mathcal{L}_{\text{linear}}=\sum_{t=1}^{T}||\mathbf{g}_{\boldsymbol{\theta}}( \mathbf{x}_{t})-\Phi\Lambda^{t}\Phi^{\dagger}\mathbf{g}_{\boldsymbol{\theta}}( \mathbf{x}_{0})||^{2}=\sum_{t=1}^{T}||\mathbf{g}_{\boldsymbol{\theta}}( \mathbf{x}_{t})-\hat{\mathbf{g}}_{\boldsymbol{\theta}}(\mathbf{x}_{t})||^{2},\]
where \(\hat{\mathbf{g}}_{\boldsymbol{\theta}}(\mathbf{x}_{t})=\Phi\Lambda^{t}\Phi^{\dagger}\mathbf{g}_{\boldsymbol{\theta}}(\mathbf{x}_{0})\) is the DMD approximation of the observable functions \(\{\mathbf{g}(\mathbf{x}_{t})\}_{t=1}^{T}\) obtained by Algorithm 1. To reconstruct the states \(\mathbf{x}_{t}\), the inverse mapping of \(\mathbf{g}_{\boldsymbol{\theta}}\), _i.e._, \(\mathbf{f}_{\boldsymbol{\theta}}\), corresponds to the backward direction of the CF-INN. \(\mathbf{f}_{\boldsymbol{\theta}}\) shares the same network structure and parameters with \(\mathbf{g}_{\boldsymbol{\theta}}\); therefore, the computational cost is greatly reduced compared with an AE, where another neural network is required to parameterize the inverse mapping of \(\mathbf{g}_{\boldsymbol{\theta}}\). The reconstruction loss due to the DMD approximation error is given by
\[\mathcal{L}_{\text{rec}}=\sum_{t=1}^{T}||\mathbf{x}_{t}-\mathbf{f}_{ \boldsymbol{\theta}}(\hat{\mathbf{g}}_{\boldsymbol{\theta}}(\mathbf{x}_{t})) ||^{2}.\]
The optimal parameters \(\boldsymbol{\theta}^{*}\) are given by
\[\boldsymbol{\theta}^{*}=\operatorname*{arg\,min}_{\boldsymbol{\theta}} \mathcal{L}_{\text{linear}}+\alpha\mathcal{L}_{\text{rec}},\]
where \(\alpha\) is a user-specified hyperparameter.
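A hedged sketch of this objective follows. For brevity, the DMD approximation is computed here from the one-step least-squares operator \(K=Y^{\prime}Y^{\dagger}\) on the latent snapshots, which coincides with \(\Phi\Lambda^{t}\Phi^{\dagger}\) when \(K\) is diagonalizable; the rank-\(r\) DMD of Algorithm 1 would replace this step in the actual method.

```python
import torch

def flowdmd_loss(model, X, alpha=1.0):
    """X: (T+1, m) tensor of snapshots x_0, ..., x_T from one trajectory.

    model is an invertible network exposing forward (g_theta) and
    inverse (f_theta), such as the CFINN sketched above.
    """
    G = model(X)                          # observables g_theta(x_t), (T+1, m)
    Y, Yp = G[:-1].T, G[1:].T             # latent snapshot pairs
    K = Yp @ torch.linalg.pinv(Y)         # least-squares one-step operator
    g_hat = [G[0]]
    for _ in range(X.shape[0] - 1):       # g_hat_t = K^t g_theta(x_0)
        g_hat.append(K @ g_hat[-1])
    g_hat = torch.stack(g_hat)
    loss_linear = ((G[1:] - g_hat[1:]) ** 2).sum()      # L_linear
    x_rec = model.inverse(g_hat)          # f_theta shares parameters with g_theta
    loss_rec = ((X[1:] - x_rec[1:]) ** 2).sum()         # L_rec
    return loss_linear + alpha * loss_rec
```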
Compared with other Koopman embedding learning frameworks, the loss function in our approach is much simpler. We summarize our CF-INN framework for Koopman embedding learning in Figure 4. Our method is called FlowDMD since it uses a flow-model-based Dynamic Mode Decomposition to compute the finite-dimensional Koopman operator approximation and reconstruct the system states.
Figure 4: The general framework of FlowDMD.
## 4 Numerical experiments
In this section, we use three numerical examples to demonstrate the efficiency of our method for learning the Koopman embedding and compare its performance with LIR-DMD [26] and exact DMD. We use the Python library _FEniCS_[38] to compute the numerical solutions of PDEs, the Python library _PyDMD_[39] to perform the exact DMD calculations, and the Python library _PyTorch_[40] to train the neural networks. The Xavier normal initialization scheme [41] is utilized to initialize the weights of all neural networks, while the biases of all nodes are set to zero. All the networks are trained by the Adam optimizer [42] with an initial learning rate of \(10^{-3}\). In order to find the optimal parameters of the network, we use _ReduceLROnPlateau_[43] to adjust the learning rate during the training process for all numerical examples. For fairness, all the methods share the same training strategies. Denote \(x\) as the "true" value of the states and \(\hat{x}\) as its reconstruction. We use three metrics to evaluate the different methods, _i.e._, the relative \(L_{2}\) error
\[\text{RL2E}(t)=\frac{||\hat{x}_{t}-x_{t}||_{2}}{||x_{t}||_{2}},\]
the mean squared error (MSE),
\[\text{MSE}(t)=\frac{||\hat{x}_{t}-x_{t}||_{2}^{2}}{m},\]
and the total relative \(L_{2}\) error
\[\text{TRL2E}=\sqrt{\frac{\sum_{t=1}^{T}||\hat{x}_{t}-x_{t}||_{2}^{2}}{\sum_{i= 1}^{T}||x_{t}||_{2}^{2}}}.\]
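These metrics translate directly into numpy; in the sketch below, `x` and `x_hat` are arrays holding the "true" and reconstructed states with one time step per row, and \(m\) is the state dimension.

```python
import numpy as np

def rl2e(x_hat, x, t):
    """Relative L2 error at time step t."""
    return np.linalg.norm(x_hat[t] - x[t]) / np.linalg.norm(x[t])

def mse(x_hat, x, t):
    """Mean squared error at time step t (m = state dimension)."""
    return np.sum((x_hat[t] - x[t]) ** 2) / x.shape[1]

def trl2e(x_hat, x):
    """Total relative L2 error over all time steps."""
    num = np.sum(np.linalg.norm(x_hat - x, axis=1) ** 2)
    den = np.sum(np.linalg.norm(x, axis=1) ** 2)
    return np.sqrt(num / den)
```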
### Fixed-point attractor
The fixed-point attractor example [23] is given by
\[\begin{cases}x_{t+1,1}=\lambda x_{t,1},\\ x_{t+1,2}=\mu x_{t,2}+(\lambda^{2}-\mu)x_{t,1}^{2}.\end{cases}\]
The initial state is chosen randomly by \(x_{0,1}\sim U(0.2,4.2)\), \(x_{0,2}\sim U(0.2,4.2)\), and \(\lambda=0.9,\mu=0.5\). We divide the data set into three parts, where the ratio of training, validation, and test is \(60\%\), \(20\%\), and \(20\%\), respectively. The numbers of neurons per layer for the encoder network in LIR-DMD are \(2,10,10,3\), and those of the decoder network are \(3,10,10,2\), which results in \(345\) trainable parameters for LIR-DMD. We use three ACFs for this problem. The mappings \(t\) and \(s\) are parameterized by FNNs with three layers, with layer widths of 1, 8, and 2, respectively. This results in 102 trainable parameters in total.
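The data generation for this example can be sketched as follows; the trajectory length of 50 steps is an assumption, since the text does not state the number of steps.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu, T = 0.9, 0.5, 50        # lambda = 0.9, mu = 0.5; T is assumed

def trajectory(x0, T):
    """Iterate the fixed-point attractor map from initial state x0."""
    X = np.zeros((T + 1, 2))
    X[0] = x0
    for t in range(T):
        X[t + 1, 0] = lam * X[t, 0]
        X[t + 1, 1] = mu * X[t, 1] + (lam**2 - mu) * X[t, 0] ** 2
    return X

# 120 training trajectories with x_0 ~ U(0.2, 4.2)^2
data = np.stack([trajectory(rng.uniform(0.2, 4.2, size=2), T)
                 for _ in range(120)])
```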
We randomly choose one example from the test set and plot its results in Figure 5. Both Figure 5(a) and Figure 5(b) show that the reconstructions produced by LIR-DMD and FlowDMD are better than that of exact DMD, and that the difference between the trajectories of LIR-DMD and FlowDMD is very small. Figures 5(c) and 5(d) illustrate that the reconstruction error of FlowDMD is the smallest. In the first 30 time steps, LIR-DMD has an error similar to that of FlowDMD; over the following 30 time steps, the error of FlowDMD increases much more slowly than that of LIR-DMD. We conclude that FlowDMD has better generalization ability than LIR-DMD.
We test FlowDMD, LIR-DMD, and exact DMD on 40 randomly generated examples, and the results are depicted in Figure 6. We use the total relative \(L_{2}\) error to evaluate the reconstructed trajectories. For FlowDMD, the reconstruction error is the lowest on almost all of the test examples, and the average total relative \(L_{2}\) error is only 0.3%. Compared with LIR-DMD, FlowDMD has better generalization ability and better learns the Koopman invariant subspace.
Figure 5: Comparison of three methods for Example 4.1. The total relative \(L_{2}\) error of the Exact DMD, LIR-DMD, and FlowDMD are 0.2448, 0.0111 and 0.0018, respectively.
### Burgers' equation
The 1-D Burgers' equation [44] is given by
\[\begin{cases}\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\frac{0.01}{\pi}\frac{\partial^{2}u}{\partial x^{2}},\quad x\in(-1,1),\ t\in(0,1],\\ u(1,t)=u(-1,t)=0,\\ u(x,0)=-\xi\sin(\pi x),\end{cases} \tag{7}\]
where \(\xi\) is a random variable that follows a uniform distribution \(U(0.2,1.2)\). We use the finite element method with 30 equidistant grid points for the spatial discretization and the implicit Euler method with a step size of 0.01 for the temporal discretization. We generate 100 samples of \(\xi\) for the initial state and compute the corresponding solutions. The examples are then divided into three parts, with proportions of 60% for training, 20% for validation, and 20% for test. We test the performance of exact DMD, LIR-DMD, and FlowDMD. The rank of exact DMD is 3, and the same rank is also used in LIR-DMD and FlowDMD to embed the Koopman linearity. The structure of the encoder network for LIR-DMD is \([30,40,50,40]\) and that of the decoder network is \([40,50,40,30]\), where the numbers in brackets represent the width of each layer, and we use RCFs in place of ACFs. This results in an invertible neural network of depth 3 with one RCF block and one RCF. In each RCF, the widths of the layers of the FNN parameterizing the mapping \(t\) are 15, 40, and 15, which results in 7530 parameters for FlowDMD, whereas LIR-DMD has 10650 parameters.
Figure 7 shows that FlowDMD has the smallest absolute reconstruction error and total relative reconstruction error. Figures 8(a) and 8(b) show that the reconstruction errors of exact DMD and LIR-DMD increase with time, while that of FlowDMD remains at a very low level. Figure 9 summarizes the TRL2E of the reconstruction on all test examples and shows that FlowDMD has the smallest error on almost all test examples, with an average TRL2E of 1.5%. For some test examples, exact DMD attains the same TRL2E as FlowDMD, but for most test
Figure 6: Total relative \(L_{2}\) error in Example 4.1.
examples, FlowDMD performs better than exact DMD. The TRL2E of LIR-DMD is larger than that of FlowDMD on all test examples and slightly better than that of exact DMD on some test examples.
### Allen-Cahn equation
The 1-D Allen-Cahn equation [44] is given by
\[\begin{cases}\dfrac{\partial u}{\partial t}-\gamma_{1}\dfrac{\partial^{2}u}{\partial x^{2}}+\gamma_{2}\left(u^{3}-u\right)=0,\quad x\in(-1,1),\ t\in(0,1],\\ u(0,x)=\xi x^{2}\cos(2\pi x),\\ u(t,-1)=u(t,1),\end{cases} \tag{8}\]
where \(\gamma_{1}=0.0001\), \(\gamma_{2}=5\), and \(\xi\sim\mathcal{N}(-0.1,0.04)\). We use the finite element method with 20 equidistant grid points for the spatial discretization and the implicit Euler method with a step size of 0.02 for the temporal discretization. Furthermore, we generate 100 samples of \(\xi\) and use _FEniCS_ to compute the numerical solutions. The data set is split in a ratio of 60%, 20%, and 20% to be used as
Figure 7: Comparison of three methods in Example 4.2. The total relative \(L_{2}\) errors for exact DMD, LIR-DMD, and FlowDMD are 0.08, 0.119, and 0.017, respectively.
the training set, the validation set, and the test set, respectively. The structure of the encoder network for LIR-DMD is [20, 30, 40, 30] and that of the decoder network is [30, 40, 30, 20], where the numbers in brackets indicate the width of each layer; this results in 6190 parameters for LIR-DMD. For FlowDMD, we again use RCFs in place of ACFs. The neural network for FlowDMD consists of one RCF block and one RCF, which gives a network of depth \(L=3\). In each RCF, the widths of the layers of the FNN parameterizing \(t\) are 10, 20, and 10. In total, FlowDMD has 2580 parameters. The rank of exact DMD is 3, and the same rank is also used in LIR-DMD and FlowDMD to embed the Koopman linearity.
Figure 10 clearly shows that FlowDMD reconstructs the original state most accurately. It reveals that the absolute errors of both exact DMD and LIR-DMD increase over time, whereas FlowDMD keeps the error at a low level throughout. These numerical results show that FlowDMD is more robust and generalizes better than exact DMD and LIR-DMD. The errors of the state reconstruction for the three methods are given in Figure 11. At early times, FlowDMD has the largest relative error because the norm of the true state variables is very small, which leads to a
Figure 8: Relative error of three methods for Example 4.2.
Figure 9: Total relative \(L_{2}\) error in Example 4.2.
large relative error. As time evolves, the error of FlowDMD falls to the lowest level among all three methods. In Figure 12, we use the test data set to evaluate the generalization ability. FlowDMD has the smallest total relative \(L_{2}\) error on most examples, with an average of 9%. The fluctuation of the error for FlowDMD is also smaller than that of LIR-DMD, which demonstrates that FlowDMD has better generalization ability and is more robust than LIR-DMD.
### Sensitivity study
Here, we study the sensitivity of FlowDMD systematically with respect to the following four aspects:
1. The neural network initialization.
2. The hyperparameter \(\alpha\) in the loss function.
3. The structure of neural networks.
4. The rank \(r\) used by DMD in Algorithm 1.
We study the sensitivity of FlowDMD using the Allen-Cahn equation in Section 4.3.
Figure 10: Comparison of three methods in Example 4.3. The total relative \(L_{2}\) error for exact DMD, LIR-DMD, and FlowDMD are 0.6129, 0.4038, and 0.0725, respectively.
#### 4.4.1 Sensitivity with respect to the neural network initialization
In order to quantify the sensitivity of FlowDMD with respect to the initialization, we consider the same data set as in Section 4.3. Simultaneously, we fix the structure of FlowDMD to include only one RCF block and one RCF. Each RCF has an FNN parameterizing \(t\) whose layer widths are \(10,20,10\). Moreover, all FNNs use the rectified linear unit (ReLU) as the activation function. We use \(15\) random seeds to initialize the models and train all of them with the same settings. In Figure 13, we report the total relative \(L_{2}\) error between the reconstructed states and the "true" states. Evidently, the TRL2E remains stable across different initializations of the neural networks, with the results lying within the interval
\[[\mu_{TRL2E}-\sigma_{TRL2E},\ \mu_{TRL2E}+\sigma_{TRL2E}]=[6.5\times 10^{-2}-1.6\times 10^{-2},\ 6.5\times 10^{-2}+1.6\times 10^{-2}].\]
#### 4.4.2 Sensitivity with respect to \(\alpha\)
We utilize the same training set as in Section 4.3 and select \(\alpha\) from the list \([0.01,0.1,1,10,100]\). As shown in Table 1, the different weights \(\alpha\) in the loss function
Figure 11: Relative error of three methods for Example 4.3.
Figure 12: Total relative \(L_{2}\) error in Example 4.3.
have little influence on the final results. We observe that the error is minimized when \(\alpha=10\), which suggests the use of an adaptive weight selection algorithm. The gradient flow provided by the neural tangent kernel [45] could be employed to adjust the weight \(\alpha\) and accelerate the training process; we leave this for future work.
#### 4.4.3 Sensitivity with respect to the structure of neural networks
We study the impact of the number of RCFs and the number of neurons of the FNN parameterizing the mapping \(t\) on the performance of FlowDMD. Specifically, the sensitivity of FlowDMD is quantified with respect to two parameters: the number of RCFs (\(N_{f}\)) and the number of neurons (\(N_{n}\)) in the middle layer of the FNN. Here, the FNN used to parameterize \(t\) is restricted to a three-layer structure of \([10,N_{n},10]\). The results are summarized in Table 2 and indicate that the reconstruction accuracy of FlowDMD depends only weakly on its structure; adding more neurons or more RCFs does not improve the final results to a large extent.
#### 4.4.4 Sensitivity with respect to the rank of DMD
As we increase the rank \(r\) used for the DMD computations in Algorithm 1, we include more physical information, but the computation time also increases. In this study, we investigate how the DMD rank affects the model and its reconstruction. The results in Table 3 show that as we increase the rank \(r\), the corresponding error decreases rapidly.
| \(\alpha\) | 0.01 | 0.1 | 1 | 10 | 100 |
| --- | --- | --- | --- | --- | --- |
| TRL2E | 6.2e-02 | 6.8e-02 | 8.2e-02 | 3.2e-02 | 6.9e-02 |

Table 1: Total relative \(L_{2}\) error for different \(\alpha\).
Figure 13: Total relative \(L_{2}\) error for different neural network initializations.
## 5 Conclusion
In this paper, we propose a coupling flow invertible neural network approach to learn both the observable functions and the reconstruction functions for Koopman operator learning. Our method generates more accurate Koopman embedding models and better approximations of the Koopman operator than state-of-the-art methods. FlowDMD is structurally invertible, which simplifies the loss function and improves the accuracy of the state reconstruction. Numerical experiments show that our approach is more accurate, efficient, and interpretable than the state-of-the-art methods.
## Acknowledgments
The authors would like to thank Mengnan Li and Lijian Jiang for sharing their code.
2310.04424 | Stability Analysis of Non-Linear Classifiers using Gene Regulatory Neural Network for Biological AI | The Gene Regulatory Network (GRN) of biological cells governs a number of key functionalities that enables them to adapt and survive through different environmental conditions. Close observation of the GRN shows that the structure and operational principles resembles an Artificial Neural Network (ANN), which can pave the way for the development of Biological Artificial Intelligence. In particular, a gene's transcription and translation process resembles a sigmoidal-like property based on transcription factor inputs. In this paper, we develop a mathematical model of gene-perceptron using a dual-layered transcription-translation chemical reaction model, enabling us to transform a GRN into a Gene Regulatory Neural Network (GRNN). We perform stability analysis for each gene-perceptron within the fully-connected GRNN sub network to determine temporal as well as stable concentration outputs that will result in reliable computing performance. We focus on a non-linear classifier application for the GRNN, where we analyzed generic multi-layer GRNNs as well as E.Coli GRNN that is derived from trans-omic experimental data. Our analysis found that varying the parameters of the chemical reactions can allow us shift the boundaries of the classification region, laying the platform for programmable GRNNs that suit diverse application requirements. | Adrian Ratwatte, Samitha Somathilaka, Sasitharan Balasubramaniam, Assaf A. Gilad | 2023-09-14T21:37:38Z | http://arxiv.org/abs/2310.04424v1

# Stability Analysis of Non-Linear Classifiers using Gene Regulatory Neural Network for Biological AI
###### Abstract
The Gene Regulatory Network (GRN) of biological cells governs a number of key functionalities that enable them to adapt and survive through different environmental conditions. Close observation of the GRN shows that its structure and operational principles resemble an Artificial Neural Network (ANN), which can pave the way for the development of Biological Artificial Intelligence. In particular, a gene's transcription and translation process exhibits a sigmoidal-like property based on transcription factor inputs. In this paper, we develop a mathematical model of the gene-perceptron using a dual-layered transcription-translation chemical reaction model, enabling us to transform a GRN into a Gene Regulatory Neural Network (GRNN). We perform stability analysis for each gene-perceptron within the fully-connected GRNN sub-network to determine temporal as well as stable concentration outputs that will result in reliable computing performance. We focus on a non-linear classifier application for the GRNN, where we analyze generic multi-layer GRNNs as well as an _E.Coli_ GRNN that is derived from trans-omic experimental data. Our analysis found that varying the parameters of the chemical reactions allows us to shift the boundaries of the classification region, laying the platform for programmable GRNNs that suit diverse application requirements.
## 1 Introduction
In recent years, the field of Artificial Intelligence (AI) has developed rapidly, resulting in sophisticated learning algorithms that have benefited a plethora of applications (e.g., manufacturing, economics, computer vision, robotics, etc.) [(1)]. Inspired by the functions of neurons, the ultimate vision of AI is to create human-like intelligence that one day will have a working capacity close to the brain. Based on the system applications, AI can be categorized into software- or hardware-based. Software-based AI includes various forms of algorithms that depend on their structure as well as the training process (e.g., convolutional neural networks [(2)] and recurrent neural networks [(3)]), where a novel application is large language models such as the Generative Pre-trained Transformer (GPT) [(4)].
Neuromorphic processors are a hardware-based AI platform that architecturally consists of neurons and synapses constructed from memristor devices that communicate based on encoded neural spikes [(5)]. Presently, the vast majority of AI machines are constructed using instruction-encoded circuits and silicon-based semiconductors and nanotechnology [(6)], [(7)], [(8)]. While this enables more efficient computer systems with capabilities of learning and computing, it also results in significant challenges, such as deployment in wet non-silicon mediums (e.g., biological mediums), as well as large energy consumption [(9)].
Current research has aimed to address these challenges, and one direction taken is Biological AI, where computing is performed through living biological cells [(10)], [(11)]. A recent example is the _DishBrain_, where the system is composed of living neurons that can be trained to play the game of "_Pong_" on a computer [(12)]. In other works, ANNs have been programmed into bacterial cells [(13)], [(14)]. Similarly, molecular circuits programmed to behave like ANNs have also been proposed; one example is the Bio-molecular Neural Network (BNN) [(15)]. The underlying basis for all these approaches is the communication of molecules [(16)] that operates as part of the chemical reactions to enable computing operations.
From the perspective of Gene Regulatory Networks (GRNs), there has been a connection between their structure and the
operation of an ANN. In our recent work [17], we developed a model that transforms the gene-gene interactions within the GRN into weights, forming a **GRNN**, while also exploring the impact of structural changes on the computing capacity. In this study, we investigate the behaviour of a fully-connected GRNN derived from a GRN, focusing on the stability analysis of the gene translation and transcription process during its computing operation. The stability analysis focuses on each perceptron of the GRNN, which we term a **gene-perceptron**. Figure 1 illustrates the mapping from ANN to GRNN. In a conventional ANN, a perceptron takes multiple inputs (\(x_{1}\) and \(x_{2}\)) and computes their weighted summation (\(\sum\)), which goes through an activation function (\(z(x)\)). In the context of the GRNN, the weights are represented as the Transcription Factor (TF) concentration corresponding to the half-maximal RNA concentration (\(K_{A}\)) and the gene-product copy number (\(C_{N}\)), which individually impact RNA and protein concentrations. Input-genes (\(g_{X_{1}}\) and \(g_{X_{2}}\)) have TFs that bind to the promoter region of the gene-perceptron \(g_{1,i}\), which transcribes RNA (\(R_{i}\)) that is then translated into protein (\(P_{i}\)). This can be considered a weighted summation, which results in regulatory effects on gene expression within the gene-perceptron. Based on the stability of each gene-perceptron at steady state, the maximum-stable protein concentration (\([P]^{*}\)) represents the output.
We mathematically model the chemical reactions of the transcription and translation process of a gene-perceptron, which we term the dual-layered transcription-translation reaction model (from here on, simply the dual-layered chemical reaction model). The dual-layered chemical reaction model can be integrated with trans-omic data (transcriptome and proteome) and the cellular GRN in order to identify active genes for specific environments, which is the basis for creating the GRNN.
Based on this platform, we perform stability analysis at the steady state of molecular production (RNA and protein) for the gene-perceptron. Once we prove the stability of the gene-perceptron, as an application we focus on a non-linear classifier relying on the maximum-stable protein concentration for different concentrations of TFs that act as inputs. To evaluate the model's performance, we analyze two generic multi-layer GRNNs and an _E.Coli_ GRNN. We also show that we can manipulate and shift the classification areas based on different parameter configurations.
The contributions of this study can be outlined as follows:
* **Developing GRNNs inspired by ANN structures using dual-layered chemical reaction models:** Using the dual-layered chemical reaction model, we show that the gene transcription and RNA translation process exhibits sigmoidal-like molecular concentration dynamics at its stable points. This behavior is governed by the weights, which are a function of the gene product copy number and the TF concentration corresponding to the half-maximal RNA concentration.
* **Stability analysis of GRNN:** We develop full mathematical models derived from the chemical reactions and apply Lyapunov stability analysis to each gene-perceptron to determine the stable protein concentration as well as its temporal production, which facilitates reliable GRNN computing.
* **GRNN application for non-linear classifiers:** Using the stability analysis, we determine the decision boundaries of the derived GRNNs, classifying data within regions of protein concentration output. By varying the parameters of the chemical reactions, we demonstrate how the classification area can be shifted, which can serve as a tool for engineering the GRN for several non-linear classifiers based on the application's requirements.
## System Modeling
This section describes the mathematical models for gene transcription and translation within gene-perceptrons, employing a dual-layered chemical reaction model (Figure 2) that breaks down the steps of the transcription and translation process. The production of RNA depends on the RNA polymerase, TFs, and \(\sigma\) factors that bind to the promoter (\(Prom\)) [18], as well as the dissociation constant (\(k_{d}\)). Once the TF binds to the promoter \(Prom\), transcription begins at the rate \(k_{1}\). This is followed by RNA degradation at the rate \(d_{1}\), which depends on the RNA half-life [19], RNA-binding proteins [20], and degradosome components that include _RNase E_, _RNA helicase_, and _PNPase_[21]. Following transcription, the RNA is translated into protein at the rate \(k_{2}\), facilitated by ribosomes and transfer RNA (tRNA). Once the RNA is translated, the protein molecules start to degrade gradually at the rate \(d_{2}\). Significant factors that affect the degradation of protein are non-coding RNAs, as well as energy-dependent and energy-independent proteases. Overall, to maintain the concentration
Figure 1: Illustration of mapping between components of ANN to GRNN. In this depiction, \(w_{i}\) and \(w_{i}(K_{A},C_{N})\) represent the weights of a perceptron in ANN and GRNN, respectively, while activation function \(z(x)\) is equivalent to a combination of the transcription process of RNA concentration \([R]_{i}\) as well as translation of maximum-stable protein concentration \([P]_{i}^{*}\). The chemical reactions are governed by the transcriptions rate \(k_{1}\), translation rate \(k_{2}\), degradation rate of RNA \(d_{1}\) and degradation rate of protein \(d_{2}\).
stability in the cell, RNAs and protein production are balanced by the degradation process.
By taking the dual-layered chemical reaction model into account, we model the concentration changes at the transcriptome and proteome levels using mathematical models. These models enable us to assess the stability of the gene-perceptron expression through the eigenvalue method and to determine the stabilization time using the Lyapunov stability theorem. After determining whether a particular gene-perceptron expression is stable, we determine the stability of the entire GRNN. Then, based on the application study, the classification ranges for each gene-perceptron in a network are determined at the equilibrium maximum-stable protein concentration. Based on the sigmoidal input-output behavior and adjustable threshold, we deduce that the gene-perceptrons in the GRNN possess conventional NN properties. For an overview of the procedure described above, please refer to Figure 3.
## Modelling Transcription of a Gene
In this section, we discuss transcription and the corresponding RNA concentration model. During the transcription process, the RNA polymerase and TFs bind to the promoter region, and the \(\sigma\) factor then attaches to the promoter region and unwinds the DNA (22). This is followed by the release of the \(\sigma\) factor from the polymerase, allowing for the elongation of the RNA chain. Based on (23), the RNA concentration change over time \(t\) for a particular gene-perceptron \(i\) can be expressed as follows (chemical species are represented using uppercase letters (e.g., \(X\)), and their corresponding concentrations are enclosed within brackets (e.g., \([X]\)))
\[\frac{d[R]_{i}}{dt}=k_{1_{i}}C_{N_{i}}\frac{[TF]^{n}}{K_{A_{i}}^{n}+[TF]^{n}}- d_{1_{i}}[R]_{i}. \tag{1}\]
The gene-perceptron is activated by the TF, where \([R]_{i}\), \(k_{1_{i}}\), \([TF]\), \(d_{1_{i}}\), \(n\), \(C_{N_{i}}\) and \(K_{A_{i}}\) are the RNA concentration, transcription rate, concentration of TFs, degradation rate of RNA, Hill coefficient, gene product copy number and TF concentration when the production of RNA is at the half maximal point for gene-perceptron \(i\), respectively.
Given the initial RNA concentration transcribed by a gene-perceptron is \([R]_{i}(0)\) (i.e., \([R]_{i}(t=0)=[R]_{i}(0)\)), the solution of Eq. 1 is derived as follows
\[[R]_{i}=\frac{k_{1_{i}}C_{N_{i}}}{d_{1_{i}}}\left(\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}\right)(1-e^{-d_{1_{i}}t})+[R]_{i}(0)e^{-d_{1_{i}}t}. \tag{2}\]
In contrast, in the event that the gene-perceptron is repressed by the TF, the RNA concentration changes over time \(t\) is represented as follows,
\[\frac{d[R]_{i}}{dt}=k_{1_{i}}C_{N_{i}}\frac{K_{A_{i}}^{n}}{K_{A_{i}}^{n}+[TF]^ {n}}-d_{1_{i}}[R]_{i}. \tag{3}\]
Eqs. 1 and 3 are expressed as mass-balance differential equations capturing the difference between RNA synthesis, which is modelled using the Hill function, and the degradation of RNA (24), (25), (26). The Hill coefficient \(n\) represents the number of TF molecules that bind simultaneously to the promoter \(Prom\), with dissociation constant \(K_{d}\), when the gene-perceptron is transcribing RNA (23); this is represented as \(Prom+n\ TF\stackrel{{ K_{d}}}{{\rightleftharpoons}}Prom_{nTF}\). The Hill coefficient is critical for the sigmoidal input-output characteristics of the gene-perceptron, as depicted in Figure 4. According to the plot, as we increase the Hill coefficient, the sigmoidicity of the maximum-stable protein concentration (\([P]^{*}\)) over the input-gene concentration (\([TF]\)) increases. Thus, a gene-perceptron with a higher Hill coefficient exhibits more sigmoidal-like behavior (for our analytical model we consider \(n=1\)).
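The Hill term driving this sigmoidal behavior is straightforward to evaluate numerically; the following numpy sketch reproduces the trend in Figure 4, with the \(K_{A}\) value and input range chosen purely for illustration.

```python
import numpy as np

def hill(tf, K_A, n):
    """Fraction of maximal activation for TF concentration tf (cf. Eq. 1)."""
    return tf**n / (K_A**n + tf**n)

tf = np.linspace(0.0, 10.0, 200)
# Higher Hill coefficients give steeper, more sigmoidal responses.
curves = {n: hill(tf, K_A=5.0, n=n) for n in (1, 2, 4, 8)}
```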
Figure 3: Flow chart for the calculation of classification areas as well as stability based on the dual-layered transcription-translation chemical reaction model of each gene-perceptron.
Figure 2: Illustration of the dual-layered transcription-translation chemical reaction model of the gene-perceptron. Each component corresponds to the synthesis and degradation of RNA and protein for the \(j^{\text{th}}\) gene-perceptron in the \(i^{\text{th}}\) layer (\(g_{i,j}\)) of the GRNN. Here, \(RnpB\), \(SsrA\) and \(SsrS\) are examples of non-coding RNA (ncRNA). Examples of energy-dependent proteases include \(Lon\), \(HflB\), \(ClpXP\) and \(HslUV\). Active TF, RNAP, PNPase, RNase E and tRNA correspond to active TFs, RNA polymerase, polynucleotide phosphorylase, ribonuclease E and transfer RNA, respectively.
## Modelling Translation of a RNA
In this section, we describe the RNA-to-protein translation and the associated models. Initially, the ribosome and tRNAs form a complex that draws the amino acids of the polypeptide chain to attach at the first codon position of the RNA [27]. This is followed by the tRNAs adding amino acids one by one to form a polypeptide chain while moving along the RNA [28]. Once the stop codon is detected, the polypeptide chain is released, dissociating the ribosome complex from the RNA and forming the protein [29]. This process can be summarized through the protein concentration change over time, modelled as follows for a particular gene-perceptron \(i\):
\[\frac{d[P]_{i}}{dt}=k_{2_{i}}[R]_{i}-d_{2_{i}}[P]_{i}, \tag{4}\]
where \([P]_{i}\), \(k_{2_{i}}\) and \(d_{2_{i}}\) are the protein concentration, translation rate, and protein degradation rate for gene-perceptron \(i\), respectively. Moreover, \([R]_{i}\) is the RNA concentration from Eq. 1 if the TF activates the gene-perceptron \(i\), or from Eq. 3 if the TF represses it. Similar to Eqs. 1 and 3, Eq. 4 is a mass-balance differential equation taking the difference between the RNA translated into protein at the rate \(k_{2_{i}}\) and the amount of protein degraded at the rate \(d_{2_{i}}\) due to the factors presented in Figure 2. Provided that the initial protein concentration translated for gene-perceptron \(i\) is \([P]_{i}(0)\) (i.e., \([P]_{i}(t=0)=[P]_{i}(0)\)), the solution of Eq. 4 is given by
\[[P]_{i}=\frac{k_{1_{i}}k_{2_{i}}C_{N_{i}}}{d_{1_{i}}d_{2_{i}}}\left(\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}\right)\left(1-e^{-d_{2_{i}}t}\right)+\frac{k_{2_{i}}}{d_{2_{i}}-d_{1_{i}}}\left([R]_{i}(0)-\frac{k_{1_{i}}C_{N_{i}}}{d_{1_{i}}}\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}\right)\left(e^{-d_{1_{i}}t}-e^{-d_{2_{i}}t}\right)+[P]_{i}(0)e^{-d_{2_{i}}t}. \tag{5}\]
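The closed form in Eq. 5 can be cross-checked by integrating Eqs. 1 and 4 numerically; in the scipy sketch below, all rate constants and the input TF concentration are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, d1, d2 = 1.0, 1.0, 0.2, 0.1   # rates (assumed values)
C_N, K_A, n, TF = 1.0, 2.0, 1, 4.0    # copy number, half-max TF, Hill, input

def rhs(t, y):
    R, P = y
    dR = k1 * C_N * TF**n / (K_A**n + TF**n) - d1 * R   # Eq. (1)
    dP = k2 * R - d2 * P                                # Eq. (4)
    return [dR, dP]

sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0])
# Steady-state values (derived as Eqs. 6 and 7 in the next section):
R_star = k1 * C_N / d1 * TF**n / (K_A**n + TF**n)
P_star = k2 * R_star / d2
# sol.y[:, -1] approaches (R_star, P_star), the stable equilibrium.
```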
## Methods
This section introduces the mathematical models for the stability analysis and RNA/Protein concentration changes over time, and subsequently demonstrates how to apply these mathematical models in the GRNNs.
### Gene Expression Stability Analysis
In this section, we discuss our approach to analyzing the stability of the gene-perceptron expression. We view a gene-perceptron as stable when the RNA transcription and protein translation concentrations reach their maxima over time and remain at that level, exhibiting sigmoidal behavior. To confirm the existence of transcription and translation upper bounds, we use eigenvalue-based stability analysis. This, in turn, ensures a stable classification region of the GRNN, since a protein concentration with minimal fluctuations minimizes computing errors. Moreover, another crucial property to consider in GRNN computing is the time the GRNN takes to reach stability, which is investigated using the Lyapunov function in the following sections.
### Stability of Gene-Perceptron based on Eigenvalues
The stability of the gene-perceptron is governed by the concentration changes of the gene expression and protein translation; using the Jacobian matrix of Eqs. 1 and 4, we can characterize the equilibrium point through the eigenvalues. While we consider only the transcription-activation case of Eq. 1 here, our approach also applies to the repression process defined in Eq. 3. Since we analyse the stability of the gene-perceptron at the equilibrium point, we can represent the maximum-stable RNA (\([R]_{i}^{*}\)) and protein (\([P]_{i}^{*}\)) concentrations as follows:
\[[R]_{i}^{*}=\frac{k_{1_{i}}C_{N_{i}}}{d_{1_{i}}}\left(\frac{[TF]^{n}}{[TF]^{n} +K_{A_{i}}^{n}}\right), \tag{6}\]
\[[P]_{i}^{*}=\frac{k_{1_{i}}k_{2_{i}}C_{N_{i}}}{d_{1_{i}}d_{2_{i}}}\left(\frac{[ TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}\right). \tag{7}\]
The maximum-stable RNA and protein concentrations are determined for different TF concentrations. Additionally, we can vary gene-specific parameters such as \(C_{N_{i}}\) to achieve different non-linear classification ranges [30], implying that by engineering the cell, we can change its decision-making process.
To determine the eigenvalues of Eqs. 1 and 4 at the equilibrium points of Eqs. 6 and 7, we use the Jacobian matrix given in Eq. 24 (see Appendix). The eigenvalues are \(\lambda_{1}=-d_{1_{i}}\) and \(\lambda_{2}=-d_{2_{i}}\). Since both eigenvalues are negative, we conclude that the gene-perceptron reaches a stable maximum concentration level.
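This argument is easy to verify numerically: the Jacobian of Eqs. 1 and 4 with respect to \(([R]_{i},[P]_{i})\) is lower triangular, so its eigenvalues are the negated degradation rates regardless of the Hill term. The rates in the sketch below are placeholders.

```python
import numpy as np

d1, d2, k2 = 0.2, 0.1, 1.0        # degradation and translation rates (assumed)
J = np.array([[-d1, 0.0],         # d(dR/dt)/dR, d(dR/dt)/dP
              [k2, -d2]])         # d(dP/dt)/dR, d(dP/dt)/dP
print(np.linalg.eigvals(J))       # [-0.2, -0.1]: both negative, hence stable
```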
### Stability of a Gene-Perceptron using Lyapunov function
To determine the temporal stability, we employ the Lyapunov stability theorem based on the function \(V([R]_{i},[P]_{i})\) (see Appendix Eq. 25), which satisfies the necessary conditions: \(V\left([R]_{i},[P]_{i}\right)=0\) when \([R]_{i}=[R]_{i}^{*}\) and \([P]_{i}=[P]_{i}^{*}\), where \([R]_{i}^{*}\) and \([P]_{i}^{*}\) are the RNA and protein concentrations at the equilibrium; and \(V\left([R]_{i},[P]_{i}\right)>0\) elsewhere, due to the quadratic nature of all terms. Finally, we consider the first derivative of Eq. 25, given by Eq. 27, as the last condition to be satisfied for the stability of the gene-perceptron. According to Lyapunov's theorem, if Eq. 27 is negative, the gene-perceptron is
Figure 4: Sigmoidicity fluctuations for different Hill coefficients.
asymptotically stable, and if Eq. 27 is less than or equal to zero, the gene-perceptron is Lyapunov stable (see Eqs. 25-27 in the Appendix for the complete derivation). Since it is difficult to determine the sign of the derivative of the Lyapunov function in Eq. 27 directly (see the Appendix), we illustrate the temporal fluctuation of Eq. 27 in Figure 5. This provides insight into the dynamic stability behavior of the gene-perceptron.
## Gene Regulatory Neural Network Analysis
While the previous section presented the stability analysis of each individual gene-perceptron, the gene-perceptrons need to be integrated into a GRNN in order to perform the classification operation. We focus on two types of generic multi-layer GRNNs. In the first network, we consider direct gene relationships within the GRN from the inputs to the outputs, mimicking a multi-layer ANN. In the second case, we consider a random structured multi-layer GRNN with intermediate gene-perceptrons.
### Multi-layer GRNN
This GRNN, illustrated in Figure 6, consists of three hidden-layer gene-perceptrons (\(g_{1,1},g_{1,2},g_{1,3}\)) and one output-layer gene-perceptron (\(g_{2,1}\)) (\(g_{i,j}\) represents the \(j^{\text{th}}\) gene-perceptron in the \(i^{\text{th}}\) layer of the sub-network). The outputs from layer 1 to layer 2 are the concentrations \([TF]_{1,1}\), \([TF]_{1,2}\), \([TF]_{1,3}\), and \([P]\) is the output from gene-perceptron \(g_{2,1}\). The two input-genes (\(g_{X_{1}}\) and \(g_{X_{2}}\)) are TFs with corresponding concentrations (\([TF]_{x_{1}}\) and \([TF]_{x_{2}}\)), respectively. The RNA concentration change over time \(t\) for the hidden-layer gene-perceptrons, based on Eq. 1, can be expressed as
\[\frac{d[R]_{i}}{dt}=k_{1_{i}}C_{N_{i}}\left(\frac{[TF]_{x_{1}}^{n}}{K_{A_{i}}^{ n}+[TF]_{x_{1}}^{n}}\right)\cdot\left(\frac{[TF]_{x_{2}}^{n}}{K_{A_{i}}^{n}+[TF]_{x_ {2}}^{n}}\right)-d_{1_{i}}[R]_{i}, \tag{8}\]
for the activators, \(i=g_{1,1},g_{1,2}\). Since the gene-perceptron \(g_{1,3}\) is repressed by input-gene \(g_{x_{2}}\), the change in its RNA production, based on Eq. 3, is given by
\[\frac{d[R]_{g_{1,3}}}{dt}=k_{1_{g_{1,3}}}C_{N_{g_{1,3}}}\left(\frac{[TF]_{x_{1}}^{n}}{K_{A_{g_{1,3}}}^{n}+[TF]_{x_{1}}^{n}}\cdot\frac{K_{A_{g_{1,3}}}^{n}}{K_{A_{g_{1,3}}}^{n}+[TF]_{x_{2}}^{n}}\right)-d_{1_{g_{1,3}}}[R]_{g_{1,3}}. \tag{9}\]
The RNA concentration change of the output gene-perceptron \(g_{2,1}\), whose TFs are the output protein concentrations of gene-perceptrons \(g_{1,1},g_{1,2}\) and \(g_{1,3}\) (\([TF]_{1,1}=[P]_{g_{1,1}},[TF]_{1,2}=[P]_{g_{1,2}}\) and \([TF]_{1,3}=[P]_{g_{1,3}}\)), which accumulate to invoke its expression, is given by
\[\frac{d[R]_{g_{2,1}}}{dt}=k_{1_{g_{2,1}}}C_{N_{g_{2,1}}}\left( \frac{[TF]_{1,1}^{n}}{K_{A_{g_{2,1}}}^{n}+[TF]_{1,1}^{n}}\right)\\ \cdot\left(\frac{[TF]_{1,2}^{n}}{K_{A_{g_{2,1}}}^{n}+[TF]_{1,2}^ {n}}\right)\cdot\left(\frac{[TF]_{1,3}^{n}}{K_{A_{g_{2,1}}}^{n}+[TF]_{1,3}^ {n}}\right)-d_{1_{g_{2,1}}}[R]_{g_{2,1}}. \tag{10}\]
Each gene-perceptron also undergoes a translation process; therefore, the protein concentration change for each gene-perceptron can be modelled using Eq. 4 for \(i=g_{1,1}\), \(g_{1,2}\), \(g_{1,3}\) and \(g_{2,1}\). The maximum-stable protein concentrations are derived by setting Eqs. 8-10 to zero to find \([R]_{i}^{*}\), which is then substituted into Eq. 4, which is in turn set to zero, for \(i=g_{1,1},g_{1,2},g_{1,3}\) and \(g_{2,1}\), respectively:
\[i=g_{1,1},g_{1,2}\Longrightarrow[P]_{i}^{*}=\frac{k_{1_{i}}k_{2_{i}}C_{N_{i}}}{d_{1_{i}}d_{2_{i}}}\left(\frac{[TF]_{x_{1}}^{n}}{K_{A_{i}}^{n}+[TF]_{x_{1}}^{n}}\right)\times\left(\frac{[TF]_{x_{2}}^{n}}{K_{A_{i}}^{n}+[TF]_{x_{2}}^{n}}\right), \tag{11}\]
\[[P]_{g_{1,3}}^{*}=\frac{k_{1_{g_{1,3}}}k_{2_{g_{1,3}}}C_{N_{g_{1,3}}}}{d_{1_{g_{1,3}}}d_{2_{g_{1,3}}}}\left(\frac{[TF]_{x_{1}}^{n}}{K_{A_{g_{1,3}}}^{n}+[TF]_{x_{1}}^{n}}\right)\times\left(\frac{K_{A_{g_{1,3}}}^{n}}{K_{A_{g_{1,3}}}^{n}+[TF]_{x_{2}}^{n}}\right), \tag{12}\]
Figure 5: Temporal stability of a Gene-perceptron based on the derivative of the Lyapunov function with respect to time. This shows that the gene-perceptron reaching stability over time.
Figure 6: Multi-layer GRNN with two input-layer nodes, three hidden-layer gene-perceptrons (\(g_{1,1},g_{1,2},g_{1,3}\)) and one output-layer gene-perceptron (\(g_{2,1}\)), whose corresponding output concentrations are the transcription factor concentrations \([TF]_{1,1},[TF]_{1,2},[TF]_{1,3}\) and the protein concentration \([P]\), respectively. There are two input-genes (\(g_{x_{1}}\), \(g_{x_{2}}\)) considered as two TFs with concentrations \([TF]_{x_{1}}\) and \([TF]_{x_{2}}\), respectively. In this context, \(g_{i,j}\) represents the \(j^{\text{th}}\) gene-perceptron in the \(i^{\text{th}}\) layer of the GRNN. Input-gene activators and input-gene repressors are denoted by \((+)\) and \((-)\) edges, respectively. The weights \((w)\) of this GRNN are a function of the TF concentration corresponding to the half-maximal RNA concentration (\(K_{A_{i}}\)) and the gene-product copy number (\(C_{N_{i}}\)) for gene-perceptron \(i\), represented as \(w(K_{A_{i}},C_{N_{i}})\).
\[[P]^{*}_{g_{2,1}}=\frac{k_{1_{g_{2,1}}}k_{2_{g_{2,1}}}C_{N_{g_{2,1}}} }{d_{1_{g_{2,1}}}d_{2_{g_{2,1}}}}\left(\frac{[TF]^{n}_{1,1}}{K^{n}_{A_{g_{2,1}}}+[ TF]^{n}_{1,1}}\right)\] \[\times\left(\frac{[TF]^{n}_{1,2}}{K^{n}_{A_{g_{2,1}}}+[TF]^{n}_{1,2}}\right)\left(\frac{[TF]^{n}_{1,3}}{K^{n}_{A_{g_{2,1}}}+[TF]^{n}_{1,3}} \right). \tag{13}\]
Eqs. 11-13, which give the stable concentrations of the proteins produced, are used to compute the classification areas for each gene-perceptron based on the concentration value, which is further elaborated in the Results section through a case study. Subsequently, we apply the approach from the Methods section to show the stability of the gene-perceptrons in this GRNN. The overall stability of the GRNN is based on the derived Lyapunov function of Eq. 27 (see Appendix), which, for \(l\) TFs connected to a gene-perceptron \(i\), can be expressed as
\[\frac{dV}{dt}=-\prod_{j=1}^{l}\frac{C_{N_{i}}^{2}\,[TF]_{j}^{2n}\,k_{1_{i}}^{2}\,e^{-2t(d_{1_{i}}+d_{2_{i}})}}{d_{1_{i}}d_{2_{i}}([TF]_{j}^{n}+K_{A_{j}}^{n})^{2}(d_{1_{i}}-d_{2_{i}})^{2}}\times\Big[\big(d_{2_{i}}^{3}e^{2d_{2_{i}}t}-2d_{1_{i}}d_{2_{i}}^{2}e^{2d_{2_{i}}t}+d_{1_{i}}d_{2_{i}}e^{2d_{2_{i}}t}\big)+\big(d_{1_{i}}k_{2_{i}}^{2}e^{2d_{1_{i}}t}+d_{2_{i}}k_{2_{i}}^{2}e^{2d_{2_{i}}t}\big)-\big(d_{1_{i}}k_{2_{i}}^{2}e^{t(d_{1_{i}}+d_{2_{i}})}+d_{2_{i}}k_{2_{i}}^{2}e^{t(d_{1_{i}}+d_{2_{i}})}\big)\Big], \tag{14}\]
where \([TF]_{j}\) and \(K_{A_{j}}\) are the concentration of the \(j^{\text{th}}\) TF and the corresponding half-maximal RNA concentration for gene-perceptron \(i\), respectively.
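To make the use of Eqs. 11-13 concrete, the following numpy sketch evaluates the maximum-stable output of the network in Figure 6 on a grid of input concentrations and thresholds it at \([P]^{*}=0.5\), mirroring the classification maps discussed in the Results section; all gains and \(K_{A}\) values are placeholders rather than the parameter sets in Table 1.

```python
import numpy as np

n = 1
def act(tf, K_A):  return tf**n / (K_A**n + tf**n)    # activation Hill term
def rep(tf, K_A):  return K_A**n / (K_A**n + tf**n)   # repression Hill term

def perceptron(gain, *hill_terms):
    """[P]* = (k1*k2*C_N / (d1*d2)) * product of Hill terms (Eqs. 11-13)."""
    return gain * np.prod(hill_terms, axis=0)

tf1, tf2 = np.meshgrid(np.linspace(0, 5, 100), np.linspace(0, 5, 100))
P11 = perceptron(1.0, act(tf1, 2.0), act(tf2, 2.0))   # g_{1,1}: two activators
P12 = perceptron(1.0, act(tf1, 1.5), act(tf2, 1.5))   # g_{1,2}
P13 = perceptron(1.0, act(tf1, 2.0), rep(tf2, 2.0))   # g_{1,3}: repressed input
P21 = perceptron(8.0, act(P11, 0.5), act(P12, 0.5), act(P13, 0.5))  # g_{2,1}
above_threshold = P21 > 0.5    # classification region of the output perceptron
```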
### Random Structured GRNN
As described earlier, gene-perceptrons within a GRN that share common TFs may have intermediate gene-perceptrons along their connecting paths. We analyze how this impacts the overall stability of the GRNN; the network for this case is presented in Figure 7. In this form of network, it is necessary to consider the RNA concentration change of the intermediate gene-perceptron (\(g_{2,1}\)) and its impact on the output-layer gene-perceptron (\(g_{3,1}\)). The expressions for these gene-perceptrons, with TFs from their immediate predecessors, are as follows:
\[\frac{d[R]_{g_{2,1}}}{dt}=k_{1_{g_{2,1}}}C_{N_{g_{2,1}}}\left( \frac{[TF]^{n}_{1,1}}{K^{n}_{A_{g_{2,1}}}+[TF]^{n}_{1,1}}\right)-d_{1_{g_{2,1}} }[R]_{g_{2,1}}, \tag{15}\]
\[\frac{d[R]_{g_{3,1}}}{dt}=k_{1_{g_{3,1}}}C_{N_{g_{3,1}}}\left( \frac{[TF]^{n}_{2,1}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{2,1}}\right)\cdot\left( \frac{[TF]^{n}_{1,2}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{1,2}}\right)\] \[\times\left(\frac{[TF]^{n}_{1,3}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{1,3}}\right)-d_{1_{g_{3,1}}}[R]_{g_{3,1}}. \tag{16}\]
Here, the protein concentration in Eq. 15 follows from Eq. 5 (i.e., \([TF]_{1,1}=[P]_{1,1}\)), since the gene-perceptron \(g_{2,1}\) is activated by gene-perceptron \(g_{1,1}\). The RNA concentration models behave the same as in the case without the intermediate gene-perceptron for the gene-perceptrons \(g_{1,1}\), \(g_{1,2}\), \(g_{1,3}\), and can be derived directly from Eqs. 8 and 9. Using Eq. 4, we can determine the protein concentration change for each gene-perceptron in Figure 7.
Setting Eqs. 15 and 16 to zero, we can determine \([R]_{i}^{*}\), which is then applied to Eq. 4 to determine the maximum-stable value for \(i=g_{2,1}\) and \(g_{3,1}\). This results in the following maximum-stable protein concentrations:
\[[P]^{*}_{g_{2,1}}=\frac{k_{1_{g_{2,1}}}k_{2_{g_{2,1}}}C_{N_{g_{2,1}}}}{d_{1_{g _{2,1}}}d_{2_{g_{2,1}}}}\left(\frac{[TF]^{n}_{1,1}}{K^{n}_{A_{g_{2,1}}}+[TF]^{n }_{1,1}}\right), \tag{17}\]
\[[P]^{*}_{g_{3,1}}=\frac{k_{1_{g_{3,1}}}k_{2_{g_{3,1}}}C_{N_{g_{3,1}}}}{d_{1_{g _{3,1}}}d_{2_{g_{3,1}}}}\left(\frac{[TF]^{n}_{2,1}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n }_{2,1}}\right)\] \[\cdot\left(\frac{[TF]^{n}_{1,2}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{1,2}}\right)\left(\frac{[TF]^{n}_{1,3}}{K^{n}_{A_{g_{3,1}}}+[TF]^{n}_{1,3}} \right). \tag{18}\]
We use Eq. 11 to determine \([P]_{i}^{*}\) for \(i=g_{1,1}\) and \(g_{1,2}\), while for \(i=g_{1,3}\) we use Eq. 12. For the stability analysis, Eq. 14 is used with \(l=2\) for \(g_{1,1}\), \(g_{1,2}\) and \(g_{1,3}\), \(l=1\) for \(g_{2,1}\), and \(l=3\) for \(g_{3,1}\), corresponding to the number of TFs for each gene-perceptron.
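For completeness, a similar sketch of the forward computation for the random-structured network in Figure 7 is given below, with the intermediate gene-perceptron \(g_{2,1}\) inserted between \(g_{1,1}\) and the output (Eqs. 11, 12, 17 and 18); the shared gain and \(K_{A}\) values are placeholders, and the repressive edge follows Eq. 12.

```python
import numpy as np

def act(tf, K_A, n=1):  return tf**n / (K_A**n + tf**n)
def rep(tf, K_A, n=1):  return K_A**n / (K_A**n + tf**n)

def grnn_output(tf_x1, tf_x2, gain=1.0, K_A=2.0):
    """Maximum-stable output [P]* of the random-structured GRNN."""
    P11 = gain * act(tf_x1, K_A) * act(tf_x2, K_A)          # Eq. (11)
    P12 = gain * act(tf_x1, K_A) * act(tf_x2, K_A)          # Eq. (11)
    P13 = gain * act(tf_x1, K_A) * rep(tf_x2, K_A)          # Eq. (12)
    P21 = gain * act(P11, K_A)                              # Eq. (17), intermediate
    return gain * act(P21, K_A) * act(P12, K_A) * act(P13, K_A)  # Eq. (18)

print(grnn_output(3.0, 4.0))
```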
## 4 Results
In this section, we perform the temporal stability analysis and obtain the classification areas for the two multi-layer GRNN topologies (Figures 6 and 7), as well as for the GRNN derived from the _E.Coli_ GRN.
from the input-gene \(g_{X_{1}}\). The output-layer gene-perceptron (\(g_{2,1}\)) follows a similar trend to gene-perceptrons \(g_{1,1}\) and \(g_{1,2}\), attaining Lyapunov stability within the initial 30 seconds, because its immediate predecessors are all activators.
Given the gene-perceptron's stability at the equilibrium (Figure 8), we can use Eqs. 11-13 to calculate the output protein \([P]_{i}^{*}\) for different input concentrations (\([TF]_{x_{1}}\) and \([TF]_{x_{2}}\)). The calculated output protein \([P]_{i}^{*}\) is illustrated over varying input concentrations, highlighting the values above and below the threshold (\([P]^{*}=0.5\)). The decision boundaries reflect how the classification areas change based on the edge type (activation or repression) connected to the target gene-perceptron and the corresponding parameters in Eqs. 11-13. The inputs (\([TF]_{x_{1}}\) and \([TF]_{x_{2}}\)) vary, while parameters such as the gene product copy number (\(C_{N_{i}}\)), transcription rate (\(k_{1_{i}}\)), translation rate (\(k_{2_{i}}\)), RNA degradation rate (\(d_{1_{i}}\)), protein degradation rate (\(d_{2_{i}}\)) and TF concentration corresponding to the half-maximal RNA concentration (\(K_{A_{i}}\)) are kept constant. We consider two parameter sets to determine the different classification regions, presented in Table 1.
For parameter set 1, we obtain the classification areas shown in Figure 9(a). The decision boundary and the top view for each gene-perceptron are shown in the first and second rows, respectively. The gene-perceptron \(g_{1,2}\) has the largest classification area above the threshold due to its lower TF concentration corresponding to the half-maximal RNA concentration \(K_{A_{i}}\), compared to gene-perceptrons \(g_{1,1}\) and \(g_{1,3}\). Moreover, the decision boundaries for gene-perceptrons \(g_{1,1}\) and \(g_{1,2}\) exhibit a similar shape, classifying the majority of values above the threshold. In contrast, the gene-perceptron \(g_{1,3}\) covers a larger area below the threshold since it is repressed by the input-gene \(g_{x_{2}}\). The intersection of the classification areas of the hidden-layer gene-perceptrons is represented by the output-layer gene-perceptron \(g_{2,1}\), where the classification area above the threshold is approximately bounded by the input concentrations \(2.5\leq[TF]_{x_{1}}\leq 3.5\) and \(3.4\leq[TF]_{x_{2}}\). Due to the significant contribution from gene-perceptrons \(g_{1,1}\) and \(g_{1,2}\) beyond the threshold, the output-layer gene-perceptron \(g_{2,1}\) exhibits a rightward shift.
For parameter set 2 (Table 1), the lower \(K_{A_{i}}\) values have shifted the classification area above the threshold compared to parameter set 1. This shift is evident in Figure 9(b), particularly for the gene-perceptron \(g_{1,2}\), for which the majority of the values are now classified above the threshold. Conversely, for the gene-perceptron \(g_{1,3}\), the classification area shifts below the threshold due to the repression from the input when the half-maximal RNA concentration \(K_{A_{i}}\) is reduced. The classification range for the gene-perceptron \(g_{1,1}\) expands compared to parameter set 1, approximately bounded by \(2.3\leq[TF]_{x_{1}}\) and \(2.1\leq[TF]_{x_{2}}\). Considering all gene-perceptrons, the output-layer gene-perceptron \(g_{2,1}\) shows a leftward shift in the decision boundary, becoming slightly more linear. Overall, modifying the half-maximal RNA concentration \(K_{A_{i}}\) can significantly expand the classification area.
The temporal stability of the Random Structured GRNN is analyzed using Eq. 14 and parameter set 1 from Table 2. Similar to Figure 8, gene-perceptrons \(g_{1,1},g_{1,2},g_{3,1}\) and the intermediate gene-perceptron \(g_{2,1}\) exhibit consistent stability fluctuations because their immediate predecessors are activators. Additionally, gene-perceptron \(g_{1,3}\) shows stability fluctuation patterns similar to those of gene-perceptron \(g_{1,3}\) in the network without the intermediate gene-perceptron, because both are influenced by their repressive predecessors.
Following the temporal stability analysis, we apply Eqs. 11 and 12 to determine the maximum-stable protein concentration (\([P]_{i}^{*}\)) for the gene-perceptrons \(g_{1,1},g_{1,2}\) and \(g_{1,3}\). However, unlike for the GRNN in Figure 6, Eq. 13 is not used to determine the classification area of the output-layer gene-perceptron. Instead, to compute \([P]_{i}^{*}\) for the gene-perceptrons \(g_{2,1}\) and \(g_{3,1}\), Eqs. 17 and 18 are employed, owing to the addition of the intermediate gene-perceptron compared to the multi-layer GRNN in Figure 6. The calculated protein concentration outputs \([P]_{i}^{*}\) for different input concentrations, used to determine the classification area of each gene-perceptron, are presented in Figure 12. We again used two different sets of parameters from Table 2 to analyze different classification areas.
Parameter set 1 results in the classification areas shown in Figure 12(a). As the gene-perceptron \(g_{2,1}\) serves as the intermediate gene-perceptron of \(g_{1,1}\), we observe similar classification areas and decision boundaries. Additionally, repression from the input-gene \(g_{x_{1}}\) to the gene-perceptron \(g_{1,3}\) results in a distinctive decision boundary, approximately within the range of \(3\leq[TF]_{x_{2}}\) and \(3\geq[TF]_{x_{1}}\). Overall, the gene-perceptron \(g_{3,1}\) represents the intersection of the hidden-layer gene-perceptrons, with the classification area beyond the threshold bounded by
Figure 11: Temporal stability for each gene-perceptrons in the _E. coli_ GRNN.
Figure 10: Temporal stability of the gene-perceptrons for the Random Structured GRNN.
Figure 9: Parameter configurations for the Multi-layer GRNN depicted in Figure 6. Each graph depicts the classification area of each gene-perceptron and for (a) Parameter set 1, as well as (b) Parameter set 2 (\(g_{2,1}\) is the output gene-perceptron that combines all classification areas of gene-perceptrons from the previous layer).
\(2.5\leq[TF]_{x_{2}}\leq 3.5\) and \(3\geq[TF]_{x_{1}}\).
In contrast, reducing the TF concentration at the half-maximal RNA concentration (\(K_{A_{i}}\)) for a gene-perceptron, as in parameter set 2, alters the classification areas of both the gene-perceptron \(g_{1,1}\) and its immediate intermediate gene-perceptron \(g_{2,1}\), as illustrated in Figure 12(b). The classification area expands significantly above the threshold, while dropping below it when the TF concentration corresponding to the half-maximal RNA concentration \(K_{A_{i}}\) is lowered, as it is inversely proportional to the maximum protein concentration \([P]_{i}^{*}\) according to Eqs. 8 and 17. Alterations made to gene-perceptron \(g_{1,1}\) notably impact \(g_{2,1}\), the downstream gene-perceptron in the GRNN. The other hidden-layer gene-perceptrons \(g_{1,2}\) and \(g_{1,3}\) remain unaffected between parameter sets 1 and 2. Parameter set 2 results in a leftward shift in the classification area of the output-layer gene-perceptron \(g_{3,1}\) compared to set 1. In summary, parameter adjustments lead to shifts in the decision boundary of the output-layer gene-perceptrons, with a decreased \(K_{A_{i}}\) causing a leftward shift in the classification area.
### E.Coli GRNN Classification Analysis
This section demonstrates the classification areas for the _E. coli_ GRNN illustrated in Figure 13(a), which is extracted from the trans-omic data of the _E. coli_ GRN (31). The network consists of two input-genes (\(b3025,b3357\)), two hidden-layer gene-perceptrons (\(b1891\) and \(b1892\)) and one output-layer gene-perceptron (\(b1071\)), with their corresponding TF concentrations \([TF]_{i}\) for \(i=b3025,b3357,b1891\) and \(b1892\), and protein concentration \([P]_{b1071}\). In this specific GRNN, all TFs are considered activators. For the output-layer gene-perceptron (\(i=b1071\)), we employ Eqs. 8, 4 and 11 with TFs \(x_{1}=b1891\) and \(x_{2}=b1892\) to calculate the RNA concentration change, the protein concentration change and the maximum protein concentration (\([P]_{i}^{*}\)), respectively, using the parameter values in Table 3.
Similar to the previous GRNNs, we based the stability analysis for this GRNN on Eq. 14. For the two hidden-layer gene-perceptrons (\(i=b1891\) and \(b1892\)), we consider the TFs \(j=b3025,b3357\), while for the output-layer gene-perceptron \(i=b1071\), we evaluate stability with the TFs \(j=b1891,b1892\). For the previous GRNNs, we found in Figures 8 and 10 that gene-perceptrons with immediate activators exhibit consistent stability fluctuations before reaching Lyapunov stability \(\left(\frac{dV}{dt}\approx 0\right)\). A similar behaviour is observed for the _E. coli_ GRNN: Figure 11 shows the temporal stability of the gene-perceptrons \(g_{1,1}\), \(g_{1,2}\) and \(g_{2,1}\), which are influenced by immediate activator predecessors and display uniform stability. Overall, the analysis indicates that all the gene-perceptrons in the GRNN eventually attained Lyapunov stability, ensuring network-wide stability, albeit over different time periods.
Having established the stability of the GRNN, we ascertain the maximum-stable protein concentration to obtain the classification
Figure 12: Parameter configurations for the Random Structured GRNN in Figure 6. Each graph depicts the classification area of each gene-perceptron and for (a) Parameter set 1; (b) Parameter set 2 (\(g_{3,1}\) is the output gene-perceptron that combines all classification areas of gene-perceptrons from the previous layer).
ranges. To compute the maximum-stable protein concentration (\([P]_{i}^{*}\)) for the gene-perceptrons \(i=b1891\) and \(b1892\), we use Eq. 11 with \(x_{1}\) and \(x_{2}\) replaced by the input genes \(b3025\) and \(b3357\). Furthermore, for the computation of the output concentration \([P]_{i}^{*}\) of gene-perceptron \(i=b1071\), Eq. 11 is used with the TFs \(x_{1}=b1891\) and \(x_{2}=b1892\), under the assumption that the Hill coefficient \(n\) is equal to \(1\) in all simulations. Since \(K_{A_{i}}\) is the TF concentration corresponding to the half-maximal RNA concentration, there are two \(K_{A_{i}}\) values for each gene-perceptron because each has two TFs, as shown in Figure 13(a). The time-series data of gene expression levels for _E. coli_ were used by first identifying each gene's half-maximal expression level \(K_{A_{i}}\) and then finding the expression level of its TF at the corresponding time point. For the remaining parameters, which were obtained from the literature as shown in Table 3, average values were used.
The classification areas from our analysis are shown in Figure 13(b). The classification area of gene-perceptron \(b1892\) has expanded towards the left compared to \(b1891\), because the half-maximal RNA concentrations \(K_{A_{i}}\) of both TFs (\(b3025\) and \(b3357\)) for \(b1891\) exceed the \(K_{A_{i}}\) values for \(b1892\). The classification area above the threshold of \(b1892\) is defined within the limits of \([TF]_{b3025}\geq 2.7\) and \([TF]_{b3357}\geq 2.7\), in contrast to \(b1891\), which is approximately bounded by \([TF]_{b3025}\geq 3.5\) and \([TF]_{b3357}\geq 3.8\). Consistent with the decision-boundary simulations performed on the two generic multi-layer GRNNs (Figures 9 and 12), the output-layer gene-perceptron (\(b1071\)) of this GRNN also exhibited an intersection of the classification areas driven by the input-layer gene-perceptrons. In line with this, as gene-perceptron \(b1891\) had the majority of its classification area below the threshold and gene-perceptron \(b1892\) had the majority above the threshold, the decision boundary of gene-perceptron \(b1071\) is approximately bounded by \([TF]_{b3025}\geq 2.9\) and \([TF]_{b3357}\geq 2.9\). Overall, gene-perceptrons within the GRNN derived from the _E. coli_ GRN exhibit tunable decision boundaries obtained by selecting sub-networks from the GRN at steady-state, and collectively they function as a multi-layer GRNN showcasing aspects of biological AI.
## 6 Conclusion
In this study, we introduced a GRNN that can be derived from a cell's GRN by mathematically modelling the transcription and translation processes, transforming each gene into a gene-perceptron. We also performed a stability analysis of the GRNN, which functions as a non-linear classifier. The analysis is based on the eigenvalue method and Lyapunov's stability theorem, with the latter approach capable of determining the time at which stability is achieved. The classification application was applied to two multi-layer GRNNs as well as to a sub-network extracted from the _E. coli_ GRN using trans-omic data. Simulations with different parameter settings for the two multi-layer GRNNs revealed that the TF concentration at the half-maximal gene expression level, \(K_{A_{i}}\), has a significant impact on the shift of the classification boundary. Based on the outcomes of the stability analysis and simulations, we can conclude that the GRN exhibits NN properties, as the gene-perceptron demonstrated sigmoidal-like behaviour for multiple inputs and a tunable decision boundary. Further, by engineering living cells it is possible to obtain desired non-linear classifiers based on our application. Our model has the potential to transform GRNs into GRNNs once suitable parameters are established for the dual-layered chemical reaction model.
## 7 Author Contributions
A.R., S.S. and S.B. designed the theoretical framework of the study. The implementation of the analysis was done by A.R. while
Figure 13: _E. coli_ GRNN classification analysis. (a) Fully-connected GRNN derived from the E.coli GRN. This network consists of two input-genes (\(b3025,b3357\)), two hidden layer gene-perceptrons (\(b\,1891\) and \(b\,1892\)), and one output layer gene-perceptron (\(b1071\)). (b) Classification regions of each gene perceptron within the _E. coli_ GRNN, with gene-perceptron \(b\,1071\) as the output.
A.G. provided the knowledge for the biological aspect of this study. All the authors wrote and reviewed the final manuscript.
## Acknowledgments
This publication has emanated from research conducted with the financial support of the National Science Foundation (NSF) under Grant Number 2316960.
## Declaration of Interests
The authors declare no competing interests.
## Appendix
### RNA and Protein Concentration Model
To model the RNA and protein concentration changes, mass-balance differential equations based on the Hill function were used. Transcription of a gene-perceptron begins with a TF and RNA polymerase binding to the promoter, which is modelled by
\[[Prom.TF]=C_{N_{i}}\frac{[TF]^{n}}{[TF]^{n}+K_{A_{i}}^{n}}, \tag{19}\]
where \([TF],n,K_{A_{i}},[Prom.TF]\) and \(C_{N_{i}}\) are the concentration of TFs, the Hill coefficient, the TF concentration corresponding to the half-maximal RNA concentration, the complex produced after TFs bind to the promoter, and the gene product copy number, respectively. The complex \(Prom.TF\) is transcribed into RNA at the rate \(k_{1_{i}}\), and subsequently the RNA degrades at the rate \(d_{1_{i}}\), which can be modelled as
\[\frac{d[R]_{i}}{dt}=k_{1_{i}}[Prom.TF]-d_{1_{i}}[R]_{i}. \tag{20}\]
By plugging Eq. 19 in Eq. 20 we can obtain Eq. 1. In contrast, if a gene-perceptron is repressed by a TF, Eq. 19 can be expressed as
\[[Prom.TF]=C_{N_{i}}\frac{K_{A_{i}}^{n}}{K_{A_{i}}^{n}+[TF]^{n}}. \tag{21}\]
Since the initial RNA concentration transcribed by a gene-perceptron is \([R]_{i}(0)\) (i.e., \([R]_{i}(t=0)=[R]_{i}(0)\)), the solution of Eq. 1 as given by Eq. 2 can be derived using the integrating factor \(IF=e^{\int d_{1_{i}}\,dt}=e^{d_{1_{i}}t}\), where \(t\) and \(d_{1_{i}}\) are the time and the RNA degradation rate, respectively. Transcribed RNA is then translated into protein at the proteome level. The differential equation of protein concentration change in Eq. 4 can be solved in two steps. **Step 1**: Replace the RNA concentration (\([R]_{i}\)) in Eq. 4 with the solution of the differential equation of RNA concentration change from Eq. 2. **Step 2**: Using the integrating factor (\(IF=e^{\int d_{2_{i}}\,dt}=e^{d_{2_{i}}t}\)), the initial RNA concentration (\([R]_{i}(0)\)) and the initial protein concentration \([P]_{i}(0)\) (i.e., \([P]_{i}(t=0)=[P]_{i}(0)\)), we obtain the expression for the protein concentration in Eq. 5. By setting \(\frac{d[R]_{i}}{dt}=0\), we obtain the maximum-stable RNA concentration at steady-state (\([R]_{i}^{*}\)), expressed by Eq. 6. In addition, the protein concentration at steady-state (\([P]_{i}^{*}\)) is represented by Eq. 7, which is derived by setting \(\frac{d[P]_{i}}{dt}=0\) in Eq. 4.
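The integrating-factor solution and the resulting steady state can also be checked symbolically. The sketch below (using SymPy, which is an addition of ours and not part of the original derivation) treats the Hill term as a constant \(A\), an assumption that holds for a fixed TF input.

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
k1, d1, A = sp.symbols('k1 d1 A', positive=True)  # A stands in for C_N * (Hill term)
R = sp.Function('R')

ode = sp.Eq(R(t).diff(t), k1 * A - d1 * R(t))      # Eq. 20 with constant TF input
sol = sp.dsolve(ode, R(t), ics={R(0): 0})
print(sp.simplify(sol.rhs))                        # A*k1*(1 - exp(-d1*t))/d1
print(sp.limit(sol.rhs, t, sp.oo))                 # steady state A*k1/d1, Eq. 6 form
```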
### Determining Gene-perceptron Stability
In this section, we derive the stability of a gene-perceptron using the eigenvalues of the differential equations for RNA and protein concentration change (Eqs. 1 and 4) and using Lyapunov's stability theorem. Based on (15), we applied the eigenvalue method to determine the stability of the gene-perceptrons. Suppose \(f\) and \(g\) are functions of \([R]_{i}\) and \([P]_{i}\) such that
\[\text{Eq.}1 \Longrightarrow\frac{d\,[R]_{i}}{dt}=f\left([R]_{i},[P]_{i} \right), \tag{22}\] \[\text{Eq.}4 \Longrightarrow\frac{d\,[P]_{i}}{dt}=g([R]_{i},[P]_{i}). \tag{23}\]
Then, the Jacobian matrix for Eqs. 1 and 4 at the equilibrium point is represented as,
\[J_{i}=\begin{bmatrix}\frac{\partial f}{\partial[R]_{i}}&\frac{\partial f}{ \partial[P]_{i}}\\ \frac{\partial g}{\partial[R]_{i}}&\frac{\partial g}{\partial[P]_{i}}\end{bmatrix} =\begin{bmatrix}-d_{1_{i}}&0\\ k_{2_{i}}&-d_{2_{i}}\end{bmatrix}, \tag{24}\]
for gene-perceptron \(i\). Using the characteristic equation \(|J_{i}-\lambda I|=0\), we can determine the eigenvalues of the above Jacobian matrix (Eq. 24) as \(\lambda_{1}=-d_{1_{i}},\lambda_{2}=-d_{2_{i}}\). Hence, all the eigenvalues are negative, indicating that the gene-perceptron is stable, where \(\lambda\) is a scalar, \(I\) is the \(2\times 2\) identity matrix, \(d_{2_{i}}\) is the protein degradation rate, \(d_{1_{i}}\) is the RNA degradation rate and \(k_{2_{i}}\) is the translation rate. We use the Lyapunov function (\(V\)), defined for Eqs. 1 and 4 as follows, to perform the temporal stability analysis:
\[V\left([R]_{i},[P]_{i}\right)=\left([R]_{i}-[R]_{i}^{*}\right)^{2}+\left([P]_{ i}-[P]_{i}^{*}\right)^{2}. \tag{25}\]
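As a quick numerical sanity check of this function (an illustration of ours, not part of the original analysis), the following sketch integrates Eqs. 1 and 4 for a hypothetical gene-perceptron with a constant TF input and evaluates \(V\) along the trajectory; all rate values are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rates; `a` stands in for k1 * [Prom.TF] with a fixed TF input.
a, d1, k2, d2 = 1.0, 0.6, 0.9, 0.7
R_star, P_star = a / d1, k2 * a / (d1 * d2)        # steady states, Eqs. 6-7 form

def rhs(t, y):
    R, P = y
    return [a - d1 * R, k2 * R - d2 * P]           # Eqs. 1 and 4 with constant input

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0], max_step=0.1)
V = (sol.y[0] - R_star) ** 2 + (sol.y[1] - P_star) ** 2   # Lyapunov function, Eq. 25
print(V[0], V[-1])                                  # V shrinks towards 0 at equilibrium
```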
According to the Lyapunov's stability theorem, \(V\left([R]_{i},[P]_{i}\right)=0\) when \([R]_{i}=[R]_{i}^{*}\) and \([P]_{i}=[P]_{i}^{*}\), where \([R]_{i}^{*}\) and \([P]_{i}^{*}\) are RNA and protein concentration at the equilibrium. It is clear that \(V\left([R]_{i},[P]_{i}\right)>0\), since all terms are quadratic. Finally, we consider the first derivative of Eq. 25 as the last condition for the stability, which is represented as
\[\dot{V}([R]_{i},[P]_{i})=\frac{dV}{dt}=\frac{\partial V}{\partial[R]_{i}}. \frac{d[R]_{i}}{dt}+\frac{\partial V}{\partial[P]_{i}}.\frac{d[P]_{i}}{dt}. \tag{26}\]
By plugging \(\frac{d[R]_{i}}{dt}\) and \(\frac{d[P]_{i}}{dt}\) from Eqs. 1 and 4 into Eq. 26, differentiating Eq. 25 with respect to \([R]_{i}\) and \([P]_{i}\) to obtain \(\frac{\partial V}{\partial[R]_{i}}\) and \(\frac{\partial V}{\partial[P]_{i}}\), and finally replacing \([R]_{i}^{*},[P]_{i}^{*},[R]_{i}\) and \([P]_{i}\) with Eqs. 6, 7, 2 and 5, we obtain the expanded form of Eq. 26, which is represented as follows
\[\text{Eq.}26 \Longrightarrow\frac{dV}{dt}=-\frac{C_{N_{i}}^{2}\,[TF]^{2n}\,k_{1_{i}}^{2}\,e^{-2(d_{1_{i}}+d_{2_{i}})t}}{d_{1_{i}}d_{2_{i}}([TF]^{n}+K_{A_{i}}^{n})^{2}(d_{1_{i}}-d_{2_{i}})^{2}}\] \[\cdot\Big{[}\left(d_{2_{i}}^{3}-2d_{1_{i}}d_{2_{i}}^{2}+d_{1_{i}}^{2}d_{2_{i}}\right)e^{2d_{2_{i}}t}\] \[+\left(d_{1_{i}}k_{2_{i}}^{2}\,e^{2d_{1_{i}}t}+d_{2_{i}}k_{2_{i}}^{2}\,e^{2d_{2_{i}}t}\right)\] \[-\left(d_{1_{i}}k_{2_{i}}^{2}+d_{2_{i}}k_{2_{i}}^{2}\right)e^{(d_{1_{i}}+d_{2_{i}})t}\Big{]}, \tag{27}\]
where we assume an initial RNA concentration of zero (\([R]_{i}(0)=0\)) and an initial protein concentration of zero (\([P]_{i}(0)=0\)). The above equation is used to determine the stability of the gene-perceptron for different parameter configurations. |
2309.03770 | Neural lasso: a unifying approach of lasso and neural networks | In recent years, there is a growing interest in combining techniques
attributed to the areas of Statistics and Machine Learning in order to obtain
the benefits of both approaches. In this article, the statistical technique
lasso for variable selection is represented through a neural network. It is
observed that, although both the statistical approach and its neural version
have the same objective function, they differ due to their optimization. In
particular, the neural version is usually optimized in one-step using a single
validation set, while the statistical counterpart uses a two-step optimization
based on cross-validation. The more elaborated optimization of the statistical
method results in more accurate parameter estimation, especially when the
training set is small. For this reason, a modification of the standard approach
for training neural networks, that mimics the statistical framework, is
proposed. During the development of the above modification, a new optimization
algorithm for identifying the significant variables emerged. Experimental
results, using synthetic and real data sets, show that this new optimization
algorithm achieves better performance than any of the three previous
optimization approaches. | David Delgado, Ernesto Curbelo, Danae Carreras | 2023-09-07T15:17:10Z | http://arxiv.org/abs/2309.03770v1 | # Neural lasso: a unifying approach of lasso and neural networks
###### Abstract
In recent years, there has been a growing interest in combining techniques attributed to the areas of Statistics and Machine Learning in order to obtain the benefits of both approaches. In this article, the statistical technique lasso for variable selection is represented through a neural network. It is observed that, although both the statistical approach and its neural version have the same objective function, they differ due to their optimization. In particular, the neural version is usually optimized in one step using a single validation set, while the statistical counterpart uses a two-step optimization based on cross-validation. The more elaborate optimization of the statistical method results in more accurate parameter estimation, especially when the training set is small. For this reason, a modification of the standard approach for training neural networks, one that mimics the statistical framework, is proposed. During the development of the above modification, a new optimization algorithm for identifying the significant variables emerged. Experimental results, using synthetic and real data sets, show that this new optimization algorithm achieves better performance than any of the three previous optimization approaches.
neural networks, lasso, cross-validation, feature selection
## 1 Introduction
Nowadays, there is a growing interest in combining techniques attributed to the areas of Statistics and Machine Learning in order to obtain the benefits of both approaches.
An example of the above can be found in the area of statistical item response theory, and specifically in the development of computerized adaptive tests [1; 2]. Yan, Lewis, and Stocking and, later, Ueno and Songmuang proposed the use of decision trees as an alternative to the computerized adaptive tests [3; 4]. Later, Delgado-Gomez et al. established mathematically an equivalence between these two techniques that allows the administration of computerized adaptive tests in real-time using item selection criteria that are computationally very intensive [5]. Recently, several works using neural networks have been published in this field [6; 7].
Regarding these last works, it is interesting to note the synergies that are being generated between the areas of Statistics and Neural Networks [8; 9]. Representing statistical models using neural networks provides them with the flexibility and optimization methods of the latter. In a previous pilot study, Laria et al. indicated how the least absolute shrinkage and selection operator (lasso) algorithm can be represented as a neural network [10]. Conversely, linking neural networks to statistical models allows to improve the interpretability of the former [11]. These synergies have occurred in several domains of Statistics such as regression, dimensional reduction, time series, or quality control [12].
In this article, the widely used lasso algorithm is developed from the perspective of neural networks. To this end, in Section 2, the most relevant features of the lasso algorithm are presented in order to understand the elaboration of its neural version. After that, in Section 3, the entire mathematical formulation proposed by Laria et al. is extended, and the optimization is redefined [10]. Both linear and logistic regressions are considered. In Section 4, several experiments are carried out to evaluate the performance of the neural version and compare it with their statistical counterpart. These experiments are performed on both real and simulated data. Finally, the article concludes in Section 5 with a discussion of the obtained results and future research lines.
## 2 The lasso
Following, the lasso algorithm is briefly presented highlighting the most relevant elements in relation to our proposal. Hereafter, the lasso algorithm will be referred to as _statistical lasso_ to differentiate it from its neural version throughout the article.
### Formulation
Let \((\mathbf{x}_{i},y_{i})\), \(i=1,\ldots,N\), be a set containing \(N\) observations where \(\mathbf{x}_{i}\in\mathbb{R}^{p}\) represents the predictors, and \(y_{i}\in\mathbb{R}\) are the associated responses. It is assumed
that the predictors are standardized and the responses are centered, i.e.,
\[\sum_{i=1}^{N}x_{ij}=0,\hskip 28.452756pt\sum_{i=1}^{N}x_{ij}^{2}=1,\hskip 28.452756pt \sum_{i=1}^{N}y_{i}=0,\hskip 28.452756pt\text{for }j=1,2,\ldots,p \tag{1}\]
The lasso technique was introduced for generalized linear models in the supervised context by Tibshirani [13]. It is formulated as the following optimization problem
\[\underset{\mathbf{\beta}}{argmin}\,\mathcal{R}(\mathbf{y},\mathbf{X}\mathbf{\beta})+\lambda \lVert\mathbf{\beta}\rVert_{1} \tag{2}\]
where \(\mathbf{X}\) is the (standardized) matrix that contains the observations as rows, \(\mathbf{y}\) is the vector with the corresponding labels, \(\mathbf{\beta}\) is the vector containing the weights of the regression, and \(\lambda\lVert\mathbf{\beta}\rVert_{1}\) is a penalization term. \(\mathcal{R}(\mathbf{y},\mathbf{X}\mathbf{\beta})\) represents the error term. In this work, we will focus on linear and logistic regression. For linear regression, the error term is given by
\[\mathcal{R}_{Lin}(\mathbf{y},\mathbf{X}\mathbf{\beta})=\frac{1}{N}\sum_{i=1}^{N}(y_{i}- \mathbf{x}_{i}^{t}\mathbf{\beta})^{2} \tag{3}\]
while the error term for the logistic regression is given by:
\[\mathcal{R}_{Log}(\mathbf{y},\mathbf{X}\mathbf{\beta})=\frac{1}{N}\sum_{i=1}^{N}\left[ \log(1+e^{\mathbf{x}_{i}^{t}\mathbf{\beta}})-y_{i}\mathbf{x}_{i}^{t}\mathbf{\beta}\right] \tag{4}\]
### Optimization
Given a fixed \(\lambda\), the values of \(\mathbf{\beta}\) are estimated using coordinate descent. As an example, the coordinate descent update for the \(j^{th}\) coefficient in the linear regression case is given by
\[\hat{\beta}_{j}=\mathcal{S}_{\lambda}(\frac{1}{N}\langle\mathbf{X}_{j}, \mathbf{r}_{j}\rangle) \tag{5}\]
where \(\mathbf{X}_{j}\) is the \(j^{th}\) column of matrix \(\mathbf{X}\), the \(i^{th}\) component of \(\mathbf{r}_{j}\) is obtained by
\[\mathbf{r}_{j}(i)=y_{i}-\sum_{k\neq j}x_{ik}\hat{\beta}_{k} \tag{6}\]
and \(\mathcal{S}_{\lambda}\) is the soft-thresholding operator defined by
\[S_{\lambda}(x)=\text{sign}(x)(|x|-\lambda)_{+} \tag{7}\]
The optimal value of \(\lambda\) is obtained through k-fold cross-validation. A more detailed discussion of the lasso optimization can be found in the book
by Hastie, Tibshirani and Wainwright [14]. A schematic representation of the lasso optimization algorithm is shown in the upper panel of Figure 3.
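For reference, a minimal Python sketch of the cyclic coordinate-descent update in Eq. 5 is given below. It follows Eq. 5 literally, which implicitly assumes the glmnet-style column scaling \(\frac{1}{N}\mathbf{X}_{j}^{t}\mathbf{X}_{j}=1\); it is an illustration of ours, not the glmnet implementation.

```python
import numpy as np

def soft_threshold(z, lam):
    # S_lambda(z) = sign(z) * (|z| - lambda)_+  (Eq. 7)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    # Cyclic coordinate descent following Eq. 5; assumes glmnet-style
    # scaling (1/N) * X_j^t X_j = 1 for every column j.
    N, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual (Eq. 6)
            beta[j] = soft_threshold(X[:, j] @ r_j / N, lam)
    return beta
```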
## 3 The neural lasso
Similarly to the previous section, the formulation and optimization of the neural lasso is presented.
### Formulation
Following, the neural representation of the lasso is presented. It begins by presenting the mathematical formulation for linear regression and, afterward, it is extended to logistic regression.
#### Linear regression
When the error term is given by the mean squared error (MSE), lasso can be characterized as the neural network shown in Figure 1.
In this case, the loss function is given by
\[\begin{split}\mathcal{L}(\mathbf{w})&=\frac{1}{N}\sum _{i=1}^{N}\Biggl{(}y_{i}-\gamma\sum_{j=1}^{p}x_{ij}w_{j}\Biggr{)}^{2}+\ell_{1} \sum_{j=1}^{p}|w_{j}|\\ &=\frac{1}{N}\|\mathbf{y}-\gamma\mathbf{X}\mathbf{w}\|_{2}^{2}{+} \ell_{1}\|\mathbf{w}\|_{1}\end{split} \tag{8}\]
where \((\mathbf{w},\gamma)\) are the parameters of the network, and \(\ell_{1}\) is a regularization hyper-parameter. Notice that, by making \(\mathbf{\beta}=\gamma\mathbf{w}\) and \(\lambda=\frac{\ell_{1}}{\gamma}\), equation (8) is equivalent to equation (2) using MSE as error term.
Figure 1: Neural Representation of lasso for linear regression
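To make the architecture concrete, a minimal PyTorch sketch of this network and the loss in equation (8) could look as follows. The class and function names are ours and are not part of the original implementation; the sketch only illustrates the structure of Figure 1.

```python
import torch

class NeuralLasso(torch.nn.Module):
    # One linear layer without bias (weights w) followed by a single
    # multiplicative output weight gamma, as in Figure 1.
    def __init__(self, p: int):
        super().__init__()
        self.w = torch.nn.Parameter(torch.zeros(p))
        self.gamma = torch.nn.Parameter(torch.ones(()))

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        return self.gamma * (X @ self.w)

def neural_lasso_loss(model: NeuralLasso, X, y, l1: float):
    # Eq. 8: MSE plus an l1 penalty on w (gamma is not penalized).
    pred = model(X)
    return torch.mean((y - pred) ** 2) + l1 * model.w.abs().sum()
```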
An important aspect to keep in mind is that, unlike the statistical lasso, the neural network optimization does not set the weights exactly to zero. Therefore, it is necessary to establish a condition that determines which weights are zero after each training epoch, and sets them to this value. To do this, we calculate the derivative of the loss function defined in equation (8) with respect to \(w_{j}\)
\[\frac{\partial\mathcal{L}(\mathbf{w})}{\partial w_{j}}=\frac{-2\gamma}{N}\sum_{i=1 }^{N}\Biggl{(}y_{i}-\gamma\sum_{k=1}^{p}x_{ik}w_{k}\Biggr{)}x_{ij}+\ell_{1}s_{j} \tag{9}\]
where the term \(s_{j}\) is the subgradient defined by
\[s_{j}=\left\{\begin{array}{cc}1&w_{j}>0\\ -1&w_{j}<0\\ [-1,1]&w_{j}=0\end{array}\right.. \tag{10}\]
Equation (9) can be rewritten as
\[\frac{\partial\mathcal{L}(\mathbf{w})}{\partial w_{j}}=\frac{-2\gamma}{N}\Biggl{(} \sum_{i=1}^{N}y_{i}x_{ij}-\gamma\sum_{i=1}^{N}x_{ij}\sum_{k\neq j}x_{ik}w_{k}- \gamma w_{j}\sum_{i=1}^{N}x_{ij}^{2}\Biggr{)}+\ell_{1}s_{j} \tag{11}\]
and, equivalently, in vector form
\[\frac{\partial\mathcal{L}(\mathbf{w})}{\partial w_{j}}=\frac{-2\gamma}{N}\Bigl{(}\mathbf{X}_{j}^{t}\mathbf{y}-\gamma\mathbf{X}_{j}^{t}\mathbf{X}\mathbf{w}_{j}^{*}-\gamma w_{j}\Bigr{)}+\ell_{1}s_{j} \tag{12}\]
where \(\mathbf{X}_{j}^{t}\) is the transpose of the \(j^{th}\) column of matrix \(\mathbf{X}\) (containing the observations as rows) and \(\mathbf{w}_{j}^{*}\) is the vector \(\mathbf{w}\) with the \(j^{th}\) component equal to 0. To obtain the above expression, it has been taken into account that \(\sum_{i=1}^{N}x_{ij}^{2}=1\) since the data are standardized.
Equating the derivative to 0 leads to
\[w_{j}=\frac{\frac{2}{N}\gamma\mathbf{X}_{j}^{t}\Biggl{(}\mathbf{y}-\gamma \mathbf{X}\mathbf{w}_{j}^{*}\Biggr{)}-\ell_{1}s_{j}}{\frac{2}{N} \gamma^{2}} \tag{13}\]
From which it follows that
\[w_{j}^{op}=\left\{\begin{array}{ll}\dfrac{2}{N}\gamma\mathbf{X }_{j}^{t}\Bigg{(}\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\Bigg{)}-\ell_{1}&\\ \dfrac{2}{N}\gamma^{2}&\text{if }\dfrac{2}{N}\gamma\mathbf{X}_{j}^{t} \Bigg{(}\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\Bigg{)}>\ell_{1}\\ \dfrac{2}{N}\gamma\mathbf{X}_{j}^{t}\Bigg{(}\mathbf{y}-\gamma \mathbf{X}\mathbf{w}_{j}^{*}\Bigg{)}+\ell_{1}&\\ \dfrac{2}{N}\gamma^{2}&\text{if }\dfrac{2}{N}\gamma\mathbf{X}_{j}^{t} \Bigg{(}\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\Bigg{)}<-\ell_{1}\\ 0&\text{if }\left|\dfrac{2}{N}\gamma\mathbf{X}_{j}^{t} \Bigg{(}\mathbf{y}-\gamma\mathbf{X}\mathbf{w}_{j}^{*}\Bigg{)}\right|\leq\ell_{1} \end{array}\right. \tag{14}\]
Note that, unlike lasso, which needs the three updates of equation (14), neural lasso only uses the last condition to make weights zero. This is because the update of the weights is performed implicitly during the training of the network. Concisely, after each training epoch, the network determines whether any of the weights can be replaced by 0 by checking whether the last condition of equation (14) is satisfied using the current estimates. This difference will be relevant later for the logistic regression.
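A sketch of this epoch-end check, written against the hypothetical `NeuralLasso` module sketched earlier (so `model.w` and `model.gamma` are our names, not the original code's), could be:

```python
import torch

@torch.no_grad()
def zero_weights_linear(model, X, y, l1):
    # Last condition of Eq. 14, checked with the current estimates: set w_j = 0
    # whenever |(2/N) * gamma * X_j^t (y - gamma * X w_j*)| <= l1.
    N = X.shape[0]
    g = model.gamma
    for j in range(model.w.numel()):
        w_star = model.w.clone()
        w_star[j] = 0.0
        stat = (2.0 / N) * g * torch.dot(X[:, j], y - g * (X @ w_star))
        if stat.abs() <= l1:
            model.w[j] = 0.0
```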
#### Logistic regression
As shown below, the optimization problem for the logistic case is formulated by
\[\underset{\mathbf{\beta}}{argmin}\frac{1}{N}\sum_{i=1}^{N}\Big{[}\log(1+e^{ \mathbf{x}_{i}^{t}\mathbf{\beta}+\beta_{0}})-y_{i}\left(\mathbf{x}_{i}^{t}\mathbf{ \beta}+\beta_{0}\right)\Big{]}+\lambda\norm{\mathbf{\beta}}_{1} \tag{15}\]
This problem can be characterized by the neural network shown in Figure 2.
Figure 2: Neural representation of lasso for logistic regression
Note that the linear activation of the output layer has been replaced by a sigmoid. In addition, the MSE has been replaced by the binary cross-entropy function whose formula is given by
\[-\frac{1}{N}\sum_{i=1}^{N}y_{i}\log\hat{y}_{i}+(1-y_{i})\log(1-\hat{y}_{i}) \tag{16}\]
Therefore, the loss function of the network is given by
\[\mathcal{L}(\mathbf{w})=-\frac{1}{N}\sum_{i=1}^{N}\Biggl{(}y_{i}\log \left(\frac{1}{1+e^{-\gamma x_{i}^{t}\mathbf{w}-b_{0}}}\right)+(1-y_{i})\log\left( 1-\frac{1}{1+e^{-\gamma x_{i}^{t}\mathbf{w}-b_{0}}}\right)\Biggr{)} \tag{17}\] \[+\ell_{1}\|\mathbf{w}\|_{1}\]
It can be seen that equation (17) is equivalent to equation (15) as follows. Focusing on the error term of equation (17):
\[\mathcal{R}(\mathbf{y},\mathbf{X}\mathbf{w}) = -\frac{1}{N}\sum_{i=1}^{N}\Biggl{(}y_{i}\log\left(\frac{1}{1+e^{ -\gamma x_{i}^{t}\mathbf{w}-b_{0}}}\right)+(1-y_{i})\log\left(\frac{1}{1+e^{ \gamma x_{i}^{t}\mathbf{w}+b_{0}}}\right)\Biggr{)}\] \[= -\frac{1}{N}\sum_{i=1}^{N}\left(-y_{i}\log(1+e^{-\gamma\mathbf{x}_{i} ^{t}\mathbf{w}-b_{0}})-(1-y_{i})\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})\right)\] \[= \frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log(1+e^{-\gamma\mathbf{x}_{i}^ {t}\mathbf{w}-b_{0}})+\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})-y_{i}\log(1+e^{ \gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})\right)\] \[= \frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log\left(\frac{1+e^{-\gamma \mathbf{x}_{i}^{t}\mathbf{w}-b_{0}}}{1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}}}\right)+ \log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})\right)\] \[= \frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log\left(e^{-\gamma\mathbf{x}_{i }^{t}\mathbf{w}-b_{0}}\right)+\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}})\right)\] \[= \frac{1}{N}\sum_{i=1}^{N}\left(\log(1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{ w}+b_{0}})-y_{i}(\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0})\right)\]
Therefore, (17) becomes
\[\mathcal{L}(\mathbf{w})=\frac{1}{N}\sum_{i=1}^{N}\left(\log(1+e^{\gamma\mathbf{x}_{i}^ {t}\mathbf{w}+b_{0}})-y_{i}(\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0})\right)+\ell_{1}\| \mathbf{w}\|_{1} \tag{18}\]
Defining, as above, \(\mathbf{\beta}=\gamma\mathbf{w}\), \(\lambda=\ell_{1}/\gamma\), formulation (17) is equivalent to formulation (15).
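The algebraic equivalence between the binary cross-entropy and the error term of equation (18) can be verified numerically; the snippet below is an illustration of ours using randomly drawn values, not part of the original derivation.

```python
import torch

torch.manual_seed(0)
X = torch.randn(8, 3)
y = torch.randint(0, 2, (8,)).float()
w, gamma, b0 = torch.randn(3), torch.tensor(1.3), torch.tensor(0.2)

z = gamma * (X @ w) + b0
bce = torch.nn.functional.binary_cross_entropy_with_logits(z, y)
manual = torch.mean(torch.log1p(torch.exp(z)) - y * z)      # Eq. 18 error term
print(torch.allclose(bce, manual, atol=1e-6))               # True
```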
Similar to the linear case, it is necessary to establish a mechanism that makes the weights associated with the non-significant variables equal to 0. Taking the derivative of the loss function in equation (18)
\[\frac{\partial\mathcal{L}(\mathbf{w})}{\partial w_{j}}=\frac{1}{N}\sum_{i=1}^{N} \left(\frac{\gamma x_{ij}e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}}}{1+e^{\gamma\mathbf{x} _{i}^{t}\mathbf{w}+b_{0}}}-y_{i}\gamma x_{ij}\right)+\ell_{1}s_{j} \tag{19}\]
Unfortunately, unlike the linear case, it is not possible to isolate the vector \(\mathbf{w}\). The problem is, therefore, approached from a different perspective.
Rearranging and equating the above equation to zero
\[\frac{\gamma}{N}\sum_{i=1}^{N}\left(\frac{e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}} }{1+e^{\gamma\mathbf{x}_{i}^{t}\mathbf{w}+b_{0}}}-y_{i}\right)x_{ij}+\ell_{1}s_{j}=0 \tag{20}\]
which is equivalent to
\[\frac{\gamma}{\ell_{1}N}\sum_{i=1}^{N}\left(y_{i}-\frac{1}{1+e^{-\gamma\mathbf{x} _{i}^{t}\mathbf{w}-b_{0}}}\right)x_{ij}=s_{j} \tag{21}\]
Following Simon et al. [15], this is satisfied for \(w_{j}=0\) if
\[\frac{\gamma}{\ell_{1}N}\sum_{i=1}^{N}\left(y_{i}-\frac{1}{1+e^{-\gamma\mathbf{x} _{i}^{t}\mathbf{w}_{j}^{*}-b_{0}}}\right)x_{ij}=s_{j} \tag{22}\]
where \(\mathbf{w}_{j}^{*}\) is the vector \(\mathbf{w}\) with the \(j^{th}\) component equal to \(0\). Therefore,
\[\left|\frac{\gamma}{\ell_{1}N}\sum_{i=1}^{N}\left(y_{i}-\frac{1}{1+e^{-\gamma \mathbf{x}_{i}^{t}\mathbf{w}_{j}^{*}-b_{0}}}\right)x_{ij}\right|=|s_{j}|\leq 1 \tag{23}\]
Rearranging gives
\[\left|\frac{\gamma}{N}\sum_{i=1}^{N}\left(y_{i}-\frac{1}{1+e^{-\gamma\mathbf{x}_{ i}^{t}\mathbf{w}_{j}^{*}-b_{0}}}\right)x_{ij}\right|\leq\ell_{1} \tag{24}\]
which vectorially can be written as
\[\left|\frac{\gamma}{N}\mathbf{X}_{j}^{t}\Bigg{(}\mathbf{y}-\sigma\left(\gamma \mathbf{X}\mathbf{w}_{j}^{*}+\mathbf{b}\right)\Bigg{)}\right|\leq\ell_{1} \tag{25}\]
where \(\sigma(x)=1/(1+e^{-x})\) is the sigmoid activation function and \(\mathbf{b}\) is the N-dimensional vector whose all components are equal to \(b_{0}\).
It is important to note that the way by which neural lasso obtains the condition that determines whether a weight is zero is different from that of the statistical lasso. The latter uses a quadratic approximation of the error term since it also needs to have an explicit expression of the update of the non-zero weights. Neural lasso only needs to know which weights are zero since the update of the non-zero weights is implicitly performed during the training of the network.
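A sketch of the corresponding epoch-end check for the logistic case is given below, under the assumption that the Figure 2 network is implemented analogously to the earlier linear sketch and additionally exposes a bias parameter `b0`; these attribute names are ours.

```python
import torch

@torch.no_grad()
def zero_weights_logistic(model, X, y, l1):
    # Eq. 25: set w_j = 0 whenever
    # |(gamma/N) * X_j^t (y - sigmoid(gamma * X w_j* + b0))| <= l1.
    N = X.shape[0]
    g, b0 = model.gamma, model.b0
    for j in range(model.w.numel()):
        w_star = model.w.clone()
        w_star[j] = 0.0
        resid = y - torch.sigmoid(g * (X @ w_star) + b0)
        if ((g / N) * torch.dot(X[:, j], resid)).abs() <= l1:
            model.w[j] = 0.0
```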
### Optimization
An important aspect to discuss is how to estimate the neural lasso weights. In this section, three optimization algorithms are proposed which are shown schematically in the three lower panels of Figure 3.
Normally, when working with neural networks, the network layout is determined by cross-validation and the estimation of its weights by simple validation.
Figure 3: Statiscal lasso and neural lasso algorithms.
That is, once the network layout has been determined, the available data are divided into a training set and a validation set. The training set is used to estimate the network parameters, while the validation set is used to evaluate the performance of the network in an independent set. The resulting network is the one whose weights minimize the validation error. As the network layout is predefined in neural lasso, it is only necessary to estimate its weights using simple validation. This way of training the network will be called _standard neural lasso_.
However, the standard neural lasso may present a disadvantage with respect to the statistical lasso because of how they estimate the weights. The fact that statistical lasso employs cross-validation allows it to use all available observations to obtain an estimate of the error, whereas the standard neural lasso obtains this estimate using only a subset of the observations because it relies on simple validation. For this reason, a second algorithm called _restricted neural lasso_ has been developed to train the neural network by mimicking statistical lasso. Restricted neural lasso sets the value of \(\gamma\) equal to 1 and establishes it as a non-trainable parameter. Once the \(\gamma\) value has been fixed, it also sets the value of the hyper-parameter \(\ell_{1}\) to one of the \(\lambda\) values that the statistical lasso considers during its optimization. Having fixed the value of these two parameters, it is possible to perform the cross-validation and the algorithm selects the value of \(\ell_{1}\) that minimizes the cross-validation error. In a second step, the algorithm estimates the weights using the optimal value of \(\ell_{1}\) and setting \(\gamma\) equal to 1. Assuming that the network layout is correct, the performance of this second optimization method should be practically identical to that obtained by the statistical lasso.
Finally, during the development of this work, a third optimization approach emerged. This new optimization algorithm, called _voting neural lasso_, combines all the optimization approaches discussed above. Specifically, it uses the cross-validation design used by the restricted neural lasso and by the statistical lasso. However, it does not search for the value of the hyper-parameter \(\lambda\) that minimizes the average validation error in the K configurations. For each of the K settings, it selects the value of \(\lambda\) with which the smallest validation error is obtained in a similar way to the standard neural lasso. A variable is considered to be significant when it has been selected in most of the K settings. In a second phase, the weights of only these significant variables are estimated without taking into account the penalty term. It is important to note that this approach is not a relaxed lasso [16].
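A schematic sketch of the voting phase is given below; `fit_standard_lasso` is a hypothetical helper standing in for training the network on one fold for a given \(\lambda\), and the second phase (refitting only the selected variables without the penalty term) is omitted for brevity.

```python
import numpy as np

def voting_neural_lasso(folds, lambdas, fit_standard_lasso):
    # `fit_standard_lasso(train_idx, val_idx, lam)` is assumed to return
    # (validation_error, boolean support of the non-zero weights).
    votes = None
    for train_idx, val_idx in folds:
        results = [fit_standard_lasso(train_idx, val_idx, lam) for lam in lambdas]
        _, support = min(results, key=lambda r: r[0])   # best lambda in this fold
        mask = np.asarray(support, dtype=int)
        votes = mask if votes is None else votes + mask
    return votes > len(folds) / 2                        # majority vote over K folds
```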
To summarize the above, three optimization algorithms with three different purposes will be considered. Standard neural lasso obtains the estimation of the weights using the usual procedure of training neural networks. Restricted neural lasso mimics the statistical lasso method. If these two methods obtain very similar results, a bridge between Statistics and Machine Learning would
be built. Finally, voting neural lasso proposes a new way of estimating weights that can be used for both the statistical and the neural versions.
For the standard neural lasso and for the voting neural lasso, the network is initialized with \(\gamma=1\) and \(\ell_{1}=\max_{j}\left|\frac{2}{N}\mathbf{X}_{j}^{t}\mathbf{y}\right|\) for the linear case and \(\ell_{1}=\max_{j}\left|\frac{1}{N}\mathbf{X}_{j}^{t}(\mathbf{y}-\sigma(0))\right|\) for the logistic case. In addition, in this article, the Adam optimization algorithm is used to adjust the weights [17].
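These initial values of \(\ell_{1}\) can be computed directly from the training data, for example as follows (a small sketch of ours; the function names are not from the original code):

```python
import numpy as np

def initial_l1_linear(X, y):
    # l1 = max_j |(2/N) X_j^t y|
    return np.max(np.abs((2.0 / X.shape[0]) * (X.T @ y)))

def initial_l1_logistic(X, y):
    # l1 = max_j |(1/N) X_j^t (y - sigma(0))|, with sigma(0) = 0.5
    return np.max(np.abs((X.T @ (y - 0.5)) / X.shape[0]))
```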
## 4 Experimental Results
In order to evaluate the performance of the proposed method, three experiments were conducted. The first two focused on the linear case. Specifically, the first one is performed with simulated data and the second one uses several real data sets. The two previous experiments are complemented with a third one aiming to evaluate the proposed method in the logistic case using real data.
### Experiment 1: Linear case, Simulated data
In the first study, the data were simulated according to the model \(y=\mathbf{X}\boldsymbol{\beta}+\epsilon\) where \(\mathbf{X}\) is the matrix containing the observations as row, \(\epsilon_{i}\sim N(0,1)\) and
\[\beta=[1\,2\,3\,4\,\underbrace{0\,\ldots\,0}_{p-4}]\]
Moreover, the data were simulated from a centered normal distribution so that \(\rho_{ij}=0.5^{|i-j|}\) for \(1\leq i<j\leq p\). In addition, the columns with the predictors were randomly rearranged to avoid possible positional effects.
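A sketch of this data-generating process (with an assumed random seed; the function name is ours) is:

```python
import numpy as np

def simulate_data(N=50, p=20, seed=0):
    # Synthetic data of Experiment 1: correlated Gaussian predictors with
    # rho_ij = 0.5^|i-j|, beta = (1, 2, 3, 4, 0, ..., 0), randomly permuted
    # predictor columns, and unit-variance Gaussian noise.
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=N)
    beta = np.zeros(p)
    beta[:4] = [1.0, 2.0, 3.0, 4.0]
    perm = rng.permutation(p)
    X, beta = X[:, perm], beta[perm]     # shuffle which positions are significant
    y = X @ beta + rng.standard_normal(N)
    return X, y, beta
```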
In order to test the performance of the different algorithms, training sets for \(p\in\{20,100,200\}\) with sample size \(N\) equal to 50 were generated. For each of the three scenarios, a repeated validation was performed with 100 runs. In all the repetitions, a test set of 1000 observations was generated. As performance measures, we calculated the MSE on the test set, the precision (percentage of non-significant variables correctly identified), and the recall (percentage of significant variables correctly identified). The number of folds K was set to five for the statistical lasso, restricted neural lasso, and voting neural lasso algorithms. The standard neural lasso used 20% of the training data as its validation set. Note that the analyses using the non-neural versions were performed using the glmnet R package [18], while the neural versions were implemented in PyTorch [19]. The obtained results are shown in Table 1.
This table shows that the standard neural lasso performs significantly worse than the non-neural version. As noted above, this is because the standard neural lasso only obtains knowledge of its performance during training on the small validation subset. It is also observed that the performance of the statistical lasso and the restricted neural lasso is almost identical. This confirms that the network design is correct. Finally, the best results were obtained by the voting neural lasso algorithm, which significantly improves on those obtained by the three previous approaches.
### Experiment 2: Linear case, Real data
The proposed technique was further evaluated using five different real data sets. Specifically, three datasets were obtained from the University of California, Irvine (UCI) repository, and two datasets of our own were used. The datasets used are the following:
* UCI White wine quality [20]. This database, containing 4898 observations, was built to predict the quality of Portuguese "Vinho Verde" from 11 predictors. In each of the repetitions, the training set consisted of 4000 training observations, and the test set was made up of 898 observations.
* UCI Boston housing [21]. This dataset consists of 506 observations with 12 attributes each. These attributes correspond to the dependent variable, which indicates the median value of owner-occupied homes, and the 11 predictors used to estimate it. In each of the repetitions, the training set consisted of 400 training observations, and the test set was made up of 106.
* UCI Abalone [22]. This dataset was collected to predict the age of the abalone from physical measurements. It contains 4177 observations with nine attributes each. In each of the repetitions, the training set consisted of 3342 training observations, and the test set was made up of 1935.
* Suicide attempt severity. This database contains information on the severity of 349 suicide attempts as measured by the Beck suicide intent scale [23]. The predictors are 30 items of the Barratt impulsivity scale [24]. In each repetition, the training set consisted of 200 training observations, and
\begin{table}
\begin{tabular}{c l c c c} \hline \hline & Method & MSE & Precision & Recall \\ \hline \hline \multirow{3}{*}{p=20} & Statistical lasso & 1.294 (0.188) & 0.671 (0.207) & 1 (0) \\ & Standard neural lasso & 1.465\({}^{**}\) (0.341) & 0.644 (0.249) & 1 (0) \\ & Restricted neural lasso & 1.298 (0.188) & 0.668 (0.210) & 1 (0) \\ & Voting neural lasso & 1.188\({}^{**}\) (0.144) & 0.934\({}^{**}\) (0.072) & 1 (0) \\ \hline \multirow{3}{*}{p=100} & Statistical lasso & 1.680 (0.419) & 0.848 (0.087) & 0.998 (0.025) \\ & Standard neural lasso & 2.129\({}^{**}\) (0.789) & 0.808\({}^{**}\) (0.136) & 0.998 (0.025) \\ & Restricted neural lasso & 1.695 (0.447) & 0.853 (0.096) & 0.998 (0.025) \\ & Voting neural lasso & 1.419\({}^{**}\) (0.360) & 0.976\({}^{**}\) (0.017) & 0.998 (0.025) \\ \hline \multirow{3}{*}{p=200} & Statistical lasso & 1.806 (0.383) & 0.910 (0.053) & 1 (0) \\ & Standard neural lasso & 2.338\({}^{**}\) (0.717) & 0.827\({}^{**}\) (0.166) & 0.995 (0.035) \\ \cline{1-1} & Restricted neural lasso & 1.821 (0.395) & 0.910 (0.065) & 1 (0) \\ \cline{1-1} & Voting neural lasso & 1.403\({}^{**}\) (0.425) & 0.992\({}^{**}\) (0.007) & 0.990 (0.049) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results obtained for the linear scenario with synthetic data. For each of the three statistics, the mean and average standard deviation (in parentheses) are shown. Differences with respect to the statistical lasso algorithm at the 0.05 and 0.01 significance levels are denoted by * and **, respectively.
the test set was made up of 149.
* Attention Deficit Hyperactivity Disorder (ADHD). It contains the responses provided by 59 mothers of children with ADHD to the Behavior Rating Inventory of Executive Function-2 containing 63 items [25]. This dataset has two possible dependent variables measuring the degree of inattention and the degree of hyperactivity of the children as measured by the ADHD rating scale [26]. The training set for each repetition consists of 47 observations and the validation set consists of 12 observations.
As with the previous experiment, 100 repeated validations are performed, the number of folds K is set to five, and the validation set contains 20% of the training data. The obtained results, shown in Table 2, strengthen the conclusions obtained with the synthetic data. In particular, it is observed that the voting neural lasso obtains an MSE similar to that of the statistical lasso but with the advantage of using a significantly smaller number of predictors. It is also observed that the worst performance is obtained with the standard neural lasso. In addition, it can be seen that the statistical lasso and the restricted neural lasso obtain practically identical results.
\begin{table}
\begin{tabular}{l l c c} \hline \hline Dataset & Method & MSE & Selected Var. (\%) \\ \hline \hline \multirow{3}{*}{White wine quality} & Statistical lasso & 0.567 (0.027) & 0.899 (0.087) \\ & Standard neural lasso & 0.566 (0.027) & 0.960\({}^{**}\) (0.073) \\ & Restricted neural lasso & 0.567 (0.027) & 0.898 (0.084) \\ & Voting neural lasso & 0.566 (0.028) & 0.905 (0.070) \\ \hline \multirow{3}{*}{Boston housing} & Statistical lasso & 25.530 (5.603) & 0.864 (0.093) \\ & Standard neural lasso & 25.865 (5.844) & 0.910\({}^{*}\) (0.082) \\ & Restricted neural lasso & 25.529 (5.600) & 0.865 (0.093) \\ & Voting neural lasso & 25.611 (5.625) & 0.764\({}^{*}\) (0.098) \\ \hline \multirow{3}{*}{Abalone} & Statistical lasso & 5.063 (0.420) & 0.981 (0.048) \\ & Standard neural lasso & 5.334\({}^{**}\) (0.458) & 0.571\({}^{**}\) (0) \\ & Restricted neural lasso & 5.061 (0.420) & 0.981 (0.048) \\ & Voting neural lasso & 5.060 (0.418) & 0.964\({}^{*}\) (0.062) \\ \hline \multirow{3}{*}{Suicide attempt} & Statistical lasso & 31.126 (2.380) & 0.095 (0.123) \\ & Standard neural lasso & 31.915\({}^{*}\) (2.276) & 0.683\({}^{**}\) (0.282) \\ & Restricted neural lasso & 31.127 (2.382) & 0.078 (0.133) \\ & Voting neural lasso & 31.025 (2.424) & 0.002\({}^{**}\) (0.008) \\ \hline \multirow{3}{*}{ADHD} & Statistical lasso & 3.616 (1.389) & 0.257 (0.065) \\ & Standard neural lasso & 3.680 (1.433) & 0.334\({}^{**}\) (0.229) \\ \cline{1-1} & Restricted neural lasso & 3.614 (1.388) & 0.252 (0.064) \\ \cline{1-1} & Voting neural lasso & 3.787 (1.230) & 0.145\({}^{**}\) (0.034) \\ \hline \multirow{3}{*}{ADHD} & Statistical lasso & 3.465 (1.251) & 0.312 (0.153) \\ \cline{1-1} & Standard neural lasso & 3.883\({}^{*}\) (1.686) & 0.346 (0.205) \\ \cline{1-1} & Restricted neural lasso & 3.465 (1.259) & 0.315 (0.159) \\ \cline{1-1} & Voting neural lasso & 3.637 (1.198) & 0.093\({}^{**}\) (0.029) \\ \hline \end{tabular}
\end{table}
Table 2: Results obtained for the linear scenario with real data. For each of the three statistics, the mean and average standard deviation (in parentheses) are shown. Differences with respect to the statistical lasso algorithm at the 0.05 and 0.01 significance levels are denoted by * and **, respectively.
### Experiment 3: Logistic case, Real data
This last experiment is intended to test the performance of the neural lasso in the logistic scenario. For this purpose, three databases obtained from the UCI repository and one database of our own are used. A brief description of these databases is given below.
* UCI Wisconsin Breast cancer [27]. This dataset is composed of 569 observations. Each observation has 30 predictors and a dependent variable indicating whether the predictors were obtained from a malignant tumor. The training set was made up of 445 observations while the test set consisted of 124.
* UCI Spam [28]. This dataset is made up of 4601 instances. Each of them contains 57 predictors and one dependent variable indicating whether the email was spam. The training set consisted of 3975 observations while the test set comprised 626.
* UCI Ionosphere [29]. This database is composed of 351 instances with 34 predictors and a dependent variable indicating whether the radar signal passed through the ionosphere or not. The training set was made up of 299 observations while the test set consisted of 52.
* Suicidal Behaviour [30]. This database consists of 700 observations. Each contains 106 predictors consisting of responses to items of various scales, and a dependent variable indicating whether the respondent had recently made an attempt.
The set-up used was similar to that of the two previous sections (K equal to five, 100 repetitions, and the validation set composed of 20% of the training data). The results obtained are shown in Table 3.
Results obtained for the logistic case are similar to those obtained in the linear scenario and presented in the previous two sections. It is observed that the best results are achieved by the voting neural lasso in three of the four settings. A significantly lower accuracy than that of the statistical lasso is obtained only on the spam data set. It is also observed that the restricted neural lasso and the statistical lasso obtain equivalent results, which again shows the convergence of the neural technique with the statistical one. A small difference with respect to the previous results is that the standard neural lasso obtains better results than the statistical lasso in two settings (Cancer and Ionosphere).
## 5 Conclusions
In this work, the lasso algorithm has been implemented by means of neural networks. Specifically, the network layout has been defined and three possible optimization algorithms for estimating its weights have been compared. It has been observed that estimating the weights in the way a neural network is usually trained results in poor performance. It has also been shown that it is possible to mimic the optimization of the statistical lasso algorithm with a neural network, obtaining almost identical results. The only difference is that the former uses coordinate descent while the latter uses gradient descent. This result brings the fields of Statistics and Machine Learning closer. Finally, an algorithm using a majority vote has been proposed, which takes into account in how many of the cross-validation scenarios a variable is considered significant. This third algorithm has shown substantially better performance than the widely used statistical lasso. In particular, it has been shown that the voting neural lasso either obtains a lower error or achieves a better variable selection in both the linear and logistic cases. Moreover, these results have been obtained using training sets that present great diversity. They contain a number of observations ranging from only 47 to 4000 and a number of predictors varying from 9 to 200.
These results open up new lines of research, such as developing neural versions of other shrinkage techniques such as the elastic net, or extending these algorithms to non-linear versions using the flexibility of neural networks. It is also important to note that the development of the voting neural lasso has been limited to simple cross-validation, which is the information available to the other techniques. However, the use of repeated validations or repeated
\begin{table}
\begin{tabular}{l l c c} \hline \hline Dataset & Method & ACC & Selected Var. (\%) \\ \hline \hline \multirow{4}{*}{Cancer} & Statistical lasso & 0.963 (0.016) & 0.359 (0.092) \\ & Standard neural lasso & 0.964 (0.018) & 0.160\({}^{**}\) (0.039) \\ & Restricted neural lasso & 0.964 (0.016) & 0.360 (0.096) \\ & Voting neural lasso & 0.969\({}^{**}\) (0.015) & 0.111\({}^{**}\) (0.018) \\ \hline \multirow{4}{*}{Spam} & Statistical lasso & 0.923 (0.011) & 0.926 (0.024) \\ & Standard neural lasso & 0.904\({}^{**}\) (0.014) & 0.528\({}^{**}\) (0.056) \\ & Restricted neural lasso & 0.924 (0.011) & 0.927 (0.024) \\ & Voting neural lasso & 0.915\({}^{**}\) (0.010) & 0.462\({}^{**}\) (0.025) \\ \hline \multirow{4}{*}{Ionosphere} & Statistical lasso & 0.828 (0.048) & 0.448 (0.079) \\ & Standard neural lasso & 0.823 (0.051) & 0.388\({}^{**}\) (0.071) \\ & Restricted neural lasso & 0.827 (0.047) & 0.447 (0.080) \\ & Voting neural lasso & 0.829 (0.048) & 0.245\({}^{**}\) (0.040) \\ \hline \multirow{4}{*}{Suicide} & Statistical lasso & 0.650 (0.030) & 0.093 (0.057) \\ & Standard neural lasso & 0.627\({}^{**}\) (0.048) & 0.166\({}^{**}\) (0.253) \\ \cline{1-1} & Restricted neural lasso & 0.651 (0.029) & 0.088 (0.061) \\ \cline{1-1} & Voting neural lasso & 0.652 (0.031) & 0.031\({}^{**}\) (0.010) \\ \hline \end{tabular}
\end{table}
Table 3: Results obtained for the logistic scenario with real data. For each of the two statistics, the mean and average standard deviation (in parentheses) are shown. Differences with respect to the statistical lasso algorithm at the 0.05 and 0.01 significance levels are denoted by * and **, respectively.
cross-validations, and obtaining confidence intervals on them, might result in a more robust algorithm.
## Funding
This research was partially funded by: Ministerio de Ciencia e Innovacion, Proyectos de Transicion Ecologica y Transicion Digital TED2021-130980B-I00, and Instituto Salud Carlos III, grant number DTS21/00091.
## Data availability
The real data used in this study for the linear regression problem can be obtained from the UCI repository ([https://archive.ics.uci.edu/datasets](https://archive.ics.uci.edu/datasets)). The real data used for the logistic regression experiment are available from the corresponding author upon request.
## Declarations
**Conflict of interest.** The authors have no relevant financial or non-financial interests to disclose.
|
2309.04037 | SRN-SZ: Deep Leaning-Based Scientific Error-bounded Lossy Compression
with Super-resolution Neural Networks | The fast growth of computational power and scales of modern super-computing
systems have raised great challenges for the management of exascale scientific
data. To maintain the usability of scientific data, error-bound lossy
compression is proposed and developed as an essential technique for the size
reduction of scientific data with constrained data distortion. Among the
diverse datasets generated by various scientific simulations, certain datasets
cannot be effectively compressed by existing error-bounded lossy compressors
with traditional techniques. The recent success of Artificial Intelligence has
inspired several researchers to integrate neural networks into error-bounded
lossy compressors. However, those works still suffer from limited compression
ratios and/or extremely low efficiencies. To address those issues and improve
the compression on the hard-to-compress datasets, in this paper, we propose
SRN-SZ, which is a deep learning-based scientific error-bounded lossy
compressor leveraging the hierarchical data grid expansion paradigm implemented
by super-resolution neural networks. SRN-SZ applies the most advanced
super-resolution network HAT for its compression, which is free of time-costing
per-data training. In experiments compared with various state-of-the-art
compressors, SRN-SZ achieves up to 75% compression ratio improvements under the
same error bound and up to 80% compression ratio improvements under the same
PSNR than the second-best compressor. | Jinyang Liu, Sheng Di, Sian Jin, Kai Zhao, Xin Liang, Zizhong Chen, Franck Cappello | 2023-09-07T22:15:32Z | http://arxiv.org/abs/2309.04037v3 | # SRN-SZ: Deep Learning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks
###### Abstract
The fast growth of computational power and scales of modern super-computing systems have raised great challenges for the management of exascale scientific data. To maintain the usability of scientific data, error-bounded lossy compression is proposed and developed as an essential technique for the size reduction of scientific data with constrained data distortion. Among the diverse datasets generated by various scientific simulations, certain datasets cannot be effectively compressed by existing error-bounded lossy compressors with traditional techniques. The recent success of Artificial Intelligence has inspired several researchers to integrate neural networks into error-bounded lossy compressors. However, those works still suffer from limited compression ratios and/or extremely low efficiencies. To address these issues and improve the compression on the hard-to-compress datasets, in this paper, we propose SRN-SZ, which is a deep learning-based scientific error-bounded lossy compressor leveraging the hierarchical data grid expansion paradigm implemented by super-resolution neural networks. SRN-SZ applies the most advanced super-resolution network HAT for its compression, which is free of time-costing per-data training. In experiments compared with various state-of-the-art compressors, SRN-SZ achieves up to 75% compression ratio improvements under the same error bound and up to 80% compression ratio improvements under the same PSNR over the second-best compressor.
error-bounded lossy compression, deep learning, super-resolution.
## I Introduction
The rapid growth of the computing power of worldwide exascale supercomputers has enabled the scientific applications running on them to greatly expand their scales and outputs. Nevertheless, the data storage capacity and memory bandwidth of those machines have not developed fast enough to catch up with the increasingly huge amount of data generated by those applications, raising the demand for advanced data reduction techniques to efficiently store, transfer, and analyze those data. To this end, error-bounded lossy compression has been recognized as the most suitable strategy for managing extremely large amounts of scientific data. Compared to lossless compression techniques, which typically only halve the data size, it can reduce the data size to 10%, 1%, or even 0.1% of the original. Unlike many existing lossy compressors (such as the JPEG compressor for image data) that do not constrain the point-wise data error, error-bounded lossy compression can control the point-wise data distortion according to the user's requirements. Therefore, error-bounded lossy compression is of great significance for boosting the utility of scientific data.
Existing state-of-the-art scientific error-bounded lossy compressors with diverse compression ratios and speeds, such as SZ3 [1, 2], ZFP [3], and SPERR [4], have shown advantages in various practical use cases. However, despite this success, limitations persist: across the diverse archetypes of existing compressors, the compression of certain datasets remains clearly under-optimized, suffering from low compression ratios, which is an ongoing challenge for error-bounded lossy compression research.
Inspired by the great breakthroughs in the Artificial Intelligence field, several attempts have been made to leverage neural networks in error-bounded lossy compression. The autoencoder-based AE-SZ [5] and the coordinate-network-based CoordNet [6] are two typical examples. Those deep learning-based compressors may provide well-optimized compression ratios in certain cases, but their limitations are still obvious. The coordinate-network-based compressors [6, 7, 8] suffer from extremely low compression efficiency, as they need to train a new network separately for each input. Although autoencoder-based compressors such as [5, 9] can leverage pre-trained networks to avoid per-input training, their compression ratios cannot outperform SZ3 in most cases [5].
To address the compression of hard-to-compress datasets and overcome the limitations of existing deep learning-based error-bounded lossy compression, in this paper we propose SRN-SZ, a new deep learning-based error-bounded lossy compression framework. The core innovation of SRN-SZ is to abstract the compression and decompression of scientific data grids into a hierarchical paradigm of data grid super-resolution; to the best of our knowledge, this is the first work to integrate super-resolution neural networks into an error-bounded lossy compressor. Compared with autoencoders and coordinate networks, super-resolution networks have two-fold advantages: unlike coordinate networks, they can be pre-trained before the practical compression tasks, and unlike autoencoders, they do not generate any latent information that must be stored as part of the compressed data. Benefiting from those advantages, SRN-SZ achieves acceptable efficiency and further improved compression ratios over state-of-the-art error-bounded lossy compressors on multiple hard-to-compress datasets.
The contributions of our paper are detailed as follows:
* We propose a new scientific error-bounded lossy compressor SRN-SZ, in which the compression is performed by hierarchical data grid expansion implemented with a hybrid of super-resolution networks and interpolations.
* Leveraging the Hybrid Attention Transformer (HAT) network, we designed a specialized training pipeline with several adaptive techniques to optimize the super-resolution quality of scientific data.
* We carry out systematic evaluations with SRN-SZ and 5 other state-of-the-art scientific error-bounded lossy compressors on various scientific datasets from different domains. According to the experimental results, SRN-SZ achieves up to 75% compression ratio improvements under the same error bound and up to 80% compression ratio improvements under the same PSNR.
The rest of this paper is organized as follows: In Section II, we discuss related work. Section III presents the research problem formulation and background. The overall framework of SRN-SZ is demonstrated in Section IV. The compression pipeline and network training pipeline of SRN-SZ are presented in Section V and Section VI, respectively. In Section VII, the evaluation results are provided and analyzed. Section VIII concludes this work and discusses future work.
## II Related Work
In this section, we discuss the related works in 3 categories: Traditional scientific error-bounded lossy compression, deep learning-based scientific lossy compression, and super-resolution neural networks.
### _Traditional Scientific Error-bounded Lossy Compression_
Traditional scientific error-bounded lossy compressors can be classified into prediction-based, transform-based, and dimension-reduction-based ones. The prediction-based compressors utilize different data prediction techniques for the compression, such as linear regression (SZ2 [10]) and interpolation (SZ3 [1] and QoZ [11]). Transform-based compressors decorrelate the input data with data transformation techniques so that the transformed data (a.k.a. coefficients) turn out to be much easier to compress than the original dataset; they then compress the coefficients to achieve a high compression ratio. Typical examples include ZFP [3], leveraging an orthogonal discrete transform, and SPERR [4], integrating the CDF 9/7 wavelet transform. With dimension-reduction techniques such as (high-order) singular value decomposition (SVD), dimension-reduction-based compressors such as TTHRESH [12] can perform data compression very effectively. Besides the CPU-based compressors, several GPU-specialized error-bounded lossy compressors have also been developed for better parallelization and throughput; typical examples are CuSZ [13, 14] and FZ-GPU [15].
### _Deep Learning-based Scientific Lossy Compression_
The great success of recent Artificial Intelligence research has started to boost the development of several related research fields, including scientific error-bounded lossy compression. Several research works that leverage deep neural networks in error-bounded lossy compression have been proposed [5, 6, 7, 8, 9]. There are mainly 2 archetypes: autoencoder-based compressors, which store the autoencoder-encoded latent vectors for compression, and coordinate-network-based compressors, which train networks online for each input to map the data coordinates to data values. For autoencoder-based compressors, AE-SZ [5] integrates Sliced-Wasserstein autoencoders, and Hayne et al. [9] leverage a double-level autoencoder for compressing 2D data. Examples of coordinate-network-based compressors include NeurComp [8], CoordNet [6], and [7].
### _Super-resolution Neural Networks_
Following SRCNN [16], which introduced a convolutional neural network model for image super-resolution tasks, a large number of convolutional neural network models [17, 18, 19] have been proposed for super-resolution. With the development of the Transformer [20] and its adaptation to computer vision tasks [21, 22, 23], vision-transformer-based neural networks like [24, 25, 26] have achieved state-of-the-art performance on the image super-resolution task. Among those works, HAT [26] is the most impressive one: with a carefully designed hybrid attention model, it has the widest scope of feature extraction for reconstructing each data point and achieves state-of-the-art performance.
## III Problem Formulation and Backgrounds
### _Mathematical Formulations for Error-bounded Lossy Data Compression_
In this subsection, we propose several key mathematical definitions and the mathematical formulation of our research target for this paper.
#### Iii-A1 Compression ratio and bit rate
Compression ratio is defined by the input data size divided by the compressed data size. Specifically, for input data \(X\) and compressed data \(Z\), compression ratio \(\rho\) is:
\[\rho=\frac{|X|}{|Z|} \tag{1}\]
According to Eq. 1, a higher compression ratio means a better (smaller) compressed size, and vice versa. In the visualization of experimental results, researchers often plot curves with another metric closely related to the compression ratio, namely the bit rate. The bit rate is defined as the average number of bits used in the compressed data to store each data element of the input, which can be expressed as (denoting the bit rate by \(b\)):

\[b=\frac{8\cdot sizeof(x)}{\rho}=\frac{8\cdot sizeof(x)\cdot|Z|}{|X|} \tag{2}\]

in which \(x\) is an element of the input \(X\), and sizeof() returns its size in bytes. Since the bit rate is inversely proportional to the compression ratio, a lower bit rate is better.
#### Iii-A2 PSNR
PSNR (Peak Signal-to-Noise Ratio) is one of the most important data distortion metrics for evaluating the quality of the decompressed data from lossy compression. It is defined as follows:
\[PSNR=20\log_{10}{vrange(X)}-10\log_{10}{mse(X,X^{\prime})}, \tag{3}\]
where \(X\) is the input data and \(X^{\prime}\) is the decompressed data. vrange() calculates the value range of a data array, and _mse_ refers to the mean-squared error. Fixing the input data (and thus the data range), a smaller mean-squared error leads to a higher PSNR; therefore, a higher PSNR means higher precision of the decompressed data.
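To make these metrics concrete, the following sketch computes them with NumPy; it is an illustration rather than SRN-SZ's actual implementation, and it assumes the compressed size in bytes is already known.

```python
import numpy as np

def compression_ratio(orig_bytes: int, comp_bytes: int) -> float:
    # Eq. (1): input size divided by compressed size; higher is better.
    return orig_bytes / comp_bytes

def bit_rate(comp_bytes: int, n_elements: int) -> float:
    # Eq. (2): bits spent per stored element; lower is better.
    return 8.0 * comp_bytes / n_elements

def psnr(x: np.ndarray, x_dec: np.ndarray) -> float:
    # Eq. (3): value-range-based PSNR; higher means more precise output.
    vrange = float(x.max() - x.min())
    mse = float(np.mean((x - x_dec) ** 2))
    return 20.0 * np.log10(vrange) - 10.0 * np.log10(mse)
```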
#### Iii-A3 Research target
The objective of SRN-SZ is to optimize the compression process with regard to a specific target: maximizing the decompression PSNR under any given compression ratio. Mathematically speaking, given the input data \(X\), compressed data \(Z\), decompression output \(X^{\prime}\), error bound \(e\), and the target compression ratio \(T\), we optimize the compressor \(C\) and decompressor \(D\) of SRN-SZ via the following optimization problem (\(Z=C(X)\) and \(X^{\prime}=D(Z)\)):
\[\begin{array}{ll}\underset{C,\,D}{\text{maximize}}&PSNR(X,X^{\prime})\\ \text{s.t.}&\frac{|X|}{|Z|}=T,\\ &|x_{i}-x^{\prime}_{i}|\leq e,\ \ \forall x_{i}\in X,\end{array} \tag{4}\]
In this paper, we propose a deep learning-based compressor, leveraging the super-resolution neural network for the optimization of Eq. 4.
### _Challenge for Error-bounded Lossy Compression: Low-compression-ratio Datasets_
Recently proposed scientific error-bounded lossy compressors have succeeded in dramatically outperforming the older state-of-the-art compressors. Compared with the historical SZ 2.1 [10], SZ3 [2] has improved the compression ratio by up to 460% [1] under the same data distortion. With higher computational costs, wavelet-based compressors such as SPERR [4] may double or even triple the compression ratios of SZ3.
However, those exciting improvements in compression ratios are concentrated on datasets that already exhibit relatively high compression ratios (e.g., over 100). In other words, the recently proposed works with advanced data compression techniques fail to improve the compression of datasets with relatively low compression ratios to extents similar to what they achieve in high-ratio cases. In Figure 1, we present the bit rate-PSNR curves from the compression of 4 scientific datasets with representative existing error-bounded lossy compressors: the prediction-based SZ2 [10] and SZ3 [1, 2], the SVD-based TTHRESH [12], and the wavelet-transform-based SPERR [4] (the compression result of TTHRESH is not shown in Figure 1 (b) as TTHRESH does not support 2D data input). For datasets like Miranda [27] (Figure 1 (a)), SZ3 has boosted the compression ratio of SZ2 by over 100%, and SPERR further achieves 2x-3x the compression ratio of SZ3. However, on other datasets, those 4 compressors all have relatively low compression ratios. On certain datasets such as NYX-Dark Matter Density and Hurricane-QRain (Figure 1 (c) and (d)), SPERR and TTHRESH have lower compression ratios than SZ3 does, even though they are designed with more complicated data processing techniques and much higher computational costs.
It is worth noting that the low-compression-ratio data snapshots are actually the bottleneck of compression effectiveness, because their compressed size occupies a very large portion of the total compressed size of a dataset whose data fields have diverse characteristics. For example, compressing 100TB of data with a compression ratio of 100 generates 1TB of compressed data, which means that at most 1TB of space can be saved by further optimizing the compression. Nevertheless, if the original data has the same size of 100TB but only a potential compression ratio of 5 (20TB of compressed data), merely improving the compression ratio by 10% already leads to around 1.8TB of storage cost reduction. Therefore, overcoming the limitation of existing compressors on low-compression-ratio data is significant for optimizing the overall compression of a large variety of scientific simulation datasets.
## IV SRN-SZ Design Overview
We propose SRN-SZ, a deep learning-based error-bounded lossy compressor built on a modular compression framework that integrates a hybrid data reconstruction model with both interpolators and super-resolution neural networks. As shown in Figure 2, the compression framework of SRN-SZ consists of 4 modules: data grid sparsification, data grid expansion, Huffman encoding, and Zstd lossless compression. Moreover, the super-resolution neural networks are first pre-trained with a large assorted dataset collected from scientific databases and then fine-tuned with domain-specific datasets before being leveraged in the data grid expansion module of SRN-SZ. In the compression process, SRN-SZ first extracts a sparse data grid from the original input; next, this sparse data grid is expanded step by step with super-resolution networks and interpolators, eventually
to a lossy reconstruction of the full-size input grid. Compared to existing deep learning-based compressors, which leverage autoencoder-like networks [5, 28] to generate compact representations or coordinate networks [6, 7, 8] mapping data point indices to data values, SRN-SZ is free of both the storage cost of the compact representations (required by autoencoders) and per-input network training (required by coordinate networks).
We demonstrate the detailed compression algorithm of SRN-SZ in Algorithm 1. Lines 1-2 correspond to data grid sparsification, Lines 3-10 correspond to data grid expansion, and Lines 11-12 correspond to Huffman encoding and Zstd lossless compression. To bound the point-wise compression error, linear quantization is applied within the data grid expansion module; for clarity of demonstration, it is not displayed in Figure 2.
```
0: Input data \(D\), error-bound \(e\), grid sparsification rate \(r\), minimum SRN size \(s\)
0: Compressed data \(Z\)
1: Sparsify \(D\) into \(D_{0}\) with rate \(r\). Save \(D_{0}\) losslessly /*Data grid sparsification.*/
2: Set current reconstructed data grid \(D^{{}^{\prime}}\gets D_{0}\), Quantized errors \(Q\leftarrow\{\}\)
3:while\(size(D^{{}^{\prime}})!=size(D)\)do
4:if\(size(D^{{}^{\prime}})\leq s\)then
5:\(D^{{}^{\prime}},q=Interp\_and\_Quantize(D,D^{{}^{\prime}},e)\)/*Expand \(D^{{}^{\prime}}\) with interpolation.*/
6:else
7:\(D^{{}^{\prime}},q=HAT\_and\_Quantize(D,D^{{}^{\prime}},e)\)/*Expand \(D^{{}^{\prime}}\) with HAT network.*/
8:endif
9:\(Q\gets Q\bigcup q\). /*Merge newly acquired quantized errors \(q\).*/
10:endwhile
11:\(H\leftarrow\) Huffman_Encode(\(Q\)). /*Huffman encoding*/
12:\(Z\leftarrow\) Zstd(\(H,D_{0}\)). /*Zstd compression*/
```
**Algorithm 1** SRN-SZ Compression Algorithm
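A minimal Python rendering of Algorithm 1's control flow is sketched below. The names `interp_expand` and `hat_expand` are hypothetical placeholders for the QoZ-style interpolator and the HAT super-resolution step, and the Huffman/Zstd stages are elided; the sketch only illustrates the alternation between the two expansion methods.

```python
import numpy as np

def srn_sz_compress_sketch(data, e, rate=32, min_srn_size=64,
                           interp_expand=None, hat_expand=None):
    # Data grid sparsification: keep every `rate`-th point, stored losslessly.
    base = data[tuple(slice(None, None, rate) for _ in range(data.ndim))]
    grid, quant_codes = base.copy(), []
    # Data grid expansion: double each dimension per step until full size.
    while grid.shape != data.shape:
        if min(grid.shape) <= min_srn_size:
            grid, q = interp_expand(data, grid, e)   # small grids: interpolation
        else:
            grid, q = hat_expand(data, grid, e)      # large grids: HAT network
        quant_codes.append(q)                        # merge quantized errors
    return base, quant_codes  # then Huffman-encode codes and Zstd the stream
```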
## V SRN-SZ Compression Pipeline
In this section, we describe the steps in the SRN-SZ Compression pipeline in detail. Since the encoding and lossless modules of SRN-SZ are the same as the ones in SZ3 and QoZ [1, 2, 11], in the following subsections, we will mainly discuss the data grid sparsification and data grid expansion.
### _Data Grid Sparsification_
The level-wise hierarchical data grid reconstruction paradigm has shown advantages in MGARD [29, 30], SZ3 [1, 2], and QoZ [11], and SRN-SZ adopts it for its compression process. It starts from a sparse data grid sampled from the original input dataset. An example for 2D input data is shown in Figure 3: certain data points are uniformly sampled from the full data grid with a fixed stride. The data points in this sparsified data grid are losslessly saved, and the remaining data points are reconstructed in the data grid expansion process. The reason SRN-SZ losslessly saves the sparsified grid, instead of directly reconstructing a lossy version of it from scratch as SZ3 does, is as follows: according to the comparison between the evaluations of SZ3 and QoZ [11], in hierarchical level-wise data reconstruction, an accurate base is essential for preserving the high reconstruction quality of the data points, while introducing only negligible storage overhead. To balance the compression ratio loss and the data reconstruction accuracy, we conducted some tests and then specified the dimension-wise rate of data grid sparsification as \(\frac{1}{32}\), i.e., we reduce the data grid to \(\frac{1}{32}\) along each dimension and then save the sparsified grid as the starting point of the data grid expansion.
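A sketch of this sparsification step (assuming a NumPy array input) also shows how small the lossless base is:

```python
import numpy as np

def sparsify(grid: np.ndarray, rate: int = 32):
    # Keep every `rate`-th point along each dimension as the lossless base.
    base = grid[tuple(slice(None, None, rate) for _ in range(grid.ndim))]
    return base, base.size / grid.size  # base and its storage overhead
```

For a 512x512 field and rate 32, the base is a 16x16 grid, i.e., under 0.1% of the input, which matches the claim that the lossless base introduces only negligible storage overhead.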
### _Data Grid Expansion_
Based on the sparsified data grid, the data grid expansion (i.e., reconstruction) process is involved in both the compression and the decompression of SRN-SZ. In the compression, the data grid expansion is executed to acquire the reconstruction errors of the data points; those errors are then quantized and encoded, serving as correction offsets in the decompression. Moreover, during both the compression and the decompression, the super-resolution and the error quantization (error correction in decompression) are executed alternately, which can maximally
Fig. 1: Rate-distortion (PSNR) of several existing error-bounded compressors.
Fig. 3: Data grid sparsification
Fig. 2: SRN-SZ compression framework
preserve the accuracy of the data grid expansion. As presented in Figure 4, the data grid expansion is performed iteratively, step by step, until the whole data grid has been reconstructed. In each step, the reconstructed data grid is expanded by 2x along each dimension; therefore, its implementation is compatible with both deep learning-based super-resolution neural networks and traditional interpolation methods.
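The error-bounded correction applied after each expansion step is not spelled out above; under the assumption that SRN-SZ inherits the standard SZ-style linear quantization from SZ3/QoZ, one step can be sketched as follows.

```python
import numpy as np

def quantize_step(pred: np.ndarray, truth: np.ndarray, e: float):
    # Quantize prediction errors into integer codes with bin width 2e.
    codes = np.rint((truth - pred) / (2.0 * e)).astype(np.int64)
    corrected = pred + codes * 2.0 * e  # replaces the raw prediction
    # Each corrected value lies within +/- e of the truth by construction.
    assert np.all(np.abs(corrected - truth) <= e * (1 + 1e-9))
    return codes, corrected
```

The integer codes are what Huffman encoding and Zstd later compress; handling of unpredictable outlier values, present in real SZ-family compressors, is omitted in this sketch.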
#### V-B1 HAT super-resolution network
The super-resolution network is the most important data grid expansion technique in SRN-SZ, as it is always applied in the last iteration step of the data grid expansion, which reconstructs most of the data points in the input data (about 75% in the 2D case and about 87.5% in the 3D case). The network SRN-SZ leverages is the HAT (Hybrid Attention Transformer) network [26], a recently proposed work for image super-resolution that has been proven to be state-of-the-art. The network architecture of HAT is illustrated in Figure 5. Developed from [25, 19], HAT is a very deep residual [31] neural network with transformers [20] as its basic components. HAT has 3 main modules: the initial convolutional layers for shallow feature extraction, the deep feature extraction module integrated with residual hybrid attention groups (RHAG), and a reconstruction module leveraging the pixel shuffle technique [32]. The RHAG blocks in the HAT network can be broken down into HAB (hybrid attention block), OCAB (overlapping cross-attention block), and convolutional layers. For more details of the HAT network, we refer readers to [26]. The main advantage of HAT is that, according to the analysis presented in [26], its design empowers it to make use of a large region of data points for computing each value in its super-resolution output. Therefore, both local and global data patterns can be well utilized in the super-resolution process.
Although HAT was originally designed for the super-resolution of natural images, we managed to adapt it to the scientific data grid expansion process in SRN-SZ. Feeding an intermediate data grid of size X x Y (or X x Y x Z) into HAT, SRN-SZ uses the super-resolution output of size 2X x 2Y (or 2X x 2Y x 2Z) as the data grid expansion result of one step. Some key points in bridging the scientific data and the HAT network are: First, the input and output channels in HAT have been reduced from 3 to 1. Second, the input data grid is normalized to [0, 1] before being fed into the network. Last, for 3D data inputs, 2D HAT models can still be used, but the inputs are preprocessed into 2D slices (along all 3 dimensions) instead of 3D blocks. The reason SRN-SZ applies 2D networks to 3D data is that 3D HAT models suffer from extremely high computational costs for training and inference, making them unacceptable in terms of flexibility and scalability. Figure 6 presents the details of performing 3D super-resolution with those 2D slices. Specifically, with a partially reconstructed 3D data grid (blue points), SRN-SZ performs super-resolution on it with the HAT network in 3 different directions: on the top/bottom faces (red points), on the left/right faces (green points), and on the front/back faces (purple points). The super-resolution results on the edges are the averages of 2 directions, and the point at the cube center is reconstructed by a multi-dimensional spline interpolation, which was introduced in [33] and will be detailed in the next subsection and Figure 7 (b).
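A hedged PyTorch-style sketch of this adaptation is given below; `hat` stands for any single-channel 2D super-resolution module with a 2x upscale factor (the exact HAT interface may differ), and the merge of the three slice directions is simplified relative to Fig. 6.

```python
import torch

def expand_2x(hat: torch.nn.Module, grid: torch.Tensor) -> torch.Tensor:
    # grid: (H, W) tensor; HAT is fed a normalized single-channel image.
    lo, hi = grid.min(), grid.max()
    x = (grid - lo) / (hi - lo + 1e-12)        # normalize to [0, 1]
    y = hat(x[None, None])[0, 0]               # (1,1,H,W) -> (2H, 2W)
    return y * (hi - lo) + lo                  # undo the normalization

def expand_3d_slicewise(hat: torch.nn.Module, vol: torch.Tensor):
    # 3D volumes are handled as stacks of 2D slices along each of the three
    # axes; shared edges are averaged and cube centers fall back to spline
    # interpolation (both omitted here for brevity).
    per_axis = []
    for axis in range(3):
        slices = [expand_2x(hat, s) for s in vol.unbind(dim=axis)]
        per_axis.append(torch.stack(slices, dim=axis))
    return per_axis  # to be combined per Fig. 6
```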
#### V-B2 Interpolation-based data predictor
We have observed that, when the data grid being reconstructed is small, the super-resolution network does not work well. Therefore, in the initial steps of data grid expansion in which the
Fig. 4: Data grid expansion
Fig. 5: HAT network
Fig. 6: 3D super-resolution with 2D slices
Fig. 7: Interpolations in SRN-SZ
current data grid is smaller than a threshold (having a dimension shorter than 64), the traditional QoZ-based interpolation [11], which can auto-tune the best-fit interpolation configurations and error bounds, is leveraged for the grid expansion. In addition to the QoZ interpolation, following the design proposed by [33], SRN-SZ also leverages several advanced interpolation designs such as multi-dimensional spline interpolation. Figure 7 presents and compares these two interpolation methods; SRN-SZ dynamically selects the interpolation method for each interpolation level. This adaptive selection design improves both the efficiency of SRN-SZ and the reconstruction quality in the early steps of data grid expansion.
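As a concrete reference for this interpolation step, the 4-point cubic stencil commonly used by SZ3/QoZ-style interpolators predicts each midpoint from its four nearest known samples; a minimal 1D sketch is below (boundary points, which need shorter stencils, are omitted).

```python
import numpy as np

def cubic_midpoints(known: np.ndarray) -> np.ndarray:
    # Predict the midpoint between f1 and f2 with the 4-point cubic stencil:
    # p = (-f0 + 9*f1 + 9*f2 - f3) / 16
    f0, f1, f2, f3 = known[:-3], known[1:-2], known[2:-1], known[3:]
    return (-f0 + 9.0 * f1 + 9.0 * f2 - f3) / 16.0
```

Multi-dimensional spline interpolation, as in [33], blends such 1D passes across dimensions rather than interpolating one dimension at a time.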
## VI SRN-SZ Network Training
The super-resolution quality of the HAT network plays the most important role in optimizing the compression ratio with controlled data distortion in SRN-SZ, and the core of optimizing this super-resolution quality is the training process. The HAT networks in SRN-SZ are pre-trained offline, both with an assorted dataset and with domain-specific datasets. This design contributes to the flexibility and adaptability of SRN-SZ. Several strategies have been proposed for optimizing the training of the HAT networks in SRN-SZ. Figure 8 presents the HAT network training pipeline we designed for SRN-SZ. In the pipeline, each network is trained in two rounds: general training from scratch and domain-specific training for fine-tuning. The following subsections describe the key designs of this pipeline.
### _Training data collection and preprocessing_
We have collected training data snapshots from a variety of well-known scientific simulations, including CESM-ATM [34], RTM [35], OCEAN, Miranda [27], JHTDB [36], Hurricane-ISABEL [37], SCALE-LetKF [38], NYX [39], and so on. The full list of the scientific simulations used by SRN-SZ for HAT network training is shown in Table I. With those assorted data snapshots, we first decompose 3D data arrays into 2D data slices, next normalize them to the [0, 1] range, and then split all over-sized (over 480x480) slices into smaller slices (480x480) according to the setting in [26]. When yielding the training data batches, the low-resolution and high-resolution image pairs are randomly cropped from those slices. The widely used image data augmentation methods like random flip and rotation are excluded from SRN-SZ network training, as we observe that those augmentation strategies harm the quality of super-resolution on test data. This assorted and pre-processed dataset is used for the general pre-training of the HAT network from scratch.
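Under assumed array shapes, the preprocessing just described might look like the sketch below; the slicing axis loop, [0, 1] normalization, and 480x480 tiling mirror the text, while file I/O and batching are omitted.

```python
import numpy as np

def volume_to_training_slices(volume: np.ndarray, tile: int = 480):
    slices = []
    for axis in range(3):                      # decompose 3D arrays into 2D slices
        for s in np.moveaxis(volume, axis, 0):
            lo, hi = float(s.min()), float(s.max())
            if hi == lo:
                continue                       # skip constant slices
            s = (s - lo) / (hi - lo)           # normalize to [0, 1]
            if max(s.shape) <= tile:
                slices.append(s)               # small slices are kept whole
                continue
            for i in range(0, s.shape[0] - tile + 1, tile):
                for j in range(0, s.shape[1] - tile + 1, tile):
                    slices.append(s[i:i + tile, j:j + tile])
    return slices
```

Note that random flip/rotation augmentation is deliberately absent, matching the observation above that it harms super-resolution quality on scientific data.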
### _Domain-specific fine-tuning_
Datasets from different scientific domains and simulations present diverse patterns and characteristics. To make the trained network better adapt to such varied inputs, we fine-tune the super-resolution for certain scientific simulations that are intensively and consistently used for research and analysis. To this end, we develop domain-specific fine-tuning in SRN-SZ. After the initial training phase with the assorted database, SRN-SZ picks up several additional data snapshots generated by those simulations and then fine-tunes the network separately with the data of each simulation. In this way, SRN-SZ achieves improved compression ratios on multiple widely used scientific simulation datasets. We will compare the rate-distortion of SRN-SZ with and without domain-specific fine-tuning in Section VII-B5.
### _Denoise training with Gaussian random noise_
As discussed in Section V-B, the data grid to be expanded in SRN-SZ is a lossy sample of the original data input. At the same time, its super-resolution output needs to fit the original input as closely as possible. To simulate this situation in the training of the HAT networks for better super-resolution results, we propose denoise training in SRN-SZ. Specifically, instead of simply using full data grids and the corresponding down-sampled data grids as the training data pairs, SRN-SZ adds Gaussian noise to the down-sampled data grids before feeding them into the network in the training phase. In this way, the trained network becomes capable of denoising the input, yielding more accurate super-resolution outputs.
Fig. 8: SRN-SZ network training pipeline
Moreover, we observe that training networks with intense noise damages their effectiveness in low-error-bound cases, so we separately train 3 base networks with different noise intensities: strong noise (with a standard deviation of 1% of the data range), weak noise (with a standard deviation of 0.1% of the data range), and no noise. Those networks correspondingly serve different compression cases: high error bounds (larger than 1e-2), medium error bounds (1e-4 to 1e-2), and low error bounds (smaller than 1e-4).
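The training pairs for denoise training can be produced as in the sketch below, with the noise standard deviation expressed as a fraction of the slice's value range to match the 1%, 0.1%, and zero-noise settings above:

```python
import numpy as np

def make_denoise_pair(full: np.ndarray, noise_frac: float,
                      rng: np.random.Generator = np.random.default_rng()):
    low = full[::2, ::2]                               # 2x-downsampled input
    sigma = noise_frac * float(full.max() - full.min())
    if sigma > 0:
        low = low + rng.normal(0.0, sigma, low.shape)  # simulate lossy base
    return low, full    # network learns to denoise while super-resolving
```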
## VII Performance Evaluation
In this section, we describe the setup of our experiments and then present the experimental results together with our analysis. We evaluate the newly proposed SRN-SZ and compare it with five other state-of-the-art error-bounded lossy compressors [2, 4, 11, 12, 10].
### _Experimental Setup_
#### Vii-A1 Experimental environment and datasets
Our experiments are conducted on the Argonne Bebop supercomputer (for CPU-based tests) and the ALCF Theta supercomputer (for GPU-based tests). On the Bebop machine, we used nodes of the bdwall partition, each having an Intel Xeon E5-2695v4 CPU with 64 CPU cores and a total of 128GB of DRAM. On the Theta machine, each GPU node has 8 NVIDIA TESLA A100 GPUs.
We select 6 data fields from 4 real-world scientific applications in diverse scientific domains. Those datasets are frequently used for evaluating scientific error-bounded lossy compression [41]. We detail the information about the datasets and the fields in Table II. As suggested by domain scientists, some fields of the datasets listed above are transformed to their logarithmic domain for better visualization. For fairness of evaluation, the data snapshots used for the evaluations are never contained in the assorted training dataset or the corresponding fine-tuning datasets. However, for optimizing the compression, some data snapshots of the same data field (but from different runs of the application or from different time steps) are used for training (especially for fine-tuning).
#### Vii-A2 Comparison of lossy compressors in evaluation
In the experiments, SRN-SZ is evaluated together with five other state-of-the-art lossy compressors. Among those, 4 are traditional error-bounded lossy compressors: SZ3 [2], QoZ [11], SPERR [4], and FAZ [40]. The other one is the deep learning-based AE-SZ [5], which was verified in [5] to be one of the most effective autoencoder-based error-bounded lossy compressors. We do not perform comparison experiments with coordinate-network-based compressors because they suffer from very low compression speed (much slower than SRN-SZ), as they need to perform a network training process for each single compression task [6, 7, 8].
#### Vii-A3 Network training configurations
For the training of the HAT networks in SRN-SZ, we apply the network structure and training configurations described in [26]. In each training phase (including general training and domain-specific fine-tuning), we train the network on 8 GPUs for 200,000 iterations with a mini-batch size of 32. The initial learning rate is 2e-4 and is halved at steps [100K, 160K, 180K, 190K]. For the network training and compression of AE-SZ, we follow the configurations described in [5].
#### Vii-A4 Evaluation Metrics
In the compression experiments, we adopted the value-range-based error bound mode (denoted as \(\epsilon\)), which is equivalent to the absolute error bound (denoted as \(e\)) with the relationship \(e\) = \(\epsilon\cdot value\_range\). The evaluation results are based on the following key metrics:
* Decompression error verification: Verify that the decompression errors are strictly error-bounded.
* Compression ratio (CR) under the same error bound: Compression ratio is the metric mostly cared for by the users, for fair comparison, the compression ratios under fixed error bounds are presented.
* _Rate-PSNR plots_: Plot curves for compressors with the compression bit rate and decompression PSNR.
* Visualization with the same CR: Comparing the visual qualities of the reconstructed data from different compressors based on the same CR.
* Ablation Study: Verify the effectiveness of each SRN-SZ design component separately.
### _Evaluation Results and Analysis_
#### Vii-B1 Verification of compression errors versus error bound
First of all, we verify that the decompression errors of SRN-SZ are strictly constrained within the error bounds. To this end, we plot the histograms of the decompression errors for each compression task, two of which (on the QRAIN and QGRAUP fields of the Hurricane-ISABEL dataset) are presented in Figure 9. It can be clearly observed that the decompression errors of SRN-SZ always respect the error bound (\(e\)), with no out-of-bound point-wise decompression errors in any case. Having examined the error-bounded feature of SRN-SZ, in the following subsections we test, present, and analyze the compression ratios and qualities of SRN-SZ.
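This verification is simple to reproduce for any error-bounded compressor; a sketch of the check behind Figure 9:

```python
import numpy as np

def verify_error_bound(x: np.ndarray, x_dec: np.ndarray, e: float):
    err = x_dec - x
    assert float(np.abs(err).max()) <= e, "error bound violated"
    # Histogram of point-wise errors, as plotted in Fig. 9.
    return np.histogram(err, bins=100, range=(-e, e))
```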
#### Vii-B2 Compression ratio under the same error bounds
The compression ratios of all lossy compressors under the same
Fig. 9: Histograms of decompression errors from SRN-SZ
error bounds (1e-3, 1e-4, and 1e-5) are presented in Table III. An interesting fact is that, although proposed later than SZ3, some newer compressors (QoZ, SPERR, and FAZ) do not raise the compression ratios much on the tested datasets. In contrast, SRN-SZ improves the compression ratios of error-bounded lossy compressors in almost all of the tested compression cases, over a variety of datasets and error bounds. Particularly, under the error bound of 1e-4, SRN-SZ achieves a 75% compression ratio improvement over the second-best QoZ on the CLDHGH field of the CESM-ATM dataset, and under the error bound of 1e-3, SRN-SZ achieves a 44% compression ratio improvement on the FREQSH field of the same dataset. On the other datasets, SRN-SZ also obtains 3% to 20% compression ratio improvements. Last, compared with other deep learning-based compressors, SRN-SZ outperforms AE-SZ in an overall assessment.
#### Vii-B3 Rate distortion evaluation
Next, we present and analyze the rate-distortion evaluation of SRN-SZ and other state-of-the-art error-bounded lossy compressors.
Figure 10 displays the rate-distortion evaluation results of each lossy compressor on all datasets. In the plots, the x-axis is the bit rate and the y-axis is the PSNR. As in the case of same-error-bound compression ratios, SRN-SZ has the best rate-distortion curves on all the datasets. On the CESM-CLDHGH dataset, SRN-SZ achieves a \(60\%\) to \(80\%\) compression ratio improvement over the second-best SPERR in the PSNR range of 70 \(\sim\) 80. On the Ocean-TMXL dataset, SRN-SZ achieves a \(\sim\)20% compression ratio improvement over the second-best QoZ in the PSNR range of 60 \(\sim\) 70. Additionally, SRN-SZ outperforms all other compressors by about \(5\%\) to \(15\%\) in compression ratio on the rest of the datasets.
Those results show that, for certain datasets on which traditional or autoencoder-based lossy compressors can only deliver limited compression ratios, SRN-SZ has the potential to optimize the compression to a further extent. The reasons can be attributed to 3 factors. First, those datasets have complex data characteristics and patterns that traditional data modeling techniques cannot fit well. Second, the newly proposed compression framework of SRN-SZ enables the compressor to directly leverage a super-resolution network for the data prediction via data grid expansion (super-resolution), instead of applying a redundant autoencoder model whose latent vectors must be stored (as AE-SZ does). Third, the hybrid usage of interpolations and super-resolution networks lets the interpolation compensate for the limitations of neural networks when dealing with small data grids.
#### Vii-B4 Visualization of decompressed data
As an example of the high compression quality of SRN-SZ, in Figure 11 we present several visualizations of the decompression results for the CESM-CLDHGH data field from multiple compressors, together with the original data as the reference. For a fair comparison, for each compressor the data are compressed at a fixed compression ratio (around 32) and then decompressed. According to Figure 11 (we omit the visualization results of AE-SZ because it has poor visual quality, with PSNR \(\approx\) 53, under the specified compression ratio), in this case the decompressed data of SRN-SZ has the lowest distortion from the original input, with a PSNR of 68.5, which is 5 higher than that of the second-best FAZ. The zoomed regions also show that SRN-SZ best preserves the local data patterns. The local visualization of the SRN-SZ decompressed data is nearly identical to the original data, whereas the ones of the other compressors suffer from some quality degradation.
#### Vii-B5 Ablation Study
For verifying and understanding how the design details of SRN-SZ contribute to the overall compression quality, especially for the design components in the network pre-training pipelines, we conduct several ablation studies
Fig. 10: Rate Distortion Evaluation (PSNR)
for the network pre-training, identifying and quantifying the contributions of the corresponding design components.
First, we examine the impact of domain-specific fine-tuning (described in Section VI-B) on the training of the HAT networks in SRN-SZ. We tested the compression of SRN-SZ with networks free of domain-specific fine-tuning and then compared its rate-distortion with that of the ordinary SRN-SZ. This comparison is detailed in Figure 12, with 2 examples presented (on Ocean-TMXL and NYX-Dark Matter Density). It shows that the domain-specific fine-tuning process (the blue curves in Figure 12) consistently improves the compression rate-distortion over SRN-SZ without network fine-tuning (the orange curves in Figure 12).
Next, we address the importance of the SRN-SZ denoise training by analyzing and comparing the compression rate-distortion of SRN-SZ with fixed HAT networks, each trained with a certain intensity of noise added to the training data. In Figure 13, the rate-PSNR curves of SRN-SZ with HAT networks trained at 3 different noise intensities (zero noise, low noise of \(\sigma\)=1e-3, and high noise of \(\sigma\)=1e-2) are illustrated. Those configurations exhibit advantages over the others in different bit rate ranges. SRN-SZ with the high-noise-trained network outperforms the other configurations when the bit rate is smaller than 0.4 (corresponding to error bounds \(>\) 1e-2). The low-noise-trained HAT network optimizes the SRN-SZ compression under medium bit rates, and when the bit rate is large (error bound \(<\) 1e-4), leveraging the network trained with no noise achieves the best rate-distortion. From those results, we conclude that the error-bound-adaptive dynamic usage of HAT networks trained with diverse noise intensities effectively optimizes the compression of SRN-SZ.
## VIII Conclusion and Future Work
In this paper, we propose SRN-SZ, a deep learning-based error-bounded compressor that leverages one of the most advanced super-resolution neural network archetypes, namely HAT. SRN-SZ abstracts the data prediction process in compression into a hierarchical data grid expansion paradigm, enabling the use of super-resolution neural networks for lossy compression. To exploit the advantages of different data reconstruction techniques, the data grid expansion in SRN-SZ is performed by a self-adaptive hybrid of super-resolution HAT networks and interpolations. For better adaptation of the super-resolution networks to scientific data, SRN-SZ integrates a carefully designed network training pipeline for optimizing the network performance. In the evaluations, SRN-SZ outperforms all other state-of-the-art error-bounded lossy compressors in terms of compression ratio and rate-distortion, achieving up to 75% compression ratio improvements under the same error bound and up to 80% compression ratio improvements under the same PSNR.
SRN-SZ still has a few limitations. First, since it is based on neural networks, its running speed is inevitably much lower than that of traditional lossy compressors, and the complexity of its integrated network makes it slower than some autoencoder-based compressors such as AE-SZ. Second, the compression ratios of SRN-SZ may not exceed those of existing state-of-the-art compressors on datasets with high compressibility. Third, the training of the HAT networks in SRN-SZ is not fully
Fig. 11: Visualization of reconstructed data (CESM-CLDHGH)
Fig. 12: Ablation study for the Domain-specific fine-tuning
Fig. 13: Ablation study for the Denoise Training
optimized. In future work, we will revise SRN-SZ in several aspects such as accelerating and fine-tuning the training and inference of its integrated neural networks, improving its compression ratio on easy-to-compress datasets, and so on.
## Acknowledgments
This research was supported by the Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations - the Office of Science and the National Nuclear Security Administration, responsible for the planning and preparation of a capable exascale ecosystem, including software, applications, hardware, advanced system engineering and early testbed platforms, to support the nation's exascale computing imperative. The material was supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research (ASCR), under contract DE-AC02-06CH11357, and supported by the National Science Foundation under Grant OAC-2003709, OAC-2104023, OAC-2311875, OAC-2311877, and OAC-2153451. We acknowledge the computing resources provided on Bebop (operated by Laboratory Computing Resource Center at Argonne) and on Theta and JLSE (operated by Argonne Leadership Computing Facility).
|
2309.15728 | Line Graph Neural Networks for Link Weight Prediction | Link weight prediction is of great practical importance, since real-world
networks are often weighted networks. Previous studies have mainly used shallow
graph features for link weight prediction, which limits the prediction
performance. In this paper, we propose a new link weight prediction algorithm,
namely Line Graph Neural Networks for Link Weight Prediction (LGLWP), which
learns deeper graph features through deep learning. In our algorithm, we first
extract the enclosing subgraph around a target link, and then employ a weighted
graph labeling algorithm to label the subgraph nodes. Next, we transform the
subgraph into a line graph and apply the graph convolution neural networks to
learn the node embedding in the line graph, which can represent the links in
the original subgraph. Finally, the link feature vectors are put into a
fully-connected neural network to predict the weight of the target link. Our
algorithm directly obtains the feature vectors of the target links in the original graph, which is better than the previous methods that splice the node feature vectors for link weight prediction. Experimental results on six real datasets of various network sizes and types show that our algorithm has better prediction performance than the state-of-the-art methods, while it has fewer
parameters and high training efficiency. | Jinbi Liang, Cunlai Pu | 2023-09-27T15:34:44Z | http://arxiv.org/abs/2309.15728v1 | # Line Graph Neural Networks for Link Weight Prediction
###### Abstract.
Link weight prediction is of great practical importance, since real-world networks are often weighted networks. Previous studies have mainly used shallow graph features for link weight prediction, which limits the prediction performance. In this paper, we propose a new link weight prediction algorithm, namely Line Graph Neural Networks for Link Weight Prediction (LGLWP), which learns deeper graph features through deep learning. In our algorithm, we first extract the enclosing subgraph around a target link, and then employ a weighted graph labeling algorithm to label the subgraph nodes. Next, we transform the subgraph into a line graph and apply graph convolutional neural networks to learn the node embeddings in the line graph, which represent the links in the original subgraph. Finally, the link feature vectors are put into a fully-connected neural network to predict the weight of the target link. Our algorithm directly obtains the feature vectors of the target links in the original graph, which is better than the previous methods that splice the node feature vectors for link weight prediction. Experimental results on six real datasets of various network sizes and types show that our algorithm has better prediction performance than the state-of-the-art methods, while it has fewer parameters and high training efficiency.
Link weight prediction, line graph, graph neural network, graph mining
Link weight prediction represents a burgeoning field of research in network science, aiming to forecast the strength of connections between nodes in a network. Unlike traditional graph learning tasks such as node classification or link prediction, link weight prediction poses a more intricate regression task that has received comparably less attention and exploration. Previous studies have attempted to address this problem through simplified approaches, primarily utilizing shallow topological features of the network to estimate connection weights (Wang et al., 2018; Wang et al., 2019; Wang et al., 2020; Wang et al., 2020). For instance, methods like WCN, WAA, and WRA (Wang et al., 2019) rely solely on basic statistical characteristics of the local graph structure to infer connection weights. However, it has been empirically and practically observed that connection weights often embody intricate interdependencies, exemplified by the strength of interactions between neurons in brain networks. These complex relationships cannot be easily captured by shallow feature-based methodologies (Wang et al., 2019). As a result, the challenge of link weight prediction resides in the necessity to devise advanced techniques capable of accurately capturing and predicting the multifaceted strengths of connections between nodes. Accomplishing this goal necessitates thorough research and exploration to tackle the inherent challenges presented by this regression task.
Inspired by LGLP (Liang et al., 2019), we propose LGLWP, a line graph neural network model for link weight prediction. We first extract the enclosing subgraph of each target link and label its nodes. Since the node labeling algorithm in LGLP is designed for unweighted graphs, we introduce the weighted-graph node labeling algorithm of (Wang et al., 2019). We then transform the subgraphs into their corresponding line graphs, learn the node features with a graph convolutional neural network (GCN) to obtain the feature vector corresponding to the target link, and finally perform regression prediction with two fully connected layers. The framework is depicted in Figure 1. In effect, by transforming the original graph into a line graph, the feature vectors of the two target nodes are turned into the feature vector of the corresponding node in the line graph, and this feature vector is fed into the neural network for regression prediction. On the one hand, the neural network requires fewer parameters; on the other hand, this accelerates the training of the model.
The contributions of our research can be summarized as follows:
1. Based on LGLP, which targets link existence prediction, we propose link weight regression prediction with line graphs, opening up a new direction for link weight prediction.
2. The subgraph node labeling algorithm in LGLP mainly targets undirected unweighted graphs and simple link prediction; it cannot handle weighted graph nodes and is therefore not applicable to the task of this work. Here we introduce an algorithm suitable for weighted graph node labeling.
3. On six real datasets involving different network sizes and containing both large and small graphs, we achieve good results. We also conduct ablation experiments that compare random labeling of subgraph nodes with the weighted graph node labeling algorithm, demonstrating the effectiveness of the latter for link weight prediction.
## 2. Related Work
Link weight prediction is a relatively new research area within network science. Lv et al. (Lv et al., 2019) pioneered this field by exploring the role of weak connectivity in weighted networks. They introduced weighted local similarity metrics, including the Weighted Common Neighbors (WCN) metric, the Weighted Adamic-Adar (WAA) metric, and the Weighted Resource Allocation (WRA) metric, to estimate link weights. Zhao et al. (Zhao et al., 2019) extended unweighted local similarity metrics to weighted local similarity metrics through a method called the "Reliable Routing Method." These weighted local similarity metrics are invaluable for predicting the presence of links and their respective weights. In the approach of (Li et al., 2019), a link weight matrix is generated by perturbing the observed weighted network structure; the link weight matrix is then reconstructed by utilizing the factorized latent factors derived from the observed network, and finally the two matrices are combined to yield predictions for missing link weights. In another approach, (Li et al., 2019) expanded the eigenvector space of connected edges by incorporating node similarity metrics from the original network and node centrality metrics from the line graph to perform link weight regression prediction. Additionally, (Li et al., 2019) introduced a novel computational framework called Neighborhood Estimation Weight (NEW). This method relies solely on the fundamental structural information of the network and offers the flexibility to adapt to various types of networks. However, these methods often remain at the level of basic structural network information and may have limitations when dealing with more complex network features.
Thus, graph representation learning was proposed. The goal of graph representation learning is to encode nodes into low-dimensional vectors that contain the nodes' positions in the graph as well as their local graph structure. In other words, graph representation learning projects nodes into a latent Euclidean space, where the geometric relations in this latent space correspond to the relations in the original graph or network. The obtained embedding vectors can be used for downstream tasks such as node classification and link prediction. The main technical tools are Graph Embedding (GE) and Graph Neural Networks (GNNs) (Goh et al., 2019; Li et al., 2019). Graph embedding models include DeepWalk (Wang et al., 2019), node2vec (Wang et al., 2019), and SDNE (Wang et al., 2019), and graph deep learning models include GCN (Goh et al., 2019), GAE (Goh et al., 2019), and VGAE (Goh et al., 2019), but the link weight prediction task has never been addressed by them. Therefore, it is interesting to investigate how well these deep graph learning models perform on this task (Wang et al., 2019).
In addition, recent literature has demonstrated promising results using enclosing subgraph extraction. Representative link prediction methods based on enclosing subgraph extraction are WLNM (Wang et al., 2019), SEAL (Wang et al., 2019), and LGLP (Liang et al., 2019), which have achieved very good results on this task. Thus, a correct representation of the target links has been shown to be sufficient to predict the links and their weights, avoiding the need to process the whole graph. However, node labeling techniques must be provided to use enclosing subgraph extraction methods. By
consistently labeling nodes to learn predictions, the models can be generalized to different subgraphs. In addition, node labeling techniques must preserve topological directionality towards the target link for optimal performance, thus providing a mechanism for the model to focus on specific nodes (Wang et al., 2017). Weisfeiler-Lehman (2017) proposed an algorithm based on the original WL algorithm for labeling unweighted graphs. The SEAL framework (Kumar et al., 2017) also proposes a novel node labeling method based on the radius of each node to the target link. LGLP (Beng et al., 2017) applies a line graph on top of SEAL while retaining the node labeling algorithm of the SEAL model. These works focus on simple link prediction, i.e., they are not suitable for weight prediction: their node labeling algorithms are unable to handle weighted nodes and hence are not suitable for weighted graph node labeling tasks (Wang et al., 2017). Therefore, in this paper, we introduce a node labeling technique suitable for weighted graphs based on the LGLP (Beng et al., 2017) model, while retaining the enclosing subgraph extraction method, and finally perform link weight prediction with good results on several datasets.
## 3. Proposed Method
In this section, we will begin by providing an overview of the problem of link weight prediction. Subsequently, we will introduce our novel link weight prediction method, LGLWP, which encompasses the following key steps:
1. Enclosing subgraph extraction
2. Subgraph node ordering
3. Feature learning and link weight prediction via line graph neural networks
A summary figure of the whole approach is shown in Figure 1.
### Problem description
Let \(G(V,E,W)\) represent an undirected weighted network, where V denotes the network's nodes and E denotes its edges. The weight matrix, denoted as W, describes the network's adjacency, where the weight value \(w\) of a link \((i,j)\in E\) is assigned as \(W_{i,j}=w\), and \(W_{i,j}=0\) otherwise. The set of weights, W, can be divided randomly into two subsets: \(W_{train}\) and \(W_{test}\), where \(W_{train}\cup W_{test}=W\) and \(W_{train}\cap W_{test}=\emptyset\). The objective of network link weight prediction is to predict the weights in the test set \(W_{test}\) with maximum accuracy, using the graph \(G(V,E,W_{train})\) (Beng et al., 2017). To avoid the effect of different weight ranges on the results, we first preprocess the weights. Here, we use an exponential transformation to normalize all edge weights \(w\) to the interval \((0,1)\), i.e.:
\[w^{*}=e^{-\frac{1}{w}} \tag{1}\]
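In code, this preprocessing is a one-liner per edge (a sketch assuming strictly positive raw weights); the transform is monotone, so weight orderings are preserved, and it is invertible via \(w=-1/\ln w^{*}\).

```python
import numpy as np

def normalize_weights(w: np.ndarray) -> np.ndarray:
    # Eq. (1): maps any positive weight into (0, 1);
    # larger raw weights map closer to 1.
    return np.exp(-1.0 / w)
```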
### Enclosing subgraph extraction
The first step of the method is to extract the enclosing subgraph of the target link.
The link weight between two nodes can be predicted based on the subgraph around the target link. In general, the larger the subgraph, the more information can be learned; however, this brings more computational cost. To find a balance between performance and computational cost, we only extract 1-hop subgraphs, defined as follows:
\[G^{1}(i,j)=\{v\mid\min(d(v,i),d(v,j))\leq 1\}, \tag{2}\]
where \(d(v,i)\) and \(d(v,j)\) denote the shortest path distances between \(v\) and \(i\) and between \(v\) and \(j\), respectively, i.e., the length of the path that connects the two nodes with the least number of edges.
Figure 1. Summary of the steps for the LGLWP link weight prediction framework.
Since different enclosing subgraphs contain different numbers of nodes, and considering the trade-off between time complexity and performance, we cap each 1-hop enclosing subgraph of a target link at 10 nodes: if a subgraph contains more than 10 nodes, we randomly sample 10 of them; otherwise, we leave it unchanged. This reduces the variability in the number of nodes across subgraphs. With the same subgraph extraction strategy, we obtain similar contextual representations of target node pairs, and the model can generalize across different graphs, nodes, and links.
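With NetworkX, the 1-hop extraction and the 10-node cap might be implemented as in the sketch below (the sampling policy is simplified relative to the paper):

```python
import random
import networkx as nx

def enclosing_subgraph(G: nx.Graph, i, j, max_nodes: int = 10) -> nx.Graph:
    # Eq. (2) with h = 1: nodes within one hop of either target endpoint.
    nodes = {i, j} | set(G.neighbors(i)) | set(G.neighbors(j))
    others = list(nodes - {i, j})
    if len(nodes) > max_nodes:          # cap the subgraph at 10 nodes
        others = random.sample(others, max_nodes - 2)
    return G.subgraph([i, j] + others).copy()
```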
### Subgraph node ordering
The second step of the approach is to order the extracted enclosing subgraph.
The purpose of the ordering is to provide a consistent way of labeling nodes such that nodes with similar topological characteristics within their subgraphs are labeled similarly; e.g., if the relative positions and structural roles of vertices in their respective subgraphs are similar, they receive a similar ordering.
Now, let's provide a brief introduction to the Weisfeiler-Lehman (WL) algorithm (Zhou et al., 2017). The WL algorithm addresses the graph isomorphism problem, which involves determining whether two graphs share the same number of nodes connected in the same manner. WL operates through iterative updates of node labels, utilizing the labels of neighboring nodes and compacting them into new labels until convergence. Initially, all nodes are assigned the same color, typically denoted as 1. For each node, a signature string is generated by concatenating its color with the sorted colors of its neighboring nodes. Subsequently, nodes are sorted based on their signature strings in ascending order, and new colors are assigned. Nodes with identical signature strings are assigned the same color. A crucial aspect of the WL algorithm is its ability to encode the structural roles of vertices within the graph. Moreover, it defines a relative order for these vertices based on their structural roles; vertices with similar roles receive similar labels. Importantly, the relative ordering of vertices remains consistent across different graphs.
A new node labeling and ranking method for unweighted graphs, based on the Weisfeiler-Lehman (WL) algorithm, was proposed in (Zhou et al., 2017). Building on this, another node labeling and sorting algorithm for weighted graphs was introduced in (Zhou et al., 2017), which places the following requirements on the graph labeling algorithm:
1. The graph labeling algorithm must provide similar labels for nodes with similar topological characteristics in an enclosing subgraph.
2. It must maintain topological directionality to the target link, i.e., the order of the nodes must be constrained by the target node and the distance to the target node must be reflected in the ordering.
Since the WL algorithm does not satisfy the second requirement, and the node ordering is crucial for model learning, we adopt here the graph labeling approach proposed in (Zhou et al., 2017), which applies the one-dimensional Weisfeiler-Lehman (WL) algorithm to a weighted graph. The goal of this algorithm is to rank the set of nodes of the extracted subgraph. Since we want to maintain topological directionality with respect to the target link, the target nodes are always assigned orders 1 and 2. First, initial labels are assigned to the nodes based on the sum of the shortest paths (computed using the edge weights) from each node to the target nodes, \(o_{x}\) and \(o_{y}\). Next, we use the Weisfeiler-Lehman algorithm to assign a label string to each node: the initial label of each node is arranged together with the initial labels of its neighboring nodes in ascending order, generating a unique label string. The node whose string signature is lowest in dictionary order then becomes the next node in the ordered list. We iterate this process until each node is assigned a number. The process is defined in Algorithm 1, and a schematic is shown in Figure 2.
```
Input: \(h\)-hop enclosing subgraph \(G^{h}_{(o_{1},o_{2})}\) centered on the two target nodes \(o_{1}\) and \(o_{2}\), extracted by Equation (2)
Output: ordered set of nodes \(o\in G^{h}_{(o_{1},o_{2})}\)

\(o_{1}=x\), \(o_{2}=y\)
calculate \(d(o):=d(o,x)+d(o,y)\) for all \(o\in G^{h}_{(o_{1},o_{2})}\)
get initial labels \(l(o)=f(d(o))\)
\(l(o_{1})=0\), \(l(o_{2})=0\)
while \(|orderList|<|V|\) do
    generate label string \(Agg(l(o))\) for all \(o\in G^{h}_{(o_{1},o_{2})}\)
    sort the \(Agg(l(o))\)
    add the node with the lowest \(Agg(l(o))\) to orderList
end while
return orderList
```
**Algorithm 1** Subgraph node ordering algorithm
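For illustration, the following Python sketch gives one simplified, single-pass reading of Algorithm 1; since the text does not fully specify the aggregation \(Agg\) and the mapping \(f\), the signature construction and tie-breaking below are our assumptions.

```
import networkx as nx

def order_nodes(sub, x, y):
    # Initial labels: summed weighted shortest-path distance to both targets.
    dx = nx.single_source_dijkstra_path_length(sub, x, weight="weight")
    dy = nx.single_source_dijkstra_path_length(sub, y, weight="weight")
    label = {v: dx.get(v, float("inf")) + dy.get(v, float("inf")) for v in sub}
    label[x] = label[y] = 0.0
    order = [x, y]  # targets always receive positions 1 and 2
    while len(order) < sub.number_of_nodes():
        remaining = [v for v in sub if v not in order]
        # WL-style signature: own label followed by sorted neighbour labels.
        sig = {v: (label[v], sorted(label[u] for u in sub.neighbors(v)))
               for v in remaining}
        order.append(min(remaining, key=lambda v: sig[v]))
    return order
```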
Once the node sorting process is complete and the ordered set is obtained, we extract the adjacency matrix of the subgraph, with the rows and columns of the matrix corresponding to the ordered set. Each row vector of this adjacency matrix is then used as the feature vector of a node. Before feeding into the line graph network model for prediction, we ensure that the entries \(W_{1,2}\) and \(W_{2,1}\) representing the target link weight are hidden from the model by setting them to -1 (Zhou et al., 2017).
### Line graph transformation
We predict link weights from a given enclosing subgraph \(G^{h}_{(o_{1},o_{2})}\), an \(h\)-hop enclosing subgraph centered on the two target nodes \(v_{1}\) and \(v_{2}\), in which each node carries an ordered feature vector and nodes with similar topological roles within the subgraph receive similar labels. Edges between pairs of nodes in the original graph correspond to vertices in the line graph, so processing the nodes of the line graph directly amounts to processing the edges of the original graph. This does not increase the time complexity and in fact requires fewer model parameters. We therefore convert the enclosing subgraph into a line graph, which represents the adjacencies between the edges of the original graph (Beng et al., 2019). Moreover, the features of the links to be predicted can be learned directly from
the line graph representation using graph convolutional neural networks for weight prediction.
In graph theory, the line graph of a graph \(G\), denoted \(L(G)\), is a graph that reflects the adjacency between the edges of \(G\). Briefly, \(L(G)\) abstracts each edge of \(G\) into a vertex; if two edges in the original graph share an endpoint, an edge connects the corresponding vertices in the line graph. Because a line graph turns the edges of the original graph into vertices, it can also be thought of as a dual of the original graph. An example of the line graph transformation process is given in Figure 3.
In order to transform the attributes of the node pairs in the original graph into the attributes of the nodes in the line graph, (Bang et al., 2017) proposed a function:
\[l_{(v_{1},v_{2})}=\text{concate}(\min(f_{l}(v_{1}),f_{l}(v_{2})),\max(f_{l}(v_ {1}),f_{l}(v_{2}))), \tag{3}\]
where \(f_{l}(\cdot)\) is the node labeling function, \(v_{1}\) and \(v_{2}\) are the two endpoints of the edge, and \(\text{concate}(\cdot)\) denotes the concatenation of the two inputs. Since this paper only considers link weight prediction on undirected weighted graphs, the attributes of \((v_{1},v_{2})\) and \((v_{2},v_{1})\) should be identical. The above formulation ensures that the generated edge attributes (i.e., node attributes in the line graph) are consistent when the end nodes are swapped. In addition, the structural importance information of the nodes is well preserved by this function (Bang et al., 2017).
Since the nodes in the original graph are represented by the row vectors of the ordered adjacency matrix, we construct the feature vector of each node in the line graph by concatenating the row vectors of the corresponding node pair in the original graph according to the above transformation (a sketch is given below).
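One plausible implementation of this feature construction with NetworkX is sketched below; the rank-based ordering used to realize the min/max of Equation (3) and the masking convention are our assumptions.

```
import numpy as np
import networkx as nx

def line_graph_with_features(sub, order):
    # Ordered adjacency matrix; rows serve as node feature vectors.
    idx = {v: k for k, v in enumerate(order)}
    A = nx.to_numpy_array(sub, nodelist=order, weight="weight")
    A[0, 1] = A[1, 0] = -1.0  # hide the target weight from the model
    L = nx.line_graph(sub)
    feats = {}
    for v1, v2 in L.nodes():
        # Concatenate the two row vectors in a fixed (rank-based) order so
        # that (v1, v2) and (v2, v1) produce the same feature (cf. Eq. 3).
        a, b = sorted((idx[v1], idx[v2]))
        feats[(v1, v2)] = np.concatenate([A[a], A[b]])
    return L, feats
```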
### Feature Learning by Graph Neural Networks
Deep learning methods have been successfully applied in many fields such as image processing and speech recognition, but the data in these fields lie in Euclidean space. The data found in real-world networks, however, lie in non-Euclidean space, so traditional deep learning methods do not extract features from graphs well. Kipf et al. (Kipf et al., 2017) proposed a multilayer graph convolutional neural network that can be used directly on graph data, which aggregates the node information of neighbors and generates new node embeddings that contain rich neighborhood information.
In this work, we use a graph convolutional neural network to learn node embeddings in a line graph, where a node in the line graph can represent an edge in the original graph. The graph convolutional neural network can aggregate the information of neighboring nodes to generate a new node embedding that contains rich neighbor information. Therefore, the node embeddings in the line graph can be used to predict the target edge connection weights in the network.
Given a line graph representation of the enclosing subgraph \(L\left(G_{o_{1},o_{2}}^{h}\right)\), the node embedding of \((v_{i},v_{j})\) at the \(k\)-th layer of the graph convolutional neural network is denoted as \(Z_{(v_{i},v_{j})}^{(k)}\). The node embedding of \((v_{i},v_{j})\) at the \((k+1)\)-th layer is:
\[Z_{(v_{i},v_{j})}^{(k+1)}=\sigma(\widetilde{D}^{-\frac{1}{2}}\widetilde{A}\widetilde{D}^{-\frac{1}{2}}Z_{(v_{i},v_{j})}^{(k)}W^{(k)}), \tag{4}\]
where \(\widetilde{A}=A+I_{N}\) is the adjacency matrix of the line graph \(L\left(G_{o_{1},o_{2}}^{h}\right)\) of the enclosing subgraph with self-loops added (each node in the graph
Figure 3. Line graph transformation procedure.
Figure 2. Subgraph node ordering algorithm. We want to predict the weight (w, coloured in red) of the link for the target nodes (dashed and coloured in yellow).
is connected to itself), \(\widetilde{D}_{ii}=\sum_{j}\widetilde{A}_{ij}\) is the corresponding degree matrix, and \(W^{(k)}\) is the trainable weight matrix of the \(k\)-th layer. \(\sigma(\cdot)\) is the activation function of each layer. The input to the first graph convolution layer is the node attribute in the line graph, \(Z_{(v_{i},v_{j})}^{(0)}=l_{(o_{1},o_{2})}\) (Gendran et al., 2017). We then treat the link weight prediction task as a regression problem and train the neural network by minimizing the root-mean-square error loss over all link weights to be predicted.
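A minimal PyTorch sketch of this propagation rule and the overall architecture described in Section 4.3 follows; the mean pooling in the regression head is our assumption, as the paper does not state the pooling scheme.

```
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    # One propagation step of Eq. (4).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, A, Z):
        A_t = A + torch.eye(A.shape[0], device=A.device)  # A~ = A + I_N
        D_inv_sqrt = torch.diag(A_t.sum(dim=1).pow(-0.5))
        return torch.relu(D_inv_sqrt @ A_t @ D_inv_sqrt @ self.W(Z))

class LGLWPNet(nn.Module):
    # Three GCN layers of width 32 plus two dense layers (cf. Sec. 4.3).
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.convs = nn.ModuleList([GCNLayer(in_dim, hidden),
                                    GCNLayer(hidden, hidden),
                                    GCNLayer(hidden, hidden)])
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, A, Z):
        for conv in self.convs:
            Z = conv(A, Z)
        return self.head(Z.mean(dim=0))  # mean pooling is our assumption
```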
## 4. Experiments
### Datasets description
We test our proposed algorithm on the following six weighted networks, which cover different network sizes and network types. Their specific topological characteristics are shown in Table 1.
* Neural network (Kang et al., 2017): The neural network of C. elegans exhibits connections between neurons, which can occur through synapses or gap junctions. The weights assigned to the edges in this network signify the quantity of interactions that transpire between the neurons.
* C. elegans (Kang et al., 2017): The network describing the interactions between metabolites in the roundworm Caenorhabditis elegans is an undirected graph. In this graph, the links represent the connections between pairwise metabolites. The weights assigned to the edges reflect the occurrence of multiple interactions between these metabolites.
* Coauthorships in network science (Kang et al., 2017): The largest component of a co-authorship network collected by M. Newman consists of scientists collaborating on studies in the field of network science. M. Newman calculated the weights of edges in this network based on information from co-authored papers and co-authors.
* Political blogs (Newman, 2017): Adamic and Glance collected a network that depicts the directed hyperlinks between political web blogs during the 2004 US Election. In this study, we simplified the directed links as undirected ones and assigned weights to represent the volume of multiple hyperlinks between blogs.
* UC-social (Newman, 2017): The network consists of sent messages exchanged between users within an online student community at the University of California, Irvine. Users are represented as nodes, while directed edges indicate the flow of sent messages. The weight assigned to each edge reflects the occurrence of multiple messages. In this analysis, the network was treated as undirected but with weighted edges.
* Condmat (Candand, 2017): This network represents collaborations among scientists who have posted preprints on the condensed matter archive at www.arxiv.org between 1995 and 1999. The compilation of this network was conducted by M. Newman. The weights assigned to the network follow the methodology outlined in the original paper.
### Evaluation metrics
For link weight prediction, much of the literature proposes Pearson's correlation coefficient and the root-mean-square error (RMSE) as evaluation metrics. However, following the observation in (Newman, 2017) that they have practically equal evaluative power, and for brevity, we use only RMSE to measure the prediction performance of the models.
The definition of RMSE is:
\[RMSE=\sqrt{\frac{\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}}{n}} \tag{5}\]
where \(y_{i}\) is the predicted value and \(\hat{y}_{i}\) is the actual value. The smaller the RMSE, the smaller the difference between predicted and actual values, and hence the more accurate the algorithm's prediction.
### Parameter settings
The model parameters used in this paper are similar to the original paper (Gendran et al., 2017). Three graph convolution layers are used to compute the node embeddings, and the output feature dimensions of the three graph convolution layers are set to 32. Finally, the link weight regression prediction is performed through two more fully connected layers. The number of training iterations is set differently depending on the specific dataset. Specifically, 5 training epochs are used on some graphs with larger network sizes, such as Condmat, P.blog, and UC-social, and 15 training epochs are used on the rest of the datasets.
### Baselines
In order to evaluate the predictive ability of the LGLWP model, we selected the same baseline models as in (Newman, 2017). These include seven well-known graph representation models proposed in recent years: Deepwalk (Wang et al., 2018), Node2vec (Chen et al., 2018), Grarep (Chen et al., 2018), SDNE (Kang et al., 2017), LINE (Kang et al., 2017), GAE (Gendran et al., 2017), and VGAE (Gendran et al., 2017). Since these seven graph learning models cannot be directly applied to link weight prediction, the obtained node embedding vectors are concatenated into edge feature vectors, which are then used to train a linear regression model whose link weight prediction performance is evaluated (Newman, 2017). In addition to the mentioned baselines, we also compare against the GCN (Gendran et al., 2017) model. Since the GCN model also performs very well in graph representation, it is often used for node classification, graph classification, and related tasks. GCN learns node embeddings by graph convolution, using a message-passing framework in which node embeddings are updated by aggregating the embeddings of their neighbors. Our proposed model performs graph convolution on the line graph, whereas GCN performs graph convolution on the full graph. With the embedding vectors obtained
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Dataset & \(|V|\) & \(|E|\) & Range of weights & Category \\ \hline Neural & 296 & 2137 & \([1,72]\) & biology \\ C.elegans & 453 & 2025 & \([1,114]\) & biology \\ Netscience & 575 & 1,028 & \([0.0526,2.5]\) & coauthorship \\ P.blog & 1224 & 16,715 & \([1,3]\) & social \\ UC-social & 1899 & 13,838 & \([1,184]\) & social \\ Condmat & 16264 & 47,594 & \([0.058824,22.3333]\) & coauthorship \\ \hline \hline \end{tabular}
\end{table}
Table 1. Basic topological features of weighted networks
by GCN, we follow the method proposed by (Kumar et al., 2017) and use the inner product of the node vectors to measure the weights between nodes. In addition, seven shallow feature-based link weight prediction methods are selected for comparison, including three reliable-routing-based methods (Wang et al., 2018), three line-graph-based methods (Wang et al., 2018), and the NEW method (Kumar et al., 2018). We also compare against SEA (Kumar et al., 2018), a self-attention-enhanced graph autoencoder that improves weight prediction by learning deep graph features; it is a very recent model that achieves state-of-the-art performance in link weight prediction. All baseline parameters are set according to the original literature.
### Results and Analysis
In this paper, we follow the setup of (Wang et al., 2018; Kumar et al., 2018): we choose 90% of the link weights in the original network as the training set and 10% as the test set. To avoid errors from a single experiment, each model is run on 10 independent pairs of training and test sets, and the means and standard deviations are calculated. All data were preprocessed according to the methodology proposed in (Kumar et al., 2018); e.g., all weights were normalized to the interval (0,1) using the exponential transformation method. The machine used for the experiments is a laptop with an i9-12900H 2.50 GHz processor, 16 GB of RAM, and an Nvidia 3070 GPU.
Since GAE and VGAE are designed for attribute networks, the \(i\)-th column vector of the weighted adjacency matrix is used as the initial feature vector of node \(i\) to make them work on ordinary weighted graphs. The GAE\({}^{*}\) and VGAE\({}^{*}\) methods are these modified versions of the original implementations. For line-graph-based techniques such as LG-RF, LG-GBDT, and LG-SVM, the computation of centrality-based metrics can be quite resource-intensive; consequently, these methods encountered difficulties in producing conclusive results for large graphs, including Condmat.
As can be seen from Table 2, comparing seven link weight prediction models based on shallow graph features and eight graph representation models, our model achieves the best results. The SEA model has performance comparable to ours. SEA performs link weight prediction with a graph autoencoder based on a graph attention mechanism, which opens the way to attention-based weight prediction, and its results also show that the attention mechanism has considerable room for development in weight prediction. However, SEA models are limited by the size of the graph, as their computation needs to consider the global information of the whole graph, which often requires more computational resources. For some large graphs, SEA proposes an effective graph compression algorithm that first compresses the graph to a smaller size and then performs link weight prediction; this algorithm also achieves good results. Contrary to graph compression, we only extract the neighbor information around the target links, and an effective node labeling algorithm lets the model generalize between different subgraphs, thus avoiding processing the whole graph. In addition, our node labeling technique maintains the topological directionality of the target links for optimal performance, providing a mechanism for the model to focus on specific nodes. Our proposed method can learn the features of target links directly in the line graph. To analyze the convergence speed of the two models, we run them on the different datasets and collect the loss at each epoch. The results are shown in Figure 4. The losses of this paper's method are marked with green lines, and the losses of the SEA model with blue lines. As can be seen, our proposed model converges faster than SEA. Our method needs only about 15 epochs to achieve its best performance on the Neural, C.elegans, and Netscience datasets, and about 5 epochs on the P.blog, UC-social, and Condmat datasets, while SEA has not converged after 50 epochs. According to (Kumar et al., 2018), SEA needs to be trained for 100 epochs on the Condmat dataset for optimal performance, 800 epochs on the P.blog dataset, 500 epochs on the UC-social dataset, and 300 epochs on the remaining datasets. Thus, our proposed method saves training time and requires fewer model parameters.
As can be seen from Table 2, our model LGLWP outperforms SEA on the Neural and Netscience datasets. To test the robustness of LGLWP, we take 30, 40, 50, 60, 70, and 80 percent of all the links and weights in \(G\) as the training set and the rest as the test set. Our purpose is to determine whether the model consistently outperforms SEA with different proportions of weights missing. The results are shown in Figure 5. The experimental results show that the RMSE of LGLWP is lower than that of SEA for all training-set proportions, indicating that LGLWP is robust.
### Ablation study
We conducted ablation experiments with the aim of understanding the extent to which the graph labeling algorithm affects the model. The introduction of a weighted graph labeling algorithm in LGLWP is one of the main contributions of this paper: it provides consistency to the model by assigning similar labels to nodes with similar structural roles. We compare this algorithm with random labeling of nodes.
We conducted experiments on all six datasets with the same experimental setup. The results, shown in Table 3, clearly demonstrate the effectiveness of the weighted graph labeling algorithm, which indeed achieves significantly better performance than random labeling of the subgraph. Seeking a balance between performance and computational cost, we only extract 1-hop subgraphs while keeping the number of subgraph nodes around 10; we believe this gap will become more pronounced as the subgraph size grows. Enclosing subgraph extraction is a promising approach for link prediction and link weight prediction: a proper representation of the target links has been shown to be sufficient to predict the links and their weights, thus avoiding processing the entire graph. However, a node labeling technique must be paired with the enclosing subgraph extraction method. By labeling nodes consistently and algorithmically, the model can generalize over different subgraphs and thus make predictions. In addition, the node labeling technique must remain topologically oriented towards the target links for optimal performance, providing a mechanism for the model to focus on specific nodes (Wang et al., 2018).
Figure 4. Training loss comparison between our proposed LGLWP and SEA method. The training loss on Neural, C.elegans, Netscience, P.blog, UC-social and Condmat dataset.
Figure 5. RMSE comparison on Neural, Netscience for SEA, LGLP using different percent of training set. On each dataset, we take 30, 40, 50, 60, 70, and 80 percent of all the links and weights in G as the training set.
### Discussion
Since the P.blog, UC-social, and Condmat networks are larger and therefore contain more samples, we trained for only 5 epochs on them and 15 epochs on the other datasets. The main problem when using graph-based methods for prediction is how to make them independent of the size of the graph. The subgraph extraction method solves this problem by making LGLWP independent of the number of nodes in the graph; i.e., to predict the link weights, we only need a small portion of the graph. However, being unaffected by the number of nodes in the graph comes at the cost of the computational and time complexity of the weighted graph labeling algorithm. Nevertheless, the time required to run the weighted graph labeling algorithm on a given subgraph always remains the same, while a model processing the entire graph scales linearly with the number of nodes in the graph. It is worth noting that Deepwalk, Node2vec, Grarep, SDNE, LINE, GCN, GAE, and VGAE are all techniques based on graph representation learning: they all learn representations of the nodes in the graph to perform the link weight prediction task. We conclude that the model generating the best representation of the nodes in the graph is the most successful. As (Kang et al., 2019) points out, good node embeddings should yield good prediction accuracy, because eventually some other machine learning system should use these embeddings to make valuable predictions. For this purpose, aggregating information from the most important neighbors and nodes is crucial. Meanwhile, LGLP (Kang et al., 2019) points out that learning node embeddings by graph convolution in a line graph is more effective than performing neighbor embedding aggregation in the original graph; we therefore introduce the line graph mechanism. For these graph representation learning methods, simply converting node vectors into edge vectors may not accurately characterize the structure of the edges. Directly mapping edges to low-dimensional vectors better preserves their structural features and is more suitable for network analysis tasks where edges are the object of study.
## 5. Conclusion and Future Work
Inspired by LGLP, we propose a new link weight prediction model, LGLWP. This model applies the subgraph extraction and node labeling techniques currently widely used in link prediction and link weight prediction. To overcome the limitations of the unweighted graph node labeling technique in LGLP, we introduce a new weighted graph node labeling technique while retaining the line graph and graph convolutional neural network architectures. Our model achieved the best results on each of the tested datasets.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Model & Neural & C.elegans & Netscience & P.blog & UC-social & Condmat \\ \hline rWCN & 5.78\(\pm\)0.74 & 1.79\(\pm\)0.76 & 0.43\(\pm\)0.05 & 3.02\(\pm\)0.03 & 2.4527\(\pm\)0.0942 & 0.1992\(\pm\)0.0029 \\ rWAA & 6.3\(\pm\)0.75 & 2.36\(\pm\)0.84 & 0.42\(\pm\)0.05 & 0.89\(\pm\)0.02 & 0.6617\(\pm\)0.0292 & 0.1816\(\pm\)0.0031 \\ rWRA & 6.7\(\pm\)0.76 & 2.93\(\pm\)0.91 & 0.42\(\pm\)0.05 & 1.09\(\pm\)0.01 & 0.5852\(\pm\)0.0067 & 0.1932\(\pm\)0.0027 \\ LG-RF & 0.235\(\pm\)0.006 & 0.183\(\pm\)0.003 & 0.213\(\pm\)0.005 & 0.099\(\pm\)0.003 & 0.223\(\pm\)0.002 & - \\ LG-GBDT & 0.383\(\pm\)0.004 & 0.276\(\pm\)0.005 & 0.181\(\pm\)0.006 & 0.239\(\pm\)0.003 & 0.369\(\pm\)0.003 & - \\ LG-SVM & 0.236\(\pm\)0.006 & 0.152\(\pm\)0.004 & 0.212\(\pm\)0.004 & 0.171\(\pm\)0.004 & 0.225\(\pm\)0.003 & - \\ NEW & 0.2056\(\pm\)0.0064 & 0.1421\(\pm\)0.0081 & 0.0891\(\pm\)0.0115 & 0.0797\(\pm\)0.0024 & 0.2076\(\pm\)0.0017 & 0.1953\(\pm\)0.0016 \\ \hline Deepwalk & 0.2211\(\pm\)0.0043 & 0.1421\(\pm\)0.0045 & 0.1214\(\pm\)0.0151 & 0.0816\(\pm\)0.0023 & 0.2124\(\pm\)0.0026 & 0.1943\(\pm\)0.0008 \\ Node2vec & 0.2153\(\pm\)0.0054 & 0.1413\(\pm\)0.0052 & 0.1199\(\pm\)0.0126 & 0.0817\(\pm\)0.0021 & 0.2088\(\pm\)0.0022 & 0.2032\(\pm\)0.0011 \\ Grarep & 0.2254\(\pm\)0.0092 & 0.1424\(\pm\)0.0053 & 0.1484\(\pm\)0.0378 & 0.0798\(\pm\)0.0021 & 0.2098\(\pm\)0.0012 & 0.1945\(\pm\)0.0016 \\ SDNE & 0.2060\(\pm\)0.0058 & 0.1380\(\pm\)0.0167 & 0.1386\(\pm\)0.0263 & 0.0771\(\pm\)0.0029 & 0.2056\(\pm\)0.0029 & 0.1808\(\pm\)0.0014 \\ LINE & 0.2222\(\pm\)0.0079 & 0.1390\(\pm\)0.0052 & 0.1377\(\pm\)0.0112 & 0.0809\(\pm\)0.0021 & 0.2102\(\pm\)0.0016 & 0.1927\(\pm\)0.0016 \\ GAE\({}^{*}\) & 0.2161\(\pm\)0.0082 & 0.1508\(\pm\)0.0058 & 0.4452\(\pm\)0.0052 & 0.1466\(\pm\)0.0142 & 0.2360\(\pm\)0.0041 & 0.4112\(\pm\)0.0017 \\ VGAE\({}^{*}\) & 0.2332\(\pm\)0.0089 & 0.1496\(\pm\)0.0054 & 0.4458\(\pm\)0.0052 & 0.1340\(\pm\)0.0008 & 0.2318\(\pm\)0.0043 & 0.4127\(\pm\)0.0017 \\ GCN & 0.2216\(\pm\)0.0098 & 0.1583\(\pm\)0.0139 & 0.1232\(\pm\)0.0146 & 0.2720\(\pm\)0.0770 & 0.2540\(\pm\)0.0614 & 0.2117\(\pm\)0.0036 \\ SEA & 0.2015\(\pm\)0.0052 & **0.11134\(\pm\)0.0055** & 0.0823\(\pm\)0.0094 & **0.0754\(\pm\)0.002** & **0.19764\(\pm\)0.0028** & 0.1694\(\pm\)0.0018 \\ \hline LGLWP & **0.1915\(\pm\)0.0086** & 0.1299\(\pm\)0.0061 & **0.0624\(\pm\)0.0137** & 0.0759\(\pm\)0.0019 & 0.2007\(\pm\)0.0029 & **0.1556\(\pm\)0.0024** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Root mean squared errors with standard deviations on all datasets. Apart from LGLWP and GCN, the experimental data were obtained from (Kang et al., 2019). The best results are bolded; the second-best results are underlined.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Method & Neural & C.elegans & Netscience & P.blog & UC-social & Condmat \\ \hline Random labeling & 0.2115 \(\pm\)0.0066 & 0.1466\(\pm\)0.0068 & 0.0762\(\pm\)0.0154 & 0.0833\(\pm\)0.0013 & 0.2096\(\pm\)0.0025 & 0.1745\(\pm\)0.0034 \\ weighted graph labeling & 0.1915\(\pm\)0.0086 & 0.1309\(\pm\)0.0072 & 0.0698\(\pm\)0.0134 & 0.0766\(\pm\)0.0012 & 0.2017\(\pm\)0.0022 & 0.1556\(\pm\)0.0024 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Results for each version of the algorithm applied to all datasets. We report the root mean squared errors with standard deviations over 10 trials for each version.
In network analysis tasks where edges are the object of study, simply transforming node vectors into edge vectors may not accurately characterize the structural features of the edges. Directly mapping the edges into low-dimensional vectors characterizes these features better, and the line graph is a very good vehicle for this. In future research, we will focus on the application of line graphs to network analysis tasks where edges are the object of study. A good subgraph node labeling algorithm is also crucial for the final prediction, so improved subgraph extraction strategies and node labeling algorithms are both worthy future work.
|
2309.03374 | Physics Informed Neural Networks for Modeling of 3D Flow-Thermal
Problems with Sparse Domain Data | Successfully training Physics Informed Neural Networks (PINNs) for highly
nonlinear PDEs on complex 3D domains remains a challenging task. In this paper,
PINNs are employed to solve the 3D incompressible Navier-Stokes (NS) equations
at moderate to high Reynolds numbers for complex geometries. The presented
method utilizes very sparsely distributed solution data in the domain. A
detailed investigation on the effect of the amount of supplied data and the
PDE-based regularizers is presented. Additionally, a hybrid data-PINNs approach
is used to generate a surrogate model of a realistic flow-thermal electronics
design problem. This surrogate model provides near real-time sampling and was
found to outperform standard data-driven neural networks when tested on unseen
query points. The findings of the paper show how PINNs can be effective when
used in conjunction with sparse data for solving 3D nonlinear PDEs or for
surrogate modeling of design spaces governed by them. | Saakaar Bhatnagar, Andrew Comerford, Araz Banaeizadeh | 2023-09-06T21:52:14Z | http://arxiv.org/abs/2309.03374v3 | # Physics Informed Neural Networks for Modeling of 3D Flow-Thermal Problems with Sparse Domain Data
###### Abstract
Successfully training Physics Informed Neural Networks (PINNs) for highly nonlinear PDEs on complex 3D domains remains a challenging task. In this paper, PINNs are employed to solve the 3D incompressible Navier-Stokes (NS) equations at moderate to high Reynolds numbers for complex geometries. The presented method utilizes very sparsely distributed solution data in the domain. A detailed investigation on the effect of the amount of supplied data and the PDE-based regularizers is presented. Additionally, a hybrid data-PINNs approach is used to generate a surrogate model of a realistic flow-thermal electronics design problem. This surrogate model provides near real-time sampling and was found to outperform standard data-driven neural networks when tested on unseen query points. The findings of the paper show how PINNs can be effective when used in conjunction with sparse data for solving 3D nonlinear PDEs or for surrogate modeling of design spaces governed by them.
**Keywords:** Physics Informed Neural Networks; Navier-Stokes Equations; Surrogate Modeling; Design Optimization
## 1 Introduction
Over the last few years, there has been significant growth in the popularity of machine learning algorithms to solve partial differential equations (PDEs) or assist PDE solvers, such as computational fluid dynamics (CFD) solvers [1, 2]. A particular application where CFD solvers struggle, due to the computational cost, is iterative design optimization. This is the process of continually updating a design (e.g. an electronics assembly layout) and computing the solution (e.g. flow or thermal fields) to optimize the performance (e.g. constrain the temperatures or reduce the pressure drop). The challenge for CFD is that the input-output relationship is one-to-one. Therefore, any changes to the input vector (e.g. geometric variations) need to be re-simulated, leading to high costs when iterating on different design scenarios [3]. Overall, high-fidelity iterative design requires a prohibitive level of resources, both computationally and monetarily, and often leads to a sub-optimal outcome. The attraction of Machine Learning (ML) algorithms in these scenarios is the ability to rapidly find solutions for problem setups that are challenging in conventional CFD, such as large design space explorations [4], turbulence model closure [5] or solving incomplete/ill-posed problems [6].
Conventional ML algorithms usually require large amounts of data to train. This represents a challenge when using ML in engineering applications such as CFD, since experimental data can be difficult and expensive to obtain and may suffer from measurement noise. Furthermore, in many engineering experiments, field data such as temperature and velocity fields can sometimes only be captured at specific locations, and it is difficult to get full field solution results from physical experiments. Research has turned to using simulation data for training ML models, but the computational cost of generating large amounts of data to train models is a major bottleneck.
Physics Informed Neural Networks (PINNs) [7] represent an advance in scientific machine learning that has the potential to solve many of the aforementioned issues. By adding the physics that governs the problem into the loss function, and optimizing the loss, it is possible to have the network learn the
solution of the problem represented by that equation in a data-free manner. PINNs can be used in cases where sporadic experimental field data is available [8, 9] to calculate the rest of the field variable and can be used to solve problems with incomplete or missing physics [10, 11].
Another application area, in which PINNs could be very beneficial is machine learning-based surrogate modeling. Although a relatively new field, several ML architectures and methods have been utilized in the literature. These include: Proper Orthogonal Decomposition (POD) [12], Gappy POD [13] and Manifold Learning [14]. More recently, increased attention has been given to statistical methods like Gaussian processes and neural networks that incorporate Machine Learning (ML) to create surrogate models. Bhatnagar et al. [15] used a CNN architecture to predict aerodynamic flow fields over airfoils and created a surrogate model that generalized between flow conditions and airfoil geometries. Guo et al. [16] also used a Convolutional Neural Network (CNN) architecture to predict steady flows over automotive vehicles. Lee and You [17] used Generative Adversarial Networks (GANs) coupled with physical laws to predict unsteady flow around a cylinder, demonstrating the benefits of using embedded physics. Raissi and Karniadakis [18] use Gaussian processes to model and identify several complex PDEs.
Several of the aforementioned studies used purely data-driven models and required the creation of large amounts of training data to generate accurate and generalizable models. PINNs have the capability to greatly reduce these data generation costs, and it has been shown that training surrogates using the physics embedded in the loss function greatly improves predictive accuracy, across a wide range of applications [17, 19, 20, 21].
However, there is currently a lack of research articles applying PINNs to 3-dimensional (3D) problems, particularly for highly nonlinear PDEs like the Navier-Stokes equations. These problems are challenging for PINNs due to a variety of reasons that are discussed later in this paper. Yet, these problems are the most lucrative to solve, as most industrial applications of CFD are done in 3D. This paper provides results that aim to address this gap, by solving several problems with realistic physical parameters, over complex geometries in a data-assisted manner, using very sparse domain data. Further, this paper solves a realistic flow-thermal design optimization problem using a hybrid data-PINN surrogate model and shows how PINN models outperform standard data-driven neural network (NN) surrogates for every test point queried in the Design of experiments (DoE) space for the surrogate modeling problem.
The paper is divided as follows; Section 2 introduces PINNs in more detail and discusses some of the technical challenges with training PINNs. Section 3 outlines some of the important features the authors incorporate in the creation and training of PINNs to enable accurate and fast convergence. Section 4 demonstrates several problems solved using PINNs, and showcases a design optimization problem using PINN-based surrogates. Section 5 discusses how the work shown in this paper can be improved upon.
## 2 Physics Informed Neural Networks (PINNs)
### Setting up a PINN Training
Physics-informed neural networks (PINNs) leverage automatic differentiation to obtain an analytical representation of an output variable and its derivatives, given a parametrization using the trainable weights of the network. By employing the underlying static computation graph, it is possible to construct the differential equations that govern physical phenomena.
A PDE problem in the general form reads:
\[\mathcal{N}_{\mathbf{x}}[u]=0,\mathbf{x}\in\Omega, \tag{1}\]
\[\Phi(u(\mathbf{x}))=\mathbf{g}(\mathbf{x}),\mathbf{x}\in\partial\Omega \tag{2}\]
where \(\Phi\) can be the identity operator (Dirichlet B.C) or a derivative operator (Neumann/Robin B.C). In order to solve the PDE using the PINN method, the residual of the governing PDE is minimized, which is defined by
\[r_{\theta}(\mathbf{x})=\mathcal{N}_{\mathbf{x}}[f_{\theta}(\mathbf{x})], \tag{3}\]
where \(f_{\theta}\) is the predicted value by the network. The residual value, along with the deviation of the prediction from boundary/initial conditions, is used to construct the loss, which takes the form:
\[L(\theta)=L_{r}(\theta)+\sum_{i=1}^{M}\lambda_{i}L_{i}(\theta), \tag{4}\]
where the index i refers to different components of the loss function, relating to initial conditions, boundary conditions, and measurement/simulation data. \(\lambda_{i}\) refers to the weight coefficient of each loss term. The individual loss terms are constituted as follows:
\[L_{r}=\frac{1}{N_{r}}\sum_{i}^{N_{r}}[r(\mathbf{x}_{r}^{i})]^{2},\ L_{b}=\frac{1}{N_{b}}\sum_{i}^{N_{b}}[\Phi(\hat{u}(\mathbf{x}_{b}^{i}))-g_{b}^{i}]^{2},\ L_{d}=\frac{1}{N_{d}}\sum_{i}^{N_{d}}[\hat{u}(\mathbf{x}_{d}^{i})-u_{d}^{i}]^{2}, \tag{5}\]
where the subscripts r, b, and d refer to collocation, boundary/initial-condition, and data points, respectively, and \(u_{d}^{i}\) denotes the measurement/simulation value at \(\mathbf{x}_{d}^{i}\). The loss \(L(\theta)\) can then be minimized to make the network learn the solution of the PDE described by Equations (1)-(2). A popular approach is to use gradient-based optimizers such as Adam [22] and L-BFGS to optimize the network weights.
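To illustrate how such a loss is assembled in practice, the PyTorch sketch below computes the residual of a toy 1D Poisson problem (our choice of example, not from the paper) with nested autograd calls and combines it with boundary and data terms in the spirit of Equations (4)-(5).

```
import torch

def pde_residual(net, x):
    # Toy 1D Poisson residual r = u_xx + sin(x), built with autograd; the
    # same nested-grad pattern extends to the Navier-Stokes residuals.
    x = x.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u + torch.sin(x)

def pinn_loss(net, x_r, x_b, g_b, x_d, u_d, lam_b=1.0, lam_d=1.0):
    # Composite loss of Eqs. (4)-(5): residual + boundary + data terms.
    L_r = pde_residual(net, x_r).pow(2).mean()
    L_b = (net(x_b) - g_b).pow(2).mean()
    L_d = (net(x_d) - u_d).pow(2).mean()
    return L_r + lam_b * L_b + lam_d * L_d
```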
### Current Challenges with PINNs
Although the PINN method shows great promise, it still has a number of unresolved issues. The biggest challenges with PINNs currently lie in the scalability of the algorithms to large 3D problems as well as problems with complex nonlinearities, and unsteady problems. Some of the issues described henceforth are tackled by methods described in Section 3.
#### 2.2.1 Weak imposition of Boundary Conditions
The solution of a PDE problem must obey all initial and boundary conditions imposed on it while minimizing the residual of the governing equation. However, for neural network based solvers it is difficult to impose boundary and initial conditions in an exact manner. This is because the standard way to impose B.C in PINNs is to create a linear combination of loss functions (as described mathematically in the previous section). Each loss describes either the deviation of the network output from a specific boundary condition, or the magnitude of the residual of the governing equations. Therefore, boundary conditions are only satisfied in a weak manner. While there has been research demonstrating the utility of exact imposition of boundary conditions [23, 24, 25] and of creative multi-network approaches [26], such implementations are mostly problem-specific and do not generalize well.
Weak imposition of boundary conditions also creates another issue, one that is fairly common in multi-task learning and multi-objective optimization: choosing the values of loss term coefficients that make up the linear combination. Choosing these weights is a nontrivial exercise that would require calibration via hyper-parameter search, which is not feasible. Wang et al. [27] introduced a heuristic dynamic weighting algorithm to update and select these weights automatically and continuously during the training, to enable convergence to the correct answer. Additionally, there have been several other algorithms proposed to choose the correct scheme for weighting the losses [28, 29, 30]. This continues to be an active area of research in the PINNs community. Finally, methods have been proposed to impose the boundary conditions in a strong manner by manipulating the output formulations [23] or by utilizing operator networks [31].
#### 2.2.2 Difficult Optimization Problem
A second problem is the nature of the loss landscape itself, in which a reasonable local minimum must be found. As seen in Krishnapriyan et al. [32], Gopakumar et al. [33], Subramanian et al. [34] and Basir and Senocak [35], as well as the authors' own experiments, different non-dimensional quantities (e.g. Reynolds number) in the governing equations, the number of dimensions of the problem, the point cloud/discretization, the boundary conditions and the complexity of the solution to be predicted can adversely affect the loss landscape of the neural network training. This makes the optimization challenging, and a gradient descent-based algorithm can fail to find an adequate local minimum. Recently, methods borrowing concepts from optimization theory have shown that alternate formulations
(e.g. the augmented Lagrangian method for the loss functions) can aid the convergence properties of the training problem [35, 36]. There have also been efforts towards imposing physical constraints in an integral form [37].
#### 2.2.3 Cost of training
Constructing the PDE loss functions involves several backward passes through the network, which is a costly operation. PINNs on average take longer to train than their data-driven counterparts for exactly this reason: the computation graph of a PINN training is much more complex. Moreover, for the Navier-Stokes equations, it has been seen that although the stream function formulation provides better results (due to exact enforcement of continuity), it is costlier in terms of training time. As seen in NVIDIA's experiments [38], the more complex problems can take several million iterations to solve via PINNs. To reduce the cost of training, approaches such as combining automatic differentiation with finite difference formulations [39] or using first-order formulations [40] have been proposed. However, these solutions tend to be mostly problem-specific and do not necessarily generalize well to increased problem complexity and grid definitions. Meta-learning algorithms [41] have also recently gained significance as an effective way to reduce the cost of training neural networks on new tasks, and some of this work has been extended to PINNs [42] as well.
## 3 Important Features for Creating PINN Models
In this section, the important techniques used to create PINN-based models cost-effectively are outlined. The PINN models in subsequent sections are created by combining these features that have been found to have an effect on the accuracy of the model and the speed of training.
### Hybrid Data-Physics Training
Compared with the original PINNs method proposed by Raissi et al. [7], a plethora of research has been undertaken to improve and expand on the method [43, 44]. From these developments, the PINNs method has been applied to solve PDE-based problems of increasing complexity and dimensionality. However, the PINNs method is currently not suited for solving engineering problems often encountered in industry in a data-free manner. The optimization issues and cost of model training outlined above make the method, presently, unsuitable for use as a forward solver. To get the best of both worlds, the PINNs method can be augmented with data. Figure 1 depicts the tradeoff between using only data or only physics, and that the sweet spot lies in using both. In addition to the discussed benefit of hybrid data-physics training reducing the cost of generating data, there have been several examples showing that the inclusion of sparse solution data in the training loss function significantly improves the convergence capabilities of the PINNs method [33, 43, 45].
In this paper, we take inspiration from this and use very sparse solution data to solve 3D flow-thermal problems and inform our surrogate models with physics while creating them.
Figure 1: The spectrum of data-driven versus physics-informed models. Incorporating governing physics information into the models during creation serves as an effective form of regularization and often helps reduce the amount of data required to achieve the same accuracy levels.
### Modified Learning Rate Annealing
As described in Section 2.2.1, the learning rate annealing algorithm has proved to be very effective in mitigating the stiffness of the PINN training problem. However, utilizing this method over a broader spectrum of problems highlighted an issue with stability. The following outlines this issue:
As shown in Equation 4 the PINN loss function being optimized takes the form:
\[L(\theta)=L_{r}(\theta)+\sum_{i=1}^{M}\lambda_{i}L_{i}(\theta) \tag{6}\]
At any training step, the update to the loss coefficient is calculated [27] as
\[\hat{\lambda}_{i}=\frac{max_{\theta}|\nabla_{\theta}L_{r}(\theta)|}{|\nabla_{ \theta}L_{i}(\theta)|},i=1,....,M\]
It can be seen that if the loss \(L_{i}\) decreases much faster than \(L_{r}\) during the training, the value of \(\hat{\lambda}_{i}\) increases. This then leads to a larger coefficient for that loss term and an associated faster decay of the loss.
This instability has the unintended consequence of the optimizer getting stuck in minima where it minimizes the loss \(L_{i}\) very well but is unable to optimize for the loss of the other constraints. The proposed updated algorithm to mitigate this issue is shown in Algorithm 1. The values of thresholds are hyper-parameters, but if the inputs and outputs of the network have been normalized (using standard score normalization, for example), then selecting values between \(10^{-3}\) and \(10^{-5}\) works well in practice.
```
for update step n = 1 to N do
    if \(L_{i}(\theta)\leq(threshold)_{i}\) then
        \(\hat{\lambda}_{i}=0\)
    else
        compute \(\hat{\lambda}_{i}\) by
            \(\hat{\lambda}_{i}=\frac{\max_{\theta}|\nabla_{\theta}L_{r}(\theta)|}{|\nabla_{\theta}L_{i}(\theta)|},\quad i=1,...,M\)
    end if
    update the weights \(\lambda_{i}\) as
        \(\lambda_{i}=(1-\alpha)\lambda_{i}+\alpha\hat{\lambda}_{i}\)
    update the network parameters via gradient descent:
        \(\theta_{n+1}=\theta_{n}-\eta\nabla_{\theta}L_{r}(\theta)-\eta\sum_{i=1}^{M}\lambda_{i}\nabla_{\theta}L_{i}(\theta)\)
end for

We set the hyper-parameters \(\alpha=0.1\) and \(\eta=10^{-3}\). Threshold values are chosen between \(10^{-3}\) and \(10^{-5}\).
```
**Algorithm 1** Modified Learning Rate Annealing
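A PyTorch sketch of this update is given below; using the mean gradient magnitude in the denominator follows the convention of Wang et al. [27], and the dictionary-based bookkeeping and small denominator guard are our assumptions.

```
import torch

def update_lambdas(L_r, losses, params, lambdas, thresholds, alpha=0.1):
    # losses/lambdas/thresholds are dicts keyed by constraint name; every
    # loss is assumed to depend on all network parameters in `params`.
    grads_r = torch.autograd.grad(L_r, params, retain_graph=True)
    max_grad_r = max(g.abs().max() for g in grads_r)
    for name, L_i in losses.items():
        if L_i.item() <= thresholds[name]:
            lam_hat = 0.0  # constraint satisfied: stop inflating its weight
        else:
            grads_i = torch.autograd.grad(L_i, params, retain_graph=True)
            mean_grad_i = torch.cat([g.abs().flatten() for g in grads_i]).mean()
            lam_hat = (max_grad_r / (mean_grad_i + 1e-12)).item()
        # Exponential moving average of Algorithm 1.
        lambdas[name] = (1 - alpha) * lambdas[name] + alpha * lam_hat
    return lambdas
```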
For a problem with the loss function
\[L(\theta)=L_{r}(\theta)+\lambda_{neu}L_{neu}(\theta)+\lambda_{dir}L_{dir}(\theta) \tag{7}\]
where \(L_{r}(\theta)\), \(L_{neu}(\theta)\) and \(L_{dir}(\theta)\) correspond to the PDE, Neumann, and Dirichlet losses respectively, Figure 2 shows the training curves for the individual losses and the values of the adaptive coefficients when they are calculated using Algorithm 1. It can be seen that when the boundary loss terms in Figures 2(c) and 2(d) go below their thresholds (set to \(10^{-5}\)), the associated coefficients shown in Figures 2(a) and 2(b) start decaying. Following this, the PDE loss improves much faster. If a term \(L_{i}(\theta)\) rises above its threshold, this leads to a spike in the adaptive constant \(\lambda_{i}\), which brings it down again.
### Fourier Feature Embeddings
As described in Tancik et al. [46], Artificial Neural Networks suffer from a spectral bias problem. To overcome this, they introduced a Fourier feature embedding that allows models to capture high-frequency components of the solution effectively. This has the effect of markedly improving the ability of the networks to capture sharp gradients in the solutions, which requires the network to be able to learn high-frequency components of the solution quickly.
Following the implementation in Tancik et al. [46], for an input vector
Figure 2: Adaptive coefficients and loss terms from Equation 7 during training. (a) Evolution of the Dirichlet loss adaptive constant during training.(b) Evolution of the Neumann loss adaptive constant during training. (c) Dirichlet B.C loss term \(L_{dir}(\theta)\) (d) Neumann B.C loss term \(L_{neu}(\theta)\) (e) The PDE loss during training. Once the values of both the adaptive constants start dropping, the PDE loss improves much more rapidly.
\[\mathbf{v}=\left[\begin{array}{c}x\\ y\\ z\end{array}\right]\]
instead of using \(\mathbf{v}\) as the input we compute the Fourier feature mapping:
\[\gamma(\mathbf{v})=[\cos(2\pi\mathbf{b}_{1}^{T}\mathbf{v}),\sin(2\pi\mathbf{b}_ {1}^{T}\mathbf{v}),.....,\cos(2\pi\mathbf{b}_{m}^{T}\mathbf{v}),\sin(2\pi \mathbf{b}_{m}^{T}\mathbf{v})] \tag{8}\]
where \(m\) is a hyper-parameter and the frequencies \(\mathbf{b}_{j}\) are selected randomly from an isotropic distribution. Then \(\gamma(\mathbf{v})\) is passed into the network.
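A minimal PyTorch module implementing Equation (8) could look as follows; the Gaussian frequency sampling with scale sigma matches the isotropic-distribution description, while the default dimensions are placeholders.

```
import math
import torch

class FourierFeatures(torch.nn.Module):
    # Random Fourier feature embedding of Eq. (8); the Gaussian scale sigma
    # controls the bandwidth of the frequencies b_j and is a tuning knob.
    def __init__(self, in_dim=3, m=64, sigma=1.0):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, m) * sigma)  # fixed b_j

    def forward(self, v):
        proj = 2.0 * math.pi * v @ self.B
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)
```

The \(2m\)-dimensional output of the embedding then feeds the first dense layer of the PINN.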
The Fourier feature embedding was shown to be highly effective in training PINNs models by Wang et al. [47], and several results were shown for 1D and 2D problems. We extend this implementation to solve 3D flow problems via PINNs and use it to create our hybrid data-PINN surrogate for flow thermal problems.
In addition, there have been other proposed solutions for the spectral bias problem for applications to PDE problems, such as the Siren activation [48], Fourier Neural Operators [49], and weighting schemes derived from the theory of Neural Tangent Kernels (NTK) [28].
## 4 Experiments and Results
In this section, some example problems are solved using PINNs. Sections 4.1 and 4.2 solve the 3D incompressible Navier-Stokes equations through a data-assisted approach, where very sparse solution data is provided in the domain.
Section 4.3 uses a hybrid data-PINN approach to generate a surrogate model for a given design space of a heat sink with a chip underneath it, undergoing cooling via forced convection. Then, given certain constraints on the running metrics of the chip-sink setup (like max temperature in the chip), the optimal set of parameters in the Design of Experiments (DoE) space that satisfy the constraints while maximizing an objective are obtained via rapid design optimization using the created surrogate.
Details on hyper-parameters used in the model training for each experiment that follows can be found in Appendix Section A.1.
### Forward Solve of 3D Stenosis Problem
Flow through an idealized 3D stenosis geometry at a physiologically relevant Reynolds number is demonstrated; see Figure 3 for details about the geometry. To the authors' best knowledge, flow through a stenosis has been solved using PINNs only at a low Reynolds number of approximately 6 (based on inlet diameter) [23]. Flow through irregular geometries has been solved at a higher Re (500), but in 2D [50]. In this paper, the stenosis problem is solved at Re 150 and in 3 dimensions.
As discussed in Section 2.2, at higher Reynolds numbers the standard PINN implementation struggles to achieve a good local minimum. This was confirmed using a standard PINN implementation. To alleviate this issue, a data-assisted approach was adopted, in which sporadic solution data is added throughout the domain of interest (depicted on a slice in Figure 4). The data was given in the form of concentric rings at the radii depicted on the cut plane.
#### 4.1.1 Problem Setup
The flow problem through the stenosis is solved using the steady-state incompressible Navier-Stokes equations:
\[\nabla\cdot\mathbf{u}=0, \tag{9}\]
\[(\mathbf{u}\cdot\nabla)\mathbf{u}=-\frac{1}{\rho}\nabla\mathbf{p}+\nu\nabla \cdot(\nabla\mathbf{u}), \tag{10}\]
subject to
\[\mathbf{u}(x_{b1})=g(x_{b1}),\,x_{b1}\in\partial\Omega_{1},\]
\[\mathbf{u}(x_{b2})=0,\,x_{b2}\in\partial\Omega_{2},\]
\[\nabla u_{i}(x_{b3})\cdot\mathbf{n}=0,\,x_{b3}\in\partial\Omega_{3},i=1,2,3\]
\[p(x_{b3})=0,\,x_{b3}\in\partial\Omega_{3}\]
where \(g(x_{b1})\) represents a profiled input to the stenosis. \(\rho\) and \(\nu\) are the density and kinematic viscosity of the fluid (air), respectively, and \(\mathbf{u}\) and \(p\) are the velocity vector and pressure, respectively.
In the present problem, a parabolic inflow profile is prescribed with a peak velocity of 0.15 m/s. The ratio of the throat area to the inlet area is 0.36.
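A sketch of such a profile on a circular inlet is given below; the inlet radius, center, and orientation are placeholders, as only the peak velocity is stated in the text.

```
import numpy as np

def inlet_profile(y, z, center=(0.0, 0.0), R=1.0, u_peak=0.15):
    # Parabolic (Poiseuille-type) axial velocity on a circular inlet: peaks
    # at u_peak on the axis and vanishes at the wall (no-slip boundary).
    r2 = (y - center[0]) ** 2 + (z - center[1]) ** 2
    return np.clip(u_peak * (1.0 - r2 / R**2), 0.0, None)
```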
The output of the network is approximated as \(G_{\theta}\), which is a 4-component output:
\[G_{\theta}=\left[\begin{array}{c}u\\ v\\ w\\ p\end{array}\right]\]
#### 4.1.2 Results
Figure 5 compares the velocity magnitude returned by the trained PINN model and Altair AcuSolve® on a 2D slice through the stenosis. As can be seen, the essential features of the flow are captured. Figures 6(a) and 6(b) compare the velocity and pressure profiles through the center of the stenosis. The differences between the line plots are attributed to differences in mesh density between the two cases. The CFD mesh was an unstructured mesh of around 63,000 nodes with a boundary layer, while the
Figure 4: Stenosis diagram (not to scale) showing planes where solution data is provided randomly.
Figure 3: Visual description of stenosis problem
point cloud used with the PINN consisted of around 87,000 randomly distributed points, with finer sampling near the boundary.
Another approach that was investigated to solve the 3D stenosis problem was that of using "continuity planes" as defined by Hennigh et al. [38] in their experiments solving 3D flow problems using PINNs. In this approach, the authors added constraints on the mass flow through a plane and added these constraints to the loss function. While this approach was found to aid the convergence of the PINN model to the correct solution, there were several issues found to exist with this method:
1. It is difficult to generate continuity planes for complex geometries such as those shown in Sections 4.2 and 4.3.
2. The quality of the solution from the PINN depends heavily on the integration scheme used to calculate the mass flow rate, and the fineness of the points on the continuity plane.
Figure 5: Solution Comparison. (a) Altair AcuSolve® Solution to stenosis problem (b) PINN forward solve to stenosis problem.
Figure 6: Centerline solution comparisons: PINN versus Altair AcuSolve® (a) Total Velocity Comparison (b) Pressure Comparison
Hence, in the next section, random and sparsely distributed data was used in the domain to aid convergence.
### Flow over a Printed Circuit Board (PCB)
#### 4.2.1 Problem Setup
Flow over a PCB consisting of a heat sink, chip, and capacitor is solved at a Reynolds number of approximately 1500, based on the length of the PCB board and air as the fluid. The geometry and flow orientation are shown in Figure 7. This represents a forced convection problem common in electronics design and is a challenging problem for PINNs because it is in 3D, with a complex geometry and large gradients involved.
Let \(D\) represent the set of all nodes in the domain. To train the PINN model, the CFD solution was first computed. Next, 1% of the nodes in the solution domain were randomly selected (call this set \(D_{1}\subset D\)). This is a selection of roughly 2,300 node points (from a mesh of roughly 230,000 nodes). The experiment was then divided into three parts:
1. **Case A**: A network was trained on the CFD solution at all points in \(D_{1}\) (i.e., \(\forall\mathbf{x}\in D_{1}\)), following which the physics of the problem was **enforced at every node location** in \(D\) (i.e., \(\forall\mathbf{x}\in D\)) by including the physics-based loss in the training; the network was then asked to predict the solution in the entire domain \(D\).
2. **Case B**: A network was trained on the CFD solution at the points contained in \(D_{1}\) (i.e., \(\forall\mathbf{x}\in D_{1}\)) **without any physics enforcement** and then asked to predict the solution in the entire domain (i.e., \(\forall\mathbf{x}\in D\)).
3. **Case C**: Finally, the same experiment as Case A was repeated but with a new set \(D_{2}\) consisting of only 0.2% of the nodes in \(D\), which were again randomly selected.
The governing equations for this problem are the Reynolds Averaged Navier-Stokes Equations:
\[\nabla\cdot\mathbf{u}=0, \tag{11}\]
\[(\mathbf{u}\cdot\nabla)\mathbf{u}=-\frac{1}{\rho}\nabla\mathbf{p}+(\nu+\nu_{t })\nabla\cdot(\nabla\mathbf{u}), \tag{12}\]
\(\rho\), \(\nu\), and \(\nu_{t}\) represent the density, kinematic viscosity, and eddy viscosity of the system, respectively. The inflow is set to a constant velocity of 0.15 m/s and the outflow is set to the stress-free condition. It should be noted that in the current study, the eddy viscosity is obtained directly from the CFD solver using the Spalart-Allmaras turbulence model. Turbulence modeling in PINNs is a field of active research with a few articles investigating it [38, 51, 52], and effectively incorporating turbulence models into PINN-based models is left as future work.
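The physics-based loss is assembled from the mean squared residuals of these equations, evaluated by automatic differentiation. A minimal PyTorch sketch is shown below; only the continuity and \(x\)-momentum residuals are written out, \(\nu_{t}\) is treated as a pointwise value taken from the CFD solver, and all function names are our own illustrative choices:

```python
import torch

def grad(f, x):
    """First derivatives of the scalar field f with respect to x (shape (N, 3))."""
    return torch.autograd.grad(f, x, grad_outputs=torch.ones_like(f),
                               create_graph=True)[0]

def rans_residuals(model, xyz, rho, nu, nu_t):
    """Continuity and x-momentum residuals of Eqs. (11)-(12) at points xyz."""
    xyz = xyz.clone().requires_grad_(True)
    u, v, w, p = model(xyz).unbind(dim=1)
    du, dv, dw, dp = grad(u, xyz), grad(v, xyz), grad(w, xyz), grad(p, xyz)
    continuity = du[:, 0] + dv[:, 1] + dw[:, 2]           # Eq. (11)
    lap_u = sum(grad(du[:, i], xyz)[:, i] for i in range(3))  # Laplacian of u
    mom_x = (u * du[:, 0] + v * du[:, 1] + w * du[:, 2]   # Eq. (12), x-component
             + dp[:, 0] / rho - (nu + nu_t) * lap_u)
    return continuity, mom_x
```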
Figure 7: Geometry of a PCB with a chip, sink, and capacitor assembly.
#### 4.2.2 Results
Figure 8 shows the ANN predictions for the different cases. It is evident that, using sparse data together with the physics-based regularizer, the network is able to converge toward the CFD solution (shown in Figure 8d). However, as evident in Figure 8c, the network failed to converge to a physical solution when the amount of data provided was insufficient, highlighting that a certain amount and fineness of data is required. Table 1 shows the Mean Squared Errors (MSE) for each experiment, for velocity and pressure, taking the CFD solution as the ground truth. The error is calculated as
\[\text{MSE}=\sqrt{\frac{\sum_{i=1}^{N_{nodes}}(x_{i,pred}-x_{i,truth})^{2}}{N_{ nodes}}} \tag{13}\]
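Note that Eq. (13), as written, takes a square root and is therefore the root-mean-squared form of the nodal error. A one-line NumPy equivalent is:

```python
import numpy as np

def error_metric(pred: np.ndarray, truth: np.ndarray) -> float:
    """Eq. (13): root of the mean squared nodal error."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))
```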
Figure 9 shows the fraction of node points for each case that are above a given Mean Absolute Error (MAE) value; the lower the fraction, the better the solution. We note from Figure 9 that even for Case A there are outliers where the MAE is relatively high, indicating poor convergence to the solution at those nodes. The convergence of PINNs toward the correct solution for highly nonlinear systems is an open and challenging problem, especially in three dimensions. Nonetheless, these results open exciting possibilities for using physics-based regularizers and represent a step forward for solving the 3D Navier-Stokes equations at high Reynolds numbers using PINNs. Furthermore, the data generation cost of creating surrogate models with PINNs can be greatly reduced by providing solution data on a coarser grid and solving the physics on a finer grid.
### Surrogate Modeling and Design Optimization of a Heat Sink
In this section, the PINNs surrogate modeling technique is demonstrated for rapid design optimization of a heat sink assembly. The assembly utilizes a chip that generates heat and a fin-type heatsink on top to dissipate heat into the surrounding fluid. The chip-heatsink assembly is cooled by forced convection of air. The geometry and setup are shown in Figure 10.
The goal is to optimize the heat sink design and the running conditions of the assembly, subject to feasibility constraints placed on chip temperature and channel pressure drop. This represents a common design optimization problem in electronics cooling. More specifically, if \(\dot{Q}_{src}\) is the total power being generated by the chip, the optimization problem can be framed as follows:
\[\text{Maximize }\dot{Q}_{src}\text{ subject to:} \tag{14}\]
\[\text{Pressure drop across the heat sink channel ($\Delta$P)}\leq 11\text{ Pa} \tag{15}\]
\[\text{Maximum temperature anywhere on the chip }\leq 350\text{ K} \tag{16}\]
The pressure at the outflow is fixed to 0 Pa, and the pressure drop across the heat sink channel is hence calculated as the average pressure over the inflow of the channel:
\[\Delta\text{P}=\overline{\text{P}_{\text{inlet}}} \tag{17}\]
The term to be maximized, \(\dot{Q}_{src}\), is also one of the design axes and an input parameter (P3) to the network.
The design variables that can be altered for this present optimization are:
| Case | Description | MSE (Velocity) | MSE (Pressure) |
| --- | --- | --- | --- |
| Case A | 1% domain data + physics | **0.0135** | **0.0037** |
| Case B | 1% domain data only | 0.0222 | 0.00472 |
| Case C | 0.2% domain data + physics | 0.0245 | 0.00545 |

Table 1: Mean Squared Errors (MSE) for velocity and pressure for Cases A, B, and C
* Inflow Velocity
* Fin height
* Source term in the chip (has to be maximized)
The upper and lower limits of each of the design variables mentioned above are summarized in Table 2. The inlet velocity is set based on typical values found in the literature [53] and corresponds to a Reynolds number range of approximately 10,300 to 24,000.
The governing equations solved for this conjugate heat transfer problem are the same as in Section 4.2 for the flow problem, subject to no-slip boundary conditions on the chip-heatsink assembly with a
Figure 8: Neural Network (NN) prediction with and without physics, for very coarse data supplied on a plane through the domain. (a) **Case A:** Trained on 1% data and physics (b) **Case B:** Trained on 1% solution data only (c) **Case C:** Trained on 0.2% data and physics (d) True Solution from CFD solver
Figure 9: Node fractions of points above a certain MAE value, for each case. (a) MAE of Velocity (b) MAE of Pressure
variable freestream inflow velocity, causing forced convection. As in Section 4.2, the eddy viscosities are taken from the CFD solutions.
The energy equation in both fluid and solid reads:
\[k\nabla^{2}T+\dot{q}_{src}-\rho s\mathbf{u}\cdot\nabla T=0, \tag{18}\]
where \(T\) represents the temperature, \(\dot{q}_{src}\) represents the volumetric source term, and \(k\) and \(s\) are the conductivity and specific heat of the material, respectively. At the interfaces between the fluid and solid domains (fluid-sink, sink-chip, and fluid-chip), the interface condition is applied by minimizing the following loss terms, as shown in [54]:
\[L_{flux}=\frac{1}{N_{int}}\sum_{i=1}^{N_{int}}(f_{d_{1}}(\mathbf{u}(x_{i})) \cdot\mathbf{n}_{d_{1}}+f_{d_{2}}(\mathbf{u}(x_{i}))\cdot\mathbf{n}_{d_{2}})^ {2}, \tag{19}\]
\[L_{val}=\frac{1}{N_{int}}\sum_{i=1}^{N_{int}}(\mathbf{u}_{d_{j}}(x_{i})- \overline{\mathbf{u}_{d_{j}}(x_{i})})^{2}, \tag{20}\]
where \(\mathbf{n}_{d_{1}}=-\mathbf{n}_{d_{2}}\) and \(j=1,2\); the average in (20) is taken over \(j\). Here \(d_{1}\) and \(d_{2}\) refer to the domains on either side of the interface, and \(N_{int}\) is the number of node points on the interface.
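A sketch of how these two interface losses can be assembled in PyTorch is given below; treating the temperature \(T\) as the interface value and \(k\nabla T\cdot\mathbf{n}\) as the flux, as well as the tensor layouts, are illustrative assumptions:

```python
import torch

def interface_losses(flux_d1, flux_d2, val_d1, val_d2, n_d1):
    """Interface losses of Eqs. (19)-(20).

    flux_d1, flux_d2: (N, 3) interface fluxes (e.g. k * grad T) on each side.
    val_d1, val_d2:   (N,)   interface field values (e.g. T) on each side.
    n_d1:             (N, 3) unit normals of domain d1 (n_d2 = -n_d1).
    """
    # Eq. (19): the net normal flux across the interface should vanish.
    l_flux = ((flux_d1 * n_d1).sum(dim=1)
              - (flux_d2 * n_d1).sum(dim=1)).pow(2).mean()
    # Eq. (20): each side's value should match the two-sided average.
    mean_val = 0.5 * (val_d1 + val_d2)
    l_val = 0.5 * ((val_d1 - mean_val) ** 2 + (val_d2 - mean_val) ** 2).mean()
    return l_flux, l_val
```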
#### Model Creation and Evaluation
The sampling of the above Design of Experiments (DoE) space is done via an efficient space-sampling method to optimally fill the DoE space [55]. The sampled DoE space for training is shown in Figure 11, along with the location of points at which the surrogate model is tested. The reader is referred to Section A.3.1 for a complete tabular description of the DoE space. Note that for this
| Parameter No. | Parameter Name | Lower Value | Upper Value |
| --- | --- | --- | --- |
| P1 | Inflow Velocity (\(m/s\)) | 3 | 7 |
| P2 | Fin Height (\(mm\)) | 15 | 23 |
| P3 | Source Term (\(W\)) | 30 | 60 |

Table 2: Design of Experiments space axis ranges for the heat sink design optimization
Figure 10: Basic problem geometry and flow depiction
example, we use full-field data at each DoE point to train the surrogate, as opposed to a small fraction of it (as in Sections 4.1 and 4.2), since the objective is to obtain a surrogate that is as accurate as possible. Table 3 shows the MSE of the predictions by the hybrid data-PINN model at the test points, calculated with respect to the CFD solution at the same mesh resolution. Also shown is the MSE of the predictions by a standard data-driven NN that does not leverage the key features described in Section 3; such networks are used extensively in industry for surrogate modeling applications. The hybrid data-PINN model outperforms the standard data-driven NN on all predictions. Section A.4 presents further qualitative comparisons between test-point results from the PINN model and the standard data-driven NN.
#### Solving the Design Optimization Problem
The surrogate model is used to solve the design optimization problem described in Equations 14-16. The goal is to show that the surrogate model can accurately predict the solution in the entire DoE space by returning a design that satisfies all applied constraints while maximizing the objective. The created surrogate models are interfaced with an optimizer that solves a generic constrained optimization problem via an iterative process, described thoroughly in Appendix Section A.2.
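The optimizer itself is described in Appendix A.2; purely to illustrate how a fast surrogate enables such an iterative, particle-style search, a generic penalty-based stand-in is sketched below. The `surrogate` interface, penalty weight, and resampling rule are all assumptions, not the method used in this work:

```python
import numpy as np

def optimize_design(surrogate, bounds, n_particles=64, iters=10, seed=0):
    """Generic penalty-based search over the DoE space (illustrative only).

    surrogate(x) -> (dP, t_max) for a design point x = (velocity, fin_h, q_src).
    bounds: (3, 2) array of [lower, upper] limits per design variable.
    """
    rng = np.random.default_rng(seed)
    low, high = bounds[:, 0], bounds[:, 1]
    pts = rng.uniform(low, high, size=(n_particles, 3))
    best, best_score = None, -np.inf
    for _ in range(iters):
        for x in pts:
            dp, t_max = surrogate(x)
            # Large penalty for violating the constraints of Eqs. (15)-(16).
            penalty = 1e3 * (max(0.0, dp - 11.0) + max(0.0, t_max - 350.0))
            score = x[2] - penalty          # maximize chip power, Eq. (14)
            if score > best_score:
                best, best_score = x.copy(), score
        # Crude refinement: resample the particles around the current best.
        pts = np.clip(best + rng.normal(0.0, 0.05 * (high - low),
                                        size=pts.shape), low, high)
    return best
```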
Each snapshot in Figure 12 represents a design iteration, and each particle represents a point in the DoE space. Each axis of a plot represents a parameter axis.
Figure 11: Training and testing points in the 3D DoE space
Figure 12: Design optimization iterations of the heat sink problem (a) Iteration 0 (b) Iteration 5 (c) Iteration 10
For the given constraints, the particles converge to a much smaller region of the DoE space. The design point returned by the optimizer in this case is:
**Inflow Velocity**: 6 m/s
**Chip Power**: 50W
**Fin Height**: 17mm
To verify that the result satisfies the constraints, the returned design point is solved with Altair AcuSolve® at the same mesh fineness and on another mesh with 10x the fineness, keeping all essential mesh features such as boundary layers and refinement zones. As shown in Figures 13 and 14, not only does the given design point satisfy the design constraints, but the finer-mesh solution is very close to the coarser one, and a little tweaking of the design point using CFD with the higher-resolution mesh will yield a highly optimized design. This optimization is performed several orders of magnitude faster than with traditional CFD; the reader is referred to Appendix Section A.3.2 for a quantitative description.
## 5 Conclusions and Future Work
In this paper, Physics Informed Neural Networks were used to solve the 3D Navier-Stokes equations in a data-assisted setting, for complex geometries with realistic physical parameters. It was shown that even for problems being solved at high Reynolds Numbers in 3D, PINNs can be trained to produce a
Figure 13: Temperature plot through a slice at the rear of the sink (from bottom to top). The comparison between the high-fidelity solution on the fine mesh and the PINN prediction on a coarser mesh shows good agreement.
| | **Velocity MSE** | **Pressure MSE** | **Temperature MSE** |
| --- | --- | --- | --- |
| **Test Point 1** | | | |
| Hybrid data-PINN Model | **0.65** | **2.62** | **1.81** |
| Standard Data-driven NN | 0.93 | 2.83 | 2.05 |
| **Test Point 2** | | | |
| Hybrid data-PINN Model | **0.39** | **1.19** | **2.67** |
| Standard Data-driven NN | 0.58 | 1.42 | 2.97 |
| **Test Point 3** | | | |
| Hybrid data-PINN Model | **0.76** | **3.31** | **1.86** |
| Standard Data-driven NN | 1.10 | 3.51 | 2.18 |
| **Test Point 4** | | | |
| Hybrid data-PINN Model | **0.33** | **0.99** | **2.87** |
| Standard Data-driven NN | 0.52 | 1.19 | 3.15 |

Table 3: MSE at the 4 test points shown in Table 5. The PINN-based model consistently outperforms the standard data-driven NN on all test points.
good solution in the presence of very sparse solution data randomly scattered in the solution domain. However, using too little solution data causes the model to converge to an unphysical solution. PINNs were also demonstrated for 3D flow-thermal surrogate modeling, and the PINN-based surrogates consistently outperformed a standard data-driven NN on test-case examples. The PINN surrogates were also interfaced with a design optimization algorithm to solve a constrained optimization problem. This optimization returned a design point that, when solved with high-fidelity CFD, was consistent with the requirements of the design constraints, highlighting the suitability of the method for building surrogates for 3D flow-thermal problems.
There are multiple avenues through which the work shown in this paper can be improved. Research is needed to improve convergence and to offer guarantees that PINN training reaches local minima that represent physical solutions in a data-free manner. This would further reduce the data requirements for creating physically consistent PINN models, which can greatly improve their surrogate modeling capabilities by reducing the cost of training and improving predictive accuracy. Further work is needed to investigate turbulence modeling in PINNs so that high Reynolds number problems can be solved in a data-free manner. There are also many themes, such as uncertainty quantification of surrogates [56, 57, 58] and effective surrogate modeling of different geometries [59, 60, 61], that are active fields of research in PINNs and could be included in future work building on these results.
## Acknowledgements
This research did not receive any specific grant from funding agencies in the public or not-for-profit sectors, or from any external commercial entities. The authors gratefully acknowledge the use of Altair Engineering Inc.'s computing facilities for running experiments.
## CRediT authorship contribution statement
**Saakaar Bhatnagar:** Formal Analysis, Investigation, Methodology, Software, Validation, Writing-original draft. **Andrew Comerford:** Conceptualization, Investigation, Project Administration, Supervision, Writing- review and editing **Araz Banaeizadeh:** Conceptualization, Project Administration, Supervision, Writing- review and editing
|
2309.16022 | GNNHLS: Evaluating Graph Neural Network Inference via High-Level
Synthesis | With the ever-growing popularity of Graph Neural Networks (GNNs), efficient
GNN inference is gaining tremendous attention. Field-Programming Gate Arrays
(FPGAs) are a promising execution platform due to their fine-grained
parallelism, low-power consumption, reconfigurability, and concurrent
execution. Even better, High-Level Synthesis (HLS) tools bridge the gap between
the non-trivial FPGA development efforts and rapid emergence of new GNN models.
In this paper, we propose GNNHLS, an open-source framework to comprehensively
evaluate GNN inference acceleration on FPGAs via HLS, containing a software
stack for data generation and baseline deployment, and FPGA implementations of
6 well-tuned GNN HLS kernels. We evaluate GNNHLS on 4 graph datasets with
distinct topologies and scales. The results show that GNNHLS achieves up to
50.8x speedup and 423x energy reduction relative to the CPU baselines. Compared
with the GPU baselines, GNNHLS achieves up to 5.16x speedup and 74.5x energy
reduction. | Chenfeng Zhao, Zehao Dong, Yixin Chen, Xuan Zhang, Roger D. Chamberlain | 2023-09-27T20:58:33Z | http://arxiv.org/abs/2309.16022v1 | # GNNHLS: Evaluating Graph Neural Network Inference via High-Level Synthesis
###### Abstract
With the ever-growing popularity of Graph Neural Networks (GNNs), efficient GNN inference is gaining tremendous attention. Field-Programming Gate Arrays (FPGAs) are a promising execution platform due to their fine-grained parallelism, low-power consumption, reconfigurability, and concurrent execution. Even better, High-Level Synthesis (HLS) tools bridge the gap between the non-trivial FPGA development efforts and rapid emergence of new GNN models. In this paper, we propose GNNHLS, an open-source framework to comprehensively evaluate GNN inference acceleration on FPGAs via HLS, containing a software stack for data generation and baseline deployment, and FPGA implementations of 6 well-tuned GNN HLS kernels. We evaluate GNNHLS on 4 graph datasets with distinct topologies and scales. The results show that GNNHLS achieves up to \(50.8\times\) speedup and \(423\times\) energy reduction relative to the CPU baselines. Compared with the GPU baselines, GNNHLS achieves up to \(5.16\times\) speedup and \(74.5\times\) energy reduction.
field-programmable gate arrays, graph neural networks, high-level synthesis
## I Introduction
Graphs are widely adopted to model the relational-structured data in social networks, bioinformatics, etc. [26]. Machine learning (ML) on graphs has experienced a surge of popularity in the past decade, since traditional ML models, which are designed to process Euclidean data with regular structures, are ineffective at performing prediction tasks on graphs. Due to their simplicity and superior representation learning ability, Graph Neural Networks (GNNs) [6, 12, 19, 23, 25] have achieved impressive performance on various graph learning tasks, such as node classification, graph classification, etc.
To implement GNNs, a set of widespread libraries, such as PyTorch Geometric (PyG) [8] and Deep Graph Library (DGL) [20], are built upon general-purpose ML frameworks (e.g., PyTorch [17]) targeting CPU and GPU platforms. However, the performance and energy consumption of GNN implementations are hindered by both the hardware platforms and the software frameworks: (1) Distinct from traditional NNs, GNNs combine the irregular, communication-intensive patterns of graph processing with the regular, computation-intensive patterns of NNs. This feature can lead to ineffectual computation on CPUs and GPUs. (2) Since these frameworks assemble functions in a sequential way, one function will not start until the previous one finishes. This execution model leads to extra memory accesses, a larger memory footprint, and implicit barriers for intermediate results, limiting the achievable performance and energy efficiency and the scale of graph datasets that can be handled.
Field-Programmable Gate Arrays (FPGAs) are potentially an attractive approach to GNN inference acceleration. FPGAs' massive fine-grained parallelism provides opportunities to exploit GNNs' inherent parallelism, and they deliver better performance per watt than general-purpose computing platforms. In addition, FPGAs' reconfigurability and concurrency provide great flexibility for addressing the challenges of hybrid computing patterns and ineffectual execution. Most of the prior works investigating FPGAs focus on accelerating a specific GNN model implemented using Hardware Description Languages (HDL). AWB-GCN [9], as one of the earliest FPGA-based works, proposes a GCN accelerator written in HDL to solve the workload imbalance problem caused by the distinct sparsity of different components. BoostGCN [24] proposes a graph partition algorithm in a preprocessing step to address workload imbalance issues. Despite these promising results, the HDL design methodology is not suitable for widespread adoption for GNN implementations due to the conflict between the non-trivial development effort of HDL and the rapid emergence of new GNN models. To address this challenge, High-Level Synthesis (HLS) tools have been proposed to create GNN kernels using popular languages such as C/C++. With the help of HLS, development time is substantially shortened relative to HDL designs. Lin et al. [15], in one of the first such works, propose an HLS-based accelerator for GCN with separate sparse-dense matrix multiplication units and dense matrix multiplication units, which are connected by shared memory and execute sequentially. GenGNN [1] proposes a framework to accelerate GNNs under real-time requirements, where the whole graph and the corresponding intermediate results are stored in on-chip resources on the FPGA. Despite these promising results, this work is limited to small-scale graphs with a low edge-to-node ratio because on-chip memory usage is proportional to graph scale and feature dimensions.
Distinct from pure software programming, HLS developers need to adopt multiple optimization pragmas and follow certain coding styles to achieve the best performance and energy cost. As reported in [3], the performance difference between a well-optimized version and a non-optimized version of the same kernel can be two to three orders of magnitude. This invites an open question: _how effectively can modern HLS tools accelerate GNN inference?_
In this paper, we introduce GNNHLS1, an open-source framework for comprehensive evaluation of GNN kernels on FPGAs via HLS. GNNHLS contains a software stack, extended from a prior GNN benchmark [7] based on PyTorch and DGL, for input data generation and conventional-platform baseline deployments (i.e., CPUs and GPUs). It also contains six well-optimized general-purpose GNN applications. These kernels fall into two classes: (1) isotropic GNNs, in which every neighbor contributes equally to the update of the target vertex, and (2) anisotropic GNNs, in which edges and neighbors contribute differently to the update due to operations such as attention and gating mechanisms. In this paper, we make several contributions:
Footnote 1: Released as a benchmark suite [28] and also available at [https://github.com/Chernfeng/Zhao/GNNHLS](https://github.com/Chernfeng/Zhao/GNNHLS)
* We propose GNNHLS, a framework to evaluate GNN inference acceleration via HLS, containing: (a) a software stack based on PyTorch and DGL for data generation and baseline deployment, and (b) FPGA implementation including 6 well-tuned GNN HLS kernels with host and configuration files which can also be used as benchmarks.
* We characterize the GNN kernels in terms of locality scores and instruction mix to obtain insight into their memory access and computational properties.
* We provide a comprehensive evaluation of our GNN HLS implementations on 4 graph datasets, assessing both performance improvement and energy reduction.
Our evaluation results show that GNNHLS provides up to \(50.8\times\) speedup and \(423\times\) energy reduction relative to the multicore CPU baseline. Compared with the GPU baselines, GNNHLS achieves up to \(5.16\times\) speedup and \(74.5\times\) energy reduction.
## II Framework Description
### _GNNHLS Overview_
The GNNHLS framework, as depicted in Figure 1, comprises two primary components: data generation and HLS FPGA. The former is designed to generate input and output files and measure baselines on a CPU and a GPU, while the latter is designed to implement the optimized HLS applications on an FPGA. The data generation component mainly consists of the training system and the inference system, which are based on PyTorch and DGL. To account for the impact of graph topology on GNN model performance, it uses graph datasets with various topologies, including those from Open Graph Benchmark [11]. In addition, six commonly used DGL GNN models obtained from a previous GNN benchmark [7] are incorporated. Thus, realistic model parameters, generated in the training phase, are utilized in inference.
The HLS FPGA component implements the GNN kernels on the FPGA. These kernels match the functionality of the DGL baselines and are optimized with several optimization techniques [4]. The optimized HLS kernels, with associated host files, data header files, and configuration files, are compiled by Vitis and executed on the FPGA. The optimization techniques applied in GNNHLS are described as follows:
* **Pipeline**: Enable instruction-level concurrent execution to improve overall throughput.
* **Loop Merge**: Optimize the finite state machine (FSM) of nested loops to remove the impact of inner-loop latency on the overall throughput.
* **Burst Memory Access & Memory Port Widening**: Access large chunks of data at contiguous addresses and increase the memory port width to improve memory bandwidth.
* **Loop Unroll**: Leverage instruction-level parallelism by executing multiple copies of loop iterations in parallel to increase throughput at the cost of resource utilization.
* **Dataflow**: Enable task-level parallelism by connecting multiple functions with FIFOs to form a pipeline-style architecture and executing them concurrently.
* **Multiple Compute Units (CUs)**: Execute multiple kernel instances as CUs in parallel on different data portions at the cost of resource usage.
Figure 2 illustrates the Dataflow diagrams of the GNNHLS kernels, in which memory and computation operations are divided and pipelined based on the complexity of each kernel. To mitigate the cost of Dataflow, we also (1) tune the location of FIFO accesses to achieve better throughput, (2) apply vectors for FIFO widening and associated operations, and (3) split loops to optimize the FIFO properties of loop indices.
### _Graph Convolutional Network (GCN)_
Graph Convolutional Network (GCN) [12] is one of the earliest GNN models and has a simple structure. It updates node features by aggregating neighboring node features and performing linear projection. The formula is given as follows:
\[h_{i}^{l+1}=\mathrm{ReLU}\left(U^{l}\sum_{j\in N_{i}}h_{j}^{l}\right) \tag{1}\]
Where \(U^{l}\in\mathbb{R}^{d\times d}\) is the learnable weight matrix of the linear projection, which performs vector-matrix multiplication. \(h_{i}^{l}\in\mathbb{R}^{d\times 1}\) is the feature vector of vertex \(i\) in layer \(l\), and \(N_{i}\) represents the neighboring vertices of vertex \(i\).
Based on the above equation, we create the GCN HLS implementation, whose Dataflow diagram is depicted in Figure 2(a). In addition to the memory access modules for the input graph and \(h\), we split the computation operations into
Fig. 1: Diagram of the GNNHLS framework.
two modules: aggregation of the neighbor node vectors \(h_{j}\), and vector-matrix multiplication (VMM) for the linear projection. We apply all the optimization techniques described previously to the GCN kernel. The memory burst length for vector \(h\) is \(d\), limited by the irregularity of the graph topology. The initiation interval (II) of the aggregation module is \(4\left|N_{i}\right|+2\). Since Vitis is not good at synthesizing tree-structured floating-point operations, we separate the VMM into two functions within the Dataflow scope, performing grouped VMM and summation, respectively. The II of the VMM is thereby reduced from \(d^{2}\) to \(d+36\). All these modules are reused in the following GNN models. Due to GCN's simplicity, we create 2 CUs to process distinct vertices in parallel.
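Functionally, the kernel computes the same update as the short DGL/PyTorch sketch below (given only to make Eq. (1) concrete; matching Eq. (1), the neighbor sum is not degree-normalized, unlike DGL's built-in GraphConv):

```python
import torch
import torch.nn as nn
import dgl.function as fn

class GCNLayer(nn.Module):
    """Eq. (1): sum-aggregate neighbour features, then linear projection + ReLU."""
    def __init__(self, d: int = 128):
        super().__init__()
        self.U = nn.Linear(d, d, bias=False)

    def forward(self, g, h):
        with g.local_scope():
            g.ndata['h'] = h
            # Aggregation module: sum h_j over the neighbours of each vertex.
            g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'agg'))
            # VMM module: linear projection of the aggregated vector.
            return torch.relu(self.U(g.ndata['agg']))
```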
### _GraphSage (GS)_
GraphSage (GS) [10] introduces an inductive framework to improve the scalability over GCN by aggregating information from the fixed-size set of neighbors via uniform sampling, explicitly incorporating feature vectors of both the target vertex and its source neighbors. The mathematical expression of GraphSage with a mean aggregator is formulated as follows:
\[\begin{split} h_{i}^{l+1}&=\mathrm{ReLU}\left(U^{l }\mathrm{Concat}\left(h_{i}^{l},\frac{1}{\left|N_{i}\right|}\sum_{j\in N_{i}}h _{j}^{l}\right)\right)\\ &=\mathrm{ReLU}\left(V^{l}h_{i}^{l}+W^{l}\frac{1}{\left|N_{i} \right|}\sum_{j\in N_{i}}h_{j}^{l}\right)\end{split} \tag{2}\]
where \(N_{i}\) is the set of source neighbors of vertex \(i\), and \(h_{i}^{l}\in\mathbb{R}^{d\times 1}\) is the feature vector of vertex \(i\) in layer \(l\). The learnable weight matrix of the linear projection, \(U^{l}\in\mathbb{R}^{d\times 2d}\), is stored in on-chip memory. Given that distinct weight parameters are used for the target vertex and the source neighbors, \(U^{l}\) is divided into \(V^{l}\in\mathbb{R}^{d\times d}\) and \(W^{l}\in\mathbb{R}^{d\times d}\), enabling parallel execution of both paths to hide the latency of the linear projection for the target vertex. Figure 2(b) illustrates the Dataflow structure of GraphSage: the memory read accesses, the linear projection of the target feature, and the neighbor feature aggregation are executed simultaneously, and the results are then summed to update \(h_{i}\).
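The decomposition into a target-vertex path \(V^{l}h_{i}\) and a neighbor path \(W^{l}\,\mathrm{mean}_{j}(h_{j})\) can be written in DGL/PyTorch as follows (a sketch to make Eq. (2) concrete; names are ours):

```python
import torch
import torch.nn as nn
import dgl.function as fn

class GraphSageLayer(nn.Module):
    """Eq. (2): ReLU(V h_i + W mean_{j in N_i} h_j)."""
    def __init__(self, d: int = 128):
        super().__init__()
        self.V = nn.Linear(d, d, bias=False)   # target-vertex path
        self.W = nn.Linear(d, d, bias=False)   # neighbour path

    def forward(self, g, h):
        with g.local_scope():
            g.ndata['h'] = h
            g.update_all(fn.copy_u('h', 'm'), fn.mean('m', 'agg'))
            # The two projections are independent, mirroring the parallel
            # paths in the Dataflow structure of Figure 2(b).
            return torch.relu(self.V(h) + self.W(g.ndata['agg']))
```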
### _Graph Isomorphism Network (GIN)_
Graph Isomorphism Network (GIN) [23] employs the Weisfeiler-Lehman Isomorphism Test [22] as its foundation to investigate the discriminative ability of GNNs. The formula of GIN is described as follows:
\[h_{i}^{l+1}=\mathrm{ReLU}\left(U^{l}\mathrm{ReLU}\left(V^{l}\left((1+\epsilon )h_{i}^{l}+\sum_{j\in N_{i}}h_{j}^{l}\right)\right)\right) \tag{3}\]
where \(\epsilon\) is a learnable scalar weight, \(U^{l}\) and \(V^{l}\in\mathbb{R}^{d\times d}\) denote the learnable weight matrices of the cascaded VMM modules, \(h_{i}^{l}\in\mathbb{R}^{d\times 1}\) again refers to the feature vector of vertex \(i\) in layer \(l\), and \(N_{i}\) is again the set of source neighbors of vertex \(i\). In contrast to GraphSage, GIN (illustrated in Figure 2(c)) first sums the aggregated neighbor vector \(\sum_{j}h_{j}^{l}\) and the scaled target vector \(h_{i}^{l}\), hiding the latency of reading \(h_{i}\), and then performs two cascaded VMM modules with the weight matrices \(V^{l}\) and \(U^{l}\), respectively. This structure avoids generating long critical paths and achieves a higher clock frequency.
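For reference, Eq. (3) corresponds to the following DGL/PyTorch sketch (illustrative only; the \(\epsilon\) initialization and layer sizes are assumptions):

```python
import torch
import torch.nn as nn
import dgl.function as fn

class GINLayer(nn.Module):
    """Eq. (3): two cascaded projections of (1 + eps) h_i + sum_j h_j."""
    def __init__(self, d: int = 128):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # learnable scalar epsilon
        self.V = nn.Linear(d, d, bias=False)     # inner VMM
        self.U = nn.Linear(d, d, bias=False)     # outer VMM

    def forward(self, g, h):
        with g.local_scope():
            g.ndata['h'] = h
            g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'agg'))
            z = (1.0 + self.eps) * h + g.ndata['agg']
            return torch.relu(self.U(torch.relu(self.V(z))))
```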
### _Graph Attention Network (GAT)_
Graph Attention Network (GAT) [19] is an anisotropic GNN model that uses self-attention mechanisms to weight and learn representations of neighbor vertices unequally. The equation is described as follows:
Fig. 2: Dataflow diagrams of GNN HLS kernels in GNNHLS.
\[h_{i}^{l+1}=\mathrm{Concat}_{k=1}^{K}\left(\mathrm{ELU}\left(\sum_{j\in N_{i}}\alpha_{ij}^{k,l}U^{k,l}h_{j}^{l}\right)\right) \tag{4}\]
\[\alpha_{ij}^{k,l}=\mathrm{Softmax}(e_{ij}^{k,l})=\frac{\exp(e_{ij}^{k,l})}{\sum_{j^{\prime}\in N_{i}}\exp(e_{ij^{\prime}}^{k,l})} \tag{5}\]
\[e_{ij}^{k,l}=\mathrm{LeakyReLU}\left(\vec{a}^{T}\mathrm{Concat}(U^{k,l}h_{i}^{l},U^{k,l}h_{j}^{l})\right)=\mathrm{LeakyReLU}\left(a_{src}^{k,l}U^{k,l}h_{i}^{l}+a_{dest}^{k,l}U^{k,l}h_{j}^{l}\right) \tag{6}\]
where \(\alpha_{ij}^{l}\in\mathbb{R}^{K}\) is the attention score between vertex \(i\) and vertex \(j\) in layer \(l\), and \(U^{k,l}\in\mathbb{R}^{d\times d}\) and \(\vec{a}\in\mathbb{R}^{2d}\) are learnable parameters. Note that the weight vector \(\vec{a}^{T}\) is decomposed into \(a_{src}^{l}\) and \(a_{dest}^{l}\in\mathbb{R}^{d}\) in the DGL library, because transferring the VMM between \(U^{k,l}\) and \(h^{l}\) from an edge-wise to a node-wise operation is more efficient in terms of performance and memory footprint, especially for sparse graphs where the edge number is larger than the vertex number.
Figure 2(d) depicts the Dataflow framework of GAT. Due to the unbalanced workload of the numerator and the denominator in (5), the results of \(\exp(e_{ij})\), of size \(O(|N_{i}|)\), need to be stored temporarily before being accumulated. Considering the irregularity and the large maximum \(|N_{i}|\) of graphs, we divide the GAT model into two HLS kernels linked to the same memory banks for shared intermediate results: kernel 1 performs the VMM with \(U\) and \(h\) and the multi-headed element-wise multiplications (MHEWM) with \(a_{src}\) and \(a_{dest}\), respectively, in (6). After optimization, the II of the MHEWM is \(k+112\). The intermediate results are written back to memory and then read by kernel 2 to implement (4) and (5). Note that \(e_{ij}\) is computed twice in parallel to avoid performance degradation and deadlock issues. The IIs of the aggregation, softmax, and MHEWM modules are \(k\cdot|N_{i}|+2k+38\), \(k\cdot|N_{i}|+k+17\), and \(k\cdot|N_{i}|+k+14\), respectively.
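To make the node-wise decomposition of Eq. (6) concrete, the following single-head DGL/PyTorch sketch computes the unnormalized scores \(e_{ij}\); the edge-direction convention, parameter names, and single-head restriction are illustrative assumptions:

```python
import torch
import torch.nn.functional as F
import dgl.function as fn

def gat_scores(g, h, U, a_src, a_dst, slope=0.2):
    """Unnormalized single-head attention scores e_ij of Eq. (6).

    U: (d, d) projection matrix; a_src, a_dst: (d,) decomposed attention
    vectors. The dot products are computed once per vertex (node-wise), so
    only a cheap addition remains per edge.
    """
    z = h @ U.T                                   # U h, computed node-wise
    g.ndata['es'] = (z * a_src).sum(dim=-1)       # a_src . (U h)
    g.ndata['ed'] = (z * a_dst).sum(dim=-1)       # a_dst . (U h)
    g.apply_edges(fn.u_add_v('es', 'ed', 'e'))    # one addition per edge
    return F.leaky_relu(g.edata['e'], slope)
```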
### _Mixture Model Networks (MoNet)_
Mixture Model Networks (MoNet) [16] is a general anisotropic GNN framework designed for graph and node classification tasks using a Bayesian Gaussian Mixture Model (GMM) [5]. The model is formulated as follows:
\[h_{i}^{l+1}=\mathrm{ReLU}\left(\sum_{k=1}^{K}\sum_{j\in N_{i}}w_{k}(u_{ij})U^{k,l}h_{j}^{l}\right)=\mathrm{ReLU}\left(\sum_{k=1}^{K}U^{k,l}\sum_{j\in N_{i}}w_{k}(u_{ij})h_{j}^{l}\right) \tag{7}\]
\[w_{k}(u_{ij})=\exp\left(-\frac{1}{2}(u_{ij}^{l}-\mu_{k}^{l})^{T}(\Sigma_{k}^{l})^{-1}(u_{ij}^{l}-\mu_{k}^{l})\right) \tag{8}\]
\[u_{ij}^{l}=\mathrm{Tanh}(V^{l}pseudo_{ij}^{l}+v^{l}) \tag{9}\]
\[pseudo_{ij}^{l}=\mathrm{Concat}(deg_{i}^{-0.5},deg_{j}^{-0.5}) \tag{10}\]
where \(v^{l}\in\mathbb{R}^{2}\), \(V^{l}\in\mathbb{R}^{2\times 2}\), \(\mu^{l}\in\mathbb{R}^{K\times 2}\), \((\Sigma_{k}^{l})^{-1}\in\mathbb{R}^{K\times 2}\), and \(U^{k,l}\in\mathbb{R}^{d\times d}\) are learnable parameters of the GMM. \(V^{l}\) and \(v^{l}\) project the pseudo-coordinates between the target vertex and its neighbors, \(\mu^{l}\) and \((\Sigma_{k}^{l})^{-1}\) denote the mean vectors and (inverse) covariance matrices, and \(U^{k,l}\) is the weight matrix.
The Dataflow diagram of MoNet is depicted in Figure 2(e). In our HLS implementation, \(pseudo_{ij}\) of each edge is processed by a small VMM module with \(V^{l}\) and \(v^{l}\) in (9) and by the Gaussian weight computation module with \(\mu\) and \((\Sigma_{k}^{l})^{-1}\) in (8). Meanwhile, \(h_{j}\) is read from memory for the subsequent MHEWM with aggregation, MHVMM with \(U\), and MH aggregation modules. Note that we perform the MHVMM with \(U\) after aggregation in (7), transferring it from an edge-wise to a node-wise operation to reduce the number of times it is executed. After optimization, the IIs of the VMM for \(u_{ij}\), the Gaussian weight computation, the MHEWM with aggregation, the MHVMM with \(U\), and the MH aggregation are 1, 1, 4, \(d+k+28\), and \(7k+10\), respectively. We create 2 CUs for the HLS kernel to process vertices with distinct indices.
### _Gated Graph ConvNet (GatedGCN)_
The Gated Graph ConvNet (GatedGCN) [2] is an anisotropic GNN model that employs a gating mechanism to regulate the flow of information during message passing, allowing the model to emphasize relevant information and filter out irrelevant information. The gating mechanism uses gate functions (e.g., the sigmoid) to control the flow of messages at each layer. The mathematical expression for GatedGCN is provided below:
\[h_{i}^{l+1} =\mathrm{ReLU}\left(A^{l}h_{i}^{l}+\frac{\sum_{j^{\prime}\in N_{i} }B^{l}h_{j^{\prime}}^{l}\odot\sigma(e_{ij^{\prime}}^{l+1})}{\sum_{j^{\prime} \in N_{i}}\sigma(e_{ij^{\prime}}^{l+1})+\epsilon}\right) \tag{11}\] \[e_{ij}^{l+1} =E^{l}h_{i}^{l}+D^{l}h_{j}^{l}+C^{l}e_{ij}^{l} \tag{12}\]
where \(A^{l}\), \(B^{l}\), \(D^{l}\), \(E^{l}\), and \(C^{l}\in\mathbb{R}^{d\times d}\) are learnable matrix parameters, \(e_{ij}^{l}\in\mathbb{R}^{1\times d}\) denotes the edge feature from vertex \(i\) to vertex \(j\) in layer \(l\), \(h_{i}^{l}\) represents the node features of vertex \(i\) in layer \(l\), \(\odot\) denotes the Hadamard product, \(\sigma\) denotes the sigmoid function, and \(\epsilon\) is a constant for numerical stability.
Since the soft attention of GatedGCN shown in (11) is distinct from that of GAT, performing accumulation over \(e_{ij}\) in both the numerator and the denominator, we implement the HLS kernel as a single pipeline. Figure 2(f) illustrates the Dataflow framework of GatedGCN. To hide the latency of the multiple VMM modules in GatedGCN, we perform all of them in parallel with parameters \(A\), \(B\), \(D\), \(E\), and \(C\), respectively. The soft attention module is then applied to update \(h_{i}\). After optimization, the IIs of the soft attention and sum modules that generate \(h_{i}^{l+1}\) are \(10\cdot|N_{i}|+72\) and 31, respectively.
## III Experimental Methodology
**Datasets:** Table I shows the graph datasets used in our evaluation. All these graphs are collected from Open Graph Benchmark [11], a widely-used graph library for GNNs, and have a wide range of fields and scales. These graphs represent two classes of graphs with distinct topologies used in the GNN community: MH and MT consist of multiple small dense graphs, while AX and PT each consist of one single sparse
graph. The maximum and average degrees shown in Table I indicate their varying distributions, ranging from regular-like to powerlaw-like. In addition, we set the feature dimensions of the kernels as follows: GCN, GraphSage, and GIN have the same input and output dimensions of 128. The (input, head, output) dimensions of GAT and MoNet are (128, 8, 16) and (64, 2, 64), respectively. All the dimensions of GatedGCN are 32.
**Evaluation methods:** To perform evaluation, we use a Xilinx Alveo U280 FPGA card, provided by the Open Cloud Testbed [13], to execute the HLS kernels. This FPGA card provides 8 GB of HBM2 with 32 memory banks at 460 GB/s total bandwidth, 32 GB of DDR memory at 38 GB/s, and 3 super logic regions (SLRs) with 1205K look-up tables (LUTs), 2478K registers, 1816 BRAMs, and 9020 DSPs. We adopt 32-bit floating point as the data format. We use Vitis 2020.2 for synthesis and hardware linkage with the power-profile option enabled to perform power profiling during runtime, and Vitis Analyzer to view resource utilization, execution time and power consumption. We compare our HLS implementation with CPU and GPU baselines with PyTorch and the highly-optimized DGL library. We perform CPU baseline runs on an Intel Xeon Silver 4114 at 2.2 GHz with 10 cores, 20 threads, and 13.75 MB L3 cache. The GPU baseline is implemented on an Nvidia RTX 2080 Ti with 2994 CUDA cores at 1.5 GHz and 8 GB GDDR6 at 448 GB/s total bandwidth. We measure the energy consumption of the CPU and GPU baselines using the same technique as prior work [15].
## IV Characterization
To capture insight into the properties of GNNHLS, we first characterize the GNN kernels using instruction mix, spatial locality, and temporal locality. We use Workload ISA-Independent Characterization (WIICA) [18], a workload characterization tool, to capture ISA-independent properties by generating and parsing a dynamic trace of runtime information. Because of disk space and processing time limits, profiling the full trace is impractical; we therefore use uniform random node sampling [14] to select a sequence of 500 nodes for evaluation.
The irregularity of the graph topology induces non-contiguous memory references, limiting memory burst transfers and prefetching to the length of the feature size. Next, examining the temporal locality, we observe that the score stays in the range \(0.5-0.7\), indicating the potential performance benefit of caching mechanisms regardless of graph topology. In addition, we observe that anisotropic kernels show higher temporal locality than isotropic kernels because they perform more edge-wise operations.
## V Evaluation
### _Resource Utilization_
We first examine the resource utilization and clock frequency after place & route. FPGA resources include look-up tables (LUTs), flip-flops (FFs), BRAMs, and digital signal processors (DSPs). Table II shows these results. From the table, we observe that the frequency of all the kernels is lower than the target frequency, which is not unusual in FPGA designs. Among these kernels, GraphSage achieves a notably low frequency due to critical paths that could not be resolved by the tool. In addition, we observe that no resource on the FPGA is over-utilized.
### _Performance_
We next examine the performance improvement by showing the overall speedup, defined as the execution time of the GNN HLS kernels relative to DGL-CPU (using all 10 cores of the CPU), in Figure 5. Table III shows the execution times of the baselines and the HLS kernels. Note that GPU results for GAT, MN, and GGCN on PT could not be obtained because the runs exhausted GPU memory (OoM). Examining each kernel in Figure 5, we observe that the HLS implementations do not always outperform the corresponding CPU baselines: compared with DGL-CPU, the speedup ranges from \(0.47\times\) to \(50.8\times\).
Among the isotropic GNN kernels, GCN achieves better performance than GraphSage and GIN, with speedups ranging from \(1.08\times\) to \(1.98\times\), because its simpler structure enables us to create two CUs to leverage spatial data parallelism. In contrast, we can only create one CU each for GraphSage and GIN because of their complex structure and heavy resource usage. In addition, we observe that the execution times of GraphSage and GIN are close. Thus, we conclude that the structural differences between these two GNN models do not substantially affect the HLS implementation results.
Among the anisotropic kernels, MoNet achieves the highest performance improvement, ranging from \(6.04\times\) to \(50.8\times\), due to (1) its single-pipeline structure with computation-order optimization, where node-wise operations are placed after edge-wise operations, and (2) well-designed multi-headed modules with low II, especially the MHVMM, whose II is \(O(d+k)\) instead of \(O(dk)\). In spite of the 2-pipeline structure of GAT, it still achieves \(4.31\times\) to \(6.61\times\) speedup relative to the multi-core CPU baselines. In addition, since the feature size of GatedGCN is smaller, which favors the CPU baselines whose time complexity is \(O(d^{2})\), its speedup is not comparable to the other anisotropic kernels, ranging from \(0.5\times\) to \(1.16\times\).
Turning our attention to how the performance benefit of the HLS implementations varies across graph datasets, we observe that the speedup of isotropic kernels relative to DGL-CPU on regular-like graphs (i.e., MT and MH) is higher than on powerlaw-like graphs (i.e., AX and PT) because (1) the edge-wise operations are less computation-intensive than the node-wise operations in these kernels, making the baselines more computationally efficient on powerlaw-like graphs, which contain more edges than nodes; and (2) the edge-wise aggregation operations in the HLS implementations are executed sequentially without leveraging edge-level parallelism, making these HLS kernels less computationally efficient on powerlaw-like graphs. Distinct from the isotropic kernels, the speedup of anisotropic kernels on powerlaw-like graphs is higher than on regular-like graphs because the edge-wise operations of these kernels are more computation-intensive than those of isotropic kernels, making the baselines less efficient on powerlaw-like graphs.
Focusing on the second and third bars, we observe that DGL-GPU outperforms the HLS implementations in many cases, due to the high-performance fixed-function accelerators in the GPU. The speedup of the HLS kernels relative to the GPU baselines ranges from \(0.13\times\) to \(5.16\times\). In spite of the promising GPU performance, the GPU still has drawbacks compared with the HLS implementations. For the isotropic GNN models, DGL-GPU achieves lower speedup than HLS on small-scale graphs such as MT and AX; we speculate that the GPU is designed to achieve high throughput at the cost of latency, which matters more for small-scale graphs than for large-scale graphs. In addition, compared with the HLS implementations on the FPGA, the GPU is not well suited to executing anisotropic GNN models on large-scale, especially powerlaw-like, graphs (e.g., PT) due to (1) the non-trivial memory footprint caused by its sequential execution paradigm, which stores the intermediate results of edge-wise operations, and (2) insufficient memory capacity on the GPU board. This is why the anisotropic GNNs failed to execute on PT with the GPU; the HLS implementations avoid this issue because their pipeline structure does not store these intermediate results.
Since GenGNN [1] also discusses 3 of the GNN models included in this paper (GCN, GIN, and GAT), we can make a limited comparison of our GNN HLS implementations with theirs. The two are not directly comparable for a number of reasons: (1) the feature dimensions of our GNN HLS kernels are higher, (2) we use off-chip memory instead of on-chip memory, (3) our general-purpose GNN HLS kernels focus
more on throughput than on real-time latency, and (4) the FPGAs are from the same family but are not the same part. The performance of our HLS kernels exceeds that of GenGNN, achieving overall speedups of \(35\times\), \(5\times\), and \(6\times\) for GCN, GIN, and GAT on MT, respectively.
### _Optimization Techniques_
As described in Section II, we apply multiple optimization techniques to the HLS kernels. To evaluate the efficacy of these techniques, we use GraphSage on MT as a case study. Table IV presents the execution time of GraphSage with the optimization techniques applied cumulatively: the reported execution time for each technique reflects the combined effect of that technique and all techniques listed above it in the table. In the table, No Pragma means we do not intentionally apply any pragmas to the HLS code, except for those automatically applied by Vitis (i.e., Pipeline, Loop Merge, and memory optimizations). Dataflow means we apply the dataflow pragma and FIFO streams to exploit the task-level parallelism of each application. Loop Unroll means we apply loop unroll pragmas to completely or partially unroll for loops, keeping the II as low as possible while exploiting instruction-level parallelism. Vectorization means using vector data types to widen the FIFO streams and the corresponding operations, decreasing the cost of FIFO accesses. Split Loops means splitting the outermost node loop and placing it inside each function connected by streams, further improving the FIFO properties inferred from loop indices.
We observe that Loop Unroll achieves the highest performance improvement; exploiting instruction-level parallelism is therefore still the primary choice for GNN HLS optimization. To improve performance further, exploiting task-level parallelism is necessary. Focusing on the first and second rows of the table, we observe that applying the dataflow pragma and streams even in a naive way obtains a \(1.99\times\) performance improvement. Applying Vectorization and Split Loops as complements to Dataflow improves performance by a further \(2.5\times\) and \(3.9\times\), respectively. After applying all the optimization techniques together, the performance of GraphSage is improved by \(132\times\).
### _Energy Consumption_
We next present a quantitative analysis of the energy consumption. Figure 6 displays the energy reduction of both DGL-GPU and the HLS implementations relative to DGL-CPU on a logarithmic scale. Energy reduction is calculated as the energy consumption of DGL-CPU divided by that of DGL-GPU or HLS. Examining the final bar of each application and dataset, we observe that the HLS implementations consume less energy than the CPU and GPU baselines in all cases. The energy reduction ranges from \(2.95\times\) to \(423\times\) relative to DGL-CPU and from \(2.38\times\) to \(74.5\times\) relative to DGL-GPU. This is because of the low power of the FPGA logic, the low clock frequency, and the efficient pipeline structure of the HLS implementations.
Fig. 5: Speedup of HLS kernels relative to DGL-CPU. The higher the better.
Focusing on the first and last bars, we observe a tendency in energy reduction similar to that in performance: for isotropic GNN models, denser graphs result in lower energy reduction, whereas for anisotropic GNN models, denser graphs result in higher energy reduction. This leads us to conclude that improving GNN applications in general will require some degree of graph-topology awareness.
## VI Conclusions
In this paper, we propose GNNHLS, an open-source framework to comprehensively evaluate GNN inference acceleration on FPGAs via HLS. GNNHLS consists of a software stack for data generation and baseline deployment, and 6 well-tuned GNN HLS kernels. We characterize the HLS kernels in terms of instruction mix and memory locality scores, and evaluate them on 4 graph datasets with various topologies and scales. Results show up to \(50.8\times\) speedup and \(423\times\) energy reduction relative to the multi-core CPU baselines. Compared with GPU baselines, GNNHLS achieves up to \(5.16\times\) speedup and \(74.5\times\) energy reduction. In the future, we will extend GNNHLS to more GNN models and graph datasets. It can also serve as a benchmark or baseline for HLS researchers exploring the potential of HLS tools for GNN inference acceleration. GNNHLS has been released for use as a benchmark suite [28].
## Acknowledgment
This work is supported by NSF under grants CNS-1739643 and CNS-1763503 and a gift from BECS Technology, Inc. The authors are grateful for the use of the Open Cloud Testbed [13] as an experimentation platform.
|
2309.04426 | Advanced Computing and Related Applications Leveraging Brain-inspired
Spiking Neural Networks | In the rapid evolution of next-generation brain-inspired artificial
intelligence and increasingly sophisticated electromagnetic environment, the
most bionic characteristics and anti-interference performance of spiking neural
networks show great potential in terms of computational speed, real-time
information processing, and spatio-temporal information processing. Data
processing. Spiking neural network is one of the cores of brain-like artificial
intelligence, which realizes brain-like computing by simulating the structure
and information transfer mode of biological neural networks. This paper
summarizes the strengths, weaknesses and applicability of five neuronal models
and analyzes the characteristics of five network topologies; then reviews the
spiking neural network algorithms and summarizes the unsupervised learning
algorithms based on synaptic plasticity rules and four types of supervised
learning algorithms from the perspectives of unsupervised learning and
supervised learning; finally focuses on the review of brain-like neuromorphic
chips under research at home and abroad. This paper is intended to provide
learning concepts and research orientations for the peers who are new to the
research field of spiking neural networks through systematic summaries. | Lyuyang Sima, Joseph Bucukovski, Erwan Carlson, Nicole L. Yien | 2023-09-08T16:41:08Z | http://arxiv.org/abs/2309.04426v1 | # Advanced Computing and Related Applications
###### Abstract
In the rapid evolution of next-generation brain-inspired artificial intelligence and increasingly sophisticated electromagnetic environment, the most bionic characteristics and anti-interference performance of spiking neural networks show great potential in terms of computational speed, real-time information processing, and spatio-temporal information processing. Data processing. Spiking neural network is one of the cores of brain-like artificial intelligence, which realizes brain-like computing by simulating the structure and information transfer mode of biological neural networks. This paper summarizes the strengths, weaknesses and applicability of five neuronal models and analyzes the characteristics of five network topologies; then reviews the spiking neural network algorithms and summarizes the unsupervised learning algorithms based on synaptic plasticity rules and four types of supervised learning algorithms from the perspectives of unsupervised learning and supervised learning; finally focuses on the review of brain-like neuromorphic chips under research at home and abroad. This paper is intended to provide learning concepts and research orientations for the peers who are new to the research field of spiking neural networks through systematic summaries.
## 1 Introduction
Human beings, widely considered to be the most intelligent creatures on this blue planet, possess complicated biological neural networks within their brains that exhibit remarkable efficiency and robustness in handling various tasks, from simple reflex actions to advanced problem-solving and decision-making. It is generally believed that the human brain consumes a mere 25 watts of power. This has inspired many scholars to devote themselves to brain science research, delving into the workings of biological neural networks in the human brain and simulating the way the brain processes and remembers information for pattern recognition and intelligent control\({}^{\left[1\right]}\). In the last century, the "Human Brain Project" was officially launched in the United States of America. On April 2, 2013, US President Obama announced the launch of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. Subsequently, the European Union's brain science program (the Human Brain Project, HBP) and Japan's brain science program (Brain Mapping by Integrated Neurotechnologies for Disease Studies, MINDS) were also launched. In 2018, the People's Republic of China officially released its Brain Project. China's Brain Project aims to explore the cognitive principles of the brain and gradually apply brain science principles to medical diagnosis and brain-like intelligence. On one hand, it explores treatment plans for brain diseases and develops medical equipment; on the other hand, it develops new technologies related to brain-like artificial intelligence\({}^{\left[2\right]}\).
The human brain is an incredibly complex organ, often compared to a vast network: nearly 100 billion neurons and their synaptic connections communicate with each other through electrical and chemical signals, allowing us to think, feel, and interact with the world around us. Like a tree with branches reaching out to receive information from other neurons, a neuron in the network receives information through its dendrites and passes it on to the next neuron through structures such as axons and synapses. Artificial neural networks (ANNs) mimic this process of information transmission in the human brain to perform calculations and are applied in many fields of technology and life, including speech processing, computer vision, and natural language processing. However, traditional ANNs still differ significantly from real biological neural networks: the inputs and outputs of traditional ANNs are real numbers, while the form of information transmission in the human brain is discrete action potentials, or pulses. The spiking neural network (SNN) proposed by Maass in 1997 fully mimics biological neural networks by transmitting information in the form of pulses\({}^{\left[3\right]}\). As essentially binary events, pulses play an important role in improving processing efficiency and reducing energy consumption. However, the discreteness of information transmission makes the implementation of SNN algorithms relatively difficult. Therefore, some scholars have designed learning algorithms based on the principles and characteristics of SNNs, while others have devoted themselves to applying existing ANN algorithm concepts more effectively to SNNs.
Traditional ANN algorithms have hardware acceleration platforms such as central processing units (CPUs) and graphics processing units (GPUs); efficient application of SNN algorithms likewise requires hardware support. In recent years, many research institutions and companies have produced hardware results for SNNs. Brain-like neuromorphic chips have emerged with great development prospects and the potential to truly achieve brain-like artificial intelligence. In the past decade, artificial neural networks have also made great progress in production and daily life. People have applied these networks to the prediction of stock markets, weather, electricity consumption, and other application scenarios, and have also applied neural algorithms to optical structure optimization [4, 5], sensor demodulation and calibration [6, 7, 8], communication [9, 10, 11], etc., which has greatly enriched the uses of neural networks. Meanwhile, researchers are working on accelerating neural network algorithms at the hardware level and developing new specialized chips. The demand for high performance, high energy efficiency, and greater bandwidth in neuromorphic computing is endless. As the exponential growth of electronic transistors marked by Moore's Law gradually approaches its physical limit, traditional silicon-based electronic components have reached a bottleneck. Many new components have been proposed to meet the needs of neuromorphic computing, such as photonic computing chips [12, 13, 14, 15, 16, 17, 18], memristors [19, 20, 21], phase change memory (PCM) [22, 23, 24, 25], and nanoelectronics/spintronics-assisted computing devices [26, 27, 28, 29, 30, 31, 32, 33]. These are all innovative devices with high processing speed, huge storage capacity, and good long-term stability that can better support efficient neuromorphic computing.
This article reviews SNNs from the perspective of their strong biomimicry, high efficiency, and low energy consumption, covering neuron models and network topologies through learning algorithms and SNN brain-like neuromorphic chips. Like a map guiding the search for models and networks with more biological characteristics, it first summarizes the advantages and disadvantages of five spiking neuron models and the characteristics of five SNN network topologies. Then, from the perspective of learning methods, it reviews unsupervised learning algorithms as well as four types of supervised learning algorithms based on synaptic plasticity, backpropagation, convolution, and ANN-to-SNN weight conversion. Finally, it reviews SNN brain-like neuromorphic chips built on two circuit structures: analog-digital hybrid circuits and digital circuits.
## 2 Spiking Neural Network
### Spiking Neuron Model
The most basic element in an SNN is the spiking neuron (SN), whose two decisive variables are the membrane potential and the activation threshold. Whether a neuron fires is closely related to these two variables: if the membrane potential of a neuron in the network reaches the activation threshold, the neuron emits a spike that is transmitted to the next neuron through synapses. A large number of neurons work together to form a network for systematic learning. The models most commonly applied in SNN network construction are the Hodgkin-Huxley (HH) model, the integrate-and-fire (IF) model, the leaky integrate-and-fire (LIF) model, the spike response model (SRM), and the Izhikevich model. The HH model can accurately represent the dynamic changes of ion channels, but it requires separate modeling of the sodium, potassium, and leakage channels; its expression is complex and its calculation cumbersome. The LIF model is currently the most commonly used model in SNN networks: although it ignores the dynamics of ion channels and only reflects the macroscopic changes of the membrane potential, it is simple to compute.
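To make the LIF dynamics concrete, the following minimal sketch (our own illustration, not taken from any of the works cited here) simulates a discretized leaky integrate-and-fire neuron in Python; the time constant `tau_m`, threshold `v_th`, and reset value `v_reset` are illustrative assumptions.

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau_m=20.0, v_rest=0.0,
                 v_reset=0.0, v_th=1.0, r_m=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    dV/dt = (-(V - v_rest) + r_m * I(t)) / tau_m ; a spike is emitted
    and V is reset whenever V crosses the threshold v_th.
    """
    v = v_rest
    potentials, spikes = [], []
    for i_t in input_current:
        # Euler step of the leaky membrane equation
        v += dt * (-(v - v_rest) + r_m * i_t) / tau_m
        fired = v >= v_th
        spikes.append(1 if fired else 0)
        potentials.append(v)
        if fired:
            v = v_reset  # hard reset after the spike
    return np.array(potentials), np.array(spikes)

# A constant supra-threshold current produces regular spiking.
V, S = lif_simulate(np.full(200, 1.5))
print("number of spikes:", S.sum())
```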
In addition to these five commonly used spiking neuron models, in recent years some scholars have combined biological mechanisms with mathematical properties to explore spiking neuron models that have both biological and computational appeal. For example, Zuo et al. proposed a probabilistic spike response model: its firing does not depend on the difference between the threshold and the membrane voltage, nor on the shape of the spike; instead, it reconstructs the relationship between membrane voltage and firing probability and transmits information probabilistically. On the hardware side, spintronics is a promising platform for solid-state device technologies, especially for neural computing applications[34, 35, 36, 37, 38]. It offers fast and virtually unlimited information-writing operations at standard CMOS-compatible voltages and can store its state without power. MRAMs utilizing spin-transfer torque (STT)-induced magnetization switching are about to hit the mass market[39]. These features hold promise for high-performance, low-power artificial neural networks that are adaptive and robust. Spintronic devices can represent digital information as their magnetization direction, but they can also handle analog information through their magnetic domain structures, providing opportunities for spintronic synapses in artificial neural networks. In summary, spintronics offers an attractive platform
for high-performance, low-power, and adaptive artificial neural networks as well as logic computation[40]. For spintronic devices to be feasible, an efficient scheme for electrically controlling magnetization is necessary. One leading example is STT-induced magnetization switching in magnetic tunnel junctions, which has enabled the successful development of STT-MRAMs. Recent studies have also shown that spin-orbit torque (SOT) provides a promising scheme for inducing magnetization switching and random number generation[41]. SOT arises when an in-plane current is introduced into magnetic crystals without centrosymmetry, or into magnetic heterostructures with broken space-inversion symmetry and sizable spin-orbit interaction[42, 43, 44]. The origin of SOT in heterostructures is still debated and may vary between systems.
### Topological Structure of Spiking Neural Network
Multiple spiking neurons are connected through synapses to form large-scale SNN networks, and different connection patterns determine the type of network topology. As with traditional ANNs, SNN topologies can be divided into feedforward SNNs, recursive SNNs, recurrent SNNs, evolutionary SNNs, and hybrid SNNs. A feedforward SNN consists of an input layer, one or more hidden layers, and an output layer, each composed of one or more neurons; neurons are connected through multiple synapses with dynamically adjustable weights that transmit signals from one layer to the next. Recursive SNNs and recurrent SNNs both contain feedback loops and better reflect the connectivity of real neurons, but they increase the difficulty of algorithm design. Evolutionary SNNs are adaptive and self-organizing and can dynamically adjust the number of neurons according to the characteristics of the samples. Hybrid SNNs contain diverse local structures, including feedforward and recursive parts. At present, considering the complexity of implementing topological structures and the speed of parameter updates, feedforward SNNs are used most. Moreover, the choice of network topology affects the choice of SNN learning algorithm.
## 3 Learning algorithm of Spiking Neural Network
After the network is constructed, SNN learning algorithms are required to train on data. According to their implementation, SNN learning algorithms can be divided into two categories: unsupervised learning
algorithms and supervised learning algorithms. The core of unsupervised learning algorithms is the spike-timing-dependent plasticity (STDP) rule; supervised learning algorithms fall into two categories according to their design ideas. One category revolves around the STDP rule, and the other borrows ideas from ANNs, such as backpropagation (BP) and convolution, for direct or indirect training. Unsupervised learning algorithms built on the STDP rule reflect biological characteristics well, but STDP is local in nature: each layer adapts only to the output of the previous layer and cannot coordinate the whole network, so it is ill-suited to multi-layer structures and yields low classification accuracy. Many scholars have addressed classification problems by modifying the STDP learning rule, combining convolutional networks to extract deep features, and adding supervised tasks during training, thus producing a category of supervised learning algorithms based on the STDP rule. The other category of supervised learning algorithms is based on ANN ideas. The difficulty of direct training based on BP lies in handling the non-differentiability of neuron equations, gradient explosion, and overfitting; direct training based on convolution focuses mainly on the selection and optimization of convolution kernels; indirect training first trains an ANN and then converts it to an SNN through normalization and other methods. In this section, we provide a detailed overview of unsupervised and supervised learning algorithms.
### Unsupervised learning algorithms
Song et al. proposed the STDP learning rule based on the Hebbian learning rule. STDP adjusts the strength of connections between neurons according to the relative timing of their firing: for any two connected neurons, if the presynaptic neuron fires before the postsynaptic neuron, the connection strength increases; otherwise, it decreases. Studies have shown that shallow SNNs built around unsupervised STDP perform far worse on classification tasks than traditional ANNs such as convolutional neural networks (CNNs). CNNs are widely used in pattern recognition, image classification, and other fields, and deep CNNs in particular excel at extracting key features. However, the weight connectivity of CNNs has no biological basis. Scholars therefore tend to combine STDP rules with CNNs to leverage both the computational accuracy of CNNs and the computational efficiency of SNNs more comprehensively. For example, Lee et al. proposed a deep spiking convolutional
neural network (SpiCNN) that uses an unsupervised method based on STDP to train two convolutional layers. Srinivasan et al. proposed a probabilistic STDP-based learning algorithm called hybrid-STDP (HB-STDP), which combines STDP and anti-STDP learning mechanisms to train a residual stochastic multilayer convolutional spiking neural network (ReStoCNet) composed of binary kernels in a hierarchical, unsupervised manner. Overall, unsupervised SNN learning algorithms that combine convolution with the STDP rule can exploit the characteristics of both CNNs and SNNs, offering computational efficiency together with biological plausibility.
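To make the pair-based STDP rule of this section concrete, here is a minimal sketch (our own; the constants `a_plus`, `a_minus`, and `tau` are illustrative assumptions) that potentiates a synapse when the presynaptic spike precedes the postsynaptic one and depresses it otherwise.

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change as a function of spike timing.

    If the presynaptic spike precedes the postsynaptic spike
    (t_post >= t_pre), the synapse is potentiated; otherwise depressed.
    """
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * np.exp(-dt / tau)   # potentiation (LTP)
    return -a_minus * np.exp(dt / tau)      # depression (LTD)

# Accumulate the rule over a few pre/post spike pairings.
w = 0.5
for t_pre, t_post in [(10.0, 15.0), (30.0, 28.0), (50.0, 51.0)]:
    w += stdp_dw(t_pre, t_post)
print("final weight:", w)
```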
### Supervised learning algorithms
#### 3.2.1 Supervised learning algorithms based on STDP
The STDP learning rule was first applied to unsupervised learning algorithms[45, 46], but unsupervised learning is only suitable for clustering problems and has limited applicability. Ponulak and colleagues therefore modified the STDP learning rule and proposed the remote supervised method (ReSuMe), which combines the STDP rule with remote supervision to minimize the difference between output and target spike trains without computing gradients. Taherkhani et al. used STDP, anti-STDP, and delay learning rules to learn the parameters of the hidden and output layers in parallel, letting weight and delay learning interact; this greatly improves the accuracy of the algorithm while remaining biologically plausible. In addition, inspired by the role of neuromodulators such as dopamine and acetylcholine in STDP regulation, researchers have proposed reward-modulated STDP (R-STDP) rules. Mozafari et al. applied both STDP and R-STDP rules in a deep convolutional SNN, using STDP in the first layer and R-STDP in the latter two layers, achieving a recognition rate of 97.2% on the MNIST dataset. Supervised learning algorithms based on the STDP rule improve the accuracy of classification tasks while maintaining biological plausibility.
#### 3.2.2 Direct and indirect learning algorithms based on ANNs
(1) Direct training algorithms based on backpropagation: backpropagation and gradient descent are important tools for achieving optimization in neural networks, and using suitable backpropagation and gradient descent ideas for weight updates can address the non-differentiability of SNNs. Bohte et al. first applied
backpropagation and gradient descent to SNNs, introducing the supervised SpikeProp algorithm, which computes the gradient descent according to the minimum-error principle and updates the synaptic weights to reach an optimal solution. Because backpropagation has its roots in traditional learning algorithms, several problems arise: gradient issues make the learning process inefficient; the global error information incorporated into learning lacks biological support; and improving accuracy requires more hidden layers, yet too many hidden layers cause overfitting, making learning less robust to interference. Hong et al. proposed an improved SpikeProp learning algorithm with a spike gradient threshold rule to tackle the gradient explosion problem in SNN training, adding rules for adjusting the firing rate and connection weights to control network activity during training. Scholars have also incorporated various factors into backpropagation, such as axonal delays, local propagation, spatio-temporal synergy, macro-micro diversity, approximate activation functions, and surrogate gradients, gradually improving the classification accuracy of these algorithms. Under optimal conditions, the best reported classification accuracy on the MNIST dataset reaches 99.49%.
(2) Direct training algorithms based on convolution: the Widrow-Hoff rule is one of the commonly used weight adjustment algorithms in linear neural networks. It is suited to analog signals, but spike trains are discrete signals, so the Widrow-Hoff rule cannot be applied directly. Scholars therefore use convolution to give discrete data continuous features: by applying a convolution kernel, the elements of a spike vector are transformed into continuous functions. The spike pattern association neuron (SPAN) algorithm proposed by Mohemmed et al. converts spike trains (the input spike train and the neuron's target and actual output spike trains) into continuous signals through a kernel function and then applies the Widrow-Hoff rule to adjust the synaptic weights. The precise-spike-driven (PSD) algorithm proposed by Yu et al. uses the kernel function only to convert the input spike train into a convolved signal; the error between the target and the actual output spikes then drives synaptic adaptation. The core of convolution-based algorithms lies in the choice of the convolution kernel: Lin Xianghong et al. proposed the spike train kernel learning rule (STKLR) and tested various kernel functions within it.
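The conversion step these convolution-based rules rely on can be sketched as follows: a discrete spike train is smoothed with a causal exponential kernel, and the resulting continuous traces drive a Widrow-Hoff-style weight update. The kernel choice, constants, and spike statistics below are our own illustrative assumptions, not those of any specific algorithm above.

```python
import numpy as np

def smooth(spike_train, tau=10.0):
    """Convolve a binary spike train with a causal exponential kernel."""
    t = np.arange(len(spike_train))
    kernel = np.exp(-t / tau)
    return np.convolve(spike_train, kernel)[:len(spike_train)]

rng = np.random.default_rng(0)
s_in = (rng.random(100) < 0.1).astype(float)      # input spike train
s_target = (rng.random(100) < 0.1).astype(float)  # desired output train
s_actual = (rng.random(100) < 0.1).astype(float)  # actual output train

# Widrow-Hoff update on the smoothed (continuous) traces:
# dw = lr * sum_t [ (target(t) - actual(t)) * input(t) ]
lr = 1e-3
dw = lr * np.sum((smooth(s_target) - smooth(s_actual)) * smooth(s_in))
print("weight change:", dw)
```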
(3) Indirect training algorithms based on ANNs: ANN development has matured, and ANNs are widely used in image recognition, target recognition, autonomous driving, bioinformatics, and other fields. Applying mature traditional ANN algorithms indirectly to SNNs may therefore yield good results. Many scholars thus no longer train SNN parameters directly but instead transfer the parameters trained in an ANN to an SNN with the same structure. To achieve near-lossless ANN-to-SNN conversion, certain constraints must be imposed on the original ANN model, such as normalizing different layers of the network. Sengupta et al. weighted and normalized the maximum input received by each layer, improving the recognition rate of the algorithm in classification tasks. Srinivasan et al. proposed an ANN-to-SNN conversion method that first trains the ANN under constraints (removing batch normalization layers and bias neurons), then transfers the trained weights to the SNN, and finally trains the SNN using backpropagation based on an approximate derivative of IF neurons. The advantage of indirect ANN-to-SNN training is that the weight training methods are relatively mature, and the classification performance on traditional datasets is good, reaching deep learning levels. However, constraining the ANN degrades the performance of the resulting SNN, converted SNNs require long time-step simulations, and the efficiency is much lower than that of direct training. In addition, since the feature extraction step of indirect training is completed in the ANN, it is difficult to extract the temporal characteristics of the input information, making the approach unsuitable for classifying spatiotemporal data.
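As one concrete (and deliberately simplified) illustration of the normalization constraints used in ANN-to-SNN conversion, the sketch below rescales each layer of a ReLU MLP by the maximum activation recorded on calibration data, so that post-conversion activations map onto firing rates below one; the exact scaling schemes in the works cited above differ in detail.

```python
import numpy as np

def convert_ann_to_snn(weights, calib_inputs):
    """Layer-wise max-activation normalization (simplified sketch).

    weights: list of (out_dim, in_dim) matrices of a ReLU MLP.
    calib_inputs: batch of training inputs used to record activations.
    """
    snn_weights = []
    a = calib_inputs
    prev_scale = 1.0
    for w in weights:
        z = np.maximum(a @ w.T, 0.0)            # ReLU pass pre-conversion
        scale = z.max() if z.max() > 0 else 1.0
        # Rescale so that activations (future firing rates) stay in [0, 1].
        snn_weights.append(w * prev_scale / scale)
        prev_scale = scale
        a = z
    return snn_weights

rng = np.random.default_rng(1)
ws = [rng.standard_normal((16, 8)), rng.standard_normal((4, 16))]
ws_snn = convert_ann_to_snn(ws, rng.standard_normal((32, 8)))
print([w.max() for w in ws_snn])
```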
Brain-like SNNs show great potential in processing sparse and discrete data: they can process encoded image information and can also mine the characteristics of spatiotemporal data such as speech and EEG along the time dimension. At present, most SNN algorithms are still implemented on CPU and GPU processors; such computing platforms, with separate data-processing and storage modules, cannot exploit the high parallelism and fast computation of SNNs. Over the past ten years, a series of dedicated SNN hardware computing platforms have therefore emerged, becoming a major branch of brain-inspired computing. A brain-inspired neuromorphic computing platform based on SNNs must satisfy the following: a sparse, event-driven nature, i.e., information is transmitted as spikes; the ability to realize complex dynamic functions, such as neural cores composed
of neurons and synapses and STDP learning rules; and large-scale parallel connections between neural cores, communicating through a network-on-chip (NoC). Brain-like neuromorphic computing platforms mainly implement information input, weight storage, information weighting, and spike distribution by imitating axons, synapses, dendrites, and cell bodies, and they realize data transfer between different computing cores through configurable routing. Existing SNN neuromorphic computing platforms can be divided, by circuit technology, into digital-analog hybrid platforms and all-digital platforms. Analog circuits can accurately reproduce the dynamic characteristics of neurons and realize relatively complex dynamic models, but they are easily affected by external factors and offer weak programmability; many studies therefore prefer digital-analog hybrid or purely digital circuits. As for hardware materials, silicon transistors operating in the sub-threshold or super-threshold regime are usually used, with implementation technologies including complementary metal-oxide-semiconductor (CMOS) technology and fully depleted silicon-on-insulator (FD-SOI) technology [47, 48, 49, 50]. Brain-like neuromorphic computing platforms are markedly superior to other hardware systems in volume and energy consumption and are expected to address the failure of Moore's Law and the limitations of the von Neumann architecture in the future.
## 4 Spiking neural network digital-analog hybrid computing platform
In a digital-analog hybrid computing platform, the analog circuitry can directly express the dynamic characteristics of neurons and implement the functions of neurons and synapses, while the routing part, which must provide stable data transmission, is usually implemented with digital circuits for their stability and reliability. Stanford University's Neurogrid system is the most typical digital-analog hybrid brain-like neuromorphic platform: a million-neuron neuromorphic system composed of 16 chips that consumes only 3.1 W and uses transistors operating in the subthreshold range. The 16 chips communicate through a tree routing network, which maximizes the number of synaptic connections. Braindrop is another brain-like neuromorphic chip from Stanford University, which also adopts a digital-analog
hybrid design. Comparing the two, programming the Neurogrid system at the synaptic level requires hardware expertise, whereas Braindrop adopts a coupled nonlinear dynamics computation method integrated into the hardware through an automated procedure, providing a highly abstract programming interface that lowers the technical requirements for users. In the future, Stanford plans to integrate multiple Braindrop cores to build larger Brainstorm chips. ROLLS also uses sub-threshold digital-analog hybrid circuits to realize neuron and synapse dynamics. Its network scale is slightly smaller, but it models synaptic learning mechanisms more faithfully: it realizes a plasticity mechanism based on bistable spikes and can undergo long-term potentiation (LTP) or long-term depression (LTD). In addition, ROLLS can update synaptic connection strengths in real time to enable on-chip online learning, with an energy consumption of only 4 mW. Unlike the above three, BrainScaleS uses super-threshold digital-analog hybrid circuits for neuron dynamics simulation and can realize both short-term depression/facilitation and STDP learning rules. In 2018, the second generation of the BrainScaleS system (BrainScaleS-2 for short) was launched. BrainScaleS-2 uses a complex model supporting nonlinear dendrites and structured neurons and adds a hybrid plasticity scheme. Compared with the fixed STDP-based learning in BrainScaleS, the learning algorithms in BrainScaleS-2 can be freely programmed in software and executed on an embedded microprocessor, supporting both SNN algorithms and traditional ANN algorithms. The DYNAPs neuromorphic processing system and the DYNAP-SEL chip designed by the University of Zurich also use super-threshold digital-analog hybrid circuits. DYNAPs and DYNAP-SEL adopt a two-level routing scheme to minimize memory usage, combining 2D grid and tree routing: chips communicate through 2D grid routing, neural processing cores communicate through tree routing, and point-to-point source address routing is combined with multicast destination address routing. This new routing scheme suits emerging memory technologies such as resistive random-access memory (RRAM) and phase-change memory (PCM). The Neurogrid system uses a dendrite-sharing structure and a multicast tree router: adjacent neurons in the same layer share the same input, and neurons at corresponding positions in different layers have translation-invariant connections, which maximizes throughput; however, Neurogrid does not implement synaptic plasticity and cannot be adjusted on-chip. BrainScaleS can implement STDP rules, and on this basis the BrainScaleS-2 sys
tem adds a hybrid plasticity solution: through software-hardware collaboration, parameters can be adjusted on-chip. DYNAPs can implement STDP rules and on-chip learning, and its combination of hierarchical and mesh routing in the communication scheme improves the efficiency of information transmission. ROLLS can implement a variety of synaptic plasticity rules and a variety of network structures (feedforward, recurrent), but it is too small to meet the needs of large-scale networks.
## 5 Spiking neural network all-digital computing platform
The inherent heterogeneity and variability of analog circuits make them difficult to program at the granularity of individual neurons and synapses, whereas all-digital implementations can flexibly adjust the SNN structure and parameters through compatible programming software. On the all-digital side, major companies such as IBM and Intel, as well as leading universities such as the University of Manchester, Tsinghua University, and Zhejiang University, have achieved outstanding results.
IBM has been working on neuromorphic processor research since 2008 and has produced two notable achievements, the Golden Gate chip and the TrueNorth processor. In 2018, IBM released the multi-core NS16e-4 TrueNorth system, composed of TrueNorth processors and containing 64 million neurons and 16 billion synapses. The TrueNorth series of neuromorphic processors has been applied to various complex tasks, such as dynamic image recognition in drone or autonomous driving missions, biomedical image segmentation, and EEG signal classification. The SpiNNaker system at the University of Manchester contains up to 1,036,800 advanced RISC machine (ARM) processors and 7 TB of off-chip dynamic random access memory (DRAM); the number of neurons it can simulate is about 1% of the human brain. The team plans to expand the neuron scale to simulate the entire human brain and is developing the second-generation SpiNNaker system (SpiNNaker2 for short). The SpiNNaker2 system plans to add dynamic power management, memory sharing, multiply-accumulate accelerators, neuromorphic accelerators, an on-chip network, and other functions on the basis of the first generation [51, 52, 53].
Compared to TrueNorth and SpiNNaker, ODIN is a small online digital neuromorphic chip that can implement IF neurons and 20 different Izhikevich firing patterns. MorphIC is the second version of the neuromorphic chip proposed
by the same team; it surpasses ODIN in scale and adopts a stochastic version of the STDP rule and a hierarchical routing structure, which improves accuracy on practical tasks. Loihi is a digital neuromorphic processor released by Intel[54] that specializes in implementing various synaptic learning rules: it supports not only simple pairwise STDP rules but also complex triplet STDP rules[55], reinforcement learning rules with synaptic tag assignment, and STDP rules using average rates and spike-timing traces. On this basis, the Pohoiki Beach system, equipped with 64 Loihi chips and able to simulate more than 8.03 million neurons, and the Pohoiki Springs system, equipped with 768 Loihi chips and able to simulate more than 100 million neurons, have basically taken shape.
The Darwin neural processing unit, jointly developed by Zhejiang University and Hangzhou Dianzi University, supports a configurable number of neurons, synapses, and synaptic delays and is a highly configurable neuromorphic chip. Tianjic, proposed by Tsinghua University's Brain-Inspired Computing Research Center, is the first heterogeneous-fusion neuromorphic computing chip that supports both computer-science-based machine learning algorithms and neuroscience-based biologically inspired models. Tianjic can freely integrate various neural networks and hybrid coding schemes, allowing seamless communication between multiple networks (including SNNs and ANNs). The team has built on Tianjic[56] an SNN for voice command recognition, CNNs for image processing and target detection, a continuous attractor neural network (CANN) for human target tracking, a long short-term memory network (LSTM) for natural language recognition, and a multi-layer perceptron (MLP)[57] for attitude balance and direction control. Tianjic can solve the hardware incompatibility between computational ANNs and brain-like SNNs and promotes the application of SNNs to practical problems. The neural core parameters and connection methods of the TrueNorth neuromorphic processor are highly configurable, and the exact software-hardware correspondence allows the same program to run on both the simulator and the chip; however, parameter updates can only be performed in software and cannot be learned on-chip. The SpiNNaker and Loihi processors support on-chip adjustment of neurons, synaptic parameters, and learning rules; Loihi in particular can configure many parameters such as synaptic delay, adaptive thresholds, random noise, and hierarchical neuron connections. The ODIN and Darwin processors are single-chip and small in scale, but ODIN implements a variety of neuron models and
achieves the highest reported density of neurons and synapses, while the Darwin processor is highly configurable and can meet the needs of practical tasks. Tianjic is characterized by the idea of heterogeneous fusion: it can integrate various neural networks and realize communication between different networks.
## 6 Conclusion
This paper summarizes five neuron models commonly used in SNN network construction, namely the HH, IF, LIF, SRM, and Izhikevich models, and analyzes the circuits, mathematical forms, and advantages and disadvantages of the five models. It also covers five network topologies, namely feedforward SNNs, recursive SNNs, recurrent SNNs, evolutionary SNNs, and hybrid SNNs, and summarizes their characteristics. On this basis, SNN learning algorithms and SNN neuromorphic computing platforms are reviewed: first, from the perspectives of unsupervised and supervised learning, several directions in the implementation and improvement of SNN algorithms in recent years are summarized; then, large-scale SNN neuromorphic computing platforms are surveyed and their advantages and disadvantages compared. From this analysis of current research progress, it is clear that SNNs, as a new generation of neural networks, are still immature in both algorithms and computing platforms and are in a stage of rapid development, facing many challenges, open problems, and development trends, which may include the following aspects:
(1) In terms of SNN neuron models and network structures: most current SNN networks are based on the five neuron models above, especially the LIF model. When choosing a neuron model, researchers mainly weigh two aspects: the computational cost of the model and its degree of biological fidelity; the LIF model currently balances these two requirements best. However, LIF neurons only reflect the leakage, accumulation, and threshold-firing behavior of the membrane potential, which is far from the firing characteristics of real neurons. Adding more biological characteristics while preserving computational speed is therefore a future direction. There are many types of SNN network structures, but practical applications are limited to feedforward networks. Although more complex networks may affect computational efficiency, their role in improving accuracy must still be considered, so network construction should consider adding mechanisms such as loops and
feedback.
(2) In terms of SNN learning algorithms: learning algorithms are the lifeblood of network update iterations, and the current applications of SNNs in pattern recognition and target detection still lag far behind those of traditional ANNs. Unsupervised learning algorithms based on the STDP rule can reflect the behavior of neurons and synapses in the brain, but their potential for large-scale tasks remains to be explored. The several types of supervised learning algorithms, designed mainly from the perspectives of backpropagation and convolution, can reach the accuracy of traditional ANNs. The SNN algorithm field still faces many challenges: how to apply STDP-based learning in deep networks to meet the needs of recognition tasks; how to handle the non-differentiability of neuron models in backpropagation-based algorithms and solve the associated overfitting and robustness problems; and how to ensure that ANN-to-SNN conversion loses no classification accuracy. Beyond this, truly applying SNNs to classification and detection tasks is the most urgent problem; the characteristics of SNNs are especially well suited to spatiotemporal data such as dynamic visual information, audio and video, EEG, and ECG, and this potential for dynamic information remains to be tapped.
(3) In terms of the SNN neuromorphic computing platform: SNN neuromorphic platforms provide new ideas for addressing the failure of Moore's Law and the low energy efficiency of the von Neumann architecture, with its separation of computing and storage, but they still face many problems. The first is ensuring efficient communication within a neuron core, between neuron cores on a single chip, and between chips; the choice of routing scheme affects the efficiency of information transmission, so an appropriate communication scheme must be selected. The second is achieving a high degree of configurability of system parameters: most chips have not yet realized diverse configurable parameters, whether macro-level structures such as neuron models, network topologies, and learning rules, or micro-level adjustments such as synaptic delay, adaptive thresholds, and random noise. The third is maximizing the efficiency and energy-consumption advantages of neuromorphic chips to achieve on-chip learning. The fourth is realizing on-chip ANN-to-SNN conversion or parallel computing of ANNs and SNNs, making the chips general-purpose. Overall, SNNs are an important inspiration drawn from biological intelligence and will become a fundamental basis for realizing brain-like artificial intelligence. From this survey of SNN neuromorphic processors, current processors must cooperate with software to
update parameters, the main learning functions still need to be implemented in software, and most processors cannot yet handle complex tasks. However, with the rapid development of artificial intelligence, the accuracy of SNN algorithms is gradually improving, and the functionality of SNN neuromorphic chips is gradually increasing. SNNs are expected to be widely used in pattern recognition, target detection, and other fields, and their potential for processing spatiotemporal data will gradually be tapped. |
2303.18083 | Analysis and Comparison of Two-Level KFAC Methods for Training Deep
Neural Networks | As a second-order method, the Natural Gradient Descent (NGD) has the ability
to accelerate training of neural networks. However, due to the prohibitive
computational and memory costs of computing and inverting the Fisher
Information Matrix (FIM), efficient approximations are necessary to make NGD
scalable to Deep Neural Networks (DNNs). Many such approximations have been
attempted. The most sophisticated of these is KFAC, which approximates the FIM
as a block-diagonal matrix, where each block corresponds to a layer of the
neural network. By doing so, KFAC ignores the interactions between different
layers. In this work, we investigate the interest of restoring some
low-frequency interactions between the layers by means of two-level methods.
Inspired from domain decomposition, several two-level corrections to KFAC using
different coarse spaces are proposed and assessed. The obtained results show
that incorporating the layer interactions in this fashion does not really
improve the performance of KFAC. This suggests that it is safe to discard the
off-diagonal blocks of the FIM, since the block-diagonal approach is
sufficiently robust, accurate and economical in computation time. | Abdoulaye Koroko, Ani Anciaux-Sedrakian, Ibtihel Ben Gharbia, Valérie Garès, Mounir Haddou, Quang Huy Tran | 2023-03-31T14:21:53Z | http://arxiv.org/abs/2303.18083v2 | # Analysis and Comparison of Two-Level KFAC Methods for Training Deep Neural Networks
###### Abstract
As a second-order method, the Natural Gradient Descent (NGD) has the ability to accelerate training of neural networks. However, due to the prohibitive computational and memory costs of computing and inverting the Fisher Information Matrix (FIM), efficient approximations are necessary to make NGD scalable to Deep Neural Networks (DNNs). Many such approximations have been attempted. The most sophisticated of these is KFAC, which approximates the FIM as a block-diagonal matrix, where each block corresponds to a layer of the neural network. By doing so, KFAC ignores the interactions between different layers. In this work, we investigate the interest of restoring some low-frequency interactions between the layers by means of two-level methods. Inspired from domain decomposition, several two-level corrections to KFAC using different coarse spaces are proposed and assessed. The obtained results show that incorporating the layer interactions in this fashion does not really improve the performance of KFAC. This suggests that it is safe to discard the off-diagonal blocks of the FIM, since the block-diagonal approach is sufficiently robust, accurate and economical in computation time.
Keywords: Deep Neural Networks; Natural Gradient Descent; Kronecker Factorization; Two-Level Preconditioning
## 1 Introduction
Deep learning has achieved tremendous success in many fields such as computer vision [19, 24], speech recognition [44, 46], and natural language processing [5, 14], where its models have produced results comparable to human performance. This was made possible thanks not only to parallel computing resources but also to adequate optimization algorithms, the development of which remains a major research area. Currently, the Stochastic Gradient Descent (SGD) method [43] and its variants [33, 40] are the workhorse methods for training DNNs. Their wide adoption by the machine learning community is justified by their simplicity and their relatively good behavior on many standard optimization problems. Nevertheless, almost all optimization problems arising in deep learning are non-linear and highly non-convex. In addition, the landscape of the objective function may contain huge variations in curvature along
different directions [29]. This leads to many challenges in DNNs training, which limit the effectiveness of first-order methods like SGD.
### Approximations of the FIM in NGD methods
By taking advantage of curvature information, second-order methods can overcome the above-mentioned difficulties and speed up the training of DNNs. In such methods, the gradient is rescaled at each iteration with the inverse of a curvature matrix \(C\), whose role is to capture information on the local landscape of the objective function. Several choices of \(C\) are available: the well-known Hessian matrix, the Generalized Gauss-Newton matrix (GGN) [45], the FIM [1] or any positive semi-definite approximation of these matrices. The advantage of the GGN and FIM over the Hessian is that they are always positive semi-definite, which is not always guaranteed for the Hessian. Despite their theoretical superiority, second-order methods are unfortunately not practical for training DNNs. This is due to the huge computational and memory requirements for assembling and inverting the curvature matrix \(C\). Several paradigms have therefore been devised to approximate the curvature matrix of DNNs. For example, the Hessian-free approach (HF) [27] eliminates the need to store \(C\) by using a Krylov subspace-based Conjugate Gradient (CG) method to solve the linear system involving \(C\). While this approach is memory effective, it remains time-consuming, since one must run at each iteration several steps of CG to converge. Another existing approach is the family of quasi-Newton methods [8, 12, 16, 26, 47] that rely only on gradient information to build a low-rank approximation to the Hessian. Other popular approximations to the curvature matrix are Adagrad [11], RMSprop [49], and Adam [21] which develop diagonal approximations to the empirical FIM. Despite their ease of implementation and scalability to DNNs, both low-rank and diagonal approximations throw away a lot of information and, therefore, are in general less effective than a well-tuned SGD with momentum.
More advanced and sophisticated approximations that have sparked great enthusiasm are the family of Kronecker-factored curvature (KFAC) methods. Evolving from earlier works [20, 25, 36, 38, 41], KFAC methods [18, 30, 31] exploit the network structure to obtain a block-diagonal approximation to the FIM. Each block corresponds to a layer and is further approximated by the Kronecker product of two smaller matrices, cheap to store and easy to invert via the formula \((A\otimes B)^{-1}=A^{-1}\otimes B^{-1}\). Owing to this attractive feature, KFAC has received a lot of attention and many endeavors have been devoted to improving it. In [3, 37], distributed versions of KFAC were demonstrated to perform well in large-scale settings. The EKFAC method [15] refines KFAC by rescaling the Kronecker factors with a diagonal variance computed in a Kronecker-factored eigenbasis. The TKFAC method [13] preserves a trace-invariance relationship between the approximate and the exact FIM. By removing KFAC's assumption on the independence between activations and pre-activation derivatives, more rigorous Kronecker factorizations can be worked out [22] based on minimization of various errors in the Frobenius norm. Beyond the FIM, the idea of Kronecker factorization can also be extended to the Hessian matrix of DNNs as in KBFGS [17], where the computational complexity is alleviated by approximating the inverse of the Kronecker factors with low-rank updates, as well as the GGN matrix of Multi-Layer Perceptrons (MLP), as shown in [17]. KFAC has also been deployed successfully in the context of Bayesian deep learning [53], deep reinforcement learning [51] and Laplace approximation [42].
### Enhancement of KFAC by rough layer interaction
For computation and memory purposes, KFAC as well as all related variants use only a block-diagonal approximation of the curvature matrix, where each block corresponds to a layer. This results in a loss of information about the correlations between different layers. The question then naturally arises as to whether it is worth trying to recover some of the lost information in hope of making the approximate FIM closer to the true one, thus improving the convergence speed of the optimizer without paying an excessive price.
To this question, Tselepidis et al. [50] provided an element of answer by considering a "coarse" correction to the inverse of the approximate FIM. This additional term is meant to represent the interaction between layers at a "macroscopic" scale, in contrast with the "microscopic" scale of the interaction between neurons inside each layer. Their approach proceeds by formal analogy with the two-level preconditioning technique in domain decomposition [10], substituting the notion of layer for that of subdomain. The difference with domain decomposition, however, lies in the fact that the matrix at hand does not stem from the discretization of any PDE system, and this prevents the construction of coarse spaces from being correctly guided by any physical sense. Notwithstanding this concern, some ready-made recipes can be blindly borrowed from two-level domain decomposition. In this way, Tselepidis et al. [50] reached a positive conclusion regarding the advisability of enriching the approximate FIM with some reduced information about interactions between layers. Nevertheless, their coarse correction is objectionable in some respects, most notably because of inconsistency in the formula for the new matrix (see §3 for a full discussion), while for the single favorable case on which their conclusion is based, the network architecture selected is a little too simplistic (see §5 for details). Therefore, their claim should not be taken at face value.
Although he did not initially intend to look at the question as formulated above, Benzing [6] recently brought another element of answer that runs counter to the former. By carefully comparing KFAC and the exact natural gradient (as well as FOOF, a method of his own), he came to the astonishingly counterintuitive conclusion that KFAC outperforms the exact natural gradient in terms of optimization performance. In other words, there is no benefit whatsoever in trying to embed any kind of information about the interaction between layers into the curvature matrix, since even the full FIM seems to worsen the situation. While one may not be convinced by his heuristic explanation (whereby KFAC is argued to be a first-order method), his numerical results eloquently speak for themselves. Because Benzing explored a wide variety of networks, it is more difficult to mitigate his findings.
In light of these two contradictory sets of results, we undertook this work in an effort to clarify the matter. To this end, our objective is first to design a family of coarse corrections to KFAC that do not suffer from the mathematical flaws of Tselepidis et al.'s one. This gives rise to a theoretically sound family of approximate FIMs that will next be compared to the original KFAC. This leads to the following outline for the paper. In §2, we introduce notations and recall essential prerequisites on the network model, the natural gradient descent, and the KFAC approximation. In §3, after pointing out the shortcomings of Tselepidis et al.'s corrector, we put forward a series of two-level KFAC methods, the novelty of which is their consistency and their choices of the coarse space. In §4, we present and comment on several experimental results, which include many more test cases and analysis in order to assess the new correctors as fairly as possible. Finally, in §5, we summarize and discuss the results
before sketching out some prospects.
## 2 Background on the second-order optimization framework
### Predictive model and its derivatives
We consider a feedforward neural network \(f_{\theta}\), containing \(\ell\) layers and parametrized by
\[\theta=[\operatorname{vec}(W_{1})^{T},\operatorname{vec}(W_{2})^{T},\ldots, \operatorname{vec}(W_{\ell})^{T}]^{T}\in\mathbb{R}^{p}, \tag{2.1}\]
where \(W_{i}\) is the weights matrix associated to layer \(i\) and "vec" is the operator that vectorizes a matrix by stacking its columns together. We also consider a training data
\[\mathcal{U}=\left\{(x^{(1)},y^{(1)}),\,(x^{(2)},y^{(2)}),\,\ldots,\,(x^{(n)},y^{(n)})\,|\,(x^{(b)},y^{(b)})\in\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}},\;1\leq b\leq n\right\}\]
and a loss function \(L(y,z)\) which measures the discrepancy between the actual target \(y\) and the network's prediction \(z=f_{\theta}(x)\) for a given input-target pair \((x,y)\in\mathcal{U}\). The goal of the training problem
\[\operatorname*{argmin}_{\theta\in\mathbb{R}^{p}}h(\theta):=\frac{1}{n}\sum_{ b=1}^{n}L(y^{(b)},f_{\theta}(x^{(b)})) \tag{2.2}\]
is to find the optimal value of \(\theta\) that minimizes the empirical risk \(h(\theta)\). In the following, we will designate by \(\mathcal{D}v=\nabla_{v}L\) the gradient of the loss function with respect to any variable \(v\). Depending on the type of network, its output and the gradient of the loss are computed in different ways. Let us describe the calculations for two types of networks.
#### 2.1.1 MLP (Multi-Layer Perceptron).
Given an input \(x\in\mathbb{R}^{d_{x}}\), the network computes its output \(z=f_{\theta}(x)\in\mathbb{R}^{d_{y}}\) through the following sequence, known as forward-propagation: starting from \(a_{0}:=x\), we carry out the iterations
\[s_{i}=W_{i}\bar{a}_{i-1},\qquad a_{i}=\sigma_{i}(s_{i}),\qquad\text{for $i$ from $1$ to $\ell$}, \tag{2.3}\]
where \(\bar{a}_{i-1}\in\mathbb{R}^{d_{i-1}+1}\) is \(a_{i-1}\) concatenated with \(1\) in order to capture the bias and \(\sigma_{i}\) is the activation function at layer \(i\). Here, \(W_{i}\in\mathbb{R}^{d_{i}\times(d_{i-1}+1)}\), with \(d_{i}\) the number of neurons in layer \(i\). The sequence is terminated by \(z:=a_{\ell}\). Note that the total number of parameters is necessarily \(p=\sum_{i=1}^{\ell}d_{i}(d_{i-1}+1)\).
The gradient of the loss with respect to the parameters is computed via the back-propagation algorithm: starting from \(\mathcal{D}a_{\ell}=\partial_{z}L(y,z=a_{\ell})\), we perform
\[g_{i}=\mathcal{D}a_{i}\odot\sigma_{i}^{\prime}(s_{i}),\quad\mathcal{D}W_{i}=g _{i}\bar{a}_{i-1}^{T},\quad\mathcal{D}a_{i-1}=W_{i}^{T}g_{i},\quad\text{for $i$ from $\ell$ to $1$}, \tag{2.4}\]
where the special symbol \(g_{i}:=\mathcal{D}s_{i}\) stands for the preactivation derivative. Note that, in the formula for \(\mathcal{D}a_{i-1}\), the last row of \(W_{i}^{T}\) should be removed so that the product in the right-hand side belongs to \(\mathbb{R}^{d_{i-1}}\).
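A direct transcription of the forward pass (2.3) and the back-propagation (2.4) might look as follows in Python; the tanh activation and the squared loss are illustrative choices, and the function also returns the quantities \(\bar{a}_{i-1}\) and \(g_{i}\) that the KFAC approximation of §2.3 consumes.

```python
import numpy as np

def forward_backward(weights, x, y):
    """Forward pass (2.3) and back-propagation (2.4) for a tanh MLP
    with squared loss; returns the per-layer gradients DW_i and the
    statistics (a_bar, g) later used by KFAC."""
    a_bars, s_list = [], []
    a = x
    for W in weights:
        a_bar = np.append(a, 1.0)          # concatenate 1 for the bias
        s = W @ a_bar
        a = np.tanh(s)
        a_bars.append(a_bar)
        s_list.append(s)
    # DL/dz for L(y, z) = 0.5 * ||y - z||^2
    Da = a - y
    grads, gs = [None] * len(weights), [None] * len(weights)
    for i in reversed(range(len(weights))):
        g = Da * (1.0 - np.tanh(s_list[i]) ** 2)   # g_i = Da_i * sigma'(s_i)
        grads[i] = np.outer(g, a_bars[i])          # DW_i = g_i a_bar^T
        Da = (weights[i].T @ g)[:-1]               # drop the bias row
        gs[i] = g
    return grads, a_bars, gs

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((5, 4)), rng.standard_normal((3, 6))]
g, _, _ = forward_backward(Ws, rng.standard_normal(3), rng.standard_normal(3))
print([gi.shape for gi in g])
```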
#### CNN (Convolutional Neural Network)
The calculation is governed by the same principle as for MLP, but the practical organization slightly differs. In a convolution layer, the input, which is an image with multiple channels, is convolved with a set of filters to produce an output image containing multiple channels. In order to speed up computations, traditional convolution operations are reshaped into matrix-matrix or matrix-vector multiplications using the unrolling approach [9], whereby the input/output data are copied and rearranged into new matrices (see Figure 1).
Assume that layer \(i\) receives an input \(\mathcal{A}_{i-1}\in\mathbb{R}^{c_{i-1}\times T_{i-1}}\), where \(T_{i-1}\) denotes the number of spatial locations and \(c_{i-1}\) the number of channels. Considering \(c_{i}\) filters, each of which involves \(\Delta_{i}\) coefficients, we form a weight matrix \(W_{i}\) of shape \(c_{i}\times(c_{i-1}\Delta_{i}+1)\), where each row corresponds to a single filter flattened into a vector. Note that the additional 1 in the column dimension of \(W_{i}\) is required for the bias parameter. Around each position \(t\in\{1,\dots,T_{i-1}\}\), we define the local column vector \(a_{i-1,t}\in\mathbb{R}^{c_{i-1}\Delta_{i}}\) by extracting the patch data from \(\mathcal{A}_{i-1}\) (cf. [18] for explicit formulas). The output \(\mathcal{A}_{i}\in\mathbb{R}^{c_{i}\times T_{i}}\) is computed by the forward-propagation: for \(t\in\{1,\dots,T_{i}\}\), the \(t\)-th column \(\widetilde{a}_{i,t}\) of \(\mathcal{A}_{i}\) is given by
\[s_{i,t}=W_{i}\bar{a}_{i-1,t},\qquad\widetilde{a}_{i,t}=\sigma_{i}(s_{i,t}), \tag{2.5}\]
where \(\bar{a}_{i-1,t}\in\mathbb{R}^{c_{i-1}\Delta_{i}+1}\) is \(a_{i-1}\) concatenated with 1 in order to capture the bias. In matrix form, let \([\![\mathcal{A}_{i-1}]\!]\in\mathbb{R}^{(c_{i-1}\Delta_{i}+1)\times T_{i}}\) be the matrix whose \(t\)-th column is \(\bar{a}_{i-1,t}\). Then,
\[\mathcal{S}_{i}=W_{i}[\![\mathcal{A}_{i-1}]\!],\qquad\mathcal{A}_{i}=\sigma_{i }(\mathcal{S}_{i}). \tag{2.6}\]
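The patch matrix \([\![\mathcal{A}_{i-1}]\!]\) of (2.6) can be assembled with a standard im2col (unrolling) routine. The sketch below handles the 1-D, no-padding case for brevity, which is enough to see why the convolution becomes a plain matrix product.

```python
import numpy as np

def im2col_1d(A, delta):
    """Unroll a (channels, T) input into the patch matrix of (2.6).

    Column t stacks the length-`delta` patch around position t for
    every channel, plus a trailing 1 for the bias (no padding here).
    """
    c, T = A.shape
    T_out = T - delta + 1
    cols = np.ones((c * delta + 1, T_out))
    for t in range(T_out):
        cols[:-1, t] = A[:, t:t + delta].reshape(-1)
    return cols

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 10))            # c_{i-1}=3 channels, T=10
W = rng.standard_normal((4, 3 * 2 + 1))     # c_i=4 filters, Delta_i=2
S = W @ im2col_1d(A, delta=2)               # pre-activations, shape (4, 9)
print(S.shape)
```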
Figure 1: Traditional convolution turned into matrix-matrix multiplication with the _unrolling_ approach.

The gradient of the loss with respect to the parameters of layer \(i\) is computed via the back-propagation formulas
\[g_{i,t}=\mathcal{D}\widetilde{a}_{i,t}\odot\sigma^{\prime}_{i}(s_{i,t}),\quad \mathcal{D}W_{i}=\sum_{t=1}^{T_{i}}g_{i,t}\bar{a}_{i-1,t}^{T},\quad\mathcal{D}a _{i-1,t}=W_{i}^{T}g_{i,t}, \tag{2.7}\]
for \(t\in\{1,\dots,T_{i}\}\), where the special symbol \(g_{i,t}:=\mathcal{D}s_{i,t}\) stands for the preactivation derivative.
In all cases (MLP or CNN), the gradient \(\nabla_{\theta}L=\mathcal{D}\theta\) of the loss with respect to whole parameter \(\theta\) is retrieved as
\[\mathcal{D}\theta=[\mathrm{vec}(\mathcal{D}W_{1})^{T},\mathrm{ vec}(\mathcal{D}W_{2})^{T},\ldots,\mathrm{vec}(\mathcal{D}W_{\ell})^{T}]^{T}. \tag{2.8}\]
A general descent method to solve the training problem (2.2) is based on the iterates
\[\theta_{k+1}=\theta_{k}-\alpha_{k}[C(\theta_{k})]^{-1}\nabla_{ \theta}h(\mathcal{S}_{k},\theta_{k}), \tag{2.9}\]
where \(\alpha_{k}>0\) is the learning rate,
\[\nabla_{\theta}h(\mathcal{S}_{k},\theta_{k})=\frac{1}{|\mathcal{ S}_{k}|}\sum_{(x^{(b)},y^{(b)})\in\mathcal{S}_{k}}\nabla_{\theta}L(y^{(b)},f_{ \theta_{k}}(x^{(b)})) \tag{2.10}\]
is a batch approximation of the full gradient \(\nabla_{\theta}h(\theta_{k})=\frac{1}{n}\sum_{b=1}^{n}\nabla_{\theta}L(y^{(b)},f_{\theta_{k}}(x^{(b)}))\) on a random subset \(\mathcal{S}_{k}\subset\mathcal{U}\), and \(C(\theta_{k})\) is an invertible matrix which depends on the method being implemented.
### Natural Gradient Descent
The _Natural Gradient Descent_ (NGD) is associated with a particular choice for matrix \(C(\theta_{k})\), which is well-defined under a mild assumption.
Hypothesis on the loss function. From now on, we take it for granted that there exists a probability density \(\wp(y|z)\) on \(y\in\mathbb{R}^{d_{y}}\) such that, up to an additive constant \(\nu\), the loss function \(L(y,z)\) takes the form
\[L(y,z)=-\log\wp(y|z)+\nu. \tag{2.11}\]
For instance, if the elementary loss corresponds to the least-squares function
\[L(y,z)=\frac{1}{2}\|y-z\|_{2}^{2}, \tag{2.12a}\]
then we can take the normal density
\[\wp(y|z)=(2\pi)^{-d_{y}/2}\exp(-\tfrac{1}{2}\|y-z\|_{2}^{2}), \tag{2.12b}\]
so that
\[L(y,z)=-\log\wp(y|z)-\frac{d_{y}}{2}\log(2\pi). \tag{2.12c}\]
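Equations (2.12) are easy to check numerically: up to the constant \(\frac{d_{y}}{2}\log(2\pi)\), the negative log-density of the isotropic normal equals the squared loss. A small sanity check in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
y, z = rng.standard_normal(5), rng.standard_normal(5)

loss = 0.5 * np.sum((y - z) ** 2)                               # (2.12a)
log_p = -0.5 * len(y) * np.log(2 * np.pi) - 0.5 * np.sum((y - z) ** 2)
# (2.12c): L(y, z) = -log p(y|z) - (d_y / 2) * log(2*pi)
print(np.isclose(loss, -log_p - 0.5 * len(y) * np.log(2 * np.pi)))
```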
Introduce the notation \(p(y|x,\theta)=\wp(y|f_{\theta}(x))\). Then, the composite loss function
\[L(y,f_{\theta}(x))=-\log p(y|x,\theta) \tag{2.13}\]
derives from the density function \(p(y|x,\theta)\) of the model's conditional predictive distribution \(P_{y|x}(\theta)\). As shown above, \(P_{y|x}(\theta)\) is multivariate normal for the standard square loss function. It can also be proved that \(P_{y|x}(\theta)\) is multinomial for the cross-entropy one. The learned distribution is therefore \(\mathsf{P}_{x,y}(\theta)\) with density
\[\mathsf{p}(x,y|\theta)=q(x)p(y|x,\theta), \tag{2.14}\]
where \(q(x)\) is the density of data distribution \(Q_{x}\) over inputs \(x\in\mathbb{R}^{d_{x}}\).
#### Fisher Information Matrix.
The NGD method [1] is defined as the generic algorithm (2.9) in which \(C(\theta)\) is set to the _Fisher Information Matrix_ (FIM)
\[F(\theta)=\mathbb{E}_{(x,y)\sim\mathsf{P}_{x,y}(\theta)}\{\nabla_{\theta}\log\mathsf{p}(x,y|\theta)[\nabla_{\theta}\log\mathsf{p}(x,y|\theta)]^{T}\} \tag{2.15a}\]
\[=\mathbb{E}_{(x,y)\sim\mathsf{P}_{x,y}(\theta)}\{\mathcal{D}\theta(\mathcal{D}\theta)^{T}\} \tag{2.15b}\]
\[=\mathrm{cov}(\mathcal{D}\theta,\mathcal{D}\theta), \tag{2.15c}\]
where \(\mathbb{E}_{(x,y)\sim\mathsf{P}_{x,y}(\theta)}\) denotes the expectation taken over the prescribed distribution \(\mathsf{P}_{x,y}(\theta)\) at a fixed \(\theta\). To alleviate notations, we shall write \(\mathbb{E}\) instead of \(\mathbb{E}_{(x,y)\sim\mathsf{P}_{x,y}(\theta)}\) from now on. Likewise, we shall write \(F\) instead of \(F(\theta)\) or \(F(\theta_{k})\).
By definition (2.15), the FIM is a covariance matrix and is therefore always positive semi-definite. However, for the iteration
\[\theta_{k+1}=\theta_{k}-\alpha_{k}F^{-1}\nabla_{\theta}h(\mathcal{S}_{k}, \theta_{k}) \tag{2.16}\]
to be well-defined, \(F\) has to be invertible. This is why, in practice, \(C(\theta_{k})\) will be taken to be a regularized version of \(F\) under the form
\[F_{\bullet}=F+\lambda I_{p},\qquad\lambda>0. \tag{2.17}\]
The actual NGD iteration is therefore
\[\theta_{k+1}=\theta_{k}-\alpha_{k}F_{\bullet}^{-1}\nabla_{\theta}h(\mathcal{ S}_{k},\theta_{k}). \tag{2.18}\]
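For toy problem sizes where the full matrix can be formed, the regularized iteration (2.17)-(2.18) can be written down directly from per-example gradients; the sketch below is only a reference implementation (this dense construction is precisely what KFAC is designed to avoid).

```python
import numpy as np

def ngd_step(theta, per_example_grads, lr=0.1, lam=1e-3):
    """One regularized NGD step (2.18).

    per_example_grads: (n, p) array of gradients D(theta) sampled from
    the model distribution; F is their second-moment matrix (2.15b).
    """
    n, p = per_example_grads.shape
    F = per_example_grads.T @ per_example_grads / n     # empirical FIM
    F_reg = F + lam * np.eye(p)                         # (2.17)
    mean_grad = per_example_grads.mean(axis=0)
    return theta - lr * np.linalg.solve(F_reg, mean_grad)

rng = np.random.default_rng(0)
theta = rng.standard_normal(10)
grads = rng.standard_normal((64, 10))
print(ngd_step(theta, grads)[:3])
```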
In the space of probability distributions \(\mathsf{P}_{x,y}(\theta)\) equipped with the _Kullback-Leibler_ (KL) divergence, the FIM represents the local quadratic approximation of this induced metric, in the sense that for a small vector \(\delta\in\mathbb{R}^{p}\), we have
\[\mathrm{KL}[\mathsf{P}_{x,y}(\theta)\,\|\,\mathsf{P}_{x,y}(\theta+\delta)]= \frac{1}{2}\delta^{T}F\delta+O(\|\delta\|^{3}), \tag{2.19}\]
where \(\mathrm{KL}[\mathsf{P}\,\|\,\mathsf{Q}]\) is the KL divergence between the distributions \(\mathsf{P}\) and \(\mathsf{Q}\). As a consequence, the unregularized NGD can be thought of as the steepest descent in this space of probability distributions [2].
#### Advantages and drawbacks.
By virtue of this geometric interpretation, the NGD has the crucial advantage of being intrinsic, that is, invariant with respect to invertible reparameterizations. Put another way, the algorithm will produce matching iterates regardless of how the unknowns are transformed. This is useful in high-dimensional cases where the choice of parameters is more or less arbitrary.
Strictly speaking, invariance with respect to parameters only occurs at the continuous level, i.e., in the limit of \(\alpha_{k}\to 0\). This minor drawback does not undermine the theoretical soundness of the method. In fact, the real drawback of the NGD (2.15)-(2.16) lies in the cost of computing and inverting the Fisher matrix. This is why it is crucial to consider suitable approximations of the FIM such as KFAC (cf. §2.3).
Finally and anecdotally, the NGD method can also be viewed as an approximate Newton method, since the FIM and the GGN matrix are equivalent when the model predictive distribution \(P_{y|x}(\theta)\) belongs to the exponential family [28].
### KFAC approximation of the FIM
From equation (2.15), the FIM can be written in block form as
\[F=\mathbb{E}[\mathcal{D}\theta(\mathcal{D}\theta)^{T}]=\begin{bmatrix}F_{1,1}&\ldots&F_{1,\ell}\\ \vdots&&\vdots\\ F_{\ell,1}&\ldots&F_{\ell,\ell}\end{bmatrix}\in\mathbb{R}^{p\times p}, \tag{2.20a}\]
in which each block \(F_{i,j}\) is given by
\[F_{i,j}=\mathbb{E}[\mathrm{vec}(\mathcal{D}W_{i})\mathrm{vec}(\mathcal{D}W_{j})^{T}]. \tag{2.20b}\]
One can interpret \(F_{i,i}\) as being second-order statistics of weight derivatives of layer \(i\), and \(F_{i,j}\), \(i\neq j\) as representing the interactions between layer \(i\) and \(j\).
The KFAC method [31] is grounded on the following two assumptions to provide an efficient approximation to the FIM that is convenient for training DNNs.
1. The first one is that there are no interactions between two different layers, i.e., \(F_{i,j}=0\) for \(i\neq j\). This results in the block-diagonal approximation \[F\approx\widetilde{F}=\mathrm{diag}(F_{1,1},F_{2,2},\ldots,F_{\ell,\ell})\] (2.21) for the FIM. At this step, computing the inverse of \(\widetilde{F}\) is equivalent to computing the inverses of diagonal blocks \(F_{i,i}\). Nevertheless, because the diagonal blocks \(F_{i,i}\) can be very large (especially for DNNs with large layers), this first approximation remains insufficient.
2. The second one comes in support of the first one and consists in factorizing each diagonal block \(F_{i,i}\) as a Kronecker product of two smaller matrices, namely, \[F_{i,i}\approx[F_{\mathrm{KFAC}}]_{i,i}=A_{i}\otimes G_{i}.\] (2.22) where the Kronecker product between \(A\in\mathbb{R}^{m_{A}\times n_{A}}\) and \(B\in\mathbb{R}^{m_{B}\times n_{B}}\) is the
matrix of size \(m_{A}m_{B}\times n_{A}n_{B}\) given by
\[A\otimes B=\left[\begin{array}{ccc}A_{1,1}B&\ldots&A_{1,n_{A}}B\\ \vdots&&\vdots\\ A_{m_{A},1}B&\ldots&A_{m_{A},n_{A}}B\end{array}\right]. \tag{2.23}\]
Now, depending on the type of the layer, the computation of the Kronecker factors \(A_{i}\) and \(G_{i}\) may require different other assumptions.
#### MLP layer.
When layer \(i\) is an MLP, the block \(F_{i,i}\) is given by
\[F_{i,i} =\mathbb{E}[\text{vec}(\mathcal{D}W_{i})\text{vec}(\mathcal{D}W_{i})^{T}]\] \[=\mathbb{E}[\text{vec}(g_{i}\bar{a}_{i-1}^{T})\text{vec}(g_{i}\bar{a}_{i-1}^{T})^{T}]\] \[=\mathbb{E}[(\bar{a}_{i-1}\otimes g_{i})(\bar{a}_{i-1}\otimes g_{i})^{T}]\] \[=\mathbb{E}[\bar{a}_{i-1}\bar{a}_{i-1}^{T}\otimes g_{i}g_{i}^{T}]. \tag{2.24}\]
From the last equality, if one assumes that activations and pre-activation derivatives are independent, that is, \(a_{i-1}\perp\!\!\!\perp g_{i}\), then \(F_{i,i}\) can be factorized as
\[F_{i,i}\approx[F_{\text{KFAC}}]_{i,i}=\mathbb{E}[\bar{a}_{i-1}\bar{a}_{i-1}^{T }]\otimes\mathbb{E}[g_{i}g_{i}^{T}]\,=:A_{i}\otimes G_{i}, \tag{2.25}\]
with \(A_{i}=\mathbb{E}[\bar{a}_{i-1}\bar{a}_{i-1}^{T}]\) and \(G_{i}=\mathbb{E}[g_{i}g_{i}^{T}]\).
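A minimal numpy sketch of the corresponding Monte Carlo estimates, assuming a mini-batch of activations and pre-activation derivatives has already been collected for the layer, with \(\bar{a}\) the bias-augmented activation:

```python
import numpy as np

def kfac_factors_mlp(a_prev, g):
    """Estimate A = E[a_bar a_bar^T] and G = E[g g^T] for one MLP layer (2.25).

    a_prev: (B, d_in) input activations for a mini-batch of size B
    g:      (B, d_out) pre-activation derivatives for the same mini-batch
    """
    B = a_prev.shape[0]
    a_bar = np.hstack([a_prev, np.ones((B, 1))])  # homogeneous coordinate for the bias
    return a_bar.T @ a_bar / B, g.T @ g / B
```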
#### Convolution layer.
With such a layer, \(F_{i,i}\) is written as
\[F_{i,i} =\mathbb{E}\big{[}\text{vec}(\mathcal{D}W_{i})\text{vec}(\mathcal{D}W_{i})^{T}\big{]}\] \[=\mathbb{E}\bigg{[}\text{vec}\bigg{(}\sum_{t=1}^{T_{i}}g_{i,t}\bar{a}_{i-1,t}^{T}\bigg{)}\text{vec}\bigg{(}\sum_{t=1}^{T_{i}}g_{i,t}\bar{a}_{i-1,t}^{T}\bigg{)}^{T}\bigg{]}\] \[=\mathbb{E}\bigg{[}\sum_{t=1}^{T_{i}}\sum_{t^{\prime}=1}^{T_{i}}(\bar{a}_{i-1,t}\otimes g_{i,t})(\bar{a}_{i-1,t^{\prime}}\otimes g_{i,t^{\prime}})^{T}\bigg{]}\] \[=\mathbb{E}\bigg{[}\sum_{t=1}^{T_{i}}\sum_{t^{\prime}=1}^{T_{i}}\bar{a}_{i-1,t}\bar{a}_{i-1,t^{\prime}}^{T}\otimes g_{i,t}g_{i,t^{\prime}}^{T}\bigg{]}\] \[=\mathbb{E}\bigg{[}\sum_{t=1}^{T_{i}}\sum_{t^{\prime}=1}^{T_{i}}\Omega_{i}(t,t^{\prime})\otimes\Gamma_{i}(t,t^{\prime})\bigg{]}, \tag{2.26}\]
with \(\Omega_{i}(t,t^{\prime})=\bar{a}_{i-1,t}\bar{a}_{i-1,t^{\prime}}^{T}\) and \(\Gamma_{i}(t,t^{\prime})=g_{i,t}g_{i,t^{\prime}}^{T}\). In order to factorize \(F_{i,i}\) into Kronecker product of two matrices, Grosse and Martens [18] resort to three hypotheses. First, similarly to MLP layers, activations and pre-activation derivatives are assumed to be independent. Secondly, postulating spatial homogeneity, the second-order statistics of the activations and pre-activation derivatives at any two spatial locations \(t\) and \(t^{\prime}\) depend only on the difference \(t-t^{\prime}\). Finally, the pre-activation derivatives at any two distinct spatial locations are declared to be uncorrelated, i.e., \(\Gamma_{i}(t,t^{\prime})=0\) for \(t\neq t^{\prime}\).
Combining these three assumptions yields the approximation
\[F_{i,i}\approx[F_{\text{KFAC}}]_{i,i}=\mathbb{E}\bigg{[}\sum_{t=1}^{T_{i}}\Omega_{ i}(t,t)\bigg{]}\otimes\frac{1}{T_{i}}\mathbb{E}\bigg{[}\sum_{t=1}^{T_{i}}\Gamma_{i}(t,t )\bigg{]}\,=:A_{i}\otimes G_{i}, \tag{2.27}\]
with \(A_{i}=\mathbb{E}\big{[}\sum_{t=1}^{T_{i}}\Omega_{i}(t,t)\big{]}\) and \(G_{i}=\frac{1}{T_{i}}\mathbb{E}\big{[}\sum_{t=1}^{T_{i}}\Gamma_{i}(t,t)\big{]}\).
**Remark 2.1**.: It should be mentioned that, in the same spirit, a KFAC-type approximation has been developed for RNNs (Recurrent Neural Networks), but with many more assumptions. In this work, we do not consider recurrent layers. Readers interested in KFAC for RNNs are referred to [30].
Going back to MLP and CNN layers, the matrices \(A_{i}\) and \(G_{i}\) are estimated by a Monte Carlo method, with a mini-batch \(\mathcal{B}=\{(x_{1},y_{1}),\ldots,(x_{B},y_{B})\}\), where the targets \(y_{i}\) are sampled from the model predictive distribution \(P_{y|x}(\theta)\). Combining the block-diagonal approximation and the Kronecker factorization of each block, the approximate FIM becomes
\[F\approx F_{\text{KFAC}}=\text{diag}(A_{1}\otimes G_{1},\,A_{2}\otimes G_{2 },\,\ldots,\,A_{\ell}\otimes G_{\ell}). \tag{2.28}\]
The descent iteration (2.9) with \(C(\theta_{k})=F_{\text{KFAC}}(\theta_{k})\) is now well suited to training DNNs. Indeed, thanks to the Kronecker product properties \((A\otimes B)^{-1}=A^{-1}\otimes B^{-1}\) and \((A\otimes B)\text{vec}(X)=\text{vec}(BXA^{T})\), it is plain that the product
\[F_{\text{KFAC}}^{-1}\nabla_{\theta}h=\begin{bmatrix}\text{vec}(G_{1}^{-1}( \nabla_{W_{1}}h)A_{1}^{-1})\\ \vdots\\ \text{vec}(G_{\ell}^{-1}(\nabla_{W_{\ell}}h)A_{\ell}^{-1})\end{bmatrix} \tag{2.29}\]
only requires storing and inverting matrices of moderately small size.
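In code, applying (2.29) layer by layer could look like the following numpy sketch, assuming the Kronecker factors (possibly regularized, as discussed next) and the per-layer gradient matrices are given:

```python
import numpy as np

def kfac_precondition(grads, As, Gs):
    """Blockwise F_KFAC^{-1} grad via (A ⊗ G)^{-1} vec(X) = vec(G^{-1} X A^{-1}).

    grads: list of per-layer gradients ∇_{W_i} h, each of shape (d_i, d_{i-1}+1)
    As, Gs: matching lists of Kronecker factors A_i and G_i
    """
    return [np.linalg.solve(G, dW) @ np.linalg.inv(A)   # G^{-1} dW A^{-1}
            for dW, A, G in zip(grads, As, Gs)]
```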
In practice, invertibility of \(F_{\text{KFAC}}\) must be enforced by a regularization procedure. The usual Tikhonov one, by which we consider \(F_{\text{KFAC}}+\lambda I,\lambda>0\), instead of \(F_{\text{KFAC}}\), is equivalent to adding a multiple of the identity matrix of appropriate size to each diagonal block, i.e., \(A_{i}\otimes G_{i}+\lambda I_{i}\). Unfortunately, this breaks the Kronecker-factored structure of the blocks. To preserve the factorized structure, the authors of KFAC [31] advocate a heuristic damping technique in which each Kronecker factor is regularized as
\[[F_{\bullet\,\text{KFAC}}]_{i,i}=(A_{i}+\pi_{i}\lambda^{1/2}I_{A_{i}})\otimes (G_{i}+\pi_{i}^{-1}\lambda^{1/2}I_{G_{i}}),\] (2.30a) where \[I_{A_{i}}\] and \[I_{G_{i}}\] denote identity matrices of same size as \[A_{i}\] and \[G_{i}\] respectively, and \[\pi_{i}=\sqrt{\frac{\text{tr}(A_{i})/(d_{i-1}+1)}{\text{tr}(G_{i})/d_{i}}}. \tag{2.30b}\]
The actual KFAC iteration is therefore
\[\theta_{k+1}=\theta_{k}-\alpha_{k}F_{\bullet\,\text{KFAC}}^{-1}\nabla_{ \theta}h(\mathcal{S}_{k},\theta_{k}), \tag{2.31a}\]
with
\[F_{\bullet\,\text{KFAC}}=\text{diag}([F_{\bullet\,\text{KFAC}}]_{1,1},\,[F_{ \bullet\,\text{KFAC}}]_{2,2},\,\ldots,\,[F_{\bullet\,\text{KFAC}}]_{\ell,\ell}). \tag{2.31b}\]
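Before moving on, here is a small numpy sketch of the damping step (2.30), under the same dense-matrix assumptions as the sketches above:

```python
import numpy as np

def damp_factors(A, G, lam):
    """Heuristic KFAC damping (2.30): split sqrt(lam) between the two factors."""
    nA, nG = A.shape[0], G.shape[0]           # nA = d_{i-1}+1, nG = d_i
    pi = np.sqrt((np.trace(A) / nA) / (np.trace(G) / nG))   # equation (2.30b)
    return A + pi * np.sqrt(lam) * np.eye(nA), G + np.sqrt(lam) / pi * np.eye(nG)
```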
## 3 Two-level KFAC methods
Henceforth, the learning rate is assumed to be \(\alpha_{k}=1\). Let
\[\zeta_{k}=\theta_{k}-\theta_{k+1}=[C(\theta_{k})]^{-1}\nabla_{ \theta}h(\mathcal{S}_{k},\theta_{k}) \tag{3.1}\]
be the negative increment of \(\theta\) at iteration \(k\) of the generic descent algorithm (2.9). To further lighten notation, we shall drop the subscript \(k\) and omit the dependence on \(\theta_{k}\). For the regularized NGD (2.18), we have
\[\zeta=F_{\bullet}^{-1}\,\nabla_{\theta}h, \tag{3.2}\]
while for the regularized KFAC method, we have
\[\zeta_{\text{KFAC}}=F_{\bullet\,\text{KFAC}}^{-1}\nabla_{\theta}h, \tag{3.3}\]
it being understood that the matrices are regularized whenever necessary.
We want to build a new matrix \(F_{\bullet\,\text{KFAC-2L}}^{-1}\), an augmented version of \(F_{\bullet\,\text{KFAC}}^{-1}\), such that the solution
\[\zeta_{\text{KFAC-2L}}=F_{\bullet\,\text{KFAC-2L}}^{-1}\nabla_{ \theta}h, \tag{3.4}\]
is a better approximation to \(\zeta\) than \(\zeta_{\text{KFAC}}\), namely,
\[\|\zeta_{\text{KFAC-2L}}-\zeta\|_{F}\ll\|\zeta_{\text{KFAC}}- \zeta\|_{F}. \tag{3.5}\]
By "augmented" we mean that, at least partially and at some rough scale, \(F_{\text{KFAC-2L}}^{-1}\) takes into account the information about layer interactions that was discarded by the block-diagonal approximation KFAC. The basic tenet underlying this initiative is the belief that a more accurate approximation to the NGD solution \(\zeta\) at each descent iteration will help the global optimization process to converge faster.
### Analogy and dissimilarity with domain decomposition
The construction philosophy of \(F_{\bullet\,\text{KFAC-2L}}^{-1}\) proceeds by analogy with insights from domain decomposition. To properly explain the analogy, we first need to cast the matrix \(F_{\bullet\,\text{KFAC}}^{-1}\) under a slightly different form.
For each \(i\in\{1,\ldots,\ell\}\), let \(R_{i}\in\mathbb{R}^{p_{i}\times p}\) be the matrix of the restriction operator from \(\mathbb{R}^{p}\), the total space of all parameters, to the subspace of parameters pertaining to layer \(i\), whose dimension is \(p_{i}\). In other words, for \((\xi,\eta)\in\{1,\ldots,p_{i}\}\times\{1,\ldots,p\}\),
\[(R_{i})_{\xi\eta}=\left\{\begin{aligned} & 1&\text{ if }\;\eta=p_{1}+\ldots+p_{i-1}+\xi,\\ & 0&\text{ otherwise.}\end{aligned}\right. \tag{3.6}\]
The transpose \(R_{i}^{T}\in\mathbb{R}^{p\times p_{i}}\) then represents the prolongation operator from the subspace of parameters in layer \(i\) to the total space of all parameters. Obviously, the \(i\)-th diagonal block of the regularized FIM can be expressed as
\[[F_{\bullet}]_{i,i}=R_{i}F_{\bullet}R_{i}^{T}.\]
If there were no approximation of each diagonal block by a Kronecker product, then the block-diagonal approximation of \(F\) would give rise to the inverse matrix
\[F_{\bullet\,\text{block-diag}}^{-1}=\sum_{i=1}^{\ell}R_{i}^{T}[F_{\bullet}]_{i,i}^{-1}R_{i}=\sum_{i=1}^{\ell}R_{i}^{T}(R_{i}F_{\bullet}R_{i}^{T})^{-1}R_{i}. \tag{3.7}\]
In the case of KFAC, it follows from (2.28)-(2.29) that
\[F_{\bullet\,\text{KFAC}}^{-1} =\sum_{i=1}^{\ell}R_{i}^{T}[F_{\bullet\,\text{KFAC}}]_{i,i}^{-1}R _{i} \tag{3.8}\] \[=\sum_{i=1}^{\ell}R_{i}^{T}(A_{i}+\pi_{i}\lambda^{1/2}I_{A_{i}}) ^{-1}\otimes(G_{i}+\pi_{i}^{-1}\lambda^{1/2}I_{G_{i}})^{-1}R_{i}.\]
In the context of the domain decomposition methods to solve linear systems arising from the discretization of PDEs, the spatial domain of the initial problem is divided into several subdomains. The system is then projected onto the subdomains and the local subproblems are solved independently of each other as smaller systems. In this stage, parallelism can be fully taken advantage of by assigning a processor to each subdomain. This produces a local solution on each subdomain. These local solutions are next combined to create an approximate global solution on the overall domain. Algebraically, the whole process is tantamount to using an inverse matrix of a form similar to (3.7)-(3.8) either within a Schwarz-like iterative procedure or as a preconditioner [10]. The counterparts of \([F_{\bullet}]_{i,i}^{-1}\) or \([F_{\bullet\,\text{KFAC}}]_{i,i}^{-1}\) are referred to as _local solvers_.
**Remark 3.1**.: The above analogy is not a perfect one. In domain decomposition, the subdomains are allowed (and even recommended!) to overlap each other, so that an unknown can belong to two or more subdomains. In this case, the restriction operators \(R_{i}\) can be much more intricate than the one trivially defined in (3.6).
A well-known issue with domain decomposition methods of the form (3.7)-(3.8) is the disappointingly slow rate of convergence, which results in a lack of _scalability_[10]: the speed-up factor does not grow proportionally with the number of subdomains (and therefore of processors). The reason is that, as the number of subdomains increases, it takes more iterations for an information local to one subdomain to be propagated and taken into account by the others. The common remedy to this problem is to append a "coarse" correction that enables subdomains to communicate with each other in a faster way. The information exchanged in this way is certainly not complete, but only concerns the low frequencies.
**Remark 3.2**.: In domain decomposition, there is a physical problem (represented by the PDE at the continuous level) that serves as a support for the mathematical and numerical reasoning. This is not the case here, where we have to think in a purely algebraic way.
### Multiplicative vs. additive coarse correction
We are going to present the idea of two-level KFAC in a very elementary fashion. Let \(m\geq\ell\) be an integer and \(R_{0}\in\mathbb{R}^{m\times p}\) be a given matrix. The subspace of \(\mathbb{R}^{p}\) spanned by the columns of \(R_{0}^{T}\in\mathbb{R}^{p\times m}\) is called the _coarse space_. The choice of the coarse space will be discussed later on. For the moment, we can assume that it is known.
The idea is to add to \(\zeta_{\text{KFAC}}\) a correction term that lives in the coarse space, in such a way that the new vector minimizes the error in the \(F_{\bullet}\)-norm with respect to the FIM solution \(\zeta=F_{\bullet}^{-1}\nabla_{\theta}h\). More concretely, this means that for the negative increment, we consider
\[\zeta_{\text{KFAC-2L}}=\zeta_{\text{KFAC}}+R_{0}^{T}\beta^{*}, \tag{3.9}\]
where
\[\beta^{*} =\operatorname*{argmin}_{\beta\in\mathbb{R}^{m}}\|(\zeta_{\text{KFAC}}+R_{0}^{T}\beta)-\zeta\|_{F_{\bullet}}^{2} \tag{3.10a}\] \[=\operatorname*{argmin}_{\beta\in\mathbb{R}^{m}}\|(\zeta_{\text{KFAC}}+R_{0}^{T}\beta)-F_{\bullet}^{-1}\nabla_{\theta}h\|_{F_{\bullet}}^{2}. \tag{3.10b}\]
The solution of the quadratic minimization problem (3.10) is given by
\[\beta^{*}=(R_{0}F_{\bullet}R_{0}^{T})^{-1}R_{0}(\nabla_{\theta}h-F_{\bullet} \zeta_{\text{KFAC}}), \tag{3.11}\]
provided that the matrix
\[F_{\text{coarse}}:=R_{0}F_{\bullet}R_{0}^{T}\in\mathbb{R}^{m\times m}, \tag{3.12}\]
representing the _coarse operator_, be invertible. This is a small size matrix, insofar as \(m\) will be in practice taken to be equal to \(\ell\) or \(2\ell\), and will in any case remain much smaller than \(p\). This is in agreement with domain decomposition where the size of the coarse system is usually equal to the number of subdomains.
As for the vector
\[r_{\text{KFAC}}:=\nabla_{\theta}h-F_{\bullet}\zeta_{\text{KFAC}}, \tag{3.13}\]
it is referred to as the _residual_ associated to the approximate solution \(\zeta_{\text{KFAC}}\). Plugging (3.11) into (3.9) and recalling that \(\zeta_{\text{KFAC}}=F_{\bullet\,\text{KFAC}}^{-1}\nabla_{\theta}h\), we end up with
\[\zeta_{\text{KFAC-2L}}=F_{\bullet\,\text{KFAC-2L}}^{-1}\nabla_{\theta}h, \tag{3.14}\]
with
\[F_{\bullet\,\text{KFAC-2L}}^{-1}=F_{\bullet\,\text{KFAC}}^{-1}+R_{0}^{T}F_{ \text{coarse}}^{-1}R_{0}(I-F_{\bullet}F_{\bullet\,\text{KFAC}}^{-1}). \tag{3.15}\]
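For small problems where \(F_{\bullet}\) can be formed explicitly, the multiplicative coarse correction (3.9)-(3.15) reduces to a few dense linear-algebra calls. A numpy sketch under that assumption:

```python
import numpy as np

def coarse_corrected_increment(grad, zeta_kfac, F_reg, R0):
    """Two-level increment (3.9): zeta_KFAC + R0^T beta*, with beta* from (3.11).

    grad:      gradient ∇_θ h, shape (p,)
    zeta_kfac: KFAC increment F_KFAC^{-1} grad, shape (p,)
    F_reg:     regularized Fisher matrix F_•, shape (p, p)
    R0:        coarse restriction matrix, shape (m, p)
    """
    residual = grad - F_reg @ zeta_kfac        # r_KFAC, equation (3.13)
    F_coarse = R0 @ F_reg @ R0.T               # coarse operator (3.12), size m x m
    beta = np.linalg.solve(F_coarse, R0 @ residual)
    return zeta_kfac + R0.T @ beta
```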
The matrix (3.15) that we propose can be checked to be consistent: if \(F_{\bullet\,\text{KFAC}}^{-1}\) and \(R_{0}^{T}F_{\text{coarse}}^{-1}R_{0}\) are both homogeneous to \(F_{\bullet}^{-1}\), then \(F_{\bullet\,\text{KFAC-2L}}^{-1}\) is homogeneous to
\[F_{\bullet}^{-1}+F_{\bullet}^{-1}-F_{\bullet}^{-1}F_{\bullet}\,F_{\bullet}^{-1 }=F_{\bullet}^{-1}\]
too. In the language of domain decomposition, the coarse corrector of (3.15) is said to act _multiplicatively_, to the extent that
\[I-F_{\bullet\,\text{KFAC-2L}}^{-1}F_{\bullet}=[I-(R_{0}^{T}F_{\text{coarse}}^{-1 }R_{0})F_{\bullet}][I-F_{\bullet\,\text{KFAC}}^{-1}F_{\bullet}]. \tag{3.16}\]
as can be straightforwardly verified. If \(G\) is an approximation of \(F_{\bullet}^{-1}\), the matrix \(I-GF_{\bullet}\) measures the quality of this approximation. Equality (3.16) shows that the approximation quality of \(F_{\bullet\,\text{KFAC-2L}}^{-1}\) is the product of those of \(R_{0}^{T}F_{\text{coarse}}^{-1}R_{0}\) and \(F_{\bullet\,\text{KFAC}}^{-1}\).
A common practice in domain decomposition is to drop the factor \(I-F_{\bullet}F_{\bullet\,\text{KFAC}}^{-1}\) (which is equivalent to replacing the residual \(r_{\text{KFAC}}=\nabla_{\theta}h-F_{\bullet}\zeta_{\text{KFAC}}\) by \(\nabla_{\theta}h\)). This amounts to approximating \(F_{\bullet\,\text{KFAC-2L}}^{-1}\) as
\[F_{\bullet\,\text{KFAC-2L}}^{-1}\approx F_{\bullet\,\text{KFAC}}^{-1}+R_{0} ^{T}F_{\text{coarse}}^{-1}R_{0}. \tag{3.17}\]
The coarse corrector of (3.17) is said to act _additively_ in domain decomposition. Clearly, the resulting matrix is inconsistent with \(F_{\bullet}^{-1}\): in fact, it is consistent with \(2F_{\bullet}^{-1}\)! No matter how crude it is, this coarse corrector is actually valid as long as \(F_{\bullet\,\text{KFAC-2L}}^{-1}\) is used only as a preconditioner for solving the system \(F_{\bullet}\zeta=\nabla_{\theta}h\): we then solve \(F_{\bullet\,\text{KFAC-2L}}^{-1}F_{\bullet}\zeta=F_{\bullet\,\text{KFAC-2L}}^{-1}\nabla_{\theta}h\) to benefit from a more favorable conditioning, but the solution we seek remains the same.
Here, in our problem, \(F_{\bullet}^{-1}\) is directly approximated by \(F_{\bullet\,\text{KFAC-2L}}^{-1}\) and therefore the inconsistent additive coarse corrector (3.17) is not acceptable. Note that Tselepidis et al. [50] adopted this additive coarse correction, in which \(F_{\text{coarse}}\) is approximated as
\[F_{\text{coarse}}\approx R_{0}\bar{F}_{\bullet}R_{0}^{T},\tag{3.18a}\]
where \(\bar{F}_{\bullet}\) is the block matrix whose blocks \([\bar{F}_{\bullet}]_{i,j}\) are given by
\[[\bar{F}_{\bullet}]_{i,j}=\left\{\begin{array}{ll}\mathbb{E}[\bar{a}_{i-1}\bar{a}_{j-1}^{T}]\otimes\mathbb{E}[g_{i}g_{j}^{T}]&\text{if}\;\;i\neq j,\\ (A_{i}+\pi_{i}\lambda^{1/2}I_{A_{i}})\otimes(G_{i}+\pi_{i}^{-1}\lambda^{1/2}I_{G_{i}})&\text{if}\;\;i=j.\end{array}\right.\tag{3.18b}\]
In this work, we focus on the consistent multiplicative coarse corrector (3.15) and also consider the exact value (3.12) for \(F_{\text{coarse}}\).
### Choice of the coarse space \(R_{0}^{T}\)
By the construction (3.9)-(3.10), we are guaranteed that
\[\left\|\zeta_{\text{KFAC-2L}}-\zeta\right\|_{F}^{2}\leq\left\|\zeta_{\text{KFAC }}-\zeta\right\|_{F}^{2} \tag{3.19}\]
for any coarse space \(R_{0}^{T}\), since the right-hand side corresponds to \(\beta=0\). The choice of \(R_{0}^{T}\) is a compromise between having a small dimension \(m\ll p\) and lowering the new error
\[\left\|\zeta_{\text{KFAC-2L}}-\zeta\right\|_{F}^{2}=\left\|-[I-R_{0}^{T}(R_{0} F_{\bullet}R_{0}^{T})^{-1}R_{0}F_{\bullet}][I-F_{\bullet\,\text{KFAC}}^{-1}F_{ \bullet}]\zeta\right\|_{F}^{2} \tag{3.20}\]
as much as possible. But it seems out of reach to carry out the minimization of the latter with respect to the entries of \(R_{0}^{T}\).
In the context of preconditioning, the idea behind a two-level method is to first remove the influence of the very large eigenvalues, which correspond to high-frequency modes, and then, thanks to the second level, to remove the smallest eigenvalues, which greatly affect convergence. To do so, we need a suitable coarse space to efficiently deal with this second level [32]. Ideally, we would like to choose the deflation subspace consisting of the eigenvectors associated with the small eigenvalues of the preconditioned operator. However, this computation is more costly than solving the linear system itself.
This leads us to choose the coarse space a priori. We consider the form
\[R_{0}^{T}=\begin{bmatrix}V_{1}&0&\ldots&\ldots&0\\ 0&V_{2}&\ldots&\ldots&0\\ \vdots&\vdots&\ddots&&\vdots\\ 0&0&\ldots&\ldots&V_{\ell}\end{bmatrix}\in\mathbb{R}^{p\times m}, \tag{3.21}\]
where each block \(V_{i}\in\mathbb{R}^{p_{i}\times m_{i}}\) has \(m_{i}\) columns with \(m_{i}\ll p_{i}\), and
\[m_{1}+m_{2}+\ldots+m_{\ell}=m. \tag{3.22}\]
To provide a comparative study, we propose to evaluate several coarse space choices of the form (3.21) that are discussed below.
**Nicolaides coarse space.**
Historically, this is the first [35] coarse space ever proposed in domain decomposition. Transposed to our case, it corresponds to
\[m_{1}=\ldots=m_{\ell}=1,\qquad m=\ell, \tag{3.23}\]
and for all \(i\in\{1,\ldots,\ell\}\),
\[V_{i}=\begin{bmatrix}1,\ldots,1\end{bmatrix}^{T}\,\in\,\mathbb{R}^{p_{i}}. \tag{3.24}\]
Originally, the motivation for selecting the vector all of whose components are equal to \(1\) is that it is the discrete version of a continuous constant field, which is the eigenvector associated with the eigenvalue \(0\) of the operator \(-\nabla\cdot(\kappa\nabla)\) (boundary conditions being set aside). Inserting it into the coarse space helps the solver take care of the lowest frequency mode. In our problem, however, there is no reason for \(0\) to be an eigenvalue of \(F\), nor for \(1\) to be an eigenvector if this is the case. Hence, there is no theoretical justification for the Nicolaides coarse space. Still, this choice remains convenient and practical. This is probably the reason why Tselepidis et al. [50] have opted for it.
**Spectral coarse space.**
This is a slightly refined version of the Nicolaides coarse space. The idea is still to capture the lowest mode [32], but since the lowest eigenvalue and eigenvector are not known in advance, we have to compute them. More specifically, we keep the values (3.23) for the column sizes within \(R_{0}^{T}\), while prescribing
\[V_{i}=\text{eigenvector associated to the smallest eigenvalue of }[F_{\bullet\,\text{KFAC}}]_{i,i} \tag{3.25}\]
for all \(i\in\{1,\ldots,\ell\}\). In our case, an advantageous feature of this definition is that the cost of computing the eigenvectors is "amortized" by that of the inverses of \([F_{\text{KFAC}}]_{i,i}\), in the sense that these two calculations can be carried out simultaneously. Indeed, let
\[A_{i}+\pi_{i}\lambda^{1/2}I_{A_{i}}=U_{A_{i}}\Sigma_{A_{i}}V_{A_{i}}^{T},\qquad G _{i}+\pi_{i}^{-1}\lambda^{1/2}I_{G_{i}}=U_{G_{i}}\Sigma_{G_{i}}V_{G_{i}}^{T} \tag{3.26}\]
be the singular value decompositions of \(A_{i}+\pi_{i}\lambda^{1/2}I_{A_{i}}\) and \(G_{i}+\pi_{i}^{-1}\lambda^{1/2}I_{G_{i}}\) respectively. Then,
\[[F_{\bullet\,\text{KFAC}}]_{i,i}^{-1} =(A_{i}+\pi_{i}\lambda^{1/2}I_{A_{i}})^{-1}\otimes(G_{i}+\pi_{i}^ {-1}\lambda^{1/2}I_{G_{i}})^{-1}\] \[=(U_{A_{i}}\Sigma_{A_{i}}V_{A_{i}}^{T})^{-1}\otimes(U_{G_{i}} \Sigma_{G_{i}}V_{G_{i}}^{T})^{-1}\] \[=(U_{A_{i}}\Sigma_{A_{i}}^{-1}V_{A_{i}}^{T})\otimes(U_{G_{i}} \Sigma_{G_{i}}^{-1}V_{G_{i}}^{T}). \tag{3.27}\]
Since \(\Sigma_{A_{i}}\) and \(\Sigma_{G_{i}}\) are diagonal matrices, their inverses are easy to compute. Now, if \(V_{A_{i}}\) and \(V_{G_{i}}\) are the eigenvectors associated to the smallest eigenvalues of \(A_{i}\) and \(G_{i}\) respectively, then the eigenvector associated to the smallest eigenvalue of \([F_{\bullet\,\text{KFAC}}]_{i,i}\) is given by
\[V_{i}=V_{A_{i}}\otimes V_{G_{i}}. \tag{3.28}\]
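Exploiting the symmetry of the damped factors, the spectral coarse vector (3.28) can be computed with two small eigendecompositions; a numpy sketch:

```python
import numpy as np

def spectral_coarse_vector(A_damped, G_damped):
    """V_i = v_A ⊗ v_G, the eigenvector for the smallest eigenvalue of A ⊗ G (3.28).

    Both factors are symmetric positive definite, so eigh applies and returns
    eigenvalues in ascending order; the smallest eigenvalue of the Kronecker
    product is then the product of the two smallest eigenvalues.
    """
    _, VA = np.linalg.eigh(A_damped)
    _, VG = np.linalg.eigh(G_damped)
    return np.kron(VA[:, 0], VG[:, 0])
```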
**Krylov coarse space.** If we do not wish to compute the eigenvector associated to the smallest eigenvalue of \([F_{\bullet\,\text{KFAC}}]_{i,i}\), then a variant of the spectral coarse space could be the following. We know that this eigenvector can be obtained by the inverse power method. The idea is then to perform a few iterations of this method, even just one or two, and to include the iterates in the coarse subspace. If \(m_{i}-1\geq 1\) is the number of inverse power iterations performed for \([F_{\bullet\,\text{KFAC}}]_{i,i}\), then we take
\[V_{i}=[v_{i},\ \ [F_{\bullet\,\text{KFAC}}]_{i,i}^{-1}v_{i},\ \ldots,\ \ [F_{\bullet\,\text{KFAC}}]_{i,i}^{-(m_{i}-1)}v_{i}]\,\in\,\mathbb{R}^{p_{i} \times m_{i}} \tag{3.29}\]
where \(v_{i}\in\mathbb{R}^{p_{i}}\) is an arbitrary vector, assumed to not be an eigenvector of \([F_{\bullet\,\text{KFAC}}]_{i,i}\) to ensure that the columns of \(V_{i}\) are not collinear. By appropriately selecting \(v_{i}\), we are in a position to use this approach to enrich the Nicolaides coarse space and the residuals coarse space (cf. next construction).
The increase in the number of columns of \(V_{i}\) is not a price to pay for avoiding the eigenvector calculation: we could have put only the last iterate \([F_{\bullet\,\text{KFAC}}]_{i,i}^{-(m_{i}-1)}v_{i}\) into \(V_{i}\). But since we have computed the previous ones, it seems more cost-effective to use them all to enlarge the coarse space. The larger the latter, the lower the minimum value of the objective function. In this work, we consider the simplest case
\[m_{1}=\ldots=m_{\ell}=2,\qquad m=2\ell. \tag{3.30}\]
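A sketch of the corresponding per-layer Krylov block (3.29), assuming a routine `solve_block` that applies \([F_{\bullet\,\text{KFAC}}]_{i,i}^{-1}\) (e.g., via the Kronecker-factored inverse):

```python
import numpy as np

def krylov_coarse_block(solve_block, v, m_i):
    """Columns [v, B^{-1} v, ..., B^{-(m_i-1)} v] for one block B = [F_KFAC]_{i,i}."""
    cols = [v]
    for _ in range(m_i - 1):
        cols.append(solve_block(cols[-1]))     # one inverse power iteration
    return np.column_stack(cols)               # V_i in (3.29)
```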
**Residuals coarse space.** We now introduce a very different philosophy of coarse space, which to our knowledge has never been envisioned before. From the construction (3.9)-(3.10), it is obvious that if the error \(\zeta-\zeta_{\text{KFAC}}\) belongs to the coarse space \(R_{0}^{T}\), that is, if it can be written as a linear combination \(R_{0}^{T}\beta^{\sharp}\) of the coarse matrix columns, then the vector \(\zeta_{\text{KFAC}}+R_{0}^{T}\beta^{\sharp}\) coincides with the exact solution \(\zeta\) and the correction
would be ideally optimal. Although this error \(\zeta-\zeta_{\text{KFAC}}\) is unknown, it is connected to the residual (3.13) by
\[\zeta-\zeta_{\text{KFAC}}=F_{\bullet}^{-1}r_{\text{KFAC}}. \tag{3.31}\]
The residual \(r_{\text{KFAC}}\) is not too expensive to compute, as it only involves the matrix-vector product \(F_{\bullet}\zeta_{\text{KFAC}}\). Unfortunately, solving a linear system involving \(F_{\bullet}\) as required by (3.31) is what we want to avoid.
But we can approximate this error by inverting with \(F_{\bullet\text{KFAC}}^{-1}\) instead of \(F_{\bullet}^{-1}\). Therefore, we propose to build a coarse space that contains \(F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}}\) instead of \(F_{\bullet}^{-1}r_{\text{KFAC}}\). To this end, we split \(F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}}\) into \(\ell\) segments, each corresponding to a layer. This amounts to choosing the values (3.23) for the column sizes and setting the columns of \(R_{0}^{T}\) as
\[V_{i}=[F_{\bullet\text{KFAC}}]_{i,i}^{-1}r_{\text{KFAC}}[i]\in\mathbb{R}^{p_{ i}},\qquad r_{\text{KFAC}}[i]=\text{vec}(\mathcal{D}W_{i})-(F_{\bullet}\zeta_{ \text{KFAC}})[i] \tag{3.32}\]
for \(i\in\{1,\ldots,\ell\}\), where for a vector \(\xi\in\mathbb{R}^{p}\) the notation \(\xi[i]=\xi(p_{i-1}+1:p_{i})\) designates the portion related to layer \(i\). Formulas (3.32) ensure that \(F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}}\) belongs to the coarse space. Indeed, taking \(\beta=[1,\ldots,1]^{T}\in\mathbb{R}^{\ell}\), we find \(R_{0}^{T}\beta=F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}}\).
**Taylor coarse space.** The previous coarse space is the zeroth-order representative of a family of more sophisticated constructions based on a formal Taylor expansion of \(F_{\bullet}^{-1}\), which we now present but which will not be implemented. Setting
\[E=I-F_{\bullet\text{KFAC}}^{-1}F_{\bullet} \tag{3.33}\]
and observing that \(F_{\bullet}=F_{\bullet\text{KFAC}}(I-E)\), we have
\[F_{\bullet}^{-1}=(I-E)^{-1}F_{\bullet\text{KFAC}}^{-1}=(I+E+\ldots+E^{q-1}+ \ldots)F_{\bullet\text{KFAC}}^{-1}. \tag{3.34}\]
The formal series expansion in the last equality rests upon the intuition that \(E\) measures the approximation quality of \(F_{\bullet}^{-1}\) by \(F_{\bullet\text{KFAC}}^{-1}\) and therefore can be assumed to be small. Multiplying both sides by the residual \(r_{\text{KFAC}}\) and stopping the expansion at order \(q-1\geq 0\), we obtain the approximation
\[(I+E+\ldots+E^{q-1})F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}} \tag{3.35}\]
for the error \(F_{\bullet}^{-1}r_{\text{KFAC}}=\zeta-\zeta_{\text{KFAC}}\), which is also the ideal correction term. As earlier, we impose that this approximate correction vector (3.35) must be contained in the coarse space \(R_{0}^{T}\). This suggests to extract the components in layer \(i\) of the vectors
\[\big{\{}F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}},\ EF_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}},\ \ldots,\ E^{q-1}F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}}\big{\}}\]
and assign them to the columns of \(V_{i}\). In view of (3.33), the space spanned by the above vectors is the same as the one spanned by
\[\big{\{}F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}},\ (F_{\bullet\text{KFAC}}^{-1}F_{ \bullet})F_{\bullet\text{KFAC}}^{-1}r_{\text{KFAC}},\ \ldots,\ (F_{\bullet\text{KFAC}}^{-1}F_{\bullet})^{q-1}F_{ \bullet\text{KFAC}}^{-1}r_{\text{KFAC}}\big{\}}.\]
Consequently, we can take
\[m_{1}=\ldots=m_{\ell}=q,\qquad m=q\ell, \tag{3.36}\]
and
\[V_{i}=[w_{1}[i],\;w_{2}[i],\;\ldots,\;w_{q}[i]]\in\mathbb{R}^{p_{i} \times m_{i}} \tag{3.37}\]
where
\[w_{1}=F_{\bullet\,\mathrm{KFAC}}^{-1}r_{\mathrm{KFAC}}\in\mathbb{R}^{p}, \qquad w_{j+1}=F_{\bullet\,\mathrm{KFAC}}^{-1}F_{\bullet\,}w_{j}\in\mathbb{R}^ {p}, \tag{3.38}\]
for \(1\leq j\leq q-1\). The case \(q=1\) degenerates to the residuals coarse space. From (3.38), we see that upgrading to the next order is done by multiplying by \(F_{\bullet}\), an operation that mixes the layers.
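The recursion (3.38) translates directly into code; a sketch assuming routines `kfac_solve` for \(F_{\bullet\,\text{KFAC}}^{-1}u\) and `F_matvec` for \(F_{\bullet}u\):

```python
def taylor_coarse_vectors(kfac_solve, F_matvec, residual, q):
    """w_1 = F_KFAC^{-1} r_KFAC and w_{j+1} = F_KFAC^{-1} F_• w_j, as in (3.38)."""
    ws = [kfac_solve(residual)]
    for _ in range(q - 1):
        ws.append(kfac_solve(F_matvec(ws[-1])))
    return ws  # split each w_j into per-layer segments to form the V_i of (3.37)
```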
For the practical implementation of these coarse spaces, we need efficient computational methods for two essential building blocks, namely, the matrix-vector product \(F_{\bullet}u\) and the coarse operator \(F_{\mathrm{coarse}}\). These will be described in Appendix A.
### Pseudo-code for two-level KFAC methods
Algorithm 1 summarizes the steps for setting up a two-level KFAC method.
```
Input: \(\theta_{0}\) (initial point), \(k_{\mathrm{max}}\) (maximum number of iterations), \(\alpha\) (learning rate)
Output: \(\theta_{k_{\mathrm{max}}}\)
for \(k=0,1,\ldots,k_{\mathrm{max}}-1\) do
    \(\bullet\) Compute an estimate \(\nabla_{\theta}h(\mathcal{S}_{k},\theta_{k})\) of the gradient on a mini-batch \(\mathcal{S}_{k}\) randomly sampled from the training data;
    \(\bullet\) Compute \(\zeta_{\mathrm{KFAC}}=F_{\bullet\,\mathrm{KFAC}}^{-1}\nabla_{\theta}h(\mathcal{S}_{k},\theta_{k})\);
    \(\bullet\) Choose a coarse space \(R_{0}^{T}\) and compute the associated coarse correction \(R_{0}^{T}\beta^{*}=R_{0}^{T}(F_{\mathrm{coarse}})^{-1}R_{0}\,r_{\mathrm{KFAC}}\);
    \(\bullet\) Compute \(\zeta_{\text{KFAC-2L}}=\zeta_{\mathrm{KFAC}}+R_{0}^{T}\beta^{*}\);
    \(\bullet\) Update \(\theta_{k+1}=\theta_{k}-\alpha\,\zeta_{\text{KFAC-2L}}\);
end for
```
**Algorithm 1** High-level pseudo-code for a two-level KFAC method
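As a self-contained sanity check of (3.19) (a toy construction of ours, with a random SPD matrix standing in for \(F_{\bullet}\) and its block-diagonal part standing in for the KFAC approximation), the Nicolaides-corrected increment is never farther from the exact one in the \(F_{\bullet}\)-norm:

```python
import numpy as np

rng = np.random.default_rng(0)
p, slices = 12, [slice(0, 4), slice(4, 8), slice(8, 12)]   # 3 toy "layers"
M = rng.standard_normal((p, p))
F = M @ M.T + 0.1 * np.eye(p)                  # stand-in for F_• (SPD)
F_bd = np.zeros_like(F)
for sl in slices:
    F_bd[sl, sl] = F[sl, sl]                   # block-diagonal approximation
grad = rng.standard_normal(p)

zeta = np.linalg.solve(F, grad)                # exact NGD increment
zeta_bd = np.linalg.solve(F_bd, grad)          # "KFAC-like" increment
R0 = np.zeros((3, p))
for i, sl in enumerate(slices):
    R0[i, sl] = 1.0                            # Nicolaides coarse space (3.24)
beta = np.linalg.solve(R0 @ F @ R0.T, R0 @ (grad - F @ zeta_bd))
zeta_2l = zeta_bd + R0.T @ beta

err = lambda x: np.sqrt((x - zeta) @ F @ (x - zeta))   # F_•-norm error
print(err(zeta_bd), err(zeta_2l))              # corrected error is always <= uncorrected
```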
## 4 Numerical results
In this section, we compare the new two-level KFAC methods designed in §3 with the standard KFAC [18, 31] from the standpoint of convergence speed. For a thorough analysis, we also include the two-level KFAC version of Tselepidis et al. [50] and baseline optimizers (ADAM and SGD).
We run a series of experiments to investigate the optimization performance of deep auto-encoders, CNNs, and deep linear networks. Since our primary focus is on convergence speed rather than generalization, we shall only be concerned with the ability of optimizers to minimize the objective function. In particular, we report only training losses for each optimizer. To equally treat all methods, we adopt the following rules.
We perform a grid search and select the hyper-parameters that yield the best reduction in the training loss. Learning rates for all methods and damping parameters for KFAC and two-level KFAC methods are searched in the range
\[\{10^{-4},\,10^{-3},\,10^{-2},\,10^{-1},\,10^{0},\,10^{1},\,10^{2},\,10^{3},\,10^{ 4}\}.\]
For each optimizer, we apply the Early Stopping technique with a patience of 10 epochs (i.e., we stop training the network when there is no decrease in the training loss during 10 consecutive epochs). We also include weight decay with a coefficient of \(10^{-3}\) for all optimizers.
All experiments presented in this work are performed with the PyTorch framework [39] on a supercomputer with Nvidia Ampere A100 GPUs and AMD Milan@2.45GHz CPUs. For ease of reading, the following table explains all abbreviations of the two-level KFAC methods that we will use in the figure legends.
### Deep auto-encoder problems
The first set of experimental tests performed is the optimization of three different deep auto-encoders, each trained with a different dataset (CURVES, MNIST, and FACES). Note that due to the difficulty of optimizing the underlying networks, these three auto-encoder problems are commonly used as benchmarks for evaluating new optimization methods in the deep learning community [7, 22, 27, 31, 48]. For each problem, we train the network with three different batch sizes.
Figure 2 shows the obtained results. The first observation is that, as expected, natural gradient-based methods (KFAC and two-level KFAC methods) outperform baseline optimizers (ADAM and SGD). The second and most important observation is that, for each of the three problems, regardless of the batch size, the training curve of KFAC and those of all two-level KFAC methods (the one of Tselepidis et al. [50] and those proposed in this work) are overlaid, which means that taking into account the extra-diagonal terms of the Fisher matrix through two-level decomposition methods does not improve the convergence speed of the KFAC method. This second observation is quite puzzling, since theoretically two-level methods are supposed to offer a better approximation to the exact natural gradient than KFAC does and therefore should at least slightly outperform KFAC in terms of optimization performance. Note that we repeated these experiments on three different random seeds and obtained very similar results.
These surprising results are in line with the findings of Benzing [6], according to which KFAC outperforms the exact natural gradient in terms of optimization performance. This suggests that extra-diagonal blocks of the FIM do not contribute to improving the optimization performance, and sometimes even affect it negatively.
| Optimizer | Name abbreviation |
| --- | --- |
| Two-level KFAC with Nicolaides coarse space | NICO |
| Two-level KFAC with spectral coarse space | SPECTRAL |
| Two-level KFAC with residuals coarse space | RESIDU |
| Two-level KFAC with Krylov Nicolaides coarse space | KRY-NICO |
| Two-level KFAC with Krylov residuals coarse space | KRY-RESIDU |
| Two-level KFAC of Tselepidis et al. [50] | PREVIOUS |

Table 1: Name abbreviations of two-level KFAC optimizers.
### Convolutional neural networks
The second set of experiments concerns the optimization of three different CNNs, namely Resnet 18 [19], Cuda-convnet and Resnet 34 [19]. We consider in particular Cuda-convnet, which is the architecture used to evaluate the original KFAC method in [18]; it contains 3 convolution layers and one MLP layer. We train Cuda-convnet on the CIFAR10 dataset [23] with a batch size equal to 256, and Resnet 18 on CIFAR100 [23] with a batch size equal to 128. Finally, we train Resnet 34 on the SVHN dataset [34] with a batch size equal to 512.
For these CNNs (see Figure 3), we arrive at observations and conclusions quite similar to those made for the deep auto-encoder problems. In particular, as in [50], when considering CNNs, we do not observe any significant gain in the convergence speed of KFAC when we enrich it with cross-layer information through two-level decomposition methods. Once again, these results corroborate the claims of Benzing [6] and suggest that we do not need to take into account the extra-diagonal blocks of the FIM.
Figure 2: Comparison of KFAC against two-level KFAC methods on the three deep auto-encoder problems (CURVES **top** row, MNIST **middle** row and FACES **bottom** row). Three different batch sizes are considered for each problem (each column corresponds to a different batch size).
### Deep linear networks
The last experiments concern relatively simple optimization problems: linear network optimization. We consider two deep linear networks. These tests are motivated by the results obtained by Tselepidis et al. [50] for their two-level method. Indeed, for an extremely simple linear network with 64 layers (each layer contains 10 neurons and a batch normalization layer) trained with randomly generated ten-dimensional input vectors, they outperform KFAC in terms of optimization performance. Here, we first consider the same architecture but train the network on the Fashion MNIST dataset [52] (since we could not use the same dataset). Then, we consider another linear network that contains 14 layers with batch normalization, this time with much larger layers. More precisely, we consider the following architecture: \(784-1000-900-800-700-600-500-400-300-200-100-50-20-10\). We train this second network on the MNIST dataset. Both networks are trained with a batch size of 512.
Figure 4 shows the training curves obtained in both cases. Here we observe, as in [50], an improvement in the optimization performance of two-level optimizers over KFAC. However, this gain remains very small and only concerns simple linear networks that are not used in practical applications. We therefore do not encourage enriching KFAC with two-level methods, which incur additional computational cost.
Figure 4: Optimization performance evaluation of KFAC and two-level KFAC optimizers on two different deep linear networks.
Figure 3: Optimization performance evaluation of KFAC and two-level KFAC methods on three different CNNs.
### Verification of error reduction for linear systems
In the above experiments, two-level methods do not seem to outperform KFAC in terms of optimization performance. We thus wish to check that, at each descent iteration, the negative increment \(\zeta_{\text{KFAC-2L}}\) obtained with the coarse correction is indeed closer to the regularized natural gradient increment \(\zeta\) than the negative increment \(\zeta_{\text{KFAC}}\) corresponding to the original KFAC. In other words, we want to make sure that inequality (3.19) holds numerically.
For \(\beta\in\mathbb{R}^{m}\), let
\[\mathfrak{E}(\beta)=\|\zeta_{\text{KFAC}}+R_{0}^{T}\beta-\zeta\|_{F_{\bullet}}^ {2} \tag{4.1}\]
be the function to be minimized at fixed \(R_{0}^{T}\) in the construction (3.9)-(3.10), where it is recalled that
\[\zeta=F_{\bullet}^{-1}\nabla_{\theta}h,\qquad\zeta_{\text{KFAC}}=F_{\bullet\,\text{KFAC}}^{-1}\nabla_{\theta}h.\]
Note that
\[\mathfrak{E}(0)=\|\zeta_{\text{KFAC}}-\zeta\|_{F_{\bullet}}^{2} \tag{4.2}\]
is the squared \(F_{\bullet}\)-distance between the KFAC increment and the natural gradient one, regardless of \(R_{0}^{T}\). Meanwhile, if \(\beta^{*}\) is taken to be the optimal value (3.11), then
\[\mathfrak{E}(\beta^{*})=\|\zeta_{\text{KFAC-2L}}-\zeta\|_{F_{\bullet}}^{2}. \tag{4.3}\]
To see whether (3.19) is satisfied, the idea is to compute the difference \(\mathfrak{E}(\beta^{*})-\mathfrak{E}(0)\) and check that it is negative. The goal of the game, however, is to avoid using the unknown natural gradient solution \(\zeta\). Owing to the identity \(\|a\|^{2}-\|b\|^{2}=(a-b,a+b)\) for the \(F_{\bullet}\)-dot product, this difference can be transformed into
\[\mathfrak{E}(\beta^{*})-\mathfrak{E}(0) =\|\zeta_{\text{KFAC-2L}}-\zeta\|_{F_{\bullet}}^{2}-\|\zeta_{ \text{KFAC}}-\zeta\|_{F_{\bullet}}^{2}\] \[=(\zeta_{\text{KFAC-2L}}-\zeta_{\text{KFAC}},\,\zeta_{\text{KFAC-2L }}+\zeta_{\text{KFAC}}-2\zeta)_{F_{\bullet}}\] \[=\|\zeta_{\text{KFAC-2L}}-\zeta_{\text{KFAC}}\|_{F_{\bullet}}^{2} +2(\zeta_{\text{KFAC-2L}}-\zeta_{\text{KFAC}},\,\zeta_{\text{KFAC}}-\zeta)_{F _{\bullet}}\] \[=\|R_{0}^{T}\beta^{*}\|_{F_{\bullet}}^{2}+2(R_{0}^{T}\beta^{*},\, \zeta_{\text{KFAC}}-\zeta)_{F_{\bullet}}\] \[=\big{\langle}F_{\bullet}R_{0}^{T}\beta^{*},R_{0}^{T}\beta^{*} \big{\rangle}+2\big{\langle}R_{0}^{T}\beta^{*},\,F_{\bullet}(\zeta_{\text{KFAC }}-\zeta)\big{\rangle}, \tag{4.4}\]
where \(\langle\cdot,\cdot\rangle\) denotes the Euclidean dot product. But
\[F_{\bullet}(\zeta_{\text{KFAC}}-\zeta)=F_{\bullet}\zeta_{\text{KFAC}}-\nabla _{\theta}h=-r_{\text{KFAC}} \tag{4.5}\]
is the opposite of the residual (3.13), which can be computed without knowing \(\zeta\). Finally, the desired difference can also be computed as
\[\mathfrak{E}(\beta^{*})-\mathfrak{E}(0)=\big{\langle}R_{0}F_{\bullet}R_{0}^{ T}\beta^{*},\,\beta^{*}\big{\rangle}-2\big{\langle}R_{0}^{T}\beta^{*},\,r_{\text{KFAC}} \big{\rangle}. \tag{4.6}\]
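Formula (4.6) is straightforward to evaluate during training; a small sketch in the same numpy setting as the earlier snippets:

```python
def two_level_gap(beta, R0, F_reg, residual):
    """E(beta*) - E(0) via (4.6); a negative value means the two-level increment
    is closer to the exact natural gradient in the F_•-norm."""
    v = R0.T @ beta                            # the coarse correction R0^T beta*
    return v @ (F_reg @ v) - 2.0 * (v @ residual)
```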
For the two-level method of Tselepidis-Kohler-Orvieto [50], the correction reads
\[\zeta_{\text{TKO}}=\zeta_{\text{KFAC}}+R_{0}^{T}\beta_{\text{TKO}}^{*}\] (4.7a) with \[\beta_{\text{TKO}}^{*}=(R_{0}F_{\bullet}R_{0}^{T})^{-1}R_{0}\nabla_{\theta}h \tag{4.7b}\]
instead of \(\beta^{*}\), the KFAC-2L value (3.11). The difference \(\mathfrak{E}(\beta_{\text{TKO}}^{*})-\mathfrak{E}(0)\) is then given by a formula similar to (4.6) in which \(\beta^{*}\) is simply replaced by \(\beta_{\text{TKO}}^{*}\).
We compute the gap \(\mathfrak{E}(\beta^{*})-\mathfrak{E}(0)\) associated with the various two-level methods in the experiments conducted above. More specifically, we do it for the three deep auto-encoder problems and also for a CNN (Cuda-convnet). The results obtained are shown in Figure 5. The observation is that all two-level methods proposed in this work, as well as the TKO two-level method [50], have negative gaps \(\mathfrak{E}(\beta^{*})-\mathfrak{E}(0)\) throughout the optimization process. This implies that two-level methods solve the linear system (3.2) more accurately than KFAC does. It also means that the approximate natural gradients obtained with two-level methods are closer to the exact natural gradient than the one obtained with KFAC.
Figure 5: Evolution of \(\mathfrak{E}(\beta^{*})-\mathfrak{E}(0)\) during training for each of the two-level methods considered. All methods proposed in this work, as well as the TKO two-level method [50], keep the gap \(\mathfrak{E}(\beta^{*})-\mathfrak{E}(0)\) negative throughout the training process.
## 5 Conclusion and discussion
In this study, we sought to improve KFAC by incorporating extra-diagonal blocks using two-level decomposition methods. To this end, we proposed several two-level KFAC methods, with a careful design of coarse corrections. Through several experiments, we came to the conclusion that two-level KFAC methods do not generally outperform the original KFAC method in terms of optimization performance of the objective function. This implies that taking into account the interactions between the layers is not useful for the optimization process.
We also numerically verified that, at the level of the linear system of each iteration, the increment provided by any two-level method is much closer to the exact natural gradient solution than that obtained with KFAC, in a norm naturally associated with the FIM. This reveals that closeness to the exact natural gradient does not necessarily result in a more efficient algorithm. This observation is consistent with Benzing's previous claim [6] that KFAC outperforms the exact natural gradient in terms of optimization performance.
The fact that incorporating extra-diagonal blocks does not improve, and often even hurts, the optimization performance of the initial diagonal approximation could be explained by a negative interaction between different layers of the neural network. This suggests ignoring the extra-diagonal blocks of the FIM and keeping the block-diagonal approximation; if one seeks to improve the block-diagonal approximation, one should focus on the diagonal blocks, as attempted in many recent works [7; 13; 15; 22].
It is worth pointing out that the conclusion of Tselepidis et al. [50] on the performance of their proposed two-level method seems a little hasty. Indeed, the authors only ran two different experiments: the optimization of a CNN and of a simple linear network. For the CNN, they did not observe any improvement. For the linear network, they obtained some improvement in optimization performance. Their conclusion is therefore based on this single observation.
Finally, we recall that, as is the case for almost every previous work related to natural gradient and KFAC methods [6; 7; 18; 31], the study undertaken in this paper is limited to the optimization performance of the objective function. It will thus be interesting to investigate the generalization capacity of these methods (including KFAC). Since the study of generalization requires a different experimental framework [6; 54; 55], we leave it for future work. Our findings and those of Benzing [6] imply that it may be worth exploring even simpler approximations of the FIM. More precisely, after approximating the FIM by a block-diagonal matrix as in KFAC, one can further approximate each full diagonal block by a matrix that is itself block-diagonal, built from smaller sub-blocks (see for instance [4]). This approach would save computational time and probably maintain the same level of optimization performance.
|
2308.16848 | Accurate Computation of Quantum Excited States with Neural Networks | We present a variational Monte Carlo algorithm for estimating the lowest
excited states of a quantum system which is a natural generalization of the
estimation of ground states. The method has no free parameters and requires no
explicit orthogonalization of the different states, instead transforming the
problem of finding excited states of a given system into that of finding the
ground state of an expanded system. Expected values of arbitrary observables
can be calculated, including off-diagonal expectations between different states
such as the transition dipole moment. Although the method is entirely general,
it works particularly well in conjunction with recent work on using neural
networks as variational Ans\"atze for many-electron systems, and we show that
by combining this method with the FermiNet and Psiformer Ans\"atze we can
accurately recover vertical excitation energies and oscillator strengths on a
range of molecules. Our method is the first deep learning approach to achieve
accurate vertical excitation energies, including challenging double
excitations, on benzene-scale molecules. Beyond the chemistry examples here, we
expect this technique will be of great interest for applications to atomic,
nuclear and condensed matter physics. | David Pfau, Simon Axelrod, Halvard Sutterud, Ingrid von Glehn, James S. Spencer | 2023-08-31T16:27:08Z | http://arxiv.org/abs/2308.16848v3 | # Natural Quantum Monte Carlo Computation of Excited States
###### Abstract
We present a variational Monte Carlo algorithm for estimating the lowest excited states of a quantum system which is a natural generalization of the estimation of ground states. The method has no free parameters and requires no explicit orthogonalization of the different states, instead transforming the problem of finding excited states of a given system into that of finding the ground state of an expanded system. Expected values of arbitrary observables can be calculated, including off-diagonal expectations between different states such as the transition dipole moment. Although the method is entirely general, it works particularly well in conjunction with recent work on using neural networks as variational Ansatze for many-electron systems, and we show that by combining this method with the FermiNet and Posiformer Ansatze we can accurately recover vertical excitation energies and oscillator strengths on molecules as large as benzene. Beyond the examples on molecules presented here, we expect this technique will be of great interest for applications of variational quantum Monte Carlo to atomic, nuclear and condensed matter physics.
## I Introduction
The computation of excited states properties of quantum systems is a fundamental challenge in chemistry and many branches of physics. Understanding electronic excitations is critical for predicting photochemical phenomena such as fluorescence and conformational changes in the presence of light [1; 2]. In condensed matter physics, excitations determine the optical band gap of semiconductors, which is critical for predicting the behavior of solar cells, photosensors, LEDs and lasers [3]. Excited states are also relevant to understanding nuclear phenomena like metastable isomers and electron capture [4]. Ultimately, the dynamics of quantum systems when stimulated cannot be understood without taking excited states into account. Despite the importance of excited states for quantum phenomena, a full computational account of excited states remains challenging.
Quantum Monte Carlo (QMC) methods [5; 6] are an appealing class of algorithms for computing the behavior of quantum systems due do the favorable scaling with the number of particles, typically \(\mathcal{O}(N^{3})-\mathcal{O}(N^{4})\), and wide applicability. Variational quantum Monte Carlo (VMC) in particular is quite conceptually simple, and consists of finding an explicit functional form for a wavefunction which minimizes a variational bound, but historically was not considered accurate enough on its own for many demanding applications. Recent work using neural networks as a wavefunction Ansatz has reinvigorated interest in VMC [7; 8], and has demonstrated that VMC can be competitive with state-of-the-art methods for ground state calculations.
In this paper, we focus on computing excited states of quantum systems by VMC. When used to optimize ground states, there are only two variational principles for QMC - energy minimization and variance minimization. Innovations in ground state VMC primarily focus on the choice of trial wavefunction [9; 10], or optimization method used to achieve the variational bound [11; 12], but the choice of objective to optimize is well-established. The same cannot be said for variational optimization of excited states.
Approaches for computing excited states by VMC can be broken down into several categories. Most methods are either state-_targeting_, in that they aim to find a single excited state, or state-_averaging_, in that they aim to find the lowest-lying exciting states by minimizing the total weighted energy of many states simultaneously. Among state-targeting methods, there are methods which target specific energy ranges [13; 14], specific symmetries of the system [15], or a specific ordering of the roots (i.e. the \(k\)-th lowest state) [16]. For state-averaging approaches, the different states must be kept orthogonal, which can be achieved by including a penalty term in the variational bound which pushes the states apart [17; 15; 18], or by explicitly constructing orthogonal Ansatze, sometimes repeatedly re-orthogonalizing during optimization [19; 20; 21; 22].
All of these approaches have drawbacks and limitations. Targeting specific symmetries or energy ranges requires prior knowledge about the states of interest which may not be available, and state-targeting by variance minimization can lose track of the desired state [21]. Root-targeting methods are prone to root-flipping, whether they are used for QMC or other computational paradigms [23; 24]. Some methods require solving a generalized eigenvalue problem from stochastic estimates of the Hamiltonian and overlap matrices, which introduces biases into the gradients [16; 25]. Penalty methods often have problems with multiple Ansatze collapsing onto the same state, or have biased gradients [18], and the
strength of the penalty term is a free parameter which must be chosen. Constructing orthogonal Ansatze is usually only possible when the Ansatz is a linear combination of basis set functions [26; 27], which rules out many recently-developed Ansatze based on deep neural networks [28; 29; 30; 7]. Heuristics such as variance matching may be required to achieve good numerical results for all approaches. Despite almost four decades of work on QMC methods for excited states [26; 31], no single variational principle has emerged which has no free parameters, has convergence guarantees when optimizing with noisy Monte Carlo estimates, and is applicable to all possible Ansatze and all excited states, regardless of symmetry.
Here we present a new variational principle for computing the lowest excited states of a quantum system by Monte Carlo which does not suffer from any of these limitations. Our method can be seen as a state-averaging approach with a particular choice of sampling distribution which does not require the states to be orthogonal. This choice of sampling distribution is equivalent to reformulating the problem of finding \(K\) excited states of an \(N\) particle system into the problem of finding the ground state of a \(K\)-fermion system where each fermion is equivalent to \(N\) particles in the original system. Instead of orthogonalizing the states, the local energy is promoted from a scalar to a matrix, which gives unbiased estimates of a matrix whose eigenvalues are the energies of orthogonal states. Because wavefunction optimization can be done by stochastic gradient descent from unbiased noisy estimates of the total energy, the procedure is guaranteed to converge to a local minimum of the total energy over states. Due to the many desirable mathematical properties which follow from the choice of sampling distribution, we refer to our proposed approach as _natural excited states_ for VMC (NES-VMC).
## II Method
### Variational Monte Carlo
First we briefly review ground-state VMC and establish some notation. We will stick to the notation of first quantization and consider a system of \(N\) particles with states \(\mathbf{x}=\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\), although everything we discuss could be applied to variational Ansatze represented in second quantization as well. We aim to find the lowest eigenfunction of a Hamiltonian operator \(\hat{H}\). This can be done by reformulating the eigenfunction problem in variational form, as one of finding the minimum of the Rayleigh quotient:
\[\psi^{*}=\arg\min_{\psi}\frac{\langle\psi\hat{H}\psi\rangle}{\langle\psi^{2} \rangle} \tag{1}\]
where the Ansatz \(\psi\) is not necessarily normalized. Computing this quotient involves taking high-dimensional integrals over all possible particle states \(\mathbf{x}\), and can be approximated by Monte Carlo integration. Many choices of Monte Carlo sampling distribution \(p(\mathbf{x})\) are possible, but if \(p(\mathbf{x})\propto\psi^{2}(\mathbf{x})\), then the Rayleigh quotient take a simple form that allows for unbiased empirical estimation of the energy and gradients of the energy:
\[\frac{\langle\psi\hat{H}\psi\rangle}{\langle\psi^{2}\rangle}=\mathbb{E}_{ \mathbf{x}\sim\psi^{2}}\left[\psi^{-1}(\mathbf{x})\hat{H}\psi(\mathbf{x})\right] \tag{2}\]
For this reason, \(\psi^{2}\) is the natural choice of sampling distribution for ground state estimation. The scalar \(E_{L}(\mathbf{x})\mathbf{\triangleq}\psi^{-1}(\mathbf{x})\hat{H}\psi(\mathbf{ x})\) that appears inside the expectation is the _local energy_, and at any eigenfunction of \(\hat{H}\) it will be constant if \(\hat{H}\) is a local operator.
### Natural Excited States
Going from ground states to excited states, we aim to find the lowest \(K\) eigenfunctions of \(\hat{H}\). We refer to a single set of \(N\) particle states as a _particle set_, and denote different particle sets with an upper index, so that \(\mathbf{x}^{i}\) denotes a set of \(N\) particles \(\mathbf{x}^{i}_{1},\ldots,\mathbf{x}^{i}_{N}\). For the remainder of the article, we will use \(\mathbf{x}\) to denote the complete state of all particle sets \(\mathbf{x}^{1},\ldots,\mathbf{x}^{K}\). Let \(\psi_{i}\) denote a (possibly unnormalized) N-particle wavefunction, then we are trying to find wavefunctions \(\psi_{1},\ldots,\psi_{K}\) which approximate the lowest excited states. Let \(\mathbf{\Psi}(\mathbf{x})\in\mathbb{R}^{K\times K}\) denote the matrix combining all electron sets with all wavefunctions:
\[\mathbf{\Psi}(\mathbf{x})\overset{\triangle}{\equiv}\begin{pmatrix}\psi_{1}( \mathbf{x}^{1})&\ldots&\psi_{K}(\mathbf{x}^{1})\\ \vdots&&\vdots\\ \psi_{1}(\mathbf{x}^{K})&\ldots&\psi_{K}(\mathbf{x}^{K})\end{pmatrix} \tag{3}\]
The determinant of this matrix \(\Psi(\mathbf{x})=\det(\mathbf{\Psi}(\mathbf{x}))\) can be thought of as an unnormalized Slater determinant, except that instead of single-particle orbitals, it is made up of N-particle wavefunctions. We call \(\Psi(\mathbf{x})=\det(\mathbf{\Psi}(\mathbf{x}))\) the _total Ansatz_, while the individual \(\psi_{i}\) are the _single-state Ansatze_.
Rather than optimizing the single-state Ansatze in order from lowest to highest energy, we will only optimize the total Ansatz to minimize the total energy of all states. This is conceptually quite similar to state-averaging approaches in VMC, except that we will not explicitly enforce the orthogonality of the different single-state Ansatze. Note that taking any linear combination of single-state Ansatze \(\psi^{\prime}{}_{i}=\sum_{j}a_{ij}\psi_{j}\) only changes the total Ansatz by a constant factor. Also note that if two single-state Ansatze are the same, the total Ansatz becomes zero. Thus, by representing the total Ansatz as a determinant of single-state Ansatze, we can prevent the collapse of different Ansatze onto the same state, without requiring them to be orthogonal.
For an arbitrary operator \(\hat{\mathcal{O}}\) that acts on \(N\)-particle wavefunctions, let \(\hat{\mathcal{O}}\mathbf{\Psi}(\mathbf{x})\) denote the matrix of all values
of this operator applied to all single-state Ansatze and particle sets:
\[\hat{\mathcal{O}}\mathbf{\Psi}(\mathbf{x})\triangleq\begin{pmatrix}\hat{\mathcal{O}}\psi_{1}(\mathbf{x}^{1})&\ldots&\hat{\mathcal{O}}\psi_{K}(\mathbf{x}^{1})\\ \vdots&&\vdots\\ \hat{\mathcal{O}}\psi_{1}(\mathbf{x}^{K})&\ldots&\hat{\mathcal{O}}\psi_{K}(\mathbf{x}^{K})\end{pmatrix} \tag{4}\]
Not only is diagonalizing \(\mathbb{E}_{\Psi^{2}}[\mathbf{E}_{L}(\mathbf{x})]\) sufficient to recover the energies - it also provides us with the necessary change of basis to evaluate other observables \(\hat{\mathcal{O}}\), even off-diagonal observables \(\langle\psi_{i}\hat{\mathcal{O}}\psi_{j}\rangle\) between states. This can be seen due to the identity \(\mathbb{E}_{\Psi^{2}}[\mathbf{\Psi}^{-1}\hat{\mathcal{O}}\mathbf{\Psi}]= \mathbf{S}^{-1}\hat{\mathbf{O}}\), and for single-state Ansatze which are a linear combination of eigenfunctions, \(\mathbf{S}^{-1}\hat{\mathbf{O}}=\mathbf{A}^{-1}\hat{\mathbf{O}}^{*}\mathbf{A}\). So if we accumulate and diagonalize \(\mathbb{E}_{\Psi^{2}}[\mathbf{E}_{L}(\mathbf{x})]\) and use the resulting eigenvectors to compute \(\mathbf{U}^{-1}\mathbb{E}_{\Psi^{2}}[\mathbf{\Psi}^{-1}\hat{\mathcal{O}} \mathbf{\Psi}]\mathbf{U}\), then in the vicinity of the true ground state of the total Ansatz the result will be approximately \(\mathbf{\Sigma}^{-1}\hat{\mathbf{O}}^{*}\mathbf{\Sigma}\). Along the diagonal, this gives exactly the expectations \(\langle\psi_{i}^{*}\hat{\mathcal{O}}\psi_{i}^{*}\rangle\). Off the diagonal, this yields \(\frac{\sigma_{i}}{\sigma_{j}}\langle\psi_{i}^{*}\hat{\mathcal{O}}\psi_{j}^{*}\rangle\). If we multiply the matrix elementwise by its transpose, the \(\sigma_{i}\) terms cancel out, and we recover \(\langle\psi_{i}^{*}\hat{\mathcal{O}}\psi_{j}^{*}\rangle^{2}\), which gives the expectation up to a sign factor. This sign factor is not physically observable however, and in practice for computing quantities like the oscillator strength, only the expectation squared is needed.
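A minimal sketch of this post-processing, assuming the Monte Carlo averages have already been accumulated elsewhere (the function and argument names are ours):

```python
import numpy as np

def energies_and_observables(EL_avg, O_avg):
    """Post-process accumulated Monte Carlo averages:
    EL_avg ~ E_{Psi^2}[E_L(x)], O_avg ~ E_{Psi^2}[Psi^{-1} O Psi]."""
    # Eigen-decompose the local-energy matrix; its eigenvalues approximate
    # the energies of the underlying states near the optimum.
    energies, U = np.linalg.eig(EL_avg)
    order = np.argsort(energies.real)
    energies, U = energies[order].real, U[:, order]

    # Rotate the observable matrix into the same basis; near the optimum
    # this is approximately Sigma^{-1} O* Sigma.
    O_rot = np.linalg.solve(U, O_avg @ U)

    # Elementwise multiplication by the transpose cancels the sigma_i/sigma_j
    # factors, leaving the squared matrix elements <psi_i* O psi_j*>^2.
    return energies, O_rot * O_rot.T
```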
### Neural Network Ansatze
Variational Monte Carlo was traditionally used for ground state calculations mainly to find a trial wavefunction for more accurate projector QMC methods like diffusion Monte Carlo [5] or auxiliary field Monte Carlo [33]. However, in recent years, advances in deep neural networks have led to their use as accurate Ansatze for studying spin systems [7], electronic structure [8] and nuclear systems [34], often reaching levels of accuracy rivaling projector QMC methods. This has led to a renewed interest in VMC as a standalone method. While a variety of different neural network architectures can be used depending on the problem, such as restricted Boltzmann machines [7], convolutional neural networks [35], and autoregressive models [36], a number of custom architectures have been developed specifically for many-body electronic structure problems in first quantization [28; 29; 30; 37; 38; 39; 40; 41]. Most of these Ansatze start from a linear combination of Slater determinants:
\[\psi(\mathbf{x})=\sum_{k}\omega_{k}\text{det}\begin{pmatrix}\phi_{1}^{k}( \mathbf{x}_{1})&\ldots&\phi_{N}^{k}(\mathbf{x}_{1})\\ \vdots&&\vdots\\ \phi_{1}^{k}(\mathbf{x}_{N})&\ldots&\phi_{N}^{k}(\mathbf{x}_{N})\end{pmatrix} \tag{11}\]
It has long been recognized [42] that the single-particle orbitals in a Slater determinant can be generalized to depend on _all_ particles, so long as they depend on all but one in a permutation-independent manner:
\[\psi(\mathbf{x})=\sum_{k}\omega_{k}\text{det}\begin{pmatrix}\phi_{1}^{k}( \mathbf{x}_{1};\{\mathbf{x}_{/1}\})&\ldots&\phi_{N}^{k}(\mathbf{x}_{1};\{ \mathbf{x}_{/1}\})\\ \vdots&&\vdots\\ \phi_{1}^{k}(\mathbf{x}_{N};\{\mathbf{x}_{/N}\})&\ldots&\phi_{N}^{k}(\mathbf{x }_{N};\{\mathbf{x}_{/N}\})\end{pmatrix} \tag{12}\]
where \(\{\mathbf{x}_{/i}\}\) denotes the set of all particles _except_ \(\mathbf{x}_{i}\). In the event that the particles are spin-assigned, the orbitals can also be expressed as \(\phi_{i}^{k}(\mathbf{x}_{j}^{\uparrow};\{\mathbf{x}_{/j}^{\uparrow}\},\{\mathbf{x}^{\downarrow}\})\) where the function is only invariant to changing the order of particles of the same spin. Most neural network Ansatze for electrons in real space implement this idea by using permutation-equivariant deep neural networks to represent the orbitals, sometimes with a multiplicative Jastrow factor to account for pairwise interactions [29; 30; 38].
Extending these Ansatze to represent multiple states is quite straightforward. Each state is still expressed as a sum of determinants of generalized neural network orbitals; there are simply more orbitals:
\[\psi_{i}(\mathbf{x})=\sum_{k}\omega_{ik}\text{det}\begin{pmatrix}\phi_{1}^{ik}(\mathbf{x}_{1};\{\mathbf{x}_{/1}\})&\ldots&\phi_{N}^{ik}(\mathbf{x}_{1};\{\mathbf{x}_{/1}\})\\ \vdots&&\vdots\\ \phi_{1}^{ik}(\mathbf{x}_{N};\{\mathbf{x}_{/N}\})&\ldots&\phi_{N}^{ik}(\mathbf{x}_{N};\{\mathbf{x}_{/N}\})\end{pmatrix} \tag{13}\]
Nothing is changed about the neural network architecture itself, just the number of orbitals is increased proportionally to the number of states.
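Given orbital matrices already evaluated by such a network, assembling the multi-state Ansatze of Eq. (13) is a single batched determinant; a sketch with illustrative shapes (all names are ours):

```python
import numpy as np

def multi_state_ansatz(phi, omega):
    """Eq. (13), vectorized. phi: [n_states, n_dets, N, N] generalized orbitals
    phi^{ik}(x_j; {x_/j}) already evaluated on one sample; omega: [n_states, n_dets]
    determinant weights. Returns the vector of single-state Ansatze psi_i."""
    dets = np.linalg.det(phi)  # batched determinants, shape [n_states, n_dets]
    return (omega * dets).sum(axis=-1)

# e.g. 4 states, 2 determinants per state, N = 6 particles:
rng = np.random.default_rng(0)
psi = multi_state_ansatz(rng.normal(size=(4, 2, 6, 6)), rng.normal(size=(4, 2)))
```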
Neural network Ansatze differ from classic Ansatze like the Slater-Jastrow-backflow Ansatz [9] in important ways which make it difficult to apply existing excited state methods. Many methods assume that the Ansatz is a linear combination of orthogonal basis functions like Slater determinants, a necessary assumption for maintaining the orthogonality of states, either through explicit construction or a diagonalization step [19]. Classic Ansatze are usually optimized through a small number of gradient steps, where each gradient step is accumulated over a large number of MCMC steps, so that the gradients are nearly deterministic. Most modern deep neural networks, by contrast, are optimized by stochastic gradient descent using a large number of small, noisy steps [43]. This means bias in the gradients becomes a more significant concern.
Existing work on excited state calculations with neural networks has focused on penalty methods [18; 15], but these still require choosing a free parameter trading off total energy and penalty strength, and may not exactly satisfy orthogonality in the states. Some of these methods also have biased gradients in the penalty term [18] due to nonlinearities meant to push states apart more strongly. By contrast, the NES-VMC method has no free parameters to tune, can be optimized by unbiased gradients that have the same form as for ground state calculations, does not require the states to be orthogonal, and makes no assumption on the functional form of the Ansatz. Thus, while NES-VMC is generally applicable to _all_ excited state VMC calculations, it is particularly well-tailored for use with recently developed neural network Ansatze.
## III Results
While the natural excited states method is fully general and can be applied to any quantum Hamiltonian, our experimental validation is focused on electronic structure in atoms and molecules, due to the abundant experimental and computational literature to compare against. For
all experiments, we are solving the Schrodinger equation in the Born-Oppenheimer approximation [44]:
\[\hat{H}= -\frac{1}{2}\sum_{i}\nabla_{i}^{2}+\sum_{i>j}\frac{1}{|\mathbf{r}_ {i}-\mathbf{r}_{j}|}\] \[-\sum_{iI}\frac{Z_{I}}{|\mathbf{r}_{i}-\mathbf{R}_{I}|}+\sum_{I>J }\frac{Z_{I}Z_{J}}{|\mathbf{R}_{I}-\mathbf{R}_{J}|} \tag{14}\]
where the indices \(i\) and \(j\) are over electrons and \(I\) and \(J\) are over atomic nuclei with fixed locations.
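For reference, the potential terms of Eq. (14) can be evaluated directly from particle coordinates; a minimal sketch in atomic units (the kinetic term, which requires Laplacians of the Ansatz, is omitted; names are ours):

```python
import numpy as np

def coulomb_potential(r, R, Z):
    """Potential terms of Eq. (14) in Hartree atomic units.
    r: electron positions [n, 3]; R: nuclear positions [m, 3]; Z: charges [m]."""
    v = 0.0
    n, m = len(r), len(R)
    for i in range(n):                      # electron-electron repulsion
        for j in range(i + 1, n):
            v += 1.0 / np.linalg.norm(r[i] - r[j])
    for i in range(n):                      # electron-nucleus attraction
        for I in range(m):
            v -= Z[I] / np.linalg.norm(r[i] - R[I])
    for I in range(m):                      # nucleus-nucleus repulsion
        for J in range(I + 1, m):
            v += Z[I] * Z[J] / np.linalg.norm(R[I] - R[J])
    return v
```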
To try to disentangle the effect that the choice of Ansatz has on performance, we investigated two different neural network architectures: the Fermionic Neural Network (FermiNet) [29] and the Wavefunction Transformer (Psiformer) [38]. While the Psiformer has generally been found to be more accurate on large systems, it is also slower, and for ground state calculations up to approximately 15 electrons, no appreciable difference in accuracy between the two has been found.
### Atomic Spectra
As an initial check of the correctness of our method, we investigate the excited states of first-row atoms, from lithium to neon. Atomic spectral lines have been the subject of some of the highest-precision measurements in all of science, and while we do not aim to reach spectroscopic accuracy, we can have high confidence in the accuracy of the measurements, and do not need to worry about effects such as adiabatic relaxation and zero-point vibrational energy which affect molecular measurements. All experimental data was taken from the energy level tables in the NIST Handbook of Basic Atomic Spectroscopic Data [32]. Because we are working with the nonrelativistic Schrodinger equation without spin-orbit corrections, we are not able to compute fine or hyperfine structure. To remove the fine structure, experimental energy levels with different total angular momenta are averaged together weighted by the degeneracy \(2J+1\) and treated as a single level. The hyperfine structure is too small to be of concern here. To investigate the effect of the choice of Ansatz as well as the choice of number of states \(k\) to compute, we ran calculations with the FermiNet with both 5 and 10 states, as well as the Psiformer with 10 states. Results are given in Fig. 1, with numerical results (including error bars) in the Appendix in Table 2.
For all atoms, NES-VMC gives results closely matching experiment. From lithium up to oxygen, the error relative to experiment is far less than 1 mHa (27.2 meV) for all but the highest excited state, and is often less than 0.1 mHa, an exceedingly high level of accuracy for a deep neural network Ansatz. On lithium, all Ansatze correctly converge to the \({}^{2}S\) and \({}^{2}P^{\circ}\) states, which are missed by the PauliNet penalty method [18]. The method struggles in some cases to get the highest energy state correct, but this seems to be improved by simply computing more states - for instance, the error in the \({}^{4}P\) states of fluorine is cut in half by increasing the number of states from 5 to 10. In rare cases, the highest state
Figure 1: Excited state energies for first row atoms from lithium to neon. Results from natural excited state VMC applied to the FermiNet (10 states, blue, 5 states, red) are shown on top of experimental results [32]. Spectral lines which match computed states are labeled with electron configurations and atomic term symbols (except for the highest levels of F and Ne, where term symbols are omitted for clarity). For all but the largest systems and highest excited states, there is excellent agreement with experiment. The discrepancy between 5 and 10 excited states is minimal except for the highest excited states of F and Ne, where computing more states increases the accuracy of a given state. Complete numerical results are given in Table 2.
seems to converge to the incorrect state, such as boron with the Psiformer, which seems to converge to the \({}^{2}P^{\circ}\) state rather than the last \({}^{2}D\) state. Fluorine and neon both have relatively large errors on the order of 1-2 mHa for low-lying states, but going from the FermiNet to the more accurate Psiformer Ansatz seems to reduce this error in all cases. The largest errors are in the highest states of fluorine and neon, where the error is significant. In this case we suspect the difficulty is due to the large number of different states with similar electron configurations and energies, and hope that by computing even more states or by using even more expressive Ansatze, the effects of individual states can be disentangled. The excellent performance on low-lying states gives us confidence that NES-VMC is mathematically sound.
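As described above, the fine structure in the experimental reference data is removed by a degeneracy-weighted average over the levels of a term; a minimal sketch (the example splittings are illustrative):

```python
import numpy as np

def remove_fine_structure(E, J):
    """Collapse fine-structure levels of one term into a single level by
    averaging with degeneracy weights 2J + 1."""
    g = 2 * np.asarray(J) + 1
    return np.average(E, weights=g)

# e.g. three J-levels of a 3P term (splittings in cm^-1, illustrative):
print(remove_fine_structure([0.0, 16.4, 43.4], [0, 1, 2]))
```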
### Oscillator Strengths
Going beyond results on single atoms and vertical excitation energies, we are interested in the performance of NES-VMC on more complicated molecular systems, as well as observable quantities other than the energy. The QUEST database [45; 46; 48; 49; 50; 51; 52; 53; 54] is an excellent source of well-controlled benchmark vertical excited states calculations using coupled cluster methods on molecules of various sizes, with consistent geometries and basis set extrapolations. Of particular interest is the subset of QUEST for which oscillator strengths have been computed [45], as oscillator strengths provide a strong test of how well an excited state method can perform on experimentally-observable quantities, and especially as oscillator strength and transition probability calculations are known to be highly sensitive to choices of basis set [55].
Oscillator strengths are a measure of the probability of transition between different states occurring as a result of photon emission or absorption. Under the assumption that the wavelength of the photon is much longer than the system under consideration, so the interaction can be approximated by a constant electric field, the transition dipole moment between two states gives a measure of how that transition will interact with light:
\[\mathbf{d}_{ij}=\left\langle\psi_{i}^{\dagger}\sum_{k}q_{k}\mathbf{r}_{k}\psi_ {j}\right\rangle \tag{15}\]
where the sum over \(k\) is taken over all particles in the system with charge \(q_{k}\) and position \(\mathbf{r}_{k}\). For electrons, \(q_{k}=-e\). The transition dipole moments are vector-valued quantities which include a complex phase factor, and are not directly observable. The oscillator strength
Figure 2: Vertical excitation energies and oscillator strengths for small molecules from Chrayteh _et al._[45]. Singlet states are in blue and triplet states are in gray. NES-VMC results are indicated by markers while theoretical best estimates from Chrayteh _et al._[45] or directly from QUEST [46] are given by the lines. When no data from QUEST is available, no TBE is given. Experimental results from Chrayteh _et al._[45] and references therein are given by the dashed lines in green. Where available, energies and oscillator strengths from Entwistle _et al._[18] are provided by the black triangles for comparison, with (pointing left) and without (pointing right) variance matching. In most cases, our results on both energies and oscillator strengths agree closely with theoretical best estimates. Complete numerical results are given in Table 3.
of a particular transition can be computed from the transition dipole moment:
\[f_{ij}=\frac{2}{3}\frac{m}{\hbar^{2}}\left(E_{i}-E_{j}\right)|\mathbf{d}_{ij}|^{2} \tag{16}\]
which reduces the transition dipole moment to a dimensionless positive scalar. In natural excited states, we can compute expectations of operators between different states up to an arbitrary sign factor, and that sign factor goes away in the oscillator strength. Computational details are discussed in more detail in Sec. C.3.
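Given state energies and the squared transition dipole moment, Eq. (16) reduces to a one-liner in Hartree atomic units (\(m=\hbar=1\)); a sketch with names of our choosing:

```python
def oscillator_strength(E_i, E_j, d_sq):
    """Eq. (16) in Hartree atomic units (m = hbar = 1). E_i, E_j are state
    energies and d_sq is |d_ij|^2, which NES-VMC recovers up to a sign that
    cancels in the square."""
    return (2.0 / 3.0) * (E_i - E_j) * d_sq

# e.g. a 0.3 Ha gap with |d|^2 = 0.5 a.u. gives f = 0.1
print(oscillator_strength(E_i=0.3, E_j=0.0, d_sq=0.5))
```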
We applied NES-VMC to all of the small molecules investigated in Chrayteh _et al._[45], computing the 5 lowest energy states with both the FermiNet and Psiformer. Results are presented in Fig. 2 and Table 3. Wherever possible, we take results from QUEST [45, 46] to be theoretical best estimates (TBEs) for comparison, though for many of the states we converged to, especially triplets, no results exist in QUEST. For molecules with heavier atoms (HCl, H\({}_{2}\)S, H\({}_{2}\)CSi), we found that using pseudopotentials for the heaviest atoms significantly improved the accuracy of the results, likely because the total energy scale was reduced by ignoring core electrons. Where applicable, we also include a comparison against the VMC penalty method of Entwistle _et al._[18]. We omit N\({}_{2}\) because the lowest-lying excited states are all triplets. For all diatomic systems, the \({}^{1}\Pi\) state is doubly-degenerate, and so the baseline oscillator strengths are divided by two to match the computed results.
In almost all cases, both the vertical excitation energies and the oscillator strengths are in excellent agreement with the TBE. The vertical excitation energies are almost all within chemical accuracy (1.6 mHa or 0.04 eV) of the TBE while the oscillator strengths usually diverge from the TBE by at most an amount on the order of 0.001, comparable to the uncertainty in the calculations. The results of Entwistle _et al._, in contrast, often differ noticeably from other theoretical results, even when corrections using variance matching are applied. This is particularly noticeable for the oscillator strengths. We note that we do not use variance matching for any of the NES-VMC calculations.
There are a few cases where NES-VMC behaves oddly. While the FermiNet and Psiformer find nearly identical vertical excitation energies for the \({}^{1}\Pi\) state of HCl, and the FermiNet accurately predicts the oscillator strength, the Psiformer mistakenly finds this to be a dark state. On formaldehyde (CH\({}_{2}\)O), both the FermiNet and Psiformer fail to find the \({}^{3}A_{1}\) state at all, and the oscillator strength for the \({}^{1}B_{2}\) state diverges from the TBE by a significant margin, although the Psiformer halves that margin relative to the FermiNet. Vertical excitation energies for systems with heavier atoms, such as H\({}_{2}\)S, and the highest state of thioformaldehyde (CH\({}_{2}\)S), are not quite
Figure 3: Excited states of the carbon dimer (C\({}_{2}\)). (a) The symmetries of the different states can be identified by evaluating each single state Ansatz at location \(\mathbf{r}\) and \(-\mathbf{r}\) for parity symmetry (u/g, blue) or by flipping \(\mathbf{r}\) across the x-axis for reflection symmetry (+/–, orange). (b) The vertical and adiabatic energies of excited states of C\({}_{2}\). The green line indicates experimental energies [47] and the red line indicates the energy of the \(B^{1}\Delta_{g}\) state from QUEST [48]. Bright transitions are labelled with their oscillator strength and, when available, their names. (c) Visualization of the 8 lowest natural orbitals of C\({}_{2}\). (d) The occupancy of the different natural orbitals for the different excited states of C\({}_{2}\), identified from the density matrix of each state. The \(a^{3}\Pi_{u}\) through \(A^{1}\Pi_{u}\) states are single excitations while the last two states are double excitations. Complete numerical results are given in Table 4.
as accurate as other results, though in the case of thioformaldehyde we are hopeful that, consistent with the atomic results in the previous section, computing more states will reduce the error in the \({}^{3}B_{2}\) state. For nitroxyl (HNO), the FermiNet fails to converge to the \({}^{1}A^{\prime}\) state, but the Psiformer finds it correctly, albeit with a relatively large error in the vertical excitation energy. This suggests that there are occasional difficulties in getting NES-VMC to converge to all low-lying states, but we are hopeful that improvements in optimization methods can improve this in the future. What is clear is that NES-VMC works well in the large majority of cases, and is far more accurate than alternative methods which have been proposed for neural network Ansatze.
Other QMC methods have also been applied to some of these systems. In particular, the QMC-CIPSI method has been successfully applied to computing the vertical excitation energies of the \({}^{1}A_{2}\) state in formaldehyde and thioformaldehyde to within chemical accuracy, using a conventional Slater-Jastrow Ansatz [56]. While the QMC-CIPSI method cannot be applied to neural network Ansatze, this suggests that good results can still be achieved with VMC with a simple Ansatz, and that the benefit of using NES-VMC relative to the penalty method in Entwistle _et al._ is due to the method rather than the choice of Ansatz.
### Carbon Dimer
In addition to computing observable quantities, it is also desirable to be able to say something about the _nature_ of different excited states - whether a state is a valence or Rydberg or charge transfer excitation, what its symmetries are, whether it is a single or double excitation, etc. As a benchmark system for demonstrating the ability of NES-VMC to characterize different states, we study the carbon dimer (C\({}_{2}\)). Despite its small size, the carbon dimer has a complicated electronic structure with a large number of low-lying excited states [60; 47; 59]. Due to the existence of very strong bands in the visible spectrum, the carbon dimer is frequently detected in astrophysical measurements, and can be observed in comets rich in organic materials [61]. The exact bond order of C\({}_{2}\) is still a subject of some controversy - while molecular orbital theory would classify it as a double bond, valence bond calculations suggest it may be better described as a quadruple bond [62]. And the carbon dimer is one of the smallest molecules to have low-lying double excitations, a class of excited state which other methods often struggle with [48]. Correctly reconstructing the potential energy curves for different low-lying states requires correctly disentangling and characterizing these different states at different geometries.
We compute the 8 lowest-lying states of the carbon dimer at several different bond lengths using NES-VMC and the Psiformer Ansatz, and present the results in Figs. 3 and 4. At equilibrium (1.244 Å), we classify the different states by computing their spin magnitude and their symmetries - both parity symmetry (u/g) where the electron positions \(\mathbf{r}\) are replaced by \(-\mathbf{r}\) and reflection symmetry (+/-) where the positions are flipped on the x-axis. We do not compute the orbital angular momentum operator, but confirm that we see the expected degeneracy, for instance \(\Pi\) states are doubly degenerate (one of each reflection symmetry). The oscillator strengths show several bright transitions, which we show in Fig. 3b. Due to the degeneracy of the \(\Pi\) states, we add the oscillator strengths together to give the total strength. We correctly identify the Phillips and Ballik-Ramsay systems [63; 64], as well as the unnamed \(B^{1}\Delta_{g}\to A^{1}\Pi_{u}\) transition. We also find that the energy of the \(B^{1}\Delta_{g}\) state closely matches the TBE in QUEST [48]. The \(A^{1}\Pi_{u}\), \(c^{3}\Sigma_{u}^{+}\) and \(b^{3}\Sigma_{g}^{-}\) states all have nearly the same energy, so correctly identifying the oscillator strengths for these transitions is very challenging.
To better understand the nature of each excited state, we compute the occupancy of the different natural orbitals. We first compute the one-electron reduced density matrix (1-RDM) for each single-state Ansatz in a large basis set and then diagonalize these matrices to find the natural orbitals, as described in more detail in Sec. C.2. In this case, using the Hartree-Fock orbitals as the basis set, we find that all 1-RDMs are nearly diagonal, that is the natural orbitals closely match the Hartree-Fock molecular orbitals. We see in Fig. 3d that all states above the ground state involve excitation of electrons into the \(2p_{z}\sigma_{g}\) orbital. The \(\Pi\) states are well-described by single excitations from one of the \(2p\pi_{u}\) orbitals while the \(c^{3}\Sigma_{u}^{+}\) state promotes an electron from the \(2s\sigma_{u}^{*}\) orbital. Finally, both the \(b^{3}\Sigma_{g}^{-}\) and \(B^{1}\Delta_{g}\) states are double excitations of the \(2p\pi_{u}\) electrons into the \(2s\sigma_{u}^{*}\) orbital, as expected. Not only is NES-VMC able to predict double excitation energies correctly, but by having an explicit functional form
Figure 4: Potential energy curves of the low-lying excited states of C\({}_{2}\) which can be uniquely identified from their symmetries. Complete numerical results are given in Table 5.
for the wavefunction Ansatz, we can compute quantities such as the reduced density matrices which allow us to derive insight about the nature of electronic excitations.
Predicting experimental excitation energies requires computing the energy difference between different states in their respective lowest energy configurations, the so-called _adiabatic_ excitation energy. To compute this for C\({}_{2}\), we repeated the equilibrium calculations at a number of different bond lengths. Wherever possible, we matched the energy levels at different geometries to the appropriate states based on the same symmetries as in Fig. 3a, and for five states we were able to reconstruct enough of the potential energy curve to identify the minimum energy for each. The results are shown in Fig. 4, smoothed by cubic interpolation. Taking the difference between the minimum energies of each state gives an estimate of the adiabatic excitation energy, which we show in purple in Fig. 3b, and in 3 out of 4 cases we matched the experimental energy [47] to within roughly 0.01 eV. We did not estimate the zero-point vibrational energies, but believe this may explain the discrepancy in the \(c^{3}\Sigma_{u}^{+}\) state. This shows that not only can NES-VMC match other theoretical calculations of vertical excitation energies, but it can also predict experimental results to high accuracy.
### Twisted Ethylene
The excited states of ethylene (C\({}_{2}\)H\({}_{4}\)) across its potential energy surface present a challenging benchmark problem for many methods. As the torsion of the carbon double bond is increased, an avoided crossing occurs when the torsion angle is \(90^{\circ}\). Even for ground state calculations, DFT and single-reference coupled cluster calculations predict an unphysical cusp at this location[67]. Starting from the \(90^{\circ}\) torsion and bending the hydrogen atoms on one side inward (so-called "pyramidalization"), ethylene undergoes a conical intersection where the ground state transitions from a \(\pi\) to \(\pi^{*}\) highest occupied orbital (the \(N\) and \(V\) states, with term symbols \({}^{1}A_{g}\) and \({}^{1}B_{1u}\)). Modeling this conical intersection requires fundamentally multireference methods, and while time-dependent density functional theory (TD-DFT) struggles with this system[68], multireference configuration interaction (MR-CI) methods describe it well[58].
We compute the excited states of ethylene as the torsion angle is varied from \(0^{\circ}\) to \(90^{\circ}\), followed by variation of the pyramidalization angle from \(0^{\circ}\) to \(120^{\circ}\), enough to include the conical intersection of the \(N\) and \(V\) states. We try to match the geometry from previous MR-CI studies[58] as closely as possible. Results are shown in Fig. 5. There are also several low-lying triplet states of ethene, the \({}^{3}B_{1u}\) and \({}^{3}B_{3u}\) states, and so we calculated \(K=3\) excited states for all geometries, which we found was enough to find two singlet states for all geometries except at equilibrium, where we used \(K=5\) and took the highest state, as the \({}^{1}B_{3u}\) state has lower energy exclusively at equilibrium. We tried both the FermiNet and Psiformer, and did not find any significant difference in the results, so we show the Psiformer results here (though FermiNet results are included in Table 6). For comparison, in addition to TD-DFT[57] and MR-CI, we also compare against the PauliNet penalty method[18]. For consistency, we show the PauliNet penalty method without variance matching, though the difference is not large. All results are normalized so that the ground state energy at
Figure 5: Excited states and conical intersection of ethylene (C\({}_{2}\)H\({}_{4}\)). Our results (blue) are compared against TD-DFT[57] (purple), MR-CI[58] (green) and a penalty method used with the PauliNet[18] (red). The best estimate of the location of the conical intersection of the V and N states for each method is given by the vertical line in Fig. 5b. Our method is in close agreement with MR-CI up to a constant shift, and agrees with the location of the conical intersection better than the PauliNet penalty method. Note that the \(\phi=0\) geometry in Fig. 5b differs slightly from the \(\tau=90\) geometry in Fig. 5a, as in Barbatti _et al._[58]. Complete numerical results are given in Table 6.
the equilibrium geometry is 0.
Qualitatively, the results from NES-VMC closely match MR-CI. The spurious cusp when the torsion angle is 90\({}^{\circ}\) is avoided, and the error in the ground state relative to MR-CI is smaller than for the PauliNet penalty method across torsion angles. The non-parallelity error in the \(V\) state relative to MR-CI is lower for our method than the PauliNet penalty method, and our predicted location for the conical intersection (\(\sim\)97.5 degrees) is closer to the MR-CI value (\(\sim\)96 degrees) than the predicted PauliNet penalty method value (\(\sim\)100 degrees). There is a nearly constant shift in the energy of the \(V\) state on the order of several tenths of an eV relative to MR-CI, and a shift in the energy of the \(N\) state which grows as the pyramidal angle grows. Increasing the number of excited states and using a different Ansatz did not seem to make a difference. We note that when using the equilibrium geometry for ethylene from QUEST in Sec III.2 as opposed to the geometry from MR-CI, our results agreed with the theoretical best estimates to within chemical accuracy. The overall agreement with experimentally relevant quantities like the location of the conical intersection is in excellent agreement with other highly accurate theoretical studies, and so we are confident that NES-VMC is able to capture the important behavior of this system across the potential energy surface.
### Benzene
Finally, as a demonstration of the ability of our method to scale to larger systems, we applied NES-VMC with both the FermiNet and Psiformer to benzene. Benzene is a common benchmark for excited state methods for medium-sized molecules, so there is abundant data for us to compare against. For VMC, in addition to the penalty method of Entwistle _et al._[18], there is also the penalty method of Pathak _et al._[17], which is used in combination with a traditional Slater-Jastrow Ansatz, and uses a different form of the penalty function which allows for unbiased gradients. On top of VMC results and coupled-cluster-based TBEs from QUEST, we also compare against CASPT2[66] and TD-DFT with the PBE0 functional[65]. Results are shown in Fig. 6, with complete numerical results in Table 7. For our calculations, we used the same geometry as in QUEST[50].
As can be seen in Fig. 6a, NES-VMC with the Psiformer comes very close to reaching the TBE for all computed states. The FermiNet is not quite as accurate, and struggles with the highest energy \({}^{3}B_{2u}\) state. Inspection of the spin magnitude reveals that the highest excited state of the FermiNet converges to a mixture of a triplet and singlet state, which suggests that contamination from the \({}^{1}B_{1u}\) state is affecting the performance. The Psiformer is known to be much more accurate for ground state calculations on systems as large as benzene[38], so it is not surprising that the Psiformer is also better suited for computing the relative energy between states at this
Figure 6: Excited states of benzene. The NES-VMC results (green and blue) are compared against theoretical best estimates from QUEST[50; 54] alongside TD-DFT-PBE0[65], CASPT2[66], DMC with a Slater-Jastrow Ansatz and penalty method[17], and the PauliNet with a penalty method[18]. NES-VMC with the Psiformer Ansatz is competitive with state-of-the-art methods. All excitations are \(\pi\rightarrow\pi^{*}\) excitations, and the orbitals involved are visualized in Fig. 6b. Complete numerical results are given in Table 7.
scale. CASPT2 and TD-DFT methods are less accurate across the board, though this is not surprising as density functional methods are generally less accurate than wavefunction methods, and CASPT2 is generally intermediate in accuracy between DFT and coupled cluster. The penalty method of Entwistle _et al._ in combination with the PauliNet was only applied to the lowest excited state, and even on that, it only reaches CASPT2-level accuracy, even with variance matching (which we do not use in NES-VMC). The penalty method of Pathak _et al._, however, is much more accurate, generally reaching comparable levels of accuracy to NES-VMC with the Psiformer. We suspect this is due to the unbiased gradients in the method of Pathak _et al._. Additionally, the results reported in Pathak _et al._ include a diffusion Monte Carlo correction, which we omit, though this only reduces the root mean squared error by \(\sim\)0.1 eV. We note that NES-VMC with a sufficiently expressive Ansatz not only reaches levels of accuracy near coupled cluster, but does so without any free parameters to tune, unlike penalty methods.
To better understand the nature of the excitations computed, we inspected the density matrices of the respective states, similarly to the analysis of the carbon dimer in Sec III.3 and Fig. 3c. The natural orbitals are well described by the Hartree-Fock orbitals, and so the density matrices in the Hartree-Fock basis are all nearly diagonal. All five excited states for benzene we computed are single excitations from a \(\pi\) to \(\pi^{*}\) orbital, but interestingly, in the natural orbital picture, they are best described by exciting half an electron from two distinct \(\pi_{g}\) orbitals into two distinct \(\pi_{u}^{*}\) orbitals. These orbitals are visualized in Fig 6b. The ability to easily evaluate and analyze properties of wavefunctions other than just the energy is one of the advantages of explicitly computing the functional form of the wavefunction in VMC. Overall, our results on benzene show that NES-VMC can be usefully applied to larger systems and still produce accurate results, so long as the correct Ansatz is used.
## IV Discussion
We have presented a novel method for calculating excited state properties of quantum systems by variational quantum Monte Carlo (VMC), the Natural Excited States method (NES-VMC). NES-VMC has no free parameters to tune, and allows for unbiased estimation of energies and gradients, by reformulating a state-averaging approach as the problem of finding the ground state of an extended system. In much the same way that sampling from \(\psi^{2}\) is the natural way to compute ground state properties by VMC, we believe that NES-VMC is the natural variational principle for computing excited state properties. Additionally, it dovetails well with recent work on neural network Ansatze for many-body systems.
We have demonstrated the effectiveness of NES-VMC on a number of benchmark problems ranging from small atoms and molecules up to the benzene molecule. In all cases, NES-VMC is competitive with theoretical best estimates using coupled cluster methods for estimating energies and oscillator strengths, and can capture the behavior of double excitations and conical intersections. The optimized Ansatz can be used in downstream analyses to characterize the nature of the electronic structure of different excited states. We are confident that NES-VMC is as effective as any other method for computing excited states with QMC, with the added benefit of simplicity and generality.
Neural network Ansatze can be quite computationally expensive, which puts an upper limit on the scale of systems we considered. We believe that recent work on scaling and accelerating neural network Ansatze for many-electron problems [41] can be usefully applied to NES-VMC as well, which could allow these methods to be applied to problems for which no satisfactory solution exists today. While we focused on applications using neural network Ansatze, classic Ansatze like the Slater-Jastrow Ansatz can be scaled to much larger systems [25; 69]. Although our results suggest that more accurate Ansatze are quite important for achieving good performance, we look forward to finding out how well NES-VMC works in conjunction with these classic Ansatze on large problems.
Finally, while our experiments in this paper focused on molecular systems, that is, many-electron problems with open boundary conditions, NES-VMC is fully general and can be applied to _any_ quantum Hamiltonian. Excited state calculations with QMC are an important tool for studying nuclear physics [6], optical band gaps in condensed matter physics [13; 70], many properties of spin systems, as well as time dynamics and finite temperature phenomena. We are excited to see how NES-VMC can be applied to many of the most challenging open problems in many-body quantum mechanics in the future.
###### Acknowledgements.
The authors would like to thank Matthew Foulkes, Denis Jacquemin, Michael Bearpark, Aron Cohen and Alex Gaunt for helpful discussions, and James Kirkpatrick, Annette Obika, Ali Eslami and Pushmeet Kohli for support.
id: 2310.00496
title: The Sparsity Roofline: Understanding the Hardware Limits of Sparse Neural Networks
authors: Cameron Shinn, Collin McCarthy, Saurav Muralidharan, Muhammad Osama, John D. Owens
published_date: 2023-09-30T21:29:31Z
link: http://arxiv.org/abs/2310.00496v2

# The Sparsity Roofline: Understanding the Hardware Limits of Sparse Neural Networks
###### Abstract
We introduce the Sparsity Roofline, a visual performance model for evaluating sparsity in neural networks. The Sparsity Roofline jointly models network accuracy, sparsity, and theoretical inference speedup. Our approach does not require implementing and benchmarking optimized kernels, and the theoretical speedup becomes equal to the actual speedup when the corresponding dense and sparse kernels are well-optimized. We achieve this through a novel analytical model for predicting sparse network performance, and validate the predicted speedup using several real-world computer vision architectures pruned across a range of sparsity patterns and degrees. We demonstrate the utility and ease-of-use of our model through two case studies: (1) we show how machine learning researchers can predict the performance of unimplemented or unoptimized block-structured sparsity patterns, and (2) we show how hardware designers can predict the performance implications of new sparsity patterns and sparse data formats in hardware. In both scenarios, the Sparsity Roofline helps performance experts identify sparsity regimes with the highest performance potential.
## 1 Introduction
Deep neural networks are often over-parameterized (Howard et al., 2019; Tan and Le, 2019) and their weights or parameters can be eliminated (_pruned_) to improve inference latency and/or decrease network size (LeCun et al., 1989; Han et al., 2015; Molchanov et al., 2017; Zhu and Gupta, 2018) without affecting accuracy. Depending on the _pattern_ and _degree_ of sparsity, which together constitute a _sparsity configuration_, networks exhibit widely different accuracy and runtime behavior. This presents major problems for machine learning practitioners who wish to find the best sparsity pattern and degree that balances accuracy loss and performance constraints for their specific application. Obtaining the accuracy corresponding to a sparsity pattern and degree typically requires some form of network fine-tuning (Frankle and Carbin, 2019), making it highly inefficient to estimate the impact of different sparsity configurations by trying hundreds of combinations of hyperparameters.
Thus we hope to predict which sparsity combinations might be most fruitful without fine-tuning them all. But accurately estimating the effects that a specific sparsity configuration has on inference runtime poses a different set of challenges: (1) which metric should we use to estimate runtime performance, and (2) how do we obtain the runtime performance of sparsity patterns that are either unimplemented or have unoptimized implementations? To illustrate the challenge of identifying the right metric, consider the total floating point operations (FLOPs) performed during sparse matrix operations such as matrix multiplication (a common operation in neural networks (Chetlur et al., 2014)). FLOPs are frequently used to evaluate the performance of pruned models (Han et al., 2015; Molchanov et al., 2017; Zhu and Gupta, 2018; Frankle and Carbin, 2019; Lee et al., 2019; Hoefler et al., 2021; Blalock et al., 2020). Table 1 illustrates the limitations of this metric. Here, we show two weight matrices that provide a counterexample to the notion that FLOPs are positively correlated with measured runtime. The structured weight matrix shown on the left side of the table has 1.57\(\times\) more FLOPs than the unstructured matrix on the right, but runs nearly 6\(\times\) faster.
Addressing the challenge of _estimating_ optimized runtime performance is even harder. While performance experts have implemented computation kernels specifically targeting sparse neural networks (Gale et al., 2020; Sarkar et al., 2020; Chen et al., 2021; Vooturi and Kothapalli, 2019), there are significant gaps. For example, NVIDIA's cuSparse library provides optimized GPU kernels for block-sparse matrices, but they are primarily optimized for larger block sizes such as 16\(\times\)16 and 32\(\times\)32 (Yamaguchi and Busato, 2021).
As discussed in Section 4.1, using smaller block sizes often leads to higher accuracies; however, in the absence of computation kernels optimized for these sizes, it is impossible
to estimate their effect on runtime via benchmarking.
To help practitioners better understand the complex relationship between sparsity configuration, accuracy, and inference performance (both current and potential), we introduce a novel visual model named the _Sparsity Roofline_. Our work builds upon the well-known Roofline model (Williams et al., 2009), which provides a visual representation of the performance of a given computation kernel.
In the Roofline model, users compute the _arithmetic intensity_ of the given kernel, and plot it against one or more hardware-specific upper limits (the Rooflines) defined by the peak memory bandwidth and peak floating-point throughput of that hardware architecture. In a similar vein, the Sparsity Roofline plots network accuracy against the theoretical speedup of sparse over dense models, with additional sparsity information. This clearly shows the two most important aspects of weight pruning to a machine learning practitioner--accuracy and performance--and can be analyzed across any model architecture, sparsity hyperparameters, or hardware accelerator. Plotting the Sparsity Roofline requires sampling the accuracy values corresponding to the sparsity configurations being analyzed, which can be easily done with masking-based approaches and existing software libraries (Paszke et al., 2019; Joseph et al., 2020). The only other metrics needed are the arithmetic intensity, which can be either profiled or computed by hand, and the hardware-specific peak computational throughput (in FLOPs/s) and memory bandwidth (in bytes/s).
We validate and demonstrate the usefulness of the Sparsity Roofline by analyzing several real-world computer vision models, including convolutional neural networks (CNNs), vision transformers (ViT), and multi-layer perceptron (MLP)-based networks. We investigate which sparsity characteristics have the greatest impact on accuracy and GPU performance, and point out promising areas to focus on for kernel optimization. Finally, we present two case studies: (1) analyzing tradeoffs associated with block-structured sparsity for deep learning practitioners, and (2) efficient sparsity patterns for future hardware architectures.
This paper makes the following contributions:
1. It introduces the Sparsity Roofline visual model for understanding accuracy vs. latency trade-offs for currently unoptimized and unimplemented kernel designs.
2. It uses the Sparsity Roofline to benchmark and analyze several real-world computer vision architectures pruned to a range of sparsity patterns and levels.
3. It demonstrates the use of the Sparsity Roofline in two distinct use cases: to analyze block-sparsity structures for DL practitioners, and to help inform future sparse hardware implementations.
## 2 Background
In this Section, we provide a brief overview of neural network pruning, followed by a description of the traditional Roofline model.
### Neural Network Pruning
Weight pruning involves setting a subset of neural network weights to zero, followed by a training or fine-tuning stage that attempts to recover any lost accuracy (Hoefler et al., 2021). Pruning can be unstructured (fine-grained), where individual non-zero values are eliminated, or structured (coarse-grained), where groups of non-zero values are removed instead, each resulting in a different _sparsity pattern_. The _sparsity level_ refers to the fraction of zero weights to total weights and is expressed as a percentage in this paper. Structured pruning has been demonstrated to achieve better runtime performance, typically at the cost of decreased accuracy (Narang et al., 2017; Vooturi et al., 2018; Li et al., 2022). A number of algorithms have been proposed in the literature for accuracy recovery of pruned models (Deng et al., 2021; Renda et al., 2020; Hoefler et al., 2021). In this paper, we use the learning rate rewinding approach proposed by Renda et al. (2020).
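Masking-based magnitude pruning of the kind used here is available in standard libraries; a minimal sketch using PyTorch's pruning utilities (the helper name and toy model are ours, and fine-tuning for accuracy recovery is a separate step):

```python
import torch
import torch.nn.utils.prune as prune

def global_magnitude_prune(model, sparsity):
    """Mask the smallest-magnitude weights across all conv/linear layers.
    Accuracy recovery (e.g. learning rate rewinding) happens afterwards."""
    params = [
        (m, "weight")
        for m in model.modules()
        if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d))
    ]
    prune.global_unstructured(
        params, pruning_method=prune.L1Unstructured, amount=sparsity
    )

# Toy usage: mask 75% of the weights of a small model.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3), torch.nn.Conv2d(16, 8, 3))
global_magnitude_prune(model, sparsity=0.75)
```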
### The Roofline Model
The Roofline model (Williams et al., 2009) is a visual performance model that shows how well a computational kernel
\begin{table}
\begin{tabular}{l c c} \hline \hline
 & Block-sparse (32\(\times\)32) & Unstructured \\ \hline
**Runtime (ms)** & **0.613** & **3.526** \\
**GFLOPs** & **24.4** & **15.5** \\
TFLOPs/s & 39.9 & 4.4 \\
Number of Nonzeros & 1.95M & 1.23M \\
\(m\times k\)-dimensions (sparse operand) & 3072\(\times\)768 & 3072\(\times\)768 \\
\(n\)-dimension (dense operand) & 6272 & 6272 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Runtime vs. GFLOPs**: SpMM performance on (32\(\times\)32) block sparsity vs. unstructured with a similar amount of nonzeros. In the accompanying matrix heatmaps, white indicates zero-valued weights, blue non-zero. The block-sparse matrix has more FLOPs but nearly 6\(\times\) lower runtime latency vs. unstructured.
utilizes the hardware. The Roofline model plots the arithmetic intensity (FLOPs computed / bytes read and written) on the x-axis and the throughput (FLOPs per second) on the y-axis. This enables users to visually observe if their program is memory-bound or compute-bound, and to what extent. The upper bound (Roofline) of the model is determined by both the hardware's peak compute throughput and peak memory bandwidth. Although there are variants that consider cache hierarchies (Ilic et al., 2014), the traditional Roofline model that we discuss in this paper assumes perfect caching (including user-managed caching such as shared memory and local registers), so that peak memory bandwidth is bounded only by DRAM; we thus use DRAM memory bandwidth. The hardware throughput component can be increased with additional hardware acceleration for a specific application (e.g., Tensor Cores for deep learning (Jia et al., 2018)). The utility of the Roofline model comes from its ability to succinctly show potential improvement for a given program with respect to the hardware speed-of-light.
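The Roofline bound itself is a one-line function of arithmetic intensity; a sketch (the A100 peak numbers below are approximate and ours):

```python
def roofline_gflops(ai, peak_gflops, peak_gbps):
    """Attainable throughput at arithmetic intensity `ai` (FLOPs/byte):
    memory-bound to the left of the knee, compute-bound to the right."""
    return min(peak_gflops, ai * peak_gbps)

# e.g. roughly A100 FP32 CUDA cores: ~19,500 GFLOP/s and ~1,555 GB/s HBM2,
# so the knee sits near 19500 / 1555 ~ 12.5 FLOPs/byte.
print(roofline_gflops(ai=4.0, peak_gflops=19_500, peak_gbps=1_555))  # memory-bound
```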
### Evaluating Sparse Neural Networks
Figure 1 plots the Roofline for individual SpMM matrices across all benchmarked computer vision models. The line in each plot is the "Roofline", which slopes upwards during the memory-bound region where the arithmetic intensity (AI) is too low to saturate the compute resources, and flattens out once the AI reaches a hardware-specific point, called the _knee_. The dashed line is for Tensor Cores and the solid line for CUDA cores, where the Tensor Core knee has almost 10x the AI of CUDA cores.
The points that are closest to the Roofline are utilizing the GPU the best, with higher sparsities being more memory bound and lower sparsities approaching and becoming compute bound in some situations, such as when the inner-dimension of the matrix product is higher. The Roofline model is a significant improvement over analyzing FLOPs, but it has three major drawbacks in optimizing sparse deep learning models:
1. The Roofline model lacks any concept of accuracy, and GFLOPs/s is challenging to use to compare the relative performance between sparse and dense layers.
2. The Roofline model is only meaningful per-layer instead of per-model. An entire model is almost always a combination of layers, where some are memory-bound and others are likely compute-bound. Therefore calling the entire model "compute bound" or "memory bound" is misleading at best.
3. The Roofline model requires benchmarking to compute GFLOPs/s. Even if optimal kernels exist, such as cuBLAS for dense GEMM operations, the surrounding benchmarking framework is time-consuming to implement, test, and maintain.
Our proposed solution, the Sparsity Roofline, directly addresses these concerns. It is not meant to replace the Roofline model, but instead _complement_ it for the specific use case of designing and optimizing sparse deep-learning kernels.
## 3 The Sparsity Roofline
The Sparsity Roofline is designed to be an easy-to-use tool for deep learning practitioners interested in sparsity, performance experts, and hardware designers. It achieves this goal by addressing the three major issues with the existing Roofline model described in Section 2.3.
The Sparsity Roofline plots accuracy vs. theoretical speedup, as opposed to the traditional Roofline's GFLOPs/s vs. arithmetic intensity. Accuracy is almost always the most important optimization metric in DNNs, and therefore we place it on the \(y\) axis. Similarly, replacing GFLOPs/s with theoretical speedup makes it far easier to understand relative performance differences of a sparse and dense layer or model. Further, the sparsity configuration is encoded into the point and/or line style in order to easily compare different sparsity design decisions, which are crucial for optimal performance.
The Sparsity Roofline converts per-layer peak GFLOPs/s to per-model minimum or _speed-of-light_ (SoL) latency. We first calculate a per-layer SoL latency, then sum the layer-wise latencies for the model SoL latency. This represents
Figure 1: **Roofline, Sparse vs. Dense**: Roofline model measuring throughput of SpMM on unstructured sparse layers and GEMM on dense layers from all trained models, on a single NVIDIA A100. The solid line is the CUDA core peak throughput, the dashed line the Tensor core peak throughput. Unstructured sparsity kernels in cuSPARSE do not use Tensor cores.
the true performance metric that practitioners care about: end-to-end latency of the entire model.
Like the traditional Roofline, the Sparsity Roofline does not require benchmarking. We only need to look up the hardware peak GFLOPs/s and peak GB/s of a hardware architecture, and compute the per-layer GFLOPs and GBs read/written by hand in order to calculate arithmetic intensity.
The Sparsity Roofline for unstructured sparsity is shown in Figure 2, and for ConvNeXt-Tiny and Swin-Tiny in Figures 3 and 4, respectively. We will now describe how these Sparsity Rooflines are constructed.
Because the Sparsity Roofline reports accuracy, each model placed on it must first be fine-tuned to a given sparsity from a pre-trained dense model. Fine-tuning for sparsification is standard practice in deep learning, and the only way to quantify accuracy. We use the learning-rate rewinding technique proposed by Renda et al. (2020) and the Condensa library by Joseph et al. (2020). Our model is most accurate when the sparse kernels are well optimized and thus approach the speed-of-light. This does not mean the sparse kernel must be compute bound; if it is memory bound, the closer it runs to the device's peak memory throughput, the more accurate our model is. This is discussed in detail in Section 3.4.
### Use Cases
The Sparsity Roofline is designed to quantify the performance-accuracy tradeoff for a specific combination of hardware, model architecture and sparsity configuration, such as sparsity pattern, sparsity level or percent, and sparse data format. Thus it can be used by both software and hardware engineers who want to understand how an optimized kernel would perform, but do not want to go through the trouble of implementing and benchmarking sub-optimal scenarios. In Section 4.1, we show how a deep-learning practitioner may use this tool to investigate optimal block-structure sparsity patterns, and in Section 4.2 we show how a hardware engineer can investigate different N:M sparsity patterns and sparse data formats to implement in hardware, e.g., for new sparse Tensor core formats.
In contrast, the Sparsity Roofline is not meant for engineers who already have a specific sparsity-configuration optimization target. In that scenario, a combination of the Roofline model, benchmarking / profiling, and lower-level optimizations are likely the correct tools to understand detailed performance statistics that would inform kernel design, such as load balancing and caching.
### Constructing the Sparsity Roofline
The Sparsity Roofline plots accuracy vs. theoretical speedup from sparsity. We start by deriving the theoretical speedup.
First, we need to define the kernel's GFLOPs and GBs read/written to global memory. Equation 1 shows this for SpMM (\(\text{Sparse}\times\text{Dense}=\text{Dense}\) matrix multiply); the index data depends on the sparse data format. For the compressed sparse row (CSR) format, it is \(\textit{nnz}+m+1\).
\[\begin{split}\text{SpMM FLOPs}&=\textit{nnz}\times n \\ \text{SpMM GB}&=\textit{nnz}+n\times k+m\times n+ \text{index data}\end{split} \tag{1}\]
Next, we define the per-layer speed-of-light latency as the maximum runtime for the kernel's given GFLOPs and GBs read/written to global memory. Using the device's peak GFLOPs and GB/s, this is computed as
\[\text{Per-Layer SoL}=\text{max}\bigg{(}\frac{\text{GFLOP}}{\text{ Peak GFLOP/s}},\frac{\text{GB}}{\text{Peak GB/s}}\bigg{)} \tag{2}\]
Finally, we sum the \(L\) per-layer runtimes for the dense model and the same corresponding sparse model, and take their runtime ratio as the speedup, using the dense computation as the baseline. For example, if the sparse latency is 1 ms and the dense latency is 2 ms, the speedup would be 2x.
Figure 2: **Per-Model Sparsity Roofline**: The Sparsity Roofline for several computer vision models on ImageNet-100 pruned with global magnitude pruning. Speedup is calculated per-layer using the maximum compute or memory bound latency, and then summed per model. The machine learning engineer can choose the architecture that provides the optimal balance of accuracy, speedup, and implementation difficulty.
\[\text{Speedup at SoL}=\frac{\sum_{l=1}^{L}\text{Dense SoL Runtime}_{l}}{\sum_{l=1}^{L}\text{Sparse SoL Runtime}_{l}} \tag{3}\]
These equations make the same assumption as the Roofline model: the maximum achievable FLOPs/s is the hardware's peak compute throughput, and each byte of data may be read from or written to global memory once, at the hardware's peak memory throughput, with perfect caching (including shared memory or local registers) for any intermediate reads.
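To make Equations 1-3 concrete, a minimal sketch of the calculation is given below. The peak throughput/bandwidth figures, element sizes, and layer shapes are illustrative A100-like assumptions, not values taken from our measurements.

```python
# Minimal sketch of Eqs. 1-3: speed-of-light (SoL) speedup of a sparse model
# over its dense baseline. Peak numbers are illustrative A100-like values.
PEAK_GFLOPS = 312_000   # GFLOP/s, e.g. FP16 tensor cores (assumed)
PEAK_GBS = 1_555        # GB/s peak HBM bandwidth (assumed)
ELEM, IDX = 2, 4        # bytes per FP16 value and per CSR index (assumed)

def spmm_sol(m, k, n, nnz):
    """Per-layer SoL latency (s) for SpMM with a CSR-stored m-by-k weight."""
    gflop = nnz * n / 1e9                                    # Eq. 1, FLOPs
    gb = (ELEM * (nnz + n * k + m * n) + IDX * (nnz + m + 1)) / 1e9
    return max(gflop / PEAK_GFLOPS, gb / PEAK_GBS)           # Eq. 2

def gemm_sol(m, k, n):
    """Per-layer SoL latency (s) for the dense GEMM baseline."""
    gflop = m * k * n / 1e9
    gb = ELEM * (m * k + n * k + m * n) / 1e9
    return max(gflop / PEAK_GFLOPS, gb / PEAK_GBS)

def sol_speedup(layers, sparsity):
    """Eq. 3: ratio of summed dense to summed sparse SoL runtimes."""
    dense = sum(gemm_sol(m, k, n) for m, k, n in layers)
    sparse = sum(spmm_sol(m, k, n, round(m * k * (1 - sparsity)))
                 for m, k, n in layers)
    return dense / sparse

# Two hypothetical transformer-like layers at batch size 1:
print(sol_speedup([(768, 768, 197), (3072, 768, 197)], sparsity=0.875))
```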
### Evaluating Accuracy
To compute accuracy for each model and sparsity configuration, we start by pre-training one baseline model per architecture. We pre-train without sparsity for 300 epochs on ImageNet-100 (Vinyals et al., 2016). This dataset is a subset of the ImageNet-1K dataset (Deng et al., 2009) created by sampling 100 of the 1000 classes in ImageNet-1K, which allows us to train a larger number of models, sparsity patterns, and sparsity levels.
All model definitions are from the _timm_ library (Wightman, 2019) and each is trained with the same set of data augmentations, hyperparameters, and training schedules based on modern architectures such as DeiT (Touvron et al., 2021), Swin (Liu et al., 2021) and ConvNeXt (Liu et al., 2022). This includes data augmentations RandAugment (Cubuk et al., 2020), MixUp (Zhang et al., 2018) and CutMix (Yun et al., 2019), a cosine decay learning rate schedule (Loshchilov and Hutter, 2017), and the AdamW optimizer (Loshchilov and Hutter, 2019) with a base learning rate of \(10^{-3}\) and 20 epochs of warm up. Using these uniform settings across all models ensures a fair comparison with an identical training procedure. We store the checkpoint with the minimum validation loss and use this for fine-tuning.
We apply an incremental fine-tuning algorithm based on learning rate rewinding (Renda et al., 2020) to the baseline model to obtain the accuracy values corresponding to the following sparsity levels: 50%, 75%, 87.5%, 93.75% and 96.875%. This pattern involves halving the number of nonzeros per iteration, which ends up slightly biasing the results towards higher sparsities where sparse kernels are typically more performant.
For a given combination of model and sparsity pattern, e.g., ConvNeXt-Tiny with unstructured sparsity, we prune the weights with global magnitude pruning to the lowest sparsity level of 50%. We rewind the learning rate schedule but with a shorter 60 epoch total decay rather than 300 epochs. After 60 epochs we increase the sparsity level by \((1-\text{Sparsity})/2\), prune the additional weights, and rewind the learning rate again. We repeat this a total of five times within a single run to fine-tune five sparsity levels for our model / sparsity pattern combination in 300 epochs total, which is the same number as during training. We find this process to be simple and efficient, and quantitatively works well for ImageNet-100. For more challenging datasets such as ImageNet-1k or ImageNet-22k, the fine-tuning schedule would likely need to be increased.
### Validation
It is important to understand the cases where the speed-of-light (SoL) speedup equals the actual measured speedup, without having to implement and optimize a specific sparse kernel. We can easily show that the speedup at SoL is precisely equal to the measured speedup when the sparse and dense kernels are _equally optimized_. Specifically, at a per-layer level this occurs when the percentages of the per-layer SoL latency for dense and sparse are equal. For example, if a given GEMM kernel is compute bound and obtains 90% of the SoL GFLOPs/s, and the corresponding SpMM kernel is memory bound and also obtains 90% of the SoL GB/s, then the percent of SoL is identical and our model will predict a SoL speedup that is equal to the measured speedup. More formally:
\[\text{Per-Layer Speedup at SoL} \stackrel{{?}}{{=}}\text{Per-Layer Speedup Meas.}\] \[\frac{\text{Dense SoL Runtime}}{\text{Sparse SoL Runtime}} =\frac{\text{Dense Meas. Runtime}}{\text{Sparse Meas. Runtime}}\] \[\frac{\text{Dense SoL Runtime}}{\text{Dense Meas. Runtime}} =\frac{\text{Sparse SoL Runtime}}{\text{Sparse Meas. Runtime}}\]
Dense Per-Layer % of SoL \(=\) Sparse Per-Layer % of SoL
In the last equation, note that the percent (or fraction) of speed-of-light is defined as the ratio of the SoL latency to the measured latency. The measured latency can be as small as the SoL latency but no smaller, by definition, so this ratio is bounded between 0 and 1 (0-100%).
The same equation holds for per-model aggregation, but in this case each individual term is a summation of all layers.
\[\text{Per-Model Speedup at SoL} \stackrel{{?}}{{=}}\text{Per-Model Speedup Meas.}\] \[\frac{\sum_{l=1}^{L_{\text{dense}}}\text{Dense SoL Runtime}_{l}}{\sum_{l=1}^{L_{\text{dense}}}\text{Dense Meas. Runtime}_{l}} =\frac{\sum_{l=1}^{L_{\text{sparse}}}\text{Sparse SoL Runtime}_{l}}{\sum_{l=1}^{L_{\text{sparse}}}\text{Sparse Meas. Runtime}_{l}}\]
Dense Per-Model % of SoL \(=\text{Sparse Per-Model }\%\) of SoL
At the aggregated per-model level, the SoL speedup is equal to the measured speedup when the sparse and dense models are equally optimized, such that the percentage of the per-model SoL latency for dense and sparse are equal.
## 4 Case Study
### DL Practitioner
Suppose Alice is researching pruning algorithms and wants to find out whether block sparsity can provide effective inference latency improvements on NVIDIA GPUs for ConvNeXt (Liu et al., 2022) and Swin (Liu et al., 2021), two state-of-the-art computer vision models. She would typically start by training, pruning and then fine-tuning these models for various block sizes, say, 2\(\times\)2, 4\(\times\)4, 8\(\times\)8, 16\(\times\)16 and 32\(\times\)32, to capture a sufficiently large sample of the search space.
Alice would like to compare the speedups that her block pruning scheme achieves w.r.t. unstructured global magnitude pruning, but she would prefer to avoid implementing a custom block-sparse GPU kernel until she is sure it's the right approach. She then considers using existing kernels from a vendor-optimized library such as cuSPARSE, but decides against it for two reasons: (1) writing a custom operator for a deep learning framework is not trivial, and (2) she notices in the library's documentation that it achieves poor performance for smaller block sizes, and may thus not provide a fair comparison across block sizes.
Rather than trying to measure actual latency numbers, Alice now plans to use some simple metrics to estimate potential speedups. She starts by counting the FLOPs of each sparse model. However, since her blocked SpMM and unstructured SpMM kernels would run on NVIDIA tensor cores and CUDA cores, respectively, the former will achieve higher throughput than the latter. Additionally, since tensor cores demand higher memory throughput to stay utilized, she would also need to account for the reads and writes that her sparse models perform during inference.
To address the above concerns, Alice instead generates the Sparsity Roofline for the block-sparse models she has trained to quickly approximate the speedups she would achieve for various block sizes. Figures 3a and 3b show the Sparsity Roofline models Alice would generate for ConvNext and Swin with a batch size of 1. By observing the accuracy and performance tradeoffs that the Sparsity Roofline depicts, Alice is now able to determine that her models achieve higher speedups using larger block sizes, but they only maintain accuracy with smaller block sizes of 2\(\times\)2 and 4\(\times\)4. _Importantly, Alice was able to arrive at this conclusion without needing to go to the effort of writing her own optimized sparse kernels for a variety of block sizes._ She now realizes that if she invests her time in optimizing for smaller block sizes, she will get reasonable speedups without sacrificing accuracy.
### Hardware Architect
Bob is a hardware architect designing next-generation Tensor Cores for future GPUs and is investigating alternative N:M patterns for future hardware support. He would like to quickly assess the accuracy and performance implications of the new N:M patterns before he puts in any effort into design and simulation. His goal is to find patterns that achieve accuracy numbers similar to the currently supported 2:4 pattern, but are at least 30% faster given the same Tensor Core throughput.
Bob's target workload for these N:M patterns is inference with a batch size of 1 on ConvNeXt and Swin. These two network architectures, in addition to providing state-of-the-art accuracies on their tasks, comprise a variety of layer types and involve matrix operations of various shapes and sizes, making them fairly representative. The N:M schemes he chooses to investigate are 1:4, 2:8 and 2:16, in addition to the pre-existing 2:4 pattern.
Bob works with a machine learning engineer to get these two networks trained, pruned, and fine-tuned for each of the above sparsity patterns, and then obtains the corresponding accuracy numbers. He now needs to determine how these models would perform if hardware support for the new N:M patterns was available.
Instead of developing RTL code for these new hardware units and simulating the workloads, which would be labor-intensive and time-consuming, Bob would prefer a quicker way of estimating the runtime performance of each of these pruned models on their respective hypothetical hardware units. Bob could simply use FLOPs to estimate speedups for each pattern (e.g., going from 2:4 to 1:4 is a 2x speedup); however, note that Bob would also need to account for the memory system's ability to keep up with the Tensor Core's throughput to get a more accurate performance estimation.
To address these concerns, Bob constructs the Sparsity Roofline for the N:M pruned models to quickly estimate the speedups he would achieve w.r.t. the accuracy. The resulting Sparsity Roofline plots are shown in Figures 4a and 4b. From the Sparsity Roofline, Bob notices that at the same Tensor Core throughput, 2:16 sparsity achieves nearly a 1.8\(\times\) speedup over dense and is over 30% faster than the 2:4 sparsity pattern, meeting his original goal. He also notices that the 1:4 and 2:8 patterns are promising in cases where accuracy preservation is more important than raw speedup. Similar to Alice (see Section 4.1), Bob was able to estimate his performance metrics significantly faster using the Sparsity Roofline.
## 5 Discussion
### Unstructured Sparsity
Global magnitude pruning with re-training has become a widely applicable technique due to its simplicity and effectiveness. Figure 5 shows how this technique can reach almost 90% sparsity with minimal accuracy loss. In the context of small computer vision models, Figure 2 indicates that accuracy can only be preserved up to about a \(1.5\times\) speedup over dense. While a 50% speedup would be somewhat substantial, the time cost of fine-tuning may not be worthwhile in every scenario. Additionally, a 50% speedup is far less than what FLOP counts would suggest. At 87.5% sparsity, a network requires only \(1/8\) the FLOPs of the original, yet Figure 2 tells us that an \(8\times\) speedup is infeasible in any case. To make sparsity generally viable from a performance perspective, we need to understand and alleviate the underlying factors that inhibit SpMM from achieving the speedups suggested by the FLOP reduction. Despite the wide range of factors that affect SpMM kernel performance on GPUs, such as load balancing and efficient data reuse (Gale et al., 2020; Bell and Garland, 2009), we only consider the factors that make up the Sparsity Roofline. Thus, in our analysis, we account for FLOPs, bytes read/written, hardware peak throughput, and hardware peak memory bandwidth (the same as the Roofline model).
One of the most glaring downsides of unstructured sparsity is its inability to leverage the GPU's tensor cores, which dense models exploit effectively. The Roofline model
Figure 4: **N:M Sparsity Roofline: The Sparsity Roofline for (a) ConvNext-Tiny and (b) Swin-Tiny on ImageNet-100 pruned with various N:M patterns. Calculations are done using a batch size of 1 and NVIDIA A100 hardware specs.**
Figure 3: **Block-Sparsity Roofline: The Sparsity Roofline for (a) ConvNext-Tiny and (b) Swin-Tiny on ImageNet-100 pruned with various block pruning sizes. Calculations are done using a batch size of 1 and NVIDIA A100 hardware specs.**
in Figure 1 shows the elevated peak tensor core throughput above the peak CUDA core throughput. For the A100, the tensor core throughput is 16\(\times\) the CUDA core throughput (NVIDIA, 2020). To address this hardware discrepancy and put sparse and dense on a level playing field, we opt to investigate sparsity structures that can leverage the tensor cores.
### Block Sparsity
The Sparsity Roofline shows two benefits of block sparsity: (1) the ability to use the high-throughput sparse tensor cores, and (2) the reduced index data from the block sparse format. The reduced index data results from the sparsity pattern's more coarse-grained structure, where a single block index refers to multiple nonzeros. The index data is reduced by a factor of the block size.
Despite the reduction in reads and writes from block sparsity, Figure 6 shows that the vast majority of block-pruned weights are still memory bound. Because of this, the Sparsity Rooflines for different block sizes in Figures 3a and 3b see only a small improvement compared to unstructured sparsity. The accuracy-speedup tradeoff is slightly better than unstructured sparsity at best, and only just as good in the worst case.
While the heatmap in Table 1 suggests that block sparsity should perform much better than unstructured sparsity, we observe that the accuracy loss from large block sizes (16\(\times\)16 and 32\(\times\)32) is too significant to be viable. When we therefore restrict our analysis to smaller block sizes, we cannot achieve the full throughput of the tensor cores due to the memory bottleneck seen in Figure 6. The smaller block sizes are completely memory-bound, whilst the larger block sizes are less so and can thus extract more throughput from the tensor cores.
### N:M Sparsity
NVIDIA's sparse tensor cores provide an interesting alternative to block sparsity, allowing adopters to leverage the throughput of the tensor cores whilst being able to prune weights in a fine-grained manner. While the coarse-grained structure of block sparsity restricts the freedom of pruning algorithms' weight selection and hurts accuracy, the fine-grained structured sparsity for the N:M patterns should theoretically hurt accuracy less.
In addition to the accuracy benefits of a fine-grained structure, the N:M formats can reduce the memory overhead for indexing data. With dedicated hardware support, N:M formats only need \(\log_{2}(M)\) bits to store the index of each nonzero inside the \(M\)-wide blocks; for 2:4, that's only 2 bits per nonzero.
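As a quick illustration of this index overhead (the CSR figure assumes a 32-bit column index; exact byte counts are format- and implementation-dependent):

```python
import math

# Bits of index data per nonzero for N:M patterns vs. a 32-bit CSR column index.
for m_width in (4, 8, 16):
    print(f"N:{m_width} -> {math.log2(m_width):.0f} bits per nonzero")
print("CSR (int32 column index) -> 32 bits per nonzero")
```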
Figures 3(a) and 3(b) show the Sparsity Roofline for N:M formats. We see that the various N:M patterns achieve a better performance-accuracy tradeoff over unstructured than what
Figure 5: **Accuracy vs. Sparsity and FLOPs: A common but misleading means of evaluating sparse models. Plotting accuracy (here ImageNet-100 top-1 accuracy) vs. sparsity (top) and FLOPs (bottom) for various models implies higher sparsity means higher GPU performance, which does not take memory bandwidth into account.**
Figure 6: **Roofline, Block Sparse vs. Dense: Roofline model measuring throughput of SpMM on all block sparse layers and GEMM on dense layers from all trained models, on a single NVIDIA A100.**
block sparsity was able to achieve. N:M is an improvement over block sparsity in our pruned networks due to the reduced accuracy degradation and minimal index data overhead.
### Feature Overhead
Finally, we have not yet mentioned the read and write overhead of the input and output features of each layer. Equation 1 shows the data for the input and output features as \(n\times k\) and \(m\times n\) (respectively). Akin to Amdahl's law, we can only expect to reduce the number of memory accesses for pruned matrices. Therefore, regardless of our pruning strategy, the input and output features will always incur a fixed number of reads and writes as overhead. Figure 7 shows the severity of this problem. For a batch size of 1, the feature memory accesses, which cannot be reduced via pruning, account for half of all accesses. For a batch size of 32, the feature memory accesses heavily dominate the overall number of accesses, making it difficult to decrease the memory bottleneck of our sparse models.
The \(n\) dimension in Equation 1 is shared by the input and output feature matrices and is not one of the weight matrix dimensions. The size of \(n\) relative to \(m\) and \(k\) determines the appearance of the graphs in Figure 7. The \(n\) dimension scales linearly with both the batch size and the number of spatial locations in the feature data (for both convolution and transformer FFN layers). This suggests that we will see larger speedups from pruning when the model size (\(m\) and \(k\)) is large relative to the batch size and feature sizes (\(n\)).
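This is easy to see numerically; the layer shape and spatial size below are hypothetical, chosen only to illustrate the trend:

```python
# Fraction of memory traffic (in elements, Eq. 1 notation) spent on features
# rather than pruned weights, for a hypothetical 768x768 layer with 196
# spatial locations per image.
def feature_fraction(m, k, n, sparsity):
    nnz = m * k * (1 - sparsity)      # remaining weight elements
    features = n * k + m * n          # input + output feature elements
    return features / (features + nnz)

for batch in (1, 32):
    n = batch * 196
    frac = feature_fraction(768, 768, n, sparsity=0.875)
    print(f"batch {batch:2d}: {frac:.0%} of traffic is features")
```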
## 6 Related Work
**Automated Model Compression.** Recent work has explored various approaches for automatically inferring optimal sparsity levels using approaches such as Bayesian optimization (Joseph et al., 2020) and reinforcement learning (He et al., 2018). Our work differs in two ways: we focus on providing (1) a _visual_ representation of the accuracy and performance landscape for different sparsity patterns and levels, and (2) meaningful estimates of potential inference runtimes to aid deep learning practitioners, performance experts and hardware designers.
**Deep Learning Roofline Models.** The Roofline model has been applied to the deep learning problem space in the past (Yang et al., 2020; Wang et al., 2020; Czaja et al., 2020). However, this work primarily focuses on dense neural networks. Specifically, Wang et al. (2020) extend the Roofline model to deep learning by using latency and compute/bandwidth complexity. Yang et al. (2020) provide a toolkit extension for deep learning to support new precisions, tensor cores, and a tool for measuring performance metrics. Czaja et al. (2020) perform a Roofline analysis of DNNs accounting for non-uniform memory access (NUMA) systems.
|
2306.17630 | Navigating Noise: A Study of How Noise Influences Generalisation and
Calibration of Neural Networks | Enhancing the generalisation abilities of neural networks (NNs) through
integrating noise such as MixUp or Dropout during training has emerged as a
powerful and adaptable technique. Despite the proven efficacy of noise in NN
training, there is no consensus regarding which noise sources, types and
placements yield maximal benefits in generalisation and confidence calibration.
This study thoroughly explores diverse noise modalities to evaluate their
impacts on NN's generalisation and calibration under in-distribution or
out-of-distribution settings, paired with experiments investigating the metric
landscapes of the learnt representations across a spectrum of NN architectures,
tasks, and datasets. Our study shows that AugMix and weak augmentation exhibit
cross-task effectiveness in computer vision, emphasising the need to tailor
noise to specific domains. Our findings emphasise the efficacy of combining
noises and successful hyperparameter transfer within a single domain but the
difficulties in transferring the benefits to other domains. Furthermore, the
study underscores the complexity of simultaneously optimising for both
generalisation and calibration, emphasising the need for practitioners to
carefully consider noise combinations and hyperparameter tuning for optimal
performance in specific tasks and datasets. | Martin Ferianc, Ondrej Bohdal, Timothy Hospedales, Miguel Rodrigues | 2023-06-30T13:04:26Z | http://arxiv.org/abs/2306.17630v2 | # Impact of Noise on Calibration and Generalisation of Neural Networks
###### Abstract
Noise injection and data augmentation strategies have been effective for enhancing the generalisation and robustness of neural networks (NNs). Certain types of noise such as label smoothing and MixUp have also been shown to improve calibration. Since noise can be added in various stages of the NN's training, it motivates the question of when and where the noise is the most effective. We study a variety of noise types to determine how much they improve calibration and generalisation, and under what conditions. More specifically we evaluate various noise-injection strategies in both in-distribution (ID) and out-of-distribution (OOD) scenarios. The findings highlight that activation noise was the most transferable and effective in improving generalisation, while input augmentation noise was prominent in improving calibration on OOD but not necessarily ID data.
Machine Learning, ICML 2023 Workshop on Spurious Correlations, Invariance, and Stability. Honolulu, Hawaii, USA. Copyright 2023 by the author(s).
## 1 Introduction
Noise injection methods have emerged as a promising approach to enhance the generalisation of neural networks (NNs) (Srivastava et al., 2014; Neelakantan et al., 2017). Given the importance of noise for Bayesian NNs (BNNs) (Gal and Ghahramani, 2016; Blundell et al., 2015; Welling and Teh, 2011), we hypothesise that noise injections during training of standard NNs can also positively impact their calibration. Calibration refers to the alignment of prediction's accuracy to their confidence (Guo et al., 2017).
Examples of noise injection approaches include dropout (Srivastava et al., 2014; Gal and Ghahramani, 2016), label smoothing (Szegedy et al., 2016), MixUp (Zhang et al., 2018), Gaussian noise (Blundell et al., 2015), shrinking and perturbing NN weights (Ash and Adams, 2020), and gradient noise (Neelakantan et al., 2017). By introducing noise during the training, these methods encourage active exploration of the parameter space and can be applied to various components of the network, including the input, targets, activations, gradients and the model itself. In this paper, we aim to provide a fair comparison of noise injection methods during training and investigate their impact on both calibration and generalisation of NNs in a computer vision classification setting. We ensure fairness of the comparison through dedicated hyperparameter optimization per noise type and we examine the transferability of found hyperparameters from one dataset or architecture to another. To robustly evaluate both generalisation and calibration we consider testing the methods on both test in-distribution (ID) and out-of-distribution (OOD) data.
The key takeaways from our work are: 1) Activation noise, especially dropout (Srivastava et al., 2014), improves generalisation and marginally also calibration across architectures and datasets. 2) Input augmentation, MixUp (Zhang et al., 2018), improves calibration and generalisation on OOD data but not necessarily ID data. 3) Model noise and gradient noise improve generalisation and calibration, but only to a smaller extent than input or activation noise.
## 2 Related Work
Standard NNs were shown to lack calibration (Guo et al., 2017), motivating the need for approaches focusing on training NNs such that their confidence matches their accuracy. Bayesian NNs (BNNs) (Blundell et al., 2015; Gal and Ghahramani, 2016; Welling and Teh, 2011) and NN ensembles (Lakshminarayanan et al., 2017) are popular approaches for obtaining well-calibrated models, but they are computationally expensive as they require random sampling and multiple forward passes during test time. Alternative methods have been proposed without increasing computational complexity, particularly during training. They include different loss functions (Kumar et al., 2018; Mukhoti et al., 2020; Bohdal et al., 2021) and temperature scaling (Guo et al., 2017). However, these approaches have
their own limitations and may not be suitable for all scenarios. On the other hand, most noise injections are applicable to any NN architecture and any task.
For **input noise** injection, commonly used are MixUp and Output Diversified Sampling (ODS) methods. MixUp (Zhang et al., 2018) linearly interpolates between two samples and their labels, while ODS (Tashiro et al., 2020) augments the input to diversify predictions and was used in the context of adversarial examples but not calibration. MixUp has been shown to improve calibration and generalisation (Zhang et al., 2022), but its transferability between datasets and architectures has not been explored. Additionally, we investigate naive Gaussian and uniform noise injection, which adds Gaussian or uniform noise to the input during training. In terms of **target noise** injection, label smoothing (Pereyra et al., 2017) and MixUp (Zhang et al., 2018) label interpolation are frequently used. Label smoothing replaces hard targets with soft targets and has already been shown to improve calibration, but not on OOD data (Muller et al., 2019). **Activation noise** injections include Dropout, Gaussian and uniform noise injections. Dropout (Srivastava et al., 2014) randomly sets activations to zero. Gaussian noise injection (Blundell et al., 2015; Camuto et al., 2020; Alemi et al., 2017; Yu et al., 2021) adds Gaussian noise to the activations, while uniform noise injection adds uniform noise. In BNNs, these injections are applied both during training and evaluation, whereas in this work we only apply noise during training. Furthermore, **gradient noise** has been shown to improve generalisation through adding annealed Gaussian noise to the gradients during training (Neelakantan et al., 2017; Welling & Teh, 2011). However, it was not benchmarked on calibration, especially without ensembling weights at different training time-steps. Finally for **model noise** injection, recently Gaussian noise injection via shrinking and perturbing weights (Ash & Adams, 2020) at a given epoch frequency was shown to improve retraining generalisation, but calibration on ID or OOD data was not considered.
To the best of our knowledge, the noise injections have been studied 1) separately (Zhang et al., 2022; Muller et al., 2019), 2) orthogonally for generalisation and calibration on ID or OOD data, and 3) without a unified hyperparameter (HP) optimization protocol. This research aims to start the conversation into a comprehensive analysis of the noise injection methods and their relationship to generalisation and calibration, across datasets and NN architectures, providing valuable insights into their effectiveness and practicality.
## 3 Methodology
This study focuses on training a NN with noise perturbations to investigate their impact on the NN's accuracy and calibration, identifying which perturbations are helpful and when. The noises are divided among **input**, **target**, **activation**, **gradient** and **model**, and their deployment during training is outlined in Algorithm 1 via blue lines. The probability of applying each noise to a batch out of \(B\) batches is determined by the HP \(p\in[0,1]\), except for model noise, which is applied with a selected frequency during the \(T\) training epochs. Each noise has associated HPs and tuning ranges.
```
0: Training dataset \(D=\{(x^{b},y^{b})\}_{b=1}^{B}\), \(B\) batches, learning rate \(\eta\), number of epochs \(T\), weights \(\theta\), operation \(g(\cdot,\theta)\), hidden states \(h^{b}\), hidden depth \(D\), activation \(f(\cdot)\), probability of applying noise to a batch \(p\)
1: Initialize \(\theta\) randomly
2:for\(t=1\) to \(T\)do
3:for\(b=1\) to \(B\)do
4: Randomly select \((x^{b},y^{b})\) from \(D\)
5: Sample \(e\sim U(0,1)\) {If \(e<p\)}
6:Input noise: Modify \(x^{b}\)
7:Target noise: Modify \(y^{b}\)
8:for\(i=1\) to \(D\)do
9:\(h^{b}_{i}=g(h^{b}_{i-1},\theta)\) {Where \(h^{b}_{0}=x^{b}\)}
10:Activation noise: Modify \(h^{b}_{i}\) before activation
11:\(h^{b}_{i}=f(h^{b}_{i})\)
12:endfor
13: Compute predicted output \(\hat{y}^{b}=g(h^{b}_{D},\theta)\)
14: Compute loss \(\mathcal{L}(\hat{y}^{b},y^{b})\) and gradients \(\nabla_{\theta}\mathcal{L}\)
15:Gradient noise: Modify \(\nabla_{\theta}\mathcal{L}\)
16: Update weights: \(\theta\leftarrow\theta-\eta\nabla_{\theta}\mathcal{L}\)
17:endfor
18:if\(t\mod\text{frequency}=0\)and\(t<0.75T\)then
19:Model noise: Modify \(\theta\)
20:endif
21:endfor
22:return\(\theta\)
```
**Algorithm 1** Training of Neural Network with Noise
**Input noise:** The input noise consisted of two naive variants and two variants that tap into the predictions or the targets to compute the noise. The two naive variants add Gaussian or uniformly sampled noise, \(n\sim\mathcal{N}(0,\sigma)\) or \(n\sim U(-\sigma,\sigma)\), to the inputs \(x\), with standard deviation \(\sigma\in[1e^{-4},1e^{-1}]\). We considered ODS (Tashiro et al., 2020) with respect to \(\epsilon\in[1e^{-4},1e^{-1}]\) and temperature \(T\in[0.5,5.0]\), and MixUp (Zhang et al., 2018) with \(\alpha\in[0,1]\), which also modifies the targets accordingly. **Target noise:** In addition to MixUp, we considered static noise introduced to the labels \(y\) in the form of label smoothing (Muller et al., 2019) with the smoothing factor \(l\in[0,0.25]\). **Activation noise:** The hidden states prior to applying the activation function, \(\{h^{b}_{i}\}_{i=0}^{D}\), where \(D\) is the depth of the net, could be disturbed by 3 types of activation noise: additive Gaussian or uniform noise,
\(n\sim\mathcal{N}(0,\sigma)\) or \(n\sim U(-\sigma,\sigma)\), with \(\sigma\in[1e^{-4},1e^{-1}]\) as a tunable HP, or multiplicative Dropout (Srivastava et al., 2014) that incorporates a dropout rate \(d\in[0,1]\). The activation noise was applied prior to an activation \(f(\cdot)\) for all linear or convolutional operations \(g(\cdot,\theta)\), but not in the output layer. **Gradient noise:** The noise applied to the gradients \(\nabla_{\theta}\mathcal{L}\) followed (Neelakantan et al., 2017), with the step size \(\eta\in[0,1]\) and the annealing factor \(\gamma\in[0,1]\). **Model noise:** Lastly, the model noise follows the idea of shrinking and perturbing the weights \(\theta\) (Ash and Adams, 2020), with a shrink factor \(\mu\in[0.0,1.0]\) and standard deviation \(\sigma\in[0.0,1e^{-3}]\), applied every frequency \(\in[0,80]\) epochs, except during the last 25% of training epochs.
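The injections above map naturally onto small PyTorch modules. The sketch below shows two of them, MixUp on inputs/targets and additive Gaussian activation noise with the per-batch gate \(e\sim U(0,1)\) from Algorithm 1; the hyperparameter values are placeholders, not our tuned settings.

```python
import torch
import torch.nn as nn

def mixup(x, y_onehot, alpha=0.2):
    """MixUp sketch: convex combination of a batch with a shuffled copy."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], lam * y_onehot + (1 - lam) * y_onehot[idx]

class GaussianActivationNoise(nn.Module):
    """Adds N(0, sigma) to pre-activations, gated per batch by p, training only."""
    def __init__(self, sigma=0.01, p=0.5):
        super().__init__()
        self.sigma, self.p = sigma, p

    def forward(self, h):
        if self.training and torch.rand(()) < self.p:   # e ~ U(0,1), apply if e < p
            h = h + self.sigma * torch.randn_like(h)
        return h

# Placed before the activation, as in line 10 of Algorithm 1:
block = nn.Sequential(nn.Linear(128, 128),
                      GaussianActivationNoise(sigma=0.01, p=0.5),
                      nn.ReLU())
```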
## 4 Experiments
**Settings.** We first tune the learning rate and L2 regularisation of a no-noise network, which are reused when tuning the HPs of each noise injection method on three different combinations: ResNet-18 paired with CIFAR-10 or CIFAR-100, and a fully connected (FC) network paired with SVHN. The tuning is performed using \(1/4\) of the training budget over the course of one day, using the model-based Tree-structured Parzen Estimator method (Bergstra et al., 2011). With these settings we are able to evaluate about 40 configurations selected using Bayesian optimization. Our protocol allows us to optimize the performance of each noise injection method and provide a fair comparison. Full experimental details are in Appendix A, including a summary of the identified HPs.
To assess the effectiveness of the noise injection methods, we measure their performance using three metrics: Error \([\downarrow,\%]\), Expected Calibration Error (ECE) (Guo et al., 2017) \([\downarrow,\%]\), and Negative Log-Likelihood (NLL) \([\downarrow]\) that we report in Appendix B. These metrics provide insights into the accuracy and its match with the confidence of the NNs' predictions. We evaluate the performance on both the ID test set and an augmented OOD set that includes an average over visual corruptions across 19 categories and 5 severities (Hendrycks and Dietterich, 2019). These corruptions include, for example, adding snow or fog to the image, changing the brightness or saturation of the image or blurring the image. We conduct experiments on a series of deployment scenarios where 1) the tuned HPs are directly used on the tuned dataset and architecture, 2) the architecture is changed but the HPs are kept, 3) the HPs come from a different source dataset. The results presented are averaged across 3 seeds and the best results are in **bold**.
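For reference, a common equal-width-binning implementation of ECE looks as follows (a sketch; the bin count of 15 is a typical choice, not necessarily the one used here):

```python
import numpy as np

def expected_calibration_error(confidence, is_correct, n_bins=15):
    """ECE: occupancy-weighted |accuracy - mean confidence| over bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            gap = abs(is_correct[in_bin].mean() - confidence[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

conf = np.random.rand(1000)
correct = (np.random.rand(1000) < conf).astype(float)   # toy, well-calibrated data
print(expected_calibration_error(conf, correct))
```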
### Analysis
**Tuned Hyperparameters.** In this scenario, we evaluate the performance of the noise injection methods when the HPs are tuned specifically for the dataset and architecture. The results for these experiments are in Tables 1 and 2, and they show that activation and input augmentation noises are prominent in improving the accuracy and calibration of the networks across the datasets. Dropout was the most effective for improving ID generalisation in CIFAR-10 and CIFAR-100, while MixUp was the most effective for SVHN. Uniform activation noise worked the best for improving ID calibration in CIFAR-10 and CIFAR-100, whereas ODS was the best in SVHN. The strong result obtained by ODS on SVHN shows that adversarial attack techniques may also be useful for other use-cases, including calibration. Interestingly, some of the improvements carried over to OOD data; for example, the error on SVHN or CIFAR-100 was lowest with MixUp or dropout. However, when considering calibration on OOD data, MixUp was dominant for CIFAR-10 and CIFAR-100. On average, dropout improved generalisation and MixUp improved calibration when considering both ID and OOD data. The naive Gaussian and uniform input noise perturbations did not bring significant improvements.
**Architecture Transfer.** In this scenario, we assess the performance of the noise injection methods when the HPs are transferred to a different architecture while keeping the dataset constant. We conduct experiments using SVHN with ResNet-18 with HPs tuned on an FC network. Furthermore, we use HPs tuned for ResNet-18 for both CIFAR-10 and CIFAR-100 and we change the architecture to WideResNet-18. The results are presented in Tables 3 and 4. Considering the performance on ID data, we see that dropout reduced error across architectures and also improved calibration. Contrary to the improvements seen on SVHN when using the FC network, MixUp did not reduce the error when using ResNet-18 and it even recorded worse performance on OOD data than no noise at all. Switching focus to OOD data, model perturbation moderately improved calibration for CIFAR-100, while activation noises had a negative impact and led to worse calibration. Even though WideResNet-18 and ResNet-18 are relatively similar, transferring hyperparameters, for example for MixUp on CIFAR-100, did not prove effective, as seen in the calibration on OOD data, which became worse than using no noise at all. In summary, activation noises, most notably dropout, performed well at improving generalisation on both ID and OOD data and moderately improved calibration on ID data. However, no method was able to consistently improve calibration on OOD data after the architecture was changed.
**Dataset Transfer.** Under these settings, we investigate the transferability of hyperparameters by evaluating the noise injection methods on the same architectures but using different datasets. Specifically, we evaluate SVHN with ResNet-18 and HPs from CIFAR-100/ResNet-18, CIFAR-10
with ResNet-18 and CIFAR-100/ResNet-18 HPs, and CIFAR-100 with ResNet-18 but with CIFAR-10/ResNet-18 HPs. The results are shown in Tables 5 and 6. For all of SVHN, CIFAR-10 and CIFAR-100, the most significant error improvements across ID or OOD data were achieved using dropout and Gaussian noise. Interestingly, the activation Gaussian noise was able to improve calibration on both ID and OOD data on CIFAR-100, but not on the other datasets. MixUp demonstrated varying results; for example, on SVHN or CIFAR-10 the calibration on ID data was worse than using no noise at all, while on CIFAR-100 there was a marginal improvement. Nevertheless, on OOD data MixUp was able to improve calibration across all datasets.
**Summary.** The effectiveness of noise injections varies with the dataset and architecture. Nevertheless, especially in the tuned regime, certain settings of different noises improved both generalisation and calibration. Activation noise injections demonstrated promising results for error reduction on ID data, while input augmentations seemed to be the most effective for OOD data. Dropout was the most effective in reducing error on ID or OOD data, and it proved to be transferable across architectures and datasets. MixUp was the best at improving performance on OOD data in terms of calibration and accuracy, but not necessarily on ID data. Interestingly, hidden in its mediocrity, model noise was able to marginally improve accuracy and calibration across the majority of the considered scenarios. Additional evaluation in terms of NLL, in Appendix B, has shown MixUp, dropout and model perturbations to be the most effective.
## Acknowledgements
Martin Ferianc was sponsored through a scholarship from the Institute of Communications and Connected Systems at UCL. Ondrej Bohdal was supported by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh.
|
2310.04431 | Can neural networks count digit frequency? | In this research, we aim to compare the performance of different classical
machine learning models and neural networks in identifying the frequency of
occurrence of each digit in a given number. It has various applications in
machine learning and computer vision, e.g. for obtaining the frequency of a
target object in a visual scene. We considered this problem as a hybrid of
classification and regression tasks. We carefully create our own datasets to
observe systematic differences between different methods. We evaluate each of
the methods using different metrics across multiple datasets. The metrics of
performance used were the root mean squared error and mean absolute error for
regression evaluation, and accuracy for classification performance evaluation.
We observe that decision trees and random forests overfit to the dataset, due
to their inherent bias, and are not able to generalize well. We also observe
that the neural networks significantly outperform the classical machine
learning models in terms of both the regression and classification metrics for
both the 6-digit and 10-digit number datasets. Dataset and code are available
on github. | Padmaksh Khandelwal | 2023-09-25T03:45:36Z | http://arxiv.org/abs/2310.04431v1 | ## Can Neural Networks Count Digit Frequency?
### Abstract
In this research, we aim to compare the performance of different classical machine learning models and neural networks in identifying the frequency of occurrence of each digit in a given number. It has various applications in machine learning and computer vision, e.g. for obtaining the frequency of a target object in a visual scene. We considered this problem as a hybrid of classification and regression tasks. We carefully create our own datasets to observe systematic differences between different methods. We evaluate each of the methods using different metrics across multiple datasets. The metrics of performance used were the root mean squared error and mean absolute error for regression evaluation, and accuracy for classification performance evaluation. We observe that decision trees and random forests overfit to the dataset, due to their inherent bias, and are not able to generalize well. We also observe that the neural networks significantly outperform the classical machine learning models in terms of both the regression and classification metrics for both the 6-digit and 10-digit number datasets. Dataset and code are available on github.
### Introduction
Some of the fundamental aspects of deep learning were introduced quite early, e.g. backpropagation[1] and deep convolutional neural networks[2]; however, it took an increase in computational power and access to large datasets[3, 4, 5] for them to become mainstream. Recently, these learning techniques have been shown to be successful in different tasks such as playing the game of Go[6] and even question-answering interactions, e.g. InstructGPT[7], which led to the recently popular ChatGPT.
In this paper, we show that it is still not easy to use the recent machine learning models for a simple but important task of counting the frequency of different digits in a given sequence of numbers, e.g. Figure 1 shows that even ChatGPT is not good at this task. This task has several downstream applications, e.g. counting the number of objects detected in a scene[8, 9]. We compare different classical machine learning and neural network-based methods for this task. As part of classical methods, we utilize decision trees[10, 11] and random forests[12, 13, 14]. Thus, in this research work, we try to understand classical machine learning and neural network architectures and their effects.
Decision Tree and Random Forests: A decision tree is built from binary splits, each of which decides the branch to which a data sample is allocated. The quality of a split is measured by an impurity score, e.g. "gini", which behaves similarly to the sum of the standard deviations of the samples lying on each side of the split[15, 16]; the best split is therefore the one with the lowest "gini" score. Refer to Figures 6 to 9 for decision tree structures. Decision trees can overfit, which can be mitigated by using random forests[12, 13, 14]. The basic idea behind random forests is to create many large decision trees whose predictions are uncorrelated[14] and then average their predictions, which is also called bagging[9]. There are different approaches to creating uncorrelated models, e.g. training them on different subsets of the data, or considering a random subset of columns for each split[12, 13, 14]. Random forests have been shown to work quite well in practice, which is also evident from this work.
Our major contributions in this work are listed below:
* We systematically create our own datasets to bring out the differences in performance of different methods.
* We carefully split the datasets into training, validation and test sets to test the generalization capabilities of different methods across dataset sizes.
* For fair evaluation of the methods, we do multiple runs of each method to obtain statistical results. We also consider different metrics for both regression-based evaluation and accuracy-based evaluation.
* We also list specific examples to observe the overfitting behavior of decision trees and random forests which is not observed in the neural networks.
* We also perform hyper-parameter tuning of the neural networks and provide our observation as part of the ablation studies.
These splits allow for fine-tuning the hyperparameters of the neural networks on the validation set; the networks can later be tested on the unseen and unbiased test set, whose samples follow the same distribution as the training and validation sets.
The training set of size 90,000 represents 9% of the total possible 6-digit numbers. This can help us understand the generalization of the performance of machine learning models to unseen 6-digit numbers. To further challenge the generalizability of the models and test their capabilities to learn from limited data, we also considered a 10-digit numbers dataset as a 90,000-sized training set represents only 0.0009% of the total possible 10-digit numbers. We show that this change in the fraction of seen dataset (from 9% to 0.0009%) has the least effect on the performance of the neural networks [1, 2] as compared to the classical machine learning models [10, 11, 12, 13, 14].
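A sketch of how such a dataset can be generated is shown below; the uniform sampling and the column names are assumptions for illustration rather than the exact construction used for our datasets.

```python
import numpy as np
import pandas as pd

def make_digit_count_dataset(n_samples, n_digits, seed=0):
    """Random n-digit numbers with the count of each digit 0-9 as labels."""
    rng = np.random.default_rng(seed)
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits
    numbers = rng.integers(lo, hi, size=n_samples)
    counts = [[str(x).count(str(d)) for d in range(10)] for x in numbers]
    df = pd.DataFrame(counts, columns=[f"count_{d}" for d in range(10)])
    df["number"] = numbers
    return df

train = make_digit_count_dataset(90_000, n_digits=6)
```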
### Implementation
For the implementation of the different machine learning models, we extensively used Jupyter Notebooks with the _scikit-learn_[17] and _fastai_[18] libraries. While _scikit-learn_[17] has several built-in classical ML models, _fastai_[18] has implementations of several state-of-the-art deep learning models. Using these libraries helps us overcome the challenge of tediously and manually assigning all hyperparameters, and thus allows us to quickly experiment with multiple methods and techniques.
We decided to use the decision tree and random forest regressor as classical ML models. Decision trees [10] build regression or classification models in the form of a tree structure. At every node, it splits a dataset into two subsets such that the "gini" score is minimized, to incrementally develop the decision tree. The final result is a tree with decision nodes and leaf nodes. A random forest [13, 14] is a meta-estimator that fits a number of classifying decision trees on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and avoid over-fitting.
Figure 2: 6-Digit Original Dataset: a sequence of 6-digit number (rightmost column) and the corresponding count of each digit
The dataset follows a specific labeling pattern, hence we believe that the decision tree could, perhaps, identify the necessary comparisons to perfectly, or nearly perfectly, predict the pattern. The random forest is, in general, the best-performing and most versatile classical ML model, which is a key reason for its widespread popularity, and it thus also stood out as a potentially strong baseline.
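A minimal sketch of these two baselines on the per-digit-column ("modified") representation is shown below; the toy data stands in for the actual 90,000-sample splits.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
numbers = rng.integers(100_000, 1_000_000, size=1_000)       # toy 6-digit sample
X = np.array([[int(c) for c in str(x)] for x in numbers])    # one column per digit
Y = np.array([[str(x).count(str(d)) for d in range(10)] for x in numbers])

tree = DecisionTreeRegressor().fit(X, Y)                     # multi-output regression
forest = RandomForestRegressor(n_estimators=100).fit(X, Y)
print(forest.predict(X[:2]))                                 # two 10-dim count vectors
```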
Let \(x_{i}\) be the \(i^{th}\) number or sample for \(1\leq i\leq n\), let \(y_{i}\) be the ground-truth label vector for the \(i^{th}\) number such that \(y_{ij}\) is the count of digit \(j\) for \(0\leq j\leq 9\), and let \(\hat{y}_{i}\) be the predicted vector for the \(i^{th}\) number such that \(\hat{y}_{ij}\) is the predicted count of digit \(j\) for \(0\leq j\leq 9\).
The regression performance metrics we consider are root mean squared error and mean absolute error, the two popular metrics in regression, and the classification metric we consider is accuracy. Root mean squared error is calculated as
\[RMSE\ =\ \sqrt{\sum\limits_{i=1}^{n}\sum\limits_{j=0}^{l-1}\frac{\left(y_{ij}-\hat{y}_{ij}\right)^{2}}{nl}}\]
and the mean absolute error is calculated as
\[MAE\ =\ \sum\limits_{i=1}^{n}\sum\limits_{j=0}^{l-1}\frac{\left|y_{ij}-\hat{y}_{ij}\right|}{nl}\]
, where \(n\) is the total number of samples (or numbers), \(l\) is the length of the output vector (which is 10 for the counts of the 10 digits), \(y_{i}\) is the \(i^{th}\) ground-truth label vector, and \(\hat{y}_{i}\) is the \(i^{th}\) predicted vector.
Figure 3: 10-Digit Original Dataset: a sequence of 10-digit number (rightmost column) and the corresponding count of each digit
The problem statement can be tackled either using a regression method or classification method. The count of each of the 10 digits is only limited to integers 0 to 6 for the 6-digit set and 0 to 10 for the 10-digit set. However, if we consider a classification method, the presence of different digits would require an excessively complex and yet underperforming multi-class multi-label classification method which may easily overfit the small fraction of real data we have.
Therefore, to tackle this problem, we first implemented multi-class regression models and generated the two error metrics and, then modified the predictions to be rounded off to the nearest whole number (predictions less than zero rounded up to zero and those more than the total number of digits rounded down to the total digits themselves (6 and 10 respectively). We can therefore also consider accuracy metric over these predictions which we define as:
\[Accuracy\ =\ \sum\limits_{i=1}^{n}\sum\limits_{j=0}^{l-1}\frac{I\left(y_{ij}=\hat{y}_{ij}\right)}{nl}\]

, where \(I(\cdot)\) is the indicator function, equal to 1 when the rounded prediction matches the ground truth and 0 otherwise.
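Putting the three metrics together, including the round-and-clip modification described above (a sketch):

```python
import numpy as np

def evaluate(y_true, y_pred, max_count):
    """RMSE/MAE on raw predictions; accuracy after rounding and clipping."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mae = np.mean(np.abs(y_true - y_pred))
    y_cls = np.clip(np.rint(y_pred), 0, max_count)   # classification modification
    accuracy = np.mean(y_cls == y_true)
    return rmse, mae, accuracy

y_true = np.array([[1, 0, 2], [0, 3, 0]])
y_pred = np.array([[0.9, -0.2, 2.3], [0.1, 2.8, 0.4]])
print(evaluate(y_true, y_pred, max_count=6))
```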
All the neural networks were composed of input layers, dense linear layers, and dense non-linear layers with ReLU (Rectified Linear Unit)[3] activation functions, and were trained with SGD[1, 2, 3] or the Adam optimizer[19]. For reference, a ReLU layer introduces a non-linearity into the neural network so it can better fit a non-linear pattern; it is the identity function for all non-negative values and zero for negative values.
### Experiments
The results show that neural networks performed significantly better than the decision tree and random forest models, especially when using the modified dataset. The best results were obtained by using the appropriate number of layers, learning rate, and number of epochs.
Figure 4: 6-Digit Original Dataset with 16 columns: a sequence of 6-digit (rightmost 6 columns) and the corresponding count of each digit (left columns)
Figure 5: 10-Digit Original Dataset with 20 columns: a sequence of 10-digit (rightmost 10 columns) and the corresponding count of each digit (left columns)
The results are shown in Tables 1, 2, 3, and 4. For reference, the following keys are provided to identify the different models:
* Decision Tree 1: Decision Tree trained on the original dataset
* Random Forest 1: Random Forest trained on the original dataset
* Decision Tree 2: Decision Tree trained on the modified dataset
* Random Forest 2: Random Forest trained on the modified dataset
* Neural Network: _fastai.tabular_ implemented neural network[20]
* Neural Network + Embedding: _fastai.tabular_ neural network implemented with a hidden embedding[20]
We report RMSE, MAE and Accuracy metrics for each of the methods. We run each method multiple times on the validation set to obtain statistical errors. The results are consistent for both the 6-digit and 10-digit datasets, under both the regression and classification metrics. However, it is worth noting that even the neural networks do not reach perfect accuracy, although they come very close to 100%.
| Method | RMSE | MAE | Accuracy |
| --- | --- | --- | --- |
| Decision Tree 1 | 0.998 | 0.693 | 43.986% |
| Decision Tree 2 | 1.018 | 0.712 | 43.198% |
| Random Forest 1 | 0.864 | 0.666 | 44.545% |
| Random Forest 2 | 0.620 | 0.495 | 52.827% |
| Neural Network | 0.303 | 0.216 | 97.833% |
| Neural Network + Embedding | 0.274 | 0.208 | 97.920% |

Table 4: 10-Digit Test Set
| Method | RMSE | MAE | Accuracy |
| --- | --- | --- | --- |
| Decision Tree 1 | 0.997 ± 0.000 | 0.693 ± 0.000 | 44.167% |
| Decision Tree 2 | 1.021 ± 0.001 | 0.714 ± 0.000 | 42.994% |
| Random Forest 1 | 0.862 ± 0.000 | 0.666 ± 0.000 | 44.583% |
| Random Forest 2 | 0.623 ± 0.001 | 0.499 ± 0.001 | 53.019% |
| Neural Network | 0.293 ± 0.025 | 0.221 ± 0.018 | 98.256% |
| Neural Network + Embedding | 0.210 ± 0.014 | 0.162 ± 0.010 | 96.965% |

Table 3: 10-Digit Validation Set. For statistical error, each method was run 5 times.
The neural networks are only slightly affected, or nearly unaffected, by the increase in the number of digits, especially considering the large difference in the fraction of all possible numbers covered by the training set for 6-digit versus 10-digit numbers, as mentioned earlier.
Modified dataset effect: The modified dataset improves the performance of both decision trees and random forests, but substantially more so for random forests. This could be attributed to the ability of random forests to grow many decision trees over multiple different features, instead of the single feature that produces the one and only possible tree shown in the figures below. The averaging of random forests [12, 13] over several decision trees on the modified dataset, each built from a random, unbiased batch of data, generates different outputs on every run and yields substantially lower error and higher accuracy than on the original dataset.
This could also be the explanation for the decision trees and random forests generating exactly the same performance consistently on the original datasets for both 6-digit and 10-digit numbers across multiple runs, thus, having no change in the statistical error, as only a single decision tree is possible and only a single set of decision trees and their respective batches are being computed in the random forest.
Decision tree overfits: As we used decision tree analysis methods, it was observed that the decision tree had created over 85,000 leaf nodes for the training dataset of 90,000 numbers for both datasets, which is a clear example of an overfitting and memorizing model.
The random forest model performed slightly better than the decision tree model; however, it is worth mentioning that as a random forest creates many decision trees on unbiased data and bags them together, it will always outperform decision trees. It is also worth noting that the decision tree created many numerical splits to make nodes and for inference, it simply outputs the average of the count of each digit across numbers reaching a leaf node during training, refer to Figure 6, Figure 7, Figure 8 and Figure 9, which shows that both the classical ML models clearly could not interpret any patterns.
Figure 7: First 6 nodes of the decision tree for the modified 6-digit training dataset.
Figure 8: First 6 nodes of the decision tree for the original 10-digit training dataset (a): the top part, and (b): the lower part of the decision tree.
Figure 9: First 6 nodes of the decision tree for the modified 10-digit training dataset (a): the top part, and (b): the lower part of the decision tree.
We also experimented with a handful of outlier data points or numbers to observe predictions of the classical ML models.
For the original 6-digit dataset we tried the two pairs of consecutive numbers: (999998, 999999) and (100000, 100001). The decision tree predicted [0, 0, 0, 0, 1, 0, 0, 0, 0, 5] for both numbers of the first pair and [4, 2, 0, 0, 0, 0, 0, 0, 0, 0] for both numbers of the second pair, and the random forest after undergoing the classification modification predicted [0, 0, 0, 0, 0, 0, 0, 1, 0, 5] for the first pair and [4, 2, 0, 0, 0, 0, 0, 0, 0, 0] for the second pair. Rerunning the classical ML models on the modified dataset still generated similar results: the decision tree predicted [0, 0, 0, 0, 1, 0, 0, 0, 0, 5] for the first pair and [3, 3, 0, 0, 0, 0, 0, 0, 0, 0] for the second pair, and the random forest after undergoing the classification modification predicted [0, 0, 0, 0, 0, 0, 0, 1, 0, 5] for the first pair and [3, 3, 0, 0, 0, 0, 0, 0, 0, 0] for the second. Thus these classical methods make the same prediction for successive numbers. This shows the inherent limitation of the decision tree and random forest, as they split nodes based on the numeric values of the numbers and not the count of each digit.
For the 10-digit dataset, we tried the two pairs of numbers: (9999999999, 9999999998) and (1000000000, 1000000001). The decision tree predicted [0, 1, 1, 0, 2, 0, 0, 0, 0, 6] for the former and [4, 2, 0, 1, 2, 0, 0, 0, 1, 0] for the latter. The random forest, in contrast, predicted [0.02, 0.61, 0.71, 0.26, 1.31, 0.2, 0.29, 0.35, 0.75, 5.5] for the former and [3.57, 1.67, 0.52, 0.95, 1.81, 0.05, 0.4, 0.02, 0.57, 0.44] for the latter, which after the classification modification become [0, 1, 1, 0, 1, 0, 0, 0, 1, 6] and [4, 2, 0, 1, 2, 0, 0, 0, 1, 0] respectively. The results are similar for the modified dataset. Evidently, this is another indication of the memorization that these classical ML models underwent and of how poorly they performed at pattern recognition, which is even more evident on the 10-digit dataset.
### Observations on Neural Networks
The neural networks, as aforementioned, outperformed classical ML models in every scenario and for both datasets. According to our hyperparameter optimization, we found the following best values for all the different scenarios using 16 epochs and [x,y,z] layers, where x,y, and z respectively are the number of parameters in each of the non-linear (ReLU [6]) hidden layers:
* 6-Digit Dataset:
  * Neural Network: Layers = [96,96,96], Learning Rate = 0.01
  * Neural Network with Embedding: Layers = [96,96,96], Learning Rate = 0.01, Embeddings = [10,100], i.e., a 100-dimensional embedding for each of the 10 unique digits
* 10-Digit Dataset:
  * Neural Network: Layers = [128,128,128], Learning Rate = 0.01
  * Neural Network with Embedding: Layers = [256,256,256], Learning Rate = 0.005, Embeddings = [10,100], i.e., a 100-dimensional embedding for each of the 10 unique digits
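These settings map onto _fastai.tabular_ roughly as in the following sketch; the data-frame construction, the column names, and the choice to treat digit columns as continuous inputs are illustrative assumptions rather than our exact code.

```python
import numpy as np
import pandas as pd
from fastai.tabular.all import TabularDataLoaders, tabular_learner, RegressionBlock

rng = np.random.default_rng(0)
numbers = rng.integers(100_000, 1_000_000, size=2_000)   # toy 6-digit sample
df = pd.DataFrame([[int(c) for c in str(x)] for x in numbers],
                  columns=[f"d{i}" for i in range(6)])   # hypothetical names
for d in range(10):
    df[f"count_{d}"] = [str(x).count(str(d)) for x in numbers]

dls = TabularDataLoaders.from_df(df, cont_names=[f"d{i}" for i in range(6)],
                                 y_names=[f"count_{d}" for d in range(10)],
                                 y_block=RegressionBlock())
learn = tabular_learner(dls, layers=[96, 96, 96])        # three ReLU hidden layers
learn.fit_one_cycle(16, lr_max=0.01)                     # 16 epochs, as above
```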
It could be hypothesized that as the neural networks utilize stochastic gradient descent to minimize the loss by altering the parameters or weights, and implement non-linearities through the ReLU layers, they at least trace out the non-linear pattern very well [1, 2]. The 100-dimensional embeddings were used as an input feature for each of the ten possible digit values. Overall, they did not significantly alter the predictions across the different metrics.
It is an intriguing detail that the classical ML models, which gave an accuracy of nearly 90% for 6-digit numbers, although by memorization, fell to less than or nearly 50% accuracy for 10-digit ones. On the contrary, neural networks hardly changed by even 1% in accuracy across datasets. They also produced less than half the errors compared to the best classical ML model baseline, which is the random forest, in both metrics. The following loss curve vs the number of epochs graphs, refer to Figure 10(a), 10(b), 10(c) and 10(d), indicate that the neural networks did not undergo any form of overfitting or memorization. This shows the generalization capability of neural networks.
Similar to the classical ML models, we also worked with the following consecutive numbers for the neural networks: 6-digit numbers - (999999, 999998) and (100000, 100001); 10-digit numbers - (9999999999, 9999999998) and (1000000000, 1000000001). First, here are the results from ChatGPT3 when asked to report the frequency of each digit in the above numbers; refer to Figures 11(a), 11(b), 11(c), 11(d), 11(e), 11(f).
Figure 10: **(a)-(d):** _Loss (MSE) Curves for Neural Networks vs Number of Epochs_
To summarize the results: except for the number 9,999,999,999, which it predicted completely correctly, all the predictions by ChatGPT3 were even worse than those of the classical ML models. This further showcases how deceptive the simplicity of the task is. The neural networks, on the other hand, produced the following results after the classification modification:
* 6 - Digit Dataset:
Figure 11: **(a) - (f):**_ChatGPT3 responses for the above-mentioned numbers_
* Input: (999999, 999998) and (100000, 100001)
* Neural Network output: [0,0,0,0,0,0,0,0,0,5] and [0,0,0,0,0,0,0,0,1,4] for the former pair, and [5,1,0,0,0,0,0,0,0,0] and [4,2,0,0,0,0,0,0,0,0] for the latter.
* Neural Network with Embedding output: [0,0,0,0,0,0,0,0,0,6] and [0,0,0,0,0,0,0,0,1,5] for the former pair, and [5,3,0,0,0,0,0,0,0,0] and [3,3,0,0,0,0,0,0,0,0] for the latter.
* 10 - Digit Dataset:
* Input: (9999999999, 9999999998) and (1000000000,1000000001)
* Neural Network output: [0,0,0,0,1,1,0,0,0,9] and [0,0,0,0,1,1,0,0,1,8] for the former pair, and [7,2,1,0,1,0,0,1,0,0] and [7,2,1,0,0,0,1,0,0,0] for the latter.
* Neural Network with Embedding output: [0,1,0,0,0,1,0,2,2,9] and [0,0,0,0,1,0,0,0,1,9] for the former pair, and [9,1,0,0,0,1,0,0,0,0] and [9,2,0,0,0,0,2,0,0,1] for the latter.
Interestingly, half of these predictions are incorrect, but the other half are either completely correct or off by only a digit or two. Unlike the classical ML models, the neural networks at least do not make the exact same prediction for successive numbers, which means that they are partially learning the pattern. However, similar to the classical ML models, their performance worsens significantly for 10-digit numbers. The proportion of the input space covered by the training data seems to play a significant role in the performance of all the models, though to varying degrees.
#### Ablation Study
When running the neural networks on the 6-digit and 10-digit test sets, we found some alternative hyperparameter values (learning rate and layer sizes) that gave significantly better results on the regression metrics. They are listed in Table 5.
## Conclusion
In this research work we compared the performance of different classical machine learning models and neural networks in identifying the frequency of occurrences of each digit in a given number.
Table 5: Alternative hyperparameter values for neural networks on the test sets

| Test Set | Model | Hyperparameters | RMSE | MAE |
| --- | --- | --- | --- | --- |
| 6-Digit | Neural Network + Embedding | lr = 1e-5, layers = [96,96,96] | 0.093 | 0.073 |
| 10-Digit | Neural Network | lr = 3e-3, layers = [128,128,128] | 0.171 | 0.130 |
| 10-Digit | Neural Network + Embedding | lr = 5e-3, layers = [256,256,256] | 0.221 | 0.168 |
We observed that the neural networks significantly outperformed the classical ML models in terms of both the regression and classification metrics, for both the 6-digit and 10-digit number datasets.
We discovered that some behaviors of the classical machine learning models, such as the value-based split conditions and leaf averaging, made the trees extremely biased and led to overfitting and memorization. Thus they failed at pattern recognition. The neural networks, on the other hand, thanks to their non-linear optimization, were substantially more successful in recognizing the evident pattern. The accuracy was greater than 95% for all scenarios, which indicates that the deep learning models did, in fact, learn the pattern accurately. This research further acknowledges the vast learning capabilities and adaptability of neural networks reported in previous research work.
All the experiments were conducted on a MacBook Air (M2) over a period of two months. With more time, one could extend the research to datasets with larger numbers of digits and possibly uncover further trends with neural networks. Regardless, they already seem reliable at learning this unconventional, yet simple, pattern.
Furthermore, despite the research being experimental in nature, the results obtained here can potentially be applied to downstream computer vision problems, such as counting the number of times a specific object occurs in an image, which is an essential task in many computer vision applications [3, 5, 15, 16]. Also, the ability to detect the most frequent elements can be used to detect rare elements, which can have applications in healthcare, e.g. detecting rare diseases.
## Acknowledgement
I would like to acknowledge the unconditional support and guidance offered by my mentor, Mr. Viveka Kulharia, PhD in Computer Vision from the University of Oxford, for assisting me in everything, from researching the idea through his resources to writing the paper.
|
2309.05102 | Is Learning in Biological Neural Networks based on Stochastic Gradient
Descent? An analysis using stochastic processes | In recent years, there has been an intense debate about how learning in
biological neural networks (BNNs) differs from learning in artificial neural
networks. It is often argued that the updating of connections in the brain
relies only on local information, and therefore a stochastic gradient-descent
type optimization method cannot be used. In this paper, we study a stochastic
model for supervised learning in BNNs. We show that a (continuous) gradient
step occurs approximately when each learning opportunity is processed by many
local updates. This result suggests that stochastic gradient descent may indeed
play a role in optimizing BNNs. | Sören Christensen, Jan Kallsen | 2023-09-10T18:12:52Z | http://arxiv.org/abs/2309.05102v3 | # Is Learning in Biological Neural Networks based on Stochastic Gradient Descent?
###### Abstract
In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the brain relies only on local information, and therefore a stochastic gradient-descent type optimization method cannot be used. In this paper, we study a stochastic model for supervised learning in BNNs. We show that a (continuous) gradient step occurs approximately when each learning opportunity is processed by many local updates. This result suggests that stochastic gradient descent may indeed play a role in optimizing BNNs.
**Keywords:** Biological neural networks, Schmidt-Hieber model, stochastic gradient descent, supervised learning
## 1 Introduction
In order to understand how biological neural networks (BNNs) work, it seems natural to compare them with artificial neural networks (ANNs). Although the definition of the latter is inspired by the former, they also differ in several aspects. One of them is the way the network parameters are updated.
In simple terms, an ANN learns from data by adjusting the weights of the connections between nodes in order to minimize a loss function that measures the difference between the desired output and the actual output of the network. More specifically, the optimization step is performed using the Stochastic Gradient Descent (SGD) algorithm, which iteratively updates the weights of the network by moving them in the direction of the steepest descent of the empirical loss function of a single training sample. The gradient itself is computed with the so-called backpropagation algorithm. In particular, the update of any parameter is based on the states of all other parameters. Such a mechanism does not seem to be biologically plausible for BNNs, as many authors have pointed out. Parameter update in BNNs occurs only locally, and distant neurons are only indirectly connected through the endogenous reward system. This observation is closely related to the weight transportation problem [6; 2; 4]. We refer to [12; 11] for a detailed discussion about the role of SGD in BNN, which the author of [10; Section 5] summarizes as follows: "[T]here are various theories that are centered around the idea that the learning in BNNs should be linked to gradient descent. All of these approaches, however, contain still biological implausibilities and lack a theoretical analysis."
The starting point for the present paper is the recent article [10] just cited. In this seminal study, the author proposes a very persuasive stochastic model for supervised learning in BNNs,
which has a thorough biological foundation in terms of spike-timing-dependent plasticity. We review and discuss this setup in Section 2. In this model the local updating rule of the connection parameters in BNNs turns out to be a zero-order optimization procedure. More precisely, it is shown in [10] that the expected value of the iterates coincides with a modified gradient descent. However, this holds only on average. The noise for such zero-order methods is so high that one can hardly imagine effective learning based on it, see [3, 8, 1]. The author himself writes in [10, Section 4]: "It remains to reconcile the observed efficiency of learning in biological neural networks with the slow convergence of zero-order methods."
In this paper we make an attempt to achieve this reconciliation. To this end, we consider in Section 3 a slight modification of the model of [10]. More specifically, we relax the assumption that for each learning opportunity and each connection, exactly one spike is released. Instead, we assume that a large number of spikes is released for each training sample in the BNN model and thus many parameter updates are made. It turns out that with this modification, the updates correspond approximately to a continuous descent step along the gradient flow, see Theorem 1. This can be interpreted in the sense that it is not biologically implausible that BNNs use a kind of SGD algorithm after all, but without explicitly computing the gradient.
## 2 The Schmidt-Hieber model for BNNs revisited
We begin this section by reviewing the model introduced in [10]. It considers a classical instance of supervised learning: input-output pairs \((\mathbf{X}_{1},Y_{1}),(\mathbf{X}_{2},Y_{2}),\ldots\) are given as observations, all being identically distributed. The goal is to predict the output \(Y\) for each new input \(\mathbf{X}\) based on previous training data. This setting includes, for example, classification (when the set of possible outcomes of \(Y\) is finite) or regression problems.
A (feedforward) biological neural network (BNN) is modeled in [10] as a directed acyclic graph with input neurons receiving information from the observations \(\mathbf{X}_{k}\) and generating a predicted response \(\widehat{Y}_{k}\) as output. The nodes represent the neurons in the network and an edge \(\nu=(i,j)\) between two nodes \(i\) and \(j\) indicates that neuron \(i\) is presynaptic for neuron \(j\). Each element \((i,j)\) in the edge set \(\mathcal{T}\) has a weight \(w_{ij}\) which indicates the strength of the connection between \(i\) and \(j\). While the structure of the graph does not change, the weights \(w_{ij}\) are adjusted in each learning step.
Spike-timing-dependent plasticity (STDP) is chosen as the biological mechanism to update the parameters. It is considered as a form of Hebbian learning [5], which states that neurons that fire together wire together. More precisely, the synaptic weight \(w_{ij}\) changes depending on the timing of the spikes from neuron \(i\) to neuron \(j\). The weight decreases when neuron \(i\) spikes before neuron \(j\), and increases when neuron \(j\) spikes before neuron \(i\). The closer the spikes are in time, the larger the change in weight. It is important to note that the spike times are modeled as random variables. After some standardization (see [10, Equation (4.3)]) the update of the parameter for edge \((i,j)\) becomes
\[w_{ij}\gets w_{ij}+w_{ij}C(e^{-U_{ij}}-e^{U_{ij}}),\]
where \(U_{ij}\) are uniformly distributed random variables on some interval \([-A,A]\), i.e. \(U_{ij}\sim\mathcal{U}(-A,A)\), modeling the random spike times. The constant \(C\) represents the effect of a reward system, for example by neurotransmitters such as dopamine. It plays a key role for any meaningful learning process. In the present setup, the reward is tied to the success of predicting \(Y_{k}\), more specifically, whether the task is solved better or worse than in earlier trials. Using the standardization \(\theta_{ij}:=\log w_{ij}\) and a Taylor approximation, the following structure is derived in [10, Equation (4.5)] for the update of \(\theta_{ij}\) at step \(\ell\):
\[\theta_{ij}^{(\ell)}=\theta_{ij}^{(\ell-1)}+\alpha^{(\ell-1)}\Big{(}L^{(\ell-1 )}(\boldsymbol{\theta}^{(\ell-1)}+\mathbf{U}^{(\ell)})-L^{(\ell-2)}( \boldsymbol{\theta}^{(\ell-2)}+\mathbf{U}^{(\ell-1)})\Big{)}\Big{(}e^{-U_{ij}^ {(\ell)}}-e^{U_{ij}^{(\ell)}}\Big{)}, \tag{2.1}\]
where \(\alpha^{(\ell)}>0\) is a learning rate, \(\mathbf{\theta}^{(\ell)}=\left(\theta^{(\ell)}_{ij}\right)_{(i,j)\in\mathcal{T}}\) denotes the vector of parameters for all edges, \(\mathbf{U}^{(\ell)}\) the vector of the independent uniformly distributed random variables \(U^{(\ell)}_{ij}\) for all edges \((i,j)\), and \(L^{(\ell)}\) the loss function associated with the respective step \(\ell\). Thus, the update of the weights of the individual edges is in fact affected by the state of the entire network, but only through the value of the common loss function, which provides an assessment of the learning success. In particular, no gradient appears in the mechanism.
In [10] the author considers the case where for each input-output pair \((\mathbf{X}_{k},Y_{k})\) the parameters are updated only once. To this end, the loss of the current input-output pair is compared with the previous one. More specifically, we have \(\ell=k\) as well as \(L^{(k)}(\mathbf{\theta})=L(\mathbf{\theta},\mathbf{X}_{k},Y_{k})\), \(k=1,2,\dots\), leading to the update rule
\[\theta^{(k)}_{ij}=\theta^{(k-1)}_{ij}+\alpha^{(k-1)}\Big{(}L( \mathbf{\theta}^{(k-1)}+\mathbf{U}^{(k)},\mathbf{X}_{k-1},Y_{k-1})-L(\mathbf{\theta}^ {(k-2)}+\mathbf{U}^{(k-1)},\mathbf{X}_{k-2},Y_{k-2})\Big{)}\Big{(}e^{-U^{(k)}_ {ij}}-e^{U^{(k)}_{ij}}\Big{)}. \tag{2.2}\]
As a main result, the author shows in [10, Theorem 1] that this procedure corresponds on average to a gradient descent method, with a gradient evaluated not exactly at \(\mathbf{\theta}^{(k-1)}\) but slightly perturbed randomly. However, as noted in [10], sufficiently fast convergence cannot be expected for such a zero-order method.
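To make this averaging property concrete, the following minimal numerical sketch (our own toy check, not code from [10]) estimates the Monte Carlo mean of the update (2.2) at a fixed parameter vector for a quadratic loss \(L(\theta)=\frac{1}{2}\|\theta\|^{2}\); in this toy case one can check that the expected update is \(-\alpha\frac{2}{A}(A\cosh A-\sinh A)\nabla L(\theta)\approx-\alpha\frac{2}{3}A^{2}\nabla L(\theta)\), so the estimate should align with the negative gradient. All constants are arbitrary illustrative choices.

```python
# Toy Monte Carlo check (ours): the zero-order update (2.2), averaged at a
# fixed theta, points along -grad L for a quadratic loss.
import numpy as np

rng = np.random.default_rng(1)
d, A, alpha, samples = 4, 0.3, 0.5, 500_000
theta = rng.normal(size=d)

U = rng.uniform(-A, A, size=(samples, d))   # spike times of the current step
V = rng.uniform(-A, A, size=(samples, d))   # spike times of the previous step
loss_u = 0.5 * ((theta + U) ** 2).sum(axis=1)
loss_v = 0.5 * ((theta + V) ** 2).sum(axis=1)
updates = alpha * (loss_u - loss_v)[:, None] * (np.exp(-U) - np.exp(U))

print(updates.mean(axis=0))                 # Monte Carlo drift estimate
print(-alpha * (2 / 3) * A**2 * theta)      # predicted drift, up to O(A^4)
```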
## 3 Multiple updates per learning opportunity
The key ingredient leading to (2.2) is that, for each learning opportunity and each connection, exactly one spike is triggered and thus only one update of the parameters is made. The author himself calls this assumption "strong", see [10, Section 4].
Given the average spike frequency in real biological systems and the strong brain activity even at immobile rest [7], it seems more reasonable to assume instead a large number of spikes per learning opportunity. This corresponds to a series \(\mathbf{\theta}^{(k,0)},\dots,\mathbf{\theta}^{(k,n)}\) of updates to the parameters after observing any input-output pair \((\mathbf{X}_{k-1},Y_{k-1})\). The assessment of the update steps is based on the loss function associated with the most recent observation \((\mathbf{X}_{k-1},Y_{k-1})\). More specifically, equation (2.1) turns into
\[\theta^{(k,\ell)}_{ij} =\theta^{(k,\ell-1)}_{ij}+\alpha^{(k-1,\ell-1)}\Big{(}L(\mathbf{ \theta}^{(k,\ell-1)}+\mathbf{U}^{(k,\ell)},\mathbf{X}_{k-1},Y_{k-1})-L(\mathbf{ \theta}^{(k,\ell-2)}+\mathbf{U}^{(k,\ell-1)},\mathbf{X}_{k-1},Y_{k-1})\Big{)}\] \[\quad\times\left(e^{-U^{(k,\ell)}_{ij}}-e^{U^{(k,\ell)}_{ij}}\right) \tag{3.1}\]
for \(\ell\geq 1\) with initial values given by \(\mathbf{\theta}^{(k,0)}:=\mathbf{\theta}^{(k,-1)}:=\mathbf{\theta}^{(k-1,n)}\). Since \(n\) is considered to be large, the individual update steps should be small in order to avoid overfitting.
We start by analyzing this update rule for a fixed observation \((\mathbf{X}_{k-1},Y_{k-1})\), i.e. for a fixed \(k\). For ease of notation we suppress the dependence on \(k\) in the following considerations. Since our goal is to study the limiting behavior for a large number of update steps \(n\), we instead make the dependence on \(n\) explicit. So we rewrite (3.1) as
\[{}^{n}\theta^{(\ell)}_{ij}={}^{n}\theta^{(\ell-1)}_{ij}+{}^{n}\alpha^{(\ell-1)}\Big{(}L\big{(}{}^{n}\mathbf{\theta}^{(\ell-1)}+{}^{n}\mathbf{U}^{(\ell)}\big{)}-L\big{(}{}^{n}\mathbf{\theta}^{(\ell-2)}+{}^{n}\mathbf{U}^{(\ell-1)}\big{)}\Big{)}\Big{(}e^{-{}^{n}U^{(\ell)}_{ij}}-e^{{}^{n}U^{(\ell)}_{ij}}\Big{)}. \tag{3.2}\]
The parameter update depends on increments of a loss function, which resembles a gradient at first glance. Note, however, that this increment term is the same for all edges and, moreover, randomness occurs both in the loss function and in the external factor. We now consider the following dependencies on \(n\):
\[{}^{n}\alpha^{(\ell-1)}:=\alpha,\quad A_{n}:=n^{-1/3}A,\quad{}^{n}U^{(\ell)}_ {ij}\sim\mathcal{U}(-A_{n},A_{n}), \tag{3.3}\]
with constants \(\alpha,A>0\). Moreover, we rescale and extend the discrete-time process \(({}^{n}\theta^{(\ell)}_{ij})_{\ell=-1,\ldots,n}\) in time, defining a continuous-time process \(\mathbf{Z}^{n}=(\mathbf{Z}^{n}_{t})_{t\in[-1/n,1]}\) by

\[Z^{n}_{t^{n}_{\ell},ij}=Z^{n}_{t^{n}_{\ell-1},ij}+\alpha\Big{(}L\big{(}\mathbf{Z}^{n}_{t^{n}_{\ell-1}}+{}^{n}\mathbf{U}^{(\ell)}\big{)}-L\big{(}\mathbf{Z}^{n}_{t^{n}_{\ell-2}}+{}^{n}\mathbf{U}^{(\ell-1)}\big{)}\Big{)}\Big{(}e^{-{}^{n}U^{(\ell)}_{ij}}-e^{{}^{n}U^{(\ell)}_{ij}}\Big{)} \tag{3.4}\]

for \(t^{n}_{\ell}:=\ell/n\) and by \(\mathbf{Z}^{n}_{t}:=\mathbf{Z}^{n}_{\lfloor tn\rfloor/n}\) for arbitrary \(t\in[0,1]\).
As a candidate limit for large \(n\) we consider a standard rescaled gradient process \(\mathbf{Z}=(\mathbf{Z}_{t})_{t\in[0,1]}\), which is defined as the solution to the deterministic ordinary differential equation (ODE)
\[\frac{d\mathbf{Z}_{t}}{dt}=-4\alpha\nabla L(\mathbf{Z}_{t}),\quad\mathbf{Z}_ {0}=\mathbf{\theta}^{(k,0)}, \tag{3.5}\]
see e.g. [9]. In order for our main theorem to hold, we assume that

\[\nabla L\ \text{is bounded and Lipschitz continuous with Lipschitz constant}\ \lambda. \tag{3.6}\]
\(\mathbf{Z}\) naturally emerges as a limit if one runs the ordinary gradient descent algorithm for minimizing the function \(L\) with many small steps. The main result of this paper states that the rescaled STDP process \(\mathbf{Z}^{n}\) converges to \(\mathbf{Z}\) as well. More precisely, we have
**Theorem 1**.: _Assume (3.3) and (3.6). Then, for each fixed training sample \(k\), the rescaled process \(\mathbf{Z}^{n}\) of the BNN weights converges to the rescaled gradient process \(\mathbf{Z}\) uniformly in \(L^{2}\), i.e._
\[\lim_{n\to\infty}\mathbb{E}\left(\sup_{t\in[0,1]}\|\mathbf{Z}^{n}_{t}-\mathbf{Z}_{t}\|^{2}\right)=0.\]
The previous theorem shows that learning in BNNs based on the local principle of STDP may indeed lead to optimization of parameters according to SGD if the number of spikes per learning opportunity is high. To wit, we start with an initial parameter vector \(\mathbf{\theta}^{(0)}\) and loss function \(L=L(\cdot,\mathbf{X}_{1},Y_{1})\). According to Theorem 1 the STDP nearly performs a continuous gradient step as in (3.5), leading to an updated parameter vector \(\mathbf{\theta}^{(1)}\). Switching now to the loss function \(L=L(\cdot,\mathbf{X}_{2},Y_{2})\), the next approximate gradient step leads to an updated vector \(\mathbf{\theta}^{(2)}\) etc.
This procedure only differs from the classical SGD in that we make many small instead of one large gradient step per learning opportunity. Interestingly, neither the gradient nor even the functional dependence of the loss function on the parameters need to be known explicitly for this purpose. By contrast, it relies crucially on the randomness in the update, which may seem counterintuitive because the desired gradient ODE (3.5) is deterministic.
Proof of Theorem 1.: Following the argument in [10, Proof of Theorem 1], we may decompose the dynamics of \(\mathbf{Z}^{n}\) in coordinate \(\nu=(i,j)\) as
\[Z^{n}_{t^{n}_{\ell},\nu}=Z^{n}_{t^{n}_{\ell-1},\nu}+c^{n}_{\nu}\Big{(}\mathbf{Z}^{n}_{t^{n}_{\ell-1}}\Big{)}+D^{n}_{\ell,\nu}, \tag{3.7}\]
where
\[b^{n}_{\nu}(z):=-n\alpha e^{-A_{n}}C(A_{n})\partial_{\nu}L(z),\qquad c^{n}_{\nu}(z):=\frac{1}{n}\mathbb{E}\,b^{n}_{\nu}(z+{}^{n}\mathbf{V}^{\nu}),\]
and
\[D^{n}_{\ell,\nu}:=\alpha\Big{(}L\big{(}\mathbf{Z}^{n}_{t^{n}_{\ell-1}}+{}^{n}\mathbf{U}^{(\ell)}\big{)}-L\big{(}\mathbf{Z}^{n}_{t^{n}_{\ell-2}}+{}^{n}\mathbf{U}^{(\ell-1)}\big{)}\Big{)}\Big{(}e^{-{}^{n}U^{(\ell)}_{\nu}}-e^{{}^{n}U^{(\ell)}_{\nu}}\Big{)}-c^{n}_{\nu}\Big{(}\mathbf{Z}^{n}_{t^{n}_{\ell-1}}\Big{)}\]
is a martingale difference process with respect to the filtration \((\mathcal{F}_{\ell})_{\ell=0,\ldots,n}\) generated by all randomness up to step \(\ell\). Moreover, the random vector \({}^{n}\mathbf{V}^{\nu}\) has independent components where all but the \(\nu\)th are uniformly distributed on \([-A_{n},A_{n}]\) and the \(\nu\)th has density
\[f_{A_{n}}(x):=C(A_{n})^{-1}(e^{A_{n}}-e^{x})(e^{A_{n}}-e^{-x})\]
on \([-A_{n},A_{n}]\) with normalizing constant
\[C(A_{n}):=\int_{-A_{n}}^{A_{n}}(e^{A_{n}}-e^{x})(e^{A_{n}}-e^{-x})dx=2A_{n}(e^{ 2A_{n}}+1)+2-2e^{2A_{n}}.\]
We obtain
\[\mathbf{Z}^{n}_{t}-\mathbf{Z}_{t}=\int_{0}^{\lfloor tn\rfloor/n}\Big{(}\mathbf{b}(\mathbf{Z}^{n}_{s})-\mathbf{b}(\mathbf{Z}_{s})\Big{)}ds+\mathbf{C}^{n}_{\lfloor tn\rfloor/n}+\mathbf{M}^{n}_{\lfloor tn\rfloor/n}+\delta^{n}_{t},\]
for all \(t\in[0,1]\) where
\[b_{\nu}(z):=-4\alpha\partial_{\nu}L(z),\qquad\mathbf{b}:=(b_{\nu})_{\nu},\quad\mathbf{b}^{n}:=(b^{n}_{\nu})_{\nu},\quad\mathbf{c}^{n}:=(c^{n}_{\nu})_{\nu},\]
\[\mathbf{C}^{n}_{\lfloor tn\rfloor/n}:=\sum_{\ell:t^{n}_{\ell}\leq t}\Big{(}\mathbf{c}^{n}\Big{(}\mathbf{Z}^{n}_{t^{n}_{\ell-1}}\Big{)}-\frac{1}{n}\mathbf{b}\Big{(}\mathbf{Z}^{n}_{t^{n}_{\ell-1}}\Big{)}\Big{)}=\sum_{\ell:t^{n}_{\ell}\leq t}\Big{(}\mathbf{c}^{n}\Big{(}\mathbf{Z}^{n}_{t^{n}_{\ell-1}}\Big{)}-\frac{1}{n}\mathbf{b}^{n}\Big{(}\mathbf{Z}^{n}_{t^{n}_{\ell-1}}\Big{)}\Big{)}+\frac{1}{n}\sum_{\ell:t^{n}_{\ell}\leq t}\Big{(}\mathbf{b}^{n}\Big{(}\mathbf{Z}^{n}_{t^{n}_{\ell-1}}\Big{)}-\mathbf{b}\Big{(}\mathbf{Z}^{n}_{t^{n}_{\ell-1}}\Big{)}\Big{)},\]
\[\mathbf{M}^{n}_{\lfloor tn\rfloor/n}:=\sum_{\ell:t^{n}_{\ell}\leq t}\mathbf{D}^{n}_{\ell},\qquad\mathbf{D}^{n}_{\ell}:=(D^{n}_{\ell,\nu})_{\nu},\]
\[\delta^{n}_{t}:=\mathbf{Z}_{\lfloor tn\rfloor/n}-\mathbf{Z}_{t}.\]
Set
\[\varepsilon_{n}(t):=\sqrt{\mathbb{E}\sup_{s\leq t}\|\mathbf{Z}_{s}^{n}- \mathbf{Z}_{s}\|^{2}}.\]
Using (3.6) and the triangle inequality, we conclude that
\[\varepsilon_{n}(t)\leq\gamma_{n}+4\alpha\lambda\int_{0}^{t}\varepsilon_{n}(s)ds\]
with
\[\gamma_{n}=\sqrt{\mathbb{E}\sup_{\ell=0,\ldots,n}\left\|\mathbf{C}^{n}_{t^{n}_{\ell}}\right\|^{2}}+\sqrt{\mathbb{E}\sup_{\ell=0,\ldots,n}\left\|\mathbf{M}^{n}_{t^{n}_{\ell}}\right\|^{2}}+\sup_{t\in[0,1]}\|\delta^{n}_{t}\|.\]
By Gronwall's inequality we obtain
\[\varepsilon_{n}(t)\leq\gamma_{n}\exp(4\alpha\lambda).\]
It is therefore sufficient to prove that \(\gamma_{n}\to 0\).
Note that
\[\left\|b_{\nu}^{n}\Big{(}\mathbf{Z}_{t_{\ell-1}^{n}}^{n}\Big{)}-b_{\nu}\Big{(} \mathbf{Z}_{t_{\ell-1}^{n}}^{n}\Big{)}\right\|\leq\alpha\Big{|}e^{-A_{n}}nC(A _{n})-4\Big{|}\sup_{z}\|\partial_{\nu}L(z)\|\]
and
\[\left\|c^{n}\Big{(}\mathbf{Z}_{t_{\ell-1}^{n}}^{n}\Big{)}-\frac{1}{n}b_{\nu}^ {n}\Big{(}\mathbf{Z}_{t_{\ell-1}^{n}}^{n}\Big{)}\right\|\leq\frac{1}{n}\alpha e ^{-A_{n}}nC(A_{n})\lambda A_{n}\]
because the random vectors \({}^{n}\mathbf{V}^{\nu}\) are all concentrated on \([-A_{n},A_{n}]\). Since \(e^{-A_{n}}nC(A_{n})\to 4\) and \(A_{n}\to 0\) as \(n\to\infty\), we obtain that \(\mathbb{E}\sup_{\ell=0,\ldots,n}\left\|\mathbf{C}^{n}_{t^{n}_{\ell}}\right\|^{2}\to 0\) as desired. Moreover, we have that
\(\sup_{t\in[0,1]}\|\delta_{t}^{n}\|\to 0\) because \(\mathbf{Z}\) is uniformly continuous as a continuous function on a compact interval.
So it only remains to be verified that \(\mathbb{E}\sup_{\ell=0,\dots,n}\|\mathbf{M}_{t_{\ell}^{n}}^{n}\|^{2}\to 0\). Doob's inequality yields
\[\mathbb{E}\sup_{\ell=0,\dots,n}\left\|\mathbf{M}_{t_{\ell}^{n}}^{n}\right\|^{2} \leq 4\mathbb{E}\left(\sum_{\nu}\sum_{\ell=1}^{n}\left(D_{\ell,\nu}^{n} \right)^{2}\right).\]
Since the gradient of \(L\) is bounded and the components of the \({}^{n}\mathbf{U}^{(\ell)}\) are concentrated on the interval \([-A_{n},A_{n}]=[-n^{-1/3}A,n^{-1/3}A]\), we have that
\[\left\|\alpha\left(L\left(\mathbf{Z}^{n}_{t^{n}_{\ell-1}}+{}^{n}\mathbf{U}^{(\ell)}\right)-L\left(\mathbf{Z}^{n}_{t^{n}_{\ell-2}}+{}^{n}\mathbf{U}^{(\ell-1)}\right)\right)\left(e^{-{}^{n}U^{(\ell)}_{\nu}}-e^{{}^{n}U^{(\ell)}_{\nu}}\right)\right\|\leq an^{-2/3}+b\left\|\mathbf{Z}^{n}_{t^{n}_{\ell-1}}-\mathbf{Z}^{n}_{t^{n}_{\ell-2}}\right\| \tag{3.8}\]
for some constants \(a,b\in\mathbb{R}_{+}\). By Lemma 2 below and \(\mathbf{Z}^{n}_{t^{n}_{0}}-\mathbf{Z}^{n}_{t^{n}_{-1}}=0\) this implies that (3.8) is bounded by a multiple of \(n^{-2/3}\) and hence its square by \(n^{-4/3}\). Moreover, \(\|c^{n}_{\nu}(\mathbf{Z}^{n}_{t^{n}_{\ell-1}})\|\) is bounded by a multiple of \(1/n\). Together, this yields that \(\sum_{\ell=1}^{n}(D^{n}_{\ell,\nu})^{2}\) is bounded by a multiple of \(n^{-1/3}\), which yields the desired convergence.
**Lemma 2**.: \(x_{n}\leq a+bx_{n-1}\)_, \(n=1,2,\dots\) for \(a,b,x_{n}\in\mathbb{R}_{+}\) implies_
\[x_{n}\leq x_{0}b^{n}+a\frac{1-b^{n}}{1-b}.\]
Proof.: This follows by induction on \(n\).
|
2309.11188 | Rebellions and Impeachments in a Neural Network Society | Based on a study of the modern presidential democracies in South America, we
present a statistical mechanics exploration of the collective, coordinated
action of political actors in the legislative chamber that may result in the
impeachment of the executive. By representing the legislative political actors
with neural networks, we observed that the larger the effective number of
presidential-agenda items treated, the smaller the chances for a
cross-party dialogue, which, if combined with a decrement in the president's
public approval rating, could trigger an impeachment process. | Juan Neirotti, Nestor Caticha | 2023-09-20T10:18:17Z | http://arxiv.org/abs/2309.11188v2 | # Rebellions and Impeachments in a Neural Network Society
###### Abstract
Based on a study of the modern presidential democracies in South America, we present a statistical mechanics exploration of the collective, coordinated action of political actors in the legislative chamber that may result in the impeachment of the executive. By representing the legislative political actors with neural networks, we observed that the larger the effective number of presidential-agenda items treated, the smaller the chances for a cross-party dialogue, which, if combined with a decrement in the president's public approval rating, could trigger an impeachment process.
Introduction
The mechanism for deposition of institutional power in several areas of the world has changed in the last half century. A new pattern of government overthrow took over from the traditional military coup, especially in Latin America, as extensively documented in [27]. These abrupt changes of collective behavior are typically seen in a society which exerts pressure on the parliament to promote changes outside the election model by parliamentary impeachment. External pressures, which include elements such as the state of the economy, the perception of corruption or their combination, may be distal reasons, but on closer look the correlated behavior of the parliament follows the emboldenment that derives from the collective perception that there is sufficient strength in the opposition camp to overthrow the executive. Technically still within the realm of constitutional order, despite being associated with affective rather than ideological affinity [14; 15], this transition mechanism seems to bring new theoretical challenges to comparative studies of presidentialism.
To illustrate the problem and to better understand the expected characteristics of the model, we will briefly discuss three different instances where the executive was either impeached or nearly impeached by the legislative. The first case corresponds to the presidency of Fernando Collor de Mello, from Brazil. Collor won the 1989 elections in the second round with 54% of the votes, but his party only had 8% of the seats in the Chamber of Deputies and 4% in the Senate. By March 1990, when Collor was sworn into office and his approval ratings were at +60%, the consumer price index rose by 84% (observe the evolution of these numbers in figure 1); at this point Collor launched his first economic plan. But in spite of the extreme measures imposed, government control of inflation proved elusive and popular discontent increased. The application of a second (unsuccessful) economic plan (Collor II) and a number of corruption cases revealed by the press provoked a plummet in the approval ratings and triggered a number of street demonstrations. With very few allies in the Legislative, impeachment procedures were triggered and by the end of September 1992 the Chamber of Deputies approved (by a vote of 441-38) the impeachment by the Senate.
The second case corresponds to the presidency of Carlos Andres Perez, from Venezuela. Perez won his second term in 1988 with 53% of the vote and, in contrast to Collor's case, he was the leader of the largest party in the country. In order to bring under control the critical situation inherited from the previous president, Perez announced an economic package (the Great Turnaround) in February 1989. These measures triggered an abrupt rise in inflation, and an immediate popular response in the form of riots (Caracazo). Observe the evolution of the presidential approval rating and the number of popular marches per month in figure 2. The human rights violations occurring during the days of the Caracazo, the increasing number of protests, and media exposés revealing scandals involving members of the administration compromised the credibility of the government and its capacity to control the economy. By September 1991 the followers of president Perez within the ruling party lost their internal elections. In February and November
Figure 1: Time evolution of the consumer price index (full-black line) [1] and the presidential approval rating (dashed-red line) from 1990 to September 1992 (adapted from [27]).
1992 there were attempted coups d'état. By May 1993 Perez was suspended by the Senate.
The third case corresponds to the presidency of Ernesto Samper, from Colombia. He won the elections of June 1994 with 50.6% of the votes. In the weeks following the election, the press began to reveal a possible contribution from the Cali drug cartel to Samper's campaign. The Prosecutor General opened an investigation (Process 8000) and, preemptively, Samper asked the Congress (under his party's control) to also investigate the accusations. The investigations were closed and, although his presidential approval rating continued to decline, Samper kept control of the Congress and managed to finish his tenure (figure 3).
In these three cases we observe patterns also found in other presidencies. These observations can be synthesized
Figure 3: Approval and disapproval presidential ratings during Ernesto Samper’s tenure, from August 1994 to August 1998 [27].
Figure 2: Time evolution of the presidential approval rating (dashed-red line) and the number of popular marches per month during Carlos Andres Perez's second tenure (adapted from [27]).
as follows: At the beginning of the presidential period the presidential approval ratings are high, and they fluctuate according to the information the electorate receives about the current issues (either policies or scandals). Meanwhile the members of the legislative chamber discuss the president's proposals under the influence of their internal political alliances and the effective pressure exerted by the presidential approval ratings. The more items are discussed in the chamber, the more is known about the presidential agenda, which rearranges the chamber's alliances and provides more information to the pollsters, feeding into the cycle of influences and interactions. Naively, we conclude that the actions of the agents in the legislative chamber are based on the opinions they form on the items proposed by the president to be discussed, and that such discussions are modified by the alliances they have with each other and by the pressure from the polls. The larger the difference in opinions between legislative agents and the president, and the lower the presidential ratings, the higher the chances of impeachment.
There is a need to devise mechanisms for opinion formation and for alliance formation in order to provide insight for the understanding of the mechanics of modern presidential democracies. Empirical evidence coming from research in psychology supports the notion that there is a cost of dissent, with humans trying to attain social conformity modulated by peer pressure [3; 9; 30; 32], and that conformity is learned from interactions within a social network, e.g. [16]. The cost of dissension on some set of issues can be modeled using techniques of Statistical Mechanics of disordered systems, and there is an extensive literature on polarization [4; 7; 18; 21; 28; 31; 33] and echo chambers [8; 12; 22; 35] in opinion dynamics models. Our aim in this paper is to address yet again collective behavior by studying Agent Based Models, but with the specific aim of addressing constitutional impeachments.
The model is introduced in Section II. The analytical approach derives from the use of Statistical Mechanics in the space of interactions in the Gardner style [11], here applied not only to a single perceptron agent, but to a population of such agents. The relevant time scales of the different changes that can occur are discussed, and this leads to the methodology appropriate for the analysis. In Section III we present the structure of the agents, the relevant order parameters that characterize the macroscopic state of the society, and the analytical framework. The two types of quenched disorder, from the issues under discussion and the graph of interactions, lead, as shown in Section IV, to functional mean field equations that determine the thermodynamics of the model. In Section V we present analytical results obtained from the study of the saddle-point equations. Readers interested in the lessons that can be gleaned from this toy model should go to Section VI, where the interpretation in less mathematical terms can be found. A short version of the extensive calculations is shown in the Supplementary Material (SM), Section VII.1.
## II The model
Our members-of-congress agents are simple neural networks that discuss and emit for-or-against opinions about multidimensional issues. In addition there is a special agent, playing the role of the current executive leader, called the president, to be followed or deposed depending on the intensive parameters of the model.
The agenda consists of \(N\)-dimensional binary issues \(\mathbf{\xi}_{\mu}\), and a simple measure of the agenda's complexity is given by \(\alpha=P/N\), where \(P\) is the number of pertinent topics under discussion. The \(a^{\rm th}\) agent's for-or-against opinion is \(\sigma_{a\mu}\), arising from the issue and its internal state \(\mathbf{J}_{a}\), \(\sigma_{a\mu}=\varphi(\mathbf{J}_{a},\mathbf{\xi}_{\mu})\in\{-1,+1\}\), where \(a\) runs over the members of congress and the president is designated by the label \(B\); its opinion on issue \(\mathbf{\xi}_{\mu}\) is \(\sigma_{B\mu}=\varphi(\mathbf{B},\mathbf{\xi}_{\mu})\), where \(\mathbf{B}\) is the internal state of \(B\). A specific choice for \(\varphi\) will be postponed for now, but its output is binary, i.e. \(\varphi\in\{-1,+1\}\).
The inner circle or clique of a particular congress-agent is represented by an adjacency matrix \(\mathbf{G}\) with entries \(g_{ac}\neq 0\) if agent \(a\) cares about the opinion of agent \(c\) and zero if not. The weighted opinion on issue \(\mathbf{\xi}_{\mu}\) of \(a\)'s peers is \(\Sigma_{a\mu}=\sum_{c}g_{ac}\sigma_{c\mu}\).
We consider the cost for agent \(a\) to hold an opinion on the \(\mu\)th issue to arise from two contributions:
\[C_{a,\mu}=-\frac{1+\sigma_{B\mu}\sigma_{a\mu}}{2}-\frac{1-\sigma_{B\mu}\sigma _{a\mu}}{2}\sigma_{a\mu}\Sigma_{a\mu}. \tag{1}\]
Equation (1) implements a mechanism of _corroboration_ as follows. If agent \(a\) agrees with \(B\), i.e. \(\sigma_{B\mu}\sigma_{a\mu}>0\), only the first term contributes to the cost, which gets reduced by one unit. If \(a\) and \(B\) disagree, \(\sigma_{B\mu}\sigma_{a\mu}<0\) and the second term is different from zero. If the weighted opinion of \(a\)'s peers is in agreement with \(B\), i.e. \(\sigma_{B\mu}\Sigma_{a\mu}=-\sigma_{a\mu}\Sigma_{a\mu}>0\), the cost increases, and if \(\sigma_{a\mu}\Sigma_{a\mu}>0\) the cost decreases. If agreeing with its peers is less costly than agreeing with \(B\), \(a\) can form a local consensus against \(B\), through corroboration.
A simple rearrangement of terms allows us to write the cost as:
\[2C_{a,\mu}=-\sigma_{B\mu}\sigma_{a\mu}-\Sigma_{a\mu}\sigma_{a\mu}+\Sigma_{a \mu}\sigma_{B\mu}-1. \tag{2}\]
The first term describes the advantage of having the same opinion as the president. The second, of concurring with its peers. The third one can be attributed to the disadvantage that other members of its peer group are in alliance with the president. The last is just an additive constant.
The overall cost for the entire congress, defined by the microscopic states \(\{{\bf J}_{a}\}\) of the agents, the topological structure of alliances \({\mathbf{G}}\) and the complete presidential agenda \({\cal A}=\{{\mathbf{\xi}}_{\mu},\sigma_{B\mu}\}_{\mu=1,\ldots P}\), gives the full Hamiltonian cost of the system:
\[E(\{{\bf J}_{a}\},{\cal A},{\mathbf{G}})=\sum_{a,\mu}C_{a,\mu}. \tag{3}\]
The cost for agent \(a\), expression (2), depends on the \(g_{ac}\) in two places. In the first, which we leave as shown, it describes the interaction of the peers \(c\) with agent \(a\). The second describes the overall influence of the president on the group of peers of \(a\). To simplify matters we replace the second term by its mean \(\nu\eta_{0}\) and disregard its fluctuations, where \(\eta_{0}\) is the average intensity of the influence exerted by another agent and \(\nu\) the average size of the group. Then the overall cost, up to an additive constant, simplifies to:
\[E_{0}(\{{\bf J}_{a}\};{\cal A},{\mathbf{G}})=-\frac{1}{2}\left[(1-\nu\eta_{0})\sum _{a,\mu}\sigma_{B\mu}\sigma_{a\mu}+\sum_{\mu ac}g_{ac}\sigma_{a\mu}\sigma_{c \mu}\right]. \tag{4}\]
The choice of techniques used to analyze the system follows from a discussion of the relevant "physical" time scales of the problem. While there is no conservation law that applies to the global cost in any strict sense, there are several relevant time scales associated with this discussion. The agenda under discussion and the political alliances are supposed to remain valid on a long time scale \(\tau_{q}\) of around one year, certainly less than \(\tau_{P}\), the 4 or 5 years of the presidential cycle. For times of the order of \(\tau_{q}\) we expect the variables \(\nu\) (the co-conspirators' clique size), \(\eta_{0}\) (the co-conspirators' interaction strength), and \(\alpha\) (the volume of the agenda covered) to remain constant. Agents interact and may change opinions about the issues on a faster scale \(\tau_{op}\) of a few days. \(\tau_{op}\) is the typical time elapsed between the treatment of subsequent agenda items \(\mathbf{\xi}_{\mu}\) and \(\mathbf{\xi}_{\mu+1}\). The expected value of the cost is sufficiently stable on an intermediate time scale \(\tau_{C}\), which is larger than the time scale associated with the dynamics of the agents but much shorter than changes of the issues of national interest, \(\tau_{op}\ll\tau_{C}\ll\tau_{q}<\tau_{P}\). \(\tau_{C}\) is of the order of weeks, similar to the validity period of presidential polling data. This separation of the time scales determines the methodology of analysis of the problem and leads to a description of the system with a Boltzmann distribution with a \(\beta\), conjugated to the expected value of the cost, that controls the size of the fluctuations above the ground state. It can be interpreted as the pressure that society at large exerts on congress. As an example, [2] choose \(\beta\) as a measure of the president's polling. Since the time scale on which the agenda and the political alliances change is still larger, their random effect can be averaged over as quenched disorder. This is reasonable, since at least during \(\tau_{q}\) the prominent issues of the agenda are to some extent fixed, as are the intra-party alliances.
The macroscopic state of the system is characterized by order parameters to be described below. Still within the presidential cycle, changes due to externalities may lead to changes in the intensive parameters. Phase transitions may occur from a congress divided into situation and opposition parties to a majoritarian congress, in a state of either almost unanimous support for the president or almost total opposition. These transitions are to a constitutional dictatorship regime or to a state where the conditions for impeachment are ripe. They signal presidential crises driven by the collective properties of congress and not by external or internal military forces that act by simple dissolution of congress.
## III Methods and order parameters
We have not yet made explicit the _a priori_ measure of the \(\sigma\) variables. If it were just a product of independent uniform measures, e.g. Ising-like variables, several interesting features of the problem would be lost. Thus we opted for more structured agents, which we model by a neural network classifier with a binary for/against, \(\pm 1\) output. In order to keep it analytically manageable, we choose the simplest architecture, the single layer perceptron. Linearly separable models, in some manner similar to the Rescorla-Wagner model [29] from psychology, have been shown to be useful in describing human performance in several cases. Therefore the dynamical variables of an agent are \(N\)-dimensional vectors \(\mathbf{J}_{a}\) and its opinion on an issue is \(\sigma_{a\mu}=\mathrm{sgn}(\mathbf{J}_{a}\cdot\mathbf{\xi}_{\mu})\). The issues from the agenda are constructed by choosing independently \(P=\alpha N\) vertices of the \(N\)-dimensional hyper-cube with coordinates of absolute value equal to one.
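As a concrete illustration of the microscopic model, the following minimal sketch (our own; all names and parameter values are arbitrary illustrative choices) samples a random agenda, spherical perceptron agents, and a directed alliance graph distributed as in (6), and evaluates the cost (4).

```python
# Illustrative sketch (ours) of the microscopic model: perceptron opinions
# sgn(J_a . xi_mu) and the simplified Hamiltonian cost (4).
import numpy as np

rng = np.random.default_rng(2)
N, M, P = 50, 20, 100            # issue dimension, congress size, agenda length
p, eta0, Delta = 0.1, 0.08, 0.01  # connection probability, mean and spread of eta

xi = rng.choice([-1, 1], size=(P, N))                  # agenda: hypercube vertices
B = rng.normal(size=N); B *= np.sqrt(N) / np.linalg.norm(B)
J = rng.normal(size=(M, N))
J *= np.sqrt(N) / np.linalg.norm(J, axis=1, keepdims=True)  # spherical constraint

sigma_B = np.sign(xi @ B)                              # president's opinions, (P,)
sigma = np.sign(J @ xi.T)                              # congress opinions, (M, P)

mask = rng.random((M, M)) < p                          # who listens to whom (directed)
np.fill_diagonal(mask, False)
g = mask * rng.normal(eta0, Delta, size=(M, M))        # interaction strengths, eq. (6)
nu = 2 * p * (M - 1)                                   # mean clique size

E0 = -0.5 * ((1 - nu * eta0) * (sigma * sigma_B).sum()
             + np.einsum('ac,am,cm->', g, sigma, sigma))
print(E0)
```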
Under the assumption that the average value of \(E_{0}\) in equation (4) is approximately constant over a cycle of discussions of order \(\tau_{C}\), and with the random agenda and alliances quenched on the \(\tau_{q}\) scale, standard arguments yield the probability distribution of the states of the congress-agents, given by:
\[{\cal P}({\bf J}_{a}|\beta,{\cal A},{\bf G})=\frac{1}{Z}{\cal P}_{0}({\bf J}_{ a})\exp\{-\beta E_{0}(\{{\bf J}_{a}\};{\cal A},{\bf G})\}, \tag{5}\]
where \({\cal P}_{0}({\bf J})=(2\pi{\rm e})^{-N/2}\,\delta\left({\bf J}\cdot{\bf J}-N\right)\) is the _a priori_ measure of the agents' weights, taken to be independent and uniform over the spherical shell of radius \(\sqrt{N}\) in \(N\) dimensions. The discussion about the separation of time scales requires the use of quenched disorder. The macroscopic properties of the system are obtained from the free energy \(f=-\beta^{-1}\overline{\ln Z}\), averaged over the possible agendas and alliances, taken to be fixed on the relevant time scale.
The interactions between agents, encoded in the matrix \(\mathbf{G}\), are assumed independent of each other and identically distributed. They are constructed in two steps. First, a Bernoulli variable with parameter \(p\) is used to decide if there is a connection present between two peers. Then, the strength of their interaction \(\eta\) is drawn from a Normal distribution centered at \(\eta_{0}\) with variance \(\Delta^{2}\). This leads to \(\nu=2p(M-1)\) (see equation (66)), where the factor 2 accounts for the fact that the graph is directed; then:
\[{\cal P}(g,\eta,x|\nu,\eta_{0},\Delta^{2}) = {\cal P}(g|\eta,x){\cal P}(x|p){\cal P}(\eta|\eta_{0},\Delta^{2}) \tag{6}\] \[= \delta(g-x\eta)\left[(1-p)\delta(x)+p\delta(x-1)\right]{\cal N}( \eta|\eta_{0},\Delta^{2}).\]
The problem is complicated by the existence of two sources of quenched disorder, the agenda \({\cal A}\) and the alliances, encoded in the matrix \(\mathbf{G}\). An adaptation of ideas introduced in References [5; 24; 34] to treat similar problems in coding theory is needed here. Due to the technical impossibility of computing the average of a logarithm, we proceed by applying the replica formalism [23], i.e.:
\[f=-\beta^{-1}\lim_{n\to 0}\frac{\overline{Z^{n}}-1}{n}, \tag{7}\]
where \(Z^{n}=\prod_{\gamma=1}^{n}Z^{\gamma}\) is the partition function of the replicated system, each of the \(n\) systems linked to an index \(\gamma\).
Taking expectations over the alliances brings forward the following population averages order parameters:
\[\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\equiv{\mathbb{E}}\left[\frac{1}{M}\sum_{a}\left(\sigma_{a\mu_{1}}^{\gamma_{1}}\sigma_{B\mu_{1}}\ldots\sigma_{a\mu_{\ell}}^{\gamma_{\ell}}\sigma_{B\mu_{\ell}}\right)\right], \tag{8}\]
where \(\varrho_{\mu}^{\gamma}\) is the average agreement of the population with \(B\) on the \(\mu\)-th issue in the \(\gamma\) replica, and \(\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\) are population averages of the agreement of individual \(a\) with \(B\) across systems \(\gamma_{1}\) (on issue \(\mathbf{\xi}_{\mu_{1}}\)) to \(\gamma_{\ell}\) (on issue \(\mathbf{\xi}_{\mu_{\ell}}\)). Their expectation values are \(\ell\)-point correlation functions for the opinions. The introduction of these parameters also requires the introduction of conjugate parameters \(\tilde{\varrho}_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\). Observe that \(\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\) is the average of a local property, thus the conjugate variable \(\tilde{\varrho}_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\) must represent the average effect of the local neighborhood on the local agent. By imposing the replica-symmetric ansatz [5; 24; 34], the order parameters should not present any dependency on either replica or agenda-item indexes; they should only depend on their number \(\ell\). By observing that the order parameters defined in equation (8) satisfy \(-1\leq\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\leq 1\), we suppose the existence of a field \(\tanh(\beta z)\) drawn from two normalized distributions \(\pi(z)\) and \(\hat{\pi}(z)\) such that:
\[\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}=\int{\rm d}z \,\pi(z)\tanh^{\ell}(\beta z)\qquad\quad\tilde{\varrho}_{\mu_{1}\ldots\mu_{ \ell}}^{\gamma_{1}\ldots\gamma_{\ell}}=\nu\int{\rm d}z\,\hat{\pi}(z)\tanh^{\ell} (\beta z). \tag{9}\]
Graph disorder introduces these two probability densities \(\pi(z)\) and \(\hat{\pi}(s)\), which are functional order parameters that describe the level of consensus at the local and neighborhood levels, respectively. It is their behavior that signals the transitions from a two-party equilibrium to a consensus that can be either for or against the presidential agent.
Observe that the parameters defined in (9) have been introduced in (72) and (73).
The usual order parameters, associated with the overlaps among the agents and with the president, are also introduced:
\[R_{a}^{\gamma} = {\mathbb{E}}({\bf J}_{a}^{\gamma}\cdot{\bf B}/N),\quad q_{a}^{ \gamma\rho}={\mathbb{E}}({\bf J}_{a}^{\gamma}\cdot{\bf J}_{a}^{\rho}/N), \tag{10}\] \[W_{ab}^{\gamma} = {\mathbb{E}}({\bf J}_{a}^{\gamma}\cdot{\bf J}_{b}^{\gamma}/N), \quad t_{ab}^{\gamma\rho}={\mathbb{E}}({\bf J}_{a}^{\gamma}\cdot{\bf J}_{b}^{ \rho}/N). \tag{11}\]
Under the assumption of replica symmetric saddle points \(R_{a}^{\gamma}=R\), \(q_{a}^{\gamma\rho}=q\), and, by Reference [26], \(W_{ab}^{\gamma}=t_{ab}^{\gamma\rho}=W\), the properties of the system follow from the extrema of the free energy functional (see Section VII.1):
\[f[q,R,\pi,\hat{\pi}], \tag{12}\]
whose explicit replica-symmetric form is derived in Section VII.1, and
where the averages are taken over the following distributed variables \(\eta\sim\mathcal{N}(\eta|\eta_{0},\Delta^{2})\), \(y\sim\mathsf{P}(y|\hat{\pi})\), and \(x\sim 2\mathcal{N}(x|0,1)\mathcal{H}\left(-Rx/\sqrt{q-R^{2}}\right)\), where \(\mathcal{H}(t)\) is the Gardner error function \(\mathcal{H}(t)=\int_{t}^{\infty}\mathrm{d}x\mathcal{N}(x|0,1)\). We used the short hand \(\epsilon=\epsilon(\beta,\nu\eta,y)=\left[\exp(2\beta(1-\nu\eta_{0}+y))-1\right] ^{-1}\). The free energy is a functional of the normalized distributions \(\pi\) and \(\hat{\pi}\). The new variable \(y\)'s distribution is:
\[\mathsf{P}(y|\hat{\pi})\equiv\int\frac{\mathrm{d}\hat{y}}{2\pi} \mathrm{e}^{-iy\hat{y}}\exp\left[\nu\left(\int\mathrm{d}s\hat{\pi}(s)\mathrm{e }^{i\hat{y}s}-1\right)\right]. \tag{13}\]
The characteristic function \(\phi_{s}(\hat{y})\) of \(\hat{\pi}(s)\), and the generator function of the cumulants of \(s\), \(K_{s}(\hat{y})\) are:
\[\phi_{s}(\hat{y}) = \int\mathrm{d}s\hat{\pi}(s)\mathrm{e}^{i\hat{y}s}, \tag{14}\] \[K_{s}(\hat{y}) = \log\phi_{s}(\hat{y}). \tag{15}\]
We observe that there exists a distributed variable \(u\) whose cumulant generating function can be defined as \(K_{u}(\hat{y})\equiv\phi_{s}(\hat{y})-1\). Adding \(\nu\) independent copies of \(u\) defines \(y=\sum_{i=1}^{\nu}u_{i}\), since:
\[\mathsf{P}(y|\hat{\pi}) = \int\frac{\mathrm{d}\hat{y}}{2\pi}\mathrm{e}^{-iy\hat{y}}\exp \left[\nu K_{u}(\hat{y})\right], \tag{16}\] \[= \int\frac{\mathrm{d}\hat{y}}{2\pi}\mathrm{e}^{-iy\hat{y}}\left[ \phi_{u}(\hat{y})\right]^{\nu}, \tag{17}\]
where the \(u_{i}\) are random variables with the property that the \(r\)th cumulant of \(u\) is equal to \(\mathbb{E}(s^{r}|\hat{\pi})\), the \(r\)th moment of \(s\). Hence as \(\nu\) grows, \(y\) becomes normal. The \(r\)th order cumulant \(\kappa_{r}^{(y)}\) of \(y\) satisfies:
\[\kappa_{r}^{(y)}=\nu\kappa_{r}^{(u)}=\nu\mathbb{E}(s^{r}|\hat{\pi}), \tag{18}\]
so the cumulants of \(y\) are constructed by accumulation of the cumulants of \(u\), or of the moments of \(s\). It automatically follows that:
\[\mathbb{E}(y|\mathsf{P}) = \nu\mathbb{E}(s|\hat{\pi}) \tag{19}\] \[\mathbb{E}(y^{2}|\mathsf{P})-\mathbb{E}(y|\mathsf{P})^{2} = \nu\mathbb{E}(s^{2}|\hat{\pi}). \tag{20}\]
Since \(\mathbb{E}(s^{2}|\hat{\pi})\) turns out to be proportional to \(\eta_{0}^{2}\), \(y\)'s variance is proportional to \(1/\nu\) in the relevant region where \(\nu\eta_{0}\) is of order \(1\).
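Equation (13) can be recognized as the characteristic function of a compound-Poisson sum, which suggests a direct way to sample \(y\): draw \(K\sim\mathrm{Poisson}(\nu)\) and add \(K\) independent copies of \(s\sim\hat{\pi}\). The following minimal sketch (our own reading and check, with an illustrative Gaussian \(\hat{\pi}\) and arbitrary parameter values) verifies the moment relations (19) and (20) numerically.

```python
# Our own check: sampling y as a compound-Poisson sum reproduces the
# cumulant-accumulation relations (19)-(20).
import numpy as np

rng = np.random.default_rng(3)
nu, eta0, Delta, n_samples = 40, 0.1, 0.02, 50_000  # illustrative values

K = rng.poisson(nu, size=n_samples)                  # number of contributions
draws = rng.normal(eta0, Delta, size=(n_samples, K.max()))
mask = np.arange(K.max()) < K[:, None]               # keep the first K_j draws
y = (draws * mask).sum(axis=1)

print(y.mean(), nu * eta0)                           # E[y]   = nu * E[s]
print(y.var(), nu * (eta0**2 + Delta**2))            # Var[y] = nu * E[s^2]
```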
## IV Saddle point equations
The extremum of the free energy (12) is determined by the saddle point equations, which fix the order parameters in a self-consistent way. The distribution \(\hat{\pi}(s)\) satisfies:
\[\hat{\pi}(s)=\int\mathrm{d}z\int\mathrm{d}y\,\mathsf{P}(z,s,y|\hat{\pi})=\int \mathrm{d}z\int\mathrm{d}y\,\mathsf{P}(y|\hat{\pi})\mathsf{P}(z|y)\mathsf{P}( s|z), \tag{21}\]
where
\[\mathsf{P}(z|y) = \left\langle\delta\left[z-\beta^{-1}g(x;\epsilon,q)\right] \right\rangle_{x} \tag{22}\] \[\mathsf{P}(s|z) = \left\langle\delta\left(s-\beta^{-1}\mathrm{arctanh}\left[ \tanh(\beta\eta)\tanh(\beta z)\right]\right)\right\rangle_{\eta}, \tag{23}\]
and
\[g(x;\epsilon,q)=\frac{1}{2}\ln\frac{1+\epsilon}{\epsilon}+\frac{1}{2}\ln\frac{ 1-\mathcal{H}_{+}}{\mathcal{H}_{+}};\mathrm{with}\ \mathcal{H}_{+}=\mathcal{H}\left(\sqrt{\frac{q}{1-q}}x\right). \tag{24}\]
The equations for \(\pi(z)\) and \(\hat{\pi}(s)\) are:
\[\pi(z) = \int\mathrm{d}y\,\mathsf{P}(y|\hat{\pi})\left\langle\delta\left[z- \beta^{-1}g(x;\epsilon,q)\right]\right\rangle_{x}, \tag{25}\] \[\hat{\pi}(s) = \int\mathrm{d}z\,\pi(z)\left\langle\delta\left(s-\beta^{-1} \mathrm{arctanh}\left[\tanh(\beta\eta)\tanh(\beta z)\right]\right)\right\rangle_{ \eta}. \tag{26}\]
This shows that \(\pi(z)\) is the distribution of the local field \(z\) associated to the agent, that is constructed over the influence of its neighborhood through \(\mathsf{P}(y|\hat{\pi})\), the distribution of consensus in the neighborhood of the agent, and the influence of the agenda through the average over \(x\). These two contributions represent the sources the agent uses to form its opinion. The distribution of the neighborhood effective field \(\hat{\pi}(s)\) acting on the local agent is obtained by averaging over the distribution of the local agent field through \(\pi(z)\) and through the distribution of influences through the average over \(\eta\). Observe that if there is agreement between agent and president and if the influence between peers is strong (large \(\eta\)), the neighborhood field \(s\) becomes large and positive. If the agent does not give any importance to its peers (\(\eta=0\)), the distribution \(\hat{\pi}(s)\) becomes a delta function centered at zero, and the system decouples.
In addition, these functional saddle point equations also depend on the usual parameters \(q\) and \(R\), which satisfy
\[\frac{q-R^{2}}{1-q} = \frac{\alpha}{\pi}\int\mathrm{d}y\,\mathsf{P}(y|\hat{\pi})\int \mathcal{D}x\,\frac{\exp\left(-\frac{qx^{2}}{1-q}\right)\,\mathcal{H}\left(- \frac{Rx}{\sqrt{q-R^{2}}}\right)}{\left[\epsilon+\mathcal{H}\left(-\sqrt{ \frac{q}{1-q}x}\right)\right]^{2}} \tag{27}\] \[\frac{R}{\sqrt{1-q}} = \frac{\alpha}{\pi}\sqrt{\frac{q}{q-R^{2}}}\int\mathrm{d}y\, \mathsf{P}(y|\hat{\pi})\int\mathcal{D}x\frac{\exp\left\{-\left(\frac{q}{1-q} +\frac{R^{2}}{q-R^{2}}\right)\frac{x^{2}}{2}\right\}}{\epsilon+\mathcal{H} \left(-\sqrt{\frac{q}{1-q}x}\right)}, \tag{28}\]
The numerical solution of this set of equations is discussed in Section VII.2.
## V Macroscopic characterization of the model
In Section VII.2 we demonstrate that for sufficiently large neighborhoods (\(\nu>O(1)\)), for sufficiently high pressure \(\beta\), and for a very narrow distribution of social strengths, i.e. \(\Delta\ll\eta_{0}\) and \(\mathcal{P}(\eta)=\mathcal{N}(\eta|\eta_{0},\Delta^{2})\), there are three possible solutions for equations (25) and (26). Two of them are pure states: the _conservative_ state, obtained if \(\nu\eta_{0}<1\), and the _polarized_ state, obtained if \(\nu\eta_{0}>1\). There is the possibility of a third solution, a mixture of the two pure states, that appears in the region of phase space where dialogue between opposite positions may exist. We define the parameter \(\Lambda\) as:
\[\Lambda(R)\equiv\frac{\mathrm{sgn}(R)}{2\beta}\frac{q}{1-q}, \tag{29}\]
which allows us to plot a partial phase diagram, presented in figure 7. In the region \((|\Lambda|,\eta_{0})\in\mathbb{A}\) a convex combination of both pure states is found. For \((|\Lambda|,\eta_{0})\notin\mathbb{A}\) the distributions can be expressed as:
\[\hat{\pi}_{0}(s) \equiv \mathcal{N}\left(s\left|\mathcal{I}_{0}^{\star}(\Lambda,\eta_{0}),\eta_{0}^{2}-[\mathcal{I}_{0}^{\star}(\Lambda,\eta_{0})]^{2}+\Delta^{2}\right.\right) \tag{30}\]
\[\pi_{0}(z) \equiv \mathcal{N}\left(z\left|1+\nu[\mathcal{I}_{0}^{\star}(\Lambda,\eta_{0})-\eta_{0}]+\frac{1}{2}\Lambda,\,\nu(\eta_{0}^{2}+\Delta^{2})+\frac{3}{4}\Lambda^{2}\right.\right) \tag{31}\]
\[\mathsf{P}_{0}(y|\hat{\pi}) \equiv \mathcal{N}\left(y\left|\nu\mathcal{I}_{0}^{\star}(\Lambda,\eta_{0}),\,\nu(\eta_{0}^{2}+\Delta^{2})\right.\right), \tag{32}\]
where \(\mathcal{I}_{0}^{\star}\) is the only solution to
\[\mathcal{I}_{0}^{\star}=\eta_{0}\mathrm{erf}\left(\frac{1-\nu\eta_{0}+\nu \mathcal{I}_{0}^{\star}+\frac{1}{2}\Lambda(R)}{\sqrt{2\left[\nu(\eta_{0}^{2}+ \Delta^{2})+\frac{3}{4}\Lambda(R)^{2}\right]}}\right) \tag{33}\]
outside region \(\mathbb{A}\) (this equation is developed in Section VII.2, equation (94)). We have observed that in the region of interest \(\nu\eta_{0}\sim O(1)\), the variance of \(\mathsf{P}_{0}(y|\hat{\pi})\) is of order \(O(\nu^{-1})\), therefore we can approximate this distribution by:
\[\mathsf{P}_{0}(y|\hat{\pi})\approx\delta(y-\nu\mathcal{I}_{0}^{\star}). \tag{34}\]
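A minimal numerical sketch of how equation (33) can be solved by fixed-point iteration is shown below (Python; the parameter values are illustrative assumptions, not values used in the paper). Starting the iteration near \(+\eta_{0}\) or \(-\eta_{0}\) selects the conservative or polarized branch, respectively; the unstable intermediate root requires a bracketing method instead.

```python
import numpy as np
from scipy.special import erf

def fixed_point_I(nu, eta0, Delta, Lam, I_init, tol=1e-12, max_iter=10000):
    """Iterate equation (33) until |I_new - I| < tol."""
    denom = np.sqrt(2.0 * (nu * (eta0**2 + Delta**2) + 0.75 * Lam**2))
    I = I_init
    for _ in range(max_iter):
        I_new = eta0 * erf((1.0 - nu * eta0 + nu * I + 0.5 * Lam) / denom)
        if abs(I_new - I) < tol:
            return I_new
        I = I_new
    return I

# illustrative values only (nu = 10 and Delta = 0.01, as in figure 6)
print(fixed_point_I(10, 0.08, 0.01, +0.1, I_init=+0.08))  # conservative branch
print(fixed_point_I(10, 0.12, 0.01, -0.1, I_init=-0.12))  # polarized branch
```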
Inside the region \(\mathbb{A}\) we have mixed states described by:
\[\hat{\pi}_{\mathrm{m}}(z) \equiv h_{+}\mathcal{N}\left(s\left|\mathcal{I}_{+}^{\star},\Delta^{2} \right.\right)+h_{-}\mathcal{N}\left(s\left|\mathcal{I}_{-}^{\star},\Delta^{2} \right.\right) \tag{35}\] \[\pi_{\mathrm{m}}(s) \equiv \mathcal{N}\left(s\left|1+\nu[\mathcal{I}^{\star}-\eta_{0}]+ \frac{1}{2}\Lambda,\nu(\eta_{0}^{2}+\Delta^{2})+\frac{3}{4}\Lambda^{2}\right.\right)\] (36) \[\mathsf{P}_{\mathrm{m}}(y|\hat{\pi}) \equiv \mathcal{N}\left(y\left|\nu\mathcal{I}^{\star},\nu\{\left(( \mathcal{I}^{\star})^{2}\right)+\Delta^{2}\}\right.\right), \tag{37}\]
where \({\cal I}_{\pm}^{\star}\) are the stable solutions to equation (33) in \(\mathbb{A}\), \(h_{\pm}\) are suitable weights (104) satisfying \(0\leq h_{\pm}\leq 1\) and \(h_{+}+h_{-}=1\), \({\cal I}^{\star}\) is the mixed solution:
\[{\cal I}^{\star} \coloneqq h_{+}{\cal I}_{+}^{\star}+h_{-}{\cal I}_{-}^{\star} \tag{38}\] \[\left\langle({\cal I}^{\star})^{2}\right\rangle \coloneqq h_{+}({\cal I}_{+}^{\star})^{2}+h_{-}({\cal I}_{-}^{\star})^{2}, \tag{39}\]
and, given that \(\nu\eta_{0}\sim O(1)\) for all \((\Lambda,\eta_{0})\in\mathbb{A}\), we have that:
\[{\sf P}_{\rm m}(y|\hat{\pi})\approx\delta\left(y-\nu{\cal I}^{\star}\right). \tag{40}\]
The application of equations (34) and (40) into equations (27) and (28) produce the following expressions
\[\frac{q-R^{2}}{1-q} = \frac{\alpha}{\pi}\left({\rm e}^{\kappa}-1\right)^{2}\int{\cal D }x\,\frac{\exp\left(-\frac{qx^{2}}{1-q}\right)\,{\cal H}\left(-\frac{Rx}{ \sqrt{q-R^{2}}}\right)}{\left[1+\left({\rm e}^{\kappa}-1\right){\cal H}\left( -\sqrt{\frac{q}{1-q}x}\right)\right]^{2}} \tag{41}\] \[\frac{R}{\sqrt{1-q}} = \frac{\alpha}{\pi}\left({\rm e}^{\kappa}-1\right)\sqrt{\frac{q}{ q-R^{2}}}\int{\cal D}x\frac{\exp\left\{-\left(\frac{q}{1-q}+\frac{R^{2}}{q-R^{2}} \right)\frac{x^{2}}{2}\right\}}{1+\left({\rm e}^{\kappa}-1\right){\cal H}\left( -\sqrt{\frac{q}{1-q}x}\right)}, \tag{42}\]
where \(\kappa\equiv 2\beta(1-\nu\eta_{0}+\nu{\cal I}^{\star})\). Observe that these equations are invariant under the following transformation \((\kappa,q,R)\rightarrow(-\kappa,q,-R)\).
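Once \(\kappa\) is fixed, equations (41) and (42) define a two-dimensional root-finding problem in \((q,R)\). A possible numerical sketch is given below (Python with SciPy; the parameter values and the initial guess are illustrative assumptions, and in practice the solver must be kept in the region \(0<q<1\), \(q>R^{2}\)). For the general equations (27) and (28) one would add an outer quadrature over \(y\) weighted by \(\mathsf{P}(y|\hat{\pi})\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve
from scipy.special import erfc

def H(x):
    # H(x) = int_x^inf Dt for the standard Gaussian measure Dt
    return 0.5 * erfc(x / np.sqrt(2.0))

def gauss_avg(f):
    # integral of f against Dx = dx exp(-x^2/2)/sqrt(2*pi)
    return quad(lambda x: f(x) * np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi),
                -8.0, 8.0)[0]

def residuals(p, alpha, kappa):
    q, R = p                 # must satisfy 0 < q < 1 and q > R**2 throughout
    a = np.exp(kappa) - 1.0
    g = np.sqrt(q / (1.0 - q))
    r1 = (q - R**2) / (1.0 - q) - (alpha / np.pi) * a**2 * gauss_avg(
        lambda x: np.exp(-q * x**2 / (1.0 - q))
                  * H(-R * x / np.sqrt(q - R**2))
                  / (1.0 + a * H(-g * x))**2)
    r2 = R / np.sqrt(1.0 - q) - (alpha / np.pi) * a * np.sqrt(q / (q - R**2)) \
        * gauss_avg(lambda x: np.exp(-0.5 * (q / (1.0 - q)
                                             + R**2 / (q - R**2)) * x**2)
                    / (1.0 + a * H(-g * x)))
    return [r1, r2]

q, R = fsolve(residuals, x0=[0.5, 0.3], args=(2.0, 1.0))  # alpha=2, kappa=1
```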
At very high pressure \(\beta\), the equations (27) and (28) can be expressed as:
\[\frac{q_{\pm}-R_{\pm}^{2}}{1-q_{\pm}} = \frac{\alpha}{\pi}\int{\cal D}x\,\frac{\exp\left(-\frac{q_{\pm}x ^{2}}{1-q_{\pm}}\right)\,{\cal H}\left(-\frac{R_{\pm}x}{\sqrt{q_{\pm}-R_{\pm} ^{2}}}\right)}{\left[{\cal H}\left(-\sqrt{\frac{q_{\pm}}{1-q_{\pm}}x}\right) \right]^{2}} \tag{43}\] \[\frac{R_{\pm}}{\sqrt{1-q_{\pm}}} = \pm\frac{\alpha}{\pi}\sqrt{\frac{q_{\pm}}{q_{\pm}-R_{\pm}^{2}}} \int{\cal D}x\frac{\exp\left\{-\left(\frac{q_{\pm}}{1-q_{\pm}}+\frac{R_{\pm}^ {2}}{q_{\pm}-R_{\pm}^{2}}\right)\frac{x^{2}}{2}\right\}}{{\cal H}\left(-\sqrt {\frac{q_{\pm}}{1-q_{\pm}}x}\right)}, \tag{44}\]
where the sub-index \(+(-)\) is valid for \(\nu\eta_{0}<(>)1\). The \(\beta\rightarrow\infty\) solutions satisfy \(q_{\pm}=\pm R_{\pm}\). These results justify naming the solution with sub-index \(+\) as conservative and the solution with sub-index \(-\) as polarized. Similar behavior is observed for finite but large values of the pressure.
For sufficiently large pressures, sufficiently large \(\nu\) and a volume of information \(\alpha\gg\beta\), we can also demonstrate that:
\[q = 1-\frac{Q_{0}^{2}}{\alpha^{2}}+o(\alpha^{-2}) \tag{45}\] \[Q_{0} = \frac{(2\pi)^{3/2}}{2+\sqrt{\pi}} \tag{46}\]
and
\[R=\begin{cases}q+2\pi\sqrt{3}Q_{0}^{3}\alpha^{-3}{\rm e}^{-2\beta}&\nu\eta_{0 }<1\\ -q-2\pi\sqrt{3}Q_{0}^{3}\alpha^{-3}{\rm e}^{-2\beta(\nu\eta_{0}-1)}&1<\nu\eta_ {0}\end{cases}. \tag{47}\]
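A quick numerical evaluation of equations (45)-(47) (Python; the values of \(\alpha\) and \(\beta\) are arbitrary illustrative choices satisfying \(\alpha\gg\beta\)):

```python
import numpy as np

Q0 = (2 * np.pi)**1.5 / (2 + np.sqrt(np.pi))   # equation (46): Q0 ~ 4.175
alpha, beta = 50.0, 5.0                        # illustrative, alpha >> beta
q = 1 - Q0**2 / alpha**2                       # equation (45): q ~ 0.993
R = q + 2 * np.pi * np.sqrt(3) * Q0**3 * alpha**-3 * np.exp(-2 * beta)
print(Q0, q, R)   # conservative branch (nu*eta0 < 1): R is barely above q
```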
Due to the odd parity of \(R(\kappa)\), we can conclude that the plane \((\nu\eta_{0},\beta^{-1})\) is divided into two phases, a conservative phase for which \(R>0\) and \(\nu\eta_{0}<1\) and a polarized phase with \(R<0\) and \(\nu\eta_{0}>1\).
Consider the set of points with coordinates \((\beta^{-1},\nu\eta=1,\alpha)\) such that the parameter defined in equation (29) becomes \(|\Lambda^{\star}|=0.411\) (see figure 7). Consequently, the solution of equation (33) over this line is \(\nu{\cal I}^{\star}=0.988\) and the corresponding index \(\kappa\) becomes a function of the overlap \(q\), i.e.
\[\kappa^{\star}(q)=\frac{\nu{\cal I}^{\star}}{|\Lambda^{\star}|}\frac{q}{1-q}. \tag{48}\]
By solving the equations (41) and (42) with \(\kappa\) given by (48), we obtain the curve presented in figure 4.
We solve for the properties of the equilibrium state, valid on the \(\tau_{q}\) time scale, in terms of the intensive parameters of the system: the pressure \(\beta\), the complexity of the agenda \(\alpha\), and the peer pressure \(\nu\eta_{0}\) exerted by the other agents in congress, which arises from the mean number of interlocutors \(\nu\) and the mean intensity of their interaction \(\eta_{0}\). These results are presented as the phase diagrams shown in figure 5.
By fixing the value of \(\alpha\), we can study the behavior of the system for a given volume of information. We constructed figure 6 by solving equation (33) for different values of \(\beta^{-1}\) and \(\nu\eta_{0}\) at fixed values of \(\alpha\) with \(\nu=10\) and \(\Delta=0.01\). The
Figure 4: Critical pressure against volume of information. When the number of issues treated in the legislature is not sufficiently large, there is always a phase around the opinion boundary \(\nu\eta=1\) where polarized and conservative states coexist. This area represents the collection of points \((\alpha,\beta^{-1})\) where a discussion between members of the chamber with different positions may occur. There is a critical volume of information \(\alpha^{*}=1.534(1)\) below which there is always room for discussion, no matter how high the pressure \(\beta^{-1}\) is. Above this threshold there is always a maximum pressure \(\beta^{-1}(\alpha)\) such that above it there is no more discussion and positions are definitively set.
Figure 5: Phase diagram of the system in terms of the parameters \(\nu\eta_{0}\), \(\beta^{-1}\) and \(\alpha\). There are two phases separated by the plane \(\nu\eta_{0}=1\). For \(\nu\eta_{0}<1\) we have that \(R>0\) and the average consensus is in favor of \(B\). For \(\nu\eta_{0}>1\), \(R<0\) and the average position of the agents is to form local alliances against the president \(B\). At all points of the space above the surfaces \(\nu\eta_{+}\) and \(\nu\eta_{-}\), the distribution describing the position of the neighborhood, given by \(\hat{\pi}(z)\), is sharply peaked at \(+\eta_{0}\) for the conservative position, i.e. \(\nu\eta_{0}<1\), or at \(-\eta_{0}\) for the polarized position, i.e. \(\nu\eta_{0}>1\). In the region below the surfaces \(\nu\eta_{+}\) and \(\nu\eta_{-}\) we have the same phase separation at \(\nu\eta_{0}=1\), but the contribution from the neighborhood is a mixture of a polarized component plus a conservative component. The circle at coordinates \(\nu\eta_{0}=1\), \(\beta^{-1}=0.824\) and \(\alpha=1.534\) is the critical point presented in figure 4. The phase diagram presented in figure 6 has been obtained by cutting sections at constant \(\alpha\) from this three-dimensional plot, and the red sphere corresponds to the first value of \(\alpha\) (\(=\alpha^{*}\)) for which the behavior presented in figure 6 c) is observed.
full lines separate pure-state areas [in white, for \(R<0\), and in dark gray (orange on-line) for \(R>0\), given by equations (30), (31), and (32)] from mixed-state areas [in gray (yellow on-line), given by equations (35), (36), and (37)]. We also found that for values of \(\alpha<\alpha^{\star}=1.534(1)\) the mixed states are contained within a triangular-shaped mixed area, with vertices at \((\beta^{-1}=0,\nu\eta_{0}=1.651(1))\), \((\beta^{-1}=0,\nu\eta_{0}=0.717(1))\), and \((\beta=\beta(\alpha),\nu\eta_{0}=1).\) In particular we observe that \(\beta^{\star}\equiv\beta(\alpha^{\star})\approx 1.214(1)\) and for all \(\alpha^{\star}<\alpha^{\prime}<\alpha\), \(\beta(\alpha)>\beta(\alpha^{\prime})>\beta(\alpha^{\star}).\) The lightly shaded (yellow on-line) region close to the boundary (\(\nu\eta_{0}=1\)) is characterized by a mixture of states that represents a state of dialogue, where the influence on the agents from their neighborhoods comes from both sides of the argument. The larger the complexity of the agenda (\(\alpha\)), the smaller the size of this region.
To complete our analysis and for very low values of \(\beta\) we obtain the following values for the parameters \(R\) and \(q\):
\[q \approx \frac{2\alpha}{\pi}\beta^{2}(1-\nu\eta_{0})^{2}\left(\frac{2 \alpha}{\pi}+1\right) \tag{49}\] \[R \approx \frac{2\alpha}{\pi}\beta(1-\nu\eta_{0})[1-2\beta(1-\nu\eta_{0})]. \tag{50}\]
## VI Discussion and conclusions
During the last decade [6] the application of statistical mechanics techniques to model social problems has produced a number of interesting results, not only providing new insight into the discussion of social phenomena but also showing predictive capabilities [10]. Inspired by these ideas, we have developed a model for the phenomenon of impeachment in presidential democracies. The political agents, represented by perceptrons, interact with an external meta-agent \(B\), which represents the executive, and with peers in the legislative chamber. The model has been tailored to balance the need to explain observed behavior [17], the complexity of the social interactions [19; 31], and the analytical tractability of the mathematical expressions constructed.
It is important to note that, for a sufficiently large number of alliances \(\nu>O(1)\), the saddle point equations (25 to 28) can be solved in pairs. The first pair, (25) and (26), involves the distributions \(\pi\) and \(\hat{\pi}\) connected with the distribution of alliances, and the pair (27) and (28) is connected to the parameters associated with the discussion of the presidential agenda. The solution to (25) and (26) has been expressed using the parameter \(\Lambda\) defined in equation (29), which brings an input from the disorder-from-learning part of the problem into the disorder-from-graph part of the problem. In a similar manner, the parameter \(\kappa\), which helps to express the solution of equations (27) and (28), introduces effects from the disorder-from-graph part of the problem into the disorder-from-learning part of the problem. The constraints that emerged from expressing the solution in terms of these parameters have helped in constructing the phase diagram presented in figure 7.
We obtained a set of sensible results for a topology represented by a directed graph with an average of \(\nu\) links per vertex (the number of co-conspirators). In this setting and considering a steady president, i.e. \(B\) constant, we found that there exist two possible pure positions. One is characterized by an overall average attitude in favor of the president, with a positive and increasing (with the volume of information) average agreement \(R\), which we dubbed the conservative state; the other, with a negative and decreasing value of \(R\), is the polarized state. These states are also characterized by a sharply peaked distribution of neighbors' influences \(\hat{\pi}\), centered at \(\mathcal{I}_{+}^{\star}\) (\(\mathcal{I}_{-}^{\star}\)) for the conservative (polarized) state. From figure 6 we also showed that for volumes of information below a critical value \(\alpha^{\star}=1.534(1)\), there is a region in the plane \((\nu\eta_{0},\beta^{-1})\), in the form of a band around \(\nu\eta_{0}=1\), where mixed states, defined by the equations (35), (36), and (37), exist. The mixture is explicit in equation (35), which presents the influence on an agent by its neighborhood (\(\hat{\pi}\)) as a combination of the two sides of the argument. This band of mixed states collapses into a triangle with vertices at \((\beta^{-1}=0,\nu\eta_{0}=1.651(1))\), \((\beta^{-1}=0,\nu\eta_{0}=0.71845(1))\), and \((\beta=\beta(\alpha),\nu\eta_{0}=1)\), for values of \(\alpha>\alpha^{\star}.\) We also observed that the larger the volume of information the smaller the triangle area, i.e. \(\alpha>\alpha^{\prime}\) implies that \(\beta(\alpha)>\beta(\alpha^{\prime}).\) The interpretation of this behavior is as follows: when information is limited (low \(\alpha\)) and for values of effective co-conspirators \(\nu\eta_{0}\simeq 1\), the influence from the neighborhood on the agent is formed by a combination of positions in favor of and against the executive. In this region the overlap \(R,\) which represents the average agreement with \(B,\) still has a well-defined sign given by \(\mathrm{sgn}(R)=\mathrm{sgn}(1-\nu\eta_{0}),\) but it is the result of two pure-state contributions. In this region the two positions, for and against the executive \(B\), coexist. Definite positions are not yet set, thus fostering a state of dialogue. The more information is fed to the system, the smaller this region becomes. There is a critical value of information \(\alpha^{\star}=1.534(1),\) beyond which this behavior is only observed for pressures lower than \(\beta^{\star}=1.214(1).\) In other words, the more information is provided, the purer the contribution to the agents' opinions from their neighborhoods and the lower the chances for a dialogue between opposed positions. For very large values of \(\alpha\) and \(\nu\eta_{0}\approx 1\), coexistence exists only if the pressure \(\beta\) is sufficiently high. Thus, only a president with a high index of popularity can guarantee a discussion of the topics in the agenda between opposite positions of the legislature.
In light of the cases used as motivation for our model, we observe that there are events, represented by particular items of the executive's agenda, that are so momentous in the formation of opinions that they can be considered critical (Collor de Mello's economic plans, Pérez's Great Turnaround), to the point that, immediately after they occur, opposite positions for or against the executive's proposals become more consolidated, the influence of the neighborhood on the agents becomes more polarized (in either position), and the dialogue-prone region shrinks. If the public rejects the proposals, \(\beta\) diminishes and the executive may find itself in front of a polarized legislative chamber that either supports it (Samper's case) or not (Collor de Mello's case). If the negative information instances persist and neither the public nor the chamber supports the president, the executive may find itself facing an impeachment procedure.
A natural extension of this work is the consideration of the case of a changing \(B\). In a previous work [25] we have studied the evolution of opinions in the presence of an adaptive social rule that slowly changes following the average position of the population. As a consequence, the contribution from socially neutral issues (i.e. issues \(\boldsymbol{\xi}_{0}\) such that \(\mathbf{B}\cdot\boldsymbol{\xi}_{0}=0\)) becomes relevant [13], as can be observed through the presence of the parameter \(W\), which represents the overlap between the representations of different agents (and is a measure of the level of agreement between them). We expect that, if a similar setting is imposed in the present framework, the free energy functional so obtained should also depend on a parameter \(W\), revealing the contribution from the socially neutral issues to
the system.
**Acknowledgment:** This work received partial support from CNAIPS-NAP USP.
## VII Supplementary Material
### Calculation of the averages over \(\mathcal{A}\), \(\mathbf{B}\) and \(\boldsymbol{G}\)
By observing that \(\mathcal{P}(g_{ac})=\int\mathrm{d}\eta_{ac}\mathcal{N}(\eta_{ac}|\eta_{0},\Delta^{2})\sum_{x_{ac}=0,1}[p\delta_{x_{ac},1}+(1-p)\delta_{x_{ac},0}]\,\delta(g_{ac}-x_{ac}\eta_{ac})\), where the Kronecker delta is \(\delta_{X,Y}=1\) if \(X=Y\) and \(0\) otherwise, and the Dirac delta satisfies \(\int_{\Omega}\mathrm{d}x\,\delta(x-x_{0})=1\) if \(x_{0}\in\Omega\) and \(0\) otherwise, the replicated partition function is:
\[\overline{Z^{n}}(\beta) \equiv \int\mathrm{d}\mathbf{B}\mathcal{P}(\mathbf{B})\int\prod_{\mu} \mathrm{d}\boldsymbol{\xi}_{\mu}\mathcal{P}(\boldsymbol{\xi}_{\mu})\prod_{a} \prod_{c}\int\mathrm{d}g_{ac}\mathcal{P}(g_{ac})\int\prod_{\gamma=1}^{n}\prod _{a}\mathrm{d}\mathbf{J}_{a}^{\gamma}\mathcal{P}(\mathbf{J}_{a}^{\gamma}) \tag{51}\] \[\prod_{\gamma\mu a}\exp\left\{\beta\sum_{c\in\mathbb{N}_{a}}x_{ ac}\eta_{ac}\mathrm{sgn}\left(\frac{\mathbf{J}_{a}^{\gamma}\cdot\boldsymbol{\xi}_{ \mu}}{\sqrt{N}}\right)\mathrm{sgn}\left(\frac{\mathbf{J}_{c}^{\gamma}\cdot \boldsymbol{\xi}_{\mu}}{\sqrt{N}}\right)\right\}\] \[\prod_{\gamma\mu a}\exp\left\{(1-\nu\eta_{0})\beta\mathrm{sgn} \left(\frac{\mathbf{J}_{a}^{\gamma}\cdot\boldsymbol{\xi}_{\mu}}{\sqrt{N}} \right)\mathrm{sgn}\left(\frac{\mathbf{B}\cdot\boldsymbol{\xi}_{\mu}}{\sqrt{N} }\right)\right\},\]
and by defining the variables:
\[\lambda_{a,\mu}^{\gamma}\equiv\frac{\mathbf{J}_{a}^{\gamma}\cdot\boldsymbol{ \xi}_{\mu}}{\sqrt{N}},\qquad u_{\mu}\equiv\frac{\mathbf{B}\cdot\boldsymbol{ \xi}_{\mu}}{\sqrt{N}} \tag{52}\]
and, by defining the overlaps:
\[R_{a}^{\gamma}\equiv\frac{\mathbf{J}_{a}^{\gamma}\cdot\mathbf{B }}{N},\qquad\quad W_{ab}^{\gamma}=\frac{\mathbf{J}_{a}^{\gamma}\cdot\mathbf{J }_{b}^{\gamma}}{N},\] \[q_{a}^{\gamma\rho}\equiv\frac{\mathbf{J}_{a}^{\gamma}\cdot \mathbf{J}_{a}^{\rho}}{N},\qquad\quad t_{ab}^{\gamma\rho}\equiv\frac{\mathbf{J }_{a}^{\gamma}\cdot\mathbf{J}_{b}^{\rho}}{N}, \tag{53}\]
we have that the expectation over patterns is:
\[\left\langle\cdot\right\rangle_{\mathcal{A}} \equiv \int\prod_{\mu}\mathrm{d}\boldsymbol{\xi}_{\mu}\mathcal{P}( \boldsymbol{\xi}_{\mu})\exp\left(i\sum_{\gamma\mu a}\hat{\lambda}_{a\mu}^{ \gamma}\frac{\mathbf{J}_{a}^{\gamma}\cdot\boldsymbol{\xi}_{\mu}}{\sqrt{N}}+i \sum_{\mu}\hat{u}_{\mu}\frac{\mathbf{B}\cdot\boldsymbol{\xi}_{\mu}}{\sqrt{N}}\right) \tag{54}\] \[= \int\prod_{\gamma a}\frac{\mathrm{d}R_{a}^{\gamma}\mathrm{d}\hat{ R}_{a}^{\gamma}}{2\pi/N}\,\exp\left(i\sum_{\gamma a}\hat{R}_{a}^{\gamma}(NR_{a}^{ \gamma}-\mathbf{J}_{a}^{\gamma}\cdot\mathbf{B})\right)\] \[\int\prod_{\gamma}\prod_{a<b}\frac{\mathrm{d}W_{ab}^{\gamma} \mathrm{d}\hat{W}_{ab}^{\gamma}}{2\pi/N}\,\exp\left(i\sum_{\gamma}\sum_{a<b} \hat{W}_{ab}^{\gamma}(NW_{ab}^{\gamma}-\mathbf{J}_{a}^{\gamma}\cdot\mathbf{J}_ {b}^{\gamma})\right)\] \[\int\prod_{a}\prod_{\gamma<\rho}\frac{\mathrm{d}q_{a}^{\gamma\rho} \mathrm{d}\hat{q}_{a}^{\gamma\rho}}{2\pi/N}\,\exp\left(i\sum_{a}\sum_{\gamma< \rho}\hat{q}_{a}^{\gamma\rho}(Nq_{a}^{\gamma\rho}-\mathbf{J}_{a}^{\gamma}\cdot \mathbf{J}_{a}^{\rho})\right)\] \[\int\prod_{\gamma<\rho}\prod_{a<b}\frac{\mathrm{d}t_{ab}^{\gamma \rho}\mathrm{d}\hat{q}_{ab}^{\gamma\rho}}{2\pi/N}\,\exp\left(i\sum_{a<b}\sum_{ \gamma<\rho}\hat{t}_{ab}^{\gamma\rho}(Nt_{ab}^{\gamma\rho}-\mathbf{J}_{a}^{\gamma }\cdot\mathbf{J}_{b}^{\rho})\right)\] \[\exp\left\{-\frac{1}{2}\sum_{\mu}\left[\sum_{\gamma a}\left(\hat{ \lambda}_{a\mu}^{\gamma}\right)^{2}+2\sum_{\gamma a}\sum_{\gamma<\rho}\hat{ \lambda}_{a\mu}^{\gamma}\hat{\lambda}_{a\mu}^{\rho}q_{a}^{\gamma\rho}+2\sum_{ \gamma a}\sum_{a<b}\hat{\lambda}_{a\mu}^{\gamma}\hat{\lambda}_{b\mu}^{\gamma}W_{ ab}^{\gamma}+\right.\right.\] \[\left.\left.+2\sum_{\gamma a}\sum_{\gamma<\rho}\sum_{a<b}\hat{ \lambda}_{a\mu}^{\gamma}\hat{\lambda}_{b\mu}^{\rho}t_{ab}^{\gamma\rho}+2\sum_{ \gamma a}\hat{u}_{\mu}\hat{\lambda}_{a\mu}^{\gamma}R_{a}^{\gamma}+\hat{u}_{\mu} ^{2}\right]\right\}+O(N^{-1}).\]
By considering the distribution of the synaptic vector \(\mathbf{B}\) as \(\mathcal{P}(\boldsymbol{B})=\prod_{k}\delta(B_{k}-1)\) and by defining the matrices:
\[[\boldsymbol{\hat{Q}}]_{a,b}^{\gamma,\rho} \equiv i\left\{\delta^{\gamma,\rho}\left(\delta_{a,b}\hat{\ell}_{a}^{ \gamma}+(1-\delta_{a,b})\hat{W}_{a,b}^{\gamma}\right)+(1-\delta^{\gamma,\rho}) \left(\delta_{a,b}\hat{q}_{a}^{\gamma,\rho}+(1-\delta_{a,b})\hat{t}_{a,b}^{ \gamma,\rho}\right)\right\} \tag{55}\]
\[[{\mathbf{Q}}]^{\gamma,\rho}_{a,b} \equiv \delta^{\gamma,\rho}\left(\delta_{a,b}+(1-\delta_{a,b})W^{\gamma}_{a, b}\right)+(1-\delta^{\gamma,\rho})\left(\delta_{a,b}q^{\gamma,\rho}_{a}+(1- \delta_{a,b})t^{\gamma,\rho}_{a,b}\right) \tag{56}\]
we have that the average over synaptic vectors become:
\[\langle\cdot\rangle_{{\bf B},\{{\bf J}^{\gamma}_{a}\}} = \int\prod_{\gamma,a}\frac{{\rm d}\hat{\ell}^{\gamma}_{a}}{4\pi} \exp\left(i\frac{N}{2}\sum_{\gamma,a}\hat{\ell}^{\gamma}_{a}-N\ln|\hat{\mathbf{Q}} |-\frac{1}{2}\sum_{a,b}\sum_{\gamma,\rho}\hat{R}^{\gamma}_{a}[\hat{\mathbf{Q}}^{-1} ]^{\gamma,\rho}_{a,b}\hat{R}^{\rho}_{b}-\frac{nNM}{2}\right), \tag{57}\]
which renders the following expression for the partition function:
\[\overline{Z^{n}}(\beta) = \int\prod_{\gamma a}\frac{{\rm d}\hat{\ell}^{\gamma}_{a}}{4\pi} \int\prod_{\gamma a}\frac{{\rm d}R^{\gamma}_{a}{\rm d}\hat{R}^{\gamma}_{a}}{2 \pi/N}\int\prod_{\gamma}\prod_{a<b}\frac{{\rm d}W^{\gamma}_{ab}{\rm d}\hat{W}^ {\gamma}_{ab}}{2\pi/N}\int_{a}\prod_{\gamma<\rho}\frac{{\rm d}q^{\gamma\rho}_{ a}{\rm d}\hat{q}^{\gamma\rho}_{a}}{2\pi/N}\int\prod_{\gamma<\rho}\prod_{ab}\frac{{ \rm d}t^{\gamma\rho}_{ab}{\rm d}\hat{t}^{\gamma\rho}_{ab}}{2\pi/N} \tag{58}\] \[\exp\left(\frac{N}{2}\mbox{tr}{\mathbf{Q}}\hat{\mathbf{Q}}-\frac{N}{2} \ln|\hat{\mathbf{Q}}|-\frac{N}{2}\sum_{ab}\sum_{\gamma,\rho}\hat{R}^{\gamma}_{a} \left[\hat{\mathbf{Q}}^{-1}\right]^{\gamma\rho}_{ab}\hat{R}^{\rho}_{b}+iN\sum_{ \gamma a}\hat{R}^{\gamma}_{a}R^{\gamma}_{a}-\frac{nNM}{2}\right)\] \[\int\prod_{\gamma\mu a}\frac{{\rm d}\lambda^{\gamma}_{a\mu}{\rm d }\hat{\lambda}^{\gamma}_{a\mu}}{2\pi}\exp\left(-i\sum_{\gamma\mu a}\hat{ \lambda}^{\gamma}_{a\mu}\lambda^{\gamma}_{a\mu}\right)\] \[\int\prod_{\mu}{\cal D}u_{\mu}\exp\left(i\sum_{\gamma\mu a}\hat {\lambda}^{\gamma}_{a\mu}R^{\gamma}_{a}u_{\mu}+(1-\nu\eta_{0})\beta\sum_{ \gamma\mu a}\mbox{sgn}(\lambda^{\gamma}_{a\mu}u_{\mu})\right)\] \[\exp\left\{-\frac{1}{2}\sum_{\mu}\left[\sum_{\gamma a}\left[1-(R^ {\gamma}_{a})^{2}\right]\left(\hat{\lambda}^{\gamma}_{a\mu}\right)^{2}+2\sum_{ \gamma a}\sum_{\gamma<\rho}\left[q^{\gamma\rho}_{a}-R^{\gamma}_{a}R^{\rho}_{a} \right]\hat{\lambda}^{\gamma}_{a\mu}\hat{\lambda}^{\rho}_{a\mu}+\right.\right.\] \[\left.\left.+2\sum_{\gamma a}\sum_{a<b}\left[W^{\gamma}_{ab}-R^{ \gamma}_{a}R^{\gamma}_{b}\right]\hat{\lambda}^{\gamma}_{a\mu}\hat{\lambda}^{ \gamma}_{b\mu}+2\sum_{\gamma a}\sum_{\gamma<\rho}\sum_{b}\left[t^{\gamma\rho}_ {ab}-R^{\gamma}_{a}R^{\rho}_{b}\right]\hat{\lambda}^{\gamma}_{a\mu}\hat{ \lambda}^{\rho}_{b\mu}\right]\right\}\] \[\left\langle\exp\left\{\beta\sum_{\gamma\mu a}\sum_{a\neq c}x_{ac }\eta_{ac}\mbox{sgn}(\lambda^{\gamma}_{a\mu}\lambda^{\gamma}_{c\mu})\right\} \right\rangle_{\!\!\!\!G}+O(N^{-1})\]
in the limit of large \(N\) we find that
\[\hat{R}^{\gamma}_{a} = i\sum_{\rho,b}[\hat{\mathbf{Q}}]^{\gamma,\rho}_{a,b}R^{\rho}_{b} \tag{59}\] \[\left[\hat{\mathbf{Q}}^{-1}\right]^{\gamma,\rho}_{a,b} = [{\mathbf{K}}]^{\gamma,\rho}_{a,b}\equiv[{\mathbf{Q}}]^{\gamma,\rho}_{a,b}-R^{\gamma}_{a}R^{\rho}_{b} \tag{60}\]
To extract the asymptotic behavior of integrals of the form \(I_{N}\equiv\int_{x_{1}}^{x_{2}}{\rm d}x\,{\rm e}^{-Ng(x)}\) in the limit \(N\to\infty\) through Laplace's method, we denote \(\mbox{extr}_{x}\,I_{N}\equiv{\rm e}^{-Ng(x_{0})+O(\log N)}\), where \(x_{0}\) is such that \(g(x_{0})\leq g(x)\) for all \(x\in[x_{1},x_{2}]\), so we can write:
\[\overline{Z^{n}}(\beta) = \mbox{extr}_{\mathbf{K}}\left\{\exp\left(\frac{N}{2}\ln|{\mathbf{K}}|\right)\right. \tag{61}\] \[\int\prod_{\gamma\mu a}\frac{{\rm d}\lambda^{\gamma}_{a\mu}{\rm d}\hat{\lambda}^{\gamma}_{a\mu}}{2\pi}\exp\left(-i\sum_{\gamma\mu a}\hat{\lambda}^{\gamma}_{a\mu}\lambda^{\gamma}_{a\mu}\right)\] \[\int\prod_{\mu}{\cal D}u_{\mu}\,\exp\left(i\sum_{\gamma a}\hat{\lambda}^{\gamma}_{a\mu}R^{\gamma}_{a}u_{\mu}+(1-\nu\eta_{0})\beta\sum_{\gamma\mu a}\mbox{sgn}(\lambda^{\gamma}_{a\mu}u_{\mu})\right)\] \[\exp\left[-\frac{1}{2}\sum_{\gamma\mu a}\left[1-(R^{\gamma}_{a})^{2}\right]\left(\hat{\lambda}^{\gamma}_{a\mu}\right)^{2}-\sum_{\gamma\mu a}\sum_{\gamma<\rho}\left[q^{\gamma\rho}_{a}-R^{\gamma}_{a}R^{\rho}_{a}\right]\hat{\lambda}^{\gamma}_{a\mu}\hat{\lambda}^{\rho}_{a\mu}-\right.\] \[\left.-\sum_{\gamma\mu a}\sum_{a<b}\left[W^{\gamma}_{ab}-R^{\gamma}_{a}R^{\gamma}_{b}\right]\hat{\lambda}^{\gamma}_{a\mu}\hat{\lambda}^{\gamma}_{b\mu}-\sum_{\gamma\mu a}\sum_{\gamma<\rho}\sum_{b}\left[t^{\gamma\rho}_{ab}-R^{\gamma}_{a}R^{\rho}_{b}\right]\hat{\lambda}^{\gamma}_{a\mu}\hat{\lambda}^{\rho}_{b\mu}\right]\] \[\left\langle\exp\left\{\beta\sum_{\gamma\mu a}\sum_{a\neq c}x_{ac}\eta_{ac}\mbox{sgn}(\lambda^{\gamma}_{a\mu})\mbox{sgn}(\lambda^{\gamma}_{c\mu})\right\}\right\rangle_{\!\!\!\!G}\Bigg{\}},\]
where \({\cal D}x\equiv(2\pi)^{-1/2}{\rm d}x\,\exp(-x^{2}/2)\). Also, by imposing the Replica Symmetric (RS) ansatz: \(R_{a}^{\gamma}=R,\,q_{a}^{\gamma,\rho}=q\), and by following [26] we can assume that \(t_{ab}^{\gamma\rho}=W_{ab}^{\gamma}=W,\) then by defining
\[{\cal C}_{a\mu}\equiv\frac{\sqrt{W-R^{2}}y_{\mu}+Ru_{\mu}+\sqrt{q-W}y_{a\mu}}{ \sqrt{1-q}} \tag{62}\]
and by observing that the logarithm of the determinant of the matrix \(\mathbf{K}\) in the RS approach is
\[\ln|\mathbf{K}|=nM\left(\ln(1-q)+\frac{q-R^{2}}{1-q}\right)+O(n^{2}), \tag{63}\]
we have that, after the integration over the variables \(\{\hat{\lambda}_{a\mu}^{\gamma}\}\), the partition function becomes:
\[\overline{Z^{n}}(\beta) = \mathop{\rm extr}_{RqW}\left\{\exp\left[\frac{nNM}{2}\left(\ln(1- q)+\frac{q-R^{2}}{1-q}\right)\right]\right. \tag{64}\] \[\left.\int\prod_{\mu}{\cal D}u_{\mu}\prod_{\mu}{\cal D}y_{\mu} \prod_{\mu a}{\cal D}y_{a\mu}\prod_{\gamma\mu a}\frac{{\rm d}\lambda_{a\mu}^{ \gamma}}{\sqrt{2\pi}}\right.\] \[\left.\exp\left[-\frac{1}{2}\sum_{\gamma\mu a}(\lambda_{a\mu}^{ \gamma}-{\cal C}_{a\mu})^{2}+(1-\nu\eta_{0})\beta\sum_{\gamma\mu a}{\rm sgn} \left(\lambda_{a\mu}^{\gamma}u_{\mu}\right)\right]\right.\] \[\left.\left.\left\langle\exp\left\{\beta\sum_{\gamma\mu a}\sum_{a \neq c}x_{ac}\eta_{ac}{\rm sgn}(\lambda_{a\mu}^{\gamma}){\rm sgn}(\lambda_{c \mu}^{\gamma})\right\}\right\rangle_{\mathbf{G}}\right\}.\]
The average over the graph variables, defining the temperature \(\beta^{\prime}\equiv\beta/(1+\nu\eta_{0})\), is:
\[\Upsilon \equiv \left\langle\exp\left\{\beta\sum_{\gamma\mu a}\sum_{a\neq c}x_{ ac}\eta_{ac}{\rm sgn}(\lambda_{a\mu}^{\gamma}){\rm sgn}(\lambda_{c\mu}^{\gamma}) \right\}\right\rangle_{\mathbf{G}} \tag{65}\] \[= \int\prod_{ac}\frac{{\rm d}\eta_{ac}}{\sqrt{2\pi\Delta^{2}}}\exp \left[-\frac{(\eta_{ac}-\eta_{0})^{2}}{2\Delta^{2}}\right]\prod_{ac}\left\{1-p +p\,\prod_{\gamma\mu}\exp\left[\beta{\rm sgn}(\lambda_{a\mu}^{\gamma}){\rm sgn }(\lambda_{c\mu}^{\gamma})\right]\right\}\] \[= (1-p)^{M(M-1)}\int\prod_{ac}\frac{{\rm d}\eta_{ac}}{\sqrt{2\pi \Delta^{2}}}\exp\left[-\frac{(\eta_{ac}-\eta_{0})^{2}}{2\Delta^{2}}\right]\] \[\prod_{ac}\left\{1+\frac{p}{1-p}\,\prod_{\gamma\mu}\left[\cosh( \beta\eta_{ac})+{\rm sgn}(\lambda_{a\mu}^{\gamma}\lambda_{c\mu}^{\gamma})\sinh (\beta\eta_{ac})\right]\right\}\] \[\prod_{ac}\left\{1+\frac{p}{1-p}\,\cosh(\beta\eta_{ac})^{nP}\prod _{\gamma\mu}\left[1+\tanh(\beta\eta_{ac}){\rm sgn}\left(\lambda_{a\mu}^{ \gamma}\lambda_{c\mu}^{\gamma}\right)\right]\right\}.\]
Observe that we are assuming that the number of neighbors must, on average, satisfy \(\nu\ll M:\)
\[\sum_{a}x_{ac}{\cal P}(x_{ac}) = p(M-1)=\frac{\nu}{2} \tag{66}\]
\[\Upsilon = \left(1-\frac{\nu}{2(M-1)}\right)^{M(M-1)}\int\prod_{ac}\frac{ \mathrm{d}\eta_{ac}}{\sqrt{2\pi\Delta^{2}}}\exp\left[-\frac{(\eta_{ac}-\eta_{0}) ^{2}}{2\Delta^{2}}\right] \tag{67}\] \[\prod_{ac}\left\{1+\frac{p}{1-p}\,\cosh(\beta\eta_{ac})^{nP} \left[1+\tanh(\beta\eta_{ac})\sum_{\gamma\mu}\mathrm{sgn}\left(\lambda_{a\mu}^ {\gamma}\lambda_{\nu\rho}^{\gamma}\right)\right.\right.\] \[\left.\left.\qquad+\tanh(\beta\eta_{ac})^{2}\sum_{\langle\gamma_{ 1}\mu_{1};\gamma_{2}\mu_{2}\rangle}\mathrm{sgn}\left(\lambda_{a\mu_{1}}^{ \gamma_{1}}\lambda_{c\mu_{1}}^{\gamma_{1}}\right)\mathrm{sgn}\left(\lambda_{ a\mu_{2}}^{\gamma_{2}}\lambda_{c\mu_{2}}^{\gamma_{2}}\right)+\ldots\right]\right\}\] \[= \exp\left\{-\frac{\nu}{2}M\right.+\] \[+ \left.\frac{\nu}{2}M\sum_{\ell=0}^{nP}\left\langle\cosh(\beta \eta)^{nP}\tanh(\beta\eta)^{\ell}\right\rangle_{\eta}\sum_{\langle\gamma_{1} \mu_{1};\ldots;\gamma_{\ell}\mu_{\ell}\rangle}\left[\frac{1}{M}\sum_{a} \mathrm{sgn}\left(\lambda_{a\mu_{1}}^{\gamma_{1}}u_{\mu_{1}}\right)\ldots \mathrm{sgn}\left(\lambda_{a\mu_{\ell}}^{\gamma_{\ell}}u_{\mu_{\ell}}\right) \right]^{2}\right\}.\]
Observe that \(\Upsilon\) is the part of the replicated partition function that accounts for the interaction between peers and the interaction between peers and graph. If \(\eta_{0}=\Delta=0\) then \(\Upsilon=1.\) We define
\[\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}} \equiv \frac{1}{M}\sum_{a}\mathrm{sgn}\left(\lambda_{a\mu_{1}}^{\gamma_{ 1}}u_{\mu_{1}}\right)\ldots\mathrm{sgn}\left(\lambda_{a\mu_{\ell}}^{\gamma_{ \ell}}u_{\mu_{\ell}}\right) \tag{68}\] \[\varrho_{0} \equiv \frac{1}{M}\sum_{a}1, \tag{69}\]
where \(\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\) is the average level of agreement per individual, across issues and replicas, and \(\varrho_{0}\), which is a fancy way to write 1, will be left as a free parameter for the time being until we apply a variational technique (which will confirm its value; see below).
Applying Laplace's method to the integrals involving the parameters \(\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\) and their conjugates \(\hat{\varrho}_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\), we can express the replicated partition function as:
\[\overline{Z^{n}}(\beta) = \mathop{\mathrm{extr}}_{q,W,R,\left\{\varrho_{\mu_{1}\ldots\mu_{ \ell}}^{\gamma_{1}\ldots\gamma_{\ell}},\tilde{\varrho}_{\mu_{1}\ldots\mu_{ \ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\right\}}\left\{\exp\left[\frac{nNM}{2} \left(\ln(1-q)+\frac{q-R^{2}}{1-q}\right)\right]\right. \tag{70}\] \[\left.\exp\left[-M\sum_{\ell=0}\sum_{\langle\gamma_{1}\mu_{1}; \ldots;\gamma_{\ell}\mu_{\ell}\rangle}\varrho_{\mu_{1}\ldots\mu_{\ell}}^{ \gamma_{1}\ldots\gamma_{\ell}}\tilde{\varrho}_{\mu_{1}\ldots\mu_{\ell}}^{\gamma _{1}\ldots\gamma_{\ell}}-\frac{\nu}{2}M+\right.\right.\] \[\left.\left.+\frac{\nu}{2}M\sum_{\ell=0}\left\langle\cosh(\beta \eta)^{nP}\tanh(\beta\eta)^{\ell}\right\rangle_{\eta}\sum_{\langle\gamma_{1} \mu_{1};\ldots;\gamma_{\ell}\mu_{\ell}\rangle}\left(\varrho_{\mu_{1}\ldots\mu_ {\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\right)^{2}\right]\right.\] \[\left.\int\prod_{\mu}\mathcal{D}u_{\mu}\prod_{\mu}\mathcal{D}y_{ \mu}\left(\prod_{\mu}\mathcal{D}t_{\mu}\prod_{\gamma\mu a}\frac{\mathrm{d} \lambda_{\mu}^{\gamma}}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}\sum_{\gamma\mu}( \lambda_{\mu}^{\gamma}-\mathcal{C}_{\mu})^{2}+\right.\right.\right.\] \[\left.\left.\left.+(1-\nu\eta_{0})\beta\sum_{\gamma\mu}\mathrm{ sgn}\left(\lambda_{\mu_{1}}^{\gamma_{1}}u_{\mu_{1}}\right)\right]\right)^{M}\right\}\]
where we have disregarded terms of \(O(n^{2})\), \(O(N^{-1})\) and \(O(M^{-1})\) in the argument of the exponential and now:
\[\mathcal{C}_{\mu}\equiv\frac{\sqrt{W-R^{2}}y_{\mu}+Ru_{\mu}+\sqrt{q-W}t_{\mu}}{ \sqrt{1-q}}. \tag{71}\]
Once more we consider the RS approach by introducing the distribution \(\pi(z)\) and its conjugate \(\hat{\pi}(z):\)
\[\hat{\varrho}_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{ \ell}}=\mathcal{C}_{\hat{\pi}}\int\mathrm{d}s\,\hat{\pi}(s)\tanh^{\ell}( \beta s) \varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}=\int \mathrm{d}z\,\pi(z)\tanh^{\ell}(\beta z) \tag{72}\] \[\hat{\varrho}_{0}=\mathcal{C}_{\hat{\pi}}\int\mathrm{d}s\,\hat{\pi }(s) \varrho_{0}=\int\mathrm{d}z\,\pi(z) \tag{73}\]
where equations (72) are the definitions of the fields \(\pi\) and \(\hat{\pi}\), and equations (73) are the normalization conditions they must satisfy. By using the symmetry of the RS ansatz, by integrating over the variables \(\{\lambda_{\mu}^{\gamma}\}\) and the Hubbard–Stratonovich variables \(u\), \(t\) and \(y\), and by applying the scaling condition \(P=\alpha N\), we have that the replicated partition function takes the form:
\[\overline{Z^{n}}(\beta) = \mathop{\rm extr}\limits_{q,R,\varrho_{0},\hat{\varrho}_{0},\pi, \hat{\pi}}\left\{\exp\left[-M\left(\frac{\nu}{2}-\hat{\varrho}_{0}+\varrho_{0} \hat{\varrho}_{0}-\frac{\nu}{2}\varrho_{0}^{2}\right)\right]\right. \tag{74}\] \[\left.\left(1+nMN\frac{\nu}{2}\varrho_{0}^{2}\alpha\left\langle \ln\cosh(\beta\eta)\right\rangle_{\eta}+\frac{nMN}{2}\left(\ln(1-q)+\frac{q-R^ {2}}{1-q}\right)-\right.\right.\] \[\left.\left.-nMN\alpha\mathcal{C}_{\hat{\pi}}\int\mathrm{d}z \,\mathrm{d}s\,\pi(z)\hat{\pi}(s)\ln\left[1+\tanh(\beta s)\tanh(\beta z) \right]+\right.\] \[\left.+\frac{\nu}{2}nMN\alpha\int\mathrm{d}z_{1}\,\mathrm{d}z_{2 }\,\pi(z_{1})\pi(z_{2})\,\left\langle\ln\left(1+\tanh(\beta\eta)\tanh(\beta z _{1})\tanh(\beta z_{2})\right)\right\rangle_{\eta}+\right.\] \[\left.\left.+nNM\alpha\mathrm{e}^{-\hat{\varrho}_{0}}\sum_{C=0}^ {\infty}\frac{\mathcal{C}_{\hat{\pi}}^{\ell}}{C!}\int_{-\infty}^{\infty}\prod _{\ell=1}^{C}\mathrm{d}s_{\ell}\,\hat{\pi}(s_{\ell})\;2\int_{-\infty}^{\infty }\mathcal{D}x\mathcal{H}\left(-\frac{Rx}{\sqrt{q-R^{2}}}\right)\right.\right.\] \[\left.\left.\ln\left[\mathcal{H}\left(\sqrt{\frac{q}{1-q}}x \right)\mathrm{e}^{-\beta(1-\nu\eta_{0})}\prod_{\ell=1}^{C}[1-\tanh(\beta s_{ \ell})]+\right.\right.\] \[\left.\left.\left.+\mathcal{H}\left(-\sqrt{\frac{q}{1-q}}x\right) \mathrm{e}^{\beta(1-\nu\eta_{0})}\prod_{\ell=1}^{C}[1+\tanh(\beta s_{\ell})] \right]\right)\right\}.\]
Observe that during the integration process the dependency with respect to \(W\) disappears. By summing the series, the partition function can be re-expressed as:
\[\overline{Z^{n}}(\beta) = \mathop{\rm extr}\limits_{q,R,\varrho_{0},\hat{\varrho}_{0},\pi, \hat{\pi}}\left\{\exp\left[-M\left(\frac{\nu}{2}-\hat{\varrho}_{0}+\varrho_{0 }\hat{\varrho}_{0}-\frac{\nu}{2}\varrho_{0}^{2}\right)\right]\right. \tag{75}\] \[\left.\left(1+nMN\frac{\nu}{2}\varrho_{0}^{2}\alpha\left\langle \ln\cosh(\beta\eta)\right\rangle_{\eta}+\frac{nMN}{2}\left(\ln(1-q)+\frac{q-R ^{2}}{1-q}\right)-\right.\right.\] \[\left.\left.-nMN\alpha\mathcal{C}_{\hat{\pi}}\int\mathrm{d}z\, \mathrm{d}s\,\pi(z)\hat{\pi}(s)\ln\left[1+\tanh(\beta s)\tanh(\beta z)\right]+\right.\] \[\left.+\frac{\nu}{2}nMN\alpha\int\mathrm{d}z_{1}\,\mathrm{d}z_{2 }\,\pi(z_{1})\pi(z_{2})\,\left\langle\ln\left(1+\tanh(\beta\eta)\tanh(\beta z _{1})\tanh(\beta z_{2})\right)\right\rangle_{\eta}+\right.\] \[\left.+nNM\alpha\left(-\beta(1-\nu\eta_{0})+\mathrm{e}^{-\hat{ \varrho}_{0}}+\mathcal{C}_{\hat{\pi}}\int\mathrm{d}s\,\hat{\pi}(s)\ln[1+ \tanh(-\beta s)]\right)\right.\] \[\left.\left.+2nNM\alpha\int_{-\infty}^{\infty}\mathcal{D}x \mathcal{H}\left(-\frac{Rx}{\sqrt{q-R^{2}}}\right)\right.\right.\] \[\left.\left.\int\frac{\mathrm{d}y}{2\pi}\int\mathrm{d}\hat{y} \mathrm{e}^{-iy\hat{y}}\exp\left[\mathcal{C}_{\hat{\pi}}\int\mathrm{d}s\,\hat{ \pi}(s)\mathrm{e}^{i\hat{y}s}-\hat{\varrho}_{0}\right]\ln\left[1+\left(\mathrm{ e}^{2\beta(1-\nu\eta_{0}+y)}-1\right)\mathcal{H}\left(-\sqrt{\frac{q}{1-q}}x \right)\right]\right)\right\}.\]
Observe that
\[\partial_{\varrho_{0}}\overline{Z^{n}} = \left(-M\hat{\varrho}_{0}+\nu M\varrho_{0}+O(n)\right)\overline{Z^{n}} \tag{76}\] \[\partial_{\hat{\varrho}_{0}}\overline{Z^{n}} = \left(M-M\varrho_{0}+O(n)\right)\overline{Z^{n}} \tag{77}\]
which implies at the extremum that \(\varrho_{0}=1\) (as expected from equation (69)) and \(\hat{\varrho}_{0}=\nu\). From this last equation we have that \(\mathcal{C}_{\hat{\pi}}=\nu\). By defining the weight function:
\[\mathsf{P}(y|\hat{\pi})\equiv\int\frac{\mathrm{d}\hat{y}}{2\pi}\mathrm{e}^{-iy \hat{y}}\exp\left[\nu\left(\int\mathrm{d}s\,\hat{\pi}(s)\mathrm{e}^{i\hat{y}s }-1\right)\right], \tag{78}\]
and the distribution
\[\mathcal{P}(x|q,R)\equiv 2\mathcal{N}(x)\mathcal{H}\left(-\frac{Rx}{\sqrt{q-R^{2}}} \right), \tag{79}\]
the free energy functional \(F\) can be defined as
\[F \equiv T\frac{1-\overline{Z^{n}}(\beta)}{nNM} \tag{80}\] \[= T\left\{-\alpha\left(-\beta(1-\nu\eta_{0})+\mathrm{e}^{-\nu}+ \frac{\nu}{2}\left\langle\ln\cosh(\beta\eta)\right\rangle_{\eta}\right)-\frac{1 }{2}\left(\ln(1-q)+\frac{q-R^{2}}{1-q}\right)+\right.\] \[\left.-\frac{\nu}{2}\alpha\int\mathrm{d}z_{1}\,\mathrm{d}z_{2}\, \pi(z_{1})\pi(z_{2})\,\left\langle\ln\left(1+\tanh(\beta\eta)\tanh(\beta z_{1} )\tanh(\beta z_{2})\right)\right\rangle_{\eta}+\right.\] \[\left.+\nu\alpha\int\mathrm{d}z\,\mathrm{d}s\,\pi(z)\hat{\pi}(s) \ln\left[\frac{1+\tanh(\beta s)\tanh(\beta z)}{1-\tanh(\beta s)}\right]- \alpha\beta\int\mathrm{d}y\mathsf{P}(y|\hat{\pi})(1-\nu\eta_{0}+y)\right.\] \[\left.-\alpha\int\mathrm{d}x\mathcal{P}_{Rq}(x)\,\int\mathrm{d} y\mathsf{P}(y|\hat{\pi})\ln\left[\mathrm{e}^{-\beta(1-\nu\eta_{0}+y)}\mathcal{H} \left(\sqrt{\frac{q}{1-q}}x\right)+\mathrm{e}^{\beta(1-\nu\eta_{0}+y)} \mathcal{H}\left(-\sqrt{\frac{q}{1-q}}x\right)\right]\right\}\]
and the free energy density of the system can be expressed as \(f=\mathop{\rm extr}_{q,R,\pi,\hat{\pi}}\,F.\)
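Equation (78) is the characteristic function of a compound-Poisson law: \(y\) is distributed as the sum of \(K\) independent draws from \(\hat{\pi}\), with \(K\sim\mathrm{Poisson}(\nu)\). A Monte Carlo sketch that checks this reading against the Gaussian moments of equation (32) is given below (Python; all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_P(nu, pi_hat_sampler, n=50_000):
    """Draw y ~ P(y|pi_hat): a Poisson(nu) number of iid draws from pi_hat."""
    ks = rng.poisson(nu, size=n)
    return np.array([pi_hat_sampler(k).sum() for k in ks])

# pure-state pi_hat = N(I0, eta0^2 - I0^2 + Delta^2), cf. equation (30)
nu, eta0, Delta, I0 = 10.0, 0.08, 0.01, 0.05   # illustrative values
sd = np.sqrt(eta0**2 - I0**2 + Delta**2)
y = sample_P(nu, lambda k: rng.normal(I0, sd, size=k))
print(y.mean(), y.var())  # ~ nu*I0 and ~ nu*(eta0**2 + Delta**2), cf. (32)
```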
### Fourier Transform of the Distributions
By using the Fourier transform in the saddle-point equations (25) and (26) we define the functions:
\[\hat{\phi}(\omega) \equiv \int\mathrm{d}s\,\hat{\pi}(s)\mathrm{e}^{is\omega} \tag{81}\] \[= 1+i\mathcal{I}_{0}\omega-\frac{\mathcal{R}_{0}}{2}\omega^{2}+O( \omega^{3})\] \[\phi(\omega) \equiv\int\mathrm{d}s\,\pi(s)\mathrm{e}^{is\omega}. \tag{82}\]
Thus:
\[\mathsf{P}(y|\hat{\pi}) \equiv \int\frac{\mathrm{d}\omega}{2\pi}\,\mathrm{e}^{-iy\omega+\nu\hat{\phi}(\omega)-\nu} \tag{83}\] \[\approx \int\frac{\mathrm{d}\omega}{2\pi}\,\exp\left[-\frac{\nu\mathcal{R}_{0}}{2}\omega^{2}+i(\nu\mathcal{I}_{0}-y)\omega\right]=\mathcal{N}\left(y\,|\nu\mathcal{I}_{0},\nu\mathcal{R}_{0}\right),\]
which is consistent with (19) and (20). Consider the definition (29), then the Fourier Transform of \(\pi(s)\) can be expressed as:
\[\phi(\omega) \approx \int\mathrm{d}y\,\mathcal{N}\left(y\,|\nu\mathcal{I}_{0},\nu\mathcal{R}_{0}\right)\mathrm{e}^{iy\omega}\left\langle\exp\left[i\omega\left(1-\nu\eta_{0}+\frac{\Lambda(R)}{2}x^{2}\right)\right]\right\rangle_{x}, \tag{84}\]
where the expectation over \(x\) is approximated by:
\[\left\langle\exp\left\{i\omega\left[1-\nu\eta_{0}+\frac{\Lambda(R)}{2}x^{2}\right]\right\}\right\rangle_{x} \approx 2\int_{-\infty}^{\infty}\mathcal{D}x\,\Theta(xR)\exp\left[i\omega\left(1-\nu\eta_{0}+\frac{\Lambda(R)}{2}x^{2}\right)\right] \tag{85}\] \[\approx 2\int_{0}^{\infty}\frac{\mathrm{d}x}{\sqrt{2\pi}}\exp\left[-\frac{x^{2}}{2}+i\omega\left(1-\nu\eta_{0}+\frac{\Lambda(R)}{2}x^{2}\right)\right]\] \[\approx \int_{-\infty}^{\infty}\frac{\mathrm{d}x}{\sqrt{2\pi}}\exp\left[-[1-i\omega\Lambda(R)]\frac{x^{2}}{2}+i\omega\left(1-\nu\eta_{0}\right)\right]\] \[\approx \frac{1}{\sqrt{1-i\omega\Lambda}}\exp\left[i(1-\nu\eta_{0})\omega\right]\] \[\approx \exp\left[-\frac{3}{8}\Lambda^{2}\omega^{2}+i\left(1-\nu\eta_{0}+\frac{\Lambda(R)}{2}\right)\omega\right],\]
where (85) is reached by assuming that \(\Lambda\) is sufficiently small. From this point onwards we will refer to the parameter \(\Lambda(R)\) as the amplified thermal noise. In such a case the function \(\phi\) is Gaussian:
\[\phi(s)\approx\exp\left[-\frac{\nu{\cal R}_{0}+\frac{3}{4}\Lambda(R)^{2}}{2}s^{2 }+i\left(1-\nu\eta_{0}+\nu{\cal I}_{0}+\frac{\Lambda(R)}{2}\right)s\right], \tag{86}\]
and so
\[\pi(z)\approx\frac{1}{\sqrt{2\pi\left(\nu{\cal R}_{0}+\frac{3}{4}\Lambda(R)^{2 }\right)}}\exp\left\{-\frac{\left(z-1+\nu\eta_{0}-\nu{\cal I}_{0}-\frac{1}{2} \Lambda(R)\right)^{2}}{2\left(\nu{\cal R}_{0}+\frac{3}{4}\Lambda(R)^{2}\right) }\right\}. \tag{87}\]
The argument of the Dirac delta function in (26) is mildly sensitive to changes in temperature; therefore we can approximate it by:
\[\beta^{-1}{\rm arctanh}\left[\tanh(\beta\eta)\tanh(\beta z)\right]\ \approx\ \frac{|\eta+z|-|\eta-z|}{2}={\rm sgn}(z\eta)\min\{|\eta|,|z|\}, \tag{88}\]
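A quick numerical sanity check of approximation (88) at large \(\beta\) (Python; the chosen values are purely illustrative):

```python
import numpy as np

beta, eta, z = 50.0, 0.08, 0.05
lhs = np.arctanh(np.tanh(beta * eta) * np.tanh(beta * z)) / beta
rhs = np.sign(z * eta) * min(abs(eta), abs(z))
print(lhs, rhs)   # both ~ 0.05; the deviation shrinks as beta grows
```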
Approximation (88) allows us to write the expression of the Fourier transform of the distribution \(\hat{\pi}\) as:
\[\hat{\phi}(\omega)\ =\ \int{\rm d}s\phi(s)\left\langle\int\frac{{\rm d}z}{2\pi} \,\exp\left(-isz+i\omega{\rm sgn}(z\eta)\min\{|\eta|,|z|\}\right)\right\rangle _{\eta}, \tag{89}\]
where the expectation over the social strengths can be demonstrated to be, disregarding terms of \(O(\eta_{0}\Delta^{2})\):
\[{\cal Q} \equiv\ \left\langle\int\frac{{\rm d}z}{2\pi}\,\exp\left(-isz+i \omega\beta^{-1}\,{\rm arctanh}\left[\tanh(\beta\eta)\tanh(\beta z)\right] \right)\right\rangle_{\eta} \tag{90}\] \[\approx\ \delta(s)+\frac{\omega}{\pi}\exp\left(-\frac{\Delta^{2}s^{2}}{2} \right)\frac{\eta_{0}}{s}-\omega^{2}\frac{\eta_{0}^{2}+\Delta^{2}}{2}\delta(s).\]
Expression (90), together with (89) and (81), produces the following expressions for the moments of \(\nu^{-1}\hat{\pi}\):
\[1 = \phi(0) \tag{91}\] \[{\cal I}_{0} = -\frac{i\eta_{0}}{\pi}\int{\rm d}s\frac{\phi(s)}{s}{\rm e}^{- \frac{\Delta^{2}s^{2}}{2}}\] (92) \[{\cal R}_{0} = \eta_{0}^{2}+\Delta^{2}, \tag{93}\]
which implies that
\[{\cal I}_{0}^{\star}=\eta_{0}{\rm erf}\left(\frac{1-\nu\eta_{0}+\nu{\cal I}_{ 0}^{\star}+\frac{1}{2}\Lambda(R)}{\sqrt{2\left[\nu(\eta_{0}^{2}+\Delta^{2})+ \frac{3}{4}\Lambda(R)^{2}\right]}}\right). \tag{94}\]
Equation (94) has one, two or three solutions depending on the values of the parameters \(\nu\), \(\eta_{0}\), \(\Delta\) and the function \(\Lambda(R).\) It is easy to see that for the line:
\[\eta_{0}=\frac{2+\Lambda(R)}{2\nu} \tag{95}\]
\({\cal I}_{0}^{\star}=0\) is a solution. Thus we expect that for \(\nu\eta_{0}>1+\Lambda(R)/2\), \({\cal I}_{0}^{\star}\approx-(\eta_{0}-\epsilon)\) is a solution, and for \(\nu\eta_{0}<1+\Lambda(R)/2\), \({\cal I}_{0}\approx\eta_{0}-\epsilon\) is a solution (in both cases \(0<\epsilon<\eta_{0}\) is a suitable positive number).
If the derivative with respect to \({\cal I}_{0}\) of the right-hand side of (94) evaluated at \({\cal I}_{0}^{\star}\) is equal to 1, and \({\cal I}_{0}^{\star}\) is also a solution of (94), then (94) has two solutions, \({\cal I}_{0}^{\star}\) and \(\eta_{0}-\epsilon\) or \(-(\eta_{0}-\epsilon)\), depending on whether \(\nu\eta_{0}<1+\Lambda(R)/2\) or \(\nu\eta_{0}>1+\Lambda(R)/2\), respectively.
If \(\eta_{-}(\Lambda)<\eta_{0}<\eta_{+}(\Lambda)\), where:
\[\eta_{\pm} = \frac{2\mp|\Lambda|}{2\nu}\mp\left[\frac{1}{\nu}\sqrt{\left[\nu( \eta_{\pm}^{2}+\Delta^{2})+\frac{3}{4}\Lambda(R)^{2}\right]\log\left(\frac{2} {\pi}\frac{\nu^{2}\eta_{\pm}^{2}}{\nu(\eta_{\pm}^{2}+\Delta^{2})+\frac{3}{4} \Lambda(R)^{2}}\right)}-\right. \tag{96}\] \[\left.-\eta_{\pm}{\rm erf}\left(\sqrt{\log\left(\sqrt{\frac{2}{ \pi}\frac{\nu^{2}\eta_{\pm}^{2}}{\nu(\eta_{\pm}^{2}+\Delta^{2})+\frac{3}{4} \Lambda(R)^{2}}}\right)}\right)\right]\]
are the superior (\(\eta_{+}\)) and inferior (\(\eta_{-}\)) limits of the area in the plane (\(|\Lambda|,\eta_{0}\)) where three solutions to equation (94) can be found. Let us define the set \(\mathbb{A}:=\{(|\Lambda|,\eta_{0}):\eta_{-}(\Lambda)<\eta_{0}<\eta_{+}(\Lambda)\}\). (The use of \(|\Lambda|\) instead of \(\Lambda\) is due to the fact that realizable solutions must satisfy \(\mathrm{sgn}(R)=\mathrm{sgn}(1-\nu\eta_{0})\). This point will be clarified when the equations involving \(R\) and \(q\) are considered. See below.)
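Since equation (96) is implicit in \(\eta_{\pm}\), the boundary of \(\mathbb{A}\) must be computed numerically. A possible sketch is given below (Python with SciPy; the initial guesses are taken from the \(\Lambda=0\) vertices quoted in the caption of figure 7, and the root-finder has to be started where the logarithm in (96) is positive).

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import erf

def eta_boundary(sign, Lam, nu, Delta, eta_init):
    """Solve equation (96); sign=+1 gives eta_+, sign=-1 gives eta_-."""
    def res(eta):
        V = nu * (eta**2 + Delta**2) + 0.75 * Lam**2
        r = (2.0 / np.pi) * (nu * eta)**2 / V      # argument of the log
        bracket = (np.sqrt(V * np.log(r)) / nu
                   - eta * erf(np.sqrt(0.5 * np.log(r))))
        return eta - ((2.0 - sign * abs(Lam)) / (2.0 * nu) - sign * bracket)
    return fsolve(res, eta_init)[0]

nu, Delta = 10.0, 0.01
print(nu * eta_boundary(+1, 0.0, nu, Delta, 1.65 / nu))  # ~ 1.651
print(nu * eta_boundary(-1, 0.0, nu, Delta, 0.72 / nu))  # ~ 0.717
```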
It is important to note that for very high values of the pressure \(\beta\) we have that:
\[\nu\eta_{\pm}\approx\frac{1}{1\pm\sqrt{\frac{1}{\nu}\log\left(\frac{2\nu}{\pi }\right)}\mp\mathrm{erf}\left(\sqrt{\frac{1}{2}\log\left(\frac{2\nu}{\pi} \right)}\right)}, \tag{97}\]
which implies that (at very high pressure) the segment of coexistence of states grows with \(\sqrt{\nu}\). From equation (94) we conclude that, to satisfy the saddle-point equations (25,26), there must be a set of values of \(\eta\) such that, at very high pressure \(\beta\), states with different attitudes coexist.
For all the points \((\Lambda,\eta_{0})\notin\mathbb{A}\), we have that both conditions (92) and (93) are satisfied for a distribution:
\[\hat{\pi}(s)\ =\ \mathcal{N}\left(s\left|\mathcal{I}_{0}^{\star},\eta_{0}^{2}-( \mathcal{I}_{0}^{\star})^{2}+\Delta^{2}\right.\right) \tag{98}\]
where \(\mathcal{I}_{0}^{\star}\) is the only solution of (94). In this case we also have that:
\[\mathsf{P}(y|\hat{\pi})\approx\mathcal{N}\left(y\left|\nu\mathcal{I}_{0}^{ \star},\nu(\eta_{0}^{2}+\Delta^{2})\right.\right). \tag{99}\]
For the points \((\Lambda,\eta_{0})\in\mathbb{A}\) we propose the following form for \(\hat{\pi}(s)=\mathcal{Z}^{-1}\exp[-\Phi(s)]\), where \(\mathcal{Z}\) is a normalization constant and the function \(\Phi(s)\) is defined as:
\[\Phi(s) \coloneqq \frac{s^{2}}{2\Delta^{2}}-\frac{\eta_{0}}{\nu}\frac{1-\nu\eta_{ 0}+\nu s+\frac{1}{2}\Lambda(R)}{\Delta^{2}}\mathrm{erf}\left(\frac{1-\nu\eta_ {0}+\nu s+\frac{1}{2}\Lambda(R)}{\sqrt{2\left[\nu(\eta_{0}^{2}+\Delta^{2})+ \frac{3}{4}\Lambda(R)^{2}\right]}}\right)- \tag{100}\] \[-2\frac{\eta_{0}}{\nu}\frac{\nu(\eta_{0}^{2}+\Delta^{2})+\frac{3 }{4}\Lambda(R)^{2}}{\Delta^{2}}\mathcal{N}\left(\nu s\left|\nu\eta_{0}-1- \frac{1}{2}\Lambda(R),\nu(\eta_{0}^{2}+\Delta^{2})+\frac{3}{4}\Lambda(R)^{2} \right.\right).\]
Figure 7: Phase diagram of the system in terms of the parameters \(\nu\eta_{0}\) and \(|\Lambda|\). There are two phases separated by the line \(\nu\eta_{0}=1\). For \(\nu\eta_{0}<1\) we have that \(R>0\) and the average consensus is in favor of \(B\). For \(\nu\eta_{0}>1\), \(R<0\) and the average position of the agents is to form local alliances against the president \(B\). At all points of the plane outside the region \(\mathbb{A}\), the distribution describing the position of the neighborhood, given by \(\hat{\pi}(z)\), is sharply peaked at \(+\eta_{0}\) for the conservative position, i.e. \(\nu\eta_{0}<1\), or at \(-\eta_{0}\) for the polarized position, i.e. \(\nu\eta_{0}>1\). In region \(\mathbb{A}\) we have the same phase separation at \(\nu\eta_{0}=1\), but the contribution from the neighborhood is a mixture of a polarized component plus a conservative component. We also observe that the vertices of \(\mathbb{A}\) are (0,1.651), (0.411,1) and (0,0.717).
Observe that \(\Phi^{\prime}(\mathcal{I}_{0}^{*})=0\) is identical to (94) and that, for all points in \(\mathbb{A}\), this equation has three roots \(\mathcal{I}_{-}^{*}<\mathcal{I}_{0}^{*}<\mathcal{I}_{+}^{*}\). The asymptotic behavior \(\lim_{s\rightarrow\pm\infty}s^{-2}\Phi(s)>0\) indicates that \(\mathcal{I}_{\pm}^{*}\) are minima, with \(\mathcal{I}_{0}^{*}\) an intermediate maximum. Let us compute the second derivative of \(\Phi\) at the minima:
\[\Phi^{\prime\prime}(\mathcal{I}_{\pm}^{*}) = \left.\frac{1}{\Delta^{2}}\left(1-\eta_{0}\ \frac{\mathrm{d}}{\mathrm{d}s} \mathrm{erf}\left(\frac{1-\nu\eta_{0}+\nu s+\frac{1}{2}\Lambda(R)}{\sqrt{2 \left[\nu(\eta_{0}^{2}+\Delta^{2})+\frac{3}{4}\Lambda(R)^{2}\right]}}\right) \right|_{s=\mathcal{I}_{\pm}^{*}}\right), \tag{101}\]
therefore \(\Phi^{\prime\prime}(\mathcal{I}_{\pm}^{*})=\mathcal{L}_{\pm}^{2}\Delta^{-2}\), with:
\[\mathcal{L}_{\pm}\coloneqq\sqrt{1-2\nu\eta_{0}\mathcal{N}\left(\nu\mathcal{I} _{\pm}^{*}\left|\nu\eta_{0}-1-\frac{1}{2}\Lambda(R),\nu(\eta_{0}^{2}+\Delta^{ 2})+\frac{3}{4}\Lambda(R)^{2}\right.\right)}, \tag{102}\]
which is larger than zero for all \((\Lambda,\eta_{0})\in\mathbb{A}\). Thus:
\[\hat{\pi}(s) \approx h_{+}\mathcal{N}\left(s\left|\mathcal{I}_{+}^{*},\Delta^{2} \right.\right)+h_{-}\mathcal{N}\left(s\left|\mathcal{I}_{-}^{*},\Delta^{2}\right.\right) \tag{103}\] \[h_{\pm} \equiv \frac{1}{2}\pm\frac{1}{2}\tanh\left(\frac{\Phi(\mathcal{I}_{+}^{* })-\Phi(\mathcal{I}_{-}^{*})}{4}-\frac{1}{2}\ln\frac{\mathcal{L}_{+}}{ \mathcal{L}_{-}}\right)\] (104) \[\int\mathrm{d}s\hat{\pi}(s)s \approx h_{+}\mathcal{I}_{+}^{*}+h_{-}\mathcal{I}_{-}^{*}\] (105) \[\int\mathrm{d}s\hat{\pi}(s)s^{2} \approx h_{+}(\mathcal{I}_{+}^{*})^{2}+h_{-}(\mathcal{I}_{-}^{*})^{2}+ \Delta^{2}, \tag{106}\]
observe that the expectation (105) represents a convex combination of the solutions to equation (94), and that \(\hat{\pi}^{\prime}(\mathcal{I}_{\pm}^{*})\approx 0\). In order to compute the distribution \(\mathsf{P}(y|\hat{\pi})\) we first need to compute the Fourier transform of \(\hat{\pi}\):
\[\hat{\phi}(\omega) \approx h_{+}\exp\left[-\frac{\Delta^{2}}{2}\omega^{2}+i\omega\mathcal{I }_{+}^{*}\right]+h_{-}\exp\left[-\frac{\Delta^{2}}{2}\omega^{2}+i\omega \mathcal{I}_{-}^{*}\right], \tag{107}\]
and if we define
\[\Upsilon(\omega,y) \coloneqq -iy\omega-\nu+\nu\int\mathrm{d}s\frac{\exp[-\Phi(s)+is\omega]}{ \int\mathrm{d}s^{\prime}\,\exp[-\Phi(s^{\prime})]} \tag{108}\] \[\approx -iy\omega-\nu+\nu\left\{h_{+}\exp\left[-\frac{\Delta^{2}}{2} \omega^{2}+i\omega\mathcal{I}_{+}^{*}\right]+h_{-}\exp\left[-\frac{\Delta^{2} }{2}\omega^{2}+i\omega\mathcal{I}_{-}^{*}\right]\right\}\] \[\approx -iy\omega+\nu\left\{i\omega(h_{+}\mathcal{I}_{+}^{*}+h_{-} \mathcal{I}_{-}^{*})-\frac{h_{+}(\mathcal{I}_{+}^{*})^{2}+h_{-}(\mathcal{I}_{ -}^{*})^{2}+\Delta^{2}}{2}\omega^{2}\right\}+O(\omega^{3})\]
\[\mathsf{P}(y|\hat{\pi}) = \int\frac{\mathrm{d}\omega}{2\pi}\,\mathrm{e}^{\Upsilon(\omega,y)} \tag{109}\] \[\approx \mathcal{N}\left(y\left|\nu(h_{+}\mathcal{I}_{+}^{*}+h_{-} \mathcal{I}_{-}^{*}),\nu[h_{+}(\mathcal{I}_{+}^{*})^{2}+h_{-}(\mathcal{I}_{-}^ {*})^{2}+\Delta^{2}]\right.\right)\]
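For completeness, a direct transcription of equations (100), (102) and (104) into code is sketched below (Python; the formulas are implemented exactly as printed, and the roots \(\mathcal{I}_{\pm}^{*}\) of equation (94) are assumed to be supplied, e.g. by fixed-point iteration as sketched in Section V).

```python
import numpy as np
from scipy.special import erf

def npdf(x, mu, var):
    return np.exp(-(x - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def Phi(s, nu, eta0, Delta, Lam):
    """Effective potential of equation (100)."""
    V = nu * (eta0**2 + Delta**2) + 0.75 * Lam**2
    u = 1.0 - nu * eta0 + nu * s + 0.5 * Lam
    return (s**2 / (2 * Delta**2)
            - (eta0 / nu) * u / Delta**2 * erf(u / np.sqrt(2 * V))
            - 2 * (eta0 / nu) * (V / Delta**2)
              * npdf(nu * s, nu * eta0 - 1.0 - 0.5 * Lam, V))

def weights(Ip, Im, nu, eta0, Delta, Lam):
    """Mixture weights h_+ and h_- of equations (102) and (104)."""
    V = nu * (eta0**2 + Delta**2) + 0.75 * Lam**2
    L = lambda I: np.sqrt(1.0 - 2 * nu * eta0
                          * npdf(nu * I, nu * eta0 - 1.0 - 0.5 * Lam, V))
    arg = (Phi(Ip, nu, eta0, Delta, Lam) - Phi(Im, nu, eta0, Delta, Lam)) / 4 \
          - 0.5 * np.log(L(Ip) / L(Im))
    h_plus = 0.5 + 0.5 * np.tanh(arg)
    return h_plus, 1.0 - h_plus

# usage: given I_plus and I_minus from (94), h_+ and h_- weight mixture (103)
```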
These distributions should be applied depending on the value of \(\eta_{0}\) and \(\Lambda(R)\), following the diagram presented in figure 7.
|
2301.13817 | Patch Gradient Descent: Training Neural Networks on Very Large Images | Traditional CNN models are trained and tested on relatively low resolution
images (<300 px), and cannot be directly operated on large-scale images due to
compute and memory constraints. We propose Patch Gradient Descent (PatchGD), an
effective learning strategy that allows to train the existing CNN architectures
on large-scale images in an end-to-end manner. PatchGD is based on the
hypothesis that instead of performing gradient-based updates on an entire image
at once, it should be possible to achieve a good solution by performing model
updates on only small parts of the image at a time, ensuring that the majority
of it is covered over the course of iterations. PatchGD thus extensively enjoys
better memory and compute efficiency when training models on large scale
images. PatchGD is thoroughly evaluated on two datasets - PANDA and UltraMNIST
with ResNet50 and MobileNetV2 models under different memory constraints. Our
evaluation clearly shows that PatchGD is much more stable and efficient than
the standard gradient-descent method in handling large images, and especially
when the compute memory is limited. | Deepak K. Gupta, Gowreesh Mago, Arnav Chavan, Dilip K. Prasad | 2023-01-31T18:04:35Z | http://arxiv.org/abs/2301.13817v1 | # Patch Gradient Descent: Training Neural Networks on Very Large Images
###### Abstract
Traditional CNN models are trained and tested on relatively low-resolution images (\(<300\) px), and cannot be directly operated on large-scale images due to compute and memory constraints. We propose Patch Gradient Descent (PatchGD), an effective learning strategy that allows training the existing CNN architectures on large-scale images in an end-to-end manner. PatchGD is based on the hypothesis that instead of performing gradient-based updates on an entire image at once, it should be possible to achieve a good solution by performing model updates on only small parts of the image at a time, ensuring that the majority of it is covered over the course of iterations. PatchGD thus extensively enjoys better memory and compute efficiency when training models on large-scale images. PatchGD is thoroughly evaluated on two datasets - PANDA and UltraMNIST - with ResNet50 and MobileNetV2 models under different memory constraints. Our evaluation clearly shows that PatchGD is much more stable and efficient than the standard gradient-descent method in handling large images, especially when the compute memory is limited.
## 1 Introduction
Convolutional neural networks (CNNs) are considered among the most vital ingredients for the rapid developments in the field of computer vision. This can be attributed to their capability of extracting very complex information far beyond what can be obtained from the standard computer vision methods. For more information, we refer the reader to the recently published comprehensive reviews (Khan et al., 2020; Li et al., 2021; Alzubaidi et al., 2021).
With the recent technological developments, very large images are obtained from data acquisition in the fields of microscopy (Khater et al., 2020; Schermelleh et al., 2019), medical imaging (Aggarwal et al., 2021), and earth sciences (Huang et al., 2018; Amani et al., 2020), among others. Recently, there has been a drive to use deep learning methods in these fields as well. In particular, several deep learning methods have been proposed to handle the images from the microscopy domain (Orth et al., 2017; Dankovich and Rizzoli, 2021; Sekh et al., 2020, 2021), however, the big data challenge of applying CNNs to analyze such images is immense, as we demonstrate in Figure 1. High content nanoscopy involves taking nanoscopy images of several adjacent fields-of-view and stitching them side-by-side to have a full perspective of the biological sample, such as a patient's tissue biopsy, put under the microscope. There is information at multiple scales embedded in these microscopy images (Villegas-Hernandez et al., 2022), with the smallest scale of features being only a few pixels in size. Indeed, such dimensions of images and levels of details are a challenge for the existing CNNs.
Figure 1: Example nanoscopy image (left) of a mouse kidney cryosection approximately 1/12th of the area of a single field-of-view of the microscope, chosen to illustrate the level of details at different scales. The bottom right images show that the smallest features in the image of relevance can be as small as a few pixels (here 5-8 pixels for the holes)(Villegas-Hernández et al., 2022).
Existing deep learning models using CNNs are predominantly trained and tested in a relatively low-resolution regime (less than \(300\times 300\) pixels). This is partly because the widely used image benchmarking datasets, such as ILSVRC (the ImageNet dataset) [10] for classification and PASCAL VOC [1] for object detection/segmentation, consist of low-resolution images in a similar range, and most of the existing research has been directed towards achieving state-of-the-art (SOTA) results on these or similar datasets. Using these models on high-resolution images leads to quadratic growth of the associated activation size, which in turn leads to a massive increase in the training compute as well as the memory footprint. Further, when the available GPU memory is limited, such large images cannot be processed by CNNs.
There exist very limited works that address the issue of handling very large images using CNNs. The most common approach among these is to reduce the resolution of the images through downscaling. However, this can lead to a significant loss of information associated with the small-scale features, and it can adversely affect the semantic context associated with the image. An alternate strategy is to divide the image into overlapping or non-overlapping tiles and process the tiles in a sequential manner. However, this approach does not assure that the semantic link across the tiles will be preserved and it can hinder the learning process. Several similar strategies exist that attempt to learn the information contained in the large images, however, their failure to capture the global context limits their use.
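As a concrete illustration of the tiling baseline mentioned above, a minimal patch-extraction sketch is given below (PyTorch; the tile size is an arbitrary choice). Processing such tiles independently is exactly what breaks the semantic link across tile boundaries.

```python
import torch

def tiles(img, k=256, stride=256):
    """Split a CHW tensor into non-overlapping k x k tiles (naive baseline)."""
    c, h, w = img.shape
    patches = img.unfold(1, k, stride).unfold(2, k, stride)  # C, nH, nW, k, k
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, k, k)

x = torch.randn(3, 1024, 1024)
print(tiles(x).shape)   # torch.Size([16, 3, 256, 256])
```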
In this paper, we present a novel CNN training pipeline that is capable of handling very large images. We point out here that 'large images' should not be plainly interpreted in terms of the number of pixels that they comprise; rather, an image should be considered too large to be trained with CNNs if the respective computational memory budget available for it is small. For example, while training a ResNet50 classification model with images of size \(10,000\times 10,000\) might hardly be possible on a GPU card of 48 GB memory, a GPU memory of 12 GB could be good enough to train the same model on \(512\times 512\) size images. Further, when the same \(512\times 512\) size images are trained with a GPU memory limit of 4 GB, these might be looked at as too large.
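To make the memory argument concrete, the back-of-the-envelope estimate below sketches how activation memory scales with resolution (Python). The per-element activation multiplier is a crude assumed constant for ResNet50-like networks, not a measured value; only the quadratic growth with the image side matters here.

```python
def act_mem_gb(side, batch=1, channels=3, mult=30, bytes_per_el=4):
    # mult: assumed activation elements stored per input element across
    # the network (training keeps them for backpropagation); an assumption
    return batch * side * side * channels * mult * bytes_per_el / 1e9

for side in (224, 512, 4096, 10_000):
    print(f"{side:>6} px  ~{act_mem_gb(side):8.1f} GB")
```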
Figure 2 presents a better understanding of the problem outlined above. We consider here the task of classification of the UltraMNIST digits [11] into one of the 10 predefined classes labelled from 0-9. UltraMNIST images used here comprise 3-5 MNIST digits of extremely varying scale, and the sum of the digits ranges between 0-9. The label class of each image corresponds to the sum of the contained digits. More details related to the UltraMNIST classification problem are presented in Appendix B.2. We consider here images of size \(512\times 512\) pixels and pose the problem to be solved at two different computational memory budgets. We consider the two cases of GPU memory limits of 4 GB and 16 GB. For the base CNN model, we use the ResNet50 [12] architecture and employ the standard training approach. We refer to this approach as Gradient descent (GD). We further present results obtained using the proposed training pipeline, referred to as _PatchGD_. Short for Patch Gradient Descent, it is a scalable training method designed to build neural networks with either very large images, or very low memory compute, or a combination of both.
The efficacy of PatchGD is evident from the results in Figure 2, where PatchGD outperforms the conventional GD method at the 16 GB as well as the 4 GB memory limit. While the difference in performance is 4% at 16 GB, it grows to a remarkable margin of 13% in accuracy at 4 GB. The classification problem at a 4 GB memory budget is intended to replicate the real-world challenges of dealing with large images. With only 4 GB at hand, an image size of \(512\times 512\) is already too large for training a ResNet50 model, and this leads to the inferior performance shown in Figure 2. However, PatchGD is stable even in this low memory regime, which can be attributed to its design that makes it largely invariant to image size. We describe the details of the method later in the paper and demonstrate through experimental results on a variety of image sizes that PatchGD is capable of adapting existing CNN models to work with very large images even if the available GPU memory is limited.
**Contributions.** To summarize, the contributions of this paper can be listed as follows.
* We present _Patch Gradient Descent (PatchGD)_, a novel strategy to train neural networks on very large images in an end-to-end manner.
Figure 2: Performance comparison of standard CNN and PatchGD (ours) for the task of classification of UltraMNIST digits of size \(512\times 512\) pixels using ResNet50 model. Two different computational memory budgets of 16 GB and 4GB are used, and it is demonstrated that PatchGD is relatively stable for the chosen image size, even for very low memory compute.
* Due to its inherent ability to work with small fractions of a given image, PatchGD is scalable on small GPUs, where training the original full-scale images may not even be possible.
* PatchGD reinvents the existing CNN training pipeline in a very simplified manner and this makes it compatible with any existing CNN architecture. Moreover, its simple design allows it to benefit from the pre-training of the standard CNNs on the low-resolution data.
## 2 Related Work
This paper aims at improving the capability of CNNs in handling large-scale images in general. To our knowledge, only very limited research exists in this direction, and we discuss it in this section. Most existing works focus on histopathological datasets since these are popular sources of large images. The majority employ pixel-level segmentation masks, which are not always available. For example, Iizuka et al. (2020); Liu et al. (2017) perform patch-level classification based on labels created from patchwise segmentation masks available for the whole slide images (WSI), and then feed the predictions to an RNN to obtain the final WSI label. Braatz et al. (2022) use goblet cell segmentation masks to perform patch-level feature extraction. However, these approaches require labelled segmentation data, are computationally expensive, offer only limited feature learning, and are prone to error propagation.
Another set of methods focuses on building a compressed latent representation of the large input images using existing pretrained models or unsupervised learning approaches. For example, Lai et al. (2022) use a U-Net autoencoder and stack the resulting representations into a cube, which is then fed to another module to obtain slide-level predictions. Tellez et al. (2018) explore the use of different encoding strategies, including reconstruction error minimization, contrastive learning, and adversarial feature learning, to map high-resolution patches to a lower-dimensional vector. Tellez et al. (2020) extend this work and use multi-task learning to obtain better patch representations than their unsupervised counterparts. One important limitation of this class of methods is that the encoding network created from unsupervised learning is not always strongly representative of the target task.
There exist several methods that use pretrained models derived from other tasks as feature extractors, with the output then fed to a classifier. Example methods include using the Cancer-Texture Network (CAT-Net) and Google Brain (GB) models as feature extractors (Kosaraju et al., 2022), or additionally using similar datasets for fine-tuning (Brancati et al., 2021). Although these methods benefit from transfer learning, such two-stage decoupled pipelines propagate errors through under-represented features, and the performance of the model on the target task is hampered. In this paper, we propose a single-step approach that can be trained in an end-to-end manner on the target task.
Several research works have focused on identifying the right patches from large images and using them in a compute-effective manner to classify the whole image. Naik et al. (2020) propose to construct the latent space using randomly selected tiles; however, this approach does not preserve the semantic coherence across the tiles and fails to extract features that are spread across multiple tiles. Campanella et al. (2019) cast this as a multi-instance learning approach, assigning labels to the top-K probability patches for classification. Pinckaers et al. (2022); Huang et al. (2022) propose patch-based training, but make use of streaming convolutional networks. Sharma et al. (2021) cluster similar patches and perform cluster-aware sampling for WSI and patch classification. Cordonnier et al. (2021) use a patch scoring mechanism and a patch aggregator network for the final prediction; however, they perform downsampling for patch scoring, which may cause the loss of patch-specific features important for WSI. Papadopoulos et al. (2021) progressively increase the resolution and localize the regions of interest, dropping the rest, which is equivalent to performing hard adaptive attention. DiPalma et al. (2021) train a teacher model at high resolution and perform knowledge distillation for the same model at a lower resolution. Katharopoulos and Fleuret (2019) perform attention sampling on a downsampled image and derive an unbiased estimator for the gradient update; however, their method involves downsampling for attention, which may lose some vital information. It is important to note that all such methods employing patch selection and knowledge distillation are orthogonal to our work and can easily be combined with it; however, this is beyond the scope of this paper.
With the recent popularity of Transformer-based methods for vision tasks, Chen et al. (2022) proposed a self-supervised learning objective for pre-training large-scale vision transformers at varying scales. Their method involves a hierarchical vision transformer that leverages the natural hierarchical structure inherent in WSI. However, their method requires a massive pre-training stage, which is not always feasible. Moreover, their method is specific to WSI rather than general image classification and involves training multiple large-scale transformers. Our method, on the other hand, targets the more general image classification task and does not involve large-scale pre-training; rather, it directly works with any existing CNN model.
## 3 Approach
### General description
_Patch Gradient Descent (PatchGD)_ is a novel CNN training strategy that can train networks with high-resolution images. It is based on the hypothesis that, rather than performing gradient-based updates on an entire image at once, it should be possible to achieve a good solution by performing model updates on only small parts of the image at a time, ensuring that the majority of it is covered over the course of iterations. However, even if only a portion of the image is used, the model is still trainable end-to-end with PatchGD.
Figure 3 presents a schematic explanation of the PatchGD method. At the core of PatchGD lies the construction or filling of the \(\mathbf{Z}\) block, a deep latent representation of the full input image. Irrespective of which parts of the input are used to perform model updates, \(\mathbf{Z}\) builds an encoding of the full image based on information acquired for its different parts over the previous few update steps. We further explain the use of the \(\mathbf{Z}\) block using the diagram shown in Figure 3(a). As can be seen, \(\mathbf{Z}\) is primarily an encoding of an input image \(\mathbf{X}\) obtained using a given model parameterized with weights \(\mathbf{\theta}_{1}\). The input image is divided into \(m\times n\) patches and each patch is processed as an independent image using \(\mathbf{\theta}_{1}\). The size of \(\mathbf{Z}\) is always enforced to be \(m\times n\times s\), such that patch \(\mathbf{x}_{ij}\) in the input space corresponds to the respective \(1\times 1\times s\) segment in the \(\mathbf{Z}\) block.
The process of \(\mathbf{Z}\)-filling spans multiple steps, where every step involves sampling \(k\) patches and their respective positions from \(\mathbf{X}\) and passing them as a batch to the model. The model output, combined with the positions, is then used to fill the respective parts of \(\mathbf{Z}\). Once all the \(m\times n\) patches of \(\mathbf{X}\) have been sampled, the filled form of \(\mathbf{Z}\) is obtained. The concept of filling \(\mathbf{Z}\) is employed by PatchGD during the model training as well as the inference stage. To build an end-to-end CNN model, we add a small subnetwork comprising convolutional and fully-connected layers that processes the information contained in \(\mathbf{Z}\) and transforms it into a vector of \(c\) probabilities as desired for the task of classification. It is important to note that the cost of adding this small sub-network is generally negligible. The pipelines for model training and inference are shown in Figure 3(b). During training, model components \(\mathbf{\theta}_{1}\) as well as \(\mathbf{\theta}_{2}\) are updated. Based on a fraction of patches sampled from the input image, the respective encodings are computed using the latest state of \(\mathbf{\theta}_{1}\) and the output is used
Figure 3: Schematic representations of the pipelines demonstrating working of different components of the PatchGD process.
to update the corresponding entries in the already filled \(\mathbf{Z}\). The partially updated \(\mathbf{Z}\) is then used to further compute the loss function value and the model parameters are updated through back-propagation.
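To make the \(\mathbf{Z}\)-filling step concrete, the following is a minimal PyTorch sketch of our own; the function name, tensor layout, and chunking are assumptions for illustration, not the authors' implementation.

```python
import torch

def fill_z(f_theta1, image, p, s, chunk=16):
    # Z-filling: tile the image into m x n patches of size p, encode each
    # patch with the feature extractor f_theta1 (maps (k, C, p, p) patches
    # to (k, s) vectors), and write each vector into its 1 x 1 x s slot.
    C, M, N = image.shape
    m, n = M // p, N // p
    Z = torch.zeros(s, m, n)               # channel-first (s, m, n) layout
    coords = [(i, j) for i in range(m) for j in range(n)]
    with torch.no_grad():                  # filling carries no gradients
        for start in range(0, len(coords), chunk):
            batch = coords[start:start + chunk]
            patches = torch.stack([
                image[:, i*p:(i+1)*p, j*p:(j+1)*p] for i, j in batch
            ])
            for (i, j), feat in zip(batch, f_theta1(patches)):
                Z[:, i, j] = feat
    return Z
```

During training, only the slots corresponding to the currently sampled patches are recomputed with gradients enabled, as described next.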
### Mathematical formulation
In this section, we present a detailed mathematical formulation of the proposed PatchGD approach and describe its implementation for the model training and inference steps. For the sake of simplicity, we tailor the discussion towards training of a CNN model for the task of classification.
Let \(f_{\mathbf{\theta}}:\mathbb{R}^{M\times N\times C}\to\mathbb{R}^{c}\) denote a CNN-based model parameterized by \(\mathbf{\theta}\) that takes an input image \(\mathbf{X}\) of spatial size \(M\times N\) and \(C\) channels, and computes the probability of it to belong to each of the \(c\) pre-defined classes. To train this model, the following optimization problem is solved.
\[\underset{\mathbf{\theta}}{\text{min}}\ \ \mathcal{L}(f(\mathbf{\theta};\mathbf{X}), \mathbf{y}), \tag{1}\]
where \(\{\mathbf{X},\mathbf{y}\}\in\mathcal{D}\) refers to the data samples used to train the network and \(\mathcal{L}(\cdot)\) denotes the loss function associated with the training. Traditionally, this problem is solved in deep learning using the popular mini-batch gradient descent approach, where updates are performed at every step using only a fraction of the data samples. We present below the formulation of standard gradient descent followed by the formulation of our PatchGD method.
**Gradient Descent (GD).** Gradient descent in deep learning involves performing model updates using the gradients computed for the loss function over one or more image samples. With updates performed over one sample at a time, referred to as the stochastic gradient descent method, the model update at the \(i^{\text{th}}\) step can be mathematically stated as
\[\mathbf{\theta}^{(i)}=\mathbf{\theta}^{(i-1)}-\alpha\frac{\mathrm{d}\mathcal{L}}{ \mathrm{d}\mathbf{\theta}^{(i-1)}}, \tag{2}\]
where \(\alpha\) denotes the learning rate. However, performing model updates over one sample at a time leads to very slow convergence, especially because of the noise induced by the continuously changing descent direction. This issue is alleviated in the mini-batch gradient descent method, where at every step the model weights are updated using the average of gradients computed over a batch of samples, denoted here as \(\mathcal{S}\). Based on this, the update can be expressed as
\[\mathbf{\theta}^{(i)}=\mathbf{\theta}^{(i-1)}-\frac{\alpha}{N(\mathcal{S})}\sum_{ \mathbf{X}\in\mathcal{S}}\frac{\mathrm{d}\mathcal{L}^{(\mathbf{X})}}{\mathrm{d }\mathbf{\theta}^{(i-1)}} \tag{3}\]
and \(N(\mathcal{S})\) here denotes the size of the batch used. As can be seen in Eq. 3, if the size of the image samples in \(\mathcal{S}\) is very large, it leads to large memory requirements for the respective activations, and under limited compute availability, only small values of \(N(\mathcal{S})\), sometimes even just 1, fit into the GPU memory. This clearly demonstrates the limitation of the gradient descent method when handling large images. This issue is alleviated by our PatchGD approach, which we describe next.
**PatchGD.** As described in Section 3.1, PatchGD avoids model updates on an entire image sample in one go, rather it computes gradients using only part of the image and updates the model parameters. In this regard, the model update step of PatchGD can be stated as
\[\mathbf{\theta}^{(i,j)}=\mathbf{\theta}^{(i,j-1)}-\frac{\alpha}{k\cdot N(\mathcal{S}_{ i})}\sum_{\mathbf{X}\in\mathcal{S}_{i}}\sum_{p\in\mathcal{P}_{\mathbf{X},j}} \frac{\mathrm{d}\mathcal{L}^{(\mathbf{X},p)}}{\mathrm{d}\mathbf{\theta}^{(i,j-1)}}. \tag{4}\]
In the context of deep learning, \(i\) here refers to the index of the mini-batch iteration within a certain epoch. Further, \(j\) denotes the inner iterations, where at every inner iteration, \(k\) patches are sampled from the input image \(\mathbf{X}\) (denoted as \(\mathcal{P}_{\mathbf{X},j}\)) and the gradient-based updates are performed as stated in Eq. 4. Note that for any iteration \(i\), multiple inner iterations are run, ensuring that the majority of samples from the full set of patches obtained from the tiling of \(\mathbf{X}\) are explored.
In Eq. 4, \(\mathbf{\theta}^{(i,0)}\) denotes the initial model to be used to start running the inner iterations on \(\mathcal{S}_{i}\) and is equal to \(\mathbf{\theta}^{(i-1,\zeta)}\), the final model state after \(\zeta\) inner iterations of patch-level updates using \(\mathcal{S}_{i-1}\). For a more detailed understanding of the step-by-step model update process, please see Algorithm 1. As described earlier, PatchGD uses an additional sub-network that looks at the full latent encoding \(\mathbf{Z}\) for any input image \(\mathbf{X}\). Thus the parameter set \(\mathbf{\theta}\) is extended as \(\mathbf{\theta}=[\mathbf{\theta}_{1},\mathbf{\theta}_{2}]^{\intercal}\), where the base CNN model is \(f_{\mathbf{\theta}_{1}}\) and the additional sub-network is denoted as \(g_{\mathbf{\theta}_{2}}\).
Since the frequency of parameter updates affects the convergence process, and we have observed that performing a gradient update at every inner-iteration sometimes leads to poor convergence, we introduce gradient accumulation over \(\epsilon\) steps and update the model accordingly. Note that gradients are allowed to backpropagate only through those parts of \(\mathbf{Z}\) that are active at the \(j^{\text{th}}\) inner-iteration. During the inference phase, \(\mathbf{Z}\) is filled using the optimized \(f_{\mathbf{\theta}_{1}^{*}}\) as stated in Algorithm 2, and the filled version of \(\mathbf{Z}\) is then used to compute the class probabilities for input \(\mathbf{X}\) using \(g_{\mathbf{\theta}_{2}^{*}}\).
```
Input: Batch of input images \(\mathcal{X}\in\mathbb{R}^{B\times M\times N\times C}\), pre-trained feature extractor \(f_{\mathbf{\theta}_{1}}\), classifier head \(g_{\mathbf{\theta}_{2}}\), patch size \(p\), inner iterations \(\zeta\), patches per inner iteration \(k\), batch size \(B\), learning rate \(\alpha\), grad. acc. steps \(\epsilon\)
Initialize: \(\mathbf{Z}=\mathbf{0}^{B\times m\times n\times s};\ \mathbf{U}_{1}=\mathbf{0},\ \mathbf{U}_{2}=\mathbf{0}\)
\(\mathbf{Z}\leftarrow\mathbf{Z}\text{-filling}(\mathbf{X},f_{\mathbf{\theta}_{1}},p)\) for each \(\mathbf{X}\in\mathcal{X}\)
\(f_{\mathbf{\theta}_{1}}\leftarrow\texttt{start\_gradient}(f_{\mathbf{\theta}_{1}})\)
for \(j:1\text{ to }\zeta\) do
  for \(\mathbf{X}\) in \(\mathcal{X}\) do
    \(\{\mathcal{P}_{\mathbf{X},j},v\}=\texttt{patch\_sampler}(\mathbf{X},k)\), \(\mathcal{P}_{\mathbf{X},j}\in\mathbb{R}^{p\times p\times C\times k}\)
    \(\mathbf{z}=f_{\mathbf{\theta}_{1}}(\mathcal{P}_{\mathbf{X},j})\)
    \(\mathbf{Z}[v]=\mathbf{z}\)  // update the positional embeddings
    \(\mathbf{y}_{\text{pred}}=g_{\mathbf{\theta}_{2}}(\mathbf{Z})\)
    \(\mathcal{L}=\texttt{calculate\_loss}(\mathbf{y},\mathbf{y}_{\text{pred}})\)
    \(\mathbf{U}_{1}=\mathbf{U}_{1}+\mathrm{d}\mathcal{L}/\mathrm{d}\mathbf{\theta}_{1}\), \(\mathbf{U}_{2}=\mathbf{U}_{2}+\mathrm{d}\mathcal{L}/\mathrm{d}\mathbf{\theta}_{2}\)
  end for
  if \(j\,\%\,\epsilon=0\) then
    \(\mathbf{U}_{1}=\mathbf{U}_{1}/\epsilon\), \(\mathbf{U}_{2}=\mathbf{U}_{2}/\epsilon\)
    \(\mathbf{\theta}_{1}=\mathbf{\theta}_{1}-\alpha\mathbf{U}_{1}\)
    \(\mathbf{\theta}_{2}=\mathbf{\theta}_{2}-\alpha\mathbf{U}_{2}\)
    \(\mathbf{U}_{1}=\mathbf{0},\mathbf{U}_{2}=\mathbf{0}\)
  end if
end for
```
**Algorithm 1** Model Training for 1 iteration
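For concreteness, a hedged PyTorch sketch of one PatchGD iteration is given below; the sampler, the \((B,s,m,n)\) layout of \(\mathbf{Z}\), and all helper names are our assumptions, and the optimizer wiring is simplified relative to Algorithm 1.

```python
import torch

def patchgd_iteration(f1, g2, opt, images, labels, Z, p, k, zeta, eps_acc,
                      loss_fn=torch.nn.functional.cross_entropy):
    # One outer iteration: zeta inner steps, each refreshing k patch slots
    # per image; gradients flow only through the freshly written entries.
    B, _, m, n = Z.shape
    for j in range(1, zeta + 1):
        Zj = Z.detach().clone()            # older entries stay gradient-free
        for b in range(B):
            idx = torch.randperm(m * n)[:k]          # uniform patch sampler
            rows, cols = idx // n, idx % n
            patches = torch.stack([
                images[b, :, r*p:(r+1)*p, c*p:(c+1)*p]
                for r, c in zip(rows.tolist(), cols.tolist())
            ])
            Zj[b, :, rows, cols] = f1(patches).T     # refresh active slots
        loss = loss_fn(g2(Zj), labels)
        (loss / eps_acc).backward()        # accumulate over eps_acc steps
        if j % eps_acc == 0:
            opt.step()                     # joint update of theta_1, theta_2
            opt.zero_grad()
        Z = Zj                             # carry the latest encodings
    return Z.detach()
```

Here `opt` is assumed to hold the parameters of both `f1` and `g2`, mirroring the joint update of \(\mathbf{\theta}_{1}\) and \(\mathbf{\theta}_{2}\).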
## 4 Experiments
We demonstrate here the efficacy of PatchGD through multiple numerical experiments on two benchmark datasets comprising large images with features at multiple scales.
### Experimental setup
**Datasets.** For the experiments presented in this paper, we consider two datasets: UltraMNIST (Gupta et al., 2022) and Prostate cANcer graDe Assessment (PANDA) (Bulten et al., 2022). UltraMNIST is a classification dataset in which each sample comprises 3-5 MNIST digits of varying scales placed at random locations in the image, such that the sum of the digits lies between 0-9. The PANDA dataset comprises high-resolution histopathological images, and for this study, we consider a maximum image resolution of \(4096\times 4096\) pixels. Note that, unlike the aforementioned approaches, we do not make use of any segmentation masks for PANDA. Therefore, the complete task boils down to taking a high-resolution input image and classifying it into one of 6 categories based on the International Society of Urological Pathology (ISUP) grade groups. More details related to the datasets can be found in Appendix B.
**CNN models.** We consider two popular CNN architectures: ResNet50 (He et al., 2016) and MobileNetV2 (Sandler et al., 2018). ResNet50 is a popular network from the residual networks family and forms the backbone of several models used in a variety of computer vision tasks (such as object detection and tracking); we thus demonstrate the working of PatchGD primarily on this model. MobileNetV2 is a light-weight architecture commonly employed on edge devices, and it is of interest to see how it performs with large images under limited memory scenarios.
**Implementation details.** We use the same hyperparameters across our experiments for a fair comparison; exact details are stated in Appendix C. We report classification accuracy and, for the PANDA dataset, the quadratic weighted kappa (QWK). PyTorch is the framework of choice to implement both the baselines and PatchGD. We adopt 4 GB, 16 GB, and 24 GB memory constraints to mimic popular deep learning GPU memory limits. Latency is calculated on a 40 GB A100 GPU, completely filling the GPU memory.
### Results
**UltraMNIST classification.** The performance of PatchGD for UltraMNIST has already been shown in Figure 2, and more detailed results are presented in Tables 1 and 2. For both architectures, we see that PatchGD outperforms the standard gradient descent method (abbreviated as GD) by large margins. Our approach employs an additional sub-network \(g_{\mathbf{\theta}_{2}}\), and it could be argued that the gains reported in the paper are due to it. To examine this, we extend the base
CNN architectures used in GD and report the respective performance scores in Tables 1 and 2 as GD-extended.
For both architectures, we see that PatchGD outperforms GD as well as GD-extended by large margins. For ResNet50, the performance difference is even higher under a low memory constraint. At 4 GB, while GD seems unstable with a performance dip of more than 11% compared to the 16 GB case, our PatchGD approach is significantly more stable. For MobileNetV2, the difference between PatchGD and GD is even higher in the 16 GB case, clearly showing that PatchGD blends well even with light-weight models such as MobileNetV2. For MobileNetV2, we see that going from 16 GB to 4 GB, there is no drop in model performance, which demonstrates that MobileNetV2 can work well with GD even under low memory conditions. Nevertheless, PatchGD still performs significantly better. The underlying reason for this gain can partly be attributed to the fact that since PatchGD facilitates operating with partial images, the activations are small and more images per batch are permitted. We also observe that the performance scores of GD-extended are inferior even to those of GD. ResNet50 and MobileNetV2 are optimized architectures, and we speculate that the addition of plain convolutional layers to the head of the network is not well suited to them, which adversely affects the overall performance.
**Prostate Cancer Classification (PANDA).** Table 3 presents the results obtained on the PANDA dataset for three different image resolutions. For all experiments, we maximize the number of images per batch while ensuring that the memory constraint is not violated. For images of \(512\times 512\), we see that GD and PatchGD deliver approximately similar performance scores (for both accuracy and QWK) at the 16 GB memory limit. However, under the same memory constraint, when images of size \(2048\times 2048\) (2K) pixels are used, the performance of GD drops by approximately 10%, while our PatchGD shows a boost of 9% in accuracy. Two factors play a role in creating such a large gap between GD and PatchGD. First, due to the significantly increased activation size for higher-resolution images, GD faces a batch-size bottleneck, and only 1 image per batch is permitted. Note that to stabilize it, we also experimented with gradient accumulation across batches; however, it did not help. Alternatively, we performed hierarchical training, where the model trained at the lower resolution was used as the initial model for the higher resolution. To alleviate the issue of using only 1 image per batch, we considered a higher memory limit. Second, the receptive field of ResNet50, optimized for lower resolutions, is not suited to higher-resolution images, which leads to sub-optimal performance.
For increased batch size at 2K resolution, we also considered running quantized networks at half precision with increased memory (see Table 3). At half precision, the performance of GD improves; however, it is still significantly lower than that of PatchGD. A similar observation holds for 4K images, where PatchGD again performs better, and the performance improves further when a patch size of 256 is used. Clearly, from the results reported on the PANDA dataset, it is evident that PatchGD is significantly better than GD in terms of accuracy as well as QWK when it comes to handling large images in an end-to-end manner. We also report the latency of both methods at inference time, and it can be seen that PatchGD performs almost on par with GD. The reason is that, unlike GD, the activations produced by PatchGD are smaller, and the resulting gain in speed balances the slowness induced by patchwise processing of the images. Clearly, for applications that must handle large images while also achieving real-time inference, PatchGD could be an interesting direction to explore further.
**Additional study.** We demonstrated in the earlier experiments that PatchGD performs significantly better than its counterpart. We present here a brief study of some of the hyperparameters involved in PatchGD. Table 4 presents the influence of patch sampling on the overall performance of PatchGD. We vary the sampling fraction per inner-iteration as well as the fraction of samples considered in total for an image in a certain iteration. We observe that keeping the sampling fraction per inner-iteration small helps to achieve better accuracy. This is counter-intuitive, since smaller fractions provide less context of the image in one go. We speculate that, similar to mini-batch gradient
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Patch size & Memory (in GB) & Accuracy \\ \hline GD & - & 16 & 65.2 \\ GD-extended & - & 16 & 50.5 \\ PatchGD & 256 & 16 & **69.2** \\ GD & - & 4 & 53.6 \\ GD-extended & - & 4 & 52.5 \\ PatchGD & 256 & 4 & **63.1** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance scores for standard Gradient Descent and our PatchGD method obtained using ResNet50 architectures on the task of UltraMNIST classification with images of size \(512\times 512\).
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Patch size & Memory (in GB) & Accuracy \% \\ \hline GD & - & 16 & 67.3 \\ GD-extended & - & 16 & 64.3 \\ PatchGD & 256 & 16 & **83.7** \\ GD & - & 4 & 67.7 \\ GD-extended & - & 4 & 60.0 \\ PatchGD & 256 & 4 & **74.8** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance scores for standard Gradient Descent and our PatchGD method on the task of UltraMNIST classification with images of size \(512\times 512\) obtained using MobileNetV2 architecture.
descent, not using too large a patch batch size induces regularization noise, which in turn improves the convergence process. However, this aspect needs to be studied in more detail for a better understanding. We also observed that the fraction of the image seen in one overall pass in PatchGD does not generally affect the performance unless it is low. For lower fractions, it is hard for the model to build the global context, and the convergence is sub-optimal.
We have also briefly studied the influence of the gradient accumulation length parameter for PatchGD, and the results are reported in Table 6 of the appendices. We observed that performing a gradient-based model update per inner iteration leads to superior performance for the chosen experiment. However, the choice of \(\epsilon\) depends on the number of inner steps \(\zeta\): for large values of \(\zeta\), values greater than 1 are favored. For example, for the case of processing 2K-resolution images with a patch size of \(128\times 128\), \(\epsilon=\zeta\) worked well. An empirical relation between \(\zeta\) and \(\epsilon\) is still to be identified, and this is part of our future research work.
## 5 Discussion
**Hyperparameter optimization and fine-tuning.** PatchGD involves several hyperparameters, and their optimal combination is still to be identified. While we have demonstrated their influence through a few experiments, more clarity needs to be gained on the best number of inner-update steps to combine in gradient accumulation (\(\epsilon\)), on striking the right balance between patch size and the number of inner iterations for a given compute memory limit, as well as on choosing the right pretraining strategy. We have observed that using models trained with GD as the initial models in PatchGD can improve the overall performance. However, there are instances when model training with GD is not possible. In such scenarios, one could use low-resolution models trained with GD or even conventional pretrained models. Nevertheless, the effect of each of these choices needs to be thoroughly studied.
**Application to other tasks.** In this paper, we have focused on demonstrating the working of PatchGD on image classification tasks, in particular those where features exist at varying scales. However, this does not limit the applicability of our method to other problems. PatchGD can also be used on conventional classification problems, and we speculate that it could help to refine the receptive field of existing models; we discuss this in more detail later in the paper. Beyond classification, it is also straightforward to adapt this method to other tasks such as segmentation and object detection, among others, and we intend to cover them in an extended version of this study.
**Limitations.** This paper presented the foundational concept of PatchGD. Although we have demonstrated the efficacy of PatchGD through multiple numerical experiments, the overall investigation is still limited in terms of understanding the generalization and stability of the method. Another minor limitation is that since our approach looks only at a fraction of an image in one step, it is relatively slower than the standard gradient descent method. However, since the inference speed is almost the same, this issue creates a bottleneck only when real-time training is a priority.
**Conclusions.** In this paper, we have demonstrated that it is possible to handle large images with CNNs even when the available GPU memory is very limited. We presented Patch Gradient Descent (PatchGD), a novel CNN training strategy that performs model updates using only fractions of the image at a time, while also ensuring that it sees almost the full context over the course of multiple steps. We have demonstrated through multiple experiments the efficacy of
\begin{table}
\begin{tabular}{c c c c} \hline \hline Sampling \% & Max Sampled \% & Accuracy & QWK \\ \hline
50 & 100 & 42.3 & 0.538 \\
30 & 100 & 49.9 & 0.613 \\
10 & 100 & 53.9 & 0.627 \\
10 & 70 & 53.1 & 0.624 \\
10 & 50 & 53.9 & 0.622 \\
10 & 30 & 51.1 & 0.610 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Sampling ablation on the PANDA dataset. The memory limit is 16 GB; image size and patch size are 2048 and 128, respectively.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Method & Resolution & Patch Size & Sampling \% & Mem. Constraint (GB) & \# Parameters (M) & Latency (imgs/sec) & Accuracy \% & QWK \\ \hline GD & 512 & - & - & 16 & 23.52 & 618.05 & 44.4 & 0.558 \\ PatchGD & 512 & 128 & 30* & 16 & 26.39 & 521.42 & 44.9 & 0.576 \\ GD & 2048 & - & - & 16 & 23.52 & 39.04 & 34.8 & 0.452 \\ PatchGD & 2048 & 128 & 10 & 16 & 26.40 & 32.52 & 53.9 & 0.627 \\ GD-fp16 & 2048 & - & - & 24 & 23.52 & 39.04 & 50.6 & 0.658 \\ PatchGD-fp16 & 2048 & 128 & 10 & 24 & 26.40 & 32.52 & 56.1 & 0.662 \\ GD-fp16 & 4096 & - & - & 24 & 23.52 & 9.23 & 50.1 & 0.611 \\ PatchGD-fp16 & 4096 & 128 & 10 & 24 & 26.41 & 8.09 & 53.5 & 0.667 \\ PatchGD-fp16 & 4096 & 256 & 10 & 24 & 26.40 & 9.62 & 55.6 & 0.672 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance scores obtained using ResNet50 on the PANDA dataset for Gradient Descent (GD) and Patch Gradient Descent (PatchGD). In the case of image size 512, 10% sampling leads to only one patch; hence 30% of patches are chosen.
PatchGD in handling large images as well as operating under low memory conditions, and in all scenarios, our approach outperforms the standard gradient descent by significant margins. We hope that the details of the method as well as the experimental evidence presented in the paper sufficiently justify the significance of PatchGD in making existing CNN models work with large images without facing the bottleneck of compute memory.
**Future work.** This paper has established the foundational concept of patch gradient descent to enable training CNNs using very large images. The results as well as insights presented in the paper open doors to several novel secondary research directions that could be interesting in terms of improving the efficacy as well as the acceptance of the presented method in a broader scientific community. Examples of these include extending PatchGD to work on gigapixel images at small compute memory, using PatchGD for enhanced receptive field on standard computer vision tasks, and lastly to couple PatchGD with transformers. Details on the associated challenges and possible modifications are further discussed in Appendix A.
**Acknowledgement** We would like to thank Texmin Foundation for the financial support provided through grant PSF-IH-1Y-022 to support this work.
|
2310.20552 | Privacy-preserving design of graph neural networks with applications to
vertical federated learning | The paradigm of vertical federated learning (VFL), where institutions
collaboratively train machine learning models via combining each other's local
feature or label information, has achieved great success in applications to
financial risk management (FRM). The surging developments of graph
representation learning (GRL) have opened up new opportunities for FRM
applications under FL via efficiently utilizing the graph-structured data
generated from underlying transaction networks. Meanwhile, transaction
information is often considered highly sensitive. To prevent data leakage
during training, it is critical to develop FL protocols with formal privacy
guarantees. In this paper, we present an end-to-end GRL framework in the VFL
setting called VESPER, which is built upon a general privatization scheme
termed perturbed message passing (PMP) that allows the privatization of many
popular graph neural architectures. Based on PMP, we discuss the strengths and
weaknesses of specific design choices of concrete graph neural architectures
and provide solutions and improvements for both dense and sparse graphs.
Extensive empirical evaluations over both public datasets and an industry
dataset demonstrate that VESPER is capable of training high-performance GNN
models over both sparse and dense graphs under reasonable privacy budgets. | Ruofan Wu, Mingyang Zhang, Lingjuan Lyu, Xiaolong Xu, Xiuquan Hao, Xinyi Fu, Tengfei Liu, Tianyi Zhang, Weiqiang Wang | 2023-10-31T15:34:59Z | http://arxiv.org/abs/2310.20552v1 | # Privacy-preserving design of graph neural networks with applications to vertical federated learning
###### Abstract
The paradigm of vertical federated learning (VFL), where institutions collaboratively train machine learning models via combining each other's local feature or label information, has achieved great success in applications to financial risk management (FRM). The surging developments of graph representation learning (GRL) have opened up new opportunities for FRM applications under FL via efficiently utilizing the graph-structured data generated from underlying transaction networks. Meanwhile, transaction information is often considered highly sensitive. To prevent data leakage during training, it is critical to develop FL protocols with _formal privacy guarantees_. In this paper, we present an end-to-end GRL framework in the VFL setting called VESPER, which is built upon a general privatization scheme termed _perturbed message passing (PMP)_ that allows the privatization of many popular graph neural architectures. Based on PMP, we discuss the strengths and weaknesses of specific design choices of concrete graph neural architectures and provide solutions and improvements for both dense and sparse graphs. Extensive empirical evaluations over both public datasets and an industry dataset demonstrate that VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
## 1 Introduction
In recent years, there has been an increasing interest in adopting modern machine learning paradigms to the area of financial risk management (FRM) [31]. The most crucial task in operational risk scenarios like fraud detection is identifying risky identities based on the behavioral data collected from the operating financial platform [4; 24]. For institutions like commercial banks and online payment platforms, the most important source of behavior information is the _transaction records_ between users, making _transaction networks_ (with users as nodes and transactions as edges) a direct and appropriate data model. To exploit the potential of transaction networks in a machine learning context, recent approaches [26; 47] have been exploring the adoption of graph representation learning (GRL) [16] as a principled way of incorporating structural information contained in transaction networks into the learning process. The family of graph neural networks in the message passing form [13; 48] offers a powerful yet scalable solution to GRL, and has become the prevailing practice in industry-scale graph learning [52].
Despite its convincing performance, high-quality network data are not always available to financial institutions. It is, therefore, of great interest for institutions to learn GRL models _collaboratively_ while complying with regulatory constraints at the same time. The technique of federated learning
(FL) [20, 49] provides a recipe for such scenarios, with participating institutions (hereafter abbreviated as _parties_) exchanging intermediate results instead of raw data. Depending on the specific form of collaboration, FL protocols are generally divided into horizontal federated learning (HFL), where participants aggregate their locally trained models to obtain a strong global model, and vertical federated learning (VFL) where participants are able to align the identifiers of modeling entities and train a model that efficiently combines feature or label information that are distributed among different parties. VFL is particularly useful when training a (supervised) model is not possible based on information of a single party, i.e., each party holds only feature or label data, and has attracted significant attention in applications to FRM [28]. While ordinary FL paradigms avoid the transmission of local raw data, they typically lack a formal guarantee of privacy [20, Chapter 4]. Moreover, recent studies have reported successful attacks targeting individual privacy against FL protocols [54, 50, 19, 9, 8]. As transaction records are widely considered extremely sensitive personal information, it is thus critical to establish FL applications in FRM with rigorous privacy guarantees.
Differential privacy (DP) [11] is the state-of-the-art approach to addressing information disclosure; it injects algorithm-specific random noise to mask the participation of any individual. The adoption of DP as the privacy model for FL is now under active development, with most of the applications appearing in HFL over independently and identically distributed (i.i.d.) data through the lens of optimization [20]. However, discussions on applying DP to VFL remain nascent [3, 53, 39]. The situation becomes even more complicated in VFL over graph-structured data, since the right notions of (differential) privacy on graphs are semantically different from those in the i.i.d. case [35, 22]. To the best of our knowledge, the only work so far that provides a meaningful DP guarantee under VFL over graphs is the GAP model [39], which requires three stages of training. Meanwhile, a notable aspect of GRL is that the structure of the underlying graph, i.e., whether the graph is dense or sparse, might have a significant influence on the performance of the graph neural model, especially when the aggregation process involves noisy perturbations. This phenomenon was overlooked in previous studies.
In this paper, we discuss private FL over graph-structured data under the task of node classification in the vertical setup with edge DP [35] chosen as the privacy model. We first develop a general privatization scheme termed _perturbed message passing (PMP)_ that produces message-passing procedures over graphs that are guaranteed to satisfy edge DP constraints. Next, we discuss the influence of the underlying graph's degree profiles on the utility of specific design choices of PMP, using two representative graph aggregation schemes, namely GIN [48] and GCN [23], and develop further improvements of PMP that better handles sparse graphs under the GCN aggregation scheme. Finally, we integrate the developments of PMP and its variants into a VFL architecture called VESPER based on the SplitNN framework [14], and conducted extensive empirical evaluations over both public and industrial datasets covering dense and sparse graphs. We summarize our contributions as follows:
* We propose PMP, a general framework for designing differentially private message-passing procedures. PMP enables the privatization of many popular graph neural network architectures. The privacy guarantee of PMP is formally analyzed with new privacy amplification results under uniform neighborhood sampling.
* We discuss two representative design choices under the PMP framework, GIN and GCN, and discover the fact that the utility of the privatized GNN model may be affected by the _degree profile_ of the input graph. To better accommodate varying graph structures, we develop the truncated message passing framework under the base model of GCN through properly tuning the hyper-parameter that reduces noise scale at the cost of learning less structural information, which is beneficial when the input graph is _sparse_.
* We derive an end-to-end VFL learning framework operating over graph-structured data called VESPER, which is efficient in computation and communication. A thorough experimental study demonstrates that VESPER achieves better privacy-utility trade-off over previously proposed models and is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
## 2 Methodology
### Preliminaries
We focus on the node classification task over a static, undirected graph \(G=(V,E)\) with node size \(N=|V|\), node feature \(X=\{x_{v}\}_{v\in V}\) and node labels \(Y=\{y_{v}\}_{v\in V_{T}}\) where \(V_{T}\subseteq V\) is the set of training nodes with \(N_{T}=|V_{T}|\). Throughout this article, we will assume the graph of interest to be degree bounded, i.e.,
\[\max_{G}\max_{v\in G}d_{v}\leq D \tag{1}\]
for some \(D>1\). In this paper, we will be interested in the setup where the graph data \(G\) and label information are distributed over two distinct parties. Specifically, suppose there are two parties, A (Alice) and B (Bob), where A holds the graph data \(G\) as well as the node feature \(X\) and B holds the label collection \(Y\), both indexed by node identifiers that are known to both sides (i.e., \(V_{T}\) is known to both party A and party B). We consider a representative federated learning paradigm that A and B collaboratively train a graph representation learning model via utilizing the panoply of graph neural networks [13], which could be regarded as a special case of vertical federated learning (VFL) [49]. Under VFL protocols, party A and party B iteratively exchange intermediate outputs depending on the specific training algorithm chosen. A main concern in VFL [20, Chapter 4] is, therefore, whether the exchanging process satisfies formal _privacy_ guarantees. Before elaborating on privacy protection issues, we first state the threat model in our context.
**Threat model** We adopt the following threat model in this paper: In the training stage, label party B is curious about the adjacency information (i.e., the existence of some edges) in the data party A. The data party A is assumed to be benign, with both parties strictly obeying the chosen VFL protocol. 1 In other words, the goal of privacy protection is to prevent the _semi-honest_ adversary (party B) from inferring the edge membership that is only known to party A.
Footnote 1: The assumption of a harmless party A might be relaxed to a curious onlooker that tries to infer party B’s label information. We discuss related extensions in section D.
Differential privacy [11] is now the _de facto_ choice of privacy protection paradigm against membership inference adversaries. As an appropriate solution concept in the current setup, we introduce the edge-level differential privacy model (hereafter abbreviated as Edge DP).
**Definition 1** (Edge-level differential privacy(Edge DP)).: For a (randomized) graph-input mechanism \(\mathcal{M}\) that maps graphs to some output space \(\mathcal{S}\) and two non-negative numbers \(\epsilon\) and \(\delta\), the mechanism is \((\epsilon,\delta)\)-Edge DP if for any subset \(S\) (or more rigorously defined as Borel measurable sets) of the output space, the following holds uniformly for any two possible adjacent graphs \((G,G^{\prime})\):
\[\mathbb{P}[\mathcal{M}(G)\in S]\leq e^{\epsilon}\mathbb{P}[ \mathcal{M}(G^{\prime})\in S]+\delta, \tag{2}\]
where we define two graphs \(G\) and \(G^{\prime}\) as being adjacent if \(G\) could be edited into \(G^{\prime}\) via adding or removing a single edge.
Regarding the capability of the adversary adopted in this paper, a VFL protocol satisfying Edge DP with a reasonable \(\epsilon\) level implies that, based on all the intermediate outputs exchanged between party A and party B, no membership inference algorithm can make a confident guess, in a probabilistic sense, about the existence of any specific edge, thereby offering strong privacy protection. Most contemporary differentially private machine learning algorithms involve sequentially applying DP procedures to intermediate learning steps [1], with the privacy level of
Figure 1: A concise pictorial description of the VESPER framework. We use solid arrows to depict the dataflow of forward computations and use dashed arrows to depict the dataflow of backward computations.
the entire training procedure obtained via composition theorems [11; 21]. In this paper, we choose the composition framework of analytical moment accountant (AMA) [44] that exploits the idea of Renyi DP [33], which we introduce below in our graph learning context:
**Definition 2** (Edge-level Renyi differential privacy (Edge RDP)).: Sharing notations with definition 1, the mechanism \(\mathcal{M}\) is \((\alpha,\epsilon(\alpha))\)-Renyi differentially private for some \(\alpha>1\) and \(\epsilon(\alpha)\geq 0\), if for any two possible adjacent graphs \((G,G^{\prime})\), the \(\alpha\)-Renyi divergence of the induced probability distributions of the random variables \(\mathcal{M}(G)\) and \(\mathcal{M}(G^{\prime})\) is bounded by \(\epsilon(\alpha)\):
\[D_{\alpha}\left(\mathcal{M}(G)||\mathcal{M}(G^{\prime})\right)\leq\epsilon( \alpha), \tag{3}\]
with the definition of \(\alpha\)-Renyi divergence \(D_{\alpha}\left(\cdot||\cdot\right)\) presented in appendix A.
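For completeness, the standard definition for distributions \(P\) and \(Q\) reads \(D_{\alpha}\left(P||Q\right)=\frac{1}{\alpha-1}\log\mathbb{E}_{x\sim Q}\left[\left(P(x)/Q(x)\right)^{\alpha}\right]\).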
To develop privacy-preserving learning algorithms under the AMA framework, we first design mechanisms that satisfy RDP guarantee in each step, then use standard composition results of RDP [33] to obtain the privacy level of the learning procedure. Finally, we apply the conversion rule in [2] to convert it back to \((\epsilon,\delta)\)-DP for reporting.
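As a rough illustration of this compose-then-convert pipeline, the sketch below sums per-step Gaussian-mechanism RDP costs over training steps and converts the total to \((\epsilon,\delta)\)-DP with the classic rule \(\epsilon=\epsilon(\alpha)+\log(1/\delta)/(\alpha-1)\); the paper relies on the tighter conversion of [2], so the numbers here are loose upper bounds, and all constants are illustrative.

```python
import numpy as np

def rdp_to_dp(total_rdp, alphas, delta=1e-5):
    # Classic RDP -> (eps, delta)-DP conversion, optimized over alpha.
    return min(e + np.log(1.0 / delta) / (a - 1.0)
               for e, a in zip(total_rdp, alphas))

alphas = np.arange(2, 64, dtype=float)
S, theta, T = np.sqrt(2.0), 8.0, 1000      # sensitivity, noise scale, steps
per_step = alphas * S**2 / (2 * theta**2)  # Gaussian-mechanism RDP curve
total_rdp = T * per_step                   # RDP composes by summation
print(rdp_to_dp(total_rdp, alphas))
```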
**Message passing GNNs with stochastic training** The backbone of our privacy-preserving training framework is the graph neural network model in the message passing form [13]. We define the GNN of interest to be a map from the space of graphs to a node embedding matrix with embedding dimension \(d\): \(f:\mathcal{G}\mapsto\mathbb{R}^{N\times d}\), or \(H:=\{h_{v}\}_{v\in V}=f(G)\). For an \(L\)-layer GNN, let \(h_{v}^{(0)}=g(x_{v})\) be the input encoding of node \(v\), which could be either \(x_{v}\) or some encoding based on \(x_{v}\). We assume the following recursive update rule for \(1\leq l\leq L\) and \(v\in V\):
\[h_{v}^{(l)}=\sigma\left(\widetilde{h}_{v}^{(l)}\right),\quad\widetilde{h}_{v}^ {(l)}=\omega_{v}W_{1}^{(l)}h_{v}^{(l-1)}+\sum_{u\in N(v)}\beta_{uv}W_{2}^{(l)}h _{u}^{(l-1)}, \tag{4}\]
with \(\mathbf{\omega}:=\{\omega_{v}\}_{v\in V}\in\mathbb{R}^{N}\) and \(\mathbf{\beta}:=\{\beta_{uv}\}_{u,v\in V\times V}\in\mathbb{R}^{N\times N}\) model-dependent coefficients, \(\sigma\) a parameter-free nonlinear function, and \(\mathbf{W}=(W_{1}^{(1)},\dots,W_{1}^{(L)},W_{2}^{(1)},\dots,W_{2}^{(L)})\) the collection of learnable parameters. For any matrix \(W\), we denote by \(\|W\|_{\text{op}}\) the operator norm of the matrix (i.e., its largest singular value). In this paper, we assess two representative instantiations of the protocol (4): the GIN model [48] with \(\omega_{v}\equiv\beta_{uv}\equiv 1,\forall u,v\in V\), and the GCN model [23] with \(\omega_{v}=\frac{1}{d_{v}+1}\) and \(\beta_{uv}=\frac{1}{\sqrt{d_{u}+1}\sqrt{d_{v}+1}}\). For simplicity, we additionally let the nonlinearity be the ReLU function and set \(W_{1}^{(l)}=W_{2}^{(l)}=W^{(l)},1\leq l\leq L\).
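To make the two instantiations concrete, the following PyTorch sketch (ours; it assumes an undirected edge list stored with both directions and a float degree vector) implements one round of Eq. (4) with either choice of coefficients.

```python
import torch

def mp_layer(H, edge_index, deg, W, variant="gin"):
    # H: (N, d) embeddings; edge_index: (2, E) with both edge directions;
    # deg: (N,) float node degrees; W: (d_out, d) shared weight matrix.
    src, dst = edge_index
    if variant == "gin":        # omega_v = beta_uv = 1
        omega = torch.ones_like(deg)
        beta = torch.ones(src.shape[0])
    else:                       # GCN: omega_v = 1/(d_v+1), beta_uv = 1/sqrt((d_u+1)(d_v+1))
        omega = 1.0 / (deg + 1)
        beta = 1.0 / torch.sqrt((deg[src] + 1) * (deg[dst] + 1))
    agg = torch.zeros_like(H).index_add_(0, dst, beta.unsqueeze(1) * H[src])
    return torch.relu((omega.unsqueeze(1) * H + agg) @ W.T)
```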
Applying the message passing updates (4) may become computationally prohibitive for large input graphs, which are frequently encountered in industrial scenarios. To enable scalable GRL, the prevailing practice is to use graph sampling methods [15] and adopt **stochastic training of graph neural networks**. In this paper, we investigate the simple and effective scheme of uniform neighborhood sampling [15; 7], with the maximum number of neighbors sampled in each layer set to the maximum degree \(D\). Aside from its computational benefits, it has been observed [1; 32] that stochastic training with a low sampling ratio over large datasets is crucial to training high-utility differentially private machine learning models with reasonably small privacy budgets, which has also recently been verified in the case of differentially private graph learning [7; 39].
### Perturbed message passing
A notable fact about the message-passing protocol (4) is that it uses the aggregation strategy of _weighted summation_, thereby allowing standard additive perturbation mechanisms like the Laplace mechanism or the Gaussian mechanism that are prevalent in the design of differentially private algorithms [11]. Motivated by this fact, we propose a straightforward solution that privatizes message-passing GNNs in a _layer-wise_ fashion, named _perturbed message passing (PMP)_, which adds layer-wide Gaussian noise with an additional normalization step that controls sensitivity. We present the pseudo-code of PMP with neighborhood sampling in algorithm 1. Next, we discuss the privacy guarantee of algorithm 1. To state our main result, we first define the right notion of sensitivity in our context:
**Definition 3** (Edge sensitivity).: Denote \(G^{\prime}\) as the adjacent graph via removing the edge \((u^{*},v^{*})\) from \(G\), and let \(\widetilde{h}_{v}\) and \(\widetilde{h}^{\prime}_{v}\) be the outputs of node \(v\) generated via some \(1\)-layer GNN protocol under graph \(G\) and \(G^{\prime}\) without nonlinearity, then we define the (\(\ell_{2}\)-) _edge sensitivity_ as:
\[\mathcal{S}=\max_{G,G^{\prime}}\sqrt{\sum_{v\in V}\|\widetilde{h}_{v}- \widetilde{h}^{\prime}_{v}\|_{2}^{2}}. \tag{5}\]
The following theorem quantifies the privacy guarantee of algorithm 1:
**Theorem 2.1** (RDP guarantee).: _Let \(\mathbf{H}_{L}\) be the released outputs with input a minibatch of \(B\) subgraphs produced by uniform neighborhood sampling for \(L\) layers with a maximum number of \(D\) neighbors sampled in each layer. Define \(\epsilon(\alpha):=\frac{\alpha\sum_{l=1}^{L}\mathcal{S}_{l}^{2}}{2\theta^{2}}\), then \(\mathbf{H}_{L}\) is \((\alpha,\epsilon_{\gamma}(\alpha))\)-RDP for any \(\alpha>1\), where \(\gamma=1-\binom{N_{T}-\frac{2(D^{L}-1)}{D-1}}{B}\Big{/}\binom{N_{T}}{B}\) and_
\[\epsilon_{\gamma}(\alpha)\leq\frac{1}{\alpha-1}\log\left(1+ \gamma^{2}\binom{\alpha}{2}\min\left(4\left(e^{\epsilon(2)}-1\right),\epsilon( 2)\min\left(2,\left(e^{\epsilon(\infty)}-1\right)^{2}\right)\right)\right. \tag{6}\] \[+\left.\sum_{j=3}^{\infty}\gamma^{j}\binom{\alpha}{j}e^{(j-1) \epsilon(j)}\min\left(2,\left(e^{\epsilon(\infty)}-1\right)^{j}\right)\right)\]
Theorem 2.1 provides a principled way of analyzing the privacy of privatized GNN models using algorithm 1, which boils down to computing the edge sensitivity of the underlying message passing protocol. However, sensitivity computations are usually conducted in a _worst-case_ manner, resulting in unnecessarily large noise levels and significant utility loss. Therefore, it is valuable to explore the utility of concrete PMP models and their relationships with the underlying input graph. To begin our expositions, we analyze the GIN model in the following section.
```
0: Graph \(G=(V,E)\), input encodings \(\{h_{v}^{(0)}\}_{v\in V}\), number of message passing rounds \(L\), GNN spec \((\omega,\mathbf{\beta},\sigma)\), noise scale \(\theta\), GNN parameter \(\mathbf{W}\), batch size \(B\), maximum degree \(D\).
1: Sample a random batch of root nodes \(v_{1},\ldots,v_{B}\).
2: Apply an \(L\)-layer neighborhood sampler with each layer sampling at most \(D\) nodes with roots \(v_{1},\ldots,v_{B}\), obtaining a batch of \(B\) subgraphs \((G_{v_{1}}^{(L)},\ldots,G_{v_{B}}^{(L)})\).
3: Combine \((G_{v_{1}}^{(L)},\ldots,G_{v_{B}}^{(L)})\) into a single subgraph \(G_{B}^{(L)}\). Additionally, overload the notation \(N(v)\) for the neighborhood of node \(v\) with respect to \(G_{B}^{(L)}\).
4: Set \(h_{v}^{(0)}=\frac{h_{v}^{(0)}}{\left\|h_{v}^{(0)}\right\|_{2}}\) for all \(v\in G_{B}^{(L)}\)
5:for\(l\in\{1,\ldots,L\}\)do
6:for\(v\in G_{B}^{(L)}\)do
7: Compute the linear update \(\widetilde{h}_{v}^{(l)}=\omega_{v}W_{1}^{(l)}h_{v}^{(l-1)}+\sum_{u\in N(v)} \beta_{uv}W_{2}^{(l)}h_{u}^{(l-1)}\).
8: Do additive perturbation, \(h_{v}^{(l)}=\sigma(\widetilde{h}_{v}^{(l)}+N(0,\theta^{2}))\)
9: Normalize \(h_{v}^{(l)}=\frac{h_{v}^{(l)}}{\left\|h_{v}^{(l)}\right\|_{2}}\) return A list of all layers' embedding matrices \(\mathbf{H}_{L}=(H^{(1)},\ldots,H^{(L)})\), with \(H^{(l)}=\{h_{v}^{(l)}\}_{v\in G_{B}^{(L)}},1\leq l\leq L\).
```
**Algorithm 1** PMP with neighborhood sampling
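For illustration, steps 7-9 of algorithm 1 for a single layer might look as follows in PyTorch; this is our sketch under the same assumptions as the previous snippet, not the authors' implementation.

```python
import torch

def pmp_layer(H, edge_index, deg, W, theta):
    # One PMP layer with GCN coefficients: linear update (step 7), Gaussian
    # perturbation with scale theta (step 8), and row-wise L2 normalization
    # (step 9) so the per-layer sensitivity analysis applies to the input
    # of the next layer.
    src, dst = edge_index
    beta = 1.0 / torch.sqrt((deg[src] + 1) * (deg[dst] + 1))
    agg = torch.zeros_like(H).index_add_(0, dst, beta.unsqueeze(1) * H[src])
    h = ((1.0 / (deg + 1)).unsqueeze(1) * H + agg) @ W.T
    h = torch.relu(h + theta * torch.randn_like(h))
    return h / h.norm(dim=1, keepdim=True).clamp_min(1e-12)
```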
### Analysis of GIN and the challenge of sparse graphs
We start with the following proposition:
**Proposition 1**.: _Under the GIN model, the edge sensitivity is bounded from above by \(\mathcal{S}_{l}^{\text{GIN}}\leq\sqrt{2}\|W^{(l)}\|_{\text{op}}\) for each \(1\leq l\leq L\)._
**Advantage of layer-wise perturbations** According to proposition 1, the edge sensitivity of GIN is independent of the input graph's maximum degree upper bound \(D\), which is essentially a direct consequence of the fact that for a 1-layer message passing procedure, adding or removing one edge affects at most two nodes' output embeddings. As a consequence, the privacy cost scales linearly with the number of message-passing layers in the Renyi DP framework, thereby offering a better privacy-utility trade-off than algorithms that do the perturbation only in the final layer [53], whose privacy cost may scale exponentially with \(D\).
**Effectiveness and challenges of summation pooling** It has been observed in previous works [39] that aggregation perturbation with sum pooling works well on graphs with a large average degree. Intuitively, this phenomenon could be understood as keeping a high "signal-to-noise ratio (SNR)" during the aggregation process: For nodes with large degrees, the noise scale becomes relatively small with respect to the summation of incoming messages. Therefore if high-degree nodes are prevalent in the underlying graph, the utility loss during aggregation is reasonably controlled for most nodes. However, realistic graph data might not have large average degrees. For example, transaction networks in FRM scenarios are usually sparse, including many nodes with degrees smaller than \(5\) or even being singular (i.e., of degree \(0\)). Consequently, the SNR of sparse networks makes it harder for summation pooling to maintain decent utility, which will be further verified in section 3.
### Improvements of PMP in the GCN model
As discussed in the previous section, the degree profile of the input graph may affect the utility of PMP-privatized GNNs when the underlying aggregation follows the summation pooling scheme. It is therefore of interest to explore aggregation schemes that are more appropriate when the input graph is sparse. On first thought, we may expect aggregation schemes like mean pooling or GCN pooling to have smaller sensitivities. However, such sensitivity reduction does NOT hold in a worst-case analysis: Just think of nodes with degree \(1\), then it is not hard to check that mean pooling or GCN pooling behaves similarly to summation pooling. The primary issue with worst-case analysis is that the resulting sensitivity is determined by extremely _low-degree_ nodes. Inspired by this phenomenon, we seek improvements by first deriving lower sensitivity with an extra requirement on a _degree lower bound_, and then relax the requirement via introducing a modified protocol. We start with the following observation:
**Proposition 2**.: _Assume all the possible input graphs have a minimum degree larger or equal to \(D_{\text{min}}\), or_
\[\min_{G}\min_{v\in G}d_{v}\geq D_{\text{min}}>1. \tag{7}\]
_Then for the GCN model, the edge sensitivity of the \(l\)-th layer \(\mathcal{S}_{l}^{\text{GCN}}\) is bounded from above by a function \(\eta_{l}(D_{\text{min}})\), defined as:_
\[\eta_{l}(D_{\text{min}})=\sqrt{2}\left(\frac{1-1/D_{\text{min}}}{2D_{\text{min }}}+\frac{1}{D_{\text{min}}(D_{\text{min}}+1)}+\frac{1}{D_{\text{min}}+1} \right)\|W^{(l)}\|_{\text{op}}. \tag{8}\]
Proposition 2 implies that the edge sensitivity of the GCN model shrinks significantly if the underlying graph has a reasonably large minimum degree, which results in a significantly reduced noise scale and hence improved utility. However, the minimum degree assumption (7) is impractical, since most realistic graph data contain a large number of nodes with small degrees. To circumvent the impracticality of assumption (7) while still reducing the noise scale in the GCN model, we propose a modification of the basic message passing algorithm 1 called _truncated message passing_. The idea of truncated message passing is to block all incoming messages unless the size of the receiver node's neighborhood is larger than or equal to \(D_{\text{min}}\), which is treated as a hyperparameter. For nodes with degrees lower than \(D_{\text{min}}\), the output embedding is instead produced by an MLP with perturbation that does not involve any edge information. A detailed version is provided in algorithm 2 in appendix F. Consequently, it is straightforward to show that the differential privacy guarantee of the resulting algorithm operating on any graph matches the privacy level of the perturbed GCN (produced by algorithm 1) operating only on graphs satisfying the minimum degree assumption.
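A minimal sketch of the truncated rule is given below; `pmp_aggregate` and `mlp` are assumed stand-ins for the perturbed GCN-style aggregation and the edge-free MLP branch, respectively, and the final nonlinearity is assumed to be a ReLU.

```python
import numpy as np

def truncated_mp_layer(H, neighbors, D_min, pmp_aggregate, mlp, theta, rng):
    """Sketch of truncated message passing (cf. algorithm 2, appendix F).

    pmp_aggregate(H, nbrs, v) and mlp(h) are assumed callables for the
    perturbed GCN aggregation and the edge-free MLP branch.
    """
    n, d = H.shape
    H_out = np.empty_like(H)
    for v in range(n):
        if len(neighbors[v]) >= D_min:
            # Degree large enough: run the lower-sensitivity GCN-style PMP.
            H_out[v] = pmp_aggregate(H, neighbors[v], v)
        else:
            # Low-degree node: block incoming messages and use a perturbed
            # MLP on the node's own embedding (touches no edge information).
            H_out[v] = np.maximum(mlp(H[v]) + rng.normal(0.0, theta, size=d), 0.0)
    return H_out
```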
**How to choose \(D_{\text{min}}\)?** To maintain the same privacy level under the truncated message passing algorithm, one may reduce the noise scale \(\theta\) at the cost of raising the minimum degree hyperparameter \(D_{\text{min}}\). On the one hand, reducing the noise scale significantly improves the utility of the message-passing procedure. On the other hand, raising \(D_{\text{min}}\) may prevent a non-negligible proportion of nodes from learning structural information. Properly adjusting \(D_{\text{min}}\) may therefore achieve a better privacy-utility trade-off in the GCN model. In practice, one may choose \(D_{\text{min}}\) based on prior knowledge about the degree distribution of the underlying graph, or by inspecting a private release of its degree distribution, which can be obtained efficiently using the Laplace mechanism [11].
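As an illustration of the last point, the following sketch releases a degree histogram with the Laplace mechanism. The L1 sensitivity value is an assumption stated in the comments (under edge-level DP, one edge moves two nodes across at most two buckets each), not a constant taken from the paper.

```python
import numpy as np

def private_degree_histogram(degrees, bins, eps, sensitivity, rng):
    """Release a noisy degree histogram via the Laplace mechanism.

    sensitivity is the L1 sensitivity of the histogram under the chosen
    DP model; under edge-level DP, one added/removed edge shifts two
    nodes each across at most two buckets, giving sensitivity 4 (an
    assumption stated here for illustration).
    """
    counts, _ = np.histogram(degrees, bins=bins)
    noisy = counts + rng.laplace(0.0, sensitivity / eps, size=counts.shape)
    return np.maximum(noisy, 0.0)  # clip negatives for readability

# Example: an (0.1, 0)-DP release, as in the degree analysis of section 3.
rng = np.random.default_rng(0)
degrees = rng.integers(0, 60, size=100_000)       # placeholder degrees
hist = private_degree_histogram(degrees, bins=[0, 1, 2, 5, 10, 50, 10**9],
                                eps=0.1, sensitivity=4, rng=rng)
```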
### VESPER: an end-to-end learning framework
In previous sections, we have established the PMP framework for differentially private graph representation learning. Now, under the vertically federated learning setup described in section 2.1, we propose an end-to-end architecture inspired by the SplitNN paradigm [14] and based on the PMP framework, named **V**ertically private **S**plit GNN with **PER**turbed message passing (**VESPER**). The VESPER architecture contains three main components: Encoder, Private representation extractor (PRE), and Decoder.
**Encoder** The encoder module maps input node features into a \(d\)-dimensional representation vector, with a simple default choice being an MLP. Note that for node features with additional structural patterns (e.g., sequence data), we may use a more tailored encoder architecture as long as it does not involve edge information. The encoder model is physically stored in party A.
**Private representation extractor** The PRE module takes as input the node embeddings produced by the encoder and a batch of \(B\) subgraphs produced by a neighborhood sampler. The output representation of PRE is computed using a specific type of PMP mechanism such as PMP-GIN or PMP-GCN. The PRE module is physically stored in party A. The output of PRE is a tensor of shape \(B\times d\times L\), with \(d\) and \(L\) being the dimension of the graph representation and the number of message-passing layers, respectively. The outputs are transmitted from party A to party B.
**Decoder** The decoder module is physically stored in party B; it decodes the received node embeddings produced by PRE into the final prediction of VESPER, with its structure depending on the downstream task (e.g., classification, regression, ranking). We test two types of decoder architectures in our implementation of VESPER. The first one concatenates the node embeddings of all layers and applies an MLP, which we call the CONCAT decoder. The second one treats the node embeddings as a sequence of \(L\) node embeddings and uses a GRU network to combine them, similar to the idea used in GNN architectures like GaAN [25] and GeniePath [30], which we term the GRU decoder.
The VFL training protocol closely resembles the SplitNN protocol [14]: in each step, forward computation results (i.e., the outputs of the PRE module) are transmitted from party A to party B. After party B finishes the forward computation using the decoded outputs and label information, it first updates its local decoder module via back-propagation, and then sends (partial) gradients that are intermediate results of the backward computation to party A for updating party A's local parameters (i.e., parameters of the encoder and PRE modules). A pictorial illustration of the VESPER architecture is presented in figure 1. We discuss some practical issues in implementing VESPER in appendix E.1.
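The transmission pattern above can be sketched in a few lines of PyTorch. The module names (`encoder_pre`, `decoder`) are hypothetical stand-ins for party A's Encoder+PRE stack and party B's decoder; the sketch shows only the SplitNN-style forward/backward hand-off, not the full VESPER implementation.

```python
import torch

def vesper_step(encoder_pre, decoder, opt_A, opt_B, batch, labels, loss_fn):
    """One SplitNN-style training step (sketch). Only the PRE outputs and
    the corresponding partial gradients cross the party boundary."""
    # --- Party A: forward through Encoder + PRE, transmit activations. ---
    emb_A = encoder_pre(batch)                    # shape (B, d, L)
    emb_B = emb_A.detach().requires_grad_(True)   # "received" copy at party B

    # --- Party B: decode, compute loss, update the local decoder. ---
    loss = loss_fn(decoder(emb_B), labels)
    opt_B.zero_grad()
    loss.backward()                               # also fills emb_B.grad
    opt_B.step()

    # --- Party B -> A: send partial gradients; A finishes backprop. ---
    opt_A.zero_grad()
    emb_A.backward(emb_B.grad)
    opt_A.step()
    return loss.item()
```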
## 3 Experiments
In this section, we present empirical evaluations of the VESPER framework by investigating its privacy-utility trade-off and its resistance to empirical membership inference attacks. Due to limited space, the complete report is postponed to appendix C.
### Datasets
We use three large-scale graph datasets, with their summary statistics listed in table 2. Specifically, we use two public datasets, ogbn-products and Reddit, with their detailed descriptions postponed to appendix C.1. We additionally use an industrial dataset, **the Finance dataset**, which is generated from transaction records collected from one of the world's leading online payment systems. The underlying graph is generated by treating users as nodes, and two nodes are connected if at least one transaction occurred between the corresponding users within a predefined time period. The business goal is to identify risky users, which is cast into the algorithmic problem of node classification with a binary label. The node features are obtained via statistical summaries of the corresponding users' behavior on the platform during a specific time period. The training and testing datasets are constructed under two distinct time windows with no overlap.
**A differentially private analysis of degree profiles** While all three datasets are large in scale (i.e., with the number of nodes exceeding \(100,000\)), they differ significantly in their degree distributions. For a better illustration, we conduct a differentially private analysis of the degree distribution (with \((0.1,0)\)-differential privacy) detailed in appendix C.2. According to the analysis, we find that both the ogbn-products and Reddit datasets contain a large portion of high-degree nodes (as illustrated by the spiking bar at the \(\geq 50\) category), while the Finance dataset exhibits a concentration on the lower-degree nodes. As discussed in section 2.2, it is expected that the Finance dataset is more challenging for (private) message passing under sum pooling.
### Baselines
We compare the proposed VESPER framework with three types of baselines, each of which can be implemented in the vertically federated setting. **MLP without edge information** Using an MLP directly over node features is the most trivial solution to the learning task, as it completely ignores edge information. **Non-private GNN counterparts** We compare with ordinary GCN and GIN models without privacy guarantees, equivalent to setting the \(\epsilon\) parameter in the VESPER framework to infinity. **GNN models with privacy guarantees** We consider two alternative approaches to private GRL, namely the VFGNN model [53] and the GAP model [39]. We found the privacy analyses in the corresponding papers to be somewhat incoherent with the privacy model in our paper, and we therefore conducted a new analysis of their privacy properties, detailed in appendix C.3.
### Experimental setup
Due to limited space, we postpone the description of our training configurations to appendix C.4 and elaborate here on the **privacy configurations**: All privacy reports are based on the \((\epsilon,\delta)\)-differential privacy model, with \(\delta\) being the reciprocal of the number of edges. To adequately inspect the privacy-utility trade-off, we evaluate all the models with differential privacy guarantees under total privacy costs (privacy budgets) \(\epsilon\in\{1,2,4,8,16,32\}\), with the privacy costs accounted over the entire training period. We treat the setting where \(\epsilon\in\{1,2\}\) as _high privacy_, \(\epsilon\in\{4,8\}\) as _moderate privacy_, and the rest as _low privacy_. For VESPER and VFGNN, we add spectral normalization to each GNN layer. For the privacy accountant, we base our implementation upon the AMA implementation available in the dp-accounting library and use an adjusted sampling probability according to theorem 2.1. For each required privacy level, we compute the minimum scale of Gaussian noise by conducting a binary search over the adjusted AMA, with the spectral norms of the weight matrices fixed at one in all layers.
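The last step, finding the minimum Gaussian noise scale for a target budget, reduces to a one-dimensional bisection. The sketch below assumes a callable `epsilon_of_noise` wrapping the adjusted AMA accountant (e.g., built on the dp-accounting library); the function name and search bounds are illustrative, not the authors' code.

```python
def min_noise_for_budget(epsilon_of_noise, target_eps, lo=1e-2, hi=1e3, iters=50):
    """Binary-search the smallest noise scale theta whose accounted privacy
    cost stays within target_eps. epsilon_of_noise(theta) is assumed to be
    monotonically decreasing in theta (more noise => smaller epsilon)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if epsilon_of_noise(mid) <= target_eps:
            hi = mid   # feasible: try a smaller noise scale
        else:
            lo = mid   # infeasible: need more noise
    return hi
```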
**Evaluation metrics** We adopt classification accuracy (ACC) as the evaluation metric for the ogbn-products and Reddit datasets, and ROC-AUC score (AUC) as the evaluation metric for the Finance dataset.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{Non-private approaches} \\ \hline Model & ogbn-products & Reddit & Finance \\ \hline MLP & \(61.66\) & \(71.67\) & \(71.39\) \\ GIN & \(78.54\) & \(94.85\) & \(79.75\) \\ GCN & \(78.25\) & \(94.56\) & \(80.13\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of VESPER against non-private and private baselines (ACC for ogbn-products and Reddit, AUC for Finance). Only the non-private block of the table is recoverable here.
### Performance and privacy-utility trade-off
According to our empirical experience, obtaining reasonable performance in the _high privacy_ regime is difficult, especially for the baseline algorithms. Therefore, we report two sets of results. First, we thoroughly investigate the privacy-utility trade-off of the proposed VESPER framework under both GIN and GCN aggregation schemes and plot the results in figure 2. Second, we report comparisons of VESPER against private and non-private baselines with only moderate to large privacy budgets and summarize the results in table 1. The results demonstrate that the proposed VESPER framework exhibits a competitive privacy-utility trade-off under both GIN and GCN aggregators. Moreover, a comparison of the GIN and GCN aggregators suggests that summation pooling excels when the underlying graph is dense (i.e., ogbn-products and Reddit), while introducing the truncated message passing mechanism helps achieve better results on sparse graphs (i.e., Finance). Finally, VESPER demonstrates a better privacy-utility trade-off than the other private GNN baselines.
### Protection against membership inference attacks
We launch a membership inference attack (MIA) [37] to empirically investigate the resilience of VESPER against practical privacy risks. The attack targets the membership of nodes rather than edges and is therefore regarded as stronger than an edge MIA. We provide a detailed description of the attack setup in appendix C.7. The attack is conducted over trained models under privacy budgets \(\epsilon\in\{1,2,4,8,16,32,\infty\}\), where \(\epsilon=\infty\) indicates that no privacy protection is adopted. We use ROC-AUC (AUC) to evaluate the attack performance. We report the attack performances in Figure 4. From the results, we observe that when privacy protection is disabled (\(\epsilon=\infty\)), the attacks show non-negligible effectiveness, especially on the ogbn-products and Reddit datasets. Generally, as the privacy budget gets smaller (privacy gets stronger), the attack performance sharply declines. With an appropriate privacy budget, the attacks on all three datasets are successfully defended, with AUC reduced to around 0.5 (the random guess baseline).
**Additional experiments** We report a series of ablation studies assessing the effects of the maximum degree \(D\), the minimum degree \(D_{\text{min}}\) for PMP-GCN, and the batch size in appendix C.8.
## 4 Related Works
### Graph representation learning in the federated setting
The majority of GRL research in the federated setting is based on the horizontal setup, with each party holding its own local graph data [45; 17; 38]. Adaptations of the VFL paradigm to GRL are relatively few. VFGNN [53] uses additive secret sharing to combine feature information held by different parties, followed by a straightforward adaptation of the SplitNN framework [14] with the underlying neural model being a graph neural network. In [5; 46], the authors discussed VFL setups where node features and graph topology belong to different parties. We refer to the recent survey [27] for a more detailed overview.
### Graph representation learning with differential privacy guarantees
The most straightforward way to integrate DP techniques into GRL is to adopt private optimization algorithms like DP-SGD [1]. However, meaningful notions of differential privacy over graph data (i.e., the edge model [35] and the node model [22]) are semantically different from those for i.i.d. data and require a refined privacy analysis that is sometimes overlooked in previous works [53; 45; 36]. In [7], the authors analyzed the DP-SGD algorithm in the node DP model. The GAP model [39] proposed a three-stage training procedure and analyzed its privacy guarantee in both the edge DP and node DP models. However, we noticed that the privacy analysis in [39] did not properly address the effect of sampling, resulting in an overly optimistic performance. Considering only edge DP, randomized response (RR) [46], which flips each entry of the underlying graph's adjacency matrix, guarantees privacy (in a stronger _local_ sense) but makes a reasonable privacy-utility trade-off extremely hard to obtain in practice.
## 5 Conclusion and discussions
We present the VESPER framework as a differentially private solution to node classification in the VFL setup using graph representation learning techniques. The core algorithmic component of VESPER is the PMP scheme, which allows efficient learning on both dense and sparse graph data. We demonstrate the practicality and effectiveness of the proposed framework by establishing theoretical DP guarantees as well as empirically investigating its privacy protection and privacy-utility trade-off. We discuss possible extensions and future directions of the VESPER framework in appendix D.
|
2306.17418 | ReLU Neural Networks, Polyhedral Decompositions, and Persistent Homology | A ReLU neural network leads to a finite polyhedral decomposition of input
space and a corresponding finite dual graph. We show that while this dual graph
is a coarse quantization of input space, it is sufficiently robust that it can
be combined with persistent homology to detect homological signals of manifolds
in the input space from samples. This property holds for a variety of networks
trained for a wide range of purposes that have nothing to do with this
topological application. We found this feature to be surprising and
interesting; we hope it will also be useful. | Yajing Liu, Christina M Cole, Chris Peterson, Michael Kirby | 2023-06-30T06:20:21Z | http://arxiv.org/abs/2306.17418v1 | # ReLU Neural Networks, Polyhedral Decompositions, and Persistent Homology
###### Abstract
A ReLU neural network leads to a finite polyhedral decomposition of input space and a corresponding finite dual graph. We show that while this dual graph is a coarse quantization of input space, it is sufficiently robust that it can be combined with persistent homology to detect homological signals of manifolds in the input space from samples. This property holds for a variety of networks trained for a wide range of purposes that have nothing to do with this topological application. We found this feature to be surprising and interesting; we hope it will also be useful.
Machine Learning, Polyhedral Decompositions, Persistent Homology
## 1 Introduction
The rectified linear unit (ReLU) function is the default activation function for many well-known, deep, feedforward neural networks (AlexNet, ResNet, Inception, SqueezeNet, etc.). These networks frequently use some convolutional layers to decrease the number of parameters being trained and often utilize skips to decrease issues from overfitting. The importance of ReLU feedforward neural networks (FFNNs) has motivated an extensive investigation of their properties from a variety of aspects to demystify their "black-box nature". In this paper, we start from the observation that, regardless of their use of skips and/or convolutions, a ReLU FFNN \(F:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) decomposes the input space (\(\mathbb{R}^{m}\)) into convex polyhedra and assigns to each polyhedron a unique binary vector that encodes the ReLU activation pattern of all nodes in the ReLU layers of the network. More precisely, the neural network assigns a binary vector to each input point, assigning identical binary vectors to input points lying in the same polyhedron. Despite this granularity imposed on input space, we found that the homology of manifolds embedded in the input space of the network can still be detected via persistent homology applied solely to distance measures built from the binary vectors of the sampled points. After explaining the connection between polyhedra and binary vectors in a ReLU FFNN, we describe methods for finding the binary vectors associated with the nearby neighbors of a given polyhedron in the decomposition. For small networks, one can (in principle) use these methods to determine all the polyhedra in the decomposition and determine their proximity within the decomposition. Toward this goal, we exploit the following observation: _two polyhedra share a facet if and only if their associated binary vectors differ in exactly one bit_. Due to this property, we propose the Hamming distance between binary vectors as a proxy for proximity between polyhedra in the decomposition. More precisely, if \(\mathcal{X}\) is a collection of data points in the input space, each \(x\in\mathcal{X}\) lies in one of the polyhedra and can be assigned the binary vector, \(s(x)\), that labels the polyhedron. Using the Hamming metric, we build a network-driven distance matrix between pairs of binary vectors associated to data points in \(\mathcal{X}\). Using a software package such as Ripser (see [https://live.ripser.org](https://live.ripser.org)), we illustrate how the coarse distance measure corresponding to a polyhedral decomposition of input space can be used to extract homological information about a manifold lying in the input space from sample points on the manifold. It is worth noting that the Hamming distance between polyhedral bit vectors is an approximation for the smallest number of polyhedral steps between two polyhedra (i.e. for the shortest path between vertices on the dual graph of the polyhedral decomposition). (Jamil et al., 2023) has demonstrated the efficacy of the Hamming distance in illuminating the mechanisms underlying neural network functionality.
The paper is organized as follows: Section 2 introduces the basic background on ReLU FFNNs, the polyhedral decomposition associated to a network, the binary vectors assigned to each polyhedron, the linear model that determines each polyhedron, the linear program for filtering out redundant inequalities in the linear model, and the affine linear function that is attached to each polyhedron. Section 3 outlines algorithms for determining the binary vectors that occur for polyhedra in the entire input space and for polyhedra that occur within a bounded region. Section 4 describes how the polyhedral decomposition, realized through binary vectors, can be combined with persistent homology to uncover topological features in data. Section 5 concludes the work and
sketches directions for future research.
## 2 Neural Networks and Polyhedral Decompositions
This section gives the basic definitions of a ReLU neural network and describes the connection to polyhedral decompositions and binary vectors.
### Notation for Neural Networks
Consider an \((L+1)-\)layer ReLU feedforward neural network:
\[\mathbb{R}^{m}\ \xrightarrow[]{(W_{1},b_{1})}\ \mathbb{R}^{h_{1}}\ \xrightarrow[]{(W_{2},b_{2})} \mathbb{R}^{h_{2}}\to\ldots\to\mathbb{R}^{h_{L-1}}\] \[\xrightarrow[]{(W_{L},b_{L})}\ \mathbb{R}^{h_{L}}\xrightarrow[]{(W_{L+1},b_{L+1})} \mathbb{R}^{n}. \tag{1}\]
In this model, \(\mathbb{R}^{m}\) is the input space, \(\mathbb{R}^{n}\) is the output space, and \(h_{i}\) corresponds to the number of nodes at layer \(i\). Layer 0 corresponds to the input space and layer \(L+1\) corresponds to the output space (so \(h_{0}=m\) and \(h_{L+1}=n\)). We let \(W_{i}\in\mathbb{R}^{h_{i}\times h_{i-1}}\) and \(b_{i}\in\mathbb{R}^{h_{i}}\) denote the weight matrix and bias vector of layer \(i\), respectively. The activation functions for the hidden layers (layers \(1,\ldots,L\)) are assumed to be ReLU functions (applied coordinate-wise) while the map to the last layer (the output layer) is assumed to be affine linear (without a ReLU function being applied to the image). Recall that the ReLU function is the map \(ReLU:\mathbb{R}\to\mathbb{R}\) given by
\[ReLU(a)=\begin{cases}a&\text{if }a>0\\ 0&\text{if }a\leq 0.\end{cases} \tag{2}\]
The ReLU map is a map on real numbers that is piecewise linear and continuous. It can be naturally extended to a piecewise linear continuous map on vector spaces (which we also denote as ReLU). More precisely, we define \(ReLU:\mathbb{R}^{h_{i}}\to\mathbb{R}^{h_{i}}\), by applying the ReLU function to each coordinate of \(x\in\mathbb{R}^{h_{i}}\). Let \(w_{i,j}\) denote the \(j^{th}\) row of \(W_{i}\) and let \(b_{i,j}\) denote the \(j^{th}\) entry of \(b_{i}\). Given an input data point \(x\in\mathbb{R}^{m}\), we denote the output of \(x\) in layer \(i\) as \(F_{i}(x)\). Thus, with this notation we have \(F_{i}(x)\in\mathbb{R}^{h_{i}}\), \(F_{0}(x)=x\), and
\[\begin{split} F_{i}(x)&=\text{ReLU}(W_{i}F_{i-1}(x )+b_{i})\\ &=\begin{bmatrix}\max\{0,w_{i,1}F_{i-1}(x)+b_{i,1}\}\\ \vdots\\ \max\{0,w_{i,h_{i}}F_{i-1}(x)+b_{i,h_{i}}\}\end{bmatrix}.\end{split} \tag{3}\]
### Definitions of Binary Vectors
Consider model (1). Given an input data point \(x\in\mathbb{R}^{m}\), for each hidden layer \(i\) (so \(1\leq i\leq L\)), we introduce a binary (bit) vector
\[s_{i}(x)=[s_{i,1}(x)\ \ldots\ s_{i,h_{i}}(x)]^{\top}\in\mathbb{R}^{h_{i}},\]
where \(s_{i,j}(x)\) (with \(1\leq j\leq h_{i}\)) is defined as follows:
\[s_{i,j}(x)=\begin{cases}1&\text{if }w_{i,j}F_{i-1}(x)+b_{i,j}>0\\ 0&\text{if }w_{i,j}F_{i-1}(x)+b_{i,j}\leq 0.\end{cases} \tag{4}\]
Thus, for each point \(x\in\mathbb{R}^{m}\), we have a sequence of binary vectors \(s_{1}(x),s_{2}(x),\ldots,s_{L}(x)\). We can stack the binary vectors associated to \(x\) to make a long column vector
\[s(x)=[s_{1}^{\top}(x)\ \ldots\ s_{L}^{\top}(x)]^{\top}\in\mathbb{R}^{h}, \tag{5}\]
where \(h=\sum_{i=1}^{L}h_{i}\) is the total number of nodes in the hidden layers. We call \(s(x)\) the binary vector of \(x\).
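The bit vector \(s(x)\) can be computed with a single forward pass that records the sign pattern of each pre-activation. A minimal NumPy sketch (assuming plain fully connected layers, without skips or convolutions) follows.

```python
import numpy as np

def binary_vector(x, weights, biases):
    """Return the stacked bit vector s(x) of model (1) for input x.

    weights/biases are lists [W_1, ..., W_L] and [b_1, ..., b_L] for the
    hidden layers only; the affine output layer contributes no bits.
    """
    bits, h = [], x
    for W, b in zip(weights, biases):
        pre = W @ h + b
        bits.append((pre > 0).astype(np.uint8))  # s_{i,j}(x), as in Eq. (4)
        h = np.maximum(pre, 0.0)                 # F_i(x) = ReLU(W_i F_{i-1}(x) + b_i)
    return np.concatenate(bits)                  # s(x), as in Eq. (5)
```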
Different points from the input space \(\mathbb{R}^{m}\) can have the same binary vector. We next show that the set of points that share a given binary vector, \(\{x^{\prime}:s(x^{\prime})=s(x),x^{\prime}\in\mathbb{R}^{m}\}\), forms a convex polyhedron in \(\mathbb{R}^{m}\).
### Linear Model for Binary Vectors
Let \(s_{1},s_{2},\ldots,s_{L}\) denote a given sequence of binary vectors for model (1). For each layer \(i\)\((1\leq i\leq L)\), we describe inequality constraints that must be satisfied for an input data point to have the binary vector \(s_{i}\). To describe the inequalities in a consistent manner, we introduce a sign vector of \(1\)'s and \(-1\)'s for each hidden layer. For layer \(i\), define \(s^{\prime}_{i}=[s^{\prime}_{i,1}\ \ldots\ s^{\prime}_{i,h_{i}}]^{\top}\) with
\[s^{\prime}_{i,j}=\begin{cases}1&\text{if }s_{i,j}=0\\ -1&\text{if }s_{i,j}=1.\end{cases} \tag{6}\]
In layer 1, because \(F_{1}(x)=\text{ReLU}(W_{1}x+b_{1})\), any data point \(x\) that has the bit vector \(s_{1}\) satisfies the following linear inequality:
\[\text{diag}(s^{\prime}_{1})(W_{1}x+b_{1})\leq 0 \tag{7}\]
where \(\text{diag}(v)\) is a square diagonal matrix with the elements of vector \(v\) on the main diagonal. Inequality (7) encodes the defining condition: the affine output of the first hidden layer at input \(x\) must be greater than zero at nodes whose bit is set to \(1\), and less than or equal to zero at nodes whose bit is set to \(0\).
Suppose that \(x\) has the bit vector sequence \(s_{1},s_{2},\ldots,s_{L}\). Let \(\hat{W}_{j}=W_{j}\text{diag}(s_{j-1})\hat{W}_{j-1}\) and \(\hat{b}_{j}=W_{j}\text{diag}(s_{j-1})\hat{b}_{j-1}+b_{j}\) for \(2\leq j\leq L\) with \(\hat{W}_{1}=W_{1},\hat{b}_{1}=b_{1}\). By model (1), the following equation holds for \(1\leq j\leq L\):
\[F_{j}(x)=\text{ReLU}(W_{j}F_{j-1}(x)+b_{j})=\text{diag}(s_{j})(\hat{W}_{j}x+ \hat{b}_{j})\]
where \(F_{0}(x)=x\).
More generally, any data point \(x\) that has \(s_{j}\) as its bit vector for layer \(j\) should satisfy the following linear inequalities:
\[\text{diag}(s^{\prime}_{j})\hat{W}_{j}x\leq\text{diag}(s^{\prime}_{j})(-\hat{b}_ {j}). \tag{8}\]
Let \(A_{j}=\text{diag}(s^{\prime}_{j})\hat{W}_{j}\) and \(c_{j}=\text{diag}(s^{\prime}_{j})(-\hat{b}_{j})\) for \(1\leq j\leq L\). Combining (7) and (8), we have
\[Ax\leq c, \tag{9}\]
where
\[A=[A_{1}^{\top}\ A_{2}^{\top}\ \dots\ A_{L}^{\top}]^{\top}\ \text{and}\ c=[c_{1}^{\top}\ c_{2}^{ \top}\ \dots\ c_{L}^{\top}]^{\top}\]
with \(A_{i}\in\mathbb{R}^{h_{i}\times m}\) and \(c_{i}\in\mathbb{R}^{h_{i}}\). Note that the set \(P=\{x\in\mathbb{R}^{m}:Ax\leq c\}\) is a convex polyhedron (potentially empty, potentially unbounded). Considering the totality of points in the input space, the weights of a ReLU neural network lead to a decomposition of the input space into a collection of bounded and unbounded polyhedra, and each polyhedron has a corresponding bit vector. Note that an arbitrary bit vector may or may not correspond to a non-empty polyhedron.
It is not difficult to see that for a polyhedron \(P\) of full dimension (the same dimension as the ambient space), there is a unique smallest subset of inequalities, which we denote by \((A^{\prime},c^{\prime})\), that one can obtain from \((A,c)\) that leaves the polyhedron \(P\) unchanged. Thus, \(A^{\prime}\) is built from a subset of rows of \(A\) and \(c^{\prime}\) is the corresponding subset of rows of \(c\) such that the set of points that satisfy \(Ax\leq c\) is the same as the set of points that satisfy \(A^{\prime}x\leq c^{\prime}\).
Write
\[A=\begin{bmatrix}a_{1}\ a_{2}\ \dots\ a_{h}\end{bmatrix}^{\top}\ \text{ and }\ \ c= \begin{bmatrix}c^{1}\ c^{2}\ \dots\ c^{h}\end{bmatrix}^{\top}\]
with \(a_{i}\in\mathbb{R}^{m}\) and \(c^{i}\in\mathbb{R}\). To determine if \(a_{i}x\leq c^{i}\) is a redundant constraint, first define
\[\tilde{A}=\begin{bmatrix}a_{1}\ a_{2}\ \dots\ a_{i-1}\ a_{i+1}\dots\ a_{h} \end{bmatrix}^{\top}\]
and
\[\tilde{c}=\begin{bmatrix}c^{1}\ c^{2}\ \dots\ c^{i-1}\ c^{i+1}\ \dots\ c^{h} \end{bmatrix}^{\top}.\]
Next, consider the following linear program
\[\begin{array}{l}\operatorname*{maximize}_{x}\ \ a_{i}^{\top}x\\ \text{s. t.}\ \ \tilde{A}x\leq\tilde{c}.\end{array} \tag{10}\]
If the optimal objective value of (10) is less than or equal to \(c^{i}\), then the \(i\)th linear inequality is redundant, and we can remove \(a_{i}^{\top}\) and \(c^{i}\) from \(A\) and \(c\), respectively. We determine \((A^{\prime},c^{\prime})\) by iterating this process to remove all redundant constraints.
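The redundancy test in (10) is a standard linear program; a sketch using SciPy is given below. Note that `scipy.optimize.linprog` minimizes, so the objective is negated, and its default nonnegativity bounds must be removed.

```python
import numpy as np
from scipy.optimize import linprog

def is_redundant(A, c, i, tol=1e-9):
    """Check whether the i-th inequality of {x : Ax <= c} is redundant by
    solving the LP in (10): maximize a_i^T x subject to the remaining rows."""
    A_tilde = np.delete(A, i, axis=0)
    c_tilde = np.delete(c, i)
    # linprog minimizes, so maximize a_i^T x by minimizing -a_i^T x;
    # bounds=(None, None) removes scipy's default x >= 0 constraint.
    res = linprog(-A[i], A_ub=A_tilde, b_ub=c_tilde, bounds=(None, None))
    if res.status == 3:          # unbounded maximum: the row genuinely cuts
        return False
    return res.success and (-res.fun <= c[i] + tol)
```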
Given a bit vector \(s\), the \(i\)-th entry of \(s\) is called active if the \(i\)th row of \(A\) is in \(A^{\prime}\), and inactive otherwise. For any input \(x\) in the polyhedron determined by the bit vector \(s\), the output of \(x\) is determined by the single affine map: \(G(x)=W_{L+1}\text{diag}(s_{L})\hat{W}_{L}x+W_{L+1}\text{diag}(s_{L})\hat{b}_{ L}+b_{L+1}\).
We summarize some of the points of this section, together with implications (many of which we leave to the reader) in the bullet points that follow. We emphasize that the polyhedral decomposition of input space has an associated dual graph. This graph has vertices corresponding to polyhedra and edges corresponding to polyhedra that share an \(m-1\) dimensional face. It is not hard to show that this dual graph is bipartite (refer to Appendix B). We will be using this graph, implicitly, to detect homological signals via persistent homology. For the interested reader, we point out the following references related to topological features and combinatorial features of the type of neural networks we are considering (Grigsby et al., 2022; Masden, 2022). We would also like to acknowledge the quickly-growing library of work in the area of polyhedral theory, a survey of which can be found in (Huchette et al., 2023).
The following is a summary of the key ideas presented either explicitly or implicitly in this section:
* A ReLU FFNN determines a continuous piecewise linear map \(F:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\).
* The neural network leads to a decomposition of \(\mathbb{R}^{m}\) into a collection of bounded and unbounded convex polyhedra.
* A binary vector, supported on the hidden nodes of the network, can be attached to each point in the domain of the network. The value of the binary vector associated to a given input \(x\in\mathbb{R}^{m}\), at a given node, is \(1\) if ReLU was not applied at the node and \(0\) if ReLU was applied at the node (the ReLU activation pattern).
* If \(h\) denotes the number of hidden nodes in the network, then there are, a priori, \(2^{h}\) possible binary vectors and thus \(2^{h}\) possible polyhedra in the decomposition of input space. In reality, the number of realized convex polyhedra in the domain is much smaller.
* If two points lie in the interior of the same convex polyhedron then they determine the same binary vector.
* If two points are in the interior of distinct convex polyhedra then they determine distinct binary vectors.
* From the ReLU activation pattern for points in a convex polyhedron, one can determine an affine linear equation that represents the behavior of the neural network on points in the polyhedron. If the polyhedron is denoted by \(P\), then for points in \(P\), the function \(F:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) can be expressed as \(F(x)=A_{P}x+b_{P}\) for some matrix \(A_{P}\) and some vector \(b_{P}\).
* Input values that determine a value of \(0\) on some node, before applying ReLU, are precisely the input values that lie on the boundary of distinct convex polyhedra. This is due to the fact that applying ReLU to \(0\) has the same effect as not applying ReLU to \(0\).
* If two convex polyhedra, \(P_{1},P_{2}\), share an \((m-1)\)-dimensional face, then the binary vectors associated to each of the polyhedra differ in one bit. Furthermore, the affine linear functions \(A_{P_{1}}+b_{P_{1}}\) and \(A_{P_{2}}+b_{P_{2}}\) agree on this \((m-1)\)-dimensional face.
* Any polyhedral decomposition of \(\mathbb{R}^{m}\) has a natural dual graph with vertices corresponding to \(m\)-dimensional polyhedra and edges corresponding to polyhedra sharing an \(m-1\) dimensional facet.
* The Hamming distance between binary vectors (that represent two polyhedra) can be used as an approximation for the smallest number of polyhedral steps between the two polyhedra (i.e. the length of a minimal geodesic on the dual graph).
## 3 Algorithms for Bit Vector Search and Examples
The number of polyhedra in the input space or within a bounded region of the input space provides a measure of the network's expressivity and complexity. Upper and lower bounds on the maximal number of polyhedra obtainable from a given ReLU FFNN architecture can be found in (Pascanu et al., 2014), (Montufar et al., 2014), (Raghu et al., 2017), (Arora et al., 2018), and (Serra et al., 2018). Several algorithms ((Xiang et al., 2018), (Yang et al., 2020), and (Xu et al., 2021)) have been developed to compute the exact polyhedra decomposition of the input space through layer-by-layer linear inequality solving. Larger decomposition examples can be computed using a method developed by (Vincent and Schwager, 2021), which enumerates all polyhedra in the input space as follows:
* Start with a random point \(x\in\mathbb{R}^{m}\) and determine its bit vector \(s(x)\). This bit vector labels a polyhedron \(P\).
* Find the active bits in \(s(x)\) for the polyhedron \(P\).
* Each active bit corresponds to a neighboring polyhedron. Each neighboring polyhedron has a bit vector that can be obtained by "flipping" one of the active bits for \(P\). Thus, one can find all of the neighboring polyhedra for \(P\), in terms of their binary vectors.
* Repeat the process to find the neighbors of each of these newly identified polyhedra.
* The number of active bits for a polyhedron is equal to the number of nearest neighbors of the polyhedron. The previous steps continue until we have a list of polyhedra \(\mathcal{P}\) that satisfies the property that for each \(P\in\mathcal{P}\), the set of nearest neighbors of \(P\) is itself a subset of \(\mathcal{P}\).
This process leads to a set \(\mathcal{P}\) of convex polyhedra that decompose input space. Such decompositions have been used to define network-imposed distance metrics for model inputs (Balestriero and Baraniuk, 2018). We coarsen these metrics and show they can still be used to detect homological signals. Equating neural networks of various types (convolutional neural networks, residual networks, skip-connected networks, recurrent neural networks) to max-affine spline operators has been carried out by (Balestriero and Baraniuk, 2018). The paper (Sattelberg et al., 2020) investigated the behavior of the ReLU FFNNs based on the structure of the decomposition, together with the affine map attached to each polyhedron.
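The traversal can be sketched as a breadth-first search over bit vectors, where `bit_vector_of` and `active_bits` are assumed helpers implementing the forward pass of Section 2.2 and the LP-based pruning of Section 2, respectively.

```python
from collections import deque

def enumerate_polyhedra(x0, bit_vector_of, active_bits):
    """Breadth-first enumeration of polyhedra via active-bit flipping.

    bit_vector_of maps a point to its bit vector; active_bits returns the
    indices of the non-redundant rows for a bit vector (e.g., via the LP
    of (10)). Both are assumed helpers, not part of this sketch.
    """
    start = tuple(bit_vector_of(x0))
    found, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for i in active_bits(s):
            t = list(s)
            t[i] = 1 - t[i]          # flip one active bit -> facet neighbor
            t = tuple(t)
            if t not in found:
                found.add(t)
                queue.append(t)
    return found
```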
While there are multiple methods that can be used to determine the polyhedral decomposition, and the associated binary vectors, imposed by the weights of a ReLU neural network, all of these methods are woefully inadequate for the large and deep networks that are commonplace today. This is not a fault of the algorithms: many modern networks contain many millions of hidden nodes and have input spaces with dimension well beyond 100,000. There are simply too many polyhedra represented by such networks. However, for small networks, any of the methods will work. We present two straightforward methods. The first is a brute-force method formulated as a linear programming problem. The second, the traversal-and-pruning method outlined above, improves computationally upon the first. We demonstrate our methods with examples. Pseudocode for these two methods is provided in Appendix A.
### Algorithms
A mixed-integer linear program to count the number of polyhedra in a bounded region can be found in (Serra et al., 2018). We present a linear program that can also count polyhedra, determine if they share a facet, and whose implementational simplicity is useful. Our proposed method yields not only the count of polyhedra in the entire input space \(\mathbb{R}^{m}\) but also their respective binary vectors. To count the number of polyhedra in a bounded region, the linear inequality constraints in (15) need to be augmented to include bounds for variables. For example, when the input is 2-dimensional and the bounded region is determined by \(a_{1}\leq x_{1}\leq b_{1},\ a_{2}\leq x_{2}\leq b_{2}\), the constraints become \(\tilde{A}^{j}x\leq\tilde{c}^{j}\) where \(\tilde{A}^{j}\) is the concatenation of \(A^{j}\) and \(B\) while \(\tilde{c}^{j}\) is the concatenation of \(c^{j}\) and \(d\), with
\[B=\begin{bmatrix}1&-1&0&0\\ 0&0&1&-1\end{bmatrix}^{T}\ \operatorname{and}\ d=\begin{bmatrix}b_{1}\ -a_{1}\ b_{2}\ -a_{2}\end{bmatrix}^{T}.\]
Algorithm 1 is fast for small neural network structures. However, (Hanin and Rolnick, 2019) proved that the number of bit vectors that correspond to polyhedra is approximately \(h^{m}\), which is much smaller than \(2^{h}\) for large \(h\), highlighting the computational savings that can be procured by only traversing active bit vectors instead of all possibilities as is done by
the brute-force method. These savings are what Algorithm 2 has to offer.
Algorithm 1 is easy to implement but slow. Algorithm 2 is based on the fact that each polyhedron corresponding to \(Ax\leq c\) is determined by \(A^{\prime}x\leq c^{\prime}\), and that two adjacent polyhedra sharing one facet differ in exactly one active bit. This active bit corresponds to the hyperplane containing the shared facet. The key step of Algorithm 2 is to find the active bits of a given bit vector using the method mentioned in Section 2.2. Once identified, these active bits can be used to find all neighboring polyhedra, starting a scheme that ripples through the desired domain until all polyhedra are found. The idea behind this algorithm also motivated the reachable polyhedral marching algorithm in (Vincent and Schwager, 2021). We make a small modification to enumerate the polyhedra in a bounded region rather than in the entire, unbounded domain. For a bounded region, the bit vector can be extended by one extra bit recording whether the corresponding polyhedron touches the boundary of the region. For example, when \(m=2\) and the bounded region is defined by \(a_{1}\leq x_{1}\leq b_{1},a_{2}\leq x_{2}\leq b_{2}\), the extra bit is set to \(1\) if the solution set of \(Ax\leq c\) intersects any of the boundary lines \(x_{1}=a_{1}\), \(x_{1}=b_{1}\), \(x_{2}=a_{2}\), or \(x_{2}=b_{2}\), and \(0\) otherwise. Bit vectors whose last bit equals \(1\) are marked as processed once they are added to \(\mathcal{P}\): provided the initial point's polyhedron does not touch the region boundary, every boundary-touching bit vector is discovered by flipping an active bit of one of its neighbors, which means that all of its adjacent polyhedra within the bounded region are already in \(\mathcal{P}\).
### Examples
In this section, we visualize the polyhedral decomposition, determined by a ReLU FFNN, on a bounded region for two basic models:
\[\mathbb{R}^{2}\xrightarrow[\text{ReLU}]{(W_{1},b_{1})}\mathbb{R}^{3} \xrightarrow[\text{ReLU}]{(W_{2},b_{2})}\mathbb{R}^{3}\xrightarrow[\text{ReLU}]{(W_{3},b_{3})}\mathbb{R} \tag{11}\]
\[\mathbb{R}^{3}\xrightarrow[\text{ReLU}]{(W_{1},b_{1})}\mathbb{R}^{10} \xrightarrow[\text{ReLU}]{(W_{2},b_{2})}\mathbb{R}^{10}\xrightarrow[\text{ReLU }]{(W_{3},b_{3})}\mathbb{R}^{10}\xrightarrow[\text{ReLU}]{(W_{4},b_{4})} \mathbb{R} \tag{12}\]
We first applied model (11) to fit \(f_{1}(x_{1},x_{2})=x_{1}^{2}+x_{2}^{2}-2/3\) using 10000 points uniformly sampled in \([-1,1]^{2}\), and model (12) to fit \(f_{2}(x_{1},x_{2},x_{3})=(x_{1}-1)^{2}+2x_{2}^{2}+x_{3}^{2}+1\) using 125000 points uniformly sampled in \([-1,1]^{3}\). We used TensorFlow to train the above models with a batch size of 50 and an early stopping criterion based on the convergence of the validation loss.
We used Algorithm 1 to enumerate all binary vectors in \(\mathbb{R}^{2}\) and in \([-1,1]^{2}\). We used Algorithm 2 to enumerate all binary vectors in \(\mathbb{R}^{3}\) and in \([-1,1]^{3}\). We plotted the polyhedra by finding their extremal vertices.
Figure 1(a) and Figure 1(b) provide visualizations of the polyhedral decomposition derived from model (11) and model (12), respectively. For Figure 1(a), there are 25 polyhedra in the bounded region \([-1,1]^{2}\) and 27 polyhedra in \(\mathbb{R}^{2}\). Of these polyhedra, 21 are bounded and 6 are unbounded. The binary vectors are superimposed onto the polyhedra. Note that the binary vectors associated to different polyhedra differ in exactly one bit if and only if they share a facet. For Figure 1(b), there are 1858 polyhedra in the bounded region \([-1,1]^{3}\) and 3331 polyhedra in \(\mathbb{R}^{3}\).
## 4 Persistent Homology and Polyhedral Decompositions
It has been well documented that the Euclidean distance between sampled points on a manifold in \(\mathbb{R}^{n}\) can be employed to detect the topology of the manifold. In this section, we provide a description of Vietoris-Rips persistent homology and illustrate how it can be effectively combined with the non-Euclidean distance measure, associated to the polyhedral decomposition, to also identify homological features. Persistent homology has been a rapidly developing branch of topology largely due to its usefulness in data analysis and machine learning (Ghrist, 2008; Zomorodian and Carlsson, 2004; Carlsson, 2009; Edelsbrunner and Harer, 2008) (a collection of additional resources and videos can be found at [https://www.aatrn.net](https://www.aatrn.net)). Work linking persistent homology and neural networks has been appearing with increasing frequency; see the following for a sampling of recent works in this direction (Rieck et al., 2018; Zhao et al., 2020; Carriere et al., 2020; Birdal et al., 2021). The description of persistent homology given below is extremely condensed; the interested reader is encouraged to read the survey article (Ghrist, 2008), where additional details can be found.
Figure 1: Visualizations of low-dimensional polyhedral decompositions. A detailed discussion on (a) can be found in Appendix B along with the exact binary vectors that label all of the regions of the decomposition.
### Persistent Homology
Consider a metric space, \(C\), with distance function \(d:C\times C\to\mathbb{R}\). Let \(x^{(1)},x^{(2)},\ldots x^{(N)}\) be points in \(C\). We can utilize \(d\) to build an \(N\times N\) matrix \(D\) by setting \(D_{i,j}=d(x^{(i)},x^{(j)})\). \(D\) will be hollow (i.e. \(D_{i,i}=0\)), symmetric, and non-negative. From \(D\) we can build a family, \(A(t)\), of \(N\times N\)\(\{0,1\}\)-matrices parameterized by a real parameter \(t\) using the rule
\[A(t)_{i,j}=\begin{cases}0&\text{if $D(i,j)>t$ or if $i=j$}\\ 1&\text{else}.\end{cases} \tag{13}\]
For each \(t\), \(A(t)\) can be viewed as an adjacency matrix of a graph \(G(t)\). Let \(CL(t)\) denote the clique complex of \(G(t)\)(Hausmann, 1995). By construction, \(CL(t)\) is a simplicial complex and there is a natural inclusion map
\[CL(t_{1})\hookrightarrow CL(t_{2})\]
whenever \(t_{1}<t_{2}\). Let \(\mathbb{F}\) be a field. A simplicial complex \(S\) has an associated chain complex \(S_{\bullet}\) of \(\mathbb{F}\)-vector spaces. The failure of the chain complex to be exact at location \(i\) is measured by the \(i^{th}\) homology \(H_{i}(S_{\bullet},\mathbb{F})\) (which itself is an \(\mathbb{F}\)-vector space). The inclusion map \(CL(t_{1})\hookrightarrow CL(t_{2})\) induces a chain map
\[CL(t_{1})_{\bullet}\to CL(t_{2})_{\bullet}.\]
Whenever you have a chain map \(F:S_{\bullet}\to T_{\bullet}\) between chain complexes, \(S_{\bullet},T_{\bullet}\) you get associated linear maps between \(H_{i}(S_{\bullet},\mathbb{F})\) and \(H_{i}(T_{\bullet},\mathbb{F})\) for each \(i\). Thus, the chain map \(CL(t_{1})_{\bullet}\to CL(t_{2})_{\bullet}\) induces, for each \(i\), a linear map
\[H_{i}(CL(t_{1})_{\bullet},\mathbb{F})\to H_{i}(CL(t_{2})_{\bullet},\mathbb{F}).\]
If we pick values \(t_{1}<t_{2}<\cdots<t_{k}\), then we can build a nested sequence of simplicial complexes
\[CL(t_{1})\subset CL(t_{2})\subset\cdots\subset CL(t_{k})\]
which leads to
\[H_{i}(CL(t_{1})_{\bullet},\mathbb{F})\to H_{i}(CL(t_{2})_{\bullet},\mathbb{F}) \rightarrow\cdots\to H_{i}(CL(t_{k})_{\bullet},\mathbb{F}), \tag{14}\]
where the arrows denote linear maps.
Each of the \(H_{i}(CL(t_{j})_{\bullet},\mathbb{F})\) is a finite-dimensional \(\mathbb{F}\)-vector space, and we can view (14) as a finitely generated graded \(\mathbb{F}[x]\) module, where \(\mathbb{F}[x]\) is the ring of polynomials. This module is frequently called the \(i^{th}\) persistence module associated to the nested sequence of simplicial complexes. Since \(\mathbb{F}[x]\) is a principal ideal domain, the \(\mathbb{F}[x]\) module has a decomposition into its invariant factors by the well known structure theorem for finitely generated graded modules over a principal ideal domain. Each invariant factor can be viewed as a homology class that has a birth time \(t_{b}\) and a death time \(t_{d}\) (possibly infinite), and this invariant factor can be represented as an interval \([t_{b},t_{d}]\). This collection of intervals corresponds to the barcode representation of the invariant factors of a persistence module.
### Combining Persistence with a Polyhedral Decomposition
First we provide two examples based on the polyhedral decomposition of model (12).
_Example 4.1_.: We consider the circle in \(\mathbb{R}^{3}\) (the input space) with parameterization given by \((0,cos(t),sin(t))\). We sampled the circle at \(20\) evenly spaced points \(x^{(1)},x^{(2)},\ldots,x^{(20)}\). Each of these points lie in one of the polyhedra from Figure 1(b). A picture of the polyhedra encountered by the \(20\) sample points is found in Figure 2(a). We recorded the binary vectors for each of the encountered polyhedra. This gave a total of \(20\) binary vectors but only \(19\) were distinct. We labeled these distinct binary vectors \(s^{(1)},s^{(2)},\ldots s^{(19)}\). We built a \(19\times 19\) matrix \(E\) by setting \(E_{i,j}\) equal to the Hamming distance between \(s^{(i)}\) and \(s^{(j)}\). The resulting matrix is hollow, symmetric, and has positive integers in entries off the diagonal. We input this matrix into Ripser (see [https://live.ripser.org](https://live.ripser.org)) and asked for the \(H_{0}\) and \(H_{1}\) barcodes. The result of this experiment can be found in Figure 3(a).
_Example 4.2_.: We considered the same circle in \(\mathbb{R}^{3}\) as in the previous example but sampled at \(500\) evenly spaced points \(x^{(1)},x^{(2)},\ldots,x^{(500)}\). A picture of the polyhedra encountered by the \(500\) sample points is found in Figure 2(b). We recorded the \(500\) binary vectors for each of the polyhedra that was hit by a data point. Only \(41\) of the bit vectors were distinct (corresponding to \(41\) distinct polyhedra) and we labeled these \(s^{(1)},s^{(2)},\ldots s^{(41)}\). We built a \(41\times 41\) matrix \(F\) by setting \(F_{i,j}\) equal to the Hamming distance between \(s^{(i)}\) and \(s^{(j)}\). We input this matrix into Ripser (see [https://live.ripser.org](https://live.ripser.org)) and asked for the \(H_{0}\) and \(H_{1}\) barcodes. The result of this experiment can be found in Figure 3(b).
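The pipeline of these two examples (deduplicate the bit vectors, form the pairwise Hamming matrix, feed it to Ripser) can be sketched as follows, assuming the `ripser` Python package; the helper is illustrative, not the code used for the experiments.

```python
import numpy as np
from ripser import ripser  # the ripser.py package

def hamming_barcodes(bit_vectors, maxdim=1):
    """Deduplicate bit vectors, build the pairwise Hamming distance matrix,
    and compute persistence barcodes, mirroring Examples 4.1-4.2."""
    S = np.unique(np.asarray(bit_vectors, dtype=np.uint8), axis=0)
    # Hamming distance between two bit vectors = number of differing bits.
    D = (S[:, None, :] != S[None, :, :]).sum(axis=2)
    return ripser(D.astype(float), distance_matrix=True, maxdim=maxdim)["dgms"]
```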
The \(H_{0}\) barcode from the first example indicates connectivity at Hamming distance 4. The \(H_{1}\) barcode indicates a spurious (i.e. short-lived) closed loop occurring at Hamming distance 3. The homological signal of the circle appears (and is quite strong) at Hamming distance 4. The \(H_{0}\) barcode from the second example indicates connectivity at Hamming distance 2. The \(H_{1}\) barcode indicates a long-lived loop beginning at Hamming distance 2.

Figure 2: Polyhedra from two sample sizes
In the following two examples, we carry out similar experiments but utilize real images from the ImageNet validation dataset (which contains 50K images). We calculate the Hamming distance between bit vectors of data points via a much deeper neural network. The network we use is known as ResNet-50; it is a 50-layer convolutional neural network and was pre-trained on ImageNet (Deng et al., 2009). The training images are 224x224x3, so the input space has dimension 150528. It uses ReLU as an activation function on the outputs from the convolutional layers and contains more than 6,000,000 nodes in its many layers. As in the previous examples, the activation pattern of each ReLU layer is stored as a bit vector (as defined in Section 2.2).
_Example 4.3_.: We started with 3 pictures, denoted by \(A_{1},A_{2},A_{3}\), chosen from the ImageNet validation dataset. They are photos of a miniature poodle, a Persian cat, and a Saluki (see Figure 4). Each photo is represented by a 224 x 224 x 3 array. We generated \(50\) data points using the formulas \(\sin\theta A_{1}+\cos\theta A_{2}\) and \(A_{3}+\sin\theta A_{1}+\cos\theta A_{2}\), respectively, with \(\theta\) consisting of \(50\) points uniformly sampled from \([0,2\pi]\). We calculated the bit vectors via ResNet-\(50\) for the \(50\) data points and label them as \(s^{(1)},s^{(2)},\ldots s^{(50)}\). We built a \(50\times 50\) distance matrix \(G\) by setting \(G_{i,j}\) equal to the Hamming distance between \(s^{(i)}\) and \(s^{(j)}\). We input this matrix into Ripser which returned the \(H_{0}\) and \(H_{1}\) barcodes. The result of this experiment can be found in Figure 5.
We note that there is homological noise (or homological dust) in Figure 5(a) but none in Figure 5(b). In both examples, there is a strong signal representing the circle. The sample points in the second example have all positive entries while the sample points in the first example do not have this property. We were unsure why (or if) this is related to the homological noise but we found it interesting nevertheless. It is a potentially useful feature that the input space to a neural network simultaneously has two kinds of distance measures. The first derives from the standard Euclidean geometry and the second derives from the coarse geometry implied by the Hamming distance. If one applies an isometry to a data set then its pairwise distance matrix will not change. However, if one applies an isometry with respect to one metric but measure distance via the second metric then one can definitely observe a change. It may be useful to combine information from multiple measurements. One way to carry this out is by using a monotone Boolean function (i.e Boolean functions built using only **and** and **or** operations). The next example illustrates this approach using a small rotation as the isometry in order to produce two distance matrices arising from essentially the same data set.
_Example 4.4_.: Using the same \(A_{1},A_{2}\) as in Example 4.3, we uniformly generated 50 points using \(\sin\theta A_{1}+\cos\theta A_{2}\) with \(\theta\) sampled from \([0,2\pi]\) and from \([1,2\pi+1]\), respectively. We calculated the bit vectors via ResNet-50 for the \(50\) data points sampled from \([0,2\pi]\) and labelled them \(s^{(1)},s^{(2)},\ldots s^{(50)}\). Similarly, we label the bit vectors of the \(50\) data points sampled from \([1,2\pi+1]\) as \(\bar{s}^{(1)},\bar{s}^{(2)},\ldots\bar{s}^{(50)}\). We built four distance matrices \(G^{1},G^{2},G^{3}\), and \(G^{4}\), with \(G^{1}_{i,j}\) equal to the Hamming distance between \(s^{i}\) and \(s^{j}\), \(G^{2}_{i,j}\) equal to the Hamming distance between \(\bar{s}^{i}\) and \(\bar{s}^{j}\), \(G^{3}_{i,j}\) equal to the maximum of these two distances, and \(G^{4}_{i,j}\) equal to their minimum. The \(H_{1}\) barcodes for the four cases are presented in Figure 6.
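The monotone Boolean combinations of this example reduce to elementwise operations on the two Hamming matrices; the following sketch (with hypothetical inputs `G1`, `G2`) mirrors the construction of \(G^{3}\) and \(G^{4}\).

```python
import numpy as np

def combine_distances(G1, G2):
    """Monotone Boolean combinations of two distance matrices.

    In the filtration of Eq. (13), max(G1, G2) <= t holds iff both
    distances are <= t (the "and" combination, G3), while
    min(G1, G2) <= t holds iff either distance is <= t (the "or"
    combination, G4).
    """
    return np.maximum(G1, G2), np.minimum(G1, G2)
```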
The distance matrix corresponding to the max function corresponds to requiring that distance matrices \(G_{1}\) and \(G_{2}\) both have their corresponding entry below some threshold. In other words, this corresponds to applying the **and** Boolean function. Similarly, the min function corresponds to applying the **or** Boolean function. Each seems to improve some feature related to the homological noise, but it is hard to say which of the two is better. In the next example, we see an overall strengthening of the length of the homological signal for the max function and an earlier start of the homological signal for the min function. Homological noise did not make an appearance in the next example.

Figure 4: Images of \(A_{1},A_{2},\) and \(A_{3}\)

Figure 3: \(H_{0}\) and \(H_{1}\) barcode plots
_Example 4.5_.: Using the same \(A_{1},A_{2},A_{3}\) with Example 4.3, we uniformly generated 50 points using \(A_{3}+\sin\theta A_{1}+\cos\theta A_{2}\) from \([0,2\pi]\) and \([1,2\pi+1]\), respectively. As in Example 4.4, we generated 4 different distance matrices \(\hat{G}^{1}\), \(\hat{G}^{2}\), \(\hat{G}^{3}\), and \(\hat{G}^{4}\), respectively. The \(H_{1}\) barcodes for the four cases are presented in Figure 7.
## 5 Conclusion and Future Work
A ReLU feedforward neural network induces a finite polyhedral decomposition of the input space. The corresponding dual graph represents this decomposition, where vertices correspond to polyhedra and edges represent shared facets. Data in the input space gets mapped to vertices of this graph. The geometry of the graph can be exploited, via persistent homology, to detect homological signals of manifolds from points sampled from the manifold. Many techniques in data analysis build on the premise that data, sharing a collection of common features, tends to be well approximated by a low dimensional geometric object. It is conjectured that the coarseness of the polyhedral decomposition can be helpful in dealing with the noise present in many data sets (maybe by including monotone Boolean functions to combine multiple distance matrices). In future work, we hope to extend the examples in this paper with the goal of detecting manifolds, known to exist in data, such as \(SO(3)\), higher dimensional tori, and various fiber bundles involving these manifolds. A torus example can be found in Appendix C.
Neural networks of the kind considered in this paper have a finite number of "polyhedral resources". The training function, the distribution of training data, the network architecture, and the training method are some of the factors in how the network utilizes its polyhedra. In the networks we considered, we observed the general pattern that there was a dense collection of smaller volume polytopes near the training data, larger volume polytopes in the general vicinity but away from the training data, with a shell of unbounded polyhedra surrounding the polytopes. You can see hints of this tendency in Figure 1(a). In future work, we hope to make more precise characterizations in the direction of this observation. In particular, we are eager to extend this work to depictions that more accurately quantify the number of polytopes that emerge from a neural network and their distributions of volumes, the "polytopes landscape".
## 6 Acknowledgements
This work is partially supported by the United States Air Force under Contract No. FA865020C1121 and the DARPA Geometries of Learning Program under Award No. HR00112290074. |
2309.15111 | SGD Finds then Tunes Features in Two-Layer Neural Networks with
near-Optimal Sample Complexity: A Case Study in the XOR problem | In this work, we consider the optimization process of minibatch stochastic
gradient descent (SGD) on a 2-layer neural network with data separated by a
quadratic ground truth function. We prove that with data drawn from the
$d$-dimensional Boolean hypercube labeled by the quadratic ``XOR'' function $y
= -x_ix_j$, it is possible to train to a population error $o(1)$ with $d
\:\text{polylog}(d)$ samples. Our result considers simultaneously training both
layers of the two-layer-neural network with ReLU activations via standard
minibatch SGD on the logistic loss. To our knowledge, this work is the first to
give a sample complexity of $\tilde{O}(d)$ for efficiently learning the XOR
function on isotropic data on a standard neural network with standard training.
Our main technique is showing that the network evolves in two phases: a
$\textit{signal-finding}$ phase where the network is small and many of the
neurons evolve independently to find features, and a $\textit{signal-heavy}$
phase, where SGD maintains and balances the features. We leverage the
simultaneous training of the layers to show that it is sufficient for only a
small fraction of the neurons to learn features, since those neurons will be
amplified by the simultaneous growth of their second layer weights. | Margalit Glasgow | 2023-09-26T17:57:44Z | http://arxiv.org/abs/2309.15111v2 | SGD Finds then Tunes Features in Two-Layer Neural Networks with Near-Optimal Sample Complexity: A Case Study in the XOR problem
###### Abstract
In this work, we consider the optimization process of minibatch stochastic gradient descent (SGD) on a 2-layer neural network with data separated by a quadratic ground truth function. We prove that with data drawn from the \(d\)-dimensional Boolean hypercube labeled by the quadratic "XOR" function \(y=-x_{i}x_{j}\), it is possible to train to a population error \(o(1)\) with \(d\,\text{polylog}(d)\) samples. Our result considers simultaneously training both layers of the two-layer-neural network with ReLU activations via standard minibatch SGD on the logistic loss. To our knowledge, this work is the first to give a sample complexity of \(\tilde{O}(d)\) for efficiently learning the XOR function on isotropic data on a standard neural network with standard training. Our main technique is showing that the network evolves in two phases: a _signal-finding_ phase where the network is small and many of the neurons evolve independently to find features, and a _signal-heavy_ phase, where SGD maintains and balances the features. We leverage the simultaneous training of the layers to show that it is sufficient for only a small fraction of the neurons to learn features, since those neurons will be amplified by the simultaneous growth of their second layer weights.
## 1 Introduction
Stochastic gradient descent (SGD) is the primary method of training neural networks in modern machine learning. Despite the empirical success of SGD, there are still many questions about why SGD is often able to efficiently find good local minima in the non-convex optimization landscape characteristic of training neural networks.
A growing body of work aims to theoretically understand the optimization dynamics and sample complexity of learning natural classes of functions via SGD on neural networks. A particularly well-understood regime in this regard is the neural tangent kernel (NTK) [Jacot et al., 2021], where the network only moves a small distance from its initialization. However, in many cases, the NTK provably requires a poor sample complexity to generalize [Abbe et al., 2022].
More recent work aims to prove convergence results for SGD on neural networks with tight sample complexity guarantees. A natural test-bed for this, which has garnered a lot of attention, is learning target functions that are inherently low-dimensional, depending only on a constant number of dimensions of the data [Chen and Meka, 2020, Chen et al., 2020, Nichani et al., 2022, Barak et al., 2022, Bietti et al., 2022, Mousavi-Hosseini et al., 2022, Refinetti et al., 2021, Abbe et al., 2021, 2022, 2023]. Such functions, often called _sparse_ or _multi-index_ functions, can be written as \(f(x):=g(Ux)\), where \(U\in\mathbb{R}^{k\times d}\) has orthogonal rows, and \(g\) is a function on \(\mathbb{R}^{k}\). Many works have shown that learning such target functions via SGD on neural networks is possible with far fewer samples than achievable by kernel methods [Chen et al., 2020, Bai and Lee, 2019, Damian et al., 2022, Abbe et al., 2021, 2022, 2023]. The results in these papers apply to a large class of ground truth functions, and have greatly enhanced our understanding of the sample complexity necessary for learning via SGD on neural networks.
The limitation of the aforementioned works is that they typically modify the SGD algorithm in ways that don't reflect standard training practices, for example using layer-wise training, changing learning rates, or clipping. While providing strong guarantees on certain subclasses of multi-index functions, such modifications may limit the ability of SGD to learn broader classes of multi-index functions with good sample complexity. We discuss this more in the context of related work in Section 1.1.
The goal of this paper is to show that for a simple but commonly-studied problem, standard minibatch SGD on a two-layer neural network can learn the ground truth function in near-optimal sample complexity. In particular, we prove in Theorem 3.1 that a polynomial-width ReLU network trained via online minibatch SGD on the logistic loss will classify the boolean XOR function \(f(x):=-x_{i}x_{j}\) with a sample complexity of \(\tilde{O}(d)\).1 We study the XOR function because it is one of the simplest test-beds for a function which exhibits some of the core challenges of analyzing SGD on neural networks: a random initialization is near a saddle point, and the sample complexity attainable by kernel methods is suboptimal (see further discussion in Section 1.1).
Footnote 1: We consider this near-optimal in the sense that for algorithms that are rotationally invariant, \(\tilde{\Theta}(d)\) samples are required. See Section F for details.
Despite its simplicity, the prior theoretical understanding of learning the XOR function via SGD on standard networks is lacking. It is well-known that the NTK requires \(\Theta(d^{2})\) samples to learn this function (Wei et al., 2019; Ghorbani et al., 2021; Abbe et al., 2023). Wei et al. (2019) showed that \(\tilde{O}(d)\) samples statistically suffice, either by finding the global optimum of a two-layer network, or by training an infinite-width network, both of which are computationally intractable. Similar guarantees of \(\tilde{O}(d)\) are given by Bai and Lee (2019) and Chen et al. (2020); however, such approaches rely on drastically modifying the network architecture and training algorithm to achieve a quadratic neural tangent kernel. Abbe et al. (2023) proves a sample complexity of \(\tilde{O}(d)\) for the XOR problem, but uses an algorithm which assumes knowledge of the coordinate system under which the data is structured, and is thus not rotationally invariant. It is also worth noting that several works have studied the XOR problem with non-isotropic data, where the cluster separation grows to infinity (Frei et al., 2022; Ben Arous et al., 2022), in some cases yielding better sample complexities.
The main approach in our work is showing that while running SGD, the network naturally evolves in two phases. In the first phase, which we call the _signal-finding_ phase, the network is small, and thus we can show that a sufficient fraction of the neurons evolve independently, similarly to how they would evolve if the output of the network was zero. Phase 1 is challenging because it requires moving away from the saddle near where the network is initialized, which requires super-constant time (here we use "time" to mean the number of iterations times step size). This rules out using the mean field model approach as in Mei et al. (2018, 2019), or showing convergence to a lower-dimensional SDE as in Ben Arous et al. (2022), which both break down after constant time when directly applied to our setting.
After the signal components in the network have become large enough to dominate the remaining components, the network evolves in what we call the _signal-heavy_ phase. In this phase, we show inductively that throughout training, the signal components stay significantly larger than their counterparts. This inductive hypothesis allows us to approximate the output of the network on a sample \(x\) by its _clean_ approximation, given by a network where all the non-signal components have been removed. Under this approximation, the dynamics of the network are easier to compute, and we can show that the signal components will grow and rebalance until all four of the clusters in the XOR problem have sufficiently small loss.
Our Phase 2 analysis leverages the simultaneous training of both layers to show that the dominance of the signal components will be maintained throughout training. In particular, we show once individual neurons become signal heavy, their second layer weights become large, and thus a positive feedback cycle between the first and second layer weights of that neuron causes it to grow faster than non-signal-heavy neurons. This allows us to maintain the signal-heavy inductive hypothesis. If we only trained the first layer, and all second layer weights had equal absolute value, then unless we have strong control over the balance of the clusters, it would be possible for the non-signal components to grow at a rate which is on the same order as the rate of the signal components (see Remark 4.3).
### Related Work
Learning Multi-Index Functions via Neural NetworksMost related to our work is a body of work aiming to understand the sample complexity of learning multi-index functions via SGD on neural networks (Bietti et al., 2022; Refinetti et al., 2021; Chen et al., 2020; Abbe et al., 2021; Abbe et al., 2022; Damian et al., 2022; Barak et al., 2022; Daniely and Malach, 2020; Mousavi-Hosseini et al., 2022; Nichani et al., 2022; Ge et al., 2017; Mahankali et al., 2023). Such functions are typically studied in either the Gaussian data setting where \(x\sim\mathcal{N}(0,I_{d})\), or in the Boolean hypercube setting, where \(x\sim\text{Uniform}(\{\pm 1\}^{d})\). In both cases, we have \(f(x):=g(Ux)\), where \(U\) projects \(x\) onto a lower dimensional space of dimension \(k\), and \(g\) is an arbitrary function on \(k\) variables. In the Boolean setting, \(U\) projects onto a subset of \(k\) coordinates of \(x\), so in the case of the XOR function we study, \(k=2\) and \(g\) is a quadratic function.
Chen and Meka (Chen and Meka, 2020) showed that when \(k\) is constant, and \(g\) is a degree-\(D\) polynomial for constant \(D\), there exists a polynomial-time algorithm which learns such multi-index functions on Gaussian covariates in \(\tilde{O}(d)\) samples. Such algorithms can also be emulated in the same sample complexity via SGD on neural networks designed to emulate arbitrary Statistical Query algorithms (Abbe and Sandon, 2020; Abbe et al., 2021), though these networks bear little similarity to standard neural networks used in practice.
The sample complexity of learning multi-index functions via SGD on standard neural networks is an open and active area of research. It is known that the neural tangent kernel (and more generally, kernel methods) require \(\Omega(d^{D})\) samples (Hsu, 2021). A line of work by Abbe et al. (Abbe et al., 2021, 2022, 2023) has conjectured that the sample complexity required for SGD is \(\tilde{\Theta}(d^{\max(L-1,1)})\), where \(L\) denotes the "leap complexity", a measure of hierarchical structure upper bounded by \(D\), and which equals \(2\) for the XOR function. If true, this conjecture would place the sample complexity of SGD on standard neural networks squarely between that of kernel methods and arbitrary polynomial-time algorithms. When \(L=1\), Abbe et al. (2022) showed via a mean-field analysis that it is possible to learn with \(\Theta(d)\) samples via layer-wise training, where the first layer is trained until it learns the subspace \(U\), and then the second layer is trained as a linear model. For \(L>1\), Abbe et al. (2023) provided a layer-wise SGD algorithm achieving the conjectured complexity, but which assumes knowledge of the coordinate system under which the data is structured. This means the algorithm is not rotationally invariant, barring the network from learning more general multi-index functions. Other works have also used layer-wise training to give similar results for subclasses of multi-index functions (Damian et al., 2022; Mousavi-Hosseini et al., 2022; Barak et al., 2022); Mousavi-Hosseini et al. (2022) studies a setting where \(k=1\) and \(L=1\), while Damian et al. (2022); Barak et al. (2022) study settings where \(L\geq 2\), and use just a single gradient step on the first layer, which requires \(\Omega(d^{L})\) samples. Numerous other works (Tan and Vershynin, 2019; Bietti et al., 2022; Wu et al., 2023) have made progress in the setting of single-index functions (\(k=1\)) when \(L>1\), in some cases achieving tight guarantees that depend on a quantity called the "information exponent" of \(g\), though these methods require training only a single neuron in \(\mathbb{R}^{d}\). The recent work Mahankali et al. (2023) considers training a single-index target function with \(k=2\) and degree \(4\) on a \(2\)-layer neural network via vanilla gradient descent, and shows a sample complexity of \(O(d^{3+\epsilon})\), which improves over kernel methods.
The above discussion highlights a gap in our understanding when \(k\geq 2\) and \(L\geq 2\). Indeed, such a setting is challenging because it requires learning multiple neurons, and escaping one (or more) saddles (Abbe et al., 2023). For this reason, we believe the XOR function (with \(k,L=2\)) is a good stepping stone for understanding the behaviour of SGD on neural networks for more general functions with \(k\geq 2,L\geq 2\). We note that several other works (Bai and Lee, 2019; Chen et al., 2020) have achieved a near-optimal sample complexity of \(\tilde{O}(d)\) for the XOR problems; these works use a non-standard architecture and training algorithm which puts SGD into a quadratic NTK regime. While such a regime can often attain sample complexities beating the standard (linear) NTK, in general this method yields complexities of \(\tilde{O}(d^{D-1})\), which is larger than the rate achieved by Abbe et al. (2022) whenever \(L=1\) and \(D\geq 3\). We emphasize that our work achieves the near-optimal sample complexity \(\tilde{O}(d)\) with a standard two-layer neural network, trained with standard minibatch SGD.
We note that many more works have explored both empirically (e.g., Woodworth et al., 2020; Chizat et al., 2019) and theoretically (e.g., Li et al., 2020; Allen-Zhu and Li, 2020; Suzuki and Akiyama, 2020; Telgarsky, 2022) the sample-complexity advantages of "rich" SGD training over the "lazy" NTK regime.
Simultaneous Training of LayersWhile many of the works mentioned above use layer-wise training algorithms, the standard empirical practice is to train all layers simultaneously. Several theoretical works
explore this setting, uncovering implicit biases of ReLU (or other homogeneous) networks trained simultaneously (Wei et al., 2019; Chizat and Bach, 2020; Lyu and Li, 2019; Lyu et al., 2021; Maennel et al., 2018). Under a variety of assumptions, these works have related the solutions found via gradient descent to margin-maximizing solutions. A much finer understanding of the implicit bias of simultaneous training is provided for a line of work on diagonal neural networks (Pesme and Flammarion, 2023; Even et al., 2023).
### Organization of Paper
In Section 2, we formally describe the data and training model we study. In Section 3, we state our result. In Section 4, we give an overview of the proof technique. In Section 5, we discuss the limitations of our work, takeaways, and open questions. All proofs are given in the Appendix.
### Notation
For a vector \(v\), we use \(\|v\|\) to denote the \(\ell_{2}\) norm, and \(\|v\|_{1}\) to denote the \(\ell_{1}\) norm. We use \(\|M\|_{2}\) to denote the spectral norm of a matrix \(M\). All big-O notation is with respect to \(d\to\infty\), and we use \(\tilde{O}\) to suppress log factors in big-O notation. \(\omega(1)\) denotes growing to infinity with \(d\). We use \(\mathbb{S}^{d-1}(r)\) to denote the sphere of radius \(r\) in \(d\) dimensions, and \(\mathbf{1}(\cdot)\) to denote the indicator variable of an event.
## 2 Model and Setting
### Data.
We study the setting where the data comes from the Boolean hypercube \(x\sim\text{Uniform}(\{-1,1\}^{d})\), and the label \(y\) is given by \(y(x)=\text{XOR}(x_{1},x_{2}):=-x_{1}x_{2}\).
Note that with \(\mu_{1}:=e_{1}-e_{2}\), and \(\mu_{2}:=e_{1}+e_{2}\), we can model the distribution as
\[(x,y)=\begin{cases}(\mu_{1}+\xi,\,1)&\text{w.p. }1/4\\ (-\mu_{1}+\xi,\,1)&\text{w.p. }1/4\\ (\mu_{2}+\xi,\,-1)&\text{w.p. }1/4\\ (-\mu_{2}+\xi,\,-1)&\text{w.p. }1/4,\end{cases}\]
where \(\xi\sim\text{Uniform}(0^{2}\times\{-1,1\}^{d-2})\) so that \(\xi\perp\{\mu_{1},\mu_{2}\}\). We will often write
\[x=z+\xi,\]
where \(z\) is the projection of \(x\) onto the space spanned by \(e_{1}\) and \(e_{2}\), and \(\xi\) is the projection of \(x\) orthogonal to \(e_{1}\) and \(e_{2}\). We denote this distribution by \(P_{d}\), and throughout, it is implicitly assumed that all probabilities and expectations over \(x\) are for \(x\sim P_{d}\).
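For concreteness, a minimal NumPy sketch of this data model (the function name is ours, and this is purely illustrative): since \(x\) is uniform on the hypercube and \(y(x)=-x_{1}x_{2}\), sampling reduces to two lines.

```
import numpy as np

def sample_xor_batch(m, d, rng):
    # x uniform on {-1,1}^d; equivalently x = z + xi with z in {+-mu1, +-mu2}
    x = rng.choice([-1.0, 1.0], size=(m, d))
    y = -x[:, 0] * x[:, 1]          # XOR label y(x) = -x_1 x_2
    return x, y

rng = np.random.default_rng(0)
x, y = sample_xor_batch(4, 8, rng)  # 4 samples in dimension d = 8
```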
**Remark 2.1**.: _While for simplicity, we state our results for the setting where the data comes from an axis-aligned Boolean hypercube, and where ground truth depends on the first two dimensions, the minibatch SGD algorithm and the initialization of the network will be rotationally invariant. Thus all our results hold for a Boolean hypercube with any basis._
### Training.
Model.We train both layers of a two-layer ReLU network with width \(p\):
\[\frac{1}{p}\sum_{j=1}^{p}a_{j}\sigma(w_{j}^{T}x),\]
where \(\sigma(\alpha)=\max(0,\alpha)\) is the ReLU function. We will use the variable \(\rho:=\frac{1}{p}\sum_{j=1}^{p}\mathbf{1}_{(w_{j},a_{j})}\) to denote the empirical distribution of the neurons and their second layer weights. Thus we denote
\[f_{\rho}(x):=\mathbb{E}_{w,a\sim\rho}a\cdot\sigma(w^{T}x),\]
We will often abuse notation and write probabilities and expectations using \(w\sim\rho\), and use \(a_{w}\) to denote its associated second layer weight. We note that it is not necessarily the case that the second layer weight \(a_{w}\) is a _function_ of \(w\); we do this for the convenience of not indexing each pair as \((w_{j},a_{j})\).
Initialization.We initialize the network with \(w_{j}\sim\text{Uniform}(\mathbb{S}^{d-1}(\theta))\) for a scale parameter \(\theta\), such that \(\|w_{j}\|=\theta\). We initialize the second layer as \(a_{j}=\epsilon_{j}\|w_{j}\|\), where \(\epsilon_{j}\sim\text{Uniform}(\pm 1)\).
Minibatch SGD.We train using minibatch SGD on the logistic loss function
\[\ell_{\rho}(x):=-2\log\left(\frac{1}{1+\exp(-y(x)f_{\rho}(x))}\right),\]
and define the population loss \(L_{\rho}:=\mathbb{E}_{x\sim P}\ell_{\rho}(x)\). We will use the shorthand \(\ell_{\rho}^{\prime}(x)\) to denote the derivative of \(\ell_{\rho}(x)\) with respect to \(f_{\rho}(x)\):
\[\ell_{\rho}^{\prime}(x):=-\frac{2y(x)\exp(-y(x)f_{\rho}(x))}{1+\exp(-y(x)f_{ \rho}(x))}.\]
We use \(\rho_{t}\) to denote the empirical distribution of the \(p\) neurons \((w^{(t)},a_{w}^{(t)})\) at iteration \(t\). At each step, we perform the minibatch SGD update
\[w^{(t+1)}:=w^{(t)}-\eta\nabla\hat{L}_{\rho}(w^{(t)})\qquad a_{w}^{(t+1)}:=a_{w }^{(t)}-\eta\nabla\hat{L}_{\rho}(a_{w}^{(t)}).\]
Here \(\hat{L}_{\rho}=\frac{1}{m}\sum_{x^{(i)}\in M_{t}}\ell_{\rho}(x^{(i)})\) denotes the empirical loss with respect to a minibatch \(M_{t}\) of \(m\) random samples chosen i.i.d. from \(P_{d}\) at step \(t\), and for a loss function \(L\) and a parameter \(u\) in the network, \(\nabla_{u}L:=p\frac{\partial L}{\partial u}\) denotes the scaled partial derivative of the loss with respect to \(u\), defined in particular for a neuron \((w,a_{w})\), as follows: 23
Footnote 2: Since the ReLU function is non-differentiable at zero, we define \(\sigma^{\prime}(0)=0\).
Footnote 3: For convenience, we scale this derivative up by a factor of \(p\) to correspond to the conventional scaling in the mean-field model. Of course if we didn’t perform this scaling, we would achieve the same result by scaling the learning rate \(\eta\).
\[\nabla_{w}\hat{L}_{\rho} =\frac{1}{m}\sum_{x^{(i)}\in M_{t}}\frac{\partial}{\partial w}p \ell_{\rho}(x^{(i)})=\frac{1}{m}\sum_{x^{(i)}\in M_{t}}a_{w}\ell_{\rho_{t}}^{ \prime}(x^{(i)})\sigma^{\prime}(w^{T}x^{(i)})x^{(i)};\] \[\nabla_{a_{w}}\hat{L}_{\rho} =\frac{1}{m}\sum_{x^{(i)}\in M_{t}}\frac{\partial}{\partial a_{w }}p\ell_{\rho}(x^{(i)})=\frac{1}{m}\sum_{x^{(i)}\in M_{t}}\ell_{\rho_{t}}^{ \prime}(x^{(i)})\sigma(x_{i}^{T}w).\]
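As an illustration of the update above, here is a minimal NumPy sketch of one minibatch step, vectorized over the \(p\) neurons; all names are ours, and the \(p\)-scaled gradients match the display above (so the \(1/p\) in \(f_{\rho}\) is cancelled).

```
import numpy as np

def minibatch_sgd_step(W, a, X, y, eta):
    # W: (p, d) first-layer weights; a: (p,) second layer; X: (m, d); y: (m,)
    p, m = W.shape[0], X.shape[0]
    pre = X @ W.T                               # pre-activations w_j^T x_i, shape (m, p)
    f = (np.maximum(pre, 0.0) @ a) / p          # f_rho(x_i) with mean-field 1/p scaling
    lp = -2.0 * y / (1.0 + np.exp(y * f))       # l'_rho(x_i), numerically stable form
    act = (pre > 0).astype(float)               # sigma'(w_j^T x_i), with sigma'(0) = 0
    grad_W = a[:, None] * ((lp[:, None] * act).T @ X) / m   # p-scaled gradient in w
    grad_a = (np.maximum(pre, 0.0).T @ lp) / m              # p-scaled gradient in a_w
    return W - eta * grad_W, a - eta * grad_a
```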
## 3 Main Result
The following theorem is our main result.
**Theorem 3.1**.: _There exists a constant \(C>0\) such that the following holds for any \(d\) large enough. Let \(\theta:=1/\log^{C}(d)\). Suppose we train a 2-layer neural network with minibatch SGD as in Section 2.2 with a minibatch size of \(m\geq d/\theta\), width \(1/\theta\leq p\leq d^{C}\), step size \(d^{-C}\leq\eta\leq\theta\), and initialization scale \(\theta/\sqrt{p}\). Then for some \(t\leq C\log(d)/\eta\), with probability \(1-d^{-\omega(1)}\), we have_
\[\mathbb{E}_{x\sim P_{d}}[\ell_{\rho_{t}}(x)]\leq o(1).\]
By setting \(\eta=\theta\) and \(m=d/\theta\), Theorem 3.1 states that we can learn the XOR function up to \(o(1)\) population loss in \(\Theta\left(d\,\text{polylog}(d)\right)\) samples and iterations on a polynomial-width network.
## 4 Proof Overview
Throughout the following section, and in our proofs, we will use the following shorthand to refer to the components of a neurons \(w\). We decompose \(w=w_{1:2}+w_{\perp}\), where \(w_{1:2}\) is the projection of \(w\) in the direction spanned \(e_{1}\) and \(e_{2}\) (and equivalently by \(\mu_{1}=e_{1}-e_{2}\) and \(\mu_{2}=e_{1}+e_{2}\)), and \(w_{\perp}\) is the component of \(w\) in the orthogonal subspace. We further decompose \(w_{1:2}=w_{\text{sig}}+w_{\text{opp}}\) as follows:
\[w_{\text{sig}}=\begin{cases}\frac{1}{2}\mu_{1}\mu_{1}^{T}w&a_{w}\geq 0;\\ \frac{1}{2}\mu_{2}\mu_{2}^{T}w&a_{w}<0.\end{cases}\qquad w_{\text{opp}}= \begin{cases}\frac{1}{2}\mu_{2}\mu_{2}^{T}w&a_{w}\geq 0;\\ \frac{1}{2}\mu_{1}\mu_{1}^{T}w&a_{w}<0.\end{cases}\]
Intuitively, we want the neurons to grow in the \(w_{\rm sig}\) direction, but not the \(w_{\rm opp}\) direction; in a network achieving the maximum normalized margin, we will have \(w=w_{\rm sig}\) exactly, and \(w_{\rm opp}=w_{\perp}=0\). We summarize this notation in Table 1, along with future shorthand we will introduce in this section.
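As a small illustrative helper (ours, not part of the paper's analysis), the decomposition of a neuron can be computed directly; note \(\|\mu_{1}\|^{2}=\|\mu_{2}\|^{2}=2\), so \(\frac{1}{2}\mu\mu^{T}\) is the orthogonal projection onto \(\text{span}(\mu)\).

```
import numpy as np

def decompose(w, a_w, d):
    mu1 = np.zeros(d); mu1[:2] = [1.0, -1.0]    # mu1 = e1 - e2
    mu2 = np.zeros(d); mu2[:2] = [1.0, 1.0]     # mu2 = e1 + e2
    proj1 = (mu1 @ w / 2.0) * mu1               # (1/2) mu1 mu1^T w
    proj2 = (mu2 @ w / 2.0) * mu2               # (1/2) mu2 mu2^T w
    w_sig, w_opp = (proj1, proj2) if a_w >= 0 else (proj2, proj1)
    return w_sig, w_opp, w - w_sig - w_opp      # w_perp = w - w_{1:2}
```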
The main idea of our proof is to break up the analysis of SGD into two main phases. In the first phase, the network is small, and thus we have (for most \(x\)) that the loss \(\ell_{\rho}(x)\) is well approximated by a first order approximation of the loss at \(f_{\rho}=0\), namely
\[\ell_{0}(x;\rho):=-2\log(1/2)-y(x)f_{\rho}(x).\]
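To verify that \(\ell_{0}\) is indeed the first-order expansion, note that \(\ell_{\rho}(x)=2\log\left(1+\exp(-y(x)f_{\rho}(x))\right)\), so that
\[\ell_{\rho}(x)\big|_{f_{\rho}(x)=0}=2\log 2=-2\log(1/2)\qquad\text{and}\qquad\frac{\partial\ell_{\rho}(x)}{\partial f_{\rho}(x)}\Big|_{f_{\rho}(x)=0}=-y(x),\]
giving \(\ell_{\rho}(x)=\ell_{0}(x;\rho)+O(f_{\rho}(x)^{2})\).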
As long as this approximation holds, the neurons of the network evolve (approximately) independently, since \(\ell_{0}^{\prime}(x):=\frac{\partial\ell_{0}(x;\rho)}{\partial f_{\rho}(x)}=-y(x)\) does not depend on the full network \(\rho\). We will show under this approximation that for many neurons, \(\|w_{\rm sig}\|\) grows exponentially fast. Thus we will run this first phase for \(\Theta(\log(d)/\eta)\) iterations until for all four clusters \(\mu\in\{\pm\mu_{1},\pm\mu_{2}\}\), there exists a large set of neurons \(S_{\mu}\) on which \(w_{\rm sig}^{T}\mu>0\), and the "margin" from this set of neurons is large, i.e.,
\[\tilde{\gamma}_{\mu}:=\mathbb{E}_{\rho}[\mathbf{1}(w\in S_{\mu})a_{w}\sigma(w ^{T}\mu)]\gg\mathbb{E}_{\rho}\|w_{\perp}+w_{\rm opp}\|^{2}. \tag{4.1}\]
In Phase 2, we assume that Eq. 4.1 holds, and we leverage the dominance of the signal to show that (1) the signal components \(w_{\rm sig}\) grow faster than \(w_{\rm opp}+w_{\perp}\), and thus Eq. 4.1 continues to hold; and (2) SGD balances the signal components in the 4 cluster directions such that the margins \(\tilde{\gamma}_{\mu}\) balance, and become sufficiently large to guarantee \(o(1)\) loss.
We proceed to describe the analysis in the two phases in more detail. Full proofs are in the Appendix.
### Phase 1
In Phase 1, we approximate the evolution of the network at each gradient step by the gradient step that would occur for a network with output \(0\). The main building blocks of our analysis are estimates of the \(L_{0}:=\mathbb{E}_{x}\ell_{0}(x;\rho)\) population gradients, and bounds on the difference \(\nabla L_{0}-\nabla L_{\rho}\).
\(L_{0}\) population gradients.Since the primary objective of this phase is to grow the neurons in the signal direction, we sketch here the computation of the gradient \(\nabla_{w_{1:2}}L_{0}\) in the subspace spanned by \(\mu_{1},\mu_{2}\). The remaining estimates of \(\nabla L_{0}\) are simpler, and their main objective is to show that \(\nabla_{w_{\perp}}L_{0}\) and \(\nabla_{a_{w}}L_{0}\) are sufficiently small, such that \(\|w_{\perp}\|\) doesn't change much throughout Phase 1, and \(|a_{w}|\) stays approximately the same as \(\|w\|\). For convenience, the reader may assume that \(|a_{w}|=\|w\|\) exactly.4
Footnote 4: When \(\eta\to 0\) as in gradient flow, this equivalence holds exactly for ReLU networks, as long as the initialization satisfies \(|a_{w}|=\|w\|\).
For a data sample \(x\sim\text{Rad}^{d}\), we denote \(x=z+\xi\), where \(z\in\text{Span}(\{\pm\mu_{1},\pm\mu_{2}\})\), and \(\xi\perp\text{Span}(\{\pm\mu_{1},\pm\mu_{2}\})\).
\begin{table}
\begin{tabular}{c|c|c} \hline \hline \(w_{\rm sig}=\begin{cases}\frac{1}{2}\mu_{1}\mu_{1}^{T}w&a_{w}\geq 0\\ \frac{1}{2}\mu_{2}\mu_{2}^{T}w&a_{w}<0\end{cases}\) & \(w_{\rm opp}=\begin{cases}\frac{1}{2}\mu_{2}\mu_{2}^{T}w&a_{w}\geq 0\\ \frac{1}{2}\mu_{1}\mu_{1}^{T}w&a_{w}<0\end{cases}\) & \(\begin{cases}w_{1:2}=w_{\rm sig}+w_{\rm opp}\\ w_{\perp}=w-w_{1:2}\end{cases}\) \\ \hline \(\gamma_{\mu}=f_{\rho}(\mu)y(\mu)\) & \(\gamma_{\rm min}=\min_{\mu\in\{\pm\mu_{1},\pm\mu_{2}\}}\gamma_{\mu}\) & \(\gamma_{\rm max}=\max_{\mu\in\{\pm\mu_{1},\pm\mu_{2}\}}\gamma_{\mu}\) \\ \hline \(g_{\mu}=|\ell_{\rho}^{\prime}(\mu)|\) & \(g_{\rm min}=\min_{\mu\in\{\pm\mu_{1},\pm\mu_{2}\}}|\ell_{\rho}^{\prime}(\mu)|\) & \(g_{\rm max}=\max_{\mu\in\{\pm\mu_{1},\pm\mu_{2}\}}|\ell_{\rho}^{\prime}(\mu)|\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of Notation used in Proof Overview and Proofs
By leveraging the symmetry of the data distribution and the fact that \(y(z)=y(-z)\), we can compute
\[\begin{split}\nabla_{w_{1:2}}L_{0}&=-a_{w}\mathbb{E}_{ x=z+\xi}y(x)\sigma^{\prime}(w^{T}x)z\\ &=-a_{w}\mathbb{E}_{\xi}\frac{1}{2}\mathbb{E}_{z}y(z)\left(\sigma ^{\prime}(w^{T}\xi+w^{T}z)-\sigma^{\prime}(w^{T}\xi-w^{T}z)\right)z\\ &=-a_{w}\mathbb{E}_{\xi}\frac{1}{2}\mathbb{E}_{z}y(z)\mathbf{1}(|w ^{T}z|\geq|w^{T}\xi|)\operatorname{sign}(w^{T}z)z\\ &=-\frac{1}{2}a_{w}\mathbb{E}_{z}y(z)\operatorname{sign}(w^{T}z)z \mathbb{P}_{\xi}[|w^{T}z|\geq|w^{T}\xi|]\\ &\approx-\frac{1}{2}a_{w}\mathbb{E}_{z}y(z)\operatorname{sign}(w ^{T}z)z\mathbb{P}_{G\sim\mathcal{N}(0,\|w\|^{2})}[G\leq|w^{T}z|]\\ &\approx-\frac{1}{2}a_{w}\mathbb{E}_{z}y(z)\operatorname{sign}(w ^{T}z)z\sqrt{\frac{2}{\pi}}\frac{|w^{T}z|}{\|w\|}.\end{split} \tag{4.2}\]
Here the two approximations come from the fact that \(\xi\) has Boolean coordinates rather than Gaussian ones, and from an approximation of the Gaussian distribution, which holds whenever \(\frac{|w^{T}z|}{\|w\|}\) is small. By taking the expectation over \(z\in\{\pm\mu_{1},\pm\mu_{2}\}\), the last line of Eq 4.2 can be shown to evaluate to
\[-\frac{|a_{w}|}{\|w\|\sqrt{2\pi}}w_{\text{sig}}+\frac{|a_{w}|}{\|w\|\sqrt{2\pi} }w_{\text{opp}}. \tag{4.3}\]
Observe that near initialization, this gradient is quite small, since \(\frac{\|w_{\text{sig}}\|}{\|w\|}\) is approximately \(\frac{1}{\sqrt{d}}\) for a random initialization. Nevertheless, this gradient suggests that \(w_{\text{sig}}\) will grow exponentially fast.
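The computation above can be checked numerically. The following NumPy sketch (ours) estimates \(\nabla_{w_{1:2}}L_{0}=-a_{w}\mathbb{E}_{x}[y(x)\sigma^{\prime}(w^{T}x)z]\) by Monte Carlo; for a neuron with \(\|w_{1:2}\|\ll\|w\|\), the estimate should be close to Eq. 4.3 restricted to the first two coordinates.

```
import numpy as np

def mc_grad_L0_12(w, a_w, n=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.choice([-1.0, 1.0], size=(n, w.shape[0]))
    y = -x[:, 0] * x[:, 1]                     # XOR labels
    act = (x @ w > 0).astype(float)            # sigma'(w^T x)
    return -a_w * ((y * act) @ x[:, :2]) / n   # first two coordinates of grad L_0
```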
Bounding the difference \(\nabla L_{0}-\nabla L_{\rho}\).To bound \(\|\nabla_{w}L_{\rho}-\nabla_{w}L_{0}\|_{2}\), first recall that
\[\nabla_{w}L_{0}-\nabla_{w}L_{\rho}=\mathbb{E}_{x}a_{w}(\ell_{\rho}^{\prime}(x )-\ell_{0}^{\prime}(x))\sigma^{\prime}(w^{T}x)x.\]
Defining \(\Delta_{x}:=(\ell_{\rho}^{\prime}(x)-\ell_{0}^{\prime}(x))\sigma^{\prime}(w^{ T}x)\), we can show using routine arguments (see Lemma C.2 for the details) that:
\[\|\nabla_{w}L_{\rho}-\nabla_{w}L_{0}\|_{2}=|a_{w}|\|\mathbb{E}_{ x}\Delta_{x}x\| \leq|a_{w}|\sqrt{\mathbb{E}_{x}\Delta_{x}^{2}} \tag{4.4}\] \[\approx|a_{w}|\sqrt{\mathbb{E}_{x}f_{\rho}(x)^{2}}\] \[\lessapprox|a_{w}|\mathbb{E}_{\rho}[\|a_{w}w\|]\approx\frac{|a_{ w}|}{\operatorname{polylog}(d)}.\]
While this deviation bound is useful for showing that \(w_{\perp}\) doesn't move too much, this bound far exceeds the scale of the gradient in the \(w_{\text{sig}}\) direction, which is on the scale \(\frac{|a_{w}|}{\sqrt{d}}\) near initialization. Fortunately, we can show in Lemma C.3 that the deviation is much smaller on the first two coordinates, namely,
\[\|\nabla_{w_{1:2}}L_{\rho}-\nabla_{w_{1:2}}L_{0}\|_{2}\leq|a_{w}|O(\log^{2}(d) )\left(\mathbb{E}_{\rho}[\|a_{w}w_{1:2}\|]+\mathbb{E}_{\rho}[\|a_{w}w\|] \frac{\|w_{1:2}\|}{\|w\|}\right) \tag{4.5}\]
Note that since near initialization \(\|w_{1:2}\|\ll\|w\|\) for all neurons, this guarantee is much stronger than Eq. 4.4. In fact, since throughout this phase we can show that \(a_{w}\) and \(\|w\|\) change relatively little, staying at the scale \(1/\text{polylog}(d)\), the approximation error in Eq. 4.5 is smaller than the gradient in the \(w_{\text{sig}}\) direction (Eq. 4.3) whenever say \(\|w_{\text{sig}}\|\geq 100\mathbb{E}_{\rho}[\|a_{w}w_{1:2}\|]\), which occurs on a substantial fraction of the neurons.
Lemma C.3 is the most important lemma in our Phase 1 analysis. At a high level, it shows that the approximation error \(\|\nabla_{w_{1:2}}L_{\rho}-\nabla_{w_{1:2}}L_{0}\|_{2}\) can be coupled with the growth of the signal, \(-(\nabla_{w}L_{0})^{T}\frac{w_{\text{sig}}}{\|w_{\text{sig}}\|}\). This is because we use a symmetrization trick with the pairs \(z+\xi\) and \(-z+\xi\) to show that both the error and the signal gradient only grow from samples \(x=z+\xi\) where \(|z^{T}w|\geq|\xi^{T}w|\).
In more detail, to prove Eq. 4.5, we also need to leverage the fact that for any \(\xi\in\{\mu_{1},\mu_{2}\}^{\perp}\) and \(z\in\{\pm\mu_{1},\pm\mu_{2}\}\), we have \(|\ell_{\rho}^{\prime}(\xi+z)-\ell_{\rho}^{\prime}(\xi-z^{\prime})|\leq 4p\mathbb{E}_{ \rho}[\|a_{w}w_{1:2}\|]\), much smaller than we can expect
\(|\ell^{\prime}_{\rho}(x)-\ell^{\prime}_{0}(x)|\) to be. Thus \(|\Delta_{\xi+z}-\Delta_{\xi-z}|\leq 4p\mathbb{E}_{\rho}[\|a_{w}w_{1:2}\|]\) whenever \(|\xi^{T}w|\geq|z^{T}w|\) (such that \(\sigma^{\prime}(w^{T}(\xi+z))=\sigma^{\prime}(w^{T}(\xi-z))\)). Following the symmetrization trick in Eq. 4.2, we have
\[\left\|\frac{1}{a_{w}}\left(\nabla_{w_{1:2}}L_{\rho}-\nabla_{w_{1: 2}}L_{0}\right)\right\| =\|\mathbb{E}_{x}\Delta_{x}z\|\] \[=\|\mathbb{E}_{\xi}\mathbb{E}_{z}\Delta_{\xi+z}z\|\] \[=\frac{1}{2}\|\mathbb{E}_{\xi}\mathbb{E}_{z}(\Delta_{\xi+z}- \Delta_{\xi-z})z\|\] \[\leq 2\sqrt{2}\mathbb{E}_{\rho}[\|a_{w}w_{1:2}\|]+\sqrt{2} \mathbb{E}_{\xi}\mathbb{E}_{z}\mathbf{1}(|\xi^{T}w|\leq|z^{T}w|)|\Delta_{x}|.\]
A careful computation comparing \(w^{T}\xi\) to a Gaussian distribution then shows that
\[\mathbb{E}_{z}\mathbf{1}(|\xi^{T}w|\leq|z^{T}w|)|\Delta_{x}|\approx\left( \mathbb{P}_{x}[|\xi^{T}w|\leq|z^{T}w|]\right)(\mathbb{E}_{x}|\Delta_{x}|) \lessapprox\frac{\|w_{1:2}\|}{\|w\|}\mathbb{E}_{\rho}[\|a_{w}w\|].\]
Putting Phase 1 TogetherThe building blocks above, combined with standard concentration bounds on \(\nabla\hat{L}_{\rho}\), suffice to show that a substantial mass of neurons will evolve according to Eq 4.3, leading to exponential growth in \(w_{\text{sig}}\). After \(\Theta(\log(d)/\eta)\) iterations, for these neurons, we can achieve \(\|w_{\text{sig}}\|\gg\|w_{\perp}+w_{\text{opp}}\|\). Formally, we show the following for some \(\zeta\leq 1/\text{polylog}(d)\):
**Lemma 4.1** (Output of Phase 1: Informal; See Lemma C.1 for formal version).: _With high probability, for \(\eta\leq\tilde{O}(1)\), after some \(T=\Theta(\log(d)/\eta)\) iterations of minibatch SGD, with \(m=\tilde{\Theta}(d)\) samples in each minibatch, the network \(\rho_{T}\) satisfies:_
1. \(\mathbb{E}_{\rho_{T}}[\|w_{\perp}+w_{opp}\|^{2}]\leq\theta\)_._
2. _For each_ \(\mu\in\{\pm\mu_{1},\pm\mu_{2}\}\)_, on at least a_ \(0.1\) _fraction of all the neurons, we have_ \(w_{\text{sig}}^{T}\mu>0\)_, and_ \(\|w_{\text{sig}}\|^{2}\geq\zeta^{-1}\theta\)_._
We remark that the analysis to prove Lemma 4.1 is somewhat subtle, since the tight approximation in Eq 4.2 breaks down when \(\|w_{\text{sig}}\|\) approaches \(\|w_{\perp}\|\). The details are given in Appendix C.
### Phase 2
The conclusion of Lemma 4.1 is a sufficient condition of the network to begin the second phase. In the second phase, we have that (for most \(x\))
\[\ell^{\prime}_{\rho}(x)\approx\ell^{\prime}_{\rho}(z), \tag{4.6}\]
where we recall that \(z\) is the component of \(x\) in the space spanned by \(\mu_{1}\) and \(\mu_{2}\). We refer to this as the _clean_ loss derivative, and our main tool will be analyzing the evolution of SGD under this clean surrogate for the loss derivative. Namely, we define:
\[\nabla^{\text{cl}}_{w}L_{\rho}:=a_{w}\mathbb{E}_{x}\ell^{\prime}_{\rho}(z) \sigma^{\prime}(w^{T}x)x\quad\text{and}\quad\nabla^{\text{cl}}_{a_{w}}L_{\rho }:=\mathbb{E}_{x}\ell^{\prime}_{\rho}(z)\sigma(w^{T}x). \tag{4.7}\]
Before proceeding, we introduce the following definitions, which will be useful in Phase 2 (summarized in Table 1):
\[\gamma_{\min}:=\min_{\mu\in\{\pm\mu_{1},\pm\mu_{2}\}}\gamma_{\mu}\qquad\qquad g_{\min}:=\min_{\mu\in\{\pm\mu_{1},\pm\mu_{2}\}}|\ell_{\rho}^{\prime}(\mu)|=\frac{\exp(-\gamma_{\max})}{1+\exp(-\gamma_{\max})}\]
\[\gamma_{\max}:=\max_{\mu\in\{\pm\mu_{1},\pm\mu_{2}\}}\gamma_{\mu}\qquad\qquad g_{\max}:=\max_{\mu\in\{\pm\mu_{1},\pm\mu_{2}\}}|\ell_{\rho}^{\prime}(\mu)|=\frac{\exp(-\gamma_{\min})}{1+\exp(-\gamma_{\min})}\]
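For intuition, these quantities are cheap to evaluate on a finite network; here is a small helper (ours) that computes \(\gamma_{\mu}=f_{\rho}(\mu)y(\mu)\) at the four cluster means and the induced \(g_{\min},g_{\max}\):

```
import numpy as np

def cluster_margins(W, a):
    # W: (p, d) first-layer weights, a: (p,) second layer
    d = W.shape[1]
    mu1 = np.zeros(d); mu1[:2] = [1.0, -1.0]   # mu1 = e1 - e2, label +1
    mu2 = np.zeros(d); mu2[:2] = [1.0, 1.0]    # mu2 = e1 + e2, label -1
    gamma = {}
    for name, mu, y_mu in [("+mu1", mu1, 1), ("-mu1", -mu1, 1),
                           ("+mu2", mu2, -1), ("-mu2", -mu2, -1)]:
        f_mu = np.maximum(W @ mu, 0.0) @ a / W.shape[0]    # f_rho(mu)
        gamma[name] = y_mu * f_mu                          # gamma_mu = y(mu) f_rho(mu)
    gamma_min, gamma_max = min(gamma.values()), max(gamma.values())
    g = lambda t: np.exp(-t) / (1.0 + np.exp(-t))          # |l'_rho(mu)| given margin
    return gamma, g(gamma_max), g(gamma_min)               # margins, g_min, g_max
```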
To ensure the approximation in Eq. 4.6 holds throughout the entire second phase, we will maintain a certain inductive hypothesis, which ensures that the scale of the signal-direction components of the network continues to dominate the scale of the non-signal-direction components of the network. Formally, we consider the following condition.
**Definition 4.2** (Signal-Heavy Inductive Hypothesis).: _For parameters \(\zeta=o(1)\) and \(H>1\) with \(\zeta\leq\exp(-10H)\), we say a network is \((\zeta,H)\)-signal-heavy if there exists some set of heavy neurons \(S\) on which \(\exp(6H)\|w_{\perp}\|+\|w_{opp}\|\leq\zeta\|w_{sig}\|\), and_
\[\mathbb{E}_{\rho}\mathbf{1}(w\notin S)\|w\|^{2}\leq\zeta\tilde{\gamma}_{min}.\]
_Here we have defined \(\tilde{\gamma}_{\mu}:=\mathbb{E}[\mathbf{1}(w\in S,w_{sig}^{T}\mu>0)a_{w}\sigma(w^{T}\mu)]\) and \(\tilde{\gamma}_{min}:=\min_{\mu\in\{\pm\mu_{1},\pm\mu_{2}\}}\tilde{\gamma}_{\mu}\)._
_Further,_
\[\mathbb{E}_{\rho}[\|w\|^{2}]\leq\mathbb{E}_{\rho}[|a_{w}|^{2}]+\zeta H\leq 2H,\]
_and for all neurons, we have \(|a_{w}|\leq\|w\|\)._
We show via a straightforward argument in Lemma D.4 that if the conclusion of Lemma 4.1 (from Phase 1) holds for some \(\zeta\), then the network is \((\Theta(\zeta^{1/3}),H)\)-signal-heavy, for \(H=\Theta(\log\log(d))\).
Assuming that the network is \((\zeta,H)\)-signal-heavy, using a similar approach to Eq. 4.4, we can show (see Lemma D.5 for the precise statement) that for any neuron \((w,a_{w})\),
\[\frac{1}{|a_{w}|}\|\nabla_{w}L_{\rho}-\nabla_{w}^{\mathrm{cl}}L_{\rho}\|_{2} \lessapprox\sqrt{\mathbb{E}_{x}(f_{\rho}(x)-f_{\rho}(z))^{2}}\lessapprox \mathbb{E}_{\rho}[\|a_{w}w_{\perp}\|]\leq\zeta\gamma_{\max},\]
and similarly \(\|\nabla_{a_{w}}L_{\rho}-\nabla_{a_{w}}^{\mathrm{cl}}L_{\rho}\|_{2}\lessapprox \|w\|\zeta\gamma_{\max}\).
By working with the clean gradients, it is possible to approximately track (or bound) the evolution of \(w_{\text{sig}}\), \(w_{\perp}\), and \(w_{\text{opp}}\) on neurons in \(S\), the set of neurons for which \(\|w_{\text{sig}}\|\gg\|w_{\perp}+w_{\text{opp}}\|\). In Lemmas D.6, D.7, and D.8 we show the following for any \(w\in S\) (let \(\mu\) be the direction of \(w_{\text{sig}}\)):
1. **The signal component \(w_{\text{sig}}\) grows quickly.** We have \(-w_{\text{sig}}^{T}\nabla_{w}^{\mathrm{cl}}L_{\rho}\approx|a_{w}\ell_{\rho}^{ \prime}(\mu)|\tau\), where \(\tau:=\frac{\sqrt{2}}{4}\). Also \(a_{w}\) grows at a similar rate. This growth is due to the fact that points with \(z=-\mu\) will almost never activate the ReLU, while points with \(z=\mu\) almost always will.
2. **A linear combination of \(\|w_{\perp}\|^{2}\) and \(\|w_{\text{opp}}\|^{2}\) decreases.** The argument here is more subtle, but the key idea is to argue that if \(|w_{\perp}^{T}\xi|\geq|w_{\text{opp}}^{T}z|\) frequently, then \(\|w_{\perp}\|^{2}\) will decrease. Meanwhile, if \(|w_{\perp}^{T}\xi|\leq|w_{\text{opp}}^{T}z|\) frequently, then \(w_{\text{opp}}\) will decrease (and there is a sizeable event on which they both decrease).
Since most of the mass of the network is in \(S\), this shows that the signal will grow at the exponential rate \(\tau|\ell_{\rho}^{\prime}(\mu)|\) -- for the "weakest" cluster, that is, in the direction \(\mu\) that minimizes \(\tilde{\gamma}_{\mu}\), we will have \(\tilde{\gamma}_{\min}^{(t+1)}\gtrapprox(1+2\eta\tau g_{\max})\,\tilde{\gamma}_{\min}^{(t)}\).
On neurons outside of \(S\), we show in Lemma D.11 that they grow _at most_ as fast as the rate of the weakest clusters, meaning we can essentially ignore these neurons.
**Remark 4.3**.: _If we did not train the second layer weights (and for instance they all had norm \(1\)), then our tools would not suffice to maintain the signal-heavy hypothesis in Definition 4.2. Indeed, the neurons in \(S\) would grow at a linear rate of \(\tau|\ell_{\rho}^{\prime}(\mu)|\), and at (up to) an equal linear rate outside of \(S\). Thus the neurons outside of \(S\) might eventually attain a non-negligible mass. However, because the layers are trained simultaneously, this leads to positive feedback between the growth of \(\|w_{\text{sig}}\|\) and \(|a_{w}|\), leading to exponential growth, which maintains the mass ratios between the neurons in and out of \(S\)._
Combining the ideas above, we prove the following lemma, which shows that after one SGD step, the network stays signal-heavy (with a slightly worse parameter), the behavior of the weakest margin improves, and the network (measured by the size of the largest margin \(\gamma_{\max}\)) doesn't become too big.
**Lemma 4.4** (Phase 2 Inductive Step: Informal; See Lemma D.3 for formal version).: _If a network \(\rho_{t}\) is \((\zeta,H)\)-signal heavy with heavy set \(S\), then after one minibatch gradient step, with probability \(1-d^{-\omega(1)}\),_
1. \(\rho_{t+1}\) _is_ \((\zeta(1+10\eta\zeta H),H)\)_-signal heavy._
2. \(\tilde{\gamma}_{min}^{(t+1)}\geq(1+2\eta\tau(1-o(1))g_{\max})\,\tilde{\gamma}_ {min}^{(t)}\)__
3. \(\tilde{\gamma}_{max}^{(t+1)}\leq(1+2\eta\tau(1+o(1))g_{\min})\,\tilde{\gamma}_ {max}^{(t)}\)_, where_ \(\tilde{\gamma}_{max}^{(t)}:=\max_{\mu\in\{\pm\mu_{1},\pm\mu_{2}\}}\tilde{\gamma} _{\mu}^{(t)}\)_._
Theorem 3.1 is proved by iterating this lemma for \(\Theta(\log\log(d)/\eta)\) steps, yielding \(\gamma_{\min}\approx\tilde{\gamma}_{\min}=\omega(1)\).
## 5 Conclusion and Discussion
In this work, we showed that in \(\tilde{O}(d)\) samples, it is possible to learn the XOR function on Boolean data on a 2-layer neural network. Our result shows that by a careful analysis comparing the dynamics to the dynamics under the surrogate \(L_{0}\) loss, we can show that SGD finds the signal features and escapes the region of the saddle where it was initialized. Then, after learning the feature directions, we show that SGD will enlarge and balance the signal components so as to correctly classify points from all 4 clusters.
We now discuss some of the limits and possible extensions of our techniques.
Minibatch SGD vs SGD vs GD.In this work, we study minibatch SGD, with a batch size of \(m\geq d\text{polylog}(d)\). This affords us enough samples at each iteration to have strong enough convergence to the population loss. Extending our results to SGD with a batch size of 1 is an interesting open question, and it is possible that this could be achieved using the drift-martingale techniques in Tan and Vershynin (2019); Arous et al. (2021); Abbe et al. (2023). Such methods allow larger fluctuations from the population loss at each step, but show that the fluctuations concentrate over time, even when SGD is run for \(T=\omega(1/\eta)\) steps, enough time to escape a saddle.
We remark that in this problem, using minibatch SGD with fresh samples can achieve stronger sample complexities than those required to show uniform convergence of the empirical gradient to the population gradient (as in Ge et al. (2017); Mei et al. (2018)), which in our setting is \(\Omega(d^{2})\) samples. This means proving the convergence of GD on the empirical loss would require tools beyond uniform convergence.
Boolean Data vs Gaussian Data.One limitation of this work is that our results only hold for Boolean data, and not Gaussian data \(x\sim\mathcal{N}(0,I_{d})\). As a matter of convenience, it is easier to compute the population gradients \(\nabla_{w}L_{0}\) and \(\nabla_{w}^{\text{cl}}L_{\rho}\) with Boolean data, and the gradient does not depend on interactions between \(w_{\text{sig}}\) and \(w_{\text{opp}}\). With some willingness to perform various Gaussian integrals, we believe the analysis in Phase 1 could be extended to the Gaussian setting. This would require changing Lemma C.17 to reflect the population gradients, and modifying the definition of "strong" neurons (Def. C.13) to be a more restrictive set that only includes neurons where \(\|w_{\text{opp}}\|\ll\|w_{\text{sig}}\|\), such that \(w_{\text{sig}}\) grows at the maximum possible rate. We do not know of any way to directly extend Phase 2 to the Gaussian case. This is because if the cluster margins \(\gamma_{\mu}\) become very imbalanced, it is possible \(w_{\text{sig}}\) could grow in the wrong direction.
Classification vs Regression.In our classification setting, it suffices to show that the margin on each cluster grows large. We accomplish this in our Phase 2 analysis by showing that there is a large mass of neurons primarily in the \(\mu\)-direction for each \(\mu\in\{\pm\mu_{1},\pm\mu_{2}\}\). Adapting this strategy may be possible for XOR regression on Boolean data, but on Gaussian data, representing the ground truth function would require more specialization among the neurons. To see this, consider the following simpler example: to represent the single-index function \(f^{*}(x)=(e_{1}^{T}x)^{2}\) on Gaussian data on a ReLU network without biases, the neurons cannot all be oriented in the \(\pm e_{1}\) direction, otherwise the output would be \(a\sigma(x_{1})+b\sigma(-x_{1})\) for scalars \(a,b\). Studying the power of SGD to perform this specialization is an exciting open direction. We believe that our Phase 1 analysis may be a useful first step in this regard to show that the network can become signal-heavy. More powerful techniques would need to be developed to show specialization once the network contains sufficient signal.
|
2309.06019 | DSLOT-NN: Digit-Serial Left-to-Right Neural Network Accelerator | We propose a Digit-Serial Left-tO-righT (DSLOT) arithmetic based processing
technique called DSLOT-NN with the aim to accelerate inference of the convolution
operation in the deep neural networks (DNNs). The proposed work has the ability
to assess and terminate the ineffective convolutions which results in massive
power and energy savings. The processing engine is comprised of low-latency
most-significant-digit-first (MSDF) (also called online) multipliers and adders
that process data from left-to-right, allowing the execution of subsequent
operations in digit-pipelined manner. Use of online operators eliminates the
need for the development of complex mechanism of identifying the negative
activation, as the output with highest weight value is generated first, and the
sign of the result can be identified as soon as first non-zero digit is
generated. The precision of the online operators can be tuned at run-time,
making them extremely useful in situations where accuracy can be compromised
for power and energy savings. The proposed design has been implemented on
Xilinx Virtex-7 FPGA and is compared with state-of-the-art Stripes on various
performance metrics. The results show the proposed design presents power
savings, has shorter cycle time, and approximately 50% higher OPS per watt. | Muhammad Sohail Ibrahim, Muhammad Usman, Malik Zohaib Nisar, Jeong-A Lee | 2023-09-12T07:36:23Z | http://arxiv.org/abs/2309.06019v2 | # DSLOT-NN: Digit-Serial Left-to-Right Neural Network Accelerator
###### Abstract
We propose a Digit-Serial Left-tO-righT (DSLOT) arithmetic based processing technique called _DSLOT-NN_ with the aim to accelerate inference of the convolution operation in deep neural networks (DNNs). The proposed work has the ability to assess and terminate the ineffective convolutions, which results in massive power and energy savings. The processing engine is comprised of low-latency most-significant-digit-first (MSDF) (also called _online_) multipliers and adders that process data from left-to-right, allowing the execution of subsequent operations in a digit-pipelined manner. The use of online operators eliminates the need for a complex mechanism for identifying negative activations, as the output digit with the highest weight is generated first, and the sign of the result can be identified as soon as the first non-zero digit is generated. The precision of the online operators can be tuned at run-time, making them extremely useful in situations where accuracy can be compromised for power and energy savings. The proposed design has been implemented on a Xilinx Virtex-7 FPGA and is compared with the state-of-the-art Stripes on various performance metrics. The results show the proposed design presents power savings, has a shorter cycle time, and approximately \(50\%\) higher OPS per watt.
Online arithmetic, most-significant-digit first, convolution neural network, CNN acceleration.
## I Introduction
In recent years, deep neural networks have shown impressive performance and are considered state-of-the-art classification algorithms, achieving near-human performance in applications including image processing, natural language processing, object detection, and bioinformatics [1, 2, 3]. The performance of DNNs is related to their computational complexity. It is commonly observed that the number of layers has a significant impact on a network's performance [4]. Specifically, a greater number of layers often results in superior feature extraction capabilities. However, it is important to note that deeper networks typically require a larger number of parameters and, consequently, more extensive computational resources and memory capacity to be effectively trained. The main computation is the multiply-accumulate (MAC) operation, which accounts for \(99\%\) of the total computations in convolution neural networks (CNNs) [5]. The arrangement of MAC units depends on the size and shape of the DNN. For example, the \(152\)-layer ResNet model, the first DNN entry in the ImageNet challenge to surpass human-level accuracy, requires \(11.3\) GMAC operations and \(60\) million weights [6]. As such, there exists a trade-off between the benefits of increased network depth and the associated costs in terms of model size and resource requirements.
### _Related Works_
The aforementioned challenges led research toward designing domain-specific architectures to accelerate the computation of convolution operations in deep neural networks [7, 8]. Moreover, such designs perform the CNN inference in a layer-by-layer fashion, which substantially increases the flow of data to and from the external memory. During the past few years, there has been an emerging trend towards the implementation of DNN acceleration and evaluation designs using bit-serial arithmetic circuits [9, 10]. This trend has been motivated by various factors: (1) the need to reduce the computational complexity and the required communication bandwidth, (2) the requirement of variable data precision by various deep learning networks, as well as the requirement of variable precision within the layers of a network, (3) the fact that the compute precision of bit-serial designs can be varied easily, simply by adjusting the number of compute cycles in a DNN model evaluation, and (4) the need to improve energy and resource utilization by early detection of negative results, hence terminating such ineffectual computations. Stripes [9] is considered among the pioneering works employing bit-serial multipliers instead of conventional parallel multipliers in their accelerator architecture to address challenges such
as power and throughput. In a similar context, UNPU [10] enhanced the Stripes architecture by incorporating look-up tables (LUTs) to store the inputs to be reused multiple times during the computation of an input feature map.
Most modern CNNs use the rectified linear unit (ReLU) as an activation function, which filters the negative results of the convolution and replaces them with zero. Studies [11, 12, 13] show that about \(42\%\)-\(68\%\) of the outputs in modern CNNs are negative, suggesting that a significant amount of power is wasted on unnecessary computation. Most conventional CNN acceleration designs perform the ReLU activation separately, after the completion of the convolution operations. Recently, some researchers have proposed methods for early detection and termination of negative results [11, 12, 13]. Early detection of negative activations results in reduced computations and improvement in the energy requirements of the hardware designs. Existing solutions either involve special digit encoding schemes [12, 13] or sophisticated circuits [11] to predict if the result is negative. In [11], the algorithm requires significant software complexity to re-order the operations, limiting the deployment of such techniques.
In this research, we propose to use _online arithmetic_ for early detection of negative inputs to the ReLU activation function, and for terminating the corresponding ineffective convolutions. We develop online arithmetic-based multipliers and adders to perform the multiply-and-accumulate operations.
### _Organization and Specific Contributions of the Paper_
The specific contributions of this work are as follows:
* DNN accelerator design based on MSDF arithmetic scheme.
* A novel and straight-forward mechanism for the detection of negative activation during the computation of the convolution operation.
* Energy efficient design resulting in \(50\%\) higher OPS per watt compared to SIP [9].
The rest of the paper is organized as follows. Section II presents the details of the proposed online arithmetic-based convolution computation and early termination technique. The evaluation and results of the proposed methodology are presented in Section III, followed by the conclusion in Section IV.
## II Materials and Methods
A convolution layer processes an input image by applying \(M\) 3D kernels in a sliding window fashion. Typically, convolution layers in CNNs perform a series of multiply-accumulate (MAC) operations to compute the output feature maps. Each MAC operation involves multiplying corresponding elements of the kernel and input feature maps and summing up the results. The convolution operation carried out in a CNN layer can be outlined by a simple weighted sum or SOP equation as follows;
\[y_{ij}=\sum_{a=0}^{m-1}\sum_{b=0}^{m-1}w_{ab}x_{(i+a)(j+b)} \tag{1}\]
where \(y_{ij}\) is the \(ij^{th}\) output of layer \(l\), \(w\) is the kernel of dimensions \(m\times m\), and \(x\) represents the input of the convolution. It can be observed from the equation that for any \((i,j)\), the kernel \(w\) remains the same while the input changes according to the sliding window operation. This characteristic of the convolution brings the opportunity of weight stationarity in the dataflow architecture of convolution layers.
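A direct NumPy rendering of Eq. (1) (an illustration of the arithmetic, not of the hardware mapping) makes this weight reuse explicit:

```
import numpy as np

def conv2d_sop(x, w):
    # Valid-padding, stride-1 convolution of Eq. (1); w is an m x m kernel
    m = w.shape[0]
    H, W = x.shape
    y = np.zeros((H - m + 1, W - m + 1))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            # the same (stationary) kernel w is reused for every output pixel
            y[i, j] = np.sum(w * x[i:i + m, j:j + m])
    return y
```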
### _Online Arithmetic_
Online arithmetic is essentially a computing paradigm that works serially in a most-significant-digit-first (MSDF) manner, i.e., the inputs are fed, and the output is generated, digit-by-digit from left-to-right (LR) [14]. Digit-level pipelining stands out as a key feature of this computing paradigm, among several other characteristics. Since all the computation is done in an LR manner, it is possible to pipeline subsequent operations at the digit level, i.e., as soon as the first digit of the preceding operation is obtained, the succeeding operation, regardless of data dependency, can start computation after a small fixed delay called the _online delay_, denoted by \(\delta\) and shown in Fig. 1.
Owing to this property, the intermediate results need not be stored; rather, they are consumed in the successive computation, resulting in a decreased number of read/write operations from/to memory, and hence low bandwidth requirements and consequent energy savings. In order to generate the output on the basis of partial input information, online computation requires flexibility in computing digits. This is achieved by employing a redundant digit number system. To this end, a signed-digit (SD) redundant number system is usually employed, in which a radix-\(r\) representation has more than \(r\) values available for each digit. In this study, we use the symmetric radix-\(2\) redundant digit set \(\{-1,0,1\}\). For compatibility, the online modules use fractional numbers; this also simplifies the alignment of the operands. The first digit of the operand has weight \(r^{-1}\), and at a given iteration \(j\), the digit \(x_{j}\) is represented by two bits \(x^{+}\) and \(x^{-}\), and its numerical value is given by (2).
\[x_{j}=SUB(x^{+},x^{-})=x^{+}-x^{-} \tag{2}\]
The input and output are given by (3) and (4), respectively.
\[x[j]=\sum_{i=1}^{j+\delta}x_{i}r^{-i} \tag{3}\]
\[z[j]=\sum_{i=1}^{j}z_{i}r^{-i}, \tag{4}\]
Fig. 1: Timing characteristics of online operation with \(\delta=3\).
where the square brackets represent the iteration index and subscript denote the digit index. A given online algorithm executes for \(n+\delta\) cycles. The single digit input is fed for \(n\) iterations, and after \(\delta\) cycles a single output digit is generated in each iteration.
#### II-A1 Online Multiplier (OLM)
In most CNN designs, the convolution during inference is carried out by multiplying a constant weight kernel with the input image in a sliding window fashion. This particular characteristic of CNNs implies that the kernel matrix is used multiple times during the convolution operation. In this context, an online multiplier that takes one operand in parallel and the other serially can be useful: the weight kernel is employed in parallel while the input is fed serially. In this study, we use the non-pipelined serial-parallel multiplier presented in [15], depicted in Fig. 2(a). The multiplier generates its output in an MSDF fashion after an online delay of \(\delta=2\) cycles. The serial input and output in each cycle are represented as (3) and (4) respectively, while the constant weight is represented as:
\[Y[j]=Y=-y_{0}\cdot r^{0}+\sum_{i=1}^{n}y_{i}r^{-i} \tag{5}\]
Further derivations related to the recurrence and selection function of the serial-parallel online multiplier can be found in [15].
#### II-A2 Online Adder (OLA)
Since the multipliers used in this study generate their outputs in an MSDF fashion, an adder with a similar capability is needed to compute the sum-of-products (SOP). In this context, a digit-serial online adder that takes both its inputs and generates its output in an MSDF fashion is employed. This enables digit-level pipelining in the proposed SOP design and also helps in the early determination and subsequent termination of negative activations. The online adder, with an online delay of \(\delta=2\), follows a simple construction as presented in Fig. 2(b). Further details and relevant derivations can be found in [16].
### _Proposed Design_
This section details the architecture of the proposed DSLOT-NN based on online computation units with early termination capability. The arrangement of computation units in the processing engine (PE) of DSLOT-NN and the technique for terminating ineffectual convolutions (those yielding negative results) are discussed.
#### II-B1 Processing Engine and DSLOT-NN Architecture
The architecture of the proposed DSLOT-NN is presented in Fig. 4. Each PE, presented in Fig. 3, contains \(k\times k\) online serial-parallel multipliers followed by a reduction tree to generate one output pixel. The input pixel is fed serially while the kernel pixel is fed in parallel, as depicted by the thickness of the arrows in Fig. 4. The arrangement of PEs is done in such a way that the outputs of the \(4\) PEs are directly fed to the ensuing pooling layer. It is worth noting that the architecture presented in Fig. 4 is designed for a CNN with a single input feature map. A similar approach can be followed for a CNN with multiple input feature maps. A generic representation of the DSLOT-NN is also presented in the following sections.
Each multiplier in the PE is responsible for the multiplication of one pixel in the convolution window with the corresponding pixel in the same feature map of the convolution kernel. Therefore, all the \((k\times k)\) pixels are processed in parallel. The number of cycles required for a PE to generate its output can be calculated as follows
\[Num_{Cycles}=\delta_{\times}+\delta_{+}\times\lceil\log_{2}(k\times k)\rceil+\delta_{+}\times\lceil\log_{2}(N)\rceil+p_{out} \tag{6}\]
where \(\delta_{\times}\) and \(\delta_{+}\) are the online delays of the online multiplier and adder, respectively, \(\lceil\log_{2}(k\times k)\rceil\) is the number of reduction-tree stages required to generate the SOP of the \(k\times k\) multipliers, \(\lceil\log_{2}(N)\rceil\) is the number of reduction-tree stages required to add the SOP results of the \(N\) input feature maps, and \(p_{out}\) is the precision of the SOP result, calculated as follows.
\[p_{out}=p_{out}^{Mult}+\lceil\log_{2}(k\times k)\rceil \tag{7}\]
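As a quick illustration of (6) and (7), the short Python sketch below computes the per-PE cycle count; the function and argument names are ours, chosen only for this example.

```python
import math

# Illustrative calculator for Eqs. (6) and (7): online delays plus the
# reduction-tree stages for the kernel SOP and the feature-map SOP.

def pe_cycles(k, n_maps, p_out, d_mul=2, d_add=2):
    stages_kernel = math.ceil(math.log2(k * k))  # SOP over the k*k multipliers
    stages_maps = math.ceil(math.log2(n_maps))   # SOP over N input feature maps
    return d_mul + d_add * stages_kernel + d_add * stages_maps + p_out

# Configuration used later in the paper: 5x5 kernel, one input feature map,
# p_out = 16 + ceil(log2(25)) = 21, giving 33 cycles per convolution.
print(pe_cycles(k=5, n_maps=1, p_out=21))  # 33
```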
#### III-B2 Early Termination of Negative Computations
Most CNN accelerator designs emphasize fast and efficient generation of the sum-of-products (SOP), but only a few works discuss the possibility of early assessment of negative values
Fig. 3: Processing engine architecture
Fig. 2: Basic components: (a) Online serial-parallel multiplier [15], where \(x\) is the serial input and \(Y\) is the parallel input, (b) Online adder [16]
for the activation layer (ReLU). The early determination of negative activations is a challenging problem in accelerators based on conventional arithmetic. For instance, bit-serial multipliers take the _multiplicand_ in parallel while the _multiplier_ is processed serially. In each iteration a partial product is generated and stored in a register, where it is shifted into the appropriate position before being added to the other partial products to obtain the final product. Typically a series of adders, such as carry-save adders, ripple-carry adders, etc., is employed to perform this reduction. In convolution, another level of reduction is required to obtain the output pixel, and a further level of reduction is needed to compute the SOP if there is more than one input feature map. In conventional bit-serial multipliers, determining the most significant bit, and hence the polarity of the result, requires waiting until all partial products have been generated and added to the previous partial sums. The few works that aim to solve the early detection of negative activations use either a digit encoding scheme or an estimation technique [12, 13, 17].
The challenge of early detection and termination of negative activations can be addressed by the intrinsic ability of online arithmetic to generate output digits in an MSDF manner. The proposed design supports the termination of a negative activation computation in \(p\) cycles, where \(p<Num_{Cycles}\) and \(Num_{Cycles}\) is the number of cycles needed to compute the complete result. This is done by observing and comparing the output digits. The process of detecting negative activations and subsequently terminating the relevant computation is summarized in Algorithm 1.
```
1: Registers for the output digit bits \(z^{+}[j]\), \(z^{-}[j]\)
2: for \(j:1\) to \(Num_{Cycles}\) do
3:   \(z^{+}[j]\leftarrow z^{+}[j-1]\frown z_{j}^{+}\)
4:   \(z^{-}[j]\leftarrow z^{-}[j-1]\frown z_{j}^{-}\)
5:   if \(z^{+}[j]<z^{-}[j]\) then
6:     Terminate
7:   else
8:     Continue
9:   end if
10: end for
```
**Algorithm 1** Early detection and termination of negative activations
The ReLU unit is equipped with registers to store the \(z^{+}[j]\) and \(z^{-}[j]\) bits, i.e., the positive and negative output digits of the SOP in redundant number representation. During each iteration, the new digits are concatenated, indicated by "\(\frown\)" in Algorithm 1, with their corresponding running prefixes, and as soon as the value of \(z^{+}[j]\) falls below the value of \(z^{-}[j]\), indicating a negative output, a termination signal is generated by the control unit and the computation of the SOP is terminated. Fig. 4 shows the block diagram of the proposed DSLOT-NN considering one input feature map. This simple procedure of early negative detection can save up to \(45-50\%\) of the computation cycles for a convolution operation that results in a negative number, leading to an energy-efficient design. According to (6), the number of cycles required by the proposed design to process one convolution is \(33\), where \(\delta_{\times}=\delta_{+}=2\), \(k=5\), \(N=1\), and \(p_{out}=21\) accounts for the bit growth in the reduction-tree stages; by (7), \(p_{out}=p_{out}^{Mult}+\lceil\log_{2}(k\times k)\rceil=16+5=21\), with \(k\times k=5\times 5=25\) as the convolution kernel dimensions.
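The following Python fragment is a behavioral sketch, not the hardware, of the comparison performed in Algorithm 1: it consumes the MSDF output digit pairs, maintains the \(z^{+}\) and \(z^{-}\) prefix values, and reports the cycle at which a negative SOP is detected.

```python
# Behavioral model of Algorithm 1. The MSDF output digit pairs (z+_j, z-_j)
# arrive most significant digit first; the running prefix values are compared,
# and computation stops as soon as the z+ prefix falls below the z- prefix.

def early_negative(digit_pairs, r=2):
    """Return (is_negative, cycles_used) for a stream of (z+_j, z-_j) bits."""
    z_plus = z_minus = 0.0
    for j, (zp, zm) in enumerate(digit_pairs, start=1):
        z_plus += zp * r ** -j    # concatenate next digit to the z+ prefix
        z_minus += zm * r ** -j   # concatenate next digit to the z- prefix
        if z_plus < z_minus:      # negative SOP detected: ReLU output is 0
            return True, j
    return False, len(digit_pairs)

# SOP value -0.25: detected negative at cycle 2 instead of running all cycles.
print(early_negative([(0, 0), (0, 1), (0, 0), (0, 0)]))  # (True, 2)
```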
#### III-B3 General DSLOT-NN Design
A general extension of the proposed DSLOT-NN to larger networks is presented in Fig. 5. The number of PEs in a processing block (PB) depends upon the number of input feature maps of a particular convolution layer in a CNN. This generic architecture can be replicated multiple times, depending on the number of output feature maps, if more parallelism is required.
The PBs are responsible for the computation of one of the pixels belonging to a pooling (or max-pooling) window. Fig. 5 presents an example with a \(2\times 2\) pooling window, hence the four PBs. Each PB consists of multiple PEs followed by an online adder tree. The number of PEs in a PB represents
Fig. 4: DSLOT-NN block diagram
the input tiling and ranges from \(1\) to \(N\), where \(N\) is the number of input feature maps. The output digits of the adder tree are forwarded to a simple comparator circuit to perform the detection of negative activations for ReLU. The structure of a PE is presented in Fig. 3.
Section III contains further details on the experiments conducted to determine the number of clock cycles saved, and the consequent energy savings achieved, due to the early detection and termination of negative activations.
## III Experimental Results
To show the effectiveness of DSLOT-NN, both in terms of latency and of the early determination of negative activations, we consider a pre-trained CNN as shown in Fig. 6.
As an initial study, we opt to accelerate only the first three layers, i.e., convolution, ReLU, and max-pooling, as presented in Fig. 7. With one input feature map, and generating one output pixel after a \(2\times 2\) max-pooling, we employ the DSLOT-NN configuration shown in Fig. 5. Four PEs, each equipped with \(25\) multipliers and a reduction tree, compute the sums-of-products of the convolution windows, shown in different colors in Fig. 7, in parallel. The rectified linear unit (ReLU) operation is integrated as an inherent characteristic of the design, whereby each PE detects the sign of its output. When a negative sign is detected, further computation is terminated following Algorithm 1. The experiments were conducted on the MNIST handwritten digit classification database [18].
### _Results and Analysis of the Proposed Early Negative Detection and Termination_
During inference with the proposed design, we found that on average \(12.5\%\) of output pixels result in negative values for each MNIST test-set image. Fig. 8 presents the average percentage of negative activations in each MNIST class, calculated by counting the negative activations produced by each convolution performed on the MNIST test set. The fraction of negative activations is only \(12.5\%\), compared to the statistics reported in studies such as [11, 12, 13], mainly because those works report statistics for popular DNNs such as VGG-16, AlexNet, ResNet50, etc., while the proposed work uses a relatively simple CNN design. Another reason is that the adopted CNN was trained and implemented without bias terms. In typical CNN architectures handling the MNIST database, a substantial number of activations are rendered negative because a large number of input pixels are zero, owing to the large black regions in the images, and the bias terms in most networks trained on MNIST are usually very small, mostly negative, values. The absence of the bias term in the proposed CNN implementation therefore yields fewer negative values. This could also be addressed by exploiting the sparsity in the input feature maps, which can lead to significant computational savings in terms of the number of cycles required to compute an entire convolution. For simplicity, a randomly selected batch of 1000 images (100 images per class) from the MNIST test set was used.
The average number of computation cycles saved per digit class can be seen in Fig. 9, where the x-axis represents the digit classes of the MNIST database and the y-axis represents the percentage of the average number of computation cycles
Fig. 5: General DSLOT-NN architecture
Fig. 8: Average number of negative output activations (\(\%\)) (after CNN layer) per image in each MNIST digit class
Fig. 6: CNN for MNIST handwritten digit classification
Fig. 7: Simultaneous computation of first three layers of the CNN
saved during the convolution computation using the proposed design.
### _FPGA Implementation_
For comparison, we consider the bit-serial inner-product units (SIP) from Stripes [9], presented in Fig. 10, in a configuration similar to the proposed design. The SIP unit is extended to perform 8-bit multiplication, and the SIP processing engines are designed to compute the \((k\times k)\) convolution. This results in a configuration similar to the proposed design presented in Fig. 4.
A detailed description of the SIP design is presented in Fig. 11. The partial product generator (PPG), presented in Fig. 11(a), is the AND-gate array responsible for generating the partial products of the multiplication of a kernel pixel with the corresponding input pixel fed in a bit-serial manner, where \(w[0],w[1],\ldots w[n]\) represent the bits of an \(n\)-bit kernel pixel and \(x[i]\) is the input bit at iteration \(i\). This input bit is ANDed with the \(n\) bits of the kernel pixel to generate the \(i^{th}\) partial product. For a fair comparison, \((k\times k)\) PPGs are used in the SIP design, whose outputs, the \((k\times k)\) partial products, are forwarded to a reduction tree which generates their sum. This reduction tree is followed by an accumulator which accumulates the incoming sum of partial products (SOPP) by shifting and adding the previous sum to the incoming one. This process is iterated \(n\) times, keeping the input and kernel precision the same (\(n\)).
The critical path of the SIP design can be represented by the following equation
\[t_{SIP}=t_{AND}+5\times t_{CPA-8}+t_{CPA-21} \tag{8}\]
Similarly, the critical path of the proposed design can be calculated as the sum of the critical path of online multiplier and the subsequent reduction tree.
\[t_{OLM}=t_{[2:1]MUX}+t_{[3:2]Adder}+t_{CPA-4}+t_{SELM}+t_{XOR} \tag{9}\]
The critical path of an online adder (OLA) is found to be
\[t_{OLA}=2\times t_{FA}+t_{FF} \tag{10}\]
Therefore, the critical path for the reduction tree can be calculated as the product of the number of stages and \(t_{OLA}\). So, the critical path of the proposed DSLOT-NN can be calculated as
\[t_{DSLOT}=t_{OLM}+5\times t_{OLA} \tag{11}\]
The inputs and weights are represented in 8-bit fixed point; for the proposed design, the fixed-point operands are converted to the redundant representation. The effect of input precision on model accuracy is outside the scope of this work. SIP uses a simple implementation for multiplication, where the weight bits are fed in parallel and ANDed with the input, which is fed serially. Both the SIP and the proposed design have been implemented on a Virtex-7 FPGA, and the implementation results are presented in Table I. In terms of area, the proposed design has marginally higher consumption than SIP in terms of look-up tables (LUTs). The proposed design shows savings in power consumption; in particular, it has \(9.1\%\) lower power and \(33.22\%\) lower energy consumption than SIP. The proposed design also has a critical path approximately \(48.6\%\) shorter than SIP's. The experiments were conducted on FPGA, and the primary issues considered in the scope of this work were the early termination of negative activations and the consequent computational efficiency. The performance-density results in terms of \(GOPS/W\) showcase the effectiveness of the proposed method. In future work, more experiments
Fig. 11: SIP design, (a) Partial product generator (PPG), (b) Overall SIP design
Fig. 10: A general bit-serial inner product unit (SIP) [9]
Fig. 9: Average number of computation cycles (\(\%\)) saved per class in MNIST hand-written digit classification database
using professional design tools will be conducted, where various design optimizations, including timing optimization, will be included to assess the robustness and flexibility of the proposed design. The effect of early termination is visible in the significant performance improvement of the proposed design: DSLOT-NN achieves approximately \(49.7\%\) higher \(OPS/W\) than SIP.
The number of LUTs used by the proposed design is slightly higher than for the SIP design; in particular, the proposed design uses \(56.86\%\) more LUTs. However, given the lower critical path delay and dynamic power, and the energy and computation savings owing to the early detection and termination of negative activations, the results show that the proposed design has superior performance compared to the SIP design.
Although the proposed design has been tested on a relatively simple and small benchmark, the general design presented in Fig. 5 shows an overall implementation scheme that can handle arbitrary kernel sizes and numbers of input feature maps to construct convolution layers of various dimensions for any given network and database.
## IV Conclusion
In this paper we presented DSLOT-NN, which utilizes online arithmetic-based operators for the acceleration of convolution layers in deep neural networks. Online arithmetic offers various benefits, including shorter latency, variable precision, and digit-level pipelining. We implemented a mechanism to detect and terminate ineffective convolutions, which results in power savings and increased performance. In particular, the proposed design has approximately \(50\%\) higher performance compared to the state-of-the-art approach for convolution computation. In the future, we plan to analyze the behavior of online arithmetic in DNN acceleration with variable input and kernel precision in inter-layer as well as intra-layer settings. Furthermore, the sparsity in the inputs and kernels will be exploited to further improve the performance and energy efficiency of the proposed design.
|
2309.16223 | GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network
Explanations | Diverse explainability methods of graph neural networks (GNN) have recently
been developed to highlight the edges and nodes in the graph that contribute
the most to the model predictions. However, it is not clear yet how to evaluate
the correctness of those explanations, whether it is from a human or a model
perspective. One unaddressed bottleneck in the current evaluation procedure is
the problem of out-of-distribution explanations, whose distribution differs
from that of the training data. This important issue affects existing
evaluation metrics such as the popular faithfulness or fidelity score. In this
paper, we show the limitations of faithfulness metrics. We propose GInX-Eval
(Graph In-distribution eXplanation Evaluation), an evaluation procedure of
graph explanations that overcomes the pitfalls of faithfulness and offers new
insights on explainability methods. Using a fine-tuning strategy, the GInX
score measures how informative removed edges are for the model and the EdgeRank
score evaluates if explanatory edges are correctly ordered by their importance.
GInX-Eval verifies if ground-truth explanations are instructive to the GNN
model. In addition, it shows that many popular methods, including
gradient-based methods, produce explanations that are not better than a random
designation of edges as important subgraphs, challenging the findings of
current works in the area. Results with GInX-Eval are consistent across
multiple datasets and align with human evaluation. | Kenza Amara, Mennatallah El-Assady, Rex Ying | 2023-09-28T07:56:10Z | http://arxiv.org/abs/2309.16223v2 | # GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations
###### Abstract
Diverse explainability methods of graph neural networks (GNN) have recently been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions. However, it is not clear yet how to evaluate the _correctness_ of those explanations, whether it is from a human or a model perspective. One unaddressed bottleneck in the current evaluation procedure is the problem of out-of-distribution explanations, whose distribution differs from those of the training data. This important issue affects existing evaluation metrics such as the popular faithfulness or fidelity score. In this paper, we show the limitations of faithfulness metrics. We propose **GInX-Eval** (Graph **In**-distribution **eX**planation **Evaluation**), an evaluation procedure of graph explanations that overcomes the pitfalls of faithfulness and offers new insights on explainability methods. Using a fine-tuning strategy, the GInX score measures how informative removed edges are for the model and the EdgeRank score evaluates if explanatory edges are correctly ordered by their importance. GInX-Eval verifies if ground-truth explanations are instructive to the GNN model. In addition, it shows that many popular methods, including gradient-based methods, produce explanations that are not better than a random designation of edges as important subgraphs, challenging the findings of current works in the area. Results with GInX-Eval are consistent across multiple datasets and align with human evaluation.
## 1 Introduction
While people in the field of explainable AI have long argued about the nature of good explanations, the community has not yet agreed on a robust collection of metrics to measure explanation _correctness_. Phenomenon-focused explanations should match the ground truth defined by humans and are evaluated by the accuracy metric. Model-focused explanations contribute the most to the model predictions and are evaluated by the faithfulness metrics. Because ground-truth explanations are often unknown, faithfulness and its variants are the most common measures of quality. Faithfulness metrics remove or retain only the important graph entities identified and observe the changes in model outputs. However, this edge masking strategy creates Out-Of-Distribution (OOD) graph inputs, so it is unclear if a high faithfulness score comes from the fact that the edge is important or from the distribution shift induced by the edge removal (Günnemann, 2022).
We propose **GInX-Eval**, an evaluation procedure of in-distribution explanations that brings new perspectives on GNN explainability methods. Testing two edge removal strategies, we evaluate the impact of removing a fraction \(t\) of edges in the graph on the GNN model performance. To overcome the OOD problem of faithfulness metrics, the model is fine-tuned and tested on the reduced graphs at each degradation level. The best explainability methods can identify the graph entities whose removal triggers the sharpest model accuracy degradation. We compare generative and non-generative methods on their **GInX** score against a random baseline across four real-world graph datasets and two synthetic datasets, all used for graph classification tasks. With this strategy, we show that existing explainers are not better than random in most of the cases. In addition, we show the overall superiority of GNNExplainer, PGMExplainer, and most of the generative methods above
gradient-based methods and Occlusion. Our results lead to diverging conclusions from recent studies (Yuan et al., 2023; Agarwal et al., 2022) and again question the use of faithfulness as a standard evaluation metric in explainable AI (xAI). The GInX-Eval framework also proposes the **EdgeRank** score to assess the capacity of explainers to correctly order edges by their true importance for the model. Finally, GInX-Eval is a useful tool to validate ground-truth explanations provided with some datasets and discover both human- and model-based explanations. Because it relies on a fine-tuning strategy of black-box pre-trained models, GInX-Eval is also a useful evaluation procedure in real-world scenarios where models are not publicly available and can only be used via API calls. Due to the computational cost of re-training, GInX-Eval is not proposed as a systematic evaluation metric but as a tool to throw light on the true informative power of explainers and validate ground-truth explanations. To summarize our contributions:
* We first show that faithfulness evaluates OOD explanations. In addition, we observe that it (1) is inconsistent with the accuracy metric, (2) leads to divergent conclusions across datasets, and (3) leads to divergent conclusions across edge removal strategies. Overcoming the OOD problem, we propose **GInX-Eval** (Graph In-distribution eXplanation Evaluation), a new evaluation framework for GNN explainability methods. The **GInX** score evaluates how informative explanatory edges are to the model, and the **EdgeRank** score assesses whether those edges are correctly ordered by their importance.
* We propose a validation protocol of ground-truth explanations using the GInX score. This way, we can measure the degree of alignment between human-based and model-based explanations.
* With GInX-Eval, we now finally demonstrate the true informative power of well-known explainability methods, filter out bad methods, and choose methods that can correctly rank edges.
The rest of this article is organized as follows. Section 2 discusses the literature on graph neural networks (GNN) explainability evaluation and the OOD problem. Section 3 presents the limitations of the current evaluation with faithfulness and introduces GInX-Eval, our new in-distribution evaluation procedure, and its two scores, GInX and EdgeRank. Section 4 presents the experiments that we conducted in detail. Section 5 summarizes the paper and discusses the future opportunities.
## 2 Related work
**Evaluation in xAI.** To measure the correctness of explanations, a few metrics have been developed. A GNN explainability method should satisfy accuracy, faithfulness, stability (Sanchez-Lengeling et al.,
Figure 1: Summary of GInX-Eval procedure. (1) A GNN model is pre-trained to predict the class of the input graphs. An explainability method generates explanatory subgraphs. (2) For each \(t\in[0.1,...,0.9]\), a new train and test datasets are generated where the fraction \(t\) of the top explanatory edges is removed. At each \(t\), the pre-trained GNN model is fine-tuned on the new train dataset, evaluated on the new test set, and the GInX score is computed. If the model performance decreases, i.e., the GInX scores increase, the explanatory edges are informative to the model. The EdgeRank score is also computed to evaluate if explanatory edges are correctly ranked by the explainability method.
Yuan et al., 2023; Agarwal et al., 2021; 2022), consistency and contrastivity (Yuan et al., 2023), usefulness (Colin et al., 2021). The two most popular approaches are: (1) measuring accuracy using ground-truth annotations and (2) measuring faithfulness using objective metrics (Chan et al., 2022). However, the accuracy metric also referred to as plausibility (Li et al., 2022; Longa et al., 2022; Nauta et al., 2022), needs ground-truth explanations and is therefore less universal and more subjective than faithfulness. Nauta et al. (2022) argues that evaluating the accuracy of an explanation to humans is different from evaluating its correctness. It is not guaranteed that it aligns with its faithfulness Jacovi & Goldberg (2020). According to Petsiuk et al. (2018), it is preferable to keep humans out of the evaluation to better capture the model's understanding rather than representing the human's view. Faithfulness metrics are therefore the most popular evaluation metrics, but we show later in Section 3.2 that they have strong limitations including evaluating out-of-distribution explanations.
**Solving the OOD problem.** Recent works have proposed to adapt the GNN model or develop robust explainability methods to overcome the out-of-distribution (OOD) problem. Faber et al. (2020) argue that explanations should stay in the training data distribution and propose CoGe to produce Distribution Compliant Explanation (DCE). Li et al. (2021) propose a novel out-of-distribution generalized graph neural network. Hsieh et al. do not remove features but apply small adversarial changes to the feature values. Instead of developing robust methods, Hooker et al. (2018) evaluate interpretability methods by observing how the performance of a retrained model degrades when removing the features estimated as important. While this retraining strategy circumvents the OOD problem, it has only been developed for CNN models on images to evaluate feature importance. Building on this retraining strategy, we propose a new evaluation procedure for GNN models and introduce an alternative to faithfulness metrics.
## 3 Method
This section highlights the limitations of the popular xAI evaluation procedure using faithfulness metrics and proposes GInX-Eval to overcome them. We can assess the informativeness of explanations for the model with the GInX score, and the capacity of methods to correctly order explanatory edges by their importance with the EdgeRank score.
### Preliminaries
Given a well-trained GNN model \(f\) and an instance of the dataset, the objective of the explanation task is to identify concise graph substructures that contribute the most to the model's predictions. The given graph can be represented as a quadruplet \(G(\mathcal{V},\mathcal{E},\mathbf{X},\mathbf{E})\), where \(\mathcal{V}\) is the node set and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the edge set. \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d_{n}}\) and \(\mathbf{E}\in\mathbb{R}^{|\mathcal{E}|\times d_{e}}\) denote the feature matrices for nodes and edges, respectively, where \(d_{n}\) and \(d_{e}\) are the dimensions of node features and edge features. In this work, we focus on structural explanation, _i.e.,_ we keep the dimensions of node and edge features unchanged. Given a well-trained GNN \(f\) and an instance represented as \(G(\mathcal{V},\mathcal{E},\mathbf{X},\mathbf{E})\), an explainability method generates a normalized explanatory edge mask \(M\in\mathbb{R}^{|\mathcal{E}|}\). Furthermore, to obtain a human-intelligible explanation, we transform the edge mask into a sparse matrix by keeping only the fraction \(t\in\mathcal{T}\) of the highest values and setting the rest of the matrix values to zero. Each explainability method can be expressed as a function \(h:\mathcal{G}\rightarrow\mathcal{G}\) that returns for each input graph \(G\) an explanatory subgraph \(h(G)\).
**Edge removal strategies.** There are two strategies to select edges in a graph: the _hard_ selection and the _soft_ selection. Hard selection picks edges from the graph so that the number of edges and nodes is reduced. This creates subgraphs that very likely do not lie in the initial data distribution. Soft selection sets edge weights to zero when edges are to be removed. Therefore it preserves the whole graph structure with all nodes and edge indices. Following these two definitions, we define _hard_ and _soft_ explanations. Note here that the hard removal strategy might break the connectivity of the input graphs, resulting in explanations represented by multiple disconnected subgraphs.
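As a minimal illustration of the two semantics (our own helper names, on a toy weighted edge list, not the paper's code):

```python
# Hard removal drops masked edges entirely, shrinking the edge (and possibly
# node) set, which is likely to push the graph out of the training distribution.
# Soft removal only zeroes the masked edges' weights, preserving all nodes and
# edge indices.

def hard_removal(edges, mask):
    return [e for e, m in zip(edges, mask) if not m]

def soft_removal(weights, mask):
    return [0.0 if m else w for w, m in zip(weights, mask)]

edges = [(0, 1), (1, 2), (2, 0)]
weights = [1.0, 1.0, 1.0]
mask = [True, False, False]  # first edge flagged as explanatory
print(hard_removal(edges, mask))    # [(1, 2), (2, 0)]
print(soft_removal(weights, mask))  # [0.0, 1.0, 1.0]
```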
### Faithfulness metrics
**Definition.** The faithfulness or fidelity scores are the most general quality metrics in the field of GNN explainability. To evaluate the correctness of the explanation, the explanatory subgraph or
weighted graph \(h(G)\) produced by the explainer \(h\) is given as input to the model to compute the fidelity score on the probabilities:
\[fid=|p(f(h(G))=y)-p(f(G)=y)|\in[0;1] \tag{1}\]
where \(y\) is the true label for graph \(G\), and \(f(G)\), \(f(h(G))\) are the labels predicted by the GNN given \(G\) and \(h(G)\), respectively. The closer \(fid\) is to 0, the more faithful the explanation is. The faithfulness score is averaged over the \(N\) explanatory graphs \(h(G_{i}),i\leq N\), as:
\[\text{Faithfulness}=1-\frac{1}{N}\sum_{i=1}^{N}|p(f(h(G_{i}))=y_{i})-p(f(G_{i})=y_{i})|\in[0;1] \tag{2}\]
The metric is normalized, and the closer it is to 1, the more faithful the evaluated \(N\) explanations are to the initial predictions. The above score corresponds to \(fid_{-}^{prob}\), one of the four forms of the fidelity scores (Yuan et al., 2023), described in Appendix A.1.
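For concreteness, a minimal Python sketch of (2) is given below; `prob(g, y)` is a hypothetical helper, assumed to return the model's predicted probability of class \(y\) for graph \(g\), and `h` returns the explanatory subgraph.

```python
# Minimal sketch of Eq. (2): one minus the mean absolute change in the predicted
# probability of the true class when the explanation replaces the input graph.

def faithfulness(graphs, labels, prob, h):
    diffs = [abs(prob(h(g), y) - prob(g, y)) for g, y in zip(graphs, labels)]
    return 1.0 - sum(diffs) / len(diffs)
```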
**Prior work.** While faithfulness metrics are the most popular quality measures independent of ground-truth explanations, they have recently been criticized. Based on a "removal" strategy, i.e., removing or keeping the graph entities estimated as important, faithfulness withdraws some entities by setting them to a baseline value, either removing them from the graph or setting their weight to zero. Hsieh et al. correctly observe that this evaluation procedure favors graph entities that are far away from the baseline. Consequently, methods that focus on highly weighted edges, while perhaps ignoring low-weight but still essential connections, are favored. In addition, truncated graphs after edge removal can lie outside the data distribution used for training the GNN model (Hooker et al., 2018). In this case, model behavior might differ not because important information was removed but because a sample outside the training distribution is evaluated. The out-of-distribution risk is even larger with graphs because of their discrete nature (Faber et al., 2020).
### GInX-Eval
GInX-Eval is an evaluation procedure for explainability methods that overcomes the faithfulness metrics' OOD problem and assesses the informativeness of explanatory edges for the GNN model. Figure 1 gives an overview of the procedure. To evaluate an explainer \(h\), GInX-Eval first gathers the explanations produced by \(h\) for all graph instances. The explanatory edges can be ranked according to their respective weights in the subgraph: the most important edges have a weight close to 1 in the mask, while the least important ones have weights close to 0. At each degradation level \(t\), we remove the top fraction \(t\) of the ordered explanatory edge set from the input graphs. We generate new train and test sets at degradation levels \(t\in[0.1,0.2,...,1]\). The pre-trained GNN model is then fine-tuned at each degradation level on the new train dataset and evaluated on the new test data. While being the most computationally expensive aspect of GInX-Eval, fine-tuning is scalable (see Appendix B.4), and we argue that it is a necessary step to decouple whether the model's degradation in performance is due to the loss of informative edges or to the distribution shift. The objective here is not to provide a computationally efficient evaluation metric but to highlight the limitations of popular evaluation in xAI for GNNs and question the superiority of gold-standard methods. The pseudo-code to implement GInX-Eval is given in Appendix B.3.
A drop in test accuracy when removing edges indicates that those edges were important for the model to make correct predictions. These edges are therefore considered as important as they are the most informative to the model. It is worth noticing that edges might be correlated and those spurious correlations can lead to an absence of accuracy drop when removing the top important edges and then a sudden decrease in accuracy when all correlated edges are removed.
#### 3.3.1 GInX Score
Following this evaluation procedure, we define the GInX score at \(t\). It captures how low the test accuracy is after removing the top \(t\) edges. Let \(h(G)\) be the explanatory subgraph generated by the method \(h\), \(y\) the true label for graph \(G\) and \(\chi:\mathcal{G}\times\mathcal{T}\rightarrow\mathcal{G}\) the removal function that takes an explanation \(h(G)\) and returns the hard or soft explanatory graph containing the top \(t\in\mathcal{T}\) edges. We define GInX(\(t\)) as:
\[\text{GInX}(t)=1-\text{TestAcc}(t)=1-\frac{1}{N_{test}}\sum_{i=0}^{N_{test}} \mathds{1}(f(G_{i}\setminus\chi(h(G_{i}),t))=y_{i}) \tag{3}\]
The closer the GInX score is to one, the more informative the removed edges are to the model. Note here that the GInX score at \(t\) can be computed following the hard or soft edge removal strategy; however, we show in Appendix A.2 that the GInX score computed with hard edge removal has higher expressiveness.
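A high-level sketch of the evaluation loop is shown below (the authors' actual pseudo-code is in their Appendix B.3); `remove_top_edges`, `fine_tune`, and `test_accuracy` are assumed helpers, injected as callables, standing in for dataset construction, fine-tuning of the pre-trained GNN, and evaluation.

```python
# Hedged sketch of the GInX-Eval loop; every task-specific step is injected
# rather than implemented here, so the function only encodes the procedure.

def ginx_curve(pretrained, train_set, test_set, explainer,
               remove_top_edges, fine_tune, test_accuracy,
               thresholds=(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)):
    """Return {t: GInX(t)} following Eq. (3)."""
    scores = {}
    for t in thresholds:
        # Remove the top fraction t of explanatory edges from every graph.
        train_t = [remove_top_edges(g, explainer(g), t) for g in train_set]
        test_t = [remove_top_edges(g, explainer(g), t) for g in test_set]
        # Fine-tune on the reduced graphs so evaluation stays in-distribution.
        model_t = fine_tune(pretrained, train_t)
        scores[t] = 1.0 - test_accuracy(model_t, test_t)
    return scores
```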
#### 3.3.2 EdgeRank Score
Based on the GInX score, we can compute the power of explainability methods to rank edges, i.e., to correctly order edges based on their importance. The edge ranking power can be evaluated with the EdgeRank score defined as:
\[\text{EdgeRank}=\sum_{t=0,0.1,\ldots,0.8}(1-t)\times(\text{GInX}(t+0.1)-\text{GInX}(t)) \tag{4}\]
A high edge ranking score indicates methods that assign the highest importance weights to the most genuinely informative edges for the model. This is especially important when you try to characterize an explanation and identify fundamental entities within the explanatory substructure.
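Given a GInX curve sampled at \(t=0,0.1,\ldots,0.9\), the EdgeRank score of (4) reduces to a weighted sum of successive differences, as in this small Python sketch; the two synthetic curves are ours, used only to show that early degradation is rewarded.

```python
# Sketch of Eq. (4); `ginx` maps t (one decimal place) to GInX(t).

def edgerank(ginx):
    ts = [round(0.1 * i, 1) for i in range(9)]  # t = 0.0, 0.1, ..., 0.8
    return sum((1 - t) * (ginx[round(t + 0.1, 1)] - ginx[t]) for t in ts)

# A method whose top edges hurt accuracy early ranks above one that only
# degrades the model once low-ranked edges are removed.
early = {round(0.1 * i, 1): min(1.0, 0.3 * i) for i in range(10)}
late = {round(0.1 * i, 1): max(0.0, 0.3 * (i - 6)) for i in range(10)}
print(edgerank(early) > edgerank(late))  # True
```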
## 4 Experimental results
In the following section, we propose to validate the GInX-Eval procedure and show its superiority over the widely used faithfulness metric. We support our claims with well-chosen experiments.
**Experimental setting.** Explainability methods were evaluated on two synthetic datasets, BA-2Motifs and BA-HouseGrid, three molecular datasets, MUTAG, Benzene, and BBBP, and MNISTbin. They all have ground-truth explanations available, except for the BBBP dataset. We test two GNN models: GIN (Hu et al., 2020) and GAT (Velickovic et al., 2018), because they score high on the selected real-world datasets, with a reasonable training time and fast convergence. For the two synthetic datasets, we only use GIN, since the GAT model does not give good accuracy. Further details on the datasets, GNN training parameters, and time are given in Appendix B. We compare non-generative methods, including the heuristic Occlusion (Zeiler and Fergus, 2014), gradient-based methods Saliency (Baldassarre and Azizpour, 2019), Integrated Gradient (Sundararajan et al., 2017), and Grad-CAM (Pope et al., 2019), and perturbation-based methods GNNExplainer (Ying et al., 2019), PGMExplainer (Vu and Thai, 2020), and SubgraphX (Yuan et al., 2021). We also consider generative methods: PGExplainer (Luo et al., 2020), GSAT (Miao et al., 2022), GraphCFE (CLEAR) (Ma et al., 2022), D4Explainer, and RCExplainer (Bajaj et al., 2021). For more details on the differences between generative and non-generative explainers, we refer the reader to Appendix B.5. We compare those explainability methods to base estimators: Random, Truth, and Inverse. Random assigns random importance to edges following a uniform distribution. Truth estimates edge importance as the pre-defined ground-truth explanations of the datasets. The Inverse estimator corresponds to the worst-case scenario where edges are assigned the inverted ground-truth weights. If \(w_{i,j}\) is the ground-truth importance of the edge connecting nodes \(i\) and \(j\), the weight assigned by the Inverse estimator is equal to \(1-w_{i,j}\).
### The Out-Of-Distribution Faithfulness Evaluation
The biggest limitation of the faithfulness metrics is the so-called OOD problem. The generated explanations are out-of-distribution, i.e. they lie outside the data distribution and "fool" the underlying predictor to change the original class, i.e., \(f(h(G))\neq f(G)\). Whereas, in factual explainability scenarios, we expect the explanatory graph \(h(G)\) to have the same class as the input graph \(G\), i.e., \(f(h(G))=f(G)\). Figure 2 illustrates the OOD problem: the extracted model embeddings of explanations of toxic molecules are more similar to the ones of non-toxic molecules. In this case, the model predicts the explanatory subgraphs to be non-toxic while they are valid toxic molecular
fragments. The model prediction is altered not necessarily because we keep only the important entities but also because the model lacks knowledge about these new explanatory graphs. Therefore, the faithfulness score which definition is based on the model predictions of explanations, does not entirely capture the quality of explanations and is ill-suited to evaluate explainability methods.
**Observation 1**_The faithfulness metric is not consistent with the accuracy score._ In figure 3, there is a general misalignment in the rankings of explainers and base estimators on faithfulness or AUC score. For all datasets but Benzene, the Truth estimator, whose accuracy is maximal, has a small faithfulness score \(\sim 0.5\). For MNISTbin, Inverse is by far the best according to the faithfulness score while being the worst explainer by definition on the AUC score. For BA-2Motifs, Random has the highest faithfulness score but can only be 50% accurate by definition. Due to the OOD problem of faithfulness, we cannot decide if the model is fooled by the subgraphs induced by the most informative edges or if human-based and model-based evaluations disagree. Therefore, we cannot
Figure 3: Rankings of base estimators and non-generative explainability methods according to the faithfulness score computed on soft explanations, the faithfulness score on hard explanations, and the AUC score. The AUC ranking is only reported for datasets with ground-truth explanations. Baselines were evaluated on the full explanatory masks, while explainability methods were evaluated on the truncated explanations, keeping the top 10 important undirected edges.
As a result, we cannot rely on the faithfulness evaluation to draw general conclusions about the explainability methods. We compare the rankings of explainability methods according to the faithfulness evaluated on the two types of explanations, Hard Fidelity and Soft Fidelity respectively, and the accuracy score, defined as the AUC score to stay consistent with previous work (Longa et al., 2022).
Figure 2: Illustration of the out-of-distribution problem: explanations of a toxic molecule lie closer to the non-toxic molecular representations. Graph embeddings were extracted after the readout layer of the pre-trained GIN model for the MUTAG dataset. We use both t-SNE and UMAP to project the embeddings into 2D representations. Both projection methods show the existence of out-of-distribution explanations.
quantify the alignment between human and model explainability.
**Observation 2**_The evaluation of the explainability methods with the faithfulness metric is not consistent across datasets._ Figure 3 shows no consensus on the top-3 methods across datasets on the soft faithfulness or hard faithfulness score. For instance, we observe that GradCAM and Occlusion have the highest Soft Fid scores for BA-House-Grid, MUTAG, Benzene, and MNISTbin, but not for BA-2Motifs and BBBP where Truth, Random, and GNNExplainer outperform. For Hard Fid, the results are also very heterogeneous among the six datasets. Due to the OOD problem, we cannot conclude that those inconsistencies across datasets are related to differences inherent to the graph data itself, e.g., differences in graph topology, size, or density among the datasets.
**Observation 3**_The faithfulness metric is not consistent across edge removal strategies._ In figure 3, the top-3 rankings for Soft Fid and Hard Fid are always different, except for the Benzene dataset. This means that the edge removal strategy influences the model perception: the model does not predict labels based only on the information contained in the explanatory edges but also based on the structure of the given explanations. Because of the OOD problem, we cannot decide whether those inconsistencies come from the explainability methods themselves: methods that produce disconnected explanations are penalized by the hard removal strategy because the GNN model cannot pass messages across the broken connections.
### Validation of GInX-Eval Procedure
We validate the GInX-Eval procedure on the BA-HouseGrid synthetic dataset because its ground-truth explanations, i.e., house and grid motifs, are well-defined and class-specific. In the binary classification setting, graphs are labeled 1 if they have grid motifs and 0 if they have house motifs attached to the main Barabasi graph. We test three explainability baselines: the Random explainer, which assigns values in \([0,1]\) following a uniform distribution; the Truth estimator, which assigns the ground-truth explanations; and the Inverse estimator, which returns the inverse ground-truth explanations and is, therefore, the worst possible estimator.
In Figure 4, GInX-Eval distinguishes the three methods: we observe a sharp decrease for the Truth explainer after 10% edge removal, while the Inverse estimator does not degrade the model performance, and the Random baseline starts decreasing after 20% of the edges are removed. Without re-training, all base importance estimators lead to a degradation of model performance; therefore, evaluating without retraining the model cannot reflect the true explanatory power of the methods.
### Evaluating with GInX-Eval
#### 4.3.1 Overview
GInX-Eval evaluates to what extent removing explanatory edges degrades the model accuracy. We adopt the hard selection strategy to remove edges. Even if conclusions are similar for both selection strategies (see Appendix A.3 and C.2), the degradation is of the order of \(10^{-1}\) with hard selection versus \(10^{-2}\) for soft selection. For visualization purposes, we prefer conveying here results with the hard selection. We refer the reader to Appendix C.2 for additional results with the soft selection.
Figure 5 shows how the GInX score increases when we remove a fraction \(t\in[0.1,...,0.9]\) of the most important edges according to multiple explainability methods. For clarity, we choose to display
Figure 4: Comparison between not fine-tuning the GNN model and GInX-Eval on the BA-HouseGrid dataset. Without fine-tuning, the model’s performance also decreases for the worst estimator Inverse where uninformative edges are removed first, preventing a correct evaluation of explainability methods. However, for GInX-Eval where the model is fine-tuned on modified datasets, we observe no test accuracy degradation for the worst-case estimator Inverse.
a smaller set of explainability methods. For more methods, we refer the reader to Appendix C.1. We first observe that model performance is remarkably robust to graph modification for the four real-world datasets, with a maximum growth of the GInX score of 30% observed for the Benzene dataset. For the synthetic datasets, removing many edges leads to a random assignment of labels by the model. In real-world datasets, GNN models might be able to capture high-level information even with absent connections.
We note a particularly small increase of the GInX score for MNISTbin, i.e., of the order of \(10^{-2}\). For this dataset, the GNN model is robust to edge modification: after removing most of the edges from the input graph, the model retains most of its predictive power. The reason might be that node and edge features are more important for the prediction than the graph structure itself for this dataset.
#### 4.3.2 GInX-Eval of Base Estimators
_Is the ground-truth explanation meaningful to the model?_ The Truth and the Inverse edge importance estimators are evaluated on all datasets except BBBP which has no ground-truth available. We observe in figure 5 that the GInX score stays constant for Inverse and drops significantly for Truth. We conclude that the explanations generated with Inverse have only uninformative edges for the model, while the ground-truth edges contain crucial information for making correct predictions. GInX-Eval is a useful tool to validate the quality of provided ground-truth explanations of published graph datasets.
_Is a random assignment of edge importance informative to the model?_ For all datasets except Benzene, the Random baseline leads to a degradation similar to that of the Truth estimator in figure 5. There are two reasons for this. First, random explanations contain a few edges present in the ground-truth explanation; removing just these few edges makes the GInX score increase sharply because of the strong correlations that exist among informative edges. Second, truly informative edges might be correlated with other random edges, so removing edges randomly affects the capacity of the model to correctly learn important patterns in the graph.
_Are explanations obtained with graph explainability methods better than a random guess?_ We observe that a random edge modification removes more informative edges than GradCAM, Integrated Gradient, Occlusion, RCExplainer, and PGExplainer. Therefore, those methods are not better than Random.
Figure 5: GInX scores of a fine-tuned GIN model on graphs with increasing fraction of removed edges. The removed edges are the most important based on explainability methods, and new input graphs are obtained with the _hard_ selection, i.e., explanatory edges are strictly removed from the graph so that the number of edges and nodes is reduced. For more methods, see Appendix C.1.
GInX-Eval identifies how informative ground-truth explanations are for the model, thus assessing the agreement between model- and human-based explanations, and draws attention to how meaningful random explanations can be to the model.
#### 4.3.3 GInX-Eval of Explainability methods
_What fraction \(t\) of removed edges should I fix to compare the GInX scores of multiple explainability methods?_ Methods produce explanations of different sizes: some methods constrain their explanations to be sparse, while others assign importance weight to almost all edges in the graph. Figure 6 indicates the heterogeneity of masks generated by different explainability methods. While Truth, PGMExplainer, SubgraphX, and GraphCFE constrain their explanations to be sparse, the rest of the methods include most of the edges in the explanations, assigning a different importance weight to each edge.
The _critical threshold_\(t_{m}^{e}\) of a method \(m\) is the ratio of non-zero values in the masks. Beyond this critical threshold, we are not evaluating the method anymore but a random assignment of edge importance weights. Therefore, it is crucial to compare methods at a threshold \(t\) smaller than the minimum of the methods' critical thresholds. To compare methods, we propose to define the dataset's _optimal threshold_\(t^{*}\) such as \(t^{*}=min_{m\in\mathcal{M}}\{t_{m}^{e}\}\), where \(\mathcal{M}\) denotes the set of explainability methods. The optimal threshold corresponds to the threshold closest to the average mask sparsity of ground-truth explanations. In other words, we take as reference the size of ground-truth explanations as the optimal number of informative edges in the graph and compare methods at this sparsity threshold. We compute the optimal thresholds for the six datasets and report them in table 1. Only the BBBP dataset has no ground-truth explanation available so we set \(t^{*}=0.3\) to have human-intelligible sparse explanations.
\begin{table}
\begin{tabular}{l|c c} Dataset & Truth sparsity & Optimal threshold \\ \hline
**BA-2Motifs** & \(0.216\) & \(0.3\) \\
**BA-HouseGrid** & \(0.065\) & \(0.1\) \\
**Benzene** & \(0.175\) & \(0.2\) \\
**MNISTbin** & \(0.235\) & \(0.3\) \\
**MUTAG** & \(0.039\) & \(0.1\) \\ \end{tabular}
\end{table}
Table 1: Truth mask sparsity values for each dataset and the deduced optimal thresholds.
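As a toy illustration of these definitions (our own helper names): the critical threshold of a method is the fraction of non-zero entries in its explanatory masks, and the dataset's optimal threshold \(t^{*}\) is the minimum over the compared methods.

```python
# Critical threshold: ratio of non-zero values in a method's explanatory mask;
# optimal threshold t*: the minimum critical threshold across methods.

def critical_threshold(mask):
    return sum(1 for w in mask if w != 0) / len(mask)

def optimal_threshold(masks_by_method):
    return min(critical_threshold(m) for m in masks_by_method.values())

masks = {"sparse_method": [0.9, 0.0, 0.0, 0.0],
         "dense_method": [0.5, 0.2, 0.1, 0.0]}
print(optimal_threshold(masks))  # 0.25
```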
Figure 6: Mask sparsity of different explainability methods. A high sparsity indicates an explanatory mask with many zeros and small pre-processing explanatory subgraphs.
Figure 7: GInX scores at the optimal thresholds. For the BBBP dataset, we define an arbitrary optimal threshold \(t=0.3\). For the other datasets, the optimal threshold is estimated based on the explanatory mask sparsity generated by the Truth estimator. Figure 7 displays the GInX scores of explainability methods at the optimal threshold defined for each dataset. Except for the Benzene dataset, we observe that gradient-based methods and Occlusion have the smallest GInX scores at the optimal thresholds. Gradient-based methods contain less informative edges than GNNExplainer, PGMExplainer, and generative methods. This contradicts
observations made in figure 3 where gradient-based methods and Occlusion are always better than GNNExplainer and PGMExplainer. GInX-Eval unveils new insights on gradient-based methods that go against recent studies (Yuan et al., 2023; Agarwal et al., 2022). On the other hand, GNNExplainer, PGMExplainer, GSAT, and D4Explainer have competitive performance with Random and Truth baselines. This proves that generative methods are not necessarily better at capturing meaningful information than non-generative methods.
The GInX score at the optimal threshold helps filter out uninformative methods including gradient-based methods and Occlusion, and shows that methods can generate informative explanations independent of their generative nature.
#### 4.3.4 EdgeRank score of explainability methods
We use the EdgeRank score to evaluate the capacity of explainers to rank edges correctly according to their true informativeness for the model. In figure 8, we observe that gradient-based methods and Occlusion are not good at correctly ordering edges by their importance. This is another reason why they should not be used to generate meaningful explanations. We also observe that RCExplainer and PGExplainer which perform well on the GInX score have a low edge ranking power, except for the BA-HouseGrid dataset. These two methods can capture the most informative edges but cannot decide what the relative importance of those important edges is. Finally, PGMExplainer, GNNExplainer, GraphCFE, and D4Explainer have both a high GInX score (see figure 5) and a high EdgeRank score, making them the best choice for informative and edge-rank powerful explainers.
## 5 Discussion
This work discusses the pitfalls of faithfulness, one of the most popular metrics in xAI, and the problem of out-of-distribution explanations. Overcoming these limitations, our evaluation procedure GInX-Eval measures the informativeness of explainability methods and their ability to accurately rank edges by their importance for the GNN model. Observing the prediction change, GInX-Eval assesses the impact of removing the generated explanations from the graphs. It gets around the issue of OOD explanations by fine-tuning the GNN model. GInX-Eval is a useful tool to validate the quality of the provided ground-truth explanations. It also demonstrates the poor informativeness of gradient-based methods, contradicting results from recent studies (Yuan et al., 2023; Agarwal et al., 2022) and reproduced in this paper. Combining the GInX and EdgeRank scores, we can filter out uninformative explainability methods and find the optimal ones. Because GInX-Eval relies on a fine-tuning strategy of pre-trained black-box models, our method can easily be used for models only accessible via API calls, including large language models. Due to the computation cost of retraining, GInX-Eval is not meant to be used systematically but is designed as a validation tool for new metrics. This work paves the way for developing approaches that conform with both human- and model-based explainability.
Figure 8: Edge ranking power of explainability methods. |
2309.05263 | Brain-inspired Evolutionary Architectures for Spiking Neural Networks | The complex and unique neural network topology of the human brain formed
through natural evolution enables it to perform multiple cognitive functions
simultaneously. Automated evolutionary mechanisms of biological network
structure inspire us to explore efficient architectural optimization for
Spiking Neural Networks (SNNs). Instead of manually designed fixed
architectures or hierarchical Network Architecture Search (NAS), this paper
evolves SNNs architecture by incorporating brain-inspired local modular
structure and global cross-module connectivity. Locally, the brain
region-inspired module consists of multiple neural motifs with excitatory and
inhibitory connections; Globally, we evolve free connections among modules,
including long-term cross-module feedforward and feedback connections. We
further introduce an efficient multi-objective evolutionary algorithm based on
a few-shot performance predictor, endowing SNNs with high performance,
efficiency and low energy consumption. Extensive experiments on static datasets
(CIFAR10, CIFAR100) and neuromorphic datasets (CIFAR10-DVS, DVS128-Gesture)
demonstrate that our proposed model boosts energy efficiency, achieving
consistent and remarkable performance. This work explores brain-inspired neural
architectures suitable for SNNs and also provides preliminary insights into the
evolutionary mechanisms of biological neural networks in the human brain. | Wenxuan Pan, Feifei Zhao, Zhuoya Zhao, Yi Zeng | 2023-09-11T06:39:11Z | http://arxiv.org/abs/2309.05263v1 | # Brain-inspired Evolutionary Architectures for
###### Abstract
The complex and unique neural network topology of the human brain formed through natural evolution enables it to perform multiple cognitive functions simultaneously. Automated evolutionary mechanisms of biological network structure inspire us to explore efficient architectural optimization for Spiking Neural Networks (SNNs). Instead of manually designed fixed architectures or hierarchical Network Architecture Search (NAS), this paper evolves SNNs architecture by incorporating brain-inspired local modular structure and global cross-module connectivity. Locally, the brain region-inspired module consists of multiple neural motifs with excitatory and inhibitory connections; Globally, we evoke free connections among modules, including long-term cross-module feedforward and feedback connections. We further introduce an efficient multi-objective evolutionary algorithm based on a few-shot performance predictor, endowing SNNs with high performance, efficiency and low energy consumption. Extensive experiments on static datasets (CIFAR10, CIFAR100) and neuromorphic datasets (CIFAR10-DVS, DVS128-Gesture) demonstrate that our proposed model boosts energy efficiency, archiving consistent and remarkable performance. This work explores brain-inspired neural architectures suitable for SNNs and also provides preliminary insights into the evolutionary mechanisms of biological neural networks in the human brain.
Evolutionary Neural Architecture Search, Spiking Neural Networks, Brain-inspired Module and Long-term Connection, Neural Circuit Motifs, Efficient Multi-objective Evolution
## I Introduction
After millions of years of evolution, neurons with complex information processing capabilities and microcircuits with specific functions have emerged in the human brain, empowering it to be a powerful device. As the third generation of neural network, Spiking Neural Network (SNN) [1] realizes a low energy consumption and high-efficiency computing paradigm by simulating the characteristics of neuron communication in the brain. However, existing SNN research employs fixed architectures that lack references to brain-inspired topological properties, limiting their performance on multiple tasks.
Neural Architecture Search (NAS) attempts to replace the experience-based manual design of architectures with an automated searching approach. Studies have shown that it even outperforms architectures designed with the expertise of human experts on some tasks [2, 3]. As a kind of optimization scheme in NAS, Evolutionary Neural Architecture Search (ENAS) is promising due to its insensitivity to local optimal values and without a large number of computing resources compared to reinforcement learning (RL)-based algorithms [4, 5, 6]. Based on the current understanding of the evolutionary structure of the brain, this paper is committed to exploring the evolution of brain-inspired local and global neural circuits for optimizing SNN architecture and function.
In ENAS, when the population and the number of iterations are large, it is very time-consuming to train SNN models one by one to evaluate the performance of individuals. Thus efficient evaluation methods are particularly important, and research typically relies on weight-sharing [7, 8], performance predictors [9, 10], or zero-shot methods [11, 12]. The weight-sharing method is also called the one-shot method, and only one supernet needs to be trained in the entire algorithm process. However, training to convergence takes a long time, and it may encounter the problem of performance collapse, which may not reflect real network fitness. Zero-cost evaluation methods require no training and have extremely low time costs, but only roughly measuring attributes positively related to architectural performance may also lead to inaccurate rankings of fitness. As a few-shot method, a performance predictor trains a regression model to directly predict the performance of unknown architectures based on information from some trained architectures [9, 10, 13, 14, 15] and can be divided into online and offline according to whether the predictor is updated during the NAS process. Offline predictors only sample limited architectures once, and no new information is considered after construction. Therefore, the online predictor is more practical and flexible, able to continuously improve the evaluation accuracy by leveraging the existing knowledge [15, 16, 17].
A large amount of ENAS research is based on deep neural networks (DNNs), and existing works on evolving SNN architectures are still very limited. [18] proposes a search model named AutoSNN to evolve more energy-efficient architectures. It defines an artificially formulated weight factor to jointly optimize the number of spikes and classification accuracy, which is still essentially a single-objective optimization without manifesting the coordination of the two contradictory |
2309.16425 | Feed-forward and recurrent inhibition for compressing and classifying
high dynamic range biosignals in spiking neural network architectures | Neuromorphic processors that implement Spiking Neural Networks (SNNs) using
mixed-signal analog/digital circuits represent a promising technology for
closed-loop real-time processing of biosignals. As in biology, to minimize
power consumption, the silicon neurons' circuits are configured to fire with a
limited dynamic range and with maximum firing rates restricted to a few tens or
hundreds of Hertz.
However, biosignals can have a very large dynamic range, so encoding them
into spikes without saturating the neuron outputs represents an open challenge.
In this work, we present a biologically-inspired strategy for compressing
this high-dynamic range in SNN architectures, using three adaptation mechanisms
ubiquitous in the brain: spike-frequency adaptation at the single neuron level,
feed-forward inhibitory connections from neurons belonging to the input layer,
and Excitatory-Inhibitory (E-I) balance via recurrent inhibition among neurons
in the output layer.
We apply this strategy to input biosignals encoded using both an asynchronous
delta modulation method and an energy-based pulse-frequency modulation method.
We validate this approach in silico, simulating a simple network applied to a
gesture classification task from surface EMG recordings. | Rachel Sava, Elisa Donati, Giacomo Indiveri | 2023-09-28T13:22:51Z | http://arxiv.org/abs/2309.16425v1 | Feed-forward and recurrent inhibition for compressing and classifying high dynamic range biosignals in spiking neural network architectures
###### Abstract
Neuromorphic processors that implement Spiking Neural Networks (SNNs) using mixed-signal analog/digital circuits represent a promising technology for closed-loop real-time processing of biosignals. As in biology, to minimize power consumption, the silicon neurons' circuits are configured to fire with a limited dynamic range and with maximum firing rates restricted to a few tens or hundreds of Hertz. However, biosignals can have a very large dynamic range, so encoding them into spikes without saturating the neuron outputs represents an open challenge. In this work, we present a biologically-inspired strategy for compressing this high-dynamic range in SNN architectures, using three adaptation mechanisms ubiquitous in the brain: spike-frequency adaptation at the single neuron level, feed-forward inhibitory connections from neurons belonging to the input layer, and Excitatory-Inhibitory (E-I) balance via recurrent inhibition among neurons in the output layer. We apply this strategy to input biosignals encoded using both an asynchronous delta modulation method and an energy-based pulse-frequency modulation method. We validate this approach _in silico_, simulating a simple network applied to a gesture classification task from surface EMG recordings.
Neuromorphic, Spiking Neural Network, signal compression, E-I balance, EMG
## I Introduction
Closed-loop systems allow personalized healthcare devices to detect unexpected changes continuously, and to provide feedback stimulation or control signals to maintain a desired state. Thanks to their ultra-low power consumption, biologically plausible time constants, and inherent parallelism, neuromorphic technologies provide ideal characteristics for real-time closed-loop interactions with the biological system. Neuromorphic solutions enable always-on continuous monitoring of physiological parameters in a wide range of wearable applications, such as Electrocardiography (ECG) anomaly detection [1, 2, 3], High Frequency Oscillation (HFO) detection [4, 5], Electromyography (EMG) decoding for prosthetic control [6, 7, 8], and nervous system interfacing [9].
However, processing biosignals can be challenging because they have a high dynamic range in their frequency components and amplitude, often covering over 3 orders of magnitude. For example, in ECG, the recorded amplitude includes both large peaks from the heartbeat itself and small fluctuations related to specific conditions [3]. In the frequency domain, different bands are often associated with different underlying physiological processes, which are superimposed at the sensor interface [10]. For example, ECG and EMG signals capture both low-frequency drift in heart rate variability or muscular posture (0 - 5 Hz) and rapid high-frequency electrical events (\(>\)1 kHz) [11].
As one core function of the brain is to decipher real-world events from perceptual information, it is often required to encode and process exceptionally high dynamic range signals (for example, auditory neurons encode sounds from infrasound (\(<\)20 Hz) to ultrasound (\(>\)20 kHz)) [12]. To discriminate stimuli over this large frequency range, the brain employs various structural, biochemical, and cortical mechanisms of adaptation. Among them are excitatory-inhibitory (E-I) balanced neural networks, which develop a near-constant ratio between inhibitory and excitatory synaptic currents in each cell. This renders them able to suppress excessive firing for high-frequency inputs and facilitate activation for low-frequency inputs [13] (see Fig. 1). The resulting network activity is sparse and decorrelated, which minimizes information redundancy and leads to a more energy-efficient encoding [14]. A second neural adaptation mechanism is feed-forward inhibitory connections, which sharpen the temporal precision of the target neurons by narrowing the window for suprathreshold summation of excitatory inputs by hyperpolarizing the membrane potential shortly after the onset of excitation. This enhances signal discrimination and prevents saturation in neural circuits [15]. Finally, at the cellular level, neurons exhibit spike-frequency adaptation as an intrinsic mechanism to prevent premature saturation: Calcium ion accumulation during prolonged or repetitive output spikes activates potassium channels, generating a hyperpolarizing current that reduces the neuron's excitability and firing rate. This mechanism preserves the dynamic range of neuronal responses and ensures responsiveness to changing input signals [16], and has been shown to be equivalent to Sigma-Delta encoding schemes [17], which enable accurate and efficient time-domain classification [18].
In this paper, we present a spiking neural network (SNN) architecture implemented to address frequency mismatch for biosignal processing and to facilitate a wide range of classification tasks. Through exploiting inhibitory adaptation mechanisms ubiquitous in the brain, this ultra-compact network
is maintained in an effective operating range, independently of the input dynamic range. To validate the behavior of the network, we show its implementation for the processing of EMG signals for hand gesture classification. We are targeting EMG signals due to their frequency (a few Hz - 1 kHz) and amplitude (a few \(\mu\)V - 10 mV) ranges. In addition, for the first time, we show a comparison of the signal-to-spike conversion methods required to feed the analog inputs to the SNN. We present an asynchronous delta modulation method (mainly sensitive to changes in the input signals) [4] and an energy-based method (sensitive also to the DC level of the input signals).
## II Network implementation
The network proposed has been designed to be compatible with a wide range of mixed-signal neuromorphic processing chips. However, to assess its biological plausibility and robustness for deployment on-chip, we first simulated it using neural and synaptic models based on the Dynamic Neuromorphic Asynchronous Processor (DYNAP-SE) [19].
### _Neuromorphic behavioral simulator_
The SNN was simulated using the spiking simulator Brian2 [20] using custom equations and parameters that faithfully model the transistor and circuit properties present in the DYNAP-SE chip. In particular, neural and synaptic dynamics were modeled based on the Differential Pair Integrator (DPI) circuit, which implements a first-order log-domain filter with non-linear properties that are useful to reproduce short-term plasticity effects [21]. The DYNAP-SE neurons are connected on-chip by synapses of classes NMDA (slow excitatory), AMPA (fast excitatory), GABA-A (fast inhibitory), and GABA-B (slow inhibitory). The neurons are divided into four cores, and the properties of the neurons of each core as well as each synapse class are tunable via the selection of relevant circuit parameters (e.g. leak and gain currents). Arbitrary subsets of the 256 available neurons may be connected with synapses of pre-specified weight.
### _Signal pre-processing_
An EMG dataset was previously collected for 3 gestures from the Roshambo game (rock, paper, scissors) using the Myo armband by Thalmic Labs Inc (8 forearm sensors sampled at 200 Hz) from 10 able-bodied subjects over 3 sessions, each containing 5 repetitions of 2 \(seconds\) of gesture [22]. Between each session, a relaxing phase of 1 \(second\) was introduced to remove any residual muscular activation.
The recordings were then segmented into 200 \(ms\) windows (an accepted latency in prosthetic control [23]) and shuffled to generate the training and test datasets. To reduce the impact of label bleeding (where muscular activity persists in the neighboring rest period), a function was employed to exclude windows that resemble the adjacent state more than the current one. Oversampling was then employed to balance the class occurrences.
To serve as input to the SNN, raw voltage traces were converted into the event-based domain. During the conversion into spikes, it is crucial to maintain the original signal information. With this aim, for the first time, we applied and compared two bioinspired approaches.
### _Asynchronous Delta Modulation_
The high-frequency Asynchronous Delta Modulation (ADM) algorithm, based on a delta modulator circuit [24], was applied to transform the continuous EMG signals into two digital pulse outputs ('UP' and 'DOWN') according to the sign of the signal derivative (positive and negative, respectively) (Fig. 2). The parameters were optimized by a grid search to minimize the Root Mean Square Error (RMSE) between the original signal and a reconstruction from the ADM spike trains. The final ADM parameters are a threshold of 0.8 (the sensitivity for the spike conversion), a refractory period of 10 \(\mu s\) (a period of system inactivity after the generation of a spike), and an interpolation factor of 3500 (a factor required to increase the signal resolution). The resulting spike trains contain frequency elements from 0 - 8 kHz.
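To make the conversion concrete, the following is a minimal NumPy sketch of the delta-modulation idea described above; the function name, array conventions, and the event-by-event reconstruction level are our own assumptions, while the threshold and refractory values mirror those quoted in the text (in practice the signal is first interpolated by the factor above to reach kHz-range output rates):

```python
import numpy as np

def adm_encode(signal, dt, threshold=0.8, refractory=10e-6):
    """Asynchronous delta modulation: emit an UP (DOWN) event whenever the
    signal rises (falls) by `threshold` relative to the last event level."""
    up_times, down_times = [], []
    level = signal[0]        # reconstruction level tracked by the modulator
    next_allowed = 0.0       # end of the current refractory period
    for i, x in enumerate(signal):
        t = i * dt
        if t < next_allowed:
            continue
        if x - level >= threshold:       # positive derivative -> UP event
            up_times.append(t)
            level += threshold
            next_allowed = t + refractory
        elif level - x >= threshold:     # negative derivative -> DOWN event
            down_times.append(t)
            level -= threshold
            next_allowed = t + refractory
    return np.array(up_times), np.array(down_times)
```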
### _Energy-based Pulse Frequency Modulation_
An energy-based approach was devised as an alternative mechanism for spike conversion. The raw signal was filtered into 2 frequency bands split equally between 0 and 100 Hz, full-wave rectified, and injected as a time-varying current into a simple integrate-and-fire Brian2 neuron to achieve pulse-frequency modulation (Fig. 2). To ensure that the resulting firing rate is proportional to the energy of the original signal, the input values were scaled to best cover the linear range of the neuron's injected current (0 - 8 \(nA\)) vs firing rate (0 - 4 kHz) curve. Since, due to high-amplitude outliers, a direct compression of the original range to the target results in most values occupying the lowest fifth of the linear region, for each subject and session a unique scaling range was calculated to best redistribute the mass of the amplitude distribution, and employed across all channels to maintain the relative amplitude differences between electrodes. The resulting spike trains contain frequency elements from 0 - 3 kHz.
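A possible sketch of this pipeline is given below, using SciPy for the band filtering and a plain non-leaky integrate-and-fire loop. The `gain` and `v_th` values are illustrative assumptions (the paper instead tunes the injected-current range of a Brian2 neuron), and a band starting at 0 Hz is treated as a low-pass design:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pfm_encode(signal, fs, band, gain=4000.0, v_th=1.0):
    """Energy-based pulse-frequency modulation: band-pass filter, full-wave
    rectify, then drive a non-leaky integrate-and-fire unit so that the
    output spike rate tracks the band energy."""
    ny = fs / 2.0
    hi = min(band[1] / ny, 0.99)          # keep the normalized edge below 1
    if band[0] <= 0:                      # a band starting at DC is a low-pass
        b, a = butter(2, hi, btype="low")
    else:
        b, a = butter(2, [band[0] / ny, hi], btype="band")
    drive = np.abs(filtfilt(b, a, signal))       # rectified band activity
    drive = drive / (drive.max() + 1e-12)        # normalize into linear range
    v, spike_times = 0.0, []
    for i, x in enumerate(drive):
        v += gain * x / fs                       # simple non-leaky integration
        if v >= v_th:
            spike_times.append(i / fs)
            v = 0.0                              # reset after each spike
    return np.array(spike_times)
```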
### _Architecture and behavior tuning_
The SNN consists of an input population of 16 neurons, an excitatory population of 8 neurons which also serves as the classification output layer, a feed-forward inhibitory population of 16 neurons, and a recurrently-connected inhibitory population of 4 neurons (Fig. 3). Together, the inhibitory populations facilitate network-level spike-frequency adaptation (Fig. 1). In response to low-energy signals, patterns are propagated to the excitatory readout layer, the feed-forward population fires sparsely, and the E-I subnetwork converges on a low firing rate via mutual recurrent inhibition. For high-energy signals, the spike generator strongly activates the feed-forward population, which drives down activity in the readout layer. This layer also promotes its own inhibition via excitation of the E-I subnetwork, which progressively drives down its own activity to a steady state via self-inhibition. These complementary mechanisms maintain the readout layer in the ideal operating range.
Fig. 1: SNN-based frequency balancing mechanism, comprising a feed-forward inhibitory (FF) population and an excitatory-inhibitory balanced subnetwork (I, E).
The parameters of all populations (Table I) were progressively optimized via grid search to achieve linearity in the input-output curves of the classification-layer neurons in response to input patterns ranging from 0 to 8 kHz (Fig. 4). This range was selected to fit the higher maximum spike train firing rate resulting from the ADM conversion method.
Poisson spike trains at each frequency were employed in the network tuning to generate input-output curves (Fig. 4). To account for variability and device mismatch in future hardware implementations of the proposed network, we added external noise by sending additional 40 Hz Poisson spike trains through AMPA and GABA-B synapses to each neuron, combining fast and slow excitatory and inhibitory modulating currents.
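For reference, Poisson spike trains of this kind can be generated with a simple Bernoulli approximation; the time step below is our own choice and should be small relative to the inverse rate:

```python
import numpy as np

def poisson_spike_train(rate_hz, duration, dt=1e-4, rng=None):
    """Poisson spike times at a given mean rate, as used to probe the
    network's input-output curves (and, at 40 Hz, to emulate device
    mismatch noise)."""
    rng = rng or np.random.default_rng()
    n_steps = int(duration / dt)
    fired = rng.random(n_steps) < rate_hz * dt   # P(spike per bin) ~ rate * dt
    return np.nonzero(fired)[0] * dt
```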
### _Training and testing_
The EMG dataset was split into 200 ms windows and converted into two spike trains for each of the 8 electrodes (UP/DOWN or High/Low). The dataset was divided into an 80%-20% train-test split. After training, the weight connections between the inputs and the excitatory population (_Input-E_) were frozen, and the output neuron pair with the highest firing rate over each test window was taken as the prediction of the network.
We applied a supervised learning method to update the Input-E weights according to the delta learning rule: \(\Delta w_{ji}=\alpha(T_{j}-y_{j})x_{i}\). To accomplish this, two values were added to the DYNAP-SE neuron equations: a teacher value (\(T\)) (0.1 for the correct class, 0 for all others) and the instantaneous firing rate (\(y\)) (approximated by an exponential kernel [25] whose decay is defined by \(\tau_{x}\frac{dx_{\text{trace}}}{dt}=-x_{\text{trace}}\), where \(\tau_{x}\) is 50 ms). A low learning rate of 5e-4 was selected.
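As a sketch, the delta-rule update and the exponential firing-rate trace can be written as follows; the matrix layout and the spike increment of the trace are our own assumptions, not values given in the paper:

```python
import numpy as np

def delta_update(w, x_in, y_rate, teacher, lr=5e-4):
    """One delta-rule step for the Input-E weights.
    w:       (n_inputs, n_outputs) weight matrix
    x_in:    presynaptic activity traces x_i
    y_rate:  instantaneous output firing-rate estimates y_j
    teacher: target vector T_j (0.1 for the correct class, else 0)."""
    # dw_ji = lr * (T_j - y_j) * x_i, applied as an outer product
    w += lr * np.outer(x_in, teacher - y_rate)
    return w

def update_trace(x_trace, spiked, dt, tau=0.05):
    """Exponential firing-rate trace: tau * dx/dt = -x, bumped on spikes."""
    x_trace += dt * (-x_trace / tau)
    x_trace[spiked] += 1.0 / tau   # illustrative spike increment
    return x_trace
```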
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline \multicolumn{2}{|c|}{**Synapse**} & \multicolumn{2}{c|}{**Neuron**} \\ \hline Inp-FF & 0.5 & Membrane capacitance & 2 \(pF\) \\ FF-E & 3.8 & Refractory period (E) & 3 \(ms\) \\ Inp-E & trained & Refractory period (I) & 1 \(ms\) \\ I-E & 3.0 & Refractory period (FF) & 1 \(ms\) \\ E-I & 1.7 & Leakage current & 5 \(pA\) \\ I-I & 0.5 & Spiking threshold current & 2000 \(pA\) \\ Leak current & 3 \(pA\) & Reset current & 1.2 \(pA\) \\ Gain current & 12 \(pA\) & Constant current injection & 1 \(pA\) \\ AMPA \(I_{\tau}\) & 10 \(pA\) & Gain current & 5 \(pA\) \\ GABA-B \(I_{\tau}\) & 10 \(pA\) & Adaptation \(I_{\tau}\) & 0.04 \(pA\) \\ NMDA \(I_{\tau}\) & 5 \(pA\) & Adaptation gain & 1.5 \\ \hline \end{tabular}
\end{table} TABLE I: Parameters of Adaptive Integrate-and-Fire (AIF) model neurons and synapses [21].
Fig. 4: Input-output curves for the full network with progressive addition of inhibitory neuron populations.
Fig. 3: Architecture of the full SNN.
Fig. 2: Input signal and the its conversion into spike trains by ADM (up and down channels) and energy-based method on frequency bands (low and high bands).
## III Results
### _Network behavior_
Figure 5 highlights the desired behaviors of each neural group in the final network: the _FF_ population fires in step with the spike generator, maintaining its target layer in a reasonable operating range by coupling the strength of excitatory input to the level of inhibition. The _recurrent-I_ population fires more dominantly than the readout layer, maintaining the inhibitory-dominated E-I network. Together, these result in the \(E\) population firing sparsely.
### _Classification accuracy_
Across all train-test cycles for a subset of subjects with the highest previously reported classification accuracies (2, 3, and 4) [22], the mean accuracy (three-fold validated) of the SNN was compared to a Support Vector Machine (SVM) trained either on root-mean-square (RMS) filtered data (window = 10 samples, optimized via grid search), which yielded the highest accuracy, or, as a better comparison for the spike conversion methods, on the raw electrode traces (kernel = RBF, c = 10, optimized via grid search).
To ascertain whether the network's behavior is generalizable across individuals, the ADM-SNN was trained and tested on all data from all subjects. The results of this training showed comparable accuracy to the individual sessions (65.04% \(\pm\)3.89%, n=3), suggesting the network's robust ability to abstract temporal features despite varied baseline activity levels.
### _Base-to-full analysis_
To quantify the contributions of each network element, the network was progressively constructed and tested on identical data (Subject 3.3). The plastic synaptic weights in networks without both inhibitory populations tend towards lower values over the course of training (Fig. 7), due to the saturation of the excitatory layer in the absence of frequency-modulating mechanisms.
_FF_ inhibitory neurons appear to most significantly reduce saturation in the output layer, and thus most improve the prediction accuracy (Table II), suggesting that the greatest hindrance to SNN classification is an inability to capture input patterns at higher firing frequencies. Excitatory-inhibitory mechanisms also serve this aim, but prove insufficient on their own.
## IV Conclusions
We proposed an SNN architecture that offers a low-latency solution to the processing of high dynamic range biosignals in neuromorphic networks. While spike conversion methods yield input spike trains with frequency components that would otherwise saturate the firing of the DYNAP-SE neurons in plain feed-forward networks, the proposed SNN architecture proved capable of compensating for the increased dynamic range of the input signal in simple classification tasks, making neuromorphic chips prime candidates for lightweight, long-lasting on-board electronics for bio-interfacing controllers.
\begin{table}
\begin{tabular}{c c c c c} \hline Base & +spike adapt & +E-I & +FF & Full \\ \hline
27.3\% & 47.1\% & 73.3\% & 82.5\% & 89.0\% \\ \hline \end{tabular}
\end{table} TABLE II: Classification accuracy for progressive network configurations (additions are relative to Base).
Fig. 5: Representative neuron population behaviors in response to 1000 Hz Poisson input.
Fig. 6: Mean accuracy percentage across subject and session for SVM trained on RMS data (window = 10), SVM trained on raw traces, ADM conversion + SNN, and PFM conversion + SNN.
Fig. 7: Weight evolution upon progressive network additions |
2309.13575 | Probabilistic Weight Fixing: Large-scale training of neural network
weight uncertainties for quantization | Weight-sharing quantization has emerged as a technique to reduce energy
expenditure during inference in large neural networks by constraining their
weights to a limited set of values. However, existing methods for
weight-sharing quantization often make assumptions about the treatment of
weights based on value alone that neglect the unique role weight position
plays. This paper proposes a probabilistic framework based on Bayesian neural
networks (BNNs) and a variational relaxation to identify which weights can be
moved to which cluster centre and to what degree based on their individual
position-specific learned uncertainty distributions. We introduce a new
initialisation setting and a regularisation term which allow for the training
of BNNs under complex dataset-model combinations. By leveraging the flexibility
of weight values captured through a probability distribution, we enhance noise
resilience and downstream compressibility. Our iterative clustering procedure
demonstrates superior compressibility and higher accuracy compared to
state-of-the-art methods on both ResNet models and the more complex
transformer-based architectures. In particular, our method outperforms the
state-of-the-art quantization method in top-1 accuracy by 1.6% on ImageNet using
DeiT-Tiny, with its 5 million+ weights now represented by only 296 unique
values. | Christopher Subia-Waud, Srinandan Dasmahapatra | 2023-09-24T08:04:28Z | http://arxiv.org/abs/2309.13575v3 | Probabilistic Weight Fixing: Large-scale training of neural network weight uncertainties for quantization
###### Abstract
Weight-sharing quantization has emerged as a technique to reduce energy expenditure during inference in large neural networks by constraining their weights to a limited set of values. However, existing methods often assume weights are treated solely based on value, neglecting the unique role of weight position. This paper proposes a probabilistic framework based on Bayesian neural networks (BNNs) and a variational relaxation to identify which weights can be moved to which cluster center and to what degree based on their individual position-specific learned uncertainty distributions. We introduce a new initialization setting and a regularization term, enabling the training of BNNs with complex dataset-model combinations. Leveraging the flexibility of weight values from probability distributions, we enhance noise resilience and compressibility. Our iterative clustering procedure demonstrates superior compressibility and higher accuracy compared to state-of-the-art methods on both ResNet models and the more complex transformer-based architectures. In particular, our method outperforms the state-of-the-art quantization method in top-1 accuracy by 1.6% on ImageNet using DeiT-Tiny, with its 5 million+ weights now represented by only 296 unique values. Code available at [https://github.com/subiawaud/PWFN](https://github.com/subiawaud/PWFN).
## 1 Introduction
Weight-sharing quantization is a technique developed to lower the energy costs associated with inference in deep neural networks. By constraining the network weights to take on a limited set of values, the technique can significantly reduce the data-movement costs within hardware accelerators, which represent the primary source of energy expenditure (DRAM read costs can be as much as 200 times higher than multiplication costs (Horowitz, 2014; Jouppi et al., 2017; Sze et al., 2017)). Storing the weight values close to computation and reusing them multiple times also becomes more feasible, thanks to the limited range of possible values. Various studies have explored the effectiveness of weight-sharing quantization, including (Ullrich et al., 2017; Subia-Waud and Dasmahapatra, 2022; Wu et al., 2018; Han et al., 2015, 2016).
Weight-sharing quantization approaches face a common challenge in determining how much a single weight can be moved to a cluster center. Traditional methods rely on the magnitude or relative movement distances between weight and cluster to decide which weights can be moved to which clusters (Han et al., 2015; Subia-Waud and Dasmahapatra, 2022), irrespective of the filter or layer in which the weight is located. However, this assumption neglects the fact that weights with the same value may have different roles in the network, and their placement within the architecture may affect their likelihood of being moved to a particular cluster. We posit that context-dependent movement of weights to clusters can, instead, better preserve the representational capacity of the
network while reducing its complexity. We build upon previous attempts (Wu et al., 2018; Achterhold et al., 2018) to use a probabilistic framework to do so. The idea is to use a probability distribution to capture the flexibility in weight values, informing clustering decisions which reduce the entropy of the network and lower the unique parameter count without performance degradation.
We look to make progress on this problem with the use of Bayesian neural networks and a variational relaxation to identify optimal cluster configurations for the compression of neural networks with a method we call probabilistic weight fixing networks (PWFN). By incorporating the insights from the previous works on minimizing the relative weight-to-cluster distance (Subia-Waud and Dasmahapatra, 2022) and favoring additive powers-of-two cluster centroid values (Li et al., 2019), we propose a novel initialization setting. Further, we discover that a simple regularization term which encourages maximal noise resilience is sufficient to prevent the variance of the weights' distribution from collapsing to zero. This regularization term facilitates the training of model-dataset pairings that were previously considered intractable when employing the variational Bayes-by-backprop approach (Blundell et al., 2015). These changes, when combined with a novel iterative clustering procedure, enable us to achieve superior compressibility. We can represent ResNet family models trained on the ImageNet dataset with fewer unique parameters and a reduced weight-space entropy while maintaining higher accuracy than the current state-of-the-art. Furthermore, we demonstrate the effectiveness of applying PWFN to transformer-based architectures, again achieving state-of-the-art results.
**Weight-sharing quantization.** Consider a neural network that consists of \(N\) weights \(\mathbf{w}=\{w_{1},w_{2},\dots,w_{N}\}\). Each weight is typically unique and can take up to \(2^{b}\) distinct values, where \(b\) is the bit-width of the number system used. However, this poses a challenge since each time a weight is used, at least one memory read is required. Since memory reads dominate the computational cost of inference, it is desirable to reduce them. Weight-sharing quantization addresses this challenge using a reduced pool of cluster centers \(\mathbf{c}=\{c_{1},c_{2},\dots,c_{k}\}\), with \(k\ll N\) and defining a map \(\mathbf{w}\rightarrow\mathbf{c}\), where each weight \(\mathbf{w}_{i}\in\mathbf{w}\) is mapped to a cluster center: \(w_{i}\mapsto c_{k}\in\mathbf{c}\).
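As an illustration of the mapping \(w_{i}\mapsto c_{k}\), a minimal sketch is given below (naive nearest-centre assignment; the assignment rule actually used by PWFN is developed in Section 3):

```python
import numpy as np

def assign_to_codebook(weights, centers):
    """Map each weight to its nearest cluster centre; the network then
    stores small codebook indices instead of full-precision values."""
    weights, centers = np.asarray(weights), np.asarray(centers)
    idx = np.abs(weights[:, None] - centers[None, :]).argmin(axis=1)
    return idx, centers[idx]   # indices for storage, values for compute
```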
Two considerations are pivotal for formulating this process: determining which values should be included in the reduced cluster pool \(\mathbf{c}\), and identifying the mapping of weights in \(\mathbf{w}\) to the clusters in \(\mathbf{c}\).
**Which values should be in the cluster pool?** Insights from the literature highlight that pre-trained weight distributions often adhere to a heavy-tailed distribution (Barsbey et al., 2021; Hodgkinson and Mahoney, 2021; Gurbuzbalaban et al., 2021). Moreover, hardware implementations show a preference for powers-of-two values for weights due to their cost-effective multiplication properties
Figure 1: An overview of the PWFN process.
[Vogel et al., 2019, Przewlocka-Rus and Kryjak, 2023, Lee et al., 2017]. While some methods advocate for exclusive use of powers-of-two as cluster centers [Zhou et al., 2017], others propose a more flexible approach, suggesting additive powers-of-two [Li et al., 2019b, Subia-Waud and Dasmahapatra, 2022] - but only if it doesn't compromise performance. In our study, we align with this perspective, populating the cluster pool with powers-of-two values, but making exceptions for additive powers-of-two when they enhance performance.
**How to identify which of the weights should be moved to which cluster?** Previous studies have shown that distance metrics can be utilized to determine which fixed clusters can accommodate certain weights without negatively affecting the performance. Weight-sharing clustering techniques can rely on these metrics to assign weights to clusters. The commonly used distance metrics include Euclidean distance [Wu et al., 2018] and Relative movement distance [Subia-Waud and Dasmahapatra, 2022]. However, these metrics implicitly assume that all weights with the same value should be treated equally, irrespective of their location in the network. This may not be valid, as moving a small (large) weight by a small (large) distance affects the classification outcome differently depending on where the weight is in the network. To overcome this limiting assumption, in this paper we apply a Bayesian neural network (BNN) training approach to obtain a metric to determine the allowed movement of weights to clusters.
## 2 Related Work
**Bayesian Neural Networks.** BNNs attempt to use the methods of Bayesian inference in modeling predictive problems. Rather than the weights in a network coming from point estimates (i.e., a single value for each weight), a BNN attempts to model many (ideally all) configurations of weight values throughout a network and make predictions, weighting each configuration by its probability. Exact Bayesian inference on the weights would require the computation of the integral \(P(\mathbf{y}|x,D)=\int P(\mathbf{y}|x,\mathbf{w})P(\mathbf{w}|D)d\mathbf{w}\) where predictions for each allowed \(\mathbf{w}\) are averaged over. Unfortunately, the marginalization over \(P(\mathbf{w}|D)\) is intractable for even simple networks, so approximations are needed. Approaches to this include Laplace approximation [MacKay, 1992], gradient MCMC [Welling and Teh, 2011], expectation propagation updates [Hernandez-Lobato and Adams, 2015], and variational methods such as Bayes-by-backprop (BBP) [Blundell et al., 2015]. Using dropout at inference time has also been shown to be a variational approximation [Gal and Ghahramani, 2016] which is much more amenable to scaling, albeit with reduced expressive power [Jospin et al., 2022]. BBP, of which our approach uses a variation, models each of the network weights as coming from a Gaussian distribution and uses the re-parameterization trick for gradient descent to learn the parameters of the weight distributions. We give the derivations to do so in the Appendix. BBP has been demonstrated to work well in more complex model-dataset combinations than other BNN approaches (aside from dropout) but is still not able to scale to modern architectures and problems such as the ImageNet dataset and Resnet family of networks [Jospin et al., 2022].
One reason for the limited applicability is that the weight prior, which is a scaled mixture of two zero-mean Gaussians [Blundell et al., 2015], is not a well-motivated uninformative prior [Fortuin, 2022]. Instead, we initialize our training from a pre-trained network, in the spirit of empirical Bayes, and propose a prior on weight variances that encourages noise-resilience. A noise-resilient network can be thought of as one with a posterior distribution with large variances at its modes for a given choice of weight coordinates. Thus, small changes in weight space do not meaningfully alter the loss. The corresponding characterization of the flat minima in the loss landscape is well supported in the optimization literature [Foret et al., 2020, Kaddour et al., 2022].
**Quantization.** Quantization is a technique used to reduce the number of bits used to represent components of a neural network in order to decrease energy costs associated with multiplication of 32-bit floating-point numbers. There are different approaches to quantization, such as quantizing weights, gradients, and activations. Clipping+scaling quantization maps the weight \(w_{i}\) to \(w^{\prime}_{i}=\mathrm{round}(\frac{w_{i}}{s})\), where \(\mathrm{round}()\) is a predetermined rounding function and \(s\) is a scaling factor learned channel-wise, layerwise or with even further granularity. Quantization can be performed post-training or with quantization-aware training [Jacob et al., 2018].
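For contrast with weight sharing, clipping+scaling quantization amounts to a sketch like the following (the clipping range and bit-width defaults here are illustrative assumptions):

```python
import numpy as np

def clip_scale_quantize(w, s, n_bits=8):
    """w -> round(w / s), clipped to the signed n-bit integer range;
    the scale s is typically learned per layer or per channel."""
    q_max = 2 ** (n_bits - 1) - 1
    q = np.clip(np.round(w / s), -q_max - 1, q_max)
    return q, q * s   # integer codes and the dequantized values
```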
Weight sharing quantization uses clustering techniques to group weights and fix the weight values to their assigned cluster centroid. These weights are stored as codebook indices, which enables compressed representation methods like Huffman encoding to compress the network further. Unlike
clipping+scaling quantization, weight sharing techniques share the pool of weights across the entire network. Several studies have proposed different methods for weight sharing quantization. For example, Zhou et al. [23] restrict layerwise rounding of weights to powers-of-two for energy-cheap bit-shift multiplication, while [11] suggest additive powers-of-two (APoT) to capture the pre-quantized distribution of weights better. [22] use a spectrally relaxed k-means regularization term to encourage the network weights to be more amenable to clustering and focus on a filter-row codebook for convolution. Similarly, [19] and [16] focus on quantizing groups of weights into single codewords rather than the individual weights themselves. [24] use the distance from an evolving Gaussian mixture as a regularization term to prepare the clustering weights. However, their approach is computationally expensive. In contrast, our formulation reduces computation by adding cluster centers iteratively when needed and only considering weights that are not already fixed for a regularization prior. A work closely related to ours is that of [1], where the authors formulate post-training quantization and pruning as a variational inference problem with a 'quantizing prior', but due to its optimization complexity and difficulties with weights not being drawn tightly enough into clusters, the method was only demonstrated to work on simple dataset-model combinations and for the case of ternary quantization. WFN [25] is a recent weight-sharing approach that uses minimization of the relative movement distance to determine which weights are to be clustered and has demonstrated that a low weight-space entropy and few unique weights with a single codebook can maintain accuracy. In our work, we go much further in reducing weight-space entropy and unique weight count by using a context-aware distance metric and probabilistic framework in determining which weights can be moved where. Additionally, almost all of the works reviewed do not quantize the first and last layers [11, 22, 23, 24, 25, 26, 27] in order to maintain performance and in some cases don't quantize the bias terms [11]; we challenge this view and attempt a full network quantization, leaving only the batch-norm and layer-norm parameters at full precision.
## 3 Pwfn
In PWFN, we follow \(T\) fixing iterations, each of which combines a training and a clustering stage, in order to reach a highly compressed/quantized network with a single whole-network codebook. We model each individual weight \(w_{i}\) as a draw from a distribution with learnable parameters and use these parameters as guidance for the clustering stage. We model each \(w_{i}\in\mathbf{w}\) as coming from a Gaussian distribution \(\mathcal{N}(\mu_{i},\sigma_{i})\), and during the training stage we use a form of BBP to train the weight distribution parameters \(\mathbf{\mu}=(\mu_{1},\ldots,\mu_{N})\) and \(\mathbf{\sigma}=(\sigma_{1},\ldots,\sigma_{N})\) both to minimize the task performance loss and to encourage the weight distributions to tell us exactly how much noise can be added to each weight without affecting performance. Both \(\mathbf{\mu}\) and \(\mathbf{\sigma}\) are trained with an additional regularization term that encourages larger values of \(\mathbf{\sigma}\), to counter the model reverting to the point estimates with \(\sigma_{i}=0\) that minimize the classification loss. During the clustering stage, we look to use this information to move the \(\mu_{i}\) values to one of a handful of cluster centers. We favour the cluster centers to be hardware multiplication-friendly powers-of-two or additive powers-of-two. After \(T\) iterations of training and clustering, each of the weights' distributions in the network will have its \(\mathbf{\mu}\) value centered on one of the \(k\) clusters \(\mathbf{c}\) in the codebook.
Figure 2: For DeiT small, we show a box plot of the entropies and unique counts per input channel for each Q, K, V by layer, with the mean of each layer (calculated across all attention heads) shown by the black lines.
After the \(T\) fixing iterations there are two options depending on the downstream usage of the network: either the network can be converted into point estimates, with the weights set to the exact \(\mu\) values, giving us a quantized network; or we can use the extra information given to us by modelling each weight as a distribution to quantify the uncertainty of a particular prediction. If, after multiple samples of \(\mathbf{w}\), a model changes its prediction for a fixed input, this tells us that there is uncertainty in these predictions, with this information being useful for practical settings.
**PWFN Training.** Consider a network parameterized by \(N\) weights \(\mathbf{w}=\{w_{1},...,w_{N}\}\). In PWFN, each weight \(w_{i}\) is not a single value but is instead drawn from a distribution \(w_{i}\sim\mathcal{N}(\mu_{i},\sigma_{i})\), and instead of learning the \(w_{i}\) directly, the learning process optimizes each \(\mu_{i}\) and \(\sigma_{i}\). In a forward pass, during training, we sample weight values \(w_{i}\) according to its distribution:
\[w_{i}=\mu_{i}+\sigma_{i}\epsilon,\ \epsilon\sim\mathcal{N}(0,1). \tag{1}\]
The forward pass is stochastic under fixed \(\mathbf{\mu},\mathbf{\sigma}\). If trained correctly, the \(\sigma_{i}\) values give us information about the amount of noise a particular weight \(w_{i}\) can handle without affecting performance. Said another way, if we can find a configuration \(\mathbf{w}=(\mathbf{\mu},\mathbf{\sigma})\) which maintains task performance despite the randomness introduced by the \(\sigma_{i}\) parameters, then we will know which of the corresponding weights can be moved and to what degree. In PWFN, we train \(\mathbf{\mu}\), \(\mathbf{\sigma}\) following the BBP optimization process (Blundell et al., 2015) with some changes both in terms of initialization and the priors on \(\mathbf{\mu}\) and \(\mathbf{\sigma}\).
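In a deep-learning framework, Eq. (1) is the standard reparameterization trick; a minimal PyTorch sketch is shown below (parameterizing \(\sigma\) through a softplus of a free parameter is a common BBP convention we adopt here, not something specified by the paper):

```python
import torch

def sample_weights(mu, rho):
    """Reparameterised draw w = mu + sigma * eps, with sigma kept positive
    via a softplus of the free parameter rho."""
    sigma = torch.nn.functional.softplus(rho)
    eps = torch.randn_like(mu)
    return mu + sigma * eps   # gradients flow into both mu and rho
```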
**Large \(\mathbf{\sigma}\) constraint for \(\mathbf{w}\).** Given the usual cross-entropy or other performance loss, there is a clear direction of travel during gradient descent towards having small \(\sigma_{i}\) values and less uncertainty in the network parameters. A prior on the distribution of weights is therefore needed to prevent the \(\mathbf{\sigma}=0\) point estimate solution being found, which would leave us with no weight movement information. In the original BBP set-up, the authors looked to prevent vanishing variance by regularising the distribution of weights according to a prior distribution of a mixture of zero-mean Gaussian densities with different variances (the parameters of the prior they find through an exhaustive search). The motivation for doing so was because the empirical Bayes approach didn't perform well, due to the network favoring updating these parameters over the posterior (since there are fewer), and the link to the successful spike-and-slab prior (Mitchell and Beauchamp, 1988), where a point mass at zero (the _spike_) is mixed with a broader distribution (the _slab_), favoring sparsity. Instead, we hypothesize that a _good_ network can handle the most noise injection whilst maintaining performance. These networks are likely more compressible since they have been trained to accept changes to their weight values without performance degradation during training.
Figure 3: The regularization term acts to stop the \(\sigma\) uncertainty values from collapsing to zero. This experiment is run using the Cifar10 dataset with ResNet-18, stopping after 30 epochs.
We attempt this by encouraging our \(\mathbf{\sigma}\) values to be large. Networks with large \(\mathbf{\sigma}\) have, probabilistically, more noise added to the \(\mathbf{\mu}\) values during training and so have to learn to have robust performance under such circumstances. We note that this acts as a push-pull relationship with the performance loss, which favours low \(\mathbf{\sigma}\) values. The motivation is that, much like \(L_{1}\) norms enforcing sparsity, this formulation will train the network to produce a large \(\sigma_{i}\) for noise-resilient parameter \(w_{i}\), whilst maintaining a noise-sensitive weight \(w_{j}\) to have a small \(\sigma_{j}\) despite the prior pull. The regularised loss function for training the training phases of the algorithm is:
\[-\log P(\mathcal{D}|\mathbf{\mu},\mathbf{\sigma})+\alpha\mathcal{L}_{\text{REG}}(\mathbf{ \sigma}), \tag{2}\]
where the regularization term is:
\[\mathcal{L}_{\text{REG}}(\mathbf{\sigma})=\sum_{i=1}^{N}\mathcal{L}(\sigma_{i})= -\sum_{i=1}^{N}(\sigma_{i}-S)\Theta(S-\sigma_{i}), \tag{3}\]
with \(\Theta(x)=1\) for \(x\geq 0\) and \(0\) otherwise. The \(\Theta\) function prevents the optimization from finding a network with a subset of \(\mathbf{\sigma}\) with infinitely large values and dominating the cross entropy term. \(S\) is thus a cutoff on how large the values in \(\mathbf{\sigma}\) can be. \(\alpha\) is a hyperparameter controlling the formation of a noise-resilient network where the _majority_ of the weights can receive noise injection without hurting performance, not just a few. In Figure 3 we illustrate the effect on the distribution of \(\mathbf{\sigma}\) under different \(\alpha\) values for a ResNet-18 trained on the Cifar-10 dataset. As we increase \(\alpha\) the \(\mathbf{\sigma}\) values no longer collapse to zero giving us information for downstream clustering.
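Eqs. (2)-(3) translate directly into code; the sketch below assumes \(\mathbf{\sigma}\) is held in a single tensor and that the task loss is cross-entropy:

```python
import torch

def sigma_regularizer(sigma, S):
    """Eq. (3): L_REG = -sum_i (sigma_i - S) * Theta(S - sigma_i).
    Penalises sigmas below the cutoff S; contributes nothing above it."""
    return -((sigma - S) * (sigma < S).float()).sum()

# Eq. (2): total loss = task loss + alpha * regulariser,
# with alpha and S as hyperparameters, e.g.
# loss = cross_entropy + alpha * sigma_regularizer(sigma, S)
```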
**Initialization using Relative Distance from Powers-of-two.** For each weight \(w_{i}\) we need to specify its prior distribution so as to derive the posterior using Bayesian updating. We assume that the posterior distribution is Gaussian with a diagonal covariance matrix: \(P(w_{i};\mu_{i},\sigma_{i})\) whose parameters \(\mu_{i},\sigma_{i}\) are trained using BBP. To initialize the prior distributions for \(\mu_{i}\) and \(\sigma_{i}\) we set \(P^{0}(\mu)=\prod_{i}P^{0}(\mu_{i})\) where \(P^{0}(\mu_{i})\propto\delta_{\mu_{i},w_{i}}\) for the pre-trained weight value \(w_{i}\). For a Gaussian posterior we would typically require an unknown \(\sigma\) to be drawn from a Gamma conjugate prior distribution. Instead, we set \(\sigma_{i}\) to be a known function of the \(\mu_{i}\) at initialization. In [20], relative distances to the preferred powers-of-two values were used to determine weight movement. To favour anchoring weights at powers of two, we set the standard deviations to be smallest (\(2^{-30}\)) at either edge of each interval between the nearest integer powers of two, \((2^{x_{i}}\leq\mu_{i}\leq 2^{x_{i}+1})\) for integer \(x_{i}\), and largest at the midpoint of the interval. We introduce a parabolic function \(\sigma_{i}(\mu_{i})\) as a product of relative distances of each pre-trained weight value (\(\mu_{i}\)) to the nearest lower and upper powers of two:
\[\sigma_{i}(\mu_{i})=(0.05)^{2}\left(\frac{|2^{x_{i}}-\mu_{i}|}{|2^{x_{i}}|} \right)\left(\frac{|\mu_{i}-2^{x_{i}+1}|}{|2^{x_{i}+1}|}\right), \tag{4}\]
(Full details and a visualization are given in the appendix.)
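A NumPy sketch of this initialization (Eq. 4) follows; handling negative weights via \(|\mu_{i}|\) and the treatment of \(\mu_{i}=0\) are our own assumptions:

```python
import numpy as np

def init_sigma(mu, floor=2.0 ** -30):
    """Eq. (4): a parabola in each power-of-two interval, smallest at the
    interval edges and largest at the midpoint."""
    a = np.abs(mu)
    x = np.floor(np.log2(np.maximum(a, floor)))   # lower power-of-two exponent
    lo, hi = 2.0 ** x, 2.0 ** (x + 1)
    sigma = (0.05 ** 2) * (np.abs(lo - a) / lo) * (np.abs(a - hi) / hi)
    return np.maximum(sigma, floor)               # keep the stated 2^-30 floor
```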
**PWFN Clustering.** In Figure 1 we show a schematic of the clustering stage, in which we use the information garnered from the weights' distribution parameters to identify cluster centers and their assignment. PWFN clustering is a two-step method running for \(t=1,\ldots,T\) iterations. At each step we set a fraction \(p_{t}\) of the weights to be fixed, so that \(|W_{\text{fixed}}^{t}|=Np_{t}\). The remaining weights at iteration stage \(t\) are trainable and called \(W_{\text{free}}^{t}\). We follow the scheme first proposed in [22] in setting \(p_{t}\) (Figure 4, left). All of the weights \(w_{i}\) that are assigned to \(W_{\text{fixed}}^{t}\) will have their \(\mu_{i}\) values fixed to one of the set of cluster centers. At the last iteration, \(|W_{\text{free}}^{T}|=0\) and \(p_{T}=1\), as all weights have been fixed to their allocated cluster centroids.
Figure 4: \(p_{t}\) follows the same schedule as [20] (left). In the middle and right plots, we see that PWFN achieves very small entropy values by assigning the majority of weights to only a very small number (4 or 5) of cluster values; the rest are assigned as outliers, most of which are powers-of-two.
We next introduce how a cluster center \(c_{k}\) is defined and how the mapping \(\mu_{i}\mapsto c_{k}\in\mathbf{c}\) is performed. Let \(R=\{0\}\cup\{\pm 2^{-j}\mid 1\leq j\leq b\}\) be the set of all signed powers-of-two up to a precision \(b\). For a weight to be a desired additive power of two, a sum over at most \(\omega\) elements of \(R\) is defined to be a cluster center of order \(\omega\). Formally, for \(\mathcal{P}(R)\) the power set of \(R\),
\[\mathbf{c}^{\omega}=\{\sum_{i\in r}i\mid r\in\mathcal{P}(R)\wedge|r|\leq\omega\}. \tag{5}\]
PWFN begins with order \(\omega=1\), the powers-of-two up to precision \(b\) as the proposal cluster set \(\mathbf{c}^{\omega}\). Next, for each weight \(w_{i}=(\mu_{i},\sigma_{i})\) in the network, the value of \(\sigma_{i}\) is used to determine how far away they are from each of the cluster centers using:
\[D_{\text{prob}}(w_{i},c_{j})=\frac{|\mu_{i}-c_{j}|}{\sigma_{i}}. \tag{6}\]
Interpret this Mahalanobis distance as: "how many sigmas (standard deviations) away is cluster \(c_{j}\in\mathbf{c}^{\omega}\) from weight \(w_{i}\)". At iteration stage \(t\), for each free weight we define \(c_{*}^{\omega}(i)=\text{argmin}_{c\in\mathbf{c}^{\omega}}D_{\text{prob}}(w_{i},c)\) as the cluster center that is the fewest sigmas away from \(w_{i}\in W_{\text{free}}^{t}\). We denote by \(n_{k}^{\omega}\) the number of weights with the smallest \(D_{\text{prob}}\) to cluster \(c_{k}^{\omega}\), _i.e._, \(n_{k}^{\omega}=\sum_{i}\mathbb{I}[c_{k}^{\omega}=c_{*}^{\omega}(i)]\). We then take the index \(k^{*}\) of the cluster with the largest number of weights nearest to it: \(k^{*}=\text{argmax}_{k}\ n_{k}^{\omega}\). Thus, \(c_{k^{*}}^{\omega}\in\mathbf{c}^{\omega}=(c_{1}^{\omega},\ldots,c_{k}^{\omega})\) is the cluster with the largest number of weights nearest to it.
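The cluster pool of Eq. (5) and the distance of Eq. (6) can be sketched as follows; the enumeration over the power set is written naively and is only practical for small \(b\) and \(\omega\):

```python
import itertools
import numpy as np

def cluster_pool(b, omega):
    """Eq. (5): sums of at most `omega` distinct elements of R, where R is
    the signed powers-of-two up to precision b plus zero."""
    R = [0.0] + [s * 2.0 ** -j for j in range(1, b + 1) for s in (1, -1)]
    pool = {sum(combo) for r in range(1, omega + 1)
            for combo in itertools.combinations(R, r)}
    return np.array(sorted(pool))

def d_prob(mu, sigma, centers):
    """Eq. (6): how many sigmas each cluster centre is from each weight."""
    return np.abs(mu[:, None] - centers[None, :]) / sigma[:, None]
```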
We then order the weights in \(W_{\text{free}}^{t}\) by their distance to \(c_{k^{*}}^{\omega}\). In detail, for \(W_{\text{free}}^{t}=[w_{1},\ldots,w_{i},\ldots,w_{n}]\), we reorder the weights by permuting the indices \(w_{i}^{\prime}=w_{\pi(i)}\) where \(\pi:[1,\ldots,n]\rightarrow[1,\ldots,n]\) is a permutation, \(i\mapsto\pi(i)\). The ordered list \([w_{1}^{\prime},\ldots,w_{n}^{\prime}]\) satisfies
\[D_{\text{prob}}(w_{i}^{\prime},c_{k^{*}}^{\omega})\leq D_{\text{prob}}(w_{i+1 }^{\prime},c_{k^{*}}^{\omega}) \tag{7}\]
Next, we need to determine how many of these weights we should assign to cluster \(c_{k^{*}}^{\omega}\). To do so, we define a threshold \(\delta\) and we take the first \(\ell(\delta)\) weights from \([w_{1}^{\prime},...,w_{n}^{\prime}]\) such that:
\[\frac{1}{\ell(\delta)}\sum_{i=1}^{\ell(\delta)}D_{\text{prob}}(w_{i}^{\prime}, c_{k^{*}}^{\omega})\leq\delta. \tag{8}\]
As long as this is possible with \(\ell(\delta)>0\), we have identified both a cluster \(c_{k^{*}}^{\omega}\) and a set of weights \([w_{1}^{\prime},...,w_{\ell(\delta)}^{\prime}]\) which can be moved from \(W_{\text{free}}^{t}\) to \(W_{\text{fixed}}^{t+1}\). We map the weights in \([w_{1}^{\prime},\ldots,w_{\ell(\delta)}^{\prime}]=[(\mu_{1}^{\prime},\sigma_{1}^{\prime}),\ldots,(\mu_{\ell(\delta)}^{\prime},\sigma_{\ell(\delta)}^{\prime})]\) to a single weight \(w_{k^{*}}=(\mu_{k^{*}},\sigma_{k^{*}})\) corresponding to cluster \(c_{k^{*}}^{\omega}\): \(\mu_{k^{*}}=c_{k^{*}}^{\omega}\) and \(\sigma_{k^{*}}=\texttt{std}([\mu_{1}^{\prime},\ldots,\mu_{\ell(\delta)}^{\prime}])\) where std computes the standard deviation of its argument. This process is then repeated, finding the next most popular cluster until \(Np_{t}\) weights are assigned a cluster. If \(\ell(\delta)=0\) before enough weights are assigned in iteration \(t\), then we have not been able to find any cluster centers \(c_{j}\in\mathbf{c}^{\omega}\) which are close enough to any weight, _i.e._, \(D_{\text{prob}}(w_{i},c_{j})>\delta\) for all weights \(w_{i}\in W_{\text{free}}^{t}\) and \(c_{j}=c_{k^{*}}\). In this case, we set \(\omega\leftarrow\omega+1\) and \(\delta\gets 2\delta\), giving us a higher-order additive powers-of-two set and a less restrictive \(\delta\) threshold. Since \(|\mathbf{c}^{\omega+1}|>|\mathbf{c}^{\omega}|\), this increase in \(\omega\) makes more cluster centers available during the next clustering attempt.
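Putting Eqs. (6)-(8) together, one clustering step can be sketched as below; since the cumulative mean of ascending distances is non-decreasing, the longest admissible prefix can be found in a single pass:

```python
import numpy as np

def fix_most_popular(mu, sigma, centers, delta):
    """One PWFN clustering step: find the centre nearest (in D_prob) to the
    most free weights, then fix the longest distance-sorted prefix whose
    mean D_prob stays below delta (Eqs. 7-8)."""
    D = np.abs(mu[:, None] - centers[None, :]) / sigma[:, None]   # Eq. (6)
    nearest = D.argmin(axis=1)
    k_star = np.bincount(nearest, minlength=len(centers)).argmax()
    order = np.argsort(D[:, k_star])             # Eq. (7): sort by D_prob
    running_mean = np.cumsum(D[order, k_star]) / np.arange(1, len(order) + 1)
    n_fix = int((running_mean <= delta).sum())   # Eq. (8): largest ell(delta)
    return k_star, order[:n_fix]                 # centre index, weights to fix
```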
**Putting it All Together.** Putting the training and clustering stages together, we have a process for training a neural network whose weights are from a Gaussian posterior distribution with diagonal covariance matrix by backpropagation (BBP) that favours configurations with long Gaussian tails, which the clustering stage can then use to identify which weights to move and to what extent. This process is repeated for \(T\) iterations, with the fraction \(p_{t}\) of weights increasing with each \(t\) (\(p_{t+1}>p_{t}\)) until all of the weights are moved from \(W_{\text{free}}\) to \(W_{\text{fixed}}\) at iteration \(T\), where \(p_{T}=1\). We give the full algorithm in the Appendix.
## 4 Experiments
We conduct our experimentation on the ImageNet dataset with a wide range of models: ResNets-(18,34,50) [He et al., 2016], DenseNet-161 [Huang et al., 2017] and the challenging DeiT (small and tiny) [Touvron et al., 2021]. For each model, we convert all the parameters in the convolution and linear layers into Gaussian distributions where the mean value is set to be the weight value of the pre-trained model found in the Timm library. Thus, at test time with no further training, we retain the original accuracies. We set the variance parameters according to the setting described in Eq (12). We then apply nine rounds of the described weight fixing with three epochs of re-training each round, totalling 27 epochs of training. We train using SGD with momentum 0.9 and a learning rate of 0.001. For all experiments, we fix \(\delta=1\) and \(\alpha=2^{-11}\), which we found using grid search on the Cifar-10 dataset and which work surprisingly well in all settings. For all our experiments we train using 4x RTX8000 GPUs and a batch-size of 128. For the ensemble results, we sample weights 20 times from the learned weight distributions and report the mean accuracy.
## 5 Results
We compare PWFN against a range of quantization approaches where the model weights have been made available so that we can make accurate measurements of entropy and unique parameter count.
\begin{table}
\begin{tabular}{c c c c c c c c} & & \multicolumn{3}{c}{**Separate Codebook**} & & & \\
**Model** & **Method** & **Layer** & **In-ch** & **Attn** & **Top-1** (Ensemble) & **Entropy** & **Params** \\ \hline ResNet-18 & Baseline & - & - & - & 68.9 & 23.3 & 10756029 \\ & LSQ & ✓ & ✗ & - & 68.2 & - & - \\ & APoT & ✓ & ✗ & - & 69.9 & 5.7 & 1430 \\ & WFN & ✗ & ✗ & - & 69.7 & 3.0 & 164 \\ & PWFN (no prior) & ✗ & ✗ & - & 69.3 (69.6) & **1.7** & **143** \\ & PWFN & ✗ & ✗ & - & **70.0 (70.1)** & 2.5 & 155 \\ \hline ResNet-34 & Baseline & - & - & - & 73.3 & 24.1 & 19014310 \\ & LSQ & ✓ & ✗ & - & 71.9 & - & - \\ & APoT & ✓ & ✗ & - & 73.4 & 6.8 & 16748 \\ & WFN & ✗ & ✗ & - & 73.0 & 3.8 & 233 \\ & PWFN (no prior) & ✗ & ✗ & - & 73.5 (74.4) & **1.2** & **147** \\ & PWFN & ✗ & ✗ & - & **74.3 (74.6)** & 1.8 & 154 \\ \hline ResNet-50 & Baseline & - & - & - & 76.1 & 24.2 & 19915744 \\ & LSQ & ✓ & ✗ & - & 75.8 & - & - \\ & WFN & ✗ & ✗ & - & 76.0 & 4.1 & **261** \\ & PWFN (no prior) & ✗ & ✗ & - & 77.2 (78.1) & 3.5 & 334 \\ & PWFN & ✗ & ✗ & - & **77.5 (78.3)** & **3.4** & 325 \\ \hline DeiT-Small & Baseline & - & - & - & 79.9 & 16.7 & 19174713 \\ & LSQ+ & ✓ & ✓ & ✗ & 77.8 & - & - \\ & Q-ViT & ✓ & ✓ & ✓ & **78.1** & 11.3 & 3066917 \\ & Q-ViT (w/o FL-Bias) & ✓ & ✓ & ✓ & **78.1** & 10.4 & 257149 \\ & PWFN (no prior) & ✗ & ✗ & ✗ & 78.0 (78.3) & **2.7** & **352** \\ & PWFN & ✗ & ✗ & ✗ & **78.1 (78.5)** & **2.7** & 356 \\ \hline DeiT-Tiny & Baseline & - & - & - & 72.9 & 15.5 & 5481081 \\ & LSQ+ & ✓ & ✓ & ✗ & 68.1 & - & - \\ & Q-ViT & ✓ & ✓ & ✓ & 69.6 & 11.5 & 1117630 \\ & Q-ViT (w/o FL-Bias) & ✓ & ✓ & ✓ & 69.6 & 10.5 & 128793 \\ & PWFN (no prior) & ✗ & ✗ & ✗ & **71.4** (71.6) & **2.8** & 300 \\ & PWFN & ✗ & ✗ & ✗ & 71.2 (71.5) & **2.8** & **296** \\ \hline DenseNet161 & Baseline & - & - & - & 77.8 & 17.1 & 26423159 \\ & PWFN & ✗ & ✗ & ✗ & 77.6 (78.0) & 1.1 & 125 \\ \hline \end{tabular}
\end{table}
Table 1: Full comparison results. (w/o FL-Bias) refers to calculating the metrics without the first-last layers and bias terms included. ‘Params’ refers to the unique parameter count in the quantized model, entropy is the full weight-space entropy. In-ch, layer, attn refer to whether the method uses a separate codebook for each layer, in-channel and attention head respectively.
For the ResNet family, we compare against the current state-of-the-art APoT [Li et al., 2019b]1 and WFN [Subia-Waud and Dasmahapatra, 2022]2. For the transformer models, there has only been one work released, Q-Vit [Li et al., 2022]3, which has both the model saves and code released. For both APoT and Q-Vit, we compare the 3-bit models which are the closest in terms of weight-space entropy to that achieved by PWFN.
Footnote 1: [https://github.com/yhhhli/APoT_Quantization](https://github.com/yhhhli/APoT_Quantization)
Footnote 2: [https://github.com/subiawaud/Weight_Fix_Networks](https://github.com/subiawaud/Weight_Fix_Networks)
Footnote 3: [https://github.com/YanjingLi0202/Q-ViT](https://github.com/YanjingLi0202/Q-ViT)
As presented in Table 2, PWFN requires substantially fewer additional training epochs than most methods, save for WFN, highlighting its training efficiency. We use a straightforward regularization term that encourages an increase in \(\sigma\), and its computational cost is comparable to that of \(L_{1}\) regularization. While our approach does lead to greater memory demands due to the additional \(\sigma\) parameters and their associated gradient updates, the overall simplicity of the method is more efficient than previous BNN training procedures, making it feasible to tackle more complex model-dataset pairings. Additionally, we note that when using the quantized version for inference, there are no extra costs, and the BNN functions as a point-estimate network.
In Table 1 we present the full set of results. PWFN demonstrates superior entropy, unique parameter count and top-1 accuracy across the board. In addition to the point-estimate accuracy using the mean of each of the weights' distributions (the cluster centers), we can additionally sample the weights from the learned distributions to give us an ensemble of models, the mean prediction of which gives further accuracy gains, which we show in brackets in the Table. The prior initialization gives a slight but consistent accuracy improvement over using a uniform prior (PWFN (no prior)). We note that for both APoT and Q-Vit, different codebooks are used for different layers, and for Q-Vit, different codebooks were additionally used for every attention head and input channel, and the bias terms were left unquantized, pushing up the parameter count and weight-space entropy substantially. We highlight this as a growing trend in the field, where relaxations such as leaving large parts of the network unquantized, or using different codebooks for ever more granular parts of the network, are often used. Each relaxation comes at a cost in hardware, be that support for unquantized elements (such as the first and last layers) or the use of different codebooks for various parts of the architecture. Figure 2 illustrates the variation in entropy and the count of unique parameters across different layers and attention components. A notable observation from our study is that the weights associated with the 'value' component exhibit higher entropy in the final layer. This observation aligns with the notion that employing a fixed quantization scheme for each layer necessitates a relaxation of the quantization constraints specifically for the last layer, as supported by prior studies [Li et al., 2019a, Jung et al., 2019, Zhou et al., 2016, Yamamoto, 2021, Oh et al., 2021, Li et al., 2022]. Moreover, this highlights an intriguing possibility that in the context of attention networks, such relaxation might be essential only for the 'value' weights, and not for the 'keys' and 'queries'.
In understanding how PWFN is able to compress a network's representation to such a degree compared to WFN, we look at how often the previously proposed relative distance threshold is maintained.
In Figure 5, it's evident that while the relative distance threshold established in WFN is, on average, maintained, there are edge-cases where it isn't. This observation suggests that having a context-specific noise tolerance benefits subsequent compression stages. Furthermore, the data indicates that these values are typically small (as seen in the left column), have a high frequency of occurrence (depicted in the middle), and are predominantly assigned during the middle (0.6, 0.7) and final rounds.
| **Method** | **Num of additional epochs** |
| --- | --- |
| APoT | 120 |
| PWFN | 27 |
| WFN | 27 |
| LSQ | 90 |
| Q-ViT | 300 |

Table 2: Comparison of the number of additional training epochs required by different fine-tuning quantization methods.
## 6 Conclusion
This work proposed PWFN, a training and clustering methodology that can both scale BNNs to complex model-dataset combinations and then use the weight uncertainties to inform quantization decision-making. The result is a set of networks with extremely low weight-space entropies that can be used downstream in accelerator designs to mitigate expensive data movement costs. Additionally, we have seen the potential of the probabilistic aspect of the learned networks with a sampled ensemble giving noticeable accuracy gains. An exciting direction for future work is to explore how the uncertainty estimations and the out-of-distribution performance of neural networks could be enhanced using PWFN to train Bayesian Neural Networks.
## 7 Acknowledgments
This work was supported by the UK Research and Innovation Centre for Doctoral Training in Machine Intelligence for Nano-electronic Devices and Systems [EP/S024298/1].
|
2309.09934 | Hierarchical Attention and Graph Neural Networks: Toward Drift-Free Pose
Estimation | The most commonly used method for addressing 3D geometric registration is the
iterative closest-point algorithm; this approach is incremental and prone to
drift over multiple consecutive frames. The common strategy to address the
drift is the pose graph optimization subsequent to frame-to-frame registration,
incorporating a loop closure process that identifies previously visited places.
In this paper, we explore a framework that replaces traditional geometric
registration and pose graph optimization with a learned model utilizing
hierarchical attention mechanisms and graph neural networks. We propose a
strategy to condense the data flow, preserving essential information required
for the precise estimation of rigid poses. Our results, derived from tests on
the KITTI Odometry dataset, demonstrate a significant improvement in pose
estimation accuracy. This improvement is especially notable in determining
rotational components when compared with results obtained through conventional
multi-way registration via pose graph optimization. The code will be made
available upon completion of the review process. | Kathia Melbouci, Fawzi Nashashibi | 2023-09-18T16:51:56Z | http://arxiv.org/abs/2309.09934v1 | # Hierarchical Attention and Graph Neural Networks: Toward Drift-Free Pose Estimation
###### Abstract
The most commonly used method for addressing 3D geometric registration is the iterative closest-point algorithm; this approach is incremental and prone to drift over multiple consecutive frames. The common strategy to address the drift is pose graph optimization subsequent to frame-to-frame registration, incorporating a loop closure process that identifies previously visited places. In this paper, we explore a framework that replaces traditional geometric registration and pose graph optimization with a learned model utilizing hierarchical attention mechanisms and graph neural networks. We propose a strategy to condense the data flow, preserving essential information required for the precise estimation of rigid poses. Our results, derived from tests on the KITTI Odometry dataset, demonstrate a significant improvement in pose estimation accuracy. This improvement is especially notable in determining rotational components when compared with results obtained through conventional multi-way registration via pose graph optimization. The code will be made available upon completion of the review process.
## I Introduction
Geometric registration is a critical prerequisite for a myriad of subsequent robotics tasks. These encompass surface reconstruction, obstacle avoidance, augmented reality, sensor fusion, and more. The Iterative Closest Point (ICP) algorithm, a key tool for estimating the geometric registration of point clouds, operates by aligning consecutive frames, which presupposes an initial alignment.
When tracking consecutive frames, errors can gradually build up, resulting in pose drift. This drift is corrected using optimization techniques [1]. One such technique includes detecting loop closures, where previously visited places are recognized. The goal is to find the rigid (or similarity) transformation that best re-aligns the frames [2]. Nevertheless, place recognition can lead to false positives, forcing the optimization to be run several times to achieve accurate results. Moreover, processing frames with a large number of points to find the optimal matches can be time-consuming, compromising the ability to process in real time.
In the past few years, several works have investigated the use of new neural-network-based pipelines to process large point clouds, essentially the attention mechanisms popularized in transformer architectures [3, 4, 5, 6]. More specifically, when applied to point clouds, the attention vector encodes the relationships between points, disentangles the most relevant information from the input data [7], and learns to focus on different aspects of temporal patterns [5]. This vector is permutation invariant, which makes it a better candidate for handling unordered point clouds.
In this work, we examine the potential of employing a learned model as an alternative to conventional geometric registration with pose graph optimization. This idea originates from the question: can we replace this complex, computationally intensive process with a single learned model?
Our proposed model operates directly on a bundle of frames instead of processing them sequentially. A key feature of this model is a dual-layer hierarchical attention mechanism applied to the concatenated frames. This mechanism enables the model to selectively concentrate on segments of the sequence that are most pertinent to geometric registration. Within the context of our work, this could mean giving more attention to frames that display substantial motion or notable changes in the environment. To distill and capture only the most essential information, we have utilized the maximum values across various dimensions of the point cloud embeddings.
We have evaluated our approach on the KITTI odometry benchmark [34] to demonstrate that this model can leverage large point clouds generated by recent LiDAR sensors, establishing its potential utility in autonomous vehicle tasks.
The remainder of this paper is organized as follows: Section II discusses the work related to attention mechanisms in point cloud registration. In Section III, we introduce the proposed architecture. Section IV provides details on the experiments we conducted. Finally, Section V concludes the paper and outlines the directions for future work.
## II Related works
For many years, the task of aligning and matching 3D shapes was achieved using classic computer vision techniques. These techniques leverage ICP [8], feature matching using descriptors [9], and RANSAC [10] to remove outliers.
In the past few years, many research works have focused on the development of new neural-network-based pipelines for point cloud registration, following the ground-breaking results obtained with these architectures in other domains [11, 12]. However, they faced several challenges: (1) point sets are irregular and order invariant, (2) scaling up point neural networks to large datasets (N \(>\) 10k points) remains difficult even with the availability of modern hardware, and (3) the impact of the choice of a specific method on the performance of deep learning models remains only partially understood [13].
Solutions based on intermediate representations such as meshes and voxels were dominant until PointNet [14], which proposed leveraging multiple MLPs and symmetric functions to process unordered point clouds directly through 3D coordinates, learning both global and local features to efficiently perform classification and segmentation tasks.
Since then, the attention mechanism popularized in the transformer architecture [3] has proven to be a better candidate for handling unordered point clouds. Indeed, the attention vector is permutation invariant, allows encoding long-range dependencies, and is thus analogous to a weighted adjacency vector.
It has been demonstrated that the predictive performance of attention mechanisms for perception and computer vision tasks can be attributed to their similarities with spatial smoothing: (1) they flatten the cost landscape, and (2) they act as low-pass filters and are thus less vulnerable to high-frequency noise [15]. When applied to point clouds, the attention vectors have the ability to encode similarity scores between data points, providing a robust tool for understanding complex spatial relationships. Furthermore, using multiple attention heads enables the model to focus on different aspects of temporal patterns.
As a result, several works have leveraged attention mechanisms to learn 3D geometric registration. The initial step in this process is embedding the point clouds in an appropriate space before introducing them to attention blocks. Among these, some works adopt a representation where the point cloud is structured as a graph [16, 17, 18, 19, 20]. The graph takes the Cartesian coordinates of the data points and their associated normals and/or intensities, and outputs a down-sampled point cloud with aggregated nearest-neighbor features. The graph representation enables the processing of irregular point clouds. However, as the size of the point cloud grows, the computational complexity might increase substantially.
To mitigate time complexity, a variety of approaches learn hierarchical features from the point cloud. PadLoc [21] utilizes PV-RCNN [22], which discretizes the point clouds into voxel grids that are then processed using sparse 3D convolution layers to create a series of feature maps. These maps are combined to generate a bird's-eye-view (BEV) map from which key points and their associated features are sampled. Lepard [23] uses KpConv [24], which learns deformable kernel points, enhancing its ability to adapt to the local geometry of the point clouds. Super-point-based methods [25] divide features from the KpConv into super points, where lower layers capture fine-grained local features and higher layers capture more global features.
Other works expand the use of the attention mechanism to non-rigid objects [26]. The approach detailed in [27] incorporates raw 3D shapes into the attention block, with the attention vector modified to take into account the point density in a local area. The method cited in [19] uses a dynamic GCNN architecture [28] for feature extraction and computes the relative drift of point pairs to address the registration task. To better constrain the registration process, an optimal transport loss is used in [29].
The works cited above predominantly focus on utilizing attention mechanisms on a single frame of data, from which they derive a rigid transformation by comparing it to a target frame. In this work, we explore the improvements achieved by applying hierarchical attention mechanisms to bundles of frames simultaneously, aligning with multiway registration techniques. We further explore the combination of attention mechanisms and graph neural networks to understand spatial relationships and dynamics, leveraging both local and global contextual information for pose estimation.
## III Algorithm architecture
An overview of our model is shown in Figure 1.
Let \(X\) represent the collection of LiDAR scans, denoted as \(X=\{X_{1},X_{2},\ldots,X_{n}\}\), where each point cloud \(X_{i}=\{p_{1},p_{2},\ldots,p_{M}\}\) is of shape \([M,D]\). Here, \(M\) refers to the number of points \(p_{i}\) in each scan, and \(D\) indicates the dimensionality of the points. For 3D cartesian coordinates, \(D=3\), but \(D\) could be higher if more coordinates are added. We are interested in finding a set of \(n\) rigid transformations, \(\{T_{1},T_{2},\ldots,T_{n}\}\), from the special Euclidean group \(SE(3)\), to align each \(X_{i}\) to a common reference frame \(F_{w}\). Each transformation \(T_{i}\) is expressed as a \((4\times 4)\) matrix including a \((3\times 3)\) rotation matrix \(R_{i}\) from the special orthogonal group \(SO(3)\) and a \((3\times 1)\) translation vector \(t_{i}\).
Without loss of generality, we consider \(F_{w}\) to be identical to \(X_{1}\) for the remainder of the paper.
_Point cloud encoder._ The initial phase of our architectural framework focuses on generating embeddings for each data point in a given point cloud, denoted as \(E\equiv\{e_{1},e_{2},\ldots,e_{M}\}\).
\[e_{i}=\text{EmbeddingFunction}(p_{i}),\quad\forall p_{i}\in X_{i}\]
To achieve this, we employ a variation of the Dynamic Graph CNN (DGCNN) architecture [28]. The first step consists of mapping the point normals into a higher-dimensional feature space through the application of dual convolutional layers. These enriched normal features are combined with the original data points to serve as the input for the DGCNN. The latter enables the generation of node representations that integrate both topological connectivity and feature attributes from localized neighborhoods [30].
_Hierarchical Attention Mechanism._ In the second step, we introduce a hierarchical attention mechanism. This consists of dual-stage attention, where the first stage (multi-head self-attention in Figure 1) focuses on quantifying relationships among points within an individual point cloud. It accomplishes this by calculating attention scores based on the embeddings generated for each data point.
\[A_{ij}^{(1)}=\text{softmax}(f(e_{i},e_{j})),\quad\forall e_{i},e_{j}\in E\]
\[e_{i}^{\prime}=\sum_{j}A_{ij}^{(1)}\cdot e_{j}\]
where \(f(\cdot,\cdot)\) is a function that computes a normalized attention score between the feature embeddings \(e_{i}\) and \(e_{j}\).
To extract the most salient features from a \(D^{\prime}\)-dimensional space, spanning \(M^{\prime}\) points within each point cloud (where \(D^{\prime}\) and \(M^{\prime}\) are derived from the point cloud encoder), we use a maximum operation to obtain the top \(S\) embeddings.
\[e_{i}^{\prime\prime}=\left\{e_{i\sigma_{1}}^{\prime},e_{i\sigma_{2}}^{\prime}, \ldots,e_{i\sigma_{S}}^{\prime}\right\}\]
\[\forall i\in\{1,2,\ldots,M^{\prime}\}\]
where \(S\) is a predefined number such that \(1\leq S\leq D^{\prime}\), and \(\sigma\) is a permutation function that sorts the indices of vector \(e_{i}^{\prime}\) in decreasing order of their corresponding values.
The second stage of attention (multi-head cross-attention in Figure 1) is designed to encode relationships between points across different sets. This enables our model to provide a comprehensive representation that accounts for the global context within the point cloud data.
\[A_{kl}^{(2)}=\text{softmax}\left(g(e_{k}^{\prime\prime},e_{l}^{\prime\prime}) \right)\quad\text{for}\quad k\neq l\]
where \(g(\cdot,\cdot)\) is a function that computes a normalized attention score between the features \(e_{k}^{\prime\prime}\) and \(e_{l}^{\prime\prime}\).
\[e_{k}^{\prime\prime\prime}=\sum_{l\neq k}A_{kl}^{(2)}\cdot e_{l}^{\prime\prime}\]
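A simplified sketch of this dual-stage mechanism using standard PyTorch attention modules is shown below; following the experimental configuration in Section IV, the top-\(S\) selection is collapsed into a channel-wise max over points, and all dimensions are illustrative assumptions rather than the exact layer sizes of our model.

```python
import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    """Simplified dual-stage attention: intra-cloud, then inter-cloud."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.intra = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [N, M', D'] -- N point clouds, M' encoded points, D' channels.
        x, _ = self.intra(x, x, x)       # stage 1: relations within each cloud
        x = x.max(dim=1).values          # keep the most salient features
        x = x.unsqueeze(0)               # [1, N, D'] -- clouds as one sequence
        x, _ = self.inter(x, x, x)       # stage 2: relations across clouds
        return x.squeeze(0)              # [N, D'] one embedding per cloud

# E.g. ten clouds of 1024 encoded points with 256 channels each:
emb = HierarchicalAttention()(torch.randn(10, 1024, 256))
```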
_Rigid Transformation Decoder._ To align each point cloud with the global coordinate system, we utilize a Multi-Layer Perceptron (MLP) on the \(N\times M^{\prime}\) embeddings derived from the cross-attention module. The initial frame is set as a \((4\times 4)\) identity matrix. The MLP is designed to generate unique feature sets representing both translation and rotation. For translation, it outputs three features, corresponding to the x, y, and z coordinates.
The rotation representation, however, depends on the specific parametrization approach selected. In this work, we adopt the Gram-Schmidt orthonormalization method [31]. This method is used to project the rotation features output by the decoder onto the nearest rotation matrix. Using the Gram-Schmidt orthonormalization, the MLP outputs a 6-dimensional rotation representation, reshaped as a \((3\times 2)\) matrix \(W\). The nearest rotation matrix is then retrieved by minimizing the Frobenius norm as illustrated in Eq. 1.
\[\text{argmin}_{R^{*}\in SO(3)}\|R^{*}-W_{3\times 2}\|_{F}^{2}, \tag{1}\]
Nonetheless, it is possible to use the special orthogonal Procrustes parametrization as presented by Bregier et al. [32], which uses a total of 9 features for the rotation. The arbitrary \((3\times 3)\) matrix \(W\) produced by the decoder is then projected onto the nearest rotation matrix by minimizing the Frobenius norm (Eq. 2).
\[\text{argmin}_{R^{*}\in SO(3)}\|R^{*}-W_{3\times 3}\|_{F}^{2}, \tag{2}\]
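For illustration, the Gram-Schmidt mapping from the 6-dimensional decoder output to a rotation matrix can be written as below; the Roma library [32] used in our experiments provides such projections onto \(SO(3)\) directly, so this standalone version is only a sketch.

```python
import torch
import torch.nn.functional as F

def rotation_from_6d(w: torch.Tensor) -> torch.Tensor:
    """Map a (..., 3, 2) decoder output W to a rotation in SO(3) via
    Gram-Schmidt on its two columns."""
    a1, a2 = w[..., 0], w[..., 1]
    b1 = F.normalize(a1, dim=-1)
    # Remove the b1 component from a2, then normalize the remainder.
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)          # completes a right-handed frame
    return torch.stack([b1, b2, b3], dim=-1)  # columns b1, b2, b3
```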
_Loss functions._ We formulate the loss function with individual constraints on the translation and rotation components. Since estimating relative distances tends to be simpler than directly determining absolute poses, we define the loss function as depicted in Eq. (3).
\[\mathcal{L}(\theta)=\alpha f_{t}(\Delta t_{ij}^{*},\Delta t_{ij})+\beta f_{R}(\Delta R_{ij}^{*},\Delta R_{ij}), \tag{3}\]
where \(\alpha\) and \(\beta\) are weighting factors to balance the contributions of the translational and rotational components, respectively. \(f_{t}\) returns the mean squared error (MSE) between the pair \((\Delta t_{ij}^{*},\Delta t_{ij})\), and \(f_{R}\) returns the angular distance \(\delta\) between a pair of rotation matrices, based on the equality \(\|\Delta R_{ij}^{*}-\Delta R_{ij}\|_{F}=2\sqrt{2}\sin(\delta/2)\).
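A minimal sketch of Eq. (3), assuming relative translations and rotation matrices as inputs; the angular distance is recovered by inverting the Frobenius-norm identity above.

```python
import torch

def pose_loss(t_pred, t_gt, R_pred, R_gt, alpha=1.0, beta=1.0):
    """Eq. (3): MSE on relative translations plus angular rotation distance.

    t_*: [B, 3] relative translations; R_*: [B, 3, 3] relative rotations.
    """
    f_t = torch.mean((t_pred - t_gt) ** 2)
    # ||R* - R||_F = 2*sqrt(2)*sin(delta/2)  =>  delta from the Frobenius norm.
    frob = torch.linalg.matrix_norm(R_pred - R_gt)      # Frobenius, per pair
    delta = 2.0 * torch.asin(torch.clamp(frob / (2.0 * 2.0 ** 0.5), max=1.0))
    return alpha * f_t + beta * delta.mean()
```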
## IV Experiments
To evaluate the efficacy of our architecture, we benchmarked it against the multiway registration with pose graph optimization as outlined in [10] and available in the Open3D library [33]. We evaluated our model using the KITTI odometry dataset [34] to demonstrate its applicability in scenarios involving a large number of points.
In multiway registration via pose graph optimization, a pose graph is created where each node represents a piece of geometry associated with a pose matrix transforming it into a global space. These nodes are linked by edges that signify the relationships and transformations needed to align one piece of geometry with another. The alignments are categorized into two classes: odometry edges for neighboring nodes, aligned using local registration techniques like ICP, and loop closure edges for non-neighboring nodes (considered less reliable), aligned through global registration.

Fig. 1: Graphical Overview of the Ego-Pose Estimation Framework: The pose estimation model takes \(N\) point clouds as input, with each point cloud represented through a distinct set of embeddings. These embeddings are learned using a Graph Neural Network (DGCNN), which processes them to obtain an enriched representation for each point cloud. These representations are then passed through a hierarchical attention mechanism that operates in two distinct stages: self-attention and cross-attention. These mechanisms help to capture intra- and inter-set relationships, respectively. Derived self-attention scores guide the selection of salient features, carrying aggregated information to be utilized by the subsequent cross-attention layer, enhancing the comprehension of global context in each point cloud. The 6DoF pose decoder leverages this rich representation to deduce the rigid transformations needed to align the point clouds in a common reference frame, guided by the loss function defined in Equation (3).
To optimize the pose graph, uncertain parameters and information matrices are assigned to the edges, allowing for the identification and pruning of false alignments through a two-pass optimization process. The first pass tries to identify false alignments considering all edges, while the second pass optimizes the alignment without these false alignments.
Different optimization methods and criteria can be selected to suit specific needs and achieve better results. In our experiment, we fixed the voxel size to 0.05 and kept the default values, as specified in Open3D, for the remaining parameters [33].
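For reference, the baseline can be reproduced roughly as follows, condensed from the Open3D multiway-registration tutorial; only odometry edges are shown here, and loop-closure edges would be appended analogously with `uncertain=True`.

```python
import numpy as np
import open3d as o3d

def multiway_registration(pcds, voxel=0.05):
    """Pose-graph optimization over pairwise ICP edges (odometry only)."""
    reg = o3d.pipelines.registration
    pcds = [p.voxel_down_sample(voxel) for p in pcds]
    for p in pcds:
        p.estimate_normals()  # point-to-plane ICP needs normals
    graph, odom = reg.PoseGraph(), np.identity(4)
    graph.nodes.append(reg.PoseGraphNode(odom))
    for i in range(len(pcds) - 1):
        icp = reg.registration_icp(
            pcds[i], pcds[i + 1], voxel * 1.5, np.identity(4),
            reg.TransformationEstimationPointToPlane())
        info = reg.get_information_matrix_from_point_clouds(
            pcds[i], pcds[i + 1], voxel * 1.5, icp.transformation)
        odom = icp.transformation @ odom
        graph.nodes.append(reg.PoseGraphNode(np.linalg.inv(odom)))
        graph.edges.append(reg.PoseGraphEdge(
            i, i + 1, icp.transformation, info, uncertain=False))
    reg.global_optimization(
        graph, reg.GlobalOptimizationLevenbergMarquardt(),
        reg.GlobalOptimizationConvergenceCriteria(),
        reg.GlobalOptimizationOption(max_correspondence_distance=voxel * 1.5,
                                     edge_prune_threshold=0.25,
                                     reference_node=0))
    return [np.asarray(n.pose) for n in graph.nodes]
```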
The training of our model is done on a single NVIDIA GeForce RTX 2080 Ti with \(\sim\!10\)GB of memory. We chose to train our model on a one-second window of a specific trajectory, equivalent to roughly 10 LiDAR scans. These scans are fed sequentially into the model, which outputs a corresponding sequence of rigid poses (Figure 2). It is worth noting that the window parameter, once set during training, can be reduced for inference without requiring retraining. However, increasing the window parameter necessitates specifying the new value and retraining the model from scratch.
_Point cloud preprocessing._ Before feeding the point cloud data into the graph neural network, we subject it to a series of preprocessing steps. First, we employ voxel downsampling to reduce the data's complexity by decreasing the number of points. Next, we employ the RANSAC algorithm to fit a plane that separates ground features, thereby removing undesired rough ground surfaces from the data. These procedures are illustrated in Figure 3.
Upon completion of these preprocessing steps, each processed point cloud contains approximately \(70,000\) points. To standardize the tensor sizes, padding is applied to each point cloud to match the dimensions of the largest point cloud.
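These two steps map directly onto standard Open3D calls; the thresholds below are illustrative assumptions, as the exact values are not reported.

```python
import open3d as o3d

def preprocess_scan(pcd: o3d.geometry.PointCloud, voxel: float = 0.3):
    """Voxel-downsample, then drop the RANSAC-fitted ground plane."""
    pcd = pcd.voxel_down_sample(voxel_size=voxel)
    _, ground = pcd.segment_plane(distance_threshold=0.2,
                                  ransac_n=3, num_iterations=1000)
    return pcd.select_by_index(ground, invert=True)  # keep non-ground points
```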
_Rigid pose estimation._ Each point cloud in the sequence is transformed into vectors with dimensions \([1024,256]\) through the DGCNN network [28]. These vectors serve as inputs for computing attention scores in the first stage of the attention mechanism.
To aggregate this high-dimensional data while retaining the most significant features, we use a max pooling operation. This yields a one-dimensional vector for each point cloud in the batch, capturing the most important attributes from the initial set of 1024 points.
By highlighting the peak value in each feature dimension, we aim to accentuate the most noteworthy characteristics of each point cloud. This leads to a concise yet data-rich 1024-dimension representation for every cloud in the batch, allowing for the preservation of key attributes.
The sequence, configured as [10, 1024], is subsequently processed by an additional attention mechanism to discern the interrelations between distinct point clouds. In our experiments, we've configured both attention mechanisms with 4 heads and 4 layers.
The 6DoF pose decoder, configured as a two-stage Multi-Layer Perceptron (MLP), generates a 9-dimensional feature set to represent both translation and rotation. The translation component is represented through three outputs corresponding to the x, y, and z coordinates. Meanwhile, the rotation is represented using six features, from which the arbitrary \((3\times 2)\) matrix is projected onto a rotation matrix using the special Gram-Schmidt orthonormalization [31], as explained in Section III. For this task, we leveraged the Roma library [32], maintaining a value of 1 for both the \(\alpha\) and \(\beta\) parameters.
The qualitative results are presented in Figure 4. This Figure showcases the alignment of point clouds using three methods: the ground truth, the multiway registration via pose graph optimization, and our proposed approach. These alignments are demonstrated for select examples from the test trajectories from the KITTI Odometry benchmark [34].
The quantitative results are detailed in Table I. This table presents the average Root Mean Square Error (RMSE) values derived from absolute pose errors, showcasing the performance of our model over various sequences. These averages have been computed across 100 sub-trajectories, with each trajectory containing 10 scans. The RMSE values are separated into translational (measured in meters) and rotational (measured in radians) errors to offer a comprehensive view of the trajectory accuracies observed in the different sequences.
In the majority of test sequences examined, both methods yielded a largely comparable translational error. However, our learned model presents significantly less rotational error.
Fig. 3: Visual comparison of point cloud data from KITTI odometry sequence 5. (a) shows the original, unprocessed point cloud, while (b) displays the point cloud after segmentation to remove rough ground features and reduce the number of points.

Fig. 2: Left: Ten non-aligned scans from Sequence 7 of the KITTI odometry benchmark; Right: aligned point clouds using our model-derived poses.

Essentially, initializing the pose to estimate using the previous frame can indeed make the optimization process easier, especially when dealing with sequences of frames where changes between consecutive frames are relatively small. However, this method can encounter problems when there is significant rotational movement, where the assumption of pose similarity between successive frames may not hold, rendering the method less effective.
The trained model takes advantage of temporal dynamics, reducing its dependence on pose information from the previous data frame, a strategy that gives it better flexibility and adaptability in the search for a wide range of potential solutions.
## V Conclusion
In this work, we presented a learned model for processing point cloud data and determining rigid poses through a deep learning architecture that integrates a dynamic graph neural network with a hierarchical attention mechanism. Our method demonstrated promising results in the rigid pose estimation task, effectively compressing high-dimensional data into a low-dimensional vector representation. This was achieved by leveraging a hierarchical attention mechanism and a maximum operation to extract the most crucial features of each point cloud. Moreover, we opted for a Gram-Schmidt orthonormalization parametrization in the pose decoding stage, which yielded more precise results. This framework inherently supports a deep understanding of spatial relationships and dynamics, leveraging both local and global contextual information in estimating ego-poses.

| Sequence | Ours: RMSE Trans. (m) | Ours: RMSE Rot. (rad) | Multiway: RMSE Trans. (m) | Multiway: RMSE Rot. (rad) |
| --- | --- | --- | --- | --- |
| Sequence 0 | **0.064** | **0.614** | 0.385 | 2.842 |
| Sequence 1 | **0.027** | **0.016** | 0.068 | 0.065 |
| Sequence 2 | **0.363** | **0.121** | 0.407 | 1.056 |
| Sequence 3 | **0.027** | **0.016** | 0.068 | 0.065 |
| Sequence 4 | **0.181** | **0.745** | 0.249 | 2.420 |
| Sequence 5 | 0.051 | **0.013** | **0.017** | 0.201 |
| Sequence 6 | **0.387** | **1.851** | 2.158 | 2.473 |
| Sequence 7 | **0.013** | **0.018** | 0.105 | 0.098 |
| Sequence 8 | **0.384** | **0.689** | 3.234 | 0.507 |
| Sequence 9 | **0.192** | **0.369** | 0.443 | 0.521 |

TABLE I: Comparison of RMSE for absolute pose error in translation and rotation; "Ours" denotes our model and "Multiway" denotes multiway registration via pose graph optimization. Best values are in bold.

Fig. 4: From top to bottom, each row represents 10 point clouds derived from different sequences of the KITTI odometry dataset. From left to right: (1) non-aligned scans; (2) point clouds aligned using our estimated poses; (3) point clouds aligned through multiway registration via pose-graph optimization; (4) point clouds aligned according to ground truth poses.
A significant insight drawn from our evaluations, conducted on the KITTI Odometry dataset, reveals that in many instances our method outperforms the established multiway registration via pose graph optimization approach, notably reducing rotational error in several test sequences. This asserts the potential effectiveness and applicability of our proposed model for geometric registration over large point clouds, particularly where rotational accuracy is of crucial importance.
In future work, we plan to further investigate the relevance of our model over larger windows and its direct usability within a Simultaneous Localization and Mapping (SLAM) algorithm. This will involve applying the model to submaps or tiles [1] that compile a larger number of scans, aggregated from odometry poses. Our objective is to replace the existing loop closure optimization scheme in SLAM with our learned model.
|
2307.16416 | MRA-GNN: Minutiae Relation-Aware Model over Graph Neural Network for
Fingerprint Embedding | Deep learning has achieved remarkable results in fingerprint embedding, which
plays a critical role in modern Automated Fingerprint Identification Systems.
However, previous works including CNN-based and Transformer-based approaches
fail to exploit the nonstructural data, such as topology and correlation in
fingerprints, which is essential to facilitate the identifiability and
robustness of embedding. To address this challenge, we propose a novel paradigm
for fingerprint embedding, called Minutiae Relation-Aware model over Graph
Neural Network (MRA-GNN). Our proposed approach incorporates a GNN-based
framework in fingerprint embedding to encode the topology and correlation of
fingerprints into descriptive features, achieving fingerprint representation in
the form of graph embedding. Specifically, we reinterpret fingerprint data and
their relative connections as vertices and edges respectively, and introduce a
minutia graph and fingerprint graph to represent the topological relations and
correlation structures of fingerprints. We equip MRA-GNN with a Topological
relation Reasoning Module (TRM) and Correlation-Aware Module (CAM) to learn the
fingerprint embedding from these graphs successfully. To tackle the
over-smoothing problem in GNN models, we incorporate Feed-Forward Module and
graph residual connections into proposed modules. The experimental results
demonstrate that our proposed approach outperforms state-of-the-art methods on
various fingerprint datasets, indicating the effectiveness of our approach in
exploiting nonstructural information of fingerprints. | Yapeng Su, Tong Zhao, Zicheng Zhang | 2023-07-31T05:54:06Z | http://arxiv.org/abs/2307.16416v1 | # MRA-GNN: Minutiae Relation-Aware Model over Graph Neural Network for Fingerprint Embedding
###### Abstract
Deep learning has achieved remarkable results in fingerprint embedding, which plays a critical role in modern Automated Fingerprint Identification Systems. However, previous works including CNN-based and Transformer-based approaches fail to exploit the nonstructural data, such as topology and correlation in fingerprints, which is essential to facilitate the identifiability and robustness of embedding. To address this challenge, we propose a novel paradigm for fingerprint embedding, called Minutiae Relation-Aware model over Graph Neural Network (MRA-GNN). Our proposed approach incorporates a GNN-based framework in fingerprint embedding to encode the topology and correlation of fingerprints into descriptive features, achieving fingerprint representation in the form of graph embedding. Specifically, we reinterpret fingerprint data and their relative connections as vertices and edges respectively, and introduce a minutia graph and fingerprint graph to represent the topological relations and correlation structures of fingerprints. We equip MRA-GNN with a Topological relation Reasoning Module (TRM) and Correlation-Aware Module (CAM) to learn the fingerprint embedding from these graphs successfully. To tackle the over-smoothing problem in GNN models, we incorporate Feed-Forward Module and graph residual connections into proposed modules. The experimental results demonstrate that our proposed approach outperforms state-of-the-art methods on various fingerprint datasets, indicating the effectiveness of our approach in exploiting nonstructural information of fingerprints.
## 1 Introduction
Thanks to the rapid development of biometrics [1, 4, 29, 28], Automated Fingerprint Identification Systems (AFIS) have been applied in a variety of fields, such as civil identification and criminal identification. Fingerprint embedding, which refers to the encoding of fingerprints as compact and identifiable features, has a significant impact on the efficiency and accuracy of AFIS. Classical embedding algorithms generally focus on the geometric structures of fingerprint minutiae, _e.g._, triplets [1, 18] and cylinders [5]. Nevertheless, these methods heavily rely on handcrafted rules to extract embedded features, thereby suffering from inferior robustness and limited generalization in large-scale automated fingerprint recognition scenarios.
In recent years, deep learning methods have been introduced into AFIS and made significant advances in fixed-length fingerprint embedding beyond classical methods. Such deep learning methods can be primarily classified into two categories according to network structure, namely _CNN-based_ and _Transformer-based_ approaches. CNN-based approaches [27, 3, 21, 6, 4] generally involve multiple stages to extract feature representations from fingerprint images, such as global alignment, minutia detection, and patch extraction. This leads to a tedious training process in which it is difficult to extract global information from the given fingerprint images. Inspired by the success of the Vision Transformer, recent Transformer-based works [8, 30] show that the Transformer, with its powerful feature extraction capability, can learn better fingerprint embedding with global representation in an end-to-end manner.

Figure 1: Illustration of graph construction in the proposed MRA-GNN. (a) Minutia-level graph construction: for each fingerprint, the minutiae and their neighborhood relations are considered to form the vertices and edges of a graph, expressing the topology of a fingerprint. (b) Fingerprint-level graph construction: similarly, a batch of fingerprints is sampled to form a graph, expressing local structure in the fingerprint manifold. We show the central and nearby vertices in red and green, respectively. The fingerprint embedding will be extracted from these graphs via MRA-GNN.
Despite the significant progress achieved by CNN- and Transformer-based approaches in fingerprint embedding, we recognize that _previous works do not exploit nonstructural information, such as topology and correlation in fingerprints_, which is significant to facilitate the identifiability and robustness of AFIS. It is worth noting that fingerprint data possess rich non-grid structures, such as the distances and adjacency relationships between the minutiae scattered on fingerprints. Although CNN and Transformer excel at learning salient features from the grid-like pixel data in images, it is hard for them to exploit the non-grid structure of fingerprints and mine the valid topological information due to their inherent limitations. In this paper, we present a novel paradigm for fingerprint embedding, called _Minutiae Relation-Aware model over Graph Neural Network_ (MRA-GNN), which incorporates a GNN-based framework in fingerprint embedding to encode the topology and correlation of fingerprints into descriptive features.
In terms of technique, MRA-GNN is equipped with _Topological relation Reasoning Module_ (TRM) and _Correlation-Aware Module_ (CAM) to learn the fingerprint embedding from both minutia-level and fingerprint-level graphs rather than image pixels. As shown in Fig. 1(a), _we reinterpret minutiae and their connections as vertices and edges_ respectively, and a graph structure is introduced to represent the topology of each fingerprint. Then the graph is utilized to infer implicit topological relations among minutiae by means of the GNN-based TRM. In Fig. 1(b), _a batch of fingerprints is considered as vertices, and the similarities of their global features are adopted to construct edges_, resulting in the formation of a graph, which is then applied to perceive inherent correlation structures among fingerprints via the GNN-based CAM. Benefiting from the GNN-based framework, our proposed approach can be efficiently and accurately trained in an end-to-end way with the supervision of triplet loss. To alleviate the over-smoothing problem of GNN, we first incorporate graph residual connections into the GNN-based TRM and CAM, then propose to apply a Feed-Forward Module (FFM) after them to maintain the diversity of vertex features, enabling greater performance of graph embedding as layers deepen.
To our best knowledge, this work is the first to successfully apply graph neural network to achieve fingerprint embedding according to topological information and correlation structures. We believe this novel paradigm will inspire the community to further explore GNN-based models in fingerprint embedding. In a nutshell, the main contributions of this paper are:
* We propose a novel graph-based paradigm for fingerprint embedding. The final representation is extracted from the topological graphs of minutiae and fingerprints rather than image pixels employed in previous deep learning works.
* We devise an MRA-GNN model under the paradigm, which is the first to study compact and identifiable fingerprint embedding via a deep GNN for large-scale AFIS. Since topological relations among minutiae and correlation structures among fingerprints are considered, it is intuitive that the embedding of MRA-GNN has stronger robustness and generalization.
* Extensive experiments demonstrate that the proposed MRA-GNN is eminently competitive with prior state-of-the-art methods [21, 6, 4, 30]. For instance, MRA-GNN achieves the best TAR@FAR=0.1% on NIST SD4 (99.15%) and EER on FVC2004 DB1 (0.53%) in recognition task, and performs best on NIST SD4 dataset in indexing task.
## 2 Related work
**Fingerprint embedding approach.** Fingerprint feature embedding plays a critical role in modern AFIS. Classical embedding methods are generally based on minutia triplets [1, 18] or cylinder-code [5]. However, algorithms with these geometric structures heavily rely on handcrafted rules and complex design procedures that depress the performance of the representation. With the emergence of deep learning, research on deep feature embedding is increasing [21, 4, 6, 35]. These methods do not need to undergo the cumbersome manual design of classical ones. More precisely, Li _et al._ [21] adopt paired fingerprint patches centered on minutiae to extract local descriptors, which are aggregated into a fixed-length representation through global average pooling. Nevertheless, the method suffers from a lack of generalization and interpretability, and minutia information is not well learned by simple global pooling. A follow-up method [4] presents an end-to-end latent fingerprint search system with two separate minutia extraction models to provide complementary minutia templates and texture templates. Besides, Engelsma _et al._ [6] subsequently learn to extract fixed-length fingerprint embedding based on global texture information with a multi-task CNN. These representations mainly focus on global or local patterns of fingerprints but do not pay attention to the topology and correlation in them. Thus, the above algorithms lack compact and discriminative features, and a new approach to fingerprint embedding is urgently needed.
**Graph neural network.** Our work is also related to graph neural networks (GNNs), which have been successfully applied to various scenarios in deep learning [16]. The earliest GNN was initially outlined in [7]. Micheli [23] proposes the form of spatial-based GNN by architecturally compositing non-recursive layers. The spectral-based graph convolutional network was first presented by Bruna _et al._ [2], who introduce graph convolution based on spectral graph theory. GCN is usually employed on graph data, such as social networks and biochemical graphs [11]. In recent years, it has gradually been applied to computer vision, natural language processing, and other fields [12, 15]. Nevertheless, these methods are at a primary stage: they are prone to over-smoothing, have inferior accuracy and robustness, and their integration with downstream tasks is immature. Different from existing works on GNN, which mainly focus on graph-structured data or scene objects, we introduce it to characterize topological relations and correlation structures in fingerprints. As far as we know, our work is the first to study compact and discriminative fingerprint embedding on GNN.
## 3 Methodology
In this section, we introduce the graph-based paradigm that incorporates a GNN-based framework within fingerprint embedding to encode the topology and correlation of fingerprints into descriptive features. In the following subsections, we first illustrate the basic knowledge of GNN (Sec. 3.1). Then, we introduce the design philosophy of MRA-GNN (Sec. 3.2), in which Topological relation Reasoning Module (Sec. 3.3) and Correlation-Aware Module (Sec. 3.4) learn the fingerprint embedding from both minutia-level and fingerprint-level graphs rather than texture patterns or image pixels. Lastly, the Feed-Forward Module and graph residual connections (Sec. 3.5) for tackling the over-smoothing problem of GNN are presented.
### Preliminaries
**Graph Convolutional Network.** Given a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with \(N\) nodes \(v_{i}\in\mathcal{V}\) and edges \((v_{i},v_{j})\in\mathcal{E}\), the Graph Convolutional Network (GCN), which is composed of graph convolutional layers, is designed to obtain an embedding of each vertex by fusing information from the graph. Specifically, let \(x_{i}\) be the feature vector of \(v_{i}\), and \(X=[x_{1},\dots,x_{N}]\); then a graph convolutional layer can be defined as
\[\mathrm{GraphConv}(X;\mathcal{G})=\sigma(h\circ g(X)), \tag{1}\]
where \(g\) is an _aggregation_ operator to aggregate features of neighbors within \(\mathcal{G}\), \(h\) is an _update_ function with learnable parameters to map aggregated features into another space, and \(\sigma\) is an activation to increase the non-linearity of features. For example, Kipf _et al._ [17] propose a simplified and efficient GCN layer using \(g:X\to\hat{A}X,h:X\to XW,\sigma:X\to ReLU(X)\), where \(\hat{A}\) is a normalized adjacency matrix derived from \(\mathcal{E}\). With these definitions, the GCN layer can be written as
\[\mathrm{GraphConv}(X;\mathcal{G})=\mathrm{ReLU}(\hat{A}XW). \tag{2}\]
For more details, we refer readers to the seminal work [17].
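For concreteness, Eq. (2) takes only a few lines to implement; the following is a generic sketch of this layer on a dense adjacency matrix, not code from the MRA-GNN implementation.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """Eq. (2): ReLU(A_hat X W) with a symmetrically normalized adjacency."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, X: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        A_hat = A + torch.eye(A.shape[0], device=A.device)  # add self-loops
        d_inv_sqrt = torch.diag(A_hat.sum(dim=1).pow(-0.5))
        A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt            # D^-1/2 A_hat D^-1/2
        return torch.relu(A_norm @ self.W(X))               # aggregate + update
```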
**Over-smoothing phenomenon.** Over-smoothing is a prevalent issue in various GCNs [20]. As graph convolutional layers deepen, the feature vectors of different nodes tend to converge towards similar representations, resulting in performance degradation of the GCN model. This issue restricts the capacity of GCN for modeling complex graph structures and its application to large-scale datasets.
In our work, we propose to use graph to express the topology of fingerprints and introduce a GCN-based model to extract informative embedding from fingerprints.
Figure 2: Illustration of the proposed MRA-GNN for fingerprint embedding. The framework includes: (1) Minutia level performs the minutia-based graph construction and the reasoning of implicit topological relations. (2) Fingerprint level achieves the fingerprint-based graph construction and the awareness of inherent correlation structures. The embedding can be applied to various fingerprint tasks.
### Overview of MRA-GNN
Fig. 2 provides an overview of the proposed MRA-GNN for graph-based fingerprint embedding, which contains minutia level and fingerprint level. Each level possesses two steps: graph construction and graph embedding. Specifically, **(I)** at the minutia level, we first adopt an off-the-shelf backbone \(\mathbf{E}\) to extract minutiae from fingerprint \(F\). Subsequently, we regard each minutia \(\mathbf{E}(F)\) as a central vertex and find its \(\mathcal{K}_{m}\) nearest neighbors to construct a minutia graph \(\mathcal{G}_{m}(F)\). Then we utilize TRM to infer minutia graph embedding \(m_{F}\) with topological relations implied in the graph. We abbreviate this process as
\[m_{F}=TRM(\mathcal{G}_{m}(F)). \tag{3}\]
**(II)** At the fingerprint level, we sample a batch of fingerprints and find \(\mathcal{K}_{f}\) nearest neighbors of fingerprint \(F\) based on minutia graph embedding \(m_{F}\) to construct graph \(\mathcal{G}_{f}(F)\). Then we adopt CAM to compute fingerprint graph embedding \(M_{F}\) with correlation structures as fingerprint representation. We abbreviate this process as
\[M_{F}=CAM(\mathcal{G}_{f}(F)). \tag{4}\]
**(III)** Since the increase of GCN layers in TRM and CAM will inevitably cause the over-smoothing problem, we introduce FFM and graph residual connections to the design of MRA-GNN, in order to enhance the model capacity while preventing features from collapsing.
In summary, for a given fingerprint, MRA-GNN extracts its embedding from the corresponding minutia and fingerprint graphs by means of the anti-smoothing TRM and CAM.
### Topology reasoning on minutia graph
In this section, we detail the minutia level based on the Topological relation Reasoning Module (TRM), which handles minutia graph construction and implicit topological relation inference among minutiae. Thanks to their efficiency and robustness, minutia-based features, which faithfully represent fingerprints, are extensively applied in fingerprint embedding rather than texture features or image pixels. Previous works are generally based on minutia geometries and image patches and do not exploit topological relations among minutiae. The advantages of minutia topology based on the graph paradigm include: 1) the graph is a generalized data structure of which fixed geometries and grids can be regarded as special cases; 2) a graph is more flexible and precise than a fixed geometry or grid for modeling irregular structures among minutiae in a fingerprint; 3) similar information can be aggregated and updated through the constructed graph connections among minutiae, thus enhancing the performance of minutia embedding; 4) advanced research on GNNs can be transferred to address fingerprint tasks.
**Minutia graph construction.** For each fingerprint \(F\), we first employ an off-the-shelf minutia extractor \(\mathbf{E}\) [24] to extract minutiae in \(F\) and transform each of them into a 3-dimensional feature vector \(x_{i}\in\mathbb{R}^{3}\) that consists of location and orientation information \((x,y,d)\), _i.e._, \(\mathbf{E}(F)=[x_{1},x_{2},\cdots,x_{N}]\). These minutiae can be viewed as a set of unordered vertices, denoted as \(\mathcal{V}_{m}=\{v_{1},v_{2},\cdots,v_{N}\}\). For each vertex \(v_{i}\), we find its \(\mathcal{K}_{m}\) nearest neighbors \(\mathcal{N}(v_{i})\) and add an undirected edge \((v_{i},v_{j})\) to \(\mathcal{E}_{m}\) for every \(v_{j}\in\mathcal{N}(v_{i})\). The visualized construction process of the minutia graph is illustrated in Fig. 1(a). After constructing the minutia graph structure \(\mathcal{G}_{m}(F)=(\mathcal{V}_{m},\mathcal{E}_{m})\) of \(F\), we explore how to adopt GNN to deduce implicit topological relations among minutiae.
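A possible construction of \(\mathcal{G}_{m}(F)\) is sketched below; whether neighbors are searched over the \((x,y)\) locations only or over the full \((x,y,d)\) vectors is our assumption here, as the distance metric is not specified.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_minutia_graph(minutiae: np.ndarray, k: int = 8) -> np.ndarray:
    """minutiae: [N, 3] array of (x, y, d) from the extractor E, with N > k.
    Returns a deduplicated, undirected k-NN edge list."""
    tree = cKDTree(minutiae[:, :2])                # neighbours by location only
    _, idx = tree.query(minutiae[:, :2], k=k + 1)  # first hit is the point itself
    edges = {(min(i, j), max(i, j))                # merge (i,j) and (j,i)
             for i, row in enumerate(idx) for j in row[1:]}
    return np.array(sorted(edges))
```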
**TRM-based minutia graph embedding.** As shown in Fig. 3(a), the input minutia graph \(\mathcal{G}_{m}(F)\) of TRM is composed of minutia relations and their features. TRM consists of one GCN block, one linear layer with GeLU activation [14], and one normalization layer. To fully reason about the topological relations among vertices, the GCN block is built around an EdgeConv graph convolutional layer [34], followed by a batch normalization layer and GeLU activation. Here, EdgeConv performs a channel-wise symmetric aggregation operation on both vertex features and the edge features associated with all edges originating from each vertex. As introduced in Eq. (1), the aggregation function of EdgeConv is defined as
\[g(x_{i})=\mathrm{concat}(x_{i},\max(\{e_{ij}|(v_{i},v_{j})\in\mathcal{E}\})). \tag{5}\]
Figure 3: Illustration of various modules in MRA-GNN.
Here, \(\mathrm{concat}\) denotes the concatenation of input vectors, and the edge feature \(e_{ij}\) is given by the equation:
\[e_{ij}=\mathrm{GeLU}(\theta\cdot(x_{j}-x_{i})+\phi\cdot x_{i}), \tag{6}\]
where \(\theta\) and \(\phi\) are learnable parameters. The update function is a linear transformation \(h(x)=xW\), as used in previous works. After the GCN block, we employ a linear layer to unify the dimensions of graph embedding, so that each fingerprint has a fixed-length feature. We integrate all vertex features from TRM to be the embedding \(m_{F}\) in Eq. (3).
### Correlation learning on fingerprint graph
In this section, we detail the Correlation-Aware Module (CAM), which works on fingerprint graph construction and inherent correlation structure awareness among fingerprints. Based on the graph paradigm, similar fingerprints can be aggregated and updated by correlation information, while different fingerprints are structurally distanced, thus improving the performance of feature embedding.
**Fingerprint graph construction.** Considering a batch of fingerprints \(\{F_{i}\}_{i=1}^{V}\) and their features \(\{m_{F_{i}}\}_{i=1}^{V}\), we view them as vertices to express the local structure in the fingerprint manifold and adopt the minutia graph embeddings as the initial vertex features. To obtain correlation information in the structure, for each fingerprint \(F_{i}\) we utilize the K-NN algorithm to add undirected edges \(e_{ij}\) for all \(F_{j}\in\mathcal{N}(F_{i})\) by Euclidean distance. After that, we obtain a fingerprint graph \(\mathcal{G}_{f}\). The visualized construction process of the fingerprint graph is shown in Fig. 1(b). In the following, we explore how GNN can be applied to perceive correlation structures among fingerprints.
**CAM-based fingerprint graph embedding.** Fig. 3(b) displays the details of the Correlation-Aware Module (CAM). The input of CAM is the graph \(\mathcal{G}_{f}\), composed of a batch of fingerprints and their initial features. CAM consists of one GCN block, two linear layers, and one GeLU activation layer with normalization. Here we employ the same EdgeConv as in Eq. (5) within the GCN block, followed by a batch normalization layer. The output of the entire fingerprint level is the set of graph embedding features \(\{M_{F_{i}}\}_{i=1}^{V}\), in which \(M_{F_{i}}\) is the corresponding feature embedding of fingerprint \(F_{i}\) as denoted in Eq. (4).
### Over-smoothing alleviation
It is difficult to scale up GCNs due to the over-smoothing problem, thus we introduce two additional ways to mitigate this problem in the MRA-GNN.
**Feed-Forward Module.** First, we insert the Feed-Forward Module (FFM) into MRA-GNN to alleviate the over-smoothing problem. FFM is a multi-layer perceptron with two Fully-Connected (FC) layers and one GeLU activation layer, as illustrated in Fig. 3(c). Concretely, FFM is given by
\[Y=\mathrm{GeLU}(XW_{1})W_{2}, \tag{7}\]
where the input \(X\) can be \(m_{F}\) or \(\{M_{F_{i}}\}_{i=1}^{V}\) at the minutia and fingerprint levels, respectively, and \(W_{1}\) and \(W_{2}\) are the weights of the FC layers. Through the FC layers, we project node features into a common domain and increase feature diversity, so that the embedded vectors of different nodes converge to distinct representations. Moreover, layer collapse is avoided by the nonlinear activation.
**Graph residual connection.** Inspired by the huge success of ResNet [13], we transfer residual connections to GCN, thus unleashing its potential. In the original GCN framework of Eq. (1), the underlying mapping \(\sigma(h\circ g)\), which takes a graph as input and outputs a new graph representation, is learned. Here we introduce a graph residual connection into this framework. Specifically, after \(X\) is transformed by \(\sigma(h\circ g)\), a vertex-wise addition is performed to obtain the graph residual connection, defined as
\[\mathrm{GraphConv}_{res}(X;\mathcal{G})=\sigma(h\circ g(X))+X. \tag{8}\]
The graph residual connection can reduce the back-propagation complexity of GCN and prevent vanishing gradients, which enables a deeper GCN to converge reliably in training and achieve superior performance in inference.
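Both anti-smoothing devices are small; below is a minimal sketch, assuming the convolution preserves the feature dimension so that the vertex-wise addition is well defined.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFM(nn.Module):
    """Eq. (7): two FC layers with GeLU, restoring vertex feature diversity."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(F.gelu(self.fc1(x)))

def graph_conv_res(conv, x, nbr_idx):
    """Eq. (8): vertex-wise residual around any graph convolution."""
    return conv(x, nbr_idx) + x
```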
### Training
Figure 4: The detailed training process of our framework. The backbone encoder is first pre-trained, then we train the whole network end-to-end with the supervision of triplet loss.

The detailed training scheme is shown in Fig. 4, in which we optimize the parameters of MRA-GNN under the supervision of the triplet loss [26] in an end-to-end manner. Considering a batch of fingerprints, we first select the most similar fingerprint pairs as the anchor-positive pairs \((a,p)\). Next, for each pair \((a,p)\) we choose the hardest sample \(n\), which is most similar to the anchor but has a different label, to generate a semi-hard triplet \((a,p,n)\). The target of the triplet loss is minimizing the distance of anchor-positive pairs \(d(a,p)\) while maximizing the distance of anchor-negative pairs \(d(a,n)\). Here we adopt the \(l_{2}\)-norm of the feature embeddings to measure the distance between a fingerprint pair \((f_{1},f_{2})\), which is known as \(d(f_{1},f_{2})=\|f_{1}-f_{2}\|_{2}\). In summary, the objective function of MRA-GNN training is given by
\[Loss(a,p,n)=\max(d(a,p)-d(a,n)+\gamma,0), \tag{9}\]
where the constant \(\gamma\) is a margin that forces the model to learn smaller \(d(a,p)\) and larger \(d(a,n)\). The loss ensures that the model can learn diverse features from samples.
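A compact in-batch version of this mining-plus-hinge objective might read as follows; it assumes every identity appears at least twice in the batch and approximates the described pair selection with per-anchor hardest mining.

```python
import torch

def semi_hard_triplet_loss(emb: torch.Tensor, labels: torch.Tensor,
                           margin: float = 0.5) -> torch.Tensor:
    """Eq. (9): for each anchor take its closest positive and its closest
    (hardest) negative, then apply the hinge with margin gamma."""
    dist = torch.cdist(emb, emb)                    # pairwise l2 distances
    same = labels[:, None] == labels[None, :]
    self_mask = torch.eye(len(emb), dtype=torch.bool, device=emb.device)
    d_ap = dist.masked_fill(~same | self_mask, float('inf')).min(dim=1).values
    d_an = dist.masked_fill(same, float('inf')).min(dim=1).values
    return torch.clamp(d_ap - d_an + margin, min=0).mean()
```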
## 4 Experiments
In this section, we report the datasets and the different experiments performed to test the MRA-GNN model, together with the results. Since our proposed method realizes a novel graph paradigm for fingerprint embedding, a framework capable of being generalized to any end-to-end fingerprint task, we conduct experiments on fingerprint recognition and indexing tasks, respectively.
### Datasets
For training and validation, we adopt an in-house dataset comprised of 109K rolled fingerprint images captured by optical sensors from 11,265 unique fingers with the spatial size \(256\times 360\), in which 90K and 19K are utilized for each purpose, respectively. For testing, the widely used benchmarks NIST SD4 [33] and NIST SD14 [31], which are made up of rolled fingerprint images similar to the training dataset, as well as the slap fingerprint dataset FVC2004 DB1 [22], are applied in the recognition tasks. In terms of indexing tasks, since the top-5 accuracy on NIST SD14 has been elevated to 100% by MaRs [35], leaving nothing to optimize, we adopt NIST SD27 [32], which contains 258 pairs of latent and rolled fingerprints, instead. There are 2K rolled fingerprint pairs in NIST SD4 with the spatial size \(512\times 512\), and 27K rolled fingerprint pairs in NIST SD14 with the spatial size \(832\times 768\). However, only the last 2,700 pairs from NIST SD14 are used for evaluation in previous approaches, and we employ them for a fair comparison. The major motivation for selecting FVC2004 DB1 is that it is comprised of slap fingerprint images and is extremely challenging. Hence, we are able to demonstrate that even though MRA-GNN is trained on rolled fingerprint images, our incorporation of domain knowledge into the framework enables it to be robust and generalize well to slap fingerprint datasets.
### Implementation details
We adopt dilated aggregation [19] in both the TRM and CAM modules, and set the dilation rate to \(\lceil l/4\rceil\) for the \(l\)-th layer. For a fair comparison, we apply a batch size of 256 to train our model on a GeForce RTX 3090 for 200 epochs with the AdamW optimizer. The learning rate is 1e-4 in the initial stage and varies according to a cosine schedule. The default momentum and weight decay factors of AdamW are employed. The margin \(\gamma\) is set to \(0.5\). During training, we augment the dataset with random rotations (from \(-15^{\circ}\) to \(15^{\circ}\)) and translations. As for fingerprint alignment, the topological invariance of the graph allows our method to mitigate the effects of geometric transformations.
### Fingerprint recognition
In the fingerprint recognition stage, each fingerprint is first represented as an embedded vector by MRA-GNN. Subsequently, the similarity of two fingerprints can be measured through the product of the corresponding embeddings, which is defined as
\[Sim(F_{i},F_{j})=<M_{F_{i}},M_{F_{j}}>, \tag{10}\]
where \(F_{i}\) and \(F_{j}\) denote two fingerprints, and \(M_{F_{i}}\) and \(M_{F_{j}}\) indicate the fingerprint feature embeddings extracted by MRA-GNN. Thanks to the normalization of representations in CAM, the product of the two vectors behaves like cosine similarity. For a given fingerprint, we determine a match by setting a fixed similarity threshold, thereby identifying whether a fingerprint pair comes from the same finger or not.
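A minimal sketch of this verification rule, assuming \(l_{2}\)-normalized embeddings as produced after CAM; the threshold value and embedding dimension are illustrative, not taken from the paper:

```python
import torch

def match(emb_i, emb_j, threshold=0.6):
    """Decide whether two l2-normalized fingerprint embeddings come
    from the same finger, using the inner product of Eq. (10)."""
    sim = torch.dot(emb_i, emb_j)  # ~ cosine similarity after normalization
    return sim.item() >= threshold

a = torch.nn.functional.normalize(torch.randn(256), dim=0)
b = torch.nn.functional.normalize(torch.randn(256), dim=0)
print(match(a, b))
```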
For evaluating the recognition performance of our MRA-GNN fingerprint embedding model, we make a fair comparison with previous approaches on FVC2004 DB1, NIST SD4, and NIST SD14. In terms of evaluation metrics, we adopt TAR@FAR=0.1% for each benchmark, which indicates the True Accept Rate (TAR) when False Accept Rate
| Methods | TAR@FAR=0.1% FVC2004 DB1 | TAR@FAR=0.1% NIST SD4 | TAR@FAR=0.1% NIST SD14 | EER FVC2004 DB1 |
| --- | --- | --- | --- | --- |
| FingerPatches [21] | - | 96.57% | 97.30% | - |
| DeepPrint [6] | 97.50% | 97.90% | 98.55% | 1.75% |
| LatentAFIS [4] | 97.00% | 98.70% | 99.00% | 1.31% |
| TMNet [25] | 98.04% | 97.46% | 98.24% | 1.02% |
| Convolution-Transformer [30] | **98.55%** | 98.35% | 99.13% | 0.84% |
| MRA-GNN (ours) | 98.35% | **99.15%** | **99.60%** | **0.53%** |
Table 1: Comparison of TAR@FAR=0.1% and EER between existing methods and the proposed model for recognition tasks on three benchmarks: NIST SD4, NIST SD14, and FVC2004 DB1. '-' denotes that previous methods do not report this indicator and their code is not released, making reproduction from the paper description difficult. The best results are highlighted in bold.
(FAR) is 0.1%. Apart from that, Equal Error Rate (EER) which is the error rate when FAR and False Reject Rate (FRR) are equal, is utilized to evaluate the performance of FVC2004 DB1 given its specific properties.
The experimental results for fingerprint recognition are compared in Table 1. Compared with five other algorithms, such as DeepPrint[6] and LatentAFIS[4], our proposed MRA-GNN outperforms most of them. In particular, on the TAR@FAR=0.1% indicator, our method achieves 99.15% and 99.60% on NIST SD4 and NIST SD14, respectively, which is significantly better than all other methods. On FVC2004 DB1, the TAR@FAR=0.1% of our method shows a small drop of 0.2%; however, the EER metric outperforms the state of the art by 0.31%, demonstrating that MRA-GNN is robust and generalizes well across fingerprint datasets.
As described above, the performance improvement of MRA-GNN on fingerprint recognition is prominent, especially on NIST datasets. In the following section, we perform a series of ablation studies to analyze the influence of graph convolutional types, components, and different configurations of our proposed MRA-GNN framework.
### Ablation studies
We conduct ablation studies of the proposed model on fingerprint recognition tasks of NIST SD4 and NIST SD14.
Impact of the number of GNN layers.Stacking too many GNN layers leads to over-smoothing that FFM alone cannot avoid. We tune \(L\) from 1 to 18 and conduct experiments on NIST SD4 and NIST SD14, respectively. The results are presented in Fig. 5. We observe that the indicator first increases and then decreases as \(L\) rises, performing best when \(L\) is in the range of 9 to 15.
Impact of the number of GNN neighbors.The performance of our model is also related to the number of neighbor nodes \(K\) in the graph, which controls the aggregation range. Too few neighbors restrain information exchange, while too many lead to inevitable over-smoothing and include more irrelevant vertices. Fig. 6 displays the TAR@FAR=0.1% obtained by MRA-GNN for different values of \(K\) on NIST SD4 and NIST SD14. The performance of our proposed method increases with \(K\) in the initial stage and reaches its maximum at around \(K=10\).
### Fingerprint indexing
We employ MRA-GNN for fingerprint indexing, which selects candidate sets for a given fingerprint based on the similarity of feature embeddings. Specifically, for a query fingerprint, we filter a candidate set of a certain length from the gallery according to the embedded vectors, rapidly identifying whether fingerprint pairs come from the same finger and reducing the search space for indexing. The gallery is a closed set (each probe has a mate in the gallery) containing 10K rolled fingerprints. For a fair comparison, we report the top-k indexing accuracy on NIST SD4 and NIST SD27, which is more precise and stricter than the traditional error rate at a given penetration rate. From the results in Table 5, we conclude that our method performs best on NIST SD27, since MRA-GNN only requires identifiable sub-graph structures of the fingerprints. Moreover, the Top-1, Top-5, and Top-10 accuracies on NIST SD4 reach 99.46%, 99.65%, and 99.75%, remaining at or near the best reported results.
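A sketch of this candidate-set retrieval, assuming a matrix of gallery embeddings; the value of k and the dimensions are illustrative (not from the paper):

```python
import torch

def candidate_set(query, gallery, k=10):
    """Return the indices of the top-k gallery fingerprints ranked by
    embedding inner product; gallery rows are embedding vectors."""
    scores = gallery @ query            # similarity to every gallery entry
    return torch.topk(scores, k).indices

gallery = torch.nn.functional.normalize(torch.randn(10000, 256), dim=1)
query = torch.nn.functional.normalize(torch.randn(256), dim=0)
print(candidate_set(query, gallery, k=5))
```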
## 5 Conclusion
In this paper, we pioneer the study of fingerprint topology via deep graph-paradigm embedding. We develop an MRA-GNN model that extracts fingerprint embeddings with superior performance. At the minutia level, we reinterpret minutiae and their connections as the vertices and edges of a minutia graph, and introduce TRM to reason about implicit topological relations among minutiae. At the fingerprint level, a batch of fingerprints is treated as vertices, and we design CAM to perceive inherent correlation structures among fingerprints. To alleviate the over-smoothing problem, we propose FFM and graph residual connections to maintain feature diversity as layers deepen. Extensive experiments demonstrate the superiority, robustness, and generalization of our framework.
Admittedly, the main limitation of our study is the demand for high-quality initial features; moreover, our model currently operates on fingerprint datasets and cannot perform online matching for a single fingerprint. For future work, exploring more effective graph models and applying them to broader biometrics will be meaningful.
AcknowledgementsThis work is supported by the National Natural Science Foundation of China (12271504).
Figure 5: The effect of different numbers of GNN layers.
Figure 6: The effect of different numbers of GNN neighbors.
| Methods | NIST SD4 Top-1 | NIST SD4 Top-5 | NIST SD4 Top-10 | NIST SD27 Top-1 | NIST SD27 Top-5 | NIST SD27 Top-10 |
| --- | --- | --- | --- | --- | --- | --- |
| FingerPatches [21] | 99.27% | **99.65%** | **99.77%** | - | - | - |
| DeepPrint [6] | 98.70% | 99.22% | 99.75% | 65.50% | 69.84% | 72.15% |
| LatentAFIS [4] | 98.55% | 98.94% | 99.15% | 68.60% | 73.30% | 75.10% |
| Multi-scale representation [9] | 98.80% | - | - | 69.38% | - | - |
| MaRS [35] | 99.35% | **99.65%** | 99.70% | - | - | - |
| MRA-GNN (ours) | **99.46%** | **99.65%** | 99.75% | **70.50%** | **74.45%** | **76.84%** |
Table 5: Top-k accuracy for fingerprint indexing on two benchmarks: NIST SD4 and NIST SD27. '-' denotes that previous methods do not report this indicator and their official code is not released. The best results are highlighted in bold. |
2309.12445 | Ensemble Neural Networks for Remaining Useful Life (RUL) Prediction | A core part of maintenance planning is a monitoring system that provides a
good prognosis on health and degradation, often expressed as remaining useful
life (RUL). Most of the current data-driven approaches for RUL prediction focus
on single-point prediction. These point prediction approaches do not include
the probabilistic nature of the failure. The few probabilistic approaches to
date either include the aleatoric uncertainty (which originates from the
system), or the epistemic uncertainty (which originates from the model
parameters), or both simultaneously as a total uncertainty. Here, we propose
ensemble neural networks for probabilistic RUL predictions, which consider both
uncertainties and decouple them. These decoupled
uncertainties are vital in knowing and interpreting the confidence of the
predictions. This method is tested on NASA's turbofan jet engine CMAPSS
data-set. Our results show how these uncertainties can be modeled and how to
disentangle the contribution of aleatoric and epistemic uncertainty.
Additionally, our approach is evaluated on different metrics and compared
against the current state-of-the-art methods. | Ahbishek Srinivasan, Juan Carlos Andresen, Anders Holst | 2023-09-21T19:38:44Z | http://arxiv.org/abs/2309.12445v1 | # Ensemble Neural Networks for Remaining Useful Life (RUL) Prediction
###### Abstract
A core part of maintenance planning is a monitoring system that provides a good prognosis on health and degradation, often expressed as remaining useful life (RUL). Most of the current data-driven approaches for RUL prediction focus on single-point prediction. These point prediction approaches do not include the probabilistic nature of the failure. The few probabilistic approaches to date either include the aleatoric uncertainty (which originates from the system), or the epistemic uncertainty (which originates from the model parameters), or both simultaneously as a total uncertainty. Here, we propose ensemble neural networks for probabilistic RUL predictions which consider both uncertainties and decouple them. These decoupled uncertainties are vital for assessing and interpreting the confidence of the predictions. This method is tested on NASA's turbofan jet engine CMAPSS data-set. Our results show how these uncertainties can be modeled and how to disentangle the contribution of aleatoric and epistemic uncertainty. Additionally, our approach is evaluated on different metrics and compared against the current state-of-the-art methods.
## I Introduction
The cost of downtime due to failure and the corresponding unplanned maintenance is high. A well-planned maintenance strategy can minimize these failure occurrences. Predictive maintenance (an advanced maintenance planning strategy) uses models to monitor a health index of a system in order to schedule maintenance. A popular health index is the Remaining Useful Life (RUL), the effective life left in a component measured in operational units such as the number of cycles, number of hours, or amount of air pumped. The two main streams of RUL modeling approaches are physics based and data-driven. **Physics based** models are mathematical representations of a system's degradation used to predict the RUL. For complex systems, one common method for RUL modeling is to divide the system into subsystems and recursively model its sub-components individually [1]. This process of decomposing the system into smaller sub-systems and modeling them can be repeated until the desired level of granularity is reached. The chosen granularity also affects the accuracy of the model (in general, the deeper the level of granularity, the more accurate the model). This modeling approach can be time-consuming, and deep domain knowledge about the system and its sub-systems is needed. **Data-driven** models are built from data obtained from the system. With the developments in machine learning (ML), the process of data-driven modeling has become more accurate than ever [2]. Motivated by the success of deep learning (DL) in computer vision and text processing [2, 3], DL has become mainstream among many researchers within PHM. Currently, state-of-the-art models take two different directions for RUL modeling of complex systems: on one hand, the inputs are directly mapped onto the RUL [4, 5]; on the other hand, when a health index can be defined or measured, the modeling is done in a two-step procedure: _i_) the inputs are mapped onto the health index, _ii_) the health index is mapped onto the RUL [6]. Despite the good accuracy of the current DL approaches [4, 5, 6], most of them model point estimates of the RUL without considering the probabilistic nature of the system and the uncertainties in the modeling [5].
In general, there are two main sources of uncertainty in the modeling process: _aleatoric uncertainty_, which originates from the system failing at different operational times, and _epistemic uncertainty_, which comes from the uncertainty of the model parameters; e.g., these model parameters might change with the quantity of available data. Knowing the source of the uncertainties makes it possible to take better decisions based on the model predictions [7]. For instance, when the epistemic uncertainties are large, the model predictions should not be trusted; high epistemic uncertainty strongly indicates that the provided input differs from the training data distribution. If the aleatoric uncertainties dominate, then the uncertainties are inherent to the underlying system (or the quality of the data) and cannot be reduced by adding any other source of information. For industrial applications, being able
to distinguish between these uncertainties can be of much help: _i_) the aleatoric uncertainty provides information about the variance of the failure process, which can be used to quantify the risk taken when planning maintenance; _ii_) high epistemic uncertainty indicates regions where more data collection is needed to enrich the model's knowledge. This distinction gives crucial information for interpreting the model output more accurately in relation to the uncertainties, thus improving trustworthiness.
In this work, we produce probabilistic RUL estimates incorporating both the aleatoric and the epistemic uncertainties by utilizing an ensemble neural network. This ensemble-based approach is simple, easily parallelizable, and well calibrated to reflect the real underlying behavior. Our method is tested on NASA's turbofan jet engine CMAPSS data-set [8]. The results show the capability of our model approach to provide probabilistic estimates and to measure the isolated effects of the aleatoric and epistemic uncertainties.
The paper begins with related work, followed by ensemble neural networks for probabilistic modeling; then we describe the experiments and results. Finally, we show some advantages of this method and conclude the work.
## II Related Work
A number of authors use neural networks to predict the RUL of a system. The most common neural network architectures for this application are Convolutional Neural Networks (CNN) and Long Short-Term Memory networks (LSTM). [4] use an LSTM network combined with fully connected layers that takes in normalized data and predicts the RUL. [5] use a CNN with an attention mechanism to predict the RUL, along with some interpretability methods. The aforementioned approaches model point predictions; our work aims to model probabilistic predictions incorporating uncertainties.
Some works that consider probabilistic prediction are [6]; [9]; [10]; [11]. The work of [6] uses a three-step model for probabilistic RUL prediction. The first step predicts the probability distribution of the health index. In the second step, the predicted distribution of the health index is mapped onto the estimated RUL distribution. The third step is a correction carried out using LSTMs, acting as a re-calibrator for the prediction. Although the uncertainty estimation of the NN is similar to our work, one crucial difference between this work and Nemani's work is that our method is a single-stage prediction where inputs are mapped directly onto the RUL. This is important in complex systems such as CMAPSS, where defining a health index that is interpretable and observable is difficult or even impossible.
[11] use a Monte Carlo dropout approach for probabilistic predictions, which requires high computation and modeling time compared to our approach [12]. [10] use a modeling approach which only takes into account uncertainties from the system and does not model the uncertainties of the model parameters. Another approach, by [9], measures the uncertainties from the model (epistemic) but does not consider the uncertainties from the system (aleatoric).
Most of the existing work focuses on point prediction of the RUL, and only a few works focus on probabilistic methods. To our knowledge, the existing probabilistic methods either estimate the aleatoric or the epistemic uncertainties, or both simultaneously without separating the sources of uncertainty. Our approach provides probabilistic predictions that distinguish the source of the uncertainties.
## III Methods
### _Ensemble Neural Networks for Prediction_
[12] proposed a novel approach to model both aleatoric and epistemic uncertainties using deep ensembles of probabilistic networks. The individuals of an ensemble are probabilistic neural networks (PNNs). A PNN is a probabilistic model which captures the aleatoric uncertainties in the given data. PNNs work like ordinary neural networks, with the difference that they predict the parameters \(\theta\) of an assumed distribution \(\Pi(\theta)\). Additionally, the epistemic uncertainties are captured by the ensemble, through the fact that the individuals of the ensemble converge to different optima while capturing the distribution of the model parameters. During the training process, the optimizer aims to find parameters of the PNN that maximize the selected scoring rule.
The _scoring rule_ is a function that measures the quality of the predicted distribution \(p_{\theta}\); the higher the value, the better the quality of the prediction. The scoring rule helps to check whether the model is calibrated, _i.e._, whether the predicted distribution \(p_{\theta}\) reflects the real distribution \(q\), where \(\theta\) is the parameter of the assumed distribution. A well-defined scoring rule should satisfy the following conditions: _i_) \(S(p_{\theta},Y|x)\leq S(q,Y|x)\) and _ii_) \(S(p_{\theta},Y|x)=S(q,Y|x)\) if and only if \(p_{\theta}(Y|x)=q(Y|x)\). The negative log likelihood (NLL) and the Brier score are examples of scoring rules that satisfy the above properties.
### _Proposed Model Structure_
The proposed model uses a Gaussian distribution \(\mathcal{N}(\mu,\sigma)\) as the assumed distribution \(\Pi(\theta)\), where \(\mu\) is the mean and \(\sigma\) is the standard deviation. In other words, the distribution of the RUL estimates is assumed to be Gaussian. The model architecture consists of \(K\) stacked LSTM layers followed by \(L\) fully connected layers which output the two parameter estimates \(\hat{\mu}\) and \(\hat{\sigma}\). The network is trained using the NLL of the Gaussian distribution, with the training data used as observations of the predicted distribution. The NLL of the \(i^{th}\) sample is given by Eq. (1). Our modeling approach predicts the RUL at every time step of the provided window.
\[-\log\boldsymbol{p}_{\mu,\sigma}\left(y_{i}\mid\mathbf{x}_{i}\right)=\frac{ \log\sigma^{2}(\mathbf{x}_{i})}{2}+\frac{\left(y_{i}-\mu(\mathbf{x}_{i}) \right)^{2}}{2\sigma^{2}(\mathbf{x}_{i})}+\text{ const }. \tag{1}\]
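A minimal PyTorch sketch of such a PNN output head and the loss of Eq. (1) (with the constant term dropped); the softplus activation and the dimensions are illustrative assumptions, as the paper does not specify them:

```python
import torch

class GaussianHead(torch.nn.Module):
    """Maps features to (mu, sigma); softplus keeps sigma positive
    (a common choice; the activation is not specified in the paper)."""
    def __init__(self, d_in):
        super().__init__()
        self.fc = torch.nn.Linear(d_in, 2)

    def forward(self, h):
        mu, s = self.fc(h).unbind(dim=-1)
        return mu, torch.nn.functional.softplus(s) + 1e-6

def gaussian_nll(mu, sigma, y):
    """Eq. (1), averaged over the samples; the constant is omitted."""
    var = sigma ** 2
    return (0.5 * torch.log(var) + (y - mu) ** 2 / (2 * var)).mean()

head = GaussianHead(16)                # e.g. on top of the last LSTM state
mu, sigma = head(torch.randn(32, 16))
print(gaussian_nll(mu, sigma, torch.randn(32)))
```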
The predictions from the \(M\) individuals of the ensemble are combined by computing the mean distribution \(\mathcal{N}(\hat{\mu}_{*},\hat{\sigma}_{*})\)
\[\hat{\mu}_{*}=\frac{1}{M}\sum_{i=1}^{M}\hat{\mu}_{i}\, \tag{2}\]
\[\hat{\sigma}_{*}^{2}=\frac{1}{M}\sum_{i=1}^{M}(\hat{\sigma}_{i}^{2}+\hat{\mu}_{ i}^{2})-\hat{\mu}_{*}^{2}. \tag{3}\]
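A direct NumPy transcription of Eqs. (2)-(3); the array shapes (15 ensemble members, 100 predictions) are illustrative:

```python
import numpy as np

def ensemble_moments(mus, sigmas):
    """Mean distribution of the ensemble, Eqs. (2)-(3).
    mus, sigmas: arrays of shape (M, ...) over the M ensemble members."""
    mu_star = mus.mean(axis=0)                                  # Eq. (2)
    var_star = (sigmas**2 + mus**2).mean(axis=0) - mu_star**2   # Eq. (3)
    return mu_star, np.sqrt(var_star)

mus, sigmas = np.random.randn(15, 100), np.random.rand(15, 100) + 0.1
mu_s, sigma_s = ensemble_moments(mus, sigmas)
```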
### _Uncertainty Measures_
As mentioned before, the total uncertainty can be split into aleatoric and epistemic parts, expressed as \(U_{tot}=U_{al}+U_{ep}\). The aleatoric uncertainty can be measured by the average entropy \(H\) of each prediction, i.e., \(U_{al}=\frac{1}{M}\sum_{i=1}^{M}H(\mathbf{p}^{(i)})\), where \(M\) is the total number of models in the ensemble, \(i\) indexes an individual in the ensemble, and \(\mathbf{p}^{(1)},\ldots,\mathbf{p}^{(M)}\) are the \(M\) predictive distributions of the ensemble. The total uncertainty \(U_{tot}\) can be calculated as the entropy of the mean prediction, i.e., \(U_{tot}=H(\frac{1}{M}\sum_{i=1}^{M}\mathbf{p}^{(i)})\). Therefore, \(U_{ep}=H(\frac{1}{M}\sum_{i=1}^{M}\mathbf{p}^{(i)})-\frac{1}{M}\sum_{i=1}^{M}H(\mathbf{p}^{(i)})\)[13]. By assuming a normally distributed variable, i.e., \(x\sim\mathcal{N}(\mu,\sigma)\), the entropy can be expressed as \(H=\frac{1}{2}\log(2\pi\sigma^{2})+\frac{1}{2}\). Therefore, \(U_{tot}=\frac{1}{2}\log(2\pi\hat{\sigma}_{*}^{2})+\frac{1}{2}\) and we can write the _aleatoric_ and _epistemic_ uncertainties as
\[U_{al}\sim\frac{1}{M}\sum_{i=1}^{M}\log\left(\hat{\sigma}_{i}^{2}\right), \tag{4}\]
\[U_{ep}\sim\log(\hat{\sigma}_{*}^{2})-\frac{1}{M}\sum_{i=1}^{M}\log\left(\hat{ \sigma}_{i}^{2}\right). \tag{5}\]
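The uncertainty split of Eqs. (4)-(5) then follows from the per-member variances and the combined variance; a sketch (constant entropy terms dropped, shapes illustrative):

```python
import numpy as np

def decompose_uncertainty(mus, sigmas):
    """Aleatoric/epistemic split of Eqs. (4)-(5), entropy constants dropped.
    mus, sigmas: shape (M, ...) over the M ensemble members."""
    mu_star = mus.mean(axis=0)
    var_star = (sigmas**2 + mus**2).mean(axis=0) - mu_star**2  # Eq. (3)
    u_al = np.mean(np.log(sigmas**2), axis=0)                  # Eq. (4)
    u_ep = np.log(var_star) - u_al                             # Eq. (5)
    return u_al, u_ep
```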
## IV Experimental Setting
### _Data_
Our proposed method was tested on NASA's turbofan jet engine CMAPSS data-set [8], using FD001 for training and the test sets of the FD001, FD002, and FD003 data-sets. These data-sets were curated for RUL prediction tasks, containing 21 selected signals collected over operational cycles until failure. We omitted sensor signals 1, 5, 10, 16, 18, and 19 as their values are constant in data-set FD001. We utilize piecewise-linear RUL targets: in the initial stages, the RUL is assumed to be a constant of value \(128\), decreasing linearly during the last \(128\) cycles, similar to previous approaches [10, 14].
The data is pre-processed by normalizing the signals with the \(Z\)-norm \(x_{i}^{norm}=(x_{i}-\mu_{x})/\sigma_{x}\); the normalization parameters of the training data are also used to normalize the test data. Additionally, the sliding-window method is used to generate the samples used as inputs to the neural networks: a window of length \(l\) is moved along time with stride \(s\). In this work, the stride \(s\) is set to 1 and the window length \(l\) to 100.
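A sketch of this pre-processing, assuming one unit's multivariate series as a NumPy array; the sensor count is illustrative, and in practice \(\mu_{x}\) and \(\sigma_{x}\) come from the training data:

```python
import numpy as np

def make_windows(signals, mu, sd, l=100, s=1):
    """Z-normalize a (T, F) run-to-failure series with the training-set
    statistics (mu, sd) and slice it into windows of length l, stride s."""
    z = (signals - mu) / sd
    return np.stack([z[t:t + l] for t in range(0, len(z) - l + 1, s)])

x = np.random.randn(300, 15)            # one unit, 15 retained sensors
mu, sd = x.mean(axis=0), x.std(axis=0)  # training statistics in practice
print(make_windows(x, mu, sd).shape)    # (201, 100, 15)
```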
### _Model_
For reproducibility, the experiments use a fixed random seed of 237. Our model uses 2 LSTM layers with 32 and 16 neurons, respectively, followed by 1 dense layer. The ensemble consists of 15 models. The train/test split follows the original data-set. We use a batch size of \(32\) and an Adam optimizer with learning rate \(\lambda=0.001\) and parameters \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). An early-stopping mechanism monitors the loss from epoch \(35\) onward and stops training after \(3\) consecutive epochs of increasing loss, or at 100 epochs at the latest.
### _Evaluation Metric_
In order to compare against point-prediction methods, we evaluate our method with the same metrics used by those methods; for this purpose, the mean of the predicted distribution is used as the point estimate. Commonly used metrics for point prediction are the Root Mean Square Error (RMSE), shown in Eq. (6), where \(N\) is the number of samples in the data-set and \(\hat{y}\) is the model prediction, and the Score function shown in Eq. (7), where \(a_{1}\) is set to \(10\) and \(a_{2}\) to \(13\) as in [8].
\[RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(\hat{y}_{i}-y_{i})^{2}} \tag{6}\]
\[s=\left\{\begin{array}{l}\sum_{i=1}^{N}e^{-\left(\frac{\hat{y}_{i}-y_{i}}{a _{1}}\right)}-1\text{ for }(\hat{y}_{i}-y_{i})<0\\ \sum_{i=1}^{N}e^{\left(\frac{\hat{y}_{i}-y_{i}}{a_{2}}\right)}-1\text{ for }(\hat{y}_{i}-y_{i})\geq 0\end{array}\right. \tag{7}\]
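Both metrics are straightforward to transcribe; a NumPy sketch of Eqs. (6)-(7):

```python
import numpy as np

def rmse(y_hat, y):
    """Root Mean Square Error, Eq. (6)."""
    return np.sqrt(np.mean((y_hat - y) ** 2))

def score(y_hat, y, a1=10.0, a2=13.0):
    """Asymmetric CMAPSS score of Eq. (7); late predictions (d >= 0)
    are penalized more heavily than early ones."""
    d = y_hat - y
    return np.sum(np.where(d < 0, np.exp(-d / a1) - 1, np.exp(d / a2) - 1))
```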
For evaluating the probabilistic predictions, we use the prediction interval coverage percentage (PICP) and the normalized mean prediction interval width (NMPIW). PICP measures the percentage of observations that fall within the bounds of the given confidence interval. NMPIW measures the average width of the bounds, _i.e._, the distance between the upper and lower bound relative to the range of possible values. Formulae for PICP and NMPIW are provided in Eq. (8) and Eq. (9), respectively,
\[PICP=\frac{1}{N}\sum_{i=1}^{N}\left\{\begin{array}{l}1\text{ if }y_{i}\in[L_{\alpha}(\hat{\mathbf{p}}_{i}),U_{\alpha}(\hat{\mathbf{p}}_{i})]\\ 0\text{ if }y_{i}\notin[L_{\alpha}(\hat{\mathbf{p}}_{i}),U_{\alpha}(\hat{\mathbf{p}}_{i})]\end{array}\right.\, \tag{8}\]
\[NMPIW=\frac{1}{N(\max\left\{y\right\}-\min\left\{y\right\})}\sum_{i=1}^{N}(U_ {\alpha}(\hat{\mathbf{p}}_{i})-L_{\alpha}(\hat{\mathbf{p}}_{i})), \tag{9}\]
where \(\hat{\mathbf{p}}_{i}\) is the distribution predicted for the \(i^{th}\) sample. The upper bound \(U_{\alpha}(\mathbf{p})\) and lower bound \(L_{\alpha}(\mathbf{p})\) are calculated from the confidence interval \(\alpha\) of the distribution \(\mathbf{p}\). We use a 95% confidence interval in our calculations.
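For Gaussian predictions, the bounds follow from the quantiles of \(\mathcal{N}(\mu,\sigma)\); a sketch of Eqs. (8)-(9), assuming SciPy for the Gaussian quantile:

```python
import numpy as np
from scipy.stats import norm

def picp_nmpiw(mu, sigma, y, alpha=0.95):
    """Coverage (Eq. 8) and normalized width (Eq. 9) of Gaussian
    prediction intervals at confidence level alpha."""
    z = norm.ppf(0.5 + alpha / 2)          # ~1.96 for alpha = 0.95
    lo, hi = mu - z * sigma, mu + z * sigma
    picp = np.mean((y >= lo) & (y <= hi))
    nmpiw = np.mean(hi - lo) / (y.max() - y.min())
    return picp, nmpiw
```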
|  | FD001 | FD002 | FD003 | FD004 |
| --- | --- | --- | --- | --- |
| Train Units | 100 | 260 | 100 | 249 |
| Test Units | 100 | 259 | 100 | 249 |
| Operating Conditions | 1 | 6 | 1 | 6 |
| Fault Modes | 1 | 1 | 2 | 2 |

TABLE I: Summary of the NASA turbofan jet engine data-sets: four data-sets with different numbers of units, operating conditions, and fault modes.
## V Results

In our modeling process, we train on windows of 100 time steps and predict the RUL at all 100 time steps. RUL models are usually evaluated on the prediction made at the last available time step; we therefore use only the last time step when comparing with existing models. The prediction for one test unit is shown in Fig. 1: the mean prediction follows the ground truth, and the variance decreases later in the operational life of this randomly chosen unit.
We train the ensemble model on folder FD001 and compute the aleatoric and epistemic uncertainties for all samples in the test sets of folders FD001, FD002, and FD003. The kernel density estimates of the aleatoric and epistemic uncertainties are plotted in Fig. 2 (a) and Fig. 2 (b), respectively. From Fig. 2 (b), it is clear that the epistemic uncertainties of the samples from FD002 are high compared to those of FD001. This high uncertainty indicates that the model has not been trained on the data distribution of FD002 and should not be trusted (_i.e._, re-training is needed for this data-set). For FD003, the ensemble model shows an epistemic uncertainty closer to that of FD001, indicating that the predictions can be trusted, though they are not as good as for FD001, and that the data distribution is closer to that of FD001. To further analyze the epistemic uncertainty and how it reflects the differences in the data distributions of the different data-sets (_i.e._, FD001, FD002, and FD003), we plot in Fig. 2 (c) the T-distributed stochastic neighbor embedding (TSNE), a dimensionality-reduction technique, of the data space of FD001, FD002, and FD003. This visualization shows the data embeddings of the different data-sets: FD001 is a subset of FD002, and FD003 is largely a subset of FD001, with minor exceptions visible at the left boundaries. Fig. 2 (c) confirms our interpretation of the epistemic uncertainty on data-sets FD002 and FD003.
In Fig. 2 (a) we see that the aleatoric uncertainties lie in the same region for all three data-sets. This indicates that the uncertainties coming from the system are similar across the three data-sets. This is because the model was trained to predict the aleatoric uncertainties (\(\sigma\) of the estimates) of FD001 and therefore predicts aleatoric uncertainties in the same region as FD001. These uncertainties can only be trusted
Fig. 1: Prediction of unit 34 from the test set in FD001 using the model trained on the train set from FD001. Here the predictions are for the last 102 window steps.
Fig. 2: Kernel density plots of aleatoric uncertainties in (a) and epistemic uncertainties in (b) over the test sets from folders FD001, FD002, and FD003, predicted by the ensemble model trained on FD001. The uncertainties of FD001 are plotted as a solid red line, FD002 as a dashed blue line, and FD003 as a dash-dotted orange line. (c) shows the TSNE embedding, with the data projected onto TSNE dimensions 1 and 2. Data from the different data-sets are shown in different colors: red for FD001, blue for FD002, and orange for FD003.
when the epistemic uncertainties are low. The aleatoric uncertainties are due to inherent characteristics of the data and cannot be reduced by any means.
Finally, to compare against existing state-of-the-art point-prediction approaches, we evaluated our approach using both point-prediction and probabilistic metrics; the comparison is shown in Table II. The focus of this work is on how to include probabilistic prediction in RUL modeling, and we use a simple LSTM model for the RUL predictions. From the table, we see that our simple RUL-LSTM compares well with state-of-the-art point-prediction models. Moreover, our probabilistic approach can easily be implemented on top of the best-performing RUL prediction models.
## VI Conclusion
To summarise, we proposed an ensemble LSTM neural network for probabilistic RUL prediction that incorporates both aleatoric and epistemic uncertainties. This approach is tested on NASA's turbofan jet engine CMAPSS data-set. Our results show how epistemic and aleatoric uncertainties can be added to RUL predictions. Knowledge of the uncertainties, especially the epistemic uncertainty, allows us to estimate the ensemble model's prediction confidence on a given data-set. If the epistemic uncertainty is large, it is a strong indication that the ensemble model has not seen this kind of data before and needs to be re-trained for this data-set. This ensemble probabilistic approach is simple to implement on top of existing RUL point predictors, which would significantly improve the trust in and transparency of current state-of-the-art predictions.
Further work could explore methods for selecting an optimal distribution in place of the Gaussian, based on the data, and could perform further tests to understand the effect of the number of models in the ensemble.
## Acknowledgment
This work is supported by VINNOVA FFI under the contract 2020-05138. We thank Kuo-Yun Liang for helping us by reviewing this work. Finally, thanks to Scania CV AB for supporting this research project.
|
2309.13773 | GHN-QAT: Training Graph Hypernetworks to Predict Quantization-Robust
Parameters of Unseen Limited Precision Neural Networks | Graph Hypernetworks (GHN) can predict the parameters of varying unseen CNN
architectures with surprisingly good accuracy at a fraction of the cost of
iterative optimization. Following these successes, preliminary research has
explored the use of GHNs to predict quantization-robust parameters for 8-bit
and 4-bit quantized CNNs. However, this early work leveraged full-precision
float32 training and only quantized for testing. We explore the impact of
quantization-aware training and/or other quantization-based training strategies
on quantized robustness and performance of GHN predicted parameters for
low-precision CNNs. We show that quantization-aware training can significantly
improve quantized accuracy for GHN predicted parameters of 4-bit quantized CNNs
and even lead to greater-than-random accuracy for 2-bit quantized CNNs. These
promising results open the door for future explorations such as investigating
the use of GHN predicted parameters as initialization for further quantized
training of individual CNNs, further exploration of "extreme bitwidth"
quantization, and mixed precision quantization schemes. | Stone Yun, Alexander Wong | 2023-09-24T23:01:00Z | http://arxiv.org/abs/2309.13773v1 | GHN-QAT: Training Graph Hypernetworks to Predict Quantization-Robust Parameters of Unseen Limited Precision Neural Networks
###### Abstract
Graph Hypernetworks (GHN) can predict the parameters of varying unseen CNN architectures with surprisingly good accuracy at a fraction of the cost of iterative optimization. Following these successes, preliminary research has explored the use of GHNs to predict quantization-robust parameters for 8-bit and 4-bit quantized CNNs. However, this early work leveraged full-precision float32 training and only quantized for testing. We explore the impact of quantization-aware training and/or other quantization-based training strategies on quantized robustness and performance of GHN predicted parameters for low-precision CNNs. We show that quantization-aware training can significantly improve quantized accuracy for GHN predicted parameters of 4-bit quantized CNNs and even lead to greater-than-random accuracy for 2-bit quantized CNNs. These promising results open the door for future explorations such as investigating the use of GHN predicted parameters as initialization for further quantized training of individual CNNs, further exploration of "extreme bitwidth" quantization, and mixed precision quantization schemes.
## 1 Introduction
Low-bit neural networks that use limited-precision quantization for inference [1; 2] can make state-of-the-art models much more practical. However, perturbations induced by the quantization of weights and activations can change DNN behaviour in non-trivial ways; in some cases, state-of-the-art performance degrades significantly after quantization of weights and activations [3]. So how can we find highly performant, quantization-robust CNN parameters?
Recent works by [4] and [5] have shown remarkable performance using Graph Hypernetworks (GHN) to predict all trainable parameters of unseen DNNs in a _single forward pass_. Preliminary research in [6] has explored the use of GHNs to predict quantization-robust parameters for 8-bit and 4-bit quantized CNNs. However, this work trained GHNs to predict parameters for full-precision float32 candidate CNNs and only quantized them for testing. Building on this, we explore quantization-specific training and find that quantization-aware training of GHNs (which we refer to as GHN-QAT) can significantly improve quantized accuracy for GHN predicted parameters of 4-bit quantized CNNs and even achieve greater-than-random accuracy for 2-bit CNNs. More specifically, we simulated quantization (SimQuant) in sampled CNNs such that GHN-QAT adapts to the quantization errors
induced by quantizing GHN-predicted models (see Fig. 1). By finetuning GHNs on a mobile-friendly, **quantized** CNN architecture space, GHNs learn representations specifically for efficient quantized CNNs.
## 2 Experiment
We first investigate SimQuant-based quantization training (commonly referred to as quantization-aware training, QAT) on a target design space for limited-precision quantization. We evaluate a few low-bit-width settings (W4/A4, W4/A8, W2/A2, where W indicates weight bitwidth and A indicates activation bitwidth). Using SimQuant for W2/A2 proved unstable, and we found that modelling quantization as uniform noise (NoiseQuant) led to much better results. The reported W2/A2 results are from training with NoiseQuant, where the sampling distribution is computed based on 2-bit precision. In all cases, GHN-QAT training is precision/bitwidth-specific; encoding the bitwidth into the CNN graph could potentially remove the need for bitwidth-specific finetuning. We finetuned a CIFAR-10, DeepNets-1M pretrained GHN-2 model obtained from [7] on the ConvNets-250K graph dataset from [6]. We use tensorwise, asymmetric, uniform quantization throughout. Figure 1 shows how GHN-QAT is finetuned to predict efficient, quantization-robust CNNs.
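As an illustration of the quantization models used here, tensorwise asymmetric uniform fake quantization and the uniform-noise variant can be sketched in PyTorch as follows; the straight-through estimator is one standard way to keep SimQuant differentiable and is an assumption, not a detail stated above:

```python
import torch

def simquant(x, bits=4):
    """Tensorwise asymmetric uniform fake quantization; the
    straight-through estimator (STE) keeps it differentiable for QAT."""
    qmax = 2.0 ** bits - 1.0
    scale = (x.max() - x.min()).clamp(min=1e-8) / qmax
    zero_point = torch.round(-x.min() / scale)
    t = torch.clamp(x / scale + zero_point, 0.0, qmax)
    q = t + (torch.round(t) - t).detach()      # STE for the gradient
    return (q - zero_point) * scale

def noisequant(x, bits=2):
    """NoiseQuant: quantization error modeled as additive uniform
    noise of +/- half a quantization step at the given precision."""
    step = (x.max() - x.min()).clamp(min=1e-8) / (2.0 ** bits - 1.0)
    return x + (torch.rand_like(x) - 0.5) * step

w = torch.randn(64, 64, requires_grad=True)
simquant(w, bits=4).sum().backward()           # gradients flow via the STE
```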
GHN-QAT was finetuned on ConvNets-250K for 100 epochs using CIFAR-10 [8]. We follow a testing procedure similar to [4], [6] and evaluate GHN-QAT by comparing the mean CIFAR-10 test accuracy at the stated target precisions. Table 1 shows the top-1 and top-5 accuracy results on different testing splits. To establish the benefits of QAT, we also include results from [6] where the authors used full-precision float32 training and only quantized CNNs for testing. In [6], W2/A2 degraded to random accuracy. We use weight-tiling and normalization as described in [4] and use \(s^{(max)}=10\) for max shortest path of virtual edges.
## 3 Discussion
As demonstrated, we can easily simulate the quantization of CNNs to arbitrary precision, and we could even model other scalar quantization methods. Thus, GHN-QAT becomes a powerful tool for quantization-aware design of efficient CNN architectures. The parameters predicted by GHN-QAT are remarkably robust, and the QAT finetuning results (see Table 1) show a significant improvement over the full-precision float32 finetuning used in [6]. This shows a clear benefit to adapting GHNs specifically to predict parameters for quantization-aware graphs. Additional possibilities and challenges of leveraging quantization-aware training, such as learned quantization thresholds or reducing QAT
Figure 1: GHN-QAT finetuned on ConvNets-250K. We generate a large number of CNN graphs which are then quantized to target bitwidth for training. Once trained, GHN-QAT can predict robust parameters for unseen CNNs.
oscillations like in [9], should be explored to further improve GHN-QAT, especially for "extreme" low bitwidths. It's possible that such improvements to QAT would make SimQuant more stable for 2-bit quantization.
From GHN-QAT, we can see that introducing quantization into our GHN training allows for greater use of GHNs for quantization-specific neural network parameter prediction. Besides leveraging GHN-QAT for quantized versions of floating point operations, we should be able to encode quantization information such as bit-width and quantization scheme into the graphs. If used as a form of quantized accuracy prediction, GHN-QAT could greatly accelerate the process of searching for accurate, quantized CNNs. Additionally, GHN-QAT could be a useful weight initialization for quantized CNN training. The authors of [10] found noticeable differences in quantized accuracy of CNNs depending on their initialization. Thus, a quantization-aware weight initialization could be more robust than random initialization. If GHN-QAT-predicted parameters can be used as initialization for quantization-aware training rather than first training models to convergence in full float precision before additional QAT, then the training time of quantized models would be significantly reduced. Unfortunately, GHNs do not yet match the accuracy of iterative optimization and further improvements should be explored to bridge this gap. However, the aforementioned benefits could still greatly reduce the costs of designing quantized CNNs.
|
2309.12121 | A Multiscale Autoencoder (MSAE) Framework for End-to-End Neural Network
Speech Enhancement | Neural network approaches to single-channel speech enhancement have received
much recent attention. In particular, mask-based architectures have achieved
significant performance improvements over conventional methods. This paper
proposes a multiscale autoencoder (MSAE) for mask-based end-to-end neural
network speech enhancement. The MSAE performs spectral decomposition of an
input waveform within separate band-limited branches, each operating with a
different rate and scale, to extract a sequence of multiscale embeddings. The
proposed framework features intuitive parameterization of the autoencoder,
including a flexible spectral band design based on the Constant-Q transform.
Additionally, the MSAE is constructed entirely of differentiable operators,
allowing it to be implemented within an end-to-end neural network, and be
discriminatively trained. The MSAE draws motivation both from recent multiscale
network topologies and from traditional multiresolution transforms in speech
processing. Experimental results show the MSAE to provide clear performance
benefits relative to conventional single-branch autoencoders. Additionally, the
proposed framework is shown to outperform a variety of state-of-the-art
enhancement systems, both in terms of objective speech quality metrics and
automatic speech recognition accuracy. | Bengt J. Borgstrom, Michael S. Brandstein | 2023-09-21T14:41:54Z | http://arxiv.org/abs/2309.12121v1 | # A Multiscale Autoencoder (MSAE) Framework for End-to-End Neural Network Speech Enhancement
###### Abstract
Neural network approaches to single-channel speech enhancement have received much recent attention. In particular, mask-based architectures have achieved significant performance improvements over conventional methods. This paper proposes a multiscale autoencoder (MSAE) for mask-based end-to-end neural network speech enhancement. The MSAE performs spectral decomposition of an input waveform within separate band-limited branches, each operating with a different rate and scale, to extract a sequence of multiscale embeddings. The proposed framework features intuitive parameterization of the autoencoder, including a flexible spectral band design based on the Constant-Q transform. Additionally, the MSAE is constructed entirely of differentiable operators, allowing it to be implemented within an end-to-end neural network, and be discriminatively trained. The MSAE draws motivation both from recent multiscale network topologies and from traditional multiresolution transforms in speech processing. Experimental results show the MSAE to provide clear performance benefits relative to conventional single-branch autoencoders. Additionally, the proposed framework is shown to outperform a variety of state-of-the-art enhancement systems, both in terms of objective speech quality metrics and automatic speech recognition accuracy.
Speech Enhancement, End-to-End Neural Networks, Multiscale Representations, Multiresolution Transforms
## I Introduction
When captured in realistic acoustic environments, speech signals typically suffer from distortions such as additive noise and reverberation. For human listeners, this can lead to reduced intelligibility [1] and increased listener fatigue [2]. It can also result in performance degradation for automated speech applications such as speech and speaker recognition [3, 4, 5, 6, 7]. Speech enhancement can be employed to improve the perceptual quality of captured signals, and to curb these negative effects. For many decades, speech enhancement relied on statistical model-based techniques [8, 9, 10, 11]. Recently, however, deep neural networks (DNNs) have achieved impressive performance improvements due to their high modelling capacity [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36].
Neural network speech enhancement systems can be categorized into _generative_ and _mask-based_ approaches. Generative systems are regression-based and synthesize the enhanced waveform using a set of learned filters, generally operating without much constraint on the output [13, 14, 32, 33, 34, 35, 36]. While they can freely synthesize output waveforms and potentially reconstruct speech signal components that have been attenuated due to channel effects, they can also result in distorted speech signals or unnatural residual noise. Mask-based approaches, on the other hand, are constrained to apply a multiplicative mask in a time-frequency space in order to suppress interfering signal components [12, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. While this constraint can help avoid distorted speech, it also limits the ability of the system to reconstruct any signals components not present in the original waveform.
This paper focuses on mask-based approaches to neural network speech enhancement. In such systems, an encoder first transforms the input signal into an embedding space. A multiplicative mask is then applied to suppress signal components from interfering sources. Finally, a decoder synthesizes the output waveform. A crucial component of mask-based end-to-end enhancement networks is the autoencoder which maps input signals into an embedding space which is effective at separating speech and noise components. Some prior studies have relied on mappings such as the Short-Time Fourier Transform (STFT) or Discrete Cosine Transform (DCT) [20, 22, 30, 37, 38, 39], while others have explored trainable encoders and decoders [29, 28, 12].
The human cochlea is well known to implement a complex set of auditory filters that reflect a broad trade-off between frequency precision and temporal resolution [40], extracting a multiscale representation of speech signals. The resulting filterbank conveys a non-uniform time-frequency tiling with a general emphasis on frequency resolution at the lower end of the auditory range and finer temporal resolution at the upper end. For decades speech scientists have successfully exploited the ear's frequency characteristics to great effect. The input features to the vast majority of speech-derived analysis methods include some form of frequency scale manipulation (e.g. mel and bark scale warping) that emulates the cochlea's spectral properties. Similarly, but to a lesser degree, the ear's temporal characteristics have been mimicked via a multiplicity of analysis window durations that allow for the detection of transient events without sacrificing the required frequency resolution. The narrow-band and wide-band spectrogram [41] represent two extremes of time-frequency trade-off, while methods such as the multi-resolution short-time Fourier Transform [42], Constant-Q transform [43], and wavelet filtering
[44] implement filtering schemes that reflect the cochlea's temporal-spectral properties to some degree.
In the field of deep learning, several recent studies have explored neural network topologies with multiscale representations [45, 46, 47, 48]. Such approaches typically aggregate representations across network layers in order to simultaneously model information at different scales. Multiscale network architectures have been shown highly successful for tasks such as image classification and segmentation [45, 46, 47]. Additionally, [48] and [49] showed the effectiveness of multiscale networks in modeling speech for the task of speaker verification.
This paper proposes the multiscale autoencoder (MSAE) for mask-based end-to-end neural network speech enhancement, which is motivated both by recent multiscale network topologies and traditional multiresolution transforms in speech processing. The framework provides the encoder and decoder mappings required by mask-based approaches, and is not specific to the architecture used for mask estimation. The MSAE performs spectral decomposition of an input waveform within separate band-limited branches, each operating with a different rate and scale, to extract a sequence of embeddings. The proposed framework features intuitive parameterization of the autoencoder, including a flexible spectral band design based on the Constant-Q transform. Additionally, the MSAE is constructed entirely of differentiable operators, allowing it to be implemented within an end-to-end neural network, and discriminatively trained for the task of speech enhancement.
To assess the performance of the proposed MSAE framework, it is combined with an example mask estimator based on the \(U\)-Net architecture [50] to form the MSAE-UNet end-to-end enhancement system. The paper provides experimental results comparing the performance of MSAE-UNet to several state-of-the-art methods in terms of objective speech quality metrics, and shows the proposed system to significantly outperform them. Additionally, the paper presents experimental results for automatic speech recognition using enhancement as a pre-processing step, and again clearly shows the benefit of the proposed framework.
This paper is organized as follows. Section II provides an overview of mask-based end-to-end speech enhancement systems. The MSAE framework is presented in Section III. Section IV then introduces an example mask estimation architecture that, when integrated in the proposed autoencoder framework, forms the end-to-end MSAE-UNet. Experimental results are presented in Section VI, and Section VII provides conclusions.
## II Mask-Based End-to-end Enhancement
Let \(\mathbf{s}\in\mathbb{R}^{D}\) be a clean speech waveform, where
\[\mathbf{s}=\left[s\left(1\right),\ldots,s\left(D\right)\right]^{T}. \tag{1}\]
Let an observed speech waveform, \(\mathbf{x}\in\mathbb{R}^{D}\), be similarly defined. Here, the observed signal is captured in a real-world environment, and may suffer from distortions such as additive noise and reverberation. In this paper, speech enhancement is defined as the task of inferring the clean waveform, \(\mathbf{s}\), from the observed version, \(\mathbf{x}\), yielding the estimate \(\hat{\mathbf{s}}\in\mathbb{R}^{D}\).
This paper focuses on a mask-based approach to end-to-end neural network speech enhancement. Specifically, we use the \(b\)-Net framework from [12], illustrated in Figure 1, and named for its likeness to the lower-case letter. The general network architecture is delineated by its application of a multiplicative mask in a general, possibly learned, embedding space. Similar networks have been explored in the context of source separation [28, 51] as well as signal enhancement [12]. The \(b\)-Net structure can be interpreted as a generalization of statistical model-based speech enhancement methods [8, 9, 10, 11]. With these earlier systems the short-time magnitude spectrogram is extracted from the noisy input waveform, manipulated via a multiplicative mask, and the output waveform is then generated from the enhanced spectrogram through overlap-and-add synthesis using the original noisy phase signal. With \(b\)-Net, the Fourier analysis and overlap-and-add synthesis are replaced by generalized encoder and decoder mappings.
Within the \(b\)-Net framework, as can be observed in Figure 1, an encoder network extracts a set of embeddings from the input according to the generalized mapping
\[\mathbf{Z}=\mathcal{E}\left(\mathbf{x}\right), \tag{2}\]
where \(\mathbf{Z}\) is a tensor of size \(T\times K\times C\). Here, the first axis corresponds to time, and \(T\) denotes the number of frames extracted from the input waveform. The second axis corresponds to frequency, and \(K\) denotes the number of bins used in the frequency representation. Finally, \(C\) represents the number of channels in the feature map \(\mathbf{Z}\). Enhancement is performed via multiplicative masking in the embedding space defined by \(\mathcal{E}\), so that interfering signal components are appropriately attenuated. A mask estimation network extracts this multiplicative mask according to
\[\mathbf{M}=\mathcal{M}\left(\mathbf{Z}\right), \tag{3}\]
where \(\mathbf{M}\) is a tensor of the same size as \(\mathbf{Z}\), and contains values in the range \([0,1]\). Enhanced versions of the embeddings are obtained via element-wise multiplication according to
\[\hat{\mathbf{Z}}=\mathbf{M}\otimes\mathbf{Z}, \tag{4}\]
Fig. 1: Overview of the \(b\)-Net Architecture [12]. Note that with the masking mechanism disabled, the \(b\)-Net results in an autoencoder.
where \(\otimes\) denotes the element-wise (Hadamard) product. Finally, a decoder network extracts the output waveform from the enhanced embeddings according to the generalized mapping
\[\hat{\mathbf{s}}=\mathcal{D}\left(\hat{\mathbf{Z}}\right). \tag{5}\]
Note that the overall operation of the \(b\)-Net framework can be expressed concisely as
\[\hat{\mathbf{s}}=\mathcal{D}\left(\mathcal{M}\left(\mathcal{E}\left(\mathbf{x} \right)\right)\otimes\mathcal{E}\left(\mathbf{x}\right)\right). \tag{6}\]
A key feature of the \(b\)-Net architecture is the ability to disable the masking mechanism. This results in an autoencoder path expressed as
\[\hat{\mathbf{x}}=\mathcal{D}\left(\mathcal{E}\left(\mathbf{x}\right)\right), \tag{7}\]
where \(\hat{\mathbf{x}}\) is the reconstructed version of the input \(\mathbf{x}\). Typically the autoencoder path is able to produce an accurate reconstruction of the input, leading to the approximation
\[\mathbf{x}\approx\mathcal{D}\left(\mathcal{E}\left(\mathbf{x}\right)\right). \tag{8}\]
Note that in general, end-to-end architectures do not contain an analogous autoencoder path, such as in Fully Convolutional Networks (FCNs) [21, 24, 26]. The existence of this path allows the user to dynamically control the level of noise suppression via a minimum gain level by adapting (6) so that
\[\hat{\mathbf{s}}=\mathcal{D}\left(\max\left\{G_{min},\mathcal{M}\left( \mathcal{E}\left(\mathbf{x}\right)\right)\right\}\otimes\mathcal{E}\left( \mathbf{x}\right)\right), \tag{9}\]
where \(G_{min}\in[0,1]\) can be tuned during inference [12]. In this way, the user can efficiently navigate the inherent trade-off between noise suppression and speech quality. In [12], it was shown that dynamic control of the level of noise suppression can provide benefits in enhancement performance, both for objective speech quality metrics and for subjective listening tests.
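As a sketch of this inference-time control (tensor sizes are illustrative; the mask estimator and decoder are placeholders outside the scope of this snippet):

```python
import torch

def enhance(Z, M, g_min=0.0):
    """Apply the estimated mask with a user-tunable suppression floor,
    as in Eq. (9); g_min = 1 recovers the autoencoder path of Eq. (7)."""
    return torch.clamp(M, min=g_min) * Z

Z = torch.rand(100, 257, 4)        # embeddings (T x K x C), illustrative sizes
M = torch.rand_like(Z)             # estimated mask with values in [0, 1]
s_emb = enhance(Z, M, g_min=0.1)   # the decoder D(.) would synthesize from this
```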
The focus of this paper is the design of the autoencoder path defined in (7). Several studies have used the Short-Time Fourier Transform (STFT) to define encoder and decoder mappings [30, 37, 20, 38], while other work has relied on the Discrete Cosine Transform (DCT) [22, 39]. Finally, trainable \(1\)-dimensional Convolutional Neural Network (CNN) layers have showed promise for learning an embedding space for effective masking-based speech enhancement [12, 28, 29]. In the following section, the MSAE is developed to provide a multirate and multiscale embedding space for mask-based speech enhancement.
## III The Multiscale Autoencoder
This section presents the Multiscale Autoencoder. First, definitions are provided for the analysis operator, \(\mathcal{A}\), and synthesis operator, \(\mathcal{S}\), which represent the basic building blocks of the MSAE. Next, the encoder network is described, which performs spectral decomposition of an input waveform within separate band-limited branches, each operating with a different rate and scale. Finally, the decoder network is described, which synthesizes the output waveform from the encoder output. Note that the encoder and decoder networks can be constructed entirely using standard neural network operations, allowing discriminative training of autoencoder parameters.
### _The Analysis Operator_
The analysis operator, \(\mathcal{A}\), performs a band-limited spectral decomposition of the input waveform \(\mathbf{x}\), outputting a tensor \(\mathbf{Y}\). It is parameterized by the tuple \((N,\omega_{1},\omega_{2})\), where \(N\) is the duration of the analysis window in samples, and \([\omega_{1},\omega_{2}]\) is the range of normalized frequencies included in the analysis.
The analysis operator relies on an invertible transform, either fixed or learned, to perform spectral analysis. The DFT represents one possible instantiation of the analysis operator and is used in this study, but other transforms, e.g. the DCT, can also be used. Let \(\mathbf{F}=[\mathbf{f}_{1},\ldots,\mathbf{f}_{N}]\) be the \(N\)-dimensional DFT transform. Define \(\mathbf{W}\) to contain the basis vectors of \(\mathbf{F}\) which correspond to the intended spectral range \([\omega_{1},\omega_{2}]\), i.e.
\[\mathbf{W}=\left[\mathbf{f}_{\left\lceil N\omega_{1}/2\right\rceil},\ldots,\mathbf{f}_{\left\lfloor N\omega_{2}/2\right\rfloor}\right]\in\mathbb{R}^{N\times K_{W}}, \tag{10}\]
where \(K_{W}=\left\lfloor N\omega_{2}/2\right\rfloor-\left\lceil N\omega_{1}/2 \right\rceil+1\). To perform spectral analysis, the \(\mathcal{A}\) operator applies \(4\) parallel \(1\)-dimensional CNN layers, using the kernels \(\mathfrak{Re}\left(\mathbf{W}\right)\), \(\mathfrak{Re}\left(-\mathbf{W}\right)\), \(\mathfrak{Im}\left(\mathbf{W}\right)\), and \(\mathfrak{Im}\left(-\mathbf{W}\right)\), respectively. Each layer uses a stride of \(N/2\) and Rectified Linear Unit (ReLU) activation functions, resulting in parallel tensors of shape \(\left\lfloor 2D/N\right\rfloor\times K_{W}\). The outputs of the CNN layers are then concatenated to form the tensor \(\mathbf{Y}\) with shape \(\left\lfloor 2D/N\right\rfloor\times K_{W}\times 4\). Since \(\mathbf{x}\) is a real-valued signal, its spectrum is conjugate-symmetric. For the sake of computational efficiency, the kernel \(\mathbf{W}\) in (10) therefore only contains DFT basis functions in the normalized frequency range \([0,1]\), since the remainder of the spectrum can be reconstructed from these basis functions.
Note that the motivation for using \(4\) CNN layers is to enable representation of complex numbers as a vector of non-negative values. However, for the sake of efficiency, the \(4\) CNN layers can be implemented as \(2\) CNN layers with kernels \(\mathfrak{Re}\left(\mathbf{W}\right)\) and \(\mathfrak{Im}\left(\mathbf{W}\right)\). The outputs can then be expanded to \(4\) parallel layers by separately considering non-negative and negative values of each, prior to the ReLU activations.
For improved spectral properties, windowing functions can be leveraged when constructing the kernel in (10). That is, a window \(\mathbf{h}\in\mathbb{R}^{N}\) can be applied to the DFT basis functions as \(\mathbf{f}_{k}\leftarrow\mathbf{h}\odot\mathbf{f}_{k}\), where \(\odot\) denotes the element-wise product. Note that the chosen windowing function should provide unity overlap-and-add when used in conjunction with the synthesis operator [52]. In this study, the square-root of the Hanning window is used.
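To make the kernel construction concrete, the following is a minimal NumPy sketch of (10) with the square-root Hanning window applied; the function name and the four-way real-valued split are illustrative conveniences, not part of the original implementation.

```python
import numpy as np

def analysis_kernels(N, w1, w2):
    # Band-limited DFT basis of (10): bins ceil(N*w1/2) .. floor(N*w2/2).
    k = np.arange(int(np.ceil(N * w1 / 2)), int(np.floor(N * w2 / 2)) + 1)
    n = np.arange(N)
    h = np.sqrt(np.hanning(N))                                  # sqrt-Hann window
    W = h[:, None] * np.exp(-2j * np.pi * np.outer(n, k) / N)   # shape (N, K_W)
    # Four real kernels Re(W), Re(-W), Im(W), Im(-W); each would feed a
    # 1-D CNN layer with stride N/2 and ReLU activation.
    return np.stack([W.real, -W.real, W.imag, -W.imag])
```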
### _The Synthesis Operator_
The synthesis operator, \(\mathcal{S}\), generates a band-limited waveform \(\mathbf{x}\) from the tensor \(\mathbf{Y}\). It applies a \(1\)-dimensional Transpose CNN layer to each of the \(4\) slices in \(\mathbf{Y}\), using the corresponding kernels defined in Section III-A. A stride of \(N/2\) is used for each layer, and the outputs are summed to form \(\mathbf{x}\). Note that when identically parameterized, the analysis operator inverts the synthesis operator, i.e.
\[\mathbf{Y}=\mathcal{A}\left(\mathcal{S}\left(\mathbf{Y};N,\omega_{1},\omega_{2} \right);N,\omega_{1},\omega_{2}\right). \tag{11}\]
Note that \(\mathbf{W}\) in (10) only contains DFT basis functions within the normalized frequency range \([0,1]\), omitting the other half of the spectrum. Since \(\mathbf{x}\) is a real-valued signal, its spectrum is conjugate-symmetric, and the spectrum can be
reconstructed by simply scaling certain DFT basis functions during synthesis. Specifically, when constructing \(\mathbf{W}\) in the synthesis operator, all basis functions \(\mathbf{f}_{k}\) are scaled by \(2\), except for those corresponding to the DC and Nyquist frequencies.
### _The Encoder Network_
Having defined the analysis and synthesis operators, the encoder and decoder networks can be presented. The encoder performs spectral decomposition of an input waveform \(\mathbf{x}\) within separate band-limited branches, each operating with a different rate and scale, and outputs a tensor, \(\mathbf{Z}\), providing the mapping in (2). The network relies on the analysis operator \(\mathcal{A}\) for spectral processing. The encoder network is parameterized by the number of branches, \(B\), and the duration of the base analysis window in samples, \(N_{o}\). Additionally, the encoder network is parameterized by the normalized frequency ranges which split the input spectrum into \(B\) roughly orthogonal bands, requiring the \(B+1\) band edges \(\omega_{b}\) for \(0\leq b\leq B\).
Figure 2 provides an overview of the encoder network. The input waveform \(\mathbf{x}\) is first processed by \(B\) parallel analysis operators, resulting in spectral decomposition tensors \(\mathbf{Y}_{b}\), each operating at a different frame rate, and each of shape
\[\left\lfloor 2^{-(B-b-1)}D/N_{o}\right\rfloor\times\left(\left\lfloor 2^{B-b-1}N_{o}\omega_{b}\right\rfloor-\left\lceil 2^{B-b-1}N_{o}\omega_{b-1}\right\rceil+1\right)\times 4. \tag{12}\]
Next, tensors \(\mathbf{Y}_{b}\) are upsampled by rate \(2^{B-b}\) in time, via frame repetition, resulting in tensors \(\mathbf{Z}_{b}\) all operating at the same base frame rate. Finally, tensors \(\mathbf{Z}_{b}\) are concatenated along the \(2^{nd}\) axis, resulting in the output tensor \(\mathbf{Z}\), containing a multi-scale spectral decomposition of input \(\mathbf{x}\). The shape of \(\mathbf{Z}\) is \(\left\lfloor 2D/N_{o}\right\rfloor\times K_{T}\times 4\), where
\[K_{T}=B+\sum_{b=1}^{B}\left(\left\lfloor 2^{B-b-1}N_{o}\omega_{b} \right\rfloor-\left\lceil 2^{B-b-1}N_{o}\omega_{b-1}\right\rceil\right). \tag{13}\]
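As a sanity check on (12)-(13), the per-branch bookkeeping can be sketched as follows; `encoder_shapes` and `band_edges` are illustrative names, with `band_edges = [w_0, ..., w_B]` holding the normalized band edges.

```python
import numpy as np

def encoder_shapes(B, N_o, band_edges):
    K_T = 0
    for b in range(1, B + 1):
        N_b = 2 ** (B - b) * N_o                        # branch analysis window
        lo = int(np.ceil(N_b * band_edges[b - 1] / 2))  # lowest DFT bin
        hi = int(np.floor(N_b * band_edges[b] / 2))     # highest DFT bin
        K_T += hi - lo + 1                              # channels of Y_b, as in (12)
        # Y_b is then upsampled in time by 2**(B - b) via frame repetition,
        # e.g. Z_b = np.repeat(Y_b, 2 ** (B - b), axis=0).
    return K_T                                          # matches (13)
```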
### _The Decoder Network_
The decoder network, illustrated in Figure 3, synthesizes the output waveform \(\hat{\mathbf{x}}\) from the encoder output \(\mathbf{Z}\), according to (6). First, \(\mathbf{Z}\) is split into branch-specific tensors \(\mathbf{Z}_{b}\). Next, Max-Pooling in time is applied to each, with pool size \(2^{B-b}\) and stride \(2^{B-b}\), yielding tensors \(\mathbf{Y}_{b}\). Finally, the synthesis operator \(\mathcal{S}\) is utilized to generate the reconstructed waveform according to
\[\hat{\mathbf{x}}=\sum_{b=1}^{B}\mathcal{S}\left(\mathbf{Y}_{b};2^{B-b}N_{o}, \omega_{b-1},\omega_{b}\right). \tag{14}\]
Note that besides the trivial case of \(B=1\), the MSAE does not provide perfect reconstruction, which is due to adjacent DFT basis functions at branch edges having different dimensions and therefore not being strictly orthogonal. However, distortion due to reconstruction error was deemed perceptually negligible for MSAE configurations used during experimentation, leading to the approximation in (8).
### _The Design of Spectral Bands_
The spectral bands of the MSAE can be designed in various ways. For example, the spectrum can be split into \(B\) bands of uniform width. In this paper, however, we use a generalized Constant-Q design [43], where the quality factor \(Q\) is defined as the ratio of band center frequency to band width,
\[Q=\frac{\omega_{b}+\omega_{b-1}}{2\left(\omega_{b}-\omega_{b-1}\right)}, \tag{15}\]
and is constant across bands. Rearranging (15) leads to the recursive expression
\[\omega_{b-1}=\rho^{-1}\omega_{b}, \tag{16}\]
where
\[\rho=\frac{2Q+1}{2Q-1}, \tag{17}\]
for \(Q>0.5\). If the uppermost band edge is set to the Nyquist frequency, i.e. \(\omega_{B}=1\), then (16) leads to the band edge design
\[\omega_{b}=\left\{\begin{array}{ll}\rho^{-B+b}&\text{if}\ \ 0<b\leq B\\ 0&\text{if}\ \ b=0\end{array}\right.. \tag{18}\]
Note that if \(Q=1.5\), then \(\rho=2\), representing a dyadic decomposition.
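For reference, a short sketch of the band edge design in (16)-(18) is given below; the function name is illustrative.

```python
def constant_q_edges(B, Q):
    # rho from (17); requires Q > 0.5.
    rho = (2 * Q + 1) / (2 * Q - 1)
    # w_b = rho**(-B + b) for 0 < b <= B, and w_0 = 0, as in (18).
    return [0.0] + [rho ** (b - B) for b in range(1, B + 1)]

# constant_q_edges(5, 1.5) -> [0.0, 0.0625, 0.125, 0.25, 0.5, 1.0] (dyadic)
```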
Using the Constant-Q band design, MSAE configurations can be fully parameterized by the tuple (\(B\),\(Q\),\(N_{o}\)). However, in the remainder of the paper signals will be assumed to have a sampling frequency of \(F_{s}=16\)kHz, and configurations will instead be specified by the tuple (\(B\),\(Q\),\(T_{o}\)), where \(T_{o}=N_{o}/F_{s}\), which allows for more intuitive comparison of MSAE configurations. Additionally, for MSAE systems with \(B=1\), the band design is irrelevant, and the \(-\) symbol will be used to specify the quality factor.
The embedding space defined by the MSAE encoder differs from conventional time-frequency representations with respect to two main characteristics. First, the multiscale processing of the MSAE provides the ability to capture improved spectral resolution in lower frequency bands, while simultaneously achieving greater temporal resolution in higher frequency regions. Second, since the longer analysis windows used to model low frequency regions provide higher spectral resolution, the time-frequency representation defined by the MSAE naturally includes a warped frequency axis with more emphasis applied to lower frequency bands. A magnitude spectral representation of the tensor \(\mathbf{Z}\) can be derived by calculating the root-mean-square (RMS) along the \(3^{rd}\) axis, yielding a \(T\times K\) matrix. Figure 4 provides example magnitude spectral representations of \(\mathbf{Z}\) for various MSAE configurations, for a clean speech signal from the VCTK Noisy and Reverberant set [53] with the transcription "Please call Stella." The top panel shows the \((1,-,2.5\)ms) configuration, which corresponds to conventional wideband processing with an analysis window duration of \(2.5\)ms. The second panel shows the \((1,-,40\)ms) configuration, which corresponds to conventional narrowband processing with an analysis window duration of \(40\)ms. Finally, the bottom panel provides the \((5,2.0,2.5\)ms) configuration, which corresponds to five bands spanning analysis window durations between \(2.5\)ms and \(40\)ms.
Fig. 4: Magnitude spectral representations of \(\mathbf{Z}\) for various MSAE configurations, for a clean speech signal from the VCTK Noisy and Reverberant set [53] with the transcription ”Please call Stella”. The top panel shows the \((1,-,2.5\text{ms})\) configuration, which corresponds to conventional wideband processing with an analysis window duration of \(2.5\text{ms}\). The second panel shows the \((1,-,40\text{ms})\) configuration, which corresponds to conventional narrowband processing with an analysis window duration of \(40\text{ms}\). Finally, the bottom panel provides the \((5,2.0,2.5\text{ms})\) configuration, which operates with five bands spanning analysis window durations between \(2.5\text{ms}\) and \(40\text{ms}\).
Fig. 3: The Decoder Network of the MSAE: The synthesis operator \(\mathcal{S}\) is defined in Section III-B, the \(\downarrow K\) operator denotes Max-Pooling in time with pool size \(K\) and stride \(K\), and stacked blocks denote tensor concatenation along the \(2^{nd}\) axis.
Fig. 2: The Encoder Network of the MSAE: The analysis operator \(\mathcal{A}\) is defined in Section III-A, the \(\uparrow K\) operator denotes nearest-neighbor upsampling in time by rate \(K\), and stacked blocks denote tensor concatenation along the \(2^{nd}\) axis.
As can be observed in Figure 4, the wideband time-frequency representation of the top panel captures fine temporal patterns such as the plosives \(/k/\) and \(/t/\) at \(0.75\)s and \(1.5\)s, respectively. Conversely, the narrowband representation of the second panel achieves better resolution of harmonics and formant frequencies, e.g. the \(/a/\) vowel at \(2.0\)s. The multiscale representation in the bottom panel, however, achieves both wideband representation in higher frequency regions, and narrowband representation in lower frequency regions. Additionally, it can be observed that the multiscale embedding space in the bottom panel results in a warped frequency scale. This frequency warping can also be observed in Figure 5, which illustrates the frequency responses of the \(1\)-dimensional CNN filters utilized by the \((5,2.0,2.5\)ms) MSAE. In Figure 5, it can be observed that a greater number of CNN filters is allocated to lower frequency ranges. Additionally, due to the longer analysis windows applied in the lower frequency bands, the corresponding filters show a narrower passband.
### _The MSAE as a Trainable Network_
The MSAE encoder and decoder networks are composed of differentiable operations, and can be constructed entirely using standard neural network blocks. This allows the kernels \(\mathbf{W}\) to be trained jointly with the mask estimation network for the task of speech enhancement. Note that the kernels can be initialized according to (10), and allowed to adapt during the overall network training process. Additionally, the tensors \(\mathbf{W}\) can be overcomplete, i.e. \(K_{W}>\lfloor N\omega_{2}/2\rfloor-\lceil N\omega_{1}/2\rceil+1\), providing greater modeling capacity in the spectral decomposition performed by the encoder network. In this case, an overcompleteness factor, \(\kappa\), is defined such that
\[K_{W}=\lfloor\kappa\left(\lfloor N\omega_{2}/2\rfloor-\lceil N\omega_{1}/2 \rceil+1\right)\rfloor. \tag{19}\]
In this study, for \(\kappa>1\), the tensor \(\mathbf{W}\) was initialized with DFT basis vectors with interpolated center frequencies.
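One possible initialization of a \(\kappa\)-overcomplete kernel, sketched below, interpolates fractional center frequencies across the same band; this is one plausible reading of the description above, not the authors' exact code.

```python
import numpy as np

def overcomplete_init(N, w1, w2, kappa):
    K = int(np.floor(N * w2 / 2)) - int(np.ceil(N * w1 / 2)) + 1
    K_W = int(np.floor(kappa * K))                   # as in (19)
    # Fractional (interpolated) DFT bin centers spanning the same band.
    k = np.linspace(np.ceil(N * w1 / 2), np.floor(N * w2 / 2), K_W)
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(n, k) / N)  # trainable after init
```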
## IV Mask Estimation
The role of the mask estimator within the proposed mask-based framework is to map \(\mathbf{Z}\) to a tensor \(\mathbf{M}\) via (3), which is used as a multiplicative mask to perform speech enhancement in the embedding space defined by the encoder. The proposed MSAE framework is not specific to the architecture of the mask estimation network, and is instead compatible with the generalized mapping given in (3). However, in order to assess the performance of the MSAE framework, this study leverages an example mask estimation network based on the \(U\)-Net architecture from [50], which is described in this section. During experimentation, when the example \(U\)-Net mask estimation architecture is combined with the proposed MSAE framework, the resulting end-to-end enhancement system is referred to as the MSAE-UNet.
An overview of the mask estimation network used in this study is provided in Figure 6. As a basic component, the _CNN Block_ is defined as the following series of operations: a \(2\)-dimensional CNN layer, Batch Normalization [54], and non-linear activation functions which are ReLUs unless otherwise stated. This block is parameterized by the dimensions of the CNN filters, the number of input channels, and the number of output channels. Configurations of _CNN Blocks_ are expressed as tuples of these parameters, e.g. \((3\times 3,32,64)\).
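In PyTorch-style code, a _CNN Block_ with configuration \((3\times 3,32,64)\) might look as follows; the `same` padding is an assumption, since the paper does not specify the padding scheme.

```python
import torch.nn as nn

def cnn_block(kernel, c_in, c_out, activation=nn.ReLU):
    # CNN Block: 2-D convolution -> Batch Normalization -> activation (ReLU by default).
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=kernel, padding="same"),
        nn.BatchNorm2d(c_out),
        activation(),
    )

block = cnn_block((3, 3), 32, 64)   # the (3x3, 32, 64) configuration
```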
As can be observed in Figure 6, the mask estimation network first applies Frequency-wise Normalization. This includes the element-wise \(\log\left(\mathbf{Z}+1\right)\) operator, followed by mean- and variance-normalization across time frames and channels, resulting in zero-mean and unit-variance feature map distributions per frequency bin. Frequency-wise Normalization is designed to promote invariance to domain shifts in the input signals, and is based on the Cepstral Extraction operator in [12]. A _CNN Block_ is then applied to expand the feature map to \(16\) channels prior to processing by the \(U\)-Net architecture.
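A minimal sketch of Frequency-wise Normalization, assuming \(\mathbf{Z}\) is stored as a `(T, K, 4)` tensor, is given below; the epsilon guard is an added numerical assumption.

```python
import torch

def frequency_wise_norm(Z, eps=1e-8):
    # Element-wise log(Z + 1), then zero-mean/unit-variance statistics per
    # frequency bin, pooled over time frames (dim 0) and channels (dim 2).
    Z = torch.log(Z + 1.0)
    mean = Z.mean(dim=(0, 2), keepdim=True)
    std = Z.std(dim=(0, 2), keepdim=True)
    return (Z - mean) / (std + eps)
```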
The mask estimation network next utilizes a \(U\)-Net structure. As can be observed in Figure 6, a contraction path first
Fig. 5: Frequency responses of the \(1\)-dimensional CNN filters utilized by the MSAE with \((5,2.0,2.5\)ms) configuration.
Fig. 6: Overview of the Mask Estimation Network: \(\mathbf{Z}\) is the set of embeddings extracted from the input waveform via (2), and \(\mathbf{M}\) is the multiplicative mask used for enhancement in (6).
applies a series of _Contraction Blocks_, defined in Figure 7, in order to increase the spectral and temporal context captured in the feature map. Max-Pooling using a \(2\times 2\) window size and a \(2\times 2\) stride is first applied, halving the dimensions of the feature map. A _CNN Block_ is then applied to double the number of channels, followed by two more _CNN Blocks_. The _Contraction Blocks_ also provide a second output, which is a skip connection from the input. The contracting path includes \(4\) levels, resulting in an output feature map with \(256\) channels.
The mask estimation network also includes an expansion path which applies a series of _Expansion Blocks_, defined in Figure 8, in order to reconstruct the original resolution of the feature map. Within each _Expansion Block_, a _CNN Block_ is first applied to halve the number of channels, followed by nearest-neighbor upsampling with a \(2\times 2\) window. The feature map is then concatenated with the skip connection from the corresponding level in the contraction path, doubling the number of channels. A _CNN Block_ is then applied to halve the number of channels in the feature map, followed by two additional _CNN Blocks_. After the expansion path, a _CNN Block_ with Sigmoid activation functions is applied to generate a mask tensor \(\mathbf{M}\) which is of the same shape as \(\mathbf{Z}\) and contains output values in the range \([0,1]\) which are appropriate for multiplicative masking.
As Figure 6 illustrates, the modeling capacity of the mask estimation network can be improved with the inclusion of base-level processing between the U-Net's contraction and expansion paths. The example architecture here utilizes a base-level network consisting of \(5\) Squeeze-and-Excitation Residual Network (SE-ResNet) blocks [55]. These blocks, detailed in Figure 9, combine Squeeze-and-Excitation (SE) channel calibration with a Residual Network (ResNet) topology [56], and have been shown to be effective in a variety of tasks.
## V The MSAE-UNet Training Process
This section presents the training process of the MSAE-UNet. Specifically, the derivation of target signals is discussed. Additionally, the generation of parallel training data is outlined. Finally, the training loss is described.
### _Designing Target Signals_
Typically, end-to-end systems are trained with parallel data in which known clean speech is corrupted with additive noise; the system learns the inverse mapping from \(\mathbf{x}\) to \(\mathbf{s}\) through minimization of some distance measure \(d\left(\mathbf{s},\hat{\mathbf{s}}\right)\). However, in many realistic environments, speech signals are captured in
Fig. 8: Overview of the Expansion Block: For a general Input shape of \(T_{i}\times K_{i}\times C_{i}\), the Skip Connection and Output have shape \(2T_{i}\times 2K_{i}\times C_{i}/2\).
Fig. 9: Overview of the Squeeze-and-Excitation Residual Network (SE-ResNet) Block: The Input and Output have the same shape, and \(C_{i}\) denotes the number of channels in the Input feature map.
the presence of reverberation in addition to additive noise and other distortions. While recent end-to-end speech enhancement systems have been shown successful at suppressing additive noise [16, 17, 18, 21, 26, 31], few studies have successfully addressed suppression of reverberation with an end-to-end system [12, 19]. Training an end-to-end enhancement system to learn the mapping from \(\mathbf{x}\) to \(\mathbf{s}\) in the presence of reverberation may be difficult due to the phase distortion caused by convolutional noise.
In this study we utilize the target signal design proposed in [12], which enables the enhancement system to learn joint suppression of additive and convolutional distortion. Let \(\mathbf{v}\) be the reverberated-only version of \(\mathbf{s}\), which excludes any further distortion present in \(\mathbf{x}\). The STFT of \(\mathbf{s}\) is given by \(\mathbf{S}_{m,l}\) where \(m\) and \(l\) are the frequency channel and frame indices. Let \(\mathbf{V}_{m,l}\) be defined similarly. An enhanced version of \(\mathbf{V}\) can be obtained using an oracle Wiener Filter,
\[\mathbf{S}_{m,l}^{*}=\max\left\{\min\left\{\frac{\left|\mathbf{S}_{m,l}\right| ^{2}}{\left|\mathbf{V}_{m,l}\right|^{2}},1\right\},0\right\}\mathbf{V}_{m,l}. \tag{20}\]
The corresponding waveform, \(\mathbf{s}^{*}\), can be synthesized via the inverse STFT. The signal \(\mathbf{s}^{*}\) then represents a version of the reverberant signal \(\mathbf{v}\) with the majority of late reflections suppressed, but with the phase distortion introduced by early reflections still present. This allows an end-to-end system to be trained to perform joint suppression of noise and reverberation by learning a mapping from \(\mathbf{x}\) to \(\mathbf{s}^{*}\) through the minimization of some distance measure \(d\left(\mathbf{s}^{*},\hat{\mathbf{s}}\right)\).
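The oracle target of (20) reduces to a clipped spectral gain; a NumPy sketch follows, where the small denominator floor is an added numerical safeguard.

```python
import numpy as np

def wiener_target(S, V, eps=1e-10):
    # S, V: STFTs of the clean and reverberated-only signals (freq x frames).
    gain = np.clip(np.abs(S) ** 2 / (np.abs(V) ** 2 + eps), 0.0, 1.0)
    return gain * V   # s* is then synthesized via the inverse STFT
```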
### _Generating Training Data_
Training an end-to-end system requires parallel data consisting of the original (clean) signal, \(\mathbf{s}\), and its corrupted, observed counterpart, \(\mathbf{x}\). Utilizing the target signals discussed in Section V-A also requires the reverberated-only version, \(\mathbf{v}\). In practice, field recorded data of this nature is relatively limited and not of sufficient quantity or scope to be useful for (at least initial) training purposes. As a result, considerable care has been taken to synthetically generate realistic parallel data representing a broad range of operational conditions.
Our in-house training material was dynamically generated from an extensive corpus of speech sources, reverberant channels, and noise conditions. The clean speech consists of 456 hours of audio collected from multiple publicly available corpora (e.g. the Linguistic Data Consortium (LDC), LibriSpeech [57], TIMIT [58]). Currently 2458 unique talkers are utilized, representing a spectrum of languages and ages. Clean speech signals, \(\mathbf{s}\), are randomly selected from this set. The generator then applies a room impulse response to produce \(\mathbf{v}\), and has the capacity to incorporate varying room impulse responses (currently 1708) derived from a number of sources (including [59, 60, 61]) to recreate a generalized ensemble of enclosure sizes and types. Additive noise is then added to produce the observed signal \(\mathbf{x}\). The noise conditions are delineated as background (i.e. ambient) and non-stationary. The 250+ hours of noise samples (collected primarily from the MUSAN [62], Voice-Home [60], Noisex [63], and AudioSet [58] corpora) represent 218 distinct background and 88756 non-stationary noise conditions. In an effort to further expand the scope of observed material, the room impulse responses and additive noise signals are perturbed by random rate-resampling and spectral-shaping. Additionally, the resulting mixed signal is subjected to multiple potential modifications associated with the signal acquisition process, such as clipping, quantization, and the presence of an interceding communications channel.
### _Training Loss_
Training an end-to-end speech enhancement system requires a distance measure which operates on time-domain samples. Initial studies on end-to-end enhancement systems optimized network parameters to minimize the mean square-error (MSE) between the output waveform, \(\hat{\mathbf{s}}\), and the clean waveform, \(\mathbf{s}\),
\[d_{MSE}\left(\mathbf{s},\hat{\mathbf{s}}\right)=\frac{1}{D}\sum_{n=1}^{D}\! \left(s\left(n\right)-\hat{s}\left(n\right)\right)^{2}. \tag{21}\]
However, the MSE does not take into account properties of human perception of speech, and may not result in an enhanced signal which optimizes perceptual quality. Recent studies have proposed loss functions which address these issues [15, 21, 25, 31, 16]. In this study, however, we leverage the perceptually-motivated MSE (pMSE) distance proposed in [12].
The MSE loss from (21) can be generalized to include the effects of both signal pre-emphasis [52] and \(\mu\)-law amplitude companding [64], leading to
\[d_{pMSE}\left(\mathbf{s},\hat{\mathbf{s}}\right)=\frac{1}{D}\sum_{n=1}^{D}\left(f_{\mu}\left(s\left(n\right)-\beta s\left(n-1\right)\right)-f_{\mu}\left(\hat{s}\left(n\right)-\beta\hat{s}\left(n-1\right)\right)\right)^{2}, \tag{22}\]
where \(\beta\) is the pre-emphasis constant, and the \(\mu\)-law companding function is given by
\[f_{\mu}\!\left(s\left(n\right)\right)=\text{sign}\!\left(s\left(n\right)\right) \!\frac{\log\left(1+\mu\left|s\left(n\right)\right|\right)}{\log\left(1+\mu \right)}. \tag{23}\]
Equation (22) offers a generalized distance measure which can be tuned to account for various properties of human perception via the parameters \(\beta\) and \(\mu\). Note that for settings \(\beta=0.0\) and \(\mu\to 0.0\), the proposed measure is equivalent to the standard MSE in (21). More details regarding the motivation for the pMSE cost can be found in [12].
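A sketch of the pMSE in (22)-(23) as a PyTorch loss is shown below; the values of \(\beta\) and \(\mu\) are illustrative placeholders, as the paper does not restate its settings here.

```python
import torch

def pmse_loss(s, s_hat, beta=0.95, mu=255.0):
    log1p_mu = torch.log1p(torch.tensor(mu))
    def f_mu(x):                      # mu-law companding, (23)
        return torch.sign(x) * torch.log1p(mu * torch.abs(x)) / log1p_mu
    def pre(x):                       # pre-emphasis: s(n) - beta * s(n - 1)
        return x[..., 1:] - beta * x[..., :-1]
    return torch.mean((f_mu(pre(s)) - f_mu(pre(s_hat))) ** 2)
```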
Note that several studies have proposed the use of spectral-based cost functions which do not take into account the phase signal [65, 31, 66, 32]. During our experimentation, we found that while the inclusion of such spectral-based components in the overall training loss provided some advantage with the objective speech quality measures, it also led to the inclusion of 'buzzy' artifacts in the resulting speech, particularly in the higher frequencies. For this reason the time-domain pMSE loss, which offers both a degree of perceptual modelling and a sensitivity to speech phase distortion, was utilized exclusively in this study.
In [12], a dual-path loss was proposed for mask-based end-to-end enhancement systems in order to ensure that the autoencoder path provides high quality reconstruction when encoder and decoder filters are trained. The dual-path loss is given by \(d_{pMSE}\left(\mathbf{s},\hat{\mathbf{s}}\right)+d_{pMSE}\left(\mathbf{x},\hat{\mathbf{x}}\right)\), where the additional second term enforces the approximation from (8). In this study, the dual-path loss was utilized whenever MSAE kernels were learnable, as discussed in Section III-F.
The construction of training data can have significant impact on the behavior of the resulting neural network enhancement systems. For example, the ratio of active speech to inactive speech within the data can change the system's emphasis on speech quality versus noise suppression. To better control this tradeoff during training, a prior probability of active speech can be induced by weighting individual training waveforms. It is straightforward to determine whether a clean reference sample contains a significant amount of active speech. For a given training batch, let \(M_{1}\) and \(M_{0}\) denote the total number of samples that are determined to correspond to active speech and inactive speech, respectively. To introduce an effective prior, \(\pi\in\left[0,1\right]\), active speech and inactive speech segments can be weighted by \(\pi\left({{M_{0}}+{M_{1}}}\right)/{M_{1}}\) and \(\left({1-\pi}\right)\left({{M_{0}}+{M_{1}}}\right)/{M_{0}}\), respectively. In this way, using a large \(\pi\) focuses network training on speech quality, whereas a small value puts more emphasis on noise suppression.
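The active-speech prior can be realized with simple per-waveform weights; a sketch under the stated weighting rule follows, with the zero-division guards added as an assumption.

```python
def speech_prior_weights(is_active, pi=0.75):
    # is_active: boolean flag per training waveform in the batch.
    M1 = sum(is_active)
    M0 = len(is_active) - M1
    w1 = pi * (M0 + M1) / max(M1, 1)          # weight for active speech
    w0 = (1.0 - pi) * (M0 + M1) / max(M0, 1)  # weight for inactive speech
    return [w1 if a else w0 for a in is_active]
```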
## VI Experimental Results
This section presents experimental results for speech enhancement using MSAE-UNet, both in terms of objective speech quality metrics and automatic speech recognition performance. During experimentation, a variety of MSAE configurations are analyzed. For all systems, input waveforms are split into windows of dimension \(D=20480\), corresponding to \(1.28\) sec, with \(50\%\) overlap, before processing. Output waveforms are then reconstructed using overlap-and-add. All windows are scaled to unit variance prior to enhancement, and gains are reintroduced after network processing. Networks are trained using target signals generated according to Section V-A. Parallel training data is generated as discussed in Section V-B. Networks are optimized by minimizing the pMSE loss described in Section V-C, using an active speech prior of \(\pi=0.75\). In all testing, the minimum gain, \(G_{min}\), is set to \(-50\)dB, which is observed to provide a good trade-off between noise suppression and speech quality during informal listening.
### _Objective Speech Quality_
Speech enhancement experiments are performed on the Voice Cloning Toolkit (VCTK) Noisy and Reverberant Corpus [53], a parallel clean-corrupted set comprised of \(824\) speech files with synthetically added reverberation and background noise. To assess performance, a variety of objective speech quality metrics are used. Specifically, results are reported in terms of Perceptual Evaluation of Speech Quality (PESQ) [67] and Short-time Objective Intelligibility (STOI) [68]. Results are also reported in terms of the Composite Signal (CSIG), Composite Background (CBAK), and Composite Overall (COVL) quality scores from [69], which are weighted combinations of various signal quality measures where the weightings are trained to correlate with subjective listening tests. Finally, the relative improvement in COVL compared to the unprocessed signal is provided to offer more clarity. In all tables, bold entries denote the best result for each metric.
#### VI-A1 The Effect of Analysis Window Duration
The first set of experiments aims to explore the effect of analysis window duration in neural network speech enhancement systems. Table I provides speech quality measures on the VCTK Noisy and Reverberant Corpus for MSAE-UNet with the single-branch MSAE configuration, \((1,-,T_{o})\), and varying \(T_{o}\). Note that short window durations, e.g. \(T_{o}=5\)ms, correspond to conventional wideband analysis in the autoencoder, whereas long windows, e.g. \(T_{o}=40\)ms, correspond to conventional narrowband analysis. From Table I, it can be observed that analysis window duration has a significant impact on speech quality metrics, with the best performance achieved for intermediate durations. With respect to COVL, an analysis window duration of \(T_{o}=10\)ms maximizes the quality of enhanced speech. These results show the sensitivity of neural network speech enhancement to the design of analysis windows, and motivate the MSAE framework, which can represent separate frequency bands with appropriate window durations.
#### VI-A2 The Multi-Branch MSAE Framework
The next set of experiments aims to assess the benefits of the proposed MSAE framework within mask-based enhancement systems. An important parameter of the MSAE is the number of branches, \(B\). Table II provides speech quality measures on the VCTK Noisy and Reverberant Corpus for MSAE-UNet with the multi-branch MSAE configuration, \((B,1.5,2.5\)ms), and varying number of branches. All systems use a default dyadic spectral band design, corresponding to the quality factor \(Q=1.5\). It can be observed that speech quality improves significantly for MSAE configurations with multiple branches, when compared to the single-branch system, with the best performance achieved for \(B=5\). Specifically, the \(B=5\) MSAE provides an additional \(16\%\) relative improvement in COVL compared to the single-branch autoencoder. These results clearly show the benefit of the proposed multiscale autoencoder, relative to conventional systems.
#### VI-A3 Spectral Band Design within the MSAE
The MSAE is also parameterized by the Constant-Q band design. The next experiments explore the role of the quality factor, \(Q\), in enhancement performance. Table III provides speech quality measures on the VCTK Noisy and Reverberant Corpus for the MSAE configuration \((5,Q,2.5\)ms), for different values of \(Q\). It can be observed that relative to a dyadic band design, speech quality can be improved by tuning the quality factor to \(Q=2.0\), which corresponds to narrower bands and a higher degree of frequency warping than the dyadic design. Specifically, the improved band design provides an additional \(2\%\) relative improvement in COVL.
#### VI-A4 Trainable MSAE Filters
The final set of experiments explores the use of trainable encoder and decoder filters within the MSAE framework. As discussed in Section III-F, the MSAE encoder and decoder networks are composed of differentiable operations, and can be constructed entirely using standard neural network blocks, allowing the kernels \(\mathbf{W}\) to be trained
jointly with the mask estimation network. Table IV compares speech enhancement performance of the MSAE-UNet with the \((5,2.0,2.5\)ms) MSAE configuration, with fixed and learned kernels. In the latter case, the overcompleteness factor, \(\kappa\), is specified. In the table, it can be observed that allowing MSAE filters to adapt during training provides performance improvements across speech quality metrics. Specifically, the use of learned filters and an overcompleteness factor \(\kappa=1.5\) provides an additional \(6\%\) relative improvement in COVL. Note that in the remainder of this section, only results for this MSAE-UNet configuration will be reported.
#### VI-A5 An Ablation Study
In order to give a clear summary of the previous experiments, Table V provides an ablation study of neural network speech enhancement using the MSAE framework. The table first includes objective speech quality metrics for the original degraded signals in the VCTK Noisy and Reverberant Corpus. The next row provides results for a baseline mask-based enhancement system which uses a single-branch autoencoder with a \(10\)ms analysis window, and is trained to learn the mapping from noisy signals \(\mathbf{x}\) to clean signals \(\mathbf{s}\), using the standard MSE loss. Each successive row then provides results when cumulatively adding an additional feature to the enhancement system.
In Table V, the poor results of the baseline system may be due to the use of clean target signals. The presence of potentially severe reverberation in the training data may introduce a high degree of phase distortion, making it difficult for an end-to-end system to learn the mapping from the noisy signal \(\mathbf{x}\) to the clean signal \(\mathbf{s}\). In the third row, the Wiener-filtered target signals described in Section V-A are instead used during training, leading to a significant improvement in enhancement performance. In the fourth row, the conventional MSE loss is replaced by the perceptual MSE during training, as discussed in Section V-C, leading to further performance improvements.
In Table V, the fifth row provides results for a multi-branch MSAE. Specifically, the \((5,1.5,2.5\)ms) MSAE configuration with fixed \(\mathbf{W}\) filters is used, providing further performance improvements. Next, the Constant-Q spectral band design within the MSAE was tuned from a dyadic decomposition to a quality factor of \(Q=2.0\), yielding the results in the sixth row of the table. Finally, the seventh row provides results for the use of adapted MSAE kernels, providing a slight performance improvement in certain objective metrics.
#### VI-A6 Comparison to State-of-the-Art Enhancement Systems
Finally, the MSAE-UNet was compared to state-of-the-art neural network speech enhancement systems, and a variety of baselines were chosen to assess the performance of the proposed system. First, the MetricGAN+ [34] system uses a Bidirectional Long Short Term Memory (BLSTM) architecture [13], trained within a Generative Adversarial Network (GAN)
[70] framework using a perceptually-motivated loss. Next, the Mimic system [23] uses a neural network architecture, and is trained jointly to both minimize the MSE loss and to improve senone classification of the enhanced speech. Finally, the DEMUCS system [32] uses a U-Net architecture, and is trained with a composite time-domain and spectral-domain loss. Note that the MetricGAN+ and Mimic models are provided by the SpeechBrain Toolkit [71], and were trained with data from the VoiceBank [72] and Deep Noise Suppression (DNS) [73] corpora. The DEMUCS model was trained on the VCTK Noisy [14] and DNS corpora. Table VI provides a performance comparison of MSAE-UNet with the baseline systems on the VCTK Noisy and Reverberant Corpus. Additionally, it includes the number of trainable parameters in each model studied. As can be observed, the proposed system achieves significant performance improvements across speech quality metrics. Specifically, the MSAE-UNet provides additional relative improvements in COVL of \(12\%\)-\(24\%\) compared to the baseline systems. This is especially noteworthy considering the small size of MSAE-UNet relative to the Mimic and DEMUCS models.
### _Automatic Speech Recognition_
Enhancement can not only improve the perceptual quality of distorted speech, alleviating listener fatigue, but can also improve performance of applications such as automated speech recognition (ASR). This section provides experimentation using enhancement as pre-processing for speech recognition. In order to provide general results, two ASR models are studied, namely Deep Speech [74] and the _Large_ variant of Whisper [75], both representing state-of-the-art end-to-end neural network approaches. Experiments are performed on the VCTK Noisy and Reverberant Corpus and the Voices Obscured in Complex Environmental Settings (VOiCES) Corpus [61]. The latter is a large set of far-field audio collected by replaying speech and noise distractor signals within various reverberant rooms. The corpus captures a wide range of acoustic environments by varying room dimensions, noise types, microphone types, and microphone placements. Throughout experimentation, results are reported as Word Error Rate (WER), calculated using the NIST Scoring Toolkit (SCTK) [76].
Table VII provides ASR results for the Deep Speech and Whisper models, when using enhancement as a pre-processing step. It can be observed that with the Deep Speech model, the baseline enhancement systems generally result in performance degradation relative to the original signals. This degradation is especially large for the VOiCES Corpus. The MSAE-UNet system, however, provides ASR improvements for both benchmarks. For the Whisper model, the baseline enhancement systems again lead to performance degradation in many cases. The MSAE-UNet system, however, provides clear performance improvements for the VCTK benchmark, and minor degradation for the VOiCES Corpus.
## VII Conclusion
This paper proposed the multiscale autoencoder (MSAE) for mask-based end-to-end neural network speech enhancement. This framework provides the encoder and decoder mappings for such networks, and is not specific to the mask estimation architecture. The MSAE performs spectral decomposition of an input waveform within separate band-limited branches, each operating with a different rate and scale, to extract a sequence of multiscale embeddings. The proposed framework features intuitive parameterization of the autoencoder, including a flexible spectral band design based on the Constant-Q transform. Additionally, the MSAE is constructed entirely of differentiable operators, allowing it to be implemented within an end-to-end neural network, and discriminatively trained for the task of speech enhancement. To assess the performance of the proposed MSAE framework, it was integrated with an example mask estimator based on the \(U\)-Net architecture. The
resulting end-to-end enhancement system was shown to outperform several state-of-the-art speech enhancement methods both in terms of objective speech quality metrics and automatic speech recognition accuracy.
|
2309.06626 | Accelerating Deep Neural Networks via Semi-Structured Activation
Sparsity | The demand for efficient processing of deep neural networks (DNNs) on
embedded devices is a significant challenge limiting their deployment.
Exploiting sparsity in the network's feature maps is one of the ways to reduce
its inference latency. It is known that unstructured sparsity results in lower
accuracy degradation with respect to structured sparsity but the former needs
extensive inference engine changes to get latency benefits. To tackle this
challenge, we propose a solution to induce semi-structured activation sparsity
exploitable through minor runtime modifications. To attain high speedup levels
at inference time, we design a sparse training procedure with awareness of the
final position of the activations while computing the General Matrix
Multiplication (GEMM). We extensively evaluate the proposed solution across
various models for image classification and object detection tasks. Remarkably,
our approach yields a speed improvement of $1.25 \times$ with a minimal
accuracy drop of $1.1\%$ for the ResNet18 model on the ImageNet dataset.
Furthermore, when combined with a state-of-the-art structured pruning method,
the resulting models provide a good latency-accuracy trade-off, outperforming
models that solely employ structured pruning techniques. | Matteo Grimaldi, Darshan C. Ganji, Ivan Lazarevich, Sudhakar Sah | 2023-09-12T22:28:53Z | http://arxiv.org/abs/2309.06626v2 | # Accelerating Deep Neural Networks via Semi-Structured Activation Sparsity
###### Abstract
The demand for efficient processing of deep neural networks (DNNs) on embedded devices is a significant challenge limiting their deployment. Exploiting sparsity in the network's feature maps is one of the ways to reduce its inference latency. It is known that unstructured sparsity results in lower accuracy degradation with respect to structured sparsity but the former needs extensive inference engine changes to get latency benefits. To tackle this challenge, we propose a solution to induce semi-structured activation sparsity exploitable through minor runtime modifications. To attain high speedup levels at inference time, we design a sparse training procedure with awareness of the final position of the activations while computing the General Matrix Multiplication (GEMM). We extensively evaluate the proposed solution across various models for image classification and object detection tasks. Remarkably, our approach yields a speed improvement of \(1.25\times\) with a minimal accuracy drop of \(1.1\%\) for the ResNet18 model on the ImageNet dataset. Furthermore, when combined with a state-of-the-art structured pruning method, the resulting models provide a good latency-accuracy trade-off, outperforming models that solely employ structured pruning techniques. The code is available at [https://github.com/Deeplite/activ-sparse](https://github.com/Deeplite/activ-sparse).
## 1 Introduction
Deep neural networks (DNNs) have become the go-to state-of-the-art solution in most domains of machine learning in recent years, like computer vision [32], natural language understanding [53] and generative AI [30]. Oftentimes, the computational footprint of DNN models limits their usage on low-resource embedded processors. Compression and acceleration of such models is an active research area aimed at bridging this gap [6] and could be generally categorized into pruning [34, 39, 58], tensor decomposition [38], quantization [10, 46], development of lightweight neural networks [25, 26, 42], and runtime optimizations [3, 19].
Pruning remains a prominent compression method, particularly evidenced by recent strides in structured weight pruning, achieving state-of-the-art latency-accuracy trade-offs across diverse computer vision tasks [12]. However, existing research in pruning has predominantly focused on removing redundant model parameters, overlooking the potential inherent sparsity within feature maps, commonly referred to as activations. A certain degree of activation sparsity naturally arises in DNNs with ReLU-like activation functions [36, 50]. Nevertheless, this sparsity, tied to the functional form of the ReLU non-linearity, retains an unstructured nature and lacks homogeneity across layers. Several methods have emerged to artificially augment activation sparsity during training, enhancing model generalization and robustness through regularization techniques [14, 57]. However, such methods selectively remove blocks of connected pixels solely during model training, maintaining denseness at inference time and consequently forfeiting opportunities for model inference acceleration. In contrast, to achieve faster model execution post-training, activation sparsity needs to extend to inference time as well. A variety of works explored _data-dependent_ mechanisms to exploit activation sparsity at runtime, dynamically selecting the pixels according to the complexity of the input sample to process [8, 49, 52]. While these approaches efficiently reduce computations with minimal accuracy loss, effectively integrating them into low-power embedded devices can be challenging due to the required architectural modifications. In contrast, _data-free_ strategies employ custom regularization with proper hard-thresholding to establish a fixed and constant sparsity pattern [13, 33]. Such a strategy guarantees consistent speedup across distinct input samples. However, the absence of structured regular patterns among zeroed elements confines these model acceleration benefits to dedicated sparse inference engines (e.g., DeepSparse [33]).
To tackle these challenges, we propose an efficient DNN compression pipeline that consists of (i) a novel training scheme that induces semi-structured sparsity in activation feature maps and (ii) an easy-to-implement runtime modification that allows exploiting the semi-structured sparsity of the network's activations at inference time. The proposed sparsity pattern for feature maps is structured in the channel dimension, but unstructured in the spatial dimension. That is,
a set of individual pixels are zeroed across all channels of the feature map. We suggest an effective way to construct such sparsity masks during training and demonstrate how these sparse masks can be used by the runtime during inference. With XNNPACK [17] as an example library, we implement a runtime modification that transforms the semi-structured sparsity of activations into effectively structured sparsity, resulting in reduced computational load through the use of lower ranks in General Matrix Multiplication (GEMM).
To summarize, the primary focus of this study could be outlined as follows:
* We propose a novel training scheme inducing semi-structured activation sparsity in deep neural networks via the propagation of random spatial masks.
* We show that sampling of random masks during training followed by mask freezing improves the performance of DNNs under the constraint of semi-structured sparsity in activations.
* We demonstrate the effectiveness of the proposed training scheme on image classification and object detection tasks and show how it can be combined with structured pruning to get a competitive accuracy-latency trade-off.
* We provide an example of an easy-to-implement runtime modification on top of XNNPACK [17] that allows obtaining a latency speedup of up to \(2\times\) with relatively low sparsity rates (under \(50\%\)).
## 2 Related Work
Over the past few years, significant progress has been made in the field of deep learning model compression and acceleration, aimed at improving the efficiency of deep neural networks during inference by reducing their memory and computational requirements. Pruning [39, 58] focuses on removing redundant connections or units in the model architecture based on heuristic importance criteria, resulting in streamlined models with improved efficiency. Quantization [28, 42] tackles model size compression by reducing the numerical precision of weights and activations from standard \(32\)-bit floating-point representations to lower bit-widths such as \(8\)-bit, or in more extreme cases, \(2\)-bit or \(1\)-bit. Knowledge distillation [22, 56] involves transferring knowledge from a larger, more complex network to a smaller one, allowing the compact model to attain comparable performance to its larger counterpart. Hand-crafted models, exemplified by architectures like MobileNetV3 [24], EfficientNetV2 [51] and ShuffleNetV2 [41], are often designed with custom operations and blocks optimized for faster inference, thereby enhancing overall efficiency. Furthermore, apart from direct model modifications, there are other strategies aimed at improving the efficiency of deep neural networks. Graph order rewriting involves transforming the network's computational graph to optimize its execution flow, thus enhancing overall performance [1]. Custom runtime optimization [3, 19] aims to maximize model performance at the operator level, harnessing the target hardware's potential. It becomes indispensable in cases where existing operators or processing units cannot directly execute certain model structures, such as unstructured sparse or low-bit quantized models, requiring specific adaptations for seamless and efficient execution.
### Pruning
Pruning methods can be usually categorized according to their granularity [23] or to their importance policy. In terms of granularity, pruning can usually operate with _unstructured_ or _structured_ sparsity patterns. Unstructured pruning involves removing single connections in the network based on their importance [43, 20]. Targeting individual weights offers flexibility in achieving high accuracy but may lead to challenges in efficient inference due to irregular memory access patterns. A custom runtime with specialized sparse kernels is often necessary to achieve speedup in case of unstructured sparsity (e.g., DeepSparse [27]). Conversely, structured pruning [45, 35] involves the removal of entire channels or filters from the network, which can pose challenges during model training due to its more substantial impact on accuracy. However, pruning at this level of granularity can significantly enhance model efficiency in many existing runtimes, resulting in notable reductions in storage requirements and accelerated inference latency.
Pruning policies encompass various schemes and criteria for efficient model compression. Magnitude-based criteria rely on the absolute weight values to identify less important parameters [40, 20], while first-order methods leverage gradients for importance ranking [7, 44]. Some approaches involve one-time pruning followed by retraining [21], while others adopt iterative pruning techniques [34]. Recent research has explored the efficacy of various pruning methods, offering valuable insights to enhance model compression techniques [54]. Notably, DepGraph [12] introduced a novel method for general structural pruning of arbitrary architectures, efficiently removing coupled parameters for model acceleration. The results demonstrate its superior performance compared to many other techniques.
### Activation Sparsity
Another crucial sphere of inquiry revolves around exploiting the inherent sparsity present within neural network feature maps, particularly in the context of computer vision applications. The induction of activation sparsity stands out as a pivotal technique for latency reduction, providing a synergistic complement to weight pruning strategies. Sparsity is naturally present in feature maps due to the presence of ReLU-like activation functions which force feature maps
to become zero when their values fall below certain thresholds [33, 36].
The majority of efforts in the literature have been directed towards harnessing activation sparsity through _data-dependent_ mechanisms, tightly linked to input complexity. This strategy entails an informed masking approach, where the sparsity pattern is dynamically generated based on the distribution of less informative pixels within the input samples. Consequently, a distinct sparsity pattern is generated for each input. Some of these techniques necessitate architectural adjustments for on-the-fly pattern generation at runtime [52, 49, 8]. Unfortunately, these requirements significantly hamper their effectiveness when deployed on resource-constrained devices. As a result of these constraints, many of these works often lack real-world hardware validation or predominantly demonstrate latency improvements on higher-performance hardware configurations. For instance, the efficacy of sparsity has been pronounced in GPU deployment scenarios, yielding impressive latency enhancements such as up to \(1.88\times\) acceleration on a ResNet50 architecture using a _Mali_ GPU [48]. Similarly, the work by Xu et al. [55] tailored custom kernels for Nvidia GPUs, resulting in performance acceleration of \(3\)-\(4\times\).
In more recent investigations, novel regularization strategies have emerged to induce activation sparsity featuring a regular and consistent pattern, regardless of varying input samples (_data-free_ strategies). Georgiadis et al. [13] proposed to combine sparsity, quantization, and entropy encoding of activation maps to achieve up to \(1.6\times\) inference acceleration and up to \(6\times\) reduction of the memory footprint for architectures like InceptionV3 and MobileNetV1. Kurtz et al. [33] introduced a new regularization technique and threshold-based sparsification based on a parameterized activation function to maximize sparsity with minimal accuracy drop. While these works are the most similar to our approach, they predominantly emphasize unstructured sparsity among zeroed elements. As a consequence, these model acceleration benefits remain confined to dedicated sparse inference engines like DeepSparse [33].
### Low-Rank GEMM
The widely adopted im2col-based General Matrix Multiply (GEMM) technique converts feature maps into column-wise matrices. This transformation paves the way for streamlined matrix multiplication with weight matrices, thus fostering parallel computations and refining the convolutional operations. Moreover, the low-rank GEMM approach focuses on reducing the number of rows (or columns) in one of the two matrices, aiming to decrease computational complexity and memory demands. Dong et al. [8] devised a trainable module learning collaborative kernels to selectively skip activation pixels during computation, yielding a \(1.2\times\) speedup. Their analysis focused on two models and relatively simple datasets. In the context of video processing, the Skip-conv network [18] leverages residual images, creating sparsity exploited by low-rank GEMM. This approach suits moving objects, producing notable sparsity. Liu et al. [37] applied sparse adaptive inference to super-resolution; their approach is the most similar to ours, but tailors low-rank GEMM to the specific patches crucial in super-resolution tasks.
## 3 Methodology
GEMM-based implementation of the convolution operation is typically favored over the direct one as GEMM enables faster and more efficient matrix operations, making
Figure 1: Illustration of the proposed activation sparsity pattern in both tensor and im2col spaces.
it a preferred choice for deep learning inference engines. Reducing the rank of the matrices in GEMM operations is generally directly correlated with faster computation, especially on low-power CPUs. Our proposed technique aims to reduce the rank of the input activation matrix (activation feature map in the im2col space) to speed up model inference. This is pursued by inducing semi-structured sparsity in the network at training time which will be exploited through lower-rank GEMMs at inference time.
Figure 1 shows the convolution-as-GEMM implementation for convolutional layers, where both weights (green) and activations (blue) are unfolded respectively from 4-D and 3-D tensors to \(2\)-D matrices. The picture shows the standard convolution operation both in the tensor space (i.e., the standard space before the reshaping) and in the im2col space. Each of the \(n\) filters is reshaped into a row of \(k^{2}c\) size, where \(k\) is the kernel size and \(c\) is the number of channels. In the same way, the input feature map is reshaped into a \(k^{2}c\times z\) matrix, where each column is composed of all the pixels of the input sliding window (\(k^{2}c\)). The number of columns \(z\) depends on the convolution parameters (e.g., stride, padding, and dilation values). Then a standard matrix multiplication of weights and activation matrices is computed to generate an \(n\times z\) output matrix.
In order to reduce the rank of the activation matrix, a subset \(s<z\) of columns needs to be removed. These columns correspond to elements covered by the sliding local tiles (covering all channels) used during the convolution in the tensor space. To remove the columns at compute time, during each convolution, a subset \(s\) of the sliding local tiles needs to be skipped: a binary mask with a im2col-based pattern is used to apply hard thresholding to the activation tensors, where the \(s\) sparse columns of the activation matrix will be directly skipped during inference. In the two following subsections, we show how to induce (at training time) and how to exploit (at inference time) such semi-structured activation sparsity.
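The effect of skipping sparse columns can be illustrated with a small NumPy sketch of the reduced-rank GEMM; this illustrates the concept only and is not the XNNPACK implementation described in Section 3.2.

```python
import numpy as np

def sparse_conv_gemm(W_mat, X_cols, col_mask):
    # W_mat:    (n, k*k*c) reshaped filters.
    # X_cols:   (k*k*c, z) im2col activation matrix.
    # col_mask: boolean (z,), True for the z - s columns that are kept.
    kept = np.flatnonzero(col_mask)
    Y = np.zeros((W_mat.shape[0], X_cols.shape[1]), dtype=X_cols.dtype)
    Y[:, kept] = W_mat @ X_cols[:, kept]   # GEMM on rank z - s; the masked
    return Y                               # columns stay zero in the output
```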
### Training
To induce activation sparsity with the im2col pattern, we need to group activations in the tensor space according to their final position after the im2col reshaping. We consider this pattern semi-structured, as it is unstructured in the \(width\times height\) space (the spatial dimensions of the feature map) but structured across the channel dimension.
Pruning activations with this pattern is a more delicate procedure compared to standard unstructured weight pruning, as the elements of the activation feature map cannot be directly removed from the model. The sparsified elements in the activations for one convolutional window/tile (i.e., one im2col column) could be kept dense (unmasked) for the next windows/tiles. Figure 2 demonstrates this concept for a case when a single window (tile) is selected to be sparsified (masked). In this case, the pixels \(\{A,B,C,D\}\) are dropped from the computation (including all the pixels/elements with the same \((width,height)\) coordinates in the other channels). This results in the first column of the im2col matrix becoming zero, which reduces the rank of the matrices to be multiplied. However, dropping (masking) this block from the feature map altogether should also affect the second column of the matrix, which is not selected to be pruned. For this reason, the pixels \(B\) and \(F\) will be masked for the first column but will be kept non-zero in the second one.
Figure 2: Example of the im2col procedure: input activations (left) and the activation matrix after transformation (right). Note that masking (highlighted in black) a sliding tile of the convolution affects only a single column in the reshaped matrix. In the first column, pixels \(B\) and \(F\) are masked, while they remain non-zero in the second column.

Introducing activation sparsity in deep neural networks for computer vision is challenging due to the varying positions of the regions of interest in images. Uniformly enforcing sparsity with a fixed pattern across data samples can lead to information loss for some images and retention for others. Achievable sparsity levels (while keeping accuracy degradation low) are often limited compared to weight sparsity, due to the dynamic and context-dependent nature of activation patterns across input images. It has been shown that inducing structured sparsity through sampling random masks [14] can act as a regularizer that enhances the model's generalization and robustness. We found that sampling random masks during training reduces the accuracy loss when the sparsity rates are kept relatively low; the random ranking mechanism ensures that the selection of pixels to be masked is unbiased, contributing to the robustness of the training process. We propose a novel custom random masking approach, which randomly selects a percentage of pixels from the input image to be masked. The resulting input image mask is then propagated consistently across all layers (employing pooling operations when downsampling is necessary). By propagating this initial random sparse pattern layer-to-layer, we ensure the preservation of the same masking structure throughout the network. This guarantees translation invariance across the feature maps of different layers, even when they have varying resolutions. The proposed custom random mask sampling is a crucial aspect of our training procedure, as it helps the model avoid overfitting to specific patterns and encourages more generalized learning, while limiting accuracy loss. The generated binary masks, specific to each sparsity level, enable the model to adapt its weights during training, effectively promoting the benefits of sparsity while maintaining crucial representational capacity. The training process comprises three key stages: (i) initially, a few dense pretraining epochs are performed; (ii) subsequently, our masking technique is applied gradually according to a schedule, incrementing the sparsity rate until the desired target [58] is reached; (iii) finally, in the mask freezing stage, the binary masks for each layer are fixed for the rest of the training process, allowing the model to recover from accuracy loss through more precise updates.
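A sketch of the stage (ii) schedule is given below; the paper cites [58] for the target schedule, so we assume the cubic ramp of Zhu and Gupta, with the dense and freezing boundaries set to 10% and 90% of the training steps as in Subsection 4.1:

```
def sparsity_at_step(t, total_steps, target_s,
                     dense_until=0.10, freeze_from=0.90):
    # Gradual sparsity schedule in the style of [58]: zero during dense
    # pretraining, cubic ramp-up to target_s, then constant while the
    # masks are frozen.
    t0, t1 = dense_until * total_steps, freeze_from * total_steps
    if t < t0:
        return 0.0
    if t >= t1:
        return target_s
    progress = (t - t0) / (t1 - t0)
    return target_s * (1.0 - (1.0 - progress) ** 3)
```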
Algorithm 1 outlines our sparse training pipeline. The algorithm takes the fixed sparsity percentage \(s\) as an input and returns the trained model with a binary constant mask \(mask\). The pruning scheduler (line \(3\)) controls the switch between dense (line \(8\)) and sparse forward steps (line \(6\)). The updateMask (line \(4\)) scheduler sets when to update or freeze the masks through the getMask function (line \(5\)). This mask is used by maskedForward to induce the sparsity in the feature maps. At the end of the training, both the model and the masks are returned (line \(11\)). It needs to be highlighted that model weights are kept fully dense, and no weights are pruned. The getMask function plays a critical role in our sparse training pipeline, as it is responsible for generating a different binary mask for each forward step. At first, a random \(2\)-D score is generated according to the input image resolution (line \(13\)). This is propagated through the layers, downscaling the resolution when needed (lines \(15\)-\(16\)). At last, the function ranks the model's score and generates the binary mask (lines \(17\)-\(19\)).
```
1  Function main(model, steps, s):
2      for t in steps do
3          if pruneStep(t) then
4              if updateMask(t) then
5                  mask = getMask(model, s)
6              maskedForward(model, mask)
7          else
8              forward(model)
9          end if
10         backward(model)
11     return model, mask
12 Function getMask(model, s):
13     score = randomScore2d(model.input_res)
14     for layer in model do
15         ratio = model.input_res // layer.res
16         layer_score = avg_pool2d(score, ratio)
17         idx = rankPixels(layer_score, s)
18         layer.mask = ones_like(layer_score)
19         layer.mask[idx] = 0
20     end for
21     return mask
```
**Algorithm 1** Sparse Training
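In plain Python, the mask-generation logic of Algorithm 1 can be sketched as follows; the function and variable names are ours, and PyTorch pooling stands in for the downscaling step:

```
import torch
import torch.nn.functional as F

def get_masks(input_res, layer_resolutions, sparsity):
    # Sketch of getMask: sample one random 2-D score at input resolution,
    # propagate it to every layer via average pooling, and zero out the
    # `sparsity` fraction of spatial positions with the lowest scores
    # (shared across all channels of that layer).
    score = torch.rand(1, 1, input_res, input_res)
    masks = {}
    for name, res in layer_resolutions.items():
        layer_score = F.adaptive_avg_pool2d(score, res).flatten()
        k = int(sparsity * layer_score.numel())
        idx = torch.topk(layer_score, k, largest=False).indices
        mask = torch.ones_like(layer_score)
        mask[idx] = 0.0
        masks[name] = mask.view(1, 1, res, res)  # broadcasts over batch/channels
    return masks

# Hypothetical use inside maskedForward:
#   x = conv(x) * masks["conv3"]  # zero whole spatial positions for this layer
```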
### Inference
To accelerate the processing of models with sparse activation maps, we implemented custom modifications to the XNNPACK [17] inference engine. We used TensorFlow Lite (TFLite) [16] built from source with XNNPACK [17] as a delegate. Given a TFLite model, a binary mask, and layer-wise sparsity levels as inputs, our inference engine computes the convolution over sparse activations. Our modifications are specific to convolutional layers only. The full pipeline consists of three main stages: (i) custom im2col reshaping, (ii) dense GEMM, and (iii) custom post-processing of the dense GEMM output.
The first step consists of reshaping the activation tensors into a \(2\)-D matrix, as shown in Fig. 1. Since the XNNPACK [17] im2col routine is based on an indirection buffer [9], we developed a custom transformation to facilitate the skipping of rows of the indirection matrix. After this, the compute range of the GEMM is downsized to \(output\_size\ -\ (sparsity\ *\ output\_size)\) to enable a low-rank GEMM in the following step. In the second stage, a standard GEMM is employed, utilizing the low-rank matrix of activations. However, the subsequent layer assumes dense activations, necessitating an efficient post-processing stage: zeroed elements are inserted into the GEMM output based on the binary masks used in the initial stage. These modifications follow a consistent pattern across different inference engines, all designed to work with commonly used general-purpose processors. For more detailed information on the runtime modifications, please refer to Appendix A.
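As an illustration of the post-processing stage, the zero re-insertion can be sketched as follows (a NumPy mock-up of the idea, not the actual XNNPACK C code):

```
import numpy as np

def scatter_gemm_output(packed_out, column_mask):
    # Re-insert zero columns into the low-rank GEMM output so that the
    # next layer sees a dense activation layout.
    # packed_out: (n, z - s) GEMM result over the kept columns only.
    # column_mask: boolean vector of length z, True where a column was kept.
    n, z = packed_out.shape[0], column_mask.size
    dense_out = np.zeros((n, z), dtype=packed_out.dtype)
    dense_out[:, column_mask] = packed_out
    return dense_out
```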
## 4 Results
### Training Setup
The proposed pipeline was validated on several image classification and object detection datasets, including CIFAR100, Flowers102, Food101, and ImageNet for classification and PASCAL VOC and Global Wheat for object detection (further details in Appendix B). We have performed experiments on ResNet18, ResNet50, and MobileNetV2 architectures for the image classification task, and used YOLOv5n [29] as a base architecture for the object detection experiments. Note that a few of the base architectures we
used (e.g., MobileNetV2, YOLOv5n) were initially designed as lightweight efficient architectures, which makes it more challenging to obtain competitive latency speedup with low accuracy degradation.
For image classification, we used the training code provided by Ultralytics [29] with default values of hyperparameters except for the number of epochs (Adam optimizer, initial learning rate \(10^{-4}\), \(400\) epochs, batch size \(64\)). ImageNet pre-trained weights were used for model initialization for both the dense baseline as well as for sparse training. We set the dense training stage to stop at \(10\%\) of the training steps and the freezing stage to start at \(90\%\) of the steps. For object detection experiments, the training code provided by Ultralytics [29] was also used with default values of hyperparameters. COCO pre-trained weights were used to initialize the models both for the dense baseline as well as for sparse training.
### Sparse Model Deployment
The latency speedup from using semi-structured activation sparsity was measured on a Raspberry Pi 4B [15] device, featuring a quad-core ARM Cortex-A72 processor operating at \(1.5\)GHz, with \(4\)GB of RAM. We ran Ubuntu 18.04 \(64\)-bit OS on this platform and GNU gcc version \(11.0\) for compilation. For deployment, we used TFLite [16] inference engine built with XNNPACK [17] delegate with custom modifications for sparse inference.
### Sparse vs. Dense Model Performance
In this section, we evaluate the efficacy of the semi-structured activation sparsity approach for enhancing DNN speed, prioritizing high-speed improvements at the expense of marginal accuracy degradation.
#### 4.3.1 Low Accuracy Loss Regime
Using the same sparse training procedure, we induced activation sparsity at three different levels, \(S=\{10\%,20\%,30\%\}\). Table 1 shows that the accuracy loss is low (under \(2.5\%\)) for the first two sparsity levels in image classification tasks, while it is around \(3\%\) or more for the highest sparsity rate chosen (\(30\%\)), depending on the architecture. ResNet models are found to be more resilient to activation sparsity than MobileNetV2, with an average accuracy loss of \(1.82\%\) versus \(2.72\%\) for MobileNetV2. On the more challenging ImageNet dataset (Table 2), ResNet18 at a \(10\%\) sparsity rate provides almost the same accuracy (\(-0.05\%\)) as its dense counterpart. For clarity, we include further details on the training procedure in Appendix B. To evaluate the generalization capabilities of our proposed compression pipeline, we carried out experiments on the object detection task using the YOLOv5n model. The results obtained on the VOC and Global Wheat datasets are summarized in Table 3, showcasing the impact of compression on accuracy. Notably, the results for object detection are comparable to those for image classification, with limited mAP\({}_{50}\) degradation on a simpler dataset (Global Wheat) and higher accuracy loss on a larger-scale task (VOC). These findings highlight the effectiveness of our compression techniques in preserving model accuracy across different tasks.
#### 4.3.2 High Speedup Regime
In our findings, we observe a consistent trend where activation sparsity contributes to notable and reliable speed improvements throughout the network layers, with the magnitude of the speedup roughly proportional to the degree of activation sparsity achieved. To visually depict and quantify
\begin{table}
\begin{tabular}{c c|c c c} \hline \hline
**Dataset** & **Sparsity** & **ResNet18** & **ResNet50** & **MobileNetV2** \\ \hline
\multirow{4}{*}{Flowers102} & 0\% & 92.02 & 92.50 & 92.57 \\
 & 10\% & 91.20 (-0.80) & 91.80 (-0.70) & 91.46 (-1.11) \\
 & 20\% & 90.25 (-1.75) & 91.02 (-1.48) & 90.11 (-2.46) \\
 & 30\% & 88.89 (-3.22) & 90.13 (-2.37) & 88.52 (-4.05) \\ \hline
\multirow{4}{*}{Food101} & 0\% & 82.20 & 86.17 & 77.20 \\
 & 10\% & 81.07 (-1.13) & 85.10 (-1.07) & 82.35 (-1.77) \\
 & 20\% & 80.27 (-1.93) & 84.10 (-2.07) & 81.04 (-1.32) \\
 & 30\% & 78.59 (-3.61) & 82.40 (-3.77) & 79.32 (-4.80) \\ \hline
\multirow{4}{*}{CIFAR100} & 0\% & 77.20 & 78.00 & 73.10 \\
 & 10\% & 76.37 (-0.83) & 77.26 (-0.74) & 71.30 (-1.80) \\
 & 20\% & 75.30 (-1.90) & 75.80 (-2.20) & 70.57 (-2.53) \\
 & 30\% & 74.11 (-3.09) & 74.78 (-3.22) & 68.60 (-4.50) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Top-1 accuracy results (\%) for different architectures on the Flowers102, Food101, and CIFAR100 datasets. The relative inference speedups are reported in Fig. 3.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Sparsity** & **VOC** & **Global Wheat** \\ \hline \hline
0\% & 80.20 & 96.38 \\ \hline
10\% & 78.08 (-2.12) & 96.00 (-0.38) \\
20\% & 76.63 (-3.57) & 95.49 (-0.89) \\
30\% & 74.13 (-6.07) & 94.80 (-1.58) \\ \hline \hline \end{tabular}
\end{table}
Table 3: mAP\({}_{50}\) results for YOLOv5n on the VOC and Global Wheat datasets. The relative inference speedups are reported in Fig. 3.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Sparsity** & **ResNet18** & **MobileNetV2** \\ \hline \hline
0\% & 70.53 & 72.19 \\
10\% & 70.48 (-0.05) & 70.43 (-1.76) \\
20\% & 69.42 (-1.11) & 69.94 (-2.25) \\
30\% & 67.88 (-2.65) & 67.92 (-4.27) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Top-1 accuracy results (\%) for ResNet18 and MobileNetV2 on the ImageNet dataset. The relative inference speedups are reported in Fig. 3.
these results, we present Fig. 3, which illustrates the end-to-end speedup outcomes for four distinct models: ResNet18, ResNet50, MobileNetV2, and YOLOv5n.
ResNet18 exhibits a nearly linear relationship between the sparsity percentage and the speedup across all sparsity levels. ResNet50, MobileNetV2, and YOLOv5n, due to their larger number of layers and higher complexity, experience a slightly diminished speedup compared to ResNet18. This reduction can be attributed to the additional custom im2col and post-processing transformations, which offset part of the gains obtained from the reduced GEMM computations. For ResNet50, the speedup achieved is approximately \(1.75\times\), while MobileNetV2 and YOLOv5n attain speedups of around \(1.44\times\) and \(1.46\times\), respectively, all at \(50\%\) sparsity.
In summary, our findings indicate that activation sparsity within the network layers leads to consistent and significant improvements in inference latency. The overall trend suggests that activation sparsity offers a valuable approach to enhancing the efficiency of deep learning models across a variety of architectures.
### Ablation Study
To comprehensively evaluate the efficacy of our proposed sparse training scheme, we conducted two ablation studies focusing on the custom features involved to reduce accuracy loss: mask propagation and mask freezing. For both studies, we trained ResNet18 on the Flowers102 dataset using the same hyperparameters described in the Subsection 4.1.
Mask Propagation. Figure 4 compares the accuracy and sparsity achieved by the ResNet18 model with and without mask propagation. The plot clearly demonstrates the advantage of the mask propagation method, revealing a significant improvement in the model's resilience to sparsification. Mask propagation provides an accuracy boost of up to \(1.28\%\) at a \(30\%\) sparsity rate and an average of \(0.83\%\) across the three tested sparsity levels.
Mask Freezing. The mask freezing approach ensures that the binary masks used for sparsity remain fixed during the last training epochs, thereby allowing the model to recover from accuracy loss more effectively through precise updates. This mechanism, widely used in the literature [58], is crucial for our training scheme, where the masks are randomly changed after each step. Figure 4 shows the clear advantage of integrating the mask freezing method into the training process: the model trained with mask freezing achieves up to \(0.96\%\) higher accuracy than the one without.
### Weight Pruning vs. Activation Sparsity
In this section, we conduct a comprehensive comparison of our activation sparsity method with a state-of-the-art structured weight pruning technique represented by DepGraph [12]. By utilizing DepGraph as a robust baseline, we aim to thoroughly assess the effectiveness and potential of our activation sparsity approach relative to leading compression techniques. While the work by Kurtz et al. [33] appears conceptually aligned with our approach, we refrain from a direct comparison because it requires a custom sparse kernel to achieve the desired latency boost. Moreover, their research primarily targets higher-performance platforms, such as AWS C5.12xlarge CPUs and NVIDIA K80 GPUs, rather than embedded CPUs, limiting the scope of a direct comparison with our solution.

Figure 4: Ablation results for mask propagation and mask freezing for ResNet18 on the Flowers102 dataset.

Figure 3: Speed-up vs. sparsity rate for the ImageNet, CIFAR100, and VOC datasets on different architectures. The Flowers102 and Food101 speed-up results are equal to those of ImageNet.
Since structured weight pruning and activation sparsity can be applied independently, we applied activation sparsity to models pruned with DepGraph to measure the impact on performance. Figure 5 depicts the latency vs. accuracy trade-off achievable by structured pruning with and without our proposed activation sparsity technique. We performed these experiments on ResNet18 with the Flowers102 dataset. The pruned models were obtained using the original codebase provided by the DepGraph authors with different values of the speedup proxy parameter (MACs count ratio) from \(2.0\times\) to \(10.0\times\) [12]. Then, we induced activation sparsity in the pruned models at four different sparsity levels (\(5\%\), \(10\%\), \(20\%\), \(30\%\)), using the Ultralytics training code for image classification [29]. The same training code was also used to further finetune the pruned models (without sparsity) for a fair comparison. The experimental results show that while structured pruning alone is Pareto-optimal at lower speedup rates, a combination of both techniques becomes more favorable beyond a \(3.5\times\) speedup. Furthermore, while structured pruning offers high scaling ability, activation sparsity acts as a fine-grained control knob in the accuracy vs. latency solution space. Latency measurements carried out on the Raspberry Pi 4B [15] show a significant difference between the real and theoretical speedups of pruned models. A detailed table with all the speedups is available in Appendix B.
Activation sparsity applied to pruned models shows notable performance improvements, especially at high pruning ratios. This behavior can be attributed to the observation that models pruned beyond a certain limit suffer from reduced capacity and degraded performance. In such cases, activation sparsity proves effective because it capitalizes on zeros in the activation maps, which are independent of the model's capacity, leading to optimal results.
## 5 Conclusion
This paper presents an efficient DNN compression pipeline leveraging semi-structured activation sparsity to reduce inference latency. The proposed training procedure induces activation sparsity through the propagation and freezing of random spatial masks, being cognizant of element positions during GEMM-based convolutions. Additionally, we provide an illustrative example of a practical runtime modification integrated into XNNPACK to measure latency speedup on a Raspberry Pi 4B device. Our experimental results showcase the impact of activation sparsity on accuracy and speedup across diverse test cases encompassing image classification and object detection tasks. Furthermore, we demonstrate the potential to combine our compression pipeline with other structured pruning algorithms, offering enhanced accuracy-speed trade-offs, especially for high compression ratios. In future work, we plan to explore advanced regularization techniques to determine optimal sparsity levels across layers.
Figure 5: Latency-accuracy trade-off distribution for structured weight pruning with and without activation sparsity (ResNet18, Flowers102). A detailed table with all the numerical values is available in Appendix B. |
2303.17823 | An interpretable neural network-based non-proportional odds model for
ordinal regression | This study proposes an interpretable neural network-based non-proportional
odds model (N$^3$POM) for ordinal regression. N$^3$POM is different from
conventional approaches to ordinal regression with non-proportional models in
several ways: (1) N$^3$POM is defined for both continuous and discrete
responses, whereas standard methods typically treat the ordered continuous
variables as if they are discrete, (2) instead of estimating response-dependent
finite-dimensional coefficients of linear models from discrete responses as is
done in conventional approaches, we train a non-linear neural network to serve
as a coefficient function. Thanks to the neural network, N$^3$POM offers
flexibility while preserving the interpretability of conventional ordinal
regression. We establish a sufficient condition under which the predicted
conditional cumulative probability locally satisfies the monotonicity
constraint over a user-specified region in the covariate space. Additionally,
we provide a monotonicity-preserving stochastic (MPS) algorithm for effectively
training the neural network. We apply N$^3$POM to several real-world datasets. | Akifumi Okuno, Kazuharu Harada | 2023-03-31T06:40:27Z | http://arxiv.org/abs/2303.17823v4 | # An interpretable neural network-based non-proportional odds model for ordinal regression
###### Abstract
This study proposes an interpretable neural network-based non-proportional odds model (N\({}^{3}\)POM) for ordinal regression. In the model, the response variable can take continuous values, and the regression coefficients vary depending on the predicting ordinal response. Contrary to conventional approaches, where the linear coefficients of regression are directly estimated from the discrete response, we train a non-linear neural network that outputs the linear coefficients by taking the response as its input. By virtue of the neural network, N\({}^{3}\)POM may have flexibility while preserving the interpretability of the conventional ordinal regression. We show a sufficient condition under which the predicted conditional cumulative probability (CCP) locally satisfies the monotonicity constraint over a user-specified region in the covariate space. We also provide a monotonicity-preserving stochastic (MPS) algorithm for adequately training the neural network.
**Keywords:** Continuous ordinal regression, Non-proportional odds model, Neural network
## 1 Introduction
Ordinal regression modeling treats the response as an ordinal scale and aims to understand the relationship between the response order and the covariates (Agresti, 2010). In the context of ordinal regression modeling, response variables are typically assumed to be ordinal and discrete (e.g., stage of cancer, scores of wine quality). While standard regression-based approaches are mainly interested in the actual value of the response, this study focuses on thresholds of the response; that is, the probability of the response being less than or equal to a specific threshold, as a function of the covariates.
Let \(d,J\in\mathbb{N}\) and consider \((G,X)\), a pair consisting of a discrete ordinal response variable \(G\in\{1,2,\ldots,J\}\) and its covariate \(X\in\mathbb{R}^{d}\). A standard model for analyzing such a threshold is the proportional odds model (POM):
\[\text{logit}(\mathbb{P}_{\text{POM}}(G\leq j\mid X=\mathbf{x}))=\alpha_{j}+\langle \mathbf{\beta},\mathbf{x}\rangle\quad(j\in\{1,2,\ldots,J-1\}),\]
where \(\text{logit}(z)=\log\frac{z}{1-z}\) is the logit function, and \(\alpha_{1},\alpha_{2},\ldots,\alpha_{J-1}\in\mathbb{R}\), \(\mathbf{\beta}\in\mathbb{R}^{d}\) are parameters to be estimated. This model satisfies the parallelism assumption (McCullagh, 1980), which states that the regression coefficients are equal across all thresholds. However, the parallelism assumption is often considered to be violated (see, e.g., Long and Freese (2006)). For example, in restaurant ratings, basic factors such as good hygiene may be considered important for lower scores, while factors like ingredient origin and wine selection may be more significant for higher scores. See Williams (2016) for a discussion of the violation of the parallelism assumption in a real-world situation.
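As a small numerical illustration of the POM (our sketch, not code from the paper), the conditional cumulative probabilities \(\mathbb{P}_{\text{POM}}(G\leq j\mid X=\mathbf{x})\) can be computed as:

```
import numpy as np

def pom_cumulative_probs(x, alphas, beta):
    # P(G <= j | X = x) for j = 1, ..., J-1 under the proportional odds
    # model: logit(P(G <= j | x)) = alpha_j + <beta, x>.
    logits = np.asarray(alphas) + np.dot(beta, x)
    return 1.0 / (1.0 + np.exp(-logits))  # inverse logit, elementwise

# Toy example (J = 4): the alphas must be increasing so that the
# cumulative probabilities are monotone in j.
alphas = [-1.0, 0.0, 1.5]
beta = np.array([0.8, -0.5])
print(pom_cumulative_probs(np.array([1.0, 2.0]), alphas, beta))
```

Note the single coefficient vector \(\mathbf{\beta}\) shared across all thresholds; the non-proportional model relaxes exactly this restriction by letting the coefficients depend on the response.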
2309.06975 | Predicting Expressibility of Parameterized Quantum Circuits using Graph
Neural Network | Parameterized Quantum Circuits (PQCs) are essential to quantum machine
learning and optimization algorithms. The expressibility of PQCs, which
measures their ability to represent a wide range of quantum states, is a
critical factor influencing their efficacy in solving quantum problems.
However, the existing technique for computing expressibility relies on
statistically estimating it through classical simulations, which requires many
samples. In this work, we propose a novel method based on Graph Neural Networks
(GNNs) for predicting the expressibility of PQCs. By leveraging the graph-based
representation of PQCs, our GNN-based model captures intricate relationships
between circuit parameters and their resulting expressibility. We train the GNN
model on a comprehensive dataset of PQCs annotated with their expressibility
values. Experimental evaluation on a four thousand random PQC dataset and IBM
Qiskit's hardware efficient ansatz sets demonstrates the superior performance
of our approach, achieving a root mean square error (RMSE) of 0.03 and 0.06,
respectively. | Shamminuj Aktar, Andreas Bärtschi, Abdel-Hameed A. Badawy, Diane Oyen, Stephan Eidenbenz | 2023-09-13T14:08:01Z | http://arxiv.org/abs/2309.06975v1 | # Predicting Expressibility of Parameterized Quantum Circuits using Graph Neural Network
###### Abstract
Parameterized Quantum Circuits (PQCs) are essential to quantum machine learning and optimization algorithms. The expressibility of PQCs, which measures their ability to represent a wide range of quantum states, is a critical factor influencing their efficacy in solving quantum problems. However, the existing technique for computing expressibility relies on statistically estimating it through classical simulations, which requires many samples. In this work, we propose a novel method based on Graph Neural Networks (GNNs) for predicting the expressibility of PQCs. By leveraging the graph-based representation of PQCs, our GNN-based model captures intricate relationships between circuit parameters and their resulting expressibility. We train the GNN model on a comprehensive dataset of PQCs annotated with their expressibility values. Experimental evaluation on a four thousand random PQC dataset and IBM Qiskit's hardware efficient ansatz sets demonstrates the superior performance of our approach, achieving a root mean square error (RMSE) of 0.03 and 0.06, respectively.
Parameterized Quantum Circuits (PQCs), Expressibility, Graph Neural Networks (GNNs)
## I Introduction
Parameterized Quantum Circuits (PQCs) are essential in leveraging the capabilities of quantum computers. They provide a flexible framework for solving intricate problems by optimizing tunable parameters within a quantum circuit. A PQC is a sequence of gates applied to a set of qubits, with some gates allowing for adjustable classical parameters that can be varied during circuit execution to prepare a quantum state. The importance of PQCs has led to the development of new ansatz designs, e.g., problem-specific PQCs and hardware-efficient PQCs. Researchers have proposed several qualitative metrics, i.e., expressibility [5], entangling capability [5], and trainability [3], to estimate the quality and usefulness of PQCs.
The expressibility of a PQC [5] determines its ability to explore states in the Hilbert space. Expressibility is measured by computing the Kullback-Leibler (KL) divergence [2] between the estimated fidelity distribution \(P_{PQC}(F;\theta)\), obtained from the fidelities of sampled pairs of parameterized quantum states, and the maximally expressive fidelity distribution \(P_{Haar}(F)\) resulting from a Haar-random unitary [1]:
\[Expr=D_{KL}(P_{PQC}(F;\theta)||P_{Haar}(F)) \tag{1}\]
Lower KL divergence indicates better expressibility, i.e., a better ability to explore a wide range of states in the Hilbert space. The fidelity distribution required for computing expressibility measures the overlap between pairs of quantum states prepared using different sets of parameters. Measuring expressibility directly therefore requires many samples, which are time-consuming to acquire; for instance, Sim et al. employed 5000 fidelity samples per four-qubit circuit. This reliance on a large number of samples poses challenges due to the time and computational resources required.
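For reference, the statistical estimation procedure described above can be sketched as follows; the closed-form Haar fidelity density \(P_{Haar}(F)=(N-1)(1-F)^{N-2}\), with \(N=2^{n}\) for \(n\) qubits, follows [1], while the bin count and the `sample_fidelity` helper are our assumptions:

```
import numpy as np

def estimate_expressibility(sample_fidelity, n_qubits,
                            n_samples=5000, n_bins=75):
    # Statistical estimate of Eq. (1): KL divergence between a histogram of
    # sampled state fidelities and the Haar fidelity distribution.
    # `sample_fidelity()` is an assumed user-supplied function returning
    # |<psi(theta)|psi(theta')>|^2 for randomly drawn parameter pairs.
    n_dim = 2 ** n_qubits
    fids = np.array([sample_fidelity() for _ in range(n_samples)])
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    p_pqc, _ = np.histogram(fids, bins=edges)
    p_pqc = p_pqc / p_pqc.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    p_haar = (n_dim - 1) * (1.0 - centers) ** (n_dim - 2)
    p_haar = p_haar / p_haar.sum()
    eps = 1e-12  # guard against empty bins in the log
    return float(np.sum(p_pqc * np.log((p_pqc + eps) / (p_haar + eps))))
```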
To address these challenges, we propose a novel approach utilizing a Graph Neural Network (GNN) [4] to predict the expressibility of PQCs shown in Figure 1. GNNs have demonstrated exceptional capabilities in capturing complex relationships within graph-structured data, making them well-suited for analyzing the intricate structure of PQCs. We train and evaluate our GNN model on a comprehensive dataset of PQCs to demonstrate the effectiveness of our approach. Evaluation on around four thousand random PQCs and IBM Qiskit's hardware efficient ansatz sets demonstrate the high accuracy of the model, RMSE of 0.03 and 0.06, respectively.
## II Proposed Framework
### _Dataset Generation_
Fig. 1: Our proposed framework for predicting the expressibility of a PQC. We derive a graph representation of the PQC with input, gate, and output nodes. Nodes are encoded with feature vectors representing node type and utilized qubits, edges capture information flow within the PQC, and global features encompass circuit characteristics.

In our study, we generate random parameterized quantum circuits using a qubit gate set of X, SX, RX, RY, RZ, and CX gates. The first layer consists of random single-qubit gates on each qubit, followed by a block of random CX entanglement. The entanglement is established by selecting two random qubits and applying a CX gate between them; the length of the entanglement block, i.e., the number of consecutive CX gates, is also chosen randomly. We then repeat the layers of single-qubit gates and CX entanglement multiple times to deepen the circuit, and conclude with another layer of single-qubit gates. Overall, we generated 12,000 circuits, with a maximum of 4 qubits and a maximum depth of 40. Next, we compute the estimated fidelity distribution for each circuit using the quantum kernel method to measure the distance between pairs of parameterized quantum states. We run the experiments on the IBM QASM simulator and then compute the expressibility values of the circuits using Equation 1. Additionally, we include 64 hardware-efficient ansatz circuits from Qiskit's RealAmplitudes family, with up to four qubits, up to 4 circuit repetitions, and different entanglement patterns, as a validation dataset.
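A minimal Qiskit sketch of this generator is given below; the number of blocks and the maximum entanglement length are illustrative parameters, not the exact values used to build the dataset:

```
import random
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

def random_pqc(n_qubits=4, n_blocks=3, max_cx=4):
    # Alternating layers of random single-qubit gates and random-length
    # CX entanglement blocks, ending with a final single-qubit layer.
    qc = QuantumCircuit(n_qubits)
    p_count = 0

    def single_qubit_layer():
        nonlocal p_count
        for q in range(n_qubits):
            gate = random.choice(["x", "sx", "rx", "ry", "rz"])
            if gate in ("rx", "ry", "rz"):
                theta = Parameter(f"t{p_count}")
                p_count += 1
                getattr(qc, gate)(theta, q)
            else:
                getattr(qc, gate)(q)

    for _ in range(n_blocks):
        single_qubit_layer()
        for _ in range(random.randint(1, max_cx)):  # random entanglement length
            a, b = random.sample(range(n_qubits), 2)
            qc.cx(a, b)
    single_qubit_layer()
    return qc
```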
### _Graph Neural Network (GNN) Model_
We utilize a GNN model to learn the complex relationship between PQCs and their expressibility. First, we extract a directed graph with a set of nodes, i.e., input, output, and gate nodes, and edge connectivity capturing the flow of information within the PQC. Each node carries a one-hot encoded feature vector representing properties such as node type and the qubits it acts on. Each circuit also has a global feature set, i.e., circuit depth, width, number of parameterized gates, number of qubits, and counts of different gates, represented as a vector and fed to the GNN model through three fully connected (FC) layers. The architecture employs three SAGEConv layers to capture and process the local neighborhood information within the PQC graphs. Next, the global feature vector is concatenated with the aggregated node feature vector, and the combined representation is fed into the model. Finally, a regression layer predicts the expressibility value of the PQC.
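A PyTorch Geometric sketch of the described architecture follows; the hidden width and the mean-pooling aggregation are our assumptions, since the paper does not specify them here:

```
import torch
from torch import nn
from torch_geometric.nn import SAGEConv, global_mean_pool

class ExpressibilityGNN(nn.Module):
    def __init__(self, node_dim, global_dim, hidden=64):
        super().__init__()
        self.conv1 = SAGEConv(node_dim, hidden)
        self.conv2 = SAGEConv(hidden, hidden)
        self.conv3 = SAGEConv(hidden, hidden)
        self.global_mlp = nn.Sequential(       # three FC layers for global features
            nn.Linear(global_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        self.head = nn.Linear(2 * hidden, 1)   # regression layer

    def forward(self, x, edge_index, batch, global_feats):
        for conv in (self.conv1, self.conv2, self.conv3):
            x = torch.relu(conv(x, edge_index))
        graph_repr = global_mean_pool(x, batch)  # aggregate node features per graph
        g = self.global_mlp(global_feats)
        return self.head(torch.cat([graph_repr, g], dim=-1)).squeeze(-1)
```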
### _Training and Evaluation_
We use 80% of the randomly generated PQC dataset to train our GNN model. Before training, we normalize the node features across the dataset by subtracting the mean and dividing by the standard deviation. We train the model using the Adam optimizer for 300 epochs with a learning rate of \(10^{-4}\), a weight decay of \(10^{-6}\), and a ReduceLROnPlateau scheduler that reduces the learning rate by a factor of 0.1. We use a batch size of 2048, employ the Huber loss as our loss function, and save the model with the best validation loss. Figure 2 (left) shows the training and testing prediction loss as the number of epochs increases. Our trained model achieved an RMSE of 0.03 on our testing dataset, consisting of the remaining 20% of randomly generated PQCs. Figure 2 (right) shows scatter plots of the model-predicted expressibility vs. the true expressibility for our test dataset. Additionally, we evaluated our model on 64 circuits from IBM Qiskit's RealAmplitudes family and achieved an RMSE of 0.06. Figure 3 shows predicted vs. true expressibility for the 64 RealAmplitudes circuits. These results demonstrate the effectiveness of the GNN model in predicting the expressibility of PQCs.
Fig. 3: Predicted expressibility compared to true expressibility for the 64 Qiskit RealAmplitudes circuits, resulting in an overall RMSE of 0.06.
Fig. 2: _(Left)_: Prediction loss of training and testing datasets with the number of epochs during GNN model training shows the generalizability of the model. _(Right)_: Trained model’s predicted expressibility compared to the true expressibility for the testing dataset yielded an overall RMSE of 0.03
## III Conclusion
We present a GNN-based approach for predicting the expressibility of PQCs by treating the PQC as a graph. By leveraging the capabilities of GNNs, we effectively capture the intricate relationships between the structure of PQCs and their expressibility. Evaluation on the random PQC dataset and Qiskit's RealAmplitudes circuit sets demonstrates the accuracy of our expressibility prediction technique. Importantly, this approach significantly reduces the cost associated with computing the fidelity distribution over a large number of samples in expressibility computation.
|
2309.08569 | Local Differential Privacy in Graph Neural Networks: a Reconstruction
Approach | Graph Neural Networks have achieved tremendous success in modeling complex
graph data in a variety of applications. However, there are limited studies
investigating privacy protection in GNNs. In this work, we propose a learning
framework that can provide node privacy at the user level, while incurring low
utility loss. We focus on a decentralized notion of Differential Privacy,
namely Local Differential Privacy, and apply randomization mechanisms to
perturb both feature and label data at the node level before the data is
collected by a central server for model training. Specifically, we investigate
the application of randomization mechanisms in high-dimensional feature
settings and propose an LDP protocol with strict privacy guarantees. Based on
frequency estimation in statistical analysis of randomized data, we develop
reconstruction methods to approximate features and labels from perturbed data.
We also formulate this learning framework to utilize frequency estimates of
graph clusters to supervise the training procedure at a sub-graph level.
Extensive experiments on real-world and semi-synthetic datasets demonstrate the
validity of our proposed model. | Karuna Bhaila, Wen Huang, Yongkai Wu, Xintao Wu | 2023-09-15T17:35:51Z | http://arxiv.org/abs/2309.08569v2 | # Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach
###### Abstract
Graph Neural Networks have achieved tremendous success in modeling complex graph data in a variety of applications. However, there are limited studies investigating privacy protection in GNNs. In this work, we propose a learning framework that can provide node privacy at the user level, while incurring low utility loss. We focus on a decentralized notion of Differential Privacy, namely Local Differential Privacy, and apply randomization mechanisms to perturb both feature and label data at the node level before the data is collected by a central server for model training. Specifically, we investigate the application of randomization mechanisms in high-dimensional feature settings and propose an LDP protocol with strict privacy guarantees. Based on frequency estimation in statistical analysis of randomized data, we develop reconstruction methods to approximate features and labels from perturbed data. We also formulate this learning framework to utilize frequency estimates of graph clusters to supervise the training procedure at a sub-graph level. Extensive experiments on real-world and semi-synthetic datasets demonstrate the validity of our proposed model.
graph neural networks, local differential privacy, frequency estimation, learning from label proportions
## I Introduction
Graph data are ubiquitous in the modern world allowing graph-structured representation for complex data in the realm of social networks, finance, biology, and so on. Graph Neural Networks (GNNs) have been widely adopted in such domains to model the expressive nature of graph-structured data [1, 2]. GNNs rely on _message-passing_ mechanisms to propagate information between graph nodes and output embeddings that encode both node features and neighborhood features aggregated using graph adjacency information. These embeddings are used in predictive downstream tasks for meaningful applications such as drug discovery [3], traffic prediction [4], recommendation [5]. This widespread prevalence of GNNs, however, raises concerns regarding the privacy of sensitive information whose leakage may lead to undesirable and even harmful consequences. GNNs have been shown to be vulnerable to several privacy risks including membership inference [6], link re-identification [7], and attribute disclosure [8]. This risk of data leakage is considerably higher in GNNs compared to traditional learning models due to the presence of additional graph structure information [6]. To ensure compliance with legal data protection guidelines [9] and for the protection of user privacy, GNNs must thus be trained and deployed in a privacy-preserving manner.
In this paper, we aim to address such privacy concerns in GNNs. We focus on a specific scenario of node privacy wherein node-level features and labels are held locally by each user and the global graph structure is available to the server that hosts applications. The server could benefit from users' feature data, which, paired with the graph topology, can be utilized for embedding generation and/or predictive modeling via GNNs. However, collecting user feature and label data, possibly containing sensitive and identifying information, may incur serious privacy issues. To this end, Local Differential Privacy (LDP) [10] is often adopted in data collection for training machine learning models or releasing statistics in a private manner [11]. Furthermore, it has been deployed in large-scale data gathering of user behavior and usage statistics at Apple [12] and Google [13], motivating the integration of LDP into data collection for GNNs as well.
**Challenges** The main challenge in training GNNs with privately collected data arises from the utility-privacy trade-off of differentially private mechanisms. With randomization of data at the individual level, privatized data often misrepresents the data distribution of the population. A learning model that learns feature and label correlations from this data overfits the noise and is unable to achieve good utility on predictive and analytical tasks with unseen data. Furthermore, since GNNs propagate information throughout the graph to output node embeddings, the quality of the embeddings also suffers from the additive noise present in the training data after applying LDP mechanisms.
**Prior Work** A few recent works have attempted to address node privacy in GNNs [14, 15] but they enforce privacy only during training and/or model release which puts user information at risk if the server is malicious. Most notably, Sajadmanesh and Gatica-Perez [16] propose a node-level LDP framework in the distributed setting where features and labels are held private by the user, and the graph structure is known to the server. They propose an LDP protocol called multi-bit mechanism to perturb node features by extending the 1-bit mechanism [17] to multidimensional features. The multi-bit mechanism randomly selects a subset of features for each user, transforms each selected feature value to either 1 or -1, and indiscriminately reports the value 0 for the remaining ones. To protect label privacy, node labels are perturbed using Randomized Response (RR) [18]. A GCN-based multi-hop aggregator is then prepended to the GNN model for implicit |
2309.11763 | Bloch Equation Enables Physics-informed Neural Network in Parametric
Magnetic Resonance Imaging | Magnetic resonance imaging (MRI) is an important non-invasive imaging method
in clinical diagnosis. Beyond the common image structures, parametric imaging
can provide the intrinsic tissue property thus could be used in quantitative
evaluation. The emerging deep learning approach provides fast and accurate
parameter estimation but still encounters the lack of network interpretation
and enough training data. Even with a large amount of training data, the
mismatch between the training and target data may introduce errors. Here, we
propose one way that solely relies on the target scanned data and does not need
a pre-defined training database. We provide a proof-of-concept that embeds the
physical rule of MRI, the Bloch equation, into the loss of physics-informed
neural network (PINN). PINN enables learning the Bloch equation, estimating the
T2 parameter, and generating a series of physically synthetic data.
Experimental results are conducted on phantom and cardiac imaging to
demonstrate its potential in quantitative MRI. | Qingrui Cai, Liuhong Zhu, Jianjun Zhou, Chen Qian, Di Guo, Xiaobo Qu | 2023-09-21T03:53:33Z | http://arxiv.org/abs/2309.11763v1 | # Bloch Equation Enables Physics-informed Neural Network in Parametric Magnetic Resonance Imaging
###### Abstract
Magnetic resonance imaging (MRI) is an important non-invasive imaging method in clinical diagnosis. Beyond the common image structures, parametric imaging can provide the intrinsic tissue property thus could be used in quantitative evaluation. The emerging deep learning approach provides fast and accurate parameter estimation but still encounters the lack of network interpretation and enough training data. Even with a large amount of training data, the mismatch between the training and target data may introduce errors. Here, we propose one way that solely relies on the target scanned data and does not need a pre-defined training database. We provide a proof-of-concept that embeds the physical rule of MRI, the Bloch equation, into the loss of physics-informed neural network (PINN). PINN enables learning the Bloch equation, estimating the T\({}_{2}\) parameter, and generating a series of physically synthetic data. Experimental results are conducted on phantom and cardiac imaging to demonstrate its potential in quantitative MRI.
Physics-informed neural network, deep learning, parametric imaging, Bloch equation, magnetic resonance imaging
## I Introduction
Quantitative magnetic resonance imaging (qMRI) can measure parameters that reflect the intrinsic characteristics of tissues [1]. Common quantitative parameters include T\({}_{1}\), T\({}_{2}\), T\({}_{2}\)*, extracellular volume fraction, etc. T\({}_{1}\), T\({}_{2}\), T\({}_{2}\)* and myocardial extracellular volume fraction can be used to detect diffuse fibrosis [3], evaluate the degree of myocardial edema [4], quantify tissue iron content [5], and reflect the degree of myocardial fibrosis [5], respectively. For example, T\({}_{1}\) and T\({}_{2}\) parameters (Fig. 1) could assess the histopathological changes of the myocardium and help save the myocardium as early as possible [2].
Accurate parameter estimation is important. The estimation includes two steps: first, solve the parametric signal model following the physical rule of MRI, the Bloch equation; second, estimate the parameters by fitting the signal model [6]. Thus, parameter estimation may encounter problems if the analytical solution is hard to obtain [7]. On the other hand, advanced deep learning methods have shown great potential in qMRI [8], but deep learning methods usually lack interpretability and require a large amount of high-quality training labels [9].
Recently, to avoid using a pre-defined training database, physics-informed neural networks (PINN) [10-11] use known physical equations as prior information and embed them into the network's loss function.
In this work, we propose a PINN-based method for T\({}_{2}\) mapping. We design a physics-informed loss function that embeds the Bloch equation into the PINN. By learning the Bloch equation, the network can obtain quantitative T\({}_{2}\) values by directly solving the inverse problem of the equations, without an analytical signal formula and with only a single sample of data. Once the PINN is trained, it can be used to generate physically synthetic MRI data.
## II Proposed Method
### _Bloch Equation_
The magnetization vector \(\mathbf{M}=\left(M_{x},M_{y},M_{z}\right)^{T}\) in the magnetic field satisfies the Bloch equation:
\[\frac{d\mathbf{M}(\mathbf{r},t)}{dt}=\gamma\mathbf{M}(\mathbf{r},t)\times\mathbf{B}(\mathbf{r},t)-\frac{M_{x}(\mathbf{r},t)\mathbf{i}+M_{y}(\mathbf{r},t)\mathbf{j}}{T_{2}}-\frac{M_{z}(\mathbf{r},t)-M_{0}}{T_{1}}\mathbf{k}, \tag{1}\]
where \(\mathbf{B}(\mathbf{r},t)\) is the magnetic field at spatial location \(\mathbf{r}\), and \(\mathbf{M}_{0}\), T\({}_{1}\), T\({}_{2}\) are tissue parameters. Due to the limitations of coil placement, in practical applications the longitudinal magnetization \(M_{z}\) is difficult to detect directly. Therefore, our target is the transverse magnetization \(M_{\perp}=\sqrt{M_{x}^{2}+M_{y}^{2}}\). For the multi-spin echo sequence (Fig. 2) that measures the T\({}_{2}\) value of tissues, the transverse magnetization \(M_{\perp}\) satisfies a Bloch equation that can be simplified as:
\[\frac{dM_{\perp}(\mathbf{r},t)}{dt}+\frac{M_{\perp}(\mathbf{r},t)}{T_{2}}=0. \tag{2}\]
### _PINN for T\({}_{2}\) Mapping_
Traditional quantitative T\({}_{2}\) mapping for a single voxel (dropping the spatial variable \(\mathbf{r}\)) uses least squares (Fig. 1).
According to the Bloch equation (2), the signal analysis formula can be obtained:
\[M_{\perp}(t)=M_{0}e^{-t/T_{2}}, \tag{3}\]
where \(\mathrm{M}_{0}\) and \(\mathrm{T}_{2}\) are the parameters to be quantified. For data measured at multiple time points \(t_{i}\), where \(i\in\left\{1,2,\cdots,I\right\}\) and \(I\) is the number of different contrast images, we denote \(M_{\perp}(t_{i})\) by \(S_{i}\). According to the signal analysis formula, we can obtain \(\mathrm{M}_{0}\) and \(\mathrm{T}_{2}\) by least-squares fitting.
However, for some sequences the signal analysis formula cannot be obtained from the Bloch equation; thus, we use a nonlinear network to approximate the Bloch equation and obtain the quantitative parameters.
Here, we take the multi-spin echo sequence as an example for quantitative \(\mathrm{T}_{2}\) mapping. The aim of the proposed PINN is to approximate \(M_{\perp}\) in the Bloch equation (2) through network training. The PINN is a fully connected network with two hidden layers (Fig. 3). The input of the PINN is \(t\), a set of \(K\) discrete time points in the \(\mathrm{T}_{2}\) parameter interval, and the network output is \(\mathcal{N}(t)\). \(\mathcal{N}(t)\) and \(M_{\perp}(t)\) should satisfy the same equation:
\[\frac{d\mathcal{N}(t)}{dt}+\frac{\mathcal{N}(t)}{T_{2}}=0\,. \tag{4}\]
The loss function of the PINN is divided into two parts. The first part is a physics-informed loss for the Bloch equation (4):
\[\mathcal{L}_{\textit{Bloch}}=\frac{1}{K}\sum_{k=1}^{K}\left\|\frac{d\mathcal{N}(t_{k})}{dt}+\frac{\mathcal{N}(t_{k})}{T_{2}}\right\|, \tag{5}\]
where \(\left\|\cdot\right\|\) is the \(l_{2}\) norm and \(k\in\left\{1,2,\cdots,K\right\}\). The second part is the loss between the network output and the measured realistic cardiac qMRI data, as follows:
\[\mathcal{L}_{\mathrm{data}}=\frac{1}{I}\sum_{i=1}^{I}\left|S_{i}-\mathcal{N}\left(t_{i}\right)\right|\,. \tag{6}\]
Fig. 1: Principle of least square for \(\mathrm{T}_{2}\) mapping.
Fig. 4: \(\mathrm{T}_{2}\) mapping of the \(\mathrm{MnCl}_{2}\) phantom. (a) A photo of the phantom; (b) and (c) are the estimated \(\mathrm{T}_{2}\) values using the conventional least-squares method and the proposed Bloch equation-based PINN, respectively.
Fig. 3: Network structure of the Bloch equation-based PINN.
Fig. 2: Multi-spin echo sequence.
Finally, the total loss function is a weighted sum of two parts:
\[\mathcal{L}=w_{\textit{Bloch}}\mathcal{L}_{\textit{Bloch}}+w_{\textit{data}}\mathcal{L}_{\textit{data}}\,, \tag{7}\]
where \(w_{\textit{Bloch}}\) and \(w_{\textit{data}}\) are two weights that balance the importance of the two terms.
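To make the objective concrete, a minimal single-voxel PyTorch sketch is given below; the hidden width, the Tanh activations, the squared residual in the physics term, and the exponential parameterization of \(\mathrm{T}_{2}\) are our assumptions rather than details reported in the paper:

```
import torch
from torch import nn

class T2PINN(nn.Module):
    # A small fully connected network N(t) with two hidden layers and a
    # learnable T2 parameter (kept positive through exp).
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))
        self.log_t2 = nn.Parameter(torch.zeros(1))

    def forward(self, t):
        return self.net(t)

def pinn_loss(model, t_colloc, t_meas, s_meas, w_bloch=0.01, w_data=1.0):
    t = t_colloc.requires_grad_(True)
    n = model(t)
    # dN/dt via automatic differentiation, then the residual of Eq. (4)
    dn_dt = torch.autograd.grad(n, t, torch.ones_like(n), create_graph=True)[0]
    t2 = torch.exp(model.log_t2)
    l_bloch = (dn_dt + n / t2).pow(2).mean()        # Eq. (5), squared residual
    l_data = (s_meas - model(t_meas)).abs().mean()  # Eq. (6)
    return w_bloch * l_bloch + w_data * l_data      # Eq. (7)
```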
## III Experimental Result
### _Phantom_
The phantom consists of 14 tubes filled with different concentrations of MnCl\({}_{2}\) solution (Fig. 4(a)) [12]. We acquired the imaging data on a United Imaging Healthcare 3.0T MRI scanner. Imaging parameters include FOV = 240*240 mm\({}^{2}\) and slice thickness = 2 mm. PINN parameters are \(K=1001\), \(C=8\), \(w_{\textit{Bloch}}=0.01\), and \(w_{\textit{data}}=1\).
Figs. 4(b) and (c) show that the T\({}_{2}\) values estimated by the two approaches are close to each other for most tubes. For tube #14, which has a small T\({}_{2}\) value (the standard T\({}_{2}\) is 5.592 ms [12]), the T\({}_{2}\) value estimated by the PINN is closer to the reported standard value than that of the conventional least-squares approach.
### _Healthy Volunteer_
The cardiac imaging data were acquired from a healthy volunteer on a Philips 3.0T scanner. Imaging parameters are FOV = 300*300 mm\({}^{2}\) and slice thickness = 10 mm. PINN parameters include \(K=1001\), \(C=8\), \(w_{\textit{Bloch}}=0.01\), and \(w_{\textit{data}}=1\).
Given nine contrast images with different time \(t\), the T\({}_{2}\) values estimated by two methods are comparable (Fig. 5).
### _Data Generation_
The trained PINN approximates the relationship from time \(t\) to the transverse magnetization \(M_{\perp}(t)\). Given a time \(t\) as input, the corresponding transverse magnetization \(M_{\perp}(t)\) can be obtained for any voxel. Therefore, the trained PINN can be used to generate MRI data at any time \(t\) (Fig. 6). This may benefit physics-driven deep learning, particularly when a large amount of training data is not available [9, 13-15].
## IV Conclusion
This paper proposes to incorporate the physical rule of MRI, the Bloch equation, into neural network learning. On both phantom and realistic data, the T\({}_{2}\) maps obtained by the two methods are comparable. Thus, the Bloch equation enables physics-informed neural networks in parametric magnetic resonance imaging.
## Acknowledgments
This work was partially supported by the National Natural Science Foundation of China (62122064, 61971361, 62371410 and 62331021), the Natural Science Foundation of Fujian Province of China (2023J02005 and 2021J011184), the President Fund of Xiamen University (20720220063), and the Nanqiang Outstanding Talent Program of Xiamen University. Thanks are due to Huajun She, Bei Liu and Wenlong Feng from the School of Biomedical Engineering, Shanghai Jiao Tong University for providing quantitative phantom data.

Fig. 5: T\({}_{2}\) mapping of cardiac MRI data from a healthy volunteer. (a)-(i) Nine contrast images with different times \(t\); (j) and (k) are the estimated T\({}_{2}\) values using the conventional least-squares method and the proposed Bloch equation-based PINN, respectively; (l) the difference between (j) and (k).

Fig. 6: Physically generated MRI data produced by the PINN at different times \(t\).
|
2306.17597 | Razor SNN: Efficient Spiking Neural Network with Temporal Embeddings | The event streams generated by dynamic vision sensors (DVS) are sparse and
non-uniform in the spatial domain, while still dense and redundant in the
temporal domain. Although spiking neural network (SNN), the event-driven
neuromorphic model, has the potential to extract spatio-temporal features from
the event streams, it is not effective and efficient. Based on the above, we
propose an events sparsification spiking framework dubbed as Razor SNN, pruning
pointless event frames progressively. Concretely, we extend the dynamic
mechanism based on the global temporal embeddings, reconstruct the features,
and emphasize the events effect adaptively at the training stage. During the
inference stage, eliminate fruitless frames hierarchically according to a
binary mask generated by the trained temporal embeddings. Comprehensive
experiments demonstrate that our Razor SNN achieves competitive performance
consistently on four events-based benchmarks: DVS 128 Gesture, N-Caltech 101,
CIFAR10-DVS and SHD. | Yuan Zhang, Jian Cao, Ling Zhang, Jue Chen, Wenyu Sun, Yuan Wang | 2023-06-30T12:17:30Z | http://arxiv.org/abs/2306.17597v1 | # Razor SNN: Efficient Spiking Neural Network with Temporal Embeddings
###### Abstract
The event streams generated by dynamic vision sensors (DVS) are sparse and non-uniform in the spatial domain, while still dense and redundant in the temporal domain. Although spiking neural network (SNN), the event-driven neuromorphic model, has the potential to extract spatio-temporal features from the event streams, it is not effective and efficient. Based on the above, we propose an events sparsification spiking framework dubbed as Razor SNN, pruning pointless event frames progressively. Concretely, we extend the dynamic mechanism based on the global temporal embeddings, reconstruct the features, and emphasize the events effect adaptively at the training stage. During the inference stage, eliminate fruitless frames hierarchically according to a binary mask generated by the trained temporal embeddings. Comprehensive experiments demonstrate that our Razor SNN achieves competitive performance consistently on four events-based benchmarks: DVS 128 Gesture, N-Caltech 101, CIFAR10-DVS and SHD.
Keywords: Efficient SNNs, DVS, Temporal embeddings, Pruning.
## 1 Introduction
Event-based neuromorphic computation utilizes sparse and asynchronous events captured by a DVS to represent signals more efficiently. Unlike RGB cameras, a DVS encodes the time, location, and polarity of brightness changes for each pixel at high event rates [10]. Although events are sparse in the spatial domain, the streams they compose are dense in the temporal domain. This characteristic makes event streams hard to process directly with deep neural networks (DNNs), which rely on dense computation. Fortunately, spiking neural networks (SNNs) have an event-triggered computation characteristic that matches well with processing events. However, it is desirable to accelerate SNN models to make them more suitable and efficient for real-time event tasks and to further improve accuracy.
The dynamic mechanism owns attention recipes, which selectively focus on the most informative components of the input and can be interpreted as the sensitivity of the output to variations in the input. For SNNs, inspired by [17] and [24], we propose **temporal embeddings** combined with the dynamic mechanism to explore an unstructured and data-dependent pruning strategy. Heaps of prior works [10, 24, 2] are dedicated to spatial-wise attention. Different from the above works, the temporal embeddings emphasize the dense ticks of event streams.
As shown in Fig. 1, we present an event pruning mechanism for the temporal domain based on embeddings, which adaptively filters out immaterial events and slims the SNN. It reconstructs features and predicts attention vectors to compute the probabilities of dropping events, while retaining the event-triggered characteristic. We call the resulting pruning architecture Razor SNN. Our method reveals the possibility of exploiting the dynamic mechanism with temporal embeddings for the acceleration of SNNs and RNN-like models. The contributions are listed as follows:
* Rethink the characteristics of DVS events in both the spatial and temporal aspects, and propose the novel pruning method named Razor SNN, which can perform inference with less data yet higher performance. To the best of our knowledge, this is the first work to design a dynamic mechanism with temporal embeddings for SNNs.
* Our Razor SNN achieves competitive performance on event recognition tasks, even compared with full-event inputs. Besides, it improves gesture recognition accuracy with an inference window of only 65 ms.
## 2 Related Works
### Object Recognition Using DVS
Figure 1: Event pruning for a spike layer in Razor SNN.

For event recognition with a dynamic vision camera, processing the events as groups is the most common method, as it yields sufficient signal-to-noise ratios (SNR) [1]. In this paper, we adopt the frame-based representation, which accumulates the events occurring in a time window and maps them into a frame [24]. It is convenient to generate frame-based representations, and they are naturally compatible with traditional computer vision frameworks. Besides, SNN algorithms based on frames benefit from faster convergence during training [14]. The timestep and window size are crucial in determining the quality of the frame-based representation: the larger the window size, the higher the SNR. Prior works have taken various approaches to improve classification performance based on a large window size: [29, 21] adopt improved training methods, [23, 4] change the connection paths of the SNNs, and [6, 8] use hybrid fusion.
### Spiking Neural Networks
Based on biological plausibility, the spiking neuron computes by transforming dynamic inputs into a series of spikes. Spike-based SNNs assume that neurons which have not received any input spikes skip computation, known as the event-triggered characteristic [15]. There have been many research works applying spiking neural networks to recognition tasks [18]. Diehl et al. proposed a mechanism utilizing Spike Time Dependent Plasticity (STDP) [3], lateral inhibition, and homeostasis for recognition. Lee et al. and Delbruck et al. proposed supervised learning mechanisms mimicking the back-propagation of conventional ANNs that can learn efficiently [12]. The first uses feed-forward layers of spiking neurons and a variant of back-propagation that defines an error function between the desired and actual spiking activity; this addresses the problem of approximating the derivative of the spike function, which inherently raises the question of biological plausibility. Wu et al. proposed Spatio-Temporal Backpropagation (STBP), a learning rule that back-propagates error in both the temporal and spatial domains [20]. In this work, we adopt LIAF models as the elements of our spike-based SNNs and STBP to evaluate the network architecture.
### Model Acceleration of SNNs
There are various solutions that aim to compress or accelerate spiking neural networks. Structural network pruning methods explicitly prune out filters [9]. Knowledge distillation methods [27, 11, 28] guide the training of a student model with knowledge, such as predictions and features, learnt from a higher-capacity teacher (ANNs or SNNs). Some works design synchronous hardware inference mechanisms with parallelization strategies [13]. In our work, by contrast, we aim to accelerate an SNN model through feature-map reconstruction, without constraining the width or depth of the model.
## 3 Razor SNN
### Iterative LIAF Model
We first introduce the Leaky Integrate-and-Fire (LIF) model, which balances complex dynamic characteristics against a simple mathematical form, and translate it into an iterative expression with the Euler method [21]. Mathematically, the membrane potential is updated as:
\[\mathbf{u}(t+1)=\tau\mathbf{u}(t)+\mathbf{I}(t), \tag{1}\]
where \(\mathbf{u}\left(t\right)\) denotes the neuronal membrane potential at time \(t\), \(\tau\) is a time constant for the leaky behavior and \(\mathbf{I}\left(t\right)\) is the pre-synaptic input. In iterative form, the model unfolds over the spatial and temporal domains as follows:
\[\mathbf{a}_{i}^{t+1,\ l}=\sum_{j=1}^{N_{l-1}}\mathbf{W}_{i,j}^{l}\mathbf{O}_{j}^{t+1,l-1}. \tag{2}\]
EQ. 2 describes the spatial domain, where \(\mathbf{a}_{i}\) is the axon input of the \(i\)th neuron, \(\mathbf{W}_{i,j}^{l}\) is the synaptic weight from the \(j\)th neuron in the previous layer (of \(N_{l-1}\) neurons) to the \(i\)th neuron, and \(\mathbf{O}_{j}\) is the output of the \(j\)th neuron, whose value can only be 1 or 0. Here \(t\) denotes the timestamp and \(l\) the \(l\)th layer:
\[\mathbf{u}_{i}^{t+1,\ l}=\mathbf{u}_{i}^{t,l}\mathbf{h}\left(\mathbf{O}_{i}^{t,l}\right)+\mathbf{ a}_{i}^{t+1,l}. \tag{3}\]
EQ. 3 describes the temporal domain (TD), where \(\mathbf{u}_{i}\) is the membrane potential of the \(i\)th neuron and \(\mathbf{h}\left(x\right)\) is the rate of TD memory loss, defined as follows:
\[\mathbf{h}\left(x\right)=\tau e^{-\frac{x}{\tau}}. \tag{4}\]
EQ. 5 gives the output of the \(i\)th neuron, which checks whether the membrane potential exceeds the threshold \(\mathbf{V}_{th}\) and fires a spike accordingly:
\[\mathbf{O}_{i}^{t+1,\ l}=\left\{\begin{array}{ll}1&\quad u_{i}^{t+1,l}\geq V_{ th},\\ 0&\quad u_{i}^{t+1,l}<V_{th}.\end{array}\right. \tag{5}\]
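To make the update rules of Eqs. (1)-(5) concrete, the following is a minimal NumPy sketch of one iterative LIF timestep for a single layer; the shapes, default values and function name are illustrative assumptions rather than the authors' implementation (only \(V_{th}=0.3\) follows Tab 1).

```python
import numpy as np

def lif_step(u_prev, o_prev, x_in, W, tau=0.3, v_th=0.3):
    """One iterative LIF update for a single layer, following Eqs. (2)-(5).

    u_prev: (N,) membrane potentials u^{t,l}
    o_prev: (N,) binary spikes O^{t,l} emitted by this layer at time t
    x_in:   (M,) spike outputs O^{t+1,l-1} of the previous layer
    W:      (N, M) synaptic weights
    """
    a = W @ x_in                        # Eq. (2): pre-synaptic (axon) input
    h = tau * np.exp(-o_prev / tau)     # Eq. (4): rate of temporal memory loss
    u = u_prev * h + a                  # Eq. (3): leaky temporal accumulation
    o = (u >= v_th).astype(u.dtype)     # Eq. (5): Heaviside firing
    return u, o

# The LIAF variant adopted below would emit an analog value,
# e.g. np.maximum(u, 0.0), instead of the binary spike o.
```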
Figure 2: The complete forward flow of Razor SNN with the Event Pruning Mechanism. The feature maps colored green are processed at timestamp \(\mathbf{T}_{i}\). The dashed box in the bottom-right corner represents the proposed Event Pruning Mechanism. Zoom in for a better view.
However, introducing the LIF model into the last layer would lose the information carried by the membrane potential and degrade performance. Instead, we adopt the leaky integrate-and-analog-fire (LIAF) model. LIAF replaces the Heaviside step function with the ReLU function, so that both the spatial and temporal domains carry analog values.
### Event Pruning Mechanism
The precise identification of significant events is the keystone of the Dynamic Event Pruner. Unlike vanilla attention mechanisms, Razor SNN takes information from neighboring frames into consideration through Global Temporal Embeddings, and then prunes the refined events to purify the inputs. For simplicity, we follow method [24] in that the spatial input of the \(l\)th layer at the \(t\)th timestamp, \(\mathbf{O}^{t,\ l-1}\), equals \(\mathbf{X}^{t,\ l-1}\in\mathbb{R}^{C\times H\times W}\), where \(\mathbf{X}\) is the feature-map tensor and \(C\) is the channel size.
#### 3.2.1 Global Temporal Embeddings
We introduce a set of learnable global temporal embeddings \(\mathbf{B}\in\mathbb{R}^{E\times T}\) to extract \(E\) sequence principles from the temporal features. Notably, not all embeddings play the same role: for example, we would assign more attention to moments when events are dense, and less to moments when events are sparse. We therefore propose an embedding weighting module that determines the importance of each embedding independently. Concretely, we use a convolution-based module with a softmax function (see Figure 2) to predict an importance vector \(\mathbf{w}\), which is applied to the temporal embeddings to generate the weighted embeddings \(\hat{\mathbf{B}}\):
\[\hat{\mathbf{B}}^{t}=\sum_{i=1}^{T}\mathbf{w_{i}}\odot\mathbf{B}^{t}. \tag{6}\]
#### 3.2.2 Reconstruct Events Feature
We accumulate the feature maps within \(\mathbf{T}\) timestamps and flatten \(\mathbf{X}\) into the shape \((T,C\times H\times W)\). The Events of Interest (EoI) masks \(\mathbf{M}\) can then be obtained by computing the similarities between the weighted embeddings and the temporal frames of the feature maps:
\[\mathbf{M}=\sigma(\hat{\mathbf{B}}\mathbf{X}), \tag{7}\]
where \(\sigma\) denotes the sigmoid activation function. Finally, multiplying by the masks, we reconstruct the event features and obtain the refined feature \(\hat{\mathbf{X}}\), which is a better substitute for the vanilla features:
\[\hat{\mathbf{X}}=\sum_{i=1}^{E}\mathbf{M}_{i}\odot\mathbf{X}. \tag{8}\]
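Because the broadcasting in Eqs. (6)-(8) is left implicit, the following NumPy sketch fixes one plausible set of shapes; the convolution-based weighting module is abstracted into a precomputed vector `w`, and the exact tensor layout is our assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruct_events(X, B, w):
    """Global temporal embeddings and EoI masking, following Eqs. (6)-(8).

    X: (T, D) feature maps flattened over (C, H, W), so D = C * H * W
    B: (E, T) learnable global temporal embeddings
    w: (T,)   importance vector predicted by the weighting module
    """
    B_hat = B * w[None, :]                               # Eq. (6)
    M = sigmoid(B_hat @ X)                               # Eq. (7): (E, D) EoI masks
    X_hat = (M[:, None, :] * X[None, :, :]).sum(axis=0)  # Eq. (8): refined (T, D)
    return X_hat
```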
#### 3.2.3 Pruning Strategy
Since the refined feature contains the discriminative temporal information provided by the Global Temporal Embeddings, it is what the pruning mechanism is based on. We only need to pass the refined feature through 3D adaptive max pooling to extract the importance scores of the temporal features, \(\mathbf{S}\in\mathbb{R}^{T}\), as follows:
\[\mathbf{S}=\max_{i=1}^{H}\max_{j=1}^{W}\max_{k=1}^{C}\hat{\mathbf{X}}, \tag{9}\]
During inference, we generate a binary mask \(\mathbf{M}\) from these scores: frames whose score falls below the filtering threshold \(\mathbf{S}_{th}\) are eliminated, while the attention score of the remaining frames is set to 1.
\[\mathbf{M}=\mathbf{H}(\mathbf{S}-\mathbf{S}_{th}). \tag{10}\]
\(\mathbf{H(\cdot)}\) is the same Heaviside step function as in EQ. 5. Finally, we combine the mask \(\mathbf{M}\) with the input tensor, so the formed input at the \(t\)th timestamp is:
\[\widetilde{\mathbf{X}}^{t,l-1}=\mathbf{M}^{t,l-1}\odot\hat{\mathbf{X}}^{t,l-1}. \tag{11}\]
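A minimal sketch of the pruning strategy of Eqs. (9)-(11), assuming the refined features are kept in a (T, C, H, W) layout; the threshold default follows the Razor ratio of Tab 1, and the function name is ours.

```python
import numpy as np

def prune_frames(X_hat, s_th=0.4):
    """Drop temporally weak frames, following Eqs. (9)-(11).

    X_hat: (T, C, H, W) refined features from the reconstruction step
    s_th:  filtering threshold S_th (the "Razor ratio" of Tab 1)
    """
    S = X_hat.max(axis=(1, 2, 3))           # Eq. (9): per-frame importance scores
    M = (S >= s_th).astype(X_hat.dtype)     # Eq. (10): Heaviside binary mask
    return M[:, None, None, None] * X_hat   # Eq. (11): masked (pruned) input
```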
### Architecture Design of RazorSNNs
We implement the RazorSNNs by embedding the Event Pruning Mechanism into every spiking layer except the encoder layer (the first layer). The reason is that we assume the SNN cannot extract additional spatio-temporal information in that condition, and pruning the whole network leads to unstable results. We follow recent state-of-the-art methods [19, 24] in using a multi-stage pyramid structure; the Razor pruning architecture is shown in Figure 2.
## 4 Experimental Results
In this section, to demonstrate the superiority and effectiveness of our method, we conduct experiments on four popular event-based benchmarks: DVS128 Gesture, SHD, N-Caltech 101 and CIFAR10-DVS.
### Implementation Details
In this paper, we follow a notation similar to Yao et al. [24] and define our network architectures separately for DVS128 Gesture, SHD and CIFAR10-DVS, while N-Caltech uses the same architecture as DVS128 Gesture. We adopt rate coding for the loss function and use the Adam optimizer to accelerate training. The corresponding hyperparameter details are shown in Tab 1.
### Performance Comparison
We compare Razor SNN with a variety of prior works on event-based data, including CNN methods, spike-based SNNs, and analog-based SNNs, on the benchmarks mentioned above. The experimental results are shown in Tab 3. Our Razor SNN outperforms strong baselines by a margin on SHD, the CIFAR series and N-Caltech 101, and achieves competitive performance on DVS128 Gesture.
**SHD** The SHD dataset [5] is a large spike-based audio classification task that contains 10420 audio samples of spoken digits ranging from zero to nine in English and German. Unlike the four-dimensional event stream generated by a DVS camera, the audio spike stream has only two dimensions, i.e., time and position. Our method surpasses the previous state of the art by 1.75%, verifying the effectiveness of the temporal embedding module.
**DVS128 Gesture** Almost all spike-based SNN methods evaluate on this dataset. Razor SNN surpasses TA-SNN by 0.22% and outperforms all CNN-based methods. Owing to its unique temporal feature extraction and reconstruction, Razor SNN has a superior ability to distinguish the clockwise and counterclockwise versions of gestures, which are easily confused.
**N-Caltech 101** N-Caltech is a spiking version of the frame-based Caltech101 dataset, and a large-scale events dataset. Few methods have been tested on N-Caltech because of its complicated backgrounds and computational cost. Notably, Razor SNN still obtains a nearly 2.1% increase over TA-SNN and outperforms all the SOTA methods. Moreover, when SNNs meet static images, the temporal embeddings function as an attention module, which benefits classification.
**CIFAR10-DVS** Compared with the best previous result, TA-SNN, our method obtains a 1.01% improvement, attributed to the global temporal module catching critical event information and filtering the temporal noise that damages SNN accuracy. Our
\begin{table}
\begin{tabular}{|l|l|l|} \hline Parameter & Description & Value \\ \hline \(\mathbf{dt}\) & Window size & 1ms \\ \(\mathbf{V}_{th}\) & Fire threshold & 0.3 \\ \(\mathbf{e}^{-\frac{dt}{\tau}}\) & Leakage factor & 0.3 \\ \(\mathbf{S}_{th}\) & Razor ratio & 0.4 \\ \hline \end{tabular}
\end{table}
Table 1: Comprehensive parameters for experiments.
\begin{table}
\begin{tabular}{c|c|c} \hline Methods & Architecture & SHD \\ \hline Cramer [5] & LIF RSNN & 71.40 \\ Yin [26] & RELU SRNN & 88.93 \\ Zenke [25] & SG-based SNN & 84.00 \\ Yao [24] & TA-SNN & 91.08 \\ Ours & Razor SNN & **92.83** \\ \hline \end{tabular}
\end{table}
Table 2: Accuracy of models for the SHD Dataset (%).
Razor SNN shows better generalization on large-scale datasets, where more noisy and pointless events exist.
Moreover, the number of parameters in Razor SNN increases only marginally compared with the vanilla SNN, so the event pruning mechanism allows SNNs to achieve higher performance at low cost in practical applications.
### Ablation Studies
#### 4.3.1 The Number of Temporal Embeddings
We perform experiments on DVS Gesture to explore the effect of different numbers of temporal embeddings in Razor SNN. As shown in Tab 4, with only 2 embeddings, Event Pruning already improves the SNN baseline by 0.57% accuracy, and increasing the number to 4 achieves a further improvement. In this paper, we choose **4** embeddings for the best performance.
#### 4.3.2 Where to Prune
It is vital to determine into which layers the temporal embeddings should be inserted for pruning, and we design four sets of experiments to explore this: E1, the pure SNN baseline; E2, introducing temporal embeddings into the encoder layer; E3, introducing temporal embeddings into the
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
0 (vanilla) & 2 & 4 & 6 & 8 \\ \hline
97.99 & 98.56 & **98.83** & 98.79 & 98.34 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablation on the number of temporal embeddings.**
\begin{table}
\begin{tabular}{c c|c c c} \hline \hline Methods & Architecture & Gesture & N-Caltech & CIFAR10 \\ \hline Wu [21] & NeuNorm & - & - & 60.50 \\ Ramesh [16] & DART & - & 66.8 & 65.78 \\ Kugele [8] & DenseNet & 95.56 & - & 66.75 \\ Wu [22] & LIAF-Net & 97.56 & - & 70.40 \\ Zheng [29] & ResNet19 & 96.87 & - & 67.80 \\ Kim [7] & SALT & 67.10 & 55.0 & - \\ Wu [19] & ASF-BP & 93.40 & 60.23 & 62.50 \\ Yao [24] & TA-SNN & 98.61 & 68.42 & 72.00 \\ \hline Ours & Razor SNN & **98.83** & **70.50** & **73.01** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Comparison of different methods on DVS-Gesture, N-Caltech 101 and CIFAR10-DVS (%).**
backbone layers; E4, introducing temporal embeddings into all the layers. As shown in Fig 3, E2 and E3 each afford an improvement in most cases, with E3 achieving the best accuracy when T=120. However, E4, which combines E2 and E3, leads to unstable results; we assume the SNN cannot extract additional spatio-temporal information in this condition.
#### 4.3.3 Effects of Components in the Event Pruning Mechanism
We run experiments to show the contribution of each proposed component of the mechanism in Tab 5. **+ Embeddings.** Global temporal embeddings benefit Razor SNNs the most (+0.31%), owing to their consideration of neighboring frames and extraction of temporal features. **+ Embedding weighting module.** The embedding weighting module determines the importance of each embedding independently, providing discriminative information and a gain of +0.24%. **+ Reconstructed event features.** The refined feature contains discriminative temporal information, and the results (+0.10%) confirm that it is indeed a better substitute for the original features. **+ Pruning.** Pruning eliminates worthless, noise-laden events that disturb the SNN model (+0.19%).
### Visualization Analysis
To validate the effectiveness of Razor SNN, we visualize a case in which the vanilla SNN fails at recognition while the Razor SNN succeeds. As shown in Fig 4, each feature map indicates the average response of a spiking layer. We make the following two observations about the effect of the temporal embeddings and reconstruction in Razor SNN.
The spiking activity is more concentrated in Razor SNN, i.e., the deep blue area of Razor SNN is smaller and more focused. This suggests that global temporal
Figure 3: E1, pure SNNs baseline; E2, introduce embeddings into the encoder layer; E3, introduce embeddings into the backbone layers. E4, introduce embeddings into all the layers.
extraction helps the network handle the important regions of the intermediate channels. We also observe that pruning lightens the color of the yellow area (the background); the lighter the pixel, the weaker the spiking activity.
## 5 Conclusion
In this paper, we introduce a dynamic attention mechanism based on temporal embeddings into SNNs, and propose Razor SNNs. Compared with vanilla spiking neural networks, Razor SNNs process signals more efficiently, especially for event streams, by pruning pointless event frames progressively. Our method enjoys finer temporal-level features and prunes worthless event frames. Extensive experiments show that Razor SNNs apply to various benchmarks and consistently achieve state-of-the-art performance.
Figure 4: Visualization of the heat maps generated by vanilla SNNs and Razor SNNs, respectively. The temporal embeddings and reconstructed features urge the SNN to centre on the gesture instead of being distracted elsewhere, as vanilla models are. Best viewed in color.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Embeddings & Weighting & Reconstruct & Pruning & Acc(\%) \\ \hline - & - & - & - & 97.99 (baseline) \\ ✓ & - & - & - & 98.30 (**+0.31**) \\ ✓ & ✓ & - & - & 98.54 (+0.24) \\ ✓ & ✓ & ✓ & - & 98.64 (+0.10) \\ ✓ & ✓ & ✓ & ✓ & 98.83 (+0.19) \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Ablation experiments on effects of components in Event Pruning Mechanism.** |
2309.04434 | Physics-Informed Neural Networks for an optimal counterdiabatic quantum
computation | We introduce a novel methodology that leverages the strength of
Physics-Informed Neural Networks (PINNs) to address the counterdiabatic (CD)
protocol in the optimization of quantum circuits comprised of systems with
$N_{Q}$ qubits. The primary objective is to utilize physics-inspired deep
learning techniques to accurately solve the time evolution of the different
physical observables within the quantum system. To accomplish this objective,
we embed the necessary physical information into an underlying neural network
to effectively tackle the problem. In particular, we impose the hermiticity
condition on all physical observables and make use of the principle of least
action, guaranteeing the acquisition of the most appropriate counterdiabatic
terms based on the underlying physics. The proposed approach offers a
dependable alternative to address the CD driving problem, free from the
constraints typically encountered in previous methodologies relying on
classical numerical approximations. Our method provides a general framework to
obtain optimal results from the physical observables relevant to the problem,
including the external parameterization in time known as scheduling function,
the gauge potential or operator involving the non-adiabatic terms, as well as
the temporal evolution of the energy levels of the system, among others. The
main applications of this methodology have been the $\mathrm{H_{2}}$ and
$\mathrm{LiH}$ molecules, represented by a 2-qubit and 4-qubit systems
employing the STO-3G basis. The presented results demonstrate the successful
derivation of a desirable decomposition for the non-adiabatic terms, achieved
through a linear combination utilizing Pauli operators. This attribute confers
significant advantages to its practical implementation within quantum computing
algorithms. | Antonio Ferrer-Sánchez, Carlos Flores-Garrigos, Carlos Hernani-Morales, José J. Orquín-Marqués, Narendra N. Hegade, Alejandro Gomez Cadavid, Iraitz Montalban, Enrique Solano, Yolanda Vives-Gilabert, José D. Martín-Guerrero | 2023-09-08T16:55:39Z | http://arxiv.org/abs/2309.04434v2 | # Physics-Informed Neural Networks for an Optimal Counterdiabatic quantum computation
###### Abstract
We introduce a novel methodology that leverages the strength of Physics-Informed Neural Networks (PINNs) to address the counterdiabatic (CD) protocol in the optimization of quantum circuits comprised of systems with \(N_{Q}\) qubits. The primary objective is to utilize physics-inspired deep learning techniques to accurately solve the time evolution of the different physical observables within the quantum system. To accomplish this objective, we embed the necessary physical information into an underlying neural network to effectively tackle the problem. In particular, we impose the hermiticity condition on all physical observables and make use of the principle of least action, guaranteeing the acquisition of the most appropriate counterdiabatic terms based on the underlying physics. The proposed approach offers a dependable alternative to address the CD driving problem, free from the constraints typically encountered in previous methodologies relying on classical numerical approximations. Our method provides a general framework to obtain optimal results from the physical observables relevant to the problem, including the external parameterization in time known as scheduling function, the gauge potential or operator involving the non-adiabatic terms, as well as the temporal evolution of the energy levels of the system, among others. The main applications of this methodology have been the \(\mathrm{H_{2}}\) and \(\mathrm{LiH}\) molecules, represented by a 2-qubit and 4-qubit systems employing the STO-3G basis. The presented results demonstrate the successful derivation of a desirable decomposition for the non-adiabatic terms, achieved through a linear combination utilizing Pauli operators. This attribute confers significant advantages to its practical implementation within quantum computing algorithms.
Neural networks, counterdiabatic driving, Quantum computing, Quantum information, PINNs
## 1 Introduction
Quantum computing has emerged as a dynamic and vibrant domain of research within the scientific community, primarily driven by notable achievements and advancements in applied quantum machine learning [1, 2, 3, 4], quantum simulation [5, 6], and optimization of circuits and systems [7, 8]. Optimization problems have garnered particular attention, given their pervasive presence in various domains, including medicine [9], economics [10], logistics [11], and numerous others [12, 13, 14]. Classical approaches to solving these problems from an industrial perspective often face challenges in terms of efficiency and speed, thereby motivating the exploration of quantum computing as a promising alternative. The escalating interest in these methods can be attributed primarily to recent experimental advancements. This surge in interest is particularly noticeable from an industrial and commercial perspective. Consequently, there is growing anticipation that both conventional computers and quantum computing, in general, could yield significant advantages, eventually achieving a state known as "quantum supremacy" [15]. This potential advantage and progress have, in turn, spurred developments in various scientific domains, wherein contemporary quantum computers serve as proof of concept. However, it is essential to underscore that the applicability of these quantum algorithms warrants extensive research and investigation, particularly in the current state of quantum computing, which is commonly referred to as the Noisy Intermediate-Scale Quantum (NISQ) era [16] whose defining characteristic is the utilization of quantum processors with capacities of up to 1000 qubits.
Hybrid classical-quantum algorithms leverage NISQ devices while offloading a portion of their computational workload onto classical devices, offering considerable potential for practical applications in the field of quantum computing. A prominent example worth highlighting is the Variational Quantum Eigensolver (VQE) [17]. The primary objective of VQE is to determine the lowest energy quantum state through hybrid optimization, utilizing a designated Hamiltonian operator in conjunction with variational quantum circuits. The realm of quantum machine learning also falls under the purview of these algorithms, seeking to adapt classical algorithms to their quantum counterparts to enhance and expedite computations by capitalizing on the principles of quantum superposition, entanglement, and interference. Within this domain, one can identify supervised classification algorithms like binary classification and those based on Grover's search algorithm [18]. Notably, Grover's algorithm has demonstrated quadratic acceleration in solving problems such as \(k\)-medians [19] or \(k\)-nearest neighbors [20, 21]. On the other hand, an alternative methodology that has significantly progressed in the literature of this field and has laid the foundation for numerous studies is the Quantum Approximate Optimization Algorithm (QAOA) [22, 23]. This approach presents a valuable alternative for tackling combinatorial optimization problems using shallow quantum circuits through classical optimization of the associated parameters. In recent literature, substantial endeavors have been dedicated to employing these methodologies to solve the ground-state challenges of physical systems [24, 25, 26], reflecting the ongoing efforts to enhance and adapt these techniques for broader applications in quantum optimization.
In recent years, significant attention and interest have been directed towards the development of adiabatic quantum optimization (AQO) methodologies for confronting optimization problems [27, 28] with direct practical implementations in the branches of physics and chemistry [29, 30, 31]. These algorithms begin by initializing a quantum system in the ground state of a known Hamiltonian. The system's Hamiltonian is then slowly evolved into one that represents the problem to be solved, with its ground state encoding the solution. Leveraging the adiabatic theorem [32, 33], it becomes feasible to ensure that the quantum system remains in its instantaneous state of lowest energy, provided the evolution of the Hamiltonian is carried out in a sufficiently slow and gradual manner and within a sufficiently extended period of time. Nevertheless, implementing slow adiabatic evolution at the experimental level is typically not feasible, necessitating the development of methodologies that accelerate these processes. In pursuit of this objective, recent scientific literature puts forth various approaches based on Shortcuts To Adiabaticity (STA) [34, 35]. These methodologies encompass diverse techniques, including fast-forward methods [36, 37], invariant-based inverse engineering [38, 39], and counterdiabatic (CD) protocols [40, 41]. Despite the noticeable progress in the first two methodologies, this study primarily centers around the CD protocols. These techniques are specifically designed to speed up the adiabatic evolution process from an initial Hamiltonian to a final Hamiltonian. This is achieved by incorporating non-adiabatic terms following Equation (1), which effectively nullify the transition amplitudes between any energy eigenstate of the original Hamiltonian [42]. Consequently, the quantum system undergoes an accelerated adiabatic evolution in practical applications. The resulting Hamiltonian by including the CD term is given by
\[\mathbf{\mathcal{H}}(t):=\mathbf{\mathcal{H}}_{\text{AD}}(t)+\mathbf{\mathcal{H}}_{\text{ CD}}(t). \tag{1}\]
The operator designated as \(\mathbf{\mathcal{H}}_{\text{AD}}(t)\) in (1) will be tailored to facilitate the preparation of its ground energy state at the initial time of the evolution. Nevertheless, the main challenge of the CD protocols lies in the accurate and plausible determination of the operator encompassing the non-adiabatic terms of the process, denoted here as \(\mathbf{\mathcal{H}}_{\text{CD}}(t)\). In general, the computation and acquisition of this operator are exceedingly intricate tasks, particularly when dealing with many-body quantum systems [43]. As a customary approach in related literature, a time-dependent external
parameterization, denoted as \(\mathbf{\lambda}(t)\), is introduced, on which the operators depend [44] (explained in detail in Section 2.2). Efforts directed towards the approximate determination of this operator have led to significant progress, such as the development of the Nested Commutator (NC) methodology [45], and to recent contributions exemplified by [23, 46]. Within this framework, the computation of these terms is reduced to a series of commutators involving \(\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)\) and its derivatives with respect to the external parameterization \(\mathbf{\lambda}(t)\). As a result, the non-adiabatic terms of these protocols are obtained approximately through an expansion in orders, whose cost rises with both the number of particles in the system and the order considered in the expansion. Nevertheless, these methodologies may exhibit problem-dependent characteristics, as their escalating complexity in non-trivial physical scenarios might necessitate alternative perspectives on the problem at hand.
In a distinct domain, within the rapid progression of the computer science field, the realm of Deep Learning (DL) has achieved prominence as a highly potent instrument for constructing diverse models, owing to its direct applicability in domains such as image processing [47], natural language generation and processing [48], time series prediction and classification [49], among a plethora of other possibilities. Present-day technologies encompass Recurrent Neural Networks (RNNs) [50], Long Short-Term Memory architectures (LSTMs) [51], the well-known transformers [52], sparse and submanifolds convolutional networks [53] utilized for images with limited information load, among other advanced techniques. The Physics-Informed Neural Networks (PINNs) methodology has emerged as a highly intriguing DL application within the realm of physical sciences since its first appearances in the literature [54, 55]. This approach aims to address specific problems by employing neural networks as powerful universal approximators for systems of Partial Differential Equations (PDEs) that govern the physics describing the evolution. Due to their remarkable potential as numerical solvers, PINNs have garnered considerable attention and established themselves as a viable alternative to conventional numerical solving algorithms. Extensive efforts have been undertaken in diverse branches of physics to apply this methodology and its adaptations. These fields encompass classical hydrodynamics [56], relativistic hydrodynamics [57], electrodynamics [58], chemistry [59], and many others. PINNs demonstrate their utility wherever differential equations are employed to describe the underlying physics of a given scenario. Their ability to unravel complex physical phenomena and offer numerical solutions has positioned them as promising tools for tackling intricate problems across various scientific domains. Consequently, the motivation to employ this methodology for addressing the challenge of CD protocols in quantum systems arises organically. The investigation of potential applications of PINNs in quantum systems and circuits becomes a natural course of study.
In this paper, we present an innovative approach for designing the counterdiabatic terms in CD protocols. Our proposed method relies on the utilization of PINNs, thereby representing a direct application of DL. We introduce a PINN-based methodology without supplementary alterations, enabling it to effectively resolve the underlying physics of the problem. Through the neural network, we obtain both the counterdiabatic terms and the temporal parameterization \(\lambda(t)\), as expounded in Section 2.2. Additionally, we directly address experimental feasibility by decomposing the non-adiabatic operator into tensor products of the set of Pauli and identity matrices. This approach offers a comprehensive and direct means of ensuring the experimental applicability of the method.
The rest of the paper is structured as follows: Section 2 provides an introduction to the foundational theoretical framework concerning the operation of baseline PINNs. It further presents a comprehensive exposition of the theoretical framework under consideration, encompassing CD protocols and the specific problem that serves as the motivation for our research. Additionally, a thorough literature review of prior work in this domain is presented. Section 3 delves into a meticulous presentation of the adapted PINN methodology, particularized to address our specific case, while taking into account all pertinent physical factors that the network must conform to. In Section 4, we present notable outcomes obtained through our methodology and juxtapose them with previous findings in the field. Furthermore, we conduct comparisons and explore the scalability of the proposed methodology. Finally, Section 5 serves as a concluding segment, summarizing the principal conclusions drawn from our research and offering insights into potential avenues for future investigations.
## 2 General Concepts
### Physics-Informed Neural Networks
The fundamental approach employed in PINNs methodologies [55] involves leveraging neural networks as powerful tools for approximating functions and solving physical problems by fitting sets of differential equations, known as PDEs. PINNs derive their name from the fact that they incorporate physical knowledge through the incorporation of inductive biases. These biases are manifested in various aspects of the methodology, including the design of the underlying neural network architecture, the formulation of appropriate cost functions (losses), and other characteristics that aim to ensure or enhance the convergence of the neural model. The underlying algorithm of these networks leverages the
automated differentiation capabilities found in contemporary frameworks [60] to construct the differential equations from the output variables of the network. These variables are essential for computing the specific problem at hand. A minimization process, guided by a designated loss function, is subsequently employed to update the trainable parameters of the architecture; this adjustment aligns the network with the requirements of the physical framework.
Taking a broad perspective while maintaining generality, let us denote by \(\boldsymbol{\mathcal{U}}:=\boldsymbol{\mathcal{U}}(t,\boldsymbol{x})\) a collection of physical variables that serve as the output of the neural network. These variables, along with their derivatives, are the components of a system of PDEs defined within a domain of interest \(\Omega\) over a specific time interval \([0,T]\). Consequently, it is possible to write the following definition:
\[\mathcal{F}\left(t,\boldsymbol{x};\frac{\partial\boldsymbol{\mathcal{U}}}{ \partial t},\frac{\partial^{2}\boldsymbol{\mathcal{U}}}{\partial t^{2}},\dots ;\frac{\partial\boldsymbol{\mathcal{U}}}{\partial x_{1}},\dots,\frac{ \partial\boldsymbol{\mathcal{U}}}{\partial x_{D}};\frac{\partial^{2} \boldsymbol{\mathcal{U}}}{\partial x_{1}\partial x_{1}},\dots,\frac{ \partial^{2}\boldsymbol{\mathcal{U}}}{\partial x_{1}\partial x_{D}},\dots \right)=0,\quad\boldsymbol{x}=(x_{1},\dots,x_{D})\in\Omega,\quad t\in[0,T], \tag{2}\]
where \(D\) corresponds to the spatial dimension of the problem and \(\Omega\subset\mathbb{R}^{D}\). The operator \(\mathcal{F}\) defined in Equation (2) can be conceptualized as the comprehensive collection of physical constraints inherent to the system, which must be fulfilled in order to satisfy the underlying PDEs. It is worth noting that, in addition to these constraints, supplementary limitations can be established, such as the initial conditions that dictate the evolution of the system, or potential boundary conditions that may influence the behavior of the system at the spatial boundaries of the domain, namely,
\[\mathcal{IC}(t,\boldsymbol{x})=0,\qquad(t,\boldsymbol{x})\in\{0 \}\times\Omega. \tag{3}\] \[\mathcal{B}(t,\boldsymbol{x})=0,\qquad(t,\boldsymbol{x})\in(0,T] \times\partial\Omega. \tag{4}\]
In addition to the aforementioned conditions, other factors can be taken into consideration, such as imposing additional constraints on the final time (final conditions), or at specific points of significant physical significance within the spatiotemporal framework. Furthermore, if there are actual experimental measurements available for a subset of the domain, they can also be incorporated. Consequently, each of these physical conditions represents a segment of a priori knowledge regarding the physical scenario that can be integrated into the cost function as separate terms (referred to as "soft enforcement"), as denoted with \(\mathcal{L}_{i}\) in Equation (5):
\[\mathcal{L}:=\omega_{\mathcal{F}}\mathcal{L}_{\mathcal{F}}+\sum_{i}\omega_{i} \mathcal{L}_{i}, \tag{5}\]
where \(\mathcal{L}_{\mathcal{F}}\) corresponds to the metric pertaining to the underlying system of PDEs, while the collection \((\omega_{\mathcal{F}},\omega_{i},\dots)\) represents the weights assigned to each term within the mixture. The neural architecture employed in this methodology yields a set of essential physical variables, denoted as \(\boldsymbol{\mathcal{U}}(t,\boldsymbol{x};\Theta):=\boldsymbol{\mathcal{U}}_{ \Theta}(t,\boldsymbol{x})\), where \(\Theta\) encompasses all the trainable parameters of the network that are updated during the training process. Consequently, the output aims to closely align with the corresponding real-world values:
\[\boldsymbol{\mathcal{U}}_{\Theta}(t,\boldsymbol{x})\approx\boldsymbol{ \mathcal{U}}(t,\boldsymbol{x}).\]
The constraints expressed in (3) and (4) can be transformed into cost functions by employing a suitable difference measurement metric, such as _Mean Squared Error_ (MSE) or similar approaches. The determination of these cost functions together with \(\mathcal{L}_{\mathcal{F}}\) can be outlined as follows.
\[\mathcal{L}_{\mathcal{I}\mathcal{C}}:=\frac{1}{N_{\mathcal{I}\mathcal{C}}} \sum_{\{0\}\times\Omega}|\boldsymbol{\mathcal{U}}_{\Theta}(0,\boldsymbol{x})- \boldsymbol{\mathcal{U}}(0,\boldsymbol{x})|^{2}, \tag{6}\]
\[\mathcal{L}_{\mathcal{B}}:=\frac{1}{N_{\mathcal{B}}}\sum_{(0,T]\times\partial \Omega}|\boldsymbol{\mathcal{U}}_{\Theta}(t,\boldsymbol{x}\in\partial\Omega)- \boldsymbol{\mathcal{U}}(t,\boldsymbol{x}\in\partial\Omega)|^{2}, \tag{7}\]
\[\mathcal{L}_{\mathcal{F}}:=\frac{1}{N_{\mathcal{F}}}\sum_{(0,T]\times\Omega} \underbrace{\sum_{k}|\mathcal{R}_{k}(t,\boldsymbol{x};\Theta)|^{2}}_{| \mathcal{F}|^{2}}. \tag{8}\]
Here, the set \((N_{\mathcal{I}\mathcal{C}},N_{\mathcal{B}},N_{\mathcal{F}})\) represents the number of points sampled in each respective domain. Additionally, \(\mathcal{R}_{k}\) denotes the residual of a specific physical constraint imposed at the level of the PDE. Through the fundamental methodology employed in PINN models, it becomes feasible to softly enforce the imposed constraints by penalizing violations of the PDEs associated with the given problem. Nonetheless, alternative approaches, such as the "hard enforcement" method [61], propose compelling the neural network to satisfy predetermined constraints from the onset of training by modifying the output of the network. This technique guarantees the incorporated constraints exactly, but it entails establishing a specific dependence on the problem at hand. Other researchers achieve a certain level of independence from the set of weights \((\omega_{\mathcal{F}},\omega_{i},...)\) in (5) through the application of the Augmented Lagrangian method [62]. This technique involves updating the correspondingly-named multipliers during training in accordance with the degree of violation of each respective condition.
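As a minimal PyTorch sketch of the soft-enforcement strategy of Eqs. (5)-(8) for a time-only problem (so the boundary term of Eq. (7) is omitted); the function and argument names are hypothetical, and `residual_fn` stands for whatever autograd-based construction of the residuals \(\mathcal{R}_{k}\) the problem requires.

```python
import torch

def pinn_loss(model, t_ic, u_ic, t_col, residual_fn, w_f=1.0, w_ic=1.0):
    """Composite soft-enforced PINN loss, Eqs. (5)-(8), time-only problem."""
    # Initial-condition term, Eq. (6)
    loss_ic = torch.mean((model(t_ic) - u_ic) ** 2)

    # PDE residual term, Eq. (8), on interior collocation points
    t_col = t_col.clone().requires_grad_(True)
    u = model(t_col)
    loss_f = sum(torch.mean(r ** 2) for r in residual_fn(t_col, u))

    # Weighted mixture, Eq. (5)
    return w_f * loss_f + w_ic * loss_ic
```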
### Physical background
Combinatorial and optimization problems are of great interest in many industrial and research fields [23, 46]. Adiabatic Quantum Computing (AQC) algorithms are used to solve this kind of problem, and they are expected to outperform classical computers in the current NISQ era [63, 64]. In this paradigm, we prepare a driving (or mixing) Hamiltonian in its ground state and evolve it towards a desired final operator whose ground state encodes the solution to the underlying problem, or is the solution itself [43, 46]. This initial, driving Hamiltonian should be easy to prepare and evolve. We shall commence by establishing the Hamiltonian operator associated with the adiabatic transition of the system, \(\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)\), which is characterized by its energy eigenvalues, \(E_{n}(t)\), and corresponding eigenstates, \(\ket{n(t)}\), as determined by:
\[\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)\ket{n(t)}=E_{n}(t)\ket{n(t)}. \tag{9}\]
A time-dependent Hamiltonian, such as the one defined in (9), generally induces modifications in the quantum states it governs. Nevertheless, when these changes are sufficiently small, analytically tractable, and under controlled conditions, the adiabatic theorem [32] ensures that the system, assuming non-degenerate energies, will remain close to its initial energy state throughout its temporal evolution. From this perspective, it is always feasible to formulate a Hamiltonian operator, denoted as \(\mathbf{\mathcal{H}}(t)\), that exhibits a direct correlation with \(\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)\) and accurately reproduces the temporal progression of its eigenstates; in other words, its transition amplitudes between energy levels are precisely zero [42]. This operator will be designed to satisfy the following Schrodinger equation:
\[i\hbar\,\partial_{t}\ket{\psi_{n}(t)}=\mathbf{\mathcal{H}}(t)\ket{\psi_{n}(t)},\]
where \(\ket{\psi_{n}(t)}\) can be defined in terms of the corresponding eigenstate of the operator in (9), \(\ket{n(t)}\). These states represent each of the \(n\) states driven by \(\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)\) in a given system. Through rigorous derivation and computation, one can obtain a plausible analytical representation for the operator \(\mathbf{\mathcal{H}}(t)\); for a detailed derivation, the interested reader is encouraged to refer to the works of [42, 65, 66]. This operator, defined in (10), effectively governs the evolution of energy states within the framework of \(\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)\), ensuring the absence of transitions between them.
\[\mathbf{\mathcal{H}}(t)=\underbrace{\sum_{n}\ket{n}E_{n}\bra{n}}_{\mathbf{\mathcal{H}}_{\mathrm{AD}}}+\underbrace{i\hbar\,\sum_{n}\left(\ket{\partial_{t}n}\bra{n}-\braket{n|\partial_{t}n}\ket{n}\bra{n}\right)}_{\mathbf{\mathcal{H}}_{\mathrm{CD}}}=\mathbf{\mathcal{H}}_{\mathrm{AD}}(t)+\mathbf{\mathcal{H}}_{\mathrm{CD}}(t). \tag{10}\]
The Hamiltonian operator \(\mathbf{\mathcal{H}}_{\mathrm{CD}}(t)\), corresponding to the second term in (10), can be interpreted as the operator responsible for capturing the counterdiabatic effects during the system evolution. These effects facilitate the acceleration of the underlying dynamics by introducing additional accessible degrees of freedom, allowing the same results as the entire adiabatic evolution to be reached while eliminating any possible transition between eigenstates [64, 67], as previously stated. These frameworks, well known as counterdiabatic protocols, effectively expedite the adiabatic processes by introducing novel terms that precisely offset any excitations that may arise during system acceleration; they have recently been developed as part of the STA methods. With this in mind, it is feasible to extend the theoretical deduction by incorporating a set of time-dependent external parameters, denoted as \(\mathbf{\lambda}(t)\), upon which all our operators will depend [44]. By doing so, when revisiting Equation (10), it follows that the temporal derivatives of the \(\ket{n}\) states can be written as follows,
\[\ket{\partial_{t}n}=\frac{d\mathbf{\lambda}}{dt}\ket{\mathbf{\nabla}_{\mathbf{\lambda}}n}. \tag{11}\]
Equation (11) allows us to redefine the operator \(\mathbf{\mathcal{H}}(t)\) as
\[\mathbf{\mathcal{H}}(t):=\mathbf{\mathcal{H}}_{\text{AD}}(t)+\underbrace{\frac{d\mathbf{\lambda}}{dt}\mathbf{\mathcal{A}}_{\text{CD}}(t)}_{\mathbf{\mathcal{H}}_{\text{CD}}},\qquad\mathbf{\mathcal{A}}_{\text{CD}}(t):=i\hbar\,\sum_{n}\left(\ket{\mathbf{\nabla}_{\lambda}n}\bra{n}-\braket{n|\mathbf{\nabla}_{\lambda}n}\ket{n}\bra{n}\right). \tag{12}\]
(While the operators in this formulation depend on time through \(\lambda\), for the sake of notational simplicity we will keep denoting the time dependency explicitly.)
In general, this parameterization could encompass a collection of values \(\mathbf{\lambda}:=(\lambda_{1},...,\lambda_{N})\). However, to align with the latest literature in the field, we will confine ourselves to a single scalar function. Consequently, \(\lambda(t)\), usually called the _scheduling function_ in the literature, carries out the counterdiabatic driving by means of its temporal derivative. Indeed, it is evident that in the limit of zero velocity, \(\frac{d\lambda}{dt}\to 0\), the Hamiltonian in Equation (12) reduces to the adiabatic operator, as anticipated during its construction. When extrapolating this understanding to contemporary digitized versions of algorithms within the realm of quantum circuits [43], it is customary to initiate the process with a Hamiltonian whose ground state can be readily prepared in practical settings, denoted as \(\mathbf{\mathcal{H}}_{\text{initial}}\). Under adiabatic conditions, this operator must undergo a gradual transformation over time, eventually converging to the target Hamiltonian of the specific problem under investigation, written as \(\mathbf{\mathcal{H}}_{\text{problem}}\).
\[\mathbf{\mathcal{H}}_{\text{AD}}(t):=(1-\lambda(t))\,\mathbf{\mathcal{H}}_{\text{ initial}}+\lambda(t)\,\mathbf{\mathcal{H}}_{\text{problem}}. \tag{13}\]
In Equation (13), the scheduling function can once again be identified. Within the time interval \(t\in[t_{\text{min}},t_{\text{max}}]\), it is necessary for \(\lambda(t)\) to fulfill the conditions \(\lambda(t_{\text{min}})=0\) and \(\lambda(t_{\text{max}})=1\) based on its functional definition, which enables the interpolation of the process from the initial Hamiltonian to the desired endpoint of the process.
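These boundary conditions on \(\lambda(t)\) can also be built into the network output rather than learned; the parameterization below is one illustrative choice (our assumption, not the paper's prescription) that satisfies \(\lambda(t_{\text{min}})=0\), \(\lambda(t_{\text{max}})=1\) and \(\lambda\in[0,1]\) by construction.

```python
import torch

def scheduling(raw, t, t_min=0.0, t_max=1.0):
    """Hard-enforced scheduling function from an unconstrained output `raw`."""
    s = (t - t_min) / (t_max - t_min)     # linear ramp: 0 at t_min, 1 at t_max
    # s + s(1-s)tanh(raw) lies in [s^2, s(2-s)], a subset of [0, 1], and the
    # correction vanishes at both endpoints, so the boundary values hold exactly.
    return s + s * (1.0 - s) * torch.tanh(raw)
```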
On the other hand, the \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) operator (also written in the literature as \(\mathbf{\mathcal{A}}_{\lambda}\)) is defined as the Adiabatic Gauge Potential. Obviously, this operator should fulfill the hermiticity condition, i.e., \(\mathbf{\mathcal{A}}_{\text{CD}}=\mathbf{\mathcal{A}}_{\text{CD}}^{\dagger}\), in order to be interpreted as a physical observable. It is also easy to show that the potential \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) satisfies the condition (15), which is equivalent to minimizing the action \(\mathcal{S}\) defined in Equation (14) for the Hilbert-Schmidt operator \(\mathbf{\mathcal{G}}_{\lambda}\)[44, 68]. Consequently, Equation (15) can be understood as the Euler-Lagrange equation resulting from the minimization of the physical action.
\[\mathcal{S}=\text{Tr}\left[\mathbf{\mathcal{G}}_{\lambda}^{2}\right],\qquad\mathbf{ \mathcal{G}}_{\lambda}\left(\mathbf{\mathcal{A}}_{\text{CD}}\right)=\partial_{ \lambda}\mathbf{\mathcal{H}}_{\text{AD}}+\frac{i}{\hbar}\left[\mathbf{\mathcal{A}}_{ \text{CD}},\mathbf{\mathcal{H}}_{\text{AD}}\right]. \tag{14}\]
\[\left[i\hbar\frac{\partial\mathbf{\mathcal{H}}_{\text{AD}}}{\partial\lambda}-[\mathbf{\mathcal{A}}_{\text{CD}},\mathbf{\mathcal{H}}_{\text{AD}}],\mathbf{\mathcal{H}}_{\text{AD}}\right]=0. \tag{15}\]
(From this point on we employ Planck units, wherein the reduced Planck constant is set to unity, i.e., \(\hbar=1\).)
This term should establish a connection between the aforementioned external parameter \(\lambda(t)\) and the instantaneous eigenstates. Finding the exact CD terms is not easy without spectral information about the system, so they are usually approximated using the NC approach [43, 46, 23] or through variational circuits [63]. This leads to a set of possible local CD terms, and different techniques have been developed to determine which of them are the most adequate for a particular problem. Nonetheless, even though these methods are of great interest and constitute state-of-the-art approaches, they are still approximations; consequently, there could be other relevant terms and aspects that are not considered within these methodologies.
### Quantum circuit design for the \(\mathrm{H}_{2}\) ground state problem
The main numerical application of this paper is to find the ground state of the \(\mathrm{H}_{2}\) molecule in the STO-3G basis for different bond distances; this STO-3G basis is a minimal set that uses three Gaussian functions to approximate the atomic orbitals [69]. The problem can be described with the 2-qubit Full Configuration Interaction (FCI) Hamiltonian in Equation (16), whose coefficients vary with the bond distance. This Hamiltonian has been obtained using the well-known _Qiskit_ module [70]. In a different domain, it is well known that the Pauli matrices comprise a set of Hermitian and unitary matrices, namely \(\{\sigma_{\text{X}},\sigma_{\text{Y}},\sigma_{\text{Z}}\}\in\mathcal{M}_{2\times 2}(\mathbb{C})\), which are highly suitable for representing quantum operators and physical observables. These matrices, together with the identity matrix \(\mathbf{\mathcal{I}}\) (also called \(\sigma_{0}\)), form an orthogonal basis of the Hilbert space of \(2\times 2\) Hermitian matrices. Similarly, when dealing
with a system of \(N_{Q}\) qubits, the space of Hermitian matrices \(\mathcal{M}_{2^{N_{Q}}\times 2^{N_{Q}}}(\mathbb{C})\) can be decomposed by means of tensor products involving the aforementioned set (the interested reader is referred to [71] Chapter 2 and [72] Chapter 3). This procedure, referred to as the Pauli basis expansion, enables us to achieve a comprehensive representation of any Hermitian operator on a system comprising a certain number of qubits. From the perspective of quantum circuits, this approach offers a structured and concise depiction of quantum operations and physical observables, thereby facilitating the analysis and manipulation of quantum systems.
\[\boldsymbol{\mathcal{H}}_{\text{FCI}}=c_{0}\boldsymbol{\mathcal{I}}^{(1)} \otimes\boldsymbol{\mathcal{I}}^{(2)}+c_{1}\,\boldsymbol{\mathcal{I}}^{(1)} \otimes\sigma_{\text{Z}}^{(2)}+c_{2}\,\sigma_{\text{Z}}^{(1)}\otimes \boldsymbol{\mathcal{I}}^{(2)}+c_{3}\,\sigma_{\text{Z}}^{(1)}\otimes\sigma_{ \text{Z}}^{(2)}+c_{4}\,\sigma_{\text{X}}^{(1)}\otimes\sigma_{\text{X}}^{(2)} +c_{5}\,\boldsymbol{\mathcal{I}}^{(1)}\otimes\boldsymbol{\mathcal{I}}^{(2)}. \tag{16}\]
This corresponds to \(\mathcal{H}_{\text{problem}}\) in Equation (13). In this context, the numeric superscripts enclosed in parentheses indicate the specific qubit on which the written operator acts, thereby resulting in distinct matrices associated with each one. The symbol \(\otimes\) denotes the Kronecker product. Furthermore, the first and last coefficients are written separately with the same operator, since they correspond to different atoms in the molecule even though they share the same interaction.
\[\boldsymbol{\mathcal{H}}_{\text{FCI}}=\begin{bmatrix}c_{0}+c_{1}+c_{2}+c_{3}+ c_{5}&0&0&c_{4}\\ 0&c_{0}-c_{1}+c_{2}-c_{3}+c_{5}&c_{4}&0\\ 0&c_{4}&c_{0}+c_{1}-c_{2}-c_{3}+c_{5}&0\\ c_{4}&0&0&c_{0}-c_{1}-c_{2}+c_{3}+c_{5}\end{bmatrix}. \tag{17}\]
As detailed before, we start from a _driving_ Hamiltonian that should be easy to prepare. For this particular problem, the Hartree-Fock (HF) approximation Hamiltonian (18) is used, with its matrix form written in Equation (19). It is easy to see that both the FCI and the HF Hamiltonians are real-valued, since they are composed only of the identity matrix, \(\boldsymbol{\mathcal{I}}\), and the Pauli matrices \(\sigma_{\text{X}}\) and \(\sigma_{\text{Z}}\).
\[\boldsymbol{\mathcal{H}}_{\text{HF}}=p_{0}\,\boldsymbol{\mathcal{I}}^{(1)} \otimes\boldsymbol{\mathcal{I}}^{(2)}+p_{1}\,\boldsymbol{\mathcal{I}}^{(1)} \otimes\sigma_{\text{Z}}^{(2)}+p_{2}\,\sigma_{\text{Z}}^{(1)}\otimes \boldsymbol{\mathcal{I}}^{(2)}+p_{3}\,\sigma_{\text{Z}}^{(1)}\otimes\sigma_{ \text{Z}}^{(2)}+p_{4}\,\boldsymbol{\mathcal{I}}^{(1)}\otimes\boldsymbol{ \mathcal{I}}^{(2)}. \tag{18}\]
\[\boldsymbol{\mathcal{H}}_{\text{HF}}=\begin{bmatrix}p_{0}+p_{1}+p_{2}+p_{3}+p_{4}&0&0&0\\ 0&p_{0}-p_{1}+p_{2}-p_{3}+p_{4}&0&0\\ 0&0&p_{0}+p_{1}-p_{2}-p_{3}+p_{4}&0\\ 0&0&0&p_{0}-p_{1}-p_{2}+p_{3}+p_{4}\end{bmatrix}. \tag{19}\]
The HF method approximates the ground state of the system using a single Slater determinant. The contributions of both the initial and final Hamiltonian operators in the adiabatic evolution can be determined using advanced numerical algorithms [70]. As stated before, the primary objective is to solve the adiabatic temporal evolution while considering non-adiabatic terms; the goal is to accelerate this evolution without allowing unexpected transitions between energy states. By incorporating the adiabatic operator defined in Equation (13) into the total Hamiltonian operator described in Equation (12), and considering that the initial and final Hamiltonians are represented as shown in Equations (16-19), with \(\boldsymbol{\mathcal{H}}_{\text{initial}}:=\boldsymbol{\mathcal{H}}_{\text{HF}}\) and \(\boldsymbol{\mathcal{H}}_{\text{final}}:=\boldsymbol{\mathcal{H}}_{\text{FCI}}=\boldsymbol{\mathcal{H}}_{\text{problem}}\), we can establish the following total Hamiltonian, which defines the dynamics to be solved:
\[\boldsymbol{\mathcal{H}}(t):=(1-\lambda(t))\,\boldsymbol{\mathcal{H}}_{\text{ HF}}+\lambda(t)\boldsymbol{\mathcal{H}}_{\text{problem}}+\frac{d\lambda}{dt} \boldsymbol{\mathcal{A}}_{\text{CD}}(t). \tag{20}\]
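For concreteness, a minimal NumPy construction of the operators in Eqs. (16)-(20); the coefficients are placeholders that would in practice be obtained from Qiskit for each bond distance, the function names are ours, and `A_cd` stands for whatever \(4\times 4\) Hermitian matrix plays the role of the gauge potential.

```python
import numpy as np

I2 = np.eye(2)
SX = np.array([[0., 1.], [1., 0.]])   # Pauli X
SZ = np.array([[1., 0.], [0., -1.]])  # Pauli Z

def h_fci(c):
    """Eq. (16): 2-qubit FCI Hamiltonian with coefficients c[0..5]."""
    return ((c[0] + c[5]) * np.kron(I2, I2) + c[1] * np.kron(I2, SZ)
            + c[2] * np.kron(SZ, I2) + c[3] * np.kron(SZ, SZ)
            + c[4] * np.kron(SX, SX))

def h_hf(p):
    """Eq. (18): Hartree-Fock driving Hamiltonian with coefficients p[0..4]."""
    return ((p[0] + p[4]) * np.kron(I2, I2) + p[1] * np.kron(I2, SZ)
            + p[2] * np.kron(SZ, I2) + p[3] * np.kron(SZ, SZ))

def h_total(lam, dlam_dt, A_cd, c, p):
    """Eq. (20): adiabatic interpolation plus the counterdiabatic term."""
    return (1.0 - lam) * h_hf(p) + lam * h_fci(c) + dlam_dt * A_cd
```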
### Related work
The primary objective of CD driving processes is to attain a gauge potential, \(\boldsymbol{\mathcal{A}}_{\text{CD}}(t)\), that expedites the adiabatic process through a temporal velocity parameter, namely the time derivative of the scheduling function, \(\frac{d\lambda}{dt}\). This operator possesses the property that the product \(\frac{d\lambda}{dt}\boldsymbol{\mathcal{A}}_{\text{CD}}\) in Equation (20) fully suppresses the non-adiabatic transitions that would otherwise manifest in a specific eigenstate \(|n(t)\rangle\) under the total \(\boldsymbol{\mathcal{H}}(t)\), evolved from the initial state \(|n(t=0)\rangle\) of the control Hamiltonian \(\boldsymbol{\mathcal{H}}_{\text{AD}}(t)\) [73]. Therefore, the aim of the related methodologies is to ensure precise compensation of the diabatic terms in the evolution within the moving frame [40]. This potential, comprising the non-adiabatic compensations, is an operator that is typically intricate to derive. While an exact numerical construction is possible for certain systems with a small number of particles, resolving it for systems with numerous bodies (a substantial number of qubits) becomes exceedingly
challenging, if not impossible, due to the requirement of diagonalizing the resulting Hamiltonian observable across the entire Hilbert space. Furthermore, \(\mathbf{\mathcal{A}}_{\text{CD}}\) often entails the emergence of highly intricate and non-local couplings, thus limiting its implementation to specific cases [74, 75, 76]. Part of the research focused on a robust determination of the potential \(\mathbf{\mathcal{A}}_{\text{CD}}\) has led to works such as [77], where fast-forward methods are presented through which it is possible to obtain a target wave function in a shorter time. This line of investigation is conducted within the field of microscopic quantum mechanics, aiming to delve into the macroscopic domain primarily governed by the Schrodinger equation; these inquiries are expounded upon in references such as [36, 78]. Despite these notable advances, there presently exists no viable approach to extend these studies to the context of many-body systems; consequently, these methodologies are not applicable to complex problem sets. On the other hand, several recent studies such as [63] have suggested alternative approaches for addressing the aforementioned issues. The authors suggest employing Variational Circuits (VC) in conjunction with classical optimizers to optimize the choice of CD terms, treating these as trainable parameters, testing the approach on the specific case of an Ising model with nearest-neighbor interactions and comparing it with some of the state-of-the-art QAOA methodologies [79, 80, 23]. Some assumptions are made regarding the form of the function \(\lambda(t)\), thereby predefining a temporal parameterization, which may exhibit certain dependencies on the problem under investigation.
Returning to the latest approaches for deriving \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) described in the existing literature: based on its definition in Equation (12), and assuming that the parameterization is completely determined by the function \(\lambda(t)\), differentiation of Equation (9) [42] straightforwardly yields
\[\left\langle m\right|\mathbf{\mathcal{A}}_{\text{CD}}\left|n\right\rangle=i\left\langle m\middle|\partial_{\lambda}n\right\rangle=-i\frac{\left\langle m\right|\partial_{\lambda}\mathbf{\mathcal{H}}_{\text{AD}}\left|n\right\rangle}{E_{m}-E_{n}}. \tag{21}\]
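For small systems, Eq. (21) can be evaluated directly by exact diagonalization; the NumPy sketch below (our naming, \(\hbar=1\), diagonal of \(\mathbf{\mathcal{A}}_{\text{CD}}\) set to zero as a gauge choice) makes explicit both the diagonalization cost and the divergence issue discussed next.

```python
import numpy as np

def exact_gauge_potential(H_ad, dH_dlam, tol=1e-9):
    """Eq. (21) via exact diagonalization of H_AD at a fixed lambda."""
    E, V = np.linalg.eigh(H_ad)            # spectrum and eigenbasis of H_AD
    dH = V.conj().T @ dH_dlam @ V          # matrix elements <m|d_lambda H_AD|n>
    A = np.zeros_like(dH, dtype=complex)
    for m in range(len(E)):
        for n in range(len(E)):
            gap = E[m] - E[n]
            if abs(gap) > tol:             # skip ill-defined (near-)degenerate pairs
                A[m, n] = -1j * dH[m, n] / gap
    return V @ A @ V.conj().T              # back to the computational basis
```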
However, it can be readily observed from Equation (21) that determining the operator \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) can become highly intricate and computationally expensive, especially when dealing with many-body problems. This potential necessitates exact diagonalization of the operator \(\mathbf{\mathcal{H}}_{\text{AD}}(t)\) over time and, additionally, the difference in energies \(E_{m}-E_{n}\) can introduce complications, leading to divergent and mathematically ill-defined matrix elements. In an attempt to address these challenges, the authors in [45] suggest employing the so-called method of NC for approximating the representation of \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\).
\[\mathbf{\mathcal{A}}_{\text{CD}}^{(l)}=i\sum_{k=1}^{l}\alpha_{k}\underbrace{[\mathbf{\mathcal{H}}_{\text{AD}},[\mathbf{\mathcal{H}}_{\text{AD}},\ldots[\mathbf{\mathcal{H}}_{\text{AD}}}_{2k-1},\partial_{\lambda}\mathbf{\mathcal{H}}_{\text{AD}}]]]. \tag{22}\]
Equation (22) presents a methodology for obtaining an approximate numerical representation of the operator \(\mathbf{\mathcal{A}}_{\text{CD}}\). In this expression, \(l\) represents the order of the expansion, and the set of coefficients \(\{\alpha_{k}\}_{k=1}^{l}\) can be determined by minimizing the action in Equation (14) tailored to the \(l\)-th order. For further details and a comprehensive demonstration, interested readers are referred to [45], where it is shown that the exact potential \(\mathbf{\mathcal{A}}_{\text{CD}}\) is recovered in the limit as \(l\) tends towards infinity. Notwithstanding its nature as an approximation truncated at expansion order \(l\), this ansatz has proven to be a methodological framework for attaining cutting-edge outcomes in the field [23, 43, 46, 81, 82]. In the absence of an analytical solution to the CD protocol problem, a compelling alternative is to employ techniques rooted in DL, which possess the capacity to learn the fundamental physics governing the problem.
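A direct transcription of the nested-commutator ansatz of Eq. (22) is straightforward once the coefficients are known; in the sketch below (our naming) the \(\alpha_{k}\) are assumed to have been fixed beforehand by minimizing the action of Eq. (14).

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A

def nc_gauge_potential(H_ad, dH_dlam, alphas):
    """Eq. (22): l-th order NC expansion, alphas = (alpha_1, ..., alpha_l)."""
    A = np.zeros_like(H_ad, dtype=complex)
    nested = comm(H_ad, dH_dlam)        # k = 1 term: one commutator (2k-1 = 1)
    for alpha in alphas:
        A += 1j * alpha * nested
        nested = comm(H_ad, comm(H_ad, nested))  # two more nestings for the next k
    return A
```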
## 3 Methodology
### PINN framework
Our approach employs a Physics-Informed Neural Network (PINN) with a structure comprising multiple fully connected dense layers, with the total number of layers denoted as \(K\); this count includes both the input and output layers. Each layer consists of a variable number of neurons, represented by \(N_{k}\), and is characterized by a common activation function \(\sigma_{k}\) shared by all neurons within that layer. Let \(\mathbf{W}^{(k)}\in\mathbb{R}^{N_{k}\times N_{k-1}}\) be the weight matrix connecting the \((k-1)\)-th layer to the \(k\)-th layer, and \(\mathbf{b}^{(k)}\in\mathbb{R}^{N_{k}}\) be the bias vector for the \(k\)-th layer. Consequently, the output of the \(k\)-th layer, denoted as \(\mathbf{\mathcal{U}}_{\Theta}^{(k)}\), is the activation function \(\sigma_{k}\) applied to the weighted sum of the output of the previous layer plus the biases, as given in Equation (23):
\[\mathbf{\mathcal{U}}_{\Theta}^{(k)}=\sigma_{k}\left(\mathbf{W}^{(k)}\mathbf{\mathcal{U}}_{ \Theta}^{(k-1)}+\mathbf{b}^{(k)}\right) \tag{23}\]
In this manner, contingent upon the specific problem under consideration, additional network parameters beyond weights and biases may be taken into account. Nevertheless, if only these two sets are considered, the trainable variables are those of a conventional network, denoted as \(\Theta:=\{\mathbf{W}^{(k)},\mathbf{b}^{(k)}\}_{1\leq k\leq K}\). Regarding the activation functions, particularly those employed at the output layer, they need to be tailored to the requirements of the specific physical problem at hand, since certain physical variables may be restricted to a defined range of values. An example is the scheduling function described in (20), which is constrained to the interval \([0,1]\) by definition, or the velocity of a fluid in relativistic scenarios, which is bounded above by the speed of light [57]. In the context of our specific problem, the sole independent variable is time, allowing us to exclude spatial dimensions as inputs to the neural model. Consequently, by considering (23) and the time interval \(t\in[0,T]\), we can express the ensemble of output variables of the architecture in the following manner:
\[\mathbf{\mathcal{U}}(t)\approx\mathbf{\mathcal{U}}_{\Theta}(t)=\sigma_{K}\left(\mathbf{ \mathcal{U}}_{\Theta}^{(K)}\circ\sigma_{K-1}\circ\mathbf{\mathcal{U}}_{\Theta}^{( K-1)}\circ\ldots\circ\sigma_{1}\circ\mathbf{\mathcal{U}}_{\Theta}^{(1)}\right)(t) \tag{24}\]
Equation (24) showcases the mathematical representation of the approximation proposed by the underlying neural network in our methodology. This approach aims to closely resemble the actual physical solution following the completion of the training process. In this context, the symbol \(\circ\) represents the composition operator. Recent research in the field asserts that PINNs enhance their performance by incorporating dynamic activation functions that vary with the training process and are distinct for each neuron [83]. However, our focus lies primarily on establishing the definition of physical inductive biases within the network. Consequently, each layer \(k\) will be associated with an activation function denoted as \(\sigma_{k}\), which uniformly affects the output tensor of that particular layer.
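As a concrete illustration, the following PyTorch sketch implements a network of the form of Equations (23)-(24) for our setting; the class name, layer sizes, and output packing (a sigmoid head for \(\lambda\), real coefficients \(\mathbf{\mathcal{C}}\), and separate real/imaginary parts for \(\mathbf{\mathcal{A}}_{\text{CD}}\)) are illustrative assumptions rather than the authors' exact code:

```python
import torch
import torch.nn as nn

class CDPinn(nn.Module):
    """Minimal PINN sketch for the CD problem. Input: time t of shape
    (N, 1). Outputs: the scheduling function lambda(t) in [0, 1], the
    real coefficients C(t) of the decomposition (32), and the complex
    entries of A_CD(t) assembled from real and imaginary parts."""

    def __init__(self, n_qubits=2, n_hidden=6, width=30):
        super().__init__()
        self.dim = 2 ** n_qubits                     # Hilbert-space dimension
        self.n_coeffs = 4 ** n_qubits                # Pauli-string coefficients
        n_out = 1 + self.n_coeffs + 2 * self.dim ** 2
        layers, n_in = [], 1
        for _ in range(n_hidden):
            layers += [nn.Linear(n_in, width), nn.Tanh()]
            n_in = width
        layers.append(nn.Linear(width, n_out))
        self.net = nn.Sequential(*layers)

    def forward(self, t):
        out = self.net(t)
        lam = torch.sigmoid(out[:, :1])              # lambda(t) constrained to [0, 1]
        coeffs = out[:, 1:1 + self.n_coeffs]         # C(t), real by construction
        re, im = out[:, 1 + self.n_coeffs:].chunk(2, dim=1)
        a_cd = torch.complex(re, im).reshape(-1, self.dim, self.dim)
        return lam, a_cd, coeffs
```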
### Inductive biases and optimization
Our approach involves employing a methodology based on PINNs, which allows us to incorporate strong inductive biases and a priori knowledge into the neural network. This incorporation is intended to ensure that the underlying physics governing the problem is adequately satisfied. To achieve this objective, it is essential to consider that the output of the underlying network in our methodology will consist of a set of variables denoted as
\[\mathbf{\mathcal{U}}_{\Theta}(t):=\left(\lambda,\mathbf{\mathcal{A}}_{\text{CD}},\mathbf{\mathcal{C}}\right)_{\Theta}. \tag{25}\]

(Henceforth, the subscript "\(\Theta\)" denoting the network prediction will be omitted and presumed understood, except in cases where the terminology may potentially result in misconceptions.)
Here, \(\lambda\in\mathbb{R}\) denotes the scheduling function, \(\mathbf{\mathcal{A}}_{\text{CD}}\in\mathcal{M}_{2^{N_{Q}}\times 2^{N_{Q}}}(\mathbb{C})\) represents the counterdiabatic terms of the evolution, and \(\mathbf{\mathcal{C}}\in\mathbb{R}^{4^{N_{Q}}}\) is the set of coefficients of the linear decomposition of these counterdiabatic terms over all tensors arising as Kronecker products of the identity and Pauli matrices, \(\{\mathbf{I},\sigma_{\text{X}},\sigma_{\text{Y}},\sigma_{\text{Z}}\}\), for the given number of qubits (see [84], Chapter 2.1). In general, \(\mathbf{\mathcal{A}}_{\text{CD}}\) is a complex-valued operator of size \(2^{N_{Q}}\times 2^{N_{Q}}\), with \(N_{Q}\) the number of qubits under consideration. Moreover, even where the dependency is not explicitly stated, all variables emerging from the PINN depend on the input time; hence, the network yields solutions for each of the considered inference times.
The underlying neural network will be optimized using the so-called "soft enforcement" technique, as introduced in equations (6,7,8). We will adapt this methodology to our specific scenario of Hamiltonian dynamics, where the global cost function can be decomposed into multiple terms. Specifically, we will address the initial and final time conditions for an input time interval \(t\in[t_{\text{min}},t_{\text{max}}]\) as described in Equations (26) and (27), respectively.
\[\mathcal{L}_{\mathcal{I}\mathcal{C}}:=\frac{\omega_{\mathcal{I}\mathcal{C},1} }{N_{\mathcal{I}\mathcal{C}}}\sum_{\{t_{\text{min}}\}}|\lambda(t_{\text{min}}) |^{2}+\frac{\omega_{\mathcal{I}\mathcal{C},2}}{N_{\mathcal{I}\mathcal{C}}} \sum_{\{t_{\text{min}}\}}\left|\mathbf{\mathcal{H}}(t_{\text{min}})-\mathbf{\mathcal{ H}}_{\text{HF}}\right|^{2}, \tag{26}\]
\[\mathcal{L}_{\mathcal{F}\mathcal{C}}:=\frac{\omega_{\mathcal{F}\mathcal{C},1} }{N_{\mathcal{F}\mathcal{C}}}\sum_{\{t_{\text{max}}\}}\left|\lambda(t_{\text{ max}})-1\right|^{2}+\frac{\omega_{\mathcal{F}\mathcal{C},2}}{N_{\mathcal{F} \mathcal{C}}}\sum_{\{t_{\text{max}}\}}\left|\mathbf{\mathcal{H}}(t_{\text{max}})- \mathbf{\mathcal{H}}_{\text{problem}}\right|^{2}. \tag{27}\]
In the definitions above, \((\omega_{\mathcal{I}\mathcal{C},1},\omega_{\mathcal{I}\mathcal{C},2})\) and \((\omega_{\mathcal{F}\mathcal{C},1},\omega_{\mathcal{F}\mathcal{C},2})\) are the mixture weights in the total loss function, whose values depend on the prior knowledge applied as well as on the problem being treated, while \(N_{\mathcal{I}\mathcal{C}}\) and \(N_{\mathcal{F}\mathcal{C}}\) represent the number of sample points at the initial and final instants, respectively. Regarding the
scheduling function of the problem, \(\lambda(t)\), this function delineates the progression of physical states once the counterdiabatic terms are introduced as stipulated in Equation (20). At the initial time instant we impose \(\lambda(t_{\text{min}})=0\), and at the terminal time instant we prescribe \(\lambda(t_{\text{max}})=1\), as per its formal definition [46]. These conditions correspond to the physical limitations that must inherently be satisfied within our specific scenario. At the initial moment, \(t=t_{\text{min}}\), we enforce the scheduling function to be zero while ensuring that the resulting Hamiltonian operator (20) is equivalent to the one obtained through the Hartree-Fock method (see [85] Chapter 2.2), defined as \(\boldsymbol{\mathcal{H}}(t_{\text{min}})=\boldsymbol{\mathcal{H}}_{\text{HF}}\). Additionally, by incorporating counterdiabatic terms, our intention is to accelerate the adiabatic transition and reduce the computational complexity of the underlying circuits [23, 43]. This, however, does not impede our knowledge of the final Hamiltonian operator, denoted as \(\boldsymbol{\mathcal{H}}_{\text{problem}}\), which can be computed via advanced numerical methods in the chemistry field [70]. These conditions, combined with the requirement that the scheduling function equal one at the conclusion of the evolution, collectively constitute the final conditions mandated by the methodology.
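A minimal sketch of how these endpoint conditions can be enforced in code is given below, assuming the model class sketched earlier and that \(\boldsymbol{\mathcal{H}}_{\text{HF}}\) and \(\boldsymbol{\mathcal{H}}_{\text{problem}}\) are supplied as tensors; the helper names and the default weights (taken from Equation (36) below) are illustrative:

```python
import torch

def assemble_hamiltonian(model, t, H_hf, H_problem):
    """Builds H(t) of Eq. (28) from the network outputs, batched over
    the time tensor t of shape (N, 1); dlambda/dt comes from
    automatic differentiation."""
    lam, a_cd, _ = model(t)
    dlam = torch.autograd.grad(lam.sum(), t, create_graph=True)[0]
    H_ad = (1 - lam)[..., None] * H_hf + lam[..., None] * H_problem
    return lam, H_ad.to(a_cd.dtype) + dlam[..., None].to(a_cd.dtype) * a_cd

def boundary_losses(model, H_hf, H_problem, w_ic=1e3, w_fc=1e3,
                    t_min=0.0, t_max=1.0):
    """Soft enforcement of the endpoint conditions, Eqs. (26)-(27)."""
    t0 = torch.full((1, 1), t_min, requires_grad=True)
    t1 = torch.full((1, 1), t_max, requires_grad=True)
    lam0, H0 = assemble_hamiltonian(model, t0, H_hf, H_problem)
    lam1, H1 = assemble_hamiltonian(model, t1, H_hf, H_problem)
    loss_ic = w_ic * (lam0.abs().pow(2).mean()
                      + (H0 - H_hf).abs().pow(2).mean())
    loss_fc = w_fc * ((lam1 - 1.0).abs().pow(2).mean()
                      + (H1 - H_problem).abs().pow(2).mean())
    return loss_ic, loss_fc
```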
After establishing the initial and final inductive biases explicitly, it becomes crucial to elucidate the physics governing the intermediate times within the interval. Upon acquiring all the aforementioned components, we possess the necessary elements to construct the complete Hamiltonian operator for the given problem, as delineated in Equation (20); for readability, we reiterate the expression here.
\[\boldsymbol{\mathcal{H}}(t)=\boldsymbol{\mathcal{H}}_{\text{AD}}(t)+\frac{d \lambda}{dt}\boldsymbol{\mathcal{A}}_{\text{CD}}(t),\qquad\text{with}\qquad \boldsymbol{\mathcal{H}}_{\text{AD}}(t):=(1-\lambda(t))\,\boldsymbol{\mathcal{ H}}_{\text{HF}}+\lambda(t)\boldsymbol{\mathcal{H}}_{\text{problem}}. \tag{28}\]
As is well known, Hamiltonian operators in their general form are complex-valued operators that must satisfy Hermiticity. By meeting this requirement, the operators can be appropriately interpreted as physical observables whose energy eigenvalues are purely real. Consequently, our operators must be Hermitian. Furthermore, it is imperative to impose the condition that the neural network yields the solution for the gauge potential (counterdiabatic terms) that achieves the utmost reduction of the physical action in the given scenario. This entails selecting the solution satisfying
\[\frac{\delta\mathcal{S}(\boldsymbol{\mathcal{A}}_{\text{CD}})}{\delta \boldsymbol{\mathcal{A}}_{\text{CD}}}=0. \tag{29}\]
The minimization of the physical action represents a crucial requirement for ensuring that the employed methodology yields an optimal operator \(\boldsymbol{\mathcal{A}}_{\text{CD}}(t)\) under specific conditions, encompassing local temporal evolution, robustness, availability, and experimental accessibility, among other factors. Research studies, such as [64] and related literature, demonstrate the role of the action in recovering the Euler-Lagrange Equation (15). Consequently, demanding that the neural network minimize the action is entirely equivalent to defining the physical loss term described in (30). Moreover, it is well-established that the temporal rate of change of the scheduling function, \(\frac{d\lambda}{dt}:=\dot{\lambda}\), represents the velocity at which the non-adiabatic components drive the evolution of the system in the presence of the total Hamiltonian operator. Consequently, when the derivative approaches zero, i.e., \(\dot{\lambda}=0\), the conventional adiabatic Hamiltonian is recovered. However, it is undesirable for the counterdiabatic terms to greatly surpass the adiabatic counterparts, as their purpose is to expedite the process without exerting dominant influence. Hence, the time derivative of \(\lambda\) should remain small, yet not entirely nullified. This information is communicated to the PINN through Equation (31).
\[\mathcal{L}_{\text{Least Action}}:=\frac{\omega_{\text{Action}}}{N_{\mathcal{F}}}\sum_{(t_{\text{min}},t_{\text{max}})}\left|\left[i\frac{\partial\boldsymbol{\mathcal{H}}_{\text{AD}}(t)}{\partial\lambda}-\left[\boldsymbol{\mathcal{A}}_{\text{CD}}(t),\boldsymbol{\mathcal{H}}_{\text{AD}}(t)\right],\boldsymbol{\mathcal{H}}_{\text{AD}}(t)\right]\right|^{2}. \tag{30}\]
\[\mathcal{L}_{\text{Adiabaticity}}:=\frac{\omega_{\text{Ad}}}{N_{\mathcal{F}}} \sum_{(t_{\text{min}},t_{\text{max}})}\left|\frac{d\lambda}{dt}\right|^{2}. \tag{31}\]
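Continuing the sketch above, the two residual terms can be evaluated on collocation points, with automatic differentiation supplying \(\frac{d\lambda}{dt}\); the helper names are again illustrative, and the quantity `G` below is the bracketed expression of Equation (30):

```python
import torch

def residual_losses(model, t, H_hf, H_problem, w_action=1e2, w_ad=0.5):
    """Sketch of Eqs. (30)-(31) on collocation points t of shape (N, 1)
    with requires_grad=True. G = i dH_AD/dlambda - [A_CD, H_AD] is the
    quantity whose commutator with H_AD vanishes at the stationary
    point of the action."""
    lam, a_cd, _ = model(t)
    dlam = torch.autograd.grad(lam.sum(), t, create_graph=True)[0]
    loss_ad = w_ad * dlam.pow(2).mean()                          # Eq. (31)
    # H_AD(t) = (1 - lambda) H_HF + lambda H_problem, batched over time
    H_ad = ((1 - lam)[..., None] * H_hf
            + lam[..., None] * H_problem).to(a_cd.dtype)
    dH_dlam = (H_problem - H_hf).to(a_cd.dtype).expand_as(H_ad)
    comm = lambda A, B: A @ B - B @ A
    G = 1j * dH_dlam - comm(a_cd, H_ad)
    loss_action = w_action * comm(G, H_ad).abs().pow(2).mean()   # Eq. (30)
    return loss_action, loss_ad
```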
As mentioned in Section 2.3, the set \(\{\sigma_{0},\sigma_{\text{X}},\sigma_{\text{Y}},\sigma_{\text{Z}}\}\subset\mathcal{M}_{2\times 2}(\mathbb{C})\) forms an orthogonal basis of the space of \(2\times 2\) Hermitian matrices. Consequently, it is both logical and well-founded to seek the computation of the operator \(\boldsymbol{\mathcal{A}}_{\text{CD}}\) as a composite of tensor products of the previously introduced Pauli matrices for a given system of qubits. By doing so, it is possible to construct a linear combination of tensor products of these matrices, yielding a set of
coefficients denoted as \(\mathbf{\mathcal{C}}\in\mathbb{R}^{4^{N_{Q}}}\). Each element within this set represents the relative magnitude of each term within the operator. Therefore, the decomposition expressed in Equation (32) would enable us to perform efficient simulations and facilitate a more accessible analysis of physical systems through the utilization of quantum circuit models.
\[\mathbf{\mathcal{A}}_{\text{CD}}^{\prime}(t):=\sum_{i_{1},\ldots,i_{N_{Q}}\in\{0,\text{X},\text{Y},\text{Z}\}}\mathbf{\mathcal{C}}_{i_{1},\ldots,i_{N_{Q}}}(t)\left(\sigma_{i_{1}}\otimes\sigma_{i_{2}}\otimes\ldots\otimes\sigma_{i_{N_{Q}}}\right). \tag{32}\]
In this study, we employ \(\mathbf{\mathcal{A}}_{\text{CD}}^{\prime}(t)\) to represent the non-adiabatic terms. This notation distinguishes this expansion, which takes the form of a linear combination, from the operator output directly by the PINN. To achieve this objective, it is necessary to introduce an additional term into the loss function of the neural network that acts directly on the set of coefficients \(\mathbf{\mathcal{C}}(t)\), as shown in Equation (33). Consequently, these terms are dynamically adjusted during the training process in order to construct the decomposition of the gauge operator. By this procedure, our methodology yields these scalar quantities not only at the initial and final moments but throughout the entire temporal interval, since these scalars are, in general, functions of time.
\[\mathcal{L}_{\text{Coupling}}:=\frac{\omega_{\text{Coupling}}}{N_{\mathcal{F}}}\sum_{(t_{\text{min}},t_{\text{max}})}\left|\mathbf{\mathcal{A}}_{\text{CD}}(t)-\mathbf{\mathcal{A}}_{\text{CD}}^{\prime}(t)\right|^{2}. \tag{33}\]
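A sketch of this decomposition and of the coupling loss (33) follows; it builds the full Pauli-string basis by Kronecker products and reconstructs \(\mathbf{\mathcal{A}}_{\text{CD}}^{\prime}(t)\) from the network's real coefficients (function names are illustrative):

```python
import itertools
import torch

PAULIS = {
    "I": torch.eye(2, dtype=torch.complex64),
    "X": torch.tensor([[0, 1], [1, 0]], dtype=torch.complex64),
    "Y": torch.tensor([[0, -1j], [1j, 0]], dtype=torch.complex64),
    "Z": torch.tensor([[1, 0], [0, -1]], dtype=torch.complex64),
}

def pauli_basis(n_qubits):
    """All 4**n Kronecker products of {I, X, Y, Z}, stacked into a
    (4**n, 2**n, 2**n) tensor; the ordering fixes which entry of C(t)
    multiplies which Pauli string in Eq. (32)."""
    basis = []
    for labels in itertools.product("IXYZ", repeat=n_qubits):
        op = PAULIS[labels[0]]
        for l in labels[1:]:
            op = torch.kron(op, PAULIS[l])
        basis.append(op)
    return torch.stack(basis)

def coupling_loss(a_cd, coeffs, basis, w_coupling=2.5e2):
    """Sketch of Eq. (33): squared distance between the network's A_CD(t)
    and its reconstruction A'_CD(t) from the real coefficients C(t)."""
    # A'_CD(t) = sum_k C_k(t) * P_k, batched over the time axis
    a_prime = torch.einsum("nk,kij->nij", coeffs.to(basis.dtype), basis)
    return w_coupling * (a_cd - a_prime).abs().pow(2).mean()
```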
Once all the requisite components have been specified according to Equations (30), (31), and (33), it becomes feasible to establish the loss term for our PINN, Equation (34), which is solely linked to the underlying differential equations. In this context, the vector \((\omega_{\text{Action}},\omega_{\text{Ad}},\omega_{\text{Coupling}})\) denotes the weights employed in the combination process when constructing the resultant term. Thus, by considering Equation (5) and acknowledging the pre-established mixture weights within our loss terms, the formulation of the final loss function, Equation (35), becomes straightforward. This incorporates the loss associated with the PDEs outlined in Equation (34), as well as the temporal constraints at the initial and final temporal steps, as indicated in (26) and (27). Consequently, the defined loss function serves as the physical metric that guides the optimization process, dictating the objective that the neural network within our methodology must minimize.
\[\mathcal{L}_{\mathcal{F}}:=\mathcal{L}_{\text{Least Action}}+\mathcal{L}_{\text{Adiabaticity}}+\mathcal{L}_{\text{Coupling}}. \tag{34}\]
\[\mathcal{L}:=\mathcal{L}_{\mathcal{IC}}+\mathcal{L}_{\mathcal{FC}}+\mathcal{L} _{\mathcal{F}}. \tag{35}\]
In addition, the network has not been explicitly required to satisfy the necessary Hermiticity condition of the operators, even though this could be achieved by including in Equation (34) an additional term that minimizes the difference \(\left|\mathbf{\mathcal{A}}_{\text{CD}}-\mathbf{\mathcal{A}}_{\text{CD}}^{\dagger}\right|^{2}\).
Nonetheless, such a restriction is not obligatory and would be redundant for the PINN. If the coefficients \(\mathbf{\mathcal{C}}\in\mathbb{R}^{4^{N_{Q}}}\) are defined as real, i.e., \(\mathbf{\mathcal{C}}=\mathbf{\mathcal{C}}^{*}\), then it is evident from the decomposition of \(\mathbf{\mathcal{A}}_{\text{CD}}^{\prime}\) in Equation (32) that we recover \(\mathbf{\mathcal{A}}_{\text{CD}}^{\prime}-\mathbf{\mathcal{A}}_{\text{CD}}^{\prime\dagger}=0\) naturally, without necessitating any additional requirements, since the Pauli strings are themselves Hermitian. Therefore, the physical condition expressed in Equation (33) is more than sufficient for the neural network to ensure the Hermiticity of the operator \(\mathbf{\mathcal{A}}_{\text{CD}}\), and since \(\mathbf{\mathcal{H}}_{\text{HF}},\mathbf{\mathcal{H}}_{\text{problem}}\in\mathcal{M}_{2^{N_{Q}}\times 2^{N_{Q}}}(\mathbb{R})\) [23], the Hermiticity of the complete Hamiltonian operator is also ensured.
After considering all the relevant factors, a comprehensive description of our methodology can be provided. To this end, we utilize a visual aid in the form of a diagram (Figure 1) and a step-by-step algorithm outlined in Algorithm 1. The initial step involves identifying the independent variable(s) that will influence our outcomes. In our particular case, we only have the temporal variable, denoted as \(t\), defined within the interval \([t_{\text{min}},t_{\text{max}}]\), commonly specified as \([t_{\text{min}},t_{\text{max}}]=[0,1]\). Consequently, we need to select an appropriate sampling method. Various methods are available, including uniform sampling with equidistant points, random sampling, Latin hypercube sampling [86, 87], and Sobol sampling [88]. The choice of method is somewhat arbitrary, with greater impact when considering fewer time points, as its significance diminishes as the sample size approaches infinity. In particular, Sobol sampling has yielded significant improvements in prior inquiries within the relevant literature [89]. In our investigation, we have employed this approach, which entails generating pseudo-random samples in powers of two. This technique results in a more homogeneous distribution of points within the space, with reduced overlap compared to completely random sampling, although it is important to acknowledge the inherent randomness involved; a minimal sketch of this sampling step is given below. After generating the time domain, it serves as the input to the network, which is constructed as a sequence of \(K\) fully connected dense layers, encompassing both the input and output layers. The number of layers and the number of neurons in each layer (\(N_{k}\)) are hyperparameters fixed prior to training. While these parameters can be adjusted, it is anticipated that a relatively conventional configuration, such as 6 or 7 layers with 30 to 40 neurons each, will suffice for characterizing the counterdiabatic driving problem. Nevertheless, the complexity of individual problems may necessitate varying numbers of layers and/or neurons.
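The snippet below uses PyTorch's quasirandom module; the choice \(N_{\mathcal{F}}=2^{11}\) anticipates the trade-off discussed in Section 4.4, and the scrambling seed is an illustrative assumption:

```python
import torch

# Sobol collocation points in t ∈ [t_min, t_max], drawn in powers of two
sobol = torch.quasirandom.SobolEngine(dimension=1, scramble=True, seed=0)
t_min, t_max = 0.0, 1.0
t_f = t_min + (t_max - t_min) * sobol.draw(2 ** 11)   # shape (N_F, 1)
t_f.requires_grad_(True)                              # needed later for dλ/dt
```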
With the establishment of the input domain and the construction of the network, we are now able to extract the output and interpret it as the tensor of physical variables \(\mathcal{U}_{\Theta}(t)=\left(\lambda,\boldsymbol{\mathcal{A}}_{\text{CD}},\boldsymbol{\mathcal{C}}\right)_{\Theta}\), where \(\lambda\in\mathbb{R}\),
Figure 1: The methodology follows a general procedure where time, represented by the variable \(t\), is the only independent physical variable considered, incorporated into our network as a tensor. The output of the methodology comprises three elements: the scalar variable of the scheduling function denoted as \(\lambda(t)\); the operator representing the counterdiabatic terms of the process, expressed as \(\boldsymbol{\mathcal{A}}_{\text{CD}}(t)\), with each of its components treated as an independent output; and the coefficients \(\boldsymbol{\mathcal{C}}(t)\) denote the general decomposition coefficients of the operator \(\boldsymbol{\mathcal{A}}_{\text{CD}}\) expressed as a linear combination of tensors resulting from all possible Kronecker products formed between the set of operators comprising the identity matrix and the Pauli operators. Subsequently, the computations required to construct the total Hamiltonian, denoted as \(\boldsymbol{\mathcal{H}}(t)\), are carried out. Various physical constraints are imposed during these computations, including inductive biases. These constraints adhere to the principle of minimum action, satisfy the initial and final conditions, and ensure the hermiticity of the physical operators among other specifications. At each step of the training process, the total loss is calculated by aggregating the contributions from all imposed conditions, denoted as \(\mathcal{L}\). This calculation continues until a specific training period is reached or until a predetermined error threshold is achieved.
\(\mathbf{\mathcal{A}}_{\text{CD}}\in\mathcal{M}_{2^{N_{Q}}\times 2^{N_{Q}}}(\mathbb{C})\), and \(\mathbf{\mathcal{C}}\in\mathbb{R}^{4^{N_{Q}}}\). Subsequently, the derivative \(\frac{d\lambda}{dt}\) can be straightforwardly computed using automatic differentiation, yielding the Hamiltonian operator \(\mathbf{\mathcal{H}}(t)\) as described in (20). Furthermore, the initial and final temporal constraints, defined in Equations (26) and (27), are required. These constraints are accompanied by the loss associated with the underlying PDEs stated in Equation (34), which incorporates the physical constraints outlined in Equations (30), (31), and (33). These physical restrictions represent the fulfillment of the principle of least action, agreement with adiabaticity, and the decomposition of the operator \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\), as previously explained. By combining all these components, the final loss metric \(\mathcal{L}\) given in Equation (35) is computed, and the set of trainable variables \(\Theta\) is updated via backpropagation, minimizing the loss through an optimization process.
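Putting the pieces together, a minimal end-to-end training loop reusing the sketches above could look as follows; the structure mirrors Figure 1, while the helper names and the fixed collocation set are illustrative assumptions:

```python
import torch

def train(model, H_hf, H_problem, basis, epochs=500_000, lr=1e-5, n_f=2**11):
    """Sketch of the optimization loop: Sobol collocation points, the
    loss terms (26), (27), (30), (31), (33) combined as in Eqs. (34)-(35),
    and Adam updates with a fixed learning rate (cf. Section 4)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sobol = torch.quasirandom.SobolEngine(dimension=1, scramble=True, seed=0)
    t_f = sobol.draw(n_f)                          # fixed interior sample
    for epoch in range(epochs):
        t = t_f.clone().requires_grad_(True)
        loss_action, loss_ad = residual_losses(model, t, H_hf, H_problem)
        _, a_cd, coeffs = model(t)
        loss_f = loss_action + loss_ad + coupling_loss(a_cd, coeffs, basis)  # Eq. (34)
        loss_ic, loss_fc = boundary_losses(model, H_hf, H_problem)
        loss = loss_ic + loss_fc + loss_f          # Eq. (35)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```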
## 4 Numerical experiments and results
In this section, various numerical results obtained through our DL-based methodology, as outlined in Section 3, are presented. This approach has been applied to the \(\mathrm{H}_{2}\) molecule problem within the STO-3G basis, considering different bond distances between the particles and utilizing a 2-qubit representation. The initial and final Hamiltonian operators used for the evolution, as denoted by the PDE in (20), are listed in Table 1 and were configured following the guidelines in [70]. Since our numerical example involves a 2-qubit representation, these operators possess a matrix size of \(4\times 4\). Furthermore, these operators are real-valued by definition, i.e., \(\mathbf{\mathcal{H}}_{\text{HF}},\mathbf{\mathcal{H}}_{\text{problem}}\in\mathcal{M}_{4\times 4}\left(\mathbb{R}\right)\). In a general context, and unless explicitly stated otherwise, all training runs comprised a total of 500,000 epochs (iterations) updating the \(\Theta\) parameters of the PINN. The training procedure utilized the _PyTorch_ module [91], employing the _Adam_ optimizer [92] with a fixed learning rate of \(10^{-5}\). This learning rate was chosen to be sufficiently small to ensure convergence without excessive numerical noise. Throughout all examples, Sobol sampling [88] was used, as stated in Section 3. The number of sampling points at the initial and final time instances of the evolution, denoted as \(N_{\mathcal{I}\mathcal{C}}\) and \(N_{\mathcal{F}\mathcal{C}}\) respectively, remained constant at \(2^{13}\), whereas the number of points in the inner part of the interval, linked to the underlying PDE and represented as \(N_{\mathcal{F}}\), was varied and examined in the experiments. Moreover, unless otherwise specified, the neural architecture used for all examples consists of six hidden layers with 30 neurons each.
To illustrate initial general results, Figure 2 presents the physical loss functions employed for optimizing the neural network, as defined in Section 3. It is essential to emphasize that each training process would, in general, exhibit distinctive loss curves, influenced by factors such as the number of qubits used for the representation and the specific molecular system being studied, including the considered bond distances between atoms, among other features. However, the figure showcases the cost function for the specific scenario under investigation: the \(\mathrm{H}_{2}\) molecule, represented by a 2-qubit system in the STO-3G basis, with specifications outlined in Section 2.3. The left subfigure displays the three constituents comprising the total loss \(\mathcal{L}\) (35), namely \(\mathcal{L}_{\mathcal{F}}\), \(\mathcal{L}_{\mathcal{IC}}\), and \(\mathcal{L}_{\mathcal{FC}}\), corresponding to the
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{\(\mathbf{\mathcal{H}}_{\text{HF}}\)} & \multicolumn{4}{c}{\(\mathbf{\mathcal{H}}_{\text{problem}}\)} \\ \cline{2-9} \multirow{4}{*}{\(d=1.0\) Å} & -0.5490812 & 0 & 0 & 0 & -0.5490812 & 0 & 0 & 0.19679058 \\ & 0 & -1.0661087 & 0 & 0 & 0 & -1.0661087 & 0.19679058 & 0 \\ & 0 & 0 & 0.00400595 & 0 & 0 & 0.19679058 & 0.00400595 & 0 \\ & 0 & 0 & 0 & -0.5490812 & 0.19679058 & 0 & 0 & -0.5490812 \\ \hline \multirow{4}{*}{\(d=1.5\) Å} & -0.6610488 & 0 & 0 & 0 & -0.6610488 & 0 & 0 & 0.22953594 \\ & 0 & -0.91083753 & 0 & 0 & 0 & -0.91083753 & 0.22953594 & 0 \\ & 0 & 0 & -0.3944683 & 0 & 0 & 0.22953594 & -0.3944683 & 0 \\ & 0 & 0 & 0 & -0.6610488 & 0.22953594 & 0 & 0 & -0.6610488 \\ \hline \multirow{4}{*}{\(d=2.0\) Å} & -0.66539884 & 0 & 0 & 0 & -0.66539884 & 0 & 0 & 0.25913846 \\ & 0 & -0.7837927 & 0 & 0 & 0 & -0.7837927 & 0.25913846 & 0 \\ & 0 & 0 & -0.5412806 & 0 & 0 & 0.25913846 & -0.5412806 & 0 \\ & 0 & 0 & 0 & -0.66539884 & 0.25913846 & 0 & 0 & -0.66539884 \\ \hline \multirow{4}{*}{\(d=2.5\) Å} & -0.649429 & 0 & 0 & 0 & -0.649429 & 0 & 0 & 0.28221005 \\ & 0 & -0.7029436 & 0 & 0 & 0 & -0.7029436 & 0.28221005 & 0 \\ & 0 & 0 & -0.5944048 & 0 & 0 & 0.28221005 & -0.5944048 & 0 \\ & 0 & 0 & 0 & -0.649429 & 0.28221005 & 0 & 0 & -0.649429 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Numerical configurations for the initial Hamiltonian operator, \(\mathbf{\mathcal{H}}_{\text{HF}}\), and the final Hamiltonian operator, \(\mathbf{\mathcal{H}}_{\text{problem}}\), obtained using the quantum computing library _Qiskit_ [70]. These configurations correspond to the evolution of the molecule \(\mathrm{H}_{2}\) in the STO-3G basis and are represented by their respective matrix descriptions given in Equations (19) and (17), respectively.
residual conditions of the PDE, the initial conditions over time, and the final conditions, respectively. The latter two diminish rapidly, converging to magnitudes on the order of \(10^{-4}\) or even lower; this is especially evident for \(\mathcal{L}_{\mathcal{FC}}\), where the error is so minute that any variation induces substantial noise on a logarithmic scale. Conversely, the residual loss \(\mathcal{L}_{\mathcal{F}}\) (34) encompasses three internal terms, generating discernible tension among them, which impedes its reduction to such small orders of magnitude. This becomes more evident in the right subfigure, which provides a detailed breakdown of these terms. The loss term responsible for minimizing the principle of least action or, equivalently, the Euler-Lagrange equations (30), attains the lowest value, thus ensuring a highly satisfactory minimization of the action and, consequently, the attainment of a gauge potential \(\boldsymbol{\mathcal{A}}_{\text{CD}}(t)\) that adheres closely to its prescribed guidelines. Additionally, the graphical representation includes a loss term denoted as \(\mathcal{L}_{\text{Hermiticity}}\), which assesses the quadratic discrepancy between \(\boldsymbol{\mathcal{A}}_{\text{CD}}(t)\) and its adjoint operator, defined analogously to the other terms. However, this factor is not incorporated in the total loss; instead, it serves to confirm that minimizing the loss \(\mathcal{L}_{\text{Coupling}}\) (33) through the decomposition of the gauge potential directly enforces its Hermiticity, given the strictly real-valued nature of the coefficients \(\boldsymbol{\mathcal{C}}(t)\) involved in the decomposition (32). Thus, for all the examples shown in this article, the mixture weights for the individual loss terms have been preset as:
\[\left(\omega_{\mathcal{IC}},\omega_{\mathcal{FC}},\omega_{\text{Action}}, \omega_{\text{Ad}},\omega_{\text{Coupling}}\right)=\left(10^{3},10^{3},10^{2}, 5\times 10^{-1},2.5\times 10^{2}\right), \tag{36}\]
where \(\omega_{\mathcal{IC},1}=\omega_{\mathcal{IC},2}=\omega_{\mathcal{IC}}\) and \(\omega_{\mathcal{FC},1}=\omega_{\mathcal{FC},2}=\omega_{\mathcal{FC}}\) in Equations (26) and (27). These weights play a crucial role in determining the relative emphasis that the neural network places on each term, essentially representing their respective priorities. For our specific objectives, ensuring prompt and robust satisfaction of the initial and final conditions in time is of crucial importance. Consequently, we set \(\omega_{\mathcal{IC}}\) and \(\omega_{\mathcal{FC}}\) to significantly larger values than \(\omega_{\text{Action}}\), \(\omega_{\text{Ad}}\), and \(\omega_{\text{Coupling}}\). Following the same reasoning, the minimization of the action is also essential in our methodology, as it leads us to the most appropriate operator \(\boldsymbol{\mathcal{A}}_{\text{CD}}(t)\) for the system [64], which we express through its linear combination (32). Consequently, these two weights are considerably higher, yet an order of magnitude lower than the first two. In particular, \(\omega_{\text{Coupling}}\) is set slightly higher, as this condition inherently encompasses the constraint on the Hermiticity of the operator. Lastly, the condition of adiabaticity is assigned the lowest weight, reflecting our intent to recover the limit of the adiabatic theory without permitting it to dominate the overall metric.
Continuing our analysis of the right-hand subfigure, we focus on the curve corresponding to the loss for the recovery of the adiabatic theory, \(\mathcal{L}_{\text{Adiabaticity}}\) (31), represented as a solid black curve. This loss starts from a value close to zero at the onset of the training process, rises to approximately \(10^{2}\), then descends to around \(10^{0}\) and eventually maintains a constant order of magnitude for the remainder of the evolution. The physical reason behind this phenomenon is clear: while the theory calls for a small adiabatic speed, \(\frac{d\lambda}{dt}\), the PINN does not find it beneficial to reduce it below the values observed in our results. This is primarily attributed to the presence of multiple
Figure 2: Analysis of the evolution of the loss function during the training process. On the left we illustrate the dynamic changes of the components contributing to the total loss, \(\mathcal{L}\), as defined in Equation (35). On the right side of the graph, each individual constituent of the loss \(\mathcal{L}_{\mathcal{F}}\) is presented, corresponding to the different physical aspects under consideration. It is important to note that the loss term \(\mathcal{L}_{\text{Hermiticity}}\) is included in the plot although it is not part of the optimized loss and remains unused throughout the training. This term quantifies the discrepancy between \(\mathbf{\mathcal{A}}_{\text{CD}}\) and its adjoint operator and is provided solely for visualization purposes, tracking its reduction relative to \(\mathcal{L}_{\text{Coupling}}\) due to their mathematical equivalence.
terms within the loss function, not solely related to adiabaticity recovery, thereby creating inherent tension in defining the physical problem as a whole. Consequently, the solution predicted by the methodology does not significantly benefit from further reductions in the adiabatic speed. The fast saturation of adiabatic recovery is the main cause of the considerably elevated value for \(\mathcal{L}_{\mathcal{F}}\) observed in the left subfigure, underscoring the importance of isolating and analyzing each term separately. Thus, a zoom has been applied to \(\mathcal{L}_{\text{Adiabaticity}}\) between 30,000 and 50,000 iterations, a range during which the \(\lambda\) function transitions from a sigmoidal to a linear behavior. Consequently, it assumes a temporal derivative of 1, i.e., \(\frac{d\lambda}{dt}=1\). A more comprehensive discussion on this topic will be presented in the following section. It is important to note that unless otherwise specified, all the results shown in this section have been carried out for the system formed by 2 qubits representing the \(\mathrm{H}_{2}\) molecule in the STO-3G basis, following the guidelines of Section 2.3.
### Scheduling function, \(\lambda(t)\)
The output of our methodology is threefold. Firstly, it enables the retrieval of the scheduling function \(\lambda\) as a time-dependent variable. Secondly, it facilitates the extraction of the matrix components \((i,j)\) of the gauge potential \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) as functions of time, and, thirdly, it yields the components \(\mathbf{\mathcal{C}}(t)\) of its decomposition through time. This is made possible by a cost metric defined on the underlying PDE comprising three distinct terms, as stated in Equation (34). Each of these terms compels the neural network to update one of the outputs while adhering to the imposed physical constraints. In particular, the term \(\mathcal{L}_{\text{Adiabaticity}}\) plays a crucial role in enforcing the adherence of the function \(\lambda(t)\) to the physical evolution, thereby recovering, in the most accurate approximation, the adiabatic theory that has been empirically demonstrated to hold true. Among all the terms taken into account within the total cost function, the neural network exhibits the lowest attention towards fulfilling the requirement of recovering the fully adiabatic theory, \(\mathcal{L}_{\text{Adiabaticity}}\). As illustrated in Figure 2, our PINN ceases to actively optimize this particular term after approximately 50,000 iterations. Consequently, the scheduling function \(\lambda(t)\) converges to an optimal solution wherein its time derivative, representing the adiabatic velocity, maintains an approximately constant value of 1 throughout the temporal evolution. As a consequence, the optimal form of the \(\lambda\) function predicted for the system is the one that satisfies \(\lambda(t)=t\) for the evolution governed by the counterdiabatic differential Equation (20).
This phenomenon is exemplified in Figure 3, where we show the values of \(\lambda(t)\) and its temporal derivative in the left and right subfigures, respectively, for different training steps, in both cases concerning the 2-qubit system representing the \(\mathrm{H}_{2}\) molecule. As observed, even after 15,000 and 25,000 epochs, the function maintains a sigmoidal shape, a widely employed representation in the existing literature [23, 46]. This sigmoidal form is considered appropriate from a theoretical perspective because its derivative takes the form of a "bell curve", allowing the counterdiabatic terms \(\mathbf{\mathcal{A}}_{\text{CD}}\) to take larger values at intermediate times during the evolution while being effectively switched off at the initial and final instants. It should be noted, however, that this sigmoidal shape for \(\lambda\) emerges predominantly during the early phases of training, thereby helping to drive the convergence of the methodology towards a more optimal solution. From a theoretical point of view, our ultimate goal is to recover the fully adiabatic theory while incorporating the non-zero presence of counterdiabatic terms. Consequently, our neural network converges towards a function \(\lambda\) that precisely matches the temporal variable \(t\). This outcome signifies the restoration of the original formulation of counterdiabatic driving, as explained in [42], thereby undoing the temporal parameterization of physical operators through a specific set of parameters \(\mathbf{\Lambda}(t)\), which, in our case, corresponds to a single scalar parameterization.
Through this procedure, our objective is to begin with a differential equation containing non-zero counterdiabatic terms and then strive to recover the adiabatic theory in the limiting case. By doing so, we can obtain all the necessary results in accordance with the theory of adiabaticity. In Figure 4, we present the profiles of \(\lambda(t)\) and its time derivative for the counterdiabatic (CD) protocol from Equation (20) on the left, while on the right subfigure, we depict the same results but without considering the presence of counterdiabatic terms, i.e., working directly with the adiabatic theory (13). It is evident that for both cases we obtain \(\lambda(t)=t\), except for some numerical noise at the edges of the time interval, which is directly related to the nature of the automatic differentiation process [60, 93, 94]. Notably, during the initial stages of training, the neural network capitalizes on the sigmoid-shaped \(\lambda(t)\), while simultaneously adjusting other physical conditions. This aids the network in achieving a more optimal solution in recovering the adiabatic theory.
### Temporal evolution of the \(\boldsymbol{\mathcal{H}}(t)\) operator and the energy levels
From the three outputs obtained from the network (see diagram in Figure 1), all the necessary information can be derived. As an initial step, considering the potential gauge \(\boldsymbol{\mathcal{A}}_{\text{CD}}(t)\) that minimizes the physical action of the system, the total Hamiltonian operator \(\boldsymbol{\mathcal{H}}(t)\) can be computed. The time evolution of all its components is illustrated in Figure 5. Notably, \(\boldsymbol{\mathcal{H}}\in\mathcal{M}_{2^{N_{Q}}\times 2^{N_{Q}}}\left(\mathbb{C}\right)\), where \(N_{Q}\) represents the number of qubits, implying that for a 2-particle system, both \(\boldsymbol{\mathcal{H}}(t)\) and the remaining operators will have a matrix size of \(4\times 4\). In both depictions, the inherent hermiticity of the observable is evident; the real part of the components exhibits symmetry with respect to the diagonal, while the
Figure 4: Evolution over time of the _scheduling function_ \(\lambda(t)\) (in solid black) along with its derivative (in dashed blue) predicted by our methodology for the \(\mathrm{H}_{2}\) molecule in the STO-3G basis set using the CD protocol on the left. On the right, the same is depicted for a fully adiabatic evolution according to expression (13).
Figure 3: Analysis of the scheduling function \(\lambda\) and its temporal derivative for distinct iterations (epochs) of the training process. Observations reveal that during the initial stages of neural optimization, \(\lambda\) exhibits characteristics resembling a sigmoidal function. However, as the training advances, it converges towards a linear behavior \(\left(\frac{d\lambda}{dt}\approx 1\right)\).
imaginary part displays complete antisymmetry, leading to diagonal elements being as close to zero as possible (around the order of \(10^{-3}\sim 10^{-4}\)). This condition can be further reinforced by increasing its relative weight, \(\omega_{\text{Coupling}}\). In the mentioned figure, the real and imaginary parts of distinct components of the operator are depicted within the same graph, distinguished by black and blue colors, respectively, and with individual scales. By considering the fulfillment of hermiticity for \(\mathbf{\mathcal{H}}(t)\), we can now extract its instantaneous eigenvalues, thereby obtaining information about the energy levels of the 2-qubit physical system representing the \(\mathrm{H}_{2}\) molecule with which we are currently conducting our investigation.
The operator \(\mathbf{\mathcal{H}}(t)\), as defined in Equation (20), is specifically designed to yield the eigenstates \(|n(t)\rangle\) of \(\mathbf{\mathcal{H}}_{\text{AD}}(t)\) exactly over time. It ensures that no transitions between energy levels, \(E_{n}(t)\), are possible [42]. This property holds for all eigenstates of \(\mathbf{\mathcal{H}}_{\text{AD}}(t)\), allowing us to interpret the set of states \(|n(t)\rangle\) as "moving eigenstates" of the total operator \(\mathbf{\mathcal{H}}(t)\). Consequently, we can extract the energy levels corresponding to \(\mathbf{\mathcal{H}}(t)\) throughout the entire time evolution and compare them to the energy levels obtained under the assumption of a completely adiabatic transition, i.e., considering \(\frac{d\lambda}{dt}=0\). Figures 6 and 7 depict the real and imaginary components, respectively, corresponding to two distinct scenarios considered. The first one employs the CD protocol and is shown in the top row, while the second scenario involves
Figure 5: The evolution of the real and imaginary components of the Hamiltonian operator \(\mathbf{\mathcal{H}}(t)\) for the \(\mathrm{H}_{2}\) molecule is examined using a bond distance of \(d=1.0\) Å. Black and blue colors have been used for real and imaginary parts, respectively, using dashed lines for the latter and a different scale for each one. The findings reveal substantial fluctuations in the values across various time scales. Notably, the natural symmetry and antisymmetry for the real and imaginary components, respectively, arise due to the hermiticity of the operator.
a totally adiabatic transition and is displayed in the bottom row. These investigations were conducted for the \(\mathrm{H}_{2}\) molecule. To broaden the scope of our study, we examined different bond distance values between particles, specifically \(d\in\{1.0,1.5,2.0,2.5\}\) Å. The numerical values of the corresponding initial (\(\mathbf{\mathcal{H}}_{\mathrm{HF}}\)) and final (\(\mathbf{\mathcal{H}}_{\mathrm{problem}}\)) Hamiltonian operators are described in Table 1. Notably, Figure 7 reveals that the imaginary part of the eigenvalues (energies) obtained is on the order of \(10^{-3}\), indicating their proximity to zero. As such, these values have negligible influence on observable analyses. However, it is essential to recognize that this outcome directly arises from the Hermiticity of the physical observable. Furthermore, by adjusting the weight \(\omega_{\text{Coupling}}\), it is possible to further fortify the enforcement of this property. Moreover, in view of the formulation of the Hamiltonian operator under the entirely adiabatic scenario, \(\mathbf{\mathcal{H}}_{\mathrm{AD}}\) (13), and the consideration of the initial and final operators as detailed in Table 1, it is important to note that the scalar nature of the function \(\lambda(t)\) ensures the absence of complex numbers. As a result, the imaginary component of the energy levels in the completely adiabatic case is strictly zero, as evidenced in the bottom row of Figure 7.
Regarding the real component of the energies, which is of primary interest, it is observed that in the case of a fully adiabatic transition these energies demonstrate a nearly linear temporal evolution, with diminishing separation as the particle bonding increases. The close proximity of energy levels leads to an unfavorable outcome wherein transitions between them become more likely for the system. This challenge is addressed by the CD protocol, wherein energy levels tend to be notably more separated throughout the entire evolution domain. This phenomenon is most noticeable for the energy ground state, \(E_{0}\), and the first excited level, \(E_{1}\). Notably, when the bond distance (\(d\)) is set at \(2.0\) Å and \(2.5\) Å, it is evident that these two energy levels remain substantially distant, especially at the initial stages of the evolution of the system. This behavior is highly desirable from an experimental perspective, as it minimizes the probability of transitions between energy levels, especially between the ground and the first excited state, given that the system is initially prepared in the \(E_{0}\) level.
Figure 6: Temporal evolution of the real component of the instantaneous energy levels, namely the eigenvalues \(E_{n}(t)\), describing the molecule \(\mathrm{H}_{2}\) within a system of 2 qubits utilizing the STO-3G basis. These computations are conducted for diverse values of the interparticle bond distance, \(d\). A comparative analysis of the energy levels is presented, showing the results obtained from the CD protocol (20) in the top row, juxtaposed against the levels obtained from the same methodology but with a fully adiabatic transition (13) in the bottom row. It is noteworthy that the energy levels demonstrate a tendency to exhibit greater separation under the CD protocol, a phenomenon that becomes particularly pronounced at \(d=2.0\) Å and \(d=2.5\) Å.
### \(\mathbf{\mathcal{A}_{\text{CD}}}(t)\) operator and its decomposition
Our methodology enables us to directly obtain the components of the gauge potential, denoted as \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\). Consequently, we are capable of visualizing the temporal evolution of both its real and imaginary components. Analogous to the presentation of \(\mathbf{\mathcal{H}}(t)\) in Figure 5 above, we display the temporal evolution of the corresponding operator \(\mathbf{\mathcal{A}}_{\text{CD}}\) in Figure 8 of this section. We differentiate their respective real and imaginary components on two distinct scales. Observing the plot, it becomes evident that, as befits a physical observable from an experimental standpoint, the real part of the operator, \(\text{Re}\left(\mathbf{\mathcal{A}}_{\text{CD}}\right)\), exhibits complete symmetry, while its imaginary part, \(\text{Im}\left(\mathbf{\mathcal{A}}_{\text{CD}}\right)\), is fully antisymmetric. This property comes directly from the natural Hermiticity of the operator, which, in turn, is imposed indirectly by the condition \(\mathcal{L}_{\text{Coupling}}\) (33), together with \(\mathbf{\mathcal{C}}(t)\in\mathbb{R}^{4^{N_{Q}}}\).
While exploring the components of \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) holds theoretical significance, our primary focus from an experimental point of view lies in obtaining the set of coefficients \(\mathbf{\mathcal{C}}(t)\) over time. These coefficients play a crucial role, as they enable the gauge potential to be expressed as a linear combination over all possible combinations and interactions between the qubits of the system, thereby allowing a more direct implementation on a real quantum circuit [23, 46]. The theoretical formulation of this decomposition is represented by Equation (32), wherein the potential is expressed using the required Kronecker products. This formulation takes into account all possible combinations for the 2-qubit system under current investigation. Given the relatively small size of our example system, it remains feasible to consider all possible combinations of the Pauli matrices, as the number of these combinations scales as \(4^{N_{Q}}\), with \(N_{Q}\) being the number of qubits. Nevertheless, in certain experimental scenarios it may be unnecessary to explore all combinations and interactions; in such cases, specific specializations and additional requirements can be applied, guided by the methodology we present.
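Conversely, given any operator, the coefficients of Equation (32) can be recovered through the trace inner product, exploiting the orthogonality of the Pauli strings; the sketch below (illustrative, reusing the `pauli_basis` helper from earlier) performs this extraction and can serve as a consistency check on the network outputs:

```python
import torch

def pauli_coefficients(a_cd, basis):
    """Recovers C_k(t) of Eq. (32) via C_k = Tr[P_k A] / 2**N_Q, using
    the orthogonality Tr[P_k P_l] = 2**N_Q * delta_kl of Pauli strings.
    `a_cd` has shape (N, d, d) over time, `basis` shape (4**N_Q, d, d).
    For a Hermitian A_CD the imaginary parts should vanish."""
    dim = basis.shape[-1]
    # batched trace of P_k @ A for every time step n and string k
    return torch.einsum("kij,nji->nk", basis, a_cd.to(basis.dtype)) / dim
```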
In Figure 9, we present the temporal evolution of the coefficients derived from the decomposition for the \(\mathrm{H}_{2}\) system in the STO-3G basis, as a function of the bond distance (\(d\)) between the particles. The upper row illustrates the evolution itself, while the lower row displays a bar chart presenting the specific coefficients arranged in descending order based on their average values throughout the observed time interval.
This visualization enables us to identify the most significant contributions in terms of the absolute value when implementing this system in a real quantum circuit. Notably, the two coefficients that exhibit the highest values are denoted as \(\mathcal{C}_{\text{XY}}\) and its symmetric counterpart \(\mathcal{C}_{\text{YX}}\). These findings align with previous literature that employed the NC methodology [23, 46]. Our approach naturally reveals these prominent contributions, facilitating an explicit
Figure 7: Time-dependent variation of the imaginary component of the eigenvalues \(E_{n}(t)\) investigated for the molecular system \(\mathrm{H}_{2}\), represented by a 2-qubit configuration in the STO-3G basis. The computational analysis encompasses two scenarios: one employing the CD protocol in the top row, and the other considering a purely adiabatic transition in the bottom row. In the former case, the imaginary components exhibit magnitudes on the order of \(10^{-3}\), whereas in the latter case, these are precisely zero, as dictated by the definition of the underlying PDE (13).
understanding of their respective orders of magnitude. Additionally, there are less prominent contributions, such as \(\mathcal{C}_{\text{ZZ}}\), \(\mathcal{C}_{\text{XX}}\), and \(\mathcal{C}_{\text{II}}\), which warrant consideration as well.
The determination of these coefficients is universally applicable, i.e., it emerges as an outcome of implementing our methodology for any system, irrespective of the number of qubits considered. Nevertheless, as commented in the preceding section, certain instances may arise in experimental scenarios where it becomes impractical to account for all possible contributions in the decomposition of \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\). In such circumstances, it would be of interest to adapt the method and restrict the output \(\mathbf{\mathcal{C}}(t)\) of the neural network to a subset of the entire set, which becomes particularly relevant when dealing with an increasing number of qubits. Consequently, a trade-off between purely theoretical outcomes, exemplified herein, and those of particular experimental significance may always exist.
Figure 8: Time evolution of the real and imaginary components of the gauge potential \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) for the \(\text{H}_{2}\) molecule using the STO-3G basis and a bond distance of \(d=1.0\) Å, represented respectively in black and blue colors using two different scales. It is observed that the values exhibit variations across different orders of magnitude. Notably, the natural symmetry and antisymmetry of these components over time arise due to the Hermiticity of the operator.
### Scalability
So far, we have conducted a study on a 2-qubit system representing the \(\mathrm{H}_{2}\) molecule in the STO-3G basis. However, it is straightforward and easily feasible to modify the base or adjust the number of qubits used in our methodology. This process primarily involves examining the matrix dimensions of the respective network outputs. Specifically, the matrices \(\mathcal{H},\mathbf{\mathcal{A}}_{\mathrm{CD}},\mathbf{\mathcal{H}}_{\mathrm{HF}},\mathbf{ \mathcal{H}}_{\mathrm{problem}}\) are elements of the matrix space \(\mathcal{M}_{\mathrm{2}^{N_{Q}}\times\mathrm{2}^{N_{Q}}}(\mathbb{C})\), while \(\mathbf{\mathcal{C}}(t)\) belongs to \(\mathbb{R}^{4^{N_{Q}}}\). Consequently, the number of components of the variables obtained may increase with the number of qubits, but it is important to note that the neural architecture for all calculations has consistently comprised six internal layers, each containing 30 neurons. This architectural choice is widely documented in the literature and has resulted in state-of-the-art outcomes across various domains of research [56, 57, 95].
In general, the scale and arrangement of the neural architecture of the PINN will depend on the specific problem being addressed, i.e., it is more or less directly related to the difficulty presented by the underlying PDEs. In our case, we are dealing with a single differential equation, which essentially defines \(\mathbf{\mathcal{H}}(t)\) in Equation (20) and involves only one derivative. Consequently, our chosen neural network architecture efficiently addresses the CD protocol problem, as increasing the number of trainable weights does not usually translate into better results; such an increase could even negatively impact computation time and model convergence during the backpropagation process [96]. Apart from the \(\mathrm{H}_{2}\) molecule, others can also be considered in the STO-3G basis, such as lithium hydride (\(\mathrm{LiH}\)), which can be represented using 4 qubits. The initial and final Hamiltonian operators of the process can again be computed using [70].
Discussing scalability within these methodologies holds substantial importance, as it elucidates the extent to which an approach is experimentally applicable. Undoubtedly, our DL-based methodology provides a wealth of information, as we have shown throughout the entire paper. However, addressing the scalability of the method requires examining two key factors: the number of qubits and the quantity of points encompassed within the time interval (\(N_{\mathcal{F}}\)), in conjunction with the graphics processing unit (GPU) or hardware employed as a whole. All computations employed an NVIDIA A16 card with a memory capacity of 16 GB. Consequently, we present, in Figure 10 (top row), a comprehensive analysis of the final physical loss (or final total residual) at the conclusion of network training, as a function of the number of points within the interval \((t_{\text{min}},t_{\text{max}})\), denoted as \(N_{\mathcal{F}}\). This analysis encompasses diverse bond distances for the \(\mathrm{H}_{2}\) molecule (left) and a single value of \(d\) for the \(\mathrm{LiH}\) molecule (right), solely for the purpose of facilitating comparisons.
Figure 9: In the upper row, we present the temporal evolutions of the coefficients \(\mathbf{\mathcal{C}}(t)\) resulting from the decomposition of the operator \(\mathbf{\mathcal{A}}_{\mathrm{CD}}(t)\) (32) applied to the \(\mathrm{H}_{2}\) molecule utilizing a 2-qubit configuration in the STO-3G basis. In the lower row, a bar chart illustrates the average values of each coefficient, arranged in descending order of magnitude. It is evident that both the coefficient \(\mathcal{C}_{\text{XY}}\) and its symmetric counterpart \(\mathcal{C}_{\text{YX}}\) exert the most substantial influence throughout the entire process, followed by \(\mathcal{C}_{\text{II}}\), \(\mathcal{C}_{\text{ZZ}}\) and \(\mathcal{C}_{\text{XX}}\).
The physical loss depicted in the top row of the figure has been normalized with respect to the minimum value observed for that magnitude across all training sessions considered in this section, encompassing both the \(\mathrm{H}_{2}\) and \(\mathrm{LiH}\) simulations. This normalization, denoted as \(\mathcal{L}_{\mathrm{min}}\), allows us to represent the loss in a manner independent of specific training instances. An analysis of the results reveals that the general loss for the \(\mathrm{LiH}\) case is marginally higher, although the values are within the same order of magnitude as in the \(\mathrm{H}_{2}\) case. It is worth noting that both simulations employed an identical neural architecture comprising 6 internal layers, each containing 30 neurons. Furthermore, a common training regimen of 500,000 iterations (epochs) was applied to all cases; a longer training duration would therefore likely result in reduced final physical errors. The consistency in architecture and training duration enables us to draw meaningful comparisons between both simulations and to fairly evaluate their respective final performances.
On the other hand, to effectively quantify the computational time involved in the various training sessions and facilitate a meaningful comparison, it is imperative to consider external factors that could impact the simulations. These include the specific GPU utilized and the available free memory in the CPU (as some calculations are delegated to the CPU), among others. In the lower section of the figure, we present a comparative analysis between the two molecules concerning the time consumed (\(\Delta T\)), which is also normalized with respect to the minimum value obtained for this metric, denoted as \(T_{\mathrm{min}}\). The results demonstrate that increasing the number of data points \(N_{\mathcal{F}}\) from \(2^{7}\) to \(2^{14}\) multiplies the compute time consumed by a factor of approximately 3.5 for the \(\mathrm{H}_{2}\) molecule. However, it is crucial to highlight that with only \(N_{\mathcal{F}}=2^{11}\) the final physical error \(\mathcal{L}\) obtained is minimal. This observation indicates that enlarging the sampled time domain does not necessarily enhance the performance of the model. In other words, with this particular number of data points in the training domain, the PINN exhibits significant capabilities in making inferences, extrapolating information to new temporal instances, and interpolating. Consequently, the adoption of \(2^{11}\) data points implies a multiplicative factor of slightly more than 1.5 relative to the minimum elapsed computation time and can be regarded as a favorable trade-off between training time and performance.
Figure 10: Graphical investigation of the scalability of our methodology for the \(\mathrm{H}_{2}\) molecule in the STO-3G basis. Various bond distances of the hydrogen atoms are considered, represented by distinct colors as indicated in the legend to the right. The top side of the graph illustrates the total physical loss \(\mathcal{L}\) (35) after completing the training process, plotted against the number of points \(N_{\mathcal{F}}\) considered within the domain \(t\in(t_{\mathrm{min}},t_{\mathrm{max}})\). On the bottom side of the graph, we present the time required to complete the entire training process, normalized to the minimum time. We conducted these experiments using an NVIDIA A16 GPU.
Concerning the training of the \(\mathrm{LiH}\) molecule which involves 4 qubits, a significant increase in the required time is observed compared to the case of \(\mathrm{H}_{2}\) with 2 qubits. This discrepancy arises due to the consideration that the latter necessitates a decomposition of the potential gauge \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) comprising 16 possible combinations, each represented by a \(4\times 4\) matrix. However, in the case of the lithium hydride, there are 256 possible combinations in total, each represented by a matrix size of \(16\times 16\). Consequently, both hardware memory and training time experience a considerable surge from one case study to another. It is important to note, however, that this increase in resources is essential for extracting all possible information from the problem. This includes the scheduling function, all components of the potential gauge at each time instant, as well as the instantaneous values of each coefficient of the decomposition. In practical situations, theoretical analysis often focuses on a subset of all the possible interactions of the qubits constituting the system. By reducing the number of interactions from 256 to a more manageable figure, the problem becomes more amenable to study. Under these circumstances, the primary contributors to the memory and computing time consumption are both \(\mathcal{L}_{\text{Coupling}}\) (33) and \(\mathcal{L}_{\text{Least Action}}\) (30). The former involves simultaneous manipulation of numerous matrices, while the latter involves performing two commutators. Moreover, both terms span \(N_{\mathcal{F}}\) defined points in time, which further adds to the computational complexity.
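To make this combinatorics concrete, the following minimal numpy sketch (not the code used in the paper) computes the Pauli-string coefficients \(\mathcal{C}_{\sigma}=\mathrm{Tr}(P_{\sigma}^{\dagger}A)/2^{n}\) of an operator; a random Hermitian matrix stands in for \(\mathbf{\mathcal{A}}_{\text{CD}}(t)\) at a fixed time, and restricting the loop to a subset of strings mirrors the reduction from 256 interactions discussed above.

```python
# Illustrative sketch only: Pauli-string decomposition of a Hermitian operator,
# C_sigma = Tr(P_sigma^dagger A) / 2^n. A random Hermitian matrix stands in for
# the gauge potential A_CD(t) at a fixed time t.
import itertools
import numpy as np

PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_decompose(A):
    """Coefficients of A in the n-qubit Pauli basis (4^n strings in total)."""
    n = int(np.log2(A.shape[0]))
    coeffs = {}
    for labels in itertools.product("IXYZ", repeat=n):
        P = PAULIS[labels[0]]
        for l in labels[1:]:
            P = np.kron(P, PAULIS[l])          # tensor product P_sigma
        coeffs["".join(labels)] = (P.conj().T @ A).trace().real / 2**n
    return coeffs

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2                       # Hermitian 4x4: a 2-qubit operator
coeffs = pauli_decompose(A)                    # 16 coefficients for n = 2
print(len(coeffs), max(coeffs, key=lambda k: abs(coeffs[k])))
```

For \(n=2\) the loop visits the 16 combinations of \(4\times 4\) matrices; for \(n=4\), the 256 combinations of \(16\times 16\) matrices, which is precisely the growth that drives the memory and training-time surge described above.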
## 5 Discussion and conclusions
In this study, we have shown that deep learning methodologies such as Physics-Informed Neural Networks (PINNs) can be used to tackle the problem of counterdiabatic (CD) driving for analog quantum computation, deviating from conventional numerical methodologies [45]. Our proposed method is a generalized, problem-independent methodology that offers a comprehensive solution, encompassing the determination of counterdiabatic terms and the temporal parametrization, and empirically demonstrates that the adiabatic theory holds true. The suggested approach provides a unified and effective way to resolve the underlying physics of the problem while also giving the theoretical Pauli matrix decomposition needed for experimental implementation. Furthermore, the CD approach makes it possible to obtain a greater separation between the ground state, \(E_{0}\), and the first excited level, \(E_{1}\), throughout a significant portion of the temporal progression. This is a desirable property from an experimental perspective, as mentioned in Section 4.2.
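As a hedged illustration of the \(E_{1}-E_{0}\) separation mentioned above, the sketch below tracks the instantaneous gap of an interpolated Hamiltonian along a schedule; the two endpoint operators are random Hermitian placeholders, not the molecular Hamiltonians obtained via [70].

```python
# Hedged illustration: instantaneous spectral gap E1 - E0 along an adiabatic
# interpolation H(lambda) = (1 - lambda) H_i + lambda H_f. H_i and H_f are
# random Hermitian placeholders, not the molecular Hamiltonians of the paper.
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(dim):
    M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (M + M.conj().T) / 2

H_i, H_f = random_hermitian(4), random_hermitian(4)   # 2-qubit endpoints

for lam in np.linspace(0.0, 1.0, 6):                  # scheduling values in [0, 1]
    H = (1 - lam) * H_i + lam * H_f
    E = np.linalg.eigvalsh(H)                         # eigenvalues, ascending
    print(f"lambda = {lam:.1f}   gap E1 - E0 = {E[1] - E[0]:.4f}")
```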
Several questions emerge from these findings. Firstly, an exploration of the computational capacity regarding the maximum qubit count achievable through PINN-based simulation is desirable, i.e., scalability. Secondly, there is an opportunity to enhance our methodology by integrating recent PINN advancements, including the incorporation of causality as an additional constraint and enabling the network to autonomously learn activation functions. Lastly, restricting the theoretical Pauli matrix decomposition to encompass hardware-specific limitations introduces the prospect of improving the operational performance of analog quantum computers within experimental contexts.
Currently, all methodologies formulated within this context have been exclusively applied to systems comprising two and four qubits, as discussed in Section 4.4. Indeed, our study reveals that the training duration of a PINN for a 4-qubit system is approximately 15 times greater than that required for the 2-qubit counterpart. However, it is imperative to acknowledge that, empirically, the possible permutations of gates and qubit interconnections are usually restricted. Hence, despite the potential exponential increase in training time for an N-qubit system, the imposed experimental limitations render it feasible to effectively train the methodology for a substantial quantity of qubits.
Aside from reducing the problem, it is also possible to improve the overall performance of our proposal by forcing it to respect the causality principle [97], further restricting the time evolution of our Hamiltonian operators. This approach is sufficient to achieve substantial improvements in terms of physical accuracy. Therefore, it is feasible to reduce the number of temporal points used during the training process without altering the performance of the network. Moreover, implementing dynamically trainable activation functions may help improve performance and convergence time [83].
Furthermore, the physical losses presented in Figure 2 are around \(10^{-4}\), underscoring the need to understand the precision attainable by a base PINN in the context of CD protocols. Enhancing the presented methodology could entail the imposition of temporal causal prerequisites and the optimization of the neural architecture. The incorporation of mixture coefficients (36) within the loss function is a predetermined selection based on inductive physical biases, assigning greater weights to the initial and final loss components to steer the progression of the system. The remaining coefficients associated with physical conditions are hyperparameters that can be selected through iterative experimentation. These coefficients are also amenable to alteration during the training process, potentially employing techniques such as the Augmented Lagrangian approach [62], which adapts them according to the deviation from each physical condition. Consequently, the presented approach offers opportunities for enhancing the achievements and mitigating physical losses, improving, if feasible, the robustness of the model from a physical perspective.
In conclusion, our work has shown that PINN-based approaches are a promising methodology that can be effectively used to optimize CD protocols for adiabatic quantum computing. However, despite the substantial advancements
achieved within this context, it is evident that ample opportunities for enhancement exist. In particular, it would be worth studying the impact of these results on digitized-counterdiabatic quantum computing (DCQC) algorithms [98, 99]. The aforementioned questions stand as open paths for our future research, aiming to evolve and elaborate upon them.
## Acknowledgements
The authors express their sincere appreciation for the thoughtful attention provided by the Kipu Quantum team. AFS and JDMG are partially supported by the agreement funded by the European Union, between the Valencian Ministry of Innovation, Universities, Science and Digital Society, and the network of research centers in Artificial Intelligence (Valencian Foundation valgrAI). It has also been funded by the Valencian Government grant with reference number CIAICO/2021/184; the Spanish Ministry of Economic Affairs and Digital Transformation through the QUANTUM ENIA project call - Quantum Spain project, and the European Union through the Recovery, Transformation and Resilience Plan - NextGenerationEU within the framework of the Digital Spain 2025 Agenda.
|
2309.03617 | NeuroCodeBench: a plain C neural network benchmark for software
verification | Safety-critical systems with neural network components require strong
guarantees. While existing neural network verification techniques have shown
great progress towards this goal, they cannot prove the absence of software
faults in the network implementation. This paper presents NeuroCodeBench - a
verification benchmark for neural network code written in plain C. It contains
32 neural networks with 607 safety properties divided into 6 categories: maths
library, activation functions, error-correcting networks, transfer function
approximation, probability density estimation and reinforcement learning. Our
preliminary evaluation shows that state-of-the-art software verifiers struggle
to provide correct verdicts, due to their incomplete support of the standard C
mathematical library and the complexity of larger neural networks. | Edoardo Manino, Rafael Sá Menezes, Fedor Shmarov, Lucas C. Cordeiro | 2023-09-07T10:19:33Z | http://arxiv.org/abs/2309.03617v1 | # NeuroCodeBench: a plain C neural network benchmark for software verification
###### Abstract
Safety-critical systems with neural network components require strong guarantees. While existing neural network verification techniques have shown great progress towards this goal, they cannot prove the absence of software faults in the network implementation. This paper presents _NeuroCodeBench_ - a verification benchmark for neural network code written in plain C. It contains 32 neural networks with 607 safety properties divided into 6 categories: maths library, activation functions, error-correcting networks, transfer function approximation, probability density estimation and reinforcement learning. Our preliminary evaluation shows that state-of-the-art software verifiers struggle to provide correct verdicts, due to their incomplete support of the standard C mathematical library and the complexity of larger neural networks.
## 1 Introduction
In contrast to classic software development, neural networks are crafted via a long process of trial and error that terminates when their predictive performance reaches a satisfactory level [7, 43]. The iterative and performance-driven nature of this process leaves neural networks vulnerable on many fronts [23]: poor performance on out-of-distribution [18] and adversarial inputs [37], misspecification of the neural architecture and training process [24], invocation of broken and deprecated libraries [35], outright software bugs [20]. Unfortunately, many of these vulnerabilities are not easy to catch early in the development process and may remain hidden until after deployment.
The most successful techniques for guaranteeing the functional correctness of neural networks operate at a high level of abstraction, where finite precision and other implementation details are not considered [27, 46, 36]. Although efforts to debug the actual implementation of neural networks exist, they are based on automatic testing and thus cannot prove correctness for all inputs [39, 20, 17]. This lack of guarantees is especially concerning for safety-critical systems since common software vulnerabilities [12] (e.g., arithmetic overflows, invalid memory accesses) can make the networks produce wrong results, expose sensitive data or corrupt the system they are executed on.
While off-the-shelf software verifiers can be used to check neural network code [42, 33], there has been no systematic attempt at assessing their performance on such tasks. Typically, state-of-the-art verification tools (e.g., CPAChecker [5], ESBMC [19], CBMC [28], UAutomizer [21]) are compared on SV-COMP [4] - the largest software verification competition with over 15'000 C programs ranging
from hand-crafted code to real-world software (e.g., drivers, Linux kernel). However, this competition lacks a dedicated benchmark for either neural networks or mathematical libraries (e.g., math.h).
This paper presents _NeuroCodeBench_ - a reasoned benchmark of neural network code in plain C. It is designed to exercise the capabilities of existing software verifiers without overwhelming them with excessively large instances. More specifically, it contains 32 neural networks with 607 safety properties in SV-COMP format divided into the following 6 categories: maths library, activation functions, error-correcting networks, transfer function approximation, probability density estimation and reinforcement learning. The last two categories are converted to C code from the VNN-COMP'22 suite [36], whereas the rest are entirely new. As a demonstration, we run the leading tools of SV-COMP 2023 in reachability, falsification and floating point arithmetic [4]. Our preliminary results show that these verifiers have incomplete support of the math.h library and struggle on larger neural networks. Lastly, we make _NeuroCodeBench_ publicly available at [31] and [30].
## 2 The Benchmark
### Design Requirements
In designing _NeuroCodeBench_, we target two main requirements. First, our benchmark must be representative of existing neural network code. Mainstream libraries like PyTorch [3] and Tensorflow [9] have an opaque multi-language interpreted structure that can be easily tested [20, 17], but does not lend itself to automated software verification. For this reason, we opt for micro-controller frameworks, where the source code of the network is fully available. We use two existing tools for converting high-level neural network specifications to standalone C code: onnx2c[45] and keras2c[14, 15].
Second, our benchmark must contain safety properties whose verdict is known, with reasonably balanced sets of safe and unsafe verdicts. Existing works rely on the verdicts of a single tool [42, 33] and thus are not a reliable source of information. Here, we establish the ground-truth verdict of our 607 safety properties in three ways (see Table 1): _A Priori_ verdicts come from the specific mathematical structure of the functions and networks we verify; _Brute Force_ verdicts come from exhaustive exploration of all possible floating point inputs; _VNN-COMP'22_ verdicts come from the independently-run neural network verification competition [36]. For the latter, we only keep unsafe properties if we can reproduce the corresponding counterexamples.
### Benchmark Description
Math Library. Typically, neural networks rely on 32-bit floating point operations and invoke the corresponding functions in the math.h library. More specifically, most activation functions depend on
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Benchmark Category & Safe & Unsafe & Ground Truth \\ \hline math\_functions & 33 & 11 & A Priori \\ activation\_functions & 40 & 16 & A Priori \\ hopfield\_nets & 47 & 33 & A Priori \\ poly\_approx & 48 & 48 & Brute Force \\ reach\_prob\_density & 22 & 13 & VNN-COMP’22 \\ reinforcement\_learning & 103 & 193 & VNN-COMP’22 \\ \hline \end{tabular}
\end{table}
Table 1: Overview of _NeuroCodeBench_. The “Unsafe” column comprises all properties for which a counterexample exists. The “Ground Truth” column reports the source of our verdicts.
exponential, logarithm, error function, absolute value, and max function (see activation_functions category). Similarly, positional encodings depend on sine and cosine [29], while Euclidean distances and vector normalisation depend on the square root [8].
In this light, it is worth checking whether software verifiers correctly handle calls to math.h. We write benchmarks that depend on the following functions: acosf, asinf, cosf, erff, expf, fabsf, logf, sinf and sqrtf. Since their semantics are platform-specific, we assume compliance with the IEEE 754 standard for 32-bit floating point [25] and the C99 standard for math.h[26]. We provide 44 safety properties (see Table 1) that check for a wide range of behavior: output bounds, monotonicity, periodicity, symmetry, function inversion and derivatives.
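The sketch below illustrates the flavour of these property categories on sampled inputs; it is only a Python stand-in using double-precision math.erf, whereas the benchmark states the properties over 32-bit C implementations.

```python
# Illustration only: the property categories (bounds, monotonicity, symmetry)
# checked on sampled inputs. The benchmark itself targets float32 C libm code;
# Python's math.erf works in float64, so a small tolerance is used for symmetry.
import math
import numpy as np

xs = np.linspace(-4.0, 4.0, 10_001, dtype=np.float32)
ys = [math.erf(float(x)) for x in xs]

assert all(abs(y) <= 1.0 for y in ys)                        # output bounds
assert all(a <= b for a, b in zip(ys, ys[1:]))               # monotonicity
assert all(math.isclose(math.erf(-float(x)), -y, rel_tol=1e-12, abs_tol=1e-300)
           for x, y in zip(xs, ys))                          # odd symmetry
print("all sampled erf properties hold")
```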
Activation Functions. Most of the non-linear behaviour in neural networks is concentrated in the activation layers [8]. These contain fairly restricted sets of activation functions whose implementation should be verified for correctness. Our benchmark includes the most popular ones [38, 22]: Elu, Gelu, Logistic, ReLU, Softmax, Softplus, Softsign and TanH. In turn, their definition depends on the functions erff, expf, expm1f, fabsf, fmaxf, log1pf and tanhf. While most activation functions are univariate, the Softmax accepts multivariate inputs. To keep our verification instances small, we limit the size of Softmax input vectors to 2 and 4.
Error-Correcting Networks. For a long time, it has been known that certain types of recurrent neural networks can act as error-correcting decoders [1, 11]. The main idea is encoding a sequence of \(d\) bits into a vector \(x\in\{\pm 1\}^{d}\), and letting the neural network flip the sign of the incorrect entries.
Here, we choose Hopfield networks with Hebbian weights since their properties are well understood [1]. Specifically, we build networks reconstructing a single pattern \(x=(1,\ldots,1)\). We vary the pattern length in \(d\in\{4,8,16,32,64\}\) and the number of recursions in \(t\in[1,4]\). For compatibility with keras2c[14, 15], we use Softsign and TanH activations (see Table 2) rather than traditional sign activations [1]. Our safety properties check whether the network can reconstruct \(x\) when the first \(d/2-1\) entries can take any value in \([-1,1]\). Due to the network symmetry, we establish the ground truth by checking the extreme inputs \(x\) and \(x^{\prime}=(-1,\ldots,1)\), where \(x^{\prime}_{i}=-1\) for all \(i\in[1,d/2-1]\).
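A minimal numpy sketch of this construction (details assumed: the benchmark networks are generated through keras2c, while this stand-in applies the Hebbian update directly) demonstrates reconstruction from the worst-case corruption \(x^{\prime}\):

```python
# Minimal sketch (assumed details): Hebbian Hopfield reconstruction of the
# single stored pattern x = (1, ..., 1) with TanH updates, starting from the
# extreme corrupted input x' whose first d/2 - 1 entries are flipped to -1.
import numpy as np

d, t = 8, 4
x = np.ones(d)
W = np.outer(x, x) / d
np.fill_diagonal(W, 0.0)             # Hebbian weights, no self-coupling

s = x.copy()
s[: d // 2 - 1] = -1.0               # corrupted pattern x'
for _ in range(t):                   # t recursions of the recurrent network
    s = np.tanh(W @ s)

print(np.sign(s))                    # all +1: the pattern is recovered
```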
Transfer Function Networks. In several engineering areas, neural networks are used to approximate the transfer function of electrical components [47, 32]. Here, we emulate this process by defining a hypothetical polynomial component \(f(x)=0.125x^{4}-0.25x^{3}-0.75x^{2}+x+0.5\) with oscillating transfer function. Then, we create a training set by uniformly sampling \(f(x)\) in \(x\in[-2,3]\) and train 16 different feedforward ReLU networks \(\hat{f}(x)\). The smallest has four layers with four neurons each, and the largest has a single hidden layer with 1024 neurons (see poly_approx category in Table 2).
We formally verify the approximation quality by measuring the difference between \(\hat{f}(x)\) and \(f(x)\) for each possible 32-bit floating point value in \([-2,3]\). With this information, we write 96 robustness properties (see Table 1). Specifically, we check the input domain in a small interval \(\mathcal{X}\) of size 0.1
\begin{table}
\begin{tabular}{|r|c|c|c|c|c|c|c|} \hline Neural Network Category & Inputs & Outputs & Layers & Neurons & Activations & Architecture & Conversion \\ \hline hopfield\_nets & 4–64 & 4–64 & 1 & 4–64 & Softsign/TanH & Recurrent & keras2c \\ poly\_approx & 1 & 1 & 1–4 & 16–1024 & ReLU & Feedforward & keras2c \\ reach\_prob\_density & 3–14 & 3–14 & 2–3 & 64–192 & ReLU & Feedforward & onnx2c \\ reinforcement\_learning & 4–8 & 2–8 & 2 & 128–512 & ReLU & Feedforward & onnx2c \\ \hline \end{tabular}
\end{table}
Table 2: Neural networks in _NeuroCodeBench_. The “Layers” and “Neurons” columns refer to the hidden layers only. The networks in hopfield_nets have a number of iterations between 1 and 4.
around the worst approximation error. There, we make sure that the error is always below a given threshold \(\epsilon\geq|f(x)-\hat{f}(x)|,\forall x\in\mathcal{X}\). We select six decreasing values of \(\epsilon\) for each network: three make the property hold and three yield a counterexample.
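A sketch of the brute-force procedure is given below: it enumerates every float32 in a width-0.1 interval with np.nextafter and bounds the pointwise error against \(\epsilon\). The stand-in \(\hat{f}\) is a placeholder for a trained keras2c network, and the loop (under a million iterations at this interval width) is illustrative rather than the benchmark's actual C harness.

```python
# Sketch of the brute-force ground truth: check eps >= |f(x) - f_hat(x)| for
# every 32-bit float x in a small interval X. f_hat is a placeholder for one
# of the trained networks; here it is simply f rounded through float32.
import numpy as np

def f(x):
    x = np.float32(x)
    return np.float32(0.125 * x**4 - 0.25 * x**3 - 0.75 * x**2 + x + 0.5)

def f_hat(x):
    return f(x)  # placeholder: substitute the keras2c network's output here

lo, hi = np.float32(1.0), np.float32(1.1)     # an interval X of size 0.1
eps = np.float32(1e-3)

worst = np.float32(0.0)
x = lo
while x <= hi:                                 # visits every float32 in [lo, hi]
    worst = max(worst, abs(f(x) - f_hat(x)))
    x = np.nextafter(x, np.float32(np.inf))    # next representable float32
print("worst error:", worst, "property holds:", bool(worst <= eps))
```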
VNN-COMP Networks. Since its first edition in 2020, the International Verification of Neural Networks Competition (VNN-COMP) publishes all its benchmarks [36]. These benchmarks do not contain implementation details since they target a higher level of abstraction (real number arithmetic, no memory model). To provide a reference implementation, we propose the following conversion process: we translate the networks from ONNX format [13] to C with onnx2c[45], and the safety properties from VNN-LIB [44] to a minimal main() function with pre- and post-conditions.
Among all categories of the 2022 edition [36], we select two that contain relatively small neural networks (see Table 2): reach_prob_density are networks that approximate probability densities [34], reinforcement_learning are control networks trained via reinforcement learning [40].
### Preliminary Evaluation
Here, we use _NeuroCodeBench_ to evaluate six software verifiers which achieved top-places2 in the _Reachability_, _Falsification_ and _Floats_ categories of SV-COMP 2023 [4]. We keep our experimental setup as similar to SV-COMP as possible: we use the benchmarking tool BenchExec [6] with 2 CPU cores, 6GB of RAM and 900 seconds of total CPU time per verifier for each verification task.
Footnote 2: We omit VeriAbs [2] and VeriAbsL [16] due to licence restrictions. We omit BRICK [10] due to technical issues. We omit cooperative verifiers for clarity. We run PeSCo [41] with the CPAChecker binary from SV-COMP 2023.
Our preliminary results in Figure 1 show that all six verifiers produce a large ratio of _incorrect-to-correct_ verdicts. One of the likely reasons is incomplete support of math.h functions, which appear in the first three categories of Table 1. Indeed, CBMC, ESBMC, CPAChecker and UAutomizer produce many math-related warnings in their output, even when their verdict is correct or unknown. At the same time, approximately half of the unknown verdicts are due to timeouts on the larger neural networks of _NeuroCodeBench_, which suggests that the verifiers struggle with their complexity.
## 3 Conclusions and Future Work
_NeuroCodeBench_ is a challenging benchmark of neural network code in plain C. Our preliminary analysis demonstrates that state-of-the-art verifiers cannot produce correct verdicts on most of our safety properties. In the future, we plan to provide complete operational models for the math.h library, whose absence impacts existing verifiers. Furthermore, we plan to contribute _NeuroCodeBench_ to SV-COMP and draw the attention of that community to the challenges of verifying neural code.
Figure 1: Results of state-of-the-art software verifiers on _NeuroCodeBench_ after 900 seconds. |
2309.15075 | On Excess Risk Convergence Rates of Neural Network Classifiers | The recent success of neural networks in pattern recognition and
classification problems suggests that neural networks possess qualities
distinct from other more classical classifiers such as SVMs or boosting
classifiers. This paper studies the performance of plug-in classifiers based on
neural networks in a binary classification setting as measured by their excess
risks. Compared to the typical settings imposed in the literature, we consider
a more general scenario that resembles actual practice in two respects: first,
the function class to be approximated includes the Barron functions as a proper
subset, and second, the neural network classifier constructed is the minimizer
of a surrogate loss instead of the $0$-$1$ loss so that gradient descent-based
numerical optimizations can be easily applied. While the class of functions we
consider is so large that optimal rates cannot be faster than
$n^{-\frac{1}{3}}$, it is a regime in which dimension-free rates are possible
and approximation power of neural networks can be taken advantage of. In
particular, we analyze the estimation and approximation properties of neural
networks to obtain a dimension-free, uniform rate of convergence for the excess
risk. Finally, we show that the rate obtained is in fact minimax optimal up to
a logarithmic factor, and the minimax lower bound shows the effect of the
margin assumption in this regime. | Hyunouk Ko, Namjoon Suh, Xiaoming Huo | 2023-09-26T17:14:10Z | http://arxiv.org/abs/2309.15075v1 | # On Excess Risk Convergence Rates of Neural Network Classifiers
###### Abstract
The recent success of neural networks in pattern recognition and classification problems suggests that neural networks possess qualities distinct from other more classical classifiers such as SVMs or boosting classifiers. This paper studies the performance of plug-in classifiers based on neural networks in a binary classification setting as measured by their excess risks. Compared to the typical settings imposed in the literature, we consider a more general scenario that resembles actual practice in two respects: first, the function class to be approximated includes the Barron functions as a proper subset, and second, the neural network classifier constructed is the minimizer of a surrogate loss instead of the 0-1 loss so that gradient descent-based numerical optimizations can be easily applied. While the class of functions we consider is so large that optimal rates cannot be faster than \(n^{-\frac{1}{3}}\), it is a regime in which dimension-free rates are possible and the approximation power of neural networks can be taken advantage of. In particular, we analyze the estimation and approximation properties of neural networks to obtain a dimension-free, uniform rate of convergence for the excess risk. Finally, we show that the rate obtained is in fact minimax optimal up to a logarithmic factor, and the minimax lower bound shows the effect of the margin assumption in this regime.
_Keywords--_ Neural network classification, excess risk convergence rate, neural network approximation theory, minimax optimality, Barron approximation space
## 1 Introduction
Neural networks have a long history as a class of functions with competitive performance in pattern recognition and classification problems. Theoretically, their ability to approximate various significant classes of functions, as well as their statistical properties as nonparametric estimators, have been actively studied. More recently, the rise of deep neural networks as a solution to many previously unsolved problems in the computer science community has led to the investigation of their theoretical properties.
Over the years, many papers have established universal consistency for a variety of classifiers. Among the most successful were the support vector machines pioneered by [1] and [2], and kernel methods based on the function class of a reproducing kernel Hilbert space, pioneered by [1], [2], and others. Various types of neural networks, such as shallow feed-forward neural networks with sigmoidal activation, polynomial networks, and Kolmogorov-Lorentz networks, were also studied. In particular, a one-hidden-layer neural network with sigmoidal activation obtained by minimization of the empirical \(L_{1}\) error was shown to be universally consistent in [10].
A central tool widely used in proving these consistency and convergence rate results is the collection of concentration inequalities in various contexts. Roughly speaking, the excess risk for classification can be bounded by a term involving the supremum of an empirical process indexed by the class of candidate functions for the decision rule. Arguably the most important of them for our purposes is the Talagrand inequality [11], which gives a functional uniform concentration bound. This allows us to apply localized second-order arguments to obtain sharp convergence rates. See [1] for details.
With the help of relatively new techniques from empirical process theory, a series of seminal papers [12], [13], [14] was published, providing convergence rates that hold uniformly over a class of distributions satisfying some regularity assumptions. In particular, two types of classifiers were considered: empirical risk minimization (ERM) rules and plug-in rules. While an ERM rule provides a classifier directly
based on a decision set, that is, a classifier that outputs 1 if the data belongs to the set and 0 otherwise, a plug-in rule first tries to approximate the regression function \(E[Y=1|X=x]\) and outputs 1 or 0 depending on whether the function value exceeds a given threshold. Accordingly, the sets of assumptions on the joint distribution of \((X,Y)\) differ: results on ERM rules apply under a set complexity assumption, while those on plug-in rules apply under a function class complexity assumption. In addition, the provided rates of convergence are shown to be minimax optimal in the respective scenarios. Details of the relevant results will be provided in Section 3.
This paper also focuses on the binary classification problem. Specifically, we will provide a non-asymptotic uniform rate of convergence for the excess risk \(E[g(X)\neq Y]-L^{*}\) where \(L^{*}\) is the Bayes risk, under several distributional assumptions for a plug-in rule based on feed-forward ReLU neural networks. In Table 1, we provide a comparison of our work with other existing works in the literature. The capitalized letters in the assumptions column mean the following: \(S(\rho)\): set class complexity assumption where \(\rho\) is the entropy parameter, \(S(V)\): set class complexity assumption where \(V\) is the VC-dimension, \(M(\alpha)\): margin condition, \(H(\beta)\): Holder class assumption of smoothness index \(\beta\), \(G(\beta)\): geometric noise assumption, \(B\): bounded variation assumption, \(C\): function class convexity assumption, \(D\): density assumption on \(X\), \(BA\): Barron approximation space assumption. More details on various assumptions will be given in Section 3. We analyze a new, more general setting that has not been studied before. The major distinguishing points of our analysis are:
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|} \hline Paper & Classifier Type & Classifier Class & Loss function & Assumptions & Convergence Rate \\ \hline
[14] & ERM & ERM from set class, finite sieves & & & \\
[13] & ERM & Finite sieves & 0-1 loss & \(S(\rho),M(\alpha)\) & \(n^{-\frac{1+\alpha}{2+\alpha+\rho\alpha}}\) \\
[1] & ERM + plug-in & Linear polynomial & 0-1 loss & \(M(\alpha),D,H(\beta)\) & \(n^{-\frac{\beta(1+\alpha)}{2\beta+d}}\) \\
[2] & ERM + plug-in & Boosting & convex losses & \(S(V),C,M(\alpha)\) & \(n^{-\frac{V+2}{2(V+1)(2-\alpha)}}\) \\
[15] & ERM + plug-in & Boosting classifier & exponential or logistic & \(S(V),M(\alpha)\) & \(n^{-\frac{V+2}{2(2-\alpha)(V+1)}}\) \\
[15] & ERM + plug-in & Boosting based on decision stumps & exponential or logistic & \(B,M(\alpha)\) & \(n^{-\frac{2(1+\alpha)}{3(2+\alpha)}}\) \\
[16] & plug-in & Support vector machines & hinge loss & \(M(\alpha),G(\beta)\) & \(n^{-\frac{2\beta(\alpha+1)}{2\beta(\alpha+2)+3\alpha+4}}\) \\
[17] & ERM + plug-in & Neural networks & hinge loss & \(S(\beta),M(\alpha)\) & \(n^{-\frac{\beta(\alpha+1)}{\beta(\alpha+2)+(d-1)(\alpha+1)}}\) \\ \hline
This paper & ERM + plug-in & Neural networks & logistic loss & \(M(\alpha)\), \(BA\) & \(n^{-\frac{1+\alpha}{3(2+\alpha)}}\) \\ \hline \end{tabular}
\end{table}
Table 1: Rough comparison of related papers
* We study a regime in which minimax optimal rates are "slow" (at best \(n^{-\frac{1}{3}}\)). While this regime is much more general than typical settings in the literature, dimension-free rates are possible in it, and neural networks have the approximation power to achieve such rates. Indeed, we show that our neural network-based plug-in rule is minimax optimal up to a logarithmic factor.
* We consider a feed-forward ReLU neural network-based plug-in rule that is chosen based on the minimization of the empirical average of logistic loss, which we denote by \(\phi\). This contrasts with the work of [13] that uses hinge loss.
* We apply state-of-the-art results on the complexity of deep feed-forward ReLU neural network class combined with the refined localization analysis to obtain a sharp convergence rate.
In addition to controlling the estimation error via applications of empirical process techniques, we also need to control the approximation error. Approximation of high-dimensional functions usually suffers from the curse of dimensionality, i.e., the phenomenon that the approximation rate deteriorates with the input dimension. However, it was first shown in [1] that for a class of functions whose variation is bounded in a suitable sense, shallow neural networks attain a dimension-free \(N^{-1/2}\) rate of convergence in the \(L_{2}\) norm, where \(N\) is the number of neurons in the architecture; [1] refined the result so that the same rate holds also for the uniform norm. In fact, this is the key property we will use in defining the class of candidate functions for the regression function.
In this paper, we consider the scenario where the true regression function is locally characterized by elements of the Barron approximation space proposed in [1]. Unlike the classical Barron space in [1], which is actually a subset of the set of Lipschitz continuous functions, this space includes even discontinuous functions and hence is more general. Also, while [1] does explore estimation bounds for Barron approximation space, the setting is rather restrictive in that it assumes a noiseless setting where there is a deterministic function \(f\) such that \(Y=f(X)\). In contrast, we work with a general class of probability measures defined jointly on \((X,Y)\). We will analyze the performance of an empirical risk minimizer of a surrogate loss that is widely used in practice for its computational and statistical advantages over the 0-1 loss. Specifically, we derive a non-asymptotic uniform rate of convergence over a class of distributions that roughly speaking, can be characterized by the Barron approximation space.
Finally, we will conclude by providing a minimax lower bound for the proposed class of distributions. The lower bound shows that achieving uniform convergence rate is inherently difficult in the sense that it cannot be better than \(n^{-\frac{1}{3}}\).
### Main Contributions
In summary, the purpose of this paper is to show a non-asymptotic and uniform rate of convergence for a sequence of classifiers based on neural networks in a binary classification setting when the regression function is locally characterized by the Barron approximation space. The classifier chosen is based on empirical risk minimization of the logistic loss. We combine refined results from classical classification theory with approximation results for neural networks. Specifically,
* We first derive a nonasymptotic bound (Theorem 4.1) on the approximate excess \(\phi\)-risk for a function obtained via empirical minimization of \(\phi\)-loss instead of 0-1 loss, which is how neural networks are trained in practice. In this preliminary result, while we initially put minimal assumption on the distribution, the obtained bound is distribution-dependent.
* Second, we consider the class of distributions such that (1) the Mammen-Tsybakov noise condition holds, and (2) the regression function \(\eta\) locally belongs to the Barron approximation space. Then, we obtain a uniform bound on the excess risk (Theorem 4.6) over this class of distributions for the neural network plug-in classifiers using the preliminary result above. Apart from the general difference in distributional assumptions, this work differs from the two works [14] and [15] in that, first, our classifier is a plug-in rule, and second, it provides results for a feasible classifier that can be obtained via available gradient descent-based optimization methods, while in those works the classifier may not be feasible or may be difficult to obtain. Another comparable work is [13], where a fast convergence rate is shown for a neural network classifier that minimizes the empirical risk for the hinge loss and when the Bayes classifier is characterized by function classes proposed by [10], [12]. The regime they work
under is perhaps less interesting because traditional classifiers such as local polynomials and support vector machines also lead to minimax optimal rates there. In fact, the optimal rates there depend on the input dimension and the smoothness of the regression function. Our work demonstrates a regime where dimension-free rates are possible without explicit smoothness or regularity conditions. Our work also critically differs from [10] and [14] in that we include an approximation error analysis. Furthermore, we specifically focus on neural network learning with the logistic loss, making use of the sharpest known bounds on the VC-dimension while taking advantage of the approximation power of neural networks. A rough but honest summary and comparison of related works with convergence rate results for binary classification are given in Table 1. We would like to warn the reader, however, that it is oftentimes not a good idea to compare convergence rates per se, because the assumptions applied in the respective works are different, and even a very subtle difference can lead to wildly different convergence behaviors.
* Third, we derive a minimax lower bound (Theorem 4.7) for the same class of distributions considered in deriving the upper bound. This result shows that the upper bound on the rate achieved with neural network classifiers is indeed minimax optimal up to a logarithmic factor. Closely related are the minimax lower bounds derived in [1] under mild distributional assumptions and a Holder regression function class setting.
### Organization
The rest of the paper is organized as follows. In Section 2, we provide all the necessary definitions from classification and empirical process theory that we will be using throughout the paper. In Section 3, we review existing results and discuss how their assumptions and results differ from each other and from our own. In Section 4, we state our main results culminating in the rate of convergence for excess risk given in Theorem 4.6. In Section 5, we give all the technical proofs for results from Section 4.
## 2 Background
### Basic setup
Let \(Z=(X,Y)\) be a \(S:=\mathbb{R}^{d}\times\{0,1\}\)-valued random vector, and suppose we have a sample of size \(n\): \(\mathcal{D}_{n}=\{(X_{1},Y_{1}),\ldots(X_{n},Y_{n})\}\) that are i.i.d. with distribution \(P\). Denote by \(P_{X}\) the marginal distribution of \(X\) and \((P_{X})_{n}\) the empirical distribution based on the \(n\) samples for \(P_{X}\), which is a random measure on \(\mathbb{R}^{d}\) based on the \(n\) samples. The goal is to construct a classifier
\[M_{n}:\mathbb{R}^{d}\times\{\mathbb{R}^{d}\times\{0,1\}\}^{n}\to\{0,1\} \tag{1}\]
that assigns a label to any given input in \(\mathbb{R}^{d}\) based on \(n\) samples from \(\mathcal{D}_{n}\). The quality of \(M_{n}\) is measured by the error function \(L\), which takes as input the classifier \(M_{n}\) and outputs the following conditional probability:
\[L(M_{n}):=P(M_{n}(X;X_{1},Y_{1},\ldots,X_{n},Y_{n})\neq Y|\mathcal{D}_{n}). \tag{2}\]
Hence, when the randomness of \(\mathcal{D}_{n}\) is integrated over, \(E[L(M_{n})]\) also becomes a deterministic real number between 0 and 1. Then, we are interested in obtaining a provable upper bound on \(E[L(M_{n})]\) in terms of \(n\) when \(M_{n}\) is a function realized by a neural network.
The minimal possible expected error will be denoted by
\[L^{*}:=\inf_{g}E[L(g)] \tag{3}\]
where the infimum is taken over all measurable classifiers, and expectation is taken with respect to all sources of randomness. It is a well-known result that this infimum is actually achieved by a classifier with prior knowledge of \(P\).
To describe the classifier achieving (3), we define the so-called regression function \(\eta:\mathbb{R}^{d}\to[0,1]\) as the Borel-measurable function satisfying:
\[\eta(\mathbf{X})=E[Y=1|\mathbf{X}]. \tag{4}\]
That such a function \(\eta\) exists and, when composed with \(\mathbf{X}\), is a version of the conditional expectation is shown, for example, in [1, Theorem 9.1.2] or, in the more general setting of a Polish space, in [1, Theorem 10.2.1 and 10.2.2]. Then, it is a standard result (see, for example, [1, Section 2.1]) that the infimum is achieved by the classifier
\[M^{*}(\mathbf{X})=\mathbb{1}_{\{x:\eta(x)\geq 1/2\}}(\mathbf{X})=\begin{cases}1,&\text{if } \eta(\mathbf{X})\geq 1/2;\\ 0,&\text{otherwise},\end{cases} \tag{5}\]
which one can construct with prior knowledge of \(P\) and is independent of the samples \(\mathcal{D}_{n}\) so that we have
\[\begin{aligned} L^{*}&=E[L(M^{*})]\\ &\overset{\text{definition}}{=}E[P(M^{*}(X)\neq Y|\mathcal{D}_{n})]\\ &\overset{\text{independence}}{=}P(M^{*}(X)\neq Y).\end{aligned}\]
For a given function \(f:\mathbb{R}^{d}\to\mathbb{R}\), it is convenient to write
\[p_{f}(\mathbf{x}):=\mathbb{1}_{\{x:f(x)\geq 0\}}(\mathbf{x}), \tag{6}\]
where for any subset \(A\subset\mathbb{R}^{d}\), \(\mathbb{1}_{A}\) is the indicator function defined by
\[\mathbb{1}_{A}(x):=\begin{cases}1,&\text{if }x\in A;\\ 0,&\text{otherwise}.\end{cases} \tag{7}\]
Then, we can regard \(p_{f}\) as an estimator of the function \(\mathbb{1}_{\{\eta(x)\geq 1/2\}}\). These types of classifiers are called plug-in rules in the literature. Using this notation, note that (5) is the plug-in rule \(p_{\eta-1/2}\), which is also called the Bayes classifier.
Now, we formally define the concepts needed to quantify the performance of a classifier.
**Definition 2.1** (Excess risk).: _The excess risk of a classifier \(g_{n}:\mathbb{R}^{d}\times\{\mathbb{R}^{d}\times\{0,1\}\}^{n}\to\{0,1\}\) that depends on samples \(\mathcal{D}_{n}\) is defined as_
\[\mathcal{E}(g_{n}):=E[L(g_{n})]-\inf_{h\text{ measurable}}P(h(\mathbf{X})\neq Y).\]
We will use the notation \(P(\cdot)\) to denote integration with respect to \(P\) or simply \(E[\cdot]\) when the measure is clear from the context. Now while the infimum in the right-hand side of the above display is taken with respect to all measurable \(h\), for analysis, it will be convenient to first consider the case when the infimum is taken with respect to a given function class \(\mathcal{G}\) that we take as the candidate set for estimation. Accordingly, we define the approximate excess risk below:
**Definition 2.2** (Approximate excess risk).: _Suppose \(g_{n}:\mathbb{R}^{d}\times\{\mathbb{R}^{d}\times\{0,1\}\}^{n}\to\{0,1\}\in \mathcal{G}\) for some class of functions \(\mathcal{G}\). We define the approximate excess risk of \(g_{n}\) with respect to \(\mathcal{G}\) as_
\[\widehat{\mathcal{E}}(g_{n}):=E[L(g_{n})]-\inf_{h\in\mathcal{G}}P(h(\mathbf{X}) \neq Y).\]
Since we only consider plug-in rules, when a classifier \(p_{f}\) is constructed from \(f\) via (6), we also write \(\mathcal{E}(f)\) for \(\mathcal{E}(p_{f})\) and likewise for the approximate excess risk when there is no room for confusion.
### Surrogate loss, classification calibration, and excess risks
Let \(\phi:\mathbb{R}\to[0,\infty)\) be the logistic loss function:
\[\phi(t):=\log\left(1+e^{-t}\right). \tag{8}\]
Let \(\mathcal{G}\) be some class of real-valued measurable functions \(g:\mathbb{R}^{d}\to\mathbb{R}\) realized by neural networks, and define the function \(\phi\bullet g:\mathbb{R}^{d}\times\{0,1\}\to\mathbb{R}\),
\[\phi\bullet g(x,y)=\phi((2y-1)g(x)). \tag{9}\]
Based on the above notation, we can analogously define the excess \(\phi\)-risk and approximate excess \(\phi\)-risk of a function \(g_{n}\) with respect to function class \(\mathcal{G}\) as the approximate excess risk with respect to the class of functions \(\{\phi\bullet g,g\in\mathcal{G}\}\) as follows:
\[\mathcal{E}_{\phi}(g_{n}) :=E(\phi\bullet g_{n}(X,Y;\mathcal{D}_{n}))-\inf_{g\text{ measurable}}P(\phi\bullet g(X,Y)), \tag{10}\] \[\widehat{\mathcal{E}}_{\phi}(g_{n}) :=E(\phi\bullet g_{n}(X,Y;\mathcal{D}_{n}))-\inf_{g\in\mathcal{G} }P(\phi\bullet g(X,Y)). \tag{11}\]
We will also write \(L_{\phi}(g):=E[\phi\bullet g]\). In our analysis, we will be interested in \(\widehat{g}_{n}\), which is the solution of the following empirical risk minimization problem:
\[\widehat{g}_{n}:=\operatorname*{arg\,min}_{g\in\mathcal{G}}E_{n}(\phi\bullet g),\]
where \(E_{n}\) denotes expectation with respect to the empirical measure \(\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i},Y_{i}}\) based on samples \(\mathcal{D}_{n}\). We will first study the statistical quality of \(\widehat{g}_{n}\in\mathcal{G}_{n}\) as measured by its approximate excess \(\phi\)-risk, and from there derive a convergence rate for the excess risk \(\mathcal{E}(\widehat{g}_{n})\).
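For concreteness, a small numpy sketch of this minimization is given below (synthetic data, illustrative hyperparameters; it uses plain gradient descent rather than the exact optimizer an implementation would use): it fits a one-hidden-layer ReLU network by descending the empirical logistic \(\phi\)-risk \(E_{n}(\phi\bullet g)\) and then reads off the plug-in rule \(p_{\widehat{g}_{n}}\).

```python
# Sketch: empirical phi-risk minimization for the logistic loss, i.e.
# minimize E_n[phi((2Y - 1) g(X))] over a one-hidden-layer ReLU network g,
# followed by the plug-in rule p_g = 1{g >= 0}. Data and step size are toy.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 500, 2, 32
X = rng.normal(size=(n, d))
eta = 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))        # toy regression function
Y = (rng.uniform(size=n) < eta).astype(float)
s = 2.0 * Y - 1.0                                  # signs 2Y - 1

W1 = rng.normal(scale=d**-0.5, size=(d, m)); b1 = np.zeros(m)
w2 = rng.normal(scale=m**-0.5, size=m);      b2 = 0.0

for _ in range(2000):
    H = np.maximum(X @ W1 + b1, 0.0)               # hidden ReLU layer
    g = H @ w2 + b2                                # network outputs g(X_i)
    grad_g = -s / (1.0 + np.exp(s * g)) / n        # d/dg of E_n[phi(s * g)]
    gH = np.outer(grad_g, w2) * (H > 0)            # backprop through ReLU
    W1 -= 0.5 * (X.T @ gH);   b1 -= 0.5 * gH.sum(axis=0)
    w2 -= 0.5 * (H.T @ grad_g); b2 -= 0.5 * grad_g.sum()

print("empirical phi-risk:", np.logaddexp(0.0, -s * g).mean())
print("training accuracy of plug-in rule:", ((g >= 0) == (Y == 1)).mean())
```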
Although in our examination, \(\phi\) will always be the logistic loss, statistical properties of a general class of convex losses have been extensively studied, for example in [1]. With regard to the use of a surrogate loss, an important concept is that of classification-calibration. We first introduce some preliminary definitions.
**Definition 2.3**.: _Define the optimal conditional \(\phi\)-risk as_
\[H(\eta):=\inf_{\alpha\in\mathbb{R}}\eta\phi(\alpha)+(1-\eta)\phi(-\alpha), \quad\eta\in[0,1].\]
Note that the optimal \(\phi\)-risk is then given by
\[L_{\phi}^{*}:=E[H(\eta(X))]=\inf_{f\text{ measurable}}E[\phi((2Y-1)f(X))] \tag{12}\]
where the second equality follows from properties of conditional distributions (see, for example, [11, Theorem 10.2.1]) and from the fact that, for the logistic loss, the infimum on the right-hand side is achieved by \(f(x)=\log(\frac{\eta(x)}{1-\eta(x)})\), which is measurable. We also define a similar function \(H^{-}\) as
**Definition 2.4**.: \(H^{-}(\eta):=\inf_{\alpha:\alpha(2\eta-1)\leq 0}\eta\phi(\alpha)+(1-\eta)\phi( -\alpha),\quad\eta\in[0,1]\)_,_
which is the optimal conditional \(\phi\)-risk under the constraint that \(\alpha\) takes a sign different from \(2\eta-1\). Based on the above definitions, we define classification calibration:
**Definition 2.5**.: _The surrogate loss \(\phi\) is classification calibrated if, for any \(\eta\neq 1/2\),_
\[H^{-}(\eta)>H(\eta).\]
Intuitively, this says that a "wrong" classifier should always yield a higher value of the conditional \(\phi\)-risk than the "correct" one. It is not trivial to see how the excess \(\phi\)-risk (10) relates to the excess risk (Definition 2.1), which is of ultimate interest. [15] first showed the so-called Zhang's inequality: for a function \(f_{n}:\mathbb{R}^{d}\rightarrow\mathbb{R}\), if \(\phi\) is such that for some constants \(s\geq 1\) and \(c\geq 0\)
\[\left|\frac{1}{2}-\eta\right|^{s}\leq c^{s}(1-H(\eta)),\quad\eta\in[0,1]\]
then, we have
\[\mathcal{E}(f_{n})\leq c\mathcal{E}_{\phi}(f_{n})^{1/s}. \tag{13}\]
In fact, this result was later refined using the additional assumption of Mammen-Tsybakov noise condition by [1] as follows:
\[\mathcal{E}(f_{n})\leq C\mathcal{E}_{\phi}(f_{n})^{(1+\alpha)/(s+\alpha)} \tag{14}\]
where \(C>0\) is a constant, \(s\) is as in Zhang's inequality, and \(\alpha\) is the noise exponent in the Mammen-Tsybakov noise condition, which will be discussed later in Section 3.1.3. In particular, since the logistic loss is convex and differentiable at \(0\) with a negative derivative, it can be shown that it is classification-calibrated so that \(\phi\) satisfies the conditions needed to conclude (14). Hence, it suffices to provide a bound of \(\mathcal{E}_{\phi}(\widehat{g}_{n})\) to bound \(\mathcal{E}(\widehat{g}_{n})\).
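These facts are easy to verify numerically. The sketch below computes \(H(\eta)\) and \(H^{-}(\eta)\) for the logistic loss on a grid of \(\alpha\) values and confirms \(H^{-}(\eta)>H(\eta)\) for \(\eta\neq 1/2\); for this loss, the unconstrained optimum is \(\alpha=\log(\eta/(1-\eta))\), while the sign-constrained optimum sits at \(\alpha=0\), giving \(H^{-}(\eta)=\log 2\).

```python
# Numerical check of classification calibration for the logistic loss
# phi(t) = log(1 + exp(-t)): H^-(eta) > H(eta) whenever eta != 1/2.
import numpy as np

def phi(t):
    return np.logaddexp(0.0, -t)                   # overflow-safe log(1 + e^{-t})

alphas = np.linspace(-20.0, 20.0, 200_001)         # includes alpha = 0 exactly
for eta in (0.10, 0.30, 0.49, 0.70, 0.95):
    risks = eta * phi(alphas) + (1 - eta) * phi(-alphas)
    H = risks.min()                                 # unconstrained optimum
    H_minus = risks[alphas * (2 * eta - 1) <= 0].min()  # wrong-sign optimum
    alpha_star = np.log(eta / (1 - eta))            # analytic minimizer
    assert abs(H - (eta * phi(alpha_star) + (1 - eta) * phi(-alpha_star))) < 1e-6
    assert H_minus > H                              # classification calibration
    print(f"eta = {eta:.2f}   H = {H:.4f}   H^- = {H_minus:.4f}")
```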
Lastly, following [1], we define the \(\psi\)-transform of a loss function \(\phi\) in the following:
**Definition 2.6**.: _Given \(\phi:\mathbb{R}\to[0,\infty)\), define \(\psi\)-transform \(\psi:[-1,1]\to[0,\infty)\) by \(\psi=\tilde{\psi}^{**}\) where_
\[\tilde{\psi}(\eta)=H^{-}\left(\frac{1+\eta}{2}\right)-H\left( \frac{1+\eta}{2}\right)\]
_and \(\tilde{\psi}^{**}\) is the Fenchel-Legendre biconjugate of \(\tilde{\psi}\). Recall that the Fenchel-Legendre biconjugate of a function \(f\) is the closed convex hull of \(f\), or equivalently, the largest lower semi-continuous function \(f^{**}\) such that \(f^{**}\leq f\)._
### Concepts from empirical risk minimization theory
In this subsection, we give a number of definitions that frequently appear in empirical risk minimization and empirical process theory used in the proofs of our main results.
**Definition 2.7** (\(\delta\)-minimal set of \(P\)-risk).: _For any class of functions \(\mathcal{G}\), we define the \(\delta\)-minimal set of \(P\) for \(\mathcal{G}\) as_
\[\mathcal{G}(\delta):=\{g:g\in\mathcal{G},E[g]-\inf_{f\in\mathcal{G }}E[f]\leq\delta\}.\]
Since we will be working with \(\phi\)-risks, it is convenient to define the analogous \(\delta\)-minimal set of \(\phi\)-risk as follows:
**Definition 2.8** (\(\delta\)-minimal set of \(\phi\)-risk).: \[\mathcal{G}_{\phi}(\delta):=\{g:g\in\mathcal{G},\widehat{\mathcal{ E}}_{\phi}(g)\leq\delta\}.\] (15)
We also define VC-dimension widely used as a useful measure of function class complexity. First is the definition of VC-index or VC-dimension for a collection of subsets of a given space.
**Definition 2.9** (VC-index, VC-class of sets).: _Let \(\mathcal{C}\) be a class of subsets of a set \(\mathcal{X}\). For an arbitrary subset of \(\mathcal{X}\), \(\{x_{1},\ldots,x_{n}\}\), \(\mathcal{C}\) is said to shatter \(\{x_{1},\ldots,x_{n}\}\) if \(|\{C\cap\{x_{1},\ldots,x_{n}\}:C\in\mathcal{C}\}|=2^{n}\) where \(|\cdot|\) denotes cardinality of the set. The VC-index of \(\mathcal{C}\) is then defined as_
\[V(\mathcal{C}):=\inf\{n:\forall A\subset\mathcal{X}\text{ with }|A|=n,\mathcal{C}\text{ does not shatter }A\}.\]
_If \(V(\mathcal{C})\) is finite, \(\mathcal{C}\) is said to be of VC-class with VC-index \(V(\mathcal{C})\)._
Now we can further extend the definition to a collection of functions. By a subgraph of a function, we mean
**Definition 2.10** (subgraph of function).: _For a function \(f:\mathcal{X}\to\mathbb{R},\) the subgraph of \(f\) is the set_
\[\{(x,t)\in\mathcal{X}\times\mathbb{R}:t<f(x)\}.\]
**Definition 2.11** (VC-class of functions).: _A class of functions \(\mathcal{F}\) is said to be of VC-class if the set \(\mathcal{C}_{\mathcal{F}}:=\{\text{subgraph of }f\colon f\in\mathcal{F}\}\) is of VC-class, and we define the VC-index of \(\mathcal{F}\) as \(V(\mathcal{F})=V(\mathcal{C}_{\mathcal{F}})\)_
In other words, the VC-index for a class of real-valued functions is defined in terms of the VC-index of the class of their subgraphs.
**Remark 2.12**.: _Sometimes, the VC-dimension is defined for a class of binary-valued functions and the definition is extended to real-valued functions by transforming them via a fixed threshold function, and the term "pseudodimension" is used instead to refer to our definition of VC-index for real-valued functions. However, for our purposes, the two definitions can be used interchangeably up to change in constants. See page 2 of [1] for details on justification._
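As a toy illustration of Definition 2.9, the snippet below enumerates the dichotomies that one-dimensional threshold sets \(\{(-\infty,t]:t\in\mathbb{R}\}\) realize on two points: one point is always shattered, two never are, so the VC-index of this class is 2.

```python
# Toy shattering check for Definition 2.9: threshold sets (-inf, t] on the
# real line shatter any single point but never two, so their VC-index is 2.
from itertools import product

def dichotomies(points, thresholds):
    """All labelings (x <= t for each point) realizable over the thresholds."""
    return {tuple(x <= t for x in points) for t in thresholds}

pts = [0.3, 0.7]
ts = [-1.0, 0.5, 2.0]                    # below both, between, above both
achieved = dichotomies(pts, ts)
everything = set(product([False, True], repeat=len(pts)))
print(sorted(achieved))                  # (False, True) is never realizable
print("two points shattered:", achieved == everything)   # False
```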
Now that we have defined the VC-class of functions, we introduce the Rademacher process, a special stochastic process widely used in functional concentration results:
**Definition 2.13** (Rademacher process).: _We define the Rademacher process indexed by a class of functions \(\mathcal{F}\) as_
\[R_{n}(f)=\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}f(X_{i}),\quad f\in \mathcal{F}, \tag{16}\]
_where \(\epsilon_{i}\)'s are i.i.d. Rademacher random variables (discrete random variables with mass 1/2 each on -1 and 1)._
Based on the \(\delta\)-minimal set, we can define the expected sup-norm
\[\phi_{n}(\delta):=\phi_{n}(\mathcal{G},\delta):=E_{\mathcal{D}_{ n}}\left[\sup_{g_{1},g_{2}\in\mathcal{G}(\delta)}|(E_{n}-E)(g_{1}-g_{2})|\right] \tag{17}\]
where \(E_{\mathcal{D}_{n}}\) denotes expectation with respect to \(\mathcal{D}_{n}\), \(E_{n}\) denotes expectation with respect to empirical measure based on \(\mathcal{D}_{n}\), and \(E\) denotes expectation with respect to \((\mathbf{X},Y)\). We also define the \(L_{2}(P)\)-diameter of the \(\delta\)-minimal set \(\mathcal{G}(\delta)\) as
\[D^{2}(\delta):=D^{2}(\mathcal{G},\delta):=\sup_{g_{1},g_{2}\in \mathcal{G}(\delta)}E(g_{1}-g_{2})^{2}. \tag{18}\]
We also define some function transformations introduced, for example, in [15] that will be used in technical parts of later proofs. Given a function \(\psi:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\), define
\[\psi^{b}(\delta) :=\sup_{\sigma\geq\delta}\frac{\psi(\sigma)}{\sigma},\] \[\psi^{\sharp}(\epsilon) :=\inf\{\delta:\delta>0,\psi^{b}(\delta)\leq\epsilon\}. \tag{19}\]
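For intuition, these transforms are easy to evaluate in closed form when \(\psi(\delta)=A\sqrt{\delta}\): then \(\psi^{b}(\delta)=A/\sqrt{\delta}\) and \(\psi^{\sharp}(\epsilon)=(A/\epsilon)^{2}\), which the short check below confirms numerically.

```python
# Numerical sanity check of the transforms in (19) for psi(delta) = A*sqrt(delta):
# psi^b(delta) = A / sqrt(delta) and psi^sharp(eps) = (A / eps)^2.
import numpy as np

A, eps = 2.0, 0.25
deltas = np.logspace(-6, 4, 1_000_001)
psi_b = A / np.sqrt(deltas)                 # sup_{sigma >= delta} A/sqrt(sigma)
psi_sharp = deltas[psi_b <= eps].min()      # inf{delta : psi^b(delta) <= eps}
print(psi_sharp, (A / eps) ** 2)            # both approximately 64.0
```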
### Neural Network Class
We consider constructing classifiers through a hybrid of plug-in and ERM procedures as realized by neural networks. Namely, we obtain an estimator \(\widehat{f}\) of the Bayes classifier that belongs to the class of feed-forward ReLU networks.
Write the ReLU function
\[\sigma(x)=\max\{x,0\},x\in\mathbb{R}.\]
Then, a feed-forward ReLU neural network with \(L\) hidden layers and width vector \(\mathbf{p}=(p_{1},\ldots,p_{L})\) is the function \(f\) defined by the following:
\[f(\mathbf{x})=\sum_{i=1}^{p_{L}}c_{1,i}^{L}f_{i}^{L}(\mathbf{x})+c_{1,0}^ {L},\]
which is the output of the last layer of the neural network, and \(f_{i}^{l}\) for \(l=1,\ldots,L,i=1,\ldots,p_{l}\) are recursively defined by
\[f_{i}^{l}(\mathbf{x})=\sigma\left(\sum_{j=1}^{p_{l-1}}c_{i,j}^{l-1}f_ {j}^{l-1}(\mathbf{x})+c_{i,0}^{l-1}\right)\]
and for the base case,
\[f_{j}^{0}(x):=x_{j}\]
for constants \(c_{i,j}^{l}\in\mathbb{R}\) for all corresponding indices \(i,j,l\). We also say that the \(l\)th hidden layer has \(m\) neurons when \(p_{l}=m\). See Figure 1 for an illustration.
Then, we denote the class of feed-forward ReLU neural networks by \(\mathcal{F}(L,p)\) where
\[\mathcal{F}(L,p)=\{f:f\text{ is a feed-forward ReLU neural network with }L\text{ layers and width vector }p\}.\]
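The recursive definition above translates directly into a few lines of numpy; the sketch below (with illustrative random weights) evaluates an \(f\in\mathcal{F}(L,\mathbf{p})\) with \(d=2\), \(L=2\) and \(\mathbf{p}=(4,3)\).

```python
# Direct transcription (illustrative) of the recursive definition: each hidden
# layer computes f_i^l = sigma(sum_j c_{i,j}^{l-1} f_j^{l-1} + c_{i,0}^{l-1}),
# and the output layer is affine with no activation.
import numpy as np

def relu_network(x, weights, biases):
    h = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + b, 0.0)       # ReLU hidden layers
    return weights[-1] @ h + biases[-1]      # affine output layer

rng = np.random.default_rng(0)
shapes = [(4, 2), (3, 4), (1, 3)]            # d = 2, p = (4, 3), scalar output
weights = [rng.normal(size=s) for s in shapes]
biases = [rng.normal(size=s[0]) for s in shapes]
print(relu_network([0.5, -1.0], weights, biases))
```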
### Barron approximation space
In this subsection, we define the target class of functions we wish to approximate. Specifically, we give a characterization of the function class that the regression function \(\eta\) belongs to. We introduce several definitions following [10].
**Definition 2.14** (Barron Approximation Space).: _Let \(U\subset\mathbb{R}^{d}\) be bounded with a non-empty interior and \(C>0\) a constant. We define \(\mathcal{BA}_{C}(U)\) to be the set of all measurable functions \(f:U\to\mathbb{R}\) such that for every \(m\in\mathbb{N}\), there exists a 1-hidden layer ReLU neural network \(g\) with \(m\) neurons such that_
\[\left\|f-g\right\|_{\infty}\leq\sqrt{d}Cm^{-1/2}, \tag{20}\]
_and such that all weights involved in \(g\) are bounded in absolute value by_
\[\sqrt{C}\cdot\left(5+\inf_{x_{0}\in U}\left[\left\|x_{0}\right\|_{ 1}+\vartheta\left(U,x_{0}\right)\right]\right) \tag{21}\]
_where \(\vartheta(U,x_{0}):=\sup_{\xi\in\mathbb{R}^{d}\setminus\{0\}}(\|\xi\|_{\infty} /|\xi|_{U,x_{0}})\) and \(|\xi|_{U,x_{0}}:=\sup_{x\in U}|\langle\xi,x-x_{0}\rangle|\). Then, we define \(\mathcal{BA}(U):=\bigcup_{C>0}\mathcal{BA}_{C}(U)\) and call it the Barron approximation space._
We give some interpretations of the term (21). Since \(U\) has a non-empty interior, it is possible to find an open rectangle in \(U\) whose edge widths are all greater than some positive \(\delta>0\). Then, it is straightforward to verify that, independent of the choice of \(x_{0}\in U\), \(\vartheta\left(U,x_{0}\right)\) is bounded from above by \(\frac{2}{\delta}\) (c.f. [10, Remark 2.3]). It is also bounded from below by a positive number since \(\|\xi\|_{\infty}/|\xi|_{U,x_{0}}\geq\frac{\|\xi\|_{\infty}}{\sup_{x\in U}\|\xi\|\|x-x_{0}\|}\geq\frac{1}{\sqrt{d}D}\), where \(D\) is the diameter of the set \(U\). With this view, we see that (21) is independent of the choice of \(f\) and fully determined by the shape and size of \(U\).
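These bounds are easy to probe numerically. For the open unit cube \(U=(0,1)^{d}\) with \(x_{0}\) at its center, \(|\xi|_{U,x_{0}}=\|\xi\|_{1}/2\), so \(\vartheta(U,x_{0})=2\), matching the \(2/\delta\) upper bound with \(\delta\) arbitrarily close to 1; the Monte Carlo sketch below (illustrative only) approaches this value.

```python
# Monte Carlo probe (illustrative) of vartheta(U, x0) for the open unit cube
# with x0 at its center: |xi|_{U,x0} = ||xi||_1 / 2, so the supremum of
# ||xi||_inf / |xi|_{U,x0} equals 2, attained along the coordinate axes.
import numpy as np

rng = np.random.default_rng(0)
d = 3
best = 0.0
for _ in range(200_000):
    xi = rng.normal(size=d)
    best = max(best, np.abs(xi).max() / (np.abs(xi).sum() / 2.0))
print(best)   # approaches 2 from below
```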
The definition of Barron approximation space is motivated by the direct approximation theorems for functions in the classical Barron space as introduced in [1], [1]. Barron functions there are defined as functions that admit a Fourier integral representation with finite first moment with respect to the magnitude distribution of the defining complex measure \(F\) in the integral representation. It is worth mentioning that there exist several distinct definitions of "Barron space" used in the literature. Under the definition of [13] (Definition 2.1), it is shown that any Barron function can be estimated by a 1-hidden layer neural network at the same rate as in (20), which motivates the given definition for the Barron approximation space. Hence, the Barron approximation space includes all the Barron functions as defined in [1], which include many important classes of functions such as functions with high-order derivatives. See Section V of [1] for more examples. For another common definition of Barron space, convergence rates of \(O(m^{-1/2})\) in \(L^{2}\) and \(O(\sqrt{d}m^{-1/2})\) in \(L^{\infty}\) are shown; see [1], [E+20]. Some of the embedding relationships between these different definitions are investigated in [13].

Figure 1: An example of a fully-connected feed-forward neural network with 2 hidden layers, input dimension 2, and output dimension 1.
## 3 Review of existing works
In this section, we briefly review some important risk bounds from the literature. In Section 3.1, we first discuss common assumptions on the joint distribution of \((X,Y)\). In Section 3.2, we discuss empirical risk-minimizing classifiers and the results associated with them. In Section 3.3, we discuss plug-in classifiers and the results associated with them.
### Distributional assumptions
To obtain any meaningful results on the convergence rate of excess risk, assumptions on the distribution governing \((X,Y)\) are necessary. Indeed, Theorem 7.2 of [12] shows that for any sequence of classifiers, there always exists a "bad" distribution such that the excess risk converges to 0 at an arbitrarily slow rate.
In this subsection, we introduce three assumptions on distribution commonly used in the literature. In subsection 3.1.1, we discuss the assumption on the class of candidate sets where the optimal decision set belongs, specifically its set-class complexity. In subsection 3.1.2, we discuss the assumption on function class complexity, which is the complexity of the class of candidate functions that the regression function \(\eta\) defined in (4) belongs to. In subsection 3.1.3, we discuss the Mammen-Tsybakov margin assumption, which is an assumption on the behavior of \(\eta\) near the decision boundary. Specifically, we impose restrictions on the probability of sets where the regression function is close to the boundary, i.e., points \(x\) such that \(\eta(x)\) is near \(1/2\). In the rest of the paper, we will call this the margin assumption.
#### 3.1.1 Assumption on the complexity of sets
Let \(G^{*}=\{x:\eta(x)\geq 1/2\}\), which is the decision set corresponding to the Bayes classifier that satisfies
\[G^{*}=\operatorname*{arg\,min}_{G\text{ measurable set}}P(Y\neq \mathbb{1}\left(X\in G\right)).\]
A standard assumption in the ERM framework is that for some constant \(\rho>0\), \(G^{*}\) belongs to some given class of sets \(\mathcal{G}\) that satisfies
\[\mathcal{H}(\epsilon,\mathcal{G},d_{\Delta})\leq c_{0}\epsilon^{-\rho} \tag{22}\]
where \(\mathcal{H}(\epsilon,\mathcal{G},d_{\Delta})\) denotes the \(\epsilon\)-entropy of the set \(\mathcal{G}\) with respect to the pseudo-metric \(d_{\Delta}\) defined as
\[d_{\Delta}(A,B):=P_{X}(A\Delta B) \tag{23}\]
for sets \(A\) and \(B\) in \(\mathbb{R}^{d}\), where \(A\Delta B:=(A\backslash B)\bigcup(B\backslash A)\) is the symmetric difference of sets. Recall that the \(\epsilon\)-entropy \(\mathcal{H}(\epsilon,\mathcal{G},d_{\Delta})\) is defined as the natural logarithm of the minimal number of \(d_{\Delta}\)-balls of radius \(\epsilon\) required to cover \(\mathcal{G}\). Note it is necessary that \(\mathcal{G}\) be totally bounded with respect to the pseudo-metric \(d_{\Delta}\) to satisfy this complexity assumption, since otherwise \(\mathcal{H}(\epsilon,\mathcal{G},d_{\Delta})\) would be infinite.
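As a simple illustration (ours), let \(\mathcal{G}=\{[0,t]:t\in[0,1]\}\) with \(P_{X}\) uniform on \([0,1]\). Then \(d_{\Delta}([0,s],[0,t])=|s-t|\), so an \(\epsilon\)-net needs on the order of \(1/\epsilon\) elements and

\[\mathcal{H}(\epsilon,\mathcal{G},d_{\Delta})\asymp\log(1/\epsilon),\]

which satisfies (22) for every \(\rho>0\); classes of sets with smooth boundaries are the standard examples that genuinely exhibit the polynomial behavior \(\epsilon^{-\rho}\).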
We also remark that sometimes a closely related concept of \(\delta\)-entropy with bracketing is used, for example in [14]. Since we will not need it for our purposes, we omit the details.
#### 3.1.2 Assumption on regression function \(\eta\)
Similarly, we may assume we are given a large class of functions \(\Sigma\) which includes the true \(\eta\) we are looking for. Then, a complexity assumption on \(\Sigma\) is described as
\[\mathcal{H}(\epsilon,\Sigma,L_{p})\leq c_{1}\epsilon^{-\rho} \tag{24}\]
where \(\rho>0\), \(p\geq 1\), and \(\mathcal{H}(\epsilon,\Sigma,L_{p})\) is the \(\epsilon\)-entropy of the class of functions \(\Sigma\) with respect to the \(L_{p}\) norm for \(P_{X}\) on \(\mathbb{R}^{d}\). Recall that \(\mathcal{H}(\epsilon,\Sigma,L_{p})\) is defined as the natural logarithm of the minimal number of \(\epsilon\)-balls in \(L_{p}\) norm to cover \(\Sigma\).
#### 3.1.3 Margin assumption
This is an assumption on the joint distribution of \(X\) and \(Y\). Intuitively, it controls (with margin parameter \(\alpha\)) how \(\eta(x)\) behaves around the boundary of the optimal set \(\{x:\eta(x)\geq 1/2\}\). Bigger \(\alpha\) means there is a jump of \(\eta(x)\) near this boundary, which is favorable for learning, and smaller \(\alpha\) close to \(0\) means there is a plateau behavior near the boundary, a difficult situation for learning. Specifically, we assume there exist constants \(C_{0}>0\) and \(\alpha\geq 0\) such that
\[P_{X}(0<|\eta(x)-1/2|\leq t)\leq C_{0}t^{\alpha},\quad\forall t>0, \tag{25}\]
where \(\eta\) is the regression function as in (4). Note the assumption becomes trivial for \(\alpha=0\) and stronger for larger \(\alpha\). An equivalent way to say this is that \(|2\eta-1|\in L_{\alpha,\infty}(P_{X})\) where \(L_{\alpha,\infty}(P_{X})\) is the Lorentz space with respect to measure \(P_{X}\). See [10],[12] for further discussion on this assumption.
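For instance (a worked example of ours), if \(X\) is uniform on \([0,1]\) and \(\eta(x)=x\), then

\[P_{X}(0<|\eta(X)-1/2|\leq t)=\min\{2t,1\}\leq 2t,\]

so (25) holds with \(\alpha=1\). If instead \(\eta\) crosses \(1/2\) with a jump, the left-hand side vanishes for all small \(t\), and (25) holds for every \(\alpha\geq 0\) with a suitable constant.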
This assumption is also called the Mammen-Tsybakov noise condition and has the following equivalent characterizations as summarized in [1]:
* \(\exists\beta>0\) such that for any measurable classifier \(g\), we have \(\mathbb{E}\left[\mathbbm{1}_{\{x:g(x)\neq M^{*}(x)\}}(X)\right]\leq\beta\left( L(g)-L^{*}\right)^{\kappa}\).
* \(\exists c>0\) such that \(\forall A\in\mathcal{B}(\mathbb{R}^{d})\), we have \(\int_{A}dP(x)\leq c\left(\int_{A}|2\eta(x)-1|dP(x)\right)^{\kappa}\).
* \(\exists B>0,\forall t\geq 0\), we have \(\mathbb{P}\{|2\eta(X)-1|\leq t\}\leq Bt^{\frac{\kappa}{1-\kappa}}\).
Note \(M^{*}\) is as defined in (5), and \(\mathcal{B}(\mathbb{R}^{d})\) refers to the Borel sigma-algebra of \(\mathbb{R}^{d}\), which coincides with the \(d\)-fold product sigma-algebra \(\underbrace{\mathcal{B}(\mathbb{R})\otimes\cdots\otimes\mathcal{B}(\mathbb{R})}_{d\text{ times}}\). Comparing the third characterization with (25) shows that, up to constants, the two parametrizations correspond via \(\alpha=\frac{\kappa}{1-\kappa}\).
Now that we have discussed some standard assumptions, we present a number of important known convergence rates from the literature.
### Results on ERM classifiers
Let \(\mathcal{C}\) be a given class of sets in \(\mathbb{R}^{d}\). ERM classifier is defined as the function
\[M_{\widehat{G}}(X):=\mathbbm{1}_{\{X:X\in\widehat{G}\}}(X):= \begin{cases}1,&\text{if }X\in\widehat{G};\\ 0,&\text{otherwise}.\end{cases} \tag{26}\]
where
\[\widehat{G}:=\operatorname*{arg\,min}_{G\in\mathcal{C}}\frac{1}{n }\sum_{i=1}^{n}\mathbbm{1}(\mathbbm{1}_{G}(X_{i})\neq Y_{i}). \tag{27}\]
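The following is a minimal sketch (ours, for illustration only) of the ERM classifier (26)-(27) when \(\mathcal{C}\) is a finite grid of half-lines \(G_{\theta}=[\theta,\infty)\) in dimension \(d=1\); it simply scans the candidate sets and keeps the empirical-risk minimizer.

```python
import numpy as np

def erm_halfline(X, Y, thetas):
    """Return the threshold theta whose set G = [theta, inf) minimizes
    the empirical 0-1 risk (1/n) * sum 1(1_G(X_i) != Y_i)."""
    best_theta, best_risk = None, np.inf
    for theta in thetas:
        pred = (X >= theta).astype(int)     # 1_G(X_i)
        risk = np.mean(pred != Y)           # empirical risk of this set
        if risk < best_risk:
            best_theta, best_risk = theta, risk
    return best_theta, best_risk

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=200)
Y = (rng.uniform(size=200) < np.clip(X, 0.1, 0.9)).astype(int)  # eta(x) ~ x
theta, risk = erm_halfline(X, Y, thetas=np.linspace(0, 1, 101))
print(theta, risk)  # theta should land near the Bayes threshold 0.5
```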
In this framework, the Bayes classifier corresponds to \(M_{G^{*}}\) where \(G^{*}=\{x:\eta(x)\geq 1/2\}\).
As can be seen from the definition, ERM classifiers are completely determined by the choice of decision set \(\widehat{G}\subset\mathbb{R}^{d}\). A frequently used assumption on the distribution is that \(G^{*}\) belongs to a certain class of sets,
say \(\Sigma\), whose complexity is bounded. Excess risks of ERM classifiers have been extensively studied under various assumptions on \(\mathcal{C}\) and the underlying distribution \(P\), and convergence results take the form
\[\sup E[L(M_{\widehat{G}})]-L(M^{*})=O(n^{-\beta}) \tag{28}\]
for some \(\beta>0\). Here the supremum is taken over the class of distributions such that \(G^{*}\in\Sigma\) and satisfy additional assumptions such as regularity conditions on the marginal distribution of \(X\). Many of the proofs in this direction rely on results from empirical process theory.
First, [14] showed that (28) holds with an exponent \(\beta\) that depends on the margin parameter \(\alpha\) (see (25)) as well as the complexity \(\rho\) of the true candidate family of sets that \(G^{*}\) belongs to (see Section 3.1.1). The main result therein is that for all \(P\) satisfying the margin assumption and the complexity assumption, there exists an ERM-type classifier \(M_{n}\) such that
\[E_{n}[L(M_{n})]-L^{*}=O(n^{-\frac{1+\alpha}{2+\alpha+\alpha\rho}}) \tag{29}\]
where \(L(M_{n})\) is as in (2) and \(E_{n}\) denotes expectation with respect to samples \(\mathcal{D}_{n}\).
The limitation in [14] was that the solution to ERM that achieves optimal rate was either infeasible or unrealistic when the class \(\mathcal{C}\) that contains \(G^{*}\) is large. Specifically, the empirical minimizer was chosen among the entire candidate family of sets or as a sieve estimator where the sieve is constructed based on prior knowledge of the noise parameter.
Paper [13] addressed such issues by showing that the same optimal rate as (29) can be achieved with empirical minimizer even when \(\mathcal{C}\) is a finite sieve or \(\epsilon\)-net over the class of sets containing \(G^{*}\) constructed without prior knowledge of noise condition. Here, the construction of sieve or \(\epsilon\)-net only assumes knowledge of \(\delta\)-entropy with bracketing of \(\mathcal{C}\).
### Results on plug-in estimators
When we have a function \(f\) approximating \(\eta\) in some sense (e.g., in \(L^{p}\)), recall from (6) the corresponding plug-in classifier \(p_{f}\). Under (A1) the Mammen-Tsybakov noise condition with noise parameter \(\alpha\), (A2) a regularity assumption on the distribution of \(\mathbf{X}\), and (A3) a smoothness assumption on \(\eta\) (such as continuous differentiability or a Hölder condition) parametrized by smoothness index \(\beta>0\), [1] showed that fast and even super-fast rates (better than \(O(n^{-1})\)) are possible. We note that assumption (A3) often implies complexity assumption (24) since most smooth classes of functions satisfy (24); see Section V of [10].
Specifically, the main result is that there exists a plug-in classifier \(M_{n}\) of the form (6) such that uniformly over all \(P\) satisfying the above three assumptions, the following holds:
\[E[P(M_{n}(\mathbf{X})\neq Y)]-L^{*}=O(n^{-\frac{\beta(1+\alpha)}{2\beta+d}}) \tag{30}\]

where \(d\) is the dimension of the input space.
This result and the result of the ERM classifier differ critically in the assumptions they make: while ERM makes an assumption on the complexity of the class of sets, here the assumption is on the complexity of the function class \(\eta\) belongs to. As noted in [1], no inclusion relationship holds between these two assumptions. Hence, it is incorrect to nominally compare the rates of (29) and (30). In [1], the local polynomial estimator and sieve estimator of \(\eta\) are shown to yield minimax optimal rates of convergence when \(\eta\) is assumed to belong to a Hölder class of functions.
Finally, the work [12] introduces a new assumption on distributions, namely the geometric noise assumption, which, roughly speaking, controls the measure \(|2\eta(x)-1|P_{X}\) near the decision boundary. The noise level is parametrized by \(\beta\) in an analogous way to how \(\alpha\) controls noise in the margin assumption. They show that support vector machines based on Gaussian RBF kernels achieve the rates \(O(n^{-\frac{\beta}{2\beta+1}})\) and \(O(n^{-\frac{2\beta(\alpha+1)}{2\beta(\alpha+2)+3\alpha+4}})\) in two different regimes according to whether \(\beta\leq\frac{\alpha+2}{2\alpha}\). One distinguishing feature of this work is that the geometric noise assumption makes no smoothness assumption on \(\eta\) or regularity condition on \(P_{X}\) as in the works discussed above. We remark that the rates shown here are implicitly dependent on the input dimension by the way the geometric noise assumption is defined. The analysis in this work is involved mostly because of the explicit regularization term in the loss function of SVMs. One caveat is that no results are known regarding the optimality of these rates in this regime.
## 4 Main Results
We assume the basic setup from Section 2.1. Notably, we let \(\mathcal{D}_{n}\) denote the sample of \(n\) data points and \(P\) the corresponding distribution. Furthermore, we suppose that \(\mathcal{F}_{n}\) is a class of fully-connected feed-forward ReLU neural networks. We state the details on what \(\mathcal{F}_{n}\) we consider in the following assumption:
**Assumption 4.1** (Assumptions on Estimating Function Class).: _We restrict ourselves to the function class \(\mathcal{F}_{n}:=\bigcup_{\mathbf{p}}\mathcal{F}(L,p)\) whose depth \(L\) is a fixed constant greater than \(10\) and width vector \(p\) is such that the total number of parameters is bounded by some function of \(n\) denoted by \(W(n)\), and such that the ranges of functions in the class are contained in the interval \([-M/2,M/2]\) for some large enough \(M>0\)._
For the logistic loss \(\phi\) (cf. (8)), suppose that \(\widehat{f}_{n}\) is the empirical \(\phi\)-risk minimizer
\[\widehat{f}_{n}:=\operatorname*{arg\,min}_{f\in\mathcal{F}_{n}}\frac{1}{n} \sum_{i=1}^{n}\phi\bullet f(X_{i},Y_{i}). \tag{31}\]
Recall the \(\bullet\) operator defined in (9). The problem we investigate is the uniform rate at which the excess risk \(\mathcal{E}(\widehat{f}_{n})\) converges to \(0\) over the class of distributions, i.e., the set of probability measures \(P\), satisfying the following assumptions:
**Assumption 4.2** (Assumptions on Distribution).: _We restrict ourselves to the Borel distributions of \((X,Y)\) satisfying all of the below:_
* _The regression function_ \(\eta\) _defined in (_4_) satisfies the margin assumption (_25_)._
* _The regression function_ \(\eta\) _is bounded away from_ \(0\) _and_ \(1\) _by some arbitrary constant almost surely._
* _X is supported on a compact subset_ \(\Omega\) _of_ \(\mathbb{R}^{d}\)_._
* _For some positive integer_ \(M\)_, there exists an open cover_ \(\{U_{i}\}_{i=1,\dots,M}\) _of_ \(\Omega\)_, such that over each set_ \(U_{i}\)_, there exists a 1 hidden-layer neural network_ \(I_{i}:\mathbb{R}^{d}\to\mathbb{R}\) _whose restriction to_ \(U_{i}\) _satisfies all the requirements of Definition_ 2.14 _so as to make_ \(\eta|_{U_{i}}\in\mathcal{BA}(U_{i})\) _(cf. Section_ 2.5_)._
The margin assumption controls the mass of points at which \(\eta\) is close to \(1/2\), with the behavior near the decision boundary governed by \(\alpha\). Bigger \(\alpha\) corresponds to the more favorable situation where \(\eta\) moves away from \(1/2\) quickly, i.e., a near-jump at the decision boundary. Moreover, the assumption that \(\eta\) is bounded away from \(0\) and \(1\) can be relaxed by requiring exponential decay of the probability of \(x\) such that \(\eta(x)\) is near \(0\) and \(1\), but we work with the current assumption for simplicity.
The last assumption roughly says that \(\eta\) is locally characterized as a function in the Barron approximation space. Note the choice of open cover may be an infinite one but can be reduced to a finite cover by the compactness of \(\Omega\). Similarly, we may assume that each \(U_{i}\) is bounded. In the paper [12] where Barron approximation space is defined and analyzed, they define the so-called sets with Barron class boundary, which says that \(\Omega\) is locally defined as the graph of a function in the Barron approximation space: that is, \(\mathbb{1}_{\Omega\cap Q_{i}}(x)=\mathbb{1}_{\{x:x_{j}\leq f(x^{(j)})\}}(x)\) for some \(j\in\{1,\dots,d\}\) where \(x^{(j)}\) denotes the \(d-1\)-dimensional vector formed from \(x\) by dropping its \(j\)th component. However, their \(\Omega\) is less general than our set \(\{x:\eta(x)\geq 1/2\}\) because they assume the relationship \(Y=\mathbb{1}_{\Omega}(X)\) so that actually \(Y\) is \(\sigma(X)\)-measurable. Thus, our setting of describing the decision set includes the description of \(\Omega\) in [12] as a special case.
### Approximate excess risk bound
The first preliminary result shows the rate at which the approximate excess \(\phi\)-risk defined in (11) converges to \(0\). Here, there is no assumption on the space of functions to which \(\eta\) belongs.
**Theorem 4.1**.: _We assume that the following holds for all integers \(n\geq 1\): First, suppose that \(\mathcal{F}_{n}\) satisfies Assumption 4.1. Second, suppose that \(\tilde{f}\in\mathcal{F}_{n}\) satisfies \(\tilde{f}=\operatorname*{arg\,min}_{f\in\mathcal{F}_{n}}P(\phi\bullet f)\). Third, let \(\{\tau_{n}\}\) be a
sequence of positive numbers such that for distribution \(P\) there exists a neural network \(I_{n}\in\mathcal{F}_{n}\) satisfying \(\mathcal{E}_{\phi}(I_{n})\leq c_{0}\tau_{n}\) for some constant \(c_{0}>0\). Denote_
\[\omega_{n}(\delta) :=E\sup_{f\in\mathcal{F}_{n},\left\|f-\tilde{f}\right\|_{L_{2}(P_{X})}^{2}\leq\delta}|R_{n}(f-\tilde{f})|,\] \[\widehat{f}_{n} :=\operatorname*{arg\,min}_{f\in\mathcal{F}_{n}}E_{n}(\phi\bullet f),\]
_where \(E_{n}\) denotes expectation with respect to the empirical measure based on \(\mathcal{D}_{n}\). Then, there exist constants \(K>0,C>0,c>0\) such that for all \(\alpha\in(0,1]\),_
\[P\left(\widehat{\mathcal{E}}_{\phi}(\widehat{f}_{n})\geq K \left(\max\left\{\omega_{n}^{\sharp}\left(c\alpha\right)-\tau_{n},\tau_{n} \alpha\right\}+\frac{t}{n}+\sqrt{\frac{t\tau_{n}}{n}}\right)\right)\] \[\leq Ce^{-t}\]
_where \(\omega_{n}^{\sharp}\) refers to the \(\sharp\)-transformation (19) of the function \(\omega_{n}\). In particular, if there exists a neural network \(I_{n}\in\mathcal{F}_{n}\) such that \(\mathcal{E}_{\phi}(I_{n})\leq\frac{C}{\sqrt{W(n)}}\) for some constant \(C>0\), for all \(\alpha\in(0,1]\), we have_
\[P\bigg{(}\widehat{\mathcal{E}}_{\phi}(\widehat{f}_{n})\geq K \bigg{(}\max\left\{\omega_{n}^{\sharp}\left(c\alpha\right)-\tau_{n},\tau_{n} \alpha\right\}+\frac{t}{n}\] \[\qquad\qquad+\sqrt{\frac{t}{n\sqrt{W(n)}}}\bigg{)}\bigg{)}\leq Ce ^{-t}. \tag{32}\]
### Approximation of the Barron Approximation Space
In this section, we give some intermediate approximation results regarding the regression function \(\eta\). We begin by summarizing the direct approximation properties of \(\eta\) by neural networks in the \(L^{\infty}\)-norm.
**Lemma 4.2**.: _Suppose the regression function \(\eta\) satisfies Assumption 4.2. Then, there exists a neural network \(\tilde{f}\in\mathcal{F}(11,(d,p,\ldots,p,1))\) such that \(\left\|\eta-\tilde{f}\right\|_{\infty}\leq Cp^{-\frac{1}{2}}\) for constant \(C>0\) only dependent on \(d\)._
While [5] uses the so-called tube-compatible assumption on the distribution of \(X\) to obtain a similar result, we relax this assumption by making use of the Mammen-Tsybakov noise condition. Note that the surrogate loss function \(\phi\) is classification-calibrated and, from Definition 2.5, we observe the minimizer \(f_{\phi}^{*}(x)=H(\eta(x))\) satisfies
\[\mathbb{1}_{f_{\phi}^{*}(x)\geq 0}=\mathbb{1}_{\eta(x)\geq 1/2}.\]
In fact, since our \(\phi\) is the logistic loss, there is a closed-form expression for \(f_{\phi}^{*}\) as
\[f_{\phi}^{*}(x)=\log\left(\frac{\eta(x)}{1-\eta(x)}\right). \tag{33}\]
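For completeness, (33) follows from pointwise minimization of the conditional \(\phi\)-risk: for fixed \(x\), writing \(v=f(x)\) and \(\phi(v)=\log(1+e^{-v})\), one minimizes

\[v\mapsto\eta(x)\phi(v)+(1-\eta(x))\phi(-v).\]

Setting the derivative to zero gives \(-\eta(x)(1-s(v))+(1-\eta(x))s(v)=0\) for the sigmoid \(s(v)=(1+e^{-v})^{-1}\), whence \(s(v)=\eta(x)\) and \(v=\log(\eta(x)/(1-\eta(x)))\).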
Now observe that \(\log(x/(1-x))\) for \(0<x<1\) is smooth and furthermore Lipschitz continuous when \(x\) is restricted to a compact subset of \((0,1)\). Hence, it is not hard to see that \(f_{\phi}^{*}\in\mathcal{BA}(Q_{m})\). We make this statement precise in the lemma below:
**Lemma 4.3**.: _Suppose the regression function \(\eta\) satisfies the second and the last bullet point of Assumption 4.2. Then, if we define \(f_{\phi}^{*}(x)=\log\left(\frac{\eta(x)}{1-\eta(x)}\right)\), there exists \(I_{1}\in\mathcal{F}(2,\overline{p})\) for \(\overline{p}=(p,p)\in\mathbb{N}^{2}\) such that \(\left\|f_{\phi}^{*}|_{Q_{m}}-I_{1}\right\|_{Q_{m},\infty}\leq Cp^{-\frac{1}{2}}\). Moreover, for any \(\delta>0\), there exists \(I_{2}\in\mathcal{F}(3,\tilde{p})\) for \(\tilde{p}=(2d+p,2d+p,2d+1,1)\in\mathbb{N}^{4}\) and some \(\Omega_{0}\subset\Omega\) with probability at least \(1-\delta\) such that \(\left\|f_{\phi}^{*}-I_{2}\right\|_{\Omega_{0},\infty}\leq Cp^{-\frac{1}{2}}\) for constants \(C>0\) only dependent on \(d\)._
We defer the proof to Section 5.
Now, we provide an approximation result which shows the existence of a sequence of neural network functions that achieves the rate \(O(N^{-1/2})\) for the excess \(\phi\)-risk.
**Theorem 4.4**.: _Let the input dimension \(d\) be an integer greater than \(1\), and suppose the regression function \(\eta\) satisfies Assumption 4.2. Let \(f_{\phi}^{*}\) be the minimizer of \(\phi\)-risk as in (33). Then, there exists a neural network \(I_{N}\) with 3 hidden layers with \(O(N)\) parameters such that we have_
\[\mathcal{E}_{\phi}(I_{N})=E(\phi\bullet I_{N})-E(\phi\bullet f_{ \phi}^{*})\leq CN^{-1/2}. \tag{34}\]
_where \(C\) may depend on \(d\) linearly._
### Classification Error Bounds
In this section, we construct an appropriate class of neural networks and apply Theorem 4.1 to obtain a convergence rate for the excess risk. Note that Theorem 4.1 only guarantees the convergence of the approximate excess \(\phi\)-risk, \(\widehat{\mathcal{E}}_{\phi}(\widehat{f}_{n})\). To make use of (14) that relates the excess risk to the excess \(\phi\)-risk, we observe that with the simple decomposition \(\mathcal{E}_{\phi}(f)=E[\phi\bullet f]-\inf_{g\in\mathcal{F}_{n}}E[\phi \bullet g]+\inf_{g\in\mathcal{F}_{n}}E[\phi\bullet g]-E[\phi\bullet f_{\phi}^{ *}]\), it will be made straightforward that the same rate applies to the excess \(\phi\)-risk by the way we have constructed \(\mathcal{F}_{n}\). In particular, its complexity is controlled so that the bottleneck from approximation and estimation error are the same.
The first step consists of combining Theorem 4.1 with Theorem 4.4 to obtain a rate of convergence for the approximate excess \(\phi\)-risk when \(\mathcal{F}_{n}\) is an appropriately chosen class of neural network functions. The result is given in the following proposition:
**Proposition 4.5**.: _Suppose that \(\mathcal{F}_{n}\) is a class of fully-connected feed-forward ReLU neural networks satisfying Assumption 4.1 with \(W(n)\) being some constant multiple of \(n^{2/3}\). Denote by \(\Sigma\) the set of joint distributions on \((X,Y)\) satisfying Assumption 4.2. Let \(\widehat{f}_{n}\) be the empirical \(\phi\)-risk minimizer:_
\[\widehat{f}_{n}:=\operatorname*{arg\,min}_{f\in\mathcal{F}_{n}}E _{n}(\phi\bullet f).\]
_Then,_
\[\sup_{P\in\Sigma}P\left(\widehat{\mathcal{E}}_{\phi}(\widehat{f}_{n})\geq K\left(1+t\right)n^{-\frac{1}{3}}\log(n)\right)\leq Ce^{-t}. \tag{35}\]
Now, to apply either Zhang's inequality (13) or Bartlett's improved bound (14), we need to show convergence of the excess \(\phi\)-risk. This type of situation is also noted in [1], in which they simply assume \(\inf_{f\in\mathcal{F}_{\lambda}}L_{\phi}(f)=L_{\phi}(f_{\phi}^{*})\) when \(\mathcal{F}_{\lambda}\) is a class of regularized (by parameter \(\lambda\)) boosting classifiers. This is true in simple cases: for example, when \(f\) is of bounded variation and the base classifiers in boosting are decision stumps. For us, this condition is not true, but a simple inspection of the rates (34) and (35) shows that indeed the same rate as (35) may be claimed for \(\mathcal{E}_{\phi}(\widehat{f}_{n})\). Applying (14), the conclusion is summarized as follows:
**Theorem 4.6**.: _Suppose that \(\mathcal{F}_{n}\) is a class of fully-connected feed-forward ReLU neural networks satisfying Assumption 4.1. Let \(\mathcal{D}_{n}\) be a given \(n\) samples of data: \(\mathcal{D}_{n}=\{(X_{1},Y_{1}),\ldots(X_{n},Y_{n})\}\) each i.i.d. from a common distribution \(P\) that belong to a class of distributions \(\Sigma\) that satisfy Assumption 4.2. Then, for some constant \(C>0\), the empirical \(\phi\)-risk minimizer defined as_
\[\widehat{f}_{n}:=\operatorname*{arg\,min}_{f\in\mathcal{F}_{n}} \frac{1}{n}\sum_{i=1}^{n}\phi\bullet f(X_{i},Y_{i}),\]
_satisfies the following:_
\[\sup_{P\in\Sigma}E[P(p_{\widehat{f}_{n}}(X)\neq Y)]-P(M^{*}(X) \neq Y)\] \[\leq Cn^{-\frac{1+\alpha}{3(2+\alpha)}}(\log(n))^{\frac{1+\alpha} {2+\alpha}}. \tag{36}\]
Note that the rate improves with \(\alpha\), from \(n^{-\frac{1}{6}}\) in the worst case \(\alpha=0\) to \(n^{-\frac{1}{3}}\) as \(\alpha\to\infty\), the latter indicating a more favorable situation for learning, i.e., near-jumps at the boundary of the regression function.
### Minimax Lower Bound
In this subsection, we present a minimax lower bound corresponding to the class of distributions considered in the previous subsection. The result shows that the upper bound of Theorem 4.6 is tight up to a logarithmic factor.
**Theorem 4.7**.: _For the class of distributions \(\Sigma\) that satisfy Assumption 4.2, the lower bound for the minimax excess risk is given by_
\[\inf_{\widehat{f}_{n}}\sup_{P\in\Sigma}E\left[P(\widehat{f}_{n}( \mathbf{X})\neq Y)-P(M^{*}(\mathbf{X})\neq Y)\right]\] \[\geq Cq^{-r}mw(1-q^{-r}\sqrt{nw})\]
_for some constant \(C>0\), and any positive integers \(q,m\) and positive real \(w\) satisfying \(m\leq q^{d}\), \(w\leq\frac{1}{m}\), and \(wm\leq\frac{q^{-r\alpha}}{2^{\alpha}}\). In particular, choosing \(m=q^{d}\), \(w=\frac{q^{-\alpha r-d}}{2^{\alpha}}\), \(r=\frac{2d}{2+\alpha}\), and \(q=\lfloor\overline{C}n^{\frac{1}{3r(2+\alpha)}}\rfloor\), the lower bound becomes \(Cn^{-\frac{1+\alpha}{3(2+\alpha)}}\)._
We remark that the rate ranges from \(n^{-\frac{1}{6}}\) to \(n^{-\frac{1}{3}}\) as \(\alpha\) varies from \(0\) to \(\infty\). We note that the regime we study is an inherently difficult one, general as it is, and that fast rates better than \(n^{-\frac{1}{2}}\) are not possible as in [10], [11], even with the margin assumption imposed on the class of distributions. As noted in the discussion section of Tsybakov's work [11], the margin assumption has served as a key assumption separating results with rates slower or faster than \(n^{-\frac{1}{2}}\). Our work then adds a new part of the picture: how does the margin assumption affect the convergence rate when the class of distributions is so large that fast rates are not possible? In fact, a bound on the metric entropy, i.e., the logarithm of the minimal covering number, is sufficient to ensure that for large \(\alpha\), fast rates are possible (c.f. [11],[11]). Further assumptions on the smoothness of the regression function and the regularity of the support of \(P_{X}\) can even lead to super-fast rates faster than \(n^{-1}\) (c.f. [1]). Theorem 4.7 then shows that such fast rates are not possible in our set-up and, furthermore, that the rate (36) achieved by the empirical \(\phi\)-risk minimizer is indeed minimax optimal.
## 5 Proofs
### Proof of Theorem 4.1
The first step is to obtain the standard excess risk bound using the techniques from [12]. Recalling the definitions of expected sup-norm (17) and \(L_{2}(P)\)-diameter of \(\delta\)-minimal set (18), for any \(q>1\) and \(t>0\), denote
\[V_{n}^{t}(\sigma):=2q\left[\phi_{n}^{b}(\sigma)+\sqrt{\left(D^{2}\right)^{b}( \sigma)}\sqrt{\frac{t}{n\sigma}}+\frac{t}{n\sigma}\right],\sigma>0.\]
Let
\[\sigma_{n}^{t}:=\sigma_{n}^{t}(\mathcal{F};P):=\inf\left\{\sigma:V_{n}^{t}( \sigma)\leq 1\right\}.\]
**Proposition 5.1**.: _Suppose that functions in \(\mathcal{F}\) take values in \([0,1]\). Let_
\[\widehat{f}_{n}:=\operatorname*{arg\,min}_{f\in\mathcal{F}}P_{n}(f).\]
_Then, for all \(t>0\),_
\[\mathbb{P}\left\{\widehat{\mathcal{E}}\left(\widehat{f}_{n}\right)>\sigma_{n} ^{t}\right\}\leq C_{q}e^{-t}\]
_where_
\[C_{q}:=\max\left\{\frac{q}{q-1},e\right\}.\]
In fact, the above proposition easily extends to function classes with values in a compact subset of \(\mathbb{R}\) with only changes to constants.
The second step is to obtain a bound on the second moment of the process \(\{\phi\bullet g\}_{g\in\mathcal{G}_{n}}\) in terms of its first moment.
For a vector space \(S\), define the modulus of convexity of a convex function \(f:S\to\mathbb{R}\) with respect to a metric \(d\) as

\[\delta(\epsilon):=\inf\biggl\{\frac{f(x)+f(y)}{2}-f\left(\frac{x+y}{2}\right):x,y\in S,\,d(x,y)\geq\epsilon\biggr\}\]
and call \(f\) strictly convex with respect to \(d\) if \(\delta(\epsilon)>0\) for all \(\epsilon>0\). The first lemma shows the logistic loss satisfies some convexity conditions and is essentially proved in [1]:
**Lemma 5.2**.: _Suppose random variable \(X\) is compactly supported, and \(\mathcal{F}\) is a class of functions defined on the support of \(X\) such that for all \(f\in\mathcal{F}\), \(f(X)\) is almost surely contained in \([-M/2,M/2]\) for some constant \(M>0\). For logistic loss \(\phi\), \(L_{\phi}(f)=E[\phi((2Y-1)f(X))]\) is a strictly convex functional of \(f\) with respect to \(L^{2}\) distance \(d\), which is defined for two integrable functions \(f,g\) as \(d(f,g)=E[(f(X)-g(X))^{2}]^{\frac{1}{2}}\), and the modulus of convexity of this functional \(L_{\phi}\) satisfies \(\delta(\epsilon)\geq\frac{e^{-M}}{16}\epsilon^{2}\)._
Proof.: Using the fact that \(\phi\) restricted to \([-M/2,M/2]\) has modulus of convexity \(e^{-M}\epsilon^{2}/16\) (see for e.g., Table 1 of [1]), Lemma 8 of [1] immediately gives the result.
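As a quick sanity check (our example, not used later), for \(f(x)=x^{2}\) on \(S=\mathbb{R}\) with the Euclidean metric,

\[\frac{x^{2}+y^{2}}{2}-\left(\frac{x+y}{2}\right)^{2}=\frac{(x-y)^{2}}{4},\]

so \(\delta(\epsilon)=\epsilon^{2}/4>0\) and \(f\) is strictly convex in the above sense; Lemma 5.2 establishes the analogous quadratic modulus for the functional \(L_{\phi}\).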
We write \(\delta_{L_{\phi}}(\cdot)\) to denote the modulus of convexity of the functional \(L_{\phi}\). Now we present our desired second moment bound:
**Lemma 5.3**.: _Let \(\mathcal{F}_{n}\), \(I_{n}\), and \(P\) be as in Theorem 4.1. Let \(M>0\) be such that the range of all functions in \(\mathcal{F}_{n}\) is contained in \([-\frac{M}{2},\frac{M}{2}]\). Suppose that \(\tilde{f}\in\mathcal{F}_{n}\) satisfies \(\tilde{f}=\arg\min_{f\in\mathcal{F}_{n}}E(\phi\bullet f)\). Then, for any \(f\in\mathcal{F}_{n}\), we have_

\[E(\phi\bullet f-\phi\bullet\tilde{f})^{2}\leq E(f-\tilde{f})^{2}\leq 8e^{M}E(\phi\bullet f-\phi\bullet\tilde{f})+16e^{M}c_{0}\tau_{n}.\]
Proof.: First, we have
\[E[(\phi\bullet f-\phi\bullet\tilde{f})^{2}]=E[(\phi((2Y-1)f(X))-\phi((2Y-1)\tilde{f}(X)))^{2}]\leq E[(f(X)-\tilde{f}(X))^{2}],\]

where the inequality uses that \(\phi\) is \(1\)-Lipschitz.
Second, using the notation
\[L_{\phi}(f):=E[\phi((2Y-1)f(X))],\]
we can argue by Lemma 5.2 that
\[\frac{L_{\phi}(f)+L_{\phi}(\tilde{f})}{2} \geq L_{\phi}\left(\frac{f+\tilde{f}}{2}\right)+\delta_{L_{\phi}} \left(E[(f(X)-\tilde{f}(X))^{2}]^{\frac{1}{2}}\right)\] \[\geq L_{\phi}\left(\frac{f+\tilde{f}}{2}\right)+\frac{e^{-M}}{16} E[(f(X)-\tilde{f}(X))^{2}].\]
Now, observe that \(\frac{f+\tilde{f}}{2}\) belongs to a neural network class \(\mathcal{F}^{\prime}\) with at most twice as many parameters as \(\mathcal{F}_{n}\) (see, for example, [1, Lemma 2.6]), so that

\[L_{\phi}(\tilde{f})-L_{\phi}\left(\frac{f+\tilde{f}}{2}\right) \leq L_{\phi}(\tilde{f})-\inf_{g\in\mathcal{F}^{\prime}}L_{\phi}(g)\] \[\leq\underbrace{L_{\phi}(\tilde{f})-L_{\phi}(I_{n})}_{\leq 0}+L_{\phi}(I_{n})-L_{\phi}(f_{\phi}^{*})+\underbrace{L_{\phi}(f_{\phi}^{*})-\inf_{g\in\mathcal{F}^{\prime}}L_{\phi}(g)}_{\leq 0}\] \[\leq c_{0}\tau_{n}\]
where the last inequality follows by assumption. Thus, we have
\[\frac{L_{\phi}(f)-L_{\phi}(\tilde{f})}{2}\geq\frac{e^{-M}}{16}E[(f(X)-\tilde{f}(X))^{2}]-c_{0}\tau_{n},\]
so that we can conclude
\[E[\phi\bullet f-\phi\bullet\tilde{f}]+2c_{0}\tau_{n} \geq\frac{e^{-M}}{8}E[(f(X)-\tilde{f}(X))^{2}]\] \[\geq\frac{e^{-M}}{8}E[(\phi\bullet f-\phi\bullet\tilde{f})^{2}].\]
There is one more technical lemma regarding \(\sharp\)-transformation stated in [11] we will need:
**Lemma 5.4**.: _For any given function \(\psi:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) where \(\mathbb{R}_{\geq 0}\) denotes the set of non-negative real numbers, the following holds:_
1. _For_ \(c>0\)_, let_ \(\psi_{c}(\delta):=\psi(c\delta)\)_. Then_ \(\psi_{c}^{\sharp}(\epsilon)=\frac{1}{c}\psi^{\sharp}(\epsilon/c)\)_._
2. _For_ \(c>0\)_, let_ \(\psi_{c}(\delta):=\psi(\delta+c)\)_. Then, for all_ \(u>0\) _and_ \(\epsilon\in(0,1]\)_,_ \(\psi_{c}^{\sharp}(u)\leq(\psi^{\sharp}(\epsilon u/2)-c)\lor c\)_, where_ \(a\lor b:=\max\{a,b\}\)_._
3. _For_ \(\epsilon=\epsilon_{1}+\cdots+\epsilon_{m}\) _and functions_ \(\psi_{1},\ldots,\psi_{m}\) _as in the hypothesis,_ \((\psi_{1}+\cdots+\psi_{m})^{\sharp}(\epsilon)\leq\psi_{1}^{\sharp}(\epsilon_{1})+\cdots+\psi_{m}^{\sharp}(\epsilon_{m})\)_._
Now we finally prove our first theorem:
Proof of Theorem 4.1.: From Lemma 5.3, we have that for any \(f\in\mathcal{F}_{n}\),
\[\frac{e^{-M}}{8}P_{X}(f(X)-\tilde{f}(X))^{2}-2c_{0}\tau_{n}\leq E[\phi\bullet f -\phi\bullet\tilde{f}],\]
which implies that the \(\delta\)-minimal set of \(\mathcal{L}=\{\phi\bullet f:f\in\mathcal{F}_{n}\}\) satisfies
\[\mathcal{L}(\delta) =\{\phi\bullet f:f\in\mathcal{F}_{n},\widetilde{\mathcal{E}}_{ \phi}(f,\mathcal{F}_{n})\leq\delta\}\] \[\subset\{\phi\bullet f:P_{X}(f(X)-\tilde{f}(X))^{2}\leq 16e^{M} \left(\delta+c_{0}\tau_{n}\right)\}.\]
Letting \(\mathcal{F}_{\delta}:=\{f:f\in\mathcal{F}_{n},P_{X}(f(X)-\tilde{f}(X))^{2}\leq 1 6e^{M}\left(\delta+c_{0}\tau_{n}\right)\}\), we can thus bound the \(L_{2}(P)\) diameter of \(\mathcal{L}(\delta)\) as
\[D^{2}(\delta) :=D^{2}(\mathcal{L},\delta)\] \[=\sup_{f_{1},f_{2}\in\mathcal{L}(\delta)}P(\phi\bullet f_{1}-\phi \bullet f_{2})^{2}\] \[\leq\sup_{f_{1},f_{2}\in\mathcal{F}_{\delta}}P_{X}(f_{1}-f_{2})^ {2}\] \[\leq\sup_{f_{1},f_{2}\in\mathcal{F}_{\delta}}2P_{X}(f_{1}-\tilde{ f})^{2}+2P_{X}(f_{2}-\tilde{f})^{2}\] \[\leq 64e^{M}\left(\delta+c_{0}\tau_{n}\right)\]
where the first inequality used the Lipschitz property of \(\phi\), the second inequality used the elementary inequality \((a+b)^{2}\leq 2a^{2}+2b^{2}\), and the third inequality follows from the definition of \(\mathcal{F}_{\delta}\). Secondly, we seek to bound \(\phi_{n}(\delta)\). Using symmetrization, properties of Rademacher complexity, and the contraction principle, we can write
\[\phi_{n}(\mathcal{L},\delta) =E\left[\sup_{g_{1},g_{2}\in\mathcal{L}(\delta)}|(P_{n}-P)(g_{1}-g_{2})|\right]\] \[\leq 2E\left[\sup\left\{|R_{n}(\phi\bullet f_{1}-\phi\bullet f_{2})|:\phi\bullet f_{1},\phi\bullet f_{2}\in\mathcal{L}(\delta)\right\}\right]\] \[\leq 2E\left[\sup\left\{|R_{n}(\phi\bullet f_{1}-\phi\bullet f_{2})|:f_{1},f_{2}\in\mathcal{F}_{\delta}\right\}\right]\] \[\leq 4E\left[\sup_{f\in\mathcal{F}_{\delta}}\left|\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}\left(\phi\bullet f(X_{i},Y_{i})-\phi\bullet\tilde{f}(X_{i},Y_{i})\right)\right|\right]\] \[\leq 8E\left[\sup_{f\in\mathcal{F}_{\delta}}\left|\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}\left(f(X_{i})-\tilde{f}(X_{i})\right)\right|\right]\] \[=8\omega_{n}(16e^{M}(\delta+c_{0}\tau_{n})).\]
From the above calculations, we can bound
\[V_{n}^{t}(\delta) =4\left(\phi_{n}^{b}(\mathcal{L},\delta)+\sqrt{(D^{2})^{b}(\delta)}\sqrt{\frac{t}{n\delta}}+\frac{t}{n\delta}\right)\] \[\leq K_{1}\biggl(\sup_{\sigma\geq\delta}\biggl\{\frac{\omega_{n}(16e^{M}(\sigma+c_{0}\tau_{n}))}{\sigma}\biggr\}+\sqrt{\sup_{\sigma\geq\delta}\frac{\sigma+c_{0}\tau_{n}}{\sigma}}\sqrt{\frac{t}{n\delta}}+\frac{t}{n\delta}\biggr)\] \[\leq K_{2}\biggl(\sup_{\sigma\geq\delta}\biggl\{\frac{\omega_{n}(16e^{M}(\sigma+c_{0}\tau_{n}))}{\sigma}\biggr\}+\sqrt{\frac{t}{n\delta}}+\sqrt{\frac{t\tau_{n}}{n\delta^{2}}}+\frac{t}{n\delta}\biggr),\]
for some \(K_{1},K_{2}\geq 2\). Further using the three properties of \(\sharp\)-transformation from Lemma 5.4, we can conclude that for some constant \(c>0\) only dependent on \(M\),
\[\sigma_{n}^{t} :=\inf\left\{\sigma>0:V_{n}^{t}(\sigma)\leq 1\right\}\] \[\leq K_{3}\left(\max\left\{\omega_{n}^{\sharp}\left(c\alpha \right)-\tau_{n},\tau_{n}\alpha\right\}+\frac{t}{n}+\sqrt{\frac{t\tau_{n}}{n}}\right)\]
where \(\alpha\) is any number in \((0,1]\), and \(K_{3}\) and \(c\) depend only on \(M\). Now, we can immediately apply Proposition 5.1 to obtain the desired result.
### Proof of Lemma 4.2
We first state a result on approximating multiplication operator with a neural network adapted from [11] for our purpose:
**Lemma 5.5** ([11], Lemma A.3).: _Let an arbitrary real number \(M>0\) be fixed. Then, there exists a neural network \(Mul(\cdot)\in\mathcal{F}(9,(2,p,\ldots,p,1))\) such that for all \(x,y\in[-M,M]\), we have_
\[|xy-Mul(x,y)|\leq Cp^{-\frac{1}{2}}\]
_for \(C>0\) depending only on \(M\). Moreover, for any \(x,y\) such that \(xy=0\), we also have \(Mul(x,y)=0\)._
Proof of Lemma 4.2.: By assumption, there exists an open cover of \(\Omega\), \(\{U_{i}\}_{i=1,\ldots,M}\) and corresponding neural networks \(\{I_{i}\}_{i=1,\ldots,M}\), each in \(\mathcal{F}(1,p)\), such that \(\left\|1_{U_{i}}(\eta-I_{i})\right\|_{\infty}\leq Cp^{-\frac{1}{2}}\) for all \(i\). Also, there exists a \(C^{\infty}\) partition of unity \(\{\rho_{i}\}_{i=1,\ldots,M}\) subordinate to the given cover (c.f. [11] Theorem 13.7). Because smooth functions belong to the Barron approximation space, there exists \(1\) hidden-layer neural networks \(\tilde{\rho}_{i}\in\mathcal{F}(1,p)\) such that \(\left\|\rho_{i}-\tilde{\rho}_{i}\right\|_{\infty}\leq C_{1}p^{-\frac{1}{2}}\). Then, it is easy to see that \(\sum_{i=1}^{M}Mul(I_{i},\tilde{\rho}_{i})\) is again a neural network with \(11\) hidden layers with constant width, which is a constant (depending only on \(M\)) multiple of \(p\). Then, using the decomposition \(\eta=\sum_{i=1}^{M}\rho_{i}\eta\), we have
\[\left\|\eta-\sum_{i=1}^{M}Mul(I_{i},\tilde{\rho}_{i})\right\|_{ \infty}=\left\|\sum_{i=1}^{M}\rho_{i}\eta-\sum_{i=1}^{M}Mul(I_{i},\tilde{ \rho}_{i})\right\|_{\infty}\] \[\leq\sum_{i=1}^{M}\left\|\rho_{i}\eta-Mul(I_{i},\tilde{\rho}_{i} )\right\|_{\infty}\] \[\leq\sum_{i=1}^{M}\left\|\rho_{i}\eta-Mul(\rho_{i},\eta)\right\| _{\infty}+\left\|Mul(\rho_{i},\eta)-Mul(\rho_{i},I_{i})\right\|_{\infty}\] \[\qquad+\left\|Mul(\rho_{i},I_{i})-Mul(I_{i},\tilde{\rho}_{i}) \right\|_{\infty}\] \[\leq Cp^{-\frac{1}{2}}.\]
Here, note that the network \(I_{i}\)'s all have bounded weights (bound only depending on the diameter of \(\Omega\)), hence it is possible to apply Lemma 5.5 with appropriate choice of \(M>0\) in the hypothesis.
### Proof of Lemma 4.3
Proof.: For a function \(f:A\to\mathbb{R}\) for \(A\subset\mathbb{R}^{d}\) and \(B\subset A\), denote by \(\left\|f\right\|_{B,\infty}\) the sup-norm of \(f\) over \(B\), i.e., \(\sup_{x\in B}\left|f(x)\right|\) Fix any \(m\in\{1,\ldots,M\}\). By assumption, for any \(p\in\mathbb{N}\) there exists a \(1\) hidden-layer ReLU neural network \(I_{m}^{p}\) with \(p\) neurons such that
\[\left\|\eta-I_{m}^{p}\right\|_{Q_{m},\infty}\leq B_{m}\sqrt{d}p^{-1/2}. \tag{37}\]
Let \(c\) denote the constant such that \(\eta(x)\in[c,1-c]\) for all \(x\), which exists by assumption. Now let
\[g(x):=\begin{cases}\log\frac{x}{1-x},&\text{if }x\in[c,1-c];\\ \log\frac{c}{1-c},&\text{if }x\in[c-B_{m}\sqrt{d}p^{-\frac{1}{2}},c);\\ \log\frac{1-c}{c},&\text{if }x\in(1-c,1-c+B_{m}\sqrt{d}p^{-\frac{1}{2}}]. \end{cases}\]
Then, being piecewise infinitely differentiable, there is a neural network \(I_{0}^{p}\) with \(p\) neurons such that
\[\left\|g-I_{0}^{p}\right\|_{[c-B_{m}\sqrt{d}p^{-\frac{1}{2}},1-c+B_{m}\sqrt{d }p^{-\frac{1}{2}}],\infty}\leq C_{0}\sqrt{d}p^{-1/2} \tag{38}\]
for some constant \(C_{0}>0\). Now, combining (37) and (38) and using \(\circ\) to denote composition of functions, we can write using a simple telescoping argument,
\[\left\|f_{\phi}^{*}|_{Q_{m}}-I_{0}^{p}\circ I_{m}^{p}\right\|_{Q_{m},\infty} =\left\|g\circ\eta|_{Q_{m}}-I_{0}^{p}\circ I_{m}^{p}\right\|_{Q_{m},\infty} \tag{39}\] \[\leq\left\|g\circ\eta|_{Q_{m}}-g\circ I_{m}^{p}\right\|_{Q_{m},\infty}+\left\|g\circ I_{m}^{p}-I_{0}^{p}\circ I_{m}^{p}\right\|_{Q_{m},\infty}\] \[\leq C\left\|\eta|_{Q_{m}}-I_{m}^{p}\right\|_{Q_{m},\infty}+\left\|g-I_{0}^{p}\right\|_{[c-B_{m}\sqrt{d}p^{-\frac{1}{2}},1-c+B_{m}\sqrt{d}p^{-\frac{1}{2}}],\infty}\] \[\leq(CB_{m}+C_{0})\sqrt{d}p^{-1/2} \tag{40}\]
where \(C\) only depends on \(c\). Note that the composition \(I_{0}^{p}\circ I_{m}^{p}\) can be realized as a neural network as well using the fact that the positive and negative parts of the outputs of \(I_{m}^{p}\) can be separately treated to get identity. Since \(I_{0}^{p}\circ I_{m}^{p}\) is continuous on a compact set, its outputs are bounded by a constant, say \(C_{1}>0\). Without loss of generality, we may assume \(C_{1}\) is the bound for the outputs of \(I_{0}^{p}\circ I_{m}^{p}\) for all \(m=1,\ldots,M\).
Next, we want to "glue together" the neural networks from above in such a way that the final neural network is locally identical to \(I_{0}^{p}\circ I_{m}^{p}\) on each of the rectangles \(Q_{m}\). Fix \(m\). Since the measure \(P_{X}\) is a Borel probability measure, it is finite so that it is a regular measure (see, for e.g., Theorem 7.1.4 of [10]). This means that for any \(A\) in the Borel \(\sigma\)-algebra \(\mathcal{B}(\mathbb{R}^{d})\),
\[P_{X}(A)=\sup\{P_{X}(K):K\subset A,\ K\text{ compact}\}.\]
In particular, when \(Q_{m}=[a,b]\subset\mathbb{R}^{d}\), there exists a real number \(\epsilon\) such that \(0<\epsilon\leq\frac{1}{2}\min_{i\in[d]}(b_{i}-a_{i})\) and \([a+\epsilon,b-\epsilon]:=\prod_{i=1}^{d}[a_{i}+\epsilon,b_{i}-\epsilon]\) satisfies \(P_{X}([a,b]\backslash[a+\epsilon,b-\epsilon])\leq\delta/M\). Then similar to the approach in [12] Lemma A.6, we define \(t_{i,m}:\mathbb{R}\rightarrow\mathbb{R},i\in\{1,\ldots,d\}\) as
\[t_{i,m}(u):=\begin{cases}0,&\text{ if }u\in\mathbb{R}\backslash\left[a_{i},b_{i} \right];\\ 1,&\text{ if }u\in[a_{i}+\varepsilon,b_{i}-\varepsilon]\,;\\ \frac{u-a_{i}}{\epsilon},&\text{ if }u\in[a_{i},a_{i}+\varepsilon]\,;\\ \frac{b_{i}-u}{\varepsilon},&\text{ if }u\in[b_{i}-\varepsilon,b_{i}]\end{cases}\]
and \(h_{\epsilon,m}:\mathbb{R}^{d}\times\mathbb{R}\rightarrow\mathbb{R}\), \(h_{\epsilon,m}(x,y):=C_{1}\sigma\left(\sum_{i=1}^{d}t_{i,m}(x_{i})+\sigma(y)/C_{1}-d\right)\). Now, we observe that
\[h_{\epsilon,m}(x,y)=\begin{cases}y,&\text{ if }x\in[a+\epsilon,b-\epsilon],\,y\in(0,C_{1});\\ 0,&\text{ if }x\in\mathbb{R}^{d}\backslash[a,b],\,y\in(0,C_{1}].\end{cases} \tag{41}\]
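A small numerical sketch (ours; plain NumPy with arbitrary values for \(a,b,\epsilon,C_{1}\)) of the gate \(h_{\epsilon,m}\), checking the two cases in (41):

```python
import numpy as np

relu = lambda u: np.maximum(u, 0.0)

def gate(x, y, a, b, eps, C1):
    """h_eps(x, y) = C1 * relu(sum_i t_i(x_i) + relu(y)/C1 - d)."""
    t = np.clip(np.minimum((x - a) / eps, (b - x) / eps), 0.0, 1.0)  # t_i(x_i)
    return C1 * relu(t.sum() + relu(y) / C1 - len(x))

a, b, eps, C1 = np.zeros(2), np.ones(2), 0.1, 5.0
print(gate(np.array([0.5, 0.5]), 3.0, a, b, eps, C1))   # inside: returns y = 3.0
print(gate(np.array([1.5, 0.5]), 3.0, a, b, eps, C1))   # outside [a, b]: returns 0.0
```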
Note that \(h_{\epsilon,m}\) is implementable by a neural network, say \(J_{m}\). Then, assuming for simplicity that the outputs of \(I_{0}^{p}\circ I_{m}^{p}\) are all positive (which is true for \(p\) large enough), we have by construction
\[P_{X}(\{x\in\mathbb{R}^{d}:\mathbb{1}_{\,Q_{m}}(x)I_{0}^{p}\circ I_{m}^{p}(x)\neq J_{m}(x,I_{0}^{p}\circ I_{m}^{p}(x))\})\] \[\leq P_{X}([a,b]\backslash[a+\epsilon,b-\epsilon])\] \[\leq\delta/M. \tag{42}\]
Combining (40) and (42), we can conclude that on a set with probability at least \(1-\delta\), say \(\Omega_{0}\),
\[\left\|f_{\phi}^{*}-\sum_{m=1}^{M}J_{m}(\cdot,I_{0}^{p}\circ I_{m}^{p})\right\| _{\Omega_{0},\infty}\leq(CB+C_{0})\sqrt{d}p^{-1/2}.\]
where \(B:=\max_{m=1,\ldots,M}\{B_{m}\}\). The above constructions and calculations immediately imply the conclusions of the lemma.
### Proof of Theorem 4.4
Proof.: By Lemma 4.3, for any \(\delta\) and given \(p\), there exists a set \(\Omega_{0}\) with \(P_{X}\)-measure at least \(1-\delta\) on which there is a neural network \(I_{p}\) with \(O(p)\) neurons and weights such that \(\left\|f_{\phi}^{*}-I_{p}\right\|_{\Omega_{0},\infty}\leq Cp^{-1/2}\). Now we can write
\[L_{\phi}(I_{p})-L_{\phi}(f_{\phi}^{*})\] \[=E[\eta(X)(\phi(I_{p}(X))-\phi(f_{\phi}^{*}(X)))\] \[\qquad+(1-\eta(X))(\phi(-I_{p}(X))-\phi(-f_{\phi}^{*}(X)))]\] \[\leq E[\eta(X)|\phi(I_{p}(X))-\phi(f_{\phi}^{*}(X))|\] \[\qquad+(1-\eta(X))|\phi(-I_{p}(X))-\phi(-f_{\phi}^{*}(X))|]\] \[\leq E[|I_{p}(X)-f_{\phi}^{*}(X)|]\] \[\leq\left\|\mathbb{1}_{\,\Omega_{0}}(X)(I_{p}(X)-f_{\phi}^{*}(X)) \right\|_{\infty}+O(\delta)\] \[\leq Cp^{-1/2}\]
where the last inequality follows by choosing \(\delta=O(p^{-1/2})\).
### Proof of Proposition 4.5
Here we give a proof of Proposition 4.5 that constitutes Step 1 of our analysis mentioned in Section 4.3.
**Step 1**: First, we state some standard results from empirical process theory often used in the ERM literature. Define an envelope of a function class \(\mathcal{F}\) as a measurable function \(F\) such that

\[|f(x)|\leq F(x),\quad\forall x,\forall f\in\mathcal{F}.\]
The following result from [10] (see Theorem 3.1, Example 3.5) provides a bound on the Rademacher complexity in terms of a bound on the covering number:
**Proposition 5.6**.: _For a class of functions \(\mathcal{F}\) uniformly bounded by \(U\) and an envelop \(F\) and empirical distribution \((P_{X})_{n}\), suppose for some \(v>0\) and some \(A>0\), the following holds for all \(\omega\) in the underlying probability space:_
\[N(\epsilon,\mathcal{F},L_{2}((P_{X})_{n}))\leq\left(A\frac{\|F\|_{L_{2}((P_{X })_{n})}}{\epsilon}\right)^{v}. \tag{42}\]
_If we let \(\sigma^{2}:=\sup_{f\in\mathcal{F}}Pf^{2}\) then, for some universal constant \(C>0\)_
\[E\left[\|R_{n}\|_{\mathcal{F}}\right]\leq C\max\biggl{\{} \sqrt{\frac{v}{n}}\sigma\sqrt{\log\frac{A\|F\|_{L_{2}(P_{X})}}{\sigma}},\] \[\frac{vU}{n}\log\frac{A\|F\|_{L_{2}(P_{X})}}{\sigma}\biggr{\}}.\]
Furthermore, for any VC-subgraph class of functions \(\mathcal{F}\) with VC-index \(V(\mathcal{F})\), we have the following standard result from [24] (Theorem 2.6.7):
**Proposition 5.7**.: _For a VC-class of functions with measurable envelope function \(F\) and \(r\geq 1\), one has, for any probability measure \(Q\) with \(\|F\|_{Q,r}:=(\int F^{r}dQ)^{1/r}>0\),_
\[N\left(\varepsilon\|F\|_{Q,r},\mathcal{F},L_{r}(Q)\right)\leq KV(\mathcal{F})(16e)^{V(\mathcal{F})}\left(\frac{1}{\varepsilon}\right)^{r(V(\mathcal{F})-1)},\]
_for a universal constant \(K\) and \(0<\varepsilon<1\)._
Note that the above proposition is a universal result in the sense that it holds for any choice of probability measure \(Q\). Thus, applying Proposition 5.7 with any realization (by \(\omega\)) of probability measure \((P_{X})_{n}(\omega)\) and \(r=2\), we have that
\[N\left(\varepsilon,\mathcal{F},L_{2}((P_{X})_{n})\right)\leq KV(\mathcal{F})(16e)^{V(\mathcal{F})}\left(\frac{\|F\|_{L_{2}((P_{X})_{n})}}{\varepsilon}\right)^{2(V(\mathcal{F})-1)}. \tag{43}\]
Thus we have that any VC-class of functions \(\mathcal{F}\) with VC-index \(V(\mathcal{F})\) satisfies (42) with \(v=2(V(\mathcal{F})-1)\).
Another lemma we will use is the following bound on the VC-index of the class of feed-forward ReLU networks shown in [1]:
**Lemma 5.8**.: _The VC-index of the class of feed-forward ReLU networks with \(W\) total number of parameters and \(L\) number of layers satisfies_
\[c_{1}WL\log(W/L)\leq V(\mathcal{F})\leq c_{0}WL\log(W)\]
_for some constants \(c_{0},c_{1}>0\)._
Suppose that \(\mathcal{G}_{n}\) is the class of feed-forward ReLU neural networks with \(W(n)\) total number of parameters and a constant number of layers (at least 3) that take values in \([-M/2,M/2]\). Because the entropy bound (43) holds, we can apply Proposition 5.6 with \(\mathcal{F}=\{g-\tilde{g}:g\in\mathcal{G}_{n},\|g-\tilde{g}\|_{L_{2}(P_{X})}^{2} \leq\delta\}\) to conclude that
\[\omega_{n}(\delta)\leq C\max\biggl{\{} \sqrt{\frac{2(V(\mathcal{G}_{n})-1)}{n}}\sqrt{\delta}\sqrt{\log \frac{M}{\sqrt{\delta}}},\] \[\frac{2(V(\mathcal{G}_{n})-1)M}{n}\log\frac{M}{\sqrt{\delta}} \biggr{\}}.\]
This implies that
\[\omega_{n}^{\sharp}(c)\leq\frac{CV(\mathcal{G}_{n})}{nc^{2}}\log\left(\frac{Mnc^{2}}{V(\mathcal{G}_{n})}\right) \tag{44}\]
for appropriately redefined constant \(C>0\).
Then, Lemma 5.8 implies that there exists some \(c_{0},c_{1}>0\) such that
\[c_{1}W(n)\leq V(\mathcal{G}_{n})\leq c_{0}W(n)\log(W(n)). \tag{45}\]
Combining (44) and (45), we immediately get
\[\omega_{n}^{\sharp}(c)\leq\frac{Cc_{0}W(n)\log(W(n))}{nc^{2}}\log\left(\frac{Mnc^{2}}{c_{1}W(n)}\right). \tag{46}\]
Now we analyze the term \(\max\left\{\omega_{n}^{\sharp}\left(c\alpha\right)-\tau_{n},\tau_{n}\alpha\right\}\) that appears in the excess risk bound of Theorem 4.1. The result of Theorem 4.4 implies that we may take \(\tau_{n}=\frac{Cd}{\sqrt{W(n)}}\).
Because of the freedom of choosing \(\alpha>0\), we may assume \(\alpha=n^{-u}\) for some \(u\geq 0\). Furthermore, put \(W(n)=n^{r}\) for \(r>0\). With these substitutions and from (46), we get for appropriately redefined constants \(C,c>0\), the following:
\[\omega_{n}^{\sharp}\left(c\alpha\right)-\tau_{n} \leq Cn^{r+2u-1}\log(n^{r})\log(cn^{1-2u-r}),\] \[\tau_{n}\alpha \leq Cn^{-u-r/2}.\]
Plugging in the above display to the excess risk bound of Theorem 4.1, we can conclude that for some constant \(K,C>0\), we have
\[P\biggl{(}\widehat{\mathcal{E}}_{\phi}(\widehat{g}_{n})\geq K\biggl{(}\max\left\{n^{r+2u-1}\log n^{r}\log(n^{1-2u-r}),n^{\frac{-2u-r}{2}} \right\}\] \[+\frac{t}{n}+\sqrt{\frac{t}{n^{1+r/2}}}\biggr{)}\biggr{)}\leq Ce^ {-t}.\]
Since \(n^{r+2u-1}\leq n^{-u-r/2}\) if and only if \(r+2u-1\leq-u-r/2\), we get that \(\max\left\{n^{r+2u-1},n^{-u-r/2}\right\}=n^{-u-r/2}\) if and only if \(r\leq-2u+2/3\). In this case, by choosing \(r=2/3,u=0\), we get for some constant \(K>0\),
\[P\left(\widehat{\mathcal{E}}_{\phi}(\widehat{g}_{n})\geq K\left(1+t\right)n^{- 1/3}\log(n)\right)\leq Ce^{-t}.\]
It is easy to check that the other possible scenario of \(n^{r+2u-1}\geq n^{-u-r/2}\) leads to the same rate after computations similar to the above.
### Proof of Theorem 4.7
We provide lower bounds on the following minimax excess risk for classification:
\[\inf_{M\text{ measurable}}\sup_{P\in\Sigma}P(M(\mathbf{X})\neq Y)-P(M^{*}( \mathbf{X})\neq Y) \tag{47}\]
where the supremum is taken over the class of distributions satisfying Assumption 4.2. We start by stating and proving a result that holds under a more general situation. The construction closely follows that found in [1] and is based on an application of Assouad's lemma, which can be found for example in [13].
Consider the partition of \([0,1]^{d}\) by the canonical grid: For positive integer \(q\), define the following finite subset of \(\mathbb{R}^{d}\)
\[G_{q}=\bigg{\{}\bigg{(}\frac{2k_{1}+1}{2q},\ldots,\frac{2k_{d}+1 }{2q}\bigg{)}:\] \[k_{i}=0,\ldots,q-1,i=1,\ldots,d\bigg{\}}. \tag{48}\]
There are \(q^{d}\) elements in \(G_{q}\), and we can label them in some order, for example by dictionary order based on the coordinate and magnitude of \(k_{i}\). Let \(g_{1},\ldots,g_{q^{d}}\) be such labeled points. For any \(x\in[0,1]^{d}\) denote by \(g(x)\in G_{q}\) the point in \(G_{q}\) closest to \(x\) so that \(g(x)=\operatorname*{arg\,min}_{g\in G_{q}}\left\|x-g\right\|_{2}\) where \(\operatorname*{arg\,min}\) is well-defined with appropriate tie-breaking rule. Then we can write the partition as follows:
\[M_{i}=\{x\in[0,1]^{d}:g(x)=g_{i}\},\]
\[[0,1]^{d}=\bigcup_{i=1}^{q^{d}}M_{i}\]
where now \([0,1]^{d}\) is the union of \(q^{d}\) disjoint sets of same Lebesgue measure.
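The grid and the nearest-point map have a closed form; the following one-liner (ours, for illustration) computes \(g(x)\) by snapping each coordinate to its cell center, which is how one would simulate from this construction:

```python
import numpy as np

def nearest_grid_point(x, q):
    """g(x): the point of G_q closest to x in [0,1]^d, via per-coordinate snapping."""
    k = np.clip(np.floor(q * np.asarray(x)), 0, q - 1)  # cell index k_i
    return (2 * k + 1) / (2 * q)

print(nearest_grid_point([0.12, 0.77], q=4))  # -> [0.125 0.875]
```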
Now we build up a setting appropriate for applying Assouad's lemma to obtain a minimax lower bound. To that end, we first define the following finite class of probability distributions: Choose a positive integer \(m\leq q^{d}\) and define the class of distributions \(\mathcal{H}:=\{P_{\sigma}:\sigma=(\sigma_{1},\ldots,\sigma_{m})\in\{0,1\}^{m}\}\) where each \(P_{\sigma}\) represents a distinct distribution of \((\mathbf{X},Y)\) on \(\mathbb{R}^{d}\times\{0,1\}\). Now we define each \(P_{\sigma}\) by providing the marginal distribution for \(\mathbf{X}\) and the conditional distribution \(P(Y=1|X)\).
We shall define the marginal distribution of \(\mathbf{X}\), denoted by \((P_{\sigma})_{X}\), in the same way for every choice of \(\sigma\). Let \(w\) be some real number satisfying \(0<w<1/m\) and write \(B(x,r)\) to mean the Euclidean ball in \(\mathbb{R}^{d}\) centered at \(x\in\mathbb{R}^{d}\) and radius \(r>0\). Define \(A\) as some bounded subset of \(\mathbb{R}^{d}\backslash\bigcup_{i=1}^{m}M_{i}\). We then define \((P_{\sigma})_{X}\) to be the measure absolutely continuous with respect to the \(d\)-dimensional Lebesgue measure \(\lambda\) with density \(u\) (nonnegative, integrable function on \(\mathbb{R}^{d}\)) defined as
\[u(x)=\begin{cases}\frac{w}{\lambda(B(0,1/(4q)))},&\text{ if }x\in B\left(g,\frac{1}{4q}\right)\text{ for some }g\in\{g_{1},\ldots,g_{m}\};\\ \frac{1-mw}{\lambda(A)},&\text{ if }x\in A;\\ 0,&\text{ otherwise.}\end{cases}\]
In words, the marginal distribution of \(\mathbf{X}\) places mass \(w\), with constant density, on each of the balls centered at the \(m\) points \(g_{1},\ldots,g_{m}\) of the grid \(G_{q}\), and the remaining mass on the bounded set \(A\). Now, we define the Borel-measurable function \(\eta_{\sigma}:\mathbb{R}^{d}\to[0,1]\) such that \(\eta_{\sigma}(\mathbf{X})\) is a version of \(P_{\sigma}(Y=1|\mathbf{X})\) (see [1, Theorem 9.1.3] for existence of such a function).
Let \(h:[0,\infty)\to[0,\infty)\) be the nonincreasing, infinitely differentiable function defined as:
\[h(x) =\int_{x}^{1/2}h_{1}(t)dt\bigg{/}\left(\int_{1/4}^{1/2}h_{1}(t)dt\right),\] \[h_{1}(t) =\begin{cases}\exp\left(-\frac{1}{(1/2-t)(t-1/4)}\right),&\text{for }t\in(1/4,1/2);\\ 0,&\text{for }t\in[0,1/4]\cup[1/2,\infty).\end{cases}\]
Note \(h=1\) on \([0,1/4]\) and \(h=0\) on \([1/2,\infty)\). Then, we define \(\phi:\mathbb{R}^{d}\to[0,\infty)\) as \(\phi(x):=q^{-r}h(\left\|x\right\|_{2})\) for some \(r>0\) to be specified later. Observe that \(\phi\) is an infinitely differentiable bump function supported in \(B(0,1/2)\) and, as shown in [1, Section IX], is a Barron function. In particular, it is an element of \(\mathcal{BA}(\Omega)\).
For the same \(q\) used in defining the grid \(G_{q}\) in (48), we finally define for an arbitrary \(\sigma\in\{0,1\}^{m}\)
\[\eta_{\sigma}(x)=\begin{cases}\frac{1+\sigma_{i}\phi(q(x-g(x)))}{2},\text{ for }x\in M_{i},\quad i=1,\dots,m;\\ 1/2,\text{ for }x\in A;\\ 0,\text{ otherwise.}\end{cases}\]
We first verify that all \(P_{\sigma}\in\mathcal{H}\) satisfies the margin assumption: for any fixed choice of \(x_{0}\in G_{q}\),
\[P_{\sigma}(0<|\eta_{\sigma}(X)-1/2|\leq t) =P_{\sigma}(0<\phi(q(X-g(X)))\leq 2t)\] \[=m\int_{B(x_{0},1/(4q))}\mathbb{1}(0<\phi(q(x-x_{0}))\leq 2t)u(x)dx\] \[=m\int_{B(0,1/4)}\mathbb{1}(0<\phi(x)\leq 2t)\frac{w}{q^{d}\lambda(B(0,1/(4q)))}dx\] \[=mw\mathbb{1}(t\geq q^{-r}/2).\]
Thus if the choice of \(m,w\) is such that \(mw\leq(q^{-r}/2)^{\alpha}\), the margin assumption is satisfied.
In order to apply Assouad's lemma to obtain a minimax lower bound, it is first necessary to relate the minimax excess risk of (47) to the minimax risk for the Hamming distance between \(\sigma\)'s used to define \(\mathcal{H}\). Precisely, define \(\rho(\sigma,\sigma^{\prime})\) as the Hamming distance between \(\sigma\) and \(\sigma^{\prime}\): \(\rho:\{0,1\}^{m}\times\{0,1\}^{m}\to\{0,1,\dots,m\}\) such that \(\rho(\sigma,\sigma^{\prime})\) equals the number of positions in which \(\sigma\) and \(\sigma^{\prime}\) differ. Then, for any classifier \(\widehat{f}_{n}\), we want to have the following bound that holds uniformly over all such \(\widehat{f}_{n}\):
\[\sup_{P\in\Sigma}E[P(\widehat{f}_{n}(\mathbf{X})\neq Y)]-P(f^{*} (\mathbf{X})\neq Y)\] \[\geq\inf_{\widehat{\sigma}}\max_{\sigma\in\{0,1\}^{m}}E_{P_{ \sigma}^{n}}[\rho(\widehat{\sigma},\sigma)]\]
where \(f^{*}\) is the Bayes classifier for \(P\).
We proceed as follows: If we denote by \(f^{*}_{\sigma}\) the Bayes classifier for measure \(P_{\sigma}\), we can write
\[P_{\sigma}(\widehat{f}(\mathbf{X})\neq Y)-P_{\sigma}(f^{*}( \mathbf{X})\neq Y)\] \[=2\left[\int\left|\eta_{\sigma}(x)-\frac{1}{2}\right|\mathbb{1}( \widehat{f}_{n}(x)\neq f^{*}_{\sigma}(x))(P_{\sigma})_{X}(dx)\right]\] \[=\sum_{i=1}^{m}\int_{M_{i}}|\phi(q(x-g(x)))|\,\mathbb{1}(\widehat {f}_{n}(x)\neq f^{*}_{\sigma}(x))(P_{\sigma})_{X}(dx) \tag{49}\]
where first equality follows from the standard formula for excess risk found for example in [1, Theorem 2.2] and second equality follows from our construction in the preceding paragraphs.
Now if we define
\[\widehat{\sigma}_{i}:=\operatorname*{arg\,min}_{t=0,1}\int_{M_{i}}|\phi(q(x-g (x)))|\,\mathbb{1}(\widehat{f}_{n}(x)\neq t)(P_{\sigma})_{X}(dx)\]
and observe that for all \(x\) in each \(M_{i}\),
\[f^{*}_{\sigma}(x)=\begin{cases}1,\text{ if }\sigma_{i}=1;\\ 0,\text{ if }\sigma_{i}=0,\end{cases}\]
we can lower bound (49) as
The squared Hellinger distance between two hypotheses \(P_{\sigma}\) and \(P_{\sigma^{\prime}}\) that differ can be easily computed as
\[\begin{split} H^{2}(P_{\sigma},P_{\sigma^{\prime}})&=\frac{2w}{\lambda(B(0,\frac{1}{4q}))}\int_{B(g_{i},\frac{1}{4q})}\biggl(\sqrt{\frac{1+\phi(q(x-g_{i}))}{2}}-\sqrt{\frac{1-\phi(q(x-g_{i}))}{2}}\biggr)^{2}dx\\ &=\frac{2w}{\lambda(B(0,\frac{1}{4}))}\int_{B(0,\frac{1}{4})}\biggl(\sqrt{\frac{1+\phi(x)}{2}}-\sqrt{\frac{1-\phi(x)}{2}}\biggr)^{2}dx \tag{52}\\ &=\frac{2w}{\lambda(B(0,\frac{1}{4}))}\int_{B(0,\frac{1}{4})}\Bigl(1-\sqrt{1-\phi^{2}(x)}\Bigr)dx\\ &=2w\bigl(1-\sqrt{1-q^{-2r}}\bigr)\\ &\leq 2wq^{-2r}. \tag{53}\end{split}\]
By combining (50), (51), (53), we obtain the first result:
\[\inf_{\widetilde{f}_{n}}\sup_{P\in\Sigma}E\left[P(\widehat{f}_{n} (\mathbf{X})\neq Y)-P(M^{*}(\mathbf{X})\neq Y)\right]\] \[\geq Cq^{-r}mw(1-q^{-r}\sqrt{nw}) \tag{54}\]
for some constant \(C>0\), provided \(m\leq q^{d}\), \(w\leq 1/m\), and \(wm\leq q^{-r\alpha}/2^{\alpha}\). Now make the following choice for \(m,w,r,q\):
\[m =q^{d},\] \[w =\frac{q^{-\alpha r-d}}{2^{\alpha}},\] \[r =\frac{2d}{2+\alpha},\] \[q =\lfloor\overline{C}n^{\frac{1}{3r(2+\alpha)}}\rfloor.\]
for an appropriate constant \(\overline{C}\geq 2\) whose choice will be specified shortly. We verify that, for the above choice of parameters and for any \(\sigma\), \(P_{\sigma}\in\mathcal{H}\) satisfies all four points in Assumption 4.2. First, as already mentioned, the margin condition is satisfied if \(mw\leq\frac{q^{-r\alpha}}{2^{\alpha}}\), which is true by construction. Second, observe that by definition, for any \(\sigma\), \(\eta_{\sigma}(x)\) is bounded away from \(0\) and \(1\) by \(\frac{1}{2}-\frac{q^{-d}}{2}>0\), again by construction. Third, it is clear that \(X\) is compactly supported in our setup. Fourth, since \(\eta_{\sigma}\) is defined as an infinitely differentiable function on each \(M_{i}\), \(i=1,\ldots,m\), and constant on \(A\), the restrictions \(\eta|_{M_{i}}\) and \(\eta|_{A}\) all belong to the Barron approximation space, as required.
Finally, we need to verify that the right-hand side of (54) yields the desired rate. First, to check that \(1-q^{-r}\sqrt{nw}\) is indeed positive, it suffices to check \(q^{r}w^{-\frac{1}{2}}\geq\sqrt{n}\). Since \(q^{r}w^{-\frac{1}{2}}=q^{\frac{(2+\alpha)r+d}{2}}/2^{\frac{\alpha}{2}}= \lfloor\frac{\overline{C}}{2^{\alpha/2}}n^{\frac{(2+\alpha)r+d}{3r(2+\alpha)}}\rfloor\), by choosing \(\overline{C}\geq 2^{\alpha/2}\) we only need to choose \(r\) such that \(\frac{2d}{r}\geq 2+\alpha\). In particular, \(r=\frac{2d}{2+\alpha}\) works. This computation also shows that \(1-q^{-r}\sqrt{nw}\) is a constant depending on the choice of \(\overline{C}\) and \(\alpha\). This implies the rate in (54) is determined by \(q^{-r}mw=q^{-(\alpha+1)r}/2^{\alpha}=\lfloor\frac{\overline{C}}{2^{\alpha}}n^ {-\frac{1+\alpha}{3(2+\alpha)}}\rfloor\), which is indeed the rate in Theorem 4.7.
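The exponent algebra in this last identity can be checked symbolically; below is a quick sanity-check sketch, assuming SymPy is available (this is an illustration, not part of the proof).

import sympy as sp

# Check that q^{-r} * m * w = q^{-(1+alpha) r} / 2^alpha
# under the choices m = q^d and w = q^{-alpha*r - d} / 2^alpha.
q, d, alpha = sp.symbols('q d alpha', positive=True)
r = 2*d / (2 + alpha)
m = q**d
w = q**(-alpha*r - d) / 2**alpha
rate = q**(-r) * m * w
expected = q**(-(1 + alpha)*r) / 2**alpha
print(sp.simplify(rate / expected))  # prints 1, so the two expressions agree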
## Conclusion
This work has derived a new, non-asymptotic rate of convergence for the excess risk when the classifier is the empirical risk minimizer of the logistic loss. The class of distributions studied is characterized by the Barron approximation space, which includes the classical Barron functions as a proper subset. This regime is particularly interesting for neural networks precisely because they achieve dimension-free approximation rates here. A matching lower bound for the minimax rate of convergence is also derived, showing the minimax optimality of the proposed neural network-based classifiers.
Our results suggest a rich avenue for future research: what happens when the regression functions belong to other classical function spaces from approximation theory, such as \(L^{2}\)-Sobolev spaces, cartoon functions, and functions of bounded variation? It is shown in [10] that neural networks are indeed Kolmogorov-Donoho optimal approximants of many of these spaces. One important point in this result is that the Kolmogorov-Donoho optimal rate can be achieved only when the architecture of the neural network is deep: specifically, the depth of the network has to scale polylogarithmically with the inverse of the desired approximation accuracy. This is a regime different from ours, and more general in the sense that depth then matters. We hope to study how these results can be translated into excess risk convergence results in the classification context in future work.
|
2310.03760 | Investigating Deep Neural Network Architecture and Feature Extraction
Designs for Sensor-based Human Activity Recognition | The extensive ubiquitous availability of sensors in smart devices and the
Internet of Things (IoT) has opened up the possibilities for implementing
sensor-based activity recognition. As opposed to traditional sensor time-series
processing and hand-engineered feature extraction, in light of deep learning's
proven effectiveness across various domains, numerous deep methods have been
explored to tackle the challenges in activity recognition, outperforming the
traditional signal processing and traditional machine learning approaches. In
this work, by performing extensive experimental studies on two human activity
recognition datasets, we investigate the performance of common deep learning
and machine learning approaches as well as different training mechanisms (such
as contrastive learning), and various feature representations extracted from
the sensor time-series data and measure their effectiveness for the human
activity recognition task. | Danial Ahangarani, Mohammad Shirazi, Navid Ashraf | 2023-09-26T14:55:32Z | http://arxiv.org/abs/2310.03760v1 | Investigating Deep Neural Network Architecture and Feature Extraction Designs for Sensor-based Human Activity Recognition
###### Abstract
The extensive ubiquitous availability of sensors in smart devices and the Internet of Things (IoT) has opened up the possibilities for implementing sensor-based activity recognition. As opposed to traditional sensor time-series processing and hand-engineered feature extraction, in light of deep learning's proven effectiveness across various domains, numerous deep methods have been explored to tackle the challenges in activity recognition, outperforming the traditional signal processing and traditional machine learning approaches. In this work, by performing extensive experimental studies on two human activity recognition datasets, we investigate the performance of common deep learning and machine learning approaches as well as different training mechanisms (such as contrastive learning), and various feature representations extracted from the sensor time-series data and measure their effectiveness for the human activity recognition task.
_Keywords: human activity recognition; deep learning; contrastive learning; sensors; pretraining_
## I Introduction
The recent advancements in human activity recognition have given rise to a wide range of applications, which include smart homes [1], efficient manufacturing environments [4, 5], and patient activity monitoring for healthcare applications [3]. Activity recognition plays a crucial role in human life by capturing people's behaviors through data, enabling computing systems to monitor, analyze, and assist them in their daily activities. Due to the availability of various sensors such as accelerometers and gyroscopes (i.e., inertial measurement units or IMUs) in most off-the-shelf smart devices, as opposed to video-based approaches [6], recent approaches for human activity recognition have relied on such sensors [7], which introduce fewer privacy issues.
Earlier works on human activity recognition have leveraged signal processing techniques [8] and hand-engineered feature extraction methods [9]. Furthermore, traditional machine learning methods have also been widely adopted for human activity recognition in prior works. However, recent works have proposed various deep learning-based architectures that outperform the aforementioned works by extracting more complicated features from the input times-series data [2, 10, 11, 24, 13, 23, 22]. Considering the prior research on human activity recognition, we briefly summarize the involved challenges as follows:
1. **Deep Model Architecture Design**: There exists a wide range of complex deep learning architectures (such as feed-forward, convolutional [13], recurrent [14], residual [12], etc.). As each architecture has its own benefits and disadvantages, designing a model architecture that performs well for all human activity recognition datasets is challenging.
2. **Effective Time-Series Feature Extraction:** Prior works often consider time-series features to identify different activities. However, as shown in [2, 10] spectral or statistical features could also serve as additional inputs to enhance the model's capabilities for more accurate human activity recognition. Therefore, there is a need to investigate the performance of different models given various types of features extracted from the sensor data to provide a clear understanding of their effectiveness.
3. **Efficient Model Training Mechanism:** Common human activity approaches rely on traditional classification model training through the cross-entropy loss function. However, there exist other pretraining techniques, including contrastive learning [15] and the triplet loss [16], that could further push the limits of the human activity model to generate better results.

In this work, we aim to perform extensive experimental studies on two human activity recognition datasets to measure the effectiveness of common deep learning architectures, feature extraction methods, and model learning techniques for human activity recognition. The rest of this paper is organized as follows. We first review the related work in Sec. II, provide the details of the datasets and the preprocessing steps in Sec. III, followed by the feature extraction and problem statement in Sec. IV. Then, we explore the studied model architectures and different learning mechanisms in Sec. V. We then present our experimental studies in Sec. VI and conclude the paper in Sec. VII.
## II Related Work
Recent works on human activity recognition have focused on machine learning and deep neural networks due to their high accuracy on complicated tasks compared to hand-engineered approaches [8, 9]. For instance, the method proposed in [22] used long short-term memory (LSTM) layers to extract the temporal information in the sensor time-series. Similarly, [21] added the attention mechanism on top of the LSTM layers to enhance important feature extraction. Moreover, the model proposed in [13] is based on a 1-dimensional convolutional neural network that extracts the temporal information from the sensor data in a more efficient way. The model proposed in [18] leverages two LSTM layers to process the time-series data in two directions to enhance the temporal information extraction. The authors of [14] leveraged residual connections to augment the training of the human activity recognition model. Furthermore, to improve the training quality of the model, the method proposed in [2] incorporates the contrastive learning loss function [15] in addition to the commonly used cross-entropy loss function, which enhances the representation learning of the model.
## III Datasets, Data Preprocessing, and Feature Extraction
### _Datasets_
We briefly summarize the datasets studied as follows.
**Dataset 1** (DS1): The first dataset studied is collected by [17]; it contains 7,498 records covering six human activity classes: going downstairs, walking upstairs, jogging, standing, walking, and sitting. This dataset contains the time-series data collected from the accelerometer sensor by 36 different users.
**Dataset 2** (DS2): The second dataset studied is provided by [11]; it has a total of 39,168 records and consists of the following human activity classes: walking, bike riding, going upstairs, going downstairs, jogging, and riding in a bus/taxi. This dataset contains the time-series data collected by 48 different users from both the accelerometer and gyroscope sensors.
### _Data Preprocessing_
Having the recorded time-series data of the accelerometer and gyroscope sensors along the x, y, and z axes, with length \(L\), i.e., in \(\mathbb{R}^{L}\), we perform the following pre-processing steps to prepare them for model processing.
**Segmentation:** Given the time-series data, we divide them into multiple segments with a sliding window of size \(S\) (150 in this study), where each window has 70% of overlap with the previous window.
**Noise Filtering:** We use the moving average method to filter the noises caused by the vibrations that occurred while recording the sensor data. Specifically, we slide a window of size \(M=10\) and calculate the average of all the values within the window to eliminate the noise.
**Normalization:** Finally, since the scale of the values varies across different sensors, we leverage the min-max normalization to normalize each sensor axes (e.g., accelerometer along the y-axis) to have values in [0,1] interval.
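For concreteness, these three steps can be sketched in a few lines; the following is a minimal NumPy version, assuming a raw recording ts of shape (L, C) with C sensor channels (the function name and the small epsilon guard are our own illustrative choices).

import numpy as np

def preprocess(ts, S=150, overlap=0.7, M=10):
    # Segmentation: sliding windows of size S with 70% overlap.
    step = int(S * (1 - overlap))
    kernel = np.ones(M) / M
    segments = []
    for s in range(0, len(ts) - S + 1, step):
        seg = ts[s:s + S]
        # Noise filtering: moving average of length M along the time axis.
        seg = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode='same'), 0, seg)
        # Normalization: min-max scale each channel to [0, 1].
        mn, mx = seg.min(axis=0), seg.max(axis=0)
        segments.append((seg - mn) / (mx - mn + 1e-12))
    return np.stack(segments)  # shape: (num_segments, S, C)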
## IV Feature Extraction and Problem Statement
### _Feature Extraction_
We extract three different features from the time-series segments as described below:
**Temporal Features:** The most widely studied features for human activity recognition are the temporal features [11, 2, 18]. Basically, each resultant segment after the pre-processing is considered as the temporal features, which are then commonly processed by recurrent neural networks to extract the temporal information within them. Since here we are focusing on the two accelerometer and gyroscope sensors, each producing values along the three x, y, and z axes, the temporal features for DS2 have a dimension of \(\mathbb{R}^{S\times 6}\), where \(S\) is the size of the time-series segment as stated earlier. The temporal features have a dimension of \(\mathbb{R}^{S\times 3}\) for DS1, as it only contains the sensor data for the three axes of the accelerometer.
**Statistical Features:** Rather than processing the time-series segments, we can apply statistical functions (such as minimum, maximum, average, standard deviation, etc.) on each axis of each of the accelerometer and gyroscope sensors to extract statistical features. Such features have been shown by [10] to be highly effective for similar sensor time-series classification tasks. In this work, we consider the four functions minimum, maximum, average, and standard deviation to extract statistical features from each of the 6 axes of the accelerometer and gyroscope sensors. Thus, the resultant statistical features have a dimension in \(\mathbb{R}^{24}\) for DS2 and \(\mathbb{R}^{12}\) for DS1.

Fig. 1: Visualization of the temporal and spectral features for one example from the jogging human activity class.
**Spectral Features:** Finally, to capture more complicated patterns and extract far more advanced features, recent studies [2, 10], inspired by audio processing feature extraction methods [19], have proposed to extract spectral features from the time-series segments. Specifically, the continuous wavelet transform (CWT) is applied on each sensor axis with different scales, and the resultant features are combined to create multi-dimensional features. In this work, we leverage the Morlet wavelet function with 50 different scales to apply the CWT on the time-series segment [20]. Therefore, the spectral features have their dimension in \(\mathbb{R}^{50\times S\times 6}\) for DS2 and \(\mathbb{R}^{50\times S\times 3}\) for DS1, where \(S\) is the length of the segment as before.

To better demonstrate the extracted features, we visualize the temporal features and their corresponding spectral features for the accelerometer time-series values along the x, y, and z axes of an example record belonging to the jogging human activity class in Fig. 1. We can observe that jogging generally exhibits a repeated pattern in the accelerometer time-series values due to the nature of this activity.
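As an illustration, the spectral features can be computed with a CWT library; a minimal sketch using PyWavelets is shown below (the paper does not specify the implementation, so this library choice is an assumption).

import numpy as np
import pywt

def spectral_features(segment, num_scales=50):
    # segment: array of shape (S, C); one CWT per sensor channel.
    scales = np.arange(1, num_scales + 1)
    coefs = [pywt.cwt(segment[:, c], scales, 'morl')[0]  # shape (50, S)
             for c in range(segment.shape[1])]
    return np.stack(coefs, axis=-1)  # shape: (50, S, C)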
### _Problem Statement_
Given the sensor data recorded with the accelerometer and gyroscope sensor, the task of the human activity recognition model is to predict the correct human activity class. As stated above, in this work, the studied datasets consist of 6 different human activity classes.
## V Model Architectures and Training Mechanisms
In this section, we provide the details of the deep model architectures and the training mechanisms explored for the experimental studies.
### _Model Architectures_
Considering the fact that the temporal and spectral features could be either be processed by recurrent or convolutional neural networks, we have adopted the various model architectures from the literature [2, 10, 11, 12, 13, 14] to consider for our experiments. Besides, since traditional machine learning models are also widely studied for human activity recognition, we consider the common machine learning models for human activity recognition as baselines.
**Traditional Machine Learning Models:** We study support vector machine (SVM), K-nearest neighbor (KNN), gradient boosting decision tree (GBDT), logistic regression (LR), decision tree (DT), random forest (RF), AdaBoost, Gaussian Naive Bayes (GaussianNB), and multi-layer perceptron (MLP) as the most commonly used machine learning models for human activity recognition. The input to these models is the statistical features. For SVM, we use the linear kernel function. For KNN, we set the number of neighbors to 5. For RF, we set the number of estimators to 100. Finally, for MLP, we use two hidden layers.
**ResNet**[14]: We adapt the residual connection proposed in [24] to design a network based on convolutional neural networks. Specifically, we use 4 residual blocks each having two convolutional and two residual layers. The input to this model is the spectral features.
**Transformers**[23]: Recently, transformers [23] have been shown to be very effective in various domains. Thus, we have designed a neural network architecture based on transformers that processes the temporal features to identify different human activities. For this model, we leverage two transformer layers, each having 8 heads.
**LSTM**[13, 22]: According to [13, 22], We design a network based on long-short term memory that processes the temporal features. We leverage one LSTM layer with 64 hidden units for this model.
**BiLSTM**[18]: To better capture the temporal information, we process them in both the forward and backward direction and leverage the combination of the features extracted from both directions to classify the human activities. We leverage 2 BiLSTM layers each having 64 hidden units for this model.
**LSTM-Attention**[21]: We augment the LSTM network stated previously with the attention mechanism to measure the effect of such designs for human activity recognition. For this model, we leverage 2 LSTM layers with the attention mechanism, each having 64 hidden units.
**CNN1D**[13]: Recurrent neural networks are often slow and involve high computation overheads. Thus, we have designed a network architecture based on 1-dimensional convolutional neural networks (CNN1D) to process the temporal features for human activity recognition. For this model, we stack two 1-dimensional convolutional layers with 64 and 32 filters, respectively. Besides, we set the kernel size of the convolutional layers to 3.
**MRNet**[2, 10]: Prior studies suggest that the combination of all the temporal, statistical, and spectral features could be effective for higher classification accuracy. Therefore, we have adapted the network proposed in [2, 10], which first processes the temporal, statistical, and spectral features with sub-networks based on recurrent, fully connected, and convolutional neural networks, respectively. Then, we concatenate the
output of all three networks and use them to predict the human activity class.
For all the models above, we use the rectified linear unit (ReLU) function as the activation function. Besides, we add three fully connected layers with 256, 128, and 6 hidden units as their last layer to perform human activity classification for a total of 6 different classes. Similarly, all the classification layers leverage the ReLU activation function while the last layer uses the softmax function to generate the probability values for each human activity class.
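As an illustration of this shared design, the following is a minimal Keras sketch of the BiLSTM variant with the hyperparameters stated above (two bidirectional LSTM layers with 64 hidden units, followed by the 256/128/6 classification head); the compile settings mirror the parameters given in Sec. VI, and anything beyond those stated values is an assumption.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_bilstm(S=150, C=6, num_classes=6):
    model = models.Sequential([
        # temporal features of shape (S, C) as input
        layers.Bidirectional(layers.LSTM(64, return_sequences=True),
                             input_shape=(S, C)),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(256, activation='relu'),
        layers.Dense(128, activation='relu'),
        layers.Dense(num_classes, activation='softmax'),  # class probabilities
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model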
### _Training Mechanisms_
**Cross Entropy Classification Loss:** The most commonly used loss function for model training is the cross entropy loss formulated as follows:
\[\ell_{CE}=-\sum_{i=1}^{Z}p_{i}\log(\hat{p}_{i})\;, \tag{1}\]
where \(p_{i}\) is the ground-truth probability of class \(i\) (one for the correct human activity class and zero otherwise) and \(\hat{p}_{i}\) is the probability of class \(i\) generated by the model. \(Z=6\) is the total number of classes in this study.
**Supervised Contrastive Learning Loss:** Contrastive learning has been recently adopted for model pretraining in various tasks [15]. Here we adopt the supervised variant of the contrastive learning that leverages the label information from the dataset to generate distinguishable embeddings for each human activity class. The supervised contrastive learning loss is formulated as follows:
\[\ell_{CL}=\sum_{i\in Q}\frac{-1}{|A(i)|}\sum_{a\in A(i)}\log\frac{\exp(\mathbf{e}_{i}\cdot\mathbf{e}_{a}/\tau)}{\sum_{b\in A(i)}\exp(\mathbf{e}_{i}\cdot\mathbf{e}_{b}/\tau)}\;, \tag{2}\]
where \(Q\) is the set of all the data records, \(\mathbf{e}_{i}\) is the embedding of the \(i\)-th data record, \(\tau\) is the temperature parameter, and \(A(i)\) is the set of all the other data records with the same class as the \(i\)-th record.
**Triplet Loss:** Similarly, the triplet loss [16] aims to generate similar embeddings for data records belonging to the same class. The triplet loss is formulated as follows:
\[\ell_{TL}=max(d(\textbf{{e}}_{\textit{{a}}},\textbf{{e}}_{p})-d(\textbf{{e}}_ {\textit{{a}}},\textbf{{e}}_{n})+m,0), \tag{3}\]
where \(\textbf{{e}}_{\textit{{a}}}\), \(\textbf{{e}}_{p}\), \(\textbf{{e}}_{n}\) are the embeddings of the anchor, positive (same class as the anchor), and negative (different class than the anchor), respectively, and \(m\) is the margin controlling the distance between the embeddings. Besides, \(d(\cdot)\) represents the distance function such as the Euclidean distance.
Based on the above, we train different models using the cross-entropy loss without pretraining. On the other hand, we can first pretrain the model based on either the contrastive or the triplet loss functions, and then continue the training based on the cross-entropy loss function.
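To make the training mechanisms concrete, the following is a minimal TensorFlow sketch of the supervised contrastive loss in Eq. (2), assuming L2-normalized embeddings and integer class labels; it is an illustrative implementation rather than the authors' code, and it excludes each sample from its own softmax denominator, as is standard.

import tensorflow as tf

def supcon_loss(emb, labels, tau=0.07):
    # emb: (B, D) L2-normalized embeddings; labels: (B,) integer classes.
    sim = tf.matmul(emb, emb, transpose_b=True) / tau  # pairwise e_i . e_b / tau
    # mask out self-similarities in the softmax denominator
    logits_mask = 1.0 - tf.eye(tf.shape(emb)[0])
    exp_sim = tf.exp(sim) * logits_mask
    log_prob = sim - tf.math.log(tf.reduce_sum(exp_sim, axis=1, keepdims=True))
    # positives: the other samples that share the same label
    same = tf.cast(tf.equal(labels[:, None], labels[None, :]), tf.float32)
    pos_mask = same * logits_mask
    mean_log_prob_pos = tf.reduce_sum(pos_mask * log_prob, axis=1) / \
        tf.maximum(tf.reduce_sum(pos_mask, axis=1), 1.0)
    return -tf.reduce_mean(mean_log_prob_pos)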
## VI Experimental Studies
In this section, we first review the parameter settings, and then present and discuss the experimental studies.
**Parameters:** For all the models, we use the Adam optimizer with a learning rate of 0.001. We train the models using the cross-entropy loss for 50 iterations. For pretraining, we set the number of iterations to 10. Besides, we set the temperature parameter of the contrastive learning to \(\tau=0.07\). Moreover, we use 70% of the data for training, 10% for validation, and 20% for evaluation.
**Performance Results:** We first train the models based on the cross-entropy loss function on DS1 and DS2 and report the results in Table 1. We can see that high accuracy on DS1 is achieved by the ResNet and LSTM models. Although the performance achieved by these two models is very close, we found that, since ResNet takes advantage of residual connections and is based on lightweight convolutional neural networks, its training was much shorter than LSTM's, as the model converged faster. Furthermore, we can observe that MRNet has outperformed the ResNet
| Model | DS1 | DS2 |
| --- | --- | --- |
| SVM | 0.779 | 0.569 |
| KNN | 0.935 | 0.798 |
| GBDT | 0.892 | 0.784 |
| LR | 0.763 | 0.555 |
| DT | 0.874 | 0.759 |
| RF | 0.929 | 0.850 |
| AdaBoost | 0.446 | 0.683 |
| GaussianNB | 0.781 | 0.538 |
| MLP | 0.775 | 0.603 |
| ResNet | 0.954 | 0.535 |
| Transformers | 0.878 | 0.840 |
| LSTM | 0.953 | 0.873 |
| BiLSTM | 0.954 | 0.874 |
| LSTMAttention | 0.931 | 0.870 |
| CNN1D | 0.939 | 0.828 |
| MRNet | 0.970 | 0.552 |

TABLE 1: CROSS-ENTROPY ACCURACY (%)
| Model | DS1 | DS2 |
| --- | --- | --- |
| ResNet | 0.882 | 0.872 |
| Transformers | 0.852 | 0.813 |
| LSTM | 0.923 | 0.857 |
| BiLSTM | 0.915 | 0.856 |
| LSTMAttention | 0.870 | 0.826 |
| CNN1D | 0.919 | 0.820 |
| MRNet | 0.854 | 0.723 |

TABLE 2: SUPERVISED CONTRASTIVE LEARNING ACCURACY (%)
and LSTM models by combining the temporal, spectral, and statistical features, and further supports the idea proposed in [2, 10].
On the other hand, we can see that recurrent models such as LSTM and BiLSTM achieved much higher performance on DS2 compared to the other models, which suggests that such models are more suitable for learning on large datasets. Besides, we can observe that traditional machine learning models such as SVM, AdaBoost, and GaussianNB generally achieved low performance due to their limited modeling capacity.
Next, we perform the experiments by pretraining the models with supervised contrastive learning and the triplet loss and show the results in Tables 2 and 3, respectively. We observe no performance improvements on DS1 and improvements of less than 1% on DS2 for some of the models under these pretraining methods.
In summary, the BiLSTM model with the temporal features as its input and the cross-entropy loss function has the highest accuracy among all the deep learning models. We illustrate the confusion matrices of the BiLSTM model for DS1 and DS2 in Fig. 2, which shows the high accuracy of this model for different human activity classes.
## VII Conclusion
In this paper, we investigated the performance of different deep neural network architectures for human activity recognition given the temporal, statistical, and spectral features. Moreover, we explored different model designs based on residual connections, convolutional and recurrent layers, transformers, attention mechanisms, and traditional machine learning algorithms. Furthermore, we trained the models with three common learning algorithms and compared their performance in experiments on two large-scale human activity recognition datasets. According to our results, the combination of multiple features can lead to performance improvements, while learning algorithms such as contrastive learning or the triplet loss can be less effective depending on the characteristics of the dataset.
|
2309.13410 | Tropical neural networks and its applications to classifying
phylogenetic trees | Deep neural networks show great success when input vectors are in an
Euclidean space. However, those classical neural networks show a poor
performance when inputs are phylogenetic trees, which can be written as vectors
in the tropical projective torus. Here we propose tropical embedding to
transform a vector in the tropical projective torus to a vector in the
Euclidean space via the tropical metric. We introduce a tropical neural network
where the first layer is a tropical embedding layer and the following layers
are the same as the classical ones. We prove that this neural network with the
tropical metric is a universal approximator and we derive a backpropagation
rule for deep neural networks. Then we provide TensorFlow 2 codes for
implementing a tropical neural network in the same fashion as the classical
one, where the weights initialization problem is considered according to the
extreme value statistics. We apply our method to empirical data including
sequences of hemagglutinin for influenza virus from New York. Finally we show
that a tropical neural network can be interpreted as a generalization of a
tropical logistic regression. | Ruriko Yoshida, Georgios Aliatimis, Keiji Miura | 2023-09-23T15:47:35Z | http://arxiv.org/abs/2309.13410v1 | # Tropical neural networks and its applications to classifying phylogenetic trees
###### Abstract
Deep neural networks show great success when input vectors are in an Euclidean space. However, those classical neural networks show a poor performance when inputs are phylogenetic trees, which can be written as vectors in the tropical projective torus. Here we propose tropical embedding to transform a vector in the tropical projective torus to a vector in the Euclidean space via the tropical metric. We introduce a tropical neural network where the first layer is a tropical embedding layer and the following layers are the same as the classical ones. We prove that this neural network with the tropical metric is a universal approximator and we derive a backpropagation rule for deep neural networks. Then we provide TensorFlow 2 codes for implementing a tropical neural network in the same fashion as the classical one, where the weights initialization problem is considered according to the extreme value statistics. We apply our method to empirical data including sequences of hemagglutinin for influenza virus from New York. Finally we show that a tropical neural network can be interpreted as a generalization of a tropical logistic regression.
## 1 Introduction
A neural network is a learning method, known as deep learning, that learns from data by mimicking a brain system, i.e., interconnecting nodes, called neurons, in a layered structure like a human brain [13, 8, 15]. In recent years, deep neural networks have shown great success in processing input data that lie in a Euclidean space [11]. However, when input data are phylogenetic trees or time series with trends, represented as vectors in the _tropical projective torus_[26, 27, 23, 38, 39, 40, 29, 42, 35], classical neural networks show poor performance. Therefore, in this paper, we propose neural networks which process input data as vectors over the tropical projective torus. The tropical projective torus, denoted by \(\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\), is the space of \(d\)-dimensional real vectors, \(\mathbb{R}^{d}\), quotiented by the all-ones vector, i.e., \(\mathbf{1}:=(1,1,\ldots,1)\in\mathbb{R}^{d}\). Over the tropical projective torus \(\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\), we have \(x:=(x_{1},x_{2},\ldots,x_{d})=(x_{1}+c,x_{2}+c,\ldots,x_{d}+c)\in\mathbb{R}^{d} /\mathbb{R}\mathbf{1}\) for any \(c\in\mathbb{R}\)[20]. Here we consider the _tropical metric_, also known as
the _generalized Hilbert projective metric_, over the tropical projective torus as the activation function in a hidden layer of a neural network. It is important to keep the invariance of the input vector under the one-vector, which is innate in the tropical projective torus [20, 17, 33, 18, 41, 30]. Our strategy is to embed an input vector in the tropical projective torus into a vector in the classical Euclidean space in the first layer. This is analogous to word embedding in the field of natural language processing [37, 28]. Then the following layers can be the same as the classical ones.
Although some previous works analyzed ReLU neural networks by using the tropical geometry, the neural networks themselves are defined on a classical Euclidean space [43, 1, 24]. In this paper, on the other hand, we consider a tropical projective torus as an input space and keep the invariance under the one-vector. That is, our work is truly tropical.
In this paper, we first introduce a tropical embedding layer. We use the tropical embedding layer as the first layer of the classical neural networks to keep the invariance in the tropical projective space. To check if this tropical neural network has enough flexibility, we next prove that this tropical neural network is a universal approximator. Then we derive a backpropagation rule for the tropical neural networks. We provide TensorFlow 2 codes for implementing a tropical neural network in the same fashion as the classical one, where the weights initialization problem is considered according to the extreme value statistics. We show its applications to phylogenomics, a new field in evolutionary biology which applies tools from phylogenetic trees to genome data. Applications includes simulated data under the multi-species coalescent model which is the most popular model to analyze gene tree analysis on a genome data [21], and empirical data of influenza virus data set collected from the state of New York [27]. Finally we briefly show that a tropical neural network can be interpreted as a generalization of a tropical logistic regression.
## 2 Tropical Embedding for Tropical Neural Networks
The classical neural networks only accept an input vector in a Euclidean space in its original form. Thus they cannot accept a phylogenetic tree as an input, since a space of phylogenetic trees is not Euclidean [33, 3, 6], for example. Therefore, we first consider a tropical embedding layer, which is analogous to word embedding in natural language processing [37, 28]. Once a phylogenetic tree is embedded in a Euclidean space, a classical neural network can be applied to analyze it.
**Definition 1** (tropical neural networks).: _A tropical neural network is a network where a tropical embedding layer as the first hidden layer is followed by a classical neural network (classical layers)._
**Definition 2** (tropical embedding layer).: _Let \(x\) in \(\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\) be an input vector to the tropical embedding layer. The activity of the \(j\)-th neuron as an output of the_
tropical embedding layer is given by_
\[z_{j}=\max_{i}(x_{i}+w^{(1)}_{ji})-\min_{i}(x_{i}+w^{(1)}_{ji}). \tag{1}\]
**Remark 3**.: _Note that no activation function is applied to \(z\), as the "max - min" operation itself plays the role of the activation function for the neurons in the first hidden layer._
**Remark 4**.: _There is a geometric interpretation: the "max - min" operation measures the tropical distance between the points \(x\) and \(-w^{(1)}_{j}\). Therefore \(z(x)\) is invariant along the one-vector \(\mathbf{1}\)._
**Remark 5**.: _There are alternative ways to attain the invariance such as_
\[z_{j}=\max_{i}(x_{i}+w^{(1)}_{ji})-2\text{nd}\max_{i}(x_{i}+w^{(1)}_{ji}). \tag{2}\]
_There is a geometric interpretation: the "max - 2nd max" operation measures the distance between a point \(x\) and the tropical hyperplane whose normal vector is \(w^{(1)}_{j}\)[17]. Therefore \(z(x)\) is invariant along the one-vector \(\mathbf{1}\). One could even use the \(k\)-th max in general. However, the repertoire of representable functions does not increase by using these alternatives; that is, from the viewpoint of the universal approximation theorem, using Eq. (1) suffices. In addition, Eq. (1) seems to perform better than the alternatives according to our numerical experiments (not shown). Therefore we use Eq. (1) exclusively as the tropical embedding layer in what follows._
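For concreteness, here is a small NumPy illustration of Eqs. (1) and (2) on arbitrary numbers (our own example values):

import numpy as np

x = np.array([1.0, 2.0, 3.0])   # input in R^3 / R1
w = np.array([0.5, -1.0, 0.0])  # weights of one embedding neuron
s = np.sort(x + w)[::-1]        # sorted entries of x + w: [3.0, 1.5, 1.0]
z_eq1 = s[0] - s[-1]            # Eq. (1): max - min = 2.0
z_eq2 = s[0] - s[1]             # Eq. (2): max - 2nd max = 1.5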
**Remark 6**.: _Suppose \(A\in\mathbb{Z}_{+}^{N\times d}\). Then we consider the ReLU such that_
\[\max\{Ax+b,0\}.\]
_Assume that \(A\mathbf{1}\neq 0\). Suppose \(x\in\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\). Then we have \(x:=x+c\cdot(1,\ldots,1)=x+c\cdot\mathbf{1}\in\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\). Then for \(c\ll 0\) and fixed \(x\), we have:_
\[\max\{Ax+cA\mathbf{1}+b,0\} = 0.\]
_As \(c\to-\infty\), we have_
\[\frac{1}{1+\exp(-\max\{Ax+cA\mathbf{1}+b,0\})}\to\frac{1}{1+1}=1/2\]
_for any \(x\in\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\). Also for \(c\gg 0\) and fixed \(x\), we have:_
\[\max\{Ax+cA\mathbf{1}+b,0\} = Ax+cA\mathbf{1}+b.\]
_As \(c\to\infty\), we have_
\[\frac{1}{1+\exp(-(Ax+cA\mathbf{1}+b))}\to 1\]
_for any \(x\in\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\). Therefore, neural networks with the ReLU cannot learn from observations in these cases. However, with the activation function defined in Eq. (1), we have_

\[\max_{i}(x_{i}+c+w^{(1)}_{ji})-\min_{i}(x_{i}+c+w^{(1)}_{ji})=\max_{i}(x_{i}+w^{(1)}_{ji})-\min_{i}(x_{i}+w^{(1)}_{ji}).\]
**Remark 7**.: _Classical neural networks are not well-defined in the tropical projective torus, since the neuron values are not invariant under transformations of the form \(x\to x+(c,\ldots,c)\). Meanwhile, the tropical embedding layer of Eq. (1) is invariant under such transformations._
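A quick numerical illustration of Remarks 6 and 7 (a sketch with arbitrary values): a ReLU pre-activation changes under \(x\to x+c\cdot\mathbf{1}\) when \(A\mathbf{1}\neq 0\), while the tropical embedding of Eq. (1) does not.

import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(4, 3)), rng.normal(size=4)
w = rng.normal(size=3)
x = rng.normal(size=3)
c = 100.0  # shift along the one-vector

relu = lambda v: np.maximum(v, 0.0)
trop = lambda v: np.max(v + w) - np.min(v + w)

# adding the scalar c shifts every coordinate, i.e. x -> x + c * (1,...,1)
print(np.allclose(relu(A @ x + b), relu(A @ (x + c) + b)))  # False: not invariant
print(np.isclose(trop(x), trop(x + c)))                     # True: Eq. (1) is invariant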
## 3 Universal Approximation Theorems for Tropical Neural Networks
It is important to check whether the tropical embedding layer of Eq. (1), followed by classical layers, has enough expressive power to represent a sufficiently rich class of input-output relations [8]. In this section, we show that the tropical neural network can approximate a large enough variety of functions that we can safely use it.
**Definition 8**.: _The norm \(\|\cdot\|_{q}\) for \(q\geq 1\) is defined by_
\[\|f\|_{q}=\biggl(\int_{\mathbb{R}^{d}}|f(x)|^{q}dx\biggr)^{1/q}. \tag{3}\]
_The space \(L^{q}(\mathbb{R}^{d}),(1<q<\infty),\) is the set of Lebesgue integrable functions \(f\) from \(\mathbb{R}^{d}\) to \(\mathbb{R}\) for which \(\|f(x)\|_{q}<\infty\)._
**Definition 9**.: _The space \(C^{0}(\mathbb{R}^{d})\) is the set of continuous, compactly supported functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}\)._
**Remark 10**.: _Note that \(C^{0}(\mathbb{R}^{d})\subset L^{q}(\mathbb{R}^{d})\)._
For the classical case, a universal approximation theorem for ReLU feedforward neural networks has been proved in [4].
**Theorem 11** (classical universal approximation theorem [4]).: _Any function of \(x_{j}\) for \(j=1,\ldots,d\) in \(L^{q}(\mathbb{R}^{d}),(1<q<\infty),\) can be arbitrarily well approximated in the \(\|\cdot\|_{q}\) by a ReLU feedforward neural network with at most \(L=2(\lfloor\log_{2}d\rfloor+2)\) layers._
As the \(d-1\) neurons in the tropical embedding layer can easily represent \((x_{j}-x_{d})\) for \(j=1,\ldots,d-1\) and Theorem 11 can be applied to the second and later layers of a tropical neural network (that is equivalent to a classical neural network), we can prove the following theorem.
**Theorem 12** (tropical universal approximation theorem).: _Any function of \((x_{j}-x_{d})\) for \(j=1,\ldots,d-1\) in \(L^{q}(\mathbb{R}^{d}/\mathbb{R}\mathbf{1})\simeq L^{q}(\mathbb{R}^{d-1}),(1<q <\infty),\) can be arbitrarily well approximated in the \(\|\cdot\|_{q}\) by a tropical neural network with at most \(L=2(\lfloor\log_{2}d\rfloor+2)+1\) layers (which include an tropical embedding layer as the first layer)._
Proof.: For any \(f\in L^{q}(\mathbb{R}^{d-1})\), \(\exists g\in C_{0}(\mathbb{R}^{d-1})\) such that \(\|f-g\|_{q}<\epsilon/2\)[8]. Let \(K\) be the support of \(g\) and let \(M\) be \(\max_{x\in K}\|x\|\). For \(x\in K\), we can set \(w^{(1)}_{jj}=-w^{(1)}_{jd}=2M\) and \(w^{(1)}_{ji}=0\) for \(i\neq j,d\) to obtain \(z_{j}=x_{j}-x_{d}+4M\)
for \(j=1,\ldots,d-1\). This means that a neuron in the first tropical embedding layer can represent \(x_{j}-x_{d}\). Then \(d-1\) neurons can represent \(z_{1}\), \(z_{2}\),..., \(z_{d-1}\). Finally, simply apply Theorem 11 to the classical neural network \(F(z_{1},\ldots,z_{d-1})\) consisting of the second and later layers of a tropical neural network to obtain \(\|g-F\|_{q}<\epsilon/2\). Taken together, \(\|f-F\|_{q}<\|f-g\|_{q}+\|g-F\|_{q}<\epsilon\).
There is another type of classical universal approximation theorem.
**Definition 13**.: _The width \(d_{m}\) of a neural network is defined to be the maximal number of neurons in a layer._
**Theorem 14** (classical universal approximation theorem for width-bounded ReLU networks [19]).: _For any \(f\in L^{1}(\mathbb{R}^{d})\) and any \(\epsilon>0\), there exists a classical neural networks \(F(x)\) with ReLU activation functions with width \(d_{m}\leq d+4\) that satisfies_
\[\int_{\mathbb{R}^{d}}|f(x)-F(x)|dx<\epsilon. \tag{4}\]
Again, as the \(d-1\) neurons in the tropical embedding layer can easily represent \((x_{j}-x_{d})\) for \(j=1,\ldots,d-1\) and Theorem 14 can be applied to the second and later layers of a tropical neural network (that is equivalent to a classical neural network), we can prove the following theorem.
**Theorem 15** (tropical universal approximation theorem with bounded width).: _For any function \(f\) of \((x_{j}-x_{d})\) for \(j=1,\ldots,d-1\) in \(L^{1}(\mathbb{R}^{d}/\mathbb{R}\mathbf{1})\simeq L^{1}(\mathbb{R}^{d-1})\) and any \(\epsilon>0\), there exists a tropical neural network \(F(x)\) with width \(d_{m}\leq d+4\) that satisfies_
\[\int_{\mathbb{R}^{d-1}}|f(x)-F(x)|dx<\epsilon. \tag{5}\]
Proof.: For any \(f\in L^{1}(\mathbb{R}^{d-1})\), \(\exists g\in C_{0}(\mathbb{R}^{d-1})\) such that \(\|f-g\|_{1}<\epsilon/2\)[8]. Let \(K\) be the support of \(g\) and let \(M\) be \(\max_{x\in K}\|x\|\). For \(x\in K\), we can set \(w^{(1)}_{jj}=-w^{(1)}_{jd}=2M\) and \(w^{(1)}_{ji}=0\) for \(i\neq j,d\) to obtain \(z_{j}=x_{j}-x_{d}+4M\) for \(j=1,\ldots,d-1\). This means that a neuron in the first tropical embedding layer can represent \(x_{j}-x_{d}\). Then \(d-1\) neurons can represent \(z_{1}\), \(z_{2}\),..., \(z_{d-1}\). Finally, simply apply Theorem 14 to the classical neural network \(F(z_{1},\ldots,z_{d-1})\) consisting of the second and later layers of a tropical neural network to obtain \(\|g-F\|_{1}<\epsilon/2\). Taken together, \(\|f-F\|_{1}<\|f-g\|_{1}+\|g-F\|_{1}<\epsilon\).
## 4 Backpropagation Rule for Simplest Tropical Neural Networks
Here we demonstrate that the gradients of the loss function with respect to the weights exist for tropical neural networks. The gradient is computable through the chain rule for differentials, called the backpropagation rule, in a similar fashion to the classical case. The gradients obtained in this way guarantee a successful update of the weights at each iteration of learning.
We consider the simplest three-layer network, whose weights in the first and the second layers are denoted as \(w^{(1)}\in\mathbb{R}^{N\times d}\) and \(w^{(2)}\in\mathbb{R}^{N\times 1}\). Suppose the activity in the first hidden layer is given by Eq. 1 and the output of the network is given as
\[y=\sum_{j}^{N}w_{j}^{(2)}z_{j}. \tag{6}\]
Note that although here we derive a backpropagation rule for this regression setting just for simplicity, the backpropagation rule can be derived in a similar manner for the classification setting with a sigmoid function, too.
Below is the summary of parameters of the neural network.
* \(w^{(1)}\), \(w^{(2)}\): weights in the first and second layers;
* \(z_{j}\): activation of \(j\)-th neuron in the hidden layer; and
* \(y\): activation of the output neuron.
**Theorem 16**.: _The partial derivatives of the cost function \(Q:=\frac{1}{2}(y-y^{\text{true}})^{2}\) with respect to weights for the above tropical neural network \(y=f(x)\) are given by_
\[\frac{\partial Q}{\partial w_{ji}^{(1)}}=(y-y^{\text{true}})w_{j}^{(2)}(\delta (i=i_{j}^{\text{max}})-\delta(i=i_{j}^{\text{min}})), \tag{7}\]
_where \(i_{j}^{\text{max}}\) (or \(i_{j}^{\text{min}}\)) is the index \(i\) for which \((x_{i}+w_{ji}^{(1)})\) takes the maximum (or minimum), and_

\[\frac{\partial Q}{\partial w_{j}^{(2)}}=(y-y^{\text{true}})z_{j}. \tag{8}\]

Proof.: Direct calculations.

Figure 1: architecture of the simplest neural network that accepts a vector in \(\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\)
**Example 17**.: _As the simplest example of Eq. (7), let us consider the three-dimensional input case \((d=3)\). Suppose the number of neurons in the middle layer is one and its activity is \(z\), for simplicity. Assume \(x_{1}=1\), \(x_{2}=2\) and \(x_{3}=3\) and \(w_{1}^{(1)}=w_{2}^{(1)}=w_{3}^{(1)}=0\). Then \((x_{1}+w_{1}^{(1)})<(x_{2}+w_{2}^{(1)})<(x_{3}+w_{3}^{(1)})\), and hence \(i_{\text{max}}=3\) and \(i_{\text{min}}=1\). Therefore,_
\[\frac{\partial Q}{\partial w_{i}^{(1)}}=\left\{\begin{array}{ll}-(y-y^{true })w^{(2)}&(i=1)\\ 0&(i=2)\\ (y-y^{true})w^{(2)}&(i=3).\end{array}\right. \tag{9}\]
_In this case we have \(z=3-1=2\). If, furthermore, \(w^{(2)}=1\), then \(y=w^{(2)}z=2\) and_
\[\frac{\partial Q}{\partial w_{i}^{(1)}}=\left\{\begin{array}{ll}-(2-y^{true }),&(i=1)\\ 0,&(i=2)\\ 2-y^{true}.&(i=3),\end{array}\right. \tag{10}\]
\(w_{i}^{(1)}\) _can be updated, for example, by the SGD rule \(\Delta w_{i}^{(1)}=-\eta\frac{\partial Q}{\partial w_{i}^{(1)}}\), where \(\eta>0\) is a learning rate. Then \(w_{3}^{(1)}\) decreases (and \(w_{1}^{(1)}\) increases) if \(2>y^{\text{true}}\), so that the output \(y\) moves toward \(y^{\text{true}}\)._
**Remark 18**.: _It is interesting that only two of the \(w_{i}^{(1)}\) are modified while the others remain unchanged. Note that \(\Delta w^{(1)}\) is orthogonal to the one-vector \(\mathbf{1}:=(1,1,\ldots,1)\in\mathbb{R}^{d}\). It is interesting to elucidate how this learning rule works as a dynamical system._
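A minimal NumPy sketch of the update rules (7)-(8) for the network of Fig. 1 is given below, together with a finite-difference check of one gradient entry (all values are arbitrary; ties in the max/min occur with probability zero here).

import numpy as np

def forward(x, W1, w2):
    a = x[None, :] + W1                  # (N, d): entries x_i + w_ji
    z = a.max(axis=1) - a.min(axis=1)    # Eq. (1)
    return z, w2 @ z                     # hidden activities and output y

def grads(x, W1, w2, y_true):
    z, y = forward(x, W1, w2)
    a = x[None, :] + W1
    dW1 = np.zeros_like(W1)
    for j in range(W1.shape[0]):         # Eq. (7)
        dW1[j, a[j].argmax()] += (y - y_true) * w2[j]
        dW1[j, a[j].argmin()] -= (y - y_true) * w2[j]
    dw2 = (y - y_true) * z               # Eq. (8)
    return dW1, dw2

rng = np.random.default_rng(1)
x, W1, w2 = rng.normal(size=3), rng.normal(size=(4, 3)), rng.normal(size=4)
dW1, _ = grads(x, W1, w2, y_true=0.5)
Q = lambda W: 0.5 * (forward(x, W, w2)[1] - 0.5) ** 2
eps = 1e-6
E = np.zeros_like(W1)
E[0, 0] = eps
print(dW1[0, 0], (Q(W1 + E) - Q(W1 - E)) / (2 * eps))  # the two values agree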
## 5 TensorFlow2 Codes for Tropical Neural Networks
In order to accelerate computation on GPUs, we implement tropical neural networks in TensorFlow 2 [9]. As in the classical case, automatic differentiation is the key to the GPU implementation of tropical neural networks. In order to guarantee fast automatic differentiation, all calculations must be implemented only with the math functions in TensorFlow 2, such as top_k(\(v\),\(d\)), which returns the \(d\) largest entries of a vector \(v\) in sorted order, so that both the maximum and the minimum of \(v\) can be read off.
In practice, it is essential to create a user-friendly class for the tropical embedding as the first layer of the tropical neural networks, that is scalable for big data. The following code defines a hand-made class called TropEmbed(), which enables us to easily implement the tropical neural networks in the Keras/Tensorflow style.
import tensorflow as tf
from tensorflow.keras.backend import repeat_elements
from tensorflow.keras.layers import Layer, Dense
from tensorflow.keras.models import Sequential

class TropEmbed(Layer):
    def __init__(self, units=2, input_dim=3):
        super(TropEmbed, self).__init__()
        # one weight vector w_j per embedding neuron
        self.w = self.add_weight(shape=(units, input_dim), \
                                 initializer="random_normal")
        self.units = units
        self.input_dim = input_dim

    def call(self, x):
        x_reshaped = tf.reshape(x, [-1, 1, self.input_dim])
        Bcast = repeat_elements(x_reshaped, self.units, 1)
        # sort the entries of x + w_j: val[..., 0] is the max, val[..., -1] the min
        val, i = tf.math.top_k(Bcast + self.w, self.input_dim)
        return val[:, :, 0] - val[:, :, -1]  # Eq. (1): max - min

# usage
model = Sequential([TropEmbed(10, d), Dense(1)])
The code for the TropEmbed() class and for reproducing all the figures in this paper is available at [https://github.com/keiji-miura/TropicalNN](https://github.com/keiji-miura/TropicalNN).
## 6 Weight Initialization Based on Extreme Value Statistics
Weight initialization is important for preventing neural activities from exploding or vanishing after propagating through many layers. For classical neural networks, Xavier's and He's initializations are famous [12, 16]. Here we consider a tropical analogue.
**Definition 19** (Generalized Hilbert Projective Metric).: _For any points \(v:=(v_{1},\ldots,v_{d}),\,w:=(w_{1},\ldots,w_{d})\in\mathbb{R}^{d}/\mathbb{R} \mathbf{1}\), the tropical distance (also known as tropical metric) \(d_{\mathrm{tr}}\) between \(v\) and \(w\) is defined as:_
\[d_{\mathrm{tr}}(v,w):=\max_{i\in\{1,\ldots,d\}}\bigl{\{}v_{i}-w_{i}\bigr{\}}- \min_{i\in\{1,\ldots,d\}}\bigl{\{}v_{i}-w_{i}\bigr{\}}.\]
**Lemma 20**.: _Suppose \(x_{i},w_{i}\sim N(0,1)\) for \(i=1,\ldots,d\). Then the expectation and variance of \(d_{\mathrm{tr}}(x,-w)\) can be approximated by \(2\sqrt{2}(a_{d}\gamma+b_{d})\) and \(\frac{\pi^{2}}{3\log d}\), respectively, where \(a_{d}=\frac{1}{\sqrt{2\log d}}\), \(b_{d}=\sqrt{2\log d}-\frac{\log\log d+\log(4\pi)}{2\sqrt{2\log d}}\), and \(\gamma\) is the Euler-Mascheroni constant._
Proof.: As \(x_{i}+w_{i}\sim N(0,2)\), \(Z:=\frac{\max\{x+w\}/\sqrt{2}-b_{d}}{a_{d}}\sim Gumbel(0,1)\) as \(d\rightarrow\infty\). Therefore, \(\mathrm{Ex}[d_{\mathrm{tr}}(x,-w)]=\mathrm{Ex}[2\max\{x+w\}]\xrightarrow[d \rightarrow\infty]{}2\sqrt{2}(a_{d}\mathrm{Ex}[Z]+b_{d})\). \(\mathrm{Var}[d_{\mathrm{tr}}(x,-w)]=\mathrm{Var}[\max\{x+w\}-\min\{x+w\}]=2 \mathrm{Var}[\max\{x+w\}]+2\mathrm{Cov}[\max\{x+w\},-\min\{x+w\}]\xrightarrow[d \rightarrow\infty]{}2\times 2a_{d}^{2}\mathrm{Var}[Z]=2\times 2a_{d}^{2}\frac{\pi^{2}}{6}\) where \(\mathrm{Cov}[\max\{x+w\},-\min\{x+w\}]\xrightarrow[d\rightarrow\infty]{}0\) was assumed.
Here we confirm by numerical calculations that the above scaling actually holds; see Figure 2.
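A minimal Monte Carlo sketch of this check (samples of \(d_{\mathrm{tr}}(x,-w)\) of the kind used for Figure 2; the sample size is reduced here to keep memory modest):

import numpy as np

d, n = 10_000, 5_000
rng = np.random.default_rng(0)
d_tr = np.empty(n)
for k in range(n):
    s = rng.normal(scale=np.sqrt(2), size=d)  # x_i + w_i ~ N(0, 2)
    d_tr[k] = s.max() - s.min()               # d_tr(x, -w)

a_d = 1 / np.sqrt(2 * np.log(d))
b_d = np.sqrt(2 * np.log(d)) - \
      (np.log(np.log(d)) + np.log(4 * np.pi)) / (2 * np.sqrt(2 * np.log(d)))
print(d_tr.mean(), 2 * np.sqrt(2) * (a_d * np.euler_gamma + b_d))  # ~10.9 vs ~10.95
print(d_tr.std(), np.sqrt(np.pi**2 / (3 * np.log(d))))             # ~0.60 vs ~0.60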
One way for better weight initialization is to choose the scale of \(w\) so that the variance of the neural activity in the embedding layer becomes \(1\).
**Lemma 21**.: _Suppose \(x_{i}\sim N(0,1)\) and \(w_{i}\sim N(0,\frac{6\log d}{\pi^{2}}-1)\) for \(i=1,\ldots,d\). Then the expectation and variance of \(d_{\mathrm{tr}}(x,-w)\) can be approximated by \(2\sqrt{\frac{6\log d}{\pi^{2}}}(a_{d}\gamma+b_{d})\) and \(1\), respectively._
Proof.: As \(x_{i}+w_{i}\sim N(0,\frac{6\log d}{\pi^{2}})\), \(Z:=\frac{\max\{x+w\}/\sqrt{\frac{6\log d}{\pi^{2}}-b_{d}}}{a_{d}}\sim Gumbel(0,1)\). Therefore, \(\mathrm{Ex}[d_{\mathrm{tr}}(x,-w)]=\mathrm{Ex}[2\max\{x+w\}]\xrightarrow[d \rightarrow\infty]{}2\sqrt{\frac{6\log d}{\pi^{2}}}(a_{d}\mathrm{Ex}[Z]+b_{d})\). \(\mathrm{Var}[d_{\mathrm{tr}}(x,-w)]\xrightarrow[d\rightarrow\infty]{}2\times \frac{6\log d}{\pi^{2}}a_{d}^{2}\mathrm{Var}[Z]=2\frac{6\log d}{\pi^{2}}a_{d}^ {2}\frac{\pi^{2}}{6}=1\).
To control the standard deviation of the weights, you can customize an initializer (instead of simply specifying initializer = "random_normal") in the definition of TropEmbed class.
> ini = tf.keras.initializers.RandomNormal(mean=0., stddev=1.)
> self.w = self.add_weight(shape=(units, input_dim), \
>                          initializer=ini)

However, as the weight initialization should be done together with the data preprocessing, in this paper we simply use the default value of stddev=0.05 for "random_normal".

> self.w = self.add_weight(shape=(units, input_dim), \
>                          initializer="random_normal")
Figure 2: histogram of simulated \(d_{\mathrm{tr}}(x,-w)\) for the same situation as in Lemma 20 with \(d=10000\). For the histogram, \(100000\) samples of \(d_{\mathrm{tr}}(x,-w)\) are used. The simulated mean is \(10.893\) while the theoretical prediction is \(10.954\). The simulated std is \(0.604\) while the theoretical prediction is \(0.598\). The mean and std agree with the theoretical predictions.
## 7 Computational Experiments
In this section, we apply tropical neural networks with one hidden layer (the tropical embedding layer) to simulated data as well as empirical data. We then compare their performance against neural networks with one hidden layer with the ReLU activation function.
### Small simulated data
We first illustrate our tropical neural network with one hidden layer with \(16\) neurons and one output sigmoid function on a small example. We generate two-dimensional \(16+16\) random points from Gaussian distributions with means \((0.5,-0.5)\) and \((-0.5,0.5)\) and the unit covariance matrix. These points are then randomly translated by \((c,c)\), where \(c\) is a Gaussian random variable with standard deviation \(4\). The left and right panels in Figure 3 show the actual test labels and the predicted probabilities of the test data by the tropical neural network with one hidden layer with \(16\) neurons.
### High-dimensional simulated data
Second, we demonstrate that a tropical neural network with one hidden layer with \(8\) neurons and one output sigmoid function works against the curse of dimensionality, where most of the variables in the high-dimensional data are just noise [42]. We generate \(d\)-dimensional \(16+16\) random points from Gaussian distributions with means \((0.5,-0.5,0,\ldots,0)\) and \((-0.5,0.5,0,\ldots,0)\) and the unit covariance matrix. These points are then randomly translated by \((c,c,c,\ldots,c)\), where \(c\) is a Gaussian random variable with standard deviation \(6\). The result demonstrates that tropical neural networks are robust against the curse of dimensionality.
Figure 3: Predicted probabilities by the tropical neural networks on a small example.
### Simulated data generated from the multi-species coalescent model
In this subsection we apply the tropical neural networks to a sample of phylogenetic trees generated under the _multi-species coalescent model_.
A phylogenetic tree is a weighted tree whose leaves are labeled with \([m]:=\{1,2,\ldots,m\}\), where \(m\) is the number of leaves, and whose internal nodes are unlabeled. A weight on each edge of a phylogenetic tree is considered as a distance between two nodes on the tree; in evolutionary biology, a weight on an edge can be considered as a product of evolutionary time and a mutation rate [32]. In this paper we consider rooted phylogenetic trees with leaf label set \([m]\). A rooted phylogenetic tree with \(m\) leaves is called an _equidistant tree_ if the total weight on the unique path from its root to each leaf is the same for all leaves in \([m]\). Under the multi-species coalescent model, which is used to analyze gene trees, i.e., phylogenetic trees reconstructed from genes in a genome, gene trees are equidistant trees. Therefore in this paper we assume that all phylogenetic trees are equidistant trees.
To conduct a statistical analysis on a set of phylogenetic trees, we consider a _space of phylogenetic trees_ with fixed \([m]\). A space of phylogenetic trees on \([m]\) is the set of all possible phylogenetic trees with \([m]\), and it is well-known that it is not Euclidean [33]. It is also well-known that the space of all possible equidistant trees on \([m]\) with the _tropical metric_ under the max-plus algebra is a subspace of the tropical projective space [3, 39]. In order to define the space of equidistant trees, we first define _ultrametrics_. Consider a map \(u:[m]\times[m]\rightarrow\mathbb{R}\) such that \(u(i,j)=u(j,i)\) and \(u(i,i)=0\). This map is called a _dissimilarity map_ on \([m]\). If, for a dissimilarity map \(u\), the maximum

\[\max\{u(i,j),u(i,k),u(j,k)\}\]

is attained at least twice for all distinct \(i,j,k\in[m]\), then we call \(u\) an _ultrametric_.

Figure 4: Application of the tropical neural networks to a high-dimensional example. The test accuracy averaged over 100 trials is plotted. The tropical neural networks work robustly against the curse of dimensionality.
**Example 22**.: _Suppose \(m=3\), i.e., \([3]=\{1,2,3\}\) and suppose_
\[u(1,2)=u(2,1)=1,u(1,3)=u(3,1)=1,u(2,3)=u(3,2)=0.5\]
_and \(u(i,i)=0\) for all \(i=1,2,3\). Since_
\[\max\{u(1,2),u(1,3),u(2,3)\}=1\]
_and it is attained twice, i.e., \(u(1,2)=u(1,3)=1\). Thus, \(u\) is an ultrametric._
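A small sketch of the three-point check, applied to the ultrametric of Example 22 (the function name is our own):

import itertools
import numpy as np

def is_ultrametric(u):
    # u: symmetric (m, m) dissimilarity matrix with zero diagonal
    m = u.shape[0]
    for i, j, k in itertools.combinations(range(m), 3):
        triple = [u[i, j], u[i, k], u[j, k]]
        mx = max(triple)
        if sum(np.isclose(t, mx) for t in triple) < 2:  # max attained at least twice
            return False
    return True

u = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.5],
              [1.0, 0.5, 0.0]])
print(is_ultrametric(u))  # True, as in Example 22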
Consider the dissimilarity map \(u_{T}\) of a phylogenetic tree \(T\) on \([m]\) such that \(u_{T}(i,j)\) is the total weight on the unique path from leaf \(i\) to leaf \(j\) for all \(i,j\in[m]\). Then we have the following theorem:
**Theorem 23** ([7]).: _Consider an equidistant tree \(T\) on \([m]\). Then \(u_{T}\) realizes the equidistant tree \(T\) on \([m]\) if and only if the dissimilarity map \(u_{T}\) is an ultrametric._
**Example 24**.: _Suppose we have the ultrametric from Example 22. The equidistant tree whose dissimilarity map is the ultrametric in Example 22 is the rooted phylogenetic tree with leaves \([3]=\{1,2,3\}\) shown in Figure 5._
Therefore, we consider the space of all possible ultrametrics on \([m]\) as the space of equidistant trees on \([m]\). Then we have the following theorem:
**Theorem 25** ([3]).: _The space of ultrametrics on \([m]\) is the tropicalization of the linear subspace defined by linear equations_
\[x_{ij}-x_{ik}+x_{jk}=0\]
_for \(1\leq i<j<k\leq m\), by replacing sums with the max operation and multiplications with classical sums._
Figure 5: An equidistant tree whose dissimilarity map is the ultrametric shown in Example 22.
The space of ultrametrics on \([m]\) is a subspace of the tropical projective space \((\mathbb{R}^{e}\cup\{-\infty\})/\mathbb{R}\mathbf{1}\), where \(e=\binom{m}{2}\). Therefore, we apply our method, tropical neural networks, to simulated data generated from the multi-species coalescent model using the software Mesquite [21].
The multi-species coalescent model has two parameters: species depth and effective population size. Here we fix the effective population size \(N_{e}=100000\) and we vary
\[R=\frac{SD}{N_{e}}\]
where \(SD\) is the species depth. We generate species trees using the Yule process, and then use the multi-species coalescent model to generate gene trees for each given species tree. In this experiment, for each \(R\), we generate two different sets of \(1000\) gene trees; each set is generated from a different species tree, so the two sets differ from each other. We conduct experiments with \(R=0.25,0.5,1,2,5,10\).
Note that when the ratio \(R\) is small, gene trees behave more like random trees, since the species tree constrains the tree topologies of the gene trees less; classification is therefore harder. Conversely, when \(R\) is large, the species tree constrains the gene tree topologies more, and classification is easier.
In this experiment, we use one hidden layer in each neural network: a neural network with ReLU activation and a neural network with tropical activation. Both networks use the sigmoid function in the output node, and each hidden layer has \(1000\) neurons.
Figure 6 shows ROC curves for neural networks with ReLU activation and for tropical neural networks. In general, tropical neural networks perform better than neural networks with the ReLU activation function.

Figure 6: ROC curves for neural networks with ReLU and tropical neural networks with one hidden layer. We conduct experiments with \(R=0.25,0.5,1,2,5,10\).
### Influenza data
In this subsection we apply our method to genomic data for 1089 full-length sequences of hemagglutinin (HA) for influenza A H3N2 from 1993 to 2017, collected in the state of New York and obtained from the GISAID EpiFlu database (www.gisaid.org). The sequences were aligned using MUSCLE [10] with the default settings. We then apply the neighbor-joining method with the p-distance [31] to reconstruct a tree from each set of sequenced data. Each year corresponds to one flu season. We also apply KDETrees [38] to remove outliers; the sample size for each year is about 20,000 trees.
We apply tropical neural networks and ReLU neural networks, each with one hidden layer of 10 neurons, to all pairs of different years to see whether one year differs significantly from another. Heatmaps of the accuracy rates with the probability threshold 0.5 and of the AUC values are shown in Figure 7. Again, tropical neural networks outperform classical neural networks.
## 7 Tropical Neural Network as a Generalization of Tropical Logistic Regression for Classification of Gene Trees
A tropical neural network can be interpreted as a generalization of tropical logistic regression. The tropical logistic regression model [2] was developed for binary classification of gene trees and has been shown to have higher predictive power than classical logistic regression for classifying phylogenetic trees. It assumes that if \(X\) is the covariate (gene tree), the binary response random variable is \(Y\sim\text{Bernoulli}(p(X))\), with
\[p(x)=S\left(\lambda_{0}d_{\text{tr}}(x,\omega_{0})-\lambda_{1}d_{\text{tr}}(x,\omega_{1})+C\right),\]
where \(m\) is the number of leaves in each phylogenetic tree in the given sample, \(e:=\binom{m}{2}\), \(S\) is the sigmoid function, \(\omega_{0},\,\omega_{1}\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\), and \(\lambda_{0},\lambda_{1},C\in\mathbb{R}\) with \(\lambda_{0}\lambda_{1}\geq 0\). Note that this model is a special case of Eq. (6), with the sigmoid function as the link and two neurons in the hidden layer whose weights are \(w_{0}^{(2)}=\lambda_{0},w_{1}^{(2)}=-\lambda_{1}\) and \(w_{0}^{(1)}=-\omega_{0},w_{1}^{(1)}=-\omega_{1}\). Therefore, tropical logistic regression is almost identical to a tropical neural network consisting of one tropical embedding layer with two neurons and a classical layer, with the additional assumption that \(w_{0}^{(2)}w_{1}^{(2)}\leq 0\).
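As a sketch, the model above can be written in a few lines of NumPy, using the tropical metric \(d_{\text{tr}}(x,\omega)=\max_{i}(x_{i}-\omega_{i})-\min_{i}(x_{i}-\omega_{i})\) on the tropical projective torus (function names are ours; this is an illustration, not the reference implementation of [2]):

```
import numpy as np

def d_tr(x, w):
    # Tropical metric on the projective torus:
    # d_tr(x, w) = max_i (x_i - w_i) - min_i (x_i - w_i)
    diff = x - w
    return np.max(diff) - np.min(diff)

def tropical_logistic(x, w0, w1, lam0, lam1, C):
    # p(x) = sigmoid(lam0 * d_tr(x, w0) - lam1 * d_tr(x, w1) + C),
    # with the constraint lam0 * lam1 >= 0 assumed to hold.
    z = lam0 * d_tr(x, w0) - lam1 * d_tr(x, w1) + C
    return 1.0 / (1.0 + np.exp(-z))
```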
The one-species model described in [2] can be considered a neural network with \(e\) neurons in the input layer, no hidden layers, and a unique output neuron. The activation function is the logistic function, and the inner product used is the tropical one, defined as
\[\langle x,-\omega\rangle:=d_{\text{tr}}(x,\omega)-C, \tag{11}\]
where \(C\) can be considered to be a bias variable, similar to the intercept variable in classical models. Tropical logistic regression returns the sigmoid of the tropical inner product. We define the tropical generalised linear model as an extension of tropical logistic regression in which, instead of the sigmoid function, we may use a different link/activation function. If there are multiple outputs (a multivariate generalized linear model (GLM)) and we treat the output layer as the new input layer and iterate this \(L\) times, then we have an \(L\)-layer neural network. In the same way that classical neural networks are a stack, i.e., a recursive application, of classical multivariate GLMs, tropical neural networks can be a stack of tropical multivariate GLMs. Effectively, everything is identical to classical networks, except that instead of classical inner products we apply tropical inner products as defined in Eq. (11).

Figure 7: Heat maps for (top) classification rates with threshold 0.5 and (bottom) AUC values for classical neural networks with ReLU (left) and tropical neural networks (right).

The \(i\)-th neuron of the \(l\)-th layer is denoted \(x_{i}^{(l)}\) and is computed through the recursive formula,
\[x_{i}^{(l)}=d_{\mathrm{tr}}\left(x^{(l-1)},\omega_{i}^{(l)}\right)-C_{i}^{(l)}, \tag{12}\]
where \(\Omega^{(l)}=(\omega_{1}^{(l)},\omega_{2}^{(l)},\ldots,\omega_{N_{l}}^{(l)})\in\mathbb{R}^{N_{l-1}\times N_{l}}\) is the weight matrix between layers \((l-1)\) and \(l\), \(N_{s}\) denotes the number of neurons in layer \(s\), and \(C^{(l)}\in\mathbb{R}^{N_{l}}\). By assuming that all neurons share the same bias variable \(c=C_{i}^{(l)}\) for all \(i\in[N_{l}]\), Eq. (12) reduces to Eq. (1), since vectors in the tropical projective torus are defined up to an additive constant vector \((c,\ldots,c)\). When the last tropical embedding layer connects to the first classical layer, the constant bias vector is absorbed into the bias term of the classical layer. Hence, tropical bias terms are redundant and are not considered in the development of tropical neural networks. Thus, the tropical neural network which we propose in this paper follows naturally as an extension of the tropical logistic regression model.
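A minimal vectorized sketch of one such tropical layer, Eq. (12), might look as follows (our illustration; shapes and names are assumptions, not the paper's TensorFlow 2 code):

```
import numpy as np

def tropical_layer(x, Omega, C=None):
    # x: input vector of shape (N_prev,); Omega: weights of shape
    # (N_prev, N_l). Returns x_i^(l) = d_tr(x, omega_i) - C_i for each i.
    diff = x[:, None] - Omega                  # (N_prev, N_l)
    out = diff.max(axis=0) - diff.min(axis=0)  # tropical distances
    return out if C is None else out - C

x = np.array([0.0, 1.0, 3.0])
Omega = np.random.default_rng(1).normal(size=(3, 4))
print(tropical_layer(x, Omega))               # shared bias omitted
```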
## 8 Summary and Discussion
In this paper, we first developed a tropical embedding layer. We used the tropical embedding layer as the first layer of a classical neural network in order to keep the invariance inherent in the tropical projective space. To check that this tropical neural network has enough flexibility, we proved that it is a universal approximator. After deriving a backpropagation rule for tropical neural networks, we provided TensorFlow 2 code for implementing a tropical neural network in the same fashion as a classical one, where weight initialization is handled according to extreme value statistics. Finally, we presented several applications as examples.
Tropical neural networks with the tropical metric performed better than classical neural networks when the input data are phylogenetic trees, which live in the tropical projective torus. This is partly because only the tropical neural network preserves the invariance of the input vector under addition of the one-vector, which is innate to the tropical projective torus.
One of the nice properties of tropical neural networks is their tractability and interpretability. The tropical embedding can be interpreted as taking the tropical distance to a point in the tropical projective torus. The activity of neurons in a tropical neural network with randomized weights and inputs can be analyzed using extreme value statistics. The backpropagation rule of tropical neural networks can be derived and interpreted rather easily.
The TensorFlow 2 code for the Python class implementing the tropical embedding was provided in the paper. This makes it possible to implement a tropical neural network in the same familiar fashion as a classical one, and facilitates, for example, comparing tropical and classical neural networks on the same data using common code.
Recent work shows that neural networks are vulnerable to adversarial attacks (for example, [5, 25, 34, 22]). However, our initial computational experiments on image data from computer vision show that tropical neural networks are robust against gradient-based methods, such as the Fast Gradient Sign Method [14] and Ensemble Adversarial Training [36]. It would be interesting to investigate why tropical neural networks are robust against such attacks, and also to develop adversarial attacks targeted at tropical neural networks.
|
2303.17995 | Neural Network Entropy (NNetEn): Entropy-Based EEG Signal and Chaotic
Time Series Classification, Python Package for NNetEn Calculation | Entropy measures are effective features for time series classification
problems. Traditional entropy measures, such as Shannon entropy, use
probability distribution function. However, for the effective separation of
time series, new entropy estimation methods are required to characterize the
chaotic dynamic of the system. Our concept of Neural Network Entropy (NNetEn)
is based on the classification of special datasets in relation to the entropy
of the time series recorded in the reservoir of the neural network. NNetEn
estimates the chaotic dynamics of time series in an original way and does not
take into account probability distribution functions. We propose two new
classification metrics: R2 Efficiency and Pearson Efficiency. The efficiency of
NNetEn is verified on separation of two chaotic time series of sine mapping
using dispersion analysis. For two close dynamic time series (r = 1.1918 and r
= 1.2243), the F-ratio has reached the value of 124 and reflects high
efficiency of the introduced method in classification problems. The
electroenceph-alography signal classification for healthy persons and patients
with Alzheimer disease illustrates the practical application of the NNetEn
features. Our computations demonstrate the synergistic effect of increasing
classification accuracy when applying traditional entropy measures and the
NNetEn concept conjointly. An implementation of the algorithms in Python is
presented. | Andrei Velichko, Maksim Belyaev, Yuriy Izotov, Murugappan Murugappan, Hanif Heidari | 2023-03-31T12:11:21Z | http://arxiv.org/abs/2303.17995v2 | Neural Network Entropy (NNetEn): Entropy-Based EEG Signal and Chaotic Time Series Classification, Python Package for NNetEn Calculation
###### Abstract
Entropy measures are effective features for time series classification problems. Traditional entropy measures, such as Shannon entropy, use probability distribution function. However, for the effective separation of time series, new entropy estimation methods are required to characterize the chaotic dynamic of the system. Our concept of Neural Network Entropy (NNetEn) is based on the classification of special datasets in relation to the entropy of the time series recorded in the reservoir of the neural network. NNetEn estimates the chaotic dynamics of time series in an original way and does not take into account probability distribution functions. We propose two new classification metrics: R2 Efficiency and Pearson Efficiency. The efficiency of NNetEn is verified on separation of two chaotic time series of sine mapping using dispersion analysis. For two close dynamic time series (\(r=1.1918\) and \(r=1.2243\)), the F-ratio has reached the value of 124 and reflects high efficiency of the introduced method in classification problems. The electroencephalography signal classification for healthy persons and patients with Alzheimer disease illustrates the practical application of the NNetEn features. Our computations demonstrate the synergistic effect of increasing classification accuracy when applying traditional entropy measures and the NNetEn concept conjointly. An implementation of the algorithms in Python is presented.
time series classification; electroencephalography; entropy features; neural network entropy; Python; NNetEn
## 1 Introduction
During the past 160 years, the concept of entropy has been applied to thermodynamical systems [1]. Over the years, the concept of entropy has been extended in various ways to solve scientific problems related to biomedical, healthcare, thermodynamics, physics, and others. Using horizon entropy, Jacobson and Parentani assessed the energy of black hole horizons [2]; Bejan studied entropy from the thermodynamic standpoint [3]. Bagnoli described the concept of entropy using the relationship between a thermal engine and a waterwheel [4]. His results showed that the concept of entropy is not restricted to thermal engines but can be applied to a wider variety of machines as well. A distribution entropy measure was used by Karmakar et al. for detecting short-term arrhythmias in heart rate [5].
There are two types of extended entropy measures: thermodynamic entropy and Shannon entropy. In thermodynamics, the entropy measure is related to the energy of a physical system. A Shannon entropy measure is an entropy measure that is used in information theory. Information/Shannon entropy measures are used to quantify the degree of freedom (DoF) or complexity of time series in practical applications. The technological advances in digitalization and their wide applications in practical problems in recent decades have made information entropy measures very popular.
The approximate entropy (ApEn) measure is a well-known entropy measure employed in biosignal analysis [6]. To quantify the self-similarity pattern in a given time series, ApEn takes the embedding dimension \(m\), the time delay \(\tau\), and the tolerance value \(r\) as initial parameters. On the basis of these parameters, a trajectory matrix is constructed, and ApEn is computed by finding the self-similarity value between the rows of the trajectory matrix using the tolerance \(r\) [6]. For short time series, ApEn is not accurate due to a strong dependence on data length [1]. Sample entropy (SampEn) and permutation entropy (PermEn) were introduced independently to overcome these difficulties [1]. The SampEn algorithm ignores self-matching patterns, which reduces computation time. It has been widely used in practical applications to analyze heart rate [7], recognize emotions [8], and analyze MRI signals [9]. The SampEn measure is inefficient when the time series are erratic or spiked [10].
In order to overcome these limitations, the Heaviside function used in the SampEn algorithm is substituted by a fuzzy membership function [1]. This new entropy measure is called fuzzy entropy (FuzzyEn). According to experimental results, FuzzyEn performs better than SampEn [11; 12]. In recent years, FuzzyEn has been used to detect epileptic seizures [13], recognize emotions [14], and investigate the dynamics of lung cancer images [15]. Chanwimalueang and Mandic proposed cosine similarity entropy (CoSiEn) as another improvement of SampEn [10]. They used a different (angular) metric to search for similar embedding vectors. Their results showed that CoSiEn outperforms SampEn and FuzzyEn in the presence of \(1/f\) noise or for short datasets. In a study by Li et al. [16], distribution entropy (DistEn) was introduced as an improvement over SampEn. According to Karmakar et al., DistEn is less sensitive than SampEn to initial parameters and gives additional information about the time series [5]. Hence, DistEn can be regarded as a feature independent of SampEn.
Permutation entropy (PermEn), another improvement of ApEn, combines entropy with symbolic dynamics [17]. In the PermEn algorithm, partitions are constructed by comparing neighboring time series values. PermEn is widely used in the literature due to its robustness and computational efficiency [18]. Despite these advantages, PermEn is not stable with respect to its input parameters: its accuracy depends on several input parameters, and different initial parameters can produce inconsistent results [19]. In order to overcome this difficulty, bubble entropy (BubbleEn) was introduced [19]. Similar to PermEn, BubbleEn uses a bubble sort algorithm to rank embedded vectors, but it is less sensitive to initial parameters than PermEn [19]. Increment entropy (IncEn) modifies PermEn by transforming ranked time series into symbol sequences [20]. Consequently, IncEn is a suitable entropy measure for short datasets [20].
In addition to SampEn, PermEn, and their improved entropy measures, some entropy measures directly improve ShEn. Alter et al. extended ShEn to data matrices based on the singular value decomposition (SVD) theorem [21; 22]. Singular value decomposition entropy (SVDEn) is defined as the ShEn of the normalized eigenvalues of the diagonal matrix appearing in the SVD. A new entropy measure based on Poincare plots of time series was introduced by Yan et al. [23]. The Poincare plot is divided into \(n\times n\) grids, and in each grid the probability of the number of points is calculated. The ShEn measure with respect to these probability distributions is known as grid entropy (GridEn) [23]. Rohila and Sharma proposed phase entropy (PhaseEn), based on GridEn's concept [24]; PhaseEn is computed using a second-order Poincare plot and ShEn [24]. In a recent study, Yang et al. introduced attention entropy (AttnEn), which considers only peak points. The results show that AttnEn is robust for short time series [25].
All of the mentioned entropy measures are extensions of the ShEn algorithm, which requires probability density functions. An entropy measure called NNetEn [26] was introduced by Velichko and Heidari; it uses feedforward neural networks with the LogNNet model [27; 28] and does not take probability distribution functions into account. In [29], the researchers showed that NNetEn is robust under noise conditions and outperforms traditional entropy measures.
Figure 1 shows the general concept of calculating NNetEn. A time series \(X=(x_{1},\ldots,x_{N})\) of length \(N\) is written to the reservoir (stage 1). In the second stage, a dataset is selected to be used as the basis for calculating the classification metric. We feed the \(Y\) vector from the dataset into the reservoir input (stage 3), and then pass it through normalization (stage 4). The time series elements, written into the reservoir matrix, transform the \(Y\) vector (stage 5). Consequently, the output vector \(Sh\) (stage 6) has a different dimension from the input vector \(Y\). The output vector \(Sh\) is normalized (stage 7) and fed into the output classifier (stage 8). In stage 9, the classifier is trained on the dataset, and in stage 10, it is tested. A linear transformation is performed on the classification metric in order to convert it into the NNetEn entropy (stage 11). The details of the steps are explained in Section 2.3 of the methodology. A higher degree of irregularity in the time series \(X\) leads to a more efficient transformation of the input vector \(Y\) into a different dimension, which in turn leads to a higher classification metric and a higher output entropy. This principle differs from the common principles for calculating entropy based on a probability distribution.
The choice of data for entropy estimation (stage 2 in Figure 1) is an important aspect of the study. Initially, LogNNet was developed to compute NNetEn based on the MNIST handwritten digits dataset [30]. Here, we used 60,000 handwritten images for training and 10,000 handwritten images for testing the proposed entropy measure. Several studies have applied this method successfully [29; 31; 32; 33; 34; 35]. Li et al. considered NNetEn as a characteristic of ship-radiated noise signals [31]. Heidari used NNetEn of electroencephalogram (EEG) signals for classifying motor imagery of end users with disabilities [32]. He found that eight channels are enough for the classification, while other existing methods required 30 channels. Velichko et al. introduced two-dimensional NNetEn for remote sensing imagery and geophysical mapping [33]. NNetEn has also been used to analyze the dynamics of a chaotic spike oscillator based on an S-switch [34] and to measure the complexity of the response of traveling ionospheric disturbances under geomagnetic storms [35].
One of the disadvantages of the method is the high computation time associated with training and testing the network on the MNIST-10 database (stages 9, 10). To reduce computational overhead, we tested the method with a reduced MNIST database size and used the SARS-CoV-2-RBV1 dataset [36] as the input dataset in Stage 2.

Figure 1: NNetEn calculation concept. The concept demonstrates the sequence of steps for computing NNetEn, including writing a time series to the reservoir, transforming the database in the reservoir through the time series, training and testing the output classifier, and converting the classification metric into NNetEn entropy.
Compared to other entropy measures, NNetEn has the unique feature of characterizing the dynamic behavior of input data of either short or long length, yielding meaningful information about the input data. In previous studies, NNetEn was computed using Delphi software; however, it is impractical to expect researchers to be familiar with this software tool. Hence, we aim to calculate the NNetEn measure using the widely used programming language Python. In this study, we have developed Python code to compute NNetEn, tested the tool with different types of input data, and benchmarked the performance of the proposed method against state-of-the-art methods in the literature.
NNetEn's ability to differentiate signals was investigated on synthetic signals based on sine mapping. In addition, we analyzed the EEG recordings obtained at the AHEPA General Hospital of Thessaloniki's Department of Neurology [37]. Information on the functioning of the human brain can help to develop such tools as human-machine interfaces (HMI) and clinical diagnosis systems (CDS) [38]. Besides analyzing cognitive processes (emotional state, mental fatigue level, emotional stress, mental health), it is possible to detect the presence of a number of neurological diseases (epilepsy, Alzheimer's disease, attention deficit/hyperactivity disorder, bipolar disorder, etc.) [38]. In order to classify EEG signals, it is necessary to extract features from the original EEG signal. Feature extraction methods include spectral analysis [39], wavelet analysis [40], entropy [41], statistical analysis [42], and others. Following feature extraction, signals are classified using machine learning algorithms, such as the nearest neighbor method, support vector machines, decision trees, random decision forests, neural networks, etc. [38; 43]. In this paper, the robustness of the NNetEn feature in analyzing EEG signals is investigated using the AHEPA dataset, which includes electroencephalograms from 88 patients divided into three groups: a control group (29 people), dementia patients (23 people), and Alzheimer's disease patients (36 people). The NNetEn features extracted from this dataset were then processed and classified using a conventional machine learning algorithm (Support Vector Machine (SVM)), and the performance of the NNetEn feature was compared with other conventional entropy measures. In addition to existing accuracy metrics [44; 45], we have developed two alternative classification accuracy metrics using the \(R^{2}\) and Pearson coefficients (R2 Efficiency and Pearson Efficiency) to assess the performance of NNetEn.
The major contributions of the paper are:
* The concept of computing NNetEn is introduced.
* Investigating the effect of input dataset (Stage 2) in NNetEn value by considering the SARS-CoV-2-RBV1 dataset.
* Proposing the R2 Efficiency and Pearson Efficiency as new time series features related to NNetEn.
* Python package for NNetEn calculation is developed.
* The results of the separation of synthetic and real EEG signals using NNetEn are presented.
* The synergistic effect of using NNetEn entropy along with traditional entropy measures for classifying EEG signals is demonstrated.
The rest of the paper is organized as follows. In Section 2, the used datasets, proposed methods and new metrics are described. Section 3 is devoted to numerical results. The sine chaotic map and EEG signal of control group and patient with Alzheimer disease are analyzed. We demonstrate the proposed method, and the new metrics are robust for classifying different signals. Section 4 concludes the study and outlines directions for future research.
## 2 Materials and Methods
### Description of Datasets
In this paper, we consider MNIST-10 [30] and SARS-CoV-2-RBV1 [36] as the input datasets for Stage 2 and the EEG dataset [37] as the input dataset for Stage 1 in the NNetEn algorithm. For simplicity, we call MNIST-10, SARS-CoV-2-RBV1 and the considered EEG dataset as Dataset 1, Dataset 2 and Dataset 3 respectively.
* Dataset 1: The MNIST dataset contains handwritten numbers from '0' to '9' with 60,000 training images and 10,000 testing images. Each image has a size of 28 x 28 = 784 pixels and is presented in grayscale coloring. This dataset is well balanced since the number of vectors of different classes is approximately the same. The distribution of elements in each class is given in Table 1.
Here, Dataset 1 and Dataset 2 are balanced so that simple metrics such as classification accuracy can be successfully applied to them.
In the study of NNetEn, we also measured the dependence of the signal separation efficiency on the database usage fraction \(\mu\). For example, \(\mu\) = 0.01 means using 600 samples for training and 100 samples for testing for Dataset 1, and 53 samples for Dataset 2. Varying this parameter allows one to trade off the entropy calculation time against its functionality.
* Dataset 3: This dataset contains records of EEG signals recorded at the AHEPA General Hospital of Thessaloniki's 2nd Department of Neurology [46]. This dataset consists of electroencephalograms of 88 patients divided into three groups: controls (29 people), Alzheimer's disease patients (36 people), and dementia patients (23 people). In order to record EEG, the authors of the dataset used a Nihon Kohden EEG 2100 device with 19 electrodes (channels) located on the head according to the 10-20 scheme: Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, O2. Each channel's signal was digitized at a sampling rate of 500 Hz. The duration of EEG recordings ranged from 5 min to 21.3 min. A resting EEG was recorded with the eyes closed.
Table 1: Number of images of different classes in the MNIST-10 dataset.

| Class | Number of Training Images | Number of Testing Images |
|-------|---------------------------|--------------------------|
| 0 | 5923 | 980 |
| 1 | 6742 | 1135 |
| 2 | 5958 | 1032 |
| 3 | 6131 | 1010 |
| 4 | 5842 | 982 |
| 5 | 5421 | 892 |
| 6 | 5918 | 958 |
| 7 | 6265 | 1028 |
| 8 | 5851 | 974 |
| 9 | 5949 | 1009 |
| Total | 60,000 | 10,000 |
### Performance Metrics
Entropy is estimated using the classification metrics of neural networks: Classification Accuracy, R2 Efficiency and Pearson Efficiency.
* Classification Accuracy (_Acc_): Based on [26], we propose to use the classification accuracy, defined for multiclass classification as
\[Acc=\frac{\sum\limits_{i=1}^{K}TP(C_{i,i})}{\sum\limits_{i=1}^{K}\sum\limits_{j=1}^{K}C_{i,j}}\,, \tag{1}\]
where \(TP(C_{i,i})\) indicates the number of true positive classified elements in class \(i\). Here we use a multiclass confusion matrix of dimension \(K\times K\), where \(K\) is the number of different class labels and \(C_{i,j}\) are the coefficients of the confusion matrix [44].
The only disadvantage of this metric is its significant discreteness, which is especially noticeable on small datasets. If there are 100 vectors in the test dataset, then the accuracy value is limited to two decimal places (e.g., 0.55, 0.62, 0.99). In this paper, we present two more classification metrics that evaluate the efficiency of the classification algorithm by analogy with regression problems; their values have many significant digits after the decimal point. Figure 2 is presented for clarification. There is a data vector _Sout_ in the output layer of the classifier and a label vector \(L\) in the training dataset. For each pair, a Pearson correlation coefficient \(\rho\) (Equation (2)) or a determination coefficient \(R^{2}\) (Equation (3)) is calculated.
Pearson correlation coefficient:
\[\rho=\frac{\sum\limits_{i=1}^{K}(L_{i}-\overline{L})(Sout_{i}-\overline{Sout})}{\sqrt{\sum\limits_{i=1}^{K}(L_{i}-\overline{L})^{2}}\sqrt{\sum\limits_{i=1}^{K}(Sout_{i}-\overline{Sout})^{2}}} \tag{2}\]
Determination coefficient:
\[R^{2}=1-\frac{\sum\limits_{i=1}^{K}(L_{i}-Sout_{i})^{2}}{\sum\limits_{i=1}^{K}(L_{i}-\overline{L})^{2}} \tag{3}\]
Figure 2: An example of calculating the determination coefficient \(R^{2}\) and the Pearson correlation coefficient \(\rho\) for a database with 10 classes of the MNIST dataset (**a**) and the SARS-CoV-2-RBV1 dataset with binary classification (**b**).
where \(\overline{L}\) is the mean value of vector \(L\) and \(\overline{Sout}\) is the mean value of vector \(Sout\).
* R2 Efficiency (\(R2E\)): The classification efficiency metric R2 Efficiency is proposed as an additional performance measure in this work. It is equal to the average \(R_{i}^{2}\) over all \(M\) vectors in the testing dataset:

\[R2E=\frac{\sum_{i=1}^{M}R_{i}^{2}}{M} \tag{4}\]
The more closely the \(Sout\) vector repeats the label vector \(L\), the closer R2 Efficiency is to 1.
* Pearson Efficiency (\(PE\)): The Pearson Efficiency metric is proposed, which is equal to the average \(\rho_{i}\) for all vectors in the testing dataset.
\[\text{Metric: Pearson Efficiency}\qquad\qquad PE=\frac{\sum_{i=1}^{M}\rho_{i}}{M} \tag{5}\]
The more closely the \(Sout\) vector repeats the \(L\) label vector, the closer the Pearson Efficiency is to 1.
As a result, NNetEn is equivalent to the value of one of the three types of metrics.
\[\text{NNetEn=}\begin{cases}\text{Accuracy}\\ \text{R2 Efficiency}\\ \text{Pearson Efficiency}\end{cases} \tag{6}\]
The metric type is an input parameter to the entropy function (see Section 2.4).
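For concreteness, the three metrics can be computed from the classifier outputs and one-hot labels as in the following sketch (our illustration; `nnet_metrics` is a hypothetical helper, not part of the package API):

```
import numpy as np

def nnet_metrics(Sout, L):
    # Sout: classifier outputs, shape (M, K); L: one-hot labels, (M, K).
    # Returns Accuracy (Eq. 1), R2 Efficiency (Eq. 4) and Pearson
    # Efficiency (Eq. 5), averaging Eqs. (2)-(3) over the M test vectors.
    acc = np.mean(Sout.argmax(axis=1) == L.argmax(axis=1))
    Lc = L - L.mean(axis=1, keepdims=True)
    Sc = Sout - Sout.mean(axis=1, keepdims=True)
    rho = (Lc * Sc).sum(axis=1) / np.sqrt(
        (Lc ** 2).sum(axis=1) * (Sc ** 2).sum(axis=1))
    r2 = 1.0 - ((L - Sout) ** 2).sum(axis=1) / (Lc ** 2).sum(axis=1)
    return acc, r2.mean(), rho.mean()
```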
### NNetEn Calculation
The main steps involved in calculating NNetEn are shown in Figure 3 and described in detail below. The structure is basically the same as that of the LogNNet neural networks we presented in [27]. An important feature of the reservoir is the presence of a matrix transformation in which the matrix coefficients are filled with the time series \(X\).
The main stages of NNetEn calculation used in this study are described in detail below.
Stage 1: The first stage involves writing the time series \(X=(x_{1},\ldots,x_{N})\) of length \(N\) into the reservoir. The maximum length of the series is determined by the number of elements in the reservoir matrix, \(N_{max}=Y_{max}\times P_{max}\), where \(Y_{max}\) is the dimension of the \(Y\) vector of the dataset and \(P_{max}\) is the dimension of the vector \(Sh\) (the first layer) of the output classifier; we used \(P_{max}=25\) in our work [29]. For Dataset 1, \(N_{max}=(784+1)\times 25=19{,}625\), and for Dataset 2, \(N_{max}=(51+1)\times 25=1300\).
In our earlier work, six main methods for filling the reservoir were explored, detailed in [29]. The following conventions apply to methods M1...M6: M1\(-\)Row-wise filling with duplication; M2\(-\)Row-wise filling with an additional zero element; M3\(-\)Row-wise filling with time series stretching; M4\(-\)Column-wise filling with duplication; M5\(-\)Column-wise filling with an additional zero element (see Figure 3); M6\(-\)Column-wise filling with time series stretching.
Stage 2: Dataset 1 or Dataset 2 is selected; the classification metric will be calculated on its basis.
Stage 3: The \(Y\) vector is formed from the dataset, including a zero offset \(Y[0]=1\). For Dataset 1 the vector dimension is \(Y_{max}=784+1\); for Dataset 2, \(Y_{max}=51+1\).
Stage 4: The \(Y\) vector goes through the normalization stage in stage 4. For Dataset 1, this is the division of all elements (except \(Y[0]\)) by 255. Based on the maximum and minimum values in the original database, \(Y\) is normalized for Dataset 2.
Stage 5: In stage 5, the vector \(Y\) is transformed into a vector \(Sh\) by multiplying the reservoir matrix with the input vector, \(Sh=W\times Y\) (see Algorithm 1).
Stage 6: In stage 6, the vector \(Sh\) enters the input layer of the classifier, with dimension \(P_{max}=25\).
Stage 7: The vector \(Sh\) goes through the normalization stage, as described in Algorithm 1. A comparison of the executable code in Delphi (Algorithm 1a) and Python (Algorithm 1b, which uses vectorized calculations) is given below. In this code, \(Sh\_min\), \(Sh\_max\), and \(Sh\_mean\) [27] are the minimum, maximum, and average values of the vector \(Sh\) calculated across the entire database.
Stage 8: A single-layer output classifier with ten neurons for Dataset 1 and two neurons for Dataset 2 is used (see Figure 2). An activation function based on logistic regression was used.
Stage 9-Stage 10: The training and testing phases of the neural network are shown in stages 9-10. The training was carried out using the backpropagation method, with a variable number of epochs (\(Ep\)). The value of \(Ep\) is a parameter of the entropy function.
Stage 11: In accordance with Equation (6), the classification metric is transformed linearly into the NNetEn entropy using one of the three options.
```
(a) Delphi:

for j := 1 to P_max do
begin
  Sh[j] := 0;
  for i := 0 to Y_max do
    Sh[j] := Sh[j] + Y[i] * W1[i,j];
  if (Sh_max[j] - Sh_min[j]) <> 0 then
    Sh[j] := ((Sh[j] - Sh_min[j]) / (Sh_max[j] - Sh_min[j]) - 0.5) - Sh_mean[j];
end;

(b) Python:

Sh = np.sum(np.multiply(Y, W), axis=1)
Sh = np.divide(Sh - Sh_min, Sh_max - Sh_min) - 0.5 - Sh_mean
```
Figure 3: Main steps of NNetEn calculation.
### Entropy Settings Options in Python
The calculation for SampEn, CoSiEn, FuzzyEn, PhaseEn, DistEn, BubbleEn, GridEn, IncEn and AttnEn entropy measures was implemented using the EntropyHub [47] software package. SVDEn and PermEn were calculated using the Antropy [48] software package.
The following parameters were used to calculate the entropies:
* SVDEn (\(m=2\), _delay_ = 1);
* PermEn (\(m=4\), _delay_ = 2);
* SampEn (\(m=2\), \(r=0.2\cdot d\), \(\tau=1\)), where \(d\) is the standard deviation of the time series;
* CoSiEn (\(m=3\), \(r=0.1\), \(\tau=1\));
* FuzzyEn (\(m=1\), \(r=0.2\cdot d\), \(n=3\), \(\tau=1\)), where \(d\) is the standard deviation of the time series and \(n\) is the fuzzy membership function's exponent;
* PhaseEn (\(K=3\), \(\tau=2\));
* DistEn (\(m=3\), _bins_ = 100, \(\tau=1\));
* BubbleEn (\(m=6\), \(\tau=1\));
* GridEn (\(m=10\), \(\tau=1\));
* IncEn (\(m=4\), \(q=6\), \(\tau=1\));
* AttnEn has no parameters;
The following specifiers are included in parentheses for NNetEn settings:
NNetEn (database (D1 -- Dataset 1 or D2 -- Dataset 2), database usage fraction \(\mu\), reservoir filling method (M1--M6), number of epochs (\(Ep\)), classification metric type (Equation (6))). For example, NNetEn(D1, 1, M1, Ep5, R2E) uses Dataset 1 with \(\mu=1\) (the full base), reservoir filling method M1, \(Ep=5\) epochs, and the R2 Efficiency classification metric.
There are 72 gradations of settings for Dataset 1 and Dataset 2.
Nset settings are numbered according to the following formula:
\[Nset=(m_{1}-1)\cdot 24+(m_{2}-1)\cdot 4+m_{3}, \tag{7}\]
\[m_{1}=\begin{cases}1,&\text{if Metric}=\text{R2 Efficiency}\\ 2,&\text{if Metric}=\text{Pearson Efficiency}\\ 3,&\text{if Metric}=\text{Accuracy}\end{cases}\qquad m_{3}=\begin{cases}1,&\text{if }Ep=1\\ 2,&\text{if }Ep=5\\ 3,&\text{if }Ep=20\\ 4,&\text{if }Ep=100\end{cases}\]
\[m_{2}=1,\ldots,6,\ \text{the number of the reservoir filling method M1},\ldots,\text{M6}\]
Each group of settings _Nset_ = 1,..., 24 uses the R2E metric, _Nset_ = 25,..., 48 the Pearson Efficiency metric, and _Nset_ = 49,..., 72 the Accuracy metric. The three groups are further divided into six subgroups by reservoir filling method and then by the number of epochs \(Ep\).
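Equation (7) is straightforward to implement; the following sketch (a hypothetical helper of ours) maps a settings triple to its _Nset_ number:

```
def nset(metric, method, epochs):
    # metric in {'R2E', 'PE', 'Acc'}; method in 1..6 (M1..M6);
    # epochs in {1, 5, 20, 100}; returns Nset per Eq. (7).
    m1 = {'R2E': 1, 'PE': 2, 'Acc': 3}[metric]
    m3 = {1: 1, 5: 2, 20: 3, 100: 4}[epochs]
    return (m1 - 1) * 24 + (method - 1) * 4 + m3

assert nset('R2E', 1, 1) == 1 and nset('Acc', 6, 100) == 72
```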
### Generation of Synthetic Time Series
To generate synthetic time series, we used the discrete chaotic sine map [26]:
\[x_{n+1}=r\cdot\sin(\pi\cdot x_{n}),\ \ 0.7\leq r\leq 2,\ \ x_{-999}=0.1, \tag{8}\]
The first 1000 elements are ignored due to the transient period; the NNetEn measure is calculated for \(x_{n}\) with \(n>0\). Each series contains \(N=300\) elements. To generate a class corresponding to one value of \(r\), 100 time series were generated. Elements in each series were calculated sequentially by Equation (8): \((x_{1},\ldots,x_{300})\), \((x_{301},\ldots,x_{600})\), etc. Figure 4a shows an example of a bifurcation diagram for the sine map.
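A minimal sketch of this generation procedure (our code, not the paper's generator) is:

```
import numpy as np

def sine_map_series(r, n_series=100, length=300, transient=1000, x0=0.1):
    # Iterate Eq. (8), discard the transient, then cut consecutive
    # non-overlapping segments of `length` elements as class samples.
    x = x0
    for _ in range(transient):
        x = r * np.sin(np.pi * x)
    out = np.empty((n_series, length))
    for s in range(n_series):
        for i in range(length):
            x = r * np.sin(np.pi * x)
            out[s, i] = x
    return out

class_a1 = sine_map_series(r=1.1918)   # pair A, first class
class_a2 = sine_map_series(r=1.2243)   # pair A, second class
```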
Figure 4b shows the NNetEn dependencies on the parameter \(r\) (black for D2 and green for D1), along with an example of SampEn (red). Sections with regular oscillations show decreased entropy values, while those with chaotic oscillations show increased entropy values. It is evident that the forms of the dependencies differ greatly.
We computed the NNetEn measure for pair A (\(r=1.1918\) and \(r=1.2243\)) and pair B (\(r=1.7161\) and \(r=1.7551\)). Their mutual arrangement is shown in Figure 4b. Figure 5 shows examples of signals for pair A, and Figure 6 shows examples for pair B. As can be seen, pair A consists of chaotic oscillations, and it is difficult to distinguish the signals visually in terms of the amount of randomness and the differences in dynamics between them. In pair B, the differences between the signals are more pronounced: compared to the periodic signal with \(r=1.7161\), the chaotic signal with \(r=1.7551\) changes its amplitude chaotically, mostly within the same region. For each class, the mean entropy value NNetEn\({}_{\text{av}}\) was determined for these settings, as well as the standard deviation \(S\).
Figure 4: (**a**) Bifurcation diagrams for the sine map (Equation (8)); (**b**) the dependence of entropy on the parameter \(r\) for NNetEn and SampEn. Comparison of figures (**a**) and (**b**) shows that the ranges of \(r\) with regular oscillations have low entropy values, while ranges with chaotic oscillations have high entropy values.
### Signal Separation Metrics
#### 2.6.1 Statistical Analysis of Class Separation
In order to assess the degree of class separation, analysis of variance (ANOVA) was performed, with a significance level of \(p<0.05\). ANOVA allows us to compare the variances of the means of different groups [45]. The F-ratio measures the ratio of the variance between groups to the variance within groups. The entropies of each signal within a class were calculated, and then the F-ratio between the classes of pair A and pair B of synthetic signals, as well as between the control group and the group of patients with Alzheimer's disease (Section 3.4), was calculated. The larger the F-ratio, the stronger the separation between two classes of signals.
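With SciPy, the F-ratio between two classes of entropy values can be obtained directly (a sketch; `ent_a` and `ent_b` are hypothetical arrays of per-signal entropy values):

```
from scipy.stats import f_oneway

def f_ratio(ent_a, ent_b):
    # One-way ANOVA: returns the F-ratio and its p-value; a larger
    # F-ratio means stronger separation between the two classes.
    stat, p = f_oneway(ent_a, ent_b)
    return stat, p
```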
#### 2.6.2 Calculation of the Accuracy of Signal Classification by Entropy Features
Figure 5: Examples of signals \((x_{1},\ldots,x_{200})\) for pair A (\(r=1.1918\) and \(r=1.2243\)).

Figure 6: Examples of signals \((x_{1},\ldots,x_{200})\) for pair B (\(r=1.7161\) and \(r=1.7551\)).
A Support Vector Machine (SVM) was used to classify the signals using one and two entropy features. The classification metric was evaluated in two steps.
* In the first stage, hyperparameters were selected using Repeated K-Fold cross-validation (RKF). To accomplish this, the original dataset was divided into \(K=5\) folds in \(N=5\) ways, so that the folds of each of the \(N\) partition variants were filled with different samples. In addition, the class distribution in each fold approximated the distribution in the original dataset. The classifier hyperparameters were then selected to maximize the average accuracy of the classifier on the validation sets. Because a large number of training and validation sets is used, repeated K-Fold cross-validation helps to minimize overfitting of the model.
* As a second step, we used the hyperparameter values obtained in the first stage and performed RKF cross-validation (\(K=5\)) in a similar manner, but with different \(N=10\) partitions. The resulting \(A_{\text{RKF}}\) score is calculated by averaging the scores over the \(N\) partitions. We used \(A_{\text{RKF}}\) as the metric to assess signal separation; a minimal sketch of this stage is given after this list.
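A minimal scikit-learn sketch of the second stage (stratified folds approximate the original class distribution; `a_rkf` is our hypothetical helper, not the paper's exact pipeline):

```
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC

def a_rkf(X, y, n_splits=5, n_repeats=10, **svm_params):
    # Mean SVM accuracy over repeated stratified K-fold partitions;
    # svm_params are assumed to come from the first-stage search.
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=0)
    scores = cross_val_score(SVC(**svm_params), X, y, cv=cv)
    return scores.mean()
```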
#### 2.6.3 Synergy Effect Metric
By combining entropy features during signal separation, the classification accuracy \(A_{\text{RKF}}\) can become higher than when each feature is used individually. The synergistic effect coefficient \(K_{syn}\) was estimated using the formula below.
\[K_{syn}=\frac{1-MAX\left(A_{RKF}\left[\text{Entropy1}\right],\,A_{RKF}\left[\text{Entropy2}\right]\right)}{1.001-A_{RKF}\left[\text{Entropy1},\text{Entropy2}\right]} \tag{9}\]
where \(A_{RKF}\)[Entropy1] is the classification accuracy using the single entropy feature Entropy1, \(A_{RKF}\)[Entropy1, Entropy2] is the classification accuracy using the two entropy features, and \(MAX\) is the function selecting the maximum value.
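Equation (9) in code form (a sketch; the example values below are taken from Section 3.2.2):

```
def k_syn(a1, a2, a12):
    # a1, a2: single-feature A_RKF accuracies; a12: two-feature accuracy.
    return (1.0 - max(a1, a2)) / (1.001 - a12)

print(k_syn(0.837, 0.8845, 0.9145))  # NNetEn(D1), SampEn, combined
```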
### Python Package for NNetEn Calculation
#### 2.7.1 General Requirements
The following libraries were used to implement the NNetEn algorithm in Python:
* NumPy
* Numba
NumPy contains all the necessary mathematical operations for matrices and arrays. This library offers optimization of operations and vectorization of calculations. The Numba compiler library converts Python functions into machine code at runtime using the industry-standard LLVM (Low-Level Virtual Machine) library [49]. Python version 3.7 or later is required to run the algorithm. There are four main blocks in the Python algorithm for calculating NNetEn, including those described in Section 2.3:
Stage 1 (Block 1): Reading time series from the input data files.
Stage 2-Stage 4 (Block 2): An instance of the NNetEn_entropy class is created, containing normalized, training, and test sets for the LogNNet neural network. The data at these stages are prepared for further analysis, as Datasets 1 and Datasets 2 have different formats. The MNIST dataset consists of four binary files: a training and test set of images, and a training and test set of labels. Training and test data are contained in one file in the SARS-CoV-2-RBV1 dataset. In addition, the \(\mu\) parameter (database usage fraction) is passed here.
Stages 5-10 (Block 3): Calculation of classification accuracy by neural networks LogNNet. Several parameters need to be passed to the NNetEn_calculation function, including the reservoir formation method (M1,..., M6), the number of neural networks training epochs, and the metric calculation algorithm.
Stage 11 (Block 4): NNetEn entropy calculation with parameters written to log file. The log file contains the following data: the timestamp when the recording was made, the
NNetEn value, the number of epochs, the size of the reservoir matrix \(W\), \(\mu\), and the length of the time series. The format makes it easier to analyze the data in the future.
#### 2.7.2 Function Syntax
The package is installed from the PyPI repository using the following command (Listing 1).
```
pip install NNetEn
```
An instance of the NNetEn_entropy class is created by two commands (Listing 2).
```
from NNetEn import NNetEn_entropy

NNetEn = NNetEn_entropy(database='D1', mu=1)
```
Arguments:
database -- (default = 'D1') selects the dataset: D1 -- MNIST, D2 -- SARS-CoV-2-RBV1.

mu -- (default = 1) usage fraction \(\mu\) of the selected dataset (0.01,..., 1).

Output: an NNetEn_entropy object containing the normalized training and test sets used by the LogNNet neural network model.
To call the calculation function, one command is used (Listing 3).
```
value = NNetEn.calculation(time_series, epoch=20, method=3, metric='Acc', log=False)
```
Arguments:
time_series -- input data with a time series in NumPy array format.

epoch -- (default = 20) the number of training epochs for the LogNNet neural network (greater than 0).

method -- (default = 3) one of the 6 methods M1,..., M6 for forming the reservoir matrix from the time series.

metric -- (default = 'Acc') the neural network testing metric, see Section 2.2. Options: metric = 'Acc', metric = 'R2E', metric = 'PE' (see Equation (6)).

log -- (default = False) parameter for logging the main data used in the calculation; recording is done in the log.txt file.
Output: NNetEn entropy value.
The source code of the Python package is stored at [https://github.com/izotov93/NNetEn](https://github.com/izotov93/NNetEn) (accessed on 26 April 2023), and an example of the calculation is presented in the Supplementary Materials.
## 3 Numerical Results and Discussion
### Separation of Synthetic Signals
In Figure 7, statistics are calculated for the two signals obtained by choosing \(r=1.1918\) and \(r=1.2243\) in Equation (8) (pair A). We present the dependence of the mean entropy value NNetEn\({}_{\text{av}}\) (with standard deviation \(S\)) on the _Nset_ setting number for Dataset 1 (Figure 7a, left axis) and Dataset 2 (Figure 7b, left axis). Furthermore, F-ratios are shown by the blue diagrams against the right vertical axis. Results for pair B (\(r=1.7161\) and \(r=1.7551\)) are shown in Figure 8.
There are several observations that can be made from the above results.
In general, the F-ratio varies with the setting number. For pair A, it reaches maximum values of \(\sim\)124 (Dataset 1, _Nset_ = 34) and \(\sim\)30 (Dataset 2, _Nset_ = 12, 36, 60). Therefore, Dataset 1 shows a larger separation capacity for the given pair of signals. For Dataset 1, higher F-ratio values correspond to the reservoir filling methods M1 and M3; for Dataset 2, to methods M1, M2, M3 and M5.
The higher the F-ratio, the more the NNetEn\({}_{\text{av}}\) values differ between the signals in the pair. This is clearly seen for pair A in insert 1 (_Nset_ = 9), insert 2 (_Nset_ = 12), and insert 3 (_Nset_ = 34) when zooming in (see Figure 7a). NNetEn\({}_{\text{av}}\) shows the biggest difference, with a low \(S\) value, at _Nset_ = 34. For pair B, this pattern is clearly visible, for example, at _Nset_ = 28 and _Nset_ = 29 (Figure 8a).
Figure 8: Dependences of the average entropy value NNetEn\({}_{\text{av}}\) (indicating the standard deviation \(S\)) and the F-ratio on the _Nset_ setting number for Dataset 1 (**a**) and Dataset 2 (**b**). Pair B signals.
As a result of the large difference in NNetEn\({}_{\text{av}}\) for pair B, the F-ratios are extremely large, \(\sim\)10\({}^{6}\).
For both Dataset 1 and Dataset 2, increases in \(Ep\) lead to increases in NNetEn\({}_{\text{av}}\): more training epochs improve the network's database recognition efficiency, which is what NNetEn measures (Equation (6)). In addition, \(S\) decreases with increasing epochs, i.e., the entropy of a given signal approaches a stationary value.
For some settings the entropy of one signal exceeds that of the other, while for other settings the ratio is reversed. Thus, Figure 7b for Dataset 2 shows settings where NNetEn\({}_{\text{av}}\)(\(r=1.2243\)) is greater than NNetEn\({}_{\text{av}}\)(\(r=1.1918\)) (for example, _Nset_ = 9, see insert 1), and settings where NNetEn\({}_{\text{av}}\)(\(r=1.2243\)) is less than NNetEn\({}_{\text{av}}\)(\(r=1.1918\)) (for example, _Nset_ = 12, see insert 2). Similarly, Figure 8a for Dataset 1 shows a change in the ratios. Such changes can be beneficial when applying combinations of entropies (see Section 3.2).
The entropy metrics (Accuracy, Pearson Efficiency, and R2 Efficiency) generally performed similarly, but F-ratios varied depending on the pair of signals. The Pearson Efficiency metric was in the lead for pair A and base D1 (Figure 7a), while R2 Efficiency led for pair B and base D1 (Figure 8a).
### Entropy Combinations
The previous section showed that NNetEn by itself can be a strong feature for separating pairs of signals. However, combinations of entropies should produce the greatest effect. In the following paragraphs, we discuss the results for two features, whose simplest combination is their difference. The more challenging pair A was used for signal separation.
#### 3.2.1 Entropy Difference NNetEn as a Feature for Signal Separation
Consider the relationship between the F-ratio and the difference of entropies. In Figure 9, the F-ratio distributions are shown for three variants of the difference: NNetEn(D1)−NNetEn(D1) (Figure 9a), NNetEn(D2)−NNetEn(D2) (Figure 9b), and NNetEn(D2)−NNetEn(D1) (Figure 9c). For the same time series, the entropy difference was computed with different settings.
In Figure 9a, using only Dataset 1, the maximum F-ratio (~127) is only 3 points higher than when using a single feature. In Figure 9b, using only Dataset 2, the F-ratio reaches a maximum of 50, which is 20 points higher than with a single feature. When both datasets are combined (Figure 9c), the F-ratio reaches 140, 13 points higher than all previous options. A feature difference can therefore be even more powerful than an individual feature. This can be explained by a change in the relation between the entropies. For example, for pair A and Dataset 2 (Figure 7b), at _Nset_ = 9 we have NNetEn\({}_{\text{av}}\)(\(r=1.1918\)) \(<\) NNetEn\({}_{\text{av}}\)(\(r=1.2243\)), while at _Nset_ = 12 NNetEn\({}_{\text{av}}\)(\(r=1.1918\)) \(>\) NNetEn\({}_{\text{av}}\)(\(r=1.2243\)); therefore, the entropy difference at these settings changes more strongly from class to class. As a result, we see the maximum F-ratio at the point (9,12) in Figure 9b.

Figure 9: F-ratio distribution for three variants of entropy differences: NNetEn(D1)−NNetEn(D1) (**a**), NNetEn(D2)−NNetEn(D2) (**b**), NNetEn(D2)−NNetEn(D1) (**c**). The figure demonstrates the relationship between the F-ratio and the difference of entropies. The entropy difference under certain settings can have high efficiency (F-ratio) when used as a feature.
#### 3.2.2 NNetEn as a Paired Feature in Signal Classification
The problem of separating signals is addressed via the classification accuracy \(A_{\text{RKF}}\) described in Section 2.6.2. Figure 10a shows the \(A_{\text{RKF}}\) dependencies using the NNetEn(D1) and NNetEn(D2) features separately. NNetEn(D1) yields a higher classification accuracy than NNetEn(D2), reaching a maximum of \(A_{\text{RKF}}\approx 0.837\) at _Nset_ = 34. This is consistent with Figure 7, where the F-ratio also reached its maximum at _Nset_ = 34, with Dataset 1 showing higher values than Dataset 2.
We used SampEn(\(m=2\), \(r=0.2\cdot d\)) together with NNetEn(D1) and NNetEn(D2). According to Figure 10b, SampEn alone gives a recognition accuracy of \(A_{\text{RKF}}=0.8845\). By combining entropies, \(A_{\text{RKF}}\) can either increase, reaching values of \(\sim\)0.9145, or decrease to \(\sim\)0.8735. The combination of SampEn and NNetEn can thus significantly increase classification accuracy. Figure 10c gives a quantitative assessment of the synergistic effect: the highest \(K_{syn}\) values for the D1 base are achieved at _Nset_ = 2, 27, 35, 50, 59, setting numbers that correspond to the M1 and M3 reservoir filling methods. A two-dimensional diagram for the combination [NNetEn(D1, _Nset_ = 2), SampEn] is shown in Figure 11. There is a selective extrusion of points along the ordinate axis, corresponding to the NNetEn(D1, _Nset_ = 2) feature, which allows the classes to be separated more clearly. In the figure, a slanted blue line indicates a conditional separation between the classes; it separates the classes better than a vertical or horizontal line would, which demonstrates the effectiveness of using NNetEn and SampEn in pairs.

Figure 10: Classification accuracy \(A_{\text{RKF}}\) dependences using the NNetEn(D1) and NNetEn(D2) features separately (**a**), their combination with SampEn (**b**), and the dependence of the synergistic effect coefficient (**c**). The figure shows the efficiency of using NNetEn as a feature paired with SampEn.
### Dependence of F-Ratio and Algorithm Speed on Dataset Size
To vary the size of the dataset, the parameter \(\mu\) (0.01,..., 1) was introduced; it determines the fraction of the database used. Figure 12 shows the dependences of the F-ratio and of the calculation time of one time series on \(\mu\).
Figure 11: Feature combination diagram [NNetEn(D1, _Nset_ = 2), SampEn(\(m\) = 2, \(r\) = 0.2d)]. The figure demonstrates the effectiveness of NNetEn and SampEn pairing, as a slanted blue line separates classes better than a vertical or horizontal one.
Figure 12: Dependences of F-ratio and calculation time of one time series on \(\mu\). The figure shows that decreasing \(\mu\) can significantly reduce entropy calculation time. For some settings, time is reduced without significant loss in classification accuracy (F-ratio).
NNetEn with Dataset 1 shows a gradual decrease in F-ratio for \(\mu>0.2\) and a sharp decrease for \(\mu<0.2\). Dataset 2 already has low F-ratio values for \(\mu<0.95\), with some local maxima. Because of the different database sizes, the algorithm for Dataset 2 takes approximately an order of magnitude less time than for Dataset 1. Calculation time decreases with decreasing \(\mu\). At the F-ratio level of 30 and an average calculation time of 0.2 s, the datasets have approximately the same efficiency with respect to time costs. If we compare the time costs at the F-ratio level of ~15, then using Dataset 2 speeds up calculations by an order of magnitude, due to the presence of local maxima at a low value of \(\mu\approx 0.03\), where the calculation time for NNetEn(D1) is ~0.2 s and for NNetEn(D2) ~0.02 s.
In Figure 13, the dependences of NNetEn\({}_{\text{av}}\)(\(r1\)), the difference (NNetEn\({}_{\text{av}}\)(\(r2\)) − NNetEn\({}_{\text{av}}\)(\(r1\))), and the standard deviation \(S(r1)\) are plotted to reveal the reason for the sharp change in F-ratio. Here, \(r1=1.1918\) and \(r2=1.2243\). For Dataset 1 at \(\mu>0.2\), the standard deviation stays at approximately the same level and the entropy difference slowly decreases. At \(\mu<0.2\), there is a sharp increase in \(S(r1)\) and a sharp decrease in the entropy difference. For Dataset 2, the sharp decrease in F-ratio at \(\mu=0.95\) is associated with a sharp decrease in the entropy difference, which changes its sign for the first time at \(\mu=0.8\). At \(\mu<0.2\) for Dataset 2, there is a sharp increase in both the entropy difference and the standard deviation.
### EEG Signal Separation
#### 3.4.1 Selection of the Most Informative Component of the EEG Signal
As described in Section 2.1, this work utilizes Dataset 3 to study EEG signal separation. Due to the complexity of the problem, it was decided to use only the records of patients with Alzheimer's disease (36 people) and of the control group (29 healthy people). Because the number of patients in these records is small, six non-overlapping segments were selected from each record, each representing an independent observation. Each segment lasted 10 s. As the original dataset contains time stamps indicating unwanted patient activity (muscle movements, swallowing, blinking), the segments were selected to avoid overlap with the indicated activity periods.
The entropy of the raw EEG signal may separate the classes poorly, since the signal has a wide spectral range (0-250 Hz) while the main information about brain activity lies in a relatively narrow set of frequencies [50]. The wavelet transform can be used to decompose the EEG signal into separate frequency components and increase its information content. A 5th-order digital Butterworth filter was used to pre-filter the signal (lower cutoff frequency 0.5 Hz, upper cutoff frequency 32 Hz) [51]. The discrete wavelet transform (DWT) of the filtered signal was then computed using the \(db\)4 wavelet with a six-level decomposition.
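For illustration, a minimal sketch of this pre-processing pipeline, assuming a sampling rate of 500 Hz (the rate is not stated in this section) and using SciPy and PyWavelets; `x_eeg` is a placeholder signal standing in for one EEG channel.

```
import numpy as np
from scipy.signal import butter, sosfiltfilt
import pywt

fs = 500.0                       # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)     # one 10-s EEG segment
x_eeg = np.random.randn(t.size)  # placeholder signal

# 5th-order Butterworth band-pass filter, 0.5-32 Hz.
sos = butter(5, [0.5, 32.0], btype="bandpass", fs=fs, output="sos")
x_filt = sosfiltfilt(sos, x_eeg)

# Six-level discrete wavelet transform with the db4 wavelet.
# wavedec returns [A6, D6, D5, D4, D3, D2, D1].
A6, D6, D5, D4, D3, D2, D1 = pywt.wavedec(x_filt, "db4", level=6)
# (intermediate approximations A1-A5 can be obtained with pywt.downcoef)
```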
A total of 14 options were considered to select the most informative component of the signal:
1: The original unfiltered signal \(X_{\text{EEG}}(t)\).
2: The filtered signal \(X_{\text{EEG}}(t)\).
3-8: Approximation coefficients (A1-A6) of the wavelet transform of the filtered signal.
9-14: Detail coefficients (D1-D6) of the wavelet transform of the filtered signal.
Then, 14 datasets were created, each containing 390 records (65 patients, 6 segments each) and 19 features (the SVDEn (\(m\) = 2, _delay_ = 1) value for each of the 19 channels), followed by a class label: 0 for a control-group patient and 1 for a patient with Alzheimer's disease. In order to determine the informativeness of the signals, an analysis of variance (ANOVA) was conducted with a significance level of \(p=0.05\). The maximum F-ratio value over all channels (\(F_{m}\)) was used to select the most informative signal type.
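A sketch of this per-channel ANOVA ranking, assuming a feature matrix of one SVDEn value per channel; `f_oneway` from SciPy performs the one-way analysis of variance, and the placeholder arrays stand in for the real datasets.

```
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
X = rng.normal(size=(390, 19))    # placeholder SVDEn features (390 records, 19 channels)
y = rng.integers(0, 2, size=390)  # placeholder labels: 0 = control, 1 = Alzheimer's

f_ratios, p_values = [], []
for ch in range(X.shape[1]):
    F, p = f_oneway(X[y == 0, ch], X[y == 1, ch])
    f_ratios.append(F)
    p_values.append(p)

F_m = max(f_ratios)                    # maximum F-ratio over the 19 channels
best_channel = int(np.argmax(f_ratios))
```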
Figure 14 shows the \(F_{m}\) values for all signal types. Among the filtered signals, the approximation coefficient A3 (0-32.5 Hz) is the most informative. The coefficients A1, A2 and A4, as well as \(X_{\text{EEG}}(t)\), also have relatively high \(F_{m}\) values. Among the detail coefficients, D4 (16.25-32.5 Hz) is the most informative. For the coefficients A6, D1 and D2, the \(p\)-values exceed the significance level (\(p=0.05\)), so SVDEn does not allow us to state that the EEG signals of healthy and sick patients differ.
#### 3.4.2 Influence of the Entropy Calculation Method on the Separation of EEG Signals
In order to determine the optimal method for calculating entropy from the A3 coefficients, the following entropies were tested: SVDEn, PermEn, SampEn, CoSiEn, FuzzyEn, DistEn, PhaseEn, BubbleEn, AttnEn, IncEn, NNetEn. The \(F_{m}\) values for the various entropies are shown in Figure 15; for each entropy, the parameters were chosen so as to maximize \(F_{m}\). FuzzyEn produces the best result, with \(F_{m}\) = 140. All three methods that take into consideration vectors similar to the embedded matrix (SampEn, CoSiEn, FuzzyEn) yield relatively high results. The worst result comes from DistEn, with \(F_{m}\) = 6.8. NNetEn, with \(F_{m}\) = 40.29, provides a sufficiently good separation of the signals of healthy and sick patients.
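For reference, a plain NumPy implementation of SampEn, one of the better-performing measures above. Conventions for template counting vary slightly between implementations, so this is an illustrative sketch rather than the exact code used in the comparison.

```
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D series; r defaults to 0.2 * std, as above."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)

    def match_count(mm):
        # All overlapping templates of length mm; both lengths use the same
        # number of templates (N - m), following the usual convention.
        n = len(x) - m
        templates = np.array([x[i:i + mm] for i in range(n)])
        count = 0
        for i in range(n):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1   # exclude the self-match
        return count

    B = match_count(m)      # matches of length m
    A = match_count(m + 1)  # matches of length m + 1
    return -np.log(A / B)   # undefined (A = 0) for very short/noisy series

print(sample_entropy(np.sin(0.1 * np.arange(1000)), m=2, r=0.2))
```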
Figure 14: Distribution of \(F_{m}\) values for various types of signals calculated using the SVDEn feature. \(F_{m}\)—maximum F-ratio values for all 19 channels. The figure reveals the most informative types of signals.
In Figure 16, the best F-ratio values for different entropy methods are shown for each channel. It was found that channel number 10 (Cz) was the most significant among the channels processed. Additionally, good separation can be observed in channels 7 (F8), 9 (C3), 15 (Pz), and 16 (P4). On channels 5, 11, 12, 13, and 18, NNetEn ranks among the leaders in terms of feature strength, and on channel 11, it shows the best results.
In Figure 17, F-ratio distributions for Dataset 1 (Figure 17a) and Dataset 2 (Figure 17b) are shown based on the settings number and channel. The best reservoir filling methods for Dataset 1 are M3, M4, and M5, and for Dataset 2 M1, M3, and M4. In Dataset 1, channels 7, 8, 10, 11, 12, and 15 provide the best signal recognition, while in Dataset 2, channels 7, 9, 10, 13, 15, and 16 provide the best signal recognition.
Figure 15: Distribution of the \(F_{m}\) value for the various entropy calculation methods using the A3 coefficients. The figure shows the ratio of the entropies' efficiencies.
Figure 16: Distribution of the F-ratio value for the various entropy calculation methods using the A3 coefficients. The figure shows the performance of NNetEn as a feature for different channels.
In terms of a single feature, NNetEn generally performed quite well in comparison with the other entropies. The synergistic effect of the standard entropies paired with NNetEn(D1, 1, \(N_{set}\) = 60) is shown in Figure 18. A synergistic effect was observed when almost all entropies were paired with NNetEn, increasing the total \(A_{\text{SVM}}\) value (Figure 18a). The dependence of \(K_{\text{syn}}\) on the pair type can be seen in Figure 18b. The strongest synergistic effects were observed for IncEn, AttnEn, SampEn, and SVDEn.
Figure 17: F-ratio distribution depending on the setting number and channel for Dataset 1 (**a**) and Dataset 2 (**b**). The most effective settings are in red.
### Features of the Python Implementation of the NNetEn Algorithm
Python is an interpreted language, so calculations take a long time compared with pre-compiled programs such as those written in Delphi. The open-source Numba project provides a just-in-time (JIT) compiler that speeds up the execution of Python source code: functions decorated with Numba are compiled into machine code "just-in-time", at their first call, and subsequently run at native machine-code speed. Using Numba made it possible to reduce the time required to calculate entropy in Python. A summary of the average duration of the network training stage (Stage 9) for NNetEn(D1, 1, Ep5, M2, Acc) is shown in Table 2.
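A minimal illustration of this pattern; the function body below is a stand-in for the inner training loop, not the actual NNetEn code.

```
import numpy as np
from numba import njit

@njit(cache=True)
def train_epoch(weights, inputs):
    # Stand-in for a LogNNet-style inner loop (Stage 9): plain nested loops
    # that Numba compiles to machine code on the first call.
    out = np.zeros(weights.shape[0])
    for i in range(weights.shape[0]):
        for j in range(inputs.shape[0]):
            out[i] += weights[i, j] * inputs[j]
    return out

w = np.random.rand(25, 196)
x = np.random.rand(196)
train_epoch(w, x)   # compiled on first call, then cached and fast
```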
Table 2 shows that the calculation of NNetEn entropy without Numba takes 11.5 s, while the calculation time with the Numba compiler is 3.7 s. This execution speed is even faster than that of the Delphi program, indicating a near-optimal execution speed; the Numba-compiled code likely outperforms Delphi owing to its use of vector operations and caching.
## 4 Conclusions
Feature extraction is a key step in time-series analysis and classification. Entropy measures are well-known time-series features that are widely described in the literature. Traditional entropy measures are not appropriate for short and noisy time series; moreover, they are sensitive to their initial parameters, which leads to inaccurate results. NNetEn overcomes these difficulties and has been applied successfully to practical problems. However, no functional modules for calculating NNetEn existed in common programming languages. To overcome this shortcoming, we implemented the NNetEn algorithm in Python. To increase the speed of the calculations, the SARS-CoV-2-RBV1 dataset is used as the input data instead of the MNIST-10 dataset. Based on the NNetEn algorithm, three neural-network entropy metrics are provided: NNetEn, R2 Efficiency and Pearson Efficiency. The sine chaotic map, ANOVA and the F-score are used to investigate the efficiency of the new metrics in distinguishing between different signals. The results demonstrate that the new metrics are an efficient tool for distinguishing between time series. The EEG classification of patients with Alzheimer's disease (36 people) and a control group (29 healthy people) is presented as a practical application.
\begin{table}
\begin{tabular}{l c} \hline \hline Implementation & Average training time, s \\ \hline Python without Numba & 11.5 \\ Python with Numba & 3.7 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average durations of the network training stage (Stage 9) for the Python implementation, with and without the Numba compiler (\(\mu=1\)).
Figure 18: Synergistic effect of the standard entropies paired with NNetEn(D1, 1, \(N_{set}\) = 60), shown for the accuracy \(A_{\text{SVM}}\) (**a**) and the coefficient \(K_{\text{syn}}\) (**b**). The figure demonstrates the increase in classification efficiency when NNetEn is used as a feature paired with other entropies.
The SVM algorithm is used for the classification. For some channels, NNetEn ranks among the leaders in terms of feature strength, and on channel 11 it demonstrates the best results. Our computations confirm that combining classical entropy measures with the NNetEn method provides more accurate results, demonstrating a synergistic effect that increases classification accuracy when traditional entropy measures and NNetEn are applied conjointly.
In practice, it is important to understand which entropies to use for signal separation and classification; however, each case requires its own parameters and entropy types.
NNetEn is the first entropy measure that does not use a probability distribution function, and it provides useful information about the signal in classification problems. For each individual task, a specific entropy measure must be tested, as it is difficult to predict how effective the measure will be in a given setting. As the examples presented show, entropy depends in a complex way on the calculation parameters, and useful information can be extracted from changes in entropy. Signals with similar dynamics commonly have different entropy relations depending on the parameters. Implementing NNetEn in Python will allow the scientific community to apply the algorithm and to identify the class of problems that NNetEn solves effectively. The review of previous studies demonstrates NNetEn's success in practical problems, and NNetEn is further illustrated here with a practical example: in terms of feature strength, it ranks among the leaders for the classification of EEG signals from certain channels.
A challenge in filling the reservoir matrix is the large number of possible options. In Section 2.3, we introduced six basic filling methods that affect the obtained results; the filling method is therefore treated as an input parameter of NNetEn. Figure 1 shows the general concept of the NNetEn calculation without specifying the method, in contrast to the algorithm defined in Figure 2. The search for new types of reservoirs within the NNetEn concept can therefore be a subject of further research. However, the matrix reservoir model with the six filling methods can already be used in practical signal classification.
Parallelizing the NNetEn algorithm, applying the new metrics to different practical problems, and investigating the effect of the new metrics on different classification algorithms can be considered future research directions.
Although entropy is a measure of chaos, the results demonstrate that two signals can have different entropy ratios for different entropy parameters. The ability of entropy to sense subtle differences in signals, without tying it to the concept of chaos, is thus a feature of entropy functions. This may be the reason for the existence of many different methods and algorithms for calculating entropy, including the new algorithm presented in this study.
**Supplementary Materials:** Python package for NNetEn calculation involved in this study is publicly available on GitHub. [https://github.com/izotov93/NNetEn](https://github.com/izotov93/NNetEn) (accessed on 26 April 2023). Examples of NNetEn calculation (examples.zip).
**Author Contributions:** Conceptualization, A.V., M.B., Y.I., M.M. and H.H.; methodology, A.V. and M.B.; software, A.V., M.B. and Y.I.; validation, M.B.; formal analysis, M.M. and H.H.; investigation, A.V. and M.B.; resources, A.V.; data curation, A.V.; writing - original draft preparation, A.V., M.B., Y.I., M.M. and H.H.; writing - review and editing, A.V., M.B., Y.I., M.M. and H.H.; visualization, A.V., M.B. and Y.I.; supervision, A.V.; project administration, A.V.; funding acquisition, A.V. All authors have read and agreed to the published version of the manuscript.
**Funding:** This research was supported by the Russian Science Foundation (grant no. 22-11-00055, [https://rscf.ru/en/project/22-11-00055/](https://rscf.ru/en/project/22-11-00055/), accessed on 30 March 2023).
**Data Availability Statement:** The data used in this study can be shared with the parties, provided that the article is cited.
**Acknowledgments:** The authors express their gratitude to Andrei Rikkiev for valuable comments made in the course of the article's translation and revision. Special thanks to the editors of the journal and to the anonymous reviewers for their constructive criticism and improvement suggestions. |
2308.00143 | Formally Explaining Neural Networks within Reactive Systems | Deep neural networks (DNNs) are increasingly being used as controllers in
reactive systems. However, DNNs are highly opaque, which renders it difficult
to explain and justify their actions. To mitigate this issue, there has been a
surge of interest in explainable AI (XAI) techniques, capable of pinpointing
the input features that caused the DNN to act as it did. Existing XAI
techniques typically face two limitations: (i) they are heuristic, and do not
provide formal guarantees that the explanations are correct; and (ii) they
often apply to ``one-shot'' systems, where the DNN is invoked independently of
past invocations, as opposed to reactive systems. Here, we begin bridging this
gap, and propose a formal DNN-verification-based XAI technique for reasoning
about multi-step, reactive systems. We suggest methods for efficiently
calculating succinct explanations, by exploiting the system's transition
constraints in order to curtail the search space explored by the underlying
verifier. We evaluate our approach on two popular benchmarks from the domain of
automated navigation; and observe that our methods allow the efficient
computation of minimal and minimum explanations, significantly outperforming
the state of the art. We also demonstrate that our methods produce formal
explanations that are more reliable than competing, non-verification-based XAI
techniques. | Shahaf Bassan, Guy Amir, Davide Corsi, Idan Refaeli, Guy Katz | 2023-07-31T20:19:50Z | http://arxiv.org/abs/2308.00143v3 | # Formally Explaining Neural Networks
###### Abstract
Deep neural networks (DNNs) are increasingly being used as controllers in reactive systems. However, DNNs are highly opaque, which renders it difficult to explain and justify their actions. To mitigate this issue, there has been a surge of interest in explainable AI (XAI) techniques, capable of pinpointing the input features that caused the DNN to act as it did. Existing XAI techniques typically face two limitations: (i) they are heuristic, and do not provide formal guarantees that the explanations are correct; and (ii) they often apply to "one-shot" systems, where the DNN is invoked independently of past invocations, as opposed to reactive systems. Here, we begin bridging this gap, and propose a formal DNN-verification-based XAI technique for reasoning about multi-step, reactive systems. We suggest methods for efficiently calculating succinct explanations, by exploiting the system's transition constraints in order to curtail the search space explored by the underlying verifier. We evaluate our approach on two popular benchmarks from the domain of automated navigation; and observe that our methods allow the efficient computation of minimal and minimum explanations, significantly outperforming the state of the art. We also demonstrate that our methods produce formal explanations that are more reliable than competing, non-verification-based XAI techniques.
+
Footnote †: Both authors contributed equally.
## I Introduction
Deep neural networks (DNNs) [61] are used in numerous key domains, such as computer vision [58], natural language processing [26], computational biology [10], and more [23]. However, despite their tremendous success, DNNs remain "black boxes", uninterpretable by humans. This issue is concerning, as DNNs are prone to critical errors [19, 108] and unexpected behaviors [11, 31].
DNN opacity has prompted significant research on explainable AI (XAI) techniques [67, 85, 86], aimed at explaining the decisions made by DNNs, in order to increase their trustworthiness and reliability. Modern XAI methods are useful and scalable, but they are typically heuristic; i.e., there is no provable guarantee that the produced explanation is correct [20, 48]. This hinders the applicability of these approaches to critical systems, where regulatory bars are high [72].
These limitations provide ample motivation for _formally_ explaining DNN decisions [20, 36, 42, 72]. And indeed, the formal verification community has suggested harnessing recent developments in DNN verification [14, 22, 29, 32, 73, 76, 77, 79, 90, 95, 102, 103] to produce provable explanations for DNNs [17, 42, 47]. Typically, such approaches consider a particular input to the DNN, and return a subset of its features that caused the DNN to classify the input as it did. These subsets are called _abductive explanations_, _prime implicants_ or _PI-explanations_[17, 47, 93]. This line of work constitutes a promising step towards more reliable XAI; but so far, existing work has focused on explaining decisions of "one-shot" DNNs, such as image and tabular data classifiers [17, 46, 47], and has not addressed more complex systems.
Modern DNNs are often used as controllers within elaborate reactive systems, where a DNN's decisions affect its future invocations. A prime example is _deep reinforcement learning_ (_DRL_) [64], where DNNs learn control policies for complex systems [12, 18, 62, 68, 80, 94, 106]. Explaining the decisions of DRL agents (XRL) [35, 54, 69, 82] is an important domain within XAI; but here too, modern XRL techniques are heuristic, and do not provide formally correct explanations.
In this work, we make a first attempt at formally defining abductive explanations for _multi-step decision processes_. We propose novel methods for computing such explanations and supply the theoretical groundwork for justifying the soundness of these methods. Our framework is model-agnostic, and could be applied to diverse kinds of models; but here, we focus on DNNs, where producing abductive explanations is known to be quite challenging [17, 15, 47]. With DNNs, our technique allows us to reduce the number of times a network has to be unrolled, circumventing a potential exponential blow-up in runtime; and also allows us to exploit the reactive system's transition constraints, as well as the DNN's sensitivity to small input perturbations, to curtail the search space even further.
For evaluation purposes, we implemented our approach as a proof-of-concept tool, which is publicly available as an artifact accompanying this paper [16]. We used this tool to automatically generate explanations for two popular DRL benchmarks: a navigation system on an abstract, two-dimensional grid, and a real-world robotic navigation system. Our evaluation demonstrates that our methods significantly outperform state-of-the-art, rigorous methods for generating abductive explanations, both in terms of efficiency and in the size of the explanation generated. When comparing our approach to modern, heuristic-based XAI approaches, our explanations were found to be significantly more precise. We regard these results as strong evidence of the usefulness of
applying verification in the context of XAI.
The rest of this paper is organized as follows: Sec. II contains background on DNNs, their verification, and their formal explainability. Sec. III contains our definitions for formal abductive explanations and contrastive examples for reactive systems. In Sec. IV we propose different methods for computing such abductive explanations. We then evaluate these approaches in Sec. V, followed by a discussion of related work in Sec. VI; and we conclude in Sec. VII.
## II Background
**DNNs.** Deep neural networks (DNNs) [61] are directed, layered graphs, whose nodes are referred to as _neurons_. They propagate data from the first (_input_) layer, through intermediate (_hidden_) layers, and finally onto an _output_ layer. A DNN's output is calculated by assigning values (representing input _features_) to the input layer, and then iteratively calculating the neurons' values in subsequent layers. In classification, each output neuron corresponds to a _class_, and the input is classified as the class matching the greatest output. Fig. 1 depicts a toy DNN. The input layer has three neurons and is followed by a weighted-sum layer that calculates an affine transformation of the input values. For example, given input \(V_{1}=[1,1,1]^{T}\), the second layer evaluates to \(V_{2}=[7,8,11]^{T}\). This is followed by a ReLU layer, which applies the ReLU(\(x\)) = \(\max(0,x)\) function to each value in the previous layer, resulting in \(V_{3}=[7,8,11]^{T}\). The output layer computes the weighted sum \(V_{4}=[15,-4]^{T}\). Because the first output neuron has the greatest value, \(V_{1}\) is classified as the output class corresponding to that neuron.
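The forward pass can be sketched in a few lines of NumPy. The weights below are illustrative placeholders chosen only so that the computation reproduces the values quoted above (\(V_{2}=[7,8,11]^{T}\), \(V_{4}=[15,-4]^{T}\) for input \([1,1,1]^{T}\)); they are not read off Fig. 1.

```
import numpy as np

W1 = np.array([[1.0, 2.0, 4.0],
               [3.0, 3.0, 2.0],
               [5.0, 2.0, 4.0]])   # assumed weighted-sum layer (no bias)
W2 = np.array([[1.0, 1.0, 0.0],
               [3.0, 1.0, -3.0]])  # assumed output layer

def forward(v1):
    v2 = W1 @ v1                   # weighted-sum layer
    v3 = np.maximum(v2, 0.0)       # ReLU layer
    v4 = W2 @ v3                   # output layer
    return v4

v4 = forward(np.array([1.0, 1.0, 1.0]))
print(v4, "class:", int(np.argmax(v4)))   # [15. -4.] class: 0
```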
**DNN Verification.** We define a DNN verification query as a tuple \(\{P,N,Q\}\), where \(N\) is a DNN that maps an input vector \(x\) to an output vector \(y=N(x)\), \(P\) is a predicate over \(x\), and \(Q\) is a predicate over \(y\)[55]. A DNN verifier needs to answer whether there exists some input \(x^{\prime}\) that satisfies \(P(x^{\prime})\wedge Q(N(x^{\prime}))\) (a SAT result) or not (an UNSAT result). It is common to express \(P\) and \(Q\) in the logic of real arithmetic [66]. The problem of verifying DNNs is known to be NP-Complete [55].
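To make such queries concrete, the toy network above can be encoded for an off-the-shelf SMT solver. The sketch below uses Z3 for illustration only (the experiments in Sec. V use the dedicated DNN verifier Marabou), with the same placeholder weights as the previous sketch; \(P\) bounds the inputs to \([0,1]\), and \(Q\) asks whether the second class can be (weakly) selected.

```
from z3 import Solver, Real, If, And, sat

x = [Real(f"x{i}") for i in range(3)]

# Assumed toy weights (same placeholders as the forward-pass sketch above).
W1 = [[1, 2, 4], [3, 3, 2], [5, 2, 4]]
W2 = [[1, 1, 0], [3, 1, -3]]

h = [sum(W1[i][j] * x[j] for j in range(3)) for i in range(3)]
r = [If(hi > 0, hi, 0) for hi in h]                 # ReLU encoding
y = [sum(W2[i][j] * r[j] for j in range(3)) for i in range(2)]

s = Solver()
s.add(And(*[And(xi >= 0, xi <= 1) for xi in x]))    # predicate P
s.add(y[1] >= y[0])                                 # predicate Q (not class c1)
# SAT yields a satisfying input x'; UNSAT certifies no such input exists.
print("SAT" if s.check() == sat else "UNSAT")
```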
**Formal Explanations for Classification DNNs.** A classification problem is a tuple \(\{F,D,K,N\}\), where (i) \(F=\{1,\ldots,m\}\) is the feature set; (ii) \(D=\{D_{1},D_{2},\ldots,D_{m}\}\) are the domains of individual features, and the entire feature space is \(\mathbb{F}=(D_{1}\times D_{2}\times\ldots\times D_{m})\); (iii) \(K=\{c_{1},c_{2},\ldots,c_{n}\}\) represents the set of all classes; and (iv) \(N:\mathbb{F}\to K\) is the classification function, represented by a neural network. A _classification instance_ is a pair \((v,c)\), where \(v\in\mathbb{F}\), \(c\in K\), and \(c=N(v)\). Intuitively, this means that \(N\) maps the input \(v\) to class \(c\).
Formally explaining the instance \((v,c)\) entails determining _why_ \(v\) is classified as \(c\). An _explanation_ (also known as an _abductive explanation_) is defined as a subset of features, \(E\subseteq F\), such that fixing these features to their values in \(v\) guarantees that the input is classified as \(c\), regardless of the features in \(F\setminus E\). The features _not_ part of the explanation are _"free"_ to take on any arbitrary value, but cannot affect the classification. Formally, given an input \(v=(v_{1},\ldots,v_{m})\in\mathbb{F}\) classified by the neural network to \(N(v)=c\), we define an explanation as a subset of features \(E\subseteq F\), such that:
\[\forall x\in\mathbb{F}.\quad\bigwedge_{i\in E}(x_{i}=v_{i})\to(N(x)=c) \tag{1}\]
We demonstrate formal explanations using the running example from Fig. 1. For simplicity, assume that each input can only take the values \(0\) or \(1\). Fig. 2 shows that the set \(\{v_{1}^{1},v_{1}^{2}\}\) is an explanation for the input vector \(V_{1}=[1,1,1]^{T}\): setting the first two features in \(V_{1}\) to \(1\) ensures that the classification is unchanged, regardless of the values the third feature takes.
A candidate explanation \(E\) can be verified through a verification query \(\{P,N,Q\}=\langle E=v,N,Q_{\neg c}\rangle\), where \(E=v\) means that all of the features in \(E\) are set to their corresponding values in \(v\), and \(Q_{\neg c}\) implies that the classification of this query is _not_\(c\). If this query is UNSAT, then \(E\) is a valid explanation for the instance \((v,c)\).
It is straightforward to show that the set of all features is a trivial explanation. However, smaller explanations typically provide more meaningful information regarding the decision of the classifier; and we thus focus on finding _minimal_ and _minimum_ explanations. A _minimal explanation_ is an explanation \(E\subseteq F\) that ceases to be an explanation if any of its features are removed:
\[\begin{split}&(\forall x\in\mathbb{F}.\quad\bigwedge_{i\in E}(x_{i }=v_{i})\to(N(x)=c))\ \wedge\\ &(\forall j\in E.\quad\exists y\in\mathbb{F}.\quad\bigwedge_{i \in E\setminus j}(y_{i}=v_{i})\wedge(N(y)\neq c))\end{split} \tag{2}\]
A minimal explanation for our running example, \(\{v_{1}^{1},v_{1}^{2}\}\), is depicted in Fig. 15 of the appendix.
Fig. 1: A toy DNN.

A _minimum explanation_ is a subset \(E\subseteq F\) which is a minimal explanation of minimum size; i.e., there is no other minimal explanation \(E^{\prime}\neq E\) such that \(|E^{\prime}|<|E|\). Fig. 16 of the appendix shows that \(\{v_{1}^{3}\}\) is a minimal explanation of minimum cardinality, and is hence a minimum explanation in our example.
**Contrastive Examples.** We define a contrastive example (also known as a _contrastive explanation (CXP)_) as a subset of features \(C\subseteq F\), whose alteration may cause the classification of \(v\) to change. More formally:
\[\exists x\in\mathbb{F}.\quad\bigwedge_{i\in F\setminus C}(x_{i}=v_{i})\wedge(N (x)\neq c) \tag{3}\]
A contrastive example for our running example appears in Fig. 3.
Checking whether \(C\) is a contrastive example can be performed using the query \((P,N,Q)=\langle(F\setminus C)=v,N,Q_{\neg c}\rangle\): \(C\) is contrastive iff the query is SAT. Any set containing a contrastive example is itself contrastive, and so we consider only contrastive examples that are minimal, i.e., that do not contain any smaller contrastive examples.
Contrastive examples have an important property: every explanation contains at least one element from every contrastive example [17, 46]. This can be used for showing that a _minimum hitting set_ (MHS; see Sec. II of the appendix) of all contrastive examples is a minimum explanation [84, 44]. In addition, there exists a duality between contrastive examples and explanations [46, 50]: minimal hitting sets of all contrastive examples are minimal explanations, and minimal hitting sets of all explanations are minimal contrastive examples. This relation can be proved by reducing explanations and contrastive examples to minimal unsatisfiable sets and minimal correction sets, respectively, where this duality is known to hold [46]. Calculating an MHS is NP-hard, but can be performed in practice using modern MaxSAT or MILP solvers [41, 63]. The duality is thus useful since computing contrastive examples and calculating their MHS is often more efficient than directly computing minimum explanations [17, 46, 47].
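The MHS computation itself can be delegated to a MaxSAT solver. The sketch below uses RC2 from the PySAT toolkit (the same tool later used in Sec. V) on a toy collection of contrastive examples; the encoding is a standard one, not taken from the paper's artifact. Feature \(f\) is in the hitting set iff Boolean variable \(f+1\) is true: each contrastive example yields a hard clause, and each feature a unit soft clause preferring exclusion.

```
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

features = range(4)                       # toy feature set {0, 1, 2, 3}
contrastive_examples = [{0, 1}, {1, 2}, {2, 3}]

wcnf = WCNF()
for cxp in contrastive_examples:
    wcnf.append([f + 1 for f in cxp])     # hard: hit every contrastive example
for f in features:
    wcnf.append([-(f + 1)], weight=1)     # soft: keep the hitting set small

rc2 = RC2(wcnf)
model = rc2.compute()                     # optimal MaxSAT model
rc2.delete()

minimum_explanation = {f for f in features if (f + 1) in model}
print(minimum_explanation)                # a minimum hitting set, e.g., {1, 2}
```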
## III K-Step Formal Explanations
A reactive system is a tuple \(R=\langle S,A,I,T\rangle\), where \(S\) is a set of states, \(A\) is a set of actions, \(I\) is a predicate over the states of \(S\) that indicates initial states, and \(T\subseteq S\times A\times S\) is a transition relation. In our context, a reactive system has an associated neural network \(N:S\to A\). A \(k\)-step execution \(\mathcal{E}\) of \(R\) is a sequence of \(k\) states \((s_{1},\ldots,s_{k})\), such that \(I(s_{1})\) holds, and for all \(1\leq i\leq k-1\) it holds that \(T(s_{i},N(s_{i}),s_{i+1})\). We use \(\mathcal{E}_{S}=(s_{1},\ldots,s_{k})\) to denote the sequence of \(k\) states visited in \(\mathcal{E}\), and \(\mathcal{E}_{A}=(a_{1},\ldots,a_{k})\) to denote the sequence of \(k\) actions selected in these states. More broadly, a reactive system can be considered a deterministic, finite-state Mealy transducer [91]. Our goal is to better understand \(\mathcal{E}\), by finding abductive explanations and contrastive examples that explain why \(N\) selected the actions in \(\mathcal{E}_{A}\).
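As an illustration of these definitions, the following sketch unrolls a \(k\)-step execution for a stand-in policy and a deterministic stand-in transition function (the definitions above allow a general transition relation; both callables here are hypothetical examples).

```
from dataclasses import dataclass
from typing import Callable, List, Tuple

State, Action = Tuple[float, ...], int

@dataclass
class ReactiveSystem:
    policy: Callable[[State], Action]              # the network N
    transition: Callable[[State, Action], State]   # a deterministic stand-in for T

    def execute(self, s1: State, k: int) -> Tuple[List[State], List[Action]]:
        """Unroll a k-step execution (E_S, E_A) from initial state s1."""
        states, actions = [s1], []
        for _ in range(k):
            a = self.policy(states[-1])
            actions.append(a)
            if len(states) < k:
                states.append(self.transition(states[-1], a))
        return states, actions

# Example: the transition preserves the third feature (cf. the running example).
toy = ReactiveSystem(
    policy=lambda s: 0 if s[2] >= 0.5 else 1,
    transition=lambda s, a: (1.0 - s[0], s[1], s[2]),
)
print(toy.execute((1.0, 1.0, 1.0), k=2))
```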
**K-Step Abductive Explanations.** Informally, we define an explanation \(E\) for a \(k\)-step execution \(\mathcal{E}\) as a subset of features of each of the visited states in \(\mathcal{E}_{S}\), such that fixing these features (while freeing all other features) is sufficient for forcing the DNN to select the actions in \(\mathcal{E}_{A}\). More formally, \(E=(E_{1},\ldots,E_{k})\), such that \(\forall x_{1},x_{2},\ldots,x_{k}\in\mathbb{F}\),
\[\big{(}\bigwedge_{i=1}^{k-1}T(x_{i},N(x_{i}),x_{i+1})\wedge\bigwedge_{i=1}^{k} \bigwedge_{j\in E_{i}}(x_{i}^{j}=s_{i}^{j})\big{)}\rightarrow\bigwedge_{i=1}^{ k}N(x_{i})=a_{i} \tag{4}\]
We continue with our running example. Consider the transition relation \(T=\{(s,a,s^{\prime})\mid s^{3}=s^{\prime 3}\}\); i.e., we can transition from state \(s\) to state \(s^{\prime}\) provided that the third input neuron has the same value in both states, regardless of the action selected in \(s\). Observe the \(2\)-step execution \(\mathcal{E}:s_{1}=(1,1,1)\xrightarrow{c_{1}}s_{2}=(1,0,1)\xrightarrow{c_{1}}\), depicted in Fig. 4 (for simplicity, we omit the network's hidden neurons), and suppose we wish to explain \(\mathcal{E}_{A}=\{c_{1},c_{1}\}\). Because \(\{s^{3}\}\) is an explanation for the first step, and because fixing \(s_{1}^{3}\) also fixes the value of \(s_{2}^{3}\), it follows that fixing \(s_{1}^{3}\) is sufficient to guarantee that action \(c_{1}\) is selected twice -- i.e., \((\{s^{3}\},\varnothing)\) is a multi-step explanation for \(\mathcal{E}\).
Given a candidate \(k\)-step explanation, we can check its validity by encoding Eq. 4 as a DNN verification query. This is achieved by _unrolling_ the network \(N\) for \(k\) subsequent steps; i.e., by encoding a network that is \(k\) times larger than \(N\), with input and output vectors that are \(k\) times larger than the original. We must also encode the transition relation \(T\) as a set of constraints involving the input values, to mimic \(k\) time-steps within a single feed-forward pass. We use \(N_{[i]}\) to denote an unrolling of the neural network \(N\) for \(i\) steps, for \(1\leq i\leq k\).
Using the unrolled network \(N_{[k]}\), we encode the negation of Eq. 4 as the query \((P,N,Q)=\langle E=\mathcal{E}_{S},N_{[k]},Q_{-\mathcal{E}_{A}}\rangle\), where \(E=\mathcal{E}_{S}\) means that we restrict the features in each subset \(E_{i}\in E\) to their corresponding values in \(s_{i}\); and \(Q_{-\mathcal{E}_{A}}\) indicates that in some step \(i\), an action that is not \(a_{i}\) was selected by the
DNN. An UNSAT result for this query indicates that \(E\) is an explanation for \(\mathcal{E}\), because fixing \(E\)'s features to their values forces the given sequence of actions to occur.
We can naturally define a _minimal_\(k\)-step explanation as a \(k\)-step explanation that ceases to be a \(k\)-step explanation when we remove any of its features. A _minimum_\(k\)-step explanation is a minimal \(k\)-step explanation of the lowest possible cardinality; i.e., there does not exist a \(k\)-step explanation \(E^{\prime}=(E^{\prime}_{1},E^{\prime}_{2},\ldots,E^{\prime}_{k})\) such that \(\sum_{i=1}^{k}|E^{\prime}_{i}|<\sum_{i=1}^{k}|E_{i}|\).
**K-Step Contrastive Examples.** A contrastive example \(C\) for an execution \(\mathcal{E}\) is a subset of features whose alteration can cause the selection of an action not in \(\mathcal{E}_{A}\). A \(k\)-step contrastive example is depicted in Fig. 5: altering the features \(s^{3}_{1}\) and \(s^{3}_{2}\) may cause action \(c_{2}\) to be chosen instead of \(c_{1}\) in the second step. Formally, \(C\) is an ordered set of (possibly empty) subsets \(C=(C_{1},C_{2},\ldots,C_{k})\), such that \(C_{i}\subseteq F\), and for which \(\exists x_{1},x_{2},\ldots,x_{k}\in\mathbb{F}\) such that
\[\begin{split}&\big{(}\bigwedge_{i=1}^{k-1}T(x_{i},N(x_{i}),x_{i+1}) \big{)}\wedge\\ &\big{(}\bigwedge_{i=1}^{k}\bigwedge_{j\in F\setminus C_{i}}(x_{i }^{j}=s_{i}^{j})\big{)}\wedge\big{(}\bigvee_{i=1}^{k}N(x_{i})\neq a_{i}\big{)} \end{split} \tag{5}\]
Similarly to multi-step explanations, \(C\) is a multi-step contrastive example iff the verification query: \((P,N,Q)=\langle(F\setminus C_{1},F\setminus C_{2},\ldots,F\setminus C_{k})= \mathcal{E}_{S},N_{[k]},Q_{\neg\mathcal{E}_{A}}\rangle\) is SAT.
## IV Computing Formal K-Step Explanations
We now propose four different methods for computing formal \(k\)-step explanations, focusing on _minimal_ and _minimum_ explanations. All four methods use an underlying DNN verifier to check candidate explanations, but differ in how they enumerate different explanation candidates until ultimately converging to an answer. We begin with the more straightforward methods.
**Method 1: A Single, K-Sized Step.** The first method is to encode the negation of Eq. 4 by unrolling all \(k\) steps of the network, as described in Sec. III. This transforms the problem into explaining a non-reactive, single-step system (e.g., a "one-shot" classifier). We can then use any existing abductive explanation algorithm for explaining the unrolled DNN (e.g., [17, 46, 47]).
This method is likely to produce small explanation sets but is extremely inefficient. Encoding \(N_{[k]}\) results in an input space roughly \(k\) times the size of any single-step encoding. Such an unrolling for our running example is depicted in Fig. 6. Due to the NP-completeness of DNN verification, this may cause an exponential growth in the verification time of each query. Since finding minimal explanations requires a linear number of queries (and for minimum explanations -- a worst-case exponential number), this may cause a substantial increase in runtime.
**Method 2: Combining Independent, Single-Step Explanations.** Here, we dismantle any \(k\)-step execution into \(k\) individual steps. Then, we _independently_ compute an explanation for each step, using any existing algorithm, and without taking the transition relation into account. Finally, we concatenate these explanations to form a multi-step explanation. Fixing the features of the explanation in each step ensures that the ensuing action remains the same, guaranteeing the soundness of the combined explanation.
The downside of this method is that the resulting \(E\) need not be minimal or minimum, even if its constituent \(E_{i}\) explanations are minimal or minimum themselves; see Fig. 7. In this instance, finding a minimum explanation for each step results in the 2-step explanation (\(\{s^{3}\},\{s^{3}\}\)), which is _not minimal_ -- even though its components are minimum explanations for their respective steps. The reason for this phenomenon is that this method ignores the transition constraints and information flow across time-steps. This can result in larger and less meaningful explanations, as we later show in Sec. V.
Fig. 5: \((\{s^{3}\},\{s^{3}\})\) is a multi-step contrastive example for \(\mathcal{E}\).

Fig. 6: Finding explanations using a 2-step unrolling.

**Method 3: Incremental Explanation Enumeration.** We now suggest a scheme that takes into consideration the transition constraints between steps (unlike Method 2), but which encodes the verification queries for validating explanations in a more efficient manner than Method 1. The scheme relies on the following lemma:
**Lemma 1**.: _Let \(E=(E_{1},E_{2},\ldots,E_{k})\) be a \(k\)-step explanation for execution \(\mathcal{E}\), and let \(1\leq i\leq k\) such that \(\forall j>i\) it holds that \(E_{j}=F\). Let \(E^{\prime}\) be the set obtained by removing a set of features \(F^{\prime}\subseteq E_{i}\) from \(E_{i}\), i.e., \(E^{\prime}=(E_{1},\ldots,E_{i-1},E_{i}\smallsetminus F^{\prime},E_{i+1},\ldots, E_{k})\). In this case, fixing the features in \(E^{\prime}\) prevents any changes in the first \(i-1\) actions \((a_{1},\ldots,a_{i-1})\); and if any of the last \(k-i+1\) actions \((a_{i},\ldots,a_{k})\) change, then \(a_{i}\) must also change._
A proof appears in Sec. III of the appendix. The lemma states that "breaking" an explanation \(E\) of \(\mathcal{E}\) at some step \(i\) (by removing features from the \(i\)'th step), given that the features in steps \(i+1,\ldots,k\) are fixed, causes \(a_{i}\) to change before any other action. In this scenario, we can determine whether \(E\) explains \(\mathcal{E}\) using a simplified verification query: we can check whether \((E_{1},\ldots,E_{i})\) explains the first \(i\) steps of \(\mathcal{E}\), regardless of steps \(i+1,...,k\). If so, then \(a_{i}\) cannot change; and from Lemma 1, no action in \(\mathcal{E}_{A}\) can change, and \((E_{1},\ldots,E_{k})\) is an explanation for \(\mathcal{E}\). Otherwise, \(E\) allows an action in \(\mathcal{E}_{A}\) to change, and it does not explain \(\mathcal{E}\). We can leverage this property to efficiently enumerate candidates as part of a search for a minimal/minimum explanation for \(\mathcal{E}\), as explained next.
**Finding Minimal Explanations with Method 3.** A common approach for finding minimal explanations for a "one-shot" classification instance is via a greedy algorithm, which dispatches a linear number of queries to the underlying verifier [47]. Such an algorithm can start with the explanation set to be the entire feature space, and then iteratively attempt to remove features. If removing a feature allows misclassification, the algorithm keeps it as part of the explanation; otherwise, it removes the feature and continues. A pseudo-code for this approach appears in Alg. 1.
```
Input: N (DNN), F (N's features), v (feature values), c (predicted class)
Explanation ← F
for each f ∈ F do
    if Verify(⟨(Explanation ∖ {f}) = v, N, Q¬c⟩) is UNSAT then
        Explanation ← Explanation ∖ {f}
return Explanation
```
**Algorithm 1**Greedy-Minimal-Explanation
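A direct Python rendering of Alg. 1, with the verifier abstracted behind a callback; `verify_is_unsat` is a hypothetical hook that should dispatch the query \(\langle(\text{Explanation}\setminus\{f\})=v,N,Q_{\neg c}\rangle\) to a DNN verifier and report whether it is UNSAT.

```
from typing import Callable, Sequence, Set

def greedy_minimal_explanation(
    features: Sequence[int],
    v: Sequence[float],
    verify_is_unsat: Callable[[Set[int], Sequence[float]], bool],
) -> Set[int]:
    """Greedily shrink the full feature set into a minimal explanation."""
    explanation = set(features)
    for f in features:
        candidate = explanation - {f}
        if verify_is_unsat(candidate, v):
            explanation = candidate   # f is redundant; drop it for good
    return explanation
```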
We suggest performing a similar process for explaining \(\mathcal{E}\). We start by fixing all features in all states of \(\mathcal{E}\) to their values; i.e., we start with \(E=(E_{1},\ldots,E_{k})\) where \(E_{i}=F\) for all \(i\), and then perform the following steps:
First, we iteratively remove individual features from \(E_{1}\), each time checking whether the modified \(E\) remains an explanation for \(\mathcal{E}\). Since all features in steps \(2,\ldots,k\) are fixed, it follows from Lemma 1 that checking whether the modified \(E\) explains \(\mathcal{E}\) is equivalent to checking whether the modified \(E_{1}\) explains the selection of \(a_{1}\). Thus, we perform a process that is identical to the one in the greedy Alg. 1 for finding a minimal explanation for a "one-shot" classification DNN. At the end of this phase, we are left with \(E=(E_{1},\ldots,E_{k})\) where \(E_{i}=F\) for all \(i>1\) and \(E_{1}\) was reduced by removing features from it. We keep all current features in \(E\) fixed for the following steps.
Second, we begin to iteratively remove features from \(E_{2}\), each time checking whether the modified \(E\) still explains \(\mathcal{E}\). Since the features in steps \(3,\ldots,k\) are entirely fixed, it suffices (from Lemma 1) to check whether the modified \((E_{1},E_{2})\) explains the selection of the first two actions \((a_{1},a_{2})\) of \(\mathcal{E}_{A}\). This is performed by checking whether
\[\begin{split}&(\forall x_{1},x_{2}\in\mathbb{F},\quad T(x_{1},a_{ 1},x_{2})\wedge\bigwedge_{j\in E_{1}}(x_{1}^{j}=s_{1}^{j})\wedge\\ &\bigwedge_{j\in E_{2}}(x_{2}^{j}=s_{2}^{j}))\to N(x_{2})=a_{2} \end{split} \tag{6}\]
We do not need to require that \(N(x_{1})=a_{1}\) (as in Method 1) -- this is guaranteed by Lemma 1. This is significant, because it exempts us from encoding the neural network twice as part of the verification query. We denote the negation of Eq. 6 for validating \((E_{1},E_{2})\) as: \((P,N,Q)=((E_{1},E_{2})=\mathcal{E}_{S_{[2]}},N,Q_{\neg a_{2}})\).
Third, we continue this iterative process for all \(k\) steps of \(\mathcal{E}\), and find the minimal explanation for each step separately. In step \(i\), for each query we encode \(i\) transitions and check whether the modified \(E\) still explains the first \(i\) steps of \(\mathcal{E}\) (by encoding \(((E_{1},\ldots,E_{i})=\mathcal{E}_{S_{[i]}},N,Q_{\neg a_{i}})\)), which _does not_ require encoding the DNN \(i\) times. The correctness of each step follows directly from Lemma 1.
The pseudo-code for this process appears in Alg. 2. The minimality of the resulting explanation holds because removing any feature from this explanation would allow the action in that step to change (since minimality is maintained in each step of the algorithm). An example of the first two iterations of this process on our running example appears in Fig. 8: in the first iteration, we attempt to remove features from the first step, until converging to an explanation \(E_{1}\). In the second iteration, while the features in \(E_{1}\) remain fixed to their values, we encode the constraints of the transition relation \(T(s_{1},a_{1},s_{2})\) between the first two steps, and dispatch queries to verify candidate explanations for the second step -- until converging to a minimal explanation \((E_{1},E_{2})\). In this case, \(E_{2}=\varnothing\), and \((\{s^{3}\},\varnothing)\) is a valid explanation for the \(2\)-step execution, since fixing the value of \(s_{1}^{3}\) determines the value of \(s_{2}^{3}\) -- which forces the selection of \(a_{2}\) in the second step.
We emphasize that incrementally enumerating candidate explanations for a \(k\)-step execution in this way is preferable to simply finding a minimal explanation by encoding verification queries that encompass all \(k\)-steps, a la Method 1: (i) in each iteration, we dispatch a verification query involving only a single invocation of the DNN, thus circumventing the linear growth in the network's size -- which causes an exponential worst-case increase in verification times; and (ii) in each iteration, we do not need to encode the entire set of \(k\) disjuncts
(from the negation of Eq. 4), since we only need to validate \(a_{i}\) on the \(i\)'th iteration, and not all actions of \(\mathcal{E}_{A}\).
```
Input: N (DNN), F (N's features), E (execution of length k to explain)
Explanation ← (E1, ..., Ek), where Ei = F for all 1 ≤ i ≤ k
for each i ∈ {1, ..., k} and f ∈ Ei do
    if Verify(⟨(E1, ..., Ei ∖ {f}) = E_S[i], N, Q¬ai⟩) is UNSAT then
        Ei ← Ei ∖ {f}
return Explanation
```
**Algorithm 2**Incremental-Minimal-Explanation-Enumeration
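Analogously, a Python sketch of Alg. 2. The hypothetical callback `verify_step_is_unsat` should encode the first \(i\) transitions, a single copy of the DNN, and the query \(((E_{1},\ldots,E_{i})=\mathcal{E}_{S_{[i]}},N,Q_{\neg a_{i}})\), returning True on UNSAT.

```
from typing import Callable, List, Sequence, Set

def incremental_minimal_explanation(
    features: Sequence[int],
    k: int,
    verify_step_is_unsat: Callable[[List[Set[int]], int], bool],
) -> List[Set[int]]:
    """Per-step greedy shrinking; earlier steps stay fixed while later
    steps are processed, as justified by Lemma 1."""
    E = [set(features) for _ in range(k)]
    for i in range(k):
        for f in list(E[i]):
            E[i].discard(f)
            if not verify_step_is_unsat(E[: i + 1], i):
                E[i].add(f)   # f is needed to pin down action a_i
    return E
```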
**Finding Minimum Explanations with Method 3.** We can also use our proposed enumeration to efficiently find _minimum_ explanations, using a recursive approach. In each step \(i=1,\dots,k\), we iterate over all the possible explanations, each time considering a candidate explanation and recursively invoking the procedure for step \(i+1\). In this way, we iterate over all the possible multi-step explanation candidates and can return the smallest one that we find. This process is described in Alg. 3.
Finding a minimum explanation in this manner is superior to using Method 1, for the same reasons noted before. In addition, the exponential blowup here is in the number of explanations in each step, and not in the entire number of features in each step -- which is substantially smaller in many cases. Nevertheless, as the method advances through steps, it is expected to be significantly harder to iterate over all the candidate explanations. We discuss more efficient ways for finding global minimum explanations in Method 4.
**Method 4: Multi-Step Contrastive Example Enumeration.** As mentioned earlier, a common approach for finding minimum explanations is to find all contrastive examples, and then calculate their minimum hitting set (MHS). Because DNNs tend to be sensitive to small input perturbations [96], small contrastive examples are often easy to find, and this can expedite the process significantly [17]. When performing this procedure on a multi-step execution \(\mathcal{E}\), we show that it is possible to enumerate contrastive example candidates in a more efficient manner than simply using the encoding from Method 1.
**Lemma 2**.: _Let \(\mathcal{E}\) be a \(k\)-step execution, and let \(C=(C_{1},\dots,C_{k})\) be a minimal contrastive example for \(\mathcal{E}\); i.e., altering the features in \(C\) can cause at least one action in \(\mathcal{E}_{A}\) to change. Let \(1\leq i\leq k\) denote the index of the first action \(a_{i}\) that can be changed by features in \(C\). It holds that: \(C_{i}\neq\emptyset\); \(C_{j}=\emptyset\) for all \(j>i\); and if there exists some \(l<i\) such that \(C_{l}\neq\emptyset\), then all sets \(\{C_{l},C_{l+1},\dots,C_{i}\}\) are not empty._
The lemma gives rise to the following scheme. We examine some contrastive example \(C^{\prime}\) of _a set of subsequent steps of \(\mathcal{E}\)_. For simplicity, we discuss the case where \(C^{\prime}=(C^{\prime}_{i})\) involves only a single step \(i\), but the technique generalizes to subsets of steps, as well. Such a \(C^{\prime}_{i}\) can be found using a "one-shot" verification query on step \(i\), without encoding the transition relation or unrolling the network. Our goal is to use \(C^{\prime}\) to find many contrastive examples for \(\mathcal{E}\), and use them in computing the MHS. We observe that there are three possible cases:
1. \(C=(\emptyset,\dots,\emptyset,C^{\prime}_{i},\emptyset,\dots,\emptyset)\) already constitutes a contrastive example for \(\mathcal{E}\). In this case, we say that \(C^{\prime}=(C^{\prime}_{i})\) is an _independent contrastive example_.
2. The features in \(C^{\prime}_{i}\) can cause a skew from \(\mathcal{E}\) only when features from preceding steps \(l,\ldots,i-1\) (for some \(l<i\)) are also altered. In this case, we say that \(C^{\prime}\) is a _dependent contrastive example_, and that it depends on steps \(l,\ldots,i-1\); together, the features from all these steps form the contrastive example \(C=(\varnothing,\ldots,\varnothing,C_{l},\ldots,C_{i-1},C^{\prime}_{i},\varnothing,\ldots,\varnothing)\) for \(\mathcal{E}\).
3. \(C^{\prime}\) is a _spurious contrastive example_: the first \(i-1\) steps in \(\mathcal{E}\), and the constraints that the transition relation imposes, prevent the features freed by \(C^{\prime}_{i}\) from causing any action besides \(a_{i}\) to be selected in step \(i\).

Fig. 8: Running Method 3 for finding minimal explanations, for two iterations.
Fig. 9 illustrates the three cases. The first case is identical to the one from Fig. 5, where \((\{s^{3}\})\) is a dependent contrastive example of the second step, which depends on the previous step and is part of a larger contrastive example: \((\{s^{3}\},\{s^{3}\})\). In the second case, assume that \(T\) requires that \(s_{1}^{3}+s_{2}^{3}\neq 1\) for any feasible transition. Thus, the assignment to \(s_{2}^{3}\) that could cause the second action in the sequence to change is not reachable from the previous step, and hence \((\{s^{3}\})\) is a spurious contrastive example of the second step. In the third case, assume that \(T\) allows all transitions, and hence \((\{s^{3}\})\) is an independent contrastive example for the second step; and so \((\varnothing,\{s^{3}\})\) is a contrastive example of the entire execution.
It follows from Lemma 2 that one of these three cases must always apply. We next explain how verification can be used to classify each contrastive example of a subset of steps into one of these three categories. If \(C^{\prime}\) is independent, it can be used as-is in computing the MHS; and if it is spurious, it should be ignored. In the case where \(C^{\prime}\) is dependent, our goal is to find all multi-step contrastive examples that contain it, for the purpose of computing the MHS. We next describe a recursive algorithm, termed _reverse incremental enumeration_ (RIE), that achieves this.
**Reverse Incremental Enumeration.** Given a contrastive example \(C^{\prime}\) containing features from a set of subsequent steps of \(\mathcal{E}\), we propose to classify it into one of the three categories by iteratively dispatching queries that check the reachability of \(C^{\prime}\) from the previous steps of the sequence. We execute this procedure by recursively enumerating contrastive examples in previous steps. For simplicity, we assume again that \(C^{\prime}=(C^{\prime}_{i})\) is a single-step contrastive example of step \(i\).
1. For checking whether \(C^{\prime}\) is an independent contrastive example of \(\mathcal{E}\), we set \(C_{i-1}=\varnothing\) and \(C_{i}=C^{\prime}_{i}\), and check whether \(C=(C_{i-1},C_{i})\) is a contrastive example for steps \(i-1\) and \(i\). This is achieved by dispatching the following query: \(\exists x_{i-1},x_{i}\in\mathbb{F}\) such that: \[\begin{split}& T(x_{i-1},N(x_{i-1}),x_{i})\wedge\\ &\quad\big{(}\bigwedge_{l=i-1}^{i}\bigwedge_{j\in\mathcal{F}\smallsetminus C _{l}}(x_{l}^{j}=s_{l}^{j})\big{)}\wedge(N(x_{i})\neq a_{i})\end{split}\] (7) If the verifier returns SAT, \(C^{\prime}_{i}\) is independent of step \(i-1\), and hence independent of all steps \(1,\ldots,i-1\). Hence, \(C^{\prime}\) is an independent contrastive example of \(\mathcal{E}\).
2. If the query from Eq. 7 returns UNSAT, we now attempt to decide whether \(C^{\prime}\) is dependent. We achieve this through additional verification queries, again setting \(C_{i}=C^{\prime}_{i}\), but now setting \(C_{i-1}\) to a _non empty_ set of features -- once for every possible set of features, separately. We again generate a query using the encoding from Eq. 7, and if the verifier returns SAT it follows that \(C^{\prime}\) is dependent on step \(i-1\), and that \(C^{\prime\prime}=(C_{i-1},C_{i})\) is a contrastive example for steps \(i-1\) and \(i\). We recursively continue with this enumeration process, to determine whether \(C^{\prime\prime}\) is independent, dependent of step \(i-2\), or a spurious contrastive example.
3. In case the previous phases determine that \(C^{\prime}\) is neither independent nor part of a larger contrastive example, we conclude that it is spurious.
An example of a single reverse incremental enumeration step on a contrastive example \(C^{\prime}\) in our running example is depicted in Fig. 10, and its recursive call is shown in Alg. 5 (Cxps denotes the set of all multi-step contrastive examples containing the initial \(C^{\prime}\)).
```
RIE(i, j, C′):                       ▷ C′ = (C′j, ..., C′i) is a CXP of steps j..i
if j = 1 then
    return {C′}                      ▷ C′ is trivially independent of steps j < 1
if (∅, C′j, ..., C′i) is a contrastive example of steps j−1, ..., i then
    return {(∅, ..., ∅) · C′}        ▷ C′ is independent of step j−1
Cxps ← ∅
for each non-empty subset Cf of F do
    if (Cf, C′j, ..., C′i) is a contrastive example of steps j−1, ..., i then
        Cxps ← Cxps ∪ RIE(i, j−1, Cf · C′)   ▷ C′ is dependent on step j−1
return Cxps                          ▷ if Cxps is empty, C′ is spurious
```
**Algorithm 5**Reverse Incremental Enumeration (RIE)
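A compact Python sketch of the RIE recursion; `is_cxp` is a hypothetical verifier hook (it should encode steps \(j-1,\ldots,i\), their transitions, and the freed features, as in Eq. 7), and the subset enumeration over \(F\) is exponential, so this is illustrative for small feature sets only.

```
from itertools import chain, combinations

def non_empty_subsets(F):
    return chain.from_iterable(combinations(F, r) for r in range(1, len(F) + 1))

def rie(F, C, i, j, is_cxp):
    """Return all multi-step CXPs for steps 1..i that extend C = (C_j, ..., C_i)."""
    if j == 1:
        return [C]                                   # trivially independent
    if is_cxp([set()] + C, j - 1, i):                # independent of step j-1
        return [[set()] * (j - 1) + C]               # pad earlier steps with empty sets
    cxps = []
    for C_f in non_empty_subsets(F):                 # dependent candidates
        if is_cxp([set(C_f)] + C, j - 1, i):
            cxps += rie(F, [set(C_f)] + C, i, j - 1, is_cxp)
    return cxps                                      # empty list => C is spurious
```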
Using reverse incremental enumeration, we can find all multi-step contrastive examples of \(\mathcal{E}\):
1. First, we find all contrastive examples for the first step of \(\mathcal{E}\). This is again the same as finding contrastive examples of a "one-shot" classification problem, and can be performed efficiently [17], via Alg. 7. We first enumerate all contrastive examples of size \(1\) (i.e., contrastive _singletons_); then all contrastive examples of size \(2\) that do not contain contrastive singletons within them; and then continue this process for all \(1\leq i\leq|F|\) ("skipping" all non-minimal cases).
2. Next, we search for all contrastive examples for the second step of \(\mathcal{E}\), in the same manner. We perform a reverse incremental enumeration on each contrastive example found, obtaining all contrastive examples for steps 1 and 2.
3. We continue iteratively, each time visiting a new step \(i\) and reversely enumerating all contrastive examples that affect steps \(1,\ldots,i\). We stop when we reach the final step, \(i=k\).
The full enumeration process for finding all contrastive examples of \(\mathcal{E}\) is described fully in Alg. 6, which invokes Alg. 7.
We also make the following observation: we can further expedite the enumeration process by discarding sets that contain contrastive examples within them since we are specifically searching for minimal contrastive examples. For instance, in the given example in Fig. 10, if we find \((\varnothing,s^{1},\varnothing)\) as a contrastive example for the entire multi-step instance, we no longer need to consider sets in step \(2\) that contain \(s^{1}\) when iterating in reverse from step \(3\) to step \(2\). Our evaluation shows that this approach can significantly improve performance as the increasing number of contrastive examples found in previous steps greatly reduces the search space.
Of course, our approach's worst-case complexity is still exponential in the number of steps, \(k\), because each dependent contrastive example requires a recursive call that potentially enumerates all contrastive examples for the previous step. However, the number of recursive iterations is limited by the dependency between steps. For instance, if contrastive examples in step \(i\) depend only on step \(i-1\) and not on step \(i-2\), the recursion is limited to depth \(2\). Additionally, skipping the verification of candidates containing contrastive examples found in previous steps can also significantly reduce runtime.

Fig. 10: An illustration of reverse incremental enumeration. We start with a single-step contrastive example, \(C^{\prime}_{3}=\{s^{3}\}\), for the third step of the execution. In the second iteration, we find that \((C^{\prime}_{3})\) depends on the previous step, and that \((\{s^{3}\},\{s^{3}\})\) constitutes a contrastive example for steps 2 and 3. In the third iteration, \((\{s^{3}\},\{s^{3}\})\) is found to be independent of the first step, and hence \((\varnothing,\{s^{3}\},\{s^{3}\})\) is a contrastive example for \(\mathcal{E}\).

Fig. 9: \((\{s^{3}\})\) as a dependent, spurious and independent contrastive example.
## V Evaluation
**Implementation and Setup.** We created a proof-of-concept implementation of all aforementioned approaches and benchmarks [16]. To search for explanations, our tool [16] dispatches verification queries using a backend DNN verifier (we use _Marabou_[56], previously employed in additional studies [3, 4, 5, 21, 83], although other engines may also be used). The queries encode the architecture of the DNN in question, the transition constraints between consecutive steps of the reactive system, and the candidate explanation or contrastive example being checked. Calculating the MHS, when relevant, was done using RC-2, a MaxSAT-based tool of the PySat toolkit [45].
**Benchmarks.** We trained DRL agents for two well-known reactive system benchmarks: GridWorld [97] and TurtleBot [99] (see Fig. 11). GridWorld involves an agent moving in a 2D grid, while TurtleBot is a real-world robotic navigation platform. These benchmarks have been extensively studied in the DRL literature. GridWorld has 8 input features per state, including agent coordinates, target coordinates, and sensor readings for obstacle detection. The agent can take \(4\) possible actions: UP, DOWN, LEFT, or RIGHT. TurtleBot has \(9\) input features per state, including lidar sensor readings, target distance, and target angle. The agent has \(3\) possible actions: LEFT, RIGHT, or FORWARD. We trained our DRL agents with the state-of-the-art PPO algorithm [88]. Additional details appear in Sec. V and VI of the appendix.
**Generating Executions.** We generated 200 unique multi-step executions of our two benchmarks: \(100\) GridWorld executions (using \(10\) agents, each producing \(10\) unique executions of lengths \(6\leq k\leq 14\)), and \(100\) TurtleBot executions (using \(100\) agents, each producing a single execution of length \(6\leq k\leq 8\)). Next, from each \(k\)-step execution, we generated \(k\) unique sub-executions, each representing the first \(i\) steps of the original execution (\(1\leq i\leq k\)). This resulted in a total of \(931\) GridWorld executions and \(647\) TurtleBot executions. We used these executions to assess the different methods for finding minimal and minimum explanations. Each experiment ran with a timeout value of \(3\cdot i\) hours, where \(i\) is the execution's length.
**Experiments.** We begin by comparing the performance of the four methods discussed in Sec. IV: (i) encoding the entire network as a "one-shot" query; (ii) computing individual explanations for each step; (iii) incrementally enumerating explanations; and (iv) reversely enumerating contrastive examples and calculating their MHS. We note that we use Methods 1-3 to generate both minimal and minimum explanations, whereas Method \(4\) is only used to generate minimum explanations. To generate minimum explanations using the "one-shot" encodings of Methods \(1\) and \(2\), we use the state-of-the-art approach of Ignatiev et al. [47]. We use two common criteria for comparison [47, 17, 46]: the _size_ of the generated explanations (small explanations tend to be more meaningful), and the overall runtime and timeout ratios.
**Results.** Results for the GridWorld benchmark are presented in Table I. These results clearly indicate that Method 2 (generating explanations in independent steps) was significantly faster in all experiments, but generated drastically larger explanations -- about two times larger when searching for a _minimal_ explanation, and about five times larger for a _minimum_ explanation, on average. This is not surprising; as noted earlier, the explanations produced by such an approach do not take the transition constraints into account, and hence, may be quite large. In addition, we note again that this approach does not guarantee the minimality of the combined explanation, even when combining minimal/minimum explanations for each step. The corresponding results for TurtleBot appear in Sec. VII of the appendix, and also demonstrate similar outcomes.
When comparing the three approaches that can guarantee minimal explanations, the incremental enumeration approach (Method 3) is clearly more efficient than the "one-shot" scheme (running for about \(1\) second compared to above \(5\) minutes, on average, across all solved instances), as depicted in Fig. 12. For the minimum explanation comparison, the results show that the reversed-enumeration-based strategy (Method 4) ran significantly faster than all other methods that can find guaranteed minimum explanations: on average, it ran for \(39\) seconds, while the other methods ran for more than \(6\) and \(23\) minutes. In addition, out of all methods guaranteed
| **setting** | **experiment** | **time (s) avg.** | **size min** | **size avg.** | **size max** | **solved (%)** |
| --- | --- | --- | --- | --- | --- | --- |
| minimal (local) | one-shot (1) | 304 | 5 | 33 | 112 | 98 |
| minimal (local) | independent (2) | 1 | 5 | 34 | 97 | 99.9 |
| minimal (local) | **incremental (3)** | **1** | **5** | **18** | **78** | **99.7** |
| minimum (global) | one-shot (1) | 405 | 5 | 14 | 32 | 29.8 |
| minimum (global) | independent (2) | 4 | 5 | 35 | 98 | 98.3 |
| minimum (global) | incremental (3) | 1,396 | 5 | 7 | 9 | 17.9 |
| minimum (global) | **reversed (4)** | **39** | **5** | **7** | **16** | **99.7** |

TABLE I: _GridWorld_: columns from left to right: experiment type, method name (and number), time and size of the returned explanation (over experiments that terminated), and the percentage of solved instances (the rest timed out). Bold highlighting indicates the method that generated the explanation with the optimal size.
Fig. 11: Benchmarks: (A) GridWorld; and (B) TurtleBot.
to produce a minimum explanation, experiments that ran with the "reversed" strategy hit significantly fewer timeouts. The "reversed" strategy outperforms the competing methods significantly, on both benchmarks (see Fig. 13).
Next, we analyzed the strategies at a higher resolution -- focusing on a _step-wise_ level comparison, i.e., on analyzing how the length of the execution affected runtime. The results (see Figs. 17–20 of the appendix) demonstrate the drastic performance gain of our "reversed" strategy as \(k\) increases: this strategy can efficiently find explanations for longer executions, while the competing "one-shot" strategy fails. This again is not surprising: when dealing with large numbers of steps, the transition function, the unrolling of the network, and the underlying enumeration scheme become more taxing on the underlying verifier. A full analysis of both benchmarks, and all explanation types, appears in Sec. VII of the appendix.
**Explanation Example.** We provide a visual example of an instance from our GridWorld experiment identified as a minimum explanation. The results (depicted in Fig. 14) include a minimum explanation for an execution of \(8\) steps. They show the following meaningful insight: fixing part of the agent's location sensors at the initial step, and a single sensor in the sixth step, is sufficient for forcing the agent to move along the original path, regardless of any other sensor reading.
**Comparison to Heuristic XAI Methods.** We also compared our results to popular, non-verification-based, heuristic XAI methods. Although these methods proved scalable, they often returned unsound explanations when compared to our approach. For additional details, see Section VIII of the appendix.
## VI Related Work
This work joins recent efforts on utilizing formal verification to explain the decisions of ML models [17, 28, 47, 59, 92, 93, 104]. Prior studies primarily focused on formally explaining _classification_ over various domains [17, 47, 48, 49, 104] or specific model types [38, 43, 49, 51, 71], while others explored alternative ways of defining explanations over classification tasks [9, 37, 52, 59, 74, 81, 101, 104].
Methods closer to our own have focused on formally explaining DNNs [17, 40, 47, 59, 104], where the problem is known to be complex [47, 65]. This work relies on the rapid development of DNN verification [1, 13, 30, 33, 55, 57, 105]. There has also been ample work on heuristic XAI [34, 85, 86, 89], including approaches for explaining the decisions of reinforcement-learning-based reactive systems (XRL) [35, 54, 69, 82]. However, these methods do not provide formal guarantees.
## VII Conclusion
Although DNNs are used extensively within reactive systems, they remain "black-box" models, uninterpretable to humans. We seek to mitigate this concern by producing formal explanations for executions of reactive systems. As far as we are aware, we are the first to provide a formal basis of explanations in this context, and to suggest methods for efficiently producing such explanations -- significantly outperforming the competing approaches. We also note that our approach is agnostic to the type of reactive system, and can be generalized beyond DRL systems, to any k-step reactive DNN system (including RNNs, LSTMs, GRUs, etc.). Moving forward, a main extension could be scaling our method, beyond the simple DRLs evaluated here, to larger systems of higher complexity. Another interesting extension could include evaluating the attribution of the hidden-layer features, rather than just the input features.
Fig. 14: _GridWorld_: a \(5\)-sized explanation for an \(8\)-step execution. The steps are numbered (in blue circles), the target is the yellow square, and the obstacles are depicted in red.
Fig. 12: _Minimal explanation_: number of solved instances as a function of (cumulative) time, for the methods that guarantee minimality.
Fig. 13: _Minimum explanation_: number of solved instances as a function of (cumulative) time, for the methods that guarantee minimality.
**Acknowledgments.** The work of Bassan, Amir, Refaeli, and Katz was partially supported by the Israel Science Foundation (grant number 683/18). The work of Amir was supported by a scholarship from the Clore Israel Foundation. The work of Corsi was partially supported by the "Dipartimenti di Eccellenza 2018-2022" project, funded by the Italian Ministry of Education, Universities, and Research (MIUR).
|
2302.14623 | Fast as CHITA: Neural Network Pruning with Combinatorial Optimization | The sheer size of modern neural networks makes model serving a serious
computational challenge. A popular class of compression techniques overcomes
this challenge by pruning or sparsifying the weights of pretrained networks.
While useful, these techniques often face serious tradeoffs between
computational requirements and compression quality. In this work, we propose a
novel optimization-based pruning framework that considers the combined effect
of pruning (and updating) multiple weights subject to a sparsity constraint.
Our approach, CHITA, extends the classical Optimal Brain Surgeon framework and
results in significant improvements in speed, memory, and performance over
existing optimization-based approaches for network pruning. CHITA's main
workhorse performs combinatorial optimization updates on a memory-friendly
representation of local quadratic approximation(s) of the loss function. On a
standard benchmark of pretrained models and datasets, CHITA leads to
significantly better sparsity-accuracy tradeoffs than competing methods. For
example, for MLPNet with only 2% of the weights retained, our approach improves
the accuracy by 63% relative to the state of the art. Furthermore, when used in
conjunction with fine-tuning SGD steps, our method achieves significant
accuracy gains over the state-of-the-art approaches. | Riade Benbaki, Wenyu Chen, Xiang Meng, Hussein Hazimeh, Natalia Ponomareva, Zhe Zhao, Rahul Mazumder | 2023-02-28T15:03:18Z | http://arxiv.org/abs/2302.14623v1 | # Fast as CHITA: Neural Network Pruning with Combinatorial Optimization
###### Abstract
The sheer size of modern neural networks makes model serving a serious computational challenge. A popular class of compression techniques overcomes this challenge by pruning or sparsifying the weights of pretrained networks. While useful, these techniques often face serious tradeoffs between computational requirements and compression quality. In this work, we propose a novel optimization-based pruning framework that considers the combined effect of pruning (and updating) multiple weights subject to a sparsity constraint. Our approach, CHITA, extends the classical Optimal Brain Surgeon framework and results in significant improvements in speed, memory, and performance over existing optimization-based approaches for network pruning. CHITA's main workhorse performs combinatorial optimization updates on a memory-friendly representation of local quadratic approximation(s) of the loss function. On a standard benchmark of pretrained models and datasets, CHITA leads to significantly better sparsity-accuracy tradeoffs than competing methods. For example, for MLPNet with only 2% of the weights retained, our approach improves the accuracy by 63% relative to the state of the art. Furthermore, when used in conjunction with fine-tuning SGD steps, our method achieves significant accuracy gains over the state-of-the-art approaches.
## 1 Introduction
Modern neural networks tend to use a large number of parameters (Devlin et al., 2018; He et al., 2016), which leads to high computational costs during inference. A widely used approach to mitigate inference costs is to prune or sparsify pre-trained networks by removing parameters (Blalock et al., 2020). The goal is to obtain a network with significantly fewer parameters and minimal loss in performance. This makes model storage and deployment cheaper and easier, especially in resource-constrained environments.
Generally speaking, there are two main approaches for neural net pruning: (i) magnitude-based and (ii) impact-based. Magnitude-based heuristic methods (e.g., Hanson and Pratt, 1988; Mozer and Smolensky, 1989; Gordon et al., 2020) use the absolute value of a weight to determine its importance and whether or not it should be pruned. Since magnitude alone may not be a perfect proxy for weight relevance, alternatives have been proposed. To this end, impact-based pruning methods (e.g., LeCun et al., 1989; Hassibi and Stork, 1992; Singh and Alistarh, 2020) remove weights based on how much their removal would impact the loss function, often using second-order information on the loss function. Both of these approaches, however, may fall short of considering the _joint effect_ of removing (and updating) multiple weights simultaneously. The recent method CBS (Combinatorial Brain Surgeon) (Yu et al., 2022) is an optimization-based approach that considers the joint effect of multiple weights. The authors show that CBS leads to a boost in the performance of the pruned models. However, CBS can be computationally expensive: it makes use of a local model based on the second-order (Hessian) information of the loss function, which can be prohibitively expensive in terms of runtime and/or memory (e.g., CBS takes hours to prune a network with 4.2 million parameters).
In this work, we propose CHITA (Combinatorial Hessian-free Iterative Thresholding Algorithm), an efficient optimization-based framework for network pruning at scale. Our approach follows earlier works that consider a local quadratic approximation of the loss function based on the second-order Hessian information. Different from previous works, we make use of a simple yet important observation with which we can avoid computing and storing the Hessian matrix to solve the optimization problem (hence the name "Hessian-free" in CHITA)--this allows us to address large networks efficiently. Specifically, we propose an equivalent reformulation of the problem as an \(\ell_{0}\)-constrained sparse linear regression problem with a data matrix \(A\in\mathbb{R}^{n\times p}\)
where \(p\) is the number of trainable parameters in the original model and \(n\lesssim 10^{3}\) (usually, \(p\gg n\)) is the number of the sub-samples used in approximating the Hessian. Compared to state-of-the-art approaches that consider a dense \(p\times p\) matrix approximation of the Hessian, our approach leads to a significant reduction in memory usage (up to \(10^{3}\) times for \(p=10^{6}\)) without any approximation.
Furthermore, we propose a novel approach to minimize our \(\ell_{0}\) regression reformulation, leveraging active set strategies, better stepsize selection, and various methods to accelerate convergence on the selected support. Our proposed approach leads to significant improvements over Iterative Hard Thresholding methods (IHT, Blumensath and Davies, 2009) commonly used in sparse learning literature. For instance, our framework can prune a network with 4.2M parameters to 80% sparsity (i.e., 0.84M nonzero parameters) in less than a minute and using less than 20GB of RAM 1.
Footnote 1: For reference, CBS and Woodfisher would run out of memory in similar circumstances.
Since the local quadratic model approximates the loss function only in a small neighborhood of the current solution (Singh and Alistarh, 2020), we also propose a multi-stage algorithm that updates the local quadratic model during pruning (but without retraining) and solves a more constrained problem in each stage, going from dense weights to sparse ones. Our experiments show that the resulting pruned models have a notably better accuracy compared to that of our single-stage algorithm and other pruning approaches. Furthermore, when used in the gradual pruning setting (Gale et al., 2019; Singh and Alistarh, 2020; Blalock et al., 2020) where re-training between pruning steps is performed, our pruning framework results in significant performance gains compared to state-of-the-art unstructured pruning methods.
**Contributions.** Our contributions can be summarized as follows:
* We propose CHITA, an optimization framework for network pruning based on local quadratic approximation(s) of the loss function. We propose an \(\ell_{0}\)-constrained sparse regression reformulation that avoids the pitfalls of storing a large dense Hessian, resulting in a significant reduction in memory usage (we work with an \(n\times p\) matrix instead of a \(p\times p\) one, with often \(n\ll p\)).
* A key workhorse of CHITA is a novel IHT-based algorithm to obtain good solutions to the sparse regression formulation. Exploiting problem structure, we propose methods to accelerate convergence and improve pruning performance, such as a new and efficient stepsize selection scheme and rapidly updating weights on the support. This leads to up to 1000x runtime improvement compared to existing network pruning algorithms.
* We show performance improvements across various models and datasets. In particular, CHITA results in a 98% sparse (i.e., 98% of weights in dense model are set to zero) MLPNet with 90% test accuracy (3% reduction in test accuracy compared to the dense model), which is a significant improvement over the previously reported best accuracy (55%) by CBS. As an application of our framework, we use it for gradual pruning and observe notable performance gains against state-of-the-art gradual pruning approaches.
## 2 Problem Setup and Related Work
In this section we present the general setup with connections to related work--this lays the foundation for our proposed methods discussed in Section 3.
### Problem Setup
Consider a neural network with an empirical loss function \(\mathcal{L}(w)=\frac{1}{N}\sum_{i=1}^{N}\ell_{i}(w),\) where \(w\in\mathbb{R}^{p}\) is the vector of trainable parameters, \(N\) is the number of data points (samples), and \(\ell_{i}(w)\) is a twice-differentiable function on the \(i\)-th sample. Given a pre-trained weight vector \(\bar{w}\in\mathbb{R}^{p}\), our goal is to set some parameters to zero and possibly update other weights while maintaining the original model's performance (e.g., accuracy) as much as possible. In mathematical terms, given a pre-trained weight \(\bar{w}\) and a target level of sparsity \(\tau\in(0,1)\), we aim to construct a new weight vector \(w\in\mathbb{R}^{p}\) that satisfies:
* The loss function at \(w\) is as close as possible to the loss before pruning: \(\mathcal{L}(w)\approx\mathcal{L}(\bar{w})\).
* The number of nonzero weights at \(w\) respects the sparsity budget2: \(\|w\|_{0}\leq(1-\tau)p\). Footnote 2: Here the \(\ell_{0}\) norm \(\|w\|_{0}\) denotes the number of nonzeros in the vector \(w\).
Similar to LeCun et al. (1989); Hassibi and Stork (1992); Singh and Alistarh (2020), we use a local quadratic approximation of \(\mathcal{L}\) around the pre-trained weight \(\bar{w}\):
\[\mathcal{L}(w)=\mathcal{L}(\bar{w})+\nabla\mathcal{L}(\bar{w})^{\top}(w-\bar{w})+\frac{1}{2}(w-\bar{w})^{\top}\nabla^{2}\mathcal{L}(\bar{w})(w-\bar{w})+O(\|w-\bar{w}\|^{3}). \tag{1}\]
With certain choices of gradient and Hessian approximations \(g\approx\nabla\mathcal{L}(\bar{w}),H\approx\nabla^{2}\mathcal{L}(\bar{w})\), and ignoring higher-order terms, the loss \(\mathcal{L}\) can be locally approximated by:
\[Q_{0}(w):=\mathcal{L}(\bar{w})+g^{\top}(w-\bar{w})+\frac{1}{2}(w-\bar{w})^{\top }H(w-\bar{w}). \tag{2}\]
Pruning the local approximation \(Q_{0}(w)\) of the network can be naturally formulated as an optimization problem to minimize \(Q_{0}(w)\) subject to a cardinality constraint, i.e.,
\[\min_{w}\ \ Q_{0}(w)\qquad\text{s.t.}\quad\|w\|_{0}\leq k. \tag{3}\]
For large networks, solving Problem (3) directly (e.g., using iterative optimization methods) is computationally challenging due to the sheer size of the \(p\times p\) matrix \(H\). In Section 3.1, we present an exact, Hessian-free reformulation of Problem (3), which is key to our scalable approach.
### Related Work
Impact-based pruning dates back to the work of LeCun et al. (1989), where the OBD (Optimal Brain Damage) framework was proposed. This approach, along with subsequent ones (Hassibi and Stork, 1992; Singh and Alistarh, 2020; Yu et al., 2022), makes use of the local approximation (2). It is usually assumed (but not in our work) that \(\bar{w}\) is a local optimum of the loss function, and therefore \(g=0\) and \(\mathcal{L}(w)\approx\mathcal{L}(\bar{w})+\frac{1}{2}(w-\bar{w})^{\top}H(w-\bar{w})\). Using this approximation, OBD (Optimal Brain Damage, LeCun et al. (1989)) searches for a single weight \(i\) to prune with minimal increase of the loss function, while also assuming a diagonal Hessian \(H\). If the \(i\)-th weight is pruned (\(w_{i}=0\), \(w_{j}=\bar{w}_{j}\) for all \(j\neq i\)), then the loss function increases by \(\delta\mathcal{L}_{i}=\frac{\bar{w}_{i}^{2}}{2[\nabla^{2}\mathcal{L}(\bar{w})^{-1}]_{ii}}\). This quantity serves as a saliency score for each weight and is used to prune weights in increasing order of their score. OBS (Optimal Brain Surgeon, Hassibi and Stork (1992)) extends this by no longer assuming a diagonal Hessian, and by using the optimality conditions to update the un-pruned weights. The authors also propose using the empirical Fisher information matrix as an efficient approximation to the Hessian matrix. Layerwise OBS (Dong et al., 2017) proposes to overcome the computational challenge of computing the full (inverse) Hessian needed in OBS by pruning each layer independently, while Singh and Alistarh (2020) use block-diagonal approximations of the Hessian matrix, which they approximate by the empirical Fisher information matrix on a small subset of the training data (\(n\ll N\)):
\[\nabla^{2}\mathcal{L}(\bar{w})\approx H=\frac{1}{n}\sum_{i=1}^{n}\nabla\ell_{i }(\bar{w})\nabla\ell_{i}(\bar{w})^{\top}. \tag{4}\]
While these approaches explore different ways to make the Hessian computationally tractable, they all rely on the OBD/OBS framework of pruning a single weight, and do not consider the interactions that can arise when pruning multiple weights. To this end, Yu et al. (2022) propose CBS (Combinatorial Brain Surgeon), an algorithm to approximately solve (3). While CBS shows impressive improvements in the accuracy of the pruned model over prior work, it operates with the full dense \(p\times p\) Hessian \(H\). This limits scalability both in compute time and memory utilization, as \(p\) is often in the order of millions and more.
**Choices of gradient approximation \(g\).** As mentioned earlier, most previous work assumes that the pre-trained weight vector \(\bar{w}\) is a local optimum of the loss function \(\mathcal{L}\), and thus takes the gradient \(g=0\). However, the gradient of the loss function of a pre-trained neural network may not be zero in practice due to early stopping (or approximate optimization) (Yao et al., 2007). Thus, the WoodTaylor approach (Singh and Alistarh, 2020) proposes to approximate the gradient by the stochastic gradient, using the same samples as for estimating the Hessian. Namely,
\[g=\frac{1}{n}\sum_{i=1}^{n}\nabla\ell_{i}(\bar{w}). \tag{5}\]
**One-shot and gradual pruning.** Generally speaking, _one-shot pruning_ methods (LeCun et al., 1989; Singh and Alistarh, 2020; Yu et al., 2022) can be followed by a few fine-tuning and re-training steps to recover some of the accuracy lost when pruning. Furthermore, recent work has shown that pruning and re-training in a gradual fashion (hence the name, _gradual pruning_) can lead to substantial accuracy gains (Han et al., 2015; Gale et al., 2019; Zhu and Gupta, 2018). The work of Singh and Alistarh (2020) further shows that gradual pruning, when used with well-performing one-shot pruning algorithms, can outperform state-of-the-art unstructured pruning methods. In this paper, we focus on the one-shot pruning problem and show how our pruning framework outperforms other one-shot pruning methods (see Section 4.1). We then show that, if applied in the gradual pruning setting, our pruning algorithm outperforms existing approaches (see Section 4.2), establishing a new state of the art on MobileNetV1 and ResNet50.
## 3 Our Proposed Framework: CHITA
In this section, we present our algorithmic framework CHITA (Combinatorial Hessian-free Iterative Thresholding Algorithm) for pruning a network to specific sparsity level. We formulate sparse network pruning by considering \(\ell_{0}\)-regression problem(s) and propose scalable algorithms. For example, we can address networks with size \(p\approx 10^{6},n\approx 10^{3},k\approx 10^{5}\) in less than one minute and using less than 20GB of memory. Our basic single-stage algorithm is an improved and scalable variant of IHT to solve (3). In Section 3.3, we propose a multi-stage framework that repeatedly refines the local quadratic approximation and optimizes it (under sparsity constraints) resulting in further performance boosts as shown in Section 4.
### An \(\ell_{0}\)-regression formulation
Our formulation is based on a critical observation that the Hessian approximation in (4) has a low-rank structure:
\[H=\frac{1}{n}\sum_{i=1}^{n}\nabla\ell_{i}(\bar{w})\nabla\ell_{i}(\bar{w})^{ \top}=\frac{1}{n}A^{\top}A\in\mathbb{R}^{p\times p}, \tag{6}\]
where \(A=[\nabla\ell_{1}(\bar{w}),\ldots,\nabla\ell_{n}(\bar{w})]^{\top}\in\mathbb{R}^{n\times p}\) has rank at most \(n\); typically \(n\lesssim 10^{3}\) and \(n\ll p\).
Using observation (6) and the gradient expression (5), we note that problem (3) can be equivalently written in the following Hessian-free form:
\[\min_{w}\ \frac{1}{2}\|b-Aw\|^{2}\qquad\text{s.t.}\ \ \ \ \ \|w\|_{0}\leq k, \tag{7}\]
where \(b:=A\bar{w}-e\in\mathbb{R}^{n}\) and \(e\) is a vector of ones. Furthermore, to improve solution quality (see discussion below), we include a ridge-like regularizer of the form \(\|w-\bar{w}\|^{2}\) to the objective in (7). This leads to the following problem:
\[\min_{w}\ Q(w):=\frac{1}{2}\|b-Aw\|^{2}+\frac{n\lambda}{2}\|w-\bar{w}\|^{2}\ \operatorname{s.t.}\ \|w\|_{0}\leq k, \tag{8}\]
where \(\lambda\geq 0\) is a parameter controlling the strength of the ridge regularization. Note that our algorithm actually applies to the general form (8). Importantly, the regression formulation (8) does not require us to compute or store the full Hessian matrix \(H\in\mathbb{R}^{p\times p}\). As discussed in Section 3.2, we only need to operate on the low-rank matrix \(A\) throughout our algorithm--this results in substantial gains in terms of both memory consumption and runtime.
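As a quick sanity check of this reformulation, the following sketch (toy random data, \(\lambda=0\) for simplicity; not our implementation) verifies numerically that the model \(Q_{0}\) in (2) and the regression objective in (7) agree up to an additive constant:

```python
# With A stacking the n per-sample gradients and b = A w_bar - e, the model
# Q_0 in (2) equals ||b - A w||^2 / (2n) plus a constant, so neither the
# p x p Hessian H nor the gradient g ever needs to be formed explicitly.
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 12                       # toy sizes; in practice n << p
A = rng.normal(size=(n, p))        # row i plays the role of grad l_i(w_bar)
w_bar = rng.normal(size=p)
b = A @ w_bar - np.ones(n)

g = A.T @ np.ones(n) / n           # gradient approximation (5)
H = A.T @ A / n                    # Fisher/Hessian approximation (4)

def Q0(w):                         # model (2), constant L(w_bar) dropped
    d = w - w_bar
    return g @ d + 0.5 * d @ H @ d

def Q0_factored(w):                # the regression form, scaled by 1/(2n)
    return 0.5 * np.sum((b - A @ w) ** 2) / n

w = rng.normal(size=p)
const = Q0_factored(w_bar)         # additive constant ||e||^2 / (2n) = 1/2
assert np.isclose(Q0(w), Q0_factored(w) - const)
```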
**Ridge term and choices of \(\lambda\).** We observe empirically that the success of our final pruned model depends heavily on the accuracy of the quadratic approximation of the loss function. Since this approximation is local, it is essential to ensure that the weights \(w\) during the pruning process are sufficiently close to the initial weights \(\bar{w}\). One way3 to achieve this is by including a squared \(\ell_{2}\) penalty, also known as the ridge, on the difference \(w-\bar{w}\). This regularization technique does not appear to be explicitly4 considered in previous works (Hassibi and Stork, 1992; Singh and Alistarh, 2020; Yu et al., 2022) on pruning using local quadratic approximations. The usefulness of the regularization parameter \(\lambda\) is further explored in Appendix B.2.1. We observe that a well-chosen ridge term can help improve the test accuracy on MLPNet by 3%.
Footnote 3: Another way is to introduce a multi-stage procedure, as explained in Section 3.3.
Footnote 4: It appears to be used implicitly though to obtain an invertible Fisher matrix which is achieved by adding a small diagonal \(\lambda_{0}I\) to the Fisher matrix.
**Relation to prior work on \(\ell_{0}\)-regression problems.** There is a substantial literature on algorithms for solving \(\ell_{0}\)-regularized linear regression problems. We provide a brief overview of related work, but it is worth noting that the context of network pruning and the problem scale we consider here make our work different from earlier works in \(\ell_{0}\)-sparse linear regression. Blumensath and Davies (2009) developed an iterative hard thresholding method, which involves projecting the weights onto the feasible set after each gradient step. Bertsimas and Van Parys (2020); Hazimeh et al. (2022) proposed algorithms to solve sparse regression problems to optimality via branch-and-bound. Beck and Eldar (2013) explore coordinate descent-type algorithms that update one/two coordinates at a time. Hazimeh and Mazumder (2020) propose efficient algorithms based on coordinate descent and local combinatorial optimization that apply to the unconstrained \(\ell_{0}\ell_{2}\)-penalized regression problem. We refer the reader to Hazimeh et al. (2022) for a more comprehensive discussion of related work.
In summary, earlier methods for \(\ell_{0}\)-regularized linear regression are quite effective at discovering high-quality solutions to Problem (7) for small to medium-sized problems and require \(k\) to be sufficiently small for efficiency. However, these methods are not well-suited for tackling large network pruning problems (e.g., \(p\sim 10^{6}\) and \(k\sim 10^{5}\)) due to slow convergence or expensive per-iteration cost. To address this issue, we propose novel approaches for scalability in Section 3.2. Additionally, we emphasize that (8) arises from a local approximation of the true loss \(\mathcal{L}\) around \(\bar{w}\). Therefore, achieving a high-quality solution for (8) alone does not guarantee a pruned network with high accuracy. To this end, we study a multi-stage extension (see Section 3.3) that requires solving several problems of the form (8).
### Our proposed algorithm for problem (8)
We present the core ideas of our proposed algorithm for Problem (8), and discuss additional details in Appendix A.
Our optimization framework relies on the IHT algorithm (Blumensath and Davies, 2009; Bertsimas et al., 2016) that optimizes (8) by simultaneously updating the support and the weights. By leveraging the low-rank structure, we can avoid the computational burden of computing the full Hessian matrix, thus reducing complexity.
The basic version of the IHT algorithm can be slow for problems with a million parameters. To improve the computational performance of our algorithm, we propose a new line search scheme. Additionally, we use an active set strategy and schemes to update the nonzero weights upon support stabilization. Taken together, these yield notable improvements in computational efficiency and solution quality over traditional IHT, making it a viable option for network pruning problems at scale.
#### 3.2.1 Structure-aware IHT update
The IHT algorithm operates by taking a gradient step of size \(\tau^{s}\) from the current iterate and then projecting it onto the set of points with a fixed number of non-zero coordinates through hard thresholding. Specifically, for any vector \(x\), let \(\mathcal{I}_{k}(x)\) denote the indices of the \(k\) components of \(x\) that have the largest absolute value. The hard thresholding operator
\(P_{k}(x)\) is defined as \(y_{i}=x_{i}\) if \(i\in\mathcal{I}_{k}(x)\), and \(y_{i}=0\) if \(i\notin\mathcal{I}_{k}(x)\); where \(y_{i}\) is the \(i\)-th coordinate of \(P_{k}(x)\). IHT applied to problem (8) leads to the following update:
\[w^{t+1}=\mathsf{HT}(w^{t},k,\tau^{s}):=P_{k}\left(w^{t}-\tau^{s}\nabla Q(w^{t})\right)=P_{k}\left(w^{t}-\tau^{s}\big{(}A^{\top}(Aw^{t}-b)+n\lambda(w^{t}-\bar{w})\big{)}\right), \tag{9}\]
where \(\tau^{s}>0\) is a suitable stepsize. The computation of \(\mathsf{HT}(w^{t},k,\tau^{s})\) involves only matrix-vector multiplications of \(A\) (or \(A^{\top}\)) with a vector, which has a total computational cost of \(O(np)\). This is a significant reduction compared to the \(O(p^{2})\) cost incurred when using the full Hessian matrix, as in Singh and Alistarh (2020) and Yu et al. (2022).
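A minimal NumPy sketch of this update follows (toy scale, fixed stepsize, fixed iteration count); the full CHITA solver additionally uses the exact line search of Section 3.2.2, the active set strategy described next, and the refinements of Section 3.2.3.

```python
# Sketch of the IHT update (9); only products with A and A^T appear,
# so one iteration costs O(np) rather than the O(p^2) of an explicit Hessian.
import numpy as np

def hard_threshold(x, k):
    """P_k: keep the k largest-magnitude entries of x, zero out the rest."""
    y = np.zeros_like(x)
    keep = np.argpartition(np.abs(x), -k)[-k:]
    y[keep] = x[keep]
    return y

def iht(A, b, w_bar, k, lam, step, iters=500):
    n = A.shape[0]
    w = hard_threshold(w_bar, k)
    for _ in range(iters):
        grad = A.T @ (A @ w - b) + n * lam * (w - w_bar)  # gradient of Q in (8)
        w = hard_threshold(w - step * grad, k)
    return w
```

A conservative choice for `step` is \(1/L\) with \(L=\|A\|_{2}^{2}+n\lambda\), the Lipschitz constant of \(\nabla Q\); the line search scheme discussed below is typically much faster.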
**Active set strategy.** In an effort to further facilitate the efficiency of the IHT method, we propose using an active set strategy, which has been proven successful in various contexts such as (Nocedal and Wright, 1999; Friedman et al., 2010; Hazimeh and Mazumder, 2020). This strategy works by restricting the IHT updates to an active set (a relatively small subset of variables) and occasionally augmenting the active set with variables that violate certain optimality conditions. By implementing this strategy, the iteration complexity of the algorithm can be reduced to \(O(nk)\) in practice, resulting in an improvement, when \(k\) is smaller than \(p\). The algorithm details can be found in Appendix A.3.
#### 3.2.2 Determining a good stepsize
Choosing an appropriate stepsize \(\tau^{s}\) is crucial for fast convergence of the IHT algorithm. To ensure convergence to a stationary solution, a common choice is to use a constant stepsize of \(\tau^{s}=1/L\)(Bertsimas et al., 2016; Hazimeh and Mazumder, 2020), where \(L\) is the Lipschitz constant of the gradient of the objective function. This approach, while reliable, can lead to conservative updates and slow convergence--refer to Appendix A.1 for details. An alternative method for determining the stepsize is to use a backtracking line search, as proposed in Beck and Teboulle (2009). The method involves starting with a relatively large estimate of the stepsize and iteratively shrinking the step size until a sufficient decrease of the objective function is observed. However, this approach requires multiple evaluations of the objective function, which can be computationally expensive.
**Our novel scheme.** We propose a novel line search method for determining the stepsize to improve the convergence speed of IHT. Specifically, we develop a method that (approximately) finds the stepsize that leads to the maximum decrease in the objective, i.e., we attempt to solve
\[\min_{\tau^{s}\geq 0}\ \ g(\tau^{s}):=Q\left(P_{k}\left(w^{t}-\tau^{s}\nabla Q (w^{t})\right)\right). \tag{10}\]
For general objective functions, solving the line search problem (as in (10)) is challenging. However, in our problem, we observe and exploit an important structure: \(g(\tau^{s})\) is a piecewise quadratic function with respect to \(\tau^{s}\). Thus, the optimal stepsize on each piece can be computed exactly, avoiding redundant computations (associated with finding a good stepsize) and resulting in more aggressive updates. In Appendix A.1, we present an algorithm that finds a good stepsize by exploiting this structure. Compared to standard line search, our algorithm is more efficient, as it requires fewer evaluations of the objective function and yields a stepsize that results in a steeper descent.
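The following simplified sketch illustrates the idea for a single quadratic piece; the full scheme additionally walks across the breakpoints between neighboring pieces, and `tau0` here is only a hypothetical parameter selecting which piece is examined.

```python
# Once a thresholding pattern S is fixed, g(tau) in (10) is an exact quadratic
# in tau, so the best stepsize on that piece has a closed form.
import numpy as np

def best_step_on_piece(A, b, w_bar, w, k, lam, tau0):
    n = A.shape[0]
    grad = A.T @ (A @ w - b) + n * lam * (w - w_bar)
    S = np.argpartition(np.abs(w - tau0 * grad), -k)[-k:]  # pattern at tau0
    v = np.zeros_like(w)
    v[S] = w[S]                                 # P_k-restricted current point
    d = np.zeros_like(w)
    d[S] = grad[S]                              # restricted descent direction
    grad_v = A.T @ (A @ v - b) + n * lam * (v - w_bar)
    curv = np.sum((A @ d) ** 2) + n * lam * np.sum(d ** 2)
    return float(d @ grad_v) / curv             # argmin of g on this piece
```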
#### 3.2.3 Additional techniques for scalability
While the IHT algorithm can be quite effective in identifying the appropriate support, its progress slows down considerably once the support is identified (Blumensath, 2012), resulting in slow convergence. We propose two techniques that refine the non-zero coefficients by solving subproblems, thereby speeding up the overall optimization algorithm: (i) coordinate descent (CD, Bertsekas, 1997; Nesterov, 2012), which updates each nonzero coordinate (with the others fixed) according to a cyclic rule; and (ii) a back solve based on the Woodbury formula (Woodbury, 1950) that calculates the optimal solution exactly on a restricted set of size \(k\). We found both (i) and (ii) to be important for improving the accuracy of the pruned network. Further details on strategies (i) and (ii) are in Appendix A.2 and A.4.
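For instance, the back solve in (ii) can be sketched as follows, assuming \(\lambda>0\); the push-through identity \((A_{S}^{\top}A_{S}+n\lambda I)^{-1}A_{S}^{\top}=A_{S}^{\top}(A_{S}A_{S}^{\top}+n\lambda I_{n})^{-1}\) reduces the restricted system from size \(k\times k\) to \(n\times n\).

```python
# Exact solution of the ridge-regularized problem (8) restricted to a fixed
# support S, solving only an n x n linear system (n << |S| = k).
import numpy as np

def backsolve_on_support(A, b, w_bar, S, lam):
    n = A.shape[0]
    A_S = A[:, S]                              # columns on the support
    r = b - A_S @ w_bar[S]
    z = np.linalg.solve(A_S @ A_S.T + n * lam * np.eye(n), r)
    w = np.zeros_like(w_bar)
    w[S] = w_bar[S] + A_S.T @ z
    return w
```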
Our single-stage algorithm CHITA glues together the different pieces discussed above into a coherent algorithm. It takes as input a low-rank matrix \(A\), the initial weight \(\bar{w}\) and the \(\ell_{0}\)-constraint \(k\); and returns a pruned weight \(w\) that serves as a good solution to (8).
### A multi-stage procedure
Our single-stage methods (cf. Section 3.2) lead to high-quality solutions for problem (8). Compared to existing methods, for a given sparsity level, our algorithms deliver a better objective value for problem (8)--e.g., see Figure 2(b). However, we note that the final performance (e.g., accuracy) of the pruned network depends heavily on the quality of the local quadratic approximation. This is particularly true when targeting high levels of sparsity (i.e., zeroing out many weights), as the objective function used in (8) may not accurately approximate the true loss function \(\mathcal{L}\). To this end, we propose a multi-stage procedure named CHITA++ that improves the approximation quality by iteratively updating and solving local quadratic models. We use a scheduler to gradually increase the sparsity constraint and take a small step towards higher sparsity in each stage to ensure the validity of the local quadratic approximation. Our multi-stage procedure leverages the efficiency of the single-stage approaches and can lead to pruned networks with improved accuracy by utilizing more accurate approximations of the true loss function. For example, our experiments show that
the multi-stage procedure can prune ResNet20 to 90% sparsity in just a few minutes and increases test accuracy from 15% to 79% compared to the single-stage method. Algorithm 1 presents more details on CHITA++.
Our proposed multi-stage method differs significantly from the gradual pruning approach described in Han et al. (2015). While both methods involve pruning steps, the gradual pruning approach also includes fine-tuning steps in which SGD is applied to further optimize the parameters for better results. However, these fine-tuning steps can be computationally expensive, usually taking days to run. In contrast, our proposed multi-stage method is a one-shot pruning method and only requires constructing and solving Problem (8) several times, resulting in an efficient and accurate solution. This solution can then be further fine-tuned using SGD or plugged into the gradual pruning framework, something we explore in Section 4.2.
```
Require: Pre-trained weights \(\bar{w}\), a target sparsity level \(\tau\), number of stages \(f\).
1: Initialization: Construct an increasing sequence of sparsity parameters \(\tau_{1},\tau_{2},\ldots,\tau_{f}=\tau\); set \(w^{0}=\bar{w}\).
2: for \(t=1,2,\ldots,f\) do
3:   At the current solution \(w^{t-1}\), calculate the gradients based on a batch of \(n\) data points and construct the matrix \(A\) given in (4).
4:   Obtain a solution \(w^{t}\) to problem (8) by invoking CHITA\((A,\bar{w},k)\) with \(\bar{w}=w^{t-1}\) and number of nonzeros \(k=\lfloor(1-\tau_{t})p\rfloor\).
5: end for
```
**Algorithm 1** CHITA++: a multi-stage pruning procedure
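Schematically, the multi-stage loop can be sketched as follows; `per_sample_grads` and `chita` are assumed stand-ins for the gradient evaluation that builds \(A\) in (4) and for the single-stage solver of problem (8), respectively.

```python
# Schematic sketch of Algorithm 1 (CHITA++); not a library implementation.
def chita_pp(w_bar, taus, per_sample_grads, chita):
    """taus: increasing sparsity levels tau_1 <= ... <= tau_f = tau."""
    p = w_bar.shape[0]
    w = w_bar.copy()                  # w_bar assumed to be a NumPy array
    for tau_t in taus:
        A = per_sample_grads(w)       # n x p matrix of (4), built at w^{t-1}
        k = int((1.0 - tau_t) * p)    # nonzeros allowed at this stage
        w = chita(A, w, k)            # re-solve (8) around the current point
    return w
```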
## 4 Experimental Results
We compare our proposed framework with existing approaches, for both one-shot and gradual pruning.
### One-shot pruning
We start by comparing the performance of our methods, CHITA (single-stage) and CHITA++ (multi-stage), with several existing state-of-the-art one-shot pruning techniques on various pre-trained networks. We use the same experimental setup as in recent work (Yu et al., 2022; Singh & Alistarh, 2020). The existing pruning methods we consider include MP (Magnitude Pruning, Mozer & Smolensky, 1989), WF (WoodFisher, Singh & Alistarh, 2020), CBS (Combinatorial Brain Surgeon, Yu et al., 2022) and M-FAC (Matrix-Free Approximate Curvature, Frantar et al., 2021). The pre-trained networks we use are MLPNet (30K parameters) trained on MNIST (LeCun et al., 1998), ResNet20 (He et al., 2016; 200K parameters) trained on CIFAR10 (Krizhevsky et al., 2009), and MobileNetV1 (4.2M parameters) and ResNet50 (22M parameters; He et al., 2016) trained on ImageNet (Deng et al., 2009). For further details on the choice of the Hessian approximation, we refer the reader to Appendix A.5. Detailed information on the experimental setup and reproducibility can be found in Appendix B.1.1.
#### 4.1.1 Runtime comparison
Recent works that use the empirical Fisher information matrix for pruning purposes (Singh & Alistarh, 2020; Yu et al., 2022) show that using more samples for the Hessian and gradient approximation results in better accuracy. Our experiments also support this conclusion. However, most prior approaches become computationally prohibitive as the sample size \(n\) increases. As an example, the WoodFisher and CBS algorithms require hours to prune a MobileNet when \(n\) is set to 1000, and their processing time increases at least linearly with \(n\). In contrast, our method has been designed with efficiency in mind, and we compare it to M-FAC, a well-optimized version of WoodFisher that is at least 1000 times faster than the original. The results, as depicted in Figure 1, demonstrate a marked improvement in speed for our algorithm, with up to 20 times faster performance.
#### 4.1.2 Accuracy of the pruned models
**Comparison against state-of-the-art.** Table 1 compares the test accuracy of MLPNet, ResNet20 and MobileNetV1 pruned to different sparsity levels. Our single-stage method achieves comparable results to other state-of-the-art approaches with much less time consumption. The multi-stage method (CHITA++) outperforms other methods by a large margin, especially with a high sparsity rate.
Figure 1: Runtime comparison between our single-stage approaches and M-FAC (the fastest among the competing methods) while pruning MLPNet and ResNet20 to a 90% sparsity level (90% of the entries are zero). Note that WoodFisher and CBS are at least 1000 times slower than M-FAC. The error bar represents the standard error over four runs. CHITA here uses IHT to find a support and performs a back-solve on the found support.
**One-shot pruning on ResNet50.** We further compare our approach to competing methods on ResNet50, an even larger network where some pruning algorithms, like CBS, do not scale. In Figure 2, we evaluate the performance of our algorithm in comparison to M-FAC and Magnitude Pruning (MP) using two metrics: test accuracy and the final objective value of the \(\ell_{0}\)-constrained problem (8). As both M-FAC and our algorithm aim to minimize this objective, it can be used to judge the efficacy of our model in solving problem (8). As seen in the figure, our approach achieves a lower objective value, and in this case, it also results in a better test accuracy for the final pruned network.
**Sparsity schedule in multi-stage procedure.** We study the effect of the sparsity schedule (i.e., the choice of \(\tau_{1}\leq\tau_{2}\leq\cdots\leq\tau_{f}=\tau\) in Algorithm 1) on the performance of CHITA++. We compare the test accuracy of three different schedules: (i) an exponential mesh, (ii) a linear mesh, and (iii) a constant mesh. For these schedules, \(f\) is set to \(15\). For the first two meshes, \(\tau_{1}\) and \(\tau_{15}\) are fixed at \(0.2\) and \(0.9\), respectively. As shown in Figure 3, the exponential mesh computes \(\tau_{2},\ldots,\tau_{14}\) by following an exponential curve, while the linear mesh adopts linear interpolation (with \(\tau_{1}\) and \(\tau_{15}\) as endpoints) to determine \(\tau_{2},\ldots,\tau_{14}\), and the constant mesh has \(\tau_{1}=\tau_{2}=\cdots=\tau_{15}\).
Figure 4 plots the test accuracy of the three schedules over the number of stages. We observe that the linear mesh outperforms the exponential mesh in the first few iterations, but its performance drops dramatically in the last two iterations. The reason is that in high sparsity levels, even a slight increase in the sparsity rate leads to a large drop in accuracy. Taking small "stepsizes" in high sparsity levels allows the exponential mesh to fine-tune the weights in the last several stages and achieve good performance.
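For concreteness, the following sketch generates the three schedules; the exact exponential interpolation used in our experiments may differ -- here, as one plausible choice, the retained density \(1-\tau\) decays geometrically, so the sparsity increments shrink as \(\tau\) approaches \(\tau_{f}\).

```python
# Sketch of exponential, linear, and constant sparsity schedules.
import numpy as np

def schedules(tau_start=0.2, tau_end=0.9, f=15):
    linear = np.linspace(tau_start, tau_end, f)
    constant = np.full(f, tau_end)
    # Geometric decay of the retained density 1 - tau (assumed form):
    ratio = ((1 - tau_end) / (1 - tau_start)) ** (1 / (f - 1))
    exponential = 1 - (1 - tau_start) * ratio ** np.arange(f)
    return exponential, linear, constant
```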
| Network | Sparsity | MP | WF | CBS | CHITA | CHITA++ |
| --- | --- | --- | --- | --- | --- | --- |
| MLPNet on MNIST (93.97%) | 0.5 | 93.93 | 94.02 | 93.96 | 93.97 (0.03) | **95.97** (0.05) |
| | 0.6 | 93.78 | 93.82 | 93.96 | 93.94 (0.02) | **95.93** (0.04) |
| | 0.7 | 93.62 | 93.77 | 93.98 | 93.80 (0.01) | **95.89** (0.06) |
| | 0.8 | 92.98 | 93.57 | 93.90 | 93.59 (0.03) | **95.89** (0.03) |
| | 0.9 | 90.31 | 96.93 | 91.94 | 92.46 (0.04) | **95.58** (0.03) |
| | 0.95 | 83.64 | 85.54 | 89.82 | 88.09 (0.24) | **94.70** (0.06) |
| | 0.98 | 92.25 | 38.26 | 55.45 | 46.25 (0.85) | **90.73** (0.11) |
| ResNet20 on CIFAR10 (91.36%) | 0.3 | 90.77 | **91.37** | 91.35 | **91.37** (0.04) | 91.25 (0.08) |
| | 0.4 | 99.89 | 91.95 | **91.21** | 91.19 (0.05) | **91.20** (0.05) |
| | 0.5 | 88.44 | 90.23 | 90.58 | 90.60 (0.07) | **91.04** (0.09) |
| | 0.6 | 85.24 | 97.86 | 88.89 | 92.02 (0.19) | **90.78** (0.12) |
| | 0.7 | 78.79 | 81.05 | 81.84 | 84.20 (43.08) | **90.88** (0.10) |
| | 0.8 | 54.01 | 62.63 | 51.28 | 57.90 (1.04) | **88.72** (0.17) |
| | 0.9 | 119.19 | 14.93 | 13.68 | 15.60 (14.79) | **79.32** (±1.19) |
| MobileNetV1 on ImageNet (71.95%) | 0.3 | 71.60 | **71.88** | **71.88** | **71.87** (0.01) | 71.60 (0.02) |
| | 0.4 | 69.16 | 71.15 | 71.45 | 71.50 (0.02) | **71.61** (0.02) |
| | 0.5 | 62.61 | 68.91 | 70.21 | 70.42 (0.02) | **70.99** (±0.04) |
| | 0.6 | 41.94 | 60.90 | 66.37 | 67.30 (0.03) | **69.54** (0.01) |
| | 0.7 | 6.78 | 29.26 | 55.11 | 59.40 (0.09) | **66.42** (0.03) |
| | 0.8 | 0.11 | 0.24 | 16.38 | 29.78 (0.18) | **47.45** (±0.25) |

Table 1: The pruning performance (model accuracy) of various methods on MLPNet, ResNet20, and MobileNetV1. For the performance of MP, WF, and CBS, we adopt the results reported in Yu et al. (2022). We take five runs for our single-stage (CHITA) and multi-stage (CHITA++) approaches and report the mean and standard error (in brackets). The best accuracy values (when significant) are highlighted in bold. Here, sparsity denotes the fraction of zero weights in convolutional and dense layers.
Figure 3: Three different sparsity schedules: exponential, linear, and constant schedules.
**Additional ablation studies.** We perform additional ablation studies to further evaluate the performance of our method. These studies mainly focus on the effect of the ridge term (Appendix B.2.1) and the effect of the first-order term (Appendix B.2.2).
### Performance on gradual pruning
To compare our one-shot pruning algorithms against other unstructured pruning methods, we plug CHITA into a gradual pruning procedure (Gale et al., 2019), following the approach in Singh and Alistarh (2020). Specifically, we alternate between pruning steps, in which a sparse weight vector is computed, and fine-tuning steps on the current support via Stochastic Gradient Descent (SGD). To obtain consistent results, we start from the same pre-trained weights used in Kusupati et al. (2020), and re-train for 100 epochs using SGD during the fine-tuning steps, similarly to Kusupati et al. (2020) and Singh and Alistarh (2020). We compare our approach against Incremental (Zhu and Gupta, 2018), STR (Kusupati et al., 2020), Global Magnitude (Singh and Alistarh, 2020), WoodFisher (Singh and Alistarh, 2020), GMP (Gale et al., 2019), Variational Dropout (Molchanov et al., 2017), RIGL (Evci et al., 2020), SNFS (Dettmers and Zettlemoyer, 2020) and DNW (Wortsman et al., 2019). Further details on the training procedure can be found in Appendix B.1.2.
**MobileNetV1.** We start by pruning MobileNetV1 (4.2M parameters). As Table 2 demonstrates, CHITA results in significantly more accurate pruned models than previous state-of-the-art approaches at sparsities 75% and 89%, with only 6% accuracy loss compared to 11.29%, the previous best result when pruning to a sparsity of 89%.
**ResNet50.** Similarly to MobileNetV1, CHITA improves test accuracy at sparsity levels 90%, 95%, and 98% compared to all other baselines, as Table 3 shows. This improvement becomes more noticeable as we increase the target sparsity, with CHITA producing a pruned model with 69.80% accuracy compared to 65.66%, the second-best performance, and previous state-of-the-art.
## 5 Conclusion
In this work, we have presented an efficient network pruning framework, CHITA, which is based on a novel Hessian-
| Method | Sparsity (%) | Pruned Accuracy | Relative Drop (%): (Pruned − Dense)/Dense | Remaining # of params |
| --- | --- | --- | --- | --- |
| Incremental | 74.11 | 67.70 | -4.11 | 1.09 M |
| STR | 75.28 | 68.35 | -5.07 | 1.04 M |
| Global Magnitude | 75.28 | 69.90 | -2.92 | 1.04 M |
| WoodFisher | 75.28 | 70.09 | -2.65 | 1.04 M |
| **CHITA** | 75.28 | **71.11** | **-1.23** | 1.04 M |
| Incremental | 89.03 | 61.80 | -12.46 | 0.46 M |
| STR | 89.01 | 62.10 | -13.75 | 0.46 M |
| Global Magnitude | 89.00 | 63.02 | -12.47 | 0.46 M |
| WoodFisher | 89.00 | 63.87 | -11.29 | 0.46 M |
| **CHITA** | 89.00 | **67.68** | **-6.00** | 0.46 M |

Table 2: Results of gradually pruning MobileNetV1 in the 75% and 89% sparsity regimes, comparing CHITA to other baselines (dense accuracy: 72.00%). We also include the relative drop in accuracy to account for different methods starting from different dense weights. CHITA numbers are averaged across two runs. Numbers for other baselines are taken from Singh and Alistarh (2020).
| Method | Sparsity (%) | Pruned Accuracy | Relative Drop (%): (Pruned − Dense)/Dense | Remaining # of params |
| --- | --- | --- | --- | --- |
| GMP + LS | 90.00 | 73.91 | -3.62 | 2.56 M |
| Variational Dropout | 90.27 | 73.84 | -3.72 | 2.49 M |
| RIGL + ERK | 90.00 | 73.00 | -4.94 | 2.56 M |
| SNFS + LS | 90.00 | 72.90 | -5.32 | 2.56 M |
| STR | 90.23 | 74.31 | -3.51 | 2.49 M |
| Global Magnitude | 90.00 | 75.20 | -2.42 | 2.56 M |
| DNW | 90.00 | 74.00 | -4.52 | 2.56 M |
| WoodFisher | 90.00 | 75.21 | -2.34 | 2.56 M |
| **CHITA** | 90.00 | **75.29** | **-2.23** | 2.56 M |
| GMP | 95.00 | 70.59 | -7.95 | 1.28 M |
| Variational Dropout | 94.92 | 69.41 | -9.49 | 1.30 M |
| Variational Dropout | 94.94 | 71.81 | -6.36 | 1.30 M |
| RIGL + ERK | 95.00 | 70.00 | -8.85 | 1.28 M |
| DNW | 95.00 | 68.30 | -11.31 | 1.28 M |
| STR | 94.80 | 70.97 | -7.84 | 1.33 M |
| STR | 95.03 | 70.40 | -8.58 | 1.27 M |
| Global Magnitude | 95.00 | 71.79 | -6.78 | 1.28 M |
| WoodFisher | 95.00 | 72.12 | -6.35 | 1.28 M |
| **CHITA** | 95.00 | **73.46** | **-4.61** | 1.28 M |
| GMP + LS | 98.00 | 57.90 | -24.50 | 0.51 M |
| Variational Dropout | 98.57 | 64.52 | -15.87 | 0.36 M |
| DNW | 98.00 | 58.20 | -24.42 | 0.51 M |
| STR | 98.05 | 61.46 | -20.19 | 0.50 M |
| STR | 97.78 | 62.84 | -18.40 | 0.57 M |
| Global Magnitude | 98.00 | 64.28 | -16.53 | 0.51 M |
| WoodFisher | 98.00 | 65.55 | -14.88 | 0.51 M |
| **CHITA** | 98.00 | **69.80** | **-9.36** | 0.51 M |

Table 3: Results of gradually pruning a ResNet50 network in the 90%, 95%, and 98% sparsity regimes, comparing CHITA to other state-of-the-art methods (dense accuracy: 77.01%). We also include the relative drop in accuracy to account for different methods starting from different dense weights. CHITA numbers are averaged across two runs. Numbers for other baselines are taken from Singh and Alistarh (2020).
Figure 4: Comparison of test accuracy using CHITA++ with 15 stages, pruning a ResNet20 model to a 90% sparsity rate, under different sparsity schedules. The label next to each point indicates the sparsity level at that stage.
free \(\ell_{0}\)-constrained regression formulation and combinatorial optimization techniques. Our single-stage methods demonstrate comparable results to existing methods while achieving a significant improvement in runtime and reducing memory usage. Furthermore, by building upon the single-stage methods, our multi-stage approach is capable of achieving even further improvements in model accuracy. Additionally, we have demonstrated that incorporating our pruning methods into existing gradual pruning frameworks results in sparse networks with state-of-the-art accuracy.
## Acknowledgements
This research is supported in part by grants from the Office of Naval Research (N000142112841 and N000142212665), and Google. We thank Shibal Ibrahim for helpful discussions. We also thank Thiago Serra and Yu Xin for sharing with us code from their CBS paper (Yu et al., 2022).
|
2309.04742 | Affine Invariant Ensemble Transform Methods to Improve Predictive
Uncertainty in Neural Networks | We consider the problem of performing Bayesian inference for logistic
regression using appropriate extensions of the ensemble Kalman filter. Two
interacting particle systems are proposed that sample from an approximate
posterior and prove quantitative convergence rates of these interacting
particle systems to their mean-field limit as the number of particles tends to
infinity. Furthermore, we apply these techniques and examine their
effectiveness as methods of Bayesian approximation for quantifying predictive
uncertainty in neural networks. | Diksha Bhandari, Jakiw Pidstrigach, Sebastian Reich | 2023-09-09T10:01:51Z | http://arxiv.org/abs/2309.04742v2 | # Affine Invariant Ensemble Transform Methods to Improve Predictive Uncertainty in ReLU Networks
Diksha Bhandari, Jakiw Pidstrigach, Sebastian Reich
**Abstract** We consider the problem of performing Bayesian inference for logistic regression using appropriate extensions of the ensemble Kalman filter. Two interacting particle systems that sample from an approximate posterior are proposed, and quantitative convergence rates of these systems to their mean-field limit are established as the number of particles tends to infinity. Furthermore, we apply these techniques and examine their effectiveness as methods of Bayesian approximation for quantifying predictive uncertainty in ReLU networks.
## 1 Introduction
The task in inverse problems is the inference of an unknown parameter \(\theta\in\mathbb{R}^{D}\) from noisy observations \(d\in\mathbb{R}^{N}\), which are generated through
\[d=G(\theta)+\eta, \tag{1}\]
where \(G\) denotes a forward map from model parameters to observable output data \(d\) and \(\eta\) denotes observational noise, which is commonly assumed to be Gaussian, that is, \(\eta\sim\mathcal{N}(0,P_{\eta})\). A Bayesian inverse problem can be formulated as producing samples from a random variable \(\theta\) conditioned on \(d\). Given the prior \(\pi_{\text{prior}}(\theta)\) on \(\theta\), the inverse problem is formulated as finding the posterior \(\pi_{\text{post}}(\theta)\) on \(\theta\) given \(d\). By Bayes theorem, the posterior distribution can be written in terms of the prior density \(\pi_{\text{prior}}\) and the negative log-likelihood or loss function \(\Psi:\mathbb{R}^{D}\to\mathbb{R}\) as
\[\mathrm{d}\pi_{\text{post}}(\theta)\propto\exp(-\Psi(\theta))\mathrm{d}\pi_{ \text{prior}}(\theta). \tag{2}\]
The problem of sampling from a target distribution is fundamental to Bayesian statistics, machine learning, and data assimilation. Considerable research has been done on sampling methods for Bayesian inverse problems, mainly in the case of \(l_{2}\)-loss functions [8, 21, 23, 24]. In this work, we will focus on \(\Psi\) being the cross-entropy loss instead, which is used in logistic regression problems.
We will introduce two methods, closely related to the algorithms introduced in [34], to approximate the posterior distribution for Bayesian logistic regression. We will prove that the methods are well-defined and provide quantitative bounds on their convergence in the large-ensemble (mean-field) limit.
We will then apply these methods to Bayesian neural networks. This gives us the possibility to quantify uncertainty in the model weights and the network outputs. Our numerical experiments show that our methods outperform the state-of-the-art methods for this problem.
### Classification and logistic regression
In this work, we focus on classification problems, that is, the problems arising from classifying objects into classes \(C_{1},\ldots,C_{k}\). For notational convenience, we will focus on the case of binary classification, i.e., we only have classes \(C_{1}\) and \(C_{2}\). However, all methods can be generalized to \(k\) classes, see [34, Section 7.2].
Consider the data set
\[\mathcal{D}=\{(\phi^{n},d^{n})\}_{n=1}^{N},\]
where \(d^{n}\in\{0,1\}\) are targets for the binary classification case with input features \(\phi^{n}\in\mathbb{R}^{D}\) for \(n=1,...,N\). The \(\phi^{n}\)'s can either be the representation of the data in any numerical form, or stem from a feature map. In Section 5, we will train a neural network for the use as feature map. We also introduce the shorthand notation
\[\Phi=(\phi^{1},...,\phi^{N})\in\mathbb{R}^{D\times N} \tag{3}\]
and define a parametric family of models as follows. Given a parameter \(\theta\), the probability of an example belonging to class \(C_{1}\) will be given by
\[\mathbb{P}_{\theta}[\phi\in C_{1}]=\sigma(\langle\theta,\phi\rangle),\]
where \(\sigma\) is the sigmoid function
\[\sigma(z)=\frac{1}{1+\exp{(-z)}}. \tag{4}\]
The negative log-likelihood of the given dataset \(\mathcal{D}\) under our parametric model is given by the cross-entropy loss function
\[\Psi(\theta)=-\sum_{n=1}^{N}\{d^{n}\log(y_{n}(\theta))+(1-d^{n})\log(1-y_{n}( \theta))\}, \tag{5}\]
where
\[y_{n}(\theta)=\sigma(\langle\theta,\phi^{n}\rangle)=\mathbb{P}_{\theta}[\phi^ {n}\in C_{1}]. \tag{6}\]
Collecting the values (6) for \(n=1,\ldots,N\), we introduce the vector \(y(\theta)\in\mathbb{R}^{N}\) as
\[y(\theta)=(y_{1}(\theta),\ldots,y_{N}(\theta))^{\mathrm{T}}, \tag{7}\]
and the vector of target data labels \(d\in\mathbb{R}^{N}\) as
\[d=(d^{1},\ldots,d^{N})^{\mathrm{T}}. \tag{8}\]
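In this notation, the loss (5) and its gradient \(\nabla\Psi(\theta)=\Phi\,(y(\theta)-d)\) -- a standard identity for logistic regression -- can be evaluated as in the following sketch (toy data assumed):

```python
# Cross-entropy loss (5) and its gradient, with Phi of shape (D, N) as in (3).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(theta, Phi, d):
    y = sigmoid(Phi.T @ theta)          # y_n = sigma(<theta, phi^n>), eq. (6)
    eps = 1e-12                         # guard the logarithms numerically
    psi = -np.sum(d * np.log(y + eps) + (1 - d) * np.log(1 - y + eps))
    grad = Phi @ (y - d)                # gradient of (5)
    return psi, grad

rng = np.random.default_rng(1)
D, N = 3, 10
Phi = rng.normal(size=(D, N))           # feature vectors phi^1, ..., phi^N
d = rng.integers(0, 2, size=N).astype(float)
psi, grad = loss_and_grad(rng.normal(size=D), Phi, d)
```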
We investigate sampling methods based on the ensemble Kalman filter (EnKF) [11] and its extension to Bayesian logistic regression as already considered in [34]. As discussed in the previous section, the Bayesian inverse problem now consists of sampling from \(\pi_{\mathrm{post}}\) given by (2).
Note that the likelihood in this section stems from linear logistic regression, since \(y_{n}=\sigma(\langle\theta,\phi^{n}\rangle)\). However, the methods can be generalized to nonlinear logistic regression, i.e., \(y_{n}=\sigma(f^{n}(\theta))\), with \(f^{n}(\theta)\) being nonlinear, see [34, Section 7.1]. Furthermore, the method proposed in Section 2.2 can be implemented in a derivative-free manner, as discussed in [34, Section 7.1].
### Literature review
Since its introduction in [10], the EnKF has been a popular choice for performing data assimilation tasks due to its robustness and wide applicability. In particular, the EnKF has shown promise as a derivative-free Bayesian inference technique [11, 34, 23]. More recently, the EnKF has been combined with sampling methods for Bayesian inference [8, 9] in order to transform random samples at \(s=0\) into samples from the posterior as \(s\to\infty\). Studying the theoretical aspects of homotopy methods based on the EnKF has also been an active area of research [38, 39, 34, 35, 4]. Furthermore, it should be noted that both the EnKF and ensemble Kalman inversion (EKI) [24] can be cast within an interacting particle setting. However, most of this work addresses the case of a quadratic loss \(\Psi(\theta)\) or a perturbation of a quadratic loss. The works [24, 34] introduce multiple EnKF-type methods to study Bayesian logistic regression, i.e., \(\Psi\) being the negative log-likelihood of a logistic regression model. In this paper, we further develop two of the methods proposed in [34] by taking inspiration from [17, 23, 15, 37] and deploy them for uncertainty quantification in Bayesian logistic regression.
As neural networks have become widely used in critical applications, the need for accurate uncertainty quantification has grown [13]. To address this, a popular technique is Bayesian inference [36], which allows machine learning algorithms to assign low confidence to test points that deviate significantly from the training data or prior information. Bayesian neural networks are commonly used for this purpose [43, 1, 30, 3, 44, 14, 33, 29]. Popular approximate Bayesian approaches include traditional variational inference [3, 19], the Laplace approximation [28], and sampling-based approaches such as Hamiltonian Monte Carlo (HMC) [32] and stochastic gradient Langevin dynamics [42]. More recently, in an attempt to make these Bayesian approaches more computationally feasible, many methods of partially stochastic networks have been considered [40, 25, 6, 41]. Moreover, many non-Bayesian methods [26, 22, 27] for uncertainty quantification have also been proposed. These methods aim to provide usable estimates of predictive uncertainty even in the presence of limited data.
### Outline and our contribution
The fundamental idea underlying this study is to develop efficient Bayesian inference methods for logistic regression based on the EnKF. Furthermore, we apply these methods to uncertainty quantification in ReLU networks and compare them to the state-of-the-art. To that end, we derive two interacting particle systems (IPSs) which sample from an approximate posterior. Moreover, we prove quantitative convergence rates of the IPSs to their mean-field limit as the number of particles tends to infinity. We demonstrate the efficacy of the proposed methods for estimating uncertainty in ReLU networks through numerical experiments for binary classification in Section 6.
The remainder of this paper is organized as follows. The dynamical system formulations of the ensemble transform methods based on the homotopy approach and of the infinite-time second-order dynamical sampler for Bayesian inference in logistic regression are described in Section 2. Section 3 analyzes the proposed ensemble transform methods and derives their mean-field limits. Therein, we quantitatively bound the distance of the \(J\)-particle system to its infinite-particle limit in Theorem 1. Efficient time-stepping methods for robust implementations of the interacting particle systems, together with pseudocode describing the algorithms employed in this paper, are discussed in Section 4. Section 5 introduces the framework for applying the proposed methods to Bayesian logistic regression for uncertainty quantification in ReLU networks. Experimental results on the predictive uncertainty of ReLU networks using the proposed Bayesian inference methods are shown in Section 6. These experiments demonstrate how reliable the uncertainty estimates of the different methods considered are.
## 2 Dynamical system formulations
We denote by \(\theta_{s}^{1},\ldots\theta_{s}^{J}\) an ensemble of \(J\) particles, indexed by a time parameter \(s\geq 0\). We denote by \(m_{\theta_{s}}\) the empirical mean of the ensemble,
\[m_{\theta_{s}}=\frac{1}{J}\sum_{j=1}^{J}\theta_{s}^{j}, \tag{9}\]
and by \(P_{\theta_{s}}\) the empirical covariance matrix,
\[P_{\theta_{s}}=\frac{1}{J}\sum_{j=1}^{J}(\theta_{s}^{j}-m_{\theta_{s}})(\theta_{s}^{j}-m_{\theta_{s}})^{\mathrm{T}}. \tag{10}\]
We introduce the matrix of ensemble deviations \(\Theta_{s}\in\mathbb{R}^{D\times J}\) as
\[\Theta_{s}=\left(\theta_{s}^{1}-m_{\theta_{s}},\theta_{s}^{2}-m_{\theta_{s}}, \ldots,\theta_{s}^{J}-m_{\theta_{s}}\right). \tag{11}\]
We adopt the convention that upper indices denote the ensemble index, while lower indices denote the time index or the \(i\)th entry of an \(N\)-dimensional vector. Associated to the particle system is the empirical measure
\[\mu_{\theta_{s}}=\frac{1}{J}\sum_{j=1}^{J}\delta_{\theta_{s}^{j}}, \tag{12}\]
where \(\delta_{\theta}\) stands for the Dirac delta at \(\theta\). We denote the expectation with respect to this measure by \(\mu_{\theta_{s}}[f]\), i.e. for a function \(f\),
\[\mu_{\theta_{s}}[f]=\frac{1}{J}\sum_{j=1}^{J}f(\theta_{s}^{j}).\]
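The ensemble statistics (9)-(12) are cheap to compute. The sketch below is an illustrative NumPy implementation, assuming the ensemble is stored as a \(D\times J\) array with one particle per column:

```python
import numpy as np

def ensemble_stats(thetas):
    # thetas has shape (D, J): one column per particle.
    m = thetas.mean(axis=1)                  # empirical mean (9)
    Theta = thetas - m[:, None]              # ensemble deviations (11)
    P = (Theta @ Theta.T) / thetas.shape[1]  # empirical covariance (10)
    return m, Theta, P

def empirical_expectation(f, thetas):
    # mu_theta[f]: average of f over the particles (columns of thetas).
    return np.mean([f(thetas[:, j]) for j in range(thetas.shape[1])], axis=0)
```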
Finally, we assume that the prior measure \(\pi_{\mathrm{prior}}\) is a Gaussian and given as
\[\pi_{\mathrm{prior}}=\mathcal{N}(m_{\mathrm{prior}},P_{\mathrm{prior}}).\]
In each of the following two subsections, we introduce an IPS to sample from the posterior in a logistic regression problem.
### Homotopy using moment matching
In homotopy approaches to Bayesian inference, we assume that the initial ensemble \(\{\theta_{0}^{j}\}\) is distributed according to the prior \(\pi_{\mathrm{prior}}\). One then evolves the ensemble such that at some fixed terminal time \(s\), often \(s=1\), the ensemble is distributed according to the posterior \(\pi_{\mathrm{post}}\). To derive such an evolution, one starts by defining a path \(\pi_{s}\) in the space of measures, which starts at a prior distribution \(\pi_{0}=\pi_{\mathrm{prior}}\) and ends at \(\pi_{1}=\pi_{\mathrm{post}}\)[5, 35].
We will study a homotopy approach introduced in [34]. In this section, we will shortly summarize the resulting differential equations. See [34, Section 4.2] for more details and the derivation of the equations.
We will need the gradient and Hessian of the negative log-likelihood \(\Psi\), defined in (5). The gradient is given by
\[\nabla_{\theta}\Psi(\theta)=\sum_{n=1}^{N}(y_{n}(\theta)-d^{n})\phi^{n}=\Phi(y (\theta)-d), \tag{13}\]
while the Hessian is
\[D_{\theta}^{2}\Psi(\theta)=\Phi R(\theta)\Phi^{\mathrm{T}}. \tag{14}\]
Here \(R(\theta)\in\mathbb{R}^{N\times N}\) is a diagonal matrix with diagonal entries
\[r_{nn}=y_{n}(\theta)(1-y_{n}(\theta)). \tag{15}\]
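The gradient (13) and Hessian (14) admit a direct vectorized implementation. The following sketch reuses `sigmoid` and `y_vec` from the earlier snippet and exploits the fact that \(R(\theta)\) is diagonal:

```python
def grad_Psi(theta, Phi, d):
    # Gradient (13): Phi (y(theta) - d).
    return Phi @ (y_vec(theta, Phi) - d)

def hess_Psi(theta, Phi):
    # Hessian (14): Phi R(theta) Phi^T with diagonal R as in (15).
    y = y_vec(theta, Phi)
    r = y * (1.0 - y)            # diagonal entries r_nn of (15)
    return (Phi * r) @ Phi.T     # broadcasting scales column n by r_n
```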
For \(s\in[0,1]\), the dynamical system to transform the prior is given by
\[\begin{split}\frac{\mathrm{d}}{\mathrm{d}s}\theta_{s}^{j}& =-\frac{1}{2}P_{\theta_{s}}\left(\mu_{\theta_{s}}[D_{\theta}^{2} \Psi(\theta)](\theta_{s}^{j}-m_{\theta_{s}})+2\mu_{\theta_{s}}[\nabla_{\theta }\Psi(\theta)]\right)\\ &=-\frac{1}{2}P_{\theta_{s}}\Phi\Big{(}\mu_{\theta_{s}}[R]\Phi^{ \mathrm{T}}(\theta_{s}^{j}-m_{\theta_{s}})+2(\mu_{\theta_{s}}[y]-d)\Big{)} \end{split} \tag{16}\]
with random initial conditions \(\theta_{0}^{j}\sim\mathcal{N}(m_{\mathrm{prior}},P_{\mathrm{prior}})\) i.i.d. for \(j=1,\ldots,J\).
One can also evolve the mean \(m_{\theta_{s}}\) and ensemble deviations \(\Theta_{s}\) instead of the ensemble \(\{\theta_{s}^{j}\}_{j=1}^{J}\). Their evolution equations are given by
\[\frac{\mathrm{d}}{\mathrm{d}s}m_{\theta_{s}}=-P_{\theta_{s}}\Phi\Big{(}\mu_{ \theta_{s}}[y]-d\Big{)}, \tag{17}\]
and
\[\frac{\mathrm{d}}{\mathrm{d}s}\Theta_{s}=-\frac{1}{2}P_{\theta_{s}}\Phi\mu_{ \theta_{s}}[R]\Phi^{\mathrm{T}}\Theta_{s} \tag{18}\]
respectively, where \(P_{\theta_{s}}\) is the empirical covariance matrix given in (10). However, the full ensemble is still required to compute the empirical expectation value of \(R(\theta)\) and \(y(\theta)\).
### Deterministic second-order dynamical sampler
Alternatively to transporting from the prior to the posterior in a fixed time interval, one can also construct systems that sample the target distribution as \(s\to\infty\). Markov Chain Monte Carlo algorithms are the most famous family of algorithms with this property. However, they normally work on a single sample trajectory \(\theta_{s}\) instead of an ensemble.
The algorithm introduced in [34, Section 5] combines the homotopy approaches with overdamped Langevin dynamics to motivate an IPS that approximates the posterior as \(s\to\infty\). The system of equations is given by
\[\begin{split}\mathrm{d}\theta_{s}^{j}&=-\frac{1}{2}P_{\theta_{s}}\Bigg{(}\Phi\Big{(}\mu_{\theta_{s}}[R]\Phi^{\mathrm{T}}(\theta_{s}^{j}-m_{\theta_{s}})+2(\mu_{\theta_{s}}[y]-d)\Big{)}\\ &\qquad+P_{\mathrm{prior}}^{-1}(\theta_{s}^{j}+m_{\theta_{s}}-2m_{\mathrm{prior}})\Bigg{)}\,\mathrm{d}s+P_{\theta_{s}}^{1/2}\mathrm{d}W_{s}^{j},\end{split} \tag{19}\]
where \(W_{s}^{j}\) denotes \(D\)-dimensional standard Brownian motion. For details on the derivation see [34, Section 5]. Note, that similar systems are also introduced in [17, 23, 18].
We now modify (19) by replacing the stochastic driving term \(P_{\theta_{s}}^{1/2}\mathrm{d}W_{s}^{j}\) with \(\frac{1}{2}(\theta_{s}^{j}-m_{\theta_{s}})\mathrm{d}s\), rendering the system deterministic except for the choice of the initial ensemble \(\{\theta_{0}^{j}\}_{j=1}^{J}\). The advantage of making it deterministic is that we can perfectly assess convergence of the algorithm, since the particles stop moving once they reach equilibrium. The replacement of \(P_{\theta_{s}}^{1/2}\) by \(\frac{1}{2}(\theta_{s}^{j}-m_{\theta_{s}})\) is motivated by the fact that, in the mean-field case, for Gaussian densities, the two terms have the same distributional effect, which we prove in Proposition 6 (for more details, see Section D). Furthermore, we will prove in Section 3 that the Gaussian assumption is well-founded.
Therefore, the IPS ODE we will study is given by
\[\begin{split}\frac{\mathrm{d}\theta_{s}^{j}}{\mathrm{d}s}& =-\frac{1}{2}P_{\theta_{s}}\Bigg{(}\Phi\Big{(}\mu_{\theta_{s}}[R] \Phi^{\mathrm{T}}(\theta_{s}^{j}-m_{\theta_{s}})+2(\mu_{\theta_{s}}[y]-d) \Big{)}\\ &\qquad+P_{\mathrm{prior}}^{-1}(\theta_{s}^{j}+m_{\theta_{s}}-2m _{\mathrm{prior}})\Bigg{)}+\frac{1}{2}(\theta_{s}^{j}-m_{\theta_{s}})\end{split} \tag{20}\]
with random initial conditions \(\theta_{0}^{j}\sim\mathcal{N}(m_{0},P_{0})\) i.i.d. for \(j=1,\ldots,J\).
As already utilized in Section 2.1, to propagate the ensemble \(\{\theta_{s}^{j}\}_{j=1}^{J}\) in the interacting particle system (20), we can equivalently evolve the mean \(m_{\theta_{s}}\) and ensemble deviations \(\Theta_{s}\) by
\[\frac{\mathrm{d}}{\mathrm{d}s}m_{\theta_{s}}=-P_{\theta_{s}}\left(\Phi(\mu_{ \theta_{s}}[y]-d)+P_{\mathrm{prior}}^{-1}(m_{\theta_{s}}-m_{\mathrm{prior}}) \right), \tag{21}\]
and
\[\frac{\mathrm{d}}{\mathrm{d}s}\Theta_{s}=-\frac{1}{2}P_{\theta_{s}}\left(\Phi \mu_{\theta_{s}}[R]\Phi^{\mathrm{T}}\Theta_{s}+P_{\mathrm{prior}}^{-1}\Theta_ {s}\right)+\frac{1}{2}\Theta_{s} \tag{22}\]
respectively, where \(P_{\theta_{s}}\) is the empirical covariance. The method reaches an equilibrium solution as \(s\to\infty\), which gives the deterministic dynamical sampler a robustness that the homotopy methods might not possess.
## 3 Theoretical results on mean-field limits
In this section, we study what happens to the equations (16) and (20) when we let the number of particles go to infinity. As we will see, the interaction between the particles decreases, and in the limit \(J\to\infty\) all particles follow a deterministic mean-field ODE, independently of the other particles.
To that end, we introduce some notation. Both of the IPS in Section 2 only depend on the other particles through their empirical measure. Therefore, we can rewrite (16) and (20) as
\[\mathrm{d}\theta_{s}^{j}=b(\mu_{\theta_{s}})(\theta^{j}), \tag{23}\]
where \(b(\mu_{\theta})\) is defined implicitly by (16) and (20). By the continuity equation, we know that if the particles are evolved using the drift (23), then their empirical measure is a weak solution to the partial differential equation (PDE)
\[\partial_{s}\mu_{\theta_{s}}(\theta)=-\operatorname{div}(\mu_{\theta_{s}}b(\mu_{\theta_{s}})(\theta)),\qquad\mu_{\theta_{0}}=\frac{1}{J}\sum_{j=1}^{J}\delta_{\theta_{0}^{j}},\]
where \(\delta_{\theta}\) is the Dirac-delta distribution at \(\theta\). Due to the dependence of \(b\) on \(\mu_{\theta_{s}}\), this PDE is typically nonlinear. However, since \(b\) only depends on the other particles through the empirical measure, the above PDE forms a closed system. Therefore, we consider the abstract PDE
\[\partial_{s}\mu_{s}(\theta)=-\operatorname{div}(\mu_{s}b(\mu_{s})(\theta)) \tag{24}\]
which, given an initial condition \(\mu_{0}\), can be solved at the level of measures or densities directly. Given such a solution \((\mu_{s})_{s}\) for a fixed initial \(\mu_{0}\), we plug it into (23) and get the differential equation
\[\frac{\mathrm{d}}{\mathrm{d}s}\eta_{s}=b(\mu_{s})(\eta_{s}).\]
Note that this equation does not constitute an IPS anymore but a mean-field ODE instead; i.e., the particle evolution depends on its own distribution \(\mu_{s}\), which we obtain as the solution to (24). Furthermore, we find from (24) that \(\eta_{0}\sim\mu_{0}\) implies \(\eta_{s}\sim\mu_{s}\) for all \(s>0\).
The proofs in the following subsections work now as follows. We fix an initial condition
\[\mu_{0}=\mathcal{N}(m_{0},P_{0}),\]
for which we obtain the solution \(\mu_{s}\) to (24). Then, we define an intermediate mean-field particle system:
\[\frac{\mathrm{d}}{\mathrm{d}s}\eta_{s}^{j} = b(\mu_{s})(\eta_{s}^{j}),\quad\eta_{0}^{j}\sim\mu_{0}. \tag{25}\]
Note that the \(\eta_{s}^{j}\), \(j=1,\ldots,J\), are all independent. We then couple the IPS (16) or (20) to (25) by choosing the same initial conditions, i.e., \(\theta_{0}^{j}=\eta_{0}^{j}\) for all \(j=1,\ldots,J\). We then prove that the expected Wasserstein-distance of the empirical measures
\[\mathbb{E}[\mathcal{W}_{2}(\mu_{\theta_{s}},\mu_{\eta_{s}})]\to 0\qquad\text{as }J\to\infty. \tag{26}\]

Let us briefly discuss the expectation value in (26). Note that although (23) and (25) are deterministic, \(\mu_{\theta_{s}}\) as well as \(\mu_{\eta_{s}}\) are random probability measures. However, the randomness comes solely from the initial conditions. Therefore, the expectation in (26) is with respect to the i.i.d. initial conditions \(\eta_{0}^{j}=\theta_{0}^{j}\sim\pi_{0}\).
We not only prove (26) but also obtain a quantitative convergence rate. Since the \(\eta_{s}^{j}\) are independent samples from \(\mu_{s}\), this shows that for large \(J\) the \(\theta_{s}^{j}\) approximate independent samples from \(\mu_{s}\). One can make the last statement precise by using known rates at which \(\mu_{\eta_{s}}\) converges to \(\mu_{s}\), see [12].
### Analysing the mean-field systems
As already discussed, we will prove that the empirical measure \(\mu_{\theta_{s}}\) approximates the solution of the mean-field PDE \(\mu_{s}\) for large ensemble sizes. Therefore, it is instructive to briefly study \(\mu_{s}\). Since \(\mu_{0}\) is Gaussian in our case and the drift terms \(b(\mu_{\theta_{s}})(\theta)\) are linear in \(\theta\), \(\mu_{s}\) will remain Gaussian for all times \(s\geq 0\). We denote its mean and covariance by \(m_{s}\) and \(P_{s}\):
\[\mu_{s}=\mathcal{N}(m_{s},P_{s}).\]
Therefore, solving the mean-field PDE (24) corresponds to solving an ODE for the mean and covariance of the Gaussian distribution. We will next work out these mean-field ODEs for our two IPS.
#### 3.1.1 Homotopy using moment matching
The mean-field limit PDE, corresponding to (24), is given by
\[\partial_{s}\mu_{s}(\eta) = \frac{1}{2}\operatorname{div}\Big{(}\mu_{s}P_{s}\big{(}\Phi\mu_{s}[R]\Phi^{\mathrm{T}}(\eta-m_{s})+2\Phi(\mu_{s}[y]-d)\big{)}\Big{)}.\]
In the case of Gaussian initial conditions, the above PDE is equivalent to solving the following ODEs for the mean and covariance:
\[\begin{array}{rcl}\frac{\mathrm{d}}{\mathrm{d}s}m_{s}&=&-P_{s}\Phi(\mu_{s}[y]-d )=-P_{s}\mu_{s}[\nabla_{\theta}\Psi(\theta)],\\ \frac{\mathrm{d}}{\mathrm{d}s}P_{s}&=&-P_{s}\Phi\mu_{s}[R]\Phi^{\mathrm{T}}P_{ s}=-P_{s}\mu_{s}[D_{\theta}^{2}\Psi(\theta)]P_{s},\end{array} \tag{27}\]
where \(\nabla_{\theta}\Psi\) and \(D_{\theta}^{2}\Psi\) are the gradient and Hessian of the negative log-likelihood function as defined in (13) and (14), respectively.
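The moment ODEs (27) can be integrated numerically once the expectations \(\mu_{s}[y]\) and \(\mu_{s}[R]\) are approximated. The sketch below uses plain forward Euler and a Monte Carlo estimate of the expectations; the sample size `n_mc` and the Euler scheme are illustrative choices rather than the paper's recommendation (Section 4 discusses more robust tamed discretizations). It reuses `sigmoid` from the earlier snippet:

```python
import numpy as np

def mean_field_homotopy(m0, P0, Phi, d, n_steps=1000, n_mc=2000, rng=None):
    # Forward-Euler integration of the moment ODEs (27) on s in [0, 1].
    rng = np.random.default_rng() if rng is None else rng
    m, P, ds = m0.copy(), P0.copy(), 1.0 / n_steps
    for _ in range(n_steps):
        # Monte Carlo estimates of mu_s[y] and mu_s[R] under N(m, P).
        samples = rng.multivariate_normal(m, P, size=n_mc)   # (n_mc, D)
        Y = sigmoid(samples @ Phi)                           # (n_mc, N)
        mu_y = Y.mean(axis=0)
        mu_R = np.diag((Y * (1.0 - Y)).mean(axis=0))
        m = m - ds * P @ (Phi @ (mu_y - d))
        P = P - ds * P @ (Phi @ mu_R @ Phi.T) @ P
    return m, P
```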
#### 3.1.2 Deterministic second-order dynamical sampler
In this case, the mean-field limit PDE is given by
\[\begin{split}\partial_{s}\mu_{s}(\eta)=\frac{1}{2}\operatorname{div}\Big{(}\mu_{s}\Big{(}&P_{s}\big{(}\Phi\mu_{s}[R]\Phi^{\mathrm{T}}(\eta-m_{s})+2\Phi(\mu_{s}[y]-d)\big{)}\\ &+P_{s}P_{\mathrm{prior}}^{-1}(\eta+m_{s}-2m_{\mathrm{prior}})-(\eta-m_{s})\Big{)}\Big{)}, \end{split} \tag{28}\]
where \(\mu_{0}=\mathcal{N}(m_{0},P_{0})\). As in the previous subsection, we again derive the mean-field ODEs for the mean and covariance:
\[\begin{array}{rcl}\frac{\mathrm{d}}{\mathrm{d}s}m_{s}&=&-P_{s}\Phi(\mu_{s}[y ]-d)-P_{s}P_{\mathrm{prior}}^{-1}(m_{s}-m_{\mathrm{prior}}),\\ \frac{\mathrm{d}}{\mathrm{d}s}P_{s}&=&-P_{s}\Phi\mu_{s}[R]\Phi^{\mathrm{T}}P_{ s}-P_{s}P_{\mathrm{prior}}^{-1}P_{s}+P_{s}.\end{array} \tag{29}\]
The mean-field ODE (25) becomes
\[\begin{array}{rcl}\frac{\mathrm{d}}{\mathrm{d}s}\eta_{s}^{j}&=&-\frac{1}{2}P_{s}\left(\Phi\mu_{s}[R]\Phi^{\mathrm{T}}(\eta_{s}^{j}-m_{s})+2\Phi(\mu_{s}[y]-d)\right)\\ &&-\,\frac{1}{2}P_{s}P_{\mathrm{prior}}^{-1}(\eta_{s}^{j}+m_{s}-2m_{\mathrm{prior}})+\frac{1}{2}(\eta_{s}^{j}-m_{s}),\end{array} \tag{30}\]
where \(\eta_{0}^{j}=\theta_{0}^{j}\sim\mathcal{N}(m_{0},P_{0})\). When the system (29) stops evolving, we have reached the equilibrium distribution
\[\mu_{*}=\mathcal{N}(m_{*},P_{*}).\]
To derive equations for \(m_{*}\) and \(P_{*}\), we set the right-hand sides of (29) to zero and obtain:
\[m_{*} =m_{\mathrm{prior}}-P_{\mathrm{prior}}\Phi(\mu_{*}[y]-d)=m_{ \mathrm{prior}}-P_{\mathrm{prior}}\mu_{*}[\nabla\Psi(\theta)], \tag{31}\] \[P_{*} =(\Phi\mu_{*}[R]\Phi^{\mathrm{T}}+P_{\mathrm{prior}}^{-1})^{-1}= (\mu_{*}[D_{\theta}^{2}\Psi(\theta)]+P_{\mathrm{prior}}^{-1})^{-1}. \tag{32}\]
These are implicit equations in \(m_{*}\) and \(P_{*}\), and the evolution equations (29) can be seen as a means to find \(m_{*}\) and \(P_{*}\). Therefore, in the many-particle and large-time limit, we approximate a Gaussian with mean and covariance given by (31) and (32), respectively.
Approximating a distribution by a Gaussian is also an important topic in variational inference [16]. However, in contrast to popular methods for Gaussian variational inference (see the discussion in [16]), which are based on taking gradients with respect to the mean and covariance of the approximating Gaussian, we do not need to invert the state space (\(D\times D\)) covariance matrix for our method to work.
**Remark 1**: _The work [16] also proposes a deterministic IPS for Gaussian variational inference. While their evolution equations differ from ours, the equilibrium state agrees with (31)-(32). Furthermore, in contrast to our formulation, the IPS proposed in [16] is not affine-invariant. See [34] for a discussion of affine invariance._
### Statement of results
Since the IPS (16) and (20) are similar, differing only in additional terms for (20), the proofs of the following results are quite similar too. Due to (20) having more terms, in particular also terms that increase the ensemble spread, the proofs are more technical. We concentrate on that case. The analogous results for (16) follow by performing very similar, often nearly identical, calculations, but for fewer terms.
First of all, we prove in the following proposition that the objects of interest, \(\theta_{s}^{j}\), \(\mu_{\theta_{s}}\), \(\eta_{s}^{j}\), and \(\mu_{s}\) are well-defined.
**Proposition 1**: _The mean-field PDE (28) has a unique global solution \(\mu_{s}\). Furthermore, the IPS (20) and the mean-field IPS (30) also possess unique global solutions._
The proposition is proven in Section C.1, in Proposition 3 and Proposition 4. We are now in a position to state our main theorem.
**Theorem 1**: _Let \(\{\theta_{s}^{j}\}_{j=1}^{J}\) be the solution to (20) with associated empirical measure \(\mu_{\theta_{s}}\) and let \(\mu_{s}\) be the solution to (28). For any \(\varepsilon>0\), there exists a constant \(C_{\varepsilon,s}\), depending only on \(m_{0},P_{0},\Phi,d,s\) and \(\varepsilon\), such that_

\[\mathbb{E}[\mathcal{W}_{2}(\mu_{\theta_{s}},\mu_{s}^{J})]\leqslant C_{\varepsilon,s}J^{-\frac{1}{2}+\varepsilon},\]

_where \(\mu_{s}^{J}\) is the \(J\)-fold product measure of \(\mu_{s}\) with itself._
**Proof** (Sketch) We introduce an artificial mean-field particle system \(\eta^{j}\) as described in Section 3. The precise mean-field ODEs can be found in (30). We couple the \(\theta_{s}^{j}\) to the \(\eta_{s}^{j}\) by choosing the same initial conditions, i.e., \(\eta_{0}^{j}=\theta_{0}^{j}\). The \(\eta_{s}^{j}\) are samples from \(\mu_{s}^{J}\), and therefore the Wasserstein distance, being the infimum over all couplings, can be upper bounded by this specific coupling between \(\theta_{s}^{j}\) and \(\eta_{s}^{j}\), i.e.,

\[\mathbb{E}[\mathcal{W}_{2}(\mu_{\theta_{s}},\mu_{s}^{J})]\leqslant\mathbb{E}\left[\left(\frac{1}{J}\sum_{j=1}^{J}|\theta_{s}^{j}-\eta_{s}^{j}|^{2}\right)^{1/2}\right]. \tag{33}\]
We fix a \(T\) and assume that \(s\leqslant T\). To bound (33), we define
\[\Delta_{s}=\left(\frac{1}{J}\sum_{j=1}^{J}|\theta_{s}^{j}-\eta_{s}^{j}|^{2} \right)^{1/2}\]
and upper bound its growth. In Proposition 2 we will show that we can bound any moment of \(\Delta_{s}\) independently of the ensemble size \(J\), i.e.,
\[\mathbb{E}\left[|\Delta_{s}|^{p}\right]^{1/p}\lesssim C_{p}, \tag{34}\]
where \(x\lesssim y\) symbolizes that the inequality \(x\leqslant ay\) holds with a constant \(a\) depending only on \(T,m_{0},P_{0},\Phi\) and \(d\).
We then use a bootstrapping technique inspired by [8]. The main idea is to show that, if
\[\mathbb{E}[\Delta_{s}]\lesssim J^{-\gamma} \tag{35}\]
for some \(\gamma\geq 0\), then one can actually improve that \(\gamma\) value to a better \(\gamma^{\prime}\), i.e.,
\[\mathbb{E}[\Delta_{s}]\lesssim J^{-\gamma^{\prime}} \tag{36}\]
holds for some \(\gamma^{\prime}>\gamma\). By (34) (with \(p=1\)), inequality (35) holds for \(\gamma=0\). We then iteratively improve \(\gamma=0\) to any \(\gamma=\frac{1}{2}-\varepsilon\).
We now go into a bit more detail on how to improve the current estimate of \(\gamma\). We denote by \(H_{\alpha}\) a random variable, which can change from occurrence to occurrence, such that \(\mathbb{E}[H_{\alpha}^{p}]^{1/p}\leqslant C_{p}J^{-\alpha}\) for all \(p\geq 0\), where \(C_{p}\) only depends on \(T,m_{0},P_{0},\Phi,d\) and \(p\). Considering the ODE for \(\Delta_{s}\) and bounding all terms of its right-hand side, we end up with
\[\frac{\mathrm{d}}{\mathrm{d}s}\mathbb{E}[\Delta_{s}] \leqslant C(\mathbb{E}[\Delta_{s}]+\mathbb{E}[\Delta_{s}H_{1/4}]+\mathbb{ E}[H_{1/2}])\] \[\leqslant C(\mathbb{E}[\Delta_{s}]+\mathbb{E}[\Delta_{s}^{\varepsilon} \Delta_{s}^{1-\varepsilon}H_{1/4}]+\mathbb{E}[H_{1/2}])\] \[\leqslant C(\mathbb{E}[\Delta_{s}]+\mathbb{E}[\Delta_{s}]^{1-\varepsilon }\mathbb{E}[\Delta_{s}(H_{1/4})^{1/\varepsilon}]^{\varepsilon}+\mathbb{E}[H_ {1/2}])\] \[\leqslant C(\mathbb{E}[\Delta_{s}]+\mathbb{E}[\Delta_{s}]^{1-\varepsilon }\mathbb{E}[\Delta_{s}^{2}]^{\varepsilon/2}\mathbb{E}[(H_{1/4})^{2/\varepsilon }]^{\varepsilon/2}+\mathbb{E}[H_{1/2}])).\]
Here we repeatedly used Hölder's inequality. Since all moments of \(H_{\alpha}\) can be bounded by \(J^{-\alpha}\), we obtain
\[\frac{\mathrm{d}}{\mathrm{d}s}\mathbb{E}[\Delta_{s}] \lesssim \mathbb{E}[\Delta_{s}]+\mathbb{E}[\Delta_{s}]^{1-\varepsilon}J^{ -1/4}+J^{-1/2}\] \[\leqslant \mathbb{E}[\Delta_{s}]+J^{-\gamma(1-\varepsilon)-1/4}+J^{-1/2}.\]
In the second inequality we used the a priori assumption that \(\mathbb{E}[\Delta_{s}]\lesssim J^{-\gamma}\). We now apply Grönwall's lemma to obtain
\[\mathbb{E}[\Delta_{s}]\lesssim J^{-\gamma^{\prime}},\]
where \(\gamma^{\prime}=\min\left(\frac{1}{2},\frac{1}{4}+\gamma(1-\varepsilon)\right)\). Plugging in \(\gamma=0\), we obtain a rate of \(J^{-1/4}\) by applying the above argument once. Then, by picking a small enough \(\varepsilon\), we can achieve any rate below \(1/2\) by applying the bootstrapping technique a second time. The full details of the proof can be found in Appendix C.3. \(\Box\)
The following a priori bound is crucial for the proof of Theorem 1.
**Proposition 2**: _For \(s\in[0,T]\),_
\[\mathbb{E}\left[\Big{|}\frac{1}{J}\sum_{j=1}^{J}|\theta_{s}^{j}-\eta_{s}^{j}|^{2}\Big{|}^{p}\right]^{1/p}\leqslant C_{p},\]
_i.e., the \(p\)-norm can be bounded independently of \(J\) for a fixed \(T\). The constant \(C_{p}\) only depends on \(m_{0},P_{0},\Phi,d\) and \(T\)._
The proof relies on the fact that \(\sum_{j=1}^{J}|\theta_{s}^{j}-\eta_{s}^{j}|\) is a martingale and uses martingale inequalities. It can be found in Section C.2.
## 4 Algorithmic implementation
A typical way of time-stepping the interacting particle systems presented in this paper is the forward-Euler method. However, due to its restricted domain of stability, using this method can lead to restrictions on the step-size \(\Delta s\). In this section we describe tamed discretizations for the homotopy based moment matching formulation (16) and the deterministic second-order dynamical sampler (20). We introduce a step-size \(\Delta s\geq 0\) and discrete times \(s_{k}=k\Delta s\). Further, we use the shorthand \(\theta_{s_{k}}\approx\theta_{k},m_{\theta_{s_{k}}}\approx m_{k},P_{\theta_{s_ {k}}}\approx P_{k},\Theta_{s_{k}}\approx\Theta_{k},\mu_{\theta_{s_{k}}} \approx\mu_{k}\) in the forthcoming subsections.
### Homotopy using moment matching
We modify the time-stepping of the moment-matching method (16) by using the following tamed discretization
\[\theta_{k+1}^{j}=\theta_{k}^{j}-\frac{\Delta s}{2}P_{k}\Phi\left(M_{k}\Phi^{ \mathrm{T}}(\theta_{k}^{j}-m_{k})+2(\mu_{k}[y]-d)\right) \tag{37}\]
where
\[P_{k}=\frac{1}{J}\Theta_{k}\Theta_{k}^{\mathrm{T}}\]
and
\[M_{k}=\left(\Delta s\Phi^{\mathrm{T}}P_{k}\Phi+\mu_{k}[R]\right)^{-1} \tag{38}\]
for \(j=1,\ldots,J\). As discussed in Section 2.1, we propagate \(\theta_{k}^{j}\) forward by evolving the associated empirical mean and ensemble deviations using (17)-(18). The resulting time-stepping of (17)-(18) is of the form
\[m_{k+1}=m_{k}-\Delta sP_{k}\Phi\left(\mu_{k}[y]-d\right) \tag{39}\]
and
\[\Theta_{k+1}=\Theta_{k}-\frac{\Delta s}{2}P_{k}\Phi M_{k}\Phi^{\mathrm{T}} \Theta_{k} \tag{40}\]
for the ensemble mean and ensemble deviations, respectively.
**Remark 2**: _Inverting the \(N\times N\) matrix (38) can prove prohibitive for large data sets. However, since \(R(\theta)\) is diagonal, taking the full inverse in (38) can be replaced by inverting the diagonal entries of \(\Delta s\Phi^{\mathrm{T}}P_{k}\Phi+\mu_{k}[R]\) only, as proposed in [2]. This inexpensive approximation still provides improved stability compared to an explicit Euler discretization of (18)._
We provide pseudo-code summarizing the second-order moment matching method described in Algorithm 1.
```
Inputs: Data set \(\{(\phi^{n},d^{n})\}_{n=1}^{N}\); feature map \(\Phi=\{\phi^{n}\}_{n=1}^{N}\); initial ensemble \(\{\theta_{0}^{j}\}_{j=1}^{J}\) drawn from a Gaussian distribution; step-size \(\Delta s\) and \(K\) such that \(\Delta sK=1\).
for \(k=0\) to \(K-1\) do:
  1. Evaluate ensemble mean \(m_{k}\) (9), ensemble deviations \(\Theta_{k}\) (11), covariance matrix \(P_{k}\) (10), \(y\) (6), \(\mu_{k}[y]\), \(\mu_{k}[R]\) and \(M_{k}\) (38).
  2. Evolve \(m_{k}\) and \(\Theta_{k}\) using (39)-(40).
  3. Determine \(\{\theta_{k+1}^{j}\}_{j=1}^{J}\) from \(m_{k+1}\) and \(\Theta_{k+1}\).
end for
Output: Final ensemble \(\{\theta_{K}^{j}\}_{j=1}^{J}\).
```
**Algorithm 1**Homotopy using moment matching method for Bayesian inference
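A compact realization of one step of Algorithm 1 might look as follows. This is a sketch, reusing `sigmoid` and `ensemble_stats` from the earlier snippets, with the diagonal-inverse shortcut of Remark 2 available as an option:

```python
import numpy as np

def moment_matching_step(thetas, Phi, d, ds, diag_approx=True):
    # One tamed update (37)-(40) for an ensemble thetas of shape (D, J);
    # Phi has shape (D, N) and d shape (N,).
    m, Theta, P = ensemble_stats(thetas)
    Y = sigmoid(thetas.T @ Phi)              # (J, N) predicted probabilities
    mu_y = Y.mean(axis=0)
    mu_r = (Y * (1.0 - Y)).mean(axis=0)      # diagonal of mu_k[R]
    A = ds * Phi.T @ P @ Phi + np.diag(mu_r)
    if diag_approx:
        M = np.diag(1.0 / np.diag(A))        # cheap variant from Remark 2
    else:
        M = np.linalg.inv(A)                 # tamed inverse (38)
    m_new = m - ds * P @ (Phi @ (mu_y - d))                       # (39)
    Theta_new = Theta - 0.5 * ds * P @ Phi @ M @ (Phi.T @ Theta)  # (40)
    return m_new[:, None] + Theta_new        # reconstructed ensemble
```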
### Deterministic second-order dynamical sampler
We employ the idea of Trotter splitting to solve (20) numerically. The required splitting is provided by the evolution equation (16), already encountered in the homotopy approach, and the remainder
\[\frac{\mathrm{d}\theta_{s}^{j}}{\mathrm{d}s}=-\frac{1}{2}P_{\theta_{s}}P_{\text{ prior}}^{-1}(\theta_{s}^{j}+m_{\theta_{s}}-2m_{\text{prior}})+\frac{1}{2}( \theta_{s}^{j}-m_{\theta_{s}}) \tag{41}\]
Therefore, for every time step \(k\), given \(\{\theta_{k}^{j}\}_{j=1}^{J}\), we first compute \(\theta_{k+1/2}^{j}\) using the second order moment matching method (37) rewritten as
\[\theta_{k+1/2}^{j}=\theta_{k}^{j}-\frac{\Delta s}{2}P_{k}\Phi\left(M_{k}\Phi^{ \mathrm{T}}(\theta_{k}^{j}-m_{k})+2(\mu_{k}[y]-d)\right) \tag{42}\]
for \(j=1,2,...,J\). Equivalently, we can obtain \(\theta_{k+1/2}^{j}\) by evaluating \(m_{k+1/2}\) and \(\Theta_{k+1/2}\) as stated in (39)-(40) with subscript \(k+1\) replaced by \(k+1/2\). In the second half step, we approximate (41) using the following scheme:
\[\begin{split}\theta_{k+1}^{j}&=\theta_{k+1/2}^{j}- \frac{\Delta s}{2}P_{k+1/2}\left(\Delta sP_{k+1/2}+P_{\text{prior}}\right)^{-1} \left(\theta_{k+1/2}^{j}+m_{k+1/2}-2m_{\text{prior}}\right)\\ &\quad+\frac{\Delta s}{2}(\theta_{k+1/2}^{j}-m_{k+1/2})\end{split} \tag{43}\]
Again, if \(P_{\text{prior}}\) is diagonal, full matrix inversion in (43) can be replaced by inverting the diagonal only.
We provide pseudo-code describing the algorithm for deterministic second-order dynamical sampler in Algorithm 2.
```
Inputs: Data set \(\{(\phi^{n},d^{n})\}_{n=1}^{N}\); feature map \(\Phi=\{\phi^{n}\}_{n=1}^{N}\); initial ensemble \(\{\theta_{0}^{j}\}_{j=1}^{J}\) drawn from a Gaussian distribution; step-size \(\Delta s\); threshold value \(\epsilon>0\).
for \(k\geq 0\) do:
  1. Numerically solve ODE (20) by Trotter splitting:
     (a) Evaluate ensemble mean \(m_{k}\) (9), ensemble deviations \(\Theta_{k}\) (11), covariance matrix \(P_{k}\) (10), \(y\) (6), \(\mu_{k}[y]\), \(\mu_{k}[R]\) and \(M_{k}\) (38).
     (b) Determine \(\{\theta_{k+1/2}^{j}\}_{j=1}^{J}\) by evolving \(m_{k}\) and \(\Theta_{k}\) using the time-stepping (39)-(40).
     (c) Evaluate covariance matrix \(P_{k+1/2}\) (10).
     (d) Determine \(\{\theta_{k+1}^{j}\}_{j=1}^{J}\) using the time-stepping (43).
  2. Update the ensemble \(\{\theta_{k}^{j}\}_{j=1}^{J}\rightarrow\{\theta_{k+1}^{j}\}_{j=1}^{J}\).
  if \(\frac{\|P_{k+1}-P_{k}\|_{2}}{\|P_{k}\|_{2}}<\epsilon\): break
end for
Output: Final ensemble \(\{\theta_{k}^{j}\}_{j=1}^{J}\).
```
**Algorithm 2**Deterministic second-order dynamical sampler for Bayesian inference
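On top of the previous snippet, a full Trotter-split step of Algorithm 2 can be sketched as below; the `solve`-based evaluation of \(\frac{\Delta s}{2}P_{k+1/2}(\Delta sP_{k+1/2}+P_{\text{prior}})^{-1}\) avoids forming the matrix inverse explicitly:

```python
import numpy as np

def second_order_sampler_step(thetas, Phi, d, ds, m_prior, P_prior):
    # First half step (42): the tamed moment-matching update.
    thetas_half = moment_matching_step(thetas, Phi, d, ds)
    m_half, _, P_half = ensemble_stats(thetas_half)
    # Second half step (43): K = (ds/2) P_half (ds P_half + P_prior)^{-1}.
    K = np.linalg.solve((ds * P_half + P_prior).T,
                        (0.5 * ds * P_half).T).T
    shift = thetas_half + m_half[:, None] - 2.0 * m_prior[:, None]
    return thetas_half - K @ shift + 0.5 * ds * (thetas_half - m_half[:, None])
```

In a driver loop one would iterate this step and monitor the relative change \(\|P_{k+1}-P_{k}\|_{2}/\|P_{k}\|_{2}\) against the threshold \(\epsilon\), as in Algorithm 2.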
## 5 Application to Bayesian logistic regression in ReLU networks
Neural networks using ReLU activation functions are possibly the most widely used neural network architectures for classification. However, it has been proven that these networks exhibit arbitrarily high confidence far away from the training data when fitted by minimizing the negative log-likelihood \(\Psi\) or, equivalently, by approximating the maximum likelihood estimator (MLE), denoted by \(\tilde{\theta}_{\text{MLE}}\), as demonstrated in [22, 20]. Thus, this architecture along with an MLE training scheme is not robust and does not provide any measure of uncertainty in the model's predictions.
One way of obtaining predictive uncertainty is to place distributions over the weights of a ReLU network, which leads to Bayesian neural networks. The idea of replacing the MLE \(\tilde{\theta}_{\text{MLE}}\) by a posterior measure \(\tilde{\pi}_{\text{post}}\) over parameters \(\tilde{\theta}\) therefore enables us to make better informed predictions and to know when our model predictions are not to be trusted. To understand how uncertainty might be expressed in this setting, we put a prior \(\tilde{\pi}_{\text{prior}}\) on the parameters \(\tilde{\theta}\) which can be thought of as incorporating some prior knowledge and then refining it based on data to learn a posterior distribution. That is, after observing \(\mathcal{D}\), we can get the posterior measure through Bayes formula (2).
This Bayesian approach to neural networks introduces a certain degree of computational complexity, as we now need to sample from \(\tilde{\pi}_{\text{post}}\) instead of minimizing \(\Psi\), which can also be computationally expensive. Computational approximations to \(\tilde{\pi}_{\text{post}}\) have been enabled by advances in Markov chain Monte Carlo (MCMC) methods (see [31]). However, even today's most sophisticated MCMC methods are rendered impractical for deep neural network architectures. Therefore, to make the Bayesian approach tractable, we focus on a last-layer Bayesian approximation that places a prior distribution only on the output layer's parameters. Thus, we decompose an \(l\)-layered ReLU network \(G_{\tilde{\theta}}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) into a feature map \(\phi_{\hat{\theta}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{D}\), consisting of the first \(l-1\) layers of the ReLU network, and the output layer, i.e.,

\[G_{\tilde{\theta}}(x)=\sigma(\langle\theta,\phi_{\hat{\theta}}(x)\rangle)\]
with \(\tilde{\theta}=(\hat{\theta}^{\text{T}},\theta^{\text{T}})^{\text{T}}\). One now first trains the complete network using a (regularised) MLE approach, which provides \(\tilde{\theta}_{\text{MLE}}\) and the associated trained feature map
\[\phi(x):=\phi_{\hat{\theta}_{\text{MLE}}}(x). \tag{44}\]
Furthermore, upon defining the input features \(\phi^{n}=\phi(x^{n})\), \(n=1,\ldots,N\), over the data set \(\{(x^{n},d^{n})\}_{n=1}^{N}\), this architecture is now equivalent to a Bayesian logistic regression problem in the output layer parameters \(\theta\), as discussed in detail in Section 1.1.
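In code, the last-layer decomposition amounts to splitting a trained network into a feature extractor and a final linear layer. The PyTorch sketch below uses a hypothetical architecture matching the setup described here (the layer sizes are assumptions) and shows how the features (44) would be extracted:

```python
import torch
import torch.nn as nn

class ReLUNet(nn.Module):
    # Hypothetical 3-layer ReLU classifier: the first layers form the
    # feature map phi, the final linear layer holds the weights theta.
    def __init__(self, n_in=2, hidden=20, n_feat=50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(n_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_feat), nn.ReLU(),
        )
        self.out = nn.Linear(n_feat, 1)

    def forward(self, x):
        return torch.sigmoid(self.out(self.features(x)))

def extract_features(net, X):
    # Input features phi^n = phi(x^n) of (44), returned as a (D, N) array.
    with torch.no_grad():
        X_t = torch.as_tensor(X, dtype=torch.float32)
        return net.features(X_t).numpy().T
```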
In this paper, we analyze the performance of ReLU networks in the case of binary classification. While the work in [25] focuses on Laplace approximations for Bayesian inference, we employ algorithms for Bayesian logistic regression based on the methods proposed in Section 2. We demonstrate experimentally that our methods in conjunction with pre-trained deterministic ReLU networks provide desirable uncertainty estimates. However, it should be noted that the methods proposed in this paper are not limited to a 'last layer' use, but can be easily extended to multiple layers or the entire network.
We use a 3-layer ReLU network with 20 hidden units in each layer and \(D=50\) units at the output (feature) layer in the subsequent numerical experiments. The data set is constructed by generating 2D binary classification data using scikit-learn. The MLE estimator \(\tilde{\theta}_{\text{MLE}}\) is obtained using the PyTorch library. Figure 1 depicts the generated data set.
## 6 Numerical experiments
In this section, we consider a numerical experiment for a binary classification problem as an illustration of uncertainty quantification in ReLU networks. We employ the previously described 3-layer neural network architecture \(G_{\tilde{\theta}}(x)\). The training data set \(\{(x^{n},d^{n})\}_{n=1}^{N}\) consists of inputs and the associated labels. We train the ReLU network using stochastic gradient descent (SGD) with momentum \(0.9\), learning rate \(3\times 10^{-4}\), and weight decay on the 2D binary classification data set of Figure 1, using \(N=30\) points for the toy binary classification problem. SGD minimizes the cross-entropy loss (5) across the training data set, and we denote the computed minimizer by \(\tilde{\theta}_{\text{MLE}}\).
The computed parameters \(\hat{\theta}_{\text{MLE}}\) then provide the feature map (44), which is used for Bayesian logistic regression. The chosen prior is Gaussian with mean \(m_{\text{prior}}=0\) and covariance matrix \(P_{\text{prior}}=2I\). Using the ensemble of particles distributed according to the posterior \(\tilde{\pi}_{\text{post}}\), approximated with the homotopy-based moment-matching method (39)-(40) and the second-order dynamical sampler (43), the predictive distribution is estimated as
\[\pi(d=1|x,\mathcal{D})=\frac{1}{J}\sum_{j=1}^{J}\sigma(\langle\theta_{*}^{j}, \phi_{\hat{\theta}_{\text{MLE}}}(x)\rangle), \tag{45}\]
where \(\{\theta_{*}^{j}\}_{j=1}^{J}\) is the final ensemble of particles obtained using Algorithm 1 and Algorithm 2. The associated Bayesian posterior distribution now translates uncertainty in weights to uncertainty in model predictions.
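Given the final ensemble, the predictive distribution (45) is a simple average over particles. A minimal sketch, reusing `sigmoid` and `extract_features` from the earlier snippets:

```python
def predictive_prob(thetas_final, net, X):
    # Ensemble estimate (45) of pi(d=1 | x, D) for a batch of inputs X;
    # thetas_final has shape (D, J).
    Phi_test = extract_features(net, X)           # (D, n_test)
    probs = sigmoid(thetas_final.T @ Phi_test)    # (J, n_test)
    return probs.mean(axis=0)
```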
Results for both methods introduced in this paper are reported using an ensemble size of \(J=200\). Algorithm 1 is implemented with step-size \(\Delta s=10^{-3}\) while Algorithm 2 is implemented with \(\Delta s=10^{-1}\) and \(\Delta sK=20\).
The results from Algorithms 1 & 2 are compared to the following two alternative methods.
Figure 1: 2D binary classification data set
**Laplace approximations.** We compare the uncertainty estimates provided by the proposed EnKF based methods for Bayesian approximation over the output layer parameters to the last-layer Laplace approximation (LLLA) for inference as introduced in [25]. In this case, we perform a Laplace approximation to get the posterior of the weights of the last layer, assuming the previous layers to be fixed at MAP estimates. So, for unknown output layer parameters \(\theta\), we infer
\[\pi(\theta|\mathcal{D})=\mathcal{N}(\theta|\theta_{\mathrm{MLE}},H^{-1}) \tag{46}\]
where \(H\) is the Hessian of the negative log-posterior with respect to \(\theta\) at \(\theta_{\mathrm{MLE}}\). The predictive distribution in the case of binary classification is thus given by
\[\pi(d=1|x,\mathcal{D})=\int\sigma(\langle\theta,\phi(x)\rangle)\pi(\theta|\mathcal{D})\mathrm{d}\theta, \tag{47}\]
where \(\pi(\theta|\mathcal{D})\) is approximated with (46) and the integral (47) is computed using a probit approximation as described in [25, 30].
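A standard closed-form choice here is the probit approximation \(\int\sigma(a)\,\mathcal{N}(a\mid\mu,s^{2})\,\mathrm{d}a\approx\sigma\big(\mu/\sqrt{1+\pi s^{2}/8}\big)\). The sketch below applies it to the Gaussian (46); the exact variant used in [25, 30] may differ in details:

```python
import numpy as np

def laplace_predictive(phi_x, theta_map, H_inv):
    # The logit a = <theta, phi(x)> is Gaussian under (46) with mean mu
    # and variance s2; apply the probit approximation to (47).
    mu = phi_x @ theta_map
    s2 = phi_x @ H_inv @ phi_x
    return 1.0 / (1.0 + np.exp(-mu / np.sqrt(1.0 + np.pi * s2 / 8.0)))
```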
**Ensemble learning.** As a non-Bayesian method, we investigate the uncertainty estimates from ensemble learning (or deep ensembles), introduced in [26, 33]. The technique uses an ensemble of deterministic networks, meaning that each network in the ensemble produces a point estimate rather than a distribution. We train an ensemble of \(M=5\) ReLU networks independently, using the entire training data set to train each network. Given an input point \(x_{n}\), target label \(d_{n}\) and cross-entropy loss \(\Psi(\tilde{\theta},x_{n},d_{n})\), we generate an adversarial sample
\[x_{n}^{*}=x_{n}+\xi\ \mathrm{sign}(\nabla_{x_{n}}\Psi(\tilde{\theta},x_{n},d_{n})),\]
where \(\xi\sim\mathcal{N}(0,0.01)\). As described in [26], training with adversarial samples, i.e., adding perturbations in the direction in which the loss is likely to increase, provides a 'better random' direction for smoothing predictive distributions. For \(M\) models with MLE-estimated parameters \(\{\tilde{\theta}_{m}\}_{m=1}^{M}\), we evaluate the ensemble predictions as
\[\pi(d=1\mid x) =M^{-1}\sum_{m=1}^{M}\pi(d=1\mid x,\tilde{\theta}_{m}), \tag{48}\] \[=M^{-1}\sum_{m=1}^{M}G_{\tilde{\theta}_{m}}(x),\]
where \(\{\tilde{\theta}_{m}\}_{m=1}^{M}\) denotes the parameters of the \(m^{\mathrm{th}}\) model in the ensemble. For classification, this corresponds to averaging the predicted probabilities.
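A sketch of the two ingredients of this baseline, the fast-gradient-sign adversarial sample and the ensemble average (48), is given below; it assumes networks with sigmoid outputs as in the earlier sketch and float targets of matching shape:

```python
import torch
import torch.nn as nn

def adversarial_sample(net, x, d, xi):
    # Perturb x in the direction that increases the cross-entropy loss.
    x = x.clone().requires_grad_(True)
    loss = nn.functional.binary_cross_entropy(net(x), d)
    loss.backward()
    return (x + xi * x.grad.sign()).detach()

def ensemble_predict(nets, x):
    # Deep-ensemble prediction (48): average the member probabilities.
    with torch.no_grad():
        return torch.stack([net(x) for net in nets]).mean(dim=0)
```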
We evaluate the model's confidence in its predictions on and away from the input data \(x\), where confidence is defined as the maximum predictive probability. For a binary classification problem, the confidence in a prediction can be expressed as \(\max_{i\in\{0,1\}}\pi(d=i\mid x)\). Ideally, one would want the model to convey low confidence for inputs far from the training data. We next report the results from these two experimental settings.
### Predictive uncertainty in ReLU networks
As shown in Figure 2 and Figure 3, the MLE predictions have very high confidence everywhere except in the region close to the decision boundary. Using deep ensembles of ReLU networks improves the accuracy of the prediction, resulting in a better decision boundary and increased uncertainty estimates near it. However, neither the MLE nor the ensemble learning predictions express uncertainty far away from the training data. On the other hand, last-layer Bayesian approximations, implemented with either Laplace, the homotopy moment-matching method, or the second-order dynamical sampler, assign relatively high confidence close to the training data and are uncertain otherwise. In other words, the region of high confidence is identified much closer to the training data. All Bayesian approximations closely follow the decision boundary obtained using the MLE estimates and thus do not negatively impact the network's predictive accuracy. Furthermore, the second-order dynamical sampler assigns higher confidence closer to the training data than any other algorithm and allows one to use a larger step size \(\Delta s\); it can therefore be recommended for further use. It can be noted that the last-layer Laplace approximation also improves predictive uncertainty but assigns lower confidence than our methods on and near the training data set.
Figure 2: Binary classification on a toy dataset using (a) MAP estimates, (b) ensemble of neural networks, last-layer Gaussian approximations over the weights obtained via (c) Laplace approximation, (d) moment matching method, (e) deterministic second-order dynamical sampler. Background colour depicts the confidence in classification while black line represents the decision boundary obtained for the toy classification problem.
Figure 3: Zoomed-out versions of the results in Figure 2 for binary classification on a toy data set using (a) MAP estimates, (b) ensemble of neural networks, last-layer Gaussian approximations over the weights obtained via (c) Laplace approximation, (d) moment matching method, (e) deterministic second-order dynamical sampler. Background colour depicts the confidence in classification.
In Figure 3, we show a zoomed out version of the results in Figure 2 to capture confidence levels significantly away from training data. It can be seen that the MLE estimate and ensemble learning demonstrate high confidence in the entire domain. The Bayesian approximations, even when applied to just the last-layer of a ReLU network, give desirable uncertainty estimates.
### Uncertainty on out-of-distribution data
In this experiment, we generate test samples on a large grid, such that the test points deviate significantly from the training samples. For each point in the test data set, we evaluate its Euclidean distance \(\delta>0\) from the nearest training point. We then evaluate the model's confidence in classifying these out-of-distribution (OOD) samples as a function of \(\delta\). The results can be found in Figure 4. It can be seen that the MLE technique is extremely overconfident in its predictions everywhere: the MLE-trained network yields arbitrarily overconfident predictions away from the training data and is thus not robust. Using an ensemble of ReLU networks also does not improve uncertainty estimates and has little effect on the confidence on OOD data. However, last-layer Bayesian approximations assign lower confidence to OOD test data. As the distance from the training set increases, the model becomes less confident in its predictions, and the level of confidence converges to a constant. Furthermore, our approaches assign maximum confidence (higher than the Laplace approximation) at \(\delta=0\), i.e., on the in-distribution data.
Figure 4: Confidence of MAP, ensembles of neural networks, last-layer Laplace approximation, moment matching method, and deterministic second-order dynamical sampler as functions of \(\delta\) over the test set. Thick blue lines and shades correspond to means and \(\pm\) standard deviations, respectively. Dashed black lines signify the desirable confidence for \(\delta\) sufficiently high.
### Effect of varying ensemble sizes on predictive uncertainty
We also analyze the effect of varying ensemble sizes on the inference of the last-layer network parameters; the results are shown in Figure 5.

It can be observed that as the ensemble size increases, the region of high confidence is identified much closer to the training data, while closely following the decision boundary obtained by the MLE estimator. For parameter dimension \(D=50\), as the ensemble size grows (here up to \(J=300\)), we observe that the interacting particle systems approach their mean-field limit. It can also be concluded that a large ensemble size (\(J>D\)) provides better uncertainty estimates than small ensemble sizes (\(J\leq D\)). However, even an ensemble size \(J<D\) still yields better uncertainty estimates near the decision boundary than those obtained by the MLE estimates.
## 7 Conclusions
In this paper, we have presented two extensions of EnKF and related interacting particle systems to Bayesian logistic regression. We have proven quantitative convergence rates for these systems to their mean field limits as the number of particles tends to infinity. We have employed both methods for Bayesian inference in ReLU networks with cross-entropy loss function. The numerical results confirm the effectiveness of the proposed methods for quantifying uncertainty. They have also shown that these uncertainty estimates make ReLU networks more robust with respect to distributional shifts.
**Acknowledgements:** The research has been partially funded by the Deutsche Forschungsgemeinschaft (DFG) - Project-ID 318763901 - SFB1294. The authors would also like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme _The Mathematical and Statistical Foundation of Future Data-Driven Engineering_, where work on this paper was undertaken. This work was supported by EPSRC grant no EP/R014604/1.

Figure 5: Effect of varying ensemble sizes (\(J\)) on confidence in prediction for binary classification using the proposed ensemble sampling methods for Bayesian inference over the network's output (last) layer.
|
2309.12849 | DeepOPF-U: A Unified Deep Neural Network to Solve AC Optimal Power Flow
in Multiple Networks | The traditional machine learning models to solve optimal power flow (OPF) are
mostly trained for a given power network and lack generalizability to today's
power networks with varying topologies and growing plug-and-play distributed
energy resources (DERs). In this paper, we propose DeepOPF-U, which uses one
unified deep neural network (DNN) to solve alternating-current (AC) OPF
problems in different power networks, including a set of power networks that is
successively expanding. Specifically, we design elastic input and output layers
for the vectors of given loads and OPF solutions with varying lengths in
different networks. The proposed method, using a single unified DNN, can deal
with different and growing numbers of buses, lines, loads, and DERs.
Simulations of IEEE 57/118/300-bus test systems and a network growing from 73
to 118 buses verify the improved performance of DeepOPF-U compared to existing
DNN-based solution methods. | Heng Liang, Changhong Zhao | 2023-09-22T13:22:15Z | http://arxiv.org/abs/2309.12849v1 | # DeepOPF-U: A Unified Deep Neural Network to Solve AC Optimal Power Flow in Multiple Networks
###### Abstract
The traditional machine learning models to solve optimal power flow (OPF) are mostly trained for a given power network and lack generalizability to today's power networks with varying topologies and growing plug-and-play distributed energy resources (DERs). In this paper, we propose DeepOPF-U, which uses _one_ unified deep neural network (DNN) to solve alternating-current (AC) OPF problems in different power networks, including a set of power networks that is successively expanding. Specifically, we design elastic input and output layers for the vectors of given loads and OPF solutions with varying lengths in different networks. The proposed method, using a single unified DNN, can deal with different and growing numbers of buses, lines, loads, and DERs. Simulations of IEEE \(57/118/300\)-bus test systems and a network growing from \(73\) to \(118\) buses verify the improved performance of DeepOPF-U compared to existing DNN-based solution methods.
Optimal power flow, deep neural networks
## I Introduction
There have been increasing research efforts in leveraging the powerful approximation capabilities of deep neural networks (DNNs) to learn the high-dimensional load-to-solution mappings of the important optimal power flow (OPF) problems. High-quality near-optimal OPF solutions can be instantly obtained from well-trained DNN models, significantly accelerating the solution process compared to traditional solvers [1, 2]. However, most existing DNN models were built and trained only for a specific power network with a given topology. With the expansion of buses, lines, loads, and distributed energy resources (DERs) and the corresponding changes in power network topologies, the existing DNN models need to be rebuilt through repeated training, incurring heavy storage and computation burdens.
Solving alternating-current (AC) OPF problems across multiple networks is challenging, due to the difference in network topologies, line admittances, lengths of load and solution vectors, et cetera. Several methods have been developed recently to partly address the topology variation issue. For instance, [5] resampled the training data and retrained the DNNs in real time to adapt to the emerging new topologies, which is often computationally expensive. Reference [6] integrated the topology labels into the training process, while [7] encoded discrete topology representations with line admittances into the DNN input. These methods, without retraining, can directly predict OPF solutions under flexible topologies. However, they are still limited to a fixed number of loads and generators and incapable of incorporating plug-and-play DERs in a network expansion setting. Reference [4] learned a linear controller for an expanding radial network, which is not applicable to networks with general topologies.
In this paper, we propose DeepOPF-U, a novel approach that utilizes _one_ unified DNN to learn the load-to-solution mappings of AC OPF problems across multiple and expanding networks with different numbers of buses, lines, loads, and generators. The contribution of this work includes:
* We design elastic DNN input and output layers with plug-and-play neurons, to adapt to the varying lengths of load and OPF solution vectors in different networks.
* We design an incremental training process to sequentially update the weights of a unified DNN for multiple and expanding networks. The unified DNN can predict the OPF solutions in multiple networks without being retrained.
Simulations on IEEE \(57/118/300\)-bus test systems and a network growing from \(73\) to \(118\) buses demonstrate the adaptive mapping capability of the unified DNN without compromising solution quality. To the best of our knowledge, this is the first work to solve AC OPF problems across multiple networks using a single DNN.
## II AC OPF across Multiple Power Networks
Consider a series of power networks indexed by \(k=1,...,K\). Denote network \(k\) by \(\{\mathcal{N}_{k},\mathcal{E}_{k}\},\) where \(\mathcal{N}_{k}\) and \(\mathcal{E}_{k}\) collect the buses and lines, respectively. We aim to solve the following AC OPF problem in each network \(k\):
\[(\textbf{P}_{k}):\min_{P_{i}^{g},Q_{i}^{g}}\sum_{i\in\mathcal{N}_{k}^{G}}C_{i}(P_{i}^{g}) \tag{1a}\] \[\text{s.t.}\sum_{j\in\mathcal{N}_{k}}\text{Re}\{V_{i}V_{j}^{*}Y_{ij}^{*}\}=P_{i}^{g}-P_{i}^{d},\ \forall i\in\mathcal{N}_{k},\] (1b) \[\sum_{j\in\mathcal{N}_{k}}\text{Im}\{V_{i}V_{j}^{*}Y_{ij}^{*}\}=Q_{i}^{g}-Q_{i}^{d},\ \forall i\in\mathcal{N}_{k},\] (1c) \[\underline{P}_{i}^{g}\leq P_{i}^{g}\leq\overline{P}_{i}^{g},\ \forall i\in\mathcal{N}_{k}^{G},\] (1d) \[\underline{Q}_{i}^{g}\leq Q_{i}^{g}\leq\overline{Q}_{i}^{g},\ \forall i\in\mathcal{N}_{k}^{G},\] (1e) \[\underline{V}_{i}\leq|V_{i}|\leq\overline{V}_{i},\ \forall i\in\mathcal{N}_{k},\] (1f) \[|V_{i}(V_{i}^{*}-V_{j}^{*})Y_{ij}^{*}|\leq\overline{S}_{ij},\ \forall(i,j)\in\mathcal{E}_{k} \tag{1g}\]
where \(\mathcal{N}_{k}^{G}\) denotes the set of buses with dispatchable generators. \(P_{i}^{g}\) and \(Q_{i}^{g}\), \(P_{i}^{d}\) and \(Q_{i}^{d}\) represent the active and reactive power generation, active and reactive power consumption at bus \(i\), respectively. \(V_{i}\) denotes the complex voltage at bus \(i\). \(Y_{ij}\) is the \((i,j)\)-th entry of the network admittance matrix \(\mathbf{Y}\). Constants \(\underline{x}\) and \(\overline{x}\) are the lower and upper limits of variable \(x\). The generation cost at bus \(i\in\mathcal{N}_{k}^{G}\) is \(C_{i}(P_{i}^{g})\).
Problem \(\textbf{P}_{k}\) varies significantly across different networks \(k\). Prior methods [1, 2, 7] solved OPF problems in a specific network by using a dedicated DNN. As a new network emerges or the network successively expands, a new DNN needs to be built and trained, which lacks generalizability.
To overcome the limitation above, we design _one_ unified DNN to learn the load-to-solution mappings of AC OPF problems across multiple and expanding networks. The proposed approach, called DeepOPF-U, is applicable to:
* Multiple networks with different numbers of buses, lines, loads, DERs, and different topologies;
* An expanding network with increasing numbers of buses, lines, loads, and DERs.
## III Unified DNN for Multiple Power Networks
We design a single unified DNN to learn the load-to-solution mappings of AC OPF problems in multiple networks. For problems \(\{\textbf{P}_{k},\ k=1,...,K\}\), suppose their numbers of buses satisfy \(N_{1}\leq\cdots\leq N_{K}\). Let \(U_{1}:=[P_{1}^{d\top},Q_{1}^{d\top}]^{\top}\) stack the active and reactive power loads of the smallest network \(\{\mathcal{N}_{1},\mathcal{E}_{1}\}\). Define \(U_{k}:=[U_{k-1}^{\top},u_{k}^{\top}]^{\top}\), \(W_{k}^{1}:=[W_{k-1}^{1},w_{k}^{1}]\), \(W_{k}^{o}:=[W_{k-1}^{o\top},w_{k}^{o\top}]^{\top}\), \(B_{k}^{o}:=[B_{k-1}^{o\top},b_{k}^{o\top}]^{\top}\), \(X_{k}:=[X_{k-1}^{\top},x_{k}^{\top}]^{\top}\) for \(k=2,\ldots,K\). The unified DNN is designed as a multi-layer fully-connected neural network:
\[U_{k} =[U_{k-1}^{\top},u_{k}^{\top}]^{\top}, \tag{2a}\] \[h^{1} =\sigma([W_{k-1}^{1},w_{k}^{1}]U_{k}+B^{1}),\] (2b) \[h^{i} =\sigma(W^{i}h^{i-1}+B^{i}),\ \forall i=2,...,L\] (2c) \[\left[\begin{array}{c}X_{k-1}\\ x_{k}\end{array}\right] =\sigma^{\prime}\left(\left[\begin{array}{c}W_{k-1}^{o}\\ w_{k}^{o}\end{array}\right]h^{L}+\left[\begin{array}{c}B_{k-1}^{o}\\ b_{k}^{o}\end{array}\right]\right) \tag{2d}\]
where \(u_{k}\) and \(x_{k}\) are the increments of input and output vector lengths from network \((k-1)\) to \(k\). Correspondingly, \(w_{k}^{1}\), \(w_{k}^{o}\) and \(b_{k}^{o}\) are parameters of the plug-and-play neurons to be adaptively activated/deactivated according to the input and output lengths. ReLU function \(\sigma(\cdot)\) and sigmoid function \(\sigma^{\prime}(\cdot)\) are used as activation functions of the hidden layers and the output layer, respectively. The elastic input and output layers (2b), (2d) embed \(K\) DNNs into a unified DNN in an incremental manner. The input, output, and parameters of the unified DNN are demonstrated in Figure 1.
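A minimal PyTorch sketch of the elastic architecture (2) is given below. It allocates the input and output layers at the sizes of the largest network and activates leading slices of the weight matrices for smaller networks; realizing the plug-and-play neurons by slicing is our assumption about one possible implementation:

```python
import torch
import torch.nn as nn

class UnifiedDNN(nn.Module):
    def __init__(self, in_sizes, out_sizes, hidden=(1024, 512, 256)):
        super().__init__()
        self.in_sizes, self.out_sizes = in_sizes, out_sizes
        self.inp = nn.Linear(max(in_sizes), hidden[0])
        self.hid = nn.Sequential(
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], hidden[2]), nn.ReLU(),
        )
        self.out = nn.Linear(hidden[2], max(out_sizes))

    def forward(self, U_k, k):
        n_in, n_out = self.in_sizes[k], self.out_sizes[k]
        # (2b): only the first n_in columns of W^1 are active for network k.
        h = torch.relu(nn.functional.linear(
            U_k, self.inp.weight[:, :n_in], self.inp.bias))
        h = self.hid(h)
        # (2d): only the first n_out rows of W^o and B^o are active.
        return torch.sigmoid(nn.functional.linear(
            h, self.out.weight[:n_out], self.out.bias[:n_out]))
```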
The unified DNN is trained to minimize the following loss function corresponding to power network \(k\):
\[L_{k}:=\sum_{i\in\mathcal{N}_{k}}\Big{[}(|\hat{V}_{i}|-|V_{i}|)^{2}+\gamma(\hat{\theta}_{i}-\theta_{i})^{2}\Big{]}\]
where \(|\hat{V}_{i}|\) and \(\hat{\theta}_{i}\) are the voltage magnitude and phase angle at bus \(i\) predicted by the DNN; \(|V_{i}|\) and \(\theta_{i}\) are their ground truth; factor \(\gamma\) tunes the relative importance of the two terms.
Inspired by the once-for-all idea [3], we design an incremental training strategy, which sequentially computes the losses \(L_{k}\) and backpropagates it immediately after each \(k\) to update the corresponding DNN parameters. Specifically, for each \(k\), the parameters for the input layer are updated as:
\[(\textbf{D}_{k}):W_{k}^{1}\gets W_{k}^{1}-\alpha\nabla_{W_{k}^{1}}L_{k}\]
where \(\alpha>0\) is the learning rate; similarly for the update of output-layer parameters \(W_{k}^{o}\) and \(B_{k}^{o}\). After the complete training process \(\textbf{D}_{k}\), \(k=1,...,K\), the DNN can predict OPF solutions to \(\textbf{P}_{k}\), \(k=1,...,K\) without retraining.
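The incremental updates \(\textbf{D}_{k}\) then amount to interleaving backpropagation steps across the \(K\) networks. The sketch below is one possible realization: it assumes each data loader yields load vectors together with ground-truth voltage magnitudes and angles, and that the output vector stacks all \(|V_{i}|\) first and all \(\theta_{i}\) second (an assumed layout):

```python
import torch

def incremental_train(dnn, loaders, gamma=1.0, lr=1e-3):
    # loaders[k] yields (U_k, V_mag, V_ang) batches for network k.
    opt = torch.optim.Adam(dnn.parameters(), lr=lr)
    for k, loader in enumerate(loaders):          # k = 0, ..., K-1
        for U_k, V_mag, V_ang in loader:
            pred = dnn(U_k, k)
            n = V_mag.shape[1]                    # number of buses in network k
            loss = ((pred[:, :n] - V_mag) ** 2).sum() \
                 + gamma * ((pred[:, n:2 * n] - V_ang) ** 2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()                            # realizes the update D_k
```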
## IV Numerical Experiments
We consider two sets of test systems. The first set consists of the IEEE \(57\)-bus, \(118\)-bus, and \(300\)-bus feeders from PYPOWER. The second set consists of \(73\)-bus, \(90\)-bus, \(106\)-bus, and \(118\)-bus feeders, which (except the \(118\)-bus feeder itself) are all formed by removing buses (and loads and DERs on them) and lines from the IEEE \(118\)-bus feeder, to emulate a successively expanding case.
For the first set of feeders, load data are uniformly sampled within \([90\%,110\%]\) of the original loads. We sample \(150,000\) data points, \(50,000\) for each feeder, among which \(80\%\) form the training set and \(20\%\) the test set. For the second set of feeders, we use the 5-minute load profile of California ISO on May 20, 2020, which varies by up to \(54\%\) from 00:00 to 23:59, to scale the original load at each bus. This serves as the base load and the test set. In each 5-minute slot, \(400\) data points are sampled uniformly within \([90\%,110\%]\) of the base load, as the training set. The conventional MIPS solver in PYPOWER is used to obtain the ground-truth OPF solutions at the sampled load data points.
The unified DNNs are built on PyTorch, with details explained in Table I. The input and output layers have elastic lengths in the given ranges. The length of the output layer is twice the number of buses of the corresponding network, as both voltage magnitudes \(|V_{i}|\) and phase angles \(\theta_{i}\) are output. We apply the Adam optimizer with initial learning rate \(\alpha=1\times 10^{-3}\), mini-batch size \(100\), and \(500\) epochs. The learning rate halves every \(50\) epochs.
Following [2, 7], we use the metrics below to assess the performance. 1) _Optimality loss_: the gaps between the OPF objective values predicted by DeepOPF-U and those returned by MIPS in PYPOWER; the closer to 0, the better. 2) _Constraint satisfaction_: the percentage of inequality constraints (1d)-(1g) being satisfied, including the active and reactive power generation limits (\(\eta_{P^{g}}\) and \(\eta_{Q^{g}}\)), voltage magnitude limits (\(\eta_{V}\)), and branch flow magnitude limits (\(\eta_{S_{\ell}}\)); the closer to \(100\), the better. 3) _Load satisfaction_: the average
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline \#buses & Input layer & Hidden layers & Output layer \\ \hline
57/118/300 & 84–374 & 1024/512/256 & 114–600 \\
73/90/106/118 & 114–189 & 1024/512/256 & 146–236 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Details of unified DNNs built on PyTorch.
Fig. 1: Demonstration of the unified DNN (2) (omitting activation functions). The sizes of parameter matrices \(W_{k}^{1}\), \(W_{k}^{o}\) and vector \(B_{k}^{o}\) are elastic.
percentage of active (\(\eta_{P^{d}}\)) and reactive (\(\eta_{Q^{d}}\)) power loads being satisfied; the closer to \(100\), the better. 4) _Speed-up_: the average number of times by which a DNN accelerates OPF solution compared to MIPS; the higher, the better.
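As an illustration, the satisfaction-style percentages can be computed with a helper of the following form (a sketch; mapping the predicted voltages back to generations and branch flows is problem-specific and omitted here).

```python
import numpy as np

def pct_within(x, lo, hi):
    """Percentage of entries of x that fall inside [lo, hi]."""
    return 100.0 * np.mean((x >= lo) & (x <= hi))

# e.g. eta_V = pct_within(v_hat, v_min, v_max) for the voltage magnitude limits
```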
### _Performance across Multiple Feeders_
We compare the proposed DeepOPF-U with DeepOPF-V [2] on the first set of feeders to verify the capability of the former across multiple feeders. The results are shown in Table II. The configurations of the two methods are set the same for a fair comparison. DeepOPF-U can predict AC OPF solutions in three feeders using a single DNN without compromising solution quality, while DeepOPF-V can only predict solutions for each single feeder with a separate DNN. As a result, DeepOPF-U uses \(44\%\) of the storage space of DeepOPF-V, a fraction that shrinks further as more feeders are included. Meanwhile, DeepOPF-U achieves a computation speed-up (compared to the conventional MIPS solver) similar to that of DeepOPF-V.
### _Performance in Expanding Feeders_
Consider the second set of feeders, which successively expands from \(73\), \(90\), \(106\), to \(118\) buses. Figure 2 shows their OPF objective values over time, solved by DeepOPF-U and the conventional MIPS solver. The result shows that DeepOPF-U can effectively track the OPF solutions in a time-varying (and load-varying) setting, with only minor differences from the ground-truth solutions obtained by MIPS.
### _Comparison with State-of-the-Art Methods_
We compare the performance of DeepOPF-U with the state-of-the-art DACOPF [1] and EACOPF [8] (at their default settings in the respective papers) on the IEEE \(2,000\)-bus feeder, as shown in Table III. Note that DeepOPF-U can solve OPF problems in the \(57/118/300/2000\)-bus feeders with one unified NN, while the other two methods can only handle one feeder per NN. With such better generalizability, DeepOPF-U still achieves good solution quality, especially stricter satisfaction of voltage constraints, and a significantly higher computation speed-up (compared to MIPS) than the other two methods.
## V Conclusion
We proposed DeepOPF-U, a unified DNN to learn the load-to-solution mappings of AC OPF problems across multiple and expanding power networks. Simulation results on IEEE \(57/118/300\)-bus test systems and a system expanding from \(73\) to \(118\) buses demonstrate the adaptive mapping capability, good solution quality, and satisfactory computational speed of the unified DNN.
In the future, we will investigate the largest number of different power networks that can be effectively integrated into a unified DNN. We are also interested in exploring the similarities between OPF problems in different power networks, to characterize the performance of the unified DNN.
|
2310.00137 | On the Disconnect Between Theory and Practice of Neural Networks: Limits
of the NTK Perspective | The neural tangent kernel (NTK) has garnered significant attention as a
theoretical framework for describing the behavior of large-scale neural
networks. Kernel methods are theoretically well-understood and as a result
enjoy algorithmic benefits, which can be demonstrated to hold in wide synthetic
neural network architectures. These advantages include faster optimization,
reliable uncertainty quantification and improved continual learning. However,
current results quantifying the rate of convergence to the kernel regime
suggest that exploiting these benefits requires architectures that are orders
of magnitude wider than they are deep. This assumption raises concerns that
architectures used in practice do not exhibit behaviors as predicted by the
NTK. Here, we supplement previous work on the NTK by empirically investigating
whether the limiting regime predicts practically relevant behavior of
large-width architectures. Our results demonstrate that this is not the case
across multiple domains. This observed disconnect between theory and practice
further calls into question to what degree NTK theory should inform
architectural and algorithmic choices. | Jonathan Wenger, Felix Dangel, Agustinus Kristiadi | 2023-09-29T20:51:24Z | http://arxiv.org/abs/2310.00137v2 | # On the Disconnect Between Theory and Practice of Overparametrized Neural Networks
###### Abstract
The infinite-width limit of neural networks (NNs) has garnered significant attention as a theoretical framework for analyzing the behavior of large-scale, overparametrized networks. By approaching infinite width, NNs effectively converge to a linear model with features characterized by the neural tangent kernel (NTK). This establishes a connection between NNs and kernel methods, the latter of which are well understood. Based on this link, theoretical benefits and algorithmic improvements have been hypothesized and empirically demonstrated in synthetic architectures. These advantages include faster optimization, reliable uncertainty quantification and improved continual learning. However, current results quantifying the rate of convergence to the kernel regime suggest that exploiting these benefits requires architectures that are orders of magnitude wider than they are deep. This assumption raises concerns that practically relevant architectures do not exhibit behavior as predicted via the NTK. In this work, we empirically investigate whether the limiting regime either describes the behavior of large-width architectures used in practice or is informative for algorithmic improvements. Our empirical results demonstrate that this is _not_ the case in optimization, uncertainty quantification or continual learning. This observed disconnect between theory and practice calls into question the practical relevance of the infinite-width limit.
## 1 Introduction
The behavior of large-scale, overparametrized neural networks (NNs) has for a long time been poorly understood theoretically. This is in stark contrast to their state-of-the-art performance on tasks like image classification (He et al., 2016; Zagoruyko and Komodakis, 2016), natural language processing (Devlin et al., 2019; Sun et al., 2019), as well as generative and sequence modeling (Brown et al., 2020; Touvron et al., 2023). The seminal work of Jacot et al. (2018) established a link between the evolution of NNs during training and kernel methods by considering networks with infinite width. In this limit, NNs effectively converge to linear models with fixed features such that their predictions are equivalent to those made by a Gaussian process (GP) model using the neural tangent kernel (NTK). Kernel methods and GPs are theoretically well-understood (Rasmussen and Williams, 2005). Consequently, this finding has led to a flurry of research interest in the NTK with the hope of an improved understanding of the behavior of NNs (e.g. Du et al., 2019; Zhou et al., 2020; Bowman and Montufar, 2022b; Mirzadeh et al., 2022).
Kernel methods enjoy several theoretical benefits which if applicable to NNs would be desirable. First, training a linear model or kernel regressor requires solving a quadratic optimization problem, which reduces to solving a linear system with the kernel matrix evaluated pairwise at the training data (Rasmussen and Williams, 2005). Conceptually this simplifies training significantly as the well-studied machinery of convex optimization and numerical linear algebra can be exploited. This is in contrast to the challenges of large-scale stochastic optimization, which compared to the convex setting suffers from slow convergence, requires manual tuning, and choosing an optimizer from a long list of available methods (Schmidt et al., 2021). Second, via the connection to GPs in the case of regression, uncertainty can be quantified via the posterior covariance defined through the NTK. As for prediction, uncertainty quantification then reduces to well-studied numerical methods (Rasmussen and Williams, 2005), unlike Bayesian NNs which generally suffer from similar issues as optimization (Zhang et al., 2020; Kristiadi et al., 2022). Third, data often becomes available continually and we
want to incorporate it into the model rather than retrain from scratch. This _continual learning_ setting in practice often leads to a drop in performance on previous tasks, known as _catastrophic forgetting_ (McCloskey and Cohen, 1989; Goodfellow et al., 2013). It has been observed that large-scale over-parametrized networks forget less catastrophically (Ramasesh et al., 2022; Mirzadeh et al., 2022), and this has been hypothesized to be a consequence of these NNs behaving according to the NTK regime. If that were the case, the amount of worst-case forgetting could be predicted theoretically (Evron et al., 2022; Le et al., 2023) and algorithmically mitigated (Bennani et al., 2020; Doan et al., 2021).
Given the advantageous network properties and algorithmic implications in terms of training, uncertainty quantification, and continual learning close to the kernel regime, the question becomes _when_ a network architecture satisfies the necessary assumptions. Loosely speaking, most theoretical results on NN convergence to a kernel regressor give rates of the form \(O(1/\sqrt{m})\) in the (minimum) width of the hidden layers \(m\) (Du et al., 2019; Lee et al., 2019; Bowman and Montufar, 2022). However, this asymptotic notation suppresses a dependence on the _network depth_ \(L\), which is generally at least _polynomial_ (Arora et al., 2019) or even _exponential_ (Bowman and Montufar, 2022). Even for quite shallow networks, this requires layer widths that are orders of magnitude larger than any of the common architectures (such as WideResNets). Figure 1 illustrates that even shallow networks, if not sufficiently wide, can behave very differently from their infinite-width Gaussian process limit. This prompts the important question of whether assumptions based on the kernel regime, and methods derived thereof, apply to deep architectures that are used in practice.
**Contribution** In this work, we consider three areas for which the neural tangent kernel perspective on neural networks promises to be informative: optimization, uncertainty quantification and continual learning. We empirically evaluate whether the infinite-width regime either describes the behavior of large-width architectures used in practice or is useful for algorithm design. We find that in all three domains, assumptions based on NTK theory do _not_ translate to predictable phenomena or improved performance. This disconnect between theory and practice challenges the significance of overparametrization theory when applied to common architectures. We hope our negative findings can serve as a call to action for theory development and as a cautionary tale for algorithm design.
**Limitations** Our work studies architectures that are _currently_ being used in practice. This does _not_ mean that future architectures with large widths are not described well via the kernel regime. However, achieving competitive performance with wide architectures is a challenge, likely due to reduced representation learning Pleiss and Cunningham (2021); Zavatone-Veth et al. (2021); Noci et al. (2021); Coker et al. (2022). Further, we also do _not_ claim that methods we consider fail to be competitive in _any_ setting, rather just that their motivation via the kernel regime is unsuitable for practical architecture choices and problems. They may work well on specific choices of models and datasets.
## 2 Overparametrization Theory: An Overview
Let \(f:\mathcal{X}\times\Theta\rightarrow\mathcal{Y}\) be a neural network (NN) with input space \(\mathcal{X}\subseteq\mathbb{R}^{D}\), output space \(\mathcal{Y}\subseteq\mathbb{R}^{C}\) and parameter space \(\Theta\subseteq\mathbb{R}^{P}\). Assume we linearize \(f\) around a parameter vector \(\mathbf{\theta}_{0}\in\Theta\), i.e.
\[f(\mathbf{x};\mathbf{\theta})\approx f_{\text{lin}}(\mathbf{x};\mathbf{\theta})\coloneqq f( \mathbf{x};\mathbf{\theta}_{0})+\mathbf{J}(\mathbf{x};\mathbf{\theta}_{0})(\mathbf{\theta}-\mathbf{\theta }_{0}) \tag{1}\]
Figure 1: _Infinitely-wide NN in theory and its finite-width approximation in practice.1 The two models make very different predictions about the data-generating latent function, suggesting that the finite-width NN with a commonly selected architecture for real-world regression (\(L=3,m=128,P=21\)M, Li et al., 2023) is not well-described by the kernel regime. Increasing the width by an order of magnitude does not significantly improve the approximation via the empirical NTK._
where \(\mathbf{J}(\mathbf{x};\mathbf{\theta}_{0})\coloneqq(\nicefrac{{\partial f_{\mathbf{\theta}}(\mathbf{x})}}{{ \partial\mathbf{\theta}}})_{\mathbf{\theta}=\mathbf{\theta}_{0}}\in\mathbb{R}^{C\times P}\). When \(\mathbf{\theta}\) is close to \(\mathbf{\theta}_{0}\), the linear model \(f_{\mathrm{lin}}(\mathbf{x};\mathbf{\theta})\) with features defined by the network's Jacobian \(\mathbf{J}(\mathbf{x};\mathbf{\theta}_{0})\) is a good approximation of \(f(\mathbf{x};\mathbf{\theta})\). Consider a fully connected neural network \(f_{\mathrm{MLP}}(\mathbf{x};\mathbf{\theta})=\mathbf{h}^{L}(\mathbf{x}^{L-1})\) defined recursively as
\[\mathbf{h}^{\ell}(\mathbf{x}^{\ell-1})=\mathbf{W}^{\ell}\mathbf{x}^{\ell-1}+\mathbf{b}^{\ell}, \qquad\mathbf{x}^{\ell-1}(\mathbf{h}^{\ell-1})=\varphi(\mathbf{h}^{\ell-1}),\qquad\mathbf{x}^ {0}=\mathbf{x} \tag{2}\]
for \(\ell=L,\ldots,1\) with parameters \(\mathbf{\theta}=\{\mathbf{W}^{\ell}=\nicefrac{{1}}{{\sqrt{m_{\ell-1}}}}\mathbf{V}^{\ell} \}_{\ell=1}^{L}\cup\{\mathbf{b}^{\ell}\}_{\ell=1}^{L}\), s.t. \(\mathbf{V}^{\ell}_{ij},\mathbf{b}^{\ell}_{i}\sim\mathcal{N}(0,1)\), layer widths \(m_{\ell}\) and activation function \(\varphi\).2 Remarkably, Jacot et al. (2018) showed that for infinitely wide fully connected NNs, the parameters remain sufficiently close to their initialization \(\mathbf{\theta}_{0}\) during training via gradient descent. This means we can (approximately) understand the properties of a wide NN \(f_{\mathrm{MLP}}(\mathbf{x};\mathbf{\theta})\) by considering a much simpler-to-understand _linear_ model with features defined by the Jacobian \(\mathbf{J}(\mathbf{x};\mathbf{\theta}_{0})\) instead. Or more precisely, from a function space perspective, in the infinite width limit the behavior of a fully connected NN is described by the (deterministic) _neural tangent kernel_\(K_{\text{NTK}}\), defined as the limit in probability of the _finite-width_ or _empirical NTK_
Footnote 2: This normalized form of the weight matrices is known as the _NTK parametrization_ (NTP).
\[K^{\mathbf{\theta}}_{\text{eNTK}}(\mathbf{x},\mathbf{x}^{\prime})\coloneqq\mathbf{J}(\mathbf{x}; \mathbf{\theta})\mathbf{J}(\mathbf{x}^{\prime};\mathbf{\theta})^{\top}\stackrel{{ P}}{{\longrightarrow}}K_{\text{NTK}}(\mathbf{x},\mathbf{x}^{\prime}) \quad\text{as }m_{1},\ldots,m_{L}\to\infty.\]
This is known as the linear or _kernel regime_. In this regime, at initialization, the implicit prior over functions defined by the network is given by a Gaussian process \(\mathcal{GP}(0,K_{\text{NTK}})\) with zero-mean and covariance function defined by the NTK. Further, the (continuous-time) training dynamics of the network can be described via the differential equation \(\partial_{t}f(\mathbf{x};\mathbf{\theta}_{t})=-K_{\text{NTK}}(\mathbf{x},\mathbf{X})\nabla_{f} \mathcal{L}(f(\mathbf{X};\mathbf{\theta}_{t}))\) i.e. the optimization trajectory of \(f(\mathbf{x};\mathbf{\theta})\) is given by kernel gradient descent with respect to the loss function \(\mathcal{L}\). In the case of square loss regression on a training dataset \((\mathbf{X},\mathbf{y})\), this is a linear ODE, which in the limit of infinite training \(t\to\infty\) admits a closed-form solution. The network prediction is equivalent to the mean function
\[\lim_{t\to\infty}f(\mathbf{x};\mathbf{\theta}_{t})=\mu_{*}(\mathbf{x})\coloneqq f(\mathbf{x}; \mathbf{\theta}_{0})+K_{\text{NTK}}(\mathbf{x},\mathbf{X})K_{\text{NTK}}(\mathbf{X},\mathbf{X})^{ -1}(\mathbf{y}-f(\mathbf{X};\mathbf{\theta}_{0})) \tag{3}\]
of a GP posterior \(\mathcal{GP}(\mu_{*},K_{*})\), resulting from conditioning the prior \(\mathcal{GP}(0,K_{\text{NTK}})\) on observations \(\mathbf{y}=f_{*}(\mathbf{X})\) from the latent function \(f_{*}\) generating the data. These results for fully connected NNs have since been extended to nearly all architectures currently used in practice, such as CNNs, RNNs, and GNNs (Yang and Littwin, 2021).
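For intuition, the empirical NTK can be assembled directly from Jacobian products. Below is a sketch using `torch.func` for a small scalar-output MLP; the architecture, widths, and batch sizes are illustrative choices, and dense Jacobians are used for clarity instead of more scalable Jacobian-vector products.

```python
import torch
from torch.func import functional_call, jacrev

net = torch.nn.Sequential(
    torch.nn.Linear(2, 512), torch.nn.ReLU(), torch.nn.Linear(512, 1)
)
params = dict(net.named_parameters())

def f(p, x):                                   # f(x; θ) with explicit parameters
    return functional_call(net, p, (x,)).squeeze(-1)

def flat_jac(p, x):                            # J(x; θ) as an (N, P) matrix
    jac = jacrev(f)(p, x)                      # dict of per-parameter Jacobians
    return torch.cat([j.reshape(x.shape[0], -1) for j in jac.values()], dim=1)

x1, x2 = torch.randn(8, 2), torch.randn(8, 2)
K_entk = flat_jac(params, x1) @ flat_jac(params, x2).T   # K_eNTK(x1, x2)
```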
**Implications for Training, Uncertainty Quantification and Continual Learning** The connection to GP regression with the NTK demonstrates why the kernel regime is powerful as a theoretical framework. First, training a neural network simplifies to solving a linear system, or equivalently a convex optimization problem (assuming \(K_{\text{NTK}}\) is positive (semi-)definite), since
\[K_{\text{NTK}}(\mathbf{X},\mathbf{X})^{-1}(\mathbf{y}-f(\mathbf{X};\mathbf{\theta}_{0}))=\arg\min_{ \mathbf{v}}\tfrac{1}{2}\mathbf{v}^{\top}K_{\text{NTK}}(\mathbf{X},\mathbf{X})\mathbf{v}-(\mathbf{y}-f( \mathbf{X};\mathbf{\theta}_{0}))^{\top}\mathbf{v} \tag{4}\]
which can be solved using well-understood, fast converging algorithms from numerical analysis (Nocedal and Wright, 2021). This is in contrast to the challenges of stochastic, non-convex optimization (Schmidt et al., 2021). Second, one often cited limitation of NNs is their lack of uncertainty quantification (Hein et al., 2019; Kristiadi et al., 2022; 2020). The connection to the posterior mean of a GP in the kernel regime when training to convergence (3) provides a strategy for Bayesian deep learning (Lee et al., 2018; Khan et al., 2019), by using the posterior covariance function
\[K_{*}(\mathbf{x},\mathbf{x}^{\prime})\coloneqq K_{\text{NTK}}(\mathbf{x},\mathbf{x}^{\prime})-K_ {\text{NTK}}(\mathbf{x},\mathbf{X})K_{\text{NTK}}(\mathbf{X},\mathbf{X})^{-1}K_{\text{NTK}}(\mathbf{ X},\mathbf{x}^{\prime}) \tag{5}\]
to quantify uncertainty. Finally, in a continual learning problem, the similarity between tasks in the kernel regime is measured via the NTK, which in turn describes the amount of catastrophic forgetting when training on new tasks (Doan et al., 2021).
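Given any kernel routine `K` (such as the empirical NTK above), the posterior mean (3) and covariance (5) reduce to standard GP algebra. A sketch reusing the `torch` import from above; the jitter term for numerical stability is our addition.

```python
def ntk_gp_posterior(K, X, Xs, y, f0, jitter=1e-6):
    """Posterior mean (3) and covariance (5) under the NTK-GP prior."""
    Kxx = K(X, X) + jitter * torch.eye(X.shape[0])
    Ksx = K(Xs, X)
    alpha = torch.linalg.solve(Kxx, y - f0(X))
    mean = f0(Xs) + Ksx @ alpha                               # Eq. (3)
    cov = K(Xs, Xs) - Ksx @ torch.linalg.solve(Kxx, Ksx.T)    # Eq. (5)
    return mean, cov

# e.g. K = lambda A, B: flat_jac(params, A) @ flat_jac(params, B).T
#      f0 = lambda A: f(params, A).detach()
```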
**Convergence to the Kernel Regime** When should we expect a neural network's behavior to be well-described by the NTK? We can characterize how quickly a network approaches the kernel regime as a function of its (minimum) width \(m=\min_{\ell\in\{1,\ldots,L-1\}}m_{\ell}\). The typical rate of convergence of the finite-width NTK at initialization to the NTK is
\[|K^{\mathbf{\theta}_{0}}_{\text{eNTK}}(\mathbf{x},\mathbf{x}^{\prime})-K_{\text{NTK}}(\mathbf{x}, \mathbf{x}^{\prime})|=\tilde{O}(\nicefrac{{1}}{{\sqrt{m}}}) \tag{6}\]
either pointwise (Du et al., 2019; Arora et al., 2019; Huang and Yau, 2020) or uniform (Buchanan et al., 2021; Bowman and Montufar, 2022a;b) with high probability. These results assume an _overparametrized_ NN with width \(m=\Omega(\mathrm{poly}(N))\) significantly exceeding the number of training datapoints \(N\).3 Note that the asymptotic notation in (6) suppresses a dependence on the (constant) depth \(L\) of the NN. This dependence of the width on the depth is _polynomial_ (e.g. \(m=\Omega(L^{6})\) in Arora et al., 2019) or even _exponential_ (Bowman and Montufar, 2022), which suggests that to approach the kernel regime, a large network width is required already at moderate depth.
Footnote 3: Note that Bowman and Montufar (2022) relax this requirement to \(m\approx N\) via the use of a stopping time.
For most architectures, an analytical/efficient-to-evaluate expression for the NTK is not known.4 Therefore, in practice, the finite-width NTK \(K^{\mathbf{\theta}}_{\text{eNTK}}\approx K_{\text{NTK}}\) is used as an approximation. However, as Fig. 1 illustrates, the prediction of a finite-width NN and the associated uncertainty given by the empirical NTK can be very different from the network's theoretical infinite-width limit. Therefore, making assumptions based on the kernel regime can potentially be misleading.
Footnote 4: A fully connected neural network with ReLU activations being a notable exception (Lee et al., 2018).
## 3 Connecting Theory and Practice
To empirically study whether the predictions from the kernel regime about the behavior of overparametrized NNs are reproducible in architectures used in practice, we take a closer look at training, uncertainty quantification and continual learning. For each of these, the kernel regime either makes predictions about the behavior of the network, motivates certain algorithms, or both.
### Training: Second-order Optimization
The empirical risk is typically a convex function of the network output, but generally non-convex in the network's parameters. In addition, stochastic approximations are often necessary due to memory constraints. However, for a NN close to the kernel regime, informally, the loss landscape becomes more convex, since the network approaches a linear model. In fact, for square loss the problem becomes quadratic, see (4). Using this intuition about the kernel regime, Du et al. (2019) have shown that gradient descent, a first-order method, can achieve zero training loss in spite of non-convexity. First-order methods, such as SGD and ADAM, are state-of-the-art for deep learning optimization (Schmidt et al., 2021). This is in contrast to "classical" convex optimization, in which second-order methods are favored due to their fast convergence (e.g. Nesterov, 2008, 2021). If NNs in practice are described well theoretically via the kernel regime, this may seem like a missed opportunity, since the near-convexity of the problem would suggest second-order optimizers to be an attractive choice.
There are multiple challenges for making second-order methods practical. They are less stable under noise--predominant in deep learning where data is fed in mini-batches--have higher per-iteration costs, and are often more complex to implement efficiently. Here, we investigate whether the theoretical argument in favor of second-order methods applies to real-world networks in that they are sufficiently close to the kernel regime. We exclude the aforementioned additional challenges, which amount to an entire research field (e.g. Martens and Grosse, 2015; Gupta et al., 2018; Ren and Goldfarb, 2021), since in the overparametrization regime approximate second-order methods that overcome these challenges can be shown to exhibit similar behavior to their deterministic counterparts (Karakida and Osawa, 2020).
**Fast Convergence of NGD in the Kernel Regime** To empirically test whether (practical) NNs can be efficiently optimized via second-order optimization as predicted by theory in the infinite-width limit, we consider natural gradient descent (NGD). Zhang et al. (2019) studied the convergence behavior of NGD theoretically. They give conditions for fast convergence of finite-step NGD, which extend to approximate NGD methods such as KFAC (Martens and Grosse, 2015). For simplicity, we focus on the special case of NNs with scalar-valued output and mean-squared error. Consider a network \(f(\mathbf{\theta},\mathbf{x}):=f(\mathbf{x};\mathbf{\theta})\) with flattened parameters \(\mathbf{\theta}\in\mathbb{R}^{P}\). For a dataset \(\{(\mathbf{x}_{n},y_{n})\}_{n=1}^{N}\), we minimize the empirical risk \(\mathcal{L}(\mathbf{\theta})=\nicefrac{{1}}{{2N}}\|\mathbf{f}-\mathbf{y}\|_{2}^{2}\) where \(\mathbf{f}(t):=(f(\mathbf{\theta}(t),\mathbf{x}_{1})\)\(\ldots\)\(f(\mathbf{\theta}(t),\mathbf{x}_{N}))^{\top}\in\mathbb{R}^{N}\) and \(\mathbf{y}:=(y_{1}\)\(\ldots\)\(y_{N})^{\top}\). Zhang et al. (2019) describe two conditions to ensure fast convergence of finite-step NGD to a global minimum which apply to _arbitrary_ architectures:
1. _Full row-rank of the network Jacobian_ \(\mathbf{J}(\mathbf{X};\mathbf{\theta}(0))\in\mathbb{R}^{N\times P}\) _at initialization_, implying \[\lambda_{\text{min}}(\mathbf{G}(0))>0,\] (7) where \(\mathbf{G}(0)=K_{\text{enTK}}^{\mathbf{\theta}(0)}(\mathbf{X},\mathbf{X})\) and restricting the trajectory to be close to initialization.5 Footnote 5: Note, that this implicitly assumes \(P\geq N\), i.e. overparametrization.
2. _Stable Jacobian_, \(\exists 0\leq C\leq\nicefrac{{1}}{{2}}\) such that \(\forall\mathbf{\theta}:\left\|\mathbf{\theta}-\mathbf{\theta}(0)\right\|_{2}\leq\rho \coloneqq\nicefrac{{3\|\mathbf{y}-\mathbf{f}(0)\|_{2}}}{{\sqrt{\lambda_{\text{min}}( \mathbf{G}(0))}}}\)
\[\left\|\mathbf{J}(\mathbf{\theta})-\mathbf{J}(0)\right\|_{2}\leq\tfrac{C}{3}\sqrt{\lambda_ {\text{min}}(\mathbf{G}(0))}\,. \tag{8}\]
The smaller \(C\), the 'closer' to linear the network is around initialization; for \(C=0\) it is exactly linear.
We can evaluate both conditions in a scalable, matrix-free fashion using standard functions of automatic differentiation frameworks (see Section A.1). As a proxy for (8), we compute \(C^{\prime}(\mathbf{\theta})\coloneqq\nicefrac{{3\|\mathbf{J}(\mathbf{\theta})-\mathbf{J}(0)\|_ {2}}}{{\sqrt{\lambda_{\text{min}}(\mathbf{G}(0))}}}\) with \(\mathbf{\theta}\) drawn uniformly from a sphere with radius \(\rho\). If \(C^{\prime}(\mathbf{\theta})>\nicefrac{{1}}{{2}}\) for any \(\mathbf{\theta}\), then \(C\not\in[0;\nicefrac{{1}}{{2}}]\) and the network violates the stable Jacobian condition. Figures 2 and 3 summarize our findings which we now discuss in more detail.
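A dense (rather than matrix-free) sketch of both checks, reusing `f`, `params`, and `flat_jac` from the empirical-NTK sketch in Section 2; the uniform draw from the sphere of radius \(\rho\) is implemented by rescaling a Gaussian perturbation, and the toy data below are placeholders.

```python
X, y = torch.randn(16, 2), torch.randn(16)

J0 = flat_jac(params, X)                        # J(X; θ(0))
lam_min = torch.linalg.eigvalsh(J0 @ J0.T)[0]   # condition (7): λ_min(G(0)) > 0

rho = 3 * torch.norm(y - f(params, X)) / lam_min.sqrt()
delta = {k: torch.randn_like(v) for k, v in params.items()}
scale = rho / torch.cat([d.flatten() for d in delta.values()]).norm()
theta = {k: v + scale * delta[k] for k, v in params.items()}

# Proxy C'(θ) for the stable-Jacobian condition (8); C' > 1/2 rules out C ≤ 1/2.
C_prime = (3 * torch.linalg.matrix_norm(flat_jac(theta, X) - J0, ord=2)
           / lam_min.sqrt())
```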
**Shallow ReLU Net + Synthetic Data** We start with a synthetic problem for which Zhang et al. (2019) give theoretical guarantees. We generate a regression problem with \(N=16\) by i.i.d. drawing \(\mathbf{x}_{n}\in\mathbb{R}^{2}\sim\mathcal{U}([0,1]^{2}),\epsilon_{n}\in\mathbb{R}\sim\mathcal{N}(0,1)\), and setting \(y_{n}=\sin(2\pi([\mathbf{x}_{n}]_{1}+[\mathbf{x}_{n}]_{2}))+0.1\epsilon_{n}\). Our model is a shallow two-layer ReLU net \(f(\mathbf{x},\mathbf{\theta})=\nicefrac{{1}}{{\sqrt{m}}}\mathbf{W}^{(2)}\mathrm{ReLU}(\mathbf{W}^{(1)}\mathbf{x})\) where \(\mathbf{W}^{(1)}\in\mathbb{R}^{m\times 2}\sim\mathcal{N}(\mathbf{0},\nu^{2}\mathbf{I})\) with \(\nu=1\), \(\mathbf{W}^{(2)}\in\mathbb{R}^{1\times m}\sim\mathcal{U}(\{-1,1\}^{m})\). Only \(\mathbf{W}^{(1)}\) is trainable and each input is normalized in the pre-processing stage, \(\mathbf{x}_{n}\leftarrow\nicefrac{{\mathbf{x}_{n}}}{{\|\mathbf{x}_{n}\|_{2}}}\), to satisfy the theoretical assumptions. In this setting, Zhang et al. (2019) show that a width \(m=\Omega(\mathrm{poly}(N,\nu,\nicefrac{{1}}{{\lambda_{0}}},\nicefrac{{1}}{{\delta}}))\) is required for fast convergence of NGD with probability at least \(1-\delta\), which achieves an improvement of \(\mathcal{O}(\nicefrac{{\lambda_{0}}}{{N}})\) over GD.6 The Jacobian has full row-rank with high probability for \(m=\Omega(\nicefrac{{N\log(N/\delta)}}{{\lambda_{0}}})\), and we empirically observe a sharp increase in \(\lambda_{\text{min}}(\mathbf{G}(0))\) at relatively low widths (around \(m=500\)) in Fig. 2. However, the Jacobian stabilizes only slowly, with \(\left\|\mathbf{J}(\mathbf{\theta})-\mathbf{J}(0)\right\|_{2}\) decaying only polynomially in \(m\), and even for extreme widths (up to \(10^{7}\)) we observe that \(C^{\prime}>\nicefrac{{1}}{{2}}\), and therefore \(C>\nicefrac{{1}}{{2}}\).
Footnote 6: Here \(\lambda_{0}=\lambda_{\text{min}}(K_{\text{NTK}}(\mathbf{X},\mathbf{X}))\) is the minimum eigenvalue of the NTK from Du et al. (2019).
**Deep ReLU Net + Synthetic Data** Next, we move away from the kernel regime by adding depth to the architecture while keeping the same synthetic data and pre-processing. We use two fully connected NNs, as defined in (2), with \(L\in\{3,5\}\) layers of equal width and ReLU activations. For these models, scaling to large
Figure 3: More data _decreases_ the Jacobian stability for WideResNet on a subset of CIFAR-10.
Figure 2: _Conditions for fast convergence of NGD for different NN widths and problems_ (dots represent medians and shaded areas represent the 25/75-quantiles over three independent initializations and five parameter perturbations). The problems range from shallow (_theory_) ReLU MLP and depth-3 and -5 ReLU MLPs on a synthetic regression task to WideResNets on a sub-set of CIFAR-10 with \(N=400\) (_practice_). _Left:_ Relatively small widths are sufficient to satisfy the Gram matrix condition (7). _Right:_ None of the NNs achieve the required Jacobian stability (horizontal dashed line at \(\nicefrac{{1}}{{2}}\)) from (8) for any width both for synthetic and benchmark data.
widths is more expensive than for the shallow case, as the NN's parameters grow quadratically in \(m\). For both depths, we observe a sharp transition in \(\lambda_{\text{min}}(\mathbf{G}(0))\) at relatively small widths (around \(m=500\)) that are computationally accessible. In the close-to-linearity measure, we can see that depth increases non-linearity. While we observe a similar sharp transition in \(C^{\prime}\) to smaller values around \(m=500\) for both depths, its values remain well above \(\nicefrac{{1}}{{2}}\).
**CNN + Benchmark Data** Finally, we investigate a practical architecture (WideResNet, depth 10) on CIFAR-10. We convert the classification task into a regression problem for class indices and use a subset of the data. We rely on the implementation of Kuen (2017) and use its built-in widening factor, which scales the channels of the intermediate features within a block, as a measure for the network's width \(m\). In contrast to the previous cases, this net's Jacobian has a full row rank even for small widths. However, for larger widths attainable within our compute budget, \(C^{\prime}\) remains many orders of magnitude above \(\nicefrac{{1}}{{2}}\). And the stability further deteriorates when using more data (Fig. 3).
**Summary:** In the kernel regime, NGD has favorable convergence over GD in theory. However, empirically we find that the necessary conditions consistently do _not_ hold throughout problem scales--even for a shallow network with theoretical guarantees.
### Uncertainty Quantification: Neural Bandits
In sequential decision-making problems, not only predictive accuracy of a model is important, but crucially also accurate uncertainty quantification (Lattimore and Szepesvari, 2020; Garnett, 2023). Recently, the connection between infinitely wide NNs and GPs has been exploited to design neural bandit algorithms, whose guarantees rely on the assumption that the surrogate model \(f_{\mathbf{\theta}_{t}}\) is sufficiently close to the kernel regime (Zhou et al., 2020; Zhang et al., 2021; Kassaie and Krause, 2022; Nguyen-Tang et al., 2022). We empirically test whether this assumption holds in practice.
**Neural Contextual Bandits via the Kernel Regime** Our goal is to sequentially take optimal actions with regard to an unknown time-varying reward function \(r_{t}(a_{t},\mathbf{x}_{t})\in\mathbb{R}\) which depends on an action-context pair \((a_{t},\mathbf{x}_{t})\in\{1,\ldots,K\}\times\mathbb{R}^{n}\) where \(t=1,\ldots,T\). To do so, we learn a surrogate \(f_{\mathbf{\theta}_{t}}(a_{t},\mathbf{x}_{t})\approx r_{t}(a_{t},\mathbf{x}_{t})\) approximating the reward from past data \(\mathcal{D}_{t}=\{(a_{t^{\prime}},\mathbf{x}_{t^{\prime}}),r_{t^{\prime}}(a_{t^{\prime}},\mathbf{x}_{t^{\prime}})\}_{t^{\prime}=1}^{t-1}\). An action \(\widetilde{a}_{t}=\arg\max_{a_{t}}u(a_{t},\mathbf{x}_{t})\) is then selected based on a _utility function_ \(u\) which generally depends on both the prediction and uncertainty of the neural surrogate for the reward. Overall we want to minimize the _cumulative regret_ \(R(T)=\sum_{t=1}^{T}\left[r_{t}(a_{t}^{*},\mathbf{x}_{t})-r_{t}(\widetilde{a}_{t},\mathbf{x}_{t})\right]\), where \(a_{t}^{*}\) are the optimal actions, and the reward \(r_{t}(\widetilde{a}_{t},\mathbf{x}_{t})\) is only observable once an action \(\widetilde{a}_{t}\) is taken. Here we use the popular UCB (Auer, 2002) utility function
\[u(a_{t},\mathbf{x}_{t})=f_{\mathbf{\theta}_{t}}(a_{t},\mathbf{x}_{t})+\gamma_{t}\sqrt{ \operatorname{var}f_{\mathbf{\theta}_{t}}(a_{t},\mathbf{x}_{t})}, \tag{9}\]
where \(\gamma_{t}>0\) controls the exploration-exploitation tradeoff. Due to the importance of uncertainty quantification in the selection of an action based on \(u(a_{t},\mathbf{x}_{t})\), GPs have been used extensively as surrogates (Krause and Ong, 2011). Here, we consider neural surrogates instead, which quantify uncertainty via the empirical NTK at a MAP estimate \(\mathbf{\theta}_{t}\). This can be thought of as a finite-width approximation to the limiting GP in the kernel regime, or equivalently from a weight space view as a linearized Laplace approximation (LLA, MacKay, 1992; Khan et al., 2019; Immer et al., 2021b; Daxberger et al., 2021; Kristiadi et al., 2023b).
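For concreteness, once per-arm predictive means and variances are available (e.g. from a linearized Laplace approximation of the surrogate), the selection rule (9) is a one-liner; the tensors below are hypothetical placeholders.

```python
def select_action(mean, var, gamma_t):
    """UCB rule (9): mean[a], var[a] are the predicted reward and its
    posterior variance for each candidate action a in the current context."""
    return int(torch.argmax(mean + gamma_t * var.sqrt()))
```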
**Exploration-Exploitation Trade-off** The parameter \(\gamma_{t}\) in (9) controlling the exploration-exploitation trade-off strongly impacts the cumulative regret, making its choice an important problem in practice. Recent works prove (near-)optimal regret bounds for the neural bandit setting by choosing \(\gamma_{t}\) based on the kernel regime (Zhou et al., 2020; Kassaie and Krause, 2022). To approach the kernel regime, the convergence results discussed in Section 2 require the width of the network \(m\) to be polynomial in the depth \(L\) and number of training data \(N=t-1\). This poses the question whether the proposed choice of \(\gamma_{t}\) is useful in practice. Here, we consider the NeuralUCB algorithm proposed by Zhou et al. (2020), where the exploration parameter \(\gamma_{t}=\tilde{O}(\mathrm{poly}(\nicefrac{{1}}{{\sqrt{m}}},L,t))\). We find that even for shallow NNs (\(L=3\)), \(\gamma_{t}\) rapidly grows very large (see Fig. 4), which by (9) results in essentially no exploitation, only exploration. This suggests that for \(\gamma_{t}\) to achieve a non-vacuous value, \(m\) must be potentially unfeasibly large.
Figure 4: Setting \(\gamma_{t}\) via NTK theory results in overexploration in practice.
**Experiment Setup** We empirically test whether the assumptions based on the kernel regime in the neural bandit setting result in good performance in practice for realistic architectural choices. We use standard contextual bandit benchmark problems, based on UCI classification datasets (Zhou et al., 2020; Zhang et al., 2021; Gupta et al., 2022) (see Section A.2). We compare (i) a _random_ baseline policy against several neural UCB policies: (ii) the UCB policy with _constant_ exploration parameter \(\gamma_{t}\equiv\gamma\in\{0.01,0.1,1,10\}\), as is often used in practice for simplicity, (iii) the UCB policy where \(\gamma_{t}\) is set via the NTK theory with widths \(m\in\{100,1024\}\) (Zhou et al., 2020), and (iv) the UCB policy with \(\gamma_{t}\equiv 1\) that leverages the connection between the (empirical) NTK and the LLA in Bayesian deep learning (Immer et al., 2021) to learn a prior precision hyperparameter via marginal likelihood, both _post-hoc_ and _online_ (Immer et al., 2021).7
Footnote 7: Computing the evidence in the LLA setting incurs no overhead since the LA itself is an approximation of _both_ the posterior \(p(\mathbf{\theta}\mid\mathcal{D}_{t})\)_and_ the marginal likelihood \(p(\mathcal{D}_{t})=\int p(\mathcal{D}_{t}\mid\mathbf{\theta})\,p(\mathbf{\theta})\,d \mathbf{\theta}\)(MacKay, 1992).
**Experiment Results** The results of our experiment are given in Fig. 5, which shows the cumulative regret \(R(t)\) over time. Perhaps frustratingly, the NTK-based policy performs poorly, oftentimes no better than the random baseline, with an order-of-magnitude larger width having no discernible effect. This is likely explained by the overexploration problem discussed previously. Therefore, in this setting, relying on assumptions based on the kernel regime results in a poorly performing algorithm in practice. In fact, Zhou et al. (2020) set \(\gamma_{t}\) to be constant in their experiments instead of according to the proposed (near-)optimal value based on NTK theory. This disconnect between NTK theory and practice can also be observed for other utility functions such as expected improvement (Gupta et al., 2022) and Thompson sampling (Zhang et al., 2021). We find that a constant \(\gamma_{t}\equiv\gamma\) with a well-chosen \(\gamma\) performs best in our experiments (see also Fig. 6 top). However, the optimal value of \(\gamma\) is _unknown a-priori_ and can only be obtained via grid search. This can be problematic in a real-world setting, where a sufficiently large, representative validation set may not be available, and multiple experiments may not be possible prior to running the "real" experiment--it defeats the spirit of _online_ learning. With that in mind, the marginal-likelihood-based choices of \(\gamma_{t}\), both _post-hoc_ and online, perform well in terms of their cumulative regret. While using grid search provides better results in terms of rank (Fig. 6 top), the difference in terms of the cumulative regret \(R(t)\) is small for all \(t\)--see Fig. 6 bottom. The minimal difference in cumulative regret between the marginal-likelihood-based strategies and the best strategy suggests that learning a good exploration-exploitation trade-off is possible, but likely _not_ via an algorithm motivated via the kernel regime.
**Summary:** Avoid setting the exploration parameter in eNTK-based neural bandits via the NTK theory. Instead, use the toolbox of the Laplace approximation to optimize the scale of the posterior variance via evidence maximization.
Figure 5: _Cumulative regret of neural bandits with different degrees of exploration \((\gamma_{t})_{t}\)_ on benchmark data. Setting the exploration parameter \(\gamma_{t}\) via NTK theory yields the second-worst performance after the random policy. Constant exploration achieves the best results but the optimal \(\gamma_{t}\equiv\gamma=10^{-2}\) is a-priori unknown. Online marginal-likelihood (ML) calibration performs near-optimally.
### Continual Learning: Catastrophic Forgetting
In many applications, such as robotics, NNs need to be trained continually on new _tasks_, given by a sequence of training datasets. The primary challenge in _continual learning_(Thrun and Mitchell, 1995; Parisi et al., 2019) is to train on new tasks without a significant loss of performance on previous tasks, known as _catastrophic forgetting_(McCloskey and Cohen, 1989; Goodfellow et al., 2013).
**Catastrophic Forgetting in the Kernel Regime** Assuming the NN is sufficiently wide to be in the linear regime, worst-case forgetting can be described theoretically, as well as the convergence to an offline solution--i.e. training on data from all tasks at once (Evron et al., 2022; Evron et al., 2023). One way to algorithmically avoid forgetting is _orthogonal gradient descent_ (OGD, Farajtabar et al., 2020), which projects gradients on new tasks such that model predictions on previous tasks change minimally. Bennani et al. (2020) show that, in the kernel regime, OGD provably avoids catastrophic forgetting on an arbitrary number of tasks (assuming infinite memory). Additionally, for SGD and OGD generalization bounds have been given, which are based on the task similarity with respect to the NTK (Bennani et al., 2020; Doan et al., 2021). Ramasesh et al. (2022) investigated catastrophic forgetting empirically in the pretraining paradigm and found that forgetting systematically decreases with scale of both model and pretraining dataset size. Mirzadeh et al. (2022) report that increasing the width of a neural network reduces catastrophic forgetting significantly as opposed to increasing the depth. The hypothesis for this is that as the model becomes wider, gradients across tasks become increasingly orthogonal, and training becomes "lazy", meaning the initial parameters change very little during training, consistent with the convergence of the empirical NTK at initialization to the NTK (6). This naturally leads to the question of whether increasing the width of networks that are used in practice is in fact a simple way to mitigate catastrophic forgetting.
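As an illustration of OGD's mechanism, the sketch below removes from a new task's gradient its components along an orthonormalized memory of gradients from previous tasks; memory construction and size limits are simplified away.

```python
def ogd_project(grad, memory):
    """Project `grad` (a flat tensor) onto the orthogonal complement of the
    orthonormal basis vectors in `memory` (gradients from earlier tasks)."""
    g = grad.clone()
    for v in memory:
        g = g - (g @ v) * v
    return g  # the OGD update direction is then -lr * g
```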
**Experiment Setup** To test whether the predictions about continual learning in the kernel regime apply in practice, we train increasingly wide NNs on a sequence of tasks. We train toy two-layer NNs with ReLU activations on the RotatedMNIST dataset, as well as WideResNets (Zagoruyko and Komodakis, 2016) on the SplitCIFAR10, SplitCIFAR100 and SplitTinyImageNet datasets. See Section A.3 for details. Our main goal is to study the effect of width on forgetting. Let \(a_{t,i}\) denote test accuracy on task \(i\) after training on task \(t\). We compute the _average forgetting_ \(\bar{\phi}_{T}=\nicefrac{{1}}{{(T-1)}}\sum_{i=1}^{T-1}\max_{t\in\{1,\ldots,T-1\}}(a_{t,i}-a_{T,i})\), i.e. the average maximal accuracy difference during task-incremental training; the _average accuracy_ \(\bar{\alpha}_{T}=\nicefrac{{1}}{{T}}\sum_{i=1}^{T}a_{T,i}\), i.e. the average accuracy across tasks after training on all tasks; and the _learning accuracy_ \(\bar{\alpha}_{\max}=\nicefrac{{1}}{{T}}\sum_{i=1}^{T}a_{i,i}\), i.e. the average accuracy across tasks immediately after training on the current task.8 To ascertain whether a network operates in the lazy training/kernel regime, we also track the relative distance in parameter space \(d(\mathbf{w}_{T},\mathbf{w}_{0})=\nicefrac{{\|\mathbf{w}_{T}-\mathbf{w}_{0}\|_{2}}}{{\|\mathbf{w}_{0}\|_{2}}}\) between the initial parameters \(\mathbf{w}_{0}\) and the parameters \(\mathbf{w}_{T}\) after training on all tasks.
Footnote 8: In practice, \(\bar{\alpha}_{\max}\) almost always equals the average maximum accuracy per task, justifying the notation.
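The three metrics can be read off an accuracy matrix; a sketch in NumPy, where `acc[t, i]` stores \(a_{t+1,i+1}\) in zero-based indexing:

```python
import numpy as np

def cl_metrics(acc):
    """acc: (T, T) array; row t holds test accuracies after training on task t."""
    T = acc.shape[0]
    forgetting = np.mean([np.max(acc[:T - 1, i] - acc[T - 1, i])
                          for i in range(T - 1)])          # average forgetting
    avg_acc = acc[T - 1].mean()                            # average accuracy
    learn_acc = np.diag(acc).mean()                        # learning accuracy
    return forgetting, avg_acc, learn_acc
```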
**Experiment Results** The results of our experiment are shown in Fig. 7. We find that for the shallow NN, average forgetting decreases monotonically with the network width. Further, the relative change in parameters \(d(\mathbf{w}_{T},\mathbf{w}_{0})\) approaches zero, consistent with the lazy training hypothesis in the kernel regime. This seems to confirm observations by Mirzadeh et al. (2022) that wide neural networks forget less. However, for WideResNets we find that a crucial confounding factor is whether the network has been fully trained. NNs that are trained insufficiently show a decrease in forgetting as they become wider. However, this decrease is primarily due to lower learning accuracy, and thus a smaller gap between peak accuracy and minimum accuracy across tasks (see Section B.2). In contrast, training networks to high accuracy increases average forgetting since the peak performance across tasks increases. This can be explained by the fact that they are not actually operating in the kernel regime. The relative change of the weights during training remains large even as width increases beyond what is used in practice, meaning the networks are still adapting to unseen tasks.
**Summary:** Increasing the width of a practical NN architecture only avoids catastrophic forgetting if not trained to high accuracy per task. Since a smaller change in the weights of the network throughout training correlates with reduced forgetting, strategies that constrain parameter updates when training on new tasks (e.g. Kirkpatrick et al., 2017) or along directions which minimally change performance on previous tasks (e.g. OGD) promise to be more useful strategies in practice than increasing network width.
## 4 Conclusion
In this work, we empirically evaluated whether predictions about the behavior of overparametrized neural networks through the theoretical framework of the neural tangent kernel hold in architectures used in practice. We considered three different areas in which the kernel regime either makes predictions about the behavior of a neural network or informs algorithmic choices. We find that across optimization, uncertainty quantification, and continual learning, theoretical statements in the infinite-width limit do not translate to observable phenomena or improvements in practical architectures with realistic widths. For optimization, we found that such architectures are not sufficiently close to linear to enjoy fast convergence from a second-order method as predicted by existing theory. For uncertainty quantification, we found that controlling the exploration-exploitation trade-off in a sequential decision-making problem via assumptions based on the kernel regime led to performance only marginally better than a random baseline. Finally, in continual learning, we found that wide neural networks as used in practice, if fully trained, do not actually forget less catastrophically.
This observed disconnect between theory and practice leads to two important conclusions. First, our theoretical understanding of the behavior of large-scale overparametrized neural networks is still limited and in particular restricted to architectures that do not resemble those used in practice. This paper is empirical evidence to that effect and thus calls for an effort to improve our understanding by developing a theory under more practically relevant assumptions. Second, algorithms motivated by the neural tangent kernel theory should be scrutinized closely in terms of their practical performance, and researchers should be careful in basing their ideas too strongly on the infinite-width limit. We hope in this way our negative results can serve as a cautionary tale and will ultimately benefit _both_ the theory and practice of deep neural networks.
Figure 7: _Catastrophic forgetting of wide NNs in theory and practice._ As the width of the shallow MLP approaches the kernel regime, average forgetting decreases, while average accuracy increases. Similar observations hold for WideResNets if trained for only a few epochs, consistent with experiments by Mirzadeh et al. (2022). However, if they are trained to convergence on each task, resulting in increased learning accuracy, _forgetting does not decrease with width_. This suggests that architectures in practice are not actually wide enough to reduce forgetting via the kernel regime.
#### Acknowledgments
JW was supported by the Gatsby Charitable Foundation (GAT3708), the Simons Foundation (542963) and the Kavli Foundation. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.
|
2301.01597 | Problem-Dependent Power of Quantum Neural Networks on Multi-Class
Classification | Quantum neural networks (QNNs) have become an important tool for
understanding the physical world, but their advantages and limitations are not
fully understood. Some QNNs with specific encoding methods can be efficiently
simulated by classical surrogates, while others with quantum memory may perform
better than classical classifiers. Here we systematically investigate the
problem-dependent power of quantum neural classifiers (QCs) on multi-class
classification tasks. Through the analysis of expected risk, a measure that
weighs the training loss and the generalization error of a classifier jointly,
we identify two key findings: first, the training loss dominates the power
rather than the generalization ability; second, QCs undergo a U-shaped risk
curve, in contrast to the double-descent risk curve of deep neural classifiers.
We also reveal the intrinsic connection between optimal QCs and the Helstrom
bound and the equiangular tight frame. Using these findings, we propose a
method that uses loss dynamics to probe whether a QC may be more effective than
a classical classifier on a particular learning task. Numerical results
demonstrate the effectiveness of our approach to explain the superiority of QCs
over multilayer Perceptron on parity datasets and their limitations over
convolutional neural networks on image datasets. Our work sheds light on the
problem-dependent power of QNNs and offers a practical tool for evaluating
their potential merit. | Yuxuan Du, Yibo Yang, Dacheng Tao, Min-Hsiu Hsieh | 2022-12-29T10:46:40Z | http://arxiv.org/abs/2301.01597v3 | # Demystify Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification
###### Abstract
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood. Some QNNs with specific encoding methods can be efficiently simulated by classical surrogates, while others with quantum memory may perform better than classical classifiers. Here we systematically investigate the problem-dependent power of quantum neural classifiers (QCs) on multi-class classification tasks. Through the analysis of expected risk, a measure that weighs the training loss and the generalization error of a classifier jointly, we identify two key findings: first, the training loss dominates the power rather than the generalization ability; second, QCs undergo a U-shaped risk curve, in contrast to the double-descent risk curve of deep neural classifiers. We also reveal the intrinsic connection between optimal QCs and the Helstrom bound and the equiangular tight frame. Using these findings, we propose a method that uses loss dynamics to probe whether a QC may be more effective than a classical classifier on a particular learning task. Numerical results demonstrate the effectiveness of our approach to explain the superiority of QCs over multilayer Perceptron on parity datasets and their limitations over convolutional neural networks on image datasets. Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
## I Introduction
The advent of hardware fabrication pushes the boundary of quantum computing from verifying its superiority on artificial tasks [1, 2, 3] to conquering realistic problems with merits [4, 5, 6]. This has led to the emergence of a popular paradigm known as quantum neural networks (QNNs), which combine variational quantum Ansatze with classical optimizers [7, 8]. So far, various QNN-based methods have been proposed to address difficult problems in areas such as quantum physics [9, 10, 11, 12], quantum information theory [13, 14, 15, 16], combinatorial optimization [17, 18, 19, 20, 21], and machine learning [22, 23, 24, 25, 26]. Among these applications, QNNs are often deployed as _quantum classifiers_ (QCs) to predict correct labels of the input data [27, 28, 29, 30, 31, 32], e.g., categorize image objects [33, 34, 35], classify phases of quantum matters [36, 37, 38, 39], and distinguish entangled states from separable states [40, 41].
To comprehend the full potential of existing quantum classifiers (QCs) and to spur the development of novel QCs, huge efforts have been made to unveil the learnability of QCs [42, 43, 44]. Prior literature establishes the foundations of QCs from three primary aspects, i.e., model capacity [45, 46, 47, 48], trainability [49, 50, 51], and generalization [52, 53, 54, 55, 56, 57]. Nevertheless, the advantages and constraints of QCs have rarely been proven [57, 58, 59, 60, 61, 62]. Meanwhile, previous results cannot rigorously explain the empirical observations such that QCs generally outperform classical classifiers (CCs) on handcraft or quantum data [44, 63] but are inferior to them on realistic problems [64]. As a result, the need for QCs to address classical issues remains highly questionable.
A principal criterion in characterizing the power of a classifier is the expected risk [65], which weighs the empirical risk (i.e., training loss) and the generalization error (i.e., test loss) jointly. An _optimal_ classifier is one that achieves zero expected risk. As shown in Fig. 1(a), the success of deep neural classifiers is attributed to their double-descent risk curves [66, 67]. This means that as the hypothesis space is continually expanded, the expected risk of a trained deep neural classifier initially decreases, then increases, and, once it overfits the train set, undergoes a second descent. As such, to show the superiority of QCs over CCs, it is necessary to distill ubiquitous rules that capture the risk curves of diverse QCs, in addition to conditions under which the expected risk of QCs can be lower than that of CCs.
In this study, we unify a broad class of QCs in the same framework and analyze their problem-dependent ability through the lens of the expected risk (see Fig. 1(b)). Our analysis reveals two substantial outcomes: (i) the power of QCs is dominated by their trainability rather than their generalization ability; (ii) QCs undergo a U-shaped risk curve instead of the double-descent curve of CCs. These outcomes consolidate and refine previous observations. Concretely, the first outcome suggests that the deficiency of QCs on classical data stems from their limited ability to fit the train set, resulting in a larger training loss compared to CCs. The second outcome highlights the distinct learning behavior of QCs and CCs. Although overparameterization is fundamental to enhancing the performance of CCs, it adversely affects the power of QCs. In line with the diverse dynamics of the risk curves for QCs and CCs, we devise an efficient problem-dependent method to recognize potential merits of QCs, as shown in Fig. 1(a). Conceptually, for a given learning task, our method fits the loss (risk) dynamics of the QC and CC under the respective prior (i.e., U-shape versus double descent) and then
identifies the 'advantage' regime where the risk of the QC is lower than that of the CC. Numerical simulations are conducted to support our theoretical results.
On the technical level, we approach the two outcomes by separately quantifying the empirical risk and generalization error of QCs. Specifically, we first prove conditions of QCs that lead to near-zero empirical risk, the geometric interpretation of which is depicted in Fig. 1(c). As a byproduct, we elucidate how such conditions are inherently linked to quantum state discrimination and quantum measurement theory. In addition, we prove that deep QCs can never attain vanishing empirical risk by utilizing the concentration property of quantum observables [68, 69]. We next analyze the generalization error of QCs by exploiting algorithmic robustness [70]. The derived bound surpasses prior results because it is the first non-vacuous bound in the over-parameterized regime. By combining the unattainability of zero empirical risk with the controllable generalization error, we obtain the first outcome. The second outcome is gained by integrating the fact that deep QCs are unable to attain vanishing empirical risk with the first outcome.
## II Main results
_Expected risk._-- Let us first introduce a \(K\)-class (\(K\geq 2\)) classification task. Denote the input space as \(\mathcal{X}\), the label (class) space as \(\mathcal{Y}=\{1,\cdots,K\}\), and the train set as \(\mathcal{D}=\bigcup_{k=1}^{K}\{(\mathbf{x}^{(i,k)},y^{(i,k)})\}_{i=1}^{n_{k}}\) with \(|\mathcal{D}|\) samples drawn i.i.d. from an unknown probability distribution \(\mathbb{D}\) on \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\). In standard scenarios, the number of train samples in each class is the same, i.e., \(n_{1}=...=n_{K}\equiv n_{c}\) and \(|\mathcal{D}|:=n=Kn_{c}\). The purpose of a classification algorithm \(\mathcal{A}\) is to use \(\mathcal{D}\) to infer a hypothesis (a.k.a., a classifier) \(h_{\mathcal{A}_{\mathcal{D}}}:\mathcal{X}\rightarrow\mathbb{R}^{K}\) from the hypothesis space \(\mathcal{H}\) to separate train examples from different classes. This is equivalent to identifying an optimal hypothesis in \(\mathcal{H}\) minimizing the _expected risk_ \(\mathsf{R}(h)=\mathbb{E}_{(\mathbf{x},y)\sim\mathbb{D}}[\ell(h(\mathbf{x}),y)]\), where \(\ell(\cdot,\cdot)\) is the per-sample loss and for clarity we specify it as the square error with \(\ell(\mathbf{a},\mathbf{b})=\frac{1}{2}\|\mathbf{a}-\mathbf{b}\|_{2}^{2}\)[71]. Unfortunately, the inaccessible distribution \(\mathbb{D}\) forbids us to assess the expected risk directly. In practice, \(\mathcal{A}\) alternatively learns an _empirical classifier_ \(\hat{h}\in\mathcal{H}\), as the global minimizer of the (regularized) loss function
\[\mathcal{L}(h,\mathcal{D})=\frac{1}{n}\sum_{i=1}^{n_{c}}\sum_{k=1}^{K}\ell(h( \mathbf{x}^{(i,k)}),y^{(i,k)})+\mathfrak{E}(h), \tag{1}\]
where \(\mathfrak{E}(h)\) is an optional regularizer.
The foremost role of the risk means that quantum advantages can be ascertained if \(\mathsf{R}(\hat{h}_{Q})<\mathsf{R}(\hat{h}_{C})\), where \(\hat{h}_{Q}\) and \(\hat{h}_{C}\) are the empirical QC and CC on \(\mathcal{D}\). Unlike conventions that merely focus on a QC for one specific task, the above criterion aims to unearth _ubiquitous rules_ of QCs with computational advantages. Since \(\mathsf{R}(\hat{h})\) is intractable to assess directly, we decompose it into two measurable terms to proceed with the following analysis, i.e.,
\[\mathsf{R}(\hat{h})=\mathsf{R}_{\text{ERM}}(\hat{h})+\mathsf{R}_{\text{Gene}}( \hat{h}), \tag{2}\]
where \(\mathsf{R}_{\text{ERM}}(\hat{h})=\frac{1}{n}\sum_{i=1}^{n_{c}}\sum_{k=1}^{K}\ell(\hat{h}(\mathbf{x}^{(i,k)}),y^{(i,k)})\) is the _empirical risk_ and \(\mathsf{R}_{\text{Gene}}(\hat{h})=\mathsf{R}(\hat{h})-\mathsf{R}_{\text{ERM}}(\hat{h})\) is the _generalization error_. Based on Eq. (2), detecting advantages of QCs translates to deriving under what conditions QCs attain both lower \(\mathsf{R}_{\text{ERM}}\) and \(\mathsf{R}_{\text{Gene}}\) than CCs.
To better elucidate our results, let us recall that the general form of QC is \(\hat{h}_{Q}=\arg\min_{h_{Q}\in\mathcal{H}_{Q}}\mathcal{L}(h_{Q},\mathcal{D})\)
Figure 1: **Risk curve and geometry of the unified QCs.** (a) The risk curve of QCs and CCs are highlighted by the solid red and blue lines (labeled by ‘Q-\(\mathcal{R}\)’ and ‘C-\(\mathcal{R}\)’), respectively. The former yields a ‘U’ shape while the latter yields a double-descent tendency. Potential advantages of QCs are dominated by the empirical risk, highlighted by the dashed curve. The shaded region refers to the potential merits of QCs. (b) The unified QC consists of two parts, the feature state \(\rho\) and the measure operator \(\mathbf{o}\). This model covers diverse QCs. (c) Geometric relationship between \(\{\rho^{(i,k)}\}\) and \(\mathbf{o}\) of QCs with (near) zero training loss: (i) the feature states associated with train samples belonging to the same class concentrate around their class-feature mean, i.e., \(\bar{\rho}^{*(k)}:=\rho^{*(1,k)}=...=\rho^{*(n_{c},k)}\) for \(\forall k\in[K]\); (ii) the class-feature means are maximally distant with each other, i.e., \(\operatorname{Tr}(\bar{\rho}^{*(k)}\bar{\rho}^{*(k^{\prime})})\sim\delta_{k,k^{ \prime}}\); (iii) the measure operator should align with class-feature means, i.e., \(\operatorname{Tr}(\bar{\rho}^{*(k)}o^{*(k^{\prime})})\sim\delta_{k,k^{\prime}}\).
where \(\mathcal{L}\) is defined in Eq. (1) and \(\mathcal{H}_{Q}\) is the hypothesis space. For an \(N\)-qubit QC, its hypothesis space is
\[\mathcal{H}_{Q}=\left\{\left[h_{Q}(\cdot,U(\mathbf{\theta}),O^{(k)})\right]_{k=1:K} \left|\mathbf{\theta}\in\mathbf{\Theta}\right.\right\}, \tag{3}\]
where \([\cdot]_{k=1:K}\) is a \(K\)-dimensional vector, its \(k\)-th entry \(h_{Q}(\mathbf{x},U(\mathbf{\theta}),O^{(k)})=\mathrm{Tr}(O^{(k)}U(\mathbf{\theta})\sigma(\mathbf{x})U(\mathbf{\theta})^{\dagger})\) for \(\forall k\in[K]\) refers to the output (prediction) of quantum circuits, \(\sigma(\mathbf{x})=U_{E}(\mathbf{x})(\ket{0}\bra{0})^{\otimes N}U_{E}(\mathbf{x})^{\dagger}\) is the input state of \(\mathbf{x}\) with the encoding circuit \(U_{E}(\cdot)\), \(\mathbf{O}=\{O^{(k)}\}_{k=1}^{K}\) is a set of measure operators, and \(U(\mathbf{\theta})\) is the adopted Ansatz with trainable parameters \(\mathbf{\theta}\) living in the parameter space \(\mathbf{\Theta}\). Without loss of generality, we define \(U(\mathbf{\theta})=\prod_{l=1}^{N_{l}}(u_{l}(\mathbf{\theta})u_{e})\in\mathcal{U}(2^{N})\), where \(u_{l}(\mathbf{\theta})\in\mathcal{U}(2^{m})\) is the \(l\)-th parameterized quantum gate acting on at most \(m\) qubits (\(m\leq N\)) and \(u_{e}\) refers to fixed quantum gates. Similarly, we define \(U_{E}(\mathbf{x})=\prod_{g=1}^{N_{g}}u_{g}(\mathbf{x})\in\mathcal{U}(2^{N})\), where \(u_{g}(\mathbf{x})\in\mathcal{U}(2^{m})\) refers to the \(g\)-th quantum gate acting on at most \(m\) qubits, and the \(N_{g}\) gates contain \(N_{ge}\) tunable gates and \(N_{g}-N_{ge}\) fixed gates.
Due to the diverse constructions of \(U(\mathbf{\theta})\) and \(U_{E}(\cdot)\), it is necessary to unify various QCs into the same framework to obtain generic results. Notably, the unified QC should be _agnostic to_ particular forms of these two terms. A feasible way is to rewrite \(h_{Q}(\cdot,U(\mathbf{\theta}),O^{(k)})\) as
\[h_{Q}(\rho^{(i,k)},o^{(k)}):=\mathrm{Tr}(\rho^{(i,k)}o^{(k)})\ \forall k\in[K], \tag{4}\]
where \(O^{(k)}=\mathbb{I}_{2^{N-D}}\otimes o^{(k)}\) with the nontrivial local operator \(o^{(k)}\in\mathbb{C}^{2^{D}\times 2^{D}}\), \(D\) describes the locality, and \(\rho^{(i,k)}=\mathrm{Tr}_{D}(U(\mathbf{\theta})\sigma(\mathbf{x}^{(i,k)})U(\mathbf{\theta})^{\dagger})\) corresponds to the state before measurements, termed the _feature state_. An intuition of the unified QC is shown in Fig. 1(b).
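To make the unified model of Eq. (4) concrete, the following is a minimal NumPy sketch of a single forward pass: it prepares \(\sigma(\mathbf{x})\), applies the Ansatz, reduces to the \(D\)-qubit feature state \(\rho\), and evaluates \(h_{Q}=\mathrm{Tr}(\rho o^{(k)})\). The Haar-random stand-ins for \(U_{E}(\mathbf{x})\) and \(U(\mathbf{\theta})\), the convention of keeping the last \(D\) qubits, and the projector-valued measure operators are illustrative assumptions rather than prescriptions of the text.

```python
import numpy as np

def haar_unitary(dim, rng):
    # Haar-random unitary via QR decomposition of a complex Ginibre matrix
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def partial_trace_keep_last(rho_full, n_qubits, d_keep):
    # Trace out the first (n_qubits - d_keep) qubits; keep the last d_keep qubits
    d_out, d_in = 2 ** (n_qubits - d_keep), 2 ** d_keep
    rho4 = rho_full.reshape(d_out, d_in, d_out, d_in)
    return np.einsum("ijik->jk", rho4)

def qc_output(u_encoder, u_ansatz, observables, n_qubits, d_local):
    # sigma(x) = U_E(x)|0..0><0..0|U_E(x)^dag, then rho = Tr_D(U sigma U^dag)
    ket0 = np.zeros(2 ** n_qubits, dtype=complex)
    ket0[0] = 1.0
    psi = u_ansatz @ (u_encoder @ ket0)
    rho = partial_trace_keep_last(np.outer(psi, psi.conj()), n_qubits, d_local)
    # h_Q = Tr(rho o^(k)) for each class k, as in Eq. (4)
    return np.array([np.trace(rho @ o).real for o in observables])

rng = np.random.default_rng(0)
N, D, K = 4, 1, 2                               # qubits, locality, classes
u_E = haar_unitary(2 ** N, rng)                 # stand-in for a data-dependent U_E(x)
u_theta = haar_unitary(2 ** N, rng)             # stand-in for the trainable U(theta)
o = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # o^(1) = |0><0|, o^(2) = |1><1|
print(qc_output(u_E, u_theta, o, N, D))         # K-dimensional prediction vector
```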
We are now ready to exploit the unified framework to analyze the expected risk of QCs. Let \(\mathbf{\rho}=\{\rho^{(i,k)}\}\) and \(\mathbf{o}=\{o^{(k)}\}\) be two sets collecting all feature states and measure operators. The following theorem exhibits conditions in which QCs allow a low expected risk, where the formal statement and the proof are deferred to SM A.
**Theorem 1** (informal).: _Following notations in Eqs. (1)-(4), when the train data size is \(n\sim O(KN_{ge}\log\frac{KN_{g}}{\epsilon\delta})\) with \(\epsilon\) being the tolerable error, and the optimal sets \(\mathbf{\rho}^{*}\) and \(\mathbf{o}^{*}\) satisfy three conditions: (i) the feature states have vanishing variability within the same class; (ii) all feature states have equal length and are orthogonal across different classes; (iii) every feature state is aligned with the measure operator of the same class, then with probability \(1-\delta\), the expected risk of QC tends to zero, i.e., \(\mathsf{R}(\hat{h}_{Q})\to 0\)._
Conditions (i)-(iii), visualized in Fig. 1(c), sculpt the geometric interpretations of \(\mathbf{\rho}^{*}\) and \(\mathbf{o}^{*}\). These properties echo the design philosophy of CCs, e.g., linear discriminant analysis and the neural collapse phenomenon observed in most deep neural classifiers [71, 72, 73]. Moreover, these conditions unveil the intrinsic connection between optimal QCs and quantum state discrimination [74], since \(\mathbf{\rho}^{*}\) and \(\mathbf{o}^{*}\) should maximize the Helstrom bound [75], which explains the ultimate limit of QCs observed in [76]. However, as will be explained later (see Corollary 1 and Lemma 1), under certain scenarios it is hard for QCs to meet these conditions. A typical instance is applying QCs to learn image datasets, where the difficulty stems from the limited nonlinearity of QCs to fit the train set, thereby inducing a large empirical risk.
Conditions (i)-(iii) also imply how quantum measurement theory can be used to guide the design of QCs. Namely, the mean feature states of each class \(\{\bar{\rho}^{*(k)}\}\) compose an equiangular tight frame (ETF), and Condition (iii) suggests that the optimal measure operators \(\{\mathbf{o}^{*}\}\) also satisfy this ETF [77]. Due to the relation between symmetric informationally complete (SIC) measurements and ETFs [78, 79], the optimal measure operators could be estimated by various SIC construction strategies [80]. Besides, the locality \(D\) of \(\{\mathbf{o}^{*}\}\) should be carefully selected in QCs, in which a small \(D\) precludes the acquisition of the optimal QCs (i.e., the complex ETF does not exist when \(2^{D}=K\)[81, 82]), while an extremely large \(D\) may incur barren plateaus [83, 84]. Furthermore, when \(K\) is large, Pauli-based measurements are preferable to computational basis measurements in QCs, since the former allow classical shadow techniques to accelerate the training of QCs [85, 86].
The scaling behavior of \(n\) indicates that it is data-efficient for QCs to attain a low generalization error, where the size of train set only linearly depends on the class number \(K\) and the number of encoding gates \(N_{ge}\) (see Lemma 3 for the technical elaboration). In other words, the generalization error of QCs can be well controlled by the modest-size train data.
According to Theorem 1, the challenges in satisfying Conditions (i)-(iii) and the well controlled generalization error pinpoint that the risk of a QC is mostly dominated by its empirical loss rather than its generalization error. In this view, the core in devising advanced QCs is tailoring \(\mathcal{H}_{Q}\) in Eq. (3) so that \(\hat{h}_{Q}\) has a (near) zero empirical risk on \(\mathcal{D}\), or equivalently examining whether the employed QCs can fulfill Conditions (i)-(iii).
_U-shape risk curve._--The risk curve concerns how the expected risk of a classifier behaves with the varied hypothesis space. It would be desirable if, as with deep neural classifiers, QCs underwent a double-descent risk curve in the sense that over-parameterized QCs attain a low expected risk when the number of trainable parameters \(N_{t}\) is much greater than the train data size \(n\). If so, 'over-parameterization' could serve as a golden law in designing QCs. However, the following corollary refutes the existence of the double-descent risk curve for QCs.
**Corollary 1**.: _Following notations in Theorem 1, when \(\{U_{E}(\mathbf{x})|\mathbf{x}\in\mathcal{X}\}\) follows the Haar distribution, with probability \(1-\delta\), the empirical QC follows \(|\,\mathrm{Tr}\left(\sigma(\mathbf{x}^{(i,k)})\sigma(\mathbf{x})\right)-\frac{1}{2^{N}}|\leq\sqrt{\frac{3}{2^{2N}\delta}}\). When \(\{U(\mathbf{\theta})|\mathbf{\theta}\in\Theta\}\) follows the Haar distribution, with probability \(1-\delta\), the empirical QC follows \(|\,\mathrm{Tr}(\rho^{(i,k)}o^{(k^{\prime})})-\frac{\mathrm{Tr}(o^{(k^{\prime})})}{2^{D}}|\leq\sqrt{\frac{\mathrm{Tr}(o^{(k^{\prime})})^{2}+2\,\mathrm{Tr}((o^{(k^{\prime})})^{2})}{2^{2N}\delta}}\)._
The proof is deferred to SM B. The achieved results reveal the caveat of deep QCs. Specifically, when \(U_{E}(\mathbf{x})\) is deep, two encoded states \(\sigma(\mathbf{x}^{(i,k)})\) and \(\sigma(\mathbf{x}^{(i^{\prime},k)})\) from the same class tend to be orthogonal, which contradicts Condition (i) in Theorem 1. Besides, when \(U(\mathbf{\theta})\) is deep, the output of QC concentrates to zero, regardless of how \(o^{(k^{\prime})}\) and \(\rho^{(i,k)}\) are selected. This violates Condition (iii) in Theorem 1. Overall, over-parameterized QCs encounter a high empirical risk and thus a high expected risk, which suggests that QCs experience a _U-shape risk curve_. This observation distinguishes the dynamics of QCs from those of variational quantum eigensolvers, since the latter can benefit from over-parameterization, e.g., better trainability and convergence rate [87, 88, 89, 90]. Moreover, the rule of thumb in constructing QCs is slimming \(\mathcal{H}_{Q}\) to find the valley region. Intriguingly, tailoring the feature states echoes quantum metric learning and quantum self-supervised learning [91, 92, 93, 94, 95].
_Probe power of QCs via loss dynamics._--The distinct tendency of the risk curves between QCs and CCs provides a succinct way to recognize potential quantum advantages. As shown in Fig. 1(a), given a specific dataset, the U-shape risk curve of QCs indicates that its advantages mostly appear in the valley region. Precisely, if the risk values of QC around the basin are lower than those of CC, potential merits may exist; otherwise, QC is inferior to CC. The proven learning behavior of QCs, accompanied by the tight generalization bound, allows us to effectively fit their risk curves from the loss dynamics. Specifically, our method contains three steps. First, \(W\) tuples of \(\{n,N_{t},T\}\) are initialized based on Theorem 1 so that the collected risk points of QC span the basin area with low generalization error. Second, we execute QC and CC under these \(W\) hyper-parameter settings and fit their loss dynamics to attain the risk curves. Last, we compare the two risk curves and probe potential advantages. See SM F for the implementation details.
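A minimal sketch of the curve-comparison step is given below, assuming illustrative parametric forms for the two risk curves (a quadratic in \(\log N_{t}\) for the U-shape and a decay-plus-bump form for double descent) and toy \((N_{t},\text{risk})\) points; the actual fitting procedure is specified in SM F.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative parametric forms (our choice, not prescribed by the text):
# a U-shape in log(N_t) for the QC, a decay-plus-bump double descent for the CC.
def u_shape(n_t, a, b, c):
    x = np.log(n_t)
    return a * (x - b) ** 2 + c

def double_descent(n_t, a, b, c, d, n_peak):
    bump = b * np.exp(-((np.log(n_t) - np.log(n_peak)) ** 2) / c)
    return a / n_t + d + bump

# (N_t, risk) points collected from W runs of each classifier (toy values)
nt = np.array([20, 40, 70, 100, 140, 300, 1000, 3000], dtype=float)
risk_qc = np.array([0.30, 0.18, 0.10, 0.13, 0.20, 0.35, 0.55, 0.70])
risk_cc = np.array([0.45, 0.40, 0.38, 0.42, 0.36, 0.28, 0.22, 0.18])

p_qc, _ = curve_fit(u_shape, nt, risk_qc, p0=[0.1, np.log(70.0), 0.1])
p_cc, _ = curve_fit(double_descent, nt, risk_cc,
                    p0=[5.0, 0.1, 0.5, 0.15, 100.0], maxfev=20000)

# Identify the regime where the fitted QC risk lies below the fitted CC risk
grid = np.logspace(np.log10(nt.min()), np.log10(nt.max()), 400)
adv = grid[u_shape(grid, *p_qc) < double_descent(grid, *p_cc)]
if adv.size:
    print(f"potential QC advantage for N_t in [{adv.min():.0f}, {adv.max():.0f}]")
```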
_Technical analysis._--Theorem 1 is achieved by analyzing when \(\mathsf{R}_{\mathrm{ERM}}(\hat{h}_{Q})\) and \(\mathsf{R}_{\mathrm{Gene}}(\hat{h}_{Q})\) are (near) zero. In the analysis of \(\mathsf{R}_{\mathrm{ERM}}(\hat{h}_{Q})\), we first consider the most general case in which both \(\mathbf{\rho}\) and \(\mathbf{o}\) are tunable, where \(\hat{h}_{Q}\equiv h_{Q}(\mathbf{\rho}^{*},\mathbf{o}^{*})\) with \((\mathbf{\rho}^{*},\mathbf{o}^{*})=\arg\min_{\mathbf{\rho},\mathbf{o}}\mathcal{L}(\mathbf{\rho},\mathbf{o})\).
**Lemma 1** (Informal).: _When the regularizer \(\mathfrak{E}\) is considered and \((\mathbf{\rho}^{*},\mathbf{o}^{*})\) meets the three conditions in Theorem 1, the global minimizer leads to \(\mathsf{R}_{\mathrm{ERM}}(\hat{h}_{Q})=C_{1}^{2}/2\) with \(C_{1}\) depending on the hyper-parameters in \(\mathfrak{E}\)._
The achieved properties of \(\mathbf{o}^{*}\) can be used as a prior to simplify QCs. To this end, the following lemma quantifies \(\mathsf{R}_{\mathrm{ERM}}(\hat{h}_{Q})\) when \(\mathbf{o}\) is predefined and \(\mathfrak{E}=0\), where \(\hat{h}_{Q}\equiv h_{Q}(\mathbf{\rho}^{*},\mathbf{o})\) with \(\mathbf{\rho}^{*}=\arg\min_{\mathbf{\rho}}\mathcal{L}(\mathbf{\rho},\mathbf{o})\).
**Lemma 2** (Informal).: _When the predefined \(\{o^{(k)}\}\) are mutually orthogonal with each other and the conditions in Theorem 1 are satisfied, we have \(\mathsf{R}_{\mathrm{ERM}}(\hat{h}_{Q})=0\)._
The proofs of Lemmas 1 and 2 are given in SM C&D.
We next analyze \(\mathsf{R}_{\mathrm{Gene}}(\hat{h}_{Q})\). Prior results cannot be used to prove Theorem 1, since such bounds polynomially scale with the trainable parameters and become vacuous in the over-parameterized regime. To remedy this issue, we utilize the concept of algorithmic robustness [70].
**Definition 1** (Robustness).: _A learning algorithm \(\mathcal{A}\) is \((R,\nu(\cdot))\)-robust with \(R\in\mathbb{N}\) and \(\nu(\cdot):\mathcal{Z}^{n}\to\mathbb{R}\), if \(\mathcal{Z}\) can be partitioned into \(R\) disjoint sets, denoted by \(\{\mathcal{C}_{r}\}_{r=1}^{R}\), such that the following holds for all \(\mathcal{D}\subset\mathcal{Z}^{n}:\forall\mathbf{s}=(\mathbf{x}^{(i)},y^{(i)})\in\mathcal{D}\), \(\forall\mathbf{z}=(\mathbf{x},y)\in\mathcal{Z}\), \(\forall r\in[R]\),_
\[\mathbf{s},\mathbf{z}\in\mathcal{C}_{r}\Rightarrow|l(h_{\mathcal{A}_{D}}(\mathbf{x}^{(i)}), y^{(i)})-l(h_{\mathcal{A}_{D}}(\mathbf{x}),y)|\leq\nu(\mathcal{D}).\]
Concisely, robustness measures how much the loss value can be varied with respect to the input space \(\mathcal{Z}\). A higher robustness of a classifier admits lower \(R\), \(\nu(\cdot)\), and \(\mathsf{R}_{\mathrm{Gene}}\)[70]. The following lemma quantifies the upper bound of \(\mathsf{R}_{\mathrm{Gene}}(\hat{h}_{Q})\) whose proof is given in SM E.
**Lemma 3**.: _Suppose the measure operator is bounded by \(C_{2}\) with \(\max_{k\in[K]}\|o^{(k)}\|\leq C_{2}\). Define \(\epsilon\) as the tolerable error. Following notations in Definition 1, the empirical QC is \((K(28N_{ge}/\epsilon)^{4nN_{ge}},4L_{1}KC_{2}\epsilon)\)-robust, and with probability \(1-\delta\) we have_
\[\mathsf{R}_{\mathrm{Gene}}(\hat{h}_{Q})\leq 4L_{1}KC_{2}\epsilon+5\xi(\hat{h}_{Q}) \sqrt{\frac{|\mathcal{T}_{\mathcal{D}}|4^{m}N_{ge}\ln\frac{56KN_{ge}}{\epsilon \delta}}{n}},\]
_where \(L_{1}\) is the Lipschitz constant of \(\ell\) with respect to \(h_{Q}\), \(\mathcal{T}_{r}^{\mathcal{D}}=\{i\in[n]:\mathbf{z}^{(i)}\in\mathcal{C}_{r}\}\), \(\xi(\hat{h}):=\max_{\mathbf{z}\in\mathcal{Z}}(\xi(\hat{h},\mathbf{z}))\), and \(\mathcal{T}_{\mathcal{D}}:=\{r\in[R]:|\mathcal{T}_{r}^{\mathcal{D}}|\geq 1\}\)._
The achieved results convey threefold insights. First, our bound does not explicitly depend on the number of trainable parameters [96]. This unlocks a new way to understand the generalization ability of QCs, especially over-parameterized ones. Next, our bound hints that a carefully designed \(U_{E}\) can enhance the performance of QCs [97, 53]. Last, \(\mathsf{R}_{\mathrm{Gene}}(\hat{h}_{Q})\to 0\) requires \(n\gg|\mathcal{T}_{\mathcal{D}}|4^{m}N_{ge}\). Fortunately, a reasonable value of \(n\) is sufficient to warrant this condition, because in general \(m\leq 2\), \(N_{ge}\propto|\mathbf{x}|\), and \(|\mathcal{T}_{\mathcal{D}}|\) decreases continuously from \(n\) to \(K\) as the empirical loss is reduced.
## III Numerical simulations
We conduct numerical simulations to exhibit that the advantages and limitations of QCs on different classification tasks can be interpreted by the derived risk curve and feature states. The omitted construction details and results are deferred to SM G.
We first apply QC to accomplish the binary classification on the parity dataset [98, 99, 100]. The number of qubits is \(N=6\) and the hardware-efficient Ansatz is adopted to realize \(U(\mathbf{\theta})\). The gradient descent method is used as the classical optimizer. Two measure operators are
\(o^{(1)}=\left|0\right\rangle\left\langle 0\right|\) and \(o^{(2)}=\left|1\right\rangle\left\langle 1\right|\). The simulation results of QC with \(N_{t}=54\) are displayed in Fig. 2(a). Particularly, the averaged train (test) accuracy steadily grows from \(44.1\%\) to \(100\%\) within \(22\) epochs, and the corresponding loss decreases from \(0.26\) to \(4\times 10^{-5}\). The dynamics of the feature states \(\{\rho^{(i,k)}\}\) at epochs \(t\in\{0,10,20,30,40\}\), visualized by Bloch spheres, echo Lemma 2. Besides, QC becomes more robust when we continue the training. Although the train (test) accuracy reaches the optimum, the loss can be further reduced, suggesting a lower risk warranted by Theorem 1. We further compare the risk curve between QC and a multilayer Perceptron (MLP) on this dataset. We fit their risk curves following the proposed method to probe potential quantum merits. As shown in Fig. 2(b), QC clearly outperforms MLP when the number of trainable parameters ranges from \(20\) to \(140\), and the valley of the risk curve is around \(N_{t}=70\)[101].
We then apply QC to learn the Fashion-MNIST image dataset with \(K=9\)[102]. The employed number of qubits is \(N=10\) and Pauli-based measure operators are employed. Convolutional neural networks (CNNs) are exploited as the reference. For all classifiers, the number of epochs is fixed to \(T=50\) and the number of trainable parameters \(N_{t}\) ranges from \(60\) to \(9000\). Each setting is repeated \(3\) times. As shown in Fig. 3, when the layer number is \(50\) with \(N_{t}=1500\), both the train and test accuracies of QC are about \(50\%\). This performance is inferior to CNN under a similar setting. To explore whether QC has the potential to outperform CNN on this dataset, we compare their risk curves. As shown in Fig. 3(b), unlike the parity dataset, QC is evidently inferior to CNN on the Fashion-MNIST dataset.
## IV Discussions and Outlook
We have characterized the potential of diverse QCs in terms of the expected risk. Our theoretical findings demonstrate that the efficacy of QCs is dependent on the problem at hand, which explains the empirical evidence of their superiority on synthetic and quantum datasets, yet inferiority on realistic tasks. Given the clear difference between the risk curves of QCs and deep neural classifiers, we present a concise technique to investigate potential quantum benefits by fitting their loss dynamics. Numerical results validate our theoretical results and the effectiveness of our method.
There are several interesting future research directions. The U-shape curve of QCs poses two open questions. First, can contemporary QCs attain quantum benefits on certain classical data when only limited data and restricted computing resources are available? Second, is it necessary to redesign QCs, such as nonlinear QCs [103, 104], so that they can also exhibit a double-descent risk curve? Besides, the unearthed connection between the conditions for optimal empirical risk and quantum state discrimination opens a new research avenue that amplifies the potential of QCs on quantum data aided by quantum information theory. Finally, it is intriguing to extend the developed non-vacuous generalization error bound of QCs to other scenarios, such as out-of-distribution data, in order to identify potential quantum advantages.
###### Acknowledgements.
The authors thank Xinbiao Wang for valuable input and inspiring discussions.
Figure 3: **Multi-class classification on the image dataset with \(K=9\).** (a) The learning performance of QC when the layer number is \(50\). (b) The fitted risk curve of QC and CNN. All labels have the same meaning with those used in Fig. 2.
Figure 2: **Binary classification on the parity dataset.** (a) The learning performance of QC when the layer number is \(3\). The \(x\)-axis denotes the epoch numbers. Shaded region represents variance. The Bloch spheres display the quantum feature states at different epochs. (b) The fitted risk curve of QC and MLP. The \(x\)-axis denotes the number of trainable parameters. The label ‘_QC-risk_’ (‘_MLP-risk_’) refers to the fitted risk curve of QC and MLP. The label ‘_QC-res_’ (‘_MLP-res_’) refers to the collected results used for fitting the curves. |
2309.08385 | A Unified View Between Tensor Hypergraph Neural Networks And Signal
Denoising | Hypergraph Neural networks (HyperGNNs) and hypergraph signal denoising
(HyperGSD) are two fundamental topics in higher-order network modeling.
Understanding the connection between these two domains is particularly useful
for designing novel HyperGNNs from a HyperGSD perspective, and vice versa. In
particular, the tensor-hypergraph convolutional network (T-HGCN) has emerged as
a powerful architecture for preserving higher-order interactions on
hypergraphs, and this work shows an equivalence relation between a HyperGSD
problem and the T-HGCN. Inspired by this intriguing result, we further design a
tensor-hypergraph iterative network (T-HGIN) based on the HyperGSD problem,
which takes advantage of a multi-step updating scheme in every single layer.
Numerical experiments are conducted to show the promising applications of the
proposed T-HGIN approach. | Fuli Wang, Karelia Pena-Pena, Wei Qian, Gonzalo R. Arce | 2023-09-15T13:19:31Z | http://arxiv.org/abs/2309.08385v1 | # A Unified View Between Tensor Hypergraph Neural Networks And Signal Denoising
###### Abstract
Hypergraph Neural networks (HyperGNNs) and hypergraph signal denoising (HyperGSD) are two fundamental topics in higher-order network modeling. Understanding the connection between these two domains is particularly useful for designing novel HyperGNNs from a HyperGSD perspective, and vice versa. In particular, the tensor-hypergraph convolutional network (T-HGCN) has emerged as a powerful architecture for preserving higher-order interactions on hypergraphs, and this work shows an equivalence relation between a HyperGSD problem and the T-HGCN. Inspired by this intriguing result, we further design a tensor-hypergraph iterative network (T-HGIN) based on the HyperGSD problem, which takes advantage of a multi-step updating scheme in every single layer. Numerical experiments are conducted to show the promising applications of the proposed T-HGIN approach.
Hypergraph Neural Network, Hypergraph Signal Denoising, Hypergraph Tensor.
## I Introduction
Hypergraphs are ubiquitous in real-world applications for representing interacting entities. Potential examples include biochemical reactions that often involve more than two interactive proteins [1], recommendation systems that contain more than two items in a shopping activity [2], and traffic flows that can be determined by more than two locations [3]. In a hypergraph, entities are described as vertices/nodes, and multiple connected nodes form a hyperedge as shown in Fig. 1 (b, c) of a hypergraph example.
A hypergraph \(\mathcal{G}\) is defined as a pair of two sets \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{v_{1},v_{2},...,v_{N}\}\) denotes the set of \(N\) nodes and \(\mathcal{E}=\{e_{1},e_{2},...,e_{K}\}\) is the set of \(K\) hyperedges whose elements \(e_{k}\) (\(k=1,2,...,K\)) are nonempty subsets of \(\mathcal{V}\). The maximum cardinality of edges, or \(m.c.e(\mathcal{G})\), is denoted by \(M\), which defines the order of a hypergraph. Apart from the hypergraph structure, there are also features \(\mathbf{x}_{v}\in\mathbb{R}^{D}\) associated with each node \(v\in\mathcal{V}\), which are used as row vectors to construct the feature matrix \(\mathbf{X}\in\mathbb{R}^{N\times D}\) of a hypergraph. From a hypergraph signal processing perspective, since the feature matrix \(\mathbf{X}\) can be viewed as a \(D\)-dimensional signal over each node, we use the words "feature" and "signal" interchangeably throughout the paper.
Given the hypergraph structure \(\mathcal{G}\) and the associated feature matrix \(\mathbf{X}\), hypergraph neural networks (HyperGNNs) are built through two operations: 1) signal transformation and 2) signal shifting to leverage higher-order information. Specifically, if a HyperGNN is defined in a matrix setting, these two steps can be written as follows:
\[\left\{\begin{array}{cl}\text{Signal transformation: }\mathbf{X}^{\prime}=\phi_{trans}( \mathbf{X};\mathcal{W});\\ \text{Signal shifting: }\mathbf{Y}=\phi_{shift}(\mathbf{X}^{\prime}, \mathcal{G});\end{array}\right. \tag{1}\]
where \(\mathbf{X}^{\prime}\) is the transformed signal in a desired hidden dimension \(D^{\prime}\) and \(\mathbf{Y}\) represents the linear combination of signals at the neighbors of each node according to the hypergraph structure \(\mathcal{G}\). While here the variables are denoted by matrices, in fact, a tensor paradigm provides significant advantages [4] as will be introduced later, and thus will be at the core of this paper context. The signal transformation function \(\phi_{trans}\), is parameterized by a learnable weight \(\mathcal{W}\) and is generally constructed by multi-layer perceptrons (MLPs). As a result, the variation of HyperGNNs mainly lies in the signal-shifting step. To make use of the hypergraph structure in the signal-shifting step, an appropriate hypergraph algebraic descriptor is required. Prior efforts on HyperGNNs primarily focus on matrix representations of hypergraphs with possible information loss [4, 5]. Consider one of the most common hypergraph matrix representations, the adjacency matrix of the clique-expanded hypergraph used in [6, 7], which constructs pairwise connections between any two nodes that are within the same hyperedge, thus only providing a non-injective mapping. As shown in Fig 1, hypergraphs (b) \(\mathcal{G}_{1}\) and (c) \(\mathcal{G}_{2}\) have the same pairwise connections as the simple graph of Fig. 1 (a).
Recently, a tensor-based HyperGNN framework T-HyperGNN [8] has been proposed to address potential information loss in matrix-based HyperGNNs. Specifically, the T-HyperGNN formulates tensor-hypergraph convolutional network (T-HGCN) via tensor-tensor multiplications (t-products) [9], which fully exploits higher-order features carried by a hypergraph. Interestingly, we find that the hypergraph signal shifting in T-HGCN is equivalent to a one-step gradient descent of solving a hypergraph signal denoising
Fig. 1: Robot collaboration network represented by (a) a simple graph and (b) a hypergraph \(\mathcal{G}_{1}\) and (c) another hypergraph \(\mathcal{G}_{2}\). In (a), each cooperation relationship is denoted by a line connecting exactly two entities; whereas in (b) and (c), each hyperedge denoted by a colored ellipse represents multi-robot cooperation.
(HyperGSD) problem (to be shown in Sec. III). Nevertheless, updating the gradient in one step per HyperGNN layer might be sub-optimal: For the two steps of HyperGNNs, only the signal shifting step corresponds to the gradient descent update. If we simply stack many layers of T-HGCN to perform multi-step gradient descent as shown in Fig. 2(a), the number of learnable parameters will unnecessarily increase. More importantly, numerous sequential transformations of the hypergraph signals could cause indistinguishable features across all nodes, leading to the well-known over-smoothing problem [10]. To overcome these issues, we propose an iterative \(K\)-step gradient descent procedure to solve the underlying HyperGSD problem, and further cast this procedure to formulate the novel **Tensor-hypergraph iterative network (T-HGIN)**, which combines the \(K\)-step updating process (signal shifting) in just a single layer as shown in Fig. 2(b). Additionally, T-HGIN leverages the initial input (with weight \(\alpha\)) and the current output (with weight \(1-\alpha\)) at each shifting step, performing a skip-connection operation that avoids over-smoothing.
## II Preliminaries
### _Hypergraph tensor representations and signal shifting_
While a hypergraph can be represented in either a matrix or a tensor form, in this work, we use tensorial descriptors to represent hypergraphs as they preserve intrinsic higher-order characteristics of hypergraphs [11]. Given a hypergraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) containing \(N\) nodes with order \(M\) (that is, \(m.c.e(\mathcal{G})=M\)), we define its **normalized adjacency tensor** as an \(M\)-order \(N\)-dimensional tensor \(\mathcal{A}\in\mathbb{R}^{N^{M}}\). Specifically, for any hyperedge \(e_{k}=\{v_{k_{1}},v_{k_{2}},...,v_{k_{k}}\}\in\mathcal{E}\) with \(c=|e_{k}|\leq M\), the tensor's corresponding entries are given by
\[a_{p_{1}p_{2}...p_{M}}=\frac{1}{d(v_{p_{1}})}\frac{c}{\alpha}, \tag{2}\]
with
\[\alpha=\sum_{r_{1},r_{2},...,r_{c}\geq 1,\sum_{i=1}^{c}r_{i}=M}\binom{M}{r_{1},r_{2},\cdots,r_{c}}, \tag{3}\]
and \(d(v_{p_{1}})\) being the degree of node \(v_{p_{1}}\) (or the total number of hyperedges containing \(v_{p_{i}}\)). The indices \(p_{1},p_{2},...,p_{M}\) for adjacency entries are chosen from all possible ways of \(\{k_{1},k_{2},...,k_{c}\}\)'s permutations with at least one appearance for each element of the hyperedge set, and \(\alpha\) is the sum of multinomial coefficients with the additional constraint \(r_{1},r_{2},...,r_{c}\neq 0\). In addition, other entries not associated with any hyperedge are all zeros. Note that for any node \(v_{p_{1}}\in\mathcal{V}\), we have \(\sum_{p_{2},...,p_{M}=1}^{N}a_{p_{1}p_{2}...p_{M}}=1\).
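As a concrete check of Eqs. (2)-(3), the following is a small NumPy sketch that builds the normalized adjacency tensor of a toy \(3^{\text{rd}}\)-order hypergraph; the node count and hyperedges are illustrative, and the final print verifies the row-normalization property just stated.

```python
import itertools
import math
import numpy as np

def alpha(c, M):
    # Eq. (3): sum of multinomial coefficients over r_1,...,r_c >= 1, sum r_i = M
    total = 0
    for rs in itertools.product(range(1, M + 1), repeat=c):
        if sum(rs) == M:
            total += math.factorial(M) // math.prod(math.factorial(r) for r in rs)
    return total

def normalized_adjacency(n_nodes, hyperedges, M):
    A = np.zeros((n_nodes,) * M)
    degree = np.zeros(n_nodes)
    for e in hyperedges:
        for v in e:
            degree[v] += 1
    for e in hyperedges:
        w = len(e) / alpha(len(e), M)          # Eq. (2): c / alpha
        # index tuples of length M using every node of the hyperedge at least once
        for idx in itertools.product(e, repeat=M):
            if set(idx) == set(e):
                A[idx] = w / degree[idx[0]]    # normalize by d(v_{p_1})
    return A

# toy hypergraph: 4 nodes, hyperedges {0,1,2} and {2,3}, order M = 3
A = normalized_adjacency(4, [(0, 1, 2), (2, 3)], M=3)
print(A.sum(axis=(1, 2)))   # each entry equals 1, matching the stated property
```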
The **hypergraph signal tensor**, on the other hand, is designed as the \((M-1)\)-time outer product of features along each feature dimension. Given the feature (or signal) matrix \(\mathbf{X}\in\mathbb{R}^{N\times D}\) as the input, with \(D\) being the dimension of features for each node, the \(d\)-th dimensional hypergraph signal (\(d=1,\cdots,D\)) is given by
\[[\mathcal{X}]_{d}=\underbrace{[\mathbf{x}]_{d}\circ[\mathbf{x}]_{d}\circ \cdots\circ[\mathbf{x}]_{d}}_{\text{(M-1) times}}\in\mathbb{R}^{N\times 1\times N ^{(M-2)}}, \tag{4}\]
where \(\circ\) denotes the outer (elementary tensor) product, and \([\mathbf{x}]_{d}\in\mathbb{R}^{N}\) represents the \(d\)-th dimensional feature vector of all \(N\) nodes. For example, given \(M=3\), \([\mathcal{X}]_{d}=[\mathbf{x}]_{d}[\mathbf{x}]_{d}^{T}\in\mathbb{R}^{N\times 1 \times N}\), where we unsqueeze the outer-product tensor to generate the additional second mode for the dimension index of different features. Then by computing \([\mathcal{X}]_{d}\) for all \(D\) features and stacking them together along the second-order dimension, we obtain an \(M^{\text{th}}\)-order interaction tensor \(\mathcal{X}\in\mathbb{R}^{N\times D\times N^{(M-2)}}\). The resulting interaction tensor can be viewed as a collection of \(D\) tensors, each depicting node interactions at one feature dimension.
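A minimal sketch of Eq. (4) follows, constructing the signal tensor by repeated outer products per feature dimension; the random feature matrix is a placeholder.

```python
import numpy as np

def hypergraph_signal_tensor(X, M):
    # Eq. (4): (M-1)-fold outer product per feature dimension, stacked on axis 1
    N, D = X.shape
    slices = []
    for d in range(D):
        t = X[:, d]
        for _ in range(M - 2):
            t = np.multiply.outer(t, X[:, d])
        slices.append(t.reshape(N, 1, *([N] * (M - 2))))
    return np.concatenate(slices, axis=1)

X = np.random.default_rng(1).standard_normal((4, 2))   # N = 4 nodes, D = 2 features
print(hypergraph_signal_tensor(X, M=3).shape)          # (4, 2, 4), i.e., N x D x N
```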
Analogous to simple graph signal shifting, **hypergraph signal shifting** is defined as the product of a hypergraph representation tensor \(\mathcal{A}\) and a hypergraph signal tensor \(\mathcal{X}\), offering the notion of information flow over a hypergraph. The tensor-tensor multiplications (known as t-products), in particular, preserve the intrinsic higher-order properties and are utilized to operate hypergraph signal shifting [11]. Take \(M=3\) as a convenient example of the t-product. To provide an appropriate alignment in the t-product signal shifting (to be introduced in Eq. (7)), we first symmetrize the adjacency tensor \(\mathcal{A}\in\mathbb{R}^{N\times N\times N}\) to be \(\mathcal{A}_{s}\in\mathbb{R}^{N\times N\times(2N+1)}\) by adding a zero matrix \(\mathbf{0}_{N\times N}\) as the first frontal slice, reflecting the frontal slices of the underlying tensor, and then dividing by 2: \(\mathcal{A}_{s}=\frac{1}{2}\)\(\texttt{fold}([\mathbf{0},\mathbf{A}^{(1)},\mathbf{A}^{(2)},...,\mathbf{A}^{(N)},\mathbf{A}^{(N)},...,\mathbf{A}^{(2)},\mathbf{A}^{(1)}])\), where the \(k\)-th frontal slice is \(\mathbf{A}^{(k)}=\mathcal{A}(:,:,k)\in\mathbb{R}^{N\times N\times 1}\). After applying the same operation to the hypergraph signal tensor \(\mathcal{X}\) to obtain \(\mathcal{X}_{s}\), the hypergraph signal shifting is then defined through the t-product \(*\) as
\[\mathcal{A}_{s}*\mathcal{X}_{s} \tag{5}\] \[= \texttt{fold}(\texttt{bcirc}(\mathcal{A}_{s})\cdot\texttt{unfold}( \mathcal{X}_{s}))\] (6) \[= \texttt{fold}\begin{pmatrix}\begin{bmatrix}\mathbf{0}&\mathbf{A}^{ (1)}&\mathbf{A}^{(2)}&\cdots&\mathbf{A}^{(2)}&\mathbf{A}^{(1)}\\ \mathbf{A}^{(1)}&\mathbf{0}&\mathbf{A}^{(1)}&\cdots&\mathbf{A}^{(3)}&\mathbf{A} ^{(2)}\\ \mathbf{A}^{(2)}&\mathbf{A}^{(1)}&\mathbf{0}&\cdots&\mathbf{A}^{(4)}&\mathbf{A} ^{(3)}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ \mathbf{A}^{(2)}&\mathbf{A}^{(3)}&\mathbf{A}^{(4)}&\cdots&\mathbf{0}&\mathbf{A }^{(1)}\\ \mathbf{A}^{(1)}&\mathbf{A}^{(2)}&\mathbf{A}^{(3)}&\cdots&\mathbf{A}^{(1)}& \mathbf{0}\end{bmatrix}\begin{bmatrix}\mathbf{0}\\ \mathbf{X}^{(1)}\\ \mathbf{X}^{(2)}\\ \vdots\\ \mathbf{X}^{(1)}\end{bmatrix}, \tag{7}\]
Fig. 2: To perform \(K\)-step gradient descent for the underlying hypergraph signal denoising problem, we need (a) K-layer T-HGCN or alternatively (b) 1-layer T-HGIN.
where \(\mathtt{bcirc}(\mathcal{A}_{s})\) converts the set of \(N_{s}\) frontal slice matrices (in \(\mathbb{R}^{N\times N}\)) of the tensor \(\mathcal{A}_{s}\) into a block circulant matrix. The \(\mathtt{unfold}(\mathcal{X}_{s})\) stacks vertically the set of \(N_{s}\) frontal slice matrices (in \(\mathbb{R}^{N\times D}\)) of \(\mathcal{X}_{s}\) into a \(N_{s}N\times D\) matrix. The \(\mathtt{fold}(\cdot)\) is the reverse of the \(\mathtt{unfold}(\cdot)\) process, so that \(\mathtt{fold}(\mathtt{unfold}(\mathcal{A}_{s}))=\mathcal{A}_{s}\). The t-product of higher-order tensors is more involved, relying on recursive computation with \(3^{\mathrm{rd}}\)-order base cases; for brevity, the reader may refer to [9] for full technical details of the t-product \(*\).
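To make Eqs. (5)-(7) concrete, here is a minimal NumPy sketch of the \(3^{\mathrm{rd}}\)-order t-product via the \(\mathtt{bcirc}\)/\(\mathtt{unfold}\)/\(\mathtt{fold}\) operators, together with the symmetrization described above; the tensor sizes and random entries are illustrative.

```python
import numpy as np

def bcirc(T):
    # Block-circulant matrix whose first block column unfolds the frontal slices
    n, m, K = T.shape
    B = np.zeros((n * K, m * K))
    for i in range(K):
        for j in range(K):
            B[i * n:(i + 1) * n, j * m:(j + 1) * m] = T[:, :, (i - j) % K]
    return B

def unfold(T):
    n, m, K = T.shape
    return T.transpose(2, 0, 1).reshape(K * n, m)

def fold(M2, n, K):
    return M2.reshape(K, n, -1).transpose(1, 2, 0)

def t_product(A, X):
    # Eq. (6): fold(bcirc(A) . unfold(X))
    return fold(bcirc(A) @ unfold(X), A.shape[0], A.shape[2])

def symmetrize(T):
    # Prepend a zero frontal slice, reflect the slices, and divide by 2
    n, m, _ = T.shape
    return 0.5 * np.concatenate([np.zeros((n, m, 1)), T, T[:, :, ::-1]], axis=2)

rng = np.random.default_rng(0)
A = rng.random((3, 3, 3))   # adjacency tensor, N = 3, M = 3
X = rng.random((3, 2, 3))   # signal tensor, D = 2
Y = t_product(symmetrize(A), symmetrize(X))
print(Y.shape)              # (3, 2, 7): N x D x (2N + 1)
```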
### _Tensor-Hypergraph Convolutional Neural Network_
With the defined hypergraph signal shifting operation, a single T-HGCN [8] layer is given by \(\mathcal{Y}_{s}=\mathcal{A}_{s}*\mathcal{X}_{s}*\mathcal{W}_{s}\), where \(\mathcal{W}_{s}\in\mathbb{R}^{D\times D^{\prime}\times N_{s}^{(M-2)}}\) is a learnable weight tensor with \(DD^{\prime}\) weights parameterized in the first frontal slice and all the remaining frontal slices being zeros. Since the t-product is commutable [9], we rewrite the T-HGCN into the following two steps:
\[\begin{cases}&\text{Signal transformation: }\mathcal{X}_{s}^{\prime}= \text{MLP}(\mathcal{X}_{s});\\ &\text{Signal shifting: }\mathcal{Y}_{s}=\mathcal{A}_{s}*\mathcal{X}_{s}^{ \prime},\end{cases} \tag{8}\]
where \(\mathcal{X}_{s}\in\mathbb{R}^{N\times D\times N_{s}^{(M-2)}}\) and \(\mathcal{Y}_{s}\in\mathbb{R}^{N\times D^{\prime}\times N_{s}^{(M-2)}}\) are the input and output of a T-HGCN layer. To perform downstream tasks, non-linear activation functions can be applied to \(\mathcal{Y}_{s}\) accordingly.
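Since \(\mathtt{bcirc}\) is block circulant, the t-product can equivalently be computed slice-wise in the Fourier domain along the third mode, which gives a compact sketch of the T-HGCN layer in Eq. (8); the one-layer linear stand-in for the MLP and the weight acting only through the first frontal slice of \(\mathcal{W}_{s}\) are simplifying assumptions.

```python
import numpy as np

def t_product_fft(A, X):
    # Slice-wise products in the Fourier domain along the third mode; equivalent
    # to fold(bcirc(A) @ unfold(X)) because bcirc is diagonalized by the DFT
    Ah, Xh = np.fft.fft(A, axis=2), np.fft.fft(X, axis=2)
    return np.real(np.fft.ifft(np.einsum("ijk,jlk->ilk", Ah, Xh), axis=2))

def t_hgcn_layer(A_s, X_s, W):
    # Eq. (8): signal transformation, then signal shifting. A weight tensor whose
    # first frontal slice is W (all others zero) acts slice-wise on the features.
    Xp = np.einsum("idk,de->iek", X_s, W)
    return t_product_fft(A_s, Xp)

rng = np.random.default_rng(0)
N, D, Dp, Ns = 3, 4, 2, 7
A_s = rng.random((N, N, Ns))    # symmetrized adjacency tensor
X_s = rng.random((N, D, Ns))    # symmetrized signal tensor
W = rng.random((D, Dp))         # first frontal slice of the weight tensor W_s
print(t_hgcn_layer(A_s, X_s, W).shape)   # (N, Dp, Ns)
```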
## III Equivalence Between T-HGCN and Tensor Hypergraph Signal Denoising
Recall that the signal-shifting function \(\phi_{shift}\) aggregates neighboring signals to infer the target signal of each node. The intuition behind the architecture of HyperGNNs (especially the signal shifting) is that connected nodes tend to share similar properties; that is, signals over a hypergraph are smooth. Motivated by this intuition and previous work [12] on simple graphs, we introduce the tensor hypergraph signal denoising (HyperGSD) problem with a smoothness regularization term and prove its equivalence to the T-HGCN in this section.
### _Tensor Hypergraph Signal Denoising_
**Problem (Hypergraph Signal Denoising).** Suppose \(\mathcal{X}_{s}\in\mathbb{R}^{N\times D\times N_{s}^{(M-2)}}\) is the hypergraph signal of an observed noisy hypergraph signal on an \(M^{\mathrm{th}}\) order hypergraph \(\mathcal{G}\). Without loss of generality, we assume \(D=1\) (if \(D>1\), we can simply take summation over all feature dimensions and obtain the same result). Motivated by a smoothness assumption of hypergraph signals, we formulate the HyperGSD problem with the Laplacian-based total variation regularization term as follows:
\[\operatorname*{argmin}_{\mathcal{Y}_{s}}\mathcal{J}=(\mathcal{Y}_{s}- \mathcal{X}_{s})^{T}*(\mathcal{Y}_{s}-\mathcal{X}_{s})+b\mathcal{Y}_{s}^{T}* \mathcal{L}_{s}*\mathcal{Y}_{s}, \tag{9}\]
where \(\mathcal{Y}_{s}\in\mathbb{R}^{N\times 1\times N_{s}^{(M-2)}}\) is the desired hypergraph signal that we aim to recover, \(b>0\) is a scalar for the regularization term, and the last \(M-2\) orders of all the tensors are flattened as frontal slice indices to simplify the t-product. Here, \(\mathcal{L}_{s}=\mathcal{I}_{s}-\mathcal{A}_{s}\) is the normalized symmetric Laplacian tensor, and \(\mathcal{I}_{s}\) is an identity tensor (with the first frontal slice being the identity matrix and all other entries being zero). The tensor transpose of \(\mathcal{Y}_{s}\in\mathbb{R}^{N\times 1\times N_{s}^{(M-2)}}\), under the t-algebra, is defined as \(\mathcal{Y}_{s}^{T}\in\mathbb{R}^{1\times N\times N_{s}^{(M-2)}}\), which is obtained by recursively transposing each sub-order tensor and then reversing the order of these sub-order tensors [9]. The first term encourages the recovered signal \(\mathcal{Y}_{s}\) to be close to the observed signal \(\mathcal{X}_{s}\), while the second term encodes the regularization, as neighboring hypergraph signals tend to be similar. Notice that the cost function \(\mathcal{J}(\mathcal{Y}_{s})\) is not a scalar, but a tensor in \(\mathbb{R}^{1\times 1\times N_{s}^{(M-2)}}\).
### _T-HGCN as Hypergraph Signal Denoising_
Next, we show the key insight that the hypergraph signal shifting operation in the T-HGCN is directly connected to the HyperGSD problem, which is given in the following theorem.
**Theorem III.1**: _The hypergraph signal shifting \(\mathcal{Y}_{s}=\mathcal{A}_{s}*\mathcal{X}_{s}\) is equivalent to a one-step gradient descent of solving the leading function of the HyperGSD problem Eq. (9) with \(c=\frac{1}{2b}\), where \(c\) is the learning rate of the gradient descent step._
_Proof._ First take the derivative of the cost function \(\mathcal{J}(\mathcal{Y}_{s})\) w.r.t \(\mathcal{Y}_{s}\):
\[\frac{\partial\mathcal{J}}{\partial\mathcal{Y}_{s}}=2\cdot\mathtt{bcirc}( \mathcal{Y}_{s}-\mathcal{X}_{s})+2b\cdot\mathtt{bcirc}(\mathcal{L}_{s}* \mathcal{Y}_{s}). \tag{10}\]
Recall from Eq. (7) that the \(\mathtt{bcirc}(\cdot)\) operation has the first column being the unfolded \(2N+1\) frontal slices, and the other columns being the cyclic shifting of the first column. When updating \(\mathcal{Y}_{s}\) using one-step gradient descent, the first column of a block circulant tensor is sufficient, as it contains all information of updating \(\mathcal{Y}_{s}\), and the remaining columns differ from the first column in order only. Using the leading function \(\mathcal{J}_{1}\) for Eq. (10), which gives the first block column of the circulant tensor \(\frac{\partial\mathcal{J}}{\partial\mathcal{Y}_{s}}\), we can simply drop the \(\mathtt{bcirc}(\cdot)\) operation so that the one-step gradient descent to update \(\mathcal{Y}_{s}\) from \(\mathcal{X}_{s}\) is
\[\mathcal{Y}_{s} \leftarrow\mathcal{X}_{s}-c\frac{\partial\mathcal{J}_{1}}{ \partial\mathcal{Y}_{s}}\Big{|}_{\mathcal{Y}=\mathcal{X}_{s}} \tag{11}\] \[=\mathcal{X}_{s}-2bc(\mathcal{L}_{s}*\mathcal{X}_{s})\] (12) \[=(1-2bc)\mathcal{X}_{s}+2bc\mathcal{A}_{s}*\mathcal{X}_{s}. \tag{13}\]
Given learning rate \(c=\frac{1}{2b}\), we obtain \(\mathcal{Y}_{s}\leftarrow\mathcal{A}_{s}*\mathcal{X}_{s}\), which is the same form as the shifting operation in Eq. (8). \(\square\)
This theorem implies that a single layer of the T-HGCN [8] is essentially equivalent to solving the HyperGSD problem by one-step gradient descent. Correspondingly, performing a \(K\)-step gradient descent would require \(K\) layers of T-HGCN, which could greatly increase the number of learnable parameters. As a result, a question naturally arises: Can we perform multi-step gradient descent toward the HyperGSD problem with just a single layer of HyperGNNs? We provide an affirmative answer by proposing the T-HGIN approach in the next section.
## IV Tensor-Hypergraph Iterative Network
With the goal of merging multi-step gradient descent into a single HyperGNN, we first propose the \(K\)-step iterative gradient descent for the HyperGSD problem in Eq. (9). Then we adopt the iteration process to design the Tensor-Hypergraph Iterative Network (T-HGIN).
**Iterative Gradient Descent for Signal Denoising.** Given the gradient of the HyperGSD problem in Eq. (10), we now update the gradient iteratively to obtain the sequence of hypergraph signals \((\mathcal{Y}_{s}^{(0)},\mathcal{Y}_{s}^{(1)},\mathcal{Y}_{s}^{(2)},..., \mathcal{Y}_{s}^{(K)})\) with the following iterative process:
\[\mathcal{Y}_{s}^{(k)} \leftarrow\mathcal{Y}_{s}^{(k-1)}-c\frac{\partial\mathcal{J}_{1}}{\partial\mathcal{Y}_{s}}\Big{|}_{\mathcal{Y}_{s}=\mathcal{Y}_{s}^{(k-1)}}=(1-2c-2bc)\mathcal{Y}_{s}^{(k-1)}+2c\mathcal{X}_{s}+2bc\mathcal{A}_{s}*\mathcal{Y}_{s}^{(k-1)}, \tag{14}\]
where \(\mathcal{Y}_{s}^{(k)}\) with \(k=1,...,K\) are iteratively updated clean hypergraph signals and the starting point is \(\mathcal{Y}_{s}^{(0)}=\mathcal{X}_{s}\).
**From Iterative Signal Denoising To T-HGIN.** From the updating rule above, we then formulate T-HGIN by a slight variation of Eq. (14). Setting the learning rate \(c=\frac{1}{2(1+b)}\), so that the coefficient \(1-2c-2bc\) vanishes, we then obtain that
\[\mathcal{Y}_{s}^{(k)}\leftarrow 2c\mathcal{X}_{s}+2bc\mathcal{A}_{s}*\mathcal{Y}_{s}^{(k-1)}. \tag{15}\]
Since \(2c+2bc=1\), setting \(2c=\alpha\) implies that \(2bc=1-\alpha\). Consequently, a single layer of the T-HGIN is formulated as
\[\left\{\begin{array}{l}\text{Signal transformation: }\mathcal{X}_{s}^{\prime}= \text{MLP}(\mathcal{X}_{s});\\ \text{Signal shifting: }\mathcal{Y}_{s}^{(k)}=\alpha\mathcal{X}_{s}^{\prime}+(1- \alpha)\mathcal{A}_{s}*\mathcal{Y}_{s}^{(k-1)},\end{array}\right. \tag{16}\]
with \(k=1,...,K\), \(\mathcal{Y}_{s}^{(0)}=\mathcal{X}_{s}^{\prime}\), and \(\alpha\in[0,1]\). The signal transformation is constructed by an MLP. The signal shifting of the T-HGIN can be roughly viewed as an iterative personalized PageRank [10], where \(\alpha\) is the probability that a node teleports back to the original node and \(1-\alpha\) is the probability of taking a random walk on the hypergraph through the hypergraph signal shifting. In fact, when \(\alpha=0\) and \(K=1\), the T-HGIN is the same as the T-HGCN, indicating that the T-HGCN is subsumed within the proposed T-HGIN framework (a minimal implementation sketch is given after the list below). In addition, T-HGIN has three major advantages compared to T-HGCN:
1. As shown in Fig. 2, a \(K\)-layer T-HGCN is required to perform \(K\) steps of hypergraph signal shifting, but in contrast, the T-HGIN breaks this required equivalence between the depth of neural networks and the steps of signal shifting, allowing any steps of signal shifting in just one layer.
2. The T-HGIN leverages the information contained in the original hypergraph signal \(\mathcal{X}_{s}\), which performs a "skip-connection" analogous to ResNet [13] and mitigates the potential over-smoothing problem [10] as the neural network is going deep to aggregate broader neighborhood.
3. Although the \(K\)-step hypergraph signal shifting is somewhat involved, the number of learnable parameters remains the same as only one layer of the T-HGCN. As shown in the following experiment, the T-HGIN can often achieve better performance than other alternative HyperGNNs that would require more learnable parameters.
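The following is a minimal NumPy sketch of one T-HGIN layer implementing Eq. (16), reusing the Fourier-domain t-product; the one-layer linear stand-in for the MLP and all sizes are illustrative.

```python
import numpy as np

def t_product_fft(A, X):
    # t-product computed slice-wise in the Fourier domain along the third mode
    Ah, Xh = np.fft.fft(A, axis=2), np.fft.fft(X, axis=2)
    return np.real(np.fft.ifft(np.einsum("ijk,jlk->ilk", Ah, Xh), axis=2))

def t_hgin_layer(A_s, X_s, W, K, alpha):
    # Eq. (16): one signal transformation, then K personalized-PageRank shifts
    Xp = np.einsum("idk,de->iek", X_s, W)   # linear stand-in for the MLP
    Y = Xp                                  # Y^(0) = X'
    for _ in range(K):
        Y = alpha * Xp + (1.0 - alpha) * t_product_fft(A_s, Y)
    return Y

rng = np.random.default_rng(0)
N, D, Dp, Ns = 3, 4, 2, 7
A_s, X_s = rng.random((N, N, Ns)), rng.random((N, D, Ns))
W = rng.random((D, Dp))
print(t_hgin_layer(A_s, X_s, W, K=3, alpha=0.1).shape)   # (N, Dp, Ns)
# Note: alpha = 0 and K = 1 recovers the T-HGCN layer of Eq. (8).
```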
## V Experiments
We evaluate the proposed T-HGIN approach on three real-world academic networks and compare it to four state-of-the-art benchmarks. The experiment aims to conduct a semi-supervised node classification task, in which each node is an academic paper and each class is a research category. We use the accuracy rate to be the metric of model performance. For each reported accuracy rate, \(50\) experiments are performed to compute the mean and the standard deviation of the accuracy rates. We use the Adam optimizer with a learning rate and the weight decay choosing from \(\{0.01,0.001\}\) and \(\{0.005,0.0005\}\) respectively, and tune the hidden dimensions over \(\{64,128,256,512\}\) for all the methods.
**Datasets.** The hypergraph datasets we used are the co-citation datasets (Cora, CiteSeer, and PubMed) in the academic network. The hypergraph structure is obtained by viewing each co-citation relationship as a hyperedge. The node features associated with each paper are the bag-of-words representations summarized from the abstract of each paper, and the node labels are research categories (e.g., algorithm, computing, etc). For expedited proof of concept, the raw datasets from [14] are downsampled to smaller hypergraphs. The descriptive statistics of these hypergraphs are summarized in Table I.
**Experiment Setup and Benchmarks.** To classify the labels of testing nodes, we feed the whole hypergraph structure and node features to the model. The training, validation, and testing data are set to be \(50\%,25\%\), and \(25\%\) for each complete dataset, respectively. We choose regular multi-layer perceptron (MLP), HGNN [6], HyperGCN [14], and HNHN [15] as the benchmarks. In particular, the HGNN and the HyperGCN utilize hypergraph reduction approaches to define the hypergraph adjacency matrix and Laplacian matrix, which may result in higher-order structural distortion [5]. The HNHN formulates a two-stage propagation rule using the incidence matrix, which does not use higher-order interactions of the hypergraph signal tensor [8]. Following the convention of HyperGNNs, we set the number of layers for all HyperGNNs to be 2 to avoid over-smoothing, except for the T-HGCN and the proposed T-HGIN. For the T-HGCN and the T-HGIN, we use only one layer: the T-HGCN's accuracy decreases when the number of layers is greater than one, while the T-HGIN can achieve a deeper HyperGNN architecture by varying the number of iterations \(K\) within one layer, as shown in Fig. 2 (b). The grid search is used to tune the two hyperparameters \(K\) and \(\alpha\) through four evenly spaced intervals in both \(K\in[1,5]\) and \(\alpha\in[0.1,0.5]\).
**Results and Discussion.** The averaged accuracy rates are summarized in Table II, which shows that our proposed \(K\)-step shifting entailed T-HGIN achieves the best performance among the state-of-the-art HyperGNNs on the three hypergraphs. While high variances of the results often occur to other existing HyperGNNs in these data examples, the proposed T-HGIN desirably shows only relatively moderate variance.
**The effect of the number of iterations.** Interestingly, the optimal values selected for \(K\) coincide with the maximum shortest path on the underlying hypergraphs, the observation of which is consistent with that of [10]. To some extent, this phenomenon supports the advantage of the proposed T-HGIN over other "shallow" HyperGNNs that perform only one or two steps of signal shifting. Equipped with the multi-step iteration and the skip-connection mechanism, the T-HGIN is able to fully propagate across the whole hypergraph, and importantly, avoid the over-smoothing issue at the same time.
**The effect of the teleport probability.** Regarding the teleport parameter \(\alpha\), the optimal selected values for the three datasets are \(\{0.1,0.1,0.3\}\), respectively. Empirically, the selection of \(\alpha\)'s could depend on the connectivity of nodes. For example, the PubMed hypergraph has more isolated connected components and tends to require a higher value of \(\alpha\). A direct visualization for the PubMed network is also shown in Fig. 3 using one representative run of the experiment, which shows that the tensor-based approaches appear to give more satisfactory performance than the classic matrix-based HyperGNN; the proposed T-HGIN further improves upon the T-HGCN, confirming the effectiveness of the proposed multi-step iteration scheme.
## VI Conclusion
In the context of Tensor-HyperGraph Neural Networks (T-HyperGNNs), this work demonstrates that the hypergraph signal shifting of T-HGCN is equivalent to a one-step gradient descent of solving the hypergraph signal denoising problem. Based on this equivalency, we propose a \(K\)-step gradient descent rule and formulate a new hypergraph neural network - Tensor-Hypergraph Iterative Network (T-HGIN). Compared to the T-HGCN, the T-HGIN benefits from the construction of \(K\)-step propagation in one single layer, offering an efficient way to perform propagation that spreads out to a larger-sized neighborhood. Satisfactorily, the proposed T-HGIN achieves competitive performance on multiple hypergraph data examples, showing its promising potential in real-world applications. We also note that the equivalency between HyperGNNs and HyperGSDs can also be utilized to design neural networks for denoising like in [16, 17], and we will leave this as an interesting extension for future studies.
## Acknowledgment
This work was partially supported by the NSF under grants CCF-2230161, DMS-1916376, the AFOSR award FA9550-22-1-0362, and by the Institute of Financial Services Analytics.
|
2307.16373 | 2D Convolutional Neural Network for Event Reconstruction in IceCube
DeepCore | IceCube DeepCore is an extension of the IceCube Neutrino Observatory designed
to measure GeV scale atmospheric neutrino interactions for the purpose of
neutrino oscillation studies. Distinguishing muon neutrinos from other flavors
and reconstructing inelasticity are especially difficult tasks at GeV scale
energies in IceCube DeepCore due to sparse instrumentation. Convolutional
neural networks (CNNs) have been found to have better success at neutrino event
reconstruction than conventional likelihood-based methods. In this
contribution, we present a new CNN model that exploits time and depth
translational symmetry in IceCube DeepCore data and present the model's
performance, specifically for flavor identification and inelasticity
reconstruction. | J. H. Peterson, M. Prado Rodriguez, K. Hanson | 2023-07-31T02:37:36Z | http://arxiv.org/abs/2307.16373v1 | # 2D Convolutional Neural Network for Event Reconstruction in IceCube DeepCore
###### Abstract:
IceCube DeepCore is an extension of the IceCube Neutrino Observatory designed to measure GeV scale atmospheric neutrino interactions for the purpose of neutrino oscillation studies. Distinguishing muon neutrinos from other flavors and reconstructing inelasticity are especially difficult tasks at GeV scale energies in IceCube DeepCore due to sparse instrumentation. Convolutional neural networks (CNNs) have been found to have better success at neutrino event reconstruction than conventional likelihood-based methods. In this contribution, we present a new CNN model that exploits time and depth translational symmetry in IceCube DeepCore data and present the model's performance, specifically for flavor identification and inelasticity reconstruction.
**Corresponding authors:** J.H. Peterson\({}^{1*}\), M. Prado Rodriguez\({}^{1}\), K. Hanson\({}^{1}\)
\({}^{1}\) University of Wisconsin - Madison
\({}^{*}\) Presenter
The 38th International Cosmic Ray Conference (ICRC2023)
26 July - 3 August, 2023
Nagoya, Japan
## 1 Introduction
The IceCube Neutrino Observatory is a neutrino detector located at the South Pole. When high energy neutrinos interact with the Antarctic ice, they produce fast moving charged secondary particles that emit Cherenkov radiation. The IceCube detector consists of 86 strings of digital optical modules (DOMs) that detect this Cherenkov radiation [1]. IceCube DeepCore is an infill of the IceCube Neutrino Observatory that utilizes more densely instrumented strings to lower the energy threshold from hundreds of GeV to a few GeV [2]. Many of the IceCube experiment's physics goals require reconstructing observables such as neutrino direction, neutrino energy, and neutrino flavor.
When a muon neutrino / antineutrino undergoes a charged current deep inelastic scattering, it produces an energetic muon / antimuon that can travel a significant distance and produce a detectable track of light [3]. This is referred to as a track event. At lower energies, for neutral current deep inelastic scattering events, and for charged current deep inelastic scattering of electron neutrinos / antineutrinos and most tau neutrinos / antineutrinos, only the hadronic cascade will be visible [2]. This is referred to as a cascade event. Thus, muon neutrinos / antineutrinos can be separated from the other flavors using the IceCube detector.
The inelasticity is the fraction of neutrino energy that is deposited in the hadronic cascade of a deep inelastic scattering. The average inelasticity for muon neutrinos and muon antineutrinos is different [4, 5]. This can allow us to separate neutrinos from antineutrinos statistically using the IceCube detector.
Both neutrino flavor and inelasticity are observables that require evaluating the morphology of individual events. For particle identification (PID), we look for the existence of an outgoing muon / antimuon track [3]. At the energies relevant for DeepCore, the energetic muon / antimuon is a minimum ionizing particle, and thus the average length the muon / antimuon travels scales linearly with the energy of the muon / antimuon. Thus we can look at the length of the outgoing track and the total light deposited in the hadronic cascade to determine the inelasticity [3]. The density of the DOMs in the ice means that events at lower energies have lower resolutions, making standard reconstruction techniques more difficult to use. It has been shown that machine learning based reconstruction methods are faster and more precise than the photon table based reconstruction methods that have been previously used in IceCube [6]. In this paper we describe a new convolutional neural network (CNN) developed to measure PID and inelasticity in IceCube DeepCore.
## 2 Methods
### Monte Carlo Data and Selection
For training and testing the CNN we use simulated data that includes the following restrictions:
* The energy must be between 3 GeV and 1 TeV for inelasticity reconstruction, 5 GeV to 1 TeV for PID reconstruction.
* There must be at least eight photon hits and at most 250 photon hits in the event.
* The interaction must occur between 2106 meters and 2450 meters deep in the ice in a circle of radius 100 meters surrounding the center of DeepCore.
We elect to use relatively loose selection criteria so that we have more events to train the CNN with and so that in the future we can apply the trained CNN to existing data sets that have stricter selection criteria. The energy threshold was lowered from 5 GeV to 3 GeV for inelasticity to match existing matter-dependent studies. For PID reconstruction, we need to select a sample that contains both tracks and cascades, so we take muon and electron neutrino / antineutrino events, including both neutral and charged current interactions. For inelasticity reconstruction we just need to focus on track events, so we only take muon neutrino / antineutrino charged current interactions. After imposing these restrictions our data has the composition shown in Table 1.
We then split the data into three sets: 4% for testing the CNN, 10% for validating performance during training, and the rest as a training set.
### Monte Carlo Data Preparation
Our data should have translational symmetry in both space and time. However, the strings in the DeepCore detector are non-uniformly spaced in the x-y plane, as shown in Figure 1, so exploiting the x-y translational symmetry cannot be easily done. The DOM spacing is uniform along the strings, so we can exploit the translational symmetry along depth and along time. Thus, for every event we compose a 2-dimensional image for each string in the DeepCore detector. The images consist of the DOMs ordered by depth on one axis and time sliced into 25 nanosecond time segments on the other axis. The charge of the photon hits seen by a particular DOM in an event is summed, and the pixel containing the earliest photon hit in a particular DOM is populated with the total charge. An example of such a string image is shown in Figure 2.
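The image construction just described can be sketched as follows; the number of DOM rows, the number of time bins, and the reference time `t0` are illustrative placeholders rather than the analysis' actual settings.

```python
import numpy as np

def string_image(dom_idx, hit_times, charges, n_doms, t0, n_bins=64, dt=25.0):
    # One 2D image per string: rows index DOM depth, columns index 25 ns slices.
    # Charges on a DOM are summed and placed in that DOM's earliest-hit bin.
    img = np.zeros((n_doms, n_bins))
    for d in np.unique(dom_idx):
        sel = dom_idx == d
        first_bin = int((hit_times[sel].min() - t0) // dt)
        if 0 <= first_bin < n_bins:
            img[d, first_bin] = charges[sel].sum()
    return img

# toy event on one string: three hits on DOMs 10 and 11
dom_idx = np.array([10, 10, 11])
hit_times = np.array([9810.0, 9900.0, 9835.0])   # ns
charges = np.array([0.8, 1.2, 0.9])              # photoelectrons
print(string_image(dom_idx, hit_times, charges, n_doms=60, t0=9800.0).shape)
```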
There are two types of strings of DOMs in the DeepCore detector. Seven of the strings are identical to the strings that are used in the rest of the IceCube detector. We will refer to those strings as IceCube strings. There are eight strings that are more densely instrumented surrounding a single IceCube string. We will refer to those strings as DeepCore strings. We utilize all of the strings in the blue circle in Figure 1 in our CNN. For the inelasticity CNN we also incorporate the IceCube strings surrounding the DeepCore detector shown in Figure 1. This is done because it was found that we obtain better reconstruction results at higher energy. We will do the same for PID in the future, but the results shown in this paper for PID do not include those additional IceCube strings.
We elect to only use DOMs that are below a particular dust layer, at about 2000 to 2100 meters deep, that reduces the optical quality of the ice [7]. This creates the complication that the string images constructed from DeepCore strings will have more rows than the images constructed from IceCube strings, since there are more DOMs below the dust layer on the DeepCore strings
\begin{table}
\begin{tabular}{|c||c|c|} \hline & tracks & cascades \\ \hline \hline PID & 4,495,634 & 2,835,831 \\ \hline inelasticity & 4,827,397 & 0 \\ \hline \end{tabular}
\end{table}
Table 1: Composition of Monte Carlo data used for training the CNN. A track event is a muon neutrino / antineutrino CC interaction. Everything else is labelled a cascade event.
than on the IceCube strings [1]. Thus we pad the IceCube string images with empty DOM rows in such a way that IceCube string DOMs and DeepCore string DOMs with similar depths will be in the same row in the string images. We then stack the DeepCore string images and padded IceCube string images together to feed to the CNN.
### Neural Network Architecture and Training
Our CNN includes four convolutional blocks followed by a two-layer multi-layer perceptron (MLP). Each convolutional block includes two 2-dimensional convolution layers with rectified linear unit (ReLU) activation functions. Then we apply a max pooling layer to the output of the convolutional layers. For the MLP we use a dropout rate of 25% to help regularize the CNN. A diagram of the neural network architecture is shown in Figure 3.
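A sketch of this architecture in PyTorch is given below; the channel counts, kernel sizes, and MLP width are our own illustrative choices, not values quoted in this paper:

```python
import torch.nn as nn

def conv_block(c_in, c_out):
    """One block: two Conv2d+ReLU layers followed by max pooling."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
    )

class DeepCoreCNN(nn.Module):
    """Four convolutional blocks followed by a two-layer MLP with 25% dropout.
    `in_channels` would be the number of stacked string images; channel counts
    and the hidden width are illustrative, not the paper's."""
    def __init__(self, in_channels, n_outputs, flat_features, hidden=256):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(in_channels, 32), conv_block(32, 64),
            conv_block(64, 128), conv_block(128, 128),
        )
        self.mlp = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.25),
            nn.Linear(flat_features, hidden), nn.ReLU(),
            nn.Dropout(0.25), nn.Linear(hidden, n_outputs),
        )

    def forward(self, x):
        return self.mlp(self.features(x))
```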
For the inelasticity CNN, we have a single output that is passed through a sigmoid activation function after the MLP. Inelasticity is bounded between 0 and 1, so the sigmoid activation function ensures that the output of the CNN upholds these bounds.
For PID, we have two outputs: a label for the presence of a significant muon track and a label for a significant hadronic cascade. We use these labels to represent the fact that some events are dominated by the hadronic cascade, some are dominated by the muon track, and some events have a significant cascade and track. We use true inelasticity as a metric for when we identify an event as having a significant track and cascade. Table 2 shows how each event is assigned its labels. The cuts in the table are determined by training many networks with different cuts and picking the
configuration with the best performance.
For training, we use the Adam algorithm [8] for optimization with a learning rate of 0.0001. We stop the training when the validation loss achieves a minimum and starts to increase upon following epochs.
For the inelasticity CNN, we use the following L1 loss:
\[Loss_{inelasticity}=|y_{reconstructed}-y_{true}|. \tag{1}\]
This choice was made because we find that we achieve better performance at higher energies with this loss compared to L2 loss. For the PID CNN we use the binary cross entropy (BCE) loss applied to both outputs:
\[Loss_{PID}=0.7\times BCE(output_{track};label_{track})+BCE(output_{cascade};label _{cascade}). \tag{2}\]
The binary cross entropy loss for the track label is weighted by a factor of 0.7, which was found to improve performance via a grid scan.
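Assuming the CNN emits raw logits for the two PID labels (a detail not stated explicitly here), Eqs. (1) and (2) could be implemented as:

```python
import torch
import torch.nn.functional as F

def inelasticity_loss(y_reco, y_true):
    """Eq. (1): L1 loss; y_reco is the sigmoid-bounded network output."""
    return torch.mean(torch.abs(y_reco - y_true))

def pid_loss(track_logit, cascade_logit, track_label, cascade_label):
    """Eq. (2): BCE on both labels with the track term weighted by 0.7.
    Raw logits are assumed; the logits variant folds in the sigmoid."""
    bce_track = F.binary_cross_entropy_with_logits(track_logit, track_label)
    bce_cascade = F.binary_cross_entropy_with_logits(cascade_logit, cascade_label)
    return 0.7 * bce_track + bce_cascade
```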
\begin{table}
\begin{tabular}{|c|c||c|c|} \hline \multicolumn{2}{|c||}{Muon Track label} & \multicolumn{3}{c|}{Hadronic Cascade label} \\ \hline \hline label & condition & label & condition \\ \hline track & \(\nu_{\mu}\) CC events & cascade & \(\nu_{\mu}\) NC events, \(\nu_{e}\) events, \(\nu_{\mu}\) CC events w/ \(y>0.8\) \\ \hline no track & \(\nu_{\mu}\) NC events, \(\nu_{e}\) events & no cascade & \(\nu_{\mu}\) CC events w/ \(y<0.8\) \\ \hline \end{tabular}
\end{table}
Table 2: The conditions used to label each event. Each event has a muon track and hadronic cascade label to indicate if there is a visible muon track or hadronic cascade.
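A small helper capturing the Table 2 labeling rules might look like this; the string encodings for flavor and interaction type are hypothetical:

```python
def assign_labels(flavor, current, y_true):
    """Table 2 labels. `flavor` in {'numu', 'nue'} and `current` in
    {'CC', 'NC'} are hypothetical encodings; y_true is the true inelasticity.
    Returns (track, cascade) as 0/1 flags."""
    is_numu_cc = (flavor == "numu") and (current == "CC")
    track = 1 if is_numu_cc else 0
    # numu CC events only get a cascade label when the cascade dominates
    cascade = 1 if (not is_numu_cc or y_true > 0.8) else 0
    return track, cascade
```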
Figure 3: Architecture of our 2D CNN. The "out" at the end represents whatever output one might need; for this paper it would be inelasticity or a track or cascade confidence.
## 3 Results and Discussion
### Particle Identification
We find that the track and cascade labels provide very similar outputs, and so we only show here the output of the track label. However, we find that using this labelling scheme and loss gives better results than just using the cross entropy loss, which is why we keep it regardless of the unintended behavior.
For Figure 4 and Figure 5, we use all muon, electron, and tau neutrino / antineutrino Monte Carlo (MC) events that pass the data selection criteria outlined in Section 2.1. After checking the normalized PID distributions for the test set and the training set, we find they are nearly identical, suggesting that overfitting has been well-regulated.
Figure 4 shows the distribution of PID outputs for the set of MC events described previously. There are three notable features of this distribution: the spike in track events near a PID score of 1, the collection of events around a PID score of 0.5, and the cascade-rich tail below a PID score of 0.2. The spike in track events near a PID score of 1 and the central collection of cascade and track events are common features of other PID reconstruction algorithms for the DeepCore detector [9, 10]. The collection of cascade and track events around a PID score of 0.4 to 0.5 and the lack of a spike of cascade events around a PID score of 0 reflect the fact that at low energies many track events look like cascades, and only when there is a definite muon track can we be confident in a particle's identity. The cascade-rich tail is a feature that is not seen in similar reconstruction methods, and suggests that the CNN has started to find features in the data unique to cascade events.
Figure 5 shows the receiver operating characteristic curves for our CNN and a boosted decision tree (BDT) that is used in another analysis [9]. This plot demonstrates that the CNN method is
better at identifying tracks and cascades than the BDT based method, which relies on high level reconstructed inputs.
### Inelasticity
Figure 6a is a two-dimensional histogram of events binned by the true inelasticity and the inelasticity from CNN reconstruction. An ideal reconstruction would have all events in the bins along the dashed black line. There are two populations of events visible here. The flat distribution consists of events that are hard to reconstruct, so the network learns to assign them roughly the mean inelasticity. The second population of events can be seen faintly in the upper-right corner of the plot. These are events that are being reconstructed more effectively.
A good proxy for the difficulty of reconstruction is the energy of the event. Lower energy events will have fewer photon hits and hence lower resolution. Thus, we can make cuts in energy to separate the two populations.
Figure 6b, Figure 6c, and Figure 6d are histograms similar to Figure 6a, but with the events split
Figure 6: MC events binned by true inelasticity and inelasticity reconstructed by the CNN. The dashed black line indicates where events would be assuming a perfect reconstruction. Plot (a) contains all events in the test set, whereas (b), (c), and (d) show events separated by different true energy ranges.
into three populations based on energy (the specific energy ranges are shown in the plots). Figure 6b clearly shows the first population of events without the second. There is a slight asymmetry in the reconstructed inelasticity distribution which can be used to slightly separate neutrinos from antineutrinos. To quantify this separation power, we separate events with true energies of 3 GeV to 20 GeV into two bins, \(y_{reco}>0.32\) and \(y_{reco}<0.32\). Then we calculate the ratio of antineutrino events to all events in each bin, finding ratios of 0.34 and 0.31, respectively.
Figure 6c shows a definite improvement in reconstruction when compared to Figure 6b, and Figure 6d shows even better reconstruction than Figure 6c. This is expected because the larger the energy of the event, the more DOMs will detect photons, and thus the easier it is to resolve the event morphology. Figure 6c also shows that we can push the inelasticity reconstruction down to roughly 30 GeV.
## 4 Conclusion
We have developed a new 2D CNN architecture, as well as a new method for preparing IceCube DeepCore events, that is effective for reconstruction tasks in IceCube DeepCore. We also show that when this CNN is applied to PID reconstruction it outperforms a BDT-based method used in other DeepCore analyses, and when we apply the CNN to inelasticity reconstruction we can effectively measure inelasticity down to 30 GeV and gain some neutrino-antineutrino separating power below 20 GeV. It is likely that this CNN will be developed further, perhaps with different loss functions or more optimized architectures. We will also incorporate the IceCube strings surrounding DeepCore into the PID CNN in the near future. We then hope to apply this CNN to inelasticity-based studies and/or matter-effect-dependent oscillation studies, where separating neutrinos from antineutrinos can greatly benefit sensitivity [5].
|
2309.09700 | Securing Fixed Neural Network Steganography | Image steganography is the art of concealing secret information in images in
a way that is imperceptible to unauthorized parties. Recent advances show that
it is possible to use a fixed neural network (FNN) for secret embedding and
extraction. Such fixed neural network steganography (FNNS) achieves high
steganographic performance without training the networks, which could be more
useful in real-world applications. However, the existing FNNS schemes are
vulnerable in the sense that anyone can extract the secret from the
stego-image. To deal with this issue, we propose a key-based FNNS scheme to
improve the security of the FNNS, where we generate key-controlled
perturbations from the FNN for data embedding. As such, only the receiver who
possesses the key is able to correctly extract the secret from the stego-image
using the FNN. In order to improve the visual quality and undetectability of
the stego-image, we further propose an adaptive perturbation optimization
strategy by taking the perturbation cost into account. Experimental results
show that our proposed scheme is capable of preventing unauthorized secret
extraction from the stego-images. Furthermore, our scheme is able to generate
stego-images with higher visual quality than the state-of-the-art FNNS scheme,
especially when the FNN is a neural network for ordinary learning tasks. | Zicong Luo, Sheng Li, Guobiao Li, Zhenxing Qian, Xinpeng Zhang | 2023-09-18T12:07:37Z | http://arxiv.org/abs/2309.09700v1 | # Securing Fixed Neural Network Steganography
###### Abstract.
Image steganography is the art of concealing secret information in images in a way that is imperceptible to unauthorized parties. Recent advances show that it is possible to use a fixed neural network (FNN) for secret embedding and extraction. Such fixed neural network steganography (FNNS) achieves high steganographic performance without training the networks, which could be more useful in real-world applications. However, the existing FNNS schemes are vulnerable in the sense that anyone can extract the secret from the stego-image. To deal with this issue, we propose a key-based FNNS scheme to improve the security of the FNNS, where we generate key-controlled perturbations from the FNN for data embedding. As such, only the receiver who possesses the key is able to correctly extract the secret from the stego-image using the FNN. In order to improve the visual quality and undetectability of the stego-image, we further propose an adaptive perturbation optimization strategy by taking the perturbation cost into account. Experimental results show that our proposed scheme is capable of preventing unauthorized secret extraction from the stego-images. Furthermore, our scheme is able to generate stego-images with higher visual quality than the state-of-the-art FNNS scheme, especially when the FNN is a neural network for ordinary learning tasks.
Steganography, Fixed neural network, Key-controlled perturbation
Since the FNN is publicly available, an attacker who intercepts the stego-image can easily extract the secret from it. To address this issue, an intuitive approach is to encrypt the secret before data embedding. The problem is that DNN-based steganographic schemes cannot guarantee lossless data extraction due to the uncertainty of the neural networks. A single bit of extraction error in the cipher text would cause a failure in decryption.
In this paper, we propose a key-based FNNS to improve the security of the existing FNNS schemes. Instead of directly encrypting the secret, we propose to use a key to control the generation of the adversarial perturbations for data embedding using a FNN. Once the stego-image is generated, only the receiver who possesses the correct key is able to perform correct secret decoding, as shown in Fig.1. To improve the visual quality and undetectability of the stego-images, we propose to estimate the perturbation cost and incorporate it into the design of the loss function to adaptively learn the perturbation from the FNN for data embedding. In particular, pixels with high perturbation costs will be assigned low perturbation strength. Experimental results demonstrate the advantage of our scheme for preventing unauthorized data extraction. Furthermore, our scheme offers higher visual quality and undetectability than the state-of-the-art FNNS scheme, especially when using FNNs which work on ordinary learning tasks.
The main contributions of this paper are summarized as follows.
1. We are the first to look into the vulnerability of the existing FNNS schemes and propose a key-based FNNS to prevent unauthorized secret extraction from the stego-image.
2. We propose a key-based perturbation generation strategy by encrypting the stego-image before feeding it into the FNN for secret decoding.
3. We propose to estimate the perturbation cost of each image pixel, which is incorporated into the loss function to generate adaptive perturbation for data embedding. This is shown to be able to significantly improve the visual quality and undetectability of the stego-images.
## 2. Related Works
### Traditional image steganography
Traditional image steganography designs hand-crafted schemes to modify the cover image for data embedding, which can be divided into two categories including spatial domain-based steganography (Sutskever et al., 2016; Krizhevsky et al., 2015; Krizhevsky et al., 2016; Krizhevsky et al., 2017) and transform domain-based steganography(Krizhevsky et al., 2016; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). The former directly alters the pixel values in the spatial domain, while the latter changes the coefficients of the cover image in the transform domain to accommodate the secret.
To improve the undetectability of the stego-images, researchers propose adaptive image steganography, which can be applied to perform data embedding in the spatial or transformed domain (Krizhevsky et al., 2016; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). The most popular framework for adaptive steganography is the syndrome-trellis codes (STCs) steganographic framework (Krizhevsky et al., 2016), which is able to achieve minimum distortion caused by data embedding. To effectively estimate the distortion, Pevny _et al._(Pevny et al., 2016) propose HUGO to measure the distortion for spatial domain-based steganography. Li _et al._(Li et al., 2017) introduce HILL, which exploits both a high-pass filter and low-pass filters to focus more on the texture area for data embedding. Kin-Cleaves _et al._(Kin-Cleaves et al., 2016) propose Dual-Syndrome Trellis Codes (Dual-STCs) to improve the robustness of steganography. On top of this, Guan _et al._(Guan et al., 2016) propose a novel coding scheme that extends Dual-STCs to a double-layered embedding scheme which leverages the channel knowledge for data embedding. The
Figure 1. The application scenario of our proposed method. Alice (sender) and Bob (receiver) download a FNN from a DNN model repository. Alice uses this FNN to generate a stego-image with a key and sent it to Bob through a public communication channel. Bob can extract the secret from the stego-image using the FNN with the corresponding key. Eve (attacker) may eavesdrop on the channel to intercept the stego-image. However, he is not able to extract the secret without knowing the key.
capacity of these schemes is usually limited to ensure high undetectability.
### DNN-based image steganography
DNN-based image steganography trains a secret encoder for embedding secret into a cover image and a secret decoder for data extraction from a stego-image, which is shown to be promising to improve the performance of steganography.
Zhu _et al.[(38)]_ pioneer the research for DNN-based image steganography, where an end-to-end autoencoder is proposed for data embedding. This is further improved by SteganoGAN [(37)], which is able to achieve a payload of up to 6 bits per pixel (BPP). Tancik _et al.[(27)]_ incorporate the image printing and recapturing process in the encoder-decoder to enhance the performance of the secret decoder, which is robust against the attacks caused due to printing and recapturing. Baluja[(3)] proposes a DNN which is able to hide a color image into another color image. Wei _et al.[(32)]_ utilize generative adversarial networks (GANs)[(9)] to directly generate stego-images from secrets without using a cover image. Recently, researchers attempt to conduct data embedding using invertible networks [(11; 15; 22; 34)], which treat the data embedding and extraction as a pair of inverse problems to achieve a high data embedding capacity.
These schemes require training the steganographic networks on a large dataset. To avoid training, a few studies have been explored for Fixed Neural Network Steganography (FNNS) which does not require any training for data embedding and extraction. This is achieved by adding adversarial perturbations into a cover image to generate a stego-image which is able to produce some specific outputs corresponding to the secret. Ghamizi _et al.[(8)]_ produce the stego-images by encoding the secret into image labels to generate the perturbation, the capacity of which is rather limited. Kishore _et al.[(17)]_ propose to generate perturbations according to a message loss to produce the stego-image, which significantly improves the data embedding capacity compared with the work in [(8)].
Despite the advantage, the existing FNNS schemes are weak in securing the secret embedded in the stego-image. The attackers can extract the secret from the stego-images using the FNN or a surrogate network. On the other hand, it remains unanswered how we could generate a perturbation that minimizes the distortion caused by data embedding. To address these two issues, we propose in this paper to generate key-controlled and adaptive adversarial perturbations for FNNS. The former makes sure that the secret can only be extracted from the stego-image using a correct key, while the latter adaptively changes the perturbation strength for different pixels to improve the visual quality and undetectability of the stego-image.
## 3. Problem Formulation
Given a FNN \(\mathcal{F}\), a cover image \(\mathcal{X}_{cover}\), a secret \(\mathcal{M}\) and different keys \(\mathcal{K}\) and \(\mathcal{K}_{err}\), our goal is to generate a stego-image by
\[\mathcal{X}_{stego}=\mathcal{X}_{cover}+\Delta(\mathcal{F},\mathcal{X}_{cover },\mathcal{K},\mathcal{M},\Phi),\]
where \(\Phi\) is an image encryption operation, \(\Delta\) refers to our key controlled and adaptive perturbation generation scheme. Tab. 1 summarizes the notations used in this paper.
The stego-image should have the least distortion compared with the cover image and has to satisfy the following properties.
Property 1.: _We should be able to extract the secret from the stego-image using the FNN and the correct key, i.e.,_
\[\mathcal{F}(\Phi(\mathcal{X}_{stego},\mathcal{K}))=\mathcal{M}. \tag{1}\]
Property 2.: _We should not be able to extract the secret from the stego-image by only using the FNN, i.e.,_
\[\mathcal{F}(\mathcal{X}_{stego})\neq\mathcal{M}. \tag{2}\]
Property 3.: _We should not be able to extract the secret from the stego-image by using the FNN and a wrong key, i.e.,_
\[\mathcal{F}(\Phi(\mathcal{X}_{stego},\mathcal{K}_{err}))\neq\mathcal{M}. \tag{3}\]
## 4. The Proposed Method
The overall structure of our proposed method is illustrated in Fig.2. Given an RGB cover image \(\mathcal{X}_{cover}\in[0,1]^{C\times H\times W}\), where \(C\), \(H\), and \(W\) refer to the channel, height, and width of the image, respectively, we propose to generate key-controlled and adaptive perturbations using \(\mathcal{F}\) based on \(\mathcal{K}\) and \(\mathcal{M}\in\{0,1\}^{D\times H\times W}\), with \(D\) being the number of bits per pixel to be hidden. We propose to encrypt the stego-image using \(\mathcal{K}\) before optimizing the perturbations, and we design three different types of decoding loss terms to satisfy the three requirements listed in the previous section. To achieve the minimum data embedding distortion, we propose an adaptive image distortion loss by taking the perturbation cost for each pixel (in \(\mathcal{X}_{cover}\)) into consideration. The adaptive image distortion loss and the three types of decoding losses are combined to generate \(\mathcal{X}_{stego}\) by adaptively adding the perturbations into \(\mathcal{X}_{cover}\).
### Stego-image Encryption
We adopt a simple and straightforward way to encrypt the stego-image by adding each element in the stego-image with a random number. The encrypted version of the stego-image is computed by
\[\mathcal{X}_{stego}^{\mathcal{K}}=\Phi(\mathcal{X}_{stego},\mathcal{K})= \mathcal{X}_{stego}\oplus Rand(\mathcal{K}), \tag{4}\]
where \(\oplus\) is the element-wise addition, and \(Rand\) is a function that generates a random matrix with elements in \([-1,1]\) seeded by \(\mathcal{K}\).
\begin{table}
\begin{tabular}{c|l} \hline Notation & Description \\ \hline \(\mathcal{X}_{cover}\) & the cover image \\ \hline \(\mathcal{X}_{stego}\) & the stego-image \\ \hline \(\mathcal{F}\) & the FNN \\ \hline \(\mathcal{M}\) & the secret message \\ \hline \(\Phi\) & the process of image encryption \\ \hline \(\delta\) & the perturbation added to the cover image \\ \hline \(\mathcal{K}\) & the correct key \\ \hline \(\mathcal{K}_{err}\) & the wrong key \\ \hline \(\mathcal{W}\) & the perturbation cost matrix \\ \hline \end{tabular}
\end{table}
Table 1. Notation.
Note that the dimension of the random matrix should be the same as that of the stego-image.
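A minimal sketch of Eq. (4) in PyTorch, assuming the key is an integer used to seed a pseudo-random generator (the paper does not specify the generator), might be:

```python
import torch

def encrypt(stego, key):
    """Eq. (4): add a key-seeded random matrix with entries in [-1, 1],
    shaped like the stego-image."""
    gen = torch.Generator().manual_seed(key)
    noise = torch.rand(stego.shape, generator=gen) * 2.0 - 1.0
    return stego + noise
```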
### Perturbation Cost Estimation
To enhance the visual quality and undetectability of the stego-images, we adopt a distortion function similar to those used in existing adaptive image steganographic schemes to estimate a perturbation cost for each pixel in \(\mathcal{X}_{cover}\). The distortion function we use here is motivated by HILL (Li et al., 2017), which employs a high-pass filter \(F_{h}\) and two low-pass filters \(F_{l}^{1},F_{l}^{2}\) to measure the distortion caused by data embedding. The distortion function is formulated below
\[\mathcal{W}=\frac{1}{|\mathcal{X}_{cover}\otimes F_{h}|\otimes F_{l}^{1}}\otimes F_{l}^{2}, \tag{5}\]
where \(F_{l}^{1}\) and \(F_{l}^{2}\) are average filters with the size of \(3\times 3\) and \(15\times 15\), \(\bigotimes\) refers to the convolution operation, and \(F_{h}\) is designed as
\[F_{h}=\begin{bmatrix}-1&2&-1\\ 2&-4&2\\ -1&2&-1\end{bmatrix}. \tag{6}\]
To make the perturbation cost suitable for optimizing the adaptive perturbations, \(\mathcal{W}\) is further truncated and processed by
\[\mathcal{W}=\begin{cases}T&\text{if}\ \mathcal{W}>t\\ \mathcal{W}&\text{otherwise}\end{cases}. \tag{7}\]
The perturbation cost \(\mathcal{W}\) measures the cost of perturbing each pixel in the cover image. Pixels with high perturbation costs are not suitable to be changed for data embedding; these correspond to smooth areas of the image. During the optimization, the perturbation strength should be low on such pixels. On the contrary, we can apply perturbations with high strength to pixels with low perturbation costs.
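The cost of Eqs. (5)-(7) could be sketched as below; applying the filters per channel and the small epsilon guarding the division are our assumptions:

```python
import torch
import torch.nn.functional as F

def perturbation_cost(cover, t=0.5, T=3.0, eps=1e-8):
    """Eqs. (5)-(7): HILL-style perturbation cost. `cover` is a
    (N, C, H, W) tensor in [0, 1]; the filters are applied per channel
    and `eps` guards the division, both of which are our assumptions."""
    f_h = torch.tensor([[-1., 2., -1.],
                        [ 2., -4., 2.],
                        [-1., 2., -1.]]).view(1, 1, 3, 3)
    f1 = torch.ones(1, 1, 3, 3) / 9.0      # 3x3 average filter
    f2 = torch.ones(1, 1, 15, 15) / 225.0  # 15x15 average filter
    x = cover.reshape(-1, 1, *cover.shape[2:])        # one channel at a time
    hi = F.conv2d(x, f_h, padding=1).abs()            # |X (x) F_h|
    w = 1.0 / (F.conv2d(hi, f1, padding=1) + eps)     # 1 / (|.| (x) F_l^1)
    w = F.conv2d(w, f2, padding=7)                    # smooth with F_l^2
    w = torch.where(w > t, torch.full_like(w, T), w)  # truncation, Eq. (7)
    return w.reshape(cover.shape)
```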
### Loss Functions
We design four loss functions including adaptive image distortion loss, Type-I decoding loss, Type-II decoding loss, and Type-III decoding loss. The adaptive image distortion loss aims to preserve the quality of the stego-image according to the perturbation cost. The three decoding losses are used to guarantee the generation of key control perturbations.
**Adaptive image distortion loss.** The adaptive image distortion loss is formulated by
\[\mathcal{L}_{d}(\mathcal{X}_{cover},\mathcal{X}_{stego},\mathcal{W})=\sqrt{\mathcal{W}\cdot(\mathcal{X}_{cover}-\mathcal{X}_{stego})^{2}}, \tag{8}\]
where \(\cdot\) denotes the element-wise product operation. This is a weighted L2 distance between the cover image and the stego-image. By using such a loss, we shall be able to learn an adaptive perturbation during optimization, which focuses more on the pixels with low perturbation costs.
**Type-I decoding loss** We design a Type-I decoding loss below to satisfy Property 1 mentioned in Section 3.
\[\mathcal{L}_{I}(\mathcal{X}_{stego},\mathcal{K},\mathcal{M})=L_{BCE}(\mathcal{ F}(\Phi(\mathcal{X}_{stego},\mathcal{K})),\mathcal{M}), \tag{9}\]
where \(L_{BCE}\) is the binary cross-entropy loss. This loss makes sure that, when the correct key is used to encrypt the stego-image, we are able to recover the secret by feeding the encrypted version of the stego-image into the FNN.
**Type-II decoding loss.** We design a Type-II decoding loss below to satisfy Property 2 mentioned in Section 3.

\[\mathcal{L}_{II}(\mathcal{X}_{stego},\mathcal{M})=L_{BCE}(\mathcal{F}(\mathcal{X}_{stego}),\mathcal{M}). \tag{10}\]

This loss is used to prevent the attacker from decoding the message by feeding the stego-image into the FNN directly, without any key.

**Type-III decoding loss.** We design a Type-III decoding loss below to satisfy Property 3 mentioned in Section 3.

\[\mathcal{L}_{III}(\mathcal{X}_{stego},K_{err},\mathcal{M})=L_{BCE}(\mathcal{F}(\Phi(\mathcal{X}_{stego},K_{err})),\mathcal{M}). \tag{11}\]

This loss is used to prevent the attacker from using a wrong key for message decoding. To well simulate the scenario in which the
Figure 2. An overview of the proposed key-based FNNS.
user uses several randomly guessed keys for message decoding, we generate a wrong key set \(\mathcal{K}_{err}\) containing \(N\) distinct keys which are different from \(\mathcal{K}\). We accumulate the Type-III decoding loss for each wrong key by
\[\mathcal{L}_{III}(\mathcal{X}_{stego},\mathcal{K}_{err},\mathcal{M})=\sum_{n=1}^ {N}L_{BCE}(\mathcal{F}(\Phi(X_{stego},K_{err}^{n})),\mathcal{M}), \tag{12}\]
where \(K_{err}^{n}\) denotes the \(n^{th}\) key in \(\mathcal{K}_{err}\).
**Total Loss.** The overall loss function is a weighted sum among the aforementioned losses, which is given by
\[\mathcal{L}_{Total}=\lambda_{d}\mathcal{L}_{d}+\lambda_{I}\mathcal{L}_{I}- \lambda_{II}\mathcal{L}_{II}-\lambda_{III}\mathcal{L}_{III}, \tag{13}\]
where \(\lambda_{d}\), \(\lambda_{I}\), \(\lambda_{II}\) and \(\lambda_{III}\) are the weights for balancing different loss terms.
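Putting Eqs. (8)-(13) together, a sketch of the total loss (reusing the `encrypt` helper above, and assuming `fnn` outputs per-pixel message logits and that the square root in Eq. (8) is taken over the summed weighted squared error) might be:

```python
import torch
import torch.nn.functional as F

def total_loss(fnn, cover, stego, W, msg, key, wrong_keys,
               lam=(40.0, 5.0, 0.05, 0.05)):
    """Eqs. (8)-(13). `fnn` maps an image to per-pixel message logits."""
    lam_d, lam_1, lam_2, lam_3 = lam
    # Eq. (8): perturbation-cost-weighted L2 distortion
    l_d = torch.sqrt((W * (cover - stego) ** 2).sum())
    # Eq. (9): decode correctly with the right key
    l_1 = F.binary_cross_entropy_with_logits(fnn(encrypt(stego, key)), msg)
    # Eq. (10): fail to decode without any key
    l_2 = F.binary_cross_entropy_with_logits(fnn(stego), msg)
    # Eq. (12): fail to decode with each of the N wrong keys
    l_3 = sum(F.binary_cross_entropy_with_logits(fnn(encrypt(stego, k)), msg)
              for k in wrong_keys)
    # Eq. (13): the Type-II/III terms enter with negative weights
    return lam_d * l_d + lam_1 * l_1 - lam_2 * l_2 - lam_3 * l_3
```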
### Optimization strategy
Our goal is to optimize the following problem:
\[min \mathcal{L}_{Total}, \tag{14}\] \[s.t. 0\leq\mathcal{X}_{stego}\leq 1.\]
Researchers have proposed a lot of optimization approaches to solve this problem by generating adversarial perturbations. Note that we have to quantize the stego-images for real-world applications. The quantization will reduce the visual quality of the stego-image. It would also make the output of the FNN cross the decision boundary, which causes a reduction in message decoding accuracy. We propose a two-stage optimization strategy to alleviate the negative impact of quantization on steganographic performance. In the first stage, we carry out the optimization by only using the adaptive image distortion loss and Type-I decoding loss. In the second stage, we conduct the optimization of the total loss. As such, the output of the network would repeatedly cross the decision boundary during the optimization process to achieve stable performance.
The details of the optimization process are given in Algorithm 1, where we use L-BFGS(Bordes and Riedler, 2017) as the main optimization approach. For each iteration, we quantize each element in each of the RGB channels of the stego-image into 255 levels to learn perturbations that are effective on quantized RGB images.
```
Input:  F; X_cover; W; M; K; K_err
Output: X_stego
Parameters: α: learning rate; E: number of iterations;
            st1: number of L-BFGS steps in the first stage;
            st2: number of L-BFGS steps in the second stage;
            λ_d, λ_I, λ_II, λ_III
Freeze F                                 ▷ Freeze the decoder network parameters
X_stego ← X_cover                        ▷ Initialize
for i = 1 to E do
    L1_Total ← λ_d·L_d + λ_I·L_I
    δ ← L-BFGS(L1_Total, α, st1)         ▷ Take st1 steps to optimize L1_Total
    L2_Total ← λ_d·L_d + λ_I·L_I − λ_II·L_II − λ_III·L_III
    δ ← L-BFGS(L2_Total, α, st2)         ▷ Take st2 steps to optimize L2_Total
    X_stego ← clip_[0,1](X_cover + δ)    ▷ Clip the image to [0,1]
    X_stego ← quantize(X_stego)          ▷ Quantization
end for
return X_stego
```
**Algorithm 1** Key-based FNNS
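A PyTorch sketch of Algorithm 1, reusing the `total_loss` above and obtaining the first-stage loss by zeroing \(\lambda_{II}\) and \(\lambda_{III}\), could look as follows:

```python
import torch

def key_based_fnns(fnn, cover, W, msg, key, wrong_keys, E=100, st=15, lr=0.1):
    """Sketch of Algorithm 1 with per-iteration quantization."""
    for p in fnn.parameters():
        p.requires_grad_(False)                        # freeze the FNN
    delta = torch.zeros_like(cover, requires_grad=True)
    stage_weights = [(40.0, 5.0, 0.0, 0.0),            # stage 1: L_d and L_I only
                     (40.0, 5.0, 0.05, 0.05)]          # stage 2: total loss
    for _ in range(E):
        for lam in stage_weights:
            opt = torch.optim.LBFGS([delta], lr=lr, max_iter=st)
            def closure():
                opt.zero_grad()
                stego = (cover + delta).clamp(0.0, 1.0)
                loss = total_loss(fnn, cover, stego, W, msg, key, wrong_keys, lam)
                loss.backward()
                return loss
            opt.step(closure)
        with torch.no_grad():                          # quantize every iteration
            q = ((cover + delta).clamp(0.0, 1.0) * 255.0).round() / 255.0
            delta.copy_(q - cover)
    return ((cover + delta).clamp(0.0, 1.0) * 255.0).round() / 255.0
```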
## 5. Experiments
### Setup
**Datasets and DNN models.** To evaluate the effectiveness of our proposed method, we conduct experiments on three different datasets, namely MS-COCO(Shi et al., 2017), Div2k(Dai et al., 2018), and CelebA(Miyi et al., 2018). To prepare the data for our experiments, we follow different procedures for different datasets. For Div2k, which is a high-quality dataset consisting of diverse images, we use the entire validation set as our cover images. For MS-COCO and CelebA, which are large-scale datasets for natural scenes and human faces, we randomly select 100 images as our cover images. For all the datasets, we resize the images to a fixed resolution of \(256\times 256\) to ensure consistency and efficiency. To generate the secret and the key, we use a random function to assign each bit with equal probability. Specifically, we generate each bit from a Bernoulli distribution with a parameter of 0.5, meaning that each bit has a 50% chance of being 0 or 1. As such, the secret and the key are uniformly distributed and independent. We use a SteganoGAN(Shi et al., 2017) model pre-trained on the corresponding dataset as the FNN. SteganoGAN is a popular image steganography tool designed using generative adversarial networks.
**Parameters and Evaluation Metrics.** We set the learning rate \(\alpha\) as 0.10, the number of iterations \(E\) as 100, and the number of the two-stage L-BFGS optimizations as 15 for both \(st_{1}\) and \(st_{2}\). The parameters for balancing different loss terms are set as \(\lambda_{d}=40\), \(\lambda_{I}=5\), \(\lambda_{II}=0.05\) and \(\lambda_{III}=0.05\), respectively. The number of wrong keys is set as \(N=3\) in the wrong key set \(\mathcal{K}_{err}\). The \(t\) and \(T\) for computing the perturbation cost are set as 0.5 and 3, respectively. We use three widely used metrics to evaluate the performance of our method, including bit error rate (BER), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). The BER measures the accuracy of the secret extraction, while the PSNR and SSIM measure the visual quality of stego-images.
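For concreteness, the BER and PSNR metrics can be computed as below (the threshold-at-zero decoding rule for the FNN logits is an assumption):

```python
import torch

def bit_error_rate(logits, msg):
    """BER: fraction of wrongly decoded bits, thresholding logits at zero."""
    return ((logits > 0).float() != msg).float().mean().item()

def psnr(cover, stego):
    """PSNR in dB for images scaled to [0, 1]."""
    mse = torch.mean((cover - stego) ** 2)
    return (10.0 * torch.log10(1.0 / mse)).item()
```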
### Comparisons
We conduct quantitative comparisons between our proposed method and the state-of-the-art FNNS scheme proposed in (Kang et al., 2017) (termed FNNS for short). We follow the default settings according to the source code of FNNS for implementation, where we use a pre-trained SteganoGAN model (Shi et al., 2017) as the FNN. Tab.2 reports the performance of the different schemes in terms of BER, PSNR, and SSIM, where we use the same key for secret embedding and extraction with our proposed method. Compared with the FNNS, our scheme achieves a similar BER, which is close to zero when the payload is less than 3BPP. Our method outperforms the FNNS in terms of PSNR and SSIM in almost all cases. On Div2k, the PSNR and SSIM of our proposed method are significantly higher than those of the FNNS, with increments of 6dB in PSNR and 0.2 in SSIM, respectively. This indicates the effectiveness of our adaptive perturbation
learning strategy by incorporating the perturbation cost during the optimization. We also observe that both our proposed method and the FNNS perform better than the pre-trained SteganoGAN, which indicates the advantage of the FNNS-based schemes.
Tab.3 further gives the performance of our proposed scheme when the attackers do not know the correct keys. We can see that the BER is around 30% (random guessing would give 50%) for the no-key and wrong-key cases at different payloads and datasets. That is to say, the attackers are not able to correctly extract the secret from our stego-images if they only know the FNN.
Fig. 3 illustrates some stego-images generated using our proposed method and the FNNS. It can be seen that our stego-images are visually similar to the corresponding cover images. Compared with the FNNS, our method is able to adaptively adjust the perturbation strength according to the image content, where more perturbations are learnt for texture areas. This further demonstrates the advantage of our proposed scheme over the FNNS in terms of visual quality.
### Performance on non-steganographic DNN Models
In this section, we evaluate the performance of our proposed method on DNN models which perform ordinary learning tasks. Concretely, we download two non-steganographic DNN models from a public model repository, DnCNN(Wang et al., 2017) and FFDNet(Wang et al., 2018), which are pre-trained for image denoising. We use these two image-denoising DNN models as the FNNs for image steganography. Tab.4 gives the comparison between our proposed method and the FNNS on these two DNN models. It can be seen that our scheme performs significantly better than the FNNS in terms of PSNR and SSIM. For both DNN models, the PSNR and SSIM of our stego-images are significantly higher than those generated using the FNNS, with over 15dB improvement in PSNR and over 0.6 increment in SSIM, while the BER of our proposed method remains acceptable. Therefore, our proposed method would be more useful for real-world applications when the sender and receiver do not possess any steganographic DNN models.
### Undetectability
One of the important criteria for measuring the steganographic performance is the undetectability of the stego-images against steganalysis tools, which are used to detect the existence of hidden secrets in an image. We follow the suggestion given in (Krizhevsky et al., 2014) to evaluate the undetectability, where StegExpose (Bengio et al., 2017) is adopted as the steganalysis tool. To measure the undetectability of our method, we randomly select 1000 cover images from the MS-COCO dataset and generate 1000 stego-images using FNNS and our proposed method at a payload of 1 BPP. Fig.4 plots the ROC curves for different schemes. It can be seen that the undetectability of our proposed method is better than that of the FNNS.
### Ablation Study
In this section, we conduct ablation studies to analyze the effects of different components in our method. In all the ablation studies, we use the CelebA dataset for evaluation and conduct a data embedding at a payload of 1BPP to generate the stego-images, where the Type-I decoding loss is always used.
**Effectiveness of perturbation cost.** In order to evaluate the effectiveness of perturbation cost, we replace the adaptive image distortion loss with the standard L2 loss in our proposed scheme
\begin{table}
\begin{tabular}{c|c|c c c c|c c c|c c c c} \hline \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{BER(\%)\(\downarrow\)} & \multicolumn{4}{c|}{PSNR(dB)\(\uparrow\)} & \multicolumn{4}{c}{SSIM\(\uparrow\)} \\ \cline{3-13} & & 1BPP & 2BPP & 3BPP & 4BPP & 1BPP & 2BPP & 3BPP & 4BPP & 1BPP & 2BPP & 3BPP & 4BPP \\ \hline \multirow{3}{*}{COCOCO} & SteganoGAN & 3.40 & 6.29 & 11.13 & 15.70 & 25.32 & 24.27 & 25.01 & 24.94 & 0.84 & 0.82 & 0.82 & 0.82 \\ & FNNS & 0.03 & 0.01 & **0.01** & 14.56 & 37.58 & 36.04 & 26.31 & **34.75** & 0.91 & 0.93 & 0.71 & **0.91** \\ & Ours & **2E-04** & **0.01** & 0.10 & **14.43** & **40.46** & **36.62** & **29.43** & 33.65 & **0.98** & **0.95** & **0.84** & 0.89 \\ \hline \multirow{3}{*}{Div2k} & SteganoGAN & 5.12 & 8.31 & 13.74 & 22.85 & 21.33 & 21.06 & 21.42 & 21.84 & 0.76 & 0.76 & 0.77 & 0.78 \\ & FNNS & **0.00** & **5E-04** & **0.07** & **2.21** & 25.96 & 21.41 & 18.68 & 18.68 & 0.79 & 0.60 & 0.38 & 0.38 \\ & Ours & 0.01 & 0.08 & 1.87 & 8.77 & **33.92** & **27.60** & **25.77** & **25.79** & **0.96** & **0.88** & **0.79** & **0.77** \\ \hline \multirow{3}{*}{CelebA} & SteganoGAN & 3.94 & 7.36 & 8.84 & 10.00 & 25.98 & 25.53 & 25.70 & 25.08 & 0.85 & 0.86 & 0.85 & 0.82 \\ & FNNS & **2E-06** & **5E-05** & **4E-04** & **2.40** & 34.43 & 34.48 & 30.98 & 30.79 & 0.83 & 0.87 & 0.80 & 0.75 \\ \cline{1-1} & Ours & 3E-04 & 2E-03 & 0.02 & 2.75 & **39.48** & **36.43** & **33.22** & **33.86** & **0.95** & **0.92** & **0.87** & **0.86** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Performance comparisons among different methods on different datasets. \(\uparrow\) indicates better, and vice versa. We mark the best-performing values in bold.
\begin{table}
\begin{tabular}{c|c c c c|c c c} \hline \hline \multirow{2}{*}{Datasets} & \multicolumn{4}{c|}{BER without key(\%)\(\uparrow\)} & \multicolumn{4}{c}{BER with wrong key(\%)\(\uparrow\)} \\ \cline{2-9} & 1BPP & 2BPP & 3BPP & 4BPP & 1BPP & 2BPP & 3BPP & 4BPP \\ \hline COCO & 29.77 & 24.81 & 13.64 & 28.98 & 34.32 & 30.51 & 18.86 & 32.62 \\ Div2k & 22.26 & 21.11 & 13.72 & 22.31 & 25.54 & 24.95 & 17.73 & 25.19 \\ CelebA & 30.65 & 31.61 & 29.53 & 27.98 & 33.76 & 35.19 & 33.64 & 31.97 \\ \hline \hline \end{tabular}
\end{table}
Table 3. The BER of the secret extracted using our method on different datasets without using a key or using wrong keys at different payloads.
and re-evaluate the performance. The results are shown in the second row of Tab.5, where we report various metrics including the PSNR, SSIM, and the BER using different keys. Compared with the numbers shown in the last row of Tab.5, we observe that using the perturbation cost improves the BER, PSNR, and SSIM, because it allows us to focus more on the low-cost regions corresponding to the texture areas for data embedding.
**Effectiveness of Type-II decoding loss.** To examine the effectiveness of the Type-II decoding loss, we take out this loss and rerun the experiments, the results of which are given in the third row of Tab.5. Compared with the results given in the last row of Tab.5, we can see that the Type-II decoding loss makes it more difficult for the attacker to decode the secret without using a correct key.
**Effectiveness of Type-III decoding loss.** By the same token, we remove the Type-III decoding loss and rerun the experiments to conduct ablation studies for the Type-III decoding loss. The corresponding results are reported in the fourth row of Tab.5. We see that using the Type-III decoding loss substantially increases the BER for the cases where no key or a wrong key is used for decoding. Compared with the results in the last row of Tab.5, using the Type-III decoding loss increases the BER by 8.29% and 6.75% for the no-key and wrong-key cases, respectively.
Figure 4. The ROC curves plotted for different schemes against the StegExpose.
Figure 3. Illustration of stego-images generated using our method and the FNNS at different payload settings on different datasets. The first column shows the cover image; the second to ninth columns show the stego-images generated using our method and the FNNS from a payload of 1BPP to 4BPP. For each cover image, we present the stego-images in the first row and the corresponding residuals (i.e., the absolute difference between the cover and stego-images) in the second row. We magnify the residuals by 10 times to highlight the altered regions. The cover images (from top to bottom) used in this figure are randomly selected from MS-COCO[20], Div2k[1], and CelebA[21], respectively.
**Effectiveness of two-stage update.** Next, we conduct evaluations to see whether our two-stage update strategy is helpful. To do so, we use a single one-stage update by optimizing only the total loss using the L-BFGS optimizer. The results are shown in the fifth row in Tab.5. We find that our two-stage updating strategy indeed improves the steganographic performance because it helps us to escape local minima caused by quantization. Particularly, the BER is improved from 0.04% to 3E-04% by using our two-stage update strategy.
**Effectiveness of iterative quantization.** To generate a stego-image that is suitable to be transmitted in public communication channels, we have to quantize the image after the optimization. A simple way is to quantize the image when the optimization is done. In our proposed method, we quantize the images in each iteration instead of performing the quantization in the final iteration. We believe such a strategy would be able to learn perturbations that are more appropriate for the quantized images. For justification, we conduct two additional experiments here, where we conduct one-stage updates and two-stage updates and only perform the image quantization after the optimization, respectively. Tab.6 reports the results of these experiments. By comparing the results in Tab.6 with the last two rows in Tab.5, we can clearly observe that the optimal performance is achieved only when both iterative quantization and two-stage updates are applied.
In addition, as shown in the first row of Tab.5, the performance of our method degrades significantly when none of the proposed components are incorporated. We also observe that the combination of all components leads to a substantial improvement over the use of any single component. This indicates that our components are complementary for performance boosting.
## 6. Conclusion
In this paper, a key-based FNNS is proposed to improve the security of the existing FNNS schemes. Unlike the existing FNNS schemes, we use a key to control the generation of the adversarial perturbations for data embedding, which is performed by encrypting the input images of the FNN. Given a stego-image, only the receiver who possesses the correct key can extract the secret using the FNN. We further propose an adaptive perturbation generation scheme by taking the perturbation cost into account during the optimization. This is shown to be effective in improving the visual quality and undetectability of the stego-images. We use a pre-trained steganographic network and two image-denoising DNN models as the FNNs to evaluate the performance of our key-based FNNS. The results indicate the advantage of our proposed scheme over the state-of-the-art FNNS in terms of preventing unauthorized secret extraction as well as the steganographic performance.
###### Acknowledgements.
This work is supported by National Natural Science Foundation of China under Grant 62072114, U20A20178, U20B2051, U1936214 and U22B2047.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline & BER(\%)\(\downarrow\) & BER without key(\%)\(\uparrow\) & BER with wrong key(\%)\(\uparrow\) & PSNR(dB)\(\uparrow\) & SSIM\(\uparrow\) \\ \hline One-stage Update & 0.32 & 33.58 & 36.34 & 38.19 & 0.95 \\ \hline Two-stage Update & 0.65 & 35.56 & 38.13 & 37.97 & 0.95 \\ \hline \end{tabular}
\end{table}
Table 6. The effectiveness of iterative quantization.
\begin{table}
\begin{tabular}{c|c|c c c|c c c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Models} & \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{BER(\%)\(\downarrow\)} & \multicolumn{3}{c|}{BER without key(\%)\(\uparrow\)} & \multicolumn{3}{c|}{BER with wrong key(\%)\(\uparrow\)} & \multicolumn{3}{c|}{PSNR(dB)\(\uparrow\)} & \multicolumn{3}{c}{SSIM\(\uparrow\)} \\ \cline{3-17} & & 1BPP & 2BPP & 3BPP & 1BPP & 2BPP & 3BPP & 1BPP & 2BPP & 3BPP & 1BPP & 2BPP & 3BPP & 1BPP & 2BPP & 3BPP \\ \hline \multirow{2}{*}{DnCNN} & FNNS & **0.04** & **0.02** & **0.08** & / & / & / & / & / & / & 13.64 & 10.38 & 11.76 & 0.12 & 0.07 & 0.07 \\ & Ours & 3.03 & 3.41 & 4.00 & 20.45 & 21.80 & 21.46 & 20.97 & 21.38 & 21.23 & **31.81** & **30.39** & **29.19** & **0.80** & **0.75** & **0.70** \\ \hline \multirow{2}{*}{FFDNet} & FNNS & 7.17 & 9.92 & 14.69 & / & / & / & / & / & / & 12.03 & 12.31 & 12.26 & 0.08 & 0.08 & 0.08 \\ & Ours & **5.62** & **5.76** & **5.77** & 21.28 & 22.40 & 21.82 & 21.72 & 22.14 & 21.95 & **31.88** & **30.50** & **29.34** & **0.80** & **0.76** & **0.71** \\ \hline \end{tabular}
\end{table}
Table 4. Performance comparisons on the CelebA dataset using two non-steganographic DNN models, downloaded from a public model repository, as FNNs for steganography.
\begin{table}
\begin{tabular}{c c c c|c c c c c} \hline \hline Perturbation Cost & Type-II Loss & Type-III Loss & Two-stage Update & BER(\%)\(\downarrow\) & BER without key(\%)\(\uparrow\) & BER with wrong key(\%)\(\uparrow\) & PSNR(dB)\(\uparrow\) & SSIM\(\uparrow\) \\ \hline ✗ & ✗ & ✗ & ✗ & 0.04 & 8.08 & 16.09 & 38.24 & 0.94 \\ ✗ & ✓ & ✓ & ✓ & 2E-03 & **30.91** & 33.67 & 38.58 & 0.94 \\ ✓ & ✗ & ✓ & ✓ & 4E-03 & 29.60 & 33.09 & 38.44 & 0.95 \\ ✓ & ✓ & ✗ & ✓ & 7E-03 & 22.35 & 27.01 & 39.16 & 0.95 \\ ✓ & ✓ & ✓ & ✗ & 0.04 & 25.38 & 28.74 & 37.11 & 0.93 \\ ✓ & ✓ & ✓ & ✓ & **3E-04** & 30.64 & **33.76** & **39.48** & **0.95** \\ \hline \end{tabular}
\end{table}
Table 5. Effectiveness of different components in our method.

2301.00106 | Physics-informed Neural Networks approach to solve the Blasius function | Greeshma Krishna, Malavika S Nair, Pramod P Nair, Anil Lal S | 2022-12-31T03:14:42Z | http://arxiv.org/abs/2301.00106v2

# Physics-informed Neural Networks approach to solve the Blasius function
###### Abstract
Deep learning techniques with neural networks have been used effectively in computational fluid dynamics (CFD) to obtain solutions to nonlinear differential equations. This paper presents a physics-informed neural network (PINN) approach to solve the Blasius function. This method eliminates the process of changing the non-linear differential equation to an initial value problem. Also, it tackles the convergence issue arising in the conventional series solution. It is seen that this method produces results that are at par with the numerical and conventional methods. The solution is extended to the negative axis to show that PINNs capture the singularity of the function at \(\eta=-5.69\).
Blasius equation, physics-informed neural networks, automatic differentiation, computational fluid dynamics, boundary layer flow, singularity.
## I Introduction
Machine learning learns patterns from data in order to make predictions. Its three basic requirements are data, theory, and hardware; better GPUs and new approaches help overcome computational difficulties. Computational science is an essential tool that we can use to incorporate physical invariances into learning, for example, the laws governing the conservation of momentum, mass, and energy. To quote Dr. Tinsley Oden: "Computational Science can analyze past events and look into the future. It can explore the effects of thousands of scenarios instead of actual experiments and be used to study events beyond the reach of expanding the boundaries of experimental science". Deep learning is also quite useful in the real world, recognising various types of cancer such as skin cancer [1] and lung cancer [2] from images alone, and it has various applications in agriculture [7].
Neural networks (NNs) are the most widely used machine learning technique. They offer a set of robust tools for solving various supervised and unsupervised problems in pattern recognition, data processing, and non-linear control, and can be regarded as complementary to conventional approaches [3]. A neural network can be considered a function approximator, as it maps a set of input variables to a set of output variables. This mapping is governed by a set of parameters called weights, which are updated during training to fetch the required output. A polynomial can likewise be viewed as a function that transforms a single input variable into a single output variable; the coefficients of the polynomial are comparable to the weights of a neural network, and determining these coefficients yields the solution.
### _Physics Informed Neural Networks_
Physics-informed neural networks (PINNs) play a major role when little data is available but the governing physics of the system is known, and a solution to a differential equation is sought. NNs can approximate the error to the ground-truth function, generalize to unseen data, and train the model. The contribution of PINNs is to obtain a neural network that knows about the physics hidden behind the equation and thus efficiently solves differential equations. The earlier numerical methods for solving them are finite difference, finite element, spectral element, and finite volume methods. From a machine learning perspective, PINNs are data-efficient since the model is heavily regularized by the physics. The required derivatives can be calculated using automatic differentiation at the end of the network. It is possible to train a nonlinear neural network using this approach and its layers of hidden nodes [4]. The loss function is then constructed from the differential equation and boundary conditions.
### _Boundary Layer Theory_
The boundary layer of a flowing fluid is a thin layer near a solid surface, and the flow near the solid surface is known as the boundary layer flows. Ludwig Prandtl is credited for developing the boundary layer theory. In 1904, he published a paper titled "On the Motion of a Fluid with Very Small Viscosity" [5]. He laid out the mathematical foundation for flows and condensed the two-dimensional Navier-Stokes equations (NSE) into the boundary layer equations. This
publication made understanding fluid motion physics possible, which is regarded as the beginning of contemporary fluid mechanics [6]. The primary disadvantage of using a CFD solver for turbomachinery optimization is the amount of time needed to complete the numerous computationally demanding simulations [7].
The introduction of the similarity variables and the transformation of the PDEs into nonlinear ODEs in one coordinate allowed the successful resolution of the problem. Still, specific methods were needed to handle the unbounded boundary conditions. The Blasius benchmark problem has been solved [8] using the trial function method proposed by Lagaris [9], or using a hybrid approach [10]. The solution function is seen to have a singularity on the negative real axis at approximately -5.69. The Blasius equation is a single equation that solely models the viscous boundary layer and is one of CFD's fundamental models.
In this paper, we propose solving the Blasius equation using PINNs and comparing the solution we obtain with the best-known solutions available in the literature. We shall also extend our solution to the negative real axis to locate the singularity existing for the function. In the next section of the paper, we shall present a brief literature review on the Blasius equation and PINNs. In Section III, we present the methodology that has been adopted to solve the Blasius Equation. Section IV discusses the results and analysis of the proposed algorithm. The conclusions are discussed in the last section.
## II Literature review
### _Physics Informed Neural Networks_
Neural networks can be used to attain solutions to ODEs and PDEs by reducing them to an optimization problem instead of numerically solving the equations. The application of NNs in fluid mechanics began in the 1990s. Raissi and his team [11, 12] developed a technique called the physics-informed neural network (PINN), where the loss function defined in the corresponding NN is extracted from the physics behind the PDE and related equations. In earlier years, computational fluid dynamics (CFD) had been a great relief in numerically solving the compressible and incompressible Navier-Stokes equations. Minimizing the loss function is challenging in PINNs, as it can be highly complex. Nevertheless, PINNs have been proven to be more accurate than conventional CFD techniques with fewer computations [13].
Although numerical discretization of the Navier-Stokes equations (NSE) has made significant progress in simulating flow problems over the past 50 years, present algorithms cannot solve problems governed by highly parametrized NSE. Additionally, it is expensive to solve inverse flow problems due to their complexity, expensive formulations, and need for new algorithms. PINNs have been expanded to fractional PINNs (fPINNs), whose convergence has been explored methodically in solving space-time fractional advection-diffusion equations (fractional ADEs) [14]. A novel component of fPINNs is a hybrid method for building the residual in the loss function that utilizes automatic differentiation for the integer-order operators and numerical discretization for the fractional operators.
Wide varieties of PINNs have been found in the literature in recent times. can-PINNs [15] link derivative terms with nearby support points, which generally apply to Taylor series expansion-based numerical systems. Apart from demonstrating good dispersion and dissipation characteristics, they are highly trainable and require four to sixteen times fewer collocation points than original PINNs. Auxiliary PINNs (A-PINN) is a technique for solving forward and inverse problems of non-linear integrodifferential equations [16]. ViscoelasticNet is another PINN framework for stress discovery and model selection [17]. PINNs can also be used to solve Reynolds-averaged Navier-Stokes equations [18], full waveform seismic inversions in 2D acoustic media, and wave propagation as it seamlessly handles boundary conditions and physical constraints [19]. In addition to addressing ill-posed problems beyond the scope of conventional computing techniques, PINNs can also close the discrepancy between computational and experimental heat transfer [20].
### _Blasius Equation_
The Blasius equation is a third-order non-linear ordinary differential equation of the form \(f^{\prime\prime\prime}+\frac{1}{2}ff^{\prime\prime}=0\) with the boundary conditions \(f(0)=0\), \(f^{\prime}(0)=0\), \(f^{\prime}(\infty)=1\). It governs the boundary layer flow over a semi-infinite flat plate. Suppose that the \(u-\)velocity, the velocity parallel to the surface, is much greater than the \(v-\)velocity, perpendicular to the surface, and that changes in the direction perpendicular to the surface are much greater than changes parallel to it. The boundary layer equations consist of conservation of mass (II.1), conservation of x-momentum (II.2), and conservation of y-momentum. In a flat plate boundary layer, the pressure gradient term appearing in the x-momentum equation becomes zero (II.2). This leads to the hydrodynamic solution for the flat plate boundary layer in a laminar flow called the Blasius solution.
\[\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0\] (II.1)
\[u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}=\nu\frac{ \partial^{2}u}{\partial y^{2}}\] (II.2)
Blasius' analysis focus on the laminar boundary layer forming on a flat plate. The main aspect of the work is the transformation of the PDE for a flat plate boundary layer with zero pressure gradient into a single ordinary differential equation(ODE) by considering the velocity components that satisfy equation II.1.
\[u\equiv\frac{\partial\psi}{\partial y}\ \ \ \ v\equiv-\frac{\partial\psi}{ \partial x}\] (II.3)
The stream function \(\psi=U_{\infty}\sqrt{\frac{\nu x}{U_{\infty}}}f(\eta)\) is directly proportional to the function \(f(\eta)\), called the Blasius function. Here, \(U_{\infty}\) is the free stream velocity, and \(\eta\) is a transformed coordinate called the similarity parameter. The velocity components are proportional to the first derivative of \(f(\eta)\), and the second and third derivatives of \(f\) are proportional to the first and second derivatives of velocity. Substituting these relations into the momentum equation (II.2), the final form of the Blasius boundary layer equation for a flat plate, \(f^{\prime\prime\prime}+\frac{1}{2}ff^{\prime\prime}=0\), can be obtained. The first and second derivatives of \(f(\eta)\) are given by \(f^{\prime}=\frac{u}{U_{\infty}}\) (non-dimensional velocity profile) and \(f^{\prime\prime}=\frac{1}{U_{\infty}}\sqrt{\frac{\nu x}{U_{\infty}}}\frac{\partial u}{\partial y}\) (a quantity related to shear stress). The boundary conditions are set considering the laminar flow on a flat plate, the no-slip condition, and free-stream velocity outside the boundary layer. Hence we have
\[f(0)=0\hskip 14.226378ptf^{\prime}(0)=0\hskip 14.226378ptf^{\prime}(\infty)=1\] (II.4)
Since the value of \(f^{\prime\prime}\) at \(\eta=0\) is unknown, it cannot be considered an initial value problem.
A power series solution to the boundary layer equation of flow across a flat plate was proposed by Blasius [21]. Schmidt and Beckmann [22] and Ostrach [23] conducted the most important work on the topic, carrying out theoretical and experimental research on the free convection flow of air around a vertical flat plate under the influence of gravity. Boyd solved the equation using an analytical series solution technique [24]. Nowadays, with the availability of computers, we can obtain a numerical solution to this equation with a very high degree of accuracy. A possible numerical method is the shooting algorithm, which searches for the value of \(f^{\prime\prime}(0)\) that satisfies the boundary condition at \(\eta=\infty\). The first step is to guess a value of \(f^{\prime\prime}\) at the wall and solve the ODE along the non-dimensional coordinate until the first derivative of \(f\) stops changing. If this first derivative does not satisfy the given boundary condition, the guessed value is decreased or increased depending on whether the first derivative is higher or lower than one. The algorithm is repeated until the boundary condition is satisfied, yielding the final solution. Howarth [25] found the solution using a numerical method that accurately predicted the value of \(f^{\prime\prime}(0)\) to be 0.332.
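A compact sketch of this shooting procedure, written with SciPy's initial-value solver and a root-finding search on \(f^{\prime\prime}(0)\), is given below; the integration horizon, tolerances, and bracketing interval are our own illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, y):
    # y = [f, f', f'']; the Blasius equation gives f''' = -0.5 * f * f''.
    return [y[1], y[2], -0.5 * y[0] * y[2]]

def far_field_residual(fpp0, eta_max=10.0):
    # Integrate from the wall with a guessed f''(0) and return how far
    # f'(eta_max) is from the free-stream condition f'(infinity) = 1.
    sol = solve_ivp(blasius_rhs, [0.0, eta_max], [0.0, 0.0, fpp0],
                    rtol=1e-10, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Root-find on the unknown wall curvature; the bracket [0.1, 1.0] is an
# assumption that happens to contain the root.
fpp0 = brentq(far_field_residual, 0.1, 1.0)
print(f"f''(0) = {fpp0:.5f}")  # approximately 0.33206, matching Howarth
```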
This differential equation is a direct representation of the velocity profile inside the boundary layer. Once the solution is obtained, the velocity components can be calculated using these relations. Hence the boundary layer thickness can be calculated as the point where the velocity is 99\(\%\) of the free stream velocity; this occurs approximately at \(\eta\) equal to 5. The displacement and momentum thickness can also be estimated using the solution. The wall shear stress based on the velocity gradient at the wall can be estimated, and the friction and drag coefficients can be calculated. Note that the expression accounts for only one side of the plate. It is interesting to compare the exact solution with the approximate solution from the integral analysis: the relations are close to each other. Indeed, the integral analysis is within 10\(\%\) of the exact solution and, unlike Blasius' solution, these values were obtained without complicated math. The Blasius solution is a self-similar solution, meaning that the solution is the same if the independent and dependent variables of the governing equations are appropriately scaled. This can be seen when comparing experimental results with the Blasius solution: different Reynolds numbers (\(Re\)) give the same profile when the variables on the two axes are appropriately scaled.
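For reference, the classical flat-plate results that follow from the Blasius solution, with \(Re_{x}=U_{\infty}x/\nu\), can be summarised as

\[\delta_{99}\approx\frac{5.0\,x}{\sqrt{Re_{x}}},\qquad C_{f}=\frac{0.664}{\sqrt{Re_{x}}},\qquad C_{D}=\frac{1.328}{\sqrt{Re_{L}}}\]

where \(\delta_{99}\) is the thickness at which \(u=0.99\,U_{\infty}\), \(C_{f}\) is the local skin-friction coefficient, and \(C_{D}\) is the drag coefficient for one side of a plate of length \(L\).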
The solution to the Blasius equation can be found in [26, 27], where accurate benchmark results for the Blasius boundary layer problem are obtained using a leaping Taylor series that converges for all real values. There have been different methods to solve the Blasius equation, such as the Töpfer transformation, which is executed using an inverse transformation [28, 29, 30]. Runge-Kutta, incorporated with the shooting method, finds the solution numerically, while the Adomian decomposition method [31] finds the solution analytically.
## III Proposed methodology
The ODE representing the Blasius equation derived from the PDE is considered in the present work. We follow a similar methodology proposed for the general form of generalized non-linear ODE by Raissi [11]. The workflow diagram of the present method is shown in Fig.1.
The input values \(\eta\) are discrete and are equally distributed between the boundary points \(\eta_{0}\) and \(\eta_{m}\). The beauty of PINNs lies in the loss function, as it incorporates the boundary conditions and the differential equation and thus includes the physics information. The loss function is the total loss from the given ODE \(L_{o}\), the initial conditions \(L_{i}\), and the boundary conditions \(L_{b}\).
\[L=L_{o}+L_{b}+L_{i}\] (III.1)
\[L_{o}=\sum_{\eta}[\hat{f}^{\prime\prime\prime}+\frac{1}{2}\hat{f}\hat{f}^{ \prime\prime}]^{2}\] (III.2)
\[L_{i}=[\hat{f}(\eta_{0})-f(\eta_{0})]^{2}+[\hat{f}^{\prime}(\eta_{0})-f^{ \prime}(\eta_{0})]^{2}\] (III.3)
\[=[\hat{f}(\eta_{0})-0]^{2}+[\hat{f}^{\prime}(\eta_{0})-0]^{2}\]
\[L_{b}=[\hat{f}(\eta_{m})-f(\eta_{m})]^{2}=[\hat{f}(\eta_{m})-1]^{2}\] (III.4)
From the definition of the loss function, it is clear that the total loss will be zero if the function \(f(\eta)\) is exact. Here we update the weights of the neural network in each iteration such that the loss function is minimized. Supplying the independent input values representing discrete spatial coordinates \(\eta\) ranging from zero to \(\eta_{m}\) into the neural network is sufficient to solve the ODE via PINNs. The neural network maps the input \(\eta\) to \(f(\eta)\), which is the estimated solution to the stream function. Unlike standard neural network techniques that approximate the value of \(f(\eta)\) in a heuristic manner from sample output values, PINNs obtain a solution function that minimizes the loss function, which is a combination of the differential equation and boundary values.

Fig. 1: Workflow diagram to solve for \(f(\eta)\)
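A minimal PyTorch sketch of this loss, with the derivatives obtained by automatic differentiation, might look as follows; the function and variable names are ours, and the network architecture is supplied separately.

```python
import torch

def blasius_loss(net, eta_m=8.0, n_pts=100):
    # Collocation points on [0, eta_m]; gradients are taken w.r.t. eta.
    eta = torch.linspace(0.0, eta_m, n_pts).reshape(-1, 1).requires_grad_(True)
    f = net(eta)
    ones = torch.ones_like(f)
    f1 = torch.autograd.grad(f, eta, ones, create_graph=True)[0]
    f2 = torch.autograd.grad(f1, eta, ones, create_graph=True)[0]
    f3 = torch.autograd.grad(f2, eta, ones, create_graph=True)[0]
    L_o = ((f3 + 0.5 * f * f2) ** 2).sum()   # ODE residual (III.2)
    L_i = f[0] ** 2 + f1[0] ** 2             # f(0) = 0, f'(0) = 0 (III.3)
    L_b = (f1[-1] - 1.0) ** 2                # f'(eta_m) = 1 (III.4)
    return (L_o + L_i + L_b).squeeze()
```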
Several combinations of the number of hidden layers and the number of nodes in each layer were tested on a trial-and-error basis. We see that the results obtained are on par with those of [27] when initializing the neural network with two fully connected hidden layers and setting each hidden layer's width to 100 neurons. The learning rate set in this case is 0.96. Some of the relatively good results obtained for various combinations of hidden layers and nodes, along with the changes in the learning rate, are presented in Table I. The solution \(f_{i}(\eta)\) for each of the cases in Table I is graphically presented in Fig.2.
In earlier methods proposed by Lagaris [9], all the boundary conditions had to be known before finding the solution. In this method, it is only required to set an appropriate finite value for \(\eta\) at \(\infty\), which we assumed to be \(\eta_{m}=8\) in our case. Although higher values could be set, we see from the literature and our results that \(f^{\prime}(\eta)\) takes a value very close to one when \(\eta=5\); hence the justification for taking the value of \(\eta\) at \(\infty\) as \(\eta_{m}=8\). Another benefit of using PINNs is that we can considerably reduce the number of collocation points and still attain the same level of accuracy. In our method, we have considered a total of 100 equidistant collocation points of \(\eta\) between 0 and 8. The network trains with this input data and finds the derivatives using automatic differentiation to calculate the loss \(L_{o}\).
The Adam optimization algorithm is incorporated into the PINNs methodology to set the model's adaptive learning rates. It adds momentum as the estimate of the first-order moment of the gradient and includes bias corrections to the estimates of the first and second-order moments. The second-order method used to train the network was the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, which decreases the memory cost by avoiding the full Hessian approximation of BFGS and starting from an identity matrix instead. The loss function is optimized using Adam and L-BFGS until it converges.
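Continuing the sketch above (it reuses `blasius_loss`), the two optimizers might be chained as follows; the architecture, iteration counts, and learning rate are illustrative assumptions rather than the exact settings used in the paper.

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 100), torch.nn.Tanh(),
    torch.nn.Linear(100, 100), torch.nn.Tanh(),
    torch.nn.Linear(100, 1))

adam = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5000):            # first phase: Adam
    adam.zero_grad()
    blasius_loss(net).backward()
    adam.step()

lbfgs = torch.optim.LBFGS(net.parameters(), max_iter=500)
def closure():                   # second phase: L-BFGS refinement
    lbfgs.zero_grad()
    loss = blasius_loss(net)
    loss.backward()
    return loss
lbfgs.step(closure)
```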
The method was extended to the negative axis by setting \(\eta_{0}\) to -5.69 and \(\eta_{m}\) to 7. The value of \(f^{\prime\prime}(0)\) found from the previous program is incorporated into the existing loss function. Our purpose in doing this was to check whether our method could capture the singularity of the solution function mentioned in the literature.
## IV Results and discussion
The proposed PINNs method can reduce computational time in solving the Blasius equation over conventional CFD techniques, as it removes the burden of finding the value of \(f^{\prime\prime}(0)\) and then solving the initial value problem. The neural network constructed with two hidden layers of 100 neurons each is considered for our results and discussions. The input \(\eta\) ranges from 0 to 8 (\(\eta_{m}\)). The learning rate for the Adam optimizer is assigned the value of 0.96. From Fig.3, it is clear that the results obtained from PINNs perfectly match the results of numerical techniques [26].
In the present method, the value of \(f^{\prime\prime}(\eta)\) at 0 is not considered for finding the solution. Further, once \(f(\eta)\) is obtained, we can also find the functions corresponding to \(f^{\prime}(\eta)\) and \(f^{\prime\prime}(\eta)\). The value of the second derivative at the wall is observed from the PINNs method to be 0.33165, which aligns approximately with the value found by Howarth [25]. The loss achieved by this method is 1.67x10\({}^{-6}\), indicating that the results are accurate with a sufficiently small error. As proposed by Gaurav Pandey and Ambedkar Dukkipati [32], increasing the width (number of neurons) and depth (number of hidden layers) of the network can be beneficial. In our case, however, the function \(\hat{f}(\eta)\) tends to become more vulnerable to a high amount of bias when a deeper network is chosen. Thus, the initial conditions must be chosen much more accurately for the loss function to converge to the global minimum. Further, fixing the width and depth of the network to 100 and 2, respectively, yields accurate results; hence, the need for a more complex network was obviated.

Fig. 3: Estimated results \(f(\eta),f^{\prime}(\eta),f^{\prime\prime}(\eta)\) obtained from PINNs and those obtained by numerical methods

Fig. 2: Solutions obtained for various widths and depths of NN
The velocity components are directly proportional to \(f^{\prime}(\eta)\). Thus, the proposed method's solution maps the non-dimensional coordinate \(\eta\) to \(f^{\prime}(\eta)\), which is the velocity profile inside the boundary layer. The values of the velocity components can also be calculated from \(f^{\prime}(\eta)\) and \(f^{\prime\prime}(\eta)\). From the graph, it is visually apparent that \(f^{\prime}(\eta)\) goes to 1 as \(\eta\) approaches five. Hence the boundary layer thickness is calculated where the velocity is 99\(\%\) of the free stream velocity.
In 1999, Boyd [24] suggested that convergence of his power series method to solve the Blasius equation is limited by a singularity on the negative \(\eta-\)axis at \(\eta=-5.6900380545\). Anil and Milan [27] considered a leaping Taylor's series solution for the Blasius equation to overcome this singularity. In the second part of our investigation, we extended the collocation points to the negative portion to check the solution of the Blasius equation on the negative \(\eta-\)axis. The solution obtained from PINNs also shows that the function \(f(\eta)\) increases rapidly near \(\eta=-5.7\), signifying the presence of a singularity near that point. A graph of \(f(\eta)\) including the negative \(\eta-\)axis is shown in Fig.4.
## V Conclusion
PINNs use knowledge of the governing equation in deep learning and find a solution to the differential equation by minimizing a loss function, including the physics information. The proposed methodology using PINNs is free from mesh generation, which is an integral part of conventional CFD techniques. Here, we have obtained the solutions to the Blasius equations, which agree with all the numerical methods in the literature. Further, the solution captures the singularity mentioned while solving the differential equation analytically.
2301.13146 | Enhancing Neural Network Differential Equation Solvers | Matthew J. H. Wright | 2022-12-28T17:26:46Z | http://arxiv.org/abs/2301.13146v1

# Enhancing Neural Network Differential Equation Solvers
###### Abstract
We motivate the use of neural networks for the construction of numerical solutions to differential equations. We prove that there exists a feed-forward neural network that can arbitrarily minimise an objective function that is zero at the solution of Poisson's equation, allowing us to guarantee that neural network solution estimates can get arbitrarily close to the exact solutions. We also show how these estimates can be appreciably enhanced through various strategies, in particular through the construction of error correction networks, for which we propose a general method. We conclude by providing numerical experiments that attest to the validity of all such strategies for variants of Poisson's equation. The source code for this project can be found at [https://github.com/mjhwright/error-correction](https://github.com/mjhwright/error-correction).
## 1 Introduction
Differential equations are among the most ubiquitous problems in contemporary mathematics. In recent years, developments in artificial neural networks have prompted new research into their capacity to be excellent differential equation solvers [1, 2, 3, 4, 5]. They are universal approximators [6]; they can circumvent the curse of dimensionality [7]; and they are continuous. However, practically, their construction and optimisation costs are enough to deter the discerning user.
In this paper, we explain a method by which neural networks can numerically solve differential equations. We further this by providing three strategies that can be targeted to improve the efficacy of the solver. The first two - sinusoidal representation networks [8] and random Fourier features [9] - are well-established in the field of artificial neural networks and machine learning. The third is a novel technique called error correction [10, 11, 12, 13, 14]. We explain how error correction can be implemented recursively, with little modification to the original solver, to give enhanced numerical solutions to differential equations, and we present results that demonstrate this.
This paper is designed to give a flavour of the competence of artificial neural networks in this field, while also highlighting their certain limitations.
## 2 Background
Throughout this paper, we consider differential equations with solution \(\phi:\mathbb{R}^{d}\to\mathbb{R}\). Consequently, our neural network approximation is a function \(\mathcal{N}:\mathbb{R}^{d}\to\mathbb{R}\).
### Universal approximation theorems
The realisation of neural networks' capabilities to learn seemingly any function has brought about numerous universal approximation theorems. These state that, under certain conditions, neural networks are able to approximate any function to arbitrary closeness. We recall one of these theorems by Hornik [6].
First, define the set of all functions represented by a neural network with a single hidden layer of width \(n\) and identity activation on the output layer as
\[\mathscr{A}^{n}(\sigma)=\left\{\mathcal{N}:\mathbb{R}^{d}\to\mathbb{R},\mathcal{N}( \mathbf{x})=\mathbf{W}^{(1)}\left(\sigma\left(\mathbf{W}^{(0)}\mathbf{x}+ \mathbf{b}^{(0)}\right)\right)+b^{(1)}\right\}\]
where \(\mathbf{x}\in\mathbb{R}^{d},\mathbf{W}^{(0)}\in\mathbb{R}^{n\times d},\mathbf{ W}^{(1)}\in\mathbb{R}^{1\times n},\mathbf{b}^{(0)}\in\mathbb{R}^{n},b^{(1)}\in \mathbb{R}\), and \(\sigma:\mathbb{R}\to\mathbb{R}\) is applied element-wise. Then,
\[\mathscr{A}(\sigma)=\bigcup_{n=1}^{\infty}\mathscr{A}^{n}(\sigma) \tag{1}\]
is the set of all such functions with any number of neurons. Define also \(\mathcal{C}^{m}(\mathbb{R}^{d})\) as the space of all functions that, together with their partial derivatives of order \(|\alpha|\leq m\), are continuous on \(\mathbb{R}^{d}\).
**Theorem 1**.: [6] _If \(\sigma\in\mathcal{C}^{m}(\mathbb{R}^{d})\) is nonconstant and bounded, then \(\mathscr{A}(\sigma)\) is uniformly \(m\)-dense on all compact sets of \(\mathcal{C}^{m}(\mathbb{R}^{d})\), i.e. for all \(\phi\in\mathcal{C}^{m}(\mathbb{R}^{d})\), for all compact sets \(\Omega\subset\mathbb{R}^{d}\) and for all \(\epsilon>0\), there exists \(\mathcal{N}\in\mathscr{A}(\sigma)\) such that_
\[\max_{|\alpha|\leq m}\sup_{x\in\Omega}|\partial_{x}^{(\alpha)}\mathcal{N}(x)- \partial_{x}^{(\alpha)}\phi(x)|<\epsilon\]
Theorem 1 illustrates the universal approximation quality for single-layer networks of arbitrary width. Applying results from an earlier paper by Hornik et al. [15], this can be extended to multilayer networks. Crucially, these theorems tell us that neural networks are dense on certain function spaces, but they do not tell us how to train a network to realise this.
### Neural network differential equation solvers
Using neural networks to solve differential equations was introduced in the late 1990s [1], but experienced a modern resurgence through the publication of two papers [2, 3] on physics-informed neural networks. The deep Galerkin method [4] which we describe below is very similar to the method described in [2] only, instead of using experimental data, we train a network on points randomly sampled across the domain of the differential equation.
Consider Poisson's equation with Dirichlet boundary conditions:
\[\begin{cases}\nabla^{2}\phi&=f\text{ in }\Omega\\ \phi&=g\text{ on }\partial\Omega\end{cases} \tag{2}\]
**Lemma 2**.: _Let \(\Omega\subset\mathbb{R}^{d}\) be a smooth, compact domain. Then there exists at most one solution \(\phi\) to (2)._
Proof.: Suppose \(\phi\) and \(\varphi\) both satisfy the conditions of (2) and let \(\omega=\phi-\varphi\). Then \(\omega\) is harmonic in \(\Omega\) and zero on \(\partial\Omega\). Then,
\[\int_{\Omega}\omega(\mathbf{x})\nabla^{2}\omega(\mathbf{x})\,d\mathbf{x}=\int_{\partial\Omega}\omega(\mathbf{x})\partial_{\mathbf{n}}\omega(\mathbf{x})\,d\mathbf{x}-\int_{\Omega}||\nabla\omega(\mathbf{x})||^{2}\,d\mathbf{x}=-\int_{\Omega}||\nabla\omega(\mathbf{x})||^{2}\,d\mathbf{x}=0\]

and \(\nabla\omega=0\), so \(\omega\) is constant on \(\Omega\). Since \(\omega=0\) on \(\partial\Omega\), it follows that \(\omega=0\) and \(\phi=\varphi\).
We now seek an approximation \(\mathcal{N}\) to \(\phi\). Define the objective function
\[\mathcal{J}(\mathcal{N})=\int_{\Omega}|\nabla^{2}\mathcal{N}(\mathbf{x})-f( \mathbf{x})|^{2}\nu_{1}(\mathbf{x})\,d\mathbf{x}+\int_{\partial\Omega}| \mathcal{N}(\mathbf{x})-g(\mathbf{x})|^{2}\nu_{2}(\mathbf{x})\,d\mathbf{x}\]
for probability distributions \(\nu_{1}\) on \(\Omega\) and \(\nu_{2}\) on \(\partial\Omega\). By uniqueness of \(\phi\), \(\mathcal{J}(\mathcal{N})=0\implies\mathcal{N}=\phi\). However, minimising the objective function directly is impractical. First, we transform the problem into a machine learning framework. Our approximation \(\mathcal{N}=\mathcal{N}(\cdot;\theta)\) becomes a neural network with parameters \(\theta\).
#### Deep Galerkin method
We demonstrate the algorithm for the deep Galerkin method [4] when applied to Poisson's equation (2):
1. Randomly sample points \(\{\mathbf{x}_{i}\}_{i=1}^{M}\) from \(\Omega\) and \(\{\mathbf{y}_{j}\}_{j=1}^{N}\) from \(\partial\Omega\) according to respective probability distributions \(\nu_{1}\) and \(\nu_{2}\), and propagate them through a feed-forward neural network \(\mathcal{N}(\cdot;\theta)\).
2. Calculate the loss: \[\mathcal{L}(\theta)=\frac{1}{M}\sum_{i=1}^{M}\left(\nabla^{2}\mathcal{N}( \mathbf{x}_{i};\theta)-f(\mathbf{x}_{i})\right)^{2}+\frac{1}{N}\sum_{j=1}^{N }\left(\mathcal{N}(\mathbf{y}_{j};\theta)-g(\mathbf{y}_{j})\right)^{2}\]
3. Update parameters \(\theta_{t+1}=\theta_{t}-\eta\nabla_{\theta}\mathcal{L}(\theta_{t})\) with learning rate \(\eta>0\) and \(t\in\mathbb{N}_{0}\).
4. Repeat until \(\nabla_{\theta}\mathcal{L}(\theta_{t})\approx 0\).
This is a minibatch gradient descent implementation, where \(M\) and \(N\) are the size of the minibatches and \(M>N\).
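The sketch below shows one such minibatch step for a two-dimensional instance on the unit square; the sampling scheme, the callables `f` and `g`, and all names are illustrative assumptions rather than a reference implementation.

```python
import torch

def dgm_step(net, f, g, opt, M=256, N=64):
    # One minibatch step of the deep Galerkin method for Poisson's
    # equation on the unit square (a sketch; f and g are callables).
    x = torch.rand(M, 2, requires_grad=True)          # interior samples
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u),
                             create_graph=True)[0]
    lap = 0.0
    for i in range(2):                                # trace of the Hessian
        d2 = torch.autograd.grad(du[:, i], x, torch.ones_like(du[:, i]),
                                 create_graph=True)[0]
        lap = lap + d2[:, i]
    interior = ((lap - f(x).squeeze()) ** 2).mean()

    y = torch.rand(N, 2)                              # boundary samples:
    side = torch.randint(0, 4, (N,))                  # snap one coordinate
    y[side == 0, 0] = 0.0
    y[side == 1, 0] = 1.0
    y[side == 2, 1] = 0.0
    y[side == 3, 1] = 1.0
    boundary = ((net(y).squeeze() - g(y).squeeze()) ** 2).mean()

    loss = interior + boundary
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```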
**Lemma 3**.: \(\mathbb{E}[\nabla_{\theta}\mathcal{L}(\theta_{t})|\theta_{t}]=\nabla_{\theta} \mathcal{J}(\mathcal{N}(\cdot;\theta_{t}))\)__
Proof.: Assume \(\mathcal{L}\) sufficiently smooth and bounded to interchange derivatives and integrals. Then,
\[\mathbb{E}[\nabla_{\theta}\mathcal{L}(\theta_{t})|\theta_{t}] =\nabla_{\theta}\left[\frac{1}{M}\sum_{i=1}^{M}\mathbb{E}\left[( \nabla^{2}\mathcal{N}(\mathbf{x}_{i};\theta_{t})-f(\mathbf{x}_{i}))^{2}\right] +\frac{1}{N}\sum_{j=1}^{N}\mathbb{E}\left[(\mathcal{N}(\mathbf{y}_{j};\theta_{ t})-g(\mathbf{y}_{j}))^{2}\right]\right]\] \[=\nabla_{\theta}\left[\frac{1}{M}\sum_{i=1}^{M}\int_{\Omega}( \nabla^{2}\mathcal{N}(\mathbf{x};\theta_{t})-f(\mathbf{x}))^{2}\nu_{1}( \mathbf{x})\,d\mathbf{x}+\frac{1}{N}\sum_{j=1}^{N}\int_{\partial\Omega}( \mathcal{N}(\mathbf{y};\theta_{t})-g(\mathbf{y}))^{2}\nu_{2}(\mathbf{y})\,d \mathbf{y}\right]\] \[=\nabla_{\theta}\mathcal{J}(\mathcal{N}(\cdot;\theta_{t}))\]
Therefore, the \(\nabla_{\theta}\mathcal{L}(\theta_{t})\) are unbiased estimates of \(\nabla_{\theta}\mathcal{J}(\mathcal{N}(\cdot;\theta_{t}))\), and we can assume a step in the descent direction of \(\mathcal{L}\) is also one in \(\mathcal{J}\). Thus, any minimisation of \(\mathcal{L}\) should translate to a local minimisation of \(\mathcal{J}\).
#### Minimisation of \(\mathcal{J}(\mathcal{N})\)
We prove the following theorem, adapted from the original deep Galerkin method paper [4].
**Theorem 4**.: _Let \(\mathscr{A}(\sigma)\) be given by (1), for nonconstant, bounded \(\sigma\), and let \(\Omega\in\mathbb{R}^{d}\) be a compact domain and consider measures \(\nu_{1},\nu_{2}\) whose supports are contained in \(\Omega,\partial\Omega\) respectively. Assume further that \(\nabla^{2}\phi\) is locally Lipschitz with Lipschitz constant that can have at most polynomial growth on \(\nabla\phi\), uniformly with respect to \(x\), i.e._
\[|\nabla^{2}\mathcal{N}-\nabla^{2}\phi|\leq\left(||\nabla\mathcal{N}||^{\frac{ \alpha}{2}}+||\nabla\phi||^{\frac{\alpha}{2}}\right)||\nabla\mathcal{N}-\nabla \phi|| \tag{3}\]
_for some constants \(0\leq a,b<\infty\). Then, for all \(\epsilon>0\), there exists a constant \(\kappa>0\) such that there exists a function \(\mathcal{N}\in\mathscr{A}(\sigma)\) with_
\[\mathcal{J}(\mathcal{N})\leq\kappa\epsilon\]
Proof.: The condition given by (3) implies that
\[|\nabla^{2}\mathcal{N}-\nabla^{2}\phi|^{2} \leq\left(||\nabla\mathcal{N}||^{\frac{a}{2}}+||\nabla\phi||^{\frac{b}{2}}\right)^{2}||\nabla\mathcal{N}-\nabla\phi||^{2}\] \[\leq\left(||\nabla\mathcal{N}||^{a}+||\nabla\phi||^{b}+2||\nabla\mathcal{N}||^{\frac{a}{2}}||\nabla\phi||^{\frac{b}{2}}\right)||\nabla\mathcal{N}-\nabla\phi||^{2}\] \[\leq 2\left(||\nabla\mathcal{N}||^{a}+||\nabla\phi||^{b}\right)||\nabla\mathcal{N}-\nabla\phi||^{2}\]
with the last line following from Young's inequality [16]. Then,
\[\int_{\Omega}|\nabla^{2}\mathcal{N}(\mathbf{x})-\nabla^{2}\phi( \mathbf{x})|^{2}\,d\nu_{1}(\mathbf{x}) \leq 2\int_{\Omega}\left(||\nabla\mathcal{N}(\mathbf{x})||^{a}+|| \nabla\phi(\mathbf{x})||^{b}\right)||\nabla\mathcal{N}(\mathbf{x})-\nabla \phi(\mathbf{x})||^{2}\,d\nu_{1}(\mathbf{x})\] \[\leq 2\left[\int_{\Omega}\left(||\nabla\mathcal{N}(\mathbf{x})|| ^{a}+||\nabla\phi(\mathbf{x})||^{b}\right)^{p}\,d\nu_{1}(\mathbf{x})\right]^ {\frac{1}{p}}\left[\int_{\Omega}||\nabla\mathcal{N}(\mathbf{x})-\nabla\phi( \mathbf{x})||^{2q}\,d\nu_{1}(\mathbf{x})\right]^{\frac{1}{q}}\]
if we apply Holder's inequality [16] for exponents \(p,q\) satisfying \(\frac{1}{p}+\frac{1}{q}=1\) and \(1\leq p,q\leq\infty\). Furthermore,
\[\int_{\Omega}|\nabla^{2}\mathcal{N}(\mathbf{x})-\nabla^{2}\phi( \mathbf{x})|^{2}\,d\nu_{1}(\mathbf{x}) \leq K\left[\int_{\Omega}\left(||\nabla\mathcal{N}(\mathbf{x})- \nabla\phi(\mathbf{x})||^{a}+||\nabla\phi(\mathbf{x})||^{\max\{a,b\}}\right)^ {p}\,d\nu_{1}(\mathbf{x})\right]^{\frac{1}{p}}\] \[\quad\cdot\left[\int_{\Omega}||\nabla\mathcal{N}(\mathbf{x})- \nabla\phi(\mathbf{x})||^{2q}\,d\nu_{1}(\mathbf{x})\right]^{\frac{1}{q}}\] \[\leq K(\epsilon^{a}+\sup_{\mathbf{x}\in\Omega}||\nabla\phi( \mathbf{x})||^{\max\{a,b\}})\epsilon^{2}\]
for some constant \(K\). The last line follows from Theorem 1. Applying this result and Theorem 1 again to the objective function \(\mathcal{J}\), we obtain:
\[\mathcal{J}(\mathcal{N}) =\int_{\Omega}|\nabla^{2}\mathcal{N}(\mathbf{x})-f(\mathbf{x})|^{2 }\,d\nu_{1}(\mathbf{x})+\int_{\partial\Omega}|\mathcal{N}(\mathbf{x})-g( \mathbf{x})|^{2}\,d\nu_{2}(\mathbf{x})\] \[=\int_{\Omega}|\nabla^{2}\mathcal{N}(\mathbf{x})-\nabla^{2}\phi( \mathbf{x})|^{2}\,d\nu_{1}(\mathbf{x})+\int_{\partial\Omega}|\mathcal{N}( \mathbf{x})-\phi(\mathbf{x})|^{2}\,d\nu_{2}(\mathbf{x})\] \[\leq K(\epsilon^{a}+\sup_{\mathbf{x}\in\Omega}||\nabla\phi( \mathbf{x})||^{\max\{a,b\}})\epsilon^{2}+\epsilon^{2}\]
Finally, a rescaling of \(\epsilon>0\) yields
\[\mathcal{J}(\mathcal{N})\leq\kappa\epsilon\]
for some constant \(\kappa>0\) which may depend on \(\sup_{\mathbf{x}\in\Omega}||\nabla\phi(\mathbf{x})||\).
Theorem 4 guarantees the existence of a feed-forward neural network \(\mathcal{N}\) that, under relatively relaxed conditions, makes the objective function \(\mathcal{J}(\mathcal{N})\) for Poisson's equation arbitrarily small. However, neural network objective functions are highly non-convex. This means they have numerous minima and, while gradient descent algorithms like the deep Galerkin method are extremely effective at reaching said minima [17], there is no guarantee of achieving the global minimum i.e., in our case, finding the unique solution. Many authors research such ideas in non-convex optimisation [18], but we do not touch on them here, and present only empirical evidence of our solver finding/not finding global minima in the Results section (see **4**).
Methods
We now present three highly accessible methods to enhance the performance of a neural network trained to solve differential equations via the deep Galerkin method.
### Sinusoidal representation networks
Consider a neural network that is trained to approximate a function directly. We need only the first-order derivatives of the activation functions to backpropagate, and thus ReLU seems a natural choice [19]. However, our framework requires a network to learn a function via its derivatives. ReLU networks cannot do this without significant loss of information since they have second derivative zero. They are incapable of accurately modelling a signal's higher-order derivatives.
A recent paper [8] highlighting these limitations proposes something the authors call a sinusoidal representation network or SIREN. This is a neural network that implicitly defines a function, in our case \(\mathcal{N}\), with sinusoidal activations. Thus, while regular feed-forward networks with, say, ReLU activation may be excellent function approximators, a SIREN can further accurately fit derivatives of functions \(\phi\) through its own derivatives. ReLU networks typically cannot, due to their piecewise linear nature. This idea is hidden in Theorem 1 since ReLU is continuous but not differentiable, and so a network \(\mathcal{N}\) with ReLU activation could only achieve
\[\sup_{x}|\partial_{x}^{(\alpha)}\mathcal{N}(x)-\partial_{x}^{(\alpha)}\phi(x )|<\epsilon\]
for \(\alpha=0\). By contrast, \(\sin\in\mathcal{C}^{\infty}\), so the equivalent statement is true for any \(|\alpha|<\infty\).
Evaluating the gradient of a SIREN scales quadratically in the number of layers of the SIREN [8]. So, fitting higher-order derivatives is no easy task. However, for simple differential equations like Poisson's equation, it is computationally feasible, and the authors of [8] provide experimental results that show SIRENs are excellent at modelling first and second-order derivatives of complicated signals, as well as the high-frequency signals themselves.
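A single SIREN layer, with the frequency scaling \(\omega_{0}\) and the weight initialisation scheme described in [8], can be sketched in PyTorch as follows; the default \(\omega_{0}=30\) follows the paper, and the class name is ours.

```python
import numpy as np
import torch

class SineLayer(torch.nn.Module):
    # One SIREN layer: y = sin(omega_0 * (W x + b)), using the weight
    # initialisation proposed in [8].
    def __init__(self, d_in, d_out, omega_0=30.0, first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = torch.nn.Linear(d_in, d_out)
        with torch.no_grad():
            bound = 1.0 / d_in if first else np.sqrt(6.0 / d_in) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))
```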
### Random Fourier features
Recent works [20, 21] have described a spectral bias inherent to neural networks learning functions. They prioritise learning the low-frequency modes of the functions and thus, high frequencies are captured much later in the training procedure.
In many ways, this is a key reason behind the immense success of neural networks. Often, they are over-parameterised, i.e. the number of parameters far exceeds the number of training samples yet, counter-intuitively, they still show remarkable capacity to generalise well [22]. Spectral bias may explain part of this phenomenon because it suggests, if there is a way to fit data effectively with only low frequencies, then a neural network will do just this, without needing to resort to high frequencies that overfit the data.
However, this also means that neural networks struggle to learn high frequency functions. Theoretical results in [23] show that a one-dimensional function of pure frequency \(\omega\), e.g. \(\cos(\omega x)\), is learned in time that scales with \(\omega^{2}\). This is ratified experimentally.
A 2020 paper [9] publishes results on the use of a Fourier feature mapping to effectively overcome this spectral bias, and allow multilayer perceptrons (MLPs) to learn high frequency functions in low-dimensional domains. The authors motivate such work with neural tangent kernel (NTK) theory. NTKs have been shown to model the behaviour of MLPs in the infinite-width limit during training [24]. We do not describe them in detail here, but give a summary of the main idea behind Fourier feature mapping. For two different inputs \(\mathbf{x},\mathbf{x}^{\prime}\) to the MLP, the corresponding NTK can be given by
\[NTK(\mathbf{x},\mathbf{x}^{\prime})=h(\mathbf{x}^{T}\mathbf{x}^{\prime})\]
where \(h\) is some scalar function [9].
The mapping
\[\gamma(\mathbf{x})=[\cos(2\pi\mathbf{B}\mathbf{x}),\sin(2\pi\mathbf{B}\mathbf{x })]^{T} \tag{4}\]
is a Gaussian random Fourier feature mapping for \(\mathbf{x}\in\mathbb{R}^{d}\), where each entry in \(\mathbf{B}\in\mathbb{R}^{n\times d}\) is sampled from a normal distribution with mean zero and variance \(\Sigma^{2}\). Therefore,
\[NTK(\gamma(\mathbf{x}),\gamma(\mathbf{x}^{\prime})) =h(\gamma(\mathbf{x})^{T}\gamma(\mathbf{x}^{\prime}))\] \[=h\left(\cos(2\pi\mathbf{B}\mathbf{x})\cos(2\pi\mathbf{B}\mathbf{ x}^{\prime})+\sin(2\pi\mathbf{B}\mathbf{x})\sin(2\pi\mathbf{B}\mathbf{x}^{ \prime})\right)\] \[=h(\cos(2\pi\mathbf{B}(\mathbf{x}-\mathbf{x}^{\prime})))\]
Crucially, this defines a kernel function with width controlled by the random matrix \(\mathbf{B}\). Kernel functions are used to fit data, and their width directly influences whether they overfit (with high frequencies) or underfit (with low frequencies). So, given that this function characterises the evolution of the MLP during training, we can tune the network towards learning particular frequencies by simply changing \(\Sigma\):
* A small \(\Sigma\) gives a wide kernel that will underfit a high-frequency function.
* A large \(\Sigma\) gives a narrow kernel that will overfit a low-frequency function.
In our framework, \(\Sigma\) is now just another hyperparameter, and we can find the optimal \(\Sigma\) through a simple sweep of values. We choose the value that gives the fastest convergence. The authors of [9] also advise that \(n\), the number of Fourier features, improves performance with size. Of course, there is a computational cost associated with increasing \(n\), so it is best taken 'as small as gives good enough results.'
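A direct sketch of the mapping (4), with \(\mathbf{B}\) drawn entry-wise from \(N(0,\Sigma^{2})\), is given below; the dimensions and the value of \(\Sigma\) are illustrative choices for the sweep described above.

```python
import torch

def fourier_features(x, B):
    # Gaussian random Fourier feature mapping gamma(x) from (4).
    proj = 2 * torch.pi * x @ B.T          # shape (batch, n)
    return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

d, n, sigma = 2, 256, 10.0                  # sigma: tunable bandwidth
B = sigma * torch.randn(n, d)               # entries ~ N(0, sigma^2)
gamma = fourier_features(torch.rand(8, d), B)   # (8, 2n), fed to the MLP
```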
### Error correction
We introduce the main work of this paper; the novel technique error correction [10, 11, 12, 13, 14] is designed to increase the efficacy of any neural network differential equation solver. This method is general and can be applied to all differential equations, in combination with any such similar strategies, such as Koopman boosting [25] or those presented above. Much of the work here was proposed in [10] and formalised in [11], which the reader should refer to as supplement.
When dealing with neural networks, we bank on the idea that a'small enough' loss implies a 'good enough' accuracy. Now, in many scenarios, this ideology fails because zero loss would represent drastic overfitting. Conveniently, this does not concern us as we want our network to fit the (training) data as accurately as possible. Still, the original problem remains; how can we know how close we are to the true solution \(\phi\)?
It turns out analysis and estimation of the unknown error between \(\phi\) and \(\mathcal{N}\) is possible. Indeed, in [12], the author shows how you can obtain specific bounds on this error, without knowledge of \(\phi\). In this section, we provide a correction method (based on this error) to enhance neural network differential equation solvers, by overcoming performance saturation when the network settles around a local minimum of the loss function. Here, we also make use of differential equation operators which send true solutions to zero. Consider this for Poisson's equation (2):
\[\mathbf{F}[\cdot]=\nabla^{2}[\cdot]-f\]
Define \(\phi_{\epsilon}=\phi-\mathcal{N}\) as the error between the unknown solution \(\phi\) and a fixed approximation \(\mathcal{N}\). Clearly,
\[\mathbf{F}[\mathcal{N}] =\nabla^{2}\mathcal{N}-f\] \[=\nabla^{2}[\phi-\phi_{\epsilon}]-f\] \[=\nabla^{2}\phi-f-\nabla^{2}\phi_{\epsilon}\] \[=-\nabla^{2}\phi_{\epsilon}\]
since \(\mathbf{F}[\phi]=\nabla^{2}\phi-f=0\). Thus, \(\mathbf{F}[\mathcal{N}]+\nabla^{2}\phi_{\epsilon}=0\) and, given that \(\mathbf{F}[\mathcal{N}]\) is completely independent to \(\phi_{\epsilon}\), we have defined a new Poisson's equation. Our general strategy now will be to train a neural network \(\mathcal{N}_{\epsilon}\) to approximate \(\phi_{\epsilon}\) through the conditions of this new differential equation. Then, \(\mathcal{N}+\mathcal{N}_{\epsilon}\approx\mathcal{N}+\phi_{\epsilon}=\phi\).
Before we formalise and evaluate this method, note that it applies also to differential equations with non-linear terms. Consider the Poisson-Boltzmann equation with Dirichlet boundary conditions:
\[\begin{cases}\nabla^{2}\phi+\sinh\phi&=f\text{ in }\Omega\\ \phi&=g\text{ on }\partial\Omega\end{cases}\]
Define the operator
\[\mathbf{G}[\cdot]=\nabla^{2}[\cdot]+\sinh[\cdot]-f\]
and, once again, have \(\phi_{\epsilon}=\phi-\mathcal{N}\). Then,
\[\mathbf{G}[\mathcal{N}] =\nabla^{2}\mathcal{N}+\sinh\mathcal{N}-f\] \[=\nabla^{2}[\phi-\phi_{\epsilon}]+\sinh\mathcal{N}+\sinh\phi- \sinh\phi-f\] \[=\nabla^{2}\phi+\sinh\phi-f-\nabla^{2}\phi_{\epsilon}+\sinh \mathcal{N}-\sinh\phi\] \[=-\nabla^{2}\phi_{\epsilon}+\sinh\mathcal{N}-\sinh(\mathcal{N}+ \phi_{\epsilon})\]
since \(\mathbf{G}[\phi]=\nabla^{2}\phi+\sinh\phi-f=0\). A clever trick of adding and subtracting \(\sinh\phi\) allows the \(\mathbf{G}[\phi]\) term to be removed from the equation. In the last line, we simply seek to keep the equation explicit in \(\mathcal{N}\) and \(\phi_{\epsilon}\).
#### Theoretical results
Now, we formalise this idea of error correction, adapting the approach from [11]. Consider a differential equation over \(\Omega\) in operator form:
\[\mathbf{F_{0}}[\phi]=\mathbf{A}[\phi]+\mathbf{B}[\phi]+\mathbf{C}=0 \tag{5}\]
where \(\mathbf{A}\) represents the terms that depend linearly on \(\phi\), \(\mathbf{B}\) represents those that depend non-linearly on \(\phi\), and \(\mathbf{C}\) is independent of \(\phi\). The solution \(\phi\) may also admit some constraints on the boundary \(\partial\Omega\) but, for now, these are not of interest. Assume also that \(\phi\) is unique.
We first prove a result that follows from the inverse function theorem [26]:
**Theorem 5**.: (Inverse function theorem). _Suppose that \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is continuously differentiable in some open set containing \(x^{*}\), and suppose moreover that the Jacobian \(DF(x^{*})\) is invertible. Then there exists open sets \(U,V\subset\mathbb{R}^{n}\) with \(x^{*}\in U\) and \(F(x^{*})\in V\) such that \(F:U\to V\) is a bijection, and \(F^{-1}:V\to U\) is continuously differentiable for all \(y\in V\) with_
\[DF^{-1}(y)=\left[DF(F^{-1}(y))\right]^{-1}\]
**Corollary 6**.: _Suppose that \(\mathbf{F_{0}}:\mathbb{R}\rightarrow\mathbb{R}\) in (5) is continuously differentiable in some open set containing \(\phi^{*}\), that \(D\mathbf{F_{0}}[\phi^{*}]\) is invertible, and \(\mathbf{F_{0}}[\phi^{*}]=0\). Then, there is a neighbourhood of \(0\) small enough such that_
\[\mathbf{F_{0}}[\mathcal{N}]\to 0\implies\mathcal{N}\rightarrow\phi^{*}\]
Proof.: By Theorem 5, choose neighbourhoods \(U,V\subset\mathbb{R}\) with \(\phi^{*}\in U,0\in V\) such that \(\mathbf{F_{0}}:U\to V\) is a bijection and \(\mathbf{F_{0}}^{-1}:V\to U\) is continuous differentiable for all \(y\in V\). For \(\mathcal{N}\in U\), the continuity of \(\mathbf{F_{0}}^{-1}\) implies that
\[\mathbf{F_{0}}[\mathcal{N}]\to 0\implies\mathcal{N}\rightarrow\phi^{*}\]
Thus, assuming we can minimise the loss function for some neural network \(\mathcal{N}\) such that \(\mathbf{F_{0}}[\mathcal{N}]\to 0\) at all points, then \(\mathcal{N}\rightarrow\phi\) at all points. So, let us train such a network \(\mathcal{N}_{0}\) to approximate \(\phi\) via (5). Define also \(\phi_{1}=\phi-\mathcal{N}_{0}\).
\[\mathbf{F_{0}}[\mathcal{N}_{0}] =\mathbf{A}[\mathcal{N}_{0}]+\mathbf{B}[\mathcal{N}_{0}]+\mathbf{C}\] \[=\mathbf{A}[\phi-\phi_{1}]+\mathbf{B}[\mathcal{N}_{0}]+\mathbf{B }[\phi]-\mathbf{B}[\phi]+\mathbf{C}\] \[=\mathbf{A}[\phi]+\mathbf{B}[\phi]+\mathbf{C}-\mathbf{A}[\phi_{1} ]+\mathbf{B}[\mathcal{N}_{0}]-\mathbf{B}[\phi]\] \[=-\mathbf{A}[\phi_{1}]+\mathbf{B}[\mathcal{N}_{0}]-\mathbf{B}[ \mathcal{N}_{0}+\phi_{1}]\]
since \(\mathbf{F_{0}}[\phi]=\mathbf{A}[\phi]+\mathbf{B}[\phi]+\mathbf{C}=0\) by definition. We have defined a new differential equation in operator form:
\[\mathbf{F_{1}}[\phi_{1}]=\mathbf{F_{0}}[\mathcal{N}_{0}]+\mathbf{A}[\phi_{1}]- \mathbf{B}[\mathcal{N}_{0}]+\mathbf{B}[\mathcal{N}_{0}+\phi_{1}]=0\]
\(\phi_{1}\) solves the above equation exactly and, given the uniqueness of \(\phi\), is also unique. Now, train some other neural network \(\mathcal{N}_{1}\) to approximate \(\phi_{1}\), and define \(\phi_{2}=\phi_{1}-\mathcal{N}_{1}\). Once again,
\[\mathbf{F_{1}}[\mathcal{N}_{1}] =\mathbf{F_{0}}[\mathcal{N}_{0}]+\mathbf{A}[\mathcal{N}_{1}]-\mathbf{B}[\mathcal{N}_{0}]+\mathbf{B}[\mathcal{N}_{0}+\mathcal{N}_{1}]\] \[=\mathbf{F_{0}}[\mathcal{N}_{0}]+\mathbf{A}[\phi_{1}-\phi_{2}]-\mathbf{B}[\mathcal{N}_{0}]+\mathbf{B}[\mathcal{N}_{0}+\mathcal{N}_{1}]+\mathbf{B}[\mathcal{N}_{0}+\phi_{1}]-\mathbf{B}[\mathcal{N}_{0}+\phi_{1}]\] \[=\mathbf{F_{0}}[\mathcal{N}_{0}]+\mathbf{A}[\phi_{1}]-\mathbf{B}[\mathcal{N}_{0}]+\mathbf{B}[\mathcal{N}_{0}+\phi_{1}]-\mathbf{A}[\phi_{2}]+\mathbf{B}[\mathcal{N}_{0}+\mathcal{N}_{1}]-\mathbf{B}[\mathcal{N}_{0}+\phi_{1}]\] \[=-\mathbf{A}[\phi_{2}]+\mathbf{B}[\mathcal{N}_{0}+\mathcal{N}_{1}]-\mathbf{B}[\mathcal{N}_{0}+\phi_{1}]\] \[=-\mathbf{A}[\phi_{2}]+\mathbf{B}[\mathcal{N}_{0}+\mathcal{N}_{1}]-\mathbf{B}[\mathcal{N}_{0}+\mathcal{N}_{1}+\phi_{2}]\]
since \(\mathbf{F_{1}}[\phi_{1}]=\mathbf{F_{0}}[\mathcal{N}_{0}]+\mathbf{A}[\phi_{1}]- \mathbf{B}[\mathcal{N}_{0}]+\mathbf{B}[\mathcal{N}_{0}+\phi_{1}]=0\), and \(\phi_{1}=\mathcal{N}_{1}+\phi_{2}\). We define a further differential equation in operator form:
\[\mathbf{F_{2}}[\phi_{2}]=\mathbf{F_{1}}[\mathcal{N}_{1}]+\mathbf{A}[\phi_{2}]- \mathbf{B}[\mathcal{N}_{0}+\mathcal{N}_{1}]+\mathbf{B}[\mathcal{N}_{0}+\mathcal{ N}_{1}+\phi_{2}]=0\]
Now, repeat the process. This algorithm can continue indefinitely, and we summarise the steps below. The idea is that our error-corrected approximation \(\mathcal{N}_{0}+\mathcal{N}_{1}+\mathcal{N}_{2}+...\) will be more accurate than the once-trained approximation \(\mathcal{N}_{0}\). This strategy is not unseen in the field of numerical methods for differential equations; we simply apply it here to neural network solvers.
Let us define a recursive differential equation for the \(k^{\text{th}}\) error correction. At this point, we have trained the initial network \(\mathcal{N}_{0}\), and also a further \(k-1\) residual networks \(\mathcal{N}_{1},\mathcal{N}_{2},...,\mathcal{N}_{k-1}\). Our current error-corrected
approximation is \(\mathcal{N}^{(k-1)}=\mathcal{N}_{0}+\mathcal{N}_{1}+\mathcal{N}_{2}+...+\mathcal{ N}_{k-1}\). Define \(\phi_{k}=\phi_{k-1}-\mathcal{N}_{k-1}\). Now, train a new network \(\mathcal{N}_{k}\) to approximate \(\phi_{k}\) through the following differential equation:
\[\mathbf{F}_{\mathbf{k}}[\phi_{k}]=\mathbf{F}_{\mathbf{k-1}}[\mathcal{N}_{k-1}]+ \mathbf{A}[\phi_{k}]-\mathbf{B}[\mathcal{N}^{(k-1)}]+\mathbf{B}[\mathcal{N}^{( k-1)}+\phi_{k}]=0 \tag{6}\]
**Remark**.: \(\mathbf{F}_{\mathbf{k}}[\mathcal{N}_{k}]\equiv\mathbf{F}_{\mathbf{0}}[ \mathcal{N}^{(k)}]\)_._
**Corollary 7**.: _Suppose that \(\mathbf{F}_{\mathbf{k}}:\mathbb{R}\rightarrow\mathbb{R}\) in (6) is continuously differentiable in some open set containing \(\phi_{k}^{*}\), that \(D\mathbf{F}_{\mathbf{k}}[\phi_{k}^{*}]\) is invertible, and \(\mathbf{F}_{\mathbf{k}}[\phi_{k}^{*}]=0\). Then, there is a neighbourhood of \(0\) small enough such that_
\[\mathbf{F}_{\mathbf{k}}[\mathcal{N}_{k}]\to 0\implies\mathcal{N}_{k} \rightarrow\phi_{k}^{*}\]
_Furthermore,_
\[|\mathcal{N}^{(k)}-\phi|=\mathcal{O}\left(|\mathbf{F}_{\mathbf{k}}[\mathcal{N }_{k}]|\right)\]
Proof.: The first result follows analogously from the inverse function theorem as in Corollary 6.
By Theorem 5, \(\mathbf{F_{\mathbf{0}}}^{-1}\) is continuously differentiable on some open set around \(0\). Thus, it is also locally Lipschitz continuous around \(0\), meaning there exists some constant \(\alpha\geq 0\) such that
\[|\mathcal{N}^{(k)}-\phi| =|\mathbf{F_{\mathbf{0}}}^{-1}[\mathbf{F_{\mathbf{0}}}[\mathcal{N}^{(k)}]]-\mathbf{F_{\mathbf{0}}}^{-1}[\mathbf{F_{\mathbf{0}}}[\phi]]|\] \[\leq\alpha|\mathbf{F_{\mathbf{0}}}[\mathcal{N}^{(k)}]-\mathbf{F_{\mathbf{0}}}[\phi]|\] \[\leq\alpha|\mathbf{F_{\mathbf{0}}}[\mathcal{N}^{(k)}]|\] \[\leq\alpha|\mathbf{F_{\mathbf{k}}}[\mathcal{N}_{k}]|\]
since \(\mathbf{F_{\mathbf{0}}}[\phi]=0\) and \(\mathbf{F_{\mathbf{k}}}[\mathcal{N}_{k}]\equiv\mathbf{F_{\mathbf{0}}}[\mathcal{N}^{(k)}]\) by the Remark above; therefore,
\[|\mathcal{N}^{(k)}-\phi|=\mathcal{O}\left(|\mathbf{F}_{\mathbf{k}}[\mathcal{N }_{k}]|\right)\]
Finally, given Dirichlet boundary conditions \(\phi=g\) on \(\partial\Omega\), any \(\phi_{k}\) is known exactly over \(\partial\Omega\) since \(\phi_{k}=\phi-\mathcal{N}^{(k-1)}\). Thus, the loss function for the \(k^{\text{th}}\) error correction can be defined as
\[\mathcal{L}_{k}(\theta^{(k)})=\frac{1}{M}\sum_{i=1}^{M}\left( \mathbf{F}_{\mathbf{k}}\left[\mathcal{N}_{k}\left(\mathbf{x}_{i};\theta^{(k) }\right)\right]\right)^{2}+\frac{1}{N}\sum_{j=1}^{N}\left(\mathcal{N}_{k} \left(\mathbf{y}_{j};\theta^{(k)}\right)-\phi_{k}(\mathbf{y}_{j})\right)^{2} \tag{7}\]
for some randomly sampled points \(\{\mathbf{x}_{i}\}_{i=1}^{M}\) from \(\Omega\) and \(\{\mathbf{y}_{j}\}_{j=1}^{N}\) from \(\partial\Omega\).
**Algorithm**
The error correction algorithm to order \(K\) proceeds as follows:
1. Train a neural network \(\mathcal{N}_{0}\) to satisfy the conditions of a differential equation given by (5) and constraint conditions. Once the loss has converged, stop training and freeze the parameters of \(\mathcal{N}_{0}\).
2. Initiate and train new neural networks \(\{\mathcal{N}_{k}\}_{k=1}^{K}\) in sequence to satisfy differential equations given by (6), via loss functions (7). Once the loss has converged, stop training, freeze the parameters of \(\mathcal{N}_{k}\), and proceed with \(\mathcal{N}_{k+1}\).
3. The solution to (5) is approximated by \(\mathcal{N}:=\mathcal{N}^{(K)}=\sum\limits_{k=0}^{K}\mathcal{N}_{k}\).
This is given above for Dirichlet boundary conditions, but works generally if you incorporate the constraint conditions into all loss functions.
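To make the recursion concrete, the following is a minimal PyTorch-style sketch of the driver loop; `make_net`, `train`, and `F0` are hypothetical stand-ins for the network constructor, the optimisation routine, and an autograd evaluation of \(\mathbf{F_{0}}\). It exploits the Remark that \(\mathbf{F_{k}}[\mathcal{N}_{k}]\equiv\mathbf{F_{0}}[\mathcal{N}^{(k)}]\), so each correction is trained by applying the original operator to the running sum with all earlier networks frozen:

```python
import torch

def error_corrected_solve(make_net, train, F0, K):
    """Driver for the algorithm above (a sketch, not the experimental code).

    F0(u, x) evaluates the original operator F_0[u] at points x via autograd.
    Since F_k[N_k] = F_0[N^(k)], the k-th correction can be trained by applying
    F_0 to the running sum, with earlier networks frozen.
    """
    nets = []
    for k in range(K + 1):
        net = make_net()

        def residual(x, net=net):
            # running sum N^(k) = N_0 + ... + N_{k-1} + current trainee
            u = lambda t: sum(n(t) for n in nets) + net(t)
            return F0(u, x)

        train(net, residual)             # minimise loss (7) until convergence
        for p in net.parameters():
            p.requires_grad_(False)      # freeze before the next correction
        nets.append(net)

    return lambda x: sum(n(x) for n in nets)
```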
#### Poisson's equation
For Poisson's equation (2), \(\mathbf{B}\equiv 0\) since the Laplacian is linear, so we can write the \(k^{\text{th}}\) differential equation as
\[\mathbf{F_{k}}[\phi_{k}]=\mathbf{F_{k-1}}[\mathcal{N}_{k-1}]+\nabla^{2}[\phi_ {k}]=0\]
which is a Poisson's equation with our usual \(f=-\mathbf{F_{k-1}}[\mathcal{N}_{k-1}]\). Thus, we can apply Theorem 4 to guarantee the existence of neural networks that approximate each \(\phi_{k}\) to arbitrary accuracy. In the next section, we provide evidence that, if we can train just two or three of these networks to reasonably approximate their true solutions, our error-corrected approximation will be a more accurate numerical solution to the original differential equation.
## 4 Results
We present results for a variety of different Poisson's equations (2). Our choice of Poisson's equation is motivated by its immense application in many areas of theoretical physics, including electrostatics and fluid dynamics. It is also the simplest second-order, linear PDE, making for a concise yet insightful demonstration of the power of error correction in neural network differential equation solvers.
To achieve this, we choose the function \(f\) on the RHS to force a particular solution \(\phi\) that we want to capture. For example, \(f(x)=1\) would force the solution \(\phi(x)=\frac{1}{2}x^{2}+c_{1}x+c_{0}\). In general, \(f\) can be arbitrary, in particular a function for which (2) admits no closed-form solution; we construct it this way purely for ease of visualising \(\phi\).
Knowing the ground truth solution \(\phi\) in closed form also allows us to compute the relative error
\[\frac{\sum\limits_{\mathbf{x}\in S}\left(\phi(\mathbf{x})-\mathcal{N}(\mathbf{ x})\right)^{2}}{\sum\limits_{\mathbf{x}\in S}\phi(\mathbf{x})^{2}}\]
at each epoch (iteration) of the training procedure, so we have an understanding of the success of our solver. It is important to note that, while we know \(\phi\) and the relative error associated with our approximation, the neural network does not, and is solely trained via the loss function.
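In code, this monitor is a short function (a sketch; `phi` denotes the closed-form solution and `S` the evaluation grid, both assumed given):

```python
import torch

def relative_error(net, phi, S):
    """Relative squared error of the display above over sample points S."""
    with torch.no_grad():
        return ((phi(S) - net(S)) ** 2).sum() / (phi(S) ** 2).sum()
```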
All neural networks used are SIRENs with 5 hidden layers and 128 hidden units per layer. They are trained on batches of 256, using the stochastic gradient descent variant Adam [27], and learning rates are manually tuned for each case of Poisson's equation. All experiments are run on a 1.8 GHz Dual-Core Intel Core i5 CPU.
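For reference, a SIREN of this shape can be built as follows; this is our own minimal PyTorch sketch, using the layer initialisation and the default frequency \(\omega_{0}=30\) proposed in [8], and is not the exact experimental code:

```python
import math
import torch
from torch import nn

class SineLayer(nn.Module):
    """Linear layer followed by sin(omega_0 * x), initialised as in [8]."""
    def __init__(self, in_f, out_f, omega_0=30.0, first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_f, out_f)
        with torch.no_grad():
            # first layer: U(-1/n, 1/n); hidden layers: U(-sqrt(6/n)/w0, sqrt(6/n)/w0)
            bound = 1.0 / in_f if first else math.sqrt(6.0 / in_f) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

def make_siren(d_in, width=128, depth=5, d_out=1):
    layers = [SineLayer(d_in, width, first=True)]
    layers += [SineLayer(width, width) for _ in range(depth - 1)]
    layers += [nn.Linear(width, d_out)]     # linear output layer
    return nn.Sequential(*layers)
```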
### 3D Poisson's Equation
Figure 1 shows the solution to Poisson's equation:
\[\begin{cases}\nabla^{2}\phi&=-75\sin(5x)\sin(5y)\sin(5z)\text{ in }\Omega=[-\pi,\pi] \times[-\pi,\pi]\times[-\pi,\pi]\\ \phi&=0\text{ on }\partial\Omega\end{cases} \tag{8}\]
Figure 2 shows our numerical solutions, with \(\mathcal{N}^{(0)}=\mathcal{N}_{0}\) on the left, \(\mathcal{N}^{(1)}=\mathcal{N}_{0}+\mathcal{N}_{1}\) in the centre, and \(\mathcal{N}^{(2)}=\mathcal{N}_{0}+\mathcal{N}_{1}+\mathcal{N}_{2}\) on the right. We refer to these as Error Correction 0, 1 and 2, respectively.
Visually, all error corrections seem to capture the solution well. Furthermore, each correction decreases the relative error (printed at the bottom of Figure 2). Error Correction 1 does so significantly, while the improvement in accuracy from Error Correction 2 is marginal.
This is further captured in Figure 3, which plots the loss and relative error per epoch. After finding a local minimum in Error Correction 0, the loss fluctuates erratically until we initialise Error Correction 1. The improvement is substantial, and it is mirrored in the trends in relative error.
Figure 3: Per-epoch loss and relative errors for numerical solutions to (8)
### 2D Poisson's Equation
Figure 4 shows the solution to Poisson's equation:
\[\begin{cases}\nabla^{2}\phi&=-800\sin(5x)\sin(5y)\text{ in }\Omega=[-\pi,\pi] \times[-\pi,\pi]\\ \phi&=0\text{ on }\partial\Omega\end{cases} \tag{9}\]
Due to the highly oscillatory nature of the solution, a neural network will struggle to accurately capture its structure. This is demonstrated in Figure 5, where the approximation cannot account for so many peaks and troughs in the solution.
To obtain a realistic solution, we apply a Gaussian random Fourier feature mapping to the input, before passing it through the network. After a simple sweep of values, we take \(\Sigma=1\) and \(n=256\), as defined in (4). Figures 6 and 7 show similar trends to those in the previous experiment.
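Concretely, the mapping replaces the raw input \(\mathbf{x}\) by \(\gamma(\mathbf{x})=[\cos(2\pi B\mathbf{x}),\sin(2\pi B\mathbf{x})]\), where the rows of \(B\) are sampled once from \(\mathcal{N}(0,\Sigma^{2})\) and then frozen. The following sketch is our own rendering of this standard construction from [9], not necessarily the exact form of (4):

```python
import torch

class FourierFeatures(torch.nn.Module):
    """gamma(x) = [cos(2*pi*B x), sin(2*pi*B x)] with a fixed B ~ N(0, Sigma^2)."""
    def __init__(self, d_in, n=256, sigma=1.0):
        super().__init__()
        self.register_buffer("B", sigma * torch.randn(n, d_in))  # sampled once

    def forward(self, x):
        proj = 2.0 * torch.pi * x @ self.B.T
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)  # dim = 2n
```

With \(\Sigma=1\) and \(n=256\), the network's input dimension becomes \(2n=512\).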
Figure 5: Naive attempt at a numerical solution to (9)
Figure 7: Per-epoch loss and relative errors for numerical solutions to (9)
Figure 8 shows the solution to Poisson's equation:
\[\begin{cases}\nabla^{2}\phi&=(100y^{2}-100\pi^{2}-2)\sin(10x)\text{ in }\Omega=[-\pi,\pi]\times[-\pi,\pi]\\ \phi&=0\text{ on }\partial\Omega\end{cases} \tag{10}\]
In Figure 9, we train a neural network \(\mathcal{N}^{(0)}\) to approximate the solution to (10) for \(2^{11}\) epochs, but we save its parameter states after \(2^{10}\) epochs. These define a new network which we call \(\mathcal{N}_{0}\). The fully-trained \(\mathcal{N}^{(0)}\) achieves a reasonable relative error. Roughness is clearly visible in the plot.
In Figure 10, we plot the half-trained \(\mathcal{N}_{0}\) on the left. As expected, it has not yet reached the accuracy of \(\mathcal{N}^{(0)}\). However, we also initiate an error correction \(\mathcal{N}_{1}\), of \(\mathcal{N}_{0}\), that trains for another \(2^{10}\) epochs. Thus, we produce an approximation \(\mathcal{N}^{(1)}=\mathcal{N}_{0}+\mathcal{N}_{1}\) that has also trained for a total of \(2^{11}\) epochs. This is significantly more accurate than \(\mathcal{N}^{(0)}\), and the plot is visibly smoother. Figures 9 and 10 clearly illustrate the immediate benefit of a single error correction.
Figure 10: Numerical solutions \(\mathcal{N}_{0}\) and \(\mathcal{N}^{(1)}\) to (10), trained for a total of \(2^{11}\) epochs
Figure 9: Numerical solution \(\mathcal{N}^{(0)}\) to (10), trained for \(2^{11}\) epochs
Discussion
Our results do not endorse error correction as a tool to marginally reduce error across tens of corrections. Instead, they suggest training a network for half the allotted time, and devoting the other half to a single error correction. This can yield significantly more accurate results.
Error correction is not without cost, however. In our implementation, we train correction networks on newly sampled points. This means that to obtain \(\mathbf{F_{k}}[\mathcal{N}_{k}]\), we must first make \(k\) forward passes of the new data through \(\mathcal{N}_{0},\mathcal{N}_{1},...,\mathcal{N}_{k-1}\) and differentiate these to compute \(\mathbf{F_{k-1}}[\mathcal{N}_{k-1}]\). The time complexity of producing a \(k^{\text{th}}\) order approximation \(\mathcal{N}^{(k)}\), assuming the number of epochs \(E\) and batch size \(B\) per correction, and optimisation costs, are kept constant across all corrections, is \(\mathcal{O}\left(EB(k+1)^{2}\right)\). If we instead pass identical batches through each correction network, storing the \(\mathbf{F_{k-1}}[\mathcal{N}_{k-1}]\) in memory, we can achieve a time complexity of \(\mathcal{O}(EB(k+1))\); however, the space complexity would be substantially increased.
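A minimal sketch of this cheaper fixed-batch variant for Poisson's equation follows (our own illustration; `frozen_nets`, the source callable `f`, and the batch size are hypothetical placeholders):

```python
import torch

def laplacian(net, x):
    """Laplacian of net at x via autograd (helper for the sketch below)."""
    x = x.clone().requires_grad_(True)
    (g,) = torch.autograd.grad(net(x).sum(), x, create_graph=True)
    lap = 0.0
    for i in range(x.shape[1]):
        (h,) = torch.autograd.grad(g[:, i].sum(), x, create_graph=True)
        lap = lap + h[:, i:i + 1]
    return lap

# Fixed-batch variant: sample the collocation points once, precompute
# F_{k-1}[N_{k-1}] = lap(N^(k-1)) - f there, and reuse it every epoch.
x_fixed = 2.0 * torch.rand(1024, 1) - 1.0                 # hypothetical batch
f_km1 = (sum(laplacian(n, x_fixed) for n in frozen_nets)
         - f(x_fixed)).detach()                           # computed once

def loss_k(net_k):
    # F_k[net_k] = F_{k-1}[N_{k-1}] + lap(net_k): no passes through frozen nets
    return ((f_km1 + laplacian(net_k, x_fixed)) ** 2).mean()
```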
## 6 Further Work
Over time, this study of neural network differential equation solvers naturally lent itself to hot-off-the-press topics in machine learning like sinusoidal representation networks [8] and random Fourier features [9], for the simple reason that such concepts are inextricably linked through their applications. Outside of differential equations, neural networks as continuous parameterisations of discrete signals have immense potential in 3D shape representation, but also in image, video and audio representation and reconstruction. These problems may utilise neural networks as function approximators or, as we did, derivative approximators. There is no reason to suggest why the ideas of error correction cannot be employed here, and every reason to further explore the interplay of these techniques when applied to problems in computer vision.
|
2308.16910 | Robust Variational Physics-Informed Neural Networks | We introduce a Robust version of the Variational Physics-Informed Neural
Networks method (RVPINNs). As in VPINNs, we define the quadratic loss
functional in terms of a Petrov-Galerkin-type variational formulation of the
PDE problem: the trial space is a (Deep) Neural Network (DNN) manifold, while
the test space is a finite-dimensional vector space. Whereas the VPINN's loss
depends upon the selected basis functions of a given test space, herein, we
minimize a loss based on the discrete dual norm of the residual. The main
advantage of such a loss definition is that it provides a reliable and
efficient estimator of the true error in the energy norm under the assumption
of the existence of a local Fortin operator. We test the performance and
robustness of our algorithm in several advection-diffusion problems. These
numerical results perfectly align with our theoretical findings, showing that
our estimates are sharp. | Sergio Rojas, Paweł Maczuga, Judit Muñoz-Matute, David Pardo, Maciej Paszynski | 2023-08-31T17:59:44Z | http://arxiv.org/abs/2308.16910v3 | # Robust Variational Physics-Informed Neural Networks
###### Abstract
We introduce a Robust version of the Variational Physics-Informed Neural Networks (RVPINNs) to approximate the Partial Differential Equations (PDEs) solution. We start from a weak Petrov-Galerkin formulation of the problem, select a discrete test space, and define a quadratic loss functional as in VPINNs. Whereas in VPINNs the loss depends upon the selected basis functions of a given test space, herein we minimize a loss based on the residual in the discrete dual norm, which is independent of the test space's choice of test basis functions. We demonstrate that this loss is a reliable and efficient estimator of the true error in the energy norm. The proposed loss function requires computation of the Gram matrix inverse, similar to what occurs in traditional residual minimization methods. To validate our theoretical findings, we test the performance and robustness of our algorithm in several advection-dominated-diffusion problems in one spatial dimension. We conclude that RVPINNs is a robust method.
_Keywords_: Robustness, Variational Physics-Informed Neural Networks, Petrov-Galerkin formulation, Riesz representation, Minimum Residual principle, a posteriori error estimation.
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 Abstract framework
* 2.2 Neural Network framework
* 2.3 Variational Physics-Informed Neural Networks
* 2.4 An alternative definition of VPINNs
* 3 Robust Variational Physics-Informed Neural Networks
* 3.1 Orthonormal discrete basis and relation with other VPINNs
* 4 Error estimates for RVPINNs
* 4.1 A posteriori error estimates for RVPINNs in the sense of equivalence classes
* 4.2 Energy norm error estimates based on local semi-discrete inf-sup condition
* 5 Numerical examples
* 5.1 Diffusion-advection model problem
* 5.2 Discrete setting
* 5.3 A smooth diffusion problem
* 5.4 Delta source problem
* 5.5 Advection-dominated diffusion problem
* 6 Conclusions
* 7 Acknowledgments
## 1 Introduction
The remarkable success of Deep Learning (DL) algorithms across different scientific areas [22, 30, 21] in the last decade has recently led to explore the potential of this discipline to address classical problems in physics and mathematics. These problems include approximating the solutions of Partial Differential Equations (PDEs) employing (deep) Neural Networks (NN), which can accurately approximate continuous functions. The exponential growth of interest in these techniques started with the Physics Informed Neural Networks (PINNs) ([40]). This method incorporates the governing physical laws the PDE describes in the learning process. Then, the network is trained on a dataset that consists of a random selection of points in the physical domain and its boundary. PINNs have been successfully applied to solve a wide range of problems in scientific computing, including fluid mechanics [5, 35], wave propagation [41], or inverse problems [10, 36], among many others. However, the loss function in PINNs is given by the strong form of the PDE residual, and it is known that in some problems with low regularity of the data, the solution only makes sense in a variational form. Therefore, PINNs fails to provide accurate solutions in those cases [29].
A natural continuation to PINNs addressing the aforementioned limitation is the so-called Variational PINNs (VPINNs) [25]. Here, the authors introduce a variational formulation of the underlying PDE within a Petrov-Galerkin framework. The solution is then approximated by a (deep) NN, whereas the test functions belong to linear vector spaces. Finally, the authors define a variational loss function to minimize during the training process. However, this methodology was not as popular as its predecessor. The main reason might be that the approach is sensitive to the choice of the basis test functions. That is, given a discrete test space, the method's loss, stability, and robustness depend heavily upon the choice of the basis functions of the given test space. In [2], authors present an a posteriori error analysis for discretizing elliptic boundary-value problems with VPINNs employing piecewise polynomials for the test space. As the loss function in VPINNs is, in general, not robust with respect to the true error (for example, the loss function can tend to zero even if the true error does not), they provide an error estimator employing classical techniques in finite element analysis to obtain practical information on the quality of the approximation. They also study in [3] the importance of selecting appropriate quadrature rules and the degree of piecewise polynomial test functions to guarantee optimal convergence rates for VPINNs when performing mesh refinements. Finally, in [26], the same authors from VPINNs introduced _hp_-VPINNs. In the latter, they employ piecewise polynomials for testing and allow for _hp_-refinements via domain decomposition by selecting non-overlapping test functions. However, the aforementioned issue of selecting an appropriate loss function for each problem also remains in this approach.
The strategy we propose in this article overcomes the aforementioned limitation by following the core ideas introduced in Minimum Residual (MinRes) methods. The latter is a class of numerical methods for solving PDEs that guarantee stability. The spirit of MinRes methods is to minimize the dual norm of the residual of the PDE given in variational form. Many strategies have been developed based on this idea over the last five decades, including the families of Galerkin Least-Squares methods [4, 24, 23], First Order Least-squares [6, 7], residual minimization methods on dual discontinuous Galerkin norms [8, 12, 11, 43, 31], isogeometric residual minimization methods [9, 33, 32, 34], and Discontinuous Petrov-Galerkin (DPG) methods [14, 15, 16, 37, 42]. As the residual operator lives in the dual of the test space and the dual norm is difficult to compute, the use of the Riesz representation theorem [38] is natural in this context. The latter maps elements of the dual space into elements on the test space. Therefore, in many of the aforementioned methods, instead of minimizing the residual in the dual norm, they minimize its Riesz representative (which is an element of the test space) in a given test norm.
In this work, we revisit the initial work on VPINNs from Kharazmi et al. in [25], and we introduce a Robust version of Variational Physics-Informed Neural Networks (RVPINNs). As in VPINNs, we consider a Petrov-Galerkin formulation of the PDE, approximate the solution employing NNs, and select a finite-dimensional test space. Our goal is to minimize the residual in the discrete dual norm. Therefore, we select a single test function that is the Riesz representation of the weak residual over the selected discrete test space. We select an inner product and span the Riesz representative on a discrete basis so the resulting loss function will include the inverse of the Gram matrix corresponding to the selected inner product. We prove that the norm of such test function is efficient and reliable, i.e., the norm of the true error is bounded from below and above (up to an oscillation
term) by the norm of the residual representative. Moreover, as we show in the numerical results, our strategy is insensitive (unlike VPINNs) to the choice of the discrete basis spanning the test space (assuming that the numerical integration and the inversion of the Gram matrix are sufficiently accurate). It is easy to see that minimizing the proposed loss functional is equivalent to minimizing the test norm of the Riesz representation of the residual, which is the idea behind classical MinRes methods. Summarizing, we provide a general mathematical framework to define robust loss functionals in VPINNs. Our strategy relies on two ideas: (a) the appropriate selection of the inner product in the test space, ensuring the stability of the variational formulation, and (b) the selection of a single test function, that is the Riesz representation of the weak residual.
In particular, if we select an orthonormal discrete basis with respect to the inner product in the test space, the Gram matrix becomes the identity, and we recover the original definition of the loss functional in VPINNs. The difference with classical VPINNs is that our strategy is robust and independent of the choice of the basis functions. Other works employ similar ideas closely related to minimum residual methods. In [45], the authors minimize the dual norm of the weak residual for symmetric variational formulations in \(H^{1}\) and rectangular domains via a spectral decomposition. In [44], the same authors extend this method to time-harmonic Maxwell equations in the context of symmetric \(H(curl)\)-formulations. Also, in [46], in the context of Petrov-Galerkin formulations, the authors minimize the dual norm of the residual based on the concept of optimal testing form [16]. They approximate both the solution of the variational problem and the corresponding optimal test functions employing NNs by solving two nested deep Ritz problems.
Finally, apart from the usual limitations of DL-based technologies (optimizer, integration, etc.), the largest bottleneck of RVPINNs at the moment is that in certain configurations, we need to invert the Gram matrix corresponding to the selected basis and inner product on the test space. We will explore how to optimize this step in the future. However, there are particular cases where the inversion of the Gram matrix is trivial (as in the spectral case) or easy to compute. For example, if we consider a strong variational formulation, the test inner product is \(L^{2}\), and selecting piecewise discontinuous polynomials for testing the Gram matrix becomes block diagonal. On the other hand, for parametric problems, the Gram matrix may remain the same for different values of the parameters. Therefore, the inversion can be done offline, being valid for a large class of problems. Parametric problems are essential to solve inverse problems [20], so RVPINNs could be of great interest in this area.
The article is organized as follows: In Section 2, we introduce the variational formulation of the model problem we consider in this article, the NN framework for approximation of the solution, and a brief overview of VPINNs. We also provide an alternative definition of VPINNs employing a single test function in the loss. Section 3 presents the methodology of RVPINNs and the connection with other methods based on VPINNs. Section 4 is devoted to the derivation of robust error estimates. We first prove that the residual representative is a reliable and efficient a posteriori error estimator of the true error in the sense of equivalent classes. Then, we demonstrate under the assumption of a local Fortin operator's existence that the true error is equivalent to the residual error estimator up to an oscillation term. In Section 5 we test our method in several 1D advection-diffusion problems, showing the robustness of the approach. Finally, Section 6 summarizes the conclusions and future research lines.
## 2 Preliminaries
### Abstract framework
Let \(U\) and \(V\) denote Hilbert spaces with norms \(\|\cdot\|_{U}\) and \(\|\cdot\|_{V}\). Assume that we are interested in obtaining an approximation of a PDE problem admitting a variational formulation of the form:
\[\text{Find}\;u\in U,\;\text{such that:}\;\;r(u,v):=l(v)-a(u,v)=0,\,\forall\,v \in V, \tag{1}\]
where \(l(\cdot)\in V^{\prime}\) is a bounded linear form, with \(V^{\prime}\) denoting the dual space of \(V\); \(a(\cdot,\cdot)\) is a bounded inf-sup stable bilinear form in \(U\times V\). That is, there exist constants \(\mu,\alpha>0\), respectively, such that:
\[a(w,v)\leq\mu\|w\|_{U}\|v\|_{V},\quad\forall\,w\in U,v\in V, \tag{2}\]
and
\[\sup_{0\neq v\in V}\frac{a(w,v)}{\|v\|_{V}}\geq\alpha\|w\|_{U},\quad\forall\,w \in U. \tag{3}\]
We also assume that, for all \(v\in V\), the operator \(a(\cdot,v)\in U^{\prime}\) satisfies:
\[a(w,v)=0,\,\forall\,w\in U\Longrightarrow v=0. \tag{4}\]
Problem (1) admits a unique solution by means of the Banach-Necas-Baboska Theorem, and the following a priori estimate holds (see [17, Theorem 1.1]):
\[\|u\|_{U}\leq\frac{1}{\alpha}\|l(\cdot)\|_{V^{\prime}}, \tag{5}\]
with
\[\|l(\cdot)\|_{V^{\prime}}:=\sup_{0\neq v\in V}\frac{l(v)}{\|v\|_{V}}. \tag{6}\]
Additionally, it is well-known that (3) and (4) imply that the following adjoint inf-sup condition holds (see, e.g., [13, Theorem 1]:
\[\sup_{0\neq w\in U}\frac{a(w,v)}{\|w\|_{U}}\geq\alpha\|v\|_{V},\quad\forall\,v\in V. \tag{7}\]
**Remark 1** (The weak residual).: _For a given \(w\in U\), we refer to \(r(w,\cdot)\in V^{\prime}\) (see (1)) as the weak residual._
### Neural Network framework
To numerically approximate (1), we consider a (Deep) Neural Network (DNN) function with input \(\mathbf{x}=(x_{1},\ldots,x_{d})\) and output \(u_{\theta}(\mathbf{x})\), where \(\theta\in\mathbb{R}^{S}\) represents the trainable parameters. For simplicity, in this work, we employ a simple fully-connected feedforward Neural Network structure composed of \(L\) layers. Each layer \(l\) in the Neural Network consists of a set of neurons, which compute a weighted sum of their inputs plus a bias followed by a nonlinear activation function in the first \(L-1\) layers and the identity function as the activation in the last layer. More precisely, the output of layer \(l\), with \(l=1,\ldots L-1\), is given by:
\[\mathbf{z}^{(l)}=\sigma(\mathbf{w}^{(l)}\mathbf{z}^{(l-1)}+\mathbf{b}^{(l)}), \tag{8}\]
where \(\sigma\) is a nonlinear activation function (e.g., the tanh activation function), \(\mathbf{w}^{(l)}\), \(\mathbf{b}^{(l)}\) are the weights and biases respectively associated with the layer \(l\), and \(\mathbf{z}^{(0)}=\mathbf{x}\) is the input to the first layer. The final layer \(L\) is given by:
\[u_{\theta}=\mathbf{w}^{(L)}\mathbf{z}^{(L-1)}+\mathbf{b}^{(L)}. \tag{9}\]
Using an optimization algorithm, the Neural Network weights and biases are learned from a training set by minimizing a loss functional of input-output pairs.
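For concreteness, a minimal PyTorch sketch of this architecture follows (our own illustration; the width and depth match the experiments of Section 5, under one reading of the layer count):

```python
import torch
from torch import nn

class FeedForward(nn.Module):
    """Fully connected network implementing eqs. (8)-(9): tanh activations on
    the first L-1 layers and the identity on the output layer."""
    def __init__(self, d_in=1, width=25, depth=5, d_out=1):
        super().__init__()
        dims = [d_in] + [width] * (depth - 1) + [d_out]
        self.layers = nn.ModuleList(
            [nn.Linear(m, n) for m, n in zip(dims[:-1], dims[1:])]
        )

    def forward(self, x):
        z = x
        for linear in self.layers[:-1]:
            z = torch.tanh(linear(z))   # eq. (8)
        return self.layers[-1](z)       # eq. (9): identity activation

u_theta = FeedForward()                 # theta = all weights and biases
```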
We denote by \(U_{NN}\) the manifold consisting of all possible realizations for a given DNN architecture belonging to \(U\), where \(U\) is defined in Section 2.1. That is,
\[U_{NN}:=\{u_{\theta},\,\forall\,\theta\in\mathbb{R}^{S}\}\cap U. \tag{10}\]
Finally, for a given discrete space \(V_{M}\subseteq V\), we consider the following Petrov-Galerkin discretization of (1):
\[\text{Find }u_{NN}\in U_{NN},\text{ such that: }r(u_{NN},v_{M})=0,\,\forall\,v_{M} \in V_{M}. \tag{11}\]
### Variational Physics-Informed Neural Networks
Variational Physics Informed Neural Networks (VPINNs) approximate the solution \(u\) of problem (1) by minimizing a loss functional defined in terms of the weak residual (see Remark 1), and a discrete space \(V_{M}\subseteq V\) with basis \(\{\varphi_{m}\}_{m=1}^{M}\). That is, it approximates \(u\) by solving the following minimization problem:
\[\text{Find }u_{\theta^{*}},\text{ such that }\theta^{*}=\underset{\theta\in \mathbb{R}^{S}}{\text{argmin}}\,\mathcal{L}_{r}\left(u_{\theta}\right), \tag{12}\]
with \(\mathcal{L}_{r}(u_{\theta})\) being a loss functional defined in terms of the weak residual and the discrete basis. This approach allows to consider scenarios where the solution belongs to less regular spaces than in collocation PINNs. For example, elliptic problems with solutions in \(H^{2-\varepsilon}(\Omega)\), with \(0<\varepsilon\leq 1\).
A typical definition for the loss functional in (12) found in the literature is the following (see, e.g., [25, 27, 2, 3]):
\[\mathcal{L}_{r}(u_{\theta}):=\sum_{m=1}^{M}r(u_{\theta},\varphi_{m})^{2}+C(u_ {\theta}), \tag{13}\]
where \(C(\cdot)\) is a \(V_{M}\)-independent quadratic functional employed to impose boundary conditions for \(u_{NN}\), that can be taken as \(C(u_{\theta})=0\) if boundary conditions are strongly imposed in the DNN structure.
**Remark 2** (Classical approach).: _We will refer to problem (12), with the loss functional defined by (13), as the classical VPINNs approach._
If we assume sufficient approximability of the DNN structure and an ideal optimizer (solver), a minimizer for (13) for which the loss functional vanishes is also a solution to the Petrov-Galerkin problem (11). However, adopting such a loss functional may be computationally inefficient. Indeed, given the test space \(V_{M}\), Eq. (13) depends heavily upon the choice of its basis. In particular, a simple re-scaling of one basis function (e.g., by making it sufficiently large) may easily lead to catastrophic results (cf. [45]). Consequently, there is a natural demand for robust loss definitions in practical VPINNs.
### An alternative definition of VPINNs
To motivate how robust losses can be constructed, we first notice that, for a given set of trainable parameters \(\theta\), we can define the following function in \(V_{M}\):
\[\widetilde{\varphi}=\sum_{m=1}^{M}r(u_{\theta}\,,\varphi_{m})\,\varphi_{m}. \tag{14}\]
Then, as a consequence of the linearity of the weak residual, it holds (cf. [25]):
\[\mathcal{L}_{r}\,(u_{\theta})=\sum_{m=1}^{M}r(u_{\theta}\,,\varphi_{m})^{2}+C (u_{\theta})=r(u_{\theta}\,,\widetilde{\varphi})+C(u_{\theta}). \tag{15}\]
The last equation suggests that VPINNs can be generalized in the following way. For a given \(\widetilde{\varphi}\in V_{M}\), a VPINNs loss functional can be defined as:
\[\mathcal{L}_{r}\,(u_{\theta})=r(u_{\theta}\,,\widetilde{\varphi})+C(u_{ \theta}). \tag{16}\]
The goal now is to define the loss functional as (16) in terms of a particular discrete test function \(\widetilde{\varphi}\in V_{M}\) such that VPINNs becomes robust. In the following Section, we present a strategy based on defining such a discrete function as the Riesz residual representative in \(V_{M}\) with respect to the norm inducing the inf-sup stability (3).
## 3 Robust Variational Physics-Informed Neural Networks
In Robust Variational Physics-Informed Neural Networks (RVPINNs), we first introduce an inner product \((\cdot,\cdot)_{V}\)1, whose associated norm satisfies the inf-sup stability condition (3). Next, for a given trainable parameter \(\theta\), we compute \(\phi:=\phi(\theta)\in V_{M}\) being the solution of the following Galerkin problem:
Footnote 1: i.e., \(\|v\|_{V}^{2}:=(v,v)_{V}\), for all \(v\in V\).
\[(\phi,\varphi_{n})_{V}=r(u_{\theta},\varphi_{n}),\text{ for }n=1,\ldots,M, \tag{17}\]
and define the loss functional as:
\[\mathcal{L}_{r}^{\phi}\,(u_{\theta}):=r(u_{\theta},\phi)+C(u_{\theta}), \tag{18}\]
where \(C(\cdot)\) is a \(V_{M}\)-independent quadratic functional to impose boundary conditions as in (13). Finally, we obtain the approximation of the solution of problem (1) by solving the following minimization problem (cf., [25]):
\[\text{Find }u_{\theta^{*}},\text{ such that }\theta^{*}=\underset{\theta\in\mathbb{R}^{S}}{\text{argmin}}\,\mathcal{L}_{r}^{\phi}\,(u_{\theta}). \tag{19}\]
**Remark 3** (Riesz representative of the weak residual).: _The solution \(\phi\) of problem (17) is the Riesz representative in \(V_{M}\) of the residual._
As a consequence of the linearity of the weak residual and the linearity of the inner product with respect to the second variable, defining:
\[\phi:=\sum_{m=1}^{M}\eta_{m}(\theta)\varphi_{m}, \tag{20}\]
problem (17) leads to the resolution of the following problem written in matrix form:
\[G\eta(\theta)=\mathcal{R}(\theta), \tag{21}\]
with \(\eta(\theta)\) the vector of coefficients \(\eta_{m}(\theta)\), \(G\) the (\(\theta\)-independent) symmetric and positive definite Gram matrix of coefficients \(G_{nm}=(\varphi_{m},\varphi_{n})_{V}\), and \(\mathcal{R}(\theta)\) the vector of coefficients \(\mathcal{R}_{n}(\theta)=r(u_{\theta},\varphi_{n})\), with \(n=1,\ldots,M\). Thus,
\[r(u_{\theta}\,,\phi)=\sum_{m=1}^{M}\eta_{m}(\theta)r(u_{\theta}\,,\varphi_{m})=\sum_{m=1}^{M}\mathcal{R}_{m}(\theta)\eta_{m}(\theta)=\mathcal{R}(\theta)^{T}G^{-1}\mathcal{R}(\theta), \tag{22}\]
implying that the loss functional (18) is equivalently written as:
\[\mathcal{L}_{r}^{\phi}(u_{\theta})=\mathcal{R}(\theta)^{T}G^{-1}\mathcal{R}( \theta)+C(u_{\theta}). \tag{23}\]
We also notice that, by definition, to minimize the loss functional (23) is equivalent, up to the constraint \(C(u_{\theta})\), to minimize the quantity \(\|\phi\|_{V}^{2}\). Indeed, from (17), we deduce:
\[\|\phi\|_{V}^{2}=(\phi,\phi)_{V}=r(u_{\theta},\phi). \tag{24}\]
In Section 4.2, we prove that \(\|\phi\|_{V}\) is, up to an oscillation term, a local robust error estimation for the error \(\|u-u_{\theta}\|_{U}\) under the assumption of the existence of a local Fortin's operator.
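To make (23) concrete, the following is a minimal PyTorch-style sketch (our own illustration, not the authors' released code). It assumes the residual vector `R`, with entries \(r(u_{\theta},\varphi_{n})\), has already been assembled by quadrature, and that the \(\theta\)-independent Gram matrix has been factorized once, e.g. `G_chol = torch.linalg.cholesky(G)`, so the factorization is reused across epochs and gradients flow only through `R`:

```python
import torch

def rvpinn_loss(R, G_chol, C=0.0):
    """Loss (23): R(theta)^T G^{-1} R(theta) + C(u_theta).

    R      : (M,) residual vector r(u_theta, varphi_n), assembled by quadrature.
    G_chol : lower Cholesky factor of the Gram matrix G_nm = (varphi_m, varphi_n)_V.
    """
    # Solve G eta = R: eta are the coefficients of the Riesz representative phi.
    eta = torch.cholesky_solve(R.unsqueeze(1), G_chol).squeeze(1)
    # R . eta = R^T G^{-1} R = ||phi||_V^2, cf. eqs. (22) and (24).
    return R @ eta + C
```

Since \(G\) is symmetric positive definite, a single Cholesky factorization suffices; for instance, for the P1 hat basis of Section 5.2 with the inner product (41) on a uniform mesh of size \(h\), \(G\) is the tridiagonal matrix with \(2\varepsilon/h\) on the diagonal and \(-\varepsilon/h\) off the diagonal, so the solve is cheap.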
### Orthonormal discrete basis and relation with other VPINNs
When a test space \(V_{M}\) is the span of an orthonormal set \(\{\varphi_{m}\}_{m=1}^{M}\) with respect to the \(\|\cdot\|_{V}\)-norm, the Gram matrix \(G\) coincides with the identity matrix; therefore, the corresponding residual representative has the form:
\[\phi=\sum_{m=1}^{M}r(u_{\theta},\varphi_{m})\varphi_{m}, \tag{25}\]
and the loss functional is explicitly written as
\[\mathcal{L}_{r}^{\phi}(u_{\theta})=\sum_{m=1}^{M}r(u_{\theta},\varphi_{m})^{2 }+C(u_{\theta}). \tag{26}\]
Thus, the classical VPINNs loss functional definition (13) is recovered for this case. This is also the case of the recently proposed Deep Fourier Residual method (see [45, 44]), where authors first define the loss functional as the continuous dual norm of the weak residual, that later is approximated by considering a truncation of a series in terms of an orthonormal basis for the test space. In particular, in [45], authors consider diffusion problems and approximate the norm of the weak residual in \(H^{-1}\). This turns out to be equivalent to (26) considering the standard \(H^{1}\)-norm for the space \(V\), and the functions \(\varphi_{m}\) as orthonormalized sinusoidal (spectral) functions with respect to the \(H^{1}\)-norm.
## 4 Error estimates for RVPINNs
One of the main complexities for proving the robustness of the residual estimator \(\|\phi\|_{V}\) is that the solution of the Petrov-Galerkin problem (11) may not have a solution or, if there exists, it may be non-unique since the space of all possible realizations for the NN structure defines a manifold instead of a finite-dimensional space (see, e.g., [3, Section 6.3]). Thus, standard FEM arguments based on a discrete inf-sup condition cannot be applied in this context.
Nevertheless, we can derive a posteriori error estimates using a different strategy. First, we introduce in Section 4.1 an equivalence class that allows us to neglect the part of the error that is \(a\)-orthogonal to \(V_{M}\). For that equivalence class, we prove that the residual representative always defines a reliable and efficient a posteriori estimator for the error. Then, in Section 4.2, we consider the case of the full error, for which we demonstrate its equivalence to the residual error estimator up to an oscillation term and under the assumption of the existence of a local Fortin operator.
### A posteriori error estimates for RVPINNs in the sense of equivalence classes
We start by introducing the following Null space of the operator \(A:U\mapsto V_{M}^{\prime}\), defined in terms of the bilinear form \(a(\cdot\,,\cdot)\) associated with problem (1) (cf. [13]):
\[U_{M}^{0}:=\left\{w\in U\,:\,\langle A(w)\,,v_{M}\rangle:=a(w\,,\,v_{M})=0,\, \forall\,v_{M}\in V_{M}\right\}, \tag{27}\]
and the following norm for the quotient space \(U/U_{M}^{0}\):
\[\|[w]\|_{U/U_{M}^{0}}:=\inf_{w_{0}\in U_{M}^{0}}\|w+w_{0}\|_{U}. \tag{28}\]
We extend the definition of the bilinear form \(a(\cdot\,,\cdot)\) to the product space \(U/U_{M}^{0}\times V\) as:
\[a([w],v)=a(w,v),\quad\text{with }w\in[w]\text{ being any arbitrary representative of the equivalence class.} \tag{29}\]
The following result follows as a consequence of the boundedness and inf-sup stability properties of the bilinear form \(a(\cdot\,,\cdot)\) (see equations (2) and (3)), respectively:
**Proposition 1**.: _The following boundedness and semi-discrete inf-sup conditions, respectively, are satisfied by the bilinear form (29):_
\[a([w],v_{M})\leq\mu\|[w]\|_{U/U_{M}^{0}}\|v_{M}\|_{V},\quad\forall[w]\in U/U_{ M}^{0},\,v_{M}\in V_{M}, \tag{30}\]
\[\sup_{0\neq v_{M}\in V_{M}}\frac{a([w],v_{M})}{\|v_{M}\|_{V}}\geq\alpha\|[w] \|_{U/U_{M}^{0}},\quad\forall[w]\in U/U_{M}^{0}. \tag{31}\]
Proof.: First, notice that (30) is direct from the definition (28) and the boundedness property of the bilinear form \(a(\cdot\,,\cdot)\). Indeed, for all \(w\in U\) and \(w_{0}\in U_{M}^{0}\), it holds:
\[a(w,v_{M})=a(w+w_{0}\,,v_{M})\leq\mu\|w+w_{0}\|_{U}\|v_{M}\|_{V},\qquad\forall v _{M}\in V_{M}.\]
Thus, (30) follows from taking the infimum. To prove (31), first notice that, for all \(v_{M}\in V_{M}\), (7) implies:
\[\alpha\|v_{M}\|_{V}\leq\sup_{0\neq w\in U}\frac{a(w\,,v_{M})}{\|w\|_{U}}=\sup_ {[0]\neq[w]\in U/U_{M}^{0}}\frac{a([w]\,,v_{M})}{\|w\|_{U}}\leq\sup_{[0]\neq[ w]\in U/U_{M}^{0}}\frac{a([w]\,,v_{M})}{\|[w]\|_{U/U_{M}^{0}}},\]
where the last inequality follows from the fact that \(\|[w]\|_{U/U_{M}^{0}}\leq\|w\|_{U}\), since \(w_{0}=0\in U_{M}^{0}\). Additionally, by construction, \(\forall[w]\in U/U_{M}^{0}\),
\[a([w]\,,v_{M})=0,\,\forall\,v_{M}\in V_{M}\Longrightarrow[w]=[0]. \tag{32}\]
Therefore, (31) follows as a consequence of Theorem 1 in [13].
Using the previous proposition, we can establish the main result of this section:
**Theorem 2** (Error class bounds in terms of the residual representative).: _Let \(u\in U\) be the solution of the continuous problem (1); \(u_{\theta}\in U_{NN}\) denote a DNN structure with trainable parameters \(\theta\in\mathbb{R}^{S}\); \(V_{M}\subseteq V\) denote a finite dimensional space, endowed with norm \(\|\cdot\|_{V}\); and \(\phi\in V_{M}\) be the solution of problem (17). It holds:_
\[\frac{1}{\mu}\|\phi\|_{V}\leq\|[u-u_{\theta}]\|_{U/U_{M}^{0}}\leq\frac{1}{ \alpha}\|\phi\|_{V}. \tag{33}\]
Proof.: First notice that, by definition of \(\phi\) and consistency of the analytical solution \(u\), it holds:
\[\|\phi\|_{V}=\sup_{0\neq v_{M}\in V_{M}}\frac{(\phi,v_{M})_{V}}{\|v_{M}\|_{V} }=\sup_{0\neq v_{M}\in V_{M}}\frac{r(u_{\theta},v_{M})}{\|v_{M}\|_{V}}=\sup_{ 0\neq v_{M}\in V_{M}}\frac{a([u-u_{\theta}],v_{M})}{\|v_{M}\|_{V}}.\]
Therefore, the left inequality in (33) is obtained by using the boundedness property (30). On another side, as a consequence of the inf-sup stability (31), it holds:
\[\alpha\left\|[u-u_{\theta}]\right\|_{U/U_{M}^{0}}\leq\sup_{0\neq v _{M}\in V_{M}}\frac{a([u-u_{\theta}],v_{M})}{\|v_{M}\|_{V}} =\sup_{0\neq v_{M}\in V_{M}}\frac{a(u-u_{\theta},v_{M})}{\|v_{M}\| _{V}}\] \[=\sup_{0\neq v_{M}\in V_{M}}\frac{r(u_{\theta},v_{M})}{\|v_{M}\|_{V}}\] \[=\sup_{0\neq v_{M}\in V_{M}}\frac{(\phi,v_{M})_{V}}{\|v_{M}\|_{V}}\] \[=\|\phi\|_{V},\]
proving the right inequality in (33).
**Remark 4** (Robustness of the residual representative in the sense of classes).: _Inequalities (33) show that, for any \(\theta\), \(\|\phi\|_{V}\) always, up to some multiplicative constants, defines an efficient (lower) and a reliable (upper) bound (thus, robust) estimation for the error \(\|[u-u_{\theta}]\|_{U/U_{M}^{0}}\). This, in practice, implies that minimizing \(\|\phi\|_{V}\) is equivalent to minimizing \(\|[u-u_{\theta}]\|_{U/U_{M}^{0}}\). Moreover, if we assume that problem (11) admits a solution, when \(\|\phi\|_{V}\to 0^{+}\), \(u-u_{\theta}\) converges to a function that belongs to \(U_{M}^{0}\) as a consequence of the robustness. Thus, since \(u\not\in U_{M}^{0}\), we conclude that \(u_{\theta}\) converges to a function \(u_{\theta^{*}}\), satisfying:_
\[0=a(u-u_{\theta^{*}},v_{M})=r(u_{\theta^{*}},v_{M}),\quad\forall v_{M}\in V_{ M}.\]
_Therefore, \(u_{\theta}\) converges to a solution of the Petrov-Galerkin problem (11), as expected._
### Energy norm error estimates based on local semi-discrete inf-sup condition
Even if the a posteriori error estimates of the previous section only ensure that the RVPINNs definition of the loss functional is robust in the sense of equivalence classes, numerical experiments confirm that it can also be robust concerning the true error in the energy norm.
We first notice that, as a consequence of Theorem 2 and the quotient norm definition (28), the following result can be deduced:
**Corollary 3** (Lower bound for the error in the energy norm).: _Under the same hypothesis of Theorem 2, it holds:_
\[\frac{1}{\mu}\|\phi\|_{V}\leq\|u-u_{\theta}\|_{U}. \tag{34}\]
Therefore, for a given \(\theta\), \(\|\phi\|_{V}\) always defines an efficient bound for the error in the energy norm.
If we assume that the NN structure allows for solutions of the Petrov-Galerkin problem (11), we can also obtain a local reliable bound for \(\|u-u_{\theta}\|_{U}\) through the following Assumption:
**Assumption 1** (Local Fortin's condition).: _There exists a parameter \(\theta^{*}\in\mathbb{R}^{S}\), such that \(u_{\theta^{*}}\in U_{NN}\) is a solution of the Petrov-Galerkin problem (11). Additionally, there exists \(R>0\) such that, for all \(\theta\in B(\theta^{*},R)\), there is an operator \(\Pi_{\theta}:V\mapsto V_{M}\), and a \(\theta\)-independent constant \(C_{\Pi}>0\), verifying:_
1. \(a(u_{\theta},v-\Pi_{\theta}v)=0,\quad\forall v\in V\)_,_
2. \(\|\Pi_{\theta}v\|_{V}\leq C_{\Pi}\|v\|_{V},\quad\forall v\in V\)_,_
_where \(B(\theta^{*},R)\) denotes an open ball of center \(\theta^{*}\) and radius \(R\), with respect to a given norm of \(\mathbb{R}^{S}\)._
**Proposition 4** (Upper bound of the error in the energy norm).: _Under the same hypothesis of Theorem 2, if Assumption 1 is satisfied, it holds:_

\[\|u-u_{\theta}\|_{U}\leq\frac{1}{\alpha}\operatorname{osc}(u)+\frac{C_{\Pi}}{\alpha}\|\phi\|_{V},\quad\forall\,\theta\in B(\theta^{*},R), \tag{35}\]
_with_
\[\operatorname{osc}(u):=\sup_{0\neq v\in V}\frac{a(u,v-\Pi_{\theta}v)}{\|v\|_ {V}}.\]
Proof.: If Assumption 1 is satisfied, it holds:
\[\alpha\|u-u_{\theta}\|_{U}\leq\sup_{0\neq v\in V}\frac{a(u-u_{\theta},v)}{\|v\|_{V}} \leq\sup_{0\neq v\in V}\frac{a(u-u_{\theta},v-\Pi_{\theta}v)}{\|v\|_{V}}+\sup_{0\neq v\in V}\frac{a(u-u_{\theta},\Pi_{\theta}v)}{\|v\|_{V}}\] \[\leq\operatorname{osc}(u)+C_{\Pi}\sup_{0\neq v\in V}\frac{a(u-u_{\theta},\Pi_{\theta}v)}{\|\Pi_{\theta}v\|_{V}}\] \[\leq\operatorname{osc}(u)+C_{\Pi}\sup_{0\neq v_{M}\in V_{M}}\frac{a(u-u_{\theta},v_{M})}{\|v_{M}\|_{V}}\] \[=\operatorname{osc}(u)+C_{\Pi}\sup_{0\neq v_{M}\in V_{M}}\frac{r(u_{\theta},v_{M})}{\|v_{M}\|_{V}}\] \[=\operatorname{osc}(u)+C_{\Pi}\sup_{0\neq v_{M}\in V_{M}}\frac{(\phi,v_{M})_{V}}{\|v_{M}\|_{V}}\] \[=\operatorname{osc}(u)+C_{\Pi}\|\phi\|_{V},\]
proving (35).
**Remark 5** (Existence of a local Fortin's operator).: _First, notice that, for a given \(\theta\) with \(u_{\theta}\neq 0\), a Fortin operator can exist only if \(u_{\theta}\) does not belong to the space \(U^{0}_{M}\). Indeed, if \(u_{\theta}\in U^{0}_{M}\), we have_

\[a(u_{\theta},\Pi_{\theta}v)=0,\text{ for any Fortin operator }\Pi_{\theta},\]

_since \(\Pi_{\theta}v\in V_{M}\). Thus, in such a case, the residual representative \(\|\phi\|_{V}\) is only a reliable estimate in the spirit of Theorem 2. Nevertheless, it is expected that, in the minimization procedure, the NN solution approaches a solution of the Petrov-Galerkin problem (11) (see Remark 4), implying that the NN solution will no longer belong to the space \(U^{0}_{M}\) after further iterations. Therefore, it is meaningful to assume that, after a sufficient number of iterations, the nonlinear solver will reach a parameter \(\theta\) belonging to a neighborhood of a parameter \(\theta^{*}\) that is a local minimum, implying that a local semi-discrete inf-sup condition is satisfied, as stated in the following Lemma (cf. [19])._
**Lemma 5** (Local Fortin's lemma).: _If Assumption 1 is satisfied, then the following local semi-discrete inf-sup condition is satisfied:_
\[\frac{\alpha}{C_{\Pi}}\|w_{NN}\|_{U}\leq\sup_{0\neq v_{M}\in V_{M}}\frac{a(w_ {NN},v_{M})}{\|v_{M}\|_{V}},\quad\forall\,w_{NN}\in U^{\theta^{*}}_{NN}:=\{u_ {\theta}:\theta\in B(\theta^{*},R)\}\,. \tag{36}\]
Proof.: Using the properties of Fortin's operator and the linearity of the bilinear form \(a(\cdot,\cdot)\) with respect to the first variable, it holds:
\[\sup_{0\neq v_{M}\in V_{M}}\frac{a(w_{NN},v_{M})}{\|v_{M}\|_{V}}\geq\sup_{0 \neq v\in V}\frac{a(w_{NN},\Pi_{\theta}v)}{\|\Pi_{\theta}v\|_{V}}=\sup_{0\neq v \in V}\frac{a(w_{NN},v)}{\|\Pi_{\theta}v\|_{V}}\geq\frac{1}{C_{\Pi}}\sup_{0 \neq v\in V}\frac{a(w_{NN},v)}{\|v\|_{V}}\geq\frac{\alpha}{C_{\Pi}}\|w_{NN}\|_ {U}.\]
Finally, with the help of the previous Lemma, we can prove the following local a priori error estimate:
**Proposition 6** (Local a priori error estimate).: _If Assumption 1 is satisfied, it holds:_
\[\|u-u_{\theta^{*}}\|_{U}\leq\left(1+\frac{\mu C_{\Pi}}{\alpha}\right)\inf_{w_ {NN}\in U^{\theta^{*}}_{NN}}\|u-w_{NN}\|_{U}. \tag{37}\]
Proof.: First notice that, for all \(w_{NN}\in U^{\theta^{*}}_{NN}\), it holds
\[\frac{\alpha}{C_{\Pi}}\|u_{\theta^{*}}-w_{NN}\|_{U}\leq\sup_{0\neq v_{M}\in V _{M}}\frac{a(u_{\theta^{*}}-w_{NN},v_{M})}{\|v_{M}\|_{V}}=\sup_{0\neq v_{M} \in V_{M}}\frac{a(u-w_{NN},v_{M})}{\|v_{M}\|_{V}}\leq\mu\|u-w_{NN}\|_{U}.\]
Therefore, the result (37) follows from the triangular inequality
\[\|u-u_{\theta^{*}}\|_{U}\leq\|u-w_{NN}\|_{U}+\|u_{\theta^{*}}-w_{NN}\|_{U}.\]
**Remark 6** (Fortin's lemma converse).: _Following [18], it can be proved that the converse of Lemma 5 is also valid. This could be useful in scenarios where proving the local semi-discrete inf-sup condition is simpler than proving the existence of a Fortin operator._
**Remark 7** (Norm for the discrete space).: _For the sake of simplicity, we assume that the norm for the discrete space \(V_{M}\) coincides with the norm \(\|\cdot\|_{V}\) of the space \(V\). However, under certain conditions, it could be convenient to consider a different norm for the space \(V_{M}\) that is equivalent to \(\|\cdot\|_{V}\); for instance, when properties of the discrete test space allow obtaining a better inf-sup constant for the local semi-discrete inf-sup condition. In such a case, the previous results extend straightforwardly to this scenario._
## 5 Numerical examples
To illustrate the proposed strategy's performance, we consider several diffusion-advection problems subject to Dirichlet-type boundary conditions. We focus on problems that either are challenging for the standard finite element method (advection-dominated problems) or do not admit a solution belonging to \(H^{2}(\Omega)\). To construct the loss functionals, we consider two distinct discrete test spaces: one employing standard FE piece-wise polynomials and another employing spectral orthonormal test functions. Here, we do not intend to compare the performance between different test spaces, but rather our aim is to numerically validate the robustness of RVPINNs.
### Diffusion-advection model problem
For \(d\geq 1\), given a bounded and open Lipschitz polyhedron \(\Omega\subset\mathbb{R}^{d}\) with boundary \(\partial\Omega\), this work considers the following linear diffusion-advection problem:
Find \(u\), such that:
\[-\nabla\cdot(\varepsilon\nabla u)+\beta\cdot\nabla u =f,\quad\text{ in }\Omega,\] \[u =0,\quad\text{ on }\partial\Omega, \tag{38}\]
where \(\varepsilon>0\) denotes a diffusive coefficient term, \(\beta\in[L^{\infty}(\Omega)]^{d}\) is a divergence-free (i.e., \(\text{div}\beta\equiv 0\)) advective coefficient function. Finally, \(f\in V^{\prime}\) is a given source term, with \(V^{\prime}:=H^{-1}(\Omega)\) being the dual space of \(V:=H^{1}_{0}(\Omega)\). Problem (38) admits the following continuous variational formulation:
\[\text{Find }u\in U:=V\,:\,r(u,v):=l(v)-a(u,v)=0,\,\forall\,v\in V, \tag{39}\]
with
\[a(u,v):=\left(\varepsilon\nabla u-\beta\,u,\nabla v\right)_{0},\quad\text{and }\quad l(v)=\langle f,v\rangle, \tag{40}\]
where \((\cdot,\cdot)_{0}\) denotes the \(L^{2}\)-inner product2, and \(\langle\cdot,\cdot\rangle\) denotes the duality map between \(V^{\prime}\) and \(V\), coinciding with \((f,v)_{0}\) when \(f\) belongs to \(L^{2}(\Omega)\).
Footnote 2: We adopt the same notation for the scalar and vectorial \(L^{2}\)-inner products for the sake of simplicity.
Let us equip the Hilbert spaces \(U,V\) with the norms
\[\|\cdot\|_{U}^{2}:=\|\cdot\|_{V}^{2}:=\varepsilon\,(\nabla\cdot,\nabla\cdot)_ {0}. \tag{41}\]
Using the Cauchy-Schwarz inequality, the following boundedness estimate is proved (cf. (2)):
\[a(w,v)\leq\left(1+\frac{C_{\Omega}\,\|\beta\|_{\infty}}{\varepsilon}\right)\|w\|_{U}\|v\|_{V},\quad\forall\,w\in U,v\in V, \tag{42}\]
where \(C_{\Omega}\) denotes the Poincaré constant (see [39, 1]). Moreover, as a consequence of the fact that (recall that \(\text{div}\beta\equiv 0\)):
\[(\beta\cdot\nabla v,v)_{0}=0,\quad\forall\,v\in V, \tag{43}\]
a straightforward verification shows that the following coercive estimate holds:
\[a(v,v)\geq\|v\|_{V}^{2},\quad\forall\,v\in V. \tag{44}\]
Notice also that the coercive property (44) implies that the following inf-sup condition is satisfied (cf. (3)):
\[\sup_{0\neq v\in V}\frac{a(w,v)}{\|v\|_{V}}\geq\|w\|_{U},\quad\forall\,w\in U. \tag{45}\]
### Discrete setting
For simplicity, we consider only one-dimensional problems. We set \(\Omega:=(-1,1)\) and define two kinds of test spaces. First, we consider \(V_{M}\) as the FE space of standard globally continuous, piecewise-linear functions that belong to \(H^{1}_{0}(\Omega)\), defined over a uniform mesh partition of \(\Omega\) into \(M+1\) elements \([x_{i},x_{i+1}]\), with \(x_{0}=-1<x_{1}<\dots<x_{M}<x_{M+1}=1\). The corresponding Gram matrix is constructed using the inner product inducing the norm \(\|\cdot\|_{V}\) defined in (41). The second test space is intended to show examples involving an orthonormal basis for the discrete space \(V_{M}\). Here, we normalize, with respect to the inner product (41), the first \(M\) eigenfunctions of the Laplacian satisfying the homogeneous Dirichlet boundary conditions. The \(m\)-th eigenfunction and the corresponding orthonormal set, respectively, are given by
\[s_{m}:=\sin\left(m\pi\frac{x+1}{2}\right),\qquad E_{M}=\left\{\varphi_{m}:= \frac{2s_{m}}{\sqrt{\varepsilon}\pi m}\right\}_{m=1}^{M}. \tag{46}\]
Therefore, the loss functional (26) is explicitly given, up to the constraint \(C(u_{\theta})\), as:
\[\mathcal{L}_{r}^{\phi}(u_{\theta})=\frac{4}{\varepsilon\pi^{2}}\sum_{m=1}^{M} \frac{1}{m^{2}}\,r\,(u_{\theta}\,,s_{m})^{2}\,. \tag{47}\]
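A minimal sketch of how (47) can be evaluated with autograd and the trapezoidal rule of Section 5.2 follows; `u_net` (the boundary-corrected network) and the callable source `f` are hypothetical names, and the general residual \(r(u,s_{m})=(f,s_{m})_{0}-(\varepsilon u^{\prime}-\beta u,s_{m}^{\prime})_{0}\) of problem (39) is used so that the same routine also serves the advection cases below:

```python
import torch

def spectral_rvpinn_loss(u_net, f, x, eps, beta, M):
    """Loss (47) with the sine test functions s_m, integrated by the
    trapezoidal rule on the grid x of shape (N, 1)."""
    xq = x.clone().requires_grad_(True)
    u = u_net(xq)
    (du,) = torch.autograd.grad(u.sum(), xq, create_graph=True)
    loss = 0.0
    for m in range(1, M + 1):
        s = torch.sin(m * torch.pi * (xq + 1) / 2)
        ds = (m * torch.pi / 2) * torch.cos(m * torch.pi * (xq + 1) / 2)
        integrand = f(xq) * s - (eps * du - beta * u) * ds
        r_m = torch.trapezoid(integrand.squeeze(), xq.squeeze())  # r(u, s_m)
        loss = loss + r_m ** 2 / m ** 2
    return 4.0 / (eps * torch.pi ** 2) * loss
```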
For the architecture of the DNN approximator \(u_{\theta}\), we adopt the following structure: a fixed feed-forward fully connected network comprising five layers, each with 25 neurons. Throughout all layers, the activation function
employed is tanh. When strongly enforcing the homogeneous Dirichlet boundary conditions, we multiply the last layer's output by \((x+1)(x-1)\). Another version includes the constrained imposition of homogeneous Dirichlet boundary conditions by selecting:
\[C(u_{\theta})=|u_{\theta}(-1)|^{2}+|u_{\theta}(1)|^{2}. \tag{48}\]
As our choice for the nonlinear solver, we opt for the ADAM optimizer (refer to [28]) and initialize it with a learning rate of \(0.0005\). Our nonlinear solver undergoes up to \(6000\) iterations (epochs). To approximate the integral terms appearing in the loss functional definitions, we employ (a) a fixed Gaussian quadrature rule with five nodes per element when considering FE functions, and (b) a trapezoidal rule with \(4000\) equally distributed nodes when considering spectral functions. Lastly, the error \(\|u-u_{\theta}\|_{U}\) is numerically estimated using a trapezoidal rule, with \(10000\) equally distributed nodes.
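Assembling the pieces sketched above gives the following hypothetical end-to-end loop (again our own illustration; `f`, `eps`, `beta`, and `M` are assumed given by the problem at hand):

```python
net = FeedForward(d_in=1, width=25, depth=5)          # tanh network of Section 2.2
opt = torch.optim.Adam(net.parameters(), lr=0.0005)
x = torch.linspace(-1.0, 1.0, 4000).unsqueeze(1)      # trapezoidal nodes

def u_strong(t):
    # strong imposition of u(-1) = u(1) = 0
    return (t + 1.0) * (t - 1.0) * net(t)

for epoch in range(6000):
    opt.zero_grad()
    loss = spectral_rvpinn_loss(u_strong, f, x, eps, beta, M)
    loss.backward()
    opt.step()
```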
### A smooth diffusion problem
We first consider a simple diffusion problem with a smooth solution. More precisely, setting \(\Omega=(-1,1)\), we consider the variational problem (39) with \(\epsilon=1\), \(\beta=0\) and forcing term \(f\) defined in such a way that the analytical solution is given by
\[u(x)=x\sin(\pi(x+1)).\]
Figure 1 shows the numerical results obtained with RVPINNs with \(50\) sinusoidal test functions and strong imposition of the BCs. We observe a good performance of the RVPINNs strategy, obtaining an accurate approximation of the analytical solution, as shown in Figure 1(a). Moreover, we observe a perfect match between the quantities \(\|u-u_{\theta}\|_{U}\), \(\|\phi\|_{V}\), and \(\sqrt{\mathcal{L}_{r}^{\phi}(u_{\theta})}\), as shown in Figures 1(b) and 1(c), where, in particular, the latter shows a perfect correlation between the true error and the square root of the loss functional. Figure 2 shows an almost identical behavior when considering \(100\) FE test functions and strong imposition of BCs. We obtain similar results when considering the constrained imposition of BCs, but we omit them here for brevity.
Note that in the case of pure diffusive problems, the continuity constant \(\mu\) and the inf-sup stability constant \(\alpha\) are equal to one when considering the \(H^{1}\)-seminorm (cf. (42) and (45)). Therefore, \(\|\phi\|_{V}\) always defines a lower bound for the error (cf. (34)) and an upper bound up to a constant close to one if the oscillation term in (35) is sufficiently small. These theoretical considerations are confirmed in Figures 1 and 2.
Finally, to show the effect of the oscillation term (35), Figure 3 shows the results obtained with only five FE test basis functions. Figure 3(b) reveals that the estimation is not robust after (approximately) \(100\) iterations and only becomes a lower bound, as the DNN has reached (up to implementation precision) a solution of the Petrov-Galerkin problem (11).
### Delta source problem
We consider again the pure \(1\)D diffusion problem (i.e., eq. (39) with \(\epsilon=1\) and \(\beta=0\)), defined over \(\Omega=(-1,1)\) and subject to homogeneous Dirichlet BCs, but now with a Dirac delta source \(\delta_{1/2}\in H^{-1}(\Omega)\), that is explicitly defined through the action:
\[l(v):=\langle\delta_{1/2}\,,\,v\rangle:=v(1/2),\quad\forall v\in H^{1}_{0}( \Omega). \tag{49}\]
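One practical consequence of (49) is that the load term requires no quadrature at all: \(l(s_{m})=s_{m}(1/2)\) is evaluated exactly. The following sketch uses the spectral basis of Section 5.2 for concreteness, although the experiments below use FE test functions:

```python
import torch

def delta_residual(u_net, x, eps, m):
    """r(u_theta, s_m) for the load (49): l(s_m) = s_m(1/2) is exact, so only
    the diffusion term needs quadrature."""
    xq = x.clone().requires_grad_(True)
    u = u_net(xq)
    (du,) = torch.autograd.grad(u.sum(), xq, create_graph=True)
    ds = (m * torch.pi / 2) * torch.cos(m * torch.pi * (xq + 1) / 2)
    s_half = torch.sin(torch.tensor(0.75 * m * torch.pi))   # s_m(1/2) = sin(3m*pi/4)
    return s_half - torch.trapezoid((eps * du * ds).squeeze(), xq.squeeze())
```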
Figure 1: RVPINNs approximation of the smooth diffusion problem; strong BCs imposition, \(50\) spectral test functions.
The analytical solution to this problem is explicitly given as:
\[u(x)=\left\{\begin{array}{ll}\frac{1}{4}(x+1),&-1\leq x\leq\frac{1}{2},\\ \frac{3}{4}(1-x),&\frac{1}{2}<x\leq 1,\end{array}\right.\]
which is only a \(C^{0}(\Omega)\) solution, while the activation function tanh is smooth. Therefore, the oscillation term is expected to dominate, even when considering a sufficiently large number of test functions, as observed in Figure 4, where we have considered 100 FE test functions and strong imposition of the BCs. Figure 5 shows similar results for the same configuration but with the constrained imposition of the BCs. In both cases, a good representation of the analytical solution is obtained despite being approximated with a smooth function. Additionally, figures 4 and 5 show that the loss functional defines a robust estimation of the error in the pre-asymptotic regime (before the dominance of the oscillation term). A similar behavior is obtained when considering the loss functional with spectral test functions, omitted here for brevity.
Figure 4: RVPINNs approximation of the delta source diffusion problem; strong BCs imposition, 100 FE test functions.
Figure 3: RVPINNs approximation of the smooth diffusion problem; strong BCs imposition, 5 FE test functions.
Figure 2: RVPINNs approximation of the smooth diffusion problem; strong BCs imposition, 100 FE test functions.
### Advection-dominated diffusion problem
As our last example, we consider problem (39) in an advection-dominated regime, with the aim of testing the performance and robustness of the estimates for RVPINNs in nearly unstable scenarios. For this, we set \(\beta=1\), the source term \(f=1\), and consider small values of \(\epsilon\). For a given \(\epsilon\), the analytical solution of (39) is given by:
\[u(x)=\frac{2\left(1-e^{\frac{x-1}{\epsilon}}\right)}{1-e^{-\frac{2}{\epsilon}}}+x-1,\]
exhibiting a strong gradient near the right boundary when \(\epsilon\to 0\) (cf. Figures 6(a) and 8(a)). We restrict the discussion to spectral test functions, as we observe similar behaviors when considering FE test functions.
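For reference, the closed form above can be evaluated stably as follows (our own snippet, written with non-positive exponents so that no overflow occurs for small \(\epsilon\)):

```python
import numpy as np

def u_exact(x, eps):
    """Exact solution for beta = 1, f = 1 on (-1, 1); the exponent (x - 1)/eps
    is non-positive on the domain, so the formula stays finite as eps -> 0."""
    return x - 1.0 + 2.0 * (1.0 - np.exp((x - 1.0) / eps)) / (1.0 - np.exp(-2.0 / eps))
```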
Figure 6 shows the results obtained for \(\epsilon=0.1\), considering 50 test functions and strong imposition of the BCs. Contrary to the pure diffusive case, Figure 6(b) reflects that the lines representing the error and the square root of the loss functional (respectively, the norm of the discrete Riesz representative) do not overlap. This behavior is expected as, in this case, the continuity constant \(\mu\) is not equal to one, and it deteriorates when \(\epsilon\to 0\) (cf. (42)). We also observe a decay in the expected optimal correlation between the error and the square root of the loss starting near iteration 3000, apparently due to stability issues. This suboptimality is improved when considering the constrained imposition of the boundary conditions, as shown in Figure 7. This constrained imposition of BCs could thus serve as a stabilization technique. This is further confirmed in Figure 8, where we show the results obtained for \(\epsilon=0.005\), with 200 test basis functions and constrained BCs. When considering instead the strong imposition of the BCs and the same number of test functions, we do not observe convergence within the first 6000 iterations (omitted here for brevity). We observe that when considering the constrained BCs, the loss functional exhibits some differences with the norm of the Riesz representative. However, this also agrees with our findings, as the constraint does not depend on the test space. Thus, it is not represented by the Riesz representative, as the boundary is not taken into consideration in the norm definition of the test space. Nevertheless, in this case, we still observe a good correlation between the loss and the true error.
Figure 5: RVPINNs approximation of the delta diffusion problem; constrained BCs imposition, 100 FE test functions.
Figure 6: RVPINNs approximation of the diffusion-advection problem with \(\epsilon=0.1\); strong BCs imposition, 50 spectral test functions.
## 6 Conclusions
In this article, we provide a general mathematical framework to construct robust loss functionals based on VPINNs. For that, we first generalize the definition of the loss functional in VPINNs to the choice of a single test function. Then, following a minimum residual principle, we select such test function as the Riesz representative of the weak residual over a given discrete test space. We expand the Riesz representation over the discrete test space so the loss functional includes the inversion of the Gram matrix corresponding to the inner product. We prove that the true error in the energy norm is equivalent to the test norm of the residual error estimator. To prove the robustness of the method, we need to select a test norm that induces the inf-sup stability at the continuous level. We numerically show the robustness of our a posteriori error estimator and also that our methodology is insensitive to the choice of the basis functions in the selected discrete test space.
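For readers who wish to experiment with this construction, a minimal sketch of the resulting loss is given below. It is our own illustration (not code from the article): given the weak residual of the network solution tested against \(m\) discrete test functions and the Gram matrix of the chosen test inner product, the loss is the squared test norm of the discrete Riesz representative, computed by solving one linear system rather than explicitly inverting the Gram matrix.

```
import numpy as np

def rvpinn_loss(residual_vec, gram):
    """Robust VPINN loss: r^T G^{-1} r, i.e., the squared test norm of the
    discrete Riesz representative of the weak residual.

    residual_vec : (m,) weak residual tested against m test basis functions
    gram         : (m, m) Gram matrix of the test-space inner product
    """
    # Solve G c = r for the Riesz coefficients instead of forming G^{-1}.
    coeffs = np.linalg.solve(gram, residual_vec)
    return float(residual_vec @ coeffs)
```

As discussed above, the square root of this quantity is equivalent, up to the continuity and inf-sup constants, to the true error in the energy norm.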
Possible future research lines include: (a) the application of the framework to other families of PDEs like wave propagation, nonlinear or time-dependent problems, (b) the extension to nonconforming formulations, (c) the application of RVPINNs to parametric problems with an off-line inversion of the Gram matrix, and (d) an efficiency study of the method for a given variational formulation, NN configuration, and test space.
## 7 Acknowledgments
The work of Sergio Rojas was done in the framework of the Chilean grant ANID FONDECYT No. 3210009. Judit Munoz-Matute has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie individual fellowship grant agreement No. 101017984 (GEODPG). David Pardo has received funding from: the Spanish Ministry of Science and Innovation projects with references TED2021-132783B-I00, PID2019-1081118B-I00 (FEDER/AEI) and PDC2021-121093-I00 (MCIN / AEI / 10.13039/501100011033 / Next Generation EU), the "BCAM Severo Ochoa" accreditation of excellence CEX2021-001142-S / MICIN / AEI / 10.13039/501100011033; the Spanish Ministry of Economic and Digital Transformation with Misiones Project IA4TES (MIA.2021.M04.008 / NextGenerationEU PRTR); and the Basque Government through the BERC 2022-2025 program, the Elkartek project SIGZE (KK-2021/00095), and the Consolidated Research Group MATHMODE (IT1456-22) given by the Department of Education, and the Euskampus Foundation through the ORLEG-IA project. The work of Maciej Paszynski and Pawel Maczuga was supported by
Figure 8: RVPINNs approximation of the diffusion-advection problem with \(\epsilon=0.005\); constrained BCs imposition, 200 spectral test functions.
Figure 7: RVPINNs approximation of the diffusion-advection problem with \(\epsilon=0.1\); constrained BCs imposition, 50 spectral test functions.
the program "Excellence initiative - research university" for the AGH University of Science and Technology.
|
2310.00337 | Quantization of Deep Neural Networks to facilitate self-correction of
weights on Phase Change Memory-based analog hardware | In recent years, hardware-accelerated neural networks have gained significant
attention for edge computing applications. Among various hardware options,
crossbar arrays, offer a promising avenue for efficient storage and
manipulation of neural network weights. However, the transition from trained
floating-point models to hardware-constrained analog architectures remains a
challenge. In this work, we combine a quantization technique specifically
designed for such architectures with a novel self-correcting mechanism. By
utilizing dual crossbar connections to represent both the positive and negative
parts of a single weight, we develop an algorithm to approximate a set of
multiplicative weights. These weights, along with their differences, aim to
represent the original network's weights with minimal loss in performance. We
implement the models using IBM's aihwkit and evaluate their efficacy over time.
Our results demonstrate that, when paired with an on-chip pulse generator, our
self-correcting neural network performs comparably to those trained with
analog-aware algorithms. | Arseni Ivanov | 2023-09-30T10:47:25Z | http://arxiv.org/abs/2310.00337v1 | Quantization of Deep Neural Networks to facilitate self-correction of weights on Phase Change Memory-based analog hardware
###### Abstract
In recent years, hardware-accelerated neural networks have gained significant attention for edge computing applications. Among various hardware options, crossbar arrays, offer a promising avenue for efficient storage and manipulation of neural network weights. However, the transition from trained floating-point models to hardware-constrained analog architectures remains a challenge. In this work, we combine a quantization technique specifically designed for such architectures with a novel self-correcting mechanism. By utilizing dual crossbar connections to represent both the positive and negative parts of a single weight, we develop an algorithm to approximate a set of multiplicative weights. These weights, along with their differences, aim to represent the original network's weights with minimal loss in performance. We implement the models using IBM's aihwkit and evaluate their efficacy over time. Our results demonstrate that, when paired with an on-chip pulse generator, our self-correcting neural network performs comparably to those trained with analog-aware algorithms.
## 1 Introduction
An emerging area in neural network hardware is the analog compute paradigm. In order to get around the von Neumann bottleneck, compute and memory are moved into a shared area, often implemented using crossbar arrays [11]. This allows us to reduce the computational complexity of certain operations, such as matrix-vector multiplication (MVM), from O(\(N^{2}\)) to O(1) by utilizing properties of analog electronics with Kirchhoff's laws.
### _Background and Challenges_
In all currently proposed variations of analog hardware, we find a certain weakness that causes a trade-off between the device and the qualities required for neural network implementation. Phase Change Memory (PCM) is a device variation which has shown promise in the field [1]. As for the weaknesses of PCM devices, they are susceptible to various kinds of noise: write/programming noise, read noise, and weight/conductance drift. Write and read noise are applied when the respective action is performed on the analog weight, whilst the weight drift is tied to the inherent material properties of the PCM device. A concise description of a PCM device can be found in aihwkit's documentation [1].
"A PCM device consists of a small active volume of phase-change material sandwiched between two electrodes. In PCM, data is stored by using the electrical resistance contrast between a high-conductive crystalline phase and a low-conductive amorphous phase of the phase-change material. The phase-change material can be switched from low to high conductive state, and vice-versa, through applying electrical current pulses. The stored data can be retrieved by measuring the electrical resistance of the PCM device."
These noise types can drive weights away from their intended values, leading to network inaccuracies. Existing techniques to counteract this include noise-aware training, differential weight representation, and global weight drift compensation.
### _Our Contribution_
We propose a solution that combines an extension of an existing technique for differential weight representation, weight quantization, and a novel self-correcting mechanism. Our algorithm minimizes the loss between the original and quantized weights by finding optimal quantization bins through simulated annealing. The self-correcting mechanism further ensures long-term network stability.
## 2 Method
### _Theoretical setup_
We employ a two-element differential representation of each weight, visualized in the simplified diagram in Figure 1. In reality, we would also need source lines and converters between the analog and digital domains.
This structure has previously been employed in analog neural networks [14], as it reduces the effects of weight drift/perturbations that affect the hardware. If all weights are shifted by 5 mV, a weight represented by a difference will stay the same.
The inputs get sent to both the positive and negative weight for that input, which themselves accumulate onto the output line using Kirchoff's laws. All weights in the system are represented with positive resistances, which means that we can subtract the accumulated output from the negative output line from the positive one. This lets us have negative weights represented by positive numbers in the system, which often are required by neural networks to work efficiently.
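As a minimal numerical illustration of this differential scheme (our own sketch, not tied to any particular hardware API), each output line accumulates current according to Kirchhoff's laws, and the signed result is the difference between the positive and negative lines:

```
import numpy as np

# Two inputs feeding one differential output pair.
x = np.array([0.3, -0.7])        # input voltages
g_pos = np.array([0.9, 0.1])     # positive-line conductances (non-negative)
g_neg = np.array([0.2, 0.6])     # negative-line conductances (non-negative)

y_pos = g_pos @ x                # accumulation on the positive line
y_neg = g_neg @ x                # accumulation on the negative line
y_mat = y_pos - y_neg            # effective signed weights: g_pos - g_neg

assert np.isclose(y_mat, (g_pos - g_neg) @ x)
```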
### _Network selection and training procedure_
Firstly, we need to select a problem and train a neural network to perform the task. For this experiment, we use the MNIST dataset and a simple convolutional neural network with a known architecture that has previously been successfully implemented on crossbar arrays [14].
We then impose some constraints during the training of the neural network. This includes adding a penalty to the loss function for weights below a value \(\epsilon\). This discourages weights \(w<\epsilon\), which would otherwise require either very small bins or a very small difference between two bins in our architecture. Both of these are unwanted, as noise will affect those weights in a much larger proportion to their size. We can visualize the effect of this constraint in Figure 2.
We also add a constraint on large weights above a value \(\theta\). This is due to the conductance drift of the weights, which grows with the weight magnitude when using PCM-based crossbar arrays. A sketch of such a regularizer is shown below.
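The following is our own minimal formulation of these two penalties (the values of \(\epsilon\), \(\theta\), and the penalty strengths are illustrative assumptions, not values from the experiments). It pushes small non-zero weights up toward \(\epsilon\) and penalizes magnitudes above \(\theta\):

```
import torch

def weight_range_penalty(model, eps=0.05, theta=1.0,
                         lam_small=1e-3, lam_large=1e-3):
    """Discourage |w| < eps (noise-fragile) and |w| > theta (drift-prone)."""
    penalty = 0.0
    for w in model.parameters():
        a = w.abs()
        small = torch.clamp(eps - a, min=0.0)    # positive only when |w| < eps
        large = torch.clamp(a - theta, min=0.0)  # positive only when |w| > theta
        penalty = penalty + lam_small * small.sum() + lam_large * large.sum()
    return penalty

# Usage during training: loss = task_loss + weight_range_penalty(model)
```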
### _Simulated Annealing for Bin Optimization_
Then, we perform simulated annealing to find the best bins for the task. The constraints for the optimization are as follows:
* **Quantization Levels Constraint:** We should find two sets, one positive and one negative set. Each set should have \(N\) distinct quantization levels and together create a set of bins.
* **Bin Constraint:** The possible bins in any found quantization set are given by \(SQ=\{d_{\text{pos}},d_{\text{neg}},(d_{\text{pos}_{i}}-d_{\text{neg}_{j}}\,|\,i \in d_{\text{pos}},j\in d_{\text{neg}})\}\).
* **Divisibility Constraint:** Each quantization level in a set must be divisible by the smallest factor in the set. They do not, however, have to be linearly distributed.
* **SNR Constraint:** The step-multiple values \(d[0]_{pos}\) and \(d[0]_{neg}\) in the set should be larger than the write noise \(\delta\) constraint which depends on the hardware and the programming procedure.
* **Bin Difference Constraint:** The difference between the smallest positive and negative bins in the set (\(abs(d[0]_{pos}-d[0]_{neg})\)) must be larger than read noise error threshold \(\epsilon\).
In the above, the parameters are:
* \(N\): the number of distinct quantization levels in each set.
* \(d_{\text{pos}}\): the set of positive quantization levels.
* \(d_{\text{neg}}\): the set of negative quantization levels.
* \(\delta\): the write noise constraint, which depends on the hardware and the programming procedure.
* \(\epsilon\): the read noise error constraint, which depends on the trade-off between write noise and read noise.
We provide details on the cost function, cooling schedule, and selection mechanism, showcasing how this approach leads to optimal bin selection.
The goal of the algorithm is to minimize the error between the original weights and the weights quantized using a found quantization set combination. Below is a pseudocode implementation of the algorithm:
In step 7 of **Algorithm 1**, it is possible to choose whether to enforce a linear constraint on the found bins, such that every bin is the previous bin with the smallest factor \(N[0]\) added. A linear constraint can simplify the search but might not find the best result.
```
1:Input: Neural net weights \(W\), positive and negative parts \(W_{\text{pos}}\), \(W_{\text{neg}}\), number of bins \(N\)
2:Initialize \(d_{i}\) for \(W\in\{W_{\text{pos}},W_{\text{neg}}\}\)
3:Create quantization sets and calculate set \(SQ\)
4:Initialize current error, best error, and temperature \(T\)
5:for iteration \(i\) in range(iterations) do
6: Update temperature \(T\)
7: Perturb positive and negative bases
8: Propose new positive and negative bins
9: Compute error for proposed bins
10:if proposed error \(<\) current error or random value \(<\exp\left(-\frac{\text{proposed error}-\text{current error}}{T}\right)\)then
11: Update current positive base, negative base, and error
12:if proposed error \(<\) best error then
13: Update best positive base, negative base, and error
14:endif
15:endif
16:endfor
17: Return best positive bins, negative bins
```
**Algorithm 1** Optimization of Bins Using Simulated Annealing
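A compact Python rendering of Algorithm 1 is given below. This is our own sketch: for simplicity it enforces the optional linear-multiple constraint from the previous paragraph, and the bin count, cooling schedule, and perturbation scale are illustrative assumptions. It searches over the smallest positive and negative step sizes, builds the representable set \(SQ\) (positive levels, negative levels, and their pairwise differences), and scores a proposal by the quantization error it induces on the weight vector:

```
import numpy as np

def build_sq(d_pos, d_neg):
    """Representable levels: +d_pos, -d_neg, and differences d_pos_i - d_neg_j."""
    diffs = np.subtract.outer(d_pos, d_neg).ravel()
    return np.unique(np.concatenate([d_pos, -d_neg, diffs]))

def quant_error(w, levels):
    """Mean squared error of snapping each weight to its nearest level."""
    idx = np.abs(w[:, None] - levels[None, :]).argmin(axis=1)
    return float(np.mean((w - levels[idx]) ** 2))

def anneal_bins(w, n_bins=8, iters=2000, t0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    mult = np.arange(1, n_bins + 1)
    base_p = base_n = 0.1                        # smallest step sizes (d[0])
    cur = best = quant_error(w, build_sq(base_p * mult, base_n * mult))
    best_p, best_n = base_p, base_n
    for i in range(iters):
        t = t0 * (1.0 - i / iters)               # linear cooling schedule
        p = abs(base_p + rng.normal(0.0, 0.01))  # perturb positive base
        n = abs(base_n + rng.normal(0.0, 0.01))  # perturb negative base
        err = quant_error(w, build_sq(p * mult, n * mult))
        if err < cur or rng.random() < np.exp(-(err - cur) / max(t, 1e-9)):
            base_p, base_n, cur = p, n, err      # accept the proposal
            if err < best:
                best_p, best_n, best = p, n, err
    return best_p * mult, best_n * mult
```

In a full implementation, the SNR and bin-difference constraints would additionally reject proposals with \(d[0]<\delta\) or \(|d[0]_{pos}-d[0]_{neg}|<\epsilon\).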
### _Self-Correction Mechanism_
In our framework, we introduce a self-repairing mechanism that leverages the quantized weight levels to correct drifts in analog weight representations over time. The mechanism consists of four main components: an error threshold, a correction condition, a weight identification process, and an on-chip correction methodology.
#### 2.4.1 Error Threshold
To quantify the deviation in the network's state, we define an error threshold based on the modulus of the weight values. Specifically, if any weight value modulus grows beyond \(\frac{N}{3}\) of its initial quantized level, the weight is considered a candidate for adjustment. Here, \(N\) is the quantization level multiple that was used initially for that specific weight. The error threshold comes with a power/accuracy trade-off. If we wait too long before re-adjusting, a weight might drift to an extent where the closest multiple is no longer the initial multiple. This leads to an irreversible degradation in the overall network performance for the remainder of its operational lifetime, as we will no longer be able to recover the initial network values until we reset the weights using a different mechanism.
#### 2.4.2 Correction Condition
The network-wide condition for triggering the self-correction mechanism is based on global error estimation. By periodically pulsing an identity matrix through the network and accumulating the outputs, we can compare the current state of each layer against a baseline recorded at \(t=0\). If the sum of the absolute differences across all weights exceeds a pre-defined global threshold, the self-correction mechanism is triggered.
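A sketch of this global check for a single linear layer is given below. It is our own illustration: the probing interface and threshold are assumptions, and the layer is assumed to have no bias term (otherwise the bias must be subtracted from the probe response). Pulsing an identity matrix through the layer recovers its effective weight matrix, which is compared against the baseline recorded at \(t=0\):

```
import torch

def layer_response(layer, in_dim):
    """Probe effective weights by pulsing an identity matrix through the layer."""
    with torch.no_grad():
        return layer(torch.eye(in_dim))  # row i holds the weights seen by input i

def drift_exceeded(layer, baseline, threshold):
    """True if the accumulated absolute deviation from t=0 exceeds the threshold."""
    current = layer_response(layer, baseline.shape[0])
    return (current - baseline).abs().sum().item() > threshold

# At t = 0:  baseline = layer_response(layer, in_dim)
# Later:     if drift_exceeded(layer, baseline, global_threshold): trigger repair
```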
#### 2.4.3 Weight Identification
Once the correction condition is met, we proceed to identify the weights contributing most to the drift. This is done by selecting groups of weights, for example a layer of weights, and comparing the identity matrix output with its initial output at \(t=0\). If we have exceeded a layer-based drift difference threshold \(dt\), we move on to the correction.
In some cases, it can be cheaper to simply reprogram the entire network, but in other cases, where we have noise-sensitive layers such as CNNs, it might be sufficient to only reprogram those.
Fig. 1: Simplified view of the two-element representation of two weights(\(w_{1}\) and \(w_{2}\)) and two inputs \(x_{1}\) and \(x_{2}\) creating a matrix multiplication output \(y_{mat}\) by using the difference between the positive and negative lines.
#### 2.4.4 On-Chip Correction Methodology
To correct the identified weights, we use short programming pulses to nudge them back to their original multiple-based states. The magnitudes and durations of these pulses are determined based on the difference between the current and target states of each weight, as well as the current magnitude of the weight. This can be performed by an on-chip pulse generator [22].
#### 2.4.5 Advantages and Applications
The self-correction mechanism enhances the network's resilience to hardware-induced drifts, thus making it more robust for long-term deployments in edge computing scenarios. Moreover, the mechanism opens the door to more aggressive quantization strategies, as minor errors introduced by quantization can be periodically corrected, further reducing the computational and storage overhead.
### Compression
Another benefit of the chosen multiple-quantization is that we can efficiently apply compression techniques, such as those used in weight clustering, to the weights. We can represent the positive and negative layers with integer matrices in the range [0, M], where M is the largest multiple-factor used. This allows us to use N-bit representations of the weights whenever \(M<2^{N}\); for example, 4-bit weight representations if M \(<\) 16.
The lower representation range of values yields more repetition in the weight matrices, and allows for more aggressive compression of the weights.
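For instance (a minimal sketch using our notation), with \(M<16\) each layer can be stored as two 4-bit integer matrices of multiples plus two scalar step sizes, from which the signed weights are reconstructed:

```
import numpy as np

d_pos, d_neg = 0.08, 0.06   # smallest positive and negative step sizes

# 4-bit multiples (values in [0, 15], so M < 16 fits in 4-bit storage).
m_pos = np.array([[3, 0], [15, 7]], dtype=np.uint8)
m_neg = np.array([[0, 5], [2, 7]], dtype=np.uint8)

# Reconstructed signed weight matrix: positive line minus negative line.
w = d_pos * m_pos.astype(float) - d_neg * m_neg.astype(float)
```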
### Testing methodology
The accuracies of the self-repairing network and the hardware-aware trained network are tested in time steps of 5 minutes. During every step, noise is added to the weights. At every timestep, the self-repairing neural network is probed for repair if a threshold on the cumulative layer error is exceeded. We compare the networks over 20 timesteps and note the accuracies in Figure 3 and Figure 4. The evaluation loop is sketched below.
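The following is our own outline of this loop (the repair hook is a caller-supplied function implementing the identity-probe test and multiple-snapping of Section 2.4; we also assume aihwkit's `drift_analog_weights`, which drifts an analog model's weights to a given time after programming):

```
import torch

def accuracy(model, loader):
    """Top-1 accuracy over a test loader."""
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

def run_drift_trial(model, loader, check_and_repair, steps=20, t_step=300.0):
    """Drift in t_step-second increments; record accuracy before/after repair."""
    history = []
    model.eval()
    for step in range(1, steps + 1):
        model.drift_analog_weights(t_step * step)  # aihwkit: drift to time t
        acc_drift = accuracy(model, loader)
        repaired = check_and_repair(model)         # hypothetical repair hook
        acc_final = accuracy(model, loader) if repaired else acc_drift
        history.append((acc_drift, acc_final))
    return history
```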
## 3 Results
We train the candidate CNN in a traditional fashion and achieve an F1 accuracy of 97.7% on the MNIST dataset. We then apply the quantization and visualize the distribution of the weights in Figure 2. We can see that, due to the constraints on the network weights enforced by the loss function, we find the first bins at a distance \(\epsilon\) away from 0. This quantization of weights keeps our initial accuracy of 97.7%.
We then evaluate the network over 20 time-steps of 300 seconds of drift each. At every time-step, we let the self-repairing network adjust its weights to the closest positive and negative multiples.
Alongside the self-repairing network, we train a hardware-aware analog neural network using the same network architecture and plot its performance over the same timespan in Figure 4.
Both models were trained with the same analog noise configuration. The PCM noise configuration is given in Appendix A.
## 4 Discussion
We find that the self-repairing network manages to keep its accuracy stable once it corrects itself, but between corrections its accuracy has a much wider variance compared to the network trained with analog-aware training.
The constraint that weights be larger than \(\delta\) allows us to represent small weights as a combination of a positive and a negative weight. This is useful, as shown by [22], where the proposed on-chip pulse generator has a significantly larger pulse error for smaller pulses: pulses of size 100 nA have up to 6% average programming error, whilst pulses of 1.28 mA have a 0.2% average error.
Note that we do have to keep in mind that, since we are working with small numbers, a high enough read noise will, due to propagation of uncertainty, produce a much larger percentage error if a positive and a negative bin are close to each other. It is therefore important that we put a constraint on how close the positive and negative bin multiples are allowed to be.
### Layer-specific findings
We confirm the findings of [1], which claim that CNNs are more susceptible to noise in analog form. This was observed as a larger loss of accuracy when drift was applied to the CNN layers compared to the dense layers.
We also find that there is inter-layer dependency between the layers given the type and amount of noise applied. aihwkit's **drift_analog_weights** function drifts weights equally if the same RPU config is given, so the layers often drift in a similar stochastic fashion. Consequently, adjusting one single layer that has exceeded a drift threshold \(dt\) will often degrade the performance, as the inter-layer weight representation is dramatically changed instead of stochastically translated by the noise. An approach where the entire net
Figure 3: Digital weights over time with drift applied every 300 seconds. The red points signify accuracy after drift, while the blue points after a dotted red line signify the accuracy after adjustment.
Figure 2: Scatter plot of amount of weights in each quantized bin in the set SQ with the best found quantization. Red dots signify negative weights, blue positive weights and green the weights defined using combinations of a positive and a negative weight.
work is re-adjusted can sometimes be better given the network and the conditions.
### Future research
In order to assess the methodology in practice, both techniques need to be implemented in hardware, and the results should be compared over extended periods of time.
A more robust approach would be to investigate the feasibility of an algorithm that combines the two methodologies, meaning that we do hardware-aware training whilst keeping the weights constrained close to multiples.
Another interesting area to explore is self-repair using bit-sliced network weights. This means that a network is represented with weights that are sliced into binary representations of 0s and 1s. This would make the weight adjustment scheme much simpler and more flexible for various weights, at the cost of more required hardware connections per weight.
Lastly, it would be interesting to see how the methodology performs on other types of analog memory architectures, such as RRAM, which does not suffer from the same kinds of noise as PCM-based architectures.
## 5 Conclusion
We show that by using a constrained bin weight scheme, we can regain lost performance over time using a weight-multiple adjustment over the positive and negative parts of each weight. We do, however, note that without analog-aware training for PCM modules, the network becomes less stable. Despite regaining the accuracy, the drift affects the results between resets more negatively than in purely analog-aware trained neural networks.
Figure 4: Analog-awarely trained weights over time with drift applied every 300 seconds.
## Appendix A Analog Noise Configuration
The following Python code snippet provides the configuration for the analog noise in the Phase-Change Memory (PCM) model. It sets up various parameters including weight noise, clip type, and drift compensation.
```
# Module paths below are version-dependent assumptions for aihwkit.
from aihwkit.simulator.configs import InferenceRPUConfig
from aihwkit.simulator.configs.utils import (
    WeightClipType,
    WeightModifierType,
    WeightNoiseType,
)
from aihwkit.inference import GlobalDriftCompensation, PCMLikeNoiseModel

rpu_config = InferenceRPUConfig()
rpu_config.forward.out_res = -1.0  # Turn off (output) ADC discretization.
rpu_config.forward.w_noise_type = WeightNoiseType.ADDITIVE_CONSTANT
rpu_config.forward.w_noise = 0.02  # Short-term w-noise.

rpu_config.clip.type = WeightClipType.FIXED_VALUE
rpu_config.clip.fixed_value = 1.0
rpu_config.modifier.pdrop = 0.03  # Dropconnect.
rpu_config.modifier.type = WeightModifierType.ADD_NORMAL  # Fwd/bwd weight noise.
rpu_config.modifier.std_dev = 0.1
rpu_config.modifier.rel_to_actual_wmax = True

# Inference noise model.
rpu_config.noise_model = PCMLikeNoiseModel(g_max=25.0)

# Drift compensation.
rpu_config.drift_compensation = GlobalDriftCompensation()
```
|
2310.10656 | VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy
Leakage Fingerprints | Deploying Machine Learning as a Service gives rise to model plagiarism,
leading to copyright infringement. Ownership testing techniques are designed to
identify model fingerprints for verifying plagiarism. However, previous works
often rely on overfitting or robustness features as fingerprints, lacking
theoretical guarantees and exhibiting under-performance on generalized models.
In this paper, we propose a novel ownership testing method called VeriDIP,
which verifies a DNN model's intellectual property. VeriDIP makes two major
contributions. (1) It utilizes membership inference attacks to estimate the
lower bound of privacy leakage, which reflects the fingerprint of a given
model. The privacy leakage fingerprints highlight the unique patterns through
which the models memorize sensitive training datasets. (2) We introduce a novel
approach using less private samples to enhance the performance of ownership
testing.
Extensive experimental results confirm that VeriDIP is effective and
efficient in validating the ownership of deep learning models trained on both
image and tabular datasets. VeriDIP achieves comparable performance to
state-of-the-art methods on image datasets while significantly reducing
computation and communication costs. Enhanced VeriDIP demonstrates superior
verification performance on generalized deep learning models, particularly on
table-trained models. Additionally, VeriDIP exhibits similar effectiveness on
utility-preserving differentially private models compared to non-differentially
private baselines. | Aoting Hu, Zhigang Lu, Renjie Xie, Minhui Xue | 2023-09-07T01:58:12Z | http://arxiv.org/abs/2310.10656v1 | # VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints
###### Abstract
Deploying Machine Learning as a Service gives rise to model plagiarism, leading to copyright infringement. Ownership testing techniques are designed to identify model fingerprints for verifying plagiarism. However, previous works often rely on overfitting or robustness features as fingerprints, lacking theoretical guarantees and exhibiting under-performance on generalized models. In this paper, we propose a novel ownership testing method called VeriDIP, which verifies a DNN model's intellectual property. VeriDIP makes two major contributions. (1) It utilizes membership inference attacks to estimate the lower bound of privacy leakage, which reflects the fingerprint of a given model. The privacy leakage fingerprints highlight the unique patterns through which the models memorize sensitive training datasets. (2) We introduce a novel approach using less private samples to enhance the performance of ownership testing.
Extensive experimental results confirm that VeriDIP is effective and efficient in validating the ownership of deep learning models trained on both image and tabular datasets. VeriDIP achieves comparable performance to state-of-the-art methods on image datasets while significantly reducing computation and communication costs. Enhanced VeriDIP demonstrates superior verification performance on generalized deep learning models, particularly on table-trained models. Additionally, VeriDIP exhibits similar effectiveness on utility-preserving differentially private models compared to non-differentially private baselines.
Fingerprinting, neural networks, ownership protection, membership inference, differential privacy.
## 1 Introduction
Deep learning plays an important role in various tasks such as image recognition [1, 2, 3], natural language processing [4], and speech recognition [5]. Building a sophisticated deep neural network (DNN) requires a significant amount of annotated training data, which often contains private user information, demands powerful computing resources, and necessitates machine learning expertise. These unique DNN models represent valuable intellectual property (IP) and require copyright protection. However, deploying DNN models' APIs for user queries introduces the risk of model extraction attacks, leading to copyright infringement [6, 7, 8]. A model extraction attack efficiently transfers the functionality of a _victim model_ to a _stolen copy_ using limited query answers. Additionally, attackers, who may be insiders with full access to the victim models, employ techniques such as distillation [9], fine-tuning [10], or pruning [11, 12] in an attempt to evade tracking.
Proof-of-ownership serves as an adequate protection mechanism against model stealing attacks, ensuring accountability for any theft of copyright-protected models. However, proving ownership of a neural network poses challenges due to the stochastic nature of the training and stealing process [13]. Many stealing mechanisms have minimal side effects on the model's functionality but disable the proof-of-ownership mechanism [9, 14, 15]. Methods for proving ownership of DNN models can be broadly classified into two categories: **watermark embedding (WE)**[16, 17, 18, 19, 20, 21, 22] and **ownership testing (OT)**[23, 24, 25, 15].
The WE methods embed customized watermarks into DNN models during the training stage and then verify the ownership by confirming the presence of the respective
Fig. 1: Ownership testing framework for DNN models.
watermarks from given suspect models. However, WE techniques have certain limitations, including tampering with the training process, potential side effects on model functionality, and vulnerability to watermark erasure attacks [9, 10, 26]. In contrast, the OT methods extract the intrinsic characteristics (fingerprints) of DNN models, making them non-invasive and more resilient to adaptive attacks [15, 23]. In this paper, our focus is on the OT technique to verify the copyright of DNN models.
To the best of our knowledge, existing ownership testing solutions rely on two types of DNN fingerprints -- _model robustness_ and _model overfitting_, which capture the uniqueness of DNN models. Robustness-based solutions utilize adversarial examples to delineate the decision boundary of both the victim model and its stolen copies, and then compare the percentage of matched answers [15, 24, 25]. However, techniques that enhance a DNN model's robustness against adversarial attacks, such as adversarial training [27], undermine the performance of ownership testing. On the other side, overfitting-based OT solutions, such as dataset inference [23], leverage the observation that the stolen copies exhibit a higher level of overfitting to the training set of the victim models, thereby extracting the overfitting level as fingerprints. While these approaches are innovative and effective, they have certain limitations. The verification process is expensive in both communication and computation, requiring thousands of queries to the stolen copy to obtain dozens of minimal adversarial noise vectors as fingerprints [23]. Continuous inquiries may raise suspicions of model theft and result in rejection of the inquiries [28]. Furthermore, the performance of overfitting-based solutions is negatively affected by the model's generalization ability.
To address these problems, we propose a novel ownership testing approach to Verify a DNN model's Intellectual Property (VeriDIP). The key feature of VeriDIP is its utilization of **privacy leakage** fingerprints, instead of relying on overfitting [23] or robustness [15, 24, 25] metrics to indicate model uniqueness.
Using privacy leakage fingerprints, VeriDIP consists of four components for verifying a DNN model's intelligence property. First, motivated by the aforementioned properties of the privacy leakage of a given model, we utilize MI attacks to estimate the lower bound of privacy leakage, which serves as the extracted fingerprint of a given model. Then we employ hypothesis testing on the extracted fingerprint to determine the likelihood of a suspect model being a stolen copy of the victim model. However, we may encounter the issue of "fingerprint fading" when dealing with well-generalized models that exhibit minimal privacy leakage against MI attacks. To tackle this problem, we introduce an enhanced version of VeriDIP where MI attacks query the suspect models using less private samples to extract the worst-case privacy leakage fingerprints of the suspect models. These less private samples face higher privacy leaking risks against MI attacks, enabling the enhanced VeriDIP to extract stronger privacy leakage fingerprints. To identify the less private data in advance, we train numerous shadow models to investigate the impact of each training sample on the decision boundary of DNN models. The data that significantly influences the models will be considered as the less private data.
We extensively evaluate VeriDIP on two image datasets (FMNIST and CIFAR-10) and two tabular datasets (Adult and Health) against three types of model stealing attacks: model extraction attack, model distillation attack, and fine-tune attack. The evaluation results for FMNIST and CIFAR demonstrate that VeriDIP can publicly authenticate all stolen copies while exposing less than 5 training samples, with a significantly reduced number of queries to the suspect models compared to [23]. Despite the models trained on tabular datasets having minimal overfitting, VeriDIP is still capable of publicly authenticating all stolen copies, at the cost of exposing dozens of training samples, whereas previous works [15, 23, 24, 25] are unable to do so.
In this work, we also address an open question raised in [23] regarding the effectiveness of VeriDIP on differentially private DNN models. We demonstrate that VeriDIP's success rate is constrained by a stringent privacy budget, such as \(\varepsilon=0.1\). However, we find that VeriDIP remains effective even for utility-preserving differentially private models, such as those with a higher privacy budget, e.g., \(\varepsilon=0.5\).
To summarize, our contributions are as follows:
* We propose VeriDIP, a model ownership testing (OT) approach for DNN models. VeriDIP utilizes the membership inference (MI) attack to estimate the privacy leakage of DNN models, which serves as the fingerprint of a given (victim/target) model.
* We further enhance VeriDIP by utilizing less private samples to estimate the worst-case privacy leakage, thereby strengthening the extracted fingerprints of DNN models.
* We perform extensive evaluations on VeriDIP using various DNN models trained with tabular or image benchmarks, against three types of model stealing attacks. The results show that VeriDIP can publicly authenticate all stolen copies with minimal verification costs.
* We theoretically and experimentally analyze the connection between the effectiveness of VeriDIP and differential privacy (DP) privacy protection. The results demonstrate that as long as a DP model is utility-preserving, VeriDIP can effectively protect its copyright.
## 2 Related Work
In this section, we review model stealing attacks, well-known ownership testing techniques and membership inference attacks. We list the comparison of different copyright protection methods for DNN models in Table I.
### _Model stealing attacks_
**Black-box attacks.** Tramer et al. [6] proposed the first model extraction attack that trains a stolen copy using the predictions of victim models. It requires black-box access to the victim model and some unlabeled datasets from the same distribution. According to Shafieinejad et al. [33], existing watermark embedding techniques [32, 14] and some fingerprinting solutions [24, 10] cannot withstand model extraction attacks. Distillation [34] was first proposed to distill the knowledge of teacher models into student models and later extended as an attack against methods that protect model copyrights [9]. Distilled models are often able to evade copyright tracking, as demonstrated in works such as Cao et al. [24] and Lukas et al. [25].
**White-box attacks.** White-box attackers have full access to the victim model's parameters, and their goal is to modify these parameters in order to disable copyright protection mechanisms. For instance, fine-pruning [11] is a defensive method against DNN model backdooring: it prunes backdoored neurons and then fine-tunes the model. Consequently, fine-pruning could potentially serve as an attack against backdoor-based model watermarking techniques, such as those proposed in works like Adi et al. [14, 32]. More recently, Chen et al. [10] proposed an advanced fine-tuning technique that aims to erase model watermarks. They initially increase the learning rate to make the victim model forget unnecessary details about watermarks and then gradually restore the utility of the model by reducing the learning rate step by step. While these attacks are effective in disabling watermark embedding techniques, it remains unclear whether they pose a threat to the copyright protection provided by ownership testing methods.
### _Ownership testing_
Ownership testing (OT) techniques, also referred to as DNN fingerprinting techniques, are an emerging area of research that focuses on extracting the intrinsic characteristics of DNN models to track stolen copies. Currently, the research on OT is limited, with the majority of studies relying on two fingerprint characteristics: robustness [24, 25, 15] and overfitting [23].
IPGuard [24] proposes using model robustness as fingerprints. The authors observe that stolen copies exhibit similar predictions to the victim model for most adversarial data points. While IPGuard can successfully identify white-box derivation attacks, such as fine-tuning, it is not effective against black-box extraction attacks, such as model extraction attack [33], where the attacker retrains the model from scratch, resulting in a larger disparity in the decision surface compared to the victim model. To address this limitation, Lukas et al. [25] propose the use of transferable adversarial samples to extract DNN fingerprints. This approach successfully defends against white-box derivation attacks and most black-box extraction attacks, but it is vulnerable to transfer learning and adversarial training. More recently, Chen et al. [15] propose a testing framework for verifying ownership. Instead of relying on single metrics, they utilize multiple dimensions and combine the results to determine ownership. Their black-box metrics also use robustness as fingerprints, similar to IPGuard [24], making them susceptible to black-box extraction attacks. Their white-box metrics utilize the robustness of inner neuron outputs, requiring the defender to have knowledge of all parameters of stolen copies.
Dataset inference (DI) [23] exploits the overfitting of DNN models to their training data as a means to demonstrate that stolen copies exhibit similar overfitting fingerprints to the victim models. They employ minimal adversarial noise that leads to model misclassification [35] as fingerprints. DI is capable of identifying all white-box and black-box model variations [23]. However, this approach has some limitations. Firstly, it cannot be directly applied to DNN models trained on tabular data since some of the features are categorical, making it challenging to perform most adversarial example attacks [36]. Secondly, DI requires querying the suspect model thousands of times, which significantly increases the risk of detector attacks [28]. Thirdly, the effectiveness of DI on differentially private (DP) [37] DNN models remains unanswered. Hence, this paper aims to propose a novel ownership testing approach that addresses these limitations by achieving high verification efficiency and protecting the intellectual property of DP models.
### _Membership inference attacks_
Shokri et al. proposed the first membership inference (MI) attack in 2017 [29], which successfully guesses the membership of the training data with black-box access to the target DNN models. Since then, researchers have made efforts to enhance the attack performance and reduce the background information required by MI attackers. More recently, some researchers have utilized MI attacks as an
| Approaches | Type | Method | Non-invasive | DP connection | ME | KD | FT | Adaptive attacks |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Adi et al. [32] | watermarking | backdoor | ✗ | N/A | ✗ [33] | ✗ [9] | ✗ [10] | N/A |
| Zhang et al. [14] | watermarking | backdoor | ✗ | N/A | ✗ [33] | ✗ [9] | ✗ [10] | N/A |
| Chen et al. [15] | fingerprinting | robustness | ✓ | N/A | ✗ [15] | ✗ [15] | ✓ | ADV training [27] |
| Cao et al. [24] | fingerprinting | robustness | ✓ | N/A | ✗ [25] | ✗ [25] | ✓ | ADV training [27] |
| Lukas et al. [25] | fingerprinting | robustness | ✓ | N/A | ✓ | ✓ | ✓ | ADV training [27] |
| Maini et al. [23] | fingerprinting | over-fitting | ✓ | N/A | ✓ | ✓ | ✓ | detector attacks [28] |
| VeriDIP (this work) | fingerprinting | privacy leakage | ✓ | ✓ | ✓ | ✓ | ✓ | secure for now |

TABLE I: Comparison of different DNN model copyright protection methods. ME: model extraction attack; KD: knowledge distillation attack; FT: fine-tuning attack; ADV: adversarial; DP: differential privacy. ME, KD, and FT are model stealing attacks. Adaptive attacks aim to weaken the effect of ownership test approaches.
empirical measurement for estimating the privacy leakage of DNN models [30, 31, 38]. This approach has inspired us to leverage the MI advantage as a lower bound for estimating model privacy leakage and consider privacy leakage characteristics as the model fingerprint. Additionally, other studies have revealed the varying exposure risks of training data against MI attacks [31, 39], which have also motivated us to extract stronger fingerprints.
## 3 Ownership Testing Problem
In this section, we first formulate the ownership testing (OT) problem, then discuss the capabilities of adversaries and defenders, followed by the backgrounds of differential privacy and membership inference.
### _Notations_
Let \(\mathbf{z}=(\mathbf{x},y)\) be a data point, where \(\mathbf{x}\) is the feature vector and \(y\) is the corresponding label. \(\mathcal{D}\) represents the data distribution from which \(\mathbf{z}\) is drawn. We assume that the victim model is trained on the training set \(S(\sim\mathcal{D}^{n})\) consisting of \(n\) data points. The loss function \(\ell(f,\mathbf{z})\) measures the difference between the model predictions \(f(\mathbf{x})\) and the ground-truth label \(y\). We provide a summary of the notations used in this work in Table II.
### _Problem Formulation_
Figure 1 depicts a general framework of ownership testing (OT) for DNN models, comprising three components: machine learning as a service (MLaaS), model stealing attacks, and defenses.
Particularly, MLaaS provides users with access to pre-built machine learning (DNN) models through APIs, allowing the users to integrate machine learning capabilities into their applications and perform complex tasks through simple queries. However, to fully utilize the potential of the pre-built models, attackers might attempt to steal the models by mimicking the behaviors of regular users (querying the models through the open APIs) to infer/extract the model details. To protect the copyright of (the victim) DNN models, an OT approach extracts and compares the fingerprints of a suspect model and the victim model to produce a test outcome, indicating whether the suspect model is a stolen copy of the victim model.
In this paper, we aim to design a model OT algorithm \(\mathcal{V}\), defined as follows
\[\mathcal{V}(f,\mathcal{P}_{S},\mathcal{B})\rightarrow\{0,1\}, \tag{1}\]
where \(\mathcal{P}_{S}\) is an auxiliary dataset containing carefully chosen adversarial examples [24, 15, 25] or a subset of training examples [23], and \(\mathcal{B}\) represents the publicly available knowledge about the model [24, 15, 25] or about the private training data [23]. In the algorithm \(\mathcal{V}(f,\mathcal{P}_{S},\mathcal{B})\), the verifier first extracts the inherent fingerprint of the suspect model \(f\) using \(\mathcal{P}_{S}\) and \(\mathcal{B}\), and then determines the ownership based on whether it aligns with the owner's expectations. The algorithm \(\mathcal{V}(f,\mathcal{P}_{S},\mathcal{B})\) outputs \(1\) when the verifier believes the suspect model \(f\) is a stolen copy of the victim model \(f_{S}\), and vice versa. The algorithm \(\mathcal{V}(f,\mathcal{P}_{S},\mathcal{B})\) should be highly accurate, efficient in computation and communication, and privacy-preserving (safe to audit in public).
### _Threat Model_
We specify the capabilities of the attacker and verifier (defender) shown in Figure 1.
**Attacker.** We consider a wide variety of model stealing attacks, including both black-box access and white-box access capabilities. However, the adversary does not have access to the entire (private) training set of the victim model.
* Black-box attacker. Attackers, who are external entities attempting to exploit the functionality of the victim model, employ various attacks such as model extraction attacks [6] and model distillation attacks [9].
* White-box attacker. Attackers, who are insiders with full access rights to the victim model, aim to evade tracking and detection. They employ various attacks such as model fine-tuning [10] and model fine-pruning [11, 12].
**Verifier.** As for defense, our focus is on black-box verifiers who have limited query access to the suspect model. There are two main reasons for this choice. First, when the verifier is a third-party agency, sharing excessive information such as training data or model parameters can pose risks to the model owner or data contributors. Second, allowing an unlimited number of verification queries can potentially trigger detector attacks [28]. In a detector attack, the unauthorized model API may refuse to respond or provide random responses upon detecting an attempt to verify copyright. For example, in the work by Maini et al. [23], the victim model is queried 1500 times for a single data point to collect minimal adversarial noise vectors for ownership determination, which significantly increases the likelihood of triggering a detector attack (refer to Table I).
| Notation | Description |
| --- | --- |
| \(\mathbf{x}\) | feature vector |
| \(y\) | the label corresponding to \(\mathbf{x}\) |
| \(\mathbf{z}\) | a data point \(\mathbf{z}=(\mathbf{x},y)\) |
| \(\alpha\) | significance level for hypothesis testing |
| \(\mathcal{D}\) | data distribution |
| \(S\) | private training dataset |
| \(f\) | DNN model |
| \(n_{S}\) | number of exposed samples during public copyright verification |
| \((\epsilon,\delta)\) | DP parameters (privacy budget, failure probability) |
| \((C,\sigma)\) | DP hyper-parameters (clipping threshold, noise multiplier) |
| \(P\) | probability of not being a stolen model |
| \(Y\) | ownership testing outcome: Stolen or Not stolen |
| \(\ell(f,\mathbf{z})\) | loss function; outputs the prediction loss of model \(f\) on sample \(\mathbf{z}\) |
| \(\mathcal{V}(f,\mathcal{P}_{S},\mathcal{B})\) | OT algorithm; outputs whether a suspect model \(f\) is trained on \(S\), where \(\mathcal{P}_{S}\) is an auxiliary dataset to \(S\) and \(\mathcal{B}\) is background knowledge |
| \(\mathcal{A}(\mathbf{z},f,\mathcal{D})\) | MI attack algorithm; outputs whether a sample \(\mathbf{z}\) was used to train model \(f\); \(\mathcal{D}\) is auxiliary information |
| \(\mathrm{Adv^{M}}(\mathcal{A},f,\mathcal{D})\) | membership advantage algorithm; outputs the membership advantage of algorithm \(\mathcal{A}\) on model \(f\); \(\mathcal{D}\) is auxiliary information |

TABLE II: Summary of Notations
### _Membership Advantage_
Yeom et al. [38] demonstrated that the privacy leakage of a DNN model measured by MI attacks serves as a lower bound for estimating the model's privacy budget. Additionally, recent research by Hyland et al. [40] highlights that not only intentionally noisy DNN models provide privacy protection; ordinary DNN models also possess a certain level of privacy protection due to the inherent randomness introduced by stochastic gradient descent (SGD). Consequently, it becomes possible to assess the potential privacy leakage of a non-differentially private DNN model by estimating the corresponding privacy budget associated with the non-DP model.
#### 3.4.1 Differential Privacy
Recall the definition of differential privacy [37]: a learning algorithm \(f:\mathcal{D}\mapsto\mathcal{R}\) satisfies (\(\epsilon,\delta\))-DP if, for all adjacent databases \(D\) and \(D^{\prime}\) that differ in one record, and all possible outputs \(\mathcal{O}\subseteq\mathcal{R}\), the following inequality holds.
\[\Pr[f(D)\in\mathcal{O}]\leq\exp(\epsilon)\Pr[f(D^{\prime})\in\mathcal{O}]+\delta, \tag{2}\]
where the probabilities are taken only over the randomness of the learning algorithm \(f\). A greater \(\epsilon\) indicates a lesser degree of privacy protection for the training data, meaning that the machine learning algorithm \(f\) may potentially compromise more privacy of the sensitive database \(D\).
If the verifier is able to quantify the privacy risks associated with a particular learning algorithm on a specific private training set, this value can be used as a fingerprint for identifying plagiarism. This is because the target model and its pirated version are likely to exhibit higher privacy leakage of their training data compared to independently trained models. By analyzing and comparing the privacy risks of different models, the verifier can detect potential instances of plagiarism or unauthorized use of the training data. However, it is noteworthy that directly estimating the value of \(\epsilon\) for deployed non-DP DNN models on given datasets is intractable. This is because it would require traversing all possible adjacent datasets and evaluating all possible outputs to compute the maximum divergence. This process becomes computationally expensive and impractical, especially for large-scale datasets and complex models.
#### 3.4.2 Membership Inference
Membership inference (MI) attacks [29] aim to predict whether a particular example is part of a training dataset. Recently, some researchers [41, 42] have proposed utilizing MI attacks as a means to measure privacy leakage. Other works [30, 38] have theoretically established that the privacy leakage measured by MI attacks serves as a lower bound for \(\epsilon\). In this work, we leverage the concept of membership advantage [38] and utilize it as a fingerprint for our model. We provide a review of the related definition below.
Before getting into membership advantage, we first define the MI attack following [29, 38].
**Definition 1** (Membership inference experiment \(\mathrm{Exp}^{\mathrm{M}}(\mathcal{A},f_{S},\mathcal{D})\)).: _Let \(\mathcal{A}\) be a membership inference attack algorithm, \(f_{S}\) is a machine learning model trained on \(S\sim\mathcal{D}^{n}\). The procedure of the membership inference experiment is as follows:_
1. _Toss a coin at random_ \(b\leftarrow\{0,1\}\)_;_
2. _If_ \(b=1\)_, the sample_ \(\boldsymbol{z}\) _is drawn from_ \(S\)_, denoted as_ \(\boldsymbol{z}\sim S\)_; otherwise (_\(b=0\)_),_ \(\boldsymbol{z}\) _is drawn from the distribution_ \(\mathcal{D}\)_, denoted as_ \(\boldsymbol{z}\sim\mathcal{D}\)_;_
3. \(\{0,1\}\leftarrow\mathrm{Exp}^{\mathrm{M}}(\mathcal{A},f_{S},\mathcal{D})\)_. The experiment_ \(\mathrm{Exp}^{\mathrm{M}}(\mathcal{A},f_{S},\mathcal{D})\) _returns_ \(1\) _if the attacker correctly guesses_ \(b\)_, denoted as_ \(\mathcal{A}\left(\boldsymbol{z},f_{S},\mathcal{D}\right)=b\)_, and_ \(0\) _otherwise._
In Definition 1, the attack algorithm \(\mathcal{A}\left(\boldsymbol{z},f_{S},\mathcal{D}\right)\) takes as input an arbitrary sample \(\boldsymbol{z}\), the model \(f_{S}\), and the public data distribution \(\mathcal{D}\), and outputs a judgment about whether the sample \(\boldsymbol{z}\) was used to train the model \(f_{S}\).
Membership advantage [38] represents the advantage of an MI attacker's ability to guess the decision boundary of training samples and other samples over random guess.
**Definition 2** (Membership Advantage).: _The advantage of the MI attack algorithm \(\mathcal{A}\) is defined as_
\[\mathrm{Adv}^{\mathrm{M}}(\mathcal{A},f,\mathcal{D})=2\Pr\left[\mathrm{Exp}^{ \mathrm{M}}(\mathcal{A},f,\mathcal{D})=1\right]-1. \tag{3}\]
Membership advantage ranges from \(0\) to \(1\), where \(0\) indicates no advantage (equivalent to random guessing), and \(1\) represents a full advantage. The right-hand side of Equation (3) can be empirically determined by computing the difference between the true positive rate (TPR) and the false positive rate (FPR) of the attack algorithm \(\mathcal{A}\). That is,
\[\begin{array}{l}\mathrm{Adv}^{\mathrm{M}}(\mathcal{A},f,\mathcal{D})=\Pr[ \mathcal{A}=1\mid b=1]-\Pr[\mathcal{A}=1\mid b=0]\\ =\mathop{\mathbb{E}}_{\boldsymbol{z}\sim S}\left[\mathcal{A}\left(\boldsymbol{z },f,\mathcal{D}\right)\right]-\mathop{\mathbb{E}}_{\boldsymbol{z}\sim\mathcal{D }}\left[\mathcal{A}\left(\boldsymbol{z},f,\mathcal{D}\right)\right].\end{array} \tag{4}\]
It can be observed from the above equation that the membership advantage is dependent on the specific implementation approach of the attack algorithm \(\mathcal{A}\left(\boldsymbol{z},f,\mathcal{D}\right)\), and various options have been proposed in the literature, including [29, 38, 43, 44].
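For concreteness, the sketch below (our own illustration) instantiates \(\mathcal{A}\) as the loss-threshold attack of Yeom et al. [38], which predicts "member" whenever the model's loss on a sample falls below a threshold \(\tau\), and estimates the advantage as TPR minus FPR following Equation (4):

```
import numpy as np

def mi_attack(losses, tau):
    """Yeom-style threshold attack: predict member (1) when the loss is small.

    losses : numpy array of per-sample losses of the target model
    """
    return (losses < tau).astype(int)

def membership_advantage(loss_members, loss_nonmembers, tau):
    tpr = mi_attack(loss_members, tau).mean()       # Pr[A = 1 | b = 1]
    fpr = mi_attack(loss_nonmembers, tau).mean()    # Pr[A = 1 | b = 0]
    return tpr - fpr                                # Eq. (4)
```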
## 4 VeriDIP
In this section, we present our ownership testing approach for DNN models called VeriDIP, which performs hypothesis testing for extracted privacy leakage fingerprints. To illustrate, we first introduce the framework for _basic VeriDIP_, followed by a detailed fingerprint extraction algorithm. Next, we propose _enhanced VeriDIP_ to improve the performance of the basic VeriDIP for more generalized DNN models. Finally, we discuss the relationship between VeriDIP and differential privacy techniques.
### _Ownership Testing Algorithm_
We present the construction of ownership testing algorithm \(\mathcal{V}(f,\mathcal{P}_{S},\mathcal{B})\rightarrow\{0,1\}\) (see Equation (1)) that outputs whether the suspect model \(f\) is a stolen copy of the victim model. Let \(S\sim\mathcal{D}^{n}\) be a private training set, \(f_{S}\) be the IP-protected (victim) DNN model trained on \(S\), \(\mathcal{P}_{S}=\{\boldsymbol{z}\mid\boldsymbol{z}\in S\}_{n_{S}}\) be an auxiliary dataset associated with \(S\) that contains \(n_{S}\) random samples from the private training set \(S\), and \(\mathcal{B}=\{\mathcal{A},\mathcal{D}\}\) be the public background
knowledge that contains an MI attack algorithm \(\mathcal{A}\) and the publicly available data distribution \(\mathcal{D}\). We show the proposed ownership testing algorithm in Algorithm 1.
Algorithm 1 performs a one-tailed hypothesis test on the observed membership advantage fingerprints for stolen model \(f\) on a given private training set \(S\). We first give formal definitions of the membership advantage fingerprints of a DNN model \(f\) as follows:
**Definition 3** (Membership advantage fingerprint).: _We define the fingerprint of a DNN model \(f\) as its privacy leakage against the private training set \(S\), which is empirically computed as \(\mathcal{F}(f\mid S)=\mathrm{Adv}^{\mathrm{M}}(\mathcal{A},f,\mathcal{D})\)._
Empirically, \(\mathcal{F}\) represents the membership advantage of the attacker over a random guesser. If \(f\) is independent of \(f_{S}\), then \(\mathcal{F}(f\mid S)\) should be close to 0. Therefore, we set the null hypothesis as \(\mathcal{F}(f\mid S)=0\), which indicates that the suspect model \(f\) is not a stolen copy of the victim model \(f_{S}\). On the other hand, a larger value of \(\mathcal{F}(f\mid S)\) in the alternative hypothesis indicates that the suspect model \(f\) discloses more privacy of the private training set \(S\) of \(f_{S}\) and is more likely to be a stolen copy of \(f_{S}\).
In the verification process, the verifier computes the likelihood of the observed fingerprints. First (step 1 in Algorithm 1), the verifier randomly selects \(n_{S}\) training samples from the private dataset \(S\) and \(n_{S}\) samples from the public data distribution \(\mathcal{D}\). Then (step 2 in Algorithm 1), the fingerprint estimate is computed empirically as follows:
\[\mathcal{F}^{*}(f\mid S)=\mathop{\mathbb{E}}_{\mathbf{z}\sim D_{0}}\left[\mathcal{ A}\left(\mathbf{z},f,\mathcal{D}\right)\right]-\mathop{\mathbb{E}}_{\mathbf{z}\sim D_{1}} \left[\mathcal{A}\left(\mathbf{z},f,\mathcal{D}\right)\right]. \tag{5}\]
Next (step 3 in Algorithm 1), the verifier computes the p-value for the observed fingerprint. The output p-value represents the likelihood that the suspect model is not a stolen copy. It computes
\[P=\Pr[Z>\mathcal{F}^{*}(f\mid S)], \tag{6}\]
where \(Z\sim\mathcal{N}(0,\sigma)\), with \(\sigma\) estimated from the observed \(\mathcal{F}^{*}(f\mid S)\). Thus, for stolen models, a lower p-value indicates better OT performance. Finally (step 4 in Algorithm 1), we give the judgment based on the pre-defined significance level \(\alpha\).
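The verification core of Algorithm 1 (steps 2-4) amounts to a one-tailed z-test. The sketch below uses illustrative names and assumes the attack returns per-sample scores in \([0,1]\); the null standard error is estimated from the observed scores, as in the proof of Theorem 1 below.

```python
import numpy as np
from scipy.stats import norm

def ownership_test(scores_members, scores_public, alpha=0.01):
    """One-tailed test of H0: Adv^M(A, f, D) = 0 (Algorithm 1, sketch).

    `scores_members` / `scores_public` hold the attack outputs
    A(z, f, D) on n_S private training samples and n_S public samples.
    """
    n_s = len(scores_members)
    fingerprint = np.mean(scores_members) - np.mean(scores_public)  # Eq. (5)
    se = np.sqrt((np.var(scores_members) + np.var(scores_public)) / n_s)
    p_value = 1.0 - norm.cdf(fingerprint / se)  # Eq. (7)
    return p_value, p_value < alpha  # reject H0 => claim ownership
```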
The use of hypothesis testing in VeriDIP serves the purpose of enabling public verifiability. Hypothesis testing allows for a reduction in the number of exposed training samples during ownership verification while maintaining a satisfactory level of verification confidence. If the verifier (as shown in Figure 1) is a third-party agency or if the verification process is required to be executed publicly, directly exposing the entire private training set \(S\) to the public would lead to severe privacy violations.
We then theoretically analyze factors that influence the performance of our OT algorithm.
**Theorem 1**.: _The p-value returned by Algorithm 1 is negatively correlated with the extracted model fingerprint estimate and with the sample size \(n_{S}\)._
Proof.: In Algorithm 1, assume \(H_{0}\) is true; then \(\mathrm{Adv}^{\mathrm{M}}(\mathcal{A},f,\mathcal{D})=0\). Let the observed standard deviations of \(\mathcal{A}\left(\mathbf{z},f,\mathcal{D}\right)\) be \(\sigma_{0}\) and \(\sigma_{1}\) for \(\mathbf{z}\in S\) and \(\mathbf{z}\in\mathcal{D}\), respectively. According to the central limit theorem [45], \(\mathop{\mathbb{E}}_{\mathbf{z}\sim D_{0}}\left[\mathcal{A}\left(\mathbf{z},f,\mathcal{D}\right)\right]-\mathop{\mathbb{E}}_{\mathbf{z}\sim D_{1}}\left[\mathcal{A}\left(\mathbf{z},f,\mathcal{D}\right)\right]\) approximately follows the Gaussian distribution \(\mathcal{N}(0,\sqrt{\frac{\sigma_{0}^{2}+\sigma_{1}^{2}}{n_{S}}})\), where \(D_{0}\) and \(D_{1}\) are randomly sampled \(n_{S}\)-sized datasets from \(S\) and \(\mathcal{D}\), respectively. Thus, the p-value is computed as:
\[\begin{split}P&=1-\Phi\left(\frac{\left(\mathop{\mathbb{E}}_{\mathbf{z}\sim D_{0}}\left[\mathcal{A}\left(\mathbf{z},f,\mathcal{D}\right)\right]-\mathop{\mathbb{E}}_{\mathbf{z}\sim D_{1}}\left[\mathcal{A}\left(\mathbf{z},f,\mathcal{D}\right)\right]\right)*\sqrt{n_{S}}}{\sqrt{\sigma_{0}^{2}+\sigma_{1}^{2}}}\right)\\ &=1-\Phi\left(\frac{\mathcal{F}^{*}(f\mid S)*\sqrt{n_{S}}}{\sqrt{\sigma_{0}^{2}+\sigma_{1}^{2}}}\right),\end{split} \tag{7}\]
where \(\Phi\) is the cumulative distribution function of the standard normal distribution.
Referring to Equation (7), \(\sigma_{0}\) and \(\sigma_{1}\) are constants specific to the neural network at hand, so the p-value is driven by the fingerprint estimate and the sample size. Hence, well-generalized models (with less overfitting, and thus smaller fingerprints) may make it difficult to obtain satisfactory ownership judgments when only a limited number of sensitive training samples can be exposed (smaller \(n_{S}\)). Conversely, a more potent membership inference (MI) attack increases the likelihood of obtaining positive judgments for plagiarism.
### _Fingerprints Extraction_
In this section, we provide a comprehensive explanation of the implementation process for estimating the membership advantage fingerprint defined in Definition 3. The goal is to compute the membership advantage \(\mathrm{Adv}^{\mathrm{M}}(\mathcal{A},f,\mathcal{D})=\mathop{\mathbb{E}}_{\bm{z}\sim S}\left[\mathcal{A}\left(\bm{z},f,\mathcal{D}\right)\right]-\mathop{\mathbb{E}}_{\bm{z}\sim\mathcal{D}}\left[\mathcal{A}\left(\bm{z},f,\mathcal{D}\right)\right]\) (refer to Equation (4)). It is worth noting that any existing black-box membership inference (MI) attack algorithm can be utilized as a fingerprint extractor. In this paper, we discuss two specific instantiations.
For illustrative purposes, we begin by considering a simple MI attack, the _global threshold MI attack_ [38], defined as follows.
**Definition 4** (Global MI attack \(\mathcal{A}\) [38]).: _Assume the loss of a machine learning model \(f\) is bounded by a constant \(B\), denoted as \(\ell(f,\bm{z})\leq B\). Data \(\bm{z}=(\bm{x},y)\) are sampled from the training set \(S\) or the data distribution \(\mathcal{D}\). Given the model \(f\), a sample \(\bm{z}=(\bm{x},y)\), and the public data distribution \(\mathcal{D}\), the MI attack algorithm \(\mathcal{A}_{\ell}\left(\bm{z},f,\mathcal{D}\right)\) outputs 1 with probability \(1-\ell(f,\bm{z})/B\)._
The membership advantage fingerprint is estimated as follows:
\[\begin{split}&\mathcal{F}(f\mid S)\\ &=\operatorname{Adv}^{\text{M}}(\mathcal{A}_{\ell},f,\mathcal{D})\\ &=\mathbb{E}\left[\frac{\ell(f,\bm{z})}{B}\mid b=0\right]-\mathbb{E}\left[\frac{\ell(f,\bm{z})}{B}\mid b=1\right]\\ &=\operatorname*{\mathbb{E}}_{\bm{z}\sim\mathcal{D}}\left[\frac{\ell(f,\bm{z})}{B}\right]-\operatorname*{\mathbb{E}}_{\bm{z}\sim S}\left[\frac{\ell(f,\bm{z})}{B}\right].\end{split} \tag{8}\]
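Since the attack of Definition 4 outputs 1 with probability \(1-\ell(f,\bm{z})/B\), the fingerprint of Equation (8) reduces to a difference of average normalized losses. A minimal sketch follows, assuming a per-sample `loss_fn` and a known loss bound `B` (both names are illustrative).

```python
import numpy as np

def global_fingerprint(loss_fn, f, members, non_members, B):
    """Estimate F(f | S) under the global threshold MI attack (Eq. (8)).

    E[A] = E[1 - loss/B], so the member/non-member difference of A
    equals the non-member/member difference of normalized losses.
    """
    loss_in = np.mean([loss_fn(f, z) for z in members]) / B
    loss_out = np.mean([loss_fn(f, z) for z in non_members]) / B
    return loss_out - loss_in  # larger gap => more privacy leakage
```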
We also consider the latest (to the best of our knowledge) membership inference (MI) attack, known as the _per-sample threshold MI attack_ [31]. This attack takes a different approach: it trains multiple shadow models to learn, for each sample, the discrepancy between the model's loss distribution when the sample is part of the training set and when it is not. For each data point \(\bm{z}\), the attack fits two Gaussian distributions, \(\mathcal{N}\left(\mu_{\text{in}},\sigma_{\text{in}}^{2}\right)\) and \(\mathcal{N}\left(\mu_{\text{out}},\sigma_{\text{out}}^{2}\right)\), to the confidence distribution on the logit scale. Subsequently, a likelihood-ratio test is performed to compute \(L(\bm{z})=\frac{\Pr\left[\text{logit}(p_{\bm{z}})\mid\mathcal{N}(\mu_{\text{in}},\sigma_{\text{in}}^{2})\right]}{\Pr\left[\text{logit}(p_{\bm{z}})\mid\mathcal{N}(\mu_{\text{out}},\sigma_{\text{out}}^{2})\right]}\), where \(\text{logit}(p)=\ln(\frac{p}{1-p})\) and \(p_{\bm{z}}=\exp(-\ell(f,\bm{z}))\). A large value of \(L(\bm{z})\) indicates a higher likelihood that the data point \(\bm{z}\) is a member. For this attack, the membership advantage is computed as the difference between the true positive rate (TPR) and the false positive rate (FPR) of the MI attack algorithm.
Note that while the per-sample threshold MI attack may be computationally inefficient due to the need to train multiple shadow models for each batch of MI queries, it is particularly suitable for model ownership verification tasks. This is because the ownership testing verifier has prior knowledge of the data used for conducting MI attacks, allowing the shadow models to be pre-trained in advance.
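A sketch of the per-sample likelihood-ratio score for a single query point, following the description above; `losses_in` and `losses_out` denote the query point's losses under shadow models trained with and without it (all names are illustrative assumptions).

```python
import numpy as np
from scipy.stats import norm

def logit_confidence(loss):
    """Map loss to logit(p_z) with p_z = exp(-loss) (cross-entropy)."""
    p = np.exp(-np.asarray(loss))
    p = np.clip(p, 1e-12, 1 - 1e-12)  # numerical safety
    return np.log(p / (1.0 - p))

def per_sample_score(loss_target, losses_in, losses_out):
    """Likelihood ratio L(z); larger values indicate membership."""
    obs = logit_confidence(loss_target)
    lik_in = norm.pdf(obs, np.mean(logit_confidence(losses_in)),
                      np.std(logit_confidence(losses_in)) + 1e-12)
    lik_out = norm.pdf(obs, np.mean(logit_confidence(losses_out)),
                       np.std(logit_confidence(losses_out)) + 1e-12)
    return lik_in / lik_out
```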
### _Enhanced VeriDIP_
Recall that we have previously suspected that more generalized models may yield unsatisfactory ownership judgments due to the negative correlation between input membership advantage fingerprints and output p-values, as shown in Equation (7). To address this issue, we propose an enhanced version of VeriDIP that mitigates the reliance on the effectiveness of VeriDIP's MI attack success rates. The key idea is to utilize _the worst-case_ privacy leakage instead of _the average-case_ privacy leakage as model fingerprints for ownership verification. While average privacy risks are computed using a set of randomly sampled training samples, the worst-case privacy leakage focuses on measuring the privacy risks of a set of less private training samples. It serves as a tighter lower bound for \(\epsilon\) defined in differential privacy. Therefore, we believe it constitutes an enhanced fingerprint for identifying stolen models.
Recently, several studies have demonstrated that certain training samples exhibit lower levels of privacy than others when subjected to MI attacks [31, 46]. These samples with reduced privacy are well-suited for estimating worst-case privacy leakage. We define less private data in model \(f\) as follows:
**Definition 5** (Less private Data).: _Let \(S\) be the training set for the DNN model \(f_{S}\). We define a data point \(\mathbf{z}\in S\) as a less private data point if the model trained on the set \(S\setminus\mathbf{z}\) is significantly different from \(f_{S}\)._
**Search for the less private data.** Directly measuring the difference between two DNN models, as required by Definition 5, can be challenging. However, if we assume that removing a data point \(\bm{z}\) from the training set has the most significant impact on the model's prediction for that very data point, the problem becomes more manageable: we can compare the losses on \(\bm{z}\) of the models trained with and without \(\bm{z}\). This can be expressed as follows:
\[\eta(\mathbf{z})=\frac{\ell(f_{S\setminus\mathbf{z}},\mathbf{z})}{\ell(f_{S},\mathbf{z})}. \tag{9}\]
The data point with a larger \(\eta(z)\) value is less private.
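Equation (9) translates directly into code. A minimal sketch, assuming a per-sample `loss_fn` and two models trained with and without \(\bm{z}\) (leave-one-out); all names are illustrative.

```python
def privacy_score(loss_fn, f_without_z, f_with_z, z):
    """eta(z) from Eq. (9): ratio of the losses on z for the model
    trained without z and the model trained on the full set S.
    A larger eta(z) marks z as less private."""
    return loss_fn(f_without_z, z) / loss_fn(f_with_z, z)
```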
To provide an example of the less private data, we conducted a search within the training set of DNN models to identify the sample with the highest \(\eta(\mathbf{z})\) score. The behavior of a less private data point and a more private data point is demonstrated in Figure 2. The x-axis represents a transformation of the loss \(S^{-1}(\exp(-\ell(f,\mathbf{z})))\) following [31], where \(S^{-1}\) denotes the inverse of the Sigmoid function. This transformation ensures that the transformed loss distribution is approximately normal. The y-axis represents the frequency of discrete loss values. From Figure 2, it is evident that the prediction capability of DNN models is particularly sensitive to the presence or absence of certain data points, as illustrated in Figure 2(b) compared to Figure 2(a). The absence of data point 2 significantly reduces the model's confidence in predicting the label of data point 2. Therefore, data point 2 corresponds to the less private data we are specifically interested in identifying.
Through further analysis, we found that less private data points are significantly rarer than other data points. To assess their prevalence, we traversed all training data points of the four benchmarks and computed the corresponding \(\eta(\mathbf{z})\) value for each data point. The distributions of \(\eta(\mathbf{z})\) for each database are depicted in Figure 3. Notably, all distributions exhibit a _long tail_ pattern: less private data points do exist, but they constitute only a small fraction of the training set. Consequently, if we were to draw random samples to estimate privacy leakage, encountering a less private data point would be
Fig. 2: Loss score distribution comparison for data “IN” the model and “OUT” of the model, Adult database. The response of DNN models is more sensitive to the absence of data 2 than to that of data 1.
a rare occurrence. Therefore, identifying these less private data points is crucial in obtaining robust privacy leakage fingerprints.
In summary, for the enhanced VeriDIP, our approach involves initially identifying a set of several less private data points, similar to "Data 2" in Figure 2(b), for each victim model beforehand. During the verification phase, the verifier utilizes these data points to extract worst-case privacy leakage fingerprints, rather than relying on average-case privacy leakage, as evidence for claiming ownership. It is worth noting that training shadow models to identify the less private data incurs additional computational costs. However, it is important to highlight that, for a given victim model, only one dataset of less private data is required. This dataset can be used for an unlimited number of ownership verifications for the respective victim model. Consequently, the additional cost associated with training the shadow models does not pose a significant challenge for the enhanced VeriDIP approach.
### _Bounding Model's Ownership via Differential Privacy Budget_
Maini et al. [23] raised an open question regarding the effectiveness of ownership testing methods based on over-fitting metrics when applied to differentially private DNN models. In this paper, we aim to address this question by investigating the behavior of the p-value in Algorithm 1 for \(\epsilon\)-DP DNN models, where \(\epsilon\) represents the privacy budget.
Differential privacy techniques [37], considered the de facto standard for privacy protection, provide an upper bound on the advantage of MI attacks [38] by definition. Consequently, they also place a lower bound on the p-value obtained through the model ownership proof algorithm, such as Algorithm 1. These techniques introduce a privacy budget \(\epsilon\) to govern the level of privacy protection afforded to DNN models (see Section 3.4.1). A smaller value of \(\epsilon\) corresponds to stronger privacy protection.
Let \(f_{\epsilon}\) be a DNN model that satisfies \(\epsilon\)-DP and \(\mathcal{A}\) be the global MI attack algorithm in Definition 4. According to [38], the membership advantage satisfies \(\mathrm{Adv}^{\mathrm{M}}(\mathcal{A},f_{\epsilon},\mathcal{D})\leq\exp(\epsilon)-1\). Substituting this inequality into Equation (7), we have
\[\begin{split} P&=1-\Phi\left(\frac{\left(\underset{ \boldsymbol{z}\sim\mathcal{D}_{0}}{\mathbb{E}}\left[\mathcal{A}\left( \boldsymbol{z},f_{\epsilon},\mathcal{D}\right)\right]-\underset{\boldsymbol{z }\sim\mathcal{D}_{1}}{\mathbb{E}}\left[\mathcal{A}\left(\boldsymbol{z},f_{ \epsilon},\mathcal{D}\right)\right]\right)}{\sqrt{\frac{\sigma_{0}^{2}+\sigma _{1}^{2}}{n_{S}}}}\right)\\ &\geq 1-\Phi\left(\frac{\left(\exp(\epsilon)-1\right)*\sqrt{n_{S}}}{ \sqrt{\sigma_{0}^{2}+\sigma_{1}^{2}}}\right).\end{split} \tag{10}\]
Therefore, when the privacy budget \(\epsilon\) and sample size \(n_{S}\) are fixed, the minimum p-value is determined accordingly. We plot the minimum p-value as a function of the privacy budget \(\epsilon\) for specific values of \(n_{S}\). In our analysis, we consider three choices for \(n_{S}\), namely 10, 20, and 100. The corresponding results are illustrated in Figure 4.
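The bound in Equation (10) is straightforward to evaluate numerically. A sketch follows, with \(\sigma_{0}\) and \(\sigma_{1}\) treated as inputs, since they are network-specific constants; unit variances are only a placeholder default.

```python
import numpy as np
from scipy.stats import norm

def min_p_value(epsilon, n_s, sigma0=1.0, sigma1=1.0):
    """Lower bound on the p-value for an eps-DP model (Eq. (10))."""
    adv_bound = np.exp(epsilon) - 1.0  # max MI advantage under eps-DP [38]
    z = adv_bound * np.sqrt(n_s) / np.sqrt(sigma0**2 + sigma1**2)
    return 1.0 - norm.cdf(z)
```

The dataset-specific values quoted in the text (e.g., \(P\geq 0.156\) for CIFAR-10 at \(\epsilon=0.1\), \(n_{S}=10\)) correspond to the empirically estimated \(\sigma_{0}\) and \(\sigma_{1}\) rather than the placeholder defaults.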
**Differential privacy budgets negatively impact the performance of VeriDIP.** In Figure 4(a), for the CIFAR-10 dataset, when \(\epsilon=0.1\) and \(n_{S}=10\), the corresponding p-value is \(P\geq 0.156\). This implies that if the DNN model is \(0.1\)-differentially private, the ownership testing algorithm, using only \(10\) samples at a significance level of \(\alpha=0.01\), cannot claim ownership of this model due to the presence of differential privacy protection. This holds true regardless of the effectiveness of the deployed MI attack. By increasing \(\epsilon\) to \(0.5\), the lower bound of the p-value decreases to \(P\geq 1.15\times 10^{-14}\). Fortunately, in practice, it is uncommon to train machine learning models with excessively restrictive privacy budgets such as \(\epsilon=0.1\), as doing so would significantly compromise the utility of the machine learning model. In the upcoming section, we will experiment with a reasonable privacy budget on a wide range of models and datasets to explore the trade-offs between privacy protection and model ownership protection.
## 5 Evaluations
In this section, we begin by introducing the experimental settings. We then conduct a comprehensive evaluation of both the basic and enhanced VeriDIP methods, comparing their performance to the state-of-the-art Dataset Inference (DI) [23] approach. Finally, we explore the effectiveness of VeriDIP when applied to DP DNN models.
Fig. 4: Lower bound of p-values against the privacy budget \(\epsilon\).
Fig. 3: Score difference distribution for CIFAR-10, FMNIST, Health, Adult datasets.
### _Experimental Setup_
To begin with, we briefly describe the datasets and the configurations of the machine learning models used in the experiments.
**Datasets.** We use four widely used datasets in our experimental evaluation: CIFAR-10 1, FMNIST 2, Adult 3, and Health 4. Specifically, CIFAR-10 and FMNIST are two image datasets used by recent studies in evaluating WE and OT approaches [10, 23, 25, 32]; Adult and Health are two tabular datasets on which we can train models that are (almost) resilient to MI attacks, representing (almost) the worst-case scenario for VeriDIP (Algorithm 1).
Footnote 1: [https://www.cs.toronto.edu/~kriz/cifar.html](https://www.cs.toronto.edu/~kriz/cifar.html)
Footnote 2: [https://github.com/zalandoresearch/fashion-mnist](https://github.com/zalandoresearch/fashion-mnist)
Footnote 3: [https://archive.ics.uci.edu/ml/datasets/adult](https://archive.ics.uci.edu/ml/datasets/adult)
Footnote 4: [https://www.dshs.texas.gov/THCIC/Hospitals/Download.shtm](https://www.dshs.texas.gov/THCIC/Hospitals/Download.shtm)
* **CIFAR-10:** CIFAR-10 consists of \(32\times 32\) color images of \(10\) real-world objects, with \(5,000\) instances of each object class.
* **FMNIST:** Fashion MNIST consists of \(28\times 28\) grayscale images, each associated with a label from \(10\) classes, with \(7,000\) instances of each object class.
* **Adult:** The US Adult Census dataset comprises \(48,842\) entries, with each entry containing \(13\) features. These features are utilized to infer whether an individual's income exceeds 50K/year or not.
* **Health:** The Heritage Health dataset consists of \(139,785\) physician records and insurance claims, with each record containing \(250\) features. The objective is to predict ten-year mortality by binarizing the Charlson Index, using the median value as a cutoff.
**Neural networks.** Following existing works [10, 32], we train CIFAR-10 using the ResNet-18 architecture and the SGD optimizer with a stepped learning rate. The initial learning rate is set to \(0.01\) and is divided by ten every 20 epochs. For the FMNIST dataset, we train a convolutional neural network (CNN) using the Adam optimizer. For the Adult and Health tabular datasets, we utilize a 4-layer perceptron with the Adam optimizer. The learning rate for all Adam optimizers is set to \(10^{-4}\). The batch size is set to \(50\) for CIFAR-10 and FMNIST, and to \(500\) for Adult and Health.
**Model stealing attacks.** We discussed the attackers in OT experiments in Section 3.3. In this section, we consider three types of model stealing attacks that are commonly used for evaluating the effectiveness of copyright protection approaches. Note that the fine-prune attack [11] presented in Figure 1 is not specifically designed to defeat model copyright protection but rather belongs to a category of defenses against model backdoor attacks. Therefore, to ensure fairness in the experiments, we did not include it in our evaluation.
* **Model extraction (ME) attack [6, 33].** The ME attack retrains a model from scratch by minimizing the loss between the predictions of the stolen copy and the teacher's predictions.
* **Knowledge distillation (KD) [9].** The KD attack retrains a model from scratch by minimizing the distance between the teacher's and student's soft predictions plus the cross-entropy loss between the student's prediction and the ground-truth label \(y\). The student model is the stolen copy.
* **Fine-tuning (FT) [10].** The FT attack continues training the victim model for a while to modify its original decision boundary. It first uses a large learning rate to erase the original decision boundary, then gradually reduces the learning rate to restore the model's prediction accuracy. According to the results in [10], this attack is effective at removing all watermarks.
The ME and KD attacks are black-box attacks, while FT is a white-box attack. We use the open-source code and the same hyperparameters as the existing works on ME [33], KD [9], and FT [10]; their loss functions and hyperparameters are listed in Table III, and a sketch of the ME and KD objectives is given below. According to [10], carefully tuning the learning rate can remove all model watermarks. Our aim is to determine how effective these attacks are at disturbing model fingerprints.
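For reference, the stolen-copy objectives can be sketched in PyTorch; the temperature `T` and weighting `w` below are illustrative assumptions, with the exact loss functions and hyperparameters given in Table III and the cited works.

```python
import torch.nn.functional as F

def me_loss(student_logits, teacher_logits):
    """Model extraction: match the teacher's predictions (no labels)."""
    return F.kl_div(F.log_softmax(student_logits, dim=1),
                    F.softmax(teacher_logits, dim=1),
                    reduction="batchmean")

def kd_loss(student_logits, teacher_logits, labels, T=4.0, w=0.5):
    """Knowledge distillation: soft-target KL plus ground-truth CE."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return w * soft + (1.0 - w) * hard
```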
**MI attack algorithm.** The implementation of the global threshold MI attack follows the method proposed by Yeom et al. [38]. As for the per-sample threshold MI attack, there are two implementations: online and offline. We use the open-source code of the online implementation [31] since it demonstrates better attack performance.
**Reproduction of Dataset Inference (DI) [23].** DI proposes to use “prediction margins” as fingerprints to verify model ownership. The prediction margins are obtained by performing adversarial attacks on the suspect models. We use their black-box implementation (_Blind Walk_) since it is more consistent with our assumptions about the attacker's capabilities. Moreover, _Blind Walk_ has better verification performance and lower computational costs than their white-box implementation (_MinGD_) [23].
### _Metrics_
We use two indicators to evaluate the performance of the model OT algorithm:
* **p-value.** The p-value is the output of Algorithm 1, inherited from [23]. The p-value indicates the probability that a suspect model is not a stolen copy. The smaller this metric, the more likely the copyright verification judgment is to be correct.
* **Exposed sample size \(n_{S}\).** \(n_{S}\) denotes the minimum number of training samples that must be exposed in the verification phase to successfully verify the copyright of stolen copies. Thus, for a fixed \(\alpha\), a smaller value of \(n_{S}\) indicates better privacy protection.
### _Performance of Baseline Models: Victim and Stolen Models_
We begin by training machine learning models on the four datasets and present the training set size (TrainSize), test set
size (TestSize), training set accuracy (TrainAcc), test set accuracy (TestAcc), and accuracy difference (AccDiff) in Table IV. It can be observed that all victim/target models achieve satisfactory accuracy. To improve the performance of CIFAR-10, we employ the data augmentation technique [2]. This involves randomly flipping and cropping the images to generate new samples, thereby increasing the diversity of the training set and enhancing the generalization capabilities of the trained machine learning models. As depicted in Table IV, the models trained on tabular datasets (i.e., Adult and Health) exhibit better generalization (with smaller TrainAcc and TestAcc differences) compared to the models trained on image datasets (i.e., CIFAR-10 and FMNIST).
We also present the performance of stolen models obtained using the ME attack, KD attack, and FT attack in Table V. We assume that attackers possess a randomly sampled subset of the private trainset \(S\), comprising \(40\%\) of the data. It is important to note that the ME attacker does not have access to ground-truth labels, as per its definition. The FT attack, as described in [10], initially perturbs the original decision boundary of the model using a large learning rate and subsequently reduces the learning rate to restore the model's usability. In general, the performance of FT models tends to be superior to that of the victim model, whereas the usability of ME and KD models is slightly inferior to that of the victim model.
### _VeriDIP Performance_
#### 5.4.1 Fingerprints Distribution
For the MI advantage to serve as a valid fingerprint, its value should be high for the victim model and approach \(0\) for independent models. Here, an independent model refers to a model that is trained separately and is not derived from the victim model. To represent independent models, we consider two scenarios: (1) models trained on disjoint but identically distributed data, specifically validation data, and (2) models trained on data from a different distribution, involving other datasets. For our experiment, we train a total of \(50\) victim models and \(50\) independent models for each database. Subsequently, we plot the distribution of extracted model fingerprints for both victim models (positives) and independent models (negatives). The resulting distributions are presented in Figure 5.
**The experimental results confirm that the MI advantage is a valid fingerprint**. Overall, we observe that the MI advantage of all target models can be clearly distinguished from that of the independent models. Specifically, the MI advantage of all independent models approaches \(0\), aligning with our expectations. Notably, in Figure 5(b), the MI advantage serves as a valid fingerprint even for Health models, for which the AUROC of the global threshold MI attack is only \(0.5032\) (performance close to random guessing).
Regardless of whether the training set of an independent model is sampled from the same data distribution or from a different one, using MI advantages as fingerprint estimates enables their identification as negative models. Figure 5(a), Figure 5(b), and Figure 5(c) depict independent models trained on validation data from the same distribution, while Figure 5(d) shows independent models trained on the MNIST dataset (representing a different distribution). In all these benchmarks, the fingerprints extracted from independent models are consistently close to \(0\).
#### 5.4.2 Basic VeriDIP
In this section, we evaluate the performance of VeriDIP, as proposed in Algorithm 1, on the four datasets. We first focus on the basic VeriDIP, which utilizes "random samples" to estimate the average-case privacy leakage. The basic VeriDIP, coupled with the global threshold MI attack, is denoted as \(\mathcal{V}_{G}\), while the basic VeriDIP employing the per-sample threshold MI attack is denoted as \(\mathcal{V}_{P}\). Stolen copies obtained through model extraction attacks (ME), knowledge distillation (KD), and fine-tuning (FT) are considered positive instances in our evaluation.
We report the p-values returned by Algorithm 1 in Table VI. A lower p-value is considered better for positive instances (victim, stolen models), while a higher p-value is preferred for negative instances (independent models). To obtain each p-value presented in Table VI, we trained a minimum of \(10\) models with varying seeds. We then performed hypothesis tests over \(20\) iterations for each model, resulting in an average of at least \(200\) trials for the final result.
Since different numbers of exposed samples (\(n_{S}\)) lead to different p-values, we also plot the p-value curves against \(n_{S}\) for the four datasets. The results are shown in Figure 6. The black dashed line represents the significance level set at \(\alpha=0.01\). When a point on the curve lies below the threshold line, it indicates that exposing those \(n_{S}\) training samples is sufficient to establish ownership under the condition of \(\alpha=0.01\).
According to the results shown in Table VI and Figure 6, we summarize the following findings.
**(1) The basic VeriDIP demonstrates satisfactory performance in verifying the ownership of victim models and their stolen copies on CIFAR-10 and FMNIST datasets.** Overall, VeriDIP equipped with both the global and the per-sample MI attacks successfully establishes ownership of all positive models with a confidence level exceeding \(99\%\), requiring the exposure of fewer than \(200\) private training samples. The p-values of all independent models (negative models) are in the range of \(10^{-1}\), ensuring they are
\begin{table}
\begin{tabular}{c|c c c c c}
\hline \hline
Datasets & TrainSize & TestSize & TrainAcc & TestAcc & AccDiff \\
\hline
CIFAR-10 & \(17500\) & \(10000\) & \(98.41\%\) & \(86.76\%\) & \(11.79\%\) \\
FMNIST & \(29700\) & \(10000\) & \(99.77\%\) & \(90.50\%\) & \(9.51\%\) \\
Health & \(20000\) & \(10000\) & \(88.31\%\) & \(86.87\%\) & \(1.43\%\) \\
Adult & \(15000\) & \(5222\) & \(85.61\%\) & \(84.81\%\) & \(0.80\%\) \\
\hline \hline
\end{tabular}
\end{table}
TABLE IV: Machine learning efficacy of victim models; AccDiff = TrainAcc - TestAcc.
\begin{table}
\begin{tabular}{c c|c c c c}
\hline \hline
Database & TrainSize & ME & KD & FT & Base \\
\hline
CIFAR-10 & 7000 & \(80.60\%\) & \(81.79\%\) & \(89.61\%\) & \(86.76\%\) \\
FMNIST & 11880 & \(88.23\%\) & \(88.23\%\) & \(91.04\%\) & \(90.50\%\) \\
Health & 8000 & \(86.74\%\) & \(86.61\%\) & \(86.77\%\) & \(86.87\%\) \\
Adult & 6000 & \(84.73\%\) & \(84.70\%\) & \(84.82\%\) & \(84.81\%\) \\
\hline \hline
\end{tabular}
\end{table}
TABLE V: Machine learning performance of stolen copies.
not misclassified as positives. This effective discrimination between positive and negative models is achieved through the proposed fingerprint extraction scheme in this paper.
**(2) The ownership verification performance of VeriDIP is negatively correlated with the model's generalization ability.** VeriDIP equipped with the per-sample MI attack remains effective for DNN models trained on the Adult and Health datasets but exposes a larger number of private training samples, up to about \(2,000\) to \(3,000\). However, VeriDIP equipped with a global MI attack fails to achieve successful verification on these two datasets. This outcome is not surprising, as we have previously expressed concerns in Section 4.3. When a model's output probability distributions for membership and non-membership are nearly identical, extracting sufficient fingerprints to determine ownership requires more exposed samples and stronger MI attacks. Nevertheless, increasing the number of exposed private training samples violates the principle of personal privacy protection during public ownership verification. Therefore, the adoption of stronger fingerprint extraction methods, such as the enhanced VeriDIP proposed in Section 4.3, may prove beneficial.
**(3) Fine-tuning, although the most effective attack against watermark embedding, is the easiest attack for VeriDIP to defend.** Unlike watermark embedding techniques that artificially embed unique classification patterns into the decision boundary of IP-protected models, VeriDIP extracts inherent privacy leakage characteristics as fingerprints for ownership verification. As reported in [10], their proposed fine-tuning attack can effectively remove all watermarks. However, the results shown in Figure 6 indicate that the fine-tuned model (red line) is even more susceptible to fingerprint extraction compared to the original model (blue line). The reason behind this observation might be that fine-tuning reinforces the model's memory of a subset of training samples, which VeriDIP can exploit as a fingerprint for ownership judgment.
**(4) The effect of VeriDIP is positively correlated with the MI attack effectiveness.** While VeriDIP can be equipped with various black-box MI attacks to extract model ownership fingerprints, this paper focuses on evaluating two representative attacks: the basic global MI attack and the advanced per-sample MI attack, due to space limitations. Comparing Figure 6(a) and Figure 6(b) for CIFAR-10, as well as Figure 6(c) and Figure 6(d) for FMNIST, we observe that \(\mathcal{V}_{P}\) requires exposing only half the number of training samples compared to \(\mathcal{V}_{G}\). Additionally, for the Adult and Health databases, \(\mathcal{V}_{G}\) fails to verify ownership altogether (refer to Figure 6(e) and Figure 6(h)). The reason for this is that a stronger MI attack can provide a tighter lower bound estimation of privacy leakage, resulting in more accurate model fingerprints.
In summary, the basic VeriDIP equipped with the per-sample MI attacks \(\mathcal{V}_{P}\) successfully identifies all victim models and their stolen copies as positives, while correctly classifying all independent models as negatives. However, for models that are only slightly overfitted, even with the utilization of the most advanced MI attack to estimate privacy leakage fingerprints, a significant number of private training samples are still required to establish ownership. Hence, it is imperative to devise solutions that reduce VeriDIP's reliance on model overfitting.
#### 5.4.3 Enhanced VeriDIP
In this section, we evaluate the enhanced VeriDIP on four datasets and compare the results with those of the basic VeriDIP. Table VII reports the minimum number of exposed training samples required to verify ownership at a significance level of \(\alpha=0.01\) (with \(99\%\) confidence). Note that the p-values of all independent models remain at \(10^{-1}\), and therefore, we have omitted the corresponding \(n_{S}\) values for them.
To identify the less private data in advance, we train \(N\) shadow models (\(N=100\)), where each model is trained by sampling half of the database. Consequently, for each data point, we have approximately \(N/2\) models that include the data and \(N/2\) models that exclude the data. We compute the loss difference \(\eta(z)\) for each data point using Equation (9)
\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
 & Datasets & \(n_{S}\) & \multicolumn{5}{c}{p-value} \\
\cline{4-8}
 & & & TAR & ME & KD & FT & IND \\
\hline
\multirow{4}{*}{\(\mathcal{V}_{G}\)} & CIFAR-10 & 200 & \(10^{-5}\) & \(10^{-3}\) & \(10^{-3}\) & \(10^{-8}\) & \(10^{-1}\) \\
 & FMNIST & 200 & \(10^{-6}\) & \(10^{-3}\) & \(10^{-3}\) & \(10^{-8}\) & \(10^{-1}\) \\
 & Adult & 2000 & \(10^{-2}\) & \(10^{-2}\) & \(10^{-2}\) & \(10^{-2}\) & \(10^{-2}\) \\
 & Health & 3000 & \(10^{-2}\) & \(10^{-1}\) & \(10^{-2}\) & \(10^{-3}\) & \(10^{-1}\) \\
\hline
\multirow{4}{*}{\(\mathcal{V}_{P}\)} & CIFAR-10 & 200 & \(10^{-10}\) & \(10^{-4}\) & \(10^{-5}\) & \(10^{-11}\) & \(10^{-1}\) \\
 & FMNIST & 200 & \(10^{-10}\) & \(10^{-4}\) & \(10^{-4}\) & \(10^{-11}\) & \(10^{-1}\) \\
 & Adult & 2000 & \(10^{-5}\) & \(10^{-4}\) & \(10^{-3}\) & \(10^{-5}\) & \(10^{-1}\) \\
 & Health & 3000 & \(10^{-12}\) & \(10^{-3}\) & \(10^{-3}\) & \(10^{-10}\) & \(10^{-1}\) \\
\hline \hline
\end{tabular}
\end{table}
TABLE VI: p-values for OT. TAR: target models; ME: model extraction attack; KD: knowledge distillation attack; FT: fine-tuning attack; IND: independent models. \(\mathcal{V}_{G}\): the basic VeriDIP equipped with the global MI attack; \(\mathcal{V}_{P}\): the basic VeriDIP equipped with the per-sample MI attack.
Fig. 5: Fingerprints distribution for target models and the independent models, using 50 models for each distribution.
and select the \(k\) samples with the highest \(\eta(z)\) values as the less private data.
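The shadow-model search can be sketched as follows, assuming each shadow model records which half of the data it was trained on; `shadow_masks` and `loss_fn` are illustrative names.

```python
import numpy as np

def eta_from_shadows(loss_fn, shadow_models, shadow_masks, data, k):
    """Estimate eta(z) (Eq. (9)) from N shadow models, each trained on a
    random half of `data`; `shadow_masks[i][j]` is True iff model i saw
    data point j. Returns the indices of the k least private points."""
    etas = []
    for j, z in enumerate(data):
        loss_in = np.mean([loss_fn(m, z)
                           for m, mask in zip(shadow_models, shadow_masks)
                           if mask[j]])
        loss_out = np.mean([loss_fn(m, z)
                            for m, mask in zip(shadow_models, shadow_masks)
                            if not mask[j]])
        etas.append(loss_out / loss_in)
    return np.argsort(etas)[-k:]  # largest eta(z) => less private
```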
**The enhanced VeriDIP offers superior performance compared to the basic VeriDIP.** For CIFAR-10 and FMNIST datasets shown in Table VII, the enhanced VeriDIP equipped with both the global MI attacks and the per-sample MI attacks successfully verify the ownership of all target ("Tar") and stolen models ("ME", "KD", and "FT") by exposing only \(5\) samples. In the case of more generalized models, such as Adult and Health, the number of exposed training samples is reduced to \(\frac{1}{100}-\frac{1}{10}\) of the basic VeriDIP. It is worth noting that the enhanced VeriDIP equipped with the global MI attack fails to prove ownership for the Adult database. We believe this is because the global MI attack is not powerful enough to extract useful privacy leakage fingerprints in such generalized models. The main reasons for the success of the enhanced solution are:
* Leveraging the worst-case privacy leakage as the model fingerprint can significantly amplify the characteristics of the positive model that are different from the negative counterparts (see Figure 2);
* The decision boundary for less private data is transferable (i.e., not easy to erase) during the model stealing process.
\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
\multirow{2}{*}{Datasets} & \multirow{2}{*}{Models} & \multicolumn{2}{c}{global} & \multicolumn{2}{c}{per-sample} \\
\cline{3-6}
 & & Basic & Enh & Basic & Enh \\
\hline
\multirow{4}{*}{CIFAR-10} & TAR & 42 & 5 & 23 & 5 \\
 & ME & 185 & 5 & 87 & 5 \\
 & KD & 94 & 5 & 47 & 5 \\
 & FT & 24 & 5 & 23 & 5 \\
\hline
\multirow{4}{*}{FMNIST} & TAR & 27 & 5 & 17 & 5 \\
 & ME & 170 & 5 & 75 & 5 \\
 & KD & 125 & 5 & 80 & 5 \\
 & FT & 23 & 5 & 15 & 5 \\
\hline
\multirow{4}{*}{Adult} & TAR & – & – & 460 & 5 \\
 & ME & – & – & 800 & 6 \\
 & KD & – & – & 1600 & 70 \\
 & FT & – & – & 430 & 5 \\
\hline
\multirow{4}{*}{Health} & TAR & – & 83 & 250 & 8 \\
 & ME & – & 148 & 2500 & 28 \\
 & KD & – & 135 & 2200 & 125 \\
 & FT & 3000 & 81 & 200 & 6 \\
\hline \hline
\end{tabular}
\end{table}
TABLE VII: Exposed number of training samples \(n_{S}\) when \(\alpha=0.01\). A smaller \(n_{S}\) means better ownership verification performance; “–” means failure.
Fig. 6: p-value against the number of exposed training samples \(n_{S}\). The black dotted line indicates \(\alpha=0.01\).
Fig. 7: Comparison between the enhanced VeriDIP equipped with the global MI attack \(\mathcal{V}_{\text{E}\text{.G}}\) (Dotted line with marker “\(\times\)”) and the enhanced VeriDIP equipped with the per-sample MI attack \(\mathcal{V}_{\text{E}\text{.P}}\) (Solid line with marker “\(\cdot\)”).
We then compare the performance of the enhanced VeriDIP equipped with the global MI attack (denoted as \(\mathcal{V}_{\text{E-G}}\)) with the enhanced VeriDIP equipped with the per-sample MI attack (denoted as \(\mathcal{V}_{\text{E-P}}\)) and plot the p-value against \(n_{S}\) in Figure 7.
**Compared with the basic VeriDIP, where \(\mathcal{V}_{P}\) is superior to \(\mathcal{V}_{G}\) for all tasks, the behavior of \(\mathcal{V}_{\text{E-P}}\) and \(\mathcal{V}_{\text{E-G}}\) is more complex in the enhanced VeriDIP.** For instance, in Figure 7(a) and Figure 7(b), \(\mathcal{V}_{\text{E-G}}\) shows surprisingly better performance than \(\mathcal{V}_{\text{E-P}}\), but the opposite is true for the Health and Adult databases. Particularly for the Adult database (see Figure 7(c)), \(\mathcal{V}_{\text{E-G}}\) fails to identify all positive models. Investigating the ability of MI attacks on different types of databases is beyond the scope of this work. However, we can conclude that the enhanced VeriDIP equipped with the global MI attack is more than sufficient to prove ownership of models trained on the CIFAR-10 and FMNIST databases. For models that are barely overfitted, such as those trained on the Adult and Health databases, the enhanced VeriDIP equipped with the per-sample MI attack is a better choice.
#### 5.4.4 Comparisons with State-of-the-art
Dataset Inference (DI) [23] is the approach most similar to ours, but it differs in the model fingerprint extraction method. Therefore, we compare our verification performance and costs with DI both functionally and experimentally. The results are shown in Table VIII and Table IX. We summarize the results in the following aspects:
First, VeriDIP is applicable to DNN models trained on tabular data, while DI is not. DI uses adversarial noise as fingerprints, but finding adversarial noise is not trivial for models trained on tabular data: tabular data may contain a combination of continuous, discrete, and categorical features, making it difficult to compute adversarial noise through gradient descent. VeriDIP, on the other hand, only requires querying the DNN model's prediction probability, making it applicable to all classifiers.
Second, compared to DI, VeriDIP significantly reduces the number of required queries during ownership verification, making it immune to the detector attack [28]. DI requires querying the suspect model \(n_{S}\times n_{\text{adv}}\times T\) times to obtain a model fingerprint. However, this can raise suspicion from pirated APIs, leading to refusals to answer or noise added to the responses. Here, \(n_{S}\) denotes the number of exposed training samples, \(n_{\text{adv}}\) is the number of repeated adversarial attacks per sample, and \(T\) is the number of queries for one adversarial attack. In the original setting of [23], \(n_{\text{adv}}=30\) and \(T=50\). Table IX lists the experimental results for identifying target models on CIFAR-10 and FMNIST. We do not provide results for the Adult and Health datasets because DI does not support them. Overall, VeriDIP achieves similar or better performance with significantly fewer exposed training samples (two orders of magnitude fewer than DI).
Third, VeriDIP can be directly linked to the definition of DP, as the privacy leakage estimated by MI attacks serves as a lower bound for the privacy budget \(\epsilon\) in DP (see analysis in Section 4.4). In contrast, DI leaves the connection to DP as an open question.
#### 5.4.5 Differential Privacy Relationship
In this section, we experimentally examine the effectiveness of VeriDIP on DP machine learning models, an open problem raised in [23]. For this evaluation, we select the enhanced VeriDIP variants \(\mathcal{V}_{\text{E-P}}\) and \(\mathcal{V}_{\text{E-G}}\) due to their improved performance.
**Experiment setup.** We use the DP Adam optimizer [47] to train DP machine learning models and compose the privacy budget using RDP techniques [48]. In each iteration, we first clip the gradient norm with threshold \(C\), then add Gaussian noise with scale \(\sigma=\mathrm{z}*C\) (see Table X), where \(\mathrm{z}\) stands for the noise multiplier. We adjust different pairs of hyper-parameters \((C,\mathrm{z})\) to trade off privacy versus utility. For each dataset, we choose two privacy budget options for \((\epsilon,\delta)\), namely \((0.5,10^{-5})\) and \((1.0,10^{-5})\), where \(\delta\) is usually set to the inverse of the training set size, as in [47]. These options are commonly used in training DP machine learning models. A smaller privacy budget \(\epsilon\) indicates a higher privacy protection level (yet lower model utility). The hyper-parameters related to training the DP models, together with their test accuracy, are listed in Table X. Note that the configuration of the model stealing attacks is identical to that described earlier (see Section 5.1).
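The noise mechanism used here can be sketched in plain NumPy: clip each per-sample gradient to norm \(C\), aggregate, and add Gaussian noise with scale \(\sigma=\mathrm{z}*C\). The optimizer update itself is omitted, and all names are illustrative.

```python
import numpy as np

def dp_gradient(per_sample_grads, C, z, rng=None):
    """One DP gradient step (sketch): clip every per-sample gradient
    to L2 norm C, sum, add Gaussian noise with std sigma = z * C
    (z is the noise multiplier), and average over the batch."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = [g * min(1.0, C / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    noise = rng.normal(0.0, z * C, size=clipped[0].shape)
    return (np.sum(clipped, axis=0) + noise) / len(clipped)
```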
Recalling the theoretical analysis in Section 4.4, the privacy budget bounds the performance of VeriDIP; for instance, \(\epsilon=0.1\) results in \(P\geq 0.156\). Thus, we first
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Database & \((\epsilon,\delta)\) & epoch & \((C,\mathrm{z})\) & TestAcc \\
\hline
CIFAR-10 & \((1.0,10^{-5})\) & 60 & (5e-4, 2.1) & \(84.79\%\) \\
 & \((0.5,10^{-5})\) & 60 & (5e-4, 4.1) & \(84.49\%\) \\
\hline
FMNIST & \((1.0,10^{-5})\) & 19 & (5e-3, 1.2) & \(90.54\%\) \\
 & \((0.5,10^{-5})\) & 20 & (5e-3, 1.9) & \(90.00\%\) \\
\hline
Health & \((1.0,10^{-5})\) & 50 & (1e-3, 4.9) & \(86.97\%\) \\
 & \((0.5,10^{-5})\) & 50 & (1e-3, 9.7) & \(86.92\%\) \\
\hline
Adult & \((1.0,10^{-5})\) & 70 & (1e-3, 7.9) & \(84.69\%\) \\
 & \((0.5,10^{-5})\) & 60 & (1e-3, 14.9) & \(84.73\%\) \\
\hline \hline
\end{tabular}
\end{table}
TABLE X: Hyper-parameters and test accuracy for DP models. z: noise multiplier; \(C\): clipping threshold.
\begin{table}
\begin{tabular}{c c c c}
\hline \hline
 & Immune to detector attack & Supports tabular-trained models & Directly linked to DP \\
\hline
DI & no & no & no \\
Ours & yes & yes & yes \\
\hline \hline
\end{tabular}
\end{table}
TABLE VIII: Functional comparison with Dataset Inference [23].
experiment with \(\epsilon=0.1\) and find that all DP models experience a substantial loss in functionality. In particular, for CIFAR-10, the \((0.1,10^{-5})\)-DP model achieves only \(76.71\%\) test accuracy, approximately \(10\%\) below the non-DP benchmark. In accordance with the theoretical analysis, none of these models can be verified for ownership using VeriDIP. However, protecting the copyright of DP models becomes less meaningful without preserving utility, which motivated us to focus on evaluating the effectiveness of VeriDIP on more useful DP models. Based on our analysis, when \(\epsilon=0.5\), the limitation on the p-value is already negligible. We then experiment with \(\epsilon=0.5\) and \(\epsilon=1.0\); Table XI presents the main results for VeriDIP on \((0.5,10^{-5})\)-DP and \((1.0,10^{-5})\)-DP models. Additionally, Figure 8 compares the p-value against \(n_{S}\) curves for these DP models and the non-DP models. Note that the fine-tuning attack [10] fails to steal a functionality-preserving DNN model trained with the Adam optimizer, which is why the FT row of CIFAR-10 is empty.
**VeriDIP is as effective on utility-preserving DP models as it is on non-DP models**. Comparing the model utility presented in Table X and Table IV, we find that, by carefully choosing the DP hyper-parameters, all DP models show utility comparable to the non-DP baselines. From Table XI and Figure 8, we can see that the effectiveness of \(\mathcal{V}_{\text{E-G}}\) and \(\mathcal{V}_{\text{E-P}}\) on CIFAR-10 and FMNIST is hardly affected by the noise injected by DP. On the Adult and Health datasets, stricter privacy protection may increase the number of exposed training samples: in Table XI, the number of exposed samples \(n_{S}\) for \((0.5,10^{-5})\)-DP models is higher than that for \((1.0,10^{-5})\)-DP models. This indicates a trade-off between privacy protection and copyright protection, especially for barely overfitted models.
Given this subtle balance between privacy protection and copyright protection in generalized models, we study the behavior of VeriDIP under different DP hyper-parameters for the Adult and Health datasets. In particular, we study two types of DP hyper-parameters, the DP clipping threshold \(C\) and the number of training epochs, and analyze their influence on VeriDIP.
**(1) DP clipping threshold \(C\).**\(C\) represents the clipping threshold for batch gradients in each training iteration. We conducted experiments with different values of \(C\) as it does not affect the value of \(\epsilon\) but impacts the training performance. We kept the noise multiplier z and the number of training epochs fixed for \(\epsilon=0.5\). The p-value against \(n_{S}\) curve comparisons are depicted in Figure 9. From the figures, we observe that certain choices of \(C\) lead to the failure of VeriDIP, such as \(C=10^{-1}\), \(C=10^{-2}\), and \(C=10^{-5}\) in Figure 9(a), and \(C=10^{-1}\) in Figure 9(b). Excessively large or small values of \(C\) have a detrimental effect on the effectiveness of VeriDIP. A large \(C\) introduces
\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
\multirow{2}{*}{Datasets} & \multirow{2}{*}{Models} & \multicolumn{2}{c}{\(\epsilon=0.5\)} & \multicolumn{2}{c}{\(\epsilon=1.0\)} \\
\cline{3-6}
 & & \(n_{S}\) & p-value & \(n_{S}\) & p-value \\
\hline
\multirow{4}{*}{CIFAR-10} & TAR & 5 & \(10^{-4}\) & 5 & \(10^{-4}\) \\
 & ME & 5 & \(10^{-3}\) & 5 & \(10^{-4}\) \\
 & KD & 5 & \(10^{-3}\) & 5 & \(10^{-4}\) \\
 & FT & – & – & – & – \\
\hline
\multirow{4}{*}{FMNIST} & TAR & 5 & \(10^{-6}\) & 5 & \(10^{-6}\) \\
 & ME & 5 & \(10^{-3}\) & 5 & \(10^{-3}\) \\
 & KD & 5 & \(10^{-3}\) & 5 & \(10^{-3}\) \\
 & FT & 5 & \(10^{-4}\) & 5 & \(10^{-6}\) \\
\hline
\multirow{4}{*}{Adult} & TAR & 5 & \(10^{-3}\) & 5 & \(10^{-4}\) \\
 & ME & 35 & \(10^{-3}\) & 25 & \(10^{-3}\) \\
 & KD & 75 & \(10^{-3}\) & 55 & \(10^{-3}\) \\
 & FT & 15 & \(10^{-3}\) & 5 & \(10^{-4}\) \\
\hline
\multirow{4}{*}{Health} & TAR & 15 & \(10^{-4}\) & 15 & \(10^{-4}\) \\
 & ME & 175 & \(10^{-3}\) & 55 & \(10^{-3}\) \\
 & KD & 135 & \(10^{-3}\) & 75 & \(10^{-3}\) \\
 & FT & 15 & \(10^{-3}\) & 5 & \(10^{-3}\) \\
\hline \hline
\end{tabular}
\end{table}
TABLE XI: Verification performance of the enhanced VeriDIP on DP models.
Fig. 8: Performance of The Enhanced VeriDIP \(\mathcal{V}_{\text{E-G}}\) and \(\mathcal{V}_{\text{E-P}}\) on DP IP-protected models.
excessive noise due to the noise scale \(\sigma=\mathrm{z}*C\). Conversely, a small \(C\) restricts the gradient magnitude in each iteration, thereby affecting the model's learning process. Hence, we encourage model owners to explore various choices of \(C\) to determine the optimal value when training a DNN model with both privacy protection and copyright protection.
**(2) Number of training epochs.** In addition to \(C\), the model trainer has two options to achieve the same privacy protection: (a) more training epochs with less noise per iteration, or (b) fewer training epochs with more noise per iteration. We compare these options and show the results in Figure 10. We find that option (a) yields better VeriDIP performance than option (b).
To summarize, the enhanced VeriDIP is effective on DP-protected DNN models. As a trade-off, some privacy-preserving models may double or triple the number of training samples exposed by VeriDIP. Moreover, carefully selecting the DP hyper-parameters is crucial for model owners who wish to benefit from privacy protection and copyright protection simultaneously.
## 6 Conclusion and Future Work Directions
**Conclusion of This Paper.** The increasing prevalence of model-stealing attacks poses a significant threat to the protection of neural network models' copyrights. In this work, we propose a novel ownership testing framework for DNN models, VeriDIP, along with its enhanced version, to combat model plagiarism. VeriDIP leverages privacy leakage as a natural fingerprint for verifying DNN model ownership. The enhanced VeriDIP utilizes a reduced amount of private data to estimate the worst-case privacy leakage of models, serving as enhanced model fingerprints. Our comprehensive experiments demonstrate that the enhanced VeriDIP achieves a true positive rate of \(100\%\) and a false positive rate of \(0\) in accurately distinguishing positive models (victim models and their stolen copies) from negative models (independent models), requiring as few as 5 data samples during the verification process. Furthermore, the enhanced VeriDIP effectively addresses an open problem concerning the copyright protection of utility-preserving differentially private models.
**Future Work Directions.** We list the following potential future work directions for this paper.
1. Quantitative standard for the number of shadow models required. In this paper, in order to identify less private data for the enhanced VeriDIP, we trained \(100\) shadow models for each dataset considered. It is important to note that this empirical number of shadow models may vary depending on the specific dataset. Therefore, it would be valuable to propose a quantitative standard for determining the appropriate number of shadow models based on the characteristics of a given dataset.
2. Extending to other data domains. While our study primarily focuses on image and tabular data, future research can explore the applicability of VeriDIP to other data types and domains. This could include natural language processing, audio data, or even more specialized domains such as genomics or finance.
3. Efficiency improvement. Future work can focus on enhancing the efficiency of the VeriDIP framework by reducing the computation costs associated with finding less private data. These efforts will contribute to minimizing the computational overhead and making the framework more practical for real-world deployment.
|
2309.12095 | Bayesian sparsification for deep neural networks with Bayesian model
reduction | Deep learning's immense capabilities are often constrained by the complexity
of its models, leading to an increasing demand for effective sparsification
techniques. Bayesian sparsification for deep learning emerges as a crucial
approach, facilitating the design of models that are both computationally
efficient and competitive in terms of performance across various deep learning
applications. The state-of-the-art -- in Bayesian sparsification of deep neural
networks -- combines structural shrinkage priors on model weights with an
approximate inference scheme based on stochastic variational inference.
However, model inversion of the full generative model is exceptionally
computationally demanding, especially when compared to standard deep learning
of point estimates. In this context, we advocate for the use of Bayesian model
reduction (BMR) as a more efficient alternative for pruning of model weights.
As a generalization of the Savage-Dickey ratio, BMR allows a post-hoc
elimination of redundant model weights based on the posterior estimates under a
straightforward (non-hierarchical) generative model. Our comparative study
highlights the advantages of the BMR method relative to established approaches
based on hierarchical horseshoe priors over model weights. We illustrate the
potential of BMR across various deep learning architectures, from classical
networks like LeNet to modern frameworks such as Vision Transformers and
MLP-Mixers. | Dimitrije Marković, Karl J. Friston, Stefan J. Kiebel | 2023-09-21T14:10:47Z | http://arxiv.org/abs/2309.12095v2 | # Bayesian sparsification for deep neural networks with Bayesian model reduction
###### Abstract
Deep learning's immense capabilities are often constrained by the complexity of its models, leading to an increasing demand for effective sparsification techniques. Bayesian sparsification for deep learning emerges as a crucial approach, facilitating the design of models that are both computationally efficient and competitive in terms of performance across various deep learning applications. The state-of-the-art - in Bayesian sparsification of deep neural networks - combines structural shrinkage priors on model weights with an approximate inference scheme based on stochastic variational inference. However, model inversion of the full generative model is exceptionally computationally demanding, especially when compared to standard deep learning of point estimates. In this context, we advocate for the use of Bayesian model reduction (BMR) as a more efficient alternative for pruning of model weights. As a generalization of the Savage-Dickey ratio, BMR allows a post-hoc elimination of redundant model weights based on the posterior estimates under a straightforward (non-hierarchical) generative model. Our comparative study highlights the advantages of the BMR method relative to established approaches based on hierarchical horseshoe priors over model weights. We illustrate the potential of BMR across various deep learning architectures, from classical networks like LeNet to modern frameworks such as Vision Transformers and MLP-Mixers.
Bayesian model reduction, Stochastic variational inference, Deep neural networks
## 1 Introduction
Bayesian deep learning integrates the principles of Bayesian methodology with the objectives of deep learning, facilitating the training of expansive parametric models tailored for classifying and generating intricate audio-visual data, including images, text, and speech (Wang and Yeung, 2020; Wilson, 2020; Wang and Yeung, 2016). Notably, the Bayesian approach frames
the challenge of model optimization as an inference problem. This perspective is especially apt for scenarios necessitating decision-making under uncertainty (Murphy, 2022; Ghahramani, 2015). As a result, Bayesian formulations in deep learning have proven advantageous in various respects, offering enhancements in generalization (Wilson and Izmailov, 2020), accuracy, calibration (Izmailov et al., 2020; Luo and Kareem, 2020), and model compression (Louizos et al., 2017).
These functional enhancements are intrinsically tied to judiciously chosen structural priors (Fortuin, 2022). The priors, integral to the probabilistic generative model, scaffold the architecture of the network, thereby reducing the data required for the inference of optimal parametric solutions. Recent studies have highlighted the efficacy of hierarchical shrinkage priors over model weights, a specific category of structural priors, in achieving highly-sparse network representations (Nalisnick et al., 2019; Louizos et al., 2017; Seto et al., 2021; Ghosh et al., 2018). Sparse representations not only reduce redundancy but also evince additional performance benefits. However, the adoption of shrinkage priors in all deep learning models presents a conundrum: the ballooning space of latent parameters and the diminishing scalability of prevailing approximate inference schemes (Snoek et al., 2015; Krishnan et al., 2019; Izmailov et al., 2020; Daxberger et al., 2021).
In line with ongoing research on scalable Bayesian inference, we introduce an approximate inference scheme rooted in Bayesian model reduction (BMR). In essence, BMR extends the foundational principles of the Savage-Dickey Density Ratio method (Cameron, 2013). BMR is typically conceptualized as a combinatorial model comparison framework, enabling swift estimations of model evidence across an extensive array of models that differ in their prior assumptions, in order to identify the most probable one. Originally conceived for model comparison within the dynamical causal modeling framework (Rosa et al., 2012; Friston and Penny, 2011), the scope of BMR has since broadened. Subsequent works expanded its methodology (Friston et al., 2016, 2017, 2018) and adapted it for structure learning (Smith et al., 2020). More recently, BMR has found applications in Bayesian nonlinear regression and classification tasks using Bayesian neural networks with variance backpropagation (Beckers et al., 2022; Haussmann et al., 2020).
The BMR method is intimately connected with the spike-and-slab prior, a type of shrinkage prior (Mitchell and Beauchamp, 1988). Intriguingly, this specific structured shrinkage prior has parallels with Dropout regularization (Nalisnick et al., 2019). Such an association spurred researchers in Bayesian deep learning to formulate sparsification methods based on a different type of shrinkage prior--the hierarchical horseshoe prior (Piironen and Vehtari, 2017)--as a tool for automated depth determination. Subsequent studies suggested that merging horseshoe priors with structured variational approximations yields robust, highly sparse representations (Ghosh et al., 2018). The allure of continuous shrinkage priors (e.g., horseshoe priors) stems from the computational challenges associated with model inversion reliant on spike-and-slab priors (Nalisnick et al., 2019; Piironen and Vehtari, 2017). However, continuous shrinkage priors necessitate a considerably more expansive parameter space, to represent the approximate posterior, compared to optimizing neural networks using the traditional point estimate method.
In this work, we reexamine the spike-and-slab prior within the framework of BMR-based sparsification, highlighting its efficiency. Notably, this approach circumvents the need to expand the approximate posterior beyond the conventional fully factorised mean-field approximation, making it more scalable than structured variational approximations (Ghosh et al., 2018). In this light, BMR can be seen as a layered stochastic and black-box variational inference technique, which we term _stochastic BMR_. We subject the stochastic BMR to rigorous validation across various image classification tasks and network architectures, including LeNet-5 (LeCun et al., 1989), Vision Transformers (Dosovitskiy et al., 2020), and MLP-Mixers (Tolstikhin et al., 2021).
Central to our study is an empirical comparison of stochastic BMR with methods anchored in hierarchical horseshoe priors. Through multiple metrics - from Top-1 accuracy to expected calibration error and negative log-likelihood - we establish the competitive performance of stochastic BMR. We argue its computational efficiency, and remarkable sparsification rate, position BMR as an appealing choice, enhancing the scalability and proficiency of contemporary deep learning networks across diverse machine learning challenges, extending well beyond computer vision. We conclude with a discussion on potential avenues of future research that could further facilitate of BMR based pruning of deep neural networks.
## 2 Methods
In this section, we first describe the methods and techniques used in our research to address the problem of efficient Bayesian sparsification of deep neural networks. We provide a detailed overview of our approach, starting with variational inference methods, followed by the formulation of the Bayesian model reduction (BMR), Bayesian neural networks with shrinkage priors, and the description of corresponding approximate posterior.
### Variational inference
Given latent variables \(\mathbf{z}=\left(z_{1},\ldots,z_{k}\right)\) and a dataset of \(n\) observations \(\mathbf{\mathcal{D}}=\left(y_{1},\ldots,y_{n}\right)\), we can express the joint density, that is, the generative model, as
\[p\left(\mathbf{\mathcal{D}},\mathbf{z}\right)=p\left(\mathbf{z}\right)p\left(\mathbf{\mathcal{ D}}|\mathbf{z}\right).\]
The posterior density is then obtained, following the Bayes rule, as
\[p\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\propto p\left(\mathbf{z}\right)p\left(\mathbf{ \mathcal{D}}|\mathbf{z}\right). \tag{1}\]
For complex generative models, direct inference as described above becomes computationally prohibitive. To circumvent this, we approximate the exact posterior \(p\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\), constraining it to a distribution \(q\left(z\right)\) that belongs to a named distribution family \(\mathcal{Q}\). We then seek \(q^{*}\left(z\right)\in\mathcal{Q}\), an approximate solution that minimizes the following Kullback-Leibler divergence (Blei et al., 2017)
\[q^{*}\left(z\right)=\underset{q\in\mathcal{Q}}{\text{argmin}}D_{KL}\left(q \left(\mathbf{z}\right)||p\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\right)=\underset{ q\in\mathcal{Q}}{\text{argmin}}F\left[q\right],\]
where \(F\left[q\right]\) stands for the variational free energy (VFE), defined as
\[F\left[q\right]=E_{q\left(\mathbf{z}\right)}\left[\ln q\left(\mathbf{z}\right)-\ln p \left(\mathbf{\mathcal{D}},\mathbf{z}\right)\right]\]
VFE serves as an upper bound on the negative marginal log-likelihood
\[F\left[q\right]=D_{KL}\left(q\left(\mathbf{z}\right)||p\left(\mathbf{z}|\mathbf{\mathcal{ D}}\right)\right)-\ln p\left(\mathbf{\mathcal{D}}\right)\geq-\ln p\left(\mathbf{ \mathcal{D}}\right)\]
As KL-divergence is always greater or equal to zero, minimizing VFE brings the approximate solution as close as possible to the true posterior, without having to compute the exact posterior.
The most straightforward way to obtain the approximate posterior \(q^{*}\left(\mathbf{z}\right)\), is to minimize the VFE along its negative gradient:
\[\dot{\mathbf{\phi}}=-\nabla_{\mathbf{\phi}}F\left[q\right]\]
where \(\mathbf{\phi}\) signifies the parameters of the approximate posterior \(q_{\mathbf{\phi}}\left(\mathbf{z}\right)=q\left(\mathbf{z}|\mathbf{\phi}\right)\). Thus, variational inference reframes the inference problem highlighted in eq. (1) as an optimization problem (Beal, 2003).
### Stochastic and black-box variational inference
_Stochastic variational inference_ (SVI) improves the computational efficiency of gradient descent by approximating the variational free energy using a subset--\(\mathbf{\mathcal{K}}_{i}=\left(y_{s_{1}^{i}},\ldots,y_{s_{k}^{i}}\right);\,k\ll n\)--of the entire data set \(\mathbf{\mathcal{D}}\). This approach yields a stochastic gradient descent (SGD) scheme capable of managing large datasets (Hoffman et al., 2013). Crucially, at every iteration step \(i\) of the SGD process, the subset \(\mathbf{\mathcal{K}}_{i}\) is re-sampled.
_Black-box Variational Inference_ (BBVI) facilitates the optimization of any (named or unnamed) posterior density \(q_{\mathbf{\phi}}\left(\mathbf{z}\right)\), through the integration of Monte Carlo estimates for variational gradients (Ranganath et al., 2014). This can be formulated as the following relation
\[\nabla_{\mathbf{\phi}}F\left[q\right]\approx\nabla_{\mathbf{\phi}}\tilde{F}\left[q\right]=\frac{1}{S}\sum_{s=1}^{S}\nabla_{\mathbf{\phi}}\ln q_{\mathbf{\phi}}\left(\mathbf{z}_{s}\right)\left[\ln\frac{q_{\mathbf{\phi}}\left(\mathbf{z}_{s}\right)}{p\left(\mathbf{\mathcal{D}},\mathbf{z}_{s}\right)}+1\right];\quad\mathbf{z}_{s}\sim q\left(\mathbf{z}|\mathbf{\phi}\right) \tag{2}\]
which is known as the REINFORCE estimator (Williams, 1992). To mitigate the variance inherent to Monte Carlo gradient estimations, we employ Rao-Blackwellization (Schulman et al., 2015), with an implementation sourced from NumPyro (Bingham et al., 2019). For optimizing the variational objective stochastically, we leverage the AdaBelief optimizer (Zhuang et al., 2020). As an adaptive algorithm, AdaBelief ensures swift convergence, robust generalization, and steady optimization. Notably, we use AdaBelief's implementation from the Optax package within the JAX ecosystem (Babuschkin et al., 2020).
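For concreteness, the following is a minimal sketch (not the full pipeline of our experiments; see the repository linked below) of how these pieces combine in NumPyro: a placeholder Bayesian logistic-regression model with mini-batch subsampling, a fully factorised AutoNormal guide, the Rao-Blackwellized TraceGraph_ELBO objective, and the AdaBelief optimizer from Optax. All site names, shapes, and hyper-parameters here are illustrative assumptions.

```python
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
import optax
from numpyro.infer import SVI, TraceGraph_ELBO
from numpyro.infer.autoguide import AutoNormal
from numpyro.optim import optax_to_numpyro

def model(x, y=None):
    # placeholder generative model: Bayesian logistic regression
    w = numpyro.sample("w", dist.Normal(jnp.zeros(x.shape[-1]), 1.0).to_event(1))
    with numpyro.plate("data", x.shape[0], subsample_size=128):
        xb = numpyro.subsample(x, event_dim=1)                 # mini-batch K_i
        yb = numpyro.subsample(y, event_dim=0) if y is not None else None
        numpyro.sample("obs", dist.Bernoulli(logits=xb @ w), obs=yb)

guide = AutoNormal(model)                                      # mean-field q_phi(z)
svi = SVI(model, guide, optax_to_numpyro(optax.adabelief(5e-3)),
          loss=TraceGraph_ELBO())                              # variance-reduced gradients

rng = jax.random.PRNGKey(0)
x = jax.random.normal(rng, (10_000, 16))
y = (x[:, 0] > 0).astype(jnp.int32)
result = svi.run(rng, 2_000, x, y)                             # minimizes the VFE
```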
### Bayesian model reduction
Let us consider two generative processes for the data: a full model
\[p\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\propto p\left(\mathbf{\mathcal{D}}|\mathbf{z} \right)p\left(\mathbf{z}\right)\]
and a reduced model 1 in which the original prior \(p\left(\mathbf{z}\right)\) is replaced with a more informative prior \(\tilde{p}\left(\mathbf{z}\right)=p\left(\mathbf{z}|\mathbf{\theta}\right)\) that depends on hyper-parameters \(\mathbf{\theta}\). This change leads to a different posterior
Footnote 1: the reduction here implies applying constraints of any form to the prior to obtain a posterior with reduced entropy.
\[\tilde{p}\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\propto p\left(\mathbf{\mathcal{D}}| \mathbf{z}\right)\tilde{p}\left(\mathbf{z}\right)\]
Noting that the following relation holds:
\[1=\int\mathrm{d}\mathbf{z}\tilde{p}\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)=\int\mathrm{d }\mathbf{z}p\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\frac{\tilde{p}\left(\mathbf{z}\right)p \left(\mathbf{\mathcal{D}}\right)}{p\left(\mathbf{z}\right)\tilde{p}\left(\mathbf{ \mathcal{D}}\right)},\]
we can express the link between the models as:
\[\begin{split}-\ln\tilde{p}\left(\mathbf{\mathcal{D}}\right)& =-\ln p\left(\mathbf{\mathcal{D}}\right)-\ln\int d\mathbf{z}p\left(\mathbf{z}| \mathbf{\mathcal{D}}\right)\frac{\tilde{p}\left(\mathbf{z}\right)}{p\left(\mathbf{z} \right)}\\ &\approx F\left(\mathbf{\phi}^{*}\right)-\ln\int d\mathbf{z}q_{\mathbf{\phi}^ {*}}\left(\mathbf{z}\right)\frac{\tilde{p}\left(\mathbf{z}\right)}{p\left(\mathbf{z} \right)}\end{split} \tag{3}\]
where we assumed the approximate posterior for the full model corresponds to \(p\left(\mathbf{z}|\mathbf{\mathcal{D}}\right)\approx q_{\mathbf{\phi}^{*}}\left(\mathbf{z}\right)\), and that \(-\ln p\left(\mathbf{\mathcal{D}}\right)\approx F\left(\mathbf{\phi}^{*}\right)\).
From eq. (3) we obtain the free energy of the reduced model as
\[-\ln\tilde{p}\left(\mathbf{\mathcal{D}}\right)\approx-\ln E_{q}\left[\frac{\tilde {p}\left(\mathbf{z}\right)}{p\left(\mathbf{z}\right)}\right]+F\left(\mathbf{\phi}^{*} \right)=-\Delta F\left(\mathbf{\theta}\right). \tag{4}\]
where \(\Delta F\left(\mathbf{\theta}\right)\) denotes the change in the free energy of going from the full model to the reduced model, given hyper-parameters \(\mathbf{\theta}\). Note that for \(\Delta F\left(\mathbf{\theta}\right)>0\) the reduced model has a better (lower) variational free energy compared to the flat model. Consequently, the reduced model yields a greater marginal likelihood; i.e., a better explanation for the data and improved generalization capabilities. Heuristically, this can be understood as minimising model complexity, without sacrificing accuracy (because log evidence can be expressed as accuracy minus complexity, where complexity is the KL divergence between posterior and prior beliefs). This relationship is pivotal in formulating efficient pruning criteria, especially for extensive parametric models commonly employed in deep learning.
### Bayesian neural networks
In a general (nonlinear) regression problem, we model the relationship between predictors \(\mathbf{X}=\left(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\right)\) and target variables \(\mathbf{Y}=\left(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\right)\) using a likelihood distribution from an exponential family as
\[\mathbf{y}_{i}\sim p\left(\mathbf{y}|\mathbf{\mathcal{W}},\mathbf{x}_{i}\right)=h(y)\exp \left[\mathbf{\eta}\left(\mathbf{f}\left(\mathbf{\mathcal{W}},\mathbf{x}_{i}\right)\right) \cdot\mathbf{T}\left(\mathbf{y}\right)-A\left(\mathbf{f}\left(\mathbf{\mathcal{W}},\mathbf{x}_{i} \right)\right)\right]. \tag{5}\]
Functions \(h(\cdot),\mathbf{\eta}(\cdot),\mathbf{T}(\cdot),A(\cdot)\) are known and selected depending on the task. For example in a regression problem the likelihood will correspond to a multivariate normal distribution and in a classification problem to a categorical distribution. In this work, we will only consider a categorical likelihood, as it is the most suitable for image classification tasks.
The mapping \(\mathbf{f}\left(\mathbf{\mathcal{W}},\mathbf{x}_{i}\right)\) represents a generic deep neural network of depth \(L\) defined as
\[\mathbf{\mathcal{W}} =\left(\mathbf{W}_{1},\ldots,\mathbf{W}_{L}\right)\] \[\mathbf{h}_{i}^{0} =\mathbf{x}_{i}\] \[\mathbf{h}_{i}^{l} =\mathbf{g}\left(\mathbf{W}_{l}\cdot\left[\mathbf{h}_{i}^{l-1};1\right]\right)\] \[\mathbf{f}\left(\mathbf{\mathcal{W}},\mathbf{x}_{i}\right) =\mathbf{W}_{L}\cdot\left[\mathbf{h}_{i}^{L-1};1\right]\]
A probabilistic formulation of the deep learning task, that is, inferring model weights, introduces an implicit bias on the parameters \(\boldsymbol{\mathcal{W}}\) of an artificial neural network in the form of a prior distribution \(p\left(\boldsymbol{\mathcal{W}}\right)\). Hence, parameter estimation is cast as an inference problem where
\[p\left(\boldsymbol{\mathcal{W}}|\boldsymbol{\mathcal{D}}\right)\propto p\left( \boldsymbol{\mathcal{W}}\right)\prod_{i=1}^{n}p\left(\boldsymbol{y}_{i}| \boldsymbol{\mathcal{W}},\boldsymbol{x}_{i}\right)\]
The choice of the prior distribution is crucial for optimal task performance, and a prior assumption of structural sparsity is essential for inferring sparse representations of over-parameterised models, such as deep neural networks.
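Written out, the recursion above is only a few lines of code; the sketch below uses JAX, with the Swish nonlinearity standing in for the generic activation \(g\) (an assumption for illustration; the convolutional and attention-based architectures used later replace this simple stack).

```python
import jax.numpy as jnp
from jax.nn import swish  # stands in for the generic nonlinearity g

def forward(weights, x):
    """weights: list of matrices W_1, ..., W_L with W_l of shape (K_l, K_{l-1} + 1);
    the extra column multiplies the constant 1 appended to h (the bias term)."""
    h = x
    for W in weights[:-1]:
        h = swish(W @ jnp.concatenate([h, jnp.ones(1)]))     # h^l = g(W_l [h^{l-1}; 1])
    return weights[-1] @ jnp.concatenate([h, jnp.ones(1)])   # f(W, x): the logits
```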
### Bayesian neural networks with shrinkage priors
Shrinkage priors instantiate a prior belief about the sparse structure of model parameters. Here, we will investigate two well-established forms of shrinkage priors for network weight parameters, a canonical spike-and-slab prior (George and McCulloch, 1993; Mitchell and Beauchamp, 1988) defined as
\[w_{ijl} \sim\mathcal{N}\left(0,\lambda_{ijl}^{2}\gamma_{0}^{2}\right)\] \[\lambda_{ijl} \sim\text{Bernoulli}\left(\pi_{l}\right)\] \[\pi_{l} \sim\mathcal{B}e\left(\alpha_{0},\beta_{0}\right)\]
and a regularised-horseshoe prior (Piironen and Vehtari, 2017)
\[w_{ijl} \sim\mathcal{N}\left(0,\gamma_{il}^{2}\right)\] \[\gamma_{il}^{2} =\frac{c_{l}^{2}v_{l}^{2}\tau_{il}^{2}}{c_{l}^{2}+\tau_{il}^{2}v_ {l}^{2}} \tag{6}\] \[c_{l}^{-2} \sim\Gamma\left(2,6\right)\] \[\tau_{il} \sim\mathcal{C}^{+}(0,1)\] \[v_{l} \sim\mathcal{C}^{+}(0,\tau_{0})\]
where \(i\in\{1,\ldots,K_{l}\}\), \(j\in\{1,\ldots,K_{l-1}+1\}\), and where \(w_{ijl}\) denotes the \(ij\)th element of the weight matrix at depth \(l\). The symbols \(\mathcal{B}e\) and \(\mathcal{C}^{+}\) denote a Beta distribution and a half-Cauchy distribution, respectively.
Importantly, the spike-and-slab prior relates to dropout regularisation, which is commonly introduced as a sparsification method in deep learning (Nalisnick et al., 2019; Mobiny et al., 2021). This type of prior is considered the gold standard in shrinkage priors and has been used in many recent applications of Bayesian sparsification on neuronal networks (Bai et al., 2020; Hubin and Storvik, 2023; Jantre et al., 2021; Sun et al., 2022; Ke and Fan, 2022) showing excellent sparsification rates. However, the inversion of the resulting hierarchical model is challenging and requires carefully constructed posterior approximations. Moreover, their dependence on discrete random variables renders them unsuitable for Markov-Chain Monte Carlo-based sampling schemes. As a result, researchers often use continuous formulations of the shrinkage-prior, with the horseshoe prior being a notable example.
In contexts that involve sparse learning with scant data, the regularised horseshoe prior has emerged as one of the preferred choices within shrinkage prior families (Ghosh et al., 2019). A distinct advantage of this prior is its ability to define both the magnitude of regularisation for prominent coefficients and convey information about sparsity. It is worth noting a dependency highlighted in Ghosh et al. (2018): for \(v_{l}\tau_{il}\ll 1\) the equation simplifies to \(\gamma_{il}\approx v_{l}\tau_{il}\), recovering the original horseshoe prior. In contrast, for \(v_{l}\tau_{il}\gg 1\), the equation becomes \(\gamma_{il}^{2}\approx c_{l}^{2}\). In this latter scenario, the prior over the weights is defined as \(w_{ijl}\sim\mathcal{N}\left(0,c_{l}^{2}\right)\), with \(c_{l}\) serving as a weight decay hyper-parameter for layer \(l\).
### Approximate posterior for Bayesian neural networks
To benchmark stochastic BMR, we explore two forms of prior distribution \(p\left(\boldsymbol{\mathcal{W}}\right)\)--a flat and a hierarchical structure--in conjunction with a fully factorised mean-field approximation.
Firstly, let us consider the flat prior over model weights, represented in a non-centered parameterization:
\[\begin{split} c_{l}^{-2}&\sim\Gamma(2,2)\\ \hat{w}_{ijl}&\sim\mathcal{N}\left(0,1\right)\\ w_{ijl}&=\gamma_{0}c_{l}\hat{w}_{ijl}\end{split} \tag{7}\]
where we set \(\gamma_{0}=0.1\). Note that in the flat prior we incorporate a layer-specific scale parameter, which we found to stabilise variational inference. Based on this, we describe a fully factorised approximate posterior as a composite of Normal and Log-Normal distributed random variables. Hence,
\[\begin{split} q\left(\boldsymbol{\hat{\mathcal{W}}},\boldsymbol {c}\right)&=\prod_{l}q\left(c_{l}^{-2}\right)\prod_{i}\prod_{j}q \left(\hat{w}_{ijl}\right)\\ q\left(\hat{w}_{ijl}\right)&=\mathcal{N}\left( \mu_{ijl},\sigma_{ijl}^{2}\right)\\ q\left(c_{l}^{-2}\right)&=\mathcal{L}\mathcal{N} \left(\mu_{c,l},\sigma_{c,l}^{2}\right).\end{split} \tag{8}\]
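As a sketch, the flat prior of eq. (7) for one weight matrix can be expressed as a NumPyro model fragment (site names and shapes are illustrative assumptions). Pairing the model with an AutoNormal guide corresponds to the Normal/Log-Normal factorisation of eq. (8), since the positive site is parameterised through a log transform.

```python
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist

GAMMA0 = 0.1  # gamma_0 as in the text

def flat_weight(name, shape):
    # c_l^{-2} ~ Gamma(2, 2); under a mean-field Normal posterior in the
    # unconstrained (log) space, this site acquires a Log-Normal marginal,
    # matching eq. (8)
    c_inv2 = numpyro.sample(f"{name}_c_inv2", dist.Gamma(2.0, 2.0))
    w_hat = numpyro.sample(f"{name}_w_hat",
                           dist.Normal(jnp.zeros(shape), 1.0).to_event(len(shape)))
    return GAMMA0 * c_inv2 ** -0.5 * w_hat  # w = gamma_0 * c_l * w_hat
```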
When inverting a hierarchical generative model over the weights of an artificial neural network, we exclusively apply stochastic black-box variational inference to the model variant with the regularised horseshoe prior. This choice is motivated by its documented superiority over the spike-and-slab prior, as established in Ghosh et al. (2018). We express the hierarchical prior
in the non-centered parameterization as:
\[a_{il},b_{il} \sim\Gamma\left(\frac{1}{2},1\right)\] \[\hat{a}_{l},\hat{b}_{l} \sim\Gamma\left(\frac{1}{2},1\right)\] \[\tau_{il} =\sqrt{\frac{a_{il}}{b_{il}}}\] \[v_{l} =\tau_{0}\sqrt{\frac{\hat{a}_{l}}{\hat{b}_{l}}}\] \[\hat{w}_{ijl} \sim\mathcal{N}\left(0,1\right)\] \[w_{ijl} =\gamma_{il}\hat{w}_{ijl}\]
Note that the expressions above involve a reparameterization of Half-Cauchy distributed random variables as the square-root of the quotient of two Gamma distributed random variables, a strategy drawn from Wand et al. (2011) (see Appendix B for additional details). Such a reparameterization of the Half-Cauchy ensures that the fat tails of the posterior are captured, even when leveraging a fully-factorised mean-field posterior approximation, as referenced in Ghosh et al. (2018).
For the fully-factorised mean-field approximation, the approximate posterior is portrayed as a composite of Normal and Log-Normal distributed random variables, expressed as:
\[q\left(\hat{\mathbf{\mathcal{W}}},\mathbf{a},\mathbf{b},\mathbf{\hat{a}},\mathbf{\hat{b}},\mathbf{c}\right) =\prod_{l}q\left(c_{l}^{-2}\right)q\left(\hat{a}_{l}\right)q\left(\hat{b}_{l}\right)\prod_{i}q\left(a_{il}\right)q\left(b_{il}\right)\prod_{j}q\left(\hat{w}_{ijl}\right)\] \[q\left(c_{l}^{-2}\right) =\mathcal{LN}\left(\mu_{c,l},\sigma_{c,l}^{2}\right)\] \[q\left(\hat{a}_{l}\right) =\mathcal{LN}\left(\hat{\mu}_{a,l},\hat{\sigma}_{a,l}^{2}\right)\] \[q\left(\hat{b}_{l}\right) =\mathcal{LN}\left(\hat{\mu}_{b,l},\hat{\sigma}_{b,l}^{2}\right)\] \[q\left(a_{il}\right) =\mathcal{LN}\left(\mu_{a,il},\sigma_{a,il}^{2}\right)\] \[q\left(b_{il}\right) =\mathcal{LN}\left(\mu_{b,il},\sigma_{b,il}^{2}\right)\] \[q\left(\hat{w}_{ijl}\right) =\mathcal{N}\left(\mu_{w,ijl},\sigma_{w,ijl}^{2}\right)\]
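The non-centered hierarchical prior above can likewise be sketched as a NumPyro fragment for one layer (site names, shapes, and the value of \(\tau_{0}\) are illustrative assumptions):

```python
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist

def horseshoe_weight(name, k_out, k_in, tau0=1e-2):
    a_hat = numpyro.sample(f"{name}_a_hat", dist.Gamma(0.5, 1.0))
    b_hat = numpyro.sample(f"{name}_b_hat", dist.Gamma(0.5, 1.0))
    v = tau0 * jnp.sqrt(a_hat / b_hat)                        # layer scale v_l
    with numpyro.plate(f"{name}_rows", k_out):
        a = numpyro.sample(f"{name}_a", dist.Gamma(0.5, 1.0))
        b = numpyro.sample(f"{name}_b", dist.Gamma(0.5, 1.0))
    tau = jnp.sqrt(a / b)                                     # local scales tau_il
    c_inv2 = numpyro.sample(f"{name}_c_inv2", dist.Gamma(2.0, 6.0))
    c2 = 1.0 / c_inv2
    gamma2 = c2 * (v * tau) ** 2 / (c2 + (v * tau) ** 2)      # eq. (6)
    w_hat = numpyro.sample(
        f"{name}_w_hat",
        dist.Normal(jnp.zeros((k_out, k_in + 1)), 1.0).to_event(2))
    return jnp.sqrt(gamma2)[:, None] * w_hat                  # w = gamma_il * w_hat
```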
### Application of stochastic BMR to Bayesian neural networks
To apply BMR to Bayesian neural networks, we commence by estimating an approximate posterior for the flat model, as detailed in eq. (7). To retain high computational efficiency, we pair BMR solely with the fully factorised approximate posterior, as presented in eq. (8). While it is feasible to use this method alongside the structured posterior (Ghosh et al., 2018), it requires considerably more computationally intensive estimations of the reduced free energy. As shown below, we obtain satisfactory results with a fully factorised posterior. Therefore, we defer the exploration of BMR with a structured posterior to future endeavours.
Given a fully factorised approximate posterior, we can determine the change in variational free energy, \(\Delta F\)--after substituting the prior \(\mathcal{N}\left(0,1\right)\) with \(\mathcal{N}\left(0,\theta_{ijl}^{2}\right)\) for the weight \(\hat{w}_{ijl}\)--as:
\[\Delta F\left(\theta_{ijl}\right) =-\frac{1}{2}\ln\rho_{ijl}^{2}-\frac{1}{2}\frac{\mu_{ijl}^{2}}{ \sigma_{ijl}^{2}}\left(1-\frac{\theta_{ijl}^{2}}{\rho_{ijl}^{2}}\right)\] \[\rho_{ijl}^{2} =\theta_{ijl}^{2}+\sigma_{ijl}^{2}-\theta_{ijl}^{2}\sigma_{ijl}^{2}\]
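In code, this score is immediate (a NumPy sketch; \(\mu\) and \(\sigma^{2}\) denote the mean-field posterior moments of a single weight):

```python
import numpy as np

def delta_F(mu, sigma2, theta2):
    """Log-evidence change when the prior N(0, 1) is replaced by N(0, theta2)."""
    rho2 = theta2 + sigma2 - theta2 * sigma2
    return -0.5 * np.log(rho2) - 0.5 * (mu**2 / sigma2) * (1.0 - theta2 / rho2)

# In the limit theta2 -> 0 this reduces to the pruning score used below:
# delta_F(mu, sigma2, 0.0) == -0.5 * (np.log(sigma2) + mu**2 / sigma2)
```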
For the second hierarchical level of the approximate posterior, we aim to minimize the following form for the variational free energy:
\[F=\sum_{l=1}^{L}E_{q\left(\boldsymbol{\theta}_{l}\right)}\left[-\sum_{i,j} \Delta F(\theta_{ijl})+\ln\frac{q\left(\boldsymbol{\theta}_{l}\right)}{p\left( \boldsymbol{\theta}_{l}\right)}\right] \tag{9}\]
This minimization is done with respect to \(q\left(\boldsymbol{\Theta}\right)=\prod_{l}q\left(\boldsymbol{\theta}_{l}\right)\), the approximate posterior over hyper-parameters. Note the application of eq. (4) in substituting the marginal log-likelihood with the change in the variational free energy.
For the spike-and-slab prior we can write the following relation:
\[\boldsymbol{\theta}_{l} =\left[\pi_{l},\lambda_{ijl}\text{ for }i\in\left\{1,\ldots,K_{l}\right\}\text{ and }j\in\left\{1,\ldots,K_{l-1}+1\right\}\right]\] \[\theta_{ijl} =\lambda_{ijl}\]
Consequently, the approximate posterior at the second level of the hierarchy can be approximated as:
\[q\left(\boldsymbol{\Theta}\right) =\prod_{l}q\left(\pi_{l}\right)\prod_{ij}q\left(\lambda_{ijl}\right)\] \[q\left(\lambda_{ijl}\right) =q_{ijl}^{\lambda_{ijl}}\left(1-q_{ijl}\right)^{1-\lambda_{ijl}}\] \[q\left(\pi_{l}\right) =\mathcal{B}\left(\alpha_{l},\beta_{l}\right)\]
The iterative update to obtain the minimum of the simplified variational free energy (eq.9) is then:
\[q_{ijl}^{k+1} =\frac{1}{1+e^{-\left[\zeta_{l}^{k}-\Delta F\left(\lambda_{ijl}= 0\right)\right]}}\] \[\zeta_{l}^{k} =\psi(\alpha_{l}^{k})-\psi(\beta_{l}^{k})\] \[\alpha_{l}^{k+1} =\sum_{i,j}q_{ijl}^{k+1}+\alpha_{0}\] \[\beta_{l}^{k+1} =\sum_{i,j}\left(1-q_{ijl}^{k+1}\right)+\beta_{0}\]
Here, \(\alpha_{l}^{0}=\alpha_{0}\), \(\beta_{l}^{0}=\beta_{0}\), \(\Delta F\left(\lambda_{ijl}=0\right)=-\frac{1}{2}\left[\ln\sigma_{ijl}^{2}+ \frac{\mu_{ijl}^{2}}{\sigma_{ijl}^{2}}\right]\), and \(\psi\left(\cdot\right)\) refers to the digamma function. The efficiency of this inference scheme is remarkable, typically achieving convergence after a few iterations. In practice, we cap the maximum number of iterations at \(k_{max}=4\).
Finally, we use the following pruning heuristics to eliminate model weights and sparsify network structure
\[\text{if }q_{ijl}^{k_{max}}<\frac{1}{2}\text{, set }\hat{w}_{ijl}=0.\]
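Putting the fixed-point updates and the pruning heuristic together for one layer gives the following sketch (NumPy/SciPy; `mu` and `sigma2` hold the mean-field moments of the layer's weights, and the hyper-parameter defaults are illustrative assumptions):

```python
import numpy as np
from scipy.special import digamma, expit

def bmr_prune_layer(mu, sigma2, alpha0=1.0, beta0=1.0, k_max=4):
    """Returns the posterior keep-probabilities q_ijl of the spike variables."""
    dF0 = -0.5 * (np.log(sigma2) + mu**2 / sigma2)  # Delta F(lambda_ijl = 0)
    alpha, beta = alpha0, beta0
    for _ in range(k_max):
        zeta = digamma(alpha) - digamma(beta)
        q = expit(zeta - dF0)                       # sigmoid update for q_ijl
        alpha = q.sum() + alpha0
        beta = (1.0 - q).sum() + beta0
    return q

# pruning heuristic: zero out weights whose keep-probability drops below 1/2
# mask = bmr_prune_layer(mu, sigma2) >= 0.5; mu_pruned = mask * mu
```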
To achieve the high sparsification rate presented in the next section, we adopt the iterative optimisation and pruning approach proposed in Beckers et al. (2022). We perform weight pruning at the beginning of each epoch (except the first one), followed by further optimisation for 500 iterations, which completes one epoch. In total, we apply iterative pruning and optimisation for fifty epochs in all examples below.
The complete implementation of stochastic BMR is available at an online repository github.com/dimarkov/bmr4pml with notebooks and scripts necessary to recreate all result figures.
## 3 Results
In this section, we present the outcomes of our experiments and analyses conducted to evaluate the performance and efficiency of the stochastic Bayesian model reduction in the context of Bayesian sparsification of deep neural networks. Our results are structured to provide insights into the capabilities and advantages of our approach.
### Performance Comparison
The training regimen used a batch size of \(N_{B}=128\) and the AdaBelief algorithm with learning rate set to \(\alpha=10^{-3}\) in the case of the MAP estimate, \(\alpha=5\cdot 10^{-3}\) in the case of the mean-field methods, and \(\alpha=10^{-2}\) in the case of stochastic BMR (the exponential decay rates were kept at default values \(\beta_{1}=0.9\), and \(\beta_{2}=0.999\)). Figure 1 charts the
Figure 1: Classification performance comparison on FashionMNIST dataset for different neuronal architectures and approximate inference schemes.
epoch-wise evolution of ACC, ECE, and NLL for each architecture, under four distinct approximate inference strategies: (i) Maximum a posteriori (MAP) estimate for the flat generative model, akin to traditional deep learning point estimates coupled with weight decay. (ii) A fully factorised posterior approximation for the flat generative model (Flat-FF). (iii) A fully factorised posterior approximation of the hierarchical generative model with a regularised horseshoe prior (Tiered-FF). (iv) The stochastic BMR algorithm augmented with a spike-and-slab prior (BMR-S&S). Each epoch is defined by 500 stochastic gradient steps, with each step randomly drawing \(N_{B}\) data instances from the training pool.
Interestingly, all approximate inference methods demonstrate comparable top-1 accuracy scores. However, the stochastic BMR method, followed by the Tiered-FF approximation (with a single exception), consistently resulted in the lowest ECE and NLL scores across the majority of DNN architectures and datasets (see Figure S1 for the CIFAR10 dataset and Figure S2 for the CIFAR100 dataset). The implicit reduction in model complexity suggests that--as anticipated--stochastic BMR furnishes a model of the data that has the greatest evidence or marginal likelihood (not shown). In this setting, the NLL of the test data can be regarded as a proxy for the (negative log) marginal likelihood.
### Learning of sparse representations
Figure 2 depicts the fraction of pruned model parameters for different DNN architectures and datasets. It is noteworthy to observe the substantive sparsity achieved by the stochastic BMR algorithm. This sparsity is consistent across datasets and architectures, with the exception of the LeNet-5 structure when used for the FashionMNIST dataset, because the default LeNet-5 architecture is already sparse and contains a relatively low number of model weights (for other data sets we substantially increased the dimensionality of hidden layers, as detailed in Appendix A).
To delve deeper into the pruning behavior across varying network depths, Figure 3 presents a per-layer cumulative distribution function (CDF) for model parameters, highlighting the proportion of parameters whose absolute mean posterior estimate falls below a given threshold. When juxtaposing the BMR CDF trajectories with those obtained from the Tiered-FF method (where sparsification is induced by the regularised half-Cauchy prior), it is evident that BMR furnishes more pronounced sparsification. This distinction is crucial,
Figure 2: Total fraction of pruned model parameters obtained with the stochastic BMR algorithm across different DNN architectures and datasets.
as the stochastic BMR not only matches or surpasses the performance of the Tiered-FF algorithm but also runs stochastic gradient descent roughly 30% faster on average.
To illustrate the structural learning variations among algorithms, Figure 4 presents heatmaps of posterior expectations obtained using the four different methods. Figure 4 reveals subtle differences between inferred representations of the MLP and LeNet-5
Figure 4: Posterior expectations (color coded) over model parameters obtained using different approximate inference schemes at the first layer of (a) MLP architecture, and (b) LeNet architectures.
Figure 3: Cumulative Distribution Function (CDF) of absolute posterior parameter expectations at different layers of MLP (top row), and LeNet architectures (bottom row). The y-axis represents the fraction of parameters with values less than or equal to the value on the x-axis.
architecture's input layers trained on the Fashion MNIST dataset. Divergent compression rates among the algorithms indicate inherent trade-offs between efficiency and performance. It is evident that the stochastic BMR strikes a balance between compression advantages and performance, as it is less prone to over-pruning compared to the Tiered-FF method (two features of the LeNet-5 input layer are effectively removed; see Figure 4(b)).
## 4 Discussion
In this study, we presented a novel algorithm--stochastic Bayesian model reduction--designed for an efficient Bayesian sparsification of deep neural networks. Our proposed method seamlessly integrates stochastic and black-box variational inference with Bayesian model reduction (BMR), a generalisation of the Savage-Dickey ratio. Through the stochastic BMR strategy, we enable iterative pruning of model parameters, relying on posterior estimates acquired from a straightforward variational mean-field approximation to the generative model. This model is characterized by Gaussian priors over individual parameters and layer-specific scale parameters. The result is an efficient pruning algorithm for which the computational demand of the pruning step is negligible compared to the direct stochastic black-box optimization of the full hierarchical model.
Over recent years, the Bayesian sparsification of neural networks has gained momentum, primarily driven by the spike-and-slab prior (Bai et al., 2020; Hubin and Storvik, 2023; Jantre et al., 2021; Sun et al., 2022; Ke and Fan, 2022). These works have showcased the remarkable sparsification capabilities inherent to such shrinkage priors. Nevertheless, when juxtaposed with the stochastic BMR algorithm, they often necessitate supplementary assumptions related to the approximate posterior. These assumptions, in turn, lead to a more computation-intensive model inversion. Moreover, in contrast to related approaches, the versatility of stochastic BMR allows its integration with more efficient optimization techniques, like variational Laplace (Daxberger et al., 2021) and proximal-gradient methods (Khan et al., 2018), provided the resulting normal approximate posterior is apt for the application at hand.
The insights obtained here pave the way for a deeper exploration of the potential applications of Bayesian model reduction across a wider array of architectures and tasks in probabilistic machine learning, such as audiovisual and natural language processing tasks. A more detailed fine-tuning of the core dynamics of these algorithms, in terms of iteration steps, learning rates, and other free parameters, might be the key to unveiling even more proficient Bayesian deep learning methodologies in the near future.
We thank Conor Heins, Magnus Koudahl, and Beren Millidge for valuable discussions during the initial stages of this work. SK acknowledges support by DFG TRR 265/1 (Project ID 402170461, B09) and Germany's Excellence Strategy--EXC 2050/1 (Project ID 390696704)--Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI) of Technische Universität Dresden.
## Appendix A
For the simple multi-layer perceptron, we configure the architecture with five hidden layers, each comprising 400 neurons. The chosen activation function is the Swish activation function (Ramachandran et al., 2017).
For the LeNet-5 architecture, we adhere to the original design, which includes three convolutional layers, average pooling following the initial two convolutional layers, and two linear layers. The activation function used is the hyperbolic tangent. The convolutional layers employ a kernel size of \(5\times 5\), while the average pooling uses a window of shape \(2\times 2\). For the FashionMNIST dataset, the feature counts of the convolutional layers are designated as (6, 16, 120), and the two linear layers have neuron counts of (84, 10). However, for the CIFAR10 and CIFAR100 datasets, we elevate the feature counts of the convolutional layers to (18, 48, 360), with linear layer neuron counts set to (256, 10) for CIFAR10 and (256, 100) for CIFAR100.
For the MlpMixer architecture we employ six layers and a patch resolution of \(4\times 4\). Across all datasets, we maintain constant values for hidden size (\(C\)), sequence length (\(S\)), MLP channel dimension (\(D_{C}\)), and MLP token dimension (\(D_{S}\)); specifically \(C=256\), \(S=64\), \(D_{C}=512\) and \(D_{S}=512\) for all datasets.
For the VisionTransformer architecture, we adopt a slightly modified version of the ViT-Tiny setup: we use six layers, eight heads for each attention block, an embedding dimension of 256, and a hidden dimension of 512. The patch resolution of \(4\times 4\) is consistent with the MlpMixer. In both MlpMixer and VisionTransformer architectures, the GeLU activation function is used (Hendrycks and Gimpel, 2016).
For training using the maximum a posteriori estimate (Flat-MAP), dropout regularization, with dropout probability set to 0.2, is applied to all linear layers across all architectures, with the exception of the MlpMixer.
## Appendix B
In the centered parameterization of a generative model, Stochastic Variational Inference (SVI) with a fully factorized posterior yields a non-sparse solution, undermining the objective of employing shrinkage priors (Ghosh et al., 2019). Typically, this limitation is addressed by adopting the non-centered parameterization of the prior.
Consider the unique property of the half-Cauchy distribution: given \(x\sim\mathcal{C}^{+}(0,1)\) and \(z=bx\), the resulting probability distribution for \(z\) is \(z\sim\mathcal{C}^{+}(0,b)\). Therefore, the non-centred parameterization is formulated as
\[\hat{\tau}_{i}^{l} \sim\mathcal{C}^{+}(0,1)\] \[\hat{\lambda}_{ij}^{l} \sim\mathcal{C}^{+}(0,1)\] \[\hat{w}_{ij}^{l} \sim\mathcal{N}\left(0,1\right)\] \[\left[\gamma_{ij}^{l}\right]^{2} =\frac{\left[c^{l}\tau_{0}^{l}\tau_{i}^{l}\hat{\lambda}_{ij}^{l} \right]^{2}}{\left[c^{l}\right]^{2}+\left[\tau_{0}^{l}\tau_{i}^{l}\hat{ \lambda}_{ij}^{l}\right]^{2}}\] \[w_{ij}^{l} =\gamma_{ij}^{l}\hat{w}_{ij}^{l}\]
However, while the half-Cauchy distribution is frequently chosen for sampling-based inference, it poses challenges in variational inference (Piironen and Vehtari, 2017). Firstly, exponential family-based approximate posteriors (e.g., Gamma or log-Normal distributions) inadequately capture the half-Cauchy distribution's fat tails. Secondly, using a Cauchy approximating family for the posterior results in high variance gradients during stochastic variational inference (Ghosh et al., 2019). Hence, in the context of stochastic variational inference, the half-Cauchy distribution undergoes a reparameterization, as described in (Ghosh et al., 2018):
\[x\sim\mathcal{C}^{+}(0,b)\equiv x=\sqrt{\frac{1}{u}},u\sim\Gamma\left(\frac{1} {2},\frac{1}{v}\right),v\sim\Gamma\left(\frac{1}{2},b^{2}\right)\]
or, when represented in the non-centered parameterization:
\[x=b\sqrt{\frac{v}{u}},u\sim\Gamma\left(\frac{1}{2},1\right),v\sim\Gamma\left( \frac{1}{2},1\right) \tag{10}\]
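As a quick sanity check (not part of the paper's pipeline), one can verify eq. (10) numerically by comparing sample quantiles against SciPy's half-Cauchy:

```python
import numpy as np
from scipy.stats import halfcauchy

rng = np.random.default_rng(0)
b, n = 2.0, 200_000
u = rng.gamma(shape=0.5, scale=1.0, size=n)
v = rng.gamma(shape=0.5, scale=1.0, size=n)
x = b * np.sqrt(v / u)                      # eq. (10)

qs = [0.25, 0.5, 0.75]
print(np.quantile(x, qs))                   # empirical quartiles
print(halfcauchy(scale=b).ppf(qs))          # theory: approx. [0.83, 2.00, 4.83]
```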
## Appendix C
|
2304.01015 | Adaptive structure evolution and biologically plausible synaptic
plasticity for recurrent spiking neural networks | The architecture design and multi-scale learning principles of the human
brain that evolved over hundreds of millions of years are crucial to realizing
human-like intelligence. Spiking Neural Network (SNN) based Liquid State
Machine (LSM) serves as a suitable architecture to study brain-inspired
intelligence because of its brain-inspired structure and the potential for
integrating multiple biological principles. Existing researches on LSM focus on
different certain perspectives, including high-dimensional encoding or
optimization of the liquid layer, network architecture search, and application
to hardware devices. There is still a lack of in-depth inspiration from the
learning and structural evolution mechanism of the brain. Considering these
limitations, this paper presents a novel LSM learning model that integrates
adaptive structural evolution and multi-scale biological learning rules. For
structural evolution, an adaptive evolvable LSM model is developed to optimize
the neural architecture design of liquid layer with separation property. For
brain-inspired learning of LSM, we propose a dopamine-modulated
Bienenstock-Cooper-Munros (DA-BCM) method that incorporates global long-term
dopamine regulation and local trace-based BCM synaptic plasticity. Comparative
experimental results on different decision-making tasks show that introducing
structural evolution of the liquid layer, and the DA-BCM regulation of the
liquid layer and the readout layer could improve the decision-making ability of
LSM and flexibly adapt to rule reversal. This work is committed to exploring
how evolution can help to design more appropriate network architectures and how
multi-scale neuroplasticity principles coordinated to enable the optimization
and learning of LSMs for relatively complex decision-making tasks. | Wenxuan Pan, Feifei Zhao, Yi Zeng, Bing Han | 2023-03-31T07:36:39Z | http://arxiv.org/abs/2304.01015v1 | # Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks
###### Abstract
The architecture design and multi-scale learning principles of the human brain that evolved over hundreds of millions of years are crucial to realizing human-like intelligence. The Spiking Neural Network (SNN) based Liquid State Machine (LSM) serves as a suitable architecture to study brain-inspired intelligence because of its brain-inspired structure and the potential for integrating multiple biological principles. Existing research on LSM focuses on certain perspectives, including high-dimensional encoding or optimization of the liquid layer, network architecture search, and application to hardware devices. There is still a lack of in-depth inspiration from the learning and structural evolution mechanisms of the brain. Considering these limitations, this paper presents a novel LSM learning model that integrates adaptive structural evolution and multi-scale biological learning rules. For structural evolution, an adaptive evolvable LSM model is developed to optimize the neural architecture design of the liquid layer with separation property. For brain-inspired learning of LSM, we propose a dopamine-modulated Bienenstock-Cooper-Munro (DA-BCM) method that incorporates global long-term dopamine regulation and local trace-based BCM synaptic plasticity. Comparative experimental results on different decision-making tasks show that introducing structural evolution of the liquid layer, and the DA-BCM regulation of the liquid layer and the readout layer, could improve the decision-making ability of LSM and flexibly adapt to rule reversal. This work is committed to exploring how evolution can help to design more appropriate network architectures and how multi-scale neuroplasticity principles coordinate to enable the optimization and learning of LSMs for relatively complex decision-making tasks.
## Introduction
The brain is a highly heterogeneous and powerful network of tens of billions of neurons capable of unparalleled feats of cognition. Traditional artificial intelligence models are predominantly built on networks with hierarchical feed-forward architectures, unlike the highly recurrently connected biological networks in the brain [1], making it difficult to match the results of natural evolution in terms of function and efficiency. Macro-scale reconstruction studies of human brain structure [2] confirmed the existence of a large number of non-hierarchical structures in the brain, such as modular structures [3, 4, 5, 6], hub structures [7, 8], and small-world structures [9, 6]. These topological properties enable the brain to better coordinate multiple cognitive functions to adapt to complex and dynamic environments, and they are unconventional structures missing in existing brain-inspired AI models.
Motivated by this, this work focuses on a network structure called the Liquid State Machine (LSM), which can generate complex dynamics like the brain and facilitates the processing of real-time tasks. LSM [10] is a spiking neural network (SNN) structure that belongs to the family of reservoir computing models, with a randomly connected liquid layer and a readout layer whose weights can be modified, as shown in Fig.1. Reservoir computing has achieved some progress in different fields, such as speech recognition [11, 12, 13], image recognition [14, 15, 16], robot control [17, 18], etc.
Some LSM models use fixed weights for the liquid layer, probably because its complex recurrent structure is difficult to be trained and optimized, which limits the learning capability and wide application of LSM [17, 19, 20]. Most of these existing models
used gradient-based approaches [21, 22, 23, 24, 25] to train the readout layer without training the liquid layer, resulting in a gap with the real learning mechanism in the brain. Some approaches [26, 27, 28, 29] tried to train the liquid layer through local synaptic plasticity such as Spike-Timing-Dependent Plasticity (STDP) [30] or Hebb [31], but these are limited to simple tasks. In summary, there is still a need to explore biologically plausible learning rules applicable to LSM to optimize its liquid and readout layers.
In addition, from the structural perspective, the liquid layer is usually fixed after initialization, simply serving as a means of high-dimensional encoding. Some methods [28, 25, 32], inspired by deep neural networks, stacked multiple LSM layers into a deep LSM to solve machine learning tasks. These approaches have not explored dynamic structure search for LSMs in order to adapt to changing tasks. In fact, the human brain evolved rather than followed a specific design, which differs from current common AI algorithms. Evolution allows the brain's nervous system to be continuously optimized and eventually evolve into non-hierarchical, highly-efficient structures. Inspired by this, some studies [20, 21, 23, 27, 33] proposed evolutionary methods for optimizing the parameters and structures of LSM. [17] assessed LSMs according to the three LSM properties proposed in [10], encoding them into the chromosome and optimizing the separation property (SP) as the objective function. Using SP as fitness is reasonable because it reflects the role of evolution in the dynamic adjustment of the network. However, that work is limited to simple control tasks. [21] developed a three-step search method based on the genetic algorithm (GA) to search the network architecture and parameters of LSMs. [21, 23, 27] directly used the experimental data set as a criterion for evaluating the fitness of LSM. These approaches lack effective exploitation of the internal dynamics of LSM.
Considering the various limitations of existing LSM studies mentioned above, in this paper we present a brain-inspired LSM with an evolutionary architecture and a dopamine-modulated Bienenstock-Cooper-Munro (DA-BCM) [34] rule. We consider the optimization of LSM from structure and function respectively. **Structurally**, we optimize the architecture of the liquid layer from an evolutionary perspective to obtain a more brain-inspired, effective structure with a higher separation property. **Functionally**, instead of a gradient-based method, we propose a biologically plausible learning method combining local trace-based BCM [34] synaptic plasticity with global dopamine regulation. The experimental results show that the proposed evolved DA-modulated LSM is able to learn the correct strategy faster and flexibly adapt to rule reversal on multiple reinforcement learning tasks. As reservoir computation exhibits complex dynamics consistent with activity in brain neural circuits, the evolvable LSM based on DA-BCM provides us with a powerful and biologically realistic tool to delve deeper into the learning process of complex networks. This work provides new opportunities for developing more brain-inspired complex and efficient network models, building adaptive learning frameworks, and revealing the evolutionary mechanisms of brain structures and functions.
## Results
### The Evolution and Learning Process
This paper first evolves the structure of the liquid layer of the LSM, and then optimizes the liquid layer and readout layer based on DA-BCM to accomplish online decision making. Evolution randomly initializes \(N_{ini}=100\) individuals according to the inputs of different tasks, then randomly mutates multiple offspring during the mutation process, and finally selects the optimal \(N_{opt}=20\) structures for decision making. All experimental results in this work are based on the average over the network structures obtained from multiple random evolutionary runs, to ensure accuracy and fairness.
Figure 1: In the traditional definition of the LSM, randomly connected liquid layer neurons receive time-varying signals from external inputs and other nodes. Recursive connection patterns enable input signals to be converted into liquid layer dynamics and then abstracted by the readout layer.
Experiments show that evolved individuals are superior to randomly generated models in efficiency. Based on the evolved structures, the \(N_{opt}\) individuals (agents) are then placed in a specific environment, where the next step action is determined according to the LSM's output. DA-BCM rule dynamically adjusts the strength of LSM's weights through the reward signal fed back by the environment, enabling the agent to learn and survive better in the environment. Model performance is evaluated according to the average cumulative reward of \(N_{opt}=20\) individuals within \(T\) steps, which is calculated as follows:
\[R=\frac{\sum_{i=1}^{N_{opt}}\sum_{t=1}^{T}DA_{t}^{i}}{N_{opt}} \tag{1}\]
\(DA_{t}^{i}\) represents the reward obtained by individual \(i\) at step \(t\). Therefore, \(R\) represents the average cumulative reward across individuals.
### T-maze
#### Experiment Configuration
We constructed a T-shaped maze (the two ends of the maze hold food and poison, respectively). Three input neurons representing the agent's observations in three directions (which may be walls, roads, food, or poison) feed information into the evolved LSM, and the agent performs actions (forward, left, right) based on the output. Let \(dis_{m}\) denote the difference in the agent's distance to the food before and after executing the behavior; the reward function for the T-maze task is then:
Figure 2: Experimental results on T-maze. **a:** the separation property of evolving LSMs, calculated as the average over all individuals in the population. **b:** reversal learning results. Green dots indicate that the agent has obtained food, and red dots indicate poison. **c:** performance of LSMs (applying different learning rules). Evolved model results are the average performance of \(N_{opt}\) individuals; unevolved model results are the average performance of multiple runs. **d:** the agent learns to change behavior guided by DA regulation. **e:** When the rule is reversed, the agent learns to avoid the poison after being punished once.
\[DA=\begin{cases}3,&get\ food\\ -3,&get\ poison\\ 1,&dis_{m}<0\\ -1,&dis_{m}\geq 0\end{cases} \tag{2}\]
An energy mechanism is set up to prevent the agent from wandering or standing still (i.e. hitting the wall all the time) in the maze. Each round is counted from the starting point to the endpoint, and learning time is limited to a certain number of steps, after which the exploration process is restarted. When the agent receives positive rewards continuously, the positive and negative rewards in the maze exchange positions with a certain probability, thereby verifying the ability of the model to adapt via reversal learning.
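As a sketch, eq. (2) above translates directly into code (the `outcome` encoding is a hypothetical convention; `dis_m` is the change in the agent-to-food distance after the chosen action):

```python
def tmaze_reward(outcome, dis_m):
    # eq. (2): terminal outcomes dominate; otherwise reward approaching the food
    if outcome == "food":
        return 3
    if outcome == "poison":
        return -3
    return 1 if dis_m < 0 else -1
```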
#### Results on T-maze
The fitness of the evolved \(N_{opt}\) individuals is shown in Fig.2a: it gradually increases to the maximum value, verifying the evolutionary algorithm's effectiveness. Comparing evolved and unevolved models in Fig.2c, we find that structure evolution improves the learning efficiency and performance of LSM models. Model performance is calculated using the average reward value of \(N_{opt}\) individuals over a period of time \(T=500\). Fig.2d shows how evolved LSMs with the DA-BCM learning rule help the agent find where the food is. Along the way, reward signals from environmental feedback (shown in green and red, respectively) guide agent behavior through dopamine modulation.
**Reversal Learning.** During the learning process, the agent showed the ability to flexibly adapt to reversal of the rules, as shown in Fig.2b: after taking the poison for the first time and being punished, the agent can avoid the poison no matter how the positions of the poison and food are changed, which means the agent is capable of reversal learning and can flexibly handle changes in the environment. Simulation results shown in Fig.2e indicate that the agent exhibits the ability of reversal learning.
**Ablation Analysis.** Ablation experiments further evaluate the effect of DA-BCM learning rules by applying STDP and DA-BCM to the liquid and readout layers, exploring the effect of different learning rules on LSM performance. As shown in Table 1 and Fig.2c, the evolved LSM with liquid and readout layers trained by DA-BCM achieves the best performance and significantly outperforms other models. The worst among all methods is the evolved LSM trained with unsupervised STDP, indicating that a model without any environmental feedback to guide LSM dynamics cannot gain knowledge, causing the average reward \(R\) to fluctuate over time with no cumulative trend. Besides, the none+DA-BCM and STDP+DA-BCM (the term before "+" denotes the liquid layer learning rule, and the term after it the readout layer learning rule) models achieve similarly good performance, indicating that the regulation of DA-BCM at the readout layer can help the model learn the rules of the environment. STDP+DA-BCM and DA-BCM+DA-BCM are superior to none+DA-BCM, which indicates that optimizing the weights of the liquid layer is more effective than keeping them fixed. Further, the outstanding advantage of DA-BCM+DA-BCM illustrates that our proposed biologically plausible DA-BCM training method can outperform untrained or STDP-trained models, helping the evolved LSM structure learn the environment information more efficiently.
### Flappy Bird
#### Experiment Configuration
Flappy Bird is a game in which the player controls the bird to move through the gaps between pipes without colliding, for as long as possible. The settings of the state space and action space are shown in Fig.3a; the size of the action space is 2 (i.e. up or down). We divide the state space into 9 parts according to the positional relationship between the bird and the pipe. The state of the bird and its action at the next moment are the input and output of the evolved LSM, respectively. A positive reward is used to encourage the bird to pass the pipes (i.e. the gap between the upper and lower pipes), and a negative reward is used to punish the bird
\begin{table}
\begin{tabular}{c c c c} \hline \hline Structure & Liquid Layer & Readout Layer & Performance \\ \hline Evolved & STDP & STDP & -0.9\(\pm\)5.54 \\ Unevolved & DA-BCM & DA-BCM & 162.0\(\pm\)12.44 \\ Evolved & none & DA-BCM & 399.44\(\pm\)5.99 \\ Evolved & STDP & DA-BCM & 414.75\(\pm\)1.41 \\
**Evolved** & **DA-BCM** & **DA-BCM** & **464.4\(\pm\)5.15** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of ablation experiments on T-maze.
for staying away from the exit of the pipes. The reward is used by the LSM to adjust synaptic strengths (based on DA-BCM). When the bird collides with a pipe, the game is over.
Table 2 illustrates the definition of the reward function (_DA_) for Flappy Bird. The reward is determined according to the current state, the last state, and the distance difference \(dis_{f}\) between the bird and the center of the pipe opening before and after executing a selected behavior. The maximum positive reward is given when the bird is in state 0 or 1. A slightly smaller positive reward is used to encourage shorter distances (\(dis_{f}<0\)) to the target location (i.e. the empty space between pipes). Correspondingly, if the distance becomes longer (\(dis_{f}\geq 0\)), a negative reward is given. The largest negative reward is used to punish hitting the pipe. Model performance is calculated using the average reward value of \(N_{opt}\) individuals over a period of \(T=2000\) time steps.
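For reference, the reward of Table 2 can be sketched as a single function (state indices follow Fig.3a; the branching is a direct reading of the table):

```python
def flappy_reward(state, last_state, dis_f):
    if state in (0, 1):        # inside the gap: maximal reward regardless of history
        return 6
    if state == 8:             # collision with a pipe
        return -100
    same = state == last_state
    closer = dis_f < 0         # moved towards the centre of the pipe opening
    if state in (2, 3):
        return 3 if (same and closer) else (-5 if same else -3)
    if state in (4, 5):
        return 3 if (same and closer) else (-8 if same else -5)
    return 3 if (same and closer) else -3   # states 6 and 7
```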
#### Results on Flappy Bird
To verify the validity of the proposed model, we compared the unevolved LSM with both the liquid layer and the readout layer trained by DA-BCM (Unevolved model with DA-BCM in Fig.3b), the evolved LSM with a non-trained liquid layer and a DA-BCM-trained readout layer (Evolved model with None+DA-BCM in Fig.3b), the evolved LSM with an STDP-trained liquid layer and a DA-BCM-trained readout layer (Evolved model with STDP+DA-BCM in Fig.3b), and the evolved LSM with both liquid and readout layers trained by DA-BCM (Evolved model with DA-BCM+DA-BCM in Fig.3b). Fig.3b depicts the average reward curves (Gaussian smoothed) for the different models. It is obvious that the evolved model with DA-BCM+DA-BCM achieves the best results, while the unevolved method is inferior to the evolved methods. Comparing the optimization methods for the liquid layer, STDP slightly outperforms the untrained method, while DA-BCM brings further improvements. Fig.3c shows that our proposed model can guide the bird to fly smoothly through the pipe via dopamine modulation.
The detailed final performances (average reward \(R\) and its variance) of the different methods are listed in Table 3. The LSM trained only with STDP could not finish this task, failing to get positive feedback from the environment, causing the bird to hit the pipe from the start and the game to stop. Each component of the proposed model, such as evolution and DA-BCM in the liquid and readout layers, enables the LSM network to learn faster and better. Thus, we can conclude that our approach yields clear advantages in optimizing the LSM from both the structural and functional perspectives.
## Discussion
Evolution has not only designed the brain's general connectivity patterns but has also optimized a learning rule coordinating multi-scale plasticity, endowing the brain with the ability to flexibly adapt through reversal learning and enabling efficient online learning. Inspired by this, this paper proposed a structurally and functionally optimized LSM model that incorporates adaptive structural evolution and the biologically plausible DA-BCM learning rule. Experimental results demonstrated that the structural evolution of the liquid layer and the DA-BCM regulation of the liquid layer and the readout layer significantly improved performance on multiple decision-making tasks.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{current state} & \multicolumn{2}{c}{last state = current state} & \multirow{2}{*}{last state \(\neq\) current state} \\ & \(dis_{f}<0\) & \(dis_{f}\geq 0\) & \\ \hline
0 or 1 & 6 & 6 & 6 \\
2 or 3 & 3 & -5 & -3 \\
4 or 5 & 3 & -8 & -5 \\
6 or 7 & 3 & -3 & -3 \\
8 & -100 & -100 & -100 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Reward function of Flappy Bird.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Structure & Liquid Layer & Readout Layer & Performance \\ \hline Evolved & STDP & STDP & -54.77\(\pm\)59.7 \\ Unevolved & DA-BCM & DA-BCM & 4.97\(\pm\)0.04 \\ Evolved & none & DA-BCM & 5.29\(\pm\)0.00 \\ Evolved & STDP & DA-BCM & 5.36\(\pm\)0.02 \\
**Evolved** & **DA-BCM** & **DA-BCM** & **5.43\(\pm\)0.03** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of ablation experiments on Flappy Bird.
Most existing works [21, 22, 23, 24, 25, 26, 27, 28, 29] used backpropagation-based methods (which are suitable for hierarchical networks) to optimize the readout layer without considering the optimization of the liquid layer, or only adopted unsupervised STDP to optimize the liquid layer. Our model proposed a DA-BCM learning rule for both the liquid layer and the readout layer, which is more biologically plausible. In addition, unlike existing structural search methods that directly search for the highest-performing structure, we took inspiration from the evolutionary mechanism and optimized the structure of the LSM according to its internal properties. Here, we would like to compare our approach with other reinforcement learning models, including the classical Q-learning [35], DQN [36], and LSTM [37] (learning via a policy gradient algorithm) with recurrent structure.
In the LSTM configuration, the network consists of one LSTM layer with 128 hidden neurons and one fully connected layer. Q-learning uses the Bellman update of Eq.3, with \(\gamma=0.9\) and \(\alpha=0.1\). The agent's action is selected according to the \(\epsilon\)-greedy algorithm (\(\epsilon=0.8\)), meaning that with probability 0.2 each selection explores the action space at random. In DQN, the reward discount \(\gamma\) and learning rate \(\alpha\) are set to 0.99 and 0.1, respectively. The loss function of the Q network is the mean squared error shown in Eq.4, where \(\gamma\) is set to 0.86. The DQN network is fully connected and consists of three layers: the input layer, the hidden layer (with a size of 50), and the output layer. The learning rate of the Adam optimizer is set to 0.1.
For fairness, multiple experiments are performed for each comparison algorithm and the performance is averaged. The results for LSTM, Q-learning, and DQN are averaged over multiple runs (\(n=20\)), with LSTM and DQN running 1000 episodes each.
\[Q(s,a)=Q(s,a)+\alpha\left[R(s,a)+\gamma\max_{a^{\prime}}Q^{\prime}\left(s^{ \prime},a^{\prime}\right)-Q(s,a)\right] \tag{3}\]
\[\omega^{*}=\arg\min_{\omega}\frac{1}{2N}\sum_{i=1}^{N}\left[Q_{\omega}\left(s_ {i},a_{i}\right)-\left(r_{i}+\gamma\max_{a^{\prime}}Q_{\omega}\left(s^{\prime }_{i},a^{\prime}\right)\right)\right]^{2} \tag{4}\]
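For reference, a minimal sketch of the tabular Q-learning baseline of Eq.3 is given below; the environment interface (`reset`/`step`) is an assumed placeholder, not taken from our implementation.

```python
import numpy as np

# Tabular Q-learning following Eq.3; env.reset()/env.step() are assumed
# to return integer states, a scalar reward, and a done flag.
GAMMA, ALPHA, EPS_GREEDY = 0.9, 0.1, 0.8  # values reported above

def q_learning(env, n_states, n_actions, episodes=1000):
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy: greedy with probability 0.8, otherwise
            # explore the action space uniformly at random
            if np.random.rand() < EPS_GREEDY:
                a = int(np.argmax(Q[s]))
            else:
                a = np.random.randint(n_actions)
            s_next, r, done = env.step(a)
            # Bellman update of Eq.3
            Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q
```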
Figure 3: Experimental results on Flappy Bird. **a:** the setup of state space and action space. The whole space is divided into 9 states, where 6 and 7 are the ultimate goals to be achieved. **b:** the final performance of all models in the Flappy Bird environment. Evolved model results are the average performance of \(N_{opt}\) individuals, unevolved model results are the average performance of multiple runs. **c:** agents avoid mistakes under the guidance of reward signals.
Table 4 and Fig.4 compare the average reward of the evolved \(N_{opt}\) individuals under different learning rules in detail. From the results, it can be seen that our proposed model surpasses the comparison algorithms in terms of both mean reward and stability (variance). In fact, combining the results in Tables 3 and 4, it can be found that the three evolutionary LSMs (DA-BCM+DA-BCM, STDP+DA-BCM, none+DA-BCM) outperform LSTM and Q-learning on the two tasks. We can also see that on the T-maze task, the performance of LSTM and DQN is significantly weaker than that of the other models, and the variance of LSTM is very large, possibly because its large number of parameters causes overfitting in a small-sample learning task. In Flappy Bird, although DQN performs better than LSTM and Q-learning, its variance is very large, and its overall efficiency is not as good as our model's.
**Computational Cost Analysis.** We also consider the impact of the computational cost of the models on the fairness of the comparison. Take T-maze as an example. In our experiments, the LSTM input size is 4 and the hidden layer size is 128, so the total number of parameters is computed as Eq.5:
\[4*(4*128+128^{2}+128)=68096 \tag{5}\]
For Q-learning, only a state-action table of size 28*3=84 needs to be stored. For DQN, there is no Q table to store; the parameters of the three fully connected layers amount to \(4*50*3=600\). As for our proposed LSM model, considering that the connection density of the evolved model is less than 2%, the number of connections in the liquid layer is at most about \(64*64*0.02=81.92\). Including the connections between the liquid layer and the input and output, the total number of parameters is about 109.92 on average, of the same magnitude as Q-learning. Therefore, the computational cost of our proposed model is low compared to DQN and LSTM.
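The arithmetic above can be reproduced in a few lines (the liquid-layer count is the stated upper bound; the input/output connections are reported only through the ~109.92 average):

```python
# Parameter-count comparison for T-maze (sizes as stated in the text).
lstm_params = 4 * (4 * 128 + 128 ** 2 + 128)  # Eq.5 -> 68096
q_table_params = 28 * 3                        # state-action table -> 84
dqn_params = 4 * 50 * 3                        # three-layer estimate -> 600
liquid_params = 64 * 64 * 0.02                 # <= ~81.92 liquid connections
# plus input/output connections, ~109.92 parameters in total on average
```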
To sum up, this work breaks away from the fixed, deep, hierarchical network structures that rely on BP optimization in AI, and develops a multi-scale biological plasticity coordinated learning rule (instead of BP) together with an efficient structural evolution for the LSM. Because the proposed model borrows the brain's information processing mechanisms in both structure and function, it is more biologically plausible, more flexible and efficient, and naturally better suited to developing human-like cognitive intelligence. Although this paper demonstrates the superiority of our proposed evolutionary LSM model in terms of model
\begin{table}
\begin{tabular}{c c c} \hline \hline Learning Methods & T-maze & Flappy Bird \\ \hline Q-learning [35] & 348.4\(\pm\)12.81 & 5.19\(\pm\)0.04 \\ LSTM [37] & 169.95\(\pm\)72.61 & 4.54\(\pm\)0.40 \\ DQN [36] & 80.1\(\pm\)11.68 & 5.36\(\pm\)0.62 \\
**Ours** & **464.4\(\pm\)5.15** & **5.43\(\pm\)0.03** \\ \hline \hline \end{tabular}
\end{table}
Table 4: For different methods, the final performances (average reward R and its variance) on T-maze and Flappy Bird tasks.
Figure 4: Comparative Experimental results on Two Tasks. **a:** comparison results of four models in T-maze. **b:** comparison results of four models in Flappy Bird.
efficiency and computational cost, there are still some limitations. For example, on the learning-algorithm side there is still room for exploration in developing brain-inspired models, and many neural mechanisms remain to be investigated and fully applied in the AI field. This paper focuses on small-sample learning environments; other application scenarios could be used to further explore more energy-efficient, self-organizing, brain-inspired evolutionary spiking neural networks.
## Methods
### LSM Architecture
#### Leaky Integrate-and-Fire (LIF) Neuron Model
The neuron model used in the proposed LSM is the LIF model [38], which can be simulated by Eq.6:
\[\tau_{\text{m}}\frac{dV_{\text{m}}(t)}{dt}=I(t)-V_{\text{m}}(t) \tag{6}\]
\(V_{m}\) is the voltage across the cell membrane, \(I\) is the external input, and \(\tau_{m}=2.0\) is the membrane potential time constant. The post-synaptic neuron's voltage accumulates exponentially from \(V_{reset}=0\) when current spikes are received from pre-synaptic neurons, and an output spike is produced as soon as the threshold \(V_{th}=1.0\) is reached.
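For illustration, Eq.6 can be integrated with a simple forward-Euler scheme; the step size and the input current below are assumptions, while the constants are those stated above.

```python
# Forward-Euler simulation of the LIF dynamics of Eq.6.
TAU_M, V_TH, V_RESET, DT = 2.0, 1.0, 0.0, 0.1  # DT is an assumed step size

def lif_step(v, i_ext):
    """Advance the membrane potential one step; return (new_v, spike)."""
    v = v + DT / TAU_M * (i_ext - v)
    if v >= V_TH:
        return V_RESET, 1  # emit a spike and reset the membrane
    return v, 0

v, spikes = V_RESET, []
for _ in range(100):
    v, s = lif_step(v, i_ext=1.5)  # constant input current (illustrative)
    spikes.append(s)
```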
#### Liquid Layer Initialization
In the experiment, the number of neurons in the liquid layer was set to 10*10, totaling 100. Inspired by neuron connections in the mammalian visual cortex, we set the connection probability \(p\) between neurons \(i\) and \(j\) to decay exponentially with the Euclidean distance \(d(i,j)\)[39]. The closer the distance, the higher the connection probability, defined as follows:
\[p=\left(e^{-\frac{1}{\lambda^{2}}}\right)^{d^{2}} \tag{7}\]
\(\lambda\) is the parameter that controls the average number of synaptic connections and the average distance between connected neurons. To prevent neurons that are too far apart from connecting, a mask matrix \(M_{dis}\) is added and combined with \(p\) to form the weight matrix \(W_{l}\) of the liquid layer as in Eq.8:
\[W_{l}=\alpha*M_{dis}*M_{sparse}*p \tag{8}\]
\[M_{dis}^{i,j}=\begin{cases}1,&d(i,j)<D_{th}\\ 0,&d(i,j)\geq D_{th}\quad\text{or}\quad i=j\end{cases} \tag{9}\]
Both \(M_{dis}\) and \(M_{sparse}\) are binary matrices. \(M_{dis}\) helps to form locally interconnected liquid-layer structures (self-connection is not allowed). \(M_{sparse}\) is a sparse binary matrix in which only a randomly selected 1% of the entries equal 1 (the sparser liquid density facilitates the subsequent evolution operations). \(\alpha=4\) is a constant.
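A sketch of this initialization (Eqs.7-9) is given below; the square-grid layout is implied by the text, while the values of \(\lambda\) and \(D_{th}\) are assumptions.

```python
import numpy as np

# Liquid-layer initialization per Eqs.7-9 on a 10x10 neuron grid.
ALPHA, LAMBDA, SPARSITY = 4.0, 2.0, 0.01  # LAMBDA is an assumed value

def init_liquid(n_side=10, d_th=3.0, rng=np.random.default_rng(0)):
    coords = np.array([(x, y) for x in range(n_side) for y in range(n_side)])
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    p = np.exp(-d ** 2 / LAMBDA ** 2)                # Eq.7
    m_dis = (d < d_th).astype(float)                 # Eq.9
    np.fill_diagonal(m_dis, 0.0)                     # no self-connections
    m_sparse = (rng.random(d.shape) < SPARSITY).astype(float)
    return ALPHA * m_dis * m_sparse * p              # Eq.8

W_l = init_liquid()
```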
#### Readout Layer Initialization
To construct an effective readout layer structure and prevent inactive neurons from being read out, we formulate the connection weights \(W_{r}\) between the liquid layer and the readout layer according to the state matrix \(S\) as follows:
\[W_{r}=\beta*M_{r}*S*w_{rand} \tag{10}\]
\(w_{rand}\) denotes a random weight matrix. Both \(M_{r}\) and \(S\) are binary matrices. When the readout layer has \(N_{r}\) neurons, all liquid-layer neurons are randomly divided into \(N_{r}\) classes, making each liquid neuron connected to only one readout neuron. The resulting mask matrix \(M_{r}\) specifies which liquid neurons are connected to which readout neurons. A 0 in the state matrix \(S\) indicates that the neuron did not fire, and a 1 that it fired; therefore, non-firing liquid neurons are not connected to any readout neuron. \(\beta=4\) is a constant. The number of liquid neurons connected to each output or input neuron is set to 4.
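A corresponding sketch of Eq.10, using the random-partition variant described above, is shown below; the firing-state vector is an illustrative input.

```python
import numpy as np

# Readout initialization per Eq.10: each liquid neuron is randomly assigned
# to one readout neuron, and silent liquid neurons are disconnected.
def init_readout(s_fired, n_readout, beta=4.0, rng=np.random.default_rng(0)):
    n_liquid = s_fired.shape[0]
    m_r = np.zeros((n_liquid, n_readout))
    m_r[np.arange(n_liquid), rng.integers(0, n_readout, n_liquid)] = 1.0
    w_rand = rng.random((n_liquid, n_readout))
    return beta * m_r * s_fired[:, None] * w_rand    # Eq.10

W_r = init_readout(s_fired=np.random.randint(0, 2, 100), n_readout=3)
```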
### Evolutionary LSM Structure
First, we randomly initialize \(N_{ini}\) LSMs to form a population \(pop\), with their respective liquid-layer connectivity matrices as chromosomes. For each chromosome, a population of offspring is generated, and each offspring is mutated. According to the computed fitness function, the best offspring of each chromosome is selected as the new chromosome. Meanwhile, to introduce innovation, the next generation consists of all selected offspring plus partially regenerated random individuals. Evolution continues until, in the \(G_{th}\) generation, the \(N_{opt}\) individuals with the highest fitness are selected as the output of the evolutionary algorithm. Fig.5 illustrates the detailed evolutionary process.
#### Initialization
We initialize \(N_{ini}\) LSM individuals, each of which is an LSM structure. A chromosome is defined as a matrix representing an individual's liquid-layer connectivity pattern:
\[Chrom^{i}=\begin{cases}1,W_{l}^{i}>0\\ 0,W_{l}^{i}=0,\quad 0\leq i\leq N_{ini}\end{cases} \tag{11}\]
\(i\) is the index of the individual, and \(W_{l}^{i}\) is the liquid-layer connection weight matrix of the \(i\)th individual, defined in Eq.8.
#### Mutation
Each chromosome generates multiple offspring (collectively, a chromosome population) and mutates them: an inactive neuron (firing no spikes) is randomly selected among all liquid neurons and connected to a surrounding active neuron (firing at least one spike).
#### Evaluation
Figure 5: The procedure of evolutionary LSM structure.

The Separation Property (SP) was proposed in [10], together with the concept of the LSM, as a measure of performance; it quantifies the separation between the internal system state trajectories produced by two different input streams. Many methods exist in current research for measuring the SP of an LSM; here, following [40], we design an SP function to measure the quality of the liquid layer. We first compute a state matrix \(S\) (1 for fired, otherwise 0) of the liquid layer based on the input and then compute the SP according to the following formula:
\[SP=rank(S) \tag{12}\]
\(rank(S)\) denotes the rank of matrix \(S\); the larger the value, the stronger the separation property of the LSM. After mutation, we compute the separation property of the offspring according to Eq.12 and use it as the fitness function.
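Computing this fitness is then a one-liner (sketch with an illustrative state matrix):

```python
import numpy as np

# Separation property of Eq.12: the rank of the binary state matrix S
# (rows: input streams/time steps, columns: liquid neurons).
def separation_property(state_matrix):
    return int(np.linalg.matrix_rank(state_matrix))

S = np.random.randint(0, 2, size=(20, 100))  # illustrative state matrix
fitness = separation_property(S)
```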
#### Selection
Based on the fitness \(F_{offs}\) of all offspring, the offspring with the largest fitness in each chromosome population is selected to replace the corresponding individual. During the first \(G_{th}\) generations, the next generation consists of individuals with high fitness plus newly generated random individuals amounting to a proportion \(rate\) of the entire population. After \(G_{th}\) rounds of evolution, drawing on the experience of the iterative optimization, the \(N_{opt}\) individuals with the highest fitness are selected as the evolution output.
The procedure for evolving the LSM architecture is summarized in Algorithm 1.
#### DA-BCM for Training Evolved LSM
After evolving the LSM, we incorporate multi-scale biologically inspired learning rules, namely local synaptic plasticity and global dopamine regulation, to optimize the synaptic strengths. As shown in Fig.6, the learning process updates the connection weights within the liquid layer and between the liquid and readout layers according to local BCM plasticity and global dopamine regulation.
#### Local BCM Synaptic Plasticity
Since gradient-based learning rules are inconsistent with biological reality, we employ a more biologically plausible synaptic plasticity mechanism, the BCM rule [34], combined with global dopamine regulation to simulate the effects of reward and historical memory on behavior and to encode the target spatio-temporal dynamics of the readout neurons. BCM was first used to explain how cortical neurons undergo either LTP or LTD depending on the different conditioning stimulation protocols applied to pre-synaptic neurons [41]. According to BCM, the activity of the post-synaptic neuron strengthens the connection, and the activity history of the post-synaptic neuron determines the dynamic adjustment of the threshold. The synaptic strength update rule based on the activity of pre- and post-synaptic neurons is as follows:
\[\frac{dm(t)}{dt}=\phi(e^{t}_{post})e^{t}_{pre}-\varepsilon m(t) \tag{13}\]
\(m\) is the weight between the pre- and post-synaptic neurons. \(\varepsilon\) is a coefficient that decays the weight uniformly over time. \(\phi\) is the BCM modification function, which adjusts according to the spiking trace of the post-synaptic neuron and incorporates a sliding, activity-dependent modification threshold \(\theta_{m}\) that allows bidirectional synaptic modification. \(e^{t}_{pre}\) and \(e^{t}_{post}\) are the spiking traces of the pre- and post-synaptic neurons at time \(t\), calculated as:
\[e^{t}_{pre}=\tau_{bcm}e^{t-1}_{pre}+o^{t}_{pre} \tag{14}\]

\[e^{t}_{post}=\tau_{bcm}e^{t-1}_{post}+o^{t}_{post} \tag{15}\]

Figure 6: DA-BCM optimizes the synaptic weights of the evolved LSM.
where \(o^{t}_{pre}\) and \(o^{t}_{post}\) denote the spikes of the pre- and post-synaptic neurons, respectively, and \(\tau_{bcm}\) is the time decay constant. \(\phi\) is defined as:
\[\phi(e)=e(e-\theta_{m}) \tag{16}\]
The sliding threshold \(\theta_{m}\) is dynamically updated according to the average value of the trace \(e\) over a period of time.
#### Global Dopamine Regulation
[42] proposed the "reward prediction error hypothesis", according to which dopamine neurons encode reward and punishment signals during interaction with the environment. Related studies have introduced reward-regulated learning rules into deep spiking neural networks [43] and multi-brain-region coordinated SNNs [44, 45]. Further, reward-modulated STDP [46, 47, 48], which integrates dopamine regulation with STDP, can solve the credit assignment problem in order to obtain more reward.
Here, inspired by the neural mechanisms of dopamine regulation, we propose a DA-BCM learning rule that integrates long-term dopamine regulation and local BCM synaptic plasticity. When an external reward signal is received, dopamine provides a global long-term modulation that combines with BCM plasticity to adaptively adjust the synaptic strengths of the liquid layer and the readout layer. The DA-BCM learning rule is as follows:
\[\frac{dm(t)}{dt}=DA*(\phi(e^{t}_{post})e^{t}_{pre}-\varepsilon m(t)) \tag{17}\]
Here, \(DA\) stands for dopamine signal.
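Putting Eqs.13-17 together, a single DA-BCM weight update can be sketched as follows; the trace decay \(\tau_{bcm}\), the decay coefficient \(\varepsilon\), and the step size are assumed values, and the sliding threshold \(\theta_{m}\) is maintained by the caller as the running average of the post-synaptic trace.

```python
# One DA-BCM update step (Eqs.13-17); constants are assumed values.
TAU_BCM, EPS, DT = 0.9, 0.01, 1.0

def da_bcm_step(m, e_pre, e_post, o_pre, o_post, theta_m, da):
    e_pre = TAU_BCM * e_pre + o_pre        # Eq.14: pre-synaptic trace
    e_post = TAU_BCM * e_post + o_post     # Eq.15: post-synaptic trace
    phi = e_post * (e_post - theta_m)      # Eq.16: BCM modification
    dm = da * (phi * e_pre - EPS * m)      # Eq.17: dopamine-gated BCM
    return m + DT * dm, e_pre, e_post
```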
|
2309.03167 | Split-Boost Neural Networks | The calibration and training of a neural network is a complex and
time-consuming procedure that requires significant computational resources to
achieve satisfactory results. Key obstacles are a large number of
hyperparameters to select and the onset of overfitting in the face of a small
amount of data. In this framework, we propose an innovative training strategy
for feed-forward architectures - called split-boost - that improves performance
and automatically includes a regularizing behaviour without modeling it
explicitly. Such a novel approach ultimately allows us to avoid explicitly
modeling the regularization term, decreasing the total number of
hyperparameters and speeding up the tuning phase. The proposed strategy is
tested on a real-world (anonymized) dataset within a benchmark medical
insurance design problem. | Raffaele Giuseppe Cestari, Gabriele Maroni, Loris Cannelli, Dario Piga, Simone Formentin | 2023-09-06T17:08:57Z | http://arxiv.org/abs/2309.03167v1 | # Split-Boost Neural Networks
###### Abstract
The calibration and training of a neural network is a complex and time-consuming procedure that requires significant computational resources to achieve satisfactory results. Key obstacles are a large number of hyperparameters to select and the onset of overfitting in the face of a small amount of data. In this framework, we propose an innovative training strategy for feed-forward architectures - called _split-boost_ - that improves performance and automatically includes a regularizing behaviour without modeling it explicitly. Such a novel approach ultimately allows us to avoid explicitly modeling the regularization term, decreasing the total number of hyperparameters and speeding up the tuning phase. The proposed strategy is tested on a real-world (anonymized) dataset within a benchmark medical insurance design problem.
Artificial intelligence, deep learning, machine learning, neural networks, hyperparameter tuning, regularization
first are explicitly obtained. However, our work differs from traditional ELMs, where the parameters of the first layer are fixed a priori (and not updated) after initial randomization; see Huang et al. (2021). Additionally, in ELMs backpropagation is removed, whereas we still perform backpropagation to train the first layer's parameters. Likewise, a one-shot optimization (ridge regression) is performed, with the substantial difference lying in how the data are divided: the computation is parallelized on 2 different batches to retrieve the parameters of the second layer explicitly. This strategy is expected to reduce the occurrence of overfitting without resorting to a regularization term.
The paper is organized as follows. In Section 2, a review of the state of the art, the main challenges and limitations, and the benchmark network structure are presented. In Section 3, the mathematical formulation of the training strategy of the _split-boost_ neural network is presented: the first part of the section introduces the mathematical notation; Section 3.1 defines the _split-boost_ optimization problem for optimal weight computation; and Section 3.2 describes the details of the training procedure, the policy for best-epoch retrieval, and the learning-rate switching strategy. In Section 4, a numerical comparison between traditional feed-forward neural network training and split-boost training is presented. The paper ends with some concluding remarks.
## 2 Problem Statement
The goal of this work is to propose an alternative training strategy for a classic feed-forward neural network. Without modifying the network structure, we want to show that using the same amount of data in a different way allows us to improve training performance and achieve implicit regularization, i.e., obtain a regularization effect without modeling it explicitly. This simplifies the tuning procedure of the network by reducing the number of hyperparameters to be calibrated.
In this section, the basic structure of a feed-forward neural network is introduced without going into details, as this architecture is widely studied in the literature; see Rosenblatt (1958), Bishop (1995), Nair and Hinton (2010), Goodfellow et al. (2016). A feed-forward neural network is a deep learning model which, based on the interaction of several processing units (neurons) that introduce non-linearities on the inputs, can perform various tasks such as classification, regression, and time-series prediction. The parameters of this model are the weight matrices corresponding to the different layers of neurons in the architecture. These weights are updated over epochs (i.e., several passes through the dataset) via gradient descent (or one of its variants) with the help of the well-known backpropagation algorithm Rumelhart et al. (1986) for computing the gradient of the error with respect to the network weights.
Neural networks are models with a high descriptive capacity but are prone to overfitting, which can harm their generalization capability. Overfitting is one of the curses of statistical learning and often discourages users of artificial intelligence, since the lack of a sufficient amount of data makes architectures prone to its onset. Several works in the literature describe the problem (see Tieleman and Hinton (2009), Wan and et al. (2013), Keskar and et al. (2016), Zhang and et al. (2021)). For feed-forward neural networks, several strategies can counter overfitting, such as dropout, early stopping, regularization, data augmentation, and batch normalization (see, e.g., Srivastava and et al. (2014), Ioffe and Szegedy (2015), Hinton et al. (2015), Loshchilov and Hutter (2017)). The goal of these methodologies is to prevent the neural network from relying too heavily on (i.e., fitting the noise in) the training data. To do so, the idea is to reduce the number of network parameters (or limit their norm) or to artificially increase the available data. In this study, we take _regularization_ as the benchmark anti-overfitting methodology. This technique adds to the cost function to be minimized (or, equivalently, the reward to be maximized) a term that depends on the norm of the weights of the different layers of the neural network. During training, the network is thus forced to keep the norm of these weights limited to prevent an excessive increase of this cost term.
Figure 1 sketches the neural network architecture, in matrix notation, used in this work. The considered structure consists of 2 layers (hidden and output) and is designed to solve a regression problem. Each input is processed by each layer (and by each neuron per layer) through the non-linear activation function \(f_{1}\) (rectified linear unit, RELU). \(X\in\mathbb{R}^{N\times D}\) is the input data matrix, where \(N\) is the number of samples and \(D\) is the number of features. \(W_{1}\in\mathbb{R}^{D\times H}\) is the weight matrix of the first (hidden) layer, where \(H\) is the number of neurons. \(W_{1}^{b}\in\mathbb{R}^{H\times 1}\) is the bias vector of the first (hidden) layer. \(Z_{1}\in\mathbb{R}^{N\times H}\) are the pre-activations of the first layer. \(X_{1}\in\mathbb{R}^{N\times H}\) are the activations of the first layer. \(W_{2}\in\mathbb{R}^{H\times 1}\) is the weight matrix of the second (output) layer. \(W_{2}^{b}\in\mathbb{R}\) is the bias of the output layer. \(Y,\hat{Y}\in\mathbb{R}^{N\times 1}\) are, respectively, the matrix of targets and the matrix of predictions, and \(J\) is the loss function. The layer biases \(W_{1}^{b}\) and \(W_{2}^{b}\) follow the same training procedure as the corresponding layer weights; for the sake of compactness of notation, they are omitted.
The optimization problem that must be solved to find the optimal weights of a feed-forward neural network with 2 fully connected layers, following the traditional training procedure known in the literature, has the structure of the following unconstrained minimization problem:
\[\min_{W_{1},W_{2}}\frac{1}{2N_{t}}||Y_{t}-f_{1}(X_{t}\cdot W_{1})\cdot W_{2} ||_{2}^{2}+\frac{\lambda}{2}\sum_{i=1}^{2}||W_{i}||_{2}^{2} \tag{1}\]
Figure 1: Neural network architecture

where the subscript \(t\) refers to the training set and \(\lambda\) is a hyperparameter that controls the intensity of the regularization. In the next section, we present a novel alternative way of writing the same optimization problem for training a 2-layer fully connected neural network, one that allows us to best exploit the training information while introducing an implicit regularizing effect and simultaneously reducing the number of parameters to be tuned. Indeed, within this framework, the regularization factor is no longer needed.
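To make the baseline concrete, the following is a minimal PyTorch sketch of the regularized objective of Equation (1); the data tensors and hyperparameter values are placeholders rather than the experimental setup of Section 4.

```python
import torch

# Traditional 2-layer feed-forward training with explicit L2 regularization
# (Equation (1)); shapes follow the notation above.
N, D, H = 256, 6, 10
X, Y = torch.randn(N, D), torch.randn(N, 1)
W1 = torch.randn(D, H, requires_grad=True)
W2 = torch.randn(H, 1, requires_grad=True)
lam, gamma = 0.01, 0.1

for epoch in range(200):
    Y_hat = torch.relu(X @ W1) @ W2
    loss = ((Y - Y_hat) ** 2).sum() / (2 * N) \
           + lam / 2 * (W1.pow(2).sum() + W2.pow(2).sum())
    loss.backward()
    with torch.no_grad():
        for W in (W1, W2):
            W -= gamma * W.grad  # plain gradient descent step
            W.grad = None
```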
## 3 Mathematical Formulation of Split-Boost Networks
This section illustrates the alternative training strategy of the proposed Split-Boost Neural Network. The weight parameters are updated separately for the 2 layers (hidden and output) that characterize the network architecture. The idea is to formulate the training procedure as a bilevel optimization problem, whose outer optimization variables are the weights of the hidden layer \(W_{1}\) and whose inner optimization variables are the weights of the output layer \(W_{2}\). For a regression problem, the optimal values of the output layer parameters \(W_{2}\) can be obtained in closed form by solving two least squares problems, whose solutions act as the constraints of the outer optimization problem.
The algorithm involves first a _splitting step_, in which the training set is divided into two subsets. (A reasonable choice is to divide it equally; we do not claim this is the only possible choice, but it guarantees that both optimization problems see the same amount of data, avoiding an imbalance towards one of the two partitions.) Both subsets are then used to solve, separately, two least squares problems. Once the least squares problems are solved (i.e., the optimal values of \(W_{2}\) for the two subsets are found), the whole training set is used to update the values of \(W_{1}\).
From here comes _the boosting idea_: the optimal values obtained with the first subset are used to generate the predictions for the data belonging to the second subset, and vice versa. Our goal is to show that this methodology can effectively replace the regularization term of a traditional feed-forward neural network, surpassing its performance. This step represents one of the main differences compared to traditional network training: dividing the training set into 2 batches, used to calibrate the parameters of the second layer independently, allows us to extract the information content more effectively. Table 1 summarizes the symbols and their descriptions.
### Optimization problem, forward and backward propagations.
The heart of the training algorithm is the optimization problem shown in Equation (2). Since it cannot be solved in closed form, it is solved iteratively via gradient descent; the value of the \(W_{1}\) parameters is therefore updated as the epochs pass.
\[\min_{W_{1}} J =\frac{1}{2N_{b}}\|Y_{b}-f_{1}(X_{b}\cdot W_{1})\cdot W_{2a}^{*}(W_ {1})\|_{2}^{2}\] \[+\frac{1}{2N_{a}}\|Y_{a}-f_{1}(X_{a}\cdot W_{1})\cdot W_{2b}^{*} (W_{1})\|_{2}^{2} \tag{2a}\] \[W_{2a}^{*}(W_{1}) =\operatorname*{argmin}_{W_{2}}\frac{1}{2N_{a}}\|Y_{a}-f_{1}(X_{a }\cdot W_{1})\cdot W_{2}\|_{2}^{2}\] (2b) \[W_{2b}^{*}(W_{1}) =\operatorname*{argmin}_{W_{2}}\frac{1}{2N_{b}}\|Y_{b}-f_{1}(X_{b }\cdot W_{1})\cdot W_{2}\|_{2}^{2} \tag{2c}\]
In what follows, the subscripts \(k,j\) refer to the training subsets A and B. Since the equations for the two training subsets mirror each other, the notation follows \(\forall k,j\in\{a,b\},k\neq j\): the whole procedure is repeated for both admissible values of \(k\) and \(j\), i.e., the training is _split_. The subscript notation is summarized in Table 2.
The forward propagation is described in the following equations. We do not go into details, as this step is well known in the literature (see Rumelhart et al. (1986), Werbos (1988), Fahlman and Lebiere (1989), Bartlett et al. (2006), LeCun and et al. (1998)).
\[Z_{1k} =X_{k}\cdot W_{1} \tag{3a}\] \[X_{1k} =f_{1}(Z_{1k})\] (3b) \[\hat{Y}_{k} =X_{1k}\cdot W_{2j}\] (3c) \[J_{W_{2k}} =\frac{1}{2N_{k}}\|Y_{k}-\hat{Y}_{k}\|_{2}^{2} \tag{3d}\]
We can explicitly compute the optimal values of the output layer parameters \(W_{2}\) by solving a least squares problem:
\[W_{2k}^{*}=(X_{1k}^{T}\cdot X_{1k})^{-1}\cdot X_{1k}^{T}Y_{k}. \tag{4}\]
Solving Equation (4) explicitly, we derive the optimal values of the output layer parameters computed from the forward pass on the two separate training subsets. Notice that \(W_{2k}^{*}\) is a function of \(W_{1}\) through the dependency on \(X_{1k}\); for the sake of brevity, in the following derivation we write \(W_{2k}^{*}\) in place of \(W_{2k}^{*}(W_{1})\). For the sake of compactness, the Jacobian expression is derived in scalar form; nonetheless, it must be interpreted as the matrix of first-order partial derivatives of a vector-valued function with respect to its input variables. Since \(W_{2k}^{*}\) is an optimum computed explicitly, it holds that:
\[\frac{\partial J_{k}(W_{1},W_{2k})}{\partial W_{2k}}\bigg{|}_{W_{2k}=W_{2k}^{ *}}=\frac{\partial J_{k}(W_{1},W_{2k}^{*})}{\partial W_{2k}^{*}}=0 \tag{5}\]
Differentiating both sides with respect to \(W_{1}\) and using the chain rule according to the influence diagram in Figure 2:
\begin{table}
\begin{tabular}{|l|l|} \hline
**Symbol** & **Description** \\ \hline \(N\) & Number of samples \\ \(t,v,ts\) & Training, validation and test sets \\ \(a\) & Training set partition “A” \\ \(b\) & Training set partition “B” \\ \(D\) & Number of input features \\ \(X\in\mathbb{R}^{N\times D}\) & Input data \\ \(Y,\hat{Y}\in\mathbb{R}^{N\times 1}\) & Output data, prediction \\ \(Z\in\mathbb{R}^{N\times H}\) & Pre-activations \\ \(X_{1}\in\mathbb{R}^{N\times H}\) & Activations \\ \(H\) & Number of neurons per layer \\ \(W_{1}\in\mathbb{R}^{D\times H}\) & Weights of hidden layer \\ \(W_{2}\in\mathbb{R}^{H\times 1}\) & Weights of output layer \\ \(f_{1}:\mathbb{R}^{N\times H}\rightarrow\mathbb{R}^{N\times H}\) & Activation function \\ \(J\) & Cost function \\ \(\lambda\) & Regularization parameter \\ \(\gamma\) & Learning rate \\ \hline \end{tabular}
\end{table}
Table 1: Notation.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Subscript** & **Set A** & **Set B** \\ \hline \(k\) & \(a\) & \(b\) \\ \(j\) & \(b\) & \(a\) \\ \hline \end{tabular}
\end{table}
Table 2: Training sets notation.
\[\frac{d}{dW_{1}}\left(\frac{\partial J_{k}(W_{1},W_{2k}^{*})}{\partial W_{2k}^{*}}\right)=0\]
\[\frac{\partial}{\partial W_{1}}\left(\frac{\partial J_{k}(W_{1},W_{2k}^{*})}{\partial W_{2k}^{*}}\right)+\frac{\partial}{\partial W_{2k}^{*}}\left(\frac{\partial J_{k}(W_{1},W_{2k}^{*})}{\partial W_{2k}^{*}}\right)\cdot\frac{\partial W_{2k}^{*}}{\partial W_{1}}=0\]
\[\frac{\partial^{2}J_{k}(W_{1},W_{2k}^{*})}{\partial W_{1}\partial W_{2k}^{*}}+\frac{\partial^{2}J_{k}(W_{1},W_{2k}^{*})}{\partial W_{2k}^{*}\partial W_{2k}^{*T}}\cdot\frac{\partial W_{2k}^{*}}{\partial W_{1}}=0 \tag{6}\]
Solving for the Jacobian \(\frac{\partial W_{2k}^{*}}{\partial W_{1}}\):
\[\frac{\partial W_{2k}^{*}}{\partial W_{1}}=-\left(\frac{\partial^{2}J_{k}(W_{1},W_{2k}^{*})}{\partial W_{2k}^{*}\partial W_{2k}^{*T}}\right)^{-1}\cdot\frac{\partial^{2}J_{k}(W_{1},W_{2k}^{*})}{\partial W_{1}\partial W_{2k}^{*}} \tag{7}\]
Then, with a slight abuse of notation (we use the approximation symbol to treat the Jacobian as in the scalar case even though, as noted above, it must be interpreted as the matrix of first-order partial derivatives):
\[\mathbf{J}_{W_{1}}W_{2k}^{*}\approx\frac{\partial W_{2k}^{*}}{\partial W_{1}} \tag{8}\]
The Jacobian \(\mathbf{J}_{W_{1}}W_{2k}^{*}\) described in Equations (7) and (8) is a fundamental ingredient in the evaluation of the gradient of the cost function \(J\) used to update \(W_{1}\). To complete the training procedure, the backward propagation step must be performed. As with forward propagation, backward propagation also operates simultaneously on both subsets of the training set. This allows us to derive the expressions of the gradients necessary for the final computation of the gradient of \(J\) with respect to \(W_{1}\):
\[\nabla_{\hat{Y}_{k}}J_{k} =\frac{1}{N_{k}}(\hat{Y}_{k}-Y_{k}) \tag{9a}\] \[\nabla_{X_{1k}}J_{k} =\nabla_{\hat{Y}_{k}}J_{k}\cdot W_{2j}^{*T}\] (9b) \[\nabla_{W_{2j}^{*}}J_{k} =X_{1k}^{T}\cdot\nabla_{\hat{Y}_{k}}J_{k}\] (9c) \[\nabla_{Z_{1k}}J_{k} =\nabla_{X_{1k}}J_{k}\circ f_{1}^{\prime}(Z_{1k}) \tag{9d}\]
For a deeper understanding of how backpropagation works, refer to Hecht-Nielsen (1987), Bottou (1991), Glorot et al. (2011).
Merging the results of Equations (3), (4), and (9), we derive the expression of the gradient of the cost function (2a) with respect to the weights of the first layer, \(W_{1}\), described in the following equation:

\[\nabla_{W_{1}}J= X_{b}^{T}\nabla_{Z_{1b}}J_{b}+\mathbf{J}_{W_{1}}W_{2a}^{*T}\nabla_{W_{2a}^{*}}J_{b}+\] \[+X_{a}^{T}\nabla_{Z_{1a}}J_{a}+\mathbf{J}_{W_{1}}W_{2b}^{*T}\nabla_{W_{2b}^{*}}J_{a} \tag{10a}\] \[W_{1} =W_{1}-\gamma\nabla_{W_{1}}J\] (10b) \[W_{2} =\frac{W_{2a}^{*}+W_{2b}^{*}}{2} \tag{10c}\]
Thanks to Equations (10) and (4), we can update the network parameters between two consecutive epochs. Notice the difference: \(W_{1}\) is updated through gradient descent in Equation (10b), while \(W_{2}\) is obtained as the average of the optimal \(W_{2}^{*}\) computed explicitly on the two training subsets in Equation (10c). Note that, in general, this closed-form solution for \(W_{2}\) might not be practicable (e.g., in classification problems, the concatenation of non-linearities through the layers would prevent formulating a least squares problem that, as in this case, can be solved analytically and efficiently); in that case, an expression similar to (10a) must be derived and \(W_{2}\) updated by gradient descent. For this setting, however, the closed form improves the computational time. Equations (10) represent the heart of the algorithm, defining the _boosting_ step. The mixing of the two training subsets, embedded in the gradient expression and in the optimal values of \(W_{2}\), can improve training performance, achieving the same training cost in fewer epochs while avoiding overfitting without the aid of regularization terms. This is critical for reducing the number of hyperparameters. With the updated weights obtained in Equations (10), the prediction is computed as follows:
\[\hat{Y}=f_{1}(X\cdot W_{1})W_{2} \tag{11}\]
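A compact sketch of one split-boost epoch is given below. It assumes the closed-form solve of Equation (4) is kept inside the autograd graph, so that automatic differentiation accounts for the implicit dependence of \(W_{2k}^{*}\) on \(W_{1}\) (the Jacobian term of Equations (7)-(8)) without coding it by hand; data, sizes, and the learning rate are placeholders.

```python
import torch

# One split-boost step (Equations (2)-(4) and (10)-(11)).
def w2_star(X1, Y):
    # Equation (4): W2* = (X1^T X1)^{-1} X1^T Y, assuming X1^T X1 is
    # invertible; kept differentiable so autograd propagates to W1.
    return torch.linalg.solve(X1.T @ X1, X1.T @ Y)

def split_boost_step(W1, Xa, Ya, Xb, Yb, gamma=0.1):
    X1a, X1b = torch.relu(Xa @ W1), torch.relu(Xb @ W1)
    W2a, W2b = w2_star(X1a, Ya), w2_star(X1b, Yb)
    # Equation (2a): each half is predicted with the other half's W2*
    J = ((Yb - X1b @ W2a) ** 2).sum() / (2 * len(Xb)) \
        + ((Ya - X1a @ W2b) ** 2).sum() / (2 * len(Xa))
    J.backward()
    with torch.no_grad():
        W1 -= gamma * W1.grad              # Equation (10b)
        W1.grad = None
    return 0.5 * (W2a + W2b).detach()      # Equation (10c)
```

Averaging the two detached closed-form solutions mirrors Equation (10c) and yields the \(W_{2}\) used for the prediction of Equation (11).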
### Details on training procedure
An early stopping procedure is used. The stop condition for training is obtained by monitoring the validation cost, as follows:
\[|J_{v}(k)-J_{v}(k-1)|<\epsilon \tag{12}\]
where \(k\) is the epoch index and \(\epsilon\) is the stop threshold. If the variation of the validation cost between two consecutive epochs is less than \(\epsilon\), training stops and the optimal number of epochs is retrieved. After computing the optimal number of epochs, the network is re-trained using the union of the original training set and the validation set as the new training set. Performance is evaluated on the test set.
The split-boost strategy uses a learning rate that varies with the number of epochs. This choice avoids oscillations in the training cost (an excessively large learning rate leads to fluctuations and a consequent degradation of performance). In the example reported in the next section, the following switch condition is adopted:
\[\gamma=\begin{cases}\gamma^{*}&\text{if }J_{t}(k)-J_{t}(k-1)>0\\ \frac{\gamma^{*}}{10}&\text{otherwise}\end{cases} \tag{13}\]
where \(\gamma^{*}\) is the best learning rate, chosen after a sensitivity analysis carried out on the validation cost.
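As a minimal sketch, the two rules above can be implemented as follows (the threshold value is a placeholder):

```python
# Early stopping check (Eq. (12)) and learning-rate switch (Eq. (13)).
def should_stop(J_val_hist, eps=1e-5):
    return len(J_val_hist) > 1 and abs(J_val_hist[-1] - J_val_hist[-2]) < eps

def next_lr(J_train_hist, gamma_star=0.1):
    # Eq. (13): keep gamma* when the training cost increased, else gamma*/10
    if len(J_train_hist) > 1 and J_train_hist[-1] - J_train_hist[-2] > 0:
        return gamma_star
    return gamma_star / 10.0
```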
## 4 An Experimental Case Study
In this section, we show the comparison between a traditional feed-forward neural network and the split-boost neural network applied to a real-world regression problem. The code is written in Python 3.7.13. The Python library used to develop the neural networks is PyTorch Paszke and et al. (2019). Simulations run on an Intel Core i7-8750H with 6 cores, at 2.20 GHz (maximum single core frequency: 4.10 GHz), with 16 GB RAM.
Figure 2: Influence diagram describing the interaction between network layer parameters.
### Dataset description
The case study is the prediction of medical insurance charges for patients living in the U.S., given a set of clinical features. The data are open-source and provided in Lantz (2013). Table 3 summarizes the data features and target. The goal is to predict the medical insurance charge (\$) given a set of \(D=6\) features: age, sex, BMI, number of children, smoking condition, and region of residence.
The number of people in the study is \(N=1338\). The dataset is split into training, validation, and test sets according to the proportions shown in Figure 3: the test set is \(20\%\) of the total, the validation set \(16\%\), and the training set \(64\%\). In the case of the split-boost neural network, the training set is further divided into two halves of \(32\%\) each.
### Networks Hyperparameters
In this section, we show the hyperparameter tuning procedure. A sensitivity analysis of the learning rate \(\gamma\) and the regularization factor \(\lambda\) is performed. The split-boost neural network has the advantage of not having a regularization term, which is instead replaced by the boosting procedure. The sensitivity analysis with respect to \(\gamma\) is described in the upper plot of Figure 4, which shows the validation cost \(J_{v}=\frac{1}{2N_{v}}\|Y_{v}-\hat{Y}_{v}\|_{2}^{2}\) associated with different choices of \(\gamma\). The best choice for both networks is \(\gamma=0.1\). The lower plot of Figure 4 shows the sensitivity analysis with respect to the regularization hyperparameter \(\lambda\) (only for the feed-forward neural network), again evaluating the validation cost; the goal is to compare the best possible regularized feed-forward neural network with the split-boost network. The best choice is \(\lambda=0.01\). Table 4 summarizes the network parameters: the architecture is the same, with the same number of layers (2), neurons per layer (\(H=10\)), and activation function (RELU), while the split-boost strategy does not need any regularization parameter.
### Numerical Simulations
In this section, we give numerical insights into the training procedure of the split-boost network. Figure 5 shows the trend of the training cost over the training epochs for the two networks. The split-boost network is represented by the solid red line; the feed-forward network by the blue lines for varying values of \(\lambda\), with the solid blue line corresponding to the best \(\lambda\) identified (see Table 4). As \(\lambda\) increases, the training cost of the feed-forward network increases; for smaller values of \(\lambda\), it decreases. The split-boost training cost converges to values close to the feed-forward regime in a substantially lower number of epochs, \(E_{SB}^{*}=50\) against \(E_{FF}^{*}=200\). This allows us to conclude that the split-boost procedure can, in this case, extract the maximum information content from the training set in fewer epochs and with an implicit regularization effect.
Notice that the selection of the best number of epochs (implementing the early stopping strategy) is performed by monitoring the validation cost, as discussed in Section 3.2, for both networks. After retrieving the best epoch, both networks are re-trained using the union of the previous training and validation sets as the training set.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Hyperparameter** & **FF** & **Split-Boost** \\ \hline \(L\) (Layers) & 2 & 2 \\ \(H\) & 10 & 10 \\ \(f_{1}\) & RELU & RELU \\ \(\gamma\) & 0.1 & 0.1 \\ \(\lambda\) & 0.01 & \(-\) \\ \hline \end{tabular}
\end{table}
Table 4: Networks Hyperparameters.
Figure 4: Hyperparameter tuning: sensitivity analysis with respect to the learning rate \(\gamma\) (upper panel); sensitivity analysis with respect to the regularization parameter \(\lambda\) (lower plot).
Figure 5: Training cost: comparison between _split-boost_ and _feed-forward_ network for different values of the regularization hyper-parameter \(\lambda\). Solid line is the best configuration of \(\lambda\).
\begin{table}
\begin{tabular}{|l|l|} \hline
**Features** & **Description** \\ \hline Age & Age of primary beneficiary \\ Sex & Insurance contractor gender \\ BMI & Body mass index \\ Children & Number of children \\ Smoker & Smoker condition \\ Region & Residential area in the U.S. \\ \hline
**Target** & **Description** \\ \hline Charge & Medical insurance bill [\$] \\ \hline \end{tabular}
\end{table}
Table 3: Medical Insurance Dataset: features and target.
Figure 6 shows the computational training time required by the two strategies over 200 epochs. The split-boost strategy has an average training time of \(T_{SB}=1.679\,s\) versus \(T_{FF}=1.179\,s\) for the feed-forward network: on average, each split-boost epoch takes 42% more time. However, the training convergence shown in Figure 5, where the split-boost training cost reaches its regime in \(E^{*}_{SB}=50\) epochs against the \(E^{*}_{FF}=200\) of the feed-forward network, leads to the average computational requirements of:
\[E^{*}_{SB}\cdot T_{SB}=83.95\,s\leq E^{*}_{FF}\cdot T_{FF}=235.8\,s. \tag{14}\]
Figure 7 shows the re-training procedure of the split-boost neural network. On the left, the training and test costs are shown; compared with Figure 5, the training cost reaches its regime at a higher number of epochs because the training set is larger (it also includes the previous validation set). The middle plot shows the regression prediction versus the target values for each person in the training set; the right plot shows the same for the test data. The middle and right panels show that the neural network successfully maps the non-linear relationship between the features collected in Table 3 and the regression target. Some patients escape this mapping: despite features similar to those of the remaining patients, the experts attributed a higher medical insurance cost to them.
To evaluate the performance of the split-boost strategy against the best regularized feed-forward neural network (\(\lambda=0.01\), obtained from the sensitivity analysis on the regularization term in Fig. 4), 50 Monte Carlo randomizations of the dataset were performed, extracting 50 different combinations of training, validation, and test sets. The results obtained on the test set are collected in the boxplots of Figure 8. The test cost \(J_{ts}=\frac{1}{2N_{ts}}\|Y_{ts}-\hat{Y}_{ts}\|_{2}^{2}\) obtained over the Monte Carlo randomizations is statistically lower for the split-boost strategy, which achieves a lower test cost in 72% of the cases. This shows, with statistical evidence, that the split-boost neural network also outperforms the feed-forward neural network in terms of prediction accuracy.
## 5 Conclusions
In this article, we have shown an alternative training approach for feed-forward neural networks. We have called this strategy "split-boost" to recall the idea that dividing (_split_) the dataset and combining the subsets _might_ lead to an improvement (_boost_) in performance.
In the considered real-world case study, the split-boost approach turns out to (i) achieve higher predictive performance than traditional training and (ii) be computationally advantageous, since the training phase converges within a smaller number of epochs, even though the computational time per epoch is greater. The proposed strategy also implicitly counteracts overfitting.
Future activities will focus on an extensive validation of the proposed training strategy, as well as on its generalization and extension to multi-layer networks.
|
2309.06779 | ZKROWNN: Zero Knowledge Right of Ownership for Neural Networks | Training contemporary AI models requires investment in procuring learning
data and computing resources, making the models intellectual property of the
owners. Popular model watermarking solutions rely on key input triggers for
detection; the keys have to be kept private to prevent discovery, forging, and
removal of the hidden signatures. We present ZKROWNN, the first automated
end-to-end framework utilizing Zero-Knowledge Proofs (ZKP) that enable an
entity to validate their ownership of a model, while preserving the privacy of
the watermarks. ZKROWNN permits a third party client to verify model ownership
in less than a second, requiring as little as a few KBs of communication. | Nojan Sheybani, Zahra Ghodsi, Ritvik Kapila, Farinaz Koushanfar | 2023-09-13T08:06:13Z | http://arxiv.org/abs/2309.06779v1 | # ZKROWNN: Zero Knowledge Right of Ownership
###### Abstract
Training contemporary AI models requires investment in procuring learning data and computing resources, making the models intellectual property of the owners. Popular model watermarking solutions rely on key input triggers for detection; the keys have to be kept private to prevent discovery, forging, and removal of the hidden signatures. We present ZKROWNN, the first automated end-to-end framework utilizing Zero-Knowledge Proofs (ZKP) that enable an entity to validate their ownership of a model, while preserving the privacy of the watermarks. ZKROWNN permits a third party client to verify model ownership in less than a second, requiring as little as a few KBs of communication.
## I Introduction
Deep Neural Networks (DNN) have emerged as the de facto solution for major learning applications such as image and face recognition [1, 2, 3] and natural language processing [4]. Training state-of-the-art DNNs requires access to large amounts of data, as well as massive computational resources; for example, recent language models are trained on terabytes of data, have billions of parameters, and require hundreds of GPUs and algorithmic expertise for training [5, 6].
Given the amount of data and computing resources spent on training a model, vendors who give access to their trained models remotely or release them publicly have an interest in protecting the intellectual property rights of their models against copyright infringements. Towards this goal, prior work has proposed methods for watermarking deep learning models [7, 8, 9]. Watermarks are designed to target the decision boundary of the model so as to resist various removal attempts, such as fine-tuning and pruning, while retaining high accuracy. Extracting the watermark signature involves providing a special _key input_ that triggers the watermark, which can be detected at the output, e.g., by a threshold function. However, once the key pertaining to the watermark is revealed, the embedded signature can be easily discovered and removed. This property creates an impediment to litigating an ownership dispute, which would require providing proofs of ownership during the discovery process to potentially multiple parties.
In this work, we propose the use of Zero-Knowledge Proofs (ZKP) to facilitate legitimate proof of DNN ownership without revealing any other information. ZKPs are a set of cryptographic protocols involving a prover (\(\mathcal{P}\)) and a verifier (\(\mathcal{V}\)): the prover seeks to prove a mathematical assertion on a private input to the verifier without disclosing any other information about the input. While the use of ZKPs in legal settings has been proposed in prior work for other applications [10, 11], to the best of our knowledge, our work is the first to propose a concrete framework for DNN proof of ownership. Our work demonstrates how ZKPs can be used by expert witnesses and other parties to verify ownership claims without revealing details of the watermarking procedure that could jeopardize the intellectual property rights of the model owner. The use of ZKPs in legal settings should satisfy two main requirements. First, executing the protocol should be simple, and it is beneficial that the interaction be limited to \(\mathcal{P}\) sending a single message to \(\mathcal{V}\) for verification. Second, the proof has to be _publicly verifiable_, i.e., \(\mathcal{P}\) performs the proof generation once, and the result can be verified by any party without having to convince every entity in a separate process [11].
Non-interactive Zero-Knowledge Proof systems (NIZK) [12] support these requirements. NIZKs include a one-time setup, and verifying the proof consists of a single message sent from the prover to the verifier. This is in contrast to interactive proof protocols, where verification is performed over multiple rounds and has to be repeated with every new verifier. Our work builds on zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) to implement watermark extraction, including the feed-forward computations of the DNN and sigmoid thresholding for trigger detection. zkSNARK setup and proof generation are circuit-dependent; if the circuit changed often, this technique would be computationally intensive. Fortunately, our proposed work only handles a constant circuit representing the pertinent DNN. Our concrete implementation demonstrates the applicability of our framework: we show that it is able to prove ownership with as little as \(11s\) and \(1ms\) of computation time on the prover and verifier side, respectively, and as little as \(35KB\) of communication for image recognition benchmarks. Our setting requires setup and proof generation only once, as the circuit does not change, so the prover and setup computation time is amortized over the overall usage lifetime.
In summary, our contributions are:
* We propose ZKROWNN, the first end-to-end watermark extraction and verification framework for deep neural networks based on zero-knowledge proofs. ZKROWNN enables a model owner to prove their right of ownership without revealing details of the watermarking technique.
* ZKROWNN incorporates non-interactive proofs for simplified proof generation and verification. More specifically,
ZKROWNN's proofs are _publicly verifiable_, i.e., the proof is generated once and can be verified by third parties without further interaction.
* We provide a concrete implementation of ZKROWNN with extensive evaluation on various benchmarks. ZKROWNN requires small communication (less than 16MB during setup and only 128B to transfer the proof for our largest example), and enables sub-second proof verification.
## II Background
### _Neural Network Watermarking_
Watermarking techniques have been extended to DNNs to protect the intellectual property of model owners. Watermarking can be considered analogous to introducing a backdoor in a neural network, especially in the case of black-box dynamic watermarking [13]. A backdoor is embedded in the model by hand-crafting key input triggers that generate the desired watermark (WM). The model continues to perform at the same accuracy with minimal overhead, except when these selected key inputs are used to verify the watermark.
Watermarks can be embedded in the weights [14], in the activations [7], or near the decision boundary [15] of neural networks. For image processing networks, [16] proposes spatially invisible watermarking mechanisms: a unified, invisible signature is learned and embedded into all outputs and can be extracted afterwards to verify ownership. Additional methods include the introduction of a statistical bias in the weights while training a neural network [8]. An embedding regularizer, which uses a binary cross-entropy loss, is added to the standard cost function of one of the convolution layers. The watermark can be extracted by projecting \(w\) using a secret key \(X\), where the \(j^{th}\) bit of the watermark is extracted as \(b_{j}=s(\Sigma_{i}X_{ji}w_{i})\). Here, \(s(x)\) is a step function, \(w_{i}\) are the weights of the network, and \(X\) is the secret key required to embed and detect the watermarks. This methodology has advantages over the usual procedure of embedding the signature in the weights of an already-trained network, as it does not degrade the network's performance after training and embeds the signature during training itself. However, embedding the signature in the weights of the DNN, even during training, poses significant challenges to WM robustness and makes it prone to watermark overwriting and network morphism.
In this work, we consider the watermarking method presented in DeepSigns [7] which embeds the watermark into the probability density function (PDF) of activation maps across various layers of the network. DeepSigns takes the trained model along with an owner-defined watermark signature, and produces a watermarked network.
DeepSigns watermark embedding is a two-step process, beginning with the secure generation of owner-specific WM keys. In the next step, the owner's DNN is fine-tuned and the generated WM signature is embedded into the pdf of the activation maps of selected layers. The encoded watermark signatures are independently and identically distributed (iid) arbitrary binary strings. For the intermediate hidden layers, the data distribution is assumed to be a Gaussian Mixture Model (GMM). One or more random indices are selected from 1 to \(S\), where each index corresponds to a Gaussian in the mixture model and \(S\) is the number of classes in the final application. The WM signature is then embedded into the means of the selected Gaussian distributions.
The WM keys contain three parameters: the chosen Gaussian classes \(s\), the input triggers (a subset, 1%, of the input training data, \(X^{key}\)), and the projection matrix \(A\). The projection matrix is used to project the mean values of the selected Gaussian distributions into binary space. To minimize the distance between these mean values (the centers of the Gaussian distributions) and the owner's WM signature, an additional loss term is added to the cost function during fine-tuning.
The watermark extraction phase consists of three steps. It begins with querying the underlying DNN with the owner-specific watermark keys (\(X^{key}\)) generated during embedding. In the next step, the Gaussian centers are approximated by taking the statistical mean of the obtained activation maps. These Gaussian centers and the projection matrix \(A\), obtained from the owner's WM keys, are used to estimate the corresponding watermark signature. Finally, the bit error rate (BER) between the extracted WM signature and the owner's actual signature is computed. If the BER is zero for any layer, DeepSigns ascertains that the deployed DNN is the IP of the model owner in question. This WM methodology is robust to watermark overwriting, model fine-tuning, and model pruning.
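As a rough illustration (outside any ZK circuit), the extraction steps above can be sketched as follows; array shapes are assumptions, and the sigmoid-plus-threshold binarization matches the pipeline used in our circuit in Section III.

```python
import numpy as np

# Plaintext sketch of DeepSigns-style extraction: average the activation
# maps triggered by X_key, project with A, binarize, and compute the BER.
def extract_wm(acts_on_key, A, wm_owner):
    mu = acts_on_key.mean(axis=0)           # approximate Gaussian centers
    g = 1.0 / (1.0 + np.exp(-(mu @ A)))     # project into [0, 1]
    wm_hat = (g > 0.5).astype(int)          # hard threshold at 0.5
    ber = float(np.mean(wm_hat != wm_owner))
    return wm_hat, ber
```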
### _Zero-Knowledge Proofs_
ZKPs are a cryptographic primitive that allows a prover \(\mathcal{P}\) to convince a verifier \(\mathcal{V}\) that an evaluation of computation \(\mathcal{C}\) on \(\mathcal{P}\)'s private input \(w\), also called the witness, is correct without revealing anything about \(w\). In a standard ZKP scheme, \(\mathcal{P}\) convinces \(\mathcal{V}\) that \(w\) is a valid input such that \(y=\mathcal{C}(x,w)\), in which \(x\) and \(y\) are public inputs and outputs, respectively. When the communication of this proof is done in a single message, the ZK scheme is referred to as non-interactive. ZKPs can also be generated interactively, in which the proof is computed through several rounds of communication between \(\mathcal{P}\) and \(\mathcal{V}\), but this requires \(\mathcal{V}\) to be online for the duration of proof generation, which is undesirable when there are many verifiers. Interactive ZKP schemes are limited, as they only support the _designated verifier_ model, meaning that a new proof must be generated for each verifier for one circuit \(C\). In contemporary ZK constructions, \(\mathcal{C}\) is expressed as an efficient generalization of an arithmetic circuit, such as Rank 1 Constraint Systems (R1CS) or Quadratic Arithmetic Programs (QAPs), which have been popularized due to their ease of use [17, 18].
Zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) have been utilized for a myriad of tasks, including the real-world case of Zcash's privacy-preserving cryptocurrency [19]. zkSNARKs have emerged as a popular ZKP method, acting as the technical foundation for many ZK works, as this construction provides fast and computationally inexpensive proof verification [20]. zkSNARKs also benefit from being _publicly verifiable_, meaning that any verifier with the proper verification key can verify a zkSNARK. The main drawbacks of zkSNARKs are the reliance on a trusted setup for every new circuit \(\mathcal{C}\) and the intensive computation necessary for proof generation. If \(\mathcal{C}\) does not change often, or at all, these computational drawbacks can be amortized.
In this work, we use the Groth16 zkSNARK protocol, which is based on QAP representations of computation [21]. The high-level approach for proof generation with Groth16 (and other NIZKs in general) can be represented with the three following algorithms:
* \((\mathcal{VK},\mathcal{PK})\leftarrow\) Setup(\(\mathcal{C}\)): A trusted third party or \(\mathcal{V}\) runs a setup procedure to generate a prover key \(\mathcal{PK}\) and verifier key \(\mathcal{VK}\). These keys are used for proof generation and verification, respectively. This setup must be repeated each time \(\mathcal{C}\) changes.
* \(\pi\leftarrow\) Prove(\(\mathcal{PK}\), \(\mathcal{C}\), \(x\), \(y\), \(w\)): \(\mathcal{P}\) generates proof \(\pi\) to convince \(\mathcal{V}\) that \(w\) is a valid witness.
* \(1/0\leftarrow\) Verify(\(\mathcal{VK}\), \(\mathcal{C}\), \(x\), \(y\), \(\pi\)): \(\mathcal{V}\) accepts or rejects proof \(\pi\). Due to the soundness property of zkSNARKs, \(\mathcal{V}\) cannot be convinced that \(w\) is a valid witness by a cheating \(\mathcal{P}\).
The Groth16 protocol allows us to achieve small proofs and fast verification, independent of the circuit size [21], at the cost of high prover and setup complexity. Due to the static nature of \(\mathcal{C}\) in our zkROWNN, proof generation and setup only happen once, so runtimes are amortized and therefore negligible.
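For intuition, the three algorithms map onto a small API. The Python-style stubs below are purely illustrative (actual implementations live in C++ libraries such as libsnark); none of these function names belong to a real package.

```python
def setup(circuit):
    """One-time trusted setup: returns (proving_key, verifying_key).
    Must be re-run whenever the circuit changes."""
    ...

def prove(proving_key, circuit, x, y, w):
    """Prover side: produce proof pi that the private witness w
    satisfies y = C(x, w), without revealing w."""
    ...

def verify(verifying_key, circuit, x, y, pi):
    """Verifier side: accept (True) or reject (False) the proof.
    Cheap and independent of circuit size under Groth16."""
    ...

# Typical flow: setup and proof generation happen once; verification
# can then be repeated by any number of verifiers.
# pk, vk = setup(C)
# pi = prove(pk, C, x, y, w)
# assert verify(vk, C, x, y, pi)
```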
## III Methodology
### _ZKROWNN Setting and Threat Model_
In this work, we assume a setting where a model owner holds a watermarked model \(\mathbf{M}\) with private trigger key \(\mathcal{K}\) and watermark parameters \(\mathcal{W}\). The model owner claims that a _second_ model \(\mathbf{M}^{\prime}\) is built based on watermarked model \(\mathbf{M}\). The model owner takes the role of a prover \(\mathcal{P}\) and generates a proof \(\pi\) attesting that \(\mathbf{M}^{\prime}\) produces the watermark \(\mathcal{W}\) when triggered with \(\mathcal{K}\). In our threat model, prover \(\mathcal{P}\) is semi-honest, meaning that \(\mathcal{P}\) will not deviate from the protocol. The ownership proof \(\pi\) can be verified by any third party \(\mathcal{V}\), requiring only a verification key. The proof generation and verification steps in ZKROWNN are illustrated in Figure 1. ZKROWNN utilizes zkSNARKs to enable proof of ownership without revealing the trigger key \(\mathcal{K}\) and watermark \(\mathcal{W}\). We extend the watermark embedding and extraction technique in DeepSigns [7]. As detailed in Section II-A, the watermark is embedded in a specific layer, which is only known to the original model owner. This watermark is only extractable when the model takes in a specific trigger key as an input. As discussed in [7], the watermarks are embedded in and extracted from the probability density function (pdf) of the activation maps in the model. With all of this information in hand, we outline ZKROWNN's zero-knowledge watermark extraction in Algorithm 1.
```
Public Values:  Model M, target BER θ
Private Input:  trigger key X^key, B-bit watermark wm,
                projection matrix A^{M×N} (M = size of feature space,
                N = size of wm), embedded layer l_wm
Circuit:
    check = 1
    zkFeedForward(M) on input X^key until layer l_wm
    extract activation maps a at layer l_wm
    μ^{1×M} = zkAverage(a)
    G^{1×N} = zkSigmoid(μ^{1×M} × A^{M×N})
    ŵm = zkHard_Thresholding(G^{1×N}, 0.5)
    valid_BER = zkBER(wm, ŵm, θ)
    return check ∧ valid_BER
```
**Algorithm 1** ZKROWNN Watermark Extraction
To support the circuit presented in Algorithm 1, we provide separate, smaller zkSNARK circuits for each computation, such as sigmoid and thresholding. For the feed-forward operation, we support Dense, ReLU, and Convolution3d layers, as we assume that the watermarks are embedded in one of the initial layers of the model. In addition, we provide end-to-end zero-knowledge watermark extraction circuits for a Multilayer Perceptron (MLP) and a Convolutional Neural Network (CNN). In the next section, we provide the implementation details of each operation that ZKROWNN supports.
### _ZKROWNN Implementation_
To implement ZKROWNN, we use xJsnark [22], a high-level framework that enables zkSNARK circuit development. The generated circuits are compiled in libsnark, a C++ framework for zkSNARK generation [23]. As stated before, proof generation and verification are done with the Groth16 protocol using the BN128 elliptic curve, which provides 128 bits of security. While xJsnark and libsnark have open-source gadgets/arithmetic circuits available for general computation, none were relevant to the computation that ZKROWNN requires. Therefore, all circuits were designed specifically for watermark extraction; however, they can be generalized and extended for other relevant applications, such as DNN inference.
Fig. 1: High-level description of ZKROWNN
zkSNARK arithmetic circuits do not natively support floating point computation without requiring conversion to binary circuits [24]. This process incurs large overhead for the prover, which is already the computational bottleneck in zkSNARK schemes. To avoid floating point computation, we scale our inputs by several orders of magnitude and truncate the result. This does not affect the performance, as the floating point conversions are all done in a preprocessing step before the proof generation and all functions are modified accordingly before circuit generation. For readability, we still use floating point values in our descriptions of Algorithm 1 and our implementations. We now present the implementation details of the functions that ZKROWNN supports. It is important to note that, although these operations are used collectively for end-to-end watermark extraction, each circuit can also be used in a standalone zkSNARK due to our modular design approach.
#### III-B1 Matrix Multiplication
We implement a zero-knowledge matrix multiplication circuit that efficiently computes \(A^{M\times N}\times B^{N\times L}=C^{M\times L}\), where \(C\) is a private matrix and \(A\) or \(B\) can be public or private, depending on the application. This circuit can be used for the dense/fully connected layers in the feed-forward step of watermark extraction, or for standard matrix multiplication, both of which are necessary in our end-to-end implementation.
While optimizations for matrix multiplication in zero knowledge have been proposed before, such as Freivalds' algorithm [25], the most notable optimizations require interactivity between prover and verifier. As the ZKROWNN use case greatly benefits from non-interactivity, we do not consider these optimizations.
#### III-B2 Convolution
We implement the 3D convolution operation by flattening the input and kernel into 1D vectors. The input is grouped and structured into a vector based on the size of the kernel and the stride value. Afterwards, we perform a 1D convolution operation between the processed input vector and the flattened kernel. We develop an arithmetic circuit for zero-knowledge 1D convolution, which consists of inner product and shift operations.
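In plain Python, the flattening trick amounts to an im2col-style reduction: each output element is one inner product between a flattened patch and the flattened kernel. The sketch below only illustrates the arithmetic that the circuit constrains; it is not the circuit itself.

```python
import numpy as np

def conv3d_as_inner_products(x, kernel, stride=1):
    """3D convolution computed as a sequence of inner products between
    flattened input patches and the flattened kernel."""
    kd, kh, kw = kernel.shape
    D, H, W = x.shape
    k_flat = kernel.ravel()
    out_shape = ((D - kd) // stride + 1,
                 (H - kh) // stride + 1,
                 (W - kw) // stride + 1)
    out = np.empty(out_shape)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            for k in range(out_shape[2]):
                patch = x[i*stride:i*stride+kd,
                          j*stride:j*stride+kh,
                          k*stride:k*stride+kw]
                out[i, j, k] = patch.ravel() @ k_flat  # one inner product
    return out

x = np.arange(4**3, dtype=float).reshape(4, 4, 4)
print(conv3d_as_inner_products(x, np.ones((2, 2, 2))).shape)  # (3, 3, 3)
```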
#### III-B3 Sigmoid
The standard sigmoid function is defined as \(S(x)=1/(1+e^{-x})\), which is a very difficult computation to perform in zero-knowledge. To work around this, we use the Chebyshev polynomial approximation of the sigmoid function presented in [26]:
\[S(x)=0.5+0.2159198015x-0.0082176259x^{3}+0.0001825597x^{5}-0.0000018848x^{7}+0.0000000072x^{9}\]
We reiterate that floating point computation is converted to integer arithmetic by scaling floating point numbers up by several orders of magnitude and truncating.
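The snippet below checks this polynomial in scaled-integer arithmetic, mimicking the circuit's float-free evaluation. The scaling factor is an assumption for illustration; the paper does not state the exact magnitude used.

```python
import math

# Degree-9 polynomial approximation of the sigmoid from [26]
COEFFS = {0: 0.5, 1: 0.2159198015, 3: -0.0082176259,
          5: 0.0001825597, 7: -0.0000018848, 9: 0.0000000072}
SCALE = 10**10  # assumed scaling factor (not specified in the paper)

def sigmoid_poly_fixed(x):
    """Evaluate the polynomial with integers only, truncating after each
    term so exactly one factor of SCALE remains in the accumulator."""
    xi = int(round(x * SCALE))  # preprocessing: lift the input to an integer
    acc = 0
    for p, c in COEFFS.items():
        ci = int(round(c * SCALE))
        acc += ci * xi**p // SCALE**p  # integer multiply, then truncate
    return acc / SCALE  # rescale only for display

for x in (-2.0, 0.0, 1.5):
    print(f"x={x:5.1f}  approx={sigmoid_poly_fixed(x):.4f}  "
          f"true={1/(1+math.exp(-x)):.4f}")
```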
#### III-B4 ReLU and Hard Thresholding
We implement ReLU in a zero-knowledge circuit that computes \(f(x)=\max(0,x)\). Due to the similarity between ReLU and hard thresholding, a similar circuit is used for the two operations. To implement hard thresholding, we take in a threshold \(\beta\) as an input and build a circuit that computes the following piecewise function:
\[f(x)=\begin{cases}1&\text{if }x\geq\beta\\ 0&\text{if }x<\beta\end{cases}\]
Hard thresholding is performed on the output of the sigmoid function, resulting in a vector of ones and zeroes that can be concatenated to generate the extracted watermark.
#### III-B5 Bit Error Rate
The bit error rate is defined as the percentage of bits that differ between the private watermark \(wm\) and the ZKROWNN-extracted watermark \(\hat{wm}\). This is the last computation performed in our end-to-end implementations. To compute this, we perform a bit-by-bit comparison of \(wm\) and \(\hat{wm}\) as a secondary function implemented in the hard thresholding circuit. If the bit error rate is below some predefined threshold \(\theta\), the circuit outputs a 1. If not, the circuit outputs a 0.
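The thresholding-plus-BER logic reduces to a few comparisons; a minimal NumPy sketch of the arithmetic (outside any circuit) is:

```python
import numpy as np

def hard_threshold(g, beta=0.5):
    """Elementwise step function: 1 if g >= beta, else 0."""
    return (np.asarray(g) >= beta).astype(int)

def ber_check(wm, wm_hat, theta):
    """Return (BER, flag): flag is 1 iff the fraction of differing
    bits is at most theta."""
    ber = float(np.mean(np.asarray(wm) != np.asarray(wm_hat)))
    return ber, int(ber <= theta)

wm = np.array([1, 0, 1, 1, 0, 0, 1, 0])
wm_hat = hard_threshold([0.9, 0.2, 0.7, 0.8, 0.4, 0.1, 0.6, 0.3])
print(ber_check(wm, wm_hat, theta=0.05))  # (0.0, 1)
```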
#### III-B6 End-to-end Examples
We include implementations of ZKROWNN applied to a multilayer perceptron (MLP) and a convolutional neural network (CNN), assuming that the watermark is embedded in the first hidden layer for both examples. ZKROWNN still works when the watermark is embedded in deeper layers, at the cost of higher prover complexity.
## IV Evaluation
**Experimental Setup.** ZKROWNN is implemented with a libsnark [23] backend using the Groth16 zkSNARK protocol. All zkSNARK circuits are built using xJsnark [22]. We run all experiments on a desktop with 128GB RAM and an AMD Ryzen 3990X CPU.
**ZKROWNN Evaluation Metrics.** We present the following metrics to evaluate ZKROWNN and the individual circuits:
* _Number of Constraints:_ The number of constraints is used as an indicator of how large the zkSNARK circuit is. As the number of constraints increases, runtimes also increase.
* _Setup Runtime:_ The setup process is used to generate the prover key \(\mathcal{PK}\) and verifier key \(\mathcal{VK}\) by a trusted third party. Trusted setup is a core idea in zkSNARKs. In ZKROWNN's setting, this process is only run once, so its runtime can be amortized.
* _Prover Runtime:_ This is defined as the amount of time for prover \(\mathcal{P}\) to generate a zkSNARK proof. As zkSNARKs are designed to reduce verifier complexity, this often comes at the cost of increased prover complexity. Much like the setup, this process is only run once in the ZKROWNN setting, so its runtime can be amortized.
* _Proof Size:_ Due to the succinctness property of zkSNARKs, the proofs generated in ZKROWNN are very small, requiring very little communication between the prover and all verifiers.
* _Prover Key Size:_ The prover key size grows with respect to the size of the witness data in our zkSNARK circuit, so it can grow quite large in our setting. This requires communication from the trusted setup provider to the prover, but, again, this process is only done once.
* _Verifier Key Size:_ The verifier key size grows with respect to the size of the public inputs in our zkSNARK circuit. This requires communication between the trusted setup provider and each verifier.
* _Verifier Runtime:_ zkSNARKs aim to minimize verifier complexity, so verifier runtime is often in the millisecond range. This greatly benefits ZKROWNN, as our goal is to provide verifiers with a simple scheme to validate claims of model ownership.
### _ZKROWNN Performance_
We evaluate ZKROWNN on two DNN benchmarks: a multilayer perceptron (MLP) on the MNIST dataset and a convolutional neural network (CNN) on the CIFAR-10 dataset. These benchmarks are extended from DeepSigns [7]. We assume that the model owner embedded a 32-bit watermark in the first hidden layer; however, our framework can handle extracting the watermark from any layer. We also benchmark the specific circuits that make up ZKROWNN's automated end-to-end framework. ZKROWNN does not result in any loss of model accuracy, as our scheme does not modify the weights of the model at all. ZKROWNN achieves the same BER and detection success from extracted watermarks as DeepSigns, while protecting the model owner's trigger keys and preserving the privacy of the watermark.
Table I highlights the end-to-end performance of ZKROWNN on the benchmark architectures described in Table II. The DNN benchmarks use ReLU as the activation function; however, we provide the capability of using sigmoid, at the cost of potentially lower model accuracy. Alongside end-to-end performance, we also benchmark the performance of our individual zkSNARK circuits.
When observing the results of ZKROWNN, we are able to achieve low communication and runtime for the verifier, even with large circuits. The corresponding results are bolded in Table I. Although we witness relatively high prover/setup runtimes, we reiterate that proof generation and setup only happen once per circuit. Our setting benefits from this, as the zkSNARK circuit does not change, thus amortizing the proof generation and setup runtimes and communication.
Our largest circuit, the MLP circuit, only results in a \(127B\) proof. This only requires \(29.4ms\) to verify, and any third party with the verifier key can verify this. The verifier key requires \(16MB\) of communication from the trusted setup provider to each verifier, due to taking in the model's weights as a public input. Due to memory constraints, we precompute a small portion of the first layer matrix multiplication in the MLP, but ensure that there is no risk of information leakage, as the precomputed values still act as private inputs. The CNN circuit, requiring only a quarter of the constraints as the MLP circuit, has much more desirable setup, prover, and verifier performance. Prover and setup runtimes and proving key sizes are cut at least in half. We are able to maintain the same proof size, with a drastically reduced verifier key, due to the reduction of public input size. This results in a \(1ms\) verification time, which is highly attractive for verifiers.
When looking at the results as a whole, we see that the proof size stays constant regardless of the circuit size, which is beneficial in our use case. With our largest individual circuit, matrix multiplication with \(128\times 128\) inputs, we only need \(0.6ms\) to verify computational correctness. As mentioned before, the verifier key grows with the public input, which has a direct effect on the verification runtime. Some circuits, such as sigmoid and averaging, required extra public inputs to compute correctness, thus leading to somewhat higher \(\mathcal{VK}\) sizes. To reduce runtimes and constraints in our end-to-end examples, which are combinations of the individual circuits, we make specific optimizations such as bitwidth scaling between operations and combining operations within loops.
Overall, we show the efficiency of ZKROWNN in developing proofs of model ownership alongside fast verification by any third-party entity. We also present the proof generation and verification performance for each circuit that is used to implement the MLP and CNN circuits. The individual circuits achieve fast and communication-light verification. We use the individual circuits to implement end-to-end watermark extraction and verification in ZKROWNN; however, these circuits can be combined to perform a myriad of tasks, including verifiable machine learning inference.
## V Conclusion
This paper presented ZKROWNN, the first end-to-end watermark extraction and verification framework for DNNs based on zero-knowledge proofs. ZKROWNN utilizes zkSNARKs to enable a model owner to prove their right of ownership of a watermarked model while preserving the privacy of watermark-sensitive data. We show ZKROWNN's end-to-end efficiency over multiple popular DNN benchmarks and highlight the fact that our scheme is _publicly verifiable_. Therefore, any third party can check the validity of the generated proofs in ZKROWNN. This work presents a paradigm shift from previous watermarking works by providing an end-to-end zero-knowledge approach to extracting watermarks, therefore allowing model owners to prove ownership of another model without putting their original embedded watermarks at risk.
|
2309.14816 | A Comparative Study of Population-Graph Construction Methods and Graph
Neural Networks for Brain Age Regression | The difference between the chronological and biological brain age of a
subject can be an important biomarker for neurodegenerative diseases, thus
brain age estimation can be crucial in clinical settings. One way to
incorporate multimodal information into this estimation is through population
graphs, which combine various types of imaging data and capture the
associations among individuals within a population. In medical imaging,
population graphs have demonstrated promising results, mostly for
classification tasks. In most cases, the graph structure is pre-defined and
remains static during training. However, extracting population graphs is a
non-trivial task and can significantly impact the performance of Graph Neural
Networks (GNNs), which are sensitive to the graph structure. In this work, we
highlight the importance of a meaningful graph construction and experiment with
different population-graph construction methods and their effect on GNN
performance on brain age estimation. We use the homophily metric and graph
visualizations to gain valuable quantitative and qualitative insights on the
extracted graph structures. For the experimental evaluation, we leverage the UK
Biobank dataset, which offers many imaging and non-imaging phenotypes. Our
results indicate that architectures highly sensitive to the graph structure,
such as Graph Convolutional Network (GCN) and Graph Attention Network (GAT),
struggle with low homophily graphs, while other architectures, such as
GraphSage and Chebyshev, are more robust across different homophily ratios. We
conclude that static graph construction approaches are potentially insufficient
for the task of brain age estimation and make recommendations for alternative
research directions. | Kyriaki-Margarita Bintsi, Tamara T. Mueller, Sophie Starck, Vasileios Baltatzis, Alexander Hammers, Daniel Rueckert | 2023-09-26T10:30:45Z | http://arxiv.org/abs/2309.14816v1 | A Comparative Study of Population-Graph Construction Methods and Graph Neural Networks for Brain Age Regression
###### Abstract
The difference between the chronological and biological brain age of a subject can be an important biomarker for neurodegenerative diseases, thus brain age estimation can be crucial in clinical settings. One way to incorporate multimodal information into this estimation is through population graphs, which combine various types of imaging data and capture the associations among individuals within a population. In medical imaging, population graphs have demonstrated promising results, mostly for classification tasks. In most cases, the graph structure is pre-defined and remains static during training. However, extracting population graphs is a non-trivial task and can significantly impact the performance of Graph Neural Networks (GNNs), which are sensitive to the graph structure. In this work, we highlight the importance of a meaningful graph construction and experiment with different population-graph construction methods and their effect on GNN performance on brain age estimation. We use the homophily metric and graph visualizations to gain valuable quantitative and qualitative insights on the extracted graph structures. For the experimental evaluation, we leverage the UK Biobank dataset, which offers many imaging and non-imaging phenotypes. Our results indicate that architectures highly sensitive to the graph structure, such as Graph Convolutional Network (GCN) and Graph Attention Network (GAT), struggle with low homophily graphs, while other architectures, such as GraphSage and Chebyshev, are more robust across different homophily ratios. We conclude that static graph construction approaches are potentially insufficient for the task of brain age estimation and make recommendations for alternative research directions.
Keywords:Brain age regression Population graphs Graph Neural Networks
## 1 Introduction
Alzheimer's disease [9], Parkinson's disease [24], and schizophrenia [18], among other neurodegenerative diseases, cause an atypically accelerated aging process in
the brain. Consequently, the difference between an individual's biological brain age and their chronological age can act as an indicator of deviation from the normal aging trajectory [2], and potentially serve as a significant biomarker for neurodegenerative diseases [7, 13].
Graph Neural Networks (GNNs) have been recently explored for medical tasks, since graphs can provide an inherent way of combining multi-modal information, and have demonstrated improved performance in comparison to graph-agnostic deep learning models [23, 1]. In most cases, the whole population is represented in a graph, namely population-graph, and the structure of the graph is pre-decided and remains static throughout the training. However, since the structure is not given but has to be inferred from the data, there are multiple ways that a graph can be constructed, which will possibly lead to different levels of performance. Every subject of the cohort is a node of the graph, with the imaging information usually allocated as node features. The interactions and relationships among the subjects of the cohort are represented by the edges. Two nodes are chosen to be connected based on a distance measure, or similarity score, usually based on the non-imaging phenotypes. The population graph is used as input to a GNN for the task of node prediction, most commonly node classification.
Brain MRI data have been widely used in population graphs in combination with GNNs. Parisot et al. were the first to propose a way to construct a static population graph using a similarity score that takes into account both imaging and non-imaging data [23]. Many works were based on this work and extended the method to other applications [16, 28]. When it comes to brain age regression, which is an inherently more complicated task than classification, there is little ongoing work using population graphs. To our knowledge, only Stankeviciute et al. [25] worked on the construction of the static population graph using the non-imaging information, but the predictive performance was relatively low.
In all of the cases mentioned above, the graph is chosen based on some criterion before training and remains static, thus it cannot be changed. However, since there are multiple ways that it can be constructed, there is no way to ensure that the final structure will be optimized for the task. This problem has been identified in the literature and multiple metrics have been described in order to evaluate the final graph structure [29, 21], with homophily being the one most commonly used. A graph is considered homophilic, when nodes are connected to nodes with the same class label, and hence similar node features. Else, it is characterized as heterophilic. It has been found that some GNNs, such as Graph Convolutional Network (GCN) [17], and Graph Attention Network (GAT) [26], are very sensitive to the graph structure, and a meaningless graph along with these models can perform worse than a graph-agnostic model. On the other hand, other models, such as GraphSage [14] and Chebyshev [10], are more resilient to the graph structure, and their performance is not affected as much by a heterophilic graph [30].
In this work, we implement and evaluate the performance of different static graph construction methods for the task of brain age regression. We test their
performance on the UK Biobank (UKBB), which offers a variety of both imaging and non-imaging phenotypes. The extracted population graphs are used along with the most popular GNNs, and more specifically GCN [17] and GAT [26], which are architectures highly sensitive to the graph structure, as well as GraphSage [14] and Chebyshev [10], which have been found to be resilient to the population-graph structure. The quantitative results and the visualization of the graphs allow us to draw conclusions and make suggestions regarding the use of static graphs for brain age regression. The code is available on GitHub at: [https://github.com/bintsi/brain-age-population-graphs](https://github.com/bintsi/brain-age-population-graphs).
## 2 Methods
We have a dataset consisting of a set of \(N\) subjects, each described by \(M\) features. This dataset can be represented as \(\mathbf{X}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{N}]\in\mathbb{R}^{N\times M}\), where \(\mathbf{x}_{i}\) represents the feature vector for the \(i\)-th subject. Additionally, each subject has a label denoted by \(\mathbf{y}\in\mathbb{R}^{N}\). Every subject \(i\) is also characterized by a set of \(K\) non-imaging phenotypes \(\mathbf{q}_{i}\in\mathbb{R}^{K}\).
To establish the relationship between subjects, we introduce a population graph, denoted as \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\). The graph consists of two components: \(\mathcal{V}\) represents the set of nodes, where each subject corresponds to a unique node, and \(\mathcal{E}\) represents the set of edges that define the connectivity between nodes.
In this paper, we explore four different graph construction approaches, which are described in detail below. In all of the graph construction methods, the node features consist of the imaging features described in the Dataset section 3.1.
#### 2.0.1 No edges
As the baseline, we consider a graph with no edges among the nodes. This is the equivalent of traditional machine learning, where the node features act as the features used as input in the model. The machine learning model in this case is a Multi-Layer Perceptron (MLP). We refer to this approach as _"No edges"_.
#### 2.0.2 Random Graph
With the next approach, we want to explore whether the way of constructing the graph plays an important role in the performance of the GNN. Thus, we build a random Erdős–Rényi graph [11], where a random set of nodes is chosen as neighbors for every node.
#### 2.0.3 Clinical similarity score (Stankevičiūtė et al.)
An approach of creating a population graph specifically for the task of brain age regression was proposed by [25], and it does not include the imaging features at all in the extraction of the edges. Instead, the edges are decided only from the non-imaging information. More specifically, the similarity function for two subjects \(i\) and \(j\) is given by:
\[sim(i,j)=\frac{1}{K}\sum_{k=1}^{K}\mathbf{1}[q_{ik}=q_{jk}], \tag{1}\]
where \(q_{ik}\) is the value of the \(k\)-th non-imaging phenotype for the \(i\)-th subject, and \(\mathbf{1}\) is the Kronecker delta function. Intuitively, the Kronecker delta function will only return 1 if the values of a particular non-imaging phenotype of two subjects match. Two nodes \(i\) and \(j\) are connected if \(sim(i,j)\geq\mu\), with \(\mu\) being a similarity threshold chosen empirically.
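A small NumPy sketch of this construction (with toy phenotype values; the variable names are ours):

```python
import numpy as np

def sim(q_i, q_j):
    """Eq. 1: fraction of the K non-imaging phenotypes on which two
    subjects agree (mean of Kronecker deltas)."""
    return float(np.mean(np.asarray(q_i) == np.asarray(q_j)))

def build_edges(Q, mu):
    """Connect subjects i and j whenever sim(i, j) >= mu."""
    N = len(Q)
    return [(i, j) for i in range(N) for j in range(i + 1, N)
            if sim(Q[i], Q[j]) >= mu]

Q = np.array([[1, 0, 2],   # subject 0, K = 3 phenotypes
              [1, 0, 3],   # subject 1
              [0, 1, 2]])  # subject 2
print(build_edges(Q, mu=2/3))  # [(0, 1)]: subjects 0 and 1 agree on 2 of 3
```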
#### 2.0.4 Similarity score (Parisot et al.)
In most of the related works in the literature, the way of creating the adjacency matrix \(W\) was originally suggested by [23] and it is calculated by:
\[W(i,j)=Sim(\mathbf{x}_{i},\mathbf{x}_{j})\sum_{k=1}^{K}\gamma(q_{ik},q_{jk}), \tag{2}\]
where \(Sim(\mathbf{x}_{i},\mathbf{x}_{j})\) is a similarity measure between the node features \((\mathbf{x}_{i},\mathbf{x}_{j})\) of the subjects \(i\) and \(j\), in our case cosine similarity, \(\gamma(\cdot,\cdot)\) is the distance of the non-imaging phenotypes between the nodes, and \(q_{ik}\) is the value of the \(k\)-th non-imaging phenotype for the \(i\)-th subject.
The two terms in Eq. 2 indicate that both imaging, and non-imaging information are taken into account for the extraction of the edges. The second term is similar to the term \(sim(i,j)\) of Eq. 1.
The computation of \(\gamma(\cdot,\cdot)\) is different for continuous and categorical features. For categorical data, \(\gamma(\cdot,\cdot)\) is defined as the Kronecker delta function \(\mathbf{1}\), as before. For continuous data, \(\gamma(\cdot,\cdot)\) is defined as a unit-step function with respect to a specific threshold \(\theta\). Intuitively, this means that the output of the \(\gamma\) function will be 1 if the continuous phenotypes of the two nodes are sufficiently similar.
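The sketch below spells out Eq. 2 for a single pair of subjects; which phenotypes are categorical, and the toy values, are our own choices.

```python
import numpy as np

def gamma(q_ik, q_jk, categorical, theta=0.1):
    """Kronecker delta for categorical phenotypes; unit step on
    |q_ik - q_jk| < theta for continuous ones."""
    if categorical:
        return float(q_ik == q_jk)
    return float(abs(q_ik - q_jk) < theta)

def parisot_weight(x_i, x_j, q_i, q_j, is_cat, theta=0.1):
    """Eq. 2: cosine similarity of the imaging features times the
    summed phenotype agreement."""
    cos = x_i @ x_j / (np.linalg.norm(x_i) * np.linalg.norm(x_j))
    agree = sum(gamma(a, b, c, theta) for a, b, c in zip(q_i, q_j, is_cat))
    return cos * agree

x_i, x_j = np.array([1.0, 0.5]), np.array([0.9, 0.6])
q_i, q_j = [1, 0.32], [1, 0.35]  # one categorical, one continuous phenotype
print(parisot_weight(x_i, x_j, q_i, q_j, is_cat=[True, False]))
```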
#### 2.0.5 kNN graph
The last three graph construction approaches are based on the Nearest Neighbors (NN) algorithm. We connect the edges based on a distance function, in this case cosine similarity. Each node is connected to its 5 closest neighbors.
In the first approach, we use the neuroimaging information for the node features, and we also use the node features in order to estimate the distances of the nodes. We refer to this approach as _"kNN (imaging)"_.
Similarly to before, in the second approach, we use the cosine similarity of a set of features in order to extract the graph structure. The node features incorporate the imaging information as before. The difference here is that we estimate the cosine similarity of the non-imaging phenotypes of the subjects with the purpose of finding the 5 closest neighbors. We refer to this approach as _"kNN (non-imaging)"_.
Finally, we use all the available phenotypes, i.e. both the imaging, and non-imaging information, to connect the nodes, again using cosine similarity. We refer to this approach as _"kNN (all phenotypes)"_.
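A compact sketch of all three kNN variants using scikit-learn (array shapes follow the dataset described below, with 68 imaging and 20 non-imaging phenotypes; the random data is a stand-in):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_edges(F, k=5):
    """Connect each node to its k nearest neighbors under cosine distance.
    F holds whichever features drive the neighbor search."""
    nn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(F)
    _, idx = nn.kneighbors(F)
    # column 0 is the node itself, so drop it
    return [(i, int(j)) for i, row in enumerate(idx) for j in row[1:]]

rng = np.random.default_rng(0)
X_img, X_pheno = rng.random((100, 68)), rng.random((100, 20))

edges_imaging = knn_edges(X_img)                       # kNN (imaging)
edges_nonimaging = knn_edges(X_pheno)                  # kNN (non-imaging)
edges_all = knn_edges(np.hstack([X_img, X_pheno]))     # kNN (all phenotypes)
print(len(edges_imaging))  # 100 nodes x 5 neighbors = 500 directed edges
```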
## 3 Experiments
### Dataset
For the experiments of the comparative study we use the UK Biobank (UKBB) [3], which not only offers an extensive range of vital organ images, including brain scans, but also contains a diverse collection of non-imaging information such as demographics, biomedical data, lifestyle factors, and cognitive performance measurements. Consequently, it is exceptionally well-suited for brain age estimation tasks that require integrating both imaging and non-imaging information.
To identify the most important factors influencing brain age in the UKBB, we leverage the work of [6] and select 68 neuroimaging phenotypes and 20 non-imaging phenotypes that were found to be the most relevant to brain age in UKBB. The neuroimaging features are obtained directly from the UKBB and include measurements derived from both structural MRI and diffusion-weighted MRI. All phenotypes are standardized to a normalized range between 0 and 1.
The study focuses on individuals aged 47 to 81 years. We include only those subjects who have the necessary phenotypes available, resulting in a group of approximately 6500 subjects. The dataset is split into three parts: 75% for training, 5% for validation, and 20% for testing.
Table 1: Performance of the different graph construction methods along with various GNNs on the test set. For every population graph, the homophily ratio is estimated. The baseline MLP (_No edges_) has a performance of MAE = 3.73 years and an \(R^{2}\) score of 0.56. Best performance for each GNN model is highlighted in bold.

_MAE (years)_

| Graph Construction | GCN | GraphSAGE | GAT | Chebyshev | Homophily |
| --- | --- | --- | --- | --- | --- |
| Random graph | 5.19 | 3.72 | 5.38 | 3.83 | 0.7495 |
| Parisot [23] | 4.21 | 3.74 | 4.35 | 3.77 | 0.7899 |
| Stankevičiūtė [25] | 4.61 | 3.73 | 4.90 | 3.77 | 0.7743 |
| kNN (imaging) | **3.89** | 3.77 | 4.09 | 3.75 | 0.8259 |
| kNN (non-imaging) | 4.76 | **3.68** | 4.98 | **3.72** | 0.7796 |
| kNN (all phenotypes) | 3.93 | 3.76 | **4.07** | 3.73 | 0.8191 |

_\(R^{2}\) score_

| Graph Construction | GCN | GraphSAGE | GAT | Chebyshev |
| --- | --- | --- | --- | --- |
| Random graph | 0.26 | 0.59 | 0.20 | 0.54 |
| Parisot [23] | 0.49 | 0.59 | 0.47 | 0.55 |
| Stankevičiūtė [25] | 0.40 | 0.59 | 0.30 | 0.58 |
| kNN (imaging) | **0.56** | 0.58 | 0.51 | 0.59 |
| kNN (non-imaging) | 0.38 | **0.59** | 0.30 | 0.55 |
| kNN (all phenotypes) | 0.56 | 0.58 | **0.53** | **0.60** |
### Results
The graph construction methods described in the previous section are leveraged by four GNN architectures and evaluated on the task of brain age regression. For every experiment, both spectral and spatial methods, and more specifically, GCN [17], GAT [26], GraphSage [14], and Chebyshev [10], are used. The results for all combinations of graph construction methods and GNN architectures are shown in Table 1.
As a baseline, we use an MLP (_No edges_) that achieves a MAE of 3.73 years and an \(R^{2}\) score of 0.56. For the GCN and GAT, we notice that even though all of the graph construction methods manage to extract a graph that outperforms the random graph, none of them manages to outperform the simple MLP. The methods that use only the imaging information, i.e., the _kNN graph (imaging)_, or all of the phenotypes, such as _Parisot et al. [23]_, extract graphs that, when used as input to the GCN or GAT models, perform better (MAE of around 4 years) than the ones that only use the non-imaging features for the extraction of the graph, such as the _kNN graph (non-imaging)_ (MAE of 4.5–5 years). The GraphSAGE and Chebyshev models perform on similar levels regardless of the graph construction method, with a MAE similar to that of the MLP. The graph construction method that performs best, though, is the _kNN graph (non-imaging)_ along with GraphSAGE, with a MAE of 3.68 years and an \(R^{2}\) score of 0.59.
Finally, we estimate the homophily ratio of the population graphs extracted from the different construction methods based on [22] (Table 1). The most homophilic graphs tend to be the ones that use only imaging information, or a combination of imaging and non-imaging information, for the extraction of the edges.
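The exact regression-homophily definition of [22] is not reproduced in this paper; the sketch below implements one simple surrogate in the same spirit — the fraction of edges whose endpoint ages are close — with the tolerance chosen by us.

```python
import numpy as np

def regression_homophily(edges, y, tol=5.0):
    """Fraction of edges whose endpoint labels differ by at most `tol`.
    A simple surrogate for a regression homophily ratio; the metric of
    [22] may normalize differently."""
    y = np.asarray(y, dtype=float)
    diffs = np.array([abs(y[i] - y[j]) for i, j in edges])
    return float(np.mean(diffs <= tol))

edges = [(0, 1), (1, 2), (0, 3)]
ages = [50.0, 52.0, 70.0, 51.0]
print(regression_homophily(edges, ages))  # ~0.67: 2 of 3 edges link similar ages
```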
### Visualizations
Visualizing the extracted static graphs can provide more insights into the performance of the different GNN architectures. Therefore, we visualize the different population graphs colored based on the age of the subjects in Figure 1. The various graph construction methods result in very different graph structures. More specifically, the graph construction methods that take into account the imaging phenotypes, either alone or in combination with the non-imaging ones, provide graphs that are more meaningful, as subjects of similar ages are closer to each other. On the contrary, methods such as that of Stankevičiūtė et al. [25] create graphs similar to the random graph, with neighborhoods that are not very informative regarding the age of the subjects.
### Implementation Details
The static graphs extracted are chosen to be sparse for complexity reasons. The hyperparameters of the different construction methods were selected in such a way that all the extracted graphs have about 40000 to 50000 edges. Regarding the model hyperparameters, every GNN contains a graph convolutional layer
consisting of 512 units, followed by a fully connected layer with 128 units, prior to the regression layer. ReLU activation is chosen. The number of layers and their dimensions are determined by conducting a hyperparameter search based on validation performance. During training, the networks are optimized using the AdamW optimizer [19] with a learning rate of 0.001 for 150 epochs, with the best model being saved. For the similarity threshold we use \(\mu=18\) and for the unit-step function threshold, we use \(\theta=0.1\). Both hyperparameters are selected based on validation performance and sparsity requirements. The implementation utilizes PyTorch Geometric [12] and a Titan RTX GPU.
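For concreteness, a minimal PyTorch Geometric sketch of the regressor layout described above (one 512-unit convolution, a 128-unit fully connected layer, a scalar head, and AdamW at a learning rate of 0.001); the GCN variant is shown, and swapping `GCNConv` for `SAGEConv`, `GATConv`, or `ChebConv` (which additionally takes its polynomial order `K`) yields the other architectures:

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv

class GCNRegressor(nn.Module):
    """512-unit graph convolution -> 128-unit FC layer -> regression head."""
    def __init__(self, in_dim):
        super().__init__()
        self.conv = GCNConv(in_dim, 512)
        self.fc = nn.Linear(512, 128)
        self.head = nn.Linear(128, 1)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv(x, edge_index))
        h = torch.relu(self.fc(h))
        return self.head(h).squeeze(-1)

model = GCNRegressor(in_dim=68)            # 68 neuroimaging node features
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

x = torch.randn(10, 68)                    # toy node features
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])  # toy directed edges
loss = nn.functional.l1_loss(model(x, edge_index), torch.randn(10))  # MAE
loss.backward()
opt.step()
```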
## 4 Discussion and Conclusions
In this work, we implement and evaluate static population graphs that are commonly used in the literature for other medical tasks, for brain age regression. We use the extracted graphs along with a number of popular GNN models, namely GCN, GAT, GraphSAGE, and Chebyshev, in order to gain insights into the behavior of both the extraction methods and the performance of the different models. By visualizing the graphs and estimating their homophily, we provide further intuition into why the different static graphs do not work as expected, and we highlight that extracting static graphs from the data is not straightforward and possibly not suitable for brain age regression.

Figure 1: Visualizations of the population graphs extracted from the different graph construction methods. The nodes are colored based on the age of the subject. Colder colors (i.e., blue) indicate subjects with an older age, while warmer colors (i.e., red) correspond to younger subjects.
The reported results in Table 1 indicate that the GCN and GAT are highly sensitive to the graph structure, which is in agreement with the relevant literature [30, 20]. It is clear that the graph construction methods that provide graphs of higher homophily, lead to better performance for GCN and GAT compared to the ones that provide a random-like population graph.
On the contrary, GraphSAGE and Chebyshev are not negatively affected by a graph structure with low homophily. This is because GraphSAGE separately encodes the node's own encoding and the neighbors' encodings, which in our case are dissimilar, and is hence less affected by the graph structure. When it comes to Chebyshev, the model is able to aggregate information from k hops in one layer, while the other GNNs achieve this through multiple layers. Being able to get information from higher-order neighborhoods allows the model to find more relevant features, which would not be possible in the 1-hop neighborhood, as this is highly heterophilic. But even these GNNs, which are more resilient to the graph structure, cannot distinctly outperform the MLP. This behavior is expected as, according to the relevant literature, these models perform similarly to an MLP and can outperform it only beyond a specific homophily threshold [30]. What is also observed is that the _kNN (non-imaging)_ works slightly better than the MLP for both GraphSAGE and Chebyshev, probably because the models were able to capture the information added by the edges.
To further explore the behavior of the graph construction methods and the population-graph they produce, we calculate the homophily ratio and we visualize the graphs. The graphs that are more homophilic perform better along with the models that are more sensitive to the graph structure, such as the GCN and the GAT. We note here that all of the homophily ratios are higher than expected compared to the homophily reported in classification tasks, since we would expect that the random graph would have a homophily of 0.5. This is possibly because of the implementation of homophily for regression, as well as due to the imbalanced nature of the dataset. However, the trend is very clear and in agreement with the graph visualizations and the performance of the models.
All in all, the extraction of static population graphs for brain age regression, and in general for medical tasks for which the graph is not given, does not look very promising. In our opinion, there are multiple directions that should be explored. Firstly, one approach could be to learn the edges of the graph along with the training of the GNN, which allows the extraction of a graph optimized for the specific task at hand. There is some ongoing work on such adaptive graph learning [27, 15, 8, 5], but it deserves more focus. In addition, the creation of GNN models for graphs with high heterophily [29, 30], or the exploitation of graph rewiring techniques that could make the existing GNNs work better on heterophilic graphs [4], have proved to be useful for classification tasks. In the case of medical tasks, it might also be beneficial to incorporate existing medical insights along with the above. It is also important to make the models and the graphs interpretable, as interpretability can be vital in healthcare, and it is something that is not currently widely explored in heterophilic graphs [29]. Last but not least, introducing metrics that evaluate population graphs is of high importance, since this can help us better understand the structure of the graph. Even though there is some ongoing work when it comes to node classification [20, 29], metrics for node regression have only recently started to be explored [22].
#### Acknowledgements
KMB would like to acknowledge funding from the EPSRC Centre for Doctoral Training in Medical Imaging (EP/L015226/1).
|
2302.14231 | CHGNet: Pretrained universal neural network potential for
charge-informed atomistic modeling | The simulation of large-scale systems with complex electron interactions
remains one of the greatest challenges for the atomistic modeling of materials.
Although classical force fields often fail to describe the coupling between
electronic states and ionic rearrangements, the more accurate
\textit{ab-initio} molecular dynamics suffers from computational complexity
that prevents long-time and large-scale simulations, which are essential to
study many technologically relevant phenomena, such as reactions, ion
migrations, phase transformations, and degradation.
In this work, we present the Crystal Hamiltonian Graph neural Network
(CHGNet) as a novel machine-learning interatomic potential (MLIP), using a
graph-neural-network-based force field to model a universal potential energy
surface. CHGNet is pretrained on the energies, forces, stresses, and magnetic
moments from the Materials Project Trajectory Dataset, which consists of over
10 years of density functional theory static and relaxation trajectories of
$\sim 1.5$ million inorganic structures. The explicit inclusion of magnetic
moments enables CHGNet to learn and accurately represent the orbital occupancy
of electrons, enhancing its capability to describe both atomic and electronic
degrees of freedom. We demonstrate several applications of CHGNet in
solid-state materials, including charge-informed molecular dynamics in
Li$_x$MnO$_2$, the finite temperature phase diagram for Li$_x$FePO$_4$ and Li
diffusion in garnet conductors. We critically analyze the significance of
including charge information for capturing appropriate chemistry, and we
provide new insights into ionic systems with additional electronic degrees of
freedom that can not be observed by previous MLIPs. | Bowen Deng, Peichen Zhong, KyuJung Jun, Janosh Riebesell, Kevin Han, Christopher J. Bartel, Gerbrand Ceder | 2023-02-28T01:30:06Z | http://arxiv.org/abs/2302.14231v2 | # CHGNet: Pretrained universal neural network potential for charge-informed atomistic modeling
###### Abstract
The simulation of large-scale systems with complex electron interactions remains one of the greatest challenges for the atomistic modeling of materials. Although classical force-fields often fail to describe the coupling between electronic states and ionic rearrangements, the more accurate _ab-initio_ molecular dynamics suffers from computational complexity that prevents long-time and large-scale simulations, which are essential to study many technologically relevant phenomena, such as reactions, ion migrations, phase transformations, and degradation.
In this work, we present the Crystal Hamiltonian Graph neural Network (CHGNet) as a novel machine-learning interatomic potential (MLIP), using a graph-neural-network-based force-field to model a universal potential energy surface. CHGNet is pretrained on the energies, forces, stresses, and magnetic moments from the Materials Project Trajectory Dataset, which consists of over 10 years of density functional theory static and relaxation trajectories of \(\sim 1.5\) million inorganic structures. The explicit inclusion of magnetic moments enables CHGNet to learn and accurately represent the orbital occupancy of electrons, enhancing its capability to describe both atomic and electronic degrees of freedom. We demonstrate several applications of CHGNet in solid-state materials, including charge-informed molecular dynamics in Li\({}_{x}\)MnO\({}_{2}\), the finite temperature phase diagram for Li\({}_{x}\)FePO\({}_{4}\) and Li diffusion in garnet conductors. We critically analyze the significance of including charge information for capturing appropriate chemistry, and we provide new insights into ionic systems with additional electronic degrees of freedom that can not be observed by previous MLPs.
## I Introduction
Large-scale simulations, such as molecular dynamics (MD), are essential tools in the computational exploration of solid-state materials [1]. They enable the study of reactivity, degradation, interfacial reactions, transport in partially disordered structures, and other heterogeneous phenomena relevant for the application of complex materials in technology. Technological relevance of such simulations requires rigorous chemical specificity which originates from the orbital occupancy of atoms. Despite their importance, accurate modeling of electron interactions or their subtle effects in MD simulations remains a major challenge. Classical force-fields treat the charge as an atomic property that is assigned to every atom _a-priori_[2; 3]. Methodology developments in the field of polarizable force-fields such as the electronegativity equalization method (EEM) [4], chemical potential equalization (CPE) [5], and charge equilibration (Qeq) [6] realize charge evolution via the redistribution of atomic partial charge. However, these empirical methods are often not accurate enough to capture complex electron interactions.
_Ab-initio_ molecular dynamics (AIMD) with density functional theory (DFT) can produce high-fidelity results with quantum-mechanical accuracy by explicitly computing the electronic structure within the density functional approximation. The charge-density distribution and corresponding energy can be obtained by solving the Kohn-Sham equation [7]. Long-time and large-scale spin-polarized AIMD simulations critical for studying ion migrations, phase transformations and chemical reactions are challenging and extremely computing intensive [8; 9]. These difficulties underscore the need for more efficient computational methods in the field that can account for charged ions and their orbital occupancy at sufficient time and length scales needed to model important phenomena.
Machine-learning interatomic potentials (MLIPs) such as enet [10; 11] and DeepMD [12] have provided promising solutions to bridge the gap between expensive electronic structure methods and efficient classical interatomic potentials. Specifically, graph neural network (GNN)-based MLIPs such as DimeNet [13], NequIP [14], and MACE [15] have been shown to achieve state-of-the-art performance by incorporating invariant/equivariant symmetry constraints and long-range interaction through graph convolution [16]. Most recently, GNN-based MLIPs trained on the periodic table (e.g., M3GNet) have demonstrated the possibility of universal interatomic potentials that may not require chemistry-specific training for each new application [17; 18]. However, so far none
of these methods has included the important effects that valences have on chemical bonding.
The importance of an ion's valence derives from the fact that it can engage in very different bonding with its environment depending on its electron count. While traditional MLIPs treat the elemental label as the basic chemical identity, different valence states of transition metal ions behave as differently from each other as different elements. For example, high-spin Mn\({}^{4+}\) is a nonbonding spherical ion which almost always resides in octahedral coordination by oxygen atoms, whereas Mn\({}^{3+}\) is a Jahn-Teller active ion that radically distorts its environment, and Mn\({}^{2+}\) is an ion that strongly prefers tetrahedral coordination [8]. Such strong chemical interaction variability across different valence states exists for almost all transition metal ions and requires specification of an ion beyond its chemical identity. In addition, the charge state is a degree of freedom that can create configurational entropy and whose dynamic optimization can lead to strongly coupled charge and ion motion, impossible to capture with an MLIP that only carries elemental labels. The relevance of explicit electron physics motivates the development of a robust MLIP model with charge information built in.
Charge has been represented in a variety of ways, from a simple oxidation state label to continuous wave functions derived from quantum mechanics [19]. Challenges in incorporating charge information into MLIPs arise from many factors, such as the ambiguity of representations, complexity of interpretation, and impracticality of taking charge as an input (\(E(\{\mathbf{r}_{i}\},\{q_{i}\})\), as the labels \(\{q_{i}\}\) are generally not _a-priori_ available). In this work, we define charge as an atomic property (_atomic charge_) that can be inferred from the inclusion of magnetic moments (magmoms). We show that by explicitly incorporating the site-specific magmoms as the charge-state constraints into the **C**rystal **H**amiltonian **G**raph neural-**N**etwork (CHGNet), one can both enhance the latent-space regularization and accurately capture electron interactions.
We demonstrate the charge constraints and latent-space regularization of atomic charge in Na\({}_{2}\)V\({}_{2}\)(PO\({}_{4}\))\({}_{3}\) and show the applications of CHGNet in the study of charge-transfer and phase transformation in Li\({}_{x}\)MnO\({}_{2}\), electronic entropy in the Li\({}_{x}\)FePO\({}_{4}\) phase diagram, and Li diffusivity in garnet-type Li-superionic conductors Li\({}_{3+x}\)La\({}_{3}\)Te\({}_{2}\)O\({}_{12}\). By critically comparing and evaluating the importance of incorporating charge information in the construction of CHGNet, we offer new insights into the materials modeling of ionic systems with additional electronic degrees of freedom. Our analysis highlights the essential role that charge information plays in atomistic simulations for solid-state materials.

Figure 1: **CHGNet model architecture** (a) CHGNet workflow: a crystal structure with unknown atomic charge is used as input to predict the energy, force, stress, and magnetic moments, resulting in a charge-decorated structure. (b) Atom graph: the pairwise bond information is drawn between atoms; Bond graph: the pairwise angle information is drawn between bonds. (c) Graphs run through basis expansions and embedding layers to create atom, bond, and angle features. The features are updated through several interaction blocks, and the properties are predicted at output layers. (d) Interaction block in which the atom, bond, and angle features share and update information. (e) Atom convolution layer where neighboring atom and bond information is calculated through weighted message passing and aggregated to the atoms.
## II Results
### CHGNet architecture
The foundation of CHGNet is a GNN, as shown in Fig. 1, where the graph convolution layer is used to propagate atomic information via a set of nodes \(\{v_{i}\}\) connected by edges \(\{e_{ij}\}\). The translation, rotation, and permutation invariance are preserved in GNNs [20; 21; 22]. Figure 1(a) shows the workflow of CHGNet which takes a crystal structure with unknown atomic charges as input and outputs the corresponding energy, forces, stress, and magmoms. The charge-decorated structure can be inferred from the on-site magnetic moments and atomic orbital theory. The details are described in the following section.
In CHGNet, a periodic crystal structure is converted into an atom graph \(G^{a}\) by searching for neighboring atoms \(v_{j}\) within \(r_{\rm cut}\) of each atom \(v_{i}\) in the primitive cell. The edges \(e_{ij}\) are drawn with information from the pairwise distance between \(v_{i}\) and \(v_{j}\), as shown in Fig. 1(b). Three-body interaction can be computed by using an auxiliary bond graph \(G^{b}\), which can be similarly constructed by taking the angle \(a_{ijk}\) as the pairwise information between bonds \(e_{ij}\) and \(e_{jk}\) (see Methods). We adopt similar approaches to include the angular/three-body information as other recent GNN MLIPs [13; 17; 23].
Figure 1(c) shows the architecture of CHGNet, which consists of a sequence of basis expansions, embeddings, interaction blocks, and output layers (see Methods for details). Figure 1(d) illustrates the components within an interaction block, where the atomic interaction is simulated with the update of atom, bond, and angle features via the convolution layers. Figure 1(e) presents the convolution layer in the atom graph. Weighted message passing is used to propagate information between atoms, where the message weight \(\tilde{e}_{ij}^{a}\) from node \(j\) to node \(i\) decays to zero at the graph cutoff radius to ensure smoothness of the potential energy surface [13].
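The smooth-decay idea can be sketched with a standard quintic cutoff envelope; the exact polynomial used in CHGNet may differ (the paper cites the DimeNet-style envelope [13]), so treat this only as an illustration of how message weights vanish at \(r_{\rm cut}\).

```python
import numpy as np

def smooth_cutoff(r, r_cut):
    """Quintic smoothstep envelope: equals 1 at r=0 and decays smoothly
    to 0 at r=r_cut, with zero first derivative at both ends."""
    u = np.clip(np.asarray(r) / r_cut, 0.0, 1.0)
    return 1.0 - 10.0 * u**3 + 15.0 * u**4 - 6.0 * u**5

def weighted_message_sum(messages, dists, r_cut):
    """Aggregate neighbor messages, each scaled by its cutoff weight, so
    the sum changes smoothly as atoms enter or leave the neighbor list."""
    w = smooth_cutoff(dists, r_cut)
    return (w[:, None] * np.asarray(messages)).sum(axis=0)

msgs = np.random.rand(4, 8)  # 4 neighbor messages with feature dimension 8
print(weighted_message_sum(msgs, np.array([1.0, 2.5, 4.9, 5.2]), r_cut=5.0).shape)
```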
Unlike other GNNs, where the updated atom features \(\{v_{i}^{n}\}\) after \(n\) convolution layers are directly used to predict energies, CHGNet regularizes the node-wise features \(\{v_{i}^{n-1}\}\) at the \(n-1\) convolution layer to contain the information about magnetic moments. The regularized features \(\{v_{i}^{n-1}\}\) carry rich information about both local ionic environments and charge distribution. Therefore, the atom features \(\{v_{i}^{n}\}\) used to predict energy, force, and stress are charge-constrained by their charge-state information. As a result, CHGNet can provide charge-state information using only the nuclear positions and atomic identities as input, allowing the study of charge distribution in atomistic modeling.
### Materials Project Trajectory Dataset
The Materials Project database contains a vast collection of DFT calculations on \(\sim 146,000\) inorganic materials composed of 94 elements [24]. To accurately sample the universal potential energy surface, we extracted \(\sim 1.37\) million Materials Project tasks of structure relaxation and static calculations using either the generalized gradient approximation (GGA) or GGA+U exchange-correlation (see Methods). This effort resulted in a comprehensive dataset with 1,580,395 atom configurations, 1,580,395 energies, 7,944,833 magnetic moments, 49,295,660 forces, and 14,223,555 stresses. To ensure the consistency of energies within the MPtrj dataset, we applied the GGA/GGA+U mixing compatibility correction, as described by Wang _et al._[25].
The distribution of elements in the MPtrj dataset is illustrated in Fig. 2. The lower-left triangle (warm color) in an element's box indicates the frequency of occurrence of that element in the dataset, and the upper-right triangle (cold color) represents the number of instances where magnetic information is available for the element. With over 100,000 occurrences for 60 different elements and more than 10,000 instances with magnetic information for 76 different elements, the MPtrj dataset provides comprehensive coverage of all chemistries, excluding only the noble gases and actinoids. The lower boxes in Fig. 2 present the counts and mean absolute deviations of energy, force, stress, and magmoms in the MPtrj dataset.
CHGNet with 400,438 trainable parameters was trained on the MPtrj dataset with an 8:1:1 training/validation/test ratio, partitioned by materials (see Methods). The statistics of the mean absolute errors of the energy, force, stress, and magmoms on the MPtrj test set structures are shown in Table 1. We observe similar test set errors, with slight improvements in the model trained with magmoms.
### Charge-constraints and charge-inference from magnetic moments
In solid-state materials that contain heterovalent ions, it is crucial to distinguish the atomic charge of the ions, as an element's interaction with its environment can depend strongly on its valence state. It is well known that the valence of heterovalent ions cannot be directly calculated through the DFT charge density because the charge density is almost invariant to the valence state due to the
hybridization shift with neighboring ligand ions [26; 27]. Furthermore, the accurate representation and encoding of the full charge density is another demanding task requiring substantial computational resources [28; 29]. An established approach is to rely on the magnetic moment for a given atom site as an indicator of its atomic charge, which can be derived from the difference in localized up-spin and down-spin electron densities in spin-polarized DFT calculations [8; 30]. Compared with the direct use of charge density, magmoms are found to contain more comprehensive information regarding the electron orbital occupancy and therefore the chemical behavior of ions, as demonstrated in previous studies.
To rationalize our treatment of the atomic charge, we used a NASICON-type cathode material Na\({}_{4}\)V\({}_{2}\)(PO\({}_{4}\))\({}_{3}\) as an illustrative example. The phase stability of the (de-)intercalated material Na\({}_{4-x}\)V\({}_{2}\)(PO\({}_{4}\))\({}_{3}\) is associated with Na/vacancy ordering and is highly correlated to the charge ordering on the vanadium sites [31]. We generated a supercell structure of Na\({}_{4}\)V\({}_{2}\)(PO\({}_{4}\))\({}_{3}\) with 2268 atoms and randomly removed half of the Na ions to generate the structure with composition Na\({}_{2}\)V\({}_{2}\)(PO\({}_{4}\))\({}_{3}\), where half of the V ions are oxidized to a V\({}^{4+}\) state.
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
 & **Energy** & **Force** & **Stress** & **Magmom** \\
 & (meV/atom) & (meV/Å) & (GPa) & (\(\mu_{B}\)) \\
\hline
**With mag** & 30 & 77 & 0.348 & 0.032 \\
\hline
**No mag** & 33 & 79 & 0.351 & N/A \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Mean absolute errors (MAEs) of CHGNet on the MPtrj test set of 157,955 structures from 14,572 materials. 'With mag' and 'No mag' indicate whether the model is trained with magmoms (\(\mu_{B}\) is the Bohr magneton).
Figure 2: **Element distribution of Materials Project Trajectory (MPtrj) Dataset.** The color on the lower-left triangle indicates the total number of atoms/ions of an element. The color on the upper right indicates the number of times the atoms/ions carry magnetic moment labels in the MPtrj dataset. The lower part of the plot shows the count and mean absolute deviation (MAD) of energy, magmoms, force, and stress.
to relax the (de-)intercalated structure and analyze its capability to distinguish the valence states of V atoms with the ionic relaxation (see Methods).
Figure 3(a) shows the distribution of predicted magmoms on all V ions in the unrelaxed (blue) and relaxed (orange) structures. Without any prior knowledge about the V-ion charge distribution other than learning from the spatial coordination of the V nuclei, CHGNet successfully differentiated the V ions into two groups of V\({}^{3+}\) and V\({}^{4+}\). Figure 3(b) shows the two-dimensional principal component analysis (PCA) of the latent-space feature vectors of all V ions, for both the unrelaxed and relaxed structures, after three interaction blocks. The PCA demonstrates two well-separated distributions, indicating that the latent-space feature vectors of the V ions are strongly correlated with the different valence states of V. Hence, imposing different magmom labels on the latent space (i.e., forcing the two orange peaks to converge to the red dashed lines in Fig. 3(a)) acts as a _charge constraint_ for the model by regularizing the latent-space features.
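The latent-space analysis of Fig. 3(b) amounts to a standard 2-component PCA on the V-ion feature vectors. Below is a minimal sketch, with random arrays standing in for the extracted 64-dimensional features (the variable names are ours, not part of CHGNet's API):

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-ins for per-V-ion latent feature vectors (64-dim) extracted
# after the third interaction block, for unrelaxed and relaxed structures.
features_unrelaxed = np.random.randn(216, 64)
features_relaxed = np.random.randn(216, 64)

# Fit one 2-component PCA on both structures jointly, as in Fig. 3(b).
pca = PCA(n_components=2)
coords = pca.fit_transform(np.vstack([features_unrelaxed, features_relaxed]))

# Two well-separated clusters in `coords` would indicate that the latent
# vectors encode the V3+/V4+ valence states.
print(coords.shape, pca.explained_variance_ratio_)
```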
Because the energy, force, and stress are calculated from the same feature vectors, the inclusion of magmoms can improve the featurization of heterovalent atoms in different local chemical environments (e.g., V\({}^{3+}\) and V\({}^{4+}\) display very distinct physics and chemistry) and therefore improve the accuracy and expressibility of CHGNet.
### Charge disproportionation in Li\({}_{x}\)MnO\({}_{2}\) phase transformation
The long-time and large-scale simulation of CHGNet enables studies of ionic rearrangements coupled with charge transfer [32; 33], which is crucial for ion mobility and the accurate representation of the interaction between ionic species. As an example, in the LiMnO\({}_{2}\) battery cathode material, transition-metal migration plays a central role in its phase transformations, which cause irreversible capacity loss [34; 35]. The mechanism of Mn migration is strongly coupled with charge transfer, with Mn\({}^{4+}\) being an immobile ion, and Mn\({}^{3+}\) and Mn\({}^{2+}\) generally considered to be more mobile [36; 37; 38]. The dynamics of the coupling of the electronic degrees of freedom with those of the ions has been challenging to study but is crucial to understand the phase transformation from orthorhombic LiMnO\({}_{2}\) (_o_-LMO, shown in Fig. 4(a)) to spinel LiMnO\({}_{2}\) (_s_-LMO), as the time scale and computational cost of such phenomena are far beyond any possible _ab-initio_ methods.
In early quasi-static _ab-initio_ studies, Reed _et al._[32] rationalized the remarkable speed at which the phase transformation proceeds at room temperature using a charge disproportionation mechanism: 2Mn\({}^{3+}_{\rm oct}\rightarrow\) Mn\({}^{2+}_{\rm tet}+\) Mn\({}^{4+}_{\rm oct}\), where the subscript indicates location in a tetrahedral or octahedral site of the face-centered cubic oxygen packing, as shown in Fig. 4(a). The hypothesis based on DFT calculations was that Mn\({}^{2+}\) had a lower energy barrier for migration between tetrahedral and octahedral sites and preferred to occupy the tetrahedral site. The ability of Mn to dynamically change its valence would therefore explain its remarkable room-temperature mobility. However, Jang _et al._[36] showed in a later magnetic characterization experiment that the electrochemically transformed spinel LiMnO\({}_{2}\) has lower-spin (high-valence) Mn ions on the tetrahedral sites, which suggested the possibility that Mn with higher valence can be stable on tetrahedral sites during the phase transformation.
To demonstrate the ability of CHGNet to fully describe such a process, we used CHGNet to run a charge-informed MD simulation at 1100 K for 1.5 ns (see Methods). The MD simulation started from a partially delithiated supercell structure with the _o_-LMO structure (Li\({}_{20}\)Mn\({}_{40}\)O\({}_{80}\)), which is characterized by peaks at 15\({}^{\circ}\), 26\({}^{\circ}\), and 40\({}^{\circ}\) in the X-ray diffraction (XRD) pattern (the bottom line in Fig. 4(b)). As the simulation proceeded, a phase transformation from orthorhombic ordering to spinel-like ordering was observed. Figure 4(b) presents the simulated XRD pattern of MD structures at different time intervals from 0 to 1.5 ns, with a clear increase in the characteristic spinel peaks (18\({}^{\circ}\), 35\({}^{\circ}\)) and a decrease in the orthorhombic peak. The simulated results agree well with the experimental in-situ XRD results [34; 36].
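The simulated XRD patterns of Fig. 4(b) can be computed with pymatgen's diffraction module. A minimal sketch follows, where the inline structure is an illustrative stand-in for the coarse-grained MD snapshots:

```python
from pymatgen.core import Lattice, Structure
from pymatgen.analysis.diffraction.xrd import XRDCalculator

# Illustrative stand-in: in the actual analysis, each MD snapshot
# (0.0-1.5 ns), coarse-grained to its nearest Wyckoff positions,
# would be loaded here instead.
structure = Structure(Lattice.cubic(4.2), ["Li", "Mn", "O", "O"],
                      [[0, 0, 0], [0.5, 0.5, 0.5], [0.5, 0, 0], [0, 0.5, 0]])

xrd = XRDCalculator()  # Cu K-alpha wavelength by default
pattern = xrd.get_pattern(structure, two_theta_range=(10, 45))

# Track characteristic peaks: orthorhombic ~15 deg vs. spinel ~18 deg.
for two_theta, intensity in zip(pattern.x, pattern.y):
    print(f"{two_theta:6.2f}  {intensity:8.2f}")
```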
Figure 4(d) presents the CHGNet-predicted energy of the LMO supercell structure as a function of simulation time, together with the intensities of the peaks at 2\(\theta\) = 15\({}^{\circ}\) and 18\({}^{\circ}\). An explicit correlation between the structural transformation and the energy landscape is observed. The predicted energy of the spinel phase is approximately 26 meV/oxygen lower than that of the starting _o_-LMO, suggesting that the phase transformation to spinel is indeed thermodynamically favored.
The advantage of CHGNet is shown in its ability to predict charge-coupled physics, as evidenced by the lower plot in Fig. 4(d). A histogram of the magmoms of all
Figure 3: **Magmom and hidden-space regularization in Na\({}_{2}\)V\({}_{2}\)(PO\({}_{4}\))\({}_{3}\)**. (a) Magmom distribution of the 216 V ions in the unrelaxed structure (blue) and CHGNet-relaxed structure (orange). (b) A two-dimensional visualization of the PCA on V-ion embedding vectors before the magmom projection layer indicates the latent space clustering is highly correlated with magmoms and charge information. The PCA reduction is calculated for both unrelaxed and relaxed structures.
the Mn ions in the structure is presented against time. In the early part of the simulation, the magmoms of Mn ions are mostly distributed between 3\(\mu_{B}\) and 4\(\mu_{B}\), which correspond to Mn\({}^{4+}\) and Mn\({}^{3+}\). At approximately 0.8 ns, there is a significant increase in the amount of Mn\({}^{2+}\), which is accompanied by a decrease in the potential energy and changes in the XRD peaks. Following this major transformation point, the Mn\({}^{3+}\) ions undergo charge disproportionation, resulting in the coexistence of Mn\({}^{2+}\), Mn\({}^{3+}\), and Mn\({}^{4+}\) in the transformed spinel-like structure.
One important observation from the long-time charge-informed MD simulation is the correlation between ionic rearrangements and the charge-state evolution. Specifically, we noticed that the time scale of charge disproportionation (\(\sim\) ns for the emergence of Mn\({}^{2+}\)) is far longer than the time scale of ion hops (\(\sim\) ps for the emergence of Mn\({}_{\rm tet}\)), indicating that the migration of Mn to the tetrahedral coordination is less likely related to the emergence of Mn\({}^{2+}\). Instead, our result indicates that the emergence of Mn\({}^{2+}_{\rm tet}\) is correlated to the formation of the long-range spinel-like ordering. Figure 4(c) shows the average magmoms of Mn\({}_{\rm tet}\) and Mn\({}_{\rm oct}\) as a function of time. The result reveals that Mn\({}^{2+}_{\rm tet}\) only forms over a long time period, which cannot be observed using any conventional
Figure 4: **Li\({}_{0.5}\)MnO\({}_{2}\) phase transformation and charge disproportionation** (a) Orthorhombic LiMnO\({}_{2}\) (_o_-LMO) unit cell plotted with the tetrahedral site and the octahedral site. (b) Simulated XRD pattern of CHGNet MD structures as the system transforms from the _o_-LMO phase to the _s_-LMO. (c) Average magmoms of tetrahedral and octahedral Mn ions _vs._ time. (d) Top: total potential energy and the relative intensity of _o_-LMO and _s_-LMO characteristic peaks _vs._ time. Bottom: the histogram of magmoms on all Mn ions _vs._ time, where a brighter color indicates that more Mn ions are distributed at that magmom value. (e) Predicted magmoms of tetrahedral Mn ions using r\({}^{2}\)SCAN-DFT (black) and CHGNet (blue), where the structures are drawn from the MD simulation at 0.4 ns (left) and 1.5 ns (right).
simulation techniques.
To further validate this hypothesis and the accuracy of the CHGNet prediction, we used r\({}^{2}\)SCAN-DFT static calculations to get the magmoms of the structures at 0.4 and 1.5 ns, with the results shown in Fig. 4(e). The r\({}^{2}\)SCAN-DFT magmoms (black) infer the same Mn\({}_{\text{tet}}\) valence states as the CHGNet prediction (blue). The systematically lower magmoms from r\({}^{2}\)SCAN are expected, since CHGNet is trained with GGA+U, which over-localizes the electron density in oxides [39]. The r\({}^{2}\)SCAN-DFT shows a 34 meV/oxygen driving force between the structures at 0.4 and 1.5 ns.
### Electronic entropy effect in the phase diagram of Li\({}_{x}\)FePO\({}_{4}\)
The configurational electronic entropy has a significant effect on the temperature-dependent phase stability of mixed-valence oxides, and its equilibrium modeling therefore requires an explicit indication of the atomic charge; however, no current MLIPs can provide such information. We demonstrate that, using CHGNet, one can include the electronic entropy in the thermodynamics of Li\({}_{x}\)FePO\({}_{4}\) and construct its temperature-dependent phase diagram (PD).
Previous research has shown that the formation of a solid solution in Li\({}_{x}\)FePO\({}_{4}\) is mainly driven by electronic entropy rather than by Li\({}^{+}\)/vacancy configurational entropy [40]. We applied CHGNet as an energy calculator to generate two cluster expansions (CEs), which is the typical approach to studying configurational entropy [41]. One of these is charge-decorated (considering Li\({}^{+}\)/vacancy and Fe\({}^{2+}\)/Fe\({}^{3+}\)) and another is non-charge-decorated (only considering Li\({}^{+}\)/vacancy without consideration of the Fe valence). Semi-grand canonical Monte Carlo was used to sample these cluster expansions and construct Li\({}_{x}\)FePO\({}_{4}\) PDs (see Methods). The calculated PD with charge decoration in Fig. 5(a) features a miscibility gap between FePO\({}_{4}\) and LiFePO\({}_{4}\), with a eutectoid-like transition to the solid-solution phase at intermediate Li concentration, qualitatively matching the experiment result [42; 43]. In contrast, the calculated PD without charge decoration in Fig. 5(b) features only a single miscibility gap without any eutectoid transitions, in disagreement with experiments. This comparison highlights the importance of explicit inclusion of the electronic degrees of freedom, as failure to do so can result in incorrect physics. These experiments show how practitioners may benefit from CHGNet with atomic charge inference for equilibrium modeling of configurationally and electronically disordered systems.
### Activated Li diffusion network in Li\({}_{3}\)La\({}_{3}\)Te\({}_{2}\)O\({}_{12}\)
In this section, we showcase the precision of CHGNet for general-purpose MD. Lithium-ion diffusivity in fast Li-ion conductors is known to show a drastic non-linear response to compositional change. For example, stuffing a small amount of excess lithium into stoichiometric compositions can result in orders-of-magnitude improvement
Figure 5: **Li\({}_{x}\)FePO\({}_{4}\) phase diagram from CHGNet**. The phase diagrams in (a) and (b) are calculated with and without electronic entropy on Fe\({}^{2+}\) and Fe\({}^{3+}\). The colored dots represent the stable phases obtained in semi-grand canonical MC. The dashed lines indicate the two-phase equilibria between solid solution phases.
Figure 6: **Li diffusivity in garnet Li\({}_{3}\)La\({}_{3}\)Te\({}_{2}\)O\({}_{12}\)**. The CHGNet simulation accurately reproduces the dramatic increase in Li-ion diffusivity when a small amount of extra Li is stuffed into the garnet structure, qualitatively matching the activated diffusion network theory and agreeing well with the DFT-computed activation energy.
of the ionic conductivity [44]. Xiao _et al._[45] reported that the activation energy of Li diffusion in stoichiometric garnet Li\({}_{3}\)La\({}_{3}\)Te\({}_{2}\)O\({}_{12}\) decreases from more than 1 eV to \(\sim\)160 meV in a slightly stuffed Li\({}_{3+\delta}\) garnet (\(\delta=1/48\)), owing to the activated Li diffusion network of face-sharing tetrahedral and octahedral sites.
We performed a zero-shot test to assess the ability of CHGNet to capture the effect of such a slight compositional change on the diffusivity and its activation energy. Figure 6 shows the Arrhenius plot from CHGNet-based MD simulations and compares it to AIMD results. Our results indicate that not only is the activated diffusion network effect precisely captured, but the activation energies from CHGNet are also in excellent agreement with the DFT results [45]. This effort demonstrates the capability of CHGNet to precisely capture the strong interactions between Li ions in activated local environments and to simulate highly non-linear diffusion behavior. Moreover, CHGNet can dramatically decrease the error on simulated diffusivity and enable studies of systems with poor diffusivity, such as the unstuffed Li\({}_{3}\) garnet, by extending to nanosecond-scale simulations [46].
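The Arrhenius analysis behind Fig. 6 reduces to a linear fit of \(\ln D\) against \(1/T\). A minimal sketch, with hypothetical diffusivities standing in for the simulated values:

```python
import numpy as np

kB = 8.617333262e-5  # Boltzmann constant in eV/K

# Hypothetical (T, D) pairs from MD simulations; units: K, cm^2/s.
T = np.array([600.0, 700.0, 800.0, 900.0])
D = np.array([1e-9, 8e-9, 4e-8, 1.5e-7])

# Arrhenius: D = D0 * exp(-Ea / (kB*T))  =>  ln D = ln D0 - Ea/(kB*T)
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea = -slope * kB  # activation energy in eV
print(f"Ea = {Ea:.3f} eV, D0 = {np.exp(intercept):.3e} cm^2/s")
```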
## III Discussion
Large-scale computational modeling has proven essential in providing atomic-level information in materials science, medical science, and molecular biology. Many technologically relevant applications contain heterovalent species, for which a comprehensive understanding of the atomic charge involved in the dynamics of processes is of great interest. The importance of assigning a valence to ions derives from the fundamentally different electronic and bonding behavior ions can exhibit when their electron count changes. _Ab-initio_ calculations based on DFT are useful for these problems, but the \(\sim\mathcal{O}(N^{3})\) scaling intrinsically prohibits its application to large time- and length-scales. Recent development of MLIPs provides new opportunities to increase computational efficiency while maintaining near DFT accuracy. The present work presents an MLIP that combines the need to include the electronic degrees of freedom with computational efficiency.
In this work, we developed CHGNet and demonstrated the effectiveness of incorporating magnetic moments as a proxy for inferring the atomic charge in atomistic simulations, which results in the integration of electronic information and the imposition of additional charge constraints as a regularization of the MLIP. We highlight the capability of CHGNet in distinguishing Fe\({}^{2+}\)/Fe\({}^{3+}\) in the study of Li\({}_{x}\)FePO\({}_{4}\), which is essential for the inclusion of electronic entropy and finite temperature phase stability. In the study of LiMnO\({}_{2}\), we demonstrate CHGNet's ability to gain new insights into the relation between charge disproportionation and phase transformation in a heterovalent transition-metal oxide system from long-time charge-informed MD.
CHGNet builds on recent advances in graph-based MLIPs [13; 17], but is pretrained with electronic degrees of freedom built in, which provides an ideal solution for high-throughput screening and atomistic modeling of a variety of technologically relevant oxides, including high-entropy materials [47; 48]. As CHGNet is already generalized to broad chemistry during pretraining, it can also serve as a data-efficient model for high-precision simulations when augmented with fine-tuning to specific chemistries.
Despite these advances, further improvements can be achieved through several efforts. First, the use of magnetic moments for valence states inference does not strictly ensure global charge neutrality. The formal valence assignment depends on how the atomic charges are partitioned [19]. Second, although magnetic moments are good heuristics for the atomic charge from spin-polarized calculations in ionic systems, it is recognized that the atomic charge inference for non-magnetic ions may be ambiguous and thus requires extra domain knowledge. As a result, the atom-centered magnetic moments cannot accurately reflect their atomic charges. We posit that it is possible to enhance the model by incorporating more advanced and general approaches into charge representations, such as an electron localization function [49], electric polarization [50], and atomic orbital-based partitioning (e.g. Wannier function [51]). These approaches could be used for atom feature engineering in latent space.
In conclusion, CHGNet enables charge-informed atomistic simulations amenable to the study of heterovalent systems using large-scale computational simulations, expanding opportunities to study charge-transfer-coupled phenomena in computational chemistry, physics, biology, and materials science.
## IV Acknowledgments
This work was funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC0205CH11231 (Materials Project program KC23MP). The work was also supported by the computational resources provided by the Extreme Science and Engineering Discovery Environment (XSEDE), supported by National Science Foundation grant number ACI1053575; the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory; and the Lawrence Computational Cluster resource provided by the IT Division at the Lawrence Berkeley National Laboratory. The authors would also like to thank Jason Munro for helpful discussions.
## V Methods
### Data parsing
The Materials Project Trajectory Dataset (MPtrj) was parsed from the September 2022 Materials Project database version. We collected all the GGA and GGA+U task trajectories under each material-id and followed the criteria below:
1. We removed deprecated tasks and only kept tasks with the same calculation settings as the primary task, from which the material could be searched on the Materials Project website. To verify if the calculation settings were equal, we confirmed the following: (1) The +U setting must be the same as the primary task. (2) The energy of the final frame cannot differ by more than 20 meV/atom from the primary task.
2. Structures without energy and forces or electronic step convergence were removed.
3. Structures with energy higher than 1 eV/atom or lower than 10 meV/atom relative to the relaxed structure from Materials Project's ThermoDoc were filtered out to eliminate large energy differences caused by variations in VASP settings, etc.
4. Duplicate structures were removed to maintain a healthy data distribution. This removal was achieved using a pymatgen StructureMatcher together with an energy matcher to tell the difference between structures, as sketched below. The screening criteria of the structure and energy matchers became more stringent as more structures under the same material-id were added to the MPtrj dataset.
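A minimal sketch of the duplicate-removal step in item 4, assuming `candidates` is a list of (pymatgen `Structure`, energy-per-atom) pairs collected under one material-id; the energy tolerance is illustrative, not the value used to build MPtrj:

```python
from pymatgen.analysis.structure_matcher import StructureMatcher

def deduplicate(candidates, e_tol=3e-3):
    """Keep a structure only if it differs from every kept structure
    either geometrically or by more than e_tol eV/atom in energy."""
    matcher = StructureMatcher()  # default site/lattice tolerances
    kept = []
    for struct, energy in candidates:
        is_dup = any(
            abs(energy - e0) < e_tol and matcher.fit(struct, s0)
            for s0, e0 in kept
        )
        if not is_dup:
            kept.append((struct, energy))
    return kept
```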
### Model design
In constructing the crystal graph, the default \(r_{\text{cut}}\) is set to 5 Å, which has been shown to be adequate for capturing long-range interactions [17]. The bond graph is constructed with a cutoff of 3 Å for computational efficiency. The bond distances \(r_{ij}\) were expanded to \(\tilde{e}_{ij}\) by a trainable smooth radial Bessel function (SmoothRBF), as proposed in Gasteiger _et al._[13]. The SmoothRBF forces the radial Bessel function and its derivative to approach zero at the graph cutoff radius, thus guaranteeing a smooth potential energy surface. The angles \(\theta_{ijk}\) were expanded by Fourier basis functions to create \(\tilde{a}_{ijk}\) with trainable frequency. The atomic numbers \(Z_{i}\), \(\tilde{e}_{ij}\), and \(\tilde{a}_{ijk}\) were then embedded into node \(v_{i}^{0}\), edge \(e_{ij}^{0}\), and angle features \(a_{ijk}^{0}\) (all with 64 feature dimensions by default):
\[\begin{split} v_{i}^{0}&=\mathbf{W}_{v}Z_{i}+\mathbf{b}_{v},\\ e_{ij,n}^{0}&=\mathbf{W}_{e}\tilde{e}_{ij},\quad\tilde{e}_{ij}=\sqrt{\frac{2}{5}}\,\frac{\sin\left(n\pi r_{ij}/5\right)}{r_{ij}}\odot u(r_{ij}),\\ a_{ijk,\ell}^{0}&=\begin{cases}\frac{1}{\sqrt{2\pi}}&\text{if }\ell=0\\ \frac{1}{\sqrt{\pi}}\cos\left(\ell\theta_{ijk}\right)&\text{if }\ell\in[1,N]\\ \frac{1}{\sqrt{\pi}}\sin\left((\ell-N)\theta_{ijk}\right)&\text{if }\ell\in[N+1,2N]\end{cases},\end{split} \tag{1}\]
where \(\{\mathbf{W},\mathbf{b}\}\) are the trainable weights and bias. The angle is computed as \(\theta_{ijk}=\arccos\left(\frac{e_{ij}\cdot e_{jk}}{|e_{ij}||e_{jk}|}\right)\). The \(u(r_{ij})\) is a polynomial envelope function that smoothly forces the value and the first and second derivatives of \(\tilde{e}_{ij}\) toward 0 at the graph cutoff radius [13]. The subscripts \(n,\ell\) are the expansion orders, and we set the maximum order for both \(n\) and \(\ell\) to \(2N+1=9\). The superscript denotes the index of the interaction block, and \(\odot\) represents element-wise multiplication. The edge vectors \(e_{ij}^{t}\) are bi-directional, which is essential for \(e_{ij}^{t}\) and \(e_{ji}^{t}\) to be represented as a single node in the bond graph [23].
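A minimal numpy sketch of the two expansions in Eq. (1), with fixed frequencies and a common quintic-polynomial envelope standing in for the trainable versions used in CHGNet:

```python
import numpy as np

def radial_bessel(r, r_cut=5.0, n_max=9):
    """sqrt(2/r_cut) * sin(n*pi*r/r_cut) / r for n = 1..n_max, Eq. (1)."""
    n = np.arange(1, n_max + 1)
    rbf = np.sqrt(2.0 / r_cut) * np.sin(n * np.pi * r / r_cut) / r
    # One common envelope choice: value and first two derivatives -> 0 at r_cut.
    x = r / r_cut
    u = 1 - 10 * x**3 + 15 * x**4 - 6 * x**5
    return rbf * u

def fourier_angular(theta, N=4):
    """Fourier basis of Eq. (1): constant, cos(l*theta), sin(l*theta)."""
    const = np.full(1, 1.0 / np.sqrt(2 * np.pi))
    l = np.arange(1, N + 1)
    return np.concatenate([const,
                           np.cos(l * theta) / np.sqrt(np.pi),
                           np.sin(l * theta) / np.sqrt(np.pi)])

print(radial_bessel(2.3).shape, fourier_angular(1.1).shape)  # (9,), (9,)
```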
For the atom graph convolution, a weighted message passing layer is applied to the concatenated feature vectors \((v_{i}^{t}||v_{j}^{t}||e_{ij}^{t})\) from two atoms and one bond. For the bond graph convolution, the weighted message passing layer is applied to the concatenated feature vectors \((e_{ij}^{t}||e_{jk}^{t}||a_{ijk}^{t}||v_{j}^{t+1})\) from two bonds, the angle between them, and the atom where the angle is located. For the angle update function, we used the same construction for the bond graph message vector but without the weighted aggregation step. The mathematical form of the atom, bond, and angle updates are formulated below:
\[\begin{split} v_{i}^{t+1}&=v_{i}^{t}+L_{v}^{t}\left[\sum_{j}\tilde{e}_{ij}\cdot\phi_{v}^{t}\left(v_{i}^{t}||v_{j}^{t}||e_{ij}^{t}\right)\right],\\ e_{jk}^{t+1}&=e_{jk}^{t}+L_{e}^{t}\left[\sum_{i}\tilde{e}_{ij}\cdot\tilde{e}_{jk}\cdot\phi_{e}^{t}\left(e_{ij}^{t}||e_{jk}^{t}||a_{ijk}^{t}||v_{j}^{t+1}\right)\right],\\ a_{ijk}^{t+1}&=a_{ijk}^{t}+\phi_{a}^{t}\left(e_{ij}^{t+1}||e_{jk}^{t+1}||a_{ijk}^{t}||v_{j}^{t+1}\right).\end{split} \tag{2}\]
The \(L\) is a linear layer and \(\phi\) is the gated multilayer perceptron (gatedMLP) [22]:
\[\begin{split} L(x)&=\mathbf{W}x+\mathbf{b},\\ \phi(x)&=\left(\sigma\circ L_{\text{gate}}(x)\right) \odot\left(g\circ L_{\text{core}}(x)\right),\end{split} \tag{3}\]
where \(\sigma\) and \(g\) are the Sigmoid and SiLU activation functions, respectively. The magnetic moments are predicted by a linear projection of the atom features \(v_{i}^{3}\) after three interaction blocks by
\[m_{i}=\;L_{m}(v_{i}^{3}). \tag{4}\]
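A minimal PyTorch sketch of the gated MLP of Eq. (3) and the magmom head of Eq. (4); the dimensions and names are illustrative, not CHGNet's actual implementation:

```python
import torch
import torch.nn as nn

class GatedMLP(nn.Module):
    """phi(x) = sigmoid(L_gate(x)) * SiLU(L_core(x)), as in Eq. (3)."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.gate = nn.Linear(dim_in, dim_out)
        self.core = nn.Linear(dim_in, dim_out)

    def forward(self, x):
        return torch.sigmoid(self.gate(x)) * nn.functional.silu(self.core(x))

# Magmom head of Eq. (4): a plain linear projection of atom features v_i^3.
magmom_head = nn.Linear(64, 1)

v = torch.randn(10, 64)          # 10 atoms, 64-dim features (illustrative)
phi = GatedMLP(3 * 64, 64)       # e.g., for concatenated (v_i || v_j || e_ij)
m = magmom_head(v).squeeze(-1)   # predicted magnetic moments
print(phi(torch.randn(10, 3 * 64)).shape, m.shape)
```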
Instead of using a full interaction block, the last convolution layer only includes the atom graph convolution:
\[v_{i}^{4}=v_{i}^{3}+\sum_{j}\tilde{e}_{ij}\cdot\phi_{v}^{3}\left(v_{i}^{3}||v_ {j}^{3}||e_{ij}^{3}\right). \tag{5}\]
The energy is calculated by a non-linear projection of the site-wise averaged feature vector over all atoms \(\{v_{i}^{4}\}\). The forces and stress are calculated via auto-differentiation of the energy with respect to the atomic Cartesian coordinates and strain:
\[\begin{split} E_{\text{tot}}&=\sum_{i}L_{3}\circ g\circ L_{2}\circ g\circ L_{1}(v_{i}^{4}),\\ \vec{f}_{i}&=-\frac{\partial E_{\text{tot}}}{\partial\vec{x}_{i}},\\ \mathbf{\sigma}&=\frac{1}{V}\frac{\partial E_{\text{tot}}}{\partial\mathbf{\varepsilon}}.\end{split} \tag{6}\]
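A minimal sketch of the force evaluation in Eq. (6) via automatic differentiation; `energy_fn` is a toy stand-in for the CHGNet energy head, and the stress term is analogous (derivative with respect to strain, divided by the cell volume):

```python
import torch

def energy_fn(x):
    # Toy pairwise energy standing in for the CHGNet energy head.
    d = torch.cdist(x, x) + torch.eye(len(x))  # avoid division by zero
    return (1.0 / d).triu(1).sum()

x = torch.randn(8, 3, requires_grad=True)  # 8 atoms, Cartesian coordinates
E = energy_fn(x)

# Forces are the negative gradient of the energy w.r.t. positions.
forces = -torch.autograd.grad(E, x)[0]
print(E.item(), forces.shape)  # scalar energy, (8, 3) forces
```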
Overall, with four atom convolution layers, the pre-trained CHGNet can capture long-range interactions up to 20 Å at a small computational cost.
### Model training
The model is trained to minimize the summation of Huber loss (with \(\delta=0.1\)) of energy, force, stress, and magmoms:
\[\mathcal{L}(x,\hat{x})=\begin{cases}0.5\cdot(x-\hat{x})^{2}&\text{if }|x-\hat{x}|<\delta\\ \delta\cdot(|x-\hat{x}|-0.5\delta)&\text{otherwise}\end{cases}. \tag{7}\]
The loss function is a weighted sum of the contributions from energy, forces, stress, and magmoms:
\[\mathcal{L}=\mathcal{L}(E,\hat{E})+w_{f}\mathcal{L}(\mathbf{f},\hat{\mathbf{f}})+w_{ \sigma}\mathcal{L}(\mathbf{\sigma},\hat{\mathbf{\sigma}})+w_{m}\mathcal{L}(m,\hat{m}), \tag{8}\]
where the weights for the forces, stress, and magmoms are set to \(w_{f}=1\), \(w_{\sigma}=0.1\), and \(w_{m}=0.1\), respectively. The DFT energies are normalized with elemental reference energies before fitting to CHGNet to decrease variances [17]. The batch size is set to 40 and the Adam optimizer is used with \(10^{-3}\) as the initial learning rate. The CosineAnnealingLR scheduler is used to adjust the learning rate 10 times per epoch, and the final learning rate decays to \(10^{-5}\) after 20 epochs.
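A minimal PyTorch sketch of the weighted multi-objective Huber loss of Eqs. (7)-(8); the tensor shapes are illustrative and the helper is not CHGNet's actual training code:

```python
import torch
import torch.nn.functional as F

def chgnet_style_loss(pred, target, w_f=1.0, w_s=0.1, w_m=0.1, delta=0.1):
    """Weighted sum of Huber losses over energy, forces, stress, magmoms."""
    huber = lambda a, b: F.huber_loss(a, b, delta=delta)
    return (huber(pred["e"], target["e"])
            + w_f * huber(pred["f"], target["f"])
            + w_s * huber(pred["s"], target["s"])
            + w_m * huber(pred["m"], target["m"]))

# Illustrative batch of 4 structures with 8 atoms each.
shapes = {"e": (4,), "f": (4, 8, 3), "s": (4, 3, 3), "m": (4, 8)}
pred = {k: torch.randn(v) for k, v in shapes.items()}
target = {k: torch.randn_like(v) for k, v in pred.items()}
print(chgnet_style_loss(pred, target))
```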
### Software interface
CHGNet was implemented using pytorch 1.12.0 [52], with crystal structure processing from pymatgen [53]. Molecular dynamics and structure relaxation were simulated using the interface to Atomic Simulation Environment (ASE) [54]. The cluster expansions were performed using the smol package [55].
### Structure relaxation and molecular dynamics
All the structure relaxations were optimized with the FIRE optimizer over the potential energy surface provided by CHGNet [56], where the atom positions, cell shape, and cell volume were simultaneously optimized until the interatomic forces converged to 0.1 eV/Å.
For the MD simulations of the _o_-LMO to _s_-LMO phase transformation, the initial structure Li\({}_{20}\)Mn\({}_{40}\)O\({}_{80}\) was generated by randomly removing Li from a Li\({}_{40}\)Mn\({}_{40}\)O\({}_{80}\) supercell of the orthorhombic structure and relaxing with DFT. The MD simulation was run under the NVT ensemble, with a time step of 2 fs at T = 1100 K for 2 ns. For the simulated XRD in Fig. 4(b), the structures at 0.0, 0.3, 0.6, 0.9, 1.2, and 1.5 ns were coarse-grained to their nearest Wyckoff positions to remove noisy peaks. In Fig. 4(c), Mn\({}_{\text{oct}}\) and Mn\({}_{\text{tet}}\) were determined by counting the number of bonding oxygen ions within 2.52 Å. If six bonding oxygen ions were found, the Mn ion was categorized as Mn\({}_{\text{oct}}\); if fewer than six bonding oxygen ions were found, the Mn ion was coarse-grained into Mn\({}_{\text{tet}}\) as a representation of lower-coordinated environments. In Fig. 4(e), Mn\({}^{2+}\) and Mn\({}^{3+}\) are classified by a CHGNet magmom threshold of 4.2 \(\mu_{B}\)[30].
For the MD simulations of garnet Li\({}_{3}\)La\({}_{3}\)Te\({}_{2}\)O\({}_{12}\) systems, a time step of 2 fs was used. We ramped up the temperature to the targeted temperature in the NVT ensemble with at least 1 ps. Then, after equilibrating the system for 50 ps, the lithium self-diffusion coefficients were obtained by calculating the mean squared displacements of trajectories for at least 2.3 ns. The uncertainty analysis of the diffusion coefficient values was conducted following the empirical error estimation scheme proposed by He _et al._[57]. In Li\({}_{3+\delta}\), the excess lithium was stuffed to an intermediate octahedral (48\(g\)) site to face-share with the fully occupied 24\(d\) tetrahedral sites.
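A minimal sketch of extracting a self-diffusion coefficient from mean squared displacements via the Einstein relation MSD = 6Dt; the trajectory below is a toy random walk standing in for unwrapped Li positions from MD:

```python
import numpy as np

def diffusion_coefficient(traj, dt_fs=2.0):
    """traj: (n_steps, n_li, 3) unwrapped Li positions in Angstrom."""
    disp = traj - traj[0]                      # displacement from t = 0
    msd = (disp**2).sum(axis=-1).mean(axis=1)  # MSD averaged over Li ions
    t = np.arange(len(traj)) * dt_fs * 1e-15   # time in seconds
    # Einstein relation in 3D: MSD = 6 D t; fit the slope, skip early times.
    half = len(t) // 2
    slope = np.polyfit(t[half:], msd[half:] * 1e-16, 1)[0]  # cm^2/s
    return slope / 6.0

traj = np.cumsum(np.random.randn(5000, 24, 3) * 0.01, axis=0)  # toy walk
print(diffusion_coefficient(traj))
```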
### Phase diagram calculations
The cluster expansions (CEs) of Li\({}_{x}\)FePO\({}_{4}\) were performed with pair interactions up to 11 Å and triplet interactions up to 7 Å based on the relaxed unit cell of LiFePO\({}_{4}\). For better energy accuracy, we first fine-tuned CHGNet on the Materials Project structures in the Li-Fe-P-O chemical space with an MSE loss function for 40 epochs, which resulted in a 12 meV/atom training energy error and a 19 meV/atom validation energy error. We applied CHGNet to relax 456 different structures in Li\({}_{x}\)FePO\({}_{4}\) (\(0\leq x\leq 1\)) and to predict their energies and magmoms, where the 456 structures were generated via an automatic workflow including CE fitting, canonical CE Monte Carlo searching for the ground state at varied Li\({}^{+}\) composition, and CHGNet relaxation. The charge-decorated CE is defined on coupled sublattices over Li\({}^{+}\)/vacancy and Fe\({}^{2+}\)/Fe\({}^{3+}\) sites, where Fe\({}^{2+}\) and Fe\({}^{3+}\) are treated as different species. In addition, the non-charge-decorated CE is defined only on Li\({}^{+}\)/vacancy sites. In the charge-decorated CE, Fe\({}^{2+}\)/Fe\({}^{3+}\) are classified by magmoms in \([3\mu_{B},4\mu_{B}]\) and \([4\mu_{B},5\mu_{B}]\), respectively [30].
The semigrand canonical Monte Carlo simulations were implemented using the Metropolis-Hastings algorithm, where 20% of the MC steps were implemented canonically (swapping Li\({}^{+}\)/vacancy or Fe\({}^{2+}\)/Fe\({}^{3+}\)) and 80% of the MC steps were implemented grand-canonically using the table-exchange method [58, 59]. The simulations were implemented on an \(8\times 6\times 4\) supercell of the unit cell of LiFePO\({}_{4}\). In each MC simulation, we scanned the chemical potential in the \([-5.6,-4.8]\) range with a step of 0.01 and sampled temperatures from 0 to 1000 K. The boundary of the solid-solution stable phases was determined where the Li concentration changes by \(<0.05\) per \(\Delta\mu=0.01\) eV step.
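A minimal sketch of a semigrand canonical Metropolis step; the on-site energy function and chemical potential below are toy stand-ins, whereas the actual workflow evaluates the CHGNet-fitted cluster expansion (via smol):

```python
import numpy as np

rng = np.random.default_rng(0)
kB = 8.617333262e-5  # eV/K

def sgc_step(occ, energy_fn, mu, T):
    """One semigrand-canonical flip: toggle Li occupancy at a random site,
    accept with probability min(1, exp(-(dE - mu*dN)/(kB*T)))."""
    site = rng.integers(len(occ))
    new = occ.copy()
    new[site] = 1 - new[site]               # Li <-> vacancy
    dN = new[site] - occ[site]
    dE = energy_fn(new) - energy_fn(occ)
    accept = np.exp(min(0.0, -(dE - mu * dN) / (kB * T)))
    return new if rng.random() < accept else occ

# Toy on-site energy (illustrative); mu chosen to give a mixed occupancy.
energy_fn = lambda occ: -0.1 * occ.sum()
occ = rng.integers(0, 2, size=64)
for _ in range(1000):
    occ = sgc_step(occ, energy_fn, mu=-0.12, T=300.0)
print(occ.mean())  # sampled Li concentration
```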
### DFT calculations
DFT calculations were performed with the _Vienna ab initio simulation package_ (VASP) using the projector-augmented wave method [60, 61], a plane-wave basis set with an energy cutoff of 680 eV, and a reciprocal space discretization of 25 \(k\)-points per Å\({}^{-1}\). All the calculations were converged to \(10^{-6}\) eV in total energy for electronic loops and 0.02 eV/Å in interatomic forces for ionic loops. We relied on the regularized strongly constrained and appropriately normed meta-GGA exchange-correlation functional (r\({}^{2}\)SCAN) [62, 63], which has improved performance on volume, coordination, and formation-energy prediction in solid-state systems. r\({}^{2}\)SCAN also provides better computational efficiency than the earlier version of SCAN [64].
### Code availability
The source code of CHGNet is available at [https://github.com/CederGroupHub/chgnet](https://github.com/CederGroupHub/chgnet).
### Data availability
The dataset will be released after review.
|
2309.06221 | Use neural networks to recognize students' handwritten letters and
incorrect symbols | Correcting students' multiple-choice answers is a repetitive and mechanical
task that can be considered an image multi-classification task. Assuming
possible options are 'abcd' and the correct option is one of the four, some
students may write incorrect symbols or options that do not exist. In this
paper, five classifications were set up - four for possible correct options and
one for other incorrect writing. This approach takes into account the
possibility of non-standard writing options. | JiaJun Zhu, Zichuan Yang, Binjie Hong, Jiacheng Song, Jiwei Wang, Tianhao Chen, Shuilan Yang, Zixun Lan, Fei Ma | 2023-09-12T13:41:59Z | http://arxiv.org/abs/2309.06221v1 | # Use neural networks to recognize students' handwritten letters and incorrect symbols
###### Abstract
Correcting students' multiple-choice answers is a repetitive and mechanical task that can be considered an image multi-classification task. Assuming possible options are 'abcd' and the correct option is one of the four, some students may write incorrect symbols or options that do not exist. In this paper, five classifications were set up - four for possible correct options and one for other incorrect writing. This approach takes into account the possibility of non-standard writing options.
## I Introduction
Automated grading of multiple-choice questions is a repetitive and menial task for which deep-learning-based computer vision has proven effective. When dealing with deterministic multiple-choice questions, deep neural networks[1] can be used for image recognition, treating the grading task as a multiclass classification task[2]. We propose using deep learning to automate the grading of multiple-choice questions, treating it as a five-class image recognition task. A significant pitfall is that students may write random symbols, leading to misjudgment, so an unknown category is added to the model. Two approaches are discussed for model selection: the self-attention mechanism with MLPs[3], or ResNet networks[4]. The dataset is designed by selecting symbols without 'abcd' features, using the 26-letter data from the MNIST dataset[5]. Details of the dataset division will be presented later in the article.
## II Related Works
Some algorithms, such as neural networks[39], decision trees, k-Nearest Neighbors, Naive Bayes, and Support Vector Machines, can naturally extend the binary classification technique to solve the multiclass classification problem. In other words, these algorithms can use the same approach for both binary and multiclass classification problems with some modifications.[13] This background work on image classification can be roughly divided into traditional machine learning algorithms, which we elaborate on in Section II-A, and neural-network-based deep learning algorithms[35], which we explain in Section II-B.
### _traditional machine learning algorithms_
#### II-A1 Decision Trees
Decision trees are a highly effective classification technique, with two well-known algorithms being Classification and Regression Trees (CART)[14] and ID3/C4.5[15]. These algorithms use available feature values to determine the best way to split the training data, producing a strong generalization. The split at each node is based on the feature that provides the most information gain. Each leaf node represents a class label, with new examples classified by following a path from the root node to a leaf node and testing certain features at each node. The leaf node reached determines the class label for the example. This algorithm is capable of handling binary or multiclass classification problems, and leaf nodes can refer to any of the K classes in question.
#### II-A2 k-Nearest Neighbors
kNN[16] is one of the earliest non-parametric classification algorithms. It classifies an unknown example by measuring its distance (using a distance measure like Euclidean) to every other training example. The algorithm identifies the k smallest distances and considers the output class label as the most represented class among these k classes. The value of k is usually determined using cross-validation or a validation set.
#### II-A3 Naive Bayes
Naive Bayes [17] is a successful classification algorithm that operates on the principle of Maximum A Posteriori (MAP). It works by assigning a class label c to an unknown example with \(\text{features}=\left(x^{1},\ldots,x^{n}\right)\) based on the maximum a posterior probability given the observed data. This probability is determined by the prior probabilities \(\mathrm{P}\left(C_{1}\right),\ldots,\mathrm{P}\left(C_{k}\right)\) and the likelihood of the features given the class label. In summary, Naive Bayes chooses the class label with the highest probability given the observed data.
#### II-A4 Support Vector Machines
Support Vector Machines (SVMs) are a classification algorithm known for their robustness and success [18, 19]. They work by maximizing the margin, which is the minimum distance between the separating hyperplane and the nearest example. SVMs typically only support binary classification, but researchers have proposed extensions [20, 21] to handle multiclass classification. These extensions add additional parameters and constraints to the
optimization problem to separate the different classes. However, some formulations [22] can result in a large optimization problem that may be impractical for a large number of classes. Other formulations, such as the one presented in [23], have a more efficient implementation.
### _deep learning algorithms based on neural networks_
#### II-B1 convolutional neural networks
Convolutional Neural Networks (CNNs) are a popular type of deep learning architecture that mimics the visual perception mechanism found in living creatures. Hubel and Wiesel first discovered in 1959 that receptive fields in animal visual cortex cells are responsible for detecting light. Convolutional neural networks are highly effective for image recognition and classification tasks and four well-known CNNs from the past include AlexNet[24], VGGNet[25], GoogleNet[26], and ResNet[27]. Over time, CNN architectures have become increasingly deeper, with ResNet - the ILSVRC 2015 champion - being about 20 times deeper than AlexNet and 8 times deeper than VGGNet. By increasing the depth of the network, it can better approximate the target function with increased nonlinearity and obtain improved feature representations.
#### II-B2 vision transformer
Vision Transformer (ViT)[28] is a model proposed by the Google team in 2020, which applies the Transformer[29] to image classification. Although not the first paper to apply the Transformer to visual tasks, ViT has become a milestone work for the application of Transformers in the field of computer vision due to its 'simple' model, good performance, and strong scalability (the larger the model[34], the better the performance), and it has sparked subsequent related research. The most prominent feature of ViT is that when there is enough data for pre-training, its performance exceeds that of CNNs, overcoming the Transformer's lack of inductive bias and achieving good transfer effects[36] in downstream tasks. However, when the training dataset is not large enough, the performance of ViT is usually worse than that of ResNets[27] of the same size, because the Transformer lacks the inductive bias of CNNs, namely a kind of prior knowledge or assumption made in advance. CNNs have two kinds of inductive bias: one is locality (a two-dimensional neighborhood structure), i.e., adjacent regions of an image have similar features; the other is translation equivariance, f(g(x))=g(f(x)), where g represents the convolution operation and f represents the translation operation. With these two kinds of inductive bias, a CNN has substantial prior information and can learn a relatively good model with relatively little data.
## III Methodology
### _Models_
As stated in the introduction, we consider the task of identifying and grading students' multiple-choice answers as an image multi-classification task. To avoid misidentifying symbols outside of the given options as correct answers, we have added an additional category to our model. Specifically, assuming that there are four correct options to be graded in multiple-choice questions, this task is treated as a five-class classification task, with the extra category identifying non-conforming writing in student responses. For the purpose of conducting more efficient comparative experiments, we have simplified the actual problem: in this paper, all multiple-choice questions in the applications considered are treated as single-choice questions with four options. As a result, our task reduces to a five-class image recognition task. We ultimately chose ResNet as our primary model. It is commonly believed that the feature-extraction ability of a network is enhanced, to a certain extent, as the number of layers increases, and ResNet's shortcut-connection residual structure can effectively help us make the network deeper. The fundamental principle of residual connections is to add the input x to the output F(x) of the residual convolution, resulting in a final output that contains information from both the input and output ends. This can only be done if the dimensions of the output and input ends are consistent. In Eqn. (1), x and y represent the input and output vectors of the layers considered. The function \(\mathrm{F}\left(\mathrm{x},\left\{\mathrm{W}_{\mathrm{s}}\right\}\right)\) represents the residual mapping that needs to be learned. For the example in Fig. 1 with two layers, \(\mathrm{F}=\mathrm{W}_{2}\sigma\left(\mathrm{W}_{1}\mathrm{x}\right)\), where \(\sigma\) represents the ReLU[30] nonlinearity and biases are omitted for simplification. The operation F + x is performed by a shortcut connection and element-wise addition, and we adopt a second nonlinearity after the addition, i.e., \(\sigma(\mathrm{y})\) in Fig. 1[27]. To summarize, the shortcut connections in ResNet allow for deeper networks without introducing extra parameters or computational complexity. It is necessary for the dimensions of x and F to be identical; in the event that this is not the case, we can utilize a linear projection to match the dimensions appropriately:
\[\mathrm{y}=\mathrm{F}\left(\mathrm{x},\left\{\mathrm{W}_{\mathrm{s}}\right\} \right)+\mathrm{W}_{\mathrm{i}}\mathrm{x} \tag{1}\]
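A minimal PyTorch sketch of a residual block with the projection shortcut of Eqn. (1); the channel sizes are illustrative, and we use PyTorch here since the pre-trained models in our experiments come from torchvision:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = F(x) + W_i x, with W_i a 1x1 projection when shapes differ."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride, 1, bias=False),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, 1, 1, bias=False),
            nn.BatchNorm2d(c_out),
        )
        self.proj = (nn.Identity() if c_in == c_out and stride == 1
                     else nn.Conv2d(c_in, c_out, 1, stride, bias=False))

    def forward(self, x):
        # Second nonlinearity applied after the element-wise addition.
        return torch.relu(self.f(x) + self.proj(x))

print(ResidualBlock(64, 128, stride=2)(torch.randn(1, 64, 32, 32)).shape)
```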
ResNet can be built with deeper layers and better feature extraction using shortcut connections. The common ResNet models have 18, 50, 101, and 152 layers. After conducting multiple ablation experiments for our image classification task,
Fig. 1: A building block of ResNet[27]
it was found that ResNet50 is the most suitable model for our task. The trained ResNet50 achieved the highest accuracy of 98.6% on our validation set. This accuracy did not improve with the deeper ResNet101 and ResNet152 models. Therefore, we believe that ResNet50 is the most economical choice for our five-class image recognition task. Further details of the ablation experiments are elaborated in Section IV. The approximate model architecture of the chosen ResNet50 is presented below in Fig. 2.
As we can see in Fig. 2. ResNet50 is composed of 49 convolutional layers and one fully connected layer. The network structure can be divided into seven parts, with the first part performing convolution, regularization, activation function, and maximum pooling calculations on the input. The second to fifth parts contain residual blocks, with the green blocks changing the dimension but not the size of the residual block. Each residual block has three convolutional layers, resulting in a total of 49 convolutional layers and one fully connected layer. The input to the network is 224*224*3, and after the convolution calculations, the output is 7*7*2048. The pooling layer converts the output into a feature vector, which is then used by the classifier to calculate and output the class probabilities.
### _Dataset_
Our dataset was selected from the English letter dataset in the public dataset MNIST[31], as mentioned in the Introduction. Our task is to recognize the answers to multiple-choice questions handwritten by students. Given that answers to multiple-choice questions are primarily English letters, the handwritten English letters in MNIST suit our requirements well. As previously mentioned, we simplify our grading application scenario to four-choice questions (questions with more options can easily be accommodated by extending our method). However, in addition to the four basic answer classes ABCD, our model must also prevent students' incorrectly written, non-standard symbols from being arbitrarily classified as one of the ABCD options, which would lead to grading errors. Therefore, we added a fifth class on top of the four-class classification, named "unknown", indicating that answers recognized as belonging to this class are not any of the ABCD options. For this purpose, we also partitioned the MNIST dataset accordingly. We selected the images of the four handwritten English letters ABCD to construct the first four categories of our dataset, and randomly and evenly sampled the other 22 English letters to construct a fifth category of the same size as each of the ABCD categories, called the "unknown" category, used to recognize students' non-standard writing. The rationale for this division is that constructing the fifth category essentially only requires a large number of diverse images that differ from ABCD, and the fifth-category data composed of the remaining 22 letters[37] meets this criterion: the shapes of the 22 letters are diverse and distinct from the ABCD letters, and after learning from a large number of letters that do not belong to ABCD, the model can recognize symbols written by students that are not among the prescribed four letters. The details of the dataset partitioning are given in Table I.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
CLASS & Class A & Class B & Class C & Class D & Class unknown \\
\hline
Train datasets & 4800 & 4800 & 4800 & 4800 & 4800 \\
\hline
Val datasets & 800 & 800 & 800 & 800 & 800 \\
\hline
\end{tabular}
\end{table}
TABLE I: Our dataset selected from MNIST
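A minimal sketch of how such a five-class dataset can be assembled. The text above refers to the letter data as MNIST; torchvision exposes handwritten letters via the EMNIST 'letters' split, which we use here as a stand-in, and the subsampling to the exact counts of Table I is omitted:

```python
from torchvision import datasets, transforms

# EMNIST 'letters' labels are 1..26 for a..z.
ds = datasets.EMNIST("data", split="letters", train=True, download=True,
                     transform=transforms.Compose([
                         transforms.Resize(64), transforms.ToTensor()]))

ABCD = {1: 0, 2: 1, 3: 2, 4: 3}  # a, b, c, d -> classes 0..3

def remap(label):
    # Any of the other 22 letters becomes class 4: "unknown".
    return ABCD.get(int(label), 4)

ds.target_transform = remap
x, y = ds[0]
print(x.shape, y)  # torch.Size([1, 64, 64]) and a class in {0, ..., 4}
```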
Fig. 2: Example network architecture for ResNet50
### _Training strategy_
#### III-C1 Transfer Learning
In order to train our task model better and faster, we introduce pre-trained networks such as ResNet50, VGG, and AlexNet, provided in torchvision.models, during model training. We use transfer learning to freeze the weights of all the front layers of the pre-trained model and train only the last layer. We also fully train the networks, in two ways, for comparison. More details are provided in Section IV.
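A minimal sketch of the freezing strategy (train only the last layer), using the torchvision API of that era; the five outputs correspond to A, B, C, D, and unknown:

```python
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)  # pre-2022 torchvision API

# Freeze every pre-trained layer ...
for p in model.parameters():
    p.requires_grad = False

# ... and train only a new 5-way output layer (new parameters are trainable).
model.fc = nn.Linear(model.fc.in_features, 5)

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['fc.weight', 'fc.bias']
```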
#### III-C2 Cyclic Cosine Annealing Learning Rate Schedule
To enhance our network training, we apply a learning rate decay strategy called the Cyclic Cosine Annealing Learning Rate Schedule[33]. This strategy lets the learning rate fluctuate periodically during the training process, resulting in better training performance; it is defined as follows:
\[\eta_{t}=\eta_{\text{min}}^{i}+\frac{1}{2}\left(\eta_{\text{max}}^{i}-\eta_{\text{min}}^{i}\right)\left(1+\cos\left(\frac{T_{\text{cur}}}{T_{i}}\pi\right)\right) \tag{2}\]
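A minimal sketch using PyTorch's built-in CosineAnnealingLR, which realizes Eq. (2) without restarts; with T_max = 6 the learning rate completes a half cycle every six epochs, oscillating between 0.01 and 1e-5 as described in Section IV. The parameter tensor is a placeholder:

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder parameters
opt = torch.optim.SGD(params, lr=0.01)

# The lr follows a cosine between 0.01 and 1e-5; stepping past T_max
# brings it back up, giving periodic oscillation with half-cycle 6.
sched = CosineAnnealingLR(opt, T_max=6, eta_min=1e-5)

for epoch in range(20):
    # ... one epoch of training (opt.step() calls) would go here ...
    sched.step()
    print(epoch, round(opt.param_groups[0]["lr"], 6))
```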
### _Loss function_
We choose the cross entropy loss function as our loss function. It is a commonly used loss function in machine learning for classification tasks, measuring the difference between the predicted probability distribution and the true probability distribution of a classification problem. The function is defined as the negative sum of the true class probability multiplied by the logarithm of the predicted class probability. The formula for the cross entropy loss function is as follows:
\[H(p,q)=-\sum_{x}P(x)\log(q(x)) \tag{3}\]
where \(P(x)\) is the true probability distribution and \(q(x)\) is the predicted probability distribution. The function penalizes the model more when it predicts a low probability for the true class and assigns high probabilities to other classes.
## IV Experiments
In this section, we study the performance of our designed task on our selected datasets and demonstrate its effectiveness and efficiency compared with different models.
### _Preprocessing_
As described in Section 3.2, our dataset is derived from the MNIST dataset and has been divided into training and validation sets, each containing five classes. The specifications of each data point in the training and validation sets are identical, with 28x28 pixel grayscale images. Before inputting the data, we preprocess the image data by resizing the images to 64x64 pixels, believing that larger images will make it easier for the model to extract information and improve the classification accuracy of our task.
### _Parameter setting_
During the experiments, we standardized some basic parameters to better compare the effectiveness of different models in accomplishing our designed image five-classification task. The parameters we set include a batch size of 128 and 20 epochs. Regarding the learning rate, we adopted the Cyclic Cosine Annealing Learning Rate decay strategy, as mentioned in Section 3.3. The initial learning rate was set at 0.01 and the minimum learning rate was designed to be 0.00001. Throughout the training process, the learning rate oscillates periodically between 0.01 and 0.00001, with a half oscillation cycle occurring every six epochs. This approach allows for a more comprehensive evaluation of the models' performance in our classification task.
## V Results
### _Comparison Experiment_
We evaluate how well different models perform our task, with the sole criterion for evaluating the excellence of a model[38] being the accuracy on the validation set. During the training process, we record the best accuracy achieved across all epochs to represent the capability of each model. We compare the training of five models suitable for image classification tasks: pretrained 18-layer and 50-layer ResNets, AlexNet, 19-layer VGG, and ViT, a Vision Transformer. For each model, we have tested the effectiveness of transfer learning, adopting the following training strategies: 1. Apply transfer learning by training only the last output layer of each model while using the pretrained parameters for the remaining layers of the network. 2. Load the parameters from the first training, unlock all layers of the model, and train all parameters completely. 3. Do not use transfer learning, and train the entire model from scratch.
Table II reports the accuracy on the validation set after training the different models; each value is rounded to two decimal places. It can be observed that, in general, training with transfer learning results in higher accuracy than training the entire model from scratch. Among the different models, except for the Vision Transformer, the best training strategies yield very similar accuracy rates; however, ResNet50 achieves slightly better results than the other models. Below, we present more detailed training data for the best-performing model, ResNet50. The figure depicts the training process of ResNet50 over 20 epochs in detail. Our sole criterion for evaluating the model is the accuracy on the validation set, i.e., the percentage of correctly classified samples in the validation set. The figure displays two curves: the orange curve represents the validation accuracy of the ResNet50 model
\begin{table}
\begin{tabular}{|l|l|l|l|}
\hline
Training strategy & Transfer learning & Retrain the whole & Train from scratch \\
\hline
Resnet18 & 80.60\% & 98.25\% & 98.48\% \\
\hline
Resnet50 & 84.25\% & 99.40\% & 98.88\% \\
\hline
Vgg19 & 83.00\% & 98.33\% & 97.90\% \\
\hline
ViT & 63.20\% & 77.42\% & 70.12\% \\
\hline
Alexnet & 84.70\% & 88.41\% & 80.12\% \\
\hline
\end{tabular}
\end{table}
TABLE II: Comparison of different models
trained only on the linear output layer, while the blue curve represents the validation accuracy of the model obtained through transfer learning by fully training it after inheriting the weights of the last linear layer. It is evident that the fully trained model yields significantly higher accuracy compared to the partially trained model.
### _Results Analysis_
Using the best-performing ResNet50, we can achieve an accuracy of up to 98.6% on the validation set. In fact, after analyzing the data, we believe that our combined training strategy has even higher accuracy potential. Upon analyzing the misclassified images, we found that one obvious reason is the presence of some difficult-to-identify characters in the MNIST dataset, as shown in the following image. In reality, the model obtained using our method has an accuracy of at least 99% when grading real-world multiple-choice questions, and it can accurately identify students' incorrectly written characters without misjudging them.
The samples shown in Fig. 4 are all misclassified. The model's predicted answers are marked outside the brackets, while the true labels are inside the brackets. These misclassified samples are only a portion of all the incorrect data. Upon analyzing them, we found that even with the human eye, it is difficult to determine their true labels accurately. A part of our error set consists of such cases, so we believe that the actual accuracy of our task under the combined strategy is even higher.
## VI Conclusion
In this paper, we mainly discussed how to better use AI to replace manual recognition of students' written answers and render judgments. We regard this practical application of AI in the field of education as an image multi-classification task. Because students may write incorrect, non-standard symbols, we added an "unknown" class alongside the categories for the correct options to identify non-standard writing. In training such a model, we compared many well-known models and applied several training strategies, and finally concluded that ResNet50 is the best choice for our setting. In the future, we will continue to explore better ways to expand the application of AI-based intelligent grading and consider more application scenarios.
|
2309.05067 | Mutation-based Fault Localization of Deep Neural Networks | Deep neural networks (DNNs) are susceptible to bugs, just like other types of
software systems. A significant uptick in using DNN, and its applications in
wide-ranging areas, including safety-critical systems, warrant extensive
research on software engineering tools for improving the reliability of
DNN-based systems. One such tool that has gained significant attention in the
recent years is DNN fault localization. This paper revisits mutation-based
fault localization in the context of DNN models and proposes a novel technique,
named deepmufl, applicable to a wide range of DNN models. We have implemented
deepmufl and have evaluated its effectiveness using 109 bugs obtained from
StackOverflow. Our results show that deepmufl detects 53/109 of the bugs by
ranking the buggy layer in top-1 position, outperforming state-of-the-art
static and dynamic DNN fault localization systems that are also designed to
target the class of bugs supported by deepmufl. Moreover, we observed that we
can halve the fault localization time for a pre-trained model using mutation
selection, yet losing only 7.55% of the bugs localized in top-1 position. | Ali Ghanbari, Deepak-George Thomas, Muhammad Arbab Arshad, Hridesh Rajan | 2023-09-10T16:18:49Z | http://arxiv.org/abs/2309.05067v1 | # Mutation-based Fault Localization
###### Abstract
Deep neural networks (DNNs) are susceptible to bugs, just like other types of software systems. A significant uptick in using DNN, and its applications in wide-ranging areas, including safety-critical systems, warrant extensive research on software engineering tools for improving the reliability of DNN-based systems. One such tool that has gained significant attention in the recent years is _DNN fault localization_. This paper revisits _mutation-based fault localization_ in the context of DNN models and proposes a novel technique, named deepmufl, applicable to a wide range of DNN models. We have implemented deepmufl and have evaluated its effectiveness using 109 bugs obtained from StackOverflow. Our results show that deepmufl detects 53/109 of the bugs by ranking the buggy layer in top-1 position, outperforming state-of-the-art static and dynamic DNN fault localization systems that are also designed to target the class of bugs supported by deepmufl. Moreover, we observed that we can halve the fault localization time for a pre-trained model using _mutation selection_, yet losing only 7.55% of the bugs localized in top-1 position.
Deep Neural Network, Mutation, Fault Localization
## I Introduction
Software bugs [1] are a common and costly problem in modern software systems, costing the global economy billions of dollars annually [2]. Recently, data-driven solutions have gained significant attention for their ability to efficiently and cost-effectively solve complex problems. With the advent of powerful computing hardware and an abundance of data, the use of deep learning [3], which is based on deep neural networks (DNNs), has become practical. Despite their increasing popularity and success stories, DNN models, like any other software, may contain bugs [4, 5, 6, 7], which can undermine their safety and reliability in various applications. Detecting DNN bugs is _not_ easier than detecting bugs in traditional programs, _i.e._, programs without any data-driven component in them, as DNNs depend on the properties of the training data and numerous hyperparameters [8]. Mitigating DNN bugs has been the subject of fervent research in recent years, and various techniques have been proposed for testing [9, 10], fault localization [11, 12], and repair [13, 14] of DNN models.
Fault localization in the context of traditional programs has been extensively studied [15], with one well-known approach being _mutation-based fault localization_ (MBFL) [16, 17]. This approach is based on mutation analysis [18], which is mainly used to assess the quality of a test suite by measuring the ratio of artificially introduced bugs that it can detect. MBFL improves upon the more traditional, lightweight _spectrum-based fault localization_[19, 20, 21, 22, 23, 24] by uniquely capturing the relationship between individual statements in the program and the observed failures. While both spectrum-based fault localization [25, 26] and mutation analysis [27, 28, 29] have been studied in the context of DNNs, to the best of our knowledge, MBFL for DNNs has not been explored by the research community, yet the existing MBFL approaches are not directly applicable to DNN models.
This paper revisits the idea of MBFL in the context of DNNs. Specifically, we design, implement, and evaluate a technique, named deepmufl, to conduct MBFL in pre-trained DNN models. The basic idea behind deepmufl is derived from its traditional MBFL counterparts, namely, Metallaxis [30] and MUSE [17], that are based on measuring the impact of mutations on passing and failing test cases (see §II for more details). In summary, given a pre-trained model and a set of data points, deepmufl separates the data points into two sets of "passing" and "failing" data points (test cases), depending on whether the output of the model matches the ground-truth. deepmufl then localizes the bug in two phases, namely the _mutation generation phase_ and the _mutation testing/execution phase_. In the mutation generation phase, it uses 79 mutators, a.k.a. mutation operators, to systematically mutate the model, _e.g._, by replacing the activation function of a layer, so as to generate a pool of mutants, _i.e._, model variants with seeded bugs. In the mutation testing phase, deepmufl feeds each of the mutants with passing and failing data points and compares the output to the output of the original model to record the number of passing and failing test cases that are impacted by the injected bugs. In this paper, we study two types of impacts: _type 1_ impact, _a la_ MUSE, which tracks only fail to pass and pass to fail, and _type 2_ impact, like Metallaxis, which tracks changes in the actual output values. deepmufl uses these numbers to calculate _suspiciousness values_ for each layer according to MUSE, as well as two variants of Metallaxis formulas. The layers are then sorted in descending order of their suspiciousness values for the developer to inspect.
We have implemented deepmufl on top of Keras [31], and it supports three types of DNN models for regression, as well as classification tasks, that must be written using the Sequential API of Keras: fully-connected DNN, convolutional neural network (CNN), and recurrent neural network (RNN). Extending deepmufl to other libraries, _e.g._, TensorFlow [32] and PyTorch [33], as well as potentially to other model architectures, _e.g._, the functional model architecture in Keras, is a matter of investing engineering effort in the development of new mutators tailored to such libraries and models. Since the current implementation of deepmufl operates on pre-trained models, its scope is limited to _model bugs_ [7], _i.e._, bugs related to activation function, layer properties, model properties, and bugs due to missing/redundant/wrong layers (see §V).
We have evaluated deepmufl using a diverse set of 109 Keras bugs obtained from StackOverflow. These bugs are representatives of the above-mentioned model bugs, in that our dataset contains examples of each bug sub-category at different layers of the models suited for different tasks. For example, concerning the sub-category _wrong activation function_ model bug, we have bugs in regression and classification fully-connected DNN, CNN, and RNN models that have wrong activation functions of different types (_e.g._, ReLU, softmax, _etc._) at different layers. For 53 of the bugs, deepmufl, using its MUSE configuration, pinpoints the buggy layer by ranking it in top-1 position. We have compared deepmufl's effectiveness to that of state-of-the-art static and dynamic DNN fault localization systems Neuralint [12], DeepLocalize [11], DeepDiagnosis [8], and UMLAUT [34] that are also designed to detect model bugs. Our results show that, in our bug dataset, deepmufl, in its MUSE configuration, is 77% more effective than DeepDiagnosis, which detects 30 of the bugs.
Despite this advantage of deepmufl in terms of effectiveness, since it operates on a pre-trained model, it is slower than state-of-the-art DNN fault localization tools from an end-user's perspective. However, this is mitigated, to some extent, by the fact that, similar to traditional programs, one can perform _mutation selection_ [35] to curtail the mutation testing time: we observed that by randomly selecting 50% of the mutants for testing, we can still find 49 of the bugs in top-1 position, yet we halve the fault localization time after training the model.
In summary, this paper makes the following contributions.
* **Technique**: We develop MBFL for DNN and implement it in a novel tool, named deepmufl, that can be uniformly applied to a wide range of DNN model types.
* **Study**: We compare deepmufl to state-of-the-art static and dynamic fault localization approaches and observed:
* In four configurations, deepmufl outperforms other approaches in terms of the number of bugs that appear in top-1 position, and it detects 21 bugs that none of the studied techniques were able to detect.
* We can halve the fault localization time for a pre-trained model by random mutation selection without significant loss of effectiveness.
* **Bug Dataset**: We have created the largest curated dataset of _model bugs_, comprising 109 Keras models ranging from regression to classification and fully-connected DNN to CNN and RNN.
**Paper organization.** In the next section, we review concepts of DNNs, mutation analysis, and MBFL. In §III, we present a motivating example and discuss how deepmufl works under the hood. In §IV, we present technical details of the proposed approach, before discussing the scope of deepmufl in §V. In §VI, we present the results of our experiments with deepmufl and state-of-the-art DNN fault localization tools from different aspects. We discuss threats to validity in §VII and conclude the paper in §IX.
**Data availability.** The source code of deepmufl and the data associated with our experiments are publicly available [36].
## II Background
### _Mutation Analysis_
Mutation analysis [18] is a program analysis method for assessing the quality of the test suite. It involves generating a pool of program variants, _i.e._, _mutants_, by systematically mutating program elements, _e.g._, replacing an arithmetic operator with another, and running the test suite against the mutants to check if the output of the mutated program is different from that of the original one; if different, the mutant is marked as _killed_, otherwise as _survived_. A mutant might survive because it is semantically equivalent to the original program, hence the name _equivalent_ mutant. Test suite quality is assessed by computing a _mutation score_ for the test suite, which is the ratio of killed mutants over the non-equivalent survived mutants. Mutation score indicates how good a test suite is at detecting real bugs [37]. In addition to its original use, mutation analysis has been used for many other purposes [38], such as fault localization [16, 17], automated program repair [39, 40], test generation [41, 42] and prioritization [43], program verification [44, 45], _etc._
### _Mutation-based Fault Localization_
Mutation-based fault localization (MBFL) uses mutation analysis to find bugs. In this section, we review two major approaches to MBFL, namely Metallaxis [30] and MUSE [17]. Both of these approaches are implemented in deepmufl. The reader is referred to the original papers [30, 17] for examples explicating the rationale behind each approach.
#### II-B1 Metallaxis
Metallaxis [30] posits that mutants generated by mutating the same program element are likely to exhibit similar behaviors, while mutants generated by mutating different program elements are likely to exhibit different behaviors. Since a fault itself can also be viewed as a mutant, it is expected to behave similarly to other mutants generated by mutating that same buggy program element and can be located by examining the mutants based on this heuristic. Metallaxis regards the mutants that change the test outputs, or their error messages, _e.g._, stack trace, as _impacting the tests_. Thus, mutants impacting failing test cases might indicate that their corresponding code elements are the root cause of the test failures, while mutants impacting passing test cases might indicate that their corresponding code elements are correct.
Once the numbers of impacted passing and failing test cases are calculated, Metallaxis uses a fault localization formula to calculate suspiciousness values for each element.
The Metallaxis fault localization formula can be viewed as an extension of that of spectrum-based fault localization, treating all mutants impacting the tests as covered elements and the others as uncovered elements. Specifically, the maximum suspiciousness value of the mutants of a corresponding code element is returned as the suspiciousness value of the code element. More concretely, assuming we are using the SBI formula [46], the suspiciousness value for a program element \(e\), denoted \(s(e)\), is calculated as follows.
\[s(e)=\max_{m\in M(e)}\left(\frac{|T_{f}(m,e)|}{|T_{f}(m,e)|+|T_{p}(m,e)|}\right), \tag{1}\]
where \(M(e)\) denotes the set of all mutants targeting program element \(e\), \(T_{f}(m,e)\) is the set of failing test cases that are impacted by the mutant \(m\), while \(T_{p}(m,e)\) denotes the set of passing test cases that are impacted by \(m\). In this definition, and in the rest of the paper, the notation \(|\cdot|\) represents the size of a set. Alternatively, had we used Ochiai [47], Metallaxis suspiciousness formula would be modified as follows.
\[s(e)=\max_{m\in M(e)}\left(\frac{|T_{f}(m,e)|}{\sqrt{(|T_{f}(m,e)|+|T_{p}(m,e) |)|T_{f}|}}\right), \tag{2}\]
where \(T_{f}\) denotes the set of all failing test cases.
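To make these formulas concrete, the following minimal Python sketch (our own illustration, not part of any tool's API) computes Metallaxis suspiciousness for one program element; `impacted_fail[m]` and `impacted_pass[m]` stand for \(T_{f}(m,e)\) and \(T_{p}(m,e)\):

```
import math

def metallaxis_sbi(impacted_fail, impacted_pass):
    # Eq. 1: max over mutants of |Tf(m,e)| / (|Tf(m,e)| + |Tp(m,e)|)
    scores = []
    for m in impacted_fail:
        f, p = len(impacted_fail[m]), len(impacted_pass[m])
        scores.append(f / (f + p) if f + p > 0 else 0.0)
    return max(scores, default=0.0)

def metallaxis_ochiai(impacted_fail, impacted_pass, num_failing):
    # Eq. 2: max over mutants of |Tf(m,e)| / sqrt((|Tf(m,e)| + |Tp(m,e)|) * |Tf|)
    scores = []
    for m in impacted_fail:
        f, p = len(impacted_fail[m]), len(impacted_pass[m])
        denom = math.sqrt((f + p) * num_failing)
        scores.append(f / denom if denom > 0 else 0.0)
    return max(scores, default=0.0)
```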
#### II-B2 MUSE
MUSE [17] is based on the assumption that mutating a faulty program element is likely to impact more failing test cases than passing test cases by "fixing" it, while mutating a correct program element is likely to impact more passing test cases than failing test cases by breaking it. The notion of "impacting test cases" in MUSE, unlike Metallaxis, is more rigid, in that it refers to making passing test cases fail, and _vice versa_. Once the numbers of impacted failing and passing test cases are identified, _suspiciousness values_ can be calculated using the following formula.
\[s(e)=\frac{1}{|M(e)|}\Sigma_{m\in M(e)}\left(\frac{|T_{f}(m,e)|}{|T_{f}|}- \alpha\frac{|T_{p}(m,e)|}{|T_{p}|}\right), \tag{3}\]
where \(T_{p}\) denotes the set of all passing test cases and \(\alpha\) is a constant used to balance the two ratios, defined as \(\alpha=\frac{|F\leadsto P|}{|T_{f}|}\cdot\frac{|T_{p}|}{|P\leadsto F|}\). In this definition, \(F\leadsto P\) denotes the set of failing test cases that pass due to some mutation, while \(P\leadsto F\) denotes the set of passing test cases that fail as a result of some mutation.
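A corresponding sketch of the MUSE calculation, under the same hypothetical data layout as the Metallaxis sketch above, might look as follows; `f2p` and `p2f` are the sizes of \(F\leadsto P\) and \(P\leadsto F\):

```
def muse(impacted_fail, impacted_pass, num_failing, num_passing, f2p, p2f):
    # Eq. 3 with alpha = (|F~>P| / |Tf|) * (|Tp| / |P~>F|)
    alpha = (f2p / num_failing) * (num_passing / p2f) if p2f > 0 else 0.0
    total = 0.0
    for m in impacted_fail:
        total += (len(impacted_fail[m]) / num_failing
                  - alpha * len(impacted_pass[m]) / num_passing)
    return total / len(impacted_fail) if impacted_fail else 0.0
```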
### _Deep Neural Networks_
A neural network is intended to compute a function of the form \(\mathbb{R}^{m}\longrightarrow\mathbb{R}^{n}\), where \(m,n\) are positive integers. A neural network is often represented as a weighted directed acyclic graph arranged in layers of three types, _i.e._, an _input layer_, one or more _hidden layers_, and an _output layer_. Input and output layers output linear combinations of their inputs, while hidden layers can be viewed as more complex computational units, _e.g._, a _non-linear unit_, _convolutional unit_, or a _batch normalization unit_. A non-linear unit is composed of _neurons_, functions applying a non-linear _activation function_, _e.g._, rectified linear unit (ReLU), tanh, or sigmoid, on the weighted sum of their inputs. A convolutional layer calculates the convolution between the vector of the values obtained from the previous layer and a learned kernel matrix. Lastly, a batch normalization layer normalizes the vector of values obtained from the previous layer _via_ centering or re-scaling. A neural network with two or more hidden layers is referred to as a _deep neural network_ (DNN).
## III Motivating Example
In this section, we first describe how deepmufl helps programmers detect and fix bugs by presenting a hypothetical use case scenario and then motivate the idea behind deepmufl by describing the details of how deepmufl works, under the hood, on the example developed in the use case story.
Courtney is a recent college graduate working as a junior software engineer at an oil company, which frequently makes triangular structures, made of epoxy resin, of varying sizes to be used under the water. The company needs to predict with at least 60% confidence that a mold of a specific size will result in an epoxy triangle after it has been dried, and potentially shrunk, and it does not need to spend time on cutting and/or sanding the edges. Over time, through trial and error, the company has collected 1,000 data points of triangle edge lengths and whether or not a mold of that size resulted in a perfect triangle. Courtney's first task is to write a program that given three positive real numbers \(a\), \(b\), and \(c\), representing the edge lengths of the triangle mold, determines if the mold will result in epoxy edges that form a perfect triangle. As a first attempt, she writes the program shown in Listing 1.
```
1: from keras.models import Sequential
2: from keras.layers import Dense
3: model = Sequential()  # X_train, y_train hold 994 of the 1,000 data points
4: model.add(Dense(2, activation='relu', input_dim=3))
5: model.add(Dense(2, activation='relu'))
6: model.compile(loss='sparse_categorical_crossentropy',
7:               optimizer='adam', metrics=['accuracy'])
8: model.fit(X_train, y_train, epochs=100, validation_split=0.1)
```
Listing 1: Courtney's first attempt
The program uses 994 out of the 1,000 data points for training a model. After testing the model on the remaining 6 data points, she realizes that the model achieves no more than 33% accuracy. Fortunately, Courtney uses an IDE equipped with a modern DNN fault localization tool, named deepmufl, which is known to be effective at localizing bugs that manifest as stuck accuracy/loss. She uses deepmufl, with its default settings, _i.e._, Metallaxis with SBI, to find the faulty part of her program. The tool receives the fitted model in .h5 format [48] together with a set of testing data points \(T\) and returns a ranked list of model elements; layers, in this case. After Courtney provides deepmufl with the model saved in .h5 format and the 6 testing data points that she had, within a few seconds, the tool returns a list with two items, namely Layer 2 and Layer 1, corresponding to lines 5 and 4, respectively, in Listing 1. Once she navigates
to the details about Layer 2, she receives a ranked list with 5 elements, _i.e._, Mutant 12: replaced activation function 'relu' with 'softmax', ..., Mutant 10: divided weights by 2, Mutant 11: divided bias by 2. By seeing the description of Mutant 12, Courtney immediately recalls her machine learning class wherein they were advised that in classification tasks they should use _softmax_ as the activation function of the last layer. She then changes the activation function of the last layer at Line 5 of Listing 1 from relu to softmax. By doing so, the model achieves an accuracy of 67% on the test dataset, and similarly on a cross-validation, exceeding the expectations of the company.
We now describe how deepmufl worked, under the hood, to detect the bug _via_ Metallaxis' default formula. Figure 1 depicts the structure of the model constructed and fitted in Listing 1. Each edge is annotated with its corresponding weight and the nodes are annotated with their bias values. The nodes use ReLU as the activation function. In this model, the output \(T\) is intended to be greater than the other output if \(a\), \(b\), and \(c\) form a triangle, and \(\sim T\) should be greater than or equal to the other output, otherwise.
Table 1 shows an example of how deepmufl localizes the bug in the model depicted in Figure 1. In the first two columns, the table lists the two layers, and within each layer, the neurons. For each neuron three mutators are applied, _i.e._, halving weight values, halving the bias value, and replacing the activation function. More mutators are implemented in deepmufl, but here, for the sake of simplicity, we only focus on 3 of them and also restrict ourselves to only one activation function replacement, _i.e._, ReLU vs. softmax.
As we saw in Courtney's example, she had a test dataset \(T\) with 6 data points which initially resulted in 33% accuracy. These six data points are denoted T1, ..., T6 in Table 1, where correctly classified ones are colored green, whereas misclassified data points are colored rose. deepmufl generates 12 mutants for the model of Figure 1, namely, M1, ..., M12. Each mutant is a variant of the original model. For example, M1 is the same as the model depicted in Figure 1, except the weights of the incoming edges to neuron N1 are halved, _i.e._, 0.51, -0.38, and -0.52 from left to right, while M9 is the same as the model depicted in Figure 1, except that the activation functions for N3 and N4 are softmax instead of relu. After generating the mutants, deepmufl applies each mutant on the inputs T1, ..., T6 and compares the results to those of the original model. For each data point T1, ..., T6, if the result obtained from each mutant M1, ..., M12 is different from that of the original model, we put a bullet point in the corresponding cell. For example, two bullet points in the row for M3 indicate that the mutant misclassifies the two data points that used to be correctly classified, while the other data points, _i.e._, T1, ..., T4, are misclassified as before. Next, deepmufl uses the SBI formula [46] to calculate suspiciousness values for each mutant \(m\in\) {M1, ..., M12}, individually. These values are reported under the one but last column in Table 1. Lastly, deepmufl takes the maximum of the suspiciousness values of the mutants corresponding to a layer and takes it as the suspiciousness value of that layer (cf. Eq. 1 in §II). In this particular example, layer L1 gets a suspiciousness value of 0, while L2 gets a suspiciousness value of 1. Thus, deepmufl ranks L2 before L1 for user inspection, and for each layer it sorts the mutants in descending order of their suspiciousness values, so that the user will understand what change impacted most the originally correctly classified data points. In this case, M12 and M9 wind up at the top of the list, and as we saw in Courtney's story, the information associated with the mutations helped fix the bug.
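To spell out the arithmetic, recall that SBI scores a mutant by the fraction of the test cases it impacts that were originally failing. A mutant like M3 that impacts only the two originally passing test cases gets \(s(\text{M3})=0/(0+2)=0\), whereas a mutant that impacts, say, all four originally failing test cases and neither passing one gets \(4/(4+0)=1\); taking the per-layer maximum over such scores is what yields \(s(\text{L1})=0\) and \(s(\text{L2})=1\) here.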
## IV Proposed Approach
Our technique deepmufl comprises four components: (1) mutation generator, (2) test case splitter, (3) mutation executor/tester, and (4) suspiciousness calculator. Figure 2 depicts these components as processes, numbered accordingly, taking
inputs and producing outputs. The mutation generator (marked 1 in Figure 2) applies 79 mutators on all the layers of the input buggy DNN, so as to generate a pool of mutants, _i.e._, variants of the original, buggy DNN model with small perturbations, _e.g._, replacing the activation function of a layer. The test case splitter (marked 2 in the figure) applies the original buggy DNN on a given set of test data points, _i.e._, test cases (or input values) paired with their expected output values, so as to partition the set into two subsets, namely _passing test cases_ and _failing test cases_. Passing test cases are those input values for which the expected output matches that produced by the original model, whereas failing test cases are those input values for which the expected output does not match that produced by the model. This component also stores the output of the original model on each of the test cases. Next, the mutation executor (which is also called mutation tester, marked 3 in the figure) applies the generated mutants on each of the passing and failing test cases and the results are compared to those of the original model recorded in the previous step. This amounts to a mutation execution matrix that is used to calculate suspiciousness values for each layer in the model (marked 4 in the figure). The user may instruct deepmufl to use a specific fault localization formula, _e.g._, MUSE or Metallaxis with SBI or Ochiai, for calculating suspiciousness values. The layers are then ranked based on the calculated suspiciousness values for user inspection. The ranked list is accompanied by information about the mutations conducted on each layer to facilitate debugging.
### _Mutation Generator_
The mutation generator component receives the original, buggy DNN model and generates as many variants of the model, _i.e._, mutants, as possible, by systematically mutating every element of the input model. This component implements 79 mutators. Mutators can be viewed as transformation operators that, when applied to a given element, _e.g._, a neuron or a layer, in the model, return a new variant of the model with that particular element mutated. Table 2 lists all the mutators implemented in deepmufl, the types of model elements on which they can operate, and the way each mutator affects the target element. These mutators are inspired by the ones implemented in the existing mutation analysis systems, _e.g._, [27, 28, 29, 50, 51, 52, 53], to name a few. Ma _et al._ [29] and Hu _et al._ [28] define so-called _model-level_ mutators that also operate on pre-trained models. Direct reuse of all of their mutators was not possible, as those mutators depend on random values which would introduce a source of non-determinism in deepmufl: mutating the same model element with random values, _e.g._, Gaussian fuzzing, as in [29], would yield a different mutant each time, making deepmufl produce different outputs on each run for the same model. In general, as far as MBFL is concerned, using variable values (whether deterministic or not), instead of the current hard-coded ones, for mutation of weights would not bring about any benefit, as the goal here is to _break_ the model in some way and observe the impact on originally failing and passing test cases.
We argue that not all model bugs can be emulated using mutators at the level of pre-trained models, _e.g._, missing batch normalization, but the mutators listed in Table 2 are sufficient for emulating a subset of such bugs, _e.g._, wrong activation function or missing/redundant layer. Please see §V for a more detailed discussion on supported bugs.
Mutation generation in deepmufl is done directly on the trained model and there is no need for retraining the model. This makes deepmufl quite fast and perhaps more attractive from a practical point of view. However, this comes with a risk of not being traceable, _i.e._, a mutation on a pre-trained .h5 model does not directly correspond to a line of source code for the user to inspect. In the Keras programs that we studied, this limitation was mitigated by the fact that the models with Sequential architecture were implemented using a well-understood structure and mapping layer numbers/identifiers in deepmufl's reports to source code was trivial. In a future work, with the help of simple auto-generated annotations, _e.g._, for lexical scoping of the code snippet for model declaration, we will extend deepmufl to automatically map layer numbers/identifiers in its reports to actual lines of source code.
Humbatova _et al._ [27] argue about the importance of mutators emulating real DNN faults. We acknowledge that mutators emulating real faults would help generate more informative reports that would also give hints on how to fix the program. However, unlike mutation analysis, the main objective of an MBFL technique is to assign suspiciousness values to the program elements, which can, in theory, be done using any kind of mutator, whether or not it makes sense from the standpoint of a developer. It is worth noting that the alternative design decision of using DeepCrime [27] as a mutation generation engine for deepmufl would result in finding more bugs than the current version of deepmufl, _e.g._, bugs related to training hyper-parameters or training dataset, but such a design is expected to be impacted by the nondeterminacy inherent in the training process and, given the fact that we do not employ any training data selection, would be significantly slower due to numerous re-trainings. Nevertheless, finding more bugs would be an incentive for exploring this alternative in future work.
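For illustration, a minimal sketch of two such mutators on a pre-trained Keras Sequential model is shown below. This is our own simplification, not deepmufl's actual implementation; it assumes the target layer exposes an `activation` entry in its config (_e.g._, Dense) and a kernel/bias weight pair:

```
import tensorflow as tf

def replace_activation(model, layer_index, new_activation):
    """Mutant with the activation function of one layer replaced."""
    config = model.get_config()
    config["layers"][layer_index]["config"]["activation"] = new_activation
    mutant = tf.keras.Sequential.from_config(config)
    mutant.set_weights(model.get_weights())  # keep the trained weights
    return mutant

def halve_weights(model, layer_index):
    """Mutant with the kernel weights of one layer divided by 2."""
    mutant = tf.keras.models.clone_model(model)
    mutant.set_weights(model.get_weights())
    kernel, bias = mutant.layers[layer_index].get_weights()
    mutant.layers[layer_index].set_weights([kernel / 2.0, bias])
    return mutant
```

Because both mutators reuse the already-trained weights, no retraining is involved, which is what keeps mutant generation cheap.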
### _Test Case Splitter_
Before we introduce this component, we need to clarify certain terminology.
**Definition 1**.: _A data point in a testing dataset for a DNN model is defined to be a pair of the form \((I,O)\), where \(I\in\mathbb{R}^{m}\) and \(O\in\mathbb{R}^{n}\), with \(m\) and \(n\) being positive integers. In this paper \(I\) is called test case, test input, or input, while \(O\) is called expected output or ground-truth value._
Given a test dataset, the test case splitter component applies the original model on each of the test cases for the data points and checks if the model output matches the expected output. If the two outputs match, then the corresponding test case will be marked as _passing_, otherwise it will be marked as _failing_. This component also records the output produced by the original model to be used during impact analysis, described below.
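A minimal sketch of this component for a classifier could look as follows (for regression models, the exact-match check is replaced by the delta-threshold comparison of §IV-C):

```
import numpy as np

def split_test_cases(model, inputs, expected):
    """Partition data points into passing/failing and record original outputs."""
    original_out = np.argmax(model.predict(inputs), axis=1)
    passing = [j for j, (o, e) in enumerate(zip(original_out, expected)) if o == e]
    failing = [j for j, (o, e) in enumerate(zip(original_out, expected)) if o != e]
    return passing, failing, original_out
```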
### _Mutation Executor (Mutation Tester)_
We start describing this component with a definition.
**Definition 2**.: _A mutation execution matrix \(\mathcal{E}\) is a \(k\times l\) matrix, where \(k\) is the number of generated mutants, while \(l\) is the number of test cases. Each element \(\mathcal{E}_{i,j}\) in the matrix is a member of the set \(\{\checkmark,\times,\bigcirc\}\), wherein \(\checkmark\) indicates that the \(i^{\mathrm{th}}\) mutant impacts the \(j^{\mathrm{th}}\) test case, whereas \(\times\) indicates that the mutant does not affect the test case. \(\bigcirc\) denotes a nonviable mutant, i.e., a mutant that fails when loading or applying it on a test case. Such mutants might be generated, e.g., due to creating a shape error [32] after the mutation._
The mutation executor component constructs the mutation execution matrix by applying each of the generated mutants (see §IV-A) on the failing and passing test cases to determine which of the test cases are impacted by which mutants. The impact of mutation on test cases is measured using two types of impacts, _i.e._, _type 1_ impact and _type 2_ impact, defined below.
**Definition 3**.: _Given a DNN model \(\mathcal{M}\), its mutated version \(\mathcal{M}^{\prime}\), and a data point \((I,O)\), we define the two types of impacts:_
* _Type 1: Mutant_ \(\mathcal{M}^{\prime}\) impacts the test case \(I\) if \(\mathcal{M}(I)=O\) but \(\mathcal{M}^{\prime}(I)\neq O\), or \(\mathcal{M}(I)\neq O\) but \(\mathcal{M}^{\prime}(I)=O\). In other words, type 1 impact tracks pass to fail and fail to pass test cases, a la MUSE [17]._
* _Type 2: Mutant_ \(\mathcal{M}^{\prime}\) impacts the test case \(I\) if \(\mathcal{M}(I)\neq\mathcal{M}^{\prime}(I)\)._
_In this definition, \(\mathcal{M}(I)\) or \(\mathcal{M}^{\prime}(I)\) denotes the operation of applying model \(\mathcal{M}\) or \(\mathcal{M}^{\prime}\) on the test case \(I\)._
It is worth noting that checking for equality of two values can be tricky for regression models, as those models approximate the expected values. To work around this problem, deepmufl compares values obtained from regression models using a user-defined delta threshold, _i.e._, the values are deemed equal if their absolute difference is no more than a threshold. By default, deepmufl uses a threshold of 0.001. This is the approach adopted by well-known testing frameworks for comparing floating-point values [54, 55]. Also, whether deepmufl uses type 1 or type 2 impact is a user preference and is determined alongside the threshold.
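The two impact checks and the threshold-based comparison can be sketched as follows (our own code; the constant mirrors deepmufl's default threshold of 0.001):

```
DELTA = 0.001  # default threshold for comparing regression outputs

def equal(a, b, regression=False):
    return abs(a - b) <= DELTA if regression else a == b

def type1_impact(orig_out, mut_out, expected, regression=False):
    # Pass->fail or fail->pass flip, a la MUSE
    return equal(orig_out, expected, regression) != equal(mut_out, expected, regression)

def type2_impact(orig_out, mut_out, regression=False):
    # Any change in the model's actual output, a la Metallaxis
    return not equal(orig_out, mut_out, regression)
```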
### _Suspiciousness Value Calculator_
Armed with the above definitions, we can now give concrete definitions for the terms used in Eq. 1, 2, and 3, specialized to DNNs.
* Given a model element, _i.e._, a neuron or a layer, \(e\), \(M(e)\) is defined to be the set of all mutants generated by mutating \(e\). These sets are produced by the mutation generator process.
* Assuming that \(m\) is a mutant on the element \(e\), \(T_{f}(m,e)\) (or \(T_{p}(m,e)\)) is defined as the set of failing (or passing) test cases that are impacted in type 1 or type 2 fashion by \(m\). More concretely, \(T_{p}(m,e)=\{t\mid\mathcal{E}_{m,t}=\checkmark\wedge t\text{ is passing}\}\), and similarly \(T_{f}(m,e)=\{t\mid\mathcal{E}_{m,t}=\checkmark\wedge t\text{ is failing}\}\). These two sets are defined using quite similar notation; the reader is encouraged to review Definitions 2 and 3 to avoid confusion.
* \(T_{f}\) (or \(T_{p}\)) is the set of originally failing (or passing) test cases. These sets are constructed by the test case splitter.
* \(F\rightsquigarrow P\) (or \(P\rightsquigarrow F\)), for a model element \(e\), is defined as the set of originally failing (or passing) test cases that turned into passing (or failing) as a result of some mutation on \(e\). More concretely, assuming the execution matrix \(\mathcal{E}\) is constructed using type 1 impact, \(F\rightsquigarrow P\) is defined as \(\{t\mid t\text{ is failing}\wedge\exists m\in M(e)\cdot\mathcal{E}_{m,t}=\checkmark\}\). Similarly, \(P\rightsquigarrow F\) is defined as \(\{t\mid t\text{ is passing}\wedge\exists m\in M(e)\cdot\mathcal{E}_{m,t}=\checkmark\}\). In other words, these sets track all the failing/passing test cases that are _type 1 impacted_ by some mutant of a given element.
Having specialized definitions for the terms used in the fault localization formulas described earlier, we are now able to calculate suspiciousness values for the elements in a DNN model. Guided by the user preferences, deepmufl calculates all the values for \(|T_{f}(m,e)|\), _etc._, and plugs them into the specified formula to calculate suspiciousness values for the elements. It is worth noting that if all the mutants generated for a given element are nonviable, the MUSE formula (Eq. 3) and all the variants of Metallaxis (_e.g._, Eq. 1), by definition, will return 0 as the suspiciousness value for the element. Nonviable mutants do not contribute toward localizing the bug, therefore they are considered _overhead_ to the fault localization process. Fortunately, based on our observations in our dataset of buggy DNNs, nonviable mutants are rare.
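Putting the pieces together, the Metallaxis-style layer ranking can be summarized by the following sketch (our own pseudocode-like Python; `score_mutant` applies the user-chosen per-mutant formula, and MUSE would average instead of taking the maximum, per Eq. 3):

```
def rank_layers(layers, mutants_of, score_mutant):
    suspiciousness = {}
    for layer in layers:
        scores = [score_mutant(m) for m in mutants_of(layer) if m.viable]
        suspiciousness[layer] = max(scores, default=0.0)  # 0 if all nonviable
    return sorted(layers, key=lambda l: suspiciousness[l], reverse=True)
```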
Table 2: The mutators implemented in deepmufl and the model elements they operate on, _e.g._, mutators that add/subtract 1 to/from, or multiply/divide by 2, the weights, recurrent weights, and biases of Dense, SimpleRNN, and LSTM layers.
Equivalent mutants are another source of overhead for deepmufl. Currently, we do not have any means of detecting equivalent mutants, but we argue that these mutants do not impact MBFL results, as they are equivalent to the original model and do not impact any passing or failing test cases.
## V Supported DNN Bugs
Due to the complex nature of DNN bugs, and of MBFL itself, we do not hope to give a formal account of what types of DNN bugs deepmufl is capable of localizing. Instead, we attempt to provide as accurate a description of the supported bugs as possible and discuss the way such bugs manifest in DNN programs. The discussion given in this section leverages the characterization of DNN bugs provided by previous research [7, 4, 6].
As we mentioned earlier, the current version of deepmufl operates on pre-trained Keras Sequential models. This means that much of the information, such as training hyper-parameters and whether or not the input data is normalized, has already been stripped away from the input to deepmufl, and the current version of the technique is not capable of detecting any bug related to the training process, _e.g._, training data and hyper-parameters. Moreover, a pre-trained model does not contain bugs related to tensor shapes (as otherwise, the training would fail with shape errors), and since deepmufl does not receive the source code of the buggy model as input, bugs related to GPU usage and API misuse are also out of the reach of the technique, by definition. This leaves us with the so-called _model bugs_ [7]; the extent to which deepmufl is capable of localizing these is explicated below. The four model bug sub-categories are represented with identifiers SC1, ..., SC4 in the rest of this paper for ease of reference.
* **SC1: Activation function**. These bugs are related to the use of a wrong activation function in a layer. We observed that deepmufl detects this type of bug and it also gives actionable, direct fixes.
* **SC2: Model type or properties**. These bugs include wrong weight initialization, wrong network architecture, wrong model for the task, _etc._ Through altering the weights and biases in layers, deepmufl detects weight/bias initialization bugs and pinpoints the location of the bug, but the bug report produced by the tool does not provide helpful information for fixing.
* **SC3: Layer properties**. These bugs include wrong filter/kernel/stride size, sub-optimal number of neurons in a layer, wrong input sample size, _etc._ deepmufl detects and pinpoints the bugs related to filter/kernel/stride size and sub-optimal number of neurons. We observed that the former case sometimes produces nonviable mutants. In the cases where deepmufl produced viable mutants, effective MBFL takes place, and the tool has been able to pinpoint the bug location and provide an explanation on how to fix it. In the latter case, deepmufl was able to pinpoint the bug location, but the bug report does not give helpful information on how to fix the bugs in this sub-category.
* **SC4: Missing/redundant/wrong layer**. These bugs include a missing/extra dense layer, missing dropout layer, missing normalization layer, _etc._ By mutating the layers adjacent to the missing layer, or deleting the redundant layer, deepmufl detects and pinpoints the location of the missing/culprit layer, and in most of the cases, it provides useful information on how to fix such bugs.
By manually examining the bug descriptions provided by the programmers in our dataset of bugs, and also referring to the previous work on DNN bugs and root cause characterization [4], these bugs might manifest as low test accuracy/MSE, constant validation accuracy/MSE/loss during training, NaN validation accuracy/MSE/loss during training, dead nodes, vanishing/exploding gradient, and saturated activation.
At this point, we would like to emphasize that deepmufl is not intended to repair a model, so if a mutation happens to be the fix for the buggy model, the model has to be retrained from scratch so that correct weights and biases will be calculated.
## VI Evaluation
We evaluate deepmufl and compare it to state-of-the-art static and dynamic DNN fault localization techniques, by investigating the following research questions (RQs).
* **RQ1 (Effectiveness)**: 1. How does deepmufl compare to state-of-the-art tools in terms of the number of bugs detected? 2. How many bugs does deepmufl detect from each sub-category of model bugs in our dataset and how does that compare to state-of-the-art tools? 3. What is the overlap of detected bugs among deepmufl and other fault localization techniques?
* **RQ2 (Efficiency)**: 1. What is the impact of mutation selection on the effectiveness and efficiency of deepmufl? 2. How does deepmufl compare to state-of-the-art tools in terms of end-to-end fault localization time?
### _Dataset of DNN Bugs_
To evaluate deepmufl and compare it to state-of-the-art DNN fault localization techniques, we queried the StackOverflow Q&A website for posts about Keras that had at least one accepted answer. Details about the SQL query used to obtain the initial list of posts are available online [36]. The query resulted in 8,412 posts that we manually sieved through to find the programs with model bugs. Specifically, we kept the bugs that satisfied the following conditions.
* Implemented using Sequential API of Keras,
* The bug in the program was a _model bug_ supported by deepmufl as described in §V, and
* The bug either had a training dataset available in the post in some form (_e.g._, hard-coded, clearly described in the body of the post, or a link to the actual data was provided) or we could see the described error using synthetic data obtained from scikit-learn's dataset generation API, as in the sketch following this list.
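For illustration, a synthetic dataset for a three-feature binary classifier like the one in the motivating example could be generated as follows; the parameter values are hypothetical and chosen only to mirror that example:

```
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Deterministic seed so repeated runs yield identical data
X, y = make_classification(n_samples=1000, n_features=3, n_informative=3,
                           n_redundant=0, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```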
This resulted in 102 bugs, and we paired each bug with a fix obtained from the accepted answer to the question. We further added 7 bugs from the DeepLocalize dataset [11] that also come from StackOverflow, and we paired these bugs with their fixes obtained from the most up-voted answer. Thus, we ended up with 109 bugs in total. To the best of our knowledge, this is the largest dataset of model bugs obtained from StackOverflow, and it overlaps with the existing DNN bug datasets from previous research [12, 56]. Our bug dataset contains 85 classifiers (45 fully-connected DNNs, 29 CNNs, and 11 RNNs) and 24 regression models (19 fully-connected DNNs, 3 CNNs, and 2 RNNs), and each category has at least one example of model bugs. Therefore, we believe that our dataset is highly representative of model bugs, _i.e._, the bugs supported by deepmufl (and other tools that support this type of bugs), as we have examples of each sub-category of bug in various locations of the models for various regression and classification tasks.
After loading the training dataset for the bugs, we fitted the buggy models three times and stored them in .h5 file format separately. The repetition was conducted to take randomness in training into account. Randomness in data generation was mitigated by using deterministic random seeds. For fault localization purposes, we used the test dataset, and if it was not available, we used the training dataset itself. When we had to use synthesized data points, we deterministically split the generated set of data into training and testing datasets.
### _Baseline Approaches and Measures of Effectiveness_
In RQ1 and RQ2, we compare five different configurations of deepmufl to recent static and dynamic DNN fault localization tools. The five configurations of deepmufl are as follows.
* **Metallaxis**[30]: In this setting, we use the Metallaxis formula to calculate suspiciousness values of model elements. Metallaxis, by default, uses SBI [46] to calculate suspiciousness values for individual mutants. A recent study [57] provides empirical evidence on the superiority of Ochiai [47] over SBI when used within Metallaxis formula. Thus, we considered the following four combinations: _type 1 impact_: (1) SBI formula (_i.e._, Eq. 1); (2) Ochiai formula (_i.e._, Eq. 2), and _type 2 impact_: (3) SBI formula (_i.e._, Eq. 1); (4) Ochiai formula (_i.e._, Eq. 2).
* **MUSE**[17]: We used the default formula of MUSE to calculate the suspiciousness of model elements. For this, only type 1 impact is considered, as the heuristics behind MUSE are defined based on type 1 impact.
Our technique follows a more traditional way of reporting root causes for the bugs [30, 17, 57, 19, 22, 21, 15], in that it reports a list of potential root causes ranked based on the likelihood of being responsible for the bug. This allows the users to find the bugs faster and spend less time reading through the fault localization report, which in turn increases the practicality of the technique [58]. We have used the top-\(N\) metric, with \(N=1\), to measure the effectiveness of deepmufl in RQ1 and RQ2. Specifically, if the number of any of the buggy layers of the bug appeared in the first place in the output of deepmufl, we reported it as _detected_, otherwise we marked the bug as _not-detected_. We emphasize that the top-1 metric gives strong evidence on the effectiveness of deepmufl, as developers usually only inspect top-ranked elements, _e.g._, over 70% of the developers only check top-5 ranked elements [59].
Our selection criteria for the studied fault localization techniques are: (1) availability; (2) reproducibility of the results reported in the original papers, so as to have a level of confidence in the correctness of the results reported here; and (3) support for _model bugs_ in our dataset, so that we can make a meaningful comparison to deepmufl. Below we give a brief description of each of the selected tools, why we believe they support model bugs, and how we have interpreted their outputs in our experiments, _i.e._, when we regard a bug as being detected by the tool.
#### VI-B1 Neuralint
A static fault localization tool that uses 23 rules to detect faults and design inefficiencies in the model. Each rule is associated with a set of rules of thumb to fix the bug that are shown to the user in case the precondition for any of the rules is satisfied. The five rules described in Section 4.2.1 of the paper target model bugs. Neuralint produces outputs of the form \([\texttt{Layer}\ L==>MSG]^{*}\), where \(L\) is the suspicious layer number, and \(MSG\) is a description of the detected issue and/or a suggestion on how to fix the problem. A bug is deemed _detected_ by this tool if it is located in the layer mentioned in the output message or the messages describe any of the root causes of the bug.
#### VI-B2 DeepLocalize
A dynamic fault localization technique that detects numerical errors during model training. One of three rules described in Section III.D of the paper checks model bugs related to wrong activation function. DeepLocalize produces a single message of the form \(\texttt{Batch}\ B\ \texttt{Layer}\ L:\ MSG\), where \(B\) is the batch number wherein the symptom is detected and \(L\) and \(MSG\) are the same as we described for Neuralint. A bug is deemed _detected_ if it is located in the layer mentioned in the output message or the message describes any of the root causes of the bug.
#### VI-B3 DeepDiagnosis
A tool similar to DeepLocalize, but with more bug pattern rules and a decision procedure to give actionable fix suggestions to the users based on the observations. All 8 rules in Table 2 of the paper monitor the symptoms of model bugs. Similar to DeepLocalize, DeepDiagnosis produces a single message of the form \(\texttt{Batch}\ B\ \texttt{Layer}\ L:\ MSG_{1}\ [\texttt{OR}\ MSG_{2}]\), where \(B\) and \(L\) are the same as described in DeepLocalize and \(MSG_{1}\) and
| deepmufl configuration / tool | SC1 | SC2 | SC3 | SC4 | Total (detected) |
| --- | --- | --- | --- | --- | --- |
| Metallaxis SBI + Type 1 | 31 | 2 | 6 | 3 | 42 |
| Metallaxis Ochiai + Type 1 | 36 | 2 | 7 | 2 | 47 |
| Metallaxis SBI + Type 2 | 18 | 2 | 4 | 2 | 26 |
| Metallaxis Ochiai + Type 2 | 29 | 2 | 4 | 2 | 37 |
| MUSE | 41 | 3 | 6 | 3 | 53 |
| Neuralint | 15 | 1 | 4 | 1 | 21 |
| DeepLocalize | 21 | 0 | 4 | 1 | 26 |
| DeepDiagnosis | 22 | 2 | 5 | 1 | 30 |
| UMLAUT | 18 | 1 | 6 | 0 | 25 |
| Total (entire dataset) | 80 | 4 | 17 | 8 | 109 |

Table 3: Effectiveness of different deepmufl configurations and four other tools in detecting bugs from the four sub-categories of model bugs
\(MSG_{2}\) are two alternative solutions that the tool might suggest to fix the detected problem. A bug is deemed _detected_ if it is located in the layer mentioned in the output message or the message describes any of the root causes of the bug.
#### VI-B4 UMLAUT
A hybrid, _i.e._, a combination of static and dynamic, technique that works by applying heuristic static checks on, and injecting dynamic checks in, the program, parameters, model structure, and model behavior. Violated checks raise error flags which are propagated to a web-based interface that uses visualizations, tutorial explanations, and code snippets to help users find and fix detected errors in their code. All three rules described in Section V-B2 of the paper target model bugs. The tool generates outputs of the form \([<MSG_{1}>\cdots<MSG_{m}>]^{*}\), where \(m>0\) and \(MSG_{i}\) is a description of the problem detected by the tool. A bug is deemed _detected_ if any of the messages match the fix prescribed by the ground-truth.
### _Results_
To answer RQ1, we ran deepmufl (using its five configurations) and four other tools on the 109 bugs in our benchmark. We refer the reader to the repository [36] for the raw data about which bug is detected by which tool, and here we describe the summaries and provide insights.
At top-1, deepmufl detects 42, 47, 26, 37, and 53 bugs using its Metallaxis SBI + Type 1, Metallaxis Ochiai + Type 1, Metallaxis SBI + Type 2, Metallaxis Ochiai + Type 2, and MUSE configurations, respectively. Meanwhile, Neuralint, DeepLocalize, DeepDiagnosis, and UMLAUT detect 21, 26, 30, and 25 bugs, respectively. Therefore, as far as the number of bugs detected by each technique is concerned, the MUSE configuration of deepmufl is the most effective configuration of deepmufl, significantly outperforming the studied techniques, and Metallaxis SBI + Type 2 is the least effective one, outperformed by DeepDiagnosis. An empirical study [57], which uses a specific dataset of traditional buggy programs, concludes that Metallaxis Ochiai + Type 2 is the most effective configuration for MBFL. Meanwhile, our results for DNNs corroborate the theoretical results by Shin and Bae [60], _i.e._, we provide empirical evidence that in the context of DNNs MUSE is the most effective MBFL approach.
Table 3 reports more details and insights on the numbers discussed above. Specifically, it reports the number of bugs detected by each configuration of deepmufl and the four other studied tools from each sub-category of model bugs present in our dataset of bugs. As we can see from the upper half of the table, MUSE is most effective in detecting bugs related to activation function (SC1), bugs related to model type/properties (SC2), and wrong/redundant/missing layer (SC4), while the Metallaxis Ochiai + Type 1 configuration outperforms other configurations in detecting bugs related to layer properties (SC3). Similarly, from the bottom half of the table, we can see that other tools are also quite effective in detecting bugs related to activation function, with DeepDiagnosis being the most effective one among them. We can also observe that UMLAUT has been the most effective tool in detecting bugs related to layer properties. As we can see, the MUSE configuration of deepmufl is consistently more effective than other tools across all bug sub-categories.
Table 4 provides further insights on the overlap of bugs detected by each variant of deepmufl and those detected by the other four tools. Each value in row \(r\) and column \(c\) of this table, where \(2\leq r\leq 5\) and \(2\leq c\leq 6\), denotes the percentage of bugs detected by the deepmufl variant corresponding to row \(r\) and the tool corresponding to column \(c\). The values inside the parentheses are the actual numbers of bugs. For example, 8 out of 42, _i.e._, 19.05%, of the bugs detected by the Metallaxis SBI + Type 1 configuration of deepmufl are _also_ detected by DeepLocalize. The last column of the table reports the same statistics, except for all four of the studied tools combined. As we can see from the table, 60.38% of the bugs detected by the MUSE configuration of deepmufl are already detected by one of the four tools, yet it detects 21 (=53-32) bugs that are not detected by any other tool. This is because deepmufl approaches the fault localization problem from a fundamentally different angle, giving it more flexibility. Specifically, instead of looking for conditions that trigger a set of hard-coded rules indicating bug patterns, deepmufl breaks the model using a set of mutators to observe how different mutations impact the model behavior. Then, by leveraging the heuristics underlying traditional MBFL techniques, it performs fault localization using the observed impacts on the model behavior. Listing 2 shows an example of a model bug that only deepmufl can detect.
Listing 2: A buggy model from our dataset that only deepmufl detects.
To answer RQ2, we ran deepmufl and the other four tools on a Dell workstation with Intel(R) Xeon(R) Gold 6138 CPU at 2.00 GHz, 330 GB RAM, 128 GB RAM disk, and Ubuntu 18.04.1 LTS, and measured the time needed for model training as well as for the MBFL process to complete. We repeated this process four times, and in each round of deepmufl's execution, we randomly selected 100% (_i.e._, no selection), 75%, 50%, and 25% of the generated mutants for testing. Random mutation selection is a common method for reducing the overhead of mutation analysis [61, 35]. During random selection, we made sure that each layer receives at least one mutant, so that we do not mask any bug; a sketch of this selection is shown below. The last row in Table 5 reports the average timing (of 3 runs) of MBFL in each round of mutation selection. The table also reports the impact of mutation selection on the number of bugs detected by each configuration of deepmufl. As we can see, in the MUSE configuration of deepmufl, by using 50% of the mutants, one can halve the execution time and still detect 92.45% of the previously detected bugs. Therefore, mutation selection can be used as an effective way of curtailing MBFL time in DNNs.
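The layer-aware random selection described above can be sketched as follows (our own code, not deepmufl's interface):

```
import random

def select_mutants(mutants_by_layer, fraction, seed=0):
    """Randomly keep `fraction` of the mutants, but at least one per layer,
    so that no layer's bug can be masked by the selection."""
    rng = random.Random(seed)
    keep = []
    for mutants in mutants_by_layer.values():
        k = max(1, round(fraction * len(mutants)))
        keep.extend(rng.sample(mutants, k))
    return keep
```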
For a fair comparison of deepmufl to state-of-the-art fault localization tools in terms of efficiency, we need to take into account the fact that deepmufl requires a pre-trained model as its input. Thus, as far as the end-to-end fault localization time from an end-user's perspective is concerned, we want to take into consideration the time needed to train the input model in addition to the time needed to run deepmufl. With training time taken into account, deepmufl takes, on average, 1492.48, 1714.63, 1958.35, and 2192.4 seconds when we select 25%, 50%, 75%, and 100% of the generated mutants, respectively. We also emphasize that the time for DeepLocalize and DeepDiagnosis varied based on whether or not they found the bug. Given the fact that a user could terminate the fault localization process after a few epochs when they lose hope in finding bugs with these two tools, we report two average measurements for DeepLocalize and DeepDiagnosis: (1) the average time irrespective of whether the tools succeed in finding the bug; (2) the average time when the tools successfully find the bug. Unlike these two tools, the time for Neuralint and UMLAUT does not change based on whether they detect a bug or not. DeepLocalize takes on average 1244.09 seconds, and it takes on average 57.29 seconds when the tool successfully finds the bug. These numbers for DeepDiagnosis are 1510.71 and 11.05 seconds, respectively. Meanwhile, Neuralint and UMLAUT take on average 2.87 seconds and 1302.61 seconds, respectively, to perform fault localization.
### _Discussion_
It is important to note that while deepmufl outperforms state-of-the-art techniques in terms of the number of bugs detected in our dataset, it is not meant to replace them. Our dataset only covers a specific type of bugs, _i.e._, model bugs, while other studied techniques push the envelope by detecting bugs related to factors like learning rate and training data normalization, which are currently outside of deepmufl's reach. We observed that combining all the techniques results in detecting 87 of the bugs in our dataset; exploring ways to combine various fault localization approaches by picking the right tool based on the characteristics of the bug is an interesting topic for future research. Moreover, depending on the applications and resource constraints, a user might prefer one tool over another. For example, although Neuralint might be limited by its static nature, _e.g._, it might not be able to analyze models that use complex computed values and objects in their construction, it takes only a few seconds for the tool to conduct fault localization. Thus, in some applications, _e.g._, online integration with IDEs, approaches like that of Neuralint might be the best choice.
A major source of overhead in an MBFL technique is related to the sheer number of mutants that the technique generates and tests [62, 61]. Sufficient mutator selection [63] refers to the process of selecting a subset of mutators that achieve the same (or a similar) effect, _i.e._, the same or similar mutation score and the same or similar number of detected bugs, but with a smaller number of mutants generated and tested. For the mutators of Table 2, so far, we have not conducted any analysis on which mutators might be redundant, as reliable mutator selection requires a larger dataset than we currently have. We postpone this study to future work.
Combining fault localization tools can also be conducted with the goal of improving efficiency. We see the opportunity of building faster, yet more effective, fault localization tools by predicting the likely right tool upfront for a given model, or by running tools one by one and moving on to the next tool once we have a level of confidence that the current tool will not find the bug. We postpone this study to future work.
Lastly, we would like to emphasize that comparison to the above-mentioned techniques on a dataset of bugs that deepmufl supports is fair, as the other tools are also designed to detect bugs in the category of model bugs. However, making these tools perform better than this would require augmenting their current rule-bases with numerous new rules, yet adding new rules comes with the obligation of justifying the generality and rationale behind them, which might be a quite difficult undertaking. deepmufl, on the other hand, approaches the fault localization problem differently, allowing for more flexibility without the need for hard-coded rules.
## VII Threats to Validity
As with most empirical evaluations, we do not have a working definition of a representative sample of DNN bugs, but we made efforts to ensure that the bugs we used in the evaluation are as representative as possible by making sure
| deepmufl configuration / selected mutants | 25% | 50% | 75% | 100% |
| --- | --- | --- | --- | --- |
| Metallaxis SBI + Type 1 | 37 | 41 | 42 | 42 |
| Metallaxis Ochiai + Type 1 | 40 | 46 | 47 | 47 |
| Metallaxis SBI + Type 2 | 25 | 26 | 26 | 26 |
| Metallaxis Ochiai + Type 2 | 34 | 37 | 37 | 37 |
| MUSE | 42 | 49 | 51 | 53 |
| Time (s) | 340.58 | 562.72 | 806.45 | 1,040.49 |

Table 5: The impact of mutation selection on the effectiveness and execution time of deepmufl
that our dataset has diverse examples of bugs from each sub-category of model bugs.
Many of the bugs obtained from StackOverflow did not come with accompanying training datasets. To address this issue, we utilized the dataset generation API provided by scikit-learn [64] to generate synthetic datasets for regression or classification tasks. We ensured that the errors described in each StackOverflow post would manifest when using the synthesized data points and that applying the fix suggested in the accepted response post would eliminate the bug. However, it is possible that this change to the training process may introduce new unknown bugs. To mitigate this risk, we have made our bug benchmark publicly available [36].
Another potential threat to the validity of our results is the possibility of bugs in the construction of deepmulti itself, which could lead to incorrect bug localization. To mitigate this, we make the source code of deepmulti publicly available for other researchers to review and validate the tool.
Another threat to the validity of our results is the potential impact of external factors, such as the stochastic nature of the training process, the synthesized training/testing datasets, and system load, on our measurements. To address this, besides using deterministic seeds for dataset generation and splitting, we repeated our experiments with deepmulti three times. Similarly, we also ran the other dynamic tools three times to ensure that their results were not affected by randomness during training. We did not observe any differences in effectiveness between the rounds for either deepmulti or the other studied techniques. Additionally, we repeated the time measurements for each round and reported the average timing to ensure that our time measurements were not affected by system load. Furthermore, judging whether or not any of the tools detect a bug requires manual analysis of the textual description of the bug and matching it to the tools' output messages, which might be subject to bias. To mitigate this bias, we have made the output messages produced by the tools available for other researchers [36].
Lastly, deepmulti uses a threshold parameter to compare floating-point values (see Section IV-C). In our experiments, we used the default value of 0.001 and ensured that smaller threshold values yield the same results.
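As a minimal illustration of this comparison mechanism (the function name and shape are ours; deepmulti's internal implementation may differ):

```python
def approx_equal(a: float, b: float, threshold: float = 0.001) -> bool:
    """Treat two floating-point values as equal if they differ by at most
    the threshold (0.001 is the default used in our experiments)."""
    return abs(a - b) <= threshold
```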
## VIII Related Work
Neuralint [12] uses _graph transformations_[65] to abstract away unnecessary details in the model and check the bug patterns directly on the graph. While Neuralint is orders of magnitude faster than deepmulti, it proved to be less effective than deepmulti in our dataset.
DeepLocalize [11] and DeepDiagnosis [8] intercept the training process looking for known bug patterns such as numerical errors. DeepDiagnosis pushes the envelope by implementing a decision tree that gives actionable fix suggestions based on the detected symptoms. A closely related technique, UMLAUT [34], works by applying heuristic static checks on, and injecting dynamic checks in, various parts of the DNN program. deepmulti outperforms DeepLocalize, DeepDiagnosis, and UMLAUT in terms of the number of bugs detected.
DeepFD [66] is a recent learning-based fault localization technique that frames fault localization as a learning problem. MODE [25] and DeepFault [26] implement white-box DNN testing techniques that utilize suspiciousness values obtained _via_ spectrum-based fault localization to increase the hit spectrum of neurons and identify suspicious neurons whose weights have not been calibrated correctly and are thus considered responsible for inadequate DNN performance. MODE is not publicly available; DeepFault is, but it is hard-coded to the examples shipped with its replication package, so we could not make the tool work without substantial modifications. Moreover, these techniques work best on ReLU-based networks, and applying them to most of the bugs in our dataset would not make much sense.
Other related works are as follows. PAFL [67] operates on RNN models by converting such models into probabilistic finite automata (PFAs) and localizing faulty sequences of state transitions on the PFAs. Sun _et al._ [68] propose DeepCover, which uses a variant of spectrum-based fault localization for DNN explainability.
## IX Conclusion
This paper revisits mutation-based fault localization in the context of DNNs and presents a novel DNN fault localization technique, named deepmulti. The technique is based on the idea of mutating a pre-trained DNN model and calculating suspiciousness values according to the Metallaxis and MUSE approaches, the Ochiai and SBI formulas, and two types of impacts of mutations on the results of test data points. deepmulti is compared to state-of-the-art static and dynamic fault localization systems [11, 8, 34, 12] on a benchmark of 109 model bugs. In this benchmark, while deepmulti is slower than the other tools, it proved to be almost two times more effective than them in terms of the total number of bugs detected, and it detects 21 bugs that none of the studied tools were able to detect. We further studied the impact of mutation selection on fault localization time. We observed that we can halve the time taken by deepmulti to perform fault localization while losing only 7.55% of the previously detected bugs.
## Acknowledgments
The authors thank Anonymous ASE 2023 Reviewers for their valuable feedback. We also thank Mohammad Wardat for his instructions on querying StackOverflow. This material is based upon work supported by the National Science Foundation (NSF) under the grant #2127309 to the Computing Research Association for the CIFellows Project. This work is also partially supported by the NSF grants #2223812, #2120448, and #1934884. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. |
2309.17113 | Meta-Path Learning for Multi-relational Graph Neural Networks | Existing multi-relational graph neural networks use one of two strategies for
identifying informative relations: either they reduce this problem to low-level
weight learning, or they rely on handcrafted chains of relational dependencies,
called meta-paths. However, the former approach faces challenges in the
presence of many relations (e.g., knowledge graphs), while the latter requires
substantial domain expertise to identify relevant meta-paths. In this work we
propose a novel approach to learn meta-paths and meta-path GNNs that are highly
accurate based on a small number of informative meta-paths. Key element of our
approach is a scoring function for measuring the potential informativeness of a
relation in the incremental construction of the meta-path. Our experimental
evaluation shows that the approach manages to correctly identify relevant
meta-paths even with a large number of relations, and substantially outperforms
existing multi-relational GNNs on synthetic and real-world experiments. | Francesco Ferrini, Antonio Longa, Andrea Passerini, Manfred Jaeger | 2023-09-29T10:12:30Z | http://arxiv.org/abs/2309.17113v2 | # Meta-Path Learning for Multi-relational Graph Neural Networks
###### Abstract
Existing multi-relational graph neural networks use one of two strategies for identifying informative relations: either they reduce this problem to low-level weight learning, or they rely on handcrafted chains of relational dependencies, called meta-paths. However, the former approach faces challenges in the presence of many relations (e.g., knowledge graphs), while the latter requires substantial domain expertise to identify relevant meta-paths. In this work we propose a novel approach to learn meta-paths and meta-path GNNs that are highly accurate based on a small number of informative meta-paths. Key element of our approach is a scoring function for measuring the potential informativeness of a relation in the incremental construction of the meta-path. Our experimental evaluation shows that the approach manages to correctly identify relevant meta-paths even with a large number of relations, and substantially outperforms existing multi-relational GNNs on synthetic and real-world experiments.
## 1 Introduction
Graph neural networks (GNNs) have emerged as a powerful framework for analyzing networked data [6; 8; 18; 24], enabling effective learning and representation of complex relationships in several real-world applications [2; 23; 31; 39]. Standard GNN approaches have mostly focused on homogeneous graphs [7; 30; 34], where all nodes and edges belong to a single type. However, many real-world graph datasets exhibit heterogeneity, with multiple types of nodes and relations [4; 22; 28].
Treating heterogeneous graphs as homogeneous and aggregating information uniformly across all relations is a suboptimal approach, as different relations can convey largely different semantic information about the nodes they connect. A simple and effective strategy to retain the rich semantic information present in heterogeneous graphs is relying on meta-paths, which are chains of relational dependencies (e.g., "actor -> acted in -> movie -> has genre"). The challenge lies in determining the relevant meta-paths in a given graph. Existing methods either rely on predefined meta-paths defined by domain experts [3; 5; 32], which are extremely expensive to collect, or learn "soft" meta-paths by learning to assign weights to relations [25; 37; 38], an approach that only works with few relations and fails to scale to knowledge graphs. Solutions conceived for mining meta-paths from knowledge graphs typically consider relations only, ignoring node features altogether [16; 33].
To overcome these limitations, we propose a novel approach to learn meta-paths and meta-path GNNs that are highly accurate based on a small number of informative meta-paths. Key to our approach is the formalization of a scoring function, inspired by the relational information gain principle [14], that evaluates the potential informativeness of a relation in the incremental construction of the meta-path. This allows us to learn a Meta-Path Graph Neural Network (MP-GNN) in which different layers convey information from different relations while retaining node-specific features in the aggregation process.
The main contributions of this work can be summarized as follows:
* We propose a scoring function evaluating the potential informativeness of a relation in the meta-path construction.
* We introduce MP-GNN, a simple variant of the RGCN architecture, which effectively combines learned meta-paths and node features into a multi-relational graph processing architecture.
* We provide an extensive experimental evaluation on synthetic and real-world datasets showing how our approach substantially outperforms existing multi-relational GNNs when dealing with graphs with a large number of relations.
## 2 Related work
In recent research, meta-path mining has emerged as an effective approach for analyzing heterogeneous graphs, relying on frequency cutoffs and sequential pattern mining strategies to identify promising meta-paths [17, 26, 36]. In the field of neuro-symbolic reasoning for knowledge graph completion (KGC), various approaches use reinforcement learning-based algorithms to explore relation paths and derive logical formulas [16, 33]. Other approaches [11, 12, 20] search for the most relevant meta-path using variants of random walk search. A major limitation of all these approaches is that they are incapable of accounting for node features in determining the relevance of a candidate meta-path, making them unusable in knowledge graph embedding scenarios.
In the field of heterogeneous graph embedding, several methods have been proposed to enhance node and graph embedding by incorporating meta-paths. These methods can be broadly categorized into two groups: those using predefined meta-paths and those learning meta-paths by weighting the contribution of different relations.
In the first group, Meta-path Aggregated Graph Neural Network [5] focuses on aggregating node features along predefined meta-paths using an attention mechanism, capturing diverse structural patterns. Heterogeneous Attention Network [32] introduces a hierarchical attention mechanism to handle heterogeneous graphs, enhancing performance and interpretability. GraphMSE [13] tackles the problem of information-rich meta-path selection by aggregating information over different meta-paths and adopting BFS (Breadth First Search) as its meta-path expansion strategy. Meta-path extracted graph neural network [3] incorporates message passing and emphasizes interpretability and semantic relationships. However, these approaches require that meta-paths be provided beforehand, which severely limits their adaptability.
In the second group, Relational Graph Convolutional Networks (RGCN) [25] capture relation-specific patterns with distinct trainable parameters for each edge type. R-HGNN [35] uses a dedicated graph convolution component for unique node representations from relation-specific graphs. RSHN [40] integrates Coarsened Line Graph Neural Network (CL-GNN) for enhanced embedding in large-scale heterogeneous graphs. Graph Transformer Networks (GTN) [37] learn task-specific multi-hop connections (i.e., meta-paths) for useful node representations. FastGTN [38] addresses GTN's high complexity by implicitly transforming graphs. HGN [15] employs GAT as a backbone for a simple yet effective model. HGT [9] uses node and edge-type dependent parameters for heterogeneous attention. MVHRE [19] enriches relational representations for link prediction using a multi-view network representation learning framework. While effective with a small number of candidate relations, these approaches' performance degrades as the number increases, as shown in our experimental evaluation.
## 3 Preliminary
In this section, we provide an overview of fundamental concepts of our approach.
**Heterogeneous graph.** A heterogeneous graph is a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{T}_{v},\mathcal{T}_{e})\) where \(\mathcal{V}\) is the set of nodes or entities and \(\mathcal{E}\) is the set of edges. Each node \(v\) and edge \(e\) has a type, specified by the mapping functions \(\tau_{v}(v):\mathcal{V}\rightarrow\mathcal{T}_{v}\) and \(\tau_{e}(e):\mathcal{E}\rightarrow\mathcal{T}_{e}\). Moreover, each node \(v\) has a feature vector \(x_{v}\in\mathrm{I\!R}^{d}\).
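As a concrete illustration, the sketch below shows one way to represent such a graph using PyTorch Geometric's `HeteroData` container; the node and edge type names and the feature dimension are illustrative choices, not part of the formal definition above.

```python
import torch
from torch_geometric.data import HeteroData

# A toy heterogeneous graph with two node types and two edge types.
data = HeteroData()

# Node features x_v in R^d for each node type (here d = 8).
data['actor'].x = torch.randn(4, 8)   # 4 actor nodes
data['movie'].x = torch.randn(3, 8)   # 3 movie nodes

# Typed, directed edges stored per (src_type, relation, dst_type) triple.
data['actor', 'main_actor_in', 'movie'].edge_index = torch.tensor(
    [[0, 1, 2],    # source actor indices
     [0, 0, 1]])   # destination movie indices
data['movie', 'directed_by', 'actor'].edge_index = torch.tensor(
    [[0, 2],
     [3, 3]])

print(data.node_types)  # ['actor', 'movie']
print(data.edge_types)  # [('actor', 'main_actor_in', 'movie'), ...]
```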
**Meta-path** A meta-path \(mp\) is a relation sequence defined on a heterogeneous graph \(\mathcal{G}\), denoted in the form \(\xrightarrow{r_{1}}\xrightarrow{r_{2}}...\xrightarrow{r_{L}}\), where \(r_{1},...,r_{L}\) are relation types and for each consecutive pair of relations \(\xrightarrow{r_{i}}\xrightarrow{r_{i+1}}\) the intersection between the valid node types that are the destination of \(\xrightarrow{r_{i}}\) and the valid node types that are the source of \(\xrightarrow{r_{i+1}}\) is non-empty. Note that this is a more general definition than
the one in [27], in that it allows multiple node types as sources and destinations of a given relation, consistently with what can be found in large general-purpose knowledge graphs.
**RGCN layer** The relational graph convolutional layer from [25] extends the standard convolution operation on graphs [10] to the multi-relational setting by assigning specific parameters for each relation type. Message passing update for node \(i\) at layer \(l\) is given by:
\[h_{i}^{(l+1)}=\sigma\left(W_{0}^{(l)}h_{i}^{(l)}+\sum_{r\in\mathcal{R}}\sum_{j \in\mathcal{N}_{i}^{r}}\frac{1}{c_{i,r}}W_{r}^{(l)}h_{j}^{(l)}\right) \tag{1}\]
where \(\mathcal{R}\) is the set of relations in the graph, \(\mathcal{N}_{i}^{r}\) is the set of \(r\)-neighbours of node \(i\) and \(c_{i,r}\) is a fixed or learnable normalizing parameter.
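For illustration, a from-scratch sketch of this layer is given below; it fixes the normalizer \(c_{i,r}\) to the in-degree \(|\mathcal{N}_{i}^{r}|\) and uses per-relation edge lists, so it is a minimal reading of Eq. 1 rather than the reference implementation of [25].

```python
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    """Relational graph convolution (Eq. 1) with c_{i,r} = |N_i^r|."""
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.w0 = nn.Linear(in_dim, out_dim, bias=False)       # W_0 (self-loop)
        self.w_rel = nn.ModuleList(                             # one W_r per relation
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_relations))

    def forward(self, h, edge_indices):
        # h: [N, in_dim]; edge_indices[r]: LongTensor [2, E_r] of (src, dst)
        out = self.w0(h)
        for r, (src, dst) in enumerate(edge_indices):
            msg = self.w_rel[r](h)[src]                         # W_r h_j per edge
            agg = torch.zeros_like(out).index_add_(0, dst, msg) # sum over j in N_i^r
            deg = torch.zeros(h.size(0), 1).index_add_(
                0, dst, torch.ones(len(src), 1)).clamp(min=1.0)
            out = out + agg / deg                               # 1/c_{i,r} normalization
        return torch.relu(out)                                  # sigma
```

Treating \(c_{i,r}\) as a fixed degree normalizer (rather than a learnable scalar) is one of the two options Eq. 1 allows and keeps the sketch minimal.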
## 4 MP-GNN learning
The goal of our approach is to learn relevant meta-paths that can serve as predictive features for the node classification task. Differently from the approaches that use all relations at the same time by weighting each edge type contribution, we focus on finding the relevant chains of relations (i.e., meta-paths) beneficial for making accurate predictions. Note that, differently from what happens in purely relational settings [11, 12, 17, 20, 26, 36], we assume here that the informativeness of a meta-path also depends on the features of the nodes that are being traversed (which include the node type, but also node attributes and potentially pre-computed node embeddings). Our approach accounts for this aspect in mining relevant meta-paths. Meta-paths are constructed in a greedy, incremental fashion using the idea of relational information gain [14] to score candidate extensions of an already constructed meta-path prefix. Consider the toy node classification task in Figure 1. To incrementally build the correct meta-path (bottom right in the legend), one has to realize that "Main actor in" is a better candidate than "Appeared in". Intuitively, our scoring function does this by assigning weights (i.e., pseudo-labels) to nodes reached by a candidate relation in such a way that the label of the target node can be inferred by propagating the pseudo-label of the neighbour. Figure 2 shows an example of weight assignment for the "Main actor in" and "Appeared in" relations, indicating a higher score for the former. However, these pseudo-labels only hint at the potential informativeness of the relation. Indeed, being a main actor in a movie is not enough to qualify as an award winning actor, even in the toy example of Figure 1. The movie should be a drama (node feature), and be directed by an award winning director. Whether this potential informativeness actually materializes is determined in the following steps, where the pseudo-labels become new prediction targets for the next extension of the meta-path under construction. Details of this method are described in Section 4.1.
Figure 1: A toy node classification problem. Node shapes indicate types, while node attributes and edge types are encoded as colors. The task consists of labelling actor nodes (pentagons, which do not have attributes). An \(Actor\) is labelled as positive if involved as main actor in a drama directed by an award winning director.
Once a candidate meta-path has been extracted, it is used to build an MP-GNN in which each layer corresponds to a relation on the meta-path. Section 4.2 presents a formalization of this architecture and shows how to extend it to account for multiple meta-paths. Finally, these ingredients are combined into an overall algorithm to jointly learn a meta-path and a corresponding MP-GNN. For the sake of readability, the algorithm is presented for the single meta-path case, but its extension to multiple meta-paths using a beam search is straightforward (we employed a beam size equal to three in the experiments). Note that this algorithm is designed to identify existential meta-path features, i.e., cases where the existence of an instance of a meta-path is informative for the class label. Adaptations and extensions where counts or proportions of meta-path realizations are the relevant feature are the subject of future work.
### Scoring function
The goal of the scoring function is to provide a measure of the informativeness of a relation towards predicting node labels. We start by discussing the first iteration, i.e., identifying the first relation in the meta-path, and then show how the function can be adapted to deal with meta-path extension.
In the first iteration, the scoring function takes as input a list of nodes together with their target labels. Under the previously introduced existential quantification assumption, a candidate relation \(r\) is informative for the label of a node \(i\) if at least one of the neighbors \(\mathcal{N}_{i}^{r}\) of \(i\) according to \(r\) belongs to the ground-truth meta-path, and \(i\) has the right features (remember that the label is assumed to depend on the combination of the meta-path and the features of the nodes being traversed). This can be formalized as follows:
\[\tilde{y}_{i}^{r}=\Theta^{T}h_{i}^{(0)}\cdot\max_{j\in\mathcal{N}_{i}^{r}}w_{j} \tag{2}\]
Here \(\Theta\) is a learnable weight vector accounting for the contribution of the node features, while \(w_{j}\) is a learnable node weight that is set (close) to 1 if node \(j\) is predicted as belonging to the ground-truth meta-path, and (close to) zero otherwise. The score of \(r\) is computed by minimizing the MSE between the predicted and ground truth node labels over \(\Theta\) and \(\mathbf{w}\):
\[s_{r}=\min_{\Theta,\mathbf{w}}\frac{1}{N}\sum_{i=1}^{N}(\tilde{y}_{i}^{r}-y_{ i})^{2} \tag{3}\]
The relation with the minimum score is selected as the first relation of the meta-path.
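The sketch below illustrates one way to estimate this score with gradient descent; the dense neighbor mask, the sigmoid parametrization of \(\mathbf{w}\), and the \(\{0,1\}\) label encoding are simplifications of ours, not details prescribed by Eqs. 2-3.

```python
import torch

def score_relation(h0, neighbor_mask, y, epochs=200, lr=0.05):
    """Estimate s_r (Eq. 3) for one candidate relation r.
    h0:            [N, d]  initial node features
    neighbor_mask: [N, N]  bool, neighbor_mask[i, j] = True iff j in N_i^r
    y:             [N]     node labels, encoded here in {0, 1}
    """
    N, d = h0.size(0), h0.size(1)
    theta = torch.zeros(d, requires_grad=True)         # Theta in Eq. 2
    w_logit = torch.zeros(N, requires_grad=True)       # pseudo-labels w
    opt = torch.optim.Adam([theta, w_logit], lr=lr)
    neg_inf = torch.tensor(float('-inf'))
    for _ in range(epochs):
        w = torch.sigmoid(w_logit)                     # keep w in [0, 1]
        # max_{j in N_i^r} w_j; nodes without r-neighbors get w_max = 0
        w_max = torch.where(neighbor_mask, w.unsqueeze(0), neg_inf).max(dim=1).values
        w_max = torch.where(torch.isinf(w_max), torch.zeros_like(w_max), w_max)
        y_tilde = (h0 @ theta) * w_max                 # Eq. 2
        loss = ((y_tilde - y.float()) ** 2).mean()     # MSE of Eq. 3
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()                                 # approximate s_r
```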
To explain how the scoring of the following relations in the meta-path works, it is important to remember that the weights \(\mathbf{w}\) represent a tentative assignment of neighbours as belonging or not belonging to the ground-truth meta-path (i.e., their _potential informativeness_). Multiple potential assignments can be minimizers of Eq. 3. In the left panel of Figure 2, where relation \(r_{1}\) (red) is being scored, any minimizer of Eq. 3 requires \(w_{E}=1\) (to account for the positive label of node \(A\)) and \(w_{F}=0\) (to account for the negative label of node \(B\)). On the other hand, (0,1), (1,0) and (1,1) are all valid assignments to the \((w_{G},w_{D})\) pair. Indeed, the only constraint that the positive label of \(C\) enforces is that the bag \((w_{G},w_{D})\) contains at least one node with value 1, as happens in multi-instance classification settings [1]. We thus generate labelled bags of nodes for the following iteration(s) of meta-path construction, which will play the role of the node labels \(y\) in the initial iteration.
Figure 2: **First iteration:** the scoring function assigns a high score to the red ("Main actor in") relation (left panel) by giving large weights to movies D and E, each of which is connected only to a positive node, and small weights to the other movie nodes. On the other hand, the green ("Appeared in") relation (right panel) has a low score, as no weight assignment can jointly explain the positive label of node A and the negative label of node B.
Positive bags are computed as follows:
\[B^{+}(i)=\{j\in\mathcal{N}_{i}^{r}\mid\nexists k:j\in\mathcal{N}_{k}^{r}\wedge y_{ k}=-1\} \tag{4}\]
where \(i\) is a positive-labelled node (\(y_{i}=+1\)). Negative bags, on the other hand, are singletons, i.e., given a negatively-labelled node \(j\), we create a negative bag \(B^{-}(k)=\{k\}\) for each of its neighbors \(k\in\mathcal{N}_{j}^{r}\). The informativeness of the new relation \(s\) (as an extension of relation \(r\)) can now be computed in terms of its potential in predicting bag labels:
\[\tilde{y}_{B(i)}^{s}=\max_{j\in B(i)}\Bigl(\Theta^{T}h_{j}^{(0)}\cdot\max_{k\in\mathcal{N}_{j}^{s}}w_{k}\Bigr) \tag{5}\]
and the score is obtained by minimizing the MSE at the bag-label level. See Figure 3 for a graphical representation of the components involved.
Once the next relation is selected, the procedure could in principle continue by further expanding positive bags with a procedure analogous to Eq. 4, where \(i\) is itself replaced with a bag of nodes. However, this procedure ends up diluting information too much, so that the informativeness of relations becomes difficult to predict. We rather assign a positive label to a node within a bag if it is used to predict the positive label of the bag (Eq. 2) at least once out of \(M\) restarts with randomly initialized weights. See the Appendix for the details.
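A plain-Python sketch of the bag construction of Eq. 4 follows; the dictionary-based graph representation is an illustrative choice, and the \(M\)-restart relabeling step described above is omitted for brevity.

```python
def generate_bags(neighbors, labels):
    """Build multi-instance bags for the next scoring round (Sec. 4.1).
    neighbors: dict mapping node i -> set of r-neighbors N_i^r
    labels:    dict mapping node i -> +1 / -1
    Returns (positive_bags, negative_bags) as lists of node sets."""
    # Nodes that are r-neighbors of at least one negative node.
    excluded = set()
    for k, y_k in labels.items():
        if y_k == -1:
            excluded |= neighbors.get(k, set())
    # Positive bag B+(i): neighbors of a positive node i that are not
    # also neighbors of any negative node (Eq. 4).
    positive_bags = [neighbors[i] - excluded
                     for i, y in labels.items() if y == +1 and i in neighbors]
    # Negative bags are singletons {k}, one per neighbor k of a negative node.
    negative_bags = [{k} for j, y in labels.items() if y == -1
                     for k in neighbors.get(j, set())]
    return positive_bags, negative_bags
```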
### Mp-Gnn
In the single meta-path MP-GNN, a meta-path \(mp=r_{1},...,r_{L}\) induces a multi-relational GNN with \(L\) layers, which we denote by MP-GNN(\(mp\)). The first layer is associated with the last relation of the meta-path, \(r_{L}\), and so on until the final layer, which is associated with \(r_{1}\). The message passing update is formalized as follows:
\[h_{i}^{(l+1)}=\sigma\left(W_{0}^{(l)}h_{i}^{(l)}+\sum_{j\in\mathcal{N}_{i}^{r_{L-l+1}}}\frac{1}{|\mathcal{N}_{i}^{r_{L-l+1}}|}W^{(l)}h_{j}^{(l)}\right) \tag{6}\]
where \(l\) ranges from \(1\) to \(L\).
The definition can be generalized to deal with multiple meta-paths by concatenating embeddings coming from the different meta-paths:
\[h_{i}^{(l)}=\big\|_{k=1}^{K}h_{(i,k)}^{(l)} \tag{7}\]
where \(K\) is the number of meta-paths, \(h_{(i,k)}^{(l)}\) is the embedding of node \(i\) according to meta-path \(k\) computed using Eq. 6 and \(\|\) is the concatenation operator.
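The following sketch shows a minimal single meta-path MP-GNN implementing Eq. 6 with dense adjacencies; extending it to multiple meta-paths amounts to running one such module per meta-path and concatenating the outputs as in Eq. 7. The dense adjacency representation is a simplification of ours.

```python
import torch
import torch.nn as nn

class MPGNN(nn.Module):
    """Single meta-path MP-GNN (Eq. 6): layer l (1-indexed) propagates
    along relation r_{L-l+1}, so the first layer uses the last relation
    of the meta-path."""
    def __init__(self, path_len, in_dim, hid_dim):
        super().__init__()
        dims = [in_dim] + [hid_dim] * path_len
        self.w0 = nn.ModuleList(nn.Linear(dims[l], dims[l + 1], bias=False)
                                for l in range(path_len))   # W_0^{(l)}
        self.w = nn.ModuleList(nn.Linear(dims[l], dims[l + 1], bias=False)
                               for l in range(path_len))    # W^{(l)}

    def forward(self, h, adjs):
        # adjs[k]: [N, N] adjacency of relation r_{k+1}, ordered r_1..r_L;
        # we traverse the meta-path in reverse, r_L first.
        for l, adj in enumerate(reversed(adjs)):
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)   # |N_i^r|
            h = torch.relu(self.w0[l](h) + (adj @ self.w[l](h)) / deg)
        return h
```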
It is worth mentioning here that while this definition of MP-GNN is a straightforward adaptation of the RGCN architecture to deal with learned meta-paths, more complex architectures involving
Figure 3: **Second iteration**: the scoring function assigns a high score to the purple ("Directed by") relation (left panel) by assigning a large weight to director M, the only director connected to a positive bag, and small weights to the other directors. On the other hand, the pink ("Inspired") relation (right panel) gets a low score, as no weight assignment is compatible with the positive bag.
pre-defined meta-paths could in principle be employed [3; 5; 13; 32]. We opted for this simple choice in the paper so as to best single out the contribution of the scoring function in determining the performance of the resulting architecture.
### Overall algorithm
The overall algorithm for learning MP-GNN is outlined in Algorithm 1. The algorithm takes as inputs a heterogeneous graph \(\mathcal{G}\), a set of candidate relations \(\mathcal{R}\), a set of node-label pairs \(labels\), and a hyper-parameter \(L_{MAX}\) indicating the maximal length of candidate meta-paths. The algorithm repeatedly calls the scoring function (Eq. 3) to score candidate relations and keeps track of the best scoring one. It then builds an MP-GNN with the current (partial) meta-path and trains it to predict node labels, using the \(F_{1}\) score (computed on a validation set, omitted in the algorithm for the sake of compactness) as the final meta-path evaluation metric. Note that this is the only "real" measure of meta-path quality, as the one computed by the scoring function is still a "potential" informativeness, which only fully materializes when the meta-path is embedded into an MP-GNN. The algorithm keeps track of the highest-\(F_{1}\) meta-path prefix found so far, and proceeds by generating labelled bags as described in Section 4.1 for the next round of relation scoring.
As previously mentioned, the algorithm is presented for the sake of simplicity in the single meta-path case. However, the actual implementation performs beam search on the space of meta-paths, retaining the \(K\) top-scoring ones according to Eq. 3 and concatenating their embeddings into the MP-GNN as per Eq. 7. Notice that in evaluating the resulting MP-GNN, meta-paths not contributing to increasing \(F_{1}\) are discarded, so as to retain only the informative meta-paths in the final architecture.
```
1:  procedure LearnMP-GNN(\(\mathcal{G}\), \(\mathcal{R}\), \(labels\), \(L_{MAX}\))
2:      Initialize \(mp^{*}\leftarrow[\ ]\), \(mp\leftarrow[\ ]\), \(F_{1}^{*}\leftarrow 0\), \(target\leftarrow labels\)
3:      while \(|mp|<L_{MAX}\) do
4:          for \(r\in\mathcal{R}\) do
5:              \(s_{r}\leftarrow\textsc{score-relation}(\mathcal{G},target,r)\)  \(\triangleright\) Equation 3
6:          end for
7:          \(r^{*}\leftarrow\) best scoring relation
8:          \(mp\leftarrow mp,r^{*}\)
9:          \(gnn\leftarrow\textsc{train}(\textsc{MP-GNN}(mp),\mathcal{G},labels)\)
10:         \(F_{1}\leftarrow\textsc{test}(gnn)\)
11:         if \(F_{1}>F_{1}^{*}\) then
12:             \(mp^{*}\leftarrow mp\), \(F_{1}^{*}\leftarrow F_{1}\)
13:         end if
14:         \(target\leftarrow\textsc{generate-bags}(target,r^{*})\)  \(\triangleright\) Section 4.1
15:     end while
16:     return \(mp^{*}\)
17: end procedure
```
**Algorithm 1** LearnMP-GNN algorithm. \(\mathcal{G}\) is a heterogeneous graph, \(\mathcal{R}\) is the set of possible relations, \(labels\) is the initial set of node-label pairs, and \(L_{MAX}\) is the maximal meta-path length
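For concreteness, the greedy single-path loop of Algorithm 1 can be transcribed as follows; `score_relation`, `train_mp_gnn`, and `generate_bags` are assumed helpers standing in for Eq. 3, the MP-GNN training/validation step, and the bag construction of Section 4.1, and the beam search of the actual implementation is omitted.

```python
def learn_mp_gnn(graph, relations, labels, L_MAX,
                 score_relation, train_mp_gnn, generate_bags):
    """Greedy single meta-path version of Algorithm 1."""
    best_mp, mp, best_f1, target = [], [], 0.0, labels
    while len(mp) < L_MAX:
        # Score every candidate relation against the current targets (Eq. 3).
        scores = {r: score_relation(graph, target, r) for r in relations}
        r_star = min(scores, key=scores.get)   # lowest MSE = most informative
        mp = mp + [r_star]
        # The "real" quality check: embed the prefix into an MP-GNN and
        # measure validation F1.
        f1 = train_mp_gnn(mp, graph, labels)
        if f1 > best_f1:
            best_mp, best_f1 = list(mp), f1
        # Pseudo-labels (bags) become the targets for the next extension.
        target = generate_bags(target, r_star)
    return best_mp
```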
## 5 Experimental results
Our experimental evaluation aims to answer the following research questions:
1. Can MP-GNN recover the correct meta-path for an increasing number of candidate relations?
2. Is MP-GNN competitive with existing approaches in real-world datasets with few relations?
3. Does MP-GNN outperform existing approaches in real-world datasets with many relations?
We compared MP-GNN with existing solutions that: 1) do not require pre-specified relevant meta-paths, and 2) can handle (possibly high-dimensional) node features. Given these requirements, we identified the following competitors:
* **RGCN**[25], a generalization of the GCN architecture to the multi-relational case, that employs a different matrix of parameters for each edge type.
* **GTN**[37] can convert an input graph into different meta-path graphs for specific tasks and learn node representations within these graphs.
* **FastGTN**[38], an efficient variant of GTN that avoids adjacency matrices multiplication for graph transformation.
* **R-HGNN**[35] employs a different convolution for each edge type, and finally combines the resulting embeddings with cross-relation message passing.
* **HGN**[15] utilizes GAT as a backbone to design an extremely simple HGNN model.
We implemented MP-GNN using Pytorch Geometric [21], while the code of the competitors was taken from their respective papers. For MP-GNN we used the Adam optimizer with a learning rate of 0.01. We set the maximum meta-path length \(L_{MAX}=4\) and the beam size \(K=3\). We used an 80/20/10 split between train, validation and test in all cases, with model selection performed on the validation set for all methods. We employed the F1-macro score on the test set as the evaluation metric to account for the class imbalance present in many of the datasets. The code is available at LINK.
In the following we report the experimental setting and the results we obtained in addressing each of the research questions under investigation. The statistics of the datasets used in the experiments are reported in the Appendix.
### Q1: MP-GNN consistently identifies the correct meta-path
In order to answer the first research question, we designed a controlled setting where the correct meta-path is known and experiments can be run for an increasing number of candidate relations. We generated synthetic datasets where nodes are typed A or B, the number of relations \(|\mathcal{R}|\) varies in \(\{4,8,10,14\}\), and we additionally vary the number of shared relations, i.e., relations that can connect more than one pair of node types (e.g., \(A\xrightarrow{r}B\) and \(A\xrightarrow{r}A\)). The ground truth meta-path consists of a (valid) sequence of relations and nodes of given types (e.g., \(x\xrightarrow{r_{1}}A\xrightarrow{r_{2}}B\), with \(x\) being a node of arbitrary type). Nodes are labelled as positive if they are starting points of a ground-truth meta-path, and negative otherwise. We generated labelled datasets using ground-truth meta-paths of different lengths \(L\in\{2,3,4\}\). Details of the different settings can be found in the Appendix (Figure 7).
Figure 4 shows the \(F_{1}\) score for each model when varying the overall number of relations and the number of shared relations, for a ground-truth meta-path of length three. Darker cells correspond to higher \(F_{1}\) values. Results show that the performance of existing multi-relational GNN approaches is severely affected by the relational complexity of the graph, with RGCN and R-HGNN being more sensitive to the overall number of candidate relations and GTN and FastGTN having bigger problems with the number of shared relations, while HGN has poor performance in all settings, likely due to its lack of an explicit modelling of relation types. Conversely, MP-GNN consistently achieves optimal or quasi-optimal performance in all settings. Whenever \(F_{1}=1\), MP-GNN manages to perfectly recover the ground-truth meta-path, while values smaller than one are due to spurious relations being added at the end of the meta-path (which, however, have a limited impact on predictive performance).
Figure 5 shows results when increasing the relational complexity of the network _and_ the length of the meta-path characterizing the positive class. Each setting corresponds to an entry in the main diagonal of Figure 4, where we additionally varied the length of the meta-path from 2 to 4. Results show that GTN, FastGTN and HGN struggle in most settings, while RGCN and R-HGNN are competitive in the simplest settings (few relations and/or short meta-paths) but their performance quickly degrades when the size of the search space increases. Again, MP-GNN consistently achieves excellent performance in all settings, almost always perfectly recovering the ground-truth meta-path.
Figure 4: Synthetic setting: F1-score (the darker the better) as a function of the overall number of relations (rows) and the number of shared relations (columns).
### Q2: MP-GNN achieves state-of-the-art results on real-world datasets with few relations
The second set of experiments focuses on popular real-world benchmarks for multi-relational GNNs. In all cases the task is multi-class classification at the node level. We quickly summarize the characteristics of the benchmarks in the following: **IMDB**: a dataset extracted from the popular Internet Movie Database. It contains 3 types of nodes (movies (M), directors (D) and actors (A)) and uses the genres of movies as labels. **DBLP**: citation network where nodes are of paper (P), author (A) or conference (C) type, connected by edge types PA, AP, PC, CP, and the task is predicting the research area of authors. **ACM**: again a citation network, similar to the one of DBLP with conference nodes replaced by subject (S) nodes (and edge types replaced accordingly).
Table 1 (top) shows the \(F_{1}\) scores achieved by the different methods. As expected, all approaches achieve similar results, which are consistent with the ones observed in previous work [38]. Indeed, the number of relations is very limited (three for IMDB, four for DBLP and ACM) and, most importantly, no relations are shared among different node pair types, substantially restricting the number of candidate meta-paths. Still, MP-GNN achieves slightly better results, most likely thanks to its ability to select a minimal set of meta-paths, as shown in Table 1 (bottom).
### Q3: MP-GNN substantially outperforms competitors on real-world datasets with many relations
The last set of experiments aims to evaluate MP-GNN in a complex real-world setting characterized by a large set of relations, as typical of general-purpose knowledge graphs. We thus designed a set of node-classification tasks over **FB15K-237**[29], which is a large knowledge graph derived from Freebase. Each entity in the graph is associated with a text description, that we transformed into a bag-of-words representation of length 100 (retaining the most frequent words in the dataset). We identified as target labels all many-to-one relations that have from 2 to 20 possible destination types (to avoid having classes with too few examples). Examples include gender, event type and a number of currency-related relations. See the Appendix for the statistics of the datasets.
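The feature construction can be sketched as follows (an illustrative pipeline; the exact preprocessing used in our experiments may differ in tokenization details, and the example descriptions are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Each entity's text description becomes a 100-dimensional bag-of-words
# vector over the most frequent words in the dataset.
descriptions = [
    "american film director and producer",
    "association football club based in london",
    "japanese yen currency of japan",
]
vectorizer = CountVectorizer(max_features=100)  # keep the 100 most frequent words
X = vectorizer.fit_transform(descriptions)      # sparse [num_entities, vocab] matrix
print(X.shape, len(vectorizer.vocabulary_))
```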
Table 2 reports \(F_{1}\) scores for the different methods. GTN and FastGTN have serious difficulties in learning reasonable models in all cases. Indeed, the imbalance in the class distribution, combined with the large pool of candidate relations to learn from, drives them to boil down to majority class
\begin{table}
\begin{tabular}{l l l l} \hline Model & DBLP & IMDB & ACM \\ \hline R-HGNN & 0.86 (\(\pm\)0.04) & **0.64** (\(\pm\)0.01) & 0.90 (\(\pm\)0.01) \\ HGN & **0.94** (\(\pm\)0.01) & 0.63 (\(\pm\)0.02) & 0.92 (\(\pm\)0.02) \\ RGCN & 0.91 (\(\pm\)0.01) & 0.60 (\(\pm\)0.01) & 0.90 (\(\pm\)0.02) \\ GTN & 0.90 (\(\pm\)0.01) & 0.62 (\(\pm\)0.01) & 0.91 (\(\pm\)0.01) \\ FastGTN & 0.92 (\(\pm\)0.00) & 0.63 (\(\pm\)0.01) & **0.93** (\(\pm\)0.00) \\ MP-GNN & **0.94** (\(\pm\)0.01) & **0.64** (\(\pm\)0.01) & **0.93** (\(\pm\)0.00) \\ \hline GTN/FastGTN & APCPA, APAPA, APA & MAM, MDM, MDMM & PAP, PSP \\ MP-GNN & APCPA, APAPA & MAM, MDM & PAP, PSP \\ \hline \hline \end{tabular}
\end{table}
Table 1: Few-relations datasets. **(Top)**: \(F_{1}\) scores, mean and std computed over five runs. Best results highlighted in bold. **(Bottom)**: learnt meta-paths for MP-GNN and GTN/FastGTN (which learn the very same meta-paths). Other baselines are not reported as they do not explicitly extract meta-paths.
Figure 5: Synthetic setting: F1-score as a function of the ground-truth meta-path length, for an increasing complexity of the search space.
prediction in most cases. Despite the better performance of RGCN, HGN, and R-HGNN, they still exhibit substantially lower F1-scores compared to MP-GNN. Notably, MP-GNN is surpassed only by RGCN and R-HGNN, in the "event" and "team sport" classification tasks respectively. Figure 6 shows some examples of extracted meta-paths for two different classification tasks, namely predicting the currency of domestic tuition fees in educational institutions and predicting the sport a team is playing. In the former case, the extracted meta-paths lead to the headquarters of the organization delivering the educational program, which clearly correlates with the currency being used. In the latter case, meta-paths include the league where the team is playing, which again carries information about the sport being played. Note that in both cases, node features are crucial in leveraging meta-path information, as there are not enough examples to generalize via, e.g., a specific headquarters or sports league name. Indeed, an ablation experiment excluding node feature information (the typical setting of meta-path mining approaches [11, 17, 26]) shows that none of the methods manages to learn any sensible meta-path, always boiling down to majority-class prediction rules (see Appendix 8). For the same reasons, plain meta-path mining fails to extract sensible meta-paths, resulting in poor performance (see Appendix 9 for results obtained with the popular PRA meta-path miner [11]).
Finally, to assess the computational efficiency of MP-GNN, we conducted a running time comparison, detailed in Appendix 10. Results show that our approach is comparable with that of the competitors on the synthetic and few relation (IMDB, DBLP, ACM) datasets. On the freebase tasks, which have a larger set of candidate relations, our approach is more expensive than (most) competitors, but these have substantially lower performance in terms of F1, with the fastest approaches (GTN and FastGTN) completely failing to learn anything sensible.
## 6 Conclusion
In this work we introduced a novel approach inspired by information theoretic principles to effectively learn meta-paths and meta-path based multi-relational GNNs in settings characterized by a large number of candidate relations combined with arbitrarily rich node features. Our experimental evaluation confirms the potential of the approach in recovering correct (in synthetic tasks) and informative (in real-world tasks) meta-paths despite the large number of candidate relations, a setting where existing multi-relational GNNs struggle to learn meaningful models.
Future work includes generalizing the approach to account for counts or proportions of meta-path realizations as relevant features, as well as more complex relational structures like meta-trees.
\begin{table}
\begin{tabular}{l l l l l l l} \hline Label & R-HGNN & HGN & RGCN & GTN & FastGTN & MP-GNN \\ \hline PNC & 0.72 & 0.68 & 0.74 & 0.33 & 0.33 & **0.83** \\ EDC & 0.6 & 0.75 & 0.71 & 0.12 & 0.12 & **0.96** \\ EIC & 0.63 & 0.65 & 0.73 & 0.12 & 0.12 & **0.8** \\ ELC & 0.47 & 0.74 & 0.72 & 0.12 & 0.15 & **0.78** \\ FBC & 0.45 & 0.48 & 0.42 & 0.14 & 0.14 & **0.61** \\ GNC & 0.8 & 0.74 & 0.82 & 0.19 & 0.19 & **0.90** \\ OC & 0.67 & 0.73 & 0.78 & 0.14 & 0.14 & **0.93** \\ G & 0.81 & 0.64 & 0.8 & 0.44 & 0.44 & **0.84** \\ TS & **0.67** & 0.53 & 0.62 & 0.09 & 0.09 & 0.63 \\ E & 0.89 & 0.8 & **0.98** & 0.07 & 0.07 & 0.96 \\ \hline \end{tabular}
\end{table}
Table 2: Many-relations dataset: F1-scores for the different node classification tasks on the FB15K-237 dataset. Results with standard deviations can be found in Table 7 in Appendix. See Table 3 in the Appendix for the meaning of the label acronyms.
Figure 6: Examples of learned meta-paths on two node classification tasks
## Acknowledgments
This research was supported by TAILOR, a project funded by the EU Horizon 2020 research and innovation program under GA No 952215. AL acknowledges the support of the MUR PNRR project FAIR - Future AI Research (PE00000013) funded by the NextGenerationEU.
|
2309.04755 | Towards Real-time Training of Physics-informed Neural Networks:
Applications in Ultrafast Ultrasound Blood Flow Imaging | Physics-informed Neural Network (PINN) is one of the most preeminent solvers
of Navier-Stokes equations, which are widely used as the governing equation of
blood flow. However, current approaches, relying on full Navier-Stokes
equations, are impractical for ultrafast Doppler ultrasound, the
state-of-the-art technique for depiction of complex blood flow dynamics
\emph{in vivo} through acquired thousands of frames (or, timestamps) per
second. In this article, we first propose a novel training framework of PINN
for solving Navier-Stokes equations by discretizing Navier-Stokes equations
into steady state and sequentially solving steady-state Navier-Stokes equations
with transfer learning. The novel training framework is coined as SeqPINN. Upon
the success of SeqPINN, we adopt the idea of averaged constant stochastic
gradient descent (SGD) as initialization and propose a parallel training scheme
for all timestamps. To ensure an initialization that generalizes well, we
borrow the concept of Stochastic Weight Averaging Gaussian to perform
uncertainty estimation as an indicator of generalizability of the
initialization. This algorithm, named SP-PINN, further expedites training of
PINN while achieving comparable accuracy with SeqPINN. Finite-element
simulations and \emph{in vitro} phantoms of single-branch and trifurcate blood
vessels are used to evaluate the performance of SeqPINN and SP-PINN. Results
show that both SeqPINN and SP-PINN are manyfold faster than the original design
of PINN, while respectively achieving Root Mean Square Errors (RMSEs) of 1.01
cm/s and 1.26 cm/s on the straight vessel and 1.91 cm/s and 2.56 cm/s on the
trifurcate blood vessel when recovering blood flow velocities. | Haotian Guan, Jinping Dong, Wei-Ning Lee | 2023-09-09T11:03:06Z | http://arxiv.org/abs/2309.04755v1 | Towards Real-time Training of Physics-informed Neural Networks: Applications in Ultrafast Ultrasound Blood Flow Imaging
###### Abstract
Physics-informed Neural Network (PINN) is one of the most preeminent solvers of Navier-Stokes equations, which are widely used as the governing equation of blood flow. However, current approaches, relying on full Navier-Stokes equations, are impractical for ultrafast Doppler ultrasound, the state-of-the-art technique for depiction of complex blood flow dynamics _in vivo_ through acquired thousands of frames (or, timestamps) per second. In this article, we first propose a novel training framework of PINN for solving Navier-Stokes equations by discretizing Navier-Stokes equations into steady state and sequentially solving steady-state Navier-Stokes equations with transfer learning. The novel training framework is coined as SeqPINN. Upon the success of SeqPINN, we adopt the idea of averaged constant stochastic gradient descent (SGD) as initialization and propose a parallel training scheme for all timestamps. To ensure an initialization that generalizes well, we borrow the concept of Stochastic Weight Averaging Gaussian to perform uncertainty estimation as an indicator of generalizability of the initialization. This algorithm, named SP-PINN, further expedites training of PINN while achieving comparable accuracy with SeqPINN. Finite-element simulations and _in vitro_ phantoms of single-branch and trifurcate blood vessels are used to evaluate the performance of SeqPINN and SP-PINN. Results show that both SeqPINN and SP-PINN are manyfold faster than the original design of PINN, while respectively achieving Root Mean Square Errors (RMSEs) of 1.01 cm/s and 1.26 cm/s on the straight vessel and 1.91 cm/s and 2.56 cm/s on the trifurcate blood vessel when recovering blood flow velocities.
Blood flow, Data-driven scientific computing, Navier-Stokes equations, Physics-informed learning, Ultrafast Doppler Ultrasound.
## I **Introduction**
Depicting hemodynamics in the circulation system is important because it is closely related to the development of cardiovascular diseases, such as myocardial infarction and ischemic stroke due to atherosclerosis. Blood flow velocity and blood pressure are key hemodynamic parameters and directly outline vascular conditions. Ultrafast Doppler ultrasound permits unprecedentedly high full-view acquisition frame rates for high-velocity flow imaging based on the Doppler effect [4]. However, its accompanying low signal-to-noise ratio and Doppler angle dependence lead to velocity estimation bias and variance. Physics-constrained optimization has enabled regularization of color Doppler using planar mass conservation and free-slip boundary conditions [51].
As a governing principle, Navier-Stokes equations are partial differential equations (PDEs) widely used to describe the motion of an incompressible viscous fluid flow [49]. Computational Fluid Dynamics (CFD) modeling is the conventional method for solving Navier-Stokes equations [2]. Given proper boundary conditions, CFD can produce a velocity field and a pressure field that obey Navier-Stokes equations. However, Navier-Stokes equations are highly nonlinear, making CFD computationally expensive. A high level of mathematical understanding is also required to generate meshes for blood vessels of complex geometries [31]. These inherent difficulties preclude real-time applications of CFD models for personalized assessment of blood flow in clinical settings.
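For reference, a standard form of the incompressible Navier-Stokes equations, with velocity field \(\mathbf{u}\), pressure \(p\), constant density \(\rho\), and kinematic viscosity \(\nu\) (notation here is generic rather than specific to this article), reads:

\[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u}=-\frac{1}{\rho}\nabla p+\nu\nabla^{2}\mathbf{u},\qquad\nabla\cdot\mathbf{u}=0,\]

where the first equation expresses momentum balance and the second mass conservation (incompressibility).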
Artificial Intelligence (AI) for science, empowered by deep learning, has been an emerging research field. Deep learning has recently become popular given the fast advancement of computational resources, such as graphics processing units (GPUs) [1]. With sufficient data, deep learning has become the new paradigm for many scientific discoveries because of its flexibility and its capability of extracting features automatically.
Since its emergence, physics-informed neural network (PINN) [40] has made a transformational impact on solving PDEs [47, 12, 48]. Recent research on PINN has demonstrated its excellence in coping with imperfect situations, such as unspecified initial or boundary conditions and noisy data [5, 8, 13]. PINN is mesh-free, making it more flexible for arbitrary geometries and more computationally efficient. PINN has also been applied extensively in research works and real-world physics systems. Cheng et al. combined a ResNet block with PINN to solve for fluid dynamics; they showed that the ResNet block could improve the stability of PINN [15]. Jin et al. proposed NSFnets [16], specifically targeting the simulation of solutions to Navier-Stokes equations; two alternative formulations of Navier-Stokes equations, namely velocity-pressure (VP) and vorticity-velocity (VV), were considered as parts of the loss function. They conducted experiments on simulated laminar flow and turbulent channel flow and analyzed the performance and convergence of NSFnets under different neural network architecture designs and various combinations of weights in the loss function.
Despite the success of PINN in many situations as an
alternative to CFD models, the training speed of PINN has not been proven to be faster than CFD [12]. The current PINN training framework exhibits the potential to achieve fast training because of its lightweight architecture design, its low-dimensional data input, and its PDE-guided loss functions. These advantages enable fast updates of model parameters but cannot guarantee fast convergence of the model. In fact, the prevalent training framework has several drawbacks. First, training of PINN is tied to the domain geometry and the initial and boundary conditions, so PINN models are usually hard to generalize to different geometries or fluid flow patterns [11]. Retraining PINN is thus imperative for a patient-specific model. However, re-training PINN from scratch is time-consuming due to the use of gradient-based optimization methods, a large number of collocation points, and the complexity of the PDEs. Gradient-based optimization methods are effective because they move model parameters towards local minima [10]. Nonetheless, owing to the lack of a closed-form solution, it is impossible to estimate the number of epochs required for a model to converge. Developing a well-trained PINN model usually takes thousands or even tens of thousands of training epochs [16]. Second, Navier-Stokes equations are highly nonlinear and require computation of second derivatives of the fluid velocity field. The loss landscape of PINN can be difficult to optimize [9]. Therefore, fast training of PINN is indispensable but remains challenging.
Abundant efforts have been made to expedite the training of PINN; they can be summarized into three categories. The first category aims to accelerate the convergence of PINN. Some studies [18, 20] focused on modifying the loss function. The standard loss function of PINN comprises three terms, which occasionally poses an optimization challenge. One study [21] proposed to use numerical differentiation coupled with automatic differentiation to enhance the reliability of derivative computation, resulting in fast and accurate convergence of PINN. The second category relieves the optimization challenges of PINN by using meta-learning to find better initializations for test data (or, tasks). Liu et al. [24] applied the classic Reptile framework from meta-learning to initialize the model parameters of PINN. Seo et al. [26] decomposed physics laws into a spatial derivative and a time derivative module. Then, they proposed a meta-learning approach to solve for the so-called reusable spatial derivative modules. However, the limitation of the meta-learning-based approaches is that the model trained during a meta-training phase must have learned the fundamental rule of finding a solution [22]. Designing appropriate meta-training tasks can be difficult and is more of an art. Moreover, most methods mentioned previously were built upon the current PINN training framework, which cannot practically incorporate a large dataset such as ultrafast Doppler ultrasound. The third category takes advantage of parallel training by decomposing the computational domain. cPINN [6] offered space parallelization, while XPINN [8] provided both space and time parallelization. The parallelized computation greatly expedites the training of PINN; nonetheless, dividing domains can be tricky when complex geometry or pulsatile velocity is associated with a blood flow profile. The lack of a fast training framework is impeding efficient training of PINN.
In this work, we propose two novel training frameworks that expedite the training of PINN for Navier-Stokes equations and illustrate the effectiveness of the methods with applications in ultrafast Doppler ultrasound (Fig. 1). Specifically, we first demonstrate that PINN is capable of solving Navier-Stokes equations under the steady-state assumption. Then, we initialize the model with the solution of the steady-state Navier-Stokes equations and show that the initialized model, together with transfer learning, significantly reduces training time for subsequent frames. The number of collocation points used for the steady-state Navier-Stokes equations during training is \(N\) times smaller than in the original design of PINN, where \(N\) is the number of timestamps. This largely improves the training efficiency per epoch. Overall, this work makes the following contributions.
1. We address the need for fast training of PINN. We propose to solve steady-state Navier-Stokes equations and to discretize PINN along the time dimension by removing the time variable \(t\) from the input. This significantly alleviates the optimization challenge and the computational burden of training. The framework, coined SeqPINN, is illustrated in the flowchart in Fig. 2. The successful implementation of SeqPINN opens the door to real-time training of PINN for Navier-Stokes equations.
2. Except for the first timestamp, we apply transfer learning to all subsequent ones. This promotes fast generalization to any timestamp, provided that sparsely sampled data are available.
3. With a further assumption of independent timestamps, we propose Sampled-Posterior PINN, coined as SP-PINN. SP-PINN achieves parallel training of PINN after a short initialization of SeqPINN. Fig. 1 illustrates the improvements in performance of SeqPINN and SP-PINN over Vanilla PINN.
4. SeqPINN and SP-PINN are of high practical value since they efficiently provide physics-regularized blood flow estimates from ultrafast Doppler ultrasound in clinical settings. The proposed algorithms show good applicability, as they can be implemented either on CPUs, which are widespread, or on GPUs, which enable even faster training of PINN.
## II **Related Work**
The current acceleration approaches can be divided into three groups: convergence-based, meta-learning, and domain decomposition approaches.
### **Convergence-based Approaches**
The convergence-based approach aims to hasten the learning process by tackling imbalanced gradients during the training of PINN, or inaccurate approximation of derivatives, so that PINN converges faster. Wang et al. [17] derived the weights of loss terms based on eigenvalues of the Neural Tangent Kernel (NTK). Shin et al. [19] proved a convergence theory for data-driven PINNs and derived a Lipschitz-regularized loss to solve linear second-order elliptic and parabolic type PDEs. Xiang [18] defined loss terms using Gaussian probabilistic models and proposed a noise parameter to update the weights of loss terms. However, the self-adaptive loss function trades per-epoch training efficiency for a reduced number of training epochs. The weights of loss terms are calculated at the cost of more complex computation involving gradients. Although the total number of epochs decreases, the training time and complexity per epoch increase. Besides assigning weights to loss terms, some efforts accelerate the convergence of PINN by ensuring more accurate calculation of derivatives. Yu et al. [20] proposed to enhance the gradient of loss terms by enforcing the gradient of loss terms with respect to inputs to be zero, thus leading to more accurate estimation of derivatives. Instead of using Automatic Differentiation (AD), the derivatives can also be made more robust by numerical differentiation (ND)-based methods. For example, Chiu et al. [21] proposed a coupled automatic-numerical differentiation framework for calculating derivatives, improving the convergence speed and accuracy of PINN.
In summary, convergence-based methods aim to alleviate the optimization challenge during training of PINN by reshaping the loss landscape using designed weights for loss terms. Although convergence-based methods can accelerate the training of PINN, they do not simplify the training process. They fail to accelerate the training of PINN when an imbalanced loss is not the dominant issue.
### **Meta-learning Approaches**
Meta-learning, known as learning to learn, targets learning the most fundamental rules to learn and understand in a system. In the context of PINN, the goal of meta-learning is to enable better initialization of models for unseen tasks by learning from various tasks. Meta-learning can be classified into three categories: 1) model-based; 2) metric-based; 3) optimization-based. Recently, optimization-based algorithms, such as Model Agnostic Meta-Learning (MAML) [22] and Reptile [23], have been promising because of their strong performance and ability to be incorporated into any model trained through gradient descent. Liu et al. [24] proposed a new Reptile initialization method for PINN by modifying both the task sampling process and the penalty term of the loss. Another trending approach in meta-learning is to use reusable learning modules [31]. Seo et al. [26] demonstrated the decomposability of the continuity equation into spatial derivative and time derivative modules and adopted the idea of reusable modules. They generated synthetic data and used MAML to meta-initialize the spatial derivatives.
Compared with convergence-based methods, meta-learning-based methods develop plug-and-play models that approach the optimal solution of a test case. A good meta-learning model has learned and understood the underlying mechanism of solving PDEs. However, purely meta-learning-based approaches rely on the design of meta-training tasks. Test tasks may not benefit from meta-initialized models when the learning process of meta-testing tasks differs substantially from that of meta-training tasks [24].
Convergence-based acceleration and meta-learning acceleration are not optimal choices for PINN acceleration, albeit effective, because they rely on the original training framework of PINN.
### **Domain Decomposition Approaches**
The third approach to expediting the training of PINN is domain decomposition. cPINN [6] and XPINN [8], both successful implementations of this approach, divide the computational domain into non-overlapping subdomains spatially or temporally. Communication between subdomains relies on interfaces, defined as the common boundaries between subdomains. Interface conditions are the parts of the governing equations used to stitch subdomains together; depending on the PDE, conditions such as solution continuity and flux continuity can be applied. The interface conditions promote the propagation of information from one subdomain to its neighbors and act as the most important governing equation when no training data are available in a subdomain [7]. In cPINN, the average solution continuity condition enables spatial decomposition of the computational domain. In XPINN, compared with cPINN, the more general residual continuity conditions are imposed along with the average solution continuity condition, allowing space-time decomposition of any differential equation. During training, subdomains are initialized independently and then trained in parallel. However, decomposing the computational domain is not always straightforward: poorly chosen subdomains make optimization arduous, and the final solution is available only after the most difficult subdomains are solved.
The parallel training scheme supported by domain decomposition is the most promising because it guarantees a speed-up of PINN, whereas convergence-based and meta-learning approaches fail to expedite training in the worst case. Moreover, for a large dataset such as ultrafast Doppler ultrasound, it is impractical to train a PINN without modifying the training framework, as convergence-based and meta-learning approaches do. In this work, we decompose the computational domain and completely renovate the training framework of PINN for the Navier-Stokes equations by first solving for a spatial solution and then adapting the model temporally.
Fig. 1: Model performance among Vanilla PINN, SeqPINN, and SP-PINN. Lower left corner represents the best model with lowest RMSE and shortest training time.
## III **Preliminaries**
### **Physics-informed Neural Network (PINN) with attention**
The idea of solving PDEs with neural networks can be traced back to the 1990s [38]; however, it gained wide popularity with physics-informed deep learning in 2019 [40]. The vanilla PINN was first built to find solutions to nonlinear PDEs in both forward and inverse problems. PINN stands out for its ability to fit experimental data while complying with the underlying laws of physics expressed by PDEs. PINN can be applied to solve PDEs of the general form:
\[\begin{split}\mathcal{F}(u(z);\gamma)&=f(z)\quad z \text{ in }\Omega,\\ \mathcal{B}(u(z))&=g(z)\quad z\text{ in }\partial\Omega, \end{split} \tag{1}\]
where \(\gamma\) are the physics-related parameters, \(z\) are the input coordinates (or collocation points), \(u\) represents the desired output, \(f\) and \(g\) are mapping functions, \(\mathcal{F}\) is the nonlinear differential operator, \(\mathcal{B}\) denotes the initial and boundary conditions, and \(\Omega\) is the defined domain. Since the inputs of PINN are spatio-temporal coordinates, PINN utilizes the property that neural networks are universal function approximators [28].
The structural design of PINN is usually simple: a few fully-connected layers, each followed by an element-wise nonlinear function, producing the output \(\mathbf{h}_{k}\) after each layer as in eq. (2).
\[\mathbf{h}_{k}=\sigma\left(\mathbf{W}_{k-1}^{\top}\mathbf{h}_{k-1}+\mathbf{b} _{k-1}\right), \tag{2}\]
where \(\sigma\) is the activation function, and \(W_{k-1}\) and \(b_{k-1}\) are the weight and bias of the fully-connected layer, respectively. Partial derivatives are calculated with automatic differentiation [29], which uses exact expressions with floating-point values, thus avoiding approximation errors [30]. A more complex design implements an attention mechanism [50] on top of the original PINN. We find that the attention mechanism stabilizes the training of PINN, so we adopt it in the architecture design; for a fair comparison, the vanilla PINN is also equipped with the attention mechanism. Since the acronym PINN could be misleading, we use _PINN_ to refer to the architecture design and _vanilla PINN_ to refer specifically to the original design of PINN. A flowchart of using steady-state Navier-Stokes equations to obtain solutions of the full Navier-Stokes equations is shown in Fig. 1.
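For concreteness, a minimal PyTorch sketch of such a backbone is given below. It implements only the stacked fully-connected layers of eq. (2); the attention mechanism of [50] is omitted for brevity. The layer sizes follow the settings reported later in Section V (8 layers of 150 neurons), while the tanh activation is our assumption, and the snippet is illustrative rather than the authors' implementation.

```python
import torch.nn as nn

class PINNBackbone(nn.Module):
    """Fully-connected PINN backbone of eq. (2); attention omitted."""
    def __init__(self, in_dim=3, out_dim=3, width=150, depth=8):
        # in_dim=3 for (x, y, t); out_dim=3 for (u, v, p)
        super().__init__()
        layers = [nn.Linear(in_dim, width), nn.Tanh()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.Tanh()]
        layers.append(nn.Linear(width, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)
```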
### **PINN applications on blood flow**
Flow dynamics is mostly governed by Navier-Stokes equations as shown in eq. (3), where \(\mathbf{u}\) is the velocity vector of the fluid, \(p\) is the pressure, \(\rho\) is the density, \(\nu\) is the kinematic viscosity, and \(\nabla\) is the gradient differential operator.
\[\begin{split}\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u} \cdot\nabla)\mathbf{u}&=-\frac{1}{\rho}\nabla p+\nu\nabla^{2} \mathbf{u}\\ \nabla\cdot\mathbf{u}&=0\end{split} \tag{3}\]
The inputs to PINN are (x, y, t) collocation points and the outputs are blood velocity and blood pressure. The Navier-Stokes equations, comprising conservation of mass and conservation of momentum for Newtonian fluids, are embedded as the residual loss, denoted \(\mathcal{L}_{\mathcal{F}}(\theta)\), in the loss function. Initial
Fig. 2: PINN architecture with Navier-Stokes equations as an example.
boundary conditions and sparse velocity samples are enforced by supervised learning using MSE (mean square error) losses, denoted by \(\mathcal{L}_{\mathcal{B}}(\theta)\) and \(\mathcal{L}_{\text{data}}\left(\theta\right)\), respectively. Using \(\theta\) to denote the parameters of the neural network, the objective of PINN is to find the \(\theta\) that minimizes the total loss:
\[\theta^{*}=\operatorname*{arg\,min}_{\theta}\left(\mathcal{L}_{\mathcal{F}}( \theta)+\mathcal{L}_{\mathcal{B}}(\theta)+\mathcal{L}_{\text{data}}\left( \theta\right)\right) \tag{4}\]
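As an illustration, the residual loss \(\mathcal{L}_{\mathcal{F}}(\theta)\) of eq. (3) can be evaluated with automatic differentiation. The sketch below assumes a model mapping (x, y, t) to (u, v, p); the default density and viscosity correspond to the single-branch simulation described in Section V, and the helper is illustrative rather than the authors' code.

```python
import torch

def ns_residual_loss(model, xyt, rho=1060.0, mu=5e-3):
    # xyt: (N, 3) collocation points (x, y, t); model returns (u, v, p).
    xyt = xyt.clone().requires_grad_(True)
    u, v, p = model(xyt).unbind(dim=1)
    nu = mu / rho  # kinematic viscosity

    def grad(f):
        return torch.autograd.grad(f, xyt, torch.ones_like(f),
                                   create_graph=True)[0]

    gu, gv, gp = grad(u), grad(v), grad(p)  # columns: d/dx, d/dy, d/dt
    u_x, u_y, u_t = gu[:, 0], gu[:, 1], gu[:, 2]
    v_x, v_y, v_t = gv[:, 0], gv[:, 1], gv[:, 2]
    u_xx, u_yy = grad(u_x)[:, 0], grad(u_y)[:, 1]
    v_xx, v_yy = grad(v_x)[:, 0], grad(v_y)[:, 1]

    # momentum (x, y) and continuity residuals of eq. (3)
    r1 = u_t + u * u_x + v * u_y + gp[:, 0] / rho - nu * (u_xx + u_yy)
    r2 = v_t + u * v_x + v * v_y + gp[:, 1] / rho - nu * (v_xx + v_yy)
    r3 = u_x + v_y
    return (r1 ** 2 + r2 ** 2 + r3 ** 2).mean()
```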
### **Uncertainty Estimation with SWA-Gaussian**
Stochastic Weight Averaging (SWA) has been shown to improve the generalizability of deep learning models without increasing inference time. It uses a single model and simply averages the weights traversed by Stochastic Gradient Descent (SGD) iterates [10]. To encourage exploration of regions where the weights correspond to high-performing networks, the SGD trajectory is obtained with a modified learning-rate schedule. SGD often converges to a flat region where the gradient of the total loss is small and then oscillates around the local minimum, so averaging the SGD iterates moves the model parameters towards the minimum. The state-of-the-art performance of SWA has been shown on various supervised and semi-supervised learning tasks [32, 33, 34]. Stochastic Weight Averaging Gaussian (SWAG) [35] uses the SWA solution to form a Gaussian approximation of the posterior distribution over neural-network weights. The mean of the Gaussian is the SWA solution built from SGD iterates as in eq. (5), and the variance also originates from the SGD iterates as in eq. (6).
\[\bar{\theta}=\frac{1}{T}\sum_{i=1}^{T}\theta_{i} \tag{5}\]
\[\Sigma=\frac{1}{T-1}\sum_{i=1}^{T}\left(\theta_{i}-\bar{\theta}_{i}\right) \left(\theta_{i}-\bar{\theta}_{i}\right)^{\top}=\frac{1}{T-1}DD^{\top} \tag{6}\]
where \(T\) is the total number of SGD iterates and \(\top\) is the matrix transpose operation. Maddox et al. [35] then sample from this Gaussian distribution and perform Bayesian model averaging. SWAG has been widely used for estimating uncertainties in deep learning [37, 39].
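A minimal sketch of the diagonal variant of this procedure is shown below; full SWAG additionally keeps a low-rank covariance term, and the snippet assumes the parameter snapshots have already been flattened into vectors.

```python
import torch

def swag_moments(snapshots):
    # snapshots: list of T flattened parameter vectors theta_i
    thetas = torch.stack(snapshots)            # (T, P)
    mean = thetas.mean(dim=0)                  # SWA solution, eq. (5)
    var = thetas.var(dim=0, unbiased=True)     # diagonal of eq. (6)
    return mean, var

def swag_sample(mean, var):
    # one draw from N(mean, diag(var)) for Bayesian model averaging
    return mean + var.sqrt() * torch.randn_like(mean)
```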
## IV **Methods**
### **PINN for Steady-state Navier-Stokes Equations**
The traditional PINN architecture for fluid flow embeds the Navier-Stokes equations into the loss function and solves for the flow velocity and pressure fields from sparse samples in the forward problem [12]. In this work, we assume that the blood flow starts at a steady state and that each timestamp follows the steady state with an infinitesimal change in flow velocity and pressure. Theoretically, a steady state is achieved when (i) all data of the Navier-Stokes equations are independent of time, and (ii) the Reynolds number is sufficiently small [41]. Time independence is equivalent to constant flow and constant pressure; to fulfill this requirement, we simulate infinitesimal time steps, making the change in velocity and pressure across time approximately zero. A small Reynolds number characterizes a fully developed flow when the flow velocity is small. Mathematically, the non-dimensionalized steady-state Navier-Stokes equations, which define the residual loss of PINN, can be written as
\[(\mathbf{u}^{\prime}\cdot\nabla)\mathbf{u}^{\prime} =-\nabla p+\frac{1}{\mathrm{Re}}\nabla^{2}\mathbf{u}^{\prime} \tag{7}\] \[\nabla\cdot\mathbf{u}^{\prime} =0 \tag{8}\]
\(\mathbf{u}^{\prime}\) is the velocity vector non-dimensionalized as \(\mathbf{u}^{\prime}=\frac{\mathbf{u}}{U}\), where \(U\) is the characteristic velocity. The use of the steady-state Navier-Stokes equations can be regarded as decomposing the computational domain of the Navier-Stokes equations so that the spatial and temporal modules are treated separately. The spatial module is shareable across timestamps since the inputs to the model are collocation points and thus identical for all timestamps. For consecutive timestamps, the model is adapted by SGD to a new local minimum induced by the infinitesimal change in the velocity measurements.
### **SeqPINN**
We first present an algorithm, Sequential PINN, coined SeqPINN, for learning solutions to the steady-state Navier-Stokes equations and generalizing quickly to subsequent timestamps. SeqPINN is initialized by finding a solution of the steady-state Navier-Stokes equations. The initial frame is chosen as the timestamp with the lowest blood flow velocity in a cardiac cycle, based on the temporal profile of the blood flow velocity; this prevents violation of the steady-state Navier-Stokes assumption. In our experiments, the blood flow velocity profile is available as a priori knowledge; when such a profile is unknown, starting from an arbitrary timestamp is also feasible. After initialization, the current PINN solution is treated as a pre-trained model for the next timestamp. Given the boundary conditions and data points sampled at the next timestamp, SGD can adapt to a new local minimum within \(m^{*}\) epochs. Section V shows that a pre-defined parameter \(m^{*}\) optimizing the trade-off between accuracy and training speed exists; in practice, \(m^{*}\) can be determined by early stopping. By removing the time dimension from the input, we keep the spatial collocation points the same across all timestamps, which significantly reduces the computational burden of SGD. The training process of SeqPINN is summarized in Algorithm 1.
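The overall loop can be sketched as follows; `fit` and `predict` are hypothetical helpers standing in for one SGD/Adam training phase and one forward evaluation on a single timestamp's data, and the epoch counts mirror the settings of Section V (3000 initialization epochs, \(m^{*}=30\)). This is a paraphrase of Algorithm 1, not the authors' code.

```python
def seqpinn(model, frames, init_epochs=3000, m_star=30):
    # frames: per-timestamp boundary conditions and sparse velocity samples,
    # ordered so that frames[0] is the lowest-velocity (quasi-steady) frame.
    fit(model, frames[0], epochs=init_epochs)   # steady-state initialization
    solutions = [predict(model, frames[0])]
    for frame in frames[1:]:
        fit(model, frame, epochs=m_star)        # warm start from previous frame
        solutions.append(predict(model, frame))
    return solutions
```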
### **Data Assimilation**
SeqPINN is essentially a time series framework where we update the model from the previous timestamp and assimilate data to make predictions of flow velocity and pressure at the current timestamp. The sampled locations in the flow velocity field are fixed over time, while the model is being updated by SGD over time. Therefore, the algorithm SeqPINN can be regarded as a data assimilation process. Data assimilation at each timestamp not only promotes the model to find a new local minimum quickly, but also assists the model prediction at future timestamps by updating the model parameters. However, the impact of current sampled measurements decays as the model moves towards the end of the timeline. Here, we only present the most basic version of SeqPINN. If needed,
the weight of the data at a particular timestamp can be tuned by modifying the number of epochs trained at that timestamp. Denoting by \(s_{i}\) the number of epochs trained using the measurements sampled at timestamp \(i\), with \(j\) iterating over all timestamps, the weight of each timestamp is calculated as
\[\sigma_{i}=\frac{e^{s_{i}}}{\sum_{j=1}^{t}e^{s_{j}}} \tag{9}\]
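For reference, eq. (9) is an ordinary softmax over the per-timestamp epoch counts; a numerically stable sketch is given below.

```python
import numpy as np

def timestamp_weights(epochs_per_frame):
    # eq. (9): softmax over the epoch counts s_i of each timestamp
    s = np.asarray(epochs_per_frame, dtype=float)
    e = np.exp(s - s.max())   # subtracting the max leaves the result unchanged
    return e / e.sum()
```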
### **Computational Complexity**
We demonstrate that SeqPINN is more computationally efficient because it reduces the input dimension. Instead of training PINN to solve the continuous-time Navier-Stokes equations, solving the steady-state equations requires only (x, y) pairs as input at a single timestamp. For a case containing \(n\) pairs of (x, y) and \(N\) timestamps in total, the number of training points in one epoch of the original PINN is \(n\cdot N\), whereas that of SeqPINN is \(n\); one epoch of SeqPINN is therefore always \(N\) times more computationally efficient than one epoch of PINN. Overall, training SeqPINN for 30 epochs at each timestamp passes \(30\cdot n\) points per timestamp through the network, or \(30\cdot(n\cdot N)\) in total, which matches the cost of training PINN for only 30 epochs. Moreover, the model itself is smaller when the input dimension is reduced from 3 to 2, which further explains what makes SeqPINN efficient. SeqPINN also benefits from the use of ultrafast Doppler ultrasound, which operates at thousands of frames per second: vanilla PINN, which feeds in the collocation points of all timestamps at once, may be untrainable when the number of timestamps is high due to the limits of GPU memory (VRAM), while SeqPINN can always be trained since it only takes in the collocation points of a single timestamp.
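For a concrete sense of scale, the single-branch case below uses \(n=3718\) collocation points, and the full cardiac cycle contains \(N=608\) timestamps; one epoch of vanilla PINN then processes \(3718\times 608\approx 2.3\times 10^{6}\) points, whereas one SeqPINN epoch processes only 3718.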
### **Sampled-Posterior PINN**
SeqPINN significantly expedites the convergence of PINN while maintaining satisfactory accuracy. However, the sequential training scheme impedes real-time training of PINN in ultrafast Doppler ultrasound, since the model at any timestamp depends on the model updates at the previous timestamp. A parallel training scheme best fits the goal of real-time training. In this section, we demonstrate how we adopt the idea of SWAG to ensure a trustworthy initialization and achieve parallel training across timestamps based on the SeqPINN initialization.
At the initialization stage, instead of promoting a set of PINN parameters optimized for a single timestamp, the parallel training scheme requires an initial set of PINN parameters that generalizes to all timestamps quickly and accurately. As in SeqPINN, we initialize the PINN model under the assumption of steady-state Navier-Stokes equations. We then view the SeqPINN initialization \(p(\theta)\) as a prior belief and the sparse data at the next \(k\) timestamps \(p(\mathcal{D}\mid\theta)\) as new observations that update the prior belief. As a result, a training dataset \(\mathcal{D}\) comprising \(k\) timestamps is formed, enabling Bayesian approaches to boost network accuracy and achieve better generalizability [52, 53, 32]. The posterior belief \(p(\theta\mid\mathcal{D})\) follows from Bayes' rule.
\[p(\theta\mid\mathcal{D})=\frac{p(\mathcal{D}\mid\theta)p(\theta)}{p(\mathcal{ D})} \tag{10}\]
where \(p(\mathcal{D})\) is obtained by marginalizing \(\theta\) from \(p(\mathcal{D},\theta)\), which is the joint probability function of \(\mathcal{D}\) and \(\theta\):
\[p(\mathcal{D})=\int p(\mathcal{D},\theta)\,d\theta \tag{11}\]
However, due to the infinite number of possible neural-network weights, the marginalization is not tractable. Alternatively, deep ensembles have been shown to be an efficient way of approximating the Bayesian posterior distribution [52, 54]. As proved in [53], constant SGD can be used to simulate a Markov chain whose stationary distribution approximates the Bayesian posterior. Thus, by sequentially training the next \(k\) timestamps for \(m^{*}\) epochs each with constant SGD, we obtain \(k\) samples from the posterior distribution. Given the temporal causality of physical systems [55], each timestamp is trained for \(m^{*}\) epochs before the next timestamp is trained, instead of cyclically training the \(k\) timestamps for \(m^{*}\) epochs. Finally, by averaging the samples from the posterior distribution as in eq. (12), we derive a set of PINN parameters with better generalizability across all timestamps.
\[\theta^{*}=\frac{1}{k}\sum_{i=1}^{k}\theta_{i} \tag{12}\]
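In code, eq. (12) amounts to a parameter-wise average of the \(k\) constant-SGD solutions; a sketch assuming floating-point PyTorch state dicts is given below.

```python
import copy
import torch

def average_state_dicts(state_dicts):
    # eq. (12): parameter-wise mean of k constant-SGD solutions
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float()
                                for sd in state_dicts]).mean(dim=0)
    return avg
```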
One concern is that the initialization frame may find a sharp local minimum, instead of a flat and wide local minimum, on the loss surface; this may cause the solutions of the next
timestamps to have a large variance: the averaged solution may fall into the territory between two local minima and generalize poorly. To address this, we propose an uncertainty-estimation algorithm inspired by SWAG (see Section III-C) to avoid sharp local minima. A Gaussian distribution is formed to approximate the posterior distribution of the SeqPINN model parameters; its mean is the average of the SGD solutions, and its variance is approximated from the solutions of the \(k\) timestamps. After approximating the posterior distribution of the PINN model parameters, uncertainty estimation is performed by Bayesian model averaging. The uncertainty is visualized as a standard-deviation map (std map), the uncertainty index is defined as the mean value of the std map, and the uncertainty is monitored during training. All timestamps are then initialized with the averaged solution from constant SGD. The method is thus coined Sampled-Posterior PINN (SP-PINN).
```
for i ← 1 to N do
    if i == 1 then
        // initialize the first frame
        for j ← 1 to n do
            θ ← θ − η · ∂L/∂θ
        end for
    else if i ≤ k then
        // initialize with the PINN trained at the previous timestamp
        for j ← 1 to m* do
            θ ← θ − η · ∂L/∂θ
        end for
        θ̄ ← (n_s · θ̄ + θ) / (n_s + 1)   // running mean over the n_s solutions so far
        Σ ← 1/(n_s − 1) · sum over stored solutions of (θ_i − θ̄)(θ_i − θ̄)^T
        draw samples θ̃ ∼ N(θ̄, Σ) and perform uncertainty estimation
    else
        // initialize the model with θ̄
        for j ← 1 to m* do
            θ ← θ − η · ∂L/∂θ
        end for
    end if
end for
```
**Algorithm 2** Sampled Posterior Physics-informed Neural Networks.
## V **Experiments**
### **Experimental Setup**
#### V-A1 **Fluid-structure Interaction (FSI) Simulation Data**
We first evaluated our proposed methods on computer simulations of blood flow to demonstrate the feasibility of SeqPINN. It is important to show that SeqPINN can solve the Navier-Stokes equations under a pulsatile fluid flow, as shown in Fig. 2(c), given its relevance in biomedical applications. Fluid-structure interaction simulations, which describe blood dynamics in a compliant blood vessel better than CFD, were conducted [46] using COMSOL 5.5 software (COMSOL Inc., Burlington, MA, USA) and adopted as the ground truth. We designed a single-branch (Fig. 2(a)) and a three-branch (Fig. 2(b)) blood vessel to mimic a common segment of a major artery and a branched structure, respectively. In designing the COMSOL simulations, the surrounding medium, such as deformable solid structures and their mechanical properties, is first defined to fix the vessel geometry. For the single-branch vessel, the fluid properties were set close to real human blood, with a density of 1060 \(kg/m^{3}\) and a dynamic viscosity of 5 \(mPa\cdot s\). The Reynolds number of the blood flow was \(R_{e}=\frac{\rho V_{avg}D}{\mu}=\frac{1060\,kg/m^{3}\times 82.8\,cm/s\times 5\,mm}{5\,mPa\cdot s}=877.7\), where \(\rho\) is the fluid density, \(V_{avg}\) is the average fluid velocity, and \(D\) is the entrance diameter. The inlet flow velocity and outlet pressure used in the FSI simulation of the single-branch case are shown in Fig. 2(c) and Fig. 2(d). A 200 \(\times\) 30 grid was generated during the COMSOL simulation, and 3718 collocation points were used when calculating the equation loss after removing the surrounding medium of the COMSOL-simulated vessel.
For the three-branch vessel [46], the fluid density and dynamic viscosity were set to 1037 \(kg/m^{3}\) and 4.1 \(mPa\cdot s\), respectively, to match the properties of the blood-mimicking fluid used in the _in vitro_ phantom experiment. The inlet flow velocity and outlet pressure used in the FSI simulation of the three-branch case are shown in Fig. 2(c) and Fig. 2(d). This scenario investigated PINN's capability of solving blood flow velocity in a branched structure under the supervision of the steady-state Navier-Stokes equations. The Reynolds number of the blood flow was \(R_{e}=\frac{\rho V_{avg}D}{\mu}=\frac{1037\,kg/m^{3}\times 18.7\,cm/s\times 10\,mm}{4.1\,mPa\cdot s}=473\). The simulation was generated automatically using COMSOL with the previously specified parameters. A larger grid with 500 \(\times\) 300 collocation points was generated due to the complexity of the geometry, and 24359 collocation points were used when calculating the equation loss after removing the surrounding medium of the simulated trifurcate vessel. Blood flow was assumed to be an incompressible Newtonian fluid. More details of the fluid-structure simulation design can be found in [45, 46].
#### V-A2 **Evaluation Metrics**
Root Mean Square Error (RMSE) across time is the most common way to evaluate the performance of models, and it is defined as follows:
\[RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(\hat{u}_{i}-u_{i})^{2}} \tag{13}\]
where \(u_{i}\) and \(\hat{u}_{i}\) are the true and predicted velocities at time index \(i\), and \(N\) is the total number of timestamps. Training efficiency is another important metric, as the ultimate goal of this study is real-time training of PINN. We measure training efficiency by the training time and by the ratio between training time and improvement in RMSE; a huge sacrifice in training time for a small improvement in RMSE is not desirable.
#### V-A3 **Training Process**
SeqPINN and SP-PINN were both trained on an Nvidia GeForce RTX 3090. The training of SeqPINN and SP-PINN contained two stages: initialization and adaptation. During the initialization of SeqPINN, the model was trained for 3000 epochs using the Adam optimizer; the learning rate was set to 1e-3 for 2000 epochs and 5e-4 for 1000 epochs. The batch size was 1024 in the single-branch vessel and 2048 in the three-branch vessel. The neural network consisted of 8 fully connected layers with 150 neurons in each layer. During the adaptation of SeqPINN, each timestamp was initialized with the pre-trained model from the previous timestamp and then trained for 30 epochs; the learning rate was kept at 5e-4, following the end of the initialization stage, and the batch size was kept at 1024 for the single-branch case and 2048 for the three-branch case. An identical initialization stage was performed for SP-PINN; all timestamps were then initialized with the averaged SGD solution during the adaptation stage and thus trained in parallel. In our experiments, since the batch sizes (1024 and 2048) were small, the full power of a single GPU could not be exploited when training a single model; we therefore utilized the Multi-Process Service (MPS) provided by Nvidia to maximize GPU efficiency.
#### V-A4 **Phantom Experiment**
Phantom data from Dong et al. [46] were used to validate the feasibility of SeqPINN and SP-PINN. Ultrafast Doppler ultrasound data are known to be noisy: the accuracy and precision of Doppler estimates of blood flow velocity depend not only on the Doppler angle but also on the quality of the ultrafast ultrasound images, including the signal-to-noise ratio, contrast, and spatial resolution. Whether PINN could handle ultrafast Doppler ultrasound data was unknown; thus, the feasibility of using ultrafast Doppler ultrasound data as the input was substantiated with data from phantom experiments.
### **Results**
Tables I and II compare model accuracy and efficiency among vanilla PINN, SeqPINN, and SP-PINN in the single-branch and three-branch cases, respectively. Due to the extremely large number of collocation points, it is impractical for vanilla PINN to cover an entire cardiac cycle at a high frame rate of 1000 frames per second, so the comparison was performed on 100 timestamps only. SeqPINN was demonstrated to be much more accurate and efficient than vanilla PINN. Specifically, SeqPINN achieved an RMSE of 1.01 cm/s in the single-branch vessel, outperforming vanilla PINN by 45%, and its training time was 7.5 times shorter. SP-PINN achieved an RMSE of 1.26 cm/s, surpassing vanilla PINN by 31%, and trained 11.4 times faster. For the three-branch vessel, the RMSE of SeqPINN was 1.91 cm/s, 29% lower than that of vanilla PINN, with training 6.1 times faster; SP-PINN was 4% more accurate than vanilla PINN while training 10.2 times faster. Overall, we suggest SeqPINN for biomedical applications, where accuracy outweighs training speed; in fields where training speed is crucial, SP-PINN is the preferred option. Interested readers may refer to the supplementary videos for fully recovered blood flow velocity maps over the entire computational domain by SeqPINN and SP-PINN.
of diastole (from 600 ms to 700 ms). The novel training framework of PINN under the assumption of steady-state Navier-Stokes equations was demonstrated to outperform vanilla PINN. It is worth mentioning that the errors of all three models across time followed the same trend as the velocity profile.
### **Discussion**
The number of training epochs needed for the subsequent timestamps after initialization, denoted \(m\), is an important parameter in SeqPINN for balancing the trade-off between accuracy and efficiency. Increasing \(m\) by one for every timestamp results in \(t\) more training epochs in total. We therefore analyzed the trade-off between accuracy (RMSE) and \(m\) empirically and showed that an optimal number of training epochs \(m^{*}\) exists. Fig. 6 illustrates the relationship among accuracy (RMSE), the number of training epochs, and training time on the simulated single-branch (left) and three-branch (right) vessels. On one hand, RMSE decreased drastically and almost linearly as the number of training epochs per timestamp increased up to 30 epochs, after which the decrease slowed. On the other hand, training time grew linearly with the number of training epochs per timestamp. A diminishing return in RMSE reduction was observed for every 10 additional training epochs when \(m\) was greater than 30. An effective strategy to overcome this diminishing return is to train SeqPINN cyclically instead of increasing \(m^{*}\) within a single training cycle. Fig. 7 indicates that training SeqPINN for two cycles with \(m\) equal to 30 in each cycle significantly improved accuracy over training for one cycle with \(m\) equal to 60; note that the two strategies have the same computational complexity. The superior performance of cyclic training suggests that the models learned across timestamps help PINN find a global model that alleviates the optimization challenge for all timestamps.
We proposed SeqPINN under the assumption of an infinitesimal time-step, for which ultrafast Doppler ultrasound is a suitable biomedical example. In fact, SeqPINN is robust to a large time-step: Fig. 8 shows that the RMSE and its standard deviation remained stable even when the time-step was increased to 50 ms. SeqPINN exhibited a strong transfer-learning ability, with only slightly degraded accuracy as the step size increased; the number of training epochs \(m^{*}\) was kept at 30. The success of SeqPINN rests on a reasonable initialization for the current timestamp. Clearly, the initialization timestamp can be seen as a pre-trained model not only for the next timestamp but for all timestamps, thereby building the foundation for SP-PINN.
SP-PINN is built upon the implementation of SeqPINN and a Bayesian approach. A well-trained solution of the steady-state Navier-Stokes equations, followed by a constant-SGD training scheme, produces a stable posterior Gaussian distribution of the PINN parameters, and the mean of this distribution shall generalize well across all timestamps. In SP-PINN, generalizability is assessed through an uncertainty map produced by sampling from the posterior distribution and performing Bayesian model averaging. Fig. 9 and Fig. 10 illustrate uncertainty estimation in the single-branch and three-
Fig. 4: Comparison of lateral (a) and axial (c) velocity maps by conventional ultrafast Doppler ultrasound and lateral (b) and axial (d) velocity maps by PINN-regularized ultrafast Doppler ultrasound.
Fig. 5: Comparison of RMSEs among Vanilla PINN, SeqPINN, and SP-PINN across 100 timestamps in the simulated single-branch (top) and three-branch (below) vessels.
branch simulation cases. Note that in the single-branch vessel case, a good initialization and a bad initialization were visually similar, making a bad initialization difficult to identify; an uncertainty estimate is therefore indispensable. In our experiments, the uncertainty index was monitored once every 500 training epochs. The size \(k\) of the training dataset balances training speed against training stability. When dealing with a pulsatile velocity field acquired by ultrafast Doppler ultrasound, \(k\) is much smaller than the total number of timestamps \(N\). This lack of global insight biases any effort towards an initialization that generalizes well to all timestamps; the Bayesian approach mitigates the bias with an informative prior belief and Bayesian marginalization. In practice, \(k\) was tuned over the choices 5, 10, 15, and 20, and we found that averaging over 15 subsequent timestamps led to the best accuracy. Averaging the constant-SGD solutions of consecutive timestamps is a cheap and well-justified way to approximate the Bayesian posterior distribution while ensuring accuracy for the subsequent timestamps. Since the same initialization is used for all timestamps, SP-PINN is not affected by the time-step size.
Compared with SeqPINN, vanilla PINN was slower to train because the extra time dimension brings \(N\) times more copies of the collocation points, with \(N\) the number of timestamps. In contrast, SeqPINN initializes and trains a unique model, one at a time, for all timestamps, needing
Fig. 8: Comparison of accuracy and time-steps in the simulated single-branch (left) and three-branch (right) vessels. Mean and standard deviation were calculated using RMSEs at each timestamp. Time-steps of 5, 10, 20, 30, 40, and 50 ms were tested.
Fig. 6: Comparison of SeqPINN training time and accuracy as the number of training epochs per frame \(m\) increases in the simulated single-branch (a) and three-branch (b) vessels. The accuracy and training time recorded were for the entire computational domain containing 608 timestamps.
Fig. 7: Comparison of training SeqPINN for one cycle with \(m\)*=30 and \(m\)*=60 and two cycles with \(m\)*=30 in the simulated single-branch (a) and three-branch (b) vessels.
Fig. 9: Uncertainty estimation in the single-branch vessel. (a) a sample from good estimation of the posterior distribution. (b) a sample from bad estimation of the posterior distribution. (c) standard deviation (std) of 30 samples from the approximation of the posterior distribution corresponding to (a). Low std indicates that the initialization is stable. (d) standard deviation (std) of 30 samples from the approximation of the posterior distribution corresponding to (b). High std indicates that the initialization is not stable.
\(N\) times more space for the storage of PINN models. In other words, SeqPINN trades the number of models (storage space) for training speed. Fortunately, individual SeqPINN models were very small (about 461 kB), and it was not necessary to store them on the GPU. This trade-off between storage space and training speed is highly valuable for the efficient training of PINN.
For the comparison between SeqPINN and SP-PINN, training speed was no longer a bottleneck; thus, a comparison over the entire velocity profile was performed (Fig. 11 and Fig. 12). For the single-branch vessel, SeqPINN and SP-PINN produced very similar results, although SP-PINN was more oscillatory. For the three-branch vessel, SP-PINN had lower accuracy than SeqPINN at almost all timestamps; we speculate that the lower accuracy of SeqPINN for the three-branch vessel led to the poorer performance of SP-PINN. Due to the implementation of the steady-state Navier-Stokes equations, a small batch size cannot fully utilize the computing power of a GPU. Therefore, SP-PINN will always be faster than SeqPINN under Nvidia MPS, even when computing resources do not support parallel training due to limited GPUs and a large number of timestamps, as in Doppler ultrasound.
## VI **Conclusion**
In this paper, we developed SeqPINN with steady-state Navier-Stokes equations to facilitate the training of PINN, and SP-PINN to conduct parallel training of SeqPINN toward real-time generalization. Fast training of PINN is imperative in many areas, such as ultrafast Doppler ultrasound, which depicts complex blood flow dynamics at thousands of frames per second. We view SeqPINN as the foundation toward real-time training of physics-informed learning for solving the Navier-Stokes equations. The novel training framework of SeqPINN significantly reduces the training time while achieving superior performance compared with the current implementation of PINN. SeqPINN is a generic algorithm that can incorporate various techniques mentioned in the related work; for example, the design of weighted losses and the calculation of derivatives can potentially be built on top of SeqPINN. SP-PINN with uncertainty estimation is a reliable way to initialize a generalizable model that can train SeqPINN in parallel. Meta-learning based approaches can also be adopted to search for a good initialization along the time dimension. The success of SeqPINN is built on the foundation of steady-state Navier-Stokes equations. This paper is envisioned to stimulate more future research on steady-state PDEs, particularly in biomedical applications.
|
2309.04782 | RRCNN$^{+}$: An Enhanced Residual Recursive Convolutional Neural Network
for Non-stationary Signal Decomposition | Time-frequency analysis is an important and challenging task in many
applications. Fourier and wavelet analysis are two classic methods that have
achieved remarkable success in many fields. They also exhibit limitations when
applied to nonlinear and non-stationary signals. To address this challenge, a
series of nonlinear and adaptive methods, pioneered by the empirical mode
decomposition method have been proposed. Their aim is to decompose a
non-stationary signal into quasi-stationary components which reveal better
features in the time-frequency analysis. Recently, inspired by deep learning,
we proposed a novel method called residual recursive convolutional neural
network (RRCNN). Not only RRCNN can achieve more stable decomposition than
existing methods while batch processing large-scale signals with low
computational cost, but also deep learning provides a unique perspective for
non-stationary signal decomposition. In this study, we aim to further improve
RRCNN with the help of several nimble techniques from deep learning and
optimization to ameliorate the method and overcome some of the limitations of
this technique. | Feng Zhou, Antonio Cicone, Haomin Zhou | 2023-09-09T13:00:30Z | http://arxiv.org/abs/2309.04782v1 | RRCNN\({}^{+}\): An Enhanced Residual Recursive Convolutional Neural Network for Non-stationary Signal Decomposition
###### Abstract
Time-frequency analysis is an important and challenging task in many applications. Fourier and wavelet analysis are two classic methods that have achieved remarkable success in many fields. They also exhibit limitations when applied to nonlinear and non-stationary signals. To address this challenge, a series of nonlinear and adaptive methods, pioneered by the empirical mode decomposition method, have been proposed. Their aim is to decompose a non-stationary signal into quasi-stationary components which reveal better features in the time-frequency analysis. Recently, inspired by deep learning, we proposed a novel method called residual recursive convolutional neural network (RRCNN). Not only can RRCNN achieve more stable decomposition than existing methods while batch processing large-scale signals with low computational cost, but deep learning also provides a unique perspective on non-stationary signal decomposition. In this study, we further improve RRCNN with the help of several nimble techniques from deep learning and optimization, ameliorating the method and overcoming some of its limitations.
Empirical mode decomposition; Non-stationary signal decomposition; Deep learning; Attention mechanism.
## I Introduction
Time-frequency analysis has gone through over 200 years of development, and its beginnings can be traced back to the Fourier transform [1]. To process signals more effectively, the wavelet transform, which provides focusing capability, has been studied since the late 1980s [2]. On one hand, Fourier and wavelet methods have achieved remarkable success in a wide range of applications; on the other hand, they are linear transforms, which becomes a limitation when handling non-stationary signals. In 1998, Huang and collaborators proposed the empirical mode decomposition (EMD) [3], a nonlinear procedure for decomposing a signal into multiple quasi-stationary components so that they can be better analyzed. EMD has gained great popularity and found numerous applications in different disciplines; however, its mathematical foundation is still lacking. Many alternative algorithms with improved performance have emerged in recent years. They can be classified into two categories: methods based on iterations and methods based on optimization.
Among the iteration-based methods, we mention those based on moving-average computation [4, 5], partial differential equation (PDE) solution [6, 7], and recursive filter application [8, 9]. In this category, Lin et al. proposed the iterative filtering (IF) method to calculate the local average; the idea is to apply filters to replace the computation of the mean of the upper and lower envelopes in the sifting process of EMD [8]. Cicone et al. conducted in-depth research on IF and extended it to high-dimensional [10] and non-stationary signals [9]. The optimization-based methods include compressed sensing [11], variational optimization [12, 13, 14, 15], and a few other techniques [16, 17, 18]. Among the variational-optimization methods, Dragomiretskiy et al. proposed variational mode decomposition (VMD) [12] with the goal of decomposing a signal into a few modal functions with a specific sparsity. Osher et al. [14] further developed the ideas behind VMD by proposing the geometric mode decomposition, which is based on VMD itself.
After an in-depth comparison of existing methods, we identify some common features. 1) The local average of a signal is critical; many methods aim to find it in a reasonable way. 2) It is unrealistic to expect a single local-average method to handle all signals. 3) Existing methods usually require parameters, and their results are generally sensitive to the parameter selection. These observations inspire us to consider the problem from a new perspective: finding a local average customized to the "pattern" of a signal, a task routinely accomplished by modern deep learning methods.
In [19] we proposed a deep-learning-based method, called residual recursive convolutional neural network (RRCNN), for non-stationary signal decomposition. Experiments show that RRCNN is not only more stable and effective than existing methods on artificially synthesized signals but can also replicate the decompositions of existing methods on real-life signals with a significant reduction in boundary effects. More importantly, once the RRCNN model is trained, it yields decompositions in real time, unmatched by any of the existing methods. However, since RRCNN was the first deep learning algorithm designed for this purpose, it did not realize the full potential of deep learning in non-stationary signal decomposition. For example, its weights are always
the same in predicting different signals. In this sense, it is not fully adaptive to different classes of signals. In addition, the decomposition produced via RRCNN may contain high-frequency oscillations with small amplitude in some components. These small artifacts can degrade the subsequent time-frequency analysis of the signal.
In this work, we employ several simple techniques from deep learning, including multi-scale convolution [20, 21], attention [22], residue [23], and total-variation-based denoising (TVD) [24, 25], to further improve RRCNN. The proposed model is called RRCNN\({}^{+}\). The main contributions are summarized as follows: 1) The new module, composed of the multi-scale convolution, attention, and residue techniques, allows RRCNN\({}^{+}\) to extract heterogeneous features and gives it stronger adaptability. 2) TVD allows RRCNN\({}^{+}\) to remove the small-amplitude, high-frequency oscillations often observed in RRCNN; the resulting components appear to have more physical meaning than those produced by the standard RRCNN algorithm in some applications.
The rest of this paper is organized as follows. We review RRCNN in Section II. Then, we introduce RRCNN\({}^{+}\) and illustrate how it works in Section III. Experiments are discussed in Section IV. Section V provides the conclusion.
## II RRCNN
Given a non-stationary signal \(X\in\mathbb{R}^{N}\), RRCNN can be described as the following optimization problem,
\[\begin{cases}\min_{\{\mathcal{W}_{m}\}_{m=1}^{M}}\|\hat{\mathbf{Y}}-\mathbf{Y} \|_{F}^{2}+\eta\,QTV(\hat{\mathbf{Y}})\\ \text{s.t., }\hat{Y}_{m}=F(X_{m-1},\mathcal{W}_{m}),X_{m}=X_{m-1}-\hat{Y}_{m}, \end{cases} \tag{1}\]
where \(m=1,2,\ldots,M\), \(M\) denotes the number of the expected components (each is called an intrinsic mode function (IMF) [3]), \(\hat{\mathbf{Y}}=\{\hat{Y}_{m}|\hat{Y}_{m}\in\mathbb{R}^{N}\}_{m=1}^{M}\) and \(\mathbf{Y}=\{Y_{m}|Y_{m}\in\mathbb{R}^{N}\}_{m=1}^{M}\) represent the predicted and true IMFs respectively, \(\|\cdot\|_{F}\) is the Frobenius norm, \(\eta\) denotes the penalty parameter, \(QTV(\hat{\mathbf{Y}})=\sum_{t=1}^{N-1}\sum_{m=1}^{M}(\hat{Y}_{(t+1),m}-\hat{ Y}_{t,m})^{2}\) is added to ensure the smoothness of each \(\hat{Y}_{m}\), \(\mathcal{W}_{m}\) denotes the set of parameters involved in finding the \(m\)-th IMF. To facilitate the understanding of the calculation process of \(F(X_{m-1},\mathcal{W}_{m})\), \(X_{m-1}\) and \(\mathcal{W}_{m}\) are denoted here as \(X\) and \(\mathcal{W}\), \(F(X,\mathcal{W})\) is obtained by \(f\left(X^{(S-1)},\mathcal{W}^{(S-1)}\right)\), where \(S\) represents the number of recursion, \(\mathcal{W}^{(S-1)}\) is the parameter set composed of the undetermined weights in the \((S-1)\)-th recursion, \(f(\cdot,\cdot)\) and \(X^{(i+1)}\) are calculated as: \(f\left(X^{(i)},\mathcal{W}^{(i)}\right)=\sigma\left(X^{(i)}*W_{1}^{(i)}\right) \ast\tilde{W}_{2}^{(i)}\), \(X^{(i+1)}=X^{(i)}-f(X^{(i)},\mathcal{W}^{(i)})\), where \(i=0,1,\ldots,S-2\), \(X^{(0)}=X\), \(\mathcal{W}^{(i)}=\{W_{1}^{(i)},W_{2}^{(i)}|W_{1}^{(i)}\in\mathbb{R}^{K_{1}},W _{2}^{(i)}\in\mathbb{R}^{K_{2}}\}\) (\(K_{1}\) and \(K_{2}\) represent the convolutional filter lengths), \(\tilde{W}_{2}^{(i)}=\text{softmax}\left(W_{2}^{(i)}\right)\), \(\sigma(\cdot)\) is the activation function, and \(\ast\) denotes the 1-D convolution operation.
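To make the recursion concrete, the following PyTorch sketch evaluates one operator \(F(X,\mathcal{W})\). The tensor shapes and the use of 'same' padding are our assumptions, since the original description does not fix them; this is an illustrative sketch, not the released implementation.

```python
import torch
import torch.nn.functional as Fn

def rrcnn_sift(x, w1_list, w2_list, sigma=torch.tanh):
    # Sketch of the RRCNN operator F(X, W) from Sec. II.
    # x: (batch, 1, N) signal; w1_list[i], w2_list[i]: (1, 1, K) conv kernels.
    def f(x, w1, w2):
        h = sigma(Fn.conv1d(x, w1, padding='same'))
        return Fn.conv1d(h, torch.softmax(w2, dim=-1), padding='same')

    for i in range(len(w1_list) - 1):          # S - 1 inner updates
        x = x - f(x, w1_list[i], w2_list[i])
    return f(x, w1_list[-1], w2_list[-1])      # F(X, W) = f(X^(S-1), W^(S-1))
```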
Although it is empirically observed that RRCNN is superior to existing methods, there is room for further improvement. First of all, \(\{\mathcal{W}_{m}\}_{m=1}^{M}\) do not depend on the specific signal: once RRCNN is trained, it uses the same \(\{\mathcal{W}_{m}\}_{m=1}^{M}\) to process signals of different classes in the prediction phase. In this sense it is not fully adaptive, while adaptivity is a desirable trait in non-stationary signal decomposition. Secondly, adding \(QTV\) to the objective function raises two issues. (i) A bad choice of \(\eta\) can mislead RRCNN in the learning process, so the selection of \(\eta\) becomes critical and sensitive. (ii) For non-stationary signal decomposition, smoothness has a specific physical meaning: it aims to avoid high-frequency, low-amplitude oscillations in the generated IMFs. Adding the QTV directly to the objective function does not reliably suppress these oscillations.
## III RRCNN\({}^{+}\)
To improve RRCNN, we incorporate some techniques including a mechanism composed of multi-scale convolution [20, 21], attention [22] and residue [23], and total-variation-denoising (TVD) [26] into model (1). More precisely, the proposed RRCNN\({}^{+}\) method is expressed as
\[\begin{cases}\min_{\{\mathcal{W}_{m}\}_{m=1}^{M}}\|\hat{\mathbf{Y}}-\mathbf{Y} \|_{F}^{2}\\ \text{s.t., }\hat{Y}_{m}=\tilde{F}(X_{m-1},\mathcal{W}_{m}),\hat{Y}_{m}\leftarrow \text{TVD}\left(\hat{Y}_{m}\right)\\ \quad\quad\quad\quad X_{m}=X_{m-1}-\hat{Y}_{m},\end{cases} \tag{2}\]
where \(\text{TVD}\left(\hat{Y}_{m}\right)\) is used to smooth \(\hat{Y}_{m}\), \(\tilde{F}(X_{m-1},\mathcal{W}_{m})\) denotes the structure of \(F(X_{m-1},\mathcal{W}_{m})\) improved by the multi-scale convolution, attention, and residue techniques. The other symbols are the same as in (1). \(\tilde{F}(\cdot,\cdot)\) is defined as \(\tilde{F}(X,\mathcal{W})=\tilde{f}\left(X^{(S-1)},\mathcal{W}^{(S-1)}\right)\), \(\tilde{f}(\cdot,\cdot)\) and \(X^{(i+1)}\) are calculated as follows,
\[\begin{cases}X_{k}=\sigma\left(X^{(i)}*W_{1k}^{(i)}\right),\ X_{att}=\text{ Attention}\left(X^{(i)}\right),\\ \tilde{f}\left(X^{(i)},\mathcal{W}^{(i)}\right)=\sigma\left(\text{Concat}[(X^{(i)}, \{X_{k}\},X_{att})*W_{2}^{(i)}\right)*\tilde{W}_{3}^{(i)},\\ X^{(i+1)}=X^{(i)}-\tilde{f}\left(X^{(i)},\mathcal{W}^{(i)}\right),\end{cases}\]
where \(k=1,2,3\), \(\mathcal{W}^{(i)}\) composed of convolution kernel weights at different scales, i.e., \(\{W_{1k}^{(i)}\in\mathbb{R}^{[K_{1}/2^{k-1}]}\}_{k=1}^{3}\), \(W_{2}^{(i)}\in\mathbb{R}^{K_{2}}\) and \(W_{3}^{(i)}\in\mathbb{R}^{K_{3}}\) (\(K_{1}\), \(K_{2}\) and \(K_{3}\) represent the filter lengths); \(X_{att}=\text{Attention}\left(X^{(i)}\right)\) denotes the attention layer of \(X^{(i)}\) proposed in [22]; \(\tilde{W}_{3}^{(i)}=\text{softmax}\left(W_{3}^{(i)}\right)\), and the other symbols are the same as in RRCNN.
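A sketch of one \(\tilde{f}\) step is given below. The kernel sizes, the tanh activation, and the use of `nn.MultiheadAttention` as a stand-in for the attention layer of [22] are placeholders for details not specified here; the block is a hedged illustration of the multi-scale convolutional attention structure, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as Fn

class MultiScaleAttn(nn.Module):
    """Illustrative RRCNN+ feature block; kernel sizes are placeholders."""
    def __init__(self, k1=16, k2=5, k3=5):
        super().__init__()
        # three convolution branches at scales K1, K1/2, K1/4
        self.convs = nn.ModuleList(
            nn.Conv1d(1, 1, k1 // 2 ** k, padding='same') for k in range(3))
        self.attn = nn.MultiheadAttention(embed_dim=1, num_heads=1,
                                          batch_first=True)
        self.conv2 = nn.Conv1d(5, 1, k2, padding='same')  # 1 + 3 + 1 channels
        self.w3 = nn.Parameter(torch.zeros(1, 1, k3))
        self.act = nn.Tanh()

    def forward(self, x):                      # x: (batch, 1, N)
        branches = [self.act(c(x)) for c in self.convs]
        xt = x.transpose(1, 2)                 # (batch, N, 1) for attention
        x_att, _ = self.attn(xt, xt, xt)
        cat = torch.cat([x] + branches + [x_att.transpose(1, 2)], dim=1)
        h = self.act(self.conv2(cat))
        return Fn.conv1d(h, torch.softmax(self.w3, dim=-1), padding='same')
```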
Given a component \(\hat{Y}\in\mathbb{R}^{N}\), the output of \(\text{TVD}(\hat{Y})\) is generated by solving the following optimization problem: \(\arg\min_{Y\in\mathbb{R}^{N}}\{\frac{1}{2}\|\hat{Y}-Y\|_{2}^{2}+\lambda\text{ TV}(Y)\}\), where \(\lambda\) denotes a penalty parameter, \(\text{TV}(Y)=\|\mathbf{D}Y\|_{1}\), and \(\mathbf{D}\) is the 1-order difference matrix. The solution of \(\text{TVD}(\hat{Y})\) is given in Algorithm 1 of the supplementary material.
The overall architecture of RRCNN\({}^{+}\) is shown in Fig. 1. The pseudocode of RRCNN\({}^{+}\) is reported in Algorithm 2 of the supplementary material. Compared to RRCNN, the improvements of RRCNN\({}^{+}\) are reflected in two aspects. First, three convolutions with different kernel scales and an attention layer are carried out, their outputs and shared input are concatenated as the input of the Conv. 2 layer. Second, TVD is added in front of each IMF. The improvements are called the multi-scale convolutional attention and TVD modules, respectively, where the former enhances the adaptability and the diversity of the extracted features, while the latter obtains smooth components that are more meaningful from a physical perspective.
## IV Experiments
We first evaluate the TVD and multi-scale convolutional attention modules that are adopted in RRCNN\({}^{+}\). For the convenience of notation, we denote the models improved by TVD and multi-scale convolutional attention as RRCNN_TVD and RRCNN_ATT, respectively. Then, RRCNN\({}^{+}\) is also compared with the state-of-the-art methods, including EMD [3], EEMD [27], VMD [12], EWT [28], FDM [29], IF [30], INCMD [31], SYNSQ_CWT and SYNSQ_STFT [32]. Details of the experimental data, settings, and evaluation metrics are described in the supplementary material.
### _Are TVD and multi-scale convolutional attention effective?_
To justify that TVD and multi-scale convolutional attention modules are effective, we first compare RRCNN_TVD, RRCNN_ATT and RRCNN\({}^{+}\) with RRCNN on both training and validation datasets of Dataset_1 and Dataset_2. The results, measured by MAE (mean absolute error), RMSE (root mean squared error), MAPE (mean absolute percentage error) and TV (total variation), are listed in Table I. We obtain the following findings: (i) For the vast majority of cases of both training and validation sets of Dataset_1 and Dataset_2, TVD and multi-scale convolutional attention effectively improve the performance over RRCNN. (ii) Combining TVD and multi-scale convolutional attention, i.e., RRCNN\({}^{+}\), improves performance over RRCNN in all cases. In addition, we examine the smoothness by comparing the TV norm. The TV values for the components generated by RRCNN\({}^{+}\) are essentially the closest to those of the true components.
To test the generalization capability of both modules, two signals not contained in either dataset are constructed. The first one is \(x_{1}(t)=\cos(6.4\pi t)+\cos(5\pi t)\), where \(\cos(6.4\pi t)\) and \(\cos(5\pi t)\) are denoted as the components \(c_{1}\) and \(c_{2}\) of \(x_{1}\), respectively. The frequencies of \(c_{1}\) and \(c_{2}\) are very close, which makes \(x_{1}\) suitable for evaluating the deep-learning-based models trained on Dataset_1. The second signal is \(x_{2}(t)=\cos\left(8\pi t+2t^{2}+\cos(t)\right)+\cos(5\pi t)+\varepsilon(t)\), where \(\cos\left(8\pi t+2t^{2}+\cos(t)\right)\) and \(\cos(5\pi t)\) are called the components \(c_{1}\) and \(c_{2}\) of \(x_{2}\), respectively, and \(\varepsilon(t)\) is additive Gaussian noise with SNR = \(15\,dB\). \(x_{2}\) contains stronger noise than that added to the signals in Dataset_2, and it is used to test the trained deep-learning-based models on Dataset_2.
Results for \(x_{1}\) and \(x_{2}\) are given in Table II and Fig. 2. In Table II, we observe that the results obtained by RRCNN_TVD, RRCNN_ATT and RRCNN\({}^{+}\), measured by the error metrics, are always better than those of RRCNN, which indicates the effectiveness of introducing TVD and multi-scale convolutional attention. The TV-norm comparison shows that RRCNN_TVD, RRCNN_ATT and RRCNN\({}^{+}\) improve on RRCNN in all but one case, namely \(c_{1}\) of \(x_{2}\), where the TV norm of RRCNN is the closest to that of the true component. Looking closely at Fig. 2, we find that RRCNN_TVD, RRCNN_ATT and RRCNN\({}^{+}\) greatly improve on RRCNN near the peaks and troughs.
### _Can RRCNN\({}^{+}\) outperform the state-of-the-art models?_
RRCNN\({}^{+}\) is also compared with existing methods, including EMD, EEMD, VMD, EWT, FDM, IF, INCMD, SYNSQ_CWT, SYNSQ_STFT, RRCNN, RRCNN_TVD and RRCNN_ATT. Again, we take \(x_{1}\) and \(x_{2}\) that are constructed in Section IV-A as the test signals.
The results obtained by the different methods are shown in Table VI of the supplementary material; here we list only the true components and those obtained by the top three methods in Table VI. We find that: (i) since the frequencies of the constituent components of \(x_{1}\) are very close, and \(x_{2}\) is disturbed by a high level of noise, some existing methods, such as EWT, FDM and VMD for \(x_{1}\), and EEMD and SYNSQ_STFT for \(x_{2}\), have a hard time distinguishing the two components, while other methods, like EWT, INCMD and SYNSQ_CWT, only
Fig. 1: Graphic illustration of the network structure of RRCNN\({}^{+}\).
show relatively accurate components on \(x_{2}\). (ii) Deep-learning-based methods are generally impressive on both \(x_{1}\) and \(x_{2}\).
The components and the corresponding time-frequency distributions produced by the top three models are depicted in Fig. 4 of the supplementary material; here we depict only the time-frequency distributions of the true components and of the components obtained by RRCNN\({}^{+}\) in Fig. 3. Although the time-frequency information of non-stationary signals is not considered in RRCNN\({}^{+}\), its result is still relatively reasonable. We expect that a more accurate time-frequency distribution will be obtained once time-frequency information is incorporated into the model.
## V Conclusion
We recalled RRCNN and introduced RRCNN\({}^{+}\), deep-learning-based methods for non-stationary signal decomposition. We demonstrated that by introducing the multi-scale convolutional attention and TVD, RRCNN\({}^{+}\) improves the performance of RRCNN, and overcomes some of its limitations.
Yet, RRCNN\({}^{+}\) has some limitations too. For example, its network does not take into account the time-frequency information of the signal, which leads to a lack of time-frequency physical meaning in the results. Moreover, RRCNN\({}^{+}\) is a supervised learning model that needs a label for each training sample; it is not easy to extend this approach to real signals because labels for real signals are difficult to obtain. We plan to further address these issues in future work.
Fig. 3: Time-frequency distribution by IMFogram [33] for the true (top) and resulting components (bottom) of \(x_{1}\) (left) and \(x_{2}\) (right) by RRCNN\({}^{+}\).
Fig. 2: Results of \(x_{1}\), \(x_{2}\) by deep-learning-based models trained on Dataset_1, Dataset_2, respectively. Left: \(x_{1}\); Right: \(x_{2}\). |
2309.15328 | Exploring Learned Representations of Neural Networks with Principal
Component Analysis | Understanding feature representation for deep neural networks (DNNs) remains
an open question within the general field of explainable AI. We use principal
component analysis (PCA) to study the performance of a k-nearest neighbors
classifier (k-NN), nearest class-centers classifier (NCC), and support vector
machines on the learned layer-wise representations of a ResNet-18 trained on
CIFAR-10. We show that in certain layers, as little as 20% of the intermediate
feature-space variance is necessary for high-accuracy classification and that
across all layers, the first ~100 PCs completely determine the performance of
the k-NN and NCC classifiers. We relate our findings to neural collapse and
provide partial evidence for the related phenomenon of intermediate neural
collapse. Our preliminary work provides three distinct yet interpretable
surrogate models for feature representation with an affine linear model the
best performing. We also show that leveraging several surrogate models affords
us a clever method to estimate where neural collapse may initially occur within
the DNN. | Amit Harlev, Andrew Engel, Panos Stinis, Tony Chiang | 2023-09-27T00:18:25Z | http://arxiv.org/abs/2309.15328v1 | # Exploring Learned Representations of Neural Networks with Principal Component Analysis
###### Abstract
Understanding feature representation for deep neural networks (DNNs) remains an open question within the general field of explainable AI. We use principal component analysis (PCA) to study the performance of a k-nearest neighbors classifier (k-NN), nearest class-centers classifier (NCC), and support vector machines on the learned layer-wise representations of a ResNet-18 trained on CIFAR-10. We show that in certain layers, as little as \(20\%\) of the intermediate feature-space variance is necessary for high-accuracy classification and that across all layers, the first \(\sim\)\(100\) PCs completely determine the performance of the k-NN and NCC classifiers. We relate our findings to neural collapse and provide partial evidence for the related phenomenon of intermediate neural collapse. Our preliminary work provides three distinct yet interpretable surrogate models for feature representation with an affine linear model the best performing. We also show that leveraging several surrogate models affords us a clever method to estimate where neural collapse may initially occur within the DNN.
## 1 Introduction
In the past several years, DNNs have become a common tool in many scientific fields and real-world applications. As their use becomes more widespread, it is more important now than ever to better our understanding of these models. One way this can be accomplished is by studying their learned representations. This topic has been explored by many papers in recent years, including methods such as linear probing ([1; 4; 11; 8]), studying the dimensionality of the manifold underlying the activations ([2; 13; 14]), and studying the geometry of the learned representations ([9]).
In this paper, we return to a classical tool for data analysis, _principal component analysis_, to help us better understand the learned representations present in DNNs. While several papers have used PCA to study learned representations (e.g. [8; 11]), we are the first to study in depth the performance of multiple surrogate models using varying numbers of PCs across an entire CNN. We train a k-nearest neighbors classifier (k-NN), a nearest class-center classifier (NCC), and a support vector machine (SVM) on each residual block's activations after projecting down to the first \(d\) principal components (PCs) and make qualitative observations based on the results. Studying a pretrained ResNet-18 on the CIFAR-10 dataset, we observed that:
1. The SVM matches or outperforms the k-NN and NCC across the network.
2. The best possible performance of k-NN and NCC models on intermediate layer activations are completely determined by the first \(\sim\)\(100\) PCs. In fact, the k-NN model seems to overfit as additional PCs are used.
3. The low-variance PCs of intermediate layers contain meaningful information that improves SVM performance.
4. In the latter half of the network, the PCs necessary for \(90\%\) of the classification accuracy account for only \(20\%\)-\(40\%\) of the variance.
## 2 Related work
**Probing intermediate layers.** The idea behind classifier probes is that we can learn more about the behavior of intermediate layers, and thus neural networks in general, by studying the suitability of the intermediate representations for the desired task. The term "probe" was introduced by [1], who observed that the measurements of linear probes monotonically and gradually increased on trained networks the deeper they were in the network. [4] observed that k-NN, SVM, and logistic regression probes all match the performance of a DNN in the last layer and that the k-NN predictions are almost identical to those of the DNN. [8] projected each layer's activations down to the first \(d\) (RBF) kernel principal components before training linear classifiers. They studied changes in performance as architecture, hyperparameters, and \(d\) were varied. While [8] studied early CNN architectures, we study behaviors of modern residual networks. [11] introduced SVCCA, a technique combining SVD and canonical correlation analysis, to study the relationships between representations coming from different layers, training epochs, and architectures. They show that "trained networks perform equally well with a number of directions just a fraction of the number of neurons with no additional training, provided they are carefully chosen with SVCCA."
**Intrinsic dimension (ID) of neural network representations.** Another approach to understanding the learned representations of DNNs has been to study their dimensionality across the network. [14] used tangent plane approximations to estimate the dimension of feature maps and observed that they declined quickly across the network. More recently, [2] and [13] estimated IDs several orders of magnitude smaller than those of [14] using non-linear methods designed for curved manifolds. They also observed the layerwise ID profile to have a "hunchback" shape where the ID first increases and then drastically decreases. [2] compared against "PC-ID", the number of PCs required to explain \(90\%\) of the variance in the activations. They observed that (1) layerwise PC-ID profiles were qualitatively the same in trained and untrained networks and (2) the PC-IDs were one to two orders of magnitude greater than IDs estimated with non-linear methods. Using this, they argued that the activations must lie on a highly curved manifold. While this may be the case, we show that PCA can in fact help find interesting structures in learned representations. Additionally, we show that while the underlying manifold may be highly curved, it exists in a low-dimensional subspace that can be found using PCA.
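For concreteness, the PC-ID quantity reduces to a cumulative-variance threshold; the snippet below is an illustrative rendering (not code from [2]), with the function name and interface chosen for exposition:

```python
import numpy as np

def pc_id(explained_variance_ratio, threshold=0.9):
    """Smallest number of leading PCs whose cumulative explained
    variance reaches `threshold` (0.9 reproduces the PC-ID of [2])."""
    cumulative = np.cumsum(explained_variance_ratio)
    return int(np.searchsorted(cumulative, threshold) + 1)
```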
**Neural collapse.** First defined by [9], neural collapse is a phenomenon observed in the last layer activations of deep neural networks characterized by several properties, two of which are: **(NC1)** within-class variability collapses to zero and the activations collapse towards their class means and **(NC4)** the DNN classifies each activation using the NCC decision rule. Since then, there has been significant interest in investigating this phenomenon, including several papers exploring whether this phenomenon manifests in earlier layers' activations ([12; 5; 3]). Both [5] and [3] study the
Figure 1: Diagram showing ResNet-18 architecture with residual blocks labeled.
performance of the NCC classifier across the layers of a neural network and observe an increase in performance the deeper the layer is in the network and the more training epochs used. [12] shows that the within-class covariance decreases relative to the between-class covariance as you move deeper into a trained DNN.
## 3 Experiment
We used a pre-trained ([10]) ResNet-18 ([6]) with a test accuracy of \(92.5\%\) on the CIFAR-10 dataset ([7]). For a given layer, we standardized (mean zero, std one) the activations from the training data and then used PCA to project onto the first \(d\) PCs. We trained a 10-nearest neighbors model, nearest class-center model, and soft-margin support vector machine on the resulting data and then used them to classify the test data after applying the same standardization and projection learned from the training data. This was done for each \(d=1-20\), \(30\), \(40\), \(50\), \(100\), \(150\), \(200\), \(250\), \(300\), \(400\), \(500\), \(750\), \(1000\), \(1250\), \(1500\), \(1750\), \(2000\) and subsequently at intervals of \(1000\) until reaching the size of the layer. Figure 2 shows the accuracy by number of PCs for each model. For each model and layer, we also found the minimum number of PCs needed to attain at least \(90\%\) of the best accuracy attained at that layer and by that model, as well as the variance explained by those PCs. For example, if model X's highest attained accuracy on layer Y was \(96\%\), we found the minimum number of PCs for which model X attained \(96\%*0.9=86.4\%\) accuracy. This is shown in Figure 3. We considered the activations output by the initial max pooling layer and each of the eight residual blocks present in a ResNet-18 (see Figure 1).
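The per-layer procedure admits a compact implementation; the following is a minimal sketch rather than the exact code used here, assuming activation matrices `train_acts`/`test_acts` and labels `y_train`/`y_test` have already been collected for one residual block (e.g. via forward hooks), and assuming a linear kernel for the soft-margin SVM:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.svm import LinearSVC

def probe_layer(train_acts, y_train, test_acts, y_test, d):
    # Standardize and project using statistics fit on the training set only.
    scaler = StandardScaler().fit(train_acts)
    pca = PCA(n_components=d).fit(scaler.transform(train_acts))
    Xtr = pca.transform(scaler.transform(train_acts))
    Xte = pca.transform(scaler.transform(test_acts))
    scores = {}
    for name, clf in [("10-NN", KNeighborsClassifier(n_neighbors=10)),
                      ("NCC", NearestCentroid()),
                      ("SVM", LinearSVC())]:
        clf.fit(Xtr, y_train)
        scores[name] = clf.score(Xte, y_test)
    return scores

def min_pcs_for_90pct(accs_by_d):
    # accs_by_d: {d: accuracy}; smallest d reaching 90% of the best accuracy.
    target = 0.9 * max(accs_by_d.values())
    return min(d for d, a in accs_by_d.items() if a >= target)
```

Sweeping \(d\) over the grid above and applying `min_pcs_for_90pct` to the resulting accuracies yields the per-layer quantities plotted in Figure 3.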
## 4 Results
Looking at Figure 2, we see that up until block 4, each of our three models exhibits different behaviors as we increase the number of PCs, and that from block 5 onwards, all three models exhibit qualitatively identical behavior. Up until block 4, the k-NN model's (Figure 2(a)) accuracy increases up to \(\sim\)\(100\) PCs before decreasing significantly, a sign that it may be overfitting. On the other hand, the NCC model (Figure 2(b)) achieves maximum accuracy at around the same point, but then remains unchanged as more PCs are used. The SVM (Figure 2(c)) performs similarly to the k-NN for the first \(\sim\)\(100\) PCs, but continues to improve in accuracy as the number of PCs increases. It also achieves the best performance with the original activations (i.e. before projection) across all layers. All three models see steady increases in accuracy as we move deeper into the network. From block 5 onwards, all three models see a sharp, almost identical spike up to the true accuracy of the DNN between one and ten PCs, followed by no change in accuracy beyond that.
In Figure 3(a) we see a "hunchback" profile for the NCC model (and to a lesser degree, the k-NN model) that matches the "hunchback" ID profile that [2] observed using a non-linear dimensionality estimator. On the other hand, the SVM, the only affine-linear method we studied, exhibits a completely different profile starting very high and then monotonically decreasing. We observe that, just as in Figure 2, all three models exhibit identical profiles for blocks 5 through 8 and that, excluding block 5, they require
Figure 2: Performance of 10-NN (a), NCC (b), and SVM (c) after projecting activations from each residual block onto first \(d\) principal components.
only \(2\)-\(3\) PCs to attain \(90\%\) of the accuracy of the DNN. Figure 3(b) shows us that in the latter half of the network, only \(20\%\) to \(40\%\) of the variance is needed for accurate classification, and that this holds true across the entire network for the non-linear models.
## 5 Discussion and conclusion
While the performance of the k-NN and NCC models is determined by the first \(\sim\)\(100\) PCs, the SVM's performance increases with the number of PCs up to using the whole space. When considered along with the observations of intermediate neural collapse of [12], this could perhaps point to there being a "partially collapsed" subspace in each layer that determines the behavior of the k-NN and NCC models, while the SVM also accounts for information helpful to classification in the low variance subspaces. In particular, this means that the low-variance subspaces contain meaningful information and not just noise. Additionally, it is interesting to note that the SVM, an affine-linear model, is the most robust and best performing across all learned representations of the DNN. While all three models contribute to our intuitive understanding of how the representation is changing across the network, the SVM's accuracy suggests that applications using learned representations might benefit most from simpler models.
The behavior in blocks 5-8 can also be explained by neural collapse. That is, the network reaches a "fully collapsed state" at block 5 in which all activations are approximately equal to their class means, so all three classifiers perform equally well on very few PCs. Note that had we only trained one surrogate model, it would not be clear between which layers the network was "fully collapsing". However, with three models, Figure 2 and Figure 3 clearly show that this collapse occurs between the fourth and fifth residual blocks. Identifying this "collapsing" layer could be a useful tool for understanding mis-classified training data, as most of the information used by the DNN for classification is only present prior to that layer. The notion of intermediate neural collapse is further supported by the fact that the number of PCs needed for good classification with the SVM decreases monotonically across the network and that the variance necessary for accurate classification (by all models) decreases until block 5, which is where we see "full collapse".
Since k-NN, NCC, and PCA are all very well understood, the fact that these non-linear models display the same profile in Figure 3(a) as observed by [2] provides us with a more interpretable way to think about this "hunchbacked" behavior. Additionally, since the non-linear methods required only \(\sim\)\(100\) PCs or less throughout the network, this implies that the curved manifold underlying the activations most likely lives within a relatively low-dimensional subspace, which can be found using PCA.
Lastly, while it is common to select the number of PCs to keep using metrics such as accounting for \(90\%\) of variance--as seen in [2] and [11]--Figure 3(b) shows that this may not be the best approach for analyzing learned representations, as the majority of the variance is not necessary for classification.
In this paper, we study learned representations of a ResNet-18 using PCA and observe multiple interesting behaviors. We hope that our work provides new intuition and inspires more experiments
Figure 3: For each model: number of PCs (a) and the percentage of variance explained by those PCs (b) needed to attain \(90\%\) of maximum classification accuracy at each residual block.
into the behavior and structure of learned representations, as well as demonstrates that there may still be more for us to learn about these complex models using simple techniques.
## 6 Acknowledgements
AH, AE, and TC were partially supported by the Mathematics for Artificial Reasoning in Science (MARS) initiative via the Laboratory Directed Research and Development (LDRD) Program at PNNL. PS was partially supported by the U.S. Department of Energy, Advanced Scientific Computing Research program, under the Scalable, Efficient and Accelerated Causal Reasoning Operators, Graphs and Spikes for Earth and Embedded Systems (SEA-CROGS) project (Project No. 80278). PNNL is a multi-program national laboratory operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute under Contract No. DE-AC05-76RL0-1830.
|
2301.00675 | FlatENN: Train Flat for Enhanced Fault Tolerance of Quantized Deep
Neural Networks | Model compression via quantization and sparsity enhancement has gained an
immense interest to enable the deployment of deep neural networks (DNNs) in
resource-constrained edge environments. Although these techniques have shown
promising results in reducing the energy, latency and memory requirements of
the DNNs, their performance in non-ideal real-world settings (such as in the
presence of hardware faults) is yet to be completely understood. In this paper,
we investigate the impact of bit-flip and stuck-at faults on activation-sparse
quantized DNNs (QDNNs). We show that a high level of activation sparsity comes
at the cost of larger vulnerability to faults. For instance, activation-sparse
QDNNs exhibit up to 17.32% lower accuracy than the standard QDNNs. We also
establish that one of the major causes of the degraded accuracy is sharper
minima in the loss landscape for activation-sparse QDNNs, which makes them more
sensitive to perturbations in the weight values due to faults. Based on this
observation, we propose the mitigation of the impact of faults by employing a
sharpness-aware quantization (SAQ) training scheme. The activation-sparse and
standard QDNNs trained with SAQ have up to 36.71% and 24.76% higher inference
accuracy, respectively compared to their conventionally trained equivalents.
Moreover, we show that SAQ-trained activation-sparse QDNNs show better accuracy
in faulty settings than standard QDNNs trained conventionally. Thus the
proposed technique can be instrumental in achieving sparsity-related
energy/latency benefits without compromising on fault tolerance. | Akul Malhotra, Sumeet Kumar Gupta | 2022-12-29T06:06:14Z | http://arxiv.org/abs/2301.00675v1 | # FlatENN: Train Flat for Enhanced Fault Tolerance of Quantized Deep Neural Networks
###### Abstract
Model compression via quantization and sparsity enhancement has gained an immense interest to enable the deployment of deep neural networks (DNNs) in resource-constrained edge environments. Although these techniques have shown promising results in reducing the energy, latency and memory requirements of the DNNs, their performance in non-ideal real-world settings (such as in the presence of hardware faults) is yet to be completely understood. In this paper, we investigate the impact of bit-flip and stuck-at faults on _activation-sparse_ quantized DNNs (QDNNs). We show that a high level of activation sparsity comes at the cost of larger vulnerability to faults. For instance, activation-sparse QDNNs exhibit up to 11.13% lower accuracy than the standard QDNNs. We also establish that one of the major causes of the degraded accuracy is sharper minima in the loss landscape for activation-sparse QDNNs, which makes them more sensitive to perturbations in the weight values due to faults. Based on this observation, we propose the mitigation of the impact of faults by employing a sharpness-aware quantization (SAQ) training scheme. The activation-sparse and standard QDNNs trained with SAQ have up to 19.50% and 15.82% higher inference accuracy, respectively compared to their conventionally trained equivalents. Moreover, we show that SAQ-trained activation-sparse QDNNs show better accuracy in faulty settings than standard QDNNs trained conventionally. Thus the proposed technique can be instrumental in achieving sparsity-related energy/latency benefits without compromising on fault tolerance.
DNN Accelerators, Model compression, Flat minima, Fault Tolerance.
## I Introduction
The remarkable success of deep neural networks (DNNs) for tasks involving decision-making and sensory processing [1][2] has prompted the exploration of DNN accelerator designs for various applications [3]. However, the performance benefits of state-of-the-art DNNs come at the cost of large storage and computation requirements, introducing various design challenges, especially for energy- and memory-constrained edge applications. The need to reduce the size and computational complexity of DNNs has led to the emergence of Quantized Deep Neural networks (QDNNs). Various quantization techniques based on reduced bit precision of the weights and activations of DNNs have been proposed to achieve energy savings associated with storage, computation and communication, while alleviating the accuracy drop with respect to their full-precision counterparts [4][5].
Another popular method to reduce the resource requirements of DNNs is to sparsify the weights and activations by systematically removing the less important portions of the network. Various techniques to induce weight sparsity, also referred to as weight pruning, have led to significant compression of DNNs with little or no loss of accuracy [6][7]. Recently, approaches to increase and leverage the sparsity in the activations have also gained attention to reduce the memory requirements and improve inference speed [8][9]. Such techniques involving weight and activation sparsification can be used in conjunction with quantization to design QDNNs optimized for edge applications.
While sparsified QDNNs offer promising attributes for resource-constrained systems, their deployment in real world settings also needs to consider the non-ideal hardware behavior. DNN accelerator weights are generally stored on-chip in the memory which experiences various types of faults such as stuck-at and bit-flip faults. These faults corrupt the weight values and degrade system accuracy. Although the impact of faults in conventional DNN accelerators is well understood [10][11], the fault tolerance of weight/activation sparse DNNs has been studied only to a limited extent. Some works have explored the impact of faults and non-idealities on pruned
Figure 1: (a) Shows an activation-sparse (AS) QDNN in a faulty scenario. We show in our work that AS QDNNs are more prone to severe accuracy degradation than their standard counterparts. (b) describes the trade-off between the energy/latency benefits due to enhanced AS and reduced fault tolerance. We propose a fault mitigation strategy which enables AS QDNNs to retain fault tolerance by flattening its weight loss landscape using sharpness-aware quantization (SAQ) based training.
models, and have shown that pruned DNNs have a larger accuracy degradation in the presence of faults and non-idealities compared to their unpruned counterparts [12][13]. However, the understanding of the impact of faults on _activation-sparse_ DNNs is lacking. Moreover, techniques that can alleviate the adverse effect of faults on accuracy of activation-sparse QDNNs are needed to increase their computational robustness in non-ideal settings.
In this paper, we address these critical needs by (a) extensively analyzing the performance of activation-sparse QDNNs in the presence of stuck-at and bit-flip faults and (b) proposing a training technique to enhance fault tolerance. The key contributions of our work are as follows:
* We show that an increase in activation sparsity comes at the expense of reduced fault tolerance and lower inference accuracy in QDNNs. To the best of our knowledge, this is the first work exploring the performance of _activation-sparse QDNNs_ in the presence of faults.
* Using the weight loss landscape visualization method in [14], we establish that the higher sensitivity of activation-sparse QDNNs to faults is attributed to sharper minima in their loss landscape (compared to the standard QDNNs).
* Based on the above finding, we propose the use of sharpness-aware quantization (SAQ) [15] (which is a variant of sharpness-aware minimization (SAM) [16] designed for QDNNs) to mitigate the impact of faults on system accuracy.
* We show that the proposed method increases the inference accuracy of activation-sparse and standard QDNNs by up to 19.50% and 15.82% by reducing the sharpness of the weight loss landscape. This is achieved without compromising on the baseline software accuracy.
* We also show that SAQ-trained activation-sparse QDNNs have higher inference accuracy than their standard counterparts trained without SAQ. This enables the design of QDNNs which are both activation-sparse and fault tolerant, optimal for edge applications.
Figure 1 provides an overview of our work and contributions.
## II Background and Related Work
### _Activation sparsity in DNNs_
Activation sparsity refers to the prevalence of a large number of zero values in DNN activations. The most common activation function in DNNs is the rectified linear unit (ReLU), which outputs a zero for every negative input, leading to a high activation sparsity. As a result, sparse storage schemes can be used to store the activations, reducing memory requirements [17]. Also, the computations involving the zero-valued activations can be skipped, reducing the energy and latency of the DNNs [18]. Note that activation sparsity is dynamic in nature, which means that the number and location of the zero values vary from input to input. This property needs to be taken into account while exploiting activation sparsity to reduce the computation and memory requirements. Hence, when utilized properly, a large activation sparsity can be highly beneficial for designing resource-constrained DNN accelerators.
Due to this reason, algorithmic techniques to _enhance_ the activation sparsity have gained interest in recent times. These techniques are primarily based on explicitly adding a regularization term to the loss function which penalizes dense activations. For example, the work in [9] adds the \(L_{1}\) norm of the activations (\(||x_{l,n}||_{1}\)) to the original loss function to incentivize sparsity:
\[L_{reg}(x,w)=L_{0}(x,w)+\sum_{n=1}^{N}\sum_{l=1}^{L}\alpha_{l}||x_{l,n}||_{1} \tag{1}\]
Here, \(L_{reg}(x,w)\) is the new loss function to be minimized, \(L_{0}(x,w)\) is the original loss function value, \(L\) is the number of layers in the DNN, \(N\) is the batch size and \(\alpha_{l}\) is the regularization constant per layer. By using the sparsifying properties of \(L1\) regularization [19], the work in [9] demonstrates up to 60% increase in sparsity with negligible loss in inference accuracy for image classification. The work in [8] uses a Hoyer sparsity metric based regularizer and a variant of the ReLU activation function to boost the activation sparsity of DNNs. It should be noted that these techniques are not equivalent to "pruning" the activations, analogous to weight pruning [6][7]. While pruning permanently sets the parameters to zero for all inputs, the activation sparsification techniques do not remove any of the activations permanently. Rather, they ensure that a low percentage of activations are non-zero for all inputs _on an average_.
In this paper, we use the \(L_{1}\) regularization based approach to train activation-sparse QDNNs due to its intuitive appeal and ease of implementation. We will refer to the QDNNs trained without \(L1\) activation regularization as standard QDNNs and the ones trained with \(L_{1}\) activation regularization as activation-sparse (AS) QDNNs.
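As an illustration of Eq. (1), a PyTorch sketch of the \(L_{1}\) activation penalty is given below; the function and variable names are illustrative, and the penalized activations are assumed to have been collected from the network's ReLU outputs (e.g. with forward hooks):

```python
import torch
import torch.nn.functional as F

def l1_regularized_loss(logits, targets, activations, alphas):
    """Cross-entropy plus the per-layer L1 activation penalty of Eq. (1).

    activations: list of activation tensors x_l (batch dimension first)
    alphas: list of per-layer regularization constants alpha_l
    """
    loss = F.cross_entropy(logits, targets)
    for alpha_l, x_l in zip(alphas, activations):
        # Per-sample L1 norm of the layer's activations, averaged over the batch.
        loss = loss + alpha_l * x_l.abs().flatten(start_dim=1).sum(dim=1).mean()
    return loss
```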
### _Memory faults in DNN accelerators_
Aggressive scaling and the exploration of new memory technologies have made the study of memory faults more important than ever. All the popular memories (SRAMs, ReRAMs, FeFETs, etc.) used for DNN accelerators are impacted by faults. Faults corrupt a percentage of the stored values, leading to inaccurate computation of multiply-and-accumulate (the most common kernel in DNNs). This, in turn, degrades the inference accuracy. Two common memory faults which plague DNN accelerators are bit-flip faults and stuck-at faults. Stuck-at one (SA1) and stuck-at zero (SA0) faults occur when the value of the bitcell _unalterably_ gets fixed at '1' and '0', respectively. SA1 and SA0 faults are usually caused by fabrication defects [20] and limited endurance. Bit-flip faults occur when the bitcell value gets flipped (from '0' to '1' or vice versa). They can be permanent or transient and are caused by phenomena such as half-select read disturbance and alpha particle strikes.
Understanding the impact of memory faults on DNN performance has attracted attention in recent times. Works such
as [21] have shown that a single targeted bit flip can significantly degrade the performance of floating point DNNs. Even QDNNs have been shown to be quite vulnerable to stuck-at faults and bit-flip faults [10][11]. Some works have also analyzed the performance of weight sparse (pruned) networks in the presence of faults [13], circuit non-idealities and variations [12], showing a larger vulnerability of pruned DNNs to faults and other non-idealities than their unpruned counterparts. However, to the best of our knowledge, no work has analyzed the impact of faults on activation-sparse QDNNs.
With regard to the fault mitigation, a plethora of hardware and algorithmic solutions have been analyzed. Hardware-based techniques are generally based on adding redundancy by identifying and duplicating the critical portions of the DNN. [22] utilizes exhaustive fault injection to identify the vulnerable parts of the DNNs and replicates those portions to reduce the impact of faults. [23] adds a spare neuron to the DNN, which can be configured to act as any of the other neurons in case they turn faulty. From the algorithmic perspective, fault mitigation strategies include using error-correcting codes, fault-aware retraining and fault-tolerant training. Works like [24] use error-correcting output codes to reduce the sensitivity to variations and SAFs. Fault injection during training [25] is also promising but may not work well when the system is prone to more than one kind of faults. Fault-aware retraining is based on the online detection of faults and then retraining the non-faulty parameters to recover the accuracy of the DNN [26]. This method furnishes promising results but may be challenging to implement for edge applications in which online fault detection may not be supported.
In this work, we will extensively investigate the impact of stuck-at and bit-flip faults on the performance of standard and AS QDNNs. We will also propose a fault mitigation strategy which utilizes SAQ to flatten the weight loss landscape of the QDNN and make its parameters less sensitive to perturbations and hence, more fault-tolerant. Our proposed technique can be used in conjunction with hardware based fault tolerance techniques and provides protection from multiple kinds of faults, as will be discussed subsequently.
### _Sharpness-Aware Quantization (SAQ)_
The sharpness of the minima of the loss landscape that a DNN converges to during training has a key impact on its generalization capability [27][28]. Both theoretically and empirically, it has been shown that convergence to a flat minima improves generalization. Hence, sharpness-aware training schemes, such as sharpness-aware minimization (SAM) have been explored in [16]. SAM simultaneously minimizes the loss value and the sharpness of the weight loss landscape to learn optima which have uniformly low loss values in their neighbourhood. DNNs trained with SAM have been shown to achieve state-of-the-art accuracies for several benchmark datasets and models. However, SAM is not as effective in QDNNs as it does not account for the quantization of weights. Sharpness-aware quantization (SAQ) is a variant of SAM designed for QDNNs [15]. SAQ simultaneously minimizes the loss value and flattens the loss curvature near the minima by minimizing the loss function with adversarially perturbed quantized weights. This leads to better generalization and improved accuracy of QDNNs. In this work, we advance the application of SAQ to train fault-tolerant QDNNs, based on the intuition that QDNNs converged at flatter minima will be less sensitive to the weight perturbations caused by faults.
## III Impact of Faults on Activation-Sparse QDNNs
### _Experimental Framework_
To investigate the impact of faults on both regular and activation-sparse (AS) QDNNs, we utilize the following methodology. First, two 4-bit QDNNs, i.e. LeNet 5 [29] with the FashionMNIST (FMNIST) dataset [30] and ResNet 18 [31] with the CIFAR-10 dataset [32], are trained using the quantization framework of [5] in PyTorch. The AS QDNNs are trained by combining the \(L1\) activation regularization described in [9] and the quantization framework. The standard and AS QDNNs are trained to have nearly equal (\(\leq 0.5\%\)) inference accuracies to ensure a fair comparison.
Once the QDNNs are trained, two types of faults, viz. bit-flip faults and stuck-at faults (SA0 and SA1 separately), are injected. The faults are distributed randomly and uniformly across each of the models. Monte Carlo based fault injection experiments are performed for LeNet 5 and ResNet 18. The mean value of the inference accuracy is analyzed for different fault rates and types of faults (see Figure 4). We perform experiments at fault rates of 1% - 5% and 0.5% - 3% for the LeNet 5 and ResNet 18 QDNNs, respectively. Fault rates only up to 3% were used for our analysis of the ResNet 18 QDNNs because higher fault rates led to accuracy drops severe enough to render the QDNNs unusable.
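For illustration, the fault-injection step can be sketched as follows; the sketch operates on unsigned integer weight codes and assumes a plain binary layout of the 4-bit codes, which may differ from the encoding in an actual accelerator:

```python
import numpy as np

def inject_faults(codes, fault_rate, mode, n_bits=4, seed=0):
    """Randomly corrupt stored weight bits.

    codes: unsigned integer array of n_bits-wide weight codes
    mode: 'bitflip', 'sa0' (stuck-at-0) or 'sa1' (stuck-at-1)
    fault_rate: fraction of all stored bits that are faulty
    """
    rng = np.random.default_rng(seed)
    w = codes.copy()
    total_bits = w.size * n_bits
    faulty = rng.choice(total_bits, size=int(fault_rate * total_bits),
                        replace=False)
    for b in faulty:
        elem, bit = divmod(int(b), n_bits)
        mask = 1 << bit
        if mode == 'bitflip':
            w.flat[elem] ^= mask                          # invert the bit
        elif mode == 'sa1':
            w.flat[elem] |= mask                          # force the bit to 1
        else:                                             # 'sa0'
            w.flat[elem] &= ((1 << n_bits) - 1) ^ mask    # force the bit to 0
    return w
```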
To gain further insight into the difference in their fault tolerance, we visualize the weight loss landscape for LeNet 5 standard and AS QDNNs. Various works have utilized loss landscape visualization to enhance understanding of model performance and behaviour, showing correlation between generalization and the shape of minima for DNNs. To the best of our knowledge, this is the first work which investigates the shape of the minima with respect to the QDNN's fault tolerance. We use the normalization-based visualization technique described in [14] to generate the weight loss landscapes for LeNet 5 standard and AS QDNNs.
### _Results_
#### III-B1 Activation sparsity
To understand the impact of faults on activation sparse QDNNs, let us first discuss the gain in activation sparsity obtained due to \(L1\) activation regularization. Figures 2a and 2b show the percentage of zero activations (averaged over all layers and test inputs) for LeNet 5 and ResNet 18 QDNNs, respectively. For LeNet 5, training with \(L1\) regularization increases the activation sparsity by 95.34% (from 43% for standard training to 84% for \(L1\) regularization). For ResNet-18, the activation sparsity increases by 41.51% (from 53% to 75%). The difference in the activation sparsity increase between LeNet 5 and ResNet-18 is attributed
to the higher workload complexity for ResNet 18 (image classification on the CIFAR-10 dataset) compared to LeNet-5 (FMNIST dataset). As a result, ResNet-18 requires more nonzero activations on average to sustain high accuracy.
We further analyze how the sparsity of activation-sparse QDNNs is affected in faulty settings. Based on our fault injection analysis, we observe a negligible change in activation sparsity. In the worst case (5% and 3% bit flip fault rate respectively), LeNet 5 and ResNet 18 show a 4% and 2% reduction in sparsity, respectively. This signifies that the gain in activation sparsity is sustained even in the presence of faults.
#### III-B2 Impact of faults on Inference Accuracy
The dashed lines in Figures 4a-4f show the impact of bit-flip, SA0 and SA1 faults on standard and AS LeNet 5 and ResNet 18 QDNNs. We observe that the AS QDNNs have lower inference accuracy in the presence of faults compared to the standard QDNNs. For the bit-flip faults, the accuracy of LeNet 5 and ResNet 18 AS QDNNs is 2.40% to 11.13% and 0.52% to 8.00% lower (absolute difference in accuracy) than their equivalent standard QDNNs respectively. This trend holds for SA0 and SA1 faults, albeit with lower accuracy degradation than bit-flip faults. This is due to the fact that stuck-at faults can potentially get masked and hence, are less severe than bit-flip faults. These results signify that both LeNet 5 and ResNet 18 AS QDNNs have higher accuracy degradation in the presence of faults than their standard QDNN counterparts, implying that increased activation sparsity comes at the price of reduced tolerance to faults.
It should be noted that the accuracy degradation for ResNet 18 AS QDNNs (up to 8.00%) is more than that of LeNet 5 AS QDNNs (up to 5.56%) for the same fault rate (up to 3%). This can be attributed to the higher workload complexity for ResNet 18 on CIFAR-10 compared to LeNet-5 on FMNIST, making it more susceptible to accuracy degradation due to faults.
#### III-B3 Weight Loss Landscape visualization
Figure 3a shows the weight loss landscape of the standard and AS LeNet 5 QDNN (created using [14]). Intuitively, it is understood that a QDNN with a sharper minima would incur a larger change in its loss function value for some perturbation to its weights than a QDNN with a flat minima, as shown in Figure 3b and c. It can be seen that the AS QDNN converges at a sharper minima, which explains its degraded fault tolerance empirically observed in Section III. Thus, the increased activation sparsity due to the \(L1\) activation regularization is correlated with the increase in the sharpness of the minima and reduced fault tolerance.
To sum up, we conclude that additional activation sparsity, which reduces the latency and energy consumption of optimized DNN accelerators, comes at the cost of degraded performance in the presence of faults. Motivated by the need to enhance the fault tolerance of AS QDNNs, we now propose how the impact of faults can be alleviated by flattening the weight loss landscape via sharpness-aware quantization (SAQ) based training.
## IV Mitigation Strategy: SAQ
In this section, we propose and study the performance of our SAQ training-based fault mitigation strategy for reducing the accuracy degradation in both standard QDNNs and activation-sparse (AS) QDNNs in the presence of bit-flip and stuck-at faults. We perform our experiments using the framework described in Section III.
### _SAQ-based fault mitigation strategy_
Our fault mitigation technique is based on flattening the weight loss landscape of the QDNNs to make the weights less sensitive to faults. To implement and evaluate our proposed fault mitigation strategy, we utilize SAQ to train both standard and AS QDNNs.
SAQ concurrently minimizes the loss value and the loss sharpness by solving the following min-max optimization problem:
\[\min_{w}\max_{||\epsilon||_{2}\leq\rho}\left(L_{S}(Q_{w}(w,b)+\epsilon)-L_{S}(Q_{w}(w,b))\right)+L_{S}(Q_{w}(w,b))+\frac{\lambda}{2}||w||_{2}^{2} \tag{2}\]
The first term in the equation defines the sharpness metric, which is the maximum change in the loss value for some weight perturbation \(\epsilon\). The second term is the loss function itself and the third term is the standard \(L_{2}\) regularization term. Also, the perturbation \(\epsilon\) is an adversarial one and is chosen
Figure 3: (a) The weight loss landscape of the standard and activation sparse (AS) LeNet 5 QDNNs visualized using the technique in [14]. x and y are normalized random directions. (b) and (c) illustrate the impact of a fault (F) on the loss value in loss landscape with (b) sharp and (c) flat minima. The fault causes a larger change in the loss value in the sharp minima case.
Figure 2: The activation sparsities of (a) LeNet 5 and (b) ResNet 18 standard QDNNs and activation sparse (AS) QDNNs in both fault-free and faulty settings. The activation sparsity of LeNet 5 and ResNet 18 AS QDNNs is 95.34% and 41.51% higher than their standard counterparts, and is sustained in faulty environments.
such that it maximizes the sharpness term; the outer minimization then reduces this worst-case sharpness. \(\epsilon\) is given by the following equation, and its \(L_{2}\) norm is bounded by \(\rho\):
\[\epsilon\simeq\hat{\epsilon}=\rho\frac{\nabla_{Q_{w}(w,b)}L_{S}(Q_{w}(w,b))}{|| \nabla_{Q_{w}(w,b)}L_{S}(Q_{w}(w,b))||_{2}} \tag{3}\]
where \(\nabla_{Q_{w}(w,b)}L_{S}(Q_{w}(w,b))\) is the gradient of the loss function with respect to the quantized weights. \(\epsilon\) is estimated using a forward and backward pass through the QDNN.
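The two passes can be organized as a SAM-style update. The sketch below is a simplification under two stated assumptions: the model quantizes its weights inside `forward()`, and the perturbation is applied to the latent weights as a proxy for the quantized ones:

```python
import torch

def saq_update(model, loss_fn, batch, rho, optimizer):
    """One SAQ step: ascend to the adversarial perturbation of Eq. (3),
    then descend on the loss at the perturbed point."""
    x, y = batch
    # Pass 1: gradient used to build epsilon (Eq. (3)).
    loss_fn(model(x), y).backward()
    grads = {n: p.grad for n, p in model.named_parameters() if p.grad is not None}
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads.values()))
    eps = {n: rho * g / (norm + 1e-12) for n, g in grads.items()}
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in eps:
                p.add_(eps[n])             # move to the perturbed point
    optimizer.zero_grad()
    # Pass 2: gradient at the perturbed point drives the actual update.
    loss = loss_fn(model(x), y)
    loss.backward()
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in eps:
                p.sub_(eps[n])             # restore the original weights
    optimizer.step()                       # L2 term supplied via weight decay
    optimizer.zero_grad()
    return loss.item()
```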
It is important to note that one epoch of SAQ training takes the same time as two epochs of conventional training. Hence for a fair comparison, we train the SAQ-based QDNNs with half the number of epochs compared to the conventionally trained QDNNs. The \(\rho\) hyperparameter for the SAQ-based QDNNs is chosen such that the fault-free accuracy (fault rate = 0%) of the QDNNs is maximized. Using this framework, we analyse the effectiveness of SAQ in increasing the fault tolerance for both standard QDNNs and AS QDNNs.
### _Results_
#### IV-B1 Standard QDNNs
Figure 4 shows the performance of LeNet 5 and ResNet 18 standard QDNNs trained with SAQ compared to the performance of those conventionally trained in the presence of faults. It can be seen that SAQ-trained standard QDNNs have superior fault tolerance to their conventionally trained equivalents. For LeNet 5 with fault rates of 0% - 5%, the SAQ trained model has a 0.35% - 15.82%, 0.35% - 5.18% and 0.35% - 4.68% higher inference accuracy for bit-flip, SA0 and SA1 faults, respectively. Even at a fault rate of 0% (fault-free environment), SAQ-trained standard QDNNs outperform conventionally trained standard QDNNs, which is consistent with the results in [16] and [15]. Similarly, for ResNet 18, the SAQ-trained standard QDNNs display better inference accuracies compared to conventionally trained standard QDNNs in both fault-free and faulty settings. The accuracy improvements for bit-flip, SA0 and SA1 faults are 0.21% - 12.48%, 0.21% - 14.89% and 0.21% - 14.92%, respectively.
#### IV-B2 Activation-sparse (AS) QDNNs
The comparison of the inference accuracies of SAQ-trained and conventionally trained LeNet 5 and ResNet 18 AS QDNNs can be seen in Figure 4. The \(L1\) regularization constant for both the SAQ-trained and conventionally trained QDNNs is kept the same, so that the activation sparsity in both of them is nearly equal (values shown in Figures 2(a) and 2(b)).
In line with the trend for standard QDNNs, we observe that SAQ-trained AS QDNNs have higher inference accuracy than their conventionally trained counterparts. For LeNet 5, the SAQ trained AS QDNN has a 0.12% - 19.50%, 0.12% - 7.94% and 0.12% - 7.63% higher inference accuracy for bit-flip, SA0 and SA1 faults respectively. For ResNet 18, the SAQ-trained AS QDNN again displays superior inference accuracies in both fault-free and faulty settings, with 0.04% - 14.00%, 0.04% - 12.59% and 0.04% - 12.56% higher accuracy than the conventionally trained model for bit-flip, SA0 and SA1 faults respectively.
An interesting point to note here is that activation-sparse (AS) QDNNs trained with SAQ have better performance in the presence of faults than standard QDNNs without SAQ.
Figure 4: Comparison of the impact on classification accuracy for different fault scenarios for both SAQ trained and conventionally trained standard and activation-sparse (AS) QDNNs. It can be seen that AS QDNNs have lesser fault tolerance than their standard counterparts. Also, SAQ-trained QDNNs display higher fault tolerance than their conventionally trained equivalents.
For example, our experiments show that the SAQ-trained LeNet 5 AS QDNN has an 8.37% higher accuracy than the conventionally trained LeNet 5 standard QDNN at fault rate = 5%. Thus, SAQ-trained AS QDNNs have both the benefits of superior fault tolerance _and_ increased activation sparsity (which leads to lower latency and energy consumption), which makes them highly suitable for edge applications.
Lastly, we see that training with SAQ limits the accuracy degradation caused by enhancing the activation sparsity. For both LeNet 5 (fault rate = 5%) and ResNet 18 (fault rate = 3%), the respective AS QDNNs trained with SAQ have 7.45% and 6.49% lower inference accuracy than the SAQ-trained standard QDNNs, whereas the conventionally trained AS QDNNs have 11.13% and 8.00% lower accuracy. Thus, training with SAQ mitigates the adverse impact that activation sparsity augmentation has on the fault tolerance.
## V Conclusion
We explored the performance of activation-sparse (AS) QDNNs in the presence of faults and proposed a mitigation strategy to reduce the impact of faults on the accuracy of the QDNNs. Through our experiments, in which we uniformly inject different kinds of faults in LeNet 5 and ResNet 18 QDNNs, we show that the increase in activation sparsity comes at the price of degraded inference accuracy in the presence of faults, with activation-sparse QDNNs showing up to 11.13% lower accuracy than standard QDNNs. To understand the reason for this reduced fault tolerance, we visualize the weight loss landscape for the standard and AS QDNNs and show that the AS QDNN has a sharper minima leading to lower fault tolerance. Based on this observation, we propose the flattening of the loss landscape for enhanced fault tolerance by utilizing sharpness-aware quantization (SAQ) based training. Our results show that AS and standard QDNNs trained with SAQ have up to 19.50% and 15.82% higher accuracy than their conventionally trained counterparts. We also observe that SAQ-trained AS QDNNs have higher inference accuracy than conventionally trained standard QDNNs, enabling QDNNs which are not only activation-sparse but also fault tolerant. Thus, SAQ-trained QDNNs have enhanced fault tolerance, making them suitable for deployment in fault-prone edge scenarios.
|
2307.16366 | Multi-modal Graph Neural Network for Early Diagnosis of Alzheimer's
Disease from sMRI and PET Scans | In recent years, deep learning models have been applied to neuroimaging data
for early diagnosis of Alzheimer's disease (AD). Structural magnetic resonance
imaging (sMRI) and positron emission tomography (PET) images provide structural
and functional information about the brain, respectively. Combining these
features leads to improved performance than using a single modality alone in
building predictive models for AD diagnosis. However, current multi-modal
approaches in deep learning, based on sMRI and PET, are mostly limited to
convolutional neural networks, which do not facilitate integration of both
image and phenotypic information of subjects. We propose to use graph neural
networks (GNN) that are designed to deal with problems in non-Euclidean
domains. In this study, we demonstrate how brain networks can be created from
sMRI or PET images and be used in a population graph framework that can combine
phenotypic information with imaging features of these brain networks. Then, we
present a multi-modal GNN framework where each modality has its own branch of
GNN and a technique is proposed to combine the multi-modal data at both the
level of node vectors and adjacency matrices. Finally, we perform late fusion
to combine the preliminary decisions made in each branch and produce a final
prediction. As multi-modality data becomes available, multi-source and
multi-modal approaches are the trend in AD diagnosis. We conducted exploratory experiments
based on multi-modal imaging data combined with non-imaging phenotypic
information for AD diagnosis and analyzed the impact of phenotypic information
on diagnostic performance. Results from experiments demonstrated that our
proposed multi-modal approach improves performance for AD diagnosis, and this
study also provides a technical reference and supports the need for multivariate
multi-modal diagnosis methods. | Yanteng Zhanga, Xiaohai He, Yi Hao Chan, Qizhi Teng, Jagath C. Rajapakse | 2023-07-31T02:04:05Z | http://arxiv.org/abs/2307.16366v1 | # Multi-modal Graph Neural Network for Early Diagnosis of Alzheimer's Disease from sMRI and PET Scans
###### Abstract
In recent years, deep learning models have been applied to neuroimaging data for early diagnosis of Alzheimer's disease (AD). Structural magnetic resonance imaging (sMRI) and positron emission tomography (PET) images provide structural and functional information about the brain, respectively. Combining these features leads to better performance than using a single modality alone in building predictive models for AD diagnosis. However, current multi-modal approaches in deep learning, based on sMRI and PET, are mostly limited to convolutional neural networks, which do not facilitate integration of both image and phenotypic information of subjects. We propose to use graph neural networks (GNN) that are designed to deal with problems in non-Euclidean domains. In this study, we demonstrate how brain networks can be created from sMRI or PET images and be used in a population graph framework that can combine phenotypic information with imaging features of these brain networks. Then, we present a multi-modal GNN framework where each modality has its own branch of GNN and a technique is proposed to combine the multi-modal data at both the level of node vectors and adjacency matrices. Finally, we perform late fusion to combine the preliminary decisions made in each branch and produce a final prediction. As multi-modality data become available, multi-source and multi-modal approaches are the trend in AD diagnosis. We conducted exploratory experiments based on multi-modal imaging data combined with non-imaging phenotypic information for AD diagnosis and analyzed the impact of phenotypic information on diagnostic performance. Results from experiments demonstrated that our proposed multi-modal approach improves performance for AD diagnosis, and this study also provides a technical reference and supports the need for multivariate multi-modal diagnosis methods.
**Keywords:** Alzheimer's disease diagnosis, brain networks, graph neural networks, multi-modal method, sMRI, PET
## 1 Introduction
Alzheimer's disease (AD) is a degenerative disease of the central nervous system, largely manifested in the form of memory, language, cognition, and even emotional disorders. The state between normal control (NC) and AD is called mild cognitive impairment (MCI), and more than half of MCI cases progress to AD [1]. Still, no cure or preventive drugs have been successfully developed for AD, but early diagnosis of AD allows for early intervention measures that could delay the progression of the disease [2]. Therefore, early diagnosis and treatment of AD is of great significance to patients. At present, the clinical diagnosis of AD largely depends on a wide range of sources, including medical history, neurological assessments, behavioral tests, neuroimaging
scans, etc. [3]
Neuroimaging plays an important role in the identification of treatable causes of dementia and provides a stronger basis for the screening and early diagnosis of AD [4]. A variety of imaging methods, including structural magnetic resonance imaging (sMRI) and positron emission tomography (PET) techniques, are used for clinical image-assisted diagnosis, providing information about brain structure and function, respectively. sMRI helps us to understand changes in brain structure features (such as volume and shape) and can be used to predict AD progression [5]. On the other hand, 18-fluorodeoxyglucose PET (FDG-PET) is a molecular diagnostic method to visualize glucose metabolism. The functional analysis of PET images is carried out by studying the degree of glucose metabolism [6]. Both structural and functional information are important when studying biological systems. Studies performed on only one modality are unable to capture both structural and functional aspects of the brain. To make up for these shortcomings, several studies [7] have used multi-modal methods to enhance features, leading to better prediction performance.
In recent years, deep learning methods have been proposed for the analysis and diagnosis of diseases related to cognitive impairment [8]. Although convolutional neural networks (CNN) can learn image representations effectively, they do not fully consider the correlations between subjects. Furthermore, CNNs provide limited extensibility for the integration of multi-modal datasets; one major downside is the need for all inputs to have the same dimensions if each channel represents a different modality. On the other hand, graph neural networks (GNN), which extend classical CNNs to non-Euclidean spaces by using graph topology for feature propagation between neighboring nodes [9], afford greater flexibility for multi-modal integration. The graph convolutional network (GCN) is a type of GNN that works directly on graphs and takes advantage of relational information encoded in the graph structure [10]. For instance, when nodes are used to represent subjects (such as patients or healthy people), edges of the graph can store information about the similarity between nodes. GCNs can perform signal filtering and aggregate information from neighboring nodes to obtain improved feature representations, which in turn can be used for disease prediction and graph analysis [11][12]. The flexibility of choosing various combinations of data modalities for node vectors and adjacency matrices makes GCNs an ideal method for combining multi-modal images.
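For reference, a single GCN propagation step [10] takes the normalized form \(H^{\prime}=\sigma(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}HW)\) with \(\hat{A}=A+I\); the sketch below is a generic rendering of this rule, not the specific architecture proposed in this study:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: relu(D^-1/2 (A+I) D^-1/2 H W).

    A: (N, N) adjacency matrix of the population graph
    H: (N, F_in) node features; W: (F_in, F_out) learnable weights
    """
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU non-linearity
```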
GNN methods have achieved commendable performance for AD diagnosis. Ktena et al. [13] used graph similarity measures between brain connectivity maps of functional MRI (fMRI) to construct a multi-layer GCN filter for AD prediction. Song et al. [14] constructed a multi-class GCN classifier based on structural connectivity, performed multi-class disease classification of four disease stages across the AD spectrum, and verified that the GCN classifier outperforms the SVM on a disease prediction task. Zhang et al. [15] proposed a GCN using multi-modal brain networks from various diffusion weighted imaging sequences to predict clinical indicators and verified the effectiveness of integrating multi-modal brain networks in prediction tasks. Overall, these studies have shown the effectiveness of GCN in the diagnosis of brain related diseases.
In the above works that relied on GCN, fMRI and diffusion tensor imaging (DTI) data are usually used for graph analysis tasks [16][17]. There are established methods to construct individual brain networks from these modalities, but there is no clear way to do so directly from regions-of-interest (ROI) features for modalities such as sMRI and PET (which are 3-dimensional, as compared to 4-dimensional fMRI and DTI data). This makes it challenging to construct a GCN based on sMRI and PET images. To solve this problem, we adopt a method of generating brain networks [18] via brain ROI features to obtain individual features of subjects. Then, we draw inspiration from the flexibility of graph-based analysis by combining the use of graph nodes to represent the individual features of subjects with the use of a sparse population matrix built using phenotypic information. Finally, a population-based GNN is constructed for the early diagnosis of AD based on sMRI and PET images via the multi-modal GNN framework. We propose combining sMRI and PET information at the level of both the node vectors and the adjacency matrices. We show that our proposed approach leads to improvements in model performance for both AD detection and prediction of sMCI versus pMCI. Furthermore, we perform ablation studies on the demographic features used and find that incorporating the MMSE score has a great impact on AD detection.
The contributions of this study are as follows: (1) we adapt a technique to generate specific individual features from indirectly constructed brain networks based on sMRI and PET data, making it possible to use GNNs to model these data modalities; (2) the associations between individual features and subjects in the population are represented by combining imaging data with phenotypic data, and we discuss the effect of phenotypic information on GNN diagnostic performance; (3) to exploit the complementary relationships between imaging modalities that are ignored when graphs are constructed separately, the adjacency matrices built from different imaging features are fused to realize edge-weight sharing; (4) through a late fusion strategy, the AD diagnosis performance of our proposed multi-modal GNN framework is further improved.
## 2 Dataset and Materials
The data used in this work are from the Alzheimer's Disease Neuroimage Initiative (ADNI) database [19], which is publicly available (www.loni.ucla.edu/adni). We used the MPRAGE sMRI and FDG-PET (six 5-min frames 30-60 min post injection) from the ADNI-1 and ADNI-2 baseline for AD assessment, acquiring paired multimodal images from the same subject and from the closest acquisition date. The
detailed description of image protocols and acquisition can be found at adni-info.org. In addition to the AD and NC subjects, the obtained MCI data are divided into progressive MCI (pMCI) and stable MCI (sMCI): MCI subjects who developed AD within 3 years were classified as pMCI, and those who did not convert to AD were classified as sMCI. In total, the dataset has 792 subjects, including 215 AD, 246 NC, and 331 MCI (120 MCI converters (pMCI) and 211 MCI non-converters (sMCI)). The MMSE is a cognitive scale with scores mainly ranging from 10 to 30, with 30 indicating normal cognition and lower scores indicating more severe dementia. The APOE4 genotype data in our study include three genetic types, tagged as 0, 1, and 2. Table 1 shows the key demographic statistics for each category of subjects in this study.
We used conventional procedures for brain image preprocessing, correction, and affine registration; the data preprocessing workflow is shown in Fig. 1. Specifically, all sMRI data underwent anterior commissure-posterior commissure correction and affine alignment via SPM12. The N4 algorithm [20] was applied to correct the non-uniform tissue intensities, and affine alignment to MNI152 space [21] was performed to register the sMRI to the normalized template. PET images were co-registered to the corresponding N4 bias-corrected sMRI images using the rigid and non-linear co-registration routines of the Clinica platform [22][23]. The resolution of the processed images was 121\(\times\)145\(\times\)121. After that, we extracted 116 sMRI ROI features and 116 PET ROI features based on the AAL atlas [24]. For sMRI, the volumetric information of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) in the brain ROI regions was obtained. For PET, the standardized uptake value ratio (suvr) [25][26] in the brain ROI regions was obtained; the suvr is calculated relative to each individual brain region. We divided the data according to subject ID, with the first \(n\)-1 numbered subjects used for network training, half of the subjects after \(n\) used for validation, and the other half for testing. To avoid data leakage [27], no subject contributed brain images to more than one split in either modality dataset.
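The ROI feature extraction amounts to label-wise averaging over the registered volumes; the sketch below assumes an AAL label volume already aligned with the image, and the whole-brain reference used for the suvr normalization is an illustrative assumption (the exact reference region may differ):

```python
import numpy as np

def roi_means(volume, atlas, n_rois=116):
    """Mean intensity of `volume` inside each atlas ROI (labels 1..n_rois)."""
    feats = np.zeros(n_rois)
    for r in range(1, n_rois + 1):
        mask = atlas == r
        if mask.any():
            feats[r - 1] = volume[mask].mean()
    return feats

def pet_suvr(pet, atlas, n_rois=116):
    """ROI-wise suvr: per-ROI uptake divided by a whole-brain reference."""
    reference = pet[atlas > 0].mean()
    return roi_means(pet, atlas, n_rois) / reference
```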
## 3 Methods
In this study, we propose a multi-modal GNN architecture to perform early detection of AD. The architecture is composed of multiple branches of GCN, one for each data modality. Each node of the graph used in the GCNs represents the single-modality features of a single subject. The scores of all subjects are computed through the decision-making output of a softmax layer in each branch, which are then combined for the final prediction. To better capture the relationships between subjects with image features, we propose to construct a brain network for each subject from ROI features extracted from imaging data, instead of directly using the ROI features from the brain. The edges of the adjacency matrix are defined by combining features from the constructed brain networks with the phenotypic information of subjects, which reveals the similarity between the features of each subject. This bears some similarity to the population graph approach, which has become popular recently [12]. A key difference and novelty in our proposed approach is the application to multiple image modalities and the method of fusing graphs generated from each data modality.
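Such a population graph can be sketched as feature similarity gated by phenotypic agreement, in the spirit of [12]; the Gaussian kernel and the equality-based matching rule below are illustrative assumptions rather than the exact weighting adopted in our framework:

```python
import numpy as np
from scipy.spatial.distance import cdist

def population_adjacency(features, phenotypes, sigma=1.0):
    """Illustrative population graph adjacency.

    features: (N, F) individual feature vectors (graph node signals)
    phenotypes: (N, K) categorical phenotypic entries (e.g. gender, APOE4)
    """
    dist = cdist(features, features)                   # pairwise distances
    sim = np.exp(-dist ** 2 / (2 * sigma ** 2))        # feature similarity
    agree = np.zeros_like(sim)
    for k in range(phenotypes.shape[1]):
        col = phenotypes[:, k]
        agree += (col[:, None] == col[None, :]).astype(float)
    A = sim * agree                                    # gated edge weights
    np.fill_diagonal(A, 0.0)                           # no self-loops
    return A
```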
### Individual Feature extraction
In the population graph, each node represents the features of a subject. Because the raw ROI features from sMRI and PET images show only slight between-group differences, using them directly as the input matrix leads to suboptimal model performance. Instead, we construct a brain network to extract more contrasting features and thereby achieve better performance.
#### 3.1.1 Individual features based on PET
For PET ROI features, it is not obvious how to construct brain networks, since the ROI features form a vector (unlike fMRI, which is 4-dimensional data from which a correlation matrix can be built straightforwardly). Therefore, we construct a brain network [18] for every subject indirectly, by comparing the subject to a group of normal control subjects.
First, we calculate the weighting matrix based on the interregional effect-size differences in average uptake between an individual subject and the NC group mean. The connectivity \(E(i,j)\) of a subject between the \(i\)-th and the \(j\)-th ROI is expressed as:
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & Numbers & Gender(M/F) & Age(yrs) & MMSE(score) \\ AD & 215 & 126/89 & 74.9\(\pm\)7.7 & 23.21\(\pm\)2.13 \\ NC & 246 & 125/121 & 74.1\(\pm\)5.8 & 29.02\(\pm\)1.21 \\ sMCI & 211 & 125/86 & 72.5\(\pm\)7.4 & 28.01\(\pm\)0.71 \\ pMCI & 120 & 74/46 & 74.4\(\pm\)7.1 & 27.15\(\pm\)1.81 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The demographic information of dataset used in this study
Figure 1: The preprocessing pipeline of sMRI and PET scans. The raw brain images were aligned to MNI152 space, and ROI features were then extracted for each modality image using the AAL atlas.
\[E(i,j)=\frac{\left|(f_{i}-\overline{f}_{NC,i})-(f_{j}-\overline{f}_{NC,j})\right|}{s_{p}(i,j)} \tag{1}\]
where \(f_{i}\) represents the metabolic information (suvr) of a subject in the \(i\)-th ROI, and \(\overline{f}_{NC,i}\) represents the average metabolic information of all NC subjects in the \(i\)-th ROI. In formula (1), \(s_{p}(i,j)=\sqrt{(s_{i}^{2}+s_{j}^{2})/2}\), where \(s_{i}\) represents the standard deviation of the metabolic information of all NC subjects in the \(i\)-th ROI.
The correlation coefficient \(R(i,j)\) between the \(i\)-th and the \(j\)-th ROI is obtained via the Fisher transform [28]:
\[R(i,j)=\frac{\exp(2\times E(i,j))-1}{\exp(2\times E(i,j))+1} \tag{2}\]
the value of \(R(i,j)\) ranges between 0 and 1 and increases with \(E(i,j)\) (formula (2) is the hyperbolic tangent of \(E(i,j)\)), so the weight defined next decreases as the interregional difference grows. The weighting matrix \(W(i,j)\) of a single subject is expressed as:
\[W(i,j)=1-R(i,j) \tag{3}\]
The weighting matrix \(W\) of a subject is then multiplied element-wise by the connectivity matrix of the NC group to obtain the subject's connectivity between the \(i\)-th and the \(j\)-th ROI. The brain network matrix \(B(i,j)\) is expressed as:
\[B(i,j)=W(i,j)\odot M_{NC}(i,j) \tag{4}\]
where \(M_{NC}(i,j)\) is the value in row \(i\) and column \(j\) of the correlation coefficient matrix computed over the ROIs of all NC subjects, and \(\odot\) denotes the Hadamard product. A flow chart of the creation of the individual brain matrix and the feature extraction is shown in Fig. 2.
Finally, through feature extraction from the subject's brain matrix \(B\), we use the values of the upper triangle of \(B\) as the subject's individual features. For a subject with \(P\) brain ROIs, the connectivity matrix \(B\) has dimension \(P\times P\), so the dimension of the individual feature vector is \(P(P+1)/2\).
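The whole construction of Section 3.1.1 condenses into a few lines of NumPy; the sketch below is ours (names and shapes are illustrative, not from a released implementation), following Eqs. (1)-(4):

```python
import numpy as np

def pet_brain_network(f, nc_suvr, m_nc):
    """Build the individual brain network B (Eqs. 1-4) for one subject.

    f       : (P,) suvr values of the subject in P ROIs
    nc_suvr : (N_nc, P) suvr values of all NC subjects
    m_nc    : (P, P) ROI-wise correlation matrix of the NC group
    """
    f_nc = nc_suvr.mean(axis=0)          # mean NC uptake per ROI
    s = nc_suvr.std(axis=0)              # std of NC uptake per ROI
    d = f - f_nc                         # deviation from the NC mean
    s_p = np.sqrt((s[:, None] ** 2 + s[None, :] ** 2) / 2.0)   # pooled std
    e = np.abs(d[:, None] - d[None, :]) / s_p                  # Eq. (1)
    r = np.tanh(e)                       # Eq. (2): (exp(2E)-1)/(exp(2E)+1)
    w = 1.0 - r                          # Eq. (3)
    b = w * m_nc                         # Eq. (4): Hadamard product
    # Individual features: upper triangle (incl. diagonal), length P(P+1)/2
    iu = np.triu_indices(b.shape[0])
    return b, b[iu]
```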
#### 3.1.2 Individual features based on sMRI
The ROI features obtained from sMRI images include gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). We can therefore construct the corresponding individual brain networks separately from several kinds of ROI features (GM, WM, or brain matter (GM+WM)) extracted from sMRI, following the method of Section 3.1.1. Furthermore, we need to explore the specificity of the different features obtained in this way so as to provide more effective input features for the multi-modal GNN.
### Graph construction
The performance of a GCN is greatly influenced by how its adjacency matrix is constructed [29]. In this work, each node in the graph is represented by the feature vector of its corresponding subject, and the edge weights between nodes represent the similarities between subjects [9][12]. We define an undirected graph \(G(V,E,A)\) with a set of vertices \(v_{n}\in V\) (\(n\)=1,2,...,\(N\)), where \(N\) is the number of subjects. Each vertex \(v_{n}\) is associated with the subject's features from the upper triangle of its brain network, and each edge \((v_{n},v_{m})\in E\) carries a weight \(a_{nm}\in A\), where each element of \(A\) is an edge weight. \(A\) is a normalized adjacency matrix describing the connectivity of all vertices. The normalized graph Laplacian is defined as:
\[L=I-A=I-D^{-1/2}WD^{-1/2},\ \text{where}\ D=\mathrm{diag}\Big{(}\sum_{j}w_{ij}\Big{)}\] is the diagonal degree matrix. Generally, we obtain the adjacency matrix \(A\) by computing similarities. For a total of \(N\) subjects, each subject is represented as a node, and each node is assigned a label \(l\in\{0,\,1\}\) corresponding to its class. The two-layer GCN can be described by the formula:
\[Z=\text{softmax}(A\text{ReLU}(AXW^{(0)})W^{(1)}) \tag{5}\]
Figure 2: The flow chart of the individual brain network and feature extraction from a PET image. We obtain the ROI features (suvr) of the brain regions from PET, derive the mean and standard deviation of the ROI features over a group of healthy normal subjects, obtain the brain matrix by the computational process above, and finally flatten the upper triangular matrix into a one-dimensional individual feature vector.
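A minimal PyTorch rendering of Eq. (5) may look as follows; the function and argument names are ours, and weight initialization, dropout, and masking of the training nodes are omitted:

```python
import torch.nn.functional as F

def gcn_forward(a_norm, x, w0, w1):
    """Two-layer GCN of Eq. (5): Z = softmax(A ReLU(A X W0) W1).

    a_norm : (N, N) normalized adjacency matrix A
    x      : (N, F_in) node features (one row per subject)
    w0     : (F_in, H) first-layer weights; w1 : (H, C) second-layer weights
    """
    h = F.relu(a_norm @ x @ w0)   # first graph convolution + ReLU
    z = a_norm @ h @ w1           # second graph convolution
    return F.softmax(z, dim=1)    # per-node class probabilities
```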
#### 3.2.1 Edge connections and weights
Edge connections and edge weights are key elements of a GCN, as they determine which nodes participate in each convolution and with what coefficients. Edge weights are calculated in different ways in various studies [12][15]. In this work, we incorporate non-imaging information into the graph construction, establishing edge connections that assign larger edge weights between similar subjects.
In graph theory, the initial similarity can be used to construct the edge weights for convolution filtering. We estimate the similarity \(S\) between subjects \(v\) and \(u\) by calculating the correlation distance. The \(S\) is defined as:
\[S(F_{v},F_{u})=\exp\left(-\frac{\left[\rho\left(F_{v},F_{u}\right)\right]^{2}}{2\sigma^{2}}\right) \tag{6}\]
where \(\rho\) is the correlation distance, \(\sigma\) is the width of the kernel, and \(F_{v}\) and \(F_{u}\) are the feature vectors of the subject \(v\) and subject \(u\).
To this end, we further consider the non-imaging information (such as gender, gene and MMSE score, etc.) to construct an adjacency matrix \(A(v,u)\), which is calculated as:
\[A(v,u)=S(F_{v},F_{u})\times(r_{G}(G_{v},G_{u})+r_{P}(P_{v},P_{u})+r_{M}(M_{v},M_{u})) \tag{7}\]
In formula (7), the \(r_{G}\), \(r_{P}\) and \(r_{M}\) are defined as:
\[r_{G}(G_{v},G_{u})=\begin{cases}1,G_{u}=G_{v}\\ 0,G_{u}\neq G_{v}\end{cases} \tag{8}\]
\[r_{P}(P_{v},P_{u})=\begin{cases}1,P_{v}=P_{u}\\ 0,P_{v}\neq P_{u}\end{cases} \tag{9}\]
\[r_{M}(M_{v},M_{u})=\begin{cases}1,\left|M_{u}-M_{v}\right|\leq 1\\ 0,\;\;\text{otherwise}\end{cases} \tag{10}\]
where \(r_{G}\) measures agreement in gender, \(r_{P}\) agreement in apoe4 status, and \(r_{M}\) closeness of the MMSE scores. When two subjects have the same gender, the same apoe4 genotype, or similar MMSE scores, the corresponding indicator equals 1 and strengthens the edge weight; otherwise it is 0, as shown in formulas (8), (9) and (10).
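The edge-weight construction of Eqs. (6)-(10) can be sketched as follows, assuming SciPy's correlation distance for \(\rho\); the names and the O(\(N^{2}\)) loop are illustrative:

```python
import numpy as np
from scipy.spatial.distance import correlation  # correlation distance

def adjacency(features, gender, apoe4, mmse, sigma=1.0):
    """Adjacency matrix of Eq. (7): similarity (Eq. 6) gated by the
    phenotypic indicators of Eqs. (8)-(10). Variable names are ours."""
    n = len(features)
    a = np.zeros((n, n))
    for v in range(n):
        for u in range(v + 1, n):
            rho = correlation(features[v], features[u])      # rho(F_v, F_u)
            s = np.exp(-rho ** 2 / (2.0 * sigma ** 2))       # Eq. (6)
            r = ((gender[v] == gender[u])                    # Eq. (8)
                 + (apoe4[v] == apoe4[u])                    # Eq. (9)
                 + (abs(mmse[v] - mmse[u]) <= 1))            # Eq. (10)
            a[v, u] = a[u, v] = s * r                        # Eq. (7)
    return a
```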
The construction of the adjacency matrix combined with phenotypic information is shown in Fig. 3. The above approach of constructing the adjacency matrix \(A\) works for a single modality, but it does not describe how to deal with multiple modalities. We address this issue in the next two subsections.
#### 3.2.2 Integration mechanism for adjacency matrices
Due to the complementarity of structural and functional information, we further construct an integrated adjacency matrix that combines the adjacency matrices of the individual modalities. Using the above construction of the adjacency matrix \(A(v,u)\), we obtain the adjacency matrix \(A_{s}\) based on sMRI features and the adjacency matrix \(A_{f}\) based on PET features; the integrated adjacency matrix \(A_{im}\) is then calculated by the Hadamard product:
\[A_{im}=A_{s}\odot A_{f} \tag{11}\]
#### 3.2.3 Fusion mechanism for node vectors
According to (6), we estimate the similarity \(S\) between subjects \(v\) and \(u\) by calculating the correlation distance between feature vectors. To fuse the two modality features into a shared adjacency matrix, we concatenate the individual features of the two images and compute the correlation on the concatenation. Then \(S\) can be expressed as:
\[S(F_{vc},F_{uc})=\exp\left(-\frac{\left[\rho\left(F_{vc},F_{uc}\right)\right]^{2}}{2\sigma^{2}}\right) \tag{12}\]
where \(F_{vc}\) and \(F_{uc}\) are the concatenated feature vectors of the two modality images of subject \(v\) and subject \(u\). The adjacency matrix \(A_{fm}\) based on the fusion mechanism can then be calculated using (7).
#### 3.2.4 Integrated fusion mechanism
Through the above two mechanisms, we further construct a shared adjacency matrix that fuses the adjacency matrices of each modality. Based on the two adjacency matrices \(A_{im}\) and \(A_{fm}\) constructed above, the integrated fusion adjacency matrix \(A_{if}\) is calculated by the Hadamard product:
\[A_{if}=A_{im}\odot A_{fm} \tag{13}\]
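Under the same assumptions, the three shared-adjacency mechanisms of Sections 3.2.2-3.2.4 reduce to element-wise products plus one extra similarity computation; the sketch below reuses the `adjacency` helper sketched earlier:

```python
import numpy as np

def multimodal_adjacency(a_s, a_f, feats_smri, feats_pet, gender, apoe4, mmse):
    """Shared adjacency matrices of Sections 3.2.2-3.2.4 (illustrative).

    a_s, a_f : single-modality adjacency matrices built from sMRI and PET
    """
    a_im = a_s * a_f                                   # Eq. (11): integration
    feats_cat = np.concatenate([feats_smri, feats_pet], axis=1)
    # Eq. (12) + Eq. (7): similarity on concatenated features, reusing the
    # adjacency() helper sketched in Section 3.2.1 above
    a_fm = adjacency(feats_cat, gender, apoe4, mmse)
    a_if = a_im * a_fm                                 # Eq. (13): integrated fusion
    return a_im, a_fm, a_if
```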
### Chebyshev GCN
In spectral GCNs, graph convolutions are defined in the graph Fourier domain and approximated by polynomial expansions to obtain efficient filtering. The spectral-domain convolution on graphs [9] can be expressed as the operation of a signal \(x\) with the filter \(g_{\theta}=\text{diag}(\theta)\):
\[g_{\theta}*x=Ug_{\theta}(\Lambda)U^{T}x\approx\sum_{k=0}^{K}\theta_{k}T_{k}(\tilde{L})x \tag{14}\]
where \(U\) is the eigenvector matrix and calculated by the formula \(L=I_{N}-D^{-1/2}AD^{-1/2}=U\Lambda U^{T}\). \(I_{N}\) and \(D\) is the identity matrix and diagonal degree matrix, respectively. The truncated expansion of Chebyshev polynomials is well approximated to \(g_{\theta}(\Lambda)\) of \(K\)-order [30]. \(\theta_{k}\) is the vector of Chebyshev coefficients, \(T_{k}\) is the Chebyshev polynomial
Figure 3: The flow chart of constructing the adjacency matrix combined with phenotypic information weights.
function, and \(\tilde{L}=\frac{2}{\lambda_{\max}}L-I_{N}\), where \(\lambda_{\max}\) is the largest eigenvalue of \(L\). Different filtering effects can be obtained by adjusting the polynomial order \(K\); the best performance is achieved when \(K\) is set to 3 or 4 [12].
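A sketch of the \(K\)-order Chebyshev filtering of Eq. (14), using the standard recurrence \(T_{k}(x)=2xT_{k-1}(x)-T_{k-2}(x)\); the function is ours and omits bias terms:

```python
def chebyshev_conv(l_tilde, x, theta):
    """K-order Chebyshev graph filtering of Eq. (14):
    y = sum_{k=0..K} theta_k T_k(L~) x, with the recurrence
    T_0 = I, T_1 = L~, T_k = 2 L~ T_{k-1} - T_{k-2}.

    l_tilde : (N, N) rescaled Laplacian L~ = 2L/lambda_max - I (torch tensor)
    x       : (N, F) input signal on the graph nodes
    theta   : list of K+1 learnable (F, F_out) weight matrices, K >= 1
    """
    t_prev, t_cur = x, l_tilde @ x
    out = t_prev @ theta[0] + t_cur @ theta[1]
    for theta_k in theta[2:]:
        t_prev, t_cur = t_cur, 2 * (l_tilde @ t_cur) - t_prev  # recurrence
        out = out + t_cur @ theta_k
    return out
```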
### Multi-modal network architecture
Our multi-modal network framework consists of two branches of Chebyshev GCN (CGCN), one per modality. Each branch is a two-layer CGCN whose hidden layer is activated by a ReLU function; the hidden layer has \(L\) units (\(L\)=32). In each branch, the output layer is followed by a softmax function. The trained GNN labels the unlabeled nodes of the test set and outputs scores via the softmax. We apply dropout after the ReLU activation of each layer to reduce overfitting. The softmax function for the \(N\)-class probability output of a branch network is:
\[\text{softmax}(z_{j})=\frac{\exp(z_{j})}{\sum_{k=1}^{N}\exp(z_{k})} \tag{15}\]
where \(z_{j}\) in (15) represents the \(j\)-th value of the network output vector, \(N\) is the number of categories, and the resulting \(\text{softmax}(z_{j})\) value lies in (0, 1).
After the softmax function in each branch, we obtain the final prediction by decision fusion of the softmax output probabilities:
\[p_{j}=\frac{1}{2}\left(\frac{\exp(z_{j0})}{\sum_{k=1}^{N}\exp(z_{k0})}+\frac{\exp(z_{j1})}{\sum_{k=1}^{N}\exp(z_{k1})}\right) \tag{16}\]
where \(z_{j0}\) is the \(j\)-th output of the first branch, \(z_{j1}\) is that of the second branch, and \(p_{j}\) is the fused probability of class \(j\).
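The late decision fusion of Eq. (16) is then simply the average of the two branch softmax outputs; a minimal sketch (names ours):

```python
import torch.nn.functional as F

def late_fusion(z0, z1):
    """Decision fusion of Eq. (16): average the softmax outputs of the
    two modality branches to obtain the fused class probabilities."""
    return 0.5 * (F.softmax(z0, dim=1) + F.softmax(z1, dim=1))
```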
Our multi-modal network architecture is illustrated in Fig. 4. In the population-based GNN, the training set is a labeled subset of graph nodes and the trained GNN produces classification labels for the unlabeled nodes in the test set.
## 4 Experiments and Results
### Experimental design
Our models were implemented in PyTorch and ran on a Windows x86-64 computer equipped with an Intel(R) Xeon(R) CPU @3.60GHz, an NVIDIA Quadro P620 and 32GB of memory. In the experiments, the training, validation and test sets were obtained by partitioning the dataset in proportions of 70%, 15% and 15%. In our population-based GNN, the training and validation sets were labeled while the test set was unlabeled with a mask. The labels of the test set are unknown during network optimization; the test set is predicted after training and compared with the correct labels to derive the performance measures. Owing to the limited access to medical images compared with other fields, especially in the case of multi-modality data for the same subject, current studies [16][27] are mainly based on ADNI, the most widely used database worldwide. However, ADNI does not specify a fixed test set. To show that our method generalizes, our experiments were conducted on four different sub-datasets (whose test sets do not share subjects) and the average accuracy is reported as the final diagnostic result. Further, we selected two sub-datasets to evaluate the stability of the model with a five-fold cross-validation strategy.
The hyperparameters were determined empirically as follows: dropout rate was 0.5, weight decay was set to 5e-4, and learning rate was set to 1e-3. The order \(K\) in CGCN was
Figure 4: The network architecture of our multi-modal GNN method. We incorporate phenotypic information into the graph, where each node represents a subject and is associated with the subject's imaging features. Each graph network has two layers, and the final decision is derived by a late fusion mechanism; each score in a branch network corresponds to the diagnostic result of the corresponding subject.
set to 3. The network was trained for 100 epochs for convergence, and we compared with the GCN architecture trained for 300 epochs. The cross-entropy loss function was used to optimize the model parameters. For a fair comparison of the validity of the proposed method, the training hyperparameters were fixed across all methods. In addition to the AD vs. NC classification for disease exclusion, the prediction of MCI conversion is of great importance for the early treatment of AD patients. Therefore, we conducted the classification tasks of AD vs. NC and sMCI vs. pMCI, and evaluated the performance with accuracy (ACC), sensitivity (SEN), specificity (SPE) and the area under the curve (AUC).
We organized the experimental section as follows. First, we carried out experiments with the GCN model on single-modality images (i.e., separate experiments for sMRI and PET) and compared the diagnostic effectiveness of brain network features constructed from several kinds of brain features. Second, we experimented with several multi-modal methods and compared them with the single-modality approach. Finally, we carried out explorative experiments constructing the adjacency matrix combined with gender, apoe4, MMSE and other information; using phenotypic information can further improve the diagnostic performance of the GNN, and we discuss the impact of phenotypic information on AD diagnosis. In addition, we compared with state-of-the-art methods to demonstrate the effectiveness of our proposed method.
### Experimental results and discussion
First, the GCN model based on sMRI features was used for ablation experiments to explore the impact of sMRI features on AD diagnosis. Features obtained through brain networks (BN) and directly extracted ROI features were used as GCN inputs in this part. In sMRI, cognitive impairment is mainly related to atrophy of GM, WM and brain structure [31][32]. We therefore first obtained individual features as described in Section 3.1.2, based on three kinds of ROI features (GM, WM and brain matter) separately, as well as ROI features extracted directly from MRI as in [12], and then carried out classification experiments with the GCN model. The results are summarized in Table 2. Compared with the GM ROI features from MRI, the individual features obtained through the brain network are more specific, allowing the GCN model to achieve better classification performance with a considerable improvement in accuracy. Moreover, the experimental results show that models using GM features produce the best result in AD vs. NC classification, with an average accuracy of 87.71%, while models using brain matter features produce the best result in sMCI vs. pMCI classification, with an average accuracy of 72.40%. This also reflects that gray matter biomarkers are more specific in AD, whereas in the MCI stage gray matter atrophy is less obvious than in AD.
Secondly, we carried out similar experiments based on PET metabolic features. The results of the ablation GCN
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline \hline Features & \multicolumn{4}{c|}{AD vs. NC} & \multicolumn{4}{c}{sMCI vs. pMCI} \\ \hline & ACC & SEN & SPE & AUC & ACC & SEN & SPE & AUC \\ GM ROIs & 81.62 & 78.74 & 83.83 & 81.28 & 65.00 & 31.18 & 85.99 & 59.42 \\ GM+WM BN & 85.15 & 81.17 & 88.51 & 84.84 & 72.40 & 48.15 & 87.47 & 67.81 \\ WM BN & 77.14 & 69.56 & 83.62 & 76.59 & 67.20 & 36.74 & 87.50 & 62.12 \\ GM BN & 87.71 & 83.95 & 91.43 & 87.63 & 71.20 & 41.33 & 88.40 & 64.87 \\ \hline PET ROIs & 82.45 & 74.22 & 87.81 & 82.02 & 66.23 & 42.22 & 81.56 & 61.89 \\ PET BN & 88.00 & 86.54 & 89.37 & 87.95 & 73.20 & 53.26 & 82.96 & 68.06 \\ \hline \hline \end{tabular}
* BN: brain network features (Section 3.1).
\end{table}
Table 2: The classification results of GCN model based on various imaging information from sMRI and PET
Figure 5: Differences of the individual brain network matrices between AD and NC subjects. The matrices from left to right are based on brain matter features, WM features, GM features and PET features, respectively.
experiments based on PET ROI features and on the features from the brain network (Section 3.1.1) are also shown in Table 2. From the classification results in Table 2 for sMRI and PET features, constructing an individual brain network from PET features yields the best performance. Meanwhile, the accuracy of diagnosis based on brain network features is much higher than that based on raw ROI features. Brain ROI feature methods are often based on traditional machine learning such as support vector machines (SVM), which require effective feature selection to achieve good performance [35][36]; this demands additional processing and is usually effective only on smaller samples. Our feature acquisition from brain networks shows clear advantages in terms of both performance and efficiency.
Furthermore, the value of using GM features can be demonstrated by visualizing and comparing the brain matrices built from various imaging features. As seen in Fig. 5, among the structural image features, the difference between AD and NC is greatest for GM. PET also shows obvious differences between AD and NC, even larger than those seen in GM. This might explain why models using PET perform better than models using sMRI features. This result is consistent with established clinical knowledge: PET can detect the functional brain changes and specific pathologies of AD at an earlier stage than sMRI.
Based on the correlation coefficients of brain regions in the brain networks constructed from sMRI and PET features, we selected five key regions for AD diagnosis from each modality. Fig. 6 visualizes these key brain ROIs for AD diagnosis based on sMRI and PET. For sMRI, the ROIs are Temporal_Pole_Sup, Rectus, Lingual, Hippocampus and Amygdala; for PET, they are Frontal_Sup, Frontal_Sup_Medial, Occipital_Mid, Occipital_Inf and Temporal_Mid. Several of these brain regions are concentrated in memory-related areas, which have been correlated with cognitive disorders in clinical studies [35][36].
In the subsequent experiments on multi-modal datasets, we therefore focus on models that use GM features as the structural imaging modality. Only the similarity is used to construct the adjacency matrix, and we use both types of GNN model, GCN and CGCN, for comparative tests. To test whether the combination of multi-modal imaging features can improve diagnostic performance, we experimented with several multi-modal mechanisms. We first create a baseline in which the two GCN branches are simply combined, called dual GCN (DGCN), i.e., each branch uses its own adjacency matrix. We then designed multi-modal fusion techniques that construct a shared adjacency matrix in three different ways: integration DGCN (IDGCN) from Section 3.2.2, fusion DGCN (FDGCN) from Section 3.2.3 and integrated
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline \hline Methods & \multicolumn{4}{c|}{AD vs. NC} & \multicolumn{4}{c}{sMCI vs. pMCI} \\ \hline & ACC & SEN & SPE & AUC & ACC & SEN & SPE & AUC \\ DCGCN & 90.00 & 88.60 & 91.12 & 89.86 & 74.50 & 48.29 & 87.67 & 65.01 \\ IDCGCN & 90.72 & 89.41 & 91.86 & 90.63 & 75.00 & 49.43 & 88.02 & 68.72 \\ FDCGCN & 91.07 & 90.15 & 91.86 & 91.00 & 75.50 & 50.43 & 88.02 & 69.22 \\ IFDCGCN & 91.07 & 90.22 & 91.87 & 91.04 & 75.50 & 49.90 & 88.70 & 69.30 \\ \hline \hline \end{tabular}
* The methods in this table are based on CGCN (Chebyshev GCN): DCGCN denotes Dual CGCN, IDCGCN Integration Dual CGCN, FDCGCN Fusion Dual CGCN, and IFDCGCN Integrated Fusion Dual CGCN.
\end{table}
Table 4: The classification results of several multi-modal methods based on Chebyshev GCN model
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline \hline Methods & \multicolumn{4}{c|}{AD vs. NC} & \multicolumn{4}{c}{sMCI vs. pMCI} \\ \hline & ACC & SEN & SPE & AUC & ACC & SEN & SPE & AUC \\ DGCN & 89.65 & 87.94 & 91.13 & 89.53 & 73.50 & 51.83 & 85.99 & 68.91 \\ IDGCN & 90.36 & 86.47 & 93.87 & 90.17 & 74.00 & 51.03 & 87.78 & 68.67 \\ FDGCN & 90.71 & 87.94 & 93.12 & 90.55 & 75.00 & 52.05 & 85.34 & 68.69 \\ IFDGCN & 91.07 & 88.72 & 93.25 & 90.98 & 75.50 & 50.25 & 88.13 & 68.45 \\ \hline \hline \end{tabular}
* The methods in this table are based on GCN, DGCN means Dual GCN, IDGCN means Integration Dual GCN, FDGCN means Fusion Dual GCN, IFDGCN means Integrated Fusion Dual GCN.
\end{table}
Table 3: The classification results of several multi-modal methods based on GCN model
Figure 6: Visualization of the key ROIs in brain for AD diagnosis. In the top row we show the key ROIs in the coronal and sagittal views of brain MRI image. In the bottom row we show the key ROIs in the coronal and sagittal views of brain PET image.
fusion DGCN (IFDGCN) from Section 3.2.4. The results based on GCN are shown in Table 3, and the results based on CGCN in Table 4. In the binary classification of AD vs. NC, IFDCGCN achieves the best accuracy of 91.07%, with a sensitivity of 90.22%, a specificity of 91.87% and an AUC of 91.04. In the binary classification of sMCI vs. pMCI, IFDCGCN achieves the best accuracy of 75.50%, and its sensitivity of 49.90%, specificity of 88.70% and AUC of 69.30 are also improved compared with the single-modality methods.
In the population-based GNN method for AD diagnosis, an effective expression of individual features leads to better prediction performance. The results in Table 3 and Table 4 demonstrate that our proposed multi-modal fusion framework can further improve the accuracy of AD diagnosis. The effectiveness of our multi-modal method can be attributed to the following points. First, the late fusion mechanism clearly helps to improve the accuracy of the model prediction: late fusion combines the decisions of two independent GNN branches, which is consistently observed for both GCN and CGCN models. CGCN performs better than GCN in accuracy and also has the advantage of stability, as the standard deviation of its results is smaller. In Fig. 7, we show the training, loss and validation curves of the IFDGCN and IFDCGCN multi-modal methods for AD prediction. As seen from the validation curves, the accuracy of the fused decision is higher than that of the two separate branches once the network has trained for enough epochs. Judging from the training and validation curves over the epochs, CGCN converges faster and its prediction is more stable. Second, the multi-modal mechanisms we propose are more effective than the simple late-fusion approach of DGCN, which shows the value of a shared adjacency matrix constructed from multi-modal data. The way adjacency matrices are constructed has a direct impact on the performance of the GNN models.
Fig. 8 summarizes the comparison between single-modality and multi-modal methods in a box plot of classification accuracy. While the multi-modal methods are clearly superior to the single-modality approaches, we note that the choice of integration mechanism does not lead to large differences in model performance. In addition, among the GNN models the CGCN-based multi-modal method performs better, and the stability of CGCN is much better than that of GCN. Overall, our proposed IFDCGCN produces the best results in terms of classification performance and stability.
Fig. 8: The accuracy of classification results based on single-modality and multi-modal methods.
Fig. 7: From left to right: the training curve, loss curve and validation curve of the IFDGCN (top) and IFDCGCN (bottom) multi-modal methods, respectively.
In addition, some non-imaging information has been found to be associated with AD in several studies [37][38][39], such as genes, gender, age and MMSE, and provides important references for the diagnosis of cognitive impairment. With the accumulation and increasing richness of imaging and non-imaging multi-source data, fusing multi-source and multi-modal data is the trend for accurate AD evaluation in the future. Therefore, this work further fuses non-imaging information into the multi-modal GNN framework to achieve a more accurate AD diagnosis.
The results of the ablation experiments on the AD vs. NC and sMCI vs. pMCI diagnostic tasks were further explored based on our validated multi-modal GNN (IFDCGCN) incorporating various phenotypic information, as shown in Table 5, and Fig. 9 summarizes the corresponding comparisons in a box plot. We found that combining gender, gene and MMSE score information into the adjacency matrix benefits or improves the diagnostic performance of the model; in particular, combining MMSE alone (in formula (7), \(r_{G}\)=0 and \(r_{P}\)=0) yields a very significant improvement in AD vs. NC diagnosis. The best results in sMCI vs. pMCI diagnosis were obtained when MMSE, gender and gene information were all used for weighting. MMSE-based weighting is more pronounced between AD and NC subjects, whereas sMCI and pMCI patients both belong to the MCI category and have close MMSE scores, almost all in the range of 27-29; this makes the inter-subject weights less discriminative, so the improvement in GNN classification performance is limited, although combining several kinds of phenotypic information still yields a better prediction of MCI conversion. In contrast, combining age information (an age difference within 1 year weighted as 1, otherwise 0) in our GNN approach did not yield any improvement and even decreased the disease prediction performance. In the graph, the weights of non-imaging information are associated with the imaging features, so combining effective phenotypic information allows a few subjects with marginal imaging features to be judged correctly. In this part of the explorative experiments, gender information, for example, also plays a role in the construction of the adjacency matrix and in the performance of the GCN, which is consistent with the results of [12]. Some phenotypic information is easily accessible in clinical practice, which is an advantage for GNN-based AD diagnosis methods. For age information, the effect is not ideal; we infer that a direct correlation with the subjects' features is difficult to find because of the wide age distribution. However, age information is helpful for AD diagnosis in clinical practice, which warrants further research in the future.
On two fixed sub-test sets, our IFDCGCN method combined with the MMSE information was further evaluated with 5-fold cross-validation on the AD vs. NC task; the results are shown in Fig. 10. The average accuracies were 98.00% and 96.29%, both with a standard deviation of 0.78. The AUCs were 98.06 and 96.58, with standard deviations of 0.90 and 0.72, respectively. These results indicate that our proposed multi-modal GNN is stable.
In this work, the adjacency matrix can be constructed based on a combination of similarity matrix and non-imaging data. To better demonstrate the advantages of multi-modal mechanisms and to explore the differences in constructing adjacency matrices based on phenotypic information, Fig. 11 compares the visualizations of the adjacency matrices constructed using MMSE and GAM (Gender+Apoe4+MMSE) information based on IFDCGCN method in two diagnostic tasks. In these visualizations, we rearranged the rows so that subjects in the same category were grouped together to make the differences between categories more apparent. The group similarity matrices
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline sMRI+PET & \multicolumn{4}{c|}{AD vs. NC} & \multicolumn{4}{c}{sMCI vs. pMCI} \\ \hline & ACC & SEN & SPE & AUC & ACC & SEN & SPE & AUC \\ Similarity & 91.07 & 90.22 & 91.87 & 91.04 & 75.50 & 49.90 & 88.70 & 69.30 \\ Apoe4 & 91.07 & 88.56 & 93.17 & 90.86 & 75.50 & 50.03 & 88.80 & 69.42 \\ Age & 88.93 & 86.43 & 91.08 & 88.76 & 74.50 & 48.06 & 88.80 & 68.93 \\ Gender & 91.79 & 90.15 & 93.19 & 91.67 & 76.50 & 51.81 & **89.70** & 70.76 \\ MMSE & **96.68** & **99.19** & 94.49 & **96.84** & 76.00 & 51.03 & 88.80 & 69.92 \\ G+M & 95.00 & 93.09 & 96.71 & 94.90 & 77.00 & 51.90 & 89.37 & 70.63 \\ G+A+M & 93.21 & 90.19 & **95.98** & 93.08 & **78.00** & **54.96** & 89.37 & **72.16** \\ \hline \end{tabular}
* G+M means the combining of gender and MMSE. G+A+M means the combining of gender, apoe4 and MMSE.
\end{table}
Table 5: The classification results of Multi-modal GNN framework combining the phenotypic information
Figure 9: Comparison of the diagnostic accuracy of the multi-modal GNN combined with various phenotypic information.
constructed based on our integrated fusion mechanism in combination with MMSE showed a very significant intra-group correlation for the AD vs. NC subject groups, resulting in an average accuracy of 96.68%. In contrast, the combination based on multiple kinds of phenotypic information yields relatively better intra-group correlations in sMCI vs. pMCI diagnosis. However, MCI conversion prediction needs continued exploration: since both sMCI and pMCI belong to the MCI category, the low discriminability of the features of the two types also contributes to the lower prediction accuracy. We infer that it is more important to acquire or construct more discriminative individual features.
In addition to analyzing the parameters that affect the prediction performance of the GNN, we also compared it with several state-of-the-art methods based on the ADNI database to verify its utility. The comparisons cover sMRI-based, PET-based and multi-modal methods, with AD vs. NC results in Table 6 and sMCI vs. pMCI results in Table 7. Our proposed method achieves satisfactory performance: besides better prediction accuracy, it also offers superior or comparable diagnostic specificity. Notably, our method also outperforms some multi-modal CNN methods.
In summary, the changes in brain structure and metabolic characteristics of AD patients differ, which makes multi-modal images provide complementary information. However, existing GNN analyses based on multi-modal image features are mostly limited to DTI and fMRI [16][17]: for these modalities it is clear how to present the 4-dimensional data as brain networks and construct node topologies, because brain regions in fMRI or DTI imaging carry sequential signals or fiber connection directions. It is not obvious, however, how a GNN can be applied to sMRI and PET data. In addition, much GNN-based research focuses on improving the network architecture and optimizing the adjacency matrix while ignoring the importance of individual features. To address these shortcomings, we obtain specific individual features by constructing brain networks from the respective ROI features, and then build the GNN with nodes representing subjects, which solves the difficulty of constructing graph neural networks from sMRI and PET features.
In our approach, we further exploit the advantages of multi-modal data, and improved diagnostic performance was achieved through the combination of multi-modal features, multi-modal adjacency matrices and late decision fusion. Compared with fMRI and DTI data, the preprocessing of sMRI and PET is relatively simple, while a GNN is fast, flexible and more parameter-efficient than a CNN, and integrates multi-source and multi-modal data more easily. Therefore, our work has considerable application prospects for the early diagnosis of AD.
## 5 Conclusion
In this study, we proposed a population-based and multi-modal GNN to predict early Alzheimer's disease using image features and phenotypic information. Our method obtained specific individual features by constructing brain networks and combined imaging data with phenotypic data to represent
Figure 11: Adjacency matrices combining non-imaging data based on the IFDCGCN in AD detection (left column) and MCI prediction (right column). Our method shows significant intra-group correlation and provides obvious contrast between the classes, especially when combining the MMSE information (top row) in AD detection (left column).
Figure 10: The five test ACC and AUC results by 5-fold cross-validation based on IFDCGCN multi-modal GNN on two sub-datasets, respectively.
the data association between individual features and subjects in potential populations. In addition, we combined this with a shared adjacency matrix and a decision fusion mechanism to achieve better multi-modal GNN diagnostic performance. Across several experiments, our proposed multi-modal method achieves improved prediction results on the ADNI dataset, especially in AD detection. Compared with several state-of-the-art methods, it shows better or equivalent diagnostic performance, including on the relatively challenging sMCI versus pMCI prediction task. Our study was mainly exploratory and based on the ADNI dataset; further validation on additional datasets may be necessary to confirm the findings.
## Acknowledgments
Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: [http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf](http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf). More details can be found at adni.loni.usc.edu.
This work was partly supported by the Chengdu Major Technology Application Demonstration Project (Grant No. 2019-YF09-00120-SN), the Key Research and Development Program of Sichuan Province (Grant No. 2022YFS0098), the China Scholarship Council (Grant No. 202106240177). This work was also partly supported by AcRF Tier-2 grant 2EP20121-003 by Ministry of Education, Singapore.
## Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
2309.05846 | Designs and Implementations in Neural Network-based Video Coding | The past decade has witnessed the huge success of deep learning in well-known
artificial intelligence applications such as face recognition, autonomous
driving, and large language model like ChatGPT. Recently, the application of
deep learning has been extended to a much wider range, with neural
network-based video coding being one of them. Neural network-based video coding
can be performed at two different levels: embedding neural network-based
(NN-based) coding tools into a classical video compression framework or
building the entire compression framework upon neural networks. This paper
elaborates some of the recent exploration efforts of JVET (Joint Video Experts
Team of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC29) in the name of neural
network-based video coding (NNVC), falling in the former category.
Specifically, this paper discusses two major NN-based video coding
technologies, i.e. neural network-based intra prediction and neural
network-based in-loop filtering, which have been investigated for several
meeting cycles in JVET and finally adopted into the reference software of NNVC.
Extensive experiments on top of the NNVC have been conducted to evaluate the
effectiveness of the proposed techniques. Compared with VTM-11.0_nnvc, the
proposed NN-based coding tools in NNVC-4.0 could achieve {11.94%, 21.86%,
22.59%}, {9.18%, 19.76%, 20.92%}, and {10.63%, 21.56%, 23.02%} BD-rate
reductions on average for {Y, Cb, Cr} under random-access, low-delay, and
all-intra configurations respectively. | Yue Li, Junru Li, Chaoyi Lin, Kai Zhang, Li Zhang, Franck Galpin, Thierry Dumas, Hongtao Wang, Muhammed Coban, Jacob Ström, Du Liu, Kenneth Andersson | 2023-09-11T22:12:41Z | http://arxiv.org/abs/2309.05846v2 | # Designs and Implementations in Neural Network-based Video Coding
###### Abstract
The past decade has witnessed the huge success of deep learning in well-known artificial intelligence applications such as face recognition, autonomous driving, and large language model like ChatGPT. Recently, the application of deep learning has been extended to a much wider range, with neural network-based video coding being one of them. Neural network-based video coding can be performed at two different levels: embedding neural network-based (NN-based) coding tools into a classical video compression framework or building the entire compression framework upon neural networks. This paper elaborates some of the recent exploration efforts of JVET (Joint Video Experts Team of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC29) in the name of neural network-based video coding (NNVC), falling in the former category. Specifically, this paper discusses two major NN-based video coding technologies, i.e. neural network-based intra prediction and neural network-based in-loop filtering, which have been investigated for several meeting cycles in JVET and finally adopted into the reference software of NNVC. Extensive experiments on top of the NNVC have been conducted to evaluate the effectiveness of the proposed techniques. Compared with VTM-11.0_nnvc, the proposed NN-based coding tools in NNVC-4.0 could achieve {11.94%, 21.86%, 22.59%}, {9.18%, 19.76%, 20.92%}, and {10.63%, 21.56%, 23.02%} BD-rate reductions on average for {Y, Cb, Cr} under random-access, low-delay, and all-intra configurations respectively.
In-loop filter, intra prediction, neural-network-based video coding, Versatile Video Coding, video compression.
## I Introduction
With the popularization of smartphones and the rapid development of video-based applications, the volume of video material has been increasing at an unprecedented speed in recent years. The efficient storage and transmission of such mass data have become a great challenge. To cope with this challenge, the Joint Video Experts Team of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC29 has developed and finalized the latest video coding standard, namely Versatile Video Coding (VVC), to provide a more compact representation of video data [3]. VVC/H.266 has made significant progress in terms of coding efficiency, providing approximately a 50% bit-rate saving for equivalent perceptual quality relative to the performance of the prior standard High Efficiency Video Coding (HEVC)/H.265 [4]. While VVC offers a new level of capability for video compression, the need to develop more advanced video coding techniques remains.
Classical video coding schemes epitomized by VVC adopt a sophisticated framework comprising numerous manually optimized and hand-crafted coding tools. After development of several generations of video coding standards such as AVC/H.264 [5], HEVC/H.265, and VVC/H.266, further improvement has become more and more difficult along this path. Therefore, experts are exploring other learning-based schemes to improve coding efficiency.
Due to the availability of powerful computing resources and abundant training data, deep learning has made significant breakthroughs over the past decade in well-known artificial intelligence applications such as face recognition [6, 7], autonomous driving [8, 9], and large language models like ChatGPT [10]. Recently, the application of deep learning has been extended to a much wider range, especially to scenarios that can be easily formulated as supervised problems. The target of video compression can be conceptualized as constructing a mapping from an original space (i.e., raw video data) to a latent domain (i.e., a bit stream), and back again, which fits the scope of deep learning. There exist two ways to build this mapping: utilizing both deep learning-based modules and non-learning-based modules, or utilizing purely deep learning-based modules [11, 12]. Accordingly, the efforts exploring neural network-based video coding fall into two categories: embedding neural network-based (NN-based) coding tools into a classical video compression framework [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24], or building the entire compression framework upon neural networks [25, 26, 27, 28, 29, 30]. In the former category, people usually design NN-based alternatives, e.g. an NN-based intra/inter predictor [13, 14, 15, 16, 17], transform [18], arithmetic probability estimator [19], in-loop/post filter [20, 21, 22], re-sampler [23], etc., to compete with the non-NN-based counterparts within the classical coding framework, relying on rate-distortion optimization to guarantee an improved performance. In the latter category, people adopt a predictive coding-based
method [25, 26, 27], which first generates the predicted frame, e.g. by using optical flow, and then encodes the residue, e.g. with an auto-encoder, or a conditional coding-based method [28, 29], where the prediction is embedded into the latent domain of the auto-encoder.
This paper elaborates some of the recent exploration efforts in JVET focusing on developing neural network-based video coding (NNVC) technologies beyond the capabilities of VVC. After investigation activities of several meeting cycles, the experts in JVET have identified two promising NN-based tools as an enhancement of conventional modules in the existing VVC design, i.e. neural network-based intra prediction and neural network-based in-loop filtering, and adopted them into the reference software of NNVC to demonstrate a reference implementation of encoding techniques, decoding process, as well as the training methods for these tools. To generate a better intra prediction, a nonlinear mapping from causal neighboring samples to a prediction of the current block is derived using fully connected neural networks [14, 31]. In addition, the neural network yields side outputs beneficial for subsequent Most Probable Mode (MPM) list construction and the transform kernel selection processes. To better recover details lost during compression, a convolutional neural network-based in-loop filter is designed [32, 33, 34, 35, 36]. The deep filter is trained iteratively to address the over-filtering issue [36]. To further improve performance, the deep filter design also considers elements including coded information exploitation, parameter selection, inference granularity adaptation, residual scaling, temporal filtering, combination with deblocking filtering, harmonization with RDO, etc.
Extensive experiments on top of the Versatile Video Coding have been conducted to evaluate the techniques included in NNVC. Compared with VTM-11.0_nnvc, the proposed NN-based coding tools in NNVC-4.0 could achieve {11.94%, 21.86%, 22.59%}, {9.18%, 19.76%, 20.92%}, and {10.63%, 21.56%, 23.02%} BD-rate reductions on average for {Y, Cb, Cr} under random-access, low-delay, and all-intra configurations respectively.
The remainder of the paper is organized as follows. Section II introduces NN-based intra prediction technique. Section III elaborates NN-based in-loop filtering technique. Section IV describes the small ad-hoc deep learning (SADL) library for inference of NN-based models in NNVC. Performance evaluation of NNVC techniques is presented in Section V. Finally, Section VI concludes the paper.
## II NN-based Intra Prediction
### _VVC coding tools directly interacting with the NN-based intra prediction_
To justify the design of the NN-based intra prediction in Section II-B, we first detail the VVC coding tools that interact most strongly, in terms of compression efficiency, with the NN-based intra prediction mode to be put into VVC.
Any intra prediction mode, including an NN-based one, interacts in particular with the other intra prediction modes, partly because the entropy coding of the index of the intra prediction mode selected to predict a given block creates competition between them. Moreover, any intra prediction mode depends on transform coding, as the residue resulting from the intra prediction of a given block is passed on to transform coding [37]. This dependency grows even stronger in the case of the secondary transforms in VVC, called Low-Frequency Non-Separable Transform (LFNST), as different LFNST kernels are specialized to different intra modes.
Precisely, for a given block predicted in intra and using the Discrete Cosine Transform-2 (DCT-2) horizontally and the DCT-2 vertically as primary transform, LFNST consists of applying a non-separable transform to the top-left region of the block of coefficients arising from the primary transform [37, 38]. LFNST gathers \(4\) transform sets with \(2\) kernels per set. Note that, for a given kernel in a given transform set, the used matrix of weights and the shape of the top-left region involved in the second transform are determined by the size of the current block. The signaling of LFNST is decomposed into a so-called explicit signaling of the kernel set index and a so-called implicit signaling of the transform set index. In the explicit signaling, _lfnstIdx_\(\in\{0,1,2\}\) is written to the bitstream. _lfnstIdx_\(=0\) means that LFNST does not apply for the current block whereas, if _lfnstIdx_\(\in\{1,2\}\), _lfnstIdx_ - 1 indicates the used kernel set index. In the implicit signaling, the index of the intra prediction mode selected to predict the current block directly maps to the transform set index and to whether the block of primary transform coefficients is transposed before applying LFNST. As no relationship exists between the _directionality_ of the prediction of a given block via an NN and the index of the NN-based intra prediction mode [39], this mapping must not be reused for a block predicted via an NN; it must rather be produced by the NN, see Section II-B.
### _Framework_
The NN-based intra prediction mode contains \(7\) neural networks, each predicting blocks of a different size in \(\{4\times 4,8\times 4,16\times 4,32\times 4,8\times 8,16\times 8,16\times 16\}\).
In this NN-based intra prediction mode, the neural network predicting blocks of size \(w\times h\) is denoted \(f_{h,w}(\,\cdot\,;\,\mathbf{\theta}_{h,w})\), where \(\mathbf{\theta}_{h,w}\) gathers its parameters. For a given \(w\times h\) block \(\mathbf{Y}\) to be predicted, \(f_{h,w}(\,\cdot\,;\,\mathbf{\theta}_{h,w})\) takes a preprocessed version \(\mathbf{\widetilde{X}}\) of the context \(\mathbf{X}\) made of \(n_{a}\) rows of \(n_{l}+2w+e_{w}\) decoded reference samples located above this block and \(n_{l}\) columns of \(2h+e_{h}\) decoded reference samples located on its left side to provide \(\mathbf{\widetilde{Y}}\), see Fig. 1. The application of a postprocessing to \(\mathbf{\widetilde{Y}}\) yields a prediction \(\mathbf{\hat{Y}}\) of \(\mathbf{Y}\). The above-mentioned preprocessing and postprocessing are fully specified in Section II-C. Besides, to replace the mapping in the LFNST implicit signaling presented in Section II-A, \(f_{h,w}(\,\cdot\,;\,\mathbf{\theta}_{h,w})\) returns two indices grpIdx\({}_{1}\) and grpIdx\({}_{2}\). For \(i\in\{1,2\}\), grpIdx\({}_{i}\) denotes the index characterizing the LFNST transform set index and whether the primary transform coefficients resulting from the application of the DCT-2 horizontally and the DCT-2 vertically to the residue of the neural network prediction are transposed when lfnstIdx\(=i\). Furthermore, for efficient synergy between the VVC intra prediction modes, i.e. PLANAR, DC, and the
\(65\) directional intra prediction modes, and the NN-based intra prediction, c.f. Section II-D, \(f_{h,w}(\,\cdot\,;\,\mathbf{\theta}_{h,w})\) returns the index repIdx \(\in[0,66]\) of the VVC intra prediction mode whose prediction of \(\mathbf{Y}\) from one row of decoded reference samples above \(\mathbf{Y}\) and one column of decoded reference samples on its left side is the closest to \(\hat{\mathbf{Y}}\).
Note that \(n_{a}\), \(n_{l}\), \(e_{w}\), and \(e_{h}\) together define the shape of the context \(\mathbf{X}\) of \(\mathbf{Y}\). \(n_{a}\), \(n_{l}\), \(e_{w}\), and \(e_{h}\) depend on \(h\) and \(w\), these dependencies being further explained in Section II-E.
### _Preprocessing and Postprocessing_
The preprocessing of the context fed into a neural network, shared by the training and test phases, is designed to obtain a range of values at the neural network input that eases optimization during the training phase [40]. Precisely, the preprocessing in Fig. 1 consists of the following four steps.
* The mean \(\mu\) of the available reference samples \(\overline{\mathbf{X}}\) in \(\mathbf{X}\) is subtracted from \(\overline{\mathbf{X}}\), where the context \(\mathbf{X}\) of the current \(w\times h\) block \(\mathbf{Y}\) is decomposed into the available reference samples \(\overline{\mathbf{X}}\) and the unavailable reference samples \(\mathbf{X}_{u}\), see Fig. 2.
* The reference samples in the context \(\mathbf{X}\) are multiplied by \(\rho=1/(2^{b-8})\), \(b\) being the internal bitdepth, i.e. 10 in VVC.
* All the unavailable reference samples \(\mathbf{X}_{u}\) in \(\mathbf{X}\) are set to 0.
* The context resulting from the previous step is flattened, yielding the vector \(\widetilde{\mathbf{X}}\) of size \(n_{a}(n_{l}+2w+e_{w})+(2h+e_{h})n_{l}\).
The postprocessing of the output of a neural network must approximately reverse the above preprocessing. Precisely, the postprocessing depicted in Fig. 1 consists of reshaping the vector \(\widetilde{\mathbf{Y}}\) of size \(hw\) into a rectangle of height \(h\) and width \(w\), dividing the result of the reshape by \(\rho\), adding the mean \(\mu\) of the available reference samples in the context of the current block, and clipping to \([0,\ 2^{b}-1]\). Therefore, the postprocessing can be summarized as
\[\hat{\mathbf{Y}}=\min\Bigg{(}\max\Bigg{(}\frac{\text{reshape}\Big{(}\widetilde{ \mathbf{Y}}\Big{)}}{\rho}+\mu,\ 0\Bigg{)}\,\ 2^{b}-1\Bigg{)}. \tag{1}\]
Note that the above preprocessing and postprocessing apply to a neural network in floats. For a neural network in signed-integers exclusively, \(\rho=2^{Q_{in}-b+8}\), \(Q_{in}\) denoting the input quantizer. For integer width \(16\), \(Q_{in}=7\). For integer width \(32\), \(Q_{in}=23\).
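For the float version of the networks, the four preprocessing steps above and the postprocessing of Eq. (1) can be sketched as follows; this is a NumPy illustration of ours (the normative implementation lives in the NNVC reference software):

```python
import numpy as np

def preprocess_context(x, available, b=10):
    """Preprocessing of Section II-C (float networks): subtract the mean
    of the available reference samples, scale by rho = 1/2^(b-8), zero the
    unavailable samples, and flatten."""
    rho = 1.0 / (2 ** (b - 8))
    mu = x[available].mean()
    x_tilde = (x - mu) * rho     # steps 1 and 2 combined
    x_tilde[~available] = 0.0    # step 3: unavailable samples set to 0
    return x_tilde.ravel(), mu   # step 4: flattening

def postprocess_prediction(y_tilde, mu, h, w, b=10):
    """Postprocessing of Eq. (1): reshape, undo the scaling, add back the
    mean of the available reference samples, and clip to [0, 2^b - 1]."""
    rho = 1.0 / (2 ** (b - 8))
    y_hat = y_tilde.reshape(h, w) / rho + mu
    return np.clip(y_hat, 0, 2 ** b - 1)
```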
### _MPM List Generation_
In VVC, an efficient entropy coding of the index of the intra prediction mode selected to predict the current luma Coding Block (CB) involves a list of \(6\) Most Probable Modes (MPMs). This list includes the index of the intra prediction mode selected to predict the luma CB above the current one and the index of the intra prediction mode selected to predict the luma CB on the left side of the current one.
In VVC with the NN-based intra prediction mode, if a non-NN-based intra prediction mode is selected to predict the current luma CB and the current luma CB is surrounded by luma CBs predicted via the NN-based intra prediction mode, the relevance of the list of MPMs of the current luma CB can be maintained thanks to repIdx. Indeed, if its left luma CB is predicted via the NN-based mode, the repIdx returned during the prediction of the left luma CB can become a candidate index to be put into the list of MPMs. If its above luma CB is predicted via the NN-based mode, the repIdx collected during the prediction of the above luma CB can become a candidate index to be put into the list of MPMs.
Fig. 2: Decomposition of the context \(\mathbf{X}\) of decoded reference samples around the current \(w\times h\) block \(\mathbf{Y}\) into the available reference samples \(\overline{\mathbf{X}}\) and the unavailable reference samples \(\mathbf{X}_{u}\). In this figure, \(h=4\), \(w=8\), \(n_{a}=n_{l}=e_{h}=e_{w}=4\), and the number of unavailable reference samples reaches its maximum value.
### _Context Transformations_
As said at the beginning of Section II-B, the NN-based intra prediction mode comprises the \(7\) neural networks \(\{f_{h,w}(\,\cdot\,;\,\mathbf{\theta}_{h,w})\}_{(h,w)\in S}\), \(S=\{\left(4,4\right),\left(4,8\right),\left(4,16\right)\), \(\left(4,32\right),\left(8,8\right),\left(8,16\right),\left(16,16\right)\}\), each predicting blocks of the corresponding shape in \(S\). For a given \(w\times h\) block to be predicted, the NN-based intra prediction mode may not contain \(f_{h,w}(\,\cdot\,;\,\mathbf{\theta}_{h,w})\). To circumvent this, context transformations help. Specifically, the context of the current block can be down-sampled vertically by a factor \(\delta\) and/or down-sampled horizontally by a factor \(\gamma\) and/or transposed before the step "preprocessing" in Fig. 1. Then, the prediction of the current block can be transposed and/or up-sampled vertically by the factor \(\delta\) and/or up-sampled horizontally by the factor \(\gamma\) after the step "postprocessing" in Fig. 1. The transposition of the context of the current block and the prediction, \(\delta\), and \(\gamma\) are chosen so that a neural network belonging to the NN-based intra prediction mode can be picked for prediction, see Table I. Note that the NN-based intra prediction mode is disallowed for \(\left(h,w\right)\) absent from Table I.
To limit the complexity of the neural network prediction, \(n_{a}\left(h,w\right)\) and \(n_{l}\left(h,w\right)\) are defined such that, after the potential context transformations, the number of rows and the number of columns in the resulting context never exceed \(8\).
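The choice of the transposition and of the factors \(\delta\) and \(\gamma\) is specified by Table I (not reproduced here); the greedy search below is only an illustrative approximation of ours and may deviate from the normative mapping:

```python
S = {(4, 4), (4, 8), (4, 16), (4, 32), (8, 8), (8, 16), (16, 16)}  # trained shapes (h, w)

def context_transform(h, w, supported=S):
    """Illustrative search for the transposition and the down-sampling
    factors (delta, gamma) of Section II-E; the normative per-shape
    mapping is given by Table I of the paper."""
    transpose = h > w                      # trained shapes satisfy h <= w
    hh, ww = (w, h) if transpose else (h, w)
    delta = gamma = 1
    while (hh // delta, ww // gamma) not in supported and delta <= 4:
        # Halve whichever dimension is currently larger (illustrative rule)
        if ww // gamma >= hh // delta and ww // gamma > 4:
            gamma *= 2
        else:
            delta *= 2
    return transpose, delta, gamma         # inverses are applied after prediction
```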
### _Signaling of the NN-based Intra Prediction Mode_
#### II-F1 Signaling in luma
Given that the NN-based intra prediction mode predicts blocks of each shape in \(\overline{S}=S\cup\{\left(8,4\right),\left(16,4\right)\), \(\left(32,4\right)\), \(\left(16,8\right)\), \(\left(8,32\right)\), \(\left(32,8\right)\), \(\left(16,32\right)\), \(\left(32,16\right)\), \(\left(32,32\right)\), \(\left(64,64\right)\}\), c.f. Section II-E, the intra prediction mode signaling of the current \(w\times h\) luma CB can be adapted to incorporate the NN-based mode, at low cost, by introducing a flag \(nnFlagY\) only if \(\left(h,w\right)\in\overline{S}\). In detail, the adapted intra prediction mode signaling \(\mathcal{S}_{a}\) of the current \(w\times h\) luma CB whose top-left pixel is at position \(\left(y,x\right)\) in the current luma channel is split into two cases, see Fig. 3.
* If \(\left(h,w\right)\in\overline{S}\), \(nnFlagY\) appears. \(nnFlagY=1\) means that the NN-based mode is selected, then END. \(nnFlagY=0\) tells that the NN-based mode is not selected, then the regular VVC intra prediction mode signaling \(\mathcal{S}_{Y}\) of the current luma CB applies.
* Otherwise, \(\mathcal{S}_{Y}\) applies.
Note that, in the case \(\left(h,w\right)\in\overline{S}\)\(\&\&\)\(nnFlagY=1\), if the context of the current luma CB goes out of the current luma channel bounds, i.e. \(x<n_{l}\parallel y<n_{a}\), PLANAR replaces the NN-based intra prediction mode.
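The luma decision tree above can be summarized with a small, purely illustrative Python function; the mode strings and the convention of passing the already-parsed `nn_flag_y` are our assumptions, not the VTM parsing API.

```python
# Hedged sketch of the adapted luma signaling S_a described above.
S_BAR = {(4, 4), (4, 8), (4, 16), (4, 32), (8, 8), (8, 16), (16, 16),
         (8, 4), (16, 4), (32, 4), (16, 8), (8, 32), (32, 8),
         (16, 32), (32, 16), (32, 32), (64, 64)}

def luma_intra_mode(h, w, x, y, n_l, n_a, nn_flag_y):
    """nn_flag_y: the parsed flag if (h, w) is in S_BAR, otherwise ignored."""
    if (h, w) in S_BAR and nn_flag_y == 1:
        # Context leaving the luma channel bounds: fall back to PLANAR.
        return "PLANAR" if (x < n_l or y < n_a) else "NN_MODE"
    return "REGULAR_VVC_SIGNALING"   # continue with the regular signaling S_Y

print(luma_intra_mode(8, 8, x=0, y=32, n_l=8, n_a=8, nn_flag_y=1))  # PLANAR
```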
#### II-F2 Signaling in chroma
Before presenting the signaling of the NN-based mode in chroma, the Direct Mode (DM) in VVC must be detailed. For a given pair of chroma CBs predicted via the DM, the intra prediction mode selected to predict the luma CB being collocated with this pair of chroma CBs is used to predict each of these two chroma CBs [41].
Based on the principle of the proposed signaling in luma, c.f. II-F1, the adapted intra prediction mode signaling of the current pair of \(w\times h\) chroma CBs whose top-left pixel is at position \(\left(y,x\right)\) in the current pair of chroma channels is decomposed into two cases.
* If the luma CB collocated with this pair of chroma CBs is predicted by the NN-based mode
* If \(\left(h,w\right)\in\overline{S}\), denoted Case \(\left[*\right]\), the DM becomes the NN-based intra prediction mode.
* Otherwise, the DM is set to PLANAR.
* If \(\left(h,w\right)\in\overline{S}\), \(nnFlagC\) is placed before the DM flag in the decision tree of the intra prediction mode signaling in chroma. \(nnFlagC=1\), a.k.a. Case \(\left[**\right]\), indicates that the NN-based mode is selected, then END. \(nnFlagC=0\) tells that the NN-based mode is not selected, then the regular VVC intra prediction mode signaling \(\mathcal{S}_{C}\) of the current pair of chroma CBs resumes from the DM flag.
* Otherwise, \(\mathcal{S}_{C}\) applies.
Note that, in Cases \(\left[*\right]\) and \(\left[**\right]\), if the context of the current chroma CB goes out of the current chroma channel bounds, i.e. \(x<n_{l}\parallel y<n_{a}\), PLANAR replaces the NN-based intra prediction mode.
### _Training_
In an Intra-slice (I-slice) in VVC, as the partitioning is mainly driven by the intra prediction, the reference-samples-to-block relationships specific to the VVC intra prediction modes are usually retrieved in the pairs of a partitioned block and its reference samples [14]. Thus, training a neural network on pairs of a block extracted from the VVC partitioning of a given frame and its context leads to the neural network essentially learning the VVC intra prediction capability. To bypass this, an iterative training of neural networks for intra prediction is developed, see Fig. 4.
Fig. 3: Adapted intra prediction mode signaling \(\mathcal{S}_{a}\) of the current \(w\times h\) luma CB. This CB is framed in orange using dashed line. The bin value of \(nnFlagY\) appears in bold gray.
* At cycle \(0\), VTM-11.0 anchor produces pairs of a block and its context. Then, the \(7\) neural networks are trained on them, initializing their parameters randomly.
* At cycle \(1\), VTM-11.0 with the NN-based intra prediction mode using the parameters trained at cycle \(0\) produces pairs of a block and its context. Then, the \(7\) neural networks are trained on them, initializing their parameters from their state at the end of cycle \(0\).
* At cycle \(2\), VTM-11.0 including the NN-based intra prediction mode using the parameters trained at cycle \(1\) generates pairs of a block and its context. Then, the \(7\) neural networks are trained on them, initializing their parameters from their state at the end of cycle \(1\). Then, using the same training data, the trainings of these \(7\) neural networks are resumed, introducing this time a sparsity constraint on their weights.
* At cycle \(3\), VTM-11.0 with the NN-based intra prediction mode using the parameters trained at cycle \(2\) gives training data. Then, the portion computing grpIdx\({}_{1}\) and grpIdx\({}_{2}\) in each of the \(7\) neural networks is trained on them, initializing their parameters from their state at the end of cycle \(2\). A schematic driver for these four cycles is sketched below.
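The schematic below condenses the four cycles into a single loop. It is not the actual NNVC training script; the three helpers are illustrative stubs standing in for the real codec and training tooling.

```python
# Schematic driver for the four training cycles above (illustrative stubs only).
def compress_with_vtm(nn_params):        # returns (block, context) training pairs
    return f"pairs generated with {'anchor' if nn_params is None else 'NN mode'}"

def train(params, data, sparsity=False): # full training, optional sparse weights
    return params + [("sparse" if sparsity else "dense", data)]

def train_grpidx_head(params, data):     # retrain only the grpIdx_1/grpIdx_2 portion
    return params + [("grpidx-only", data)]

params = []                              # cycle 0 starts from random initialization
for cycle in range(4):
    data = compress_with_vtm(None if cycle == 0 else params)
    if cycle == 3:
        params = train_grpidx_head(params, data)
    else:
        params = train(params, data)     # warm-started from the previous cycle
        if cycle == 2:
            params = train(params, data, sparsity=True)
```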
### _Inference Details_
Small Ad-hoc Deep Learning library (SADL), c.f. Section IV, runs the inference of the NN-based intra prediction, see Table II, using its fixed point-based implementation where both neurons and weights are represented as 16-bit signed integers. In each neural network, each intermediate representation features \(1216\) neurons. In each layer, LeakyReLU is used as the non-linearity, except for the last layer, which has no non-linearity.
### _Relation to the state-of-the-art_
Prior to the proposed NN-based intra prediction mode in VVC, neural networks for intra prediction had been integrated into either VVC or one of its predecessors. Especially, in [42, 43], multiple neural networks are jointly trained and then integrated as a single intra prediction mode into a predecessor of VVC: HEVC enhanced with non-square partitions and VTM-1.0, respectively. Precisely, in this mode, a different set of neural networks predicts blocks of each size. During the training phase, the use of a partitioner and an objective function being the minimum rate-distortion cost computed from the neural network predictions over all partitions of each training block into sub-blocks induces a specialization of different neural networks to different classes of textures. Note that the iterative simplifications of this NN-based intra prediction mode [44, 45] have led to Matrix-based Intra Prediction (MIP), being part of VVC.
Note that, as MIP is a VVC intra prediction mode, the experiment \(M_{1}\) in Table VIII reflects the rate-distortion performance of the proposed NN-based intra prediction mode on top of MIP.
## III NN-based In-loop Filter
The proposed NN-based in-loop filter is known as filter set #1 [32] in NNVC-4.0. The filter architectures are introduced first, followed by an elaboration on parameter selection, residual scaling, temporal filtering, harmonization with RDO, etc. Finally, we describe the inference and training details of the filter.
### _Network Architecture_
Fig. 5 illustrates the diagram of the CNN-based in-loop filter for the luma component, comprising feature extraction, backbone, and reconstruction parts.
It is asserted in [46] that applying the existing in-loop filters in VVC prior to the CNN filter may cause the loss of important information. Therefore, the reconstruction samples (Rec in Fig. 5) refer to samples unfiltered by the existing in-loop filters in VTM. Note that in-loop filters such as SAO (sample adaptive offset, [47]) or ALF (adaptive loop filter, [48]) create nontrivial bitrate overhead to lower the compression distortion. When placed after the deep filter, this overhead may be reduced. Besides reconstruction, auxiliary inputs are utilized to improve the performance. Intra/inter prediction is the key process for reducing spatial and temporal redundancy. The encoder selects a prediction mode with the best rate-distortion trade-off from a list of candidates during the encoding process. In other words, the prediction samples available at the decoder side reflect decisions made by the encoder, providing important clues about the original samples. In addition, compression distortion is directly caused by the quantization of the residues in the transform domain. Since residues are highly dependent on the quality of prediction samples, prediction samples can also significantly impact the type and strength of artifacts in the decoded images. Taking the above analysis into account, prediction samples (Pred in Fig. 5) are fed into the filter as an additional auxiliary input. Similarly, the boundary strength (BS in Fig. 5) generated during the deblocking process reflects the strength of compression artifacts near block boundaries. Inputs of QP and IPB make the filter aware of high-level compression conditions, i.e., the quantization parameter and prediction types (intra, uni-inter, bi-inter). Given these two auxiliary inputs, a single model is capable of handling contents compressed with different QPs and block prediction types. For instance, if the IPB information states that a block has been inter predicted
rather than intra predicted, this means that the block has likely been NN-filtered once already, and the NN model can lower the filtering strength to avoid over-filtering.
The feature extraction part accounts for aggregating information from different inputs. Specifically, individual features are extracted separately, concatenated together along the channel dimension, shrunk through a \(1\times 1\) convolutional layer, and downsampled to half resolution, to form a compact representation of all inputs. The network backbone, which consists of 8 cascaded residual blocks, transforms the compact representation into clean features with less compression artifacts. Finally, the reconstruction part maps the clean features into the pixel domain to predict the details lost during compression. Note that the CNN-based in-loop filter is actually designed to learn the mapping from distorted input to lost details (the residual between groundtruth and distorted input), thus the final output can be obtained by,
\[\mathbf{R}_{nn}=\mathbf{R}_{no}+f(\mathbf{R}_{no}) \tag{2}\]
where \(\mathbf{R}_{no}\) and \(\mathbf{R}_{nn}\) denote the unfiltered samples and filtered samples respectively, while \(f\) is the CNN-based in-loop filter.
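The structure in Fig. 5 and the residual formulation in (2) can be made concrete with a minimal PyTorch sketch. This is only an illustration of the dataflow: the per-input convolutional heads, kernel sizes, activations, and the PixelShuffle upsampling are our assumptions; only the 96 feature maps, the \(1\times 1\) fusion, the stride-2 downsampling, the 8 residual blocks, and the residual output follow the description above.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch=96):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class LumaFilter(nn.Module):
    """Dataflow sketch of Fig. 5: feature extraction -> backbone -> reconstruction."""
    def __init__(self, ch=96):
        super().__init__()
        # One assumed conv head per input plane (Rec, Pred, BS, QP, IPB).
        self.heads = nn.ModuleList(nn.Conv2d(1, ch, 3, padding=1) for _ in range(5))
        # Concatenate, shrink with a 1x1 conv, downsample to half resolution.
        self.fuse = nn.Sequential(nn.Conv2d(5 * ch, ch, 1),
                                  nn.Conv2d(ch, ch, 3, stride=2, padding=1))
        self.backbone = nn.Sequential(*(ResBlock(ch) for _ in range(8)))
        # Map clean features back to a full-resolution pixel-domain residual.
        self.recon = nn.Sequential(nn.Conv2d(ch, 4, 3, padding=1), nn.PixelShuffle(2))

    def forward(self, rec, pred, bs, qp, ipb):
        feats = [head(x) for head, x in zip(self.heads, (rec, pred, bs, qp, ipb))]
        res = self.recon(self.backbone(self.fuse(torch.cat(feats, dim=1))))
        return rec + res  # Eq. (2): R_nn = R_no + f(R_no)

x = torch.rand(1, 1, 64, 64)
print(LumaFilter()(x, x, x, x, x).shape)  # torch.Size([1, 1, 64, 64])
```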
Coding tools such as CCLM [49] and CCALF [50] utilize luma information for boosting chroma performance. Similarly, the luma information is exploited for the chroma in-loop filtering. In the YUV 4:2:0 format where the luma resolution is higher than that for chroma, features are first extracted separately from luma and chroma. Then luma features are downsampled and concatenated with chroma features. Note that IPB information is not included in the chroma filter, as no benefits are observed from this input for chroma. The same network backbone and reconstruction parts from the luma network are used for the chroma network.
### _Parameter Selection_
As analyzed in [36], the content propagation phenomenon that exists in the inter coding case may deteriorate the efficiency of in-loop filtering significantly, as samples filtered in one frame may be propagated to a following frame and filtered again, leading to over-filtering. Intuitively, providing options with multiple filtering strengths may mitigate the over-filtering issue. To adjust the filtering strength of a candidate filter, one possible way is to modify its input parameter QP slightly, because distortion levels at different QPs should cause filtering behaviors with corresponding strengths during the training process.
Fig. 4: Iterative training of the neural networks belonging to the NN-based intra prediction mode.
Fig. 5: Schematic of CNN-based in-loop filter. Rec, Pred, BS, QP, and IPB stand for reconstruction samples, prediction samples, boundary strength, quantization parameter and prediction types respectively. Number of feature maps is 96 for all internal layers.
Without loss of generality, a candidate list containing three QP parameters is considered by default. Using more parameters may bring better performance at the cost of higher encoding complexity, and vice versa. At the encoder side, each picture or block can determine whether to apply the CNN-based in-loop filter or not. When the CNN-based filter is determined to be applied to a picture or a block, the QP parameter must be selected from a candidate list. Specifically, all blocks in the current picture are filtered using the three QP parameters in the encoder. Then five costs, i.e. _Cost_0_, ..., _Cost_4_, are calculated and compared against each other to find the best rate-distortion trade-off. In _Cost_0_, the CNN-based filter is prohibited for all blocks. In _Cost_i_, _i = 1, 2, 3_, the CNN-based filter with the \(i^{th}\) parameter is used for all blocks. In _Cost_4_, different blocks may prefer different parameters, and the information regarding whether to use the CNN-based filter, and if so, which parameter to use, is signaled for each block. At the decoder side, whether to use the CNN-based filter and which parameter to use for a block is based on the _Param_Id_ parsed from the bit-stream.
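The five-cost comparison can be summarized with the following toy Python sketch; `nn_filter`, the SSE distortion, and the constant bit costs are illustrative placeholders and do not reflect the actual RDO computation in the encoder.

```python
import numpy as np

def nn_filter(block, qp):
    """Placeholder 'CNN filter' whose strength varies with the input QP."""
    return block * (1.0 - qp / 255.0)

def pick_param(orig_blocks, rec_blocks, qp_candidates, block_bits=8, pic_bits=2):
    """Toy version of the Cost_0..Cost_4 comparison; returns the best Param_Id."""
    sse = lambda a, b: float(np.sum((a - b) ** 2))
    pairs = list(zip(orig_blocks, rec_blocks))
    cost = {0: sum(sse(o, r) for o, r in pairs) + pic_bits}        # filter off
    for i, qp in enumerate(qp_candidates, start=1):                # Cost_1..Cost_3
        cost[i] = sum(sse(o, nn_filter(r, qp)) for o, r in pairs) + pic_bits
    # Cost_4: each block independently picks off / param 1 / 2 / 3 (ids signaled).
    cost[4] = sum(min(sse(o, r),
                      *(sse(o, nn_filter(r, qp)) for qp in qp_candidates)) + block_bits
                  for o, r in pairs)
    return min(cost, key=cost.get)
```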
Denoting the sequence-level QP as q, the candidate list {Param_1, Param_2, Param_3} is set to {q, q-5, q-10} and {q, q-5, q+5} for low and high temporal layers, respectively. A stronger filtering strength is used for high temporal layers, because the coarser quantization used in these layers may yield large distortion, and since higher temporal layers are used less for prediction, over-filtering is less of a problem. A shared parameter is used for the two chroma components to lower the worst-case complexity at the decoder side. In addition, the number of parameter candidates can be specified at the encoder side. For the all-intra configuration, the parameter selection is disabled while the filter on/off control is still preserved, since there is no content propagation issue in this configuration.
To further improve the adaptation capability, the granularity (block size) of the on/off control and the parameter selection is made dependent on resolution and bitrate. For a higher resolution, the granularity is coarser as content tends to change more slowly, and vice versa. For a higher bitrate, the granularity is finer since more overhead bits can be afforded, and vice versa.
### _Residual Scaling_
As pointed out in Section III-B, varying the filtering strength according to picture content may alleviate the over-filtering issue. Residual scaling is another mechanism (besides parameter selection) to achieve the purpose of filtering strength adjustment, and can be formulated as,
\[\mathbf{R}^{\prime}_{nn}=\omega\cdot(\mathbf{R}_{nn}-\mathbf{R}_{db})+\mathbf{R}_{db} \tag{3}\]
where \(\mathbf{R}^{\prime}_{nn}\) is the scaled output, \(\mathbf{R}_{db}\) is the deblocking filtered samples, and \(\omega\) is the scaling factor derived based on the least squares method. (3) indicates that the residual between the deblocking filtered samples and the NN filtered samples can be scaled by a scaling factor and then added back to the deblocking filtered samples. For each color component, a scaling factor is signaled. It is worth noting that (3) can be written as,
\[\mathbf{R}^{\prime}_{nn}=\omega\cdot\mathbf{R}_{nn}+(1-\omega)\cdot\mathbf{R}_{db} \tag{4}\]
(4) implies a convex combination of NN filtering and deblocking filtering. It is asserted that using \(\mathbf{R}_{db}\) instead of \(\mathbf{R}_{no}\) in (4) benefits perceptual quality [51]. The reason is that the NN filter has the effect of removing deblocking artifacts, but if it is turned off in (4) (i.e., \(\omega=0\)) and \(\mathbf{R}_{db}\) were replaced by \(\mathbf{R}_{no}\), the result would be an output without any deblocking. In contrast, (4) guarantees that the output will be deblocked one way or the other, either through the NN filter or through the regular deblocking filter.
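Under the natural reading that \(\omega\) minimizes the squared error between the scaled output and the original samples, the least-squares factor has the closed form sketched below; this derivation is our interpretation of "derived based on the least squares method", not the normative encoder code.

```python
import numpy as np

def derive_scaling_factor(orig, r_nn, r_db):
    """Least-squares omega minimizing ||orig - (r_db + omega*(r_nn - r_db))||^2."""
    d = (r_nn - r_db).ravel().astype(np.float64)  # NN-minus-deblocking residual
    t = (orig - r_db).ravel().astype(np.float64)  # target residual
    return float(np.dot(t, d) / max(np.dot(d, d), 1e-12))

def apply_residual_scaling(r_nn, r_db, omega):
    return omega * (r_nn - r_db) + r_db           # Eq. (3), convex form of (4)
```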
### _Temporal Filtering_
In video coding, neighboring reconstructed pictures might have a higher quality than the current picture since quality fluctuation usually exists across compressed pictures. This has motivated work on multi-frame quality enhancement [52], which takes advantage of adjacent pictures with higher quality to enhance the current picture.
Since the hierarchical coding structure under the random-access configuration naturally leads to previously coded pictures of higher quality, the temporal information from the reference picture can be exploited for the in-loop filtering of the current picture. Specifically, NNVC includes an additional in-loop filter as shown in Fig. 6, namely a temporal filter, taking collocated blocks from the first picture in both reference picture lists to improve performance. Note that, to limit complexity, the two collocated blocks are directly concatenated and fed into the temporal filter without any explicit temporal alignment operations. In addition, this temporal filter is only activated for pictures in the three highest temporal layers, because in low temporal layers, the temporal correlation between collocated blocks and the current block is weak and thus limits the performance.
### _Encoder Optimization with NN Filter_
Rate-distortion optimization (RDO) of the partitioning tree structure plays a vital role in increasing coding efficiency, both for traditional codecs such as VVC as well as for NNVC. However, in NNVC, there exists a gap between the reconstruction samples used for distortion calculation during the RDO and the ultimate reconstruction samples, since the latter are eventually filtered using the NN model. To bridge this gap, a NN filter is inserted into the RDO process for partitioning mode selection.
Fig. 6: Temporal in-loop filter. Only the feature extraction part is illustrated; other parts remain the same as in Fig. 5. {Col_0, Col_1} refers to collocated samples from the first picture in both reference picture lists.
Specifically, a NN filter is applied on the reconstruction samples before comparing them with the original samples to calculate the distortion. The optimal partitioning mode is then selected based on the refined RD cost. To reduce complexity, several fast algorithms are introduced. First, instead of using the full NN in-loop filter, an aggressively simplified version of the NN filter is used. Second, the parameter selection is omitted. Third, coding units allowed to use this technique must have a size no larger than 64. Finally, the refined cost is used only if its difference from the original R-D cost lies in a predefined range.
### _Training Details_
An iteratively conducted two-stage method is adopted to train the NN-based in-loop filters as shown in Fig. 7. The iterative training explicitly takes the filtering effect on reference frames into account during training process, in order to ease the over-filtering issue described in Section III-B.
* In the first stage, NNVC with NN filtering disabled (equivalent to the VTM anchor) is used to compress training images and videos under all-intra and random-access configurations. The reconstructed images and videos together with other auxiliary information are collected and utilized for training intra and inter filters.
* In the second stage, NNVC equipped with the models from the previous training stage is used to compress the training videos in the random-access setting. That is to say, intra pictures and inter pictures will be processed by the intra filters and inter filters obtained in training stage 1, respectively. Then, the intra training data from stage 1 and inter training data from stage 2 are combined to train the unified intra and inter models (one model for both intra luma and inter luma, one model for both intra chroma and inter chroma).
Note that the temporal filter can be trained using a similar strategy. Training images and videos are from the DIV2K dataset [53] and the BVI-DVC dataset [54]. PyTorch [55] serves as the training platform. More details regarding training can be found in the NNVC training folder2.
Footnote 2: [https://vcgit.hbi.fraunhofer.de/jvet-ahg-mvc/VVCSoftware_VTM/-/tree/VTM-1.0_mvc/training](https://vcgit.hbi.fraunhofer.de/jvet-ahg-mvc/VVCSoftware_VTM/-/tree/VTM-1.0_mvc/training)
### _Inference Details_
SADL (see Section IV) is used for the inference of the NN-based in-loop filters. Both floating point-based and fixed point-based implementations are supported; however, a real codec would need to use fixed-point arithmetic to avoid drift. In the fixed-point implementation, both weights and feature maps are represented with int16 precision using a static quantization method. In total, there are three filter models, i.e., the luma filter, the chroma filter, and the temporal filter. Other details regarding the inference are provided in Table III.
Fig. 8 depicts how to harmonize the NN-based filter with the existing loop filters in VVC [56]. Deblocking and NN filtering are performed in parallel and then convexly combined via (4). SAO is disabled as no additional benefits are observed on top, while ALF and CCALF are placed after the CNN-based filtering to reduce overhead. As analyzed in Section III-D, the temporal NN filter is used for pictures at high temporal layers (\(Tid\geq 3\)) while the regular NN filter handles the others.
## IV Small Ad-hoc Deep-Learning Library
### _Overview_
Small Ad-hoc Deep-Learning library (SADL) is a small, header-only library for neural network inference available at [57]. SADL provides both floating-point-based and integer-based inference capabilities. The inference of all neural networks in NNVC is based on SADL. Table IV summarizes the framework characteristics.
### _Integerized Model_
In the video compression area, a fixed-point implementation is crucial to allow reproducibility of the decoding, independently of the platform or environment. The SADL framework provides both floating-point and fixed-point implementations of all layers.
To lower the complexity of the integer arithmetic of the quantized model, the quantization operations required for the computational layers are minimized and performed using only bit-shifting, without zero-point shifting, in contrast to existing methods in TensorFlow [58] or PyTorch.
Both weight and latent tensors use the internal integer representation, e.g., int16. For intermediate computations, an integer with twice the number of bits is used; for example, for the int16 format, int32 is used for computation. Compared to the float version, the operations are adapted as follows:
* BiasAdd: \(y=C\left(\left(x_{0}\gg\left(q_{0}-q_{1}\right)\right)+x_{1}\right),q=q_{1}\)
* Add: \(y=C\left(\left(x_{0}\gg\left(q_{0}-q\right)\right)+\left(x_{1}\gg\left(q_{1}-q \right)\right)\right),q=\min\left(q_{0},q_{1}\right)\)
* Mul/MatMul/Conv2D: \(y=C(\sum x_{0}x_{1}\gg\left(q_{1}+q_{i}\right)),q=q_{0}-q_{i}\)
* Concat: \(y=x_{0}\gg\left(q_{0}-q\right)\mid x_{1}\gg\left(q_{1}-q\right)\mid...,q=\min( q_{k})\)
* LeakyReLU: if \(x_{0}<0\), \(y=\left(\alpha x_{0}\right)\gg q_{\alpha}\), assuming \(\left|\alpha\right|<1\) (no overflow possible), \(q=q_{0}\). The quantizer \(q_{\alpha}\) of the slope \(\alpha\) of LeakyReLU always takes the maximum possible value to represent \(\alpha\) without overflow.
* Maximum: \(y=\max(x_{0},x_{1}\ll\left(q_{0}-q_{1}\right)),q=q_{0}\)
* For layers with only one input, the output takes the quantizer of the input.
Where:
* \(C(.)\) represents the clipping operation associated with the internal bitdepth of the latent. For example for int16 integer \(C\left(x\right)=\max\left(-2^{15}+1,\min\left(2^{15}-1,x\right)\right)\)
* \(x_{0}\), \(x_{1}\): inputs
* \(y\): output
* \(q_{0}\) and \(q_{1}\): shifts of the quantizers. The floating-point value associated with a quantized input \(x\) with quantizer \(q\) can be recovered via \(f\left(x,q\right)=x/\left(1\ll q\right)\)
* \(q_{i}\): internal shift for some layers.
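To illustrate the fixed-point rules above, here is a small NumPy sketch simulating int16 latents with int32 accumulation. It is a simplified model of the arithmetic, not SADL code; the function names are ours.

```python
import numpy as np

def clip16(x):
    """C(.): clipping to the int16 internal bitdepth."""
    return np.clip(x, -2**15 + 1, 2**15 - 1).astype(np.int16)

def add(x0, q0, x1, q1):
    """Integerized Add: align both inputs on q = min(q0, q1)."""
    q = min(q0, q1)
    y = (x0.astype(np.int32) >> (q0 - q)) + (x1.astype(np.int32) >> (q1 - q))
    return clip16(y), q

def matmul_mac(x0, q0, x1, q1, q_i):
    """Integerized Mul/MatMul/Conv2D accumulation with internal shift q_i."""
    acc = np.sum(x0.astype(np.int32) * x1.astype(np.int32)) >> (q1 + q_i)
    return clip16(acc), q0 - q_i

def dequantize(x, q):
    """f(x, q): recover the floating-point value of a quantized input."""
    return x / (1 << q)

y, q = add(np.array([1024], np.int16), 8, np.array([512], np.int16), 10)
print(dequantize(y, q))  # 4.0 + 0.5 = [4.5]
```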
Conversion from a trained floating-point model to an integerized model can be done either via static quantization, where an optimal quantizer is chosen for each layer and the floating-point weights are converted to fixed point, or via quantization-aware training. Fig. 9 shows an example of the adaptation of the convolution layer: quantization/clipping/dequantization stages are added for both the weights and the output of the layer.
### _Sparse Matrix Multiplication_
In order to lower the complexity of the dense layer, counted in MACs (Multiply-Accumulate operations), a sparse matrix representation with an associated sparse matrix/vector product is available. The sparse matrix uses a variant of the CSR (Compressed Sparse Row) representation [59], where runs of non-zero values are aligned on the required SIMD alignment for an efficient multiply-and-add implementation on CPU: all run-lengths are constrained to be multiples of 8 or 16; each run is then indexed in a vector and multiplied by the corresponding values of the input vector.
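The following toy Python sketch mimics this aligned run-length scheme; the `runs` data layout is our simplification of the actual CSR variant, and NumPy's `dot` stands in for the SIMD multiply-and-add.

```python
import numpy as np

def sparse_matvec(runs, x, n_rows, simd_width=8):
    """runs: list of (row, col_start, values); len(values) % simd_width == 0."""
    y = np.zeros(n_rows, dtype=np.float32)
    for row, col_start, values in runs:
        assert len(values) % simd_width == 0  # aligned run-lengths
        # One aligned multiply-and-add per run, mapping directly onto SIMD lanes.
        y[row] += np.dot(values, x[col_start:col_start + len(values)])
    return y

x = np.ones(32, dtype=np.float32)
runs = [(0, 0, np.full(8, 0.5, np.float32)), (2, 16, np.full(16, 1.0, np.float32))]
print(sparse_matvec(runs, x, n_rows=4))  # [ 4.  0. 16.  0.]
```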
For example, the NN-based intra prediction model of Section II-B using the sparse dense layer allows the complexity to be decreased as depicted in Table V.
### _Other Implementability Aspects_
In order to evaluate and compare solutions explored in JVET, complexity measurement of a model is also provided in the library. It allows extracting the real number of MACs and other operations during the inference of a particular model, independently of an underlying implementation.
Fig. 8: Embedding of CNN-based in-loop filter into codec.
Fig. 7: Iterative training of CNN-based in-loop filter with two stages.
Fig. 9: Quantization aware convolution layer using fixed point operations.
A requirement of JVET evaluations is also to compare practical implementations in a reproducible environment, so that explored solutions can be compared to a given anchor. For these reasons, a pure CPU (as opposed to GPU-based), single-threaded implementation is provided by the framework.
## V Experimental Results
This section verifies the performance of the proposed techniques in NNVC using NNVC-4.03, the reference software of NNVC. Techniques are tested under all-intra, random-access, and low-delay configurations using the QPs {22, 27, 32, 37, 42} suggested by the NNVC common test conditions (CTC) [60]. BD-rate [61] is adopted to measure the compression efficiency, where the quality metric is based on PSNR. The test sequences comprise classes A1, A2, B, C, D, E, and F [60].
Footnote 3: [https://vcgit.hhi.fraunhofer.de/jvet-abg-nmvcVVCSoftware_VTM/](https://vcgit.hhi.fraunhofer.de/jvet-abg-nmvcVVCSoftware_VTM/)
### _Overall Results_
Table VI gives the overall performance of the proposed techniques in NNVC-4.0 over VTM-11.0_nnvc. Following the JVET common test conditions [62], we exclude classes D and F when computing the overall average. As can be observed, NNVC-4.0 with the proposed techniques outperforms VTM-11.0_nnvc significantly, achieving on average {11.94%, 21.86%, 22.59%}, {9.18%, 19.76%, 20.92%}, and {10.63%, 21.56%, 23.02%} BD-rate reductions for {Y, Cb, Cr}, under random-access, low-delay, and all-intra configurations, respectively. The proposed NN models are tuned on natural content; therefore, gains are limited on Class F, which contains screen content sequences.
Fig. 10 shows R-D curves for sequences from different classes. A trend can be observed that NNVC-4.0 offers relatively higher coding gains at middle bit-rates. This phenomenon may be related to the distortion characteristics at different bit-rates: low bit-rates tend to yield larger distortion, making it more difficult to infer the lost details from the existing contexts, while high bit-rates usually mean a low distortion level, leaving limited space for further reduction.
Currently, the implementation of the NN-based models is not fully optimized and is CPU-based; thus the encoding/decoding time of NNVC-4.0 is much longer than that of the highly optimized VVC reference software. Table VII presents the computational time comparison. Encoding complexities are 2.1, 2.2, and 2.5 times for the random-access, low-delay, and all-intra cases, respectively. Regarding decoding complexities, they are 324.9, 307.4, and 196.5 times for the random-access, low-delay, and all-intra cases, respectively. Note that in real applications, the inference of NN-based models could be accelerated significantly with more efficient architectures such as GPUs (graphics processing units), TPUs (tensor processing units, a kind of application-specific integrated circuit), or full custom ASIC silicon. Besides running time, the total number of parameters and multiply-accumulates (MACs) are important complexity measurements considered in NNVC. Details regarding these measurements can be found in Table II and Table III.
### _Ablation Test_
Table VIII gives performances of NNVC-4.0 configured in different modes. BD-rate changes are shown for luma component in all-intra and random-access settings. NN tools enabled in each mode are explained below,
* \(M_{1}\), NN-based intra prediction.
* \(M_{2}\), basic NN-based in-loop filter, i.e. without temporal filtering (Section III-D) and encoder optimization (Section III-E).
* \(M_{3}\), NN-based intra prediction, basic NN-based in-loop filter.
* \(M_{4}\), NN-based intra prediction, basic NN-based in-loop filter, temporal filtering (Section III-D).
* \(M_{5}\), NN-based intra prediction, basic NN-based in-loop filter, temporal filtering, encoder optimization (Section III-E).
Compared with VTM-11.0_nnvc, NN-based intra prediction and the basic in-loop filter provide on average {1.81%, 3.61%} and {9.60%, 7.39%} BD-rate reductions for the luma component under the random-access and all-intra settings, respectively (as observed from columns \(M_{1}\) and \(M_{2}\)). By combining the two tools, the BD-rate reductions rise to {11.02%, 10.40%} for the random-access and all-intra settings, as shown in column \(M_{3}\). Comparison of columns \(M_{1}\), \(M_{2}\), and \(M_{3}\) suggests that the gains of NN-based intra prediction and in-loop filtering are almost additive, even though the tools are trained and optimized separately. Results in columns \(M_{4}\) and \(M_{5}\) reflect the additional improvements due to the temporal filtering and encoder optimization techniques, i.e. an approximate BD-rate reduction of 0.6% from the temporal filter and 0.3% from the encoder optimization in the random-access setting.
## VI Conclusion
The Joint Video Experts Team of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29 is working together on an exploration study to evaluate potential NNVC technology beyond the capabilities of VVC. The exploration activity has identified two promising NN-based coding tools as an enhancement of the existing intra prediction and in-loop filtering techniques in the VVC design. This paper introduced the technical features, encoding methods, and training methods of some of these tools. The implementation of these tools in the NNVC reference software is based on SADL. The effectiveness of the NNVC techniques has been verified by the experimental results on NNVC-4.0, i.e. {11.94%, 21.86%, 22.59%}, {9.18%, 19.76%, 20.92%}, and {10.63%, 21.56%, 23.02%} BD-rate reductions on average for {Y, Cb, Cr} compared with VVC under the random-access, low-delay, and all-intra settings, respectively. Future work on complexity reduction and other competitive NN tools is encouraged.
|
2309.05809 | Divergences in Color Perception between Deep Neural Networks and Humans | Deep neural networks (DNNs) are increasingly proposed as models of human
vision, bolstered by their impressive performance on image classification and
object recognition tasks. Yet, the extent to which DNNs capture fundamental
aspects of human vision such as color perception remains unclear. Here, we
develop novel experiments for evaluating the perceptual coherence of color
embeddings in DNNs, and we assess how well these algorithms predict human color
similarity judgments collected via an online survey. We find that
state-of-the-art DNN architectures $-$ including convolutional neural networks
and vision transformers $-$ provide color similarity judgments that strikingly
diverge from human color judgments of (i) images with controlled color
properties, (ii) images generated from online searches, and (iii) real-world
images from the canonical CIFAR-10 dataset. We compare DNN performance against
an interpretable and cognitively plausible model of color perception based on
wavelet decomposition, inspired by foundational theories in computational
neuroscience. While one deep learning model $-$ a convolutional DNN trained on
a style transfer task $-$ captures some aspects of human color perception, our
wavelet algorithm provides more coherent color embeddings that better predict
human color judgments compared to all DNNs we examine. These results hold when
altering the high-level visual task used to train similar DNN architectures
(e.g., image classification versus image segmentation), as well as when
examining the color embeddings of different layers in a given DNN architecture.
These findings break new ground in the effort to analyze the perceptual
representations of machine learning algorithms and to improve their ability to
serve as cognitively plausible models of human vision. Implications for machine
learning, human perception, and embodied cognition are discussed. | Ethan O. Nadler, Elise Darragh-Ford, Bhargav Srinivasa Desikan, Christian Conaway, Mark Chu, Tasker Hull, Douglas Guilbeault | 2023-09-11T20:26:40Z | http://arxiv.org/abs/2309.05809v1 | # Divergences in Color Perception between Deep Neural Networks and Humans
###### Abstract
Deep neural networks (DNNs) are increasingly proposed as models of human vision, bolstered by their impressive performance on image classification and object recognition tasks. Yet, the extent to which DNNs capture fundamental aspects of human vision such as color perception remains unclear. Here, we develop novel experiments for evaluating the perceptual coherence of color embeddings in DNNs, and we assess how well these algorithms predict human color similarity judgments collected via an online survey. We find that state-of-the-art DNN architectures -- including convolutional neural networks and vision transformers -- provide color similarity judgments that strikingly diverge from human color judgments of (_i_) images with controlled color properties, (_ii_) images generated from online searches, and (_iii_) real-world images from the canonical CIFAR-10 dataset. We compare DNN performance against an interpretable and cognitively plausible model of color perception based on wavelet decomposition, inspired by foundational theories in computational neuroscience. While one deep learning model -- a convolutional DNN trained on a style transfer task -- captures some aspects of human color perception, our wavelet algorithm provides more coherent color embeddings that better predict human color judgments compared to all DNNs we examine. These results hold when altering the high-level visual task used to train similar DNN architectures (e.g., image classification versus image segmentation), as well as when examining the color embeddings of different layers in a given DNN architecture. These findings break new ground in the effort to analyze the perceptual representations of machine learning algorithms and to improve their ability to serve as cognitively plausible models of human vision. Implications for machine learning, human perception, and embodied cognition are discussed.
## I Introduction
Over the last decade, deep neural networks (DNNs) have matched and even surpassed human performance on a range of visual tasks, including image classification and object recognition[1, 2]. Indeed, popular accounts maintain that DNNs readily learn abstractions from noisy and raw data that are similar to the abstractions leveraged in human cognition[3, 4]. A number of studies show that DNNs are effective at predicting human brain activity by modeling fMRI data associated with human vision and language processing[5, 6, 7, 8]. These achievements have inspired arguments that DNNs offer cognitively plausible models of human visual perception and human cognition more broadly[5, 6, 7, 9, 10]. However, a growing body of work shows that DNNs and humans can arrive at similar image classifications (such as correctly labeling an image as containing a "dog" or a "tree") while nevertheless relying on strikingly different perceptual processes[11]. For example, a recent audit study shows that, as DNNs become more accurate in classifying
images, the image features they leverage increasingly deviate from human patterns of visual attention[12]. Relatedly, "adversarial attacks" show that subtle perturbations in image input, such as randomly shuffling a small fraction of pixels, can lead to drastically different DNN classifications despite having no qualitative impact on images from a human perspective[13, 14, 15]. It remains an active debate whether DNNs perceive and represent visual information in a manner that resembles human vision or, more ambitiously, human cognition[16, 17, 18, 11].
A key limitation of prior research is that it primarily evaluates the cognitive plausibility of DNNs by examining their ability to learn human categories when solving complex tasks, such as image classification and object recognition[1, 3]; yet, these tasks are not designed to compare perceptual processes between DNNs and humans. Moreover, DNNs frequently develop complex and high-dimensional representations to solve such tasks. Because it is challenging to decompose and interpret these representations, it remains difficult to isolate the perceptual processes and basic image features that DNNs leverage[16, 17, 19]. A persistent obstacle in this domain is the challenge of creating task paradigms that allow for a clear separation of perceptual information and high-dimensional abstractions in DNNs. One recent proposal is to compare DNNs and humans in how they represent simple, validated stimuli from cognitive psychology designed to elicit and observe basic perceptual processes[20, 19]. For example, recent work evaluates how DNNs represent canonical gestalt stimuli, such as schematic images exemplifying foreground-background relations, as well as visual patterns relating to object closure[11]. It is shown that DNNs often struggle to represent gestalt patterns[11], and when they do[21, 22, 23], these effects only occur in the final layers of the DNN, whereas gestalt effects can be detected in the early stages of human visual processing[24, 25]. However, despite these advances, this recent work continues to examine stimuli involving a mixture of basic perceptual information (e.g., texture and color) along with more complex and abstract information (e.g., spatial and geometrical properties of images on multiple scales), which limits the ability to isolate how DNNs represent basic perceptual information as part of their more abstract representations.
We address this gap by comparing DNNs and humans in terms of how they represent a foundational aspect of human visual perception--namely, color--as presented in simple, controlled color stimuli and in the presence of more complex textural and spatial patterns. Color is particularly relevant for these explorations for two key reasons. First, color has received strikingly little attention in recent efforts to probe the "cognitive psychology" of DNNs[11, 19]. Only a few recent studies examine DNN color representations -- focusing on DNN embeddings of complex, real-world images -- and these studies do not systematically compare against human perceptions of color[26, 27, 28]. Second, color plays a far-reaching role throughout human cognition. Color vision is ubiquitous in primates[29, 30] and is known to evoke a range of behavioral, emotional, and linguistic responses in humans[31, 32, 33, 34]. Color is frequently evoked in research on embodied cognition as a powerful example of how humans harness sensory information to support a wide range of cognitive functions, such as abstraction[35, 36] and metaphorical reasoning[37, 38, 39]. The human visual system is highly tuned for color perception, and, crucially, preserves color information throughout the formation of abstractions related to visual input. It is difficult to imagine a computer vision algorithm that faithfully models human vision while failing to perceive color in a cognitively plausible manner and failing to preserve color-related information throughout the visual learning process.
Yet, there are several reasons to suspect that extant DNNs are not designed to represent color in a way that resembles human visual processing. Extant DNNs are primarily trained on images represented in RGB color space, which has been shown to fail at capturing human color perception[40, 41]. Furthermore, several studies argue that DNNs are biased toward perceiving images based on the spatial arrangement of pixel luminosities--i.e., image _texture_--rather than in terms of color[42, 43]. Recent attempts to characterize the representations of DNNs suggest that they may drop color information as early as their second layer, which becomes skewed toward the detection of gray-scaled geometric patterns[43, 44]. DNN architectures may be prone to dropping basic perceptual information such as color in the interest of developing high-dimensional abstractions, in stark contrast to human visual cognition, which preserves color information and intermingles it with a wide array of representations, both emotional and linguistic[31, 34, 37, 38]. Consistent with this theory, recent work finds that increasing the depth and dimensionality of DNNs, which correlates with their capacity for abstraction, harms their ability to represent color information[27]. These
findings suggest that state-of-the-art DNNs for image classification may be poorly designed to capture human color perception given their bias toward abstracting away from raw sensory data and basic perceptual information. However, limitations in the ability to identify and examine the perceptual representations of DNNs have prevented prior research from demonstrating this empirically.
Here, we develop a range of novel visual experiments for evaluating the coherence of color embeddings in pre-trained DNNs, and we assess how well these algorithms' embeddings predict human color similarity judgments collected via an online survey. We provide evidence that state-of-the-art DNN architectures -- including convolutional neural networks[2] and vision transformers[45, 46] -- do not represent color in a way that resembles or effectively predicts human color judgments of (_i_) images with controlled color properties, (_ii_) images generated from online Google Image searches, and (_iii_) real-world images from the canonical CIFAR-10 dataset[47]. These results hold when examining DNNs trained on different high-level visual tasks, including image classification and image segmentation[48] (Figure S14). All of the DNNs we examine exhibit similar shortcomings in their ability to model human color perception, with important implications for the plausibility of extant DNNs as models of human visual cognition. At the same time, we find that a convolutional DNN trained on a style transfer task (hereafter referred to as a "style transfer DNN") performs notably better than all other DNNs we test, suggesting that different training goals can influence the relevance of color in DNN embeddings[49, 50]. Moreover, we replicate our analyses by comparing human color judgments against the color representations formed across all layers of a convolutional DNN trained on image classification, given recent work suggesting that earlier layers of DNNs may better represent color[27, 51]. We find that, while none of the network's layers effectively predict human color perception, deeper layers actually perform better, suggesting that hierarchical learning can contribute to DNNs' abilities to represent color (Figure S13). These results shed new light on how DNN architectures and training objectives can improve their ability to represent color in a cognitively plausible manner, as discussed in detail below.
To further ground the interpretation of our DNN results, we develop an alternative algorithmic approach to modeling human color perception based on cognitively plausible wavelet transforms. Wavelet transforms comprise a simple learning architecture that is aptly positioned to efficiently detect and preserve frequency information, including that contained in color channels. Despite being developed in a scientific context separate from the study of human perception, foundational work in computational neuroscience shows that wavelet transforms capture properties of human vision, including the perception of color[52, 53, 54] and texture[55, 56, 25]. Wavelet transforms are well-positioned to serve as an interpretable benchmark algorithm for learning color representations against which the more complex, high-dimensional representations of DNNs can be compared. Recent studies have even found that processing images with wavelet transforms prior to DNN analysis improves classification accuracy[51, 52]. We expand on these results by comparing DNN and wavelet embeddings against human color perception in an interpretable setting, thereby clarifying the cognitively relevant color features captured by wavelets and often missed by DNNs. In addition, a key limitation of many related studies is that they train DNNs in RGB color space, which is known to inaccurately model human color perception. For this reason, we implement our wavelet algorithm in both RGB color space and an approximately perceptually uniform color space that re-weights RGB channels so that Euclidean distances in color space match human perceptible differences in color[41], providing various benchmarks to compare against DNN performance.
In what follows, we demonstrate that our wavelet algorithm is significantly better than state-of-the-art image classification DNNs at predicting human color judgments across all image datasets examined, even when implemented in the same RGB color space in which these state-of-the-art DNNs are trained. This includes comparing against the style transfer DNN trained on artworks; although this network outperforms all other DNNs we test in terms of color representation, it still significantly underperforms relative to our wavelet algorithm under various conditions, regardless of whether our wavelet algorithm operates in approximately uniform or RGB color space. We further show that the color relationships detected by our algorithm are considerably more interpretable, allowing for more direct comparisons to human color vision and revealing areas of color space in which DNNs struggle to differentiate color in a perceptually coherent way. As such, these findings break new ground in our ability to algorithmically
emulate human color perception, and to construct cognitively plausible models of human cognition capable of enriching ongoing theoretical research in cognitive science and artificial intelligence. This paper is organized as follows. In **Study 1: Evaluating Algorithms' Color Embeddings**, we study how several widely-used DNNs represent color. In particular, we scrutinize how these algorithms' embeddings of images from three datasets--ranging from uniform color squares to real-world images used in classification tasks--leverage color. We benchmark these analyses by comparing algorithms' color similarity predictions to images' color similarity, calculated in an approximately perceptually uniform color space, and to our wavelet algorithm described above. In **Study 2: Comparing Color Embeddings Against Human Judgments**, we test how accurately these DNNs and our wavelet algorithm predict color similarity relative to human judgments collected via an online survey. In the **Discussion**, we summarize our results and highlight areas for future work relating to machine learning, human perception, and embodied cognition.
## II. Study 1: Evaluating Algorithms' Color Embeddings
### Methods
We begin by briefly describing our image datasets, computer vision algorithms, and methods for analyzing image embedding; additional details are provided in the **Supplementary Information**.
Figure 1 provides an overview of our methods: specifically, we feed each image dataset described below through three pre-trained DNNs; we then compare how these DNNs represent color and texture by clustering their embeddings, and analyzing the color distributions of images that each algorithm embeds similarly in an approximately perceptually uniform color space. We benchmark these DNN color embeddings against those obtained using our new, perceptually grounded wavelet algorithm. Example images from all of our datasets are shown in the top half of Figure 1; examples of DNN architectures and wavelet filters are shown in the bottom half of Figure 1.
**Image Datasets.** We use the following image datasets in our analyses:
* _Block and Stripe Images_**:** Sets of 1,000 (300 x 300)-pixel images with controlled color properties that are composed of one (block) or two (stripe) randomly-selected colors; for stripe images, colors are arranged in an alternating vertical stripe pattern. Note that the combination of two colors provides pixel-to-pixel variation, and thus a rudimentary form of image texture. Stimuli similar to our stripe images, often referred to as "visual gratings," are frequently used to investigate the core mechanisms of human visual processing[57]. All block and stripe images are presented with a colored border; our findings are robust to varying the color of the border surrounding these images (Figure S20).
* _Colorgrams_**:** A set of 1,000 (300 x 300)-pixel images, each of which is generated by averaging the top 100 Google Image results for a given search term. We select colorgrams from Desikan et al. (2020)[37] corresponding to the most frequently used words in English. Colorgrams visually represent concepts while heavily featuring color; in particular, previous studies show that colorgrams encode semantically meaningful relationships that effectively differentiate concrete and abstract concepts, as well as linguistic metaphors, via color[37, 38].
* _CIFAR-10_**:** A set of 10,000 images from the standard computer vision dataset CIFAR-10. In particular, we collect 10,000 images that are commonly used to train image classification algorithms[47], consisting of 10 approximately equally-represented image classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck). For consistency with the other image datasets, we resize CIFAR-10 images from their original resolution of (32 x 32) pixels to (300 x 300) pixels each; in supplementary analyses, we show that our results are unchanged when using a higher-resolution version of CIFAR-10 (Figure S18).
**Computer Vision Algorithms.** We feed each set of images through four different computer vision approaches: (_i_) our new algorithm based on the discrete wavelet transform, (_ii_) a standard convolutional DNN trained on an image classification task[1], (_iii_) a "style transfer DNN", i.e., a convolutional DNN trained to superimpose textural motifs from images in a training class to previously unseen images[58], and (_iv_) a "vision transformer" DNN architecture trained to classify images while remaining sensitive, via an "attention" mechanism, to small-scale image features[45]. We now describe each algorithm in detail:
* _Perceptually Uniform Wavelet Algorithm_**:** We implement a second order wavelet transform using the Morlet wavelet family, which (like the Gabor wavelet) is known to capture aspects of human visual processing[52, 53, 54, 25]. The resulting 48-dimensional wavelet embeddings capture textural properties of images on different spatial scales; for example, the "wavelet algorithm" box in Figure 1 illustrates several wavelet filters. Our wavelet implementation is released as part of our publicly-available code (see "Data Availability" statement). We test our wavelet algorithm in both RGB color space and an approximately perceptually uniform color space, \(J_{z}A_{z}B_{z}\), which emulates human color perception[41]. This color space is defined such that Euclidean distances between colors linearly map onto differences in human color vision, unlike standard color spaces like RGB. Crucially, in supplementary analyses, we show that the fidelity of our wavelet algorithm's color perception does not significantly degrade when operating in RGB color space. Thus, its success is not predetermined by our use of \(J_{z}A_{z}B_{z}\) color space. A minimal sketch of this wavelet embedding appears after this list.
* _Convolutional DNN_**:** We use the ResNet model, which is trained to classify images from the ImageNet database[59]; this architecture is widely used for computer vision tasks. In our main analysis, we extract weights from the penultimate layer of the network, yielding a 512-dimensional embedding for each image; we also test different layers' embeddings in supplementary analyses. Note that the convolutional DNN we employ is _not_ trained to classify images that represent abstract concepts, which often lack clear concrete physical referents. Abstract concepts are not represented in traditional image classification datasets and training objectives, and often exhibit more coherent color properties than images of concrete objects[38]. Nonetheless, image classification algorithms trained on ImageNet and similar datasets are widely used and have been claimed to capture aspects of human visual processing[3, 7], particularly of concrete objects, making this convolutional DNN a useful test case for our study.
* _Style Transfer DNN_**:** We test a "style transfer" convolutional DNN that has been trained to represent the artistic style of images and to superimpose this style on previously unseen images. We use the style transfer algorithm described in Ghiasi et al. (2017)[58] by extracting 4096-dimensional image embeddings from the VGG19 network's penultimate layer. Style transfer algorithms yield images of high perceptual quality, for example using paintings from well-known artists[40, 50]. As a result, the style transfer DNN provides a useful point of comparison for our study because it is trained to capture textural and color-based information in images that may not be leveraged for traditional image classification tasks. We will show that the style transfer DNN captures some aspects of human color perception better than all other DNNs we examine.
* _Vision Transformer DNN_**:** Finally, we test a recent vision transformer DNN architecture trained on image classification. Inspired by advances in NLP that use attention mechanisms for language modeling[60], the vision transformer was conceived as a way to substitute convolutions in CNNs with attention heads to capture local associations in images[45]. Vision transformers require more data and parameter tuning than convolutional DNNs, but often perform as well or better on image classification and object recognition tasks[15, 61]. Furthermore, state-of-the-art image classification processing frameworks often combine transformer and convolutional layers[62, 63]. We extract 196-dimensional embeddings from the final layer of the algorithm from Dosovitskiy et al. (2021)[45]. Although vision transformers require more data than convolutional DNNs to train from scratch, a major appeal of such algorithms is their capacity for pre-training, justifying our use of a pre-trained model.
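As referenced in the wavelet bullet above, here is a minimal sketch of a 48-dimensional second-order scattering embedding using the Kymatio library (credited in Figure 1). The choices of \(J=2\) scales and \(L=3\) orientations, which give \(1+JL+L^{2}J(J-1)/2=16\) Morlet coefficients per channel, the spatial averaging, and the `rgb_to_jzazbz` stub are our assumptions; the authors' actual implementation is in their public code.

```python
import numpy as np
from kymatio.numpy import Scattering2D

def rgb_to_jzazbz(image_rgb):
    # Placeholder for an RGB -> JzAzBz conversion (e.g., via a color-science
    # library); identity here only so the sketch runs end to end.
    return image_rgb

# Second-order Morlet scattering: J=2 scales, L=3 orientations -> 16 coefficients
# per channel, i.e., 3 x 16 = 48 dimensions for a three-channel image.
scattering = Scattering2D(J=2, shape=(300, 300), L=3)

def wavelet_embedding(image_rgb):
    """image_rgb: (300, 300, 3) float array in [0, 1]."""
    channels = rgb_to_jzazbz(image_rgb)
    coeffs = scattering(np.moveaxis(channels, -1, 0).astype(np.float64))  # (3, 16, 75, 75)
    return coeffs.mean(axis=(-2, -1)).ravel()   # spatial average -> 48-dim embedding

print(wavelet_embedding(np.random.rand(300, 300, 3)).shape)  # (48,)
```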
Figure 1: Overview of our methods for evaluating computer vision algorithms' color embeddings. DNN architectures are reproduced from Ramzan et al. (2019; convolutional DNN architecture trained on image classification)[64], Simonyan & Zisserman (2015; VGG convolutional DNN architecture trained on a style transfer task)[65], Dosovitskiy et al. (2021; vision transformer DNN architecture
trained on image classification)[45]. Wavelet illustrations are reproduced from Kymatio (Andreux et al. 2018)[66].
**Analysis Techniques.** We study each algorithm's representations of the images in the datasets described above by clustering images based on their embeddings. We then analyze the color distributions of the resulting image clusters in perceptually uniform color space as described below; additional details are provided in the **Supplementary Information**.
* _Clustering Techniques_: For each image dataset, we aggregate the embeddings returned by each computer vision algorithm. For the wavelet algorithm, we use a vector of the 16 coefficients from each perceptually uniform color channel as described above, yielding a 48-dimensional representation. For the convolutional, style transfer, and vision transformer DNNs, we use vectors of the weights from the penultimate layer of each network, yielding 512, 4096, and 196-dimensional representations, respectively. We group images according to each algorithm's embeddings using \(k\)-means clustering, with a dimensionality reduction factor that yields approximately ten clusters for each image dataset; this choice yields a sufficient number of images per cluster to evaluate within-cluster color distribution statistics and a sufficient number of clusters to evaluate overall differences among the algorithms' embeddings. All results are robust to alternative techniques for implementing this \(k\)-means clustering algorithm, including the \(k\)-means\(++\) initialization algorithm (used in our main results, below) and a random initialization of \(k\)-means clusters (Figure S15).
* _Color Coherence Measure_: We measure the color coherence of the image groups returned by \(k\)-means clustering by measuring the distribution of \(J_{z}A_{z}B_{z}\)[41] values, concatenated over pixel coordinates, for each image. We quantify the similarity between the color distributions \(c_{i}\) and \(c_{j}\) of pairs of images using the Jensen-Shannon divergence, which compares each distribution to the mixture \(c_{ij}=(c_{i}+c_{j})/2\); this metric is related to the mutual information shared by two color distributions and represents a distance measure between distributions in perceptually uniform color space. In the analyses below, we calculate the color similarity between all image pairs within each cluster; we also compute the mean color similarity within each cluster to study how image properties vary among clusters. We emphasize that our color coherence metric is directly based on distances in \(J_{z}A_{z}B_{z}\) color space, which has been extensively calibrated to match human color perception[41]. Thus, our measure captures perceptual differences in color similarity and is not arbitrary. A minimal code sketch of this measure follows the list.
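The sketch below bins each image's pixels in \(J_{z}A_{z}B_{z}\) space over fixed, shared ranges and compares histograms via SciPy's Jensen-Shannon distance (the square root of the divergence, which internally uses the mixture \(c_{ij}\)). The bin count and the assumed axis ranges are illustrative choices, not the values used in the paper.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

RANGES = ((0.0, 0.2), (-0.3, 0.3), (-0.3, 0.3))   # assumed Jz, Az, Bz extents

def color_distribution(image_jzazbz, bins=8):
    """Normalized 3-D histogram of an image's pixels in JzAzBz space."""
    pixels = image_jzazbz.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=bins, range=RANGES)
    return (hist / hist.sum()).ravel()

def color_similarity(img_i, img_j):
    c_i, c_j = color_distribution(img_i), color_distribution(img_j)
    return 1.0 - jensenshannon(c_i, c_j, base=2)  # 1 = identical distributions
```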
Results
**Visual Properties of Image Clusters.** Our wavelet algorithm yields clusters that contain images with significantly more similar color distributions than state-of-the-art DNNs. Figure 2 illustrates this by comparing images from clusters returned by our perceptually uniform wavelet algorithm and the three DNNs we test for our stripe, colorgram, and CIFAR-10 datasets. The wavelet algorithm groups images with visually similar color distributions, while the DNN image classification algorithms often group images with noticeably different color properties. Similar trends hold for all image clusters and are not specific to the examples shown in Figure 2; we provide all image clusters in an online repository. We note that CIFAR-10 images often exhibit less visually coherent color distributions than the other datasets, as expected for images mainly composed of real-world objects and scenes.
**Clustering in Perceptually Uniform Color Space.** We now quantify the perceptually uniform color similarity of the image clusters returned by each algorithm. Figure 3 shows the convex hull of each stripe image cluster returned by our perceptually uniform wavelet and convolutional, style transfer, and vision transformer DNN algorithms, where each image is represented by its average coordinate in \(J_{z}A_{z}B_{z}\) color space. It is visually apparent that the DNNs' clusters overlap in color space while our wavelet algorithm separates images with distinct color properties. The style transfer DNN's color space overlap is particularly noticeable, which may be caused by the trivial textural properties of the stripe image dataset.
Figure 2: Examples of images clustered by our wavelet algorithm that operates in an approximately perceptually uniform color space (top left), by widely-used convolutional DNNs trained on image classification (top right) and style transfer (bottom left), and by a vision transformer DNN architecture trained on image classification (bottom right). For each algorithm, the top, middle, and bottom rows respectively show five images within a single cluster of similar embeddings, for our stripe, colorgram, and CIFAR-10 image datasets, respectively. Our wavelet algorithm groups images that exhibit noticeably more similar color distributions than those returned by the DNNs, on image datasets with both idealized and real-world color properties. All algorithms’ image clusters are available in an online repository.
To quantify these color clustering overlap trends, we define the color coherence fraction, \(f\), as the number of images that lie within the convex hull of exactly one color cluster divided by the total number of images. Thus, \(f\!=1\) corresponds to a maximally color-coherent grouping in which each cluster occupies a distinct region of color space, and lower values of \(f\) correspond to groupings with less similar color distributions within each cluster. Note that our color clustering measure is related to the Jaccard index.
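For readers who wish to reproduce this statistic, the sketch below computes \(f\) from each image's mean \(J_{z}A_{z}B_{z}\) coordinate (as plotted in Figure 3); the in-hull test via Delaunay triangulation is one reasonable implementation choice, not necessarily the exact construction behind our reported numbers.

```python
import numpy as np
from scipy.spatial import Delaunay, QhullError

def coherence_fraction(coords: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of images lying inside the convex hull of exactly one cluster.

    `coords` holds each image's mean (Jz, Az, Bz) coordinate, shape (n, 3);
    `labels` holds the cluster label of each image. Shuffling `labels`
    before calling gives one realization of the random baseline f_rand.
    """
    hulls = []
    for lab in np.unique(labels):
        pts = coords[labels == lab]
        try:
            hulls.append(Delaunay(pts))  # triangulation doubles as an in-hull test
        except QhullError:
            hulls.append(None)  # degenerate cluster (too few or coplanar points)
    inside_counts = np.zeros(len(coords), dtype=int)
    for hull in hulls:
        if hull is not None:
            inside_counts += hull.find_simplex(coords) >= 0
    return float(np.mean(inside_counts == 1))
```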
Our wavelet, convolutional DNN, style transfer, and vision transformer algorithms yield \(f_{wavelet}\!=0.81\), \(0.92\), \(0.10\), \(f_{CNN}\!=0.22\), \(0.10\), \(0.04\), \(f_{style}\!=0.11\), \(0.48\), \(0.19\), and \(f_{transformer}\!=0.41\), \(0.09\), \(0.01\) on our stripe, colorgram, and CIFAR-10 datasets, respectively. Thus, for image datasets with idealized color properties, our wavelet algorithm groups images in distinct regions of perceptually uniform color space much more effectively than any of the DNNs we study. We quantify the significance of this result by generating 100 realizations of random image clustering, which return \(f_{rand}\!=0.11\pm 0.005\), \(0.06\pm 0.004\), \(0.05\pm 0.004\) for the stripe, colorgram, and CIFAR-10 datasets respectively. Both the typical values of and the variability among these random clustering fractions are very small compared to \(f_{wavelet}\), indicating that our wavelet's color clustering signal is highly significant. Note that, although the style transfer algorithm yields the most color-coherent clustering for CIFAR-10, its performance on the stripe and colorgram datasets is statistically consistent with a random clustering algorithm that is unable to detect color.
To quantitatively compare the results of our wavelet algorithm that operates in \(J_{z}A_{z}B_{z}\) color space with a version of this algorithm that operates on images represented in RGB color space, we recompute color coherence fractions in the RGB case for all image datasets, finding \(f_{wavelet,RGB}\!=0.76\), \(0.84\), \(0.10\) on our stripe, colorgram, and CIFAR-10 datasets, respectively. These results are comparable to (though slightly less color-coherent than) those returned by our fiducial wavelet algorithm, indicating that its success is not mainly driven by the fact that it represents images in an approximately perceptually uniform color space. In supplementary analyses, we provide direct visual comparisons of the image clusters returned by each version of the algorithm, confirming this finding (Figure S12).
**Color Coherence of Image Clusters.** Next, we examine the distribution of color similarity between image pairs in each cluster for each algorithm and dataset. Figure 4 shows an example of this distribution for our colorgram dataset; in particular, the filled blue histogram shows the perceptually uniform wavelet result, and the unfilled black, green, and cyan histograms show the convolutional, style transfer, and vision transformer DNN results. Our wavelet algorithm yields significantly higher pairwise color similarity, indicating that it embeds images with similar perceptually uniform color distributions more closely than the other algorithms we consider.
The unfilled blue histogram in Figure 4 shows our wavelet algorithm run in RGB color space, which still performs extremely well, indicating that our fiducial algorithm's color perception success is not simply a result of the input color space, but is instead mainly due to its architecture and mechanics. Supplementary analyses replicate these results for the stripe and CIFAR-10 datasets (Figure S16). Another indication that the mechanics of our wavelet algorithm -- rather than the color space it operates in -- underlie its success is that the wavelet algorithm still performs strikingly well when analyzing grayscale images (see Figures 4 and S16). While operating in grayscale significantly degrades the performance of the wavelet algorithm, as expected, Figure 4 shows that this grayscale version of our algorithm still manages to outperform the vision transformer DNN (with access to color) in terms of color clustering for the colorgram dataset, and Figure S16 shows that the grayscale wavelet still outperforms the style transfer DNN (with access to color) when clustering images by color in the stripe dataset. These findings suggest that the wavelet algorithm is able to capture information about pixel luminosity in a way that correlates with images' color distributions, whereas some DNNs with access to both pixel luminosity and color information attend to image features that do not correlate with images' overall color distributions.
We quantify the significance of color clustering results by measuring the pairwise color similarity distribution for 100 realizations of random clustering assignments; the filled vertical band in Figure 4
Figure 3: Color properties of images clustered by our wavelet algorithm (top left), by convolutional DNNs trained on image classification (top right) and style transfer (bottom left), and by a vision transformer DNN trained on image classification (bottom right). Each point shows the perceptually uniform color of a stripe image. Vertices enclose images clustered by each algorithm and are colored by clusters’ mean \(J_{z}A_{z}B_{z}\) coordinates. Our wavelet algorithm clusters images in distinct regions of perceptually uniform color space, while the DNNs mix color properties. Our wavelet algorithm sorts 81% of the stripe images into a unique cluster in color space, compared to 22%, 11%, and 41% for the convolutional, style transfer, and vision transformer DNNs.
indicates the resulting 95% confidence interval for the mean of each realization. Strikingly, the style transfer and convolutional DNNs yield mean color similarity values that are statistically consistent with random clustering assignments that are unable to detect color by design. Furthermore, the difference between the mean of the wavelet and convolutional DNN distributions is highly significant relative to this expected spread (\(p<0.01\), Student T-test, Two-tailed). This finding also holds for CIFAR-10 images (_SOM_). As shown in Figure 4, the mean pairwise color similarity of the convolutional, style transfer, and vision transformer DNNs is consistent with our grayscaled wavelet algorithm at a level well within the statistical variation of random clustering assignments. Thus, the DNNs we test yield image clusters with color properties that are consistent with a random clustering algorithm that cannot detect color.
Note that our wavelet algorithm's success is not solely due to its treatment of color. In particular, when pixels within images are spatially randomized, our wavelet algorithm's color clustering performance significantly degrades relative to our fiducial results, implying that the spatial and color information captured by the wavelet transforms are correlated (see the "shuffled" results in Figure 4, magenta lines).
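The spatial-randomization control can be implemented in a few lines of NumPy; the two helpers below correspond to the "shuffled" variants in Figure 4 (any-pixel and within-column shuffling), though the exact shuffling code we used may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffle_pixels(img: np.ndarray) -> np.ndarray:
    """Randomize pixel positions across the whole image (each pixel keeps
    its own color tuple). `img` has shape (H, W, 3)."""
    flat = img.reshape(-1, img.shape[-1])
    return rng.permutation(flat).reshape(img.shape)

def shuffle_within_columns(img: np.ndarray) -> np.ndarray:
    """Randomize pixel positions independently within each column."""
    out = img.copy()
    for x in range(img.shape[1]):
        out[:, x] = rng.permutation(out[:, x])
    return out
```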
Figure 4: **A.** Comparison of the color similarity of images clustered by our wavelet algorithm and widely-used DNNs for our colorgram dataset. Color similarity distributions for all image pairs within clusters returned by our wavelet algorithm operating in perceptually uniform (blue), RGB (unfilled blue), and grayscaled (gray) color spaces, and by convolutional (black), style transfer (green), and vision transformer (cyan) DNNs. The dashed vertical lines indicate the mean of each distribution, and the filled vertical band shows the 95% confidence interval of the color similarity mean for a random clustering algorithm that does not detect color. Our perceptually uniform wavelet algorithm returns significantly more color-coherent clusters than any DNN we analyze. Furthermore, the DNNs group images with color similarity statistics that are consistent with those from a random clustering algorithm that is unable to detect color. Vertical magenta lines show the mean color similarity for the same wavelet algorithm operating on images with spatially randomized pixels, either shuffled with any other pixel in each image (dashed) or within each column of each image (dot-dashed). **B.** The fraction of images that occupy similar regions of perceptually uniform color space and that are embedded similarly by our perceptually uniform wavelet algorithm (blue), the same algorithm operated on images in RGB color space (unfilled blue), a spatially randomized version of this algorithm (magenta), by convolutional DNNs trained on image classification (black) and style transfer (green), and by a vision transformer DNN trained on image classification (cyan).
Specifically, we compare the perceptually uniform color similarity and embedding similarity for all image pairs in our block dataset, where color similarity is defined in Equation 1 and embedding similarity is given by the cosine similarity of image embeddings for each algorithm. We expect algorithms with more accurate color perception to exhibit a stronger correlation between the two similarity metrics. Figure 5 shows the results of this test for our perceptually uniform wavelet and DNN algorithms, confirming this expectation: the Spearman rank correlation coefficient between color and embedding similarity is 0.95 for our wavelet algorithm, versus only 0.5 for the convolutional DNN. Moreover, the convolutional DNN displays much more scatter in embedding similarity at fixed color similarity than our wavelet algorithm.
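A minimal sketch of this comparison is given below, assuming embeddings are stored row-wise and pairwise color similarities (Equation 1) are precomputed; the function and variable names are illustrative.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def embedding_color_correlation(embeddings: np.ndarray, color_sims: dict):
    """Spearman correlation between embedding similarity (cosine) and
    precomputed color similarity C(c_i || c_j) over all image pairs.

    `color_sims` maps an index pair (i, j), i < j, to its color similarity.
    """
    emb, col = [], []
    for i, j in combinations(range(len(embeddings)), 2):
        emb.append(cosine_similarity(embeddings[i], embeddings[j]))
        col.append(color_sims[(i, j)])
    return spearmanr(emb, col)  # returns (rho, p-value)
```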
These qualitative results also hold for the style transfer and vision transformer DNNs. In particular, the style transfer DNN embeds block images similarly to the convolutional DNN, with somewhat less scatter (Spearman \(\rho=0.75\)), and the vision transformer DNN differentiates block pairs of different colors most weakly (Spearman \(\rho=0.42\)). For both algorithms, the overall shape and scatter of the embedding versus color similarity relation is similar to the convolutional DNN. The vision transformer DNN's relatively poor performance on this perceptual task is interesting, since it performed best, among the DNNs, at clustering stripe images by color in a perceptually coherent manner (Figure 4B). Thus, the vision transformer may require non-trivial texture--in the form of the pixel-to-pixel variation provided by our stripe images but not by our block images--to generate meaningful embeddings.
This test allows us to identify color relationships that contribute to a given algorithm's lack of perceptual color representation. For example, the top-left quadrant of both panels in Figure 5 illustrates that the convolutional DNN embeds red, blue, and green images too similarly compared to their distance in perceptually uniform color space, which is surprising given that this algorithm is trained in RGB color space. Meanwhile, the bottom-right quadrant suggests that green and yellow hues are embedded less similarly than their distance in perceptually uniform color space warrants. Our wavelet algorithm tends to embed image pairs slightly more similarly than their distance in color space warrants; however, this effect is not strongly dependent on the colors of images in each pair, implying that a constant offset in embedding similarity may further improve our wavelet algorithm's color embeddings.
Figure 5: Embedding similarity versus color similarity for a random sample of image pairs from our color block dataset. Embedding similarities are shown for our perceptually uniform wavelet algorithm (top left), by widely-used convolutional DNNs trained on image classification (top right) and style transfer (bottom left), and by a vision transformer DNN trained on image classification (bottom right). In each panel, embedding and color similarities are calculated between each pair of neighboring squares, ranked from least to most similar, and minmax-normalized; dashed lines show one-to-one relations. Our wavelet algorithm’s embedding similarities correlate more strongly with color similarity than for any of the DNNs (Spearman \(\rho=0.95\) for the wavelet algorithm, versus \(\rho=0.5\), \(0.75\), and \(0.42\) for the convolutional, style transfer, and vision transformer DNNs, respectively).
The results above indicate that direct comparisons of algorithmic and human color perception are crucial to understand DNNs' perceptual inconsistencies when representing color. Here, we conduct such a test via an online survey.
Methods
**Experimental Design.** We designed an online experimental task in which participants compare and rank the color similarity of block image pairs. We mainly tested image pairs for which the similarities derived from our wavelet and convolutional DNN embeddings were most different; we also included several "benchmark" pairs for which the algorithms were in good color similarity agreement. Following prior work, we designed our survey as a ranked grouping task to circumvent the need for quantitative, absolute color similarity judgments (see Figure 6)[67]. Details of the survey were as follows:
* _Stimuli Generation_**:** We tested 140 block image pairs for which our perceptually uniform wavelet and convolutional DNN algorithms' embedding similarities strongly disagreed. We also selected 60 "benchmark" pairs with comparable convolutional DNN and wavelet embedding similarities, for a total of 200 color tile pairs. We created 200 "sets" of these color tile pairs by randomly assigning each pair to another, without replacement, yielding 96, 88, and 16 sets with zero, one, and two benchmark pairs, respectively.
* _Experiment & Participants_**:** The survey was implemented as a web-based online experiment. We surveyed 100 participants with self-reported normal color vision. Each participant was presented with 25 comparison sets of our color tile pairings, chosen at random; each comparison set contained two color blocks, where each color block contained two equally sized color tiles, each occupying half of the block. Each participant was then asked, for each comparison set, to place the color block containing the more similar color tiles into a designated box. Each comparison set was evaluated by 12 participants on average. The survey took participants an average of 4.4 minutes to complete. See Figure 6 for screenshots of this task and its instructions as experienced by participants.
* _Outcome Measure_: For each comparison set, we identify the majority human judgment of which color block contained the most similar color tiles. Then, for the same comparison sets, we identify which block is associated with the most similar color tiles according to the embeddings of each computer vision algorithm. For our main outcome measure, we calculate the fraction of comparison sets in which each algorithm's selection of the most similar color block matches those similarity judgments made by the majority of survey respondents.
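A minimal sketch of this outcome measure follows; the vote and algorithm-choice data structures are assumptions about bookkeeping, not excerpts from our survey code.

```python
from collections import Counter

def algorithm_accuracy(votes: dict, algo_choice: dict) -> float:
    """Fraction of comparison sets where an algorithm's pick matches the
    majority human judgment.

    `votes[set_id]` is a list of participant picks ('A' or 'B') for one
    comparison set; `algo_choice[set_id]` is the block ('A' or 'B') whose
    tiles the algorithm's embeddings rank as more similar.
    """
    matches = 0
    for set_id, picks in votes.items():
        majority = Counter(picks).most_common(1)[0][0]  # ties broken arbitrarily
        matches += (algo_choice[set_id] == majority)
    return matches / len(votes)
```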
Results
Considering all 200 sets of color tile pairs, our perceptually uniform wavelet algorithm outperforms DNNs at predicting color similarities that match the judgments of color similarity provided by the majority of human participants. As shown in Figure 7, two of the DNNs fail to predict majority human color judgments significantly better than random, namely the vision transformer (46.5% accurate, \(p=0.36\), Proportion Test, Two-tailed) and the convolutional DNN (56.5% accurate, \(p=0.08\), Proportion Test, Two-tailed). The style transfer DNN is able to predict majority human color judgments better than chance (66% accurate, \(p<0.001\), Proportion Test, Two-tailed), but is not significantly better than the CNN (\(p=0.06\), Proportion Test, Two-tailed). Meanwhile, our perceptually uniform wavelet algorithm is significantly more accurate than the vision transformer and convolutional DNNs. Specifically, our wavelet algorithm is 24.5 percentage points more accurate than the transformer model (\(p<0.001\), Proportion Test, Two-tailed) and 14.5 percentage points more accurate than the convolutional DNN (\(p<0.001\), Proportion Test, Two-tailed), while the 5 percentage point difference between wavelet and style transfer DNN accuracy on this task is not statistically significant (\(p=0.33\), Proportion Test, Two-tailed).
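The proportion tests reported here can be reproduced with standard tooling; the sketch below uses statsmodels as one possible implementation (the counts shown are derived from the percentages above and assume 200 comparison sets).

```python
from statsmodels.stats.proportion import proportions_ztest

n_sets = 200

# One-sample test: does an algorithm's accuracy differ from chance (0.5)?
# e.g., the vision transformer's 46.5% accuracy -> 93 of 200 sets matched.
stat, p_chance = proportions_ztest(count=93, nobs=n_sets, value=0.5)

# Two-sample test: do two algorithms' accuracies differ?
# e.g., wavelet (71% -> 142/200) versus vision transformer (93/200).
stat, p_diff = proportions_ztest(count=[142, 93], nobs=[n_sets, n_sets])
```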
Interestingly, the embeddings of our wavelet algorithm track human judgments nearly as well as a model of perceptually uniform color space itself; its overall agreement with majority human judgments is 71.0%, compared to 73.0% when color similarities are predicted directly based on distances in \(J_{z}A_{z}B_{z}\) color space. This success is _not_ simply due to the fact that our wavelet algorithm operates in an approximately perceptually uniform color space; an alternative version of the algorithm using RGB channels performs almost equally well, yielding a 69.0% agreement rate. Together with our discussion related to Figures 4 and S16 above, this confirms that our wavelet results are robust across different input color spaces.
For the 96 sets of color tile pairs that do not contain a benchmark pair (where a benchmark pair is a pair for which our wavelet and convolutional DNN embeddings agree in terms of color similarity), the outsized performance of our perceptually uniform wavelet algorithm is particularly striking. None of the DNNs are able to successfully predict majority human judgments of the non-benchmark pairs above random chance. The style transfer algorithm only matches 54.1% of majority human judgments for these pairs, which is indistinguishable from random (\(p=0.55\), Proportion Test). The CNN (vision transformer) performs even more poorly, only predicting 40.6% (38.5%) of majority human judgments for non-benchmark pairs, which is significantly worse than random (\(p<0.001\), Proportion Test, Two-tailed, for both algorithms). By contrast, our perceptually uniform wavelet algorithm is able to successfully predict majority human judgments of non-benchmark pairs with 66.6% accuracy, which is significantly higher than random (\(p<0.01\), Proportion Test, Two-tailed). For the 104 color sets with at least one benchmark pair, the algorithms predict 75.0%, 76.9%, 71.1%, and 53.8% of majority human judgments, for the style transfer, wavelet, CNN, and vision transformer algorithms, respectively.
Yet, comparing against the majority human judgment for each color comparison is a conservative test, since it does not leverage the rate of agreement across participants. Importantly, our results are robust--and in fact, are even stronger--if we compare algorithms' ability to match all individual judgments collected in our survey. The right panel of Figure 7 shows this explicitly: the wavelet algorithm matches 62% of color similarity judgments across all individuals, significantly more than any of the DNNs. In particular, it is significantly higher than the vision transformer, which only matches 46% of individual judgments (\(p<0.001\)), the CNN, which only matches 43.5% of individual judgments (\(p<0.001\)), and the style transfer algorithm, which only matches 56% of individual judgments (\(p<0.001\); Proportion Test, Two-tailed, for all comparisons).
We also examine whether each algorithm's color similarity judgments can predict the level of agreement among separate human annotators' color judgments in the survey data. When agreement among separate human annotators is especially high, algorithms' color embeddings should reflect comparably strong measures of color similarity and difference. For each set of color tile pairs that do not contain a benchmark pair, we calculate the difference in embedding similarity of the pair that the majority of
Figure 6: Examples of the color similarity task completed by our human respondents.
respondents deemed "more similar" versus the minority pair. We compare this to the percentage of human voters who responded that the majority pair is more similar.
As shown in Figure 8, the difference in convolutional DNN embedding similarity is _anticorrelated_ with the percentage of humans in favor of the majority pair (Spearman \(\rho=\) -0.21, \(p<0.04\)), while the wavelet embedding differences are significantly positively correlated (Spearman \(\rho=0.38\), \(p<0.0001\)). Thus, the convolutional DNN not only fails to capture human majority judgments, but also tends to embed perceptually dissimilar colors as more alike. Vision transformer DNN embedding similarity is uncorrelated with the strength of human color judgments (Spearman \(\rho=0.01\), \(p<0.0001\)), and style transfer DNN embedding similarity is only weakly correlated (Spearman \(\rho=0.17\), \(p<0.05\)). Embedding distance in the wavelet algorithm is significantly more predictive of the strength of human color judgments than embedding distance in the style transfer DNN (\(p<0.001\), Student T-test, Two-tailed).
In supplementary analyses, we qualitatively assess which color tile pairs the algorithms provide different color similarity embeddings compared to human judgments (Figure S21). The convolutional DNN failure modes are often surprising; for example, it judges a pair of light brown and purple tiles as more similar than a pair of light green and light yellow, whereas humans consistently judge the green/yellow pair as more similar. The style transfer DNN behaves similarly to the convolutional DNN in this regard; for example, it yields several perceptual errors that involve the same purple/brown color tile pair. Meanwhile, the vision transformer's perceptual errors often involve the outlying color tile pairs that it embeds differently, which contain colors near the boundaries of the standard RGB gamut (e.g., cyan). Color perception for all of the algorithms we test is least accurate relative to human judgments when a pair contains a color tile of very high or low luminance.
Figure 7: The percentage of color similarity rankings from our wavelet algorithm (purple), from convolutional DNNs trained on image classification (gray) and style transfer (green), and by a vision transformer DNN trained on image classification (cyan), that match the judgments of color similarity (A) preferred by the majority of human participants from our online survey, and (B) provided by all individual participants (i.e., not aggregated via majority ranking). There are 200 majority human judgments against which each algorithm's judgments for the same color tiles are compared, and there are 2,500 individual-level color judgments against which each algorithm's judgments are compared at the individual level. Error bars show 95% confidence intervals.
**IV. Discussion**
Although state-of-the-art deep learning algorithms often match or even surpass human-level performance on image classification tasks, we find that their perceptual representations fail to resemble or predict human color perception, which is a foundational aspect of human visual cognition. That state-of-the-art DNNs fail to accurately model human color perception casts doubt on the ability of these algorithms to serve as cognitively plausible models of human vision, particularly given the psychological importance of color perception in human vision and human cognition more generally.
Qualitatively, we find that DNNs often group images containing similar shapes (e.g., circles) or textures (e.g., checkered patterns) regardless of these images' color properties. These algorithms appear to be biased toward embedding textural and spatial information at the expense of retaining perceptually
Figure 8: Difference between algorithmic embedding similarity for sets of two color tile pairs versus the percentage of human voters who classified the majority color pair as more similar. Wavelet, convolutional DNN, style transfer DNN, and transformer results are shown in the top-left, top-right, bottom-left, and bottom-right panels, respectively. The convolutional DNN embedding differences (black) are significantly _anticorrelated_ with the strength of human color similarity agreement (Spearman \(\rho\) = -0.21, \(p\) \(<\) 0.04), while our wavelet algorithm’s embedding differences (blue) are significantly and positively correlated with the human consensus (Spearman \(\rho\) = 0.38, \(p\)\(<\) 0.0001). Faded data points show individual color tile pair comparisons; bold data points show the mean, and error bars show one standard deviation.
relevant information like images' basic color properties. Intriguingly, we find that the style transfer DNN more effectively leverages color than either convolutional or vision transformer DNNs trained to classify images. Its performance is especially impressive given that this algorithm was trained on pixel-level information. The style transfer DNN's performance edge is consistent with the intuition that color is more relevant for the artistic tasks and the datasets on which style transfer DNNs are trained (e.g., the classification of paintings) [49, 50, 58]. This suggests that DNNs' training objectives can contribute to their ability to accurately represent color. Nonetheless, our wavelet algorithm outperforms even the style transfer DNN in predicting human color judgments (particularly at the individual level), suggesting that -- even when color is more directly relevant to their training goals -- the DNNs we examine have not effectively learned to retain color information. Under several analyses, we found that the ability of the style transfer DNN to cluster images based on color was indistinguishable from a wavelet algorithm with access to only grayscale information, underscoring the extent to which deep learning methods can mismodel human color perception, even when their training objectives involve color. In this sense, our results complement previous findings that extant DNN architectures may not be well-designed to retain color information throughout the visual learning process [11, 27, 44]. We emphasize that a systematic study of how color perception depends on network architecture and training objectives is an important area for future work.
Our work also provides insights into the effects of DNN architecture on color perception, which may reveal new pathways for improving their perceptual capabilities. Prior work suggests that DNNs drop color information in their later layers as they develop increasingly abstract representations that generalize across sensory data (e.g., learning categories of objects that often vary in their color, such as _cars_ and _dogs_) [11, 27, 51]. Yet, this prior work primarily examines DNN color representations of real-world images; in our supplementary analysis that examines how different layers of DNNs represent controlled color stimuli (Figures S13 and S17), we find that the later layers of our convolutional DNN yield _more_ perceptually coherent color representations, hinting that hierarchical learning might improve DNNs' abilities to represent basic, sensory data in certain environments. Intriguingly, a recent study shows that DNNs can preserve color information throughout their layers when trained to learn color as a categorical variable (i.e., by learning the standard English vocabulary of color names mapped to color space), showing how task structure can shape the ability for hierarchical learning to detect and preserve color information in a multimodal fashion [68]. While this approach is limited by the mapping between conventional linguistic categories and human color perception [31, 69, 70], which is both approximate (e.g., users of the same language can map different colors to the same color words [71]) and culturally dependent [72, 73], it nevertheless highlights how DNNs can, in principle, hierarchically learn abstract associations while preserving basic perceptual information in a given sensory domain.
A promising direction for future research is to integrate feed-forward neural network architectures with components that explicitly retain sensory information and abstract representations that are characteristic of embodied cognition [35, 36, 38]. Progress in this direction may be aided by designing DNNs to better approximate the neurophysiology of human vision [11, 18]. A rich body of work demonstrates that human vision involves separate pathways for sensory processing and the formation of abstractions from visual data (such as object categories) [74, 75]; crucially, research indicates that these separate pathways remain connected throughout the human brain, enabling sensory-infused mental imagery to play an important role in domains from abstract concept learning to language comprehension [76, 77, 78]. In this context, the success of our cognitively-grounded wavelet algorithm in predicting human color judgments has several important implications. Our results provide novel empirical support for the longstanding claim in computational neuroscience that wavelet algorithms can accurately model how the brain represents color and texture [25, 52, 53, 55]. A crucial feature of our algorithm is its transparency: the color representations formed by our wavelet algorithm are readily interpretable, in contrast to the obscurity that continues to characterize DNNs' internal representations. Thus, our wavelet algorithm provides a replicable benchmark against which color representations of future computer vision models can be compared. We posit that integrating such neurophysiological-inspired techniques into deep-learning methods may improve their plausibility as models of human vision.
Finally, we remark that extant DNNs may not effectively leverage color because this kind of perceptual information is not necessary for popular training tasks like image classification. However, in light of numerous studies indicating that humans use color to understand both concrete and abstract concepts[31, 38, 39], this raises the question of how and why human cognition retains and leverages color throughout the formation of visual abstractions[4, 35]. Our study is intended to serve as a springboard for future research aimed at solving this puzzle: how can deep learning architectures and training goals be modified to better retain the low-level perceptual representations that characterize human vision -- and embodied cognition more broadly -- without sacrificing accuracy on standard computer vision tasks such as image classification?
**Data Availability**: All code used in this study is publicly available at [https://github.com/eonadler/cv-color-perception](https://github.com/eonadler/cv-color-perception). All image clusters from our analysis, for our four main datasets (block, stripe, colorgram, and CIFAR-10 images) and four algorithms (perceptually uniform wavelet, convolutional DNN, style transfer DNN, and vision transformer DNN) are publicly available at this online repository.
**Acknowledgements**: The authors gratefully acknowledge the support of the Complex Systems Summer School hosted at the Institute of American Indian Arts and the Santa Fe Institute. We thank Aabir Kar, Sean McLaughlin, Melanie Mitchell, Ruggerio Lo Sardo, Nicholas Guilbeault, and Krishna Savani for helpful discussions and comments.
**Credit Author Statement**
E.N. and D.G. designed the project; E.N., E.D., B.D., C.C., M.C., T.H., and D.G. developed the algorithmic methods, analyzed the data, and wrote the manuscript.
**References**
* [1] He, K., Zhang, X., Ren, S. & Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Preprint at [https://doi.org/10.48550/arXiv.1502.01852](https://doi.org/10.48550/arXiv.1502.01852) (2015).
* [2] Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. _Commun. ACM_**60**, 84-90 (2017).
* [3] Jha, A., Peterson, J. C. & Griffiths, T. L. Extracting Low-Dimensional Psychological Representations from Convolutional Neural Networks. _Cognitive Science_**47**, e13226 (2023).
* [4] Tenenbaum, J. B., Kemp, C., Griffiths, T. L. & Goodman, N. D. How to Grow a Mind: Statistics, Structure, and Abstraction. _Science_**331**, 1279-1285 (2011).
* [5] Kauf, C., Tuckute, G., Levy, R., Andreas, J. & Fedorenko, E. Lexical semantic content, not syntactic structure, is the main contributor to ANN-brain similarity of fMRI responses in the language network. 2023.05.05.539646 Preprint at [https://doi.org/10.1101/2023.05.05.539646](https://doi.org/10.1101/2023.05.05.539646) (2023).
* [6] Khosla, M., Ngo, G. H., Jamison, K., Kuceyeski, A. & Sabuncu, M. R. Cortical response to naturalistic stimuli is largely predictable with deep neural networks. _Science Advances_**7**, eabe7547 (2021).
* [7] Yamins, D. L. K. _et al._ Performance-optimized hierarchical models predict neural responses in higher visual cortex. _Proceedings of the National Academy of Sciences_**111**, 8619-8624 (2014).
* [8] Zhuang, C. _et al._ Unsupervised neural network models of the ventral visual stream. _Proceedings of the National Academy of Sciences_**118**, e2014196118 (2021).
* [9] Kriegeskorte, N. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing. _Annual Review of Vision Science_**1**, 417-446 (2015).
* [10] Millet, J. _et al._ Toward a realistic model of speech processing in the brain with self-supervised learning. Preprint at [https://doi.org/10.48550/arXiv.2206.01685](https://doi.org/10.48550/arXiv.2206.01685) (2022).
* [11] Bowers, J. S. _et al._ Deep Problems with Neural Network Models of Human Vision. _Behav Brain Sci_ 1-74 (2022) doi:10.1017/S0140525X22002813.
* [12] Fel, T., Felipe, I., Linsley, D. & Serre, T. Harmonizing the object recognition strategies of deep neural networks with humans. Preprint at [https://doi.org/10.48550/arXiv.2211.04533](https://doi.org/10.48550/arXiv.2211.04533) (2022).
* [13] Goodfellow, I. J., Shlens, J. & Szegedy, C. Explaining and Harnessing Adversarial Examples. Preprint at [https://doi.org/10.48550/arXiv.1412.6572](https://doi.org/10.48550/arXiv.1412.6572) (2015).
* [14] Szegedy, C. _et al._ Intriguing properties of neural networks. Preprint at [https://doi.org/10.48550/arXiv.1312.6199](https://doi.org/10.48550/arXiv.1312.6199) (2014).
* [15] Naseer, M. _et al._ Intriguing Properties of Vision Transformers. Preprint at [https://doi.org/10.48550/arXiv.2105.10497](https://doi.org/10.48550/arXiv.2105.10497) (2021).
* [16] Mitchell, M. _Artificial Intelligence: A Guide for Thinking Humans_. (Farrar, Straus and Giroux, 2019).
* [17] Mitchell, M. Why AI is Harder Than We Think. Preprint at [https://doi.org/10.48550/arXiv.2104.12871](https://doi.org/10.48550/arXiv.2104.12871) (2021).
* [18] Nonaka, S., Majima, K., Aoki, S. C. & Kamitani, Y. Brain hierarchy score: Which deep neural networks are hierarchically brain-like? _iScience_**24**, 103013 (2021).
* [19] Shiffrin, R. & Mitchell, M. Probing the psychology of AI models. _Proceedings of the National Academy of Sciences_**120**, e2300963120 (2023).
* [20] Mitchell, M. Abstraction and Analogy-Making in Artificial Intelligence. _arXiv:2102.10717 [cs]_ (2021).
* [21] Biscione, V. & Bowers, J. S. Learning online visual invariances for novel objects via supervised and self-supervised training. _Neural Networks_**150**, 222-236 (2022).
* [22] Kim, B., Reif, E., Wattenberg, M., Bengio, S. & Mozer, M. C. Neural Networks Trained on Natural Scenes Exhibit Gestalt Closure. _Comput Brain Behav_**4**, 251-263 (2021).
* [23] Pang, Z., O'May, C. B., Choksi, B. & VanRullen, R. Predictive coding feedback results in perceived illusory contours in a recurrent neural network. _Neural Networks_**144**, 164-175 (2021).
* [24] Alexander, D. M. & Van Leeuwen, C. Mapping of contextual modulation in the population response of primary visual cortex. _Cogn Neurodyn_**4**, 1-24 (2010).
* [25] Marr, D. & Ullman, S. _Vision: A Computational Investigation into the Human Representation and Processing of Visual Information_. (The MIT Press, 2010).
* [26] Akbarinia, A. & Gil-Rodriguez, R. Color Conversion in Deep Autoencoders. _Color and Imaging Conference_**29**, 89-98 (2021).
* [27] Flachot, A. _et al._ Deep neural models for color classification and color constancy. _J Vis_**22**, 17 (2022).
* [28] Heidari-Gorji, H. & Gegenfurtner, K. R. Object-based color constancy in a deep neural network. _J. Opt. Soc. Am. A, JOSAA_**40**, A48-A56 (2023).
* [29] Carvalho, L. S., Pessoa, D. M. A., Mountford, J. K., Davies, W. I. L. & Hunt, D. M. The Genetic and Evolutionary Drives behind Primate Color Vision. _Frontiers in Ecology and Evolution_**5**, (2017).
* [30] Surridge, A. K., Osorio, D. & Mundy, N. I. Evolution and selection of trichromatic vision in primates. _Trends in Ecology & Evolution_**18**, 198-205 (2003).
* [31] Elliot, A. J. & Maier, M. A. Color Psychology: Effects of Perceiving Color on Psychological Functioning in Humans. _Annual Review of Psychology_**65**, 95-120 (2014).
* [32] Hill, R. A. & Barton, R. A. Red enhances human performance in contests. _Nature_**435**, 293-293 (2005).
* [33] Hiramatsu, C., Melin, A. D., Allen, W. L., Dubuc, C. & Higham, J. P. Experimental evidence that primate trichromacy is well suited for detecting primate social colour signals. _Proceedings of the Royal Society B: Biological Sciences_**284**, 20162458 (2017).
* [34] Mehta, R. & Zhu, R. (Juliet). Blue or Red? Exploring the Effect of Color on Cognitive Task Performances. _Science_**323**, 1226-1229 (2009).
* [35] Barsalou, L. W. Abstraction in perceptual symbol systems. _Philos Trans R Soc Lond B Biol Sci_**358**, 1177-1187 (2003).
* [36] Barsalou, L. W. Grounded Cognition: Past, Present, and Future. _Topics in Cognitive Science_**2**, 716-724 (2010).
* [37] Desikan, B. _et al._ comp-syn: Perceptually Grounded Word Embeddings with Color. in _Proceedings of the 28th International Conference on Computational Linguistics_ 1744-1751 (2020). doi:10.18653/v1/2020.coling-main.154.
* [38] Guilbeault, D. _et al._ Color associations in abstract semantic domains. _Cognition_**201**, 104306 (2020).
* [39] Lakoff, G. Explaining Embodied Cognition Results. _Topics in Cognitive Science_**4**, 773-785 (2012).
* [40] Paschos, G. Perceptually uniform color spaces for color texture analysis: an empirical evaluation. _IEEE Transactions on Image Processing_**10**, 932-937 (2001).
* [41] Safdar, M., Cui, G., Kim, Y. J. & Luo, M. R. Perceptually uniform color space for image signals including high dynamic range and wide gamut. _Opt Express_**25**, 15131-15151 (2017).
* [42] Gatys, L. A., Ecker, A. S. & Bethge, M. Texture Synthesis Using Convolutional Neural Networks. Preprint at [https://doi.org/10.48550/arXiv.1505.07376](https://doi.org/10.48550/arXiv.1505.07376) (2015).
* [43] Geirhos, R. _et al._ ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. Preprint at [https://doi.org/10.48550/arXiv.1811.12231](https://doi.org/10.48550/arXiv.1811.12231) (2022).
* [44] Stanford University CS231n: Deep Learning for Computer Vision ('Visualizing what ConvNets learn'). [http://cs231n.stanford.edu/](http://cs231n.stanford.edu/).
* [45] Dosovitskiy, A. _et al._ An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. Preprint at [https://doi.org/10.48550/arXiv.2010.11929](https://doi.org/10.48550/arXiv.2010.11929) (2021).
* [46] Han, K. _et al._ A Survey on Vision Transformer. _IEEE Transactions on Pattern Analysis and Machine Intelligence_**45**, 87-110 (2023).
* [47] Torralba, A., Fergus, R. & Freeman, W. T. 80 Million Tiny Images: A Large Data Set for Nonparametric Object and Scene Recognition. _IEEE Transactions on Pattern Analysis and Machine Intelligence_**30**, 1958-1970 (2008).
* [48] Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Preprint at [https://doi.org/10.48550/arXiv.1505.04597](https://doi.org/10.48550/arXiv.1505.04597) (2015).
* [49] Gatys, L. A., Ecker, A. S. & Bethge, M. A Neural Algorithm of Artistic Style. Preprint at [https://doi.org/10.48550/arXiv.1508.06576](https://doi.org/10.48550/arXiv.1508.06576) (2015).
* [50] Gatys, L. A., Ecker, A. S. & Bethge, M. Image Style Transfer Using Convolutional Neural Networks. in 2414-2423 (2016).
* [51] Akbarinia, A., Morgenstern, Y. & Gegenfurtner, K. R. Contrast sensitivity function in deep networks. _Neural Networks_**164**, 228-244 (2023).
* [52] Chen, L. & Zhao, D. Color image encoding in dual fractional Fourier-wavelet domain with random phases. _Optics Communications_**282**, 3433-3438 (2009).
* [53] Huang, K. & Wu, Z. Color image denoising with wavelet thresholding based on human visual system model. in vol. 5150 1667-1676 (2003).
* [54] Prasad, A., Kumar, M. & Choudhury, D. R. Color image encoding using fractional Fourier transformation associated with wavelet transformation. _Optics Communications_**285**, 1005-1009 (2012).
* [55] Jian, M., Dong, J., Gao, D. & Liang, Z. New Texture Features Based on Wavelet Transform Coinciding with Human Visual Perception. in _Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007)_ vol. 1 369-373 (2007).
* [56] Julesz, B. Experiments in the Visual Perception of Texture. _Scientific American_**232**, 34-43 (1975).
* [57] Foley, J. M. & McCourt, M. E. Visual grating induction. _J. Opt. Soc. Am. A, JOSAA_**2**, 1220-1230 (1985).
* [58] Ghiasi, G., Lee, H., Kudlur, M., Dumoulin, V. & Shlens, J. Exploring the structure of a real-time, arbitrary neural artistic stylization network. Preprint at [https://doi.org/10.48550/arXiv.1705.06830](https://doi.org/10.48550/arXiv.1705.06830) (2017).
* [59] Deng, J. _et al._ ImageNet: A large-scale hierarchical image database. in _2009 IEEE Conference on Computer Vision and Pattern Recognition_ 248-255 (2009). doi:10.1109/CVPR.2009.5206848.
* [60] Vaswani, A. _et al._ Attention is All you Need. in _Advances in Neural Information Processing Systems_ vol. 30 (Curran Associates, Inc., 2017).
* [61] Wu, B. _et al._ Visual Transformers: Token-based Image Representation and Processing for Computer Vision. Preprint at [https://doi.org/10.48550/arXiv.2006.03677](https://doi.org/10.48550/arXiv.2006.03677) (2020).
* [62] d'Ascoli, S. _et al._ ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases. _J. Stat. Mech._**2022**, 114005 (2022).
* [63] Wu, H. _et al._ CVT: Introducing Convolutions to Vision Transformers. Preprint at [https://doi.org/10.48550/arXiv.2103.15808](https://doi.org/10.48550/arXiv.2103.15808) (2021).
* [64] Ramzan, F. _et al._ A Deep Learning Approach for Automated Diagnosis and Multi-Class Classification of Alzheimer's Disease Stages Using Resting-State fMRI and Residual Neural Networks. _J Med Syst_**44**, 37 (2019).
* [65] Simonyan, K. & Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. Preprint at [https://doi.org/10.48550/arXiv.1409.1556](https://doi.org/10.48550/arXiv.1409.1556) (2015).
* [66] Andreux, M. _et al._ Kymatio: Scattering Transforms in Python. _Journal of Machine Learning Research_**21**, 1-6 (2020).
* [67] Moroney, N., Tastl, I. & Gottwals, M. A Similarity Measure for Large Color Differences. in _22nd Color and Imaging Conference Final Program and Proceedings and 2nd Congress of the International Academy of Digital Pathology_ (Society for Imaging Science and Technology, 2014).
* [68] de Vries, J. P., Akbarinia, A., Flachot, A. & Gegenfurtner, K. R. Emergent color categorization in a neural network trained for object recognition. _eLife_**11**, e76472 (2022).
* [69] Riley, C. A. _Color Codes: Modern Theories of Color in Philosophy, Painting and Architecture, Literature, Music, and Psychology_. (UPNE, 1995).
* [70] Goethe, J. W. von. _Theory of Colours_. (M.I.T. Press, 1840).
* [71] Lindsey, D. T., Brown, A. M., Brainard, D. H. & Apicella, C. L. Hadza Color Terms Are Sparse, Diverse, and Distributed, and Presage the Universal Color Categories Found in Other World Languages. _Iperception_**7**, (2016).
* [72] Gibson, E. _et al._ Color naming across languages reflects color use. _PNAS_**114**, 10785-10790 (2017).
* [73] Winawer, J. _et al._ Russian blues reveal effects of language on color discrimination. _PNAS_**104**, 7780-7785 (2007).
* [74] Goodale, M. A. & Milner, A. D. Separate visual pathways for perception and action. _Trends Neurosci_**15**, 20-25 (1992).
* [75] Norman, J. Two visual systems and two theories of perception: An attempt to reconcile the constructivist and ecological approaches. _Behavioral and Brain Sciences_**25**, 73-144 (2002).
* [76] Fernandino, L., Tong, J.-Q., Conant, L. L., Humphries, C. J. & Binder, J. R. Decoding the information structure underlying the neural representation of concepts. _Proceedings of the National Academy of Sciences_**119**, e2108091119 (2022).
* [77] Kozlovskiy, S. & Rogachev, A. How Areas of Ventral Visual Stream Interact When We Memorize Color and Shape Information. in _Advances in Cognitive Research, Artificial Intelligence and Neuroinformatics_ (eds. Velichkovsky, B. M., Balaban, P. M. & Ushakov, V. L.) 95-100 (Springer International Publishing, 2021). doi:10.1007/978-3-030-71637-0 10.
* [78] Nanay, B. Multimodal mental imagery. _Cortex_**105**, 125-134 (2018).
Supplementary Information for:
**Divergences in Color Perception between Deep Neural Networks and Humans**
This appendix contains:
Supplementary Materials and Methods
Supplementary Analyses
Supplementary References
**Supplementary Materials and Methods**
Here, we provide additional details on the image datasets, computer vision algorithms, and methods for analyzing image embeddings used in our main study.
**Image Datasets.**
* _Block and Stripe Images_**:** We use the Python Imaging Library PIL to generate block and stripe images; a minimal generation sketch follows the colorgram item below. In particular, we create two sets of 1000 (300 x 300)-pixel images composed of either one randomly-selected color (the "block" dataset), or two randomly-selected colors arranged in a vertical stripe pattern (the "stripe" dataset). In both cases, a white border is included around a square of the primary color(s); this border is not used when calculating color similarity. In supplementary analyses, we show that different background colors do not significantly affect block image embeddings (Figure S20).
* _Colorgrams_**:** To generate colorgrams, we use composite Google Image search results developed using the comp-syn package1,2. Specifically, colorgrams are generated by averaging the top 100 Google Image results for a given search term. Because Google Image results are driven by search and usage popularity according to the PageRank algorithm3, each colorgram can be interpreted as an average visual representation of its corresponding concept according to Google Images. We select 1,000 colorgrams from the dataset of 40,000 colorgrams in Desikan et al. (2020)1, corresponding to the most frequently used words in English. Footnote 1: [https://github.com/faceface](https://github.com/faceface)
* _Style Transfer DNN:_ We extract 4096-dimensional image embeddings by flattening the (64 x 64) block matrix from the penultimate layer of this network.
* _Vision Transformer DNN:_ We use the final embedding layer of the algorithm from Dosovitskiy et al. (2021); in particular, this layer uses information from the 196 locational "patch" embeddings, which were originally optimized to produce ImageNet label classifications.
**Analysis Techniques.**
* _Clustering Techniques:_ We use the sklearn.cluster.KMeans package with init='k-means++', which performs several random trials at each clustering step to ensure convergence in the final clustering assignments.
* _Color Coherence Measure:_ We measure the color coherence of the image groups returned by _k_-means clustering as follows. Let \(c(x,y)\) denote an image, where \((x,y)\) are image coordinates and \(c(x,y)\) is the three-dimensional color tuple of each pixel in the image (we do not operate on the alpha channel for any of the image sets). We transform these tuples to perceptually uniform \(J_{z}A_{z}B_{z}\) color space and measure \(J_{z}A_{z}B_{z}\) distributions, concatenated over pixel coordinates, using eight evenly-segmented \(J_{z}A_{z}B_{z}\) subvolumes. We quantify the similarity between the color distributions of pairs of images in each cluster as follows. Let \(c_{i}(x,y)\), \(c_{j}(x,y)\) denote images \(i\) and \(j\), with \(J_{z}A_{z}B_{z}\) distributions \(c_{i}\) and \(c_{j}\), respectively. In particular, we define the color similarity \[C(c_{i}\ \|\ c_{j})=1-D_{JS}(c_{i}\ \|\ c_{j})=1-[D_{KL}(c_{i}\ \|\ c_{ij})+D_{KL}(c_{j}\ \|\ c_{ij})]/2,\] (1) where \(D_{JS}\) (\(D_{KL}\)) denotes the Jensen-Shannon (Kullback-Leibler) divergence and \(c_{ij}=(c_{i}+c_{j})/2\). Note that larger values of \(C(c_{i}\ \|\ c_{j})\) correspond to image pairs with more similar color distributions.
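The sketch below implements the eight-subvolume histogram and Equation 1; the assumed \(J_{z}A_{z}B_{z}\) axis bounds and the smoothing constant are illustrative choices, and base-2 logarithms are used so that \(C(c_{i}\ \|\ c_{j})\) lies in \([0, 1]\).

```python
import numpy as np
from scipy.stats import entropy  # entropy(p, q) returns the KL divergence

# Assumed JzAzBz axis bounds; adjust to the gamut of your converted images.
JZAZBZ_RANGES = ((0.0, 1.0), (-0.5, 0.5), (-0.5, 0.5))

def jzazbz_histogram(pixels: np.ndarray, bins_per_axis: int = 2) -> np.ndarray:
    """Distribution of pixels over evenly segmented JzAzBz subvolumes
    (2 bins per axis -> 8 subvolumes). `pixels` has shape (n_pixels, 3)."""
    hist, _ = np.histogramdd(pixels, bins=bins_per_axis, range=JZAZBZ_RANGES)
    hist = hist.ravel() + 1e-12  # smoothing constant to avoid empty bins
    return hist / hist.sum()

def color_similarity(c_i: np.ndarray, c_j: np.ndarray) -> float:
    """C(c_i || c_j) = 1 - D_JS(c_i || c_j), as in Equation 1 (base-2 logs)."""
    c_ij = (c_i + c_j) / 2
    d_js = (entropy(c_i, c_ij, base=2) + entropy(c_j, c_ij, base=2)) / 2
    return 1.0 - d_js
```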
## Supplementary Analyses
### Study 1: A. Visual Properties of Image Clusters
#### Additional Image Cluster Examples
All image clusters for our main datasets (i.e., stripe images, colorgrams, and CIFAR-10 images) and algorithms (i.e., our perceptually uniform wavelet and convolutional, style transfer, and vision transformer DNNs) are provided at the following link: image cluster repository. This dataset and all supporting data and code will be made publicly available upon publication.
Visual inspection of these image clusters supports our main quantitative results. In particular, 1) the images grouped by our perceptually uniform wavelet algorithm tend to have noticeably more similar color properties, 2) the images grouped by our convolutional and vision transformer DNNs tend to exhibit similar textural patterns at the expense of discriminating between perceptually similar colors, and 3) the images grouped by our style transfer DNN tend to compromise between color and textural similarity relative to the other DNNs. Examples of clusters returned by each algorithm for our colorgram analysis are shown in Figures S1-S4. Interestingly, we note that the style transfer clusters often capture conceptual similarities among images, e.g., by grouping colorgrams representing Google Image searches of food terms.
Similarly, visual inspection of the stripe images returned by our clustering analysis confirms many of the trends identified in our analyses. Specifically, Figures S5-S8 show that stripe images grouped by our perceptually uniform wavelet algorithm tend to have similar hues and luminosities, while the convolutional and vision transformer DNNs can group images of surprisingly different colors (e.g., the convolutional DNN groups red-green and nearly black stripe images with bright purple-ish hues). Meanwhile, the style transfer DNN often groups images with colors of similar luminance that are not perceptually similar when taking hue and saturation into account.
#### Properties of Wavelet Analysis Clusters
To further characterize the properties of image clusters returned by our perceptually uniform wavelet algorithm, we perform principal component analysis on our wavelet algorithm's coefficients, and identify the images that minimize and maximize principal components that explain the majority of the resulting variance. The results of this analysis are shown in Figure S9 and indicate that our wavelet algorithm's coefficients _simultaneously_ capture images' color and textural properties. For example, images that minimize and maximize wavelet coefficients in the bottom rows of Figure S9 are differentiated by both hue (e.g., whitish/grayish/blackish vs. blueish/yellowish hues) and shape (e.g., square vs. circular patterns). A detailed study of the image properties captured by our wavelet coefficients, and the way in which these coefficients change when using different color space representations, is therefore an interesting avenue for future work.
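A sketch of this principal component inspection is given below, assuming the 48-dimensional wavelet coefficients are stacked row-wise; the helper name and the number of extreme images shown are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def extreme_images_per_component(coeffs: np.ndarray, n_components: int = 5,
                                 n_show: int = 4) -> dict:
    """Indices of images that minimize / maximize each leading principal
    component of the wavelet coefficients (`coeffs` has shape (n_images, 48))."""
    scores = PCA(n_components=n_components).fit_transform(coeffs)
    extremes = {}
    for c in range(n_components):
        order = np.argsort(scores[:, c])
        extremes[c] = {"min": order[:n_show], "max": order[-n_show:]}
    return extremes
```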
Figure S1: Example of a colorgram cluster returned by our wavelet algorithm. The images have similar color distributions, even in perceptually uniform color space.
**Figure S3.** Example of a colorgram cluster returned by our style transfer DNN algorithm. This cluster contains several colorgrams resulting from Google Image searches of food terms (e.g., "pineapple").
**Figure S4.** Example of a colorgram cluster returned by our vision transformer DNN algorithm.
**Figure S5.** Example of a stripe image cluster returned by our perceptually uniform wavelet algorithm. The images have similar color distributions, even in perceptually uniform color space.
**Figure S6.** Example of a stripe image cluster returned by our convolutional DNN algorithm. Several images have widely varying color distributions; these perceptual errors often involve images with high or low luminance (e.g., the black-ish image near the bottom right falls in the same cluster as bright purple images).
Figure S7: Example of a stripe image cluster returned by our style transfer DNN algorithm.
**Figure S8.** Example of a stripe image cluster returned by our vision transformer DNN algorithm.
**Figure S9.** Examples of colorgrams that minimize and maximize principal components of our perceptually uniform wavelet algorithm's embedding coefficients. Each row corresponds to a different wavelet coefficient that is either minimized (to the left of the vertical line) or maximized (to the right of the vertical line). Wavelet coefficients simultaneously differentiate images by luminosities, textural properties (e.g., checkered patterns), and hues, indicating that our algorithm captures both color and textural properties of images.
### Study 1: B. Clustering in Perceptually Uniform Color Space
#### Additional Color Clustering Results
Figures S10-S11 illustrate the color space distribution of image clusters returned by our perceptually uniform wavelet and convolutional, style transfer, and vision transformer DNN algorithms for our colorgram and CIFAR-10 datasets, in the same format as Figure 3, which showed results for our stripe image dataset. The qualitative results shown in Figure 3 all hold for the other datasets; in particular, image clusters returned by our perceptually uniform wavelet algorithm are more clearly separated in color space than those returned by the convolutional, style transfer, or vision transformer DNNs. This separation is more evident for colorgrams than for CIFAR-10 images, which is consistent with the color clustering results reported in the main text.
The mean colors of the image clusters shown in Figures S10-S11 are also consistent with our finding that stripe images, colorgrams, and CIFAR-10 images successively display narrower ranges of image hues and luminosities. The average colors of CIFAR-10 images are particularly muted, most often sampling grayish, brownish and dark blueish hues, which is likely due to the color averaging effects of "background" pixels in real-world images. In CIFAR-10 and other image classification datasets, foreground objects are the members of the image classes of interest, while "background" pixels are known to influence DNN image classifiers[10]. Thus, the color properties of these "background" pixels may also affect image embeddings and artificially reduce the importance of color for accurately representing and differentiating foreground objects.
#### Comparison between \(J_{z}A_{z}B_{z}\) and RGB Wavelet Algorithms
Figure S12 compares image clusters from our fiducial wavelet algorithm, which uses the approximately perceptually uniform \(J_{z}A_{z}B_{z}\) color space, to the results of the same algorithm run on images represented in RGB color space. We show this comparison for our stripe and colorgram datasets, since these algorithms' CIFAR-10 clustering results do not differ significantly (Figure 4B). The visual properties of these algorithms' image clusters are similar, supporting our finding that the \(J_{z}A_{z}B_{z}\) wavelet's success is not solely driven by the fact that it represents images in an approximately perceptually uniform color space.
#### Robustness to DNN Embedding Layer
To assess whether the DNN embedding layers we use in our analyses affect our color coherence results, we rerun our analyses using the first, third ("conv3_block4_out"), and fifth ("conv5_block4_out") layers of our ResNet network ("convolutional DNN"), and compare these to our fiducial results that use its penultimate ("avg_pool") layer. The convolutional DNN is well suited for this test because, unlike our style transfer DNN, its embeddings are not themselves aggregates of multiple embedding layers.
The clustering result for our stripe image dataset is shown in Figure S13. Visually, color clustering fidelity improves toward deeper embedding layers; we confirm this by calculating the color coherence fraction, which we find to be \(f_{\mathrm{first}}=0\), \(f_{\mathrm{third}}=0.12\), and \(f_{\mathrm{fifth}}=0.21\), compared to \(f_{\mathrm{CNN}}=0.22\) using the penultimate layer from our main analysis. Note that the first layer embeds all stripe images similarly, such that our \(k\)-means algorithm returns only one distinct cluster; we therefore assign this result a coherence fraction of zero. Meanwhile, the fifth layer is nearly indistinguishable from the penultimate layer in terms of both the coherence fraction and the visual properties of the clusters in Figure S13.
Thus, earlier layers of the convolutional DNN perform more poorly than the penultimate layer we use throughout, which in turn performs much more poorly than our wavelet algorithm, which returns \(f_{\mathrm{wavelet}}=0.81\) on the same stripe dataset. In SI Section **C. Color Coherence of Image Clusters**, we demonstrate that the average values of the resulting color similarity distributions are statistically consistent for all layers we examine, with the caveat that the first layer only returns one cluster. Thus, our main conclusions are robust to the specific embedding layers we extract.
#### Robustness to Training Objective
To assess whether DNNs' training objectives affect the fidelity of their color perception, we test a convolutional DNN based on the U-Net architecture that is trained to perform semantic segmentation rather than classification[11]. Note that the encoder portion of the U-Net architecture is similar to the ResNet model ("Convolutional DNN") we study throughout the paper. To compare with our fiducial Convolutional DNN results, which use the penultimate embedding layer, we extract the penultimate embedding layer from the U-Net trained for image segmentation. This layer corresponds to a 224 x 224 matrix, which represents transformed images after convolutions are applied. We unravel this matrix into a 50,176-dimensional vector that we use to reproduce our clustering analysis.
The clustering result for our stripe image dataset is shown in Figure S14. Visually, the image segmentation DNN clusters stripe images with less fidelity than our fiducial classification DNN; we confirm this by calculating the color coherence fraction, which we find to be \(f_{\mathrm{segmentation}}=0.15\), compared to \(f_{\mathrm{CNN}}=0.22\) in our main analysis. Thus, although the image segmentation DNN performs slightly worse than the classification DNN at this task, both models' color clustering ability is well below that of our wavelet algorithm, which returns \(f_{\mathrm{wavelet}}=0.81\) on the same dataset. Furthermore, as we demonstrate in SI Section **C. Color Coherence of Image Clusters**, the average values of these DNNs' color similarity distributions are statistically consistent.
We therefore conclude that our main findings are robust to the specific training objectives of the networks we examine. As discussed in the main text, a style transfer training objective improves the fidelity of DNN color perception in certain regards, but even the style transfer DNN that we examine consistently and significantly underperforms compared to our wavelet algorithm. A dedicated study of the interplay between color perception, training objectives, and network architectures will be a fruitful area for future work; for example, the image segmentation network is most sensitive to the difference between warm and cool hues, which may follow because these hues often delineate images' foregrounds and backgrounds.
#### Robustness to k-means Clustering Algorithm
We also explicitly verify that our k-means clustering results do not depend on the specific algorithm configuration we use for our main analyses. In particular, Figure S15 compares our fiducial result for style transfer DNN clustering of the stripe dataset with an alternative k-means configuration that uses the 'random' initialization strategy with 10 random restarts (n_init = 10). The clustering results differ slightly at a fine-grained level, but the overall structure and all color clustering statistics are unaffected; this continues to hold for all other algorithms and datasets in our study.
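The restart logic behind this robustness check is easy to reproduce. The sketch below is a minimal, illustrative k-means in Go, assuming embeddings are plain float vectors: it runs Lloyd's algorithm from 10 random initializations (mirroring n_init = 10) and keeps the labeling with the lowest within-cluster inertia. It is a toy stand-in for the library implementation used in the study, not a reconstruction of it.

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

func dist2(a, b []float64) float64 {
	s := 0.0
	for i := range a {
		d := a[i] - b[i]
		s += d * d
	}
	return s
}

// kMeans runs Lloyd's algorithm once from a random initialization and
// returns the labels and total within-cluster inertia.
func kMeans(X [][]float64, k, iters int, rng *rand.Rand) ([]int, float64) {
	cents := make([][]float64, k)
	for i := range cents {
		cents[i] = append([]float64(nil), X[rng.Intn(len(X))]...)
	}
	labels := make([]int, len(X))
	for t := 0; t < iters; t++ {
		// Assign each point to its nearest centroid.
		for i, x := range X {
			best, bd := 0, math.Inf(1)
			for c, cent := range cents {
				if d := dist2(x, cent); d < bd {
					best, bd = c, d
				}
			}
			labels[i] = best
		}
		// Recompute centroids as cluster means.
		sums := make([][]float64, k)
		counts := make([]int, k)
		for i := range sums {
			sums[i] = make([]float64, len(X[0]))
		}
		for i, x := range X {
			counts[labels[i]]++
			for j, v := range x {
				sums[labels[i]][j] += v
			}
		}
		for c := range cents {
			if counts[c] > 0 {
				for j := range cents[c] {
					cents[c][j] = sums[c][j] / float64(counts[c])
				}
			}
		}
	}
	inertia := 0.0
	for i, x := range X {
		inertia += dist2(x, cents[labels[i]])
	}
	return labels, inertia
}

func main() {
	rng := rand.New(rand.NewSource(1))
	X := [][]float64{{0, 0}, {0.1, 0}, {5, 5}, {5.1, 4.9}, {9, 0}, {9.1, 0.2}}
	bestLabels, bestInertia := []int(nil), math.Inf(1)
	for run := 0; run < 10; run++ { // n_init = 10 restarts
		labels, inertia := kMeans(X, 3, 50, rng)
		if inertia < bestInertia {
			bestLabels, bestInertia = labels, inertia
		}
	}
	fmt.Println(bestLabels, bestInertia)
}
```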
**Figure S10.** Comparison of the color properties of images clustered by our perceptually uniform wavelet (top-left), convolutional DNN (top-right), style transfer DNN (bottom-left), and vision transformer DNN (bottom-right) algorithms for images in our colorgram dataset.
**Figure S11.** Comparison of the color properties of images clustered by our perceptually uniform wavelet (top-left), convolutional DNN (top-right), style transfer DNN (bottom-left), and vision transformer DNN (bottom-right) algorithms for images in our CIFAR-10 dataset.
**Figure S12.** Comparison of the color properties of images clustered by our fiducial wavelet algorithm that operates in an approximately perceptually uniform color space (left column) versus the same algorithm operating on images represented in RGB color space (right column) for our stripe (top row) and colorgram (bottom row) image datasets.
**Figure S13.** Comparison of the color properties of images clustered by our convolutional DNN algorithm using the first (top left), third (top right), fifth (bottom left), and penultimate (bottom right, our fiducial result) embedding layer. Note that the first layer only returns one cluster.
**Figure S14.** Comparison of the color properties of images clustered by a convolutional DNN trained on image segmentation using the U-Net architecture (left), versus a convolutional DNN trained on image classification using the ResNet architecture (right, our fiducial result).
### Study 1: C. Color Coherence of Image Clusters
#### Additional Color Coherence Results
Figure S16 shows the color similarity distributions for our stripe and CIFAR-10 datasets returned by our perceptually uniform wavelet, grayscaled wavelet, convolutional DNN, and style transfer DNN algorithms. These distributions are qualitatively consistent with our findings in the main text: the perceptually uniform wavelet algorithm yields clusters with the highest color similarity, while the DNNs yield less color-coherent clusters. Note that the stripe dataset's color similarity distribution is discontinuous because these images are composed of color pairs selected from discrete points in RGB color space. Furthermore, the variability from random clustering assignments for this dataset is large compared to the spread of the mean color coherence values among the algorithms we consider.
Both panels of Figure S16 also include the result of our wavelet algorithm when operating in RGB color space. Consistent with our results for the colorgram dataset in the main text, the wavelet algorithm's color coherence does not significantly degrade when using RGB. Note that, for stripe images, the improvement of the RGB wavelet's mean color coherence is not statistically significant given the typical spread of random clustering assignments.
#### Robustness to DNN Embedding Layer and High-Level Visual Training Objective
Following our robustness tests in **B. Clustering in Perceptually Uniform Color Space**, we verify that our color similarity results are robust to the choices of DNN training objective and embedding layer.
The left panel of Figure S17 shows color similarity distributions for our stripe image dataset returned by the first (magenta), third (orange), fifth (green), and penultimate (black) embedding layer of the convolutional DNN. The average values of these color similarity distributions are all consistent within the statistical uncertainties of clustering assignments for our stripe dataset. Furthermore, consistent with Figure S13, earlier embedding layers return lower color similarity, on average, than our result using the penultimate embedding layer. Thus, we conclude that the embedding layer we extract for our main tests does not affect our conclusions; moreover, we may slightly overestimate convolutional DNNs' color perception fidelity by using the penultimate embedding layer.
The right panel of Figure S17 shows the color similarity distribution for our stripe image dataset returned by our perceptually uniform wavelet algorithm (blue), our standard convolutional DNN trained for image classification using the ResNet architecture (black), and a convolutional DNN trained for image segmentation using the U-Net architecture (red). The average values of these DNNs' color similarity distributions are consistent within the statistical uncertainties of clustering assignments for our stripe dataset. Consistent with Figure S14, the DNN trained on image segmentation returns slightly lower color similarity, on average, than our fiducial convolutional DNN. Thus, we conclude that variations in high-level visual training objectives do not significantly affect our color similarity distributions. We have made the code for our embedding layer and training objective tests publicly available, along with the rest of our materials, at [https://github.com/eonadler/cv-color-perception](https://github.com/eonadler/cv-color-perception).
**Figure S16.** Color similarity distributions for our stripe (left) and CIFAR-10 (right) image datasets returned by our perceptually uniform wavelet (blue), RGB wavelet (unfilled blue), grayscaled wavelet (gray), convolutional DNN (black), style transfer DNN (green), and vision transformer (cyan) algorithms. For both datasets, our perceptually uniform wavelet algorithm yields image clusters with the highest mean color similarity.
We also demonstrate that our results are robust to image resolution. In particular, we recreate our color similarity analysis by embedding a higher-resolution version of the same CIFAR-10 dataset (128 x 128 pixels, rather than our standard 32 x 32 pixels) with our convolutional DNN. Figure S18 shows that the resulting color similarity distributions are virtually identical to our fiducial results, indicating that image resolution does not affect our conclusions.
**Figure S18.** Color similarity distributions for the CIFAR-10 clusters returned by our convolutional DNN algorithm using our standard (32 x 32)-pixel resolution CIFAR-10 images (black) and a higher-resolution (128 x 128)-pixel version of the same images (brown). The distributions are virtually identical; thus, our results are not sensitive to image resolution.
### Study 1: D. A Color Vision Test for Computer Vision Algorithms
#### Embedding Similarity versus Luminance
It is interesting to consider how specific properties of image pairs' color distributions correlate with embedding similarity. Thus, Figure S19 shows the relationship between embedding similarity and the mean \(J_{z}\) coordinate of each block pair, where the mean \(J_{z}\) coordinates are minmax-normalized.
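For concreteness, the minmax normalization applied to the mean \(J_z\) coordinates is the standard rescaling to [0, 1]. The Go sketch below is an illustrative implementation, assuming the per-pair mean \(J_z\) values have already been computed; the sample values in main are made up.

```go
package main

import "fmt"

// minmax rescales a slice of values to [0, 1]; constant input maps to zeros.
func minmax(v []float64) []float64 {
	lo, hi := v[0], v[0]
	for _, x := range v {
		if x < lo {
			lo = x
		}
		if x > hi {
			hi = x
		}
	}
	out := make([]float64, len(v))
	if hi == lo {
		return out
	}
	for i, x := range v {
		out[i] = (x - lo) / (hi - lo)
	}
	return out
}

func main() {
	// Made-up per-pair mean Jz values.
	meanJz := []float64{0.12, 0.45, 0.30, 0.81}
	fmt.Println(minmax(meanJz)) // [0 0.478... 0.260... 1]
}
```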
Intriguingly, our wavelet algorithm's response to image pair luminance is much more symmetric than for any of the DNNs we test. For example, the probability that the convolutional DNN embeds block image pairs with low luminance (i.e., mean pairwise \(J_{z}<0.5\) in Figure S19) less similarly than average is more than twice that for our wavelet algorithm. This suggests that luminance drives the convolutional DNN's embeddings of images in different parts of color space, such that it represents low-luminance colors similarly regardless of their perceptual differences in hue and saturation.
Figure S19 indicates that similar conclusions hold for the style transfer DNN, which again behaves very similarly to the convolutional DNN. Meanwhile, the vision transformer DNN tends to embed all block images similarly, regardless of their luminance. This is consistent with our results in Figure 5 and again suggests that the uniform texture of our block images exacerbates the non-perceptual behavior of the vision transformer DNN we test.
The non-perceptual responses of convolutional and style transfer DNNs to low-luminance images are reminiscent of known shortcomings of RGB color space, in which low-luminance colors are less well differentiated than high-luminance colors[5]. Thus, the DNN behavior we identify in Figure S19 could be caused by the fact that these algorithms are trained on images represented in non-perceptually uniform color spaces. However, these findings may also signal inherent biases in these algorithms' image embeddings, regardless of the color space used when representing images during training.
#### Robustness to Block Image Color Contrast
In our standard block image dataset, each color tile is laid on a white background. Here, we test two alternative background colors to show that color contrast does not affect our conclusions. In particular, we generate two alternative versions of the block dataset with identical central colors but with black and gray backgrounds, respectively, and replicate our analyses using the convolutional DNN. In these tests, color similarity between each block image pair is unchanged because the border is not used when computing color similarity; however, embedding similarity changes because the entire image is embedded by the convolutional DNN.
As shown in Figure S20, these background color variations do not qualitatively affect our findings: the correlation between color similarity and embedding similarity for the convolutional DNN is significantly lower than for the wavelet algorithm (\(\rho=0.95\)) when using either a white (\(\rho=0.5\), \(p<0.001\)), black (\(\rho=0.53\), \(p<0.001\)), or gray (\(\rho=0.6\), \(p<0.001\)) background (Student's t-test). Furthermore, the overall shape of the embedding similarity-color similarity relation is qualitatively unchanged across these different background conditions. Thus, we conclude that block image backgrounds and color contrast do not affect our main findings.
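The reported \(\rho\) values are correlation coefficients between embedding similarity and color similarity over sampled block pairs. As an illustration, the Go sketch below computes a Pearson correlation; whether the study's \(\rho\) is Pearson or Spearman is not stated in this excerpt, so treat this as a generic example rather than the study's exact statistic, and the sample data is made up.

```go
package main

import (
	"fmt"
	"math"
)

// pearson returns the linear correlation coefficient between x and y.
func pearson(x, y []float64) float64 {
	n := float64(len(x))
	var sx, sy float64
	for i := range x {
		sx += x[i]
		sy += y[i]
	}
	mx, my := sx/n, sy/n
	var cov, vx, vy float64
	for i := range x {
		dx, dy := x[i]-mx, y[i]-my
		cov += dx * dy
		vx += dx * dx
		vy += dy * dy
	}
	return cov / math.Sqrt(vx*vy)
}

func main() {
	colorSim := []float64{0.1, 0.4, 0.5, 0.9}      // made-up values
	embeddingSim := []float64{0.2, 0.35, 0.6, 0.8} // made-up values
	fmt.Printf("rho = %.3f\n", pearson(colorSim, embeddingSim))
}
```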
**Figure S19.** Perceptually uniform wavelet (top left), convolutional DNN (top right), style transfer DNN (bottom left), and vision transformer DNN (bottom right) embedding similarity versus the mean \(J_{z}\) coordinate for a fixed random sample of block image pairs. Our wavelet algorithm responds to low- and high-luminance images more symmetrically than any of the DNNs, which tend to embed low-luminance images similarly, regardless of hue and saturation.
**Figure S20.** Convolutional DNN embedding similarity versus color similarity for alternative versions of our block image dataset with black (left) and gray (right) backgrounds, rather than our standard choice of white backgrounds. The embedding similarity-color similarity relation is not qualitatively changed by varying image background and color contrast, and the corresponding correlation coefficients are also not significantly affected. Each panel shows an independent sample of 1000 random block image pairs.
### Study 2: A. Predicting Human Color Judgments from an Online Survey
#### Perceptual Errors in Algorithms' Color Judgments
Figure S21 shows embedding similarity from our wavelet and convolutional, style transfer, and vision transformer DNN algorithms versus perceptually uniform color similarity for the color tile pairs included in our survey. The axes are analogous to Figure 5, except that the embedding and color similarities are not rank-ordered before minmax normalization is applied, to better highlight color tile pairs that are outliers in either dimension. Lines connect pairs for which a given algorithm failed to provide a color similarity judgment consistent with the human consensus. For example, the convolutional DNN connects the pair of light green and light yellow tiles in the bottom-right quadrant of the right-hand panel with the purple and light brown pair in the bottom-left panel. Humans judge the green/yellow pair as more similar, while the convolutional DNN embeds the purple/brown pair more similarly.
Again, the style transfer DNN behaves similarly to the convolutional DNN; for example, it yields several perceptual errors that involve the same purple/brown color tile pair described above. In addition, many of the perceptual errors of both the convolutional and style transfer DNNs involve pairs of primary colors (e.g., see the lines connecting to the red/blue pairs at the top-left of each algorithm's panel in Figure S21). Finally, as discussed above, the vision transformer DNN embeds all block images similarly. It is therefore difficult to pinpoint regions of color space in which it fails to represent human judgments, but we note that several of its perceptual errors involve the outlying color tile pairs that it embeds very differently than the remaining pairs (for example, see the pairs that contain a cyan tile towards the bottom-right of the vision transformer DNN panel in Figure S21).
**Figure S21.** Perceptually uniform wavelet (top left), convolutional DNN (top right), style transfer DNN (bottom left), and vision transformer DNN (bottom right) embedding similarity versus the \(J_{z}A_{z}B_{z}\) similarity of each image pair used in our online survey. Lines connect pairs for which a given algorithm failed to provide a color similarity judgment consistent with the human consensus. |
2309.11101 | A New Interpretable Neural Network-Based Rule Model for Healthcare
Decision Making | In healthcare applications, understanding how machine/deep learning models
make decisions is crucial. In this study, we introduce a neural network
framework, $\textit{Truth Table rules}$ (TT-rules), that combines the global
and exact interpretability properties of rule-based models with the high
performance of deep neural networks. TT-rules is built upon $\textit{Truth
Table nets}$ (TTnet), a family of deep neural networks initially developed for
formal verification. By extracting the necessary and sufficient rules
$\mathcal{R}$ from the trained TTnet model (global interpretability) to yield
the same output as the TTnet (exact interpretability), TT-rules effectively
transforms the neural network into a rule-based model. This rule-based model
supports binary classification, multi-label classification, and regression
tasks for small to large tabular datasets. After outlining the framework, we
evaluate TT-rules' performance on healthcare applications and compare it to
state-of-the-art rule-based methods. Our results demonstrate that TT-rules
achieves equal or higher performance compared to other interpretable methods.
Notably, TT-rules presents the first accurate rule-based model capable of
fitting large tabular datasets, including two real-life DNA datasets with over
20K features. | Adrien Benamira, Tristan Guerand, Thomas Peyrin | 2023-09-20T07:15:48Z | http://arxiv.org/abs/2309.11101v1 | # A New Interpretable Neural Network-Based Rule Model for Healthcare Decision Making
###### Abstract
In healthcare applications, understanding how machine/deep learning models make decisions is crucial. In this study, we introduce a neural network framework, _Truth Table rules_ (TT-rules), that combines the global and exact interpretability properties of rule-based models with the high performance of deep neural networks. TT-rules is built upon _Truth Table nets_ (TTnet), a family of deep neural networks initially developed for formal verification. By extracting the necessary and sufficient rules \(\mathcal{R}\) from the trained TTnet model (global interpretability) to yield the same output as the TTnet (exact interpretability), TT-rules effectively transforms the neural network into a rule-based model. This rule-based model supports binary classification, multi-label classification, and regression tasks for small to large tabular datasets.
After outlining the framework, we evaluate TT-rules' performance on healthcare applications and compare it to state-of-the-art rule-based methods. Our results demonstrate that TT-rules achieves equal or higher performance compared to other interpretable methods. Notably, TT-rules presents the first accurate rule-based model capable of fitting large tabular datasets, including two real-life DNA datasets with over 20K features.
## 1 Related Work
Traditional rule-based models, such as decision trees [5], rule lists [20, 3, 9], linear models, and rule sets [13, 7, 8, 18, 24], are commonly used for interpretable classification and regression tasks. However, these models face limitations in handling large datasets, binary classification tasks, and capturing complex feature relationships, which can result in reduced accuracy and limited practicality [25, 23]. To overcome these challenges, recent work by Benamira _et al._ introduced an architecture encoded into CNF formulas, demonstrating scalability on large datasets [4, 6]. Our objective is to extend this approach to handle diverse classification tasks and regression on tabular datasets of varying feature dimensions.
There have been investigations into the connection between deep neural networks (DNNs) and rule-based models. Notable works include DNF-net [1], which focuses on the activation function, and RRL [23], which addresses classification tasks but raises concerns about interpretability due to its complexity and time-consuming training process. Another architecture, Neural Additive Models (NAMs) [2], combines the flexibility of DNNs with the interpretability of additive models but deviates from the strict rule-based model paradigm, posing challenges in interpretation, especially with a large number of features.
## 2 Methodology
This paper introduces a novel neural network framework that effectively combines the interpretability of rule-based models with the high performance of DNNs. Our framework, called TT-rules, builds upon the advancements made by Benamira _et al._[4], who introduced a new Convolutional Neural Network (CNN) filter function called the Learning Truth Table (LTT) block. The LTT block has the unique property that its complete distribution is computable in constant and practical time, regardless of the architecture. This allows the transformation of the LTT block from weights into an exact mathematical Boolean formula. Since an LTT block is equivalent to a CNN filter, **the entire neural network model, known as Truth Table net (TTnet), can itself be represented as a Boolean formula.** We then optimize our formula set \(\mathcal{R}\) in two steps: first, we automatically integrate human logic into the truth tables, which reduces the size of each rule in the set \(\mathcal{R}\); then, we analyze correlations to decrease the number of rules in \(\mathcal{R}\). These optimizations, specific to the TT-rules framework, automatically and efficiently transform the set \(\mathcal{R}\) into an optimized set in constant time. To enhance the interpretability of the model, we convert all Boolean formulas into Reduced Ordered Binary Decision Diagrams. An example is given in Figure 1.
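The core idea, that a small-input block can be exhaustively enumerated and rewritten as a Boolean formula, is easy to illustrate. The Go sketch below enumerates the truth table of a k-input Boolean function and emits a disjunctive-normal-form rule; the 3-input "at least two active" function is a hypothetical stand-in for a trained LTT block, and the real framework additionally optimizes these formulas and converts them to Reduced Ordered Binary Decision Diagrams.

```go
package main

import "fmt"

// dnfFromTruthTable enumerates all 2^k inputs of a k-input boolean
// function and returns a disjunctive-normal-form rule string.
func dnfFromTruthTable(k int, f func(bits []bool) bool) string {
	var terms []string
	bits := make([]bool, k)
	for m := 0; m < 1<<k; m++ {
		for i := 0; i < k; i++ {
			bits[i] = m&(1<<i) != 0
		}
		if !f(bits) {
			continue
		}
		term := "("
		for i := 0; i < k; i++ {
			if i > 0 {
				term += " AND "
			}
			if bits[i] {
				term += fmt.Sprintf("x%d", i)
			} else {
				term += fmt.Sprintf("NOT x%d", i)
			}
		}
		terms = append(terms, term+")")
	}
	if len(terms) == 0 {
		return "FALSE"
	}
	out := terms[0]
	for _, t := range terms[1:] {
		out += " OR " + t
	}
	return out
}

func main() {
	// Hypothetical 3-input "filter": fires when at least two inputs are set.
	atLeastTwo := func(b []bool) bool {
		n := 0
		for _, v := range b {
			if v {
				n++
			}
		}
		return n >= 2
	}
	fmt.Println(dnfFromTruthTable(3, atLeastTwo))
}
```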
## 3 Experiments
### Datasets
We utilized a variety of healthcare datasets for our study, including the Diabetes 130 US-Hospitals dataset for multi-classification1[10], two single-cell RNA-seq analysis datasets (head and neck cancer2[17] and melanoma3[21]), the Breast Cancer Wisconsin (Original) dataset4, and the TCGA lung cancer dataset for regression5[15].
Footnote 1: [https://bit.ly/diabetes_130_uci](https://bit.ly/diabetes_130_uci)
Footnote 2: [https://bit.ly/neck_head_rna](https://bit.ly/neck_head_rna)
Footnote 3: [https://bit.ly/meloma_rna](https://bit.ly/meloma_rna)
Footnote 4: [https://archive.ics.uci.edu/dataset/15/breast+cancer+wisconsin+original](https://archive.ics.uci.edu/dataset/15/breast+cancer+wisconsin+original)
Footnote 5: [https://bit.ly/tcga_lung_rna](https://bit.ly/tcga_lung_rna)
Our TT-rules framework's scalability is demonstrated using two DNA datasets. These include single-cell RNA-seq analysis datasets for head and neck cancer, melanoma cancer [17, 21], and the TCGA lung cancer dataset [15]. These datasets contain 23689 and 20530 features, respectively, and are commonly used in real-life machine learning applications [14, 11, 19, 22]. In the melanoma cancer setup, we trained on the head and neck dataset [17] and tested on the melanoma dataset [21] following established literature [14, 11, 19, 22].
### Performance Comparison
Table 1 presents a comparison of various rule-based models, including ours, on the datasets introduced above, in terms of RMSE, AUC, and accuracy. Our proposed model outperforms the other rule-based methods in terms of accuracy on the Diabetes dataset and on the Breast Cancer dataset. XGBoost and DNNs perform better on Diabetes but worse on the bigger datasets, as shown in the next section. Although GL provides a good tradeoff between performance and complexity, we highlight that GL does not support multi-class classification tasks and does not scale to larger datasets such as the DNA datasets, as shown in the next section.
### Scalability Comparison
Our TT-rules framework demonstrated excellent scalability on real-life datasets with up to 20K features. This result is not surprising, considering the original TTnet paper [4] showed the architecture's ability to scale to ImageNet. Furthermore, our framework's superiority was demonstrated by outperforming other rule-based models, which failed to converge on such large datasets (GL [24]). Regarding performance, the TT-rules framework outperforms all other methods. Our approach not only scales but also reduces the input feature set, acting as a feature selection method. We generated a set of 1064 rules out of 20530 features for the regression problem, corresponding to a drastic reduction in complexity. For the binary classification dataset, we generated 9472 rules, which more than halved the input size from 23689 to 9472.
## 4 Conclusion
In conclusion, our proposed TT-rules framework provides a new and optimized approach for achieving global and exact interpretability in regression and classification tasks. With its ability to scale on large datasets and its potential for feature reduction, the TT-rules framework appears as a valuable tool towards explainable artificial intelligence for healthcare applications.
| | **Regression (RMSE)** | **Binary classification (AUC)** | **Binary classification (AUC)** | **Multi-classification (Accuracy)** |
|---|---|---|---|---|
| | TCGA Cancer | Melanoma | Breast Cancer | Diabetes |
| continuous/binary # features | 0/20530 | 0/23689 | 0/81 | 43/296 |
| Linear/log | 0.092 | 0.833 | 0.985 | 0.581 |
| DT | - | - | 0.908 | 0.572 |
| GL | - | - | 0.984 | - |
| TT-rules (Ours) | 0.029 | 0.835 | 0.986 | 0.584 |
| Random Forest | 0.42 | 0.729 | 0.950 | 0.587 |
| DNNs | 0.028 | 0.725 | 0.982 | 0.603 |

Table 1: Comparison of our method to Linear/Logistic Regression [16], Decision Trees (DT) [16], GL [24], Random Forest [12], and DNNs on the machine learning datasets introduced above. Means and standard deviations are reported from 5-fold cross validation. Our TT-rules models were trained with a final linear regression with floating-point weights for the Breast Cancer and Diabetes datasets for better performance; the two other datasets were trained with a sparse binary linear regression to reduce the number of final features. Lower RMSE is better; higher AUC/Accuracy is better.
Figure 1: Our neural network model trained on the Breast Cancer dataset in the form of Boolean decision trees: the output of the DNN and the output of these decision trees are the same, reaching 99.30% AUC. On the same test set, Random Forest reaches 95.08% AUC, Decision Tree 90.36% AUC, and XGBoost 97.79% AUC. Each rule \(r_{i}\) is a function \(r_{i}:\{0,1\}^{n}\mapsto\{-1,0,1\}\), i.e., for each data sample \(I\) we associate with each rule \(r_{i}\) a score in \(\{-1,0,1\}\). The prediction of our classifier is then as stated above. As our model has 24 rules, we have only reported two positive rules out of the 24 to provide an example of the type of rules obtained.
2309.08444 | Neural Network Exemplar Parallelization with Go | This paper presents a case for exemplar parallelism of neural networks using
Go as parallelization framework. Further it is shown that also limited
multi-core hardware systems are feasible for these parallelization tasks, as
notebooks and single board computer systems. The main question was how much
speedup can be generated when using concurrent Go goroutines specifically. A
simple concurrent feedforward network for MNIST digit recognition with the
programming language Go was created to find the answer. The first findings when
using a notebook (Lenovo Yoga 2) showed a speedup of 252% when utilizing 4
goroutines. Testing a single board computer (Banana Pi M3) delivered more
convincing results: 320% with 4 goroutines, and 432% with 8 goroutines. | Georg Wiesinger, Erich Schikuta | 2023-09-15T14:46:43Z | http://arxiv.org/abs/2309.08444v1 | # Neural Network Exemplar Parallelization with Go
###### Abstract
This paper presents a case for exemplar parallelism of neural networks, using Go as the parallelization framework. It is further shown that even limited multi-core hardware systems, such as notebooks and single board computers, are feasible for these parallelization tasks. The main question was how much speedup can be generated when using concurrent Go goroutines specifically. A simple concurrent feedforward network for MNIST [1] digit recognition was created with the programming language Go [2, 3, 4, 5] to find the answer. The first findings when using a notebook (Lenovo Yoga 2) showed a speedup of 252% when utilizing 4 goroutines. Testing a single board computer (Banana Pi M3) delivered more convincing results: 320% with 4 goroutines, and 432% with 8 goroutines.
Backpropagation, Exemplar Parallelization, Go Programming Language, MNIST
## I Introduction
Neural networks and artificial intelligence are becoming more and more important not only in research, but also in daily used technology. Due to the higher amounts of data these artificial agents have to analyze, there is a need for larger throughput and highly efficient neural networks. The programming language Go looks promising for developing such a highly efficient agent, as the language has been designed not only for highly efficient parallelization but also with fast development in mind. The main question is whether Go is suitable for a highly efficient parallelization of neural networks. The main objective is the creation of an efficiently parallelized neural network. There is a possibility that Go could lead to a higher parallelization efficiency/speedup than other programming languages. As Go is a young programming language, the literature about this specific topic is very sparse to almost nonexistent. There are tertiary sources like websites comparing the general throughput of Go in comparison to web languages like NodeJS, PHP and Java1. Other literature is related to parallelization speedup. There are also some neural networks realized in Go. No sources for a better comparison of parallelization techniques have been found. The scope of this work is to find out the speedup when using multiple goroutines with a neural network while maintaining a high and sustainable classification accuracy. A working MNIST digit recognition system has been created for testing the speedup with up to sixteen goroutines. The network and parameters have been optimized, but due to only negligible improvements with more than 100 hidden-layer nodes, this amount has not been exceeded. The execution time for one epoch has been sped up from 856.57-1271.73 (median 1005.80) seconds with 1 goroutine to only 171.50-221.38 (median 201.82) seconds with 16 goroutines on a Banana Pi M3. The Lenovo Yoga 2 showed a less significant speedup, from 137.29-146.01 (median 142.33) seconds with 1 goroutine to 55.10-64.62 (median 56.49) seconds with 4 goroutines. Additional goroutines exceeding the maximum thread limit brought further speedup due to pipelining on the Banana Pi, but a negligible speed loss for the Lenovo Yoga.
Footnote 1: [https://www.toptal.com/back-end/server-side-io-performance-node-php-java-go](https://www.toptal.com/back-end/server-side-io-performance-node-php-java-go)
## II Related Work and Baseline Research
Artificial neural networks and their parallel simulation gained high attention in the scientific community. Parallelization is a classic approach for speeding up execution times and exploiting the full potential of modern processors. Still, not every algorithm can profit from parallelization, as the concurrent execution might add a non-negligible overhead. This can also be the case for data parallel neural networks, where accuracy problems usually occur, as the results have to be merged.
In the literature a huge number of papers on parallelizing neural networks can be found. An excellent source of references is the survey by Tal Ben-Nun and Torsten Hoefler [7]. However, only few research was done on using Golang in this endeavour.
In the following only specific references are listed, which influenced the presented approach directly. The authors of [8] presented a parallel backpropagation algorithm dealing with the accuracy problem only by using a MapReduce and Cascading model. In the course of our work on parallel and distributed systems [9, 10, 11] we developed several approaches for the parallelization of neural networks. In [12], two novel parallel training approaches were presented for face recognizing backpropagation neural networks. The authors use the OpenMP environment for classic CPU multithreading and CUDA for parallelization on GPU architectures. Aside from that, they differentiated between topological data parallelism and structural data parallelism [13], where the latter is focus of the presented approach here. [14] gave a comparison of different parallelization approaches on a cluster computer. The
results differed depending on the network size, data set sizes and number of processors. Using Go as parallelization tool was already analyzed for the Single-Program-Multiple-Data approach [15, 16] and showed promising results. However, in this paper we focus on exemplar parallelization. Besides parallelizing the backpropagation algorithm for training speed-up, alternative training algorithms like the Resilient Backpropagation described in [17] might lead to faster convergence. One major difference to standard backpropagation is that every weight and bias has a different and variable learning rate. A detailed comparison of both network training algorithms was given in [18] in the case of spam classification.
## III Data and Methods
The following data and methods have been used to gain insight on the speedup possibilities.
### _Choosing the data and parallelization method_
Different approaches of which data could be used have been evaluated. Weather data, crime rates, etc. all seemed to be a good fit, but with the possibility of very inconclusive outputs. Finally the "Hello World!" of neural networks has been chosen: The MNIST dataset [1]. With this ready to use dataset the development process sped up as the convolutional part was already done.
Exemplar parallelism [19] has been chosen as parallelization technique. Within the workers the learning method is stochastic gradient descent [20], but due to combining the data in the main connectome, it behaves like a mini-batch update [21].
### _Basic structure_
First, functional code for a basic neural network has been prepared. With this code it is also possible to define simple multi-layer feedforward networks. From that stable basis more functionality has been added (e.g., different activation functions) to ease future development. Then the parallelization of the neural network has been implemented. There were additional challenges with avoiding possible race conditions. Go was very helpful here with its built-in race detector, which can be utilized with the "-race" flag. It was easy to spot any race conditions, and therefore development sped up in the area deemed to take the most time. Afterwards the possibility to input and compute large datasets has been implemented. A batch file functionality for ease of testing as well as data output functionality have been added too. Afterwards the code and neural network have been optimized for a balance of speed, memory usage, and training quality. Shuffling of the training data has been implemented to prevent any unwanted behavior that comes from repeated data. The elastic net regularization [22] has been chosen to get better results and more stability for the neural network.
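As a minimal sketch of the exemplar parallelism described above, assuming the network is abstracted to a flat weight vector: each goroutine trains on worker batches it receives over a channel and sends back a weight delta, which the main goroutine merges into the shared connectome (the mini-batch-style update mentioned earlier). The placeholder gradient stands in for a real backpropagation pass, and a real implementation would also snapshot the merged weights back to the workers.

```go
package main

import (
	"fmt"
	"sync"
)

// trainBatch stands in for one worker's stochastic-gradient pass over its
// batch; it returns the accumulated weight delta for that batch.
func trainBatch(n int, batch [][]float64) []float64 {
	delta := make([]float64, n)
	for range batch {
		for j := range delta {
			delta[j] += 0.001 // placeholder for a real gradient step
		}
	}
	return delta
}

func main() {
	weights := make([]float64, 4) // the "main connectome"
	batches := make(chan [][]float64)
	deltas := make(chan []float64)

	var wg sync.WaitGroup
	for w := 0; w < 4; w++ { // one worker goroutine per core
		wg.Add(1)
		go func() {
			defer wg.Done()
			for b := range batches {
				deltas <- trainBatch(len(weights), b)
			}
		}()
	}
	go func() { wg.Wait(); close(deltas) }()

	go func() {
		for i := 0; i < 8; i++ {
			batches <- make([][]float64, 100) // workerBatch = 100 lines
		}
		close(batches)
	}()

	// Merge the workers' deltas into the main connectome
	// (the mini-batch-style update described above).
	for d := range deltas {
		for j := range weights {
			weights[j] += d[j]
		}
	}
	fmt.Println(weights)
}
```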
### _The math_
Different activation functions have been tested to obtain a high accuracy, although this is not the purpose of this work. In the end it was concluded that it is best to feed the input layer data without any prior normalization. But before any activation function runs over a layer, the data is normalized by dividing each value by the size of the layer (including the bias) to minimize the risk of exploding gradients [23].
The following activation functions have been used:
* Input layer: Identity
* Hidden layer: ELU [24]
* Output layer: SoftMax
Variables are as follows:
* \(\eta\) = learning rate
* t = target
* x = neuron value before activation
* \(\varphi\) = activation function
* \(\delta\) = error
* w = weight
#### Iii-B1 Activation functions
**Identity**
The simplest activation function is the identity function. It simply states that the value stays the same, as in equation (1).
\[\varphi(x)=x \tag{1}\]
So the derivation simply is 1 as shown in equation (2).
\[\varphi^{\prime}(x)=1 \tag{2}\]
**Exponential Linear Unit**
In comparison to ReLU and leaky ReLU, ELU "[...] speeds up learning in deep neural networks and leads to higher classification accuracies." [24] and therefore has been chosen over the other options. Equation (3) shows the math, where alpha is a positive value that can be freely chosen.
\[\varphi(x)=\begin{cases}x&\text{if x }\geq 0\\ \alpha*(e^{\text{x}}-1)&\text{if x }<0\end{cases} \tag{3}\]
The derivation for training is shown in equation (4).
\[\varphi^{\prime}(x)=\begin{cases}1&\text{if x }\geq 0\\ \varphi(x)+\alpha&\text{if x }<0\end{cases} \tag{4}\]
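A direct Go translation of equations (3) and (4), with \(\alpha\) hardcoded to 0.5 as in the parameter section below, is a minimal sketch:

```go
package main

import (
	"fmt"
	"math"
)

const alpha = 0.5 // hardcoded, as in the parameter section

// elu implements equation (3).
func elu(x float64) float64 {
	if x >= 0 {
		return x
	}
	return alpha * (math.Exp(x) - 1)
}

// eluPrime implements equation (4).
func eluPrime(x float64) float64 {
	if x >= 0 {
		return 1
	}
	return elu(x) + alpha
}

func main() {
	fmt.Println(elu(1.5), elu(-1.5))           // 1.5, ~-0.388
	fmt.Println(eluPrime(1.5), eluPrime(-1.5)) // 1, ~0.112
}
```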
**SoftMax**
The SoftMax function gives us a classification of the likelihood that the current input represents a certain number. The math is straightforward and shown in equation (5).
\[\varphi_{\text{i}}(\overrightarrow{x})=\frac{e^{\text{x}_{\text{i}}}}{ \sum\limits_{j=1}^{J}e^{\text{x}_{\text{j}}}} \tag{5}\]
But with this equation, exploding or vanishing gradients [25] can become a problem due to the high likelihood of numerical overflow. For the SoftMax activation there is a little "trick": it is possible to add a scalar, as shown in (6), without changing the value of the softmax function [26].
\[\varphi_{i}(\overrightarrow{x^{\prime}})=\frac{e^{x_{i}+S}}{\sum\limits_{j=1}^{J}e^{x_{j}+S}} \tag{6}\]
So, instead of using softmax(x), softmax(z) - with a scalar value of the negative x maximum - has been used, as in equation (7).
\[z_{i}=x_{i}-\max_{j}(x_{j}) \tag{7}\]
If we use the maximum, we push the calculation into the negative number spectrum. So, instead of having values ranging over ]-\(\infty\), \(\infty\)[, they've been shifted to ]0, 1], as in (8).
\[\varphi_{i}(\overrightarrow{z})=\frac{e^{z_{i}}}{\sum\limits_{j=1}^{J}e^{z_{j}}} \tag{8}\]
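In code, the shift-by-maximum trick of equations (6)-(8) is only a few lines. The Go sketch below exponentiates only after subtracting the maximum, so every exponent is at most zero and overflow is avoided.

```go
package main

import (
	"fmt"
	"math"
)

// softmax implements equations (7) and (8): shift by the maximum before
// exponentiating, so every exponent is at most zero.
func softmax(x []float64) []float64 {
	m := x[0]
	for _, v := range x {
		if v > m {
			m = v
		}
	}
	out := make([]float64, len(x))
	sum := 0.0
	for i, v := range x {
		out[i] = math.Exp(v - m)
		sum += out[i]
	}
	for i := range out {
		out[i] /= sum
	}
	return out
}

func main() {
	// Without the shift, math.Exp(1002) would overflow to +Inf.
	fmt.Println(softmax([]float64{1000, 1001, 1002}))
}
```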
Equation (9) derivation for training is a little bit more complicated.
\[\varphi^{\prime}_{\text{i}}(\overrightarrow{x})=\frac{\partial\varphi_{\text {i}}(\overrightarrow{x})}{\partial x_{\text{j}}}=\begin{cases}\varphi_{\text {i}}(\overrightarrow{x})*(1-\varphi_{\text{j}}(\overrightarrow{x}))&\text{i = j}\\ \varphi_{\text{i}}(\overrightarrow{x})*(0-\varphi_{\text{j}}(\overrightarrow{x }))&\text{i \neq j}\end{cases} \tag{9}\]
Mathematicans use (10) to shorten the equation to (11).
\[\delta_{\text{ij}}=\begin{cases}1&\text{i = j}\\ 0&\text{i \neq j}\end{cases} \tag{10}\]
\[\varphi^{\prime}_{\text{i}}(\overrightarrow{x})=\frac{\partial\varphi_{\text {i}}(\overrightarrow{x})}{\partial x_{\text{j}}}=\varphi_{\text{i}}( \overrightarrow{x})*(\delta_{\text{ij}}-\varphi_{\text{j}}(\overrightarrow{x })) \tag{11}\]
But in the end it comes down to the same function as the logistic derivation, shown in equation (12). As the result of the derivation is a diagonal matrix [27] there is no need to calculate the whole matrix.
\[\varphi^{\prime}_{\text{i}}(\overrightarrow{x})=(1-\varphi_{\text{i}}( \overrightarrow{x}))*\varphi_{\text{i}}(\overrightarrow{x})=(1-x)*x \tag{12}\]
#### Iii-B2 Elastic Net Regularization
The elastic net regularization [22] has been used for weight updates.
It's a combination of the lasso regression (13) and the ridge regression (14).
\[L^{1}=\lambda|w| \tag{13}\]
\[L^{2}=\lambda w^{2} \tag{14}\]
The elastic net (15) is simple.
\[ElasticNet=\lambda|w|+\lambda w^{2} \tag{15}\]
Computational optimization yields (16).
\[ElasticNet=\lambda(|w|+w^{2}) \tag{16}\]
For the derivation (17) the signum function (18) is needed.
\[|w|=w*sgn(w) \tag{17}\]
\[sgn(w)=\begin{cases}1&\text{w > 0}\\ 0&\text{w = 0}\\ -1&\text{w < 0}\end{cases} \tag{18}\]
Which leads to (19).
\[ElasticNet^{\prime}=\lambda(sgn(w)+2w) \tag{19}\]
#### Iii-B3 Loss function
Quadratic loss has been chosen as the loss function, although the classification of handwritten digits is, as the name says, a classification problem, and cross-entropy loss should therefore show better results. Different loss functions will be implemented in a future version of this work.
\[L=-\frac{1}{2}\sum\limits_{i}^{nclass}\Big{(}t_{\text{i}}-\varphi(x_{\text{i }})\Big{)}^{2}+\lambda\frac{1}{2}\sum\limits_{i}^{k}\Big{(}|w_{\text{i}}|+{w_{ \text{i}}}^{2}\Big{)} \tag{20}\]
The derivative is the logistic equation, therefore the loss function is equation (21).
\[L^{\prime}=-\sum\limits_{i}^{nclass}\Big{(}t_{\text{i}}-\varphi(x_{\text{i}}) \Big{)}+\lambda\sum\limits_{i}^{k}\Big{(}\frac{1}{2}sgn(w_{\text{i}})+w_{ \text{i}})\Big{)} \tag{21}\]
#### Iii-C4 Forward and backward propagation
All the previous information is needed to understand the forward and backward propagation methods.
**Forward**
After setting the inputs and targets, the first layer of the neural network gets activated. Then for each layer, the next neuron gets excited with the product of the activated value and the weight of the connection between the neurons (22).
\[x_{\mathrm{j}}=\varphi(x_{\mathrm{i}})*w_{\mathrm{ij}} \tag{22}\]
**Backward**
When learning, equation (23) is used to calculate the error.
\[\delta=t-\varphi(x) \tag{23}\]
The formula for the weight update, with learning rate and the regularization in (24).
\[\Delta w=\eta*(\delta*\varphi(x)+\lambda(sgn(w)+w)) \tag{24}\]
The final equation in (25).
\[w_{\mathrm{ij}}^{+}=w_{\mathrm{ij}}-\Delta w, \tag{25}\]
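Putting equations (23)-(25) together for a single connection gives a compact update rule. The Go sketch below uses the learning rate and lambda values chosen in the next subsection and follows equation (24) as written; it is illustrative only and omits the surrounding propagation loops.

```go
package main

import "fmt"

const (
	eta    = 0.8       // learning rate from the parameter section
	lambda = 0.0000001 // elastic net multiplier from the parameter section
)

// sgn implements equation (18).
func sgn(w float64) float64 {
	switch {
	case w > 0:
		return 1
	case w < 0:
		return -1
	}
	return 0
}

// updateWeight applies equations (24) and (25) for one connection:
// delta is the backpropagated error, act the activated neuron value.
func updateWeight(w, delta, act float64) float64 {
	dw := eta * (delta*act + lambda*(sgn(w)+w))
	return w - dw
}

func main() {
	fmt.Println(updateWeight(0.05, -0.2, 0.7)) // illustrative values
}
```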
### _Choosing the parameters_
**ELU alpha:** 0.5 (currently hardcoded)
The alpha value for ELU has been hardcoded as there was no incentive to do otherwise in the current software iteration.
**workerBatch:** 100
The worker batch has been chosen to merge the single neural networks as often as possible, but without losing too much performance due to context switching.
**Minimum/Maximum weight (starting weights):** [-0.1; 0.1]
As the weights are usually getting smaller, when learning occurs, the starting values have to be chosen to be 0.1 instead of 1. This led to the best outcome.
**LearningRate:** 0.8
The learning rate has been set to 0.8, as this led to the best outcome.
**Lambda:** 0.0000001
The multiplicator for the elastic net has been set to this value, as it provided the highest accuracy for the training and test set. As it is hard to tell if either L1 or L2 regularization is the best, there is only one lambda for setting both methods, to achieve a balance between the two methods.
## IV Results/Evaluation
For testing the neural network, two available systems have been chosen: the Lenovo Yoga 2 laptop, as it is a dual core consumer product which utilizes threading and a turbo mode for higher workloads, running 64 Bit Linux; and the Banana Pi M3, as it is a well known octa core home server without threading and without data distortion from turbo mode kicking in, running 32 Bit Linux. Both systems have a standard CPU frequency of 1.80 GHz, although the minimum and maximum values differ.
There are stark differences in computation speed as well as speedup between the Intel and the ARM architecture. As RISC and CISC have lost their meaning for describing newer architectures, it is not possible to draw a definite conclusion here, although the main effect could come from the smaller - and therefore faster access rates of the - Intel L1 and L2 caches, or the lack of an L3 cache in the ARM architecture. Further research would be needed.
### _Benchmark_
When using pprof to check the total CPU usage of the code parts with BenchBatch (it utilizes 4 cores, uses a worker batch of 100 lines, and processes 1 training run with 60,000 MNIST lines as well as 1 test run with 10,000 MNIST lines), it can be seen in Figure 1 that thinking and training take up about 96% of the total time. Thinking takes about 40% of the time, training takes about 56%. Thinking is the forward propagation, training is the backward propagation. Due to that high amount of CPU usage, heavy optimizations were made in these code parts, as these had the greatest effects.
The utility functions only play a marginal role. Even though they wouldn't need any optimization, they've been optimized for general code quality reasons. For example, the garbage collector (mallocgc) is hardly used in the utility functions, and almost never in the main code part. As strings are only converted when needed, these parts of the code - even though they're not impacting the measurements - have been highly optimized. Maybe there's still room for further optimization, but for the general purpose this goal has been exceeded.
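For reference, a CPU profile like the one discussed above can be captured with the standard runtime/pprof package. The snippet below is a minimal harness, with a dummy loop standing in for the training workload; the resulting cpu.prof can then be inspected with `go tool pprof cpu.prof`.

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	// Stand-in for the training/thinking workload measured above.
	sum := 0.0
	for i := 0; i < 10_000_000; i++ {
		sum += float64(i)
	}
	_ = sum
}
```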
Fig. 1: pprof worker CPU profile
### _Test systems_
The final tests were made with a "Lenovo Yoga 2 Pro Multimode Ultrabook" as well as a "Banana Pi M3".
Specifications of the Lenovo: Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz, Dual Core (4 threads)
Min CPU: 800 MHz, Max CPU: 3.0 GHz
32 KiB L1 cache, 256 KiB L2 cache, 4 MiB L3 cache
2x4096 DIMM @ Clockspeed 1600 MHz
64 Bit Linux Ubuntu 18.04.1 LTS
Specifications of the Banana Pi M3:
A83T ARM Cortex-A7 octa-core CPU @ 1.80 GHz, Octa Core (8 threads), 4800 BogoMIPS
ARMv7 Processor rev 5 (v7l)
Min CPU: 480 MHz, Max CPU: 1.8 GHz
512 KiB L1 cache, 1 MiB L2 cache
2GB LPDDR3
32 bit (armv7l) Linux Ubuntu 16.04.5 LTS, MATE Desktop Environment 1.12.1
_1) Lenovo Yoga 2:_ The 252% speedup generated with 4 goroutines on the Lenovo Yoga 2 when utilizing more than 1 processor is clearly visible in Figure 2. It is also visible that using more goroutines than processors slows the execution time down only by an almost negligible amount.
Parallelization speedup comes at a price. Although very small, there is a slight decrease in recognition rates when utilizing more goroutines as shown in Figure 3.
_2) Banana Pi M3:_
When looking at the results of the Banana Pi M3 in Figure 4, it is apparent that utilizing multiple cores leads to an even greater benefit than with the Lenovo. It was possible to generate a 320% speedup with 4 goroutines, and - due to pipelining - it was even possible to generate over 498% speedup when using more goroutines than there were threads available.
The training and test set accuracies look promising too. A 99.26% training set accuracy and a 97.14% test set accuracy with only one core have been accomplished. The accuracy does not get lower when utilizing more cores, even though quality differences in the recognition rate can occur. In Figure 5 it is clearly visible that recognition rate drops can occur at any time.
### _Accuracy growth depending on goroutines_
When only one goroutine is used with the Banana Pi (Figure 6), the neural network starts with a very high recognition accuracy after the first epoch and has a very good learning rate.
With 16 goroutines (Figure 7) the recognition accuracy starts lower and the network takes longer to learn.
Fig. 3: Lenovo Yoga 2 accuracy
Fig. 2: Lenovo Yoga 2 speedup
1 Goroutine, accuracy > 90%/95%/99%:
93.24% accuracy after 1 epoch, 1040 seconds
95.22% accuracy after 2 epochs, 2006 seconds
99.03% accuracy after 15 epochs, 15608 seconds
16 goroutines, accuracy > 90%/95%/99%:
90.18% accuracy after 2 epochs, 392 seconds
95.12% accuracy after 8 epochs, 1583 seconds
99.02% accuracy after 49 epochs, 9628 seconds
To reach a higher accuracy with more goroutines, more epochs and training samples are needed. But the speedup allows training the network in a shorter time - or, to look at it from another perspective, computing more inputs in a much shorter timespan.
## V Lessons Learned
The final part of this work is to look at what has been learned about the "do's and don'ts of implementing neural networks", Go as a language, the drawn conclusion, and possible future work.
### _"To do, or not to do?" of implementing neural networks_
There are certain roads to victory and many paths to development hell. The latter leads to a steeper learning curve and should therefore be preferred when trying to understand the implications of certain design decisions - but under normal circumstances the beaten path is the quicker route. These recommendations for other coders shall make implementing neural networks a little bit easier and shine a light on which thought processes are good and which are impractical to do.
#### V-1 Arrays instead of structs
Do not use one struct instance per neuron, connection, etc. as it has a large overhead. The compiler is able to optimize the usage of arrays. The first iteration of the neural network took hours for just one epoch on the Lenovo, while the array version takes less than a minute.
#### V-2 Only save when necessary
Only save and load data when needed. In the context of the neural network: Save either batchwise or after every epoch. Try to hold the data in the memory as long as possible.
#### V-3 Machine readable is better than human readable
The conversion of data to XML, JSON, or any other human readable format takes a higher amount of computation time, memory, and disk space than machine readable formats. If a human readable format is needed, it should only be created if a human wants to read it and there is a need for them to do so. Sifting through millions of weights and updates is not something a human should do. But, depending on the use case, the human readable format can be created, when
* The process is finished and the results shall be shown.
* An error occurs and the data is necessary to fix it.
Fig. 4: Banana Pi M3 speedup
Fig. 5: Banana Pi M3 accuracy
If human entities want to access data while the process is running (in real time, or stepwise for debugging) there are different approaches:
* Create only one file every few epochs which can be accessed by multiple human entities. Do NOT create it for every entity that accesses the file.
* Duplicate the machine readable results and parse them on a different system. For snapshots a simple ID can be given to every file.
#### V-4 Parallelization and context switches
It takes time to store the states of threads. Data has to be shoved around between CPU caches. If applicable, give a worker as much data as possible, with one trade-off in mind: more data merges mean higher fluctuations and slower computation, while fewer data merges can lead to a more stable convergence and faster computation [28], as well as a higher level of generalization [23] - all while still being able to perform online learning, due to the singular workers performing stochastic gradient descent.
#### V-5 Struct packing
In Go it is possible to pack structs. That means organizing the data types in a way so that they waste the least amount of memory. The principle for this work was "Memory is cheap, but even though students lack the money to buy some, there is no need to overdo it". But one late evening (at 7 o'clock in the morning) these principles had been thrown overboard. So structs have been packed.
#### V-6 Slices
Do not loop through lines of data to append them to a batch line by line. Use the slice functionality of Go - which passes slices by reference - if applicable. The following data has been taken from the old model that used the updated weights for the error calculation (Figure 8).
For example, the code in Figure 9 takes 129.68 seconds for 1 training and 1 testing with the MNIST dataset, 4 cores, and a worker batch setting of 100, as shown in Figure 10.
In comparison, utilizing slices to send the worker batches to the workers, as shown in Figure 11 - instead of making a slice and appending the data lines within core_run() - saved about 17 seconds on the Lenovo, as is visible in Figure 12.
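The slicing point boils down to passing subslice views, which share the backing array, instead of appending lines one by one into a fresh batch. The Go sketch below illustrates the pattern with made-up data; core_run() in the real code dispatches such batches to the workers.

```go
package main

import "fmt"

func main() {
	lines := [][]float64{{1}, {2}, {3}, {4}, {5}, {6}}
	const workerBatch = 2
	ch := make(chan [][]float64, 3)
	for start := 0; start < len(lines); start += workerBatch {
		end := start + workerBatch
		if end > len(lines) {
			end = len(lines)
		}
		ch <- lines[start:end] // subslice view: shares the backing array, no per-line append
	}
	close(ch)
	for batch := range ch {
		fmt.Println(batch)
	}
}
```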
Fig. 8: Code changes from wrong to correct code
Fig. 6: Accuracy growth with 1 Goroutine
Fig. 7: Accuracy growth with 16 goroutines
Some sources provided the right activation functions, but used the old equations from the first two versions of the ELU paper [24]. There is a small effect on the derivative functions when x = 0. For example with ELU: the old and new equation are only equal when \(\alpha\) = 1. Otherwise, when x = 0, f'(x) should be \(\alpha\), not 1.
One trusted source - an excerpt handed out in a university module without citing its origin, which was the basis for a graded assignment to calculate a forward, backward, and forward propagation by hand and to create a simple neural network - used the updated weights as the basis for the backpropagation.
There was a larger accuracy drop (Figure 15) when using multiple goroutines.
Although it was possible to see when to tweak the parameters to gain a higher accuracy with a single core (Figure 16), no practical use of these values was found for the correct implementation - the hyperparameters vary widely, so they have to be tuned differently.
Mathematical sources are also often not the best basis for calculations in information systems: pure mathematics is spared the problems of overflow and underflow errors (except when using calculators or information systems), and there is no need to optimize for memory or computation speed.
The problem with trusting the wrong data has been solved with further research from different sources and by consulting a mathematician to check whether the partial derivatives and all formulas had been implemented correctly.
Fig. 16: Accuracy with wrong parameters
Fig. 14: Benchmark after code changes
Fig. 15: Old Plot showing the accuracy decrease
The error was then found very quickly when checking against the standard reference [26].
### _Comment on Go_
Go is a wonderful language to write code. Implementation and testing of the neural network seemed to be easier than with other programming languages. But go also has some drawbacks (as does any language).
The main annoyance was the "unused imports" errors. Sometimes only certain outputs are needed for testing, which get dropped by the developer immediately afterwards. It's good that the Go compiler treats these oversights as errors, even though they are a huge annoyance. A probably better way would be if unused imports were not treated as errors in a debug environment, only in production. But this would have additional drawbacks when Go is used in environments where code quality is not highly valued.
Another annoyance is the "sudden 'bad file descriptor' of doom". Sometimes it's just a "data reader error: file already closed". It was not possible to pin down what exactly causes the error, only that it affects the file as a whole. Not even deleting and creating a new file with the same name helps to overcome that error. Further testing is needed.
An additional observation that can ruin one's day is that the Go compiler, at least under Linux, sometimes accesses trashed files. There is no problem when files are overwritten by new files. But if a file gets deleted and a new one is inserted instead, Go sometimes seems to try to compile the deleted file, which can lead to hard-to-trace errors. If, for example, the compiler reports that it expects an integer value even though the code provides one, and a recently trashed file contained the same function expecting a double value, simply empty the trash bin.
Another "hard to debug except when you know it" part is: "panic: runtime error: invalid memory address or nil pointer dereference". This error occurs when the object has not been created with new(...). If it's further up in the code, i.e. some struct attribute, this error is not easy to find. When starting with Go that panic tells almost nothing about its nature.
Circular dependencies are not allowed. They can happen while refactoring code or when making some design mistake. It's good that Go does not permit them as they are a sign of bad software design.
The short variable declaration := is very handy. Go recognizes the type and assigns the value to the left-hand variable. The best part: it won't break type safety, which prevents weird behavior.
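A minimal illustration of both points:

```go
package main

import "fmt"

func main() {
	rate := 0.01 // inferred as float64
	epochs := 10 // inferred as int
	// rate = epochs // compile error: type safety is preserved
	fmt.Println(rate, epochs)
}
```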
With the test coverage profiler it is easy to see the current code coverage. There is also the possibility to create test heat maps and to show the test coverage in the browser with highlighting good, poor, or not covered code parts2.
Footnote 2: Go test coverage and html heatmap: [https://blog.golang.org/cover](https://blog.golang.org/cover)
There are memory and cpu profilers, and even a profiling tool3. It is easy to list the top cpu or memory consumers or show a profiling web. Therefore memory issues can be found easily, as well as slow code parts.
Footnote 3: Go profiling: [https://blog.golang.org/profiling-go-programs](https://blog.golang.org/profiling-go-programs)
Go uses function inlining which is a great method for speeding up code.
Goroutines are very lightweight4. As they're very efficient and only start to run if they get data from a channel, there's the possibility of parallelizing individual neurons instead of only whole networks; a minimal worker sketch is given after the footnote below.
Footnote 4: Currently 2, 4, or 8 kB per goroutine, depending on the version, i.e. [https://github.com/golang/go/issues/7514](https://github.com/golang/go/issues/7514)
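A minimal sketch of this worker pattern (hypothetical worker and batch sizes; not the paper's worker code): each goroutine blocks on a channel and only runs when a batch arrives.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	batches := make(chan []float64)
	var wg sync.WaitGroup
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// The goroutine sleeps here until data arrives on the channel.
			for batch := range batches {
				fmt.Printf("worker %d processes %d lines\n", id, len(batch))
			}
		}(w)
	}
	for i := 0; i < 8; i++ {
		batches <- make([]float64, 100)
	}
	close(batches)
	wg.Wait()
}
```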
It's easy to find and fix race conditions with Go as it comes with its own race detector.
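As an illustration, a minimal program containing a deliberate data race; running it with `go run -race` makes the race detector report the unsynchronized access:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	sum := 0.0
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sum += 1.0 // unsynchronized write; the race detector flags this line
		}()
	}
	wg.Wait()
	fmt.Println(sum)
}
```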
## VI Conclusion and Future work
It has been learned how to use the programming language Go and about its parallel speedup possibilities. The main accomplishment of this work is the creation of a stable and fast neural network. The hardest part was to understand the mathematical concepts and ramifications behind neural networks and how to implement them in software.
The main focus of this work was to see how the parallel speedup of a neural network behaves with the language Go. Due to time and resource restrictions, only few deviations from the main focus were made. There are still ways left to make this neural network even more efficient, with higher accuracy, and so on. The current version could have some memory leaks; they will be fixed in a future version. As there will be further changes due to development and additional insights, the code will probably be refined and refactored in the future.
Some parts of the code are still untested - mainly file reading and writing. As they work as intended no additional effort has been made to get 100% test coverage in these areas. Here is room for improvement.
Optimization of the neural network would be the largest part of the future work. Currently it is only a simple network with bias. It would be possible to implement momentum [29] and other artifacts to achieve higher accuracies. NADAM and other stochastic gradient descent optimization algorithms [28] could be implemented too.
Smaller changes will also include several options, for example whether the user wants bias nodes, which error severity to log, and different lambdas for the L1 and L2 regularization in the elastic net. Adaptive learning rates [29] would be of interest too. Different loss functions, especially Cross Entropy Loss [30], will be implemented in the future.
There is an interest to look into Self-Normalizing Neural Networks [31].
|
2309.08849 | Learning a Stable Dynamic System with a Lyapunov Energy Function for
Demonstratives Using Neural Networks | Autonomous Dynamic System (DS)-based algorithms hold a pivotal and
foundational role in the field of Learning from Demonstration (LfD).
Nevertheless, they confront the formidable challenge of striking a delicate
balance between achieving precision in learning and ensuring the overall
stability of the system. In response to this substantial challenge, this paper
introduces a novel DS algorithm rooted in neural network technology. This
algorithm not only possesses the capability to extract critical insights from
demonstration data but also demonstrates the capacity to learn a candidate
Lyapunov energy function that is consistent with the provided data. The model
presented in this paper employs a straightforward neural network architecture
that excels in fulfilling a dual objective: optimizing accuracy while
simultaneously preserving global stability. To comprehensively evaluate the
effectiveness of the proposed algorithm, rigorous assessments are conducted
using the LASA dataset, further reinforced by empirical validation through a
robotic experiment. | Yu Zhang, Yongxiang Zou, Haoyu Zhang, Xiuze Xia, Long Cheng | 2023-09-16T03:03:53Z | http://arxiv.org/abs/2309.08849v6 | Learning a Stable Dynamic System with a Lyapunov Energy Function for Demonstratives Using Neural Networks
###### Abstract
Autonomous Dynamic System (DS)-based algorithms hold a pivotal and foundational role in the field of Learning from Demonstration (LfD). Nevertheless, they confront the formidable challenge of striking a delicate balance between achieving precision in learning and ensuring the overall stability of the system. In response to this substantial challenge, this paper introduces a novel DS algorithm rooted in neural network technology. This algorithm not only possesses the capability to extract critical insights from demonstration data but also demonstrates the capacity to learn a candidate Lyapunov energy function that is consistent with the provided data. The model presented in this paper employs a straightforward neural network architecture that excels in fulfilling a dual objective: optimizing accuracy while simultaneously preserving global stability. To comprehensively evaluate the effectiveness of the proposed algorithm, rigorous assessments are conducted using the LASA dataset, further reinforced by empirical validation through a robotic experiment.
Learning from demonstrations, Autonomous dynamic system, Neural Lyapunov energy function.
## I Introduction
The popularity of robotics is growing with advancements in technology, and robots are expected to possess greater intelligence to perform complex tasks comparable to those performed by humans. However, programming them, especially for non-experts, is challenging. In such situations, Learning from Demonstration (LfD) is a user-friendly approach where robots acquire skills by observing demonstrations without relying on an explicit script or defined reward functions [1].
In practical applications, achieving stability is crucial for ensuring that learned skills can effectively converge to the desired target state. Dynamic Systems (DS) represent powerful tools that offer a versatile solution for modeling trajectories and generating highly stable real-time motion, as highlighted in [2]. This superiority over traditional methods, such as interpolation techniques, underscores the value of DS in various contexts.
One notable advantage of Autonomous DS is their ability to encode the task's target state as a stable attractor, making them inherently resilient to perturbations. Consequently, DS can formulate stable motions from any starting point within the robot's workspace. This inherent stability facilitates seamless adaptation to new situations while maintaining robust performance. Furthermore, DS possess the capability to dynamically adjust the robot's trajectory to accommodate changes in the target position or unexpected obstacles, as discussed in reference [3]. This adaptability further enhances their utility in real-world scenarios.
Given the paramount importance of stability, ensuring the stability of DS is a critical consideration. One elegant approach to guaranteeing stability involves introducing a Lyapunov function, which is a positive-definite, continuously differentiable energy function. This Lyapunov function serves as a robust means of ensuring stability. The first work that combines Lyapunov theorems with DS learning is known as the Stable Estimator of Dynamical Systems (SEDS) [2]. In SEDS, the objective is to learn a globally asymptotically stable DS, which is represented by Gaussian Mixture Models (GMM) and Gaussian Mixture Regression (GMR), while adhering to Lyapunov stability constraints.
However, it's worth noting that the quadratic Lyapunov function utilized in SEDS can impose limitations on the accuracy of reproduction. Researchers have come to recognize that imposing overly strict stability constraints can curtail the accuracy of learning from demonstrations, leading to less precise reproductions. This trade-off between accuracy and stability is often referred to as the "accuracy vs. stability dilemma." [4].
To address the problem, researchers have proposed several modified approaches aimed at enhancing reproduction accuracy while adhering to stability constraints. One such approach is the Control Lyapunov Function-based Dynamic Movements (CLF-DM) algorithm, introduced in [5]. This algorithm operates in three steps: In the first step, the algorithm learns a valid Lyapunov function that is approximately consistent with the provided data. In the second step, state-of-the-art regression techniques are employed to model an estimate of the motion based on the demonstrations. The final step focuses on ensuring the stability of the reproduced motion in real-time by solving a constrained convex optimization problem.
While CLF-DM has the advantage of being able to learn from a broader set of demonstrations, it should be noted that the algorithm requires solving optimization problems in each step to maintain stability. This complexity and sensitivity to parameters are important considerations.
Another approach presented in reference [4] introduces a diffeomorphic transformation called "\(\tau\) -SEDS." This transformation is designed to project the data into a different space, with the primary aim of improving accuracy while preserving the system's stability.
Compared to traditional methods, neural network-based
algorithms [6, 7, 8, 9, 10] have proven to be highly effective for learning from demonstrations. These neural networks can be thought of as versatile fitting functions, and when appropriately structured or constrained, they can generate trajectories that converge to satisfy a Lyapunov function. This Lyapunov function can either be learned by another neural network or manually designed. In the pursuit of improving accuracy in reproduction, some researchers have proposed neural networks dedicated to estimating the Lyapunov function based on demonstration data, as demonstrated in [11]. This approach helps enhance reproduction accuracy by reducing violations of the Lyapunov function observed in the demonstration data. However, despite these efforts, the simplicity of the neural network structure in [11] may still leave room for improvement in reproduction accuracy.
To address this limitation, references [6, 7, 8, 9, 10] have introduced invertible neural network structures. These structures are used to transform the original DS into a new, simplified DS that possesses inherent properties that ensure convergence to the target. The advantage of using invertible transformations is that the fitted DS inherits stability from the simplified DS. However, it's important to note that invertible neural networks often have limitations in terms of their nonlinear fitting capabilities. To compensate for this, multiple layers of invertible neural networks may need to be stacked, which can increase computational costs and time.
To address the challenges commonly encountered by neural network-based algorithms, in this paper, a novel neural network-based approach is proposed that eliminates the need for invertible transformations. The key contributions of this paper are summarized as follows:
* A novel neural network architecture is proposed that is capable of directly learning a Lyapunov candidate function from demonstration data.
* The proposed neural network exhibits the ability to directly output state differentiations, aligning seamlessly with the requirements of DS. Leveraging the learned Lyapunov candidate function, the generated trajectories are guaranteed to converge to the desired target, offering enhanced stability and accuracy in the learning process.
* The experimental results conclusively show that the proposed algorithm possesses the versatility to not only learn effectively from a single demonstration but also learn from multiple demonstrations. This adaptability underscores the robustness and practicality of the proposed approach in a variety of learning scenarios.
The paper is organized as follows: Section II provides a comprehensive discussion of the problem formulations. Section III presents the details of the neural networks. Section IV reports the evaluation results of the proposed algorithm; this evaluation includes simulations conducted on various handwriting trajectories sourced from the LASA dataset [12], as well as experiments carried out on the Franka Emika robot. Section V concludes the paper.
## II Problem Formulation
When a person or a robot engages in a point-to-point task, the motions involved can often be effectively modeled using an autonomous DS as:
\[\dot{\mathbf{x}}=f(\mathbf{x}),\ \forall\mathbf{x}\in\mathbb{R}^{d}, \tag{1}\]
where \(f:\mathbb{R}^{d}\mapsto\mathbb{R}^{d}\) is a continuous and continuously differentiable function, which is described to have a single equilibrium state, denoted as \(\mathbf{x}^{*}\). This equilibrium state corresponds to the target state of the task and can be considered as the attractor of the DS. To simplify the analysis and without sacrificing generality, the targets of the motions are assumed to be positioned at the origin of the Cartesian coordinate system by \(\tilde{\mathbf{x}}=\mathbf{x}-\mathbf{x}^{*}\). Equation (1), which can also describe the manipulation skills acquired from demonstration data, yields a solution \(\Phi(\mathbf{x}_{0},t)\) that represents a valid trajectory generated by the Autonomous DS when provided with an initial state \(\mathbf{x}_{0}\). The ideal motions can then be obtained by adding this trajectory to the target state \(\mathbf{x}=\tilde{\mathbf{x}}+\mathbf{x}^{*}\). Consequently, by altering the initial conditions \(\mathbf{x}_{0}\), different solution trajectories leading to the target state \(\mathbf{x}^{*}\) can be generated. This flexibility allows for various trajectories to be produced, each tailored to different initial conditions.
The demonstration data is typically recorded in the following format: \(\left\{\mathbf{x}_{k,n},\dot{\mathbf{x}}_{k,n}\ |\ k=1,2,...,K_{n};\ n=1,2,...,N_{d} \right\},\) where \(n\) represents the index of the demonstrations and \(k\) is the \(k\)-th sampling time. \(N_{d}\) denotes the total number of demonstrations and \(K_{n}\)\((n=1,2,...,N_{d})\) represents the whole sampling number of the \(n\)-th demonstrated motion. The state variable \(\mathbf{x}\) signifies the condition or configuration of either a human or a robot. Typically, in the context of robotic systems, it corresponds to the spatial coordinates of the robot's end-effector. Alternatively, in the case of human involvement, it characterizes the spatial position of the human hand within Cartesian space. \(\dot{\mathbf{x}}\) is the first-order time derivative of \(\mathbf{x}\). Moreover, it's important to note that all the demonstrations for a given task share the same target state. These demonstrations are assumed to align with an Autonomous DS as described in Equation (1). This alignment allows for the modeling of the system in a parametric fashion, which can be represented as:
\[\dot{\hat{\mathbf{x}}}=\hat{f}(\mathbf{x},\mathbf{\theta}),\ \forall\mathbf{x}\in\mathbb{R}^{d}. \tag{2}\]
where \(\dot{\hat{\mathbf{x}}}\) denotes the estimate of \(\dot{\mathbf{x}}\), \(\hat{f}\) is the model designed manually to approximate the DS in (1), and \(\mathbf{\theta}\) denotes the parameter in the model which can be learned. The optimal parameter configuration, denoted as \(\mathbf{\theta}^{*}\), is obtained through the minimization of the following objective function:
\[J(\mathbf{\theta})\propto\sum_{n=1}^{N_{d}}\sum_{k=1}^{K_{n}}\left\|\hat{\hat{\bm {x}}}_{k,n}-\dot{\mathbf{x}}_{k,n}\right\|^{2}, \tag{3}\]
where \(\propto\) denotes the proportionality relation, \(\left\|\cdot\right\|^{2}\) is the \(l_{2}\)-norm and \(\hat{\hat{\mathbf{x}}}_{k,n}\) represents the estimate of \(\dot{\mathbf{x}}_{k,n}\). Therefore, the objective function compels the learned model to generate precise reproductions that closely match the demonstrated behavior. Nonetheless, it's crucial to note that the objective
function alone does not guarantee the stability of the model. An Autonomous DS is considered locally asymptotically stable at the point \(\mathbf{x}^{*}\) if a continuous and continuously differentiable Lyapunov candidate function \(V(\mathbf{x}):\mathbb{R}^{d}\rightarrow\mathbb{R}\) exists, satisfying the following conditions:
\[\begin{cases}(a)V(\mathbf{x}^{*})=0,\\ (b)\dot{V}(\mathbf{x}^{*})=0,\\ (c)V(\mathbf{x})>0:\forall\mathbf{x}\neq\mathbf{x}^{*},\\ (d)\dot{V}(\mathbf{x})<0:\forall\mathbf{x}\neq\mathbf{x}^{*}.\end{cases} \tag{4}\]
Furthermore, if the radially unbounded condition as
\[\lim_{\|\mathbf{x}\|\rightarrow+\infty}V(\mathbf{x})=+\infty, \tag{5}\]
is satisfied, the DS is globally asymptotically stable.
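As a simple textbook illustration of these conditions (not one of the demonstration tasks considered later), take \(V(\mathbf{x})=\frac{1}{2}\mathbf{x}^{\mathrm{T}}\mathbf{x}\) and the DS \(\dot{\mathbf{x}}=-\mathbf{x}\):

\[V(\mathbf{0})=0,\qquad V(\mathbf{x})=\tfrac{1}{2}\|\mathbf{x}\|^{2}>0,\qquad\dot{V}(\mathbf{x})=\mathbf{x}^{\mathrm{T}}\dot{\mathbf{x}}=-\|\mathbf{x}\|^{2}<0\quad\forall\,\mathbf{x}\neq\mathbf{0},\]

and \(V\) is radially unbounded, so the origin is globally asymptotically stable.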
Incorporating stability considerations into the DS learning process to ensure the robot's arrival at the desired target state may impose limitations on model accuracy. This is because striving for accurate reproductions akin to the demonstration data might lead to violations of the pre-defined Lyapunov candidate function. However, deriving a suitable Lyapunov candidate function analytically from the demonstration data is a challenging task, rendering the search for a satisfactory solution quite difficult.
In this paper, a data-driven approach to learning the Lyapunov candidate function with neural networks is proposed. This method offers an alternative that bypasses the need to design a Lyapunov candidate function manually, facilitating the achievement of stability in the DS learning process, as introduced in the following section.
## III The Proposed Approach
To construct a Lyapunov energy function, the most straightforward way is to establish it as:
\[V(\mathbf{x})=\frac{1}{2}\mathbf{x}^{\mathrm{T}}\mathbf{x}. \tag{6}\]
Given the universal fitting capabilities of neural networks, the candidate Lyapunov energy function can be formulated as follows:
\[V(\mathbf{x})=\frac{1}{2}g(\mathbf{x})^{\mathrm{T}}g(\mathbf{x}), \tag{7}\]
where \(g\) is the function fitted by the neural networks. Denoting \(\mathbf{y}=g(\mathbf{x})\) as shown in Fig. 1, equation (7) becomes:
\[V(\mathbf{x})=\frac{1}{2}\mathbf{y}^{\mathrm{T}}\mathbf{y}. \tag{8}\]
To ensure the stability of the system, it is essential to calculate the derivative of the energy function, which can be expressed as follows:
\[\dot{V}(\mathbf{x})=\mathbf{y}^{\mathrm{T}}\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\dot{ \mathbf{x}}. \tag{9}\]
When \(\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\) is invertible, \(\dot{\mathbf{x}}\) can be calculated as:
\[\dot{\mathbf{x}}=(\frac{\partial\mathbf{y}}{\partial\mathbf{x}})^{-1}\dot{\mathbf{y}}, \tag{10}\]
where \(\dot{\mathbf{y}}\) is the designed variable in this paper. Substituting the designed \(\dot{\mathbf{x}}\) from (10) into (9), equation (9) becomes:
\[\dot{V}(\mathbf{x})=\mathbf{y}^{\mathrm{T}}\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\Big{(}\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\Big{)}^{-1}\dot{\mathbf{y}}=\mathbf{y}^{\mathrm{T}}\dot{\mathbf{y}}. \tag{11}\]
Without loss of generality, the motion targets are positioned at the origin of the Cartesian coordinate system, and the convergence point of \(\mathbf{y}\) also coincides with the origin. As a result, when the following conditions are met, the global stability of the DS is ensured.
* \(\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\) is invertible, which is the prerequisite for designing \(\dot{\mathbf{x}}\).
* An appropriate \(\dot{\mathbf{y}}\) guarantees that \(\mathbf{y}^{\mathrm{T}}\dot{\mathbf{y}}<0\), which is required by the Lyapunov stability condition.
* Only the target point of \(\mathbf{x}^{*}\) should map to the convergence point of \(\mathbf{y}\). In cases where the transformation is not invertible, and multiple points of \(\mathbf{x}\) map to the convergence point of \(\mathbf{y}\), then there is a risk that the original DS may converge to an undesired equilibrium point, which can result in unintended behavior or outcomes.
The first problem can be readily addressed by employing a residual structure for the design of \(\mathbf{y}\), defined as:
\[\mathbf{y}=m(\mathbf{x})+\mathbf{x}, \tag{12}\]
where \(m(\cdot)\) represents neural networks. In this context, the non-invertibility of \(\frac{\partial\mathbf{y}}{\partial\mathbf{x}}\) occurs only when one of the eigenvalues of \(\frac{\partial m(\mathbf{x})}{\partial\mathbf{x}}\) is equal to \(-1\). However, this condition is highly unlikely when using neural network structures, making it a practical and effective solution. The second problem can be solved directly by designing \(\dot{\mathbf{y}}\) as \(\dot{\mathbf{y}}=-\mathbf{y}\). An alternative method is to learn the \(\dot{\mathbf{y}}\) using a neural network. To ensure compliance with the stability condition, it can be expressed as follows:
\[\dot{\mathbf{y}}=\begin{cases}n(\mathbf{y})&if\ \mathbf{y}^{\mathrm{T}}n(\mathbf{y})<0,\\ -\beta\mathbf{y}&otherwise,\end{cases} \tag{13}\]
Fig. 1: The overall structure of the proposed algorithm for learning a DS while simultaneously learning a Lyapunov energy function.
where \(n(\cdot)\) is a neural network and \(\beta\) is a manually designed positive constant. Taking inspiration from the research conducted by [13] and simultaneously learning both \(\dot{\mathbf{y}}\) and \(\dot{\mathbf{x}}\), the equation (13) can be reformulated as:
\[\dot{\mathbf{y}}=n(\mathbf{y})-(n(\mathbf{y})+\beta\mathbf{y})\frac{\mathrm{Relu}(\mathbf{y}^{ \mathrm{T}}n(\mathbf{y}))}{\mathbf{y}^{\mathrm{T}}n(\mathbf{y})}, \tag{14}\]
where \(\mathrm{Relu}\) is an element-wise calculated function characterized by the following form:
\[\mathrm{Relu}(s)=\begin{cases}s&\quad\mathrm{if}\ s>0,\\ 0&\quad\mathrm{otherwise}.\end{cases} \tag{15}\]
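One can check that (14) reproduces both cases of (13): if \(\mathbf{y}^{\mathrm{T}}n(\mathbf{y})<0\), the \(\mathrm{Relu}\) term vanishes and \(\dot{\mathbf{y}}=n(\mathbf{y})\); if \(\mathbf{y}^{\mathrm{T}}n(\mathbf{y})>0\), the ratio equals \(1\) and

\[\dot{\mathbf{y}}=n(\mathbf{y})-\big{(}n(\mathbf{y})+\beta\mathbf{y}\big{)}=-\beta\mathbf{y},\]

so in both cases \(\mathbf{y}^{\mathrm{T}}\dot{\mathbf{y}}<0\) holds for \(\mathbf{y}\neq\mathbf{0}\).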
To address the third problem, the neural network structure can be configured in a way that ensures the output becomes a zero vector only when the input is a zero vector. There are two approaches to achieve this:
Firstly, employ a suitable activation function, such as \(\mathrm{Softplus}\), that ensures the output is always greater than zero; then multiply this output elementwise with \(\mathbf{x}\) and add \(\mathbf{x}\), so that the result vanishes only at \(\mathbf{x}=\mathbf{0}\). Secondly, configure the neural network without bias but with an appropriate activation function, like \(\mathrm{Tanh}\), to guarantee that the output is a zero vector when the input is a zero vector. Additionally, set a portion of the neural network's weight matrices to be identity matrices. This keeps the weight matrices of full rank, so they cannot produce a zero vector from a non-zero input.
In our paper, both of these methods are utilized, and the expression for \(\mathbf{y}\) corresponding with the illustration in Fig. 1 that is defined as follows:
\[\begin{cases}g_{1}(\mathbf{x})=m_{1}(\mathbf{x})\odot\mathbf{x},\\ \mathbf{y}_{1}=\mathbf{x}+g_{1}(\mathbf{x}),\\ g_{2}(\mathbf{y}_{1})=m_{2}(\mathbf{y}_{1}),\\ \mathbf{y}=g_{2}(\mathbf{y}_{1})+\mathbf{y}_{1}\end{cases} \tag{16}\]
where \(\odot\) denotes Hadamard product, \(m_{1}(\cdot)\) is the neural network with \(\mathrm{Softplus}\) activation function and \(m_{2}(\cdot)\) is the neural network with \(\mathrm{Tanh}\) activation function and no bias. Then \(\mathbf{y}\) is formulated as follows:
\[\mathbf{y}=\mathbf{x}+m_{1}(\mathbf{x})\odot\mathbf{x}+m_{2}\big{(}m_{1}(\mathbf{x})\odot\mathbf{x}+\mathbf{x}\big{)}, \tag{17}\]
which is consistent with the residual setting in equation (12).
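A quick sanity check of the third condition in one direction: since \(m_{2}(\cdot)\) uses \(\mathrm{Tanh}\) without bias, \(m_{2}(\mathbf{0})=\mathbf{0}\), and therefore

\[\mathbf{x}=\mathbf{0}\;\Rightarrow\;g_{1}(\mathbf{0})=\mathbf{0}\;\Rightarrow\;\mathbf{y}_{1}=\mathbf{0}\;\Rightarrow\;\mathbf{y}=\mathbf{0};\]

the identity-weight construction described above is what rules out additional non-zero preimages of \(\mathbf{y}=\mathbf{0}\).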
Consequently, the proposed algorithm involves three neural networks: \(m_{1}(\cdot)\), \(m_{2}(\cdot)\), and \(n(\cdot)\). In this paper, all the neural networks are configured with three layers; each layer of \(m_{1}(\cdot)\) and \(n(\cdot)\) comprises 20 neurons, while each layer of \(m_{2}(\cdot)\) comprises 40 neurons.
## IV Experiment Results and Discussions
To comprehensively assess the efficacy of the proposed algorithm, a rigorous evaluation protocol encompassing both simulation and real-world experimentation has been implemented. The results are summarized as follows:
### _Simulation Results_
The initial assessment of the proposed algorithm was carried out using the LASA dataset, which comprises a comprehensive collection of handwritten letter trajectories.
Fig. 2: The simulation of the LASA dataset using the proposed algorithm is depicted. The images with a dark background showcase the learned vector fields. Within these vector field visuals, the white dotted lines represent the original demonstration data, while the red solid lines correspond to the reproductions, all originating from the same initial points. The target points are denoted as “\(x\)” in these visuals. Furthermore, the transformed reproduction trajectories (consistent with \(\mathbf{y}\) as shown in Fig. 1) are presented with a white background. Within this context, the solid points indicate the starting positions for these trajectories.
To evaluate the proposed algorithm, accuracy was quantified through two error metrics: the Swept Error Area (SEA) [5] and the Velocity Root Mean Square Error (\(V_{rmse}\)) [14]. The SEA scores serve as an indicator of the algorithm's capability to faithfully replicate the shapes of the demonstrated motions. On the other hand, the \(V_{rmse}\) metric gauges the algorithm's proficiency in preserving the velocities of the demonstrated motions. A lower SEA score signifies superior accuracy in reproducing the trajectories with the same initial start points, while a lower \(V_{rmse}\) value indicates that the reproductions closely match the smoothness of the original demonstrations. These metrics collectively assess the performance of the reproduced trajectories by quantifying their similarity to the demonstrated trajectories.
For the implementation of the proposed algorithm, the "Adam" optimization method was adopted, adhering to the default hyper-parameters. Specifically, the initial learning rate was set to \(1\times 10^{-3}\), with a decay rate of \(0.99\). Subsequently, the learned DS was utilized to generate trajectories originating from the same starting points with the same steps as the demonstrations, facilitating an assessment of the trade-off between stability and accuracy. In this paper, all demonstration data from the LASA dataset were utilized, with trajectories generated from all available starting points. Prior to processing, the data underwent normalization, bringing it within the range of \([-1,1]\). The algorithm's parameters were initialized with random values, and the maximum number of iterations was set to 2000, with a mini-batch size of 64. For the sake of equitable comparisons, the DS algorithms SEDS and CLF-DM, for which source code was readily accessible, were chosen. The parameters for these comparative algorithms were configured to match the values originally specified in their respective references, ensuring a fair benchmark.
Fig. 2 presents a visual representation of the vector fields (depicted with dark background) and the transformed trajectories generated by the proposed DS algorithm for 29 examples drawn from the LASA dataset. Notably, the reproduced trajectories (depicted in red) closely align with the original demonstrations (depicted in white). It is worth emphasizing that regardless of whether the initial point of the DS matches the starting point of the demonstrations, the reproduced DS consistently converges towards the intended goal. Furthermore, the last three images in the third row and first image in the fourth row of Fig. 2 illustrate more intricate motions. For instance, in the third image of the third row, three different types of demonstrations commence from distinct initial points but all culminate at the same target point, underscoring the algorithm's versatility and effectiveness.
The quantified results are shown in Table I. The results clearly demonstrate that the proposed algorithm outperforms the other methods in terms of trajectory reproduction accuracy. Notably, the proposed algorithm achieves the best (lowest) SEA score and the best \(V_{rmse}\) value, showcasing substantial improvements of approximately \(15.56\%\) in SEA score and approximately \(13.23\%\) in \(V_{rmse}\) when compared to CLF-DM.
Furthermore, a noteworthy observation stems from the images presented in Fig. 2 with a white background, which represent the transformed trajectories denoted as \(\boldsymbol{y}\) in Fig. 1. To elucidate, consider the case of the letter "J" as an illustration. These transformed trajectories manage to retain certain essential characteristics of the original trajectories while scaling the two-dimensional coordinates to align with the simplest Lyapunov energy function. This innovative approach, albeit seemingly straightforward at first glance, yields intriguing results when addressing the problem at hand.
The proposed algorithm can also learn the vector field from a single demonstration; four results are shown in Fig. 3. The results show that the proposed algorithm learns well from one demonstration, and the resulting vector field is similar to the vector field learned from multiple demonstrations, which is consistent with common sense.
Incorporating the direct design of \(\dot{\boldsymbol{y}}\) as \(\dot{\boldsymbol{y}}=-\boldsymbol{y}\) into the modeling approach is another way of simultaneously learning a stable DS with a Lyapunov energy function. In Fig. 5, the simulation results for the same data previously examined in Fig. 3 are presented using this approach. It becomes evident that the endpoints of the reproduced trajectories for \(P\) and \(S\) deviate slightly from the target point. This discrepancy implies that the reproduced velocities differ significantly from the demonstrated ones. Furthermore, the reproduction of \(G\) does not align with the demonstrated trajectory. Consequently, this observation suggests that when employing this method, the model's fitting capacity falls short of expectations.
### _Validation on Robot_
Experiments were conducted to validate the proposed algorithm using a Franka Emika robot. The model was trained using data from the 'Multi-Models-1' subset of the LASA dataset, and the learned DS was then executed on a whiteboard using a pen held by the robot.
Throughout the experiment, the robot continuously acquired its end-effector's position and utilized the proposed algorithm to calculate the corresponding velocity.
To evaluate the algorithm's performance, four random points on the whiteboard were selected. The results demonstrated that the proposed algorithm generated trajectories that converged toward manually defined target points. These trajectories closely followed the vector field, as illustrated in the third figure of the third row in Fig. 2.
## V Conclusions
In this paper, a neural network-based algorithm designed for learning from single or multiple demonstrations is introduced. The algorithm's performance is assessed through both simulated scenarios featuring diverse handwriting examples and practical experiments involving a physical robot. The experimental outcomes provide compelling evidence regarding the efficacy of the proposed method.
Nevertheless, it is crucial to acknowledge that the proposed algorithm relies on neural networks, and the model training process is time-consuming compared to conventional DMPs or GMMs. This aspect presents an opportunity for future research, aimed at enhancing the algorithm's efficiency and reducing its training time.
|
2302.14690 | On the existence of minimizers in shallow residual ReLU neural network
optimization landscapes | Many mathematical convergence results for gradient descent (GD) based
algorithms employ the assumption that the GD process is (almost surely) bounded
and, also in concrete numerical simulations, divergence of the GD process may
slow down, or even completely rule out, convergence of the error function. In
practical relevant learning problems, it thus seems to be advisable to design
the ANN architectures in a way so that GD optimization processes remain
bounded. The property of the boundedness of GD processes for a given learning
problem seems, however, to be closely related to the existence of minimizers in
the optimization landscape and, in particular, GD trajectories may escape to
infinity if the infimum of the error function (objective function) is not
attained in the optimization landscape. This naturally raises the question of
the existence of minimizers in the optimization landscape and, in the situation
of shallow residual ANNs with multi-dimensional input layers and
multi-dimensional hidden layers with the ReLU activation, the main result of
this work answers this question affirmatively for a general class of loss
functions and all continuous target functions. In our proof of this statement,
we propose a kind of closure of the search space, where the limits are called
generalized responses, and, thereafter, we provide sufficient criteria for the
loss function and the underlying probability distribution which ensure that all
additional artificial generalized responses are suboptimal which finally allows
us to conclude the existence of minimizers in the optimization landscape. | Steffen Dereich, Arnulf Jentzen, Sebastian Kassing | 2023-02-28T16:01:38Z | http://arxiv.org/abs/2302.14690v1 | # On the existence of minimizers in shallow residual
###### Abstract.
Many mathematical convergence results for gradient descent (GD) based algorithms employ the assumption that the GD process is (almost surely) _bounded_ and, also in concrete numerical simulations, divergence of the GD process may _slow down_, or even completely rule out, convergence of the error function. In practical relevant learning problems, it thus seems to be advisable to design the ANN architectures in a way so that GD optimization processes remain bounded. The property of the boundedness of GD processes for a given learning problem seems, however, to be closely related to the _existence of minimizers in the optimization landscape_ and, in particular, GD trajectories may escape to infinity if the infimum of the error function (objective function) is not attained in the optimization landscape. This naturally raises the question of the existence of minimizers in the optimization landscape and, in the situation of shallow residual ANNs with multi-dimensional input layers and multi-dimensional hidden layers with the ReLU activation, the main result of this work answers this question affirmatively for a general class of loss functions and all continuous target functions. In our proof of this statement, we propose a kind of closure of the search space, where the limits are called _generalized responses_, and, thereafter, we provide sufficient criteria for the loss function and the underlying probability distribution which ensure that all additional artificial generalized responses are suboptimal which finally allows us to conclude the existence of minimizers in the optimization landscape.
Key words and phrases: Neural networks, shallow networks, best approximation, ReLU activation, approximatively compact. 2020 Mathematics Subject Classification: Primary 68T07; Secondary 68T05, 41A50.
## 1. Introduction
Machine learning methods - often consisting of _artificial neural networks (ANNs)_ trained through _gradient descent_ (GD) type optimization methods - are nowadays omnipresent computational methods which are heavily employed in many industrial applications as well as scientific research activities. Despite the mind-blowing success of such computational schemes, in general, it remains an open problem of research to rigorously prove (or disprove) the convergence of GD optimization methods in the training of ANNs. Even in the situation of shallow ANNs with just one hidden layer it remains an open research question whether the plain vanilla stochastic gradient descent (SGD) method does converge in the training of such ANNs.
In the literature regarding the training of ANNs there are, however, several partial error analysis results for GD type optimization methods (by which we mean time-continuous gradient flow processes, deterministic GD optimization methods, as well as stochastic GD optimization methods). For example, we refer the reader to [1, 1, 2] for results concerning gradient flows, to [1, 3] for results concerning deterministic gradient methods, to [14, 15, 2, 16] for results concerning stochastic gradient methods and to [16] for results concerning gradient based diffusion processes. Many of these results exploit _Kurdyka-Lojasiewicz_ gradient type inequalities to establish convergence to a point (often a critical point) of the considered GD type optimization methods and we refer the reader
to [13, 14, 15] for classical results by Lojasiewicz concerning gradient inequalities for analytic target functions and direct consequences for the convergence of gradient flows.
Each of the above mentioned partial convergence results assumes that the considered GD type optimization process is (almost surely) bounded, loosely speaking, in the sense that
\[\sup_{t\in[0,\infty)}\|\Theta_{t}\|<\infty \tag{1}\]
where \(\Theta\colon[0,\infty)\to\mathbb{R}^{\mathfrak{d}}\) corresponds to the employed GD type optimization process (which could be a gradient flow optimization process or a time-continuous version of a discrete GD type optimization process), where \(\mathfrak{d}\in\mathbb{N}\) corresponds to the number of trainable parameters, and where \(\|\cdot\|\) refers to the standard norm on \(\mathbb{R}^{\mathfrak{d}}\).
In general, it remains an open problem to verify (1) (and, thus, whether the above mentioned achievements are actually applicable in context of the training of ANNs). The question whether (1) is satisfied seems to be closely related to _the existence of minimizers in the optimization landscape_; cf. [1, 10]. In particular, in [1] counterexamples to (1) are given and _divergence_ of GD type optimization processes is proved in the sense that
\[\liminf_{t\to\infty}\|\Theta_{t}\|=\infty \tag{2}\]
in certain cases where there do not exist minimizers in the optimization landscape. A divergence phenomena of the form (2) may _slow down_ (or even completely rule out) the convergence of the risk/error function, which is the highly relevant quantity in practical applications, and, in this aspect, it seems to be strongly advisable to _design the ANN architecture_ in a way so that there exist minimizers in the optimization landscape and so that the divergence in (2) _fails to happen_.
The question whether there exist minimizers in the optimization landscape, in turn, seems to be closely related to the choice of the _activation function_ and the specific architecture of the ANN. Indeed, in the case of _fully-connected feedforward ANNs with just one hidden layer and one dimensional input and output layer_, on the one hand for several common (smooth) activations such as
* the _standard logistic activation_,
* the _softplus activation_,
* the _inverse tangent (arctan) activation_,
* the _hyperbolic tangent activation_, and
* the _softsign activation_
it has been shown (cf. [1, Theorems 1.3 and 1.4] and [1, Section 1.2]) that, in general, there _do not exist minimizers in the optimization landscape_ even if the class of considered target functions is restricted to smooth functions or even polynomials but, on the other hand for the _ReLU activation_, it has been proved in [1, Theorem 1.1] that for every Lipschitz continuous target function there _do exist minimizers in the optimization landscape_. These existence/non-existence phenomena for minimizers in the optimization landscape thus reveal a _fundamental advantage of the ReLU activation_ compared to the above mentioned smooth activations and this issue may be one of the reasons why the ReLU activation seems in many simulations to outperform other smooth activations and also why the ReLU activation is probably the most popular activation function employed in simulations, even though it fails to be differentiable in contrast to the above mentioned continuously differentiable activation functions.
Theorem 1.1 in [1] is, however, restricted to Lipschitz continuous target functions, to the _standard mean square loss_, and to ANNs with one neuron on the input layer and excludes ANNs with multi-dimensional input layers. It is precisely the subject of this work to extend
Theorem 1.1 in [14] in several ways, in particular, to the situation of _residual ANNs with multi-dimensional input, general continuous target functions, and general loss functions_.
In particular, in this work we establish in one of our main results in Theorem 1.1 below, under suitable assumptions on the probability distribution of the input data, the existence of minimizers in the optimization landscape in the situation of shallow residual ANNs with multi-dimensional input layers and multi-dimensional hidden layers with the _ReLU activation_\(\mathbb{R}\ni x\mapsto\max\{x,0\}=x^{+}\in\mathbb{R}.\) To simplify the presentation we restrict ourselves in Theorem 1.1 (as in [14, Theorem 1.1]) to the situation of the standard mean square loss, but our more general result in Theorem 1.3 - which includes Theorem 1.1 as a special case - also covers a general class of loss functions.
**Theorem 1.1** (Existence of minimizer - residual ANNs).: _Let \(d_{\mathrm{in}},d\in\mathbb{N}\), \(\mathfrak{d}=d_{\mathrm{in}}(d+1)+2d+1\), let \(f\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\) and \(h\colon\mathbb{R}^{d_{\mathrm{in}}}\to[0,\infty)\) be continuous, assume that \(h^{-1}((0,\infty))\) is a bounded and convex set, for every \(\theta=(\theta_{1},\ldots,\theta_{\mathfrak{d}})\in\mathbb{R}^{\mathfrak{d}}\) let \(\mathfrak{N}_{\theta}\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\) satisfy for all \(x=(x_{1},\ldots,x_{d_{\mathrm{in}}})\in\mathbb{R}^{d_{\mathrm{in}}}\) that_
\[\mathfrak{N}_{\theta}(x)=\theta_{\mathfrak{d}}+\left(\sum_{j=1}^{d}\theta_{d _{\mathrm{in}}(d+1)+d+j}\max\{\theta_{d_{\mathrm{in}}(d+1)+j}+\sum_{i=1}^{d_{ \mathrm{in}}}\theta_{jd_{\mathrm{in}}+i}x_{i},0\}\right)+\sum_{i=1}^{d_{ \mathrm{in}}}\theta_{i}x_{i}, \tag{3}\]
_and let \(\mathrm{err}\colon\mathbb{R}^{\mathfrak{d}}\to\mathbb{R}\) satisfy for all \(\theta\in\mathbb{R}^{\mathfrak{d}}\) that \(\mathrm{err}(\theta)=\int_{\mathbb{R}^{d_{\mathrm{in}}}}(f(x)-\mathfrak{N}_{ \theta}(x))^{2}h(x)\,\mathrm{d}x.\) Then there exists \(\theta\in\mathbb{R}^{\mathfrak{d}}\) such that \(\mathrm{err}(\theta)=\inf_{\vartheta\in\mathbb{R}^{\mathfrak{d}}}\mathrm{err }(\vartheta)\)._
Loosely speaking, Theorem 1.1 reveals in the situation of _shallow residual ReLU ANNs_ with a \(d_{\mathrm{in}}\)-dimensional input layer (with \(d_{\mathrm{in}}\) neurons on the input layer), with a \(d\)-dimensional hidden layer (with \(d\) neurons on the hidden layer), and with a skip-connection from the input layer to the output layer that there exist minimizers in the standard mean square loss optimization landscape.
Theorem 1.1 can also be reformulated in the setup of shallow fully-connected feedforward ANNs with multi-dimensional input layers and multi-dimensional hidden layers with the _ReLU activation_\(\mathbb{R}\ni x\mapsto\max\{x,0\}=x^{+}\in\mathbb{R}\) for all except of one neuron on the hidden layer and the _identity activation_\(\mathbb{R}\ni x\mapsto x\in\mathbb{R}\) for the remaining neuron on the hidden layer. This reformulation of Theorem 1.1 is precisely the subject of the next result in Theorem 1.2 below. We also refer to Figure 1 for a graphical visualization illustrating the in Theorem 1.2 employed ANN architectures with \(d_{\mathrm{in}}\) neurons on the input layer and \(d+1\) neurons on the hidden layer.
**Theorem 1.2** (Existence of minimizer - fully-connected feedforward ANNs).: _Let \(d_{\mathrm{in}},d\in\mathbb{N}\), \(\mathfrak{d}=(d_{\mathrm{in}}+2)(d+1)+1\), let \(f\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\) and \(h\colon\mathbb{R}^{d_{\mathrm{in}}}\to[0,\infty)\) be continuous, assume that \(h^{-1}((0,\infty))\) is a bounded and convex set, for every \(\theta=(\theta_{1},\ldots,\theta_{\mathfrak{d}})\in\mathbb{R}^{\mathfrak{d}}\) let \(\mathfrak{N}_{\theta}\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\) satisfy for all \(x=(x_{1},\ldots,x_{d_{\mathrm{in}}})\in\mathbb{R}^{d_{\mathrm{in}}}\) that_
\[\begin{split}\mathfrak{N}_{\theta}(x)&=\theta_{ \mathfrak{d}}+\theta_{(d_{\mathrm{in}}+1)(d+1)+1}\big{(}\theta_{d_{\mathrm{in }}(d+1)+1}+\sum_{i=1}^{d_{\mathrm{in}}}\theta_{i}x_{i}\big{)}\\ &\qquad\qquad+\sum_{j=1}^{d}\theta_{(d_{\mathrm{in}}+1)(d+1)+j+1} \max\{\theta_{d_{\mathrm{in}}(d+1)+j+1}+\sum_{i=1}^{d_{\mathrm{in}}}\theta_{jd _{\mathrm{in}}+i}x_{i},0\},\end{split} \tag{4}\]
_and let \(\mathrm{err}\colon\mathbb{R}^{\mathfrak{d}}\to\mathbb{R}\) satisfy for all \(\theta\in\mathbb{R}^{\mathfrak{d}}\) that \(\mathrm{err}(\theta)=\int_{\mathbb{R}^{d_{\mathrm{in}}}}(f(x)-\mathfrak{N}_{ \theta}(x))^{2}h(x)\,\mathrm{d}x.\) Then there exists \(\theta\in\mathbb{R}^{\mathfrak{d}}\) such that \(\mathrm{err}(\theta)=\inf_{\vartheta\in\mathbb{R}^{\mathfrak{d}}}\mathrm{err }(\vartheta)\)._
Theorem 1.1 and Theorem 1.2 are both direct consequences of the more general result in Theorem 1.3 below (which covers more general supports of the probability distribution of the input data of the considered learning problem and also covers a general class of loss functions instead of only the standard mean square loss). In this regard, we note that the set of the realization functions of shallow fully-connected feedforward ANNs with \(d_{\mathrm{in}}\) neurons on the input layer, with \(d+1\) neurons on the hidden layer, with the ReLU activation for \(d\) neurons on the
hidden layer, and with the identity activation for the remaining neuron on the hidden layer (see Figure 1) coincides with the set of the realization functions of shallow residual ReLU ANNs with \(d_{\text{in}}\) neurons on the input layer, with \(d\) neurons on the hidden layer, and with a skip-connection from the input layer to the output layer.
Theorem 1.1 and Theorem 1.2 somehow employ a vectorized description of ANNs in the sense that every real vector \(\theta\in\mathbb{R}^{\mathfrak{d}}\) represents an ANN whose realization function is \(\mathfrak{N}_{\theta}\colon\mathbb{R}^{d_{\text{in}}}\to\mathbb{R}\). In our more general Theorem 1.3 and in our later arguments, we represent ANNs in a more structured way (see (5) below). More specifically, in the following we describe every ANN by a quadruple consisting
* of a matrix \(W^{1}=(w^{1}_{j,i})_{(j,i)\in\{0,1,\ldots,d\}\times\{1,2,\ldots,d_{\text{in}}\} }\in\mathbb{R}^{(d+1)\times d_{\text{in}}}\) (_weight matrix_ to describe the linear part in the affine transformation from the \(d_{\text{in}}\)-dimensional input layer to the \((d+1)\)-dimensional hidden layer),
Figure 1. Graphical illustration of a _shallow fully-connected feedforward ANN_ with \(d_{\text{in}}\) neurons on the input layer (with a \(d_{\text{in}}\)-dimensional input layer), with \(d+1\) neurons on the hidden layer (with a \((d+1)\)-dimensional hidden layer), and with 1 neuron on the output layer (with a \(1\)-dimensional output layer) as considered in Theorem 1.3. The blue colored circle around _neuron 0_ on the hidden layer represents the identity function \(\mathbb{R}\ni x\mapsto x\in\mathbb{R}\) as the activation function for this neuron and the purple colored circles around _neuron 1_, _neuron 2_,..., and _neuron \(d\)_ on the hidden layer represent the ReLU function \(\mathbb{R}\ni x\mapsto\max\{x,0\}\in\mathbb{R}\) as the activation function for these neurons.
* of a vector \(b^{1}=(b^{1}_{i})_{i=0,1,\ldots,d}\in\mathbb{R}^{d+1}\) (_bias vector_ to describe the additive part in the affine transformation from the \(d_{\mathrm{in}}\)-dimensional input layer to the \((d+1)\)-dimensional hidden layer),
* of a matrix \(W^{2}=(w^{2}_{i})_{i=0,1,\ldots,d}\in\mathbb{R}^{1\times(d+1)}\) (_weight matrix_ to describe the linear part in the affine transformation from the \((d+1)\)-dimensional hidden layer to the \(1\)-dimensional output layer), and
* of a scalar \(b^{2}\in\mathbb{R}\) (_bias_ to describe the additive part in the affine transformation from the \((d+1)\)-dimensional hidden layer to the \(1\)-dimensional output layer).
Moreover, for every \(j\in\{0,1,\ldots,d\}\) we write \(w^{1}_{j}=(w^{1}_{j,1},\ldots,w^{1}_{j,d_{\mathrm{in}}})\in\mathbb{R}^{d_{ \mathrm{in}}}\). We let
\[\mathbb{W}=(W^{1},b^{1},W^{2},b^{2})\in\mathbb{R}^{(d+1)\times d_{\mathrm{in} }}\times\mathbb{R}^{d+1}\times\mathbb{R}^{1\times(d+1)}\times\mathbb{R}=: \mathcal{W}_{d} \tag{5}\]
and call \(\mathbb{W}\) a _network configuration_ and \(\mathcal{W}_{d}\) the _parametrization class_. We often refer to a configuration of a neural network as the (_neural_) _network_\(\mathbb{W}\). A configuration \(\mathbb{W}\in\mathcal{W}_{d}\) describes a function \(\mathfrak{N}^{\mathbb{W}}\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\) via
\[\mathfrak{N}^{\mathbb{W}}(x)=w^{2}_{0}(w^{1}_{0}\cdot x+b^{1}_{0})+\sum_{j=1}^ {d}w^{2}_{j}\big{(}w^{1}_{j}\cdot x+b^{1}_{j}\big{)}^{\!\!+}+b^{2}, \tag{6}\]
where \(\cdot\) denotes the scalar product. We call \(\mathfrak{N}^{\mathbb{W}}\) the _realization function_ or _response_ of the network \(\mathbb{W}\).
Note that in general the response of a network is a continuous and piecewise affine function from \(\mathbb{R}^{d_{\mathrm{in}}}\) to \(\mathbb{R}\). We conceive \(\mathbb{W}\mapsto\mathfrak{N}^{\mathbb{W}}\) as a parametrization of a class of potential response functions \(\{\mathfrak{N}^{\mathbb{W}}\colon\mathbb{W}\in\mathcal{W}_{d}\}\) in a minimization problem. More explicitly, let \(\mu\colon\mathcal{B}(\mathbb{R}^{d_{\mathrm{in}}})\to[0,\infty)\) be a finite measure on the Borel sets of \(\mathbb{R}^{d_{\mathrm{in}}}\), let \(\mathbb{D}=\mathrm{supp}(\mu)\) be the support of \(\mu\), and let \(\mathcal{L}\colon\mathbb{D}\times\mathbb{R}\to[0,\infty)\) be a product measurable function, the _loss function_. We aim to minimize the error
\[\mathrm{err}^{\mathcal{L}}(\mathbb{W})=\int_{\mathbb{D}}\mathcal{L}(x, \mathfrak{N}^{\mathbb{W}}(x))\,\mathrm{d}\mu(x) \tag{7}\]
over all \(\mathbb{W}\in\mathcal{W}_{d}\) for a given \(d\in\mathbb{N}_{0}\) and we let
\[\mathrm{err}^{\mathcal{L}}_{d}=\inf_{\mathbb{W}\in\mathcal{W}_{d}}\mathrm{err }^{\mathcal{L}}(\mathbb{W}) \tag{8}\]
denote the minimal error with \(d+1\) neurons in the hidden layer. We stress that if there does not exist a neural network \(\mathbb{W}\in\mathcal{W}_{d}\) satisfying \(\mathrm{err}^{\mathcal{L}}(\mathbb{W})=\mathrm{err}^{\mathcal{L}}_{d}\) then every sequence \((\mathbb{W}_{n})_{n\in\mathbb{N}}\subseteq\mathcal{W}_{d}\) of networks satisfying \(\lim_{n\to\infty}\mathrm{err}^{\mathcal{L}}(\mathbb{W}_{n})=\mathrm{err}^{\mathcal{L}}_{d}\) diverges to infinity.
The class of responses \(\{\mathfrak{N}^{\mathbb{W}}\colon\mathbb{W}\in\mathcal{W}_{d}\}\) that is considered in this article clearly contains all responses of shallow ANNs having only \(d\) ReLU neurons in the hidden layer and no linear neuron. Conversely, since an affine function \(\mathfrak{a}\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\) can be represented as the response of two ReLU neurons, \(\{\mathfrak{N}^{\mathbb{W}}\colon\mathbb{W}\in\mathcal{W}_{d}\}\) is also contained in the class of responses of shallow ANNs using \(d+2\) neurons in the hidden layer that all apply the ReLU activation function. If \(\mu\) is compactly supported, then the \(\mathbb{D}\)-restricted responses in \(\{\mathfrak{N}^{\mathbb{W}}\colon\mathbb{W}\in\mathcal{W}_{d}\}\) can even be expressed by shallow ReLU networks with \(d+1\) neurons.
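For completeness, the representation behind this statement is the elementary identity

\[w\cdot x+b=(w\cdot x+b)^{+}-\big{(}-(w\cdot x+b)\big{)}^{+},\]

valid for all \(x\in\mathbb{R}^{d_{\mathrm{in}}}\), so one linear neuron can be emulated by two ReLU neurons with opposite weights and biases.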
_Overparametrized networks_ in the setting of empirical risk minimization (more ReLU neurons than data points to fit) are often able to perfectly interpolate the data such that there exists a network configuration achieving zero error and, thus, a global minimum in the search space \(\mathbb{W}\mapsto\mathrm{err}^{\mathcal{L}}(\mathbb{W})\). However, for general measures \(\mu\) not necessarily consisting of a finite number of Dirac measures or consisting of a finite number of Dirac measures but in the practically more realistic underparametrized regime, the literature on the existence of global minima is very limited and we refer the reader to [17, Theorem 1.1] for a result in the setting of _shallow ReLU ANNs_
with one-dimensional input. See also [13] for a similar result concerning the existence of global minima in the regression task of approximating functions in the space \(L_{p}([0,1]^{d},\|\cdot\|_{p})\) with shallow ANNs using _Heaviside activation_. We also refer to [14] for a general introduction into best approximators in normed spaces. We also point, e.g., to [1, 10, 11, 12, 13] and the references therein for sophisticated convergence analysis for GD optimization methods in such overparametrized regimes.
A good literature review regarding the loss landscape in neural network training can be found in [15]. For statements about the existence of and convergence to _non-optimal local minima_ in the training of (shallow) networks we refer the reader, e.g., to [1, 13, 14, 15, 16, 17, 18].
In this article, we show under mild assumptions on the measure \(\mu\) and the loss function \(\mathcal{L}\) that there exists a global minimum in the loss landscape. We suggest a kind of closure of the search space and, in a second step, show that the additional artificial responses are not optimal in the minimization problem. We state the main result of this article.
**Theorem 1.3** (Existence of minimizers - general loss functions and fully-connected feedforward ANNs).: _Assume that \(\mathbb{D}=\operatorname{supp}(\mu)\) is compact, assume that \(\mu\) has a continuous Lebesgue density \(h\colon\mathbb{R}^{d_{\mathrm{in}}}\to[0,\infty)\), assume that for every hyperplane \(H\subseteq\mathbb{R}^{d_{\mathrm{in}}}\) intersecting the interior of the convex hull of \(\mathbb{D}\) there is an element \(x\in H\) with \(h(x)>0\), and assume that the loss function \(\mathcal{L}\colon\mathbb{D}\times\mathbb{R}\to[0,\infty)\) satisfies the following assumptions:_
1. _(Continuity in the first argument) For every_ \(y\in\mathbb{R}\) _it holds that_ \(\mathbb{D}\ni x\mapsto\mathcal{L}(x,y)\in\mathbb{R}\) _is continuous._
2. _(Strict convexity in the second argument) For all_ \(x\in\mathbb{D}\) _it holds that_ \(\mathbb{R}\ni y\mapsto\mathcal{L}(x,y)\in\mathbb{R}\) _is strictly convex and attains its minimum._
_Then it holds for every \(d\in\mathbb{N}_{0}\) that there exists an optimal network \(\mathbb{W}\in\mathcal{W}_{d}\) with \(\operatorname{err}^{\mathcal{L}}(\mathbb{W})=\operatorname{err}^{\mathcal{L}}_ {d}\)._
Theorem 1.3 is an immediate consequence of Proposition 3.3 below. We stress that the statement of Proposition 3.3 is stronger in the sense that it even shows that in many situations the newly added functions to the extended target space perform strictly worse than the representable responses.
**Example 1.4** (Regression problem).: _Let \(\mu\) be as in Theorem 1.3, let \(f\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\) be continuous, and let \(L\colon\mathbb{R}\to[0,\infty)\) be a strictly convex function that attains its minimum. Then the function \(\mathcal{L}\colon\mathbb{R}^{d_{\mathrm{in}}}\times\mathbb{R}\to[0,\infty)\) given by_
\[\mathcal{L}(x,y)=L(y-f(x)) \tag{9}\]
_for all \(x\in\mathbb{R}^{d_{\mathrm{in}}}\), \(y\in\mathbb{R}\) satisfies the assumptions of Theorem 1.3 and Theorem 1.3 allows us to conclude that the infimum_
\[\inf_{\mathbb{W}\in\mathcal{W}_{d}}\int L(\mathfrak{N}^{\mathbb{W}}(x)-f(x))\operatorname{d}\!\mu(x) \tag{10}\]
_is attained for a network \(\mathbb{W}\in\mathcal{W}_{d}\)._
## 2. Generalized response of neural networks
We will work with more intuitive geometric descriptions of realization functions of networks \(\mathbb{W}\in\mathcal{W}_{d}\). We slightly modify the ideas given in [1] and later introduce the notion of a _generalized response_. Then, we show in Proposition 2.4 that, in a general approximation problem, there always exists a generalized response that solves the minimization task at least as well as the class \(\{\mathfrak{N}^{\mathbb{W}}\colon\mathbb{W}\in\mathcal{W}_{d}\}\).
We call a network \(\mathbb{W}\in\mathcal{W}_{d}\)_non-degenerate_ iff for all \(j\in\{1,2,\ldots,d\}\) we have \(w_{j}^{1}\neq 0\). For a non-degenerate network \(\mathbb{W}\), we say that the neuron \(j\in\{1,2,\ldots,d\}\) has
* _normal_\(\mathfrak{n}_{j}=|w_{j}^{1}|^{-1}w_{j}^{1}\in\mathcal{O}:=\{x\in\mathbb{R}^{d_{ \mathrm{in}}}\colon|x|=1\}\),
* _offset_\(o_{j}=-|w_{j}^{1}|^{-1}b_{j}^{1}\in\mathbb{R}\), and
* _kink_\(\Delta_{j}=|w_{j}^{1}|w_{j}^{2}\in\mathbb{R}\).
Moreover, we call the affine mapping \(\mathfrak{a}\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\) given by
\[\mathfrak{a}(x)=w_{0}^{2}(w_{0}^{1}\cdot x+b_{0}^{1})+b^{2} \tag{11}\]
the _affine background_. We call \((\mathfrak{n},o,\Delta,\mathfrak{a})\) with \(\mathfrak{n}=(\mathfrak{n}_{1},\ldots,\mathfrak{n}_{d})\in\mathcal{O}^{d}\), \(o=(o_{1},\ldots,o_{d})\in\mathbb{R}^{d}\), \(\Delta=(\Delta_{1},\ldots,\Delta_{d})\in\mathbb{R}^{d}\) and the affine function \(\mathfrak{a}\) the _effective tuple of \(\mathbb{W}\)_ and write \(\mathcal{E}_{d}\) for the set of all effective tuples using \(d\) ReLU neurons.
First, we note that the response of a non-degenerate ANN \(\mathbb{W}\) can be represented in terms of its effective tuple:
\[\begin{split}\mathfrak{N}^{\mathbb{W}}(x)&=\mathfrak{ a}(x)+\sum_{j=1}^{d}w_{j}^{2}(w_{j}^{1}\cdot x+b_{j}^{1})^{+}=\mathfrak{a}(x)+ \sum_{j=1}^{d}\Delta_{j}\Big{(}\frac{1}{|w_{j}^{1}|}w_{j}^{1}\cdot x+\frac{1}{ |w_{j}^{1}|}b_{j}^{1}\Big{)}^{+}\\ &=\mathfrak{a}(x)+\sum_{j=1}^{d}\Delta_{j}\big{(}\mathfrak{n}_{j} \cdot x-o_{j}\big{)}^{+}.\end{split} \tag{12}\]
With a slight abuse of notation we also write
\[\mathfrak{N}^{\mathfrak{n},o,\Delta,\mathfrak{a}}\colon\mathbb{R}^{d_{\mathrm{ in}}}\to\mathbb{R},\qquad x\mapsto\mathfrak{a}(x)+\sum_{j=1}^{d}\Delta_{j} \big{(}\mathfrak{n}_{j}\cdot x-o_{j}\big{)}^{+} \tag{13}\]
and \(\mathrm{err}^{\mathcal{L}}(\mathfrak{n},o,\Delta,\mathfrak{a})=\int\mathcal{ L}(x,\mathfrak{N}^{\mathfrak{n},o,\Delta,\mathfrak{a}}(x))\,\mathrm{d}\mu(x)\). Although the tuple \((\mathfrak{n},o,\Delta,\mathfrak{a})\) does not uniquely describe a neural network, it describes a response function uniquely and thus we will speak of the neural network with effective tuple \((\mathfrak{n},o,\Delta,\mathfrak{a})\).
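The following minimal NumPy sketch (illustrative only; all parameters are randomly drawn and not part of the article) verifies numerically that the raw parametrization and the effective-tuple representation (12) produce the same response:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d = 3, 5

# raw parameters of a non-degenerate shallow ReLU network (all w_j^1 nonzero a.s.)
W1 = rng.normal(size=(d, d_in))                      # rows are the w_j^1
b1, w2 = rng.normal(size=d), rng.normal(size=d)      # b_j^1 and w_j^2
w01, b01 = rng.normal(size=d_in), rng.normal()       # linear neuron w_0^1, b_0^1
w02, b2 = rng.normal(), rng.normal()                 # w_0^2 and b^2

relu = lambda z: np.maximum(z, 0.0)
x = rng.normal(size=(200, d_in))

raw = w02 * (x @ w01 + b01) + b2 + relu(x @ W1.T + b1) @ w2

# effective tuple: normals n_j, offsets o_j, kinks Delta_j, same affine background
norms = np.linalg.norm(W1, axis=1)
n, o, Delta = W1 / norms[:, None], -b1 / norms, norms * w2
eff = w02 * (x @ w01 + b01) + b2 + relu(x @ n.T - o) @ Delta
assert np.allclose(raw, eff)                         # representation (12) holds
```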
We stress that the response of a degenerate network \(\mathbb{W}\) can also be described as the response associated with an effective tuple. Indeed, for every \(j\in\{1,\ldots,d\}\) with \(w_{j}^{1}=0\) the respective neuron has a constant contribution \(w_{j}^{2}(b_{j}^{1})^{+}\). Now, one can choose an arbitrary normal \(\mathfrak{n}_{j}\) and offset \(o_{j}\), set the kink equal to zero (\(\Delta_{j}=0\)) and add the constant \(w_{j}^{2}(b_{j}^{1})^{+}\) to the affine background \(\mathfrak{a}\). Repeating this procedure for every such neuron we get an effective tuple \((\mathfrak{n},o,\Delta,\mathfrak{a})\in\mathcal{E}_{d}\) that satisfies \(\mathfrak{N}^{\mathfrak{n},o,\Delta,\mathfrak{a}}=\mathfrak{N}^{\mathbb{W}}\). Conversely, for every effective tuple \((\mathfrak{n},o,\Delta,\mathfrak{a})\in\mathcal{E}_{d}\), the mapping \(\mathfrak{N}^{\mathfrak{n},o,\Delta,\mathfrak{a}}\) is the response of an appropriate network \(\mathbb{W}\in\mathcal{W}_{d}\). In fact, for \(j=1,\ldots,d\), one can choose \(w_{j}^{1}=\mathfrak{n}_{j}\), \(b_{j}^{1}=-o_{j}\) and \(w_{j}^{2}=\Delta_{j}\) and one gets for all \(x\in\mathbb{R}^{d_{\mathrm{in}}}\)
\[w_{j}^{2}(w_{j}^{1}\cdot x+b_{j}^{1})^{+}=\Delta_{j}(\mathfrak{n}_{j}\cdot x-o _{j})^{+}. \tag{14}\]
Analogously, one can choose \(w_{0}^{1}=\mathfrak{a}^{\prime}\), \(b_{0}^{1}=\mathfrak{a}(0)\), \(w_{0}^{2}=1\) and \(b^{2}=0\) such that for all \(x\in\mathbb{R}^{d_{\mathrm{in}}}\)
\[\mathfrak{a}(x)=w_{0}^{2}(w_{0}^{1}\cdot x+b_{0}^{1})+b^{2}. \tag{15}\]
This entails that
\[\mathrm{err}_{d}^{\mathcal{L}}=\inf_{(\mathfrak{n},o,\Delta,\mathfrak{a})\in\mathcal{E}_{d}}\int\mathcal{L}(x,\mathfrak{N}^{\mathfrak{n},o,\Delta,\mathfrak{a}}(x))\,\mathrm{d}\mu(x) \tag{16}\]
and the infimum is attained iff there is a network \(\mathbb{W}\in\mathcal{W}_{d}\) for which the infimum in (8) is attained.
For an effective tuple \((\mathfrak{n},o,\Delta,\mathfrak{a})\in\mathcal{E}_{d}\), we say that the \(j\)th ReLU neuron has the _breakline_
\[H_{j}=\big{\{}x\in\mathbb{R}^{d_{\mathrm{in}}}\colon\mathfrak{n}_{j}\cdot x=o_{j }\big{\}} \tag{17}\]
and we call
\[A_{j}=\{x\in\mathbb{R}^{d_{\mathrm{in}}}\colon\mathfrak{n}_{j}\cdot x>o_{j}\} \tag{18}\]
the _domain of activity_ of the \(j\)th ReLU neuron. By construction, we have
\[\mathfrak{N}^{\mathfrak{n},o,\Delta,\mathfrak{a}}(x)=\mathfrak{a}(x)+\sum_{j= 1}^{d}\bigl{(}\Delta_{j}(\mathfrak{n}_{j}\cdot x-o_{j})\bigr{)}\mathds{1}_{A_{ j}}(x). \tag{19}\]
Outside the breaklines the function \(\mathfrak{N}^{\mathfrak{n},o,\Delta,\mathfrak{a}}\) is differentiable with
\[D\mathfrak{N}^{\mathfrak{n},o,\Delta,\mathfrak{a}}(x)=\mathfrak{a}^{\prime}(x )+\sum_{j=1}^{d}\Delta_{j}\mathfrak{n}_{j}\mathds{1}_{A_{j}}(x). \tag{20}\]
Note that for each summand \(j=1,\ldots,d\), along the breakline the difference of the differentials on \(A_{j}\) and on \((\overline{A}_{j})^{c}\) equals \(\Delta_{j}\mathfrak{n}_{j}\) (which is also true for the response function \(\mathfrak{N}^{\mathbb{W}}\) provided that it is differentiable in the reference points and there does not exist a second neuron having the same breakline \(H_{j}\)).
For a better understanding of the optimization problem discussed in this article it makes sense to view the set of responses \(\{\mathfrak{N}^{\mathbb{W}}\colon\mathbb{W}\in\mathcal{W}_{d}\}\) as a subset of the locally convex vector space \(\mathcal{L}^{1}_{\mathrm{loc}}\) of locally integrable functions. Then the set of response functions is not closed, and the main task of this article is to show that functions that can be approached by functions from \(\{\mathfrak{N}^{\mathbb{W}}\colon\mathbb{W}\in\mathcal{W}_{d}\}\) but are themselves not response functions provide larger errors than the response functions.
We now extend the family of response functions.
**Definition 2.1**.: _We call a function \(\mathcal{R}\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\) a generalized response (generalized realization function) if it admits the following representation: there are \(K\in\mathbb{N}_{0}\), a tuple of open half-spaces \(\mathbf{A}=(A_{1},\ldots,A_{K})\) of \(\mathbb{R}^{d_{\mathrm{in}}}\) with pairwise distinct boundaries \(\partial A_{1},\ldots,\partial A_{K}\), a vector \(\mathbf{m}=(m_{1},\ldots,m_{K})\in\{1,2\}^{K}\), an affine mapping \(\mathfrak{a}\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\), vectors \(\delta_{1},\ldots,\delta_{K}\in\mathbb{R}^{d_{\mathrm{in}}}\), and reals \(\mathfrak{b}_{1},\ldots,\mathfrak{b}_{K}\in\mathbb{R}\) such that_
1. _it holds for all_ \(x\in\mathbb{R}^{d_{\mathrm{in}}}\) _that_ (21) \[\mathcal{R}(x)=\mathfrak{a}(x)+\sum_{k=1}^{K}\bigl{(}\delta_{k}\cdot x+ \mathfrak{b}_{k}\bigr{)}\mathds{1}_{A_{k}}(x)\] _and_
2. _it holds for all_ \(k\in\{1,\ldots,K\}\) _with_ \(m_{k}=1\) _that_ (22) \[\partial A_{k}\subseteq\{x\in\mathbb{R}^{d_{\mathrm{in}}}\colon\delta_{k} \cdot x+\mathfrak{b}_{k}=0\}.\]
_We will represent generalized responses as in (21), we call \(A_{1},\ldots,A_{K}\) the active half-spaces of the response, and we call \(m_{1},\ldots,m_{K}\) the multiplicities of the half-spaces \(A_{1},\ldots,A_{K}\). The minimal number \(m_{1}+\ldots+m_{K}\) that can be achieved in such a representation is called the dimension of \(\mathcal{R}\). For every \(d\in\mathbb{N}_{0}\) we denote by \(\mathfrak{R}_{d}\) the family of all generalized responses of dimension \(d\) or smaller. We call a generalized response_ simple _if it is continuous, which means that all multiplicities can be chosen equal to one. A response \(\mathcal{R}\in\mathfrak{R}_{d}\) is called_ strict at dimension \(d\) _if it has dimension \(d-1\) or smaller or is discontinuous. We denote by \(\mathfrak{R}_{d}^{\mathrm{strict}}\) the responses in \(\mathfrak{R}_{d}\) that are strict at dimension \(d\)._
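The representation (21) is straightforward to evaluate directly. The following snippet (an illustrative sketch, not from the article; all parameter values are chosen for demonstration) evaluates a generalized response in \(d_{\mathrm{in}}=2\) with one multiplicity-one term, which is continuous by condition (22), and one multiplicity-two term, which jumps across its breakline:

```python
import numpy as np

def gen_response(x, terms, affine):
    """Evaluate R(x) = a(x) + sum_k 1_{A_k}(x) (delta_k . x + b_k), cf. (21)."""
    a_w, a_b = affine
    val = x @ a_w + a_b
    for n_k, o_k, delta_k, b_k in terms:
        active = (x @ n_k - o_k) > 0              # indicator of the open half-space A_k
        val = val + active * (x @ delta_k + b_k)
    return val

terms = [
    # m = 1: breakline x_1 = 0; the summand 2 x_1 vanishes there, so (22) holds
    (np.array([1.0, 0.0]), 0.0, np.array([2.0, 0.0]), 0.0),
    # m = 2: breakline x_2 = 0.5; the summand x_2 + 3 does not vanish there (a jump)
    (np.array([0.0, 1.0]), 0.5, np.array([0.0, 1.0]), 3.0),
]
x = np.array([[1.0, 0.5 - 1e-9], [1.0, 0.5 + 1e-9]])   # just below/above x_2 = 0.5
print(gen_response(x, terms, (np.zeros(2), 0.0)))       # approx. [2.0, 5.5]
# the jump of size 0.5 + 3 = 3.5 stems from the multiplicity-two term;
# this response has dimension 1 + 2 = 3
```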
**Remark 2.2**.: _Note that the sets \(\{\mathfrak{N}^{\mathbb{W}}\colon\mathbb{W}\in\mathcal{W}_{d}\}\) and \(\{\mathcal{R}\in\mathfrak{R}_{d}\colon\mathcal{R}\text{ is simple}\}\) agree._
In this section, we work with a general measure \(\mu\) which may have unbounded support. The assumptions on \(\mu\) are stated in the next definition.
**Definition 2.3**.:
1. _An element_ \(x\) _of a hyperplane_ \(H\subseteq\mathbb{R}^{d_{\mathrm{in}}}\) _is called_ \(H\)-regular _if_ \(x\in\operatorname{supp}\mu|_{A}\) _and_ \(x\in\operatorname{supp}\mu|_{\overline{A}^{c}}\)_, where_ \(A\) _is an open half-space with_ \(\partial A=H\)_._
2. _A measure_ \(\mu\) _is called_ nice _if all hyperplanes have_ \(\mu\)_-measure zero and if for every open half-space_ \(A\) _with_ \(\mu(A),\mu(\overline{A}^{c})>0\) _the set of_ \(\partial A\)_-regular points cannot be covered by finitely many hyperplanes different from_ \(\partial A\)_._
Next, we will show that under quite weak assumptions there exist generalized responses of dimension \(d\) or smaller that achieve an error of at most \(\operatorname{err}_{d}^{\mathcal{L}}\).
**Proposition 2.4**.: _Assume that \(\mu\) is a nice measure on a closed subset \(\mathbb{D}\subseteq\mathbb{R}^{d_{\mathrm{in}}}\) of \(\mathbb{R}^{d_{\mathrm{in}}}\) and assume that the loss function \(\mathcal{L}\colon\mathbb{D}\times\mathbb{R}\to[0,\infty)\) is measurable and satisfies the following assumptions:_
1. _(Lower semicontinuity in the second argument) For all_ \(x\in\mathbb{D}\)_,_ \(y\in\mathbb{R}\) _we have_ (23) \[\liminf_{y^{\prime}\to y}\mathcal{L}(x,y^{\prime})\geq\mathcal{L}(x,y).\]
2. _(Unbounded in the second argument) For all_ \(x\in\mathbb{D}\) _we have_ (24) \[\lim_{|y|\to\infty}\mathcal{L}(x,y)=\infty.\]
_Let \(d\in\mathbb{N}_{0}\) with \(\operatorname{err}_{d}^{\mathcal{L}}<\infty\). Then there exists a generalized response \(\mathcal{R}\in\mathfrak{R}_{d}\) which satisfies_
\[\int\mathcal{L}(x,\mathcal{R}(x))\,\mathrm{d}\mu(x)=\overline{\operatorname{ err}_{d}^{\mathcal{L}}}:=\inf_{\tilde{\mathcal{R}}\in\mathfrak{R}_{d}}\int \mathcal{L}(x,\tilde{\mathcal{R}}(x))\,\mathrm{d}\mu(x). \tag{25}\]
_Furthermore, if \(d\geq 1\), then the infimum_
\[\inf_{\tilde{\mathcal{R}}\in\mathfrak{R}_{d}^{\mathrm{strict}}}\int\mathcal{ L}(x,\tilde{\mathcal{R}}(x))\,\mathrm{d}\mu(x) \tag{26}\]
_is attained on \(\mathfrak{R}_{d}^{\mathrm{strict}}\)._
Proof.: Let \((\mathcal{R}^{(n)})_{n\in\mathbb{N}}\) be a sequence of generalized responses in \(\mathfrak{R}_{d}\) that satisfies
\[\lim_{n\to\infty}\int\mathcal{L}(x,\mathcal{R}^{(n)}(x))\,\mathrm{d}\mu(x)= \overline{\operatorname{err}_{d}^{\mathcal{L}}}. \tag{27}\]
We use the representations as in (21) and write
\[\mathcal{R}^{(n)}(x)=\mathfrak{a}^{(n)}(x)+\sum_{k=1}^{K^{(n)}}\bigl{(}\delta_{k }^{(n)}\cdot x+\mathfrak{b}_{k}^{(n)}\bigr{)}\mathds{1}_{A_{k}^{(n)}}(x). \tag{28}\]
Moreover, denote by \(\mathfrak{n}_{k}^{(n)}\in\mathcal{O}\) and \(o_{k}^{(n)}\in\mathbb{R}\) the quantities with \(A_{k}^{(n)}=\{x\in\mathbb{R}^{d_{\mathrm{in}}}\colon\mathfrak{n}_{k}^{(n)} \cdot x>o_{k}^{(n)}\}\) and by \(\mathbf{m}^{(n)}=(m_{1}^{(n)},\dots,m_{K^{(n)}}^{(n)})\) the respective multiplicities.
Step 1: Choosing an appropriate subsequence.
Since \(K^{(n)}\leq d\) for all \(n\in\mathbb{N}\) we can choose a subsequence \((\ell_{n})_{n\in\mathbb{N}}\) such that there exists \(K\in\mathbb{N}_{0}\) with \(K^{(\ell_{n})}=K\) for all \(n\in\mathbb{N}\). For ease of notation we will assume that this is the case for the full sequence. With the same argument we can assume without loss of generality that there exists \(\mathbf{m}=(m_{1},\dots,m_{K})\in\{1,2\}^{K}\) such that for all \(n\in\mathbb{N}\) we have \(\mathbf{m}^{(n)}=\mathbf{m}\).
Moreover, after possibly thinning the sequence again we can ensure that for all \(k\in\{1,2,\ldots,K\}\) we have convergence \(\mathfrak{n}_{k}^{(n)}\to\mathfrak{n}_{k}\) in the compact space \(\mathcal{O}\) and \(o_{k}^{(n)}\to o_{k}\) in the two-point compactification \(\mathbb{R}\cup\{\pm\infty\}\). We assign to each \(k\in\{1,2,\ldots,K\}\) an _asymptotic active area_\(A_{k}\) given by
\[A_{k}=\{x\in\mathbb{R}^{d_{\mathrm{in}}}\colon\mathfrak{n}_{k}\cdot x>o_{k}\} \tag{29}\]
which is degenerate in the case where \(o_{k}\in\{\pm\infty\}\).
We denote by \(H_{k}\) the respective breakline \(\partial A_{k}\). Even if, for every \(n\in\mathbb{N}\), the original breaklines \(\partial A_{1}^{(n)},\ldots,\partial A_{K}^{(n)}\) are pairwise distinct, this might not be true for the limiting ones. In particular, there may be several \(k\)'s for which the asymptotic active areas may be on opposite sides of the same breakline. In that case we choose one side and replace for each \(k\) with asymptotic active area on the opposite side its contribution in the representation (21) from \((\delta_{k}^{(n)}\cdot x+\mathfrak{b}_{k}^{(n)})\mathds{1}_{A_{k}^{(n)}}(x)\) to
\[\big{(}-\delta_{k}^{(n)}\cdot x-\mathfrak{b}_{k}^{(n)}\big{)}\mathds{1}_{( \overline{A}_{k}^{(n)})^{c}}(x)+\delta_{k}^{(n)}\cdot x+\mathfrak{b}_{k}^{(n)} \tag{30}\]
which agrees with the former term outside the breakline (which is a zero set). This means we replace \(\delta_{k}^{(n)}\), \(\mathfrak{b}_{k}^{(n)}\), and \(A_{k}^{(n)}\) by \(-\delta_{k}^{(n)}\), \(-\mathfrak{b}_{k}^{(n)}\), and \(\big{(}\overline{A}_{k}^{(n)}\big{)}^{c}\), respectively, and adjust the respective affine background accordingly. Thus we can assume without loss of generality that all asymptotic active areas sharing the same breakline are on the same side.
We use the asymptotic active areas to partition the space: let \(\mathbb{J}\) denote the collection of all subsets \(J\subseteq\{1,2,\ldots,K\}\) for which the set
\[A_{J}=\big{(}\bigcap_{j\in J}A_{j}\big{)}\cap\big{(}\bigcap_{j\in J^{c}} \overline{A}_{j}^{c}\big{)} \tag{31}\]
satisfies \(\mu(A_{J})>0\). We note that the sets \(A_{J}\), \(J\in\mathbb{J}\), are non-empty, open, and pairwise disjoint and their union has full \(\mu\)-measure since
\[\mu\Big{(}\mathbb{R}^{d_{\mathrm{in}}}\backslash\bigcup_{J\subseteq\{1,2, \ldots,K\}}A_{J}\Big{)}\leq\sum_{j=1}^{K}\mu\big{(}H_{j}\big{)}=0. \tag{32}\]
Moreover, for every \(J\in\mathbb{J}\) and every compact set \(B\) with \(B\subseteq A_{J}\) one has from a \(B\)-dependent \(n\) onwards that the generalized response \(\mathcal{R}^{(n)}\) satisfies for all \(x\in B\) that
\[\mathcal{R}^{(n)}(x)=\mathcal{D}_{J}^{(n)}\cdot x+\beta_{J}^{(n)}, \tag{33}\]
where
\[\mathcal{D}_{J}^{(n)}:=\mathfrak{a}^{\prime\,(n)}+\sum_{j\in J}\delta_{j}^{(n)}\qquad\text{and}\qquad\beta_{J}^{(n)}:=\mathfrak{a}^{(n)}(0)+\sum_{j\in J}\mathfrak{b}_{j}^{(n)}. \tag{34}\]
Let \(J\in\mathbb{J}\). Next, we show that along an appropriate subsequence, we have convergence of \((\mathcal{D}_{J}^{(n)})_{n\in\mathbb{N}}\) in \(\mathbb{R}^{d_{\mathrm{in}}}\). First assume that along a subsequence one has that \((|\mathcal{D}_{J}^{(n)}|)_{n\in\mathbb{N}}\) converges to \(\infty\). For ease of notation we assume without loss of generality that \(|\mathcal{D}_{J}^{(n)}|\to\infty\). We let
\[\mathcal{H}_{J}^{(n)}=\{x\in\mathbb{R}^{d_{\mathrm{in}}}\colon\mathcal{D}_{J}^ {(n)}\cdot x+\beta_{J}^{(n)}=0\}. \tag{35}\]
For every \(n\) with \(\mathcal{D}_{J}^{(n)}\neq 0\), \(\mathcal{H}_{J}^{(n)}\) is a hyperplane which can be parametrized by taking a normal and the respective offset. As above we can argue that along an appropriate subsequence (which is again assumed to be the whole sequence) one has convergence of the normals in \(\mathcal{O}\) and of the offsets in \(\mathbb{R}\cup\{\pm\infty\}\). We denote by \(\mathcal{H}_{J}\) the hyperplane being associated to the limiting normal and offset (which is assumed to be the empty set in the case where the offsets do not converge
in \(\mathbb{R}\)). Since the norm of the gradient \(\mathcal{D}^{(n)}_{J}\) tends to infinity we get that for every \(x\in A_{J}\backslash\mathcal{H}_{J}\) one has \(|\mathcal{R}^{(n)}(x)|\to\infty\) and, hence, \(\mathcal{L}(x,\mathcal{R}^{(n)}(x))\to\infty\). Consequently, Fatou's lemma implies that
\[\liminf_{n\to\infty}\int_{A_{J}\backslash\mathcal{H}_{J}}\mathcal{L}(x, \mathcal{R}^{(n)}(x))\,\mathrm{d}\mu(x)\geq\int_{A_{J}\backslash\mathcal{H}_{ J}}\liminf_{n\to\infty}\mathcal{L}(x,\mathcal{R}^{(n)}(x))\,\mathrm{d}\mu(x)=\infty \tag{36}\]
contradicting the asymptotic optimality of \((\mathcal{R}^{(n)})_{n\in\mathbb{N}}\). We showed that the sequence \((\mathcal{D}^{(n)}_{J})_{n\in\mathbb{N}}\) is precompact and by switching to an appropriate subsequence we can guarantee that the limit \(\mathcal{D}_{J}=\lim_{n\to\infty}\mathcal{D}^{(n)}_{J}\) exists.
Similarly we show that along an appropriate subsequence \((\beta^{(n)}_{J})_{n\in\mathbb{N}}\) converges to a value \(\beta_{J}\in\mathbb{R}\). Suppose this were not the case; then there would be a subsequence along which \(|\beta^{(n)}_{J}|\to\infty\) (again we assume for ease of notation that this is the case along the full sequence). Then for every \(x\in A_{J}\) one has \(|\mathcal{R}^{(n)}(x)|\to\infty\) and we argue as above that this contradicts the optimality of \((\mathcal{R}^{(n)})_{n\in\mathbb{N}}\). Consequently, we have for an appropriately thinned sequence on every compact set \(B\subseteq A_{J}\) the uniform convergence
\[\lim_{n\to\infty}\mathcal{R}^{(n)}(x)=\mathcal{D}_{J}\cdot x+\beta_{J}. \tag{37}\]
Since \(\bigcup_{J\in\mathbb{J}}A_{J}\) has full \(\mu\)-measure we get with the lower semicontinuity of \(\mathcal{L}\) in the second argument and Fatou's lemma that for every measurable function \(\mathcal{R}\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\) satisfying for each \(J\in\mathbb{J}\), \(x\in A_{J}\) that
\[\mathcal{R}(x)=\mathcal{D}_{J}\cdot x+\beta_{J} \tag{38}\]
we have
\[\begin{split}\int\mathcal{L}(x,\mathcal{R}(x))\,\mathrm{d}\mu(x) &=\int\liminf_{n\to\infty}\mathcal{L}(x,\mathcal{R}^{(n)}(x))\, \mathrm{d}\mu(x)\\ &\leq\liminf_{n\to\infty}\int\mathcal{L}(x,\mathcal{R}^{(n)}(x)) \,\mathrm{d}\mu(x)=\overline{\mathrm{err}}_{d}^{\mathcal{L}}.\end{split} \tag{39}\]
Step 2: \(\mathcal{R}\) may be chosen as a generalized response of dimension \(d\) or smaller.
We call a summand \(k\in\{1,2,\ldots,K\}\) degenerate if \(A_{k}\) or \(\overline{A}_{k}^{c}\) has \(\mu\)-measure zero. We omit every degenerate summand \(k\) in the sense that we set \(\delta^{(n)}_{k}=0\) and \(\mathfrak{b}^{(n)}_{k}=0\) for all \(n\in\mathbb{N}\) and note that by adjusting the affine background appropriately we still have validity of (37) with the same limit on all relevant cells and, in particular, \(\mu\)-almost everywhere.
Let now \(k\) be a non-degenerate summand. Since \(\mu\) is nice there exists a \(\partial A_{k}\)-regular point \(x\) that is not in \(\bigcup_{A\in\mathbb{A}\colon\,A\neq A_{k}}\partial A\), where \(\mathbb{A}:=\{A_{j}\colon j\text{ is non-degenerate}\}\). We let
\[J^{x}_{-}=\{j\colon x\in A_{j}\}\qquad\text{and}\qquad J^{x}_{+}=J^{x}_{-} \cup\{j\colon A_{j}=A_{k}\}. \tag{40}\]
Since \(x\in\operatorname{supp}(\mu|_{\overline{A}_{k}^{c}})\) we get that the cell \(A_{J^{x}_{-}}\) has strictly positive \(\mu\)-measure so that \(J^{x}_{-}\in\mathbb{J}\). Analogously, \(x\in\operatorname{supp}(\mu|_{A_{k}})\) entails that \(J^{x}_{+}\in\mathbb{J}\). (Note that \(A_{J^{x}_{+}}\) and \(A_{J^{x}_{-}}\) are just the cells that lie on opposite sides of the hyperplane \(\partial A_{k}\) at \(x\).) We thus get that
\[\delta^{(n)}_{A_{k}}:=\sum_{j\colon\,A_{j}=A_{k}}\delta^{(n)}_{j}=\mathcal{D}^ {(n)}_{J^{x}_{+}}-\mathcal{D}^{(n)}_{J^{x}_{-}}\to\mathcal{D}_{J^{x}_{+}}- \mathcal{D}_{J^{x}_{-}}=:\delta_{A_{k}}, \tag{41}\]
where the definitions of \(\delta^{(n)}_{A_{k}}\) and \(\delta_{A_{k}}\) do not depend on the choice of \(x\). Analogously,
\[\mathfrak{b}^{(n)}_{A_{k}}:=\sum_{j\colon\,A_{j}=A_{k}}\mathfrak{b}^{(n)}_{j}= \beta^{(n)}_{J^{x}_{+}}-\beta^{(n)}_{J^{x}_{-}}\to\beta_{J^{x}_{+}}-\beta_{J^{x} _{-}}=:\mathfrak{b}_{A_{k}}. \tag{42}\]
Now for general \(J\in\mathbb{J}\), we have
\[\mathcal{D}_{J}\leftarrow\mathcal{D}_{J}^{(n)}=\mathfrak{a}^{\prime(n)}+\sum_{A \in\{A_{j}:\,j\in J\}}\delta_{A}^{(n)}\qquad\text{and}\qquad\beta_{J}\gets \beta_{J}^{(n)}=\mathfrak{a}^{(n)}(0)+\sum_{A\in\{A_{j}:\,j\in J\}}\mathfrak{b}_{ A}^{(n)}. \tag{43}\]
Since \(\sum_{A\in\{A_{j}:\,j\in J\}}\delta_{A}^{(n)}\) and \(\sum_{A\in\{A_{j}:\,j\in J\}}\mathfrak{b}_{A}^{(n)}\) converge to \(\sum_{A\in\{A_{j}:\,j\in J\}}\delta_{A}\) and \(\sum_{A\in\{A_{j}:\,j\in J\}}\mathfrak{b}_{A}\), respectively, we have that \((\mathfrak{a}^{\prime(n)})_{n\in\mathbb{N}}\) and \((\mathfrak{a}^{(n)}(0))_{n\in\mathbb{N}}\) converge and there is an appropriate affine function \(\mathfrak{a}\) such that for all \(J\in\mathbb{J}\) and \(x\in A_{J}\)
\[\mathcal{R}(x)=\mathcal{D}_{J}\cdot x+\beta_{J}=\mathfrak{a}(x)+\sum_{A\in \mathbb{A}}\bigl{(}\delta_{A}\cdot x+\mathfrak{b}_{A}\bigr{)}\mathds{1}_{A}( x). \tag{44}\]
So far, we have only used the definition of \(\mathcal{R}\) on \(\bigcup_{J\in\mathbb{J}}A_{J}\) and we now assume that \(\mathcal{R}\) is chosen in such a way that the latter identity holds for all \(x\in\mathbb{R}^{d_{\mathrm{in}}}\) (by possibly changing the definition on a \(\mu\)-nullset). We still need to show that \(\mathcal{R}\) is a generalized response of dimension \(d\) or smaller.
Every active area \(A\in\mathbb{A}\) that is the asymptotic active area of a single non-degenerate summand \(k\) with \(m_{k}=1\) is assigned the multiplicity one. All other non-degenerate active areas get multiplicity two. Then the overall multiplicity (the sum of the individual multiplicities) is smaller than or equal to the dimension \(d\). To see this, recall that every active area \(A\in\mathbb{A}\) that is the asymptotic area of more than one summand or of one summand of multiplicity two also contributed at least two to the multiplicity of the approximating responses. The degenerate summands do not contribute at all, although they did contribute to the approximating responses.
It remains to show that \(\mathcal{R}\) is indeed a generalized response with the active areas \(\mathbb{A}\) having the above multiplicities. For this it suffices to show continuity of \(\mathds{1}_{A}(x)(\delta_{A}\cdot x+\mathfrak{b}_{A})\) for all \(A\in\mathbb{A}\) with assigned multiplicity one. Suppose that the \(k\)th summand is the unique summand that contributes to such an \(A\). Then \(\delta_{k}^{(n)}=\delta_{A}^{(n)}\to\delta_{A}\) and \(\mathfrak{b}_{k}^{(n)}=\mathfrak{b}_{A}^{(n)}\to\mathfrak{b}_{A}\). Moreover, one has
\[\{x\in\mathbb{R}^{d_{\mathrm{in}}}\colon\mathfrak{n}_{k}^{(n)}\cdot x-o_{k}^{ (n)}=0\}\subseteq\{x\in\mathbb{R}^{d_{\mathrm{in}}}\colon\delta_{k}^{(n)} \cdot x+\mathfrak{b}_{k}^{(n)}=0\} \tag{45}\]
which entails that, in particular, \(\delta_{k}^{(n)}\) is a multiple of \(\mathfrak{n}_{k}^{(n)}\). Both latter vectors converge which also entails that the limit \(\delta_{A}\) is a multiple of \(\mathfrak{n}_{k}\). To show that
\[\partial A\subseteq\{x\in\mathbb{R}^{d_{\mathrm{in}}}\colon\delta_{A}\cdot x +\mathfrak{b}_{A}=0\} \tag{46}\]
is satisfied it thus suffices to verify that one point of the set on the left-hand side lies also in the set on the right-hand side. This is indeed the case since \(o_{k}\mathfrak{n}_{k}\) is in the set on the left-hand side and
\[\delta_{A}\cdot(o_{k}\mathfrak{n}_{k})+\mathfrak{b}_{A}=\lim_{n\to\infty} \underbrace{\delta_{k}^{(n)}\cdot(o_{k}^{(n)}\mathfrak{n}_{k}^{(n)})+ \mathfrak{b}_{k}^{(n)}}_{=0}, \tag{47}\]
where we used that \(x=o_{k}^{(n)}\mathfrak{n}_{k}^{(n)}\) satisfies by assumption \(\delta_{k}^{(n)}\cdot x+\mathfrak{b}_{k}^{(n)}=0\).
Step 3: The infimum over all strict responses is attained.
We suppose that \((\mathcal{R}^{(n)})_{n\in\mathbb{N}}\) is a sequence of strict generalized responses satisfying
\[\lim_{n\to\infty}\int\mathcal{L}(x,\mathcal{R}^{(n)}(x))\,\mathrm{d}\mu(x)= \inf_{\tilde{\mathcal{R}}\in\mathfrak{R}_{d}^{\mathrm{strict}}}\int\mathcal{L }(x,\tilde{\mathcal{R}}(x))\,\mathrm{d}\mu(x). \tag{48}\]
Then we can find a subsequence of responses in \(\mathfrak{R}_{d-1}\) or a subsequence of responses where at least one active area has multiplicity two. In the former case the response constructed above is a generalized response of dimension \(d-1\) or lower which is strict at dimension \(d\). Conversely, in the latter case the construction from above will lead to a generalized response that has at
least one active area with multiplicity two which, in turn, implies that \(\mathcal{R}\) is either of dimension strictly smaller than \(d\) or is discontinuous.
**Remark 2.5** (Asymptotic ANN representations for generalized responses).: _In this remark, we show that every generalized response \(\mathcal{R}\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\) of dimension \(d\) is, on \(\mathbb{R}^{d_{\mathrm{in}}}\backslash(\bigcup_{k=1,\ldots,K}\partial A_{k})\), the limit of responses of ANNs in \(\mathcal{W}_{d}\)._
_Let \(\mathfrak{n}\in\mathcal{O}\), \(\delta\in\mathbb{R}^{d_{\mathrm{in}}}\) and \(o,\mathfrak{b}\in\mathbb{R}\) and set_
\[A=\{x\in\mathbb{R}^{d_{\mathrm{in}}}\colon\mathfrak{n}\cdot x>o\}\qquad\text{ and}\qquad\forall\,x\in\mathbb{R}^{d_{\mathrm{in}}}\colon\mathcal{R}(x)= \mathds{1}_{A}(\delta\cdot x+\mathfrak{b}). \tag{49}\]
_We will show that the generalized response \(\mathcal{R}\) is on \(\mathbb{R}^{d_{\mathrm{in}}}\backslash(\partial A)\) the limit of the response of two ReLU neurons. For every \(n\in\mathbb{N}\), the following function_
\[\mathcal{R}_{n}(x)=\frac{1}{2}\big{(}(\delta+n\mathfrak{n})\cdot x+\mathfrak{ b}-no\big{)}^{+}-\frac{1}{2}\big{(}(-\delta+n\mathfrak{n})\cdot x-\mathfrak{b}- no\big{)}^{+} \tag{50}\]
_is the response of a shallow ReLU ANN with two hidden ReLU neurons. Now note that for all \(x\in A\) one has \(\mathfrak{n}\cdot x-o>0\) so that_
\[(\delta+n\mathfrak{n})\cdot x+\mathfrak{b}-no=n(\mathfrak{n}\cdot x-o)+\delta \cdot x+\mathfrak{b}\to\infty. \tag{51}\]
_Analogously, \((-\delta+n\mathfrak{n})\cdot x-\mathfrak{b}-no\to\infty\). Consequently, for these \(x\) one has for large \(n\) that_
\[\mathcal{R}_{n}(x)=\delta\cdot x+\mathfrak{b}=\mathcal{R}(x). \tag{52}\]
_Conversely, for all \(x\in\overline{A}^{c}\) one has \(\mathfrak{n}\cdot x-o<0\) so that analogously to above \((\delta+n\mathfrak{n})\cdot x+\mathfrak{b}-no\to-\infty\) and \((-\delta+n\mathfrak{n})\cdot x-\mathfrak{b}-no\to-\infty\). Consequently, for these \(x\) one has for large \(n\) that_
\[\mathcal{R}_{n}(x)=0=\mathcal{R}(x). \tag{53}\]
_We thus represented \(\mathcal{R}\) as asymptotic response of a ReLU ANN with two hidden neurons._
_For a generalized response one replaces every term \(\mathds{1}_{A_{k}}(\delta_{k}\cdot x+\mathfrak{b}_{k})\) of multiplicity two (see (21)) by two ReLU neurons exactly as above. Moreover, the terms with multiplicity one are responses of ANNs with one ReLU neuron._
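The convergence asserted in this remark can also be observed numerically. The snippet below (an illustrative sketch with arbitrarily chosen parameters, not part of the article) implements \(\mathcal{R}_{n}\) from (50) and reports the maximal deviation from \(\mathcal{R}\) away from the breakline:

```python
import numpy as np

n_vec = np.array([1.0, 0.0])             # unit normal of A = {x : n.x > o}
o, delta, b = 0.5, np.array([0.3, -0.7]), 0.2
relu = lambda z: np.maximum(z, 0.0)

def R(x):                                 # generalized response 1_A(x) (delta.x + b)
    return ((x @ n_vec) > o) * (x @ delta + b)

def R_n(x, n):                            # two-neuron response from (50)
    return 0.5 * relu(x @ (delta + n * n_vec) + b - n * o) \
         - 0.5 * relu(x @ (-delta + n * n_vec) - b - n * o)

rng = np.random.default_rng(2)
x = rng.normal(size=(10000, 2))
off_breakline = np.abs(x @ n_vec - o) > 1e-2
for n in [10, 100, 1000]:
    print(n, np.max(np.abs(R_n(x, n) - R(x))[off_breakline]))
    # the maximal deviation away from the breakline shrinks to 0 as n grows
```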
## 3. Discontinuous responses are not optimal
In this section, we show that generalized responses that contain discontinuities are not optimal in the minimization task for a loss function that is continuous in the first argument. This proves Theorem 1.3 since all continuous generalized responses can be represented by shallow networks.
In the proofs we will make use of the following properties of the loss functions \(\mathcal{L}\) under consideration.
**Lemma 3.1**.: _Let \(\mathcal{L}\colon\mathbb{D}\times\mathbb{R}\to[0,\infty)\) be a function satisfying the following assumptions:_
1. _(Continuity in the first argument) For every_ \(y\in\mathbb{R}\) _it holds that_ \(\mathbb{D}\ni x\mapsto\mathcal{L}(x,y)\in\mathbb{R}\) _is continuous._
2. _(Strict convexity in the second argument) For all_ \(x\in\mathbb{D}\) _it holds that_ \(\mathbb{R}\ni y\mapsto\mathcal{L}(x,y)\in\mathbb{R}\) _is strictly convex and attains its minimum._
_Then_
1. _it holds that_ \(\mathcal{L}\colon\mathbb{D}\times\mathbb{R}\to[0,\infty)\) _is continuous,_
2. _it holds for every compact_ \(K\subseteq\mathbb{R}\) _that the function_ \(\mathbb{D}\ni x\mapsto\mathcal{L}(x,\cdot)|_{K}\in C(K,\mathbb{R})\) _is continuous with respect to the supremum norm,_
3. _it holds that there exists a unique_ \(\mathfrak{m}\colon\mathbb{D}\to\mathbb{R}\) _which satisfies for every_ \(x\in\mathbb{D}\) _that_ (54) \[\mathcal{L}(x,\mathfrak{m}(x))=\min_{y\in\mathbb{R}}\mathcal{L}(x,y),\] _and_
[MISSING_PAGE_POST]
where \((\cdot)^{\dagger}\) denotes the transpose of a vector or a matrix. Clearly, continuity of the summands is preserved and, hence, one can choose the same multiplicities. Therefore, \(\mathcal{R}\circ\varphi\) is again in \(\mathfrak{R}_{d}\) and it is even strict at dimension \(d\) or simple if this is the case for \(\mathcal{R}\). Applying the inverse affine transform \(\varphi^{-1}\) we also obtain equivalence of the properties.
We are now in the position to prove the main statement of this article, Theorem 1.3. It is an immediate consequence of the following stronger result.
**Proposition 3.3**.: _Suppose that the assumptions of Theorem 1.3 are satisfied. Let \(d\in\mathbb{N}_{0}\). Then there exists an optimal network \(\mathbb{W}\in\mathcal{W}_{d}\) with_
\[\operatorname{err}^{\mathcal{L}}(\mathbb{W})=\operatorname{err}^{\mathcal{L} }_{d}=\overline{\operatorname{err}}^{\mathcal{L}}_{d}. \tag{61}\]
_If additionally \(d>1\) and \(\operatorname{err}^{\mathcal{L}}_{d}<\operatorname{err}^{\mathcal{L}}_{d-1}\), then one has that_
\[\inf_{\mathcal{R}\in\mathfrak{R}^{\operatorname{strict}}_{d}}\int\mathcal{L}( x,\mathcal{R}(x))\,\mathrm{d}\mu(x)>\operatorname{err}^{\mathcal{L}}_{d}. \tag{62}\]
Proof.: We can assume without loss of generality that \(\mu\neq 0\). First we verify the assumptions of Proposition 2.4 in order to conclude that there are generalized responses \(\mathcal{R}\) of dimension \(d\) for which
\[\int\mathcal{L}(x,\mathcal{R}(x))\,\mathrm{d}\mu(x)=\overline{ \operatorname{err}}^{\mathcal{L}}_{d}. \tag{63}\]
We verify that \(\mu\) is a nice measure: In fact, since \(\mu\) has Lebesgue-density \(h\), we have \(\mu(H)=0\) for all hyperplanes \(H\subseteq\mathbb{R}^{d_{\mathrm{in}}}\). Moreover, for every half-space \(A\) with \(\mu(A),\mu(\overline{A}^{c})>0\) we have that \(\partial A\) intersects the interior of the convex hull of \(\mathbb{D}\) so that there exists a point \(x\in\partial A\) with \(h(x)>0\). Since \(\{x\in\mathbb{R}^{d_{\mathrm{in}}}\colon h(x)>0\}\) is an open set, \(\{x\in\partial A:h(x)>0\}\) cannot be covered by finitely many hyperplanes different from \(\partial A\). Moreover, since for all \(x\in\mathbb{R}^{d_{\mathrm{in}}}\) the function \(y\mapsto\mathcal{L}(x,y)\) is strictly convex and attains its minimum we clearly have for fixed \(x\in\mathbb{R}^{d_{\mathrm{in}}}\) continuity of \(y\mapsto\mathcal{L}(x,y)\) and
\[\lim_{|y|\to\infty}\mathcal{L}(x,y)=\infty.\]
We prove the remaining statements via induction over the dimension \(d\). If \(d\leq 1\), all generalized responses of dimension \(d\) are representable by a neural network and we are done. Now let \(d\geq 2\) and suppose that \(\mathcal{R}\) is the best _strict_ generalized response at dimension \(d\). It suffices to show that one of the following two cases occurs: one has
\[\int\mathcal{L}(x,\mathcal{R}(x))\,\mathrm{d}\mu(x)\geq\overline{ \operatorname{err}}^{\mathcal{L}}_{d-1} \tag{64}\]
or
\[\int\mathcal{L}(x,\mathcal{R}(x))\,\mathrm{d}\mu(x)>\overline{ \operatorname{err}}^{\mathcal{L}}_{d}. \tag{65}\]
Indeed, in the case that (65) does not hold, we obtain as a consequence of (64) that
\[\overline{\operatorname{err}}^{\mathcal{L}}_{d-1}\leq\int\mathcal{L}(x, \mathcal{R}(x))\,\mathrm{d}\mu(x)=\overline{\operatorname{err}}^{\mathcal{L}}_ {d} \tag{66}\]
and the induction hypothesis entails that \(\operatorname{err}^{\mathcal{L}}_{d-1}=\overline{\operatorname{err}}^{ \mathcal{L}}_{d-1}\leq\overline{\operatorname{err}}^{\mathcal{L}}_{d}\leq \operatorname{err}^{\mathcal{L}}_{d}\leq\operatorname{err}^{\mathcal{L}}_{d-1}\) so that \(\operatorname{err}^{\mathcal{L}}_{d}=\overline{\operatorname{err}}^{ \mathcal{L}}_{d}\) and \(\operatorname{err}^{\mathcal{L}}_{d}=\operatorname{err}^{\mathcal{L}}_{d-1}\). Thus, an optimal simple response \(\mathcal{R}\) of dimension \(d-1\) (which exists by induction hypothesis) is also optimal when taking the minimum over all generalized responses of dimension \(d\) or smaller. Conversely, if (65) holds, an optimal generalized response (which exists by Proposition 2.4) is simple so that, in particular, \(\operatorname{err}^{\mathcal{L}}_{d}=\overline{\operatorname{err}}^{\mathcal{L}}_ {d}\). This shows that
there always exists an optimal simple response. Moreover, it also follows that in the case where \(\operatorname{err}_{d}^{\mathcal{L}}<\operatorname{err}_{d-1}^{\mathcal{L}}\), either of the properties (64) and (65) entails property (62).
Suppose that \(\mathcal{R}\) is given by
\[\mathcal{R}(x)=\mathfrak{a}(x)+\sum_{k=1}^{K}\bigl{(}\delta_{k}\cdot x+ \mathfrak{b}_{k}\bigr{)}\mathds{1}_{A_{k}}(x), \tag{67}\]
with \(A_{1},\ldots,A_{K}\) being the pairwise different active areas and \(m_{1},\ldots,m_{K}\) being the respective multiplicities. Note that \(\mathcal{R}\) has to be discontinuous, because otherwise \(\mathcal{R}\) is of dimension strictly smaller than \(d\) and (64) holds. Therefore, we can assume without loss of generality that \(m_{K}=2\) and
\[\partial A_{K}\not\subseteq\{x\in\mathbb{R}^{d_{\operatorname{in}}}\colon \delta_{K}\cdot x+\mathfrak{b}_{K}=0\} \tag{68}\]
(otherwise we reorder the terms appropriately).
If \(\partial A_{K}\) does not intersect the interior of \(\mathbb{D}\), then one can replace the term \(\mathds{1}_{A_{K}}(x)\bigl{(}\delta_{K}\cdot x+\mathfrak{b}_{K}\bigr{)}\) by \(\delta_{K}\cdot x+\mathfrak{b}_{K}\) or \(0\) without changing the error on \(\mathbb{D}\). By doing so the new response has dimension \(d-2\) or smaller. Thus, we get that
\[\int_{\mathbb{D}}\mathcal{L}(x,\mathcal{R}(x))\operatorname{d}\!\mu(x)\geq \overline{\operatorname{err}}_{d-2}^{\mathcal{L}}. \tag{69}\]
Now suppose that \(\partial A_{K}\) intersects the interior of \(\mathbb{D}\). We prove that \(\mathcal{R}\) is not an optimal response in \(\mathfrak{R}_{d}\) by constructing a better response. To see this we apply an appropriate affine transformation on the coordinate mapping. For an invertible matrix \(B\in\mathbb{R}^{d_{\operatorname{in}}\times d_{\operatorname{in}}}\) and a vector \(c\in\mathbb{R}^{d_{\operatorname{in}}}\) we consider the invertible affine mapping
\[\varphi(x)=B(x+c). \tag{70}\]
By Lemma 3.2 the (strict) generalized responses are invariant under right applications of bijective affine transformations so that \(\hat{\mathcal{R}}=\mathcal{R}\circ\varphi\) is an optimal strict generalized response for the loss function \(\hat{\mathcal{L}}\) given by \(\hat{\mathcal{L}}(x,y)=\mathcal{L}(x,\varphi^{-1}(y))\).
Now we distinguish two cases. In line with the notation from before we denote by \(\mathfrak{n}_{K}\in\mathcal{O}\) and \(o_{K}\in\mathbb{R}\) the unique values for which \(A_{K}=\{x\in\mathbb{R}^{d_{\operatorname{in}}}\colon\mathfrak{n}_{K}\cdot x >o_{K}\}\). First suppose that \(\delta_{K}\) and \(\mathfrak{n}_{K}\) are linearly independent. We choose a basis \(b_{1},\ldots,b_{d_{\operatorname{in}}}\) of \(\mathbb{R}^{d_{\operatorname{in}}}\) such that \(b_{1}\cdot\mathfrak{n}_{K}=1\), \(b_{1}\perp\delta_{K}\), and \(b_{l}\perp\mathfrak{n}_{K}\) for all \(l\in\{2,\ldots,d_{\operatorname{in}}\}\). This can be achieved by first choosing an arbitrary basis \(b_{2},\ldots,b_{d_{\operatorname{in}}}\) of the space of vectors orthogonal to \(\mathfrak{n}_{K}\), secondly choosing a vector \(b_{1}^{\prime}\) that is orthogonal to \(\delta_{K}\) but not to \(\mathfrak{n}_{K}\), which is possible since \(\delta_{K}\) and \(\mathfrak{n}_{K}\) are linearly independent, and finally letting \(b_{1}=b_{1}^{\prime}/(b_{1}^{\prime}\cdot\mathfrak{n}_{K})\).
We denote by \(B\) the matrix \((b_{1},\ldots,b_{d_{\operatorname{in}}})\) consisting of the basis vectors and choose \(c\in\mathbb{R}^{d_{\operatorname{in}}}\) so that
\[(B^{\dagger}\mathfrak{n}_{K})\cdot c=o_{K}\ \ \text{and}\ \ (B^{\dagger} \delta_{K})\cdot c=-\mathfrak{b}_{K}. \tag{71}\]
The latter is feasible since the expression on the left-hand side only depends on the choice of \(c_{1}\) and the expression on the right-hand side only on the coordinates \(c_{2},\ldots,c_{d_{\operatorname{in}}}\).
As is straightforward to verify, the respective response \(\hat{\mathcal{R}}\) has as \(K\)th active area \(\hat{A}_{K}=\{x\in\mathbb{R}^{d_{\operatorname{in}}}\colon x_{1}>0\}\) and on \(\hat{A}_{K}\) the \(K\)th summand in the respective representation of \(\hat{\mathcal{R}}\) is
\[\delta_{K}\cdot\varphi(x)+\mathfrak{b}_{K}=(\underbrace{B^{\dagger}\delta_{K}} _{=\colon\hat{\delta}_{K}})\cdot x. \tag{72}\]
Altogether the previous computations show that we can assume without loss of generality that the considered strict generalized response has as active area \(A_{K}=\hat{A}_{K}\) with \(\delta_{K}\) being perpendicular to the first unit vector and \(\mathfrak{b}_{K}\) being zero.
We compare the performance of the response \(\mathcal{R}\) with the \(\kappa\)-indexed family of generalized responses (\(\mathcal{R}^{\kappa}\colon\kappa\geq 1\)) of dimension \(d\) or smaller given by
\[\mathcal{R}^{\kappa}(x)=\mathfrak{a}(x)+\sum_{k=1}^{K-1}\mathds{1}_{A_{k}}(x) \big{(}\delta_{k}\cdot x+\mathfrak{b}_{k}\big{)}+\tilde{\mathcal{R}}^{\kappa}( x), \tag{73}\]
where
\[\tilde{\mathcal{R}}^{\kappa}(x)=\tfrac{1}{2}(\delta_{K}\cdot x+\kappa x_{1})^ {+}-\tfrac{1}{2}(-\delta_{K}\cdot x+\kappa x_{1})^{+}. \tag{74}\]
Let
\[\tilde{\mathcal{L}}(x,y)=\mathcal{L}\big{(}x,\mathfrak{a}(x)+\sum_{k=1}^{K-1 }(\delta_{k}\cdot x+\mathfrak{b}_{k})\mathds{1}_{A_{k}}(x)+y\big{)},\qquad \tilde{\mathcal{R}}(x)=\mathds{1}_{A_{K}}(x)(\delta_{K}\cdot x), \tag{75}\]
\(\delta_{K}^{\prime}=(\delta_{K,2},\ldots,\delta_{K,d_{\mathrm{in}}})^{\dagger}\) and similarly \(x^{\prime}=(x_{2},\ldots,x_{d_{\mathrm{in}}})^{\dagger}\) and note that
\[\tilde{\mathcal{R}}^{\kappa}(x)=\begin{cases}\tilde{\mathcal{R}}(x),&\text{ if }\kappa|x_{1}|\geq|\delta_{K}^{\prime}\cdot x^{\prime}|,\\ \text{linear interpolation of }\tilde{\mathcal{R}}\text{ between }\tilde{x}^{(\kappa)} \text{ and }\hat{x}^{(\kappa)},&\text{ otherwise},\end{cases} \tag{76}\]
where
\[\tilde{x}^{(\kappa)}=\begin{pmatrix}-\tfrac{1}{\kappa}|\delta_{K}^{\prime} \cdot x^{\prime}|\\ x^{\prime}\end{pmatrix}\ \ \text{and }\ \ \hat{x}^{(\kappa)}=\begin{pmatrix}\tfrac{1}{ \kappa}|\delta_{K}^{\prime}\cdot x^{\prime}|\\ x^{\prime}\end{pmatrix}. \tag{77}\]
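As a numerical sanity check of (74) and (76) (an illustrative sketch with arbitrarily chosen \(\kappa\) and \(\delta_{K}\), not part of the article), the following snippet confirms that \(\tilde{\mathcal{R}}^{\kappa}\) and \(\tilde{\mathcal{R}}\) agree exactly outside the interpolation region \(\kappa|x_{1}|<|\delta^{\prime}_{K}\cdot x^{\prime}|\):

```python
import numpy as np

rng = np.random.default_rng(3)
kappa = 50.0
delta = np.array([0.0, 1.5, -0.4])   # delta_K with vanishing first coordinate
relu = lambda z: np.maximum(z, 0.0)

def R_tilde(x):                       # 1_{x_1 > 0} (delta_K . x), as in (75)
    return (x[:, 0] > 0) * (x @ delta)

def R_kappa(x):                       # the family from (74)
    return 0.5 * relu(x @ delta + kappa * x[:, 0]) \
         - 0.5 * relu(-(x @ delta) + kappa * x[:, 0])

x = rng.normal(size=(10000, 3))
s = x @ delta                         # = delta'_K . x' since delta_{K,1} = 0
outside = kappa * np.abs(x[:, 0]) >= np.abs(s)
assert np.allclose(R_kappa(x)[outside], R_tilde(x)[outside])   # agreement per (76)
```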
For \(\varepsilon>0\) denote by \(\mathcal{D}_{\varepsilon}\) the set
\[\mathcal{D}_{\varepsilon}=\left\{x^{\prime}\in\mathbb{R}^{d_{ \mathrm{in}}-1}\colon\text{ the segment }\Big{[}\begin{pmatrix}-\varepsilon\\ x^{\prime}\end{pmatrix},\begin{pmatrix}\varepsilon\\ x^{\prime}\end{pmatrix}\Big{]}\text{ has distance greater than or equal to }\varepsilon\text{ to every }\partial A_{k}\ (k\neq K),\right.\] \[\text{ and there exists }x_{0}\in\mathbb{R}\text{ so that }(x_{0},x^{\prime})\in\mathbb{D}\right\}\]
It is straightforward to verify that \(\mathcal{D}_{\varepsilon}\) is closed and, due to the compactness of \(\mathbb{D}\), also compact. Moreover,
\[\bigcup_{\varepsilon>0}\mathcal{D}_{\varepsilon}=\underbrace{\left\{x^{\prime} \in\mathbb{R}^{d_{\mathrm{in}}-1}\colon\big{[}\exists x_{0}\in\mathbb{R}:(x_{0 },x^{\prime})\in\mathbb{D}\big{]}\text{ and }\big{[}\forall k\neq K:(0,x^{\prime})\not\in \partial A_{k}\big{]}\right\}}_{=\colon\mathcal{D}}. \tag{78}\]
We have
\[\begin{split}&\int_{\mathbb{D}}\mathcal{L}(x,\mathcal{R}^{\kappa} (x))\,h(x)\,dx-\int_{\mathbb{D}}\mathcal{L}(x,\mathcal{R}(x))\,h(x)\,dx\\ &\qquad=\int_{\mathbb{D}}\tilde{\mathcal{L}}(x,\tilde{\mathcal{R} }^{\kappa}(x))\,h(x)\,dx-\int_{\mathbb{D}}\tilde{\mathcal{L}}(x,\tilde{ \mathcal{R}}(x))\,h(x)\,dx\\ &\qquad=\int_{D_{\kappa}}\bigl{(}\tilde{\mathcal{L}}(x,\tilde{ \mathcal{R}}^{\kappa}(x))-\tilde{\mathcal{L}}(x,\tilde{\mathcal{R}}(x))\bigr{)} \,h(x)\,dx,\end{split} \tag{79}\]
where \(D_{\kappa}=\left\{\begin{pmatrix}x_{1}\\ x^{\prime}\end{pmatrix}\in\mathbb{R}^{d_{\mathrm{in}}}\colon x^{\prime}\in \mathcal{D},|x_{1}|\leq\tfrac{1}{\kappa}|\delta_{K}^{\prime}\cdot x^{\prime}|\right\}\). Depending on \(\varepsilon>0\) we partition \(D_{\kappa}\) into two sets
\[D_{\kappa,\varepsilon}^{\prime}=\Bigl{\{}\begin{pmatrix}x_{1}\\ x^{\prime}\end{pmatrix}\in D_{\kappa}:x^{\prime}\in\mathcal{D}_{ \varepsilon}\Bigr{\}}\qquad\text{and}\qquad D_{\kappa,\varepsilon}^{\prime \prime}=\Bigl{\{}\begin{pmatrix}x_{1}\\ x^{\prime}\end{pmatrix}\in D_{\kappa}:x^{\prime}\in\mathcal{D}\backslash \mathcal{D}_{\varepsilon}\Bigr{\}}. \tag{80}\]
We note that \(\tilde{\mathcal{L}}\) is continuous on \(([-\varepsilon,\varepsilon]\times\mathcal{D}_{\varepsilon})\times\mathbb{R}\) as a consequence of the continuity of \(\mathcal{L}\) (see Lemma 3.1) and the particular choice of \(\mathcal{D}_{\varepsilon}\). Moreover, for sufficiently large \(\kappa\) (depending on
\(\varepsilon\)), \(D^{\prime}_{\kappa,\varepsilon}\subseteq[-\varepsilon,\varepsilon]\times\mathcal{D }_{\varepsilon}\). Using that continuous functions are uniformly continuous on compacts, that \(h\) is uniformly bounded and the Lebesgue measure of \(D_{\kappa}\) is of order \(\mathcal{O}(1/\kappa)\) we conclude that in terms of \(\Upsilon_{x^{\prime}}:=|\delta^{\prime}_{K}\cdot x^{\prime}|\)
\[\begin{split}&\int_{D^{\prime}_{\kappa,\varepsilon}}\bigl{(} \tilde{\mathcal{L}}(x,\tilde{\mathcal{R}}^{\kappa}(x))-\tilde{\mathcal{L}}(x, \tilde{\mathcal{R}}(x))\bigr{)}\,h(x)\,dx\\ &\quad=\int_{D^{\prime}_{\kappa,\varepsilon}}\Bigl{(}\tilde{ \mathcal{L}}\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix},\tilde{\mathcal{R}}^{\kappa}(x)\Bigr{)}-\tilde{ \mathcal{L}}\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix},\tilde{\mathcal{R}}(x)\Bigr{)}\Bigr{)}\,h\Bigl{(} \begin{pmatrix}0\\ x^{\prime}\end{pmatrix}\Bigr{)}\,dx+o\bigl{(}\tfrac{1}{\kappa}\bigr{)}\\ &\quad=\int_{\mathcal{D}_{\varepsilon}}\Bigl{(}\int_{-\frac{1}{ \kappa}\Upsilon_{x^{\prime}}}^{\frac{1}{\kappa}\Upsilon_{x^{\prime}}}\Bigl{(} \tilde{\mathcal{L}}\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix},\tilde{\mathcal{R}}^{\kappa}\Bigl{(}\begin{pmatrix}x_{ 1}\\ x^{\prime}\end{pmatrix}\Bigr{)}\Bigr{)}-\tilde{\mathcal{L}}\Bigl{(} \begin{pmatrix}0\\ x^{\prime}\end{pmatrix},\tilde{\mathcal{R}}\Bigl{(}\begin{pmatrix}x_{1}\\ x^{\prime}\end{pmatrix}\Bigr{)}\Bigr{)}\Bigr{)}\,dx_{1}\Bigr{)}\,h\Bigl{(} \begin{pmatrix}0\\ x^{\prime}\end{pmatrix}\Bigr{)}\,dx^{\prime}+o\bigl{(}\tfrac{1}{\kappa}\bigr{)} \end{split} \tag{81}\]
as \(\kappa\to\infty\). Moreover, for every \(x^{\prime}\in\mathcal{D}_{\varepsilon}\) one has
\[\int_{-\frac{1}{\kappa}\Upsilon_{x^{\prime}}}^{\frac{1}{\kappa} \Upsilon_{x^{\prime}}}\Bigl{(}\tilde{\mathcal{L}}\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix},\tilde{\mathcal{R}}^{\kappa}\Bigl{(}\begin{pmatrix}x_{ 1}\\ x^{\prime}\end{pmatrix}\Bigr{)}\Bigr{)}-\tilde{\mathcal{L}}\Bigl{(}\begin{pmatrix} 0\\ x^{\prime}\end{pmatrix},\tilde{\mathcal{R}}\Bigl{(}\begin{pmatrix}x_{1}\\ x^{\prime}\end{pmatrix}\Bigr{)}\Bigr{)}\Bigr{)}\,dx_{1}=:\frac{1}{\kappa}\,Q(x^{ \prime}). \tag{82}\]
We represent \(Q(x^{\prime})\) in terms of the measure \(\nu_{x^{\prime}}=\operatorname{Leb}|_{[-\Upsilon_{x^{\prime}},\Upsilon_{x^{ \prime}}]}\), the strictly convex function
\[\xi_{x^{\prime}}\colon[-\Upsilon_{x^{\prime}},\Upsilon_{x^{\prime}}]\to[0, \infty),\ x_{1}\mapsto\tilde{\mathcal{L}}\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix},\tilde{\mathcal{R}}^{1}\Bigl{(}\begin{pmatrix}x_{1}\\ x^{\prime}\end{pmatrix}\Bigr{)}\Bigr{)} \tag{83}\]
and its secant \(\bar{\xi}_{x^{\prime}}\colon[-\Upsilon_{x^{\prime}},\Upsilon_{x^{\prime}}]\to[0,\infty)\) that equals \(\xi_{x^{\prime}}\) in the boundary points and is linear in between. One has
\[Q(x^{\prime})=\int\bigl{(}\xi_{x^{\prime}}(z)-\bar{\xi}_{x^{\prime}}(z)\bigr{)} \,d\nu_{x^{\prime}}\leq 0 \tag{84}\]
with strict inequality in the case where \(\Upsilon_{x^{\prime}}>0\) (due to strict convexity). Consequently, we get with (81) that
\[\lim_{\kappa\to\infty}\kappa\int_{D^{\prime}_{\kappa,\varepsilon}}\bigl{(} \tilde{\mathcal{L}}(x,\tilde{\mathcal{R}}^{\kappa}(x))-\tilde{\mathcal{L}}(x, \tilde{\mathcal{R}}(x))\bigr{)}\,h(x)\,dx=\int_{\mathcal{D}_{\varepsilon}}Q(x^ {\prime})\,h\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix}\Bigr{)}\,dx^{\prime}. \tag{85}\]
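For orientation, a quick sanity check of the sign of \(Q\) with a hypothetical strictly convex loss: if \(\xi_{x^{\prime}}(z)=z^{2}\), then the secant is the constant function \(\bar{\xi}_{x^{\prime}}\equiv\Upsilon_{x^{\prime}}^{2}\) and

\[Q(x^{\prime})=\int_{-\Upsilon_{x^{\prime}}}^{\Upsilon_{x^{\prime}}}\bigl{(}z^{2}-\Upsilon_{x^{\prime}}^{2}\bigr{)}\,dz=\tfrac{2}{3}\Upsilon_{x^{\prime}}^{3}-2\Upsilon_{x^{\prime}}^{3}=-\tfrac{4}{3}\Upsilon_{x^{\prime}}^{3},\]

which is strictly negative whenever \(\Upsilon_{x^{\prime}}>0\), in line with (84).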
To analyze the contribution of the integrals on \(D^{\prime\prime}_{\kappa,\varepsilon}\) we note that by uniform boundedness of \(\tilde{\mathcal{L}}(x,\tilde{\mathcal{R}}^{\kappa}(x))-\tilde{\mathcal{L}}(x, \tilde{\mathcal{R}}(x))\) over all \(x\in\mathbb{D}\) and \(\kappa\geq 1\) one has existence of a constant \(C\) not depending on \(\kappa\) and \(\varepsilon\) such that
\[\Bigl{|}\int_{D^{\prime\prime}_{\kappa,\varepsilon}}\bigl{(}\tilde{\mathcal{L}} (x,\tilde{\mathcal{R}}^{\kappa}(x))-\tilde{\mathcal{L}}(x,\tilde{\mathcal{R}}(x ))\bigr{)}\,h(x)\,dx\Bigr{|}\leq C\,|\mathcal{D}\backslash\mathcal{D}_{ \varepsilon}|\,\tfrac{1}{\kappa}, \tag{86}\]
where \(|\mathcal{D}\backslash\mathcal{D}_{\varepsilon}|\) is the \((d_{\mathrm{in}}-1)\)-dimensional Hausdorff measure of the set \(\mathcal{D}\backslash\mathcal{D}_{\varepsilon}\). By choosing \(\varepsilon>0\) arbitrarily small one can make \(|\mathcal{D}\backslash\mathcal{D}_{\varepsilon}|\) arbitrarily small and with a diagonalization argument we obtain with (79) and (85) that
\[\lim_{\kappa\to\infty}\kappa\int_{\mathbb{D}}\bigl{(}\mathcal{L}(x,\mathcal{R}^ {\kappa}(x))-\mathcal{L}(x,\mathcal{R}(x))\bigr{)}\,h(x)\,dx=\int_{\mathcal{D}}Q (x^{\prime})\,h\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix}\Bigr{)}\,dx^{\prime}. \tag{87}\]
Now there exists \(x^{\prime}\in\mathcal{D}\) with \(h\bigl{(}(0,x^{\prime})^{\dagger}\bigr{)}>0\) and by continuity of \(h\) we can choose \(x^{\prime}\) such that, additionally, \(\Upsilon_{x^{\prime}}=|\delta^{\prime}_{K}\cdot x^{\prime}|>0\). By continuity we thus get that
\[\int_{\mathcal{D}}Q(x^{\prime})\,h\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix}\Bigr{)}\,dx^{\prime}<0 \tag{88}\]
and, consequently, there exists \(\kappa>0\) such that the generalized response \(\mathcal{R}^{\kappa}\) of dimension \(d\) or smaller has a strictly smaller error than \(\mathcal{R}\). Hence, an optimal generalized response in \(\mathfrak{R}_{d}\) has to be simple.
It remains to treat the case where \(\delta_{K}\) and \(\mathfrak{n}_{K}\) are linearly dependent. In that case we choose \(\alpha\in\mathbb{R}\) with \(\delta_{K}=\alpha\mathfrak{n}_{K}\), we extend \(b_{1}=\mathfrak{n}_{K}\) to an orthonormal basis \((b_{1},\ldots,b_{d_{\mathrm{in}}})\) of \(\mathbb{R}^{d_{\mathrm{in}}}\) and denote by \(B\) the matrix formed by the vectors \(b_{1},\ldots,b_{d_{\mathrm{in}}}\). Moreover, choose \(c=(o_{K},0,\ldots,0)^{\dagger}\) and set \(\varphi(x)=B(x+c)\). Then the response \(\hat{\mathcal{R}}=\mathcal{R}\circ\varphi\) has as \(K\)th activation area \(\hat{A}_{K}=\{x\in\mathbb{R}^{d_{\mathrm{in}}}\colon x_{1}>0\}\) and on \(\hat{A}_{K}\) the \(K\)th summand in the respective representation of \(\hat{\mathcal{R}}\) is
\[\delta_{K}\cdot\varphi(x)+\mathfrak{b}_{K}=\alpha(x_{1}+o_{K})+\mathfrak{b}_{ K}=\alpha x_{1}+\hat{\mathfrak{b}}_{K}, \tag{89}\]
where \(\hat{\mathfrak{b}}_{K}=\alpha o_{K}+\mathfrak{b}_{K}\neq 0\) since otherwise we would have that
\[\partial A_{K}\subseteq\{x\in\mathbb{R}^{d_{\mathrm{in}}}\colon\delta_{K} \cdot x+\mathfrak{b}_{K}=0\}. \tag{90}\]
We showed that in the remaining case we can assume without loss of generality that \(A_{K}=\hat{A}_{K}\), \(\delta_{K}=(\alpha,0,\ldots,0)^{\dagger}\) for an \(\alpha\in\mathbb{R}\) and \(\mathfrak{b}_{K}\neq 0\).
In analogy to above we compare the response \(\mathcal{R}\) with the \(\kappa\)-indexed family of responses (\(\mathcal{R}^{\kappa}\colon\kappa\geq 1\)) given by
\[\mathcal{R}^{\kappa}(x)=\mathfrak{a}(x)+\sum_{k=1}^{K-1}\bigl{(}\delta_{k}\cdot x +\mathfrak{b}_{k}\bigr{)}\mathds{1}_{A_{k}}(x)+\tilde{\mathcal{R}}^{\kappa}( x), \tag{91}\]
where
\[\tilde{\mathcal{R}}^{\kappa}(x)=\tfrac{1}{2}(\alpha+\mathfrak{b}_{K}\kappa) \bigl{(}x_{1}+\tfrac{1}{\kappa}\bigr{)}^{+}+\tfrac{1}{2}(\alpha-\mathfrak{b}_ {K}\kappa)\bigl{(}x_{1}-\tfrac{1}{\kappa}\bigr{)}^{+}. \tag{92}\]
We use \(\tilde{\mathcal{L}}\) as in (75) and now set \(\tilde{\mathcal{R}}(x)=\mathds{1}_{A_{K}}(x)\bigl{(}\delta_{K}\cdot x+\mathfrak{b}_{K}\bigr{)}=\mathds{1}_{\{x_{1}>0\}}(\alpha x_{1}+\mathfrak{b}_{K})\), and note that \(\tilde{\mathcal{R}}^{\kappa}\) agrees with \(\tilde{\mathcal{R}}\) for all \(x\in\mathbb{R}^{d_{\mathrm{in}}}\) with \(|x_{1}|\geq\tfrac{1}{\kappa}\). In analogy to above, we conclude that
\[\begin{split}&\int_{\mathbb{D}}\bigl{(}\mathcal{L}(x,\mathcal{R}^{ \kappa}(x))-\mathcal{L}(x,\mathcal{R}(x))\bigr{)}\,h(x)\,dx\\ &\qquad=\int_{D_{\kappa}}\Bigl{(}\tilde{\mathcal{L}}\Bigl{(} \begin{pmatrix}0\\ x^{\prime}\end{pmatrix},\tilde{\mathcal{R}}^{\kappa}(x)\Bigr{)}-\tilde{ \mathcal{L}}\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix},\tilde{\mathcal{R}}(x)\Bigr{)}\Bigr{)}\,h\Bigl{(} \begin{pmatrix}0\\ x^{\prime}\end{pmatrix}\Bigr{)}\,dx+o\bigl{(}\tfrac{1}{\kappa}\bigr{)},\end{split} \tag{93}\]
where \(D_{\kappa}=[-\tfrac{1}{\kappa},\tfrac{1}{\kappa}]\times\mathcal{D}\). As above we split the domain of integration into the two sets \(D^{\prime}_{\kappa,\varepsilon}\) and \(D^{\prime\prime}_{\kappa,\varepsilon}\). Now in terms of
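A numerical check of (92) (a sketch with hypothetical values for \(\alpha\), \(\mathfrak{b}_{K}\), and \(\kappa\), not part of the article) confirms the exact agreement outside the strip \(|x_{1}|<\tfrac{1}{\kappa}\):

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, bK, kappa = 0.7, 1.3, 25.0      # hypothetical values with b_K != 0
relu = lambda z: np.maximum(z, 0.0)

def R_tilde(x1):                        # 1_{x_1 > 0} (alpha x_1 + b_K)
    return (x1 > 0) * (alpha * x1 + bK)

def R_kappa(x1):                        # the continuous family from (92)
    return 0.5 * (alpha + bK * kappa) * relu(x1 + 1.0 / kappa) \
         + 0.5 * (alpha - bK * kappa) * relu(x1 - 1.0 / kappa)

x1 = rng.uniform(-1, 1, size=10000)
outside = np.abs(x1) >= 1.0 / kappa
assert np.allclose(R_kappa(x1)[outside], R_tilde(x1)[outside])
```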
\[\xi_{x^{\prime}}\colon[-1,1]\to[0,\infty),\qquad x_{1}\mapsto\tilde{\mathcal{ L}}\biggl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix},\tfrac{1}{2}(x_{1}+1)\mathfrak{b}_{K}\biggr{)} \tag{94}\]
we get by using the uniform continuity of \(\tilde{\mathcal{L}}\) on \(D^{\prime}_{\kappa,\varepsilon}\) and the fact that \(|D_{\kappa}|=\mathcal{O}(\tfrac{1}{\kappa})\) as \(\kappa\to\infty\) that
\[\begin{split}&\int_{D^{\prime}_{\kappa,\varepsilon}}\Bigl{(} \tilde{\mathcal{L}}\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix},\tilde{\mathcal{R}}^{\kappa}(x)\Bigr{)}-\tilde{ \mathcal{L}}\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix},\tilde{\mathcal{R}}(x)\Bigr{)}\Bigr{)}\,h\Bigl{(} \begin{pmatrix}0\\ x^{\prime}\end{pmatrix}\Bigr{)}\,dx\\ &\qquad=\frac{1}{\kappa}\int_{\mathcal{D}_{\varepsilon}}\Bigl{(} \int_{-1}^{1}\xi_{x^{\prime}}(x_{1})\,dx_{1}-\bigl{(}\xi_{x^{\prime}}(-1)+\xi_ {x^{\prime}}(1)\bigr{)}\Bigr{)}\,h\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix}\Bigr{)}\,dx^{\prime}+o\bigl{(}\tfrac{1}{\kappa}\bigr{)}. \end{split} \tag{95}\]
By strict convexity of \(\xi_{x^{\prime}}\), we get that \(Q(x^{\prime}):=\int_{-1}^{1}\xi_{x^{\prime}}(x_{1})\,dx_{1}-\bigl{(}\xi_{x^{ \prime}}(-1)+\xi_{x^{\prime}}(1)\bigr{)}<0\). With the same arguments as in the first case one obtains that
\[\lim_{\kappa\to\infty}\kappa\int_{\mathbb{D}}\bigl{(}\mathcal{L}(x,\mathcal{R}^ {\kappa}(x))-\mathcal{L}(x,\mathcal{R}(x))\bigr{)}\,h(x)\,dx=\int_{\mathcal{D}}Q (x^{\prime})\,h\Bigl{(}\begin{pmatrix}0\\ x^{\prime}\end{pmatrix}\Bigr{)}\,dx^{\prime}<0 \tag{96}\]
so that there exists a response \(\mathcal{R}^{\kappa}\) with strictly smaller error than \(\mathcal{R}\) and the proof is finished.
**Example 3.4**.: _If there exists a hyperplane \(H\) with \(h(x)=0\) for all \(x\in H\) such that \(H\) intersects the interior of the convex hull of \(\operatorname{supp}(\mu)\), then the conclusion of Theorem 1.3 is in general not true. Consider a continuous function \(f\colon\mathbb{R}^{2}\to\mathbb{R}\) that satisfies \(f(x)=1\) for all \(x\in B((0,1),1)\) and \(f(x)=0\) for all \(x\in B((1,-1),1)\cup B((-1,-1),1)\). Now, let \(\mathcal{L}(x,y)=(f(x)-y)^{2}\) and let \(\mu\) be the measure on \(\mathbb{R}^{2}\) with continuous Lebesgue density_
\[h(x)=\mathds{1}_{B((0,1),1)}(x)\,|x-(0,1)|+\mathds{1}_{B((1,-1),1)}(x)\,|x-(1,-1)|+\mathds{1}_{B((-1,-1),1)}(x)\,|x+(1,1)|. \tag{97}\]
_Then, we have \(h(x)\equiv 0\) on \(\mathbb{R}\times\{0\}\). Note that_
\[\mathcal{R}(x)=\begin{cases}0,&\text{ if }x_{2}\leq 0\\ 1,&\text{ if }x_{2}>0\end{cases} \tag{98}\]
_is a strict generalized response of dimension \(2\) with \(\int\mathcal{L}(x,\mathcal{R}(x))\,\mathrm{d}\mu(x)=0\). Conversely, there does not exist a network \(\mathbb{W}\in\mathcal{W}_{d}\) (for arbitrary \(d\in\mathbb{N}\)) with \(\int\mathcal{L}(x,\mathfrak{N}^{\mathbb{W}}(x))\,\mathrm{d}\mu(x)=0\). Indeed, assume that there exist \(d\in\mathbb{N}\) and \(\mathbb{W}\in\mathcal{W}_{d}\) with \(\operatorname{err}^{\mathcal{L}}(\mathbb{W})=0\). Then, every breakline \(H\neq\mathbb{R}\times\{0\}\) that intersects the interior of \(\mathbb{D}\) contains an uncountable set of points \(x\in H\) with \(h(x)>0\), and for all such points the function \(f\) is constant in a neighborhood of \(x\). Therefore, the collective response of the neurons with breakline \(H\) has to be constant and we can without loss of generality assume that all ReLU neurons have the breakline \(H=\mathbb{R}\times\{0\}\). Moreover, since \(\operatorname{err}^{\mathcal{L}}(\mathbb{W})=0\) it holds that \(\mathfrak{N}^{\mathbb{W}}(x)=0\) for all \(x=(x_{1},x_{2})\in\mathbb{R}^{2}\) with \(x_{2}<0\) and \(\mathfrak{N}^{\mathbb{W}}(x)=1\) for all \(x=(x_{1},x_{2})\in\mathbb{R}^{2}\) with \(x_{2}>0\). This contradicts the continuity of \(\mathfrak{N}^{\mathbb{W}}\)._
_Thus, there does not exist a global minimum in the loss landscape \(\mathcal{W}_{d}\ni\mathbb{W}\mapsto\operatorname{err}^{\mathcal{L}}(\mathbb{W})\) (for arbitrary \(d\in\mathbb{N}\)) and, in order to solve the minimization task iteratively, the sequence of networks returned by a gradient-based algorithm necessarily has unbounded parameters._
_For a thorough investigation of the non-existence of global minima in the approximation of discontinuous target functions \(f\), see [1]._
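The phenomenon in Example 3.4 can be reproduced numerically. The following Monte Carlo sketch (illustrative only; it uses steepening two-neuron responses across \(x_{2}=0\) in the spirit of Remark 2.5) shows that the strict response attains zero error while the ANN errors are positive and only tend to zero:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(-2, 2, size=(10**6, 2))      # bounding box of the three balls

def in_ball(x, c, r=1.0):
    return np.linalg.norm(x - np.asarray(c, dtype=float), axis=1) < r

# the density h from (97)
h = (in_ball(x, (0, 1)) * np.linalg.norm(x - np.array([0.0, 1.0]), axis=1)
     + in_ball(x, (1, -1)) * np.linalg.norm(x - np.array([1.0, -1.0]), axis=1)
     + in_ball(x, (-1, -1)) * np.linalg.norm(x + np.array([1.0, 1.0]), axis=1))

f = (x[:, 1] > 0).astype(float)              # agrees with f on the support of h
R = (x[:, 1] > 0).astype(float)              # the strict generalized response (98)
print(np.mean((f - R) ** 2 * h))             # 0.0: the strict response is error-free

def R_n(x, n):  # two-ReLU response steepening across x_2 = 0, cf. Remark 2.5
    return 0.5 * np.maximum(n * x[:, 1] + 1, 0) - 0.5 * np.maximum(n * x[:, 1] - 1, 0)

for n in [10, 100, 1000]:
    print(n, np.mean((f - R_n(x, n)) ** 2 * h))   # positive, decreasing toward 0
```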
Next, we show that in many situations, if the class of network responses is not able to produce the function \(x\mapsto\mathfrak{m}(x)\) defined in Lemma 3.1, the minimal error strictly decreases after adding a ReLU neuron to the network structure.
**Proposition 3.5**.: _Let \(d\in\mathbb{N}_{0}\), assume that \(\mathbb{D}=\operatorname{supp}(\mu)\) is a compact set, assume there exists \(\mathbb{W}\in\mathcal{W}_{d}\) with \(\operatorname{err}^{\mathcal{L}}(\mathbb{W})=\operatorname{err}^{\mathcal{L}} _{d}<\infty\), assume for every \(x\in\mathbb{D}\) that the function \(\mathcal{L}(x,\cdot)\) is convex,_
1. _assume for every compact_ \(K\subseteq\mathbb{R}\) _there exists_ \(L\in\mathbb{R}\) _such that for all_ \(x\in\mathbb{D}\)_,_ \(y,y^{\prime}\in K\) _that_ (99) \[|\mathcal{L}(x,y)-\mathcal{L}(x,y^{\prime})|\leq L|y-y^{\prime}|,\]
Figure 2. Visualization of the minimization task in Example 3.4. There exists a generalized response of dimension \(2\) but no neural network \(\mathbb{W}\in\mathcal{W}_{d}\) (\(d\in\mathbb{N}\)) attaining zero error.
2. _assume for every affine function_ \(\varphi\colon\mathbb{R}^{d_{\mathrm{in}}}\to\mathbb{R}\) _that the set_ (100) \[\big{\{}x\in\mathbb{D}\colon\mathcal{L}\text{ is not $y$-differentiable in }(x,\varphi(x))\big{\}}\] _is a Lebesgue nullset,_
_and assume that there exists no neural network \(\mathbb{W}\in\mathcal{W}_{d}\) satisfying for \(\mu\)-almost all \(x\in\mathbb{D}\) that_
\[\mathcal{L}(x,\mathfrak{N}^{\mathbb{W}}(x))=\inf_{y\in\mathbb{R}}\mathcal{L}(x,y). \tag{101}\]
_Then_
\[\mathrm{err}_{d}^{\mathcal{L}}>\mathrm{err}_{d+1}^{\mathcal{L}}. \tag{102}\]
**Remark 3.6**.: _We compare the assumptions of Proposition 3.5 with those of Theorem 1.3. In Proposition 3.5 we explicitly assume the existence of an optimal network \(\mathbb{W}\in\mathcal{W}_{d}\) and relax the continuity assumption on \(\mathcal{L}\) in the first component and the strict convexity assumption on \(\mathcal{L}\) in the second component. On the other hand, we introduce assumptions on the smoothness of \(\mathcal{L}\) in the second component. Under the assumptions of Theorem 1.3, condition (i) of Proposition 3.5 is satisfied (cf. [10, Thm. 10.6]) and we can apply the latter proposition if additionally condition (ii) is satisfied. In that case, the statement of Proposition 3.5 can be rewritten as follows: if \(\mathrm{err}_{d}^{\mathcal{L}}\leq\mathrm{err}_{d+1}^{\mathcal{L}}\), then there exists a neural network \(\mathbb{W}\in\mathcal{W}_{d}\) with \(\mathfrak{N}^{\mathbb{W}}(x)=\mathfrak{m}(x)\) for all \(x\in\mathbb{D}\)._
Proof of Proposition 3.5.: Let \(\mathbb{W}\in\mathcal{W}_{d}\) be a network with \(\mathrm{err}^{\mathcal{L}}(\mathbb{W})=\mathrm{err}_{d}^{\mathcal{L}}\). For \(\Delta,o\in\mathbb{R}\), \(\mathfrak{n}\in\mathcal{O}\) consider the function
\[N(\Delta,\mathfrak{n},o)(x)=\mathfrak{N}^{\mathbb{W}}(x)+\Delta(\mathfrak{n} \cdot x-o)^{+}. \tag{103}\]
If \(\mathrm{err}_{d}^{\mathcal{L}}\leq\mathrm{err}_{d+1}^{\mathcal{L}}\) then we have for all \(\Delta,o\in\mathbb{R}\), \(\mathfrak{n}\in\mathcal{O}\) that
\[\int_{\mathbb{D}}\mathcal{L}(x,\mathfrak{N}^{\mathbb{W}}(x))\,h(x)\,dx\leq \int_{\mathbb{D}}\mathcal{L}(x,N(\Delta,\mathfrak{n},o)(x))\,h(x)\,dx \tag{104}\]
and taking the derivative with respect to \(\Delta\) at \(\Delta=0\) yields
\[\int_{\mathbb{D}}\Bigl{(}\frac{\partial}{\partial y}\mathcal{L}(x,\mathfrak{N}^{\mathbb{W}}(x))\Bigr{)}(\mathfrak{n}\cdot x-o)^{+}\,h(x)\,dx=0, \tag{105}\]
where \(x\mapsto\frac{\partial}{\partial y}\mathcal{L}(x,\mathfrak{N}^{\mathbb{W}}(x))\) is uniformly bounded and well-defined outside a Lebesgue nullset. Indeed, since \(\mathfrak{N}^{\mathbb{W}}\) is piecewise affine there exists a finite number of affine functions \(\varphi_{1},\ldots,\varphi_{m}\) such that the set of points \(x\in\mathbb{D}\) for which \(\mathcal{L}\) is not \(y\)-differentiable in \((x,\mathfrak{N}^{\mathbb{W}}(x))\) is contained in
\[\bigcup_{i=1}^{m}\{x\in\mathbb{D}\colon\mathcal{L}\text{ is not $y$-differentiable in }(x,\varphi_{i}(x))\}, \tag{106}\]
which by (ii) is a nullset. Moreover, the boundedness of the derivative follows from the Lipschitz continuity of \(\mathcal{L}\) in the second argument, see (i). We let
\[\tilde{h}(x):=\Bigl{(}\frac{\partial}{\partial y}\mathcal{L}(x,\mathfrak{N}^{ \mathbb{W}}(x))\Bigr{)}h(x) \tag{107}\]
and note that the space \(\mathcal{H}\) of all continuous functions \(g\colon\mathbb{D}\to\mathbb{C}\) satisfying
\[\int_{\mathbb{D}}g(x)\,\tilde{h}(x)\,dx=0 \tag{108}\]
is linear and closed under convergence in \(C(\mathbb{D},\mathbb{C})\) (endowed with supremum norm). We showed that \(\mathcal{H}\) contains all functions of the form \(x\mapsto(\mathfrak{n}\cdot x-o)^{+}\) and it is standard to deduce that \(\mathcal{H}\) contains all polynomials and, using the Stone-Weierstrass theorem, thus all continuous functions. By the Riesz-Markov-Kakutani representation theorem, the measure \(\tilde{h}(x)\,dx\) is
the zero-measure and \(\tilde{h}\) vanishes outside a \(\mu\)-nullset. Note that \(h>0\), \(\mu\)-almost everywhere, and hence \(\frac{\partial}{\partial y}\mathcal{L}(x,\mathfrak{N}^{\mathbb{W}}(x))=0\), \(\mu\)-almost everywhere. Using the convexity of \(y\mapsto\mathcal{L}(x,y)\) we get \(\mathcal{L}(x,\mathfrak{N}^{\mathbb{W}}(x))=\inf_{y\in\mathbb{R}}\mathcal{L}(x,y)\), \(\mu\)-almost everywhere.
In the next example, we show that the conclusion of Proposition 3.5 is in general false if condition (ii) is not satisfied. We note that the loss function \(\mathcal{L}\) in the example is not strictly convex but the statements of Lemma 3.1 still hold in this case.
**Example 3.7**.: _Consider the following regression problem. Let \(d_{\rm in}=1\), \(\ell\geq 1\), \(\mu={\rm Leb}|_{[-\ell-1,1+\ell]}\) be the Lebesgue measure on the interval \([-\ell-1,1+\ell]\) and \(\mathcal{L}(x,y)=|y-f(x)|\) where_
\[f(x)=\begin{cases}1-|x|,&\text{ if }|x|\leq 1\\ 0,&\text{ if }|x|>1\end{cases}. \tag{109}\]
_Note that \(\lambda(\{x\in[-\ell-1,\ell+1]:\mathcal{L}\text{ is not }y\text{-differentiable in }(x,0)\})=2\ell>0\) and_
\[f(x)=(x+1)^{+}-2(x)^{+}+(x-1)^{+} \tag{110}\]
_so that \(f\) is the response of a network using three ReLU neurons and \({\rm err}_{3}^{\mathcal{L}}=0\). We show that for \(\ell\geq 13\) we have_
\[{\rm err}_{0}^{\mathcal{L}}={\rm err}_{1}^{\mathcal{L}}={\rm err}_{2}^{ \mathcal{L}}=\int_{-\ell-1}^{\ell+1}|f(x)|\,dx=1, \tag{111}\]
_i.e. the best regression function in the set of response functions for networks having \(2\) ReLU neurons is the zero function although \(\operatorname{argmin}_{y\in\mathbb{R}}\mathcal{L}(x,y)=f(x)\) is not the zero function. This shows that the conclusion of Proposition 3.5 is in general false if Assumption (ii) is not satisfied._
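A quick numerical sanity check (our own sketch, taking \(\ell=13\)) of the ReLU representation (110) and of the value of the integral in (111):

```python
import numpy as np

x = np.linspace(-14.0, 14.0, 560_001)  # the interval [-ell-1, ell+1] for ell = 13
f_relu = np.maximum(x + 1, 0) - 2 * np.maximum(x, 0) + np.maximum(x - 1, 0)
f_tent = np.where(np.abs(x) <= 1, 1 - np.abs(x), 0.0)
assert np.allclose(f_relu, f_tent)       # identity (110)
print(np.trapz(np.abs(f_tent), x))       # = 1, the value in (111)
```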
_Denote by \(E\colon\mathcal{W}_{2}\times\mathbb{I}\to\mathbb{R}\) the function_
\[(\mathbb{W},[s,t])\mapsto E(\mathbb{W},[s,t])=\int_{s}^{t}|\mathfrak{N}^{ \mathbb{W}}(x)-f(x)|-|f(x)|\,dx, \tag{112}\]
_where \(\mathbb{I}\) is the set of all closed intervals that are subsets of \([-\ell-1,\ell+1]\). We show that for \(\ell\geq 13\) we have \(E(\mathbb{W},[-\ell-1,1+\ell])\geq 0\) for all networks \(\mathbb{W}\in\mathcal{W}_{2}\)._
_Let \(\mathbb{W}\in\mathcal{W}_{2}\). Then \(\mathfrak{N}^{\mathbb{W}}\) is continuous and satisfies_
\[\mathfrak{N}^{\mathbb{W}}(x)=\begin{cases}\delta_{1}x+\mathfrak{b}_{1},&\text{ if }x\leq o_{1}\\ \delta_{2}x+\mathfrak{b}_{2},&\text{ if }o_{1}<x<o_{2}\\ \delta_{3}x+\mathfrak{b}_{3},&\text{ if }x>o_{2}\end{cases}, \tag{113}\]
_for \(\delta_{1},\delta_{2},\delta_{3},\mathfrak{b}_{1},\mathfrak{b}_{2},\mathfrak{ b}_{3},o_{1},o_{2}\in\mathbb{R}\) with \(o_{1}\leq o_{2}\)._
Figure 3. Visualization of the minimization task in Example 3.7. There exists a network \(\mathbb{W}\in\mathcal{W}_{3}\) with \(\mathfrak{N}^{\mathbb{W}}=f\) (blue). However, the realization function attaining minimal error in the class \(\mathcal{W}_{2}\), \(\mathcal{W}_{1}\), and \(\mathcal{W}_{0}\) is \(\mathfrak{N}^{\mathbb{W}}=0\) (red).
_First, we assume that \(o_{1}\leq 0\leq o_{2}\) and \(0\leq\mathfrak{N}^{\mathbb{W}}(0)\). We fix \(\delta_{2},\mathfrak{b}_{2}\in\mathbb{R}\) and derive a lower bound for \(E(\mathbb{W},[0,\ell+1])\) over all feasible choices of \(o_{2},\delta_{3}\) and \(\mathfrak{b}_{3}\). We start with the case \(\delta_{2}<0\). Note that the choice \(o_{2}=\mathfrak{N}^{\mathbb{W}}(0)/|\delta_{2}|\), \(\delta_{3}=\mathfrak{b}_{3}=0\) yields a better result than all networks with \(o_{2}\geq\mathfrak{N}^{\mathbb{W}}(0)/|\delta_{2}|\). Therefore we can restrict the optimization task to networks satisfying \(o_{2}\leq\mathfrak{N}^{\mathbb{W}}(0)/|\delta_{2}|\). For \(\mathfrak{N}^{\mathbb{W}}(0)/|\delta_{2}|\leq 1\) we show that, indeed, the optimal choice for the approximation of \(f\) on the right-hand side of the \(y\)-axis is \(o_{2}=\mathfrak{N}^{\mathbb{W}}(0)/|\delta_{2}|\), \(\delta_{3}=\mathfrak{b}_{3}=0\). If \(o_{2}<\mathfrak{N}^{\mathbb{W}}(0)/|\delta_{2}|\) and \(\mathfrak{N}^{\mathbb{W}}(1)\leq 0\) then_
\[E(\mathbb{W},[o_{2},a])\geq-\frac{1}{2}\mathfrak{N}^{\mathbb{W}}(o_{2})(a-o_ {2})\qquad\text{and}\qquad E(\mathbb{W},[a,\ell+1])\geq\frac{1}{2}\mathfrak{ N}^{\mathbb{W}}(o_{2})(a-o_{2}), \tag{114}\]
_where \(a\in[0,1]\) with \(\mathfrak{N}^{\mathbb{W}}(a)=0\). Conversely, if \(\mathfrak{N}^{\mathbb{W}}(1)\geq 0\) then \(E(\mathbb{W},[o_{2},1])\geq-\mathfrak{N}^{\mathbb{W}}(o_{2})(1-o_{2})\geq- \mathfrak{N}^{\mathbb{W}}(o_{2})\). We consider two cases. If \(\mathfrak{N}^{\mathbb{W}}(\ell+1)\geq 0\) then \(\delta_{3}\geq-\mathfrak{N}^{\mathbb{W}}(o_{2})/\ell\). Thus, \(E(\mathbb{W},[1,\ell+1])\geq\frac{1}{2}\ell\mathfrak{N}^{\mathbb{W}}(1)\geq \frac{\ell-1}{2}\mathfrak{N}^{\mathbb{W}}(o_{2})\) so that for \(\ell\geq 3\) we get \(E(\mathbb{W},[o_{2},\ell+1])\geq 0\). If \(\mathfrak{N}^{\mathbb{W}}(\ell+1)\leq 0\) then \(\delta_{3}\leq-\mathfrak{N}^{\mathbb{W}}(o_{2})/(\ell+1)\). In that case, \(E(\mathbb{W},[1,\ell+1])\) corresponds to the area of two triangles with slope \(|\delta_{3}|\) and baselines that add to \(\ell\). This is minimized by two congruent triangles so that_
\[E(\mathbb{W},[1,\ell+1])\geq\Big{(}\frac{\ell}{2}\Big{)}^{2}|\delta_{3}|\geq \frac{\ell-1}{4}\mathfrak{N}^{\mathbb{W}}(o_{2}). \tag{115}\]
_Thus, for \(\ell\geq 5\) we get \(E(\mathbb{W},[o_{2},\ell+1])\geq 0\) and the optimal choice is \(o_{2}=\mathfrak{N}^{\mathbb{W}}(0)/|\delta_{2}|\), \(\delta_{3}=\mathfrak{b}_{3}=0\)._
_It remains to consider the case \(\mathfrak{N}^{\mathbb{W}}(0)/|\delta_{2}|\geq 1\). In this case the choice \(\delta_{3}>0\) is clearly suboptimal and for \(o_{2}\leq 1\) we get \(E(\mathbb{W},[0,o_{2}])\geq-(\mathfrak{N}^{\mathbb{W}}(0)-\frac{1}{2}|\delta _{2}|)\). The above calculations show that_
\[E(\mathbb{W},[o_{2},\ell+1])\geq\frac{\ell-5}{4}\mathfrak{N}^{\mathbb{W}}(o_{2 })\geq\frac{\ell-5}{4}(\mathfrak{N}^{\mathbb{W}}(0)-|\delta_{2}|) \tag{116}\]
_so that for \(\ell\geq 7\) we have \(E(\mathbb{W},[0,\ell+1])\geq-\frac{1}{2}\mathfrak{N}^{\mathbb{W}}(0)\). If \(\mathfrak{N}^{\mathbb{W}}(0)/|\delta_{2}|\geq\ell+1\) the choice \(o_{2}>1\) is suboptimal and in the case \(\mathfrak{N}^{\mathbb{W}}(0)/|\delta_{2}|\leq\ell+1\) where \(o_{2}\leq 1\) is not the optimal choice it is easy to see that actually \(o_{2}=\mathfrak{N}^{\mathbb{W}}(0)/|\delta_{2}|\), \(\delta_{3}=\mathfrak{b}_{3}=0\) is the best choice. In that case_
\[E(\mathbb{W},[0,1])\geq-(\mathfrak{N}^{\mathbb{W}}(0)-\frac{1}{2}|\delta_{2}| )\quad\text{ and }\quad E(\mathbb{W},[1,\ell+1])\geq\frac{1}{2}(\mathfrak{N}^{ \mathbb{W}}(0)-|\delta_{2}|). \tag{117}\]
_In conclusion, in all of the above cases we get for \(\ell\geq 7\) that \(E(\mathbb{W},[0,\ell+1])\geq-\frac{1}{2}\mathfrak{N}^{\mathbb{W}}(0)\)._
_Next, we consider the case \(\delta_{2}\geq 0\). Set \(a:=\inf\{x\geq 0\colon\delta_{2}x+\mathfrak{b}_{2}\geq 1-x\}\). Choosing \(o_{2}>a\) or \(\delta_{3}\geq 0\) is clearly suboptimal. For \(0\leq o_{2}\leq a\) and \(\delta_{3}<0\) note that \(E(\mathbb{W},[0,1])\geq-\mathfrak{N}^{\mathbb{W}}(o_{2})\). Analogously to the case \(\delta_{2}<0\) we get_
\[E(\mathbb{W},[1,\ell+1])\geq\frac{\ell-1}{4}\mathfrak{N}^{\mathbb{W}}(o_{2}) \tag{118}\]
_so that for \(\ell\geq 7\)_
\[E(\mathbb{W},[0,\ell+1])\geq\frac{1}{2}\mathfrak{N}^{\mathbb{W}}(o_{2})\geq \frac{1}{2}\mathfrak{N}^{\mathbb{W}}(0). \tag{119}\]
_Now, if \(\mathfrak{N}^{\mathbb{W}}(0)\leq 0\) and \(\ell\geq 5\) then the above calculations imply that \(E(\mathbb{W},[-\ell-1,\ell+1])\geq 0\). Using the symmetry of the problem we showed that for all networks \(\mathbb{W}\in\mathcal{W}_{2}\) satisfying \(o_{1}\leq 0\leq o_{2}\) we have that_
\[\int_{-\ell-1}^{\ell+1}\mathcal{L}(x,\mathfrak{N}^{\mathbb{W}}(x))\,\mathrm{d}x \geq\int_{-\ell-1}^{\ell+1}\mathcal{L}(x,0)\,\mathrm{d}x. \tag{120}\]
_Using again the symmetry, we are therefore left with considering the case \(0<o_{1}\leq o_{2}\). We can clearly focus on the case \(o_{1}\leq 1\) and \(\mathfrak{N}^{\mathbb{W}}(0)\geq 0\). Note that one can use the above arguments in order to show that_
\[E(\mathbb{W},[o_{1},\ell+1])\geq\min\bigl{(}0,-\frac{1}{2}\mathfrak{N}^{ \mathbb{W}}(o_{1})\bigr{)}. \tag{121}\]
_If \(\delta_{1}<0\) we thus get_
\[E(\mathbb{W},[0,\ell+1])\geq-\frac{3}{2}\mathfrak{N}^{\mathbb{W}}(0). \tag{122}\]
_On the other hand, \(E(\mathbb{W},[-\ell-1,0])\geq(\ell-1)\mathfrak{N}^{\mathbb{W}}(0)\) so that, for \(\ell\geq 5/2\), we have \(E(\mathbb{W},[-\ell-1,\ell+1])\geq 0\). Conversely, if \(\delta_{1}\geq 0\) we get_
\[E(\mathbb{W},[0,\ell+1])\geq-\frac{3}{2}\mathfrak{N}^{\mathbb{W}}(0)-\delta_{1} \tag{123}\]
_and for \(\delta_{1}\geq\mathfrak{N}^{\mathbb{W}}(0)\) and \(\ell\geq 9/2\) we clearly have \(E(\mathbb{W},[-\ell-1,\ell+1])\geq 0\). For \(\delta_{1}\leq\mathfrak{N}^{\mathbb{W}}(0)\) we get_
\[E(\mathbb{W},[-1,\ell+1])\geq-\frac{5}{2}\mathfrak{N}^{\mathbb{W}}(0)-\frac{1 }{2}\delta_{1}. \tag{124}\]
_Now for \(\delta_{1}\leq\mathfrak{N}^{\mathbb{W}}(0)/(\ell+1)\) we get_
\[E(\mathbb{W},[-\ell-1,-1])\geq\frac{1}{2}\ell\mathfrak{N}^{\mathbb{W}}(1)\geq \frac{1}{2}(\ell-1)\mathfrak{N}^{\mathbb{W}}(0) \tag{125}\]
_and for \(\ell\geq 7\) we get that \(E(\mathbb{W},[-\ell-1,\ell+1])\geq 0\). If \(\mathfrak{N}^{\mathbb{W}}(0)/(\ell+1)\leq\delta_{1}\leq\mathfrak{N}^{\mathbb{ W}}(0)\) then_
\[E(\mathbb{W},[-\ell-1,-1])\geq\Bigl{(}\frac{\ell}{2}\Bigr{)}^{2}|\delta_{1}| \geq\frac{\ell-1}{4}\mathfrak{N}^{\mathbb{W}}(0). \tag{126}\]
_Thus, if \(\ell\geq 13\) we have \(E(\mathbb{W},[-\ell-1,\ell+1])\geq 0\) and the proof of the assertion is finished._
_Acknowledgements._ We gratefully acknowledge the Cluster of Excellence EXC 2044-390685587, Mathematics Münster: Dynamics-Geometry-Structure funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation). This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - SFB 1283/2 2021 - 317210226.
|
2309.10948 | A Novel Deep Neural Network for Trajectory Prediction in Automated
Vehicles Using Velocity Vector Field | Anticipating the motion of other road users is crucial for automated driving
systems (ADS), as it enables safe and informed downstream decision-making and
motion planning. Unfortunately, contemporary learning-based approaches for
motion prediction exhibit significant performance degradation as the prediction
horizon increases or the observation window decreases. This paper proposes a
novel technique for trajectory prediction that combines a data-driven
learning-based method with a velocity vector field (VVF) generated from a
nature-inspired concept, i.e., fluid flow dynamics. In this work, the vector
field is incorporated as an additional input to a convolutional-recurrent deep
neural network to help predict the most likely future trajectories given a
sequence of bird's eye view scene representations. The performance of the
proposed model is compared with state-of-the-art methods on the HighD dataset
demonstrating that the VVF inclusion improves the prediction accuracy for both
short and long-term (5 sec) time horizons. It is also shown that the accuracy
remains consistent with decreasing observation windows which alleviates the
requirement of a long history of past observations for accurate trajectory
prediction. Source codes are available at:
https://github.com/Amir-Samadi/VVF-TP. | MReza Alipour Sormoli, Amir Samadi, Sajjad Mozaffari, Konstantinos Koufos, Mehrdad Dianati, Roger Woodman | 2023-09-19T22:14:52Z | http://arxiv.org/abs/2309.10948v1 | A Novel Deep Neural Network for Trajectory Prediction in Automated Vehicles Using Velocity Vector Field
###### Abstract
Anticipating the motion of other road users is crucial for automated driving systems (ADS), as it enables safe and informed downstream decision-making and motion planning. Unfortunately, contemporary learning-based approaches for motion prediction exhibit significant performance degradation as the prediction horizon increases or the observation window decreases. This paper proposes a novel technique for trajectory prediction that combines a data-driven learning-based method with a velocity vector field (VVF) generated from a nature-inspired concept, i.e., fluid flow dynamics. In this work, the vector field is incorporated as an additional input to a convolutional-recurrent deep neural network to help predict the most likely future trajectories given a sequence of bird's eye view scene representations. The performance of the proposed model is compared with state-of-the-art methods on the highD dataset demonstrating that the VVF inclusion improves the prediction accuracy for both short and long-term (5 sec) time horizons. It is also shown that the accuracy remains consistent with decreasing observation windows which alleviates the requirement of a long history of past observations for accurate trajectory prediction.1.
Footnote 1: Source codes are available at: [https://github.com/Amir-Samadi/VVF-TP](https://github.com/Amir-Samadi/VVF-TP)
## I Introduction
Safe and efficient automated driving in dense road traffic environments, where several vehicles dynamically interact with each other, requires predicting their intended manoeuvres and/or future trajectories as a function of time [1]. The prediction accuracy becomes essential in this case and directly impacts the downstream motion planning performance of automated driving systems (ADS). On the one hand, a long time prediction horizon (five seconds) allows for a proactive response to the dynamic changes in the driving scene, while, on the other hand, small observation windows (less than one second) are desirable to be able to predict the future states for most of the perceived surrounding vehicles (SVs). To meet both targets, a comprehensive understanding of the spatio-temporal interactions among nearby road users, including semantic environmental information, such as the location of lane markings, road layout, speed limits and nominal velocities is needed [2, 3]. Formulating such a complicated trajectory prediction problem for all vehicles in the scene is not viable through human-crafted heuristic algorithms, which explains why data-driven techniques constitute the state-of-the-art (SOTA) methods in predicting the motion of other road users for ADS.
Contemporary learning-based motion prediction methods have widely leveraged convolutional and recurrent techniques to capture the influence of spatial and temporal interactions among road users on their future trajectories [4]. Despite their promising performance in some driving scenarios, these approaches unfortunately face several challenges. Firstly, their prediction accuracy severely degrades as the prediction horizon increases. Secondly, they rely on past state observations within a specific and usually long period of time, which limits their ability to appropriately react to actors that have been only recently detected [5].
In order to address the above shortcomings, this paper proposes a novel hybrid logical-learning method for trajectory prediction, see Fig. 1 for its block diagram representation. Similar to the study in [6], a baseline deep neural network (DNN) is designed to predict future trajectories using a bird's eye view (BEV) map of the driving scene. To better capture the spatio-temporal inter-dependencies among nearby road users, as well as the semantic information of the drivable area, the BEV data is further processed to generate an equivalent velocity vector field (VVF) based on fluid dynamics principles. Specifically, each BEV map is associated with a two-dimensional (2d) VVF that provides the most likely speed and orientation of a particle for each pixel of the map. Therefore, the generated vector field helps distil additional meaningful information from the driving scene and use that to augment the input to the DNN, so that the training quality is enhanced and the prediction accuracy is
Fig. 1: General framework of the proposed method operating on BEV maps and incorporating VVF into learning-based methods for trajectory prediction. The trajectory of the TV is predicted while taking into account the interactions among all vehicles including the TV and SVs within an observation window of length \(h\).
increased. For instance, Fig. 2 intuitively illustrates how the VVF information helps distinguish between two different cases in a double-lane driving scenario. The left-hand side scenario includes a low-speed moving vehicle in front of the target vehicle (TV), whereas both vehicles travel with the same speed in the right-hand side scenario. The streamlines starting from cells occupied by the TV are derived from the equivalent VVF (Fig. 2b), and suggest that the TV is going to carry out a lane change in the left-hand side scenario and lane-keeping in the right-hand side one. This could not be inferred from traditional BEV inputs in which only the instantaneous locations and velocities of the two vehicles would be available to the DNN. The VVF is essentially a model-based enhancement to the BEV map that allows the DNN to better understand the current and future interactions of vehicles in the scene. In summary, the contributions of this paper are:
* Introducing a novel approach to represent the spatio-temporal interactions between road users inspired by the fluid dynamics concept. This is achieved by encoding the semantic information of the driving context as a fluid flow simulation and generating a VVF accordingly.
* Developing a novel trajectory prediction method that combines a convolutional-recurrent DNN with a continuous VVF. This technique results in a notable improvement in trajectory prediction accuracy ranging from 18% to 72% compared to SOTA methods.
* A VVF dataset associated with the publicly available highD dataset, which can be leveraged by the wider research community for designing sophisticated data-driven learning-based methods for trajectory prediction, decision-making, motion planning and control for ADS.
The rest of this paper is organised as follows: After reviewing related studies in Section II, the problem, system model assumptions and proposed method are explained in Section III. The performance evaluation of the proposed method is compared with SOTA results in Section IV. Finally, the key takeaways of this study and highlights for future work are presented in Section V.
## II Related work
A review of recent learning-based vehicle trajectory prediction methods has been provided in [7]. This section reviews existing studies on vehicle trajectory prediction, focusing on two main aspects: (1) The input representation used in learning-based prediction, and (2) the type of deep learning techniques used for vehicle trajectory prediction.
### _Input Representations for Trajectory Prediction_
The future trajectory of a target vehicle (TV) depends on several factors such as its current and previous states, the interaction with surrounding vehicles (SVs) and the scene context. Early studies only encoded the TV's motion states into the prediction model's input [8, 9] yielding accurate short-term predictions, but failing to predict the TV's trajectory for longer time horizons. To overcome this issue, recent studies also encoded the interaction with SVs as a list of features, such as the relative distance/velocity between the TV and every SV in the scene [10, 11, 12]. It is also possible to augment the feature list with some scene context features such as the existence of adjacent lanes or the lane width [5]. As expected, the main drawback of these methods lies in the implementation complexity of the learning model. Similarly, end-to-end approaches that automatically extract features from raw-sensor data [13, 14], also suffer from high computational complexity. Kim
Fig. 2: An example illustration of a vector field representation for a driving context including a double-lane road and two moving vehicles. In the right and left columns the vehicle in front (yellow) moves at the same speed and at a lower speed than the vehicle behind (green), respectively. (a) Example illustration of the driving scenario. (b) The associated generated vector fields (blue arrows) and three sampled streamlines (an example illustration of predicted trajectories) for the target vehicle (green). (c) and (d) The magnitude of the vector field components in the longitudinal and lateral directions, respectively.
_et al._[15] addressed this issue by introducing trajectory-lane features, where the scene context and vehicle features are jointly learnt per driving lane. Another group of studies utilised a simplified BEV representation of the environment, including the scene context and the dynamic agents within the environment [4, 16, 17]. Deo _et al._ in [4] utilised a social grid map where each occupied cell by a vehicle is filled with the encoded vehicle dynamics using a long short-term memory (LSTM) network. In [16], a dynamic occupancy grid is generated from ego-centric point cloud data. Mozaffari _et al._ in [7] utilised a stack of BEV images, each representing the dynamic and static context of the scene at a specific time step. Similarly, this paper adopts a BEV representation for the input data, however, we enrich the BEV images with the generated VVF of the driving environment. While traditional BEV input representations are usually sparse, unless the road traffic density is extremely high, the proposed VVF augments the BEV input with extra information that helps deduce the TV's future speed at future locations.
### _Deep Learning Techniques for Trajectory Prediction_
Recurrent neural networks, more specifically LSTMs, have been used in several studies for vehicle trajectory prediction [10, 8, 9]. LSTMs take advantage of feedback loops for extracting temporal features from sequential input data. Despite their power in learning the features of sequential data, they lack a mechanism for spatial features extraction, which is also required for understanding the spatial interactions for vehicle trajectory prediction problems. Therefore, several studies utilised deep learning techniques, such as attention mechanisms [18], and convolutional neural networks (CNNs) [4, 19], often as an addition to LSTMs, to fill this gap. Messaoud _et al._ in [18] proposed a multi-head attention pooling mechanism to extract spatial information from encoded dynamics of vehicles, while in [4], CNNs are used to serve the same task. Mukherjee et al. in [19] utilised convLSTM to extract spatio-temporal features from a sequence of occupancy grid maps of the driving environment. Similarly, in this paper, we first use a CNN to extract spatial features from a sequence of VVF-augmented BEV input representation. The results are then processed by an LSTM to extract temporal dependencies. The comparative study shows that the proposed method can outperform existing SOTA approaches.
## III Proposed Framework
This section describes the VVF-based Trajectory Prediction **(VVF-TP)** framework. It is first discussed how the BEV input data is processed to generate the associated VVF and how the latter is fed into the input of the DNN. Afterwards, the DNN pipeline including a joint convolutional-recurrent neural network is presented and the learning procedure is subsequently described.
### _VVF Generation_
A boundary-value fluid flow dynamic problem is employed for encoding the semantic information of the driving context as a continuous vector field over a 2d drivable area. First, we explain how this problem is derived, and next, how it is solved using a numerical method.
#### III-A1 Formulating fluid flow problem from driving context
The boundary-value problem is defined using fluid flow simulation in a structured channel that has the same geometry with the equivalent drivable area. In our case, four types of semantic information of the driving context are interpreted into boundary conditions (BCs) of the fluid flow simulation, namely the moving vehicles' velocities, road boundaries, lane markings, and nominal lane speeds. To consider the _moving vehicles' velocities_, the fluid particle's velocity at the cells occupied by vehicles is set equal to the current vehicle speeds. Since the _road boundary_, similar to static obstacles, should be avoided, a _no-slip_ condition is associated with it, whereas the _lane markings_ should be passable to allow for lane-change manoeuvres. Accordingly, the _porous_ condition is applied to the cells corresponding to lane markings so that the fluid particles could pass the porous barrier in spite of its resistance. Finally, the input/output velocity at the fluid channel cross sections is set equal to the _nominal speed_ (longitudinal) per lane, e.g., 30 m/sec for highways. After the values of the vector field at the boundaries are set, the remaining values are determined by solving the underlying Partial Differential Equations (PDE) that are known as Navier-Stokes equations [20].
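As an illustration, these boundary conditions can be encoded as per-cell prescribed velocities on the BEV grid. The sketch below is a hypothetical implementation of ours (the cell labels, the `boundary_conditions` helper, and the single shared `vehicle_vel` are assumptions, not the paper's code); porous lane markings are left to the solver, since they act on the streamed densities rather than on prescribed velocities.

```python
import numpy as np

# hypothetical cell labels for a (H, W) BEV grid
FREE, VEHICLE, WALL, MARKING, INLET = range(5)

def boundary_conditions(labels, vehicle_vel=(20.0, 0.0), nominal_speed=30.0):
    """Per-cell prescribed (vx, vy); NaN marks interior cells solved by the LBM."""
    vel = np.full(labels.shape + (2,), np.nan)
    vel[labels == VEHICLE] = vehicle_vel         # moving vehicles: current speed (per vehicle in practice)
    vel[labels == WALL] = (0.0, 0.0)             # road boundary: no-slip
    vel[labels == INLET] = (nominal_speed, 0.0)  # channel cross sections: nominal lane speed
    # cells labelled MARKING receive a porous (partially passable) treatment in the solver
    return vel
```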
#### III-A2 Solving the fluid flow problem
While there is no analytical solution for the boundary-value problem, numerical methods, proposed in the computational fluid dynamics literature, can be used to solve for the VVF of the BEV map. Among all available solutions, the lattice Boltzmann method (LBM) [21] is adopted, as it can handle BCs with sophisticated geometries, and it is also easily parallelizable. Both features make this method suitable for motion prediction applications in ADS, where computational efficiency and digesting complex driving contexts are key requirements.
In LBM the simulated fluid domain is discretised into uniformly spaced grids on a lattice (grid cells for the 2d domain in our case study). After assigning the BCs to the corresponding grids, the LBM solves the fluid dynamics differential equation indirectly and calculates the motion vectors via two main steps, i.e., propagation (streaming) and collision (relaxation) of fluid density in the lattice. There are different ways of connecting adjacent nodes in a lattice which are called lattice vectors. For instance, the D2Q9 lattice vector means that the lattice is a 2d grid and each node is connected to nine surrounding nodes. At each lattice update, the microscopic density is propagated along the lattice vector and the node densities are updated through the collision process (see the Appendix for more details). Fig. 2(b) illustrates the VVF calculated for the different scenarios depicted in Fig. 2(a). The corresponding longitudinal, Fig. 2(c), and lateral, Fig. 2(d), velocities that satisfy the BCs are also depicted. The lattice-based propagation and collision processes make it possible to run the LBM on parallel architectures such as graphics processing units (GPUs) that enable real-time performance. _Sailfish_[22] is a well-developed open-source toolbox that
implements the LBM with flexibility for defining a problem and high computational performance on GPU, up to 400 million lattice updates per second (mlups) on a GeForce RTX 3080 Ti. Therefore, if it takes 100 lattice updates to calculate the VVF for a lattice with \(256\times 64\) dimensions, the computation takes only \(4.4\) ms to complete. Note that the nominal transmission frequency of the Collective Perception Messages (CPM) standardised by ETSI is equal to \(10\) Hz. It is therefore expected that the ego vehicle (EV) will receive the perception obtained by roadside infrastructures equipped with sensors and wireless connectivity every \(100\) ms.
### _BEV occupancy grid and VVF data representation_
With the generated VVF at hand, this section explains how the occupancy grid map and the VVF are pre-processed before being fed into the DNN, see the first two steps at the left-hand side of Fig. 3: _Input_ and _Reconstructed Input_. The occupancy grid consists of pixels (or cells) and represents the BEV image with a ternary value per pixel: one value for unoccupied cells, a second for cells occupied by road participants, and a third for cells covered by the TV. The VVF is represented by two float-valued images, with sizes equal to that of the occupancy grid map, which contain the calculated longitudinal and lateral velocity for each cell of the map. At the _Input_ step, one may observe that for each element of the observation window \(h\), we have vertically stacked the occupancy grid and the VVF, where \(w\) and \(l\) denote the number of pixels representing the width and length of the BEV map, respectively. At the _Reconstructed Input_ data preparation step, the input data is transformed into three spatio-temporal images, where we have concatenated the observation windows of each image. To the best of our knowledge, for the first time in the related literature, we introduce grid time-sequence images in the data preparation step, enabling the following CNN layers to obtain a more comprehensive overview of the entire observation time.
The size of the BEV occupancy grid is configured as \(32\times 256\) pixels. The 20-meter lateral coverage has enough capacity for the TV's adjacent participant to be included, and the 200-meter longitudinal coverage assures enough space for comprising a complete lane change or overtaking manoeuvre. The pre-processed input data to the DNN is a three-channel image of size (\(h\times 32\times 256\)) representing BEV, longitudinal and lateral VVF.
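A minimal sketch (ours) of how such an input tensor could be assembled, assuming the occupancy grid and the two VVF components are already available per frame:

```python
import numpy as np

def build_input(occ, vx, vy):
    """Stack the h-frame occupancy grid and VVF into a (3, h, 32, 256) tensor.

    occ, vx, vy: arrays of shape (h, 32, 256) holding the BEV occupancy values
    and the longitudinal / lateral VVF components over the observation window.
    """
    assert occ.shape == vx.shape == vy.shape
    return np.stack([occ, vx, vy], axis=0)  # channels: BEV, V_x, V_y
```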
Fig. 3: Structure of the proposed DNN for trajectory prediction. The input comprises the occupancy grid, the longitudinal, \(V_{x}\), and lateral, \(V_{y}\), components (their magnitude) of the generated VVF over the observation length \(h\). The length and width of the BEV images are \(l\) and \(w\), respectively.
### _GRU-CNN Model Design_
This section presents the DNN model architecture that is employed to generate trajectories for the TV using the pre-processed historical observation data of BEV images and VVFs. Fig. 3 illustrates the model architecture, wherein the initial layers comprise CNN layers succeeded by a Recurrent Neural Network (RNN) encoder. CNNs are widely acknowledged for extracting image features, and enabling the identification of spatial patterns through convolution and pooling operations. Each convolutional layer applies a set of filters to the input image, producing a collection of feature maps that capture distinct aspects of the BEV and VVF input images. Subsequently, the pooling layers downsample the feature maps while preserving important information. These layers are succeeded by Gated Recurrent Unit (GRU) layers, which capture temporal features extracted by the CNN. GRU layers, similar to LSTM networks extensively employed in the literature [4, 23], possess fewer parameters to train and facilitate learning. The GRU layers consist of gated units that selectively retain or forget information from preceding time steps, enabling the network to capture sequential information. Afterwards, to decode the extracted spatio-temporal features we use fully connected (FC) layers, which map the encoded features to the predicted trajectory of the TV. The specific architecture and hyper-parameters of the CNN layers utilised in this study are presented in Table I.
It is worth noting that in the existing literature, it is common to employ a decoder RNN subsequent to the encoder GRU. However, in our study, we deliberately omitted this layer to compel the CNN to comprehend both spatial and temporal aspects of the provided observation history. This is accomplished by aligning the output channel number of the final CNN layer and the encoding GRU layer with the observation history \(h\). Considering the high computational costs associated with RNN layers, our study demonstrates that avoiding an additional decoder GRU layer not only enhances the overall agility of the architecture but also encourages the CNN layers to extract spatio-temporal features.
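A possible PyTorch realization of this pipeline is sketched below. The layer widths, kernel sizes, and pooling are our own placeholders (the paper's actual hyper-parameters are in its Table I); the one structural constraint we keep is that the final convolution emits \(h\) channels, which the GRU then reads as a length-\(h\) sequence.

```python
import torch
import torch.nn as nn

class VVFTPSketch(nn.Module):
    """CNN -> GRU -> FC trajectory predictor; hypothetical layer sizes."""

    def __init__(self, h=10, hidden=128, p=25):
        super().__init__()
        self.p = p
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, h, 3, stride=2, padding=1), nn.ReLU(),  # h output channels = time steps
            nn.AdaptiveAvgPool2d((4, 32)),
        )
        self.gru = nn.GRU(input_size=4 * 32, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2 * p)  # p future (x, y) waypoints

    def forward(self, x):          # x: (B, 3, H_img, W_img) stacked BEV + V_x + V_y
        z = self.cnn(x)            # (B, h, 4, 32)
        z = z.flatten(2)           # (B, h, 128): channel index acts as the time index
        _, hn = self.gru(z)        # final hidden state: (1, B, hidden)
        return self.fc(hn.squeeze(0)).view(-1, self.p, 2)

out = VVFTPSketch()(torch.randn(2, 3, 320, 256))  # e.g. h frames tiled along one axis
print(out.shape)  # torch.Size([2, 25, 2])
```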
### _Training Process_
During the training phase, the DNN estimates the future trajectory of the TV and compares it with its ground truth by evaluating the (differentiable) "Huber Loss" function, \(\mathcal{L}_{\delta}\), which applies a \(\delta\) threshold to strike a balance between the Mean Squared Error (MSE) and the Mean Absolute Error (MAE). Let us define by \((x_{j},y_{j}),j=1,\ldots,p\) the coordinates of the ground truth trajectory on the BEV map, and by \((\hat{x}_{j},\hat{y}_{j})\) the predicted trajectory on the same coordinate system. Let us also construct the \(2p\)-dimensional column vectors of stacked coordinates, i.e., \(\mathbf{z}\) for the ground truth and \(\mathbf{\hat{z}}\) for the predicted trajectory. Then, the Huber loss function can be read as:
\[\mathcal{L}_{\delta}=\begin{cases}\frac{1}{2}||\mathbf{\hat{z}}-\mathbf{z}||_{ 2}^{2},&||\mathbf{\hat{z}}-\mathbf{z}||_{1}\leq\delta.\\ \delta\left(||\mathbf{\hat{z}}-\mathbf{z}||_{1}-\frac{\delta}{2}\right),& \text{otherwise}.\end{cases} \tag{1}\]
While the quadratic nature of MSE magnifies the values of outliers to avoid them, the MAE weights all errors larger than \(\delta\) on the same linear scale. In practice, our focus lies on improving the overall trajectory prediction performance rather than solely mitigating outlier errors. To this end, intuitively, the Huber Loss function applies MSE to small error values, which results in amplifying them, and MAE to large error values, leading the DNN to spend less learning time to avoid outlier errors.
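A direct transcription of Eq. (1) could look as follows (our sketch; note that, as written above, the threshold is applied to the per-sample \(\ell_{1}\) norm of the whole residual vector rather than elementwise as in `torch.nn.HuberLoss`):

```python
import torch

def huber_loss(z_hat: torch.Tensor, z: torch.Tensor, delta: float = 1.0) -> torch.Tensor:
    """Eq. (1): z_hat, z are (B, 2p) stacked predicted / ground-truth coordinates."""
    r = z_hat - z
    l1 = r.abs().sum(dim=1)              # ||r||_1 per sample
    quad = 0.5 * (r ** 2).sum(dim=1)     # 0.5 * ||r||_2^2
    lin = delta * (l1 - 0.5 * delta)
    return torch.where(l1 <= delta, quad, lin).mean()
```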
## IV Performance Evaluation
The performance evaluation of the trajectory prediction method developed in this article (hereafter referred to as the VVF-TP) is carried out for highway driving scenarios in two parts: first, a comparative study assessing the performance against the SOTA methods; second, an ablation analysis showing how the VVF and the observation window length separately affect the prediction performance. In this section, the selected dataset and the evaluation metrics are explained, before presenting the quantitative/qualitative performance evaluation results. During training, the parameter \(\delta\) in the Huber Loss function in Eq. (1) is set at \(\delta=1\) m, and the rest of the network parameters are given in Table I.
### _Dataset_
The highD dataset has been collected in six different locations in Germany and contains 110,000 unique trajectories for various types of vehicles moving on two or three-lane roads [26]. It has been widely used for evaluating trajectory prediction performance in highway driving scenarios, as it covers both heavy and light traffic conditions, which cause different driving behaviours throughout the dataset. For a fair comparison of the final results with the SOTA literature, the highD dataset has been divided into training, testing, and validation subsets with ratios 70-20-10 %, respectively. The time granularity between consecutive frames is \(0.2\) seconds. It is worth mentioning that the associated VVF to this dataset using the methodology described in Section III.A has been made publicly available at [https://github.com/Amir-Samadi/VVF-TP](https://github.com/Amir-Samadi/VVF-TP).
### _Evaluation Metrics_
Similar to several recent studies, the root mean square error (RMSE) of the distance between the predicted trajectory and its ground truth has been used in this paper to measure the prediction accuracy (the lower the RMSE the better) of the VVF-TP method. We report the RMSE separately for the longitudinal (\(x\)) and the lateral (\(y\)) directions, in addition to the RMSE of the distance separation between the two trajectories that are usually reported in the literature. This allows us to obtain a more in-depth understanding of how lateral/longitudinal accuracy contributes to the overall prediction error. The RMSE of the distance between the predicted trajectory and its ground truth can be written as
\[\mathrm{RMSE}=\sqrt{\frac{1}{p}\sum\limits_{j=1}^{p}\Big{[}\big{(}\hat{x}_{j} -x_{j}\big{)}^{2}+\big{(}\hat{y}_{j}-y_{j}\big{)}^{2}\Big{]}}, \tag{2}\]
while the RMSE in, e.g., the longitudinal direction is calculated as
\[\mathrm{RMSE}_{x}=\sqrt{\frac{1}{p}\sum\limits_{j=1}^{p}\big{(}\hat{x}_{j}-x_{j}\big{)}^{2}}, \tag{3}\]
and a similar expression can be written for the lateral error.
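For reference, Eqs. (2) and (3) in code form (our sketch, with trajectories stored as \((p,2)\) arrays of \((x,y)\) waypoints):

```python
import numpy as np

def rmse(pred, gt):
    """Eq. (2): distance RMSE between predicted and ground-truth trajectories."""
    return np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1)))

def rmse_axis(pred, gt, axis=0):
    """Eq. (3) and its lateral analogue: axis 0 is longitudinal x, axis 1 lateral y."""
    return np.sqrt(np.mean((pred[:, axis] - gt[:, axis]) ** 2))
```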
### _Comparative Results_
There are several SOTA methods for trajectory prediction in which the evaluation metrics are reported based on the highD dataset. These methods can be divided into: (i) single-modal prediction models such as the CS-LSTM [4], S-LSTM [25], and NLS-LSTM [23] that are developed based on social pooling and LSTM, and (ii) multi-modal approaches like the STDAN [24] and MMNTP [5] that consider more than one possible manoeuvre. The performance evaluation using the RMSE calculated in Eq. (2) is presented in Table II for five timesteps in the future, starting from one to five seconds in steps of one second. The VVF-TP outperforms all other methods for all timesteps. The RMSE is reduced (on average) by 16% in all five timesteps with respect to the second-best result that belongs to the recently published works in [5, 24].
### _Ablation Analysis_
In this section, the impact of augmenting the input data to the DNN using the generated VVF and the effect of the size of the observation window (history of past observations for the target vehicle) on the quality of trajectory prediction are investigated in detail. Bearing that in mind, the prediction accuracy using the RMSE calculated in Eq. (2) and Eq. (3)
Fig. 4: Comparing the predicted trajectories using different input states for left and right lane-change (right and left column, respectively) for three consecutive frames (starting from top to bottom) along with the ground truth (GT). The target vehicle (TV) and the surrounding vehicles (SVs) are coloured green and amber respectively.
is evaluated for four different input states \(S1\) to \(S4\). In \(S1\), the observation window degenerates to the current perception frame, i.e., \(h=1\) and the VVF is omitted from the input. The input state \(S2\) is the same as \(S1\) except that an observation window of \(h=10\) frames (or two seconds) is available for the target vehicle. In \(S3\), the VVF is considered with \(h=1\) frame and \(S4\) takes into account the VVF and \(h=10\) frames.
The quantitative and qualitative results for the four states described in the previous paragraph are given in Table III and Fig. 4, respectively. According to Table III, the second-best performance in the longitudinal direction belongs to \(S3\), whereas \(S2\) performs better in the lateral direction. The overall prediction performance based on the RMSE in Eq. (2) is better in \(S3\) than \(S2\) (in four timesteps out of five), indicating that the inclusion of the VVF in the input data is more beneficial than a long history of past observations. Next, according to Fig. 4, \(S4\) predicts the lane-change trajectory much earlier than the other input configurations, suggesting that the combination of the long observation window with the inclusion of the VVF leads to the best performance. Moreover, similar to Table III, Fig. 4 illustrates that \(S3\) incurs larger/smaller prediction errors in the lateral/longitudinal direction than \(S2\). Finally, adding a VVF when there is only a single observation available can significantly improve the longitudinal prediction accuracy (compare \(S3\) with \(S1\) in Fig. 4-left).
## V Conclusion
The results obtained in this study suggest that a nature-inspired phenomenon such as fluid flow motion has the potential to be used for modelling the imminent interactions among road participants, and to add meaningful information in the form of a velocity vector field (VVF) to the conventional bird's eye view representation of the driving scene. Accordingly, a learning-based trajectory prediction approach adapted to digest the new information could achieve higher performance than the state-of-the-art methods in short/long-term prediction horizons. Also, the model's performance remains consistent when the number of available past states of surrounding vehicles decreases, which would prove useful in driving environments with occlusions.
The proposed prediction model in this study was designed and tested for highway driving scenarios, however, the same logic could apply to more generic conditions such as urban environments. Adapting the proposed model to operate in complex road segments such as urban intersections and roundabouts is an important area of future work. Moreover, regarding practical concerns, the VVF generation process should operate in real-time to become a part of decision-making, motion planning and control of an autonomous vehicle. These limitations should be addressed in a future study to prove the applicability of the proposed approach in real-life operations.
To generate the 2d velocity vector field using the Lattice Boltzmann Method (LBM) with the D2Q9 lattice, each cell in the lattice (the BEV map) interacts with eight surrounding cells plus itself (nine directions) via 2d lattice vectors of unit magnitude, \(\vec{e}_{i}\), see Fig. 5. Firstly, the velocity vectors at the boundary cells are set; recall from Section III.A the four boundary conditions used in the fluid flow simulation. Then, the velocity vector, \(\vec{u}\), for each of the remaining cells is initialized, and based on that a density \(f_{i},i=0,\ldots,8\) is assigned to each direction \(\vec{e}_{i}\). The values of \((f_{i},\vec{u})\) for each cell are iteratively updated until there is no significant change in the average velocity (\(<0.01\) m/s) for all cells and the boundary conditions including speed limits are also satisfied. Each iteration consists of the following two steps:
**Streaming step:** Adjacent cells exchange the densities between opposite vectors. For instance, in Fig. 5 cell A interacts with cells B and C by exchanging density via the \(\left\{f_{5}^{A}\gets f_{5}^{B},f_{1}^{A}\to f_{1}^{B}\right\}\) and \(\left\{f_{6}^{A}\gets f_{6}^{C},f_{2}^{A}\to f_{2}^{C}\right\}\) pairs, respectively.
**Collision step:** This is executed in each cell separately. First, the equilibrium density is calculated using nine densities and their contributing weights (\(\omega_{i}\)). Subsequently, the nine densities and the cell's velocity vector are updated using Eq. 4 below (order matters).
\[\begin{split} f_{i}^{eq}&=\omega_{i}\rho\left[1+3 \left(\vec{e}_{i}\cdot\vec{u}\right)-\frac{3}{2}\left(\vec{u}\cdot\vec{u} \right)+\frac{9}{2}\left(\vec{e}_{i}\cdot\vec{u}\right)^{2}\right],\\ f_{i}&=f_{i}+(f_{i}^{eq}-f_{i})/\tau,\\ \rho&=\sum\limits_{i=0}^{8}f_{i},\\ \vec{u}&=\sum\limits_{i=0}^{8}f_{i}\vec{e}_{i}. \end{split} \tag{4}\]
According to [22], the updating weights for stationary (\(\vec{e}_{0}\)), diagonal (\(\vec{e}_{i},i\in\{2,4,6,8\}\)), and orthogonal (\(\vec{e}_{i},i\in\{1,3,5,7\}\)) directions are \(\frac{4}{9}\), \(\frac{1}{36}\) and \(\frac{1}{9}\), respectively. Finally,
Fig. 5: Adjacent lattice cells interaction in the overall D2Q9 grid.
in fluid flow simulations, \(\tau\) is the relaxation time, obtained from the kinematic viscosity of the fluid, which has been set equal to \(0.003\) in this study. It should be noted that at each iteration update, the velocity vectors corresponding to the boundary conditions are set equal to the boundary values.
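For concreteness, one streaming-plus-collision update of Eq. (4) on a D2Q9 lattice can be written in a few lines of NumPy. This is our own sketch with periodic boundaries via `np.roll` and an illustrative relaxation time; the paper's actual boundary handling (no-slip walls, porous markings, prescribed vehicle velocities) and the Sailfish GPU kernels are not reproduced here.

```python
import numpy as np

# D2Q9 lattice vectors e_i (rest, then alternating orthogonal / diagonal) and weights w_i
E = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [-1, 1],
              [-1, 0], [-1, -1], [0, -1], [1, -1]])
W = np.array([4 / 9] + [1 / 9, 1 / 36] * 4)

def lbm_step(f, tau=0.6):  # tau is an assumed stable value, not the paper's setting
    """One update of the (9, H, W) density array f: streaming, then BGK collision."""
    for i, (ex, ey) in enumerate(E):                       # streaming along e_i
        f[i] = np.roll(np.roll(f[i], ey, axis=0), ex, axis=1)
    rho = f.sum(axis=0)                                    # macroscopic density
    u = np.tensordot(E.T, f, axes=1) / np.maximum(rho, 1e-12)  # (2, H, W) velocity
    eu = np.tensordot(E, u, axes=1)                        # (9, H, W): e_i . u
    uu = (u ** 2).sum(axis=0)
    feq = W[:, None, None] * rho * (1 + 3 * eu + 4.5 * eu ** 2 - 1.5 * uu)
    return f + (feq - f) / tau                             # collision (relaxation)
```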
|
2308.00127 | DiviML: A Module-based Heuristic for Mapping Neural Networks onto
Heterogeneous Platforms | Datacenters are increasingly becoming heterogeneous, and are starting to
include specialized hardware for networking, video processing, and especially
deep learning. To leverage the heterogeneous compute capability of modern
datacenters, we develop an approach for compiler-level partitioning of deep
neural networks (DNNs) onto multiple interconnected hardware devices. We
present a general framework for heterogeneous DNN compilation, offering
automatic partitioning and device mapping. Our scheduler integrates both an
exact solver, through a mixed integer linear programming (MILP) formulation,
and a modularity-based heuristic for scalability. Furthermore, we propose a
theoretical lower bound formula for the optimal solution, which enables the
assessment of the heuristic solutions' quality. We evaluate our scheduler in
optimizing both conventional DNNs and randomly-wired neural networks, subject
to latency and throughput constraints, on a heterogeneous system comprised of a
CPU and two distinct GPUs. Compared to naïvely running DNNs on the fastest
GPU, the proposed framework can achieve more than 3$\times$ lower latency
and up to 2.9$\times$ higher throughput by automatically leveraging both data
and model parallelism to deploy DNNs on our sample heterogeneous server node.
Moreover, our modularity-based "splitting" heuristic improves the solution
runtime up to 395$\times$ without noticeably sacrificing solution quality
compared to an exact MILP solution, and outperforms all other heuristics by
30-60% solution quality. Finally, our case study shows how we can extend our
framework to schedule large language models across multiple heterogeneous
servers by exploiting symmetry in the hardware setup. Our code can be easily
plugged in to existing frameworks, and is available at
https://github.com/abdelfattah-lab/diviml. | Yassine Ghannane, Mohamed S. Abdelfattah | 2023-07-31T19:46:49Z | http://arxiv.org/abs/2308.00127v2 | # DiviML: A Module-based Heuristic for Mapping Neural Networks onto Heterogeneous Platforms
###### Abstract
Datacenters are increasingly becoming heterogeneous, and are starting to include specialized hardware for networking, video processing, and especially deep learning. To leverage the heterogeneous compute capability of modern data-centers, we develop an approach for compiler-level partitioning of deep neural networks (DNNs) onto multiple interconnected hardware devices. We present a general framework for heterogeneous DNN compilation, offering automatic partitioning and device mapping. Our scheduler integrates both an exact solver, through a mixed integer linear programming (MILP) formulation, and a modularity-based heuristic for scalability. Furthermore, we propose a theoretical lower bound formula for the optimal solution, which enables the assessment of the heuristic solutions' quality. We evaluate our scheduler in optimizing both conventional DNNs and randomly-wired neural networks, subject to latency and throughput constraints, on a heterogeneous system comprised of a CPU and two distinct GPUs. Compared to naively running DNNs on the fastest GPU, the proposed framework can achieve more than 3\(\times\) lower latency and up to 2.9\(\times\) higher throughput by automatically leveraging both data and model parallelism to deploy DNNs on our sample heterogeneous server node. Moreover, our modularity-based "splitting" heuristic improves the solution runtime up to 395\(\times\) without noticeably sacrificing solution quality compared to an exact MILP solution, and outperforms all other heuristics by 30-60% solution quality. Finally, our case study shows how we can extend our framework to schedule large language models across multiple heterogeneous servers by exploiting symmetry in the hardware setup. Our code can be easily plugged into existing frameworks, and is available at [https://github.com/abdelfattah-lab/diviml](https://github.com/abdelfattah-lab/diviml).
+
Footnote †: Thanks to TATA Consultancy Services (TCS) for funding support, and Dr. Rekha Singal for insightful discussion and feedback.
## I Introduction
Deep neural networks (DNNs) have emerged as an important computing paradigm making significant breakthroughs in many fields. However, DNNs are both computationally-intensive and memory-hungry, leading to a major hardware restructuring of modern datacenters to keep up with this growing compute demand. GPUs are becoming commonplace, FPGAs have been included by companies like Microsoft [1], and custom DNN accelerators such as Google's TPU [2] are continuously being developed. DNNs themselves are composed of a growing list of diverse building blocks such as convolutions, matrix-multiplications, element-wise operations, non-linear functions and shape transformations. Each of those primitives exhibits different vectorization patterns, sparsity and quantization tolerance and so may be suitable for implementation on different hardware accelerators [3, 4].
In addition to hardware heterogeneity, DNN topologies are becoming ever more irregular and complex thanks to their automated design through neural architecture search (NAS) [5]. NAS has demonstrated considerable success in creating DNN architectures that are highly efficient in terms of computational resource usage [6, 7, 8]. However, the irregular topologies it generates can be challenging to efficiently schedule on heterogeneous systems. In fact, in its simplest form, with no resource usage constraints or batching, the problem of mapping and scheduling a set of tasks with dependencies is a classical NP-Hard problem [9]. Finding scalable and efficient methods for mapping such complex DNN computational graphs on heterogeneous systems is becoming more and more important to meet latency and throughput requirements imposed by modern DNNs and hardware platforms during inference.
Even though this scheduling problem has been previously explored in the context of traditional computing [10, 11], few works investigate the challenges associated with neural network models. In this paper, we investigate the scheduling of irregular DNN topologies onto heterogeneous hardware platforms with different latency and throughput requirements, under different batching conditions, and leveraging the _module-based_ nature of DNNs to significantly improve the speed and quality of our automatic scheduler. Many have used randomly-wired neural networks (RWNNs) [12] to represent NAS-designed DNNs in the context of scheduling [13], and we follow suit. Our scheduler operates on a coarse-grained computational graph of DNNs that is available through domain-specific frameworks such as PyTorch [14] or TVM [15]. Our goal is to create a fast heterogeneous scheduling plugin that can be easily integrated into these DNN frameworks to leverage heterogeneous computing platforms.
To achieve this goal, we curate a set of DNNs from the vision domain, both manually-designed ones such as ResNet [16], and NAS-found DNNs represented by an assortment of RWNNs. We investigate the scheduling of these DNNs on a sample heterogeneous computing platform with two GPUs and a CPU, and we demonstrate a considerable improvement compared to many past heuristic baselines. Our key algorithmic contribution is a fast DNN splitting heuristic, MILP-SPLIT, that detects and schedules each DNN module
separately, and then combines the schedules in either an optimal or quasi-optimal fashion depending on the nature of the connection between modules. MILP-SPLIT also comes with a theoretical lower bound for the optimal solution, which facilitates the evaluation of the scheduling quality. Our contributions are enumerated below:
1. We formalize the problem of partitioning and scheduling a DNN onto interconnected hardware devices in a heterogeneous computing system. We leverage both model and data parallelism to handle two core optimization objectives; latency and throughput.
2. We propose a novel linear mathematical programming model which is, to our knowledge, the first scheduling problem formulation capable of handling both model and data parallelism for batched DNN execution.
3. We introduce MILP-SPLIT: A splitting heuristic to schedule complex modular DNNs. Alongside, we perform a rigorous theoretical analysis on the implications of modularity and inter-module communication channels, on the performance of our heuristic, via the proposal of a lower bound formula.
4. We evaluate our algorithms on computer-vision DNN benchmarks, on both mainstream DNNs and randomly wired neural networks. Compared to a single device, we achieve more than \(3\times\) lower latency and \(2.9\times\) higher throughput. Compared to heuristics from prior work, we achieve 30-60% better solution quality, and up to 395\(\times\) speedup compared to an exact solution.
## II Related Work
On the topic of general software partitioning, there exists previous work regarding heterogeneous compilation [10]. In particular, Polly-Acc offers an automatic heterogeneous compute compiler targeting CPU-GPU systems where, at the compiler IR level, interesting compute kernels are detected, extracted, and modeled, and their execution strategy is described as a schedule tree [11]. AMAP is an online adaptive decision algorithm to determine if the improvement from running a function in hardware outweighs the overhead of transferring the parameters [17], whereas [18] proposes a dynamic program scheduling approach based on the sampled energy-delay product during tentative runs. Our approach, in contrast, is performed statically during compilation, is specifically tailored for deep learning architectures, and leverages coarse graph-level descriptions of DNNs.
Under the scope of DNN-based partitioning, many existing research endeavors focus solely on training [19, 20]. Alpa automates the search for pipeline-parallel schedules for DNN training on homogeneous multi-node GPU clusters. ParDNN introduces a graph slicing heuristic which forms primary clusters (the first iterative critical paths of the graph) and secondary clusters (the single nodes or remaining paths), and optimizes for load balancing during training [21]. Chen et al. [22] propose heuristic methods, based on Heterogeneous-Earliest-Finish-Time (HEFT) and Critical-Path, to optimize latency when mapping and scheduling DNNs on accelerators consisting of function units such as matrix multiplication or lookup tables. Unlike these approaches that are specific to DNN training, our scheduling algorithm is geared towards low-latency and high-throughput inference.
Liu et al. [23] restrict their scope to the DenseNet architecture and give an exact and efficient algorithm for its scheduling on a heterogeneous system. However, this approach is tailored to the particular topology of the DenseNet graph and is consequently difficult to generalize to broader model architectures. We propose a more general cut-based heuristic, which also takes advantage of the dynamic programming paradigm and can significantly speed up the mixed integer linear programming (MILP) solving. Additionally, Mirhoseini et al. [24] propose a reinforcement learning approach to DNN mapping for both training and inference latency optimization. It suffers, however, from a lack of generalization, requiring load-specific parameters to be set manually, with training times ranging from 12 to 27 hours. In comparison, our approach focuses on inference, handles batched inputs, and strives for efficiency by leveraging modularity while maintaining some optimality guarantees. Finally, SERENITY achieves memory-aware scheduling of irregularly wired neural networks on a single device by resorting to graph rewriting and divide-and-conquer approaches [25]. We focus instead on latency and throughput optimization on multiple heterogeneous devices, taking into account each device's memory constraints.
## III Problem statement and system description
Our approach is based on a coarse-grained representation of computational graphs that is commonly used in deep learning compilers. We present a compile-time mapping and scheduling framework for DNNs on heterogeneous hardware systems. The scheduler's modeling is general and agnostic to back-ends, its only limitation being what is supported by different compilers' back-ends. Figure 1 illustrates how the partitioner is integrated in a DNN compilation pipeline. It is capable of reading an input consisting of a hardware system configuration and any intermediate representation (IR) of a DNN, and outputs the appropriate mapping on the system via existing compilation backends, together with its corresponding schedule. An optional clustering step prepares the DNN graph for mapping by reducing the number of task inputs to the mapping algorithms. A prime example is the fusion of convolution, batch normalization, and the ReLU activation function.

Fig. 1: Our heterogeneous scheduling framework.
### _Problem Formulation_
We represent DNNs as a weighted directed acyclic graph (DAG), with the edges denoting data dependencies and nodes representing a DNN task (e.g. a convolutional or linear operation). If two tasks with data dependencies are mapped onto the same processor, the communication between them is implemented through data sharing in device memory and no communication cost is incurred. Each processor may execute several tasks, but each task has to be assigned to exactly one processor, in which it is entirely executed without preemption. Formally, let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be the DAG where \(\mathcal{V}\) denotes the set of tasks and \(\mathcal{E}\) represents the set of edges. Each edge \((i,j)\in\mathcal{E}\) defines a precedence relation between the tasks \(i,j\in\mathcal{V}\), and is weighted by the size of the source task's output. A task cannot be executed unless all of its predecessors (parents) have been processed and all relevant data is available. Each task \(i\in\mathcal{V}\) is assigned the following constants: \((wm_{i})\) the data size allocated for the DNN task weights, \((im_{i})\) the input tensor size and \((om_{i})\) the output tensor's size. As for our hardware system on which we map the DNN, we model it as a tuple of sets \(\mathcal{H}=(\mathcal{K},\mathcal{M},\beta)\). \(\mathcal{K}\) denotes the set of devices in our system. The two remaining sets are descriptors of the hardware system. \(\mathcal{M}:\mathcal{K}\rightarrow\mathbb{R}^{+}\) is the memory capacity for each single processor and \(\beta:\mathcal{K}^{2}\rightarrow\mathbb{R}^{+}\) the communication bandwidth between linked chips--it is null if there is no link. If tasks \(i\) and \(j\) are executed on different compute nodes \(h,k\) ; \(h\neq k\), and \((i,j)\in\mathcal{E}\), a communication time \(om_{i}/\beta_{h,k}\) is incurred.
The objective of this task scheduling problem is to allocate and schedule the tasks onto the compute nodes such that the overall completion time (latency) is minimized. We link the dataflow graph and the hardware via a map \(t:(\mathcal{V},\mathcal{K})\rightarrow\mathbb{R}^{+}\), which assigns to each task and device pair its corresponding latency. We finally add to our formulation the possibility of batching and throughput optimization. Hence we augment our problem description with a map \(\mathcal{B}:\mathcal{K}\to 2^{\mathbb{N}}\) that assigns to each device the subset of batch sizes supported. \(t\) now describes the latency of each possible batch of similar tasks \(i\in\mathcal{V}\) for each device and is redefined as \(t:\mathcal{V}\times\mathcal{K}\times\mathcal{B}(\mathcal{K})\rightarrow\mathbb{ R}^{+}\). The objective is now to find for a set of \(\mathcal{L}\) graph inputs the optimal mapping and scheduling of the tasks into different batches, while respecting the dependency within a single graph and the underlying resource constraints. Finally, we define the notion of a schedule. Let \(\mathcal{S}:\mathcal{V}\times[1,\ldots,\mathcal{L}]\rightarrow\mathcal{K} \times\mathbb{R}^{+}\) be a map which assigns each task to a device and a starting time. \(\mathcal{S}\) is a schedule if and only if \(\mathcal{S}\) respects precedence and no overlap (no two distinct batches can overlap on the same device) criteria, i.e. for every \((i,j)\in\mathcal{E}\), \(l\in[1,\ldots,\mathcal{L}]\):
\[\mathcal{S}(i,l)_{2}+1_{\mathcal{S}(i,l)_{1}\neq\mathcal{S}(j,l)_{1}}\cdot om_{i}/\beta_{\mathcal{S}(i,l)_{1},\mathcal{S}(j,l)_{1}}\leq\mathcal{S}(j,l)_{2}\]
The problem statement becomes:
**Mapping and Scheduling problem**

**Input:** Objective function (latency/throughput) \(f\), \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), \(\mathcal{H}=(\mathcal{K},\mathcal{M},\beta)\), \(t\), \(\mathcal{B}\), \(\mathcal{L}\).

**Output:** A schedule \(\mathcal{S}:\mathcal{V}\times[1,\ldots,\mathcal{L}]\rightarrow\mathcal{K}\times\mathbb{R}^{+}\) which optimizes \(f\).
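To make the formulation concrete, the sketch below encodes the problem data and the precedence check that defines a valid schedule. It is a minimal illustration in Python; the names (Problem, respects_precedence) are ours and not from a released implementation.

```python
from dataclasses import dataclass

@dataclass
class Problem:
    edges: list        # precedence edges (i, j) of the DAG G
    om: dict           # om[i]: output tensor size of task i
    beta: dict         # beta[(h, k)]: link bandwidth between devices h and k

def respects_precedence(p: Problem, S: dict, L: int) -> bool:
    """S[(i, l)] = (device, start_time); checks the inequality above."""
    for (i, j) in p.edges:
        for l in range(1, L + 1):
            (u, s_i), (v, s_j) = S[(i, l)], S[(j, l)]
            comm = 0.0 if u == v else p.om[i] / p.beta[(u, v)]
            # a complete model also accounts for i's processing time,
            # as constraint (3) of the MILP below does
            if s_i + comm > s_j:
                return False
    return True
```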
## IV Algorithmic approaches
In this section, we demonstrate our exact scheduling approach based on solving an MILP problem. Linear programming has been effective in solving communication constrained DAG scheduling problems for tractable instances [26]. Our contributions for the exact MILP formulation are twofold: First, we incorporate memory and batching constraints into our formulation, which are commonly encountered in deep learning workloads, and we integrate our scheduler into a graph partitioning routine that we rigorously analyze to ensure the quality of its results. However, the problem of scheduling DNNs is NP-Hard, making it intractable to find exact solutions for large graph sizes. Our second contribution addresses this issue. We take advantage of the inherent modularity in most DNNs to create fast solving schemes that are either optimal or provide strong approximation guarantees.
### _MILP Problem Representation_
We introduce a novel formulation of the problem as an MILP model that explicitly considers the option of batching, where a device can process multiple inputs simultaneously. By incorporating batching, our formulation is better suited to capture the characteristics of modern deep learning workloads, which often involve a large number of inputs that can be efficiently processed in batches. Our approach enables us to find optimal solutions that balance the trade-offs between computation and communication costs while respecting batching and memory constraints. We add to the notation introduced earlier the following binary decision variables: \(x_{i,j,l}\), which encodes if the DNN task \(i\) corresponding to the \(l\)-th input is mapped to a device \(j\). Meanwhile, \(b_{i,j,l}\) describes if tasks of kind \(i\) running on \(j\) form a batch of size \(l\), and \(d_{i_{1},i_{2},l_{1},l_{2}}=1\) iff task \(i_{1}\) from input \(l_{1}\) is scheduled before \(i_{2}\) from input \(l_{2}\). We also consider the continuous variables: \(s_{i,j}\), the starting time for processing the batch of \(i\) tasks on \(j\), and \(C\), the total latency. The objective function \(f\) is equal to \(C\) in the latency optimization scenario or \(\mathcal{L}/C\) when optimizing for throughput. We can now write the mixed integer linear program, with the objective of minimizing \(C\), and whose constraints are as follows. Condition 1 asserts that each task is assigned to a single machine:
\[\sum_{u\in\mathcal{K}}x_{i,u,l}=1;\ \ i\in\mathcal{V},\ \ l=1,\ldots,\mathcal{L} \tag{1}\]
Condition 2 ensures that each task finishes within the reported latency:
\[s_{i,u}+\sum_{l\in\mathcal{B}_{u}}b_{i,u,l}\cdot t_{i,u,l}\leq C;\ \ i\in\mathcal{V},\ \ u\in\mathcal{K} \tag{2}\]
Condition 3 expresses the dependency and communication constraint:
\[\begin{split} s_{i,u}&+\sum_{p\in\mathcal{B}_{u}}b_{i,u,p}\cdot t_{i,u,p}+(om_{i}/\beta_{u,v})\cdot(x_{j,v,l}+x_{i,u,l}-1)\\ &\leq s_{j,v};\ \ j\in\mathcal{V},\ \ i\in par(j),\ \ u,v\in \mathcal{K},\ \ l=1,\ldots,\mathcal{L}\end{split} \tag{3}\]
Condition 4 ensures that the batch decomposition adds up correctly to the total number of items in the batch:
\[\sum_{u\in\mathcal{K}}\sum_{l\in\mathcal{B}_{u}}l\cdot b_{i,u,l}=\mathcal{L}; \ \ i\in\mathcal{V} \tag{4}\]
The following condition 5 ensures that only supported batch sizes are chosen:
\[\begin{split} b_{i,u,l}&=1\ \ \text{iff}\sum_{l^{\prime} \in[1\ldots\mathcal{L}]}x_{i,u,l^{\prime}}=l;\\ i&\in\mathcal{V},\ \ u\in\mathcal{K},\ \ l=1, \ldots,\mathcal{L}\end{split} \tag{5}\]
In the form above, this is not a linear constraint, but we can linearize it via the big-M method [27].
Condition 6 holds the memory constraint under the supposition that all data should be preemptively moved:
\[\begin{split}\sum_{i\in\mathcal{V}}((im_{i}+om_{i})\sum_{l\in[1 \ldots\mathcal{L}]}x_{i,u,l}+wm_{i}\sum_{l\in\mathcal{B}_{u}}b_{i,u,l})\\ &\leq\mathcal{M}_{u};u\in\mathcal{K}\end{split} \tag{6}\]
Condition 7 ensures no overlap of device usage between different batches. We linearize it similarly to condition 5:
\[\begin{cases}s_{i,u}+\sum\limits_{p\in\mathcal{B}_{u}}b_{i,u,p}\cdot t_{i,u,p}-s_{j,u}\leq 0\\ \text{or}\\ s_{j,u}+\sum\limits_{p\in\mathcal{B}_{u}}b_{j,u,p}\cdot t_{j,u,p}-s_{i,u}\leq 0\\ \text{if}\ \ x_{i,u,l_{1}}=x_{j,u,l_{2}}=1;\\ i,\ j\in\mathcal{V},\ \ u\in\mathcal{K},\ \ i\neq j,\ \ l_{1},\ l_{2}=1,\ldots,\mathcal{L}\end{cases} \tag{7}\]
An optimization of the MILP formulation is to restrict constraint 7 to pairs of tasks \((i,l_{1})\) and \((j,l_{2})\) which do not belong to the same batch graph or are not part of a path in the DAG. The system remains equivalent to the original, as the other instances of constraint 7 are enforced by the dependency constraint 3. Eliminating these redundant constraints is done by computing the transitive closure of the graph, which can be obtained efficiently with Purdom's algorithm [28].
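As an illustration of how this formulation translates into code, the following sketch builds constraints (1), (2) and (4) with the PuLP modeling library; the remaining constraints follow the same pattern, with big-M linearization where noted. The solver library and all names are our choices, not the paper's implementation.

```python
import pulp

def build_milp(V, K, L, B, t):
    """V: tasks, K: devices, L: number of inputs, B[u]: batch sizes
    supported on device u, t[(i, u, p)]: latency of a size-p batch
    of task i on device u."""
    prob = pulp.LpProblem("dnn_scheduling", pulp.LpMinimize)
    C = pulp.LpVariable("C", lowBound=0)                  # total latency
    x = pulp.LpVariable.dicts("x", [(i, u, l) for i in V for u in K
                                    for l in range(1, L + 1)], cat="Binary")
    b = pulp.LpVariable.dicts("b", [(i, u, p) for i in V for u in K
                                    for p in B[u]], cat="Binary")
    s = pulp.LpVariable.dicts("s", [(i, u) for i in V for u in K], lowBound=0)
    prob += C                                             # objective: latency
    for i in V:
        for l in range(1, L + 1):                         # constraint (1)
            prob += pulp.lpSum(x[(i, u, l)] for u in K) == 1
        prob += pulp.lpSum(p * b[(i, u, p)]               # constraint (4)
                           for u in K for p in B[u]) == L
        for u in K:                                       # constraint (2)
            prob += s[(i, u)] + pulp.lpSum(b[(i, u, p)] * t[(i, u, p)]
                                           for p in B[u]) <= C
    return prob, x, b, s, C
```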
### _MILP-SPLIT: Leveraging Graph Modularity_
#### IV-B1 Single-channel modularity
The presence of highly connected clusters is a prevalent feature in many DNN graph structures; an example is shown in Figure 2(a). This characteristic can be leveraged by the scheduler to partition the global problem into independent sub-problems consisting of weakly communicating modules. This approach is particularly useful when dealing with graphs that consist of modules linked to one another, such as ResNets [16], Inception [29], or especially RWNNs [12], which are composed of several instances of sequentially linked random graph modules.
A straightforward method to identify these modules involves detecting articulation points or bridges in the graph, which correspond to vertices or edges whose removal disconnects the undirected graph, grouping tasks between them into the same module, and solving each subproblem independently. However, this approach can lead to suboptimal solutions, as it does not account for communication costs through bridges and may result in inconsistent assignments of articulation points across modules. Fortunately, a dynamic programming solution exists to address these issues. To obtain an optimal global solution for the whole graph, we compute the optimal schedule of each module for every possible input-device and output-device pairing, and we combine the resulting building blocks into the best configuration. As a preprocessing step, we transform articulation points that are not endpoints of bridges into bridge edges by introducing a dummy node and a zero-cost edge between them. We also add a constraint that mandates the mapping of these two vertices to the same device in the global solution, as illustrated in Figure 2(b). From now on, we refer to bridges as "communication channels".
Formally, let \(\mathcal{G}(\mathcal{V},\mathcal{E})\) be a DAG with single input and output. We denote by \(\mathcal{I}(\mathcal{Q},\mathcal{F})\) the graph obtained by reducing every module into a single vertex, where \(\mathcal{Q}\) is a partition of \(\mathcal{V}\) into a set of disjoint modules and \(\mathcal{F}:=\{(u,v)\in\mathcal{Q}^{2}|\ \ \exists x\in u\ \exists y\in v\ \ (x,y)\in \mathcal{E}\}\). In particular, if \(\mathcal{Q}\) is defined as the set of vertex modules, then \(\mathcal{I}\) is a path, and we can enumerate \(\mathcal{Q}\) as the set \([1,\ldots,|\mathcal{Q}|]\); through this ordering we obtain a dynamic programming formulation. For a given module \(M_{t}\in\mathcal{Q}\) and a pair of devices \(u,v\in\mathcal{K}\) onto which the input and output of \(M_{t}\) are mapped, and if we denote by \(OPT\) the solution of a module subproblem, the recursion can be written as:
\[\begin{split} dp(M_{t},u,v)&=min_{u^{\prime},v^{ \prime}\in\mathcal{K}}\Big{(}dp(M_{t-1},u^{\prime},v^{\prime})\\ &+com(t,v^{\prime},u)\Big{)}+OPT(M_{t},u,v)\end{split}\]
The effectiveness of the proposed splitting method is influenced by the number and size balance of the extracted modules. The complexity of the approach can be expressed as \(O(|\mathcal{K}|^{2}|\mathcal{Q}|\mathbb{T})\), where \(\mathbb{T}\) represents a runtime bound for each module. This complexity analysis assumes a specific cutting strategy, but can be generalized to arbitrary cuts, where \(\mathcal{I}\) becomes a multigraph.
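A minimal sketch of this dynamic program is given below, assuming the per-module optima opt[(t, u, v)] have been pre-solved (e.g. with the MILP above) and that com(t, v_prev, u) returns the channel transfer time into module t; both names are illustrative.

```python
def combine_modules(num_modules, devices, opt, com):
    """Combine pre-solved module schedules along the module path."""
    # dp[(u, v)]: best latency of modules 1..t with M_t's I/O on (u, v)
    dp = {(u, v): opt[(1, u, v)] for u in devices for v in devices}
    for t in range(2, num_modules + 1):
        new = {}
        for u in devices:
            # the inner min depends only on u, so it could be hoisted
            best = min(dp[(up, vp)] + com(t, vp, u)
                       for up in devices for vp in devices)
            for v in devices:
                new[(u, v)] = best + opt[(t, u, v)]
        dp = new
    return min(dp.values())   # optimal end-to-end latency
```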
#### IV-B2 Multi-channel modularity
Modularity is an important property of graphs that enables exact solving of the scheduling problem on large graphs using a divide-and-conquer approach. However, many graphs cannot be split into distinct modules of comparable size that communicate through a _single_ input-output channel. In such cases, it may still be possible to decompose the graph into balanced modules that communicate through _multiple_ edges, and solve for each subgraph independently. Figure 2(a) shows an example with 1 and 2 channels. Identifying the modules boils down to computing the \(k\)-edge connected components [30], where \(k-1\) is the number of channels. Although this approach may result in a loss of optimality, it can significantly improve runtime without
a significant reduction in quality. In the case of partitioning a large graph into multichannel communicating modules, it is desirable to compute a lower bound on the optimal solution to evaluate the quality of the MILP-SPLIT (or other) heuristic, especially when solving for the entire graph is not tractable.
In order to express the lower bound for a DAG \(\mathcal{G}(\mathcal{V},\mathcal{E})\) that can be split into multichannel communicating modules, we first define for a fixed \(T\subseteq\mathcal{V}\) and for every node \(u\) in \(\mathcal{G}\) the set of nodes \(dep(u)_{T}=\{v\in T\mid\text{there is a path from $u$ to $v$}\}\), which we will refer to as the dependency set of \(u\), and the set of nodes \(pre(u)_{T}=\{v\in T\mid\text{there is a path from $v$ to $u$}\}\), which we will refer to as the predecessor set of \(u\) (as shown in Figure 2(d)). Let \(M_{1},\dots,M_{|\mathcal{Q}|}\) be a decomposition of \(\mathcal{G}\) into such modules, where \(\bigcup_{1\leq t\leq|\mathcal{Q}|}M_{t}=\mathcal{V}\). We denote by \(\mathcal{G}_{s}=\bigcup_{s\leq t\leq|\mathcal{Q}|}M_{t}\). Our approach is to proceed inductively by computing the lower bound in a recursive manner, using the following remark:
**Remark**.: _Let \(c\) denote the number of channels, and \((I_{t})_{1\leq t\leq c}\) and \((O_{t})_{1\leq t\leq c}\) denote respectively the set of vertices in the communication channels between \(M_{1}\) and \(\mathcal{G}_{2}\) for which the edges are in-going and out-going, i.e., the inputs of \(\mathcal{G}_{2}\) and the outputs of \(M_{1}\). For any valid scheduling of the whole graph, there exists a \(t^{\prime}\) such that the subgraph induced on \(dep(I_{t^{\prime}})_{\mathcal{G}_{2}}\) is completely scheduled after \(M_{1}\), and there exists a \(t^{\prime\prime}\) such that \(pre(O_{t^{\prime\prime}})_{M_{1}}\) is completely scheduled before \(\mathcal{G}_{2}\)._
Hence, if we denote by \(OPT\) the function mapping subgraphs of \(\mathcal{G}\) onto their optimal schedule, then we obtain the pair of inequalities:
\[OPT(\mathcal{V})\geq OPT(M_{1})+\min_{1\leq t\leq c}OPT(dep(I_{t})_{\mathcal{G}_{2}})\]
and
\[OPT(\mathcal{V})\geq OPT(\mathcal{G}_{2})+\min_{1\leq t\leq c}OPT(pre(O_{t})_{M_{1}})\]
The lower bound of the problem is obtained as the maximum value among the right-hand sides of the inequalities. This lower bound can be immediately extended to the batched throughput scenario by observing that the partial ordering defined earlier for dependency, predecessor, and module subgraphs applies to scheduling the minimal batch size that can be executed on each device. Specifically, it is necessary to schedule a minimum portion of the input to maintain the specified constraints via the communication channels outlined in the remark. However, we can do better; let \(M_{1}\) and \(dep(I_{t^{\prime}})_{\mathcal{G}_{2}}\) be defined as in the remark; then if \(\mathcal{L}\) is the total input batch to be processed and \(b\) any batch size supported on every device, then there is at least a batch of \(\mathcal{L}-b+1\) that needs to be processed through \(dep(I_{t^{\prime}})_{\mathcal{G}_{2}}\) after scheduling a load \(b\) of \(M_{1}\). The same reasoning holds between \(OPT(pre(O_{v})_{M_{1}})\) and \(\mathcal{G}_{2}\), and recursively throughout the graph. These bound computations can be accomplished efficiently using the presented recursive formula, which lends itself well to parallelization due to the independent nature of the subproblems considered.
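A sketch of the resulting bound computation is given below, assuming reachability queries via networkx and an OPT oracle returning the (pre-computed or estimated) optimal schedule length of a vertex set; both the library choice and the oracle interface are our assumptions.

```python
import networkx as nx

def lower_bound(G, M1, G2, inputs, outputs, OPT):
    """G: nx.DiGraph; M1, G2: vertex sets of the first module and the rest;
    inputs/outputs: channel endpoints per the remark. OPT(set()) is 0."""
    dep = lambda u: {v for v in G2 if nx.has_path(G, u, v)}   # dep(u)_{G2}
    pre = lambda u: {v for v in M1 if nx.has_path(G, v, u)}   # pre(u)_{M1}
    lb1 = OPT(M1) + min(OPT(dep(u)) for u in inputs)          # first inequality
    lb2 = OPT(G2) + min(OPT(pre(v)) for v in outputs)         # second inequality
    return max(lb1, lb2)
```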
## V Evaluation
We evaluate our mapping and scheduling framework on mainstream DNN models, a set of computer vision neural networks popular in the field of image classification from the _Torchvision_ model library, and on randomly wired neural networks (RWNNs) also performing image classification tasks [12]. We focus more on the latter because the topological irregularity of RWNNs makes it more difficult to have a good intuition of what a good mapping and scheduling should look like, thus necessitating automated algorithms. We choose representatives from three random graph models (Erdős–Rényi, Watts–Strogatz and Barabási–Albert), with parameters chosen corresponding to the seeds which achieved the best accuracy in prior work [12]: we sample 6 models generated with parameters WS(4, 0.75), ER(0.2) and BA(5), and with module size \(N\in\{10,32\}\). We consider systems comprised of a CPU (Intel Xeon (Skylake) CPU 2.00GHz) and two different GPUs (Nvidia Tesla T4 and A100 GPUs) connected by a 16-lane PCIe 4.0 link to represent a typical heterogeneous system; relative speeds are shown in Table II. The complete pipeline of our scheduler's evaluation setup for the aforementioned networks starts with a PyTorch model. To convert it into a coarse-grain DAG, we use the torch.fx [31] symbolic tracer and in particular the Interpreter class. This class is responsible for executing an FX graph, which represents the dataflow graph of DNN inference, on a node-by-node basis. By overriding its run_node method, we can individually measure the performance of executing each computational node on
Fig. 2: Modularity in DNN graphs. **sdep** : all paths within a module stem from (converge toward) at least one input (output). **wdep** : module inputs and outputs are randomly sampled for their dependencies.
different backends by invoking the appropriate routine on the respective node, thus creating our DAG while simultaneously benchmarking each operation on every device.
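A minimal sketch of this per-node benchmarking step is shown below; torch.fx's Interpreter and symbolic tracer are the APIs named in the text, while the timing policy (a single pass with perf_counter and CUDA synchronization) is our simplification.

```python
import time
import torch
import torch.fx
import torchvision

class ProfilingInterpreter(torch.fx.Interpreter):
    """Runs an FX graph node by node, recording per-op latency."""
    def __init__(self, gm):
        super().__init__(gm)
        self.times = {}                                # node name -> seconds

    def run_node(self, n):
        t0 = time.perf_counter()
        result = super().run_node(n)                   # execute the node
        if torch.cuda.is_available():
            torch.cuda.synchronize()                   # wait for async kernels
        self.times[n.name] = time.perf_counter() - t0
        return result

# one profiling pass over a traced model
gm = torch.fx.symbolic_trace(torchvision.models.resnet18())
interp = ProfilingInterpreter(gm)
interp.run(torch.randn(1, 3, 224, 224))
```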
Our experiments perform a thorough comparison of our exact MILP solution, our modularity-based splitting heuristic (MILP-SPLIT), and a large number of established baselines from prior work, introduced in Section V-A. We present our findings when optimizing solely for latency (Section V-B) using model parallelism, and when optimizing for throughput (Section V-C) using both data and model parallelism. In both cases, we evaluate the solution quality and cost for Torchvision models, for single-module RWNNs, and for multi-module RWNNs. Our findings demonstrate the superiority and practicality of MILP-SPLIT compared to existing baseline algorithms, and the fidelity of our estimated lower bound.
### _Baselines and Heuristics_
We compare our MILP solver and MILP-SPLIT against popular scheduling algorithms and general purpose optimization heuristics which have shown success in DAG scheduling contexts or graph problems more generally.
* MET: the Minimum Execution Time algorithm is a list-based scheduler that assigns each task to the device where its execution time is minimal, so as to minimize the latency of a DAG. We extend the MET algorithm to batched throughput optimization by selecting the best batch-device combination for each task.
* Greedy: a greedy heuristic that considers the overall latency of the tasks scheduled so far when scheduling the current task.
* HEFT: the Heterogeneous Earliest Finish Time [32] algorithm is an effective approach for scheduling tasks in a heterogeneous computing environment. It assigns tasks to processing nodes with different processing speeds to minimize overall execution time, using two phases to prioritize tasks based on estimated finish times.
* Simulated Annealing (SA) [33]: a stochastic optimization heuristic that draws inspiration from statistical mechanics and has been widely used in various optimization problems, including scheduling, for example to minimize latency [34, 35, 36].
* Biased (1+1) EA: we implement a biased version of the (1+1) EA [37] as an additional approximation heuristic. Also known as random hill climbing, it is one of the most basic evolutionary algorithms but has been surprisingly efficient in practice [38, 39]. We call the (1+1) EA biased when its initialisation is not randomly sampled but chosen greedily, by assigning each task to the device on which it runs fastest.
**Fitness function**: Here we give a succinct formulation of our problem as an objective function and an integer-string search space, which are adopted by two of our search heuristics: (1+1) EA and SA. We encode the mapping solution as a string of integers, wherein each integer in the string signifies a distinct identifier of the device to which a node is mapped. The position of each integer in the string corresponds to the layers of the DNN, arranged in a breadth-first topological ordering. Finally, the fitness function adopted for the latency (throughput) optimization problem corresponds to the latency (throughput) of a sampled mapping with a breadth-first topological ordering.
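A sketch of the biased (1+1) EA over this integer-string encoding is given below; fitness and fastest_device stand in for the latency evaluator and the greedy initializer and are assumptions of ours.

```python
import random

def one_plus_one_ea(nodes, devices, fitness, fastest_device, iters=10000):
    """Minimize fitness (latency) over device-assignment strings."""
    parent = [fastest_device(n) for n in nodes]        # biased initialisation
    f_parent = fitness(parent)
    p_mut = 1.0 / len(nodes)                           # standard mutation rate
    for _ in range(iters):
        child = [random.choice(devices) if random.random() < p_mut else d
                 for d in parent]
        f_child = fitness(child)
        if f_child <= f_parent:                        # accept ties to drift
            parent, f_parent = child, f_child
    return parent, f_parent
```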
### _Latency Optimization_
Figure 3 evaluates our scheduler when optimizing latency for mainstream Torchvision models. There are no real improvements for DNNs with little to no parallelism, such as AlexNet, ResNet, or VGG: the optimal schedule is usually the one where all tasks are mapped to the best performing device (A100 GPU). However, for models with higher parallelism, the improvements from MILP and MILP-SPLIT are significantly higher: more than 100% and 150% for Inception v3 and GoogLeNet respectively. Both MILP and MILP-SPLIT converge to the optimal solution for all Torchvision models without a substantial increase in runtime, thanks to the simplicity and regularity of these DNNs.
Next, we evaluate RWNNs, which we expect to be a significantly more challenging workload. In our first experiment in Figure 4, we schedule a _single_ module on our heterogeneous system, optimized for latency. Compared to simply running the RWNN module on the best device, there is a major \(\sim\)2\(\times\) improvement in overall latency from fully utilizing our heterogeneous system with a CPU and 2 GPUs. When comparing across different scheduling algorithms, MILP converges to the optimal solution and is 22-26% better than the best available heuristic at equivalent runtimes. However, with RWNNs featuring multiple modules (ten in our experiment), solving MILP on the whole model is more difficult for the solver and is exponentially slower. This motivates the use of MILP-SPLIT for those more realistic multi-module RWNNs that are representative of DNNs created by NAS.
To evaluate MILP-SPLIT, we stack multiple RWNN modules to represent realistic NAS-discovered models. In this case, each module is generated using the ER(0.2) model and may include multiple communication channels to connect to the next module. As indicated by our lower bounds formulation
TABLE II: Relative speed in milliseconds (ms) on experiment devices, averaged over our evaluated DNNs.

|             | CPU            | GPU (T4)      | GPU (A100) |
|-------------|----------------|---------------|------------|
| Torchvision | 223.10 (29×)   | 12.16 (1.6×)  | 7.80 (1×)  |
| RWNNs       | 183.39 (7.10×) | 32.58 (1.26×) | 25.84 (1×) |
TABLE III: Speedup of the splitting heuristic for the latency optimization of RWNN models with [5, 10, 20] modules.

| Modules | MILP (sdep) | SPLIT (sdep) | factor | MILP (wdep)  | SPLIT (wdep) | factor |
|---------|-------------|--------------|--------|--------------|--------------|--------|
| 5       | 82.69s      | 2.26s        | 37x    | 129.08s      | 2.45s        | 53x    |
| 10      | 232.24s     | 4.83s        | 48x    | 271.66s      | 5.00s        | 54x    |
| 20      | 1907.12s    | 13.49s       | 141x   | **5850.37s** | 14.81s       | 395x   |
(Section IV-B1), the density of nodes and edges that are accessible from the endpoints of communication edges can significantly impact the quality of the splitting heuristic and the accuracy of the corresponding lower bound. Therefore, we evaluate our splitting heuristic using two different scenarios for the topology of communication edges. In the first scenario, module inputs and outputs are randomly sampled for their dependencies, while in the second scenario, all paths within a module stem from (converge toward) at least one input (output). We refer to these scenarios as the "weakly dependent" scenario (**wdep**) and the "strongly dependent" scenario (**sdep**), respectively; examples are shown in Figure 2.
Based on the results presented in Table I, our splitting heuristic (MILP-SPLIT) produces solutions in close proximity to the optimum. Additionally, it outperforms all other scheduling methods considered in this study by a significant margin, being \(\sim\)30% better than the best heuristic baseline. Table III highlights that the MILP-SPLIT heuristic provides a substantial runtime improvement (37\(\times\)-395\(\times\)) compared to MILP when both scheduling algorithms reach their best solution. Also shown in Table I is our lower bound (LBound), which offers a convenient means of obtaining a quick performance guarantee for the splitting heuristic. Our observations indicate that for the **wdep** models, the LBound is closer to the true optimum than for the **sdep** models, where it tends to be more pessimistic. This difference is attributed to the lower bound computation, which assumes complete overlap in scheduling the separate paths originating from each module output. This is more likely to hold in an optimal schedule for the **wdep** scenario, where the distribution of these paths is more evenly spread compared to the **sdep** scenario, where a specific endpoint's emanating paths cover all the predecessor or dependency subgraphs; this phenomenon is also the reason why MILP-SPLIT is closer to the optimum on **sdep** graphs. Our results show that MILP-SPLIT is a viable and high-quality heuristic that offers lower-bound guarantees on quality.
### _Throughput Optimization_
We consider throughput optimization in the low-latency inference regime, where we batch B inputs (e.g. 128) and we find the fastest way to run that batch using the available devices. Successive inputs are queued together in groups of B before going to the hardware system for execution. This realistically mimics how inference is done in datacenters where low latency is critical to respond to user requests promptly.
Figures 5 and 6 and Table IV show the throughput optimization results attained with our framework via batching. bMET, bGreedy and bHEFT are the batched equivalents of the corresponding heuristics. In this case, we have a batch of inputs B queued for processing, and our scheduler can further decompose this batch into \(B/4\), \(B/2\), and \(3B/4\) when allocating
inputs to different devices. This enables our scheduler to leverage both model and data parallelism when mapping the DNN workload onto the hardware system. Unlike the latency objective, the MILP solving on the whole graph does not terminate within a 2-hour deadline, even for single RWNN modules or for regular networks with high model parallelism such as Inception-based DNNs. Consequently, MILP-SPLIT outperforms naive MILP solving both in terms of scheduling quality and runtime. It is worth noting that since MILP cannot reach the optimal solution for a single RWNN module, MILP-SPLIT provides only an approximate solution for each of its module schedules. However, our splitting heuristic achieves up to \(\sim\)60% better performance than the best-performing heuristic baseline with equivalent running times. Results reported in Table IV are based on 600s deadlines for MILP-SPLIT and for the other search heuristics, EA and SA. Moreover, Figure 7 provides a more detailed view of the solution quality over time, illustrating the challenge of solving the scheduling problem on the entire graph using MILP with numerous communication channels.
TABLE IV: Throughput for RWNNs consisting of 10 modules. Results reported in images-per-second (imgs/s). Best results are highlighted in bold.

| Model        | BD | bMET | bGreedy | bHEFT | (1+1) EA biased | SA | MILP | MILP-SPLIT | UBound |
|--------------|----|------|---------|-------|-----------------|----|------|------------|--------|
| 1-chan       | 54 | 56   | 74      | 75    | 84              | 87 | 114  | **135**    | 164    |
| sdep, 2-chan | 48 | 50   | 67      | 66    | 75              | 78 | 95   | **119**    | 180    |
| sdep, 3-chan | 49 | 51   | 68      | 70    | 78              | 81 | 116  | **129**    | 196    |
| sdep, 4-chan | 47 | 48   | 65      | 67    | 76              | 79 | 73   | **126**    | 209    |
| wdep, 2-chan | 51 | 53   | 76      | 75    | 86              | 87 | 89   | **137**    | 182    |
| wdep, 3-chan | 49 | 52   | 73      | 73    | 82              | 85 | 72   | **137**    | 181    |
| wdep, 4-chan | 47 | 50   | 72      | 74    | 82              | 84 | 65   | **138**    | 207    |
Fig. 5: Inference throughput for a batch (B=128) of inputs on Torchvision models on a heterogeneous platform.
Fig. 8: Multi-node heterogeneous system for GPT-3 inference.
Fig. 7: Solution quality (throughput) over time for MILP, MILP-SPLIT and heuristics on 10 modules RWNNs.
## VI Case Study: GPT-3 Inference on Distributed Heterogeneous Compute Nodes
As DNN models continue to grow in size, it has become necessary to look beyond single-node servers and extend our formulation to more complex setups. In this case study, we investigate the use of our scheduler for a large language model (LLM), GPT-3 [40], on a distributed heterogeneous platform as shown in Figure 8. This model belongs to a category of deep neural networks that exhibits notable modularity, as it is predominantly constructed by stacking transformer modules [41]. In contrast to our earlier analysis of RWNNs, GPT-3 modules exhibit high regularity, but the complexity of the problem stems from the larger search space of hardware device options. To counterbalance that, a key aspect of our analysis revolves around the exploitation of **symmetry** in the hardware platform to restrict the search space size without sacrificing solution quality. Our preliminary results on LLM scheduling consider only a single decoding step as a test workload.
As reported in Table V, we consider two ways to schedule our GPT-3 graph. "Single node" utilizes our MILP solvers to schedule one-sixth of the GPT-3 graph on a single node, then replicates that schedule for the rest of GPT-3 on the remaining 5 nodes. We consider this a competitive baseline because it automates the common practice of manually partitioning LLM inference to fit a single compute node and then scaling up the number of nodes. "Multi node" exposes the entire compute system (all 6 nodes) to our MILP-SPLIT solver, but we employ _symmetry-breaking_ techniques to compress the solution space of the explored schedules, allowing us to find a high-quality schedule in reasonable time. Symmetries arise from the fact that each schedule \(S\) represents a set of equivalent solutions \(E_{S}\), where any element within this set can be derived from \(S\) by permuting device mappings while maintaining the same overall latency. In our approach, we introduce additional constraints to our MILP formulation, enforcing a partial ordering of certain variables (e.g. #batches, #tasks, time utilization) between identical devices within a node or between nodes. For example, we can ensure that the number of tasks assigned to node \(i\) is always less than or equal to that assigned to node \(j\) for \(0\leq i<j<6\) in our example system (Fig. 8). This retains all non-isomorphic solutions in our search space whilst compressing it by a factor of \(\sim 4^{6}\cdot 6!=2.9\times 10^{6}\), where the \(6!\) and \(4^{6}\) factors represent inter- and intra-node symmetry respectively.
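As an illustration, the sketch below adds one such family of symmetry-breaking constraints (ordering identical nodes by their assigned #tasks) on top of the earlier PuLP formulation; names follow that sketch and are not from the actual implementation.

```python
import pulp

def add_symmetry_breaking(prob, x, V, L, nodes):
    """nodes: groups of identical devices, e.g. nodes[i] = devices of node i."""
    def n_tasks(node):
        return pulp.lpSum(x[(i, u, l)] for i in V for u in node
                          for l in range(1, L + 1))
    for i in range(len(nodes) - 1):
        # partial order between identical nodes prunes isomorphic schedules
        prob += n_tasks(nodes[i]) <= n_tasks(nodes[i + 1])
```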
Furthermore, our experimental results demonstrate that the choice of symmetry-breaking criterion can significantly impact the quality of the solution. This can be attributed to the phenomenon of premature convergence. If the symmetry-breaking constraints overly restrict the problem or generate a compressed space whose topology is not regular enough, the solver may settle for a locally optimal solution instead of exploring other potentially superior regions of the solution space, either located outside of the compressed space or harder to access with the solver's intrinsic optimization heuristics due to the irregularity of the new space. We hypothesize that utilizing #batches as the symmetry-breaking criterion tends to be overly restrictive, discouraging the solver from performing batch rearrangements that would contradict the ordering constraints, thus resulting in relatively smaller improvements over MILP-SPLIT without symmetries. On the other hand, despite the discrete nature of task variables and the continuous nature of utilization-time variables, both are coarser grained than #batches, thus yielding comparable performance and surpassing the baseline schedule by ~31% and the single node MILP-SPLIT by ~10%. Our results lay the foundations towards multi-node heterogeneous scheduling leveraging MILP-SPLIT, and we aim to further explore this topic in future work.
## VII Conclusion
We presented a general framework that leverages both data and model parallelism to schedule DNNs on heterogeneous hardware systems. Our algorithmic approaches centered on an exact MILP solution and a splitting heuristic, MILP-SPLIT, that exploits modularity within both conventional and randomly-wired DNNs. Our results on both throughput and latency optimization demonstrated 30-60% improvements compared to the best and most widely-used heuristics, and MILP-SPLIT was up to \(\sim\)395\(\times\) faster than a full MILP solution. Finally, we extended our scheduler to larger multi-node heterogeneous server deployments by showcasing improved scheduling of GPT-3 through the exploitation of symmetries in the hardware system. In the future, we aim to expand our framework to explore more efficient methods for scheduling large DNNs on distributed systems, to handle DNN training, and to include pre- and post-processing portions of a deep learning workload.
|
2309.08652 | Quantifying Credit Portfolio sensitivity to asset correlations with
interpretable generative neural networks | In this research, we propose a novel approach for the quantification of
credit portfolio Value-at-Risk (VaR) sensitivity to asset correlations with the
use of synthetic financial correlation matrices generated with deep learning
models. In previous work Generative Adversarial Networks (GANs) were employed
to demonstrate the generation of plausible correlation matrices, that capture
the essential characteristics observed in empirical correlation matrices
estimated on asset returns. Instead of GANs, we employ Variational Autoencoders
(VAE) to achieve a more interpretable latent space representation. Through our
analysis, we reveal that the VAE latent space can be a useful tool to capture
the crucial factors impacting portfolio diversification, particularly in
relation to credit portfolio sensitivity to asset correlations changes. | Sergio Caprioli, Emanuele Cagliero, Riccardo Crupi | 2023-09-15T15:21:14Z | http://arxiv.org/abs/2309.08652v2 | Quantifying Credit Portfolio sensitivity to asset correlations with interpretable generative neural networks
###### Abstract
In this research, we propose a novel approach for the quantification of credit portfolio Value-at-Risk (VaR) sensitivity to asset correlations with the use of synthetic financial correlation matrices generated with deep learning models. In previous work Generative Adversarial Networks (GANs) were employed to demonstrate the generation of plausible correlation matrices, that capture the essential characteristics observed in empirical correlation matrices estimated on asset returns. Instead of GANs, we employ Variational Autoencoders (VAE) to achieve a more interpretable latent space representation. Through our analysis, we reveal that the VAE latent space can be a useful tool to capture the crucial factors impacting portfolio diversification, particularly in relation to credit portfolio sensitivity to asset correlations changes.
Keywords: Variational Autoencoder · VAE · Credit Portfolio Model · Concentration risk · Interpretable neural networks · Generative neural networks.
## 1 Introduction
### Credit Portfolio concentration risk
One of the most widely adopted models to measure the credit risk of a loan portfolio was proposed in [21], and it is currently a market standard used by regulators for capital requirements [1]. This model provides a closed-form expression to measure the risk in the case of asymptotic single risk factor (ASRF) portfolios. The ASRF model is portfolio-invariant, i.e., the capital required for any given loan only depends on the risk of that loan, regardless of the portfolio it is added to. Hence the model ignores the concentration of exposures in bank portfolios, as the idiosyncratic risk is assumed to be fully diversified. Under the Basel framework, Pillar I capital requirements for credit risk do not cover concentration risk; hence banks are expected to autonomously estimate such risk and set aside an appropriate capital buffer within the Pillar II process [17].
A commonly adopted methodology of measuring concentration risk, in the more general case of a portfolio exposed to multiple systematic factors and highly concentrated on a limited number of loans, is to use a Monte Carlo simulation of the portfolio loss distribution under the assumption reported in [7]. The latter states that the standardized value of the \(i\)-th counterparty, \(V_{i}\), is driven by a factor belonging to a set of macroeconomic Gaussian factors \(\{Y_{j}\}\) and by an idiosyncratic independent Gaussian process \(\varepsilon_{i}\):
\[V_{i}=\rho_{i}Y_{j}+\sqrt{1-\rho_{i}^{2}}\varepsilon_{i}=\sum_{f}\rho_{i} \alpha_{j,f}Z_{f}+\sqrt{1-\rho_{i}^{2}}\varepsilon_{i} \tag{1}\]
through a coefficient \(\rho_{i}\). The systematic factors \(\{Y_{j}\}\) are generally assumed to be correlated, with correlation matrix \(\boldsymbol{\Sigma}\). The last expression in Eq. 1 makes use of the spectral decomposition \(\boldsymbol{\Sigma}=\boldsymbol{\alpha}\boldsymbol{\alpha}^{T}\) to express \(V_{i}\) as a linear combination of a set of uncorrelated factors \(\{Z_{f}\}\), allowing for a straightforward Monte Carlo simulation.
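A minimal sketch of this simulation step is given below; it uses plain (rather than stratified) sampling and illustrative parameter names, so it should be read as an illustration of Eq. 1 rather than the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def simulate_losses(Sigma, rho, sector, pd, ead_lgd, n_sims=100_000, seed=0):
    """Sigma: (J,J) factor correlation matrix; rho[i], sector[i]: loading and
    index of the driving factor of counterparty i; pd[i]: default probability;
    ead_lgd[i]: exposure times loss-given-default. sector is an int array."""
    rng = np.random.default_rng(seed)
    w, Q = np.linalg.eigh(Sigma)
    alpha = Q * np.sqrt(np.clip(w, 0.0, None))        # Sigma = alpha alpha^T
    Z = rng.standard_normal((n_sims, Sigma.shape[0])) # uncorrelated factors Z_f
    Y = Z @ alpha.T                                   # correlated factors Y_j
    eps = rng.standard_normal((n_sims, len(rho)))     # idiosyncratic shocks
    rho = np.asarray(rho)
    V = rho * Y[:, sector] + np.sqrt(1.0 - rho**2) * eps      # Eq. 1
    return (V < norm.ppf(pd)) @ np.asarray(ead_lgd)           # loss per scenario

# VaR at the 99.9th percentile of the simulated loss distribution:
# var = np.quantile(simulate_losses(Sigma, rho, sector, pd, ead_lgd), 0.999)
```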
The bank's portfolio is usually clustered into sub-portfolios that are homogeneous in terms of risk characteristics (i.e. industrial sector, geographical area, rating class or counterparty size). A distribution of losses is simulated for each sub-portfolio and the Value at Risk (VaR) is calculated on the aggregated loss.
The asset correlation matrix \(\boldsymbol{\Sigma}\) is a critical parameter for the estimation of the sub-portfolio loss distribution, that is the core component for the estimation of the concentration risk. Therefore it is worth assessing the credit portfolio VaR sensitivity to that parameter.
### Sampling Realistic Financial Correlation Matrices
As reported in [18],
"markets in crisis mode are an example of how assets correlate or diversify in times of stress. It is essential to see how markets, asset classes, and factors change their correlation and diversification properties in different market regimes. [...] It is desirable not only to consider real manifestations of market scenarios from history but to simulate new, realistic scenarios systematically. To model the real world, quants turn to synthetic data, building artificially generated data based on so-called market generators."
Marti [12] proposed Generative Adversarial Networks (GANs) to generate plausible financial correlation matrices. The author shows that the synthetic matrices generated with GANs present most of the properties observed on the empirical financial correlation matrices estimated on asset returns. In line with [12] we generated synthetic asset correlation matrices verifying some "stylized facts" of financial correlations.
We used a different type of neural network, Variational Autoencoders (VAE), to map historical correlation matrices onto a bidimensional "latent space", also referred to as the bottleneck of the VAE. After training a VAE on a set of
historical asset correlation matrices, we show that it is possible to explain the location of points in the latent space. Furthermore, analyzing the relationship between the VAE bidimensional bottleneck and the VaR values computed by the Credit Portfolio Model using different historical asset correlation matrices, we show that the distribution of the latent variables encodes the main aspects impacting portfolio diversification, as presented in [16].
## 2 Sensitivity to the Asset Correlation matrix
### Data
The dataset contains \(n=206\) correlation matrices of the monthly log-returns of \(M=44\) equity indices, calculated on their monthly time series from February 1997 to June 2022, using overlapping rolling windows of size 100 months. The historical time series considered are Total Market (Italy, Europe, US and Emerging Markets) and their related sector indices (Consumer Discretionary, Energy, Basic Materials, Industrials, Consumer Staples, Telecom, Utilities, Technology, Financials, Health Care); the source is Datastream.
### Variational Autoencoder design
An autoencoder is a neural network composed of a sequence of layers (the "encoder" E) that performs a compression of the input into a low-dimensional "latent" vector, followed by another sequence of layers (the "decoder" D) that approximately reconstructs the input from the latent vector. The encoder and decoder are trained together to minimize the difference between the original input and its reconstructed version.
Variational Autoencoders [8] consider a probabilistic latent space defined as a latent random variable \(z\) that generated the observed samples \(x\). Hence the "probabilistic decoder" is given by \(p(x|z)\) while the "probabilistic encoder" is \(q(z|x)\). The underlying assumption is that the data are generated from a random process involving an unobserved continuous random variable \(z\) and it consists of two steps: (1) a value \(z_{i}\) is generated from some prior distribution \(p_{\theta}^{*}(z)\), (2) a value \(\hat{x_{i}}\) is generated from some conditional distribution \(p_{\theta}^{*}(x|z)\). Assuming that the prior \(p_{\theta}^{*}(z)\) and the likelihood \(p_{\theta}^{*}(x|z)\) come from parametric families of distributions \(p_{\theta}(z)\) and \(p_{\theta}(x|z)\), and that their PDFs are differentiable almost everywhere w.r.t. both \(\theta\) and z, the algorithm proposed by [8] for the estimation of the posterior \(p_{\theta}(z|x)\) introduces an approximation \(q_{\phi}(z|x)\) and minimizes the Kullback-Leibler (KL) divergence of the approximate \(q_{\phi}(z|x)\) from the true posterior \(p_{\theta}(z|x)\). Using a multivariate normal as the prior distribution, the loss function is composed of a deterministic component (i.e. the mean squared error MSE) and a probabilistic component (i.e. the Kullback-Leibler divergence from the true posterior):
\[KL=-\frac{1}{2\overline{n}}\sum_{i=1}^{\overline{n}}\sum_{k=1}^{2} \left(1+log({\sigma_{k}^{i}}^{2})-{\mu_{k}^{i}}^{2}-{\sigma_{k}^{i}}^{2}\right) \tag{2}\] \[\text{MSE}=\frac{1}{\overline{n}}\sum_{i=1}^{\overline{n}}\parallel \mathbf{x}_{i}-\overline{\mathbf{x}}_{i}\parallel_{2}^{2}=\frac{1}{\overline{ n}}\sum_{i=1}^{\overline{n}}\parallel\mathbf{x}_{i}-D(E(\mathbf{x}_{i}))\parallel_{2}^{2}\] \[\text{Loss}=\text{MSE}+\beta\cdot\text{KL}\]
where \(E\) and \(D\) are the encoding and decoding map respectively, \(E:\mathbf{x}\in\mathbf{R}^{M\times M}\longrightarrow\boldsymbol{\theta}=\{\mu _{1},\mu_{2},\sigma_{1},\sigma_{2}\}\in\mathbf{R}^{4}\), \(D:\mathbf{z}\in\mathbf{R}^{2}\longrightarrow\overline{\mathbf{x}}\in\mathbf{ R}^{M\times M}\), \(\mathbf{z}=\boldsymbol{\mu}+\boldsymbol{\sigma}\odot\boldsymbol{\varepsilon}\), \(\boldsymbol{\mu}=\{\mu_{1},\mu_{2}\},\boldsymbol{\sigma}=\{\sigma_{1},\sigma_ {2}\}\), \(\boldsymbol{\varepsilon}\) is a bivariate standard Gaussian variable, and \(\overline{n}<n\) is the number of samples in the training set.
In this equation, \(\mu_{k}^{i}\) and \(\sigma_{k}^{i}\) represent the mean and standard deviation of the \(k\)-th dimension of the latent space for the sample \(\mathbf{x}_{i}\). The loss function balances the MSE, reflecting the reconstruction quality, with \(\beta\) times the KL divergence, enforcing a distribution matching in the 2-dimensional latent space. The KL divergence can be viewed as a regularizer of the model and \(\beta\) as the strength of the regularization.
We trained the VAE for 80 epochs using a learning rate of 0.0001 with an Adam optimizer. The structure of the VAE is shown in Fig. 1. We randomly split the dataset described in section 2.1 in a training sample, used to train the network, and a validation set used to evaluate the performance. We used 30% of the dataset as validation set.
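A sketch of the VAE and its loss in PyTorch is given below; the paper does not state its framework, the layer sizes follow Fig. 1, and the ReLU activations are our assumption.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, d_in=44 * 44):                 # 1936 input nodes
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 512), nn.ReLU(),
                                 nn.Linear(512, 250), nn.ReLU())
        self.mu = nn.Linear(250, 2)                   # 2+2 bottleneck nodes
        self.logvar = nn.Linear(250, 2)
        self.dec = nn.Sequential(nn.Linear(2, 250), nn.ReLU(),
                                 nn.Linear(250, 512), nn.ReLU(),
                                 nn.Linear(512, d_in))

    def forward(self, x):
        h = self.enc(x.flatten(1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparam.
        return self.dec(z), mu, logvar

def loss_fn(x, x_hat, mu, logvar, beta=1.0):          # Loss = MSE + beta * KL
    mse = ((x.flatten(1) - x_hat) ** 2).sum(1).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(1).mean()
    return mse + beta * kl

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)   # 80 epochs in the text
```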
Variational Autoencoders were employed in previous works for financial applications. In particular Brugier and Turinici [4] proposed a VAE to compute an estimator of the Value at Risk for a financial asset. Bergeron et al. [3] used VAE to estimate missing points on partially observed volatility surfaces. Sokol [20] applied VAE for interest rates curves simulation.
### Comparison with linear models
We compared the performance of the Variational Autoencoder with the standard Autoencoder (AE) and with the linear autoencoder (i.e. the autoencoder without activation functions).
The linear autoencoder is equivalent to applying PCA to the input data, in the sense that its output is a projection of the data onto the low-dimensional principal subspace [19]. As shown in Fig. 2b, the autoencoder performs better than the VAE (Fig. 2a), while linear models show lower performance (Fig. 3a) even when increasing the dimension of the latent space (Fig. 3b). Hence, neural networks actually bring an improvement in minimizing the reconstruction error. The generative probabilistic component of the VAE decreases the performance when compared to a deterministic autoencoder. On the other hand, it allows generating new but realistic correlation matrices, in the sense of the stylized facts.
Figure 1: VAE Framework: The input layer comprises 1936 nodes, corresponding to the \(44\times 44\) matrix input. Subsequently, there are layers with 512, 250, and a central hidden layer with 4 nodes. These values represent the means and variances of a bivariate Gaussian distribution. The decoder receives as input two values sampled from the latent space and is asked to reconstruct the input. Hence, the architecture is symmetrically mirrored until the output layer, which also has 1936 nodes.
Figure 2: Histogram of Mean squared error (MSE) of the Autoencoder and Variational Autoencoder on the historical correlation matrices, split into train and validation set.
### Latent space interpretability
According to Miller [13] and Lipton [9]:
_Interpretable_ is a model such that an observer can understand the cause of a decision.
_Explanation_ is one mode in which an observer may obtain understanding, for instance by building a simple surrogate model that mimics the original model to gain a better understanding of the original model's underlying mechanics.
For the sake of our analysis, we refer to the "interpretability" of the VAE as the possibility to understand the reasons underlying the responses produced by the algorithm in the latent space. The Variational Autoencoder projected the 206 historical correlation matrices onto a two-dimensional probabilistic latent space represented by a bivariate normal distribution. As shown in Fig. 4(a), the latent spaces generated by the VAE and AE are similar, while the cluster of points in the middle is recovered only by the 3-dimensional linear autoencoder (Fig. 4(b)).
In order to understand the rationales underlying such representation, we analysed the relationship of the encoded values of the original correlation matrices with respect to their eigenvectors \(\{\nu_{i}\mid i=1:M\}\) and eigenvalues \(\{\lambda_{i}\mid i=1:M\}\). It turned out that the first component of the latent space (\(\mu_{1}\)) is strongly negatively correlated to the first eigenvalue (Fig. 5).
As pointed out in [14]
"the largest eigenvalue of the correlation matrix is a measure of the intensity of the correlation present in the matrix, and in matrices inferred from financial returns tends to be significantly larger than the second largest. Generally, this largest eigenvalue is larger during times of stress and smaller during times of calm."
Hence, the first dimension of the latent space seems to capture the information related to the rank of the matrix, i.e., to the "diversification opportunities" on the market. The interpretation of the second dimension (\(\mu_{2}\)) of the latent space
Figure 3: Histogram of Mean squared error (MSE) of the linear autoencoders on the historical correlation matrices, split into train and validation set.
Figure 4: Comparison of the latent space generated with different models. The latent spaces generated by the VAE and AE are similar, while the cluster of points in the middle is recovered only by the 3-dimensional linear autoencoder.
Figure 5: Scatterplot of the first eigenvalue \(\lambda_{1}\) versus the first component of the latent space \(\mu_{1}\), showing a clear negative correlation.
turned out to be related to the eigenvectors of the correlation matrix. In order to understand this dimension, we consider the cosine similarity \(\alpha_{i,t}\) between the \(i\)-th eigenvector at time \(t\) and its average over time. Formally:

\[\alpha_{i,t}=\frac{\left(\frac{1}{n}\sum_{s=1}^{n}\nu_{i,s}\right)^{T}\cdot\nu_{i,t}}{\parallel\frac{1}{n}\sum_{s=1}^{n}\nu_{i,s}\parallel\,\parallel\nu_{i,t}\parallel} \tag{3}\]

where \(i\) is the index of the eigenvector and \(t\) the index of the matrix in the dataset.
Let us define \(\alpha_{1}=\{\alpha_{1,t}\}_{t=1,\ldots,n}\) and \(\alpha_{2}=\{\alpha_{2,t}\}_{t=1,\ldots,n}\). The data point subgroups observed in the space \((\alpha_{1},\alpha_{2},\lambda_{1})\) can be traced to corresponding subgroups in the latent space \((\mu_{1},\mu_{2})\), as shown in Fig. 6.
As pointed out in [16], each eigenvector can be viewed as a vector of portfolio weights of stocks that defines a new index, uncorrelated with the indices defined by the other eigenvectors. It follows that a change in eigenvectors can impact portfolio diversification. We can conclude that the VAE latent space effectively captures, in two dimensions, the main factors impacting the financial correlations, which are determinant for portfolio diversification.
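A sketch of the computation of Eq. 3 is given below; the sign-alignment step is our addition, since numerical eigendecompositions return eigenvectors only up to a sign flip.

```python
import numpy as np

def eigvec_similarity(corr_mats, i):
    """corr_mats: (n, M, M) stack of correlation matrices;
    i: eigenvector index (0 = eigenvector of the largest eigenvalue).
    Returns alpha_{i,t} for t = 1..n."""
    vecs = []
    for C in corr_mats:
        w, Q = np.linalg.eigh(C)
        v = Q[:, np.argsort(w)[::-1][i]]        # i-th eigenvector
        if vecs and v @ vecs[0] < 0:            # fix the sign ambiguity
            v = -v
        vecs.append(v)
    vecs = np.stack(vecs)
    mean_v = vecs.mean(axis=0)                  # average eigenvector over time
    return vecs @ mean_v / (np.linalg.norm(vecs, axis=1)
                            * np.linalg.norm(mean_v))
```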
### Generating synthetic correlation matrices
As explained in section 2.2, the probabilistic decoder of the VAE allows generating a "plausible" correlation matrix starting from any point of the latent space. Hence, we defined a grid of 132 points in the latent space that covers approximately homogeneously an area centered around the origin and includes the historical points (Fig. 10). For each point on the grid, we used the decoder (i.e. a neural network) to compute the corresponding correlation matrix. Along the lines of [12], we check whether the following stylized facts of financial correlation matrices hold for both the historical and the synthetic matrices.
* The distribution of pairwise correlations is significantly shifted towards positive values.
* Eigenvalues follow the Marchenko-Pastur distribution, except for a very large first eigenvalue and a couple of other large eigenvalues.
* The Perron-Frobenius property holds true (first eigenvector has positive entries and has multiplicity one).
* Correlations have a hierarchical structure.
* The Minimum Spanning Tree (MST) obtained from a correlation matrix satisfies the scale-free property.
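A minimal sketch of turning decoded grid points into valid correlation matrices and testing two of the facts above is given below; the symmetrization and eigenvalue clipping are our assumptions, since a raw decoder output need not be symmetric positive semi-definite with unit diagonal.

```python
import numpy as np

def to_correlation(raw, M=44):
    """Project a raw decoder output (flat numpy array) onto a valid matrix."""
    A = raw.reshape(M, M)
    A = (A + A.T) / 2                                  # enforce symmetry
    w, Q = np.linalg.eigh(A)
    A = Q @ np.diag(np.clip(w, 1e-6, None)) @ Q.T      # enforce PSD
    d = np.sqrt(np.diag(A))
    return A / np.outer(d, d)                          # unit diagonal

def check_stylized_facts(C):
    off = C[np.triu_indices_from(C, k=1)]              # pairwise correlations
    w, Q = np.linalg.eigh(C)
    v1 = Q[:, -1] * np.sign(Q[:, -1].sum())            # first eigenvector
    return {"positive_shift": off.mean() > 0,          # fact 1
            "perron_frobenius": np.all(v1 > 0)}        # fact 3

# mats = [to_correlation(decode(z)) for z in grid]  # decode() is the VAE decoder
```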
As shown in Fig. 11, the distributions of pairwise correlations are shifted towards positive values both for synthetic and historical matrices. It is worth noting that the synthetic correlations show a more symmetric distribution. The distributions of the eigenvalues (each averaged respectively over the historical and synthetic matrices) are very similar to each other (Fig. 12) and can be approximated by a Marchenko-Pastur distribution but for a first large eigenvalue and a couple
Figure 6: The distribution of the distance of the first two eigenvectors from their respective historical average and the distribution of the first eigenvalue characterize the regions of the latent space.
of other eigenvalues. It is worth noting that the correlation matrices analyzed for our purposes are calculated starting from 44 equity indices (as explained in Section 2.1) instead of single stocks as in [12], hence a higher degree of concentration is expected. As shown in Fig. 13, the eigenvector corresponding to the largest (real) eigenvalue has strictly positive components both on historical and synthetic matrices. In Fig. 14 we show the heatmap and the dendrogram of two randomly chosen correlation matrices. A hierarchical structure of the correlations can be observed, even if, as already pointed out, we are analysing indices instead of single stocks. In Fig. 15 we show the distribution of the degrees of the Minimum Spanning Tree\({}^{3}\) calculated on the mean of the historical and synthetic matrices. Both show very few nodes with high degrees, while most nodes have degree 1.
Footnote 3: For the construction of the MST we followed [14].
### Quantifying the sensitivity to asset correlations
For each matrix generated with the VAE probabilistic decoder, we estimated the corresponding VaR according to the multi-factor Vasicek model described in Section 1.1. We used the VaR metric as a proof of concept of the methodology and to stay aligned with Economic Capital requirements, but the same rationale can be followed with a different risk metric. As mentioned in Section 1.1, the multi-factor Vasicek model cannot be solved in closed form, hence it is necessary to run a Monte Carlo simulation for each generated matrix. We used a stratified sampling simulation with 1 million runs. In each estimation, the parameters of the model and the portfolio exposures are held constant. Running the simulation for every sampled point of the latent space, we derived the VaR surface of Fig. 7.
To obtain an estimate of the sensitivity of the VaR to possible future evolutions of the correlation matrix, we "bootstrapped" (see Fig. 9) the historical time series of the points in the 2-dimensional latent space. We applied both a simple bootstrap [2] and a block-bootstrap technique [10] to the time series of the differences of the two latent components, \(\mu_{1}\) and \(\mu_{2}\) (depicted in Fig. 8).
Interpolating the estimated VaR over the sampled grid (Fig. 7), we can derive the Value at Risk corresponding to any point of the latent space. Hence, for each point belonging to the distribution of correlation changes over a 1-year time horizon estimated via bootstrap, we can compute the corresponding VaR without resorting to a Monte Carlo simulation.
In this way, we obtained the VaR distribution related to the possible variations of correlation matrices over a defined time horizon.
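A minimal sketch of the two steps just described — interpolating the VaR surface over the latent grid and evaluating it at bootstrapped 1-year latent moves — is given below. The function name and the simple-bootstrap resampling scheme are illustrative assumptions; the block-bootstrap variant would resample contiguous blocks of differences instead.

```python
import numpy as np
from scipy.interpolate import griddata

def bootstrap_var_distribution(grid_pts, var_grid, mu_path, horizon=12,
                               n_boot=10_000, seed=0):
    """Read bootstrapped 1-year latent moves off the interpolated VaR surface.

    grid_pts: (P, 2) latent grid points; var_grid: (P,) Monte Carlo VaR at
    each point; mu_path: (T, 2) historical series of (mu_1, mu_2).  Monthly
    differences are resampled with replacement (simple bootstrap).
    """
    rng = np.random.default_rng(seed)
    diffs = np.diff(mu_path, axis=0)                      # monthly latent moves
    idx = rng.integers(0, len(diffs), size=(n_boot, horizon))
    endpoints = mu_path[-1] + diffs[idx].sum(axis=1)      # latent points in 1 year
    return griddata(grid_pts, var_grid, endpoints, method='linear')
```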
## 3 Conclusions
In this work we applied a Variational Autoencoder to generate realistic financial correlation matrices that we used as input for the estimation of credit portfolio
Figure 8: The time series of \(\mu_{1}\) and \(\mu_{2}\), projections of the 206 historical correlation matrices in the 2-dimensional latent space.
Figure 7: The surface generated from Value at Risk with respect to the points of the 2-dimensional latent space.
concentration risk estimated with a multi-factor Vasicek model. We deviated from the methodology proposed by G. Marti [12], who adopted a Generative Adversarial Network, in order to obtain an interpretable model by leveraging the dimensionality reduction provided by the VAE. Using as a proof of concept a VAE trained on a dataset composed of 206 correlation matrices calculated on the time series of 44 equity indices using a rolling window of 100 months, we showed how it is possible, even with a small data sample, to derive an interpretation of the latent space that appears aligned with the main aspects driving portfolio diversification [16].
We exploited the generative capabilities of the VAE to extend the scope of the model beyond the necessarily limited size of the historical sample, generating a larger set of correlation matrices that retain all the realistic features observed in the market, as verified by appropriate tests. We computed the augmented sample of synthetic correlation matrices on a grid in the 2-dimensional VAE latent space and, for each synthetic matrix, the corresponding credit portfolio loss distribution (and its VaR at a given percentile) obtained via Monte Carlo simulation under a multi-factor Vasicek model. In this way we estimated a VaR surface over the VAE latent space.
Analyzing the time series of the encoded correlation matrices (i.e. the two components of the probabilistic latent space), we easily estimated, via bootstrapping, the possible variation of the correlation matrices over a 1-year time horizon. Finally, using the interpolated VaR surface, we estimated the corresponding VaR distribution, obtaining a quantification of the impact of correlation movements on credit portfolio concentration risk. This approach gives a rapid estimation of risk without depending on the extensive computations of Monte Carlo simulation, and it does so in a compressed, easy-to-visualize space that captures several aspects of market dynamics. Our analysis provides clear indications that the capabilities of realistic data augmentation
Figure 9: Using a simple bootstrap (left) or a block bootstrap (with 11 monthly steps) on the "compressed" representation of the correlation matrices, we estimated the distribution of the possible variation of the current matrix over a 1-year time horizon.
provided by Variational Autoencoders, combined with the ability to obtain model interpretability, can prove useful for risk management purposes when addressing the sensitivity of models to structured multidimensional market data such as the correlation matrix.
## Disclaimer
The views and opinions expressed within this paper are those of the authors and do not necessarily reflect the official policy or position of Intesa Sanpaolo. Assumptions made in the analysis, assessments, methodologies, models and results are not reflective of the position of any entity other than the authors.
|
2308.00053 | T-Fusion Net: A Novel Deep Neural Network Augmented with Multiple
Localizations based Spatial Attention Mechanisms for Covid-19 Detection | In recent years, deep neural networks are yielding better performance in
image classification tasks. However, the increasing complexity of datasets and
the demand for improved performance necessitate the exploration of innovative
techniques. The present work proposes a new deep neural network (called as,
T-Fusion Net) that augments multiple localizations based spatial attention.
This attention mechanism allows the network to focus on relevant image regions,
improving its discriminative power. A homogeneous ensemble of the said network
is further used to enhance image classification accuracy. For ensembling, the
proposed approach considers multiple instances of individual T-Fusion Net. The
model incorporates fuzzy max fusion to merge the outputs of individual nets.
The fusion process is optimized through a carefully chosen parameter to strike
a balance on the contributions of the individual models. Experimental
evaluations on benchmark Covid-19 (SARS-CoV-2 CT scan) dataset demonstrate the
effectiveness of the proposed T-Fusion Net as well as its ensemble. The
proposed T-Fusion Net and the homogeneous ensemble model exhibit better
performance, as compared to other state-of-the-art methods, achieving accuracy
of 97.59% and 98.4%, respectively. | Susmita Ghosh, Abhiroop Chatterjee | 2023-07-31T18:18:01Z | http://arxiv.org/abs/2308.00053v1 | T-Fusion Net: A Novel Deep Neural Network Augmented with Multiple Localizations based Spatial Attention Mechanisms for Covid-19 Detection
###### Abstract
In recent years, deep neural networks are yielding better performance in image classification tasks. However, the increasing complexity of datasets and the demand for improved performance necessitate the exploration of innovative techniques. The present work proposes a new deep neural network (called as, _T-Fusion Net_) that augments multiple localizations based spatial attention. This attention mechanism allows the network to focus on relevant image regions, improving its discriminative power. A homogeneous ensemble of the said network is further used to enhance image classification accuracy. For ensembling, the proposed approach considers multiple instances of individual T-Fusion Net. The model incorporates fuzzy max fusion to merge the outputs of individual nets. The fusion process is optimized through a carefully chosen parameter to strike a balance on the contributions of the individual models. Experimental evaluations on benchmark Covid-19 (SARS-CoV-2 CT scan) dataset demonstrate the effectiveness of the proposed T-Fusion Net as well as its ensemble. The proposed T-Fusion Net and the homogeneous ensemble model exhibit better performance, as compared to other state-of-the-art methods, achieving accuracy of 97.59% and 98.4%, respectively.
Convolutional neural network, spatial attention, ensemble model, fuzzy max fusion, Covid-19 detection.
## 1 Introduction
Deep learning models (Fig. 1) are achieving greater success in the field of computer vision, especially in image classification tasks. Convolutional Neural Networks (CNNs) have demonstrated remarkable success in extracting discriminative features from images, enabling better classification. However, as datasets become more complex and diverse, achieving further improvements in classification performance remains a challenge. Researchers are continuously exploring innovative techniques to enhance the capabilities of image classification models. One area of focus is the integration of attention mechanisms into deep learning architectures and finding the suitable position for incorporating them. Attention mechanisms aim to enhance the discriminative power of models by selectively focusing on relevant regions within an image. Such mechanisms allow the models to attend to important features and suppress irrelevant or noisy information, leading to improved classification accuracy. Spatial attention, in particular, has gained popularity for its ability to capture fine-grained details from images.
In this context, the present work proposes a novel deep neural network that integrates _MLSAM_, a multiple localizations based spatial attention mechanism. The augmented network is termed the T-Fusion Net. A homogeneous ensemble of this T-Fusion Net is further used, which leverages the strengths of multiple individual models, each incorporating spatial attention with different kernel sizes. The fusion of outputs from the individual models is achieved through fuzzy max fusion.
The effectiveness of our proposed approach is evaluated through experiments on a benchmark Covid-19 detection dataset: SARS-CoV-2 CT scan [4]. Performance is compared with several other deep neural network models. The proposed T-Fusion Net augmented with spatial attention, and its subsequent ensemble with fuzzy max fusion, show superior accuracy and other important performance metrics.
The rest of the paper is organized as follows: Section 2 provides a review of related literature in the field of image classification using deep learning. Section 3 presents the methodology, including a detailed description of the proposed multiple localizations based spatial attention block, its integration within the T-Fusion Net architecture, and the ensemble through fuzzy max fusion. Section 4 describes the experimental setup, mentioning the dataset used, evaluation metrics considered, image preprocessing, and details of the parameters taken. Analysis of results is presented in Section 5. Finally, Section 6 concludes the paper.
## 2 Related Research
Zhang et al. [1] proposed a hybrid attention method that combined both spatial and channel attention mechanisms. Attention mechanisms in deep learning models allow the network to focus on specific regions or channels of input data that are most relevant for making predictions. Spatial attention helps the model concentrate on important spatial regions, while channel attention emphasizes important channels in the feature maps. The approach demonstrated significant performance gains compared to conventional methods, making it a state-of-the-art technique at the time of its publication. Huang et al. [2] introduced a fuzzy fusion technique as an effective way to combine outputs from individual models. Ensemble methods, such as model fusion, are commonly used to boost the performance of machine learning models. In the context of image classification, multiple models may produce different predictions for the same image, and combining their outputs can lead to improved accuracy and robustness. Zheng et al. [3] proposed a hierarchical fusion approach that combined multiple levels of features for image classification. In deep learning models, features are hierarchically learned at different layers of the network.
In line with these advancements, our research presents a novel deep neural network model (termed as, T-Fusion Net) that integrates a new spatial attention method. By incorporating attention mechanisms into individual models, we aim to
Figure 1: General representation of deep neural net [7]
capture diverse and discriminative image features. Thereafter, using fuzzy max fusion, our model optimally combines the strengths of individual T-Fusion Net, leading to improved classification accuracy.
## 3 Proposed Methodology
In the present work, the architecture of the proposed model is designed using Convolutional Neural Networks (CNNs) to extract meaningful features from images. Spatial attention is incorporated by adding convolutional layers with different kernel sizes; we call this multiple localizations. The outputs of these attention-enhanced convolutional layers are concatenated to capture discriminative features from images. Since the proposed network model resembles the English letter "T", we term it the _T-Fusion Net_. Additionally, we build an ensemble model with a fuzzy max function to further improve the performance of the T-Fusion Net. Each module of the T-Fusion Net is described below in detail.
### Multiple Localizations based Spatial Attention
The proposed multiple localizations based attention mechanism (_MLSAM_) aims to enhance the receptive power of the model by selectively focusing on important regions within the input feature map.
The MLSAM module (Fig. 2) takes an input feature map and applies convolutional operations to capture local and global patterns at different scales by varying kernel sizes. It consists of three parallel branches, each performing convolution with a different kernel size: 3x3, 5x5, and 7x7. These branches generate feature maps with 4 channels each, resulting in a concatenated feature map with a total of 12 channels. The concatenation of the feature maps is performed along the channel axis, allowing the model to capture diverse information from different kernel sizes. This step is crucial for multi-scale feature extraction. Subsequently, a convolutional layer with a 3x3 kernel size is applied to the concatenated feature map. This layer reduces the number of channels to one using the sigmoid activation function. The resultant output is a spatial attention map that represents the importance of different spatial locations within the input feature map. Finally, the spatial attention map is combined with the input feature map through element-wise multiplication. This operation selectively amplifies informative regions while suppressing less relevant ones.
Figure 2: MLSAM: Multiple Localization based Spatial Attention Mechanism.
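A minimal sketch of the MLSAM block described above is given below. The paper does not state the implementation framework, so PyTorch is assumed here; "same" padding is also an assumption, implied by the element-wise multiplication of the attention map with the input feature map.

```python
import torch
import torch.nn as nn

class MLSAM(nn.Module):
    """Multiple Localizations based Spatial Attention (sketch of Fig. 2)."""

    def __init__(self, in_channels):
        super().__init__()
        # Three parallel branches with 4 output channels each (3x3, 5x5, 7x7).
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, 4, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        # The 12-channel concatenation is reduced to a 1-channel attention map.
        self.reduce = nn.Conv2d(12, 1, kernel_size=3, padding=1)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        attention = torch.sigmoid(self.reduce(feats))   # (N, 1, H, W) in [0, 1]
        return x * attention                            # element-wise re-weighting
```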
### Architecture of T-Fusion Net
The architecture of the T-Fusion Net, as shown in Fig. 3, consists of several convolutional layers, which are fundamental building blocks for extracting hierarchical features from input images. The architecture follows a sequential pattern, progressively transforming the given input images to higher-level representations.
Figure 3: T-Fusion net architecture.
The details of each of the components of the T-Fusion Net are described below in brief.
Input Layer: The input layer takes in images with a shape of (224, 224, 3), representing the width, height, and three color channels (R, G, B).
Convolutional Layers: The initial convolutional layer applies a set of filters to the input image, extracting local patterns and low-level features. Three separate convolutional layers with different kernel sizes (3x3, 5x5, 7x7) are applied in parallel, capturing information at different scales. Each convolutional layer performs element-wise multiplications followed by element-wise additions to create activation maps.
Concatenation: The outputs of the three parallel convolutional layers are concatenated along the channel axis. The purpose of concatenation is to capture diverse features at multiple scales, enabling the model to learn a richer representation of the input data.
Batch Normalization: Batch normalization is applied to normalize the activations of the concatenated feature maps, improving the convergence of the model.
MLSAM: Multiple Localizations based Spatial Attention Mechanism (MLSAM) is applied to the batch-normalized feature maps. As stated, this attention mechanism selectively emphasizes important spatial regions while suppressing irrelevant or noisy information thereby enhancing discriminative power. In the present work, the MLSAM has been introduced in the Feature extractor and subsampling Block 1 (in between Batch normalization and Max pooling) (please see Fig. 3).
Max Pooling: Max pooling is performed on the spatially attended feature maps to downsample the dimensions and reduce computational complexity.
Convolutional Layers and Pooling: Four additional convolutional layers and max pooling operations are applied sequentially to capture higher-level abstract features. These are represented in Fig. 3 as "Feature extractor and Sub-sampling block"; in each such block, convolution, batch normalization, and max pooling operations take place (shown enclosed in blue in Fig. 3). These layers gradually increase the receptive field, allowing the model to learn more complex patterns and relationships in the image data.
Flattening: The final feature maps are flattened into a 1-dimensional vector to serve as input for the subsequent fully connected layers.
Fully Connected Layers (Dense): The flattened feature vector is connected to a fully connected layer, which performs non-linear transformations to learn class-specific representations.
Dropout Regularization: Dropout regularization is applied to the fully connected layers to prevent overfitting. A dropout of 60% has been applied in this work.
Output (Softmax) Layer: The output layer consists of two nodes with softmax activation function, representing the probabilities of the given input image belonging to each of the two classes (Covid/ non-Covid). The class with higher probability is selected as the predicted class.
### Ensemble Model
For a given input image, the probability values obtained from the softmax layer are treated as membership values of belongingness to each of the classes. To fuse the outputs of the individual T-Fusion Nets, a fuzzy max fusion method has been applied. The homogeneous ensemble (Fig. 4) computes the maximum value across the individual model outputs. The fused output is obtained by scaling this maximum by \(\alpha\), adding \(\epsilon\), and further adding a bias B. Finally, the technique constructs the ensemble model, specifying its input and output layers.
This ensemble technique utilizes fuzzy max fusion to combine the predictions from multiple individual models. The fused output \(\mathrm{F_{o}}\) is written as:
\[\mathrm{F_{o}}=\alpha\,\mathrm{M_{o}}+\varepsilon+\mathrm{B} \tag{1}\]
where \(\mathrm{M_{o}}\) is the element-wise maximum obtained over all three individual nets, \(\alpha\) determines the balance between the individual model outputs and the fused output, \(\varepsilon\) is a small constant introduced for numerical stability, and the bias \(B\) introduces an offset.
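A minimal sketch of eq. (1) follows, assuming the three nets output (N, 2) softmax probability tensors; the default constants match the values \(\alpha=0.8\), \(\varepsilon=0.0001\), and \(B=20\) reported in Section 4.4.

```python
import torch

def fuzzy_max_fusion(outputs, alpha=0.8, eps=1e-4, bias=20.0):
    """Fuse the softmax outputs of the individual nets as in eq. (1).

    outputs: list of (N, 2) class-probability tensors, one per T-Fusion Net.
    """
    m_o = torch.stack(outputs, dim=0).max(dim=0).values   # element-wise maximum
    return alpha * m_o + eps + bias
```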
## 4 Experimental Setup
To assess the effectiveness of the proposed T-Fusion Net and thereafter its homogeneous ensemble, experiment has been conducted using the SARS-CoV-2 CT scan dataset. Details of this experimental setup have been described below in detail.
### Dataset Used
SARS-CoV-2 CT scan dataset [4] is used. It consists of 1252 images of COVID-19 and 1230 images of Non COVID-19 cases (Table 1). Sample images from both the classes have been shown in Fig. 5.
Figure 4: Proposed ensemble model.
### Image Preprocessing
Each input image is resized to a dimension of 224x224 pixels and normalized to [0,1] by dividing the pixel values by 255.
### Performance Metrics Considered
The models' performance is evaluated using metrics such as accuracy and loss values. We have also considered other performance evaluation criteria, e.g., precision, recall, F1-score, and top-1 error. The confusion matrix and IoU curve have also been considered.
### Parameters Taken
Table 2 shows the parameters and their corresponding values used in our experimentation. Also, the values of \(\alpha\), \(\varepsilon\) and B are set to 0.8, 0.0001 and 20, respectively (though we have experimented with various values of these parameters).
### Model Training
The dataset is split into training and testing sets. The split is performed with a test size of 20% and stratified sampling to maintain class balance. During training, the models' weights are updated using backpropagation and gradient descent to minimize the loss function.
## 5 Analysis of Results
To evaluate the effectiveness of the proposed T-Fusion Net and of its ensemble with fuzzy fusion, various performance metrics are used, as mentioned earlier. A total of 20 simulations have been performed and the average values obtained for each of the metrics are reported in Table 3. From the table it is noticed that the results are promising in terms of various performance indices, yielding 98.4% accuracy
\begin{table}
\begin{tabular}{|c|c|} \hline Types of Classes & Number of Images \\ \hline COVID-19 & 1252 \\ NON COVID-19 & 1230 \\ \hline
**Total Images** & **2482** \\ \hline \end{tabular}
\end{table}
Table 1: Images taken from SARS-CoV-2 CT scan dataset [4].
Figure 5: Images taken from SARS-CoV-2 CT scan dataset. (a) Covid-19, (b) Non Covid-19
for the ensemble of T-Fusion Nets. Experimentation was done on an NVIDIA A100 tensor core GPU.
As described earlier, in the present paper we propose a new soft segmentation approach, called MLSAM, for image classification tasks. MLSAM aims to enhance the interpretability and accuracy of soft segmentation models. This is achieved by incorporating multiple localizations based spatial attention mechanisms, which allow the model to selectively focus on different regions of the image at various levels of granularity. Fig. 6 shows how our MLSAM module segments the original image.
\begin{table}
\begin{tabular}{|l|l|} \hline Metrics & Ensemble Model (rounded) \\ \hline Precision & 0.98 \\ Recall & 0.98 \\ F1-score & 0.98 \\ Accuracy (\%) & 98.0 \\ Top-1 error (\%) & 2.0 \\ \hline \end{tabular}
\end{table}
Table 3: Results obtained using SARS-CoV-2 CT scan dataset
Figure 6: Original input image is represented in left while the visualization of the intermediate feature representation after the proposed MLSAM module in T-Fusion Net is shown in right.
Figure 7: Variation of training and validation accuracy with epochs for T-Fusion net.
Variations of training & validation accuracy and training & validation loss over epochs for the T-Fusion Net are shown in Figs. 7 and 8, respectively. From Fig. 7 it is seen that both training and validation accuracy increase over epochs. This suggests that the model generalizes well and can make accurate predictions. As training progresses, the validation and training accuracies gradually stabilize. Likewise, as expected, the loss values also stabilize over epochs (Fig. 8).
Fig. 9 shows the IoU bar plots for two different classes. IoU is a commonly used metric in tasks such as object detection and image segmentation. It measures the overlap between the predicted and true positive classes. In this case, the IoU values are 0.9538 and 0.9522 for Covid-19 and Non Covid-19, respectively. This indicates a high degree of overlap between the predicted positive class and the true positive class for the two respective classes.
The confusion matrix obtained using the proposed T-Fusion Net (augmented with MLSAM) is shown in Fig. 10. The entries in the matrix depict the best result obtained out of 20 simulations. The resultant matrix indicates that the proposed model is performing very well for Covid-19 detection.
Figure 8: Variation of training and validation loss with epochs for T-Fusion net + MLSAM.
Figure 9: IoU (Intersection over Union) curves for Covid-19 and Non Covid-19 classes.
To establish the efficacy of our proposed network, the performance of the T-Fusion Net and of its ensemble has been compared with four pretrained (on the ImageNet dataset) deep learning models (AlexNet, VGG-16, VGG-19, and DenseNet201) and also with an explainable deep learning approach. The corresponding accuracy values are shown in Table 4. The last 3 rows of the table (marked in bold) depict the results obtained through our proposed network, averaged over 20 simulations. In this connection it is to be noted that the proposed network is trained from scratch, not pretrained. This table confirms the superiority of the T-Fusion Net for Covid-19 detection, even without ensembling.
We have introduced the T-Fusion Net architecture and explored its performance with and without MLSAM. The proposed deep neural network not only achieves an accuracy of 97.59% (better than other existing methods) but also has fewer parameters (4,221,947) compared to other state-of-the-art neural networks. The ensemble of T-Fusion Nets with MLSAM achieved the highest accuracy of 98.40%, outperforming other models and approaches.
\begin{table}
\begin{tabular}{|l|l|} \hline Methods & Accuracy (\%) \\ \hline AlexNet (Pretrained on ImageNet) [5] & 93.71 \\ VGG-16 (Pretrained on ImageNet) [5] & 94.62 \\ VGG-19 (Pretrained on ImageNet) [5] & 93.56 \\ DenseNet201-based deep TL [6] & 96.25 \\ T-Fusion net (baseline, no MLSAM) & **96.59** \\ T-Fusion net (with MLSAM) & **97.59** \\ Ensemble of T-Fusion net (with MLSAM) & **98.40** \\ \hline \end{tabular}
\end{table}
Table 4: Classification performance of the proposed and the state-of-the-art models for the SARS-CoV-2 CT scan dataset.
Figure 10: Confusion matrix for Covid-19 detection for the proposed T-Fusion net (with MLSAM)
## 6 Conclusion and Future Works
The present work introduces a new deep neural network called _T-Fusion Net_ that incorporates a novel spatial attention mechanism (termed MLSAM). An ensemble of such nets with fuzzy max fusion is also employed. The models were trained to extract relevant features and classify images into Covid-19 and Non Covid-19 categories. The evaluation of these models using various metrics demonstrated their effectiveness in identifying Covid-19 cases. The results obtained from this research (with accuracy of 97.59% and 98.4%, respectively, for the individual T-Fusion Net and its ensemble) offer promising prospects for the use of multiple localizations based spatial attention in Covid-19 diagnosis.
However, further research and refinement of these models are needed to explore them in diverse domains. Future studies should focus on expanding the dataset to include a wider range of Covid-19 cases, including different imaging modalities and disease stages. Additionally, the models could benefit from fine-tuning and optimization to enhance their performance and generalization capabilities.
## Acknowledgement
A part of this work has been supported by the IDEAS - Institute of Data Engineering, Analytics and Science Foundation, The Technology Innovation Hub at the Indian Statistical Institute, Kolkata through sanctioning a Project No /ISI/TIH/2022/55/ dtd. September 13, 2022
|
2309.07367 | The kernel-balanced equation for deep neural networks | Deep neural networks have shown many fruitful applications in this decade. A
network can get the generalized function through training with a finite
dataset. The degree of generalization is a realization of the proximity scale
in the data space. Specifically, the scale is not clear if the dataset is
complicated. Here we consider a network for the distribution estimation of the
dataset. We show the estimation is unstable and the instability depends on the
data density and training duration. We derive the kernel-balanced equation,
which gives a short phenomenological description of the solution. The equation
tells us the reason for the instability and the mechanism of the scale. The
network outputs a local average of the dataset as a prediction and the scale of
averaging is determined along the equation. The scale gradually decreases along
training and finally results in instability in our case. | Kenichi Nakazato | 2023-09-14T01:00:05Z | http://arxiv.org/abs/2309.07367v1 | # The kernel-balanced equation for deep neural networks
###### Abstract
Deep neural networks have shown many fruitful applications in this decade. A network can get the generalized function through training with a finite dataset. The degree of generalization is a realization of the proximity scale in the data space. Specifically, the scale is not clear if the dataset is complicated. Here we consider a network for the distribution estimation of the dataset. We show the estimation is unstable and the instability depends on the data density and training duration. We derive the kernel-balanced equation, which gives a short phenomenological description of the solution. The equation tells us the reason for the instability and the mechanism of the scale. The network outputs a local average of the dataset as a prediction and the scale of averaging is determined along the equation. The scale gradually decreases along training and finally results in instability in our case.
## I Introduction
In the recent decade, data-driven modeling has been empowered by techniques from machine learning. Among them, deep neural networks are the most powerful ones, with a large number of applications [1; 2; 3; 4; 5]. Despite these fruitful applications, we do not know much about the mechanism behind them [6; 7]. Specifically, a network can acquire a generalized function with only a finite dataset. In other words, the network can learn a generalized relation between input and output from finite data. We can get predictions for unknown inputs but do not fully understand how this works.
A neural network, \(y=f(\mathbf{x},\mathbf{w})\), can be defined with an input, \(\mathbf{x}\), and output, \(y\). In training, we adjust the parameters, \(\mathbf{w}\), of the network so that the pre-defined relation, \(y_{i}=f(\mathbf{x}_{i})\), is satisfied as closely as possible. As the pre-defined relation, we give a dataset, \(\{(\mathbf{x}_{i},y_{i})\}\), in advance. We usually update the parameters step by step along the gradient, \(\sum_{i}\nabla L(|y_{i}-f(\mathbf{x}_{i})|)\), of the minimized function, \(L\). The name _neural network_ stems from the architecture of the function, \(f\), which is inspired by brain networks. In fact, one of the most famous architectures, the convolutional neural network, was originally a model of retinal structure [8; 9; 10; 11]. In this paper, we focus on such networks.
We can derive neural tangent kernels, NTKs, or training responses as a theoretical approach to understanding the generalization mechanism[12; 13; 14; 15; 16]. There, we can describe the training response, \(\Theta(\mathbf{x},\mathbf{x}_{i})\), which shows the influence on the output, \(f(\mathbf{x})\), by a training step, in the following equation,
\[f(\mathbf{x},\mathbf{w}+\mathbf{\delta}) \sim f+\frac{\partial f}{\partial\mathbf{w}}\cdot\mathbf{\delta} \tag{1}\] \[= f-\eta\frac{\partial f}{\partial\mathbf{w}}\cdot\frac{\partial f_{i }}{\partial\mathbf{w}}\frac{dL}{df_{i}}\] (2) \[\equiv f-\eta\Theta(\mathbf{x},\mathbf{x}_{i})\frac{dL}{df_{i}}, \tag{3}\]
where the model is trained with a single data, \((\mathbf{x}_{i},y_{i})\), with a minimizing target, \(L\), known as loss function and the parameter, \(\eta\), is a learning rate. We usually have a dataset with many data points, \(\{(\mathbf{x}_{i},y_{i})\}\), and train the model with that. In such a case we can write the equation with the sum of training responses,
\[\Delta f(\mathbf{x})\propto-\sum_{j}\Theta(\mathbf{x},\mathbf{x}_{j})\frac{dL}{df_{j}}. \tag{4}\]
Furthermore, in some cases, we can assume a simple ansatz for the training response with an aging effect. It can be expressed in the following,
\[\Theta(\mathbf{x},\mathbf{x}_{i})\propto t^{-\alpha}K(\mathbf{x},\mathbf{x}_{i}), \tag{5}\]
where the exponent, \(\alpha\), describes the aging decay and the response kernel, \(K\), is a positive decreasing function of the distance, \(|\mathbf{x}-\mathbf{x}_{i}|\). The decay scale would be determined by the architecture, but in this paper we assume it is similar to an exponential curve.
As the minimization target, we consider a more complicated problem than simple supervised training. In standard supervised training, we optimize a network so that the relation, \(y_{i}=f(\mathbf{x}_{i})\), is satisfied for every data point. In contrast, we want to estimate the distribution of the dataset, \(\{(\mathbf{x}_{i},y_{i})\}\). To do that, we estimate the local
mean, \(\mu(\mathbf{x})\), and standard deviation, \(\sigma(\mathbf{x})\), for each input, \(\mathbf{x}\). As the network, we assume the following form,
\[\mu(\mathbf{x}) =g\circ f(\mathbf{x}) \tag{6}\] \[\sigma(\mathbf{x}) =h\circ f(\mathbf{x}). \tag{7}\]
We have a shared function, \(f\), and specific ones, \(g\) and \(h\), to estimate the mean and standard deviation, respectively. As an optimizing function, we minimize the following,
\[L\equiv-\log(\Pi_{i}\frac{1}{\sigma(\mathbf{x}_{i})\sqrt{2\pi}}\exp(-\frac{1}{2} (\frac{y_{i}-\mu(\mathbf{x}_{i})}{\sigma(\mathbf{x}_{i})})^{2})). \tag{8}\]
In other words, we want to fit a Gaussian distribution. This type of problem is known as _uncertainty estimation_ in the field of machine learning [17; 18; 19]. We want to know both the maximum likelihood estimate, \(\mu(\mathbf{x})\), and its uncertainty, \(\sigma(\mathbf{x})\), at the same time. Our problem setting is much simpler than typical applications.
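A minimal PyTorch sketch of the loss in eq. (8) (up to additive constants) is shown below; the small clamp on \(\sigma\) is our own numerical guard, not part of the paper's formulation — indeed, the instability studied below appears precisely because \(\sigma\) can approach zero.

```python
import torch

def gaussian_nll(mu, sigma, y, eps=1e-6):
    """Negative log-likelihood of eq. (8), dropping the constant log(2*pi)/2 term.

    mu, sigma: network outputs mu(x_i) and sigma(x_i); y: the targets y_i.
    `eps` is our own guard against sigma -> 0; without it the loss diverges,
    which is exactly the instability studied in Sec. III.
    """
    sigma = sigma.clamp_min(eps)
    return (torch.log(sigma) + 0.5 * ((y - mu) / sigma) ** 2).sum()
```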
In the case of standard prediction, we usually minimize the distance, \(|\mu_{i}-y_{i}|\), and the optimal solution is the exact one, \(\mu_{i}=y_{i}\). However, we can get a different mean value because we simultaneously estimate the standard deviation, \(\sigma\), in our network. As stated above, training with a data point, \((\mathbf{x}_{i},y_{i})\), can influence the prediction for another data point, \((\mathbf{x}_{j},y_{j})\). In sum, the predicted mean and standard deviation, \(\mu(\mathbf{x}_{i})\) and \(\sigma(\mathbf{x}_{i})\), can be a local estimation of the data distribution. Our main question is how the scale of this estimation is determined. We assume two different inputs, \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\), should have similar outputs, \(f(\mathbf{x}_{a})\sim f(\mathbf{x}_{b})\), depending on the distance between them, as is the nature of prediction. However, we do not know how this is realized in deep neural networks. In other words, we want to know how and to what extent the estimation is generalized.
As a hypothesis, we might assume that specific scales in the structure of the dataset are reflected in the prediction; in other words, the scale may be a reflection of the semantic structure of the dataset. However, we use randomly generated datasets rather than specific public ones, as in other studies in statistical physics [20; 21]. The advantage of this approach is that we can obtain a universal understanding of the nature of the neural networks, independent of the dataset instance.
In the next section, II, we introduce our model, describing the network, the dataset, and the training method. We show the training dynamics in section III, where we find that the estimation is unstable. Furthermore, we introduce a phenomenological description of the solution, the kernel-balanced equation, which explains the instability and answers our questions about the scale and generalization. Finally, we show that the equation can be recovered from the dynamics given by the training response.
## II Model
In general, deep neural networks consist of layers of nonlinear transformations, \(f_{i}\), and an input, \(\mathbf{x}\), and output, \(y\),
\[y=f_{n}\circ\cdots\circ f_{0}(\mathbf{x}). \tag{9}\]
In each layer, we often use a combination of a linear convolution, \(c_{jk}\), and a non-linear activation function, \(R\),
\[\mathbf{h}_{k}=R(b_{k}+\sum_{j}c_{jk}(\mathbf{h}_{j})), \tag{10}\]
where the j-th channel of the input to a layer, \(\mathbf{h}_{j}\), is transformed into the k-th channel of the output, \(\mathbf{h}_{k}\). Hidden variables, \(\mathbf{h}_{i}\), often have multiple channels to give the network more degrees of freedom. We call this a convolution layer [8; 9; 10; 11].
Here, we focus on a simple convolutional network with \(n\) layers. We assume the input, \(\mathbf{x}\), is a 1-dimensional bit string of size \(b\); in other words, the input, \(\mathbf{x}\), is a binary vector. Each convolution layer is defined by the number of output channels, \(s_{c}\), and the kernel size, \(s_{k}\), of the convolution. In our model, the number of channels, \(s_{c}\), is the same in every convolution layer. Needless to say, the output of a mid-layer is the input to the next layer. In the final layer, we usually use a linear network,
\[y=R(\sum_{i}\mathbf{a}_{i}\cdot\mathbf{h}_{i}+b), \tag{11}\]
where the hidden variable, \(\mathbf{h}_{i}\), is the input to the last layer. The non-linear activation, \(R\), is applied after a linear transformation with parameters, \(\mathbf{a}_{i}\) and \(b\). In numerical experiments, we use the setting \(b=8\), \(s_{k}=3\), \(s_{c}=3\), with ELU as the activation function and SGD as the training algorithm, with a learning rate \(\eta=0.1\) and no learning momentum [22; 23; 24; 25].
Since we consider a network with two outputs, \(\mu\) and \(\sigma\), we have two linear networks, \(g\) and \(h\), after the convolution layers, \(f\), as in eqs. (6) and (7). We call this network the _variance network_. In addition, we also consider a simpler one with only one output, \(\mu\), as in eq. (6), for easier understanding. We call that the _average network_.
As the dataset, we consider a random bit encoding [16]. The dataset, \(\{(\mathbf{x}_{i},y_{i})\}\), consists of pairs of an input, \(\mathbf{x}_{i}\), and output, \(y_{i}\). We randomly generate \(N\) pairs in advance and use them as the training dataset. We train a model
with the dataset and the loss function, eq. (8). By testing randomly generated datasets and analyzing their statistical features, we can focus on the training dynamics itself, independent of any specific dataset instance.
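A minimal PyTorch sketch of the variance network and its training setup follows. The settings \(b=8\), \(s_{k}=3\), \(s_{c}=3\), three conv layers, ELU, and SGD with \(\eta=0.1\) follow Section II; the single-linear-layer heads and the softplus used to keep \(\sigma\) positive are our own assumptions, as the paper does not fully specify the heads.

```python
import torch
import torch.nn as nn

class VarianceNet(nn.Module):
    """Sketch of the variance network: shared 1-d conv trunk f with heads g, h."""

    def __init__(self, b=8, s_c=3, s_k=3, n_layers=3):
        super().__init__()
        layers, in_ch = [], 1
        for _ in range(n_layers):
            layers += [nn.Conv1d(in_ch, s_c, s_k, padding=s_k // 2), nn.ELU()]
            in_ch = s_c
        self.f = nn.Sequential(*layers)       # shared trunk, eqs. (6)/(7)
        self.g = nn.Linear(s_c * b, 1)        # mean head mu(x)
        self.h = nn.Linear(s_c * b, 1)        # deviation head sigma(x)

    def forward(self, x):                     # x: (N, 1, b) bit strings
        z = self.f(x).flatten(1)
        mu = self.g(z).squeeze(-1)
        # Softplus keeps sigma positive (our choice, not stated in the paper).
        sigma = torch.nn.functional.softplus(self.h(z)).squeeze(-1)
        return mu, sigma

net = VarianceNet()
x = torch.randint(0, 2, (20, 1, 8)).float()        # N = 20 random 8-bit strings
mu, sigma = net(x)
opt = torch.optim.SGD(net.parameters(), lr=0.1)    # SGD without momentum, as in Sec. II
```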
We also consider a simplified model for understanding the training dynamics. As stated, the training dynamics can be described by a simple equation, (3) or (5). The training response, \(\Theta(\mathbf{x},\mathbf{x}_{i})\), is known as the neural tangent kernel and is constant during training in the infinite limit of network size [12; 13]. Even if the size is finite, it can be represented by a product of a time-dependent term, \(A(t)\), and an almost constant kernel, \(K(\mathbf{x},\mathbf{x}_{i})\), as in equation (5) [16]. In the case of an average network, we can write down the simplified dynamics,
\[\frac{d\mu_{i}}{dt}\propto-\sum_{j}K(\mathbf{x}_{i},\mathbf{x}_{j})\frac{dL}{d\mu_{j}}, \tag{12}\]
where we ignore the time-dependent term in the training response. In other words, we consider short-term training dynamics and call this _response kernel dynamics_.
In many cases, the loss function, \(L\), evaluates the distance between the prediction, \(f_{i}\), and the answer, \(y_{i}\), e.g. mean squared error. In such a case, we can write it as follows,
\[\frac{df_{i}}{dt}\propto\sum_{j}K_{ij}(y_{j}-f_{j}), \tag{13}\]
where the response kernel, \(K_{ij}\), can be regarded as constant during training, by assumption.
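A minimal NumPy sketch of eq. (13) under the exponential kernel ansatz of eq. (16) is given below; the Euler step size and the normalization of the kernel by \(N\) are our own choices for numerical stability.

```python
import numpy as np

def simulate_kernel_dynamics(x, y, beta=1.0, eta=0.1, steps=2000):
    """Euler integration of eq. (13) under the exponential kernel of eq. (16).

    x: (N,) 1-d data positions; y: (N,) targets.  The predictions f_i evolve
    as df/dt = K (y - f) with K_ij = exp(-beta * |x_i - x_j|); dividing the
    step by N keeps the explicit integration stable (our choice).
    """
    K = np.exp(-beta * np.abs(x[:, None] - x[None, :]))
    f = np.zeros_like(y, dtype=float)
    for _ in range(steps):
        f += (eta / len(x)) * (K @ (y - f))
    return f

# Large-scale structure in y is fitted first; local detail emerges later.
```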
## III Results
### 1-point training
We start from the simplest case, where the dataset contains only one pair, \((\mathbf{x}_{0},y_{0})\), and we train a variance network with it. We call this _1-point training_. When the dataset consists of \(N\) pairs, we call it _N-point training_.
Before showing results, we should examine the form of the loss function, equation (8). Since the loss is the negative log-likelihood of a Gaussian, it can easily be rewritten as a summation,
\[L\propto\Sigma_{i}\left(\log\sigma(\mathbf{x}_{i})+\frac{1}{2}\left(\frac{y_{i}-\mu(\mathbf{x}_{i})}{\sigma(\mathbf{x}_{i})}\right)^{2}\right). \tag{14}\]
The first term, \(\Sigma_{i}\log\sigma(\mathbf{x}_{i})\), is minimized as the standard deviation shrinks towards \(\sigma(\mathbf{x}_{i})=0\). However, the second term includes \(\sigma\) in its denominator and can diverge at zero. If the first term is minimized faster, the second term can change its value abruptly; in other words, we see a numerical instability there. On the contrary, if the scale of the standard deviation is adjusted moderately, we do not see numerical instability. We can follow the trajectory of the point, \((y_{i}-\mu(\mathbf{x}_{i}),\sigma(\mathbf{x}_{i}))\), to see how the prediction scale is determined.
We show the results of 1-point training in FIG. 1. In the figure, we show trajectories of the training dynamics on the vector field given by the loss gradient. All of them start from around the center, \(|\mu_{0}-y_{0}|\sim 0.5\) and \(\sigma_{0}\sim 0.5\), and converge towards the optimal point, \(\mu_{0}\sim y_{0}\) and \(\sigma_{0}\sim 0\). As we can confirm, they move almost along the vector field. However, we cannot reach the optimal point, \((\mu_{0},\sigma_{0})=(y_{0},0)\), because it is numerically unstable: one of the network outputs, \(\sigma_{0}\), is in the denominator of the loss function, equation (8). In other words, our formulation of a variance network cannot have the optimal point as a solution of 1-point training. This is a reasonable result because there is no meaningful definition of a standard deviation with only one data point.
### data density transition
Next, we consider N-point training with a variance network. We sample pairs of the random encoding, \((\mathbf{x}_{i},y_{i})\), with a size, \(N\), and train the network with them. Firstly, we want to roughly grasp the features of the training dynamics. To do that, we evaluate the training results via the variance, which we can estimate in two ways: the mean squared error, \(V\equiv<(y_{i}-\mu_{i})^{2}>\), and the predicted one, \(V^{*}\equiv<\sigma_{i}^{2}>\). In FIG. 2, we show these estimated variances for different dataset sizes, \(N\). We can confirm a non-negligible difference between them in cases with a small dataset, but the difference is negligible in cases with a larger dataset. In FIG. 2, we show the difference after training for a fixed number of epochs, \(e_{mx}=2000\), but it can also depend on the duration of the training itself.
In FIG. 3, we show the dependence on the training epoch. In the figure, we plot the difference, \(|V-V^{*}|\), against the size of a dataset, \(N\), and the duration of the training epoch, \(e_{mx}\). As we can see, the difference tends to grow when the duration, \(e_{mx}\), is large. On the other hand, it tends to be reduced when the size, \(N\), is large. In other words, the difference can be negligible when the data density is large but it may grow after enough training.
As we already confirmed, 1-point training is numerically unstable. In addition, N-point training is also unstable if the training dataset is sparse enough, FIG. 2. If we assume the difference, \(|V-V^{*}|\), stems from numerical instability, these results are reasonable. However, why does the instability grow after longer training? We can expect that the network outputs, \(\mu_{i}\) and \(\sigma_{i}\), depend not only on the local data point, \((\mathbf{x}_{i},y_{i})\), but also on other ones near it. We approximate the output, \(\mu_{i}\), with a weighted average, \(\sum_{j}\exp(-\alpha|\mathbf{x}_{i}-\mathbf{x}_{j}|)y_{j}/N\). The parameter, \(\alpha\), sets the spatial scale of the average. In FIG. 4, we show the training dynamics with a dataset of size \(N=20\). On the left, we show the dynamics of the two variances. On the right, we show the growth of the scale, \(\alpha\), which is optimized so that the weighted average matches the output, \(\mu_{i}\). The variances show no difference at first, but we see a significant difference between them in the end. At the same time, the scale, \(\alpha\), grows along the training. This suggests the spatial scale of prediction is reduced after enough training.
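The scale \(\alpha\) in Fig. 4(b) can be fitted by minimizing the gap between the network outputs and the weighted averages defined above. A minimal SciPy sketch follows; the normalization by \(N\) matches the text, while the search bounds are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_prediction_scale(X, y, mu):
    """Fit the scale alpha of the weighted average shown in Fig. 4(b).

    X: (N, b) inputs; y: (N,) targets; mu: (N,) network outputs.  Finds the
    alpha minimizing the gap between mu_i and sum_j exp(-alpha*|x_i-x_j|)*y_j/N.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances

    def gap(alpha):
        pred = np.exp(-alpha * D) @ y / len(y)   # weighted average, as in the text
        return np.mean((pred - mu) ** 2)

    return minimize_scalar(gap, bounds=(0.0, 50.0), method='bounded').x
```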
### kernel-balanced equation
We consider the simplified dynamics, (12) or (13), to understand the growth of the scale, \(\alpha\). Firstly, we study the training dynamics of an average network through its simplified form,
\[\frac{d\mu_{i}}{dt}\propto\sum_{j}K_{ij}(y_{j}-\mu_{j}). \tag{15}\]
Figure 1: Training dynamics with a single point data. A network is trained with a data point, \((\mathbf{x}_{0},y_{0})\), and the error, \(|\mu_{0}-y_{0}|\), and standard deviation, \(\sigma_{0}\), are shown on the map of loss function. In the figure, we show 4 results with different colors and initial conditions, but all trajectories finally end up with numerical instability. The vector field shows the gradient of the loss function. The color of the arrows shows the steepness of the gradient in the log scale. Here we used the setting, learning rate \(\eta=0.1\), the size of input \(b=8\), the number of layers 3. We used SGD and ELU as an optimizer and activation, respectively.
Figure 3: Numerical instability against sample size and training epochs, \(e_{mx}\). We show the difference between two predicted variances. The horizontal axis means the sample size of the training dataset. The vertical axis means training epochs. The difference is shown with color.
The matrix, \(K_{ij}\), consists of kernel distance terms, \(K(|\mathbf{x}_{i}-\mathbf{x}_{j}|)\), and we assume it can be expressed in an exponential form as already introduced,
\[K_{ij}\sim\exp(-\beta|\mathbf{x}_{i}-\mathbf{x}_{j}|). \tag{16}\]
We show eigenvectors and eigenvalues for an ideally simple case in FIG. 5. We constructed a matrix, \(K_{ij}=\exp(-|x_{i}-x_{j}|)\), from 100 sorted random values, \(0\leq x_{i}\leq 1\). In other words, we show the features of a response kernel with randomly spaced data points. As we can see, the eigenvalues decrease in a power-law manner. On the other hand, the eigenvectors show Gabor-wavelet-like forms [26; 27]. Major modes show broader waves than minor ones. This means that the training dynamics reduces large-scale spatial error first; local error is reduced only after enough training. These dynamics can be interpreted in our case as follows,
\[\frac{d\mu_{i}}{dt} \propto \sum_{j}K_{ij}(y_{j}-\mu_{j}) \tag{17}\] \[\sim \sum_{j}K_{ij}(y_{j}-\mu_{i})\] (18) \[\sim \sum_{k}\lambda_{k}\sum_{j}\tilde{K}_{ijk}(y_{j}-\mu_{i}), \tag{19}\]
where the values, \(\tilde{K}_{ijk}\sim\exp(-\beta_{k}|x_{i}-x_{j}|)\), are differently scaled kernel terms. If we can assume the relation, \(\mu_{j}\sim\mu_{i}\pm\delta\), around the point, \(\mathbf{x}_{i}\), we get equation (18). This means the local expectation, \(\mu_{i}\), should match the weighted average, \(K_{ij}y_{j}\). Finally, we rewrite it in the manner of a multi-scale expansion, eq. (19), using the differently scaled kernels, \(\tilde{K}_{ijk}\), and the weight of each mode, \(\lambda_{k}\). Needless to say, we assume the weight is larger for the more global averaging terms, which have small parameters, \(\beta_{k}\). Thus, the weighted balance of kernel responses,
\[\mu_{i}=\frac{\sum_{j}\tilde{K}_{ijk}y_{j}}{\sum_{j}\tilde{K}_{ijk}}, \tag{20}\]
is realized from more global modes to local ones through training.
In the same way, we can write down the kernel-balanced equation for a variance network in the following,
\[\frac{d\mu_{i}}{dt} \propto \sum_{j}\frac{K_{ij}}{2\sigma_{j}^{2}}(y_{j}-\mu_{j}) \tag{21}\] \[\sim \sum_{k}\lambda_{k}\sum_{j}\tilde{K}_{ijk}\frac{y_{j}-\mu_{i}}{2 \sigma_{j}^{2}}\] (22) \[\frac{d\sigma_{i}}{dt} \propto \sum_{j}\frac{K_{ij}}{\sigma_{j}^{3}}((y_{j}-\mu_{j})^{2}-\sigma _{j}^{2})\] (23) \[\sim \sum_{k}\lambda_{k}\sum_{j}\tilde{K}_{ijk}\frac{(y_{j}-\mu_{i})^ {2}-\sigma_{i}^{2}}{\sigma_{j}^{3}}. \tag{24}\]
We can notice the denominator, \(\sigma_{j}\), in the equation (22), as the difference from the previous one. In addition, we have one more equation on the other dynamics, in eq. (23) and (24). These equations suggest we can have
Figure 4: Training dynamics with a training dataset, \(N=20\). (a) Two predicted errors, \(<(y_{i}-\mu_{i})^{2}>\) and \(<\sigma_{i}^{2}>\), are shown. (b) We can approximate the predicted value, \(\mu_{i}\), as a weighted average, \(\sum_{j}\exp(-\alpha|\mathbf{x}_{i}-\mathbf{x}_{j}|)y_{j}/N\). Here we show the evolution of the prediction scale, \(\alpha\).
Figure 5: Eigenvalues and vectors of a response kernel, \(K_{ij}\). We calculated eigenvalues and vectors with a randomly distanced one within a range, \((0,1)\). We generated 100 random values, \(x_{i}\), and made a response kernel, \(\exp(-|x_{j}-x_{i}|)\), with those ones.
numerical instability again, because the point, \(\sigma_{i}=0\), is a local optimum reached through training.
To confirm the numerically unstable dynamics, we show the training dynamics of N-point training, \(N=20\), in FIG. 6. All the trajectories, \((|y_{i}-\mu_{i}|,\sigma_{i})\), are plotted on the loss gradient, in the same manner as in FIG. 1. At first glance, we notice jumps in them. Needless to say, these suggest numerical instability.
We also show the training dynamics against the training epoch, in FIG. 7. We show some convergent trajectories, in (a), and unstable ones, in (b). All of the trajectories, \(\sigma_{i}\) and \(\mu_{i}\), are plotted, in (c) and (d). We can confirm that jumps happen at the timing when the convergent trajectories approach the optimal.
## IV Discussion
Machine learning is now one of the most powerful ways to construct a model for a given dataset. Among the techniques in the field, deep neural networks play central roles not only in business applications but also in scientific studies. If we train a model with a dataset, \(\{(\mathbf{x}_{i},y_{i})\}\), the trained model, \(f\), can successfully predict outputs for any input, \(y=f(\mathbf{x})\), even if the point, \(\mathbf{x}\), is not included in the dataset. This feature is known as generalization. We know generalization should be a reflection of proximity in the input space. In other words, two outputs, \(f(\mathbf{x}_{a})\) and \(f(\mathbf{x}_{b})\), should be similar if the inputs, \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\), are similar to each other. However, the distribution of the dataset is often very complicated, and the spatial scale of similarity can be complicated as well. In this paper, we focus on the structure of the prediction scale. We consider the neural tangent kernel, or training response, because it can describe the spatial effect of a training step and can be applied to the training dynamics. We elucidate the mechanism determining the prediction scale with simple convolutional networks and simplified models of their dynamics.
As an estimation problem for the dataset distribution, we adopt a loss function for fitting a Gaussian distribution. We minimize the loss and obtain an optimal fitting model by training with a dataset. Our model, the variance network, outputs an expectation and a standard deviation, \(\mu(\mathbf{x})\) and \(\sigma(\mathbf{x})\). As the dataset, we simply use a randomly generated encoding from a 1D bit string, \(\mathbf{x}_{i}\), to a binary output, \(y_{i}\). This problem setting is simple enough that we can understand the dynamics of the prediction scale.
As shown in FIG. 1, the dynamics are numerically unstable in the simplest case, 1-point training, because we cannot determine a meaningful standard deviation. In a similar manner, N-point training suffers from numerical instability, as shown in FIG. 2 and FIG. 3. The numerical instability stems from the reduction of the prediction scale, shown in FIG. 4. The scale gradually decreases along training and finally the two types of variances, \(<(y_{i}-\mu_{i})^{2}>\)
Figure 6: Training dynamics with multiple training points, \(N=20\). In the figure, we show all trajectories with different colors. The vector field shows the gradient of a loss function. The color of the arrows shows the steepness of the gradient in the log scale.
Figure 7: A demonstration of numerical instability. We show training dynamics with sample size, \(N=20\). (a) Trajectories of \(\sigma_{i}\) for the group, converged into the state, \(y_{i}^{*}-\mu_{i}=0,\sigma_{i}=0\). In the box, we show the pairs of input, \(x_{i}\), and output, \(y_{i}\). (b) Trajectories of \(\sigma_{i}\), showing first instability. (c) All trajectories of \(\sigma_{i}\). (d) All trajectories of \(\mu_{i}\).
and \(<\sigma_{i}^{2}>\), do not show consistency with each other.
To understand the dynamics of the prediction scale, the response kernel dynamics of an average network are studied. The kernel matrix has Gabor-wavelet-like eigenvectors with eigenvalues decaying along a power law. This suggests the network learns spatially larger-scale patterns first and local patterns later. These dynamics are captured by the kernel-balanced equation, eq. (20). An average network outputs a weighted mean averaged over the answers, \(y_{i}\). The weight determines the prediction scale, and it decreases along the training. In the case of the variance network, the solution is not so straightforward, eqs. (23) and (24), but the prediction scale decreases along training as well. In addition, the predictions suffer from numerical instability again: once any prediction, \((\mu_{i},\sigma_{i})\), approaches the optimal point, \(\sigma_{i}=0\), it destabilizes the training dynamics of the other predictions, \((\mu_{j},\sigma_{j})\).
The kernel-balanced equation suggests we can understand a convolutional network as a function that outputs a local average. The scale of averaging is not fixed and decreases along the training. In other words, the network can output the same values as the dataset after enough training, provided the response kernel does not collapse during it. It is known that the kernel is constant in the ideal condition. Even if the network is not ideal and has a finite size, it can be seen as almost constant in some cases. We still need more studies on kernel stability, but we believe the equation should be effective for a wider variety of cases.
The problem we consider here is known as uncertainty estimation in the field of machine learning. In reality, we suffer from uncertainty for many reasons in practical usage. If we must assume some noise in the observations, we cannot regard the dataset as the ground truth anymore. Even if we can exclude such noise in some way, the truths may not be deterministic. In addition, there are further origins of uncertainty in the model and its training: the model can be redundant and have many solutions for the given dataset, and we often use non-deterministic training algorithms, which can also result in many solutions.
It is known that uncertainty can be divided into two classes, epistemic and aleatoric [19]. What we consider here is the latter. In such a case, we can at best model the dataset as a distribution. In this context, what we show here is the insufficiency of Gaussian modeling; our formulation suggests Laplacian modeling is insufficient as well. On the contrary, a Bayesian approach, in which the parameter has its own distribution, can be a solution. In fact, the numerical instability occurs at the optimal point, \(\sigma_{i}=0\), and can be overcome by replacing \(\sigma_{i}\) with a probabilistic one, \(P(\sigma_{i})\). Indeed, it is reported that the t-distribution is an effective choice [28]. Our formulation therefore explains the reason for this effectiveness.
Nowadays, we often train very large models with large datasets. In such cases, the dataset is usually distributed in a non-uniform manner. As pointed out in [33], we often see model variability in such cases. As we have shown with the kernel-balanced equation, the prediction can differ between training phases, especially for the minor modes of the dataset. Since the training of minor modes is postponed to a much later stage, depending on the initial condition, we can see model variability.
Our instability stems from the shape of the loss function, equation (8). As the kernel-balanced equation shows, this sort of instability cannot be prevented in a straightforward way. However, when the model and the dataset are very large, training requires many more epochs and much more computing time per epoch. In such a case, we may not see the instability in practice simply because the training time is short. As another possibility, when we have drop-out layers in the network, we can have a non-zero variance in the prediction for the same input; if the standard deviation for the input, \(\mathbf{x}\), is not zero, we do not see the instability. In this sense, the instability is not necessarily universal in real applications. However, the analysis with the kernel-balanced equation, especially for the average network, is applicable to a wide range of applications because of its simplicity. As the kernel-balanced equation shows, training always starts from the major modes and proceeds to the minor modes later, so we can carefully design the dataset density for more efficient training, though we still need more studies.
As one such application, we can use it for variance estimation without a dedicated variance head predicting the standard deviation, \(\sigma(\mathbf{x})\). Since the output after \(t\) epochs, \(f_{t}(\mathbf{x})\), approximates a local average whose scale depends on the training stage, the difference between two outputs, \((f_{t}(\mathbf{x})-f_{T}(\mathbf{x}))^{2}\), tells us the local variance in the limit \(T\rightarrow\infty\). We do not yet know the quantitative precision of this formula, but it can be a convenient way to estimate variance.
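To make the recipe concrete, the following is a minimal sketch of this checkpoint-difference estimator on synthetic data; the random-feature regressor standing in for the network, the epoch counts, and all names are our own illustrative assumptions, not the setup analyzed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D dataset with input-dependent (aleatoric) noise.
x = rng.uniform(-1, 1, size=(256, 1))
y = np.sin(3 * x) + rng.normal(0.0, 0.1 + 0.3 * np.abs(x))

# Random-feature regressor as a stand-in for the network; only the
# readout theta is trained, by plain gradient descent on the MSE loss.
W = rng.normal(0, 3, size=(1, 64))
b = rng.uniform(0, 2 * np.pi, size=64)
phi = lambda z: np.cos(z @ W + b)

def train(epochs, lr=0.02):
    theta = np.zeros((64, 1))
    for _ in range(epochs):
        theta -= lr * phi(x).T @ (phi(x) @ theta - y) / len(x)
    return theta

theta_t = train(epochs=50)      # early snapshot: coarse local average
theta_T = train(epochs=20000)   # late snapshot: approaches the data values

xs = np.linspace(-1, 1, 200)[:, None]
var_est = (phi(xs) @ theta_t - phi(xs) @ theta_T) ** 2  # local-variance proxy
```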
We are acquiring more and more data through ubiquitous sensors, and much of it is even available on the web. The field of data science sheds light on the complexity of the real world with such data. Indeed, astonishingly powerful applications emerge one after another with the aid of huge datasets and computing power [1; 2; 3; 4]. Machine learning algorithms are necessary not only in such practical applications but also in scientific studies of the real world's complexity [29; 30; 31]. However, these algorithms still require deeper understanding. New technologies often focus on an innovative mathematical formulation and its implementation, but a dynamical understanding is just as necessary [16; 32]. With such an understanding we can design algorithms that are efficient and even safe. We believe the science of complexity can be an effective approach for the field.
## Acknowledgements
This work was motivated by discussions at Sense-Time Japan and HONDA, and with an internship student, WL.
|
2305.19717 | Is Rewiring Actually Helpful in Graph Neural Networks? | Graph neural networks compute node representations by performing multiple
message-passing steps that consist of local aggregations of node features.
Having deep models that can leverage longer-range interactions between nodes is
hindered by the issues of over-smoothing and over-squashing. In particular, the
latter is attributed to the graph topology which guides the message-passing,
causing a node representation to become insensitive to information contained at
distant nodes. Many graph rewiring methods have been proposed to remedy or
mitigate this problem. However, properly evaluating the benefits of these
methods is made difficult by the coupling of over-squashing with other issues
strictly related to model training, such as vanishing gradients. Therefore, we
propose an evaluation setting based on message-passing models that do not
require training to compute node and graph representations. We perform a
systematic experimental comparison on real-world node and graph classification
tasks, showing that rewiring the underlying graph rarely confers a
practical benefit for message-passing. | Domenico Tortorella, Alessio Micheli | 2023-05-31T10:12:23Z | http://arxiv.org/abs/2305.19717v1 | # Is Rewiring Actually Helpful in Graph Neural Networks?
###### Abstract
Graph neural networks compute node representations by performing multiple message-passing steps that consist of local aggregations of node features. Having deep models that can leverage longer-range interactions between nodes is hindered by the issues of over-smoothing and over-squashing. In particular, the latter is attributed to the graph topology which guides message-passing, causing a node representation to become insensitive to information contained at distant nodes. Many graph rewiring methods have been proposed to remedy or mitigate this problem. However, properly evaluating the benefits of these methods is made difficult by the coupling of over-squashing with other issues strictly related to model training, such as vanishing gradients. Therefore, we propose an evaluation setting based on message-passing models that do not require training to compute node and graph representations. We perform a systematic experimental comparison on real-world node and graph classification tasks, showing that rewiring the underlying graph rarely confers a practical benefit for message-passing.
## 1 Introduction
Neural models for graphs [6; 59], commonly called _graph neural networks_ (GNNs), have been successfully applied to many real-world tasks, such as identifying categories of users in social networks or classifying molecules. GNNs typically operate in the _message-passing_ paradigm, that is, by exchanging information between nearby nodes according to the graph structure. Messages are computed from the neighbor node features, then aggregated by a permutation-invariant function to provide node representations. With multiple message-passing steps, GNNs are able to learn a hierarchy of representations that capture interactions between increasingly distant nodes. This is accomplished either via multiple iterations of the same parameterized message-passing function [44; 15], or by a deep network of message-passing layers with different learnable parameters [34; 13; 5; 25]. The need for sufficiently deep graph networks arises in tasks that require the discovery of long-range dependencies between nodes; otherwise the model incurs _under-reaching_ [3]. As deep learning on graphs has progressed, several challenges preventing the computation of effective node representations have emerged. Among those, _over-squashing_ is inherently connected to the inductive bias at the base of GNNs: the problem of encoding an exponentially growing receptive field [34] into a fixed-size node embedding dimension [3]. As simply increasing the width of node representations does not remove the underlying issues caused by the graph topology [12], this has motivated a growing number of methods that alter (i.e. _rewire_) the original graph as a pre-processing step to improve message-passing. In this paper, we attempt to meet the need for an _empirical approach_ to assess the benefits of graph rewiring methods. Indeed, altering the input data without taking into account the specific learning task can possibly lead to the loss of critical information. Since the quality of node representations computed on rewired graphs is evaluated according to the accuracy in downstream learning tasks, the
use of end-to-end trained models does not allow one to decouple the effects caused by graph topology on message-passing from the problems inherently connected to training deep neural networks. Indeed, while it has been proven that gradient vanishing prevails over over-squashing when the number of message-passing steps is much larger than the range of node interactions needed to solve the task, it is still unclear how the two issues interact with each other or what happens in intermediate regimes [12]. Furthermore, GNN models that completely or partially avoid learning representations via training have exhibited performances close to or above common end-to-end trained ones [17; 18; 35; 24], in particular when compared to previous results for rewiring methods applied to trained GNNs [54]. Therefore, as opposed to previous literature, we propose to use message-passing models that compute node representations _without training_, either by being parameter-free [58] or by following the reservoir computing paradigm [15], where parameters are just randomly initialized under certain constraints. Crucially, the issues that graph rewiring methods aim to address are connected with the inductive bias of GNNs [8], that is, with message-passing _per se_, whether it is done in the forward or backward pass. This will allow us to assess the actual benefits of graph rewiring on several node and graph classification tasks.
The rest of this paper is structured as follows. In Sec. 2 we present a brief survey of the rewiring methods that will be evaluated in our experiments. In Sec. 3 we introduce SGC and GESN, the two training-free message-passing models adopted in our experimental framework. The datasets and results of our experiments will be discussed in Sec. 4, drawing final conclusions in Sec. 5.
## 2 Graph rewiring methods
Let \(\mathcal{G}(\mathcal{V},\mathcal{E})\) be a graph with nodes \(v\in\mathcal{V}\) and edges \((u,v)\in\mathcal{E}\), each node having associated input features \(\mathbf{x}_{v}\in\mathbb{R}^{X}\). We denote by \(\mathcal{N}_{v}\) the set of neighbors of node \(v\) with cardinality (i.e. degree) \(d_{v}\), and respectively by \(\mathbf{A}\), \(\mathbf{D}\), \(\mathbf{L}\) the graph adjacency, degree and Laplacian matrices. We also define the symmetric normalized adjacency \(\mathbf{A}_{\text{sym}}=\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{- \frac{1}{2}}\), the random-walk normalized adjacency \(\mathbf{A}_{\text{rw}}=\mathbf{A}\mathbf{D}^{-1}\), and the mean-aggregation normalized adjacency \(\mathbf{A}_{\text{mean}}=\mathbf{D}^{-1}\mathbf{A}\), along with the respective normalized Laplacians \(\mathbf{L}_{\text{sym}}\), \(\mathbf{L}_{\text{rw}}\), \(\mathbf{L}_{\text{mean}}\), and the self-loop augmented \(\hat{\mathbf{A}}=\mathbf{A}+\mathbf{I}\). Finally, we denote by \(\mathbf{A}^{+}\) the pseudo-inverse of matrix \(\mathbf{A}\). Throughout the paper we assume the graphs to be undirected.
A graph neural network (GNN) computes node representations \(\mathbf{h}_{v}\in\mathbb{R}^{H}\) via a deep neural network of \(L\) message-passing layers. Each layer \(k=1,...,L\) computes a new node representation \(\mathbf{h}_{v}^{(k)}\) by performing a permutation-invariant aggregation of messages computed from the previous layer representations of neighbor nodes \(\mathbf{h}_{u}^{(k-1)}\). Without much loss of generality, we assume the message-passing layers to have the form
\[\mathbf{h}_{v}^{(k)}=\phi_{k}\left(\sum_{u\in\mathcal{V}}M_{vu}\,\psi_{k} \left(\mathbf{h}_{u}^{(k-1)}\right)\right),\quad\mathbf{h}_{v}^{(0)}=\mathbf{ x}_{v}, \tag{1}\]
where the local neighbors of \(v\) are implicitly defined as the nodes \(u\) such that \(M_{vu}\neq 0\). By \(\mathbf{M}\) we denote the message-passing matrix, a graph shift operator that can be e.g. the adjacency \(\mathbf{A}\), the Laplacian \(\mathbf{L}\), or one of their normalizations; in this case, the aggregations are performed on graph neighborhoods \(\mathcal{N}_{v}\). Message-passing layers can thus efficiently represent the relationships induced by graph connectivity by leveraging the sparsity of the graph structure. To capture long-range dependencies between nodes, GNNs must perform at least as many message-passing steps (i.e., have as many layers) as the distance between node pairs, so as not to incur under-reaching [3]. However, building deep GNNs presents an inherent challenge: as depth increases, the receptive field of nodes [34] grows exponentially, requiring more information to be encoded in the same fixed-size vectors. This problem is called over-squashing [3]. Topping et al. [52] have investigated this phenomenon via the analysis of node representation sensitivity to input features. Assuming there exists an \(L\)-path from node \(u\) to node \(v\), the sensitivity of \(\mathbf{h}_{v}^{(L)}\) to the input features \(\mathbf{x}_{u}\) is upper bounded by
\[\left\|\frac{\partial\mathbf{h}_{v}^{(L)}}{\partial\mathbf{x}_{u}}\right\|\ \leq\ \underbrace{\prod_{k=1}^{L}\|\phi_{k}\|\,\|\psi_{k}\|}_{\text{Lipschitz constants}}\ \underbrace{\left(\mathbf{M}^{L}\right)_{vu}}_{\text{graph topology}}. \tag{2}\]
Over-squashing arises when the derivative in (2) becomes too small, indicating that the representation of node \(v\) is mostly insensitive to the information initially present at node \(u\). While increasing the layer Lipschitz constants or the dimension \(H\) can mitigate the issue [54, 12], this may come at the expense of model generalization [32]. Therefore, different methods have been proposed to alter the graph topology in a way more favorable to message-passing. In this paper we focus on graph rewiring methods that change the initial graph, or equivalently the message-passing matrix \(\mathbf{M}\), as a pre-processing step, as opposed to e.g. the implicit rewiring performed by attention mechanisms [55].
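As a concrete illustration of the topology factor in (2), the following sketch computes \((\mathbf{M}^{L})_{vu}\) for a GCN-style normalized operator; the toy path graph and all names are our own assumptions.

```python
import numpy as np

# Path graph on 8 nodes: a maximally tree-like, bottlenecked topology.
n, L = 8, 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1

# Self-loop-augmented symmetric normalization, as in GCN-style layers.
A_hat = A + np.eye(n)
d = A_hat.sum(axis=1)
M = A_hat / np.sqrt(np.outer(d, d))

# Topology factor of the sensitivity bound (2): (M^L)_{vu}.
ML = np.linalg.matrix_power(M, L)
print(ML[0, 5], ML[0, 1])  # distant pairs get a much smaller factor than near ones
```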
**Diffusion processes.** Graph diffusion was originally proposed as a way of aggregating nodes beyond the immediate \(1\)-hop neighborhood [26], thus allowing a single message-passing layer to directly consider information from more distant nodes. The generalized graph diffusion matrix is computed by the power series \(\sum_{m=0}^{\infty}\theta_{m}\mathbf{A}^{m}\), where the choice of coefficients \(\theta_{m}\) defines the particular diffusion process and \(\mathbf{A}\) can be replaced by any other transition matrix. Two examples of graph diffusion are the heat kernel [27] with \(\theta_{m}^{\text{Heat}}=e^{-t}\frac{t^{m}}{m!},t>0\), and personalized PageRank [40] with \(\theta_{m}^{\text{PageRank}}=\alpha(1-\alpha)^{m},0<\alpha<1\), which correspond respectively to the message-passing matrices
\[\mathbf{M}_{\text{Heat}}=e^{-t\mathbf{L}}\quad\text{and}\quad\mathbf{M}_{\text{PageRank}}=\alpha\left(\mathbf{I}-(1-\alpha)\mathbf{A}\right)^{+}. \tag{3}\]
Diffusion-based rewiring was proposed exclusively for node-level tasks.
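For illustration, the two closed forms in (3) can be computed as follows; this is a sketch with our own function names, assuming \(\mathbf{A}\) is a normalized adjacency so that the corresponding Laplacian is \(\mathbf{L}=\mathbf{I}-\mathbf{A}\).

```python
import numpy as np
from scipy.linalg import expm

def heat_diffusion(A_norm, t=1.0):
    # M_Heat = exp(-t L) = sum_m e^{-t} t^m / m! * A_norm^m, with L = I - A_norm.
    L = np.eye(len(A_norm)) - A_norm
    return expm(-t * L)

def ppr_diffusion(A_norm, alpha=0.15):
    # M_PageRank = alpha (I - (1 - alpha) A_norm)^+ = sum_m alpha (1-alpha)^m A_norm^m.
    n = len(A_norm)
    return alpha * np.linalg.pinv(np.eye(n) - (1 - alpha) * A_norm)
```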
**Local bottlenecks.** In their analysis of over-squashing, Topping et al. [52] have linked its causes to _bottlenecks_ in the graph topology, which occur where the graph structure locally resembles a tree. Intuitively, the receptive field of a tree grows exponentially in the branching factor, while at the opposite extreme a complete graph has a constant receptive field. To quantify this local behavior, they proposed the balanced Forman curvature, defined as
\[\text{Ric}_{uv}=\underbrace{\frac{2}{d_{u}}+\frac{2}{d_{v}}-2}_{\text{tree-likeness}}+\underbrace{2\,\frac{\sharp^{\triangle}_{uv}}{\max\{d_{u},d_{v}\}}+\frac{\sharp^{\triangle}_{uv}}{\min\{d_{u},d_{v}\}}}_{\text{local similarity to a complete graph}}+\underbrace{\frac{\sharp^{\square}_{u}+\sharp^{\square}_{v}}{\gamma^{\max}_{uv}\max\{d_{u},d_{v}\}}}_{\text{grid-likeness}}, \tag{4}\]
where \(\sharp^{\triangle}_{uv}\) is the number of triangles on the edge \((u,v)\), \(\sharp^{\square}_{u}\) is the number of neighbors of \(u\) forming a \(4\)-cycle based on the edge \((u,v)\) without diagonals inside, and \(\gamma^{\max}_{uv}\) is a normalization factor. For a graph having only positively curved edges (i.e. \(\text{Ric}_{uv}>0\) for all \((u,v)\)) it has been proved that the receptive field grows at most polynomially [52]. Therefore, rewiring algorithms that aim at increasing the graph curvature have been proposed. SDRF [52] iteratively samples an edge \((u,v)\) proportionally to how negatively curved it is, then adds the new edge \((u^{\prime},v^{\prime})\) able to provide the largest increase of \(\text{Ric}_{uv}\). (The algorithm optionally removes the most positively curved edges to avoid growing the graph excessively.)
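A minimal sketch of the first two groups of terms in (4) is given below; the 4-cycle ("grid-likeness") term is deliberately omitted for brevity, so this computes a lower bound on \(\text{Ric}_{uv}\) rather than the full quantity used by SDRF, and the toy graph is our own example.

```python
import numpy as np

def balanced_forman_lower(A, u, v):
    """Tree-likeness and triangle terms of Eq. (4) for edge (u, v); the
    4-cycle term is omitted, so this is a lower bound on Ric_uv."""
    d = A.sum(axis=1)
    du, dv = d[u], d[v]
    tri = float((A[u] * A[v]).sum())        # triangles on the edge (u, v)
    return 2 / du + 2 / dv - 2 + 2 * tri / max(du, dv) + tri / min(du, dv)

# A triangle graph: every edge is positively curved.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
print(balanced_forman_lower(A, 0, 1))   # 2/2 + 2/2 - 2 + 2*1/2 + 1/2 = 1.5
```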
**Global bottlenecks.** The edge curvature defined in equation (4) is not the only way to measure the presence of bottlenecks in the graph topology. A more _global_ metric is the Cheeger constant \(\mathfrak{h}_{\mathcal{G}}\), which quantifies the minimum fraction of edges that need to be removed in order to make the graph disconnected. A small \(\mathfrak{h}_{\mathcal{G}}\) thus indicates that few edges act as a bridge between two otherwise disconnected communities. However, computing the Cheeger constant is an NP-hard problem, so the lower bound given by the spectral gap \(\lambda_{1}\) (i.e. the smallest positive Laplacian eigenvalue) is used as a proxy measure in practice: \(\mathfrak{h}_{\mathcal{G}}\geq\frac{1}{2}\lambda_{1}\)[36]. GRLEF [7] proposes to improve the graph spectral gap by working exclusively _locally_ via the triangle counts \(\sharp^{\triangle}_{uv}\), which are cheaper to compute as they require only neighborhood information. The algorithm iteratively samples an edge \((u,v)\) proportionally to the inverse of its triangle count, that is, from an area of the graph that is locally far from being fully connected. It then chooses the pair of edges \((u,u^{\prime}),(v,v^{\prime})\) to flip into \((u,v^{\prime}),(v,u^{\prime})\) that provides the smallest net change in triangle count. This behavior can be interpreted as mitigating a very low local curvature (as suggested by the small term \(\sharp^{\triangle}_{uv}\) in \(\text{Ric}_{uv}\)) at the expense of a curvature reduction of more positively curved neighboring edges. Banerjee et al. [7] supported their rewiring algorithm empirically by finding a correspondence between triangle count decrease and spectral gap increase.
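The spectral gap itself is cheap to compute for moderately sized graphs; the sketch below (our own code, with an assumed toy barbell-like graph) shows the quantity all these methods aim to increase.

```python
import numpy as np

def spectral_gap(A):
    # lambda_1: smallest positive eigenvalue of the symmetric normalized
    # Laplacian, a tractable lower-bound proxy for the Cheeger constant.
    d = A.sum(axis=1)
    L_sym = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))
    return np.sort(np.linalg.eigvalsh(L_sym))[1]  # assumes a connected graph

# Two triangles joined by a single bridge edge: a global bottleneck.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
print(spectral_gap(A))   # a small value reveals the bridge
```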
**Expander propagation.** There is a class of graphs that avoid global bottlenecks by construction: expander graphs are simultaneously sparse and highly connected [23]. Additionally, expander families of graphs are characterized by a uniform lower bound on the Cheeger constant [2], and for uniform maximal node degree their diameter is also logarithmic in the number of nodes [37; 1]. Deac et al. [11] have thus proposed to interleave message propagation on the original graph with message-passing on an expander graph, providing for information propagation across bottlenecks. The expander graphs adopted for EGP [11] are derived from the Cayley graphs of the finite groups \(\mathrm{SL}(2,\mathbb{Z}_{n})\), which are \(4\)-regular and thus guarantee sparsity. Interestingly, these graphs have all negatively curved edges with \(\mathsf{Ric}_{uv}=-\frac{1}{2}\). In our experiments we will thus use the message-passing matrix \(\mathbf{M}_{\mathrm{EGP}}=\mathbf{A}_{\mathrm{Cay}}\,\mathbf{A}\), where \(\mathbf{A}_{\mathrm{Cay}}\) is the adjacency matrix of said Cayley graphs.
**Effective resistance.** Effective resistance [14] provides an additional way to measure bottlenecks in graph topology. The resistance \(\mathsf{Res}_{uv}\) between two nodes is proportional to the commute time \(\mathsf{Com}_{uv}\), which is the expected number of steps for a random walk to go back and forth between nodes \(u\) and \(v\).1 A high resistance between two nodes indicates the difficulty for messages to pass from node \(u\) to node \(v\). Black et al. [9] proved a sensitivity bound similar to (2) relating a high effective resistance \(\mathsf{Res}_{vu}\) between pairs of nodes to a reduced sensitivity of the representations \(\mathbf{h}_{v}^{(L)}\) to the input features \(\mathbf{x}_{u}\). Furthermore, effective resistance is inversely related to the square of the Cheeger constant by the inequality \(\max_{(u,v)\in\mathcal{E}}\mathsf{Res}_{uv}\leq\frac{1}{\mathfrak{h}_{\mathcal{G}}^{2}}\)[4]. Arnaiz-Rodriguez et al. [4] have proposed a layer for learning effective resistance to re-weight the original graph adjacency (hence 'DiffWire'), with the aim of sampling a spectrally similar but sparser graph which preserves the graph structural information [47]. An additional intuitive effect is to enlarge the relative capacity of high-resistance edges, which correspond to bridges between more densely connected communities. In our experiments we implement the DiffWire approach by computing the effective resistance in exact form as \(\mathsf{Res}_{uv}=(\mathbf{1}_{u}-\mathbf{1}_{v})^{\top}\mathbf{L}^{+}(\mathbf{1}_{u}-\mathbf{1}_{v})\), with \(\mathbf{1}_{u}\) the indicator vector of node \(u\). The resulting message-passing matrix is therefore \(\mathbf{M}_{\mathrm{DiffWire}}=\mathsf{Res}\odot\mathbf{A}\), where '\(\odot\)' denotes the elementwise product.
Footnote 1: Precisely, \(\mathsf{Res}_{uv}=\frac{1}{\sum_{v\in\mathcal{V}}d_{v}}\mathsf{Com}_{uv}\).
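The exact-form re-weighting just described admits a compact implementation; the following is a sketch (function name ours) of this exact variant, not of the learned DiffWire layer of [4].

```python
import numpy as np

def diffwire_reweighting(A):
    """Resistance-reweighted message-passing matrix M = Res ⊙ A, computing
    Res_uv = (1_u - 1_v)^T L^+ (1_u - 1_v) via the Laplacian pseudo-inverse."""
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    diag = np.diag(Lp)
    # Expanding the quadratic form: Res_uv = L+_uu + L+_vv - 2 L+_uv.
    Res = diag[:, None] + diag[None, :] - 2 * Lp
    return Res * A   # keep weights only on existing edges
```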
## 3 Training-free graph neural networks
Since graph rewiring methods work as a pre-processing step on the input graph, the choice of GNN model is crucial to assess their benefits in downstream task accuracy. So far, only end-to-end trained models have been used, such as GCN [25] in [52]. This approach does not allow one to consider the effects of over-squashing independently from the other issues that can affect training in message-passing models, such as gradient vanishing. By learning node and graph representations jointly with the task prediction readout, the experimental results become inextricably linked to how training is conducted. Therefore, in our experimental setting for assessing the actual contributions of graph rewiring, we apply GNNs that compute node and graph representations _without training_. Indeed, rewiring methods aim to address issues connected with the model bias itself, that is, local aggregation of messages computed from node structural neighbors, independently of whether message-passing is done in the forward or backward pass. Isolating the inductive bias of a model from
Figure 1: The two different model architectures of SGC [58] and GESN [15].
training is not completely unprecedented, as it was previously employed for the analysis of recurrent neural networks [51, 50, 16]. For our experiments we adopt two training-free models with different architectural biases, SGC [58] and GESN [15]. In particular, the latter has achieved performances in line with or better than widely adopted end-to-end trained GNNs in node classification tasks [35], also significantly improving upon previous results that include rewiring as graph pre-processing [54]. This may suggest that the training process itself can pose serious challenges.
As an example of how end-to-end training can egregiously fail, in Tab. 1 we report the accuracy of some common GNN models (GCN [25], GraphSAGE [22], GAT [55]) on two node classification tasks. Platonov et al. [42] have observed that both Squirrel and Chameleon present a large number of duplicated nodes, i.e. nodes sharing the same local structure and features, resulting in a training-test leakage. Nevertheless, the accuracy of end-to-end trained models is significantly worse than that of the two training-free GNNs, and is actually much closer to a graph-agnostic baseline (MLP). This is an additional motivation for our choice of excluding end-to-end trained GNNs from our rewiring evaluation framework and relying on the models presented below.
**SGC.** A straightforward way to compute node representations without training is to replace the functions \(\phi_{k},\psi_{k}\) in (1) with the identity, thus removing all parameters from the layers. This approach was previously proposed by [58] as a simplification of graph convolution obtained by removing non-linearities, hence the name SGC. The model is therefore reduced to pure message-passing (Fig. 1a), with node representations computed after \(L\) message-passing steps as
\[\mathbf{h}^{(L)}=\mathbf{M}^{L}\,\mathbf{x}. \tag{5}\]
Notice that this model was proposed exclusively for node-level tasks [58].
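For concreteness, a minimal sketch of this computation follows; the choice of the self-loop-augmented symmetric normalization for \(\mathbf{M}\) is one of the options selected on the validation folds (Sec. 4), and the function name is ours.

```python
import numpy as np

def sgc_embeddings(A, X, L=2):
    """Training-free SGC node representations h^(L) = M^L x, with M the
    self-loop-augmented symmetrically normalized adjacency."""
    A_hat = A + np.eye(len(A))
    d = A_hat.sum(axis=1)
    M = A_hat / np.sqrt(np.outer(d, d))
    H = X.copy()
    for _ in range(L):     # L parameter-free message-passing steps
        H = M @ H
    return H
```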
**GESN.** A different approach to training-free models is to follow the reservoir computing (RC) paradigm [39, 31, 56], where input representations (or embeddings) are computed by a dynamical system with randomly initialized parameters. Combining the recursive embedding approach of [44] with RC, Graph Echo State Networks [15] compute node representations by iterating up to \(L\) times the same message-passing function
\[\mathbf{h}^{(k)}_{v}=\tanh\left(\mathbf{W}_{\text{in}}\,\mathbf{x}_{v}+\sum_{ u\in\mathcal{V}}M_{uv}\hat{\mathbf{W}}\,\mathbf{h}^{(k-1)}_{u}+\mathbf{b} \right),\quad\mathbf{h}^{(0)}_{v}=\mathbf{0}, \tag{6}\]
where \(\mathbf{W}_{\text{in}}\in\mathbb{R}^{H\times X}\), \(\mathbf{b}\in\mathbb{R}^{H}\) and \(\hat{\mathbf{W}}\in\mathbb{R}^{H\times H}\) are respectively the input-to-reservoir, bias, and recurrent weights for a reservoir with \(H\) units. This can be interpreted as a form of parameter sharing between message-passing layers (Fig. 1b). Notice also that equation (6) slightly departs from (1) due to the presence of input skip connections. All reservoir weights are randomly initialized, with \(\mathbf{W}_{\text{in}}\) rescaled to accommodate the input features range, and \(\hat{\mathbf{W}}\) rescaled to control the Lipschitz constant of (6). For \(\|\hat{\mathbf{W}}\|\ \|\mathbf{M}\|<1\) the message-passing function is contractive [53], that is, the iterations of (6) converge to a fixed point \(\mathbf{h}^{(\infty)}\) as \(L\to\infty\). While this regime has been shown to be optimal for graph-level tasks, node-level tasks instead benefit from a non-contractive initialization \(\|\hat{\mathbf{W}}\|\ \|\mathbf{M}\|>1\), as the upper bound on input sensitivity (2) intuitively suggests. In the non-contractive regime, a choice of \(L\) larger than the graph diameter is sufficient to ensure effective node representations [35].
To produce graph representations for graph-level tasks, we apply a parameter-free global pooling operation, such as sum or mean pooling, to the final node representations:
\[\mathbf{h}^{\text{\tiny{SUM}}}_{\mathcal{G}}=\sum_{v\in\mathcal{V}}\mathbf{ h}^{(L)}_{v},\quad\mathbf{h}^{\text{\tiny{MEAN}}}_{\mathcal{G}}=\tfrac{1}{| \mathcal{V}|}\sum_{v\in\mathcal{V}}\mathbf{h}^{(L)}_{v}. \tag{7}\]
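A minimal sketch of (6)-(7) follows; here `radius` plays the role of \(\rho(\hat{\mathbf{W}})\,\rho(\mathbf{M})\), selected in \([0.1, 30]\) as described in Sec. 4, and the dense numpy operations are an illustrative assumption (real graphs would use sparse operators).

```python
import numpy as np

def gesn_embeddings(M, X, H=256, radius=5.0, in_scale=1.0, L=30, seed=0):
    """Reservoir iteration of Eq. (6) with random, untrained weights."""
    rng = np.random.default_rng(seed)
    n, F = X.shape
    W_in = in_scale * rng.uniform(-1, 1, (H, F))
    b = rng.uniform(-1, 1, H)
    W = rng.uniform(-1, 1, (H, H))
    rho_M = np.abs(np.linalg.eigvals(M)).max()
    W *= (radius / rho_M) / np.abs(np.linalg.eigvals(W)).max()
    Z = np.zeros((n, H))
    for _ in range(L):
        Z = np.tanh(X @ W_in.T + M @ Z @ W.T + b)
    return Z

def graph_pooling(Z, mode="sum"):
    # Parameter-free pooling of Eq. (7) for graph-level tasks.
    return Z.sum(axis=0) if mode == "sum" else Z.mean(axis=0)
```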
**Readout.** To solve a downstream node (or graph) classification task, we still need to train a predictor. For this purpose, we use a linear readout layer \(\mathbf{y}_{v}=\mathbf{W}_{\text{out}}\,\mathbf{h}^{(L)}_{v}+\mathbf{b}_{\text{out}}\), where the
\begin{table}
\begin{tabular}{l c c} \hline \hline & **Squirrel** & **Chameleon** \\ \hline MLP [62] & \(29.68\pm 1.81\) & \(46.36\pm 2.52\) \\ \hline GCN [62] & \(36.89\pm 1.34\) & \(59.82\pm 2.58\) \\ SAGE [62] & \(41.61\pm 0.74\) & \(58.73\pm 1.68\) \\ GAT [62] & \(30.62\pm 2.11\) & \(54.69\pm 1.95\) \\ \hline GCN [42] & \(39.06\pm 1.52\) & \(50.18\pm 3.29\) \\ SAGE [42] & \(35.83\pm 1.32\) & \(50.18\pm 1.78\) \\ GAT [42] & \(32.21\pm 1.63\) & \(45.02\pm 1.75\) \\ \hline SGC & \(72.88\pm 1.20\) & \(76.16\pm 1.87\) \\ GESN [35] & \(73.56\pm 1.62\) & \(77.05\pm 1.24\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of underfitting in common end-to-end trained models on node classification (experimental setting of [41]).
weights \(\mathbf{W}_{\mathrm{out}}\in\mathbb{R}^{C\times H},\mathbf{b}_{\mathrm{out}}\in\mathbb{R}^{C}\) are trained by ridge regression on one-hot encodings of the target classes \(y_{v}\in 1,...,C\). This can be done efficiently in closed form even on large data [61], thus removing from the readout as well any issue connected to back-propagation learning.
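The closed-form fit amounts to solving a regularized normal equation; the sketch below (names ours) shows one way to do it, with the regularization strength selected in \([10^{-5},10^{3}]\) as in Sec. 4.

```python
import numpy as np

def ridge_readout(Z, y, num_classes, lam=1e-3):
    """Closed-form ridge regression of one-hot targets on frozen embeddings Z."""
    Y = np.eye(num_classes)[y]                    # one-hot targets, y in {0,...,C-1}
    Zb = np.hstack([Z, np.ones((len(Z), 1))])     # bias column
    W = np.linalg.solve(Zb.T @ Zb + lam * np.eye(Zb.shape[1]), Zb.T @ Y)
    return W                                      # shape (H + 1, C)

def predict(Z, W):
    Zb = np.hstack([Z, np.ones((len(Z), 1))])
    return (Zb @ W).argmax(axis=1)
```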
## 4 Experiments and discussion
We evaluate the graph rewiring methods of Sec. 2 jointly with the training-free GNNs presented in the previous section on several real-world classification tasks, many of which were also adopted in previous rewiring literature [52; 4]. The aim of our experimental approach is to provide a tool for examining the effects of rewiring from a different perspective than previously pursued in the literature, thanks to decoupling the inductive bias of GNNs from the training process.
**Datasets.** For node classification tasks, we adopt six graphs of up to \(20{,}000\) nodes. Cora [33], CiteSeer [20], and PubMed [45] are paper citation networks, where input node features are bag-of-words representations of paper content, and the target is the research topic. Film [49] is a network induced by co-occurrences of actors on the same Wikipedia page, grouped into five categories [41]. TwitchDE [43; 30] is a social network of German gamer accounts from Twitch, classified into suitable-for-work or adult profiles. Tolokers [29; 42] is a collaboration network of users extracted from the crowdsourcing platform Toloka, where the task is to determine whether a user is active or not (since the two classes are unbalanced, the evaluation metric in this case is area under the ROC curve instead of accuracy). The first three are homophilous node classification tasks, while the other three present low homophily. For graph classification we adopt six tasks from the TUDataset collection [38]. NCI-1 and NCI-109 [57; 46] are molecules to be classified as carcinogenic or not, where node input features are one-hot encodings of atom type, and edges correspond to chemical bonds. Reddit-B, Reddit-5K, and Reddit-12K [60] are interaction networks between users in Reddit discussion threads, where the classification task is to identify the type of sub-reddit the discussions belong to. Collab [28; 60] is a collection of ego-networks belonging to three different scientific collaboration fields. Both the Reddit tasks and Collab have no node input features. In all tasks we have consciously avoided adding structural input features to the graph nodes, such as node degrees or positional encodings [48]. Relevant dataset statistics are reported in Tab. 2.
**Experimental setting.** For all classification tasks we generated class-stratified \(5\)-fold selection/test splits with an inner validation holdout, resulting in 60:20:20 training/validation/test set ratios. Both GNN and rewiring algorithm parameters are jointly selected on each validation fold. For SGC, we select the number of message-passing iterations \(L\in[1,15]\) and the type of
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline \multicolumn{7}{c}{Node Classification} \\ \hline & **Cora** & **CiteSeer** & **PubMed** & **Film** & **TwitchDE** & **Tolokers** \\ \hline nodes & \(2{,}708\) & \(3{,}327\) & \(19{,}717\) & \(7{,}600\) & \(9{,}498\) & \(11{,}758\) \\ edges & \(10{,}556\) & \(9{,}104\) & \(88{,}648\) & \(53{,}504\) & \(153{,}138\) & \(519{,}000\) \\ average degree & \(3.90\) & \(2.74\) & \(4.50\) & \(7.03\) & \(16.14\) & \(88.28\) \\ diameter & \(19\) & \(28\) & \(18\) & \(12\) & \(7\) & \(11\) \\ node features & \(1{,}433\) & \(3{,}703\) & \(500\) & \(932\) & \(2{,}514\) & \(10\) \\ classes & \(7\) & \(6\) & \(3\) & \(5\) & \(2\) & \(2\) \\ edge homophily & \(0.81\) & \(0.74\) & \(0.80\) & \(0.22\) & \(0.63\) & \(0.59\) \\ \hline \hline \multicolumn{7}{c}{Graph Classification} \\ \hline & **NCI-1** & **NCI-109** & **Reddit-B** & **Reddit-5K** & **Reddit-12K** & **Collab** \\ \hline graphs & \(4{,}110\) & \(4{,}127\) & \(2{,}000\) & \(4{,}999\) & \(11{,}929\) & \(5{,}000\) \\ average nodes & \(30\) & \(30\) & \(430\) & \(509\) & \(391\) & \(75\) \\ average edges & \(32\) & \(32\) & \(498\) & \(595\) & \(457\) & \(2{,}458\) \\ average degree & \(2.16\) & \(2.16\) & \(2.34\) & \(2.25\) & \(2.28\) & \(37.37\) \\ average diameter & \(13.33\) & \(13.13\) & \(9.72\) & \(11.96\) & \(10.91\) & \(1.86\) \\ node features & \(37\) & \(38\) & \(0\) & \(0\) & \(0\) & \(0\) \\ classes & \(2\) & \(2\) & \(2\) & \(5\) & \(11\) & \(3\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Dataset statistics.
message-passing matrix (adjacency, Laplacian, or one of their normalizations, with or without the addition of self-loops). For GESN, we select the reservoir size (i.e. node representation dimension) \(H\in[2^{4},2^{12}]\), the input scaling factor in \([0,1]\), and the Lipschitz constant. For the latter, we actually follow the reservoir computing practice of selecting the spectral radius \(\rho(\hat{\mathbf{W}})\) instead of the spectral norm \(\|\hat{\mathbf{W}}\|\), as the radius is a lower bound on the norm [21] and it is cheaper to compute [19]. We select \(\rho(\hat{\mathbf{W}})\in[0.1/\rho(\mathbf{M}),30/\rho(\mathbf{M})]\), while the number of message-passing iterations is fixed at \(L=30\), which is comfortably larger than graph diameters in our datasets [35]. For graph-level tasks we also select the pooling function from the two defined in (7). As for graph rewiring algorithms, we select \(t\in[0.1,5]\) for heat diffusion, and \(\alpha\in[0.01,0.99]\) for PageRank diffusion. We run SDRF and GRLEF for a number of iterations corresponding to up to \(20\%\) of the graph edges, without performing edge removal in the former. Finally, the regularization for the closed-form ridge regression to train the readout classifier is selected in \([10^{-5},10^{3}]\).
**Results.** We report the results of our experiments in Tab. 3-5. The baseline accuracy corresponds to the model applied to the original graph without any rewiring. We highlight accuracy significantly better or worse than the baseline (\(p<0.05\)), denoting no improvement otherwise. The experiments were executed on an NVIDIA A100 with 40GB of GPU RAM. For reference, a single complete model selection for GESN, excluding rewiring, took up to \(3.5\) hours. 'OOR' in Tab. 3-4 indicates that SDRF exceeded the limit of 10 days of computation for Tolokers.
On node classification tasks, the only rewiring methods able to achieve some significant improvement over the baseline for both SGC and GESN are the diffusion-based heat and PageRank rewirings. This improvement is present on both high- and low-homophily graphs, respectively PubMed and Film.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & **Cora** & **CiteSeer** & **PubMed** & **Film** & **TwitchDE** & **Tolokers** \\ \hline Baseline & \(87.70\pm 1.34\) & \(75.84\pm 0.93\) & \(89.53\pm 0.49\) & \(35.23\pm 0.70\) & \(68.62\pm 1.04\) & \(84.40\pm 1.02\) \\ \cline{2-7} Heat & \(87.86\pm 1.50\) & \(75.34\pm 0.88\) & \(89.22\pm 0.33\) & \(36.87\pm 1.05\) & \(68.26\pm 0.30\) & \(84.20\pm 1.17\) \\ PageRank & \(87.50\pm 1.30\) & \(75.20\pm 1.32\) & \(89.19\pm 0.42\) & \(35.91\pm 1.06\) & \(67.88\pm 0.49\) & \(82.63\pm 1.18\) \\ SDRF & \(86.60\pm 1.56\) & \(74.84\pm 1.66\) & \(89.20\pm 0.40\) & \(34.92\pm 0.55\) & \(68.54\pm 0.80\) & OOR \\ GRLEF & \(86.06\pm 1.56\) & \(74.74\pm 1.73\) & \(89.11\pm 0.74\) & \(35.05\pm 0.87\) & \(67.66\pm 0.70\) & \(82.64\pm 1.19\) \\ EGP & \(86.95\pm 2.51\) & \(74.62\pm 1.85\) & \(89.50\pm 0.42\) & \(35.06\pm 0.78\) & \(68.68\pm 0.98\) & \(84.50\pm 1.02\) \\ DiffWire & \(86.51\pm 1.74\) & \(74.03\pm 2.20\) & \(88.81\pm 0.49\) & \(35.01\pm 0.74\) & \(68.15\pm 0.33\) & \(84.77\pm 0.95\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Node classification with GESN.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & **NCI-1** & **NCI-109** & **Reddit-B** & **Reddit-5K** & **Reddit-12K** & **Collab** \\ \hline Baseline & \(78.09\pm 1.64\) & \(77.56\pm 0.83\) & \(87.23\pm 1.38\) & \(53.86\pm 1.49\) & \(44.02\pm 0.54\) & \(72.49\pm 0.77\) \\ \cline{2-7} SDRF & \(73.39\pm 0.63\) & \(72.35\pm 1.46\) & \(87.02\pm 1.30\) & \(53.84\pm 1.55\) & \(44.07\pm 0.47\) & \(71.25\pm 1.09\) \\ GRLEF & \(73.74\pm 1.40\) & \(71.76\pm 1.31\) & \(85.89\pm 2.02\) & \(53.17\pm 1.26\) & \(42.94\pm 1.23\) & \(72.23\pm 0.86\) \\ EGP & \(78.31\pm 1.63\) & \(77.49\pm 0.65\) & \(87.28\pm 1.29\) & \(53.78\pm 1.34\) & \(44.08\pm 0.48\) & \(72.17\pm 0.87\) \\ DiffWire & \(78.14\pm 1.61\) & \(77.48\pm 0.65\) & \(84.54\pm 2.44\) & \(53.58\pm 0.72\) & \(41.37\pm 0.68\) & \(66.31\pm 0.76\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Graph classification with GESN.
This comes as a surprise, since these methods were dismissed in previous literature [52]. We may conjecture that by acting as low-pass filters on the graph spectrum [26], diffusion methods can improve the spectral gap \(\lambda_{1}\) (i.e. the smallest positive Laplacian eigenvalue) in certain graphs, possibly resulting in a rewired graph with a larger Cheeger constant, since \(\mathfrak{h}_{G}\geq\frac{\lambda_{1}}{2}\)[36]. The other rewiring methods do not provide significant improvements in accuracy on either node or graph classification tasks; in fact, they can cause a significant degradation of accuracy. To investigate the effects of rewiring algorithms that explicitly act on local bottlenecks of the graph topology, we analyze the distribution of edge curvature before and after rewiring (Fig. 2a). Notice that the overall curvature distribution is not improved; in particular, that of SDRF appears to become even more skewed towards negatively curved edges. This is confirmed by observing the differences between initial and final edge curvature in the scatter plots of Fig. 2b, where the predominant number of edges appears in red below the diagonal, denoting that edge curvature has actually become more negative instead of improving. This behavior can be explained by recalling that the algorithm acts _greedily_ on _local_ curvature information, without accounting for the effects on _global_ graph curvature when deciding where to add supporting edges [12]. As previously stated in Sec. 2, the spectral gap is a proxy measure of global graph bottlenecks. In Fig. 3 we analyze the effects of the two local rewiring algorithms SDRF and GRLEF on this global property. While a more positive curvature should also improve the spectral gap, since \(\lambda_{1}\geq\min_{(u,v)\in\mathcal{E}}\text{Ric}_{uv}\)[52], the failure of SDRF to generally increase edge curvature results in an unchanged \(\lambda_{1}\). On the other hand, GRLEF is in some cases able to provide some increase in the spectral gap. However, this does not necessarily translate into an improvement in node classification accuracy, as the results on TwitchDE and Tolokers show. EGP seems to have little effect on accuracy in general, both on graph and node classification tasks. As for DiffWire, the significant degradation of accuracy on Collab and Reddit-12K could be attributed to a magnification of spurious edges between network communities. While the scope of our empirical approach is to validate the effectiveness of graph rewiring purely on message-passing, to put our results into perspective we recall that adding training to a message-passing model on the rewired graph has shown no improvements over training-free baselines [54]. As for a comparison with end-to-end trained GNNs, we refer to the results of rewiring algorithms in their respective original papers.
**Limitations.** Since our experimental framework is based on training-free GNNs, we have necessarily left out models that learn the graph structure jointly with node representations, or that perform an implicit rewiring via attention mechanisms. We may also have left out from our evaluation rewiring
Figure 3: Effects of SDRF and GRLEF on spectral gaps \(\lambda_{1}\).
Figure 2: Effects of SDRF and GRLEF on graph curvature Ric for Cora. |
2309.06535 | Automatic quantification of abdominal subcutaneous and visceral adipose
tissue in children, through MRI study, using total intensity maps and
Convolutional Neural Networks | Childhood overweight and obesity is one of the main health problems in the
world since it is related to the early appearance of different diseases, in
addition to being a risk factor for later developing obesity in adulthood with
its health and economic consequences. Visceral abdominal tissue (VAT) is
strongly related to the development of metabolic and cardiovascular diseases
compared to abdominal subcutaneous adipose tissue (ASAT). Therefore, precise
and automatic VAT and ASAT quantification methods would allow better diagnosis,
monitoring and prevention of diseases caused by obesity at any stage of life.
Currently, magnetic resonance imaging is the standard for fat quantification,
with Dixon sequences being the most useful. Different semiautomatic and
automatic ASAT and VAT quantification methodologies have been proposed. In
particular, the semi-automated quantification methodology used commercially
through the cloud-based service AMRA(r) Researcher stands out due to its
extensive validation in different studies. In the present work, a database made
up of Dixon MRI sequences, obtained from children between 7 and 9 years of age,
was studied. Applying a preprocessing to obtain what we call total intensity
maps, a convolutional neural network (CNN) was proposed for the automatic
quantification of ASAT and VAT. The quantifications obtained from the proposed
methodology were compared with quantifications previously made through AMRA(r)
Researcher. For the comparison, correlation analysis, Bland-Altman graphs and
non-parametric statistical tests were used. The results indicated a high
correlation and similar precisions between the quantifications of this work and
those of AMRA R Researcher. The final objective is that the proposed
methodology can serve as an accessible and free tool for the diagnosis,
monitoring and prevention of diseases related to childhood obesity. | José Gerardo Suárez-García, Po-Wah So, Javier Miguel Hernández-López, Silvia S. Hidalgo-Tobón, Pilar Dies-Suárez, Benito de Celis-Alonso | 2023-09-12T19:19:47Z | http://arxiv.org/abs/2309.06535v1 | ###### Abstract
Childhood overweight and obesity is one of the main health problems in the world since it is related to the early appearance of different diseases, in addition to being a risk factor for later developing obesity in adulthood with its health and economic consequences. Visceral abdominal tissue (VAT) is strongly related to the development of metabolic and cardiovascular diseases compared to abdominal subcutaneous adipose tissue (ASAT). Therefore, precise and automatic VAT and ASAT quantification methods would allow better diagnosis, monitoring and prevention of diseases caused by obesity at any stage of life. Currently, magnetic resonance imaging (MRI) is the standard for fat quantification, with Dixon sequences being the most useful. Different semiautomatic and automatic ASAT and VAT quantification methodologies have been proposed. In particular, the semi-automated quantification methodology used commercially through the cloud-based service AMRA(r) Researcher (AMRA Medical AB, Linkoping, Sweden) stands out due to its extensive validation in different studies. In the present work, a database made up of Dixon MRI sequences, obtained from children between 7 and 9 years of age, was studied. Applying a preprocessing to obtain what we call total intensity maps, a convolutional neural network (CNN) was proposed for the automatic quantification of ASAT and VAT. The quantifications obtained from the proposed methodology were compared with quantifications previously made through AMRA(r) Researcher. For the comparison, correlation analysis, Bland-Altman graphs and non-parametric statistical tests were used. The results indicated a high correlation and similar precisions between the quantifications of this work and those of AMRA(r) Researcher. The final objective is that the proposed methodology can serve as an accessible and free tool for the diagnosis, monitoring and prevention of diseases related to childhood obesity.
**Automatic quantification of abdominal subcutaneous and visceral adipose tissue in children, through MRI study, using total intensity maps and Convolutional Neural Networks**
Jose Gerardo Suarez-Garcia\({}^{1}\)*, Po-Wah So\({}^{2}\), Javier Miguel Hernandez-Lopez\({}^{1}\), Silvia S. Hidalgo-Tobon\({}^{3,4}\), Pilar Dies-Suarez\({}^{3}\) and Benito de Celis-Alonso\({}^{1}\)
Footnote *: The author JGSG was supported by the National Council of Sciences, Technologies and Humanities (CONAH-CYT) to carry out this work, through a postdoctoral scholarship.
\({}^{1}\)Facultad de Ciencias Fisico-Matematicas, Benemerita Universidad Autonoma de Puebla, Puebla, Mexico
\({}^{2}\)Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, United Kingdom
\({}^{3}\)Departamento de Imagenologia, Hospital Infantil de Mexico Federico Gomez, Mexico City, Mexico
\({}^{4}\)Departamento de Fisica, Universidad Autonoma de Mexico Iztapalapa, Mexico City, Mexico
\({}^{a}\)bdca_buap@yahoo.com.mx
## 1 Introduction
Overweight and obesity in childhood is a global health problem. Between 2000 and 2016, the proportion of overweight children between the ages of 5 and 19 increased from 10% to almost 20%. Childhood overweight can lead to early onset of type 2 diabetes mellitus, as well as stigma and depression. In addition, childhood obesity is associated with an increased risk of obesity in adulthood, which has serious health and economic implications [1]. Mexico is one of the countries with the highest prevalence values in the world. Using the body mass index (BMI) as a reference, the prevalence of overweight in children between 5 and 9 years old (BMI \(>\) 17.4) is 18.8%, while for obesity (BMI \(>\) 19.8) it is 18.6% [2]. However, for any BMI, individuals vary substantially in the distribution of body fat. This variation has important implications for the risk of developing different diseases [3].
It is well known that higher amounts of visceral adipose tissue (VAT), compared to the amount of abdominal subcutaneous adipose tissue (ASAT), increase cardiovascular risk and the risk of developing type 2 diabetes mellitus, liver disease, cancer, and contracting infections (such as COVID-19) [4]. Quantitative, precise and reproducible measurements of total body fat and its distribution are therefore important for the prevention, diagnosis and monitoring of diseases related to overweight and obesity, both in childhood and in adulthood [5]. Dual-energy X-ray absorptiometry is a useful tool for this task. However, it makes modeling assumptions to differentiate VAT from ASAT, which introduces errors in the quantifications. It also uses ionizing radiation and can only analyze 2D projections of the body [6]. On the other hand, Magnetic Resonance Imaging (MRI) uses non-ionizing radiation and directly measures total body fat content and distribution, as well as skeletal tissue mass, accurately and reliably [7]. Therefore, MRI is currently the gold standard for measuring body composition. In particular, the so-called Dixon technique is a rapid method that provides high-contrast images of soft tissue [8]. This type of imaging exploits the slight differences between the magnetic resonance frequencies of protons bound to fat and water molecules in order to distinguish the signals coming from each. The set of images obtained from Dixon sequences includes in-phase, out-of-phase, fat-only, and water-only images from a single acquisition. However, quantifying VAT and ASAT separately, both in children and in adults, remains a task of interest, and the literature on children is still limited [8].
Regarding semiautomatic analysis protocols, they have the disadvantage of requiring the intervention of an operator or specialized personnel, resulting in high cost and introducing variability dependent on the analyst [5]. Different automatic VAT and ASAT quantification methodologies have been proposed. Among them, those that apply Convolutional Neural Networks (CNNs) stand out, both for the segmentation of the regions of interest and for the quantification of fat deposits [9, 10, 11]. CNNs are designed specifically for image analysis. Their design aims to mimic the mechanism of the visual cortex of mammals, which is assumed to be formed by groups of ordered and specialized neurons for object recognition, proceeding from the simplest features to the most complex patterns [12]. One of the advantages of CNNs is that they automatically learn the necessary image features, without requiring them to be specified by the user. CNNs have been applied to solve different problems such as the classification of brain tumors [13], detection of skin lesions [14], detection of diabetes through images of heart rhythm frequencies [15], breast cancer detection [16], and COVID-19 detection through X-ray images [17], among many others. Recently, for example, Schneider et al. [10] proposed software for automatic VAT and ASAT quantification and segmentation, studying MRI of adults and applying U-Net-based FCN architectures together with data augmentation techniques, reaching high correlation values. In another work, Devi et al. [11] developed a hybrid convolutional neural network, combining a conventional CNN and a texture layer, for VAT and ASAT segmentation of abdominal MRI images of adults, obtaining a performance that, according to the authors, exceeds state-of-the-art methods.
Regarding studies in children, Armstrong et al. [18] presented a paper in which they recognize that many conventional techniques applied in children to quantify body composition and liver fat have limitations due to sensitivity to movement, mainly in the abdominal region because of breathing. They therefore developed a technique based on free-breathing radial and Cartesian MRI sequences to quantify body composition and hepatic proton-density fat fraction (PDFF) in children from 2 to 7 months of age, evaluating the feasibility of hepatic PDFF quantification using a scoring system applied by a radiologist. In another study, Armstrong et al. [19] compared non-sedated free-breathing multi-echo 3D stack-of-radial MRI against standard breath-holding and spectroscopy techniques for fat quantification. They studied healthy and overweight children between 7 and 13 years of age with nonalcoholic fatty liver disease, evaluating the quantifications using image quality scores, linear regression and Bland-Altman analysis, and obtaining accurate and repeatable measurements. Kway et al. [20] developed and evaluated an automatic segmentation method for the identification of abdominal adipose tissue (AAT), deep subcutaneous adipose tissue (DSAT) and visceral adipose tissue (VAT) deposits in neonates (less than two weeks old) and children (ages between 4.5 and 6 years). Their method was based on a CNN with the architecture known as U-Net, which was compared with manual segmentations made by an expert through the calculation of Dice scores and Bland-Altman plots.
Among the semiautomatic quantification works, Peterli et al. [21] evaluated the distribution of visceral, subcutaneous, and liver fat in morbidly obese patients before and after bariatric surgery. In their work, they studied Dixon MRI sequences by applying automatic segmentation based on a statistical shape model (SSM), and then quantified ASAT, VAT and liver volumes through manual voxel counting. On the other hand, an outstanding semiautomatic methodology for quantifying fat and muscle compartments from Dixon sequences is the one used commercially through the cloud-based service AMRA(r) Researcher (AMRA Medical AB, Linkoping, Sweden). Its methodology has been described in detail and evaluated in terms of accuracy [22, 23, 24, 25, 26]. It basically consists of the following: calibration of the images to fat-referenced images; registration of atlases with ground-truth labels for fat and muscle compartments to an acquired MRI data set; quality control performed by trained operators, who can interactively adjust and improve the final segmentation; and, finally, quantification of the volumes of fat and muscle within the segmented regions [27]. Therefore, this methodology requires the intervention of an operator to perform quality control, and before performing the quantification it is necessary to accurately segment the regions of interest. In addition, it is a commercial method requiring a financial investment, so it is not easily accessible to everyone.
In the present work, a simple, economical and computationally light methodology for the automatic quantification of VAT and ASAT is proposed. It is based on the study of in-phase Dixon sequences of male children between 7 and 9 years old, applying pre-processing techniques to generate what we call Total Intensity Maps. These maps include sufficient information on the regions of interest so that, without the need for a precise segmentation, two-dimensional Convolutional Neural Networks (CNNs) can be applied to perform the quantifications. The reference standard consisted of quantifications previously made through AMRA(r) Researcher, which were compared with those obtained in this work using Bland-Altman plots, regression analysis and non-parametric statistical tests.
## 2 Methodology
### Subjects
In the present work, a proprietary database obtained from a collaborative project between researchers from institutions in Mexico and the United Kingdom was studied. It contained different MRI modalities of 78 Mexican male children between 7 and 9 years of age, obtained at the Hospital Infantil de Mexico in 2018. Among the children studied, 3 were underweight (BMI percentile \(<\) 5), 42 normal weight (BMI percentile 5-85), 17 overweight (BMI percentile 85-95) and 16 obese (BMI percentile \(>\) 95).
### MRI protocol
All subjects were scanned using a Siemens 3T Skyra scanner (Syngo MR E11) (Siemens, Erlangen, Germany) with the dual-echo Dixon Vibe protocol, covering neck to knees. Subjects were scanned with five overlapping slabs of axial 3D spoiled-gradient dual-echo images, in supine position with the arms along the sides and without localizer. Reconstruction of water-fat Dixon images was performed using the integrated scanner software. Common parameters for slabs one to three were: TR = 3.78 ms, TE = 1.23 ms, flip angle 10, bandwidth 123 Hz, 44 slices, voxel size 1.95\(\times\)1.95\(\times\)5 mm\({}^{3}\) and 256\(\times\)192 matrix, acquired during 17-second expiration breath-holds. Slabs four and five were acquired during free breathing with TR = 3.94 ms, TE = 2.49 ms, flip angle 10, bandwidth 123 Hz, 72 slices, voxel size 1.95\(\times\)1.95\(\times\)4 mm\({}^{3}\) and 256\(\times\)192 matrix. Viewed from the axial plane, each volume had dimensions of 192\(\times\)256\(\times\)44 voxels.
### AMRA(r) Researcher: semiautomatic quantification methodology
For the 78 study subjects, semiautomated body composition quantification was performed from the water and fat images reconstructed from the in-phase and out-of-phase Dixon sequences, which were then analyzed through the commercially available service AMRA(r) Researcher. Briefly, and as mentioned before, the analysis used in AMRA(r) Researcher consisted of the following steps [26]: (1) intensity inhomogeneity correction and calibration of fat and water images [24]; (2) registration of ground-truth labels for fat compartments to the acquired volumes using non-rigid atlas-based registration; (3) visual inspection and quality control of all datasets by a trained analysis engineer at Advanced MR Analytics (Linkoping, Sweden), detecting and correcting common artifacts such as water-fat swaps (exchange of the signal channels for fat and water due to ambiguities), anatomy outside the field of view, breathing/motion artifacts, and issues with the MR protocol; (4) quantification of fat, measured in liters (L), based on the calibrated images by integrating over the quality-controlled labels. Finally, a report was generated. The included fat compartments were visceral adipose tissue (VAT) and abdominal subcutaneous adipose tissue (ASAT). VAT was defined as adipose tissue within the abdominal cavity, excluding adipose tissue outside the abdominal skeletal muscles and adipose tissue and lipids within and posterior of the spine and posterior of the back muscles. ASAT was defined as subcutaneous adipose tissue in the abdomen from the top of the femoral head to the top of thoracic vertebra T9. In each of the reports generated by AMRA\({}^{\circledR}\) Researcher, a precision (calculated from the coefficients of repeatability, i.e. the smallest detectable difference between two measurements at a 95% confidence level) was declared, equal to 0.17 L for VAT and 0.33 L for ASAT.
### Proposed automatic quantification methodology
In order to completely automate the quantification algorithm and avoid human intervention to correct the artifact known as water-fat swap, only in-phase Dixon sequences were studied. Recalling that each subject's scan was made up of five overlapping slabs, numbered from top to bottom, only slabs 2 and 3 were analyzed, since they contained the region of interest. Hereafter, these are called \(V_{1}\) and \(V_{2}\) respectively. Due to the overlap, the two volumes had to be joined by choosing the appropriate slice of each. Although the volumes were obtained in a single acquisition and with the instruction to hold the breath, the joining process was not a trivial task. This was mainly due to artifacts caused by breathing or movement, producing differences between the intensity ranges of the two volumes, misalignment, and mismatch of the anatomical regions. In order to correct this situation, a set of processes was proposed to correctly join the pair of volumes of each subject. All the algorithms presented in this work were developed with the MATLAB R2022b software, on a conventional computing system (Intel Core i7 12700H CPU, 16GB RAM, RTX 3070Ti GPU).
#### 2.4.1 Processes to join \(V_{1}\) and \(V_{2}\)
The intensities of the voxels of \(V_{1}\) and \(V_{2}\) were normalized to the range 0 to 1 using min-max normalization. This was done under the assumption that the lowest-intensity voxels of both volumes corresponded to the same type of tissue, and likewise that the highest-intensity voxels corresponded to another common tissue. Considering the axial plane, the contrast of each volume was improved by histogram equalization. First, \(V_{2}\) was fully equalized using the histogram of the last 15 slices of \(V_{1}\) as reference. Subsequently, \(V_{1}\) was fully equalized using the histogram of the first 15 slices of the already-equalized \(V_{2}\) as reference. Intensities less than 0.05, which corresponded to the empty background of each volume, were set to 0; a sketch of these steps is given below.
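The following is a minimal sketch of this normalization and equalization step, expressed in Python with scikit-image rather than the MATLAB implementation actually used; the file names are placeholders and the slab shapes are assumptions taken from the protocol above.

```python
import numpy as np
from skimage.exposure import match_histograms

def minmax(v):
    return (v - v.min()) / (v.max() - v.min())

# Hypothetical inputs: the two in-phase slabs as (192, 256, 44) arrays with
# axial slices along the last axis (file names are placeholders).
V1 = minmax(np.load("slab2_inphase.npy"))
V2 = minmax(np.load("slab3_inphase.npy"))

# Equalize V2 against the histogram of the last 15 slices of V1, then V1
# against the first 15 slices of the already-equalized V2.
V2 = match_histograms(V2, V1[..., -15:])
V1 = match_histograms(V1, V2[..., :15])

# Zero-out the near-empty background.
V1[V1 < 0.05] = 0
V2[V2 < 0.05] = 0
```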
In order to find the pair of slices (one from \(V_{1}\) and another from \(V_{2}\)) that would serve to join the two volumes, only the last 8 slices of \(V_{1}\) and the first 8 slices of \(V_{2}\) were compared. These sets of slices were called \(V^{\prime}_{1}\) and \(V^{\prime}_{2}\), and had dimensions of 192\(\times\)256\(\times\)8 voxels. In each volume, regions of the body that were not of interest could be visible, such as arms, shoulders and hands. So, before comparing the volumes \(V^{\prime}_{1}\) and \(V^{\prime}_{2}\), it was necessary to apply an algorithm to exclude those regions. Considering that these regions appeared separated from the region of interest, the algorithm simply started from a voxel located approximately at the center of the region of interest and retained only the voxels connected to it. Once this was done, the comparison between \(V^{\prime}_{1}\) and \(V^{\prime}_{2}\) continued. A box with the smallest dimensions was sought such that it completely contained each of the two volumes \(V^{\prime}_{1}\) and \(V^{\prime}_{2}\). Voxels with intensities greater than 0 were labeled using a threshold equal to 0.5, such that voxels with intensities less than or equal to 0.5 were labeled as 1, and voxels with intensities greater than 0.5 were labeled as 2. Next, the 8 slices of \(V^{\prime}_{1}\) and the 8 slices of \(V^{\prime}_{2}\), already labeled, were compared in pairs by calculating the Dice coefficient. Out of the 64 comparisons made, the pair of slices with the highest Dice coefficient was used as a reference to join the two complete volumes \(V_{1}\) and \(V_{2}\) (Fig. 1). In addition to joining the volumes, they were also centered: using the chosen slices, the pair of voxels located at their centers were used as reference points to center and finally join the volumes \(V_{1}\) and \(V_{2}\) (Fig. 2). After centering and joining \(V_{1}\) and \(V_{2}\), both volumes ended up displaced relative to each other; however, the joined volume had to be contained in a single volume with uniform dimensions. To achieve this, the two slices that served to join \(V_{1}\) and \(V_{2}\) were each centered within slices of dimensions 200\(\times\)200 voxels. Then, these slices were joined, and subsequently the rest of the volumes were contained in a single volume with dimensions in the axial plane of 200\(\times\)200 voxels and height equal to the sum of the heights of the two joined volumes.
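The slice-pairing step can be summarized in code; a minimal sketch, assuming a common multi-class Dice convention over the two foreground labels (the helper names are ours):

```python
import numpy as np

def label_slice(s):
    """Label a normalized slice: 0 = background, 1 = (0, 0.5], 2 = (0.5, 1]."""
    lab = np.zeros_like(s, dtype=np.uint8)
    lab[(s > 0) & (s <= 0.5)] = 1
    lab[s > 0.5] = 2
    return lab

def dice(a, b, labels=(1, 2)):
    """Mean Dice coefficient over the foreground labels of two label maps."""
    scores = []
    for k in labels:
        inter = np.logical_and(a == k, b == k).sum()
        denom = (a == k).sum() + (b == k).sum()
        if denom:
            scores.append(2.0 * inter / denom)
    return float(np.mean(scores)) if scores else 0.0

def best_joining_pair(v1p, v2p):
    """Compare the 8 labeled slices of V1' and V2' pairwise (64 comparisons)
    and return the highest Dice value with the corresponding slice indices."""
    best_score, best_pair = -1.0, None
    for i in range(v1p.shape[2]):
        for j in range(v2p.shape[2]):
            d = dice(label_slice(v1p[:, :, i]), label_slice(v2p[:, :, j]))
            if d > best_score:
                best_score, best_pair = d, (i, j)
    return best_score, best_pair
```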
Afterwards, only 30 total slices of the joined volume were retained, with 10 from \(V_{1}\) starting from its chosen slice upward, and 20 from \(V_{2}\) starting from its chosen slice downward. The joined volume was called \(V\), which had dimensions of 200\(\times\)200\(\times\)30 voxels. As mentioned before, regions that were not of interest were excluded from volumes \(V_{1}^{\prime}\) and \(V_{2}^{\prime}\) (with 8 slices each). Then, with the joined volume \(V\) (having 30 slices in total), this task was repeated in a slightly different way. Instead of choosing a voxel located in the center of the entire volume, voxels located in the center of each of the 30 slices were searched. In each slice separately, starting from the central voxel, only the voxels that were connected to it and to each other were added. Because the volume was already centered, performing this task for each slice was more efficient than performing it at once for the entire volume. Fig. 3 shows a diagram with all the processes followed to join the volumes.
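The slice-wise cleanup described above can be expressed compactly; a sketch assuming SciPy's connected-component labeling, where the fallback to the largest region (used when the central voxel is empty) is our own safeguard, not part of the described method:

```python
import numpy as np
from scipy import ndimage

def keep_central_region(slice_2d):
    """Keep only the structure connected to the slice's central voxel,
    discarding disconnected regions such as arms or hands."""
    labels, _ = ndimage.label(slice_2d > 0)
    cx, cy = slice_2d.shape[0] // 2, slice_2d.shape[1] // 2
    target = labels[cx, cy]
    if target == 0:                       # central voxel empty (assumption):
        sizes = np.bincount(labels.ravel())
        sizes[0] = 0
        target = sizes.argmax()           # fall back to the largest region
    return np.where(labels == target, slice_2d, 0.0)

def clean_joined_volume(v):
    """Apply the region-keeping step to each of the 30 axial slices."""
    return np.stack([keep_central_region(v[:, :, z])
                     for z in range(v.shape[2])], axis=2)
```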
#### 2.4.2 Creation of total intensity maps \(I_{asat}\) and \(I_{vat}\) for training the proposed CNNs
From each joined volume \(V\), two-dimensional maps \(I_{asat}\) and \(I_{vat}\) were created. These two together formed a new volume \(V_{I}\) with dimensions 200\(\times\)200\(\times\)2 voxels. The volumes \(V_{I}\) were used as inputs to two proposed two-dimensional CNNs whose tasks were the quantification of ASAT and VAT respectively. The volumes \(V_{I}\) were considered by the CNNs as 2D images with two different channels. The image \(I_{asat}\) was created from a volume \(V_{asat}\), which contained an approximate segmentation of the region where the ASAT should be located. On the other hand, the image
Figure 1: **Pair of slices chosen to join \(V_{1}\) and \(V_{2}\). In (a) and (c) examples of slices of the volumes \(V_{1}^{\prime}\) and \(V_{2}^{\prime}\) (normalized and equalized) are shown respectively. In (b) and (d) the previous slices are shown with the voxels labeled within a box that completely contained them in volumes \(V_{1}^{\prime}\) and \(V_{2}^{\prime}\). These two slices were used to join the volumes since they obtained the highest value of the Dice coefficient. Furthermore, the slices served to center the volumes taking as reference the center of the chosen slices (red crosses).**
Figure 3: **Volume joining.** Processes carried out to find the pair of slices from \(V_{1}^{\prime}\) and \(V_{2}^{\prime}\) that served as a reference to join the two volumes \(V_{1}\) and \(V_{2}\) and thus obtain, for each subject, a single joined volume \(V\) normalized, equalized and centered.
Figure 2: **Joining and centering of \(V_{1}\) and \(V_{2}\).** View from the coronal (a) and sagittal (b) planes of the join of \(V_{1}\) and \(V_{2}\) without applying any process to them. View from the coronal (c) and sagittal (d) planes of the normalized, equalized and centered \(V_{1}\) and \(V_{2}\), using as reference the pair of slices chosen from \(V_{1}^{\prime}\) and \(V_{2}^{\prime}\) respectively.
was created from a volume \(V_{vat}\), which contained an approximate segmentation of the region where the VAT should have been located. The images \(I_{asat}\) and \(I_{vat}\) were called total intensity maps. To obtain these two images, the following was done for each subject.
The volume \(V\) was smoothed with a median filter of size 3\(\times\)3\(\times\)3 voxels and was subsequently normalized from 0 to 1. Then, a resized volume proportional to 85% of the original volume was obtained. This volume was smoothed with a median filter of size 7\(\times\)7\(\times\)7 voxels. All voxels with intensities greater than 0 were set equal to 1. A filling process was carried out to eliminate possible holes in the resized volume. This volume was used as a mask over the original volume \(V\), so that all voxels within the mask with intensities greater than a threshold equal to 0.75 were set to 0. The remaining voxels were set to 1, and again a filling process to eliminate possible holes was applied. This volume served as a new mask over the original volume \(V\), used in the following way. By eliminating the voxels that were inside the mask, the volume \(V_{asat}\) was obtained, which contained an approximate segmentation of the region that should have contained the ASAT. On the other hand, by conserving only the voxels that were within the last mask, the volume \(V_{vat}\) was obtained, which contained an approximate segmentation of the region that should have contained the VAT. An example of the process described above is shown in Fig. 4. Although the figure shows a slice as an example, the process was performed on the entire volume \(V\) at once.
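The masking pipeline can be sketched as follows; this is our reading of the steps in Fig. 4 under stated assumptions (linear interpolation for the 85% resize and centered placement of the shrunken mask), not the authors' exact implementation:

```python
import numpy as np
from scipy import ndimage

def center_pad(vol, shape):
    """Pad a shrunken volume back to `shape`, keeping it centered."""
    out = np.zeros(shape, dtype=vol.dtype)
    off = [(s - v) // 2 for s, v in zip(shape, vol.shape)]
    out[off[0]:off[0] + vol.shape[0],
        off[1]:off[1] + vol.shape[1],
        off[2]:off[2] + vol.shape[2]] = vol
    return out

def split_asat_vat(v, bright_thr=0.75):
    """Approximate the ASAT and VAT regions of a joined volume V."""
    v = ndimage.median_filter(v, size=3)
    v = (v - v.min()) / (v.max() - v.min())
    inner = ndimage.median_filter(ndimage.zoom(v, 0.85, order=1), size=7)
    mask = ndimage.binary_fill_holes(center_pad(inner, v.shape) > 0)
    core = ndimage.binary_fill_holes(mask & (v <= bright_thr))
    v_asat = np.where(core, 0.0, v)   # eliminate voxels inside the mask
    v_vat = np.where(core, v, 0.0)    # conserve only voxels inside the mask
    return v_asat, v_vat
```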
To obtain the total intensity maps \(I_{asat}\) and \(I_{vat}\) the following was done. All voxels of the volume \(V_{asat}\) whose intensities were different from 0 were set equal to 1 and the following equation was applied:
\[I_{asat}(x,y)=\sum_{z=1}^{30}V_{asat}(x,y,z) \tag{1}\]
so that all the intensities of the voxels located in the same position of each slice with dimensions equal to 200\(\times\)200 voxels were added. Thus, the total intensity map \(I_{asat}\) with dimensions equal to 200\(\times\)200\(\times\)1 voxels was obtained (Fig. 5(a)). To obtain the second map, the following was done.
Figure 4: **Approximate segmentation of the regions that should have contained the VAT and ASAT.** (a) Slice of a volume \(V\). (b) Smoothing. (c) Volume resized and smoothed again. (d) Voxels set to 1, application of hole filling process and mask creation. (e) Volume obtained after applying the mask to the volume shown in (b), eliminating voxels located outside it. (f) Elimination of voxels with intensities greater than 0.75. (g) Voxels equal to 1. (h) Application of hole filling processes and creation of new mask. Applying this last mask to the volume shown in (a), the volume (i) \(V_{asat}\) was obtained by eliminating the voxels inside it, and the volume (j) \(V_{vat}\) by eliminating the voxels outside it.
On the other hand, all voxels of the volume \(V_{vat}\) whose intensities were less than 0.7 were set to 0, while voxels with intensities greater than or equal to 0.7 retained their value. Then, the following equation was applied:
\[I_{vat}(x,y)=\sum_{z=1}^{30}V_{vat}(x,y,z) \tag{2}\]
so that all the intensities of the voxels located in the same position of each slice with dimensions equal to 200\(\times\)200 voxels were also added. Thus, the total intensity map \(I_{vat}\) was obtained with dimensions equal to 200\(\times\)200\(\times\)1 voxels. Finally, a volume \(V_{I}\) was created from the two aforementioned maps. This volume had \(I_{asat}\) as its first slice and \(I_{vat}\) as its second, thus forming a volume with dimensions 200\(\times\)200\(\times\)2 voxels (Fig. 5(b)). The volumes \(V_{I}\) obtained from each subject were used as inputs for the proposed CNNs that will be described in the following section.
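Equations (1) and (2), together with the stated binarization and thresholding rules, collapse to a few lines of array code; a minimal sketch (names are ours):

```python
import numpy as np

def total_intensity_maps(v_asat, v_vat, vat_thr=0.7):
    """Build the 200x200x2 input volume V_I from the ASAT/VAT volumes."""
    i_asat = (v_asat != 0).sum(axis=2).astype(np.float32)       # Eq. (1)
    i_vat = np.where(v_vat >= vat_thr, v_vat, 0.0).sum(axis=2)  # Eq. (2)
    return np.stack([i_asat, i_vat], axis=2)                    # V_I
```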
#### 2.4.3 Proposed CNNs
Two CNN architectures were proposed to quantify ASAT and VAT respectively. Both CNNs had a similar structure and studied the same volumes \(V_{I}\) of each subject. From the 78 subjects, 42 were randomly chosen for training, 18 for validation and 18 for testing. Table 1 shows their distribution according to their weight classification.
Fig. 6 shows the architecture of the CNN to quantify the ASAT. There were four blocks with the same layers. The first was a convolution layer with 128 filters of size 3\(\times\)3 with stride equal to 1\(\times\)1; the second was an average pooling layer of size 2\(\times\)2 with stride equal to 2\(\times\)2 and same padding; the third was a Leaky ReLU activation layer with scale equal to 0.15; and the fourth was a dropout layer with probability equal to 0.5. After the mentioned blocks, there was a fully connected layer of 10 nodes, followed by a batch normalization layer and a Leaky ReLU activation layer with scale equal to 0.15. Then there was a BiLSTM layer with 10 hidden units and 20\(\times\)1 hidden states and a dropout layer with probability 0.2. Finally, there was a regression layer with a single output node. To avoid overfitting, data augmentation was used through random rotations varying from -30 to 30 degrees. A minibatch equal to 42 was used (this being the total number of training samples), applying the SGDM optimizer with a constant learning rate equal to 0.005. The CNN was trained for 30,000 epochs with validations every 10 epochs.
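A rough PyTorch approximation of this architecture is sketched below (the original networks were built in MATLAB). Feeding the 10 fully connected features to the BiLSTM as a length-1 sequence, the ceil-mode pooling as a stand-in for MATLAB's same-padded pooling, and the use of `LazyLinear` are our assumptions; the two hyperparameters that differ between the proposed networks are exposed as arguments:

```python
import torch
import torch.nn as nn

class FatQuantCNN(nn.Module):
    """Approximate re-implementation of the described regression CNN."""
    def __init__(self, leak=0.15, p_drop=0.5):
        super().__init__()
        blocks, in_ch = [], 2          # V_I has two channels (I_asat, I_vat)
        for _ in range(4):
            blocks += [nn.Conv2d(in_ch, 128, 3, stride=1, padding=1),
                       nn.AvgPool2d(2, stride=2, ceil_mode=True),
                       nn.LeakyReLU(leak),
                       nn.Dropout(p_drop)]
            in_ch = 128
        self.features = nn.Sequential(*blocks)
        self.fc = nn.Sequential(nn.LazyLinear(10),
                                nn.BatchNorm1d(10),
                                nn.LeakyReLU(leak))
        self.lstm = nn.LSTM(10, 10, bidirectional=True, batch_first=True)
        self.drop = nn.Dropout(0.2)
        self.head = nn.Linear(20, 1)   # single regression output (liters)

    def forward(self, x):              # x: (batch, 2, 200, 200)
        h = self.fc(self.features(x).flatten(1))
        h, _ = self.lstm(h.unsqueeze(1))   # treat as a length-1 sequence
        return self.head(self.drop(h.squeeze(1)))

# ASAT network: FatQuantCNN(leak=0.15, p_drop=0.5), SGD with momentum,
# lr=0.005, full batch of 42, random rotations in [-30, 30] degrees.
```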
| **Subset** | **Low weight** | **Normal weight** | **Overweight** | **Obesity** | **Total** |
| --- | --- | --- | --- | --- | --- |
| Training | 3 | 20 | 8 | 11 | 42 |
| Validation | 0 | 11 | 5 | 2 | 18 |
| Testing | 0 | 11 | 4 | 3 | 18 |
| Total | 3 | 42 | 17 | 16 | 78 |

Table 1: Distribution of subjects according to their weight classification.
Figure 5: Total intensity maps. (a) Total intensity map \(I_{asat}\), obtained from the volume \(V_{asat}\). (b) Total intensity map \(I_{vat}\), obtained from the volume \(V_{vat}\). Both maps formed the slices of a volume \(V_{I}\) of dimensions 200\(\times\)200\(\times\)2 voxels that was used as input to the proposed CNNs.
Fig. 7 shows the CNN architecture to quantify the VAT. Its architecture was similar to that used to quantify the ASAT, with the following differences. For the second CNN, Leaky ReLU activation layers with scale equal to 0.5 were used; the probability of the dropout layers was equal to 0.3; rotation angles for augmentation ranged from -10 to 10 degrees; and the constant learning rate was equal to 0.001. The expected outputs of each CNN were the quantifications reported by AMRA\({}^{\circledR}\) Researcher. After training, the CNNs were applied to the training, validation and testing subjects. In order to compare the quantifications between AMRA\({}^{\circledR}\) Researcher and those made by the CNNs, Bland-Altman plots were created, correlation analysis was performed, and the non-parametric Wilcoxon signed rank test was applied.
## 3 Results
Fig. 8 shows the correlation and Bland-Altman plots obtained by comparing the VAT quantifications made by AMRA\({}^{\circledR}\) Researcher and those obtained in the present work, for the training, validation and testing subjects respectively. Fig. 9 shows the analogous comparison for the ASAT quantifications. The correlation plots indicate the value of \(R^{2}\) and the intercept and slope of the fit line. Table 2 shows the p-values for the null hypothesis that there was no significant correlation between the compared quantifications. All results indicated a high correlation that was statistically significant. Bland-Altman plots indicate the average difference of the quantifications, the 95% limits of agreement and the reproducibility coefficient (RCP). For the quantification of VAT, coefficients of variation (CV) equal to 7.3%, 16% and 17%, and RCP equal to 0.08 L, 0.13 L and 0.17 L, were obtained for the training, validation and testing subjects
Figure 6: CNN architecture to quantify the ASAT. The different blocks and layers that formed the proposed CNN are shown.
Figure 7: CNN architecture to quantify the VAT. The different blocks and layers that formed the proposed CNN are shown.
respectively. For the quantification of ASAT, CV equal to 1.8%, 10% and 8.6%, and RCP equal to 0.08 L, 0.32 L and 0.32 L, were obtained for the training, validation and testing subjects respectively. Table 3 shows the results from the Wilcoxon signed rank test between the AMRA(r) Researcher quantifications and those made in the present work, for the VAT and ASAT, with the training, validation and test subjects respectively. All tests indicated that there were no significant statistical differences (p-value \(>\) 0.05).
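The Bland-Altman summary statistics reported here follow standard definitions; a small sketch is given below (the exact CV convention used by the authors is not stated, so the one used here, SD of the differences over the mean measurement, is an assumption, and the demo data are synthetic):

```python
import numpy as np
from scipy.stats import wilcoxon

def bland_altman_stats(ref, pred):
    """Mean difference, 95% limits of agreement, reproducibility
    coefficient (RCP = 1.96 * SD of the differences) and CV for a paired
    comparison between two measurement methods."""
    ref, pred = np.asarray(ref, float), np.asarray(pred, float)
    diff = pred - ref
    bias, sd = diff.mean(), diff.std(ddof=1)
    rcp = 1.96 * sd
    cv = 100.0 * sd / np.mean((ref + pred) / 2.0)
    return {"bias": bias, "loa": (bias - rcp, bias + rcp),
            "rcp": rcp, "cv_percent": cv}

# Synthetic demo only (18 test subjects, volumes in liters):
rng = np.random.default_rng(0)
ref = rng.uniform(0.5, 5.0, size=18)
pred = ref + rng.normal(0.0, 0.05, size=18)
print(bland_altman_stats(ref, pred))
print("Wilcoxon signed rank p-value:", wilcoxon(ref, pred).pvalue)
```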
## 4 Discussion and conclusions
The present work used as a reference standard the quantifications made by the widely validated commercial measurement system AMRA\({}^{\circledR}\) Researcher. Within the AMRA\({}^{\circledR}\) Researcher
| **Subset** | **p-value (VAT)** | **p-value (ASAT)** |
| --- | --- | --- |
| Training | 0 | 0 |
| Validation | 9.503\(\times 10^{-11}\) | 1.511\(\times 10^{-14}\) |
| Testing | 3.586\(\times 10^{-12}\) | 1.039\(\times 10^{-15}\) |

Table 2: **Correlation p-values.** p-values obtained to determine whether the correlations between AMRA® Researcher quantifications and those made in the present work were significant.
Figure 8: **Bland-Altman and correlation plots for VAT. (a), (b) and (c) show the correlation plots, while (d), (e) and (f) show the Bland-Altman plots, for the training, validation and testing subjects respectively, for the quantification of VAT.**
| **Subset** | **p-value (VAT)** | **p-value (ASAT)** |
| --- | --- | --- |
| Training | 0.2998 | 0.3093 |
| Validation | 0.2656 | 0.9478 |
| Testing | 0.7757 | 0.1765 |

Table 3: **Results from the Wilcoxon signed rank test.** The p-values obtained after applying the Wilcoxon signed rank test between the AMRA® Researcher quantifications and those made in the present work are shown.
reports (obtained during 2018, the year in which the measurements were carried out), precisions equal to 0.17 L and 0.33 L were indicated for the quantifications of VAT and ASAT respectively. This precision was defined as a repeatability coefficient, that is, the smallest detectable difference between two measurements with a confidence level of 95%, made under the same conditions, with the same imaging protocol and quantification methodology. In this work, the same Dixon sequences used by AMRA\({}^{\circledR}\) Researcher were studied (so the same imaging protocol was used). However, a different quantification methodology was proposed. Therefore, to compare the AMRA\({}^{\circledR}\) Researcher quantifications with those made in this work, the reproducibility coefficient (RCP) was used, which is defined as the value below which the absolute differences between two measurements would fall with 95% probability, considering that these were calculated under different conditions or using different measurement systems [28].
As shown in the results, for the quantification of VAT an RCP \(\leq\) 0.17 L was obtained, while for the quantification of ASAT an RCP \(\leq\) 0.32 L was obtained. Therefore, it can be concluded that the measurements made in this work were within the precision reported by AMRA\({}^{\circledR}\) Researcher. Although the precision of AMRA\({}^{\circledR}\) Researcher may have since improved, verifying this would require obtaining new quantifications carried out with its updated methodology, comparing them with those made in the present work, and then comparing the precision of both methodologies.
Although the work depended on Dixon sequences generated using the AMRA\({}^{\circledR}\) Researcher imaging protocol, the proposed methodology could be verified on other databases, since these types of sequences are commonly obtained by different MRI scanners. However, there may be an exception regarding the way the slabs are obtained, since this procedure is specific to AMRA\({}^{\circledR}\) Researcher. Even considering the above, in general it would be more convenient to study a single volume, acquired at once and containing the entire region of interest, thereby omitting the slab-joining process and reducing quantification errors introduced during joining. In this case, the proposed methodology could still be applied, excluding the joining process and appropriately choosing the 30 slices from which the total intensity maps would be obtained.
The proposed two-dimensional CNNs analyzed a set of volumes \(V_{I}\) formed by what we call
Figure 9: **Bland-Altman and correlation plots for ASAT.** (a), (b) and (c) show the correlation plots, while (d), (e) and (f) show the Bland-Altman plots, for the training, validation and testing subjects respectively, for the quantification of ASAT.
\(I_{vat}\) and \(I_{asat}\). These maps were obtained by adding the intensities of the voxels that resulted from approximately segmenting the regions that should have contained the VAT and ASAT respectively. The proposed CNNs then treated the volumes \(V_{I}\) as 2D images with two different channels. Training CNNs in two dimensions required far fewer computational resources than studying 3D volumes, which was an advantage of the proposed two-dimensional methodology. Furthermore, the proposed CNNs had much simpler architectures than many others used in different works, while still obtaining excellent results in this case. On the other hand, after studying the training, validation and testing subjects, consistency was observed in the results, thus demonstrating their reproducibility. When the in-phase Dixon sequences were studied, voxels with high intensities did not correspond solely to the fat signal, since they could represent a combination of the fat and water signals. Furthermore, the volumes \(V_{I}\) were made up of a small number of slices (30 in total), so they did not necessarily cover the entire region containing the VAT and ASAT. Also, no anatomical reference was used, except for the choice of the slabs numbered 2 and 3. Methodologies from other works (including AMRA\({}^{\circledR}\) Researcher) needed to first accurately segment the fat deposits, then decide which ones were part of the VAT and ASAT, and finally quantify them. For the quantifications of this work, it was hypothesized that the approximate segmentation was sufficient to implicitly relate it to the amount of fat to be studied. The proposed methodology had the sole objective of quantifying fat without having to locate or segment it precisely. It is well known that the main strength of CNNs is the automatic search for abstract patterns in images, with the aim of successfully performing various tasks such as segmentation, classification or detection. Therefore, through adequate training of the proposed CNNs using the total intensity maps \(I_{asat}\) and \(I_{vat}\) as inputs, without requiring precise segmentations and without further anatomical considerations, it was possible to successfully quantify the VAT and ASAT with accuracy similar to AMRA\({}^{\circledR}\) Researcher.
Among the limitations of this work, the database studied was made up of a small number of samples. Additionally, the study subjects had different BMIs, and there was an imbalance between the number of samples for each weight classification. Although accurate results were obtained, it can be deduced that the methodology had a bias towards subjects with normal weight, since these provided the greatest number of samples during training. On the other hand, this work analyzed the in-phase Dixon sequences, thus avoiding the correction of the artifact known as water-fat swap, but forgoing the use of fat-only images, which contain more useful and explicit information for performing the quantifications. Also, the total intensity maps \(I_{asat}\) and \(I_{vat}\) lost spatial information when reducing 3D volumes to 2D images. Therefore, these maps were especially affected by artifacts generated by movement. As this was a study conducted in children between 7 and 9 years old, the appearance of these artifacts was more likely, since the breath-hold condition may not always have been met, nor were the subjects completely at rest during the entire acquisition of the MRI sequences.
Future work should apply the proposed methodology to databases with a greater number of samples and balanced classes, performing cross-validation. Since the study was restricted to male children aged between 7 and 9 years, the proposed method could be applied to subjects of different ages, both children and adults, as well as men and women. Furthermore, fat-only Dixon sequences should be studied, proposing an automatic method for correcting the water-fat-swap artifact and thus taking advantage of the information that this type of sequence offers for the required quantification tasks. Also, an algorithm could be implemented to automatically choose the best anatomical region of the volumes for quantification, so that the total intensity maps \(I_{asat}\) and \(I_{vat}\) would contain a greater amount of useful information for training the CNNs. Finally, other CNN architectures could be explored.
In conclusion, an automatic, simple, reproducible and economical methodology for quantifying ASAT and VAT in children was proposed, with low demand for computational resources, based on the analysis of what we called total intensity maps and on two-dimensional CNNs with a simple architecture, achieving the precision of the commercial AMRA\({}^{\circledR}\) Researcher quantification method. In this work, Dixon sequences commonly obtained on different scanners were studied, making the proposed methodology accessible and reproducible by independent studies, in order to corroborate the results and implement improvements. Ultimately, the objective is for the proposed methodology to serve as an accessible and free tool for the diagnosis, monitoring and prevention of diseases related to overweight and obesity in children.
2309.10418 | Graph Neural Networks for Dynamic Modeling of Roller Bearing | In the presented work, we propose to apply the framework of graph neural
networks (GNNs) to predict the dynamics of a rolling element bearing. This
approach offers generalizability and interpretability, having the potential for
scalable use in real-time operational digital twin systems for monitoring the
health state of rotating machines. By representing the bearing's components as
nodes in a graph, the GNN can effectively model the complex relationships and
interactions among them. We utilize a dynamic spring-mass-damper model of a
bearing to generate the training data for the GNN. In this model, discrete
masses represent bearing components such as rolling elements, inner raceways,
and outer raceways, while a Hertzian contact model is employed to calculate the
forces between these components.
We evaluate the learning and generalization capabilities of the proposed GNN
framework by testing different bearing configurations that deviate from the
training configurations. Through this approach, we demonstrate the
effectiveness of the GNN-based method in accurately predicting the dynamics of
rolling element bearings, highlighting its potential for real-time health
monitoring of rotating machinery. | Vinay Sharma, Jens Ravesloot, Cees Taal, Olga Fink | 2023-09-19T08:30:10Z | http://arxiv.org/abs/2309.10418v1 | # Graph Neural Networks for Dynamic Modeling of Roller Bearing
###### Abstract
In the presented work, we propose to apply the framework of graph neural networks (GNNs) to predict the dynamics of a rolling element bearing. This approach offers generalizability and interpretability, having the potential for scalable use in real-time operational digital twin systems for monitoring the health state of rotating machines. By representing the bearing's components as nodes in a graph, the GNN can effectively model the complex relationships and interactions among them. We utilize a dynamic spring-mass-damper model of a bearing to generate the training data for the GNN. In this model, discrete masses represent bearing components such as rolling elements, inner raceways, and outer raceways, while a Hertzian contact model is employed to calculate the forces between these components.
We evaluate the learning and generalization capabilities of the proposed GNN framework by testing different bearing configurations that deviate from the training configurations. Through this approach, we demonstrate the effectiveness of the GNN-based method in accurately predicting the dynamics of rolling element bearings, highlighting its potential for real-time health monitoring of rotating machinery.
GNN Bearings Dynamic Model
## 1 Introduction
Real-time condition monitoring is essential for realizing real-time operational digital twins of complex systems such as rotating equipment. Digital twins enable real-time fault diagnosis and prognosis, mitigating the risk of catastrophic system failures and reducing maintenance costs through early intervention in case of faults. However, purely data-driven methods often struggle to capture the underlying dynamics and generalize to operating conditions not included in the training datasets. Consequently, they fall short of accurately predicting the long-term evolution of physical system states.
To address these challenges, physics-informed neural networks (PINNs) have emerged as a potential solution. PINNs integrate the partial differential equation (PDE) of the underlying system into the loss function, thereby regularizing the solution learned by the neural network. These methods have demonstrated significant success in various mechanics problems, including stress prediction in homogeneous elastic plates [1], composites [2], and heterogeneous materials [3]. In the context of multibody dynamical systems (MBD), the PINN loss can be formulated based on the Lagrangian of the system [4], the Hamiltonian [5], or the conservation of energy [6].
However, applying PINNs to systems with a large number of components presents challenges. It requires explicit derivation of either the PDE or analytical expressions for the conserved quantities, which can be cumbersome for complex multi-component systems. Additionally, enforcing boundary conditions becomes challenging, particularly in multi-component systems where boundaries dynamically form due to contact between different components [7]. Therefore, to handle a large number of interacting components, a network with an encoded inductive bias in its architecture is necessary.
Graph neural networks (GNNs) [8; 9; 10] provide a promising solution to these challenges by representing input components as nodes in a graph and modeling interactions between them as messages passed over the edges of the graph. Due to their encoded inductive bias, they generalize well to systems with varying configurations and boundary conditions. Many physical systems consist of components that interact with each other, which makes the graph structure of GNNs a natural fit.
In message-passing GNNs (MP-GNNs), the topological structure of a multi-component system can be represented as a graph where nodes represent the state of different components and edges between the nodes represent the interactions between those components. The pairwise interactions are then modeled as messages passed over the edges. MP-GNNs comprise two networks: (i) an edge network that takes edge features between two nodes (e.g., the distance vector) and generates a message, and (ii) a node network that takes the aggregated messages from all the neighboring nodes and produces a new node state. This process is repeated several times until the final node state is decoded as a target output. Depending on the task, the target of the graph neural network can be, for example, the predicted acceleration of each node. These models have successfully been applied to various dynamics prediction tasks. They have shown success in simple systems such as particle and spring-mass systems [11], as well as in more complex scenarios like three-dimensional skeleton-based human motion prediction [12].
Expanding upon previous research on simple mass-spring systems, our study delves into the specificities of modelling bearing dynamics. Accurate and fast modeling of bearing dynamics is vital for timely fault detection and failure prediction in rotating equipment. Building upon the efficacy of GNNs in capturing complex relationships and dynamics, our aim is to develop a graph-neural-network-based simulator that can accurately capture the complex interactions between different components in a bearing. Compared with Finite Element Analysis (FEA) or parameter-calibrated lumped parameter models, a GNN-based bearing model can have specific advantages. Specifically, it offers a significant reduction in computational complexity [10], since the dynamics are learned solely from measurements without requiring knowledge of stiffness, mass, and damping factors. Moreover, in contrast to pure data-driven methods, this method is interpretable (allowing for the inference of physical quantities not explicitly trained for), generalizable (enabling extrapolation to unseen conditions such as new shaft loads), and flexible (allowing the construction of new graphs, such as by changing the number of rolling elements in the bearing).
In this work, we present the first proof of concept for the application of GNNs in modeling bearing dynamics. To achieve this, we train a GNN on a simple 2D dynamic bearing model, demonstrating its interpretability, generalizability, and flexibility. In future work, we anticipate that this concept can be extended to real sensor data and advanced FEA simulations, such as those involving complex elasto-hydrodynamic forces. The proposed model incorporates specific node features, edge features, and graph connections that are essential for modelling a bearing as a graph. We further introduce the use of Message-Passing Graph Neural Networks (MP-GNNs) as proposed in [9] to predict the evolution of this dynamic. However, in contrast to [9], we propose modifications to the GNN architecture to decode roller loads from the edges and capture the dynamics of the raceways and rolling elements from the nodes.
This paper is organized as follows: In Section 2, we first introduce a 2D dynamic bearing model. This analytical model serves as the generator of simulation trajectories on which we train the graph-based bearing model described in Section 3. In Section 4, we describe the bearing configurations and operating conditions used to generate the simulation data for training and testing the model. In Section 5, we present the results of our experiments and evaluate the performance of the graph-based bearing model. Finally, in Section 6, we summarize the findings of the study, discuss their implications, and suggest future extensions of the proposed research study.
## 2 Dynamic bearing model
In this work, a 2D dynamic bearing model is used to simulate the behavior of a cylindrical roller bearing (CRB) for training and validation of the GNN. The chosen physics-based bearing model captures the essential components including the inner and outer rings, as well as multiple rolling elements [13] as depicted in figure 1. Nonlinear contact models, based on the work of [14] and [15], are utilized to model the contacts between the rings and rolling elements. To establish the mechanical connections, the inner and outer rings are connected to the ground through springs and dampers. It is important to note that the bearing in this work is considered stationary and all bodies are restricted to horizontal and vertical movements only. The forces acting on each body are calculated as a function of their velocities
and positions in space. Using the mass of the inner and outer ring, their respective accelerations can be computed and the Runge-Kutta method (RK4) is used to numerically integrate these to their updated positions and velocities. The rolling elements are assumed to have negligible mass and their positions are determined as a function of the inner and outer ring positions.
To introduce external stimuli to the system, a time-varying vertical force is applied to the outer ring as an input. The internal loads, positions, and velocities of all components serve as outputs and are used to train the GNN. Figure 1 shows a schematic of the dynamic bearing model.
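To make the integration scheme concrete, the following sketch shows a classical RK4 step applied to a reduced one-degree-of-freedom version of such a model (a ring on its ground spring/damper with a Hertzian-style contact); the mass, stiffnesses and the 10/9 line-contact exponent are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

def hertz_force(delta, k=1.0e9, n=10.0 / 9.0):
    """Hertzian line-contact force: zero when separated, k*delta^n when
    compressed (n = 10/9 is a common choice for cylindrical contacts)."""
    return k * np.maximum(delta, 0.0) ** n

def rk4_step(y, t, h, deriv):
    """One classical Runge-Kutta (RK4) step for dy/dt = deriv(t, y)."""
    k1 = deriv(t, y)
    k2 = deriv(t + h / 2, y + h / 2 * k1)
    k3 = deriv(t + h / 2, y + h / 2 * k2)
    k4 = deriv(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def ring_dynamics(t, y, m=1.0, k_g=5.0e6, c_g=5.0e4, f_ext=-13000.0):
    """Reduced vertical model: ground spring/damper plus a contact at x=0."""
    x, v = y
    a = (f_ext - k_g * x - c_g * v + hertz_force(-x)) / m
    return np.array([v, a])
```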
## 3 A graph-based model of bearings
The bearing model can be effectively represented as a graph. Figure 3 depicts a graph representation \(\mathcal{G}=(\nu,\varepsilon)\) of the 2D dynamical model discussed in section 2. The graph representation captures the essential components of the model,
Figure 1: Schematic overview of the 2D bearing model used to generate signals for the GNN.
Figure 3: Graph Representation of 2d Dynamic Model: The node features include the position \(\vec{x}_{i}\) and velocity \(\vec{v}_{i}\) of the centers of components (set to zero for rollers), external force \(\vec{F}_{ext}\), and node type. The edge features include the relative distance \(\vec{dx_{ij}}\) and its magnitude between the rollers and the circumferences of the inner and outer rings.
Figure 2: The inputs and outputs of the bearing model.
including the inner ring, outer ring, and rolling elements, as nodes \(\nu\) in the graph. The interactions between the rolling elements and the rings, characterized by non-linear contacts, are represented by edges \(\varepsilon\) in the graph.
In the following sections, we will elaborate on the GNN model, providing more details about the node and edge features, as well as the learning process.
### Node and edge features
**Node Features**: The nodes in the graph represent the inner ring, outer ring, and rollers of the bearing system. Each node is characterized by a set of features denoted as \(\nu_{i}=\{\vec{x_{i}},\vec{v_{i}},\vec{F}_{ext},\text{type}\}\). For the nodes representing the inner and outer rings, the feature \(\vec{x_{i}}\) corresponds to the position of their respective centers, and their velocity is captured by the feature \(\vec{v_{i}}\), measured in millimeters per second.
For the nodes representing the rollers, the features \(\vec{x_{i}}\) and \(\vec{v_{i}}\) are set to zero. This choice reflects the assumption that the dynamics of rollers is purely governed by pair-wise interactions with the rings through relative positional features, which are encoded in the edges connecting the nodes.
In terms of external forces, the node representing the outer ring has a non-zero value for the feature \(\vec{F}_{ext}\), which accounts for the externally applied vertical radial force on the outer ring. Whereas, the inner ring and rollers have a value of zero for the external force feature \(\vec{F}_{ext}\), indicating that no external forces are applied to them.
To distinguish between the different components, the node type is encoded as a categorical variable. This allows for differentiation between the three types of components: the inner ring, outer ring, and rollers within the graph representation of the bearing system.
**Edge Features**: The rollers are connected to the inner ring and outer ring nodes through bidirectional edges \(\varepsilon_{ij}=\{\vec{dx_{ij}},||\vec{dx_{ij}}||\}\). These edges capture the 2D distance vector \(\vec{dx_{ij}}\) between the roller center and the circumference of the inner or outer ring, along with its scalar magnitude \(||\vec{dx_{ij}}||\).
The choice of using the distance vector and its magnitude as edge features is motivated by the assumption that the non-linear contact between the rollers and the rings can be modeled by non-linear springs. In this model, the forces depend solely on the elongation or compression of the springs, which is captured by the relative distance vector between the roller center and the ring circumference.
To calculate the 2D distance vector, the points on the circumferences of the inner and outer rings are determined, taking into account that the center of the roller is positioned midway between them. This approach ensures an accurate representation of the spatial relationship between the rollers and the rings, enabling the modeling of the contact forces and interactions within the bearing system.
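Computing these edge features reduces to projecting the roller center onto the relevant ring surface; a small sketch, where `surface_radius` stands for the inner ring's outer-race radius or the outer ring's inner-race radius (names are ours):

```python
import numpy as np

def edge_features(roller_xy, ring_xy, surface_radius):
    """2D distance vector from a roller center to the point on the ring
    circumference along the radial direction, plus its magnitude."""
    r = roller_xy - ring_xy
    direction = r / np.linalg.norm(r)
    contact = ring_xy + surface_radius * direction
    dx = roller_xy - contact
    return dx, float(np.linalg.norm(dx))
```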
It is worth highlighting that our approach distinguishes itself from the previous applications of GNNs in predicting dynamics of spring-mass or particle systems [9; 10] by incorporating absolute position and velocity features on the nodes representing the inner and outer rings. While earlier applications primarily focused on capturing pair-wise interactions that are independent of position, our objective in modeling bearings expands to encompass the dynamics of both the inner and outer rings. This consideration takes into account their interactions with the ground through springs and dampers as shown in Figure 1.
### Model
To predict the dynamics of the bearing, we utilize an encode-process-decode architecture, employing a message-passing graph neural network framework. The schematic of the model described in this section is illustrated in Figure 4.
**Encode**: The encoder takes a graph \(\mathcal{G}=(\nu,\varepsilon)\) and uses separate Multi-Layer Perceptrons (MLPs) \(f^{\varepsilon}_{enc}\) and \(f^{\nu}_{enc}\) to encode the edge and node features into latent vectors \(E_{ij}\) and \(V_{i}\), respectively, each of size 64. The encoded graph is represented as \(\mathcal{G}_{0}=(V,E)\).
**Process**: The processor consists of multiple blocks with unshared weights, where each block performs sequential message passing over the input graph and produces transformed node and edge latent vectors. Residual connections are employed between the input and output edge/node latent vectors to facilitate information flow. The initial block takes the encoded graph as input, and subsequent blocks take the output of the previous block. Within each block, MLPs \(f^{E}\) and \(f^{V}\) are used to transform the latent edge vectors \(E_{ij}\) and the latent node vectors \(V_{i}\), respectively.
The edge transformation is described as follows: \(E^{\prime}_{ij}\gets f^{E}(E_{ij},V_{i},V_{j})\).
Here, \(V_{i}\) and \(V_{j}\) denote the latent vectors of the sender and receiver nodes, respectively, whereas \(E_{ij}\) represents the latent vector of the connecting edge.
The node transformation is described as follows: \(V^{\prime}_{i}\gets f^{V}(V_{i},\sum_{j}E^{\prime}_{ij})\).
At each node, the transformed latent vectors of incoming edges are aggregated using a permutation-invariant summation function. The resulting sum, along with the node latent vector, is concatenated and fed into the MLP \(f^{V}\). This MLP processes the input and generates the transformed node latent vector, incorporating the information from the aggregated edge vectors.
In Figure 4, the transformed graph after the first message passing step is denoted as \(\mathcal{G}_{1}=(V^{\prime},E^{\prime})\). The next processor block takes \(\mathcal{G}_{1}\) as input, performs similar transformations with separate MLPs \(g^{E}\) for edges and \(g^{V}\) for nodes, resulting in the transformed graph \(\mathcal{G}_{2}\). This process continues for \(M\) message passing blocks, and the output after \(M\) blocks, denoted as \(\mathcal{G}_{M}\), serves as input to the decoder.
**Decoder**: The decoder comprises an edge decoder MLP \(f^{E}_{dec}\) and a node decoder MLP \(f^{N}_{dec}\). This is different from the previous applications of GNNs in dynamics prediction tasks [9; 10] where only the node dynamics are decoded from the node latent vectors using a decoder MLP.
**Edge Decoder**: The edge decoder MLP takes the latent vectors at each edge as input and predicts a 2D contact force for each edge: \(F_{edge}\gets f^{E}_{dec}(E^{\prime}_{\mathcal{G}_{M}})\).
**Node Decoder**: The node decoder MLP takes the latent vectors at each node as input and predicts the net 2D force on each node: \(F_{node}\gets f^{N}_{dec}(V^{\prime}_{\mathcal{G}_{M}})\).
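A compact PyTorch sketch of this encode-process-decode architecture is given below. The 64-dimensional latents, the residual connections, the unshared per-block weights, and the two decoders follow the text; the MLP depth and activation, the node/edge input dimensions (one-hot component type plus 2D positions, velocities and forces; 2D distance vector plus magnitude), and the index-based message aggregation are our assumptions:

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out=64, hidden=64):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                         nn.Linear(hidden, d_out))

class BearingGNN(nn.Module):
    """Encode-process-decode GNN with edge and node decoders."""
    def __init__(self, node_dim=9, edge_dim=3, latent=64, n_blocks=3):
        super().__init__()
        self.enc_v = mlp(node_dim, latent)          # encodes node features
        self.enc_e = mlp(edge_dim, latent)          # encodes edge features
        self.f_e = nn.ModuleList(mlp(3 * latent, latent)
                                 for _ in range(n_blocks))  # unshared weights
        self.f_v = nn.ModuleList(mlp(2 * latent, latent)
                                 for _ in range(n_blocks))
        self.dec_edge = mlp(latent, 2)              # per-edge contact force
        self.dec_node = mlp(latent, 2)              # per-node net force

    def forward(self, nodes, edges, senders, receivers):
        V, E = self.enc_v(nodes), self.enc_e(edges)
        for f_e, f_v in zip(self.f_e, self.f_v):
            # Edge update from sender/receiver latents, with residual.
            E = E + f_e(torch.cat([E, V[senders], V[receivers]], dim=-1))
            # Sum incoming messages at each receiver node, then update.
            agg = torch.zeros_like(V).index_add_(0, receivers, E)
            V = V + f_v(torch.cat([V, agg], dim=-1))
        return self.dec_node(V), self.dec_edge(E)

# `senders`/`receivers` are long tensors indexing rows of `nodes`,
# with two directed edges per roller-ring contact.
```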
## 4 Case study
In this study, we utilized the dynamic bearing model described in Section 2 to simulate trajectories of four bearings with different numbers of rolling elements (13, 14, 15, and 16). These bearings were modeled after the SKF N209 ECP cylindrical roller bearing, which has a pitch diameter of 65.5 mm and a roller diameter of 11 mm. The length of the rollers is 12 mm. Additionally, horizontal and vertical springs with a stiffness of 5e6 N/m connect the inner ring to the ground. Dampers with damping coefficients of 5e4 Ns/m and 1e4 Ns/m are used to dampen the inner and outer rings, respectively. The simulations were conducted under zero-rpm conditions, with an initial external load applied to the outer ring. The range of initial external loads varied from 5000 N to 23000 N, with increments of 2000 N.
During each trajectory, an external load was instantaneously applied at the 0th time step. The initial conditions of the bearing are all set to 0, so this results in a step response of the system. The external load was doubled at the 2500th time step and subsequently reduced back to the initial load at the 5000th time step. An example of the variation in external load over time is depicted in Figure 5.
**Training Data**: The GNN was trained using the trajectories of bearings equipped with 13, 14, and 16 rolling elements. At each time step, the positions and velocities of each component (inner ring, outer ring, and rolling elements) were
Figure 4: Encode-Process-Decode architecture with message passing GNN: The Encoder transforms the graph \(\mathcal{G}\) into \(\mathcal{G}_{0}\). Processor 1 takes \(\mathcal{G}_{0}\) as input and transforms it into \(\mathcal{G}_{1}\) after a single message passing step. Subsequent Processors sequentially transform \(\mathcal{G}_{1}\). Finally, the Decoder decodes both the latent nodes and edges of the graph \(\mathcal{G}_{M}\).
used to construct the graph representation \(\mathcal{G}_{t}\) of the bearing system. The roller loads were used as the ground truth for the decoded edges, while the total forces acting on each component were used as the ground truth for the decoded nodes. The objective of the GNN was to predict the roller loads and net forces on each component at each time step.
**Testing Data**: The model was evaluated on the bearing with 15 rollers and an initial load of 13000 N. The applied load during this validation case is shown in figure 5. We tested its ability to predict the roller loads and net forces on the components given the state vector of each component at a specific time \(t\), under different external loads applied to the bearing.
## 5 Results
In this section, we evaluate the performance of the trained GNN on the test data. We compare the GNN's predictions with those generated by the 2D dynamic model, focusing on a single time step without performing roll-outs. The evaluation covers both the loaded roller (top of the bearing) and the non-loaded roller (bottom of the bearing), as illustrated in Figure 6.
Figure 5: Applied external load on the outer ring
Figure 6: Position of top and bottom rolling elements
Figure 7: Prediction of loading and unloading of bottom roller with inner-ring dispacement
Figure 8 illustrates the predicted loads for the loaded roller number 8 and compares it to the simulated data. It is worth noting the presence of oscillatory dynamics resulting from the sudden application of an external load at the time steps 0 and 2500, as depicted in Figure 5. These dynamics arise due to the connection of the inner and outer rings to the ground through dampers.
The proposed GNN demonstrates its capability to accurately predict loads even in dynamic regimes of the bearing for the loaded rollers. Moreover, the GNN's performance shows a significant improvement once the bearing reaches a steady state. This improvement is further supported by the percentage error (\(\frac{prediction-groundtruth}{groundtruth}*100\%\)) for the loaded rollers, depicted in Figure 9 for 50-time steps.
Figure 10 illustrates the predicted load for the unloaded roller. It can be observed that the GNN predicts still small loads for the unloaded rollers even though the ground truth value is zero. While in the first plot in the figure, higher errors in predictions are observed until 50th-time steps, the performance improves once the initial oscillatory dynamics subside. In the second plot, the same observations can be made, however, the magnitude of errors in the initial dynamics phase is lower.
Figures 11 and 12 present a comparison between the predicted forces on the inner ring and outer ring, respectively, and the ground truth at different time-step ranges. It is evident that when a sudden load is applied at the 0th and 2500th-time steps, both the inner and outer rings experience high dynamical forces. The figures indicate that the GNN predicts a small constant force during these instances, which suggests a limitation in accurately capturing short-term dynamical forces. However, as the rings return to a stable dynamics regime, the GNN demonstrates accurate force predictions
**Verification of the learned underlying physics**: To verify whether the GNN has learned the correct underlying physics, an artificial trajectory of a bearing with 15 rollers was generated. The experimental setup involved fixing the center of the outer ring at the origin and providing displacement to the inner ring along the y-direction. Initially, the inner ring was centered at the origin and then displaced vertically within the range of -0.05 mm to +0.05 mm. This is equal to a compression of the roller which is located at the bottom-dead-center in the range of -0.05 mm to +0.05 mm.
Figure 8: Comparison of predictions of roller loads by the GNN for the loaded roller \(\#8\) (shown in Figure 6) with the results obtained from the dynamic bearing simulator (ground truth). Time-step ranges for plots from left to right: 0-250 \(\&\) 2500-2750
Figure 9: Percentage Error in roller load predictions for loaded roller \(\#8\). Error at time step 1 is around 70%.
Figure 11: Comparison of predicted force on the inner-ring by GNN with the dynamic bearing simulator (ground truth) for single time-step predictions. Time-step ranges for plots from left to right: 0-250 \(\&\) 2500-2750
Figure 12: Comparison of predicted force on the outer-ring by GNN with the dynamic bearing simulator (ground truth) for single time-step predictions. Time-step ranges for plots from left to right: 0-250 \(\&\) 2500-2750
Figure 10: Comparison of predictions of roller loads by the GNN for the non-loaded roller \(\#\)0 (Figure 6) with the results obtained from the dynamic bearing simulator (ground truth). Time-step ranges for plots from left to right: 0-250 \(\&\) 2500-2750
The generation of the artificial trajectory involved computing the initial positions of each rolling element based on the known radii of the inner and outer rings. We made the assumption that the rolling elements were positioned midway between the circumferences of the inner and outer rings and uniformly distributed along the 360-degree rotation of the bearing.
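Under these assumptions, generating the roller centers is a few lines of code; a minimal sketch with our own names, where `r_inner` and `r_outer` denote the radii of the two circumferences between which the rollers sit:

```python
import numpy as np

def roller_positions(inner_xy, outer_xy, r_inner, r_outer, n_rollers=15):
    """Roller centers placed midway between the inner and outer ring
    circumferences, uniformly spaced over 360 degrees."""
    angles = 2.0 * np.pi * np.arange(n_rollers) / n_rollers
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    p_inner = np.asarray(inner_xy) + r_inner * dirs
    p_outer = np.asarray(outer_xy) + r_outer * dirs
    return 0.5 * (p_inner + p_outer)   # midway assumption from the text
```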
Figure 7 depicts the predicted and true loads as a function of roller deformation for the bottom dead center roller in the bearing (see Figure 6). When the inner ring is displaced in the negative y-direction, the bottom-dead-center roller experiences compression, resulting in positive loads. Conversely, displacement of the inner ring in the positive y-direction leads to the unloading of the roller, causing it to experience zero loads.
The GNN successfully predicts the change in load on the roller for inner-ring displacements of up to 0.02 mm in both the positive and negative y-directions. Moreover, it is particularly noteworthy that the GNN accurately captures the unloading phenomenon, faithfully reproducing the non-linear loading graph. It can also be noted that the largest deviation between the GNN's prediction and the ground truth is only 15 percent.
These findings demonstrate the GNN's capability to understand and reproduce the expected load changes in response to inner-ring displacements for different rollers within the bearing system. This indicates that the GNN has indeed learned the correct underlying physics of the bearing system, as it accurately predicts the expected behavior of the rollers under varying inner-ring displacements.
## 6 Conclusions
This study demonstrates the successful application of a graph neural network framework for predicting the dynamics of bearings. By representing the bearing as a graph and utilizing a message-passing graph neural network, we accurately predict loads at individual time steps based on external load and ring positions/velocities. Our study demonstrates the ability to infer dynamics from trajectory measurements without explicit stiffness, mass, and damping information. In contrast to pure data-driven methods, our approach offers interpretability, generalizability to new conditions such as external load, and flexibility to adapt to varying bearing configurations.
This proof-of-concept study paves the way for future research, wherein roll-out trajectories can be generated from initial conditions. To enhance accuracy, our future research aims to include dampers and springs that connect the rings to the ground. This extension will help address the significant errors in force predictions on inner and outer rings during the oscillatory dynamics regime that occurs during sudden loading. Additionally, future work will consider including bearing rotation as an important parameter. While our GNN is currently trained on Hertzian contact, it has the potential to capture intricate Elasto-hydrodynamic forces with measured data or FEA simulations, supported by the universal approximation theorem.
This study highlights the potential of graph neural networks in modeling bearing dynamics and opens up new possibilities for advancing bearing diagnostics, prognostics, and the development of real-time operational digital twins for monitoring the health of rotating machinery.
|
2309.17363 | Relational Constraints On Neural Networks Reproduce Human Biases towards
Abstract Geometric Regularity | Uniquely among primates, humans possess a remarkable capacity to recognize
and manipulate abstract structure in the service of task goals across a broad
range of behaviors. One illustration of this is in the visual perception of
geometric forms. Studies have shown a uniquely human bias toward geometric
regularity, with task performance enhanced for more regular and symmetric forms
compared to their geometrically irregular counterparts. Such studies conclude
that this behavior implies the existence of discrete symbolic structure in
human mental representations, and that replicating such behavior in neural
network architectures will require mechanisms for symbolic processing. In this
study, we argue that human biases towards geometric regularity can be
reproduced in neural networks, without explicitly providing them with symbolic
machinery, by augmenting them with an architectural constraint that enables the
system to discover and manipulate relational structure. When trained with the
appropriate curriculum, this model exhibits human-like biases towards symmetry
and regularity in two distinct tasks involving abstract geometric reasoning.
Our findings indicate that neural networks, when equipped with the necessary
training objectives and architectural elements, can exhibit human-like
regularity biases and generalization. This approach provides insights into the
neural mechanisms underlying geometric reasoning and offers an alternative to
prevailing symbolic "Language of Thought" models in this domain. | Declan Campbell, Sreejan Kumar, Tyler Giallanza, Jonathan D. Cohen, Thomas L. Griffiths | 2023-09-29T16:12:51Z | http://arxiv.org/abs/2309.17363v1 | Relational Constraints on Neural Networks Reproduce Human Biases Towards Abstract Geometric Regularity
###### Abstract
Uniquely among primates, humans possess a remarkable capacity to recognize and manipulate abstract structure in the service of task goals across a broad range of behaviors. One illustration of this is in the visual perception of geometric forms. Studies have shown a uniquely human bias toward geometric regularity, with task performance enhanced for more regular and symmetric forms compared to their geometrically irregular counterparts. Such studies conclude that this behavior implies the existence of discrete symbolic structure in human mental representations, and that replicating such behavior in neural network architectures will require mechanisms for symbolic processing. In this study, we argue that human biases towards geometric regularity can be reproduced in neural networks, without explicitly providing them with symbolic machinery, by augmenting them with an architectural constraint that enables the system to discover and manipulate relational structure. When trained with the appropriate curriculum, this model exhibits human-like biases towards symmetry and regularity in two distinct tasks involving abstract geometric reasoning. Our findings indicate that neural networks, when equipped with the necessary training objectives and architectural elements, can exhibit human-like regularity biases and generalization. This approach provides insights into the neural mechanisms underlying geometric reasoning and offers an alternative to prevailing symbolic "Language of Thought" models in this domain.
## 1 Introduction
Humans have the amazing capability of building useful abstractions that can capture regularities in the external world. Understanding what is responsible for this special feature of human intelligence relative to other animals is a longstanding goal in cognitive science (Penn et al., 2008; Berwick and Chomsky, 2016). One domain in which cognitive scientists have observed this "human singularity" (Dehaene et al., 2022) is in geometric reasoning: early _Homo sapiens_ 100,000 years ago were able to produce structured abstract geometric shapes and drawings on cave walls (Henshilwood et al., 2011), whereas similar behaviors have not been observed for non-human primates despite years of human contact (Saito et al., 2014).
Such observations, as well as rigorous empirical work (e.g., Sable-Meyer et al. 2021; Sable-Meyer et al. 2022) have led some cognitive scientists to conclude that human mental representations uniquely contain discrete domain-specific symbols that are recursively and compositionally combined to produce abstractions that support the capacity for generalization that is characteristic of human behavior (Dehaene et al., 2022). A corollary of this hypothesis is that artificial neural networks cannot, in principle, produce
human-like intelligence without the exogenous addition of explicit symbolic machinery and/or representations (Dehaene, 2021; Marcus, 2020). Indeed, empirical work in this domain has shown that explicitly symbolic models fit human behavior better than standard neural networks (Sable-Meyer et al., 2021). This has led to the view, by some, that symbolic "Language of Thought" models are the best models of humans' mental representations (Quilty-Dunn et al., 2022).
However, the fact that human behavior, or their _inductive biases_, may be described effectively with abstract symbolic processing does not necessarily imply that their internal representations are based on discrete symbols (Griffiths et al., 2023). Consequently, there may be other forms of representations, such as the continuous vector spaces of neural networks, that could, under the right conditions, produce this behavior without explicit symbolic machinery (McCoy et al., 2018). In the present work, we provide an existence proof of this point by revisiting recent empirical cognitive science work showing humans' regularity biases towards abstract geometric concepts (Sable-Meyer et al., 2021; 2022). We show that standard neural networks augmented with a simple constraint that favors relational information processing can replicate human generalization and regularity biases without needing to build in explicit symbolic machinery. Specifically, we implement an architectural motif, known as the _relational bottleneck_ (Webb et al., 2023a), that allows networks to exploit relations between objects rather than the attributes of individual objects.
We focus on the results of two studies. The first is the work of Sable-Meyer et al. (2022), in which humans were tested on a standard working memory task, Delayed-Match to Sample (DMTS), using image stimuli sampled from a generative Language of Thought model of geometric concepts. The second is a study by Sable-Meyer et al. (2021), in which humans and non-human primates were tested on a version of the Oddball Detection task, a simple categorization paradigm in which participants identify a deviant stimulus in a group of quadrilateral stimuli. We show that a standard neural network, augmented with a relational bottleneck and trained with an appropriately designed curriculum using the same data as the studies by Sable-Meyer et al. (2021) and Sable-Meyer et al. (2022), exhibited human-like biases for abstract geometric regularity. These results offer an alternative interpretation of such biases, suggesting that with the appropriate inductive biases and curriculum, neural networks can exhibit features associated with the capacity for symbolic processing without the need to hardcode the network with symbolic representations and/or mechanisms.
## 2 Historical Background and Related Work
For decades, cognitive scientists and AI researchers have embraced two main approaches to building intelligent systems: symbolic models (Fodor, 1975) and neural networks (Rumelhart and McClelland, 1986). Fodor (1975) proposed the "Language of Thought" (LoT) hypothesis: that higher-order cognition in humans is the product of recursive combinations of pre-existing, conceptual primitives, analogous to the way in which sentences in a language are constructed from simpler elements. Symbolic models are well-suited to naturally embed the abstract, structured knowledge humans possess, such as causal theories (Goodman et al., 2011) or hierarchical motor programs that draw handwritten characters (Lake et al., 2015). Neural networks, on the other hand, emphasize _emergence_ of these abstract concepts purely from data within completely unstructured, distributed representations (McClelland et al., 2010). Despite the incredible recent success of neural networks in machine learning, cognitive scientists have hypothesized that their systematic failure at generalizing out of their training distribution comes from a failure to embed the kinds of abstract structural knowledge that can exist in symbolic models (Lake et al., 2017; Marcus, 2003).
Recent work has suggested that these capacities may emerge through learning in neural networks that implement _relational reasoning_. Relational reasoning involves abstracting over the details of particular stimuli or domains and extracting more general forms of structure that are broadly useful for capturing regularities in the external world (Gentner, 1983; Holyoak, 2012). This can be accomplished in neural networks by introducing an architectural inductive bias: the relational bottleneck (Webb et al., 2023a). The general principle of the relational bottleneck is that some components of the network are restricted to operating on relations over representations rather than the representations themselves (Webb et al., 2020; 2023b; Mondal et al., 2023). For example, the network might be constrained to use the similarity or distance between two embeddings rather than the embeddings themselves. Critically, unlike many hybrid neuro-symbolic models (Plate, 1995; Touretzky, 1990; Mao et al., 2019) the relational bottleneck does not introduce pre-specified symbolic primitives or
any explicit mechanisms for symbolic processing, relying instead on the emergence of abstract concepts within unstructured, distributed representations. The motivation of the relational bottleneck is similar to that of other works that have built neural network architectures more sensitive to relational reasoning (Barrett et al., 2018; Santoro et al., 2017; Shanahan et al., 2020).
The Language of Thought (LoT) approach has been applied to a variety of domains in cognitive science, including learning causal theories (Goodman et al., 2011), representations of numbers (Piantadosi et al., 2012), and logical concepts (Piantadosi et al., 2016). However, geometry has recently emerged as one of the domains in which the strongest arguments in favor of this kind of representation have been made (Sable-Meyer et al., 2021, 2022; Dehaene et al., 2022). This setting is also a natural one in which to explore the predictions of neural network models, as geometric stimuli can be presented directly to models in the form of images. In the remainder of the paper, we present a detailed analysis of two of the studies that have been held up as providing support for the LoT approach, demonstrating how neural networks that are constrained to focus on relations are capable of reproducing the key patterns in human behavior.
## 3 Training Neural Networks on a Language of Thought for Geometry
### Background
Sable-Meyer et al. (2022) presented a study designed to test the Language of Thought hypothesis in the setting of geometry. The study was based on a model of geometric concept learning also developed by Sable-Meyer et al. (2022). This model framed concept learning as program induction within the DreamCoder framework (Ellis et al., 2021). A base programming language was defined such that programs can be written to generate shapes: motor programs that draw geometric shapes are generated through recursive combination of symbolic primitives within a Domain Specific Language (DSL, Fig. 1A). The DSL contains motor primitives, such as tracing a particular curve and changing direction, as well as primitives to recursively combine subprograms such as \(Concat\) (concatenate two subprograms together) and \(Repeat\) (repeat a subprogram \(n\) times). These symbolic programs can then be rendered into images such as the ones seen in Fig. 1. Since each image has an underlying program, the minimum description length (MDL; Ellis et al. 2021) of the program was used to model the psychological complexity of the corresponding geometric pattern.
Abstract geometric patterns were generated by this symbolic LoT model (Fig. 1A) and used as stimuli in a standard working memory task, based on a Delayed-Match to Sample (DMTS, Fig. 1B) paradigm. In this task, human participants were instructed to memorize a geometric stimulus.
Figure 1: **Geometric Language of Thought and Delayed Match to Sample Task** (A) Primitives of the generative Language of Thought (LoT) model implemented in Sable-Meyer et al. (2022). Primitives are recursively composed to produce symbolic programs that can be rendered into abstract geometric pattern stimuli. (B) Schematic of the working memory Delayed-Match to Sample (DMTS) task. A target stimulus is shown at the beginning, followed by a delay period, and then the target image must be selected out of a group of choice images containing distractors.
Following the memorization phase, participants were presented with a blank screen for two seconds. Subsequently, they were shown six option stimuli, among which one matched the original stimulus they had memorized (the target image), while the remaining five were distractors. The objective for participants was to accurately select the image they had seen during the encoding phase and avoid choosing any of the distractor images.
In preceding work (Sable-Meyer et al. 2021, discussed further in the next section), the authors suggested that perception of abstract geometric stimuli can be based on two systems: a high-level, general-purpose symbolic system, supposedly only available to humans; and a lower-level, domain-specific shape invariant object recognition system, available to both humans and non-human primates, that can be modeled by a standard Convolutional Neural Network (CNN) model of object recognition in the brain (specifically, the Ventral Visual Stream; Kubilius et al. 2019). To study the first system, Sable-Meyer et al. (2022) chose distractor stimuli that were maximally similar to the target image based on hidden representations of a pre-trained CNN model of the Ventral Visual system (CorNet; Kubilius et al. 2019) and the average grey-level of the image. Even with difficult distractors, humans excelled at the task, with error rates as low as \(1.82\%\).
### Neural Network Modeling
We trained two Recurrent Neural Networks (RNNs; one baseline and one implementing a relational bottleneck) on this task, using the LoT model of Sable-Meyer et al. (2022) to generate a large training corpus of geometric stimuli and holding out the specific stimuli used in the human experiments for the test set. Stimuli were encoded by a CNN encoder, which comprised a pre-trained CNN model (CorNet; Kubilius et al. 2019). On each trial, an encoded representation of the stimulus was used as the input to an LSTM (Fig. 2A), followed by encoded representations of three additional timesteps' worth of blank input images (Fig. 2A). The resulting output embedding of the LSTM corresponds to the working memory content of the human participants during choice time ("Memory Embedding", see Fig. 2A). The model is subsequently presented with the choice images (Fig. 2). We implemented two types of decision processes to classify the target image out of the six choice images (one target, five distractors). One of these was a standard baseline model, and the other was augmented with a relational bottleneck (Webb et al. 2023a; Fig. 2B).
Footnote 4: The delay period for the human experiments was 2 seconds, while the average stimulus presentation time was around 1.2s. Given this, we believe three timesteps makes the task for the networks at least as hard if not harder than the human task.
Figure 2: **DMTS Task Architecture Implementation** (A) Target and delay images are passed through a pretrained CNN encoder (Kubilius et al., 2019). The outputs of the encoder are passed to an LSTM, producing memory embeddings that correspond to participants' working memory representation of the initial target stimulus when performing the DMTS task. Each of the choice images is encoded using the same CNN encoder. (B) In the baseline model (left), the memory embeddings are simply concatenated to the choice embeddings and passed to a fully connected layer that produces the logits classifying the target image. In the relational bottleneck model (right), the embeddings are used to compute the similarity between each choice embedding and the memory embedding, and these similarities are used to produce the logits.
For the baseline model, the embeddings of the six choice stimuli, along with the memory embedding, were concatenated and simultaneously fed into a standard feedforward layer that was used to classify the target image. For the Relational Bottleneck model, the cosine similarity between the memory embedding and each choice embedding was computed; those similarities were then used to produce the prediction of the target image. This restricted the model to processing the _relations_ between its memory of the target image and the choice stimulus, without "intrusion" from any stimulus-specific attributes of the choice stimuli. During training, distractors were chosen randomly, but during testing, we used the exact same trials that were presented to human participants in the empirical study Sable-Meyer et al. (2022), in which difficult distractors were chosen based on similarity to pretrained CorNet representations (Kubilius et al., 2019) and average grey-levels.
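For concreteness, the two decision heads can be sketched as follows. This is a minimal PyTorch sketch; the tensor shapes, layer sizes, and class names are illustrative assumptions, not the authors' exact implementation, and in the relational head the similarities are used directly as logits (a small readout over them is an equally valid variant).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaselineHead(nn.Module):
    """Concatenate the memory embedding with all six choice embeddings and
    classify the target with a fully connected layer."""
    def __init__(self, dim, n_choices=6):
        super().__init__()
        self.fc = nn.Linear(dim * (n_choices + 1), n_choices)

    def forward(self, memory, choices):
        # memory: (batch, dim); choices: (batch, 6, dim)
        flat = torch.cat([memory, choices.flatten(start_dim=1)], dim=1)
        return self.fc(flat)  # logits over the six choices

class RelationalHead(nn.Module):
    """Relational bottleneck: the decision depends only on the cosine
    similarity between the memory embedding and each choice embedding, so
    stimulus-specific attributes of the choices cannot leak through."""
    def forward(self, memory, choices):
        sims = F.cosine_similarity(memory.unsqueeze(1), choices, dim=-1)
        return sims  # (batch, 6), treated as logits
```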
### Results
We tested both implementations of the model on the exact same trials given to human participants in Sable-Meyer et al. (2022). Performance of the baseline model was well below human performance (Fig. 3B). However, the relational bottleneck model generalized extremely well to the test set, performing significantly better than the baseline model (\(p<0.001\)) and approximating the performance of human participants. In addition, it handled longer delay periods substantially better than the baseline model (Fig. 3C), demonstrating its ability to maintain abstract representations of these geometric stimuli more robustly through the delay period. The results suggest that it is possible to achieve human-like performance on this task with a neural network model augmented by a simple constraint that favors learning relations, without imbuing the model with any explicit symbolic representations. The training corpus we used had stimuli containing very rich geometric abstractions (see Fig. 1A and Fig. 7). While our results suggest that inclusion of a relational bottleneck may be _necessary_ to produce representations that support out-of-distribution generalization, it is not clear whether it is _sufficient_ even in cases of a more impoverished training corpus.
Previous work has shown that a rich training data distribution can also contribute to such generalization (Chan et al., 2022). To address this, we tested whether the relational bottleneck would produce similar human-like performance when training on a relatively more restricted training corpus.
Figure 3: **DMTS Results** (A) Training accuracy across epochs of baseline and relational bottleneck models. Both models eventually reach near-perfect accuracy. (B) Results on tasks held out from model training that were taken directly from the human trials in Sable-Meyer et al. (2022). The black bar denotes chance performance, while the green bar denotes mean human performance. Error bars are 95% confidence intervals over model training seeds. The Relational Bottleneck model performs much better out of distribution. (C) We increased the delay period from 3 timesteps to 20. Though both models suffer in performance, the Relational Bottleneck model still performs much better.
## 4 Human-like vs Monkey-like Processing of Quadrilateral Stimuli
### Background
Inspired by early anthropological work investigating abstract geometric concepts in cave drawings and behavioral research comparing geometric reasoning in humans and non-human primates, Sable-Meyer et al. (2021) compared diverse human groups (varying in education, cultural background, and age) to non-human primates on a simple oddball discrimination task. Participants were shown a set of five reference shapes and one "oddball" shape and prompted to identify the oddball (Fig. 4). The reference shapes were generated based on basic geometric regularities: parallel lines, equal sides, equal angles, and right angles. Reference shapes consisted of 11 types of quadrilaterals varying in their geometric regularity, from squares (most regular) to random quadrilaterals containing no parallel lines, right angles, or equal angles/sides (least regular) (Fig. 4B). In each trial, five different versions of the same reference shape (e.g., a square) were shown in different sizes and orientations. The oddball shape was a modified version of the reference shape, in which the lower right vertex was moved such that it violated the regularity of the original reference shape (e.g., moving the lower right vertex of a trapezoid such that it no longer has parallel sides). Fig. 4A shows an example trial.
Sable-Meyer et al. (2021) found that humans, across many different ages, cultures, and education levels, are naturally sensitive to these geometric regularities (right angles, parallelism, symmetry, etc.) whereas non-human primates are not. Specifically, they found that human performance is best on the Oddball task for the most regular shapes, and systematically decreases as shapes become more irregular. Conversely, non-human primates perform well above chance, but they perform worse than humans overall and, critically, show no influence of geometric regularity (Fig. 4B).
To address this pattern of findings, Sable-Meyer et al. (2021) implemented two computational models: a symbolic model and a neural network model. The symbolic model implemented oddball identification using an explicitly symbolic feature space constructed from the shapes' discrete geometric properties. The neural network model was a pretrained CNN model of the Ventral Visual stream (CORNet; Kubilius et al. 2019). Sable-Meyer et al. (2021) found that the symbolic model
Figure 4: **Quadrilateral Oddball Task** (A) The Oddball task of Sable-Meyer et al. (2021) used six quadrilateral stimulus images, in which five images were of the same reference shape (differing in scale and rotation) and one was an oddball (highlighted in red) that diverged from the reference shape’s geometric properties. In this example, the reference shape is a rectangle; note that the Oddball does not have four right angles like the rectangles. (B) Sable-Meyer et al. (2021) examined error rates for humans, monkeys, and pre-trained CNNs (Kubilius et al., 2019) across quadrilaterals of decreasing geometric regularity (from squares, which have the highest regularity, to random quadrilaterals that have little regularity). Humans performed significantly better on more regular images, with error rates trending significantly upwards with decreasing regularity, whereas monkey and CNN error rates did not exhibit a significant error rate trend as a function of regularity.
fit the human performance of their Oddball task significantly better than the neural network model, and in particular it captured the effect of increasing error with increasing geometric irregularity. Conversely, the neural network model fit the monkey behavior better, exhibiting no systematic relationship with the level of geometric regularity (Fig. 4B). They interpreted this as evidence that the human sensitivity to geometric regularity requires the presence of unique symbolic representations that are absent in both neural networks and non-human primates.
### Neural Network Modeling
Here, we show that a neural network trained on the same stimuli used by Sable-Meyer et al. (2021), and provided with a relational bottleneck, exhibits the sensitivity to geometric regularity observed in humans, without the explicit specification of discrete symbolic representations.
We started with the ResNet CNN architecture, but we modified it to directly compute the Oddball judgements end-to-end using the relational bottleneck, following the method described in Kerg et al. (2022) (Fig. 5A). Specifically, a \(6\times 6\) cosine similarity matrix is computed across the six stimuli, and the similarity matrix is fed into a feedforward layer that produces an Oddball decision. This structure forces the model to make decisions based on the relations between choice stimuli rather than the attributes of an individual choice stimulus.
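This end-to-end decision head can be sketched as below; a minimal PyTorch sketch in which the layer width and class name are illustrative assumptions rather than the exact configuration used in the study.

```python
import torch.nn as nn
import torch.nn.functional as F

class OddballHead(nn.Module):
    """Compute a 6x6 cosine-similarity matrix over the six choice embeddings
    and map it to an oddball decision with a feedforward layer, so the
    decision depends only on relations between the stimuli."""
    def __init__(self, n_stimuli=6):
        super().__init__()
        self.fc = nn.Linear(n_stimuli * n_stimuli, n_stimuli)

    def forward(self, embeddings):
        # embeddings: (batch, 6, dim), one row per choice stimulus
        normed = F.normalize(embeddings, dim=-1)
        sim = normed @ normed.transpose(1, 2)      # (batch, 6, 6) similarities
        return self.fc(sim.flatten(start_dim=1))   # logits: which stimulus is the oddball
```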
Figure 5: **Oddball Task Architecture Implementation** (A) To make an oddball decision using the Relational Bottleneck, we compute an oddball judgement directly from the \(6\times 6\) similarity matrix of the encoder’s choice embeddings. (B) We implemented two types of contrastive pretraining on a ResNet CNN architecture: (top) a standard contrastive objective based on SimCLR (Chen et al., 2020) and (bottom) a novel contrastive objective using distances in a geometric feature space.
We pretrained the CNN using one of two contrastive objectives (Fig. 5B): **Standard** and **Geometric**. The **Standard** objective was based on SimCLR (Chen et al., 2020). Specifically, simple random rotations and scaling were applied to individual quadrilateral images, and then the CNN was trained to push its representations of those images together, to be more similar (i.e., less distant) to their augmented counterparts, and pull its representations of different quadrilateral images apart, to be more dissimilar (i.e., more distant) from each other. The **Geometric** objective used the geometric features utilized in Sable-Meyer et al. (2021) as the feature space over which to define distances. Those geometric features were binary vectors corresponding to the presence or absence of equal angles, equal sides, parallel lines, and right angles of the quadrilateral. During training, this effectively pushed quadrilaterals with similar geometric features together and pulled quadrilaterals with different geometric features apart. This allowed us to train the network to exhibit the same abstractions defined by the geometric features _without building in the geometric features themselves_. During testing and inference, the geometric features were completely discarded. This is similar to previous work instilling human biases into neural network agents (Kumar et al., 2022), in which the _tabula rasa_ neural networks that were co-trained with symbolic information exhibited human biases without explicitly implementing any symbolic representations.
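The sketch below shows one plausible instantiation of the Geometric objective; the exact loss used may differ, and the function name and the distance-matching form are our assumptions. The idea it illustrates is the one described above: the contrastive target is defined by distances in the binary geometric feature space, and the features themselves are discarded after training.

```python
import torch
import torch.nn.functional as F

def geometric_contrastive_loss(embeddings, geo_features):
    """One plausible form of the Geometric objective: make pairwise embedding
    distances mimic pairwise distances between binary geometric feature
    vectors (equal angles, equal sides, parallel lines, right angles).

    embeddings:   (batch, dim) CNN outputs for a batch of quadrilaterals
    geo_features: (batch, 4) binary vectors; used only during pretraining
    """
    z = F.normalize(embeddings, dim=-1)
    emb_dist = 1.0 - z @ z.T                                  # cosine distances
    feat = geo_features.float()
    feat_dist = torch.cdist(feat, feat, p=1) / feat.shape[1]  # normalized Hamming
    return F.mse_loss(emb_dist, feat_dist)
```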
### Results
Similar to the effect observed in the study by Sable-Meyer et al. (2022) discussed in the previous section, the geometric regularity effect observed for humans in Sable-Meyer et al. (2021) was an inverse relationship between geometric regularity and error rate (see green plot in Fig. 4B). For example, humans performed best on the most regular shapes, such as squares and rectangles. This regularity effect was again absent in the monkey error rates (Fig. 4B).
Following Sable-Meyer et al. (2021), we show, for each of our networks, the error rates for quadrilaterals sorted by geometric regularity and how well they match human and monkey error rates (Fig. 6). The Geometric pre-trained model showed a strong fit to human behavior (\(r=0.72\)) and a significant effect of geometric regularity (\(p<0.001\); Fig. 6). The Standard (SimCLR) pre-trained model, however, showed a strong fit to _monkey_ behavior (\(r=0.70\)), but not to _human_ behavior (\(r=0.005\)), nor did it show the geometric regularity effect (\(p=0.99\); Fig. 6). This indicates that, although the relational bottleneck was necessary, it was not sufficient on its own to reproduce human behavior on this task. However, coupled with the appropriate training, it was able to reproduce the pattern of results observed for human behavior in Sable-Meyer et al. (2021). These results suggest that, with the appropriate structural biases and training experience, it is possible for neural networks to
Figure 6: **Oddball Task Results** (A) Mean error rates over the 11 types of quadrilaterals for each type of network. The Geometric pre-trained network showed a significant trend between error rate and geometric regularity (\(p<.001\)), while the Standard (SimCLR) pre-trained network did not (\(p=0.99\)). (B). We correlated error rates across quadrilaterals for each model with the corresponding error rates of humans and monkeys. Geometric pre-training of quadrilaterals led to human-like error patterns, whereas SimCLR pre-training led to more monkey-like error patterns. Error bars are 95% confidence intervals across different model training runs.
learn representations that exhibit human-like biases in the geometric oddball task without explicitly imposing symbolic representations on the network.
## 5 Discussion
A prevailing theory in cognitive science is that abstractions that support strong generalization reflect the presence of symbolic systems innate in humans that may be absent in animals (Fodor, 1975; Quilty-Dunn et al., 2022; Dehaene et al., 2022). Along similar lines, it has been argued that, without explicitly imbuing neural networks with such capabilities, they will not be able to exhibit the same cognitive flexibility as humans (Marcus, 2020; Dehaene, 2021). Empirical findings in the studies by Sable-Meyer et al. (2021) and Sable-Meyer et al. (2022) have been offered in support of these conjectures. Here, we provide evidence to the contrary, showing how the introduction of a simple, neurally plausible relational inductive bias, coupled with the appropriate training experiences, is sufficient to reproduce behavior consistent with the formation of abstract representations in neural networks.
The domain of the empirical work we re-examine involves the visual perception of geometric patterns (Sable-Meyer et al., 2021, 2022). Sable-Meyer et al. (2022) show that humans are adept at processing geometric patterns, using a delayed-match-to-sample working memory task with stimuli sampled from a generative probabilistic program induction model (Ellis et al., 2021). We trained two types of RNN models on this task: a baseline model and a model with a relational bottleneck that is biased to focus on _relations between stimuli_ to classify the target image. Consistent with the claims of Sable-Meyer et al. (2022), a baseline model does not reach human-level performance out of its training distribution. However, a model with the relational bottleneck does indeed reach human performance on the test set, showing that a simple constraint that favors learning relations can allow neural networks to achieve human-level performance on this task.
Sable-Meyer et al. (2021) further show that humans are sensitive to geometric regularity when performing a visual perception task, the Oddball task, using quadrilateral stimuli, whereas non-human primates and standard CNNs (Kubilius et al., 2019) are not. Here, we found that even with a relational bottleneck, a network trained with a standard contrastive learning objective produced the same monkey-like behavior observed from the CNN trained by Sable-Meyer et al. (2021). However, when trained contrastively on distances produced by geometric features, the model did reproduce the human geometric regularity effect.
One important difference between the two tasks is that the Delayed-Match-to-Sample task (Sable-Meyer et al., 2022) used reaction times (RTs) to show the geometric regularity effect in humans, whereas the Oddball task (Sable-Meyer et al., 2021) used error rates. This is because error rates in the former were near zero, and therefore RTs were required to observe significant effects. One limitation of our study is that we did not construct an analogue to human RTs for our RNN models. Instead, we used out-of-training-distribution accuracy as the main performance metric. In the oddball task (Sable-Meyer et al., 2021), where human error rates were higher, we were able to conduct a more direct comparison, where we observed a clear correspondence between human (or monkey) behavior and our models.
A further difference between the two experiments is that the model of the Oddball task required geometric contrastive pre-training to match human performance (producing monkey-like behavior without this objective). We believe this is because the dataset used in the Delayed-Match-to-Sample task features a richer distribution of stimuli (Fig. 7) sampled from a Bayesian program induction model (DreamCoder; Ellis et al. 2021). Building a training distribution of samples from such a Bayesian model has an interpretation of effectively distilling the Bayesian model's rich prior into a neural network (McCoy and Griffiths, 2023). In contrast, the Oddball dataset consisted of a relatively simple set of 11 quadrilaterals, which may not be sufficiently diverse to allow the network to extract more abstract representations (see Chan et al. 2022 for a similar argument about how the richness of training data affects the post-training capabilities of Large Language Models).
Our work provides evidence that simple modifications to standard neural networks are sufficient to reproduce human behavior on tasks used in cognitive science to showcase allegedly unique human capabilities. It may be possible that such geometric regularity biases can be instilled in neural networks by other methods. For example, previous work has shown Vision Transformer architectures,
like humans, are biased more towards shapes than textures (Tuli et al., 2021). In general, we suggest that human-like behavior and abstractions can be instilled in neural networks using a variety of strategies, including through specialized architectures (Webb et al., 2023, 2020), specialized loss functions/training curricula (Kumar et al., 2022; Kepple et al., 2022), and/or highly rich data distributions (McCoy and Griffiths, 2023; Chan et al., 2022).
A hallmark of human intelligence is the ability to develop highly general abstractions that capture the essential structure in their environments in a strikingly sample-efficient manner (Gershman, 2017; Lake et al., 2017). Our work highlights the possibility of neural network-based architectures achieving the same level of intelligence without built-in, explicitly symbolic machinery, recapitulating a classic debate in cognitive science (Rumelhart and McClelland, 1986). Given the success of this approach in the geometric setting, we anticipate that similar models may be able to capture behavior that has previously been explained in terms of symbolic representations in learning causal relationships, numerical representations, and logical concepts.
## 6 Acknowledgements
S.K. is supported by a Google PhD Fellowship. We thank Mathias Sable-Meyer for assisting us with accessing the data in his work and for general advice.
|
2305.00535 | Nearly Optimal Steiner Trees using Graph Neural Network Assisted Monte
Carlo Tree Search | Graph neural networks are useful for learning problems, as well as for
combinatorial and graph problems such as the Subgraph Isomorphism Problem and
the Traveling Salesman Problem. We describe an approach for computing Steiner
Trees by combining a graph neural network and Monte Carlo Tree Search. We first
train a graph neural network that takes as input a partial solution and
proposes a new node to be added as output. This neural network is then used in
a Monte Carlo search to compute a Steiner tree. The proposed method
consistently outperforms the standard 2-approximation algorithm on many
different types of graphs and often finds the optimal solution. | Reyan Ahmed, Mithun Ghosh, Kwang-Sung Jun, Stephen Kobourov | 2023-04-30T17:15:38Z | http://arxiv.org/abs/2305.00535v1 | # Nearly Optimal Steiner Trees using Graph Neural Network Assisted Monte Carlo Tree Search
###### Abstract
Graph neural networks are useful for learning problems, as well as for combinatorial and graph problems such as the Subgraph Isomorphism Problem and the Traveling Salesman Problem. We describe an approach for computing Steiner Trees by combining a graph neural network and Monte Carlo Tree Search. We first train a graph neural network that takes as input a partial solution and proposes a new node to be added as output. This neural network is then used in a Monte Carlo search to compute a Steiner tree. The proposed method consistently outperforms the standard 2-approximation algorithm on many different types of graphs and often finds the optimal solution.
## 1 Introduction
Graphs arise in many real-world applications that deal with relational information. Classical machine learning models, such as neural networks and recurrent neural networks, do not naturally handle graphs. Graph neural networks (GNN) were introduced by Gori et al. [24] in order to better capture graph structures. A GNN is a recursive neural network where nodes are treated as state vectors and the relationships between the nodes are quantified by the edges. Scarselli et al. [42] extended the notion of unfolding equivalence, which transfers the approximation property of feed-forward networks (Scarselli and Tsoi [43]) to GNNs.
Many real-world problems are modeled by combinatorial and graph problems that are known to be NP-complete. GNNs offer an alternative to traditional heuristics and approximation algorithms; indeed the initial GNN model [42] was used to approximate solutions to two classical graph problems: subgraph isomorphism and clique detection.
Recent GNN work [37, 48] suggests that combining neural networks and tree search leads to better results than just the neural network alone. Li et al. [37] combine a convolutional neural network with tree search to compute independent sets and other NP-hard problems that are efficiently reducible to the independent set problem. AlphaGo, by Silver et al. [44] combines deep convolutional neural networks and Monte Carlo Tree Search (MCTS) [12, 34] to assess Go board positions and reduce the search space. Xing et al. [48] build on this combination to tackle the traveling salesman problem (TSP).
Since Xing et al. [48] showed that the AlphaGo framework is effective for TSP, a natural question is whether this framework can be applied to other combinatorial problems such as the Steiner tree problem. Although both TSP and the Steiner tree problem are NP-complete, they are different. First, in the Steiner tree problem we are given a subset of the nodes called _terminals_ that must be spanned, whereas in TSP all nodes are equivalent. Second, the output of the Steiner tree problem is a tree, whereas the output of TSP is a path (or a cycle). When iteratively computing a TSP solution, the next node to be added can only be connected to the previous one, rather than having to choose from a set of nodes when growing a Steiner tree. Third, TSP and Go are similar in terms of instance length: the length of the game and the number of nodes in a TSP tour are both fixed, and taking an action in Go is analogous to adding a node to the tour, while the number of nodes in a Steiner tree varies depending on the graph instance. Finally, Xing et al. [48] only considered geometric graphs, which is a restricted class of graphs.
### Background:
The Steiner tree problem is one of Karp's 21 NP-complete problems [29]: given an edge-weighted graph \(G=(V,E)\), a set of terminals \(T\subseteq V\) and cost \(k\), determine whether there exists a tree of cost at most \(k\) that spans all terminals. For \(|T|=2\) this is equivalent to the shortest path problem, for \(|T|=|V|\) this is equivalent to the minimum spanning tree problem, while for \(2<|T|<|V|\) the problem is NP-complete [11]. Due to applications in many domains, there is a long history of heuristics, approximation algorithms and exact algorithms for the problem. The classical 2-approximation algorithm for the Steiner tree problem [22] uses the _metric closure_ of \(G\), i.e., the complete edge-weighted graph \(G^{*}\) with terminal node
set \(T\) in which, for every edge \(uv\), the cost of \(uv\) equals the length of a shortest \(u\)-\(v\) path in \(G\). A minimum spanning tree of \(G^{*}\) corresponds to a 2-approximation Steiner tree in \(G\). This algorithm is easy to implement and performs well in practice [2]. The last in a long list of improvements is the LP-based algorithm of Byrka et al. [9], with approximation ratio of \(\ln(4)+\varepsilon<1.39\). The Steiner tree problem is APX-hard [7] and NP-hard to approximate within a factor of 96/95 [10]. Geometric variants of the problem, where terminals correspond to points in the Euclidean or rectilinear plane, admit polynomial-time approximation schemes [4, 38].
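A minimal sketch of this 2-approximation follows, assuming a connected networkx graph with a `weight` attribute on every edge and the terminals given as a list; the function name is ours.

```python
import networkx as nx

def steiner_2approx(G, terminals):
    """Classical 2-approximation: build the metric closure on the terminals,
    take its MST, and expand every MST edge back into a shortest path of G."""
    closure, paths = nx.Graph(), {}
    for i, u in enumerate(terminals):
        dist, path = nx.single_source_dijkstra(G, u, weight="weight")
        for v in terminals[i + 1:]:
            closure.add_edge(u, v, weight=dist[v])
            paths[(u, v)] = path[v]
    edges = set()
    for u, v in nx.minimum_spanning_tree(closure).edges():
        p = paths[(u, v)] if (u, v) in paths else paths[(v, u)]
        edges.update(zip(p, p[1:]))
    # The union of the expanded paths may contain cycles; a final MST removes them.
    return nx.minimum_spanning_tree(G.edge_subgraph(edges))
```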
### Related Work:
Despite its practical and theoretical importance, the Steiner tree problem is not as well explored with machine learning approaches as other combinatorial and graph problems. In 1985, Hopfield et al. [27] proposed a neural network to compute feasible solutions for different combinatorial problems such as TSP. Bout et al. [14] developed a TSP objective function that works well in practice and Brandt et al. [8] provided different networks for solving TSP. Kohonen's 1982 self-organizing maps [35], an architecture for artificial neural networks, can also be used for such problems as shown by Fort [17, 3] and Favata et al. [16].
Recently, graph neural networks have been an active area of research. Lei et al. [36] introduced recurrent neural operations for graphs with associated kernel spaces. Gilmer et al. [23] study graph neural models as Message Passing Neural Networks. Garg et al. [21] generalized message-passing GNNs that rely on the local graph structure, proposing GNN frameworks that rely on graph-theoretic formalisms. GNNs have been widely used in many areas including physical systems [6, 41], protein-protein interaction networks [18], social science [26, 32], and knowledge graphs [25]. The survey of Zhou et al. [50] covers GNN methods and applications in general, and the survey of Vesselinova et al. [45] provides more details on attempts to solve combinatorial and graph problems with neural networks.
### Problem Statement:
In the standard optimization version of the Steiner tree problem we are given a weighted graph \(G=(V,E)\) and a set of terminals \(T\subseteq V\), and the objective is to compute a minimum cost tree that spans \(T\). A Steiner tree \(H\) must contain all the terminals, and the non-terminal nodes in \(H\) are the Steiner nodes. Several approximation algorithms have been proposed for this problem, including a classical 2-approximation algorithm that first computes the metric closure of \(G\) on \(T\) and then returns the minimum spanning tree [1]. In this paper we consider whether graph neural networks can be used to compute Steiner trees with close-to-optimal costs on a variety of different graph classes.
### Summary of Contributions:
We describe an approach for computing Steiner Trees by combining a graph neural network and Monte Carlo Tree Search (MCTS). We first train a graph neural network that takes as input a partial solution and proposes a new node to be added as output. This neural network is then used in a MCTS to compute a Steiner tree. The proposed method consistently outperforms the standard 2-approximation algorithm on many different types of graphs and often finds the optimal solution. We illustrate our approach in Figure 1. Our approach builds on the work of Xing et al. [48] for TSP. Since TSP is non-trivially different from the Steiner tree problem, we needed to address challenges in both training the graph neural network and testing the MCTS. We summarize our contribution below:
* To train the neural network we generate exact solutions of Steiner tree instances. From each instance, we generate several data points. The purpose of the neural network is to predict the next Steiner node, given a partial solution. Any permutation of the set of Steiner nodes can lead to a valid sequence of predictions. Hence, we use random permutations to generate data points for the network.
* After we determine the Steiner nodes for a given instance, it is not straightforward to compute the Steiner tree. For TSP, any permutation of all nodes is a feasible tour. For the Steiner tree problem, an arbitrary permutation can have many unnecessary nodes and thus a larger weight compared to the optimal solution. Selecting a subset of nodes is not enough either, since the output needs to be connected and span the terminals. We propose heuristics to compute the tree from the selected nodes that provide valid results of good quality.
* We evaluate our results on many different classes of graphs, including geometric graphs, Erdos-Renyi graphs, Barabasi-Albert graphs, Watts-Strogatz graphs, and known hard instances from the SteinLib database [33]. Our method is fully functional and available on GitHub.
## 2 Our approach
Let \(G(V,E)\) be a graph, where \(V\) is the set of nodes and \(E\) is the set of edges. Let \(w(u,v)\) be the weight of edge \((u,v)\in E\) and for unweighted graphs \(w(u,v)=1\) for any edge \((u,v)\in E\). Let \(T\subseteq V\) be the set of terminals.
We use \(S=\{v_{1},v_{2},\cdots,v_{i}\}\) to represent the set of nodes that are already added in a partially computed Steiner tree. Then, \(\overline{S}=V-S\) is the set of candidate nodes to be added to \(S\).
Given the graph \(G\), our goal is to derive a Steiner tree by adding nodes \(v\in\overline{S}\) to \(S\) in turn. A natural approach is to train a neural network to predict which node to add to the partial Steiner tree at a particular step. That is, the neural network \(f(G|S)\) takes graph \(G\) and partial solution \(S\) as input, and returns probabilities for the remaining nodes, indicating the likelihood that they belong to the Steiner tree. We use the GNN of [26] to represent \(f(G|S)\).
Intuitively, we can directly use the probability values, selecting all nodes with probability higher than a given threshold. We can then construct a tree from the selected nodes in different ways. For example, we can compute the induced graph of the selected nodes (keeping an edge if both its endpoints are selected) and extract a minimum spanning tree [11]. Note that the induced graph may be disconnected, and therefore the spanning tree will also be disconnected. Even if the spanning tree is connected, it may not span all the terminals, hence it may not provide a valid solution. These issues can be addressed by reducing the given threshold until we obtain a valid solution.
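A minimal sketch of this thresholding procedure, assuming networkx and a dictionary `probs` mapping each node to its predicted probability; the function name and default parameters are illustrative.

```python
import networkx as nx

def tree_from_probs(G, terminals, probs, threshold=0.9, step=0.05):
    """Keep every node whose predicted probability clears the threshold
    (terminals are always kept), lowering the threshold until the induced
    subgraph is connected; then return its minimum spanning tree."""
    while threshold > 0.0:
        keep = set(terminals) | {v for v, p in probs.items() if p >= threshold}
        H = G.subgraph(keep)
        if nx.is_connected(H):
            return nx.minimum_spanning_tree(H)
        threshold -= step
    return nx.minimum_spanning_tree(G)  # fallback: keep every node
```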
However, deriving trees in this fashion might not be reliable: the learning-based algorithm has only one chance to construct a solution, and it never goes back to reverse a decision. To overcome this drawback, we leverage MCTS. We use a variant of PUCT [40] to balance exploration (i.e., visiting a state as suggested by the prior policy) and exploitation (i.e., visiting a state that has the best value). Using prior probabilities, the search space of the tree can be reduced substantially, enabling the search to allocate more computing resources to the states with higher values. After a large number of simulations, the output of the MCTS yields a more reliable policy, since it fuses the prior probabilities with the feedback gathered during exploration. The overall approach is illustrated in Figure 1.
### Graph neural network architecture:
A useful neural network requires information about the structure of the graph, the terminal nodes, and the context, i.e., the set of nodes \(S=\{v_{1},\ldots,v_{i}\}\) already added to the partial solution. We tag node \(u\) with \(x_{u}^{t}=1\) if it is a terminal, otherwise \(x_{u}^{t}=0\). We also tag node \(v\) with \(x_{v}^{a}=1\) if it is already added, otherwise \(x_{v}^{a}=0\). Intuitively, \(f(G|S)\) should summarize the state of such a "tagged" graph and generate the prior probability for each node to get included in \(S\).
Some combinatorial problems like the independent set problem and minimum vertex cover problem do not consider edge weights. However, edge weight is an important feature of the Steiner tree problem as the objective is computed based on the weights. Hence, we use the static edge graph neural network (SE-GNN) [48] to efficiently extract node and edge features of the
Figure 1: GNN assisted MCTS: first, train a GNN to evaluate non-terminal nodes, then use the network and heuristics to compute a Steiner tree with MCTS.
Steiner tree problem.
A GNN model consists of a stack of \(L\) neural network layers, where each layer aggregates local neighborhood information, i.e., features of neighbors of each node, and then passes this aggregated information to the next layer. We use \(H_{u}^{l}\in\mathbb{R}^{d}\) to denote the real-valued feature vector associated with node \(u\) at layer \(l\). Specifically, the basic GNN model [26] can be implemented as follows. In layer \(l=1,2,\cdots,L\), a new feature is computed as given by 2.1.
\[H_{u}^{l+1}=\sigma\Big{(}\theta_{1}^{l}H_{u}^{l}+\sum_{v\in N(u)}\theta_{2}^{l} H_{v}^{l}\Big{)} \tag{1}\]
In 2.1, \(N(u)\) is the set of neighbors of node \(u\), \(\theta_{1}^{l}\) and \(\theta_{2}^{l}\) are the parameter matrices for the layer \(l\), and \(\sigma(\cdot)\) denotes a component-wise non-linear function such as a sigmoid or a ReLU function. For \(l=0\), \(H_{u}^{0}\) denotes the feature initialization at the input layer.
The edge information is not taken into account in 2.1. To incorporate edge features, we adapt the approach in [30, 47] to the Steiner tree problem. We integrate the edge features with node features using 2.2.
\[\mu_{u}^{l+1}=\sigma\Big{(}\theta_{1}x_{u}+\theta_{2}\sum_{v\in N(u)}\mu_{v}^{l}+\theta_{3}\sum_{v\in N(u)}\sigma(\theta_{4}w(u,v))\Big{)} \tag{2}\]
In 2.2, \(\theta_{1}\in\mathbb{R}^{l}\), \(\theta_{2},\theta_{3}\in\mathbb{R}^{l\times l}\) and \(\theta_{4}\in\mathbb{R}^{l}\) are all model parameters. We can see in 2.1 and 2.2 that the nonlinear mapping of the aggregated information is a single-layer perceptron, which is not enough to map distinct multisets into unique embeddings. Hence, as suggested in [48, 49], we replace the single perceptron with a multi-layer perceptron. Finally, we compute a new node feature \(H\) using 2.3.
\[H_{u}^{l+1}=\text{MLP}^{l}\Big{(}\theta_{1}^{l}H_{u}^{l}+\sum_{v\in N(u)}\theta _{2}^{l}H_{v}^{l}+\sum_{v\in N(u)}\theta_{3}^{l}e_{u,v}\Big{)} \tag{3}\]
In 2.3, \(e_{u,v}\) is the edge feature, \(\theta_{1}^{l}\), \(\theta_{2}^{l}\), and \(\theta_{3}^{l}\) are parameter matrices, and \(\text{MLP}^{l}\) is the multi-layer perceptron for layer \(l\). Note that SE-GNN differs from GEN [13] in the following aspects: (1) SE-GNN replaces \(x_{u}\) in 2.2 with \(H_{u}\) so that the SE-GNN can integrate the latest feature of the node itself directly. (2) Each update process in the GEN can be treated as one update layer of the SE-GNN, i.e., each calculation is equivalent to going one layer forward, thus calculating \(L\) times for \(L\) layers. Parameters of each layer of SE-GNN are independent, while parameters in GEN are shared between different update processes which limits the neural network. (3) We replace \(\sigma\) in 2.2 with MLP as suggested by [48, 49] to map distinct multisets to unique embeddings.
We initialize the node feature \(H^{0}\) as follows. Each node has a feature tag which is a 4-dimensional vector. The first element of the vector is binary and it is equal to 1 if the partial solution \(S\) contains the node. The second element of the vector is also binary and it is equal to 1 if the node is a terminal. The third and fourth elements of the feature tag are the \(x\) and \(y\) coordinates of the node. The last two are used only for geometric graphs.
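To make the update in 2.3 concrete, below is a minimal PyTorch sketch of one SE-GNN layer over a dense adjacency representation; the hidden sizes, class name, and MLP depth are illustrative assumptions, not the exact configuration.

```python
import torch.nn as nn

class SEGNNLayer(nn.Module):
    """One SE-GNN update (equation 2.3): combine the node's own feature, the
    summed neighbour features, and the summed incident-edge features, then
    pass the result through a per-layer MLP."""
    def __init__(self, dim, edge_dim):
        super().__init__()
        self.theta1 = nn.Linear(dim, dim, bias=False)
        self.theta2 = nn.Linear(dim, dim, bias=False)
        self.theta3 = nn.Linear(edge_dim, dim, bias=False)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim), nn.ReLU())

    def forward(self, H, A, E):
        # H: (n, dim) node features; A: (n, n) 0/1 adjacency matrix;
        # E: (n, n, edge_dim) edge features, zero where no edge exists.
        neigh = A @ H        # sum of neighbour features per node
        edge = E.sum(dim=1)  # sum of incident edge features per node
        return self.mlp(self.theta1(H) + self.theta2(neigh) + self.theta3(edge))
```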
### Parameterizing \(f(G|S;\theta)\):
Once the features for every node have been computed after \(L\) layers of updates, we use the new node features to define the \(f(G|S;\theta)\) function, which returns the prior probability for each node indicating how likely the node will belong to partial solution \(S\). Specifically, we fuse all node features \(H_{u}^{L}\) as the current state representation of the graph and parameterize \(f(G|S;\theta)\) as expressed by 2.4.
\[f(G|S;\theta)=\text{softmax}(sum(H_{1}^{L}),\cdots,sum(H_{n}^{L})) \tag{4}\]
During training, we minimize the cross-entropy loss for each training sample \((G_{i},S_{i})\) in a supervised manner as given by 2.5.
\[\ell(S_{i},f(G_{i}|S_{i};\theta))=-\sum_{j=1}^{N}y_{j}\log f(G_{i}|S_{i}(1:j-1);\theta) \tag{5}\]
In 2.5, \(S_{i}\) is an ordered set of nodes of a partial solution which is a permutation of the nodes of graph \(G_{i}\), with \(S_{i}(1:j-1)\) the ordered subset containing the first \(j-1\) elements of \(S_{i}\), and \(y_{j}\) a vector of length \(N\) with 1 in the \(S_{i}(j)\)-th position. We provide more details in Section 3.
### GNN assisted MCTS:
Similar to the implementation in [48], the GNN-MCTS uses graph neural networks as a guide for the MCTS. We denote the set of legal actions at node \(u\) in the MCTS by \(A(u)\). Each node \(u\) in the search tree contains edges \((u,a)\) for all legal actions \(a\in A(u)\). Each edge of the MCTS stores a set of statistics:
\[\{N(u,a),Q(u,a),P(u,a)\},\]
where node \(u\) denotes the current state of the graph including the set of nodes \(S\) and other graph information, action \(a\) denotes the selection of node \(v\) from \(\overline{S}\) to add in \(S\), \(N(u,a)\) is the visit count, \(Q(u,a)\) is the action value and \(P(u,a)\) is the prior probability of selecting edge \((u,a)\).
In the Steiner tree problem, we are interested in finding a tree with minimum cost. Hence, we track the
best action value found under the subtree of each node to determine the "exploitation value" of the tree node, as suggested in [19] in the context of the stock trading problem.
The standard MCTS takes solution values in the range \([0,1]\)[34]. However, a Steiner tree can have an arbitrary solution value that does not fall in a predefined interval. This issue could be addressed by tuning the parameters of the tree search algorithm to a specified interval, but such tuning requires substantial trial and error because it depends on the number of nodes. Instead, we address this issue by normalizing the action value of node \(n\), whose parent is node \(p\), into the range \([0,1]\) using 2.6.
\[Q_{n}=\frac{\tilde{Q}_{n}-w_{p}}{b_{p}-w_{p}} \tag{2.6}\]
In 2.6, \(b_{p}\) and \(w_{p}\) are the best (minimum) and worst (maximum) action values among the children of \(p\), and \(Q_{n}\) is the normalized action value of \(n\). The actions under \(p\) are normalized into the range \([0,1]\) so that the best action maps to 1 and the worst action maps to 0.
The GNN-MCTS proceeds by iterating over the four phases below and then selects a move to play.
1. **Selection Strategy.** The first in-tree phase of each simulation starts at the root node \(v_{0}\) of the search tree and finishes when the simulation reaches a leaf node \(v_{l}\) at time step \(l\). At time step \(t<l\), we use a variant of PUCT [40] to balance exploration (i.e., visiting the states suggested by the prior policy) and exploitation (i.e., visiting states which have the best values) according to the statistics in the search tree, as given by 2.7 and 2.8 respectively (see the sketch after this list): \[a_{t}=\text{argmax}_{a}(Q(v_{t},a)+U(v_{t},a)) \tag{2.7}\] \[U(v,a)=c_{puct}P(v,a)\frac{\sqrt{\sum_{b}N(v,b)}}{1+N(v,a)} \tag{2.8}\] where \(c_{puct}\) is a constant for trading off between exploration and exploitation. We set \(c_{puct}=1.3\) according to previous experimental results [48].
2. **Expansion Strategy.** When a leaf node \(v\) is reached, the corresponding state \(s_{v}\) is evaluated by the GNN to obtain the prior probability \(p\) of its children nodes. The leaf node is expanded and the statistic of each edge \((s_{v},a)\) is initialized to \(\{N(s_{v},a)=0,Q(s_{v},a)=-\infty,P(s_{v},a)=p_{a}\}\).
3. **Back-Propagation Strategy.** For each step \(t<l\), the edge statistics are updated in a backward process. The visit counts are increased as \(N(v_{t},a_{t})=N(v_{t},a_{t})+1\), and the action value is updated to the best value.
4. **Play.** After repeating steps 1-3 several times (800 times for smaller datasets and 1200 times for larger datasets, following previous experimental results [48]), we select the node with the largest \(\hat{P}(a|u_{0})=\frac{Q(u_{0},a)}{\sum_{b}Q(u_{0},b)}\) as the next move \(a\) in the root position \(u_{0}\). The selected child becomes the new root node and the statistics stored in the subtree are preserved.
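The normalization in 2.6 and the selection rule in 2.7-2.8 can be sketched as follows; this assumes a simple node structure whose edges store `visits`, `prior`, and a normalized action value `q`, which is an illustration rather than the exact implementation.

```python
import math

def normalize_q(q_raw, best, worst):
    """Equation 2.6: rescale a raw action value into [0, 1] using the best
    (minimum) and worst (maximum) raw values among the sibling actions."""
    return 0.0 if best == worst else (q_raw - worst) / (best - worst)

def select_action(node, c_puct=1.3):
    """PUCT selection (equations 2.7 and 2.8): trade off the normalized
    action value against the prior-weighted exploration bonus."""
    total = sum(e.visits for e in node.edges)
    scores = [e.q + c_puct * e.prior * math.sqrt(total) / (1 + e.visits)
              for e in node.edges]
    return max(range(len(scores)), key=scores.__getitem__)
```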
### Computing Steiner tree from \(S\):
There are several ways to compute a Steiner tree from the set of nodes \(S\). We provide two effective heuristics that we use in our experiments.
1. **MST-based heuristic.** In this heuristic, we first add the terminal nodes to the solution if they are not already present, and then compute the induced graph. We iteratively add nodes from \(\overline{S}\) in order computed by the MCTS until the induced graph is connected. In the last step, we compute a minimum spanning tree (MST) of the induced graph and prune degree-1 non-terminal nodes. This heuristic is effective for geometric graphs and unweighted graphs.
2. **Metric closure-based heuristic.** In this heuristic, given an input graph \(G=(V,E)\) and a set of terminals \(T\), we first compute a metric closure graph \(G^{\prime}=(T,E^{\prime})\). Every pair of nodes in \(G^{\prime}\) is connected by an edge with weight equal to the shortest path distance between them. The minimum spanning tree of the metric closure provides a 2-approximation to the optimal Steiner tree. For
Figure 2: Example graph for the Steiner tree heuristic. Considering \(D\) as a terminal node and computing the MST on the metric closure provides a better solution than the 2-approximation.
example, in Figure 2, \(A\), \(B\) and \(C\) are terminal nodes and \(D\) is not. Note that \(D\) does not appear in any shortest path as every shortest path between pairs of terminals is 5 and none of them goes through \(D\). Without loss of generality, the 2-approximation algorithm chooses the \(A-C-B\) path with total cost of 10, while the optimal solution that uses \(D\) has cost 9.
While the 2-approximation algorithm does not consider any node that does not belong to a shortest path between two terminal nodes, here we consider such nodes. Specifically, we iteratively add nodes from \(\overline{S}\) in order computed by the MCTS, even if they don't belong to any shortest path. Note that, unlike the MST-based heuristic, the metric closure-based heuristic computes the MST on the metric closure (not on the input graph).
Both of the heuristics start by selecting all the terminals as the partial solution. In the MCTS, we gradually add nodes that are not in the set of already selected nodes. For the MST-based heuristic, we stop selecting nodes when the induced graph becomes connected. For the metric closure-based heuristic, we stop selecting nodes when 10% of the non-terminal nodes have been selected. A sketch of the MST-based variant follows.
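This is a minimal networkx sketch; the function name is ours, `mcts_order` is the node ordering produced by the MCTS, and we assume the ordering eventually connects the terminals.

```python
import networkx as nx

def mst_heuristic(G, terminals, mcts_order):
    """MST-based heuristic: grow the node set in MCTS order until the induced
    subgraph is connected, take its MST, and prune non-terminal leaves."""
    term = set(terminals)
    nodes, order = set(term), iter(mcts_order)
    while not nx.is_connected(G.subgraph(nodes)):
        nodes.add(next(order))
    tree = nx.minimum_spanning_tree(G.subgraph(nodes))
    while True:  # repeatedly remove degree-1 non-terminal nodes
        leaves = [v for v in tree if tree.degree(v) == 1 and v not in term]
        if not leaves:
            return tree
        tree.remove_nodes_from(leaves)
```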
## 3 Model setup and training
In order to train the models, one has to provide training data consisting of input graphs \(G=(V,E)\), edge weights \(W:E\rightarrow\mathbb{R}^{+}\), and terminals \(T\subseteq V\). Given \(G,W,T\), and a partial solution \(S\), our goal is to give label 1 to the next node to be added and 0 to all others. Initially, we set \(S=T\) as all terminals must be in the Steiner tree. Consider a graph with 6 nodes \(u_{1},u_{2},\cdots,u_{6}\), with \(T=\{u_{1},u_{2},u_{3}\}\), where an optimal Steiner tree contains the first five nodes \(u_{1},u_{2},\cdots,u_{5}\). For this example, initially we set \(S=T=\{u_{1},u_{2},u_{3}\}\). Since we have two Steiner nodes \(u_{4}\) and \(u_{5}\), both permutations \(u_{4},u_{5}\) and \(u_{5},u_{4}\) are valid. For the first permutation, after setting \(S=\{u_{1},u_{2},u_{3}\}\), the next node to be added to the solution is \(u_{4}\). Hence, for this data point, only the label for \(u_{4}\) is 1. This permutation provides another data point where \(S=\{u_{1},u_{2},u_{3},u_{4}\}\) and only the label for \(u_{5}\) is equal to 1. Similarly, we can generate two more data points from the other permutation. This exhaustive consideration of all possible permutations does not scale to larger graphs, so we randomly select 100 permutations from each optimal solution. The model is trained with Stochastic Gradient Descent, using the ADAM optimizer [31] to minimize the cross-entropy loss between the model's prediction and the ground truth (a vector in \(\{0,1\}^{|V|}\) indicating whether a node is the next solution node or not) for each training sample.
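The data-point generation described above can be sketched as follows; the function and variable names are ours.

```python
import random

def training_points(graph, terminals, steiner_nodes, n_perms=100):
    """Generate (graph, partial solution, next-node label) triples from one
    solved instance: start from S = T and reveal the Steiner nodes of the
    optimal tree in random orders."""
    points = []
    for _ in range(n_perms):
        order = list(steiner_nodes)
        random.shuffle(order)
        partial = list(terminals)  # S is initialized to the terminal set
        for next_node in order:
            points.append((graph, tuple(partial), next_node))
            partial.append(next_node)
    return points
```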
### Data generation:
We produce training instances using several different random graph generation models: Erdos-Renyi [15], Watts-Strogatz [46], Barabasi-Albert [5], and random geometric [39] graphs. Each of these generators needs some parameters; below we describe the values we used, aiming to have graphs of comparable density across the different generators. For the Erdos-Renyi model, there is an edge selection probability \(p\), which we set to \(\frac{2\ln n}{n}\) to ensure that the generated graphs are connected with high probability. In the Watts-Strogatz model, we initially create a ring lattice of constant degree \(K\) and rewire each edge with probability \(0\leq p\leq 1\), while avoiding self-loops and duplicate edges. For our experiments we use \(K=6\) and \(p=0.2\). In the Barabasi-Albert model, the graph begins with an initially connected graph of \(m_{0}\) nodes. New nodes are added to the network one at a time. Each new node is connected to \(m\leq m_{0}\) existing nodes with a probability that is proportional to the number of edges that the existing nodes already have. We set \(m_{0}=5\). In the random geometric graph model, we uniformly select \(n\) points from the Euclidean cube, and connect nodes whose Euclidean distance is not larger than a threshold \(r_{c}\), which we choose to be \(\sqrt{\frac{2\ln n}{\pi n}}\) to ensure the graph is connected with high probability.
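All four generators are available in networkx; a minimal sketch with the parameter choices above is shown below. Note that networkx's Barabasi-Albert generator exposes a single parameter \(m\), which we use here to approximate the \(m_{0}=5\) setting.

```python
import math
import networkx as nx

def sample_instance(model, n):
    """Draw one training graph with the generator parameters described above."""
    if model == "erdos_renyi":
        return nx.erdos_renyi_graph(n, 2 * math.log(n) / n)
    if model == "watts_strogatz":
        return nx.watts_strogatz_graph(n, k=6, p=0.2)
    if model == "barabasi_albert":
        return nx.barabasi_albert_graph(n, m=5)
    if model == "geometric":
        return nx.random_geometric_graph(n, math.sqrt(2 * math.log(n) / (math.pi * n)))
    raise ValueError(model)
```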
The Steiner tree problem is NP-complete even if the input graph is unweighted [20]. We generate both unweighted and weighted Steiner tree instances using the random generators described above. The number of nodes in these instances is equal to 20 and the number of terminals is equal to 10. For each type of instance we generate 200 instances. For weighted graphs, we assign random integer weights in the range \(\{1,2,\cdots,10\}\) to each edge. Since the weighted version of the Steiner tree problem is the more general version, and the number of terminals is an important parameter, we create a second dataset of graphs with 50 nodes. For the number of terminals, we use two distributions. In the first distribution, the percentage of the number of terminals with respect to the total number of nodes is in \(\{20\%,40\%,60\%,80\%\}\). In the second distribution the percentage is in \(\{3\%,6\%,\cdots,18\%\}\). These two cases are considered to determine the behavior of the learning models on large and small terminal sets (compared with the overall graph size). As random graph instances can be "easy" to solve, we also evaluate our approach on graphs from the SteinLib library [33], which provides hard graph instances. Specifically, we perform experiments on two SteinLib datasets: I080 and I160. Instances in the I080 and I160 datasets contain 80 and 160 nodes, respectively. Both datasets have 100 instances.
### Computing optimal solutions:
In order to evaluate the performance of our approach (and that of the 2-approximation) we need to compute the optimal solutions. There are different integer linear programs (ILP) for the exact Steiner tree problem. The cut-based approach considers all partitions of the terminals and ensures that at least one edge crosses each partition. This ILP is simple but introduces an exponential number of constraints. A better ILP approach in practice considers an arbitrary terminal as a root and sends flow to the rest of the terminals; see [2, 28] for details about these and other ILP methods for the exact Steiner tree problem.
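A compact sketch of the flow-based ILP, using the open-source PuLP solver as a stand-in (the actual experiments use CPLEX): an arbitrary terminal is chosen as the root, one unit of flow is sent to every other terminal, and an edge is paid for whenever flow crosses it in either direction.

```python
import pulp

def steiner_ilp(G, terminals):
    r, *rest = list(terminals)
    arcs = [(u, v) for u, v in G.edges] + [(v, u) for u, v in G.edges]
    prob = pulp.LpProblem("steiner", pulp.LpMinimize)
    x = {(u, v): pulp.LpVariable(f"x_{u}_{v}", cat="Binary") for u, v in G.edges}
    f = {(t, a): pulp.LpVariable(f"f_{t}_{a[0]}_{a[1]}", lowBound=0)
         for t in rest for a in arcs}
    # Objective: total weight of the selected edges.
    prob += pulp.lpSum(G[u][v].get("weight", 1) * x[(u, v)] for u, v in G.edges)
    for t in rest:
        for v in G.nodes:  # flow conservation: one unit from root r to t
            out_f = pulp.lpSum(f[(t, a)] for a in arcs if a[0] == v)
            in_f = pulp.lpSum(f[(t, a)] for a in arcs if a[1] == v)
            prob += out_f - in_f == (1 if v == r else -1 if v == t else 0)
        for u, v in G.edges:  # flow may only use edges that are paid for
            prob += f[(t, (u, v))] + f[(t, (v, u))] <= x[(u, v)]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [e for e in G.edges if x[e].value() > 0.5]
```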
We generate 2,000 Steiner tree instances and compute the exact solution with the flow-based ILP. We use CPLEX 12.6.2 as the ILP solver on a high-performance computer (Lenovo NeXtScale nx360 M5 system with 400 nodes with 192 GB of memory each). We use Python to implement the algorithms described above.
### Model architectures:
For the MLP in our GNN model, we have used two hidden layers. The first hidden layer has an embedding dimension equal to 128. The second hidden layer has a convolution dimension equal to 128. We use the ReLU activation function for both layers. We also use batch normalization in both layers to normalize the contributions to a layer for every batch of the dataset. The early-stopping patience is equal to 15; hence, the model automatically stops training when the chosen metric does not improve for 15 epochs. We trained the network and evaluated our algorithm separately for each combination of generator and node size. Recall that the neural network predicts the next Steiner node from a partial solution. Hence, for each Steiner tree instance, we generate a set of data points. Since the neural network architecture cannot handle different node sizes, we have trained four independent neural networks for node sizes 20, 50, 80, and 160. The same neural network can predict solution nodes for different graph generation models if the node size is the same. In total, we have trained the networks on around 200,000 data points.

Figure 3: Performance on simple graphs. Each data point represents one graph. The lower the cost the better the algorithm is. Our algorithm (MCTS) is nearly optimal and performs better than 2-approximation.
### Heuristic setup:
We used the two heuristics described in Section 2.4. Recall that the MST-based heuristic just computes the minimum spanning tree on the induced graph of the partial solution. It works well for geometric graphs, unweighted Erdos-Renyi, unweighted Watts-Strogatz, and unweighted Barabasi-Albert graphs. We use the metric closure-based heuristic for all the other experiments.
## 4 Experimental results
We evaluate the performance of the proposed approach by comparing the computed trees to those computed by the classical 2-approximation algorithm and the optimal solutions. The proposed approach never performs worse than the 2-approximation algorithm. We also report running times.
The results for geometric graphs and other unweighted graphs are shown in Figure 3. The X-axis represents the instance number, which carries no significance. Traditionally, a bar plot is used in such a scenario; however, for each instance we show three costs for three different algorithms, and a scatter plot provides a better visualization by saving space horizontally. One could show two costs instead of three by plotting the difference with respect to the optimal algorithm, but this does not improve the visualization since many differences are close to zero. We illustrate the performance of the different algorithms on the geometric graphs in Figure 3(a). We represent the optimal solution with green triangles, our algorithm with yellow squares, and the 2-approximation with blue circles. For the geometric graphs, we have 40 instances, each of which has 20 nodes and 10 terminals. A majority of the time the 2-approximation has a larger solution value, while our algorithm has a solution very close to the optimal value. The 2-approximation performs worse than our algorithm in 36 instances out of 40.

Figure 4: Performance on weighted graphs. Each data point represents one graph. The lower the cost the better the algorithm is. Our algorithm (MCTS) is nearly optimal and performs better than 2-approximation.
Our algorithm also performs well for unweighted graphs. We illustrate the performance for random graphs generated by the Erdos-Renyi, Barabasi-Albert, and Watts-Strogatz models in Figure 3(b), Figure 3(c), and Figure 3(d), respectively. We have 40 instances for each type of generator. Again, each instance has 20 nodes and 10 terminals. In all of these instances, our algorithm achieves the optimal solution. For Erdos-Renyi graphs, our algorithm performs better than the 2-approximation in two instances. For Barabasi-Albert graphs, our algorithm performs better in six instances. For Watts-Strogatz graphs, our algorithm performs better in four instances.
Results for the weighted graphs are shown in Figure 4. The weighted version of the Steiner tree problem is harder than the unweighted version; hence, we consider a larger set of instances. For each random graph generation model, we consider one dataset that has 20 nodes per instance and another dataset that has 50 nodes. We illustrate the performance on 20-node Erdos-Renyi graphs in Figure 4(a). For this dataset, both algorithms provide solution values similar to the optimal value. We illustrate the performance on 20-node Barabasi-Albert and Watts-Strogatz graphs in Figure 4(b) and Figure 4(c), respectively. For Barabasi-Albert graphs, the 2-approximation performs worse than our algorithm in 24 instances out of 40, and our algorithm provides an optimal solution in 39 instances. For Watts-Strogatz graphs, the 2-approximation performs worse than our algorithm in 24 instances out of 40, and our algorithm provides an optimal solution in 39 instances.
We illustrate the performance of the algorithms on 50-node Watts-Strogatz graphs in Figure 5(a). Again, our algorithm provides nearly optimal solutions, while the 2-approximation shows a noticeable gap. The 2-approximation performs worse than our algorithm in 38 instances out of 40, and our algorithm provides an optimal solution in 34 instances. We illustrate the performance of the algorithms on 50-node Barabasi-Albert graphs in Figure 5(b). The 2-approximation performs worse than our algorithm in 34 instances out of 40. Our algorithm provides an optimal solution in 31 instances and nearly optimal solutions for the remaining instances.

Figure 5: Performance on more weighted graphs. Each data point represents one graph. The lower the cost the better the algorithm is. Our algorithm (MCTS) is nearly optimal and performs better than 2-approximation.
The SteinLib library [33] provides hard graph instances for the Steiner tree problem. Results for the SteinLib datasets are shown in Figure 6. We can see that there is a relatively large difference between the optimal solution value and the 2-approximation solution value. Despite this larger gap for the 2-approximation, our algorithm still finds nearly optimal solutions.
### Running time:
The training time of the GNN depends on the dataset. The maximum training time is around 20 hours for the I160 SteinLib dataset. The average running times of the optimal algorithm, the 2-approximation, and our algorithm for different test datasets are shown in Table 1. We denote the geometric, unweighted Erdos-Renyi, unweighted Watts-Strogatz, and unweighted Barabasi-Albert graphs by GE, ER, WS, and BA respectively. We denote the weighted 20-node Erdos-Renyi, Watts-Strogatz, and Barabasi-Albert graphs by ER20, WS20, and BA20 respectively. We denote the weighted 50-node Erdos-Renyi, Watts-Strogatz, and Barabasi-Albert graphs by ER50, WS50, and BA50 respectively. We denote the 80-node and 160-node SteinLib datasets by I080 and I160 respectively. We can see that the 2-approximation algorithm is the fastest. Our algorithm is somewhat slower; however, its solution values are closer to the optimal values.
## 5 Conclusion
We described an approach for the Steiner tree problem based on GNNs and MCTS. An experimental evaluation shows that the proposed method computes nearly optimal solutions on a wide variety of datasets in a reasonable time. The proposed method never performs worse than the standard 2-approximation algorithm. The source code and experimental data can be found on github [https://github.com/abureyanahmed/GNN-MCTS-Steiner](https://github.com/abureyanahmed/GNN-MCTS-Steiner).
One limitation of our work is that the model must be retrained for different node sizes; hence, in our experiments we use a small set of node sizes. Also, the Steiner tree problem can be seen as a network sparsification technique; in fact, it is one of the simplest sparsification methods since it only considers trees. It would be interesting to see whether our proposed approach can be adapted to graph spanner problems.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Graphs/ & GE & ER & WS & BA & ER20 & WS20 & BA20 & ER50 & WS50 & BA50 & I080 & I160 \\ Algorithms & & & & & & & & & & & & \\ \hline
2-apprx & 0.16 & 0.09 & 0.10 & 0.11 & 0.16 & 0.40 & 0.14 & 1.14 & 0.79 & 0.47 & 1.29 & 7.88 \\ \hline MCTS & 0.64 & 0.40 & 0.49 & 0.49 & 0.75 & 1.73 & 0.66 & 5.06 & 3.20 & 2.17 & 5.77 & 34.52 \\ \hline OPT & 5.92 & 6.33 & 5.00 & 4.68 & 22.99 & 28.61 & 29.90 & 153.71 & 125.41 & 134.46 & 1051.51 & 6188.18 \\ \hline \end{tabular}
\end{table}
Table 1: Average running time of different algorithms in seconds.
Figure 6: Performance on SteinLib datasets. Each data point represents one graph. The lower the cost the better the algorithm is. Our algorithm (MCTS) is nearly optimal and performs better than 2-approximation.
It is an interesting direction for future work to generalize the model to handle different node sizes. A general model would provide an opportunity to explore the effectiveness of different parameters of the model.
|
2307.16506 | Explainable Equivariant Neural Networks for Particle Physics: PELICAN | PELICAN is a novel permutation equivariant and Lorentz invariant or covariant
aggregator network designed to overcome common limitations found in
architectures applied to particle physics problems. Compared to many approaches
that use non-specialized architectures that neglect underlying physics
principles and require very large numbers of parameters, PELICAN employs a
fundamentally symmetry group-based architecture that demonstrates benefits in
terms of reduced complexity, increased interpretability, and raw performance.
We present a comprehensive study of the PELICAN algorithm architecture in the
context of both tagging (classification) and reconstructing (regression)
Lorentz-boosted top quarks, including the difficult task of specifically
identifying and measuring the $W$-boson inside the dense environment of the
Lorentz-boosted top-quark hadronic final state. We also extend the application
of PELICAN to the tasks of identifying quark-initiated vs. gluon-initiated
jets, and a multi-class identification across five separate target categories
of jets. When tested on the standard task of Lorentz-boosted top-quark tagging,
PELICAN outperforms existing competitors with much lower model complexity and
high sample efficiency. On the less common and more complex task of 4-momentum
regression, PELICAN also outperforms hand-crafted, non-machine learning
algorithms. We discuss the implications of symmetry-restricted architectures
for the wider field of machine learning for physics. | Alexander Bogatskiy, Timothy Hoffman, David W. Miller, Jan T. Offermann, Xiaoyang Liu | 2023-07-31T09:08:40Z | http://arxiv.org/abs/2307.16506v4 | # Explainable Equivariant Neural Networks for Particle Physics: PELICAN
###### Abstract
We present a comprehensive study of the PELICAN machine learning algorithm architecture in the context of both tagging (classification) and reconstructing (regression) Lorentz-boosted top quarks, including the difficult task of specifically identifying and measuring the \(W\)-boson inside the dense environment of the boosted hadronic final state. PELICAN is a novel permutation equivariant and Lorentz invariant or covariant aggregator network designed to overcome common limitations found in architectures applied to particle physics problems. Compared to many approaches that use non-specialized architectures that neglect underlying physics principles and require very large numbers of parameters, PELICAN employs a fundamentally symmetry group-based architecture that demonstrates benefits in terms of reduced complexity, increased interpretability, and raw performance. When tested on the standard task of Lorentz-boosted top quark tagging, PELICAN outperforms existing competitors with much lower model complexity and high sample efficiency. On the less common and more complex task of 4-momentum regression, PELICAN also outperforms hand-crafted, non-machine learning algorithms. We discuss the implications of symmetry-restricted architectures for the wider field of machine learning for physics.
###### Contents
* 1 Introduction
* 2 Equivariance and jet physics
* 3 PELICAN architecture
* 4 Tagging jets from Lorentz boosted top quarks
* 5 \(W\)-boson 4-momentum reconstruction
* 6 \(W\)-boson mass measurement
* 7 PELICAN explainability
* 8 IRC-safety and PELICAN
* 9 Conclusion
* 10 Acknowledgements
* A Additional results and plots
* B IRC-safety and Lorentz symmetry
## 1 Introduction
Identifying, reconstructing, and measuring the properties and dynamics of high-energy, short-distance particle phenomena is inherently an inference task, since direct access to the fundamental processes is often impossible due to the time and length scales at which they occur. The suite of detection techniques, pattern recognition algorithms, and measurement approaches used to perform this task inevitably imposes constraints on both the nature of the information used as well as on the form and structure of the results. Such constraints play a crucial role in the context of jet substructure measurements, in which detailed analysis is performed on the long-distance features of Lorentz-boosted particle decays, parton showering, and radiation patterns found in the collimated sprays of particles that form the jets themselves. In this work, we present a comprehensive analysis of a new approach to multiple jet substructure-based inference tasks using a machine learning (ML) architecture that fundamentally respects permutation and Lorentz-group symmetries: PELICAN, the permutation equivariant and Lorentz invariant or covariant aggregator network. Our approach thus imposes explicit physics-informed constraints on the system and consequently yields new insights and capabilities.
Decades of jet substructure research have yielded a wide range of approaches to performing inference tasks such as: distinguishing quark-initiated from gluon-initiated jets [1; 2; 3; 4; 5]; discriminating jets formed from Lorentz-boosted top quarks, Higgs and \(W\)-bosons, from the continuum background of jets formed from light-quarks and gluons [6; 7; 8; 9]; dissecting and measuring the parton-shower structure of light-quark and gluon jets themselves [10; 11; 12; 13; 14; 15]. Many approaches have been adopted to perform these tasks, including the direct use of discriminating high-level observables and multi-variate methods [16; 17; 18], as well as a growing number of ML architectures using a variety of latent-space representations. For a comprehensive overview of jet substructure measurements, see Refs. [19; 20], as well as Ref. [21] for a general review of ML methods in high-energy physics (including substructure measurements). As the model complexity has grown, so too
have questions regarding the relationship of both the methods and the constraints that they impose to the fundamental physical processes that they are used to model. In particular, the use of observables, architectures, and latent space representations that adhere closely to the structure and dynamics of the physics processes under study have been found to provide not only enhanced performance, but also significant insights and improvements in interpreting the results [18, 22, 23]. Imbuing these models with knowledge of, or even fundamental respect for, the symmetry group structures of the system under study has thus become increasingly impactful in the study of jet substructure, especially in the context of ML models and various neural network (NN) architectures [24, 25, 26].
There are several common approaches to enforcing continuous symmetries in NNs. Data augmentation can be used to train a model to have a particular sparsity structure and become approximately symmetric. However, when model complexity and interpretability are of concern, as is the case in particle physics and jet substructure analyses, a different approach is helpful. Similar issues arise with approaches that use preprocessing/normalization, which moreover come with inherent ambiguities and discontinuities that can be detrimental for sufficiently complex tasks.
Traditionally, ML algorithms are evaluated based on basic performance metrics such as accuracy and computational cost. However, in contexts where the trained algorithms are treated not only as predictors or generators, but as actual models for some process (which is especially true in scientific applications), other metrics of model quality are valuable. Model complexity (defined as e.g. the number of parameters), explainability, and interpretability can be important for making a viable physics model out of an ML algorithm. Further, certain problem-specific properties such as symmetries can be critical as well. Symmetries in ML are known to produce less complex models which respect basic geometrical rules and arguably provide more opportunities for interpretability and explainability (e.g. convolutional neural network (CNN) kernels are often interpreted as visual features). Even in realistic settings where the symmetries are merely approximate, symmetry-constrained architectures often outperform more general architectures in terms of pure accuracy (see e.g. Section 4), but even in cases when that is not true, symmetric architectures should not be discounted, given their other benefits. For these reasons, as advocated for in Ref. [27], we have adopted an approach of building all symmetries directly into the PELICAN network architecture itself, similar to the inherent translational symmetry of CNNs.
### Summary of results
In Section 2 we discuss equivariance in jet physics and introduce the tools we need to build an efficient equivariant architecture. In Section 3 we describe the architectures of PELICAN classifiers and regressors. Now we briefly summarize the main results presented in this work, corresponding to Sections 4 through 8.
**Top-tagging with a PELICAN classifier.** We train PELICAN top taggers using a public benchmark dataset, to distinguish between top quark jets, and light quark and gluon jets. These taggers achieve state-of-the-art performance on the benchmark with fewer learnable parameters than the previous highest-performing network. PELICAN top taggers with as few as 11k parameters outperform all non-equivariant networks in the benchmark. See Section 4 for details.
**\(W\)-boson 4-momentum reconstruction with PELICAN.** We train a PELICAN model using a custom dataset [28] of fully-hadronic top decays to reconstruct the full 4-momentum of the intermediate \(W\)-bosons. Specifically, PELICAN uses 4-momenta of the top quark jet constituents as inputs. PELICAN performs favorably in reconstructing the full \(W\) momentum when compared with the Johns Hopkins (JH) top tagger [7], which produces \(W\) candidates for the subset of jets that pass its tagging. PELICAN achieves better \(p_{T}\), mass, and angular resolutions on JH top-tagged jets, and achieves comparable resolutions to the JH tagger even when evaluated on the full dataset. Additionally, we train a PELICAN model to reconstruct the 4-momentum of only the products of the \(W\to qq^{\prime}\) decay which are contained within the jet. We discuss differences in performance and effects of this choice in reconstruction targets in Section 5.
**\(W\)-boson mass reconstruction with PELICAN.** Mass reconstruction is a common particle physics analysis task, and any reconstruction algorithm should be robust and relatively free of bias. In Section 6 we discuss the nuances of PELICAN mass reconstruction targeting the \(W\)-bosons in the above-mentioned dataset [28] as an example. The results show that eliminating bias in the underlying dataset is required to produce an unbiased
final algorithm. In the case of \(W\) mass reconstruction, this is achieved by training PELICAN on a dataset with multiple values of \(m_{W}\).
**Explaining PELICAN weights.** PELICAN's respect of the particle permutation and Lorentz symmetries inherent to particle datasets provides it with explainability and interpretability rarely found in particle physics machine learning applications. In Section 7 we investigate the rich penultimate layer of PELICAN and its discriminatory power. In particular, we discuss interpretations of PELICAN as a soft clustering and detector-unfolding algorithm of sorts.
**IRC-safety and PELICAN.** In particle physics, IRC-safety is an algorithmic requirement that tools be robust with respect to soft-particle emissions (infrared, IR) and collinear (C) splits due to divergences in perturbative quantum chromodynamics (QCD). In Section 8 we modify PELICAN into IR-safe and IRC-safe versions and discuss their relative performances.
## 2 Equivariance and jet physics
This section aims to establish a clear connection between the group theory that underlies the PELICAN architecture and the implementation of this approach for both classification and regression, as described in Section 3.
Given a symmetry group \(G\) and two sets \(X,Y\) on which an action of \(G\) is defined, a mapping \(F:X\to Y\) is called _\(G\)-equivariant_ if \(F(g\cdot x)=g\cdot F(x)\) for any \(x\in X\) and \(g\in G\). In particular, if the action of \(G\) on \(Y\) happens to be trivial (i.e. \(g\cdot y=y\) for all \(g,y\)), then \(F\) is called _invariant_. In relativistic physics, equivariant maps are typically represented by tensors with equivariant spacetime indices treated via Einstein notation. For instance, the electromagnetic field tensor \(F^{\mu\nu}\) can be viewed as a Lorentz-equivariant mapping from covariant vector fields to contravariant ones. In this work we will be interested in tasks from jet physics that can be reduced to learning a Lorentz-equivariant map. In this section we review some basics of the Lorentz symmetry in the context of such tasks.
### Lorentz symmetry and jets
Lorentz symmetry is one of the fundamental symmetries of the Standard Model of particle physics. The full Lorentz group \(\mathrm{O}(1,3)\) can be defined as the set of linear transformations of the 4-dimensional spacetime that preserve the Minkowski metric \(\eta=\mathrm{diag}(1,-1,-1,-1)\). However, in this work we will restrict ourselves to the _proper orthochronous_ subgroup \(\mathrm{SO}^{+}(1,3)\) that preserves spatial and temporal orientations. Lorentz invariance is the mathematical encapsulation of the fact that the outcomes of physical phenomena don't depend on the inertial frame of the observer. In the context of particle accelerators, this boils down to the observation that all initial and final states of a particle interaction are the same in all inertial frames. This is formally reflected in the fact that the Standard Model of particle physics is Lorentz-invariant, and therefore any model of any physically relevant processes encompassed by the Standard Model can be as well.
A couple subtle points are worth addressing before applying Lorentz symmetry to experimental tasks in jet physics. Neither the actual particle detectors nor the software simulating particle decays and their detection are Lorentz-invariant. Reasons for this include: non-invariant corrections to perturbative computations in quantum chromodynamics (QCD); non-invariance of jet clustering algorithms; practical limitations of detectors such as finite resolution and energy cutoffs. Nevertheless, it is still valid to learn Lorentz-invariant models from data obtained this way. Firstly, QCD is globally Lorentz-invariant and boosting the _entire_ event does not change the outcome of the decay process. As long as inference is performed on data obtained in conditions similar to the conditions of the perturbative simulation, corrections from such effects as the running of the couplings with varying momentum scales are not a concern either. The same applies to jet clustering algorithms and the finite detection resolution: as long as the data used for inference was obtained in the same reference frame as the data used for training, the inference is valid and the outputs are expected to be Lorentz-equivariant. Finally, the fact that the detector itself introduces a fixed reference frame can be fully addressed without breaking the symmetry of the model by including detector geometry among its inputs. This will be discussed below in Section 3.1.
### Lorentz invariance
The classification task considered in this work is exactly Lorentz invariant. The physical content of this statement will be discussed below, but mathematically it simply means the following. If the inputs to the network are a collection of 4-vectors (energy-momentum vectors in our case) \(p_{1},\ldots,p_{N}\), the output is \(F(p_{1},\ldots,p_{N})\), and \(\Lambda\in\text{SO}^{+}(1,3)\) is a Lorentz transformation, then
\[F\left(\Lambda p_{1},\ldots,\Lambda p_{N}\right)=F\left(p_{1},\ldots,p_{N} \right). \tag{1}\]
There are a few ways of constructing a machine learning model that satisfies a constraint of this kind. The simplest one is to hand-pick a set of invariant observables (such as particle masses, relative masses, particle identification labels and charge) and feed them into a generic neural network architecture.
Another approach inspired by convolutional networks is to preserve group-equivariant latent representations in the hidden layers. In this case the neuron nonlinearity must be a Lorentz-equivariant operation, and examples of this can be found in both the Lorentz Group Network (LGN) [25] and LorentzNet [26]. As in traditional CNN's used in image processing, equivariant latent representations, as opposed to invariant ones, can regularize the network via efficient weight-sharing and improve training.
Here, we take a slightly different approach. Given a set of 4-vector inputs \(p_{1},\ldots,p_{N}\), we compute a _complete_ set of Lorentz invariants on that set. For classical groups, including the Lorentz group, the space of invariants constructed out of a collection of vectors in the fundamental representation are functions of only the pairwise invariant dot products (using the appropriate invariant quadratic form for the given symmetry group) and of square determinants (of, say, 4 column-vectors for the Lorentz group) [29]. Furthermore, if the invariant is required to be symmetric in the vector inputs, then it's _only_ a function of the dot products (see also the discussion in Ref. [30]). In short, all totally symmetric Lorentz invariants can be written in the following form:
\[I(p_{1},\ldots,p_{N})=f\left(\{p_{i}\cdot p_{j}\}_{i,j}\right). \tag{2}\]
This is the first key idea used in our architecture. The first step performed by the input layer is the computation of the \(N\times N\) array of dot products between the particle 4-momenta (also known as the Gram matrix). Note, however, that from simple dimension counting it's clear that the \(N(N-1)/2\) components of the Gram matrix \(\{p_{i}\cdot p_{j}\}\) can't be independent. The physical manifold inside this high-dimensional space is defined by the set of constraints \(\det M_{5}=0\) for _every_ 5-minor \(M_{5}\) of the Gram matrix (that is, any matrix obtained from the original one by crossing out \(N-5\) rows and \(N-5\) columns). Moreover, a causally related set of points such as a particle jet will always satisfy \(p_{i}\cdot p_{j}\geqslant 0\) for all \(i,j\). Therefore a neural network whose input is an \(N\times N\) matrix will learn the task only on this \((4N-6)\)-dimensional submanifold of \(\mathbb{R}^{N^{2}}\). The outputs of the trained model on the rest of the space will be uncontrollable and physically meaningless.
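For concreteness, a minimal sketch of this input computation in PyTorch, assuming the 4-momenta are stored as rows \((E,p_{x},p_{y},p_{z})\):

```python
import torch

def minkowski_gram(p):  # p: (batch, N, 4)
    # Gram matrix of Minkowski dot products, with eta = diag(1, -1, -1, -1).
    eta = torch.diag(torch.tensor([1.0, -1.0, -1.0, -1.0]))
    return torch.einsum("...im,mn,...jn->...ij", p, eta, p)  # (batch, N, N)
```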
### Permutation equivariance
Particle data are often interpreted as a point cloud since there is no natural ordering on the vectors. For such problems it makes sense to use one of the permutation-invariant or equivariant architectures. One of the simplest approaches is called Deep Sets [31], which has been applied to jet tagging [24] and even heavy-flavor tagging [32]. The fundamental fact used in deep sets is that any permutation-invariant continuous mapping of inputs \(x_{1},\ldots,x_{N}\) can be written in the form \(\psi\left(\sum_{i}\varphi(x_{i})\right)\), where \(\psi\) and \(\varphi\) can be approximated by neural networks.
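A minimal sketch of this \(\psi\left(\sum_{i}\varphi(x_{i})\right)\) form, with \(\varphi\) and \(\psi\) as small MLPs (the layer sizes are illustrative, not those of any published tagger):

```python
import torch
import torch.nn as nn

class DeepSets(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_hidden))
        self.psi = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_out))

    def forward(self, x):  # x: (batch, N, d_in)
        # Summing over the particle index makes the output permutation-invariant.
        return self.psi(self.phi(x).sum(dim=1))
```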
The main limitation of permutation-invariant architectures such as Deep Sets is the difficulty of training. Since aggregation (summation over the particle index) happens only once, the Deep Sets architecture can struggle with modeling complex higher-order interactions between the particles [33]. The network representing \(\psi\) is forced to be a relatively wide fully connected network, which makes it difficult to train.
The alternative to permutation-invariant architectures is provided by permutation-_equivariant_ ones. Given a symmetry group \(G\) (e.g. the group of permutations), a representation \((V,\rho)\) is a tuple where \(V\) is a set and \(\rho:G\times V\to V\) is a map that becomes a bijection \(\rho_{g}=\rho(g,\cdot):V\to V\) for any fixed value of the first argument, \(\rho_{e}=\text{id}\), and \(\rho_{g^{-1}}=\rho_{g}^{-1}\). Given two representations \((V,\rho)\) and \((V^{\prime},\rho^{\prime})\) of a group \(G\), a map \(F:V\to V^{\prime}\) is called equivariant if it _intertwines_ the two representations, that is:
\[F(\rho_{g}(v))=\rho^{\prime}_{g}\left(F(v)\right),\quad v\in V,\ g\in G. \tag{3}\]
Equivariance is a key property of all convolutional networks - for example, in CNN's the convolution operation is inherently equivariant with respect to translations (up to edge effects).
Similarly, Graph Neural Networks (GNN's) use permutation equivariance to force architectures that respect the underlying graph structure and don't exhibit false implicit biases that produce different outputs after a mere renaming of the graph vertices. In this context, we review the standard definition of a message passing layer where the particles are treated as nodes in a graph (for example, the fully connected graph), and every layer of the network only updates the activation at every node. If we denote by \(f_{i}\) the data assigned to node \(i\), then the message passing layer will typically construct "messages" \(m_{ij}=m(f_{i},f_{j})\) and then update each node by aggregating the messages coming from all neighbors of that node and combining the result with the original state of the node: \(f_{i}^{\prime}=\psi(f_{i},\sum_{j}m_{ji})\). Sometimes the graph also possesses "edge data" \(D_{ij}\) that can be incorporated into the message-forming stage.
Message passing architectures have been successfully applied to jet tagging, most prominently in Refs. [25, 26]. However, attempts to combine message passing with Lorentz invariance run into a major obstacle: as we have seen, the inputs to the network consist of _nothing but_ edge data \(d_{ij}=p_{i}\cdot p_{j}\). Traditional message passing would require a reduction of this set of inputs to a point cloud (with only one particle index), potentially restricting the set of possible higher-order interactions between the points. To avoid making these unnecessary choices, we employ the general permutation-equivariant layers suggested in Refs. [34, 35].
In the general setting, permutation equivariance is a constraint on mappings \(F\) between arrays \(T_{i_{1}i_{2}\cdots i_{r}}\) of any rank \(r\), every index \(i_{k}\in\{1,\ldots,N\}\) referring to a particle label, whereby permutations of the particles "commute" with the map:
\[F\left(\pi\circ T_{i_{1}i_{2}\cdots i_{r}}\right)=\pi\circ F\left(T_{i_{1}i_{2 }\cdots i_{s}}\right),\quad\pi\in S_{N}. \tag{4}\]
Here, the action of permutations is "diagonal": \(\pi\circ T_{i_{1}i_{2}\cdots i_{r}}=T_{\pi(i_{1})\cdots\pi(i_{r})}\). Graph Neural Networks explicitly implement this constraint for rank 1 arrays (node information). A higher-order generalization of the Message Passing layer can be defined as
\[\text{\bf Equivariant Layer:}\quad T^{(\ell+1)}=\text{\bf Agg}\circ\text{ \bf Msg}\left(T^{(\ell)}\right). \tag{5}\]
Here, Msg is a node-wise nonlinear map ("message forming") shared between all nodes, and Agg is a general permutation-equivariant linear mapping ("aggregation") acting on the particle indices of \(T\). Note that whether Msg is node-wise and whether Agg is linear is somewhat ambiguous based on how one separates the mappings into their components, which is why, in particular, the traditional formulation of message passing allows messages to be functions of pairs of nodes. In practice, our aggregation block will also involve a nonlinear activation function.
### Elementary equivariant aggregators
It only remains to describe the exact structure of the equivariant aggregation layers defined above. Since the general case is presented in Refs. [34, 35], here we will only present the layers that we need for jet tagging. Since the input is an array of rank 2, the main equivariant layer for us is one that transforms arrays of rank 2 to other arrays of the same rank: \(T_{ij}\mapsto T^{\prime}_{ij}\). The space of all linear maps of this type turns out to be 15-dimensional. The basis elements of this space can be conveniently illustrated using binary arrays of rank 4. There are 15 such arrays \(B^{a}_{ijkl},a=1,\ldots,15\), and the action of the equivariant layer can be written as
\[T^{\prime a}_{ij}=\sum_{k,l=1}^{N}B^{a}_{ijkl}T_{kl}. \tag{6}\]
The 15 aggregators \(B^{a}\) are easy to visualize. This is done below for \(N=2\).
The smallest squares represent components of the input \(2\times 2\) array, and the larger \(2\times 2\) squares represent components of the output array. Dots represent the non-zero components of the binary tensors \(B^{a}\), and every component of the output tensor is the result of aggregation over all inputs marked by the dots. Output components that lack any dots are set to be a fixed constant, by default zero (the affine versions of these mappings include two such parameters: one constant for the diagonal and another for the remaining components). By "aggregation" we mean, in general, any symmetric function, but in practice it is usually a sum or mean. For example, the first aggregator is simply the identity map on matrices: the \(ij\)'th component of the output array is the result of aggregation over only the \(ij\)'th component of the input. The second aggregator realizes the transposition of arrays \(T^{\prime}_{ij}=T_{ji}\). The following three aggregators represent various ways of embedding the diagonal of the input array in an equivariant way. It is easy to see that simultaneously swapping the two rows and the two columns of the input is equivalent to doing the same to the output, which confirms equivariance. These first 5 aggregators are "order zero" in \(N\) because they do not actually perform any aggregation. Instead, they can be thought of as permutation-equivariant skip-connections.
The second group of 8 "order one" aggregators aggregate over \(N\) components of the input by aggregating either over rows, columns, or the diagonal, and then embedding the result into the output array in all possible equivariant ways. Finally, the last 2 aggregators are the "order two" aggregators that aggregate over all \(N^{2}\) components of the input.
If we allow aggregators to be nonlinear, then they can take the following form: the binary array \(B^{a}\) selects a subset of the components of the input array, and then a general symmetric function \(S^{a}\) is applied to that subset:
\[T^{\prime a}_{ij}=S^{a}\left(\{T_{kl}\mid k,l:B^{a}_{ijkl}\neq 0\}\right). \tag{7}\]
In practice we define \(S^{a}\) as the mean of its inputs followed by an additional scaling by a factor of \(N^{\alpha_{a}}/\bar{N}^{\alpha_{a}}\) with learnable exponents \(\alpha_{a}\), where \(\bar{N}\) is a constant representing the typical number of input vectors expected in the dataset, provided to the model as a hyperparameter.
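A sketch of a handful of these aggregators, together with the learnable \(N^{\alpha_{a}}/\bar{N}^{\alpha_{a}}\) rescaling; masking of zero-padded particles is omitted for brevity, only 7 of the 15 basis maps are shown, and the names are ours:

```python
import torch
import torch.nn as nn

def eq2to2_basis(T):  # T: (batch, channels, N, N)
    diag = torch.diagonal(T, dim1=-2, dim2=-1)            # (batch, C, N)
    ops = [
        T,                                                # identity
        T.transpose(-1, -2),                              # transpose
        torch.diag_embed(diag),                           # keep diagonal in place
        T.mean(dim=-1, keepdim=True).expand_as(T),        # row means, broadcast
        T.mean(dim=-2, keepdim=True).expand_as(T),        # column means, broadcast
        diag.mean(-1)[..., None, None].expand_as(T),      # diagonal mean, broadcast
        T.mean(dim=(-1, -2), keepdim=True).expand_as(T),  # full mean, broadcast
    ]
    return torch.stack(ops, dim=2)                        # (batch, C, n_ops, N, N)

class Rescale(nn.Module):
    def __init__(self, n_ops, n_bar=50.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.rand(n_ops))  # learnable exponents
        self.n_bar = n_bar                            # typical multiplicity N-bar

    def forward(self, aggs, N):
        # Multiply each aggregator by (N / N-bar)^alpha_a.
        return aggs * ((N / self.n_bar) ** self.alpha).view(1, 1, -1, 1, 1)
```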
### Equivariance and Jet Physics
There are several reasons for enforcing the full Lorentz symmetry in our ML models. First and foremost, it is a fundamental symmetry of the space to which the inputs belong. Lorentz transformations represent the effect of switching between different inertial frames, and most fundamental processes in physics are independent of the choice of the observer's inertial frame: if a given collection of particles consists of products of a decay of a top quark for one observer, then the same is true for all other observers.
Nevertheless, some processes involved in generating and observing high-energy collision events break the Lorentz symmetry in some subtle ways. At the fundamental level, the running of the couplings in QCD can cause Lorentz symmetry breaking in the parton shower distribution functions. Even the amount of final decay products depends on the transversal boost of the initial parton-level particles. However, there is no question that both the original protons and the final (asymptotic) decay products are accurately represented by a collection of 4-vectors subject to the spacetime Lorentz symmetry: the asymptotic outcome of a collision event is independent of the observer's reference frame.
Another reason for symmetry-restricted modeling is that, from the geometric perspective, only some mathematical operations are permissible when working with objects that transform in a certain way under a symmetry group. A non-equivariant neural network effectively neglects the vector nature of the inputs by treating individual components of the input vectors as scalars. While improving network expressivity, non-equivariance fails to deliver physically interpretable models. Ultimately, a statement about equivariance is a statement about what the basic _features_ of the data are - e.g. vectors are features, but the individual components of those vectors are not.
More relevant to the applications is the fact that both the simulation and the observation of collisions inevitably involves some degree of _clustering_. A particle detector is made of cells (e.g. calorimeters) of finite size and as such is unable to distinguish between some particles that are collinear or very close to collinear. Similarly, the standard algorithms for collision simulation typically perform _jet clustering_ to closely reproduce the detector behavior. Clustering of course is not a Lorentz-invariant procedure: particle tracks that diverge by a small angle in one frame will diverge by a large angle in another highly boosted frame. However, this limitation of Lorentz-invariant architectures is fairly minor. Since clustering is always done in a fixed
laboratory frame, it is still reasonable to impose the full Lorentz symmetry on the resulting 4-vector data. So unless the pre-clustering data itself is coming from multiple significantly different inertial frames, clustering is not interfering with the fundamental symmetry. Simply put, however a given set of 4-vectors is obtained and represented in a specific inertial frame, those vectors will respect the Lorentz symmetry.
## 3 PELICAN architecture
The PELICAN architecture is simplified with respect to LGN due to the use of the complete set of dot products between the input 4-momenta (See Section 2), and this has significant implications for both the overall architecture as well as the ease of training and interpretability of the network. This section discusses each of the primary components of the network, including the inputs and their embedding, the permutation and Lorentz equivariant blocks, and the output layers that determine the structure of the result, namely classification or 4-vector regression.
### Inputs and embeddings
**Dot Products and Beams.** On the input side of the architecture, the first step is to compute all pairwise dot products of the input 4-momenta. Appended to the list of these 4-momenta are two auxiliary beam particles with 4-momenta \((1,0,0,\pm 1)\). This is helpful since the datasets we are using are all simulated in a fixed laboratory frame where the original proton-proton collision happens along the \(z\)-axis, and the auxiliary inputs restore this orientational knowledge. In particular, the dot products between constituents and beams give PELICAN access to the energies and transverse momenta of all constituents.
It is worth emphasizing that introducing beams in this manner allows us to fix a particular spatial orientation of the events without restricting or violating the global Lorentz symmetry inherent in the architecture. Indeed, if one were to treat the auxiliary beams as constant vectors of hyperparameters, then this action would reduce the full Lorentz symmetry to merely rotations in the \(xy\)-plane and \(z\)-boosts. However, due to the fact that the beams are fed into the network on equal footing with all other inputs, they are properly treated as full-fledged 4-vectors that should also transform under the global Lorentz symmetry. Thus, counter-intuitively, we let the network access individual energies, transverse momenta and \(z\)-momenta while still preserving full Lorentz symmetry and all the computational benefits that come with it.
**Input Embedding.** Next there is an embedding layer that applies the function \(f_{\alpha}(x)=((1+x)^{\alpha}-1)/\alpha\) to each dot product with \(C^{0}-2\) different values of the trainable parameter \(\alpha\) (initialized to span the interval \([0.05,0.5]\)). Then the result goes through a masked BatchNorm2D layer. Finally, this array of scalars gets concatenated with two labels \(\mathrm{L}_{i}\), \(\mathrm{L}_{j}\) per dot product \(d_{ij}=p_{i}\cdot p_{j}\) that indicate whether each of particles \(i\) and \(j\) is a beam or not. The label for a beam is chosen to be 1 and the label for all other particles is 0. At the end of this input block, we have a tensor of shape \([B,N_{\mathrm{max}},N_{\mathrm{max}},C^{0}]\) where the feature vector for each particle pair has the form \(\left(\mathrm{BatchNorm2D}\left(f_{\alpha_{1}}(d_{ij}),\ldots,f_{\alpha_{C^{0}-2}}(d_{ij})\right),\mathrm{L}_{i},\mathrm{L}_{j}\right)\).
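A sketch of this input block, assuming the two beams have already been appended to the particle list and that masking is handled elsewhere; the clamp guards against numerically negative \(1+x\), since \(p_{i}\cdot p_{j}\geqslant 0\) may hold only approximately after detector effects:

```python
import torch
import torch.nn as nn

class InputEmbed(nn.Module):
    def __init__(self, c0):
        super().__init__()
        n = c0 - 2  # two channels are reserved for the beam labels
        self.alpha = nn.Parameter(torch.linspace(0.05, 0.5, n))
        self.bn = nn.BatchNorm2d(n)

    def forward(self, dots, beam):  # dots: (B, N, N); beam: (B, N) floats in {0, 1}
        a = self.alpha.view(1, -1, 1, 1)
        # f_alpha(x) = ((1 + x)^alpha - 1) / alpha, applied channel-wise.
        x = ((1.0 + dots.unsqueeze(1)).clamp(min=0) ** a - 1.0) / a
        x = self.bn(x)  # (B, c0 - 2, N, N)
        N = dots.shape[-1]
        li = beam[:, None, :, None].expand(-1, 1, -1, N)  # label of particle i
        lj = beam[:, None, None, :].expand(-1, 1, N, -1)  # label of particle j
        return torch.cat([x, li, lj], dim=1)  # (B, c0, N, N)
```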
### Permutation equivariant blocks
The main element of the equivariant architecture is the permutation-equivariant block transforming arrays of rank 2. Namely, we assume that the input tensor to the block has shape \([B,N_{\mathrm{max}},N_{\mathrm{max}},C^{l}]\), where \(B\) is the batch size, \(N_{\mathrm{max}}\) is the maximum number of jet constituents per event (with zero padding for events with fewer constituents), and \(C^{l}\) is the number of input channels. We also use a binary mask of shape \([B,N_{\mathrm{max}},N_{\mathrm{max}}]\) to appropriately exclude the zero padding from operations like BatchNorm and aggregation. The output of the block will be a similar tensor of shape \([B,N_{\mathrm{max}},N_{\mathrm{max}},C^{l+1}]\) with the same mask.
As outlined above, the equivariant layer consists of a message block and an aggregation block. The message block is chosen to be a dense multilayer perceptron (MLP) acting on the channel dimension with a LeakyReLU activation and BatchNorm2D (normalization over the first three dimensions of the tensor, for each channel separately, followed by an affine transform with two learnable parameters per channel). Here we use a masked implementation of batch normalization so that the variable particle number is respected. The message block is then followed by Dropout that zeroes out each of the \(B\times N_{\mathrm{max}}^{2}\times C_{\mathrm{eq}}^{l}\) components independently with a certain probability.
The aggregation block applies 15 linear aggregation functions (LinEq\({}_{2\to 2}\)) which, for each component of the output tensor, compute the mean over some subset of the components of the input tensor, as explained in Section 2.4. Note that this is a non-parametric transformation performed on each channel separately. Each of the \(C^{l}_{\text{eq}}\times 15\) resulting aggregation values is then independently multiplied by \(N^{\alpha}/\bar{N}^{\alpha}\) with a trainable exponent \(\alpha\) (initialized as a random float in \([0,1]\)), where \(N\) is the number of particles in the corresponding event. This allows for some flexibility in the aggregation process, for example \(\alpha=1\) returns the sum aggregation function, and combining multiple aggregators is known to boost accuracy [34].
Aggregation is followed by a dense layer that mixes the \(C^{l}_{\text{eq}}\times 15\) aggregators down to \(C^{l+1}\) features. Due to the size of this layer, we employ a simple factorization to reduce the number of parameters. Namely the weight tensor \(W_{abc}\), where \(a\) is the input channel index, \(b\) is the basis index (1 to 15), and \(c\) is the output channel index, can replaced by the following combination:
\[W_{abc}=W^{0}_{ab}W^{1}_{ac}+W^{2}_{cb}W^{3}_{ac}. \tag{8}\]
Here, the first term first mixes the 15 aggregators among each other for each output channel, and then mixes the channels. Similarly, the second term first mixes the 15 aggregators for each input channel, and then mixes the channels.
The final result is a tensor of shape \([B,N_{\text{max}},N_{\text{max}},C^{l+1}]\), so these equivariant layers can be stacked multiple times.
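A sketch of the factorized mixing of Eq. (8) via einsum contractions (the initialization scales are illustrative):

```python
import torch
import torch.nn as nn

class FactorizedMix(nn.Module):
    def __init__(self, c_in, c_out, n_basis=15):
        super().__init__()
        self.w0 = nn.Parameter(torch.randn(c_in, n_basis) / n_basis**0.5)
        self.w1 = nn.Parameter(torch.randn(c_in, c_out) / c_in**0.5)
        self.w2 = nn.Parameter(torch.randn(c_out, n_basis) / n_basis**0.5)
        self.w3 = nn.Parameter(torch.randn(c_in, c_out) / c_in**0.5)

    def forward(self, x):  # x: (batch, c_in, n_basis, N, N)
        # W_abc = W0_ab W1_ac + W2_cb W3_ac, contracted over a (input
        # channel) and b (aggregator basis index).
        t1 = torch.einsum("zabnm,ab,ac->zcnm", x, self.w0, self.w1)
        t2 = torch.einsum("zabnm,cb,ac->zcnm", x, self.w2, self.w3)
        return t1 + t2  # (batch, c_out, N, N)
```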
### Classification and 4-vector regression outputs
One of the strengths of the PELICAN architecture is the ability to easily switch between serving as a classification tool for discriminating between Lorentz-boosted top quarks and the QCD background, to being able to provide 4-vector regression results, such as momentum reconstruction.
**PELICAN classifier.** To build a classifier, aside from the Eq\({}_{2\to 2}\) equivariant layer one needs an Eq\({}_{2\to 0}\) layer that reduces the rank 2 array to permutation-invariant scalars. This layer involves just 2 aggregation functions instead of 15, namely the trace and the total sum of the input square matrix, but is otherwise identical to the equivariant layer described in the last section.
\[\{d_{ij}\}\rightarrow\boxed{\text{Emb}\rightarrow[\text{Eq}_{2\to 2}]^{L} \rightarrow\text{Eq}_{2\to 0}\rightarrow\text{MLP}}\rightarrow\{w_{c}\} \tag{9}\]
From the input block, the tensor is passed through \(L\) equivariant Eq\({}_{2\to 2}\) layers, and the Eq\({}_{2\to 0}\) layer with dropout. This produces a tensor of shape \([B,C_{\text{out}}]\). One final MLP mixes this down to just 2 classification weights per event. A cross-entropy loss function is then used for optimization.
**PELICAN 4-vector regression.** The same architecture can also be easily adapted for 4-vector regression tasks, such as momentum reconstruction. Any Lorentz-equivariant map from a collection of 4-vectors
Figure 1: The PELICAN equivariant block updating square arrays.
\(p_{1},\ldots,p_{N}\) to one (or several) 4-vector has the form
\[F(p_{1},\ldots,p_{N})=\sum_{i=1}^{N}f_{i}(p_{1},\ldots,p_{N})\cdot p_{i}, \tag{10}\]
where \(f_{i}\)'s are Lorentz-invariant functions [36]. Combining this with permutation invariance, we conclude that the multi-valued map \((p_{1},\ldots,p_{N})\mapsto(f_{1},\ldots,f_{N})\) must also be equivariant with respect to the permutations of the inputs.
The only change required to the architecture we've introduced for classification is that \(\text{Eq}_{2\to 0}\) must be replaced with \(\text{Eq}_{2\to 1}\) and the final output layer must have only one output channel (assuming we are regressing on a single 4-vector). The \(\text{Eq}_{2\to 1}\) layer is again identical to \(\text{Eq}_{2\to 2}\) except that it uses only 4 linear aggregators: row sums, column sums, trace, and full sum. The architecture is summarized by the following diagram, where we treat \(d_{ij}\) as the inputs and \(f_{i}\) as the outputs, keeping in mind formula (10) that lets us recover the final predicted vector.
\[\{d_{ij}\}\rightarrow\boxed{\text{Emb}\rightarrow[\text{Eq}_{2\to 2}]^{L} \rightarrow\text{Eq}_{2\to 1}\rightarrow\text{MLP}}\rightarrow\{f_{i} \}_{i=1}^{N} \tag{11}\]
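A sketch of how the predicted 4-vector is recovered from the per-particle invariant outputs, following formula (10); `mask` flags the real (non-padded) constituents:

```python
import torch

def recover_vector(f, p, mask):  # f: (B, N), p: (B, N, 4), mask: (B, N) floats
    # Invariant-weighted sum of the input momenta; Lorentz equivariance is
    # inherited from the p_i since the weights f_i are invariant.
    return torch.einsum("bn,bnm->bm", f * mask, p)  # (B, 4)
```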
## 4 Tagging jets from Lorentz boosted top quarks
This section presents the dataset, training approach, and results of using PELICAN as a classifier in the context of identifying Lorentz-boosted top quarks. Three different versions of PELICAN are discussed, each with a different size in terms of both the width of the network and the number of trainable parameters. Lastly, the dependence of the performance on the size of the training dataset is also presented, providing a quantitative relationship between the size of the network, the training dataset efficiency, and the resulting performance.
### Classification dataset
We perform top-tagging on the reference dataset [37], which was also used in Ref. [8]. This dataset consists of 2M entries, each entry corresponding to a single hadronic top jet or the leading jet from a QCD dijet event. There are 1.2M training entries, 400k validation entries, and 400k testing entries. The events were generated with PYTHIA8, and the Delphes framework [38] was used for fast detector simulation in order to incorporate detector effects. For each jet, the 4-momenta of the 200 leading constituents are stored in Cartesian coordinates \((E,p_{x},p_{y},p_{z})\), in order of decreasing \(p_{T}\). This list is zero-padded, and all jets in the dataset have fewer than 200 constituents. The dataset does not contain any other information on the jet constituents, such as charge or spin.
### Classification training procedure
The top-tagging model contains five \(\text{Eq}_{2\to 2}\) blocks of identical shapes. We train three different versions of the model with different widths. The widest model has 132 input and 78 output channels on every messaging layer (the equivariant layer then produces \(132\times 15\) quantities which get mixed down to 78 channels by a fully connected linear layer). The output MLP is just one layer that mixes 132 channels down to 2 classification weights. The number of jet constituents was capped at 80 (no noticeable performance gain was seen beyond that number). The dropout rate was 0.025, and the model was optimized using the AdamW optimizer [39] with a weight decay of 0.005. The training on the full dataset went on for 35 epochs with the same learning rate schedule as in Ref. [26]: 4 epochs of linear warm-up up to a learning rate of 0.001, followed by 28 epochs of \(\text{CosineAnnealingLR}\) with \(T_{0}\) of 4 epochs and \(T_{\text{mult}}=2\), and then 3 epochs of exponentially decaying learning rate with exponent \(\gamma=0.5\) per epoch. The three models were trained on Nvidia H100 GPUs with a batch size of 100, taking 0.43, 0.17, or 0.08 seconds per batch, respectively. Inference took 0.17, 0.07, or 0.04 seconds per batch. Batches were shuffled between epochs.
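A sketch of this schedule in PyTorch, assuming one scheduler step per epoch (the actual training code may differ in details):

```python
import torch
from torch.optim.lr_scheduler import (CosineAnnealingWarmRestarts,
                                      ExponentialLR, LinearLR, SequentialLR)

model = torch.nn.Linear(4, 2)  # placeholder model
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.005)
sched = SequentialLR(
    opt,
    schedulers=[
        LinearLR(opt, start_factor=0.01, total_iters=4),    # 4 warm-up epochs
        CosineAnnealingWarmRestarts(opt, T_0=4, T_mult=2),  # 28 cosine epochs
        ExponentialLR(opt, gamma=0.5),                      # 3 decay epochs
    ],
    milestones=[4, 32],
)
for epoch in range(35):
    # ... one training epoch ...
    sched.step()
```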
### Classification results
Figure 2 shows the _receiver operating characteristic_, here represented by the background rejection as a function of the signal efficiency, for the classification performance. In Table 1 we compare the accuracy, area under the curve (AUC), and background rejection values at 30% signal efficiency between PELICAN
and multiple existing ML top-taggers, including the previous state-of-the-art LorentzNet [26]. We trained three PELICAN top-taggers with layers of differing widths, with 208k, 48k, and 11k trainable parameters, respectively. The results are averaged over 5 random initialization seeds, and the uncertainties are given by the standard deviation. The large PELICAN model improves upon the LorentzNet result with a comparable number of parameters, and the medium model roughly matches LorentzNet despite having 5 times fewer parameters. Perhaps most remarkably, the small model with 11k parameters beats every pre-LorentzNet competitor despite having at least 5 times fewer parameters, and up to 130 times fewer parameters, than the other networks (cf. Table 1).
In addition to different model sizes, we also explore sample efficiency. Each of the three models above was trained on 0.5%, 1% and 5% of the training data and compared to the original. For these, the training went on for 70 epochs with 60 epochs of CosineAnnealingLR instead of 28, and 6 epochs of exponential decay instead of 3. The results can be found in Table 2. Notice that at lower amounts of training data the differences in performance between models of different width become much less significant, and at 1% and 0.5% of training data all three models fall within each other's uncertainty ranges. These results suggest that the larger PELICAN networks are likely able to learn a greater range of more subtle features from the training data and thus benefit from seeing a larger training dataset. On the other hand, the primary features are already
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Architecture & Accuracy & AUC & \(1/\epsilon_{B}\) & \# Params \\ \hline TopoDNN[40] & 0.916 & 0.972 & 382 \(\pm\) 5 & 59k \\ LGN[25] & 0.929(1) & 0.964(14) & 424 \(\pm\) 82 & 4.5k \\ PFN[24] & 0.932 & 0.982 & 891 \(\pm\) 18 & 82k \\ ResNeXt[8] & 0.936 & 0.984 & 1122 \(\pm\) 47 & 1.46M \\ ParticleNet[41] & 0.938 & 0.985 & 1298 \(\pm\) 46 & 498k \\ LorentzNet[26] & 0.942 & 0.9868 & 2195 \(\pm\) 173 & 220k \\ \hline PELICAN\({}_{132/78}\) & 0.9425(1) & 0.9870(1) & 2250 \(\pm\) 75 & 208k \\ PELICAN\({}_{60/35}\) & 0.9423(1) & 0.9868(1) & 2133 \(\pm\) 148 & 48k \\ PELICAN\({}_{25/15}\) & 0.9410(3) & 0.9858(4) & 1879 \(\pm\) 103 & 11k \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of accuracy, AUC, and background rejection \(1/\epsilon_{B}\) at 30% signal efficiency between PELICAN models and existing ML top-taggers. The subscripts indicate the width of the network: e.g. 132/78 means each Msg layer has 132 input and 78 output channels.
Figure 2: Performance of various machine learning architectures represented by the background rejection as a function of the signal efficiency.
learnable with just a few percent of the data. In particular, with 5% of the training data and only 11k learnable parameters, the PELICAN\({}_{25/15}\) version of the network appears to achieve background rejection performance similar to that of ResNeXt, which uses 1.46M parameters trained on the full dataset.
## 5 \(W\)-boson 4-momentum reconstruction
To test the equivariant regression architecture described in Section 3 we chose a task where the aim is to reconstruct (or _predict_) the full 4-momentum of the \(W\)-boson within the Lorentz-boosted top quark decay products. Specifically, we consider the same hadronic top quark decay that constitutes the signal in the top-tagging dataset, which uses the \(t\to bW\to bqq\) two-step decay, followed by hadronization, showering, and detection. Our aim is to reconstruct the true 4-momentum of the \(W\)-boson given the full set of observed final state particles of the top decay, as represented by the jet constituents.
### Regression dataset
The dataset used for the regression task consists of 1.5M \(t\bar{t}\) events simulated with PYTHIA8, with 700k events for training, 200k events for validation, and 500k events for testing (with an additional 100k events set aside in a second testing set). From each event, we cluster anti-\(k_{T}\) jets with \(R=0.8\) using FastJet[42] and we select the jet nearest to the truth-level top quark in \((\eta,\phi)\), requiring the distance between the top quark and the jet to satisfy \(\Delta R\) (top quark, jet) \(<0.8\). This jet clustering is done both at truth-level, and using calorimeter tower objects produced by running the event through Delphes fast detector simulation using the ATLAS detector card. Thus, each _event_ in the dataset corresponds to a single jet, and includes information for truth-level particles such as the truth-level top quark - we may therefore use the terms _jet_ and _event_ interchangeably below with the understanding that each event in the dataset has one and only one jet recorded. This dataset is publicly available via Zenodo [28], where a full description of the various data fields is provided. Here we provide only an overview of some key features:
1. There are two versions of the dataset, corresponding with truth- and reconstruction-level (Delphes) jets. The events are the same between versions, so the two can be compared event-by-event to study the effects of detector reconstruction on network training and performance.
2. The input data for the network are the 4-momenta of the 200 leading jet constituents. For use as possible regression targets and for defining jet containment (explained below), each event contains: 1. the truth-level top quark that initiated the jet, 2. the bottom quark from the top quark decay, 3. the \(W\)-boson from the top quark decay, and 4. the two quarks from the subsequent \(W\)-boson decay (\(W\to qq^{\prime}\)).
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Model & \% training data & Accuracy & AUC & \(1/\epsilon_{B}\) \\ \hline PELICAN\({}_{132/78}\) & 100\% & 0.9425(1) & 0.9870(1) & 2250 \(\pm\) 75 \\ & 5\% & 0.9366(3) & 0.9841(1) & 1213 \(\pm\) 79 \\ & 1\% & 0.9316(6) & 0.9810(5) & 789 \(\pm\) 49 \\ & 0.5\% & 0.9289(11) & 0.9800(5) & 633 \(\pm\) 28 \\ \hline PELICAN\({}_{60/35}\) & 100\% & 0.9423(1) & 0.9868(1) & 2133 \(\pm\) 148 \\ & 5\% & 0.9368(2) & 0.9841(1) & 1148 \(\pm\) 49 \\ & 1\% & 0.9323(3) & 0.9813(4) & 799 \(\pm\) 52 \\ & 0.5\% & 0.9289(9) & 0.9795(5) & 637 \(\pm\) 105 \\ \hline PELICAN\({}_{25/15}\) & 100\% & 0.9410(3) & 0.9858(4) & 1879 \(\pm\) 103 \\ & 5\% & 0.9361(5) & 0.9835(2) & 1122 \(\pm\) 44 \\ & 1\% & 0.9316(1) & 0.9810(5) & 798 \(\pm\) 116 \\ & 0.5\% & 0.9286(11) & 0.9795(6) & 615 \(\pm\) 133 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of PELICAN models trained on different fractions of the training data.
In addition, the event contains the stable \(W\)-boson daughter particles. These are the truth-level, final state particles that are traced back to the \(W\)-boson by PYTHIA.
3. Each jet is tagged with the Johns Hopkins top tagger [7] (JH), as implemented in FastJet. This allows us to define a subpopulation of JH-tagged events, which we shall sometimes refer to as \(JH\)_events_. For jets that it tags as top quark jets, JH provides a \(W\)-boson candidate constructed from subjets.
4. Each jet is also tagged as whether or not it is _fully-contained_ (FC). We define FC events as those where the \(b\)-quark, as well as the two quarks from \(W\to qq^{\prime}\) decay, are within \(\Delta R<0.6\) of the jet centroid (i.e. within 75% of the jet radius). In such cases almost all of the \(W\) daughters are contained within the jet and we can expect a good reconstruction of the \(W\) momentum. FC events comprise 75% of the dataset.
### Regression training procedure
Our model has 4 equivariant \(\mathrm{Eq}_{2\to 2}\) blocks. Each messaging layer takes in 132 channels and outputs 78 channels. Conversely, each equivariant aggregation layer has 78 input channels and outputs 132 channels. The \(\mathrm{Eq}_{2\to 1}\) block has the same shape, and the final fully connected layer has the shape \(1\times 132\). There are 210k parameters in total. Assuming \(N\) non-zero input jet constituents, this produces \(N\) scalar coefficients \(c_{i}\) with zero-padding, which are the Lorentz invariants introduced in (11). The reconstructed 4-momentum is then computed via
\[p_{\mathrm{reco}}=\sum_{i}c_{i}p_{i}. \tag{12}\]
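A minimal sketch of how Eq. (12) can be implemented: the network's invariant coefficients weight the (zero-padded) input 4-momenta and are summed. Tensor shapes and the masking convention are our assumptions, not the authors' code.

```python
import torch

def reconstruct_momentum(c, p, mask):
    """Weighted sum of input 4-momenta, Eq. (12).

    c:    (batch, N)    Lorentz-invariant PELICAN weights c_i
    p:    (batch, N, 4) constituent 4-momenta (E, px, py, pz)
    mask: (batch, N)    1 for real constituents, 0 for zero-padding
    """
    c = c * mask                       # padded slots contribute nothing
    return torch.einsum('bn,bnd->bd', c, p)
```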
The training regime for this task is essentially identical to the one for top-tagging: AdamW optimizer with weight decay of 0.01, 35 epochs in total with 4 epochs of warm-up and exponential learning rate decay for the last 3 epochs. The main difference is in the choice of the loss function \(L(p_{\mathrm{reco}},p_{\mathrm{target}})\). Spacetime geometry allows for many choices of this function, which in turn will affect the shape of the landscape near \(p_{\mathrm{target}}\) and in turn the precision of various reconstructed features of the vector, such as the mass, energy, spatial momentum, transverse momentum, and direction. It is even possible to construct Lorentz-invariant loss functions to make the training process itself equivariant. Nevertheless, for the purpose of simultaneous reconstruction of the direction and the mass of the \(W\)-boson, \(m_{W}\), we found
\[L=0.01\|\mathbf{p}_{\mathrm{reco}}-\mathbf{p}_{\mathrm{target}}\|+0.05|m_{\mathrm{ reco}}-m_{\mathrm{target}}| \tag{13}\]
to be very effective. It uses all 4 components of the target vector and strikes a good balance between the precision of the reconstructed mass and spatial momentum.
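A sketch of the loss (13), assuming 4-vectors stored as \((E,p_x,p_y,p_z)\), a batch-mean reduction, and a clamped Minkowski norm for numerical safety (the clamp is our addition, not specified in the text):

```python
import torch

def minkowski_mass(p):
    # p: (..., 4) as (E, px, py, pz); clamp guards against tiny negative m^2
    m2 = p[..., 0]**2 - (p[..., 1:]**2).sum(-1)
    return torch.sqrt(torch.clamp(m2, min=0.0))

def w_loss(p_reco, p_target):
    # Eq. (13): spatial-momentum term plus mass term
    spatial = torch.norm(p_reco[..., 1:] - p_target[..., 1:], dim=-1)
    mass = (minkowski_mass(p_reco) - minkowski_mass(p_target)).abs()
    return (0.01 * spatial + 0.05 * mass).mean()
```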
A rarely discussed feature of this task is the choice of the target vector \(p_{\mathrm{target}}\). Even though our ultimate inference target is the true \(W\) momentum \(p_{\mathrm{true}}^{W}\), it is not necessarily the best training target given the nature of the dataset. Detection and jet clustering include multiple energy, momentum, and spatial cuts that exclude some decay products from the final jet. For instance, one of the three quarks in \(t\to bqq\) might fall outside of the \(R=0.8\) radius of the jet clustering algorithm, in which case most of the decay products of that quark are likely to be absent from the event record. If many of the decay products of the \(W\)-boson are missing, then we lack the information necessary to make an accurate estimate of its true momentum, or even to identify which of the jet constituents belong to the \(W\). This effect is often referred to as an _acceptance_ issue due to the finite purview of the final state reconstruction.
To alleviate this issue and provide better control over the inference stage, we propose an alternative target 4-vector that we call the _contained true \(W\) momentum_ \(p_{\mathrm{cont}}^{W}\), equal to the total 4-momentum of the _truth-level \(W\) decay products_ that fall within the radius of the final reconstructed top jet. In the truth-level dataset, this is simply \(p_{\mathrm{cont}}^{W}=\sum_{k}p_{i_{k}}\) where \(i_{k}\) are the indices of the constituents whose parent is the \(W\)-boson and not the \(b\)-quark. In the Delphes dataset, however, there is no simple analytic relationship between \(p_{\mathrm{cont}}^{W}\) and the jet constituents \(p_{i}\). That is to say that the mapping of the truth-level information to the detector-level reconstruction is highly non-linear. Nonetheless, in either dataset this vector more accurately reflects the available information about the \(W\)-boson and allows us to make inferences not only about the \(W\)-boson itself, but also about the containment qualities of the event. This will be discussed further in Section 5.5 below. For reference, the true mass spectra of both \(p_{\mathrm{true}}^{W}\) and \(p_{\mathrm{cont}}^{W}\) are shown in Fig. 3. For fully-contained (FC)
events, the mass spectra are similar between the true and the contained \(W\) mass as expected. Non-FC events are mostly confined to a clear second peak at 13 GeV corresponding to \(qb\) and \(q\) jets (where one of the quarks from \(W\to qq\) fell outside the jet), and a minor peak at \(m_{\rm cont}^{W}=0\) corresponding to \(b\) jets.
Given the above observations, we prepared two PELICAN models, one trained to reconstruct \(p_{\rm true}^{W}\), and another trained to reconstruct \(p_{\rm cont}^{W}\). Otherwise the two models are identical and are trained in the same way and with the same loss function. We then compare the outputs of each model to \(p_{\rm true}^{W}\) and analyze the benefits of the two choices of the target.
### Regression results for \(p_{\rm true}^{W}\) reconstruction
The results are summarized in Table 3. We quantify the precision of the reconstruction by the transverse momentum, \(p_{T}\), and mass resolutions, given by half of the central 68th interquantile range of \((x_{\rm predict}-x_{\rm true})/x_{\rm true}\), where \(x\) is \(m\) or \(p_{T}\). In addition we report the lower 68th interquantile range for \(\Delta R\), the \(z\)-boost-invariant spatial angle between the predicted and true momenta.
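The resolution metric used throughout this section can be computed directly from the quantiles of the relative error; a minimal sketch:

```python
import numpy as np

def resolution(x_pred, x_true):
    """Half of the central 68th interquantile range of (x_pred - x_true)/x_true."""
    rel = (x_pred - x_true) / x_true
    lo, hi = np.quantile(rel, [0.16, 0.84])  # central 68% band
    return 0.5 * (hi - lo)
```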
Figure 3: Stacked histogram _with proportional bin heights_ showing the mass spectrum of the two targets, the true \(W\)-boson \(p_{\rm true}^{W}\), and the contained true \(W\) momentum \(p_{\rm cont}^{W}\). The top curve represents the spectrum over the entire dataset on log scale and the bottom curve shows the spectrum over FC events only, _scaled linearly_ relative to the top curve, i.e. the fraction of FC events in a given bin is given by the apparent height of the FC curve divided by the total height of the bin (heights are measured from the \(x\)-axis). The two mass spectra of FC events, in fact, match.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & Method & \(\sigma_{p_{T}}\) (\%) & \(\sigma_{m}\) (\%) & \(\sigma_{\Delta R}\) (centirad) \\ \hline \multirow{4}{*}{Truth} & JH & 0.66\% & 1.26\% & 0.216 \\ & PELICAN\(|\)JH & 0.26\% & 0.57\% & 0.113 \\ & PELICAN\(|\)FC & 0.30\% & 0.71\% & 0.139 \\ & PELICAN & 0.79\% & 1.12\% & 0.473 \\ \hline \multirow{4}{*}{Delphes} & JH & 9.8 \% & 8.3 \% & 9.6 \\ & PELICAN\(|\)JH & 3.5 \% & 2.6 \% & 2.8 \\ \cline{1-1} & PELICAN\(|\)FC & 4.0 \% & 2.9 \% & 3.1 \\ \cline{1-1} & PELICAN & 5.1 \% & 3.0 \% & 4.7 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Momentum reconstruction results for the Johns Hopkins (JH) tagger and PELICAN trained to reconstruct \(p_{\rm true}^{W}\). We report the relative \(p_{T}\) and mass resolutions, and the interquantile range for the angle \(\Delta R\) between predicted and true momenta. PELICAN uncertainties are within the last significant digit.
Since there are no ML-based methods for this task, we use the \(W\)-boson identification of the Johns Hopkins top tagger [43] implemented in FastJet[42] for the baseline comparison. The tagger has a 36% efficiency on the truth-level dataset and 31% on the Delphes one. It can only identify \(W\)-boson candidates for jets it tags, so we report PELICAN results both on the JH-tagged jets only (PELICAN|JH) and on the full dataset (PELICAN). Moreover, we evaluate PELICAN on the population of FC events (PELICAN|FC). More than 99.9% of JH-tagged events contain all three true quarks \(bqq\) within the jet radius, so this population represents an especially restricted and 'ideal' type of event. The results were evaluated over 5 training runs initialized with different random seeds, and the resolutions reported in Table 3 are consistent across the runs.
There are significant differences in PELICAN's performance on the different sub-populations of events. In the direct comparison with the JH tagger, PELICAN|JH is 2-4 times more precise. However, even on the much larger class of FC events, PELICAN produces predictions with almost the same precision. The highest loss of precision happens on non-FC events where many of the \(W\) decay products are missing from the jet, leading to lower average precision on the entire dataset. As we will discuss in Section 6, this result can be _explained_ by interrogating the PELICAN weights and kinematic information directly.
In Fig. 4 we show the relative reconstructed \(W\) masses for two of the models, one trained on truth data, and one on Delphes data. The results also include the curve for the JH tagger's reconstruction, as well as PELICAN|JH and PELICAN|FC. The 68\({}^{\text{th}}\) interquantile ranges of these curves match the numbers in the \(\sigma_{m}\) column of Table 3. See Section 7 for further details on the causes of performance degradation in the Delphes case. For the complete set of results see Appendix A.
### Regression results for \(p_{\text{cont}}^{W}\) reconstruction
Now we train new models with the target vector set to the contained true \(W\) momentum \(p_{\text{cont}}^{W}\), evaluate their precision by comparing the outputs to the true \(W\) momentum \(p_{\text{true}}^{W}\), and compare the results to Table 3. As shown in Table 4, the resolutions for these models on JH-tagged and FC events are slightly worse than the first set of models, in the Delphes case by 5-15%. The largest change is in non-FC events, leading to poor average resolutions on the whole dataset. Despite this, as we will now show, these models can in fact be better suited for real-world applications.
### Discussion
To see the main benefit of this model, we present the behavior of the relative reconstructed mass shown in Fig. 5. PELICAN-reconstructed masses within the range of true \(W\) masses are almost as precise on the full dataset as they are on FC events (see Fig. 5 near the peak at 1). The most prominent feature of these results is that, despite the slightly lower accuracies on FC events (at fixed width and depth of the network), the
Figure 4: Reconstructed \(W\) mass relative to true \(W\) mass for the PELICAN model trained on truth (left) or Delphes (right) data, and targeting \(p_{\text{true}}^{W}\).
model trained to reconstruct \(p_{\text{cont}}^{W}\) accurately reproduces the mass spectrum of \(m_{\text{cont}}^{W}\) in Fig. 3 and therefore discriminates between FC and non-FC events, allowing us to perform post-inference event selections.
For instance, in the Delphes case, choosing a 55 GeV cutoff, 97% of all FC events have \(m_{\text{reco}}>55\) GeV, and vice versa, 97% of all events with \(m_{\text{reco}}>55\) GeV are FC. In this manner we can significantly improve the accuracy of the reconstruction without accessing truth-level information that is needed to identify FC events. This comes at the cost of a modest reduction in signal efficiency - from the ostensible 100% down to 75%. Note that in the Delphes case, the set of FC events is contaminated with a small number of events with significant losses of \(W\) decay products due to detector effects, but it can be refined by reducing the jet radius used in the definition of full containment. Consequently, we propose the following simple routine for real-world applications of these models. First, use the model trained targeting \(p_{\text{cont}}^{W}\) as an FC-tagger to refine the data. Then, apply the model targeting \(p_{\text{true}}^{W}\) to reconstruct the \(W\)-boson.
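The proposed routine amounts to a two-model pipeline. A sketch, reusing the minkowski_mass helper from above, with the 55 GeV cutoff taken from the Delphes study; the function and variable names are illustrative:

```python
def reconstruct_w(jets, model_cont, model_true, cutoff=55.0):
    """FC-tag with the p_cont-trained model, then reconstruct with the
    p_true-trained model on the retained events (sketch)."""
    p_cont = model_cont(jets)                 # first pass: containment proxy
    keep = minkowski_mass(p_cont) > cutoff    # refine the data to likely-FC events
    return model_true(jets[keep]), keep       # second pass: W reconstruction
```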
We conclude that \(p_{\text{cont}}^{W}\) is the better target for many common reconstruction tasks where one is willing to sacrifice some signal efficiency - or to only fully measure the 4-momentum on a sub-sample of the identified events - to gain improved accuracy. In the following sections we will not present models trained on both targets; however, a complete set of metrics and results, including models targeting \(p_{\text{true}}^{W}\), can be found in Appendix A.
## 6 \(W\)-boson mass measurement
As we saw above, PELICAN is able to reconstruct the mass of the \(W\)-boson, \(m_{W}\), found within the dense environment of the complete decay products of a top quark jet. For truth-level datasets, the resolution of this
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & Method & \(\sigma_{p_{T}}\) (\%) & \(\sigma_{m}\) (\%) & \(\sigma_{\Delta R}\) (centirad) \\ \hline \multirow{4}{*}{Truth} & JH & 0.66\% & 1.26\% & 0.216 \\ & PELICAN\(|\)JH & 0.27\% & 0.62\% & 0.113 \\ & PELICAN\(|\)FC & 0.34\% & 0.86\% & 0.142 \\ & PELICAN & 2.37\% & 38.93\% & 0.681 \\ \hline \multirow{4}{*}{Delphes} & JH & 9.8 \% & 8.3 \% & 9.6 \\ & PELICAN\(|\)JH & 3.6 \% & 2.8 \% & 3.1 \\ \cline{1-1} & PELICAN\(|\)FC & 4.2 \% & 3.6 \% & 3.4 \\ \cline{1-1} & PELICAN & 6.2 \% & 39.6 \% & 5.6 \\ \hline \hline \end{tabular}
\end{table}
Table 4: PELICAN resolutions for models trained to reconstruct \(p_{\text{cont}}^{W}\). Resolutions are still obtained by comparing the model predictions to \(p_{\text{true}}^{W}\).
Figure 5: Reconstructed \(W\) mass relative to true \(W\) mass for the PELICAN models trained on truth (left) or Delphes (right) data, targeting \(p_{\text{cont}}^{W}\).
reconstruction is below the natural width of the mass spectrum. In the Delphes case, the resolution is too wide to produce any substantial correlation between the true and reconstructed masses (see Appendix A for figures that demonstrate this). Since the true masses are highly concentrated around 80 GeV, we would like to eliminate the possibility that PELICAN achieves this performance in part by effectively _memorizing_ a single number: the \(W\)-boson mass. In this section we examine a more realistic reconstruction task, where the true mass of the target particle is unknown, and the dataset uniformly covers a wide range of its masses.
The reconstruction task is still identical to that of Section 5. Even though we could use a scalar-valued version of PELICAN to target the mass of the \(W\)-boson, the accuracy of that reconstruction would in fact suffer in comparison with the 4-vector model. This is simply due to the fact that the 4-momentum contains more relevant information than the mass alone, since the direction and the energy of the particle are, in part, correlated with the mass. Thus the only new element in this experiment will be the dataset, which will now involve \(W\)-bosons of varying masses uniformly covering the range \(m_{W}\in[65,95]\) GeV. The dataset is also identical to that used in Section 5, except that the \(W\)-boson mass is set to be variable. This is achieved by combining multiple independently-produced datasets where the generator-level value of \(m_{W}\) was modified from its default value. Fig. 6 shows the resulting distribution of \(W\)-boson masses, as well as that of the sum of \(W\) daughters contained within each jet.
### Regression results for \(m_{W}\) reconstruction
The hyperparameters and the training regime used here are the same as in Section 5. Here we focus on the model trained to reconstruct the contained momentum \(p_{\text{cont}}^{W}\) (see Appendix A to find the results for the model targeting \(p_{\text{true}}^{W}\)). The outputs are then compared to the true \(W\)-boson \(p_{\text{true}}^{W}\). The accuracies for the full 4-vector reconstruction are presented in Table 5. The largest loss of accuracy relative to Section 5 is, unsurprisingly, in
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & Method & \(\sigma_{p_{T}}\) (\%) & \(\sigma_{m}\) (\%) & \(\sigma_{\Delta R}\) (centirad) \\ \hline \multirow{4}{*}{Truth} & JH & 7.98\% & 4.75\% & 22.180 \\ & PELICAN\(|\)JH & 0.27\% & 0.63\% & 0.111 \\ & PELICAN\(|\)FC & 0.35\% & 0.89\% & 0.143 \\ & PELICAN & 2.64\% & 39.00\% & 0.744 \\ \hline \multirow{4}{*}{Delphes} & JH & 16.0 \% & 12.0 \% & 25.4 \\ & PELICAN\(|\)JH & 4.2 \% & 6.5 \% & 3.4 \\ \cline{1-1} & PELICAN\(|\)FC & 4.9 \% & 8.0 \% & 3.8 \\ \cline{1-1} & PELICAN & 7.3 \% & 40.7 \% & 6.7 \\ \hline \hline \end{tabular}
\end{table}
Table 5: PELICAN resolutions for models trained to reconstruct \(p_{\text{cont}}^{W}\) with variable \(m_{W}\). Resolutions are still obtained by comparing the model predictions to \(p_{\text{true}}^{W}\).
Figure 6: Stacked histogram with proportional bin heights (see description in Fig. 3) showing the mass spectrum of the two targets, \(p_{\text{true}}^{W}\) and \(p_{\text{cont}}^{W}\), in the variable \(W\) mass dataset.
the mass column. However, since the true mass now covers a much wider range, this still presents a significant improvement in the mass reconstruction capability. To demonstrate this better, we show the 2D correlations between target and reconstructed masses in Figures 7 and 8 for the models trained targeting \(p_{\rm true}^{W}\) and \(p_{\rm cont}^{W}\), respectively. We also differentiate between non-FC (left) and FC (right) events in the two sides of each of the panels in each figure.
### Model complexity
The model examined above has 210k trainable parameters; however, even significantly smaller models achieve good accuracy. As an illustration, we compare the resolutions of three PELICAN models trained on the variable mass dataset targeting \(p_{\rm true}^{W}\). They are obtained from the original model by a proportional rescaling of the widths of all layers. The first model is the 210k parameter one, with 132/78 channels, i.e. each messaging layer has 132 input and 78 output channels. The second model has 60/35 channels and 49k parameters. The third model has 25/15 channels and 11k parameters. The resolutions over the Delphes test dataset are reported in Table 6, and we observe that even the 11k-parameter model handily beats the JH method.
Figure 7: 2D histograms of true vs. reconstructed masses for models trained on the variable mass dataset targeting \(p_{\rm true}^{W}\) (top: truth data; bottom: Delphes data), broken up into two populations based on jet containment (left: non-FC events; right: FC events).
### Discussion
In the Delphes dataset, we observe that for non-FC events (bottom left of Fig. 8), the reconstructed contained mass is only weakly correlated with the true contained mass (or with the true \(W\) mass, as shown in Fig. 22 in Appendix A). However, in the quadrant where both masses exceed 55 GeV, we find a 65% correlation on FC events in the Delphes case. The most important type of error PELICAN makes here is when a non-FC event gets assigned a high reconstructed mass, meaning that a mass near that of the true \(W\) was assigned to a jet with few of the \(W\) decay products in it. Among all events with \(m_{\rm reco}>55\) GeV, 3.6% are non-FC, and
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \# params & \(\sigma_{p_{T}}\) (\%) & \(\sigma_{m}\) (\%) & \(\sigma_{\Delta R}\) (centirad) \\ \hline
210k & 6.1 \% & 8.2 \% & 2.8 \\
49k & 6.5 \% & 8.6 \% & 3.2 \\
11k & 7.4 \% & 9.5 \% & 3.8 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of PELICAN models of three different network widths trained to reconstruct \(p_{\rm true}^{W}\) with variable \(W\) mass. Tested on Delphes data.
Figure 8: 2D histograms of target vs. reconstructed masses for models trained targeting \(p_{\rm cont}^{W}\) (top: truth data; bottom: Delphes data), broken up into two populations based on jet containment (left: non-FC events; right: FC events).
they bring the correlation among that population down to 51% (\(p_{T}\), mass, and angular resolutions on this population closely track those of PELICAN|FC above). But since in practice we are interested in \(m_{\text{true}}^{W}\), the correlation between that and \(m_{\text{reco}}\) is higher, at 59% among events with \(m_{\text{reco}}>55\) GeV. This is a significant improvement over the model trained on the original \(m_{\text{true}}^{W}\sim 80\) GeV Delphes dataset, and especially over non-ML methods such as the JH tagger (see Fig. 9). However, even a model trained on Delphes data to reconstruct \(p_{\text{true}}^{W}\), in fact, achieves a 40% correlation with \(m_{\text{true}}^{W}\) on non-FC events (see Fig. 7), so FC-tagging may not be necessary. Overall, PELICAN provides a viable method for estimating particle masses.
## 7 PELICAN explainability
Despite the output of the PELICAN regression model ostensibly being a 4-vector (or multiple 4-vectors), the richer and more natural object to treat as the output are the PELICAN weights \(\{c_{i}\}\) introduced in (11). Each \(c_{i}\) is attached to its corresponding input constituent \(p_{i}\) due to permutation equivariance and therefore encodes _a scalar feature of that particle within the event_. As we will show in this section, the behavior of these weights is key to the unique explainability and visualization features of the PELICAN architecture.
In essence, PELICAN is able to take a set of \(N\) input 4-vectors and assign \(N\) scalar features to them (of course there can be several features per input as well) in a Lorentz-invariant way. This can be powerful in a variety of applications, but in the context of particle reconstruction the problem of finding the right values of the weights is similar to a soft clustering problem. Assuming an idealized dataset with perfect information about the decay products, the model should identify the decay products of the \(W\)-boson, assign \(c_{i}=1\) to them, and zero to all other constituents. This is analogous to what taggers like the Johns Hopkins top-tagger aim to do via jet clustering. However, since any five 4-vectors are linearly dependent, there is a continuum family of solutions \(\{c_{i}\}\) and it is not clear that PELICAN will prefer the clustering solution.
### Distributions of PELICAN weights
In Fig. 10 we display the distributions of all PELICAN weights for models from Section 5 trained targeting \(p_{\text{true}}^{W}\). We also mark each constituent as either a \(W\)- or a \(b\)-daughter. This yields several observations.
Figure 9: JH tagger’s reconstruction of the \(W\) mass on the variable \(W\) mass dataset, truth-level and Delphes versions. The correlation values are 47% and 25%, correspondingly.
Firstly, nearly all weights are either non-negative or very slightly negative (e.g. above \(-0.1\)) with a very sharp peak at zero (the peak is entirely to the left of zero to very high precision\({}^{3}\)). This is the first feature that justifies the interpretation of PELICAN as a _soft clustering_ method. Since our inputs represent realistic events, all input 4-vectors in them are causally related, and in particular they belong to the future light cone, as does the target vector. This implies that no linear combination of these vectors with positive coefficients can produce a zero vector. The distributions, therefore, show that PELICAN weights assigned to \(b\)-daughters are not "contaminated" with these degenerate combinations.
Footnote 3: The bin \([-10^{-6},0)\) contains about 100 times more constituents than the bin \([0,10^{-6})\).
Secondly, the truth-level distribution is highly concentrated at 0 and 1 and very closely matches the binary clustering solution. That is, almost all constituents assigned weight 0 are \(b\)-daughters, and almost all of those assigned 1 are \(W\)-daughters. Nevertheless, 30% of \(b\)-daughters are assigned positive weights, prompting further investigation. Moreover, the distribution of \(W\)-daughter weights in the Delphes case is so spread out that it becomes difficult to explain it by a mere analogy with clustering.
We can delve more deeply into the weight distribution by evaluating the sub-populations of weights based on jet containment. Fig. 11 shows the distributions of weights for \(bqq\), \(qq\), and non-FC events. The majority of constituents at the high end of the weight scale belong to non-FC events. Similarly, the weights
Figure 11: Stacked histograms with proportional bin heights of all PELICAN weights computed over the testing dataset for the 4-vector reconstruction task from Section 5 using models trained targeting \(p_{\mathrm{true}}^{W}\). Broken up into three populations by jet containment: \(bqq\) events (all three truth-level quarks from the \(t\to bW\to bqq\) process fall within the jet clustering radius); \(qq\) events (only the \(b\)-quark fell outside of the jet); and non-FC events, which include \(bq\), \(b\), and \(q\) events.
Figure 10: Stacked histograms with proportional bin heights of all PELICAN weights computed over the testing dataset for the 4-vector reconstruction task from Section 5 using models trained targeting \(p_{\mathrm{true}}^{W}\). Broken up into two populations – \(W\)-boson products and \(b\)-quark products. In the Delphes case, a constituent is considered a \(W\) product if the corresponding calorimeter cell detected at least one true \(W\) daughter.
produced by the models trained targeting \(p_{\text{cont}}^{W}\), shown in Fig. 12, are more highly concentrated at 0 and 1, and have much lower and shorter "tails" on the right, especially among \(b\)-daughters. This is the first indication that PELICAN tends to up-weight some constituents in events where it doesn't have enough information for an accurate reconstruction.
This approach allows us to characterize the constituents that are being up-weighted. Fig. 13 shows the constituent weights as a function of the constituent's \(p_{T}\). The main observation here is that among high-energy ("hard") constituents with \(p_{T}>100\) GeV the weight distribution is much more binary, and the vast majority of constituents with weights falling away from the two peaks are soft, below \(20\) GeV. In the Delphes case PELICAN appears to down-weight high-energy \(W\)-daughters and up-weight soft constituents. Once again, loss of information in the form of detector effects appears to lead to PELICAN up-weighting soft constituents.
### Detector effects on PELICAN weights
While the truth-level PELICAN models reliably converge to a binary clustering solution, the weights in the Delphes case do not permit such a straightforward interpretation. To better understand their behavior, we
Figure 12: Stacked histograms with proportional bin heights of all PELICAN weights for the 4-vector reconstruction task from Section 5 using models trained targeting \(p_{\text{cont}}^{W}\). Broken up into two populations by parent type.
Figure 13: 2D histogram of PELICAN weights vs constituent transverse momentum for the 4-vector reconstruction task from Section 5 using models trained targeting \(p_{\text{cont}}^{W}\). Only FC events shown here.
ran additional experiments using custom datasets that exclude different components of the Delphes detector simulation one by one. Delphes performs the following steps: simulate the effect of the magnetic field \(B_{z}\) on charged final-state particles; aggregate truth-level particle energies within each electromagnetic calorimeter (ECAL) and hadronic calorimeter (HCAL) detector cell; apply energy smearing by sampling a lognormal distribution; unify the ECAL and HCAL cells; apply spatial smearing by picking a uniformly random point within the detector cell; construct the spatial momentum so that the resulting 4-vector, which represents a detector cell, is massless. We found that while each of these steps contributes to smearing out the truth-level distribution of PELICAN weights and shifting the peak downwards, the magnetic field is responsible for almost all of the differences between truth and Delphes results.
The simulated magnetic field is able to deflect charged particles very significantly, enough to account for most of the error in PELICAN's reconstruction relative to the truth-level reconstruction. Our hypothesis for why this leads to lower PELICAN weights for hard constituents is the following. Deflected hard particles produce large errors in the direction but not the energy of the reconstruction, therefore one can down-weight them and compensate for the energy deficit using softer constituents. Moreover, by up-weighting softer constituents PELICAN can in fact partially correct the error in the direction since the deflections of positively charged particles can be partially cancelled out by those of negatively charged particles.
An extra piece of evidence in support of this hypothesis can be found by modifying the loss function. If we re-train the model on the same Delphes dataset using a loss function consisting of a single energy term \(|E_{\rm reco}-E_{\rm true}|\), we find a distribution of weights (see Fig. 14) nearly as bimodal as the original one trained on
Figure 14: Same as Fig. 12 but this time the model is trained using a single-term loss function proportional to \(|E_{\rm reco}-E_{\rm cont}|\).
Figure 15: A single event viewed in the \(\eta,\phi\) plane with color and size dependent on energy. The central cross marks the true \(W\) boson, and the other three crosses mark the three true quarks from the process \(t\to bqq\).
truth-level data (see Fig. 12). This indicates that the source of the error in PELICAN's reconstruction on Delphes data is overwhelmingly _spatial_. Out of all the steps that Delphes performs, only two are purely spatial: momentum smearing within one cell, and the simulated magnetic field. However, the detector cells (approximately \(0.02\times 0.02\) in \((\eta,\phi)\)) are much smaller than the magnitude of PELICAN's typical angular error, and thus smearing cannot explain the error.
### Event visualization
As we discussed above, despite being a single-vector regression model, PELICAN produces one feature _per input constituent_ (namely the weight \(c_{i}\)), and these features become interpretable by virtue of Eq. (12). This gives us a unique opportunity to make event-level visualizations that provide insight into how PELICAN treats jet topology and how it compares to conventional methods such as the JH tagger's jet clustering.
In Fig. 16 we show an amalgamation of 200 events from the Delphes dataset from Section 5 projected onto the unit sphere. Each event was spatially rotated so that the position of the true \(W\) within the image is fixed and the true \(b\)-quark is located in the negative \(\phi\) direction. In one display the constituents are colored according to their parent being either the \(W\) boson or the \(b\)-quark, and in the other they're colored based on their assigned PELICAN weight. The correlation between the two images is clear: \(b\)-daughters tend to be correctly assigned zero weight, whereas \(W\)-daughters have positive weights with the hardest constituents having weights between \(0.4\) and \(0.8\).
In Fig. 15 we show a single event in the \((\eta,\phi)\) plane, with dot color and size dependent on the constituent energy. Note the reduced number of constituents in the Delphes display, and how some of the constituents get strongly deflected by the simulated magnetic field. The same event can be visualized in three more helpful ways. In addition to parent type and PELICAN visualizations introduced in Fig. 16, we can also extract the list of constituents that the JH tagger identifies as belonging to the \(W\) boson and highlight them. Fig. 17 displays the same single event in all three ways. In addition, we add a special marker for the direction of the
Figure 16: Composite event display of 200 events from the Delphes dataset from Section 5. Each event is transformed using a 3D rotation matrix such that the true \(W\) boson ends up at \((\theta,\phi)=(\pi/2,0)\) (white cross), and the true \(b\)-quark is directly below. PELICAN is rotationally invariant, so its output is unaffected by the normalization. Each dot is a Delphes constituent and the dot size increases logarithmically with constituent energy. (a) Color reflects parent type: constituents that are fully derived from \(W\)-daughters are orange and those from \(b\)-daughters are purple; in the rare cases when the fraction of \(W\)-derived energy in a given calorimeter cell is between \(0\) and \(1\), the corresponding color is taken from the color scale in the right pane. (b) Color reflects the value of the PELICAN weight, clipped to the interval \([0,1]\), as shown in the legend. Note how the hardest \(W\) constituents (largest dots) tend to have PELICAN weights between \(0.5\) and \(1\).
reconstructed \(W\) boson. In the parent type pane, this reconstruction is defined as \(\sum_{i=1}^{N}r_{i}p_{i}\) where \(r_{i}\) is the energy of the true \(W\)-daughters within that constituent divided by the actual energy of the constituent. In the JH and PELICAN panes, the marker corresponds to the corresponding reconstructions obtained by those methods.
## 8 IRC-safety and PELICAN
Perturbative computations in QCD suffer from a divergence caused by two types of processes: soft emission and collinear splittings. As a consequence, meaningful observables in this theory need to be insensitive to such processes, and this requirement is known as IRC-safety. In this section we provide a precise definition, give a characterization of IRC-safe Lorentz-invariant observables (see details in Appendix B), and describe modifications to the PELICAN architecture that make it IR-safe or IRC-safe.
Infrared safety (IR-safety) guarantees insensitivity to soft emissions, i.e. particles with relatively low energies and momenta. A family of continuous symmetric observables \(f^{(N)}(p_{1},\ldots,p_{N})\) is said to define an IR-safe observable \(f\) if
\[\lim_{\epsilon\to 0}f^{(N+1)}(p_{1},\ldots,p_{N},\epsilon p)=f^{(N)}(p_{1},\ldots,p_{N}) \tag{14}\]
for any \(N\) and any \(p_{1},\ldots,p_{N}\), \(p\), where \(\epsilon\) controls how infinitesimally small the considered soft emission \(p\) is.
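Definition (14) suggests a direct numerical check: append a scaled-down emission to an event and measure the change in the output. A sketch, assuming \(f\) maps an \((N,4)\) array of 4-momenta to a scalar or vector observable:

```python
import numpy as np

def ir_deviation(f, P, p, eps=1e-6):
    """Change in f when a soft emission eps*p is appended to the event P.
    P: (N, 4) array of 4-momenta; p: a single 4-vector."""
    P_split = np.vstack([P, eps * p[None, :]])  # event with one extra soft particle
    return np.abs(f(P_split) - f(P))            # should vanish as eps -> 0 if IR-safe
```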
Figure 17: The same event as in Fig. 15 in the \((\eta,\phi)\) plane. (a) Constituents are colored according to the actual parent type; size increases with energy; the yellow cross marks the reconstruction obtained by summing all of the constituents that belong to the \(W\) boson. (b) Constituents are colored according to how they are tagged by the JH-tagger as either \(W\)-daughters or not; size increases with energy; the yellow cross marks the JH-reconstructed \(W\) boson. (c) Constituents are colored according to their PELICAN weight clipped to the interval \([0,1]\); size increases as the weight goes from 0 to 1 and decreases after that. Note how the soft Delphes \(W\)-constituents get assigned high PELICAN weights.
Collinear safety (C-safety) is a restriction on observables in perturbative QCD that arises from the divergent contributions of collinear emissions of gluons. Positive gluon mass would prevent such divergences, which is why C-safety concerns only massless particles. We can define C-safety formally as follows: an observable \(f(p_{1},\ldots,p_{N})\) is C-safe if, whenever two massless 4-momenta \(p_{1}\) and \(p_{2}\) become collinear (which happens for massless particles iff \(p_{1}\cdot p_{2}=0\)), the value of \(f\) depends only on the total momentum \(p_{1}+p_{2}\). Expressed even more explicitly, C-safety says that setting \(p_{1}=\lambda p\) and \(p_{2}=(1-\lambda)p\) with some 4-vector \(p\) such that \(p^{2}=0\) must lead to the same output regardless of the value of \(\lambda\), i.e.
\[C_{12}(p)f=\partial_{\lambda}f(\lambda p,(1-\lambda)p,p_{3},\ldots,p_{N})=0. \tag{15}\]
In Appendix B we characterize IRC-safe Lorentz-invariant observables, but the following summary will suffice for the purpose of designing an IRC-safe version of PELICAN. First, a Lorentz-invariant observable (assumed to be consistently defined for any finite number \(N\) of 4-vector inputs) is IR-safe if and only if it has no explicit dependence on the multiplicity \(N\). More precisely, adding the zero 4-vector to the list of inputs should leave the output value invariant. Second, an IRC-safe Lorentz-invariant observable is one that is IR-safe and moreover depends on any of its massless inputs only through their total. E.g. if \(p_{1},p_{2},p_{3}\) are fixed to be massless, then \(f(p_{1},p_{2},p_{3},p_{4},\ldots)\) must depend only on \(p_{1}+p_{2}+p_{3}\). In particular, if all inputs are massless, then all IRC-safe invariant observables are functions of only the jet mass \(M^{2}=(\sum_{i}p_{i})^{2}\). Note, however, that such an observable can still depend on these vectors in an arbitrarily complex fashion away from the massless manifold.
The original PELICAN architecture as introduced above is neither IR- nor C-safe. Below we modify the architecture to make it exactly IR-safe or IRC-safe and evaluate the implications.
### IR-safe PELICAN
As shown above, IR-safety in Lorentz-invariant networks essentially requires the outputs to be independent of the multiplicity \(N\). There are four ways in which the multiplicity shows up in PELICAN:
1. Scaling with \(N^{\alpha}/\bar{N}^{\alpha}\) in the equivariant block. This must be disabled for IR-safety.
2. Non-zero bias values in linear layers. Since the network is permutation-equivariant, the bias values are shared across jet constituents, which means that upon aggregation in the following equivariant layer they contribute multiples of \(N\). All biases in all linear layers must be disabled for IR-safety.
3. The input embedding must map zero to zero, but our original choice already satisfies this. In addition, the activation function must also have a fixed point at zero. Our default choice, LeakyReLU, also satisfies this.
4. Following an application of a PELICAN equivariant block, rows and columns corresponding to soft constituents will contain a combination of sums over all constituents. Even in the absence of biasing constants, this effectively increases the multiplicity with which these values enter in the later aggregations. This can be resolved by making sure that rows and columns that are soft at the input remain soft throughout the whole network. Therefore we introduce _soft masking_, whereby the last 12 equivariant aggregators (those that do not preserve the softness of rows/columns) are followed by a multiplication by the vector of values \(J\cdot p_{i}\), where \(J=\sum_{i=1}^{N}p_{i}\), scaled and clipped to be within \([-1,1]\). In \(\text{Eq}_{2\to 2}\) this multiplication is applied both row-wise and column-wise, and in \(\text{Eq}_{2\to 1}\) it is applied component-wise.
With these modifications, PELICAN becomes IR-safe. As we will see, this restriction leads to a modest reduction in the performance of PELICAN's predictions in our tasks of interest.
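A sketch of the soft masking from item 4 above: compute \(J\cdot p_{i}\) with the Minkowski product, rescale and clip to \([-1,1]\), and multiply it into the rows and columns of the rank-2 latents. The scale constant is an assumption, since the text only states that the values are scaled and clipped.

```python
import torch

def minkowski_dot(a, b):
    # (..., 4) vectors as (E, px, py, pz)
    return a[..., 0] * b[..., 0] - (a[..., 1:] * b[..., 1:]).sum(-1)

def soft_mask(p, scale=1.0):
    """IR soft mask: J.p_i scaled and clipped to [-1, 1].
    p: (batch, N, 4); returns (batch, N). 'scale' is an assumed constant."""
    J = p.sum(dim=1, keepdim=True)       # total event momentum
    return (minkowski_dot(J, p) / scale).clamp(-1.0, 1.0)

def apply_soft_mask_2to2(T, mask):
    """Multiply a rank-2 latent T of shape (batch, N, N, C) row- and column-wise."""
    return T * mask[:, :, None, None] * mask[:, None, :, None]
```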
### IRC-safe PELICAN
Adding C-safety to the architecture is much simpler. As stated above, the necessary requirement is that the output depend on massless inputs only through their sum. In PELICAN this can be achieved by inserting a linear permutation-equivariant layer with a mass-based soft mask immediately at the input (any nonlinear embedding has to be done later). Consider a case where \(p_{1}\), \(p_{2}\) are massless and the dot product matrix \(\{d_{ij}\}\) is fed into such an equivariant layer. Most of the aggregators will compute sums over rows or columns, thus immediately producing C-safe quantities. However, several of the aggregators, including the identity, will
preserve individual information about each \(p_{i}\), therefore their output rows and columns corresponding to \(p_{1}\) and \(p_{2}\) need to be thrown out. This can be done by a soft mask that turns to zero as the mass of any input goes to zero. This mask is defined in the same way as the IR mask except using \(m_{i}^{2}\) instead of \(J\cdot p_{i}\). It needs to be applied only to the first two order-zero and the first seven order-one aggregators.
Coincidentally, this soft mask can also be used in place of an IR mask, which means that we only need the C-safe soft mask to make a fully IRC-safe PELICAN architecture. Altogether it gets applied to all equivariant aggregators except the third one (which extracts the diagonal and is thus IRC-safe by definition).
### Testing IR/C-safe PELICAN models
First we quantify the deviation in PELICAN's outputs that occurs under soft and collinear splittings and observe how training affects them. We define an IR-splitting as adding a zero 4-vector to the list of input constituents. Then PELICAN's output on IR-split data is directly compared to the original output. Defining a C-splitting is more difficult since realistic events never contain any exactly collinear constituents, and we want to avoid changing the number of particles so as to make this test independent of IR-safety. Therefore we prepare the data by inserting two copies of the vector \((1,0,0,1)\) to each event. Then the C-splitting will amount to replacing these two vectors with \((2,0,0,2)\) and \((0,0,0,0)\). The outputs on the same event prepared in these two ways can be directly compared.
To compare two outputs \(p_{\text{reco}}\), \(p_{\text{reco}}^{\prime}\) we compute the relative deviation \(|(p_{\text{reco}}^{\prime}-p_{\text{reco}})/p_{\text{reco}}|\), where the division is component-wise. To estimate the effect of an IR- or C-splitting on PELICAN's predictions, we take the median value of this deviation over a batch of events. The same can also be done with PELICAN weights as the outputs. The splittings are applied to 100-event batches of events from one of our datasets and the median deviations are averaged over 300 batches. We test 5 randomly-initialized models and 5 models trained on the full variable \(W\) mass dataset from Section 6.
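The C-splitting test can be sketched as follows, keeping the multiplicity fixed as described; how the component-wise deviations are reduced to a single number is our choice of convention:

```python
import numpy as np

def c_split_deviation(f, events):
    """Median relative deviation of f's output under a collinear merge.
    events: iterable of (N, 4) arrays; f maps one event to a 4-vector."""
    u = np.array([1.0, 0.0, 0.0, 1.0])                # massless reference vector
    devs = []
    for P in events:
        split = np.vstack([P, u, u])                  # two collinear copies
        merged = np.vstack([P, 2 * u, np.zeros(4)])   # same multiplicity, merged
        y, y2 = f(split), f(merged)
        devs.append(np.abs((y2 - y) / y))             # component-wise deviation
    return np.median(np.stack(devs), axis=0)          # median over the batch
```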
We find that a randomly-initialized PELICAN regression model's output 4-vector deviates by 0.5%-7% under an IR-split, and the PELICAN weights deviate by up to 5%. However, regression models trained to reconstruct \(p_{\text{cont}}^{W}\) have both deviations under 5%, potentially indicating a slight improvement in IR-safety due to training. The resolutions \(\sigma_{p_{T}},\sigma_{m}\), and \(\sigma_{\Delta R}\) of the trained IR-safe truth-level models are about 20%-35% worse (larger) than those of the original models, and similarly the Delphes resolutions get 40%-50% worse. We also note that IR-safe PELICAN models appear to be slightly more rigid under C-splits, showing 5%-16% deviations that improve slightly, to at most 11%, after training.
Under a C-splitting, the randomly-initialized regression model's outputs (both the 4-vector and the weights) consistently deviate by 3%-8%, and the same range of deviations was observed on fully trained models as well. The resolutions of trained IRC-safe truth-level models suffer significantly in comparison to the regular models, exhibiting 5-6 times higher values of \(\sigma_{p_{T}},\sigma_{m}\), and \(\sigma_{\Delta R}\). We do not perform this comparison for Delphes models since the jet constituents coming out of Delphes are massless, so only functions of the jet mass are expressible by IRC-safe PELICAN on that data.
## 9 Conclusion
We have presented a full description of PELICAN: a network that respects particle permutation and Lorentz symmetries important in particle physics. PELICAN is a general network which is performant at 4-vector regression and provides state-of-the-art performance in the task of top-tagging. To demonstrate PELICAN's regression capabilities, we chose the reconstruction of the \(W\)-boson's 4-momentum from a full top quark jet, and to our knowledge PELICAN is the first ML method applied to this problem. Even within these tasks there is room to improve PELICAN's performance by introducing additional scalar information such as particle charges, which would allow the network to account for the simulated collider's magnetic field. PELICAN's architecture, its flexibility, and generalizability may also allow for future applications to charged-particle track reconstruction, pile-up identification, and full-event reconstruction. Being a general architecture, PELICAN is not limited to top quark decays or even jets. This network inherently provides powerful tools for investigating its own behavior due to the equivariant architecture and shows promise as a tool which can be thoroughly investigated if deployed in real world scenarios.
## Acknowledgements
The authors would like to thank the Data Science Institute at the University of Chicago for its generous support of this research. TH is supported by the Department of Physics at the University of Chicago. DWM and JTO are supported by the National Science Foundation under Grant PHY-2013010. The computations in this work were, in part, run at facilities supported by the Scientific Computing Core at the Flatiron Institute, a division of the Simons Foundation. The Center for Computational Mathematics at the Flatiron Institute is supported by the Simons Foundation.
|
2304.00146 | On the Relationships between Graph Neural Networks for the Simulation of
Physical Systems and Classical Numerical Methods | Recent developments in Machine Learning approaches for modelling physical
systems have begun to mirror the past development of numerical methods in the
computational sciences. In this survey, we begin by providing an example of
this with the parallels between the development trajectories of graph neural
network acceleration for physical simulations and particle-based approaches. We
then give an overview of simulation approaches, which have not yet found their
way into state-of-the-art Machine Learning methods and hold the potential to
make Machine Learning approaches more accurate and more efficient. We conclude
by presenting an outlook on the potential of these approaches for making
Machine Learning models for science more efficient. | Artur P. Toshev, Ludger Paehler, Andrea Panizza, Nikolaus A. Adams | 2023-03-31T21:51:00Z | http://arxiv.org/abs/2304.00146v1 | On the Relationships between Graph Neural Networks for the Simulation of Physical Systems and Classical Numerical Methods
###### Abstract
Recent developments in Machine Learning approaches for modelling physical systems have begun to mirror the past development of numerical methods in the computational sciences. In this survey we begin by providing an example of this with the parallels between the development trajectories of graph neural network acceleration for physical simulations and particle-based approaches. We then give an overview of simulation approaches, which have not yet found their way into state-of-the-art Machine Learning methods and hold the potential to make Machine Learning approaches more accurate and more efficient. We conclude by presenting an outlook on the potential of these approaches for making Machine Learning models for science more efficient.
Machine Learning, Neural Networks, Classical Numerical Methods
## 1 Introduction
Recent years have seen an ever-larger push towards the application of Machine Learning to problems from the physical sciences such as Molecular Dynamics (Musaelian et al., 2022), coarse-graining (Wang et al., 2022), the time-evolution of incompressible fluid flows (Wang et al., 2020), learning governing equations from data (Brunton et al., 2016; Cranmer et al., 2020), large-scale transformer models for chemistry (Frey et al., 2022), and the acceleration of numerical simulations with machine learning techniques (Kochkov et al., 2021). All of these algorithms build on the infrastructure underpinning modern Machine Learning in combining state-of-the-art approaches with a deep understanding of the physical problems at hand. This raises the question of whether there exist further insights and tricks hidden in existing, classical approaches in the physical sciences, which have the potential not only to make the algorithm for the particular problem class more efficient, but perhaps even
Machine Learning in general?
Inspired by recent theoretical advances in the algorithmic alignment between Graph Neural Networks (GNNs) and dynamic programming (Xu et al., 2020; Velickovic et al., 2020), we surmise that the extension of this analysis to classical PDE solvers, and the physical considerations they incorporate, enables us to learn from the development trajectory in the physical sciences to inform the development of new algorithms. In this workshop paper we make the following contributions towards this goal:
* A comparison of the development of graph-based learned solvers, and the proximity of their development ideas to the development of Smoothed Particle Hydrodynamics starting from Molecular Dynamics in the physical sciences.
* An analysis of classical numerical solvers, and their algorithmic features to inform new ideas for new algorithms.
## 2 MeshGraphNets and its relation to classical methods
An excellent example of the parallels between the development of Machine Learning methods for the sciences and
Figure 1: Characterization of the physical scales the example methods of section 2 operate on. The Graph Network-based approaches MeshGraphNets, and Graph Network-based Simulators are placed in relation to their classical counterparts.
the development of classical approaches is the recent development of graph-based simulators. When we relate their inherent assumptions and techniques to the development of particle-based methods, starting with Molecular Dynamics, a great many parallels arise. For an impression of the scales the classical methods operate on, and where graph-based simulators are placed in relation, please refer to Figure 1.
In this section, we analyze the structure of two of the first mature learned solvers (GNS (Sanchez-Gonzalez et al., 2020) and MeshGraphNets (Pfaff et al., 2021)) and how these two approaches align with three of the classical methods (MD, FPM, SPH). We select these learned algorithms because they were among the first of their kind to show promising results on real world data. Also, GNS is trained directly on SPH data, which further motivates an algorithmic comparison.
### Graph Neural Network-based Approaches to Simulation
The Graph Network (GN) (Battaglia et al., 2018) is a framework that generalizes graph-based learning and specifically the Graph Neural Network (GNN) architecture by Scarselli et al. (2008). However, in this work, we use the terms GN and GNN interchangeably. Adopting the Graph Network formulation, the main design choices are the choice of update function and aggregation function. For physics-informed modeling this gives us the ability to blur the line between classical methods and graph-based methods by including biases similar to those of CNNs for non-regular grids, as well as encoding physical laws into our network structure with the help of spatial equivariance/invariance, local interactions, the superposition principle, and differential equations. E.g. translational equivariance can easily be incorporated using relative positions between neighboring nodes, or the superposition principle can be encoded in graphs by using the summation aggregation over the representation of forces as edge features.
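These two biases can be made concrete in a minimal message-passing layer: relative positions on edges make the messages translation-invariant, and summation aggregation encodes the superposition principle. A sketch in PyTorch; the layer names and sizes are illustrative, not the GNS/MeshGraphNets implementation.

```python
import torch
import torch.nn as nn

class PhysicsGNLayer(nn.Module):
    """One GN-style message-passing step with physics-motivated biases."""
    def __init__(self, dim):
        super().__init__()
        # edge MLP sees both endpoint features and the 3D relative position
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim + 3, dim), nn.ReLU(),
                                      nn.Linear(dim, dim))
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                      nn.Linear(dim, dim))

    def forward(self, h, pos, edge_index):
        src, dst = edge_index                      # each of shape (E,)
        rel = pos[src] - pos[dst]                  # relative positions: translation invariance
        msg = self.edge_mlp(torch.cat([h[src], h[dst], rel], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, msg)  # sum = superposition principle
        return h + self.node_mlp(torch.cat([h, agg], dim=-1))  # residual node update
```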
Viewing MeshGraphNets Pfaff et al. (2021) from a physics-motivated perspective, we argue that MeshGraphNets originate from Molecular Dynamics. To present this argument in all its clarity, we have to begin with its predecessor: the Graph Network-based Simulators (GNS) Sanchez-Gonzalez et al. (2020).
#### 2.1.1 Graph Network-based Simulators
The Graph Network-based Simulator builds on the encoder-processor-decoder approach, where Graph Networks are applied iteratively on the encoded space. Having demonstrated the ability to simulate systems with up to 85k particles, the GNS approach can be summarized as follows.
Let \(X^{t}\) denote the states of a particle system at time \(t\). \(X\) might contain the position, velocity, type of particle, or any other physical information specific to a material particle. A set of \(k+1\) subsequent past states
\[\mathbf{X}^{t_{0:k}}=\left\{X^{t_{0}},X^{t_{1}},\ldots,X^{t_{k}}\right\}\]
is given to the network. The core task is to then learn the differential operator \(d_{\theta}\), which approximates the dynamics
\[d_{\theta}:X^{t_{k}}\longrightarrow Y^{t_{k}},\quad X^{t_{k+1}}=\text{Update} \left\{X^{t_{k}},d_{\theta}\right\}.\]
Here, \(Y^{t}\) is the acceleration, which is used to obtain the next state \(X^{t+1}\) via integration using a deterministic "Update" routine, e.g. a semi-implicit Euler scheme. The differential operator \(d_{\theta}\) is learned with the encoder-processor-decoder approach, where the encoder takes in 1 to 10 previous states and encodes them into a graph. This graph consists of nodes, i.e. latent representations of the states \(X\), and edges: between each pair of particles closer than some cut-off radius there is another latent vector, which initially contains the distance or displacement information. The processor is then a multilayer Graph Network, where the exact number of message-passing steps is a hyperparameter. The result on the graph-space is then decoded back to physical space. The loss is computed as the mean-squared error between the learned acceleration and the target acceleration. While the approach showed promising results for fluid simulations and fluid-solid interactions, it struggled on deforming meshes, such as thin shells.
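A sketch of one rollout step, with `d_theta` standing in for the full encode-process-decode stack (which in GNS consumes a history of past states) and a semi-implicit Euler integrator as the deterministic Update routine:

```python
def gns_step(d_theta, positions, velocities, dt):
    """One GNS rollout step with a semi-implicit Euler update.
    positions/velocities: (N, D) tensors; d_theta predicts per-particle acceleration."""
    acc = d_theta(positions, velocities)      # learned differential operator
    velocities = velocities + dt * acc        # update velocity first...
    positions = positions + dt * velocities   # ...then position (semi-implicit)
    return positions, velocities
```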
#### 2.1.2 MeshGraphNets
To better represent meshes, MeshGraphNets Pfaff et al. (2021) supplemented the Graph Network simulation with an additional set of edges that define a mesh, on which interactions can be learned. Closely related to the superposition principle in physics, i.e., the principle of splitting a complicated function into the sum of multiple simpler ones, the interaction function is split into the interaction on mesh-type edges and on collision-type edges.
Following the widespread use of remeshing in engineering, MeshGraphNets have the ability to adaptively remesh to model a wider spectrum of dynamics. Mesh deformation without adaptive remeshing would lead to the loss of high-frequency information.
The last major improvement of MeshGraphNets over GNS is extending the output vector \(Y\) with additional components to also predict further targets, such as the stress field.
In contrast to the Graph Network-based Simulators, the input here includes a predefined mesh, and the output is extended to contain dynamical features like pressure.
### Similarities between the Development Trajectories of Particle-based Methods and Graph Neural Network-based Approaches to Simulations
Beginning with Molecular Dynamics, the earliest and most fundamental particle-based method, we will now outline the similarities between the development trajectory of MeshGraphNets and that of particle-based methods in physics, together with the derivations inherent to them.
#### 2.2.1 Similarities to Molecular Dynamics
Molecular Dynamics is a widely used simulation method which generates the trajectories of an N-body atomic system. For the sake of intellectual clarity we restrict ourselves to its simplest form, the unconstrained Hamiltonian mechanics description.
The construction of connections and edges is one of the clearest similarities between Molecular Dynamics and MeshGraphNets. Both can potentially have a mesh as an input, and both compute the interactions based on spatial distances up to a fixed threshold. Iterative updates, or the repeated application of Graph Network layers in the MeshGraphNets, extend the effective interaction radius beyond the immediate neighbourhood of a particle such that, in principle, all particles can interact. Both approaches are at the same time translationally invariant w.r.t. accelerations and permutation equivariant w.r.t. the particles, and both use a symplectic time-integrator. While there are theoretical reasons for this choice in Molecular Dynamics, it is a choice of convenience in the context of learned approaches. The main difference between the two approaches lies in the computation of the accelerations: in Molecular Dynamics the derivative of a predefined potential function is evaluated, whereas a learned model is used in the Graph Network-based Simulators.
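To make this difference concrete, here is a minimal Molecular Dynamics step in NumPy: the acceleration comes from the analytic gradient of a predefined Lennard-Jones pair potential (the part a learned simulator replaces with a network), followed by a symplectic semi-implicit Euler update. Units, parameters, and the particle count are illustrative only:

```python
import numpy as np

def lennard_jones_forces(positions, epsilon=1.0, sigma=1.0):
    """Forces as the negative gradient of a predefined pair potential.

    In MD the potential (here Lennard-Jones) is fixed a priori; a learned
    simulator replaces exactly this function with a neural network.
    """
    n = positions.shape[0]
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = positions[i] - positions[j]
            d2 = np.dot(r, r)
            s6 = (sigma**2 / d2) ** 3
            # -dU/dr of U = 4*eps*(s^12 - s^6), expressed via r to stay vectorial
            f = 24.0 * epsilon * (2.0 * s6**2 - s6) / d2 * r
            forces[i] += f                # Newton's third law: equal and
            forces[j] -= f                # opposite contributions
    return forces

pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.0, 1.2, 0.0]])
vel = np.zeros_like(pos)
dt = 1e-3
for _ in range(10):                        # symplectic (semi-implicit Euler) update
    vel += dt * lennard_jones_forces(pos)  # unit mass assumed
    pos += dt * vel
```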
#### 2.2.2 Similarities to Smoothed Particle Hydrodynamics
A closer relative of the Graph Network-based Simulators is the Smoothed Particle Hydrodynamics algorithm originating from astrophysics (Lucy, 1977; Gingold and Monaghan, 1977). Smoothed Particle Hydrodynamics discretizes the governing equations of fluid dynamics, the Navier-Stokes equations, with kernels such that the discrete particles follow Newtonian mechanics with the equivalent of a prescribed molecular potential. Both Smoothed Particle Hydrodynamics and Graph Network-based Simulators obey the continuum assumption, whereas Molecular Dynamics presumes a discrete particle distribution and is constrained to extremely short time intervals.
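The kernel discretization at the heart of SPH can be sketched in a few lines; the snippet below evaluates a continuum density field as a kernel-weighted sum over particles, using a Gaussian kernel for brevity (production SPH codes typically use compactly supported kernels such as cubic splines):

```python
import numpy as np

def gaussian_kernel(r, h):
    """Smoothing kernel W(r, h) in 1D; illustrative choice only."""
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

def sph_density(positions, masses, h):
    """Continuum field value at each particle as a kernel-weighted sum
    over its neighbours: rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
    r = np.abs(positions[:, None] - positions[None, :])   # pairwise distances
    return (masses[None, :] * gaussian_kernel(r, h)).sum(axis=1)

x = np.linspace(0.0, 1.0, 50)          # 1D particle positions
m = np.full(50, 1.0 / 50)              # equal masses
rho = sph_density(x, m, h=0.1)
print(rho.mean())                      # roughly the mean density of the slab
```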
#### 2.2.3 The Differences
Summarizing the key differences between the closely related approaches: Molecular Dynamics and Smoothed Particle Hydrodynamics both take one past state \(X^{t}\) as input, whereas graph-based approaches require a history of \(k+1\) states \(\mathbf{X}^{t_{0:k}}\). Molecular Dynamics encodes geometric relations in the potential and MeshGraphNets encode the geometry in the mesh, while there exists no direct way of inclusion in the other two approaches. Molecular Dynamics and Smoothed Particle Hydrodynamics explicitly encode physical laws; for learned methods, all these parameters and relations have to be learned from data.
A key advancement of MeshGraphNets, coming from the Graph Network-based Simulators, is the explicit superimposition of solutions on both sets of edges, which far outperforms the implicit distinction of interactions. This approach is equally applicable to all conventional particle-, and mesh-based simulations in engineering. Borrowing the Fluid Particle Model from fluid mechanics, we can subsequently connect the classical methods with the learned approaches by viewing meshes and particles as the same entity under the fluid-particle paradigm.
### Connecting MeshGraphNets to Graph Neural Network-based Simulations with the Fluid Particle Model
The Fluid Particle Model (Espanol, 1998) is a mesoscopic Newtonian model, as seen in Figure 1, situated on an intermediate scale between the microscopic Molecular Dynamics and the macroscopic Smoothed Particle Hydrodynamics. It views particles from the point of view of a Voronoi tessellation of the molecular fluid, see Figure 3. The Voronoi tessellation coarse-grains the atomistic system to a pseudoparticle system, with ensembles of atoms in thermal equilibrium summarized as pseudoparticles. This pseudoparticle construction is closely related to the MeshGraphNets construction, where each mesh node also corresponds to
Figure 2: Illustration of the MeshGraphNets scheme with a decomposition of its algorithm into the encoder, processor, and decoder (Image source: Pfaff et al. (2021)).
the cell center of a simulated pseudoparticle. Smoothed Particle Hydrodynamics as well as Dissipative Particle Dynamics (Hoogerbrugge and Koelman, 1992) also both operate on pseudoparticles. All of these approaches share the requirement that each pseudoparticle comprise a large enough number of atoms to be viewed as a thermodynamic system.
Especially in Dissipative Particle Dynamics one injects Gaussian noise to approximate a physical system, just as is done for Graph Network-based Simulators and MeshGraphNets to stabilize the training. We surmise that this injection of noise into graph-based simulators amounts to forcing the learned model to predict the true output despite the noisy inputs, hence leading the model to converge to the central limit of the estimated conditional distribution of the acceleration.
The construction of Voronoi tessellations dictates that the size of the cells be inversely proportional to the variation in their properties, hence leading to denser sampling in regions with high property variation. The very same curvature-based heuristic argument is used to derive the mesh refinement of the MeshGraphNets algorithm.
## 3 Relation to Numerical Schemes
After the recent success of Neural ODE solvers (Chen et al., 2018), it has taken almost four years to start considering Neural PDEs in general (Brandstetter et al., 2022). By definition, PDEs deal with derivatives of multiple variables, whereas ODEs involve derivatives of a single variable. As a result, typical numerical approximations of PDEs are much more diverse, depending on the peculiarities of the PDE of interest. Typical PDE solvers operating on grids (Eulerian description) include Finite Difference Methods (FDM), Finite Volume Methods (FVM), and Finite Element Methods (FEM), whereas other methods follow the trajectory of irregularly spaced points (Lagrangian description), like Smoothed Particle Hydrodynamics (SPH), the Fluid Particle Model (FPM), Dissipative Particle Dynamics (DPD) (Hoogerbrugge and Koelman, 1992), the Volume of Fluid Method (VOF) (Hirt and Nichols, 1981), Particle-in-Cell (PIC) (Brackbill and Ruppel, 1986), the Material Point Method (MPM) (Sulsky et al., 1993), the Discrete Element Method (DEM) (Cundall and Strack, 1979), and Meshless FEM (MFEM). Finally, there are also approaches to solving PDEs without any discretization, as in Sawhney et al. (2022). Each of these methods works best for a specific type of PDE, boundary/initial conditions, and parameter range. In this section, we compare concepts from these classical methods to state-of-the-art learned algorithms.
### Data augmentation with white noise
Two popular papers corrupting training inputs with additive Gaussian noise are Sanchez-Gonzalez et al. (2020) and Pfaff et al. (2021), as described before. The goal of this approach is to force the model to deal with the accumulating noise that leads to a distribution shift during longer rollouts. Thus, the noise acts as an effective regularization technique, which in these two papers allows for much longer trajectories than seen during training. However, one major issue with this approach is that the scale of the noise is controlled by two new hyperparameters, which have to be tuned manually (Pfaff et al. (2021), Appendix 2.2).
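A minimal PyTorch sketch of this training trick follows; the noise scale and the linear stand-in model are illustrative assumptions, not the settings of the cited papers. The inputs are corrupted while the regression target stays clean, so the model learns to correct perturbed states:

```python
import torch

def corrupted_inputs(past_states, noise_std):
    """Additive Gaussian input corruption used as rollout regularization.

    noise_std is the manually tuned hyperparameter discussed above; the
    target stays clean, so the model learns to correct noisy inputs.
    """
    return past_states + noise_std * torch.randn_like(past_states)

def training_step(model, optimizer, past_states, target_acc, noise_std=3e-4):
    optimizer.zero_grad()
    pred = model(corrupted_inputs(past_states, noise_std))
    loss = torch.nn.functional.mse_loss(pred, target_acc)  # clean target
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a linear stand-in model.
model = torch.nn.Linear(6, 3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 6), torch.randn(32, 3)
print(training_step(model, opt, x, y))
```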
A perspective on noise injection coming from the physical sciences is to see it through the lens of mesoscopic particle methods like the Fluid Particle Model and Dissipative Particle Dynamics, in which the noise originates from the Brownian motion at small scales. Although GNS and MeshGraphNets operate on scales too large for the relevance of Brownian motion, the Fluid Particle Model provides a principled way of relating particle size and noise scale. The underlying considerations from statistical mechanics might aid to a better understanding of the influence of training noise and in turn make approaches based on it more efficient.
### Data augmentation by multi-step loss
Another way of dealing with the distribution shift is by training a model to correct its own mistakes via some form of a multi-step loss, i.e., during training a short trajectory is generated and the loss is summed over one or multiple past steps (Tompson et al., 2017; Um et al., 2020; Ummenhofer et al., 2020; Brandstetter et al., 2022). The results on this vary, with some researchers reporting better performance than with noise injection (Brandstetter et al., 2022), while others report the opposite experience (Sanchez-Gonzalez et al., 2020).
Looking at classical solvers for something related to the multi-step loss, it is natural to think of the adaptive time integrators used by default in ODE routines like ODE45 in Matlab (Dormand and Prince, 1980). Adaptive integrators work by generating two short trajectories of the same time length but with different step sizes; as long as the outcome with larger steps differs only within some bounds, the step size is increased. This guarantees some level of long-term rollout stability, just as attempted with the multi-step loss, but the
Figure 3: Single points (left), Delaunay triangulation (middle), and Voronoi diagram (right) (Image source: Rokicki and Gawell (2016)).
multi-step loss forces the network to implicitly correct for future deviations of the trajectory without actually changing the step size. The adaptive step-size idea has gained popularity in ML with the introduction of Neural ODEs (Chen et al., 2018).
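A sketch of such an unrolled training objective is given below, assuming a generic one-step integrator and a toy linear model; none of this corresponds to a specific published implementation. The loss is accumulated over a short rollout in which the model consumes its own predictions:

```python
import torch

def multi_step_loss(model, integrate, x0, targets, n_steps):
    """Unrolled training loss: the model sees its own (imperfect) predictions.

    `integrate` is the deterministic update (e.g. semi-implicit Euler);
    summing the per-step errors forces implicit self-correction instead of
    adapting the step size as a classical adaptive integrator would.
    """
    x, loss = x0, 0.0
    for k in range(n_steps):
        x = integrate(x, model(x))                 # feed prediction back in
        loss = loss + torch.nn.functional.mse_loss(x, targets[k])
    return loss / n_steps

# Toy usage: linear dynamics, one-step Euler integration, 3-step unroll.
model = torch.nn.Linear(4, 4)
integrate = lambda x, a: x + 0.1 * a
x0 = torch.randn(8, 4)
targets = [torch.randn(8, 4) for _ in range(3)]
loss = multi_step_loss(model, integrate, x0, targets, n_steps=3)
loss.backward()
```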
### Equivariance bias
Numerical PDE solvers come in two flavors, stencil-based and kernel-based, both of which are equivariant to translation, rotation, and reflection in space (Euclidean group equivariance), as well as to translation in time. By Noether's theorem, these symmetries correspond to fundamental conservation laws such as the conservation of energy. While equivariance with respect to the Euclidean group has been around for a couple of years on grids (Weiler et al., 2018), its extension to the grid-free (Lagrangian) setting has been gaining popularity only recently (Brandstetter et al., 2021; Schutt et al., 2021; Batzner et al., 2022; Musaelian et al., 2022). Here, we speak of equivariance when a neural net operation on vectors rotates the output exactly the same way as the input is rotated, as opposed to working with scalar values, which is called an invariant operation, e.g. SchNet (Schutt et al., 2017). The performance boost from including equivariant features is significant and reaches up to an order of magnitude compared to invariant methods (Batzner et al., 2022).
### Input multiple past steps
Another common performance improvement in neural net training is observed by stacking multiple past states as input (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2021; Brandstetter et al., 2022). One argument supporting this approach is overfitting prevention by inputting more data (Pfaff et al., 2021). Looking at conventional solvers, we very rarely see multiple past states as input; when we do, it is for materials with memory properties, e.g. rheological fluids or "smart" materials. Thus, providing multiple past states implicitly assumes some nonphysical, non-Markovian retardation process, which in most cases does not correspond to the physics used to generate the training data.
The only physical justification of a multi-step input we are aware of arises if we train the model to learn a coarse-grained representation of the system. Li et al. (2015) showed that explicit memory effects are necessary in Dissipative Particle Dynamics for the correct coarse-graining of a complex dynamical system using the Mori-Zwanzig formalism. Given that papers like GNS and MeshGraphNets do not make use of coarse-graining, it is questionable why we observe improvement in performance and whether this trick generalizes well to different settings.
### Spatial multi-scale modeling
Conventional multi-scale methods include, among others, all types of coarse-graining, Wavelet-based methods (e.g. Kim et al. (2008)), and the Fast Multipole Method (Rokhlin, 1985). Graph Networks seem especially suitable for tasks like coarse-graining as they are designed to work on unstructured domains, as opposed, for example, to approaches using Wavelet or Fourier transforms, which require regular grids. GNNs seem especially promising here, with many applications in Molecular Dynamics (Husic et al., 2020) and engineering (Lino et al., 2021; Valencia et al., 2022; Migus et al., 2022; Han et al., 2022). It is particularly interesting to see works like Migus et al. (2022), inspired by multi-resolution methods, and Valencia et al. (2022), resembling geometric coarse-graining by weighted averaging. All these methods rely on the fact that physical systems exhibit multi-scale behavior, meaning that the trajectory of a particle depends on its closest neighbors, but also on more far-reaching, weaker forces. Splitting the scales and combining their contributions can greatly reduce computation. One of the great advantages of GNNs is their capability to operate on irregularly spaced data, which is necessary for most coarse-graining approaches.
### Locality of interactions
In most cases, graph-based approaches to solving PDEs define the edges in the graph based on an interaction radius. Methods using the Graph Network architecture (Battaglia et al., 2018) effectively expand the receptive field of each node with every further layer, in the extreme case resulting in the phenomenon known as over-smoothing. But if we keep the number of layers reasonably low, the receptive field will always be larger compared to a conventional simulation with the same radius. Until recently, it was thought that a large receptive field was the reason for the success of learned simulators, but Musaelian et al. (2022) question that assumption. In this paper, an equivariant graph network with fixed interaction neighbors performs on a par with the very similar Graph Network-based method NequIP (Batzner et al., 2022) on molecular property prediction tasks. This finding supports the physics-based argument about the locality of interactions.
### Mesh vs Particle
GNN-based simulation approaches offer the flexibility to combine particles and meshes out-of-the-box. If we then train one neural network to reproduce the results of a Finite Element solution on a mesh and a Smoothed Particle Hydrodynamics solution over particles, this is where learned methods really shine. This was achieved with the MeshGraphNets framework (Pfaff et al., 2021). We argue that the transition from particles to meshes is a direct result of a
coarse-graining procedure using Voronoi tessellation, which is related to the derivation of the Fluid Particle Model. The main assumption in this derivation is that each mesh cell should be small enough that it can be treated as being in equilibrium - similar to the assumption made when discretizing a domain with points.
### Stencils
We talk about stencils when operating on regular grids. Although this is not the main strength of GNNs, there are some useful concepts from stencil-based simulations which are conventionally nontrivial to generalize to particles, but can easily be adapted with GNNs. Brandstetter et al. (2022) state that their paper is motivated by the observation that the Weighted Essentially Non-Oscillatory scheme (WENO) (Shu, 1998) can be written as a special case of a GNN. Another work, inspired by the general idea of the Finite Volume Method of looking at the fluxes at the left and right cell boundaries, was developed by Praditia et al. (2021). Inspired by the Finite Element Method, finite element networks were introduced, which weight the contributions of neighbouring cells by their volume, as is done in Finite Element analysis (Lienen and Gunnemann, 2022).
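The observation that a fixed stencil is a special case of graph message passing can be verified in a few lines. The sketch below (a plain second-derivative stencil rather than WENO, purely for illustration) computes a 1D periodic Laplacian once with the classical [1, -2, 1] stencil and once as a sum of per-edge messages; the two agree exactly on a regular grid:

```python
import numpy as np

# Second-derivative stencil [1, -2, 1]/dx^2 on a regular periodic 1D grid ...
dx = 0.1
u = np.sin(np.linspace(0, 2 * np.pi, 64))
laplacian_stencil = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

# ... rewritten as message passing: each node sums messages (u_j - u_i)/dx^2
# from its left and right neighbours. On a regular grid both forms agree.
edges = [(i, (i + 1) % 64) for i in range(64)] + [(i, (i - 1) % 64) for i in range(64)]
laplacian_graph = np.zeros_like(u)
for i, j in edges:
    laplacian_graph[i] += (u[j] - u[i]) / dx**2

assert np.allclose(laplacian_stencil, laplacian_graph)
```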
### Integration schemes
In addition to the time-step adaptation mentioned in relation to multi-step losses, another topic investigated in the literature is the order of the integrator (Sanchez-Gonzalez et al., 2019). This work points to the fact that higher-order integrators lead to much better robustness with respect to the choice of the integration time step. Another interesting question discussed in this paper is whether symplectic integrators improve the performance of a learned Hamiltonian neural net. The answer seems to be that the symplectic property is much less important than the order of the integrator, which is in contrast with conventional Molecular Dynamics integrators, which work extremely poorly if not symplectic.
## 4 Untapped Ideas from Classical Approaches
In this section, we introduce potentially useful ideas from conventional differential equation solvers in science which, to the best of our knowledge, have not been adopted in mainstream learned PDE solvers yet. Figure 4 collects these concepts in the form of a word cloud.
### Noise during inference
Adding noise to the inputs during training has proven to be useful, but this has not been done during testing. One idea would be to use noise during inference to emulate Brownian motion. A further topic we already mentioned is the relation of the noise scale to particle mass: from mesoscopic methods and the fluctuation-dissipation theorem, we would expect the noise to scale as \(1/\sqrt{m}\) if a coarser representation is used.
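A hypothetical sketch of this idea is given below; the function name, the `base_scale` parameter, and the mass values are all illustrative assumptions rather than an established recipe. Noise sampled at inference time is scaled by \(1/\sqrt{m}\), so heavier (coarser) pseudoparticles fluctuate less:

```python
import numpy as np

def thermal_noise(particle_masses, base_scale, rng):
    """Hypothetical inference-time noise whose scale follows 1/sqrt(m),
    as suggested by the fluctuation-dissipation relation for coarser
    (heavier) pseudoparticles. `base_scale` is a free parameter."""
    std = base_scale / np.sqrt(particle_masses)
    return rng.normal(scale=std[:, None], size=(len(particle_masses), 3))

rng = np.random.default_rng(0)
masses = np.array([1.0, 8.0, 64.0])        # finer to coarser particles
noise = thermal_noise(masses, base_scale=0.1, rng=rng)
print(noise.std(axis=1))                   # heavier particles fluctuate less
```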
### Multiple time steps
Learned Molecular Dynamics simulations stick to using only the last past state; that larger-scale simulators like GNS do not might partially explain the unphysical behavior of the GNS method demonstrated in Klimesch et al. (2022). For coarse-graining, though, a longer history might be helpful.
### Feature Engineering
From the Volume of Fluid Method we could adopt the idea of including features corresponding to the ratio of different materials if we are interested in simulating multi-material flows. The Discrete Element Method suggests encoding many more features, like rotational degrees of freedom (in magnetic fields or when simulating friction), stateful contact information (contact simulations), and often complicated geometry (for non-spherical, e.g. granular, particles). Inspired by the shock-capturing methods used routinely for the solution of nonlinear fluid dynamics problems (Ketcheson et al., 2020), one could think of further hand-tuned node features indicating the presence of a shock.
Figure 4: Overview of the currently under-utilized ideas discussed in Section 4 for Machine Learning approaches for the physical sciences.
### Particles and Grid
There are a number of methods using the best of both the particle and grid worlds, like the Particle-in-Cell method and its successor, the Material Point Method. The idea of updating the node features from time to time also based on the grid cell they belong to might speed up simulations and is worth exploring. If we restrict ourselves to regularly spaced particles, respectively grid cells, our solver toolkit becomes much richer, with methods like the Fast Fourier Transform (which has already seen great success with the Fourier Neural Operator (Li et al., 2020)) and the Wavelet Transform (as used in PDE-Net (Long et al., 2018)) at our disposal, as mentioned above in the context of multi-scale modeling.
### Integrator
Taking the perspective of Neural ODEs (Chen et al., 2018), with the neural network learning the perfect acceleration, one could arguably expect the next evolutionary step to be the combination of learned integrators with adaptive integration schemes. Incorporating insights from classical numerical methods, one should possibly seek to define a stability criterion for learned methods equivalent to the Courant-Friedrichs-Lewy (CFL) condition for classical numerical methods. This would in turn aid in bounding the time step and, subsequently, in exploring time steps smaller than the critical value.
## 5 Conclusion & Discussion
In this article, we claim that studying classical PDE solvers and their past development offers a direct path to accelerating the development of learned PDE solvers. Examples in the literature show that biasing a learned solver, by means of architectural design, data augmentation, feature engineering, etc., to incorporate existing knowledge from classical solvers can greatly improve performance, explainability, and data efficiency.
In Section 2, we show how this development has already subconsciously played out: graph-based learned solvers have followed the same development as particle-based methods such as Molecular Dynamics, Smoothed Particle Hydrodynamics, and the Fluid Particle Model. This investigation is revisited later on for algorithmic comparisons and illustrations of the limitations of classical solvers. In Section 3, we then focus on ideas from classical approaches which have found their way into the recent learned-solver literature, and discuss the physical interpretation of these developments. In the discussed examples, the included physically motivated biases are used to improve robustness w.r.t. hyperparameter choices, to lower errors, and to speed up inference.
Section 4 takes a glimpse into a possible version of the future with ideas which have, to the best of our knowledge, not yet been integrated in learned methods. Given the elaborate history of classical methods, and the short, but highly dynamic history of learned approaches, there is still a lot of potential to be realized within the latter by incorporating insights from the former.
Going further, many exciting problems in the physical sciences, such as simulations involving multiple spatial scales, multiple temporal scales, non-Newtonian fluids, or phase-changing materials, are heavily data-constrained and will hence have to rely on insights from classical methods for Machine Learning approaches to become feasible.
|
2302.14726 | Spiking Neural Network Nonlinear Demapping on Neuromorphic Hardware for
IM/DD Optical Communication | Neuromorphic computing implementing spiking neural networks (SNN) is a
promising technology for reducing the footprint of optical transceivers, as
required by the fast-paced growth of data center traffic. In this work, an SNN
nonlinear demapper is designed and evaluated on a simulated
intensity-modulation direct-detection link with chromatic dispersion. The SNN
demapper is implemented in software and on the analog neuromorphic hardware
system BrainScaleS-2 (BSS-2). For comparison, linear equalization (LE),
Volterra nonlinear equalization (VNLE), and nonlinear demapping by an
artificial neural network (ANN) implemented in software are considered. At a
pre-forward error correction bit error rate of 2e-3, the software SNN
outperforms LE by 1.5 dB, VNLE by 0.3 dB and the ANN by 0.5 dB. The hardware
penalty of the SNN on BSS-2 is only 0.2 dB, i.e., also on hardware, the SNN
performs better than all software implementations of the reference approaches.
Hence, this work demonstrates that SNN demappers implemented on electrical
analog hardware can realize powerful and accurate signal processing fulfilling
the strict requirements of optical communications. | Elias Arnold, Georg Böcherer, Florian Strasser, Eric Müller, Philipp Spilger, Sebastian Billaudelle, Johannes Weis, Johannes Schemmel, Stefano Calabrò, Maxim Kuschnerov | 2023-02-28T16:33:39Z | http://arxiv.org/abs/2302.14726v1 | # Spiking Neural Network Nonlinear Demapping on Neuromorphic Hardware for IM/DD Optical Communication
###### Abstract
Neuromorphic computing implementing spiking neural networks (SNN) is a promising technology for reducing the footprint of optical transceivers, as required by the fast-paced growth of data center traffic. In this work, an SNN nonlinear demapper is designed and evaluated on a simulated intensity-modulation direct-detection link with chromatic dispersion. The SNN demapper is implemented in software and on the analog neuromorphic hardware system BrainScaleS-2 (BSS-2). For comparison, linear equalization (LE), Volterra nonlinear equalization (VNLE), and nonlinear demapping by an artificial neural network (ANN) implemented in software are considered. At a pre-forward error correction bit error rate of \(2\cdot 10^{-3}\), the software SNN outperforms LE by \(1.5\,\mathrm{dB}\), VNLE by \(0.3\,\mathrm{dB}\) and the ANN by \(0.5\,\mathrm{dB}\). The hardware penalty of the SNN on BSS-2 is only \(0.2\,\mathrm{dB}\), i.e., also on hardware, the SNN performs better than all software implementations of the reference approaches. Hence, this work demonstrates that SNN demappers implemented on electrical analog hardware can realize powerful and accurate signal processing fulfilling the strict requirements of optical communications.
Spiking Neural Network, Optical Communication, Equalization, Data Centers, Intensity-Modulation Direct-Detection
## I Introduction
The fast-paced growth of data center traffic is the driver behind the increase in bit rate and, at the same time, the footprint reduction of the optical transceivers. This trend results in an urgent need to decrease the power consumption per bit. Whereas evolutionary steps can mitigate the problem, the exponential traffic growth asks for a paradigm shift. To resolve this dilemma, recent research envisions moving parts of digital signal processing (DSP) to analog frontends with lower power consumption.
One approach is photonic neuromorphic computing [1], which has been proposed, e.g., for chromatic dispersion (CD) compensation and nonlinear equalization in short-reach optical transmission [2, 3, 4]. However, although photonics can operate faster than electronic hardware, the latter scales better in terms of footprint and power consumption.
The return to analog electrical adaptive equalizers is also gaining traction, e.g., in [5], the transmitter DSP feeds two electrical non-return-to-zero (NRZ) signals to an analog pulse-amplitude-modulation 4-level (PAM-4) encoder, whose output is filtered by a continuous-time linear equalizer (CTLE) and a 3-tap feed forward equalizer (FFE).
At the same time, the research community is striving to implement more powerful nonlinear algorithms, e.g. based on artificial intelligence (AI) techniques, on analog electronics. An important subfield is in-memory-computing (IMC) [6], which aims for efficient calculation of vector-matrix multiplications. Research on IMC is mainly driven by the urgent need for AI accelerators for artificial neural networks (ANNs). Eventually, IMC may enable the use of ANNs for signal processing in the data path of communication systems, see, e.g., [7].
Analog electronic neuromorphic computing offers an alternative path towards AI-based signal processing. Spiking neural networks (SNNs) [8] in analog hardware [9] adopt the brain's unique power efficiency by imitating the basic functioning of the human brain. They combine the sparse representation of information by event-based spiking signals with power-efficient IMC. In [10], we have shown that SNN FFEs simulated in software can compensate nonlinear impairments in intensity-modulation direct-detection (IM/DD) links. In [11], SNN decision feedback equalization (DFE) is considered for compensating severe linear inter-symbol interference (ISI).
Recently, in-the-loop (ITL) training of SNNs on analog hardware [12] has shown promising results by achieving state-of-the-art performance in inference tasks [13]. In [14], we presented preliminary results on the design and evaluation of an SNN demapper on the analog neuromorphic BrainScale-2 (BSS-2) system [9]. Specifically, we considered the detection of a PAM-4 signal in a simulated IM/DD link, which is impaired by CD and additive white Gaussian noise (AWGN), as displayed in Fig. 1. Our results in [14] show that SNNs emulated on the neuromorphic BSS-2 hardware outperform
linear equalization in software, while the gap between software and hardware SNN is slightly below 1 dB.
In this work, we detail and extend our previous work on SNN-based neuromorphic demapping [14]. For the same IM/DD link model as in [14] (see Fig. 1), we reduce the SNN software-hardware penalty to below 0.2 dB. We achieve this by optimizing the hardware operation point, tuning the training procedure, and adjusting the input-spike encoding. We compare the proposed solution with software implementations of a linear equalizer, a \(5\)-th order Volterra nonlinear equalizer (VNLE), and a nonlinear ANN demapper. Despite the nonzero hardware penalty, our hardware SNN demapper performs better than the considered simulated reference algorithms. At the assumed forward error correction (FEC) bit error rate (BER) threshold of \(2\cdot 10^{-3}\), the gain over a linear equalizer is approximately 1.5 dB.
The remainder of this work is organized as follows. In Section II, we outline the IM/DD link and explain the implementation of the reference demappers. Section III details the SNN demapper and the input encoding scheme. Subsequently, we provide an overview of the BSS-2 platform in Section IV. The training procedure is explained in Section V. In Section VI, we show our results and in Section VII, we present our conclusions.
## II IM/DD Model and Reference Demappers
In this section, we detail our IM/DD link model and specify the reference algorithms, i.e., linear equalization (LE) and VNLE followed by hard decision (HD) demapping, and ANN nonlinear demapping. All reference demappers are simulated in double-precision floating-point arithmetic, except for the ANN, which uses single-precision floating-point arithmetic. The considered ANN and VNLE architectures are rather complex, i.e., the ANN has two nonlinear hidden layers and the VNLE uses the full filter length also for the higher order terms. The purpose of considering complex ANN and VNLE processing is to benchmark what performance we can achieve by nonlinear processing without considering resource usage, and then to compare the SNN performance to such benchmark.
### _Simulated IM/DD Link_
We simulate the transmission of PAM-4 symbols in the O-band at a baudrate of 112 GBd. Assuming an FEC overhead of 12 % with a BER threshold of \(2\cdot 10^{-3}\), we target a corresponding net bit rate of 200 Gbit s\({}^{-1}\).
We display the simulated link in Fig. 1 and the corresponding parameters in Table I. At the transmitter, a bit sequence \([b_{1}b_{2}]^{N}\) is mapped to a length \(N\) PAM-4 signal \(\mathbf{y}=y^{N}\) according to a Gray-labelled alphabet \(\mathcal{A}=\{-3,-1,1,3\}\). This signal is upsampled, root-raised-cosine (RRC) filtered, and offset by a bias. The resulting sequence is impaired by CD, modelled linearly following, e.g., [15, Sec. 3.2], to simulate the effect of the fibre on the propagating optical signal. We assume that the power dissipated into the fiber is low and we ignore fiber non-linearities in our simulated link. At the receiver, the signal goes through a photodiode (PD), which is modeled as a square-law device, and AWGN is added. The resulting signal is RRC filtered and downsampled, resulting in the received sequence \(\tilde{\mathbf{y}}=\tilde{y}^{N}\). Finally, bit decisions \([\tilde{b}_{1}\tilde{b}_{2}]^{N}\) are output by the respective device. We index the bit sequence and signal elements with \(n\), \(0\leq n<N\). Note that a constellation with non-equidistant signal points to precompensate the squaring of the PD may be beneficial, however, this is beyond the scope of this work.
### _Linear Minimum Mean Squared Error (LMMSE) Equalization_
Our first reference detector consists of LE followed by HD demapping. To simplify the notation in the following, we
Figure 1: The simulated IM/DD link schematics. A bit sequence is mapped at the transmitter (Tx) to a PAM-4 signal and is impaired by CD in the fiber. At the receiver (Rx), after square-law detection, AWGN is added. An equalizer/demapper recovers the transmitted bits.
specify the samples considered for equalizing the \(n\)-th sample via double-indexing,
\[\tilde{\mathbf{y}}_{n}=\left[\tilde{y}_{n,0},\tilde{y}_{n,1},\ldots,\tilde{y}_{n,n_{\text{tap}}-1}\right] \tag{1}\]
\[:=\left[\tilde{y}_{n-\lfloor n_{\text{tap}}/2\rfloor},\ldots,\tilde{y}_{n},\ldots,\tilde{y}_{n+\lfloor n_{\text{tap}}/2\rfloor}\right] \tag{2}\]
Specifically, the LE calculates
\[\hat{y}_{n}=c+\sum_{j=0}^{n_{\text{tap}}-1}\tilde{y}_{n,j}h_{j}, \tag{3}\]
where the bias \(c\) accounts for residual direct current (DC) and \(\mathbf{h}\) are the filter coefficients. The number of taps \(n_{\text{tap}}\) is the filter width and is assumed to be odd. We use data-aided training to calculate \(\mathbf{h}\) and \(c\) so as to minimize the mean squared error (MSE), \(\sfrac{1}{N}\sum_{n}(\hat{y}_{n}-y_{n})^{2}\). To this end, we form the feature matrix
\[\mathbf{A}=\begin{bmatrix}\cdots&1&1&1&\cdots\\ \cdots&\tilde{\mathbf{y}}_{n-1}^{\top}&\tilde{\mathbf{y}}_{n}^{\top}&\tilde{\mathbf{y}}_{n +1}^{\top}&\cdots\end{bmatrix}^{\top} \tag{4}\]
and we then solve
\[[c^{*},\mathbf{h}^{*}]=\operatorname*{argmin}_{c,\mathbf{h}}\lVert\mathbf{A}[c,\mathbf{h}]^{ \top}-\mathbf{y}^{\top}\rVert_{2}^{2}. \tag{5}\]
The demapper calculates an HD \([\hat{b}_{1}\hat{b}_{2}]_{n}\) from the equalized sample \(\hat{y}_{n}\) via three decision boundaries, which are chosen such that the BER is minimized. Note that at the transmitter, the signal points in the PAM-4 constellation are equidistant, while the received signal points are not equidistant anymore, because of the nonlinear transfer function of the PD. The LE cannot compensate nonlinear distortions, so the received signal points remain non-equidistant after LE. This is compensated in part by the demapper, as the decision boundaries are optimized with respect to the received and equalized signal points \(\hat{y}_{n}\), not the transmitted signal points \(y_{n}\). In the following, we refer to the combination of a LE and a memoryless demapper by linear minimum mean square error (LMMSE) equalization.
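A compact NumPy sketch of this data-aided fit follows; the channel model and parameter values in the usage example are illustrative assumptions, not the link of Fig. 1. The feature matrix of Eq. (4) is built with an all-ones bias column and solved in the least-squares sense as in Eq. (5):

```python
import numpy as np

def fit_lmmse(y_received, y_target, n_tap=7):
    """Least-squares fit of bias c and filter h per Eqs. (3)-(5).

    Builds the feature matrix A with a leading all-ones column (bias) and
    one row of n_tap neighbouring samples per symbol; edge symbols without
    a full window are dropped for simplicity.
    """
    half = n_tap // 2
    rows = [np.concatenate(([1.0], y_received[n - half:n + half + 1]))
            for n in range(half, len(y_received) - half)]
    A = np.asarray(rows)
    coeffs, *_ = np.linalg.lstsq(A, y_target[half:len(y_received) - half], rcond=None)
    c, h = coeffs[0], coeffs[1:]
    return c, h

# Toy usage with an illustrative linear "channel": filter plus noise.
rng = np.random.default_rng(1)
y = rng.choice([-3.0, -1.0, 1.0, 3.0], size=1000)            # PAM-4 symbols
y_rx = np.convolve(y, [0.1, 0.8, 0.1], mode="same") + 0.05 * rng.normal(size=1000)
c, h = fit_lmmse(y_rx, y)
```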
### _Volterra Nonlinear Equalizer (VNLE)_
As nonlinear reference, we consider a \(5\)-th order VNLE, see, e.g., [17, Chap. 14], [18]. The VNLE is very similar to the LE, with the difference that the VNLE feature matrix contains additional columns for the monomials of order higher than \(1\). The \(0\)-th and \(1\)-st order feature vectors considered by the LE are
\[\mathbf{f}_{n,0} =[1], \tag{6}\] \[\mathbf{f}_{n,1} =\left[\tilde{y}_{n,j}\middle|0\leq j<n_{\text{tap}}\right]= \tilde{\mathbf{y}}_{n}. \tag{7}\]
Accordingly, the feature vectors of order two and higher are defined by
\[\mathbf{f}_{n,2} =\Big{[}\tilde{y}_{n,j}\cdot\tilde{y}_{n,k}\middle|0\leq j\leq k< n_{\text{tap}}\Big{]}, \tag{8}\] \[\mathbf{f}_{n,3} =\Big{[}\tilde{y}_{n,j}\cdot\tilde{y}_{n,k}\cdot\tilde{y}_{n, \ell}\middle|0\leq j\leq k\leq\ell<n_{\text{tap}}\Big{]},\] (9) \[\vdots\]
The \(n\)-th row of the feature matrix then consists of the concatenation of the feature vectors \(\mathbf{f}_{n,m}\), \(m=0,\ldots,5\). The number of features of order \(m\) is given by
\[\text{length}(\mathbf{f}_{n,m})=\begin{pmatrix}m+n_{\text{tap}}-1\\ m\end{pmatrix} \tag{10}\]
and accordingly, for \(n_{\text{tap}}=7\), the total number of coefficients of the VNLE is
\[\sum_{m=0}^{5}\text{length}(\mathbf{f}_{n,m})=792. \tag{11}\]
The optimization of the coefficients and the specification of the demapper is data-aided and follows exactly the LE procedure described above, where the key step is solving (5).
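The feature construction of Eqs. (6)-(9) maps directly onto `itertools.combinations_with_replacement`, which enumerates the ordered index tuples \(0\leq j\leq k\leq\ldots<n_{\text{tap}}\). The sketch below reproduces the coefficient count of Eq. (11) for \(n_{\text{tap}}=7\) and order 5:

```python
from itertools import combinations_with_replacement
from math import prod
import numpy as np

def volterra_features(window, order=5):
    """All monomials of the tap window up to `order`, per Eqs. (6)-(9).

    combinations_with_replacement enumerates the index tuples
    0 <= j <= k <= ... < n_tap, matching the ordered products above.
    """
    feats = [1.0]                                    # 0-th order feature
    for m in range(1, order + 1):
        for idx in combinations_with_replacement(range(len(window)), m):
            feats.append(prod(window[i] for i in idx))
    return np.asarray(feats)

window = np.random.default_rng(0).normal(size=7)     # n_tap = 7
print(volterra_features(window).shape)               # (792,), matching Eq. (11)
```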
### _Nonlinear ANN Demapper_
We consider an ANN with \(n_{\text{tap}}=7\) input units, a first hidden layer with 40 neurons, followed by a second hidden layer with 20 neurons, both activated by the \(\tanh\) function, and a linear output layer with 4 neurons. The output values are interpreted as \(\log\)-probabilities providing a soft decision (SD) on the PAM-4 symbols. A symbol-wise HD is obtained by choosing the symbol of highest probability and the bitwise HD is obtained from the symbol decisions via the Gray label.
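A PyTorch sketch matching this architecture description is shown below (layer sizes as stated; initialization and training loop omitted). The four outputs are read as log-probabilities, and a hard decision picks the most likely PAM-4 symbol:

```python
import torch

class ANNDemapper(torch.nn.Module):
    """7-tap nonlinear demapper: two tanh hidden layers (40, 20 units)
    and 4 linear outputs read as log-probabilities over the PAM-4 symbols."""
    def __init__(self, n_tap=7):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_tap, 40), torch.nn.Tanh(),
            torch.nn.Linear(40, 20), torch.nn.Tanh(),
            torch.nn.Linear(20, 4),
        )

    def forward(self, y_window):
        return self.net(y_window)          # per-symbol log-probabilities (SD)

demapper = ANNDemapper()
logp = demapper(torch.randn(32, 7))        # batch of 7-tap windows
symbol_hd = logp.argmax(dim=-1)            # HD: most likely symbol; bits follow
                                           # from the Gray label of each symbol
```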
## III Spiking Neural Networks for Equalization
This section outlines the SNN demappers. We detail their emulation on BSS-2 in Sec. IV.
SNNs consist of neurons, evolving in time \(t\), and communicating via binary spike events. The leaky integrate-and-fire (LIF) spiking neuron model [8, Sec. 1.3] captures some of the core dynamics observed in biological neurons while at the same time maintaining a tractable complexity. LIF neurons integrate synaptic input current \(I(t)\) onto their internal membrane voltage state \(v_{\text{m}}(t)\) according to the dynamics described by the ordinary differential equation (ODE)
\[\tau_{\text{m}}\frac{\text{d}v_{\text{m}}(t)}{\text{d}t}=\big{[}v_{\text{l}}-v _{\text{m}}(t)\big{]}+R_{\text{l}}\cdot I(t). \tag{12}\]
Here, \(\tau_{\text{m}}\) is the membrane time constant, \(R_{\text{l}}\) is the leakage resistance, and \(v_{\text{l}}\) is the leakage potential. When the membrane potential reaches a threshold potential \(\vartheta\) at time \(t^{\text{s}}\), the neuron emits a spike \(z(t)=\delta\left(t-t^{\text{s}}\right)\), with \(\delta\) being the Dirac delta distribution, and \(v_{\text{m}}\) is reset to a potential \(v_{\text{m}}(t^{\text{s}})=v_{\text{r}}\). The synaptic current \(I\) is induced by _presynaptic_ neurons \(\{n_{i}\}\), projecting spike events \(z_{i}(t)=\delta\left(t-t_{i}^{\text{s}}\right)\) at times \(\{t_{i}^{\text{s}}\}\) onto
the _postsynaptic_ neuron through synapses with weights \(w_{i}\), thereby causing an exponentially decaying current described by the ODE
\[\frac{\text{d}I(t)}{\text{d}t}=-\frac{I(t)}{\tau_{\text{s}}}+\sum_{i}w_{i}z_{i}(t). \tag{13}\]
\(\tau_{\text{s}}\) denotes the synaptic time constant. The LIF dynamics are exemplified in Fig. 2A. Neurons with a disabled spiking mechanism and membrane dynamics according to (12), are referred to as leaky integrator (LI) neurons [8, Sec. 1.3].
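For illustration, the LIF dynamics of Eqs. (12)-(13) can be integrated with a simple forward-Euler loop, as sketched below; the time constants, weights, and input rates are illustrative values, not the calibrated BSS-2 parameters:

```python
import numpy as np

def simulate_lif(in_spikes, w, dt=0.5e-6, tau_m=1e-5, tau_s=5e-6,
                 R=1.0, v_leak=0.0, v_th=1.0, v_reset=0.0):
    """Forward-Euler integration of the LIF equations (12)-(13).

    in_spikes: (T, n_in) binary array; w: (n_in,) synaptic weights.
    Time constants here are illustrative, not calibrated hardware values.
    """
    T = in_spikes.shape[0]
    v, I = v_leak, 0.0
    out = np.zeros(T)
    for t in range(T):
        I += dt * (-I / tau_s) + w @ in_spikes[t]       # Eq. (13)
        v += dt * ((v_leak - v) + R * I) / tau_m        # Eq. (12)
        if v >= v_th:                                   # threshold crossing:
            out[t] = 1.0                                # emit spike and reset
            v = v_reset
    return out

rng = np.random.default_rng(0)
spikes = (rng.random((60, 5)) < 0.1).astype(float)      # 60 steps of 0.5 us
print(simulate_lif(spikes, w=np.full(5, 0.3)).sum(), "output spikes")
```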
In the following, we consider an SNN with the structure outlined in [14] and depicted in Fig. 2B. It consists of one hidden layer constituted by \(N^{\text{h}}\) LIF neurons \(\{n^{\text{h}}_{j}\}\), projecting its spike events onto one output layer with \(N^{\text{o}}=4\) non-spiking LI readout neurons \(\{n^{\text{o}}_{k}\}\). The hidden layer receives spike events from the input layer, encoding a set of input samples \(\tilde{\mathbf{y}}_{n}\). The readout layer's outputs are translated to symbol-level \(\log\)-probabilities. Spike-input encoding and output decoding are explained in the following.
_Input Spike-Encoding:_ To demap a sample \(\tilde{y}_{n}\), we consider the chunk \(\tilde{\mathbf{y}}_{n}\) defined in (1) and assign to each sample \(\tilde{y}_{n,\ell}\) a set of input neurons \(\{n^{\text{i}}_{i,\ell}\}_{i=0}^{\bar{N}^{\text{i}}_{\ell}-1}\), encoding the sample value in their spike times \(\{t^{\text{s}}_{i,\ell}\}\). Here, \(\ell\) indexes the samples within \(\tilde{\mathbf{y}}_{n}\) and \(\bar{N}^{\text{i}}_{\ell}\in\mathbb{N}\) is the number of neurons associated to sample \(\tilde{y}_{n,\ell}\), such that \(N^{\text{i}}=\sum_{\ell=0}^{n_{\text{tap}}-1}\bar{N}^{\text{i}}_{\ell}\) is the size of the input layer. Further, we assign each input neuron \(n^{\text{i}}_{i,\ell}\) a _reference point_\(\chi_{i,\ell}\), which we choose together with \(\bar{N}^{\text{i}}_{\ell}\) to be independent of \(\ell\), \(\chi_{i,\ell}=\chi_{i}\) and \(\bar{N}^{\text{i}}_{\ell}=\bar{N}^{\text{i}}\). Finally, we compute the spike time \(t^{\text{s}}_{i,\ell}\) by scaling the distance of \(\tilde{y}_{n,\ell}\) to \(\chi_{i}\),
\[t^{\text{s}}_{i,\ell}=\alpha\big{|}\tilde{y}_{n,\ell}-\chi_{i}\big{|}+o, \tag{14}\]
where \(\alpha\) is a scaling factor and \(o\) is an offset. This spike-encoding preserves all information and encodes the value \(\tilde{y}_{n,\ell}\) redundantly in \(\bar{N}^{\text{i}}\) spike times in order to increase the network's activity and enrich information in time. The values \(\chi_{i}\), \(\bar{N}^{\text{i}}\) and \(\alpha\) are subject to tuning and are chosen to augment the network's activity by the right amount to achieve optimal performance. Here, the \(\chi_{i}\)s are equidistantly spaced in the domain of \(\tilde{y}_{n,\ell}\) and \(\alpha\) is selected to obtain spike times comparable to the membrane time constants. Note that, while a larger \(\bar{N}^{\text{i}}\) increases the network's complexity, it potentially stabilizes the network's performance on a noisy analog substrate like BSS-2, see Section IV. We further introduce a cutoff time \(t_{\text{c}}\) after which input neurons are not allowed to emit spike events and we do not expect the SNN to gain information. The spike encoding is illustrated in Fig. 2C. A sample \(\tilde{y}_{n,\ell}\) (purple, dotted) is translated into spike times according to its distance to the reference points, e.g., the distance to \(\chi_{4}\) (blue, solid) results in a spike from input neuron \(n^{\text{i}}_{4,\ell}\) depicted in blue. The input neuron \(n^{\text{i}}_{8,\ell}\), corresponding to \(\chi_{8}\) (yellow, dotted), remains silent.
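The encoding of Eq. (14) is a one-liner per reference point, as the sketch below shows; the reference-point range, \(\alpha\), and the cutoff are illustrative values in arbitrary time units. Reference points far from the sample value produce spike times beyond the cutoff, so the corresponding input neurons stay silent:

```python
import numpy as np

def encode_sample(y, ref_points, alpha=1.0, offset=0.0, t_cutoff=15.0):
    """Spike times per Eq. (14): t_i = alpha * |y - chi_i| + offset.

    Each reference point chi_i yields one (redundant) spike; times past
    the cutoff are dropped (that input neuron stays silent). All values
    here are illustrative, in arbitrary time units.
    """
    t = alpha * np.abs(y - ref_points) + offset
    return np.where(t <= t_cutoff, t, np.nan)       # NaN = no spike emitted

ref = np.linspace(-4.0, 4.0, 10)                    # 10 equidistant reference points
times = encode_sample(1.3, ref, alpha=3.0)
print(np.round(times, 2))                           # distant refs exceed the cutoff
```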
_Output Membrane-Decoding:_ Each of the 4 neurons in the readout layer is assigned to one element of the PAM-4 alphabet \(\mathcal{A}\). We take the maximum membrane voltages produced over time, i.e., \(\hat{s}_{k}=\max_{t}v_{k}(t)\), which are interpreted as \(\log\)-probabilities providing an SD on the PAM-4 symbols. Then, the symbol-wise HD is obtained by choosing the symbol of highest probability and the bitwise HD is obtained from the symbol decisions via the Gray label. Hence, the network learns to place its hidden layer spike events in time, such that the membrane trace of the correct output neuron is deflected
Figure 2: **(A)** LIF dynamic: a LIF neuron receives input spikes (yellow arrows), causing a synaptic current (purple) onto the neuron’s membrane, thereby deflecting its membrane potential (blue). As the potential crosses the threshold (gray), the neuron sends out a spike (blue arrow) and is reset. **(B)** The considered SNN setup for joint equalization and demapping. A chunk \(\tilde{\mathbf{y}}_{n}\) of samples is translated to spike times of input neurons, projected onto one hidden LIF layer. A readout layer of LI neurons adjust their membrane voltage by integrating these hidden spike events. Symbol-level \(\log\)-probabilities are calculated by taking the maximum membrane value over time of the readout neurons, allowing to infer bit-decisions. **(C)** Schematic drawing of the input spike-encoding: a sample \(\tilde{y}_{n,\ell}\) (purple dotted) is translated into spike times according to the distance to reference points assigned to \(10\) neurons (gray lines). Spikes occurring after a cutoff time \(t_{\text{c}}\) are omitted.
upwards while the traces of the others are suppressed.
### _Training_
Time-discretized SNNs are mathematically recurrent neural networks (RNNs) [19] and can be trained with the gradient-based backpropagation through time (BPTT). For this, the derivative of the spiking output of the LIF neurons with respect to their membrane potential has to be known. This derivative is ill-defined due to the threshold activation function. Often surrogate gradients, smoothing out the neurons' activation functions, are used to bypass this issue and allow backpropagating the gradient. Here, we rely on the SuperSpike [19] surrogate gradient. The model parameters are optimized by the Adam optimizer [20].
In the simulation, the SNN is integrated with a step size \(\Delta t=0.5\,\upmu\)s for \(T=30\,\upmu\)s, suitable for BSS-2 (see Section IV). Our simulated SNN demappers are implemented using the PyTorch-based Norse [21] framework. To estimate the hardware gradient for the SNN demappers emulated on BSS-2 in continuous time with the BPTT algorithm, we discretize the hardware observables assuming the same step size, see Section IV. We allocate \(\bar{N}^{\mathrm{i}}=10\) input neurons per sample, of which only a subset is active, depending on the sample value, see Fig. 5. We use the cross entropy on the max-over-time voltage values as the objective function. The parameters of the SNN and input encoding are listed in Table II. In case of emulation on BSS-2, these parameters are used as calibration targets, resp. for ITL training (see Section IV).
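The SuperSpike surrogate can be written as a custom autograd function, as sketched below (the sharpness \(\beta=10\) is an illustrative choice). The forward pass keeps the hard threshold while the backward pass substitutes the smooth surrogate derivative \(1/(1+\beta|v-\vartheta|)^{2}\), so BPTT can propagate gradients through the spiking nonlinearity:

```python
import torch

class SuperSpike(torch.autograd.Function):
    """Heaviside spike nonlinearity with the SuperSpike surrogate gradient.

    Forward: z = Theta(v - theta). Backward: the ill-defined derivative is
    replaced by 1 / (1 + beta * |v - theta|)^2, letting BPTT flow through
    the threshold. beta controls the surrogate's sharpness (illustrative).
    """
    beta = 10.0

    @staticmethod
    def forward(ctx, v_minus_theta):
        ctx.save_for_backward(v_minus_theta)
        return (v_minus_theta > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + SuperSpike.beta * v.abs()) ** 2
        return grad_output * surrogate

v = torch.randn(16, requires_grad=True)
spikes = SuperSpike.apply(v)
spikes.sum().backward()
print(v.grad[:3])       # nonzero gradients despite the step function
```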
## IV BrainScaleS-2 Neuromorphic System
We now discuss the emulation of the SNN demappers on the BrainScaleS-2 (BSS-2) system [9].
BSS-2, depicted in Fig. 3, is an accelerated neuromorphic mixed-signal hardware platform developed at Heidelberg University. Its analog neural network core features 512 adaptive exponential integrate-and-fire (AdEx) [22] neuron compartments on two chip hemispheres, each implemented as an analog circuit and emulated in continuous time.
Each neuron compartment receives stimuli from a column of 256 synapses with 6-bit weights. Increased fan-in can be achieved by building larger 'logical' neurons from multiple compartments. Synapses can be configured to be inhibitory (negative sign) or excitatory (positive sign) in a row-wise fashion. To realize a specific network topology, on-chip spike routing connects neurons to target synapses. In addition, off-chip spikes are injected as an input stimulus for the network.
Neuron circuits can be configured to emulate LIF neurons as well as LI neurons by disabling the spiking mechanism. Each hemisphere on BSS-2 has one columnar ADC (CADC) to measure and digitize in parallel the membrane potentials of the neurons of each column in the synapse matrix, with an effective sampling period of about \(2\,\upmu\)s including time stamping and writing to memory. CADC measurements and spike events can be recorded in an FPGA-managed dynamic random access memory (DRAM) and read out by the host computer.
Recording hardware observables allows for hardware ITL training. In the case of our SNN demapper on BSS-2, the forward pass is performed on BSS-2 and the hardware gradient is estimated on the host computer by utilizing the network's recorded hardware observables to calculate the weight updates [12, 13].
To obtain an equivalent experiment configuration on BSS-2, our software stack translates the high-level SNN experiment description to a data flow graph representation, places and routes neurons and synapses on the hardware substrate, and compiles stimulus inputs, recording settings and other runtime dynamics into an experiment program [23].
The analog circuits on BSS-2 are subject to device variations (fixed-pattern noise) that can be compensated for by calibration. Therefore, one part of the system configuration consists of a calibration data set that is loaded to obtain a chip operating point, which most closely resembles the desired target dynamics with minimal variation, e.g., with respect to model parameters such as neuron membrane time constants or synaptic efficacy.
To represent one signed software weight \(w_{\mathrm{sw}}\) on BSS-2,
Figure 3: **(A)** The BrainScaleS-2 neuromorphic ASIC bonded onto its carrier board: the ASIC is about 4 mm x 8 mm in size. It is organized in two hemispheres, each containing 256 spiking neurons in analog circuits. **(B)** The schematic of BSS-2: one parallel columnar ADC and one general-purpose single instruction, multiple data (SIMD) processor are available per hemisphere.
two hardware synapses, with the respectively excitatory and inhibitory weights
\[w_{\text{hw}}^{\text{inh}}=\max\left(0,-w_{\text{sw}}\right)\quad\text{and} \quad w_{\text{hw}}^{\text{exc}}=\max\left(0,w_{\text{sw}}\right), \tag{15}\]
are allocated and constitute one signed hardware weight \(w_{\text{hw}}\in[-63,63]\). We scale each weight \(w_{\text{sw}}\) linearly into a hardware-compatible range and round it to the nearest value representable on BSS-2. The batched input spikes are injected into BSS-2 and the SNN is emulated for \(T=30\,\upmu\)s per batch entry, i.e., for demapping a single sample. During emulation, spike events are recorded and the CADC samples membrane voltages of the hidden neurons and the readout neurons. After the emulation, the host computer reads back and post-processes the recorded data. The post-processing step includes a linear interpolation to convert event-based CADC recordings to a torch::Tensor expressed on a fixed time grid. To facilitate hardware-ITL training on BSS-2, we utilized hxtorch.snn [23], a PyTorch-based [24] library that automates and abstracts away hardware-specific procedures and provides data conversions from and to PyTorch.
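The split into excitatory and inhibitory synapses of Eq. (15), combined with the linear scaling and rounding into the signed 6-bit range, can be sketched as follows (the max-absolute-value scaling rule is an assumption; the text above only states that weights are scaled linearly into a hardware-compatible range):

```python
import numpy as np

def to_hardware_weights(w_sw, w_max=63):
    """Map signed software weights to split hardware synapses, Eq. (15).

    The weights are linearly scaled into [-63, 63], rounded to the nearest
    representable value, and split into a non-negative excitatory and a
    non-negative inhibitory synapse per connection.
    """
    scale = w_max / np.abs(w_sw).max()              # assumed scaling rule
    w_hw = np.round(w_sw * scale).astype(int)       # signed 6-bit weight
    w_exc = np.maximum(0, w_hw)                     # excitatory synapse row
    w_inh = np.maximum(0, -w_hw)                    # inhibitory synapse row
    return w_exc, w_inh

w = np.random.default_rng(0).normal(size=(40, 4))   # e.g. hidden-to-output weights
w_exc, w_inh = to_hardware_weights(w)
assert (w_exc - w_inh).max() <= 63 and (w_exc - w_inh).min() >= -63
```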
## V Training and Testing
To measure the BER of the demappers against the noise-level in the IM/DD link, we train our models with successively increasing noise-levels \(\sigma^{2}\). At each noise-level, we perform validation runs on independent data and store the model parameters of the best performing demapper. At the next noise-level, we restore the best model from the previous noise-level and continue training. This procedure is repeated for five different random seeds, affecting model initialization, IM/DD-data generation and sampling permutations. We select the best-performing demappers for each noise-level over the seeds according to their respective validation runs and benchmark the models on independent test data. The tests are run until a minimum of \(2000\) bit error events are encountered.
## VI Results
In Figure 4, we compare our \(7\)-tap SNN demapper emulated on the analog neuromorphic BSS-2 system (\(\text{SNN}^{\text{hw}}\)) to a \(7\)-tap SNN demapper simulated in software (\(\text{SNN}^{\text{sw}}\)) in terms of BER versus noise-level \(\sigma^{2}\) in the link. We benchmark our SNN performances against the LMMSE, with 7 taps (LE7) and without memory (LE1). As additional nonlinear references, we consider a 7-tap ANN demapper with two hidden layers (see Section II-D) and a 7-tap VNLE. All demapper configurations are specified in Table III.
Both the simulated \(\text{SNN}^{\text{sw}}\) demapper and the \(\text{SNN}^{\text{hw}}\) demapper on BSS-2 outperform the LMMSE demapper. At a pre-FEC BER of \(2\cdot 10^{-3}\), we observe a gain of about 1.5 dB of the \(\text{SNN}^{\text{sw}}\) demapper to the LE7 demapper and a gain of 0.5 dB to the nonlinear ANN demapper. Compared to the VNLE demapper, the \(\text{SNN}^{\text{sw}}\) demapper shows superior performance for noise levels higher than \(-21\,\text{dB}\), in particular at the considered pre-FEC BER threshold, it shows a 0.3 dB improvement, however, for noise levels lower than \(-21\,\text{dB}\) the VNLE demapper achieves a lower BER.
The \(\text{SNN}^{\text{hw}}\) demapper on BSS-2 approaches the performance observed with the simulated SNN and only suffers from a small hardware penalty with respect to the \(\text{SNN}^{\text{sw}}\) of about 0.2 dB at a BER of \(2\cdot 10^{-3}\), outperforming all reference strategies.
In Fig. 5A, we visualize the process of joint equalization and demapping on BSS-2 for four different samples. The upper row indicates the sample set \(\tilde{\mathbf{y}}_{n}\), with the sample of interest \(\tilde{y}_{n}\) highlighted. Each sample in this set is translated to spike times of \(10\) input neurons, depicted in the second row. For \(n_{\text{tap}}=7\), the hidden LIF layer receives spike events from \(70\) input neurons, of which the majority are silent due to a cutoff time of 15 \(\upmu\)s (see Section III). These input spike events activate the \(40\) LIF neurons in the hidden layer, exciting them to emit spike events themselves, as shown in the third row. These spike events constitute a meaningful pattern, driving the membrane voltage of the correct LI output neuron to the maximum voltage value over time, from which the bits are inferred via an HD. This behavior is observed in the analog membrane traces in the lowermost row. The membrane voltage of the readout neuron corresponding to the estimated symbol is deflected upwards while the others drop below zero and
Fig. 4: The BER of SNN equalizers in simulation and on the BSS-2 system over the noise-levels \(\sigma^{2}\) in the IM/DD link compared to ANN, VNLE, and LMMSE reference equalizers. The error bars denote the 99% credibility intervals.
hence do not intervene in the decision. Note that the dynamics visualized in each column from the second to fourth row all happen simultaneously in BSS-2's analog circuits.
The weight matrices learned on BSS-2 are shown in Fig. 5B. The input-to-hidden weight matrix \(w^{\text{ih}}_{ij}\) shows a greater weight magnitude for rows with indices \(i\in[30,39]\). This is expected as these rows receive the input spike events encoding the most significant sample to demap \(\tilde{y}_{n}\) in the innermost tap. For the outer rows, one can observe a pattern repeating with the number of input rows per sample, \(\bar{N}^{\text{i}}=10\). The lower plot depicts the hidden-to-output weight matrix \(w^{\text{ho}}_{jk}\).
## VII Conclusion
This work successfully showcases the implementation of SNN-based joint equalization and demapping emulated on the accelerated analog neuromorphic hardware system BSS-2. Our demapper on BSS-2 approaches the performance of an SNN demapper simulated in software while outperforming an LMMSE equalizer and performing better than a nonlinear ANN reference demapper, both with the same number of taps. A gain of 1.5 dB at a BER of \(2\cdot 10^{-3}\) of the simulated SNN over the LMMSE clearly demonstrates the nonlinear processing capability of the SNN demapper. A small hardware penalty of about 0.2 dB at the same BER with respect to the SNN simulated in software is observed and is attributed to hardware imperfections like noise in the physical substrate, fixed-pattern noise artifacts of the production process, and potentially a sub-optimal hardware operation point. Typically, the fixed-pattern noise effects are widely absorbed by gradient-based training. An additional cause might be the limited precision of 6-bit hardware weights and the 8-bit CADC. Despite having multiple sources of noise and loss of information owing to limited precision, the SNN demapper on BSS-2 shows an excellent performance and resilience to hardware impairments. We conjecture that the chosen size of the SNN with \(40\) hidden neurons ensures a robust behavior by encoding information redundantly. Accordingly, we expect to observe a larger hardware penalty as the number of hidden neurons decreases. An interesting direction for future research is to investigate how the complexity of the SNN on BSS-2 can be reduced while maintaining its performance.
With the implementation at hand, the equalization and demapping of a single sample take about \(T=30\,\upmu\)s. Therefore, the BSS-2 platform supports a maximum symbol rate of 30 kBd. However, this upper bound is due to the specific design target of BSS-2 as a general purpose experimentation platform and does not follow from an intrinsic limitation of the underlying complementary metal-oxide-semiconductor (CMOS) technology itself. Significantly faster inference, and thus throughput, might be achieved by accelerating the emulation of the LIF dynamics. [25] presented a neuromorphic ASIC exhibiting an acceleration of up to two additional orders of magnitude (OOMs) with respect to BSS-2. Given the fact that the cited implementation was fabricated in a 180 nm CMOS process, it is reasonable to assume that a modern FinFET process could potentially gain at least another 2 OOMs. This
Figure 5: **(A)** Four different examples visualizing the bit inference process on BSS-2. The upper row depicts the chunk of samples used for demapping the innermost sample (colored). This chunk is translated to the spike events of the input neurons (10 per sample) as shown in the second row. The third row depicts the corresponding spike events of the hidden LIF neurons. In the last row, the membrane voltage traces of the four LI neurons are plotted. The output neuron corresponding to the estimated symbol produces the maximum voltage value over time. A Gray demapper provides bit decisions. **(B)** The SNN demapper’s learned synapse matrices on BSS-2. The upper matrix shows the weights from the input to the hidden layer. Note that each of the \(n_{\text{tap}}=7\) samples has \(10\) input neurons assigned, resulting in 70 input rows. The rows corresponding to the middle sample \(\tilde{y}_{n}\) contribute most to the decision. The lower matrix shows the weights from the hidden to the output layer.
would result in a processing time on the order of nanoseconds per sample. The throughput can be increased further by parallelization. Several spiking network cores could be deployed in parallel, each of which could process multiple samples on the same physical substrate at once. To get from a few nanoseconds per sample to 200 Gbit/s, a parallel processing factor of a few hundred is enough, which is similar to the time-interleaving of multiple analog-to-digital converters (ADCs) used in standard optical DSP solutions [26].
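As a sanity check, the projected numbers above can be chained together; the figures below (30 \(\upmu\)s per sample, two times 2 OOMs of speed-up, 2 bits per symbol) are taken from the text and are projections, not measurements.

```python
# Back-of-the-envelope version of the throughput argument above.
t_sample = 30e-6 / 10**2 / 10**2    # 30 us, minus 2 + 2 OOMs -> 3 ns/sample
symbol_rate = 200e9 / 2             # 200 Gbit/s at 2 bit/symbol -> 100 GBd
parallel_factor = symbol_rate * t_sample
print(f"{t_sample * 1e9:.0f} ns/sample -> parallelism ~{parallel_factor:.0f}")
# 3 ns/sample -> parallelism ~300, i.e. "a few hundred" cores in parallel
```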
The spatio-temporal sparsity of SNNs promises an intrinsically favorable energy footprint when contrasted to traditional ANN-based solutions - especially when combined with analog IMC [6]. Currently, the power consumption is dominated by I/O as well as the clock distribution and biasing of the individual subsystems - a fact largely attributed to the flexible general-purpose approach of BSS-2. Optimizing or omitting these subsystems in future, more specialized ASICs could dramatically reduce the overall energy footprint.
Future research aims to increase hardware resource efficiency by decreasing the architectural SNN complexity and to investigate feature sharing in order to increase throughput via parallelization. Importantly, the power consumption of neuromorphic signal processing shall be analyzed, compared to that of a digital implementation, and optimized by minimizing the firing activity of the neurons and by an efficient design of the input and output interfaces of the SNN.
The presented results demonstrate that electrical neuromorphic hardware can implement signal processing with the accuracy required in optical transceivers. To successfully integrate SNN equalization in optical transceivers, efficient conversion of received signals into input spikes must be researched.
## Acknowledgment
We thank L. Blessing, B. Cramer, and C. Pehle for insightful discussions, C. Mauch for keeping the BSS-2 system on track, and all members of the Electronic Vision(s) research group who contributed to the BSS-2 system.
## Funding
The contributions of the Electronic Vision(s) group have been supported by the EC Horizon 2020 Framework Programme under grant agreements 785907 (HBP SGA2) and 945539 (HBP SGA3), Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2181/1-390900948 (the Heidelberg STRUCTURES Excellence Cluster), and the Helmholtz Association Initiative and Networking Fund [Advanced Computing Architectures (ACA)] under Project SO-092.
|
2309.15762 | Rapid Network Adaptation: Learning to Adapt Neural Networks Using
Test-Time Feedback | We propose a method for adapting neural networks to distribution shifts at
test-time. In contrast to training-time robustness mechanisms that attempt to
anticipate and counter the shift, we create a closed-loop system and make use
of a test-time feedback signal to adapt a network on the fly. We show that this
loop can be effectively implemented using a learning-based function, which
realizes an amortized optimizer for the network. This leads to an adaptation
method, named Rapid Network Adaptation (RNA), that is notably more flexible and
orders of magnitude faster than the baselines. Through a broad set of
experiments using various adaptation signals and target tasks, we study the
efficiency and flexibility of this method. We perform the evaluations using
various datasets (Taskonomy, Replica, ScanNet, Hypersim, COCO, ImageNet), tasks
(depth, optical flow, semantic segmentation, classification), and distribution
shifts (Cross-datasets, 2D and 3D Common Corruptions) with promising results.
We end with a discussion on general formulations for handling distribution
shifts and our observations from comparing with similar approaches from other
domains. | Teresa Yeo, Oğuzhan Fatih Kar, Zahra Sodagar, Amir Zamir | 2023-09-27T16:20:39Z | http://arxiv.org/abs/2309.15762v1 | # Rapid Network Adaptation:
# Rapid Network Adaptation: Learning to Adapt Neural Networks Using Test-Time Feedback
We propose a method for adapting neural networks to distribution shifts at test-time. In contrast to **training-time** robustness mechanisms that attempt to **anticipate** and counter the shift, we create a **closed-loop** system and make use of a **test-time** feedback signal to adapt a network on the fly. We show that this loop can be effectively implemented using a **learning-based function**, which realizes an **amortized optimizer** for the network. This leads to an adaptation method, named Rapid Network Adaptation (RNA), that is notably **more flexible** and **orders of magnitude faster** than the baselines. Through a broad set of experiments using various adaptation signals and target tasks, we study the efficiency and flexibility of this method. We perform the evaluations using various datasets (Taskonomy, Replica, ScanNet, Hypersim, COCO, ImageNet), tasks (depth, optical flow, semantic segmentation, classification), and distribution shifts (Cross-datasets, 2D and 3D Common Corruptions) with promising results. We end with a discussion on general formulations for handling distribution shifts and our observations from comparing with similar approaches from other domains.
## 1 Introduction
Neural networks are found to be unreliable against distribution shifts [23, 37, 48, 46, 30]. Examples of such shifts include blur due to camera motion, object occlusions, changes in weather conditions and lighting, etc. The _training-time_ strategies to deal with this issue attempt to _anticipate_ the shifts that may occur and _counter_ them at the training stage - for instance, by augmenting the training data or updating the architecture with corresponding robustness inductive biases. As the possible shifts are _numerous_ and _unpredictable_, this approach has inherent limitations. This is the main motivation behind _test-time_ adaptation methods, which instead aim to _adapt_ to such shifts as they occur and recover from failure. In other words, these methods choose adaptation over anticipation (see Fig. 1). In this work, we propose a test-time adaptation framework that aims to perform an _efficient_ adaptation of a given main network using a feedback signal.
One can consider performing "test-time optimization" (TTO) for this purpose, similar to previous works [100, 117, 28]. This involves using SGD to finetune
Figure 1: **Adaptive vs non-adaptive neural network pipelines.**_Top:_ In order to be robust, non-adaptive methods include training-time interventions that _anticipate and counter_ the distribution shifts that will occur at test-time (_e.g._, via data augmentation). The learned model, \(f_{\theta}\), is frozen at test-time, thus upon encountering an out-of-distribution input, its predictions may collapse. _Bottom:_ Adaptive methods create a _closed-loop_ and use an _adaptation signal_ at test-time. The adaptation signal is a quantity that can be computed at test-time from the environment. \(h_{\phi}\) acts as a “controller” by taking in an error feedback, computed from the adaptation signal and model predictions, to adapt \(f_{\theta}\) accordingly. It can be implemented as a **(i)** standard optimizer (_e.g._, using SGD) or **(ii)** neural network. The former is equivalent to test-time optimization (TTO), while the latter aims to _amortize_ the optimization process, by training a controller network to adapt \(f_{\theta}\) – thus, it can be more efficient and flexible. In this work, we study the latter approach and show its efficiency and flexibility.
the network to reduce a proxy loss. While this can successfully adapt a network, it is unnecessarily _inefficient_ as it does not make use of the learnable regularities in the adaptation process, and is consequently not conducive to real-world applications. It also results in a _rigid_ framework, as the update mechanism is fixed to be the same as the training process of neural networks (SGD). We show this process can be effectively amortized using a learning-based feed-forward controller network, which yields orders of magnitude _faster_ results (see Fig. 1, Sec. 4.3). In addition, it provides _flexibility_ advantages, as the controller is implemented using a neural network and can be engineered to include arbitrary inductive biases and desired features that could counter the suboptimalities of the adaptation signal.
## 2 Related Work
Our work focuses on how to adapt a neural network efficiently at test-time across a range of tasks and adaptation signals. We give an overview of relevant topics.
**Robustness methods**_anticipate_ the distribution shift that can occur and incorporate inductive biases into the model to help it generalize. Popular methods include data augmentation [67, 116, 63, 39, 112, 110, 36, 48], self-/pre-training [38, 104, 26, 78, 105, 81, 34], architectural changes [19, 8, 89, 68, 61] or ensembling [53, 73, 108, 79, 43, 74]. We focus on adaptation mechanisms and identifying practical adaptation signals that can be used at _test-time_.
**Conditioning methods** use _auxiliary inputs_ to adapt a model. Some examples include using HyperNetworks [33, 47] or cross-attention [86, 97]. A popular method that has been adopted in different problem settings, _e.g_., style transfer [25, 31, 40], few-shot learning [72, 83, 120, 95, 45], is performing feature-wise modulation [24, 77]. It involves training a model to use the auxiliary information to predict affine transformation parameters that will be applied to the features of the target model. Our formulation can be viewed to be a form of conditioning, and we show it results in a framework that is expressive, efficient, and generalizable.
**Amortized optimization methods** make use of learning to improve (_e.g_., speed-up) the solution of optimization problems, particularly for settings that require repeatedly solving similar instances of the same underlying problem [54, 11, 4, 115, 27, 103, 66, 14, 3]. Fully amortized optimization methods model the shared structure between past instances of solved problems to regress the solution to a new problem [33, 21, 57]. As adapting to distribution shifts can be cast as solving an optimization problem at test-time, our method can be seen as an amortized solution.
**Test-time adaptation methods for geometric tasks.** Many existing frameworks, especially in geometric tasks such as aligning a 3D object model with an image of it, in effect instantiate a task-specific case of closed-loop optimization for each image [66, 115, 69]. Common sources of their adaptation quantity include sensor data [101, 107, 15, 98, 111, 18], structure from motion (SFM) [102, 52], motion [12], and photometric and multi-view consistency constraints (MVC) [64, 50]. Many of the latter methods often focus on depth prediction and they introduce losses that are task-specific, _e.g_., [102] optimize a photometric consistency loss. We differ by aiming to investigate a more general framework for test-time adaptation that can be applied to several tasks. For MVC, while we adopt the same losses as [64], we show under collapsed predictions, optimizing only MVC constraints is not sufficient for recovering predictions; depth predictions need to be adapted and this can be done efficiently using our proposed framework (see Sec. 4.3).
**Test-time adaptation methods for semantic tasks.** Most of these works involve optimizing a self-supervised objective at test-time [92, 100, 60, 28, 29, 117, 9, 56, 109]. They differ in the choice of self-supervised objectives, _e.g_., prediction entropy [100], mutual information [56], and parameters optimized [9]. However, as we will discuss in Sec. 3.2, and as shown by [9, 28, 71], existing methods can _fail silently_, i.e. successful optimization of the adaptation signal loss does not necessarily result in better performance on the target task. We aim to have a more efficient and flexible method and also show that using proper adaptation signals results in improved performance.
**Weak supervision for semantic tasks** uses imperfect, _e.g_., sparse and noisy supervision, for learning. In the case of semantic segmentation, examples include scribbles [58] and sparse annotations [7, 75, 90, 118, 17]. For classification, coarse labels are employed in different works [106, 41]. We aim to have a more general method and adopt these as test-time adaptation signals. Further, we show that self-supervised vision backbones, _e.g_., DINO [10], can also be used to generate such signals and are useful for adaptation (See Sec. 3.2).
**Multi-modal frameworks** are models that can use the information from multiple sources, _e.g_., RGB image, text, audio, etc., [13, 5, 93, 55, 2, 16, 78, 1, 6, 32]. Schematically, our method has similarities to multi-modal learning (as many amortized optimization methods do) since it simultaneously uses an input RGB image and an adaptation signal. The main distinction is that our method implements a particular process toward adapting a network to a shift using an adaptation signal from the environment - as opposed to a generic multi-modal learning.
## 3 Method
In Fig. 1, we schematically compared methods that incorporate robustness mechanisms at training-time (thus anticipating the distribution shift) with those that adapt to shifts at test-time. Our focus is on the latter. In this section, we first discuss the benefits and downsides of common
adaptation methods (Sec. 3.1). We then propose an adaptation method that is fast and can be applied to several tasks (Sec. 3.1.1). To adapt, one also needs to be able to compute an adaptation signal, or _proxy_, at test-time. In Sec. 3.2, we study a number of practical adaptation signals for a number of tasks.
### How to adapt at test-time?
An adaptive system is one that can respond to changes in its environment. More concretely, it is a system that can acquire information to characterize such changes, _e.g._, via an adaptation signal that provides an error feedback, and make modifications that would result in a reduction of this error (see Fig. 1). The methods for performing the adaptation of the network range from gradient-based updates, _e.g._, using SGD to fine-tune the parameters [92, 100, 28], to the more efficient semi-amortized [120, 96] and amortized approaches [99, 72, 83] (see Fig. 6 of [83] for an overview). As amortization methods train a controller network to substitute the explicit optimization process, they only require a forward pass at test-time. Thus, they are computationally efficient. Gradient-based approaches, _e.g._, TTO, can be powerful adaptation methods when the test-time signal is robust and well-suited for the task (see Fig. 4). However, they are inefficient, risk failing silently, and require carefully tuned optimization hyperparameters [9]. In this work, we focus on an amortization-based approach.
**Notation.** We use \(\mathcal{X}\) to denote the input image domain, and \(\mathcal{Y}\) to denote the target domain for a given task. We use \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\) to denote the model to be adapted, where \(\theta\) denotes the model parameters. We denote the model before and after adaptation as \(f_{\theta}\) and \(f_{\hat{\theta}}\) respectively. \(\mathcal{L}\) and \(\mathcal{D}\) are the original training loss and training dataset of \(f_{\theta}\), _e.g_., for classification, \(\mathcal{L}\) will be the cross-entropy loss and \(\mathcal{D}\) the ImageNet training data. As shown in Fig. 1, \(h_{\phi}\) is a controller for \(f_{\theta}\). It can be an optimization algorithm, _e.g_., SGD, or a neural network. \(\phi\) denotes the optimization hyperparameters or the network's parameters. The former case corresponds to TTO, and the latter is the proposed RNA, which will be explained in the next subsection. Finally, the function \(g:\mathcal{X}^{M}\rightarrow\mathcal{Z}\) returns the adaptation signal by mapping a set of images \(\mathcal{B}=\{I_{1},...,I_{M}\}\in\mathcal{X}^{M}\) to a vector \(g(\mathcal{B})=z\in\mathcal{Z}\). This function \(g\) is given, _e.g_., for depth, \(g\) returns the sparse depth measurements from SFM.
#### 3.1.1 Rapid Network Adaptation (RNA)
For adaptation, we use a neural network for \(h_{\phi}\). The adaptation signal and model predictions are passed as inputs to \(h_{\phi}\), and it is trained to regress the parameters \(\hat{\theta}(\phi)=h_{\phi}(f_{\theta}(\mathcal{B}),z)\). This corresponds to an objective-based amortization of the TTO process [3]. Using both the adaptation signal \(z\) and the model prediction \(f_{\theta}(\mathcal{B})\) informs the controller network about the potential _errors_ of the model. Note that we do not need to regress the gradients of the optimization process. Instead, we _simulate_ TTO by training RNA to reduce errors in the predictions using the error feedback signal. That is, the training objective for \(h_{\phi}\) is \(\min_{\phi}\,\mathbb{E}_{\mathcal{D}}\left[\mathcal{L}(f_{\hat{\theta}(\phi) }(\mathcal{B}),y)\right]\), where \((\mathcal{B},y)\sim\mathcal{D}\) is a training batch sampled from \(\mathcal{D}\). Note that the **original weights of \(f\) are frozen** and \(h_{\phi}\) **is a small network**, having only 5-20% of the number of parameters of \(f\), depending on the task. We call this method _rapid network adaptation_ (RNA) and experiment with different variants of it in Sec. 4.
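The training objective above can be summarized in a short sketch; the call signatures of `f`, `h_phi`, and `g` mirror the paper's notation but are assumptions made for illustration.

```python
import torch

def train_rna(f, h_phi, g, loader, task_loss, opt):
    """Amortized training of h_phi on the clean dataset D; f's own
    weights stay frozen and only h_phi's parameters receive updates."""
    for B, y in loader:                 # (batch of images, ground truth)
        z = g(B)                        # adaptation signal, e.g. sparse depth
        with torch.no_grad():
            pred = f(B)                 # pre-adaptation prediction f_theta(B)
        theta_hat = h_phi(pred, z)      # error feedback -> regressed parameters
        y_hat = f(B, theta_hat)         # adapted forward pass f_theta_hat(B)
        loss = task_loss(y_hat, y)      # original training loss L
        opt.zero_grad()
        loss.backward()
        opt.step()
```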
There exist various options for implementing the amortization process, _e.g_., \(h_{\phi}\) can be trained to update the input image or the weights of \(f_{\theta}\). We choose to modulate the features of \(f_{\theta}\) as it has been shown to work well in different domains [24] and gave the best results. To do this, we insert \(k\) Feature-wise Linear Modulation (FiLM) layers [77] into \(f_{\theta}\) (see Fig. 2). Each FiLM layer performs: FiLM\((\mathbf{x}_{i};\gamma_{i},\beta_{i})=\gamma_{i}\odot\mathbf{x}_{i}+\beta_{i}\), where \(\mathbf{x}_{i}\) is the activation of layer \(i\). \(h_{\phi}\) is a network that takes as input the adaptation signal \(z\) and model predictions and outputs the coefficients \(\{\gamma_{i},\beta_{i}\}\) of all \(k\) FiLM layers. \(h_{\phi}\) is trained on the same dataset \(\mathcal{D}\) as \(f_{\theta}\), therefore, unlike TTO, it is _never exposed to distribution shifts during training_. Moreover, it is able to generalize to unseen shifts (see Sec. 4.3). See the supplementary for the full details, other RNA implementations we investigated, and a comparison of RNA against other approaches that aim to handle distribution shifts.
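A minimal PyTorch sketch of this design is given below; the small convolutional encoder inside the controller is an assumption for the sketch, while the FiLM operation itself follows the formula above.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """FiLM(x_i; gamma_i, beta_i) = gamma_i * x_i + beta_i, channel-wise."""
    def forward(self, x, gamma, beta):
        return gamma[:, :, None, None] * x + beta[:, :, None, None]

class Controller(nn.Module):
    """h_phi: encodes the concatenated (prediction, adaptation signal)
    and emits one (gamma_i, beta_i) pair per modulated layer of f_theta."""
    def __init__(self, in_channels, film_widths):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleList(nn.Linear(64, 2 * w) for w in film_widths)

    def forward(self, prediction, signal):
        h = self.encoder(torch.cat([prediction, signal], dim=1))
        return [head(h).chunk(2, dim=-1) for head in self.heads]
```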
### Which test-time adaptation signals to use?
While developing adaptation signals is not the main focus of this study and is independent of the RNA method, we need to choose some for experimentation. Existing test-time adaptation signals, or proxies, in the literature include prediction entropy [100], spatial autoencoding [28], and self-supervised tasks like rotation prediction [92], contrastive [60] or clustering [9] objectives. The more aligned the adaptation signal is with the target task, the better the performance on the target task [92, 60]. More importantly, a poor signal can cause the adaptation to fail silently [9, 28]. Figure 3 shows how the original loss on the target task changes as different proxy losses from the literature, i.e. entropy
Figure 2: **Architecture of RNA.**\(x\) is the input image, \(f_{\theta}\) is the model to be adapted and \(f_{\theta}(x)\) the corresponding prediction. To perform adaptation, we freeze the parameters of \(f_{\theta}\) and insert several FiLM layers into \(f_{\theta}\). We then train \(h_{\phi}\) to take in \(z\), the adaptation signal, and \(f_{\theta}(x)\) to predict the parameters of these FiLM layers. This results in an adapted model \(f_{\hat{\theta}}\) and improved predictions, \(f_{\hat{\theta}}(x)\).
[100] and consistency between different middle domains [108, 113], are minimized. In all cases, the proxy loss decreases; however, the improvement in the target loss varies. Thus, successful optimization of existing proxy losses does not necessarily lead to better performance on the target task. In this paper, we adopt a few practical and real-world signals for our study. Furthermore, **RNA turns out to be less susceptible to a poor adaptation signal than TTO** (see supplementary Tab. 1). This is because RNA is a neural network _trained_ to use these signals to improve the target task, as opposed to being fixed to SGD, like TTO.
#### 3.2.1 Employed test-time adaptation signals
We develop test-time adaptation signals for several geometric and semantic tasks as shown in Fig. 4. Our focus is not on providing an extensive list of adaptation signals, but rather on using practical ones for experimenting with RNA as well as demonstrating the benefits of using signals that are rooted in the known structure of the world and the task in hand. For example, geometric computer vision tasks naturally follow the multi-view geometry constraints, thus making that a proper candidate for approximating the test-time error, and consequently, an informative adaptation signal.
**Geometric Tasks.** The field of multi-view geometry and its theorems, rooted in the 3D structure of the world, provide a rich source of adaptation signals. We demonstrate our results on the following target tasks: monocular depth estimation, optical-flow estimation, and 3D reconstruction. For all, we first run a standard structure-from-motion (SFM) pipeline [88] to get sparse 3D keypoints. For depth estimation, we employ the z-coordinates of the sparse 3D keypoints i.e., noisy sparse depth, from each image as the adaptation signal. For optical flow, we perform keypoint matching across images. This returns noisy sparse optical flow which we use as the adaptation signal. Lastly, for 3D reconstruction, in addition to the previous two signals, we employ consistency between depth and optical flow predictions as another signal.
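As an illustration of the first of these signals, the sketch below rasterizes SFM keypoints into the sparse depth map used as supervision; it assumes the keypoints have already been transformed into the camera frame of the query image.

```python
import numpy as np

def sparse_depth_from_keypoints(pts_cam, K, shape):
    """Project 3D SFM keypoints (camera frame, z > 0) with the pinhole
    intrinsics K and keep the z-coordinate at each hit pixel."""
    H, W = shape
    depth = np.zeros((H, W))
    uvw = pts_cam @ K.T                                 # pinhole projection
    uv = np.round(uvw[:, :2] / uvw[:, 2:]).astype(int)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < W) \
       & (uv[:, 1] >= 0) & (uv[:, 1] < H) & (pts_cam[:, 2] > 0)
    depth[uv[ok, 1], uv[ok, 0]] = pts_cam[ok, 2]
    return depth
```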
**Semantic Tasks.** For semantic segmentation, we first experiment with using a low number of click annotations for each class, similar to works on active annotation tools [17, 90, 75]. This gives us sparse segmentation annotations. Likewise, for classification, we exploit the hierarchical structure of semantic classes and use coarse labels generated from the WordNet tree [70], similar to [42]. Although these signals (click annotations and coarse labels) are significantly weaker versions of the actual ground truth, and thus cheaper to obtain, it may not be realistic to assume access to them at test-time for certain applications, _e.g._, real-time ones. Thus, we also show how they can be obtained via \(k\)-NN retrieval from the training dataset and patch matching using spatial features obtained from a pre-trained self-supervised vision backbone [10] (see Fig. 4).
To perform adaptation with RNA at test-time, we first compute the adaptation signal for the given task as described above. The computed signal and the prediction from the model before adaptation, \(f_{\theta}\), are concatenated to form the error feedback. This error feedback is then passed as input to \(h_{\phi}\) (see Fig. 1). These adaptation signals are practical for real-world use, but they are also imperfect, i.e., the sparse depth points do not exactly correspond to the ground-truth values. Thus, to perform controlled experiments and separate the performance of RNA from that of the adaptation signals, we also provide experiments using ideal adaptation signals, e.g., masked ground truth. In the real world, these ideal signals can come from sensors like LiDAR.
## 4 Experiments
We demonstrate that our approach consistently outperforms the baselines for adaptation to **different distribution
Figure 4: **Examples of employed test-time adaptation signals.** We use a range of adaptation signals in our experiments. These are practical to obtain and yield better performance compared to other proxies. In the left plot, for depth and optical flow estimation, we use sparse depth and optical flow via SFM. In the middle, for classification, for each test image we perform \(k\)-NN retrieval to get \(k\) training images. Each of these retrieved images has a one-hot label associated with it; combining them gives us a coarse label that we use as our adaptation signal. Finally, for semantic segmentation, after performing \(k\)-NN as we did for classification, we get a pseudo-labelled segmentation mask for each of these images. The features for each patch in the test image and the retrieved images are matched. The top matches are used as sparse supervision. See Sec. 4.1 for more details.
Figure 3: **Adaptation using different signals. Not all improvements in the proxy loss translate into improving the target task’s performance.** We show the results of adapting a pre-trained depth estimation model to a defocus blur corruption by optimizing different adaptation signals: prediction entropy [100], a self-supervised task (Sobel edge prediction error [108]), and sparse depth obtained from SFM. The plots show how the \(\ell_{1}\) target error with respect to ground-truth depth (green, left axis) changes as the proxy losses (blue, right axis) are optimized (shaded regions represent the 95% confidence intervals across multiple runs of stochastic gradient descent (SGD) with different learning rates). Only adaptation with the sparse depth (SFM) proxy leads to a reduction of the target error. This signifies the importance of employing proper signals in an adaptation framework. Furthermore, we show that RNA is less susceptible to a poorer adaptation signal, which results in comparable or improved performance while being significantly faster (see supplementary Table 1).
shifts** (2D and 3D Common Corruptions [37, 48, 49], cross-datasets), over **different tasks** (monocular depth, image classification, semantic segmentation, optical flow) and **datasets** (Taskonomy [114], Replica [91], ImageNet [22], COCO [59], ScanNet [20], Hypersim [84]). The source code can be found on our project page.
### Experimental Setup
We describe our experimental setup, i.e. different adaptation signals, adaptation mechanisms, datasets and baselines, for different tasks. See Tab. 1 for a summary.
**Baselines.** We evaluate the following baselines:
_Pre-Adaptation Baseline_: The network \(f_{\theta}\) that maps from RGB to the target task, _e.g._, depth estimation, with no test-time adaptation. We denote this as Baseline for brevity.
_Densification_: A network that maps from the given adaptation signal for the target task to the target task, _e.g._, from sparse depth from SFM to dense depth. This is a control baseline and shows what can be learned from the test-time supervision alone, without employing input image information or a designed adaptation architecture. See Sec. 4.3 for a variant which includes the image as an additional input.
_TTO (episodic)_: We adapt the Baseline model, \(f_{\theta}\), to each batch of input images by optimizing the loss computed from the prediction and adaptation signal (see Tab. 1 for the adaptation signal used for each task) at test-time. Its weights are reset to the Baseline model's after optimizing each batch, similar to [100, 117] (see the sketch after this list of baselines).
_TTO (online)_: We continually adapt to a distribution shift defined by a corruption and severity. Test data is assumed to arrive in a stream, and each data point has the same distribution shift, _e.g._, noise with a fixed standard deviation [100, 92]. The difference with TTO (episodic) is that the model weights are not reset after each iteration. We denote this as TTO for brevity.
_TTO with Entropy supervision (TENT [100])_: We adapt the Baseline model trained with log-likelihood loss by optimizing the entropy of the predictions. This is to reveal the effectiveness of entropy as a signal as proposed in [100].
_TTO with Sobel Edges supervision (TTO-Edges)_: We adapt the Baseline model trained with an additional decoder that predicts a self-supervised task, similar to [92]. We choose to predict Sobel edges as it has been shown to be robust to certain shifts [108]. We optimize the error of the edges predicted by the model and edges extracted from the RGB image to reveal the value of edge error as a supervision.
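To make the TTO baselines above concrete, here is a minimal sketch of the episodic variant; the number of steps and learning rate are placeholders, not the tuned values used in the experiments.

```python
import torch

def tto_episodic(f, batch, z, proxy_loss, steps=10, lr=1e-4):
    """Adapt f on one batch by SGD against the proxy loss, then restore
    the pre-adaptation weights; TTO (online) would skip the reset."""
    backup = {k: v.clone() for k, v in f.state_dict().items()}
    opt = torch.optim.SGD(f.parameters(), lr=lr)
    for _ in range(steps):
        loss = proxy_loss(f(batch), z)
        opt.zero_grad()
        loss.backward()
        opt.step()
    pred = f(batch).detach()
    f.load_state_dict(backup)   # episodic: reset after each batch
    return pred
```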
**RNA configurations.** At test-time, we first get the predictions of the Baseline model and compute the adaptation signal. The predictions and adaptation signal are then concatenated and passed to \(h_{\phi}\), which adapts \(f_{\theta}\) to \(f_{\hat{\theta}}\). The test images are then passed to \(f_{\hat{\theta}}\) to get the final predictions. We evaluate the following variants of RNA.
_RNA (frozen \(f\))_: Baseline model weights, \(f_{\theta}\), are frozen when training \(h_{\phi}\). We call this variant _RNA_ for brevity.
_RNA (jointly trained \(f\))_: In contrast to the _frozen \(f\)_ variant, here we train \(h_{\phi}\) jointly with the Baseline network. This variant requires longer training.
**Adaptation signal.** As described in Sec. 3.2, we compute a broad range of test-time signals from the following processes. Each case describes a process applied on query image(s) in order to extract a test-time quantity. As mentioned in Sec. 3.2.1, the adaptation signal and prediction from \(f_{\theta}\) form the error feedback and the input to \(h_{\phi}\).
_Structure-from-motion (SFM)_: Given a batch of query images, we use COLMAP [88] to run SFM, which returns sparse depth. The percentage of valid pixels, i.e. depth measurements, is about 0.16% on Replica-CC and 0.18% on Replica. For ScanNet, we use the pre-computed sparse depth from [85], which has about 0.04% valid pixels. As running SFM on corrupted images results in noisy sparse depth, we train \(h_{\phi}\) to be invariant to noise in the _adaptation signal_ [111, 85]. Note that RNA is always trained with clean RGB inputs; only the signal is corrupted during training.
_Masked ground truth (GT)_: We apply a random mask to the GT depth of the test image. We fix the valid pixels to 0.05% of all pixels, i.e. similar sparsity as SFM (see the supplementary for other values). This is a _control_ proxy, as it enables a better evaluation of the adaptation methods without conflating them with the shortcomings of adaptation signals. It is also a scalable way of simulating sparse depth from real-world sensors, _e.g._, LiDAR, as also done in [101, 65, 44].
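Generating this control signal is simple; the sketch below fixes the valid fraction to the 0.05% used in the experiments and returns the validity mask alongside the sparse map (the exact interface is our assumption).

```python
import torch

def masked_gt_signal(depth_gt, valid_frac=0.0005):
    """Keep a random ~0.05% of ground-truth depth pixels; zero elsewhere."""
    mask = torch.rand_like(depth_gt) < valid_frac
    return depth_gt * mask, mask
```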
_Click annotations_: We generate click annotations over random pixels for each class in a given image using GT, simulating an active annotation pipeline. The number of pixels ranges from 3 to 25, i.e. roughly 0.01% of the total pixels, similar to [7, 75, 90, 118, 17].
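The simulated click supervision can be sketched as below; the ignore index and the per-class sampling scheme are assumptions that mirror common sparse-annotation setups.

```python
import torch

def click_annotations(seg_gt, clicks_per_class=15, ignore_index=-1):
    """Sample a few random GT pixels per class; mark the rest as ignore."""
    sparse = torch.full_like(seg_gt, ignore_index)
    for c in seg_gt.unique():
        idx = (seg_gt == c).nonzero()                 # (N, 2) pixel coords
        pick = idx[torch.randperm(len(idx))[:clicks_per_class]]
        sparse[pick[:, 0], pick[:, 1]] = c
    return sparse
```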
_Patch matching_: To avoid using GT click annotations, for each test image we first retrieve its \(k\)-NN images from the original clean training dataset using DINO features [10]. We
Figure 5: RNA can achieve similar performance as TTO in a much shorter time. We compare how the \(\ell_{1}\) errors of the adaptation mechanisms decrease over wall-clock time (s). The errors are averaged over all episodes (and all corruptions for Replica-CC). RNA only requires a forward pass at test-time, while TTO requires multiple forward and backward passes. On ScanNet and Replica-CC, RNA takes 0.01s, while TTO takes 3s to achieve similar performance. Furthermore, RNA is _not trained with test-time shifts_ unlike TTO, thus, it learned to use the additional supervision to adapt to _unseen shifts_.
then get segmentation masks on these \(k\) images. If the training dataset has labels for segmentation, we use them directly; otherwise, we obtain them from a pretrained network. For each of the \(k\) training images and the test image, we extract non-overlapping patches. The features of each patch that lies inside the segmentation masks of the \(k\) training images are matched to the features of every patch in the test image. These matches are then filtered and used as sparse segmentation annotations. See Fig. 4 for an illustration.
_Coarse labels (WordNet)_: We generate 45 coarse labels from the 1000-way ImageNet labels, i.e. making the labels 22x coarser, using the WordNet tree [70], similar to [41]. See the supplementary for more details on the construction and results for other coarse label sets.
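One plausible way to let a 45-way coarse label supervise 1000-way logits is to pool fine-class probabilities per coarse class, as sketched below; the paper does not spell out this exact loss, so treat it as an assumption.

```python
import torch
import torch.nn.functional as F

def coarse_nll(logits, coarse_target, fine_to_coarse, n_coarse=45):
    """fine_to_coarse: LongTensor of shape (1000,) mapping each ImageNet
    class to one of 45 WordNet-derived coarse classes."""
    p_fine = logits.softmax(dim=-1)                     # (B, 1000)
    pool = F.one_hot(fine_to_coarse, n_coarse).float()  # (1000, 45)
    p_coarse = (p_fine @ pool).clamp_min(1e-8)
    return F.nll_loss(p_coarse.log(), coarse_target)
```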
_Coarse labels (DINO \(k\)-NN)_: For each test image, we retrieve the \(k\)-NN images from the training dataset using DINO features [10]. Each of these \(k\) training images is associated with an ImageNet class; thus, combining the \(k\) one-hot labels gives us a coarse label.
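Merging the retrieved one-hot labels can look as follows; the normalization to a distribution is our choice for the sketch.

```python
import torch

def knn_coarse_label(neighbor_classes, num_classes=1000):
    """Combine the k retrieved images' one-hot labels into one coarse,
    multi-hot target."""
    coarse = torch.zeros(num_classes)
    coarse[torch.as_tensor(neighbor_classes)] = 1.0
    return coarse / coarse.sum()
```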
_Keypoint matching_: We perform keypoint matching across images to get sparse optical flow.
### Adaptation with RNA vs TTO
Here we summarize our observations from adapting with RNA vs TTO. As described, TTO represents the approach of closed-loop adaptation using the adaptation signal but without benefiting from any amortization and learning (the adaptation process is fixed to be standard SGD). These observations hold across different tasks. See Sec. 4.3 for results.
**RNA is efficient.** As RNA only requires a forward pass at test-time, it is orders of magnitude faster than TTO and is able to attain comparable performance to TTO. In Fig. 5, we compare the runtime of adaption with RNA and TTO for depth prediction. On average, for a given episode, RNA obtains similar performance as TTO in 0.01s, compared to TTO's 3-5s. Similarly, for dense 3D reconstruction, RNA is able to adapt in 0.008s compared to TTO's 66s (see Fig. 9). This suggests a successful amortization of the adaptation optimization by RNA.
Furthermore, RNA's training is also efficient as it only requires training a small model, i.e. 5-20% of the Baseline model's parameters, depending on the task. Thus, RNA has a fixed overhead, and small added cost at test-time.
**RNA's predictions are sharper than TTO's for dense prediction tasks.** From the last two rows of Fig. 6, it can be seen that RNA retains fine-grained details. This is a noteworthy point and can be attributed to the fact that _RNA benefits from a neural network, thus its inductive biases can be beneficial (and further engineered) for such advantages_. This is a general feature that RNA, and more broadly using a learning-based function to amortize adaptation optimization, brings, in contrast to limiting the adaptation process to be SGD, as represented by TTO.
**RNA generalizes to unseen shifts.** RNA performs better than TTO for low severities (see supplementary for more details). However, as RNA was _not exposed to any corruptions_, the performance gap against TTO, which _is exposed to corruptions_ at test-time, narrows at high severities, as expected.
We hypothesize that the generalization property of RNA is due to the following reasons. Even though \(f_{\theta}\) was trained to convergence, it does not achieve exactly 0 error. Thus, when \(h_{\phi}\) is trained with a frozen \(f_{\theta}\) on the training data, it can still learn to correct the errors of \(f_{\theta}\), thus adapting \(f_{\theta}\).
### Experiments using Various Target Tasks
In this section, we provide a more comprehensive set of evaluations covering various target tasks and adaptation signals. In all cases, RNA is a fixed general framework without being engineered for each task and shows supportive results.
**Depth.** We demonstrate the results quantitatively in Tab. 2 and Fig. 5 and qualitatively in Fig. 6. In Tab. 2, we compare RNA against all baselines, and over several distribution shifts and different adaptation signals. Our RNA variants outperform the baselines overall. TTO (online) has a better performance than TTO (episodic) as it assumes a smoothly changing distribution shift, and it continuously updates the model weights. RNA (jointly trained \(f\)) has a better performance among RNA variants. This is reasonable as the target model is not frozen, thus, is less restrictive.
As another baseline, we trained a single model that takes as input _a concatenation of the RGB image and sparse supervision_, i.e. multi-modal input. However, its average performance
\begin{table}
\begin{tabular}{l|l l l l l} \hline \hline
**Task** & **Adaptation signal** & **Adapted model** & **Training data** & **OOD evaluation data** & **Baselines** \\ \hline
Depth & SFM, masked GT & UNet [57], DPT [80] & Taskonomy & _For SFM:_ Replica, Replica-CC, ScanNet; _for masked GT:_ Taskonomy-CC, -3DCC, Hypersim & Pre-adaptation, densification, TENT, TTO-Edges, TTO \\ \hline
Optical flow & Keypoint matching & RAFT & FlyingChairs, FlyingThings & Replica-CC & Pre-adaptation \\ \hline
3D reconstruction & SFM, keypoint matching, consistency & Depth, optical flow models & Depth, optical flow data & Replica-CC & Pre-adaptation, TTO \\ \hline
Semantic segmentation & Click annotations, patch matching & FCN [62] & COCO (classes from Pascal VOC) & _For clicks:_ COCO-CC; _for patch matching:_ ImageNet-C & Pre-adaptation, TENT, TTO \\ \hline
Classification & Coarse labels & -- & ImageNet & ImageNet-C, -3DCC, -V2 & Pre-adaptation, densification, TENT, TTO \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Overview of the experiments for different target tasks, adaptation methods, and adaptation signals.** For each task, we list the adaptation signal (Sec. 3.2) that we use for adaptation. We also list the models that we adapt, the out-of-distribution (OOD) data used for evaluations, and the relevant baselines. When there are different options for the adaptation signal, _e.g._, in the case of depth, the signal is denoted in italics followed by the corresponding OOD dataset. The weights for the semantic segmentation, classification, and optical flow models were taken from PyTorch [76].
on Taskonomy-CC was 42.5% worse than RNA's (see sup. mat. Sec. 3.2). Among the baselines that do not adapt, densification is the strongest under distribution shift due to corruptions. This is expected as it does not take the RGB image as input, thus, it is not affected by the underlying distribution shift. However, as seen from the qualitative results in Figs. 6, 7, unlike RNA, densification is unable to predict fine-grained details (which quantitative metrics often do not capture well). We also show that the gap between RNA and densification widens with sparser supervision (see sup. mat. Fig. 1), which confirms that RNA is making use of the error feedback signal to adapt \(f\).
**Dense 3D Reconstruction.** We aim to reconstruct a 3D point cloud of a scene given a sequence of corrupted images. To do so, we make use of multiple adaptation signals from multi-view geometry. First, we compute the noisy sparse depth and optical flow from SFM and use it to
\begin{table}
\begin{tabular}{l|c c c|c c c c|c} \hline \hline
**Adaptation Signal** & \multicolumn{3}{c|}{**SFM**} & \multicolumn{4}{c|}{**Sparse GT**} & **Relative** \\ \hline
**Dataset** & \multicolumn{2}{c}{**Replica**} & **ScanNet** & \multicolumn{3}{c}{**Taskonomy**} & **Hypersim** & **Runtime** \\ \hline
**Shift** & **CDS** & **CC** & **CDS** & **None** & **CC** & **3DCC** & **CDS** & \\ \hline
Pre-adaptation Baseline & 1.75 & 6.08 & 3.30 & 2.68 & 5.74 & 4.75 & 33.64 & 1.00 \\
Densification & 2.50 & 4.19 & 2.33 & 1.72 & 1.72 & 1.72 & 17.25 & 1.00 \\
TENT [100] & 2.03 & 6.09 & 4.03 & 5.51 & 5.51 & 4.48 & 35.45 & 15.85 \\
TTO-Edges [92] & 1.73 & 6.14 & 3.28 & 2.70 & 5.69 & 4.74 & 33.69 & 20.95 \\
RNA (frozen \(f\)) & 1.72 & 4.26 & **1.77** & **1.12** & 1.68 & 1.49 & 16.17 & **1.56** \\
RNA (jointly trained \(f\)) & **1.66** & 3.41 & **1.74** & **1.11** & **1.50** & **1.37** & 17.13 & **1.56** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Quantitative adaptation results on depth estimation.** \(\ell_{1}\) errors on the depth prediction task (lower is better; multiplied by 100 for readability; the best models within 0.0003 error are shown in bold). We generate distribution shifts by applying Common Corruptions (CC), 3D Common Corruptions (3DCC), and by performing cross-dataset evaluations (CDS). The results for CC and 3DCC are averaged over all distortions and severity levels on Taskonomy and 3 severity levels on Replica data. The adaptation signal for Taskonomy is masked GT (fixed at 0.05% valid pixels), while that for Replica and ScanNet is sparse depth from SFM. RNA and TTO notably outperform the baselines. RNA successfully matches the performance of TTO while being around 10 times faster. See supplementary for the losses for different corruption types, sparsity levels, and the results of applying RNA to other adaptation signals.
Figure 6: **Qualitative results of RNA vs the baselines for semantic segmentation on random query images from COCO-CC (left) and depth on images from ScanNet, Taskonomy-3DCC, and Replica-CC (right).** For semantic segmentation, we use 15 pixel annotations per class. For Taskonomy-3DCC, we use sparse depth with 0.05% valid pixels (30 pixels per image). See Fig. 7 for results on different adaptation signal levels. For ScanNet and Replica-CC, the adaptation signal is sparse depth measurements from SFM [88] with similar sparsity ratios to Taskonomy-3DCC. The predictions with the proposed adaptation signals are shown in the last two rows. They are noticeably more accurate compared to the baselines. Compared to TTO, RNA's predictions are more accurate for segmentation and sharper for depth (see the ellipses), while being significantly faster. See Sec. 4.2 and the supplementary for more results.
Figure 7: **Qualitative adaptation results on semantic segmentation on random query images on COCO-CC. RNA notably improves the prediction quality using error feedback from as few as 3 random pixels.**
adapt the depth and optical flow models. The results from this adaptation can be found in the previous paragraph (for depth) and supplementary (for optical flow). Next, the two models are adapted to make their predictions consistent with each other. This is achieved using multi-view consistency (MVC) constraints, similar to [64]. The predictions from the adapted models are then used in the backprojection to attain a 3D point cloud.
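For completeness, the backprojection step at the end of this pipeline can be sketched as follows; `T_wc` denotes the camera-to-world pose, and the exact interface is an assumption made for illustration.

```python
import numpy as np

def backproject(depth, K, T_wc):
    """Unproject a (possibly adapted) depth map into a world-frame point
    cloud; invalid pixels (depth == 0) are dropped."""
    H, W = depth.shape
    v, u = np.mgrid[:H, :W]
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(float)
    pts_cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    pts_w = T_wc[:3, :3] @ pts_cam + T_wc[:3, 3:4]
    return pts_w.T[depth.reshape(-1) > 0]
```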
Figure 9 shows the point cloud visualizations on a scene from the Replica dataset. The sequence of input images was corrupted with Gaussian Noise. This results in collapsed depth predictions, thus, the reconstructions are poor (Baseline column) and performing MVC is not helpful (Baseline+MVC). Adapting the depth predictions using TTO and MVC improves the reconstruction notably while RNA achieves a similar performance significantly faster.
**Semantic Segmentation.** We experiment with click annotations and DINO patch matching as adaptation signals.
_Click annotations:_ In Fig. 8, we show how the IoU changes with the adaptation signal level on COCO-CC. As the Baseline and TENT do not make use of this signal, their IoU is a straight line. RNA clearly outperforms the baselines for all levels of adaptation signal. Figure 7 shows the qualitative results with increasing supervision, and Fig. 6 (left) a comparison against all baselines, demonstrating higher quality predictions with RNA.
_DINO patch matching:_ We perform patch matching on DINO features (described in Sec. 4.1) to get the adaptation signal. As the patch matching process can be computationally expensive, we demonstrate our results on all cat classes in ImageNet and over one noise, one blur, and one digital corruption for 3 levels of severity. We used the predictions of a pre-trained FCN on the clean images as pseudo-labels to compute the IoU. The mean IoU averaged over these corruptions and severities is 48.98 for the baseline model and 53.45 for TTO. RNA obtains a better IoU of 58.04; thus it can make use of the sparse annotations from DINO patch matching.
**Image Classification.** We experiment with coarse labels from WordNet and DINO \(k\)-NN as adaptation signals.
_Coarse labels (WordNet):_ Table 3 shows the results from using 45-coarse labels on ImageNet-{C,3DCC,V2}. This corresponds to 22x coarser supervision compared to the 1000 classes that we are evaluating on. TENT seems to have notable improvements in performance under corruptions for classification, unlike for semantic segmentation and depth. We show that using coarse supervision results in even better performance, about a further 5 pp reduction
Figure 8: **Quantitative adaptation results on semantic segmentation**. Each point shows the mean IOU over 15 corruptions and 5 severities. RNA significantly improves over baselines. Black dashed line shows the mean IOU of the baseline model for _clean_ validation images, and is provided as an upper bound on performance. Numbers in the legend denote averages over all supervision pixel counts. See supplementary for a breakdown.
\begin{table}
\begin{tabular}{l l|c c c c|c} \hline \hline
**Adaptation Signal** & **Method** & **Clean** & **IN-C** & **IN-3DCC** & **IN-V2** & **Rel. Runtime** \\ \hline
\multirow{2}{*}{--} & Pre-adaptation Baseline & 32.83 & 61.66 & 54.97 & 37.15 & 1.00 \\
 & TENT & 24.67 & 64.19 & 47.13 & 37.07 & 5.31 \\ \hline
\multirow{3}{*}{\begin{tabular}{l} Coarse labels \\ (WordNet) \end{tabular}} & Densification & 55.95 & 55.90 & 55.90 & 55.00 & -- \\
 & TTO (online) & 24.72 & **64.20** & **46.30** & 36.70 & 5.72 \\
 & RNA (frozen \(f\)) & **16.72** & 41.21 & **40.37** & **25.53** & **1.39** \\ \hline
\multirow{3}{*}{\begin{tabular}{l} Coarse labels \\ (DINO \(k\)-NN) \end{tabular}} & DINO (\(k\)-NN) & 35.56 & 25.64 & 48.24 & 37.39 & -- \\
 & TTO (online) & 24.59 & 51.59 & 49.18 & 36.69 & 5.72 \\
 & RNA (frozen \(f\)) & 24.36 & 54.86 & 52.29 & 36.88 & **1.39** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Quantitative adaptation results on the ImageNet (IN) classification task.** We evaluate on the clean validation set and ImageNet-{C,3DCC,V2}. We report the average error (%) for the 1000-way classification task over all corruptions and severities. For the coarse labels with WordNet supervision, we use 45 coarse labels. For DINO \(k\)-NN, we set \(k=20\).
Figure 9: **Adaptation results for 3D reconstruction.** Using appropriate adaptation signals from multi-view geometry can recover accurate 3D reconstructions. We report the average \(\ell_{1}\) error between the ground truth 3D coordinates and the estimated ones. The titles above each column refer to the depth model used to get the reconstruction. TTO+MVC corresponds to the predictions after multi-view consistency optimization. It can be seen that RNA and TTO improve the reconstructions over the baselines, with RNA being significantly faster. See supplementary Fig. 4 for more results and the corresponding error maps.
in error. Furthermore, on uncorrupted data, i.e. clean, and ImageNet-V2 [82], RNA gives roughly 10 pp improvement in performance compared to TTO. Thus, coarse supervision provides a useful signal for adaptation while requiring much less effort than full annotation [106]. See supplementary for results on other coarse sets.
_Coarse labels (DINO \(k\)-NN):_ We also show results from using coarse sets generated from DINO \(k\)-NN retrieval. This is shown in the last 3 rows of Tab. 3. Both RNA and TTO use this coarse information to outperform the non-adaptive baselines. However, they do not always outperform TENT, which could be due to the noise in retrieval.
### Ablations and additional results
**Adaptation of other architectures of the main network \(f_{\theta}\).** Previous results in the paper were from adapting \(f_{\theta}\) with a UNet architecture. Here, we study the performance of RNA on other architectures of \(f_{\theta}\), namely, the dense prediction transformer (DPT) [80] for depth and ConvNext [61] for image classification. Table 4 shows the results of incorporating RNA to these architectures. In both cases, RNA is able to improve on the error and runtime of TTO. Thus, RNA can be applied to a range of architectures.
**Controlling for different number of parameters.** We ran a control experiment where all methods have the same architecture, thus, same number of parameters. The results are in Table 5. RNA still returns the best performance. Thus, its improvement over the baselines is not due to a different architecture or number of parameters but due to its test-time adaptation mechanism.
**Implementations of \(h_{\phi}\) with different architectures.** We experiment with different architectures for \(h_{\phi}\), _e.g._, HyperNetworks [33], other FiLM variants, or adapting the input instead of the model parameters. Instead of adding FiLM layers to adapt \(f_{\theta}\) (denoted as FiLM-\(f\)), as described in Sec. 3.1, we also experimented with adding FiLM layers to a UNet model that is trained to update the input image \(x\) (denoted as FiLM-\(x\)). For FiLM-\(x\), only \(x\) is updated and there is no adaptation of \(f_{\theta}\). Lastly, as HyperNetworks [33] have been shown to be expressive and suitable for adaptation, we trained a HyperNetwork, in this case an MLP, to predict the weights of a 3-layer convolutional network that updates \(x\) (denoted as HyperNetwork-\(x\)). The results of adaptation with these variants of RNA are shown in Table 6. The FiLM-\(f\) variant performed best; thus, we adopted it as our main architecture. See supplementary Sec. 2.2 for further details and a conceptual discussion on the trade-offs of the choices of implementing this closed-loop "control" system, namely those that make stronger model-based assumptions.
## 5 Approaches for handling distribution shifts
In this section, we provide a unified discussion about the approaches that aim to handle distribution shifts. Figure 10 gives an overview of how these approaches can be characterized. **Open-loop** systems predict \(y\) by only using their inputs _without receiving feedback_. Training-time robustness methods, image modifications, and multi-modal methods fall into this category. These methods assume the learned model is frozen at the test-time. Thus, they aim to incorporate inductive biases at training time that could be useful against distribution shifts at the test-time. The **closed-loop** systems, on the other hand, are _adaptive_ as they make use of a live error feedback signal that can be computed at test-time from the model predictions and an adaptation signal.
**Model-based vs. Model-free**: The closed-loop systems can be instantiated as _model-based_ or _model-free_ adaptation methods. The former involves making stronger assumptions and performs adaptation by estimating the parameters of _specifically modeled distribution shifts_ (e.g. blur) using the feedback signal. This is depicted as \(e\) in Fig. 10. While this approach often leads to strongly handling the modeled shift and a more interpretable system, conversely it is less likely to generalize to shifts that were not modeled. Our experiments with modeling possible distribution shifts, e.g. the intensity of a noise or blur corruption, did not show significantly better results than the model-free variant, likely for this reason (see supplementary, Sec. 2.2). In contrast, model-free methods do not make explicit assumptions
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline
**Method/Shift** & **None** & **Taskonomy-CC** & **Taskonomy-3DCC** & **Hypersim** & **BlendedMVS** \\ \hline
Pre-adaptation & 0.027 & 0.057 & 0.048 & 0.336 & 3.450 \\
RNA (HyperNetwork-\(x\)) & 0.019 & 0.041 & 0.003 & 0.257 & 2.887 \\
RNA (FiLM-\(x\)) & 0.019 & 0.009 & 0.003 & 0.279 & 2.636 \\
RNA (FiLM-\(f\)) & 0.013 & 0.004 & 0.000 & 0.198 & 2.310 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Implementations of the controller network \(h_{\phi}\) with different architectures. \(\ell_{1}\) errors on the depth estimation task under distribution shifts are reported. The adaptation signal here is masked GT, fixed at 0.05% of valid pixels.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline
**Shift** &
\begin{tabular}{c} **Pre-adaptation** \\ **Baseline** \\ \end{tabular} & **Densification** & **TTO** & **RNA** \\ \hline CC & 0.045 & 0.023 & 0.019 & **0.018** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Controlling for number of parameters. \(\ell_{1}\) errors (multiplied by 100 for readability) on the depth estimation task, evaluated on the Taskonomy test set under a subset of common corruptions. Each method is using the same architecture and number of parameters. The adaptation signal here is masked GT, fixed at 0.05% of valid pixels.
about the distribution shifts and learn to adapt only from the data and based on the error feedback signal. Our proposed method RNA belongs to the model-free approaches, and as we showed in the paper, it generalized to a diverse set of unseen distribution shifts.
**Observations on RNA vs. RMA** [51]. It should be noted that the above observation about model-free vs. model-based approaches and estimating distribution shifts with specific fixed parameters varies based on the domain and problem of interest. For example, in Rapid Motor Adaptation (RMA), Kumar et al. [51] learned to adapt a policy for legged robots from data that simulated a fixed set of relevant environment parameters, such as ground friction, terrain height, etc. They showed that predicting environment shifts grounded in this fixed set of parameters turns out to be sufficient for a robust adaptation generalizable from the simulator to various challenging real-world scenarios. This success _did not duplicate_ for the image recognition problems addressed in this paper despite our attempts. This can be attributed to the lack of a similarly comprehensive simulator and relevant parameter set to sufficiently encapsulate real-world image distortions, as well as possibly the lack of needed inductive biases in the adaptation networks.
**RNA vs. Denoising.** The denoising methods, and in general the methods performing modification in the input image, e.g., domain adaptation methods that aim to map an image in the target domain to the style of the source domain [119], are concerned with reconstructing plausible images _without_ taking the downstream prediction \(x\to y\) into account (shown as gray in Fig. 10). Moreover, it has been shown that imperceptible artifacts in the denoised & modified image could result in degraded predictions [37, 108, 29]. In contrast, RNA performs updates with the goal of reducing the error of the target task.
**Multi-modal methods.** The closed-loop adaptation methods have schematic similarities to multi-modal learning approaches as they simultaneously use multiple input sources: an RGB input image and an adaptation signal. The main distinction is the adaptation methods implement a particular process toward adapting a network to a shift using an adaptation signal from the environment - as opposed to performing a generic learning using multiple inputs.
**Using only the adaptation signal \(z\) vs. error feedback as input to \(h_{\phi}\).** In the case where only the adaptation signal, \(z\), is passed as input, it is possible that the side-network is implicitly modelling an error feedback signal. This is because it is trained alongside the main model (\(x\to y\)), thus, it sees and learns to correct the main model's errors during training. We found that having an error feedback signal as input results in better performance on average, thus, we adopted this as our main method.
## 6 Conclusion and Limitations
We presented RNA, a method for efficient adaptation of neural networks at test-time using a closed-loop formulation. It involves training a side network to use a test-time adaptation signal to adapt a main network. This network acts akin to a "controller" and adapts the main network based on the adaptation signal. We showed that this general and flexible framework can generalize to unseen shifts, and as it only requires a forward pass at test-time, it is orders of magnitude faster than TTO. We evaluated this approach using a diverse set of adaptation signals and target tasks. We briefly discuss the limitations and potential future works:
_Different instantiations of RNA and amortization._ While we experimented with several RNA variants (see the supplementary material for details), further investigation toward stronger instantiations of RNA that can generalize to more shifts and handle more drastic changes, e.g., via building in a more explicit "model" of the shifts and environment (see the discussion of [51]), is important. In general, as the role of the controller network is to amortize the training optimization of the main network, the amortized optimization literature [3] is one apt resource to consult for this purpose.
Figure 10: An overview of methods that aim to handle distribution shifts. **Left:** Open-loop systems predict \(y\) using only their inputs, _without receiving feedback_. The first and most popular examples of open-loop systems are training-time robustness methods (data augmentation, architectural changes, etc.). The next examples are methods that modify the input \(x\), e.g., denoising or style changes, to recover the original image before corruption, independently of \(y\). Furthermore, there are multi-modal methods that use an additional input \(z\). As the learned model is frozen at test time, these methods need to _anticipate_ the distribution shift by incorporating inductive biases at training time (see also Fig. 1 of the main paper). **Right:** In contrast, closed-loop systems make use of their current output, \(y\), and an adaptation signal, \(z\), to form an _error feedback signal_ that can be used to update their predictions. Thus, they _adapt_ to the shifts as they occur. We can further group closed-loop systems into model-based and model-free methods. The former perform adaptation by estimating the parameters \(e\) of specific modeled distribution shift families, while the latter perform adaptation in a data-driven way without explicitly modeling certain distribution shifts. Adaptation can be performed by running an optimization, i.e., TTO via SGD, or via amortization, i.e., training a side controller network to predict TTO updates that minimize the error feedback. Our proposed method, RNA, belongs to the model-free adaptation approaches and makes use of amortization for efficiency.
_Hybrid mechanism for activating TTO in RNA._ TTO constantly adapts a model to a distribution shift; hence, in theory, it can adapt to any shift despite being comparatively inefficient. To get the best of both worlds, investigating mechanisms for selectively activating TTO within RNA when needed could be useful.
_Finding adaptation signals for a given task._ While the focus of this study was not on developing new adaptation signals, we demonstrated useful ones for several core vision tasks, and there are many more. Finding these signals requires either knowledge of the target task, so that a meaningful signal can be engineered accordingly, or core theoretical work on understanding how proxy and target objectives can be "aligned" for training.
**Acknowledgement:** The authors would like to thank Onur Beker. This work was partially supported by the ETH4D and EPFL EssentialTech Centre Humanitarian Action Challenge Grant.
|
2309.04860 | Approximation Results for Gradient Descent trained Neural Networks | The paper contains approximation guarantees for neural networks that are
trained with gradient flow, with error measured in the continuous
$L_2(\mathbb{S}^{d-1})$-norm on the $d$-dimensional unit sphere and targets
that are Sobolev smooth. The networks are fully connected of constant depth and
increasing width. Although all layers are trained, the gradient flow
convergence is based on a neural tangent kernel (NTK) argument for the
non-convex second but last layer. Unlike standard NTK analysis, the continuous
error norm implies an under-parametrized regime, possible by the natural
smoothness assumption required for approximation. The typical
over-parametrization re-enters the results in form of a loss in approximation
rate relative to established approximation methods for Sobolev smooth
functions. | G. Welper | 2023-09-09T18:47:55Z | http://arxiv.org/abs/2309.04860v1 | # Approximation Results for Gradient Descent trained Neural Networks
###### Abstract
The paper contains approximation guarantees for neural networks that are trained with gradient flow, with error measured in the continuous \(L_{2}(\mathbb{S}^{d-1})\)-norm on the \(d\)-dimensional unit sphere and targets that are Sobolev smooth. The networks are fully connected of constant depth and increasing width. Although all layers are trained, the gradient flow convergence is based on a neural tangent kernel (NTK) argument for the non-convex second but last layer. Unlike standard NTK analysis, the continuous error norm implies an under-parametrized regime, made possible by the natural smoothness assumption required for approximation. The typical over-parametrization re-enters the results in the form of a loss in approximation rate relative to established approximation methods for Sobolev smooth functions.
**Keywords:** deep neural networks, approximation, gradient descent, neural tangent kernel
**AMS subject classifications:** 41A46, 65K10, 68T07
###### Contents
* 1 Introduction
* 2 Main Result
* 2.1 Notations
* 2.2 Setup
* 2.3 Result
* 3 Coercivity of the NTK
* 4 Proof Overview
* 4.1 Preliminaries
* 4.1.1 Neural Tangent Kernel
* 4.1.2 Norms
* 4.1.3 Neural Networks
* 4.2 Abstract Convergence result
* 4.3 Assumption (20): Holder continuity
* 4.4 Assumption (19): Concentration
* 4.5 Assumption (17): Weights stay Close to Initial
* 5 Proof of the Main Result
* 5.1 Proof of Lemma 4.2: Generalized Convergence
* 5.2 Proof of Lemma 4.3: NTK Holder continuity
* 5.3 Proof of Lemma 4.4: Concentration
* 5.3.1 Concentration of the Last Layer
* 5.3.2 Perturbation of Covariances
* 5.3.3 Concentration of the NTK
* 5.4 Proof of Lemma 4.5: Weights stay Close to Initial
* 5.5 Proof of Theorem 2.1: Main Result
* 6 Technical Supplements
* 6.1 Holder Spaces
* 6.2 Concentration
* 6.3 Hermite Polynomials
* 6.4 Sobolev Spaces on the Sphere
* 6.4.1 Definition and Properties
* 6.4.2 Kernel Bounds
* 6.4.3 NTK on the Sphere
## 1 Introduction
Direct approximation results for a large variety of methods, including neural networks, are typically of the form
\[\inf_{\theta}\|f_{\theta}-f\|\leq n(\theta)^{-r},\ \ \ f\in K. \tag{1}\]
I.e., a target function \(f\) is approximated by an approximation method \(f_{\theta}\), parametrized by some degrees of freedom or weights \(\theta\), up to a rate \(n(\theta)^{-r}\), where \(n(\theta)\) measures the richness of the approximation method, such as width, depth or the number of weights for neural networks. Generally, the approximation rate can be arbitrarily slow unless the target \(f\) is contained in some compact set \(K\), which depends on the approximation method and application and is typically a unit ball in a Sobolev, Besov, Barron or other normed smoothness space. Such results are well established for a variety of neural network architectures and compact sets \(K\); however, these results rarely address how to practically compute the infimum in the formula above and instead use hand-picked weights.
On the other hand, the neural network optimization literature, typically considers discrete error norms (or losses)
\[\|f_{\theta}-f\|_{*}:=\left(\frac{1}{n}\sum_{i=1}^{n}|f_{\theta}(x_{i})-f(x_{i})| ^{2}\right)^{1/2},\]
together with neural networks that are _over-parametrized_, i.e. for which the number of weights is larger than the number of samples \(n\) so that they can achieve zero training error
\[\inf_{\theta}\|f_{\theta}-f\|_{*}=0,\]
rendering the approximation question obsolete. In contrast, approximation theory measures the error in continuous norms that emerge in the sample \(n\to\infty\) limit, where the problem is necessarily _under-parametrized_.
This paper contains approximation results of type (1) for fully connected networks that are trained with gradient flow and therefore avoids the question of how to compute the infimum in (1). The outline of the proof follows the typical neural tangent kernel (NTK) argument: We show that the empirical NTK is close to the infinite width NTK and that the NTK does not change too much during training. The main differences from the standard analysis are:
1. Due to the under-parametrization, the eigenvalues of the NTK are not lower bounded away from zero. Instead we require that the NTK is coercive in a negative Sobolev norm.
2. We show that the gradient flow networks are uniformly bounded in positive Sobolev norms.
3. The coercivity in negative Sobolev smoothness and the uniform bounds of positive Sobolev smoothness allow us to derive \(L_{2}\) error bounds by interpolation inequalities.
4. All perturbation and concentration estimates are carried out in function space norms. In particular, the concentration results need some careful consideration and are proven by chaining arguments.
The NTK is a sum of positive matrices, from which we only use the contribution from the second but last layer to drive down the error, while all other layers are trained but estimated only by a perturbation analysis. The coercivity assumption on the NTK is not shown in this paper. It is known for ReLU activations, but we require smoother activations and only provide a preliminary numerical test, leaving a rigorous analysis of the resulting NTK for future work.
The proven approximation rates are lower than finite element, wavelet or spline rates under the same smoothness assumptions. This seems to be a variant of the over-parametrization in the usual NTK arguments: the networks need some redundancy in their degrees of freedom to aid the optimization.
**Paper Organization.** The paper is organized as follows. Section 2.2 defines the neural networks and training procedures and Section 2.3 contains the main result. The coercivity of the NTK is discussed in Section 3. The proof is split into two parts. Section 4 provides an overview and all major lemmas. The proofs of these lemmas and further details are provided in Section 5. Finally, to keep the paper self contained, Section 6 contains several facts from the literature.
**Literature Review.**
* _Approximation:_ Some recent surveys are given in [53, 15, 69, 8]. Most of the results prove direct approximation guarantees as in (1) for a variety of classes \(K\) and network architectures. They show state of the art or even superior performance of neural networks, but typically do not provide training methods and rely on hand-picked weights, instead.
* Results for classical _Sobolev_ and _Besov regularity_ are in [25, 27, 50, 44, 64].
* [72, 73, 74, 14, 57, 47] show better than classical approximation rates for Sobolev smoothness. Since classical methods are optimal (with regard to nonlinear width and entropy), this implies that the weight assignment \(f\to\theta\) must be discontinuous.
* Function classes that are specifically tailored to neural networks are _Barron spaces_ for which approximation results are given in [5, 37, 70, 46, 58, 59, 10].
* Many papers address specialized function classes [56, 54], often from applications like PDEs [39, 52, 40, 48]. Besides approximation guarantees (1) many of the above papers also discuss limitations of neural networks, for more information see [20].
* _Optimization:_ We confine the literature overview to neural tangent kernel based approaches, which are most relevant to this paper. The NTK is introduced in [32], and similar arguments together with convergence and perturbation analysis appear simultaneously in [45, 2, 19, 18]. Related optimization ideas are further developed in many papers, including [75, 4, 43, 62, 76, 36, 13, 51, 49, 6, 61, 41]. In particular, [3, 63, 34, 12] refine the analysis based on expansions of the target \(f\) in the NTK eigenbasis and are closely related to the arguments in this paper, with the major difference that they rely on the typical over-parametrized regime, whereas we solely rely on smoothness. The papers [23, 28, 21, 42, 55, 68] discuss to what extent the linearization approach of the NTK can describe real neural network training. Characterizations of the NTK are fundamental for this paper and are given in [9, 22, 35, 11]. Convergence analysis for optimizing NTK models directly is in [65, 66].
* _Approximation and Optimization:_ Since the approximation question is under-parametrized and the optimization literature largely relies on over-parametrization, there is little work on optimization methods for approximation. The gap between approximation theory and practice is considered in [1, 26]. The previous paper [24] contains comparable results for \(1d\) shallow networks. Similar approximation results for gradient flow trained shallow \(1d\) networks are in [33, 31], with slightly different assumptions on the target \(f\), a more general probability weighted \(L_{2}\) loss and an alternative proof technique. Other approximation and optimization guarantees rely on alternative optimizers: [60, 29] use greedy methods and [30] uses a two step procedure involving a classical and a subsequent neural network approximation. \(L_{2}\) error bounds are also proven as part of generalization error bounds for statistical estimation. E.g., the papers [17, 38] show generalization errors for parallel fully connected networks in over-parametrized regimes with Holder continuity.
## 2 Main Result
### Notations
* \(\lesssim\), \(\gtrsim\), \(\sim\) denote less than, greater than and equivalence up to a constant that can change in every occurrence and is independent of smoothness and the number of weights. It can depend on the number of layers \(L\) and the input dimension \(d\). Likewise, \(c\) is a generic constant that can be different in each occurrence.
* \([n]:=\{1,\ldots,n\}\)
* \(\lambda=ij;\ell\) is the index of the weight \(W_{\lambda}:=W_{ij}^{\ell}\) with \(|\lambda|:=\ell\). Likewise, we set \(\partial_{\lambda}=\frac{\partial}{\partial W_{\lambda}}\).
* \(\odot\): Element wise product
* \(A_{i.}\) and \(A_{.j}\) are \(i\)th row and \(j\)th column of matrix \(A\), respectively.
### Setup
**Neural Networks.** We train fully connected deep neural networks without bias and with a few modifications: the first and last layers remain untrained, we use gradient flow instead of (stochastic) gradient descent, and the first layer remains unscaled. For \(x\) in some bounded domain \(D\subset\mathbb{R}^{d}\), the networks are defined by
\[f^{1}(x) =W^{0}Vx, \tag{2}\] \[f^{\ell+1}(x) =W^{\ell}n_{\ell}^{-1/2}\sigma\left(f^{\ell}(x)\right),\;\;\;\ell =1,\ldots,L\] \[f(x) =f^{L+1}(x),\]
which we abbreviate by \(f^{\ell}=f^{\ell}(x)\) if \(x\) is unimportant or understood from context. The weights are initialized as follows
\[\begin{aligned} W^{L+1}&\in\{-1,+1\}^{1\times n_{L+1}} &&\text{i.i.d. Rademacher,} &&\text{not trained},\\ W^{\ell}&\in\mathbb{R}^{n_{\ell+1}\times n_{\ell}},\ \ell\in[L] &&\text{i.i.d. standard normal,} &&\text{trained},\\ V&\in\mathbb{R}^{n_{0}\times d} &&\text{orthogonal columns }V^{T}V=I, &&\text{not trained},\end{aligned}\]
all trained by gradient flow, except for the last layer \(W^{L+1}\) and the first matrix \(V\), which is pre-chosen with orthonormal columns. All layers have the conventional \(1/\sqrt{n_{\ell}}\) scaling, except for the first; this ensures that the NTK is of unit size on the diagonal and is common in the literature [18, 9, 22, 11]. We also require that the layers are of similar size, except for the last one, which ensures scalar valued output of the network
\[m:=n_{L-1}, 1=n_{L+1}\leq n_{L}\sim\dots\sim n_{0}\geq d.\]
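For concreteness, a forward pass through (2) with these initializations can be sketched in a few lines of NumPy. The widths, depth and the tanh activation below are illustrative assumptions, not the choices of the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, L = 3, 3
n = [32, 32, 32, 32]                          # n_0, ..., n_L; output width 1

V, _ = np.linalg.qr(rng.standard_normal((n[0], d)))            # V^T V = I, frozen
W = [rng.standard_normal((n[l + 1], n[l])) for l in range(L)]  # trained layers
W.append(rng.choice([-1.0, 1.0], size=(1, n[L])))              # Rademacher, frozen

def forward(x, sigma=np.tanh):
    f = W[0] @ (V @ x)                        # f^1 = W^0 V x, first layer unscaled
    for l in range(1, L + 1):
        f = W[l] @ (sigma(f) / np.sqrt(n[l]))  # f^{l+1} = W^l n_l^{-1/2} sigma(f^l)
    return f.item()

x = rng.standard_normal(d)
x /= np.linalg.norm(x)                        # input on the sphere S^{d-1}
print(forward(x))
```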
**Activation Functions.** We require comparatively smooth activation functions that have no more than linear growth
\[|\sigma\left(x\right)|\lesssim|x|, \tag{3}\]
uniformly bounded first and second derivatives
\[|\sigma^{(i)}(x)|\lesssim 1,\qquad i=1,2,\qquad x\in\mathbb{R} \tag{4}\]
and continuous derivatives up to fourth order with at most polynomial growth
\[|\sigma^{(i)}(x)|\leq p(x),\qquad i=0,1,2,3,4 \tag{5}\]
for some polynomial \(p\) and all \(x\in\mathbb{R}\).
**Training.** We wish to approximate a function \(f\in L_{2}(D)\) by neural networks and therefore use the \(L_{2}(D)\) norm for the loss function
\[\mathcal{L}(\theta):=\frac{1}{2}\|f_{\theta}-f\|_{L_{2}(D)}^{2}.\]
In the usual split into approximation and estimation errors in the machine learning literature, this corresponds to the former. It can also be understood as an infinite sample limit of the mean squared loss. This implies that we perform the convergence analysis in an under-parametrized regime, different from the bulk of the neural network optimization literature, which typically relies on over-parametrization.
For simplicity, we optimize the loss by gradient flow
\[\frac{d}{dt}\theta=-\nabla\mathcal{L}(\theta) \tag{6}\]
and not gradient descent or stochastic gradient descent.
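As a rough illustration only (the analysis works with the exact flow), the gradient flow can be mimicked by explicit Euler steps on a Monte Carlo discretization of the \(L_{2}(\mathbb{S}^{d-1})\) loss. The shallow architecture, target and step size in this sketch are arbitrary choices.

```python
import torch

torch.manual_seed(0)
d, m, steps, dt = 3, 256, 2000, 1e-1
X = torch.nn.functional.normalize(torch.randn(4096, d), dim=1)  # nodes on S^2
f_target = X[:, 0] * X[:, 1]                   # a smooth target function

W = torch.randn(m, d, requires_grad=True)      # trained hidden layer
a = torch.randint(0, 2, (m,)).float() * 2 - 1  # frozen Rademacher output layer

for _ in range(steps):
    f = (torch.tanh(X @ W.T) / m ** 0.5) @ a   # shallow instance of (2)
    loss = 0.5 * ((f - f_target) ** 2).mean()  # Monte Carlo L_2(S^{d-1}) loss
    loss.backward()
    with torch.no_grad():
        W -= dt * W.grad                       # Euler step of the flow (6)
        W.grad = None
print(float(loss))
```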
**Smoothness.** Since we are in an under-parametrized regime, we require smoothness of \(f\) to guarantee meaningful convergence bounds. In this paper, we use Sobolev spaces \(H^{\alpha}(\mathbb{S}^{d-1})\) on the sphere \(D=\mathbb{S}^{d-1}\), with norms and scalar products denoted by \(\|\cdot\|_{H^{\alpha}(\mathbb{S}^{d-1})}\) and \(\langle\cdot,\cdot\rangle_{H^{\alpha}(\mathbb{S}^{d-1})}\). We drop the explicit reference to the domain \(\mathbb{S}^{d-1}\) when convenient. Definitions and required properties are summarized in Section 6.4.1.
**Neural Tangent Kernel.** The analysis is based on the neural tangent kernel, which for the time being, we informally define as
\[\Gamma(x,y)=\lim_{\mathrm{width}\to\infty}\sum_{|\lambda|=L-1}\partial_{\lambda}f^{L+1}(x)\,\partial_{\lambda}f^{L+1}(y). \tag{7}\]
The rigorous definition is in (11), based on a recursive formula as in [32]. Our definition differs slightly from the standard version because we only include weights from layer \(|\lambda|=L-1\). We require that it is coercive in Sobolev norms
\[\left\langle f,\int_{D}\Gamma(\cdot,y)f(y)\,dy\right\rangle_{H^{S}(\mathbb{S}^{d-1})}\gtrsim\|f\|_{H^{S-\beta}(\mathbb{S}^{d-1})}^{2} \tag{8}\]
for some \(0\leq\alpha\leq\frac{\beta}{2}\), \(S\in\{-\alpha,\alpha\}\) and all \(f\in H^{\alpha}(\mathbb{S}^{d-1})\). For ReLU activations and regular NTK, including all layers, this property easily follows from [9, 22, 11] as shown in Lemma 3.2. However, our convergence theory requires smoother activations and therefore Section 3 provides some numerical evidence, while a rigorous analysis is left for future research.
The paper [32] provides a recursive formula for the NTK, which in our simplified case reduces to
\[\Gamma(x,y)=\dot{\Sigma}^{L}(x,y)\Sigma^{L-1}(x,y),\]
where \(\dot{\Sigma}^{L}(x,y)\) and \(\Sigma^{L-1}(x,y)\) are the covariances of two Gaussian processes that characterize the forward evaluation of the networks \(W^{L}n_{L}^{-1/2}\dot{\sigma}\left(f^{L}\right)\) and \(f^{L-1}\) in the infinite width limit, see Section 4.1.1 for their rigorous definition. We require that
\[0<c_{\Sigma}\leq\Sigma^{k}(x,x)\leq C_{\Sigma}, \tag{9}\]
for all \(x\in D\), \(k=1,\ldots,L\) and constants \(c_{\Sigma},C_{\Sigma}>0\). As we will see in Section 3, the kernels are zonal, i.e. they only depend on \(x^{T}y\). Hence, with a slight abuse of notation, (9) simplifies to \(\Sigma^{k}(x,x)=\Sigma^{k}(x^{T}x)=\Sigma^{k}(1)\neq 0\). In fact, for ReLU activation (which is not sufficiently differentiable for our results) the paper [11] shows \(\Sigma^{k}(x,x)=1\).
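Since the recursion only involves one- and two-dimensional Gaussian expectations, the limit kernel can be evaluated numerically, e.g. by Gauss-Hermite quadrature. The following NumPy sketch (tanh activation and depth \(L=3\) as illustrative assumptions) computes \(\Gamma(x,y)\) from (11):

```python
import numpy as np

nodes, wts = np.polynomial.hermite_e.hermegauss(40)  # probabilists' Hermite
wts = wts / wts.sum()                                # weights for N(0, 1)
U2, V2 = np.meshgrid(nodes, nodes, indexing="ij")
W2 = (wts[:, None] * wts[None, :]).ravel()
U2, V2 = U2.ravel(), V2.ravel()

def e1(f, var):                     # E[f(u)] for u ~ N(0, var)
    return (wts * f(np.sqrt(var) * nodes)).sum()

def e2(f, Sxx, Sxy, Syy):           # E[f(u) f(v)] for (u, v) ~ N(0, A)
    A = np.array([[Sxx, Sxy], [Sxy, Syy]]) + 1e-12 * np.eye(2)
    Lc = np.linalg.cholesky(A)
    return (W2 * f(Lc[0, 0] * U2) * f(Lc[1, 0] * U2 + Lc[1, 1] * V2)).sum()

sig, dsig = np.tanh, lambda t: 1.0 / np.cosh(t) ** 2

def ntk(x, y, L=3):
    Sxx, Sxy, Syy = float(x @ x), float(x @ y), float(y @ y)   # Sigma^0
    for _ in range(L - 1):                                     # up to Sigma^{L-1}
        Sxy_new = e2(sig, Sxx, Sxy, Syy)
        Sxx, Syy = e1(lambda u: sig(u) ** 2, Sxx), e1(lambda u: sig(u) ** 2, Syy)
        Sxy = Sxy_new
    return e2(dsig, Sxx, Sxy, Syy) * Sxy   # Gamma = Sigma_dot^L * Sigma^{L-1}

x, y = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(ntk(x, y))
```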
### Result
We are now ready to state the main result of the paper.
**Theorem 2.1**.: _Assume that the neural network (2) - (5) is trained by gradient flow (6). Let \(\kappa(t):=f_{\theta(t)}-f\) be the residual and assume:_
1. _The NTK satisfies coercivity (_8_) for some_ \(0\leq\alpha\leq\frac{\beta}{2}\) _and the forward process satisfies (_9_)._
2. _All hidden layers are of similar size:_ \(n_{0}\sim\cdots\sim n_{L-1}=:m\)_._
3. _Smoothness is bounded by_ \(0<\alpha<1/2\)_._
4. \(0<\gamma<1-\alpha\) _is an arbitrary number (used for Holder continuity of the NTK in the proof)._
5. _For_ \(\tau\) _specified below,_ \(m\) _is sufficiently large so that_ \[\|\kappa(0)\|_{-\alpha}^{\frac{1}{2}}\|\kappa(0)\|_{\alpha}^{\frac{1}{2}}m^{- \frac{1}{2}}\lesssim 1,\qquad\qquad\frac{cd}{m}\leq 1,\qquad\qquad\frac{\tau}{m} \leq 1.\]
_Then with probability at least \(1-cL(e^{-m}+e^{-\tau})\) we have_
\[\|\kappa(t)\|_{L_{2}(\mathbb{S}^{d-1})}^{2}\lesssim\left[h^{\frac{\beta\gamma }{\beta-\alpha}}\|\kappa(0)\|_{H^{\alpha}(\mathbb{S}^{d-1})}^{\frac{\beta}{ \alpha}}+\|\kappa(0)\|_{H^{-\alpha}(\mathbb{S}^{d-1})}^{\frac{\beta}{\alpha}} e^{-ch^{\frac{\beta\gamma}{\beta-\alpha}}\frac{\beta}{2\alpha}t}\right]^{\frac{ \alpha}{\beta}}\|\kappa(0)\|_{H^{\alpha}(\mathbb{S}^{d-1})} \tag{10}\]
_for some \(h\) with_
\[h\lesssim\max\left\{\left[\frac{\|\kappa(0)\|_{H^{-\alpha}(\mathbb{S}^{d-1})} ^{\frac{1}{2}}\|\kappa(0)\|_{H^{\alpha}(\mathbb{S}^{d-1})}^{\frac{1}{2}}}{ \sqrt{m}}\right]^{\frac{\beta-\alpha}{\beta(1+\gamma)-\alpha}},\,c\sqrt{\frac {d}{m}}\right\},\quad\tau=h^{2\gamma}m\]
_and generic constant \(c\geq 0\), dependent on smoothness \(\alpha\), depth \(L\) and dimension \(d\), independent of width \(m\) and residual \(\kappa\)._
All assumptions are easy to verify, except for the coercivity of the NTK (8) and the bounds (9) of the forward kernel, which we discuss in the next section. The error bound (10) consists of two summands, only one of which depends on the gradient flow time \(t\). For large \(t\), it converges to zero and we are left with the first error term. This results in the following corollary, which provides a direct approximation result of type (1) for the outcome of gradient flow training.
**Corollary 2.2**.: _Let all assumptions of Theorem 2.1 be satisfied. Then for \(m\) sufficiently large, with high probability (both as in Theorem 2.1), we have_
\[\|\kappa\|_{L_{2}(\mathbb{S}^{d-1})} \lesssim\max\left\{\left[\frac{C(\kappa(0))}{m}\right]^{\frac{1} {4}\frac{\alpha\gamma}{\beta(1+\gamma)-\alpha}},\,\left[\frac{d}{m}\right]^{ \frac{1}{4}\frac{\alpha\gamma}{\beta-\alpha}}\right\}\|\kappa(0)\|_{H^{ \alpha}(\mathbb{S}^{d-1})},\] \[C(\kappa(0)) =\|\kappa(0)\|_{H^{-\alpha}(\mathbb{S}^{d-1})}\|\kappa(0)\|_{H^ {\alpha}(\mathbb{S}^{d-1})}\]
_where \(\kappa:=f_{\theta(t)}-f\) is the gradient flow residual for sufficiently large time \(t\)._
For traditional approximation methods, one would expect a convergence rate of \(m^{-\alpha/d}\) for functions in the Sobolev space \(H^{\alpha}\). Our rates are lower, which seems to be a variation of over-parametrization in disguise: in the over-parametrized as well as in our approximation regime, the optimizer analysis seems to require some redundancy and thus more weights than necessary for the approximation alone. Of course, we only provide upper bounds and practical neural networks may perform better. Some preliminary experiments in [24] show that shallow networks in one dimension outperform the theoretical bounds but are still worse than classical approximation theory would suggest. In addition, the linearization argument of the NTK results in smoothness measures in Hilbert spaces \(H^{\alpha}\) and not in larger \(L_{p}\) based smoothness spaces with \(p<2\) or even Barron spaces, as is common for nonlinear approximation.
_Remark 2.3_.: Although Theorem 2.1 and Corollary 2.2 seem to show dimension independent convergence rates, they are not dimension independent. Indeed, \(\beta\) depends on the dimension and the smoothness of the activation function, as we see in Section 3 and Lemma 3.2.
## 3 Coercivity of the NTK
While most assumptions of Theorem 2.1 are easy to verify, the coercivity (8) is less clear. This section contains some results for the NTK \(\Gamma(x,y)\) of this paper, which only considers the second but last layer, as well as for the regular NTK defined by the infinite width limit
\[\Theta(x,y)=\lim_{\text{width}\to\infty}\sum_{\lambda}\partial_{\lambda}f^{L+1}(x)\,\partial_{\lambda}f^{L+1}(y)\]
of all layers. Coercivity easily follows once we understand the NTK's spectral decomposition. To this end, first note that \(\Gamma(x,y)\) and \(\Theta(x,y)\) are both zonal kernels, i.e. they only depend on \(x^{T}y\), and as a consequence their eigenfunctions are spherical harmonics.
**Lemma 3.1** ([22, Lemma 1]).: _The eigenfunctions of the kernels \(\Gamma(x,y)\) and \(\Theta(x,y)\) on the sphere with uniform measure are spherical harmonics._
Proof.: See [22, Lemma 1] and the discussion thereafter.
Hence, it is sufficient to show lower bounds for the eigenvalues. These are provided in [9, 22, 11] under slightly different assumptions than required in this paper:
1. They use all layers \(\Theta(x,y)\) instead of only the second but last one in \(\Gamma(x,y)\). (The reference [18] does consider \(\Gamma(x,y)\) and shows that the eigenvalues are strictly positive in the over-parametrized regime with discrete loss and non-degenerate data.)
2. They use bias, whereas we don't. We can however easily introduce bias into the first layer by the usual technique to incorporate one fixed input component \(x_{0}=1\).
3. The cited papers use ReLU activations, which do not satisfy the derivative smoothness requirements (4) and (5).
Nevertheless, with these modified assumptions, it is easy to derive coercivity from the NTK's RKHS characterization in [9, 22, 11].
**Lemma 3.2**.: _Let \(\Theta(x,y)\) be the neural tangent kernel for a fully connected neural network with bias on the sphere \(\mathbb{S}^{d-1}\) with \(\operatorname{ReLU}\) activation. Then for any \(\alpha\in\mathbb{R}\)_
\[\left\langle f,L_{\Theta}f\right\rangle_{H^{\alpha}(\mathbb{S}^{d-1})}\gtrsim \left\|f\right\|_{H^{\alpha-d/2}(\mathbb{S}^{d-1})}^{2},\]
_where \(L_{\Theta}\) is the integral operator with kernel \(\Theta(x,y)\)._
The proof is given at the end of Section 6.4.3. Note that this implies \(\beta=d/2\) and thus Theorem 2.1 cannot be expected to be dimension independent. In fact, due to smoother activations, the kernel \(\Gamma(x,y)\) is expected to be more smoothing than \(\Theta(x,y)\), resulting in a faster decay of the eigenvalues and a larger \(\beta\). This leads to Sobolev coercivity (Lemmas 6.18 and 3.2) as long as the decay is polynomial, which we only verify numerically in this paper, as shown in Figure 1 for \(n=100\) uniform samples on the \(d=2\) dimensional sphere and \(L-1=1\) hidden layers of width \(m=1000\). The plot uses log-log axes so that straight lines represent polynomial decay. As expected, ReLU and ELU activations show polynomial decay, with a higher order for the smoother ELU. For comparison, the \(C^{\infty}\) activation GELU seems to show super polynomial decay. However, the results are preliminary and have to be considered carefully:
1. The oscillations at the end are for eigenvalues of size \(\sim 10^{-7}\), which is machine accuracy for floating point numbers.
2. Most eigenvalues are smaller than the difference between the empirical NTK and the actual NTK. For comparison, the difference between two randomly sampled empirical NTKs (in matrix norm) is: ReLU: 0.280, ELU: 0.524, GELU: 0.262.
3. According to [9], for shallow networks without bias, every other eigenvalue of the NTK should be zero. This is not clear from the experiments (which do not use bias, but have one more layer), likely because of the large errors in the previous item.
4. The errors should be smaller for wider hidden layers, but since the networks involve dense matrices, their size quickly becomes substantial.
In conclusion, the experiments show the expected polynomial decay of NTK eigenvalues for activations with singularities in higher derivatives, but the results have to be regarded with care.
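A rough sketch of how such an experiment can be reproduced (our reconstruction, not the original script; GELU is omitted to avoid an extra dependency): sample points on \(\mathbb{S}^{1}\), assemble the empirical NTK via the product formula of Lemma 4.1 below, and plot the sorted spectrum on log-log axes.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n, m, d = 100, 1000, 2
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
X = np.stack([np.cos(t), np.sin(t)], axis=1)        # n sample points on S^1

acts = {
    "ReLU": (lambda x: np.maximum(x, 0.0), lambda x: (x > 0).astype(float)),
    "ELU": (lambda x: np.where(x > 0, x, np.expm1(x)),
            lambda x: np.where(x > 0, 1.0, np.exp(x))),
}

for name, (sig, dsig) in acts.items():
    F1 = X @ rng.standard_normal((m, d)).T          # pre-activations f^1
    F2 = (sig(F1) / np.sqrt(m)) @ rng.standard_normal((m, m)).T  # f^2
    G = (dsig(F2) @ dsig(F2).T / m) * (sig(F1) @ sig(F1).T / m)  # empirical NTK
    ev = np.sort(np.linalg.eigvalsh(G))[::-1]
    plt.loglog(np.arange(1, n + 1), np.maximum(ev, 1e-16), label=name)

plt.xlabel("index"); plt.ylabel("eigenvalue"); plt.legend(); plt.show()
```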
## 4 Proof Overview
### Preliminaries
#### 4.1.1 Neural Tangent Kernel
In this section, we recall the definition of the neural tangent kernel (NTK) and set up notation for its empirical variants. Our definition differs slightly from the literature because we only use the last hidden layer (weights \(W^{L-1}\)) to reduce the loss, whereas all other layers are trained but only estimated by a perturbation analysis. Throughout the paper, we only need the definitions as stated, not that they are the infinite width limit of the network derivatives as stated in (7), although we sometimes refer to this for motivation.
As usual, we start with the recursive definition of the covariances
\[\Sigma^{\ell+1}(x,y):=\mathbb{E}_{u,\,v\sim\mathcal{N}(0,A)}\left[\sigma\left(u\right)\sigma\left(v\right)\right],\ \ \ A=\begin{bmatrix}\Sigma^{\ell}(x,x)&\Sigma^{\ell}(x,y)\\ \Sigma^{\ell}(y,x)&\Sigma^{\ell}(y,y)\end{bmatrix},\ \ \ \Sigma^{0}(x,y)=x^{T}y,\]
which define a Gaussian process that is the infinite width limit of the forward evaluation of the hidden layer \(f^{\ell}(x)\), see [32]. Likewise, we define
\[\dot{\Sigma}^{\ell+1}(x,y):=\mathbb{E}_{u,\,v\sim\mathcal{N}(0,A)}\left[\dot{\sigma}\left(u\right)\dot{\sigma}\left(v\right)\right],\ \ \ \ \ A=\begin{bmatrix}\Sigma^{\ell}(x,x)&\Sigma^{\ell}(x,y)\\ \Sigma^{\ell}(y,x)&\Sigma^{\ell}(y,y)\end{bmatrix},\]
where the activation function of the last layer is exchanged for its derivative. Then the neural tangent kernel (NTK) is defined by
\[\Gamma(x,y):=\dot{\Sigma}^{L}(x,y)\Sigma^{L-1}(x,y). \tag{11}\]
The paper [32] shows that all three definitions above are infinite width limits of the corresponding empirical processes (denoted with an extra hat \(\hat{\cdot}\))
\[\begin{split}\hat{\Sigma}^{\ell}(x,y):=&\ \frac{1}{n_{\ell}}\sum_{r=1}^{n_{\ell}}\sigma\left(f_{r}^{\ell}(x)\right)\sigma\left(f_{r}^{\ell}(y)\right)=\frac{1}{n_{\ell}}\sigma\left(f^{\ell}(x)\right)^{T}\sigma\left(f^{\ell}(y)\right),\\ \dot{\hat{\Sigma}}^{\ell}(x,y):=&\ \frac{1}{n_{\ell}}\sum_{r=1}^{n_{\ell}}\dot{\sigma}\left(f_{r}^{\ell}(x)\right)\dot{\sigma}\left(f_{r}^{\ell}(y)\right)=\frac{1}{n_{\ell}}\dot{\sigma}\left(f^{\ell}(x)\right)^{T}\dot{\sigma}\left(f^{\ell}(y)\right)\end{split} \tag{12}\]
\[\hat{\Gamma}(x,y):=\sum_{|\lambda|=L-1}\partial_{\lambda}f^{L+1}(x)\,\partial_{\lambda}f^{L+1}(y).\]
Figure 1: Eigenvalues of the NTK \(\Gamma(x,y)\) for different activation functions.
Note that unlike the usual definition of the NTK, we only include weights from the second but last layer. Formally, we do not show that \(\Sigma^{\ell}\), \(\dot{\Sigma}^{\ell}\) and \(\Gamma\) arise as infinite width limits of the empirical versions \(\hat{\Sigma}^{\ell}\), \(\dot{\hat{\Sigma}}^{\ell}\) and \(\hat{\Gamma}\), but rather prove concentration inequalities between them.
The next lemma shows that the empirical kernels satisfy the same identity (11) as their limits.
**Lemma 4.1**.: _Assume that \(W_{ij}^{L}\in\{-1,+1\}\). Then_
\[\hat{\Gamma}(x,y)=\dot{\hat{\Sigma}}^{L}(x,y)\hat{\Sigma}^{L-1}(x,y).\]
Proof.: By the definitions of \(f^{L+1}\) and \(f^{L}\), we have
\[\begin{split}\partial_{W_{ij}^{L-1}}f^{L+1}&=\sum_{r=1}^{n_{L}}W_{.r}^{L}n_{L}^{-1/2}\partial_{W_{ij}^{L-1}}\sigma\left(f_{r}^{L}\right)\\ &=\sum_{r=1}^{n_{L}}W_{.r}^{L}n_{L}^{-1/2}\dot{\sigma}\left(f_{r}^{L}\right)\partial_{W_{ij}^{L-1}}f_{r}^{L}\\ &=\sum_{r=1}^{n_{L}}W_{.r}^{L}n_{L}^{-1/2}\dot{\sigma}\left(f_{r}^{L}\right)\delta_{ir}n_{L-1}^{-1/2}\sigma\left(f_{j}^{L-1}\right)\\ &=W_{.i}^{L}n_{L}^{-1/2}n_{L-1}^{-1/2}\dot{\sigma}\left(f_{i}^{L}\right)\sigma\left(f_{j}^{L-1}\right).\end{split}\]
It follows that
\[\begin{split}\hat{\Gamma}(x,y)&=\sum_{i=1}^{n_{L}}\sum_{j=1}^{n_{L-1}}\partial_{W_{ij}^{L-1}}f^{L+1}(x)\,\partial_{W_{ij}^{L-1}}f^{L+1}(y)\\ &=\frac{1}{n_{L}}\sum_{i=1}^{n_{L}}\frac{1}{n_{L-1}}\sum_{j=1}^{n_{L-1}}\left|W_{.i}^{L}\right|^{2}\dot{\sigma}\left(f_{i}^{L}(x)\right)\dot{\sigma}\left(f_{i}^{L}(y)\right)\sigma\left(f_{j}^{L-1}(x)\right)\sigma\left(f_{j}^{L-1}(y)\right)\\ &=\dot{\hat{\Sigma}}^{L}(x,y)\hat{\Sigma}^{L-1}(x,y),\end{split}\]
where in the last step we have used that \(\left|W_{.i}^{L}\right|^{2}=1\) by assumption and the definitions of \(\dot{\hat{\Sigma}}^{L}\) and \(\hat{\Sigma}^{L-1}\).
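The identity is easy to check numerically with automatic differentiation. The following PyTorch sketch uses illustrative widths, a tanh activation and \(L=2\):

```python
import torch

torch.manual_seed(0)
d, n0, n1, n2 = 3, 16, 16, 16
V, _ = torch.linalg.qr(torch.randn(n0, d))           # frozen first matrix
W0 = torch.randn(n1, n0)
W1 = torch.randn(n2, n1, requires_grad=True)         # W^{L-1} for L = 2
W2 = torch.randint(0, 2, (1, n2)).float() * 2 - 1    # Rademacher last layer

def net(x):
    f1 = W0 @ (V @ x)
    f2 = W1 @ (torch.tanh(f1) / n1 ** 0.5)
    return (W2 @ (torch.tanh(f2) / n2 ** 0.5)).squeeze(), f1, f2

x, y = torch.randn(d), torch.randn(d)
fx, f1x, f2x = net(x)
fy, f1y, f2y = net(y)
gx = torch.autograd.grad(fx, W1)[0]                  # d f^{L+1}(x) / d W^{L-1}
gy = torch.autograd.grad(fy, W1)[0]
lhs = (gx * gy).sum()                                # sum over lambda = ij; L-1

S1 = torch.tanh(f1x) @ torch.tanh(f1y) / n1          # empirical Sigma^{L-1}
dS2 = (1 - torch.tanh(f2x) ** 2) @ (1 - torch.tanh(f2y) ** 2) / n2
print(lhs.item(), (dS2 * S1).item())                 # the two values coincide
```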
The NTK and empirical NTK induce integral operators, which we denote by
\[Hf:=\int_{D}\Gamma(\cdot,y)f(y)\,dy,\qquad\qquad H_{\theta}f:=\int_{D}\hat{ \Gamma}(\cdot,y)f(y)\,dy\]
The last definition makes the dependence on the weights explicit, which is hidden in \(\hat{\Gamma}\).
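In computations, these integral operators can be approximated by quadrature (a Nystrom discretization). The sketch below uses equal-weight Monte Carlo nodes on \(\mathbb{S}^{2}\) and a placeholder polynomial kernel in place of \(\Gamma\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
Y = rng.standard_normal((n, d))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)   # quadrature nodes on S^2
w = np.full(n, 4 * np.pi / n)                   # equal Monte Carlo weights

def Gamma(A, B):
    return (1 + A @ B.T) ** 2                   # placeholder zonal kernel

K = Gamma(Y, Y)

def H(f_vals):
    # (H f)(y_i) ~ sum_j w_j Gamma(y_i, y_j) f(y_j)
    return K @ (w * f_vals)

print(H(Y[:, 0])[:3])                           # apply H to f(y) = y_1
```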
#### 4.1.2 Norms
We use several norms for our analysis.
1. \(\ell_{2}\) _and matrix norms:_\(\|\cdot\|\) denotes the \(\ell_{2}\) norm when applied to a vector and the matrix norm when applied to a matrix.
2. _Holder norms_\(\|\cdot\|_{C^{0;\alpha}(D;V)}\) for functions \(f\colon D\subset\mathbb{R}^{d}\to V\) into some normed vector space \(V\), with Holder continuity measured in the \(V\) norm \[\|f\|_{C^{0}(D;V)}:=\sup_{x\in D}\|f(x)\|_{V}+\sup_{x\neq\bar{x}\in D}\frac{\|f (x)-f(\bar{x})\|_{V}}{\|x-\bar{x}\|_{U}^{\alpha}}.\] We drop \(V\) in \(\|\cdot\|_{C^{0;\alpha}(D)}\) when \(V=\ell_{2}\) and \(D\) in \(\|\cdot\|_{C^{0;\alpha}}\) when it is understood from context. We also use alternate definitions as the supremum over the finite difference operator \[\Delta_{h}^{0}f(x)=f(x),\ \ \ \ \Delta_{h}^{\alpha}f(x)=\|h\|_{U}^{-\alpha}[f(x+h )-f(x)],\ \ \ \ \ \alpha>0,\] See Section 6.1 for the full definitions and basic properties.
3. _Mixed Holder norms_\(\|\cdot\|_{C^{0;\alpha,\beta}(D;V)}\) for functions \(f\colon D\times D\subset\mathbb{R}^{d}\to V\) of two variables. They measure the supremum of all mixed finite difference operators \(\Delta_{x,h_{x}}^{s}\Delta_{y,h_{y}}^{t}\) for any \(s\in\{0,\alpha\}\) and \(t\in\{0,\beta\}\), similar to Sobolev spaces with mixed smoothness. As for Holder norms for one variable, we use two different definitions, which are provided in Section 6.1.
4. _Sobolev Norms on the Sphere_ denoted by \(\|\cdot\|_{H^{\alpha}(\mathbb{S}^{d-1})}\). Definitions and properties are provided in Section 6.4.1. The bulk of the analysis is carried out in Holder norms, which control Sobolev norms by \[\|\cdot\|_{H^{\alpha}(\mathbb{S}^{d-1})}\lesssim\|\cdot\|_{C^{0;\alpha+\epsilon }(\mathbb{S}^{d-1})}.\] for \(\epsilon>0\), see Lemma 6.15.
5. _Generic Smoothness norms_\(\|\cdot\|_{\alpha}\), \(\alpha\in\mathbb{R}\) for associated Hilbert spaces \(\mathcal{H}^{\alpha}\). These are used in abstract convergence results and later replaced by Sobolev norms.
6. _Orlicz norms_\(\|\cdot\|_{\psi_{i}}\) for \(i=1,2\) measure sub-gaussian and sub-exponential concentration. Some required results are summarized in Section 6.2.
7. _Gaussian weighted \(L_{2}\) norms_ defined by \[\|f\|_{N}^{2}=\left\langle f,f\right\rangle_{N},\qquad\qquad\left\langle f,g\right\rangle_{N}=\int_{\mathbb{R}}f(x)g(x)\,d\mathcal{N}(0,1)(x)\]
#### 4.1.3 Neural Networks
Many results use a generic activation function denoted by \(\sigma\) with derivative \(\dot{\sigma}\), which is allowed to change in each layer, although we always use the same symbol for notational simplicity. They satisfy the linear growth condition
\[\left|\sigma\left(x\right)\right|\lesssim\left|x\right|, \tag{13}\]
are Lipschitz
\[\left|\sigma\left(x\right)-\sigma\left(\bar{x}\right)\right|\lesssim\left|x- \bar{x}\right| \tag{14}\]
and have uniformly bounded derivatives
\[\left|\dot{\sigma}\left(x\right)\right|\lesssim 1. \tag{15}\]
### Abstract Convergence result
We first show convergence in a slightly generalized setting. To this end, we consider neural networks as maps from the parameter space to the square integrable functions \(f_{\cdot}\colon\Theta\subset\ell_{2}(\mathbb{R}^{m})\to L_{2}(D)\) defined by \(\theta\to f_{\theta}(\cdot)\). More generally, for the time being, we replace \(L_{2}(D)\) by an arbitrary Hilbert space \(\mathcal{H}\) and the network by an arbitrary Frechet differentiable function
\[f:\Theta=\ell_{2}(\mathbb{R}^{m})\rightarrow\mathcal{H}, \theta\to f_{\theta}.\]
For a target function \(f\in\mathcal{H}\), we define the loss
\[L(\theta)=\frac{1}{2}\|f_{\theta}-f\|_{\mathcal{H}}^{2}\]
and the corresponding gradient flow for \(\theta(t)\)
\[\frac{d}{dt}\theta(t)=-\nabla L(\theta), \tag{16}\]
initialized with random \(\theta(0)\). The convergence analysis relies on a regime where the evolution of the gradient flow is governed by its linearization
\[H_{\theta}:=Df_{\theta}(Df_{\theta})^{*},\]
where \(*\) denotes the adjoint and \(H_{\theta}\) is the empirical NTK if \(f_{\theta}\) is a neural network. To describe the smoothness of the target and spectral properties of \(H_{\theta}\), we use a series of Hilbert spaces \(\mathcal{H}^{\alpha}\) for some smoothness index \(\alpha\in\mathbb{R}\) so that \(\mathcal{H}^{0}=\mathcal{H}\). As stated in the lemma below, they satisfy interpolation inequalities and coercivity conditions. In this abstract framework, we show convergence as follows.
**Lemma 4.2**.: _Let \(\theta(t)\) be defined by the gradient flow (16), \(\kappa=f_{\theta}-f\) be the residual and \(m\) be a number that satisfies all assumptions below, which is typically related to the degrees of freedom. For constants \(c_{\infty},c_{0},\beta,\gamma>0\) and \(0\leq\alpha\leq\frac{\beta}{2}\), functions \(p_{0}(m),p_{\infty}(\tau)\), \(p_{L}(m,h)\) and weight norm \(\left\|\cdot\right\|_{*}\) assume that:_
1. _With probability at least_ \(1-p_{0}(m)\)_, the distance of the weights from their initial value is controlled by_ \[\left\|\theta(t)-\theta(0)\right\|_{*}\leq 1\quad\Rightarrow\quad\left\|\theta(t)- \theta(0)\right\|_{*}\lesssim\sqrt{\frac{2}{m}}\int_{0}^{t}\left\|\kappa(\tau) \right\|_{0}d\tau.\] (17)
2. _The norms and scalar product satisfy interpolation and continuity_ \[\|\cdot\|_{b}\lesssim\|\cdot\|_{a}^{\frac{c-b}{c-a}}\|\cdot\|_{c}^{\frac{b-a}{c -a}},\qquad\qquad\left\langle\cdot,\cdot\right\rangle_{-\alpha}\lesssim\| \cdot\|_{-3\alpha}\|\cdot\|_{\alpha},\] (18) _for all_ \(-\alpha-\beta\leq a\leq b\leq c\leq\alpha\)_._
3. _Let_ \(H\colon\mathcal{H}^{\alpha}\to\mathcal{H}^{-\alpha}\) _be an operator that satisfies the concentration inequality_ \[\Pr\left[\|H-H_{\theta(0)}\|_{\alpha\leftarrow-\alpha}\geq c\sqrt{\frac{d}{m} }+\sqrt{\frac{c_{\infty}\tau}{m}}\right]\leq p_{\infty}(\tau)\] (19) _for all_ \(\tau\) _with_ \(\sqrt{\frac{c_{\infty}\tau}{m}}\leq 1\)_. (In our application_ \(H\) _is the NTK and_ \(H_{\theta(0)}\) _the empirical NTK.)_
4. _Holder continuity with high probability:_ \[\Pr\left[\exists\,\bar{\theta}\in\Theta\text{ with }\left\|\bar{\theta}-\theta(0)\right\|_{*}\leq h \text{ and }\|H_{\bar{\theta}}-H_{\theta(0)}\|_{\alpha\leftarrow-\alpha}\geq c_{0}h^{ \gamma}\right]\\ \leq p_{L}(m,h)\] (20) _for all_ \(0<h\leq 1\)_._
5. \(H\) _is coercive for_ \(S\in\{-\alpha,\alpha\}\)__ \[\|v\|_{S-\beta}^{2}\lesssim\left\langle v,Hv\right\rangle_{S},\qquad\qquad \qquad v\in\mathcal{H}^{S-\beta}\] (21)
6. _For_ \(\tau\) _specified below,_ \(m\) _is sufficiently large so that_ \[\|\kappa(0)\|_{-\alpha}^{\frac{1}{2}}\|\kappa(0)\|_{\alpha}^{\frac{1}{2}}m^{- \frac{1}{2}}\lesssim 1,\qquad\qquad\frac{cd}{m}\leq 1,\qquad\frac{\tau}{m}\leq 1.\]
_Then with probability at least \(1-p_{0}(m)-p_{\infty}(\tau)-p_{L}(m,h)\) we have_
\[\|\kappa\|_{-\alpha}^{2} \lesssim\left[h^{\frac{\beta\gamma}{\beta-\alpha}}\|\kappa(0)\|_ {\alpha}^{\frac{\beta}{\alpha}}+\|\kappa(0)\|_{-\alpha}^{\frac{\beta}{\alpha }}e^{-ch^{\frac{\beta\gamma}{\beta-\alpha}}\frac{\beta}{2\alpha}t}\right]^{ \frac{2\alpha}{\beta}}\] \[\|\kappa\|_{\alpha}^{2} \lesssim\|\kappa(0)\|_{\alpha}^{2}\]
_for some \(h\) with_
\[h\lesssim\max\left\{\left[\frac{\|\kappa(0)\|_{-\alpha}^{\frac{1}{2}}\|\kappa( 0)\|_{\alpha}^{\frac{1}{2}}}{\sqrt{m}}\right]^{\frac{\beta-\alpha}{\beta(1+ \gamma)-\alpha}},\,c\sqrt{\frac{d}{m}}\right\},\qquad\tau=h^{2\gamma}m\]
_and generic constants \(c\geq 0\), dependent on \(\alpha\) and independent of \(\kappa\) and \(m\)._
We defer the proof to Section 5.1 and only consider a sketch here. As for standard NTK arguments, the proof is based on the following observation
\[\frac{1}{2}\frac{d}{dt}\|\kappa\|^{2}=-\left<\kappa,H_{\theta(t)}\,\kappa\right> \approx-\left<\kappa,H\,\kappa\right> \tag{22}\]
which can be shown by a short computation. The last step relies on the observation that the empirical NTK stays close to its initialization, \(H_{\theta(t)}\approx H_{\theta(0)}\), and that the initialization is close to the infinite width limit, \(H_{\theta(0)}\approx H\). However, since we are not in an over-parametrized regime, the NTK's eigenvalues can be arbitrarily close to zero and we only have coercivity in the weaker norm \(\left<\kappa,\,H\,\kappa\right>\gtrsim\|\kappa\|_{-\alpha}^{2}\), which is not sufficient to show convergence by e.g. Gronwall's inequality. To avoid this problem, we derive a closely related system of coupled ODEs
\[\frac{1}{2}\frac{d}{dt}\|\kappa\|_{-\alpha}^{2} \lesssim-c\|\kappa\|_{-\alpha}^{2\frac{2\alpha+\beta}{2\alpha}}\|\kappa\|_{\alpha}^{-2\,\frac{\beta}{2\alpha}}+h^{\frac{\beta\gamma}{\beta-\alpha}}\|\kappa\|_{-\alpha}^{2}\] \[\frac{1}{2}\frac{d}{dt}\|\kappa\|_{\alpha}^{2} \lesssim-c\|\kappa\|_{-\alpha}^{2\frac{\beta}{2\alpha}}\|\kappa\|_{\alpha}^{2\frac{2\alpha-\beta}{2\alpha}}+h^{\gamma}\|\kappa\|_{\alpha}\|\kappa\|_{-\alpha}.\]
The first one is used to bound the error in the \(\mathcal{H}^{-\alpha}\) norm and the second ensures that the smoothness of the residual \(\kappa(t)\) is uniformly bounded during gradient flow. Together with the interpolation inequality (18), this shows convergence in the \(\mathcal{H}=\mathcal{H}^{0}\) norm.
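A quick numerical illustration of this ODE system (explicit Euler; the factor \(1/2\) and all constants are absorbed into arbitrary illustrative values): \(x=\|\kappa\|_{-\alpha}^{2}\) decays until it stalls at a floor of order \(h^{2\gamma\alpha/(\beta-\alpha)}\) relative to \(y=\|\kappa\|_{\alpha}^{2}\), which stays bounded.

```python
import numpy as np

alpha, beta, gamma, h, c = 0.4, 1.0, 0.5, 1e-2, 1.0
rho = beta / (2 * alpha)                 # exponent from the interpolation step
b, dcoef = h ** (beta * gamma / (beta - alpha)), h ** gamma
x, y, dt = 1.0, 1.0, 1e-3                # x = |kappa|_{-a}^2, y = |kappa|_{a}^2

for _ in range(300000):
    dx = -c * x ** (1 + rho) * y ** (-rho) + b * x
    dy = -c * x ** rho * y ** (1 - rho) + dcoef * np.sqrt(x * y)
    x, y = x + dt * dx, y + dt * dy

floor = h ** (2 * gamma * alpha / (beta - alpha))
print(x / y, floor)                      # the ratio x / y settles near the floor
```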
It remains to verify all assumptions of Lemma 4.2, which we do in the following subsections. Details are provided in Section 5.5.
### Assumption (20): Holder continuity
We use a bar \(\bar{\cdot}\) to denote a perturbation, in particular \(\bar{W}^{\ell}\) is a perturbed weight, and \(\bar{\hat{\Gamma}}\) is the corresponding empirical neural tangent kernel. In order to obtain continuity results, we require that the weight matrices and domain are bounded
\[\left\|W^{\ell}\right\|n_{\ell}^{-1/2}\lesssim 1,\qquad\left\|\bar{W}^{\ell} \right\|n_{\ell}^{-1/2}\lesssim 1,\qquad\ \|x\|\lesssim 1\,\forall x\in D. \tag{23}\]
For the initial weights \(W^{\ell}\), this holds with high probability because their entries are i.i.d. standard Gaussian. For perturbed weights we only need continuity bounds under the condition that \(\left\|\theta-\bar{\theta}\right\|_{*}\leq 1\), or equivalently that \(\|W^{\ell}-\bar{W}^{\ell}\|n_{\ell}^{-1/2}\leq 1\), so that the weight bounds of the perturbation \(\bar{W}^{\ell}\) follow from the bounds for \(W^{\ell}\). With this setup, we show the following lemma.
**Lemma 4.3**.: _Assume that \(\sigma\) and \(\dot{\sigma}\) satisfy the growth and Lipschitz conditions (13), (14) and may be different in each layer. Assume the weights, perturbed weights and domain are bounded (23) and \(n_{L}\sim n_{L-1}\sim\cdots\sim n_{0}\). Then for \(0<\alpha<1\)_
\[\begin{split}\left\|\hat{\Gamma}\right\|_{C^{0;\alpha,\alpha}}&\lesssim 1\\ \left\|\bar{\hat{\Gamma}}\right\|_{C^{0;\alpha,\alpha}}&\lesssim 1\\ \left\|\hat{\Gamma}-\bar{\hat{\Gamma}}\right\|_{C^{0;\alpha,\alpha}}&\lesssim\frac{n_{0}}{n_{L}}\left[\sum_{k=0}^{L-1}\left\|W^{k}-\bar{W}^{k}\right\|n_{k}^{-1/2}\right]^{1-\alpha}.\end{split}\]
The proof is at the end of Section 5.2. The lemma shows that the empirical kernels are Holder continuous with respect to the weights, measured in a Holder norm with respect to \(x\) and \(y\). This directly implies that the induced integral operators satisfy bounds on \(\left\|H_{\theta}-H_{\bar{\theta}}\right\|_{\alpha\leftarrow-\alpha}\) in operator norms induced by Sobolev norms (up to \(\epsilon\) less smoothness), which implies Assumption (20), see Section 5.5 for details.
### Assumption (19): Concentration
For concentration, we need to show that the empirical NTK is close to the NTK, i.e. that \(\left\|H-H_{\theta(0)}\right\|_{\alpha\leftarrow-\alpha}\) is small in the operator norm. To this end, it suffices to bound the corresponding integral kernels \(\left\|\Gamma-\hat{\Gamma}\right\|_{C^{0;\alpha+\epsilon,\alpha+\epsilon}}\) in Holder norms with slightly higher smoothness, see Lemma 6.16. Concentration is then provided by the following lemma. See the end of Section 5.3 for a proof and Section 5.5 for its application in the proof of the main result.
**Lemma 4.4**.: _Let \(\alpha=\beta=1/2\) and \(k=0,\ldots,L-1\)._
1. _Assume that_ \(W^{L}\in\{-1,+1\}\) _with probability_ \(1/2\) _each._
2. _Assume that all_ \(W^{k}\) _are i.i.d. standard normal._
3. _Assume that_ \(\sigma\) _and_ \(\dot{\sigma}\) _satisfy the growth condition (_13_), have uniformly bounded derivatives (_15_), derivatives_ \(\sigma^{(i)}\)_,_ \(i=0,\ldots,3\) _are continuous and have at most polynomial growth for_ \(x\rightarrow\pm\infty\) _and the scaled activations satisfy_ \[\left\|\partial^{i}(\sigma_{a})\right\|_{N}\lesssim 1,\ \ \left\|\partial^{i}(\dot{\sigma}_{a})\right\|_{N}\lesssim 1,\ \ a\in\{\Sigma^{k}(x,x):x\in D\},\ \ i=1,\ldots,3,\] _with_ \(\sigma_{a}(x):=\sigma(ax)\)_. The activation functions may be different in each layer._
4. _For all_ \(x\in D\) _assume_ \[\Sigma^{k}(x,x)\geq c_{\Sigma}>0.\]
5. _The widths satisfy_ \(n_{\ell}\gtrsim n_{0}\) _for all_ \(\ell=0,\ldots,L\)_._
_Then, with probability at least_
\[1-c\sum_{k=1}^{L-1}\left(e^{-n_{k}}+e^{-u_{k}}\right) \tag{24}\]
_we have_
\[\left\|\hat{\Gamma}-\Gamma\right\|_{C^{0;\alpha,\beta}}\lesssim\sum_{k=0}^{L-1} \frac{n_{0}}{n_{k}}\left[\frac{\sqrt{d}+\sqrt{u_{k}}}{\sqrt{n_{k}}}+\frac{d+u_{k }}{n_{k}}\right]\leq\frac{1}{2}c_{\Sigma}\]
_for all \(u_{1},\ldots,u_{L-1}\geq 0\) sufficiently small so that the rightmost inequality holds._
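A crude Monte Carlo check of this concentration behaviour for a shallow instance of (2) at a single point pair (all choices illustrative): the spread of \(\hat{\Gamma}(x,y)\) over random initializations should shrink roughly like \(m^{-1/2}\).

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 3, 50
x = rng.standard_normal(d); x /= np.linalg.norm(x)
y = rng.standard_normal(d); y /= np.linalg.norm(y)

def emp_ntk(m):
    W0, W1 = rng.standard_normal((m, d)), rng.standard_normal((m, m))
    f1x, f1y = W0 @ x, W0 @ y
    f2x = W1 @ (np.tanh(f1x) / np.sqrt(m))
    f2y = W1 @ (np.tanh(f1y) / np.sqrt(m))
    S1 = np.tanh(f1x) @ np.tanh(f1y) / m               # empirical Sigma^1
    dS2 = (1 - np.tanh(f2x) ** 2) @ (1 - np.tanh(f2y) ** 2) / m
    return dS2 * S1                                    # product formula, Lemma 4.1

for m in [100, 400, 1600]:
    samples = np.array([emp_ntk(m) for _ in range(trials)])
    print(m, samples.std())            # std roughly halves as m quadruples
```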
### Assumption (17): Weights stay Close to Initial
Assumption (17) follows from the following lemma, which shows that the weights stay close to their random initialization. Again, the estimates are proven in Holder norms, which control the relevant Sobolev norms, see Section 5.5 for details.
**Lemma 4.5**.: _Assume that \(\sigma\) satisfies the growth and derivative bounds (13), (15) and may be different in each layer. Assume the weights are defined by the gradient flow (6) and satisfy_
\[\|W^{\ell}(0)\|n_{\ell}^{-1/2} \lesssim 1, \ell=1,\ldots,L,\] \[\|W^{\ell}(0)-W^{\ell}(\tau)\|n_{\ell}^{-1/2} \lesssim 1, 0\leq\tau<t.\]
_Then_
\[\left\|W^{\ell}(t)-W^{\ell}(0)\right\|n_{\ell}^{-1/2}\lesssim\frac{n_{0}^{1/2}}{n_{\ell}}\int_{0}^{t}\|\kappa(\tau)\|_{C^{0}(D)^{\prime}}\,d\tau,\]
_where \(C^{0}(D)^{\prime}\) is the dual space of \(C^{0}(D)\)._
## 5 Proof of the Main Result
### Proof of Lemma 4.2: Generalized Convergence
**NTK Evolution.** In this section, we prove the convergence result in Lemma 4.2. Let us first recall the evolution of the loss in NTK theory. The Frechet derivative of the loss is
\[DL(\theta)v=\left\langle\kappa,(Df_{\theta})v\right\rangle=\left\langle(Df_{ \theta})^{*}\kappa,v\right\rangle,\qquad\qquad\text{for all }v\in\Theta\]
and the gradient of the loss is the Riesz lift of the derivative
\[\nabla L(\theta)=(Df_{\theta})^{*}\kappa. \tag{25}\]
Using the chain rule, we obtain the evolution of the residual
\[\frac{d\kappa}{dt}=(Df_{\theta})\frac{d\theta}{dt}=-(Df_{\theta})\nabla L(\theta)=-(Df_{\theta})(Df_{\theta})^{*}\kappa=:-H_{\theta}\kappa \tag{26}\]
and the loss in any \(\mathcal{H}^{S}\) norm
\[\frac{1}{2}\frac{d}{dt}\|\kappa\|_{S}^{2}=\left\langle\kappa,\frac{d\kappa}{ dt}\right\rangle_{S}=-\left\langle\kappa,(Df_{\theta})(Df_{\theta})^{*}\kappa \right\rangle_{S}=-\left\langle\kappa,H_{\theta}\,\kappa\right\rangle_{S}, \tag{27}\]
with
\[H_{\theta}:=(Df_{\theta})(Df_{\theta})^{*}.\]
#### Proof of Lemma 4.2
Proof of Lemma 4.2.: For the time being, we assume that the weights remain within a finite distance
\[h:=\max\left\{\sup_{t\leq T}\lVert\theta(t)-\theta(0)\rVert_{*}\,,c\sqrt{\frac{d}{ m}}\right\}\leq 1 \tag{28}\]
to their initial up to a time \(T\) to be determined below, but sufficiently small so that the last inequality holds. With this condition, we can bound the time derivatives of the loss \(\lVert\kappa\rVert_{-\alpha}\) and the smoothness \(\lVert\kappa\rVert_{\alpha}\). For \(S\in\{-\alpha,\alpha\}\) and respective \(\bar{S}\in\{-3\alpha,\alpha\}\), we have already calculated the exact evolution in (27), which we estimate by
\[\begin{split}\frac{1}{2}\frac{d}{dt}\lVert\kappa\rVert_{S}^{2}&=-\left\langle\kappa,H_{\theta(t)}\kappa\right\rangle_{S}\\ &=-\left\langle\kappa,H\kappa\right\rangle_{S}+\left\langle\kappa,(H-H_{\theta(0)})\kappa\right\rangle_{S}+\left\langle\kappa,(H_{\theta(0)}-H_{\theta(t)})\kappa\right\rangle_{S}.\end{split}\]
We estimate the last two summands as
\[\left\langle\kappa,[\dots]\kappa\right\rangle_{S}\leq\lVert\kappa\rVert_{\bar{S}}\lVert[\dots]\kappa\rVert_{\alpha}\leq\lVert\kappa\rVert_{\bar{S}}\lVert[\dots]\rVert_{\alpha\leftarrow-\alpha}\lVert\kappa\rVert_{-\alpha},\]
where \(\bar{S}=\alpha\) for \(S=\alpha\) and \(\bar{S}=-3\alpha\) for \(S=-\alpha\) by Assumption 2. Then, we obtain
\[\frac{1}{2}\frac{d}{dt}\lVert\kappa\rVert_{S}^{2} \leq-\left\langle\kappa,H\kappa\right\rangle_{S}+\lVert H-H_{ \theta(0)}\rVert_{\alpha\leftarrow-\alpha}\lVert\kappa\rVert_{\bar{S}}\lVert \kappa\rVert_{-\alpha}+\lVert H_{\theta(0)}-H_{\theta(t)}\rVert_{\alpha \leftarrow-\alpha}\lVert\kappa\rVert_{\bar{S}}\lVert\kappa\rVert_{-\alpha}\] \[\leq-\left\langle\kappa,H\kappa\right\rangle_{S}+\left[c\sqrt{ \frac{d}{m}}+\sqrt{\frac{c_{\infty}\tau}{m}}+c_{0}h^{\gamma}\right]\lVert \kappa\rVert_{\bar{S}}\lVert\kappa\rVert_{-\alpha},\] \[\lesssim-c\lVert\kappa\rVert_{S-\beta}^{2}+h^{\gamma}\lVert \kappa\rVert_{\bar{S}}\lVert\kappa\rVert_{-\alpha},\]
with probability at least \(1-p_{\infty}(\tau)-p_{L}(m,h)\), where the second but last inequality follows from assumptions (19), (20) and in the last inequality we have used the coercivity, (28) and chosen \(\tau=h^{2\gamma}m\) so that \(\sqrt{\frac{c_{\infty}\tau}{m}}\lesssim h^{\gamma}\). The right hand side contains one negative term \(-\|\kappa\|_{S-\beta}^{2}\), which decreases the residual \(\frac{d}{dt}\|\kappa\|_{S}^{2}\), and one positive term, which enlarges it. In the following, we ensure that these terms are properly balanced.
We eliminate all norms that are not \(\lVert\kappa\rVert_{-\alpha}\) or \(\lVert\kappa\rVert_{\alpha}\) so that we obtain a closed system of ODEs in these two variables. We begin with \(\lVert\kappa\rVert_{\bar{S}}\), which is already of the right type for \(\bar{S}=\alpha\), but equals \(\lVert\kappa\rVert_{-3\alpha}\) for \(S=-\alpha\). Since \(0<\alpha<\frac{\beta}{2}\), we have \(-\alpha-\beta\leq-3\alpha\leq\alpha\) so that we can invoke the interpolation inequality from Assumption 2
\[\lVert v\rVert_{-3\alpha}\leq\lVert v\rVert_{-\alpha-\beta}^{\frac{2\alpha}{ \beta}}\lVert v\rVert_{-\alpha}^{\frac{\beta-2\alpha}{\beta}}.\]
Together with Young's inequality, this implies
\[\begin{split}h^{\gamma}\|\kappa\|_{\bar{S}}\|\kappa\|_{-\alpha}&\leq h^{\gamma}\|\kappa\|_{-\alpha-\beta}^{\frac{2\alpha}{\beta}}\|\kappa\|_{-\alpha}^{\frac{2\beta-2\alpha}{\beta}}\\ &\leq\frac{\alpha}{\beta}\left[c\|\kappa\|_{-\alpha-\beta}^{\frac{2\alpha}{\beta}}\right]^{\frac{\beta}{\alpha}}+\frac{\beta-\alpha}{\beta}\left[c^{-1}h^{\gamma}\|\kappa\|_{-\alpha}^{\frac{2\beta-2\alpha}{\beta}}\right]^{\frac{\beta}{\beta-\alpha}}\\ &=\frac{\alpha}{\beta}c^{\frac{\beta}{\alpha}}\|\kappa\|_{-\alpha-\beta}^{2}+\frac{\beta-\alpha}{\beta}c^{-\frac{\beta}{\beta-\alpha}}h^{\frac{\gamma\beta}{\beta-\alpha}}\|\kappa\|_{-\alpha}^{2}\end{split}\]
for any generic constant \(c>0\). Choosing this constant sufficiently small and plugging into the evolution equation for \(\|\kappa\|_{-\alpha}\), we obtain
\[\frac{1}{2}\frac{d}{dt}\|\kappa\|_{-\alpha}^{2}\lesssim-c\|\kappa\|_{-\alpha- \beta}^{2}+h^{\frac{\gamma\beta}{\beta-\alpha}}\|\kappa\|_{-\alpha}^{2},\]
with a different generic constant \(c\). Hence, together with the choice \(S=\alpha\), we arrive at the system of ODEs
\[\frac{1}{2}\frac{d}{dt}\|\kappa\|_{-\alpha}^{2} \lesssim-c\|\kappa\|_{-\alpha-\beta}^{2}+h^{\frac{\gamma\beta}{ \beta-\alpha}}\|\kappa\|_{-\alpha}^{2},\] \[\frac{1}{2}\frac{d}{dt}\|\kappa\|_{\alpha}^{2} \lesssim-c\|\kappa\|_{\alpha-\beta}^{2}+h^{\gamma}\|\kappa\|_{ \alpha}\|\kappa\|_{-\alpha}.\]
Next, we eliminate the \(\|\kappa\|_{-\alpha-\beta}^{2}\) and \(\|\kappa\|_{\alpha-\beta}^{2}\) norms. Since \(0<\alpha<\frac{\beta}{2}\) implies \(-\alpha-\beta<\alpha-\beta<-\alpha<\alpha\) the interpolation inequalities in Assumption 2 yield
\[\|\kappa\|_{-\alpha} \leq\|\kappa\|_{-\alpha-\beta}^{\frac{2\alpha}{2\alpha+\beta}}\| \kappa\|_{\alpha}^{\frac{\beta}{2\alpha+\beta}} \Rightarrow \|\kappa\|_{-\alpha-\beta}\geq\|\kappa\|_{-\alpha}^{\frac{2 \alpha+\beta}{2\alpha}}\|\kappa\|_{\alpha}^{\frac{\beta}{2\alpha}}\] \[\|\kappa\|_{-\alpha} \leq\|\kappa\|_{\alpha-\beta}^{\frac{2\alpha}{\beta}}\|\kappa\|_ {\alpha}^{\frac{\beta-2\alpha}{\beta}} \Rightarrow \|\kappa\|_{\alpha-\beta}\geq\|\kappa\|_{-\alpha}^{\frac{\beta}{ 2\alpha}}\|\kappa\|_{\alpha}^{\frac{2\alpha-\beta}{2\alpha}},\]
so that we obtain the differential inequalities
\[\frac{1}{2}\frac{d}{dt}\|\kappa\|_{-\alpha}^{2} \lesssim-c\|\kappa\|_{-\alpha}^{2\frac{2\alpha+\beta}{2\alpha}} \|\kappa\|_{\alpha}^{-2\frac{\beta}{2\alpha}}+h^{\frac{\beta\gamma}{\beta- \alpha}}\|\kappa\|_{-\alpha}^{2}\] \[\frac{1}{2}\frac{d}{dt}\|\kappa\|_{\alpha}^{2} \lesssim-c\|\kappa\|_{-\alpha}^{2\frac{\beta}{2\alpha}}\|\kappa \|_{\alpha}^{2\frac{2\alpha-\beta}{2\alpha}}+h^{\gamma}\|\kappa\|_{\alpha}\| \kappa\|_{-\alpha}.\]
Bounds for the solutions are provided by Lemma 5.1 with \(x=\|\kappa\|_{-\alpha}^{2}\), \(y=\|\kappa\|_{\alpha}^{2}\) and \(\rho=\frac{\beta}{2\alpha}\geq 1\geq\frac{1}{2}\): Given that
\[\|\kappa\|_{-\alpha}^{2}\gtrsim h^{2\frac{\gamma\alpha}{\beta-\alpha}}\|\kappa (0)\|_{\alpha}^{2}, \tag{29}\]
i.e. the error \(\|\kappa\|_{-\alpha}\) is still larger than the right hand side, which will be our final error bound, we have
\[\|\kappa\|_{-\alpha}^{2} \lesssim\left[h^{\frac{\beta\gamma}{\beta-\alpha}}\|\kappa(0)\|_{ \alpha}^{\frac{\beta}{\alpha}}+\|\kappa(0)\|_{-\alpha}^{\frac{\beta}{\alpha}}e ^{-ch^{\frac{\beta\gamma}{\beta-\alpha}}\frac{\beta}{2\alpha}t}\right]^{\frac{2 \alpha}{\beta}} \tag{30}\] \[\|\kappa\|_{\alpha}^{2} \lesssim\|\kappa(0)\|_{\alpha}^{2}. \tag{31}\]
The second condition \(B(t)\geq 0\) in Lemma 5.1 is equivalent to \(ax_{0}^{\rho}\geq by_{0}^{\rho}\) (notation of the lemma), which in our case is identical to (29) at \(t=0\). Notice that the right hand side of (29) corresponds to the first summand in the \(\|\kappa\|_{-\alpha}^{2}\) bound so that the second summand must dominate and we obtain the simpler expression
\[\|\kappa\|_{-\alpha}^{2} \lesssim\|\kappa(0)\|_{-\alpha}^{2}e^{-ch^{\frac{\beta\gamma}{\beta-\alpha}}t}, \tag{32}\] \[\|\kappa\|_{\alpha}^{2} \lesssim\|\kappa(0)\|_{\alpha}^{2}.\]
Finally, we compute \(h\), first for the case \(h=\sup_{t\leq T}\|\theta(t)-\theta(0)\|_{*}\). For \(T\) we use the smallest time for which (29) fails and temporarily also \(h\leq 1\). Then by Assumption (17), interpolation inequality (18) and the \(\|\kappa\|_{-\alpha}^{2}\), \(\|\kappa\|_{\alpha}^{2}\) bounds, with probability at least \(1-p_{0}(m)\), we have
\[h=\sup_{t\leq T}\|\theta(t)-\theta(0)\|_{*} \lesssim\sqrt{\frac{2}{m}}\int_{0}^{T}\|\kappa(\tau)\|_{0}\,d\tau\] \[\lesssim\sqrt{\frac{2}{m}}\int_{0}^{T}\|\kappa(\tau)\|_{-\alpha} ^{\frac{1}{2}}\|\kappa(\tau)\|_{\alpha}^{\frac{1}{2}}\,d\tau\] \[\lesssim\sqrt{\frac{2}{m}}\|\kappa(0)\|_{-\alpha}^{\frac{1}{2}}\| \kappa(0)\|_{\alpha}^{\frac{1}{2}}\int_{0}^{T}e^{-ch^{\frac{\beta\gamma}{\beta -\alpha}}\frac{\tau}{4}}\,d\tau\] \[\leq c\sqrt{\frac{1}{m}}\frac{\|\kappa(0)\|_{-\alpha}^{\frac{1}{2 }}\|\kappa(0)\|_{\alpha}^{\frac{1}{2}}}{h^{\frac{\beta\gamma}{\beta-\alpha}}},\]
for some generic constant \(c>0\). Solving for \(h\), we obtain
\[h^{1+\frac{\beta\gamma}{\beta-\alpha}}\lesssim\|\kappa(0)\|_{-\alpha}^{\frac{ 1}{2}}\|\kappa(0)\|_{\alpha}^{\frac{1}{2}}m^{-\frac{1}{2}}\quad\Leftrightarrow \quad h\lesssim\left[\|\kappa(0)\|_{-\alpha}^{\frac{1}{2}}\|\kappa(0)\|_{ \alpha}^{\frac{1}{2}}m^{-\frac{1}{2}}\right]^{\frac{\beta-\alpha}{\beta(1+ \gamma)-\alpha}}.\]
Notice that by assumption \(m\) is sufficiently large so that the right hand side is strictly smaller than one and thus \(T\) is only constrained by (29). In case \(h=c\sqrt{d/m}\) there is nothing to show and we obtain
\[h\lesssim\max\left\{\left[\|\kappa(0)\|_{-\alpha}^{\frac{1}{2}}\|\kappa(0)\|_ {\alpha}^{\frac{1}{2}}m^{-\frac{1}{2}}\right]^{\frac{\beta-\alpha}{\beta(1+ \gamma)-\alpha}},\,c\sqrt{\frac{d}{m}}\right\}.\]
Finally, we extend the result beyond the largest time \(T\) for which (29) is satisfied and hence (29) holds with equality. Since \(\|\kappa\|_{0}^{2}\) is defined by a gradient flow, it is monotonically decreasing and thus for any time \(t>T\), we have
\[\|\kappa(t)\|_{-\alpha}^{2}\leq\|\kappa(T)\|_{-\alpha}^{2}=ch^{2\frac{\gamma\alpha}{\beta-\alpha}}\|\kappa(0)\|_{\alpha}^{2}=c\left[h^{\frac{\gamma\beta}{\beta-\alpha}}\|\kappa(0)\|_{\alpha}^{\frac{\beta}{\alpha}}\right]^{\frac{2\alpha}{\beta}}\\ \lesssim\left[h^{\frac{\beta\gamma}{\beta-\alpha}}\|\kappa(0)\|_{\alpha}^{\frac{\beta}{\alpha}}+\|\kappa(0)\|_{-\alpha}^{\frac{\beta}{\alpha}}e^{-ch^{\frac{\beta\gamma}{\beta-\alpha}}\frac{\beta}{2\alpha}t}\right]^{\frac{2\alpha}{\beta}}\]
so that the error bound (30) holds for all times up to an adjustment of the constants. This implies the statement of the lemma with our choice of \(h\) and \(\tau\).
#### Technical Supplements
**Lemma 5.1**.: _Assume \(a,b,c,d>0\), \(\rho\geq\frac{1}{2}\) and that \(x\), \(y\) satisfy the differential inequality_
\[x^{\prime} \leq-ax^{1+\rho}y^{-\rho}+bx, x(0) =x_{0} \tag{33}\] \[y^{\prime} \leq-cx^{\rho}y^{1-\rho}+d\sqrt{xy}, y(0) =y_{0}. \tag{34}\]
_Then within any time interval \([0,T]\) for which_
\[x(t) \geq\left(\frac{d}{c}\right)^{\frac{2}{2\rho-1}}y_{0}, \tag{35}\]
_with_
\[A :=\frac{b}{a}y_{0}^{\rho}, B(t) :=\left[1-\frac{b}{a}\left(\frac{x_{0}}{y_{0}}\right)^{-\rho} \right]e^{-b\rho t}\]
_we have_
\[x(t) \leq\left[A\left(1-B(t)\right)^{-1}\right]^{\frac{1}{\rho}}, y(t) \leq y_{0}.\]
_If \(B(t)\geq 0\), this can be further estimated by_
\[x(t) \leq\left(A+x_{0}^{\rho}e^{-b\rho t}\right)^{\frac{1}{\rho}}, y(t) \leq y_{0}.\]
Proof.: First, we show that \(y(t)\leq y_{0}\) for all \(t\leq T\). To this end, note that condition (35) states that we are above a critical point for the second ODE (34). Indeed, setting \(y^{\prime}(t)=0\) and thus \(y(t)=y_{0}\) and solving the second ODE (with \(=\) instead of \(\leq\)) for \(x(t)\), we have
\[x(t) =\left(\frac{d}{c}\right)^{\frac{2}{2\rho-1}}y_{0}.\]
To show that \(y(t)\leq y_{0}\), let \(\epsilon\geq 0\) and define
\[T_{\epsilon} =\sup\left\{t\leq T\bigg{|}x(t)\geq\left(\frac{d}{c}\right)^{ \frac{2}{2\rho-1}}y_{0}(1+\epsilon)\right\},\] \[\tau_{\epsilon} =\inf\left\{t\leq T_{\epsilon}|y(t)\geq y_{0}(1+\epsilon)\right\},\]
where the definition of \(T_{\epsilon}\) resembles the definition of \(T\) up to a safety factor of \(1+\epsilon\) and \(\tau_{\epsilon}\) is the smallest time when our hypothesis \(y(t)\leq y_{0}\) fails up to a small margin. Assume that \(\tau_{\epsilon}<T_{\epsilon}\). Since \(2\rho-1\geq 0\), for all \(t<\tau_{\epsilon}\), we have
\[x(t)^{2\rho-1} \geq\left(\frac{d}{c}\right)^{2}\left[y_{0}(1+\epsilon)\right]^{ 2\rho-1}\geq\left(\frac{d}{c}\right)^{2}y(t)^{2\rho-1},\]
which upon rearrangement is equivalent to
\[-cx^{\rho}y^{1-\rho}+d\sqrt{xy}\leq 0,\]
so that the differential inequality (34) yields \(y^{\prime}(t)\leq 0\) and hence \(y(t)\leq y_{0}\) for all \(t<\tau_{\epsilon}\). On the other hand, by the definition of \(\tau_{\epsilon}\) there are times \(t\geq\tau_{\epsilon}\) arbitrarily close to \(\tau_{\epsilon}\) with \(y(t)\geq y_{0}(1+\epsilon)\), which contradicts the continuity of \(y\). It follows that \(\tau_{\epsilon}\geq T_{\epsilon}\) and with \(\lim_{\epsilon\to 0}T_{\epsilon}=T\), we obtain
\[y(t) \leq y_{0}, t<T.\]
Next, we show the bounds for \(x(t)\). For any fixed function \(y\), the function \(x\) is bounded by the solution \(z\) of the equality case
\[z^{\prime} =-az^{1+\rho}y^{-\rho}+bz, z(0) =x_{0}\]
of the first equation (33). This is a Bernoulli differential equation, with solution
\[x(t) \leq z(t)=\left[e^{-b\rho t}\left(a\rho\int_{0}^{t}e^{b\rho\tau}y( \tau)^{-\rho}\,d\tau+x_{0}^{-\rho}\right)\right]^{-\frac{1}{\rho}}.\]
Since \(y(t)\leq y_{0}\), in the relevant time interval this simplifies to
\[z(t)^{\rho} \leq e^{b\rho t}\left(a\rho\int_{0}^{t}e^{b\rho\tau}y_{0}^{-\rho }\,d\tau+x_{0}^{-\rho}\right)^{-1}\] \[=e^{b\rho t}\left(\frac{a}{b}\left(e^{b\rho t}-1\right)y_{0}^{- \rho}+x_{0}^{-\rho}\right)^{-1}\] \[=\left(\frac{a}{b}y_{0}^{-\rho}-\left(\frac{a}{b}y_{0}^{-\rho}-x _{0}^{-\rho}\right)e^{-b\rho t}\right)^{-1}\] \[=\underbrace{\frac{b}{a}y_{0}^{\rho}}_{=:A}\left(1-\underbrace{ \left(1-\frac{b}{a}\left(\frac{x_{0}}{y_{0}}\right)^{-\rho}\right)e^{-b\rho t }}_{=:B(t)}\right)^{-1},\]
which shows the first bound for \(x(t)\). We can estimate this further by
\[z(t)^{\rho} \leq\frac{A}{1-B(t)}=\frac{A[1-B(t)]}{1-B(t)}+\frac{AB(t)}{1-B(t) }=A+\frac{A}{1-B(t)}B(t).\]
In case \(B(t)\geq 0\), the function \(A/(1-B(t))\) is monotonically decreasing and thus with \(A/(1-B(0))=x_{0}^{\rho}\), we have
\[z(t)^{\rho} \leq A+\frac{A}{1-B(0)}B(t)=A+x_{0}^{\rho}B(t)\leq A+x_{0}^{\rho} e^{-b\rho t},\]
which shows the second bound for \(x(t)\) in the lemma.
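The comparison argument is easy to sanity-check numerically. The following minimal sketch (not part of the proof; all constants are assumptions, chosen so that \(ax_{0}^{\rho}\geq by_{0}^{\rho}\) and hence \(B(t)\geq 0\)) integrates the equality case of (33), (34) with forward Euler and verifies both claimed bounds along the trajectory.

```python
import numpy as np

# Forward-Euler check of Lemma 5.1 on the equality case of (33), (34).
a, b, c, d, rho = 2.0, 0.5, 1.0, 0.1, 1.0   # example constants (assumed)
x0, y0 = 4.0, 1.0                            # a*x0^rho >= b*y0^rho holds
A = (b / a) * y0**rho
x, y, dt = x0, y0, 1e-4
for t in np.arange(0.0, 5.0, dt):
    assert x >= (d / c) ** (2 / (2 * rho - 1)) * y0       # condition (35)
    assert y <= y0 + 1e-8                                  # y(t) <= y_0
    assert x**rho <= A + x0**rho * np.exp(-b * rho * t) + 1e-6
    dx = -a * x ** (1 + rho) * y ** (-rho) + b * x
    dy = -c * x**rho * y ** (1 - rho) + d * np.sqrt(x * y)
    x, y = x + dt * dx, y + dt * dy
print("bounds of Lemma 5.1 hold along the trajectory; final (x, y) =", (x, y))
```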
### Proof of Lemma 4.3: NTK Hölder continuity
The proof is technical but elementary. We start with upper bounds and Hölder continuity for simple objects, like hidden layers, and then compose these into bounds for derived objects, with results for the NTK at the end of the section.
Throughout this section, we use a bar \(\bar{\cdot}\) to denote a perturbation. In particular \(\bar{W}^{\ell}\) is a perturbed weight,
\[\bar{f}^{\ell+1}(x) =\bar{W}^{\ell}n_{\ell}^{-1/2}\sigma\left(\bar{f}^{\ell}(x) \right), \bar{f}^{1}(x) =\bar{W}^{0}Vx\]
is the neural network with perturbed weights and \(\bar{\hat{\Sigma}}\), \(\bar{\hat{\dot{\Sigma}}}\), \(\bar{\Gamma}\) and \(\bar{\hat{\Gamma}}\) are the kernels of the perturbed network. The bounds in this section depend on the operator norm of the weight matrices. At initialization, they are bounded \(\left\|W^{\ell}\right\|n_{\ell}^{-1/2}\lesssim 1\), with high probability. All perturbations of the weights that we need are close, \(\left\|W^{\ell}-\bar{W}^{\ell}\right\|n_{\ell}^{-1/2}\lesssim 1\), so that we may assume
\[\left\|W^{\ell}\right\|n_{\ell}^{-1/2} \lesssim 1 \tag{36}\] \[\left\|\bar{W}^{\ell}\right\|n_{\ell}^{-1/2} \lesssim 1 \tag{37}\]
In addition, we consider bounded domains
\[\left\|x\right\|\lesssim 1\quad\text{for all}\quad x\in D. \tag{38}\]
**Lemma 5.2**.: _Assume that \(\left\|x\right\|\lesssim 1\)._
1. _Assume that_ \(\sigma\) _satisfies the growth condition (_13_) and may be different in each layer. Assume the weights are bounded (_36_). Then_ \[\left\|f^{\ell}(x)\right\|\lesssim n_{0}^{1/2}\prod_{k=0}^{\ell-1}\left\|W^{k} \right\|n_{k}^{-1/2}.\]
2. _Assume that_ \(\sigma\) _satisfies the growth and Lipschitz conditions (_13_) and (_14_) and may be different in each layer. Assume the weights and perturbed weights are bounded (_36_), (_37_). Then_ \[\left\|f^{\ell}(x)-\bar{f}^{\ell}(x)\right\|\lesssim n_{0}^{1/2}\sum_{k=0}^{ \ell-1}\left\|W^{k}-\bar{W}^{k}\right\|n_{k}^{-1/2}\prod_{\begin{subarray}{c }j=0\\ j\neq k\end{subarray}}^{\ell-1}\max\left\{\left\|W^{j}\right\|,\,\left\|\bar{W }^{j}\right\|\right\}n_{j}^{-1/2}.\]
3. _Assume that_ \(\sigma\) _has bounded derivative (_15_) and may be different in each layer. Assume the weights are bounded (_36_). Then_ \[\left\|f^{\ell}(x)-f^{\ell}(\bar{x})\right\|\lesssim n_{0}^{1/2}\left[\prod_{k =0}^{\ell-1}\left\|W^{k}\right\|n_{k}^{-1/2}\right]\|x-\bar{x}\|.\]
Proof.:
1. For \(\ell=0\), we have \[\left\|f^{1}(x)\right\|=\left\|W^{0}Vx\right\|\leq n_{0}^{1/2}\left\|W^{0} \right\|n_{0}^{-1/2},\] where in the last step we have used that \(V\) has orthonormal columns and \(\left\|x\right\|\lesssim 1\). For \(\ell>0\), we have \[\left\|f^{\ell+1}\right\|=\left\|W^{\ell}n_{\ell}^{-1/2}\sigma \left(f^{\ell}\right)\right\|\leq\left\|W^{\ell}\right\|n_{\ell}^{-1/2}\left\| \sigma\left(f^{\ell}\right)\right\|\stackrel{{\eqref{eq:13}}}{{ \lesssim}}\left\|W^{\ell}\right\|n_{\ell}^{-1/2}\left\|f^{\ell}\right\|\] \[\stackrel{{\text{induction}}}{{\lesssim}}\left\|W^{ \ell}\right\|n_{\ell}^{-1/2}n_{0}^{1/2}\prod_{k=0}^{\ell-1}\left\|W^{k}\right\| n_{k}^{-1/2}=n_{0}^{1/2}\prod_{k=0}^{\ell}\left\|W^{k}\right\|n_{k}^{-1/2},\] where in the first step we have used the definition of \(f^{\ell+1}\), in the third the growth condition and in the fourth the induction hypothesis.
2. For \(\ell=0\) we have \[\left\|f^{1}-\bar{f}^{1}\right\|=\left\|[W^{0}-\bar{W}^{0}]Vx\right\|=n_{0}^{ 1/2}\left\|W^{0}-\bar{W}^{0}\right\|n_{0}^{-1/2},\] where in the last step we have used that \(V\) has orthonormal columns and \(\left\|x\right\|\lesssim 1\). For \(\ell>0\), we have \[\left\|f^{\ell+1}-\bar{f}^{\ell+1}\right\| =\left\|W^{\ell}n_{\ell}^{-1/2}\sigma\left(f^{\ell}\right)-\bar{ W}^{\ell}n_{\ell}^{-1/2}\sigma\left(\bar{f}^{\ell}\right)\right\|\] \[\leq\left\|W^{\ell}-\bar{W}^{\ell}\right\|n_{\ell}^{-1/2}\left\| \sigma\left(f^{\ell}\right)\right\|+\left\|\bar{W}^{\ell}\right\|n_{\ell}^{-1/2}\left\|\sigma \left(f^{\ell}\right)-\sigma\left(\bar{f}^{\ell}\right)\right\|=:I+II.\] For the first term, the growth condition (13) implies \(\left\|\sigma\left(f^{\ell}\right)\right\|\lesssim\left\|f^{\ell}\right\|\) and thus the first part of the Lemma yields \[I\lesssim\left\|W^{\ell}-\bar{W}^{\ell}\right\|n_{\ell}^{-1/2}n_{0}^{1/2}\prod _{k=0}^{\ell-1}\left\|W^{k}\right\|n_{k}^{-1/2}.\] For the second term, we have by Lipschitz continuity (14) and induction \[II=\left\|\bar{W}^{\ell}\right\|n_{\ell}^{-1/2}\left\|\sigma \left(f^{\ell}\right)-\sigma\left(\bar{f}^{\ell}\right)\right\|\lesssim \left\|\bar{W}^{\ell}\right\|n_{\ell}^{-1/2}\left\|f^{\ell}-\bar{f}^{\ell}\right\|\] \[\qquad\lesssim n_{0}^{1/2}\sum_{k=0}^{\ell-1}\left\|W^{k}-\bar{W} ^{k}\right\|n_{k}^{-1/2}\prod_{\begin{subarray}{c}j=0\\ j\neq k\end{subarray}}^{\ell}\max\left\{\left\|W^{j}\right\|,\,\left\|\bar{W}^{j} \right\|\right\}n_{j}^{-1/2}.\] By \(I\) and \(II\) we obtain \[\left\|f^{\ell+1}-\bar{f}^{\ell+1}\right\|\lesssim n_{0}^{1/2}\sum_{k=0}^{ \ell}\left\|W^{k}-\bar{W}^{k}\right\|n_{k}^{-1/2}\prod_{\begin{subarray}{c}j= 0\\ j\neq k\end{subarray}}^{\ell}\max\left\{\left\|W^{j}\right\|,\,\left\|\bar{W}^{j} \right\|\right\}n_{j}^{-1/2},\] which shows the lemma.
3. Follows from the mean value theorem because by Lemma 5.3 below the first derivatives are uniformly bounded.
**Lemma 5.3**.: _Assume that \(\sigma\) has bounded derivative (15) and may be different in each layer. Assume the weights are bounded (36). Then_
\[\left\|Df^{\ell}(x)\right\|\lesssim n_{0}^{1/2}\prod_{k=0}^{\ell-1}\left\|W^{k} \right\|n_{k}^{-1/2}.\]
Proof.: For \(\ell=0\), we have
\[\left\|Df^{1}(x)\right\|=\left\|W^{0}VDx\right\|\leq n_{0}^{1/2}\left\|W^{0} \right\|n_{0}^{-1/2},\]
where in the last step we have used that \(V\) has orthonormal columns and \(\left\|Dx\right\|=\left\|I\right\|=1\). For \(\ell>0\), we have
\[\left\|Df^{\ell+1}\right\| =\left\|W^{\ell}n_{\ell}^{-1/2}D\sigma\left(f^{\ell}\right)\right\| \leq\left\|W^{\ell}\right\|n_{\ell}^{-1/2}\left\|\dot{\sigma}\left(f^{\ell} \right)\odot Df^{\ell}\right\|\] \[\lesssim\left\|W^{\ell}\right\|n_{\ell}^{-1/2}\left\|Df^{\ell} \right\|\lesssim\left\|W^{\ell}\right\|n_{\ell}^{-1/2}n_{0}^{1/2}\prod_{k=0}^{ \ell-1}\left\|W^{k}\right\|n_{k}^{-1/2}=n_{0}^{1/2}\prod_{k=0}^{\ell}\left\|W^{k} \right\|n_{k}^{-1/2},\]
where in the first step we have used the definition of \(f^{\ell+1}\), in the third the boundedness (15) of \(\dot{\sigma}\) and in the fourth the induction hypothesis.
_Remark 5.4_.: An argument analogous to Lemma 5.3 does not show that the derivative is Lipschitz or, similarly, that second derivatives \(\left\|\partial_{x_{i}}\partial_{x_{j}}f^{\ell}\right\|\) are bounded. Indeed, the argument uses that
\[\left\|\partial_{x_{i}}\sigma\left(f^{\ell}\right)\right\|=\left\|\dot{\sigma }\left(f^{\ell}\right)\odot\partial_{x_{i}}f^{\ell}\right\|\leq\left\|\dot{ \sigma}\left(f^{\ell}\right)\right\|_{\infty}\left\|\partial_{x_{i}}f^{\ell} \right\|,\]
where we bound the first factor by the upper bound of \(\dot{\sigma}\) and the second by induction. However, higher derivatives produce products
\[\left\|\partial_{x_{i}}\partial_{x_{j}}\sigma\left(f^{\ell}\right)\right\| =\left\|\dot{\sigma}\left(f^{\ell}\right)\odot\partial_{x_{i}} \partial_{x_{j}}f^{\ell}+\sigma^{(2)}\left(f^{\ell}\right)\odot\partial_{x_{i} }f^{\ell}\odot\partial_{x_{j}}f^{\ell}\right\|\] \[\leq\left\|\dot{\sigma}\left(f^{\ell}\right)\right\|_{\infty} \left\|\partial_{x_{i}}\partial_{x_{j}}f^{\ell}\right\|+\left\|\sigma^{(2)} \left(f^{\ell}\right)\right\|_{\infty}\left\|\partial_{x_{i}}f^{\ell}\odot \partial_{x_{j}}f^{\ell}\right\|\]
With bounded weights (36) the hidden layers are of size \(\left\|\partial_{x_{i}}f^{\ell}\right\|\lesssim n_{0}^{1/2}\), but a naive estimate of their product by Cauchy-Schwarz and embedding, \(\left\|\partial_{x_{i}}f^{\ell}\odot\partial_{x_{j}}f^{\ell}\right\|\leq\left\| \partial_{x_{i}}f^{\ell}\right\|_{\ell_{4}}\|\partial_{x_{j}}f^{\ell}\|_{\ell_{ 4}}\leq\|\partial_{x_{i}}f^{\ell}\|\,\|\partial_{x_{j}}f^{\ell}\|\lesssim n_{0}\), is much larger.
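This size gap is visible in a toy computation: two vectors of Euclidean norm \(\sqrt{m}\) can have an elementwise product of norm anywhere between \(\sqrt{m}\) and \(m\), depending on how concentrated their entries are (a standalone illustration, not tied to any particular network).

```python
import numpy as np

# Norm of an elementwise product: spike vs. flat vectors of norm sqrt(m).
m = 10_000
spike = np.zeros(m)
spike[0] = np.sqrt(m)                 # ||spike|| = sqrt(m), one large entry
flat = np.ones(m)                     # ||flat||  = sqrt(m), spread out
for name, u in (("spike", spike), ("flat", flat)):
    print(name, np.linalg.norm(u), np.linalg.norm(u * u))
# spike: ||u|| = 100, ||u * u|| = 10000  (~ m)
# flat:  ||u|| = 100, ||u * u|| = 100    (~ sqrt(m))
```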
Given the difficulties in the last remark, we can still show that \(f^{\ell}\) is Hölder continuous with respect to the weights in a Hölder norm with respect to \(x\).
**Lemma 5.5**.: _Assume that \(\sigma\) satisfies the growth and Lipschitz conditions (13), (14) and may be different in each layer. Assume the weights, perturbed weights and domain are bounded (36), (37), (38). Then for \(0<\alpha<1\)_
\[\left\|\sigma\left(f^{\ell}\right)\right\|_{C^{0;\alpha}} \lesssim n_{0}^{1/2},\] \[\left\|\sigma\left(\bar{f}^{\ell}\right)\right\|_{C^{0;\alpha}} \lesssim n_{0}^{1/2},\] \[\left\|\sigma\left(f^{\ell}\right)-\sigma\left(\bar{f}^{\ell} \right)\right\|_{C^{0;\alpha}} \lesssim n_{0}^{1/2}\left[\sum_{k=0}^{\ell-1}\left\|W^{k}-\bar{W }^{k}\right\|n_{k}^{-1/2}\right]^{1-\alpha}.\]
Proof.: By the growth condition (13) and the Lipschitz continuity (14) of the activation function, we have
\[\left\|\sigma\left(f^{\ell}\right)\right\|_{C^{0}} \lesssim\left\|f^{\ell}\right\|_{C^{0}}, \left\|\sigma\left(f^{\ell}\right)\right\|_{C^{0;1}} \lesssim\left\|f^{\ell}\right\|_{C^{0;1}}.\]
Thus the interpolation inequality in Lemma 6.3 implies
\[\left\|\sigma\left(f^{\ell}\right)\right\|_{C^{0;\alpha}} \lesssim\left\|\sigma\left(f^{\ell}\right)\right\|_{C^{0}}^{1- \alpha}\left\|\sigma\left(f^{\ell}\right)\right\|_{C^{0;1}}^{\alpha}\lesssim \left\|f^{\ell}\right\|_{C^{0}}^{1-\alpha}\left\|f^{\ell}\right\|_{C^{0;1}}^{ \alpha}\lesssim n_{0}^{1/2},\]
where in the last step we have used the bounds from Lemma 5.2 together with \(\left\|W^{\ell}\right\|n_{\ell}^{-1/2}\lesssim 1\) and \(\left\|\bar{W}^{\ell}\right\|n_{\ell}^{-1/2}\lesssim 1\) from Assumptions (36), (37). Likewise, by the interpolation inequality in Lemma 6.3 we have
\[\left\|\sigma\left(f^{\ell}\right)-\sigma\left(\bar{f}^{\ell} \right)\right\|_{C^{0;\alpha}} \lesssim\left\|\sigma\left(f^{\ell}\right)-\sigma\left(\bar{f}^{ \ell}\right)\right\|_{C^{0}}^{1-\alpha}\left\|\sigma\left(f^{\ell}\right)- \sigma\left(\bar{f}^{\ell}\right)\right\|_{C^{0;1}}^{\alpha}\] \[\lesssim\left\|f^{\ell}-\bar{f}^{\ell}\right\|_{C^{0}}^{1-\alpha }\max\left\{\left\|f^{\ell}\right\|_{C^{0;1}},\,\left\|\bar{f}^{\ell} \right\|_{C^{0;1}}\right\}^{\alpha}\] \[\lesssim n_{0}^{1/2}\left[\sum_{k=0}^{\ell-1}\left\|W^{k}-\bar{W }^{k}\right\|n_{k}^{-1/2}\right]^{1-\alpha},\]
where in the third step we have used that \(\sigma\) is Lipschitz and in the last step the bounds from Lemma 5.2 together with the bounds \(\left\|W^{\ell}\right\|n_{\ell}^{-1/2}\lesssim 1\) and \(\left\|\bar{W}^{\ell}\right\|n_{\ell}^{-1/2}\lesssim 1\) from Assumptions (36), (37).
**Lemma 5.6**.: _Assume that \(\sigma\) satisfies the growth and Lipschitz conditions (13), (14) and may be different in each layer. Assume the weights, perturbed weights
_and domain are bounded (36), (37), (38). Then for \(0<\alpha,\beta<1\)_
\[\left\|\hat{\Sigma}^{\ell}\right\|_{C^{0;\alpha,\beta}} \lesssim\frac{n_{0}}{n_{\ell}},\] \[\left\|\bar{\hat{\Sigma}}^{\ell}\right\|_{C^{0;\alpha,\beta}} \lesssim\frac{n_{0}}{n_{\ell}},\] \[\left\|\hat{\Sigma}^{\ell}-\bar{\hat{\Sigma}}^{\ell}\right\|_{C^{0; \alpha,\alpha}} \lesssim\frac{n_{0}}{n_{\ell}}\left[\sum_{k=0}^{\ell-1}\left\|W^{k }-\bar{W}^{k}\right\|n_{k}^{-1/2}\right]^{1-\alpha}.\]
Proof.: Throughout the proof, we abbreviate
\[f^{\ell}=f^{\ell}(x),\qquad\bar{f}^{\ell}=\bar{f}^{\ell}(x),\qquad\tilde{f}^{ \ell}=f^{\ell}(y),\qquad\tilde{\bar{f}}^{\ell}=\bar{f}^{\ell}(y),\]
for two independent variables \(x\) and \(y\). Then by definition (12) of \(\hat{\Sigma}^{\ell}\)
\[\left\|\hat{\Sigma}^{\ell}\right\|_{C^{0;\alpha,\beta}}=\frac{1}{n_{\ell}} \left\|\sigma\left(f^{\ell}\right)^{T}\sigma\left(\tilde{f}^{\ell}\right) \right\|_{C^{0;\alpha,\beta}} \leq\frac{1}{n_{\ell}}\left\|\sigma\left(f^{\ell}\right)\right\| _{C^{0;\alpha}}\left\|\sigma\left(\tilde{f}^{\ell}\right)\right\|_{C^{0;\beta}} \lesssim\frac{n_{0}}{n_{\ell}},\]
where in the second step we have used the product identity Item 3 in Lemma 6.3 and in the last step Lemma 5.5. The bound for \(\left\|\bar{\hat{\Sigma}}^{\ell}\right\|_{C^{0;\alpha,\beta}}\) follows analogously. Likewise for \(\alpha=\beta\)
\[\left\|\hat{\Sigma}^{\ell}-\bar{\hat{\Sigma}}^{\ell}\right\|_{C^{ 0;\alpha,\alpha}} =\frac{1}{n_{\ell}}\left\|\sigma\left(f^{\ell}\right)^{T}\sigma \left(\tilde{f}^{\ell}\right)-\sigma\left(\bar{f}^{\ell}\right)^{T}\sigma \left(\tilde{\bar{f}}^{\ell}\right)\right\|_{C^{0;\alpha,\alpha}}\] \[=\frac{1}{n_{\ell}}\left\|\left[\sigma\left(f^{\ell}\right)- \sigma\left(\bar{f}^{\ell}\right)\right]^{T}\sigma\left(\tilde{f}^{\ell}\right) +\sigma\left(\bar{f}^{\ell}\right)^{T}\left[\sigma\left(\tilde{f}^{\ell} \right)-\sigma\left(\tilde{\bar{f}}^{\ell}\right)\right]\right\|_{C^{0;\alpha, \alpha}}\] \[\leq\frac{1}{n_{\ell}}\left\|\left[\sigma\left(f^{\ell}\right)- \sigma\left(\bar{f}^{\ell}\right)\right]^{T}\sigma\left(\tilde{f}^{\ell}\right) \right\|_{C^{0;\alpha,\alpha}}+\frac{1}{n_{\ell}}\left\|\sigma\left(\bar{f}^{\ell}\right)^{T} \left[\sigma\left(\tilde{f}^{\ell}\right)-\sigma\left(\tilde{\bar{f}}^{\ell} \right)\right]\right\|_{C^{0;\alpha,\alpha}}\] \[=\frac{2}{n_{\ell}}\left\|\left[\sigma\left(f^{\ell}\right)- \sigma\left(\bar{f}^{\ell}\right)\right]^{T}\sigma\left(\tilde{f}^{\ell}\right) \right\|_{C^{0;\alpha,\alpha}},\]
where in the last step we have used symmetry in \(x\) and \(y\). Thus, by the product identity Item 3 in Lemma 6.3, we obtain
\[\left\|\hat{\Sigma}^{\ell}-\bar{\hat{\Sigma}}^{\ell}\right\|_{C^{ 0;\alpha,\alpha}} \leq\frac{2}{n_{\ell}}\left\|\sigma\left(f^{\ell}\right)-\sigma \left(\bar{f}^{\ell}\right)\right\|_{C^{0;\alpha}}\left\|\sigma\left(\tilde{f}^{ \ell}\right)\right\|_{C^{0;\alpha}}\] \[\lesssim\frac{n_{0}}{n_{\ell}}\left[\sum_{k=0}^{\ell-1}\left\|W^ {k}-\bar{W}^{k}\right\|n_{k}^{-1/2}\right]^{1-\alpha},\]
where in the last step we have used Lemma 5.5.
**Lemma 5.7** (Lemma 4.3 restated from the overview).: _Assume that \(\sigma\) and \(\dot{\sigma}\) satisfy the growth and Lipschitz conditions (13), (14) and may be different in each
layer. Assume the weights, perturbed weights and domain are bounded (23) and \(n_{L}\sim n_{L-1}\sim\cdots\sim n_{0}\). Then for \(0<\alpha<1\)_
\[\left\|\hat{\Gamma}\right\|_{C^{0;\alpha,\alpha}} \lesssim 1\] \[\left\|\bar{\hat{\Gamma}}\right\|_{C^{0;\alpha,\alpha}} \lesssim 1\] \[\left\|\hat{\Gamma}-\bar{\hat{\Gamma}}\right\|_{C^{0;\alpha, \alpha}} \lesssim\frac{n_{0}}{n_{L}}\left[\sum_{k=0}^{L-1}\left\|W^{k}-\bar {W}^{k}\right\|n_{k}^{-1/2}\right]^{1-\alpha}.\]
Proof.: By Lemma 5.6 and \(n_{\ell}\sim n_{0}\), we have
\[\left\|\hat{\Sigma}^{\ell}\right\|_{C^{0;\alpha,\alpha}},\left\|\bar{\hat{ \Sigma}}^{\ell}\right\|_{C^{0;\alpha,\alpha}} \lesssim 1, \left\|\hat{\Sigma}^{\ell}-\bar{\hat{\Sigma}}^{\ell}\right\|_{C^{ 0;\alpha,\alpha}} \lesssim\frac{n_{0}}{n_{\ell}}\left[\sum_{k=0}^{\ell-1}\left\|W^{k}-\bar{W} ^{k}\right\|n_{k}^{-1/2}\right]^{1-\alpha}.\]
Since \(\dot{\sigma}\) satisfies the same assumptions as \(\sigma\), the same lemma provides
\[\left\|\hat{\dot{\Sigma}}^{\ell}\right\|_{C^{0;\alpha,\alpha}},\left\|\bar{\hat{ \dot{\Sigma}}}^{\ell}\right\|_{C^{0;\alpha,\alpha}} \lesssim 1, \left\|\hat{\dot{\Sigma}}^{\ell}-\bar{\hat{\dot{\Sigma}}}^{\ell}\right\|_{C^{0; \alpha,\alpha}} \lesssim\frac{n_{0}}{n_{\ell}}\left[\sum_{k=0}^{\ell-1}\left\|W^{k}-\bar{W}^{k }\right\|n_{k}^{-1/2}\right]^{1-\alpha}.\]
Furthermore, by Lemma 4.1, we have
\[\hat{\Gamma}(x,y)=\hat{\dot{\Sigma}}^{L}(x,y)\,\hat{\Sigma}^{L-1}(x,y).\]
Thus, since Hölder spaces are closed under products (Lemma 6.3, Item 4), it follows that
\[\left\|\hat{\Gamma}-\bar{\hat{\Gamma}}\right\|_{C^{0;\alpha, \alpha}} =\left\|\hat{\dot{\Sigma}}^{L}\hat{\Sigma}^{L-1}-\bar{\hat{\dot{ \Sigma}}}^{L}\bar{\hat{\Sigma}}^{L-1}\right\|_{C^{0;\alpha,\alpha}}\] \[\leq\left\|\left[\hat{\dot{\Sigma}}^{L}-\bar{\hat{\dot{\Sigma}}}^ {L}\right]\hat{\Sigma}^{L-1}\right\|_{C^{0;\alpha,\alpha}}+\left\|\bar{\hat{ \dot{\Sigma}}}^{L}\left[\hat{\Sigma}^{L-1}-\bar{\hat{\Sigma}}^{L-1}\right] \right\|_{C^{0;\alpha,\alpha}}\] \[\leq\left\|\hat{\dot{\Sigma}}^{L}-\bar{\hat{\dot{\Sigma}}}^{L} \right\|_{C^{0;\alpha,\alpha}}\left\|\hat{\Sigma}^{L-1}\right\|_{C^{0;\alpha, \alpha}}+\left\|\bar{\hat{\dot{\Sigma}}}^{L}\right\|_{C^{0;\alpha,\alpha}} \left\|\hat{\Sigma}^{L-1}-\bar{\hat{\Sigma}}^{L-1}\right\|_{C^{0;\alpha, \alpha}}\] \[\lesssim\frac{n_{0}}{n_{L}}\left[\sum_{k=0}^{L-1}\left\|W^{k}- \bar{W}^{k}\right\|n_{k}^{-1/2}\right]^{1-\alpha},\]
where in the last step we have used Lemma 5.6 and \(n_{L}\sim n_{L-1}\).
### Proof of Lemma 4.4: Concentration
Concentration for the NTK
\[\Gamma(x,y):=\dot{\Sigma}^{L}(x,y)\Sigma^{L-1}(x,y)\]
is derived from concentration for the forward kernels \(\dot{\Sigma}^{L}\) and \(\Sigma^{L-1}\). They are shown inductively by splitting off the expectation \(\mathbb{E}_{\ell}\left[\cdot\right]\) with respect to the last layer \(W^{\ell}\) in
\[\left\|\hat{\Sigma}^{\ell+1}-\Sigma^{\ell+1}\right\|_{C^{0;\alpha,\beta}}\leq \left\|\hat{\Sigma}^{\ell+1}-\mathbb{E}_{\ell}\left[\hat{\Sigma}^{\ell+1} \right]\right\|_{C^{0;\alpha,\beta}}+\left\|\mathbb{E}_{\ell}\left[\hat{ \Sigma}^{\ell+1}\right]-\Sigma^{\ell+1}\right\|_{C^{0;\alpha,\beta}}.\]
Concentration for the first term is shown in Section 5.3.1 by a chaining argument and bounds for the second term in Section 5.3.2 with an argument similar to [18]. The results are combined into concentration for the NTK in Section 5.3.3.
#### 5.3.1 Concentration of the Last Layer
We define
\[\hat{\Lambda}_{r}^{\ell}(x,y):=\sigma\left(f_{r}^{\ell}(x)\right)\sigma\left( f_{r}^{\ell}(y)\right)\]
as the random variables that constitute the kernel
\[\hat{\Sigma}^{\ell}(x,y)=\frac{1}{n_{\ell}}\sum_{r=1}^{n_{\ell}}\hat{\Lambda} _{r}^{\ell}(x,y)=\frac{1}{n_{\ell}}\sum_{r=1}^{n_{\ell}}\sigma\left(f_{r}^{ \ell}(x)\right)\sigma\left(f_{r}^{\ell}(y)\right).\]
For fixed weights \(W^{0},\ldots,W^{\ell-2}\) and random \(W^{\ell-1}\), all \(\hat{\Lambda}_{r}^{\ell}\), \(r\in[n_{\ell}]\) are random variables dependent only on the random vector \(W_{r}^{\ell-1}\) and thus independent. Hence, we can show concentration uniform in \(x\) and \(y\) by chaining. For Dudley's inequality, one would bound the increments
\[\left\|\hat{\Lambda}_{r}^{\ell}(x,y)-\hat{\Lambda}_{r}^{\ell}(\bar{x},\bar{y} )\right\|_{\psi_{2}}\lesssim\|x-\bar{x}\|^{\alpha}+\|y-\bar{y}\|^{\alpha},\]
where the right hand side is a metric for \(\alpha\leq 1\). However, this is not sufficient in our case. First, due to the product in the definition of \(\hat{\Lambda}_{r}^{\ell}\), we can only bound the \(\psi_{1}\) norm, and second, this leads to concentration in the supremum norm \(\|\hat{\Lambda}_{r}^{\ell}\|_{C^{0}}\), whereas we need a Hölder norm. Therefore, we bound the finite difference operators
\[\left\|\Delta_{x,h_{x}}^{\alpha}\Delta_{y,h_{y}}^{\beta}\hat{ \Lambda}_{r}^{\ell}(x,y)-\Delta_{x,\bar{h}_{x}}^{\alpha}\Delta_{y,\bar{h}_{y}} ^{\beta}\hat{\Lambda}_{r}^{\ell}(\bar{x},\bar{y})\right\|_{\psi_{1}}\\ \lesssim\|x-\bar{x}\|^{\alpha}+\|h_{x}-\bar{h}_{x}\|^{\alpha}+\| y-\bar{y}\|^{\beta}+\|h_{y}-\bar{h}_{y}\|^{\beta},\]
which can be conveniently expressed by the Orlicz space valued Hölder norm
\[\left\|\Delta_{x}^{\alpha}\Delta_{y}^{\beta}\hat{\Lambda}_{r}^{\ell}\right\| _{C^{0;\alpha,\beta}(\Delta D\times\Delta D;\psi_{1})}\lesssim 1,\]
with the following notations:
1. Finite difference operators \(\Delta^{\alpha}\colon(x,h)\to\|h\|^{-\alpha}[f(x+h)-f(x)]\), depending both on \(x\) and \(h\), with partial application to the two variables \(x\) and \(y\) denoted by \(\Delta_{x}^{\alpha}\) and \(\Delta_{y}^{\alpha}\), respectively. See Section 6.1.
2. Domain \(\Delta D\) consisting of all pairs \((x,h)\) for which \(x,x+h\in D\), see (48). Likewise the domain \(\Delta D\times\Delta D\) consists of all feasible \(x\), \(h_{x}\), \(y\) and \(h_{y}\).
3. Following the definitions in Section 6.1, we use the Hölder space \(C^{0;\alpha,\beta}(\Delta D\times\Delta D;L_{\psi_{i}})\), \(i=1,2\), with values in the Orlicz spaces \(L_{\psi_{i}}\) of random variables for which the \(\|\cdot\|_{\psi_{i}}\) norms are finite. For convenience, we abbreviate this by \(C^{0;\alpha,\beta}(\Delta D\times\Delta D;\psi_{i})\).
Given the above inequalities, we derive concentration by the chaining results for mixed tail random variables in [16], summarized in Corollary 6.12.
**Lemma 5.8**.: _Assume for \(k=0,\ldots,\ell-2\) the weights \(W^{k}\) are fixed and bounded \(\|W^{k}\|n_{k}^{-1/2}\lesssim 1\). Assume that \(W^{\ell-1}\) is i.i.d. sub-gaussian with \(\|W_{ij}^{\ell-1}\|_{\psi_{2}}\lesssim 1\). Let \(r\in[n_{\ell}]\)._
1. _Assume that_ \(\sigma\) _satisfies the growth condition (_13_) and may be different in each layer. Then_ \[\left\|\sigma\left(f_{r}^{\ell}(x)\right)\right\|_{\psi_{2}}\lesssim\left( \frac{n_{0}}{n_{\ell-1}}\right)^{1/2}.\]
2. _Assume that_ \(\sigma\) _has bounded derivative (_15_) and may be different in each layer. Then_ \[\left\|\sigma\left(f_{r}^{\ell}(x)\right)-\sigma\left(f_{r}^{\ell}(\bar{x}) \right)\right\|_{\psi_{2}}\lesssim\left(\frac{n_{0}}{n_{\ell-1}}\right)^{1/2} \|x-\bar{x}\|.\]
Proof.:
1. Since for frozen \(W^{0},\ldots,W^{\ell-2}\) \[W_{r\cdot}^{\ell-1}n_{\ell-1}^{-1/2}\sigma\left(f^{\ell-1}\right)=\sum_{s=1}^ {n_{\ell-1}}W_{rs}^{\ell-1}n_{\ell-1}^{-1/2}\sigma\left(f_{s}^{\ell-1}\right)\] is a sum of independent random variables \(W_{rs}^{\ell-1}n_{\ell-1}^{-1/2}\sigma\left(f_{s}^{\ell-1}\right)\), \(s\in[n_{\ell-1}]\), by Hoeffding's inequality (general version for sub-gaussian norms, see e.g. [67, Proposition 2.6.1]) we have \[\left\|W_{r\cdot}^{\ell-1}n_{\ell-1}^{-1/2}\sigma\left(f^{\ell-1}\right)\right\| _{\psi_{2}}\lesssim n_{\ell-1}^{-1/2}\left\|\sigma\left(f^{\ell-1}\right) \right\|.\] Thus \[\left\|\sigma\left(f_{r}^{\ell}\right)\right\|_{\psi_{2}}\lesssim \left\|f_{r}^{\ell}\right\|_{\psi_{2}}=\left\|W_{r\cdot}^{\ell-1}n_{\ell-1}^{- 1/2}\sigma\left(f^{\ell-1}\right)\right\|_{\psi_{2}}\\ \leq n_{\ell-1}^{-1/2}\left\|\sigma\left(f^{\ell-1}\right)\right\| \leq n_{\ell-1}^{-1/2}\left\|f^{\ell-1}\right\|\lesssim\left(\frac{n_{0}}{n_{ \ell-1}}\right)^{1/2},\]
where in the first step we have used the growth condition and Lemma 6.7, in the fourth step the growth condition and in the last step the upper bounds from Lemma 5.2.
2. Using Hoeffding's inequality analogous to the previous item, we have \[\left\|W_{r.}^{\ell-1}n_{\ell-1}^{-1/2}\left[\sigma\left(f^{\ell-1} (x)\right)-\sigma\left(f^{\ell-1}(\bar{x})\right)\right]\right\|_{\psi_{2}}\\ \lesssim n_{\ell-1}^{-1/2}\left\|\sigma\left(f^{\ell-1}(x)\right) -\sigma\left(f^{\ell-1}(\bar{x})\right)\right\|\] and \[\left\|\sigma\left(f_{r}^{\ell}(x)\right)-\sigma\left(f_{r}^{\ell} (\bar{x})\right)\right\|_{\psi_{2}} \lesssim\left\|f_{r}^{\ell}(x)-f_{r}^{\ell}(\bar{x})\right\|_{ \psi_{2}}\] \[=\left\|W_{r.}^{\ell-1}n_{\ell-1}^{-1/2}\left[\sigma\left(f^{\ell -1}(x)\right)-\sigma\left(f^{\ell-1}(\bar{x})\right)\right]\right\|_{\psi_{2}}\] \[\lesssim n_{\ell-1}^{-1/2}\left\|\sigma\left(f^{\ell-1}(x) \right)-\sigma\left(f^{\ell-1}(\bar{x})\right)\right\|\] \[\lesssim n_{\ell-1}^{-1/2}\left\|f^{\ell-1}(x)-f^{\ell-1}(\bar{x})\right\|\] \[\lesssim\left(\frac{n_{0}}{n_{\ell-1}}\right)^{1/2}\|x-\bar{x}\|,\] where in the first step we have used the Lipschitz condition and Lemma 6.7, in the fourth step the Lipschitz condition and in the last step the Lipschitz bounds from Lemma 5.2.
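The Hoeffding step can be illustrated by simulation. In the sketch below (sizes and seed are arbitrary assumptions), \(v\) plays the role of the frozen vector \(\sigma(f^{\ell-1})\); the scalar \(Z=\langle W_{r\cdot},v\rangle n^{-1/2}\) is then exactly Gaussian with standard deviation \(n^{-1/2}\|v\|\), matching the claimed \(\psi_{2}\) bound up to constants.

```python
import numpy as np

# Empirical illustration of the Hoeffding step in Lemma 5.8.
rng = np.random.default_rng(1)
n = 512
v = np.abs(rng.normal(size=n))            # stand-in for sigma(f^{l-1})
W = rng.normal(size=(10_000, n))          # rows play the role of W_{r.}
Z = W @ v / np.sqrt(n)
print("empirical std:", Z.std(), " predicted:", np.linalg.norm(v) / np.sqrt(n))
```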
**Lemma 5.9**.: _Let \(U\) and \(V\) be two normed spaces and \(D\subset U\). For all \(0\leq\alpha\leq\frac{1}{2}\), we have_
\[\left\|\Delta^{\alpha}f\right\|_{C^{0;\alpha}(\Delta D;V)}\leq 4\left\|f \right\|_{C^{0;2\alpha}(D;V)},\]
_with \(\Delta D\) defined in (48)._
Proof.: Throughout the proof, let \(C^{0;2\alpha}=C^{0;2\alpha}(D;V)\) and \(|\cdot|=\|\cdot\|_{U}\) or \(|\cdot|=\|\cdot\|_{V}\) depending on context. Unraveling the definitions, for every \((x,h),(\bar{x},\bar{h})\in\Delta D\), we have to show
\[\left|\Delta_{h}^{\alpha}f(x)-\Delta_{\bar{h}}^{\alpha}f(\bar{x})\right|\leq 4\|f \|_{C^{0;2\alpha}}\max\{|x-\bar{x}|,|h-\bar{h}|\}^{\alpha}.\]
We consider two cases. First, assume that \(|h|\leq\max\{|x-\bar{x}|,|h-\bar{h}|\}\) and \(\bar{h}\) is arbitrary. Then \(|\bar{h}|\leq|\bar{h}-h|+|h|\leq 2\max\{|x-\bar{x}|,|h-\bar{h}|\}\) and thus
\[\left|\Delta_{h}^{\alpha}f(x)-\Delta_{\bar{h}}^{\alpha}f(\bar{x}) \right|\leq|\Delta_{h}^{\alpha}f(x)|+\left|\Delta_{\bar{h}}^{\alpha}f(\bar{x})\right| \\ \leq\|f\|_{C^{0;2\alpha}}|h|^{\alpha}+\|f\|_{C^{0;2\alpha}}|\bar{h }|^{\alpha}\leq 3\|f\|_{C^{0;2\alpha}}\max\{|x-\bar{x}|,|h-\bar{h}|\}^{\alpha}.\]
In the second case, assume that \(\max\{|x-\bar{x}|,|h-\bar{h}|\}\leq|h|\) and without loss of generality that \(|\bar{h}|\leq|h|\). Then
\[\left|\Delta_{h}^{\alpha}f(x)-\Delta_{\bar{h}}^{\alpha}f(\bar{x})\right| =\left|[f(x+h)-f(x)]|h|^{-\alpha}-[f(\bar{x}+\bar{h})-f(\bar{x} )]|\bar{h}|^{-\alpha}\right|\] \[\leq\left|f(x+h)-f(x)-f(\bar{x}+\bar{h})+f(\bar{x})\right||h|^{- \alpha}+\left|f(\bar{x}+\bar{h})-f(\bar{x})\right|\left||h|^{-\alpha}-|\bar{h}|^{- \alpha}\right|\] \[=:I+II.\]
For the first term, we have
\[I \leq\left|f(x+h)-f(x)-f(\bar{x}+\bar{h})+f(\bar{x})\right|\left| h\right|^{-\alpha}\] \[\leq\|f\|_{C^{0;2\alpha}}\left[|x+h-\bar{x}-\bar{h}|^{2\alpha}+| x-\bar{x}|^{2\alpha}\right]|h|^{-\alpha}\] \[\leq 3\|f\|_{C^{0;2\alpha}}\max\left\{|x-\bar{x}|^{2\alpha},|h- \bar{h}|^{2\alpha}\right\}|h|^{-\alpha}\] \[\leq 3\|f\|_{C^{0;2\alpha}}\max\left\{|x-\bar{x}|,|h-\bar{h}| \right\}^{\alpha}.\]
For the second term, since \(\alpha\leq 1\), we have
\[II \leq\|f\|_{C^{0;2\alpha}}\left|\bar{h}\right|^{2\alpha}\left| |h|^{-\alpha}-|\bar{h}|^{-\alpha}\right|\] \[\leq\|f\|_{C^{0;2\alpha}}|h|^{\alpha}|\bar{h}|^{\alpha} \left||h|^{-\alpha}-|\bar{h}|^{-\alpha}\right|\] \[=\|f\|_{C^{0;2\alpha}}\left||h|^{\alpha}-|\bar{h}|^{\alpha}\right|\] \[\leq\|f\|_{C^{0;2\alpha}}|h-\bar{h}|^{\alpha}.\]
Combining all inequalities shows the result.
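On a concrete function the lemma can be checked by sampling. The sketch below takes \(f(x)=|x|^{2\alpha}\) on \(D=[-1,1]\) with \(\alpha=1/4\) and compares a sampled Hölder norm of \(\Delta^{\alpha}f\) on \(\Delta D\) with \(4\|f\|_{C^{0;2\alpha}}\); using sup plus seminorm as the Hölder norm is an assumption here and may differ from the Section 6.1 convention by constants.

```python
import numpy as np

# Sampled check of Lemma 5.9 for f(x) = |x|^(2*alpha) on D = [-1, 1].
rng = np.random.default_rng(5)
alpha = 0.25
f = lambda t: np.abs(t) ** (2 * alpha)

xs, hs = rng.uniform(-1, 1, 4000), rng.uniform(-1, 1, 4000)
keep = np.abs(xs + hs) <= 1.0                # (x, h) with x, x + h in D
x, h = xs[keep][:500], hs[keep][:500]
Df = (f(x + h) - f(x)) / np.abs(h) ** alpha  # Delta^alpha f on Delta D

dist = np.maximum(np.abs(x[:, None] - x[None, :]),
                  np.abs(h[:, None] - h[None, :]))
np.fill_diagonal(dist, np.inf)
lhs = np.abs(Df).max() + (np.abs(Df[:, None] - Df[None, :]) / dist**alpha).max()

dx = np.abs(x[:, None] - x[None, :])
np.fill_diagonal(dx, np.inf)
fx = f(x)
rhs = np.abs(fx).max() + (np.abs(fx[:, None] - fx[None, :]) / dx**(2 * alpha)).max()
print(f"||Delta^a f|| ~ {lhs:.2f} <= 4 ||f||_C^(0;2a) ~ {4 * rhs:.2f}")
```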
**Lemma 5.10**.: _Assume for \(k=0,\ldots,\ell-2\) the weights \(W^{k}\) are fixed and bounded \(\|W^{k}\|n_{k}^{-1/2}\lesssim 1\). Assume that \(W^{\ell-1}\) is i.i.d. sub-gaussian with \(\|W_{ij}^{\ell-1}\|_{\psi_{2}}\lesssim 1\). Assume that \(\sigma\) satisfies the growth condition (13), has bounded derivative (15) and may be different in each layer. Let \(r\in[n_{\ell}]\). Then for \(\alpha,\beta\leq 1/2\)_
\[\left\|\Delta_{x}^{\alpha}\Delta_{y}^{\beta}\hat{\Lambda}_{r}^{\ell}\right\|_{ C^{0;\alpha,\beta}(\Delta D\times\Delta D;\psi_{1})}\lesssim\frac{n_{0}}{n_{ \ell-1}},\]
_with \(\Delta D\) defined in (48)._
Proof.: Throughout the proof, we abbreviate
\[f^{\ell} =f^{\ell}(x), C^{0;\alpha}(\psi_{i}) =C^{0;\alpha}(\Delta D,\psi_{i}), i=1,2,\] \[\tilde{f}^{\ell} =f^{\ell}(y), C^{0;\alpha,\beta}(\psi_{i}) =C^{0;\alpha,\beta}(\Delta D\times\Delta D,\psi_{i}).\]
Since by Lemma 6.8 we have \(\|XY\|_{\psi_{1}}\leq\|X\|_{\psi_{2}}\|Y\|_{\psi_{2}}\), the product inequality for Hölder norms (Lemma 6.3, Item 3) yields
\[\left\|\Delta_{x}^{\alpha}\Delta_{y}^{\beta}\hat{\Lambda}_{r}^{ \ell}\right\|_{C^{0;\alpha,\beta}(\psi_{1})} =\left\|\Delta_{x}^{\alpha}\sigma\left(f_{r}^{\ell}\right)\Delta_ {y}^{\beta}\sigma\left(\tilde{f}_{r}^{\ell}\right)\right\|_{C^{0;\alpha,\beta} (\psi_{1})}\] \[\lesssim\left\|\Delta_{x}^{\alpha}\sigma\left(f_{r}^{\ell}\right) \right\|_{C^{0;\alpha}(\psi_{2})}\left\|\Delta_{y}^{\beta}\sigma\left(\tilde{f} _{r}^{\ell}\right)\right\|_{C^{0;\beta}(\psi_{2})}.\]
Next, we use Lemma 5.9 to eliminate the finite difference in favour of a higher Hölder norm:
\[\left\|\Delta_{x}^{\alpha}\Delta_{y}^{\beta}\hat{\Lambda}_{r}^{\ell}\right\|_{C^ {0;\alpha,\beta}(\psi_{1})}\lesssim\left\|\sigma\left(f_{r}^{\ell}\right) \right\|_{C^{0;2\alpha}(\psi_{2})}\left\|\sigma\left(\tilde{f}_{r}^{\ell} \right)\right\|_{C^{0;2\beta}(\psi_{2})}.\]
Finally, Lemma 5.8 implies that \(\left\|\sigma\left(f_{r}^{\ell}\right)\right\|_{C^{0;2\alpha}(D;\psi_{2})}\lesssim n_{0}^{1/2}n_{\ell-1}^{-1/2}\) and likewise for \(\tilde{f}_{r}^{\ell}\) and thus
\[\left\|\Delta_{x}^{\alpha}\Delta_{y}^{\beta}\hat{\Lambda}_{r}^{\ell}\right\|_ {C^{0;\alpha,\beta}(\psi_{1})}\lesssim\frac{n_{0}}{n_{\ell-1}}.\]
**Lemma 5.11**.: _Assume for \(k=0,\ldots,\ell-2\) the weights \(W^{k}\) are fixed and bounded \(\|W^{k}\|n_{k}^{-1/2}\lesssim 1\). Assume that \(W^{\ell-1}\) is i.i.d. sub-gaussian with \(\|W_{ij}^{\ell-1}\|_{\psi_{2}}\lesssim 1\). Assume that the domain \(D\) is bounded, that \(\sigma\) satisfies the growth condition (13), has bounded derivative (15) and may be different in each layer. Then for \(\alpha=\beta=1/2\)_
\[\Pr\left[\left\|\hat{\Sigma}^{\ell}-\mathbb{E}\left[\hat{\Sigma}^{\ell}\right] \right\|_{C^{0;\alpha,\beta}(D)}\geq C\frac{n_{0}}{n_{\ell-1}}\left[\frac{ \sqrt{d}+\sqrt{u}}{\sqrt{n_{\ell-1}}}+\frac{d+u}{n_{\ell-1}}\right]\right]\leq e ^{-u}.\]
Proof.: Since \(\Delta_{x}^{\alpha}\Delta_{y}^{\beta}\hat{\Lambda}_{r}^{\ell}\) for \(r\in[n_{\ell}]\) only depends on the random vector \(W_{r.}^{\ell-1}\), all stochastic processes \(\left(\Delta_{x,h_{x}}^{\alpha}\Delta_{y,h_{y}}^{\beta}\hat{\Lambda}_{r}^{\ell }(x,y)\right)_{(x,h_{x},y,h_{y})\in\Delta D\times\Delta D}\) are independent and satisfy
\[\left\|\Delta_{x}^{\alpha}\Delta_{y}^{\beta}\hat{\Lambda}_{r}^{\ell}\right\|_ {C^{0;\alpha,\beta}(\Delta D\times\Delta D;\psi_{1})}\lesssim\frac{n_{0}}{n_{ \ell-1}}\]
by Lemma 5.10. Thus, we can estimate the processes' supremum by the chaining Corollary 6.12
\[\Pr\left[\sup_{\begin{subarray}{c}(x,h_{x})\in\Delta D\\ (y,h_{y})\in\Delta D\end{subarray}}\left\|\frac{1}{n_{\ell-1}}\sum_{r=1}^{n_{ \ell-1}}\Delta_{x}^{\alpha}\Delta_{y}^{\beta}\hat{\Lambda}_{r}^{\ell}-\mathbb{ E}\left[\Delta_{x}^{\alpha}\Delta_{y}^{\beta}\hat{\Lambda}_{r}^{\ell}\right] \right\|\geq C\tau\right]\leq e^{-u},\]
with
\[\tau=\frac{n_{0}}{n_{\ell-1}}\left[\left(\frac{d}{n_{\ell-1}}\right)^{1/2}+ \frac{d}{n_{\ell-1}}+\left(\frac{u}{n_{\ell-1}}\right)^{1/2}+\frac{u}{n_{\ell- 1}}\right].\]
Noting that
\[\sup_{\begin{subarray}{c}(x,h_{x})\in\Delta D\\ (y,h_{y})\in\Delta D\end{subarray}}\left|\Delta_{x}^{\alpha}\Delta_{y}^{\beta }\cdot\right|=\|\cdot\|_{C^{0;\alpha,\beta}(D)}\]
and
\[\frac{1}{n_{\ell-1}}\sum_{r=1}^{n_{\ell-1}}\Delta_{x}^{\alpha}\Delta_{y}^{ \beta}\hat{\Lambda}_{r}^{\ell}=\Delta_{x}^{\alpha}\Delta_{y}^{\beta}\frac{1}{n _{\ell-1}}\sum_{r=1}^{n_{\ell-1}}\hat{\Lambda}_{r}^{\ell}=\Delta_{x}^{\alpha} \Delta_{y}^{\beta}\hat{\Sigma}^{\ell}\]
completes the proof.
#### 5.3.2 Perturbation of Covariances
This section contains the tools to estimate
\[\left\|\mathbb{E}_{\ell}\left[\hat{\Sigma}^{\ell+1}\right]-\Sigma^{\ell+1} \right\|_{C^{0;\alpha,\beta}},\]
with an argument analogous to [18], except that we measure differences in Hölder norms. As we will see in the next section, both \(\mathbb{E}_{\ell}\left[\hat{\Sigma}^{\ell+1}\right]\) and \(\Sigma^{\ell+1}\) are of the form
\[\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A)}\left[\sigma(u)\sigma(v)\right],\]
with two different matrices \(A\) and \(\hat{A}\), and thus it suffices to show that the above expectation is Hölder continuous in \(A\). By a variable transform
\[A=\begin{bmatrix}a_{11}&a_{12}\\ a_{21}&a_{22}\end{bmatrix}=\begin{bmatrix}a^{2}&\rho ab\\ \rho ab&b^{2}\end{bmatrix}\]
and rescaling, we reduce the problem to matrices of the form
\[A=\begin{bmatrix}1&\rho\\ \rho&1\end{bmatrix}.\]
For these matrices, by Mehler's theorem we decompose the expectation as
\[\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A)}\left[\sigma(u)\sigma(v)\right]=\sum_{k =0}^{\infty}\left\langle\sigma,H_{k}\right\rangle_{N}\left\langle\sigma,H_{k} \right\rangle_{N}\frac{\rho^{k}}{k!},\]
where \(H_{k}\) are Hermite polynomials. The rescaling introduces rescaled activation functions, which we denote by
\[\sigma_{a}(x):=\sigma(ax). \tag{39}\]
Finally, we show Hölder continuity by bounding derivatives. To this end, we use the multi-index \(\gamma\) to denote derivatives \(\partial^{\gamma}=\partial_{a}^{\gamma_{a}}\partial_{b}^{\gamma_{b}}\partial_{ \rho}^{\gamma_{\rho}}\) with respect to the transformed variables. Details are as follows.
**Lemma 5.12**.: _Let_
\[A=\begin{bmatrix}a^{2}&\rho ab\\ \rho ab&b^{2}\end{bmatrix}=\begin{bmatrix}a&\\ &b\end{bmatrix}\begin{bmatrix}1&\rho\\ \rho&1\end{bmatrix}\begin{bmatrix}a&\\ &b\end{bmatrix}.\]
_Then_
\[\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A)}\left[\sigma(u)\sigma(v)\right]=\sum_{k =0}^{\infty}\left\langle\sigma_{a},H_{k}\right\rangle_{N}\left\langle\sigma_{ b},H_{k}\right\rangle_{N}\frac{\rho^{k}}{k!}.\]
Proof.: By rescaling, or more generally, linear transformation of Gaussian random variables, we have
\[\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A)}\left[\sigma(u)\sigma(v)\right] =\int\sigma(u)\sigma(v)dN\left(0,\begin{bmatrix}a&\\ &b\end{bmatrix}\begin{bmatrix}1&\rho\\ \rho&1\end{bmatrix}\begin{bmatrix}a&\\ &b\end{bmatrix}\right)(u,v)\] \[=\int\sigma(au)\sigma(bv)dN\left(0,\begin{bmatrix}1&\rho\\ \rho&1\end{bmatrix}\right)(u,v).\]
Thus, by Mehler's theorem (Theorem 6.14 in the appendix) we conclude that
\[\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A)}\left[\sigma(u)\sigma(v)\right] =\iint\sigma(au)\sigma(bv)\sum_{k=0}^{\infty}H_{k}(u)H_{k}(v)\frac{ \rho^{k}}{k!}\,d\mathcal{N}(0,1)(u)\,d\mathcal{N}(0,1)(v)\] \[=\sum_{k=0}^{\infty}\left\langle\sigma_{a},H_{k}\right\rangle_{N} \left\langle\sigma_{b},H_{k}\right\rangle_{N}\frac{\rho^{k}}{k!}.\]
**Lemma 5.13**.: _Assume \(A=\begin{bmatrix}a^{2}&\rho ab\\ \rho ab&b^{2}\end{bmatrix}\) is positive semi-definite and all derivatives up to \(\sigma_{a}^{(\gamma_{a}+\gamma_{\rho})}\) and \(\sigma_{b}^{(\gamma_{b}+\gamma_{\rho})}\) are continuous and have at most polynomial growth for \(x\to\pm\infty\). Then_
\[\left|\partial^{\gamma}\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A)}\left[\sigma(u)\sigma(v )\right]\right|\leq\left\|\partial^{\gamma_{a}+\gamma_{\rho}}(\sigma_{a})\right\|_{N }\left\|\partial^{\gamma_{b}+\gamma_{\rho}}(\sigma_{b})\right\|_{N}.\]
Proof.: By Lemma 5.12, we have
\[\partial^{\gamma}\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A)}\left[ \sigma(u)\sigma(v)\right] =\partial^{\gamma}\sum_{k=0}^{\infty}\left\langle\sigma_{a},H_{k} \right\rangle_{N}\left\langle\sigma_{b},H_{k}\right\rangle_{N}\frac{\rho^{k}} {k!} \tag{40}\] \[=\sum_{k=0}^{\infty}\partial^{\gamma_{a}}\left\langle\sigma_{a},H _{k}\right\rangle_{N}\partial^{\gamma_{b}}\left\langle\sigma_{b},H_{k}\right\rangle _{N}\partial^{\gamma_{\rho}}\frac{\rho^{k}}{k!}.\]
We first estimate the \(\rho\) derivative. Since \(0\preceq A\) and \(a,b>0\), we must have \(0\preceq\begin{bmatrix}1&\rho\\ \rho&1\end{bmatrix}\) and thus \(\det\begin{bmatrix}1&\rho\\ \rho&1\end{bmatrix}=1-\rho^{2}\geq 0\). It follows that \(|\rho|\leq 1\). Therefore
\[\left|\partial^{\gamma_{\rho}}\frac{\rho^{k}}{k!}\right|=\left|\frac{1}{k!} \frac{k!}{(k-\gamma_{\rho})!}\rho^{k-\gamma_{\rho}}\right|\leq\frac{1}{(k- \gamma_{\rho})!}. \tag{41}\]
We eliminate the denominator \((k-\gamma_{\rho})!\) by introducing extra derivatives into \(\partial^{\gamma_{a}}\left\langle\sigma_{a},H_{k}\right\rangle_{N}\). To this end, by Lemma 6.13, we decrease the degree of the Hermite polynomial for a higher derivative on \(\sigma_{a}\):
\[\partial^{\gamma_{a}}\left\langle\sigma_{a},H_{k}\right\rangle_{N}=\left\langle \partial^{\gamma_{a}}(\sigma_{a}),H_{k}\right\rangle_{N}=\left\langle\partial ^{\gamma_{a}+\gamma_{\rho}}(\sigma_{a}),H_{k-\gamma_{\rho}}\right\rangle_{N}.\]
By Lemma 6.13, \(\|\cdot\|_{N}\) normalized Hermite polynomials are given by
\[\bar{H}_{k}:=\frac{1}{\sqrt{k!}}H_{k}\]
and thus
\[\partial^{\gamma_{a}}\left\langle\sigma_{a},H_{k}\right\rangle_{N}=\left\langle \partial^{\gamma_{a}+\gamma_{\rho}}(\sigma_{a}),\bar{H}_{k-\gamma_{\rho}} \right\rangle_{N}\sqrt{(k-\gamma_{\rho})!}.\]
Plugging the last equation and (41) into (40), we obtain
\[\left|\partial^{\gamma}\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A)}\left[\sigma(u )\sigma(v)\right]\right|\\ \leq\sum_{k=0}^{\infty}\left|\left\langle\partial^{\gamma_{a}+ \gamma_{\rho}}(\sigma_{a}),\bar{H}_{k}\right\rangle_{N}\right|\left|\left\langle \partial^{\gamma_{b}+\gamma_{\rho}}(\sigma_{b}),\bar{H}_{k}\right\rangle_{N}\right|\] \[\leq\left(\sum_{k=0}^{\infty}\left\langle\partial^{\gamma_{a}+ \gamma_{\rho}}(\sigma_{a}),\bar{H}_{k}\right\rangle_{N}^{2}\right)^{1/2}\left( \sum_{k=0}^{\infty}\left\langle\partial^{\gamma_{b}+\gamma_{\rho}}(\sigma_{b} ),\bar{H}_{k}\right\rangle_{N}^{2}\right)^{1/2}\] \[=\left\|\partial^{\gamma_{a}+\gamma_{\rho}}(\sigma_{a})\right\|_ {N}\left\|\partial^{\gamma_{b}+\gamma_{\rho}}(\sigma_{b})\right\|_{N},\]
where in the second step we have used Cauchy-Schwarz and in the last that \(\bar{H}_{k}\) are an orthonormal basis.
**Lemma 5.14**.: _Let \(f(a_{11},a_{22},a_{12})\) be implicitly defined by solving the identity_
\[\begin{bmatrix}a_{11}&a_{12}\\ a_{12}&a_{22}\end{bmatrix}=\begin{bmatrix}a^{2}&\rho ab\\ \rho ab&b^{2}\end{bmatrix}\]
_for \(a\), \(b\) and \(\rho\). Let \(D_{f}\) be a domain with \(a_{11},a_{22}\geq c>0\) and \(|a_{12}|\lesssim 1\). Then_
\[\|f\|_{C^{3}(D_{f})}\lesssim 1.\]
Proof.: Comparing coefficients, \(f\) is explicitly given by
\[f(a_{11},a_{22},a_{12})=\begin{bmatrix}\sqrt{a_{11}}&\sqrt{a_{22}}&\frac{a_{12}}{\sqrt{a_{11}a_{22}}}\end{bmatrix}^{T}.\]
Since \(a_{11}\) and \(a_{22}\) are bounded away from zero, all partial derivatives up to third order exist and are bounded.
**Lemma 5.15**.: _For \(D\subset\mathbb{R}^{d}\) and \(x,y\in D\), let_
\[A(x,y)=\begin{bmatrix}a_{11}(x,y)&a_{12}(x,y)\\ a_{12}(x,y)&a_{22}(x,y)\end{bmatrix}\qquad B(x,y)=\begin{bmatrix}b_{11}(x,y)& b_{12}(x,y)\\ b_{12}(x,y)&b_{22}(x,y)\end{bmatrix},\]
_with_
\[a_{11}(x,y)\geq c>0, a_{22}(x,y)\geq c>0, |a_{12}(x,y)|\lesssim 1,\] \[b_{11}(x,y)\geq c>0, b_{22}(x,y)\geq c>0, |b_{12}(x,y)|\lesssim 1.\]
_Assume the derivatives \(\sigma^{(i)}\), \(i=0,\ldots,3\) are continuous and have at most polynomial growth for \(x\to\pm\infty\) and for all \(a\in\{g(x,y):\,x,y\in D,\,g\in\{a_{11},a_{22},b_{11},b_{22}\}\}\) the scaled activation satisfies_
\[\left\|\partial^{i}(\sigma_{a})\right\|_{N}\lesssim 1, i=1,\ldots,3,\]
_with \(\sigma_{a}\) defined in (39). Then, for \(\alpha,\beta\leq 1\) the functions_
\[(x,y) \mapsto\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A(x,y))}\left[\sigma(u) \sigma(v)\right],\] \[(x,y) \mapsto\mathbb{E}_{(u,v)\sim\mathcal{N}(0,B(x,y))}\left[\sigma(u) \sigma(v)\right]\]
_satisfy_
\[\left\|\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A)}\left[\sigma(u) \sigma(v)\right]-\mathbb{E}_{(u,v)\sim\mathcal{N}(0,B)}\left[\sigma(u)\sigma(v )\right]\right\|_{C^{0;\alpha,\beta}(D)}\\ \lesssim\|A\|_{C^{0;\alpha,\beta}(D)}\|B\|_{C^{0;\alpha,\beta}(D )}\|A-B\|_{C^{0;\alpha,\beta}(D)}.\]
Proof.: Define
\[F(a,b,\rho)=\mathbb{E}_{(u,v)\sim\mathcal{N}(0,\bar{A})}\left[\sigma(u)\sigma (v)\right],\qquad\qquad\bar{A}=\begin{bmatrix}a^{2}&\rho ab\\ \rho ab&b^{2}\end{bmatrix}\]
and \(f(a_{11},a_{22},a_{12})\) by solving the identity
\[\begin{bmatrix}a_{11}&a_{12}\\ a_{12}&a_{22}\end{bmatrix}=\begin{bmatrix}a^{2}&\rho ab\\ \rho ab&b^{2}\end{bmatrix}\]
for \(a\), \(b\) and \(\rho\). Then
\[F\circ f\circ A=(x,y) \mapsto\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A(x,y))}\left[\sigma(u) \sigma(v)\right],\] \[F\circ f\circ B=(x,y) \mapsto\mathbb{E}_{(u,v)\sim\mathcal{N}(0,B(x,y))}\left[\sigma(u) \sigma(v)\right]\]
and
\[\left\|\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A)}\left[\sigma(u) \sigma(v)\right]-\mathbb{E}_{(u,v)\sim\mathcal{N}(0,B)}\left[\sigma(u)\sigma (v)\right]\right\|_{C^{0;\alpha,\beta}(D)}\\ =\left\|F\circ f\circ A-F\circ f\circ B\right\|_{C^{0;\alpha, \beta}(D)}.\]
By Lemmas 6.4 (for \(\Delta^{\alpha}\) and \(\Delta^{\beta}\)) and 6.5 (for \(\Delta^{\alpha}\Delta^{\beta}\)), we have
\[\|F\circ f\circ A-F\circ f\circ B\|_{C^{0;\alpha,\beta}(D)}\\ \lesssim\|F\circ f\|_{C^{3}(D_{f})}\|A-B\|_{C^{0;\alpha,\beta}(D )}\\ \max\{1,\,\|A\|_{C^{0;\alpha,\beta}(D)}\}\max\{1,\,\|B\|_{C^{0; \alpha,\beta}(D)}\},\]
with \(D_{f}=A(D)\cup B(D)\), so that it suffices to bound \(\|F\circ f\|_{C^{3}(D_{f})}\lesssim 1\). This follows directly from the assumptions, chain rule, product rule and Lemmas 5.13 and 5.14. Finally, we simplify
\[\max\{1,\,\|A\|_{C^{0;\alpha,\beta}(D)}\}\leq\frac{1}{c}\|A\|_{C^{0;\alpha, \beta}(D)}\]
because
\[\frac{1}{c}\|A\|_{C^{0;\alpha,\beta}(D)}\geq\frac{1}{c}a_{11}(\cdot)\geq 1\]
and likewise for \(B\).
#### 5.3.3 Concentration of the NTK
We combine the results from the last two sections to show concentration inequalities, first for the forward kernels \(\Sigma^{\ell}\) and \(\dot{\Sigma}^{\ell}\) and then for the NTK \(\Gamma\).
**Lemma 5.16**.: _Let \(\alpha=\beta=1/2\) and \(k=0,\ldots,\ell\)._
1. _Assume that all_ \(W^{k}\) _are i.i.d. standard normal._
2. _Assume that_ \(\sigma\) _satisfies the growth condition (_13_), has uniformly bounded derivative (_15_), derivatives_ \(\sigma^{(i)}\)_,_ \(i=0,\ldots,3\) _are continuous and have at most polynomial growth for_ \(x\to\pm\infty\) _and the scaled activations satisfy_ \[\left\|\partial^{i}(\sigma_{a})\right\|_{N}\lesssim 1,\qquad a\in\{\Sigma^{k}(x,x ):x\in D\},\qquad i=1,\ldots,3,\] _with_ \(\sigma_{a}\) _defined in (_39_). The activation function may be different in each layer._
3. _For all_ \(x\in D\) _assume_ \[\Sigma^{k}(x,x)\geq c_{\Sigma}>0.\]
4. _The widths satisfy_ \(n_{\ell}\gtrsim n_{0}\) _for all_ \(\ell=0,\ldots,L\)_._
_Then, with probability at least_
\[1-c\sum_{k=1}^{\ell-1}\left(e^{-n_{k}}+e^{-u_{k}}\right)\]
_we have_
\[\left\|\Sigma^{\ell}\right\|_{C^{0;\alpha,\beta}} \lesssim 1\] \[\left\|\hat{\Sigma}^{\ell}\right\|_{C^{0;\alpha,\beta}} \lesssim 1\] \[\left\|\hat{\Sigma}^{\ell}-\Sigma^{\ell}\right\|_{C^{0;\alpha, \beta}} \lesssim\sum_{k=0}^{\ell-1}\frac{n_{0}}{n_{k}}\left[\frac{\sqrt{d}+ \sqrt{u_{k}}}{\sqrt{n_{k}}}+\frac{d+u_{k}}{n_{k}}\right]\leq\frac{1}{2}c_{\Sigma}\]
_for all \(u_{1},\ldots,u_{\ell-1}\geq 0\) sufficiently small so that the last inequality holds._
Proof.: We prove the statement by induction. Let us first consider \(\ell\geq 1\). We split off the expectation over the last layer
\[\left\|\hat{\Sigma}^{\ell+1}-\Sigma^{\ell+1}\right\|_{C^{0;\alpha,\beta}} \leq\left\|\hat{\Sigma}^{\ell+1}-\mathbb{E}_{\ell}\left[\hat{ \Sigma}^{\ell+1}\right]\right\|_{C^{0;\alpha,\beta}}+\left\|\mathbb{E}_{\ell} \left[\hat{\Sigma}^{\ell+1}\right]-\Sigma^{\ell+1}\right\|_{C^{0;\alpha,\beta}}\] \[=I+II,\]
where \(\mathbb{E}_{\ell}\left[\cdot\right]\) denotes the expectation with respect to \(W^{\ell}\). We estimate \(I\), given that the lower layers satisfy
\[\|W^{k}\|n_{k}^{-1/2} \lesssim 1, k=0,\ldots,\ell-1, \tag{42}\]
which is true with probability at least \(1-2e^{-n_{k}}\), see e.g. [67, Theorem 4.4.5]. Then, by Lemma 5.11 for \(u_{\ell}\geq 0\)
\[\Pr\left[\left\|\hat{\Sigma}^{\ell+1}-\mathbb{E}\left[\hat{\Sigma}^{\ell+1} \right]\right\|_{C^{0;\alpha,\,\beta}(D)}\geq C\frac{n_{0}}{n_{\ell}}\left[ \frac{\sqrt{d}+\sqrt{u_{\ell}}}{\sqrt{n_{\ell}}}+\frac{d+u_{\ell}}{n_{\ell}} \right]\right]\leq e^{-u_{\ell}}. \tag{43}\]
Next we estimate II. To this end, recall that \(\hat{\Sigma}^{\ell+1}(x,y)\) is defined by
\[\hat{\Sigma}^{\ell+1}(x,y)=\frac{1}{n_{\ell+1}}\sum_{r=1}^{n_{\ell+1}}\sigma \left(f_{r}^{\ell+1}(x)\right)\sigma\left(f_{r}^{\ell+1}(y)\right).\]
For fixed lower layers \(W^{0},\ldots,W^{\ell-1}\), the inner arguments
\[f_{r}^{\ell+1}(x)=W_{r\cdot}^{\ell}n_{\ell}^{-1/2}\sigma\left(f^{\ell}(x)\right) \qquad\quad f_{r}^{\ell+1}(y)=W_{r\cdot}^{\ell}n_{\ell}^{-1/2}\sigma\left(f^{\ell }(y)\right)\]
are Gaussian random variables in \(W_{r\cdot}^{\ell}\) with covariance
\[\mathbb{E}_{\ell}\left[W_{r\cdot}^{\ell}n_{\ell}^{-1/2}\sigma\left(f^{ \ell}(x)\right)\,W_{r\cdot}^{\ell}n_{\ell}^{-1/2}\sigma\left(f^{\ell}(y)\right)\right] \\ =\frac{1}{n_{\ell}}\sum_{s=1}^{n_{\ell}}\sigma\left(f_{s}^{\ell}(x)\right) \sigma\left(f_{s}^{\ell}(y)\right)=\hat{\Sigma}^{\ell}(x,y). \tag{44}\]
It follows that
\[\mathbb{E}_{\ell}\left[\hat{\Sigma}^{\ell+1}(x,y)\right]=\mathbb{E}_{(u,v) \sim\mathcal{N}(0,\hat{A})}\left[\sigma(u)\sigma(v)\right],\ \ \ \ \hat{A}=\begin{bmatrix}\hat{\Sigma}^{\ell}(x,x)&\hat{\Sigma}^{\ell}(x,y)\\ \hat{\Sigma}^{\ell}(y,x)&\hat{\Sigma}^{\ell}(y,y)\end{bmatrix}.\]
This matches the definition
\[\Sigma^{\ell+1}(x,y)=\mathbb{E}_{(u,v)\sim\mathcal{N}(0,A)}\left[\sigma\left(u \right)\sigma\left(v\right)\right]\ \ \ \ \ A=\begin{bmatrix}\Sigma^{\ell}(x,x)&\Sigma^{\ell}(x,y)\\ \Sigma^{\ell}(y,x)&\Sigma^{\ell}(y,y)\end{bmatrix}\]
of the process \(\Sigma^{\ell+1}\) up to the covariance matrix \(\hat{A}\) versus \(A\). Thus, we can estimate the difference \(\left\|\mathbb{E}_{\ell}\left[\hat{\Sigma}^{\ell+1}(x,y)\right]-\Sigma^{\ell+ 1}\right\|_{C^{0;\alpha,\beta}}\) by Lemma 5.15 if the entries of \(A\) and \(\hat{A}\) satisfy the required bounds. To this end, we first bound the diagonal entries away from zero. For \(A\), this is true by assumption. For \(\hat{A}\), by induction, with probability at least \(1-c\sum_{k=1}^{\ell-1}\left(e^{-n_{k}}+e^{-u_{k}}\right)\) we have
\[\left\|\hat{\Sigma}^{\ell}-\Sigma^{\ell}\right\|_{C^{0;\alpha,\beta}}\lesssim \sum_{k=0}^{\ell-1}\frac{n_{0}}{n_{k}}\left[\frac{\sqrt{d}+\sqrt{u_{k}}}{\sqrt {n_{k}}}+\frac{d+u_{k}}{n_{k}}\right]\leq\frac{1}{2}c_{\Sigma}. \tag{45}\]
In the event that this is true, we have
\[\hat{\Sigma}^{\ell}(x,x)\geq\frac{1}{2}c_{\Sigma}>0.\]
Next, we bound the off-diagonal terms. Since the weights are bounded (42), Lemma 5.6 implies
\[\left\|\hat{\Sigma}^{\ell}\right\|_{C^{0;\alpha,\beta}}\lesssim\frac{n_{0}}{n_{\ell} }\lesssim 1, \left\|\Sigma^{\ell}\right\|_{C^{0;\alpha,\beta}}\lesssim 1,\]
where the last inequality follows from (45). In particular,
\[\hat{\Sigma}^{\ell}(x,y)\lesssim 1, \Sigma^{\ell}(x,y)\lesssim 1\]
for all \(x,y\in D\). Hence, we can apply Lemma 5.15 and obtain
\[\left\|\mathbb{E}_{\ell}\left[\hat{\Sigma}^{\ell+1}\right]- \Sigma^{\ell+1}\right\|_{C^{0;\alpha,\beta}}\lesssim\left\|\Sigma^{\ell} \right\|_{C^{0;\alpha,\beta}}\left\|\hat{\Sigma}^{\ell}\right\|_{C^{0;\alpha, \beta}}\left\|\hat{\Sigma}^{\ell}-\Sigma^{\ell}\right\|_{C^{0;\alpha,\beta}}\] \[\lesssim\left\|\hat{\Sigma}^{\ell}-\Sigma^{\ell}\right\|_{C^{0; \alpha,\beta}}\lesssim\sum_{k=0}^{\ell-1}\frac{n_{0}}{n_{k}}\left[\frac{ \sqrt{d}+\sqrt{u_{k}}}{\sqrt{n_{k}}}+\frac{d+u_{k}}{n_{k}}\right],\]
where the last line follows by induction. Together with (42), (43) and a union bound, this shows the result for \(\ell\geq 1\).
Finally, we consider the induction start for \(\ell=0\). The proof is the same, except that in (44) the covariance simplifies to
\[\mathbb{E}_{0}\left[f_{r}^{1}(x)f_{r}^{1}(y)\right]=\mathbb{E}_{0}\left[(W_{r\cdot}^{0}Vx)( W_{r\cdot}^{0}Vy)\right]=(Vx)^{T}(Vy)=x^{T}y=\Sigma^{0}(x,y).\]
Hence, for \(\ell=1\) the two covariances \(A\) and \(\hat{A}\) are identical and therefore \(\left\|\mathbb{E}_{0}\left[\hat{\Sigma}^{1}(x,y)\right]-\Sigma^{1}\right\|_{C ^{0;\alpha,\beta}}=0\).
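For a single layer the concentration is easy to visualize. The sketch below uses ReLU, chosen only because \(\mathbb{E}_{w\sim\mathcal{N}(0,I_{d})}[\mathrm{relu}(w^{T}x)\,\mathrm{relu}(w^{T}y)]\) has a closed form (the arc-cosine kernel), rather than the smooth activations assumed in the lemma; the empirical kernel approaches the limit at the expected \(m^{-1/2}\) rate.

```python
import numpy as np

# Width dependence of |Sigma_hat - Sigma| for a ReLU first layer.
rng = np.random.default_rng(3)
d = 16
x = rng.normal(size=d); x /= np.linalg.norm(x)
y = rng.normal(size=d); y /= np.linalg.norm(y)
theta = np.arccos(np.clip(x @ y, -1.0, 1.0))
Sigma = (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
for m in (64, 1024, 16384):
    W = rng.normal(size=(m, d))
    hat = np.mean(np.maximum(W @ x, 0.0) * np.maximum(W @ y, 0.0))
    print(f"m = {m:6d}: |Sigma_hat - Sigma| = {abs(hat - Sigma):.4f}")
```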
**Lemma 5.17** (Lemma 4.4, restated from the overview).: _Let \(\alpha=\beta=1/2\) and \(k=0,\ldots,L-1\)._
1. _Assume that_ \(W^{L}\in\{-1,+1\}\) _with probability_ \(1/2\) _each._
2. _Assume that all_ \(W^{k}\) _are i.i.d. standard normal._
3. _Assume that_ \(\sigma\) _and_ \(\dot{\sigma}\) _satisfy the growth condition (_13_), have uniformly bounded derivatives (_15_), derivatives_ \(\sigma^{(i)}\)_,_ \(i=0,\ldots,3\) _are continuous and have at most polynomial growth for_ \(x\to\pm\infty\) _and the scaled activations satisfy_ \[\left\|\partial^{i}(\sigma_{a})\right\|_{N}\lesssim 1,\qquad\left\|\partial^{i}(\dot{\sigma}_{a})\right\|_{N}\lesssim 1,\qquad a\in\{\Sigma^{k}(x,x):x\in D\},\ \ i=1,\ldots,3,\] _with_ \(\sigma_{a}(x):=\sigma(ax)\)_. The activation functions may be different in each layer._
4. _For all_ \(x\in D\) _assume_ \[\Sigma^{k}(x,x)\geq c_{\Sigma}>0.\]
5. _The widths satisfy_ \(n_{\ell}\gtrsim n_{0}\) _for all_ \(\ell=0,\ldots,L\)
_Then, with probability at least_
\[1-c\sum_{k=1}^{L-1}\left(e^{-n_{k}}+e^{-u_{k}}\right) \tag{46}\]
_we have_
\[\left\|\hat{\Gamma}-\Gamma\right\|_{C^{0;\alpha,\beta}}\lesssim\sum_{k=0}^{L-1} \frac{n_{0}}{n_{k}}\left[\frac{\sqrt{d}+\sqrt{u_{k}}}{\sqrt{n_{k}}}+\frac{d+u_{ k}}{n_{k}}\right]\leq\frac{1}{2}c_{\Sigma}\]
_for all \(u_{1},\ldots,u_{L-1}\geq 0\) sufficiently small so that the rightmost inequality holds._
Proof.: By definition (11) of \(\Gamma\) and Lemma 4.1 for \(\hat{\Gamma}\), we have
\[\Gamma(x,y) =\dot{\Sigma}^{L}(x,y)\Sigma^{L-1}(x,y),\qquad\quad\hat{\Gamma}(x, y)=\hat{\dot{\Sigma}}^{L}(x,y)\hat{\Sigma}^{L-1}(x,y)\]
and therefore
\[\left\|\Gamma-\hat{\Gamma}\right\|_{C^{0;\alpha,\beta}} =\left\|\dot{\Sigma}^{L}\Sigma^{L-1}-\hat{\dot{\Sigma}}^{L}\hat{\Sigma} ^{L-1}\right\|_{C^{0;\alpha,\beta}}\] \[\leq\left\|\left[\dot{\Sigma}^{L}-\hat{\dot{\Sigma}}^{L}\right] \Sigma^{L-1}\right\|_{C^{0;\alpha,\beta}}+\left\|\hat{\dot{\Sigma}}^{L}\left[ \Sigma^{L-1}-\hat{\Sigma}^{L-1}\right]\right\|_{C^{0;\alpha,\beta}}\] \[\leq\left\|\dot{\Sigma}^{L}-\hat{\dot{\Sigma}}^{L}\right\|_{C^{0; \alpha,\beta}}\left\|\Sigma^{L-1}\right\|_{C^{0;\alpha,\beta}}+\left\|\hat{ \dot{\Sigma}}^{L}\right\|_{C^{0;\alpha,\beta}}\left\|\Sigma^{L-1}-\hat{\Sigma }^{L-1}\right\|_{C^{0;\alpha,\beta}},\]
where in the last step we have used Lemma 6.3 Item 4. Thus, the result follows from
\[\left\|\Sigma^{L-1}\right\|_{C^{0;\alpha,\beta}}\lesssim 1,\ \ \left\|\hat{\Sigma}^{L-1}\right\|_{C^{0;\alpha,\beta}}\lesssim 1,\ \ \left\|\dot{\Sigma}^{L}\right\|_{C^{0;\alpha,\beta}}\lesssim 1,\ \ \ \left\|\hat{\dot{\Sigma}}^{L}\right\|_{C^{0;\alpha,\beta}}\lesssim 1\]
and
\[\max\left\{\left\|\Sigma^{L-1}-\hat{\Sigma}^{L-1}\right\|_{C^{0; \alpha,\beta}},\,\left\|\dot{\Sigma}^{L}-\hat{\dot{\Sigma}}^{L}\right\|_{C^{ 0;\alpha,\beta}}\right\}\] \[\lesssim\sum_{k=0}^{L-1}\frac{n_{0}}{n_{k}}\left[\frac{\sqrt{d}+ \sqrt{u_{k}}}{\sqrt{n_{k}}}+\frac{d+u_{k}}{n_{k}}\right]\leq\frac{1}{2}c_{ \Sigma},\]
with probability (46) by Lemma 5.16. For \(\dot{\Sigma}^{L}\), we do not require the lower bound \(\dot{\Sigma}^{k}(x,x)\geq c_{\Sigma}>0\) because in the recursive definition \(\dot{\sigma}\) is only used in the last layer and therefore not necessary in the induction step in the proof of Lemma 5.16.
### Proof of Lemma 4.5: Weights Stay Close to Initial
The derivative \(\partial_{W^{k}}f^{\ell}(x)\in\mathbb{R}^{n_{\ell}\times(n_{k+1}\times n_{ k})}\) is a tensor with three axes for which we define the norm
\[\left\|\partial_{W^{k}}f^{\ell}(x)\right\|_{*}:=\sup_{\|u\|,\|v\|,\|w\|\leq 1} \sum_{r,i,j}u_{r}v_{i}w_{j}\partial_{W^{k}_{ij}}f^{\ell}_{r}(x)\]
and the corresponding maximum norm \(\|\cdot\|_{C^{0}(D;*)}\) for functions mapping \(x\) to a tensor measured in the \(\|\cdot\|_{*}\) norm. We use this norm for an inductive argument in a proof, but later only apply it for the last layer \(\ell=L+1\). In this case \(n_{L+1}=1\) and the norm reduces to a regular matrix norm.
**Lemma 5.18**.: _Assume that \(\sigma\) satisfies the growth and derivative bounds (13), (15) and may be different in each layer. Assume the weights are bounded \(\|W^{k}\|n_{k}^{-1/2}\lesssim 1\), \(k=1,\ldots,\ell-1\). Then_
\[\left\|\partial_{W^{k}}f^{\ell}\right\|_{C^{0}(D;*)}\lesssim\left(\frac{n_{0} }{n_{k}}\right)^{1/2}.\]
Proof.: First note that for any tensor \(T\)
\[\left\|\sum_{r,i,j}u_{r}v_{i}w_{j}T_{rij}\right\|_{C^{0}}\leq C\|u\|\|v\|\|w\|\]
implies that \(\|T\|_{C^{0}(D;*)}\leq C\), which we use throughout the proof. We proceed by induction over \(\ell\). For \(k\geq\ell\), the pre-activation \(f^{\ell}\) does not depend on \(W^{k}\) and thus \(\partial_{W^{k}}f^{\ell}(x)=0\). For \(k=\ell-1\), we have
\[\partial_{W^{k}_{ij}}f^{k+1}_{r}(x)=\partial_{W^{k}_{ij}}W^{k}_{r\cdot}n_{k}^{-1/2} \sigma\left(f^{k}(x)\right)=\delta_{ir}n_{k}^{-1/2}\sigma\left(f^{k}_{j}(x)\right)\]
and therefore for any vectors \(u\), \(v\), \(w\)
\[\left\|\sum_{r,i,j}u_{r}v_{i}w_{j}\partial_{W^{k}_{ij}}f^{k+1}_{r}(x )\right\|_{C^{0}}=\left\|n_{k}^{-1/2}(u^{T}v)\left(w^{T}\sigma\left(f^{k} \right)\right)\right\|_{C^{0}}\\ \leq n_{k}^{-1/2}\|u\|\|v\|\|w\|\left\|\sigma\left(f^{k}\right) \right\|_{C^{0}}\lesssim\|u\|\|v\|\|w\|\left(\frac{n_{0}}{n_{k}}\right)^{1/2},\]
where in the last step we have used Lemma 5.5. Thus, we conclude that
\[\left\|\partial_{W^{k}}f^{k+1}(x)\right\|_{C^{0}(D;*)}\lesssim\left(\frac{n_{ 0}}{n_{k}}\right)^{1/2}.\]
For \(k<\ell-1\), we have
\[\partial_{W^{k}_{ij}}f^{\ell}(x)=\partial_{W^{k}_{ij}}W^{\ell-1}n_{\ell-1}^{- 1/2}\sigma\left(f^{\ell-1}\right)=W^{\ell-1}n_{\ell-1}^{-1/2}\left[\dot{\sigma }\left(f^{\ell-1}\right)\odot\partial_{W^{k}_{ij}}f^{\ell-1}\right]\]
and therefore
\[\left\|\sum_{r,i,j}u_{r}v_{i}w_{j}\partial_{W^{k}_{ij}}f^{\ell}_{ r}\right\|_{C^{0}} \leq\|u^{T}W^{\ell-1}n_{\ell-1}^{-1/2}\|\|v\|\|w\|\left\|\dot{ \sigma}\left(f^{\ell-1}\right)\odot\partial_{W^{k}}f^{\ell-1}\right\|_{C ^{0}(D;*)}\] \[\leq\|u\|\|v\|\|w\|\left\|\dot{\sigma}\left(f^{\ell-1}\right) \right\|_{C^{0}(D;\ell_{\infty})}\left\|\partial_{W^{k}}f^{\ell-1}\right\|_ {C^{0}(D;*)}\] \[\lesssim\|u\|\|v\|\|w\|\left(\frac{n_{0}}{n_{k}}\right)^{1/2},\]
where in the second step we have used that \(\|W^{\ell-1}\|n_{\ell-1}^{-1/2}\lesssim 1\) and in the last step we have used that \(\left\|\dot{\sigma}\left(f^{\ell-1}\right)\right\|_{\ell_{\infty}}\lesssim 1\) because \(|\dot{\sigma}(\cdot)|\lesssim 1\) and the induction hypothesis. It follows that
\[\left\|\partial_{W^{k}}f^{\ell}(x)\right\|_{C^{0}(D;*)}\lesssim\left(\frac{n_{ 0}}{n_{k}}\right)^{1/2}.\]
**Lemma 5.19** (Lemma 4.5, restated from the overview).: _Assume that \(\sigma\) satisfies the growth and derivative bounds (13), (15) and may be different in each layer. Assume the weights are defined by the gradient flow (6) and satisfy_
\[\|W^{\ell}(0)\|n_{\ell}^{-1/2} \lesssim 1, \ell=1,\ldots,L,\] \[\|W^{\ell}(0)-W^{\ell}(\tau)\|n_{\ell}^{-1/2} \lesssim 1, 0\leq\tau<t.\]
_Then_
\[\left\|W^{\ell}(t)-W^{\ell}(0)\right\|n_{\ell}^{-1/2} \lesssim\frac{n_{0}^{1/2}}{n_{\ell}}\int_{0}^{t}\|\kappa\|_{C^{0}(D)^{\prime}}\,d\tau,\]
_where \(C^{0}(D)^{\prime}\) is the dual space of \(C^{0}(D)\)._
Proof.: By assumption, we have
\[\|W^{\ell}(\tau)\|n_{\ell}^{-1/2} \lesssim 1, 0\leq\tau<t, \ell=1,\ldots,L.\]
With loss \(\mathcal{L}\) and residual \(\kappa=f_{\theta}-f\), because
\[\frac{d}{d\tau}W^{\ell}=-\nabla_{W^{\ell}}\mathcal{L}=-\int_{D}\kappa(x)D_{W^{\ell}}f^{L+1}(x)\,dx\]
we have
\[\left\|W^{\ell}(t)-W^{\ell}(0)\right\| =\left\|\int_{0}^{t}\frac{d}{d\tau}W^{\ell}(\tau)\,d\tau\right\|\] \[=\left\|\int_{0}^{t}\int_{D}\kappa(x)D_{W^{\ell}}f^{L+1}(x)\,dx\,d\tau\right\|\] \[\leq\int_{0}^{t}\int_{D}|\kappa(x)|\left\|D_{W^{\ell}}f^{L+1}(x)\right\|\,dx\,d\tau\] \[\lesssim\left(\frac{n_{0}}{n_{\ell}}\right)^{1/2}\int_{0}^{t}\|\kappa\|_{C^{0}(D)^{\prime}}\,d\tau,\]
where in the last step we have used Lemma 5.18. Multiplying with \(n_{\ell}^{-1/2}\) shows the result.
### Proof of Theorem 2.1: Main Result
Proof of Theorem 2.1.: The result follows directly from Lemma 4.2 with the smoothness spaces \(\mathcal{H}^{\alpha}=H^{\alpha}(\mathbb{S}^{d-1})\). While the lemma bounds the residual \(\kappa\) in the \(\mathcal{H}^{-\alpha}\) and \(\mathcal{H}^{\alpha}\) norms, we aim for an \(\mathcal{H}^{0}=L_{2}(\mathbb{S}^{d-1})\) bound. This follows directly from the interpolation inequality
\[\|\cdot\|_{L_{2}(\mathbb{S}^{d-1})}=\|\cdot\|_{H^{0}(\mathbb{S}^{d-1})}\leq\| \cdot\|_{H^{-\alpha}(\mathbb{S}^{d-1})}^{1/2}\|\cdot\|_{H^{\alpha}(\mathbb{S}^ {d-1})}^{1/2}.\]
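With the coefficient form (52) of the Sobolev norms, this interpolation inequality is a single Cauchy–Schwarz estimate:

\[\|f\|_{H^{0}}^{2}=\sum_{\ell,j}\left[(1+\ell)^{-\alpha}|\hat{f}_{\ell j}|\right]\left[(1+\ell)^{\alpha}|\hat{f}_{\ell j}|\right]\leq\|f\|_{H^{-\alpha}}\|f\|_{H^{\alpha}}.\]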
It remains to verify all assumptions. To this end, first note that the initial weights satisfy
\[\|W(0)^{\ell}\|n_{\ell}^{-1/2}\lesssim 1, \ell=0,\ldots,L, \tag{47}\]
with probability at least \(1-2e^{-cm}\) since \(n_{\ell}\sim m\) by assumption, see e.g. [67, Theorem 4.4.5]. Then, the assumptions are shown as follows.
1. _The weights stay close to the initial (_17_):_ We use the scaled matrix norm \[\left\|\theta\right\|_{*}:=\max_{\ell\in[L]}\|W^{\ell}\|n_{\ell}^{-1/2}\] to measure the weight distance. Then, by (47), with probability at least \(1-p_{0}(m)\), \(p_{0}(m):=2Le^{-cm}\), and given that \(\left\|\theta(\tau)-\theta(0)\right\|_{*}\leq 1\), Lemma 4.5 implies that \[\left\|\theta(t)-\theta(0)\right\|_{*}=\max_{\ell\in[L]}\left\|W^{\ell}(t)-W^{\ell}(0)\right\|n_{\ell}^{-1/2}\\ \lesssim\frac{n_{0}^{1/2}}{n_{\ell}}\int_{0}^{t}\|\kappa\|_{C^{0}(\mathbb{S}^{d-1})^{\prime}}\,d\tau\lesssim m^{-1/2}\int_{0}^{t}\|\kappa\|_{H^{0}(\mathbb{S}^{d-1})}\,d\tau,\] where the last step follows from the assumption \(n_{0}\sim\cdots\sim n_{L-1}=:m\) and the embedding \(\|\cdot\|_{C^{0}(\mathbb{S}^{d-1})^{\prime}}\lesssim\|\cdot\|_{H^{0}(\mathbb{S}^{d-1})^{\prime}}=\|\cdot\|_{H^{0}(\mathbb{S}^{d-1})}\), which follows by duality from the embedding \(\|\cdot\|_{H^{0}(\mathbb{S}^{d-1})}\lesssim\|\cdot\|_{C^{0}(\mathbb{S}^{d-1})}\).
2. _Norms and Scalar Product (_18_):_ Both are well known for Sobolev spaces, and follow directly from norm definition (52) with Cauchy-Schwarz.
3. _Concentration of the Initial NTK (_19_):_ Since by (5) the first four derivatives of the activation function have at most polynomial growth, we have \[\left\|\partial^{i}(\sigma_{a})\right\|=\int_{\mathbb{R}}\sigma^{(i)}(ax)\,a^{i}\,d\mathcal{N}(0,1)(x)\lesssim 1\] for all \(a\in\{\Sigma^{k}(x,x):x\in D\}\) contained in the interval \([c_{\Sigma},C_{\Sigma}]\) for some \(C_{\Sigma}\geq c_{\Sigma}>0\), by assumption. Together with \(\alpha+\epsilon<1/2\) for sufficiently small \(\epsilon\), hidden dimensions \(d\lesssim n_{0}\sim\cdots\sim n_{L}=:m\) and the concentration result Lemma 4.4 we obtain, with probability at least \[1-p_{\infty}(m,\tau):=1-cL(e^{-m}+e^{-\tau})\]
the bound \[\left\|\hat{\Gamma}-\Gamma\right\|_{C^{0;\alpha+\epsilon,\alpha+\epsilon}}\lesssim L \left[\sqrt{\frac{d}{m}}+\sqrt{\frac{\tau}{m}}+\frac{\tau}{m}\right]\] for the neural tangent kernel for all \(0\leq\tau=u_{0}=\cdots=u_{L-1}\lesssim 1\). By Lemma 6.16, the kernel bound directly implies the operator norm bound \[\left\|H-H_{\theta(0)}\right\|_{-\alpha,\alpha}\lesssim L\left[\sqrt{\frac{d} {m}}+\sqrt{\frac{\tau}{m}}+\frac{\tau}{m}\right]\] for the corresponding integral operators \(H\) and \(H_{\theta(0)}\), with kernels \(\Gamma\) and \(\hat{\Gamma}\), respectively. If \(\tau/m\lesssim 1\), we can drop the last term and thus satisfy assumption (19).
4. _Hölder continuity of the NTK (20):_ By (47) with probability at least \[1-p_{L}(m):=1-Le^{-cm}\] we have \(\left\|\theta(0)\right\|_{*}\lesssim 1\) and thus for all perturbations \(\bar{\theta}\) with \(\left\|\bar{\theta}-\theta(0)\right\|_{*}\leq h\leq 1\) by Lemma 4.3 that \[\left\|\hat{\Gamma}-\tilde{\Gamma}\right\|_{C^{0;\alpha+\epsilon,\alpha+\epsilon}}\lesssim Lh^{1-\alpha-\epsilon}\] for any sufficiently small \(\epsilon>0\). By Lemma 6.16, the kernel bound implies the operator norm bound \[\left\|H_{\theta(0)}-H_{\bar{\theta}}\right\|_{\alpha\leftarrow-\alpha}\lesssim Lh^{\gamma}\] for any \(\gamma<1-\alpha\) and integral operators \(H_{\theta(0)}\) and \(H_{\bar{\theta}}\) corresponding to the kernels \(\hat{\Gamma}_{\theta(0)}\) and \(\hat{\Gamma}_{\bar{\theta}}\), respectively.
5. _Coercivity (5):_ This holds by assumption.
Thus, all assumptions of Lemma 4.2 are satisfied, which directly implies the theorem as argued above.
## 6 Technical Supplements
### Hölder Spaces
**Definition 6.1**.: _Let \(U\) and \(V\) be two normed spaces._
1. _For_ \(0<\alpha\leq 1\)_, we define the Hölder spaces on the domain_ \(D\subset U\) _as all functions_ \(f\colon D\to V\) _for which the norm_ \[\|f\|_{C^{0;\alpha}(D;V)}:=\max\{\|f\|_{C^{0}(D;V)},|f|_{C^{0;\alpha}(D;V)}\}<\infty\] _is finite, with_ \[\|f\|_{C^{0}(D;V)}:=\sup_{x\in D}\|f(x)\|_{V},\quad|f|_{C^{0;\alpha}(D;V)}:=\sup_{x\neq\bar{x}\in D}\frac{\|f(x)-f(\bar{x})\|_{V}}{\|x-\bar{x}\|_{U}^{\alpha}}.\]
2. _For_ \(0<\alpha,\beta\leq 1\)_, we define the mixed Hölder spaces on the domain_ \(D\times D\subset U\times U\) _as all functions_ \(f\colon D\times D\to V\) _for which the norm_ \[\|f\|_{C^{0;\alpha,\beta}(D;V)}:=\max_{\begin{subarray}{c}a\in\{0,\alpha\}\\ b\in\{0,\beta\}\end{subarray}}|f|_{C^{0;a,b}(D;V)}<\infty,\] _with_ \[|f|_{C^{0;0,0}(D;V)}:=\sup_{x,y\in D}\|f(x,y)\|_{V},\] \[|f|_{C^{0;\alpha,0}(D;V)}:=\sup_{x\neq\bar{x},y\in D}\frac{\|f(x,y)-f(\bar{x},y)\|_{V}}{\|x-\bar{x}\|_{U}^{\alpha}},\] \[|f|_{C^{0;0,\beta}(D;V)}:=\sup_{x,y\neq\bar{y}\in D}\frac{\|f(x,y)-f(x,\bar{y})\|_{V}}{\|y-\bar{y}\|_{U}^{\beta}},\] \[|f|_{C^{0;\alpha,\beta}(D;V)}:=\sup_{x\neq\bar{x},y\neq\bar{y}\in D}\frac{\|f(x,y)-f(\bar{x},y)-f(x,\bar{y})+f(\bar{x},\bar{y})\|_{V}}{\|x-\bar{x}\|_{U}^{\alpha}\|y-\bar{y}\|_{U}^{\beta}}.\]
3. _We use the following abbreviations:_ 1. _If_ \(D\) _is understood from context and_ \(V=\mathbb{R}^{n}\)_, both equipped with the Euclidean norm, we write_ \[C^{0;\alpha}=C^{0;\alpha}(D)=C^{0;\alpha}(D;\ell_{2}(\mathbb{R}^{n})).\] 2. _If_ \(V=L_{\psi_{i}}\)_,_ \(i=1,2\) _is an Orlicz space, we write_ \[C^{0;\alpha}(D;\psi_{i})=C^{0;\alpha}(D;L_{\psi_{i}}).\] _We use analogous abbreviations for all other spaces._
It is convenient to express Hölder spaces in terms of finite difference operators,
\[\Delta_{h}^{0}f(x)=f(x),\hskip 28.452756pt\Delta_{h}^{\alpha}f(x)=\|h\|_{U}^{ -\alpha}[f(x+h)-f(x)],\hskip 28.452756pt\alpha>0,\]
which satisfy product and chain rules similar to derivatives. We may also consider these as functions in both \(x\) and \(h\)
\[\Delta^{\alpha}f\colon(x,h)\in\Delta D\to V,\hskip 56.905512pt\Delta^{\alpha}f(x,h )=\Delta_{h}^{\alpha}f(x)\]
on the domain
\[\Delta D:=\{(x,h):\,x\in D,\,x+h\in D\}\subset U\times U. \tag{48}\]
Then, the Hölder norms can be equivalently expressed as
\[|f|_{C^{0;\alpha}(D;V)}=\sup_{x\neq x+h\in D}\|\Delta_{h}^{\alpha}f\|_{V}=\| \Delta^{\alpha}f\|_{C^{0}(\Delta D;V)}\,.\]
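As a quick numerical illustration of this finite-difference point of view (not needed for the argument, and with an ad-hoc grid), the discrete Hölder quotient of \(f(x)=\sqrt{x}\) on \([0,1]\) stays bounded for \(\alpha\leq 1/2\) and blows up under grid refinement for \(\alpha>1/2\):

```python
import numpy as np

def holder_seminorm(f, xs, alpha):
    # Discrete |f|_{C^{0;alpha}} = sup_{x != x+h} |Delta_h^alpha f(x)| over grid pairs.
    x, y = np.meshgrid(xs, xs)
    h = y - x
    mask = h != 0
    return (np.abs(f(y) - f(x))[mask] / np.abs(h[mask]) ** alpha).max()

xs = np.linspace(0.0, 1.0, 400)
for alpha in (0.25, 0.5, 0.75):
    print(alpha, holder_seminorm(np.sqrt, xs, alpha))
# sqrt is Hoelder-1/2: |sqrt(y) - sqrt(x)| <= |y - x|^{1/2}, so on [0,1] the
# quotient is bounded by 1 for alpha <= 1/2 and diverges near x = 0 as the
# grid is refined for alpha > 1/2.
```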
If \(f=f(x,y)\) depends on multiple variables, we denote the partial finite difference operators by \(\Delta^{\alpha}_{x,h_{x}}\) and \(\Delta^{\alpha}_{y,h_{y}}\) defined by
\[\Delta^{0}_{x,h_{x}}f(x,y) :=f(x,y),\quad\Delta^{\alpha}_{x,h_{x}}f(x,y):=\|h_{x}\|_{U}^{- \alpha}[f(x+h_{x},y)-f(x,y)],\] \[\Delta^{0}_{y,h_{y}}f(x,y) :=f(x,y),\quad\Delta^{\beta}_{y,h_{y}}f(x,y):=\|h_{y}\|_{U}^{- \beta}[f(x,y+h_{y})-f(x,y)],\]
for \(\alpha>0\), and likewise
\[\Delta^{\alpha}_{x}f(x,y,h_{x})=\Delta^{\alpha}_{x,h_{x}}f(x,y),\quad\quad \quad\Delta^{\alpha}_{y}f(x,y,h_{y})=\Delta^{\alpha}_{y,h_{y}}f(x,y).\]
Then, the mixed Hölder norm is
\[|f|_{C^{0;\alpha,\beta}(D;V)}=\sup_{\begin{subarray}{c}x\neq x+h_{x}\in D\\ y\neq y+h_{y}\in D\end{subarray}}\left\|\Delta^{\alpha}_{x,h_{x}}\Delta^{ \beta}_{y,h_{y}}f(x,y)\right\|_{V}=\left\|\Delta^{\alpha}_{x}\Delta^{\beta}_{ y}f\right\|_{C^{0}(\Delta D\times\Delta D;V)}\]
for all \(\alpha\), \(\beta\geq 0\) and likewise for all other Hölder semi-norms.
In the following lemma, we summarize several useful properties of finite differences.
**Lemma 6.2**.: _Let \(U,V\) and \(W\) be three normed spaces, \(D\subset U\) and \(0<\alpha,\,\beta\leq 1\)._
1. Product rule: _Let_ \(f,g\colon D\to\mathbb{R}\)_. Then_ \[\Delta^{\alpha}_{h}[fg](x)=\left[\Delta^{\alpha}_{h}f(x)\right]g(x)+f(x+h) \left[\Delta^{\alpha}_{h}g(x)\right].\]
2. Chain rule: _Let_ \(g:D\to V\) _and_ \(f:g(D)\to W\)_. Define_ \[\bar{\Delta}_{h}(f,g)(x):=\int_{0}^{1}f^{\prime}(tg(x+h)+(1-t)g(x))\,dt.\] _Then_ \[\Delta^{\alpha}_{h}(f\circ g)(x)=\bar{\Delta}_{h}(f,g)(x)\Delta^{\alpha}_{h}g(x).\]
Proof.:
1. Plugging in the definitions, we have \[\Delta^{\alpha}_{h}[fg](x) =\|h\|_{U}^{-\alpha}\left[f(x+h)g(x+h)-f(x)g(x)\right]\] \[=\|h\|_{U}^{-\alpha}\left[[f(x+h)-f(x)]g(x)+f(x+h)[g(x+h)-g(x)]\right]\] \[=\left[\Delta^{\alpha}_{h}f(x)\right]g(x)+f(x+h)\left[\Delta^{ \alpha}_{h}g(x)\right].\]
2. Follows directly from the integral form of the Taylor remainder: \[\Delta^{\alpha}_{h}(f\circ g)(x) =\|h\|_{U}^{-\alpha}\left[f(g(x+h))-f(g(x))\right]\] \[=\|h\|_{U}^{-\alpha}\int_{0}^{1}f^{\prime}(tg(x+h)+(1-t)g(x))\,dt[ g(x+h)-g(x)]\] \[=\bar{\Delta}_{h}(f,g)(x)\Delta^{\alpha}_{h}g(x).\]
In the following lemma, we summarize several useful properties of Hölder spaces.
**Lemma 6.3**.: _Let \(U\) and \(V\) be two normed spaces, \(D\subset U\) and \(0<\alpha,\)\(\beta\leq 1\)._
1. _Interpolation Inequality: For any_ \(f\in C^{0;1}(D;V)\)_, we have_ \[\|f\|_{C^{0;\alpha}(D;V)}\leq 2\|f\|_{C^{0}(D;V)}^{1-\alpha}\|f\|_{C^{0;1}(D;V)}^{\alpha}.\]
2. _Assume_ \(\sigma\) _satisfies the growth and Lipschitz conditions_ \(\|\sigma\left(x\right)\|_{V}\lesssim\|x\|_{V}\) _and_ \(\|\sigma\left(x\right)-\sigma\left(\bar{x}\right)\|_{V}\lesssim\|x-\bar{x}\|_{V}\)_. Then_ \[\|\sigma\circ f\|_{C^{0;\alpha}(D;V)}\lesssim\|f\|_{C^{0;\alpha}(D;V)}.\]
3. _Let_ \(V_{1}\) _and_ \(V_{2}\) _be two normed spaces and_ \(f,g:D\to V_{1}\)_. Let_ \(\cdot:V_{1}\times V_{1}\to V_{2}\) _be a distributive product that satisfies_ \(\|u\cdot v\|_{V_{2}}\lesssim\|u\|_{V_{1}}\|v\|_{V_{1}}\)_. Then_ \[\|f\cdot g\|_{C^{0;\alpha,\beta}(D;V_{2})}\lesssim\|f\|_{C^{0;\alpha}(D;V_{1}) }\|g\|_{C^{0;\beta}(D;V_{1})}.\]
4. _Let_ \(V=\mathbb{R}\) _and_ \(f,g:D\times D\to\mathbb{R}\)_. Then_ \[\|fg\|_{C^{0;\alpha,\beta}(D)}\lesssim\|f\|_{C^{0;\alpha,\beta}(D)}\|g\|_{C^{0; \alpha,\beta}(D)}.\]
Proof.:
1. The inequality follows directly from \[\left|f\right|_{C^{0;\alpha}(D;V)} =\sup_{x,\bar{x}\in D}\frac{\|f(x)-f(\bar{x})\|_{V}}{\|x-\bar{x}\| _{U}^{\alpha}}\] \[\leq\sup_{x\neq\bar{x}\in D}\|f(x)-f(\bar{x})\|_{V}^{1-\alpha} \sup_{x\neq\bar{x}\in D}\frac{\|f(x)-f(\bar{x})\|_{V}^{\alpha}}{\|x-\bar{x}\| _{U}^{\alpha}}\] \[\leq 2\|f\|_{C^{0}(D;V)}^{1-\alpha}\|f\|_{C^{0;1}(D;V)}^{\alpha}.\]
2. Follows from \[\left|\sigma\circ f\right|_{C^{0;\alpha}(D;V)} =\sup_{x,\bar{x}\in D}\frac{\|\sigma(f(x))-\sigma(f(\bar{x}))\|_{V}}{\|x-\bar{x}\|_{U}^{\alpha}}\lesssim\sup_{x,\bar{x}\in D}\frac{\|f(x)-f(\bar{x})\|_{V}}{\|x-\bar{x}\|_{U}^{\alpha}}=|f|_{C^{0;\alpha}(D;V)}\] and likewise for the \(|\cdot|_{C^{0}(D;V)}\) norm.
3. Follows from \[\left|f\cdot g\right|_{C^{0;\alpha,\beta}(D;V_{2})} =\sup_{x,\bar{x},y,\bar{y}\in D}\frac{\|f(x)\cdot g(y)-f(\bar{x}) \cdot g(y)-f(x)\cdot g(\bar{y})+f(\bar{x})\cdot g(\bar{y})\|_{V_{2}}}{\|x- \bar{x}\|_{U}^{\alpha}\|y-\bar{y}\|_{U}^{\beta}}\] \[=\sup_{x,\bar{x},y,\bar{y}\in D}\frac{\|[f(x)-f(\bar{x})]\cdot[g( y)-g(\bar{y})]\|_{V_{2}}}{\|x-\bar{x}\|_{U}^{\alpha}\|y-\bar{y}\|_{U}^{\beta}}\] \[\lesssim\sup_{x,\bar{x},y,\bar{y}\in D}\frac{\|f(x)-f(\bar{x})\|_{ V_{1}}\|g(y)-g(\bar{y})\|_{V_{1}}}{\|x-\bar{x}\|_{U}^{\alpha}\|y-\bar{y}\|_{U}^{\beta}}\] \[=|f|_{C^{0;\alpha}(D;V_{1})}|g|_{C^{0;\beta}(D;V_{1})}\]
and analogous identities for the remaining semi-norms \(|fg|_{C^{0;0,0}(D;V_{2})}\), \(|fg|_{C^{0;\alpha,0}(D;V_{2})}\), \(|fg|_{C^{0;0,\beta}(D;V_{2})}\).
4. We only show the bound for \(|\cdot|_{C^{0;\alpha,\beta}(D)}\). The other semi-norms follow analogously. Applying the product rule (Lemma 6.2) \[\Delta_{x,h_{x}}^{\alpha}\left[f(x,y)g(x,y)\right]=\left[\Delta_{x,h_{x}}^{\alpha}f(x,y)\right]g(x,y)+\,f(x+h_{x},y)\left[\Delta_{x,h_{x}}^{\alpha}g(x,y)\right]\] and then analogously for \(\Delta_{y,h_{y}}^{\beta}\) \[\Delta_{y,h_{y}}^{\beta}\Delta_{x,h_{x}}^{\alpha}\left[f(x,y)g(x,y)\right]\] \[=\Delta_{y,h_{y}}^{\beta}\left\{\left[\Delta_{x,h_{x}}^{\alpha}f(x,y)\right]g(x,y)+\ f(x+h_{x},y)\left[\Delta_{x,h_{x}}^{\alpha}g(x,y)\right]\right\}\] \[=\left[\Delta_{y,h_{y}}^{\beta}\Delta_{x,h_{x}}^{\alpha}f(x,y)\right]g(x,y)+\left[\Delta_{x,h_{x}}^{\alpha}f(x,y+h_{y})\right]\left[\Delta_{y,h_{y}}^{\beta}g(x,y)\right]\] \[\quad+\left[\Delta_{y,h_{y}}^{\beta}f(x+h_{x},y)\right]\left[\Delta_{x,h_{x}}^{\alpha}g(x,y)\right]+f(x+h_{x},y+h_{y})\left[\Delta_{y,h_{y}}^{\beta}\Delta_{x,h_{x}}^{\alpha}g(x,y)\right].\] Taking the supremum directly shows the result.
The following two lemmas contain chain rules for Hölder and mixed Hölder spaces.
**Lemma 6.4**.: _Let \(D\subset U\) and \(D_{f}\subset V\) be domains in normed spaces \(U\), \(V\) and \(W\). Let \(g\colon D\to D_{f}\) and \(f\colon D_{f}\to W\). Let \(0<\alpha,\,\beta\leq 1\). Then_
\[\left\|\Delta^{\alpha}(f\circ g)\right\|_{C^{0}(\Delta D;W)}\leq\|f^{\prime}\| _{C^{0;0}(D_{f};L(V,W))}\|g\|_{C^{0;\alpha}(D;V)}\]
_and_
\[\|\Delta^{\alpha}(f\circ g)-\Delta^{\alpha}(f\circ\bar{g})\|_{C^ {0}(\Delta D;W)}\\ \leq\|f^{\prime}\|_{C^{0;1}(D_{f};L(V,W))}\|g-\bar{g}\|_{C^{0}(D ;V)}\|\bar{g}\|_{C^{0;\alpha}(D;V)}\\ +\|f^{\prime}\|_{C^{0;0}(D_{f};L(V,W))}\|g-\bar{g}\|_{C^{0;\alpha} (D;V)},\\ \leq 2\|f^{\prime}\|_{C^{0;1}(D_{f};L(V,W))}\|g-\bar{g}\|_{C^{0; \alpha}(D;V)}\max\{1,\|\bar{g}\|_{C^{0;\alpha}(D;V)}\},\]
_where \(L(V,W)\) is the space of all linear maps \(V\to W\) with induced operator norm._
Proof.: Note that
\[\bar{\Delta}_{h}(f,g)(x):=\int_{0}^{1}f^{\prime}(tg(x+h)+(1-t)g(x))\,dt\]
takes values in the linear maps \(L(V,W)\) and thus \(\|\bar{\Delta}_{h}(f,g)(x)v\|_{W}\leq\|\bar{\Delta}_{h}(f,g)(x)\|_{L(V,W)}\|v \|_{V}\), for all \(v\in V\). Using the chain rule Lemma 6.2, it follows that
\[\left\|\Delta_{h}^{\alpha}(f\circ g)(x)\right\|_{W} =\left\|\bar{\Delta}_{h}(f,g)(x)\Delta_{h}^{\alpha}g(x)\right\|_{W}\] \[\leq\left\|\bar{\Delta}_{h}(f,g)(x)\right\|_{L(V,W)}\left\|\Delta_ {h}^{\alpha}g(x)\right\|_{V}\]
and
\[\left\|\Delta_{h}^{\alpha}(f\circ g)(x)-\Delta_{h}^{\alpha}(f\circ \bar{g})(x)\right\|_{W} =\left\|\bar{\Delta}_{h}(f,g)(x)\Delta_{h}^{\alpha}g(x)-\bar{\Delta}_ {h}(f,\bar{g})(x)\Delta_{h}^{\alpha}\bar{g}(x)\right\|_{W}\] \[\leq\left\|\bar{\Delta}_{h}(f,g)(x)-\bar{\Delta}_{h}(f,\bar{g})( x)\right\|_{L(V,W)}\left\|\Delta_{h}^{\alpha}g(x)\right\|_{V}\] \[\quad+\left\|\bar{\Delta}_{h}(f,\bar{g})(x)\right\|_{L(V,W)} \left\|\Delta_{h}^{\alpha}g(x)-\Delta_{h}^{\alpha}\bar{g}(x)\right\|_{V}.\]
Hence, the result follows from
\[\left\|\bar{\Delta}_{h}(f,\bar{g})(x)\right\|_{L(V,W)}\leq\|f^{\prime}\|_{C^{ 0}(D_{f};L(V,W))} \tag{49}\]
and
\[\left\|\bar{\Delta}_{h}(f,g)(x)-\bar{\Delta}_{h}(f,\bar{g})(x) \right\|_{L(V,W)}\\ \leq\|f^{\prime}\|_{C^{0;1}(D_{f};L(V,W))}\int_{0}^{1}\|t(g-\bar {g})(x+h)+(1-t)(g-\bar{g})(x)\|\ dt\\ \leq\|f^{\prime}\|_{C^{0;1}(D_{f};L(V,W))}\|g-\bar{g}\|_{C^{0}(D; V)}, \tag{50}\]
where we have used that unlike \(\Delta_{h}^{\alpha}\), the integral \(\bar{\Delta}_{h}\) does not have an inverse \(\|h\|_{U}^{-\alpha}\) factor.
**Lemma 6.5**.: _Let \(D\subset U\) and \(D_{f}\subset V\) be domains in normed spaces \(U\), \(V\) and \(W\). Let \(g\colon D\to D_{f}\) and \(f\colon D_{f}\to W\). Let \(0<\alpha,\,\beta\leq 1\). Then_
\[\left\|\Delta^{\alpha}\Delta^{\beta}\left[f\circ g-f\circ\bar{g} \right]\right\|_{C^{0}(\Delta D\times\Delta D;W)}\\ \leq\|f\|_{C^{3}(D_{f},W)}\|g-\bar{g}\|_{C^{0;\alpha,\beta}(D;V)} \\ \max\{1,\|g\|_{C^{0;\alpha,\beta}(D;V)}\}\max\{1,\|\bar{g}\|_{C^{ 0;\alpha,\beta}(D;V)}\}.\]
Proof.: In the following, we fix \(x\) and \(y\), but only include it in the formulas if necessary, e.g. \(f=f(x,y)\). By the chain rule Lemma 6.2, we have
\[\Delta_{y,h_{y}}^{\beta}[f\circ g-f\circ\bar{g}] =\bar{\Delta}_{y,h_{y}}(f,g)\Delta_{y,h_{y}}^{\beta}g-\bar{\Delta} _{y,h_{y}}(f,\bar{g})\Delta_{y,h_{y}}^{\beta}\bar{g}\] \[=\left[\bar{\Delta}_{y,h_{y}}(f,g)-\bar{\Delta}_{y,h_{y}}(f,\bar{ g})\right]\Delta_{y,h_{y}}^{\beta}g\] \[\quad+\bar{\Delta}_{y,h_{y}}(f,\bar{g})\left[\Delta_{y,h_{y}}^{ \beta}g-\Delta_{y,h_{y}}^{\beta}\bar{g}\right]\] \[=:I+II.\]
Applying the product rule Lemma 6.2 to the first term yields
\[\left\|\Delta_{x,h_{x}}^{\alpha}I\right\|_{W}= \left\|\left[\Delta_{x,h_{x}}^{\alpha}[\bar{\Delta}_{y,h_{y}}(f,g )]-\Delta_{x,h_{x}}^{\alpha}[\bar{\Delta}_{y,h_{y}}(f,\bar{g})]\right]\Delta_ {y,h_{y}}^{\beta}g(x+h_{x},y)\right.\] \[+\left.\left[\bar{\Delta}_{y,h_{y}}(f,g)-\bar{\Delta}_{y,h_{y}}(f,\bar{g})\right]\Delta_{x,h_{x}}^{\alpha}\Delta_{y,h_{y}}^{\beta}g\right\|_{W}\] \[\leq\left\|\left[\Delta_{x,h_{x}}^{\alpha}[\bar{\Delta}_{y,h_{y}} (f,g)]-\Delta_{x,h_{x}}^{\alpha}[\bar{\Delta}_{y,h_{y}}(f,\bar{g})]\right] \right\|_{L(V,W)}\left\|\Delta_{y,h_{y}}^{\beta}g(x+h_{x},y)\right\|_{W}\] \[\quad+\left\|\left[\bar{\Delta}_{y,h_{y}}(f,g)-\bar{\Delta}_{y,h_ {y}}(f,\bar{g})\right]\right\|_{L(V,W)}\left\|\Delta_{x,h_{x}}^{\alpha}\Delta_ {y,h_{y}}^{\beta}g\right\|_{W}.\]
Likewise, applying the product Lemma rule 6.2 to the second term yields
\[\left\|\Delta^{\alpha}_{x,h_{x}}II\right\|_{W} =\left\|\Delta^{\alpha}_{x,h_{x}}\bar{\Delta}_{y,h_{y}}(f,\bar{g}) \left[\Delta^{\beta}_{y,h_{y}}g-\Delta^{\beta}_{y,h_{y}}\bar{g}\right]\right\|_ {W}\] \[\quad+\left\|\bar{\Delta}_{y,h_{y}}(f,\bar{g})(x+h_{x},y)\left[ \Delta^{\alpha}_{x,h_{x}}\Delta^{\beta}_{y,h_{y}}g-\Delta^{\alpha}_{x,h_{x}} \Delta^{\beta}_{y,h_{y}}\bar{g}\right]\right\|_{W}\] \[\leq\left\|\Delta^{\alpha}_{x,h_{x}}\bar{\Delta}_{y,h_{y}}(f,\bar {g})\right\|_{L(V,W)}\left\|\Delta^{\beta}_{y,h_{y}}g-\Delta^{\beta}_{y,h_{y}} \bar{g}\right\|_{W}\] \[\quad+\left\|\bar{\Delta}_{y,h_{y}}(f,\bar{g})(x+h_{x},y)\right\| _{L(V,W)}\left\|\Delta^{\alpha}_{x,h_{x}}\Delta^{\beta}_{y,h_{y}}g-\Delta^{ \alpha}_{x,h_{x}}\Delta^{\beta}_{y,h_{y}}\bar{g}\right\|_{W}.\]
All terms involving only \(g\) and \(\bar{g}\) can easily be upper bounded by \(\|g\|_{C^{0;\alpha,\beta}(D;V)}\), \(\|\bar{g}\|_{C^{0;\alpha,\beta}(D;V)}\) or \(\|g-\bar{g}\|_{C^{0;\alpha,\beta}(D;V)}\). The terms
\[\left\|\bar{\Delta}_{y,h_{y}}(f,\bar{g})(x+h_{x},y)\right\|_{L(V, W)} \leq\|f^{\prime}\|_{C^{0}(D_{f};L(V,W))}\] \[\left\|\left[\bar{\Delta}_{y,h_{y}}(f,g)-\bar{\Delta}_{y,h_{y}}(f,\bar{g})\right]\right\|_{L(V,W)} \leq\|f^{\prime}\|_{C^{0;1}(D_{f};L(V,W))}\|g-\bar{g}\|_{C^{0}(D; V)}\]
are bounded by (49) and (50) in the proof of Lemma 6.4. For the remaining terms, define
\[G(x):=tg(x,y+h_{y})+(1-t)g(x,y)\]
and likewise \(\bar{G}\). Then
\[\|G\|_{C^{0;\alpha}(D,V)}\lesssim\|g\|_{C^{0;\alpha,\beta}(D,V)},\quad\|G- \bar{G}\|_{C^{0;\alpha}(D,V)}\lesssim\|g-\bar{g}\|_{C^{0;\alpha,\beta}(D,V)}.\]
Thus, by Lemma 6.4, we have
\[\left\|\Delta^{\alpha}_{x,h_{x}}\left[\bar{\Delta}_{y,h_{y}}(f,g) \right]\right\|_{L(V,W)} =\left\|\int_{0}^{1}\Delta^{\alpha}_{x,h_{x}}(f^{\prime}\circ G) \,dt\right\|_{L(V,W)}\] \[\leq\|f^{\prime\prime}\|_{C^{0;0}(D_{f};L(V,L(V,W)))}\|g\|_{C^{0; \alpha,\beta}(D;V)}\]
and
\[\left\|\Delta^{\alpha}_{x,h_{x}}\left[\bar{\Delta}_{y,h_{y}}(f,g) -\bar{\Delta}_{y,h_{y}}(f,\bar{g})\right]\right\|_{L(V,W)}\] \[\quad=\left\|\int_{0}^{1}\Delta^{\alpha}_{x,h_{x}}\left[f^{\prime }\circ G-f^{\prime}\circ\bar{G}\right]\,dt\right\|_{L(V,W)}\] \[\quad\leq 2\|f^{\prime\prime}\|_{C^{0;1}(D_{f};L(V,L(V,W)))}\|g- \bar{g}\|_{C^{0;\alpha,\beta}(D;V)}\max\{1,\|\bar{g}\|_{C^{0;\alpha,\beta}(D;V) }\}.\]
Combining all inequalities yields the proof.
### Concentration
In this section, we recall the definition of Orlicz norms, some basic properties and the chaining concentration inequalities we use to show that the empirical NTK is close to the NTK.
**Definition 6.6**.: _For random variable \(X\), we define the sub-gaussian and sub-exponential norms by_
\[\|X\|_{\psi_{2}} =\inf\left\{t>0:\,\mathbb{E}\left[\exp(X^{2}/t^{2})\right]\leq 2\right\},\] \[\|X\|_{\psi_{1}} =\inf\left\{t>0:\,\mathbb{E}\left[\exp(|X|/t)\right]\leq 2\right\}.\]
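For intuition only (this plays no role in the proofs): for a standard Gaussian \(X\) one has \(\mathbb{E}[\exp(X^{2}/t^{2})]=(1-2/t^{2})^{-1/2}\) for \(t^{2}>2\), so \(\|X\|_{\psi_{2}}=\sqrt{8/3}\approx 1.633\). The sketch below recovers this by Monte Carlo and bisection; the sample size and bracket are arbitrary, and the estimate is crude because the integrand is heavy-tailed near the critical \(t\).

```python
import numpy as np

def psi2_norm(samples, lo=1e-2, hi=100.0, tol=1e-6):
    # Smallest t with E[exp(X^2/t^2)] <= 2, via bisection on the monotone moment.
    def moment(t):
        return np.mean(np.exp(np.clip(samples**2 / t**2, None, 700.0)))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if moment(mid) <= 2.0 else (mid, hi)
    return hi

x = np.random.default_rng(0).normal(size=1_000_000)
print(psi2_norm(x), np.sqrt(8 / 3))  # both roughly 1.63 (expect ~1% Monte Carlo error)
```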
**Lemma 6.7**.: _Assume that \(\sigma\) satisfies the growth and Lipschitz conditions_
\[|\sigma(x)|\leq G|x|, |\sigma(x)-\sigma(y)|\leq L|x-y|\]
_for all \(x,y\in\mathbb{R}\) and let \(X\), \(Y\) be two sub-gaussian random variables. Then_
\[\|\sigma\left(X\right)\|_{\psi_{2}}\lesssim G\|X\|_{\psi_{2}}, \|\sigma\left(X\right)-\sigma\left(Y\right)\|_{\psi_{2}}\lesssim L\|X-Y\|_{ \psi_{2}}.\]
Proof.: For two random variables \(X\) and \(Y\) with \(X^{2}\leq Y^{2}\) almost surely, we have
\[\|X\|_{\psi_{2}} =\inf\left\{t>0:\,\mathbb{E}\left[\exp(X^{2}/t^{2})\right]\leq 2\right\}\] \[\leq\inf\left\{t>0:\,\mathbb{E}\left[\exp(Y^{2}/t^{2})\right]\leq 2 \right\}=\|Y\|_{\psi_{2}}.\]
Thus, the result follows directly from
\[\sigma(X)^{2}\leq G^{2}X^{2}, [\sigma(x)-\sigma(y)]^{2}\leq L^{2}[x-y]^{2}.\]
**Lemma 6.8**.: _Let \(X\) and \(Y\) be two sub-gaussian random variables. Then_
\[\|XY\|_{\psi_{1}}\leq\|X\|_{\psi_{2}}\|Y\|_{\psi_{2}}.\]
Proof.: Let
\[t=\|X\|_{\psi_{2}}^{1/2}\|Y\|_{\psi_{2}}^{1/2}=\left\|\left(\frac{\|Y\|_{\psi_{ 2}}}{\|X\|_{\psi_{2}}}\right)^{1/2}X\right\|_{\psi_{2}}=\left\|\left(\frac{\|X \|_{\psi_{2}}}{\|Y\|_{\psi_{2}}}\right)^{1/2}Y\right\|_{\psi_{2}}.\]
Ignoring a simple \(\epsilon\) perturbation, we assume that the infima in the definition of the \(\|X\|_{\psi_{2}}\) and \(\|Y\|_{\psi_{2}}\) norms are attained. Then
\[\mathbb{E}\left[\exp\left(\frac{\|Y\|_{\psi_{2}}}{\|X\|_{\psi_{2}}}\frac{X^{2}}{t^{2}}\right)\right]\leq 2,\qquad\mathbb{E}\left[\exp\left(\frac{\|X\|_{\psi_{2}}}{\|Y\|_{\psi_{2}}}\frac{Y^{2}}{t^{2}}\right)\right]\leq 2.\]
Thus, Young's inequality and Cauchy–Schwarz imply
\[\mathbb{E}\left[\exp\left(\frac{|XY|}{t^{2}}\right)\right]\leq\mathbb{E}\left[\exp\left(\frac{1}{2}\frac{\|Y\|_{\psi_{2}}}{\|X\|_{\psi_{2}}}\frac{X^{2}}{t^{2}}+\frac{1}{2}\frac{\|X\|_{\psi_{2}}}{\|Y\|_{\psi_{2}}}\frac{Y^{2}}{t^{2}}\right)\right]\\ \leq\mathbb{E}\left[\exp\left(\frac{\|Y\|_{\psi_{2}}}{\|X\|_{\psi_{2}}}\frac{X^{2}}{t^{2}}\right)\right]^{1/2}\mathbb{E}\left[\exp\left(\frac{\|X\|_{\psi_{2}}}{\|Y\|_{\psi_{2}}}\frac{Y^{2}}{t^{2}}\right)\right]^{1/2}\leq\sqrt{2}\sqrt{2}=2.\]
Hence
\[\|XY\|_{\psi_{1}}\leq t^{2}=\|X\|_{\psi_{2}}\|Y\|_{\psi_{2}}.\]
**Theorem 6.9** ([16, Theorem 3.5]).: _Let \(\mathcal{X}\) be a normed linear space. Assume the \(\mathcal{X}\) valued separable random process \((X_{t})_{t\in T}\), has a mixed tail, with respect to some semi-metrics \(d_{1}\) and \(d_{2}\) on \(T\), i.e._
\[\Pr\left[\|X_{t}-X_{s}\|\geq\sqrt{u}d_{2}(t,s)+ud_{1}(t,s)\right]\leq 2e^{-u}\]
_for all \(s,\,t\in T\) and \(u\geq 0\). Set_
\[\gamma_{\alpha}(T,d) :=\inf_{\mathcal{T}}\sup_{t\in T}\sum_{n=0}^{\infty}2^{n/\alpha}d(t,T_{n}), \alpha\in\{1,2\},\] \[\Delta_{d}(T) :=\sup_{s,t\in T}d(s,t),\]
_where the infimum is taken over all admissible sequences \(T_{n}\subset T\) with \(|T_{0}|=1\) and \(|T_{n}|\leq 2^{2^{n}}\). Then for any \(t_{0}\in T\)_
\[\Pr\left[\sup_{t\in T}\|X_{t}-X_{t_{0}}\|\geq C\left[\gamma_{2}(T,d_{2})+ \gamma_{1}(T,d_{1})+\sqrt{u}\Delta_{d_{2}}(T)+u\Delta_{d_{1}}(T)\right]\right] \leq e^{-u}.\]
_Remark 6.10_.: [16, Theorem 3.5] assumes that \(T\) is finite. Using separability and monotone convergence, this can be extended to infinite \(T\) by standard arguments.
**Lemma 6.11**.: _Let \(0<\alpha\leq 1\) and \(D\subset\mathbb{R}^{d}\) be a set of Euclidean norm \(|\cdot|\)-diameter smaller than \(R\geq 1\). Then_
\[\gamma_{1}(D,|\cdot|^{\alpha})\lesssim\frac{3\alpha+1}{\alpha}R^{1+\alpha}d, \qquad\gamma_{2}(D,|\cdot|^{\alpha})\lesssim\left(\frac{3^{\alpha}}{4\alpha} \right)^{1/2}R^{\alpha/2}d^{1/2}.\]
Proof.: Let \(N(D,|\cdot|^{\alpha},u)\) be the covering number of \(D\), i.e. the smallest number of \(u\)-balls in the metric \(|\cdot|^{\alpha}\) necessary to cover \(D\). It is well known (e.g. [16, (2.3)]) that
\[\gamma_{i}(D,|\cdot|^{\alpha})\lesssim\int_{0}^{\infty}\left[\log N(D,|\cdot |^{\alpha},u)\right]^{1/i}\,du\lesssim\int_{0}^{R^{\alpha}}\left[\log N(D,| \cdot|^{\alpha},u)\right]^{1/i}\,du,\]
where in the last step we have used that \(N(D,|\cdot|^{\alpha},u)=1\) for \(u\geq R^{\alpha}\) and thus its logarithm is zero. Since every \(u\)-cover in the \(|\cdot|\) norm is a \(u^{\alpha}\) cover in the \(|\cdot|^{\alpha}\) metric, the covering numbers can be estimated by
\[N(D,|\cdot|^{\alpha},u)=N(D,|\cdot|,u^{1/\alpha})\leq\left(\frac{3R}{u^{1/ \alpha}}\right)^{d}=\left(\frac{(3R)^{\alpha}}{u}\right)^{d/\alpha},\]
see e.g. [67]. Hence
\[\gamma_{1}(D,|\cdot|^{\alpha})\lesssim\int_{0}^{R^{\alpha}}\log \left(\frac{(3R)^{\alpha}}{u}\right)^{d/\alpha}\,du=\frac{d}{\alpha}\int_{0} ^{R^{\alpha}}\alpha\log(3R)-\log u\,du\\ \leq\frac{d}{\alpha}\left[3\alpha R^{1+\alpha}-R^{\alpha}\log R^ {\alpha}+R^{\alpha}\right]\leq\frac{d}{\alpha}(3\alpha+1)R^{1+\alpha}\]
and using \(\log x\leq x-1\leq x\)
\[\gamma_{2}(D,|\cdot|^{\alpha})\lesssim\left(\frac{d}{\alpha}\right)^ {1/2}\int_{0}^{R^{\alpha}}\left[\log\frac{(3R)^{\alpha}}{u}\right]^{1/2}\,du \lesssim\left(\frac{d}{\alpha}\right)^{1/2}\int_{0}^{R^{\alpha}}\left[\frac{(3R )^{\alpha}}{u}\right]^{1/2}\,du\\ \lesssim\left(\frac{3^{\alpha}dR^{\alpha}}{\alpha}\right)^{1/2} \int_{0}^{R^{\alpha}}\left[\frac{1}{u}\right]^{1/2}\,du\lesssim\left(\frac{3^ {\alpha}d}{4\alpha}\right)^{1/2}R^{\alpha/2}.\]
The following is a reformulation of the chaining inequality [16, Theorem 3.5], restated above as Theorem 6.9, that is compatible with the terminology used in the NTK concentration proof.
**Corollary 6.12**.: _For \(j\in[N]\), let \((X_{j,t})_{t\in D}\) be real valued independent stochastic processes on some domain \(D\) with radius \(\lesssim 1\). Assume that the map \(t\to X_{j,t}\) with values in the Orlicz space \(L_{\psi_{1}}\) is Hölder continuous_
\[\|X_{j,\cdot}\|_{C^{0;\alpha}(D;\psi_{1})}\leq L.\]
_Then_
\[\Pr\left[\sup_{t\in D}\left|\frac{1}{N}\sum_{j=1}^{N}X_{j,t}-\mathbb{E}\left[X_{j,t}\right]\right|\geq CL\left[\left(\frac{d}{N}\right)^{1/2}+\frac{d}{N}+\left(\frac{u}{N}\right)^{1/2}+\frac{u}{N}\right]\right]\leq e^{-u}.\]
Proof.: We show the result with Theorem 6.9 for the process
\[Y_{t}:=\frac{1}{N}\sum_{j=1}^{N}X_{j,t}-\mathbb{E}\left[X_{j,t}\right].\]
We first show that it has a mixed tail. For all \(s,t\in D\), we have
\[\|X_{j,t}-X_{j,s}\|_{\psi_{1}}\leq L|s-t|^{\alpha}.\]
Hence, Bernstein's inequality implies
\[\Pr\left[|Y_{t}-Y_{s}|\geq\tau\right]=\Pr\left[\left|\frac{1}{N} \sum_{j=1}^{N}[X_{j,t}-X_{j,s}]-\mathbb{E}\left[X_{j,t}-X_{j,s}\right]\right| \geq\tau\right]\\ \leq 2\exp\left(-cN\min\left\{\frac{\tau^{2}}{L^{2}|t-s|^{2\alpha} },\frac{\tau}{L|t-s|^{\alpha}}\right\}\right).\]
An elementary computation shows that
\[u:=cN\min\left\{\frac{\tau^{2}}{L^{2}|t-s|^{2\alpha}},\,\frac{\tau}{L|t-s|^{\alpha}}\right\}\quad\Rightarrow\quad\tau=L|t-s|^{\alpha}\max\left\{\sqrt{\frac{u}{cN}},\frac{u}{cN}\right\}\]
and thus
\[\Pr\left[|Y_{t}-Y_{s}|\geq L|t-s|^{\alpha}\max\left\{\sqrt{\frac{u}{cN}},\frac{u}{ cN}\right\}\right]\leq 2\exp(-u). \tag{51}\]
That is, the centered process \(Y_{t}\) has a mixed tail with
\[d_{i}(t,s):=(cN)^{-1/i}L|t-s|^{\alpha},\]
for \(i=1,2\), which are metrics because \(\alpha\leq 1\). Moreover, the \(\gamma_{i}\)-functionals are linear under scaling,
\[\gamma_{i}(D,d_{i})=(cN)^{-1/i}L\gamma_{i}(D,|\cdot|^{\alpha})\]
and thus by Lemma 6.11
\[\gamma_{1}(D,d_{1})\lesssim L\frac{d}{N},\qquad\qquad\gamma_{2}(D,d_{2})\lesssim L\left(\frac{d}{N}\right)^{1/2}.\]
Thus, by chaining Theorem 6.9 we have
\[\Pr\left[\sup_{t\in T}\|Y_{t}-Y_{t_{0}}\|\geq CL\left[\left(\frac{d}{N}\right) ^{1/2}+\frac{d}{N}+\left(\frac{u}{N}\right)^{1/2}+\frac{u}{N}\right]\right] \leq e^{-u},\]
which directly yields the corollary with \(\sup_{t\in D}\|Y_{t}\|\leq\sup_{t\in D}\|Y_{t}-Y_{t_{0}}\|+\|Y_{t_{0}}\|\) and (51).
### Hermite Polynomials
Hermite polynomials are defined by
\[H_{n}(x):=(-1)^{n}e^{x^{2}/2}\frac{d^{n}}{dx^{n}}e^{-x^{2}/2}\]
and orthogonal with respect to the Gaussian weighted scalar product
\[\left\langle f,g\right\rangle_{N}:=\mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[f(u)g(u)\right]=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}f(u)g(u)e^{-u^{2}/2}\,du.\]
**Lemma 6.13**.:
1. Normalization: \[\left\langle H_{n},H_{m}\right\rangle_{N}=n!\,\delta_{nm}.\]
2. _Derivatives: Let_ \(f:\mathbb{R}\to\mathbb{R}\) _be_ \(k\) _times continuously differentiable, \(k\leq n\), so that all derivatives up to order_ \(k\) _have at most polynomial growth for_ \(x\to\pm\infty\)_. Then_ \[\left\langle f,H_{n}\right\rangle_{N}=\left\langle f^{(k)},H_{n-k}\right\rangle_{N}.\]
Proof.: The normalization is well known, we only show the formula for the derivative. By the growth condition, we have \(\left|f^{(k)}(u)\frac{d^{n-k-1}}{du^{n-k-1}}e^{-u^{2}/2}\right|\to 0\) for \(u\to\pm\infty\). Thus, in the integration by parts formula below all boundary terms vanish and we have
\[\left\langle f,H_{n}\right\rangle_{N} =\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}f(u)H_{n}(u)e^{-u^{2}/2}\,du\] \[=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}f(u)\left[(-1)^{n}e^{u^{2}/2}\,\frac{d^{n}}{du^{n}}e^{-u^{2}/2}\right]e^{-u^{2}/2}\,du\] \[=\frac{1}{\sqrt{2\pi}}(-1)^{n}\int_{\mathbb{R}}f(u)\frac{d^{n}}{du^{n}}e^{-u^{2}/2}\,du\] \[=\frac{1}{\sqrt{2\pi}}(-1)^{n-k}\int_{\mathbb{R}}f^{(k)}(u)\frac{d^{n-k}}{du^{n-k}}e^{-u^{2}/2}\,du\] \[=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}f^{(k)}(u)\left[(-1)^{n-k}e^{u^{2}/2}\,\frac{d^{n-k}}{du^{n-k}}e^{-u^{2}/2}\right]e^{-u^{2}/2}\,du\] \[=\left\langle f^{(k)},H_{n-k}\right\rangle_{N}.\]
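Both identities are easy to check numerically: NumPy's probabilists' Hermite module implements exactly the \(H_{n}\) used here (weight \(e^{-x^{2}/2}\)). A small sketch, with an arbitrary quadrature order:

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite He_n = H_n here
from math import factorial, sqrt, pi

x, w = He.hermegauss(40)  # Gauss quadrature for the weight exp(-x^2/2)

def inner(cf, cg):
    # <f,g>_N = E_{u~N(0,1)}[f(u) g(u)] for Hermite coefficient vectors cf, cg
    return np.sum(w * He.hermeval(x, cf) * He.hermeval(x, cg)) / sqrt(2 * pi)

def e(n):
    c = np.zeros(n + 1); c[n] = 1.0; return c  # coefficient vector of H_n

print(inner(e(4), e(4)), factorial(4))  # normalization: both 24
print(inner(e(4), e(3)))                # orthogonality: ~0
# Derivative identity with f = H_5 (so f' = 5 H_4, k = 1, n = 5):
print(inner(e(5), e(5)), 5 * inner(e(4), e(4)))  # both 120
```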
**Theorem 6.14** (Mehler's theorem).: _Let_
\[A=\begin{bmatrix}1&\rho\\ \rho&1\end{bmatrix}.\]
_Then the multi- and uni-variate normal density functions satisfy_
\[\operatorname{pdf}_{\mathcal{N}(0,A)}(u,v)=\sum_{k=0}^{\infty}H_{k}(u)H_{k}(v)\frac{\rho^{k}}{k!}\operatorname{pdf}_{\mathcal{N}(0,1)}(u)\operatorname{pdf}_{\mathcal{N}(0,1)}(v).\]
Proof.: See [71] for Mehler's theorem in the form stated here.
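Mehler's theorem can likewise be verified numerically by truncating the series; the evaluation points, correlation and truncation order below are arbitrary choices.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi, exp

def lhs(u, v, rho):
    # bivariate normal density with unit variances and correlation rho
    det = 1.0 - rho**2
    return exp(-(u*u - 2*rho*u*v + v*v) / (2*det)) / (2*pi*sqrt(det))

def rhs(u, v, rho, K=60):
    # truncated Mehler series
    phi = lambda t: exp(-t*t/2) / sqrt(2*pi)
    H = lambda k, t: He.hermeval(t, np.eye(k + 1)[k])  # H_k evaluated at t
    return phi(u)*phi(v)*sum(H(k,u)*H(k,v)*rho**k/factorial(k) for k in range(K))

print(lhs(0.7, -0.3, 0.5), rhs(0.7, -0.3, 0.5))  # agree to high precision
```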
### Sobolev Spaces on the Sphere
#### 6.4.1 Definition and Properties
We use two alternative characterizations of Sobolev spaces on the sphere. The first is based on spherical harmonics, which are also eigenfunctions of the NTK and thus establishes connections to the available NTK literature. Second, we consider Sobolev–Slobodeckij-type norms, which are structurally similar to Hölder norms and allow connections to the perturbation analysis in this paper.
The spherical harmonics
\[Y_{\ell}^{j},\qquad\qquad\qquad\ell=0,1,2,\ldots,\qquad\qquad 1\leq j\leq\nu(\ell)\]
of degree \(\ell\) and order \(j\) are an orthonormal basis of \(L_{2}(\mathbb{S}^{d-1})\), comparable to Fourier bases for periodic functions. For any \(f\in L_{2}(\mathbb{S}^{d-1})\), we denote by \(\hat{f}_{\ell j}=\left\langle f,Y_{\ell}^{j}\right\rangle\) the corresponding basis coefficient. The Sobolev space \(H^{\alpha}(\mathbb{S}^{d-1})\) consists of all functions for which the norm
\[\left\|f\right\|_{H^{\alpha}(\mathbb{S}^{d-1})}^{2}=\sum_{\ell=0}^{\infty}\sum_ {j=1}^{\nu(\ell)}\left(1+\ell^{1/2}(\ell+d-2)^{1/2}\right)^{2\alpha}\left|\hat{ f}_{\ell j}\right|^{2}\]
is finite. We write \(H^{\alpha}=H^{\alpha}(\mathbb{S}^{d-1})\) if the domain is understood from context. Since the constants in this paper are dimension dependent, we simplify this to the equivalent norm
\[\left\|f\right\|_{H^{\alpha}(\mathbb{S}^{d-1})}^{2}=\sum_{\ell=0}^{\infty}\sum _{j=1}^{\nu(\ell)}(1+\ell)^{2\alpha}\left|\hat{f}_{\ell j}\right|^{2}. \tag{52}\]
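For orientation, a single spherical harmonic has \(\|Y_{\ell}^{j}\|_{H^{\alpha}(\mathbb{S}^{d-1})}=(1+\ell)^{\alpha}\) by (52), so the smoothness index \(\alpha\) measures how strongly high frequencies \(\ell\) are penalized; membership in \(H^{\alpha}\) is equivalent to weighted square-summability of the coefficients \(\hat{f}_{\ell j}\).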
Another equivalent norm, similar to Sobolev–Slobodeckij norms, is given in [7, Proposition 1.4] and defined as follows for the case \(0<\alpha<2\). For the spherical cap centered at \(x\in\mathbb{S}^{d-1}\) and angle \(t\in(0,\pi)\) given by
\[C(x,t):=\left\{y\in\mathbb{S}^{d-1}:\,x\cdot y\geq\cos t\right\}\]
set
\[A_{t}(f)(x):=\frac{1}{|C(x,t)|}\int_{C(x,t)}f(\tau)\,d\tau,\] i.e. the average of \(f\) over the cap \(C(x,t)\).
With
\[S_{\alpha}(f)^{2}(x):=\int_{0}^{\pi}\left|A_{t}f(x)-f(x)\right|^{2}t^{-2\alpha -1}\,dt\]
the Sobolev norm on the sphere is equivalent to
\[\left\|f\right\|_{H^{\alpha}(\mathbb{S}^{d-1})}\sim\left\|S_{\alpha}(f) \right\|_{L_{2}(\mathbb{S}^{d-1})}. \tag{53}\]
Using the definition (52) for \(a<b<c\), the interpolation inequality
\[\left\|\cdot\right\|_{H^{b}(\mathbb{S}^{d-1})}\lesssim\left\|\cdot\right\|_{ H^{a}(\mathbb{S}^{d-1})}^{\frac{c-b}{c-a}}\left\|\cdot\right\|_{H^{c}(\mathbb{S}^{d-1 })}^{\frac{b-a}{c-a}},\ \ \ \ \left\langle\cdot,\cdot\right\rangle_{-\alpha}\lesssim\left\|\cdot\right\|_{ -3\alpha}\|\cdot\|_{\alpha}, \tag{54}\]
follows directly from Cauchy-Schwarz. Moreover, we have the following embedding.
**Lemma 6.15**.: _Let \(0<\alpha<1\). Then for any \(\epsilon>0\) with \(\alpha+\epsilon\leq 1\), we have_
\[\left\|\cdot\right\|_{H^{\alpha}(\mathbb{S}^{d-1})}\lesssim\left\|\cdot \right\|_{C^{0;\alpha+\epsilon}(\mathbb{S}^{d-1})}.\]
Proof.: The proof is standard and similar to Lemma 6.16.
#### 6.4.2 Kernel Bounds
In this section, we provide bounds for the kernel integral
\[\left\langle f,g\right\rangle_{k}:=\iint_{D\times D}f(x)k(x,y)g(y)\,dx\,dy\]
on the sphere \(D=\mathbb{S}^{d-1}\) in Sobolev norms on the sphere. Clearly, for \(0\leq\alpha,\beta<2\), we have
\[\left\langle f,g\right\rangle_{k}\leq\|f\|_{H^{-\alpha}}\left\|\int_{D}k(\cdot,y)g(y)\,dy\right\|_{H^{\alpha}}\leq\|f\|_{H^{-\alpha}}\|k\|_{H^{\alpha} \gets H^{-\beta}}\|g\|_{H^{-\beta}},\]
where the norm of \(k\) is the induced operator norm. While the norms for \(f\) and \(g\) are the ones used in the convergence analysis, concentration and perturbation results for \(k\) are computed in mixed Hölder norms. We show in this section that these bound the operator norm.
Indeed, \(\left\langle f,g\right\rangle_{k}\) is a bilinear form on \(f\) and \(g\) and thus is bounded by the tensor product norms
\[\left\langle f,g\right\rangle_{k}\leq\|f\otimes g\|_{(H^{\alpha}\otimes H^{ \beta})^{\prime}}\|k\|_{H^{\alpha}\otimes H^{\beta}}\leq\|f\|_{H^{-\alpha}}\| g\|_{H^{-\beta}}\|k\|_{H^{\alpha}\otimes H^{\beta}},\]
where \({}^{\prime}\) denotes the dual norm. The \(H^{\alpha}\otimes H^{\beta}\) norm contains mixed smoothness and, with the Sobolev–Slobodeckij-type definition (53), is easily bounded by corresponding mixed Hölder regularity. In order to avoid a rigorous characterization of tensor product norms on the sphere, the following lemma shows the required bounds directly.
**Lemma 6.16**.: _Let \(0<\alpha,\beta<1\). Then for any \(\epsilon>0\) with \(\alpha+\epsilon\leq 1\) and \(\beta+\epsilon<1\), we have_
\[\iint_{D\times D}f(x)k(x,y)g(y)\,dx\,dy\leq\|f\|_{H^{-\alpha}(\mathbb{S}^{d-1 })}\|g\|_{H^{-\beta}(\mathbb{S}^{d-1})}\|k\|_{C^{0;\alpha+\epsilon,\beta+ \epsilon}(\mathbb{S}^{d-1})}.\]
Proof.: Since for any \(u,\,v\)
\[\int u(x)v(x)\,dx=\int u(x)\frac{v(x)}{\|v\|_{H^{\alpha}}}\,dx\,\|v\|_{H^{\alpha}}\\ \leq\sup_{\|w\|_{H^{\alpha}}\leq 1}\int u(x)w(x)\,dx\,\|v\|_{H^{\alpha}}\leq\|u\|_{H^{-\alpha}}\|v\|_{H^{\alpha}},\]
with \(D=\mathbb{S}^{d-1}\) we have
\[\left\langle f,g\right\rangle_{k}=\iint_{D\times D}f(x)k(x,y)g(y)\,dx\,dy\leq \|f\|_{H^{-\alpha}}\left\|\int_{D}k(\cdot,y)g(y)\right\|_{H^{\alpha}}\]
so that it remains to estimate the last term. Plugging in definition (53) of the Sobolev norm, we obtain
\[\left\|\int_{D}k(\cdot,y)g(y)\right\|_{H^{\alpha}}^{2}=\int_{D}\int_{0}^{\pi} \left|(A_{t}^{x}-I)\left(\int_{D}k(\cdot,y)g(y)\,dy\right)(x)\right|^{2}t^{-2 \alpha-1}\,dt\,dx,\]
where \(A_{t}^{x}\) is the average in (53) applied to the \(x\) variable only and \(I\) the identity. Swapping the inner integral with the one inside the definition of \(A_{t}^{x}\), we estimate
\[\left\|\int_{D}k(\cdot,y)g(y)\right\|_{H^{\alpha}}^{2} =\int_{D}\int_{0}^{\pi}\left|\int_{D}\left[(A_{t}^{x}-I)(k(\cdot,y))(x)\right]g(y)\,dy\right|^{2}t^{-2\alpha-1}\,dt\,dx\] \[\leq\int_{D}\int_{0}^{\pi}\left\|(A_{t}^{x}-I)(k(\cdot,y))(x)\right\|_{H^{\beta}}^{2}\|g\|_{H^{-\beta}}^{2}t^{-2\alpha-1}\,dt\,dx\] \[=\iint_{D\times D}\int_{0}^{\pi}\int_{0}^{\pi}\left|(A_{s}^{y}-I)(A_{t}^{x}-I)(k)(x,y)\right|^{2}t^{-2\alpha-1}s^{-2\beta-1}\,ds\,dt\,dx\,dy\,\|g\|_{H^{-\beta}}^{2}.\]
Plugging in the definition of the averages \(A_{s}^{y}\) and \(A_{t}^{x}\), the integrand is estimated by the mixed Hölder norm
\[\left|(A_{s}^{y}-I)(A_{t}^{x}-I)(k)(x,y)\right|\lesssim t^{\alpha+\epsilon}s^{\beta+\epsilon}\|k\|_{C^{0;\alpha+\epsilon,\beta+\epsilon}(\mathbb{S}^{d-1})},\]
because the averages range over caps of geodesic radius at most \(t\), respectively \(s\). Hence
\[\left\|\int_{D}k(\cdot,y)g(y)\right\|_{H^{\alpha}}^{2}\lesssim\|k\|_{C^{0;\alpha+\epsilon,\beta+\epsilon}(\mathbb{S}^{d-1})}^{2}\,\|g\|_{H^{-\beta}}^{2}\int_{0}^{\pi}\int_{0}^{\pi}t^{2\epsilon-1}s^{2\epsilon-1}\,ds\,dt\lesssim\|k\|_{C^{0;\alpha+\epsilon,\beta+\epsilon}(\mathbb{S}^{d-1})}^{2}\,\|g\|_{H^{-\beta}}^{2},\]
where the double integral is finite because \(\epsilon>0\). This completes the proof.
#### 6.4.3 NTK on the Sphere
This section fills in the proofs for Section 3. Recall that we denote the normal NTK used in [9, 22, 11] by
\[\Theta(x,y)=\lim_{\mathrm{width}\to\infty}\sum_{\lambda}\partial_{\lambda}f^{L+1 }(x)\partial_{\lambda}f^{L+1}(y),\]
whereas the NTK \(\Gamma(x,y)\) used in this paper confines the sum to \(|\lambda|=L-1\), i.e., the second-to-last layer, see Section 3. We first show that the reproducing kernel Hilbert space (RKHS) of the NTK is a Sobolev space.
**Lemma 6.17**.: _Let \(\Theta(x,y)\) be the neural tangent kernel for a fully connected neural network on the sphere \(\mathbb{S}^{d-1}\) with bias and \(\mathrm{ReLU}\) activation. Then the corresponding RKHS \(H_{\Theta}\) is the Sobolev space \(H^{d/2}(\mathbb{S}^{d-1})\) with equivalent norms_
\[\|\cdot\|_{H_{\Theta}}\sim\|\cdot\|_{H^{d/2}}.\]
Proof.: By [11, Theorem 1] the RKHS \(H_{\Theta}\) is the same as the RKHS \(H_{Lap}\) of the Laplacian kernel
\[k(x,y)=e^{-\|x-y\|}.\]
An inspection of their proof reveals that these spaces have equivalent norms. By [22, Theorem 2], the Laplace kernel has the same eigenfunctions as the NTK (both are spherical harmonics) and eigenvalues
\[\ell^{-d}\lesssim\lambda_{\ell,j}\lesssim\ell^{-d},\qquad\qquad\ell\geq\ell_ {0},\qquad\qquad j=1,\ldots,\nu(\ell),\]
for some \(\ell_{0}\geq 0\), whereas the remaining eigenvalues are strictly positive. By rearranging the constants, this implies
\[(\ell+1)^{-d}\lesssim\lambda_{\ell,j}\lesssim(\ell+1)^{-d},\qquad\quad\ell \geq 0,\qquad\quad j=1,\ldots,\nu(\ell),\]
for all eigenvalues. With Mercer's theorem and the definition (52) of Sobolev norms, we conclude that
\[\|f\|_{H_{\Theta}}^{2}\sim\|f\|_{H_{Lap}}^{2}=\sum_{\ell=0}^{\infty}\sum_{j=1}^{\nu(\ell)}\lambda_{\ell,j}^{-1}|\hat{f}_{\ell,j}|^{2}\sim\sum_{\ell=0}^{\infty}\sum_{j=1}^{\nu(\ell)}(\ell+1)^{d}|\hat{f}_{\ell,j}|^{2}=\|f\|_{H^{d/2}(\mathbb{S}^{d-1})}^{2}.\]
**Lemma 6.18**.: _Let \(\Theta(x,y)\) be the neural tangent kernel for a fully connected neural network on the sphere \(\mathbb{S}^{d-1}\) with bias and \(\mathrm{ReLU}\) activation. Its eigenfunctions are spherical harmonics with eigenvalues_
\[(\ell+1)^{-d}\lesssim\lambda_{\ell,j}\lesssim(\ell+1)^{-d},\qquad\quad\ell\geq 0,\qquad\quad j=1,\ldots,\nu(\ell).\]
Proof.: This follows directly from the norm equivalence \(\|\cdot\|_{H_{\Theta}}\sim\|\cdot\|_{H^{d/2}}\) in Lemma 6.17 and the Mercer's theorem representation of the RKHS
\[\sum_{\ell=0}^{\infty}\sum_{j=1}^{\nu(\ell)}\lambda_{\ell,j}^{-1}|\hat{f}_{\ell,j}|^{2}=\|f\|_{H_{\Theta}}^{2}\sim\|f\|_{H^{d/2}(\mathbb{S}^{d-1})}^{2}=\sum_{\ell=0}^{\infty}\sum_{j=1}^{\nu(\ell)}(\ell+1)^{d}|\hat{f}_{\ell,j}|^{2},\]
choosing \(f=Y_{\ell}^{j}\) as a spherical harmonic.
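The stated eigenvalue decay can also be observed numerically. Since \(H_{\Theta}=H_{Lap}\), it suffices to look at the Laplacian kernel; the sketch below is an illustration under the simplifying choice \(d=2\), i.e. the circle \(\mathbb{S}^{1}\) with an equispaced grid, where the kernel matrix is circulant and the expected decay is \(\lambda_{\ell}\sim\ell^{-2}=\ell^{-d}\).

```python
import numpy as np

n = 512
theta = 2 * np.pi * np.arange(n) / n
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # grid on S^1, i.e. d = 2
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
K = np.exp(-dist)                                        # Laplacian kernel e^{-|x-y|}

# K is circulant, so its eigenvalues are the FFT of its first row; the factor
# 2*pi/n approximates the eigenvalues of the corresponding integral operator.
lam = np.sort(np.real(np.fft.fft(K[0])))[::-1] * (2 * np.pi / n)

# Each frequency ell occurs twice (modes e^{+i ell t} and e^{-i ell t}), so the
# sorted eigenvalue lambda_ell sits at index 2*ell - 1. The log-log slopes
# printed below should approach d = 2:
ell = np.arange(5, 40)
print(np.log(lam[2*ell + 1] / lam[2*ell - 1]) / np.log(ell / (ell + 1)))
```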
With the knowledge of the full spectrum of the NTK, it is now straightforward to show coercivity.
**Lemma 6.19** (Lemma 3.2, restated).: _Let \(\Theta(x,y)\) be the neural tangent kernel for a fully connected neural network with bias on the sphere \(\mathbb{S}^{d-1}\) with ReLU activation. Then for any \(\alpha\in\mathbb{R}\)_
\[\langle f,L_{\Theta}f\rangle_{H^{\alpha}(\mathbb{S}^{d-1})}\gtrsim\|f\|_{H^{ \alpha-d/2}(\mathbb{S}^{d-1})}^{2},\]
_where \(L_{\Theta}\) is the integral operator with kernel \(\Theta(x,y)\)._
Proof.: Plugging in \(f=\sum_{\ell=0}^{\infty}\sum_{j=1}^{\nu(\ell)}\hat{f}_{\ell j}Y_{\ell}^{j}\) in eigenbasis, and using the estimate \(\lambda_{\ell j}\sim(\ell+1)^{-d}\) of the eigenvalues in Lemma 6.18, we have
\[\langle f,L_{\Theta}f\rangle_{H^{\alpha}(\mathbb{S}^{d-1})} =\sum_{\ell=0}^{\infty}\sum_{j=1}^{\nu(\ell)}(\ell+1)^{2\alpha}\hat{f}_{\ell j}\widehat{L_{\Theta}f}_{\ell j}\] \[=\sum_{\ell=0}^{\infty}\sum_{j=1}^{\nu(\ell)}(\ell+1)^{2\alpha}\lambda_{\ell j}|\hat{f}_{\ell j}|^{2}\] \[\gtrsim\sum_{\ell=0}^{\infty}\sum_{j=1}^{\nu(\ell)}(\ell+1)^{2\alpha-d}|\hat{f}_{\ell j}|^{2}=\|f\|_{H^{\alpha-d/2}(\mathbb{S}^{d-1})}^{2}.\]
|
2309.09203 | Using Artificial Neural Networks to Determine Ontologies Most Relevant
to Scientific Texts | This paper provides an insight into how to find ontologies
most relevant to scientific texts using artificial neural networks. The basic
idea of the presented approach is to select a representative paragraph from a
source text file, embed it to a vector space by a pre-trained fine-tuned
transformer, and classify the embedded vector according to its relevance to a
target ontology. We have considered different classifiers to categorize the
output from the transformer, in particular random forest, support vector
machine, multilayer perceptron, k-nearest neighbors, and Gaussian process
classifiers. Their suitability has been evaluated in a use case with ontologies
and scientific texts concerning catalysis research. From the results, random
forest performed worst, while the support vector machine classifier achieved
the best results in this task. | Lukáš Korel, Alexander S. Behr, Norbert Kockmann, Martin Holeňa | 2023-09-17T08:08:50Z | http://arxiv.org/abs/2309.09203v1 | # Using Artificial Neural Networks to Determine Ontologies Most Relevant to Scientific Texts
###### Abstract
This paper provides an insight into how to find ontologies most relevant to scientific texts using artificial neural networks. The basic idea of the presented approach is to select a representative paragraph from a source text file, embed it to a vector space by a pre-trained fine-tuned transformer, and classify the embedded vector according to its relevance to a target ontology. We have considered different classifiers to categorize the output from the transformer, in particular random forest, support vector machine, multilayer perceptron, k-nearest neighbors, and Gaussian process classifiers. Their suitability has been evaluated in a use case with ontologies and scientific texts concerning catalysis research. The obtained results confirm support vector machines as a promising classifier, but surprisingly show random forest as unsuitable for this task.
_Keywords:_ ontology; text data; text preprocessing; text representation learning; text classification
## 1 Introduction
A domain ontology defines a set of representational primitives with which to model a domain of knowledge or discourse. The representational primitives are typically classes, attributes, and relationships. The definitions of the representational primitives include information about their meaning and constraints on their logically consistent application. Classes can be defined in two ways: by annotating their definitions, or by connecting classes with each other and with properties. Each domain ontology typically uses domain-specific definitions of terms denoting its primitives.
The FAIR research data management (Findable, Accessible, Interoperable, and Reusable) needs a consistent data representation in ontologies, particularly for representing the data structure in the specific domain [34]. Since different ontologies are written by different people, they are often incompatible, even within the same domain. As systems that rely on domain ontologies expand, it is often necessary to merge domain ontologies by manual tuning. The same is true for enhancing an ontology with information available in domain-related texts. Merging and enhancing ontologies is thus a largely manual process and therefore time-consuming and expensive.
The need to find a suitable ontology for an input text can help in classifying the information presented within the text as well as to connect the input text with data. This would allow for automated selection of ontologies and respective classification of the text. Different text data could thus be compared automatically in an understandable way and connected with corresponding research data. Ontologies represent "a formal specification of a shared conceptualization" [7] and can thus be used to express knowledge and data in a formalized, standardized description language to specify terms and relations between those terms.
Current ontology recommenders, such as the NCBO ontology recommender [8], score annotations based on words similar to preferred and alternate labels of ontology classes and on term frequency. In contrast, this work aims to use text representation learning not only to search for words also contained in ontologies but also to find concepts with similar semantic meaning shared between text and ontology.
This paper is devoted to a specific problem encountered during enhancing ontologies and sometimes during their merging: to decide which of several available ontologies is most relevant to given domain-related piece of text. Our solution to the problem relies primarily on artificial neural networks (ANNs), in particular on natural language processing (NLP).
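To make the basic idea concrete, a minimal sketch of such a pipeline is shown below. It is not the implementation evaluated in this paper: the transformer checkpoint, the toy paragraphs and the two ontology labels are placeholders, and only the support vector machine classifier is shown.

```python
from sentence_transformers import SentenceTransformer
from sklearn.svm import SVC

# Placeholder data: one representative paragraph per source text and the
# index of the most relevant target ontology (0 and 1 are hypothetical).
paragraphs = [
    "Catalytic hydrogenation of CO2 over supported nickel catalysts.",
    "Turnover frequencies of zeolite-catalyzed cracking reactions.",
    "Continuous-flow microreactor design for process intensification.",
    "Residence time distribution in tubular flow equipment.",
]
labels = [0, 0, 1, 1]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint
X = encoder.encode(paragraphs)                     # embed paragraphs to vectors

clf = SVC(kernel="rbf").fit(X, labels)             # classify by target ontology
query = encoder.encode(["Selectivity of platinum catalysts in oxidation."])
print(clf.predict(query))                          # -> ontology 0 (catalysis)
```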
The next section surveys the applicability of artificial neural networks to ontologies. Section 3 recalls the employed methods of text preprocessing. Modules have been used for text extraction from PDF files, for transforming the extracted files to plain text, and for eliminating irrelevant paragraphs. The section also describes text representation learning and the principles of the employed classifiers. In section 4, an application of the proposed methodology to catalysis is described and evaluated.
With regard to the sources described in part 2 of this article, we are not aware of classifiers learned from the results of representation learning ever having been used to determine the most relevant of a given set of ontologies.
2309.04733 | A Spatiotemporal Deep Neural Network for Fine-Grained Multi-Horizon Wind
Prediction | The prediction of wind in terms of both wind speed and direction, which has a
crucial impact on many real-world applications like aviation and wind power
generation, is extremely challenging due to the high stochasticity and
complicated correlation in the weather data. Existing methods typically focus
on a sub-set of influential factors and thus lack a systematic treatment of the
problem. In addition, fine-grained forecasting is essential for efficient
industry operations, but has been less attended in the literature. In this
work, we propose a novel data-driven model, Multi-Horizon SpatioTemporal
Network (MHSTN), generally for accurate and efficient fine-grained wind
prediction. MHSTN integrates multiple deep neural networks targeting different
factors in a sequence-to-sequence (Seq2Seq) backbone to effectively extract
features from various data sources and produce multi-horizon predictions for
all sites within a given region. MHSTN is composed of four major modules.
First, a temporal module fuses coarse-grained forecasts derived by Numerical
Weather Prediction (NWP) and historical on-site observation data at stations so
as to leverage both global and local atmospheric information. Second, a spatial
module exploits spatial correlation by modeling the joint representation of all
stations. Third, an ensemble module weighs the above two modules for final
predictions. Furthermore, a covariate selection module automatically chooses
influential meteorological variables as initial input. MHSTN is already
integrated into the scheduling platform of one of the busiest international
airports of China. The evaluation results demonstrate that our model
outperforms competitors by a significant margin. | Fanling Huang, Yangdong Deng | 2023-09-09T09:36:28Z | http://arxiv.org/abs/2309.04733v1 | # A Spatiotemporal Deep Neural Network for Fine-Grained Multi-Horizon Wind Prediction
###### Abstract
The prediction of wind in terms of both wind speed and direction, which has a crucial impact on many real-world applications like aviation and wind power generation, is extremely challenging due to the high stochasticity and complicated correlation in the weather data. Existing methods typically focus on a sub-set of influential factors and thus lack a systematic treatment of the problem. In addition, fine-grained forecasting is essential for efficient industry operations, but has been less attended in the literature. In this work, we propose a novel data-driven model, Multi-Horizon SpatioTemporal Network (MHSTN), generally for accurate and efficient fine-grained wind prediction. MHSTN integrates multiple deep neural networks targeting different factors in a sequence-to-sequence (Seq2Seq) backbone to effectively extract features from various data sources and produce multi-horizon predictions for all sites within a given region. MHSTN is composed of four major modules. First, a temporal module fuses coarse-grained forecasts derived by Numerical Weather Prediction (NWP) and historical on-site observation data at stations so as to leverage both global and local atmospheric information. Second, a spatial module exploits spatial correlation by modeling the joint representation of all stations. Third, an ensemble module weighs the above two modules for final predictions. Furthermore, a covariate selection module automatically choose influential meteorological variables as initial input. MHSTN is already integrated into the scheduling platform of one of the busiest international airports of China. The evaluation results demonstrate that our model outperforms competitors by a significant margin.
## 1 Introduction
In this work, we study fine-grained wind prediction through the lens of a wind forecast application deployed in the scheduling platform of a leading international airport (Fig. 1), which has a strong demand to improve the latency and precision of wind prediction for a higher level of operation efficiency. Our work enables a systematic treatment of a series of previously less attended problems, which turn out to be critical in boosting prediction performance.
**Problem 1: How to fuse global NWP and local observation data for accurate and timely fine-grained prediction over a relatively long period of time?** NWP, which numerically solves atmospheric equations with high-performance computers, has been made available at a global scale (Bauer et al., 2015). However, it is too expensive even infeasible for fine-grained predictions, particularly at a high-spatial resolution. In our case (Fig. 1), all stations of the airfield share a single source of NWP. In contrast, statistical models, especially deep neural networks (DNNs) (Zhang et al., 2019; Qin et al., 2019), learning from local observation data, have shown great potential for fast forecasting. Nonetheless, such methods are less effective in forecasting long time horizons than NWP does. The empirical results in Fig. 1 suggest that NWP outputs are valuable for each station to make fine-grained forecasts by involving variances inherent in local observation data.
**Problem 2: How to leverage complicated correlation among various factors?** Weather data observed across multiple stations at a local field exhibit complicated correlation patterns as shown in Fig. 2. The observation of a variable at a time is correlated with its historical behaviors (autocorrelation), substantially influenced by other variables (cross-correlation), and also associated with neighboring observations (spatial correlation). Therefore, assimilating the above correlations is potentially highly valuable to boost the prediction accuracy. Such correlations, nevertheless, are just partially addressed in existing works (Zhang et al., 2019; Wilson et al., 2018). In addition, compared with previous studies that focus on global- or regional-level prediction, forecasting wind at a high-spatial resolution suffers from significant fluctuations and dependencies (Ezzat, 2020).
Figure 2: Weather data exhibit complicated correlations. (a) autocorrelations of wind speed observed at station \(S1\); (b) cross correlations between wind speed and other weather variables at stations; (c) spatial correlations of wind speed between station \(S1\) and the other stations, changing along different time lags.
**Problem 3: How to derive multi-horizon predictions?** Multi-horizon prediction, i.e., forecasting multiple future time steps, is preferred in many practical circumstances as it offers guidance for operation scheduling over a period of time. However, the majority of statistical wind prediction models are limited to one-step-ahead forecasting (Wang et al., 2016; Grover et al., 2015; Wilson et al., 2018). Recent survey papers (Wang et al., 2016; Wen et al., 2017) recommended a direct multi-horizon strategy as it can avoid error accumulation, capture the dependencies between predictions, and retain efficiency by parameter sharing (please refer to section 2 for more details).
**Problem 4: How to predict wind direction?** It is essential for scheduling airport operations ahead of time. For instance, an airport can change the direction of take-off when the wind direction changes significantly. Although wind direction prediction is always included in NWP, it is far less addressed by machine learning methods (Masseran et al., 2013; Erdem and Shi, 2011), perhaps because wind direction is harder to analyze and model. In particular, wind direction is a periodic variable, so conventional distributions cannot model it directly. This difficulty is also illustrated by the low correlation values of wind direction in Fig. 1. Therefore, it is meaningful to model wind direction in a statistical manner for efficient prediction.
To address the above problems, we propose a generic framework, dubbed Multi-Horizon SpatioTemporal Network (MHSTN), that produces multi-horizon predictions at a high-spatial resolution for both wind speed and wind direction. As a systematic effort to boost wind prediction for industry-level applications, MHSTN takes a sequence-to-sequence (Seq2Seq) structure as backbone (Sutskever et al., 2014) that naturally implements the direct multi-horizon strategy and in which advanced deep learning techniques can be seamlessly utilized. We integrate four major modules into MHSTN to predict a target variable of all stations across a local area; a schematic sketch of these modules is given after the contribution list below. First, we devise a temporal module by hybridizing Long Short-Term Memory Network (LSTM) and Multi-Layer Perceptron (MLP) to combine both locally observed historical data and global NWP data. Second, we employ a Convolutional Neural Network (CNN) as a spatial module to constitute a complete representation of the region based on the state representations of the respective stations. Third, we weigh the above two modules with an ensemble module to produce final predictions for each station. Further, we devise a covariate selection module to select influential meteorological variables as the initial input. Our primary contributions are as follows:
* We propose a novel data-driven model, MHSTN, for accurate and timely fine-grained wind prediction. To our knowledge, it is the first one to treat all the aforementioned challenges in a unified framework.
* We evaluate MHSTN with a real-world dataset collected from one of the busiest airports in the world. Our models are efficient and significantly outperform the competitor models in terms of prediction accuracy.
In comparison with the on-site NWP results, the average reductions in prediction error are up to 13.59% and 5.48% for wind speed and wind direction, respectively. It should be noted that our design does not depend on the particular airfield, so MHSTN is generally applicable to any other region.
* We publicize our dataset and source code 2 for future research. To our knowledge, it is the first open project involving multi-station observation data and NWP data for fine-grained wind prediction. It is also the first to target an airport. Footnote 2: [https://github.com/hfl15/windpred.git](https://github.com/hfl15/windpred.git)
The rest of the paper is organized as follows. We first discuss related work in section 2. In section 3, we formulate the problem. In section 4, we introduce the design details of our integrated framework as well as its major building blocks. In section 5, we report the real-world data used in this work and present the comprehensive experiments to evaluate our framework. Finally, we conclude the paper and outline future work in section 6.
## 2 Related Work
### Wind prediction
Wind prediction, as a core task in NWP, has seen substantial improvement in recent decades (Bauer et al., 2015). However, NWP models are limited to coarse-grained predictions due to their excessive demand for computing power. In contrast, some studies have demonstrated that statistical models (Wang et al., 2016; Cavalcante et al., 2017), especially DNN models (Qin et al., 2019; Zhang et al., 2019, 2018), are significantly more efficient at capturing regional variances. Nonetheless, these models are limited to (very) short-term forecasting due to the shortage of global atmospheric information. Few attempts have been made to couple both sides for regional medium-/long-term wind forecasting (Salcedo-Sanz et al., 2009; Cheng et al., 2017), wind power production forecasting (Corizzo et al., 2021; Goncalves et al., 2021) and weather forecasting (Wang et al., 2019). Along this line, we propose a new machine learning model to assimilate global NWP and local observation data and produce predictions at a higher spatial resolution.
Wind exhibits a complicated dynamic correlation schema that is often a compound of auto-correlation, cross-correlation and spatial correlation. Most existing works, nevertheless, consider only partial correlations. First, capturing auto-correlation is a basic capability of almost all time series models (e.g., LSTM). Second, some studies utilized cross-correlation by leveraging handcrafted or auto-selected covariates. Erdem and Shi (2011); Grover et al. (2015) used a manually selected covariate set to boost prediction accuracy. Qin et al. (2019); Zhang et al. (2019) proposed several correlation
metrics to evaluate the importance of weather variables so that a set of influential covariates can be automatically selected. Third, spatial dependence can be mined when more than one observation station is available. Damousis et al. (2004) encoded how the spatial correlation varies under different conditions of wind speed and wind direction via a fuzzy model, yielding a series of rules for wind speed prediction. Grover et al. (2015) trained a predictor for each location and then combined them via a restricted Boltzmann machine to form the joint distribution from which predictions are derived. Wilson et al. (2018) integrated graph convolutions into an LSTM network to capture the spatial correlation for wind speed prediction. Khodayar and Wang (2018) proposed a graph deep learning model to distill the spatiotemporal features in neighboring wind farms. Ceci et al. (2019) focused on renewable energy production and proposed an entropy-based cost function for an artificial neural network framework to capture spatial information from spatially-located plants. Instead, we mine and fuse all of the above dynamic correlations in a unified framework.
Most existing works in wind or weather prediction are limited to one-horizon forecasting (Wang et al., 2016; Grover et al., 2015; Wilson et al., 2018). There are three common strategies to realize multi-horizon forecasting. (1) **Recursive strategy** trains a model to predict the one-step-ahead estimate and then iteratively feeds this estimate back as the ground truth to forecast longer horizons. Due to the discrepancy between consuming actual data versus estimates during prediction, accumulated errors are usually unavoidable and are enlarged over longer horizons. (2) **Direct strategy**, which separately trains multiple models, each directly predicting one future horizon, is less biased. However, this approach needs significantly more computing and storage resources and neglects dependencies in the time series. (3) **Direct multi-horizon strategy**, which directly trains a model with a multivariate target so as to produce multi-horizon predictions simultaneously, is recommended by recent surveys (Taieb and Atiya, 2015; Wang et al., 2016) to address the previous problems. Therefore, our framework uses a Seq2Seq deep neural network (Sutskever et al., 2014) as its backbone to naturally implement the direct multi-horizon strategy and seamlessly leverage multiple advanced deep learning techniques.
Wind is generally represented by the tuple of wind speed and wind direction (Erdem and Shi, 2011; Grover et al., 2015). However, wind direction is harder to model statistically and is thus significantly less attended to in the previous literature (Masseran et al., 2013; Erdem and Shi, 2011). Our framework can efficiently produce predictions for both wind speed and wind direction.
To sum up, we propose an effective unified neural network that can be viewed as the first step towards addressing the above problems in a systematic fashion.
### Deep neural networks
Deep neural networks (DNNs) have been remarkably successful in many scenarios (LeCun et al., 2015). MLPs are quintessential deep learning models to
form the basis of many applications. Designed to model grid-like data, CNNs have brought about breakthroughs in the field of computer vision (Krizhevsky et al., 2017; Karpathy et al., 2014). Recurrent Neural Networks (RNNs) and LSTMs have been widely used to model sequential data such as speech (Graves et al., 2013) and language (Sutskever et al., 2014). Particularly, Seq2Seq learning is intimately related to multi-horizon time series forecasting (Fan et al., 2019; Wen et al., 2017).
Applications of DNNs to wind/weather forecasting, however, are still limited. Typical MLPs have been introduced to cooperate with NWP models; for example, Rasp et al. (2018); Salcedo-Sanz et al. (2009) used MLPs to replace parts of NWP components to accelerate the numerical simulation process, and Krasnopolsky and Fox-Rabinovitz (2006) fed the NWP predictions into an MLP for downscaling. Qin et al. (2019); Zhang et al. (2019) adapted an LSTM network to capture the historical information in sequences of wind speed measurements. Wang et al. (2019) proposed a Seq2Seq network with layers of RNNs to incorporate uncertainty quantification for wind speed prediction. A few works hybridized CNNs and RNNs/LSTMs to capture spatiotemporal features. Wilson et al. (2018) integrated graph convolutions into an LSTM network for wind speed prediction. Shi et al. (2015) proposed a convolutional LSTM network for precipitation nowcasting. Different from the above works, we take advantage of different kinds of DNNs to form a generic framework that assimilates multi-source data and thus captures the complicated nature of the atmosphere for both wind speed and wind direction prediction.
## 3 Problem Formulation
We aim at forecasting wind speed, \(v>0\), and wind direction, \(\theta\in[1,360]\), over all stations in a locally spatial field (Fig. 1). Note that wind direction is a periodic variable, so conventional distributions are infeasible for modeling it. To address this, we adopt a simple but effective strategy that infers wind direction from the components of wind speed. As shown in Fig. 3, wind speed (\(v\)) can be decomposed into a lateral component (\(vx\in\mathbb{R}\)) and a longitudinal component (\(vy\in\mathbb{R}\)) along wind direction (\(\theta\)). Formally:
\[\begin{split} vx&=-v\,\sin\Big(\frac{\theta}{360}\cdot 2\pi\Big),\\ vy&=-v\,\cos\Big(\frac{\theta}{360}\cdot 2\pi\Big).\end{split} \tag{1}\]
Both \(vx\) and \(vy\) are linear variables and the respective observations are highly correlated with NWP predictions (Fig. 1). Therefore, we turn to directly predict \(vx\) and \(vy\), and then the prediction of \(\theta\) can be efficiently derived as
follows:
\[\theta^{\prime}=\arctan\Big(\frac{vx}{vy}\Big)\cdot\frac{180}{\pi}, \tag{2}\] \[\theta=\begin{cases}\theta^{\prime}&\text{if }vx<0\ \text{and}\ vy<0,\\ \theta^{\prime}+360&\text{if }vx\geqslant 0\ \text{and}\ vy<0,\\ \theta^{\prime}+180&\text{otherwise}.\end{cases}\]
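As a concrete illustration, the decomposition in Eq. (1) and the reconstruction in Eq. (2) take only a few lines. The following is a minimal NumPy sketch (the function names are ours, not from the released code); `recompose_direction` assumes scalar inputs and, like Eq. (2), does not handle the degenerate case \(vy=0\):

```python
import numpy as np

def decompose(v, theta):
    """Eq. (1): split wind speed v and direction theta (degrees) into (vx, vy)."""
    rad = theta / 360.0 * 2.0 * np.pi
    return -v * np.sin(rad), -v * np.cos(rad)

def recompose_direction(vx, vy):
    """Eq. (2): recover theta (degrees) from the predicted components."""
    theta_p = np.arctan(vx / vy) / np.pi * 180.0
    if vx < 0 and vy < 0:
        return theta_p
    if vx >= 0 and vy < 0:
        return theta_p + 360.0
    return theta_p + 180.0
```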
Next, we introduce the problem formulation for the prediction of a wind variable (\(v\), \(vx\) or \(vy\)) over \(V\) stations. We denote the observation data as follows. Matrix \(\mathbf{Y}\in\mathbb{R}^{T\times V}\) encapsulates the observation values of the target variable at \(V\) stations over \(T\) equidistant timestamps, and matrix \(\mathbf{X}\in\mathbb{R}^{T\times D\times V}\) stores the values of the remaining \(D\) weather variables (covariates). Similarly, we use \(\mathbf{Y}^{(f)}\in\mathbb{R}^{T\times V}\) and \(\mathbf{X}^{(f)}\in\mathbb{R}^{T\times D^{(f)}\times V}\) to denote the future data (i.e., NWP data). Both observation and NWP data are aligned to share a common timeline, with the exception that the latter approximately looks a number of horizons into the future. The prediction task can be formulated as:
\[[\hat{\mathbf{Y}}_{t+1};...;\hat{\mathbf{Y}}_{t+K}]=F(\mathbf{Y}_{[:t]}, \mathbf{X}_{[:t]},\mathbf{Y}^{(f)}_{[:(t+K)]},\mathbf{X}^{(f)}_{[:(t+K)]}; \mathbf{W}). \tag{3}\]
The devised parametric model \(F\) adaptively learns from the historical data collected from all stations and the estimated future data derived by NWP to produce \(K\)-horizon predictions of the target at any forecast creation time (FCT) \(t\). A sequence of predictions, \(\hat{\mathbf{Y}}\in\mathbb{R}^{T\times V}\), can be obtained by iterating \(t\) through all FCTs that are a series of timestamps with an interval of \(K\). The model parameters, \(\mathbf{W}\), are trained to minimize the mean square error (MSE) defined as:
\[MSE(\mathbf{Y},\hat{\mathbf{Y}})=\frac{1}{T*V}\sum_{t=1}^{T}\sum_{i=1}^{V}( \mathbf{Y}_{t,i}-\hat{\mathbf{Y}}_{t,i})^{2}. \tag{4}\]
Figure 3: Relation between wind speed and direction.
For the sake of readability, we simplify the variables related to a station \(i\) as follows: \(\mathbf{Y}_{\cdot,i}\) is denoted \(\mathbf{y}=[y_{1};y_{2};...;y_{T}]\in\mathbb{R}^{T}\), \(X_{\cdot,\cdot,i}\) is denoted \(\mathbf{x}=[\mathbf{x}_{1};\mathbf{x}_{2};...;\mathbf{x}_{T}]\in\mathbb{R}^{T \times D}\), and \(\mathbf{y}^{(f)}\) and \(\mathbf{x}^{(f)}\) are defined similarly.
## 4 A Unified Prediction Model
We propose the novel MHSTN model, which assimilates local observation and global NWP data to predict wind variables (\(v\), \(vx\) or \(vy\)) at all stations in a region. MHSTN seamlessly hybridizes several types of DNNs within a Seq2Seq backbone. As illustrated in Fig. 4, MHSTN synergizes four major modules to predict a target variable at each station:
1. A temporal module that uses two encoders, i.e., historical encoder and future encoder, to capture the station's actual history and approximated future, respectively, and then fuses both information to generate a sequence of predictions.
2. A spatial module that joins all representations of the respective stations to derive a sequence of predictions for the current station.
3. An ensemble module that weighs the above two prediction sequences to produce final output.
4. A covariate selection module that automatically picks up influential meteorological variables as initial input to boost the above procedures.
We elaborate on each module in the following subsections.
Figure 4: An instance of MHSTN targeting a predicted variable (e.g., wind speed) at a station (e.g., S1).
### Temporal module
A temporal module is trained for each station. At an FCT \(t\), it produces forecasts of the target variable over the future \(K\) horizons by fusing the previous \(W\) observations and the \(K\) future NWP predictions.
First, we stack a LSTM (\(LSTM^{(h)}\)) and a MLP (\(MLP^{(h)}\)) to form a historical encoder to capture historical information as a state vector (\(\mathbf{h}_{t}^{(h)}\)). Formally:
\[\begin{split}\mathbf{h}_{t}&=LSTM^{(h)}([\mathbf{y }_{[(t-W+1):t]},\mathbf{x}_{[(t-W+1):t]}]),\\ \mathbf{h}_{t}^{(h)}&=MLP^{(h)}(\mathbf{h}_{t}). \end{split} \tag{5}\]
Similar to prior works (Wen et al., 2017; Wilson et al., 2018), we found that an overly complicated LSTM tends to overfit time-series data. Thus, \(LSTM^{(h)}\) is designed to have one layer and 32 hidden states by default. \(LSTM^{(h)}\) maps the history of sequences to a representation \(\mathbf{h}_{t}\) (the vector of hidden states at the last step). For each timestamp \(t\), we abbreviate the input \([y_{t},\mathbf{x}_{t}]\) as \(\mathbf{x}_{t}\). The gates, cell update and output of the LSTM cell are defined as:
\[\begin{split}\mathbf{i}_{t}&=sigmoid(\mathbf{W}_{ ix}\mathbf{x}_{t}+\mathbf{W}_{ih}\mathbf{h}_{t-1}+b_{i}),\\ \mathbf{f}_{t}&=sigmoid(\mathbf{W}_{fx}\mathbf{x}_ {t}+\mathbf{W}_{fh}\mathbf{h}_{t-1}+b_{f}),\\ \mathbf{o}_{t}&=sigmoid(\mathbf{W}_{ox}\mathbf{x}_ {t}+\mathbf{W}_{oh}\mathbf{h}_{t-1}+b_{o}),\\ \mathbf{c}_{t}&=\mathbf{f}_{t}\odot\mathbf{c}_{t-1 }+\mathbf{i}_{t}\odot tanh(\mathbf{W}_{cx}\mathbf{x}_{t}+\mathbf{W}_{ch} \mathbf{h}_{t-1}+b_{c}),\\ \mathbf{h}_{t}&=\mathbf{o}_{t}\odot tanh(\mathbf{c}_ {t}),\end{split} \tag{6}\]
where \(\mathbf{W}_{\cdot}\) are the weight matrices, \(b_{\cdot}\) are the biases, and \(\odot\) is the element-wise vector product. The extracted representation \(\mathbf{h}_{t}\) is then fed into \(MLP^{(h)}\) to form a higher-level representation \(\mathbf{h}_{t}^{(h)}\) as output. We implement \(MLP^{(h)}\) as a one-layer MLP with a rectified linear unit (\(relu\)) as the activation function (Glorot et al., 2011). The number of units is set to double the length of the input. We argue that \(MLP^{(h)}\) is necessary for extracting better representations. Intuitively, \(\mathbf{h}_{t}\) is inclined to capture the behavior of the input time series, while \(\mathbf{h}_{t}^{(h)}\) is apt to reconstruct the input information to align with the future horizons.
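To make the construction concrete, the historical encoder of Eq. (5) can be sketched in Keras as follows. This is a simplified sketch under our own naming, following the stated defaults (one LSTM layer with 32 hidden states, and a one-layer relu MLP with double the input length):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_historical_encoder(W, n_feat, hidden=32):
    """LSTM^(h) + MLP^(h): encode the past W observations (target + covariates)."""
    x = layers.Input(shape=(W, n_feat))        # [y, x] over the history window
    h = layers.LSTM(hidden)(x)                 # h_t: hidden state at the last step
    h_h = layers.Dense(2 * hidden, activation="relu")(h)  # MLP^(h), 2x input length
    return tf.keras.Model(x, h_h, name="historical_encoder")
```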
Second, the future encoder applies a standard MLP (\(MLP^{(f)}\)) to encode the NWP predictions in \(K\) future horizons as a representation vector \(\mathbf{h}_{t+1}^{(f)}\). Formally:
\[\mathbf{h}_{t+1}^{(f)}=MLP^{(f)}([\mathbf{y}_{[(t+1):(t+K)]}^{(f)},\mathbf{x}_ {[(t+1):(t+K)]}^{(f)}]). \tag{7}\]
Here the future encoder turns out to be crucial for the following reasons: (1) station observations reflect the local ground truth and can be substantially different from the NWP data; (2) the estimation error in the NWP predictions requires special consideration. Our main motivations for using an MLP are twofold. First, NWP predictions are high-level features close to the final output, hence a simple model is sufficient, while a complex model may distort the features. Second, some prior research applied MLPs to downscale NWP predictions (Krasnopolsky and Fox-Rabinovitz, 2006) or to act as function approximators replacing parts of an NWP model for reduced computational complexity (Rasp et al., 2018). In contrast to extrapolating historical data, these approaches conduct interpolation or regression on data with well-aligned input and output. Therefore, we believe our \(MLP^{(f)}\) can produce a representation aligned with the future horizons. While it is tempting to replace \(MLP^{(f)}\) with another LSTM, this replacement brings negative effects in our case. We hypothesize this inefficiency is due to the following reasons. With approximated data as input, an LSTM is more complicated than an MLP and thus tends to overfit spurious variances; yet it is still far simpler than the NWP model and cannot capture a complete picture of the atmosphere. Meanwhile, the inherent estimation error may be amplified along the LSTM's long propagation chain.
Finally, we feed the historical representation (\(\mathbf{h}_{t}^{(h)}\)) and the future representation (\(\mathbf{h}_{t+1}^{(f)}\)) into a MLP (\(MLP^{(l)}\)) to derive local predictions for the current station. Formally:
\[[\hat{y}_{t+1}^{(l)};...;\hat{y}_{t+K}^{(l)}]=MLP^{(l)}([\mathbf{h}_{t}^{(h)}, \mathbf{h}_{t+1}^{(f)}]). \tag{8}\]
\(MLP^{(l)}\) connects one hidden layer, in which the activation function is \(relu\) and the unit size is identical to the input size, to a linear layer that outputs the \(K\) predicted values as a vector.
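Putting Eqs. (5), (7) and (8) together, a minimal sketch of the whole temporal module might look as follows. It reuses `build_historical_encoder` from above; all names and the flattened NWP input layout are our assumptions, not the released implementation:

```python
def build_temporal_module(W, n_hist_feat, K, n_fut_feat, hidden=32):
    """Temporal module: fuse the historical and future encoders (Eqs. 5, 7, 8)."""
    hist_in = layers.Input(shape=(W, n_hist_feat))
    fut_in = layers.Input(shape=(K * n_fut_feat,))   # flattened NWP window
    h_h = build_historical_encoder(W, n_hist_feat, hidden)(hist_in)
    h_f = layers.Dense(K * n_fut_feat, activation="relu")(fut_in)   # MLP^(f)
    z = layers.Concatenate()([h_h, h_f])
    # MLP^(l): one relu hidden layer with unit size equal to its input size
    z = layers.Dense(2 * hidden + K * n_fut_feat, activation="relu")(z)
    out = layers.Dense(K, name="local_predictions")(z)  # linear K-horizon output
    return tf.keras.Model([hist_in, fut_in], out, name="temporal_module")
```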
### Spatial module
We propose a spatial module to construct a joint representation over all stations and produce predictions at the respective stations. Our main considerations are as follows. The case in Fig. 2(c), as well as previous results (Damousis et al., 2004; Wilson et al., 2018; Shi et al., 2015; Ezzat, 2020), reveals that complex dependencies always exist among weather observations from closely distributed stations. These dependencies are dynamic and vary with the state of the atmosphere and the locations. Therefore, simply bundling the data from multiple stations as input is unlikely to help; it would only result in a messy dataset that makes the learning process harder or even infeasible.
Specifically, for a station \(i\), its state at FCT \(t\) can be encoded by its temporal module as a representation vector \(\mathbf{h}_{t}^{(i)}\), which is exactly the output of the last hidden layer in \(MLP^{(l)}\). For readability, the superscript \(i\) denoting a station is omitted where unambiguous. We incorporate all the representations in a feature map, \(\mathbf{M}_{t}\in\mathbb{R}^{N\times V}\), that has \(V\) channels, each corresponding to a station's representation of length \(N\). Thus, \(\mathbf{M}_{t}\) represents the state of the whole field at FCT \(t\). Then, we feed \(\mathbf{M}_{t}\) into a one-dimensional CNN (\(CNN^{(s)}\)) to produce predictions for one station. Formally:
\[\begin{split}[\hat{y}_{t+1}^{(s)};...;\hat{y}_{t+K}^{(s)}]=CNN^{( s)}(\mathbf{M}_{t}),\\ \mathbf{M}_{t}\in\mathbb{R}^{N\times V}=\{\mathbf{h}_{t}^{(i)}\}_ {i=1}^{V},\mathbf{h}_{t}^{(i)}\in\mathbb{R}^{N}.\end{split} \tag{9}\]
Fig. 5 illustrates how \(CNN^{(s)}\) captures spatial dependencies between stations. Here, a filter slides over the feature map \(\mathbf{M}_{t}\) to extract one type of feature as a channel of values on the output feature map. In the dark blue region, a convolution operates between the filter and a segment of \(\mathbf{M}_{t}\) that spans all stations. In this way, the output feature map resulting from multiple filters can capture various types of spatial features. We implement \(CNN^{(s)}\) as a stack of a convolutional layer, a max pooling layer, a flatten layer and a linear dense output layer. The convolutional layer takes the following settings: the activation function is \(relu\), the number of filters is 64, and the kernel size is 5. The pool size of the pooling layer is 2. The dense layer, fed with the flattened feature vector, makes predictions for the future \(K\) horizons as a vector.
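A minimal Keras sketch of \(CNN^{(s)}\) with the stated settings (64 filters, kernel size 5, pool size 2, relu, flatten, linear dense output); the function name and argument layout are ours:

```python
def build_spatial_module(N, V, K, filters=64, kernel_size=5):
    """CNN^(s): map the field feature map M_t (N x V) to K-horizon predictions."""
    inp = layers.Input(shape=(N, V))  # V channels of station representations
    x = layers.Conv1D(filters, kernel_size, activation="relu")(inp)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Flatten()(x)
    out = layers.Dense(K)(x)          # linear output layer
    return tf.keras.Model(inp, out, name="spatial_module")
```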
### Ensemble module
The ensemble module of a station uses a dense layer without a bias parameter to weigh the local predictions (\(\hat{\mathbf{y}}_{[(t+1):(t+K)]}^{(l)}\)) and the spatial predictions (\(\hat{\mathbf{y}}_{[(t+1):(t+K)]}^{(s)}\)) before generating the final predictions (\(\hat{\mathbf{y}}_{[(t+1):(t+K)]}\)). The dependencies among future horizons are also considered in this module. Formally:
\[\begin{split}&\hat{\mathbf{y}}_{[(t+1):(t+K)]}=E([\hat{\mathbf{y}}_{[(t+ 1):(t+K)]}^{(l)},\hat{\mathbf{y}}_{[(t+1):(t+K)]}^{(s)}]),\\ &\hat{y}_{t+j}=\sum_{i=1}^{K}\mathbf{W}_{ij}^{(l)}\hat{y}_{t+i}^{ (l)}+\sum_{i=1}^{K}\mathbf{W}_{ij}^{(s)}\hat{y}_{t+i}^{(s)},j=1,2,..,K.\end{split} \tag{10}\]
We empirically found that the initialization of the weights is critical for prediction accuracy. Specifically, for a predicted value (\(\hat{y}_{t+j}\)), we uniformly initialize the weights, \(\mathbf{W}_{\cdot j}^{(l)}\) and \(\mathbf{W}_{\cdot j}^{(s)}\), to \(\frac{1}{2K}\). We believe such a strategy eases the learning process.
Figure 5: The process to distill spatial features in the spatial module, one-dimensional CNN (\(CNN^{(s)}\)).
It can be inferred from Fig. 2(a) that the predicted values are moderately correlated with each other. A uniform initialization, therefore, can produce a set of weights that is closer to the ground truth, at least in comparison with random or zero initialization. Besides, we argue that the ensemble module is indispensable for taking advantage of both temporal and spatial information. The key reason is that spatial dependencies are dynamic with the state of the atmosphere (Wilson et al., 2018; Damousis et al., 2004) and thus are not always helpful for a station.
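In Keras terms, the ensemble of Eq. (10) is a single bias-free dense layer over the concatenated prediction vectors, with all weights initialized to \(\frac{1}{2K}\). A minimal sketch under our naming:

```python
def build_ensemble_module(K):
    """Eq. (10): weigh local and spatial K-horizon predictions without a bias."""
    local_in = layers.Input(shape=(K,))
    spatial_in = layers.Input(shape=(K,))
    z = layers.Concatenate()([local_in, spatial_in])  # length 2K
    out = layers.Dense(
        K, use_bias=False,
        kernel_initializer=tf.keras.initializers.Constant(1.0 / (2 * K)),
    )(z)
    return tf.keras.Model([local_in, spatial_in], out, name="ensemble_module")
```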
### Covariate selection module
To automatically select influential meteorological variables, we devise a covariate selection module that consists of two components handling the historical and the future data (i.e., the historical observations and the NWP data), respectively. The core idea is to quantify the importance of each covariate and then identify a threshold to pick out the important ones. Different from prior works, we consider the predictive ability of each candidate with regard to the target variable.
For the historical data of a station, \((\mathbf{y}\in\mathbb{R}^{T},\mathbf{x}\in\mathbb{R}^{T\times D})\), we measure the importance of all covariates as follows. First, a one-step-ahead prediction model, i.e., a ridge regression model (Marquardt and Snee, 1975) without a bias parameter, is trained for each future horizon \(k\): \(\hat{y}_{t+k}=Ridge^{(k)}(y_{t},\mathbf{x}_{t};\mathbf{c}^{(k)})\), \(\mathbf{c}^{(k)}\in\mathbb{R}^{D+1},k=1,...,K\), where \(\mathbf{c}^{(k)}\) contains the weights of the covariates including the self-history. Second, we take the magnitude of the weights, i.e., let \(\mathbf{c}^{(k)}=\left|\mathbf{c}^{(k)}\right|\). Third, a min-max normalization casts each value of \(\mathbf{c}^{(k)}\) into the range \([0,1]\): \(\mathbf{c}_{j}=\frac{\mathbf{c}_{j}-min(\mathbf{c})}{max(\mathbf{c})-min(\mathbf{c})},j=1,...,D+1\). Finally, for each covariate \(j\), the average of its weights over all horizons is its final importance factor: \(\mathbf{c}_{j}=\frac{1}{K}\sum_{k=1}^{K}\mathbf{c}_{j}^{(k)},j=1,...,D+1\). A respective importance vector, \(\mathbf{c}\), can then be produced for the current station.
For the future data of a station, \((\mathbf{y}^{(f)}\in\mathbb{R}^{T},\mathbf{x}^{(f)}\in\mathbb{R}^{T\times D^{(f)}})\), we apply a similar method to calculate importance factors for the covariates, denoted \(\mathbf{c}^{(f)}\). The one exception is that only a point-to-point ridge regression model is needed: \(\hat{y}_{t}=Ridge^{(f)}(y_{t}^{(f)},\mathbf{x}_{t}^{(f)};\mathbf{c}^{(f)}),\mathbf{c}^{(f)}\in\mathbb{R}^{D^{(f)}+1}\).
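The historical-data branch of this procedure can be sketched with scikit-learn's ridge regression as follows (a simplified sketch; the names are ours, and we fold the four steps into one loop over the horizons):

```python
import numpy as np
from sklearn.linear_model import Ridge

def covariate_importance(y, x, K):
    """y: (T,) target series; x: (T, D) covariates. Returns (D+1,) importances."""
    feats = np.concatenate([y[:, None], x], axis=1)  # self-history + covariates
    weights = []
    for k in range(1, K + 1):
        # One ridge model (no bias) per horizon: features at t predict y at t+k.
        model = Ridge(fit_intercept=False).fit(feats[:-k], y[k:])
        c = np.abs(model.coef_)                       # magnitude of the weights
        c = (c - c.min()) / (c.max() - c.min())       # min-max normalization
        weights.append(c)
    return np.mean(weights, axis=0)                   # average over all horizons
```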
## 5 Experiment and Application
We first introduce the experimental setup, including a real-world dataset, evaluation settings, implementation details and comparison methods. Then, we demonstrate the effectiveness of MHSTN and the contributions of the components within the framework through comprehensive experiments. The results show that an effective unified framework (i.e., MHSTN) is needed to model the complicated nature of spatiotemporal weather data.
### Dataset description
We construct a dataset using real-world data from an international airport (shown in Fig. 1). The dataset consists of nine groups of observation data collected at nine closely distributed stations and one group of NWP data for the airfield. The data are hourly time series of weather variables collected in the period from 02:00:00, Mar. 2018 to 23:00:00, Sep. 2019, with a total of 13,176 time points (24 points per day). Tab. 1 lists the included meteorological variables. Note that the values of \(vx\) and \(vy\) in the observation data are calculated from \(v\) and \(\theta\) by Eq. (1), as they cannot be measured directly. Tab. 2 summarizes the basic information of the meteorological data. Fig. 6 visualizes each weather variable over the first 30 days. There are missing values in the observation data. For any station, the missing ratio of a target variable (i.e., \(v\), \(vx\), \(vy\) or \(\theta\)) is less than 0.2%. We follow the common practice in the aviation business of filling any missing value with the average of its two closest measurements in the directions of history and future.
### Evaluation settings
To extensively evaluate the robustness of prediction models, we follow the rolling and incremental evaluation strategies in time series prediction (Guo et al., 2018). These evaluation strategies are similar to, but different from, standard cross-validation methods in that the successiveness of the time series data is maintained. In particular, we divide the whole time range of the data into non-overlapping intervals, each corresponding to one month.
Figure 6: Visualizations of meteorological variables.
In the **rolling evaluation**, we take the data within one interval as the testing set and the data in the previous \(I\) intervals as the training set, where the last interval is held out as the validation set if needed. For comparison, the **incremental evaluation** uses all data preceding the current testing set for training. In our application, there are 18 months/intervals (a year and a half) of data. We set the parameter \(I\) in the rolling evaluation to 6. For either evaluation, we slide a window corresponding to the current testing set over the last 12 intervals, and thus conduct 12-fold validations that span all months/seasons in a year. In each fold, we follow the common practice in the machine learning domain of reporting average performance over multiple runs with different random seeds (here, 10 random runs), because a good neural network should be robust to different initializations.
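For concreteness, the fold generation of the two strategies can be sketched as follows (a minimal sketch with our naming; intervals are indexed 0–17 and the last 12 serve as test folds):

```python
def rolling_folds(n_intervals=18, train_len=6, n_folds=12):
    """Yield (train interval indices, test interval index) for rolling evaluation."""
    for test in range(n_intervals - n_folds, n_intervals):
        yield list(range(test - train_len, test)), test

def incremental_folds(n_intervals=18, n_folds=12):
    """Incremental evaluation: all preceding intervals form the training set."""
    for test in range(n_intervals - n_folds, n_intervals):
        yield list(range(test)), test
```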
We use Root Mean Square Error (RMSE), i.e., the square root of the MSE defined in Eq. (4), as the metric to report prediction performance on the wind speed variables, i.e., \(v\), \(vx\) and \(vy\). There is no substantial difference between MSE and RMSE except that the former is convenient for derivation while the latter is commonly used as an evaluation metric. Our motivation for choosing MSE/RMSE is that they are more sensitive to extreme values than other losses/metrics. Extreme values, which are frequently treated as noise in other application domains, reflect extreme weather events that are usually of importance. In our application to an international airport, RMSE is the critical metric for evaluating candidate algorithms for wind speed prediction, because greater wind speed implies greater risk. We apply an Adjusted Mean Absolute Error (AMAE) (Grimit et al., 2006), defined in Eq. (11), for the circular variable \(\theta\), for which RMSE is no longer suitable. Consider a case with a true wind direction of 10 degrees and two candidate predictions, 350 and 40. RMSE and common MAE would prefer 40, while 350 is closer to the ground truth, 10, in a circular coordinate system.
\[AMAE =\frac{1}{T}\sum_{t=1}^{T}\delta(\hat{\theta}_{t},\theta_{t}), \tag{11}\] \[\delta(\hat{\theta}_{t},\theta_{t}) =\begin{cases}|\hat{\theta}_{t}-\theta_{t}|&\text{if }0<|\hat{\theta}_{t}-\theta_{t}|\leqslant 180,\\ 360-|\hat{\theta}_{t}-\theta_{t}|&\text{if }180<|\hat{\theta}_{t}-\theta_{t}|\leqslant 360.\end{cases}\]
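A minimal NumPy sketch of AMAE (Eq. (11)); the function name is ours:

```python
import numpy as np

def amae(theta_pred, theta_true):
    """Adjusted MAE (Eq. 11) for circular wind direction, in degrees."""
    d = np.abs(np.asarray(theta_pred) - np.asarray(theta_true))
    return np.where(d > 180.0, 360.0 - d, d).mean()
```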
We also report relative/normalized errors for easy comparison. For the wind speed variables (i.e., \(v\), \(vx\) and \(vy\)), we report the Root Relative Squared Error (RRSE). RRSE is the root of the squared error of the predictions relative to a naive model that always outputs the average of the actual values. Eq. (12) gives the mathematical definition, where \(y_{t}\) denotes the observed value, \(\hat{y}_{t}\) the predicted value, and \(\bar{y}\) the average of all \(T\) observations. For \(\theta\), we report the Normalized AMAE (NAMAE), which divides the AMAE by the maximum error of 180 degrees (i.e., \(NAMAE=AMAE/180\)). NAMAE is a percentage error ranging in [0, 1]. Although RRSE is not bounded above by 1, it is relative to a naive model and is thus also comparable across different settings. In addition, the square
root of the RSE reduces the error to the same dimension as the quantity being predicted.
\[RRSE=\sqrt{\frac{\sum_{t=1}^{T}(y_{t}-\hat{y}_{t})^{2}}{\sum_{t=1}^{T}(y_{t}-\bar{ y})^{2}}} \tag{12}\]
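The relative metrics can be sketched analogously (names are ours; `amae` is the function from the previous sketch):

```python
def rrse(y_true, y_pred):
    """Root Relative Squared Error (Eq. 12) w.r.t. the mean predictor."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    num = np.sum((y_true - y_pred) ** 2)
    den = np.sum((y_true - y_true.mean()) ** 2)
    return np.sqrt(num / den)

def namae(theta_pred, theta_true):
    """Normalized AMAE: percentage error in [0, 1]."""
    return amae(theta_pred, theta_true) / 180.0
```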
### Implementation details
To meet the practical requirements, the number of future horizons \(K\) is set to 24 (hours) and the FCT \(t\) is 00 (o'clock). Because the most recent autocorrelations are far more significant, as exemplified in Fig. 2(a), we set the historical window \(W\) to the minimum period, i.e., 24 (hours).
We train an MHSTN for each of the wind speed variables (i.e., \(v\), \(vx\) and \(vy\)) over all stations in the following three stages. We first train the temporal modules of all stations, respectively. Then we train the spatial module of each station. Third, the ensemble modules of all stations are trained. In the end, the predictions of wind direction (\(\theta\)) can be efficiently produced by Eq. (2).
We implement the DNNs with TensorFlow-Keras-2.3.0 and train them with the Adam algorithm (Kingma and Ba, 2014) using the default settings. The initial learning rate (0.001) is reduced by a factor of 0.5 if no improvement is seen for 3 epochs on the validation set, with 0.0001 as the lower bound. The batch size is 32. We perform early stopping on the validation set to prevent overfitting, if the loss has not improved for 30 epochs, and save the best model for testing. During training, we normalize the data before feeding them into the model. Specifically, each time series is subtracted by its mean and divided by the standard deviation of the training set. After training, we apply the inverse normalization to the model output to produce final predictions, and report evaluation metrics in the original data space. We run experiments on an Ubuntu-18.04 server featuring a 2.50GHz Intel(R) Xeon(R) E5-2682 CPU and an NVIDIA Tesla P100 GPU.
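In Keras 2.3 terms, this training setup corresponds to the following callbacks and compile/fit configuration (a sketch: `model` and the data arrays are placeholders, and the epoch budget is our assumption since early stopping terminates training):

```python
callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                         patience=3, min_lr=1e-4),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=30,
                                     restore_best_weights=True),
]
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
model.fit(train_inputs, train_targets, batch_size=32, epochs=200,
          validation_data=(val_inputs, val_targets), callbacks=callbacks)
```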
### Comparison methods
Our unified model has the capability to assimilate multi-source data related to **history (h)**, **future (f)**, **space (s)** and **covariates (c)** (excluding self-history and self-future). For comparison, models fed with all available covariates are marked with **c***. The comparison methods include an advanced physical model, conventional machine learning models, deep neural networks, and variants of our model.
First, we consider two common benchmarks in weather prediction as follows:
* **Persistence** model predicts the wind by simply copying the results from 24 hours earlier. It has been widely used to characterize the difficulty of forecasting (Wilson et al., 2018).
* **NWP** is the commercial NWP service in our application. It should be noted that NWP is usually a hard-to-beat physical model (Bauer et al., 2015).
In addition, we implement two conventional regression models, which have been widely used in short-term forecasting problems, as follows:
* **SVR** indicates the Support Vector Regression model, which is an extension of the non-linear support vector machine to regression estimation.
* **GBRT** indicates the Gradient Boosting Regression Tree, which is an ensemble model for regression tasks and has been widely exploited in practice.
Specifically, our implementation is based on scikit-learn-0.23.1. The above regression models can only output one value at a time, so we wrap them in the MultiOutputRegressor 3 module, which is an implementation of the direct strategy for multi-horizon forecasting (please refer to section 2 for more details). We also extensively search crucial hyper-parameters with randomized search 4 and save the best models for comparison.
Footnote 3: [https://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputRegressor.html](https://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputRegressor.html)
Footnote 4: [https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html)
In the following content, we include a series of DNN models.
First, **MLP** and **LSTM** networks have been the most used for wind speed prediction. In particular, we adapt components of MHSTN's temporal module as competitors, because we found that a more complicated network does not bring any benefit in terms of prediction accuracy.
* **LSTM(h)**, **LSTM(f)** and **LSTM(h,f)**: Each of them is a LSTM with different inputs that connects a network of \(LSTM^{(h)}\) in Fig. 4 to a linear output layer.
* **MLP(h)**, **MLP(f)**, and **MLP(h,f)**: Each of them is a MLP with different inputs that concatenates a network of \(MLP^{(f)}\) in Fig. 4 to an output layer.
Second, we consider neural networks that have the capability to model spatiotemporal data.
* **ConvLSTM(h,f,s)**: Convolutional LSTM was originally proposed by Shi et al. (2015) to model spatiotemporal sequences for precipitation nowcasting. The network incorporates convolutional structures to model both input-to-state and state-to-state transitions. We apply a standard ConvLSTM with a 4-D tensor input, i.e., (_samples_, _time_, _rows_, _channels_), in which _rows_ is 2, corresponding to the two sequences of history and future at a station, and _channels_ is the number of stations. We implement the model with keras 5 with settings following our spatial module (i.e., \(CNN^{(s)}\)). Footnote 5: [https://www.tensorflow.org/api_docs/python/tf/keras/layers/ConvLSTM2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ConvLSTM2D)
* **GCNLSTM(h,f,s)**: A coupled Graph Convolutional Network (GCN) (Kipf and Welling, 2017) and LSTM has been applied to model spatiotemporal data for one-step ahead wind speed prediction (Khodayar and Wang, 2018;
Wilson et al., 2018) and traffic prediction (Zhao et al., 2019). Similarly, we implement a GCNLSTM that has three blocks: (1) a GCN block, which is similar to our spatial module (i.e., \(CNN^{(s)}\)) and relies on the spektral library 6; (2) an LSTM block, which is the same as our historical encoder (i.e., \(LSTM^{(h)}\)); and (3) an MLP block, which has a hidden layer with units identical to the length of the input and a linear output layer to make predictions. The outputs of the first two blocks are concatenated and fed into the third block. All hidden layers in GCNLSTM adopt a _relu_ activation function. GCNLSTM takes two inputs: a multivariate time series, i.e., a 2-D matrix containing the historical and future sequences over all stations (fed into both the GCN and LSTM blocks), and an adjacency matrix describing the dynamic spatial dependencies between stations (fed into the GCN only), which is a Pearson correlation matrix calculated on the previous input.
Footnote 6: [https://github.com/danielegrattarola/spektral](https://github.com/danielegrattarola/spektral)
We also include the Deep Uncertainty Quantification neural network (**DUQ**) recently proposed by Wang et al. (2019), which combines historical observation and NWP data for wind speed prediction at stations deployed on a coarse grid. With different inputs, there are **DUQ(h,f)**, **DUQ(h,f,c)** and **DUQ(h,f,c*)**; the last corresponds to the DUQ proposed in (Wang et al., 2019). Specifically, we implement DUQ with the open-sourced code 7, in which we use the proposed negative log-likelihood error (NLE) loss function and a one-layer Seq2Seq with 32 hidden units. Several points should be clarified: (1) we found that different settings of DUQ make no significant difference in prediction accuracy and thus picked the current setting; (2) we did not consider exhaustive ensemble strategies, since ensembling is a generic practice to boost prediction accuracy; (3) DUQ does not leverage spatial dependencies or select covariates.
Footnote 7: [https://github.com/BruceBinBoxing/Deep_Learning_Weather_Forecasting](https://github.com/BruceBinBoxing/Deep_Learning_Weather_Forecasting)
Finally, we consider variants of our MHSTN (Fig. 4): the temporal module, **MHSTN-T(h,f)**, the spatial module, **MHSTN-S(h,f,s)**, the ensemble module, **MHSTN-E(h,f,s)**, and the whole framework, **MHSTN-E(h,f,s,c)**. Furthermore, we also feed covariates into the aforementioned competitors for comparison. Interested readers may refer to our source code for more details.
### Experimental results
In this section, we summarize the major experimental results in Tab. 3, Tab. 4, Tab. 5 and Tab. 6. Each reported value is the average error over 9 (stations) \(\times\) 12 (intervals/months) \(\times\) 10 (random runs) tests. Our framework and its modules, as well as the best results, are marked in gray font. Furthermore, we analyze the covariates' importance, visualize some cases, and report MHSTN's computation time.
#### 5.5.1 Results of prediction accuracy (Tab. 3, Tab. 4, Tab. 5 and Tab. 6).
First of all, under both evaluation strategies, the results of the two benchmarks, i.e., Persistence and NWP, are identical because they do not involve a training process. The rest are statistical models that learn from training data. It can be observed that these statistical models consistently perform better under incremental evaluation than under rolling evaluation. This aligns with the common understanding that more training data usually improves statistical models. Besides, we observe that Persistence is consistently much worse than NWP. This demonstrates that the prediction tasks are highly challenging, and that NWP achieves a relatively low error. The performance difference between variables also suggests an increasing predictive difficulty across \(v\), \(vx\), \(vy\) and \(\theta\).
Second, comparing models that ingest historical (h) and/or future (f) sequences, we have the following discoveries:
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{4}{c}{Target wind variables} \\ \cline{2-5} & \(v\) (RMSE) & \(vx\) (RMSE) & \(vy\) (RMSE) & \(\theta\) (AMAE) \\ \hline \multicolumn{5}{c}{Benchmarks} \\ \hline Persistence & 2.322 & 2.720 & 3.256 & 68.889 \\ NWP & 1.516 & 1.577 & 1.701 & 40.124 \\ \hline \hline \multicolumn{5}{c}{Ingestion of historical(h) and/or future(f) sequences} \\ \hline LSTM(h) & \(1.800\pm 0.031\) & \(2.068\pm 0.031\) & \(2.526\pm 0.031\) & \(66.571\pm 1.505\) \\ LSTM(f) & \(1.632\pm 0.079\) & \(1.665\pm 0.079\) & \(1.913\pm 0.079\) & \(44.299\pm 1.225\) \\ LSTM(h,f) & \(1.618\pm 0.068\) & \(1.706\pm 0.068\) & \(1.953\pm 0.068\) & \(44.420\pm 1.459\) \\ MLP(h) & \(1.852\pm 0.035\) & \(2.105\pm 0.035\) & \(2.639\pm 0.035\) & \(69.253\pm 1.757\) \\ MLP(f) & \(1.399\pm 0.019\) & \(1.536\pm 0.019\) & \(1.731\pm 0.019\) & \(41.541\pm 0.800\) \\ MLP(h,f) & \(1.445\pm 0.029\) & \(1.628\pm 0.029\) & \(1.831\pm 0.029\) & \(43.989\pm 1.156\) \\ \hline MHSTN-T(h,f) & \(1.401\pm 0.024\) & \(1.533\pm 0.024\) & \(1.687\pm 0.024\) & \(40.506\pm 0.805\) \\ \hline GBRT(f) & \(1.444\pm 0.027\) & \(1.608\pm 0.027\) & \(1.774\pm 0.027\) & \(41.608\pm 0.791\) \\ GBRT(h,f) & \(1.440\pm 0.027\) & \(1.598\pm 0.027\) & \(1.742\pm 0.027\) & \(40.648\pm 0.757\) \\ SVR(f) & \(1.447\pm 0.010\) & \(1.622\pm 0.010\) & \(1.771\pm 0.010\) & \(40.946\pm 0.290\) \\ SVR(h,f) & \(1.456\pm 0.010\) & \(1.679\pm 0.010\) & \(1.847\pm 0.010\) & \(42.719\pm 0.313\) \\ DUQ(h,f) & \(1.696\pm 0.006\) & \(2.055\pm 0.006\) & \(2.628\pm 0.006\) & \(79.310\pm 0.746\) \\ \hline \hline \multicolumn{5}{c}{Addition of spatial information (s)} \\ \hline MHSTN-S(h,f,s) & \(1.381\pm 0.019\) & \(1.503\pm 0.019\) & \(1.672\pm 0.019\) & \(39.707\pm 0.667\) \\ MHSTN-E(h,f,s) & \(1.370\pm 0.014\) & \(1.504\pm 0.014\) & \(1.658\pm 0.014\) & \(39.441\pm 0.489\) \\ \hline ConvLSTM(h,f,s) & \(1.378\pm 0.017\) & \(1.528\pm 0.017\) & \(1.699\pm 0.017\) & \(39.885\pm 0.617\) \\ GCNLSTM(h,f,s) & \(1.464\pm 0.047\) & \(1.603\pm 0.047\) & \(1.788\pm 0.047\) & \(42.410\pm 1.311\) \\ \hline \hline \multicolumn{5}{c}{Addition of covariates (c)} \\ \hline MHSTN-E(h,f,s,c) & \(1.365\pm 0.017\) & \(1.518\pm 0.017\) & \(1.667\pm 0.017\) & \(39.886\pm 0.486\) \\ MHSTN-E(h,f,s,c*) & \(1.553\pm 0.034\) & \(1.761\pm 0.034\) & \(1.924\pm 0.034\) & \(49.009\pm 0.984\) \\ \hline ConvLSTM(h,f,s,c) & \(1.612\pm 0.061\) & \(1.565\pm 0.061\) & \(1.722\pm 0.061\) & \(41.098\pm 0.715\) \\ ConvLSTM(h,f,s,c*) & \(1.627\pm 0.053\) & \(1.815\pm 0.053\) & \(2.038\pm 0.053\) & \(49.866\pm 1.991\) \\ DUQ(h,f,c) & \(1.767\pm 0.045\) & \(2.063\pm 0.045\) & \(2.633\pm 0.045\) & \(80.006\pm 0.922\) \\ DUQ(h,f,c*) & \(1.764\pm 0.019\) & \(2.121\pm 0.019\) & \(2.703\pm 0.019\) & \(86.745\pm 2.621\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of the rolling evaluation
1. Future information is critical to attaining accurate predictions. Specifically, models that ingest the future sequence are usually better than those that do not.
2. LSTM and MLP are apt to model historical and future information, respectively. Specifically, LSTM(h) is better than MLP(h) under all conditions, while MLP(f), the best competitor, is much better than LSTM(f).
3. Our temporal module, MHSTN-T(h,f), is usually the best. This implies that the temporal module hybridizes the LSTM and MLP networks in an effective manner to assimilate historical and future data. Specifically, under rolling evaluation, MHSTN-T(h,f) achieves the best results on the variables \(vx\), \(vy\) and \(\theta\) and is competitive with MLP(f) on \(v\). Under incremental evaluation, MHSTN-T(h,f) is significantly the best on all variables. These results show that MHSTN-T(h,f) profits more from the increase in data than the other models.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{Target wind variables} \\ \cline{2-5} Models & \(v\) (RRSE) & \(vx\) (RRSE) & \(vy\) (RRSE) & \(\theta\) (NAMAE) \\ \hline \multicolumn{5}{c}{Benchmarks} \\ \hline Persistence & 1.280 & 1.316 & 1.285 & 0.383 \\ NWP & 0.863 & 0.781 & 0.688 & 0.223 \\ \hline \hline \multicolumn{5}{c}{Ingestion of historical(h) and/or future(f) sequences} \\ \hline LSTM(h) & \(0.995\pm 0.017\) & \(1.000\pm 0.017\) & \(0.998\pm 0.017\) & 0.370 \(\pm\) 0.008 \\ LSTM(f) & \(0.910\pm 0.042\) & \(0.810\pm 0.042\) & \(0.763\pm 0.042\) & 0.246 \(\pm\) 0.007 \\ LSTM(h,f) & \(0.902\pm 0.037\) & \(0.829\pm 0.037\) & \(0.778\pm 0.037\) & 0.247 \(\pm\) 0.008 \\ MLP(h) & \(1.027\pm 0.019\) & \(1.021\pm 0.019\) & \(1.039\pm 0.019\) & 0.385 \(\pm\) 0.010 \\ MLP(f) & \(0.795\pm 0.011\) & \(0.755\pm 0.011\) & \(0.691\pm 0.011\) & 0.231 \(\pm\) 0.004 \\ MLP(h,f) & \(0.817\pm 0.016\) & \(0.799\pm 0.016\) & \(0.729\pm 0.016\) & 0.244 \(\pm\) 0.006 \\ \hline MHSTN-T(h,f) & \(0.793\pm 0.014\) & \(0.754\pm 0.014\) & \(0.674\pm 0.014\) & 0.225 \(\pm\) 0.004 \\ \hline GBRT(f) & \(0.813\pm 0.014\) & \(0.786\pm 0.014\) & \(0.710\pm 0.014\) & 0.231 \(\pm\) 0.004 \\ GBRT(h,f) & \(0.810\pm 0.015\) & \(0.781\pm 0.015\) & \(0.697\pm 0.015\) & 0.226 \(\pm\) 0.004 \\ SVR(f) & \(0.808\pm 0.006\) & \(0.789\pm 0.006\) & \(0.707\pm 0.006\) & 0.227 \(\pm\) 0.002 \\ SVR(h,f) & \(0.812\pm 0.005\) & \(0.815\pm 0.005\) & \(0.734\pm 0.005\) & 0.237 \(\pm\) 0.002 \\ DUQ(h,f) & \(0.945\pm 0.003\) & \(0.994\pm 0.003\) & \(1.050\pm 0.003\) & 0.441 \(\pm\) 0.004 \\ \hline \hline \multicolumn{5}{c}{Addition of spatial information (s)} \\ \hline MHSTN-S(h,f,s) & \(0.783\pm 0.011\) & \(0.739\pm 0.011\) & \(0.668\pm 0.011\) & 0.221 \(\pm\) 0.004 \\ MHSTN-E(h,f,s) & \(0.776\pm 0.008\) & \(0.739\pm 0.008\) & \(0.663\pm 0.008\) & 0.219 \(\pm\) 0.003 \\ \hline ConvLSTM(h,f,s) & \(0.782\pm 0.009\) & \(0.749\pm 0.009\) & \(0.678\pm 0.009\) & 0.222 \(\pm\) 0.003 \\ GCNLSTM(h,f,s) & \(0.826\pm 0.026\) & \(0.786\pm 0.026\) & \(0.714\pm 0.026\) & 0.236 \(\pm\) 0.007 \\ \hline \hline \multicolumn{5}{c}{Addition of covariates (c)} \\ \hline MHSTN-E(h,f,s,c) & \(0.769\pm 0.009\) & \(0.745\pm 0.009\) & \(0.666\pm 0.009\) & 0.222 \(\pm\) 0.003 \\ MHSTN-E(h,f,s,c*) & \(0.867\pm 0.019\) & \(0.864\pm 0.019\) & \(0.772\pm 0.019\) & 0.272 \(\pm\) 0.005 \\ \hline ConvLSTM(h,f,s,c) & \(0.903\pm 0.034\) & \(0.766\pm 0.034\) & \(0.688\pm 0.034\) & 0.228 \(\pm\) 0.004 \\ ConvLSTM(h,f,s,c*) & \(0.911\pm 0.030\) & \(0.887\pm 0.030\) & \(0.817\pm 0.030\) & 0.277 \(\pm\) 0.011 \\ DUQ(h,f,c) & \(0.982\pm 0.024\) & \(0.997\pm 0.024\) & \(1.052\pm 0.024\) & 0.444 \(\pm\) 0.005 \\ DUQ(h,f,c*) & \(0.982\pm 0.011\) & \(1.026\pm 0.011\) & \(1.079\pm 0.011\) & 0.482 \(\pm\) 0.015 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of the rolling evaluation with normalized metrics
4. The conventional ML models, i.e., GBRT and SVR, are consistently worse than the neural networks, i.e., MLP(f) and MHSTN-T(h,f).
Third, comparing models that leverage spatial information (s), we have the following discoveries:
1. MHSTN-S(h,f,s) consistently outperforms both MHSTN-T(h,f) and the other competitors. This means that MHSTN's spatial module is effective at mining the spatial dependencies inherent in multi-station data.
2. MHSTN-E(h,f,s) is usually the best and further reduces the error of MHSTN-S(h,f,s). This suggests that MHSTN's ensemble module can balance temporal and spatial information to produce more accurate predictions.
Finally, feeding covariates into advanced models, we have the following discoveries:
1. The best competitor leveraging spatiotemporal data, i.e., ConvLSTM(h,f,s), gets significantly worse when covariates are added, whether selected covariates (c) or all covariates (c*).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{4}{c}{Target wind variables} \\ \cline{2-5} & \(v\) (RMSE) & \(vx\) (RMSE) & \(vy\) (RMSE) & \(\theta\) (AMAE) \\ \hline \multicolumn{5}{c}{Benchmarks} \\ \hline Persistence & 2.322 & 2.720 & 3.256 & 68.889 \\ NWP & 1.516 & 1.577 & 1.701 & 40.124 \\ \hline \hline \multicolumn{5}{c}{Ingestion of historical(h) and/or future(f) sequences} \\ \hline LSTM(h) & \(1.753\pm 0.032\) & \(2.040\pm 0.016\) & \(2.475\pm 0.027\) & \(61.920\pm 1.121\) \\ LSTM(f) & \(1.530\pm 0.092\) & \(1.587\pm 0.037\) & \(1.770\pm 0.048\) & \(40.726\pm 0.945\) \\ LSTM(h,f) & \(1.516\pm 0.067\) & \(1.615\pm 0.038\) & \(1.807\pm 0.052\) & \(41.415\pm 1.144\) \\ MLP(h) & \(1.773\pm 0.032\) & \(2.042\pm 0.035\) & \(2.508\pm 0.038\) & \(64.694\pm 1.500\) \\ MLP(f) & \(1.362\pm 0.019\) & \(1.494\pm 0.020\) & \(1.643\pm 0.019\) & \(39.403\pm 0.629\) \\ MLP(h,f) & \(1.386\pm 0.025\) & \(1.561\pm 0.030\) & \(1.705\pm 0.031\) & \(41.180\pm 0.953\) \\ \hline MHSTN-T(h,f) & \(1.346\pm 0.020\) & \(1.485\pm 0.027\) & \(1.611\pm 0.023\) & \(38.497\pm 0.734\) \\ \hline GBRT(f) & \(1.391\pm 0.023\) & \(1.557\pm 0.031\) & \(1.705\pm 0.027\) & \(39.544\pm 0.684\) \\ GBRT(h,f) & \(1.387\pm 0.025\) & \(1.541\pm 0.030\) & \(1.672\pm 0.025\) & \(38.730\pm 0.633\) \\ SVR(f) & \(1.411\pm 0.010\) & \(1.581\pm 0.008\) & \(1.680\pm 0.008\) & \(38.960\pm 0.238\) \\ SVR(h,f) & \(1.422\pm 0.008\) & \(1.640\pm 0.009\) & \(1.739\pm 0.008\) & \(40.269\pm 0.282\) \\ DUQ(h,f) & \(1.670\pm 0.007\) & \(1.986\pm 0.005\) & \(2.632\pm 0.005\) & \(66.067\pm 0.936\) \\ \hline \hline \multicolumn{5}{c}{Addition of spatial information (s)} \\ \hline MHSTN-S(h,f,s) & \(1.332\pm 0.016\) & \(1.466\pm 0.015\) & \(1.601\pm 0.021\) & \(37.966\pm 0.542\) \\ MHSTN-E(h,f,s) & \(1.319\pm 0.012\) & \(1.466\pm 0.013\) & \(1.589\pm 0.011\) & \(37.927\pm 0.397\) \\ \hline ConvLSTM(h,f,s) & \(1.329\pm 0.016\) & \(1.489\pm 0.022\) & \(1.634\pm 0.022\) & \(38.688\pm 0.644\) \\ GCNLSTM(h,f,s) & \(1.405\pm 0.042\) & \(1.569\pm 0.047\) & \(1.714\pm 0.048\) & \(41.041\pm 1.259\) \\ \hline \hline \multicolumn{5}{c}{Addition of covariates (c)} \\ \hline MHSTN-E(h,f,s,c) & \(1.310\pm 0.014\) & \(1.469\pm 0.012\) & \(1.591\pm 0.012\) & \(37.933\pm 0.398\) \\ MHSTN-E(h,f,s,c*) & \(1.410\pm 0.021\) & \(1.594\pm 0.027\) & \(1.724\pm 0.022\) & \(42.123\pm 0.721\) \\ \hline ConvLSTM(h,f,s,c*) & \(1.392\pm 0.027\) & \(1.508\pm 0.024\) & \(1.650\pm 0.020\) & \(39.218\pm 0.651\) \\ ConvLSTM(h,f,s,c*) & \(1.465\pm 0.042\) & \(1.636\pm 0.043\) & \(1.788\pm 0.048\) & \(42.756\pm 1.351\) \\ DUQ(h,f,c) & \(1.712\pm 0.046\) & \(1.992\pm 0.007\) & \(2.634\pm 0.006\) & \(66.911\pm 0.971\) \\ DUQ(h,f,c*) & \(1.728\pm 0.019\) & \(2.030\pm 0.028\) & \(2.669\pm 0.029\) & \(71.338\pm 3.273\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results of the incremental evaluation
2. Compared with MHSTN-E(h,f,s), MHSTN-E(h,f,s,c) improves on some variables. Meanwhile, MHSTN's degradation on the remaining variables is negligible and much smaller than ConvLSTM's. These results indicate that MHSTN is more robust to the additional variances.
3. Taking all available covariates into consideration always results in bad models, e.g., MHSTN-E(h,f,s,c*). Therefore, the covariate selection module is needed in real-world applications, where domain knowledge is usually absent. Further improvements could be made by exploring a more effective way to leverage the selected covariates instead of feeding them directly into the models, but we leave this for future work.
In addition, we observe that the recently advanced DNN weather prediction model, DUQ (Wang et al., 2019), obtains poor results under all conditions and is even worse than NWP by a large margin. We speculate the possible reasons are twofold. The first may be the difference between scenarios. DUQ was designed on the hypothesis that _"Each day and each station are independent."_
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{Target wind variables} \\ \cline{2-5} Models & \(v\) (RRSE) & \(vx\) (RRSE) & \(vy\) (RRSE) & \(\theta\) (NAMAE) \\ \hline \multicolumn{5}{c}{Benchmarks} \\ \hline Persistence & 1.280 & 1.316 & 1.285 & 0.383 \\ NWP & 0.863 & 0.781 & 0.688 & 0.223 \\ \hline \hline \multicolumn{5}{c}{Ingestion of historical(h) and/or future(f) sequences} \\ \hline LSTM(h) & 0.969 \(\pm\) 0.017 & 0.985 \(\pm\) 0.007 & 0.985 \(\pm\) 0.011 & 0.344 \(\pm\) 0.006 \\ LSTM(f) & 0.857 \(\pm\) 0.049 & 0.772 \(\pm\) 0.017 & 0.712 \(\pm\) 0.019 & 0.226 \(\pm\) 0.005 \\ LSTM(h,f) & 0.848 \(\pm\) 0.036 & 0.785 \(\pm\) 0.018 & 0.726 \(\pm\) 0.021 & 0.230 \(\pm\) 0.006 \\ MLP(h) & 0.983 \(\pm\) 0.018 & 0.989 \(\pm\) 0.017 & 0.997 \(\pm\) 0.015 & 0.359 \(\pm\) 0.008 \\ MLP(f) & 0.775 \(\pm\) 0.011 & 0.734 \(\pm\) 0.010 & 0.662 \(\pm\) 0.008 & 0.219 \(\pm\) 0.003 \\ MLP(h,f) & 0.785 \(\pm\) 0.014 & 0.766 \(\pm\) 0.015 & 0.686 \(\pm\) 0.012 & 0.229 \(\pm\) 0.005 \\ \hline MHSTN-T(h,f) & 0.763 \(\pm\) 0.011 & 0.729 \(\pm\) 0.013 & 0.648 \(\pm\) 0.009 & 0.214 \(\pm\) 0.004 \\ \hline GBRT(f) & 0.786 \(\pm\) 0.012 & 0.761 \(\pm\) 0.015 & 0.686 \(\pm\) 0.011 & 0.220 \(\pm\) 0.004 \\ GBRT(h,f) & 0.783 \(\pm\) 0.014 & 0.753 \(\pm\) 0.015 & 0.672 \(\pm\) 0.010 & 0.215 \(\pm\) 0.004 \\ SVR(f) & 0.790 \(\pm\) 0.005 & 0.769 \(\pm\) 0.004 & 0.675 \(\pm\) 0.003 & 0.216 \(\pm\) 0.001 \\ SVR(h,f) & 0.794 \(\pm\) 0.005 & 0.796 \(\pm\) 0.004 & 0.697 \(\pm\) 0.003 & 0.224 \(\pm\) 0.002 \\ DUQ(h,f) & 0.929 \(\pm\) 0.004 & 0.957 \(\pm\) 0.003 & 1.057 \(\pm\) 0.002 & 0.367 \(\pm\) 0.005 \\ \hline \hline \multicolumn{5}{c}{Addition of spatial information (s)} \\ \hline MHSTN-S(h,f,s) & 0.757 \(\pm\) 0.009 & 0.719 \(\pm\) 0.008 & 0.645 \(\pm\) 0.009 & 0.211 \(\pm\) 0.003 \\ MHSTN-E(h,f,s) & 0.749 \(\pm\) 0.007 & 0.719 \(\pm\) 0.007 & 0.640 \(\pm\) 0.005 & 0.211 \(\pm\) 0.002 \\ \hline ConvLSTM(h,f,s) & 0.754 \(\pm\) 0.009 & 0.730 \(\pm\) 0.011 & 0.658 \(\pm\) 0.009 & 0.215 \(\pm\) 0.004 \\ GCNLSTM(h,f,s) & 0.794 \(\pm\) 0.023 & 0.768 \(\pm\) 0.023 & 0.689 \(\pm\) 0.019 & 0.228 \(\pm\) 0.007 \\ \hline \hline \multicolumn{5}{c}{Addition of covariates (c)} \\ \hline MHSTN-E(h,f,s,c) & 0.740 \(\pm\) 0.008 & 0.720 \(\pm\) 0.006 & 0.641 \(\pm\) 0.005 & 0.211 \(\pm\) 0.002 \\ MHSTN-E(h,f,s,c*) & 0.794 \(\pm\) 0.012 & 0.779 \(\pm\) 0.012 & 0.698 \(\pm\) 0.010 & 0.234 \(\pm\) 0.004 \\ \hline ConvLSTM(h,f,s,c) & 0.783 \(\pm\) 0.015 & 0.740 \(\pm\) 0.012 & 0.664 \(\pm\) 0.008 & 0.218 \(\pm\) 0.004 \\ ConvLSTM(h,f,s,c*) & 0.823 \(\pm\) 0.023 & 0.801 \(\pm\) 0.021 & 0.724 \(\pm\) 0.021 & 0.238 \(\pm\) 0.008 \\ DUQ(h,f,c) & 0.950 \(\pm\) 0.024 & 0.960 \(\pm\) 0.003 & 1.058 \(\pm\) 0.003 & 0.372 \(\pm\) 0.005 \\ DUQ(h,f,c*) & 0.961 \(\pm\) 0.011 & 0.979 \(\pm\) 0.013 & 1.073 \(\pm\) 0.012 & 0.396 \(\pm\) 0.018 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results of the incremental evaluation with normalized metrics
However, in our scenario, as illustrated in Fig. 2(c), the dense stations in a local field are highly correlated with each other. Besides, DUQ targeted the variables of wind speed, temperature and humidity, which have been demonstrated to be more stable than the wind speed components, (\(vx\), \(vy\)), and wind direction, \(\theta\), both in the above experimental results and in existing studies (Grover et al., 2015; Masseran et al., 2013). Second, the original DUQ was evaluated only on a very small dataset covering nine specific days. This ignores seasonal diversification and may produce an overfitted model.
In summary, under both evaluation strategies, MHSTN achieves the best results. In particular, under the incremental evaluation, MHSTN reduces the prediction errors of NWP on \(v\), \(vx\), \(vy\) and \(\theta\) by 13.59%, 7.04%, 6.58% and 5.48%, respectively. In terms of the normalized evaluation metrics, the reductions on \(v\), \(vx\), \(vy\) and \(\theta\) are 14.25%, 7.93%, 6.97% and 5.38%, respectively. All of the above results show that an effective unified model, i.e., MHSTN, is needed to leverage multi-source data, which otherwise bring in more variances/uncertainties that confuse models.
#### 5.5.2 Results of covariate selection (Tab. 7).
We further inspect the proposed covariate selection module. For a target variable, the importance values of the covariates are computed and saved as a vector at each station. For simplicity, we use the average of the importance vectors over the respective stations as the final importance vector. Tab. 7 reports the results. For the historical data, it can be observed that a target variable is most correlated with its own history. The module picks out covariates with importance values greater than a threshold of 0.2 (marked in gray font). Similarly, for the future data, we can see that, except for the self-future, all covariates are generally uncorrelated with the target variable. Thus, the module considers only the self-future (marked in gray font) as input.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{6}{c}{Target variables} \\ \cline{2-7} & \multicolumn{3}{c}{on historical data} & \multicolumn{3}{c}{on future data} \\ \cline{2-7} & \(v\) & \(vx\) & \(vy\) & \(v\) & \(vx\) & \(vy\) \\ \hline \(v\) & 0.9440 & 0.2476 & 0.4946 & 1.0000 & 0.1192 & 0.0642 \\ \(vx\) & 0.3493 & 0.9964 & 0.6254 & 0.2066 & 1.0000 & 0.0765 \\ \(vy\) & 0.4479 & 0.2351 & 0.7443 & 0.1092 & 0.1848 & 1.0000 \\ \(\theta\) & 0.1325 & 0.0721 & 0.0942 & 0.0203 & 0.0624 & 0.0618 \\ \(rh\) & 0.0038 & 0.0050 & 0.0131 & 0.0001 & 0.0004 & 0.0005 \\ \(slp\) & 0.3713 & 0.0503 & 0.0547 & 0.0296 & 0.0128 & 0.0183 \\ \(tp\) & 0.3101 & 0.1632 & 0.1837 & 0.0444 & 0.0093 & 0.0053 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Importance values of the covariates regarding to their respective target variables.
#### 5.5.3 Results of visualization (Fig. 7 and Fig. 8).
For readability, we do not include all of the competitors elaborated above. Fig. 7 visualizes the prediction results for \(v\) at a station, where the testing set corresponds to the last split of the incremental evaluation. We can observe that LSTM(h) fails to capture the true trajectory, and the forecasts produced by this under-fitting model gather around the mean line. In contrast, by assimilating NWP data, MHSTN is able to align with the future seasonality and events to generate sharp, spiky forecasts. Further, the forecasts of MHSTN, which injects additional local observation data, look better than those of NWP. Fig. 8 visualizes the forecasts of \(\theta\), in which we add a subplot to show the errors with regard to \(\delta(\hat{\theta}_{t},\theta_{t})\) defined in Eq. (11). We can observe that the errors of NWP are greater than those of MHSTN at most timestamps.
#### 5.5.4 Results of computation time.
The applied NWP service is updated daily. To demonstrate that MHSTN is efficient enough to make timely predictions, we report its computation time on the last split of the incremental evaluation, which involves all data. To model a target variable (i.e., \(v\)) over all nine stations, the time costs of all modules are as follows: training the temporal module, the spatial module, the ensemble module, and the covariate selection module takes 64.71 seconds, 51.74 seconds, 5.73 seconds, and 1.75 seconds, respectively.
Figure 7: The prediction results for \(v\) at a station.
For inference, the framework takes 0.21 seconds to produce the predictions for one day, i.e., 24 hours.
## 6 Conclusion
In this paper, we developed a unified Seq2Seq deep learning framework, MHSTN, for fine-grained multi-horizon wind prediction. MHSTN captures the varying characteristics inherent in spatiotemporal weather data and is able to (1) efficiently predict both wind speed and wind direction, (2) effectively fuse locally observed historical information with globally estimated future information, (3) uniformly leverage complicated correlations (including auto-correlation, cross-correlation and spatial correlation), and (4) simultaneously produce accurate multi-horizon predictions at a fine granularity. We constructed a dataset using real-world data from one of the busiest international airports in China and conducted comprehensive experiments. The results demonstrate that the synergy of MHSTN's components enables it to outperform state-of-the-art competitors by a significant margin. In the future, we plan to explore more advanced deep learning techniques to optimize each component of MHSTN and to investigate the data assimilation problem of employing more data sources to further improve prediction performance. In the long run, we believe machine learning techniques should be systematically integrated with traditional numerical frameworks to address the varying trade-offs of short-/long-term and low-/high-resolution wind prediction.
Figure 8: The prediction results for \(\theta\) at a station. |
2302.14685 | DART: Diversify-Aggregate-Repeat Training Improves Generalization of
Neural Networks | Generalization of neural networks is crucial for deploying them safely in the
real world. Common training strategies to improve generalization involve the
use of data augmentations, ensembling and model averaging. In this work, we
first establish a surprisingly simple but strong benchmark for generalization
which utilizes diverse augmentations within a training minibatch, and show that
this can learn a more balanced distribution of features. Further, we propose
Diversify-Aggregate-Repeat Training (DART) strategy that first trains diverse
models using different augmentations (or domains) to explore the loss basin,
and further Aggregates their weights to combine their expertise and obtain
improved generalization. We find that Repeating the step of Aggregation
throughout training improves the overall optimization trajectory and also
ensures that the individual models have a sufficiently low loss barrier to
obtain improved generalization on combining them. We shed light on our approach
by casting it in the framework proposed by Shen et al. and theoretically show
that it indeed generalizes better. In addition to improvements in In- Domain
generalization, we demonstrate SOTA performance on the Domain Generalization
benchmarks in the popular DomainBed framework as well. Our method is generic
and can easily be integrated with several base training algorithms to achieve
performance gains. | Samyak Jain, Sravanti Addepalli, Pawan Sahu, Priyam Dey, R. Venkatesh Babu | 2023-02-28T15:54:47Z | http://arxiv.org/abs/2302.14685v2 | # DART: Diversify-Aggregate-Repeat Training Improves Generalization of Neural Networks
###### Abstract
Generalization of Neural Networks is crucial for deploying them safely in the real world. Common training strategies to improve generalization involve the use of data augmentations, ensembling and model averaging. In this work, we first establish a surprisingly simple but strong benchmark for generalization which utilizes diverse augmentations within a training minibatch, and show that this can learn a more balanced distribution of features. Further, we propose Diversify-Aggregate-Repeat Training (DART) strategy that first trains diverse models using different augmentations (or domains) to explore the loss basin, and further Aggregates their weights to combine their expertise and obtain improved generalization. We find that Repeating the step of Aggregation throughout training improves the overall optimization trajectory and also ensures that the individual models have a sufficiently low loss barrier to obtain improved generalization on combining them. We shed light on our approach by casting it in the framework proposed by Shen et al. [61] and theoretically show that it indeed generalizes better. In addition to improvements in In-Domain generalization, we demonstrate SOTA performance on the Domain Generalization benchmarks in the popular DomainBed framework as well. Our method is generic and can easily be integrated with several base training algorithms to achieve performance gains.
## 1 Introduction
Deep Neural Networks have outperformed classical methods in several fields and applications owing to their remarkable generalization. Classical Machine Learning theory assumes that test data is sampled from the same distribution as train data. This is referred to as the problem of In-Domain (ID) generalization [16, 19, 30, 34, 52], where the goal of the model is to generalize to samples within same domain as the train dataset. This is often considered to be one of the most important requirements and criteria to evaluate models. However, in several cases, the test distribution may be different from the train distribution. For example, surveillance systems are expected to work well at all times of the day, under different lighting conditions and when there are occlusions. However, it may not be possible to train models using data from all these distributions. It is therefore crucial to train models that are robust to distribution shifts, which is popularly referred to as Out-of-Domain (OOD) Generalization [26]. In this work, we consider the problems of In-Domain generalization and Out-of-Domain Generalization of Deep Networks. For the latter case, we consider the popular setting of Domain Generalization [6, 45, 24], where the training data is assumed to be composed of several source domains and the goal is to generalize to an unseen target domain.
The problem of generalization is closely related to the Simplicity Bias of Neural Networks, due to which models have a tendency to rely on simpler features that are often spurious correlations to the labels, when compared to the harder robust features [59]. For example, models tend to rely on weak features such as background, rather than more robust features such as shape, causing a drop in object classification accuracy when background changes [76, 23].
A common strategy to alleviate this is to use data augmentations [10, 11, 12, 28, 46, 56, 79, 81] or data from several domains during training [24], which can result in invariance to several spurious correlations, improving the generalization of models. Shen et al. [61] show that data augmentations enable the model to give higher importance to harder-to-learn robust features by delaying the learning of spurious features. We extend their observation by showing that training on a combination of several augmentation strategies (which we refer to as _Mixed_ augmentation) can result in the learning of a balanced distribution of diverse features. Using this, we obtain a strong benchmark for ID generalization as shown in Table-1. However, as shown in prior works [1], the impact of augmentations in training is limited by the capacity of the network in being able to
generalize well to the diverse augmented data distribution. Therefore, increasing the diversity of training data demands the use of larger model capacities to achieve optimal performance. This demand for higher model capacity can be mitigated by training specialists on each kind of augmentation and ensembling their outputs [13, 40, 83, 63], which results in improved performance as shown in Table-1. Another generic strategy that is known to improve generalization is model-weight averaging [74, 75, 33]. This results in a flatter minima, thereby improving the robustness to distribution shifts.
In this work, we aim to combine the benefits of the three strategies discussed above - diversification, specialization and model weight averaging, while also overcoming their individual shortcomings. We propose a **D**iversify-**A**gregate-**R**epeat **T**raining strategy dubbed DART (Fig.1), that first trains \(k\)_Diverse_ models after a few epochs of common training, and then _Aggregates_ their weights to obtain a single generalized solution. The aggregated model is then used to reinitialize the \(k\) models which are trained further post aggregation. This process is _Repeated_ over training to obtain improved generalization. The _Diversify_ step allows models to explore the loss basin and specialize on a fixed set of features. The _Aggregate_ (or Model Interpolation) step robustly combines these models, increasing the diversity of represented features while also suppressing spurious correlations. Repeating the _Diversify-Aggregate_ steps over training results in a more robust optimization trajectory and also ensures that the \(k\) diverse models remain in the same basin thereby permitting a fruitful combination of their weights. We justify our approach theoretically and empirically, and also show that the intermediate model aggregation not only ensures that the models are in the same basin, but also increases the learning time for spurious features, improving generalization. We present our key contributions below.
* We present a strong baseline termed Mixed-Training (MT) that uses a combination of diverse augmentations for different images in a training minibatch.
* We propose a novel algorithm DART, that learns specialized diverse models and aggregates their weights iteratively to improve generalization.
* We justify our method theoretically, and empirically on several In-Domain (CIFAR-10, CIFAR-100) and Domain Generalization (OfficeHome, PACS, VLCS, TerraIncognita, DomainNet) datasets.
## 2 Background: Mode Connectivity of Models
The overparameterization of Deep networks leads to the existence of multiple optimal solutions to any given loss function [49, 80, 35]. Prior works [50, 22, 15] have shown that all such solutions learned by SGD lie on a non-linear manifold, and are connected to each other by a path of low loss. Frankle _et al_. [20] further showed that converged models that share a common initial optimization path are linearly connected with a low loss barrier. This is referred to as the _linear mode connectivity_ between the models. Several optimal solutions that are linearly connected to each other are said to belong to a common _basin_, which is separated from other regions of the loss landscape by a higher loss barrier. The loss barrier between any two models \(\theta_{1}\) and \(\theta_{2}\) is defined as the maximum loss attained along the linear path between them, i.e., the maximum loss of \(\hat{\theta}=\alpha\cdot\theta_{1}+(1-\alpha)\cdot\theta_{2}\) over \(\alpha\in[0,1]\).
The linear mode connectivity of models facilitates the averaging of weights of different models in a common basin resulting in further gains. In this work, we leverage the linear mode connectivity of diverse models trained from a common initialization to improve generalization.
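As a concrete illustration of these notions, the following PyTorch sketch estimates the loss barrier between two models of identical architecture by evaluating the loss along the linear path between their weights. It is an illustrative implementation, not code from the cited works; in particular, subtleties such as recomputing BatchNorm statistics for interpolated weights are ignored here.

```python
import copy
import torch

@torch.no_grad()
def loss_barrier(model_a, model_b, loss_fn, loader, num_alphas=11):
    """Maximum loss along theta = alpha*theta_a + (1-alpha)*theta_b."""
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    probe = copy.deepcopy(model_a)
    losses = []
    for alpha in torch.linspace(0.0, 1.0, num_alphas):
        # Interpolate floating-point tensors; copy other buffers as-is.
        mixed = {k: alpha * v + (1 - alpha) * sd_b[k] if v.is_floating_point()
                 else v for k, v in sd_a.items()}
        probe.load_state_dict(mixed)
        probe.eval()
        total, count = 0.0, 0
        for x, y in loader:
            total += loss_fn(probe(x), y).item() * len(y)
            count += len(y)
        losses.append(total / count)
    return max(losses)
```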
## 3 Related Works
### Generalization of Deep Networks
Prior works aim to improve the generalization of Deep Networks by imposing invariances to several factors of variation. This is achieved by using data augmentations during training to improve the In-Domain generalization [10, 11, 12, 27, 46, 69, 79, 81], or by training on a combination of multiple domains for Domain Generalization [42, 45, 42, 29, 9].
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{Test} \\ \cline{2-5} Train & No Aug. & Cutout & Cutmix & AutoAugment \\ \hline Pad+Crop+HFlip (PC) & 78.51 & 67.04 & 56.52 & 58.33 \\ Cutout (CO) & 77.99 & 74.58 & 56.12 & 58.47 \\ Cutmix (CM) & 80.54 & 74.05 & **77.35** & 61.23 \\ AutoAugment (AA) & 79.18 & 71.26 & 60.97 & 73.91 \\ Mixed-Training (MT) & 81.43 & 77.31 & 73.20 & **74.73** \\ \hline Ensemble (CM+CO+AA) & **83.61** & **79.19** & 73.19 & 73.90 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Motivation:** Performance (%) on CIFAR100, ResNet-18 with ERM training for 200 epochs across different train and test augmentations. Mixed-Training (MT) outperforms individual augmentations, while ensembles perform best on average.
Figure 1: Schematic Diagram of the proposed method DART
Contrary to these methods, several works focus on utilizing domain-specific features [5, 14], while others disentangle the features into domain-specific and domain-invariant components for better generalization [36, 37, 42, 53, 72]. Data augmentation has also been exploited for Domain Generalization [47, 54, 60, 62, 70, 71, 73, 77, 78, 85, 86] in order to increase the diversity of training data and simulate domain shift. Foret _et al_. [19] show that minimizing the maximum loss within an \(\ell_{2}\) norm ball around the weights can result in a flatter minimum, thereby improving generalization. Gulrajani _et al_. [24] show that the simple strategy of ERM training on data from several source domains can indeed prove to be a very strong baseline for Domain Generalization. The authors also release DomainBed - which benchmarks several existing methods on common datasets representing different types of distribution shifts. We empirically show that our method achieves SOTA on the popular Domain Generalization benchmarks and further improves performance and generalization when used in conjunction with several other methods (Table-4), owing to its orthogonal nature.
### Averaging model weights across training
Recent works have shown that converging to a flatter minimum can lead to improved generalization [16, 19, 30, 34, 52, 64]. Izmailov _et al_. [33] proposed Stochastic Weight Averaging (SWA) to average the model weights from the last few epochs such that the resulting model converges to a flatter minimum, thus improving generalization. A variation of SWA, the Exponential Moving Average (EMA) of model weights across training iterations, is often used to boost model performance at no extra training cost. Cha _et al_. [6] theoretically show that converging to a flatter minimum results in a smaller domain generalization gap. The authors propose SWAD, which overcomes the limitations of SWA in the Domain Generalization setting and combines several models in the optimal solution basin to obtain a flatter minimum with better generalization. We demonstrate that our approach effectively integrates with EMA and SWAD for the In-Domain and Domain Generalization settings respectively to obtain further performance gains (Tables-2, 3).
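For reference, a minimal sketch of the EMA baseline mentioned above is given below; the decay value is a typical choice rather than one specified in the text, and SWA/SWAD differ in which checkpoints they average.

```python
import copy
import torch

class WeightEMA:
    """Exponential moving average of model weights."""
    def __init__(self, model, decay=0.999):   # decay: a typical value
        self.decay = decay
        self.shadow = copy.deepcopy(model).eval()
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):                   # call once per iteration
        for p_ema, p in zip(self.shadow.parameters(), model.parameters()):
            p_ema.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
```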
### Averaging weights of fine-tuned models
While earlier works combined models generated from the same optimization trajectory, Tatro _et al_. [66] showed that for any two converged models with different random initializations, one can find a permutation of one of the models such that fine-tuning the interpolation of the second model and the permuted first model leads to improved generalization. On a similar note, Zhao _et al_. [84] proposed to achieve robustness to backdoor attacks by fine-tuning the linear interpolation of pretrained models. More recently, Wortsman _et al_. [75] proposed Model Soups and showed that in a transfer learning setup, fine-tuning and then averaging different models with the same pre-trained initialization but different hyperparameters such as learning rates, optimizers and augmentations can improve the generalization of the resulting model. The authors further note that this works best when the pre-trained model is trained on a large heterogeneous dataset. While all these approaches work only in a fine-tuning setting, the proposed method incorporates the interpolation of differently trained models in the regime of _training from scratch_, allowing the learning of models over longer schedules and with larger learning rates.
### Averaging weights of differently trained models
Wortsman _et al_. [74] propose to average the weights of multiple models trained simultaneously with different random initializations by considering the loss of the combined model during optimization while performing gradient updates on the individual models. Additionally, they minimize the cosine similarity between model weights to ensure that the learned models are diverse. While this training formulation does learn diverse connected models, it leads to individual models having sub-optimal accuracy (Table-2) since their losses are not optimized directly. DART overcomes such issues since the individual models are trained directly to optimize their respective classification losses. Moreover, the step of intermediate interpolation ensures that the individual models also achieve better performance when compared to the baseline of standard ERM training on the respective augmentations (Fig.8).
## 4 Proposed Method: DART
A series of observations from prior works [15, 20, 22, 50] have led to the conjecture that models trained independently from different initializations could be linearly connected with a low loss barrier when suitable permutations of their weights are considered, suggesting that _all solutions effectively lie in a common basin_ [17].
Figure 2: **Optimization trajectory** of the proposed approach DART when compared to independent ERM training on each augmentation. Axes represent the top two PCA directions obtained using the weights of DART training. The initial common point on the right represents the model obtained after 100 epochs of Mixed Training (MT). The trajectory shown is for an additional 100 epochs, with a total training budget of 200 epochs. The optimization trajectory of DART is better aligned with the negative gradient direction when compared to the baseline.
Motivated by these observations (discussed in Section-2) and the above hypothesis, we aim to design an algorithm that explores the basin of solutions effectively along a robust optimization path and combines the expertise of several diverse models into a single generalized solution.
We show an outline of the proposed approach - _Diversify-Aggregate-Repeat Training_, dubbed DART, in Fig.1. Broadly, the proposed approach is implemented in four steps - i) ERM training for \(E^{\prime}\) epochs in the beginning, followed by ii) Training \(M\)_Diverse_ models for \(\lambda/M\) epochs each, iii) _Aggregating_ their weights, and finally iv) _Repeating_ the steps _Diversify-Aggregate_ for \(E-E^{\prime}\) epochs.
A cosine learning rate schedule is used for training the model for a total of \(E\) epochs with a maximum learning rate of \(\mathrm{LR}_{max}\). We present the implementation of DART in Algorithm-1, and discuss each step in detail below:
1. **Traversing to the _Basin_ of optimal solutions:** Since the goal of the proposed approach is to explore the _basin_ of optimal solutions, the first step is to traverse from a randomly initialized model upto the periphery of this basin. Towards this, the proposed _Mixed-Training_ strategy discussed in Section-1 is performed on a combination of several augmentations \(D^{*}\) for the initial \(E^{\prime}\) epochs (L4-L5 in Alg.1).
2. **Diversify - Exploring the _Basin_:** In this step, \(M\) diverse models \(f_{\theta^{k}}\) initialized from the Mixed-Training model (L8 in Alg.1), are trained using the respective datasets \(D^{k}\) (L10 in Alg.1). These are generated using diverse augmentations in the In-Domain setting, and from a combination of different domains in the Domain Generalization setting. To maintain the same compute as baselines, we set \(|D^{k}|=|D|/M\).
3. **Aggregate - Combining diverse experts:** Owing to the initial common training for \(E^{\prime}\) epochs, the \(k\) diverse models lie in the same basin, enabling an effective aggregation of their weights using simple averaging (L12 in Alg.1) to obtain a more generalized solution \(\theta\). Aggregation is done after every \(\lambda\) epochs.
4. **Repeat:** Next, all \(k\) models are reinitialized using the common model \(\theta\) (L13 of Alg.1), after which the individual models are trained for \(\lambda\) epochs on their respective datasets \(D^{k}\) as discussed in Step-2, and the process continues for a total of \(E-E^{\prime}\) epochs.
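The paper's Algorithm-1 is not reproduced in this excerpt; the following PyTorch-style sketch illustrates the four steps above under the assumption that the caller supplies the data loaders and the helpers `make_optimizer` and `train_one_epoch`, which are hypothetical names introduced here for illustration.

```python
import copy
import torch

def dart_train(model, branch_loaders, mixed_loader, total_epochs,
               warmup_epochs, interp_freq, make_optimizer, train_one_epoch):
    """Sketch of the Diversify-Aggregate-Repeat loop described above."""
    # Step 1: common Mixed-Training to reach the periphery of the basin.
    opt = make_optimizer(model)
    for _ in range(warmup_epochs):
        train_one_epoch(model, mixed_loader, opt)

    epoch = warmup_epochs
    while epoch < total_epochs:
        # Diversify: reinitialize M branches from the common model.
        branches = [copy.deepcopy(model) for _ in branch_loaders]
        opts = [make_optimizer(b) for b in branches]
        for _ in range(interp_freq):  # train each branch for lambda epochs
            for branch, loader, o in zip(branches, branch_loaders, opts):
                train_one_epoch(branch, loader, o)
        epoch += interp_freq
        # Aggregate: average the branch weights (floating-point tensors only).
        with torch.no_grad():
            sds = [b.state_dict() for b in branches]
            avg = {k: (sum(sd[k] for sd in sds) / len(sds))
                   if v.is_floating_point() else v
                   for k, v in model.state_dict().items()}
        model.load_state_dict(avg)  # Repeat from the aggregated model
    return model
```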
**Visualizing the Optimization Trajectory:** We compare the optimization trajectory of the proposed approach DART with independent training on the same augmentations in Fig.2 after a common training of \(E^{\prime}=100\) epochs on Mixed augmentations. We note that the models explore more in the initial phase of training, and lesser thereafter, which is a result of the cosine learning rate schedule and gradient magnitudes. The exploration in the initial phase helps in increasing the diversity of models, thereby improving the robustness to spurious features (as shown in Proposition-3) leading to a better optimization trajectory, while the smaller steps towards the end help in retaining the flatter optima obtained after Aggregation. The process of repeated aggregation also ensures that the models remain close to each other, allowing longer training regimes.
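A sketch of how such a trajectory plot can be produced is given below: flattened weight checkpoints are projected onto their top two PCA directions. This is an assumed reconstruction of the visualization procedure, not the authors' plotting code.

```python
import numpy as np

def trajectory_pca_coords(checkpoints):
    """Project flattened weight vectors (one per checkpoint) onto the
    top-2 principal directions of the trajectory itself."""
    W = np.stack(checkpoints)                  # (num_checkpoints, num_params)
    W_centered = W - W.mean(axis=0)
    # Rows of Vt are principal directions of the centered trajectory.
    _, _, Vt = np.linalg.svd(W_centered, full_matrices=False)
    return W_centered @ Vt[:2].T               # 2-D coordinates to plot
```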
## 5 Theoretical Results
We use the theoretical setup from Shen _et al_. [61] to show that the proposed approach DART achieves robustness to spurious features, thereby improving generalization.
**Preliminaries and Setup:** We consider a binary classification problem with two classes \(\{-1,1\}\). We assume that the dataset contains \(n\) inputs and \(K\) orthonormal robust features which are important for classification and are represented as \(v_{1},v_{2},v_{3},\ldots,v_{K}\), in decreasing order of their frequency in the dataset. Let each input example \(x\) be composed of two patches denoted as \((x_{1},x_{2})\in R^{d\times 2},\) where each patch is characterized as follows: i) **Feature patch**: \(x_{1}=yv_{k^{*}}\), where \(y\) is the target label of \(x\) and \(k^{*}\in[1,K]\), ii) **Noisy patch**: \(x_{2}=\epsilon\) where \(\epsilon\sim\mathcal{N}\left(0,\frac{\sigma^{2}}{d}I_{d}\right)\).
We consider a single layer convolutional neural network consisting of C channels, with \(w=(w_{1},w_{2},w_{3},\ldots,w_{C})\in R^{d\times C}\). The function learned by the neural network (F) is given by \(F(w,x)=\sum\limits_{c=1}^{C}\sum\limits_{p=1}^{2}\phi(w_{c},x_{p})\), where \(\phi\) is the activation function as defined by Shen _et al_. [61].
**Weights learned by an ERM trained model:** Let \(K_{cut}\) denote the number of robust features learned by the model. Following Shen _et al_. [61], we assume the learned weights to be a linear combination of the two types of features present in the dataset as shown below:
\[w=\sum\limits_{k=1}^{K_{cut}}v_{k}+\sum\limits_{k>K_{cut}}y^{(k)}\epsilon^{(k)} \tag{1}\]
**Data Augmentations:** As defined by Shen _et al_. [61], an augmentation \(T_{k}\) can be defined as follows:
\[\forall\ k^{\prime}\ \in\ [1,K],\ \ \ \mathcal{T}_{k}(v_{k^{\prime}})\ =\ v_{((k^{\prime}+k-1)\ mod\ K)+1} \tag{2}\]
Assuming that \(K\) unique augmentation strategies are used (where \(K\) denotes the number of robust patches in the dataset), augmented data is defined as follows:
\[D_{train}^{(aug)}=D_{train}\ \cup\ \mathcal{T}_{1}(D_{train})..\cup\ \mathcal{T}_{K-1}(D_{train}) \tag{3}\]
where \(D_{train}\) is the training dataset. This ensures that each feature patch \(v_{i}\) appears \(n\) times in the dataset, thus making the distribution of all the feature patches uniform.
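To make the setup concrete, the toy sketch below instantiates the feature and noisy patches and the index map of Eq.(2); all dimensions, values, and the random seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, sigma = 16, 4, 0.1
# K orthonormal feature directions v_1, ..., v_K.
V = np.linalg.qr(rng.standard_normal((d, d)))[0][:, :K]

def sample(k_star):
    """One input (x1, x2) with label y: feature patch plus noisy patch."""
    y = rng.choice([-1, 1])
    x1 = y * V[:, k_star - 1]                            # x1 = y * v_{k*}
    x2 = rng.normal(0.0, sigma / np.sqrt(d), size=d)     # x2 ~ N(0, sigma^2/d I)
    return (x1, x2), y

def T(k, k_prime):
    """Feature-index map of Eq. (2), with 1-based indices as in the text."""
    return ((k_prime + k - 1) % K) + 1

print([T(1, kp) for kp in range(1, K + 1)])  # [2, 3, 4, 1]: a cyclic shift
```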
**Weight Averaging in DART:** In the proposed method, we consider that \(m\) models are being independently trained after which their weights are averaged as shown below:
\[w=\frac{1}{m}\sum\limits_{j=1}^{m}\sum\limits_{k=1}^{K_{cut_{j}}}v_{k_{j}}+ \frac{1}{m}\sum\limits_{j=1}^{m}\sum\limits_{k>K_{cut_{j}}}y_{j}^{(k)}\epsilon _{j}^{(k)} \tag{4}\]
Each branch is trained on the dataset \(D_{train}^{(k)}\) defined as:
\[D_{train}^{(k)}=\mathcal{T}_{k}(D_{train}),\ \ k\in[1,m] \tag{5}\]
**Propositions:** In the following propositions, we derive the convergence time for learning robust and noisy features, and compare with the bounds derived by Shen _et al_. [61] in Section-6. The proofs of all propositions are presented in the Section-9.
**Notation:** Let \(f_{\theta}\) denote a neural network obtained by averaging the weights of \(m\) individual models \(f_{\theta}^{k}\), \(k\in[1,m]\) which are represented as shown in Eq.1. \(n\) is the total number of data samples present in the original dataset \(D_{train}\). \(K\) is the number of orthonormal robust features present in the dataset. The weights \(w_{1},w_{2},\ldots,w_{C}\) of each model \(f_{\theta}^{k}\) are initialized as \(w_{c}\sim\mathcal{N}\left(0,\sigma_{0}^{2}I_{d}\right)\ \forall\ c\in[1,C]\), where C is the number of channels present in a single layer of the model. \(\frac{\sigma}{\sqrt{d}}\) is the standard deviation of the noise present in noisy patches, \(q\) is a hyperparameter used to define the activation (Details in the Section-9), where \(q\geq 3\) and \(d\) is the dimension of each feature patch and weight channel \(w_{c}\).
**Proposition 1**.: _The convergence time for learning any feature patch \(v_{i}\ \forall i\in[1,K]\) in at least one channel \(c\in C\) of the weight averaged model \(f_{\theta}\) using the augmentations defined in Eq.5, is given by \(O\left(\frac{K}{\sigma_{0}^{q-2}}\right)\), if \(\frac{\sigma^{q}}{\sqrt{d}}\ll\frac{1}{K}\), \(m=K\)._
**Proposition 2**.: _If the noise patches learned by each \(f_{\theta}^{k}\) are \(i.i.d\). Gaussian random variables \(\sim\mathcal{N}(0,\frac{\sigma^{2}}{d}I_{d})\), then with high probability, the convergence time of learning a noisy patch \(\epsilon^{(j)}\) in at least one channel \(c\in[1,C]\) of the weight averaged model \(f_{\theta}\) is given by \(O\left(\frac{nm}{\sigma_{0}^{q-2}\sigma^{q}}\right)\), if \(d\gg n^{2}\)._
**Proposition 3**.: _If the noise learned by each \(f_{\theta}^{k}\) are \(i.i.d\). Gaussian random variables \(\sim\mathcal{N}\left(0,\frac{\sigma^{2}}{d}I_{d}\right)\), and model weight averaging is performed at epoch \(T\), then the convergence time of learning a noisy patch \(\epsilon^{(j)}\) in at least one channel \(c\in[1,C]\) of the weight averaged model \(f_{\theta}\) is given by \(T+O\left(\frac{nm^{(q-2)}d^{(q-2)/2}}{\sigma^{(2q-2)}}\right)\) if \(d\gg n^{2}\)._
## 6 Analysis on the Theoretical Results
In this section, we present the implications of the theoretical results discussed above. While the setup in Section-5 discussed the existence of only two kinds of patches (feature and noisy), in practice, a combination of these two kinds of patches - termed as Spurious features - could also exist, whose convergence can be derived from the above results.
### Learning Diverse Robust Features
We first show that _using sufficiently diverse data augmentations during training generates a uniform distribution of feature patches, encouraging the learning of diverse and robust features by the network_. We consider the use of \(K\) unique augmentations (where \(K\) is the number of robust patches in the dataset) in Eq.3 which transform each feature patch into a different one using a unique mapping as shown in Eq.2. The mapping in Eq.2 can transform a skewed feature distribution to a more uniform distribution after performing augmentations. This results in \(K_{cut}\) being sufficiently large in Eq.1, which depends on the number of high frequency robust features, thereby encouraging the learning of a more balanced distribution of robust features.
Shen _et al_. showed that the time to learn any feature patch \(v_{k}\) in at least one weight channel \(c\in C\) is given by \(O\left(\frac{1}{\sigma_{0}^{q-2}\rho_{k}}\right)\) if \(\frac{\sigma^{q}}{\sqrt{d}}\ll\rho_{k}\), where \(\rho_{k}\) is the frequency of occurrence of feature patch \(v_{k}\) divided by the total number of occurrences of all feature patches in the dataset. The convergence time for learning feature patches
is thus limited by the one that is least frequent in the input data. Therefore, by making the frequency of occurrence of all feature patches uniform, the convergence time reduces. In Proposition-1, we show that the same holds true even for the proposed method DART where several branches are trained using diverse augmentations and their weights are finally averaged to obtain the final model. This justifies the improvements obtained in Mixed-Training (MT) (Eq.1) and in the proposed approach DART (Eq.4) as shown in Table-2.
### Robustness to Noisy Features
Firstly, the use of diverse augmentations in both Mixed-Training (MT) and DART results in better robustness to noisy features since the value of \(K_{cut}\) in Eq.1 and Eq.4 would be higher, resulting in the learning of more feature patches and suppressing the learning of noisy patches. _The proposed method DART indeed suppresses the learning of noisy patches further, and also increases the convergence time for learning noisy features as shown in Proposition-2_. When the augmentations used in each of the \(m\) individual branches of DART are diverse, the noise learned by each of them can be assumed to be \(i.i.d\). Under this assumption, averaging model weights at the end of training results in a reduction of noise variance, as shown in Eq.4. More formally, we show in Proposition-2 that the _convergence time of noisy patches increases by a factor of \(m\) when compared to ERM training_. We note that this does not hold in the case of averaging model weights obtained during a single optimization trajectory as in SWA [33], EMA or SWAD [6], since the noise learned by models that are close to each other in the optimization trajectory cannot be assumed to be \(i.i.d\).
### Impact of Intermediate Interpolations
We next analyse the impact of averaging the weights of the models at an intermediate epoch-T in addition to the interpolation at the end of training. The individual models are further reinitialized using the weights of the interpolated model as discussed in Algorithm-1. As shown in Proposition-3, averaging the weights of all branches at the intermediate epoch T helps in increasing the convergence time of noisy patches by a factor \(O\left(\frac{\sigma_{0}^{q-2}m^{q-3}d^{(q-2)/2}}{\sigma^{q-2}}\right)\) when compared to the case where models are interpolated only at the end of training as shown in Proposition-2. By assuming that \(q>3\) and \(d\gg n^{2}\) similar to Shen _et al_. [61], the lower bound on this can be written as \(O\left(\frac{\sigma_{0}n}{\sigma}\right)\). We note that in a practical scenario this factor would be greater than 1, demonstrating the increase in convergence time for noisy patches when intermediate interpolation is done.
## 7 Experiments and Results
In this section, we empirically demonstrate the performance gains obtained using the proposed approach DART on In-Domain (ID) and Domain Generalization (DG) datasets. We further attempt to understand the various factors that contribute to the success of DART.
**Dataset Details:** To demonstrate In-Domain generalization, we present results on CIFAR-10 and CIFAR-100 [38], while for Domain Generalization, we present results on the 5 real-world datasets on the DomainBed [24] benchmark - VLCS [18], PACS [42], OfficeHome [68], Terra Incognita [3] and DomainNet [51], which represent several types of domain shifts with different levels of dataset and task complexity. DomainNet is a large scale dataset with 6 domains, 345 categories and roughly 0.6 million images. More details on datasets are presented in Section-12.1.
**Training Details (ID):** We set the number of training epochs to 600 for the In-Domain experiments on CIFAR-10 and CIFAR-100. To enable a fair comparison, we select the best performing configuration amongst 200, 400 and 600 total training epochs for the ERM baselines and Mixed-Training, since they may be prone to overfitting. We use a cosine learning rate schedule with a maximum learning rate of 0.1 and weight decay of 5e-4. SGD with momentum of 0.9 is used for optimization of the cross-entropy (CE) loss. Interpolation frequency (\(\lambda\)) is set to be 50 epochs for CIFAR-100 and 40 epochs for CIFAR-10. We present results on ResNet-18 and WideResNet-28-10 architectures.
**Training Details (DG):** Following the setting in DomainBed [24], we use the Adam [37] optimizer with a fixed learning rate of 5e-5. The number of training iterations is set to 15k for DomainNet (due to its higher complexity) and 10k for all other datasets, with the interpolation frequency set to 1k iterations. ResNet-50 [25] is used as the backbone, initialized with ImageNet [57] pre-trained weights. Best-model selection across training checkpoints is done based on validation results from the train domains themselves, and no subset of the test domain is used. We present further details in Section-12.1.
**SOTA comparison - In Domain (ID) Generalization:** In Table-2, we compare our method against ERM training with several augmentations, and also against the strong Mixed-Training benchmark (MT) obtained by applying either AutoAugment [10], Cutout [12] or Cutmix [79] to every image in the training minibatch uniformly at random.
\begin{table}
\begin{tabular}{l l l} \hline \hline Method & CIFAR-10 & CIFAR-100 \\ \hline ERM+EMA (Pad+Crop+HFlip) & 96.41 & 81.67 \\ ERM+EMA (AutoAugment) & 97.50 & 84.20 \\ ERM+EMA (Cutout) & 97.43 & 82.33 \\ ERM+EMA (Cutmix) & 97.11 & 84.05 \\ Learning Subspaces [74] & 97.46 & 83.91 \\ \hline ERM+EMA (Mixed Training-MT) & 97.69 \(\pm\) 0.19 & 85.57 \(\pm\) 0.13 \\ DART (Ours) & **97.96**\(\pm\) 0.06 & **86.46**\(\pm\) 0.12 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **In-Domain Generalization:** Performance (%) of DART when compared to baselines on WideResNet-28-10 model. Standard deviation for DART and MT are reported across 5 reruns.
We use the same augmentations in DART as well, with each of the 3 branches being trained on one of the augmentations. As discussed in Section-3, the method proposed by Wortsman _et al_. [74] is closest to our approach, and hence we compare with it as well. We utilize Exponential Moving Averaging (EMA) of weights for the ERM baselines and the proposed approach for a fair comparison. On CIFAR-10, we observe gains of 0.19% on using ERM-EMA (Mixed) and an additional 0.27% on using DART. On CIFAR-100, a 1.37% improvement is observed with ERM-EMA (Mixed) and an additional 0.89% with the proposed method DART. The comparison of DART with ERM on ImageNet-1K and fine-grained datasets like Stanford-Cars and CUB200 is shown in Table-6. On ImageNet-1K we observe gains of 0.41% when using RandAugment across all branches of the model and of 0.14% when using Pad-Crop, RandAugment and Cutout for different branches. We observe gains of up to 1.5% on fine-grained datasets.
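For clarity, Mixed-Training itself amounts to the following per-batch rule. This is a simplified sketch: the augmentations are assumed to be per-image callables, whereas Cutmix in practice mixes pairs of images and labels at the batch level.

```python
import random

def mixed_training_minibatch(images, augmentations, rng=random.Random(0)):
    """Each image gets one augmentation drawn uniformly at random."""
    return [rng.choice(augmentations)(img) for img in images]
```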
**SOTA comparison - Domain Generalization:** We present results on the DomainBed [24] datasets in Table-3. Due to lack of space, we compare only with ERM training (performed on data from a mix of all domains) and SWAD [6] in the main paper, and present a thorough comparison with all other baselines in Table-12.3.1 in the Appendix. For the DG experiments, we consider 4 branches (\(M=4\)), with 3 branches being specialists on a given domain and the fourth being trained on a combination of all domains in equal proportion. For the DomainNet dataset, we consider 6 branches due to the presence of more domains. On average, we obtain 2.8% improvement over the ERM baseline without integrating with SWAD, and 1% higher accuracy when compared to SWAD by integrating our approach with it. We further note from Table-4 that DART can be integrated with several base approaches - with and without SWAD - while obtaining substantial gains over the respective baselines. The best performance is achieved by integrating with Mixup, which yields 3.04% better accuracy over ERM without SWAD, and a 1.23% improvement over SWAD by integrating with it. The proposed approach is therefore generic, and can be integrated effectively with several algorithms. As observed in Table-7, we obtain substantial gains (+2.6%) over MIRO [7] when incorporating DART with SWAD and MIRO, using CLIP [55] as the backbone.
**Evaluation without imposing diversity across branches:** While the proposed method imposes diversity across branches by using different augmentations, we show in Table-5 that the method works even without explicitly introducing diversity, by virtue of the randomness introduced by SGD and the different ordering of input samples across models. We obtain an average improvement of 0.9% over the respective baselines, and a maximum improvement of 1.82% using Cutout. This shows that the performance of DART does not hinge on data augmentations, although it achieves further improvements when they are used.
**Accuracy across training epochs:** We show the accuracy across training epochs for the individual branches and the combined model in Fig.4 for two cases - (a) performing
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Stanford-Cars} & \multicolumn{2}{c}{CUB-200} & \multicolumn{2}{c}{ImageNet-1K} \\ \cline{2-7} **Augmentation** & **ERM** & **DART** & **ERM** & **DART** & **ERM** & **DART** \\ \hline
**Single Augmentation** & 81.11 & **99.42** & 35.88 & **99.75** & 78.55 & **78.96** \\
**Mixed Augmentation** & 00.88 & **91.85** & 81.72 & **82.83** & 79.06 & **79.20** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **DART on ImageNet-1K and fine-grained datasets:** Performance (%) of DART compared to ERM on ResNet-50. _Single Augmentation_ uses the same augmentation across all branches (Pad-Crop for the fine-grained datasets), while _Mixed Augmentation_ uses Pad-Crop, RandAugment [11] and Cutout [12] for different branches, with AutoAugment used instead of RandAugment for the fine-grained datasets.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Algorithm** & **Vanilla** & **DART (w/o SWAD)** & **SWAD** & **DART (w/ SWAD)** \\ \hline ERM [67] & 66.5 & 70.31 & 70.60 & **72.28** \\ ARM [82] & 64.8 & 69.24 & 69.75 & **71.31** \\ SAM\({}^{\dagger}\) [19] & 67.4 & 70.39 & 70.26 & **71.55** \\ Cutmix\({}^{\dagger}\) [79] & 67.3 & 70.07 & 71.08 & **71.49** \\ Mixup [73] & 68.1 & 71.14 & 71.15 & **72.38** \\ DANN [21] & 65.9 & 70.32 & 69.46 & **70.85** \\ CDANN [44] & 65.8 & 70.75 & 69.70 & **71.69** \\ SagNet [48] & 68.1 & 70.19 & 70.84 & **71.96** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Combining DART with other DG methods:** OOD performance (%) on OfficeHome (trained using ResNet50) of the proposed method DART coupled with different algorithms, against their vanilla and SWAD counterparts. Numbers marked with \({}^{\dagger}\) were reproduced, while others are from DomainBed.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Backbone** & **MIRO** & **MIRO+SWAD** & **DART** & **DART+MIRO** & **DART+MIRO+SWAD** \\ \hline ResNet-50 (IN-1k init.) & 70.50 & 72.40 & 70.10 & 72.50 & **72.70** \\ ViT-B16 (CLIP init.) & 83.35 & 84.80 & 80.95 & 86.14 & **87.37** \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Integrating DART with MIRO:** OOD performance (%) on OfficeHome of the proposed method DART coupled with MIRO [7] and SWAD [6] on different pretrained backbones.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Algorithm** & **VLCS** & **PACS** & **OfficeHome** & **TerraInc** & **DomainNet** & **Avg** \\ \hline ERM [67] & 77.5 \(\pm\) 0.4 & 85.5 \(\pm\) 0.2 & 66.5 \(\pm\) 0.3 & 46.1 \(\pm\) 1.8 & 40.9 \(\pm\) 0.1 & 63.3 \\ + DART (ours) & 78.5 \(\pm\) 0.7 & 87.3 \(\pm\) 0.5 & 70.1 \(\pm\) 0.2 & 48.7 \(\pm\) 0.8 & 45.8 \(\pm\) 0.6 & 66.1 \\ \hline SWAD [6] & 79.1 \(\pm\) 0.1 & 88.1 \(\pm\) 0.1 & 70.6 \(\pm\) 0.2 & 50.0 \(\pm\) 0.3 & 46.5 \(\pm\) 0.1 & 66.9 \\ + DART (ours) & **80.3 \(\pm\) 0.2** & **88.9 \(\pm\) 0.1** & **71.9 \(\pm\) 0.1** & **51.3 \(\pm\) 0.2** & **47.2** & **67.9** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Domain Generalization:** OOD accuracy(%) of DART trained using ResNet50 compared to the respective baselines on DomainBed datasets. Standard dev. across 3 reruns is reported.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method & Pad+Crop+HFlip & AutoAug. & Cutout & Cutmix & Mixed-Train. \\ \hline ERM & 81.48 & 83.93 & 82.01 & 83.02 & 85.54 \\ ERM + EMA & 81.67 & 84.20 & 82.33 & 84.05 & 85.57 \\ DART (Ours) & **82.31** & **85.02** & **84.15** & **84.72** & **86.13** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **DART using same augmentation across all branches:** Performance (%) of the DART when compared to baselines across different augmentations on CIFAR-100 using WideResNet-28-10 architecture. DART is better than baselines in all cases.
It can be noted from (a) that the interpolations in the initial few epochs have poor accuracy, since the models are not yet in a common basin. Further, as seen in the initial epochs of (a), when the learning rate is high, SGD training on an interpolated model cannot retain the flat solution due to its implicit bias towards solutions that minimize the train loss alone. In contrast, in the later epochs as seen in (b), the improvement obtained after every interpolation is retained. We therefore propose common training for the initial half of the epochs, and split training thereafter.
**Ablation experiments:** We note the following observations from the plots in Fig.3 (a-e):
1. **Effect of Compute:** Using DART, we obtain higher (or similar) performance gains as the number of training epochs increases, whereas the accuracy of ERM+EMA (Mixed) baseline starts reducing after 300 epochs of training. This can be attributed to the increase in convergence time for learning noisy (or spurious) features due to the intermediate aggregations as shown in Proposition-3, which prevents overfitting.
2. **Effect of Interpolation Frequency:** We note that an optimal range of \(\lambda\) or the number of epochs between interpolations is 10 - 80, and we set this value to 50. If there is no interpolation for longer epochs, the models drift apart too much, causing a drop in accuracy.
3. **Effect of Start Epoch:** We note that although the proposed approach works well even if interpolations are done from the beginning, by performing ERM training on mixed augmentations for 300 epochs, we obtain 0.22% improvement. Moreover, since interpolations do not help in the initial part of training as seen in Fig.4 (a), we propose to start this only in the second half.
4. **Effect of Number of branches:** As the number of branches increases, we note an improvement in performance due to higher diversity across branches, leading to more robustness to spurious features and better generalization as shown in Proposition-2.
5. **Effect of Interpolation epochs:** We perform a run with 50 epochs of common training followed by a single interpolation. We use a fixed learning rate and plot accuracy by varying the interpolation epoch. As this value increases, models drift far apart, reducing the accuracy after interpolation. At epoch-500, the accuracy even reaches 0, highlighting the importance of having a low loss barrier between models.
## 8 Conclusion
In this work, we first show that ERM training using a combination of _diverse_ augmentations within a training minibatch can be a strong benchmark for ID generalization, which is only outperformed by ensembling the outputs of individual experts. Motivated by this observation, we present DART - Diversify-Aggregate-Repeat Training, to achieve the benefits of both training diverse experts and combining their expertise throughout training. The proposed algorithm first trains several models on different augmentations (or domains) to learn a _diverse_ set of features, and further _aggregates_ their weights to obtain better generalization. We repeat the steps Diversify-Aggregate several times over training, and show that this makes the optimization trajectory more robust by suppressing the learning of noisy features, while also ensuring a low loss barrier between the individual models to enable their effective aggregation. We justify our approach both theoretically and empirically on several benchmark In-Domain and Domain Generalization datasets, and show that it integrates effectively with several base training algorithms. We hope our work motivates further research on leveraging the linear mode connectivity of models for better generalization.
Figure 4: **Accuracy of DART across training epochs** for CIFAR-100 on WideResNet-28-10 model: Each branch is trained on different augmentations, whose accuracy is also plotted. Model Interpolation is done (a) from the beginning, (b) after 300 epochs. Although model interpolation and reinitialization happens every 50 epochs, interpolated model accuracy is plotted every epoch.
Figure 3: **Ablations on CIFAR-100, WideResNet-28-10**: (a-d) Experiments comparing DART with the Mixed-Training baseline using the standard training settings. (e) Varying the interpolation epoch after 50 epochs of common training using a fixed learning rate of 0.1. |
2309.10975 | SPFQ: A Stochastic Algorithm and Its Error Analysis for Neural Network
Quantization | Quantization is a widely used compression method that effectively reduces
redundancies in over-parameterized neural networks. However, existing
quantization techniques for deep neural networks often lack a comprehensive
error analysis due to the presence of non-convex loss functions and nonlinear
activations. In this paper, we propose a fast stochastic algorithm for
quantizing the weights of fully trained neural networks. Our approach leverages
a greedy path-following mechanism in combination with a stochastic quantizer.
Its computational complexity scales only linearly with the number of weights in
the network, thereby enabling the efficient quantization of large networks.
Importantly, we establish, for the first time, full-network error bounds, under
an infinite alphabet condition and minimal assumptions on the weights and input
data. As an application of this result, we prove that when quantizing a
multi-layer network having Gaussian weights, the relative square quantization
error exhibits a linear decay as the degree of over-parametrization increases.
Furthermore, we demonstrate that it is possible to achieve error bounds
equivalent to those obtained in the infinite alphabet case, using on the order
of a mere $\log\log N$ bits per weight, where $N$ represents the largest number
of neurons in a layer. | Jinjie Zhang, Rayan Saab | 2023-09-20T00:35:16Z | http://arxiv.org/abs/2309.10975v1 | # SPFQ: A Stochastic Algorithm and Its Error Analysis for Neural Network Quantization
###### Abstract.
Quantization is a widely used compression method that effectively reduces redundancies in over-parameterized neural networks. However, existing quantization techniques for deep neural networks often lack a comprehensive error analysis due to the presence of non-convex loss functions and nonlinear activations. In this paper, we propose a fast stochastic algorithm for quantizing the weights of fully trained neural networks. Our approach leverages a greedy path-following mechanism in combination with a stochastic quantizer. Its computational complexity scales only linearly with the number of weights in the network, thereby enabling the efficient quantization of large networks. Importantly, we establish, for the first time, full-network error bounds, under an infinite alphabet condition and minimal assumptions on the weights and input data. As an application of this result, we prove that when quantizing a multi-layer network having Gaussian weights, the relative square quantization error exhibits a linear decay as the degree of over-parametrization increases. Furthermore, we demonstrate that it is possible to achieve error bounds equivalent to those obtained in the infinite alphabet case, using on the order of a mere \(\log\log N\) bits per weight, where \(N\) represents the largest number of neurons in a layer.
## 1. Introduction
Deep neural networks (DNNs) have shown impressive performance in a variety of areas including computer vision and natural language processing among many others. However, highly overparameterized DNNs require a significant amount of memory to store their associated weights, activations, and - during training - gradients. As a result, in recent years, there has been an interest in model compression techniques, including quantization, pruning, knowledge distillation, and low-rank decomposition [27, 11, 6, 14, 15]. Neural network quantization, in particular, utilizes significantly fewer bits to represent the weights of DNNs. This substitution of original, say, 32-bit floating-point operations with more efficient low-bit operations has the potential to significantly reduce memory usage and accelerate inference time while maintaining minimal loss in accuracy. Quantization methods can be categorized into two classes [22]: quantization-aware training and post-training quantization. Quantization-aware training substitutes floating-point weights with low-bit representations during the training process, while post-training quantization quantizes network weights only after the training is complete.
To achieve high-quality empirical results, quantization-aware training methods, such as those in [7, 5, 35, 9, 21, 37, 40], often require significant time for retraining and hyper-parameter tuning using the entire training dataset. This can make them impractical for resource-constrained scenarios. Furthermore, it can be challenging to rigorously analyze the associated error bounds as quantization-aware training is an integer programming problem with a non-convex loss function, making it NP-hard in general. In contrast, post-training quantization algorithms, such as [8, 36, 23, 38, 20, 26, 39, 25, 13], require only a small amount
of training data, and recent research has made strides in obtaining quantization error bounds for some of these algorithms [23, 38, 25] in the context of shallow networks.
In this paper, we focus on this type of network quantization and its theoretical analysis, proposing a fast stochastic quantization technique and obtaining theoretical guarantees on its performance, even in the context of deep networks.
### Related work
In this section, we provide a summary of relevant prior results concerning a specific post-training quantization algorithm, which forms the basis of our present work. To make our discussion more precise, let \(X\in\mathbb{R}^{m\times N_{0}}\) and \(w\in\mathbb{R}^{N_{0}}\) represent the input data and a neuron in a single-layer network, respectively. Our objective is to find a mapping, also known as a _quantizer_, \(\mathcal{Q}:\mathbb{R}^{N_{0}}\to\mathcal{A}^{N_{0}}\), such that \(q=\mathcal{Q}(w)\in\mathcal{A}^{N_{0}}\) minimizes \(\|Xq-Xw\|_{2}\). Even in this simplified context, since \(\mathcal{A}\) is a finite discrete set, this optimization problem is an integer program and therefore NP-hard in general. Nevertheless, if one can obtain good approximate solutions to this optimization problem, with theoretical error guarantees, then those guarantees can be combined with the fact that most neural network activation functions are Lipschitz, to obtain error bounds on entire (single) layers of a neural network.
Recently, Lybrand and Saab [23] proposed and analyzed a greedy algorithm, called _greedy path following quantization_ (GPFQ), to approximately solve the optimization problem outlined above. Their analysis was limited to the ternary alphabet \(\mathcal{A}=\{0,\pm 1\}\) and a single-layer network with Gaussian random input data. Zhang et al. [38] then extended GPFQ to more general input distributions and larger alphabets, and they introduced variations that promoted pruning of weights. Among other results, they proved that if the input data \(X\) is either bounded or drawn from a mixture of Gaussians, then the relative square error of quantizing a generic neuron \(w\) satisfies
\[\frac{\|Xw-Xq\|_{2}^{2}}{\|Xw\|_{2}^{2}}\lesssim\frac{m\log N_{0}}{N_{0}} \tag{1}\]
with high probability. Extensive numerical experiments in [38] also demonstrated that GPFQ, with 4 or 5 bit alphabets, can achieve less than 1% loss in Top-1 and Top-5 accuracy on common neural network architectures. Subsequently, [25] introduced a different algorithm that involves a deterministic preprocessing step on \(w\) that allows quantizing DNNs via _memoryless scalar quantization_ (MSQ) while preserving the same error bound in (1). This algorithm is more computationally intensive than those of [23, 38] but does not require hyper-parameter tuning for selecting the alphabet step-size.
### Contributions and organization
In spite of recent progress in developing computationally efficient algorithms with rigorous theoretical guarantees, all technical proofs in [23, 38, 25] only apply for a single-layer of a neural network with certain assumed input distributions. This limitation naturally comes from the fact that a random input distribution and a deterministic quantizer lead to activations (i.e., outputs of intermediate layers) with dependencies, whose distribution is usually intractable after passing through multiple layers and nonlinearities.
To overcome this main obstacle to obtaining theoretical guarantees for multiple layer neural networks, in Section 2, we propose a new stochastic quantization framework, called stochastic path following quantization (SPFQ), which introduces randomness into the quantizer. We show that SPFQ admits an interpretation as a two-phase algorithm consisting of a data-alignment phase and a quantization phase. This allows us to propose two variants, summarized
in Algorithm 1 and Algorithm 2, which involve different data alignment strategies that are amenable to analysis.
Importantly, our algorithms are fast. For example, SPFQ with approximate data alignment has a computational complexity that only scales linearly in the number of parameters of the neural network. This stands in sharp contrast with quantization algorithms that require solving optimization problems, generally resulting in polynomial complexity in the number of parameters.
In Section 3, we present the first error bounds for quantizing an entire \(L\)-layer neural network \(\Phi\), under an infinite alphabet condition and minimal assumptions on the weights and input data \(X\). To illustrate the use of our results, we show that if the weights of \(\Phi\) are standard Gaussian random variables, then, with high probability, the quantized neural network \(\widetilde{\Phi}\) satisfies
\[\frac{\|\Phi(X)-\widetilde{\Phi}(X)\|_{F}^{2}}{\mathbb{E}_{\Phi}\|\Phi(X)\|_{ F}^{2}}\lesssim\frac{m(\log N_{\max})^{L+1}}{N_{\min}} \tag{2}\]
where we take the expectation \(\mathbb{E}_{\Phi}\) with respect to the weights of \(\Phi\), and \(N_{\min}\), \(N_{\max}\) represent the minimum and maximum layer width of \(\Phi\) respectively. We can regard the relative error bound in (2) as a natural generalization of (1).
In Section 4, we consider the finite alphabet case under the random network hypothesis. Denoting by \(N_{i}\) the number of neurons in the \(i\)-th layer, we show that it suffices to use \(b\leq C\log\log\max\{N_{i-1},N_{i}\}\) bits to quantize the \(i\)-th layer while guaranteeing the same error bounds as in the infinite alphabet case.
It is worth noting that we assume that \(\Phi\) is equipped with ReLU activation functions, i.e. \(\max\{0,x\}\), throughout this paper. This assumption is only made for convenience and concreteness, and we remark that the non-linearities can be replaced by any Lipschitz functions without changing our results, except for the values of constants.
Finally, we empirically test the developed method in Section 5, by quantizing the weights of several neural network architectures that are originally trained for classification tasks on the ImageNet dataset [10]. The experiments show only a minor loss of accuracy compared to unquantized models.
## 2. Stochastic Quantization Algorithm
In this section, we start with the notation that will be used throughout this paper and then introduce our stochastic quantization algorithm, and show that it can be viewed as a two-stage algorithm. This in turn will simplify its analysis.
### Notation and Preliminaries
We denote various positive absolute constants by C, c. We use \(a\lesssim b\) as shorthand for \(a\leq Cb\), and \(a\gtrsim b\) for \(a\geq Cb\). For any matrix \(A\in\mathbb{R}^{m\times n}\), \(\|A\|_{\max}\) denotes \(\max_{i,j}|A_{ij}|\).
#### 2.1.1. Quantization
An \(L\)-layer perceptron, \(\Phi:\mathbb{R}^{N_{0}}\to\mathbb{R}^{N_{L}}\), acts on a vector \(x\in\mathbb{R}^{N_{0}}\) via
\[\Phi(x):=\varphi^{(L)}\circ A^{(L)}\circ\cdots\circ\varphi^{(1)}\circ A^{(1)}(x) \tag{3}\]
where each \(\varphi^{(i)}:\mathbb{R}^{N_{i}}\to\mathbb{R}^{N_{i}}\) is an activation function acting entrywise, and \(A^{(i)}:\mathbb{R}^{N_{i-1}}\to\mathbb{R}^{N_{i}}\) is an affine map given by \(A^{(i)}(z):=W^{(i)\top}z+b^{(i)}\). Here, \(W^{(i)}\in\mathbb{R}^{N_{i-1}\times N_{i}}\) is a weight matrix and \(b^{(i)}\in\mathbb{R}^{N_{i}}\) is a bias vector. Since \(w^{\top}x+b=\langle(w,b),(x,1)\rangle\), the bias term \(b^{(i)}\) can simply be treated as an extra row to the weight matrix \(W^{(i)}\), so we will henceforth ignore it. For
theoretical analysis, we focus on infinite _mid-tread_ alphabets, with step-size \(\delta\), i.e., alphabets of the form
\[\mathcal{A}=\mathcal{A}_{\infty}^{\delta}:=\{\pm k\delta:k\in\mathbb{Z}\} \tag{4}\]
and their finite versions, mid-tread alphabets of the form
\[\mathcal{A}=\mathcal{A}_{K}^{\delta}:=\{\pm k\delta:0\leq k\leq K,k\in\mathbb{Z }\}. \tag{5}\]
Given \(\mathcal{A}=\mathcal{A}_{\infty}^{\delta}\), the associated _stochastic scalar quantizer_ \(\mathcal{Q}_{\mathrm{StocQ}}:\mathbb{R}\to\mathcal{A}\) randomly rounds every \(z\in\mathbb{R}\) to either the left or right endpoint of the interval \([k\delta,(k+1)\delta]\) containing it, in such a way that \(\mathbb{E}(\mathcal{Q}_{\mathrm{StocQ}}(z))=z\). Specifically, we define
\[\mathcal{Q}_{\mathrm{StocQ}}(z):=\begin{cases}\lfloor\frac{z}{\delta}\rfloor \delta&\text{with probability}\,p\\ \big{(}\lfloor\frac{z}{\delta}\rfloor+1\big{)}\delta&\text{with probability}\,1-p \end{cases} \tag{6}\]
where \(p=1-\frac{z}{\delta}+\lfloor\frac{z}{\delta}\rfloor\). If, instead of the infinite alphabet, we use \(\mathcal{A}=\mathcal{A}_{K}^{\delta}\), then \(\mathcal{Q}_{\mathrm{StocQ}}(z)\) is defined via (6) whenever \(|z|\leq K\delta\), and is assigned \(-K\delta\) and \(K\delta\) if \(z<-K\delta\) and \(z>K\delta\) respectively.
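To make the definition concrete, below is a minimal NumPy sketch of \(\mathcal{Q}_{\mathrm{StocQ}}\) applied entrywise. The function name `stoc_quantize` and its interface are our own illustrative choices, not part of the paper; the optional clipping implements the finite-alphabet case \(\mathcal{A}_{K}^{\delta}\).

```python
import numpy as np

def stoc_quantize(z, delta, K=None, rng=np.random.default_rng()):
    """Stochastic scalar quantizer (6): round z down to k*delta with
    probability p = 1 - z/delta + floor(z/delta), else up to (k+1)*delta,
    so that E[Q(z)] = z. If K is given, clip to A_K = {±k*delta : 0<=k<=K}."""
    z = np.asarray(z, dtype=float)
    k = np.floor(z / delta)                       # lower grid index
    p = 1.0 - z / delta + k                       # P(round down), as in (6)
    q = (k + (rng.random(z.shape) >= p)) * delta  # round down or up
    if K is not None:                             # finite alphabet A_K^delta
        q = np.clip(q, -K * delta, K * delta)
    return q

# Unbiasedness check: the empirical mean of many draws approaches z.
z = 0.37
print(np.mean([stoc_quantize(z, delta=0.25) for _ in range(100_000)]))  # ~0.37
```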
#### 2.1.2. Orthogonal projections
Given a subspace \(S\subseteq\mathbb{R}^{m}\), we denote by \(S^{\perp}\) its orthogonal complement in \(\mathbb{R}^{m}\), and by \(P_{S}\) the orthogonal projection of \(\mathbb{R}^{m}\) onto \(S\). In particular, if \(z\in\mathbb{R}^{m}\) is a nonzero vector, then we use \(P_{z}\) and \(P_{z^{\perp}}\) to represent orthogonal projections onto \(\mathrm{span}(z)\) and \(\mathrm{span}(z)^{\perp}\) respectively. Hence, for any \(x\in\mathbb{R}^{m}\), we have
\[P_{z}(x)=\frac{\langle z,x\rangle z}{\|z\|_{2}^{2}},\quad x=P_{z}(x)+P_{z^{\perp }}(x),\quad\text{and}\quad\|x\|_{2}^{2}=\|P_{z}(x)\|_{2}^{2}+\|P_{z^{\perp}}(x )\|_{2}^{2}. \tag{7}\]
Throughout this paper, we will also use \(P_{z}\) and \(P_{z^{\perp}}\) to denote the associated matrix representations satisfying
\[P_{z}x=\frac{zz^{\top}}{\|z\|_{2}^{2}}x\quad\text{and}\quad P_{z^{\perp}}x= \Big{(}I-\frac{zz^{\top}}{\|z\|_{2}^{2}}\Big{)}x. \tag{8}\]
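As a quick numerical illustration of (7) and (8), the following toy check (with vectors of our own choosing) verifies the decomposition, the orthogonality of its parts, and the Pythagorean identity:

```python
import numpy as np

z, x = np.array([3.0, 4.0, 0.0]), np.array([1.0, 2.0, 3.0])
Pz = np.outer(z, z) / (z @ z)      # matrix of P_z, as in (8)
Pperp = np.eye(3) - Pz             # matrix of P_{z^perp}
print(np.allclose(Pz @ x + Pperp @ x, x))        # x = P_z(x) + P_{z^perp}(x)
print(np.isclose((Pz @ x) @ (Pperp @ x), 0.0))   # the two parts are orthogonal
print(np.isclose((Pz @ x) @ (Pz @ x) + (Pperp @ x) @ (Pperp @ x), x @ x))  # (7)
```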
#### 2.1.3. Convex order
We now introduce the concept of _convex order_ (see, e.g., [32]), which will be heavily used in our analysis.
**Definition 2.1**.: Let \(X,Y\) be \(n\)-dimensional random vectors such that
\[\mathbb{E}f(X)\leq\mathbb{E}f(Y) \tag{9}\]
holds for all convex functions \(f:\mathbb{R}^{n}\to\mathbb{R}\), provided the expectations exist. Then \(X\) is said to be smaller than \(Y\) in the _convex order_, denoted by \(X\leq_{\mathrm{cx}}Y\).
For \(i=1,2,\ldots,n\), define functions \(\phi_{i}(x):=x_{i}\) and \(\psi_{i}(x):=-x_{i}\). Since both \(\phi_{i}(x)\) and \(\psi_{i}(x)\) are convex, substituting them into (9) yields \(\mathbb{E}X_{i}=\mathbb{E}Y_{i}\) for all \(i\). Therefore, we obtain
\[X\leq_{\mathrm{cx}}Y\implies\mathbb{E}X=\mathbb{E}Y. \tag{10}\]
Clearly, according to Definition 2.1, \(X\leq_{\mathrm{cx}}Y\) only depends on the respective distributions of \(X\) and \(Y\). It can be easily seen that the relation \(\leq_{\mathrm{cx}}\) satisfies reflexivity and transitivity. In other words, one has \(X\leq_{\mathrm{cx}}X\) and that if \(X\leq_{\mathrm{cx}}Y\) and \(Y\leq_{\mathrm{cx}}Z\), then \(X\leq_{\mathrm{cx}}Z\). The convex order defined in Definition 2.1 is also called _mean-preserving spread_[31, 24], which is a special case of _second-order stochastic dominance_[16, 17, 32], see Appendix A for details.
### SPFQ
We start with a data set \(X\in\mathbb{R}^{m\times N_{0}}\) with (vectorized) data stored as rows and a pretrained neural network \(\Phi\) with weight matrices \(W^{(i)}\in\mathbb{R}^{N_{i-1}\times N_{i}}\) having neurons as their columns. Let \(\Phi^{(i)}\), \(\widetilde{\Phi}^{(i)}\) denote the original and quantized neural networks up to layer \(i\) respectively so that, for example, \(\Phi^{(i)}(x):=\varphi^{(i)}\circ W^{(i)}\circ\cdots\circ\varphi^{(1)}\circ W^ {(1)}(x)\). Assuming the first \(i-1\) layers have been quantized, define the _activations_ from \((i-1)\)-th layer as
\[X^{(i-1)}:=\Phi^{(i-1)}(X)\in\mathbb{R}^{m\times N_{i-1}}\quad\text{and}\quad \widetilde{X}^{(i-1)}:=\widetilde{\Phi}^{(i-1)}(X)\in\mathbb{R}^{m\times N_{i- 1}}, \tag{11}\]
which also serve as input data for the \(i\)-th layer. For each neuron \(w\in\mathbb{R}^{N_{i-1}}\) in layer \(i\), our goal is to construct a quantized vector \(q\in\mathcal{A}^{N_{i-1}}\) such that
\[\widetilde{X}^{(i-1)}q=\sum_{t=1}^{N_{i-1}}q_{t}\widetilde{X}_{t}^{(i-1)} \approx\sum_{t=1}^{N_{i-1}}w_{t}X_{t}^{(i-1)}=X^{(i-1)}w\]
where \(X_{t}^{(i-1)}\), \(\widetilde{X}_{t}^{(i-1)}\) are the \(t\)-th columns of \(X^{(i-1)}\), \(\widetilde{X}^{(i-1)}\). Following the GPFQ scheme in [23, 38], our algorithm selects \(q_{t}\) sequentially, for \(t=1,2,\ldots,N_{i-1}\), so that the approximation error of the \(t\)-th iteration, denoted by
\[u_{t}:=\sum_{j=1}^{t}w_{j}X_{j}^{(i-1)}-\sum_{j=1}^{t}q_{j}\widetilde{X}_{j}^{ (i-1)}\in\mathbb{R}^{m}, \tag{12}\]
is well-controlled in the \(\ell_{2}\) norm. Specifically, assuming that the first \(t-1\) components of \(q\) have been determined, the proposed algorithm maintains the error vector \(u_{t-1}=\sum\limits_{j=1}^{t-1}(w_{j}X_{j}^{(i-1)}-q_{j}\widetilde{X}_{j}^{(i -1)})\), and sets \(q_{t}\in\mathcal{A}\) probabilistically depending on \(u_{t-1}\), \(X_{t}^{(i-1)}\), and \(\widetilde{X}_{t}^{(i-1)}\). Note that (12) implies
\[u_{t}=u_{t-1}+w_{t}X_{t}^{(i-1)}-q_{t}\widetilde{X}_{t}^{(i-1)} \tag{13}\]
and using (7), one can get
\[c^{*} :=\arg\min_{c\in\mathbb{R}}\|u_{t-1}+w_{t}X_{t}^{(i-1)}-c \widetilde{X}_{t}^{(i-1)}\|_{2}^{2}\] \[=\arg\min_{c\in\mathbb{R}}\|P_{\widetilde{X}_{t}^{(i-1)}}(u_{t-1} +w_{t}X_{t}^{(i-1)})-c\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}\] \[=\arg\min_{c\in\mathbb{R}}\left\|\frac{\langle\widetilde{X}_{t}^{ (i-1)},u_{t-1}+w_{t}X_{t}^{(i-1)}\rangle}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2 }}\widetilde{X}_{t}^{(i-1)}-c\widetilde{X}_{t}^{(i-1)}\right\|_{2}^{2}\] \[=\frac{\langle\widetilde{X}_{t}^{(i-1)},u_{t-1}+w_{t}X_{t}^{(i-1) }\rangle}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}.\]
Hence, a natural design of \(q_{t}\in\mathcal{A}\) is to quantize \(c^{*}\). Instead of using a deterministic quantizer as in [23, 38], we apply the stochastic quantizer in (6), that is
\[q_{t}:=\mathcal{Q}_{\text{StocQ}}(c^{*})=\mathcal{Q}_{\text{StocQ}}\Bigg{(} \frac{\langle\widetilde{X}_{t}^{(i-1)},u_{t-1}+w_{t}X_{t}^{(i-1)}\rangle}{\| \widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}\Bigg{)}. \tag{14}\]
Putting everything together, the stochastic version of GPFQ, namely SPFQ in its basic form, can now be expressed as follows.
\[\begin{cases}u_{0}=0\in\mathbb{R}^{m},\\ q_{t}=\mathcal{Q}_{\text{StocQ}}\bigg{(}\frac{\langle\widetilde{X}_{t}^{(i-1)},u_{t-1}+w_{t}X_{t}^{(i-1)}\rangle}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}\bigg{)},\\ u_{t}=u_{t-1}+w_{t}X_{t}^{(i-1)}-q_{t}\widetilde{X}_{t}^{(i-1)}\end{cases} \tag{15}\]
where \(t\) iterates over \(1,2,\ldots,N_{i-1}\). In particular, the final error vector is
\[u_{N_{i-1}}=\sum_{j=1}^{N_{i-1}}w_{j}X_{j}^{(i-1)}-\sum_{j=1}^{N_{i-1}}q_{j} \widetilde{X}_{j}^{(i-1)}=X^{(i-1)}w-\widetilde{X}^{(i-1)}q \tag{16}\]
and our goal is to estimate \(\|u_{N_{i-1}}\|_{2}\).
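For concreteness, here is a short NumPy sketch of the iteration (15) for a single neuron, reusing the illustrative `stoc_quantize` helper from Section 2.1.1; all names are ours.

```python
import numpy as np

def spfq_neuron(X, X_tilde, w, delta, K=None, rng=np.random.default_rng()):
    """Basic SPFQ iteration (15) for one neuron w.
    X, X_tilde: (m, N) activations of the unquantized/quantized networks.
    Returns quantized weights q and the final error u = X @ w - X_tilde @ q."""
    m, N = X.shape
    u, q = np.zeros(m), np.zeros(N)
    for t in range(N):
        xt, xt_t = X[:, t], X_tilde[:, t]
        c = xt_t @ (u + w[t] * xt) / (xt_t @ xt_t)  # the minimizer c* above
        q[t] = stoc_quantize(c, delta, K, rng)
        u = u + w[t] * xt - q[t] * xt_t             # update the error vector
    return q, u
```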
### A two-phase pipeline
An essential observation is that SPFQ in (15) can be equivalently decomposed into two phases.
**Phase I**: Given inputs \(X^{(i-1)}\), \(\widetilde{X}^{(i-1)}\) and neuron \(w\in\mathbb{R}^{N_{i-1}}\) for the \(i\)-th layer, we first align the input data to the layer by finding a real-valued vector \(\widetilde{w}\in\mathbb{R}^{N_{i-1}}\) such that \(\widetilde{X}^{(i-1)}\widetilde{w}\approx X^{(i-1)}w\). Similar to our discussion above (14), we adopt the same sequential selection strategy to obtain each \(\widetilde{w}_{t}\) and deduce the following update rules.
\[\begin{cases}\hat{u}_{0}=0\in\mathbb{R}^{m},\\ \widetilde{w}_{t}=\frac{\langle\widetilde{X}_{t}^{(i-1)},\hat{u}_{t-1}+w_{t}X _{t}^{(i-1)}\rangle}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}},\\ \hat{u}_{t}=\hat{u}_{t-1}+w_{t}X_{t}^{(i-1)}-\widetilde{w}_{t}\widetilde{X}_{t }^{(i-1)}\end{cases} \tag{17}\]
where \(t=1,2\ldots,N_{i-1}\). Note that the approximation error is given by
\[\hat{u}_{N_{i-1}}=X^{(i-1)}w-\widetilde{X}^{(i-1)}\widetilde{w}. \tag{18}\]
**Phase II**: After getting the new weights \(\widetilde{w}\), we quantize \(\widetilde{w}\) using SPFQ with input \(\widetilde{X}^{(i-1)}\), i.e., finding \(\widetilde{q}\in\mathcal{A}^{N_{i-1}}\) such that \(\widetilde{X}^{(i-1)}\widetilde{q}\approx\widetilde{X}^{(i-1)}\widetilde{w}\). This process can be summarized as follows. For \(t=1,2,\ldots,N_{i-1}\),
\[\begin{cases}\widetilde{u}_{0}=0\in\mathbb{R}^{m},\\ \widetilde{q}_{t}=\mathcal{Q}_{\text{StocQ}}\Big{(}\widetilde{w}_{t}+\frac{ \langle\widetilde{X}_{t}^{(i-1)},\widetilde{u}_{t-1}\rangle}{\|\widetilde{X}_{ t}^{(i-1)}\|_{2}^{2}}\Big{)},\\ \widetilde{u}_{t}=\widetilde{u}_{t-1}+(\widetilde{w}_{t}-\widetilde{q}_{t}) \widetilde{X}_{t}^{(i-1)}.\end{cases} \tag{19}\]
Here, the quantization error is
\[\widetilde{u}_{N_{i-1}}=\widetilde{X}^{(i-1)}(\widetilde{w}-\widetilde{q}). \tag{20}\]
**Proposition 2.2**.: _Given inputs \(X^{(i-1)}\), \(\widetilde{X}^{(i-1)}\) and any neuron \(w\in\mathbb{R}^{N_{i-1}}\) for the \(i\)-th layer, the two-phase formulation given by (17) and (19) generates exactly the same result as (15), that is, \(\widetilde{q}=q\), where both formulations use the same draws of the stochastic quantizer \(\mathcal{Q}_{\mathrm{StocQ}}\)._
Proof.: We proceed by induction on the iteration index \(t\). If \(t=1\), then (17), (19) and (15) imply that
\[\widetilde{q}_{1}=\mathcal{Q}_{\text{StocQ}}(\widetilde{w}_{1})=\mathcal{Q}_ {\text{StocQ}}\Big{(}\frac{\langle\widetilde{X}_{1}^{(i-1)},w_{1}X_{1}^{(i-1)} \rangle}{\|\widetilde{X}_{1}^{(i-1)}\|_{2}^{2}}\Big{)}=q_{1}.\]
For \(t\geq 2\), assume \(\widetilde{q}_{j}=q_{j}\) for \(1\leq j\leq t-1\) and we aim to prove \(\widetilde{q}_{t}=q_{t}\). Note that \(\hat{u}_{t-1}=\sum_{j=1}^{t-1}(w_{j}X_{j}-\widetilde{w}_{j}\widetilde{X}_{j})\) and \(\widetilde{u}_{t-1}=\sum_{j=1}^{t-1}(\widetilde{w}_{j}\widetilde{X}_{j}- \widetilde{q}_{j}\widetilde{X}_{j})=\sum_{j=1}^{t-1}(\widetilde{w}_{j} \widetilde{X}_{j}-q_{j}\widetilde{X}_{j})\) by our induction hypothesis. It follows that \(\hat{u}_{t-1}+\widetilde{u}_{t-1}=\sum_{j=1}^{t-1}(w_{j}X_{j}-q_{j}\widetilde {X}_{j})=u_{t-1}\). Thus, we get
\[\widetilde{q}_{t}=\mathcal{Q}_{\text{StocQ}}\Big{(}\frac{\langle\widetilde{X }_{t}^{(i-1)},\widetilde{u}_{t-1}+\hat{u}_{t-1}+w_{t}X_{t}^{(i-1)}\rangle}{\| \widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}\Big{)}=\mathcal{Q}_{\text{StocQ}}\Big{(} \frac{\langle\widetilde{X}_{t}^{(i-1)},u_{t-1}+w_{t}X_{t}^{(i-1)}\rangle}{\| \widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}\Big{)}=q_{t}.\]
This establishes \(\widetilde{q}=q\) and completes the proof.
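Proposition 2.2 can also be checked numerically by coupling the randomness, i.e., seeding both formulations so that they consume identical uniform draws at each step. The sketch below (a toy setup of ours, reusing `spfq_neuron` from above) does exactly that; note that in floating point a tie in the random rounding could in principle break exact equality, though in practice it does not.

```python
import numpy as np

m, N, delta = 50, 200, 0.05
rng = np.random.default_rng(0)
X = rng.standard_normal((m, N))
X_tilde = X + 0.01 * rng.standard_normal((m, N))
w = rng.standard_normal(N)

# One-shot SPFQ (15):
q, _ = spfq_neuron(X, X_tilde, w, delta, rng=np.random.default_rng(1))

# Phase I (17): data alignment producing the "virtual" neuron w_tilde.
u_hat, w_tilde = np.zeros(m), np.zeros(N)
for t in range(N):
    xt, xt_t = X[:, t], X_tilde[:, t]
    w_tilde[t] = xt_t @ (u_hat + w[t] * xt) / (xt_t @ xt_t)
    u_hat = u_hat + w[t] * xt - w_tilde[t] * xt_t

# Phase II (19): with X = X_tilde, the iteration (15) reduces to (19).
q2, _ = spfq_neuron(X_tilde, X_tilde, w_tilde, delta, rng=np.random.default_rng(1))

print(np.array_equal(q, q2))  # True
```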
Based on Proposition 2.2, the quantization error (16) for SPFQ can be split into two parts:
\[u_{N_{i-1}}=X^{(i-1)}w-\widetilde{X}^{(i-1)}q=X^{(i-1)}w-\widetilde{X}^{(i-1) }\widetilde{w}+\widetilde{X}^{(i-1)}(\widetilde{w}-q)=\hat{u}_{N_{i-1}}+ \widetilde{u}_{N_{i-1}}.\]
Here, the first error term \(\hat{u}_{N_{i-1}}\) results from the data alignment in (17) to generate a new "virtual" neuron \(\widetilde{w}\) and the second error term \(\widetilde{u}_{N_{i-1}}\) is due to the quantization in (19). It follows that
\[\|u_{N_{i-1}}\|_{2}=\|\hat{u}_{N_{i-1}}+\widetilde{u}_{N_{i-1}}\|_{2}\leq\| \hat{u}_{N_{i-1}}\|_{2}+\|\widetilde{u}_{N_{i-1}}\|_{2}. \tag{21}\]
Thus, we can bound the quantization error for SPFQ by controlling \(\|\hat{u}_{N_{i-1}}\|_{2}\) and \(\|\widetilde{u}_{N_{i-1}}\|_{2}\).
```
Input: An \(L\)-layer neural network \(\Phi\) with weight matrices \(W^{(i)}\in\mathbb{R}^{N_{i-1}\times N_{i}}\), input data \(X\in\mathbb{R}^{m\times N_{0}}\)
for \(i=1\) to \(L\) do
    Generate \(X^{(i-1)}=\Phi^{(i-1)}(X)\in\mathbb{R}^{m\times N_{i-1}}\) and \(\widetilde{X}^{(i-1)}=\widetilde{\Phi}^{(i-1)}(X)\in\mathbb{R}^{m\times N_{i-1}}\)
    for each column \(w\) of \(W^{(i)}\) do
        Phase I: find a solution \(\widetilde{w}\) to (22)
        Phase II: obtain the quantized neuron \(\widetilde{q}\in\mathcal{A}^{N_{i-1}}\) via (19)
    end for
    Collect the quantized \(i\)-th layer weights \(Q^{(i)}\in\mathcal{A}^{N_{i-1}\times N_{i}}\)
end for
Output: Quantized neural network \(\widetilde{\Phi}\)
```
**Algorithm 1** SPFQ with perfect data alignment
### SPFQ Variants
The two-phase formulation of SPFQ provides a flexible framework that allows for the replacement of one or both phases with alternative algorithms. Here, our focus is on replacing the first, data-alignment, phase in order to eliminate, or substantially reduce, the error associated with this step. Indeed, by exploring alternative approaches, one can improve the error bounds of SPFQ, at the expense of increased computational complexity. Below, we present two such alternatives to Phase I.
In Section 3 we derive an error bound associated with the second phase of SPFQ, namely quantization, which is independent of the reconstructed neuron \(\widetilde{w}\). Thus, to reduce the bound on \(\|u_{N_{i-1}}\|_{2}\) in (21), we can eliminate \(\|\hat{u}_{N_{i-1}}\|_{2}\) by simply choosing \(\widetilde{w}\) with \(\widetilde{X}^{(i-1)}\widetilde{w}=X^{(i-1)}w\). As this system of equations may admit infinitely many solutions, we opt for one with the minimal \(\|\widetilde{w}\|_{\infty}\). This choice is motivated by the fact that smaller weights can be accommodated by smaller quantization alphabets, resulting in bit savings in practical applications. In other
words, we replace Phase I with the optimization problem
\[\begin{split}\min_{\widetilde{w}\in\mathbb{R}^{N_{i-1}}}& \|\widetilde{w}\|_{\infty}\\ \text{s.t.}&\widetilde{X}^{(i-1)}\widetilde{w}=X^{(i- 1)}w.\end{split} \tag{22}\]
It is not hard to see that (22) can be formulated as a linear program and solved via standard linear programming techniques [1]. Alternatively, powerful tools like Cadzow's method [3, 4] can also be used to solve linearly constrained infinity-norm optimization problems like (22). Cadzow's method has computational complexity \(O(m^{2}N_{i-1})\), thus is a factor of \(m\) more expensive than our original approach, but has the advantage of eliminating \(\|\hat{u}_{N_{i-1}}\|_{2}\).
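For illustration, (22) can be handed to an off-the-shelf LP solver via the standard epigraph reformulation: minimize \(s\) over \((\widetilde{w},s)\) subject to \(\widetilde{X}^{(i-1)}\widetilde{w}=X^{(i-1)}w\) and \(-s\leq\widetilde{w}_{t}\leq s\). A sketch using `scipy.optimize.linprog` follows; the function name is ours, and we assume \(m<N_{i-1}\) so that the equality constraints are generically feasible.

```python
import numpy as np
from scipy.optimize import linprog

def align_min_linf(X, X_tilde, w):
    """Solve (22): min ||w_tilde||_inf  s.t.  X_tilde @ w_tilde = X @ w,
    as an LP over the stacked variable (w_tilde, s)."""
    m, N = X.shape
    c = np.r_[np.zeros(N), 1.0]                        # objective: minimize s
    A_eq = np.hstack([X_tilde, np.zeros((m, 1))])      # X_tilde w_tilde = X w
    b_eq = X @ w
    A_ub = np.block([[ np.eye(N), -np.ones((N, 1))],   #  w_tilde_t - s <= 0
                     [-np.eye(N), -np.ones((N, 1))]])  # -w_tilde_t - s <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * N + [(0, None)])
    assert res.success
    return res.x[:N]
```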
With this modification, one then proceeds with Phase II as before. Given a minimum \(\ell_{\infty}\) solution \(\widetilde{w}\) satisfying \(\widetilde{X}^{(i-1)}\widetilde{w}=X^{(i-1)}w\), one can quantize it using (19) and obtain \(\widetilde{q}\in\mathcal{A}^{N_{i-1}}\). In this case, \(\widetilde{q}\) may not be equal to \(q\) in (15) and the quantization error becomes
\[X^{(i-1)}w-\widetilde{X}^{(i-1)}\widetilde{q}=\widetilde{X}^{(i-1)}( \widetilde{w}-\widetilde{q})=\widetilde{u}_{N_{i-1}} \tag{23}\]
where only Phase II is involved. We summarize this version of SPFQ in Algorithm 1.
The second approach we present herein aims to reduce the computational complexity associated with (22). To that end, we generalize the data alignment process in (17) as follows. Let \(r\in\mathbb{Z}^{+}\) and \(w\in\mathbb{R}^{N_{i-1}}\). For \(t=1,2,\ldots,N_{i-1}\), we perform (17) as before. Now, however, for \(t=N_{i-1}+1,N_{i-1}+2,\ldots,rN_{i-1}\), we run
\[\begin{cases}\hat{v}_{t-1}=\hat{u}_{t-1}-w_{t}X_{t}^{(i-1)}+\widetilde{w}_{t} \widetilde{X}_{t}^{(i-1)},\\ \widetilde{w}_{t}=\frac{\langle\widetilde{X}_{t}^{(i-1)},\hat{v}_{t-1}+w_{t}X _{t}^{(i-1)}\rangle}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}},\\ \hat{u}_{t}=\hat{v}_{t-1}+w_{t}X_{t}^{(i-1)}-\widetilde{w}_{t}\widetilde{X}_{ t}^{(i-1)}\end{cases} \tag{24}\]
Here, we use modulo \(N_{i-1}\) indexing for (the subscripts of) \(w,\widetilde{w},X^{(i-1)}\), and \(\widetilde{X}^{(i-1)}\). We call the combination of (17) and (24) the \(r\)-th order data alignment procedure, which costs \(O(rmN_{i-1})\) operations. Applying (19) to the output \(\widetilde{w}\) as before, the quantization error consists of two parts:
\[X^{(i-1)}w-\widetilde{X}^{(i-1)}\widetilde{q}=X^{(i-1)}w-\widetilde{X}^{(i-1 )}\widetilde{w}+\widetilde{X}^{(i-1)}(\widetilde{w}-\widetilde{q})=\hat{u}_{rN _{i-1}}+\widetilde{u}_{N_{i-1}}. \tag{25}\]
This version of SPFQ with order \(r\) is summarized in Algorithm 2. In Section 3, we prove that the data alignment error \(\hat{u}_{rN_{i-1}}=X^{(i-1)}w-\widetilde{X}^{(i-1)}\widetilde{w}\) decays exponentially in order \(r\).
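A NumPy sketch of the \(r\)-th order data alignment, combining the first pass (17) with the correction passes (24), might look as follows (names ours; column indices are taken modulo \(N_{i-1}\) as in the text):

```python
import numpy as np

def align_order_r(X, X_tilde, w, r):
    """r-th order data alignment: one pass of (17), then r-1 passes of (24).
    Returns w_tilde and the residual u_hat = X @ w - X_tilde @ w_tilde."""
    m, N = X.shape
    u_hat, w_tilde = np.zeros(m), np.zeros(N)
    for s in range(r * N):
        t = s % N                        # modulo-N indexing
        xt, xt_t = X[:, t], X_tilde[:, t]
        if s >= N:                       # (24): first remove the old contribution
            u_hat = u_hat - w[t] * xt + w_tilde[t] * xt_t
        w_tilde[t] = xt_t @ (u_hat + w[t] * xt) / (xt_t @ xt_t)
        u_hat = u_hat + w[t] * xt - w_tilde[t] * xt_t
    return w_tilde, u_hat
```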
## 3. Error Bounds for SPFQ with Infinite Alphabets
We can now begin analyzing the errors associated with the above variants of SPFQ. On the one hand, in Algorithm 1, since data is perfectly aligned by solving (22), we only have to bound the quantization error \(\widetilde{u}_{N_{i-1}}\) generated by procedure (19). On the other hand, Algorithm 2 has a faster implementation provided \(r<m\), but introduces an extra error \(\hat{u}_{rN_{i-1}}\) arising from the \(r\)-th order data alignment. Thus, to control the error bounds for this version of SPFQ, we first bound \(\widetilde{u}_{N_{i-1}}\) and \(\hat{u}_{rN_{i-1}}\) appearing in (23) and (25).
**Lemma 3.1** (Quantization error).: _Assuming that the first \(i-1\) layers have been quantized, let \(X^{(i-1)}\), \(\widetilde{X}^{(i-1)}\) be as in (11) and \(w\in\mathbb{R}^{N_{i-1}}\) be the weights associated with a neuron in the \(i\)-th layer, i.e. a column of \(W^{(i)}\in\mathbb{R}^{N_{i-1}\times N_{i}}\). Suppose \(\widetilde{w}\) is either the solution of (22) or
the output of (24). Quantize \(\widetilde{w}\) using (19) with alphabets \(\mathcal{A}=\mathcal{A}_{\infty}^{\delta}\) as in (4). Then, for any \(p\in\mathbb{N}\),_
\[\|\widetilde{u}_{N_{i-1}}\|_{2}\leq\delta\sqrt{2\pi pm\log N_{i-1}}\max_{1\leq j \leq N_{i-1}}\|\widetilde{X}_{j}^{(i-1)}\|_{2} \tag{26}\]
_holds with probability at least \(1-\frac{\sqrt{2}m}{N_{i-1}^{p}}\)._
Proof.: We first show that
\[\widetilde{u}_{t}\leq_{\mathrm{cx}}\mathcal{N}(0,\Sigma_{t}) \tag{27}\]
holds for all \(1\leq t\leq N_{i-1}\), where \(\Sigma_{t}\) is defined recursively as follows
\[\Sigma_{t}:=P_{\widetilde{X}_{t}^{(i-1)\perp}}\Sigma_{t-1}P_{\widetilde{X}_{t }^{(i-1)\perp}}+\frac{\pi\delta^{2}}{2}\widetilde{X}_{t}^{(i-1)}\widetilde{X} _{t}^{(i-1)\top}\quad\text{with}\quad\Sigma_{0}:=0.\]
At the \(t\)-th step of quantizing \(\widetilde{w}\), by (19), we have \(\widetilde{u}_{t}=\widetilde{u}_{t-1}+(\widetilde{w}_{t}-\widetilde{q}_{t}) \widetilde{X}_{t}^{(i-1)}\). Define
\[h_{t}:=\widetilde{u}_{t-1}+\widetilde{w}_{t}\widetilde{X}_{t}^{(i-1)}\quad \text{and}\quad v_{t}:=\frac{\langle\widetilde{X}_{t}^{(i-1)},h_{t}\rangle}{ \|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}. \tag{28}\]
It follows that
\[\widetilde{u}_{t}=h_{t}-\widetilde{q}_{t}\widetilde{X}_{t}^{(i-1)} \tag{29}\]
and (19) implies
\[\widetilde{q}_{t}=\mathcal{Q}_{\mathrm{StocQ}}\Bigg{(}\frac{\langle\widetilde {X}_{t}^{(i-1)},h_{t}\rangle}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}\Bigg{)}= \mathcal{Q}_{\mathrm{StocQ}}(v_{t}). \tag{30}\]
Since \(\mathcal{A}=\mathcal{A}_{\infty}^{\delta}\), \(\mathbb{E}\mathcal{Q}_{\mathrm{StocQ}}(z)=z\) for all \(z\in\mathbb{R}\). Moreover, conditioning on \(\widetilde{u}_{t-1}\) in (28), \(h_{t}\) and \(v_{t}\) are fixed and thus one can get
\[\mathbb{E}(\mathcal{Q}_{\mathrm{StocQ}}(v_{t})|\widetilde{u}_{t-1})=v_{t} \tag{31}\]
and
\[\mathbb{E}(\widetilde{u}_{t}|\widetilde{u}_{t-1}) =\mathbb{E}(h_{t}-\widetilde{q}_{t}\widetilde{X}_{t}^{(i-1)}| \widetilde{u}_{t-1})\] \[=h_{t}-\widetilde{X}_{t}^{(i-1)}\mathbb{E}(\widetilde{q}_{t}| \widetilde{u}_{t-1})\] \[=h_{t}-\widetilde{X}_{t}^{(i-1)}\mathbb{E}(\mathcal{Q}_{\text{ StocQ}}(v_{t})|\widetilde{u}_{t-1})\] \[=h_{t}-v_{t}\widetilde{X}_{t}^{(i-1)}\] \[=h_{t}-\frac{\langle\widetilde{X}_{t}^{(i-1)},h_{t}\rangle}{\| \widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}\widetilde{X}_{t}^{(i-1)}\] \[=\Bigg{(}I-\frac{\widetilde{X}_{t}^{(i-1)}\widetilde{X}_{t}^{(i-1 )\top}}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}\Bigg{)}h_{t}\] \[=P_{\widetilde{X}_{t}^{(i-1)\perp}}(h_{t}).\]
The identity above indicates that the approximation error \(\widetilde{u}_{t}\) can be split into two parts: its conditional mean \(P_{\widetilde{X}_{t}^{(i-1)\perp}}(h_{t})\) and a random perturbation. Specifically, applying (29) and (7), we obtain
\[\widetilde{u}_{t}=P_{\widetilde{X}_{t}^{(i-1)\perp}}(h_{t})+P_{\widetilde{X}_ {t}^{(i-1)}}(h_{t})-\widetilde{q}_{t}\widetilde{X}_{t}^{(i-1)}=P_{\widetilde{X }_{t}^{(i-1)\perp}}(h_{t})+R_{t}\widetilde{X}_{t}^{(i-1)} \tag{32}\]
where
\[R_{t}:=v_{t}-\widetilde{q}_{t}.\]
Further, combining (30) and (31), we have
\[\mathbb{E}(R_{t}|\widetilde{u}_{t-1})=v_{t}-\mathbb{E}(\widetilde{q}_{t}| \widetilde{u}_{t-1})=v_{t}-\mathbb{E}(\mathcal{Q}_{\text{StocQ}}(v_{t})| \widetilde{u}_{t-1})=0\]
and \(|R_{t}|=|v_{t}-\mathcal{Q}_{\text{StocQ}}(v_{t})|\leq\delta\). Lemma A.5 yields that, conditioning on \(\widetilde{u}_{t-1}\),
\[R_{t}\leq_{\text{cx}}\mathcal{N}\Big{(}0,\frac{\pi\delta^{2}}{2}\Big{)}. \tag{33}\]
Now, we are ready to prove (27) by induction on \(t\). When \(t=1\), we have \(h_{1}=\widetilde{w}_{1}\widetilde{X}_{1}^{(i-1)}\). We can deduce from (32) and (33) that \(\widetilde{u}_{1}=P_{\widetilde{X}_{1}^{(i-1)\perp}}(\widetilde{w}_{1} \widetilde{X}_{1}^{(i-1)})+R_{1}\widetilde{X}_{1}^{(i-1)}=R_{1}\widetilde{X}_{ 1}^{(i-1)}\) with \(R_{1}\leq_{\text{cx}}\mathcal{N}\Big{(}0,\frac{\pi\delta^{2}}{2}\Big{)}\). Applying Lemma A.3, we obtain \(\widetilde{u}_{1}\leq_{\text{cx}}\mathcal{N}(0,\Sigma_{1})\). Next, assume that (27) holds for \(t-1\) with \(t\geq 2\). By the induction hypothesis, we have \(\widetilde{u}_{t-1}\leq_{\text{cx}}\mathcal{N}(0,\Sigma_{t-1})\). Using Lemma A.3 again, we get
\[P_{\widetilde{X}_{t}^{(i-1)\perp}}(h_{t}) =P_{\widetilde{X}_{t}^{(i-1)\perp}}(\widetilde{u}_{t-1}+\widetilde {w}_{t}\widetilde{X}_{t}^{(i-1)})\] \[\leq_{\text{cx}}\mathcal{N}\Big{(}P_{\widetilde{X}_{t}^{(i-1) \perp}}(\widetilde{w}_{t}\widetilde{X}_{t}^{(i-1)}),P_{\widetilde{X}_{t}^{(i-1 )\perp}}\Sigma_{t-1}P_{\widetilde{X}_{t}^{(i-1)\perp}}\Big{)}\] \[=\mathcal{N}\Big{(}0,P_{\widetilde{X}_{t}^{(i-1)\perp}}\Sigma_{t- 1}P_{\widetilde{X}_{t}^{(i-1)\perp}}\Big{)}.\]
Additionally, conditioning on \(\widetilde{u}_{t-1}\), (33) implies
\[R_{t}\widetilde{X}_{t}^{(i-1)}\leq_{\text{cx}}\mathcal{N}\Big{(}0,\frac{\pi \delta^{2}}{2}\widetilde{X}_{t}^{(i-1)}\widetilde{X}_{t}^{(i-1)\top}\Big{)}.\]
Then we apply Lemma A.4 to (32) by taking
\[X=P_{\widetilde{X}_{t}^{(i-1)\perp}}(h_{t}),\,Y=\widetilde{u}_{t},\,W=\mathcal{ N}\Big{(}0,P_{\widetilde{X}_{t}^{(i-1)\perp}}\Sigma_{t-1}P_{\widetilde{X}_{t}^{(i-1) \perp}}\Big{)},\,Z=\mathcal{N}\Big{(}0,\frac{\pi\delta^{2}}{2}\widetilde{X}_{ t}^{(i-1)}\widetilde{X}_{t}^{(i-1)\top}\Big{)}.\]
It follows that
\[\widetilde{u}_{t} \leq_{\mathrm{cx}}W+Z\] \[=\mathcal{N}\Big{(}0,P_{\widetilde{X}_{t}^{(i-1)\perp}}\Sigma_{t-1} P_{\widetilde{X}_{t}^{(i-1)\perp}}+\frac{\pi\delta^{2}}{2}\widetilde{X}_{t}^{(i-1)} \widetilde{X}_{t}^{(i-1)\top}\Big{)}\] \[=\mathcal{N}(0,\Sigma_{t}).\]
Here, we used the independence of \(W\) and \(Z\), and the definition of \(\Sigma_{t}\). This establishes inequality (27) showing that \(\widetilde{u}_{t}\) is dominated by \(\mathcal{N}(0,\Sigma_{t})\) in the convex order, where \(\Sigma_{t}\) is defined recursively using orthogonal projections. So it remains to control the covariance matrix \(\Sigma_{t}\). Recall that \(\Sigma_{t}\) is defined as follows.
\[\Sigma_{t}=P_{\widetilde{X}_{t}^{(i-1)\perp}}\Sigma_{t-1}P_{\widetilde{X}_{t}^ {(i-1)\perp}}+\frac{\pi\delta^{2}}{2}\widetilde{X}_{t}^{(i-1)}\widetilde{X}_{ t}^{(i-1)\top}\quad\text{with}\quad\Sigma_{0}=0.\]
Then we apply Lemma B.1 with \(M_{t}=\Sigma_{t}\), \(z_{t}=\widetilde{X}_{t}^{(i-1)}\), and \(\alpha=\frac{\pi\delta^{2}}{2}\), and conclude that \(\Sigma_{t}\preceq\sigma_{t}^{2}I\) with \(\sigma_{t}^{2}=\frac{\pi\delta^{2}}{2}\max_{1\leq j\leq t}\|\widetilde{X}_{j} ^{(i-1)}\|_{2}^{2}\). Note that \(\widetilde{u}_{t}\leq_{\mathrm{cx}}\mathcal{N}(0,\Sigma_{t})\) and, by Lemma A.2, we have \(\mathcal{N}(0,\Sigma_{t})\leq_{\mathrm{cx}}\mathcal{N}(0,\sigma_{t}^{2}I)\). Then we deduce from the transitivity of \(\leq_{\mathrm{cx}}\) that \(\widetilde{u}_{t}\leq_{\mathrm{cx}}\mathcal{N}(0,\sigma_{t}^{2}I)\). It follows from Lemma B.2 that, for \(\gamma\in(0,1]\) and \(1\leq t\leq N_{i-1}\),
\[\mathrm{P}\bigg{(}\|\widetilde{u}_{t}\|_{\infty}\leq 2\sigma_{t}\sqrt{\log( \sqrt{2}m/\gamma)}\bigg{)}\geq 1-\gamma.\]
Picking \(\gamma=\sqrt{2}mN_{i-1}^{-p}\) and \(t=N_{i-1}\),
\[\|\widetilde{u}_{N_{i-1}}\|_{2}\leq\sqrt{m}\|\widetilde{u}_{N_{i-1}}\|_{\infty }\leq 2\sigma_{N_{i-1}}\sqrt{pm\log N_{i-1}}=\delta\sqrt{2\pi pm\log N_{i-1}} \max_{1\leq j\leq N_{i-1}}\|\widetilde{X}_{j}^{(i-1)}\|_{2}\]
holds with probability exceeding \(1-\sqrt{2}mN_{i-1}^{-p}\).
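As a quick sanity check of (26), one can run the quantization phase on Gaussian data with perfect alignment (\(X^{(i-1)}=\widetilde{X}^{(i-1)}\)) and compare the realized error to the bound; this reuses the illustrative `spfq_neuron` sketch from Section 2, with toy parameters of our choosing.

```python
import numpy as np

m, N, delta, p = 50, 500, 0.05, 1
rng = np.random.default_rng(2)
Xt = rng.standard_normal((m, N))
w = rng.standard_normal(N)
_, u = spfq_neuron(Xt, Xt, w, delta, rng=rng)   # perfect alignment case
bound = delta * np.sqrt(2 * np.pi * p * m * np.log(N)) * np.linalg.norm(Xt, axis=0).max()
print(np.linalg.norm(u), bound)                 # realized error vs. the bound (26)
```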
Next, we deduce a closed-form expression for \(\hat{u}_{rN_{i-1}}\) showing that \(\|\hat{u}_{rN_{i-1}}\|_{2}\) decays geometrically in the order \(r\), provided \(\|P^{(i-1)}\|_{2}<1\).
**Lemma 3.2** (Data alignment error).: _Assuming that the first \(i-1\) layers have been quantized, let \(X^{(i-1)}\), \(\widetilde{X}^{(i-1)}\) be as in (11) and let \(w\in\mathbb{R}^{N_{i-1}}\) be a neuron in the \(i\)-th layer, i.e. a column of \(W^{(i)}\in\mathbb{R}^{N_{i-1}\times N_{i}}\). Applying the \(r\)-th order data alignment procedure in (17) and (24), we have_
\[\hat{u}_{N_{i-1}}=\sum_{j=1}^{N_{i-1}}w_{j}P_{\widetilde{X}_{N_{i-1}}^{(i-1) \perp}}\ldots P_{\widetilde{X}_{j+1}^{(i-1)\perp}}P_{\widetilde{X}_{j}^{(i-1) \perp}}(X_{j}^{(i-1)}) \tag{34}\]
_and_
\[\hat{u}_{rN_{i-1}}=(P^{(i-1)})^{r-1}\hat{u}_{N_{i-1}} \tag{35}\]
_where \(P^{(i-1)}:=P_{\widetilde{X}_{N_{i-1}}^{(i-1)\perp}}\ldots P_{\widetilde{X}_{2 }^{(i-1)\perp}}P_{\widetilde{X}_{1}^{(i-1)\perp}}\)._
Proof.: We first prove the following identity by induction on \(t\).
\[\hat{u}_{t}=\sum_{j=1}^{t}w_{j}P_{\widetilde{X}_{t}^{(i-1)\perp}}\ldots P_{ \widetilde{X}_{j+1}^{(i-1)\perp}}P_{\widetilde{X}_{j}^{(i-1)\perp}}(X_{j}^{(i-1 )}),\quad 1\leq t\leq N_{i-1}. \tag{36}\]
By (17), the case \(t=1\) is straightforward, since we have
\[\hat{u}_{1} =w_{1}X_{1}^{(i-1)}-\widetilde{w}_{1}\widetilde{X}_{1}^{(i-1)}\] \[=w_{1}X_{1}^{(i-1)}-\frac{\langle\widetilde{X}_{1}^{(i-1)},w_{1}X _{1}^{(i-1)}\rangle}{\|\widetilde{X}_{1}^{(i-1)}\|_{2}^{2}}\widetilde{X}_{1}^{( i-1)}\] \[=w_{1}X_{1}^{(i-1)}-P_{\widetilde{X}_{1}^{(i-1)}}(w_{1}X_{1}^{(i- 1)})\] \[=w_{1}P_{\widetilde{X}_{1}^{(i-1)\perp}}(X_{1}^{(i-1)})\]
where we apply the properties of orthogonal projections in (7) and (8). For \(2\leq t\leq N_{i-1}\), assume that (36) holds for \(t-1\). Then, by (17), one gets
\[\hat{u}_{t} =\hat{u}_{t-1}+w_{t}X_{t}^{(i-1)}-\widetilde{w}_{t}\widetilde{X}_ {t}^{(i-1)}\] \[=\hat{u}_{t-1}+w_{t}X_{t}^{(i-1)}-\frac{\langle\widetilde{X}_{t} ^{(i-1)},\hat{u}_{t-1}+w_{t}X_{t}^{(i-1)}\rangle}{\|\widetilde{X}_{t}^{(i-1)} \|_{2}^{2}}\widetilde{X}_{t}^{(i-1)}\] \[=\hat{u}_{t-1}+w_{t}X_{t}^{(i-1)}-P_{\widetilde{X}_{t}^{(i-1)}}( \hat{u}_{t-1}+w_{t}X_{t}^{(i-1)})\] \[=P_{\widetilde{X}_{t}^{(i-1)\perp}}(\hat{u}_{t-1}+w_{t}X_{t}^{(i- 1)}).\]
Applying the induction hypothesis, we obtain
\[\hat{u}_{t} =P_{\widetilde{X}_{t}^{(i-1)\perp}}(\hat{u}_{t-1})+w_{t}P_{ \widetilde{X}_{t}^{(i-1)\perp}}(X_{t}^{(i-1)})\] \[=\sum_{j=1}^{t-1}w_{j}P_{\widetilde{X}_{t}^{(i-1)\perp}}\dots P_{ \widetilde{X}_{j+1}^{(i-1)\perp}}P_{\widetilde{X}_{j}^{(i-1)\perp}}(X_{j}^{(i -1)})+w_{t}P_{\widetilde{X}_{t}^{(i-1)\perp}}(X_{t}^{(i-1)})\] \[=\sum_{j=1}^{t}w_{j}P_{\widetilde{X}_{t}^{(i-1)\perp}}\dots P_{ \widetilde{X}_{j+1}^{(i-1)\perp}}P_{\widetilde{X}_{j}^{(i-1)\perp}}(X_{j}^{(i -1)}).\]
This completes the proof of (36). In particular, if \(t=N_{i-1}\), then we obtain (34).
Next, we consider \(\hat{u}_{t}\) when \(t>N_{i-1}\). Plugging \(t=N_{i-1}+1\) into (24), and recalling that our indices (except for \(\hat{u}\)) are modulo \(N_{i-1}\), we have
\[\hat{u}_{N_{i-1}+1}=\hat{u}_{N_{i-1}}+\widetilde{w}_{1}\widetilde{X}_{1}^{(i- 1)}-\frac{\langle\widetilde{X}_{1}^{(i-1)},\hat{u}_{N_{i-1}}+\widetilde{w}_{1} \widetilde{X}_{1}^{(i-1)}\rangle}{\|\widetilde{X}_{1}^{(i-1)}\|_{2}^{2}} \widetilde{X}_{1}^{(i-1)}=P_{\widetilde{X}_{1}^{(i-1)\perp}}(\hat{u}_{N_{i-1}}).\]
Similarly, one can show that \(\hat{u}_{N_{i-1}+2}=P_{\widetilde{X}_{2}^{(i-1)\perp}}(\hat{u}_{N_{i-1}+1})=P_ {\widetilde{X}_{2}^{(i-1)\perp}}P_{\widetilde{X}_{1}^{(i-1)\perp}}\hat{u}_{N_{ i-1}}\). Repeating this argument for all \(N_{i-1}<t\leq rN_{i-1}\), we can derive (35).
Combining Lemma 3.1 and Lemma 3.2, we can derive a recursive relation between the error in the current layer and that of the previous layer.
**Theorem 3.3**.: _Let \(\Phi\) be an L-layer neural network as in (3) where the activation function is \(\varphi^{(i)}(x)=\rho(x):=\max\{0,x\}\) for \(1\leq i\leq L\). Let \(\mathcal{A}=\mathcal{A}_{\infty}^{\delta}\) be as in (4) and \(p\in\mathbb{N}\). \((a)\) If we quantize \(\Phi\) using Algorithm 1, then, for each \(2\leq i\leq L\),_
\[\max_{1\leq j\leq N_{i}}\|X^{(i-1)}W_{j}^{(i)}-\widetilde{X}^{(i- 1)}Q_{j}^{(i)}\|_{2}\leq\delta\sqrt{2\pi pm\log N_{i-1}}\max_{1\leq j\leq N_{ i-1}}\|X_{j}^{(i-1)}\|_{2}\] \[+\delta\sqrt{2\pi pm\log N_{i-1}}\max_{1\leq j\leq N_{i-1}}\|X^ {(i-2)}W_{j}^{(i-1)}-\widetilde{X}^{(i-2)}Q_{j}^{(i-1)}\|_{2}.\]
_holds with probability at least \(1-\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p}}\)._
\((b)\) _If we quantize \(\Phi\) using Algorithm 2, then, for each \(2\leq i\leq L\),_
\[\max_{1\leq j\leq N_{i}}\|X^{(i-1)}W_{j}^{(i)}-\widetilde{X}^{(i-1 )}Q_{j}^{(i)}\|_{2}\leq\delta\sqrt{2\pi pm\log N_{i-1}}\max_{1\leq j\leq N_{i-1 }}\|X_{j}^{(i-1)}\|_{2}\] \[+\Big{(}N_{i-1}\|W^{(i)}\|_{\max}\|P^{(i-1)}\|_{2}^{r-1}+\delta \sqrt{2\pi pm\log N_{i-1}}\Big{)}\max_{1\leq j\leq N_{i-1}}\|X^{(i-2)}W_{j}^{(i -1)}-\widetilde{X}^{(i-2)}Q_{j}^{(i-1)}\|_{2}\]
_holds with probability exceeding \(1-\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p}}\). Here, \(P^{(i-1)}\) is defined in Lemma 3.2._
Proof.: \((a)\) Note that, for each \(1\leq j\leq N_{i}\), the \(j\)-th columns \(W_{j}^{(i)}\) and \(Q_{j}^{(i)}\) represent a neuron and its quantized version respectively. Applying (23) and (26), we obtain
\[\mathrm{P}\Big{(}\|X^{(i-1)}W_{j}^{(i)}-\widetilde{X}^{(i-1)}Q_{j}^{(i)}\|_{2 }\leq\delta\sqrt{2\pi pm\log N_{i-1}}\max_{1\leq j\leq N_{i-1}}\|\widetilde{X} _{j}^{(i-1)}\|_{2}\Big{)}\geq 1-\frac{\sqrt{2}m}{N_{i-1}^{p}}.\]
Taking a union bound over all \(j\),
\[\max_{1\leq j\leq N_{i}}\|X^{(i-1)}W_{j}^{(i)}-\widetilde{X}^{(i-1)}Q_{j}^{(i )}\|_{2}\leq\delta\sqrt{2\pi pm\log N_{i-1}}\max_{1\leq j\leq N_{i-1}}\| \widetilde{X}_{j}^{(i-1)}\|_{2}\]
holds with probability at least \(1-\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p}}\). By the triangle inequality, we have
\[\max_{1\leq j\leq N_{i-1}}\|\widetilde{X}_{j}^{(i-1)}\|_{2} \leq\max_{1\leq j\leq N_{i-1}}\|X_{j}^{(i-1)}\|_{2}+\max_{1\leq j \leq N_{i-1}}\|X_{j}^{(i-1)}-\widetilde{X}_{j}^{(i-1)}\|_{2}\] \[=\max_{1\leq j\leq N_{i-1}}\|X_{j}^{(i-1)}\|_{2}+\max_{1\leq j \leq N_{i-1}}\|\rho(X^{(i-2)}W_{j}^{(i-1)})-\rho(\widetilde{X}^{(i-2)}Q_{j}^{( i-1)})\|_{2} \tag{37}\] \[\leq\max_{1\leq j\leq N_{i-1}}\|X_{j}^{(i-1)}\|_{2}+\max_{1\leq j \leq N_{i-1}}\|X^{(i-2)}W_{j}^{(i-1)}-\widetilde{X}^{(i-2)}Q_{j}^{(i-1)}\|_{2}\]
It follows that, with probability at least \(1-\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p}}\),
\[\max_{1\leq j\leq N_{i}}\|X^{(i-1)}W_{j}^{(i)}-\widetilde{X}^{(i- 1)}Q_{j}^{(i)}\|_{2}\leq\delta\sqrt{2\pi pm\log N_{i-1}}\max_{1\leq j\leq N_{ i-1}}\|X_{j}^{(i-1)}\|_{2}\] \[+\delta\sqrt{2\pi pm\log N_{i-1}}\max_{1\leq j\leq N_{i-1}}\|X^{(i -2)}W_{j}^{(i-1)}-\widetilde{X}^{(i-2)}Q_{j}^{(i-1)}\|_{2}.\]
\((b)\) Applying Lemma 3.2 with \(w=W_{j}^{(i)}\) and using the fact that \(\|P\|_{2}\leq 1\) for any orthogonal projection \(P\), we have
\[\|\hat{u}_{N_{i-1}}\|_{2} =\Big{\|}\sum_{k=1}^{N_{i-1}}W_{kj}^{(i)}P_{\widetilde{X}_{N_{i-1} }^{(i-1)\perp}}\ldots P_{\widetilde{X}_{k+1}^{(i-1)\perp}}P_{\widetilde{X}_{k}^ {(i-1)\perp}}(X_{k}^{(i-1)})\Big{\|}_{2}\] \[\leq\sum_{k=1}^{N_{i-1}}|W_{kj}^{(i)}|\Big{\|}P_{\widetilde{X}_{k} ^{(i-1)\perp}}(X_{k}^{(i-1)})\Big{\|}_{2}\] \[=\sum_{k=1}^{N_{i-1}}|W_{kj}^{(i)}|\Big{\|}P_{\widetilde{X}_{k}^{ (i-1)\perp}}(X_{k}^{(i-1)}-\widetilde{X}_{k}^{(i-1)})\Big{\|}_{2}\] \[\leq N_{i-1}\|W_{j}^{(i)}\|_{\infty}\max_{1\leq j\leq N_{i-1}}\|X _{j}^{(i-1)}-\widetilde{X}_{j}^{(i-1)}\|_{2}\] \[=N_{i-1}\|W_{j}^{(i)}\|_{\infty}\max_{1\leq j\leq N_{i-1}}\|\rho( X^{(i-2)}W_{j}^{(i-1)})-\rho(\widetilde{X}^{(i-2)}Q_{j}^{(i-1)})\|_{2} \tag{38}\] \[\leq N_{i-1}\|W^{(i)}\|_{\max}\max_{1\leq j\leq N_{i-1}}\|X^{(i-2) }W_{j}^{(i-1)}-\widetilde{X}^{(i-2)}Q_{j}^{(i-1)}\|_{2}.\]
Then it follows from (25), (26), (37), and (38) that
\[\|X^{(i-1)}W_{j}^{(i)}-\widetilde{X}^{(i-1)}Q_{j}^{(i)}\|_{2}\] \[\leq\|\hat{u}_{rN_{i-1}}\|_{2}+\|\widetilde{u}_{N_{i-1}}\|_{2}\] \[\leq\|P^{(i-1)}\|_{2}^{r-1}\|\hat{u}_{N_{i-1}}\|_{2}+\delta\sqrt {2\pi pm\log N_{i-1}}\max_{1\leq j\leq N_{i-1}}\|\widetilde{X}_{j}^{(i-1)}\|_{2}\] \[\leq N_{i-1}\|W^{(i)}\|_{\max}\|P^{(i-1)}\|_{2}^{r-1}\max_{1\leq j \leq N_{i-1}}\|X^{(i-2)}W_{j}^{(i-1)}-\widetilde{X}^{(i-2)}Q_{j}^{(i-1)}\|_{2}+ \delta\sqrt{2\pi pm\log N_{i-1}}\] \[\times\Big{(}\max_{1\leq j\leq N_{i-1}}\|X_{j}^{(i-1)}\|_{2}+\max _{1\leq j\leq N_{i-1}}\|X^{(i-2)}W_{j}^{(i-1)}-\widetilde{X}^{(i-2)}Q_{j}^{(i-1 )}\|_{2}\Big{)}\]
holds with probability at least \(1-\sqrt{2}mN_{i-1}^{-p}\). By a union bound over all \(j\), we obtain that
\[\max_{1\leq j\leq N_{i}}\|X^{(i-1)}W_{j}^{(i)}-\widetilde{X}^{(i- 1)}Q_{j}^{(i)}\|_{2}\leq\delta\sqrt{2\pi pm\log N_{i-1}}\max_{1\leq j\leq N_{ i-1}}\|X_{j}^{(i-1)}\|_{2}\] \[+\Big{(}N_{i-1}\|W^{(i)}\|_{\max}\|P^{(i-1)}\|_{2}^{r-1}+\delta \sqrt{2\pi pm\log N_{i-1}}\Big{)}\max_{1\leq j\leq N_{i-1}}\|X^{(i-2)}W_{j}^{(i -1)}-\widetilde{X}^{(i-2)}Q_{j}^{(i-1)}\|_{2}\]
holds with probability exceeding \(1-\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p}}\).
Applying Theorem 3.3 inductively for all layers, one can obtain an error bound for quantizing the whole neural network.
**Corollary 3.4**.: _Let \(\Phi\) be an \(L\)-layer neural network as in (3) where the activation function is \(\varphi^{(i)}(x)=\rho(x):=\max\{0,x\}\) for \(1\leq i\leq L\). Let \(\mathcal{A}=\mathcal{A}_{\infty}^{\delta}\) be as in (4) and \(p\in\mathbb{N}\). \((a)\) If we quantize \(\Phi\) using Algorithm 1, then_
\[\max_{1\leq j\leq N_{L}}\|\Phi(X)_{j}-\widetilde{\Phi}(X)_{j}\|_{2}\leq\sum_{i= 0}^{L-1}(2\pi pm\delta^{2})^{\frac{L-i}{2}}\Big{(}\prod_{k=i}^{L-1}\log N_{k} \Big{)}^{\frac{1}{2}}\max_{1\leq j\leq N_{i}}\|X_{j}^{(i)}\|_{2} \tag{39}\]
_holds with probability at least \(1-\sum_{i=1}^{L}\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p}}\)._
\((b)\) _If we quantize \(\Phi\) using Algorithm 2, then_
\[\max_{1\leq j\leq N_{L}}\|\Phi(X)_{j}-\widetilde{\Phi}(X)_{j}\|_{2}\leq \tag{40}\] \[\sum_{i=0}^{L-1}\delta\sqrt{2\pi pm\log N_{i}}\max_{1\leq j\leq N_{ i}}\|X_{j}^{(i)}\|_{2}\prod_{k=i+1}^{L-1}\Big{(}N_{k}\|W^{(k+1)}\|_{\max}\|P^{(k)} \|_{2}^{r-1}+\delta\sqrt{2\pi pm\log N_{k}}\Big{)}\]
_holds with probability at least \(1-\sum_{i=1}^{L}\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p}}\). Here, \(P^{(k)}=P_{\widetilde{X}_{N_{k}}^{(k)\perp}}\ldots P_{\widetilde{X}_{2}^{(k) \perp}}P_{\widetilde{X}_{1}^{(k)\perp}}\) is defined in Lemma 3.2._
Proof.: (a) For \(1\leq j\leq N_{L}\), by (11), we have
\[\Phi(X)_{j}=X_{j}^{(L)}=\rho(X^{(L-1)}W_{j}^{(L)})\quad\text{and}\quad \widetilde{\Phi}(X)_{j}=\widetilde{X}_{j}^{(L)}=\rho(\widetilde{X}^{(L-1)}Q_ {j}^{(L)})\]
where \(W_{j}^{(L)}\) and \(Q_{j}^{(L)}\) are the \(j\)-th neuron in the \(L\)-th layer and its quantized version respectively. It follows from part (a) of Theorem 3.3 with \(i=L\) that
\[\max_{1\leq j\leq N_{L}}\|\Phi(X)_{j}-\widetilde{\Phi}(X)_{j}\|_ {2}=\max_{1\leq j\leq N_{L}}\|\rho(X^{(L-1)}W_{j}^{(L)})-\rho(\widetilde{X}^{( L-1)}Q_{j}^{(L)})\|_{2}\] \[\leq\max_{1\leq j\leq N_{L}}\|X^{(L-1)}W_{j}^{(L)}-\widetilde{X}^ {(L-1)}Q_{j}^{(L)}\|_{2}\] \[\leq\delta\sqrt{2\pi pm\log N_{L-1}}\max_{1\leq j\leq N_{L-1}}\|X _{j}^{(L-1)}\|_{2}\] \[+\delta\sqrt{2\pi pm\log N_{L-1}}\max_{1\leq j\leq N_{L-1}}\|X^{( L-2)}W_{j}^{(L-1)}-\widetilde{X}^{(L-2)}Q_{j}^{(L-1)}\|_{2}.\]
holds with probability at least \(1-\frac{\sqrt{2}mN_{L}}{N_{L-1}^{p}}\). Moreover, by applying part (a) of Theorem 3.3 with \(i=L-1\) to the result above, we obtain that
\[\max_{1\leq j\leq N_{L}}\|\Phi(X)_{j}-\widetilde{\Phi}(X)_{j}\|_ {2}\leq\delta\sqrt{2\pi pm\log N_{L-1}}\max_{1\leq j\leq N_{L-1}}\|X_{j}^{(L-1 )}\|_{2}+2\pi pm\delta^{2}\] \[\quad\times\sqrt{\log N_{L-1}\log N_{L-2}}\Big{(}\max_{1\leq j \leq N_{L-2}}\|X_{j}^{(L-2)}\|_{2}+\max_{1\leq j\leq N_{L-2}}\|X^{(L-3)}W_{j}^ {(L-2)}-\widetilde{X}^{(L-3)}Q_{j}^{(L-2)}\|_{2}\Big{)}\]
holds with probability at least \(1-\frac{\sqrt{2}mN_{L}}{N_{L-1}^{p}}-\frac{\sqrt{2}mN_{L-1}}{N_{L-2}^{p}}\). Repeating this argument inductively for \(i=L-2,L-3,\ldots,1\), one can derive
\[\max_{1\leq j\leq N_{L}}\|\Phi(X)_{j}-\widetilde{\Phi}(X)_{j}\|_{2}\leq\sum_{i =0}^{L-1}(2\pi pm\delta^{2})^{\frac{L-i}{2}}\Big{(}\prod_{k=i}^{L-1}\log N_{k} \Big{)}^{\frac{1}{2}}\max_{1\leq j\leq N_{i}}\|X_{j}^{(i)}\|_{2}\]
with probability at least \(1-\sum_{i=1}^{L}\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p}}\).
(b) The proof of (40) is similar to that of part (a), except that we now use part (b) of Theorem 3.3. Indeed, for the case \(i=L\),
\[\max_{1\leq j\leq N_{L}}\|\Phi(X)_{j}-\widetilde{\Phi}(X)_{j}\|_{2} =\max_{1\leq j\leq N_{L}}\|\rho(X^{(L-1)}W_{j}^{(L)})-\rho(\widetilde{X}^{(L-1) }Q_{j}^{(L)})\|_{2}\] \[\leq\max_{1\leq j\leq N_{L}}\|X^{(L-1)}W_{j}^{(L)}-\widetilde{X}^ {(L-1)}Q_{j}^{(L)}\|_{2}\] \[\leq\delta\sqrt{2\pi pm\log N_{L-1}}\max_{1\leq j\leq N_{L-1}}\|X _{j}^{(L-1)}\|_{2}+\Big{(}N_{L-1}\|W^{(L)}\|_{\max}\|P^{(L-1)}\|_{2}^{r-1}\] \[+\delta\sqrt{2\pi pm\log N_{L-1}}\Big{)}\max_{1\leq j\leq N_{L-1} }\|X^{(L-2)}W_{j}^{(L-1)}-\widetilde{X}^{(L-2)}Q_{j}^{(L-1)}\|_{2}\]
holds with probability exceeding \(1-\frac{\sqrt{2}mN_{L}}{N_{L-1}^{p}}\). Then (40) follows by inductively using part (b) of Theorem 3.3 with \(i=L-1,L-2,\ldots,1\).
**Remarks on the error bounds.** A few comments are in order regarding the error bounds associated with Corollary 3.4. First, let us consider the difference between the error bounds (39) and (40). As (40) deals with imperfect data alignment, it involves a term that bounds the mismatch between the quantized and unquantized networks. This term is controlled by the quantity \(\|P^{(k)}\|_{2}^{r-1}\), which is expected to be small when the order \(r\) is sufficiently large provided \(\|P^{(k)}\|_{2}<1\). In other words, one expects this term to be dominated by the error due to quantization. To get a sense for whether this intuition is valid, consider the case where \(\widetilde{X}_{1}^{(k)},\widetilde{X}_{2}^{(k)},\ldots,\widetilde{X}_{N_{k}}^ {(k)}\) are i.i.d. standard Gaussian vectors. Then Lemma B.3 implies that, with high probability,
\[\|P^{(k)}\|_{2}^{r-1}\lesssim\Big{(}1-\frac{c}{m}\Big{)}^{\frac{(r-1)N_{k}}{1 0}}=\Big{[}\Big{(}1-\frac{c}{m}\Big{)}^{-\frac{m}{c}}\Big{]}^{-\frac{c(r-1)N_{k}}{10m}}\leq e^{-\frac{c(r-1)N_{k}}{10m}}\]
where \(c>0\) is a constant. In this case, \(\|P^{(k)}\|_{2}^{r-1}\) decays exponentially in \(r\), with a favorable dependence on the overparametrization ratio \(\frac{N_{k}}{m}\). In other words, here, even with a small order \(r\), the error bounds in (39) and (40) are quite similar.
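This decay is easy to observe numerically: the sketch below forms the product of projections \(P^{(k)}\) for i.i.d. Gaussian columns and estimates its spectral norm (a toy setup of ours, meant as an illustration rather than a verification of Lemma B.3).

```python
import numpy as np

m, N = 20, 200
rng = np.random.default_rng(3)
Xt = rng.standard_normal((m, N))   # columns play the role of the X~_t^{(k)}
P = np.eye(m)
for t in range(N):                 # P^{(k)} = P_{x_N^perp} ... P_{x_1^perp}
    x = Xt[:, t]
    P = (np.eye(m) - np.outer(x, x) / (x @ x)) @ P
print(np.linalg.norm(P, 2))        # typically far below 1 when N >> m
```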
Keeping this in mind, our next objective is to assess the quality of these error bounds. We will accomplish this by examining the _relative error_ connected to the quantization of a neural network. Specifically, we will concentrate on evaluating the relative error associated with (39) since a similar derivation can be applied to (40).
We begin with the observation that both absolute error bounds (39) and (40) in Corollary 3.4 only involve randomness due to the stochastic quantizer \(\mathcal{Q}_{\mathrm{StocQ}}\). In particular, there is no randomness assumption on either the weights or the activations. However, to evaluate the relative error, we suppose that each \(W^{(i)}\in\mathbb{R}^{N_{i-1}\times N_{i}}\) has i.i.d. \(\mathcal{N}(0,1)\) entries and \(\{W^{(i)}\}_{i=1}^{L}\) are independent. One needs to make an assumption of this type in order to facilitate the calculation, and more importantly, to avoid adversarial scenarios where the weights are chosen to be in the null-space of the data matrix \(\widetilde{X}^{(i)}\). We obtain the following corollary which shows that the relative error decays with the overparametrization of the neural network.
**Corollary 3.5**.: _Let \(\Phi\) be an \(L\)-layer neural network as in (3) where the activation function is \(\varphi^{(i)}(x)=\rho(x):=\max\{0,x\}\) for \(1\leq i\leq L\). Suppose the weight matrix \(W^{(i)}\) has i.i.d. \(\mathcal{N}(0,1)\) entries and \(\{W^{(i)}\}_{i=1}^{L}\) are independent. Let \(X\in\mathbb{R}^{m\times N_{0}}\) be the input data and \(X^{(i)}=\Phi^{(i)}(X)\in\mathbb{R}^{m\times N_{i}}\) be the output of the \(i\)-th layer defined in (11). Then the following inequalities hold._
\((a)\) _Let \(p\in\mathbb{N}\) with \(p\geq 2\). For \(1\leq i\leq L\),_
\[\max_{1\leq j\leq N_{i}}\|X_{j}^{(i)}\|_{2}\leq(4p)^{\frac{i}{2}}\Bigl{(}\prod_{ k=1}^{i-1}N_{k}\Bigr{)}^{\frac{1}{2}}\Bigl{(}\prod_{k=0}^{i-1}\log N_{k}\Bigr{)}^{ \frac{1}{2}}\|X\|_{F} \tag{41}\]
_holds with probability at least \(1-\sum_{k=1}^{i}\frac{2N_{k}}{N_{k-1}^{p}}\)._
\((b)\) _For \(1\leq i\leq L\), we have_
\[\mathbb{E}_{\Phi}\|X^{(i)}\|_{F}^{2}\geq\frac{\|X\|_{F}^{2}}{(2\pi)^{i}}\prod_ {k=1}^{i}N_{k} \tag{42}\]
_where \(\mathbb{E}_{\Phi}\) denotes the expectation with respect to the weights of \(\Phi\), that is \(\{W^{(i)}\}_{i=1}^{L}\)._
Proof.: \((a)\) Conditioning on \(X^{(i-1)}\), the function \(f(z):=\|\rho(X^{(i-1)}z)\|_{2}\) is Lipschitz with Lipschitz constant \(L_{f}:=\|X^{(i-1)}\|_{2}\leq\|X^{(i-1)}\|_{F}\) and \(\|X_{j}^{(i)}\|_{2}=\|\rho(X^{(i-1)}W_{j}^{(i)})\|_{2}=f(W_{j}^{(i)})\) with \(W_{j}^{(i)}\sim\mathcal{N}(0,I)\). Applying Lemma B.4 to \(f\) with \(X=W_{j}^{(i)}\), Lipschitz constant \(L_{f}\), and \(\alpha=\sqrt{2p\log N_{i-1}}\|X^{(i-1)}\|_{F}\), we obtain
\[\mathrm{P}\Bigl{(}\|X_{j}^{(i)}\|_{2}-\mathbb{E}(\|X_{j}^{(i)}\|_{2}\mid X^{( i-1)})\bigr{|}\leq\sqrt{2p\log N_{i-1}}\|X^{(i-1)}\|_{F}\,\Big{|}\,X^{(i-1)} \Bigr{)}\geq 1-\frac{2}{N_{i-1}^{p}}. \tag{43}\]
Using Jensen's inequality and the identity \(\mathbb{E}(\|\rho(X^{(i-1)}W_{j}^{(i)})\|_{2}^{2}\mid X^{(i-1)})=\frac{1}{2}\| X^{(i-1)}\|_{F}^{2}\), we have
\[\mathbb{E}(\|X_{j}^{(i)}\|_{2}\mid X^{(i-1)}) \leq\Bigl{(}\mathbb{E}(\|X_{j}^{(i)}\|_{2}^{2}\mid X^{(i-1)}) \Bigr{)}^{\frac{1}{2}}\] \[=\Bigl{(}\mathbb{E}(\|\rho(X^{(i-1)}W_{j}^{(i)})\|_{2}^{2}\mid X^ {(i-1)})\Bigr{)}^{\frac{1}{2}}\] \[=\frac{1}{\sqrt{2}}\|X^{(i-1)}\|_{F}.\]
It follows from the inequality above and (43) that, conditioning on \(X^{(i-1)}\),
\[\|X_{j}^{(i)}\|_{2}\leq\Bigl{(}\frac{1}{\sqrt{2}}+\sqrt{2p\log N_{i-1}}\Bigr{)} \|X^{(i-1)}\|_{F}\leq 2\sqrt{p\log N_{i-1}}\|X^{(i-1)}\|_{F}\]
holds with probability at least \(1-\frac{2}{N_{i-1}^{p}}\). Conditioning on \(X^{(i-1)}\) and taking a union bound over \(1\leq j\leq N_{i}\), with probability exceeding \(1-\frac{2N_{i}}{N_{i-1}^{p}}\), we have
\[\|X^{(i)}\|_{F}\leq\sqrt{N_{i}}\max_{1\leq j\leq N_{i}}\|X_{j}^{(i)}\|_{2}\leq 2 \sqrt{pN_{i}\log N_{i-1}}\|X^{(i-1)}\|_{F}. \tag{44}\]
Applying (44) for indices \(i,i-1,\ldots,1\) recursively, we obtain (41).
\((b)\) Applying Jensen's inequality and Proposition B.5, we have
\[\mathbb{E}(\|X_{j}^{(i)}\|_{2}^{2}\mid X^{(i-1)}) =\mathbb{E}(\|\rho(X^{(i-1)}W_{j}^{(i)})\|_{2}^{2}\mid X^{(i-1)})\] \[\geq\Big{(}\mathbb{E}(\|\rho(X^{(i-1)}W_{j}^{(i)})\|_{2}\mid X^{(i -1)})\Big{)}^{2}\] \[\geq\frac{\operatorname{tr}(X^{(i-1)}X^{(i-1)\top})}{2\pi}\] \[=\frac{\|X^{(i-1)}\|_{F}^{2}}{2\pi}.\]
By the law of total expectation, we obtain \(\mathbb{E}_{\Phi}\|X_{j}^{(i)}\|_{2}^{2}\geq\frac{1}{2\pi}\mathbb{E}_{\Phi}\|X ^{(i-1)}\|_{F}^{2}\) and thus
\[\mathbb{E}_{\Phi}\|X^{(i)}\|_{F}^{2}=\sum_{j=1}^{N_{i}}\mathbb{E}_{\Phi}\|X_{j }^{(i)}\|_{2}^{2}\geq\frac{N_{i}}{2\pi}\mathbb{E}_{\Phi}\|X^{(i-1)}\|_{F}^{2}. \tag{45}\]
Then (42) follows immediately by applying (45) recursively.
Now we are ready to evaluate the relative error associated with (39). It follows from (39) and the Cauchy-Schwarz inequality that, with high probability,
\[\frac{\|\Phi(X)-\widetilde{\Phi}(X)\|_{F}^{2}}{\mathbb{E}_{\Phi} \|\Phi(X)\|_{F}^{2}} \leq\frac{N_{L}\max_{1\leq j\leq N_{L}}\|\Phi(X)_{j}-\widetilde{ \Phi}(X)_{j}\|_{2}^{2}}{\mathbb{E}_{\Phi}\|\Phi(X)\|_{F}^{2}}\] \[\leq\frac{N_{L}}{\mathbb{E}_{\Phi}\|\Phi(X)\|_{F}^{2}}\Big{(} \sum_{i=0}^{L-1}(2\pi pm\delta^{2})^{\frac{L-i}{2}}\Big{(}\prod_{k=i}^{L-1} \log N_{k}\Big{)}^{\frac{1}{2}}\max_{1\leq j\leq N_{i}}\|X_{j}^{(i)}\|_{2} \Big{)}^{2} \tag{46}\] \[\leq\frac{LN_{L}}{\mathbb{E}_{\Phi}\|\Phi(X)\|_{F}^{2}}\sum_{i=0 }^{L-1}(2\pi pm\delta^{2})^{L-i}\Big{(}\prod_{k=i}^{L-1}\log N_{k}\Big{)}\max _{1\leq j\leq N_{i}}\|X_{j}^{(i)}\|_{2}^{2}.\]
By Corollary 3.5, \(\max_{1\leq j\leq N_{i}}\|X_{j}^{(i)}\|_{2}^{2}\leq(4p)^{i}\|X\|_{F}^{2}\log N _{0}\prod_{k=1}^{i-1}(N_{k}\log N_{k})\) with high probability, and \(\mathbb{E}_{\Phi}\|\Phi(X)\|_{F}^{2}=\mathbb{E}_{\Phi}\|X^{(L)}\|_{F}^{2}\geq \frac{\|X\|_{F}^{2}}{(2\pi)^{L}}\prod_{k=1}^{L}N_{k}.\) Plugging these results into (46),
\[\frac{\|\Phi(X)-\widetilde{\Phi}(X)\|_{F}^{2}}{\mathbb{E}_{\Phi} \|\Phi(X)\|_{F}^{2}} \leq L(2\pi)^{L}\Big{(}\prod_{k=0}^{L}\log N_{k}\Big{)}\sum_{i=0 }^{L-1}\frac{(2\pi pm\delta^{2})^{L-i}(4p)^{i}}{\prod_{k=i}^{L-1}N_{k}} \tag{47}\] \[\lesssim\Big{(}\prod_{k=0}^{L}\log N_{k}\Big{)}\sum_{i=0}^{L-1} \prod_{k=i}^{L-1}\frac{m}{N_{k}}\]
gives an upper bound on the relative error of quantization method in Algorithm 1. Further, if we assume \(N_{\min}\leq N_{i}\leq N_{\max}\) for all \(i\), and \(2m\leq N_{\min}\), then (47) becomes
\[\frac{\|\Phi(X)-\widetilde{\Phi}(X)\|_{F}^{2}}{\mathbb{E}_{\Phi} \|\Phi(X)\|_{F}^{2}} \lesssim(\log N_{\max})^{L+1}\sum_{i=0}^{L-1}\Big{(}\frac{m}{N_{ \min}}\Big{)}^{L-i}\] \[\lesssim\frac{m(\log N_{\max})^{L+1}}{N_{\min}}.\]
This high probability estimate indicates that the squared error resulting from quantization decays with the overparametrization of the network, relative to the expected squared norm of the neural network's output. It may be possible to replace the expected squared norm by
the squared norm itself using another high probability estimate. However, we refrain from doing so as the main objective of this computation was to gain insight into the decay of the relative error in generic settings and the expectation suffices for that purpose.
## 4. Error Bounds for SPFQ with Finite Alphabets
Our goal for this section is to relax the assumption that the quantization alphabet used in our algorithms is infinite. We would also like to evaluate the number of elements \(2K\) in our alphabet, and thus the number of bits \(b:=\log_{2}(K)+1\) needed for quantizing each layer. Moreover, for simplicity, here we will only consider Algorithm 1. In this setting, to use a finite quantization alphabet, and still obtain theoretical error bounds, we must guarantee that the argument of the stochastic quantizer in (19) remains smaller than the maximal element in the alphabet. Indeed, if that is the case for all \(t=1,\ldots,N_{i-1}\), then the error bound for our finite alphabet would be identical to the one for the infinite alphabet. It remains to determine the right size of such a finite alphabet. To that end, we start with Theorem 4.1, which assumes boundedness of all the aligned weights \(\widetilde{w}\) in the \(i\)-th layer, i.e., the solutions of (22), in order to generate an error bound for a finite alphabet of size \(K^{(i)}\gtrsim\sqrt{\log\max\{N_{i-1},N_{i}\}}\).
**Theorem 4.1**.: _Assuming that the first \(i-1\) layers have been quantized, let \(X^{(i-1)}\), \(\widetilde{X}^{(i-1)}\) be as in (11). Let \(p,K^{(i)}\in\mathbb{N}\) and \(\delta>0\) satisfying \(p\geq 3\). Suppose we quantize \(W^{(i)}\) using Algorithm 1 with \(\mathcal{A}=\mathcal{A}_{K^{(i)}}^{\delta}\) and suppose the resulting aligned weights \(\widetilde{W}^{(i)}\) from solving (22) satisfy_
\[\|\widetilde{W}^{(i)}\|_{\max}\leq\frac{1}{2}K^{(i)}\delta. \tag{48}\]
_Then_
\[\max_{1\leq j\leq N_{i}}\|X^{(i-1)}W_{j}^{(i)}-\widetilde{X}^{(i-1)}Q_{j}^{(i) }\|_{2}\leq\delta\sqrt{2\pi pm\log N_{i-1}}\max_{1\leq j\leq N_{i-1}}\| \widetilde{X}_{j}^{(i-1)}\|_{2} \tag{49}\]
_holds with probability at least \(1-\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p}}-\sqrt{2}N_{i}\sum\limits_{t=2}^{N_{i-1}}\exp\Bigl{(}-\frac{(K^{(i)})^{2}\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}{8\pi \max\limits_{1\leq j\leq t-1}\|\widetilde{X}_{j}^{(i-1)}\|_{2}^{2}}\Bigr{)}\)._
Proof.: Fix a neuron \(w:=W_{j}^{(i)}\in\mathbb{R}^{N_{i-1}}\) for some \(1\leq j\leq N_{i}\). By our assumption (48), the aligned weights \(\widetilde{w}\) satisfy \(\|\widetilde{w}\|_{\infty}\leq\frac{1}{2}K^{(i)}\delta\). Then, we perform the iteration (19) in Algorithm 1. At the \(t\)-th step, similar to (28), (30), and (32), we have
\[\widetilde{u}_{t}=P_{\widetilde{X}_{t}^{(i-1)\perp}}(h_{t})+(v_{t}-\widetilde {q}_{t})\widetilde{X}_{t}^{(i-1)}\]
where
\[h_{t}=\widetilde{u}_{t-1}+\widetilde{w}_{t}\widetilde{X}_{t}^{(i-1)},\quad v_ {t}=\frac{\langle\widetilde{X}_{t}^{(i-1)},h_{t}\rangle}{\|\widetilde{X}_{t}^ {(i-1)}\|_{2}^{2}},\quad\text{and}\quad\widetilde{q}_{t}=\mathcal{Q}_{\text{ \rm{StocQ}}}(v_{t}). \tag{50}\]
If \(t=1\), then \(h_{1}=\widetilde{w}_{1}\widetilde{X}_{1}^{(i-1)}\), \(v_{1}=\widetilde{w}_{1}\), and \(\widetilde{q}_{1}=\mathcal{Q}_{\text{\rm{StocQ}}}(v_{1})\). Since \(|v_{1}|=|\widetilde{w}_{1}|\leq\|\widetilde{w}\|_{\infty}\leq\frac{1}{2}K^{(i)}\delta\), we get \(|v_{1}-\widetilde{q}_{1}|\leq\delta\) and the proof technique used for the case \(t=1\) in Lemma 3.1 can be applied here to conclude that \(\widetilde{u}_{1}\leq_{\text{\rm{cx}}}\mathcal{N}(0,\sigma_{1}^{2}I)\) with \(\sigma_{1}^{2}=\frac{\pi\delta^{2}}{2}\|\widetilde{X}_{1}^{(i-1)}\|_{2}^{2}\). Next, for \(t\geq 2\), assume that \(\widetilde{u}_{t-1}\leq_{\text{\rm{cx}}}\mathcal{N}(0,\sigma_{t-1}^{2}I)\) holds where \(\sigma_{t-1}^{2}=\frac{\pi\delta^{2}}{2}\max_{1\leq j\leq t-1}\|\widetilde{X}_ {j}^{(i-1)}\|_{2}^{2}\) is defined as
in Lemma 3.1. It follows from (50) and Lemma A.3 that
\[|v_{t}|=\Big{|}\frac{\langle\widetilde{X}_{t}^{(i-1)},\widetilde{u}_{t-1}\rangle}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}+\widetilde{w}_{t}\Big{|}\leq\Big{|}\frac{\langle\widetilde{X}_{t}^{(i-1)},\widetilde{u}_{t-1}\rangle}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}\Big{|}+\|\widetilde{w}\|_{\infty}\leq\Big{|}\frac{\langle\widetilde{X}_{t}^{(i-1)},\widetilde{u}_{t-1}\rangle}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}\Big{|}+\frac{1}{2}K^{(i)}\delta\]
with \(\frac{\langle\widetilde{X}_{t}^{(i-1)},\widetilde{u}_{t-1}\rangle}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}\leq_{\rm cx}\mathcal{N}\Big{(}0,\frac{\sigma_{t-1}^{2}}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}\Big{)}\). Then we have, by Lemma B.2, that
\[\mathrm{P}(|v_{t}|\leq K^{(i)}\delta)\geq\mathrm{P}\Big{(}\Big{|}\frac{\langle\widetilde{X}_{t}^{(i-1)},\widetilde{u}_{t-1}\rangle}{\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}\Big{|}\leq\frac{1}{2}K^{(i)}\delta\Big{)}\geq 1-\sqrt{2}\exp\Bigl{(}-\frac{(K^{(i)}\delta)^{2}}{16\sigma_{t-1}^{2}}\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}\Bigr{)}.\]
On the event \(\{|v_{t}|\leq K^{(i)}\delta\}\), we can quantize \(v_{t}\) as if the quantizer \(\mathcal{Q}_{\rm StocQ}\) used the infinite alphabet \(\mathcal{A}_{\infty}^{\delta}\). So \(\widetilde{u}_{t}\leq_{\rm cx}\mathcal{N}(0,\sigma_{t}^{2}I)\). Therefore, applying a union bound,
\[\mathrm{P}\Big{(}\widetilde{u}_{N_{i-1}}\leq_{\rm cx}\mathcal{N}(0,\sigma_{N_{ i-1}}^{2}I)\Big{)}\geq 1-\sqrt{2}\sum_{t=2}^{N_{i-1}}\exp\Bigl{(}-\frac{(K^{(i)} \delta)^{2}}{16\sigma_{t-1}^{2}}\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}\Big{)}. \tag{51}\]
Conditioning on the event above, that \(\widetilde{u}_{N_{i-1}}\leq_{\rm cx}\mathcal{N}(0,\sigma_{N_{i-1}}^{2}I)\), Lemma B.2 yields for \(\gamma\in(0,1]\)
\[\mathrm{P}\Big{(}\|\widetilde{u}_{N_{i-1}}\|_{\infty}\leq 2\sigma_{N_{i-1}} \sqrt{\log(\sqrt{2}m/\gamma)}\Big{)}\geq 1-\gamma.\]
Setting \(\gamma=\sqrt{2}mN_{i-1}^{-p}\) and recalling (23), we obtain that
\[\|X^{(i-1)}W_{j}^{(i)}-\widetilde{X}^{(i-1)}Q_{j}^{(i)}\|_{2}=\|\widetilde{u} _{N_{i-1}}\|_{2}\leq\sqrt{m}\|\widetilde{u}_{N_{i-1}}\|_{\infty}\leq 2\sigma_{N_{ i-1}}\sqrt{mp\log N_{i-1}} \tag{52}\]
holds with probability at least \(1-\frac{\sqrt{2}m}{N_{i-1}^{p}}\). Combining (51) and (52), for each \(1\leq j\leq N_{i}\),
\[\|X^{(i-1)}W_{j}^{(i)}-\widetilde{X}^{(i-1)}Q_{j}^{(i)}\|_{2}\leq 2\sigma_{N_{ i-1}}\sqrt{mp\log N_{i-1}}=\delta\sqrt{2\pi pm\log N_{i-1}}\max_{1\leq j \leq N_{i-1}}\|\widetilde{X}_{j}^{(i-1)}\|_{2}\]
holds with probability exceeding \(1-\frac{\sqrt{2}m}{N_{i-1}^{p}}-\sqrt{2}\sum_{t=2}^{N_{i-1}}\exp\Bigl{(}- \frac{(K^{(i)}\delta)^{2}}{16\sigma_{t-1}^{2}}\|\widetilde{X}_{t}^{(i-1)}\|_{2 }^{2}\Bigr{)}\). Taking a union bound over all \(1\leq j\leq N_{i}\), we have
\[\mathrm{P}\Big{(}\max_{1\leq j\leq N_{i}}\|X^{(i-1)}W_{j}^{(i)}- \widetilde{X}^{(i-1)}Q_{j}^{(i)}\|_{2}\leq\delta\sqrt{2\pi pm\log N_{i-1}} \max_{1\leq j\leq N_{i-1}}\|\widetilde{X}_{j}^{(i-1)}\|_{2}\Big{)}\] \[\geq 1-\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p}}-\sqrt{2}N_{i}\sum_{t=2}^ {N_{i-1}}\exp\Bigl{(}-\frac{(K^{(i)}\delta)^{2}}{16\sigma_{t-1}^{2}}\| \widetilde{X}_{t}^{(i-1)}\|_{2}^{2}\Bigr{)}\] \[\geq 1-\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p}}-\sqrt{2}N_{i}\sum_{t=2}^ {N_{i-1}}\exp\Bigl{(}-\frac{(K^{(i)})^{2}\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}} {8\pi\max_{1\leq j\leq t-1}\|\widetilde{X}_{j}^{(i-1)}\|_{2}^{2}}\Bigr{)}.\]
Next, in Theorem 4.2, we show that provided the activations \(X^{(i-1)}\) and \(\widetilde{X}^{(i-1)}\) of the quantized and unquantized networks are sufficiently close, and provided the weights \(w\) follow a random distribution, one can guarantee the needed boundedness of the aligned weights \(\widetilde{w}\). This allows us to apply Theorem 4.1 and generate an error bound for finite alphabets. Our focus on random weights here enables us to avoid certain adversarial situations. Indeed, one can construct activations \(X^{(i-1)}\) and \(\widetilde{X}^{(i-1)}\) that are arbitrarily close to each other, along with adversarial weights \(w\) that together lead to \(\|\widetilde{w}\|_{\infty}\) becoming arbitrarily large. We demonstrate this contrived adversarial scenario in Proposition B.9. However, in generic cases
represented by random weights, as shown in Theorem 4.2, the bound on \(\widetilde{w}\) is not a major issue. Consequently, one can utilize a finite alphabet for quantization as desired.
**Theorem 4.2**.: _Assuming that the first \(i-1\) layers have been quantized, let \(X^{(i-1)}\), \(\widetilde{X}^{(i-1)}\) be as in (11). Suppose the weight matrix \(W^{(i)}\in\mathbb{R}^{N_{i-1}\times N_{i}}\) has i.i.d. \(\mathcal{N}(0,1)\) entries and_
\[\|\widetilde{X}^{(i-1)}-X^{(i-1)}\|_{2}\leq\epsilon^{(i-1)}\sigma_{1}^{(i-1)}< \sigma_{m}^{(i-1)}, \tag{53}\]
_where \(\epsilon^{(i-1)}\in(0,1)\), \(\sigma_{1}^{(i-1)}\) and \(\sigma_{m}^{(i-1)}\) are the largest and smallest singular values of \(X^{(i-1)}\) respectively. Let \(p,K^{(i)}\in\mathbb{N}\) and \(\delta>0\) such that \(p\geq 3\) and_
\[K^{(i)}\delta\geq 2\eta^{(i-1)}\sqrt{2p\log N_{i-1}}. \tag{54}\]
_where \(\eta^{(i-1)}:=\frac{\sigma_{1}^{(i-1)}}{\sigma_{m}^{(i-1)}-\epsilon^{(i-1)} \sigma_{1}^{(i-1)}}\). If we quantize \(W^{(i)}\) using Algorithm 1 with \(\mathcal{A}=\mathcal{A}_{K^{(i)}}^{\delta}\), then_
\[\max_{1\leq j\leq N_{i}}\|X^{(i-1)}W_{j}^{(i)}-\widetilde{X}^{(i-1)}Q_{j}^{(i )}\|_{2}\leq\delta\sqrt{2\pi pm\log N_{i-1}}\max_{1\leq j\leq N_{i-1}}\| \widetilde{X}_{j}^{(i-1)}\|_{2} \tag{55}\]
_holds with probability at least \(1-\frac{2N_{i}}{N_{i-1}^{p-1}}-\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p}}-\sqrt{2}N_{i}\sum_{t=2}^{N_{i-1}}\exp\Bigl{(}-\frac{(K^{(i)})^{2}\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}{8\pi\max_{1\leq j\leq t-1}\|\widetilde{X}_{j}^{(i-1)}\|_{2}^{2}}\Bigr{)}\)._
Proof.: Pick a neuron \(w:=W_{j}^{(i)}\in\mathbb{R}^{N_{i-1}}\) for some \(1\leq j\leq N_{i}\). Then we have \(w\sim\mathcal{N}(0,I)\) and since we are using Algorithm 1, we must work with the resulting \(\widetilde{w}\), the solution of (22). Applying Proposition B.11 to \(w\) with \(X=X^{(i-1)}\) and \(\widetilde{X}=\widetilde{X}^{(i-1)}\), we obtain
\[\mathrm{P}\Bigl{(}\|\widetilde{w}\|_{\infty}\leq\eta^{(i-1)}\sqrt{2p\log N_{i- 1}}\Bigr{)}\geq 1-\frac{2}{N_{i-1}^{p-1}},\]
so that using (54) gives
\[\mathrm{P}\Bigl{(}\|\widetilde{w}\|_{\infty}\leq\frac{1}{2}K^{(i)}\delta \Bigr{)}\geq 1-\frac{2}{N_{i-1}^{p-1}}. \tag{56}\]
Conditioning on the event \(\{\|\widetilde{w}\|_{\infty}\leq\frac{1}{2}K^{(i)}\delta\}\) and applying exactly the same argument as in Theorem 4.1,
\[\|X^{(i-1)}W_{j}^{(i)}-\widetilde{X}^{(i-1)}Q_{j}^{(i)}\|_{2}\leq\delta\sqrt{2 \pi pm\log N_{i-1}}\max_{1\leq j\leq N_{i-1}}\|\widetilde{X}_{j}^{(i-1)}\|_{2} \tag{57}\]
holds with probability exceeding \(1-\frac{\sqrt{2}m}{N_{i-1}^{p}}-\sqrt{2}\sum_{t=2}^{N_{i-1}}\exp\Bigl{(}-\frac{(K^{(i)})^{2}\|\widetilde{X}_{t}^{(i-1)}\|_{2}^{2}}{8\pi\max_{1\leq j\leq t-1}\|\widetilde{X}_{j}^{(i-1)}\|_{2}^{2}}\Bigr{)}\). Combining (56) and (57), and taking a union bound over all \(1\leq j\leq N_{i}\), we obtain (55).
We now estimate the number of bits needed to guarantee the derived bounds. Note that, in Theorem 4.2, we achieved the same error bound (55) as in Lemma 3.1, by choosing proper \(\epsilon^{(i-1)}\in(0,1)\) and \(K^{(i)}\in\mathbb{N}\) such that (53) and (54) are satisfied and the associated probability in (55) is positive. This implies that the error bounds we obtained in Section 3 remain valid for our finite alphabets as well. In particular, by an argument similar to the one used to obtain (47), one can get the following approximations
\[\frac{\|\widetilde{X}^{(i-1)}-X^{(i-1)}\|_{F}^{2}}{\|X^{(i-1)}\|_{F}^{2}} \lesssim\Bigl{(}\prod_{k=0}^{i-1}\log N_{k}\Bigr{)}\sum_{j=0}^{i-2}\prod_{k=j} ^{i-2}\frac{m}{N_{k}}.\]
Due to \(\|X^{(i-1)}\|_{F}\leq\sqrt{m}\|X^{(i-1)}\|_{2}\) and \(\|\widetilde{X}^{(i-1)}-X^{(i-1)}\|_{2}\leq\|\widetilde{X}^{(i-1)}-X^{(i-1)}\|_{F}\), we have
\[\frac{\|\widetilde{X}^{(i-1)}-X^{(i-1)}\|_{2}^{2}}{\|X^{(i-1)}\|_{2 }^{2}} \leq\frac{m\|\widetilde{X}^{(i-1)}-X^{(i-1)}\|_{F}^{2}}{\|X^{(i-1) }\|_{F}^{2}}\] \[\lesssim m\Big{(}\prod_{k=0}^{i-1}\log N_{k}\Big{)}\sum_{j=0}^{i-2 }\prod_{k=j}^{i-2}\frac{m}{N_{k}}.\]
If \(\prod_{k=j}^{i-2}N_{k}\gtrsim m^{i-j}\prod_{k=0}^{i-1}\log N_{k}\) for \(0\leq j\leq i-2\), then it is possible to choose \(\epsilon^{(i-1)}\in(0,1)\) such that (53) holds. Moreover, since \(\sigma_{m}^{(i-1)}\leq\sigma_{1}^{(i-1)}\), we have \(\eta^{(i-1)}=\frac{\sigma_{1}^{(i-1)}}{\sigma_{m}^{(i-1)}-\epsilon^{(i-1)} \sigma_{1}^{(i-1)}}\geq(1-\epsilon^{(i-1)})^{-1}\) and thus (54) becomes
\[K^{(i)}\geq 2\delta^{-1}(1-\epsilon^{(i-1)})^{-1}\sqrt{2p\log N_{i-1}} \gtrsim\sqrt{\log N_{i-1}}. \tag{58}\]
Assuming columns of \(\widetilde{X}^{(i-1)}\) are similar in the sense of
\[\max_{1\leq j\leq t-1}\|\widetilde{X}^{(i-1)}_{j}\|_{2}\lesssim\sqrt{\log N_{ i-1}}\|\widetilde{X}^{(i-1)}_{t}\|_{2},\quad 2\leq t\leq N_{i-1},\]
we obtain that (55) holds with probability exceeding
\[1-\frac{2N_{i}}{N_{i-1}^{p-1}}-\frac{\sqrt{2}mN_{i}}{N_{i-1}^{p} }-\sqrt{2}N_{i}\sum_{t=2}^{N_{i-1}}\exp\Bigl{(}-\frac{(K^{(i)})^{2}\| \widetilde{X}^{(i-1)}_{t}\|_{2}^{2}}{8\pi\max_{1\leq j\leq t-1}\|\widetilde{X} ^{(i-1)}_{j}\|_{2}^{2}}\Bigr{)}\] \[\geq 1-\frac{2N_{i}}{N_{i-1}^{p-1}}-\frac{\sqrt{2}mN_{i}}{N_{i-1}^ {p}}-\sqrt{2}N_{i-1}N_{i}\exp\Bigl{(}-\frac{(K^{(i)})^{2}}{8\pi\log N_{i-1}} \Bigr{)}. \tag{59}\]
To make (59) positive, we have
\[K^{(i)}\gtrsim\log\max\{N_{i-1},N_{i}\}. \tag{60}\]
It follows from (58) and (60) that, in the \(i\)th layer, we only need a number of bits \(b^{(i)}\) that satisfies
\[b^{(i)}\geq\log_{2}K^{(i)}+1\gtrsim\log_{2}\log\max\{N_{i-1},N_{i}\}\]
to guarantee the performance of our quantization method using finite alphabets.
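To make the bit count concrete, the following minimal Python sketch (an illustration only; all absolute constants hidden in (58)-(60) are set to \(1\)) evaluates \(K^{(i)}\approx\log\max\{N_{i-1},N_{i}\}\) and \(b^{(i)}=\lceil\log_{2}K^{(i)}\rceil+1\) for an MLP-sized architecture.

```python
import math

def bits_per_layer(widths, c=1.0):
    """Illustrative evaluation of K^(i) ~ c*log(max{N_{i-1}, N_i}) and
    b^(i) = ceil(log2 K^(i)) + 1 from (58)-(60); constants are placeholders."""
    bits = []
    for n_prev, n_cur in zip(widths[:-1], widths[1:]):
        K = max(2, math.ceil(c * math.log(max(n_prev, n_cur))))  # K^(i) >~ log max{...}
        bits.append(math.ceil(math.log2(K)) + 1)                 # b^(i) >= log2 K + 1
    return bits

print(bits_per_layer([784, 4096, 4096, 1000]))  # -> [5, 5, 5] with c = 1
```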
## 5. Experiments
In this section, we test the performance of SPFQ on the ImageNet classification task and compare it with the non-random scheme GPFQ in [38]. In particular, we adopt the version of SPFQ corresponding to (15) 1, i.e., Algorithm 2 with order \(r=1\). Note that the GPFQ algorithm runs the same iterations as in (15) except that \(\mathcal{Q}_{\text{StocQ}}\) is substituted with a non-random quantizer \(\mathcal{Q}_{\text{DetQ}}\), so the associated iterations are given by
Footnote 1: Code: [https://github.com/jayzhang0727/Stochastic-Path-Following-Quantization.git](https://github.com/jayzhang0727/Stochastic-Path-Following-Quantization.git)
\[\begin{cases}u_{0}=0\in\mathbb{R}^{m},\\ q_{t}=\mathcal{Q}_{\text{DetQ}}\bigg{(}\frac{\langle\widetilde{X}^{(i-1)}_{t},u_{t-1}+w_{t}X^{(i-1)}_{t}\rangle}{\|\widetilde{X}^{(i-1)}_{t}\|_{2}^{2}}\bigg{)},\\ u_{t}=u_{t-1}+w_{t}X^{(i-1)}_{t}-q_{t}\widetilde{X}^{(i-1)}_{t}\end{cases} \tag{61}\]
where \(\mathcal{Q}_{\text{DetQ}}(z):=\operatorname*{argmin}_{p\in\mathcal{A}}|z-p|\). For ImageNet data, we consider ILSVRC-2012 [10], a 1000-category dataset with over 1.2 million training images and 50 thousand validation
images. Additionally, we resize all images to \(256\times 256\) and use the normalized \(224\times 224\) center crop, which is a standard procedure. The evaluation metrics we choose are top-1 and top-5 accuracy of the quantized models on the validation dataset. As for the neural network architectures, we quantize all layers of VGG-16 [33], ResNet-18 and ResNet-50 [18], which are pretrained 32-bit floating point neural networks provided by torchvision in PyTorch [28]. Moreover, we fuse the batch normalization (BN) layer with the convolutional layer, and freeze the BN statistics before quantization.
Since the major difference between SPFQ in (15) and GPFQ in (61) is the choice of quantizers, we will follow the experimental setting for alphabets used in [38]. Specifically, we use batch size \(m\), fixed bits \(b\in\mathbb{N}\) for all the layers, and quantize each \(W^{(i)}\in\mathbb{R}^{N_{i-1}\times N_{i}}\) with midtread alphabets \(\mathcal{A}=\mathcal{A}_{K}^{\delta}\) as in (5), where level \(K\) and step size \(\delta\) are given by
\[K=2^{b-1},\quad\delta=\delta^{(i)}:=\frac{C}{2^{b-1}N_{i}}\sum_{1\leq j\leq N_ {i}}\|W^{(i)}_{j}\|_{\infty}. \tag{62}\]
Here, \(C>0\) is a constant that depends only on the bitwidth \(b\), is determined by grid search with cross-validation, and is fixed across layers and batch sizes. One can, of course, expect to do better by using different values of \(C\) for different layers, but we refrain from doing so, as our main goal here is to demonstrate the performance of SPFQ even with minimal fine-tuning.
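For concreteness, the following NumPy sketch (an illustration of the iterations (15)/(61) with the alphabet (62), not the released code) quantizes one layer; setting `stochastic=True` uses the random rounding \(\mathcal{Q}_{\text{StocQ}}\) of SPFQ, and `False` the nearest-point rounding \(\mathcal{Q}_{\text{DetQ}}\) of GPFQ.

```python
import numpy as np

def quantize_layer(W, X, X_tilde, b, C=1.0, stochastic=True, seed=0):
    """Path-following quantization of one layer (a sketch of (15)/(61)).
    W: (N_prev, N_cur) weights; X, X_tilde: (m, N_prev) activations of the
    original and quantized networks; b: bitwidth; C: the constant in (62)."""
    rng = np.random.default_rng(seed)
    K = 2 ** (b - 1)
    delta = C / (K * W.shape[1]) * np.abs(W).max(axis=0).sum()  # step size (62)
    grid = delta * np.arange(-K, K + 1)                         # midtread alphabet
    Q = np.zeros_like(W)
    for j in range(W.shape[1]):                                 # quantize neuron j
        u = np.zeros(X.shape[0])
        for t in range(W.shape[0]):
            xt, xtq = X[:, t], X_tilde[:, t]
            z = xtq @ (u + W[t, j] * xt) / (xtq @ xtq + 1e-12)
            if stochastic:   # Q_StocQ: round up/down with prob. prop. to distance
                i = int(np.clip(np.searchsorted(grid, z) - 1, 0, len(grid) - 2))
                p = np.clip((z - grid[i]) / delta, 0.0, 1.0)
                Q[t, j] = grid[i] + delta * (rng.random() < p)
            else:            # Q_DetQ: nearest alphabet element (GPFQ)
                Q[t, j] = grid[np.argmin(np.abs(grid - z))]
            u += W[t, j] * xt - Q[t, j] * xtq
    return Q
```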
In Table 1, for different combinations of \(m\), \(b\), and \(C\), we present the corresponding top-1/top-5 validation accuracy of quantized networks using SPFQ in the first column, while the second and third columns give the validation accuracy of unquantized models and the accuracy drop due to quantization respectively. We observe that, for all three models, the quantization accuracy is improved as the number of bits \(b\) increases, and SPFQ achieves less than 0.5% top-1 accuracy loss while using 6 bits.
Next, in Figure 1, we compare SPFQ against GPFQ by quantizing the three models in Table 1. These figures illustrate that GPFQ has better performance than SPFQ when \(b=3,4\) and \(m\) is small. This is not particularly surprising, as \(\mathcal{Q}_{\text{DetQ}}\) deterministically rounds its argument to the nearest alphabet element instead of performing a random rounding like \(\mathcal{Q}_{\text{StocQ}}\). However, as the batch size \(m\) increases, the accuracy gap between GPFQ and SPFQ diminishes. Indeed, for VGG-16 and ResNet-18, SPFQ outperforms GPFQ when \(b=6\). Further, we note that, for both SPFQ and GPFQ, one can obtain higher quantization accuracy by taking larger \(m\), but the extra improvement that results from increasing the batch size rapidly decreases.
| Model | \(m\) | \(b\) | \(C\) | Quant Acc (%) | Ref Acc (%) | Acc Drop (%) |
| --- | --- | --- | --- | --- | --- | --- |
| VGG-16 | 1024 | 4 | 1.02 | 70.48/89.77 | 71.59/90.38 | 1.11/0.61 |
| VGG-16 | 1024 | 5 | 1.23 | 71.08/90.15 | 71.59/90.38 | 0.51/0.23 |
| VGG-16 | 1024 | 6 | 1.26 | 71.24/90.37 | 71.59/90.38 | 0.35/0.01 |
| ResNet-18 | 2048 | 4 | 0.91 | 67.36/87.74 | 69.76/89.08 | 2.40/1.34 |
| ResNet-18 | 2048 | 5 | 1.32 | 68.79/88.77 | 69.76/89.08 | 0.97/0.31 |
| ResNet-18 | 2048 | 6 | 1.68 | 69.43/88.96 | 69.76/89.08 | 0.33/0.12 |
| ResNet-50 | 2048 | 4 | 1.10 | 73.37/91.61 | 76.13/92.86 | 2.76/1.25 |
| ResNet-50 | 2048 | 5 | 1.62 | 75.05/92.43 | 76.13/92.86 | 1.08/0.43 |
| ResNet-50 | 2048 | 6 | 1.98 | 75.66/92.67 | 76.13/92.86 | 0.47/0.19 |

Table 1. Top-1/Top-5 validation accuracy for SPFQ on ImageNet.
Figure 1. Top-1 and Top-5 validation accuracy for SPFQ (dashed lines) and GPFQ (solid lines) on ImageNet.
## Acknowledgements
The authors thank Yixuan Zhou for discussions on the numerical experiments in this paper. This work was supported in part by National Science Foundation Grant DMS-2012546 and a Simons Fellowship.
|
2309.14050 | NNgTL: Neural Network Guided Optimal Temporal Logic Task Planning for
Mobile Robots | In this work, we investigate task planning for mobile robots under linear
temporal logic (LTL) specifications. This problem is particularly challenging
when robots navigate in continuous workspaces due to the high computational
complexity involved. Sampling-based methods have emerged as a promising avenue
for addressing this challenge by incrementally constructing random trees,
thereby sidestepping the need to explicitly explore the entire state-space.
However, the performance of this sampling-based approach hinges crucially on
the chosen sampling strategy, and a well-informed heuristic can notably enhance
sample efficiency. In this work, we propose a novel neural-network guided
(NN-guided) sampling strategy tailored for LTL planning. Specifically, we
employ a multi-modal neural network capable of extracting features concurrently
from both the workspace and the B\"{u}chi automaton. This neural network
generates predictions that serve as guidance for random tree construction,
directing the sampling process toward more optimal directions. Through
numerical experiments, we compare our approach with existing methods and
demonstrate its superior efficiency, requiring less than 15% of the time of the
existing methods to find a feasible solution. | Ruijia Liu, Shaoyuan Li, Xiang Yin | 2023-09-25T11:24:40Z | http://arxiv.org/abs/2309.14050v2 | # NNgTL: Neural Network Guided Optimal Temporal Logic Task Planning for Mobile Robots
###### Abstract
In this work, we investigate task planning for mobile robots under linear temporal logic (LTL) specifications. This problem is particularly challenging when robots navigate in continuous workspaces due to the high computational complexity involved. Sampling-based methods have emerged as a promising avenue for addressing this challenge by incrementally constructing random trees, thereby sidestepping the need to explicitly explore the entire state-space. However, the performance of this sampling-based approach hinges crucially on the chosen sampling strategy, and a well-informed heuristic can notably enhance sample efficiency. In this work, we propose a novel _neural-network guided_ (NN-guided) sampling strategy tailored for LTL planning. Specifically, we employ a multi-modal neural network capable of extracting features concurrently from both the workspace and the Buchi automaton. This neural network generates predictions that serve as guidance for random tree construction, directing the sampling process toward more optimal directions. Through numerical experiments, we compare our approach with existing methods and demonstrate its superior efficiency, requiring less than 15% of the time of the existing methods to find a feasible solution.
## I Introduction
With the ongoing development and widespread deployment of mobile robots, there has been an increasing focus on path planning for high-level tasks. Among the formal languages used for specifying such complex tasks, Linear Temporal Logic (LTL) stands out as a widely adopted choice. LTL provides a structured means for the users to articulate complex requirements, such as navigating a robot to a target region without collision with obstacles or ensuring specific locations are visited infinitely often [1, 2, 3, 4]. In recent years, the field of robot path planning for LTL tasks has witnessed extensive investigation, spurred by its broad applications. These applications encompass a wide range of scenarios, such as environmental surveillance [5], search and rescue missions [6], and intelligent warehousing systems [7, 8].
In the context of LTL path planning, one of the most foundational methods is the automata-theoretical approach based on finite abstractions [9, 10, 11, 5, 12]. This approach involves the creation of a discrete abstraction of the workspace, which effectively captures the robot's mobility constraints. Subsequently, the discrete abstraction is synchronized with an automaton representation of the LTL task. This synchronization enables the formulation of the planning problem as a graph-search problem within the product space. However, this graph-search approach, although conceptually powerful, faces a significant computational challenge. In particular, as the system's dimensionality increases, the state-space of the finite abstraction grows exponentially, rendering the graph-search problem infeasible.
Sampling-based methods, such as rapidly-exploring random trees (RRT), have emerged as a promising solution to tackle the computational challenges associated with path planning in continuous state-spaces [13]. More recently, sampling-based algorithms have been introduced to enhance the computational efficiency of solving LTL planning problems [14, 15, 16, 17, 18]. For example, in [16], a sampling-based algorithm, inspired by the RRT*, is proposed to find optimal paths that satisfy LTL tasks with a prefix-suffix structure. This algorithm circumvents the necessity of explicitly exploring the entire state-space by incrementally constructing random trees over the product state-space. Building upon this advancement, [17] further introduces a biased sampling strategy that leverages automata information to significantly improve sample efficiency. Moreover, in [18], the authors extend the methods in [17] to continuous workspaces without discretization. Specifically, they introduce an abstraction-free planning algorithm that directly samples within a continuous space and integrates these samples with automata states.
In sampling-based LTL planning, one of the key factors is how each new state is sampled. While biased sampling strategies offer an enhanced approach compared to uniform sampling, they primarily rely on distance information within the Buchi automaton. This approach neglects valuable insights from the workspace, such as the physical feasibility of task progression. However, integrating continuous workspace information with the Buchi automaton is very challenging due to the inherently heterogeneous structures of these components. Recently, there has been notable progress in leveraging the power of neural networks for expediting solutions to intricate problems. For instance, within the domain of path planning, neural networks have been employed to predict the probability distribution of optimal paths [19, 20, 21, 22]. Moreover, neural networks have been applied, in an end-to-end fashion, for generating solutions to temporal logic tasks [23, 24, 25, 26].
In this paper, we introduce a novel neural-network guided (NN-guided) sampling strategy designed specifically for LTL planning. Our approach builds upon the basic architecture of the sampling-based algorithm for continuous workspaces, as established in [18]. However, instead of relying solely on automata information, we incorporate a multi-modal neural network capable of jointly extracting features from
both the workspace and the Buchi automaton. This neural network offers predictive capabilities to steer the sampling process in directions that are more likely to yield optimal solutions, ones that are not only task-progressive but also feasible within the workspace. To demonstrate the efficiency of our approach, we compare the proposed NN-guided sampling strategy with existing sampling strategies on a set of randomly generated problem instances. The statistical results underscore the effectiveness of our new strategy, as it requires less than 15% of the time needed by the existing strategies to find a feasible solution.
## II Problem Formulation
### _System Model_
We consider a scenario where a single robot navigates in a two-dimensional continuous workspace represented by a compact subset \(\mathcal{W}\subseteq\mathbb{R}^{2}\). The workspace is assumed to be partitioned as \(\mathcal{W}=\mathcal{W}_{\text{free}}\cup\mathcal{O}\), where \(\mathcal{O}\) is an open subset representing obstacle regions and \(\mathcal{W}_{\text{free}}\) is the free region in which the robot can move. The free workspace is further partitioned as \(m\) labeled regions of interest and a non-labeled region \(\mathcal{W}_{\text{free}}=\mathcal{R}_{1}\dot{\cup}\cdots\dot{\cup}\mathcal{R}_{m}\dot{\cup}\mathcal{R}_{non}\).
The mobility of the robot can be captured by a weighted transition system
\[TS=(\mathcal{W},\mathbf{x}_{0},\rightarrow_{T},\mathcal{AP},L,C),\]
where \(\mathcal{W}\) is the (infinite) set of positions of the robot; \(\mathbf{x}_{0}\) is its initial position; \(\rightarrow_{T}\subseteq\mathcal{W}\times\mathcal{W}\) is the transition relation such that, for any \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{W}\), we have \((\mathbf{x},\mathbf{x}^{\prime})\in\rightarrow_{T}\) if the straight line between \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) (i) does not intersect \(\mathcal{O}\), and (ii) crosses any boundary of the labeled regions at most once [18]; \(\mathcal{AP}=\{\pi^{1},\ldots,\pi^{m}\}\) is the set of atomic propositions with labeling function \(L:\mathcal{W}\rightarrow\mathcal{AP}\) such that \(L(\mathbf{x})=\pi^{i}\) iff \(\mathbf{x}\in\mathcal{R}_{i}\); \(C:\mathcal{W}\times\mathcal{W}\rightarrow\mathbb{R}^{+}\) is the weight function computing the Euclidean distance, i.e., \(C(\mathbf{x}_{1},\mathbf{x}_{2})=\|\mathbf{x}_{1}-\mathbf{x}_{2}\|_{2}\).
An infinite _path_ is a sequence \(\tau\!=\!\tau(1)\tau(2)\cdots\!\in\!\mathcal{W}^{\omega}\) such that \(\tau(1)=\mathbf{x}_{0}\) and \((\tau(k),\tau(k+1))\in\rightarrow_{T},k=1,2,\ldots\). The _trace_ of infinite path \(\tau\) is \(\operatorname{trace}(\tau)=L(\tau(1))L(\tau(2))\cdots\in(\mathcal{AP})^{\omega}\). We say a path \(\tau\) is finite if it is a finite sequence of points, and we denote by \(|\tau|\) its _length_. For a finite path \(\tau\), its cost \(J(\tau)\) is defined as the cumulative distance between consecutive states, i.e., \(J(\tau)=\sum_{k=1}^{|\tau|-1}C(\tau(k),\tau(k+1))\).
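As a small illustration (our own sketch), the cost of a finite path is simply its cumulative Euclidean length:

```python
import numpy as np

def path_cost(path):
    """J(tau): cumulative Euclidean distance between consecutive points."""
    pts = np.asarray(path, dtype=float)
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

print(path_cost([(0.0, 0.0), (3.0, 4.0), (3.0, 6.0)]))  # 5.0 + 2.0 = 7.0
```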
### _Temporal Logic Tasks_
The formal specification of the robot is captured by a linear temporal logic formula without next (\(\operatorname{LTL}_{-\bigcirc}\)), which is widely used in robot task planning in continuous workspaces [27]. The syntax of \(\operatorname{LTL}_{-\bigcirc}\) is given as follows:
\[\phi::=\text{true}\mid\pi\in\mathcal{AP}\mid\neg\phi\mid\phi_{1}\wedge\phi_ {2}\mid\phi_{1}\mathcal{U}\phi_{2},\]
where \(\neg\) and \(\wedge\) are Boolean operators "negation" and "conjunction", respectively; \(\mathcal{U}\) is the temporal operator "until", which can further induce temporal operators "eventually" \(\Diamond\) and "always" \(\square\).
An \(\operatorname{LTL}\) formula \(\phi\) is evaluated over an infinite word \(\sigma=\sigma(1)\sigma(2)\cdots\in\mathcal{AP}^{\omega}\). We denote by \(\sigma\models\phi\) that word \(\sigma\) satisfies \(\operatorname{LTL}\) formula \(\phi\); the reader is referred to [28] for the formal semantics. We denote by \(\operatorname{Words}(\phi)\) the set of all words satisfying formula \(\phi\). It is well-known that, for any \(\operatorname{LTL}\) formula \(\phi\), \(\operatorname{Words}(\phi)\) can be accepted by a non-deterministic Buchi automaton (NBA) [28]. Formally, an NBA is a tuple \(B=(\mathcal{Q}_{B},\mathcal{Q}_{B}^{0},\Sigma,\rightarrow_{B},\mathcal{Q}_{B} ^{\prime})\), where \(\mathcal{Q}_{B}\) is the set of states, \(\mathcal{Q}_{B}^{0}\) is the set of initial states, \(\Sigma=\mathcal{AP}\) is the alphabet, \(\rightarrow_{B}\subseteq\mathcal{Q}_{B}\times\Sigma\times\mathcal{Q}_{B}\) is the transition relation, and \(\mathcal{Q}_{B}^{F}\) is the set of accepting states. For simplicity, we assume that the initial-state is unique, i.e., \(\mathcal{Q}_{B}^{0}=\{q_{B}^{0}\}\).
An infinite run \(\rho_{B}\) of \(B\) over an infinite word \(\sigma=\pi_{0}\pi_{1}\cdots\in(\mathcal{AP})^{\omega}\) is a sequence \(\rho_{B}=q_{B}^{0}q_{B}^{1}q_{B}^{2}\cdots\) such that \(q_{B}^{0}\in\mathcal{Q}_{B}^{0},(q_{B}^{i},\pi_{i},q_{B}^{i+1})\in\rightarrow_{B},\forall i=0,1,\ldots\). We say an infinite run \(\rho_{B}\) is accepting if it contains accepting states an infinite number of times; we say a word is accepting if it induces an accepting run, and we denote by \(\mathcal{L}_{B}\) the set of all accepting words of \(B\). Hereafter, \(B\) is referred to as the NBA that accepts \(\phi\), i.e., \(\mathcal{L}_{B}=\operatorname{Words}(\phi)\).
### _LTL Task Planning Problem_
To fulfill an \(\operatorname{LTL}_{-\bigcirc}\) formula \(\phi\), the robot needs to execute an infinite path. For the purpose of planning, it suffices to consider paths in the "prefix-suffix" structure \(\tau=\tau^{\operatorname{pre}}[\tau^{\operatorname{suf}}]^{\omega}\), where the prefix part \(\tau^{\operatorname{pre}}\) repeats only once and the suffix part \(\tau^{\operatorname{suf}}\) repeats indefinitely [9]. The cost of a prefix-suffix path \(\tau\) is defined by \(J(\tau)=\lambda J(\tau^{\operatorname{pre}})+(1-\lambda)J(\tau^{\operatorname{suf}})\), where \(\lambda\in[0,1]\) is a user-defined weight coefficient.
Then our objective is to find a plan for the robot in the prefix-suffix structure such that a given \(\operatorname{LTL}\) formula is fulfilled with minimum cost.
**Problem 1**: _Given \(\operatorname{LTL}_{-\bigcirc}\) formula \(\phi\), determine a prefix-suffix path \(\tau\) in transition system \(TS\) such that (i) \(\operatorname{trace}(\tau)\models\phi\); and (ii) for any prefix-suffix path \(\tau^{\prime}\) such that \(\operatorname{trace}(\tau^{\prime})\models\phi\), we have \(J(\tau)\leq J(\tau^{\prime})\)._
## III Sampling-Based Task Planning
To solve Problem 1, a typical approach is to perform graph-search on the _product_ of \(TS\) and \(B\)[5]. However, when the state space \(\mathcal{W}\) is continuous, building the entire product space is infeasible even by discretizations. Therefore, in [18], the authors proposed a (continuous space) sampling-based algorithm, called TL-RRT*, that incrementally builds trees on-the-fly without constructing the entire product space a priori. Our work builds upon the sampling-based approach; therefore, we review it briefly in this section. The readers are referred to [18] for more details on this method.
### _Main Sampling-Based Algorithm_
The TL-RRT* algorithm is essentially a random search over the product space \(\mathcal{Q}_{P}:=\mathcal{W}_{\text{free}}\times\mathcal{Q}_{B}\). It consists of two parts, a prefix part and a suffix part, and for each part a similar random tree construction is employed in order to search for the optimal path. We briefly sketch the procedures.
#### Iii-A1 Prefix Search
Starting from the initial state \(q_{0}=(\mathbf{x}_{0},q_{B}^{0})\), one first builds a prefix tree \(\mathcal{T}=(\mathcal{V}_{\mathcal{T}},\mathcal{E}_{\mathcal{T}},\operatorname{Cost})\) by incrementally adding vertices, where \(\mathcal{V}_{\mathcal{T}}\) and \(\mathcal{E}_{\mathcal{T}}\) are the sets of vertices and edges in \(\mathcal{T}\), respectively. The tree contains some _goal states_ defined by \(\mathcal{Q}_{\text{goal}}:=\mathcal{W}_{\text{free}}\times\mathcal{Q}_{B}^{F}\).
By projecting paths in tree onto \(\mathcal{W}\), the path from the initial state \(q_{0}\) to each goal state \(q_{\text{goal}}\in\mathcal{V}_{\mathcal{T}}\cap\mathcal{Q}_{\text{goal}}\) forms a prefix path \(\tau^{\text{pre}}\), whose cost is stored by function \(\text{Cost}:\mathcal{V}_{\mathcal{T}}\rightarrow\mathbb{R}^{+}\).
#### Iii-A2 Suffix Search
The suffix part is very similar to the prefix part; the main difference is that one needs to construct a set of random trees rooted at the goal states \(\mathcal{P}:=\mathcal{V}_{\mathcal{T}}\cap\mathcal{Q}_{\text{goal}}\) of the prefix tree such that they return to the goal states. This gives a suffix plan \(\tau^{\text{suf}}\) from each goal state, and the prefix-suffix plan with minimum cost among all goal states is the final optimal plan.
#### Iii-A3 Constructing Random Trees
In the above two parts, the key is the on-the-fly construction of the random tree \(\mathcal{T}=(\mathcal{V}_{\mathcal{T}},\mathcal{E}_{\mathcal{T}},\text{Cost})\). The main construction steps are as follows; a condensed code sketch is given after the list.
1. Sample a state \(\mathbf{x}^{\text{rand}}\in\mathcal{W}_{\text{free}}\) "randomly";
2. Determine a new state \(\mathbf{x}^{\text{new}}\) to be added to the tree based on some distance criteria on \(\mathbf{x}^{\text{rand}}\) and \(\mathcal{T}\);
3. Determine a set \(\mathcal{Q}_{P}^{\text{near}}\subseteq\mathcal{V}_{\mathcal{T}}\), based on some distance criteria, from which the tree will be extended to \(\mathbf{x}^{\text{new}}\);
4. For each state \(q_{P}^{\text{near}}=(\mathbf{x}^{\text{near}},q_{B}^{\text{near}})\in\mathcal{Q}_{P}^{\text{near}}\), we consider a potential edge from \(q_{P}^{\text{near}}\) to \(q_{P}^{\text{new}}=(\mathbf{x}^{\text{new}},q_{B})\) if \((\mathbf{x}^{\text{near}},\mathbf{x}^{\text{new}})\in\rightarrow_{T}\) and \((q_{B}^{\text{near}},L(\mathbf{x}^{\text{near}}),q_{B})\in\rightarrow_{B}\);
5. Add \(q_{P}^{\text{new}}\) to tree \(\mathcal{T}\) from the state \(q_{P}^{\text{near}}\in\mathcal{Q}_{P}^{\text{near}}\) that minimizes \(\text{Cost}(q_{P}^{\text{new}})=\text{Cost}(q_{P}^{\text{near}})+C(\mathbf{x}^{\text{near}},\mathbf{x}^{\text{new}})\);
6. After adding \(q_{P}^{\text{new}}\), the tree edges and costs are reconfigured so that each vertex is reached by a path with minimum cost from the root.
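The steps above can be condensed into a short Python sketch (our own simplification, not the TL-RRT* reference code): `tree` maps product states \((\mathbf{x},q_{B})\) to (parent, cost) pairs, while `feasible_ts`, `ba_successors` and `L` are placeholder callables standing in for \(\rightarrow_{T}\), \(\rightarrow_{B}\) and the labeling function; the rewiring of step 6 is only indicated.

```python
import math

def extend_tree(tree, x_rand, eta, feasible_ts, ba_successors, L):
    """One condensed tree-extension step (steps 2-5); step 1 supplies x_rand."""
    # step 2: steer from the nearest tree vertex towards x_rand by at most eta
    x_nearest = min({x for (x, _) in tree}, key=lambda x: math.dist(x, x_rand))
    d = math.dist(x_nearest, x_rand)
    x_new = x_rand if d <= eta else tuple(
        a + eta * (b - a) / d for a, b in zip(x_nearest, x_rand))
    # step 3: candidate parents within radius eta of x_new
    near = [(x, q) for (x, q) in tree if math.dist(x, x_new) <= eta]
    # steps 4-5: connect x_new through the cheapest feasible parent
    best = None
    for x_near, q_near in near:
        if not feasible_ts(x_near, x_new):
            continue
        for q_new in ba_successors(q_near, L(x_near)):
            cost = tree[(x_near, q_near)][1] + math.dist(x_near, x_new)
            if best is None or cost < best[2]:
                best = ((x_new, q_new), (x_near, q_near), cost)
    if best is not None:
        state, parent, cost = best
        tree[state] = (parent, cost)
        # step 6 (rewiring nearby vertices through the new state) omitted here
    return best
```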
### _Sampling Strategies_
In the construction of random trees, it remains to specify how state \(\mathbf{x}^{\text{rand}}\) is sampled "randomly". In [18], the authors proposed two different sampling strategies for \(\mathbf{x}^{\text{rand}}\):
* _Uniform Sampling:_ \(\mathbf{x}^{\text{rand}}\) is selected according to a uniform distribution on \(\mathcal{W}_{\text{free}}\) (in fact, any distribution as long as all states have non-zero probability);
* _Biased Sampling:_ \(\mathbf{x}^{\text{rand}}\) is selected according to a biased distribution on \(\mathcal{W}_{\text{free}}\) such that one has a higher chance of moving towards an accepting state in \(B\).
Since our NN-guided approach is related to the biased sampling, we briefly review this method. First, one needs to pre-process the NBA \(B\) so that infeasible transitions are removed. Then based on the simplified NBA, one defines a distance function \(\rho:\mathcal{Q}_{B}\times\mathcal{Q}_{B}\rightarrow\mathbb{N}\) to capture the length of the shortest path between each pair of states. Then the selection of \(\mathbf{x}^{\text{rand}}\) is determined as follows:
1. Select a _feasible accepting state_ \(q_{B}^{F,\text{feas}}\in\mathcal{Q}_{B}^{F}\) such that \(\rho(q_{B}^{0},q_{B}^{F,\text{feas}})\neq\infty\) and \(\rho(q_{B}^{F,\text{feas}},q_{B}^{F,\text{feas}})\neq\infty\);
2. Define the set of vertices \(\mathcal{D}_{\min}\subseteq\mathcal{V}_{\mathcal{T}}\) that are closest to \(q_{B}^{F,\text{feas}}\) according to the distance function \(\rho\). Then select a vertex \(q_{P}^{\text{closest}}=\left(\mathbf{x}^{\text{closest}},q_{B}^{\text{closest}}\right)\in\mathcal{V}_{\mathcal{T}}\) from the tree according to a specific distribution [18] such that states in \(\mathcal{D}_{\min}\) have a higher chance of being selected;
3. Select two successive states \(q_{B}^{\text{succ},1},q_{B}^{\text{succ},2}\in\mathcal{Q}_{B}\) such that \(q_{B}^{\text{closest}}\xrightarrow{L(\mathbf{x}^{\text{closest}})}_{B}q_{B}^{\text{succ},1}\rightarrow_{B}q_{B}^{\text{succ},2}\), and \(\rho(q_{B}^{\text{succ},2},q_{B}^{F,\text{feas}})\leq\rho(q_{B}^{\text{succ},1},q_{B}^{F,\text{feas}})\leq\rho(q_{B}^{\text{closest}},q_{B}^{F,\text{feas}})\);
4. Select a state \(\mathbf{x}^{\mathcal{L}}\) such that \(q_{B}^{\text{succ},1}\xrightarrow{L(\mathbf{x}^{\mathcal{L}})}_{B}q_{B}^{\text{succ},2}\) is feasible;
5. Compute the shortest path between \(\mathbf{x}^{\text{closest}}\) and \(\mathbf{x}^{\mathcal{L}}\) within \(\mathcal{W}_{\text{free}}\). Pick the second point in the shortest path, denoted by \(\mathbf{x}^{\text{target}}\), as a heuristic direction for sampling;
6. Finally, select \(\mathbf{x}^{\text{rand}}\) according to a distribution that is more likely to sample towards the direction of \(\mathbf{x}^{\text{target}}\).
## IV Neural Network Guided Sampling
Although biased sampling provides a more efficient approach compared with uniform sampling, it relies solely on the distance information in the Buchi automaton. For example, when \(q_{B}^{F,\text{feas}}\) as well as the intermediate states \(q_{B}^{\text{succ},1}\) and \(q_{B}^{\text{succ},2}\) are selected, one only considers whether or not the task progress can be pushed forward in the NBA. However, the information from the workspace, i.e., whether the task progress is physically feasible, is neglected.
To further improve the sample efficiency of the random tree construction, one essentially needs to _jointly_ consider the information from the workspace and the NBA in order to have a better chance of sampling states that move towards accepting states _via paths that are feasible in the workspace_. However, obtaining such a good heuristic is very challenging since the workspace is continuous. In this section, we present a new neural network guided (NN-guided) sampling strategy, where a "sample net" is used that effectively fuses the information of the workspace and the NBA. All implementation details as well as source codes are available at [https://github.com/LRJ-2000/NNgTL](https://github.com/LRJ-2000/NNgTL).
### _Overview of NN-Guided Approach_
#### Iv-A1 Purposes of the Neural Networks
The main architecture of our sample net is shown in Figure 1; it consists of two sub-networks: the Path Prediction Network (PathNet) and the State Prediction Network (StateNet). The PathNet and the StateNet take the same inputs, namely the workspace map and the NBA of the LTL task. However, their purposes and outputs are different. Specifically,
* The output of the StateNet is a vector \(\mathbf{p}\) with a length of \(|\mathcal{Q}_{B}|\). In this vector, each entry \(p_{i}\) denotes the probability that state \(i\) is involved in the optimal path. The prediction vector \(\mathbf{p}\) is employed to guide the choices of \(q_{B}^{F,\text{feas}},q_{B}^{\text{succ},1},q_{B}^{\text{succ},2}\in\mathcal{Q }_{B}\) in the biased sampling.
* The output of the PathNet is a \(200\times 200\) matrix \(\mathcal{P}\) such that the value of each entry represents the likelihood that the corresponding grid is on the optimal path. Therefore, this weight matrix \(\mathcal{P}\) serves as a more informed metric for sampling \(\mathbf{x}^{\text{rand}}\): it incorporates the workspace information without performing a shortest-path search.
#### Iv-A2 NN-Guided Sampling Strategy
Now, let us discuss how the proposed neural networks are used to guide the sampling process. It still follows the basic idea of the biased sampling method as detailed in steps S1-S6. However, we further leverage the outputs of the two sub-networks to improve and simplify the sampling process. Specifically, at each instant, suppose that the outputs of the StateNet and the PathNet are \(\mathbf{p}\) and \(\mathcal{P}\). We set \(\alpha\in(0,1)\) as a parameter that specifies the probability of using the predicted information of the StateNet to guide the sampling. Then, to determine
the sample point \(\mathbf{x}^{\text{rand}}\), our approach makes the following changes C-1 and C-2 to steps S-1 to S-6.
C-1: When selecting \(q_{B}^{F,\text{feas}}\), \(q_{B}^{\text{closest}}\), \(q_{B}^{\text{succ},1}\) as in steps S1-S3 of the biased sampling, there is a \(1-\alpha\) probability that we still follow exactly the same strategy as in S-1 to S-3. However, there is an \(\alpha\) probability that all these states are selected according to the probability vector \(\mathbf{p}\). In other words, we have probability \(\alpha\) of activating the prediction result of the StateNet.
C-2: After obtaining state \(\mathbf{x}^{\mathcal{L}}\) in step S-4, to sample state \(\mathbf{x}^{\text{rand}}\), we simplify steps S-5 and S-6 into a single step. Particularly, instead of first computing the shortest path and then using the second point in the path to generate a sample distribution, here we directly use the prediction result \(\mathcal{P}\) of the PathNet to sample \(\mathbf{x}^{\text{rand}}\). More specifically, let \((x_{1},y_{1})\) and \((x_{2},y_{2})\) be the coordinates of \(\mathbf{x}^{\mathcal{L}}\) and \(\mathbf{x}^{\text{closest}}\), respectively. We consider the rectangle region \(\textsc{Rec}=[x_{1}:x_{2},y_{1}:y_{2}]\). We define a discrete distribution over all grids in Rec according to the normalized value of their weights in \(\mathcal{P}\), i.e., grids with larger values have a higher chance of being selected. Then \(\mathbf{x}^{\text{rand}}\) is sampled randomly from the rectangle region according to this distribution.
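A minimal NumPy sketch of C-2 (our own illustration; it assumes \(\mathbf{x}^{\text{closest}}\) and \(\mathbf{x}^{\mathcal{L}}\) are given as integer grid coordinates):

```python
import numpy as np

def sample_x_rand(P, x_closest, x_label, rng=np.random.default_rng()):
    """Sample a grid cell of Rec = [x1:x2, y1:y2] with probability
    proportional to the PathNet weight matrix P (falls back to uniform
    if all weights in the rectangle are zero)."""
    (x1, y1), (x2, y2) = x_closest, x_label
    rows = slice(min(x1, x2), max(x1, x2) + 1)
    cols = slice(min(y1, y2), max(y1, y2) + 1)
    w = P[rows, cols].astype(float)
    flat = w.ravel()
    probs = flat / flat.sum() if flat.sum() > 0 else np.full(flat.size, 1 / flat.size)
    i, j = np.unravel_index(rng.choice(flat.size, p=probs), w.shape)
    return (rows.start + i, cols.start + j)
```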
Here, we discuss the main features of the proposed NN-guided sampling strategy. First, our NN-guided approach inherits all properties, such as probabilistic completeness and asymptotic optimality, of the TL-RRT* algorithm in [18], since we still follow the main structure of TL-RRT* and, at each step, our algorithm has a non-zero probability of switching to the original sampling strategy. However, compared with the biased sampling strategy adopted in [18], our NN-guided sampling strategy further jointly considers both the workspace information and the NBA, which provides a better heuristic for "good" samples. Furthermore, since our strategy uses the predicted distribution directly without involving a shortest path search at each step, its online execution is also much faster than that of the biased sampling strategy.
### _Inputs Encodings_
Recall that, for both the PathNet and the StateNet, the inputs are the workspace map and the NBA of the LTL formula. To let neural networks process the continuous workspace and the graph-structured NBA, appropriate encoding techniques are needed.
#### Iv-B1 Encoding for Workspace
First, we consider the continuous workspace as a \(200\times 200\) pixel image or a grid map. Each grid in the image corresponds to a specific point in the workspace, which is labeled, obstacle, or free. Then the image is transformed into a tensor of dimensions \((m+1)\times 200\times 200\), where \(m\) is the number of different labels, i.e., each grid in the workspace is encoded as a vector \(\mathbf{a}=[a_{0},a_{1},...,a_{m}]\). Specifically, the first entry \(a_{0}\) represents the grid's status, where \(-1\) stands for the initial location, \(0\) for free space, and \(1\) for an obstacle. For \(i=1,\ldots,m\), we have \(a_{i}=1\) if the grid belongs to \(\mathcal{R}_{i}\), and \(a_{i}=0\) otherwise.
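For illustration, the encoding can be assembled as follows (a sketch; `status` is the \(200\times 200\) array of \(-1/0/1\) grid statuses and `region_masks` the \(m\) binary region masks, both assumed inputs):

```python
import numpy as np

def encode_workspace(status, region_masks):
    """Stack the (m+1) x 200 x 200 input tensor: channel 0 carries
    -1 (initial location) / 0 (free) / 1 (obstacle), channel i the
    binary membership mask of region R_i."""
    t = np.zeros((len(region_masks) + 1, 200, 200), dtype=np.float32)
    t[0] = status
    for i, mask in enumerate(region_masks, start=1):
        t[i] = mask
    return t
```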
#### Iv-B2 Encoding for Buchi Automata
First, we convert the NBA to a directed graph, where nodes and edges correspond to states and transitions, respectively, each with their own features. Specifically, the feature of a node is a vector \(\mathbf{v}=[v_{1},v_{2},v_{3}]\), where \(v_{1}\in\{0,1\}\) indicates whether it is an initial state, \(v_{2}\in\{0,1\}\) indicates whether it is a feasible accepting state, and \(v_{3}\) is the (normalized) distance to the closest feasible accepting state. For each edge, the feature is a vector \(\mathbf{e}=[e_{1},e_{2},...,e_{m}]\in\{-1,0,1\}^{m}\) specifying the atomic propositions that need to be true for the underlying transition in the NBA. Since later on we use graph neural networks (GNN) to process features of the NBA, we further transform the directed graph into a heterogeneous graph. This transformation involves adding a new node within each original edge so that the features of the edges are inherited by the added nodes. Additionally, to augment feature aggregation and propagation, a self-loop is added to each node, and a pooling node is added so that each node has a directed edge leading to this pooling node.
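A sketch of the node-feature construction (our own illustration; `rho_to_acc` and `rho_max` stand for the distances to the closest feasible accepting state and the normalization constant, both assumed precomputed):

```python
def nba_node_features(states, q_init, feas_accepting, rho_to_acc, rho_max):
    """Per-state feature vector [v1, v2, v3]: initial-state flag, feasible
    accepting-state flag, and normalized distance to the closest feasible
    accepting state."""
    return {
        q: [float(q == q_init),
            float(q in feas_accepting),
            rho_to_acc[q] / rho_max]
        for q in states
    }
```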
### _Implementation Details of Neural Networks_
#### Iv-C1 PathNet
It has the following four building blocks.
**Map Encoder:** The purpose is to extract features from the grid map by five convolutional blocks. Each block houses a \(4\times 4\) convolutional layer with a stride of \(2\) and padding of \(1\), complemented by a batch-norm layer, a dropout layer with probability \(0.5\), and a LeakyReLU activation with a negative slope of \(0.2\). Note that the size of the feature map evolves from \((m+1)\times 200\times 200\) to \(1024\times 6\times 6\) when passing through these blocks.
Fig. 1: Overview of the sampling network.
**NBA Encoder:** The purpose is to extract global features from the NBA. Comprising five layers, the encoder utilizes Graph Attention (GAT) convolutions to address distinct edge types in the heterogeneous graph [29]. After the convolutions, the node features are processed by a dropout layer and a ReLU activation. Finally, global mean pooling operations are used to aggregate a feature representation of the entire graph.
**Fusion Network:** The purpose is to amalgamate features from both the workspace Map Encoder and the NBA Encoder. This is done in two steps. First, NBA features are transformed via a linear layer to attain compatibility with map features. Then vector concatenation is used to fuse these harmonized features.
**Path Predictor:** The purpose is to output the weight matrix \(\mathcal{P}\in\mathbb{R}^{200\times 200}\) as the prediction for optimal paths. To this end, we use five up-convolution blocks to upscale and refine the amalgamated features. Each block is structured with a transposed convolutional layer, a batch-norm layer, a dropout layer (with probability \(0.5\)), and a ReLU activation. Drawing from the "U-Net" architecture [30], our design integrates skip connections, merging features from the third convolutional modules of both the Map and NBA Encoders. This approach capitalizes on the synergy of both encoders, enhancing the merged feature representation. Subsequent to this fusion, features are relayed to the Path Predictor via concatenation. Leveraging these connections ensures the retention of intricate spatial details alongside the depth of hierarchical features. Finally, we use a \(1\times 1\) convolution to reshape the features to \(1\times 200\times 200\) dimensions, and a sigmoid activation is used to refine the output.
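The two block types admit a compact PyTorch sketch (channel counts are left as parameters; this mirrors the description above rather than reproducing the exact released model):

```python
import torch.nn as nn

def down_block(c_in, c_out):
    """Map Encoder block: 4x4 conv (stride 2, padding 1), batch norm,
    dropout(0.5), LeakyReLU(0.2); five such blocks map 200x200 down to 6x6."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.Dropout(0.5),
        nn.LeakyReLU(0.2),
    )

def up_block(c_in, c_out):
    """Path Predictor block: transposed conv, batch norm, dropout(0.5), ReLU."""
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.Dropout(0.5),
        nn.ReLU(),
    )
```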
#### Iv-C2 StateNet
This net is essentially a classifier for the NBA states. It has a simpler structure consisting of the following two components.
**Map Encoder:** The main purpose is to encode the map into a \(256\)-length feature vector, which serves as a precursor to the Node Predictor. Specifically, the map is processed by a \(1\times 1\) convolution, transitioning the \(8\)-channel input to \(3\) channels, thus aligning with conventional image processing frameworks. The features are then processed by the pre-trained "ResNet-50" model [31]. The terminal fully connected layer of the ResNet is omitted and replaced by our bespoke fully connected layer, rendering the output as a \(256\)-length feature vector.
**Node Predictor:** Tailored for node classification, this component starts with a GAT convolution layer, stretching the graph's node features to a 256-length dimension, in line with the map features. Each node's features are subsequently concatenated with the Map Encoder-generated map feature vector, amalgamating spatial and structural data at every node. A sequence of information dissemination follows via five sequential GAT modules, each housing a GAT convolution, a ReLU activation for non-linearity, and a dropout (with probability \(0.5\)) for regularization. The classification culminates with two sequential fully connected layers and a Softmax layer, analyzing the consolidated node features to yield the classification probability.
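A simplified sketch of this component (our own homogeneous-graph approximation using PyTorch Geometric's `GATConv`; the actual model operates on the heterogeneous graph described in Section IV-B2):

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class NodePredictorSketch(nn.Module):
    """One GAT layer lifts node features to 256 dims, the 256-d map vector is
    concatenated at every node, five GAT+ReLU+dropout modules propagate
    information, and two linear layers plus softmax classify each node."""
    def __init__(self, in_dim=3, hid=256, n_layers=5):
        super().__init__()
        self.lift = GATConv(in_dim, hid)
        self.gats = nn.ModuleList(
            [GATConv(2 * hid if i == 0 else hid, hid) for i in range(n_layers)])
        self.drop = nn.Dropout(0.5)
        self.head = nn.Sequential(nn.Linear(hid, hid), nn.ReLU(), nn.Linear(hid, 2))

    def forward(self, x, edge_index, map_vec):
        h = self.lift(x, edge_index)
        h = torch.cat([h, map_vec.unsqueeze(0).expand(h.size(0), -1)], dim=1)
        for gat in self.gats:
            h = self.drop(torch.relu(gat(h, edge_index)))
        return torch.softmax(self.head(h), dim=-1)
```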
### _Training Neural Networks_
#### Iv-D1 Data Set Preparations
Initially, we randomly generate \(15400\) pairs of workspace and LTL formulae. Specifically, in each \(200\times 200\) grid map workspace, we randomly place obstacles as well as seven distinct labeled regions. The initial location of the robot is also randomized. Note that the grid map is only used for map generation; it is still mapped to and used as a continuous workspace. In order to obtain an expert path for each pair of workspace and LTL task, we use the existing biased-sampling approach with \(10,000\) iterations. The obtained expert path is then encoded into a \(200\times 200\) binary matrix. Specifically, we mark those grids crossed by the expert path, as well as their immediate neighbors, as \(1\). Then we label NBA states visited by the expert path as \(1\). Finally, through data augmentations, this data set is expanded to \(107800\) cases.
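The target-matrix construction can be sketched as follows (an illustration; `path_cells` is the assumed list of grid cells crossed by the expert path):

```python
import numpy as np

def expert_path_target(path_cells, shape=(200, 200)):
    """Binary training target: cells crossed by the expert path and their
    immediate neighbours are set to 1."""
    y = np.zeros(shape, dtype=np.float32)
    for i, j in path_cells:
        y[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] = 1.0
    return y
```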
#### Iv-D2 Training Procedures
We trained the StateNet first. Specifically, we initially fixed the parameters of the ResNet portion and only updated the parameters of the other parts. When the loss became stable after 100 epochs, we unfroze the parameters of the ResNet and continued training for another 20 epochs. Training was performed with the Adam optimizer with an initial learning rate of \(0.001\) and a batch size of \(128\). We used cross entropy loss as the loss function. After training the StateNet, we trained the PathNet. Specifically, we first initialized the GAT layers in the NBA Encoder using the parameters of the GAT layers in the StateNet. After training for 150 epochs, the loss stabilized. We still used the Adam optimizer, with an initial learning rate of \(0.0001\) and a batch size of \(128\). We employed the binary cross entropy loss as the loss function.
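A condensed PyTorch sketch of the two-phase StateNet recipe (an illustration; `model.resnet` is an assumed attribute name for the ResNet backbone, and the data loader format is hypothetical):

```python
import torch

def train_statenet(model, loader, epochs, lr, freeze_backbone):
    """One phase of the two-phase recipe above: freeze (or unfreeze) the
    backbone, then optimize the trainable parameters with Adam."""
    for p in model.resnet.parameters():          # assumed backbone attribute
        p.requires_grad = not freeze_backbone
    opt = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for inputs, labels in loader:            # hypothetical loader format
            opt.zero_grad()
            loss = loss_fn(model(*inputs), labels)
            loss.backward()
            opt.step()

# phase 1: frozen backbone; phase 2: fine-tune everything
# train_statenet(model, loader, epochs=100, lr=1e-3, freeze_backbone=True)
# train_statenet(model, loader, epochs=20,  lr=1e-3, freeze_backbone=False)
```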
## V Simulations & Numerical Experiments
In this section, we provide simulation results for the proposed method. First, a case study is provided to illustrate our approach. Then we perform a set of numerical experiments to evaluate the efficiency of our approach compared with existing sampling strategies. All algorithms were implemented using Python on a Windows 11 computer with an Intel Core i7-13700K 5.40GHz processor.
### _Case Study_
We consider a robot moving in the workspace shown in Figure 2, with the initial position of the robot, obstacles, and labeled regions as depicted in the figure. We consider the following LTL task for the robot:
\[\phi=\Box\Diamond l_{1}\wedge(\neg l_{1}\,\mathcal{U}\,l_{2})\wedge\Diamond l_{3},\]
i.e., the robot needs to (i) visit \(l_{2}\) at least once without visiting \(l_{1}\); (ii) visit \(l_{3}\) at least once; and (iii) visit \(l_{1}\) infinitely often. The NBA of \(\phi\) is shown in Figure 2(b), where state \(4\) is the unique feasible accepting state. The optimal plan found by our algorithm is shown as the red lines in Figure 2. The robot first goes to \(l_{2}\), then goes to \(l_{3}\), and finally stays at \(l_{1}\). That is, only the prefix part contributes to the overall cost.
To better illustrate our NN-guided sampling strategy, we explain how the random tree \(\mathcal{T}\) for the prefix path is initially expanded from the root \(q_{P}^{0}=(\mathbf{x}_{0},q_{B}^{0})\). Since the NBA only has one feasible accepting state, the choice for \(q_{B}^{F,\mathrm{feas}}\) is unique, and we have \(q_{P}^{\mathrm{closest}}=q_{P}^{0}\) since the tree only has a root so far. Then the StateNet predicts \(\mathbf{p}=[0.999,0.486,0.902,0.994,0.997]\). Since \(0\xrightarrow{L(\mathbf{x}_{0})}_{B}0\), we have \(q_{B}^{\text{succ},1}=0\). From the feasible successor states of \(q_{B}^{\text{succ},1}\), we choose \(q_{B}^{\text{succ},2}=2\) since it has a higher probability (0.902) than state \(1\) (0.486). Then, according to the transition condition \(0\xrightarrow{l_{2}}_{B}2\) in the NBA, we select a point in the labeled region \(l_{2}\) as \(\mathbf{x}^{\mathcal{L}}\), which is shown as the yellow point in Figure 3. Then we construct the rectangle region between \(\mathbf{x}^{\mathcal{L}}\) and \(\mathbf{x}^{\mathrm{closest}}\), and state \(\mathbf{x}^{\mathrm{rand}}\) is sampled randomly according to the distribution determined by the PathNet prediction \(\mathcal{P}\) shown in Figure 3(a).
Note that, in this example, there are actually two feasible prefix paths: \(l_{2}\to l_{3}\to l_{1}\) and \(l_{3}\to l_{2}\to l_{1}\). Clearly, the former is optimal with less cost. This information is captured by the prediction result of the StateNet, which prefers state \(2\in\mathcal{Q}_{B}\) for the optimal path. Furthermore, the prediction result of the PathNet effectively avoids obstacles and makes it more likely to sample near the optimal path. Therefore, these prediction results of the neural networks guide our search process to converge to a desired solution more quickly.
### _Comparison with Existing Methods_
#### V-B1 Experiment Settings
We conduct a set of experiments to illustrate the efficiency of our NN-guided sampling strategy compared with the existing uniform and biased sampling strategies. Specifically, independently of the training set, we generate another \(240\) pairs of workspace map and LTL task. For each instance, we run our method with \(\alpha=0.8\) as well as the two existing methods to find a feasible plan. Note that, since all these RRT-based approaches are probabilistically optimal, we focus on comparing _how fast_ these strategies can find _the first feasible solution_, and the performance of the first feasible solution in terms of its length. Formally, we consider the following metrics when the first feasible solution is found: the execution time \(T\) taken, the number of iterations \(n\) required, the number of nodes \(m\) in the random tree, and the length of the first feasible solution \(len\).
#### V-B2 Statistical Results
The results of the numerical experiments are shown in Table I. Specifically, based on the execution time \(T\) of the uniform sampling approach, we divide the tasks into simple tasks with \(T\leq 180\,\mathrm{s}\) and complex tasks with \(T>180\,\mathrm{s}\). Then Tables Ia and Ib show the average value of each metric for each algorithm within these two task categories, respectively. Note that for many complex tasks, the uniform sampling strategy fails to find a feasible path within \(2000\,\mathrm{s}\). For such cases, we terminate the search, recording only \(T\), \(n\) and \(m\) without considering the length \(len\).
The statistical results show that, for both simple and complex tasks, our NN-guided sampling strategy can significantly enhance the efficiency of the RRT-based algorithm. In particular, the time required to obtain a feasible solution by our NN-guided sampling strategy is less than 15% of that of the biased sampling strategy, which is already much more efficient than the uniform sampling strategy. Furthermore, the length of the first feasible solution found by our strategy is similar to that of the other two strategies. We would like to remark that the metric \(len\) is less essential than the other metrics, since it only reflects the quality of the first feasible solution; the entire algorithm will converge to the optimal solution as the number of iterations increases.
## VI Conclusion
In this paper, building upon the current state-of-the-art sampling-based LTL planning algorithms, we propose a novel sampling strategy based on multi-modal neural networks to guide the sampling process. Our approach, on the one hand, leverages the feature extraction power of neural networks in an end-to-end fashion, and on the other hand, still enjoys all good properties of the sampling-based methods such as probabilistic optimality/completeness. Experimental results show that our proposed sampling strategy can significantly enhance the planning efficiency of the algorithm. In future research, we will further improve the feature fusion methods for multi-robot planning problems.
TABLE I: Experiment Results
Fig. 3: Prediction results of the neural networks.
Fig. 2: Workspace of the robot, where the gray regions denote obstacles, the green regions denote labeled regions, and the red lines denote the optimal path synthesized. |
2303.18157 | MAGNNETO: A Graph Neural Network-based Multi-Agent system for Traffic
Engineering | Current trends in networking propose the use of Machine Learning (ML) for a
wide variety of network optimization tasks. As such, many efforts have been
made to produce ML-based solutions for Traffic Engineering (TE), which is a
fundamental problem in ISP networks. Nowadays, state-of-the-art TE optimizers
rely on traditional optimization techniques, such as Local search, Constraint
Programming, or Linear programming. In this paper, we present MAGNNETO, a
distributed ML-based framework that leverages Multi-Agent Reinforcement
Learning and Graph Neural Networks for distributed TE optimization. MAGNNETO
deploys a set of agents across the network that learn and communicate in a
distributed fashion via message exchanges between neighboring agents.
Particularly, we apply this framework to optimize link weights in OSPF, with
the goal of minimizing network congestion. In our evaluation, we compare
MAGNNETO against several state-of-the-art TE optimizers in more than 75
topologies (up to 153 nodes and 354 links), including realistic traffic loads.
Our experimental results show that, thanks to its distributed nature, MAGNNETO
achieves comparable performance to state-of-the-art TE optimizers with
significantly lower execution times. Moreover, our ML-based solution
demonstrates a strong generalization capability to successfully operate in new
networks unseen during training. | Guillermo Bernárdez, José Suárez-Varela, Albert López, Xiang Shi, Shihan Xiao, Xiangle Cheng, Pere Barlet-Ros, Albert Cabellos-Aparicio | 2023-03-31T15:47:49Z | http://arxiv.org/abs/2303.18157v1 | # MAGNNETO: A Graph Neural Network-based Multi-Agent system for Traffic Engineering
###### Abstract
Current trends in networking propose the use of Machine Learning (ML) for a wide variety of network optimization tasks. As such, many efforts have been made to produce ML-based solutions for Traffic Engineering (TE), which is a fundamental problem in ISP networks. Nowadays, state-of-the-art TE optimizers rely on traditional optimization techniques, such as Local search, Constraint Programming, or Linear programming. In this paper, we present MAGNNETO, a distributed ML-based framework that leverages Multi-Agent Reinforcement Learning and Graph Neural Networks for distributed TE optimization. MAGNNETO deploys a set of agents across the network that learn and communicate in a distributed fashion via message exchanges between neighboring agents. Particularly, we apply this framework to optimize link weights in OSPF, with the goal of minimizing network congestion. In our evaluation, we compare MAGNNETO against several state-of-the-art TE optimizers in more than 75 topologies (up to 153 nodes and 354 links), including realistic traffic loads. Our experimental results show that, thanks to its distributed nature, MAGNNETO achieves comparable performance to state-of-the-art TE optimizers with significantly lower execution times. Moreover, our ML-based solution demonstrates a strong generalization capability to successfully operate in new networks unseen during training.
Traffic Engineering, Routing Optimization, Multi-Agent Reinforcement Learning, Graph Neural Networks
## I Introduction
During the last decade, the networking community has devoted significant efforts to build efficient solutions for automated network control, pursuing the ultimate goal of achieving the long-desired _self-driving networks_[1, 2]. In this vein, Machine Learning (ML) is considered as a promising technique for producing efficient tools for autonomous networking [3, 4].
In this paper, we revisit a fundamental networking problem: Traffic Engineering (TE) optimization [5, 6]. TE is among the most common operation tasks in today's ISP networks. Here, the classical optimization goal is to minimize network congestion, which is typically achieved by minimizing the maximum link utilization in the network [7, 8, 9, 10, 11]. Given the relevance of this problem, we have witnessed a plethora of proposals approaching this problem from different angles, such as optimizing the configuration of widely deployed link-state protocols (e.g., OSPF [12]), making fine-grained flow-based routing, or re-routing traffic across overlay networks [13, 14].
Likewise, over the last years the networking community has focused on developing effective ML-based solutions for TE. In particular, many works propose the use of Reinforcement Learning (RL) for efficient TE optimization (e.g., [15, 16, 17, 18]). However, at the time of this writing, no ML-based proposal has succeeded in replacing long-established TE solutions; indeed, the best performing TE optimizers to date are based on traditional optimization algorithms, such as Constraint Programming [10], Local Search [9], or Linear Programming [11, 19].
In this paper, we present MAGNNETO, a novel ML framework for distributed TE optimization leveraging Graph Neural Networks (GNN) [20] and Multi-Agent Reinforcement Learning (MARL) [21] at its core1. In the proposed algorithm, an RL-based agent is deployed on each router. Similarly to standard intradomain routing protocols (e.g., OSPF), MAGNNETO's agents exchange information with their neighbors in a distributed manner. In particular, agents communicate via a neural network-driven message passing mechanism, and learn how to cooperate to pursue a common optimization goal. As a result, the proposed framework is fully distributed, and agents learn how to effectively communicate to perform intradomain TE optimization, i.e. to minimize the maximum link utilization in the network.
Footnote 1: MAGNNETO stands for Multi-Agent Graph Neural Network Optimization. The code of this framework and all the data needed to reproduce our experiments are publicly available at: [https://github.com/BNN-UPC/Papers/wiki/MAGNNETO-TE](https://github.com/BNN-UPC/Papers/wiki/MAGNNETO-TE).
More in detail, MAGNNETO presents the following contributions:
**Top performance with very low execution times:** We compare MAGNNETO against a curated set of well-established TE solutions: SRLS [9], DEFO [10] and TabulGPWO [11]. These solutions implement mature optimization techniques on top of expert knowledge. As a result, they are able to achieve close-to-optimal performance in large-scale networks within minutes [22]. Our results show that MAGNNETO achieves comparable performance to these state-of-the-art TE solutions, while being significantly faster. In fact, when enabling several simultaneous actions in our framework, it runs up to three orders of magnitude faster than the baseline optimizers (sub-second vs. minutes) in networks with 100+ nodes. The reason for this is the fully decentralized
architecture of MAGNNETO, which naturally distributes and parallelizes the execution across the network.
**Generalization over unseen networks:** A common downside of current ML-based solutions applied to networking is their limited performance when operating in networks different from those seen during training, which is commonly referred to as lack of _generalization_[23]. Without generalization, training _must_ be done on the same network where the ML-based solution is expected to operate. Hence, from a practical standpoint generalization is a crucial aspect, as training directly in networks in production is typically unfeasible. MAGNNETO implements internally a GNN, which introduces proper learning biases to generalize across networks of different sizes and structures [23]. In our evaluation, we train MAGNNETO in two different networks, and test its performance and speed on 75 real-world topologies from the Internet Topology Zoo [24] not seen before. Our results show that in such scenarios, MAGNNETO still achieves comparable performance to state-of-the-art TE optimizers, while being significantly faster.
**No need for overlay technologies:** Recent TE optimizers rely on novel overlay technologies to achieve their optimization goals [9, 10]. By leveraging Segment Routing [25], these solutions are able to use arbitrary overlay paths that are not routed via the standard OSPF weights. This makes it possible to extend the routing space to a source-destination granularity and, as shown in the literature, it renders outstanding results. However, in this paper we show that comparable performance is achievable by using only standard destination-based OSPF routing. Indeed, MAGNNETO is fully compliant with current OSPF-based networks, and does not require the use of any overlay technology.
MAGNNETO is partially based on an earlier version presented at [26]. In that work, we raised an open question: _Is ML ready for Traffic Engineering optimization?_ Our goal was to discuss whether state-of-the-art ML techniques are mature enough to outperform traditional TE solutions; to this end, we presented a ML framework for TE optimization and made an exploratory evaluation of it. This paper dives deeper into this question by formulating an enhanced ML framework -MAGNNETO- and performing a much more comprehensive evaluation. We summarize below the main novelties of this work with respect to [26]:
* MAGNNETO formulates the TE problem as a Decentralized Partially-Observable Markov Decision Process (DecPOMDP), which enables a more functional MARL setting. Instead, the previous solution [26] operated over a classical MDP, where agents had to take actions sequentially in a synchronized manner.
* MAGNNETO supports simultaneous actions at each RL optimization step. This dramatically reduces the execution time (up to 10x in our experiments) with respect to the previous framework, which was limited by design to one action per step.
* We present in this paper an extensive evaluation including 75+ real-world topologies, large-scale scenarios (up to 153 nodes), and a benchmark consisting of a representative collection of advanced TE optimizers. In contrast, the evaluation of [26] only considered 3 different topologies of limited size (up to 24 nodes), and the results were compared against a single TE solver.
The remainder of this paper is organized as follows. Section II describes the TE scenario where we deploy the proposed ML-based system. Section III formalizes MAGNNETO as a general framework for networked environments. Afterwards, Section IV describes how we adapt this framework to perform intradomain TE optimization. In Section V, we present an extensive evaluation of the proposed framework against state-of-the-art TE proposals. Section VI summarizes the main existing works related to this paper, and lastly Section VII concludes the paper.
## II Network Scenario
This section describes the intradomain TE scenario where MAGNNETO operates. We consider the intradomain TE problem, where network traffic is measured and routed to minimize network congestion. Typically, IP networks run link-state Interior Gateway Protocols (IGP), such as Open Shortest Path First (OSPF) [12], that choose paths using Dijkstra's algorithm over some pre-defined link weights.
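To make this routing model concrete, the sketch below illustrates how a set of OSPF-like link weights induces Dijkstra shortest paths, and how the resulting maximum link utilization (the _MinMaxLoad_ metric targeted later in this paper) is evaluated. It is a deliberately simplified toy -single-path routing without ECMP splitting, and a made-up 3-node graph with made-up demands and capacities- not the implementation used in this work:

```python
# Toy model: link weights -> Dijkstra routes -> per-link load -> MinMaxLoad.
import networkx as nx
from collections import defaultdict

def max_link_utilization(G, traffic_matrix):
    """G: nx.DiGraph with 'weight' and 'capacity' on each edge.
    traffic_matrix: {(src, dst): traffic demand}."""
    load = defaultdict(float)
    for (src, dst), demand in traffic_matrix.items():
        path = nx.shortest_path(G, src, dst, weight="weight")  # Dijkstra
        for u, v in zip(path, path[1:]):
            load[(u, v)] += demand
    return max(load[e] / G.edges[e]["capacity"] for e in load)

G = nx.DiGraph()
for u, v in [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]:
    G.add_edge(u, v, weight=1, capacity=10.0)
print(max_link_utilization(G, {(0, 2): 8.0, (1, 2): 4.0}))  # -> 0.8
```

Changing a single weight can reroute entire demands, which is why even such a restricted configuration space is expressive; searching over these weight settings is precisely the optimization problem addressed in this paper.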
There exists a wide range of architectures and algorithms for TE in the literature [27]. Network operators commonly use commercial tools [28, 29] to fine-tune link weights. However, other mechanisms propose to add extra routing entries [30] or end-to-end tunnels (e.g., RSVP-TE [31]) to perform source-destination routing, thus expanding the solution space.
MAGNNETO is a fully distributed framework that interfaces with standard OSPF, by optimizing the link weights used by such protocol. As a result, it does not require any changes to OSPF and it can be implemented with a software update on the routers where it is deployed. In this context, relying on well-known link-state routing protocols, such as OSPF, offers the advantage that the network is easier to manage compared to finer-grained alternatives, such as flow-based routing [32].
Figure 1 illustrates the general operational workflow of MAGNNETO:
**1) Traffic Measurement:** First, a traffic measurement platform deployed over the network identifies a new Traffic Matrix (TM). This new TM is communicated to all participating routers (Fig. 1, step 1), which upon reception will start the next step and optimize the routing for this TM. We leave out of the scope of this paper the details of this process, as TM estimation is an extensive research field with many established proposals. For instance, this process can be done periodically (e.g., every 5-10 minutes as in [11]), where the TM is first estimated and then optimized. Some proposals trigger the optimization process when a relevant change is detected in the TM [33], while others use prediction techniques to optimize it in advance [34]. Also, some real-world operators make estimates considering their customers' subscriptions and operate based on a static TM. Our proposal is flexible and can operate with any of these approaches.

Figure 1: Intradomain traffic engineering optimization with MAGNNETO.
**2) MAGNNETO TE optimization:** Once routers receive the new TM, the distributed RL-based agents of MAGNNETO start the TE optimization process, which eventually computes the per-link weights that optimize OSPF routing in the subsequent step (Fig. 1, step 2). Particularly, we set the goal to minimize the maximum link load (_MinMaxLoad_), which is a classic TE goal in carrier-grade networks [7, 8, 10]. This problem is known to be NP-hard, and even good settings of the weights can deviate significantly from the optimal configuration [8, 32]. Our MARL optimization system is built using a distributed Graph Neural Network (GNN) that exchanges messages over the physical network topology. Messages are sent between routers and their directly attached neighbors. Such messages carry _hidden states_ that are produced and consumed by artificial neural networks and do not have a human-understandable _meaning_. The GNN makes several message iterations and, during this phase, the local configuration of routers remains unchanged, thus having no impact on the current traffic. More details about the inner workings, performance, communication overhead, and computational cost can be found in Sections III-V.
**3) OSPF convergence:** Lastly, the standard OSPF convergence process is executed taking into account the new per-link weights computed by MAGNNETO. Specifically, each agent has computed the optimal weights for its locally attached links. For OSPF to recompute the new forwarding tables, it needs to broadcast the new link weights; this is done using the standard OSPF Link-State Advertisements (LSAs) [12]. Once the routers have an identical view of the network, they compute locally their new forwarding tables (Fig. 1, step 3), and traffic is routed following the optimization goal. Convergence time of OSPF is a well-studied subject. For instance, routing tables can converge in the order of a few seconds in networks with thousands of links [35].
## III MAGNNETO
This section provides a detailed description of how MAGNNETO operates. To do so, we first briefly introduce the main ML methodologies it implements. Note that MAGNNETO is conceived as a general framework to optimize networked environments in a distributed fashion; details on how it is particularly adapted to intradomain TE are then provided in Section IV.
### _Related ML-based Technologies_
MAGNNETO incorporates two well-known ML-based mechanisms: Multi-Agent Reinforcement Learning and Graph Neural Networks. Let us provide some background on these technologies:
#### III-A1 Reinforcement Learning (RL)
According to the regular setting of RL [36], an agent interacts with the environment in the following way: at each step \(t\), the agent selects an action \(a_{t}\) based on its current state \(s_{t}\), to which the environment responds with a reward \(r_{t}\) and then moves to the next state \(s_{t+1}\). This interaction is modeled as an episodic, time-homogeneous Markov Decision Process (MDP) \((\mathcal{S},\mathcal{A},r,P,\gamma)\), where \(\mathcal{S}\) and \(\mathcal{A}\) are respectively the state and action spaces; \(P\) is the transition kernel, \(s_{t+1}\sim P(\cdot|s_{t},a_{t})\); \(r_{t}\) represents the immediate reward given by the environment after taking action \(a_{t}\) from state \(s_{t}\); and \(\gamma\in(0,1]\) is the discount factor used to compute the return \(G_{t}\), defined as the \(\gamma\)-discounted cumulative reward from a certain time-step \(t\) to the end of the episode \(T\): \(G_{t}=\sum_{k=t}^{T}\gamma^{k-t}r_{k}\). The behavior of the agent is described by a policy \(\pi:\mathcal{S}\rightarrow\mathcal{A}\), which maps each state to a probability distribution over the action space, and the goal of an RL agent is to find the optimal policy in the sense that, given any considered state \(s\in\mathcal{S}\), it always selects an action that maximizes the expected return \(\hat{G}_{t}\). There are two main model-free approaches to this end [37]:
* Action-value methods, typically referred to as Q-learning; the policy \(\pi\) is indirectly defined from the learned estimates of the action value function \(Q^{\pi}(s,a)=\mathbb{E}_{\pi}\left[G_{t}|s_{0}=s,a_{0}=a\right]\).
* Policy Gradient (PG) methods, which directly attempt to learn a parameterized policy representation \(\pi_{\theta}\). The Actor-Critic family of PG algorithms also involves learning a function approximator \(V_{\phi}(s)\) of the state value function \(V^{\pi_{\theta}}(s)=\mathbb{E}_{\pi_{\theta}}\left[G_{t}|s_{t}=s\right]\). In this case, actions are exclusively selected from function \(\pi_{\theta}\), which estimates the policy (i.e., the actor), but the training of such policy is guided by the estimated value function \(V_{\phi}(s)\), which assesses the consequences of the actions taken (i.e., the critic).
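As a small numerical companion to the return \(G_{t}\) defined above, the following sketch (with made-up reward values) computes it for every step of an episode via the standard backward recursion \(G_{t}=r_{t}+\gamma G_{t+1}\):

```python
def discounted_returns(rewards, gamma=0.97):
    G, out = 0.0, []
    for r in reversed(rewards):  # backward pass: G_t = r_t + gamma * G_{t+1}
        G = r + gamma * G
        out.append(G)
    return out[::-1]             # [G_0, G_1, ..., G_T]

print(discounted_returns([0.0, 0.1, 0.3]))  # reward values are illustrative only
```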
#### III-A2 Multi-Agent Reinforcement Learning (MARL)
In a MARL framework there is a set of agents \(\mathcal{V}\) interacting with a common environment that have to learn how to cooperate to pursue a common goal. Such a setting is generally formulated as a Decentralized Partially Observable MDP (DecPOMDP) [21] where, besides the global state space \(\mathcal{S}\) and action space \(\mathcal{A}\), it distinguishes local state and action spaces for every agent -i.e., \(\mathcal{S}_{v}\) and \(\mathcal{A}_{v}\) for \(v\in\mathcal{V}\). At each time step \(t\) of an episode, each agent may choose an action \(a_{t}^{v}\in\mathcal{A}_{v}\) based on local observations of the environment encoded in its current state \(s_{t}^{v}\in\mathcal{S}_{v}\). Then, the environment produces individual rewards \(r_{t}^{v}\) (and/or a global reward \(r_{t}\)), and it evolves to a next global state \(s_{t+1}\in\mathcal{S}\) -i.e., each agent \(v\) transitions to the following state \(s_{t+1}^{v}\in\mathcal{S}_{v}\). Typically, a MARL system seeks the optimal global policy by learning a set of local policies \(\{\pi_{\theta_{v}}\}_{v\in\mathcal{V}}\). To do so, most state-of-the-art MARL solutions implement traditional (single-agent) RL algorithms on each distributed agent, while incorporating some kind of cooperation mechanism between them [21]. The standard approach for obtaining a robust decentralized execution, however, is based on a centralized training where extra information can be used to guide agents' learning [38].
#### III-A3 Graph Neural Networks (GNN)
These models are a recent family of neural networks specifically conceived to operate over graph-structured data [20, 23]. Among the numerous GNN variants developed to date [39], we focus on Message Passing Neural Networks (MPNN) [40], which is a well-known type of GNN whose operation is based on an iterative message-passing algorithm that propagates information between elements in a graph \(\mathcal{G}=(\mathcal{N},\mathcal{E})\). Focusing on the set of nodes, the process is as follows: first, each node \(v\in\mathcal{N}\) initializes its hidden state \(h_{v}^{0}\) using some initial features already included in the input graph. At every message-passing step \(k\), each node \(v\) receives via messages the current hidden state of all the nodes in its neighborhood \(\mathcal{B}(v)=\{u\in\mathcal{N}\,|\,\exists e\in\mathcal{E},e=(u,v)\lor e=(v,u)\}\), and processes them individually by applying a message function \(m(\cdot)\) together with its own internal state \(h_{v}^{k}\). Then, the processed messages are combined by an aggregation function \(a(\cdot)\):
\[M_{v}^{k}=a(\{m(h_{v}^{k},h_{u}^{k})\}_{u\in\mathcal{B}(v)}) \tag{1}\]
Finally, an update function \(u(\cdot)\) is applied to each node \(v\); taking as input the aggregated messages \(M_{v}^{k}\) and its current hidden state \(h_{v}^{k}\), it outputs a new hidden state for the next step (\(k+1\)):
\[h_{v}^{k+1}=u(h_{v}^{k},M_{v}^{k}). \tag{2}\]
After a certain number of message passing steps \(K\), a readout function \(r(\cdot)\) takes as input the final node states \(h_{v}^{K}\) to produce the final output of the GNN model. This readout function can predict either features of individual elements (e.g., a node's class) or global properties of the graph. Note that an MPNN model generates _a single set of message, aggregation, update, and readout functions that are replicated at each selected graph element_.
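The following sketch shows what one message-passing iteration (Eqs. 1-2) looks like in code. The message and update networks are untrained stand-ins, the aggregation is an element-wise max for brevity (Section IV uses both min and max), and all sizes are illustrative assumptions rather than the exact modules used in this work:

```python
import torch
import torch.nn as nn

dim = 16
msg_fn = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())  # m(.)
upd_fn = nn.GRUCell(dim, dim)                               # u(.)

def message_passing_step(h, neighbors):
    """h: [num_nodes, dim] hidden states; neighbors: {v: [u1, u2, ...]}."""
    h_next = h.clone()
    for v, nbrs in neighbors.items():
        # Eq. (1): combine h_v with each neighbor state, then aggregate.
        msgs = msg_fn(torch.cat([h[v].expand(len(nbrs), -1), h[nbrs]], dim=1))
        M_v = msgs.max(dim=0).values
        # Eq. (2): update h_v from its previous state and the aggregate.
        h_next[v] = upd_fn(M_v.unsqueeze(0), h[v].unsqueeze(0)).squeeze(0)
    return h_next

with torch.no_grad():
    h = torch.rand(4, dim)
    nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    for _ in range(4):  # K = 4 message-passing steps
        h = message_passing_step(h, nbrs)
```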
### _Execution Framework_
MAGNNETO internally models a networked environment as a graph \(\mathcal{G}=(\mathcal{N},\mathcal{E},\mathcal{V})\), with \(\mathcal{N}\) and \(\mathcal{E}\) representing the set of nodes and edges, respectively, and \(\mathcal{V}\) acting as a set of agents that can control some of the graph entities (nodes or edges). Let \(\mathcal{S}\) and \(\mathcal{A}\) represent the global state and action spaces, respectively, defined as the joint and union of the respective agents' local spaces, \(\mathcal{S}=\prod_{v\in\mathcal{V}}\mathcal{S}_{v}\) and \(\mathcal{A}=\bigcup_{v\in\mathcal{V}}\mathcal{A}_{v}\). The theoretical framework of MAGNNETO allows implementing both Q-learning and PG methods, so for the sake of generalization let \(f_{\theta}\) represent the global RL-based function to be learned -i.e., the global state-action value function \(Q_{\theta}\) for the former, or the global policy \(\pi_{\theta}\) for the latter.
A main contribution of MAGNNETO is that it makes all agents \(v\in\mathcal{V}\) learn the global RL-based function approximator in a fully distributed fashion -i.e., all agents end up constructing and having access to the very same representation \(f_{\theta}\). In particular, and from a theoretical RL standpoint, this allows formulating the problem within two different paradigms depending on the number of actions allowed at each time-step of the RL episode. On the one hand, imposing a single action per time-step makes it possible to devise the problem as a time-homogeneous MDP of single-agent RL [37]. On the other hand, it requires the more challenging Dec-POMDP formalization of standard MARL [21] when letting several agents act simultaneously. Note, however, that in practice the execution pipeline of MAGNNETO is exactly the same in both cases.
Another relevant feature of our design is that all agents \(v\in\mathcal{V}\) are able to internally construct such global representation \(f_{\theta}\) mainly through message communications with their direct neighboring agents \(\mathcal{B}(v)\) and their local computations, no longer needing a centralized entity responsible for collecting and processing all the global information together. Such a decentralized, message-based generation of the global function is achieved by modeling the global function \(f_{\theta}\) with an MPNN (see Sec. III-A3), so that all agents \(v\in\mathcal{V}\) deployed in the network are actually _replicas_ of the MPNN modules (message, aggregation, update and readout functions) that perform regular message exchanges with their neighbors \(\mathcal{B}(v)\) following the message passing iteration procedure of MPNNs; in particular, note that such _parameter sharing_ implies that all agents also share the same local state and action spaces. This reinterpretation of an MPNN as a set of copies of its internal modules is especially important due to the fact that in our approach we directly map the graph \(\mathcal{G}\) to a real networked scenario, deploying copies of the MPNN modules along hardware devices in the network (e.g., routers) and making all message communications involved actually go through the real network infrastructure. Hence, our proposed architecture naturally distributes the execution of the MPNN, and consequently is able to fully decentralize the execution of single-agent RL algorithms.
Algorithm 1 summarizes the resulting distributed pipeline. At each time-step \(t\) of an episode of length \(T\), the MPNN-driven process of approximating the function \(f_{\theta}(s_{t},a_{t})\) -where \(s_{t}\in\mathcal{S}\) and \(a_{t}\in\mathcal{A}\) refer to the global state and action at \(t\)- first constructs a meaningful hidden state \(h_{v}\) for each agent \(v\in\mathcal{V}\). Each hidden state \(h_{v}\) basically depends on the hidden representations of the neighboring agents \(\mathcal{B}(v)\), and its initialization \(h_{v}^{0}\) is a function of the current agent state \(s_{v}^{t}\in\mathcal{S}_{v}\), which is in turn based on some pre-defined internal agent features \(x_{v}^{t}\). Those representations are shaped during \(K\) message-passing steps, where hidden states are iteratively propagated through the graph via messages between direct neighbors. In particular, successive hidden states \(h_{v}^{k}\), where \(k\) accounts for the message-passing step, are computed by the message, aggregation and update functions of the MPNN, as previously described in Section III-A3.
Once agents generate their final hidden representation, a readout function -following the MPNN nomenclature- is applied to each agent to finally obtain the global function \(f_{\theta}\). Particularly, in our system the readout is done in two steps: first, each agent \(v\in\mathcal{V}\) implements a local readout that takes as input the final representation \(h_{v}^{K}\), and outputs the final value -or a representation- of the global function \(f_{\theta}\) over every possible action in the agent's space \(\mathcal{A}_{v}\); for instance, this output could be the unnormalized log probability (i.e., logit) of the agent's actions in case of PG methods,
or directly the q-value associated to each action when considering Q-learning algorithms. The second and last step involves a communication layer that propagates such individual outputs to the rest of the agents, so that all of them can internally construct the global representation of \(f_{\theta}\) for the overall network state \(s_{t}=\prod_{v\in\mathcal{V}}s_{v}^{t}\) and all possible actions \(\bigcup_{v\in\mathcal{V}}\{a_{v,0},a_{v,1},\ldots,a_{v,i}\}\), with \(i\in\mathbb{N}\backslash\{0\}\) the number of actions of local agent spaces \(\mathcal{A}_{v}\). Finally, to ensure that all distributed agents sample the same actions when \(f_{\theta}\) encodes a distribution, they are provided with the same probabilistic seed before initiating the process. Consequently, at each time-step \(t\) only those agents whose action has been selected actually execute it. Note that actions are not actually applied over the network configuration until the whole optimization process finishes.
```
Input: A graph \(\mathcal{G}=(\mathcal{N},\mathcal{E})\) with a set of agents \(\mathcal{V}\), trained MPNN parameters \(\theta=\{\theta_{i}\}_{i\in\{m,a,u,r\}}\)
Given: Initial graph configuration \(X^{0}_{\mathcal{G}}\), episode length \(T\), number of message-passing steps \(K\)
1:  Agents initialize their states \(s^{0}_{v}\) based on \(X^{0}_{\mathcal{G}}\)
2:  for \(t\gets 0\) to \(T\) do
3:    Agents initialize their hidden states \(h^{0}_{v}\leftarrow(s^{t}_{v},0,\ldots,0)\)
4:    for \(k\gets 0\) to \(K\) do
5:      Agents share their current hidden state \(h^{k}_{v}\) with neighboring agents \(\mathcal{B}(v)\)
6:      Agents process the received messages \(M^{k}_{v}\gets a_{\theta_{a}}(\{m_{\theta_{m}}(h^{k}_{v},h^{k}_{u})\}_{u\in\mathcal{B}(v)})\)
7:      Agents update their hidden state \(h^{k+1}_{v}\gets u_{\theta_{u}}(h^{k}_{v},M^{k}_{v})\)
8:    end for
9:    Agents partially evaluate the RL function \(f_{\theta}\) over their own actions \(\{f_{\theta}(s_{t},a)\}_{a\in\mathcal{A}_{v}}\gets r_{\theta_{r}}(h^{K}_{v})\)
10:   Agents receive the partial evaluations of \(f_{\theta}\) from the rest of the agents and build the global representation \(f_{\theta}\leftarrow\{f_{\theta}(s_{t},a)\}_{a\in\mathcal{A}}\)
11:   Agents select the same set of actions \(A_{t}\) according to \(f_{\theta}\)
12:   Agents whose action was selected execute it, and the environment updates the graph configuration \(X^{t+1}_{\mathcal{G}}\)
13:   Agents update their states \(s^{t+1}_{v}\) based on \(X^{t+1}_{\mathcal{G}}\)
14:  end for
Output: New graph configuration \(X^{*}_{\mathcal{G}}\) that optimizes some pre-defined objective or metric
```
**Algorithm 1** MAGNNETO's execution pipeline.
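A subtle but practical point in lines 10-11 of Algorithm 1 is how fully distributed agents manage to sample _identical_ actions without further coordination: every agent holds the same global logits and the same probabilistic seed, so each local draw is deterministic and equal everywhere. A minimal sketch of this shared-seed trick follows (function names and sizes are illustrative assumptions):

```python
import numpy as np

def sample_actions(global_logits, n_actions, seed):
    """Run locally at every agent; identical inputs give identical outputs."""
    rng = np.random.default_rng(seed)        # same seed at all agents
    p = np.exp(global_logits - global_logits.max())
    p /= p.sum()                             # softmax over the global action set
    return rng.choice(len(global_logits), size=n_actions, replace=False, p=p)

logits = np.array([0.2, 1.5, -0.3, 0.9, 0.1])
print(sample_actions(logits, n_actions=2, seed=42))  # same result at every agent
```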
## IV MAGNNETO for Traffic Engineering
In this section we describe the particular adaptations of the general MAGNNETO framework when applying it to the intradomain TE scenario described in Section II. Moreover, we provide some details about the training pipeline of our models.
### _General Setting_
A straightforward approach to map the graph \(\mathcal{G}\) of the described MAGNNETO framework to a computer network infrastructure is to associate the nodes \(\mathcal{N}\) with hardware devices (e.g., routers, switches) and the edges \(\mathcal{E}\) with the physical links of the network. Regarding the set of agents \(\mathcal{V}\), they can be identified either with the set of nodes, so that they individually control a hardware device, or with the set of edges, by controlling some configuration parameters of a link connecting two devices.
In the intradomain TE problem, the goal is to learn the set of link weights \(\mathcal{W}=\{w_{e}\}_{e\in\mathcal{E}}\) that minimizes the maximum link utilization for a certain traffic matrix \(TM\). Hence, we adapt MAGNNETO so that each agent controls a link (i.e., \(\mathcal{V}\equiv\mathcal{E}\)) and can modify its weight \(w_{e}\); in fact, in order to make the notation simpler, from now on we will refer to each agent \(v\in\mathcal{V}\) as the edge \(e\in\mathcal{E}\) it represents. We also note that:
* computer networks are commonly represented as directed graphs with links in both directions, so for each directed link \(e=(n^{src}_{e},n^{dst}_{e})\in\mathcal{E}\), with \(n^{src}_{e},n^{dst}_{e}\in\mathcal{N}\), we define its neighborhood as the set \(\mathcal{B}(e)\) of edges whose source node coincides with the destination node of \(e\), i.e. \(\mathcal{B}(e)=\{e^{\prime}\in\mathcal{E}\,|\,n^{src}_{e^{\prime}}=n^{dst}_{e}\}\). In other words, edges in \(\mathcal{B}(e)\) are those links that can potentially receive traffic from link \(e\).
* in practice, link-based agents \(e\in\mathcal{E}\) would be deployed and executed in their adjacent source (\(n^{src}_{e}\)) or destination (\(n^{dst}_{e}\)) hardware device.
Furthermore, we implement a well-known Actor-Critic method named Proximal Policy Optimization (PPO) [41], which offers a favorable balance between reliability, sample complexity, and simplicity. Consequently, in this case the global function \(f_{\theta}\) of the framework (see Sec. III-B) is the global policy \(\pi_{\theta}\) of the actor. Regarding the critic's design, more information can be found in Section IV-C.
### _Adapting MAGNNETO to TE_
Having established the general configuration of our MAGNNETO implementation, we now further describe its operation when dealing with the intradomain TE objective. To do so, let us reinterpret each of the main fundamental elements introduced earlier from a TE perspective:
#### IV-B1 Environment
We consider episodes of a fixed number of time-steps \(T\). At the beginning of each episode, the environment provides a set of traffic demands between all source-destination pairs (i.e., an estimated traffic matrix [11]). Each link \(e\in\mathcal{E}\) has an associated capacity \(c_{e}\), and it is initialized with a certain link weight \(w^{0}_{e}\). These link weights are in turn used to compute the routers' forwarding tables, using the standard Dijkstra's algorithm. Each agent \(v_{e}\in\mathcal{V}\) has access to its associated link features, which in our case are the current weight, its capacity, the estimated traffic matrix and the weights of the other links. This can be achieved with standard procedures in OSPF-based environments (see Sec. II).
#### IV-B2 State Space and Message Passing
At each time-step \(t\) of an episode, each link-based agent \(v_{e}\in\mathcal{V}\) feeds its MPNN module with its input features \(x^{t}_{e}\) to generate its respective initial hidden state \(h^{0}_{e}\) (Figure 2.a). In particular, agents consider
as input features the current weight \(w_{e}^{t}\) and the utilization \(u_{e}^{t}\in[0,1]\) of the link, and construct their initial link hidden representations \(h_{e}^{0}\) as a fixed-size vector where the first two components are the input features and the rest is zero-padded. Note that the link utilization can be easily computed by the agent with the information of the estimated traffic matrix and the global link weights locally maintained. Then, the algorithm performs \(K\) message-passing steps (Figures 2.b and 2.c). At each step \(k\), the algorithm is executed in a distributed fashion over all the links of the network. Particularly, each link-based agent \(e\in\mathcal{E}\) receives the hidden states of its neighboring agents \(\mathcal{B}(e)\), and combines them individually with its own state \(h_{e}^{k}\) through the \(message\) function (a fully-connected NN). Then, all these outputs are gathered according to the \(aggregation\) function -in our case, element-wise min and max operations- producing the combination \(M_{e}^{k}\). Afterwards, another fully-connected NN is used as the \(update\) function, which combines the link's hidden state \(h_{e}^{k}\) with the new aggregated information \(M_{e}^{k}\), and produces a new hidden state representation for that link (\(h_{e}^{k+1}\)). As mentioned above, this process is repeated \(K\) times, leading to some final link hidden state representations \(h_{e}^{K}\).
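For illustration, the zero-padded initialization of a link agent's hidden state described above could look as follows (dimension 16, matching the configuration reported in Section V-B; this is a sketch, not the authors' code):

```python
import torch

def init_hidden(weight, utilization, dim=16):
    """h_e^0: the first two slots carry the link features, the rest is padding."""
    h0 = torch.zeros(dim)
    h0[0], h0[1] = weight, utilization
    return h0
```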
#### IV-B3 Action Space
In our implementation, each agent \(e\in\mathcal{E}\) can only take a single action: to increase its link weight \(w_{e}\) by one unit. In particular, the agent's action selection (Figure 2.d) is done as follows: first, every agent applies a local readout function -implemented with a fully-connected NN- to its final hidden state \(h_{e}^{K}\), from which it obtains the global logit estimate of choosing its action (i.e., increasing its link weight) over the actions of the other agents. Then, as previously described in Section III-B, these logits are shared among agents in the network, so that each of them can construct the global policy distribution \(\pi_{\theta}\). By sharing the same probabilistic seed, all agents sample locally the same set of actions \(A_{t}\). Finally, agents whose action has been selected increase by one unit the weight of their associated link in their internal global state copy, which is then used to compute the new link utilization \(u_{e}^{t+1}\) under the new weight setting, as well as to initialize the hidden state representation in the next time-step \(t+1\).
#### IV-B4 Reward Function
During training, a reward function is computed at each step \(t\) of the optimization episode. In our case, given our optimization goal we directly define the reward \(r_{t}\) as the difference of the global maximum link utilization between steps \(t\) and \(t+1\). Note that this reward can be computed locally at each agent from its global state copy, which is incrementally updated with the new actions applied at each time-step.
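In code, this reward could be sketched as below, where `max_util` stands for any function that evaluates the _MinMaxLoad_ metric (e.g., the toy helper sketched in Section II); the interface is an assumption for illustration, not the implementation used in this work:

```python
def step_reward(G, tm, selected_links, max_util):
    """Reward r_t: decrease in maximum link utilization after the actions."""
    u_before = max_util(G, tm)
    for e in selected_links:          # apply the selected weight increments
        G.edges[e]["weight"] += 1     # +1 is the single action type (Sec. IV-B3)
    u_after = max_util(G, tm)
    return u_before - u_after         # positive when congestion decreases
```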
### _Training Details_
The training procedure highly depends on the type of RL algorithm chosen. In our particular implementation, given that we consider an Actor-Critic method (PPO), the training objective is to optimize the parameters \(\{\theta,\phi\}\) so that:
* the previously described GNN-based actor \(\pi_{\theta}\) becomes a good estimator of the optimal global policy;
* the critic \(V_{\phi}\) learns to approximate the state value function of any global state.
As commented in Section III-A1, the goal of the critic is to guide the learning process of the actor; it is no longer needed at execution time. Therefore, giving \(V_{\phi}\) a centralized design has no impact on the distributed nature of MAGNNETO.
In fact, following the standard approach of MARL systems [38], the training of MAGNNETO is performed in a centralized fashion, and such centrality precisely comes from the critic's model. In particular, we have implemented \(V_{\phi}\) as another link-based MPNN, similar to the actor but with a centralized readout that takes as input all link hidden states and outputs the value function estimate. We also considered an MPNN-based critic to exploit the relational reasoning provided by GNNs; however, note that any other alternative design might be valid as well.
At a high level, the training pipeline is as follows. First, an episode of length \(T\) is generated by following the current policy \(\pi_{\theta}\), while at the same time the critic's value function \(V_{\phi}\) evaluates each visited global state; this defines a trajectory \(\{s_{t},a_{t},r_{t},p_{t},V_{t},s_{t+1}\}_{t=0}^{T-1}\), where \(p_{t}=\pi_{\theta}(a_{t}|s_{t})\) and \(V_{t}:=V_{\phi}(s_{t})\). When the episode ends, this trajectory is used to update the model parameters -through several epochs of minibatch Stochastic Gradient Descent- by maximizing the global PPO objective \(L^{PPO}(\theta,\phi)\) described in [41]. The same process of generating episodes and updating the model is repeated for a fixed number of iterations to guarantee convergence.
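For reference, a generic sketch of the clipped PPO objective from [41] is shown below; it is not MAGNNETO's code, but the default coefficients correspond to those reported later in Section V-B (0.2 clipping, 0.5 critic factor, 0.001 entropy weight):

```python
import torch

def ppo_loss(logp_new, logp_old, adv, values, returns, entropy,
             clip_eps=0.2, vf_coef=0.5, ent_coef=0.001):
    ratio = torch.exp(logp_new - logp_old)          # pi_theta / pi_theta_old
    surrogate = torch.min(ratio * adv,
                          torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv)
    return (-surrogate.mean()                              # actor (policy) term
            + vf_coef * (returns - values).pow(2).mean()   # critic term
            - ent_coef * entropy.mean())                   # exploration bonus
```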
## V Evaluation
In this section we extensively evaluate MAGNNETO in an intradomain TE scenario: we benchmark it against a curated set of advanced TE optimizers in more than 75 different real-world topologies, using realistic traffic loads. As shown in our experimental results, MAGNNETO achieves similar performance to state-of-the-art TE optimizers with a significantly lower execution time. We begin by describing the considered baselines as well as the setup used in our evaluations. The rest of the section is devoted to analyzing the results.

Figure 2: Description of the message passing and action selection process of MAGNNETO at a certain time-step \(t\) of an episode. For simplicity, visual representations of steps (c) and (d) are focused on a single agent (\(A_{9}\)); however, note that the same procedure is executed in parallel in all link-based agents.
### _Baselines_
In this section we describe the set of baselines we use to benchmark MAGNNETO in our evaluation. We particularly consider a well-established standard TE mechanism and three advanced TE optimizers.
The first baseline is labeled as "Default OSPF", a simple heuristic widely used in today's ISP networks. In Default OSPF, link weights are inversely proportional to their capacities and traffic is split over multiple paths using Equal-Cost Multi-Path (ECMP). In our experiments, all performance results are expressed in terms of their improvement with respect to Default OSPF.
As state-of-the-art TE benchmarks, we consider the following set of centralized algorithms provided by REPETITA [22]:
* TabuIGPWO (IGP Weight Optimizer, based on [11]): This algorithm runs a Local Search to find the OSPF weights that minimize the load of the maximally-utilized link. TabuIGPWO requires more execution time than the rest of the baselines, but represents a classical TE optimizer that operates in the same optimization space as MAGNNETO (i.e., OSPF link weight configuration).
* DEFO (Declarative and Expressive Forwarding Optimizer) [10]: It uses Constraint Programming and Segment Routing (SR) [25] to optimize routing configurations in the order of minutes. To this end, DEFO reroutes traffic paths through a sequence of middlepoints, spreading their traffic over multiple ECMP paths.
* SRLS (Segment Routing and Local Search) [9]: By leveraging Local Search and SR, SRLS achieves similar -or even better- performance than DEFO at a lower execution time. It also implements ECMP, and reroutes traffic paths through a sequence of middlepoints.
Particularly, SRLS and DEFO represent state-of-the-art TE optimizers obtaining close-to-optimal performance on several network optimization goals -one of them being our intradomain TE goal of minimizing the most loaded link. To this end, both optimizers leverage SR, which enables to define overlay paths at a source-destination granularity. In contrast, MAGNNETO and TabulGPWO operate directly over standard OSPF-based networks with destination-based routing.
### _Experimental Setup_
We compare MAGNNETO against the previously defined TE baselines in all our experimental settings, which involve 82 different real-world topologies: NSFNet, GBN, and GEANT2 from [42], and 79 networks from the Internet Topology Zoo dataset [24]. In this section we provide more low-level technical details of MAGNNETO's configuration, required to reproduce the results.
Regarding the length \(T\) of the training and evaluation RL-based episodes, it varies depending on the network topology size and the number of simultaneous actions allowed (more details below in Sec. V-C). At the beginning of each episode, the link weights are randomly selected as an integer in the range \([1,4]\), so our system is evaluated over a wide variety of scenarios with random routing initializations. From that point on, at each step of an episode one or several agents can modify their weight by increasing it by one unit.
Taking [43] as a reference for defining the hyperparameter values of the solution, we ran several grid searches to appropriately fine-tune the model. The implemented optimizer is Adam with a learning rate of 3\(\cdot\)10\({}^{-4}\), \(\beta\)=0.9, and \(\epsilon\)=0.01. Regarding the PPO setting, the number of epochs for each training episode is set to 3 with batches of 25 samples, the discount factor \(\gamma\) is set to 0.97, and the clipping parameter is 0.2. We implement Generalized Advantage Estimation (GAE) to estimate the advantage function, with \(\lambda\)=0.9. In addition, we multiply the critic loss by a factor of 0.5, and we implement an entropy loss weighted by a factor of 0.001. Finally, links' hidden states \(h_{e}\) are encoded as 16-element vectors, and in each MPNN forward propagation \(K\)=4 message passing steps are executed.
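Gathered in one place, the training configuration reported above reads as follows (values transcribed from the text; the field names are our own):

```python
CONFIG = {
    "optimizer": "Adam", "learning_rate": 3e-4, "beta": 0.9, "epsilon": 0.01,
    "ppo_epochs": 3, "batch_size": 25, "gamma": 0.97, "clip": 0.2,
    "gae_lambda": 0.9, "critic_loss_coef": 0.5, "entropy_coef": 0.001,
    "hidden_state_dim": 16, "message_passing_steps": 4,
    "initial_weight_range": (1, 4),   # random integer initialization (Sec. V-B)
}
```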
For each experiment, we generate two sets of simulated traffic matrices: one with a uniform distribution across source-destination traffic demands, and one with traffic distributions following a gravity model [44] -which produces realistic Internet traffic patterns. The training process of MAGNNETO highly depends on the topology size; on a machine with a single 2.20 GHz CPU, it can take from a few hours (\(\approx\)20 nodes) to a few days (100+ nodes).
### _Multiple Actions and Episode Length_
As previously mentioned in Section III, there is a relevant hyperparameter that needs to be further addressed: the episode length \(T\) of RL-based episodes, which represents the maximum number of optimization steps that MAGNNETO needs to execute before producing a good set of link weights. In this section we provide more details about its definition in terms of the topology size and the number of simultaneous actions.
Figure 3: Evaluation of MAGNNETO for different number of simultaneous actions \(n\in\{1,2,5,10\}\), each of them considering an episode length of \(T=150/n\). The training only considers samples of NSFNet and GEANT2 topologies, and the evaluation is performed over 100 unseen TMs on the GBN topology. Each MAGNNETO model and baseline optimizer is trained and/or evaluated twice for uniform and gravity-based traffic profiles; markers represent the mean of these results, and we also include the corresponding boxplots.
Let \(n\) be the maximum number of simultaneous actions allowed at each time-step \(t\) of the episode. When imposing \(n\)=1 -i.e., only one link weight changes per time-step- we have empirically found that MAGNNETO requires an episode length of \(\approx\)2-3 times the number of links in the network to reach its best performance. This is in line with what we already observed in our preliminary work [26]. However, whereas [26] was subject to \(n\)=1 by design, MAGNNETO allows taking \(n\)>1 actions at each time-step, which can potentially reduce the number of required optimization steps (i.e., speed up the optimization process).
Figure 3 shows that the length \(T\) of the episode -which directly relates to the execution time- can be reduced proportionally by \(n\) without a noticeable performance loss. In particular, the model with \(n\)=10 actually reduces by one order of magnitude the execution time of the 1-action model, but still achieves comparable performance to the state-of-the-art optimizers of our benchmark -for both traffic profiles, and evaluating on a topology not previously seen in training.
Given the good trade-off provided by allowing more than one action at each time-step, for the rest of our experiments we fine-tuned the number of actions \(n\) and the episode length \(T\) to balance a competitive performance with the minimum possible execution time. Later, in Section V-F, we analyze in detail the execution cost of MAGNNETO.
### _Generalization over Unseen Topologies_
In Section I we argued the importance of generalization in ML-based solutions, which refers to the capability of a solution to operate successfully in networks where it was not trained. In this section, we subject MAGNNETO to an intensive evaluation in this regard.
In our experiments, MAGNNETO only observes NSFNet (14 nodes, 42 links) and GEANT2 (24 nodes, 74 links) samples during training [42], whereas the evaluation is performed over a subset of 75 networks from the Topology Zoo dataset [24], including topologies ranging from 11 to 30 nodes, and from 30 to 90 links. In more detail:
* We train two MAGNNETO models, one for each traffic profile (uniform and gravity).
* Each model is trained observing 50 different TMs -either uniform or gravity-based, depending on the model- alternating between the NSFNet and GEANT2 topologies.
* Each of these two trained models is evaluated over 100 different TMs -again, either uniform or gravity-based- on each of the 75 topologies from Topology Zoo.
Overall, this experimental setup comprises \(7,500\) evaluation runs for each traffic profile, which we summarize in Figures 3(a) and 3(b), respectively for uniform and gravity-based loads. In particular, note that we first compute the mean _MinMaxLoad_ improvement of MAGNNETO -and the baselines- over the 100 TMs of each evaluation network, obtaining a single value for each of the 75 topologies. Thus, in these figures we represent the corresponding CDF and boxplot of the 75-sized vector of mean improvement values for each TE optimizer.
In both traffic scenarios MAGNNETO achieves comparable performance to the corresponding best performing benchmark -DEFO when considering uniform traffic and SRLS for gravity. In fact, MAGNNETO outperforms TabuIGPWO, improves DEFO with gravity-based traffic, and lies within a 2% average improvement difference with respect to SRLS in both cases. We reckon that these represent remarkable results on generalization; as far as we know, this is the first time that a ML-based model consistently obtains close performance to state-of-the-art TE optimizers on such a large and diverse set of topologies not previously seen in training.
### _Traffic Changes in Large Topologies_
After evaluating the generalization capabilities of MAGNNETO, we aim to test the performance of our method over traffic changes in large networks, where the combinatorics of the optimization process might dramatically increase. Having considered networks of up to 30 nodes and 90 links so far, for this set of experiments we arbitrarily select four large real-world topologies from Topology Zoo [24]: Interroute (110 nodes, 294 links), Colt (153 nodes, 354 links), DialtelecomCz (138 nodes, 302 links) and VtlWavenet2011 (92 nodes, 192 links). Figures 5.I-IV depict these topologies.
In these experiments, for each traffic profile (uniform or gravity) we train a MAGNNETO model on each network. Then, we evaluate the models on the same topology where they were trained, over 100 different TMs not previously seen in training. Figures 5.a-d and 5.e-f show the corresponding CDF of all these evaluations, considering uniform and gravity traffic loads respectively.

Figure 4: Evaluation of MAGNNETO's generalization capability for (a) uniform and (b) gravity traffic. Each point of the CDF corresponds to the mean _MinMaxLoad_ improvement over 100 TMs for one of the 75 evaluation topologies from Topology Zoo [24], and boxplots are computed based on these mean improvement values as well. Both the uniform (a) and gravity (b) MAGNNETO models evaluated were trained exclusively on samples from the NSFNet and GEANT2 topologies [42].
As we can see, with uniform traffic SRLS is clearly the best performing baseline, achieving a remarkable overall improvement gap with respect to the other two benchmarked optimizers. However, in this scenario MAGNNETO is able to obtain similar improvements to SRLS, slightly outperforming it in VtlWavenet2011. On the other hand, results with gravity-based traffic suggest that Default OSPF already provides low-congestion routing configurations in scale-free networks when considering more realistic traffic. Despite this fact, MAGNNETO turns out to be the overall winner in the comparison with gravity loads, consistently achieving lower congestion ratios for a large number of TMs in all four topologies.
In short, in all scenarios MAGNNETO attained equivalent -or even better- performance than the advanced TE optimizers benchmarked. These results evince its potential to successfully operate in large computer networks.
### _Execution Cost_
Lastly, in this section we evaluate the execution cost of MAGNNETO. In particular, we measure the impact of the message communications involved when running our distributed solution, as well as compare its execution time against the considered set of state-of-the-art TE baselines; Table I gathers these results for several variable-sized networks used in the previous evaluations.
Taking into account the recommendations of REPETITA [9], as well as analyzing the results provided in the original works [9, 10, 11], we defined the following running times for each of our benchmarks: 10 minutes for TabuIGPWO, 3 minutes for DEFO, and 1 minute for SRLS.
At first glance, the execution time of MAGNNETO immediately stands out as its most remarkable feature. Particularly, it is able to obtain sub-second times even for the largest network of our evaluation (Colt). Indeed, as previously discussed in Section V-C, these times could be further reduced by allowing multiple simultaneous actions. For instance, by considering up to 10 simultaneous actions, MAGNNETO can run 3 orders of magnitude faster than the fastest state-of-the-art TE optimizer. This relevant difference can be explained by the fact that MAGNNETO's distributed execution naturally parallelizes the global optimization process across all network devices (i.e., routers); in contrast, typical TE optimizers rely on centralized frameworks that cannot benefit from this.

Figure 5: Evaluation of MAGNNETO on traffic changes in four large real-world topologies (I-IV) from the Topology Zoo dataset [24], both for uniform ((a)-(d)) and gravity-based ((e)-(h)) traffic loads. A MAGNNETO model is trained for each network and traffic profile, and then evaluated on the same topology over 100 unseen TMs. CDFs represent the _MinMaxLoad_ improvement results of each optimizer for those 100 evaluation TMs.
Such decentralization comes at the expense of the extra message overhead generated by the MPNN. In this context, Table I shows that the link overhead produced by MAGNNETO (few MB/s) can reasonably have a negligible impact in today's real-world networks with 10G/40G (or even more) interfaces. Moreover, note that this cost is quite similar in all topologies; this is as expected, given that the messaging overhead of the GNN-based communications is directly proportional to the average node degree of the network, and computations are distributed among all nodes.
To sum up, our results show that MAGNNETO is able to attain equivalent performance to state-of-the-art centralized TE optimizers -even in topologies not previously seen in training-with significantly lower execution time, and with an affordable message communication overhead.
## VI Related Work
Recently, numerous solutions based on Deep Reinforcement Learning (DRL) have been proposed to solve complex networking problems, especially in the context of routing optimization and TE [15, 17, 45]. However, current state-of-the-art RL-based TE solutions fail to generalize to unseen scenarios (e.g., different network topologies) as the implemented traditional neural networks (e.g., fully connected, convolutional) are not well-suited to learn and generalize over data that is inherently structured as graphs. In [16], the authors design a DRL-based architecture that obtains better results than Shortest Path and Load Balancing routing. Regarding MARL-based solutions [46, 47], most of them suffer from the same lack of topology generalization. An exception to that is the work of [18], an interesting MARL approach for multi-region TE that consistently outperforms ECMP in several scenarios, although it is not benchmarked against state-of-the-art TE optimizers.
GNNs [20, 48], and in particular Message Passing Neural Networks (MPNN) [40], precisely emerged as specialized methods for dealing with graph-structured data; for the first time, there was an AI-based technology able to provide topology-aware systems. In fact, GNNs have recently attracted large interest in the field of computer networks for addressing the aforementioned generalization limitations. The work from [42] proposes to use a GNN to predict network metrics and a traditional optimizer to find the routing that minimizes some of these metrics (e.g., average delay). The authors of [49] propose a novel architecture for routing optimization in Optical Transport Networks that embeds a GNN into a centralized, single-agent RL setting that is compared against Load Balancing routing.
Narrowing down the use case to intradomain TE, we highlight the work of [50], whose premise is similar to ours: the generation of easily-scalable, automated distributed protocols. To do so, the authors also use a GNN, but in contrast to our approach they focus on learning routing strategies that directly imitate already existing ones -shortest path and min-max routing- and compare their solution against these. This is the reason why they did not implement an RL-based approach, but instead a semi-supervised learning algorithm, thereby guiding the learning process with explicit labeled data. In fact, so far the very few works that combine GNNs with a MARL framework [51, 52] are theoretical papers from the ML community, and none of them applies to the field of networking.
## VII Conclusions
Intradomain traffic engineering (TE) is nowadays among the most common network operation tasks, and has a major impact on the performance of today's ISP networks. As such, it has been largely studied, and there are already some well-established TE optimizers that deliver near-optimal performance in large-scale networks. During the last few years, state-of-the-art TE solutions have systematically competed for reducing execution times (e.g., DEFO [10], SRLS [9]), thus scaling better to carrier-grade networks and achieving faster reaction to traffic changes. In this context, ML has attracted interest as a suitable technology for achieving faster execution of TE tasks and -as a result- during recent years the networking community has devoted large efforts to developing effective ML-based TE solutions [15, 16, 17, 18]. However, at the time of this writing, no ML-based solution had been shown to outperform state-of-the-art TE optimizers.
In this paper we have presented MAGNNETO, a novel ML-based framework for intradomain TE optimization. Our system implements a novel distributed architecture based on Multi-Agent Reinforcement Learning and Graph Neural Networks. In our evaluation, we have compared MAGNNETO with a set of non-ML-based TE optimizers that represent the state of the art in this domain. After applying our system to 75+ real-world topologies, we have observed that it achieves comparable performance to the reference TE benchmarks.
Table I: Cost of MAGNNETO: average link overhead and execution time -in terms of the maximum number of simultaneous actions allowed- for variable-sized network topologies.

| | **NSFNet** | **GBN** | **GEANT2** | **VtlWavenet2011** | **Interroute** | **DialtelecomCz** | **Colt** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| (#nodes, #links) | (14,42) | (17,52) | (24,74) | (92,192) | (110,294) | (138,302) | (153,354) |
| MAGNNETO link overhead* (MB/s) | 1.20 | 1.32 | 1.20 | 0.83 | 1.28 | 0.91 | 1.01 |
| _Execution time (s)_ | | | | | | | |
| TabuIGPWO [11] | 600 | 600 | 600 | 600 | 600 | 600 | 600 |
| DEFO [10] | 180 | 180 | 180 | 180 | 180 | 180 | 180 |
| SRLS [9] | 60 | 60 | 60 | 60 | 60 | 60 | 60 |
| MAGNNETO [\(n\) actions] | 0.08/\(n\) | 0.12/\(n\) | 0.16/\(n\) | 0.42/\(n\) | 0.64/\(n\) | 0.66/\(n\) | 0.78/\(n\) |

*Average value, with an extra 20% message size for headers and metadata.
However, MAGNNETO offers considerably faster operation than these state-of-the-art TE solutions, reducing execution times from several minutes to sub-second timescales in networks of 100+ nodes. In this context, MAGNNETO was especially designed to perform several actions at each RL optimization step, which considerably accelerates the optimization process. Particularly, we have seen that our system was able to perform up to 10 actions in parallel with no noticeable decrease in performance. These results lay the foundations for a new generation of ML-based systems that can offer the near-optimal performance of traditional TE techniques while reacting much faster to traffic changes.
Last but not least, we have shown that the proposed system offers strong generalization power over networks unseen during the training phase, which is an important characteristic from the perspective of deployability and commercialization. Particularly, generalization enables training ML-based products in controlled testbeds and then deploying them in different real-world networks in production. However, this property has been barely addressed by prior ML-based TE solutions. In contrast, MAGNNETO has demonstrated that it generalizes successfully over a wide and varied set of 75 real-world topologies unseen during training. The main reason behind this generalization capability is that the proposed system implements internally a GNN that structures and processes network information as graphs, and computes the information on distributed agents that communicate with their neighbors according to the underlying graph structure (i.e., the network topology).
## Acknowledgment
This publication is part of the Spanish I+D+i project TRAINER-A (ref. PID2020-118011GB-C21), funded by MCIN/ AEI/10.13039/501100011033. This work is also partially funded by the Catalan Institution for Research and Advanced Studies (ICREA), the Secretariat for Universities and Research of the Ministry of Business and Knowledge of the Government of Catalonia, and the European Social Fund.
|
2309.14722 | Physics-informed neural network to augment experimental data: an
application to stratified flows | We develop a physics-informed neural network (PINN) to significantly augment
state-of-the-art experimental data and apply it to stratified flows. The PINN
is a fully-connected deep neural network fed with time-resolved,
three-component velocity fields and density fields measured simultaneously in
three dimensions at $Re = O(10^3)$ in a stratified inclined duct experiment.
The PINN enforces incompressibility, the governing equations for momentum and
buoyancy, and the boundary conditions by automatic differentiation. The
physics-constrained, augmented data are output at an increased spatio-temporal
resolution and demonstrate five key results: (i) the elimination of measurement
noise; (ii) the correction of distortion caused by the scanning measurement
technique; (iii) the identification of weak but dynamically important
three-dimensional vortices; (iv) the revision of turbulent energy budgets and
mixing efficiency; and (v) the prediction of the latent pressure field and its
role in the observed Holmboe wave dynamics. These results mark a significant
step forward in furthering the reach of experiments, especially in the context
of turbulence, where accurately computing three-dimensional gradients and
resolving small scales remain enduring challenges. | Lu Zhu, Xianyang Jiang, Adrien Lefauve, Rich R. Kerswell, P. F. Linden | 2023-09-26T07:29:42Z | http://arxiv.org/abs/2309.14722v1 | # Physics-informed neural network to augment experimental data: an application to stratified flows
###### Abstract
We develop a physics-informed neural network (PINN) to significantly augment state-of-the-art experimental data and apply it to stratified flows. The PINN is a fully-connected deep neural network fed with time-resolved, three-component velocity fields and density fields measured simultaneously in three dimensions at \(Re=O(10^{3})\) in a stratified inclined duct experiment. The PINN enforces incompressibility, the governing equations for momentum and buoyancy, and the boundary conditions by automatic differentiation. The physics-constrained, augmented data are output at an increased spatio-temporal resolution and demonstrate five key results: (i) the elimination of measurement noise; (ii) the correction of distortion caused by the scanning measurement technique; (iii) the identification of weak but dynamically important three-dimensional vortices; (iv) the revision of turbulent energy budgets and mixing efficiency; and (v) the prediction of the latent pressure field and its role in the observed Holmboe wave dynamics. These results mark a significant step forward in furthering the reach of experiments, especially in the context of turbulence, where accurately computing three-dimensional gradients and resolving small scales remain enduring challenges.
Key words: physics-informed neural network, stratified flows, Holmboe waves
## 1 Introduction
Since the seminal pipe flow experiments of Osborne Reynolds (Reynolds, 1883), experiments have been widely designed and used to study fluid flow (Tropea _et al._, 2007). The recent development of state-of-the-art measurement techniques allows the investigation of flow fields at high spatio-temporal resolution (e.g. Partridge _et al._, 2019). However, experimentalists often face the challenge of accurately measuring flow structures across a wide range of scales, particularly in turbulent flows. Moreover, latent flow variables, such as the pressure field, can rarely be measured. To complement experiments, numerical simulations are widely used, often affording better resolution and the full set of flow variables. However, simulations remain idealised models subject to computational limits. These limitations make them
challenging to deploy in regions of parameter space appropriate to environmental or industrial applications, as well as in realistic geometries with non-trivial boundary conditions.
Recent advances in machine learning have stimulated new efforts in fluid mechanics (Vinuesa _et al._, 2023), with one application being the reconstruction of flow fields from limited observations (Fukami _et al._, 2019; Raissi _et al._, 2020). Among the available tools, physics-informed neural networks (PINN) (Raissi _et al._, 2019) hold particular promise. The idea behind a PINN is to impose physical laws on a neural network fed with observations. This allows the model to super-resolve the flow in space and time, to remove spurious noise, and to predict unmeasured (latent) variables such as the pressure (Raissi _et al._, 2019). While PINNs have been used in numerical fluid problems (Cuomo _et al._, 2022), their application to experiments remains limited, perhaps due to the scarcity of high-quality data.
In this paper, we demonstrate the potential of a PINN to augment experimental data and reveal new physical insights into stratified flows. For this purpose, we use the canonical stratified inclined duct (SID) experiment (Meyer & Linden, 2014) sustaining a salt-stratified shear flow in a long tilted duct. The PINN is fed datasets comprising the time-resolved, three-component velocity field and density field measured simultaneously in a three-dimensional volume. We focus on experiments in the Holmboe wave (HW) regime, as these interfacial waves are important precursors of turbulence in environmental flows, e.g. between salt-stratified layers in the ocean (Kawaguchi _et al._, 2022).
In the remainder of this paper, we describe the datasets and the PINN in § 2, and our results in § 3. We demonstrate the improvement in the signal-to-noise ratio in § 3.1, in the detection of weak but important coherent structures in § 3.2, and in the accuracy of energy budgets and the quantification of mixing in § 3.3. We also show in § 3.4 that the latent pressure field revealed by the PINN is key to explaining asymmetric HWs in SID. Finally, we conclude in § 4.
## 2 Methodology
### The stratified inclined duct (SID) dataset
The data were collected in the stratified inclined duct facility (SID, sketched in figure 1), where two salt solutions with density \(\rho_{0}\pm\Delta\rho/2\) are exchanged through a long square duct tilted at an angle \(\theta\). Importantly, the volumetric, three-component velocity and density fields are collected through a continuous back-and-forth scanning of a streamwise (\(x\))-vertical (\(z\)) laser sheet across the spanwise (\(y\)) direction, as introduced in Partridge _et al._ (2019). Figure 1 shows how \(n_{y}\) successive planar measurements of laser-induced fluorescence (for density) and stereo particle image velocimetry (for velocity) captured over a short time \(\Delta t\) are aggregated to yield \(n_{t}\) 'near-instantaneous' volumes. The typical processed dataset has \((n_{x},n_{y},n_{z},n_{t})\approx(400,35,80,250)\) points, noting that the spatial resolution in \(x\), \(z\) is identical but is about two to three times higher (better) than along \(y\).
All data are made non-dimensional with the following scales. For the spatial coordinates we use the half-duct height \(H^{*}/2=22.5\) mm. For the velocity we use half the fixed peak-to-peak 'buoyancy velocity' scale \(U^{*}/2\equiv\sqrt{g^{\prime}H^{*}}\) (where \(g^{\prime}=g\Delta\rho/\rho_{0}\) is the reduced gravity chosen for the experiment), leading to velocities being approximately bounded by \(\pm 1\). This means time is non-dimensionalised by the advective unit \(H^{*}/U^{*}\), yielding the Reynolds number \(\mathrm{Re}=H^{*}U^{*}/4\nu\), where \(\nu\) is the kinematic viscosity of water. The Prandtl number is \(\mathrm{Pr}=\nu/\kappa\approx 700\), where \(\kappa\) is the molecular diffusivity of salt. For the density field (its deviation from the neutral level \(\rho_{0}\), i.e. the buoyancy) we use half the maximum jump \(\Delta\rho/2\), which yields the fixed bulk Richardson number \(\mathrm{Ri}=(g^{\prime}/2)(H^{*}/2)/(U^{*}/2)^{2}=1/4\). The data can be downloaded from Lefauve _et al._ (2019).
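To make these relations concrete, the following minimal sketch evaluates the scales for assumed, purely illustrative parameter values (the density difference, viscosity and diffusivity below are not the actual experimental values; only the relations between the quantities follow the text):

```python
import math

# Illustrative values; the actual experimental parameters may differ.
H_star = 45e-3               # duct height H* [m] (so H*/2 = 22.5 mm)
drho_over_rho0 = 2e-3        # assumed Delta rho / rho_0
nu, kappa = 1.05e-6, 1.5e-9  # water viscosity and salt diffusivity [m^2/s]

g_prime = 9.81 * drho_over_rho0           # reduced gravity g'
U_star = 2 * math.sqrt(g_prime * H_star)  # peak-to-peak scale: U*/2 = sqrt(g' H*)
Re = H_star * U_star / (4 * nu)           # Reynolds number
Pr = nu / kappa                           # Prandtl number (~700 for salt)
Ri = (g_prime / 2) * (H_star / 2) / (U_star / 2) ** 2  # = 1/4 by construction
print(f"U* = {U_star:.3f} m/s, Re = {Re:.0f}, Pr = {Pr:.0f}, Ri = {Ri:.2f}")
```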
We focus on comparing two typical Holmboe wave (HW) datasets: H1 featuring a double
mode, symmetric HW at \((\text{Re},\theta)=(1455,1^{\circ})\); and H4, featuring a single-mode, asymmetric HW at \((\text{Re},\theta)=(438,5^{\circ})\) studied in detail in Lefauve _et al._ (2018).
### The physics-informed neural network (PINN)
The PINN is sketched in figure 2. A fully-connected deep neural network is set up using the spatial \(\mathbf{x}=(x,y,z)\) and temporal \(t\) coordinates of the flow domain as the inputs, and the corresponding velocity \(\mathbf{u}=(u,v,w)\), density \(\rho\), and pressure \(p\) as the outputs. The network is composed of 14 layers with an increasing number of artificial neurons (\([64\times 4\), \(128\times 3\), \(256\times 4\), \(512\times 3]\)) (a sensitivity analysis was conducted to ensure convergence). The outputs of each layer \(\mathbf{n}_{k}\) are computed by a nonlinear transformation of the previous layer \(\mathbf{n}_{k-1}\) following the basic 'neuron' \(\mathbf{n}_{k}=\mathbf{\sigma}(\mathbf{w}_{k-1}^{T}\mathbf{n}_{k-1}+\mathbf{b}_{k-1})\), where \(\mathbf{b}_{k-1}\) and \(\mathbf{w}_{k-1}^{T}\) are the bias vectors and weight matrices of layer \(k-1\). To introduce nonlinearity and overcome the potential vanishing gradient of deep networks, we use a Swish activation function \(\mathbf{\sigma}\) for all the hidden layers (Ramachandran _et al._, 2017).
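For concreteness, a minimal PyTorch sketch of such a network is given below; the layer widths and the Swish activation follow the text, while everything else (naming, initialisation, output ordering) is an assumption rather than the authors' implementation:

```python
import torch.nn as nn

class PINN(nn.Module):
    """Fully-connected network mapping (x, y, z, t) to (u, v, w, rho, p)."""
    def __init__(self):
        super().__init__()
        # 14 hidden layers of increasing width, plus a linear output layer.
        widths = [4] + [64] * 4 + [128] * 3 + [256] * 4 + [512] * 3 + [5]
        self.layers = nn.ModuleList(
            nn.Linear(w_in, w_out) for w_in, w_out in zip(widths[:-1], widths[1:]))
        self.act = nn.SiLU()  # the Swish activation

    def forward(self, xyzt):
        h = xyzt
        for layer in self.layers[:-1]:
            h = self.act(layer(h))
        return self.layers[-1](h)  # columns: u, v, w, rho, p
```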
The outputs \(\mathbf{u}\) and \(\rho\) are compared with experimental data \(\mathbf{u}_{O}\) and \(\rho_{O}\) to obtain the absolute mean error loss function \(\mathcal{L}_{O}\) (in red in figure 2). The spatial and temporal derivatives of \(\mathbf{u}\), \(\rho\) and \(p\) are computed at every sampling point using automatic differentiation (Baydin _et al._, 2018) (in green). To impose the physical constraints, the derivatives are substituted into the governing mass, momentum, and density scalar equations, corresponding to loss functions \(\mathcal{L}_{E1}\), \(\mathcal{L}_{E2}\), \(\mathcal{L}_{E3}\), respectively (in blue). We also impose the boundary conditions through \(\mathcal{L}_{B}\) (no-slip \(\mathbf{u}=0\), and no-flux \(\partial_{y}\rho=0,\partial_{z}\rho=0\) at the four walls \(y=\pm 1,z=\pm 1\), respectively).
Combining these constraints, we define the total loss function
\[\mathcal{L}_{\text{tot}}=\frac{\lambda_{E}}{N_{E}}\sum_{j=1}^{N_{E}}\sum_{i=1}^{3}\mathcal{L}_{Ei}^{j}+\frac{\lambda_{B}}{N_{B}}\sum_{j=1}^{N_{B}}\mathcal{L}_{B}^{j}+\frac{\lambda_{O}}{N_{O}}\sum_{j=1}^{N_{O}}\mathcal{L}_{O}^{j}, \tag{2.1}\]
where \(\lambda_{E}\), \(\lambda_{B}\), and \(\lambda_{O}\) are weight coefficients for the governing equation, boundary condition and observation losses, respectively. We define the number of training samples \(N_{E}\), \(N_{B}\), and \(N_{O}\) for the equations, boundaries and observations, respectively. Here \(N_{E}=3\times 10^{7}\) for H1 and \(N_{E}=2\times 10^{7}\) for H4 (as it depends on the resolution of the data), \(N_{B}=10^{7}\) and \(N_{O}=2\times 10^{6}\). The neural network is trained to seek the optimal parameters using the ADAM algorithm (Kingma & Ba, 2014) to minimise the total loss \(\mathcal{L}_{\text{tot}}\). To enhance convergence, we adopt exponentially decaying optimizer steps.
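The sketch below illustrates how two of the equation residuals and the weighted total loss of (2.1) could be assembled with automatic differentiation; the function names, the diffusion-free buoyancy residual and the \(\lambda\) values are assumptions, not the authors' implementation:

```python
import torch

def grad(f, x):
    """Column-wise derivatives of f with respect to x via autograd."""
    return torch.autograd.grad(f, x, grad_outputs=torch.ones_like(f),
                               create_graph=True)[0]

def equation_losses(model, xyzt):
    """Continuity and (diffusion-free) buoyancy residuals at sample points."""
    xyzt = xyzt.clone().requires_grad_(True)
    u, v, w, rho, p = model(xyzt).unbind(dim=1)
    du, dv, dw, drho = (grad(f, xyzt) for f in (u, v, w, rho))
    L_mass = (du[:, 0] + dv[:, 1] + dw[:, 2]).abs().mean()
    L_rho = (drho[:, 3] + u * drho[:, 0] + v * drho[:, 1]
             + w * drho[:, 2]).abs().mean()  # diffusion term omitted for brevity
    return L_mass, L_rho

# Weighted total loss, with assumed weights lam_E, lam_B, lam_O:
# L_tot = lam_E * sum(equation_losses(model, pts)) + lam_B * L_bc + lam_O * L_obs
```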
Figure 1: The SID setup and dataset. Each volume is constructed by aggregating \(n_{y}\) planes (closely-spaced dots) obtained by scanning across the duct over time \(\Delta t\)
A key strength of this PINN is its natural ability to reconstruct truly instantaneous three-dimensional flow fields by overcoming the spanwise distortion of our 'near-instantaneous' data acquired by scanning (as also attempted by Knutsen _et al._, 2020; Zigunov _et al._, 2023). This is done simply by feeding the \(n_{y}\) successive snapshots \((\mathbf{u},\rho)(x,y_{i},z,t_{i})\), taken during each alternating forward and backward scan (see figure 1), at times \(t_{i}\) equally spaced by \(\Delta t/n_{y}\), where \((\Delta t,n_{y})=(2.29,39)\) for H1 and \((1.08,30)\) for H4.
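A sketch of this timestamp assignment is shown below: each of the \(n_{y}\) planes of a scanned volume receives its own acquisition time, with backward scans traversing \(y\) in reverse. The numerical values are those quoted for H1; the helper function itself is illustrative:

```python
import numpy as np

def plane_times(t_start, dt=2.29, n_y=39, forward=True):
    """Acquisition times of the n_y planes within one scanned volume."""
    t_i = t_start + np.arange(n_y) * dt / n_y  # planes equally spaced by dt/n_y
    return t_i if forward else t_i[::-1]       # backward scans reverse in y
```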
## 3 Results
### Improved quality of experimental data
We start by comparing flow snapshots and statistics of HWs in the raw experimental data to the PINN-reconstructed data. Here the spatial resolution of the PINN data is typically doubled in all directions \(x,y,z\) compared to the experimental data, which was sufficient for convergence (though higher resolutions are possible by increasing the sampling input points in the PINN). Instantaneous volumes are also generated at a higher temporal resolution than acquired experimentally, namely at intervals \(\Delta t=0.24\) (for H1) and \(0.74\) (for H4), noting that we restrict our analysis to times \(t\in[150-200]\) (for H1) and \(t\in[150-300]\) (H4). Comparisons of the time-resolved H1 and H4 raw and PINN data are provided in Supplementary Movies 1-4.
First, figure 3(a,b) shows an instantaneous snapshot of the vertical velocity \(w(x,z)\) taken in the mid-plane \(y=0\) and at time \(t=283\) in H4. The large-scale HW structures of the experiment (panel a) are faithfully reconstructed by the PINN (panel b) with much less small-scale structure, which we identify as experimental noise, violating the physical constraints and increasing the loss in (2.1). This instantaneous \(w\) snapshot closely matches the 'confined Holmboe instability' mode predicted by Lefauve _et al._ (2018) (see their figure 9m) from a stability analysis of the mean flow in the same dataset H4. Such 'clean', noise-free augmented experimental data will improve the three-dimensional structure of HW in § 3.2.
Second, figure 3(c-f) illustrates how the PINN is able to correct the inevitable distortion of experimental data along the spanwise, scanning direction (recall § 2.1 and figure 1), also clearly visible in Supplementary Movie 4. The top view shows a snapshot of the horizontal plane \(\rho(x,y)\) sampled at \(z=0.1\), near the density interface of neutral density where \(\langle\rho\rangle_{x,y,t}\approx 0\). The top panels (c,e) show two successive volumes taken at time \(t=178.9\) (forward scan) and \(181.2\) (backward scan) in flow H1, where the distortion is greatest (due to \(\Delta t=2.3\) being twice as large as in H4), while the bottom panels (d,f) show the respective instantaneous volumes output by the PINN. The original data make the peaks and troughs of this right-travelling HW mode appear alternately slanted to the right during a forward scan, and to the left during a backward scan (see slanted black arrows). This distortion is successfully corrected by the PINN, which shows HW propagating along \(x\) with a phase plane normal to \(x\) (straight arrows), as predicted by theory (Ducimetiere _et al._, 2021).

Figure 2. Schematics of the physics-informed neural network (PINN). The output variables \((\mathbf{u},\rho,p)\) (in yellow) are predicted from the input variables \((\mathbf{x},t)\) (in orange) subject to physical constraints and experimental observations.
Third, figure 4 compares the spatio-temporal diagrams of the interface height \(\eta(x,t)\), defined as the vertical coordinate where \(\rho(x,y=0,\eta,t)=0\). The characteristics showing the propagation of HWs in experiments (panels a,c) are largely consistent with the PINN results (panels b,d), but noteworthy differences exist. The determination of a density interface (\(\rho=0\)) is particularly subject to noise due to a low signal-to-noise ratio when the signal approaches zero. Here we see that the contours are rendered much less jagged by the PINN. The 10-fold increased temporal resolution in H1 (\(\Delta t=0.24\) vs 2.3) also allows a much clearer picture of the characteristics (panels c-d), which propagate in both directions, with signs of interference. By contrast, H4 has a single leftward-propagating mode as a result of the density interface (\(\eta\approx-0.2\)) being offset from the mid-point of the shear layer (\(u=0\) at \(z\approx-0.1\)), as explained by Lefauve _et al._ (2018). However, the reason behind this offset could not be elucidated by experimental data alone; it will be explained by the pressure field in § 3.4.
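A possible way to extract \(\eta(x,t)\) from the reconstructed fields is sketched below: for each \(x\), the first zero crossing of \(\rho\) along \(z\) is located by linear interpolation. This is an assumption for illustration; the authors' exact extraction procedure is not specified:

```python
import numpy as np

def interface_height(rho_xz, z):
    """eta(x): the z where rho first crosses zero, by linear interpolation.

    rho_xz: (n_x, n_z) density in the mid-plane y=0 at one time instant.
    z:      (n_z,) ascending vertical coordinates.
    """
    eta = np.full(rho_xz.shape[0], np.nan)
    for i, rho_z in enumerate(rho_xz):
        crossings = np.where(rho_z[:-1] * rho_z[1:] < 0)[0]  # sign changes
        if crossings.size:
            k = crossings[0]
            eta[i] = z[k] - rho_z[k] * (z[k + 1] - z[k]) / (rho_z[k + 1] - rho_z[k])
    return eta  # NaN where no crossing is found
```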
Fourth, we quantify and elucidate the magnitude of the PINN correction through the root-mean-square difference \(d\) between the \(\mathbf{u}\) and \(\rho\) of the raw experiment and the reconstruction. We find \(d=0.069\) for H1 and \(d=0.029\) for H4. We identify at least two sources contributing to \(d\): (i) the noise in planar PIV/LIF measurements and (ii) the distortion of the volumes caused by scanning in \(y\). As a baseline, we also compare a third, fully laminar dataset (L1) at \((\text{Re},\theta)=(398,2^{\circ})\) (Lefauve _et al._, 2019_b_), having a PINN correction of \(d=0.030\). In L1, \(d\) is primarily attributed to (i), since this simple steady, stable flow renders (ii) negligible. The energy spectrum of L1 in Lefauve & Linden (2022) (their figure 4a) highlighted the presence of small-scale measurement noise. We are confident that cause (i) remains relatively unchanged in H1 and H4, resulting in \(d\) being predominantly explained by (i) in H4, while the larger spanwise distortion (ii) must be invoked to explain the larger \(d\) in H1, consistent with a larger interface variation and a scanning time that is twice as slow as in figure 3(c-f).
Figure 4: Improvement of the spatio-temporal diagrams of the interface height \(\eta(x,t)\) for (a,b) H4 and (c,d) H1, capturing the characteristics of HW propagation.
Figure 3: Improvement of instantaneous snapshots: (a,b) vertical velocity \(w(x,y=0,z)\) in H4 at \(t=283\), and (c-f) density just above the interface \(\rho(x,y,z=0.1)\) in H1 at (c,d) \(t=178.9\) (forward scan) and (e,f) \(t=181.2\) (backward scan).
### Improved three-dimensional vortical structures
Previous studies of this HW dataset in SID revealed how, under increasing turbulence levels quantified by the product \(\theta\,Re\), the relatively weak three-dimensional Holmboe vortical structures evolve into pairs of counter-propagating turbulent hairpin vortices (Jiang _et al._, 2022). The particular morphology of these vortices conspires to entrain and stir fluid into the mixed interfacial region, elucidating a key mechanism for shear-driven mixing (Riley, 2022). However, Jiang _et al._ (2022) allude to limitations in the signal-to-noise ratio that prevent accurately resolving the structure of the weakest nascent Holmboe vortices, as shown, e.g., in a visualisation of the \(Q\)-criterion in their figure 1.
Figure 5 shows how noise-free PINN data can uncover the vortex kinematics of this experimental HW in H4 by visualising an instantaneous \(Q=0.15\) isosurface (in grey), where \(Q=(1/2)(||\mathbf{W}||^{2}-||\mathbf{E}||^{2})\), using the Frobenius tensor norm of the strain rate \(\mathbf{E}=(1/2)(\mathbf{\nabla}\mathbf{u}+(\mathbf{\nabla}\mathbf{u})^{T})\) and rotation rate \(\mathbf{W}=(1/2)(\mathbf{\nabla}\mathbf{u}-(\mathbf{\nabla}\mathbf{u})^{T})\)(Hunt _et al._, 1988). A density isopycnal surface just above the density interface is superposed, with yellow to red shading denoting its vertical position.
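For reference, a small sketch of this \(Q\) computation on a uniform grid is shown below; the use of numpy finite-difference gradients is an assumption, and any consistent differentiation scheme would do:

```python
import numpy as np

def q_criterion(u, v, w, dx, dy, dz):
    """Q = 0.5 * (||W||^2 - ||E||^2) from velocity components on a grid."""
    grads = [np.gradient(f, dx, dy, dz) for f in (u, v, w)]
    G = np.stack([np.stack(g) for g in grads])  # G[i, j] = d u_i / d x_j
    E = 0.5 * (G + np.swapaxes(G, 0, 1))        # strain rate tensor
    W = 0.5 * (G - np.swapaxes(G, 0, 1))        # rotation rate tensor
    return 0.5 * ((W ** 2).sum(axis=(0, 1)) - (E ** 2).sum(axis=(0, 1)))
```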
The unsmoothed experimental data (panel a) are highly fragmented, rendering these weak individual vortices unrecognisable. A similar noise pattern in \(Q\) is observed in the fully laminar dataset L1 (not shown here), confirming its unphysical nature. The PINN data (panel b) effectively filter out the noise and show well-organised vortices on either side of the isopycnal surface.
The interfacial vortices are flat and appear to be formed when new wave crests appear (see left side of panel b), acting to lift and drop the interfaces, as the HW propagates (in this case, as a single left-going mode). Near the top of larger-amplitude isopycnal crests, a \(\Lambda\)-shape vortex is formed, which may eject wisps of relatively mixed fluid at the interface (low \(|\rho|\)) up into the unmixed region (high \(|\rho|\)), leading to the upward deflection of isopycnals. This locally 'anti-diffusive' process is an example of scouring-type mixing typical of Holmboe waves (Salehipour _et al._, 2016; Caulfield, 2021) and allows a density interface to remain sharp. This \(\Lambda\)-shape vortex causes the distance between the \(\rho=-0.85\) and \(0\) isopycnals (the two black lines) to increase, indicating enhanced mixing in this region.
These findings confirm the hypothesis of Jiang _et al._ (2022) and provide direct evidence for the existence of \(\Lambda\)-vortices in HW experiments. Similar \(\Lambda\)-vortices have also been observed in more idealised numerical simulations of sheared turbulence (Watanabe _et al._, 2019), proving their relevance beyond the SID geometry alone and the importance of correctly identifying these coherent structures in experimental data.
### Improved energy budgets and mixing efficiency
The energetics of turbulent mixing in stratified flows is a major topic of research in environmental fluid mechanics (Caulfield, 2020; Dauxois _et al._, 2021). The energy budgets of datasets H1 and H4 were investigated in Lefauve _et al._ (2019_b_) and Lefauve & Linden
Figure 5: Three-dimensional vortex and isopycnal surfaces in H4 at \(t=283\): (a) Exp. vs (b) PINN. The grey iso-surfaces show \(Q=0.15\), while the colours show the isopycnal \(\rho=-0.85\) and its vertical position \(z\in[-0.2,\,0.2]\). The two black lines are the isopycnals \(\rho=-0.85\) and \(\rho=0\) in the mid-plane \(y=0\).
(2022) but we will show that the PINN data can overcome limitations in spatial resolution (especially in \(y\)), in the relatively low signal-to-noise ratio of perturbation variables in HWs, and other limitations inherent to calculating energetics from experiments.
Figure 6 shows four key terms in the budgets of turbulent kinetic energy (TKE) \(K^{\prime}=(1/2)(\mathbf{u}^{\prime}\cdot\mathbf{u}^{\prime})\) and turbulent scalar variance (TSV) \(K^{\prime}_{\rho}=(\mathrm{Ri}/2)(\rho^{\prime})^{2}\), namely the production \(P\) and dissipation \(\varepsilon\) of TKE, the buoyancy flux \(B\) (exchanging energy between TKE and TSV) and the dissipation of TSV \(\chi\), defined as in Lefauve & Linden (2022):
\[P\equiv-\langle u^{\prime}v^{\prime}\partial_{y}\bar{u}+u^{\prime}w^{\prime}\partial_{z}\bar{u}\rangle,\ \ \varepsilon\equiv\frac{2}{\mathrm{Re}}\langle||\mathbf{E}^{\prime}||^{2}\rangle,\ \ B\equiv\mathrm{Ri}\langle w^{\prime}\rho^{\prime}\rangle,\ \ \chi\equiv\frac{\mathrm{Ri}}{\mathrm{Re}\,\mathrm{Pr}}\langle|\mathbf{\nabla}\rho^{\prime}|^{2}\rangle, \tag{3.1}\]
where fluctuations (prime variables) are computed around the \(x,t\) averages (overbars), as in \(\phi^{\prime}=\phi-\bar{\phi}\), and \(\langle\,\rangle\) denote averaging over \(x\), \(y\), and \(t\) (but not \(z\)).
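A sketch of how three of these profiles could be evaluated from gridded fields of shape \((n_x, n_y, n_z, n_t)\) is given below; the TKE dissipation, which needs the full fluctuating strain tensor, is omitted for brevity, and the function is illustrative rather than the authors' code:

```python
import numpy as np

def budget_profiles(u, v, w, rho, dx, dy, dz, Re, Ri=0.25, Pr=700.0):
    """Vertical profiles P(z), B(z), chi(z) of the budget terms defined above."""
    mean_xt = lambda f: f.mean(axis=(0, 3), keepdims=True)  # bar: x, t average
    avg = lambda f: f.mean(axis=(0, 1, 3))                  # < >: x, y, t average
    up, vp, wp, rp = (f - mean_xt(f) for f in (u, v, w, rho))
    ubar = mean_xt(u)[0, :, :, 0]                           # mean profile (n_y, n_z)
    duy, duz = np.gradient(ubar, dy, dz)
    P = -avg(up * vp * duy[None, :, :, None] + up * wp * duz[None, :, :, None])
    B = Ri * avg(wp * rp)
    grads = np.gradient(rp, dx, dy, dz, axis=(0, 1, 2))
    chi = Ri / (Re * Pr) * avg(sum(g ** 2 for g in grads))
    return P, B, chi
```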
The vertical profiles of TKE production \(P(z)\) (panel a) show little difference between the PINN data (solid lines) and the experimental data (dashed lines), whether in H1 (blue) or H4 (red). We rationalise this by the fact that \(P\) does not contain any derivatives of the velocity perturbations, and is thus less affected by small-scale noise. By contrast, the TKE dissipation \(\varepsilon(z)\) (panel b) does contain gradients, and, consequently, greater differences between PINN and experiments are found. In H4, the noise in the experiments leads to an overestimation of the dissipation, especially away from the interface and near the walls (\(|z|\gtrsim 0.5\)) where turbulent fluctuations are not expected.
The buoyancy flux \(B(z)\) (panel c) and the TSV dissipation \(\chi(z)\) (panel d) peak at the respective density interfaces of H1 (\(z\approx 0.1\)) and H4 (\(z\approx-0.2\)), and significant differences between experiments and PINN are again found. While experiments typically overestimate the magnitude of \(B\) (which does not contain derivatives), they underestimate \(\chi\), which contains derivatives of \(\rho^{\prime}\) that occur on notoriously small scales in such a high \(\mathrm{Pr}=700\) flow. These derivatives are expected to be better captured by the PINN as a consequence of its ability to super-resolve the density field. We also note the locally _negative_ values of \(B\) at the respective interfaces, which confirm the scouring behaviour of HWs (Zhou _et al._, 2017). The mixing efficiency, defined as the ratio of TSV to TKE dissipation \((\chi/\varepsilon)(z)\) (panel e), is about twice as high in the PINN-reconstructed data as in the experiments, peaking sharply at \(\chi/\varepsilon\approx 0.06\) in H1 and \(\chi/\varepsilon\approx 0.12\) in H4 at the respective density interfaces (note that volume-averaged values are an order of magnitude below). The ability of the PINN to correct mixing efficiency estimates is significant for further research into the mixing of high-Pr turbulence.
Finally, \(K^{\prime}\) and \(K^{\prime}_{\rho}\) are expected to approach a statistical steady state when averaged over the entire volume \(\mathcal{V}\) and over long time periods of order 100, such that the sum of all sources and sinks in their budgets should cancel (Lefauve & Linden, 2022). Importantly, the PINN predicts more plausible budgets than experiments, e.g. in H4, \(\langle\partial_{t}K^{\prime}\rangle_{\mathcal{V},t}=-5.7\times 10^{-5}\) (eight
Figure 6: Improved energy budgets in H1 and H4: vertical profiles of (a) production \(P\); (b) dissipation \(\varepsilon\); (c) buoyancy flux \(B\); (d) scalar dissipation \(\chi\); (e) mixing efficiency \(\chi/\varepsilon\), as defined in (1). The blue and red dotted line shows the mean density interface \(\langle\rho\rangle=0\).
times closer to zero than the experiment: \(-4.8\times 10^{-4}\)) and \(\langle\partial_{t}K^{\prime}_{\rho}\rangle_{\mathcal{V},t}=8.6\times 10^{-6}\) (five times closer to zero than the experiment \(-4.2\times 10^{-5}\)).
### Revealing the latent pressure field
It is well known that an offset of the sharp density interface (here \(\langle\rho\rangle=0\)) with respect to the mid-point of the shear layer (here \(\langle u\rangle=0\)) leads to asymmetric HWs (Lawrence _et al._, 1991), where one of the travelling modes dominates at the expense of the other, which may disappear entirely, as in H4. Recent direct numerical simulations (DNS) of SID revealed the non-trivial role of the pressure field in offsetting the density interface (Zhu _et al._, 2023) through a type of hydraulic jump appearing at a relatively large duct tilt angle of \(\theta=5^{\circ}\), as in H4. However, due to computational costs, these DNS were run at Pr = 7 (versus 700 in experiments), and could thus not reproduce Holmboe waves. Here we demonstrate the physical insights afforded by the PINN reconstruction of Pr = 700 experimental data.
Figure 7(a,b) compares the instantaneous non-dimensional pressure field predicted by the PINN in H4 (panel b) to that of the DNS of Zhu _et al._ (2023) (case B5, panel a), in which the full duct geometry was simulated at an identical \(\theta=5^{\circ}\) and a slightly higher Re = 650 (vs 438).
Although the vastly different Pr led to slightly different flow states (HW in H4 versus a stationary wave in B5, leading to an internal hydraulic jump at \(x=0\)), the pressure distributions have clear similarities, as seen by comparing the black box of panel a to panel b. In both cases, a minimum pressure is found near the density interface, and a negative pressure (blue shades) is found nearer the centre of the duct (\(x>-10\)). This minimum yields, in the top layer (\(z>0\)) on the left-hand side of the duct (\(x<0\)), a pressure that increases from right to left, i.e. in the direction of the flow, which slows down the upper layer over the second half of its transit along the duct. A symmetric situation occurs in the bottom layer on the right-hand side of the duct. This behaviour was rationalised as the consequence of a hydraulic jump at \(\theta=5^{\circ}\) (Atoufi _et al._, 2023), which was absent at \(\theta=2^{\circ}\).
This physical picture is confirmed by the pressure force \(-\partial_{x}p(z)\) profiles in figure 7(c). This panel shows that both H1 and B2 (at low \(\theta\)) have the typical favourable force \(-\partial_{x}p(z)\) (positive in the lower layer, negative in the upper layer) expected of horizontal exchange flows. However, both H4 and B5 have an adverse pressure force in the upper layer on the left-hand side of the duct (i.e. \(-\partial_{x}p>0\)), which is typical downstream of a hydraulic jump, and results in the density interface being shifted down in this region, explaining the asymmetry of HWs.
The \(Q\)-criterion lines superposed on the PINN data in panel b also highlight that this low-pressure zone is associated with intense vortices, which is consistent with the common vortex-pressure relation (Hunt _et al._, 1988). We hypothesise that the lift-up of a three-dimensional HW in a shear layer causes the development of locally high shear in proximity to the wave, which further evolves into \(\Lambda\)-vortices (Jiang _et al._, 2022).

Figure 7. Prediction of the latent pressure: instantaneous pressure field in the mid-plane \(y=0\) (colours). (a) DNS reproduced from case B5 of Zhu _et al._ (2023) (whole duct shown). (b) H4 reconstructed by PINN showing the measured volume \(x\in[-17,-7]\) only, indicated by a black box in (a), with \(Q\)-criterion black lines superimposed. The white solid lines indicate the density interface \(\rho=0\). (c) Longitudinal pressure force \(-\langle\partial_{x}p\rangle(z)\) in H1 and H4, compared with the closest respective DNS B2 and B5.
## 4 Conclusions
In this paper, we developed a physics-informed neural network (PINN) that uses physical laws to augment experimental data, and applied it to two stratified inclined duct datasets featuring symmetric and asymmetric Holmboe waves (HWs) at high Prandtl number \(\mathrm{Pr}\approx 700\).
We first demonstrated in § 3.1 the elimination of unphysical noise and of the spanwise distortion of wavefronts caused by the scanning data acquisition, yielding cleaner, highly-resolved spatio-temporal wave propagation plots. This noise reduction allowed us in § 3.2 to unambiguously detect weak but influential three-dimensional vortical structures, previously connected to higher-Re turbulent structures, and to study their interaction with isopycnals. The accuracy of energy budgets was also improved in § 3.3 owing to the PINN's noise removal and super-resolution capabilities, especially for terms involving the computation of small-scale derivatives such as the dissipation of turbulent kinetic energy and scalar variance. Mixing efficiency was revealed to be twice as high as suggested by raw experimental data, locally peaking at 0.06 in the symmetric HW case and 0.12 in the asymmetric HW case. Finally, additional physics were uncovered in § 3.4 through the latent pressure field, confirming the existence of a pressure minimum towards the centre of the duct observed in simulation data. This minimum was linked to the existence of a hydraulic jump, offsetting the density interface, which is key to explaining the presence or absence of asymmetric HWs in our data.
These results mark a significant step forward in experimental fluid mechanics, and hold particular promise for the study of density-stratified turbulence and mixing from state-of-the-art laboratory data. Future work should attempt to reconstruct the more challenging turbulent flows at higher values of \(\theta\,\mathrm{Re}\) than done in this paper. This should resolve the large spectrum of turbulent scales below experimental resolution, which is especially relevant for scalar dissipation and mixing rates given the separation of scales at high \(\mathrm{Pr}\).
We acknowledge the ERC Horizon 2020 Grant No 742480 'Stratified Turbulence And Mixing Processes'. A.L. acknowledges a NERC Independent Research Fellowship (NE/W008971/1).
Declaration of interest: The authors report no conflict of interest.
|
2309.05613 | Learning the Geodesic Embedding with Graph Neural Networks | We present GeGnn, a learning-based method for computing the approximate
geodesic distance between two arbitrary points on discrete polyhedra surfaces
with constant time complexity after fast precomputation. Previous relevant
methods either focus on computing the geodesic distance between a single source
and all destinations, which has linear complexity at least or require a long
precomputation time. Our key idea is to train a graph neural network to embed
an input mesh into a high-dimensional embedding space and compute the geodesic
distance between a pair of points using the corresponding embedding vectors and
a lightweight decoding function. To facilitate the learning of the embedding,
we propose novel graph convolution and graph pooling modules that incorporate
local geodesic information and are verified to be much more effective than
previous designs. After training, our method requires only one forward pass of
the network per mesh as precomputation. Then, we can compute the geodesic
distance between a pair of points using our decoding function, which requires
only several matrix multiplications and can be massively parallelized on GPUs.
We verify the efficiency and effectiveness of our method on ShapeNet and
demonstrate that our method is faster than existing methods by orders of
magnitude while achieving comparable or better accuracy. Additionally, our
method exhibits robustness on noisy and incomplete meshes and strong
generalization ability on out-of-distribution meshes. The code and pretrained
model can be found on https://github.com/IntelligentGeometry/GeGnn. | Bo Pang, Zhongtian Zheng, Guoping Wang, Peng-Shuai Wang | 2023-09-11T16:54:34Z | http://arxiv.org/abs/2309.05613v2 | # Learning the Geodesic Embedding with Graph Neural Networks
###### Abstract.
We present GeGnn, a learning-based method for computing the approximate geodesic distance between two arbitrary points on discrete polyhedra surfaces with constant time complexity after fast precomputation. Previous relevant methods either focus on computing the geodesic distance between a single source and all destinations, which has linear complexity at least, or require a long precomputation time. Our key idea is to train a graph neural network to embed an input mesh into a high-dimensional embedding space and compute the geodesic distance between a pair of points using the corresponding embedding vectors and a lightweight decoding function. To facilitate the learning of the embedding, we propose novel graph convolution and graph pooling modules that incorporate local geodesic information and are verified to be much more effective than previous designs. After training, our method requires only one forward pass of the network per mesh as precomputation. Then, we can compute the geodesic distance between a pair of points using our decoding function, which requires only several matrix multiplications and can be massively parallelized on GPUs. We verify the efficiency and effectiveness of our method on ShapeNet and demonstrate that our method is faster than existing methods by orders of magnitude while achieving comparable or better accuracy. Additionally, our method exhibits robustness on noisy and incomplete meshes and strong generalization ability on out-of-distribution meshes. The code and pretrained model can be found on [https://github.com/IntelligentGeometry/GeGnn](https://github.com/IntelligentGeometry/GeGnn).
**ACM Reference Format:**
Bo Pang, Zhongtian Zheng, Guoping Wang, and Peng-Shuai Wang. 2024. Learning the Geodesic Embedding with Graph Neural Networks. _ACM Trans. Graph._ 42, 6, Article 233 (December 2024), 12 pages. [https://doi.org/10.1145/3618317](https://doi.org/10.1145/3618317)
## 1. Introduction
The computation of the geodesic distance between two arbitrary points on polyhedral surfaces, also referred to as the geodesic distance query (GDQ), is a fundamental problem in computational geometry and graphics and has a broad range of applications, including texture mapping (Zigelman et al., 2002), symmetry detection (Xu et al., 2009), mesh deformation (Bendels and Klein, 2003), and surface correspondence (Raviv et al., 2010). Although plenty of methods (Adikusuma et al., 2020; Crane et al., 2013; Mitchell et al., 1987; Ying et al., 2013) have been proposed for computing single-source-all-destinations geodesic distances, and some of them can even run empirically in linear time (Crane et al., 2013; Tao et al., 2019; Ying et al., 2013), it is still too expensive to leverage these methods for GDQs. Consequently, a few dedicated methods (Gotsman and Hormann, 2022; Panozzo et al., 2013; Xia et al., 2021; Xin et al., 2012) have been proposed to ensure that the computation
complexity of GDQ is constant to meet the huge requirements of frequent GDQs in interactive applications.

Figure 1. Appealing results produced by our method. The distance fields are color-coded using a gradient ranging from red to blue. (a) We train a graph neural network on ShapeNet to predict the geodesic distances among arbitrary pairs of points on meshes with constant time complexity after a single forward pass as precomputation; the shown results are from the testing set and unseen categories of ShapeNet with an average relative error of less than 1.4% and 2.6%, respectively, which are visually indistinguishable from the ground truth. (b) The network exhibits strong generalization capabilities and can be applied to general meshes out of ShapeNet. (c) The network demonstrates strong robustness on meshes with corrupted topology, which traditional geodesic algorithms cannot handle. (d) Our method is general and can also be used to predict biharmonic distances.
However, these methods either have low accuracy or require long precomputation time, which severely limits their applicability to large-scale meshes. Specifically, in the precomputation stage, these methods typically need to compute the exact geodesic distances between a large number of vertex pairs (Gotsman and Hormann, 2022; Panozzo et al., 2013; Xia et al., 2021; Xin et al., 2012), incurring at least quadratic space and computation complexity; then some methods employ nonlinear optimization (Panozzo et al., 2013; Rustamov et al., 2009) like Metric Multidimensional Scaling (MDS) (Carroll and Arabie, 1998) or cascaded optimization (Xia et al., 2021) to embed the input mesh into a high-dimensional space and approximate the geodesic distance with Euclidean distance in the high-dimensional space, where the optimization process is also time-consuming and costs several minutes, up to hours, even for meshes with tens of thousands of vertices. Additionally, these methods are highly dependent on the quality of input meshes, severely limiting their ability to deal with noisy or incomplete meshes.
In this paper, we propose a learning-based method for constant-time GDQ on arbitrary meshes. Our key idea is to train a graph neural network (GNN) to embed an input mesh into a high-dimensional feature space and compute the geodesic distance between any pair of vertices with the corresponding feature distance defined by a learnable mapping function. The high-dimensional feature space is referred to as a geodesic embedding of a mesh (Panozzo et al., 2013; Xia et al., 2021). Therefore, our method can be regarded as learning the geodesic embedding with a GNN, instead of relying on costly optimization procedures (Panozzo et al., 2013; Rustamov et al., 2009; Xia et al., 2021); thus, we name our method as GeGnn. After training, our GeGnn can predict the geodesic embedding by just one forward pass of the network on GPUs, which is significantly more efficient than previous optimization-based methods (Panozzo et al., 2013; Rustamov et al., 2009; Xia et al., 2021). Our GeGnn also learns shape priors for geodesic distances during the training process, which makes it robust to the quality of input meshes. As a result, our GeGnn can even be applied to corrupted or incomplete meshes, with which previous methods often fail to produce reasonable results.
The key challenges of our GeGnn are how to design the graph convolution and pooling modules, as well as the mapping function for geodesic distance prediction. Although plenty of graph convolutions have been proposed (Wu et al., 2020) and widely used for learning on meshes (Fey et al., 2018; Hanocka et al., 2019), they are mainly designed for mesh understanding, thus demonstrating inferior performance for our purpose. Our key observation is that the local geometric structures of a mesh are crucial for geodesic embedding. To this end, we propose a novel graph convolution by incorporating local distance features on edges and an adaptive graph pooling module by considering the normal directions of vertices. Our graph convolution follows the message-passing paradigm (Gilmer et al., 2017; Simonovsky and Komodakis, 2017), which updates the vertex feature by aggregating neighboring vertex features. Inspired by the Dijkstra-like methods for computing geodesic distances (Sethian, 1999; Tsitsiklis, 1995) that propagate extremal distances on the wavefront, we propose aggregating local features with the _max_ operator instead of _sum_ or _mean_, which we find to be much more effective for geodesic embedding.
After embedding an input mesh into a high-dimensional feature space with the proposed GNN, one may naively follow previous works (Panozzo et al., 2013; Rustamov et al., 2009) to use the Euclidean distance in the embedding space between two vertices to approximate the geodesic distance. However, we observe that the Euclidean distance cannot well approximate the geodesic distance, regardless of the dimension of the embedding space. To tackle this issue, we propose to use a lightweight multilayer perceptron (MLP) as a learnable distance function to map the embedding features of a pair of vertices to the geodesic distance, which turns out to work much better than the Euclidean distance.
We train our GeGnn on ShapeNet and verify its effectiveness and efficiency over other state-of-the-art methods for GDQs. Our GeGnn has constant time complexity after a single forward pass of the network as precomputation and linear space complexity, which is much more efficient than previous methods (Panozzo et al., 2013; Rustamov et al., 2009; Xia et al., 2021), with a speedup of orders of magnitude for precomputation and comparable approximation errors for geodesic distances. Our GeGnn also demonstrates superior robustness and strong generalization ability during the inference stage. Finally, we showcase a series of interesting applications supported by our GeGnn.
In summary, our main contributions are as follows:
* We propose GeGnn, a learning-based method for GDQ using a graph neural network with constant time complexity after a single forward pass as precomputation.
* We propose a novel graph convolution module and a graph pooling module, which significantly increase the capability of GeGnn on learning the geodesic embedding.
* We propose to use a learnable distance function to map the embedding features of a pair of vertices to the geodesic distance, resulting in a significant improvement in the approximation accuracy compared with the Euclidean distance.
* We conduct experiments to demonstrate that the proposed framework is robust and generalizable and is also applicable to predict the biharmonic distance.
## 2. Related Work
_Single-Source Geodesic Distances._ Plenty of algorithms have been developed for the problem of computing single-source geodesic distances; we refer the readers to (Crane et al., 2020) for a comprehensive survey. Representative methods include wavefront-propagation-based methods with a priority queue following the classic Dijkstra algorithm (Chen and Han, 1990; Mitchell et al., 1987; Qin et al., 2016; Surazhsky et al., 2005; Xin and Wang, 2009; Xu et al., 2015; Ying et al., 2014), PDE-based methods by solving the Eikonal equation (Crane et al., 2013, 2017; Kimmel and Sethian, 1998; Memoli and Sapiro, 2001, 2005; Sethian, 1999), and geodesic-graph-based methods (Adikusuma et al., 2020; Sharp and Crane, 2020; Ying et al., 2013). There are also parallel algorithms that leverage GPUs for acceleration (Weber et al., 2008). However, the computational cost of these methods is at least linear in the number of vertices, while our method has constant complexity during inference.
_Geodesic Distance Queries._ The goal of our paper is to approximate the geodesic distance between two arbitrary vertices on a mesh in constant time after fast precomputation. The research on this problem is relatively preliminary, and there are only a few related works (Gotsman and Hormann, 2022; Panozzo et al., 2013; Shamai et al., 2018; Xin et al., 2012; Zhang et al., 2023). A naive solution is to compute the geodesic distance between all pairs of vertices in advance and store them in a lookup table. However, this method is not scalable to large meshes due to its quadratic time and space complexity. Xin _et al._ (2012) propose to construct the geodesic Delaunay triangulation for a fixed number of sample vertices, compute pairwise geodesic distances, record the distances of each vertex to the three vertices of the corresponding geodesic triangle, and approximate the geodesic distance between two arbitrary vertices using triangle unfolding. Its time and space complexity is quadratic to the number of samples, which is still not scalable to large meshes. Panozzo _et al._ (2013) propose to embed a mesh into a high-dimensional space to approximate the geodesic distance between a pair of points with the Euclidean distance in the embedding space, which is realized by optimizing with Metric MDS (Carroll and Arabie, 1998; Rustamov et al., 2009). However, the metric MDS in (Panozzo et al., 2013) involves a complex nonlinear optimization and thus is computationally expensive. Shamai _et al._ (2018) solve pairwise geodesics employing a fast classic MDS, which extrapolates distance information obtained from a subset of points to the remaining points. Xia _et al._ (2021) propose to compute the geodesic embedding by cascaded optimization on a proximity graph containing a subset of vertices (Ying et al., 2013). Likewise, the computational expense of this optimization process makes it impractical for large meshes. Recently, Gotsman _et al._ (2022) propose to compress the pairwise geodesic distances by a heuristic divide-and-conquer algorithm, but its worst-case time complexity for GDQ is not constant. In contrast, our GeGnn needs no expensive precomputation and has remarkable efficiency for GDQ, and thus is scalable to large meshes. Concurrently, Zhang _et al._ (2023) also propose to use neural networks for GDQs; however, this method concentrates on learning on a single mesh, while our method is trained on a large dataset to achieve generalization capability across different meshes.
_Other Geometric Distances._ Apart from the geodesic distance, there are also several other distances on meshes. The diffusion distance (Coifman et al., 2005) is defined by a diffusion process on the mesh and is widely used for global shape analysis. The commute distance (Fouss et al., 2007; Yen et al., 2007) is intuitively described by the expected time for a random walk to travel between two vertices. The biharmonic distance (Lipman et al., 2010) can be calculated by the solution of the biharmonic equation. The earth mover's distance (Solomon et al., 2014) is defined by the minimum cost of moving the mass of one point cloud to another. Apart from geodesic distances, our GeGnn can also be easily extended to approximate these distances by replacing the geodesic distance with the corresponding distance in the loss function. We showcase the results of approximating the biharmonic distance in our experiments.
_Graph Neural Networks._ Graph neural networks (GNN) have been widely used in many applications (Wu et al., 2020). As the core of a GNN, a graph convolution module can be formulated under the message passing paradigm (Gilmer et al., 2017; Simonovsky and Komodakis, 2017). Representative GNNs include the graph convolutional network (Kipf and Welling, 2017), the graph attention network (Velickovic et al., 2017), and GraphSAGE (Hamilton et al., 2017). Many point-based neural networks for point cloud understanding (Li et al., 2018; Qi et al., 2017; Thomas et al., 2019; Wang et al., 2019; Xu et al., 2018) can be regarded as special cases of graph neural networks on k-nearest-neighbor graphs constructed from unstructured point clouds. Graph neural networks can also be applied to triangle meshes for mesh understanding (Hanocka et al., 2019; Hu et al., 2022; Yi et al., 2017), simulation (Pfaff et al., 2020), processing (Liu et al., 2020), and generation (Hanocka et al., 2020). However, these methods are not specifically designed for GDQs and demonstrate inferior performance compared to our dedicated graph convolution in terms of approximation accuracy.
## 3. Method
_Overview._ The overall pipeline of our GeGnn is shown in Fig. 2. Our method can be roughly separated into two parts: precomputation and geodesic distance query (GDQ). The precomputation step (Fig. 2 (b)) involves evaluating a graph neural network to obtain vertex-wise features of fixed dimension. This step is performed only once for each mesh, regardless of the number of subsequent queries. The geodesic distance query (Fig. 2 (d)) involves forwarding the powered difference between two features to a fixed-size MLP. This
process requires a fixed number of max/add/multiplication operations for each query, resulting in a time complexity of \(O(1)\) for GDQ. Specifically, given an input mesh \(\mathcal{M}=\{\mathcal{V},\mathcal{F}\}\), where \(\mathcal{V}\) and \(\mathcal{F}\) denote the set of vertices and faces of the mesh, the connectivity of the mesh defined by \(\mathcal{F}\) naturally forms a graph \(\mathcal{G}\). We first train a graph neural network that takes \(\mathcal{G}\) and \(\mathcal{V}\) as input and maps each vertex \(v_{i}\) in \(\mathcal{V}\) to a high-dimensional embedding vector \(p_{i}\in\mathbb{R}^{c}\), where \(c\) is set to \(256\) by default. We construct a U-Net (Ronneberger et al., 2015) on the graph to effectively extract features for each vertex. Then we use a lightweight MLP to take the features of two arbitrary vertices as input and output the corresponding geodesic distance between them. The key building blocks of our network include a novel graph convolution module and a graph pooling module, which are elaborated in Section 3.1. The feature decoding scheme for geodesic distance prediction is elaborated in Section 3.2. Finally, the network details and the loss function are introduced in Section 3.3.

Figure 2. Overview of our GeGnn. We construct a graph from an input mesh (a), and feed the graph to a graph neural network (b). The network architecture follows U-Net. The network lifts every vertex to a high-dimensional embedding space (c). Then we use a lightweight MLP to decode the embeddings of a vertex pair to the corresponding geodesic distance (d). In the training stage, we obtain a loss by comparing the predicted geodesic distance with the precomputed ground truth geodesic distance, which is used to backpropagate through the network and update its parameters. In the testing stage, we can directly use the trained network to predict the vertex features and then compute geodesic distance between any two vertices on the mesh with the MLP.
### Graph Neural Network
In this section, we introduce our graph convolution and pooling modules tailored for learning the geodesic embedding, which serve as the key building blocks of our network.
#### 3.1.1. Graph Convolution
The graph convolution module is used to aggregate and update features on a graph. The graph \(\mathcal{G}\) constructed from an input mesh \(\mathcal{M}=\{\mathcal{V},\mathcal{F}\}\) contains the neighborhood relationships among vertices. For the \(i^{th}\) vertex \(v_{i}\) in \(\mathcal{V}\), we denote the set of its neighboring vertices as \(\mathcal{N}(i)\) and its feature as \(F_{i}\). Generally, denoting the output of the graph convolution module as \(F^{\prime}_{i}\), the module under the message passing paradigm (Fey and Lenssen, 2019; Gilmer et al., 2017; Simonovsky and Komodakis, 2017) can be defined as follows:
\[F^{\prime}_{i}=\gamma\left(F_{i},\ \mathcal{A}_{j\in\mathcal{N}(i)}\phi\left(F_{i},F_{j},v_{i},v_{j}\right)\right), \tag{1}\]
where \(\gamma\) and \(\phi\) are differentiable functions for updating features, and \(\mathcal{A}\) is a differentiable and permutation invariant function for aggregating neighboring features, e.g., _sum_, _mean_, or _max_. The key difference among different graph convolution operators lies in the design of \(\gamma\), \(\phi\), and \(\mathcal{A}\).
Although plenty of graph convolutions have been proposed, they are mainly used for graph or mesh understanding (Fey et al., 2018; Kipf and Welling, 2017; Velickovic et al., 2017) and are insensitive to the local distances between vertices, which are crucial for geodesic embedding. Our design philosophy is to explicitly incorporate the distances between neighboring vertices into the graph convolution, with a focus on maximizing simplicity while maintaining expressiveness. Therefore, we define the functions for updating features \(\phi\) and \(\gamma\) as follows:
\[\phi\left(F_{i},F_{j},v_{i},v_{j}\right)=W_{1}\times[F_{j}\parallel v_{ij}\parallel l_{ij}], \tag{2}\]
\[\gamma(F_{i},\tilde{F}_{i})=W_{0}\times F_{i}+\tilde{F}_{i}, \tag{3}\]
where \(\tilde{F}_{i}=\mathcal{A}_{j\in\mathcal{N}(i)}\phi\left(F_{i},F_{j},v_{i},v_{j }\right)\), \(W_{0}\) and \(W_{1}\) are trainable weights, \(\parallel\) is a concatenation operator, \(v_{ij}\) is a shorthand for \(v_{i}-v_{j}\), and \(l_{ij}\) is the length of \(v_{ij}\). For the aggregation operator, we choose _max_, instead of _sum_ or _mean_, which is motivated by Dijkstra-like methods for computing geodesic distances (Sethian, 1999; Tsitsiklis, 1995) that propagate the shortest distance on the wavefront. This choice is also consistent with the observation in (Xu et al., 2018) that the _max_ is advantageous in identifying representative elements. In summary, our graph convolution is defined as follows:
\[F^{\prime}_{i}=W_{0}\times F_{i}+\max_{j\in\mathcal{N}(i)}\left(W_{1}\times[F_{j}\parallel v_{ij}\parallel l_{ij}]\right). \tag{4}\]
Since our graph convolution is designed for geodesic embedding, we name it as _GeoConv_. According to Eq. (4), our _GeoConv_ does not use any global information such as absolute positions, and thus is translation and permutation invariant by design, which are desired properties for geodesic computing.
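A minimal PyTorch sketch of Eq. (4) is given below; the edge-list representation and the scatter-based max aggregation are implementation assumptions for illustration, not necessarily the authors' code:

```python
import torch
import torch.nn as nn

class GeoConv(nn.Module):
    """F'_i = W0 * F_i + max_{j in N(i)} W1 * [F_j || v_ij || l_ij]  (Eq. 4)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.w0 = nn.Linear(in_ch, out_ch, bias=False)
        self.w1 = nn.Linear(in_ch + 4, out_ch, bias=False)  # +3 for v_ij, +1 for l_ij

    def forward(self, feats, pos, edges):
        i, j = edges                              # (2, E) directed edges i <- j
        v_ij = pos[i] - pos[j]
        l_ij = v_ij.norm(dim=-1, keepdim=True)
        msg = self.w1(torch.cat([feats[j], v_ij, l_ij], dim=-1))
        out = torch.full((feats.size(0), msg.size(1)), float('-inf'),
                         dtype=msg.dtype, device=msg.device)
        out = out.scatter_reduce(0, i.unsqueeze(-1).expand_as(msg), msg,
                                 reduce="amax", include_self=True)
        out = torch.where(torch.isfinite(out), out, torch.zeros_like(out))
        return self.w0(feats) + out               # zero message for isolated vertices
```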
Compared with previous graph convolutions in the field of 3D deep learning (Simonovsky and Komodakis, 2017; Wang et al., 2019) that define \(\phi\) or \(\gamma\) as MLPs, our _GeoConv_ is much simpler, while being more effective for geodesic embedding, as verified in our experiments and ablation studies. Our _GeoConv_ is reminiscent of GraphSAGE (Hamilton et al., 2017) for graph node classification on citation and Reddit post data; however, GraphSAGE does not consider local distances between vertices and uses _mean_ for aggregation, resulting in an inferior performance for geodesic embedding.
#### 3.1.2. Graph Pooling
Graph pooling is used to progressively downsample the graph to facilitate the construction of U-Net. Representative graph pooling designs in previous works are either based on mesh simplification with a priority queue (Hanocka et al., 2019) or leverage farthest point sampling (Qi et al., 2017), incurring huge computational cost and difficulties concerning parallelism. Therefore, we resort to grid-based graph pooling for efficiency (Hu et al., 2020; Simonovsky and Komodakis, 2017; Thomas et al., 2019). Grid-based pooling employs regular voxel grids to cluster vertices within the same voxel into a single vertex, resulting in a straightforward and efficient way to downsample the graph. Nevertheless, grid-based pooling only relies on the Euclidean distance of vertices and ignores the topology of the underlying graph or mesh. An example is shown in Fig. 3-(a): the red and blue vertices are on the opposite sides of the tabletop; they are far away in terms of geodesic distance but close in Euclidean space. Grid-based pooling may merge these two vertices into a single vertex as shown in Fig. 3-(b), which is undesirable for geodesic embedding.
To address this issue, we propose a novel graph pooling, called _GeoPool_, which is aware of both Euclidean and geodesic distances. Specifically, we construct regular grids in 6D space consisting of
vertex coordinates and the corresponding normals. We specify two scale factors, \(\sigma_{c}\) for the coordinates and \(\sigma_{n}\) for the normals, to control the size of the grids in different dimensions. Then we merge vertices within the same grid in 6D space and compute the averages of coordinates, normals, and associated features as the output. The use of normals can effectively prevent vertices on the opposite sides of a thin plane from being merged when downsampling the graph. As shown in Fig. 3-(c), the red and blue vertices are in different grids in 6D space, and thus are not merged into a single vertex with _GeoPool_. Note that _GeoPool_ reduces to vanilla grid-based pooling when \(\sigma_{n}\) is set to infinity.

Figure 3. Comparison of graph pooling strategies. (a) The red and blue vertices are located on opposite sides of the tabletop. Although they are distant in terms of geodesic distance, they are close in Euclidean space. (b) The grid-based pooling merges the red and blue vertices into a single vertex, which is inadequate for geodesic embedding. (c) Our _GeoPool_ is capable of preserving the geodesic distance and effectively distinguishing between the red and blue vertices.
We also construct a graph unpooling module, named _GeoUnpool_, for upsampling the graph in the decoder of U-Net. Specifically, we keep track of the mapping relation between vertices before and after the pooling operation, and _GeoUnpool_ is performed by reversing the _GeoPool_ with the cached mapping relation.
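The following numpy sketch illustrates the 6D grid clustering behind _GeoPool_ and the vertex-to-cluster map that _GeoUnpool_ can later reverse; details such as the exact averaging are assumptions consistent with the text:

```python
import numpy as np

def geo_pool(pos, normals, feats, sigma_c=1/16, sigma_n=3/16):
    """Merge vertices sharing a 6D (position, normal) grid cell by averaging."""
    cells = np.floor(np.concatenate([pos / sigma_c, normals / sigma_n], axis=1))
    _, cluster, counts = np.unique(cells.astype(np.int64), axis=0,
                                   return_inverse=True, return_counts=True)
    def scatter_mean(x):
        out = np.zeros((counts.size, x.shape[1]))
        np.add.at(out, cluster, x)  # accumulate rows per cluster
        return out / counts[:, None]
    # `cluster` maps each input vertex to its merged vertex; caching it is
    # what allows the unpooling step to reverse the pooling.
    return scatter_mean(pos), scatter_mean(normals), scatter_mean(feats), cluster
```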
#### 3.1.3. Network Architecture
We construct a U-Net [Ronneberger et al., 2015] with our _GeoConv_, _GeoPool_, and _GeoUnpool_ modules, as shown in Fig. 4. The network takes the graph constructed from an input mesh as input. The initial vertex signals have 6 channels, including the normalized vertex coordinates and normals. The vertex features are updated by _GeoConv_; and each _GeoConv_ is followed by a ReLU activation and a group normalization [Wu and He, 2018]. The _ResBlock_ in Fig. 4 is built by stacking 2 _GeoConv_ modules with a skip connection [He et al., 2016]. The output channels of each _GeoConv_ are all set to 256. The graph is progressively downsampled and upsampled by _GeoPool_ and _GeoUnpool_. Since the input mesh is first normalized into \([-1,1]\), \(\sigma_{c}\) and \(\sigma_{n}\) are initially set to \(1/16\) and \(3/16\), and increase by a factor of 2 after each pooling operation. Overall, the U-Net extracts a feature for every vertex, which is then mapped to 256 channels as the geodesic embedding using a MLP consisting of two fully-connected layers with 256 channels.
The geodesic distance between two arbitrary vertices is determined by the shortest path between them, which is a global property of the mesh. The U-Net built upon our graph modules has global receptive fields, while being aware of local geometric structures, which is advantageous for geodesic embedding. The multi-resolution structure of U-Net also increases the robustness when dealing with incomplete meshes, since the missing edges or isolated vertices in incomplete meshes can be merged and get connected in coarser resolutions. We verify the robustness of our network in the experiments.
### Geodesic Decoder
The graph network outputs a geodesic embedding \(p_{i}\in\mathbb{R}^{256}\) for each vertex \(v_{i}\). To approximate the geodesic distance between vertex \(v_{i}\) and \(v_{j}\), we need to define a distance function \(d(p_{i},p_{j})\) in the embedding space as the decoder. An intuitive strategy is to directly use the Euclidean distance between \(p_{i}\) and \(p_{j}\), following (Panozzo et al., 2013; Rustamov et al., 2009): \(d(p_{i},p_{j})=||p_{i}-p_{j}||_{2}\). However, such a Euclidean embedding is unattainable in most scenarios, including curved surfaces with non-zero Gaussian curvature (Pressley, 2010) and finite metric spaces with Gramian matrices that lack positive semidefiniteness (Maehara, 2013).
Here is a simple example to illustrate the issue. Denote the geodesic distance between point \(X\) and point \(Y\) as \(d_{XY}\). Consider four points \(A\), \(B\), \(C\), and \(D\) on a unit circle, where \(A\) and \(B\) are antipodal and \(C\), \(D\) are the midpoints of the two arcs connecting them. We can easily compute that \(d_{AC}=d_{BC}=\pi/2\) and \(d_{AB}=\pi\). If we embed this circle into a high-dimensional Euclidean space \(\mathcal{X}\), we will have \(C\) at the midpoint of the segment \(AB\) in \(\mathcal{X}\). Similarly, when considering \(A\), \(B\), and \(D\), we know that \(D\) is also the midpoint of the segment \(AB\). Therefore, \(C\) and \(D\) would be at the same point in \(\mathcal{X}\); however, \(d_{CD}\) is equal to \(\pi\) on the circle. This example shows that the Euclidean distance cannot well approximate the geodesic distance even on a circle, irrespective of the dimension of the embedding space.
Previous efforts have also attempted manual design of embedding spaces (Shamai et al., 2018), which involves mapping data onto a sphere. Instead of manually designing the decoding function \(d(p_{i},p_{j})\), which turns out to be tedious and hard according to our initial experiments, we propose to learn it from data. Specifically, we train a small MLP with 3 fully connected layers and ReLU activation functions in between to learn the decoding function, as shown in Fig. 2-(d). The channels of the decoding MLP are set to 256, 256, and 1, respectively. Defining \(p_{i}=\left(p_{i,1},\ p_{i,2},\ \ldots,\ p_{i,n}\right)\), where \(p_{i,k}\) is the \(k^{th}\) component of \(p_{i}\) and \(n\) is the dimension of \(p_{i}\), the MLP takes the squared difference \(s_{ij}\) between \(p_{i}\) and \(p_{j}\) as the input:
\[s_{ij}=\left(\left(p_{i,1}-p_{j,1}\right)^{2},\ \left(p_{i,2}-p_{j,2} \right)^{2},\ \ldots,\ \left(p_{i,n}-p_{j,n}\right)^{2}\right), \tag{5}\]
and the output of the MLP approximates the geodesic distance between \(v_{i}\) and \(v_{j}\): \(d(p_{i},p_{j})=MLP(s_{ij})\). The square operation in Eq. (5) is necessary as it preserves the symmetry of \(d(p_{i},p_{j})\) with respect to \(p_{i}\) and \(p_{j}\).
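A minimal PyTorch sketch of this decoder follows; the layer sizes match the description above, while the class and variable names are ours.

```python
import torch

class GeodesicDecoder(torch.nn.Module):
    """3-layer MLP decoding a distance from a pair of embeddings."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, 1),
        )

    def forward(self, p_i, p_j):
        # Squared per-channel difference (Eq. 5); this makes the
        # output symmetric in p_i and p_j by construction.
        s_ij = (p_i - p_j) ** 2
        return self.mlp(s_ij).squeeze(-1)
```

Feeding the squared per-channel difference, rather than a concatenation of \(p_i\) and \(p_j\), guarantees the learned distance is symmetric without any extra constraint.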
Since the MLP only contains 3 layers, its execution on GPUs is very efficient. For each input mesh, we run the U-Net once to obtain the geodesic embedding of each vertex and cache the results; each batch of GDQs then requires only a single evaluation of the MLP.
### Loss Function
We adopt the mean relative error (MRE) as the loss function for training; it is also used as the evaluation metric to assess the precision of predicted geodesic distances, following (Surazhsky et al., 2005; Xia et al., 2021).
Figure 4. U-Net constructed with our basic network modules. The network takes a graph as input and outputs high-dimensional geodesic embeddings for vertices. The feature channels are all set to 256, and the numbers of each block are shown underneath.
In the data preparation stage, we precompute and store \(N\) pairs of exact geodesic distances for each mesh with the method proposed in (Mitchell et al., 1987). In each training iteration, we randomly sample a batch of meshes from the training set, then randomly sample \(n\) pairs of vertices from the precomputed geodesic distances for each mesh. The MRE for each mesh is defined as follows:
\[L=\frac{1}{n}\sum\nolimits_{(i,j)\in\mathcal{I}}\frac{|d(p_{i},p_{j})-d_{ij}|}{ d_{ij}+\epsilon}, \tag{6}\]
where \(\mathcal{I}\) is the set of sampled vertex pairs, \(d_{ij}\) is the ground-truth geodesic distance between vertex \(v_{i}\) and vertex \(v_{j}\), \(d(p_{i},p_{j})\) is the decoding function described in Section 3.2, and \(\epsilon\) is a small constant to avoid numerical issues. In our experiments, we set \(N\) to \(600k\), \(n\) to \(100k\), and \(\epsilon\) to \(0.001\).
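Eq. (6) translates directly into a few lines of PyTorch. This sketch assumes `d_pred` and `d_gt` are 1-D tensors holding the predicted and ground-truth distances of the sampled pairs.

```python
import torch

def mre_loss(d_pred: torch.Tensor, d_gt: torch.Tensor,
             eps: float = 1e-3) -> torch.Tensor:
    """Mean relative error of Eq. (6) over a batch of vertex pairs."""
    return torch.mean(torch.abs(d_pred - d_gt) / (d_gt + eps))
```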
_Remark._ Although our main goal is to predict geodesic distances, our method is general and can be applied to learn other forms of distances. For example, we precompute biharmonic distances (Lipman et al., 2010) for certain meshes and use them to replace the geodesic distances in Eq. (6) when training the network. After training, our network learns to estimate biharmonic distances as well. We verify this in our experiments, and we expect that our method has the potential to learn other types of distances, which we leave for future work.
## 4. Results
In this section, we validate the efficiency, accuracy, and robustness of our GeGnn on the task of GDQ and demonstrate its potential applications. We also analyze and discuss key design choices of GeGnn in the ablation study. The experiments were conducted using 4 Nvidia 3090 GPUs with 24GB of memory.
### Geodesic Distance Queries
_Dataset._ We use a subset of ShapeNet (Chang et al., 2015) to train our network. The subset contains 24,807 meshes from 13 categories of ShapeNet, of which roughly 80% are used for training and the rest for testing. Since the meshes from ShapeNet are non-manifold, prohibiting the computation of ground-truth geodesic distances, we first convert them into watertight manifolds following (Wang et al., 2022). Then, we apply isotropic explicit remeshing (Bhat et al., 2004; Hoppe et al., 1993) to make the mesh connectivity regular. Next, we apply mesh simplification (Garland and Heckbert, 1997) to collapse 15% of the edges, which diversifies the range of edge lengths and potentially increases the robustness of our network. The resulting meshes have an average of 5,057 vertices. We normalize the meshes into \([-1,1]\) and leverage the MMP method (Mitchell et al., 1987) to calculate the exact geodesic distances of 600\(k\) pairs of points on each mesh, which takes 10-15 s per mesh on a single CPU core.
_Settings._ We employ the AdamW optimizer (Loshchilov and Hutter, 2019) for training, with an initial learning rate of 0.0025, a weight decay of 0.01, and a batch size of 40. We train the network for 500 epochs and decay the learning rate using a polynomial function with a power of 0.9 (a minimal sketch of this setup follows the list below). We implemented our method with PyTorch (Paszke et al., 2019); the training process took 64 hours on 4 Nvidia 3090 GPUs. We use the mean relative error (MRE) defined in Eq. (6) to compare the accuracy of the predicted geodesic distances. To compute the MRE, we randomly sample 500 vertices, calculate the geodesic distances from each vertex to all other vertices within a given mesh, and then compute the average error between the predicted distances and the ground-truth distances. We use the average time required for 1 million geodesic queries, together with the precomputation time, to evaluate efficiency. We choose three groups of meshes for evaluation:
* ShapeNet-A: 100 meshes from the testing set of ShapeNet, whose categories are included in the training set. The average number of vertices is 5,057.
* ShapeNet-B: 50 meshes from the ShapeNet, whose categories are _different_ from the 13 categories of the training set. The average number of vertices is 5,120.
* Common: 10 meshes that are not contained in ShapeNet, such as Elephant, Fandisk, and Bunny. The topologies, scales, and geometric features of these meshes are significantly different from those in ShapeNet. The average number of vertices is 7,202.
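Returning to the training settings above, a minimal sketch of the optimization setup is as follows, assuming a recent PyTorch; `model` is a placeholder for the full network, and the polynomial schedule is stepped once per epoch.

```python
import torch

model = torch.nn.Linear(256, 1)  # placeholder for the U-Net + decoder
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=0.0025, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.PolynomialLR(
    optimizer, total_iters=500, power=0.9)  # decay over 500 epochs

for epoch in range(500):
    # ... one pass over the training set, optimizing mre_loss ...
    scheduler.step()
```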
Several examples from these three groups of meshes are shown in Fig. 5. The meshes from ShapeNet-B and Common are used to test the generalization ability of our method when dealing with out-of-distribution meshes.
_Comparisons._ We compare our method with several state-of-the-art methods on geodesic distance queries (GDQs). MMP (Mitchell et al., 1987) is a classic method for computing geodesic distances, with a time complexity of \(\mathcal{O}(N^{2}\log N)\) for each GDQ, where \(N\) is the number of mesh vertices. MMP does not require precomputation and produces _exact_ geodesic distances, which are used for reference. The Heat Method (HM) (Crane et al., 2013, 2017) computes geodesic distances by solving a heat equation and a Poisson equation on meshes, with a time complexity of \(\mathcal{O}(N)\) for each GDQ after matrix factorization of the involved linear system. The Discrete Geodesic Graphs (DGG) (Adikusuma et al., 2020) construct an undirected and sparse graph for computing discrete geodesic distances on triangle meshes and require \(\mathcal{O}(N)\) time for each GDQ. The optimization-based methods, including the Euclidean Embedding Method (EEM) (Panozzo et al., 2013) and the Geodesic Embedding (GE) (Xia et al., 2021), leverage costly cascaded non-linear optimization to compute the geodesic embedding of a mesh, with a time complexity of \(\mathcal{O}(1)\) for each GDQ. The precomputation time of FPGDC depends on its sample number \(k\); it also has a time complexity of \(\mathcal{O}(1)\) for each GDQ. Our method produces the geodesic embedding by a forward evaluation of a GNN and has a time complexity of \(\mathcal{O}(1)\) for each GDQ. We compare our method with the above methods on the three groups of meshes using the code provided by the authors or publicly available on the web. Previous methods are mainly developed in C++ and run on the CPU, and reimplementing them on the GPU is non-trivial; we thus run them on a Windows computer with a Ryzen R9 5900HX CPU and 32GB of memory. We keep their default parameter settings unchanged during the evaluations unless specified otherwise. We evaluate our method on a single RTX 3090 GPU with a batch size of 10.
Figure 5. Example meshes for evaluation. The meshes from the ShapeNet-A, ShapeNet-B, and Common groups are shown in red, yellow, and blue, respectively. Notice that our network can handle meshes with boundaries: the head (MRE=2.0%) has open boundaries near its eyes and neck.
_Speed._ The key benefit of our method is its efficiency in terms of both precomputation and GDQs compared with the state-of-the-art methods. The results are summarized in Table 1. Since MMP, HM, and DGG are targeted at computing single-source-all-destination geodesic distances, our method is at least \(3.2\times 10^{4}\)_times faster_ than them when computing GDQs. Compared with EEM, GE, and FPGDC, which all have \(\mathcal{O}(1)\) complexity for GDQs, our precomputation is \(3.2\times 10^{3}\)_times faster_ than that of GE, \(4.1\times 10^{2}\)_times faster_ than that of EEM, and \(71\)_times faster_ than that of FPGDC with its sample number \(k\) set to 400. Here, the precomputation time is the time required to compute the geodesic embedding of a mesh, which is not applicable to MMP, HM, and DGG. The precomputation of our GeGnn involves a single forward pass of the network, which is trivially parallelized on GPUs. In contrast, the precomputation of GE, FPGDC, and EEM requires a costly non-linear optimization process or eigendecomposition, which can hardly exploit GPU parallelism without sophisticated optimization efforts. Additionally, each GDQ of our method only involves a few matrix products, which are also highly optimized and parallelized on GPUs, whereas each GDQ of GE involves geodesic computation with the help of saddle vertex graphs. We do not report the GDQ time and MRE of GE in Table 1 since the authors only provide code for the precomputation.
_Accuracy._ In Table 1, MMP (Mitchell et al., 1987) computes exact geodesic distances, which are used as the ground truth; the other methods compute approximate geodesic distances. We observe that the accuracy of FPGDC is severely affected by its sample number: with sample number 20 (the default value), its MRE is 12%, much worse than the other methods. We therefore set its sample number \(k\) to 400, with which it achieves an accuracy slightly worse than ours. Note that FPGDC fails on over 5% of the samples in ShapeNet-A and ShapeNet-B; EEM also fails on around 2% of the meshes. We exclude those failed samples when computing their accuracies. The accuracy of our GeGnn is significantly better than that of EEM (Panozzo et al., 2013), and GeGnn even outperforms the Heat Method (Crane et al., 2013, 2017) on ShapeNet-A. However, our accuracy is slightly worse than that of DGG (Adikusuma et al., 2020). Nevertheless, with a mean relative error below 3%, our method remains applicable to many graphics applications, including texture mapping and shape analysis as verified in Section 4.3, as well as many other applications demonstrated with EEM (Panozzo et al., 2013), whose mean relative error is above 8% on ShapeNet.
_Visual Results._ We compare the geodesic distances computed by our method and by other methods in Fig. 7. We show more geodesic distance fields generated by our method in Fig. 8, on various shapes from the ShapeNet-A, ShapeNet-B, and Common groups in the three rows, respectively. These results demonstrate the effectiveness of our GeGnn in computing geodesic distances on meshes with different topologies and geometries, such as those with high genus, curved surfaces, and complex features.
| Method | Complexity | Speedup | Metric | ShapeNet-A (5,057) | ShapeNet-B (5,120) | Common (7,202) |
|:---|:---|:---|:---|:---|:---|:---|
| MMP | \(\mathcal{O}(N^{2}\log N)\) | \(1.7\times 10^{6}\) | GDQ | 15.85h | 18.00h | 25.41h |
| HM | \(\mathcal{O}(N)\) | \(6.7\times 10^{4}\) | GDQ | 2345s | 2406s | 3506s |
| | | | MRE | 2.41% | 2.04% | 1.46% |
| DGG | \(\mathcal{O}(N)\) | \(3.2\times 10^{4}\) | GDQ | 1163s | 1157s | 1626s |
| | | | MRE | 0.27% | 0.27% | 0.50% |
| **EEM** | \(\mathcal{O}(1)\) | \(4.1\times 10^{2}\) | PC | 45.02s | 30.87s | 41.21s |
| | | | GDQ | 0.134s | 0.120s | 0.136s |
| | | | MRE | 8.11% | 8.78% | 5.64% |
| **FPGDC** | \(\mathcal{O}(1)\) | \(7.1\times 10^{1}\) | PC | 5.64s | 6.28s | 8.23s |
| | | | GDQ | 2.75s | 2.37s | 2.56s |
| | | | MRE | 2.50% | 2.30% | 2.30% |
| **GE** | \(\mathcal{O}(1)\) | \(3.2\times 10^{3}\) | PC | 225.5s | 297.8s | 387.8s |
| **GeGnn** | \(\mathcal{O}(1)\) | — | PC | 0.089s | 0.091s | 0.104s |
| | | | GDQ | 0.042s | 0.040s | 0.041s |
| | | | MRE | 1.33% | 2.55% | 2.30% |

Table 1. Performance comparison on geodesic distance queries. Algorithms specifically designed for GDQs are in bold; the numbers after the group names are the average vertex counts. _PC_ is the average precomputation time for GDQs; _GDQ_ is the average time of 1 million GDQs after any precomputation; _MRE_ is the mean relative error. Compared with the state-of-the-art methods MMP (Mitchell et al., 1987), HM (Crane et al., 2013, 2017), DGG (Adikusuma et al., 2020), EEM (Panozzo et al., 2013), FPGDC, and GE (Xia et al., 2021), our GeGnn achieves comparable accuracy while being faster in terms of both precomputation and GDQs by orders of magnitude, as indicated in the _Speedup_ column.
Figure 6. Robustness of our method. Left: the result on an incomplete and noisy mesh. Right: the result on a mesh containing two separated parts.
It is worth highlighting that our network is trained on ShapeNet-A, yet it generalizes well to ShapeNet-B and Common, which demonstrates the strong generalization ability of our method.
_Robustness._ After training, our GeGnn learns shape priors from the dataset, endowing it with strong robustness to incomplete or noisy meshes. On the left of Fig. 6, we randomly remove 15% of the triangles and add Gaussian noise with a standard deviation of 0.06 to every vertex; applying our method to this incomplete and noisy mesh results in an MRE of 2.76%. On the right of Fig. 6, we remove a contiguous triangular region, separating the mesh into two parts. Traditional methods cannot handle this case, while our method still generates a good result.
_Biharmonic Distance._ In this experiment, we verify the versatility of our method by learning high-dimensional embeddings for the biharmonic distance (Lipman et al., 2010). We select 900 meshes from the training data and compute their ground-truth biharmonic distances. Then, we train the network in the same way as for learning the geodesic embedding.
Figure 8. More results on meshes with complex geometric or topological structures. For each pair of meshes, our results are shown on the left or top side, and the corresponding ground-truth results are shown on the right or bottom side. The meshes from top to bottom are selected from ShapeNet-A, ShapeNet-B, and Common, respectively. These results demonstrate the effectiveness and strong generalization ability of our method.
Figure 7. Visual comparisons. We compare our results with those of MMP (Mitchell et al., 1987), DGG (Adikusuma et al., 2020), EEM (Panozzo et al., 2013), and HM (Crane et al., 2017). All methods apart from EEM produce visually pleasing results.
After training, we can query the biharmonic distance between two arbitrary points, eliminating the need to estimate eigenvectors and solve linear systems as in (Lipman et al., 2010). We show the predicted results in Fig. 9, which are faithful to the ground truth. Potentially, our method can be applied to learn embeddings for other types of distances on manifolds, such as the diffusion distance (Coifman et al., 2005) and the earth mover's distance (Solomon et al., 2014).
_Finetune._ We also test the effect of finetuning, i.e., additionally training the network with ground-truth samples on a specified mesh to improve the performance of GeGnn on it. An example is shown in Fig. 10. The mesh is not contained in ShapeNet and has a relatively complicated topology. Our network initially produces moderately accurate results; after finetuning, the predicted geodesic distances are much closer to the ground truth.
_Scalability and Convergence._ Our network is fully convolutional and scales well to large meshes. We first test our method on the ShapeNet-A dataset with meshes of different sizes. Specifically, we apply Loop subdivision (Loop, 1987) to ShapeNet-A to obtain two sets of meshes with an average of 18,708 and 23,681 vertices, respectively. Then, we test the speed and accuracy of our method on these meshes. The results are shown in Table 2. Although our network is trained on ShapeNet with an average of 5,057 vertices, it can efficiently and accurately handle meshes with many more vertices. To verify the ability of our method on even larger meshes, we conduct experiments on subdivided spheres following (Surazhsky et al., 2005) and report the accuracy and running time in Fig. 11. The mean, maximum, and minimum relative errors are almost constant across a wide range of sphere resolutions, and the running time increases only slightly with resolution since the network runs in parallel on the GPU. These results demonstrate that our algorithm converges well with respect to mesh resolution.
### Ablation Studies
In this section, we study and discuss the effectiveness of the key designs in our method, including the graph convolution module, the graph pooling module, and the decoding function. We compare the MRE on the whole testing set, which contains 4,732 meshes.
_Graph Convolution._ The two key designs of our graph convolution are the local geometric features and the _max_ aggregator in Eq. (4). To verify their effectiveness, we conduct a set of experiments to compare the performance; the mean relative errors are listed as follows:
| | GeoConv | mean | sum | w/o dist | w/o rel. pos. |
|:---|:---|:---|:---|:---|:---|
| MRE | **1.57%** | 2.53% | 2.53% | 1.67% | 1.84% |

After replacing the max aggregator with _mean_ or _sum_, as shown in the second and third data columns of the table, the performance drops by 0.96%. After removing the pairwise edge length (w/o dist) or the relative position (w/o rel. pos.), the performance drops by 0.10% and 0.27%, respectively.
We also compare our GeoConv with other graph convolutions, including GraphSAGE (Hamilton et al., 2017), GCN (Kipf and Welling, 2017), GATv2 (Brody et al., 2022), and EdgeConv (Wang et al., 2018). We keep all the other settings the same as our method, except that we halve the batch size for GATv2 and EdgeConv; otherwise, the models run out of GPU memory. The results are listed as follows:
| | GeoConv | GraphSAGE | GCN | GATv2 | EdgeConv |
|:---|:---|:---|:---|:---|:---|
| MRE | **1.57%** | 2.84% | 3.25% | 2.34% | 1.62% |
It can be seen that although our GeoConv in Eq. (4) contains only two trainable weight matrices, it outperforms all the other graph convolutions in the task of geodesic embedding. GeoConv is also easy to implement: fewer than 15 lines of code suffice with the PyTorch Geometric library (Fey and Lenssen, 2019), and we expect it to be useful for other geometric learning tasks. An illustrative sketch is given below.
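Eq. (4) itself appears earlier in the paper and is not reproduced here, so the following is only an illustrative sketch of the ingredients described above, namely two trainable weight matrices, a max aggregator, and the relative position plus edge length as local geometric features, expressed with PyTorch Geometric. The class and variable names are ours, and this is not the authors' exact layer.

```python
import torch
from torch_geometric.nn import MessagePassing

class GeoConvSketch(MessagePassing):
    """Illustrative GeoConv-like layer: two weight matrices, max
    aggregation, and per-edge geometric features."""
    def __init__(self, in_channels, out_channels):
        super().__init__(aggr='max')
        self.lin_self = torch.nn.Linear(in_channels, out_channels)
        # +4 input channels: 3 for relative position, 1 for edge length
        self.lin_nbr = torch.nn.Linear(in_channels + 4, out_channels)

    def forward(self, x, pos, edge_index):
        return self.lin_self(x) + self.propagate(edge_index, x=x, pos=pos)

    def message(self, x_j, pos_i, pos_j):
        rel = pos_j - pos_i                    # relative position
        dist = rel.norm(dim=-1, keepdim=True)  # pairwise edge length
        return self.lin_nbr(torch.cat([x_j, rel, dist], dim=-1))
```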
_Graph Pooling._ In this experiment, we replace our GeoPool with naive grid-based pooling to verify the efficacy of our graph pooling. We do not compare with the pooling strategies in MeshCNN (Hanocka et al., 2019) and SubdivConv (Hu et al., 2022) since they require sequential operations on the CPU and are too time-consuming for our task. The MRE of grid-based pooling is 2.04%, which is 0.47% worse than our GeoPool.
_Decoding Function._ In this experiment, we verify the superiority of our MLP-based decoding function over the conventional Euclidean distance (Panozzo et al., 2013; Xia et al., 2021) when decoding the geodesic distance from the embedding vectors. After replacing our MLP with the Euclidean distance, the MRE rises to 3.03%, an increase of 1.46%.
### Applications
In this section, we demonstrate several interesting applications with our constant-time GDQs.
_Texture Mapping._ We create a local geodesic polar coordinate system around a specified point on the mesh, known as the exponential map in differential geometry. Then, we can map a texture from the polar coordinate system onto the mesh. We follow (Panozzo et al., 2013; Xin et al., 2012) to compute the exponential map with constant-time GDQs, which has proven to be more efficient and more accurate than (Schmidt et al., 2006). The results are shown in Fig. 12, where the texture is smoothly mapped onto curved meshes with the computed exponential map, demonstrating that our algorithm is robust even near the boundary of the surface.
_Shape Matching._ Shape distributions (Osada et al., 2002) are widely used in shape matching and retrieval; they are histograms of a shape function evaluated on many point pairs sampled from the mesh. In (Osada et al., 2002), the shape function is defined as the Euclidean distance or the angle between two points. We follow (Martinek et al., 2012) and use the geodesic distance as the shape function. Our constant-time GDQs allow for extremely efficient computation of geodesic distances between randomly selected surface points. The results are shown in Fig. 13: the shape distributions computed with geodesic distances are invariant to articulated deformations of the mesh, while those computed with Euclidean distances are not. A minimal sketch of the descriptor computation follows.
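In this sketch, the descriptor reduces to a histogram over random vertex pairs; `gdq` is a hypothetical constant-time query callable standing in for our embedding-plus-decoder pipeline, not part of the released code.

```python
import numpy as np

def shape_distribution(gdq, num_vertices, num_pairs=100_000, bins=64):
    """Histogram of pairwise distances between random vertex pairs;
    gdq(i, j) -> distance is assumed to be a constant-time query."""
    rng = np.random.default_rng(0)
    i = rng.integers(0, num_vertices, num_pairs)
    j = rng.integers(0, num_vertices, num_pairs)
    d = np.fromiter((gdq(a, b) for a, b in zip(i, j)), dtype=np.float64)
    hist, _ = np.histogram(d, bins=bins, density=True)
    return hist  # compare histograms for matching/retrieval
```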
_Geodesic Path._ We can also efficiently compute the geodesic path by tracing the gradient of the geodesic distance field from the target point back to the source point (Kimmel and Sethian, 1998; Xin et al., 2012). With our method, the gradient of the distance field can be computed in constant time on each triangle, so the geodesic path is traced in \(\mathcal{O}(k)\) time, where \(k\) is the number of edges crossed by the path; a sketch of the per-triangle gradient computation is given below. The results are shown on the right, with the geodesic distance field also drawn on the mesh for visualization.
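The per-triangle gradient of a piecewise-linear distance field has a standard closed form; the sketch below is ours (not from the paper's code) and computes it for one triangle with counter-clockwise vertex order.

```python
import numpy as np

def triangle_gradient(x, f):
    """Gradient of a piecewise-linear scalar field on one triangle.
    x: (3, 3) CCW vertex positions; f: (3,) per-vertex distances."""
    n = np.cross(x[1] - x[0], x[2] - x[0])
    two_area = np.linalg.norm(n)   # |n| equals twice the triangle area
    n = n / two_area               # unit normal
    grad = np.zeros(3)
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        # gradient of the hat basis function of vertex i
        grad += f[i] * np.cross(n, x[k] - x[j]) / two_area
    return grad                    # constant over the triangle
```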
## 5. Discussion and Limitations
In this section, we discuss our method's limitations and possible improvements.
_Triangulations._ Although our method exhibits good robustness against different triangulations, it cannot work well with extremely anisotropic meshes; an example is shown in Fig. 14-(a). However, this issue can be easily alleviated by making the mesh regular through remeshing.
_Unusual meshes._ Our method may struggle to process meshes with unusual topology and geometry. The chair in Fig. 14-(b) is constructed by bending a long and thin tube multiple times; such geometry is rare in the training dataset. As a consequence, our method produces an MRE of about 25%, significantly higher than the average error over the test set. To address this issue, we could expand and diversify the training data and enlarge the network to enhance its generalizability and robustness.
_Triangle Inequality and Error Bound._ The geodesic distance satisfies the triangle inequality. However, our GeGnn only predicts approximate geodesic distances, which do not strictly pass the triangle inequality test. We follow (Solomon et al., 2014) to illustrate this issue in Fig. 14 (c).
Figure 12. Texture Mapping. We compute the exponential map with our constant-time GDQs (below) and map the texture from the polar coordinate system to the mesh (top).
Figure 13. Shape distributions. We use Blender to deform the mesh of the armadillo and compute the shape distributions with geodesic distances and Euclidean distances as shape functions, respectively. The shape distributions computed with geodesic distances are more invariant to the articulated deformation of the mesh than Euclidean distances.
Specifically, given two fixed vertices \(p\) and \(q\), we identify and mark all vertices \(x\) for which the triangle inequality \(d(p,q)\leq d(p,x)+d(q,x)\) is violated (a small helper for this test is sketched below). Our results are visually on par with those of (Crane et al., 2013). As a learning-based method, our GeGnn cannot offer theoretical error bounds either. It would be interesting to incorporate additional geometric priors to address this limitation.
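A hypothetical helper for this test: given the predicted distance \(d(p,q)\) and the predicted distances from the two fixed sources to every vertex, it marks each vertex where the inequality fails.

```python
import numpy as np

def triangle_inequality_violations(d_pq, d_from_p, d_from_q):
    """Indices of vertices x with d(p,q) > d(p,x) + d(q,x)."""
    return np.nonzero(d_from_p + d_from_q < d_pq)[0]
```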
#### Rotation and Translation Invariance
Although the _GeoConv_ in Eq. (4) is translation-invariant, the full network is not, owing to the grid sampling in pooling. The network is not rotation-invariant either. It is possible to leverage network architectures with built-in invariances (Deng et al., 2021) to further improve performance.
## 6. Conclusion
We propose to learn the geodesic embedding with graph neural networks, which enables constant time complexity for geodesic distance queries on discrete surfaces with arbitrary topology and geometry. We design a novel graph neural network to predict the vertex-wise geodesic embedding and leverage a lightweight MLP to decode the geodesic distance from the embeddings. The key technical contributions of our method include a novel graph convolution, graph pooling, and the design of the MLP decoder. We verify the efficiency, effectiveness, robustness, and generalizability of our method on ShapeNet and a variety of out-of-distribution meshes. We expect our work to inspire more learning-based methods for geodesic distances and other geometry problems. Several future directions are discussed below.
#### General Geometric Distances
In this paper, we mainly use GeGnn to learn the geodesic embedding and verify its feasibility for learning an embedding for biharmonic distances. In the future, it would be interesting to explore learning other geometric distances and, furthermore, training a single general model for all geometric distances on meshes.
#### Geometry Optimization
The process of computing geodesic embedding is essentially a geometry optimization problem on meshes. In the future, we plan to use graph neural networks to solve other geometry optimization problems, such as shape deformation, remeshing, and parameterization.
#### Transformers on Meshes
Transformers have been prevailing in natural language processing and computer vision. It would be interesting to apply Transformers to meshes, replacing the graph neural networks in our method, for better performance in the future.
#### Acknowledgments
This work is supported by the National Key R&D Program of China (No. 2022YFB3303400 and No. 2021YFF0500901), and the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) (No.SML2021SP101). We also thank the anonymous reviewers for their valuable feedback and Mr. Tianzuo Qin from the University of Hong Kong for his discussion on graph neural networks.
|